<WRAP right 40%>
^ <fs medium>Name</fs> ^^ <fs medium>AI-Brown v1</fs> ^
^ Positions ^ Number of positions (tokens) | 27 661 454 |
^ ::: ^ Number of positions (excl. punctuation) | 23 975 982 |
</WRAP>

Modeled on the BE21 Corpus((Baker, P. (2023): A year to remember? Introducing the BE21 corpus and exploring recent part of speech tag change in British English. //International Journal of Corpus Linguistics//.)), a modern implementation of the original Brown Corpus, AI-Brown was created to replicate its structure, genre diversity, and linguistic richness, enabling systematic comparisons between human and machine-generated English texts. The corpus comprises outputs from 13 frontier LLMs developed by OpenAI, Anthropic, Meta, Alphabet, and DeepSeek. Each model was prompted with the first 500 words of BE21 text samples, with the remaining portion reserved as human-authored reference material, ensuring genre-aligned and topically consistent comparisons. Like BE21, AI-Brown spans a wide range of contemporary English genres. All generated texts are tokenized, lemmatized, and annotated morphologically and syntactically using the Universal Dependencies framework, and are provided in both plain-text and CoNLL-U formats. AI-Brown is thus a large-scale corpus of LLM-generated English explicitly designed for cross-model and human-machine linguistic analysis.
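
Since the annotated outputs are distributed in CoNLL-U, a minimal sketch of reading them might look as follows (standard library only; the file name is hypothetical, while the ten-column layout is the standard CoNLL-U format):

<code python>
# Minimal CoNLL-U reader: yields sentences as lists of token dicts.
# "aibrown_sample.conllu" is a hypothetical file name.
from pathlib import Path

COLUMNS = ["id", "form", "lemma", "upos", "xpos",
           "feats", "head", "deprel", "deps", "misc"]

def read_conllu(path):
    sentence = []
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        if not line.strip():            # a blank line ends a sentence
            if sentence:
                yield sentence
                sentence = []
        elif not line.startswith("#"):  # '#' lines carry sentence metadata
            sentence.append(dict(zip(COLUMNS, line.split("\t"))))
    if sentence:
        yield sentence

# Example: collect verb lemmas, e.g. for comparing verb usage across models.
for sent in read_conllu("aibrown_sample.conllu"):
    verb_lemmas = [t["lemma"] for t in sent if t["upos"] == "VERB"]
</code>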
The original reference BE21 corpus was available in vertical format via the Czech National Corpus infrastructure. The preprocessing pipeline included several steps to prepare the data for prompt-based generation: clean texts and metadata were extracted from the verticals, and structural tags were aligned with the Czech corpus format to ensure cross-linguistic consistency.
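
For illustration, the extraction step could be sketched roughly as below; the structural tag names and attribute layout are assumptions here, and the actual verticals may differ in detail:

<code python>
# Rough sketch: pull plain text and metadata out of a simplified vertical
# file (one token per line, XML-like structural tags). The <doc> attribute
# layout is an assumption, not the exact CNC format.
import re

def extract_docs(vertical_path):
    meta, tokens = None, []
    with open(vertical_path, encoding="utf-8") as fh:
        for line in fh:
            line = line.rstrip("\n")
            if line.startswith("<doc"):
                # attributes such as id="..." or genre="..."
                meta = dict(re.findall(r'(\w+)="([^"]*)"', line))
                tokens = []
            elif line == "</doc>":
                yield meta, " ".join(tokens)
            elif line.startswith("<"):
                continue                        # other tags: <s>, <p>, ...
            elif line:
                tokens.append(line.split("\t")[0])  # first column = word form
</code>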
Each BE21 text sample was split into two parts to support controlled generation: the first 500 words served as the prompt passed to the models, while the remaining portion was set aside as the human-authored reference material.
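
In code, this split might be sketched as follows (whitespace tokenization is an assumption; the actual pipeline may count words differently):

<code python>
# Sketch of the prompt/reference split described above.
def split_sample(text, n_words=500):
    words = text.split()                   # assumes whitespace tokenization
    prompt = " ".join(words[:n_words])     # fed to the model as input
    reference = " ".join(words[n_words:])  # human-authored comparison part
    return prompt, reference
</code>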
===== Generating corpora =====
For each model, we generated two versions of each corpus: one using temperature 0 (deterministic generation) and one using temperature 1 (stochastic generation). However, we encountered mode collapse with the oldest model (davinci-002) at zero temperature, resulting in constant repetition of identical sentences.

For base models operating in completion mode (davinci-002, GPT-3.5-turbo, Meta-Llama-3.1-405B), we used only the first portion of each source text as input, allowing the models to function as traditional language models for text prediction. For instruction-tuned models, we employed a minimal system prompt requesting a long continuation of the given text. Without such a prompt, the models' default //helpful assistant// persona emerged, typically attempting to analyze, summarize, or answer questions about the source text rather than continuing it. We used the following system prompt: //Please continue the text in the same manner and style, ensuring it contains at least five thousand words. The text does not need to be factually correct, but please make sure it fits stylistically.//
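
For illustration, a chat-mode generation call with this system prompt might look roughly like the sketch below (OpenAI Python SDK; the model name is a placeholder rather than one of the 13 models actually used, and the fixed seed anticipates the reproducibility note that follows):

<code python>
# Sketch only: placeholder model name; seed 42 per the note below.
from openai import OpenAI

SYSTEM_PROMPT = (
    "Please continue the text in the same manner and style, ensuring it "
    "contains at least five thousand words. The text does not need to be "
    "factually correct, but please make sure it fits stylistically."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def continue_text(prompt_part, temperature=0):
    """Ask an instruction-tuned model to continue the 500-word prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",            # placeholder model name
        temperature=temperature,   # 0 = deterministic, 1 = stochastic run
        seed=42,                   # fixed seed (OpenAI API only)
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": prompt_part},
        ],
    )
    return response.choices[0].message.content
</code>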
To ensure reproducibility, we used random seed 42 for all OpenAI API calls. Unfortunately, other providers do not offer comparable deterministic generation options. Llama generations used 16-bit floating-point quantization (the highest available quality).
==== How to cite AI-Brown ====
<WRAP round tip 70%>
Milička, J. – Marklová, A. – Cvrček, V. (2025): //AI Brown and AI Koditex: LLM-Generated Corpora Comparable to Traditional Corpora of English and Czech Texts//. arXiv preprint: [[https://arxiv.org/abs/2509.22996]]

Milička, J. – Marklová, A. – Cvrček, V.: //AI-Brown, version 1, 1. 7. 2025//. Department of Linguistics, Faculty of Arts, Charles University, Prague 2025. Available at WWW: www.korpus.cz
</WRAP>