Corpus linguistics
Corpus linguistics is the study of language as expressed in samples (corpora) of "real world" text. The method derives, from such samples, a set of abstract rules by which a natural language is governed or by which it relates to another language. Originally compiled by hand, corpora are now largely derived by an automated process.
Corpus linguistics adherents believe that reliable language analysis is best conducted on field-collected samples, in natural contexts and with minimal experimental interference. Within corpus linguistics there are divergent views on the value of corpus annotation, ranging from John Sinclair,[1] who advocated minimal annotation so that texts could 'speak for themselves', to others, such as the Survey of English Usage team (based at University College London),[2] who advocate annotation as a path to greater linguistic understanding and rigour.
History
A landmark in modern corpus linguistics was the publication by Henry Kucera and W. Nelson Francis of Computational Analysis of Present-Day American English in 1967, a work based on the analysis of the Brown Corpus, a carefully compiled selection of current American English, totalling about a million words drawn from a wide variety of sources. Kucera and Francis subjected it to a variety of computational analyses, from which they compiled a rich and variegated opus, combining elements of linguistics, language teaching, psychology, statistics, and sociology. A further key publication was Randolph Quirk's 'Towards a description of English Usage' (1960)[3] in which he introduced The Survey of English Usage.
Shortly thereafter, Boston publisher Houghton-Mifflin approached Kucera to supply a million-word, three-line citation base for its new American Heritage Dictionary, the first dictionary to be compiled using corpus linguistics. The AHD made the innovative step of combining prescriptive elements (how language should be used) with descriptive information (how it actually is used).
Other publishers followed suit. The British publisher Collins' COBUILD monolingual learner's dictionary, designed for users learning English as a foreign language, was compiled using the Bank of English. The Survey of English Usage Corpus was used in the development of one of the most important corpus-based grammars, A Comprehensive Grammar of the English Language (Quirk et al. 1985).[4]
The Brown Corpus has also spawned a number of similarly structured corpora: the LOB Corpus (1960s British English), Kolhapur (Indian English), Wellington (New Zealand English), Australian Corpus of English (Australian English), the Frown Corpus (early 1990s American English), and the FLOB Corpus (1990s British English). Other corpora represent many languages, varieties and modes, and include the International Corpus of English, and the British National Corpus, a 100 million word collection of a range of spoken and written texts, created in the 1990s by a consortium of publishers, universities (Oxford and Lancaster) and the British Library. For contemporary American English, work has stalled on the American National Corpus, but the 400+ million word Corpus of Contemporary American English (1990–present) is now available through a web interface.
The first computerized corpus of transcribed spoken language was constructed in 1971 by the Montreal French Project,[5] containing one million words, which inspired Shana Poplack's much larger corpus of spoken French in the Ottawa-Hull area.[6]
Besides these corpora of living languages, computerized corpora have also been made of collections of texts in ancient languages. An example is the Andersen-Forbes database of the Hebrew Bible, developed since the 1970s, in which every clause is parsed using graphs representing up to seven levels of syntax, and every segment tagged with seven fields of information.[7][8] The Quranic Arabic Corpus is an annotated corpus for the Classical Arabic language of the Quran. This is a recent project with multiple layers of annotation including morphological segmentation, part-of-speech tagging, and syntactic analysis using dependency grammar.[9]
Methods
Corpus linguistics has generated a number of research methods that attempt to trace a path from data to theory. Wallis and Nelson (2001)[10] first introduced what they called the 3A perspective: Annotation, Abstraction and Analysis; a minimal worked example of the three stages is sketched after the list below.
- Annotation consists of the application of a scheme to texts. Annotations may include structural markup, part-of-speech tagging, parsing, and numerous other representations.
- Abstraction consists of the translation (mapping) of terms in the scheme to terms in a theoretically motivated model or dataset. Abstraction typically includes linguist-directed search but may also include, for example, rule-learning for parsers.
- Analysis consists of statistically probing, manipulating and generalising from the dataset. Analysis might include statistical evaluations, optimisation of rule-bases or knowledge discovery methods.
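As an illustration, the following sketch walks through the three stages on a small scale using the Python NLTK library and the Brown Corpus it distributes. It is a minimal, non-authoritative example rather than a standard implementation, and it assumes that NLTK is installed and that its 'brown' and 'universal_tagset' resources can be downloaded.

```python
# Minimal 3A sketch: Annotation -> Abstraction -> Analysis.
# Assumes `pip install nltk`; the download() calls fetch the corpus and
# tagset resources on first run and are cached afterwards.
from collections import Counter

import nltk
from nltk.corpus import brown

nltk.download("brown")
nltk.download("universal_tagset")

# Annotation: the Brown Corpus already carries part-of-speech tags,
# here mapped onto NLTK's simplified 'universal' tagset.
tagged = brown.tagged_words(categories="news", tagset="universal")

# Abstraction: map the annotated tokens onto a dataset that matches a
# research question -- here, the adjectives used in news prose.
adjectives = [word.lower() for word, tag in tagged if tag == "ADJ"]

# Analysis: generalise over the dataset with descriptive statistics.
print(Counter(adjectives).most_common(10))
```

The same pipeline scales up: richer annotation schemes (parsing, semantic tagging) feed richer abstractions, and the analysis stage may replace simple frequency counts with statistical evaluation or knowledge discovery methods.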
Most lexical corpora today are part-of-speech-tagged (POS-tagged). However, even corpus linguists who work with 'unannotated plain text' inevitably apply some method to isolate the terms they are interested in from the surrounding words. In such situations annotation and abstraction are combined in a lexical search.
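A keyword-in-context (KWIC) concordance is the simplest form of such a lexical search. The sketch below is a hedged illustration, not a reference implementation: the kwic() helper and the sample sentence are hypothetical, introduced only to show how a search term is isolated in raw, unannotated text and displayed with its surrounding context.

```python
# A lexical search over unannotated plain text: the regular expression
# isolates the term of interest, and each hit is abstracted into a
# keyword-in-context (KWIC) concordance line. The kwic() helper and the
# sample sentence are illustrative, not part of any published tool.
import re

def kwic(text: str, keyword: str, width: int = 30):
    """Yield a KWIC line for every whole-word match of `keyword`."""
    pattern = re.compile(rf"\b{re.escape(keyword)}\b", re.IGNORECASE)
    for match in pattern.finditer(text):
        start, end = match.start(), match.end()
        left = text[max(0, start - width):start].rjust(width)
        right = text[end:end + width].ljust(width)
        yield f"{left} [{match.group(0)}] {right}"

sample = ("Corpus linguistics is the study of language as expressed "
          "in samples (corpora) of real world text.")
for line in kwic(sample, "corpora"):
    print(line)
```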
The advantage of publishing an annotated corpus is that other users can then perform their own experiments on it. Linguists with other interests and perspectives that differ from the originators' can exploit this work. By sharing data, corpus linguists are able to treat the corpus as a locus of linguistic debate rather than as an exhaustive fount of knowledge.
See also
- Concordance (KWIC)
- Collocation
- Collostructional analysis
- Keyword (linguistics)
- Linguistic Data Consortium
- Machine translation
- Natural Language Toolkit
- Pattern grammar
- Search engines: they access the "web corpus".
- Semantic prosody
- Text corpus
- Translation memory
- Treebank
- Xaira: a general-purpose, XML-aware open-source corpus analysis tool
References
- ^ Sinclair, J. 'The automatic analysis of corpora', in Svartvik, J. (ed.) Directions in Corpus Linguistics (Proceedings of Nobel Symposium 82). Berlin: Mouton de Gruyter. 1992.
- ^ Wallis, S. 'Annotation, Retrieval and Experimentation', in Meurman-Solin, A. & Nurmi, A.A. (eds.) Annotating Variation and Change. Helsinki: Varieng, University of Helsinki. 2007 (e-published).
- ^ Quirk, R. 'Towards a description of English Usage', Transactions of the Philological Society. 1960. 40–61.
- ^ Quirk, R., Greenbaum, S., Leech, G. and Svartvik, J. A Comprehensive Grammar of the English Language London: Longman. 1985.
- ^ Sankoff, D. & Sankoff, G. 'Sample survey methods and computer-assisted analysis in the study of grammatical variation', in Darnell, R. (ed.) Canadian Languages in their Social Context. Edmonton: Linguistic Research Incorporated. 1973. 7–64.
- ^ Poplack, S. The care and handling of a mega-corpus. In Fasold, R. & Schiffrin D. (eds.) Language Change and Variation, Amsterdam: Benjamins. 1989. 411–451.
- ^ Andersen, Francis I. & Forbes, A. Dean. 'Hebrew Grammar Visualized: I. Syntax', Ancient Near Eastern Studies 40: 43–61 [45]. 2003.
- ^ Eyland, E. Ann. 'Revelations from Word Counts', in Newing, Edward G. & Conrad, Edgar W. (eds.) Perspectives on Language and Text: Essays and Poems in Honor of Francis I. Andersen's Sixtieth Birthday, July 28, 1985. Winona Lake, IN: Eisenbrauns. 1987. 51. ISBN 0-931464-26-9.
- ^ Dukes, K., Atwell, E. and Habash, N. 'Supervised Collaboration for Syntactic Annotation of Quranic Arabic'. Language Resources and Evaluation Journal. 2011.
- ^ Wallis, S. and Nelson G. 'Knowledge discovery in grammatically analysed corpora'. Data Mining and Knowledge Discovery, 5: 307–340. 2001.
Journals
There are several international peer-reviewed journals dedicated to corpus linguistics, for example, Corpora, Corpus Linguistics and Linguistic Theory, ICAME Journal and the International Journal of Corpus Linguistics.
Book series
Book series in this field include Language and Computers, Studies in Corpus Linguistics and English Corpus Linguistics.
Other
- Biber, D., Conrad, S., Reppen R. Corpus Linguistics, Investigating Language Structure and Use, Cambridge: Cambridge UP, 1998. ISBN 0-521-49957-7
- McCarthy, D., and Sampson G. Corpus Linguistics: Readings in a Widening Discipline, Continuum, 2005. ISBN 0-8264-8803-X
- Facchinetti, R. Theoretical Description and Practical Applications of Linguistic Corpora. Verona: QuiEdit, 2007 ISBN 978-88-89480-37-3
- Facchinetti, R. (ed.) Corpus Linguistics 25 Years on. New York/Amsterdam: Rodopi, 2007 ISBN 978-90-420-2195-2
- Facchinetti, R. and Rissanen M. (eds.) Corpus-based Studies of Diachronic English. Bern: Peter Lang, 2006 ISBN 3-03910-851-4
External links
- Bookmarks for Corpus-based Linguists – a comprehensive site with categorized and annotated links to language corpora, software, references, etc.
- Corpora discussion list
- Freely-available, web-based corpora (100 million – 400 million words each): American (COCA, COHA), British (BNC), TIME, Spanish, Portuguese
- Manuel Barbera's overview site
- Przemek Kaszubski's list of references
- AskOxford.com: the composition and use of the Oxford Corpus
- DMCBC.com: Datum Multilanguage Corpora Based on Chinese (free sample download)
- Corpus4u Community a Chinese online forum for corpus linguistics
- McEnery and Wilson's Corpus Linguistics Page
- Corpus Linguistics with R mailing list
- Research and Development Unit for English Studies
- Survey of English Usage
- The Centre for Corpus Linguistics at Birmingham University
- Gateway to Corpus Linguistics on the Internet: an annotated guide to corpus resources on the web
- Biomedical corpora
- Linguistic Data Consortium, a major distributor of corpora
- Penn Parsed Corpora of Historical English
- Corsis: (formerly Tenka Text) an open-source (GPLed) corpus analysis tool written in C#
- ICECUP and Fuzzy Tree Fragments
- Text mining discussion group