
Results for *

4 results were found.

Showing results 1 to 4 of 4.


  1. Web corpus construction
    Published: [2013]; © 2013
    Publisher: Morgan & Claypool Publishers, [San Rafael, California]

    Humboldt-Universität zu Berlin, Universitätsbibliothek, Jacob-und-Wilhelm-Grimm-Zentrum
    unrestricted interlibrary loan, copying and borrowing
    Full text (URL of the original publisher)
    Source: Union catalogues
    Contributor: Bildhauer, Felix (author)
    Language: English
    Media type: Ebook
    Format: Online
    ISBN: 9781608459841
    RVK classification: ES 900
    Series: Synthesis lectures on human language technologies ; #22
    Extent: 1 online resource (xv, 129 pages), diagrams
  2. Web Corpus Construction
    Published: [2013]; © 2013
    Publisher: Morgan & Claypool Publishers, [San Rafael]


    Universität Potsdam, Universitätsbibliothek
    unrestricted interlibrary loan, copying and borrowing

     

    The World Wide Web constitutes the largest existing source of texts written in a great variety of languages. A feasible and sound way of exploiting this data for linguistic research is to compile a static corpus for a given language. There are several advantages to this approach: (i) Working with such corpora obviates the problems encountered when using Internet search engines in quantitative linguistic research (such as non-transparent ranking algorithms). (ii) Creating a corpus from web data is virtually free. (iii) The size of corpora compiled from the WWW may exceed by several orders of magnitude the size of language resources offered elsewhere. (iv) The data is locally available to the user and can be linguistically post-processed and queried with the user's preferred tools. This book addresses the main practical tasks in the creation of web corpora up to giga-token size. Among these tasks are the sampling process (i.e., web crawling) and the usual cleanups, including boilerplate removal and removal of duplicated content. Linguistic processing, and the problems it faces due to the different kinds of noise in web corpora, is also covered. Finally, the authors show how web corpora can be evaluated and compared to other corpora (such as traditionally compiled corpora).

    Contents:
    1. Web corpora --
    2. Data collection -- 2.1 Introduction -- 2.2 The structure of the web -- 2.2.1 General properties -- 2.2.2 Accessibility and stability of web pages -- 2.2.3 What's in a (national) top level domain? -- 2.2.4 Problematic segments of the web -- 2.3 Crawling basics -- 2.3.1 Introduction -- 2.3.2 Corpus construction from search engine results -- 2.3.3 Crawlers and crawler performance -- 2.3.4 Configuration details and politeness -- 2.3.5 Seed URL generation -- 2.4 More on crawling strategies -- 2.4.1 Introduction -- 2.4.2 Biases and the PageRank -- 2.4.3 Focused crawling --
    3. Post-processing -- 3.1 Introduction -- 3.2 Basic cleanups -- 3.2.1 HTML stripping -- 3.2.2 Character references and entities -- 3.2.3 Character sets and conversion -- 3.2.4 Further normalization -- 3.3 Boilerplate removal -- 3.3.1 Introduction to boilerplate -- 3.3.2 Feature extraction -- 3.3.3 Choice of the machine learning method -- 3.4 Language identification -- 3.5 Duplicate detection -- 3.5.1 Types of duplication -- 3.5.2 Perfect duplicates and hashing -- 3.5.3 Near duplicates, Jaccard coefficients, and shingling --
    4. Linguistic processing -- 4.1 Introduction -- 4.2 Basics of tokenization, part-of-speech tagging, and lemmatization -- 4.2.1 Tokenization -- 4.2.2 Part-of-speech tagging -- 4.2.3 Lemmatization -- 4.3 Linguistic post-processing of noisy data -- 4.3.1 Introduction -- 4.3.2 Treatment of noisy data -- 4.4 Tokenizing web texts -- 4.4.1 Example: missing whitespace -- 4.4.2 Example: emoticons -- 4.5 POS tagging and lemmatization of web texts -- 4.5.1 Tracing back errors in POS tagging -- 4.6 Orthographic normalization -- 4.7 Software for linguistic post-processing --
    5. Corpus evaluation and comparison -- 5.1 Introduction -- 5.2 Rough quality check -- 5.2.1 Word and sentence lengths -- 5.2.2 Duplication -- 5.3 Measuring corpus similarity -- 5.3.1 Inspecting frequency lists -- 5.3.2 Hypothesis testing with χ² -- 5.3.3 Hypothesis testing with Spearman's rank correlation -- 5.3.4 Using test statistics without hypothesis testing -- 5.4 Comparing keywords -- 5.4.1 Keyword extraction with χ² -- 5.4.2 Keyword extraction using the ratio of relative frequencies -- 5.4.3 Variants and refinements -- 5.5 Extrinsic evaluation -- 5.6 Corpus composition -- 5.6.1 Estimating corpus composition -- 5.6.2 Measuring corpus composition -- 5.6.3 Interpreting corpus composition -- 5.7 Summary -- Bibliography -- Authors' biographies
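    The contents name "Near duplicates, Jaccard coefficients, and shingling" (3.5.3) as one duplicate-detection step. A minimal sketch of that general technique follows; the example texts and the shingle size are invented for illustration and are not code from the book.

    ```python
    # Near-duplicate detection via word shingling and the Jaccard
    # coefficient (the technique named in section 3.5.3). The shingle
    # size n=3 and the example texts are illustrative choices only.

    def shingles(text, n=3):
        """Return the set of n-word shingles (word n-grams) of a text."""
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def jaccard(a, b):
        """Jaccard coefficient |A ∩ B| / |A ∪ B| of two shingle sets."""
        if not a and not b:
            return 1.0  # two empty documents count as identical
        return len(a & b) / len(a | b)

    doc1 = "the web is the largest existing source of texts"
    doc2 = "the web is the largest available source of texts"
    sim = jaccard(shingles(doc1), shingles(doc2))  # 0.4 for these texts
    # Pairs whose similarity exceeds a chosen threshold (e.g. 0.5) would be
    # flagged as near duplicates; large-scale systems approximate this with
    # hashing schemes rather than comparing all document pairs directly.
    ```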
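    Similarly, "Keyword extraction using the ratio of relative frequencies" (5.4.2) can be sketched as follows; the toy corpora and the smoothing constant are assumptions made for the example, not material from the book.

    ```python
    # Keyword extraction via the ratio of relative frequencies (section
    # 5.4.2): a word scores highly if it is relatively more frequent in
    # the target corpus than in a reference corpus. The word lists and
    # the smoothing constant below are invented for illustration.
    from collections import Counter

    def keyness(target, reference, smooth=1e-9):
        t, r = Counter(target), Counter(reference)
        nt, nr = len(target), len(reference)
        # relative frequency in target divided by the (smoothed)
        # relative frequency in reference
        return {w: (t[w] / nt) / (r[w] / nr + smooth) for w in t}

    target = "crawl crawl corpus web web web".split()
    reference = "corpus text text grammar web syntax".split()
    scores = keyness(target, reference)
    # "crawl" never occurs in the reference corpus, so only the smoothing
    # term keeps its score finite; "web" is about three times as frequent
    # in the target relative to the reference.
    top = max(scores, key=scores.get)  # -> "crawl"
    ```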

     

    Source: Union catalogues
    Contributor: Bildhauer, Felix (author)
    Language: English
    Media type: Ebook
    Format: Online
    ISBN: 9781608459841
    RVK classification: ES 900
    Series: Synthesis Lectures on Human Language Technologies ; #22
    Subjects: Web search engines; Computational linguistics; Corpora (Linguistics)
    Extent: 1 online resource (222 pages), illustrations
    Note(s):

    Description based upon print version of record

    Also available in print.


  3. Web Corpus Construction
    Published: [2013]; © 2013
    Publisher: Morgan & Claypool Publishers, [San Rafael]


    Universität Potsdam, Universitätsbibliothek
    no interlibrary loan

     


     

    Source: Union catalogues
    Contributor: Bildhauer, Felix (author)
    Language: English
    Media type: Ebook
    Format: Online
    ISBN: 9781608459841
    RVK classification: ES 900
    Series: Synthesis Lectures on Human Language Technologies ; #22
    Subjects: Web search engines; Computational linguistics; Corpora (Linguistics)
    Extent: 1 online resource (222 pages), illustrations
    Note(s):

    Description based upon print version of record

    Also available in print.


  4. Web corpus construction
    Published: [2013]; © 2013
    Publisher: Morgan & Claypool Publishers, [San Rafael, California]

    Universitätsbibliothek Erlangen-Nürnberg, Hauptbibliothek
    unrestricted interlibrary loan, copying and borrowing
    Universitätsbibliothek Regensburg
    unrestricted interlibrary loan, copying and borrowing
    Full text (URL of the original publisher)
    Source: Union catalogues
    Contributor: Bildhauer, Felix (author)
    Language: English
    Media type: Ebook
    Format: Online
    ISBN: 9781608459841
    RVK classification: ES 900
    Series: Synthesis lectures on human language technologies ; #22
    Extent: 1 online resource (xv, 129 pages), diagrams