MEDLINE N-Gram Set

The MEDLINE N-gram Set 2019: Generated by the Split, Group, Filter, and Combine Algorithm

The MEDLINE n-gram set (generated by the split, group, filter, and combine algorithm) is listed below. For each MEDLINE record, the title and abstract are used as the source of n-grams. They are combined, tokenized into sentences, and then tokenized into tokens (words, using the space character as the word boundary). Finally, n-grams of one to five words are generated and then filtered: terms longer than 50 characters or with a total word count (number of occurrences) below 30 are removed. A code sketch of this pipeline follows the counts below. The specifications for generating these n-grams are as follows:

  • MEDLINE: 2019 - TI and AB (from MEDLINE Baseline Repository - MBR, pubmed19nXXXX.xml -> PmidTiAbS19nXXXX.txt: 1 ~ 972)
  • Method: Split, Group, Filter, and Combine Algorithm
  • Max. Character Size: 50
  • Min. word count: 30
  • Min. document count: 1

  • Total document count: 29,138,919
  • Total sentence count: 185,619,887
  • Total token count: 3,824,268,997
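
As a rough illustration, the sketch below (Python) implements the pipeline described above for a small in-memory collection of records: it combines title and abstract, splits sentences with a naive regular expression, breaks each sentence into space-delimited tokens, collects 1- to 5-grams, and applies the character-length, word-count, and document-count cutoffs. The record format, the sentence splitter, and the reading of "word count" as total occurrences are assumptions made for the sketch; the actual pipeline splits the 972 input files, groups and filters partial results, and combines them rather than counting everything in memory.

import re
from collections import defaultdict

MAX_CHARS = 50       # Max. character size
MIN_WORD_COUNT = 30  # Min. word count (total occurrences, assumed)
MIN_DOC_COUNT = 1    # Min. document count

def ngrams_of_record(title, abstract, max_n=5):
    """Yield 1- to 5-grams from one MEDLINE record (TI + AB combined)."""
    text = f"{title} {abstract}"
    # Naive sentence split; a placeholder for the real sentence tokenizer.
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        tokens = sentence.split()  # words are delimited by spaces
        for n in range(1, max_n + 1):
            for i in range(len(tokens) - n + 1):
                yield " ".join(tokens[i:i + n])

def build_ngram_set(records):
    """records: an iterable of (pmid, title, abstract) tuples."""
    word_count = defaultdict(int)  # total occurrences per n-gram
    doc_count = defaultdict(int)   # distinct documents per n-gram
    for _pmid, title, abstract in records:
        seen = set()
        for gram in ngrams_of_record(title, abstract):
            word_count[gram] += 1
            seen.add(gram)
        for gram in seen:
            doc_count[gram] += 1
    # Filter: drop n-grams over 50 characters or below the minimum counts.
    return {
        gram: (doc_count[gram], word_count[gram])
        for gram in word_count
        if len(gram) <= MAX_CHARS
        and word_count[gram] >= MIN_WORD_COUNT
        and doc_count[gram] >= MIN_DOC_COUNT
    }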

  • N-gram files
    • File format - 3 fields per line: document count, word count, n-gram
    • Sorted by document count, word count, then the alphabetical order of the n-grams. The combined N-gram Set file is not sorted; it can be sorted with the nGramUtil package (a parsing and sorting sketch follows below).
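
For illustration, a minimal reader and sorter for this format is sketched below in Python. The field order (document count, word count, n-gram) follows the description above; the '|' field separator and the descending direction of the two count keys are assumptions, and the nGramUtil package remains the supported way to sort the set.

from typing import Iterator, Tuple

def read_ngram_file(path: str, sep: str = "|") -> Iterator[Tuple[int, int, str]]:
    """Yield (document count, word count, n-gram) triples, one per line."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            dc, wc, gram = line.rstrip("\n").split(sep, 2)
            yield int(dc), int(wc), gram

def sort_ngrams(rows):
    """Order by document count, then word count (both descending, assumed),
    then the alphabetical order of the n-grams."""
    return sorted(rows, key=lambda r: (-r[0], -r[1], r[2]))

# Example (the file name inside the downloaded archive is assumed):
# sorted_rows = sort_ngrams(read_ngram_file("nGramSet.2019.30"))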

  • Download:
    N-grams               File                       Zip Size   Actual Size   No. of n-grams
    Unigrams              1-gram.2019.tgz            7.2 MB     18 MB         1,075,227
    Bigrams               2-gram.2019.tgz            46 MB      135 MB        6,336,698
    Trigrams              3-gram.2019.tgz            71 MB      233 MB        9,078,536
    Four-grams            4-gram.2019.tgz            50 MB      174 MB        5,729,590
    Five-grams            5-gram.2019.tgz            24 MB      86 MB         2,446,765
    N-gram Set            nGramSet.2019.30.tgz       196 MB     644 MB        24,666,816
    Distilled N-gram Set  distilledNGram.2019.tgz    77 MB      250 MB        9,595,606
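
To sanity-check a downloaded archive against the table above, the small helper below unpacks a .tgz with Python's standard library and counts lines across all regular files inside it. That each line holds exactly one n-gram record, and the internal layout of the archives, are assumptions.

import tarfile

def count_lines_in_tgz(path: str) -> int:
    """Count lines across all regular files inside a .tgz archive."""
    total = 0
    with tarfile.open(path, "r:gz") as tgz:
        for member in tgz:
            if not member.isfile():
                continue
            fh = tgz.extractfile(member)
            if fh is not None:
                total += sum(1 for _ in fh)
    return total

# e.g. count_lines_in_tgz("1-gram.2019.tgz") should come out near 1,075,227
# if each line holds one n-gram record.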