MEDLINE N-Gram Set

The MEDLINE N-gram Set 2017: by Split, Group, Filter, and Combine Algorithm

The MEDLINE n-gram set (generated by the split, group, filter, and combine algorithm) is listed below. For each MEDLINE record, the title and abstract are used as the source of n-grams. They are combined, tokenized into sentences, and then tokenized into tokens (words, using space as the word boundary). Finally, n-grams are generated and filtered, dropping any n-gram with more than 50 characters or a total word count (frequency in the corpus) of less than 30. The specifications for generating these n-grams are listed as follows (a simplified code sketch of this pipeline appears after the specification list):

  • MEDLINE: 2017 - TI and AB (from MEDLINE Baseline Repository - MBR, PmidTiAbS17nXXXX.txt: 1 ~ 892)
  • Method: Split, Group, Filter, and Combine Algorithm
  • Max. Character Size: 50
  • Min. word count: 30
  • Min. document count: 1

  • Total document count: 26,759,399
  • Total sentence count: 163,021,640
  • Total token count: 3,386,661,350
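
  The following is a minimal Python sketch of the split, group, filter, and combine steps described above. The record list, the naive sentence splitter, and the whitespace tokenizer are simplified stand-ins for illustration only (they are not the MBR input format or the tokenizers used to build the released set); only the thresholds are taken from the specifications above.

    # Sketch of the split, group, filter, and combine idea (simplified).
    from collections import defaultdict

    MAX_CHARS = 50        # "Max. Character Size: 50"
    MIN_WORD_COUNT = 30   # "Min. word count: 30" (total occurrences of the n-gram)
    MIN_DOC_COUNT = 1     # "Min. document count: 1"
    MAX_N = 5             # unigrams through five-grams

    def ngrams_from_record(title, abstract, max_n=MAX_N):
        """Yield n-grams from one record: title and abstract are combined,
        split into sentences (naively, on '.'), then tokenized on whitespace."""
        text = f"{title} {abstract}"
        for sentence in text.split("."):
            tokens = sentence.split()
            for n in range(1, max_n + 1):
                for i in range(len(tokens) - n + 1):
                    yield " ".join(tokens[i:i + n])

    def build_ngram_set(records):
        """records: iterable of (title, abstract) pairs (hypothetical input shape).
        Returns {n-gram: (document count, word count)} after filtering."""
        word_count = defaultdict(int)   # total occurrences per n-gram
        doc_count = defaultdict(int)    # number of records containing the n-gram
        for title, abstract in records:
            seen_in_doc = set()
            for gram in ngrams_from_record(title, abstract):
                word_count[gram] += 1
                seen_in_doc.add(gram)
            for gram in seen_in_doc:
                doc_count[gram] += 1
        # Filter: drop n-grams longer than 50 characters or below the count thresholds.
        return {
            gram: (doc_count[gram], word_count[gram])
            for gram in word_count
            if len(gram) <= MAX_CHARS
            and word_count[gram] >= MIN_WORD_COUNT
            and doc_count[gram] >= MIN_DOC_COUNT
        }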

  • N-gram files
    • File format - 3 fields:
      Document count, word count, n-gram
    • Sort order: document count, word count, then alphabetical order of the n-grams. The combined N-gram Set is not sorted; it can be sorted with the nGramUtil package (a small parsing and sorting sketch follows the download table below).

    Download:

    N-grams                 File                       Zip Size   Actual Size   No. of n-grams
    Unigrams                1-gram.2017.tgz            6.5 MB     16 MB            976,872
    Bigrams                 2-gram.2017.tgz            41 MB      122 MB         5,722,210
    Trigrams                3-gram.2017.tgz            63 MB      207 MB         8,096,532
    Four-grams              4-gram.2017.tgz            44 MB      152 MB         5,044,153
    Five-grams              5-gram.2017.tgz            21 MB      75 MB          2,123,270
    N-gram Set              nGramSet.2017.30.tgz       174 MB     570 MB        21,963,037
    Distilled N-gram Set    distilledNGram.2017.tgz    68 MB      220 MB         8,461,972
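
  Below is a small Python sketch of loading one of the extracted n-gram files and sorting it in the order described above. The "|" field separator and the descending direction of the two counts are assumptions made for illustration; the nGramUtil package remains the supported way to sort the combined N-gram Set.

    # Sketch: read an extracted n-gram file (3 fields per line) and sort it.
    # Assumed line format: document count | word count | n-gram, "|"-separated.

    def load_ngram_file(path):
        """Read lines into (document count, word count, n-gram) tuples."""
        rows = []
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                doc_count, word_count, gram = line.rstrip("\n").split("|", 2)
                rows.append((int(doc_count), int(word_count), gram))
        return rows

    def sort_ngrams(rows):
        """Sort by document count, then word count (both assumed descending),
        then by the n-gram text in ascending alphabetical order."""
        return sorted(rows, key=lambda r: (-r[0], -r[1], r[2]))

    if __name__ == "__main__":
        rows = load_ngram_file("1-gram.2017")  # hypothetical extracted file name
        for doc_count, word_count, gram in sort_ngrams(rows)[:10]:
            print(f"{doc_count}|{word_count}|{gram}")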