  1. NLTK

    NLTK is a leading platform for building Python programs to work with human language data. It provides easy-to-use interfaces to lexical resources such as WordNet, along with text processing libraries for classification, tokenization, stemming, tagging, parsing, and semantic reasoning.



    Install NLTK: run sudo pip install -U nltk

    Install Numpy (optional): run sudo pip install -U numpy

    Test installation: run python then type import nltk
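    A minimal sketch of NLTK in action, using the Porter stemmer because it works out of the box with no extra corpus downloads (tokenizers and taggers need `nltk.download(...)` first):

    ```python
    # Quick NLTK check: stem a few words with the Porter stemmer,
    # which needs no extra corpus downloads.
    from nltk.stem import PorterStemmer

    stemmer = PorterStemmer()
    words = ["running", "flies", "easily"]
    stems = [stemmer.stem(w) for w in words]
    print(stems)  # ['run', 'fli', 'easili']
    ```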

  2. Pattern

    Pattern has tools for natural language processing such as part-of-speech taggers, n-gram search, sentiment analysis, and WordNet. It also supports machine learning with a vector space model, clustering, and SVM classification.




    pip install pattern
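    A minimal sketch of Pattern's English toolkit; note that Pattern historically targets older Python versions, so on a recent interpreter you may need a community-maintained fork:

    ```python
    # Inflection and lexicon-based sentiment from pattern.en.
    from pattern.en import pluralize, sentiment

    print(pluralize("cat"))  # simple inflection helper

    # sentiment() returns a (polarity, subjectivity) pair.
    polarity, subjectivity = sentiment("A wonderful little library.")
    print(polarity, subjectivity)
    ```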

  3. TextBlob

    TextBlob is a Python library for processing textual data. It provides a simple API for diving into common natural language processing tasks such as part-of-speech tagging, noun phrase extraction, sentiment analysis, classification, translation, and more.



    pip install -U textblob
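    A minimal TextBlob sketch. The bundled pattern-based sentiment analyzer works out of the box; other features such as noun phrase extraction need `python -m textblob.download_corpora` first:

    ```python
    # Sentiment with TextBlob's default PatternAnalyzer.
    from textblob import TextBlob

    blob = TextBlob("TextBlob makes text processing great.")
    print(blob.sentiment.polarity)  # positive for this sentence
    ```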
  4. Gensim

    Gensim is a Python library for topic modelling, document indexing and similarity retrieval with large corpora. It can process input larger than RAM. According to the author, it is "the most robust, efficient and hassle-free piece of software to realize unsupervised semantic modelling from plain text".


    pip install -U gensim
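    A minimal gensim sketch: build a bag-of-words corpus from a toy collection of pre-tokenized documents and fit a TF-IDF model over it (the same corpus could feed `models.LdaModel` for topic modelling):

    ```python
    # Dictionary -> bag-of-words corpus -> TF-IDF model.
    from gensim import corpora, models

    docs = [
        ["human", "machine", "interface"],
        ["survey", "of", "user", "opinion"],
        ["the", "user", "interface", "system"],
    ]
    dictionary = corpora.Dictionary(docs)           # token -> id mapping
    corpus = [dictionary.doc2bow(doc) for doc in docs]
    tfidf = models.TfidfModel(corpus)
    print(tfidf[corpus[0]])  # TF-IDF weights for the first document
    ```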

  5. PyNLPl

    Python Natural Language Processing Library (pronounced: pineapple) is a Python library for Natural Language Processing. It is a collection of various independent or loosely interdependent modules useful for common, and less common, NLP tasks. PyNLPl can be used, for example, for the computation of n-grams, frequency lists and distributions, and language models. It also offers more complex data types, such as priority queues, and search algorithms, such as beam search.



    pip install pynlpl
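    A minimal PyNLPl sketch using the `Windower` helper from `pynlpl.textprocessors` to compute bigrams (by default it pads the sequence with begin/end markers):

    ```python
    # Sliding-window n-grams over a token list.
    from pynlpl.textprocessors import Windower

    tokens = "to be or not to be".split()
    bigrams = list(Windower(tokens, 2))
    print(bigrams)
    ```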

  6. spaCy

    spaCy is commercial open-source software: industrial-strength NLP with Python and Cython. It is a pipeline for fast, state-of-the-art natural language processing.



    pip install spacy
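    A minimal spaCy sketch using a blank English pipeline, which tokenizes without downloading a trained model (full tagging, parsing and NER need e.g. `python -m spacy download en_core_web_sm`):

    ```python
    # Tokenization with a blank (model-free) English pipeline.
    import spacy

    nlp = spacy.blank("en")
    doc = nlp("spaCy tokenizes this sentence quickly.")
    print([token.text for token in doc])
    ```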
  7. Polyglot

    Polyglot is a natural language pipeline that supports massive multilingual applications. It supports tokenization for 165 languages, language detection for 196 languages, named entity recognition for 40 languages, part-of-speech tagging for 16 languages, sentiment analysis for 136 languages, word embeddings for 137 languages, morphological analysis for 135 languages, and transliteration for 69 languages.



    pip install polyglot
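    A minimal Polyglot sketch showing language detection, which works without downloading extra models; most other features require `polyglot download` commands and the PyICU dependency:

    ```python
    # Language detection with Polyglot's cld2-backed Detector.
    from polyglot.detect import Detector

    detector = Detector("This is clearly an English sentence written for testing.")
    print(detector.language.code)
    ```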
  8. MontyLingua

    MontyLingua is a free, commonsense-enriched, end-to-end natural language understander for English. Feed raw English text into MontyLingua and it outputs a semantic interpretation of that text, which makes it well suited to information retrieval and extraction, request processing, and question answering. From English sentences it extracts subject/verb/object tuples; adjectives, noun phrases, and verb phrases; and people's names, places, events, dates, times, and other semantic information.



  9. BLLIP Parser
    BLLIP Parser (also known as the Charniak-Johnson parser) is a statistical natural language parser including a generative constituent parser and discriminative maximum entropy reranker. It includes command-line and Python interfaces.
  10. Quepy

    Quepy is a Python framework for transforming natural language questions into queries in a database query language. It can be easily customized to handle different kinds of questions and database queries, so with little coding you can build your own system for natural language access to your database.

