Sentiment Evaluation is a tool for comparing the accuracy of several sentiment analysis APIs against a gold standard dataset: a list of manually annotated sentences.
Using this tool requires proficiency in Python.
Sentiment Evaluation on GitHub
Before you begin
This tool supports the following sentiment analysis APIs:
- AIApplied. Service that extracts categories, sentiment, and demographics.
- AlchemyAPI. One of the world’s most popular NLP solutions.
- Bitext. Semantic technologies solution with a sentiment analysis API that claims to have the highest accuracy on the market.
- Chatterbox. Social technology engine that uses machine learning for sentiment analysis.
- Datumbox. Leverages machine learning for sentiment analysis.
- Lymbix. A sentiment analysis tool that includes emotions in their analyses.
- Repustate. Service aimed at providing third-party developers the tools necessary to build solutions for content extraction.
- Semantria. Modern, fast-growing NLP solution based on Lexalytics’ Salience engine.
- Sentigem. A free sentiment analysis tool, currently in beta.
- Skyttle. Extracts patterns within text and structures them for later in-depth analysis.
- Viralheat. Social media monitoring solution that offers a sentiment analysis API for 3rd-party integrators.
To start evaluating these APIs, you will need to create accounts with the services you want to evaluate.
Using Sentiment Evaluation
1. Install requirements
pip install -r requirements.txt
The following requirements file is optional and is only needed for testing the code:
pip install -r requirements-testing.txt
2. Configure the Gold Standard text file
The label can be "0" for neutral, "+" for positive, "-" for negative, or "X" for irrelevant. Irrelevant documents will be excluded from the evaluation.
An example Gold Standard text file can be found in:
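As an illustration only (the exact layout is an assumption; defer to the example file shipped with the repository), a gold standard file along these lines would pair each label with its annotated sentence:

```
+	Great service, highly recommended!
-	The battery died after two days.
0	The package arrived on Tuesday.
X	asdf http://example.com spam
```

Here the "X"-labeled line would be skipped during evaluation, as noted above.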
3. Enter the API keys for selected services
4. Select the services to evaluate (Optional)
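The configuration format below is a hypothetical sketch (section and key names are assumptions, not the tool's documented schema); it shows the general idea of a config file that holds API keys and limits the run to selected services:

```ini
; Hypothetical config sketch -- section and key names are assumptions.
[datumbox]
api_key = YOUR_DATUMBOX_KEY

[semantria]
consumer_key = YOUR_SEMANTRIA_KEY
consumer_secret = YOUR_SEMANTRIA_SECRET
```

Services without credentials would simply be left out of the comparison.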
5. Start using!
python compare.py path-to-text-file-with-annotated-data path-to-config-file
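To make the evaluation concrete, here is a minimal sketch (a hypothetical illustration, not the actual compare.py implementation) of how a service's accuracy can be scored against the gold standard, excluding "X"-labeled documents as described above:

```python
def accuracy(gold, predicted):
    """Fraction of relevant gold labels matched by a service's predictions.

    Documents labeled "X" (irrelevant) in the gold standard are excluded
    from the evaluation, mirroring the rule stated in the gold standard step.
    """
    # Keep only (gold, predicted) pairs for relevant documents.
    pairs = [(g, p) for g, p in zip(gold, predicted) if g != "X"]
    if not pairs:
        return 0.0
    correct = sum(1 for g, p in pairs if g == p)
    return correct / len(pairs)

gold = ["+", "-", "0", "X", "+"]
pred = ["+", "0", "0", "+", "+"]
print(accuracy(gold, pred))  # the "X" item is skipped: 3 of 4 correct -> 0.75
```

The same score computed per service is what allows the APIs to be ranked against one another.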