## About ReviewAid
ReviewAid is an open-source, AI-driven tool designed to streamline the full-text screening and data extraction phases of systematic reviews. It leverages large language models to classify papers against PICO criteria and extract custom data fields, drastically reducing the manual workload for researchers.
For more details and documentation, please visit aurumz-rgb.github.io/ReviewAid/ or the GitHub repository.
If the primary ReviewAid instance is experiencing high usage, resource saturation, or memory limitations, you may freely use any of the available mirror versions.
## Key Features
1. **Full-text PICO Screening**: AI-based inclusion/exclusion classification against user-specified PICO criteria (see the prompt sketch after this list)
2. **Full-text Data Extraction**: extraction of custom fields from the full text
3. **Batch Processing**: process up to 20 papers at once
4. **Multiple Exports**: export results in CSV, Excel, and Word formats
5. **Live Terminal**: real-time processing logs to ensure transparency
6. **Confidence Scoring**: estimates the reliability of each extraction so researchers know when to trust a result and when to verify it
7. **Configuration**: configure any supported AI model with your own API key
8. **Flexible Use**: runs locally or online, and is highly reusable
9. **Open Source**: made by researchers to ensure there is no proprietary "black box"
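To make the screening feature concrete, here is a minimal sketch of how PICO criteria might be assembled into a classification prompt for an LLM. This is a hypothetical illustration, not ReviewAid's actual prompt or code; the function name and criteria keys are assumptions.

```python
def build_screening_prompt(pico: dict, full_text: str) -> str:
    """Assemble a PICO-based inclusion/exclusion prompt.

    Hypothetical sketch; not ReviewAid's internal prompt.
    """
    return (
        "Classify this paper as INCLUDE, EXCLUDE, or MAYBE "
        "against the following criteria.\n"
        f"Population: {pico['population']}\n"
        f"Intervention: {pico['intervention']}\n"
        f"Comparison: {pico['comparison']}\n"
        f"Outcome: {pico['outcome']}\n\n"
        f"Full text:\n{full_text}"
    )
```

The INCLUDE/EXCLUDE/MAYBE labels mirror the Included/Excluded/Maybe decisions shown in the Screening Dashboard described below.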
You can also check out the full walkthrough and demonstration of ReviewAid on YouTube.
## Confidence Scoring System
This layered approach ensures that high-confidence decisions are automated safely, while ambiguous or unreliable cases are clearly flagged for human oversight.
| Confidence Score | Classification | Description | Implication |
|---|---|---|---|
| 1.0 (100%) | Definitive Match | Deterministic rule-based classification / No ambiguity. | Fully automated decision |
| 0.8 – 0.99 | Very High | AI strongly validates the decision using explicit textual evidence. | Safe to accept |
| 0.6 – 0.79 | High | Criteria appear satisfied based on standard academic structure and content. | Review optional |
| 0.4 – 0.59 | Moderate | Ambiguous context or loosely met criteria. | Manual verification recommended |
| 0.1 – 0.39 | Low | Based mainly on heuristic keyword estimation. | High risk of error |
| < 0.1 | Unreliable | Derived from fallback or failed extraction methods. | Mandatory manual review |
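The table's thresholds amount to a simple lookup from score to suggested action. The sketch below is a hypothetical restatement of the table in code, not ReviewAid's implementation; the function name and return strings are assumptions.

```python
def triage(confidence: float) -> str:
    """Map a confidence score to the action suggested by the table above.

    Hypothetical illustration only; not ReviewAid's internal code.
    """
    if confidence >= 1.0:
        return "Definitive match: fully automated decision"
    if confidence >= 0.8:
        return "Very high: safe to accept"
    if confidence >= 0.6:
        return "High: review optional"
    if confidence >= 0.4:
        return "Moderate: manual verification recommended"
    if confidence >= 0.1:
        return "Low: high risk of error"
    return "Unreliable: mandatory manual review"
```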
## Usage & Installation
Follow these instructions to run ReviewAid online or locally.
### ⚡ Usage (Online)
1. Launch the online Streamlit-hosted web app

   Access the application directly from your browser, with no installation required.

2. Select a mode:

   - Full-text Paper Screener: screen papers against PICO (Population, Intervention, Comparison, Outcome) criteria.
   - Full-text Data Extractor: extract specific fields (Author, Year, Conclusion, etc.) from research papers.

3. Workflow (Screener):

   - Enter your PICO criteria (inclusion/exclusion) in the input fields.
   - Upload your PDF papers (batch upload is supported).
   - Click "Screen Papers".
   - Monitor the "System Terminal" for real-time logs of extraction, API calls, and processing status.
   - View the "Screening Dashboard" for a pie chart of Included/Excluded/Maybe decisions.
   - Download results as CSV, XLSX, or DOCX.

4. Workflow (Extractor):

   - Enter the fields you want to extract (comma-separated).
   - Upload your PDF papers.
   - Click "Process Papers".
   - Monitor the "System Terminal" for logs.
   - View the extracted data in the dashboard.
   - Download the extracted data as CSV, XLSX, or DOCX (see the export sketch after this list).

5. Configuration:

   - To use an API key, select the corresponding AI model in either the Screener or the Extractor.
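For readers curious about the download step, here is a minimal sketch of how extracted rows could be written to CSV and XLSX with pandas. It is a hypothetical illustration, not ReviewAid's export code; the column names simply reuse the example fields above, and DOCX export would need an additional library such as python-docx.

```python
import pandas as pd

# Hypothetical rows: one dict per paper, keyed by the requested fields.
rows = [
    {"Author": "Smith et al.", "Year": "2021", "Conclusion": "..."},
    {"Author": "Lee et al.", "Year": "2019", "Conclusion": "..."},
]

df = pd.DataFrame(rows)
df.to_csv("extracted_data.csv", index=False)
df.to_excel("extracted_data.xlsx", index=False)  # requires openpyxl
```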
### ⚡ Usage (Run Streamlit Locally)
To run ReviewAid locally with your own API keys (OpenAI, DeepSeek, etc.), follow these steps:
1. Clone the repository

   ```bash
   git clone https://github.com/aurumz-rgb/ReviewAid.git
   cd ReviewAid
   ```

2. Create and activate a virtual environment (recommended)

   ```bash
   python -m venv venv
   source venv/bin/activate  # macOS / Linux
   venv\Scripts\activate     # Windows
   ```

3. Install dependencies

   ```bash
   pip install -r requirements.txt
   ```

4. Start the Streamlit application

   ```bash
   streamlit run app.py
   ```

5. Configure the AI model and API key inside the UI

   - Select the AI model as the provider
   - Enter your API key
### 🖥️ Running ReviewAid Locally with Ollama (No API Key Required)
ReviewAid supports local inference using Ollama, allowing you to run the application without any external API keys. This is ideal for users who prefer offline usage, enhanced privacy, or full local control.
#### Prerequisites

Ensure the following are installed on your system:

- Python 3.12+
- Ollama (installed and running locally)
  - Download: https://ollama.com
- At least one supported Ollama model (e.g., `llama3`)

Pull a model (example):

```bash
ollama pull llama3
```

Verify Ollama is running:

```bash
ollama list
```
#### ▶️ Running ReviewAid with Ollama
1. Clone the repository

   ```bash
   git clone https://github.com/aurumz-rgb/ReviewAid.git
   cd ReviewAid
   ```

2. Create and activate a virtual environment (recommended)

   ```bash
   python -m venv venv
   source venv/bin/activate  # macOS / Linux
   venv\Scripts\activate     # Windows
   ```

3. Install dependencies

   ```bash
   pip install -r requirements.txt
   ```

4. Start the Streamlit application

   ```bash
   streamlit run app.py
   ```

5. Configure Ollama inside the UI

   - Select Ollama (Local) as the provider
   - Choose a local model (e.g., `llama3`)
   - No API key is required
#### Privacy Advantage
When using Ollama:
- All inference runs entirely on your local machine
- No data is sent to external servers
- No API keys are required or stored
This makes Ollama the most privacy-preserving configuration supported by ReviewAid. A few practical notes:

- Performance depends on your local hardware (CPU/GPU/RAM)
- Large PDFs or batch sizes may take longer on CPU-only systems
- For best results, ensure Ollama is running before launching Streamlit (see the readiness check below)
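One way to confirm that Ollama is running before launching Streamlit is to ping its local HTTP endpoint. The helper below is a hypothetical convenience script, not part of ReviewAid; it assumes Ollama's default address (http://localhost:11434) and the requests library.

```python
import requests

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint

def ollama_is_running() -> bool:
    """Return True if a local Ollama server responds (hypothetical helper)."""
    try:
        # /api/tags lists the locally pulled models; any successful
        # response means the server is up.
        response = requests.get(f"{OLLAMA_URL}/api/tags", timeout=2)
        return response.status_code == 200
    except requests.RequestException:
        return False

if __name__ == "__main__":
    if ollama_is_running():
        print("Ollama is up; safe to launch Streamlit.")
    else:
        print("Ollama is not reachable. Start it first (e.g., `ollama serve`).")
```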
## Configuration
ReviewAid supports OpenAI, Claude, DeepSeek, Cohere, Z.ai, and Ollama (local) as providers, configured via API key where applicable. To protect your privacy, API keys are never stored.
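As an aside on the no-storage guarantee: in a Streamlit app, one way to achieve it is to hold the key only in the current session and pass it straight to the model client, never writing it to disk. The sketch below is a hypothetical illustration of that pattern using the OpenAI client, not ReviewAid's actual code; the widget label is an assumption.

```python
import streamlit as st
from openai import OpenAI  # assumes the official `openai` package is installed

# Hypothetical pattern: the key lives only in this session's memory and is
# passed straight to the client; it is never written to disk or a config file.
api_key = st.text_input("API key", type="password")

if api_key:
    client = OpenAI(api_key=api_key)
    st.success("Client configured for this session only.")
```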
For tested tasks, the following models were successful:

- OpenAI – GPT-4o
- DeepSeek – deepseek-chat
- Cohere – command-a-03-2025
- Z.AI – GLM-4.6V-Flash, GLM-4.5V-Flash
- Anthropic – Claude-Sonnet-4-20250514
- Ollama (local) – Llama3
- Default – GLM-4.6V-Flash
## Acknowledgements
I gratefully acknowledge the developers of GLM-4.6V-Flash (Z.ai) for providing the default AI model used in ReviewAid.
The visual and text-based reasoning capabilities of GLM-4.6V-Flash have greatly enhanced ReviewAid's full-text screening and data extraction workflows.
For more information, please see the GLM-4.6V-Flash paper and the GLM-4.6V-Flash Hugging Face page.
I would also like to thank Mohith Balakrishnan for his thorough validation of ReviewAid, including batch testing, error checks, and confidence verification, which significantly improved the tool’s reliability and accuracy.
## Citation
For ReviewAid's preprint paper, please check ReviewAid MetaArXiV.
If you use ReviewAid in your research, please cite it using the following format: