# PeerRead Agent Usage
For quick start, module architecture, and review storage details, see `README.md` and `architecture.md`.
## Available Agent Tools
The agent has access to the following tools, defined in `src/app/tools/peerread_tools.py`.
### Paper Retrieval
- `get_peerread_paper(paper_id: str) -> PeerReadPaper`: Retrieves a specific paper's metadata from the PeerRead dataset.
- `query_peerread_papers(venue: str = "", min_reviews: int = 1) -> list[PeerReadPaper]`: Queries papers with filters such as venue and minimum number of reviews.
- `get_paper_content(paper_id: str) -> str`: Reads the full text content of a paper by ID, returning extracted text for analysis.
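To illustrate the filter semantics of `query_peerread_papers` (an empty `venue` matches all venues; `min_reviews` is a lower bound), here is a minimal self-contained sketch. The `Paper` dataclass is a hypothetical stand-in for `PeerReadPaper`, not the real model:

```python
from dataclasses import dataclass


# Hypothetical stand-in for PeerReadPaper, used only to demonstrate
# the venue/min_reviews filtering behavior described above.
@dataclass
class Paper:
    paper_id: str
    venue: str
    review_count: int


def query_papers(
    papers: list[Paper], venue: str = "", min_reviews: int = 1
) -> list[Paper]:
    """Filter papers by venue (empty string matches all) and review count."""
    return [
        p
        for p in papers
        if (not venue or p.venue == venue) and p.review_count >= min_reviews
    ]


papers = [
    Paper("1105.1072", "acl_2017", 3),
    Paper("2301.00001", "iclr_2017", 0),
]
print([p.paper_id for p in query_papers(papers, min_reviews=1)])  # → ['1105.1072']
```

With the default `min_reviews=1`, papers without any reviews are excluded, which is why the second paper is dropped in the example.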
### Review Generation
- `generate_paper_review_content_from_template(paper_id: str, review_focus: str = "comprehensive", tone: str = "professional") -> str`: Creates a review template for a specific paper. **Warning:** this creates a template structure, not an actual review; it is designed for demonstration purposes.
Parameters:

- `review_focus`: Type of review, one of `"comprehensive"`, `"technical"`, or `"high-level"`
- `tone`: Review tone, one of `"professional"`, `"constructive"`, or `"critical"`
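The accepted parameter values above can be sketched as a small validation helper. This is a hypothetical illustration of the tool's signature; the real template body in `peerread_tools.py` is not reproduced here:

```python
# Allowed values, as documented for the review-template tool.
VALID_FOCUS = {"comprehensive", "technical", "high-level"}
VALID_TONE = {"professional", "constructive", "critical"}


def template_header(
    paper_id: str,
    review_focus: str = "comprehensive",
    tone: str = "professional",
) -> str:
    """Return a one-line template header after validating parameter values."""
    if review_focus not in VALID_FOCUS:
        raise ValueError(f"unknown review_focus: {review_focus!r}")
    if tone not in VALID_TONE:
        raise ValueError(f"unknown tone: {tone!r}")
    return f"Review template for {paper_id} (focus={review_focus}, tone={tone})"


print(template_header("1105.1072", "technical", "critical"))
```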
### Review Persistence
- `save_structured_review(paper_id: str, structured_review: GeneratedReview) -> str`: Saves a validated `GeneratedReview` object to persistent storage. Recommended for structured reviews.
- `save_paper_review(paper_id: str, review_text: str, recommendation: str = "", confidence: float = 0.0) -> str`: Saves raw review text with an optional recommendation and confidence score.
Storage format:

- Files are saved as `{paper_id}_{timestamp}.json`
- Structured reviews additionally create `{paper_id}_{timestamp}_structured.json`
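The naming scheme above can be sketched as follows. The exact timestamp layout is an assumption for illustration; the tools may format it differently internally:

```python
from datetime import datetime, timezone


def review_filename(paper_id: str, structured: bool = False) -> str:
    """Build a review filename following the {paper_id}_{timestamp}[..].json scheme.

    The UTC timestamp format here is an assumption, not the verified
    behavior of save_paper_review / save_structured_review.
    """
    ts = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S")
    suffix = "_structured" if structured else ""
    return f"{paper_id}_{ts}{suffix}.json"


print(review_filename("1105.1072"))
print(review_filename("1105.1072", structured=True))
```

Including a timestamp in the filename means repeated runs for the same paper never overwrite earlier reviews.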
## CLI Options
### Dataset Management
```bash
# Download sample PeerRead data (recommended for testing)
make app_cli ARGS="--download-peerread-samples-only"

# Download full PeerRead dataset (large download)
make app_cli ARGS="--download-peerread-full-only"

# Limit sample download size
make app_cli ARGS="--download-peerread-samples-only --peerread-max-papers-per-sample-download 50"
```
### Agent Configuration
```bash
# Enable specific agent types
make app_cli ARGS="--paper-id=1105.1072 --include-researcher --include-analyst --include-synthesiser"

# Enable streaming output
make app_cli ARGS="--paper-id=1105.1072 --pydantic-ai-stream"

# Use custom chat configuration
make app_cli ARGS="--paper-id=1105.1072 --chat-config-file=/path/to/config.json"
```
### Evaluation Control
```bash
# Skip evaluation after agent run
make app_cli ARGS="--paper-id=1105.1072 --skip-eval"

# Generate a Markdown report after evaluation (mutually exclusive with --skip-eval)
make app_cli ARGS="--paper-id=1105.1072 --generate-report"

# Override Tier 2 judge provider/model
make app_cli ARGS="--paper-id=1105.1072 --judge-provider=openai --judge-model=gpt-4o"
```
### Review Tools Control
```bash
# Disable review generation tools (opt-out)
make app_cli ARGS="--paper-id=1105.1072 --no-review-tools"

# Explicitly enable review tools (default, rarely needed)
make app_cli ARGS="--paper-id=1105.1072 --enable-review-tools"
```
### Execution Engine
```bash
# MAS engine (default)
make app_cli ARGS="--paper-id=1105.1072 --engine=mas"

# Claude Code headless engine (requires claude CLI installed)
make app_cli ARGS="--paper-id=1105.1072 --engine=cc"

# Claude Code with Agent Teams mode
make app_cli ARGS="--paper-id=1105.1072 --engine=cc --cc-teams"
```
### Sweep & Profiling
```bash
# Sweep across multiple papers and MAS compositions
make app_sweep ARGS="--paper-ids 1105.1072,2301.00001 --repetitions 3 --all-compositions"
```
## Supported Chat Providers
All providers configured in `src/app/config/config_chat.json` are available. Common choices:
- `github`: GitHub Models API
- `ollama`: Local Ollama installation (see `make setup_ollama`)
- `openai`: OpenAI API
- `anthropic`: Anthropic Claude API
- `gemini`, `groq`, `cerebras`, `mistral`, `openrouter`, and more (see `PROVIDER_REGISTRY` in `app_models.py`)
## Troubleshooting
**Paper not found error:**

- Ensure the PeerRead dataset is downloaded: `make app_cli ARGS="--download-peerread-samples-only"`
- Paper IDs are arXiv IDs (e.g., `1105.1072`), not sequential numbers
- Use `query_peerread_papers` via the agent to list available papers
**Agent tools not working:**

- Verify the chat provider configuration in `config_chat.json`
- Check that API keys are set in `.env` for the chosen provider
- Review logs for specific error messages
**Review saving failures:**

- Ensure the output directory is writable (it is created automatically on first run)
- Verify the `GeneratedReview` object structure for structured reviews
**Claude Code engine failures (`--engine=cc`):**

- Check that the `claude` CLI is installed: `which claude`
- Ensure `ANTHROPIC_API_KEY` is set in `.env`
For more detailed documentation, review the docstrings in `src/app/tools/peerread_tools.py` and the configuration examples in `src/app/config/`.