# Aworld MCP Servers

The accompanying repository for the AWorld project.
- 🦩 [2025/06/19] AWorld achieved a score of 72.43 on the GAIA test, ranking #1 among open-source projects and the only one in GAIA's top 10. 🐦 tweets
## API Integration and Testing

This repository contains examples of API requests and tests for various services, including health checks, web scraping with the Google Custom Search API and BeautifulSoup, and a re-implementation of the DeepResearcher search workflow.
## Quick Setup

### Prerequisites

- Python 3.12 or higher
- uv package manager
### Installation

1. Install uv (if not already installed):

   ```bash
   # macOS/Linux
   curl -LsSf https://astral.sh/uv/install.sh | sh

   # Windows
   powershell -c "irm https://astral.sh/uv/install.ps1 | iex"

   # Or via pip
   pip install uv
   ```

2. Clone the repository:

   ```bash
   git clone https://github.com/your-username/aworld-mcp-servers.git
   cd aworld-mcp-servers
   ```

3. Install dependencies:

   ```bash
   uv sync
   ```

4. Activate the virtual environment (optional; uv handles this automatically):

   ```bash
   source .venv/bin/activate  # macOS/Linux
   .venv\Scripts\activate     # Windows
   ```
### Running the Application

- Main Flask server:

  ```bash
  uv run aworld-server
  ```

- FastAPI server:

  ```bash
  uv run aworld-fastapi
  ```

- GAIA runner:

  ```bash
  uv run gaia-runner
  ```

- Run as a module (alternative):

  ```bash
  uv run python -m src.main
  ```
### Development Setup

1. Install development dependencies:

   ```bash
   uv sync --dev
   ```

2. Install pre-commit hooks:

   ```bash
   uv run pre-commit install
   ```

3. Run tests:

   ```bash
   uv run pytest
   ```

4. Code formatting and linting:

   ```bash
   uv run black .
   uv run ruff check .
   uv run mypy src/
   ```
## Health Check

This section demonstrates how to perform a health check on a specific service endpoint.

```bash
curl -X GET http://DEPLOYED_HOST:PORT/health
```
### Notes
- The health check endpoint is used to verify the availability and connectivity of the service.
- If the request fails, it may indicate that the service is down or there is a network issue.
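The same check can be scripted with only the Python standard library. A minimal sketch, assuming a deployment reachable at a host and port of your choosing (the values below are placeholders):

```python
from urllib.request import urlopen

def health_url(host: str, port: int) -> str:
    """Build the health-check URL for a given deployment."""
    return f"http://{host}:{port}/health"

def is_healthy(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if GET /health answers with HTTP 200, False on any failure."""
    try:
        with urlopen(health_url(host, port), timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers URLError, connection refused, timeouts
        return False

print(health_url("localhost", 8080))  # http://localhost:8080/health
```

This is handy in deployment scripts that need to wait for the service to come up before sending real traffic.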
## Google API + BeautifulSoup
This section shows how to use the Google API and BeautifulSoup to scrape web pages.
```bash
# max_len is optional; it caps the length of fetched content and
# only applies when fetch_content is true.
curl -X POST http://DEPLOYED_HOST:PORT/search \
  -H "Content-Type: application/json" \
  -d '{
    "api_key": "YOUR_GOOGLE_API_KEY",
    "cse_id": "YOUR_GOOGLE_CSE_ID",
    "queries": ["machine learning"],
    "num_results": 5,
    "fetch_content": true,
    "language": "en",
    "country": "US",
    "safe_search": true,
    "max_len": 8192
  }'
```
### Notes

- Replace `YOUR_GOOGLE_API_KEY` and `YOUR_GOOGLE_CSE_ID` with your actual Google API key and Custom Search Engine (CSE) ID.
- This request searches for web pages related to "machine learning" and fetches the content of the top 5 results.
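For programmatic use, the same request can be sent from Python with the standard library alone. A sketch, where the host, port, and credentials are placeholders and the helper names (`build_search_payload`, `post_search`) are illustrative, not part of the repository:

```python
import json
from urllib.request import Request, urlopen

def build_search_payload(api_key, cse_id, queries, num_results=5,
                         fetch_content=True, language="en", country="US",
                         safe_search=True, max_len=8192):
    """Assemble the JSON body expected by POST /search (fields mirror the curl example)."""
    return {
        "api_key": api_key,
        "cse_id": cse_id,
        "queries": list(queries),
        "num_results": num_results,
        "fetch_content": fetch_content,
        "language": language,
        "country": country,
        "safe_search": safe_search,
        "max_len": max_len,  # only honored when fetch_content is true
    }

def post_search(host, port, payload, timeout=30):
    """Send the payload to /search and return the decoded JSON response."""
    req = Request(
        f"http://{host}:{port}/search",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())

payload = build_search_payload("YOUR_GOOGLE_API_KEY", "YOUR_GOOGLE_CSE_ID",
                               ["machine learning"])
print(payload["num_results"])  # 5
```

Separating payload construction from the HTTP call makes the request body easy to unit-test without a running server.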
## Deep Researcher

This section demonstrates how to perform a deep research query using the Serper API.
```bash
curl -X POST http://DEPLOYED_HOST:PORT/search/agentic \
  -H "Content-Type: application/json" \
  -d '{
    "question": "machine learning",
    "search_queries": ["machine learning"],
    "base_url": "YOUR_LLM_ENDPOINT",
    "api_key": "YOUR_API_KEY",
    "llm_model_name": "qwen/qwen-plus",
    "serper_api_key": "YOUR_SERPER_API_KEY",
    "topk": 5
  }'
```
### Notes

- Replace `YOUR_LLM_ENDPOINT`, `YOUR_API_KEY`, and `YOUR_SERPER_API_KEY` with your actual LLM endpoint and API keys.
- `base_url` is the endpoint of the LLM service used by the deep research workflow.
- This request searches for information related to "machine learning" and returns the top 5 results.
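The agentic search call can be wrapped the same way. A stdlib-only sketch under the same assumptions (placeholder host, port, and keys; `build_agentic_payload` and `agentic_search` are hypothetical helper names, not repository APIs):

```python
import json
from urllib.request import Request, urlopen

def build_agentic_payload(question, search_queries, base_url, api_key,
                          llm_model_name, serper_api_key, topk=5):
    """Assemble the body for POST /search/agentic (fields mirror the curl example)."""
    return {
        "question": question,
        "search_queries": list(search_queries),
        "base_url": base_url,  # the LLM endpoint, not the search service
        "api_key": api_key,
        "llm_model_name": llm_model_name,
        "serper_api_key": serper_api_key,
        "topk": topk,
    }

def agentic_search(host, port, payload, timeout=120):
    """POST the payload to /search/agentic and return the decoded JSON response."""
    req = Request(f"http://{host}:{port}/search/agentic",
                  data=json.dumps(payload).encode("utf-8"),
                  headers={"Content-Type": "application/json"})
    with urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())

payload = build_agentic_payload("machine learning", ["machine learning"],
                                "YOUR_LLM_ENDPOINT", "YOUR_API_KEY",
                                "qwen/qwen-plus", "YOUR_SERPER_API_KEY")
print(payload["topk"])  # 5
```

A generous timeout is used because an agentic search can involve several LLM round trips before it returns.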
## OpenRouter API
This section demonstrates how to use the OpenRouter API for LLM chat completions and model listing.
### Chat Completions
```bash
curl -X POST http://DEPLOYED_HOST:PORT/openrouter/completions \
  -H "Content-Type: application/json" \
  -d '{
    "api_key": "YOUR_OPENROUTER_API_KEY",
    "model": "google/gemini-2.5-pro",
    "messages": [
      {
        "role": "user",
        "content": "Hello, how are you?"
      }
    ],
    "site_url": "https://your-site.com",
    "site_name": "Your Site Name"
  }'
```
### List Available Models

```bash
curl -X GET http://DEPLOYED_HOST:PORT/openrouter/models
```
### Notes

- Replace `YOUR_OPENROUTER_API_KEY` with your actual OpenRouter API key.
- The `model` parameter supports the various models available through OpenRouter (e.g., "google/gemini-2.5-pro", "anthropic/claude-opus-4", "openai/gpt-4").
- `site_url` and `site_name` are optional parameters for tracking and attribution.
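Since `site_url` and `site_name` are optional, a request body can be built so that unset fields are simply omitted. A small sketch (the `openrouter_chat_body` helper is illustrative, not part of the repository):

```python
def openrouter_chat_body(api_key, model, user_message,
                         site_url=None, site_name=None):
    """Build the body for POST /openrouter/completions; optional fields are omitted when unset."""
    body = {
        "api_key": api_key,
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    # Only include the attribution fields when the caller provides them.
    if site_url:
        body["site_url"] = site_url
    if site_name:
        body["site_name"] = site_name
    return body

body = openrouter_chat_body("YOUR_OPENROUTER_API_KEY",
                            "google/gemini-2.5-pro",
                            "Hello, how are you?")
print(sorted(body))  # ['api_key', 'messages', 'model']
```

Omitting unset optional keys keeps the request minimal and avoids sending empty strings the server would have to interpret.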
## Browser Use API
This section demonstrates how to use the Browser Use API for automated web browsing tasks.
```bash
curl -X POST http://DEPLOYED_HOST:PORT/browser_use \
  -H "Content-Type: application/json" \
  -d '{
    "question": "Go to google.com and search for machine learning",
    "base_url": "YOUR_LLM_ENDPOINT",
    "api_key": "YOUR_API_KEY",
    "model_name": "gpt-4o",
    "temperature": 0.3,
    "enable_memory": false,
    "browser_port": "9111",
    "user_data_dir": "/tmp/chrome-debug/0000",
    "headless": true,
    "extract_base_url": "YOUR_LLM_ENDPOINT",
    "extract_api_key": "YOUR_API_KEY",
    "extract_model_name": "gpt-4o",
    "extract_temperature": 0.3,
    "return_trace": false
  }'
```
### Notes

- Replace `YOUR_LLM_ENDPOINT` and `YOUR_API_KEY` with your actual LLM service endpoint and API key.
- The `question` parameter should contain natural-language instructions for the browser automation task.
- `model_name` supports various models (e.g., "gpt-4o", "claude-3-opus-20240229", "gemini-pro").
- Set `headless` to `false` if you want to see the browser window during automation.
- `enable_memory` allows the agent to remember previous interactions.
- `return_trace` includes a detailed execution trace in the response.
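Because the extraction fields typically mirror the main LLM settings, a builder with sensible defaults keeps call sites short. A sketch under that assumption (the `browser_use_payload` helper is hypothetical; defaults match the curl example above):

```python
def browser_use_payload(question, base_url, api_key, model_name="gpt-4o",
                        temperature=0.3, headless=True, enable_memory=False,
                        browser_port="9111", user_data_dir="/tmp/chrome-debug/0000",
                        return_trace=False):
    """Build the JSON body for POST /browser_use; extraction settings reuse the main LLM by default."""
    return {
        "question": question,
        "base_url": base_url,
        "api_key": api_key,
        "model_name": model_name,
        "temperature": temperature,
        "enable_memory": enable_memory,
        "browser_port": browser_port,
        "user_data_dir": user_data_dir,
        "headless": headless,
        "extract_base_url": base_url,        # reuse the same endpoint for extraction
        "extract_api_key": api_key,
        "extract_model_name": model_name,
        "extract_temperature": temperature,
        "return_trace": return_trace,
    }

p = browser_use_payload("Go to google.com and search for machine learning",
                        "YOUR_LLM_ENDPOINT", "YOUR_API_KEY")
print(p["headless"], p["return_trace"])  # True False
```

If the extraction model needs to differ from the browsing model, pass explicit values instead of relying on the reuse defaults.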
## License
This repository is licensed under the MIT License. See the LICENSE file for details.
