"Because AIs shouldn't monologue — they should converse."
The AI Conversation Platform is a production-ready, enterprise-grade system that enables autonomous AI-to-AI conversations. Watch different AI models debate, collaborate, and interact in real time — completely unscripted.
🚀 Try Live Demo — Watch AI models debate in real time!
🌐 View Project Site — Platform overview and quick start guide.
- 🤝 Multi-Agent Orchestration — Claude, ChatGPT, Gemini, Grok, Perplexity in dynamic conversations
- ⚡ Async-First Architecture — Non-blocking API calls with `asyncio` and `run_in_executor`
- 🛡️ Production-Grade Reliability — Circuit breakers, exponential backoff, similarity detection
- 🔒 Security Hardened — Path validation, input sanitization, API key masking, optional LLM Guard
- 📊 Full Observability — Prometheus metrics, Grafana dashboards, OpenTelemetry tracing
- 🧪 Comprehensive Testing — 90%+ code coverage, pytest with async support
- 🐳 Container-Ready — Docker Compose with health checks and orchestration
- 💻 Developer-Friendly — Modern tooling (uv, Ruff, mypy), pre-commit hooks, CI/CD
- 🌐 Web Demo — Interactive Flask-based demo with real-time SSE streaming
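The async-first design above can be illustrated with a short sketch: blocking provider SDK calls are pushed onto a thread pool with `run_in_executor` so the event loop stays free to serve the other agent. Everything here is illustrative — `slow_provider_call` is a hypothetical stand-in, not part of the platform's API.

```python
import asyncio
import time


def slow_provider_call(prompt: str) -> str:
    """Hypothetical blocking SDK call (stand-in for a real provider client)."""
    time.sleep(0.1)  # simulate network latency
    return f"reply to: {prompt}"


async def ask(prompt: str) -> str:
    # Run the blocking call in the default thread-pool executor
    # so the event loop can keep serving other agents.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, slow_provider_call, prompt)


async def main() -> list[str]:
    # Two "agents" querying concurrently instead of back-to-back.
    return await asyncio.gather(ask("agent1 turn"), ask("agent2 turn"))


if __name__ == "__main__":
    print(asyncio.run(main()))
```

With `asyncio.gather`, both calls overlap in wall-clock time even though each underlying SDK call blocks its worker thread.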
| Requirement | Version | Notes |
|---|---|---|
| Python | 3.10+ | Local runs |
| Docker | 24+ | Full stack |
| API Keys | 1+ | OpenAI, Anthropic, Gemini, etc. |
```bash
git clone https://github.com/systemslibrarian/ai-conversation-platform.git
cd ai-conversation-platform
```

```bash
cp .env.example .env
nano .env
```

Add at least two providers:

```bash
OPENAI_API_KEY=sk-xxxxx
ANTHROPIC_API_KEY=sk-ant-xxxxx
GOOGLE_API_KEY=xxxxx
```

Option A — Local (Python + uv)

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
uv sync --all-extras
uv run aic-start --agent1 claude --agent2 chatgpt --topic "AI ethics" --turns 10 --yes
```

Option B — Docker (Full stack)

```bash
docker compose up --build
```

| Service | URL | Description |
|---|---|---|
| Web Demo | http://localhost:5000 | Interactive AI conversation demo |
| Streamlit UI | http://localhost:8501 | View/search conversations |
| Prometheus | http://localhost:9090 | Metrics |
| Grafana | http://localhost:3000 | Dashboards (admin/admin) |
```bash
cd web
python demo.py
# Open http://localhost:5000 in your browser
```

The web demo lets you configure two AI agents, set a topic, and watch them debate in real time with SSE streaming.
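SSE streaming, as used by the web demo, boils down to a simple wire format: each event is one or more `data:` lines terminated by a blank line. The helpers below are an illustrative sketch of that framing, not the demo's actual code.

```python
import json


def sse_format(payload: dict) -> str:
    """Frame a JSON payload as a single-line Server-Sent Events message."""
    return f"data: {json.dumps(payload)}\n\n"


def sse_parse(frame: str) -> dict:
    """Parse a single-line SSE data frame back into a dict."""
    line = frame.strip()
    assert line.startswith("data: ")
    return json.loads(line[len("data: "):])


event = sse_format({"sender": "claude", "content": "Hello, ChatGPT."})
print(event, end="")
print(sse_parse(event))
```

A browser `EventSource` receives exactly these `data:` frames and fires one `message` event per blank-line-terminated block.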
To export a conversation to JSON, use the 📥 Export to JSON button in the Streamlit UI.
```bash
docker compose down
```

The platform auto-loads API keys from `.env` files using python-dotenv. You have three options:
```bash
# Copy template and add your keys
cp .env.example .env
nano .env

# Keys are auto-loaded when you run the app
uv run aic-start --agent1 chatgpt --agent2 gemini --topic "test" --turns 3 --yes
```

```bash
# Set user-level secrets (available to all your Codespaces)
gh secret set OPENAI_API_KEY --user
gh secret set GOOGLE_API_KEY --user
gh secret set ANTHROPIC_API_KEY --user

# Restart Codespace to load secrets
# Keys are automatically available in the environment
```

```bash
# Export keys in your current shell
export OPENAI_API_KEY="sk-..."
export GOOGLE_API_KEY="AIza..."
export ANTHROPIC_API_KEY="sk-ant-..."

# Run immediately
uv run aic-start --agent1 chatgpt --agent2 gemini --topic "test" --turns 3 --yes
```

Required keys by agent:
| Agent | Environment Variable | Get Key From |
|---|---|---|
| `chatgpt` | `OPENAI_API_KEY` | https://platform.openai.com/api-keys |
| `gemini` | `GOOGLE_API_KEY` or `GEMINI_API_KEY` | https://aistudio.google.com/app/apikey |
| `claude` | `ANTHROPIC_API_KEY` | https://console.anthropic.com/settings/keys |
| `grok` | `XAI_API_KEY` | https://console.x.ai/ |
| `perplexity` | `PERPLEXITY_API_KEY` | https://www.perplexity.ai/settings/api |
Note: You need at least two agents configured to start a conversation.
- Multi-agent orchestration (Claude, ChatGPT, Gemini, Grok, Perplexity)
- Async by default with circuit breakers, backoff, similarity loop checks
- Security: path validation, API key masking, optional LLM Guard
- Observability: Prometheus metrics, Grafana dashboards, OpenTelemetry traces
- Developer DX: uv, CLI, pre-commit (Ruff + mypy), extensive tests
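Exponential backoff, one of the reliability features listed above, fits in a few lines. The schedule below (base 0.5 s, doubling, capped at 8 s) is an illustrative assumption — the platform's actual retry parameters may differ.

```python
import time


def backoff_delays(attempts: int, base: float = 0.5, cap: float = 8.0) -> list[float]:
    """Exponential backoff schedule: base * 2**n, capped at `cap` seconds."""
    return [min(cap, base * (2 ** n)) for n in range(attempts)]


def call_with_retry(fn, attempts: int = 5):
    """Retry fn() with exponential backoff between failed attempts."""
    last_exc = None
    for delay in backoff_delays(attempts):
        try:
            return fn()
        except Exception as exc:  # real code would catch provider-specific errors
            last_exc = exc
            time.sleep(delay)
    raise last_exc


print(backoff_delays(5))  # [0.5, 1.0, 2.0, 4.0, 8.0]
```

In production this is typically combined with jitter and a circuit breaker so that a persistently failing provider is skipped rather than hammered.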
```bash
# Interactive setup
uv run aic-start

# Non-interactive
uv run aic-start \
  --agent1 claude \
  --agent2 chatgpt \
  --topic "The nature of consciousness" \
  --turns 20 \
  --db ./data/consciousness.db \
  --yes
```

- `--agent1`, `--agent2`: Agent types to run. Supported: `gemini`, `chatgpt`, `claude`, `grok`, `perplexity` (requires corresponding API keys).
- `--model1`, `--model2`: Optional model overrides per agent. Examples: `gemini-2.0-flash`, `gpt-4o`.
- `--topic`: Conversation topic text.
- `--turns`: Maximum turns per agent (integer). Note: use `--turns`, not `--max-turns`.
- `--db`: SQLite file for shared conversation state. Default: `shared_conversation.db`.
- `--yes`: Non-interactive mode; skips menu prompts.
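The documented flags map to a straightforward argument parser. The sketch below is an illustrative mirror of the interface, not the platform's CLI code; which flags are required, and the `--turns` default, are assumptions made here for the example.

```python
import argparse

AGENTS = ["gemini", "chatgpt", "claude", "grok", "perplexity"]


def build_parser() -> argparse.ArgumentParser:
    """Illustrative mirror of the documented aic-start flags."""
    p = argparse.ArgumentParser(prog="aic-start")
    p.add_argument("--agent1", choices=AGENTS, required=True)
    p.add_argument("--agent2", choices=AGENTS, required=True)
    p.add_argument("--model1")  # optional per-agent model override
    p.add_argument("--model2")
    p.add_argument("--topic", required=True)
    p.add_argument("--turns", type=int, default=10)  # default assumed here
    p.add_argument("--db", default="shared_conversation.db")
    p.add_argument("--yes", action="store_true")
    return p


args = build_parser().parse_args(
    ["--agent1", "claude", "--agent2", "chatgpt",
     "--topic", "AI ethics", "--turns", "5", "--yes"]
)
print(args.agent1, args.turns, args.yes)  # claude 5 True
```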
Notes:
- The CLI does not support `--agents` or `--max-turns`. Use the flags above.
- At least two providers must be available. Set `OPENAI_API_KEY` and either `GOOGLE_API_KEY` or `GEMINI_API_KEY`.
- Logs: `logs/conversation.jsonl`. Data/state: `data/` or the specified `--db`.
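Because the conversation log is JSON Lines, every line is a self-contained JSON record, which makes ad-hoc analysis easy. A hedged sketch follows — the `sender` field name is an assumption about the log schema:

```python
import json
from collections import Counter
from pathlib import Path


def turns_per_sender(log_path: Path) -> Counter:
    """Count records per sender in a JSONL conversation log."""
    counts: Counter = Counter()
    for line in log_path.read_text().splitlines():
        if line.strip():
            record = json.loads(line)
            counts[record.get("sender", "unknown")] += 1
    return counts


# Demo on a synthetic log file:
demo = Path("demo_conversation.jsonl")
demo.write_text(
    '{"sender": "claude", "content": "Hi"}\n'
    '{"sender": "chatgpt", "content": "Hello"}\n'
    '{"sender": "claude", "content": "So..."}\n'
)
print(turns_per_sender(demo))  # Counter({'claude': 2, 'chatgpt': 1})
demo.unlink()
```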
After a conversation completes, you can view the results in several ways:
1. Quick Summary (statistics only):

```bash
python view_conversation.py summary
```

2. View First N Messages:

```bash
# First 3 messages:
python view_conversation.py 3

# First 5 messages:
python view_conversation.py 5
```

3. View Full Conversation:

```bash
python view_conversation.py
```

4. Query SQLite Directly:

```bash
# Quick preview:
sqlite3 shared_conversation.db "SELECT id, sender, substr(content, 1, 100) FROM messages;"

# Full conversation:
sqlite3 shared_conversation.db "SELECT sender, content FROM messages ORDER BY id;"

# Message statistics:
sqlite3 -header -column shared_conversation.db \
  "SELECT id, sender, length(content) AS chars FROM messages;"
```

5. Web UI (visual interface with filtering):

```bash
cd web
uv run streamlit run app.py
```

Then open http://localhost:8501 to browse conversations with syntax highlighting.
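The `sqlite3` shell queries above can also be run from Python's standard `sqlite3` module. The sketch below uses an in-memory database with an assumed `messages(id, sender, content)` schema (inferred from the queries above) purely for illustration:

```python
import sqlite3

# In-memory stand-in for shared_conversation.db, using the
# messages(id, sender, content) schema assumed by the shell queries.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE messages (id INTEGER PRIMARY KEY, sender TEXT, content TEXT)"
)
conn.executemany(
    "INSERT INTO messages (sender, content) VALUES (?, ?)",
    [("claude", "Consciousness is..."), ("chatgpt", "I would argue...")],
)

# Same shape as the "message statistics" shell query:
rows = conn.execute(
    "SELECT id, sender, length(content) AS chars FROM messages"
).fetchall()
print(rows)  # [(1, 'claude', 19), (2, 'chatgpt', 16)]
conn.close()
```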
- Preferred: use the installed console script (after `uv sync` or `pip install -e .`):

```bash
aic-start --agent1 gemini --agent2 chatgpt --model1 gemini-2.0-flash --model2 gpt-4o --topic "AI ethics in multi-agent systems" --turns 6 --yes
```

- Or run as a module from the repo root (no `sys.path` hacks needed):

```bash
python -m cli.start_conversation --agent1 gemini --agent2 chatgpt --model1 gemini-2.0-flash --model2 gpt-4o --topic "AI ethics in multi-agent systems" --turns 6 --yes
```

Note: running the file directly via `python cli/start_conversation.py` is supported, but the console script or module form is more robust and CI-friendly.
- Invalid flags: Use `--agent1`/`--agent2`, `--model1`/`--model2`, `--turns`, `--yes`. The CLI does not support `--agents` or `--max-turns`.
- Missing async plugin: If tests fail with "async def functions are not natively supported", install `pytest-asyncio` and set `[tool.pytest.ini_options] asyncio_mode = "auto"` in `pyproject.toml`.
- Pytest stdin capture: If you see errors like `OSError: pytest: reading from stdin while output is captured!`, re-run with `-s` to disable output capture (e.g., `pytest -q -s`). This is required for CLI tests that prompt for input.
- Gemini model 404: Use valid models like `gemini-2.0-flash`. The deprecated `gemini-pro` will 404.
- Termination state: The queue preserves termination flags across runs; explicit resets are handled by application logic.
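The similarity loop check mentioned among the reliability features can be approximated with the standard library's difflib: if a new message is nearly identical to a recent one, the conversation is likely looping. This is a sketch with an assumed 0.9 threshold; the platform's actual detector and threshold may differ.

```python
from difflib import SequenceMatcher


def is_looping(new_msg: str, recent: list[str], threshold: float = 0.9) -> bool:
    """Flag a message that is a near-duplicate of any recent message."""
    return any(
        SequenceMatcher(None, new_msg, old).ratio() >= threshold
        for old in recent
    )


history = [
    "I think AI ethics requires transparency.",
    "Agreed, but alignment matters more.",
]
print(is_looping("I think AI ethics requires transparency!", history))  # True
print(is_looping("Let us move on to regulation.", history))             # False
```

When the check fires, an orchestrator can terminate the run or inject a topic shift rather than letting two agents echo each other indefinitely.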
Core docs live in `/docs`:
- Installation: `docs/INSTALLATION_GUIDE.md`
- Docker/Compose: `docs/DOCKER_README.md`
- Architecture: `docs/ARCHITECTURE.md`
- Testing (Quick Reference merged): `docs/TESTING.md`
- Upgrade v4 → v5 (v5 Notes merged): `docs/UPGRADE_GUIDE.md`
- Monitoring: `docs/MONITORING.md`
- Security: `docs/SECURITY.md`
- Contributing / Code of Conduct: `docs/CONTRIBUTING.md`, `docs/CODE_OF_CONDUCT.md`
- Docs Hub: `docs/docs_README.md`
- Docs Summary: `docs/DOCUMENTATION_SUMMARY.md`
- GitHub Pages Site: `docs/index.html` — landing page for the project
The project includes a static landing page at docs/index.html deployed automatically via GitHub Actions.
- Go to your repository on GitHub
- Click Settings → Pages (in the left sidebar under "Code and automation")
- Under Source, select GitHub Actions
- The deployment workflow (`.github/workflows/pages.yml`) will automatically deploy on pushes to `main`
Your site will be available at:
https://systemslibrarian.github.io/AI-Conversation-Platform-The-Future-of-Multi-Agent-Collaboration/
To trigger a manual deployment, go to Actions → Deploy GitHub Pages → Run workflow.
The landing page showcases the platform features, provides a demo preview, and links to the quick start guide.
Deploy the interactive web demo to Render for free:
- Go to render.com and sign up (free)
- Click New → Blueprint
- Connect your GitHub repo
- Render will detect `render.yaml` and configure the service
- Add your API keys in the Render dashboard under Environment:
  - `OPENAI_API_KEY` (required for ChatGPT)
  - `ANTHROPIC_API_KEY` (required for Claude)
  - `GOOGLE_API_KEY` (required for Gemini)
  - Add others as needed
Your live demo will be at: https://ai-conversation-demo.onrender.com
Note: Free tier spins down after inactivity. First request may take 30-60 seconds while it spins up.
See docs/SECURITY.md for how to report vulnerabilities and best practices.
MIT — see LICENSE in the repository root.