A research experiment: multiple LLMs debate your task in structured rounds before implementing. Uses your existing Claude Code or Gemini CLI. Expect bugs. This is not production software.
OpenCode already had providers, agents, git, tools, and a polished TUI. Conclave adds one thing: a team debate engine. ~12 files modified. The rest is OpenCode doing what it does best.
No single model decides. Structured rounds with LEAD, SUPPORT, ALIGN, BUILD, CHALLENGE signals. Winner chosen by endorsement score.
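The signal names above come straight from the debate protocol; how they turn into a winner can be sketched roughly as follows. This is an illustrative sketch, not Conclave's actual implementation: the weights, type names, and message shape are all assumptions.

```typescript
// Hypothetical endorsement scoring. Signal names are from the docs;
// the weights and data shapes are illustrative assumptions.
type Signal = "LEAD" | "SUPPORT" | "ALIGN" | "BUILD" | "CHALLENGE";

interface RoundMessage {
  author: string; // model emitting the signal
  target: string; // model whose proposal the signal refers to
  signal: Signal;
}

// Assumed weights: endorsements raise a proposal's score, challenges lower it.
const WEIGHTS: Record<Signal, number> = {
  LEAD: 0, // a LEAD introduces a proposal; it doesn't endorse itself
  SUPPORT: 2,
  ALIGN: 1,
  BUILD: 1,
  CHALLENGE: -2,
};

function pickWinner(messages: RoundMessage[]): string {
  const scores = new Map<string, number>();
  for (const m of messages) {
    scores.set(m.target, (scores.get(m.target) ?? 0) + WEIGHTS[m.signal]);
  }
  // Highest endorsement score wins.
  return [...scores.entries()].sort((a, b) => b[1] - a[1])[0][0];
}
```

For example, a proposal that draws one SUPPORT and one BUILD (score 3) beats one that draws a CHALLENGE (score -2).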
One model catches what another misses. Architecture flaws, edge cases, and security issues caught during deliberation.
Watch each model think in real time. Click to expand reasoning blocks. Round summaries always visible.
Connect Claude Code, Gemini CLI, or Codex as team members. No API keys needed — uses your existing CLI auth. Perfect for those with subscriptions but no API access.
OpenAI, Anthropic, DeepSeek, Google, NVIDIA, Groq, Ollama. Mix API and CLI models in the same team.
Multiple named teams with different model combos. Survive restarts. Switch with a single command.
LLMs autonomously split into sub-teams for complex tasks. Frontend and backend in parallel with cross-team communication.
Each model has its own context limit. Conclave auto-truncates the debate thread per model — large-context models see everything, small-context models see signal summaries. No crashes, no compaction failures. Pair a 1M-token model (DeepSeek) with a 128K model (Gemini Flash) to balance depth against speed.
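The per-model truncation described above can be sketched as a budget walk over the thread. Everything here — the type names, the 4-chars-per-token estimate, the newest-first policy — is an illustrative assumption, not Conclave's real API.

```typescript
// Illustrative sketch of per-model thread truncation. Each debate turn
// carries both its full text and a short signal summary; a model whose
// window can't fit the full thread gets summaries for older turns.
interface Turn {
  full: string;    // complete reasoning text
  summary: string; // one-line signal summary, e.g. "SUPPORT claude: ..."
}

// Rough token estimate (assumption): ~4 characters per token.
const tokens = (s: string) => Math.ceil(s.length / 4);

function threadFor(turns: Turn[], contextLimit: number): string[] {
  let budget = contextLimit;
  const out: string[] = [];
  // Walk newest-to-oldest: recent turns keep full text,
  // older turns fall back to their signal summaries.
  for (let i = turns.length - 1; i >= 0; i--) {
    const t = turns[i];
    if (tokens(t.full) <= budget) {
      out.unshift(t.full);
      budget -= tokens(t.full);
    } else {
      out.unshift(t.summary);
      budget -= tokens(t.summary);
    }
  }
  return out;
}
```

With a 1M-token budget every turn stays verbatim; with a 128K budget the oldest turns collapse to one-line summaries while the latest exchanges stay intact.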
3 models debating for 3 rounds = 9 API calls per message. Responses take longer than with a single LLM. The tradeoff: better output quality on complex tasks. For simple questions, switch to single-model mode.
Each team member makes independent API calls. A 3-model team costs roughly 3× as much per message. Free tiers (Gemini CLI, Ollama local models) offset this. Choose your team based on task complexity, not just model quality.
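The call and cost math above is just multiplication, sketched below as a back-of-the-envelope helper. The `Member` shape and flat per-call pricing are illustrative assumptions, not real provider rates or Conclave code.

```typescript
// Every model responds in every round, so calls = models × rounds.
interface Member {
  name: string;
  costPerCall: number; // assumed flat per-call cost in USD; 0 for free tiers
}

function callsPerMessage(teamSize: number, rounds: number): number {
  return teamSize * rounds;
}

function costPerMessage(team: Member[], rounds: number): number {
  return team.reduce((sum, m) => sum + m.costPerCall * rounds, 0);
}
```

A team mixing one paid API model with two free-tier CLI models pays only the paid model's share, which is how free tiers blunt the 3× multiplier.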
Conclave is weeks old. The debate engine works, teams persist, providers connect. But expect rough edges. Context optimization, live streaming, and Breaking Teams are on the roadmap. Feedback and contributions welcome.