Research, evaluate, and generate implementation-ready slot game specifications. Fine-tuned domain model, multi-agent debate simulation, autonomous refinement loop.
Every game with RTP, volatility, mechanic taxonomy, AI visual analysis, and quality score. 8 independent data sources cross-referenced.
Browse →

5-100 AI personas argue over your mechanic idea. Each persona searches the knowledge graph for evidence before taking a position. Scores converge through structured rounds.
Evaluate →

An autonomous agent researches, screens 8 candidates, deep-evaluates the winner, refines against its own critique, and writes a 15K-word implementation document. No human in the loop.
Run pipeline →

Eight data sources feed a unified pipeline. Every game gets visual analysis from a vision model, a quality score from an LLM judge, and entity extraction into a temporal knowledge graph.
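The per-game enrichment pass can be sketched as below. This is a hypothetical illustration, not the real schema: `GameRecord`, its fields, and the stubbed vision/judge/extraction calls are all assumptions standing in for model inference.

```python
from dataclasses import dataclass, field

@dataclass
class GameRecord:
    name: str
    sources: list            # cross-referenced source identifiers
    visual_notes: str = ""
    quality_score: float = 0.0
    entities: list = field(default_factory=list)

def enrich(game: GameRecord) -> GameRecord:
    # 1. Vision-model pass (stubbed): describe art style, symbols, UI.
    game.visual_notes = f"visual analysis of {game.name}"
    # 2. LLM judge (stubbed): score rises with source coverage, capped at 10.
    game.quality_score = min(10.0, 5.0 + 0.5 * len(game.sources))
    # 3. Entity extraction into the temporal knowledge graph (stubbed).
    game.entities = [(game.name, "HAS_SOURCE", s) for s in game.sources]
    return game

record = enrich(GameRecord("Demo Slot", ["review-site", "regulator-db"]))
print(record.quality_score)  # 6.0
```

In the real pipeline each stubbed step would be a model call; the point is that every game flows through all three stages before it enters the graph.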
LoRA adapter trained on 480K records across 7 categories. Unified profiles combine specs, math, reviews, and regulatory data per game — teaching cross-domain reasoning in a single inference.
| Component | Configuration |
|---|---|
| Base model | Qwen3.5-9B — BF16, no quantization |
| LoRA config | rank 16, alpha 32, q_proj + v_proj |
| Training | 2e-4 LR, cosine schedule, 32K batch |
| Serving | vLLM + dynamic LoRA hot-swap |
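The table above maps to a PEFT-style adapter config roughly like the following. This is a sketch under assumptions: dropout, the token-based batch interpretation of "32K batch", and field names are illustrative, not the exact training script.

```python
# Hypothetical config mirroring the table; values come from the table,
# everything else (dropout, field names) is assumed.
lora_config = {
    "r": 16,                                 # LoRA rank
    "lora_alpha": 32,                        # scaling = alpha / r
    "target_modules": ["q_proj", "v_proj"],  # attention query/value projections
    "lora_dropout": 0.0,                     # assumption, not in the table
}
training = {
    "learning_rate": 2e-4,
    "lr_scheduler_type": "cosine",
    "batch_tokens": 32_000,                  # reading "32K batch" as tokens
}
# Effective scaling applied to the adapter's delta-W at inference:
scaling = lora_config["lora_alpha"] / lora_config["r"]
print(scaling)  # 2.0
```

Keeping alpha at 2× rank is a common default; serving via vLLM with dynamic LoRA hot-swap means the base weights load once and adapters swap per request.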
No single model handles everything. Each tier does what it's best at — web search, fast debate, domain recall, deep synthesis.
Fine-tuned Qwen3.5-9B + LoRA on L40S. Slot-specific facts: mechanics, math models, market data, regulatory constraints.
Zep Cloud GraphRAG. 2,000 entity nodes, 5,000 edges. Per-persona search grounds each debate agent in relevant evidence.
2-50 rounds, 5-100 personas. Graph-grounded context per agent. ReACT tool use. Score tracking across rounds.
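The round structure can be sketched as a convergence loop. This is a toy model: real personas argue from graph-grounded evidence via ReACT tool use, whereas here each persona simply moves partway toward the group mean until scores stabilize.

```python
import statistics

def debate(initial_scores, max_rounds=50, tol=0.1):
    """Run scoring rounds until persona scores converge (toy sketch)."""
    scores = list(initial_scores)
    for round_no in range(1, max_rounds + 1):
        mean = statistics.mean(scores)
        # Each persona shifts halfway toward consensus after hearing arguments.
        scores = [s + 0.5 * (mean - s) for s in scores]
        if statistics.pstdev(scores) < tol:   # converged
            return round_no, statistics.mean(scores)
    return max_rounds, statistics.mean(scores)

rounds, consensus = debate([3.0, 7.0, 8.0, 6.0, 9.0])
print(rounds, round(consensus, 2))
```

The spread halves each round, so even a sharply divided panel converges in a handful of rounds; the 2-50 round cap bounds the worst case.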
Claude Code runs headless. Reads its own reports, identifies weaknesses, rewrites the mechanic, re-evaluates — looping until convergence.
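The critique-and-refine loop reads as follows in outline. A hedged sketch only: `critique()` and `rewrite()` are deterministic stand-ins for the headless Claude Code calls, and the word-count scoring is purely illustrative.

```python
def critique(doc: str) -> float:
    # Stub for the self-review step: here, more detail scores higher, capped at 10.
    return min(10.0, len(doc.split()) / 2)

def rewrite(doc: str) -> str:
    # Stub for the rewrite step: each pass adds detail.
    return doc + " refined"

def refine(doc: str, max_iters: int = 10) -> tuple[str, float]:
    """Loop: score the draft, rewrite, keep the rewrite only if it scores higher."""
    score = critique(doc)
    for _ in range(max_iters):
        candidate = rewrite(doc)
        new_score = critique(candidate)
        if new_score <= score:   # convergence: no further improvement
            break
        doc, score = candidate, new_score
    return doc, score

doc, score = refine("mechanic spec draft")
```

The convergence test is the key design choice: the loop stops when a rewrite no longer beats its predecessor, not after a fixed pass count.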
The pipeline argues with itself. It researches, generates candidates, scores them through debate, picks the winner, finds what's wrong with it, fixes it, and writes a document that a coding agent can implement directly.
Core algorithm in pseudocode. Symbol table with frequency weights. RTP breakdown per feature. Win distribution. Animation timing. Regulatory compliance matrix. Risk mitigations.
Every claim traces to the knowledge graph, live web search, or multi-agent consensus. Cites games by name with exact RTP values — not "industry trends suggest."