Six artificial minds. One deliberation.
No single point of failure.
There was a time when artificial intelligence answered like a bored clerk. Neutral. Flat. Without personality. Then someone wrote two words that changed everything.
“Act as” — two words that opened a door. The discovery was simple and profound: a language model doesn't just change its tone when you assign it an identity. It changes the structure of its reasoning. An “institutional trader” doesn't use the same heuristics as a “generic assistant”. It weighs risks differently. It notices different patterns. It has different biases.
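The mechanics are trivially simple. A minimal sketch of role prompting, where the role strings and the question are illustrative assumptions, not the prompts used by the system described here:

```python
# Role prompting: the same question, wrapped in two different identities.
# The role strings below are illustrative assumptions.

def build_prompt(role: str, question: str) -> str:
    """Prefix a question with an 'Act as' identity."""
    return f"Act as {role}.\n\n{question}"

question = "BTC dropped 4% overnight on high volume. UP or DOWN for the next 24h?"

generic = build_prompt("a helpful generic assistant", question)
trader = build_prompt(
    "an institutional trader who weighs drawdown risk before upside", question
)

# The wrapper is identical; only the identity line differs. In practice,
# that single line shifts which heuristics the model applies.
print(trader.splitlines()[0])
```

The point is that the cheap part, one line of text, does the expensive work of reorganizing the reasoning.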
It was the beginning of prompt engineering as a discipline. Not programming. Not configuration. Something closer to directing an actor: give it a role, and the intelligence reorganizes itself around that role.
So we built our first predictor. A single AI model — Claude Sonnet — with a 3,000-word prompt. Fifteen rules. Twelve data sources. We christened it the Trader.
And for a while, it worked. Then the data spoke: across 613 predictions, the Trader called “DOWN” 74% of the time. Like an analyst who lived through the 2008 crash and has seen bear markets everywhere since. Not because the data suggested it — because its “personality” led it there.
The problem wasn't the model. The problem was the single cognitive point of failure. A brain, however brilliant, has blind spots. It has biases it cannot see because they are part of its very perceptual structure.
No serious hedge fund lets a single trader decide. No board of directors has a single member. No court has a single judge. Why should we do it with AI?
The inspiration doesn't come from technology. It comes from history.
In the 1950s, the RAND Corporation developed the Delphi Method: gather independent experts, have them vote anonymously, share reasoning, repeat. Collective wisdom systematically outperformed the single expert. Not because the group was smarter — but because informed disagreement produces better decisions than blind consensus.
This is the principle behind the AI Council. Not an “ensemble” — that's just voting. Not a “mixture of experts” — that's routing. The Council is something different:
Every member of the Council has a precise role, a dedicated AI model and — crucially — different inputs. They don't all see the same data. Just like in a real board of directors, the CFO doesn't read the same reports as the CTO.
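A membership like the one described above could be modeled as a small data structure. The roles, model names, and data feeds below are illustrative assumptions, not the Council's actual configuration:

```python
# Sketch of the Council's structure: each councillor has a role, a
# dedicated model, and its own slice of inputs. All names here are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Councillor:
    role: str            # e.g. "Technician"
    model: str           # dedicated AI model backing this seat
    inputs: list[str]    # data sources only this member sees
    weight: float = 1.0  # influence, recalibrated from track record

council = [
    Councillor("Technician", "model-a", ["ohlcv", "indicators"]),
    Councillor("Sentiment", "model-b", ["news", "social_feeds"]),
    Councillor("Macro", "model-c", ["rates", "dxy"]),
]

# No two members read the same reports, just like a real board.
assert all(set(a.inputs).isdisjoint(b.inputs)
           for i, a in enumerate(council) for b in council[i + 1:])
```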
The weights aren't fixed. They self-calibrate based on each member's track record. In a trending market, the Technician's vote counts for more. In a fear-dominated market, the Sentiment analyst leads. The system learns who is right, and when.
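Regime-dependent weighting can be sketched as a vote combiner that consults a different weight table per detected market regime. The regimes, members, and numbers are illustrative assumptions:

```python
# weight[regime][member]: learned from each member's track record.
# All values here are illustrative assumptions.
WEIGHTS = {
    "trending": {"Technician": 0.5, "Sentiment": 0.2, "Macro": 0.3},
    "fear":     {"Technician": 0.2, "Sentiment": 0.5, "Macro": 0.3},
}

def weighted_vote(votes: dict[str, str], regime: str) -> str:
    """Combine per-member UP/DOWN votes using the regime's weight table."""
    score = 0.0
    for member, vote in votes.items():
        w = WEIGHTS[regime][member]
        score += w if vote == "UP" else -w
    return "UP" if score > 0 else "DOWN"

votes = {"Technician": "UP", "Sentiment": "DOWN", "Macro": "UP"}
print(weighted_vote(votes, "trending"))  # the Technician's call dominates
```

The same votes can yield a different verdict in a different regime, which is exactly the point: who you listen to depends on the weather.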
The Council doesn't just vote. It deliberates. The difference is enormous: voting is counting raised hands. Deliberating is listening to reasoning, weighing arguments, embracing dissent.
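Deliberation can be sketched as Delphi-style rounds: each member sees the emerging majority, may revise, and the loop stops when positions converge. The revision rule here, where a member defers only if its own confidence is low, is an illustrative assumption:

```python
# Delphi-style deliberation rounds, as opposed to one-shot voting.
# The deference rule (low-confidence members adopt the majority view)
# is an illustrative assumption.

def deliberate(positions: dict[str, tuple[str, float]], rounds: int = 3) -> str:
    """positions: member -> (vote, confidence in [0, 1])."""
    for _ in range(rounds):
        votes = [v for v, _ in positions.values()]
        majority = max(set(votes), key=votes.count)
        if all(v == majority for v in votes):
            break  # convergence: deliberation ends early
        # low-confidence members defer to the emerging majority
        positions = {
            m: (majority, c) if c < 0.5 else (v, c)
            for m, (v, c) in positions.items()
        }
    votes = [v for v, _ in positions.values()]
    return max(set(votes), key=votes.count)
```

A confident dissenter survives every round; only weakly held dissent gets absorbed. That is the difference between counting hands and weighing arguments.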
The Council is not static. It learns.
After every trade, the system records who was right and who was wrong. The weights shift. The Technician who correctly predicted 65% of trends gains influence during trending phases. The Sentiment analyst who read the panic better than the others gains weight during fear phases.
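One common way to implement this recalibration is a multiplicative update: reward members who called the outcome, penalize those who missed, then renormalize. The rule and the learning rate below are illustrative assumptions, not the system's actual algorithm:

```python
# Post-trade weight recalibration: a multiplicative-update sketch.
# The learning rate is an illustrative assumption.

def update_weights(weights: dict[str, float],
                   votes: dict[str, str],
                   outcome: str,
                   lr: float = 0.1) -> dict[str, float]:
    """Reward correct members, penalize wrong ones, renormalize."""
    updated = {
        m: w * (1 + lr if votes[m] == outcome else 1 - lr)
        for m, w in weights.items()
    }
    total = sum(updated.values())
    return {m: w / total for m, w in updated.items()}

start = {"Technician": 1/3, "Sentiment": 1/3, "Macro": 1/3}
after = update_weights(
    start, {"Technician": "UP", "Sentiment": "DOWN", "Macro": "UP"}, "UP"
)
# Technician and Macro gained influence, Sentiment lost some;
# the weights still sum to 1.
```

Run once per resolved trade, per regime, and the weight tables drift toward whoever has actually been right lately.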
But the real evolution isn't in the weights. It's in the self-improvement loop: every week the system analyzes its own mistakes, identifies recurring patterns and updates the councillors' prompts. It's not a human who decides what to change. It's the Council itself reflecting on itself.
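The shape of that loop can be sketched as: tally the week's error patterns, and when one recurs often enough, fold a corrective rule into the relevant prompt. The pattern labels, threshold, and rule text are illustrative assumptions; in the system described, the Council itself generates the update:

```python
# Weekly self-improvement loop sketch: recurring mistakes become
# prompt rules. Labels and threshold are illustrative assumptions.
from collections import Counter

def weekly_review(errors: list[str], prompt: str, threshold: int = 3) -> str:
    """errors: labels of this week's mistakes, e.g. 'overcalled_down'."""
    for pattern, count in Counter(errors).most_common():
        if count >= threshold and pattern not in prompt:
            prompt += f"\n# Rule: watch for recurring error '{pattern}'."
    return prompt

prompt = weekly_review(
    ["overcalled_down"] * 4 + ["missed_breakout"], "Base prompt."
)
# Only the pattern that recurred enough times earns a new rule.
```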
And when disagreement is too high? When the Council can't converge? The answer is the wisest of all: SKIP. Don't trade. Wait. Because the market will still be there tomorrow, but lost capital doesn't come back.
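The SKIP rule can be sketched as a disagreement check: measure the weighted share of the losing side, and stand aside when it exceeds a threshold. The threshold value below is an illustrative assumption:

```python
# SKIP on high disagreement: a sketch. The dissent threshold is an
# illustrative assumption.

def decide(votes: dict[str, str], weights: dict[str, float],
           max_dissent: float = 0.4) -> str:
    up = sum(weights[m] for m, v in votes.items() if v == "UP")
    down = sum(weights[m] for m, v in votes.items() if v == "DOWN")
    dissent = min(up, down) / (up + down)
    if dissent > max_dissent:
        return "SKIP"  # the Council cannot converge: don't trade
    return "UP" if up > down else "DOWN"

w = {"Technician": 0.45, "Sentiment": 0.3, "Macro": 0.25}
print(decide({"Technician": "UP", "Sentiment": "DOWN", "Macro": "DOWN"}, w))
# prints SKIP: a 0.45 / 0.55 split is too close to trade on
```

SKIP is not a failure mode. It is the third verdict, and often the most profitable one.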
Six minds. A war table. A collective decision better than any individual decision — not because the individuals are perfect, but because informed disagreement is the engine of wisdom.
Watch the Council in action →