LOOTR is a startup intelligence platform that helps you discover emerging opportunities — and then pressure-test whether they're worth building.
Discovery finds the idea. The Tribunal tells you whether it's worth building.
Whether you found the idea on LOOTR or brought your own, this is how LOOTR evaluates it.
Your idea doesn't get a score.
It gets a trial.
LOOTR puts every startup idea through a structured tribunal — debated by 5 AI analysts, judged by a final verdict engine, and audited for quality. Not a quiz. Not a chatbot opinion. A repeatable intelligence process.
Most idea validators give you a score.
We give you a verdict.
Most idea tools ask a few questions, apply a formula, and return a number. No debate. No challenge. No opposing view.
Real startup decisions are not made by formulas alone. They are shaped by conflicting incentives, operational trade-offs, and hard questions: Is the market real? Is the wedge defensible? Is this a company or just a feature? Can this actually be sold?
LOOTR recreates that room. Every idea is analyzed from 5 competing perspectives, scored against structured rubrics, and audited for consistency. The result is not a guess — it is a deliberated verdict backed by evidence, debate, and self-correction.
Evidence. Debate. Verdict. Audit.
Six phases. Each one adds a layer of scrutiny that conventional tools skip entirely.
Evidence Gathering
Market size, failure patterns, and success precedents are collected first. The process begins with external evidence, not opinion.
Rubric Builder
Timing, feasibility, competition, and willingness-to-pay are scored independently. These become the structured baseline for the debate.
Analyst Debate
Five perspectives examine the same idea from different angles. The Prosecutor attacks weak assumptions. The Defender argues the strongest case. The VC Partner evaluates economic upside. The Operator stress-tests execution. The Devil's Advocate looks for blind spots and second-order risks.
Cross-Examination & Refinement
In Full Tribunal, analysts respond to each other across multiple rounds. Weak arguments get exposed. Strong arguments get refined. Positions evolve.
Supreme Verdict
An independent judge weighs the evidence, the rubric, and the debate. The system returns a score, verdict, confidence level, and revenue outlook.
Independent Audit
A separate audit layer checks the verdict for overclaims, inconsistency, and weak grounding. If needed, the judge revises the conclusion.
The system doesn't just evaluate your idea. It checks its own work.
Five perspectives. One idea. No consensus required.
Prosecutor
Finds every reason this idea should fail. Attacks weak assumptions, questions market fit, and exposes what the founder might be ignoring. If there's a crack in the foundation, the Prosecutor will find it.
Defender
Builds the strongest good-faith case for the idea. Finds the market angle, the timing advantage, the practical wedge. Even weak ideas have something worth examining — the Defender finds it.
VC Partner
Evaluates the idea as an investment. Market size, revenue ceiling, unit economics, return potential. Asks the question a seed investor would: "Would I write a check for this?"
Operator
Stress-tests execution. Hiring, ops, churn, infrastructure, go-to-market. Doesn't care if the idea sounds good on paper — asks whether it can actually be built and run as a business.
Devil's Advocate
Looks for what everyone else missed. Second-order risks, regulatory blind spots, timing traps, hidden dependencies. The contrarian voice that prevents groupthink.
Quick Scan filters. Full Tribunal decides.
| Feature | Quick Scan | Full Tribunal |
|---|---|---|
| Time | ~3 minutes | ~6 minutes |
| Cost | Free | 2 credits |
| Analysts | 3 perspectives | 5 perspectives |
| Debate rounds | 1 round | 3 rounds |
| Cross-examination | No | Yes |
| Audit | Lite check | Full independent audit |
| Judge revision | No | Yes |
Quick Scan
A fast directional signal. Use it to filter ideas before investing time. Three perspectives, one round, lite audit. Enough to know if an idea deserves deeper attention.
Full Tribunal
The deep evaluation. Cross-examination forces analysts to defend their positions. The audit catches errors. The judge can revise. Use it when you're serious about an idea and need a decision-grade assessment.
Upgrade anytime. Start with Quick Scan, then upgrade to Full Tribunal for the same idea — no re-submission needed.
Structured rubrics, not vibes.
Every idea is scored across 6 dimensions before the debate even begins. These scores feed directly into the analyst debate and the final verdict.
Timing
Is the market ready now? Too early is as dangerous as too late. Scored against macro trends, technology readiness, and buyer behavior signals.
Feasibility
Can this actually be built? Technical complexity, infrastructure requirements, regulatory burden. Higher score means lower implementation risk.
Competition
How defensible is this? Number of competitors, moat trajectory, switching costs, network effects. A crowded market with no moat scores low.
Willingness to Pay
Will someone pay for this? Based on comparable pricing in adjacent markets, buyer segment analysis, and pricing confidence. Wide ranges indicate uncertainty.
Market Size
How big is the opportunity? TAM estimated from industry data. Larger markets score higher but only matter if the idea can capture a meaningful slice.
Virality
Can this grow without paid acquisition? Network effects, referral mechanics, social sharing, community loops.
These rubric scores feed directly into the analyst debate and the final verdict. They are the structured backbone — not the whole answer, but the evidence foundation.
We don't score you. We score the idea — and show you how to win.
Execution Lens is not a grade on your skills. It's a coaching layer that maps your advantages, highlights your gaps, and gives you concrete next moves — for this specific idea. No numeric score. No pass/fail. Just honest guidance.
What it looks at
Problem Insight
Do you understand the problem this idea solves? There are three equally valid ways to get here: living it as a domain insider, bringing transferable insight from an adjacent field, or doing serious customer discovery as an outsider. This is about depth of understanding, not industry pedigree.
Go-to-Market Fit
Can you reach the right people and convince them to pay? This is not just "do you know people" — it's about channel understanding, buyer conversation ability, and launch path clarity. Building the product is one challenge. Getting it to paying hands is another.
Execution Leverage
Can you actually ship this — through any means? AI-assisted, hands-on, with a team, or outsourced. What matters is not how you build, but whether you finish. Track record of shipping anything — side projects, MVPs, open source — counts. Using AI tools is great, but the signal is finishing, not tooling. In the AI era, build barriers are lower than ever. We evaluate capacity, not credentials.
What you get
Instead of a score, Execution Lens gives you three things:
Your Advantages
What's working in your favor for this specific idea.
Your Gaps
Where extra effort or external help would reduce your risk. Framed as risks to manage, never as personal judgments.
Suggested Next Moves
Concrete, actionable steps — not generic advice. "Run 10 customer interviews with dental office managers" — not "do more research."
⚠️ Friction Flags
Some ideas carry structural friction that's not about you — it's about the idea's operating reality. Regulated industries, hardware dependencies, long enterprise sales cycles, capital intensity. These show up as informational flags, not penalties.
Why no score?
A great idea can come from someone outside the domain. An insider can fail at execution. Reducing this to a number would oversimplify — and risk burying good ideas behind a low founder score.
So we score the idea (that's the Tribunal's job) and coach the founder (that's the Execution Lens). Two separate systems. On purpose.
Important: Execution Lens does NOT influence the Tribunal verdict. Your idea gets the same score regardless of your background. A 9/10 idea stays 9/10 whether you're a domain expert or a complete outsider.
One radar chart. Six dimensions. Your idea's fingerprint.
Idea DNA distills your idea into 6 measurable dimensions. The DNA Score tells you the shape of your idea — where it's strong, where it's exposed, and what to fix first.
Idea DNA Dimensions
The DNA Score is a weighted composite. It does not replace the verdict — it helps you see the shape of the opportunity.
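To illustrate what a weighted composite over six dimensions looks like, here is a minimal sketch. The weights and scores below are hypothetical placeholders, not LOOTR's actual values.

```python
# Illustrative sketch of a DNA-style weighted composite.
# These weights are assumptions for the example, not LOOTR's real calibration.
WEIGHTS = {
    "timing": 0.20,
    "feasibility": 0.15,
    "competition": 0.20,
    "willingness_to_pay": 0.20,
    "market_size": 0.15,
    "virality": 0.10,
}

def dna_score(scores: dict) -> float:
    """Combine per-dimension scores (0-10) into one weighted composite."""
    assert set(scores) == set(WEIGHTS), "all six dimensions are required"
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

example = {
    "timing": 8, "feasibility": 5, "competition": 4,
    "willingness_to_pay": 7, "market_size": 6, "virality": 3,
}
print(round(dna_score(example), 2))  # strong timing can't fully offset weak moat
```

The point of the composite is exactly what the radar chart shows: two ideas with the same overall number can have very different shapes.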
Confidence Level
Not every verdict carries the same certainty. LOOTR's confidence reflects how much the analyst panel agreed — not how sure the system feels.
High Confidence
Analysts converged. Low spread. Strong agreement. The verdict is reliable.
Medium Confidence
Some disagreement. The verdict is directionally useful but carries uncertainty.
Low Confidence
Analysts diverged significantly. Wide spread, low agreement. Treat the verdict as one data point, not a final answer.
Confidence is calibrated to panel consensus. If the analysts disagree, the system tells you honestly — it doesn't pretend to be certain when it isn't.
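One simple way to read "calibrated to panel consensus": derive the confidence label from the spread of the analysts' scores. The standard-deviation thresholds below are illustrative assumptions, not LOOTR's actual calibration.

```python
import statistics

def panel_confidence(analyst_scores: list) -> str:
    """Map analyst score spread to a confidence label.

    Thresholds are illustrative assumptions for this sketch,
    not LOOTR's actual calibration.
    """
    spread = statistics.stdev(analyst_scores)
    if spread < 1.0:
        return "high"    # analysts converged
    if spread < 2.5:
        return "medium"  # some disagreement
    return "low"         # analysts diverged significantly

print(panel_confidence([7, 7.5, 8, 7, 7.5]))  # tight spread
print(panel_confidence([3, 9, 5, 8, 2]))      # wide spread
```

Under a scheme like this, the system cannot report high confidence when the panel disagrees: low convergence mechanically produces low confidence.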
See what a Tribunal verdict actually looks like.
A B2B SaaS tool that tracks API changes and deprecation notices for small software teams.
Analyst Panel
Prosecutor: “Existing players can add this as a feature.”
Defender: “Timing is perfect, PLG model works.”
VC Partner: “Interesting but unit economics unclear.”
Operator: “Technical parsing burden is unsustainable.”
Recommendation
Not just a number, but a clear map of what's strong (timing, market need), what's weak (competition, feasibility), and what to do next.
Four different analysts looked at the same idea and reached different conclusions. The Tribunal synthesized those perspectives into a verdict that acknowledges disagreement rather than hiding it. That's what makes it useful.
Built to argue with itself.
Multi-agent debate
5 analysts with different incentives evaluate every idea. The Prosecutor and Defender are structurally opposed. Agreement is earned, not assumed.
Structured rubrics
Scoring starts with established frameworks, not gut feelings. The rubric is built before the debate begins and serves as the source of truth.
Independent audit
After the judge delivers a verdict, a separate audit system checks for overclaims, inconsistencies, and evidence gaps. If problems are found, the judge revises.
Calibrated confidence
The system doesn't pretend to be certain when analysts disagree. Low convergence produces low confidence. This is by design.
Self-correction
LOOTR is designed to check and revise its own output when the audit finds inconsistencies. The score can change. The verdict can change.
Evidence-first
Market data, failure patterns, and success precedents are gathered before any analyst speaks. Claims are grounded in evidence, not startup conventional wisdom.
What LOOTR is not
- Not a quiz that spits out a number
- Not a chatbot giving you an opinion
- Not a formula you can game by choosing the right answers
- Not a cheerleader that tells everyone their idea is great
Straight answers.
Your idea is waiting.
Put your idea on trial in under 3 minutes.
Or go deeper: upgrade any Quick Scan to a Full Tribunal for cross-examination, audit, and revision.
The best time to stress-test your idea is before you spend months building it.