Last weekend, OpenAI, Anthropic, Google, and Lovable teamed up to host The AI Showdown, promising $75,000 in prizes. I dove in, built an LLM chat app in the style of T3.chat, and spun up my demo at [t3clone-niko.lovable.app](https://t3clone-niko.lovable.app/). It has an end-to-end signup → chat flow that cannot be bypassed (this will be important in a moment).
Disclaimer: I knew my chances weren’t high; there are plenty of incredible engineers out there with groundbreaking apps. But not even having my submission clicked through or reviewed by the judges utterly destroys the mood and motivation.
And that’s exactly what happened: when the winners were announced, I checked my logs and found that not a single user had signed up and not a single chat request had ever hit my servers. Zero.
My Submission in a Nutshell
1. Signup-gated demo
Supabase Auth–backed signup, onboarding screens, real-time WebSocket chat, audit-grade analytics
2. $40,000 Build Challenge
Mandatory: ≥ 90% of prompts through a single model; project started after June 14, 8 AM CET
Submission window open until June 16, 9 AM CET
3. What I Saw in Logs
No `POST /auth/v1/signup` (0 signed-up users other than me)
No `POST /rest/v1/chats`
Only health checks, profile GETs, and WebSocket handshakes
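For context, here is a minimal sketch of the kind of check I ran, assuming supabase-js v2 and a server-side service-role key; the environment variable names and the `MY_EMAIL` placeholder are illustrative, not values from my actual project:

```ts
// Minimal sketch: count Supabase Auth signups other than my own account.
// Assumes supabase-js v2 and a service-role key (server-side only);
// env var names and MY_EMAIL are placeholders, not my real project values.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,              // https://<project-ref>.supabase.co
  process.env.SUPABASE_SERVICE_ROLE_KEY!, // never ship this key to the client
);

const MY_EMAIL = "me@example.com"; // placeholder for my own account

async function countOtherSignups(): Promise<number> {
  let others = 0;
  // auth.admin.listUsers is paginated, so walk every page to be sure.
  for (let page = 1; ; page++) {
    const { data, error } = await supabase.auth.admin.listUsers({ page, perPage: 1000 });
    if (error) throw error;
    others += data.users.filter((u) => u.email !== MY_EMAIL).length;
    if (data.users.length < 1000) break; // last page reached
  }
  return others;
}

countOtherSignups().then((n) => console.log(`Signed-up users other than me: ${n}`));
```

The chat side is just as easy to check: any `POST /rest/v1/chats` call would show up in the project’s API logs in the Supabase dashboard, and there were none.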
Why This Matters
Mandatory signup means every judge had to register to evaluate my project.
Zero such records confirms that the judges never clicked through.
My Call to Lovable
“If you didn’t click the signup button, you never saw my build.”
Transparency isn’t optional when you’re hosting "a world-class AI showdown". We need more than buzzwords: we need a clear window into how our work was evaluated, every step of the way (or, at the very least, register and make a couple of clicks?).
🔗 See my demo: [t3clone-niko.lovable.app](https://t3clone-niko.lovable.app/)
P.S. I added the #devchallenge tag so people can tell me why I’m wrong, and whether it’s actually normal practice in hackathons and challenges to review only the submitted code rather than the running project (which I honestly don’t believe is the case, but maybe?).