DevSpot's first Builder Ecosystem Report. 8,382 builders. 11 hackathons. 1,251 projects. The data behind the next generation of founders.
DevSpot hosted its first hackathon on June 5, 2025. What you're reading is a record of everything we've learned since: the builders who showed up, the projects they shipped, and the patterns hiding inside 1,251 submissions.
We publish this report every six months. H1 covers June 2025 through April 2026. H2 will follow. The dataset gets richer with every event we run, every builder who joins, every project that gets scored.
We're publishing it because the ecosystem deserves transparency on what's actually happening at the builder layer — and because the signals in this data are ones that capital allocators, ecosystem partners, and builders themselves need to see.
This is not a retrospective. It's a living intelligence product. The builders on DevSpot are the earliest signal of where technology is going — before the startups form, before the funding rounds, before the press releases. We're publishing what we see.
8,382 builders have registered on DevSpot since June 2025. 5,657 have competed in at least one hackathon across 10 months and 11 events. Organisers have deployed over $370,000 in prize capital, and builders have submitted 1,251 projects.
| Country | Avg Score |
|---|---|
| 🇮🇩 Indonesia | 6.94 |
| 🇳🇬 Nigeria | 6.09 |
| 🇮🇳 India | 5.95 |
| 🇦🇷 Argentina | 5.86 |
| 🇺🇸 United States | 5.77 |
| 🇰🇪 Kenya | 5.09 |
| Group | Projects | Avg Score | Avg Technical | Avg UX | Avg Commits | Avg Dev Days |
|---|---|---|---|---|---|---|
| AI coding tool detected | 108 | 5.98 | 6.27 | 5.88 | 62.1 | 33.3 |
| No AI coding tool | 418 | 5.85 | 6.21 | 5.62 | 35.3 | 21.0 |
| Tool | Projects | Avg Score | Avg Commits |
|---|---|---|---|
| 🟣 Claude Code | 84 | 5.94 | 63.0 |
| ⬛ Cursor | 28 | 6.13 | 68.3 |
| ⚡ Bolt | 3 | 5.72 | 66.3 |
| 💚 Lovable | 1 | 6.51 | 101.0 |
| 🌊 Windsurf | 1 | 4.29 | — |
DevSpot ran 11 hackathons across virtual and in-person formats, spanning multiple ecosystems and geographies. Total prize capital deployed: approximately $370,000. What the numbers show is a platform that's learning fast — submission rates have improved steadily as the builder experience has matured.
Submission rate is the most honest metric a hackathon platform has. It measures intent converting to output. DevSpot went from 9.3% on its first event to 59.7% at its best (CodeNYC), with recent events consistently in the 34–46% range.
| Event | Format | Date | Reg. | Subs. | Sub Rate | Avg Score | Prize Pool | Cost / Builder | Cost / Project |
|---|---|---|---|---|---|---|---|---|---|
| PL Genesis: Frontiers | Virtual | Nov 2025 | 2,477 | 572 | 23.1% | 5.85 | $155,500 | $63 | $272 |
| PL Genesis: Modular Worlds | Virtual | Jun 2025 | 1,876 | 175 | 9.3% | 7.80 | $255,000 | $136 | $1,457 |
| NEARCON Innovation Sandbox | Virtual | Jan 2026 | 485 | 85 | 17.5% | 6.01 | $22,500 | $46 | $265 |
| LNMHacks 8.0 🏢 | In-Person | Dec 2025 | 334 | 135 | 40.4% | — | — | — | — |
| Hacklanta 🏢 | In-Person | Mar 2026 | 204 | 74 | 36.3% | 6.36 | $5,000+ | $25+ | $68+ |
| RealFi Hack | Virtual | Sep 2025 | 237 | 64 | 27.0% | 5.97 | — | — | — |
| Intelligence at the Frontier 🏢 | In-Person | Feb 2026 | 148 | 53 | 35.8% | 4.97 | $26,750 | $181 | $505 |
| Physical AI Hacks 🏢 | In-Person | Jan 2026 | 81 | 37 | 45.7% | 5.85 | $12,000 | $148 | $324 |
| CodeNYC 🏢 | In-Person | Aug 2025 | 77 | 46 | 59.7% | 6.03 | — | — | — |
| Hackathon in Paradise 🏢 | In-Person | Feb 2026 | 29 | 10 | 34.5% | — | $20,000 | $690 | $2,000 |
| Code & Capital 🏢 | In-Person | Mar 2026 | 31 | 0 | Ongoing | — | — | — | — |
🏢 = In-person event. Cost per builder/project calculated from published prize pool where available.
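The per-event economics in the table are simple arithmetic. Here is a minimal sketch that reproduces them from two rows, assuming nothing beyond the published figures; the field names are illustrative, not DevSpot's schema.

```python
# Per-event metrics from the table above. Field names are illustrative.
events = [
    {"name": "PL Genesis: Frontiers", "registrations": 2477, "submissions": 572, "prize_pool": 155_500},
    {"name": "CodeNYC", "registrations": 77, "submissions": 46, "prize_pool": None},
]

for e in events:
    sub_rate = e["submissions"] / e["registrations"]  # intent converting to output
    print(f"{e['name']}: submission rate {sub_rate:.1%}")
    if e["prize_pool"]:
        print(f"  ${e['prize_pool'] / e['registrations']:.0f} / builder,"
              f" ${e['prize_pool'] / e['submissions']:.0f} / project")
```

Running this reproduces the Frontiers row exactly: 23.1% submission rate, $63 per builder, $272 per project.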
Every submitted project on DevSpot receives a JudgeBot score across four dimensions: technical execution, innovation, business viability, and UX. That means full scoring coverage of every submission at every JudgeBot-enabled event — something no human judging operation can match at this volume.
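As a concrete picture of what a JudgeBot record carries downstream, here is a minimal sketch. The four dimension names come from the report; the equal-weight composite is an illustrative assumption, not DevSpot's published formula.

```python
from dataclasses import dataclass

@dataclass
class JudgeBotScore:
    technical: float   # technical execution, 0-10
    innovation: float  # 0-10
    business: float    # business viability, 0-10
    ux: float          # 0-10

    @property
    def overall(self) -> float:
        # Illustrative equal weighting -- the actual aggregation
        # formula is not published in this report.
        return (self.technical + self.innovation + self.business + self.ux) / 4

print(f"{JudgeBotScore(6.27, 5.5, 5.9, 5.88).overall:.2f}")  # 5.89, same 0-10 scale
```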
Hacklanta — with a modest prize pool of $5,000+ — produced the highest average quality score (6.36) of any event in the cross-event comparison. Intelligence at the Frontier — explicitly focused on AI innovation — produced the lowest innovation scores. Prize size doesn't predict quality. Builder intent does. And the AI builders doing genuinely novel work aren't showing up at AI-themed events — they're at general-track hackathons, building without a label.
The conventional wisdom in venture capital is that you need a founding team. Two or three people, complementary skills, shared equity. DevSpot's data challenges that assumption at its earliest stage.
The productivity advantage of adding a co-founder at the earliest stage is marginal when measured by output quality. 2-person teams score 6.03. Solo builders score 5.90. The difference is 0.13 points — a rounding error in the context of a 10-point scale. More striking: 3-person teams score 4.92 and 4+ person teams score 5.28 — both below solo. Coordination cost actively hurts output quality at this stage.
What's driving this? The AI coding tools era has fundamentally changed the productivity calculus of solo building. A single builder with Claude Code or Cursor can now produce output that previously required a team. The data suggests we may be entering a period where the first-principles case for "you need a co-founder" no longer holds — at least not at the prototype-to-submission stage. The question is no longer *can one person build this?* It's *can one person maintain and scale this?* — and that's a different conversation for a later stage.
↗ Read more: The One-Person Unicorn Isn't a Fantasy — Sabeen Ali on Substack
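The team-size cut behind these numbers is a one-line groupby. A sketch with toy rows; the column names (`team_size`, `judgebot_score`) are illustrative, not the production schema.

```python
import pandas as pd

# Toy rows standing in for the real projects table.
projects = pd.DataFrame({
    "team_size":      [1,   2,   3,   4,   1,   5],
    "judgebot_score": [5.9, 6.1, 4.8, 5.3, 5.9, 5.2],
})

bucket = projects["team_size"].clip(upper=4)  # fold 4+ teams into one bucket
by_size = projects.groupby(bucket)["judgebot_score"].agg(["mean", "count"])
print(by_size)
# On the real data this cut yields: solo 5.90, 2-person 6.03,
# 3-person 4.92, 4+ 5.28.
```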
What builders are actually shipping — sourced from 1,107 GitHub repositories submitted across 10 hackathons. This is ground truth: not what builders say they use, but what the repos contain. The gap between declared skills and actual stack choices is one of the more interesting findings in the dataset.
TypeScript is the dominant language at 58.5% — despite JavaScript being the most commonly declared skill on builder profiles. Builders say JavaScript; they ship TypeScript. The AI tooling layer is arriving fast: 13.9% of analysed repos contain detectable AI coding tool signatures — Claude Code, Cursor, Bolt. 11.5% use AI libraries in production.
Builder profiles show JavaScript as the #1 declared skill. The repos tell a different story: 58.5% TypeScript, 14.5% JavaScript, 10.5% Python. Builders are shipping in TypeScript at nearly 4× the rate they claim it as their primary language. Declared skills are a lagging indicator. GitHub repos are the leading one.
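One way to take the repo-side measurement is the GitHub languages endpoint, which reports bytes of code per language. A minimal sketch; DevSpot's actual pipeline may differ.

```python
import requests

def primary_language(owner: str, repo: str, token: str | None = None) -> str | None:
    """Primary language by bytes of code, via the GitHub languages API."""
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    r = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/languages",
        headers=headers,
        timeout=10,
    )
    if r.status_code != 200:
        return None  # deleted or private repo, as with the misses noted in Methodology
    langs = r.json()  # e.g. {"TypeScript": 104213, "CSS": 5121}
    return max(langs, key=langs.get) if langs else None
```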
For the first time, DevSpot project quality is benchmarked against two external standards: ETH Global hackathon prize winners (apples-to-apples — same format, same time pressure, same builder profile) and early-stage YC-backed repositories (the aspirational gold standard — what investors consider fundable at the earliest stage). Same signals. Same methodology. First-of-its-kind comparison.
Claiming builder quality is easy. Proving it against an external reference is harder. The ETH Global comparison shows where DevSpot builders stand among their peers — and the result is flattering. The YC comparison shows the gap — and exactly where it is. The answer is not where most people assume.
| Signal | ETH Global Winners | DevSpot (all events) | Gap |
|---|---|---|---|
| Primary language | TypeScript 75% | TypeScript 58.5% | Aligned |
| Has CI/CD | 0% | 13.4% | DevSpot leads |
| Has test directory | 0% | 13.5% | DevSpot leads |
| Has Dockerfile | 12% | 7.4% | ETH Global leads |
| AI tool detected | 12% (OpenAI SDK only) | 13.9% | Aligned |
| Avg commits (full history) | 23 | 35–62 (Frontiers) | DevSpot leads |
| Signal | YC Early-Stage | DevSpot (all events) | Gap |
|---|---|---|---|
| Primary language | TypeScript 55% · Python 41% | TypeScript 58.5% | Aligned |
| Has CI/CD | 100% | 13.4% | YC leads significantly |
| Has test directory | 36% | 13.5% | YC leads |
| Has Dockerfile | 18% | 7.4% | YC leads |
| Pinned dependency versions | 64% | ~15% (est.) | YC leads |
| AI tool detected | 55% | 13.9% | YC leads |
| Avg commits / first 30 days | 63 | 35–62 (Frontiers) | Closing fast |
This is not a summary. These are specific, data-backed positions on what the DevSpot dataset means for capital allocation — and what it should change about how you source, evaluate, and fund early-stage builders.
Nigerian builders on DevSpot score 6.09 on average. Indian builders score 5.95. American builders score 5.77. The quality gap between Global South and Western builders that VCs have historically assumed exists — isn't there.
The difference in output quality between a solo builder and a 2-person team is 0.13 points on a 10-point scale. The "you need a co-founder" heuristic may be a relic of the pre-AI-tools era.
8.2% of submitted projects score 8+ out of 10 on JudgeBot. Across Frontiers alone, that's 47 projects meeting an investor-grade quality bar by our scoring methodology. The DevSpot investor dashboard will surface this cohort systematically.
Submission rate improved from 9.3% on DevSpot's first event to 59.7% on CodeNYC, with recent events consistently in the 34–46% range. Builder satisfaction averages 4.4/5 across all events. This is a maturing platform, not an experiment.
64.7% of analysed Frontiers projects are still actively committing to their GitHub repos 30+ days after the April 1st submission deadline. These aren’t weekend experiments — they’re ongoing products. The builders who score highest and keep building longest represent a cohort that looks structurally similar to pre-seed founders: shipping fast, iterating publicly, and not waiting for permission to build.
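The activity signal behind that 64.7% is cheap to reproduce: one GitHub API call per repo, asking whether any commit lands at least 30 days past the deadline. A sketch, assuming default-branch commits are what count:

```python
import requests

SINCE = "2026-05-01T00:00:00Z"  # 30 days past the April 1st deadline

def still_building(owner: str, repo: str, token: str | None = None) -> bool:
    """True if the repo has at least one commit after SINCE.
    A sketch of the activity signal, not DevSpot's exact check."""
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    r = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/commits",
        params={"since": SINCE, "per_page": 1},
        headers=headers,
        timeout=10,
    )
    return r.status_code == 200 and len(r.json()) > 0
```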
DevSpot is building the full infrastructure for the builder entrepreneurial journey — from first hackathon to funded company. Here's where we're going.
Any builder, any project — not just hackathon participants. If you're building something interesting, submit it to DevSpot. This expands the dataset from 1,251 submitted projects to potentially tens of thousands, and gives builders a permanent home for their work.
A curated, scored, searchable interface for capital allocators. Powered by JudgeBot scoring and GitHub quality signals from this report. Filter by technology, geography, score tier, and team composition. The 8.2% that matter — surfaced automatically.
Infrastructure supporting builders along the entrepreneurial journey beyond the hackathon. Tools, credits, and resources matched to where each builder is in their journey.
The next step after the hackathon. DevSpot moves from discovery to development — supporting the builders who show the most promise from submission data all the way to their first raise.
The H2 report will include completed ETH Global and YC benchmark comparisons, the first builder survey data on intent, protocol choice, and what builders plan to do next, post-hackathon project survival rates at the 6-month mark, and expanded GitHub analysis as the platform grows. It will be the most comprehensive dataset on early-stage builder output published anywhere.
We believe transparency about methodology is as important as the findings themselves. Here is exactly how this report was constructed, what limitations exist, and what we're fixing for H2.
All data sourced from DevSpot's production Supabase database. Builder count, geography, skills, and hackathon metrics are drawn directly from the users, hackathon_participants, participant_profile, projects, and judging_bot_scores tables.
11 real hackathons included. Internal test events (Testing Judging Hackathon, hacklanta test, Micathon test, test) excluded. ACE(M) Hack excluded — 346 registrations but 0 project submissions in the database, consistent with an in-person event where demos occurred offline. SensAI Hack SF excluded — 5 participants, insufficient for analysis.
JudgeBot is DevSpot's AI scoring system, trained to evaluate projects across four dimensions: technical execution, innovation, business viability, and UX. Scores are on a 0–10 scale. JudgeBot was not enabled for LNMHacks 8.0 — that event is included in all non-scoring analysis. JudgeBot scores for PL Genesis Modular Worlds include 7 outlier entries scored on a 0–100 scale from an early version of the system; these are excluded from all cross-event comparisons.
The judging_entries table stores both bot-generated and human judge scores, linked via judging_bot_scores_id. Genuine human override scores are identified as entries where judging_entries.score ≠ judging_bot_scores.score. NEARCON and Hacklanta are excluded from delta analysis — 100% exact score matches confirm no independent human scores were entered at these events; organisers used JudgeBot as the primary judging mechanism. A schema fix (is_human_override boolean flag) is being implemented to make this distinction explicit for H2.
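In dataframe terms, the override detection described above reduces to a join and an inequality filter. A sketch with toy rows, reusing the column names from the schema; the data is illustrative.

```python
import pandas as pd

entries = pd.DataFrame({
    "judging_bot_scores_id": [101, 102, 103],
    "score": [6.4, 5.85, 7.1],   # judging_entries.score
})
bot = pd.DataFrame({
    "id": [101, 102, 103],
    "score": [6.4, 5.85, 6.2],   # judging_bot_scores.score
})

merged = entries.merge(bot, left_on="judging_bot_scores_id", right_on="id",
                       suffixes=("_entry", "_bot"))
overrides = merged[merged["score_entry"] != merged["score_bot"]]
print(overrides)  # only entry 103 is a genuine human override
```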
All technology claims in this report are sourced from participant_profile.skills — builder-declared skills at profile creation. Project technology tags (projects.technologies) contain a data quality issue where structured objects were serialised as the string "[object Object]" (142 occurrences). Project tags are not used for any claim in this report. This bug is being fixed; H2 will use clean project tag data.
1,251 submitted projects across 10 hackathons were analysed via the GitHub API. 1,107 returned successful results (88.5%). 138 repos were not found (likely deleted or made private post-event); 2 had no URL. AI tool detection uses repository config file signatures (e.g. .cursor, CLAUDE.md, .bolt). AI library detection uses package.json and requirements.txt dependency parsing. All signals applied using identical tooling across all repos. The ETH Global and YC benchmark extractions summarised in the comparison tables are preliminary; completed versions will ship with the H2 report.
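For reference, the config-signature check can be as simple as probing the GitHub contents API for each marker path (HTTP 200 means the path exists). The marker set below covers only the three signatures named above; the production detector may check more.

```python
import requests

# Signature paths named in the methodology above.
AI_TOOL_MARKERS = {"Claude Code": "CLAUDE.md", "Cursor": ".cursor", "Bolt": ".bolt"}

def detect_ai_tools(owner: str, repo: str, token: str | None = None) -> list[str]:
    """Return the AI coding tools whose config signatures exist in the repo."""
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    found = []
    for tool, path in AI_TOOL_MARKERS.items():
        r = requests.get(
            f"https://api.github.com/repos/{owner}/{repo}/contents/{path}",
            headers=headers,
            timeout=10,
        )
        if r.status_code == 200:  # path exists (file or directory)
            found.append(tool)
    return found
```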
8 prize-winning projects from 5 ETH Global events in 2024 (Bangkok, San Francisco, Singapore, Brussels, London). Only 1st place and finalist-level winners included. Full repository history analysed — no date window restriction, as these repos were created for the hackathon. 3 of 11 target repos were not found (deleted or made private post-event).
Projects included: LemonPay (Brussels · The Graph 1st place) · SuperTweets (SF · SKALE 1st place) · Battle of Nouns (Singapore · Mina Protocol 1st place) · F1 Bets (Singapore · Fhenix 1st place) · ProtoStar (Bangkok · Mina Protocol 1st place) · BenderBite (SF · Circle 1st place) · dnsRegistry-AVS (Brussels · ZK Email 1st place) · Zubernetes / ZK8S (Bangkok · ETHGlobal Finalist + Phala)
Not found (404): PrismX (Bangkok) · not so secret agent (London) · GuardiansOfThePaymas (London)
22 open-source YC-backed companies from S20 through W24 batches — specifically repos made public at or near founding with intact early commit history. All signals measured within the first 30 days of repo.created_at. Repos were verified for history integrity before inclusion; candidates with squashed history or multi-year gaps were dropped (Infisical, Crowd.dev, Rallly). Composio excluded — history squashed on open-sourcing. Documenso included with a methodology note — code predates public repo by 4 months. Known selection bias: open-source-first YC companies skew toward higher baseline code quality. This benchmark represents an aspirational comparison, not a median.
Companies included: Supabase (S20) · Roboflow (S20) · Stytch (W21) · Goldfinch (W21) · Encord (W21) · Tinybird (S21) · Modal (S21) · Hyperlane (S22) · Cal.com (W22) · LlamaIndex (W22) · Resend (W23) · Fern (W23) · Hegel AI (W23) · Outerbase (W23) · Papermark (W23) · Trigger.dev (W23) · E2B (S23) · Formbricks (S23) · Letta / MemGPT (S23) · Nillion (W23) · OpenStatus (W24) · Documenso (W24)
This report covers the period June 5, 2025 (DevSpot's first hackathon) through April 2026. PL Genesis: Frontiers of Collaboration ran November 2025 through April 1, 2026 and is included in full. DevSpot publishes H1 and H2 reports on a semi-annual cadence.
The H2 report includes full GitHub benchmarks, YC comparisons, ETH Global analysis, and the first builder survey data. We also maintain a curated high-potential startup list for investors — sourced directly from DevSpot project scores and GitHub signals.