DevSpot · H1 2026 · Builder Ecosystem Report

77% of projects
built by solo builders.

DevSpot's first Builder Ecosystem Report. 8,382 builders. 11 hackathons. 1,251 projects. The data behind the next generation of founders.

8,382
Registered Builders
1,251
Projects Submitted
11
Hackathons
$370K
Prize Capital
A Note From DevSpot

Chapter one.

DevSpot hosted its first hackathon on June 5, 2025. What you're reading is a record of everything we've learned since — from the builders who showed up, the projects they shipped, and the patterns hiding inside 1,251 submissions.

We publish this report every six months. H1 covers June 2025 through April 2026. H2 will follow. The dataset gets richer with every event we run, every builder who joins, every project that gets scored.

We're publishing it because the ecosystem deserves transparency on what's actually happening at the builder layer — and because the signals in this data are ones that capital allocators, ecosystem partners, and builders themselves need to see.

What this means

This is not a retrospective. It's a living intelligence product. The builders on DevSpot are the earliest signal of where technology is going — before the startups form, before the funding rounds, before the press releases. We're publishing what we see.

Section 01

The builder population.

8,382 builders have registered on DevSpot since June 2025. 5,657 have competed in at least one of 11 hackathons across 10 months, with organisers deploying over $370,000 in prize capital and builders submitting 1,251 projects.

8,382
Registered Builders
Since June 2025
5,657
Active Participants
Joined ≥1 hackathon
9.9%
Repeat Builders
561 did 2+ events
Builder Registration Growth
Monthly new registrations since platform launch
Geographic Distribution
Top countries by hackathon participation
| Country | Builders | Avg Score |
|---|---|---|
| 🇮🇩 Indonesia | 51 | 6.94 |
| 🇳🇬 Nigeria | 670 | 6.09 |
| 🇮🇳 India | 2,109 | 5.95 |
| 🇦🇷 Argentina | 99 | 5.86 |
| 🇺🇸 United States | 599 | 5.77 |
| 🇰🇪 Kenya | 84 | 5.09 |
AI Coding Tool Adoption — PL Genesis Frontiers
Detected from repository configuration files across 537 analysed GitHub repos
20.3%
Used an AI
coding tool
109 of 537 repos
84
Claude Code
projects
77% of AI tool users
28
Cursor
projects
26% of AI tool users
5
Other tools
(Bolt, Windsurf, Lovable)
Bolt 3 · Windsurf 1 · Lovable 1
AI Tool Users vs Non-Users — Project Quality & Velocity
| Group | Projects | Avg Score | Avg Technical | Avg UX | Avg Commits | Avg Dev Days |
|---|---|---|---|---|---|---|
| AI coding tool detected | 108 | 5.98 | 6.27 | 5.88 | 62.1 | 33.3 |
| No AI coding tool | 418 | 5.85 | 6.21 | 5.62 | 35.3 | 21.0 |
Score by Specific AI Tool
| Tool | Projects | Avg Score | Avg Commits |
|---|---|---|---|
| 🟣 Claude Code | 84 | 5.94 | 63.0 |
| ⬛ Cursor | 28 | 6.13 | 68.3 |
| ⚡ Bolt | 3 | 5.72 | 66.3 |
| 💚 Lovable | 1 | 6.51 | 101.0 |
| 🌊 Windsurf | 1 | 4.29 | — |
The real story isn't the score delta — it's the velocity delta. AI tool users score 0.13 points higher on average (5.98 vs 5.85). But they commit 76% more (62 vs 35 commits) and sustain development for 59% longer (33 vs 21 days). AI coding tools aren't producing dramatically better-scored projects. They're enabling builders to keep building far longer than they otherwise would.
AI library adoption (detected from dependency files): OpenAI SDK in 14 repos · Anthropic SDK in 11 repos · LangChain in 5 repos. Of the 46 projects using AI libraries, average JudgeBot score is 6.1 — above the Frontiers mean of 5.85.
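
For readers who want to reproduce the comparison above from an export of the scoring data, a minimal sketch of the grouping logic follows. This is illustrative only: the field names (ai_tool_detected, judgebot_score, commit_count, dev_days) are hypothetical, not DevSpot's actual schema.

```python
from statistics import mean

def compare_ai_tool_groups(projects):
    """Split analysed projects by AI-tool detection and average the
    quality and velocity signals for each group.

    `projects` is a list of dicts; the field names are hypothetical.
    """
    groups = {"ai_tool_detected": [], "no_ai_tool": []}
    for p in projects:
        key = "ai_tool_detected" if p["ai_tool_detected"] else "no_ai_tool"
        groups[key].append(p)

    summary = {}
    for name, rows in groups.items():
        summary[name] = {
            "projects": len(rows),
            "avg_score": round(mean(r["judgebot_score"] for r in rows), 2),
            "avg_commits": round(mean(r["commit_count"] for r in rows), 1),
            "avg_dev_days": round(mean(r["dev_days"] for r in rows), 1),
        }
    return summary

# The velocity delta quoted above (62.1 vs 35.3 commits, 33.3 vs 21.0
# dev days) is the difference between the two groups' avg_commits and
# avg_dev_days values.
```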
Section 02

11 hackathons.
10 months.

DevSpot ran 11 hackathons across virtual and in-person formats, spanning multiple ecosystems and geographies. Total prize capital deployed: approximately $370,000. What the numbers show is a platform that's learning fast — submission rates have improved steadily as the builder experience has matured.

What this means

Submission rate is the most honest metric a hackathon platform has. It measures intent converting to output. DevSpot went from 9.3% on its first event to 36–59% on recent events.

Submission Rate by Event
% of registered participants who submitted a project · 🏢 = in-person event
In-person events consistently outperform virtual on submission rate. CodeNYC (59.7%), Physical AI Hacks (45.7%), and LNMHacks (40.4%) are all in-person events. The accountability of a physical room converts intent to output at a measurably higher rate.
Event Overview
All 11 hackathons — June 2025 through April 2026
| Event | Format | Date | Reg. | Subs. | Sub Rate | Avg Score | Prize Pool | Cost / Builder | Cost / Project |
|---|---|---|---|---|---|---|---|---|---|
| PL Genesis: Frontiers | Virtual | Nov 2025 | 2,477 | 572 | 23.1% | 5.85 | $155,500 | $63 | $272 |
| PL Genesis: Modular Worlds | Virtual | Jun 2025 | 1,876 | 175 | 9.3% | 7.80 | $255,000 | $136 | $1,457 |
| NEARCON Innovation Sandbox | Virtual | Jan 2026 | 485 | 85 | 17.5% | 6.01 | $22,500 | $46 | $265 |
| LNMHacks 8.0 🏢 | In-Person | Dec 2025 | 334 | 135 | 40.4% | — | — | — | — |
| Hacklanta 🏢 | In-Person | Mar 2026 | 204 | 74 | 36.3% | 6.36 | $5,000+ | $25+ | $68+ |
| RealFi Hack | Virtual | Sep 2025 | 237 | 64 | 27.0% | 5.97 | — | — | — |
| Intelligence at the Frontier 🏢 | In-Person | Feb 2026 | 148 | 53 | 35.8% | 4.97 | $26,750 | $181 | $505 |
| Physical AI Hacks 🏢 | In-Person | Jan 2026 | 81 | 37 | 45.7% | 5.85 | $12,000 | $148 | $324 |
| CodeNYC 🏢 | In-Person | Aug 2025 | 77 | 46 | 59.7% | 6.03 | — | — | — |
| Hackathon in Paradise 🏢 | In-Person | Feb 2026 | 29 | 10 | 34.5% | — | $20,000 | $690 | $2,000 |
| Code & Capital 🏢 | In-Person | Mar 2026 | 310 | — | Ongoing | — | — | — | — |

🏢 = In-person event. Cost per builder/project calculated from published prize pool where available.

Platform Feedback
What builders say about the events and the platform — collected from every project submission
How we collect feedback: Every builder who submits a project is prompted to complete a short feedback form — it is not a voluntary post-event survey. We ask 4 questions on a 1–5 scale: overall hackathon experience, whether they would recommend the hackathon to a friend, overall DevSpot platform experience, and whether they would recommend DevSpot to a friend. 1,253 responses collected across 10 events. Scores shown are mean averages — individual responses use whole numbers (1–5), so averages produce decimals.
Platform Average — out of 5
Overall hackathon experience
avg across 1,253 responses
4.4
out of 5
Would you recommend this hackathon?
avg across 1,253 responses
4.4
out of 5
Overall DevSpot platform experience
avg across 1,253 responses
4.2
out of 5
Would you recommend DevSpot?
avg across 1,253 responses
4.3
out of 5
Challenge Feedback Highlights
🏆 Highest Rated Challenge — Platform Wide
ElevenLabs Voice Challenge
Intelligence at the Frontier · 11 responses
Overall 4.7 / 5 Docs 4.7 / 5 Support 4.7 / 5 Rec. 4.7 / 5
⭐ Highest Recommendation — Large Track (Frontiers)
Filecoin
PL Genesis: Frontiers · 155 responses
Overall 4.5 / 5 Docs 4.5 / 5 Support 4.5 / 5 Recommendation 4.7 / 5
Filecoin's recommendation score (4.7) is the highest of any large track at Frontiers, sitting above its own overall experience score (4.5). Builders who worked with Filecoin came away more likely to recommend it than their overall experience rating alone would predict. A strong signal for the sponsor.
Challenge feedback note: 4,134 responses across 114 challenges. Full sponsor-level breakdowns — including docs vs support ratings and builder comments — are available to sponsors on request.
77%
Solo Projects
977 of 1,266 submitted
76%
Fresh Code
Built from scratch
37.5%
Started, Not Submitted
759 projects in progress
Section 03

Project quality
at scale.

Every submitted project on DevSpot receives a JudgeBot score across four dimensions: technical execution, innovation, business viability, and UX. This gives DevSpot 100% scoring coverage — something no human judging operation can match at this volume.
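
As a rough illustration of what a four-dimension score record looks like, here is a minimal sketch. It assumes an unweighted mean across the four dimensions; the report does not publish JudgeBot's actual aggregation, so treat the composite as an assumption.

```python
from dataclasses import dataclass

@dataclass
class JudgeBotScore:
    """One JudgeBot evaluation; each dimension is on a 0-10 scale.

    The composite assumes an unweighted mean of the four dimensions.
    DevSpot does not publish the actual weighting.
    """
    technical_execution: float
    innovation: float
    business_viability: float
    ux: float

    @property
    def overall(self) -> float:
        dims = (self.technical_execution, self.innovation,
                self.business_viability, self.ux)
        return round(sum(dims) / len(dims), 2)

# A project clearing 8.0 overall would land in the 8.2% investor-grade
# cohort referenced in Section 07.
print(JudgeBotScore(8.5, 8.0, 7.5, 8.2).overall)  # 8.05
```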

What this means

Hacklanta — with no published prize pool — produced the highest average quality scores of any event. Intelligence at the Frontier — explicitly focused on AI innovation — produced the lowest innovation scores. Prize size doesn't predict quality. Builder intent does. And the AI builders doing genuinely novel work aren't showing up at AI-themed events — they're at general-track hackathons, building without a label.

Score Distribution — All Events
All submitted projects with JudgeBot scoring enabled
Average Score by Dimension
Across all events with JudgeBot enabled
JudgeBot vs Human Judge Delta
When human judges scored independently — how far did they differ from JudgeBot?
−1.63
Average delta
(human minus bot)
24.4%
Within 1 point
of JudgeBot
119
Genuine human
override scores
When human judges overrode JudgeBot on PL Genesis Frontiers, they scored projects 1.63 points lower on average, meaning JudgeBot is consistently more generous than human reviewers. Projects scoring 8+ on JudgeBot therefore clear a conservative bar relative to the field: human judges would likely rank them even further ahead of it. NEARCON and Hacklanta are excluded; 100% exact score matches show organisers there used JudgeBot as the primary judging mechanism, with no independent human scores.
Score by Geography — Platform Wide
All events · Countries with ≥10 submitted projects
Score by Geography — PL Genesis Frontiers
Frontiers only · Countries with ≥10 submitted projects
The geographic quality pattern holds consistently across both views. Indonesian builders lead at 6.94, followed by Nigeria and India — both ahead of the United States. This is not a Frontiers anomaly. It is a platform-wide signal.
Section 04

The solo
builder era.

The conventional wisdom in venture capital is that you need a founding team. Two or three people, complementary skills, shared equity. DevSpot's data challenges that assumption at its earliest stage.

77%
of DevSpot projects are built by solo builders.
One builder. One weekend. Production-ready output. The AI coding tools era has arrived — and the data proves it.
What this means

The productivity advantage of adding a co-founder at the earliest stage is marginal when measured by output quality. 2-person teams score 6.03. Solo builders score 5.90. The difference is 0.13 points — a rounding error in the context of a 10-point scale. More striking: 3-person teams score 4.92 and 4+ person teams score 5.28 — both below solo. Coordination cost actively hurts output quality at this stage.

What's driving this? The AI coding tools era has fundamentally changed the productivity calculus of solo building. A single builder with Claude Code or Cursor can now produce output that previously required a team. The data suggests we may be entering a period where the first-principles case for "you need a co-founder" no longer holds — at least not at the prototype-to-submission stage. The question is no longer can one person build this? It's can one person maintain and scale this? — and that's a different conversation for a later stage.

↗ Read more: The One-Person Unicorn Isn't a Fantasy — Sabeen Ali on Substack

Score by Team Size
Average JudgeBot score — PL Genesis Frontiers
Project Distribution by Team Size
All submitted projects across all events

0.13

Point difference between solo builders and 2-person teams. Statistically negligible.

−0.98

Point penalty for 3-person teams vs solo. Coordination overhead is measurable.

1,349

Builders declaring AI/ML experience. The tooling enabling solo productivity is already here.
Section 05

Technology signals.

What builders are actually shipping — sourced from 1,107 GitHub repositories submitted across 10 hackathons. This is ground truth: not what builders say they use, but what the repos contain. The gap between declared skills and actual stack choices is one of the more interesting findings in the dataset.

What this means

TypeScript is the dominant language at 58.5% — despite JavaScript being the most commonly declared skill on builder profiles. Builders say JavaScript; they ship TypeScript. The AI tooling layer is arriving fast: 13.9% of analysed repos contain detectable AI coding tool signatures — Claude Code, Cursor, Bolt. 11.5% use AI libraries in production.

Primary Language Distribution
1,107 successfully analysed GitHub repos · platform-wide
AI Tool & Library Adoption
Detected from repo config files and dependency manifests
🟣 Claude Code
121 repos
⬛ Cursor
40 repos
⚡ Bolt / Lovable / Windsurf
6 repos
OpenAI SDK 53 repos
Anthropic SDK 28 repos
LangChain 19 repos
HuggingFace 7 repos
13.9%
AI Tool Adoption
154 of 1,107 repos
11.5%
AI Library Usage
127 repos use AI SDKs
13.5%
Have Tests
149 repos · has_test_dir
13.4%
Have CI/CD
148 repos · .github/workflows
Declared vs actual

Builder profiles show JavaScript as the #1 declared skill. The repos tell a different story: 58.5% TypeScript, 14.5% JavaScript, 10.5% Python. Builders ship TypeScript at four times the rate they ship JavaScript, even as JavaScript tops the declared-skill rankings. Declared skills are a lagging indicator. GitHub repos are the leading one.
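
A tally like the one above can be reproduced against any repo list with GitHub's public languages endpoint, which returns bytes of code per language. A minimal sketch (the repo list and token are placeholders):

```python
from collections import Counter
import requests

def primary_language(owner: str, repo: str, token: str) -> str | None:
    """Return the language with the most bytes in a repo, via GitHub's
    /languages endpoint. Returns None for deleted/private repos."""
    r = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/languages",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    if r.status_code != 200:
        return None
    langs = r.json()  # e.g. {"TypeScript": 120031, "CSS": 4021}
    return max(langs, key=langs.get) if langs else None

def language_distribution(repos: list[tuple[str, str]], token: str) -> dict:
    """Percentage share of primary languages across a repo list."""
    counts = Counter(
        lang for owner, name in repos
        if (lang := primary_language(owner, name, token)) is not None
    )
    total = sum(counts.values())
    return {lang: round(100 * n / total, 1) for lang, n in counts.items()}

# Usage: language_distribution([("owner", "repo"), ...], token="ghp_...")
```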

Section 06

The quality
benchmark.

For the first time, DevSpot project quality is benchmarked against two external standards: ETH Global hackathon prize winners (apples-to-apples — same format, same time pressure, same builder profile) and early-stage YC-backed repositories (the aspirational gold standard — what investors consider fundable at the earliest stage). Same signals. Same methodology. First-of-its-kind comparison.

Why this matters

Claiming builder quality is easy. Proving it against an external reference is harder. The ETH Global comparison shows where DevSpot builders stand among their peers — and the result is flattering. The YC comparison shows the gap — and exactly where it is. The answer is not where most people assume.

Peer Benchmark: DevSpot vs ETH Global Winners
Apples-to-apples · Hackathon format · Same time constraints · 8 top-tier prize winners across 5 events (2024)
We analysed 8 prize-winning projects from 5 ETH Global events held in 2024 — Bangkok, San Francisco, Singapore, Brussels, and London. Only 1st place and finalist-level winners were included across sponsor tracks spanning privacy, cross-chain infrastructure, AI, and account abstraction. Full project list in the methodology section.
| Signal | ETH Global Winners | DevSpot (all events) | Gap |
|---|---|---|---|
| Primary language | TypeScript 75% | TypeScript 58.5% | Aligned |
| Has CI/CD | 0% | 13.4% | DevSpot leads |
| Has test directory | 0% | 13.5% | DevSpot leads |
| Has Dockerfile | 12% | 7.4% | ETH Global leads |
| AI tool detected | 12% (OpenAI SDK only) | 13.9% | Aligned |
| Avg commits (full history) | 23 | 35–62 (Frontiers) | DevSpot leads |
The takeaway: DevSpot builders match ETH Global winners on language choice and AI tool adoption, and lead on CI/CD and test coverage — signals of more sustainable development practice. ETH Global prize winners ship fast in a 36-hour sprint; DevSpot builders are building across weeks with more structural rigour. This is a peer comparison where DevSpot holds its own.
Aspirational Benchmark: DevSpot vs YC Early-Stage Repos
Gold standard · 22 open-source YC companies · First 30 days of commit history only · S20–W24 batches
We selected 22 open-source YC-backed companies from S20 through W24 batches, specifically repos that were made public at or near the founding moment with intact early commit history. Domains covered span developer tooling, AI infrastructure, data platforms, and web3 — matching the breadth of DevSpot's builder population. All signals are measured within the first 30 days of each repo's creation date, making the comparison age-equivalent to a hackathon submission window. Full company list in the methodology section.
| Signal | YC Early-Stage | DevSpot (all events) | Gap |
|---|---|---|---|
| Primary language | TypeScript 55% · Python 41% | TypeScript 58.5% | Aligned |
| Has CI/CD | 100% | 13.4% | YC leads significantly |
| Has test directory | 36% | 13.5% | YC leads |
| Has Dockerfile | 18% | 7.4% | YC leads |
| Pinned dependency versions | 64% | ~15% (est.) | YC leads |
| AI tool detected | 55% | 13.9% | YC leads |
| Avg commits / first 30 days | 63 | 35–62 (Frontiers) | Closing fast |
The takeaway: The biggest gap vs YC is CI/CD discipline — 100% of YC early-stage repos have CI from day one; DevSpot is at 13.4%. This is a maturity signal, not a quality signal. On language choice (TypeScript dominant in both), commit velocity (DevSpot Frontiers AI-tool users average 62 commits — matching the YC median of 63), and AI tool adoption trajectory, DevSpot builders are on the same curve. The CI gap will close as DevSpot builders move from hackathon prototypes to sustained products.
What a winning startup looks like — early
Triangulating DevSpot top projects against YC early-stage signals to identify the characteristics that predict fundable outcomes
The data from both benchmarks triangulates to a consistent early-stage profile. YC-backed companies that went on to raise had CI from day one, shipped in TypeScript or Python, committed consistently across their first 30 days, and — increasingly — used AI coding tools. DevSpot projects that score 8.0+ on JudgeBot, are still active 90+ days post-hackathon, and have CI configured look structurally similar to the earliest public commits of Trigger.dev, Papermark, and OpenStatus. These aren't just good hackathon projects. They look like the beginning of something.
The DevSpot/YC overlap profile
✓ JudgeBot score 8.0+
✓ Still committing 90+ days post-hackathon
✓ Has CI/CD configured
✓ TypeScript or Python primary language
✓ AI coding tool detected in repo
✓ Solo founder or 2-person team
✓ README with demo link
Comparable YC companies at same stage
Trigger.dev (W23) — 101 commits in 30 days, TypeScript, CI from day 1, Claude Code + Cursor detected
Papermark (W23) — 80 commits, TypeScript, solo founder, Cursor detected
OpenStatus (W24) — 64 commits, TypeScript, CI from day 1, Claude Code detected
The shortlist of DevSpot projects matching this profile is available to report readers on request.
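
Expressed as code, the overlap profile above reduces to a handful of boolean checks. A sketch, with hypothetical field names standing in for DevSpot's actual schema:

```python
from datetime import date, timedelta

def matches_overlap_profile(project: dict, deadline: date) -> bool:
    """The DevSpot/YC overlap profile as a filter predicate.
    Field names are hypothetical, not DevSpot's actual schema."""
    return (
        project["judgebot_score"] >= 8.0
        and project["last_commit_date"] >= deadline + timedelta(days=90)
        and project["has_ci"]
        and project["primary_language"] in {"TypeScript", "Python"}
        and project["ai_tool_detected"]
        and project["team_size"] <= 2
        and project["readme_has_demo_link"]
    )
```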
Section 07 · For Investors

Five signals
worth your attention.

This is not a summary. These are specific, data-backed positions on what the DevSpot dataset means for capital allocation — and what it should change about how you source, evaluate, and fund early-stage builders.

Signal 01
6.09
The geographic arbitrage opportunity is real.

Nigerian builders on DevSpot score 6.09 on average. Indian builders score 5.95. American builders score 5.77. The quality gap between Global South and Western builders that VCs have historically assumed exists — isn't there.

→ If you're sourcing exclusively from Western markets, you're paying a geography premium with no quality justification.
Signal 02
0.13
The solo founder era is here. The data supports it.

The difference in output quality between a solo builder and a 2-person team is 0.13 points on a 10-point scale. The "you need a co-founder" heuristic may be a relic of the pre-AI-tools era.

→ Stop filtering solo founders at the top of the funnel. You're eliminating 77% of early-stage builder output on a false assumption.
Signal 03
8.2%
The investment shortlist is hiding in the submission data.

8.2% of submitted projects score 8+ out of 10 on JudgeBot. Across Frontiers alone, that's 47 projects meeting an investor-grade quality bar by our scoring methodology. The DevSpot investor dashboard will surface this cohort systematically.

→ The pipeline exists. The infrastructure to navigate it is being built. Raw lists available on request.
Signal 04
9.3% → 59.7%
Platform maturation is measurable and on a strong trajectory.

Submission rate improved from 9.3% on DevSpot's first event to 59.7% on CodeNYC and 36–45% consistently on recent events. Builder satisfaction averages 4.4/5 across all events. This is a maturing platform, not an experiment.

→ DevSpot is becoming reliable infrastructure for deal flow. The operational track record supports it.
Signal 05
64.7%
The majority of builders don’t stop when the hackathon ends.

64.7% of analysed Frontiers projects are still actively committing to their GitHub repos 30+ days after the April 1st submission deadline. These aren’t weekend experiments — they’re ongoing products. The builders who score highest and keep building longest represent a cohort that looks structurally similar to pre-seed founders: shipping fast, iterating publicly, and not waiting for permission to build.

→ Post-hackathon survival rate is the signal VCs should be tracking. A builder still committing 90 days after a hackathon deadline is more fundable than one with a great demo and a dormant repo. DevSpot will surface this cohort systematically in the investor dashboard.
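
Survival is cheap to measure: one call to GitHub's commits endpoint with a since parameter answers whether a repo has any commit N days after the deadline. A minimal sketch (repo names and token are placeholders):

```python
from datetime import datetime, timedelta, timezone
import requests

def still_building(owner: str, repo: str, deadline: datetime,
                   days: int, token: str) -> bool:
    """True if the repo has at least one commit `days` or more after
    `deadline`, using the `since` parameter of GitHub's commits API."""
    since = (deadline + timedelta(days=days)).isoformat()
    r = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/commits",
        params={"since": since, "per_page": 1},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    return r.status_code == 200 and len(r.json()) > 0

# e.g. the 90-day survival check for a Frontiers project:
# still_building("owner", "repo",
#                datetime(2026, 4, 1, tzinfo=timezone.utc), 90, token)
```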
Section 08

What's next.

DevSpot is building the full infrastructure for the builder entrepreneurial journey — from first hackathon to funded company. Here's where we're going.

Open Project Submissions

Any builder, any project — not just hackathon participants. If you're building something interesting, submit it to DevSpot. This expands the dataset from 1,251 submitted projects to potentially tens of thousands, and gives builders a permanent home for their work.

Investor Dashboard

A curated, scored, searchable interface for capital allocators. Powered by JudgeBot scoring and GitHub quality signals from this report. Filter by technology, geography, score tier, and team composition. The 8.2% that matter — surfaced automatically.

Builder Perks

Infrastructure supporting builders along the entrepreneurial journey beyond the hackathon. Tools, credits, and resources matched to where each builder is in their journey.

Accelerator

The next step after the hackathon. DevSpot moves from discovery to development — supporting the builders who show the most promise from submission data all the way to their first raise.

H2 2026 Report Preview

The H2 report will include updated ETH Global and YC benchmark comparisons; the first builder survey data on intent, protocol choice, and what builders plan to do next; post-hackathon project survival rates at the 6-month mark; and expanded GitHub analysis as the platform grows. It will be the most comprehensive dataset on early-stage builder output published anywhere.

Methodology & Data Notes

How we built this.

We believe transparency about methodology is as important as the findings themselves. Here is exactly how this report was constructed, what limitations exist, and what we're fixing for H2.

Data source

All data sourced from DevSpot's production Supabase database. Builder count, geography, skills, and hackathon metrics are drawn directly from the users, hackathon_participants, participant_profile, projects, and judging_bot_scores tables.

Hackathon inclusion

11 real hackathons included. Internal test events (Testing Judging Hackathon, hacklanta test, Micathon test, test) excluded. ACE(M) Hack excluded — 346 registrations but 0 project submissions in the database, consistent with an in-person event where demos occurred offline. SensAI Hack SF excluded — 5 participants, insufficient for analysis.

JudgeBot scoring

JudgeBot is DevSpot's AI scoring system, trained to evaluate projects across four dimensions: technical execution, innovation, business viability, and UX. Scores are on a 0–10 scale. JudgeBot was not enabled for LNMHacks 8.0 — that event is included in all non-scoring analysis. JudgeBot scores for PL Genesis Modular Worlds include 7 outlier entries scored on a 0–100 scale from an early version of the system; these are excluded from all cross-event comparisons.

JudgeBot vs human delta

The judging_entries table stores both bot-generated and human judge scores, linked via judging_bot_scores_id. Genuine human override scores are identified as entries where judging_entries.score ≠ judging_bot_scores.score. NEARCON and Hacklanta are excluded from delta analysis — 100% exact score matches confirm no independent human scores were entered at these events; organisers used JudgeBot as the primary judging mechanism. A schema fix (is_human_override boolean flag) is being implemented to make this distinction explicit for H2.
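
In code, the override filter is a join on judging_bot_scores_id plus an inequality check. A minimal sketch over exported rows, using the column names described above:

```python
def human_overrides(judging_entries, bot_scores):
    """Return genuine human override entries and the average delta
    (human minus bot). Entries whose score exactly matches the linked
    JudgeBot score are treated as bot-primary judging, which is why
    NEARCON and Hacklanta (100% exact matches) contribute no overrides."""
    bot_by_id = {b["id"]: b["score"] for b in bot_scores}
    overrides = [
        e for e in judging_entries
        if e["judging_bot_scores_id"] in bot_by_id
        and e["score"] != bot_by_id[e["judging_bot_scores_id"]]
    ]
    deltas = [e["score"] - bot_by_id[e["judging_bot_scores_id"]]
              for e in overrides]
    avg_delta = sum(deltas) / len(deltas) if deltas else 0.0
    return overrides, avg_delta  # reported above as 119 entries, -1.63
```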

Technology claims

All technology claims in this report are sourced from participant_profile.skills — builder-declared skills at profile creation. Project technology tags (projects.technologies) contain a data quality issue where structured objects were serialised as the string "[object Object]" (142 occurrences). Project tags are not used for any claim in this report. This bug is being fixed; H2 will use clean project tag data.

GitHub analysis

1,251 submitted projects across 10 hackathons were analysed via the GitHub API. 1,107 returned successful results (88.5%). 138 repos were not found (likely deleted or made private post-event); 2 had no URL. AI tool detection uses repository config file signatures (e.g. .cursor, CLAUDE.md, .bolt). AI library detection uses package.json and requirements.txt dependency parsing. All signals were applied with identical tooling across all repos, including the ETH Global and YC benchmark extractions described below.
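
The detection approach is reproducible with the GitHub contents API: probe for the config signatures named above, then parse dependency manifests. A simplified sketch (it checks package.json only; requirements.txt parsing is omitted, and any signature beyond the three files listed above is an assumption):

```python
import base64
import json
import requests

TOOL_SIGNATURES = {        # config signatures named in this report
    "claude_code": "CLAUDE.md",
    "cursor": ".cursor",
    "bolt": ".bolt",
}
AI_LIBRARIES = ("openai", "anthropic", "langchain")

def _path_exists(owner: str, repo: str, path: str, token: str) -> bool:
    r = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/contents/{path}",
        headers={"Authorization": f"Bearer {token}"}, timeout=30)
    return r.status_code == 200

def detect_ai_signals(owner: str, repo: str, token: str) -> dict:
    """Detect AI coding tools (config signatures) and AI libraries
    (package.json dependencies) in a single repo."""
    tools = [name for name, path in TOOL_SIGNATURES.items()
             if _path_exists(owner, repo, path, token)]
    libraries = []
    r = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/contents/package.json",
        headers={"Authorization": f"Bearer {token}"}, timeout=30)
    if r.status_code == 200:
        pkg = json.loads(base64.b64decode(r.json()["content"]))
        deps = {**pkg.get("dependencies", {}), **pkg.get("devDependencies", {})}
        libraries = [lib for lib in AI_LIBRARIES
                     if any(lib in dep for dep in deps)]
    return {"tools": tools, "libraries": libraries}
```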

ETH Global benchmark

8 prize-winning projects from 5 ETH Global events in 2024 (Bangkok, San Francisco, Singapore, Brussels, London). Only 1st place and finalist-level winners included. Full repository history analysed — no date window restriction, as these repos were created for the hackathon. 3 of 11 target repos were not found (deleted or made private post-event).

Projects included: LemonPay (Brussels · The Graph 1st place) · SuperTweets (SF · SKALE 1st place) · Battle of Nouns (Singapore · Mina Protocol 1st place) · F1 Bets (Singapore · Fhenix 1st place) · ProtoStar (Bangkok · Mina Protocol 1st place) · BenderBite (SF · Circle 1st place) · dnsRegistry-AVS (Brussels · ZK Email 1st place) · Zubernetes / ZK8S (Bangkok · ETHGlobal Finalist + Phala)

Not found (404): PrismX (Bangkok) · not so secret agent (London) · GuardiansOfThePaymas (London)

YC benchmark

22 open-source YC-backed companies from S20 through W24 batches — specifically repos made public at or near founding with intact early commit history. All signals measured within the first 30 days of repo.created_at. Repos were verified for history integrity before inclusion; candidates with squashed history or multi-year gaps were dropped (Infisical, Crowd.dev, Rallly). Composio excluded — history squashed on open-sourcing. Documenso included with a methodology note — code predates public repo by 4 months. Known selection bias: open-source-first YC companies skew toward higher baseline code quality. This benchmark represents an aspirational comparison, not a median.

Companies included: Supabase (S20) · Roboflow (S20) · Stytch (W21) · Goldfinch (W21) · Encord (W21) · Tinybird (S21) · Modal (S21) · Hyperlane (S22) · Cal.com (W22) · LlamaIndex (W22) · Resend (W23) · Fern (W23) · Hegel AI (W23) · Outerbase (W23) · Papermark (W23) · Trigger.dev (W23) · E2B (S23) · Formbricks (S23) · Letta / MemGPT (S23) · Nillion (W23) · OpenStatus (W24) · Documenso (W24)
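
Applying the 30-day window is mechanical: read the repo's created_at, then count commits between creation and creation plus 30 days. A minimal sketch using GitHub's repository and commits endpoints:

```python
from datetime import datetime, timedelta
import requests

def commits_in_first_30_days(owner: str, repo: str, token: str) -> int:
    """Count commits landed in a repo's first 30 days, making the
    measurement age-equivalent to a hackathon submission window."""
    headers = {"Authorization": f"Bearer {token}"}
    meta = requests.get(f"https://api.github.com/repos/{owner}/{repo}",
                        headers=headers, timeout=30).json()
    created = datetime.fromisoformat(meta["created_at"].replace("Z", "+00:00"))
    window_end = created + timedelta(days=30)
    count, page = 0, 1
    while True:
        r = requests.get(
            f"https://api.github.com/repos/{owner}/{repo}/commits",
            params={"since": created.isoformat(),
                    "until": window_end.isoformat(),
                    "per_page": 100, "page": page},
            headers=headers, timeout=30)
        batch = r.json() if r.status_code == 200 else []
        if not batch:
            break
        count += len(batch)
        page += 1
    return count
```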

Report period

This report covers the period June 5, 2025 (DevSpot's first hackathon) through April 2026. PL Genesis: Frontiers of Collaboration ran November 2025 through April 1, 2026 and is included in full. DevSpot publishes H1 and H2 reports on a semi-annual cadence.

Get the H2 report
before anyone else.

The H2 report includes full GitHub benchmarks, YC comparisons, ETH Global analysis, and the first builder survey data. We also maintain a curated high-potential startup list for investors — sourced directly from DevSpot project scores and GitHub signals.

No spam. Unsubscribe any time. Report published semi-annually.