The Grant Game: A Scorecard for Evaluating Ecosystem Funding
Most grant programs reward grant writers, not builders. Here's a scorecard for evaluating funding programs before you waste months applying, plus tactics for navigating imperfect systems.
John Connor
Technology Strategist
Grant programs select for grant-writing ability, not building ability. This post provides a 30-point scorecard for evaluating any funding program before you invest time. Programs scoring below 20 should be avoided. Also: tactics for navigating imperfect programs when you have no choice.
The Three Months I'll Never Get Back
When we raised for Sparkblox, I spent three months on one grant application. Three months of crafting narratives, designing pitch decks, scheduling calls with committee members. Three months not building product.
We got the grant. And the metrics we promised had almost nothing to do with what made our product successful. We hit every milestone. We missed what mattered.
I knew while writing the application that the milestones were theater. I proposed "user acquisition targets" because that's what they wanted to hear, not "retention improvements" which is what actually mattered. I played the game and won the grant. The project still struggled because I'd optimized for the wrong thing.
This post is the framework I wish I'd had before I started. Use it to evaluate grant programs before you invest time, and to avoid programs that will waste it.
The Grant Program Scorecard
Score any grant program on these six dimensions, 1 (worst) to 5 (best), for a maximum of 30 points. Avoid programs scoring below 20 total.
1. Timing of Funding (1-5)
| Score | Model | What It Means |
|---|---|---|
| 1 | 100% upfront on approval | Rewards proposals, not results |
| 2 | 50/50 upfront/milestone | Some accountability |
| 3 | Staged disbursement | Multiple checkpoints |
| 4 | Majority after delivery | Rewards completion |
| 5 | Retroactive only | Only rewards proven value (e.g., Optimism RPGF) |
Why this matters most: Upfront funding rewards grant writers. Retroactive funding rewards builders. The selection mechanism determines who wins, and that determines what gets built.
2. Milestone Definition (1-5)
| Score | Type | Example |
|---|---|---|
| 1 | Vague deliverables | "Launch beta version" |
| 2 | Specific deliverables | "Deploy contracts to mainnet" |
| 3 | Output metrics | "1,000 users registered" |
| 4 | Outcome metrics | "1,000 users retained at 30 days" |
| 5 | Value metrics | "$100K in user-generated value" |
"Launch beta" can mean anything from a landing page to a functioning product. Value metrics are hard to game.
3. Review Transparency (1-5)
| Score | Process | Reality |
|---|---|---|
| 1 | Anonymous committee, no feedback | Total black box |
| 2 | Named committee, no feedback | Accountability without learning |
| 3 | Named committee with rubric | Criteria visible |
| 4 | Public scoring and feedback | Full transparency |
| 5 | Community vote with rationale | Distributed judgment |
4. Time Horizon (1-5)
| Score | Duration | Implication |
|---|---|---|
| 1 | <3 months | Only incentivizes quick wins |
| 2 | 3-6 months | Short-term focus |
| 3 | 6-12 months | Medium-term building |
| 4 | 12-18 months | Enables infrastructure |
| 5 | 18+ months or rolling | Long-term commitment |
Real infrastructure takes years. Bell Labs didn't operate on 90-day cycles. Xerox PARC didn't have quarterly milestones. The transistor, the laser, the GUI—these came from patient capital with long horizons. 90-day grants produce 90-day thinking.
5. Network Dependency (1-5)
| Score | Access | Result |
|---|---|---|
| 1 | Requires existing relationships | Closed loop favoring insiders |
| 2 | Helps to know people | Moderate bias |
| 3 | Structured intro process | Reduces but doesn't eliminate bias |
| 4 | Blind initial review | Merit-based first pass |
| 5 | Fully merit-based; the code speaks for itself | Pure signal |
6. Application Cost (1-5)
| Score | Time Required | Reality |
|---|---|---|
| 1 | 40+ hours | Full-time job, massive opportunity cost |
| 2 | 20-40 hours | Significant investment |
| 3 | 10-20 hours | Moderate investment |
| 4 | 5-10 hours | Reasonable for the funding |
| 5 | <5 hours or none (retroactive) | Minimal friction |
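If you want the scoring to be mechanical rather than vibes-based, here's a minimal sketch in Python. The dimensions, the 1-5 range, and the 20-of-30 cutoff come straight from the tables above; the class and field names are just illustrative.

```python
from dataclasses import dataclass, fields

@dataclass
class GrantScore:
    """Six scorecard dimensions, each rated 1 (worst) to 5 (best)."""
    timing: int        # 1 = 100% upfront, 5 = fully retroactive
    milestones: int    # 1 = vague deliverables, 5 = value metrics
    transparency: int  # 1 = anonymous black box, 5 = community vote with rationale
    horizon: int       # 1 = <3 months, 5 = 18+ months or rolling
    network: int       # 1 = requires existing relationships, 5 = fully merit-based
    application: int   # 1 = 40+ hours, 5 = <5 hours or none

    def total(self) -> int:
        scores = [getattr(self, f.name) for f in fields(self)]
        if any(not 1 <= s <= 5 for s in scores):
            raise ValueError("each dimension must be scored 1-5")
        return sum(scores)

    def verdict(self) -> str:
        # Below 20 of a possible 30: skip the program.
        return "apply" if self.total() >= 20 else "avoid"

# Program A from the example evaluations below: 11/30, avoid.
program_a = GrantScore(timing=2, milestones=2, transparency=2,
                       horizon=2, network=2, application=1)
print(program_a.total(), program_a.verdict())  # 11 avoid
```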
Example Evaluations
Program A: Traditional Foundation Grant
- Timing: 2 (50/50 split)
- Milestones: 2 (vague deliverables)
- Transparency: 2 (named committee, no feedback)
- Horizon: 2 (6-month cycle)
- Network: 2 (helps to know people)
- Application: 1 (60+ hours)
Total: 11/30 — Avoid
Program B: Optimism RPGF
- Timing: 5 (fully retroactive)
- Milestones: 5 (rewards demonstrated impact)
- Transparency: 4 (public voting with rationale)
- Horizon: 5 (rewards past work, any duration)
- Network: 4 (impact-based, though visibility helps)
- Application: 4 (relatively light)
Total: 27/30 — Excellent
Red Flags That Disqualify Programs
Beyond the scorecard, watch for these instant disqualifiers:
- No public record of past grants: If they won't show you who they've funded, they're hiding something.
- Requires NDA on funding amount: Transparency is the check on bad behavior.
- Same teams winning repeatedly: Check past recipients. If it's the same 10 teams, the game is rigged.
- Reviewer conflicts of interest: Do reviewers invest in companies they evaluate?
- No post-funding accountability: What happens if you miss milestones? If nothing, milestones are theater.
If You Must Apply
Sometimes you need the money and imperfect programs are your only option. Tactics for navigating them:
Minimize Application Time
Set a time budget (e.g., 20 hours). When you hit it, submit what you have. Perfect applications often lose to mediocre ones anyway.
Define Measurable Milestones
Even if the program accepts vague milestones, propose specific ones. "1,000 users with 30-day retention above 20%" is better than "launch and grow user base." This protects you from scope creep.
Build Relationships Outside the Process
If network matters (it usually does), build relationships year-round, not just during application season. Show up at events. Contribute to community discussions. Be visible for the right reasons.
Track Actual Outcomes
Keep records of what you actually built vs. what you proposed. Win or lose, you'll learn whether the process selected for the right things.
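One lightweight way to keep that record is a small ledger of proposed milestones next to what actually shipped. This is a sketch, not a prescribed format; the field names and numbers are placeholders, not data from any real application.

```python
from dataclasses import dataclass

@dataclass
class Milestone:
    metric: str    # what you promised to measure
    target: float  # the number you committed to
    actual: float  # what you actually delivered

    @property
    def hit(self) -> bool:
        return self.actual >= self.target

# Proposed vs. delivered: the gap tells you what the process selected for.
ledger = [
    Milestone("registered users", 1_000, 1_400),
    Milestone("30-day retention (%)", 20, 11),
]
for m in ledger:
    print(f"{m.metric}: target {m.target}, actual {m.actual}, hit={m.hit}")
```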
What Good Looks Like
The best funding programs share these traits:
- Retroactive or outcome-based: Rewards results, not promises
- Transparent and accountable: Public record of decisions and rationale
- Long time horizons: 12+ months to enable real building
- Low application friction: Focus on work, not paperwork
- Builder-led evaluation: People who ship judging others who ship
If you're running a grants program, this is your checklist. If you're applying to one, this is your filter.
- Before applying: Score the program using this framework
- If score <20: Seriously consider not applying
- If applying anyway: Time-box your investment; don't let it consume you
- After: Track actual outcomes vs. predicted to calibrate your judgment