Reason Street | Business Model Design | https://reasonstreet.co

The Shape of the Curve
https://reasonstreet.co/the-shape-of-the-curve/
Wed, 18 Feb 2026

“The crisis consists precisely in the fact that the old is dying and the new cannot be born; in this interregnum a great variety of morbid symptoms appear.” -Gramsci

Everyone quotes Gramsci now. It has become the default frame for this moment, a preferred sign-off on an email signaling that you know there is a collapse of the liberal order, a rise of authoritarianism, institutional rot, a messy middle. Žižek later translated “morbid symptoms” as “monsters,” which is more quotable but less precise. Morbid symptoms are what you get when the body is failing but has not yet been diagnosed. Monsters at least imply something you could fight in the boss level of a game. Symptoms just persist, like our compound co-morbid chronic conditions.

Adam Tooze recently refused this frame. An interregnum, he argued, implies another regnum afterward. It assumes the cycle turns. It promises that disorder resolves into a new arrangement. He does not see why we are entitled to that assumption.

“I’m dying on the hill that we’re not even in an interregnum because an interregnum implies another regnum afterward. It implies a vision of history that has this as an ellipse between two. I don’t see why we would feel that we are entitled to make that assumption.” -Tooze

Nicolas Colin, in his Drift Signal newsletter on this platform, lays out the economic version of this question through Carlota Perez’s framework of technological revolutions. Perez recently argued that we are in something like the 1930s. The crises of 2000 and 2008 should have been the turning point, with speculative capital giving way to productive capital and governments building institutions to distribute the gains of the information revolution broadly. That is what FDR did with the New Deal. But no leader distributed the gains this time around. It was the banks that got bailed out. The Golden Age is still ahead …

Colin thinks we are closer to the 1970s. The paradigm is mature. The winners are entrenched. Innovation is sustaining, not disruptive. If he is right, this wave is spent, and the real question is what revolution comes next.

Both positions assume the cycle has a shape. If it is the 1930s, the good part is ahead. If it is the 1970s, the good part is behind. Either way, something follows.

I am not sure it does.

In the mid-1990s, I was in my mid-twenties, typing an equity research report about the future of the internet on a computer that was not connected to the internet. Nobody found this strange at the time.

The report was for Netscape. The company had five revenue streams, none of them proven, and the IPO document itself cheerfully noted that Microsoft would probably crush it, like a bug. My boss said, “Our job is to frame the shape of the curve.”

He meant the growth curve, the narrative that would justify a valuation for a company with five spindly potential revenue streams. I typed it out. I was just a bit player, not instrumental, and the momentum was already believed. The stock went crazy. The founders became multi-millionaires. One of them now controls roughly 18% of all venture capital as an asset class and advises the current administration on technology, AI, crypto, and higher education policy.

The logic designed in that room was a narrative-driven valuation, abstracted from costs or even reasonable assumptions for growth. You did not need to know which customer would take the lead or the details of how the technology worked. You needed the story. You framed the curve, raised the round, and moved on.

That logic did not stay in one room. It became the operating system of venture capital and early-stage technology companies, hence how we think about innovation itself. Finance shaped the wave, but it could not control what the wave became.

Perez has a mechanism for how speculative phases end: political crisis forces institutional rebuilding. The crisis arrived in 2000 and again in 2008. But the instrument had already done its work. It had produced enormous wealth, and that wealth had converted to power. The people the instrument made rich became the people who governed the crisis response. The mechanism that should have forced restructuring was captured by the logic it was supposed to restructure.

The speculative logic did not give way to production; it became production. Each crash produces not restructuring but more liquidity, more narrative, more abstraction from relationship. The bubble does not pop into a Golden Age. It reinflates. It has one move, and the move works every time.

This is what I think Tooze is sensing from the geopolitical side and what Perez’s framework, for all its elegance, cannot quite account for. The cycle is not delayed. The instruments designed in the last wave broke the mechanism that was supposed to produce the next phase. There is no deployment because the installation phase captured the transition.

[Figure: Perez’s “Technological Revolutions and Financial Capital” framework, borrowed from Stratechery]

If that is where we are, then the question is not where we sit in the cycle. The question belongs to the people building civic infrastructure, community health systems, intergenerational housing, distributed energy, care networks, local food systems, resilience work of every kind; people designing things that have to function for the communities they serve, inside an economy that keeps rewarding abstraction from consequence. They, we, do not have the luxury of waiting for the cycle to turn. We are building now, with instruments that were not designed for what they are trying to do.

I wrote this piece and then, as an experiment, rewrote it from a perspective that agrees with every structural claim but draws the opposite conclusion. Marc Andreessen, the co-founder of Netscape, whose firm now accounts for roughly 18% of US venture capital assets under management, might sign the same diagnosis: narrative-driven valuation, wealth converting to power, the cycle not turning. He would call it progress. The fork is not in the analysis. It is in who you think the builders are.

Who? Whose?

Whose narrative(s) of abundance?

Whose future(s)?

Whose imagination(s)?

—-

Gramsci, A. (1971). Selections from the prison notebooks (Q. Hoare & G. Nowell Smith, Eds. & Trans.). International Publishers. (Original work written ca. 1930–1932).

Žižek, S. (2010). A Permanent Economic Emergency. New Left Review, 64.

Yes, I know we are sick of Ezra, but sending you there anyway, and you can just read the transcript. Klein, E. (Host). (2026, January 30). Adam Tooze on the end of the magical thinking era [Audio podcast episode, written transcript]. In The Ezra Klein Show. The New York Times. https://www.nytimes.com/2026/01/30/opinion/ezra-klein-podcast-adam-tooze.html

Colin, N. (2024, May 21). Late-cycle investment theory. Drift Signal.

Perez, C. (2003). Technological revolutions and financial capital: The dynamics of bubbles and golden ages. Edward Elgar Publishing.

Thompson, B. (2021, May 25). The death and birth of technological revolutions. Stratechery. https://stratechery.com/2021/the-death-and-birth-of-technological-revolutions/

van der Meer, J. (2025, August 19). Whose Abundance Narrative.

Escobar, A. (2018). Designs for the pluriverse: Radical interdependence, autonomy, and the making of worlds. Duke University Press.

Benjamin, R. (2019, September 24). A New Jim Code featuring Ruha Benjamin and Jasmine McNealy [Transcript]. Berkman Klein Center for Internet & Society at Harvard University. https://cyber.harvard.edu/sites/default/files/2019-10/2019_09_24_RuhaBenjamin_Transcript.pdf

Continuations
https://reasonstreet.co/continuations/
Thu, 01 Jan 2026

Capital moves. The structures we’ve built move it toward return, toward liquidity, toward exit.

Systems persist. Communities persist. The places where people live and work and build lives—these do not exit. They continue, with or without the capital that passes through.

We have designed our financial structures for the movement. The venture fund with its ten-year life. The bond with its maturity date. The grant cycle, the appropriations cycle, the budget cycle. Deploy, extract, exit. Deploy, extract, exit.

The system remains after each exit. Stronger or weaker, it continues.


A continuation is a different kind of structure.

It has boundaries. It has governance. It has economics. These are not formless things. But they are configured for persistence, for staying, for learning, for adapting to what the system needs across time.

In Canada, Raven structured outcomes contracts where Indigenous communities define what matters and assess what’s working. The capital serves the governance. The governance does not serve the capital.

In Sicily, Messina built infrastructure that generates revenue that funds what’s needed, and what’s needed changes, and the response changes with it. No exit. No grant cycle. The continuation continues.

These are not projects with longer timelines. They are structures that belong to the systems they serve.


Equity requires exit. Debt requires repayment. Philanthropy requires reports. Each structure carries its logic, its temporality, its demands.

A continuation does not refuse capital. It receives capital without being captured by capital’s logic. The capital passes through. The continuation remains. The system keeps learning what it needs to thrive.

This requires design. Equity without exit. Debt that circulates within the system. Philanthropy that builds structure rather than funds activities. These exist. They are not yet common. They require people who understand both the capital and the system well enough to hold them in right relation.


The question worth asking is not how do we fund this.

The question is: what structure keeps learning, keeps adapting, keeps creating conditions for thriving—whatever capital passes through?

That is a continuation.

AI Vendors Promised Efficiency. What We Got Was an Arms Race.
https://reasonstreet.co/ai-vendors-promised-efficiency-what-we-got-was-an-arms-race/
Tue, 02 Dec 2025

Dr. Bryant Lin teaches medicine at Stanford. He founded the Center for Asian Health Research and Education to study diseases that disproportionately affect Asian populations, including nonsmoker lung cancer. Then he was diagnosed with it himself. He is now Stage 4.

His doctor prescribed Rybrevant, an FDA-approved treatment for his specific mutation. Aetna denied the claim. Lin, a physician at one of the world’s most resourced medical institutions, had to beg for his life on LinkedIn. His post reached 500,000 people. Aetna reversed the denial.

Most patients don’t have half a million people watching. Most don’t appeal at all.

This is not a story about one insurer behaving badly. It’s a story about what happens when everyone in the system deploys AI to win a war, one that patients never signed up to fight.


Over 450 million claims are denied annually in the US. Payers have deployed proprietary algorithms, about which little is known, that never sleep, scanning for patterns, hunting discrepancies, automating denials at scale. Denial rates have climbed from 10% in 2020 to nearly 12% today—and higher for inpatient care.

The average US hospital loses $5 million per year from rejected claims. So, unsurprisingly, on the provider side, hospitals have welcomed AI into revenue cycle management: ambient documentation that transforms clinical encounters into optimized claims, systems that study past denials to reverse-engineer payer logic, tools that adjust wording to avoid automated red flags. It’s driving a massive surge in adoption: 22% of US health providers have already begun to roll it out. When it comes to AI, an industry not known for its rapid implementation of operational change (cf. the story of EHRs) is now in the lead.

The patient-side AI market is already emerging. Startups like Claimable now offer to craft appeal letters, citing policy violations, marshaling clinical evidence, even tuning the emotional register of the language. The reported success rates are impressive.*

But what does all this activity add up to? 

Zoom out: we are now deploying AI systems to help patients fight AI systems that were built to deny them care, that other AI systems were built to bill for. 

Capital is flowing not toward reducing friction, but toward intensifying it.

A recent report by Silicon Valley Bank calls this, explicitly, an “arms race.” They’re not wrong. But arms races have a particular economic logic: the returns go to the weapons manufacturers, not to the people caught in the crossfire.

The value of investment isn’t measured by whether you are “better off,” but whether you are less at risk than your rival. Much of the capital goes into duplication, not discovery. Everyone builds the same thing slightly differently, not because it’s needed, but because the rival has it.

From the point of view of vendors, none of this is a problem. They’re servicing customer demands—demands which their own customers agree to be urgent. But the overall effect of these rival deployments is yet to be seriously taken into account.  

Provider AI is trained to extract value from the record. Payer AI is trained to protect payer value. Patients wait. Costs climb.

But none of this is fate. 

Every metric is a belief system disguised as math. Every contract is a declaration of who must struggle and who must be spared. The negotiation has already begun; most people simply do not recognize that they are sitting at the table, that the terms can change. 

We talk about AI adoption as if it were weather: something that just happens, without opinion or intent. But optimization is a choice. People are choosing what to optimize for, deciding that the claims war matters more than spending that money on something that would result in care. Other agreements are possible. But first, we have to stop confusing inevitability with surrender.

This post is the first in a series. We’re going to map where else these adversarial dynamics are emerging—inside health systems, between clinicians and patients, across the boundaries of care. The arms race in the revenue cycle is just the most visible front.

This isn’t ultimately a story about AI, or even about technology. It’s about how embedded power dynamics within healthcare (who bears risk, who extracts value, who must prove their worthiness for care) are being encoded, accelerated, and legitimized through prevalent models of health systems. The ruptures we’re seeing aren’t bugs in the implementation. They’re revelations of the underlying architecture.

Leverage Points Aren’t What We Think They Are
https://reasonstreet.co/leverage-points-arent-what-we-think-they-are/
Wed, 12 Nov 2025

On Mechanistic Metaphors

Systems thinkers love to talk about leverage points—those magical places where a small intervention supposedly yields outsized change. The image is always mechanical: a bar, a fulcrum, a decisive push.

But that metaphor smuggles in an assumption: that the world behaves like a machine waiting for the right force in the right place.

As Derek Cabrera recently pointed out, this is a deeply mechanistic way of seeing systems. It implies that change is external, linear, and predictable. Push here, move there. A comforting worldview—especially for people tasked with “fixing” systems from the outside.

But what if the metaphor is misleading us?

Finance already knows what leverage is, and it is mechanistic

In finance, leverage has a precise definition:

Using borrowed capital to amplify returns (or losses).

Leverage is structural. It increases exposure. It multiplies the effect of every movement. It works because the underlying mechanics—the loan agreements, collateral, interest, covenants—are engineered to behave predictably.

Financial leverage is, in fact, the ultimate mechanical leverage point:

  • It scales investment without adding equity.
  • It magnifies outcomes whether anyone “tries” or not.
  • It operates through predefined, enforceable contracts.

Finance treats leverage as a designed artifact—a predictable machine within the broader economy.
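To make that engineered predictability concrete, here is a minimal sketch of the arithmetic of financial leverage. The function, the 3x multiple, and the 5% borrowing cost are illustrative assumptions, not a model of any real instrument:

```python
def leveraged_return(asset_return: float, leverage: float, borrow_rate: float) -> float:
    """Return on equity when total exposure = leverage * equity.

    The investor puts up 1 unit of equity, borrows (leverage - 1) units
    at borrow_rate, and earns asset_return on the whole position.
    """
    return asset_return * leverage - borrow_rate * (leverage - 1)

# A 10% move in the underlying asset, at 3x leverage with a 5% borrowing cost:
up = leveraged_return(0.10, 3.0, 0.05)     # 0.10*3 - 0.05*2 = +20% on equity
down = leveraged_return(-0.10, 3.0, 0.05)  # -0.10*3 - 0.05*2 = -40% on equity
```

The same ±10% move in the asset becomes +20% or -40% on equity: the contract multiplies outcomes mechanically, whether anyone “tries” or not, which is exactly the designed-artifact quality the essay describes.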

This is exactly why the metaphor breaks when we apply it to living systems.

Complex systems do not have fixed levers

Communities, organizations, health ecosystems, political movements, supply chains—these are not machines. They do not have rigid beams or fixed fulcrums. They reorganize themselves. They learn. They resist. They evolve.

When systems thinkers announce they’ve identified “the” leverage point,
change the rules, change the incentives, change the information flows,
they’re still imagining an external actor pressing on a stable lever.

But in adaptive systems:

  • There is no external vantage point.
  • Agents inside the system have their own goals, histories, and constraints.
  • Power is unevenly distributed.
  • Information shapes meaning, not just behavior.
  • The act of intervening changes the system that changes you back.

What looks like a lever in one context becomes a dead end in another.

The deeper work is not finding levers. It’s shifting how the system interprets itself.

If we must keep the metaphor, the only “leverage” that behaves anything like financial amplification is this:

A shift in how people make sense of what they’re doing together.

Meaning-making, not mechanics, is what cascades:

  • When care workers rethink what counts as value.
  • When policymakers rethink risk.
  • When clinicians rethink evidence.
  • When funders rethink outcomes.
  • When communities rethink what is owed and to whom.

This kind of leverage doesn’t behave like a beam.
It behaves like cognition, culture, narrative, trust.

Finance amplifies through debt.
Systems transform through sense-making.

The danger of mechanical metaphors

The problem isn’t that Donella Meadows was wrong. It’s that we’ve turned her heuristic into a hunt for the “right lever,” a technocratic fantasy that sidesteps the messy relational work of shifting beliefs, practices, and power.

When we treat systems like machines, we default to machine solutions:
dashboards, KPIs, nudges, protocols, outcomes contracting, and algorithmic management.

But the most consequential “leverage point” is rarely a parameter or a rule.

It is the collective capacity to see the system differently.

And unlike financial leverage, this can’t be borrowed, bought, or optimized.

It must be practiced.

The Tale of Two (2) Three (3) Horizon Ideas
https://reasonstreet.co/the-tale-of-two-three-horizon-ideas/
Thu, 30 Oct 2025

On forecasting, portfolios, and futures

“We need to spend more time on Horizon 2,” they both say.

One is a program officer at a climate and nature philanthropy. The initiatives they’re funding risk being captured by the dominant political economy in the U.S. or co-opted by greenwashing actors who sound like they are marching toward progress but are aggressively resisting any change to the status quo.

The other is the CFO of a massive industrial company. The core business is slowing. The R&D pipeline is funded with hopeful AI bets and blockchain projects that failed to prove out, but new commercial launches have stalled. Growth in the next three to five years looks tenuous. They have nothing in the pipeline to address customer demands for more “resilient infrastructure,” so competitors are taking their share of the market.

They are both working hard, collaborating with their teams and partners to make sense of a disoriented world with compounding uncertainties.
They’re using different frameworks that happen to share the same name.

A discussion on LinkedIn about different framings of narrative led to further comments about the problem of “H2 capture,” so I thought I’d spell the two frameworks out, because both ways of seeing horizons in time can help with the bridge work that needs to be done to move us from where we are to where we need to be.

McKinsey’s Three Horizons: Managing Innovation Portfolios

I first encountered talk of horizons during the dot-com recovery period. Companies had gotten over their skis, investing in the shiny promise of the internet future, only to ratchet back spending to focus on their dependable core businesses once the crash came. Growth had stalled.

The framework’s insight, published in The Alchemy of Growth by McKinsey consultants Baghai, Coley, and White (2000), was portfolio thinking for innovation investment: protect the core, grow the adjacencies, and seed the experiments. Manage not just risk and return, but time itself.

The framework maps how organizations manage innovation across time horizons.

Horizon 1 is the core business — the reliable revenue generator.
For Disney, this was parks, films, and merchandise. Protect it. Optimize it. Capture value from it. Most of your revenue, and most of your certainty, lives here.

Horizon 2 is emerging growth. These are projects that may become meaningful revenue streams in three to five years, beyond proof of concept but not yet scaled. They often leverage core capabilities into adjacent markets or new business models.

Disney’s MagicBand and FastPass systems were classic Horizon 2 plays. In the boardroom, they looked like brilliance: a wristband that improved park experiences, enabled payments, and created a data infrastructure for personalization.

Outside the boardroom, the meaning shifted. Was this a premium queue-jumping perk in a public space — the pinnacle of the American dream, or a symbol of its erosion? A way to monetize access to experiences that families once shared on equal terms?

But there’s no critical theory in these decision rooms, no one pointing out that an operational innovation had encoded inequality into leisure itself, transforming waiting in line from a shared ritual into a sorting mechanism. It’s all shareholder portfolio logic. Any backlash is absorbed into the calculus of revenue and margin growth, as well as into what the company can contribute to shareholder portfolios.

Horizon 3 is the far future, the moonshots. The experiments that might fail, pivot, or eventually redefine the core seven to ten years out.
When Disney began building Disney+, it was Horizon 3: a direct-to-consumer streaming model that cannibalized its own lucrative cable and theatrical windows. It was a hedge against an uncertain distribution of future income.

But today, investors are asking: What story is Disney telling about the future?
The brand is embattled on all sides of the political spectrum. The recent loss of more than two million Disney+ subscriptions following the FCC controversy and Jimmy Kimmel’s temporary cancellation underscored that the company’s cultural and financial narratives are intertwined. Some investors now worry that Disney has no “sexy AI story” and no clear way to reinvent itself the way the tech giants have promised.

The Three Horizons framework gave language to tensions that had always existed: today versus tomorrow, optimization versus exploration, certainty versus possibility. Its power, and its trap, is that it translates those tensions into portfolio logic.


Sharpe’s Three Horizons: Futures as Transformation

There’s another way to work with the idea of horizons, one that treats the future not as a portfolio problem but as a systems change challenge.

Developed by Bill Sharpe, Anthony Hodgson, Graham Leicester, and colleagues at the International Futures Forum, this version of the Three Horizons framework looks at transformation across time as a living process: how dominant systems decline, how emerging alternatives take root, and how we navigate the turbulent space in between.

Horizon 1 represents the dominant system — the world as it currently operates.
For the climate philanthropy program officer, this might mean the fossil-fuel economy and the extractivist logics that sustain it. The unit of analysis is not a single organization but a system of actors: governments, corporations, intermediaries, and advocates. Horizon 1 still generates enormous inertia and wealth, even as its cracks show in ecological, social, and political terms.

Horizon 3 is the emerging future, the seeds of transformation already at work.
These might include cooperative ownership models, bioregional economies, participatory budgeting, or energy democracy initiatives. They are not incremental innovations to plug into the current system; they are prefigurative, enacting the futures they imagine. They live on a small scale now, but they embody fundamentally different organizing logics.

Horizon 2 is the turbulent transition zone.
It is where the old resists collapse and the new fights to emerge. It is where most of the action and strategic ambiguity reside. Actors in this space are trying to extend the lifespan of H1, accelerate the growth of H3, or navigate between them.

Instead of a boardroom exercise, this work is practiced as a collective visioning process, a way to develop shared pathways for systems change. The program officer may be participating in a horizon exercise organized by one of their grantees, or convening their own session to understand how their portfolio of grants supports broader transformation.

For the program officer, “spending more time in Horizon 2” means something specific. It means not funding only Horizon 1, reformist tweaks within the current system, or only Horizon 3, beautiful but isolated experiments. It means resourcing pathways of transformation: the strategies that shift power, change rules, and make room for H3 to grow. It means investing in the infrastructure of transition, in sustainable economic structures that enable community self-determination, not just one-off pilots or narrative-change campaigns easily co-opted by incumbents.

Sharpe’s framework is not about balancing risk and return. It is about understanding and accelerating transformation.

It asks:
• How does a system actually change?
• Where are the leverage points?
• Who benefits from H1’s persistence?
• What helps H3 gain legitimacy?

In this version, the Three Horizons are not a corporate portfolio map. They are a lens on collective evolution, an invitation to see the future as something we grow into, not something we manage only for future cash flows.


Bridging the Two

Both of them are correct. We do need to spend more time in Horizon 2.

In the McKinsey version, Horizon 2 is a bridge that keeps the organization viable as conditions change. It’s not just for companies, but for any organization that relies on funding, customers, or support. It helps you see that your primary source of revenue (USAID?) is a big risk if you have no plan B. It is where resilience is built, not through cost-cutting, but by developing the next sources of strength. It treats uncertainty as something to be managed and monetized. The goal is continuity, not reinvention.

In the Sharpe version, Horizon 2 is also a bridge, but between systems. It is where people test new rules, new forms of ownership, and new forms of value. It treats uncertainty as material for transformation. The goal is renewal, not continuity.

Both frameworks make time actionable, but each inherits the logic of its origin. One assumes stability must be protected; the other assumes it must be undone. Each has blind spots. Portfolio logic often cannot see beyond capital preservation. Transition logic can underestimate the persistence of power and the path dependency of incumbent infrastructure.

For those of us working inside systems that are both stable and failing, both productive and destructive, Horizon 2 is not a theory. It’s the terrain we work in every day. The bridge has to hold while we rebuild what crosses it.

The task is not to choose between resilience and transformation, but to connect them, to design bridges that can carry value, capability, and legitimacy across change.

If your work is truly about systems change, you can’t just be a portfolio manager. You have to be a transition actor, willing to disinvest from what must decline, to risk failure in service of what might emerge, to accept that Horizon 2 is not the “safe middle” but the most dangerous, contested, and necessary terrain.

The conclusion, then, is not comfortable. Most organizations using the Three Horizons language are using it to avoid this very reckoning. The framework becomes a way to sound strategic while remaining safe, to fund a “portfolio of change” that never actually threatens the distribution of power.

But when you understand what transition really requires, when you see Horizon 2 not as “emerging growth” but as the destabilization of everything that resists change, the work demands something else.


Not balance, but courage.
Not diversification, but commitment.
Not managing innovation, but midwifing transformation, knowing you cannot control what emerges, only whether you help it or hinder it.

None of us can stay at the edges of this work. Horizon 2 belongs to everyone who allocates resources, tells stories, or sets priorities for the future.

The invitation is to move from describing transition to practicing it, to use whatever authority, access, or capital we have to build bridges that serve more than our own survival.

From: Contribution Design, A field guide for people who create and adapt systems of value and valuation. Subscribe here.

Narrative as an Allocative Force, on LinkedIn, thoughtful discussion in the comments, referencing H2 language. https://www.linkedin.com/feed/update/urn:li:activity:7387124667205181440/

Baghai, M., Coley, S., & White, D. (2000). The Alchemy of Growth. Perseus Books.

Curry, A., & Hodgson, A. (2008). Seeing in Multiple Horizons: Connecting Futures to Strategy. Journal of Futures Studies.

Narrative, Valuation, and the Material Power of Finance
https://reasonstreet.co/narrative-valuation-and-the-material-power-of-finance/
Thu, 23 Oct 2025

“Narrative change” has become its own sphere in philanthropy and movement work: stories as culture, identity, and legitimacy. Narrative work is funded. Those with the best narratives attract funding. There’s even been a backlash against narrative-building, and last week I had a set of compelling conversations thanks to Julia Roig’s post on Judith Mil’s post, a materialist critique of narratives.

But in finance, narrative operates as an allocative force.

In early-stage and private markets, the story is the investment thesis. Narrative precedes numbers, which precede infrastructure. Musk, Altman, Zuckerberg — each deploys a version of inevitability: AI will transform everything → we’ll need compute → data centers → rare earths → new energy grids. That narrative alone mobilizes billions. Finance loves billion-dollar asks because they justify billion-dollar structures. Where else are they going to put all of that money?

Image source: Salajean via Envato

Meta’s new $30B special purpose vehicle (SPV) for data centers is a good example. It translates a speculative narrative about AI demand into structured finance: leasebacks, credit tranches, and long-term cash flow models. The narrative becomes an asset class, then reshapes the landscape: steel, concrete, extraction.

In finance, we learn that narrative structures valuation. In business school we read Damodaran, author of many valuation textbooks, who calls it “a bridge between story and spreadsheet.” Narrative gives the model meaning; the model gives the narrative credibility.

In the AI frenzy for early-stage companies, the ratio feels 99% story, 1% spreadsheet. Vibe pricing.

This is where systems-change actors and systemic investors could step in, especially those who already operate at the catalytic capital stage: the earliest, riskiest, most narrative-dependent capital, funding the “proof of concept” for new realities. You can define the frame of plausibility, not just the moral one.

What if cultural narrative change folks didn’t stop at persuasion or culture, or policy, but extended to the financial architecture that makes belief material? What if we were asked to use catalytic capital to test alternative narratives of value, return, and risk, and build the structures that make them regenerative?

From: Contribution Design, A field guide for people who create and adapt systems of value and valuation. Subscribe here.

]]>
Outcomes Will Be Contested https://reasonstreet.co/outcomes-will-be-contested/ Sat, 18 Oct 2025 15:53:56 +0000 https://reasonstreet.co/?p=12815 Read More]]> Performing Economic Realities: Climate Week on Value, AI, and the Contest Over Contribution

My worlds all collided at Climate Week, and it’s taken me another week to metabolize the bafflement of many and find the corners of coherence, trying to remember more than the variation of seating charts, convening experiences, and which rooms were set to Arctic tundra versus tropical rainforest.

Outcomes-based AI, blended finance, systems finance portfolios, Indigenous finance, fracture mapping, and algorithmic justice, all performing different economic realities, each encoded in rooms with their own microclimate negotiations. The universal constant: valuation logics stratify by temperature zone. Pack for four seasons in one day. Each approach constructs what counts as an outcome, who counts as a decision-maker, and what counts as success. None, tragically, constructed better HVAC systems.

Outcomes-based models aren’t neutral measurement tools. They actively construct what counts as an outcome, who counts as a decision-maker, and what counts as success. They enact particular economic realities while foreclosing others.

As people who design systems that define outcomes, results, and value, whose valuation practices are we encoding? Whose calculative devices are we making infrastructure?

These are questions of contribution.

Whose definitions of value are we performing into reality?

Flashback to a Different Variety of Capitalism

Climate Week brought me into direct contact with development bank logic, reminding me of my early career at The Japan Development Bank, where I learned how cultural and social values become part of economic infrastructure.

We forecasted early internet adoption not only for its economic impact, but also for its institutional role in social cohesion. Employment was an explicit objective within our qualitative and quantitative frame. That same logic later created conditions for Japan’s stagnation: banks kept bad loans afloat, zombie companies alive, to maintain employment commitments.

That orientation remains visible in how Japan welcomes AI today, as a tool to strengthen relationships, with employment protections presumed. In contrast, US AI is pitched as worker replacement, arriving without safety nets, generating resistance, fear, cynicism, and a frantic scramble to build quick, flip faster, and accumulate wealth before AGI arrives to make it all moot. The underlying social agreements create conditions for any systems change. One society fears unemployment; the other has monetized the fear itself.

Varieties of Lean

Take the case of lean manufacturing’s journey from the US to Japan and back again: same principles, with radically different results. Tech strategists share this lesson today to point out exactly where we are in the history of AI adoption: we will quickly exhaust the automation-and-efficiency playbook and need to take on restructuring governance and value by redesigning systems.

In Toyota’s Japan, lean emerged embedded in valuation practices around collective improvement (kaizen), long-term thinking, and respect for workers. The Toyota Production System was designed as a coordination architecture that aligned the company with its suppliers, empowered line workers to halt production when defects emerged, and created dense feedback loops to facilitate learning that could compound across the organization. The counting devices, which measured what, how, and by whom, reflected these values.

In the US auto industry, the same techniques were applied, but with different valuations. Lean became cost-cutting. Consultants packaged it as efficiency (charging by the slide deck), and executives measured success in working capital and headcount reductions. Short-term results showed: less inventory and tighter balance sheets. Systemic gains did not. Production fragility increased. The measurement devices counted labor costs, not worker knowledge; quarterly returns, not long-term improvement. American lean excelled at measuring what was easiest to cut.

Lean startup attempted to revive the continuous learning loop, adopting the kanban as software, counting “to-do,” “doing,” “done,” to accelerate SaaS and smartphone apps now awaiting their turn for AI to rip out and replace.

The calculative devices enact whatever values are embedded in our collective social commitments.

When we build outcomes-based business models, we’re not creating neutral structures. We’re assembling valuation infrastructures that perform specific economic realities.

Climate Week: Many Performances of Value

Here are examples of the many ways value and valuation were being performed at Climate Week. I was grateful to wander into so many of these rooms.

Cooperation Agreements and Blended Finance:

In cold rooms in tall buildings with panel formats, time for questions at the end:

A Panel of Experts, and Their Audience

Structured finance that starts with philanthropic and development finance to de-risk and crowd in private capital. There are pre-specified metrics, required attribution models, and risk-adjusted returns. Blended finance performs funder-defined value and transaction-based relationships, often leaning on partnerships with NGOs or community-based organizations to understand how impact-intending investments will be accepted by beneficiaries; those organizations, however, are rarely seen as co-producers of outcomes. The metrics enact what will matter by structuring incentives and determining what is legible for capital.

One story told was that blended finance has been crippled by the removal of USAID, once the primary catalytic funder. Many things were said in one particular roundtable that requested all phones be placed in pouches before the worrying and laments could begin.

Beyond Chatham House Rules: Phones in Pouches Roundtable Discussion

But political economist Yuen Yuen Ang’s concept of “polytunity,” finding opportunities within constraints by “using what you have,” suggests a different narrative. Blended finance is moving forward: the Climate Investment Funds’ inaugural capital markets bond raised $500 million in 2024 and was oversubscribed more than six times. Yet barriers remain, particularly credit rating agencies’ continued influence on country borrowing costs.

Systems Finance:

Systems finance is emerging as a niche approach that employs carefully curated combinations of financial vehicles tailored for specific contexts, operating at multiple levels, building the field one acronym at a time.

A General Assembly, not exactly UNGA

Systems-level actors focus on changing the rules, institutions, and legal frameworks that govern finance, creating regulatory infrastructure, disclosure standards, prudential requirements, and market architectures. Organizations like TIIP, PRI, and The Predistribution Initiative operate in this sphere, engaging with institutional investors, while these and other institutional reformers, UN initiatives, and policy advocates work to change laws, regulations, and fiduciary interpretations.

TWIST, Deep Transitions Lab, Dark Matter Labs, and MIT Sloan Sustainability Initiatives are all prototyping or researching approaches while investing. These different actors use systems thinking and/or complex systems science to understand social problems and address them through the deployment of multiple forms of capital, with the intent of transforming human and natural systems.

Distinct types of investment work together: investments with direct financial returns; enabling investments that yield no direct returns but support critical infrastructure, such as intermediary organizations and policy advocacy; and strategies whose investments may not yield market returns themselves but catalyze follow-on investments that can generate returns.

FEST is a collective of funders and practitioners operating financing ecosystems for systemic transformation, working at both the systems level and on the ground. The convenings are more convivial, open for discussion and inquiry. These approaches emphasize contribution over attribution and rely on evaluation that shifts from rendering judgments to facilitating continuous learning and deliberation as systems evolve.

Convivial Convening Style of Systemic Finance for Transformation

Outcomes-based approaches in systemic finance are considered with caution, as some contributions to systems change are inherently unquantifiable or not yet knowable, yet will later prove essential to achieving transformative outcomes.

Indigenous Finance:

Investment funds and collectives led by Indigenous leaders from around the world came to Climate Week to demonstrate how their work performs different economic realities, rather than following compromised versions of conventional metrics.

Kim Pate, the Managing Director of NDN Collective, describes “braided capital,” loans accompanied by grants and resources that “synergistically support the growth of the project or business throughout the life of the loan,” removing traditional collateral requirements and embracing procedures that better reflect the needs of the communities they serve. Compared to so-called “patient capital” with social-impact time horizons, NDN capital operates in fundamentally different temporalities: seven generations versus quarterly returns, land as relative versus land as asset.

Building Strategy for Indigenous-Led Finance

The Building Strategy for Indigenous-Led Climate Finance event put careful consideration into the design of the convening, with Indigenous funders invited to a roundtable discussion, supported by rows of funders, collaborators, and supporters, and offerings of bison and four brother salad from Buffalo Jump NYC, a catering firm on a mission to “Re-Claim and Re-Indigenous food culture in NYC and hopefully someday the country and the world.” This was a power-aware convening, designed for transformation and shifts in perspective.

How are outcomes considered? SSIR just published a write-up of Community-Driven Outcomes Contracts (CDOCs) from the Raven Indigenous Outcomes Funds in Canada, a model that centers Indigenous communities as leaders throughout the entire project lifecycle. Unlike traditional social impact bonds, where governments or funders define problems and solutions, sometimes with limited community consultation, CDOCs ensure that communities themselves define what counts as an outcome, design the interventions, establish governance structures, and determine how success is measured.

In the Minoayawin Initiative addressing diabetes in the Island Lake Anisininew Nation, outcomes aren’t limited to clinical markers like blood-glucose levels. The community co-created a Mino-Bimaadiziwin score, drawing from the Anishinaabe concept of “living a good life in harmony,” that measures social and cultural well-being, including participation in community life and contribution to cultural traditions. The initiative includes a community-designed “healthy hub,” a communal kitchen and gathering space that emerged directly from community consultation. “We would never have thought about that if we hadn’t asked the community,” notes Raven Outcomes founder Jeff Cyr.

These CDOCs continue to engage private capital and utilize outcomes-based repayment structures. Investors provide upfront funding and are repaid when verified outcomes are achieved. However, the shift lies in who holds power: community elders, health professionals, and community members sit alongside investors and public agencies in the governance structure. In the Fisher River Cree Nation and Peguis First Nation geothermal project, community members weren’t just beneficiaries; they were trained, certified, and employed to install energy systems, with community coinvestment in labor, time, and materials. The $5.1 million in private capital was repaid based on verified energy savings, enabling communities to build long-term capacity rather than merely receiving services.

The framing of investors “being a good relative” is an ontological shift that rejects the investor-investee binary in favor of kinship obligations. This is a valuation system that enacts different worlds where prosperity includes ceremony, language, and relationality as constitutive elements of economic health. CDOCs demonstrate that outcomes-based models can be designed and structured with community leadership, shared governance, and culturally grounded quantitative metrics and qualitative evaluation. They can become vehicles for self-determination.

Outcomes-Based AI Models:

In contrast, a rising organizing logic for climate finance and technology in 2025 is outcomes-based AI models. At Climate Week, the integration of AI, IoT, and data-driven monitoring systems framed a new form of “planetary intelligence.” These systems promise traceability, real-time verification, and algorithmic accountability across emissions, supply chains, and ecosystems.

A Startup Showcase Event Rendered by Gemini

The framing assumes that what can be sensed can be priced, and what can be priced can be governed. Planetary valuation becomes a continuous, data-driven process, a shift from discrete transactions to automated, performance-based infrastructures. The new logic links carbon markets, biodiversity credits, and adaptation metrics.

This version of Climate Week was less about pledges or blended structures and more about building valuation systems that treat Earth as a computable entity. The promise is total legibility: every hectare, molecule, and transaction folded into an outcomes model. Yet, like earlier calculative infrastructures, these AI-driven models enact particular valuation logics, determining which forms of life, labor, and knowledge become legible to capital.

But there are those braiding a different future, connected to the past.

The work of Nkwi Flores and Savimbo challenges this computable approach, returning valuation to the ground, to forests, kinship networks, and oral economies where value is produced through relation, as compared to nature-based-finance solutions designed in investment banks in London or New York.

Savimbo’s approach to regenerative finance, rooted in Indigenous Amazonian epistemologies, reframes “outcomes” as reciprocal commitments among communities, land, and ecosystems. Their valuation practices emphasize narrative accountability, stories and ceremonies as records of value, rather than algorithmic proof. These practices resist the enclosure of valuation within data infrastructures. They remind us that legibility to capital is not the same as legitimacy within community.

Where AI-driven climate valuation seeks a single planetary ledger, these movements propose a pluriverse of ledgers, many ways of knowing, measuring, and sustaining what matters.

When we design AI-enabled, outcomes-based models for climate, we are not just innovating measurement. We are scripting future governance, deciding which worlds, and whose worlds, will count as successful outcomes.

System Optimization vs. System Transformation

Before designing outcomes-based models anywhere, are we:

Optimizing Systems: Making existing systems more efficient, with the risk of reinforcing unsustainable or fragile underlying rules.

Transforming Systems: Changing underlying rules and structures, challenging meta-rules, and reconfiguring flows of value.

Most claim they want transformation, but the metrics we design tend to incentivize something else.

Traditional outcomes-based models ask: “Did our investment directly cause X measurable outcome?”

The transformation question: “How does this contribute towards the system transformation we want to see?”

Our current design patterns, pre-specified outcomes, attribution requirements, and milestone payments foreclose the emergent adaptation that systems change requires.

Business Models and the Politics of Infrastructure

Our business models become calculative devices through which value gets assessed, resources allocated, and success determined. Infrastructure is path-dependent. The valuation practices we encode now will structure what can happen later. If our outcomes-based systems encode conventional metrics, alternative valuation practices become illegible.

We are in a contest over which valuation configurations become infrastructure, which mega infrastructure projects require gargantuan valuations, which counting devices become standard, and which performances of economy become materially enacted.

Investors are currently incentivizing the shift to outcomes-based pricing to move from the “software eats the world” stage of digitization (digitizing file cabinets) to the “software eats labor” stage (vaporizing human work).

Whose futures are we enacting? Through whose valuation practices?

What This Means for What We Make Next

Practical and sometimes terrifying shifts we might make:

  • Ask: Who designs? Who might be missing from the table? Who’s tried this before? What can we learn from those who came before us?
  • Design for contribution, evaluate through transformative outcomes (building or expanding niches, opening regimes), not just adoption metrics.
  • Build structures that recognize financial and non-financial contributions.
  • Consider which valuation practices you’re encoding, and structure to enable adaptive capacity and long-term system transformation.
  • Accept that some contributions are unquantifiable.
  • Create space for plural valuation practices, rather than forcing everything into a single, calculative frame.
  • Question the universality of conventional finance structures. What assumptions about “necessary” financial architecture might we be carrying forward unnecessarily?
  • Hold space. Sometimes, good design means creating space for values that cannot and should not be compared.

Outcomes will be contested because values and valuation are contested.

As makers of AI-enabled systems, financial structures, and business models, we are not neutral. We are building narrative and calculative infrastructure that will either foreclose alternative performances or create space for multiple economic realities.

From: Contribution Design, A field guide for people who create and adapt systems of value and valuation. Subscribe here.

]]>
Who Shapes the Machine Dreams https://reasonstreet.co/who-shapes-the-machine-dreams/ Mon, 22 Sep 2025 00:15:10 +0000 https://reasonstreet.co/?p=12535 Read More]]> A story about unimaginative visions for efficiency and who gets to decide what intelligence serves.

The machines are learning. But not what you think.

The machines are learning how to hollow out human work and pour the profits into distant accounts. The machines are learning the language of efficiency, which means “we don’t need you anymore.” The machines are learning to predict the predictions of equity analysts in their training data.


This isn’t a story about “artificial intelligence.” This is a story about artificial scarcity, manufactured desperation, and the very real people deciding who our robot servants will serve.

But note it’s not machines, but humans, who are defining the value of work. This means alternative realities are possible. This is not a claim that you must master AI or be replaced. It is a claim that we can create other ways of organizing work and care, and renegotiate what we value.

Here are a few moments showing we’re at peak hype cycle, when customers are no longer defining value or demanding technology, but capital holders are accelerating AI conversions.

Because the Accountants are Resistant

Accountants working in practices are not as excited by LLM-powered finance tools. They are deterministic folks, and don’t give much credence to the probabilistic promises of this batch of agents and workflows. They had a hard enough time connecting their ERP system to the cloud, and they want a rest. They understand compounding and can foresee how hallucinating agents can lead to compounding errors and losses. They worry about security and cash flow and are naysayers when it comes to adopting edge-case LLM systems.

To counter this resistance, investors are accelerating adoption along with roll-up plays. Private equity firms are systematically acquiring traditional accounting firms with explicit AI transformation mandates. Baker Tilly, the 10th largest US accounting firm, received a $1 billion private equity investment from Hellman & Friedman and Valeas Capital Partners in February 2024, the largest PE investment in the CPA sector to date. The stated purpose? “Investments in talent, technology, and further strategic acquisitions.”1 It’s not in the press release, but it’s easier to push AI-driven transformation of a century-old profession when it’s tied to the remaining employees’ earnouts.

This isn’t isolated. PE money “flooded the accounting M&A market” in 2024, totaling $2.3 billion in deals. Note that this is different from the roll-up strategies tried in prior market turns. It’s a wholesale reimagining of how professional services operate.

The Pattern: Capital identifies industries with high labor costs and standardizable processes. Capital holders acquire market leaders and mandate AI implementation to reduce headcount and increase margins. Capital then exits at higher multiples based on “AI-optimized” operations.

Vista Equity Partners has perfected this model across a range of investments in traditional web 2.0 technology, requiring each portfolio company to submit quantified AI benefits as part of operational planning. The results: 80% of Vista’s portfolio companies now deploy AI tools, with some seeing 30% increases in coding productivity.2

The machines learn to add and subtract. But what they calculate isn’t efficiency, it’s elimination.

Because I Need Less Heads

Public markets increasingly reward companies that frame workforce reductions as “AI efficiency gains,” creating perverse incentives for AI-driven displacement.

Marc Benioff stood before cameras and spoke the language of progress. “I’ve reduced it from 9,000 heads to about 5,000… Because I need less heads.” IBM explicitly replaced 200 HR employees with AI chatbots. Nearly 150,000 tech workers were laid off in 2024, with many cuts masked under terms like “restructuring” and “business optimization” to avoid “AI backlash” while advancing automation.3

These layoffs aren’t driven by financial distress. Microsoft cut 15,000 roles while reporting $70.1 billion in Q1 2025 revenue, a 13% increase. The layoffs align suspiciously well with the rollout of large AI systems, occurring during strong earnings periods, not financial struggles.

The Pattern: Companies discover that framing layoffs as “AI transformation” or “operational efficiency” generates positive market reactions. This creates a feedback loop where AI deployment becomes justified not by operational necessity but by market signaling requirements.

Frame human displacement as “AI advancement,” and watch your valuation soar. The machines weren’t just learning to do human work; they were learning to be the excuse for human abandonment. The machines learned the language of euphemism, that elimination could be called evolution.

Because You Can Go It Alone (with Machines for Co-Founders)

Investors are actively promoting the narrative of “AI-powered solo founders” who can build billion-dollar companies alone, fundamentally reshaping entrepreneurship expectations. Anthropic CEO Dario Amodei predicted we’d see “the first one-employee billion-dollar company” by 2026.4 OpenAI’s Sam Altman runs a “little group chat” of tech CEOs placing bets on when this will happen.

The numbers support this narrative shift: 35% of US startups incorporated in 2024 had a single founder, more than double the 17% in 2017. Solo founder startups climbed from 22.2% in 2015 to 35% in 2024.5

The reality was more complex. The machines had made it easier to build alone, but the money still flowed to familiar patterns, familiar faces, familiar zip codes.

Still, the mythology grew. Stories spread of individuals building empires with nothing but a laptop and an algorithm. The subtext was clear: if one person could do it all, why did anyone need teams? Why did anyone need colleagues? Why did anyone need… anyone?

Midjourney achieved $200 million ARR (annual recurring revenue) with 11 employees and no formal sales team. Cursor reached $100 million ARR in under a year with just 20 engineers. These become proof-of-concept for the “AI agent as co-founder” thesis.

The Pattern: Capital holders bet that AI tools can replace human collaboration in startup formation. This isn’t just an investment thesis, it’s social engineering, reshaping how we think about company building and team formation.

It was once a no-go to be a solo founder if you wanted funding. The machines have learned that, together with a single human, there’s an opportunity to market independence itself.

The Algorithm of Extraction

Step back and see the pattern. This isn’t about artificial intelligence becoming more capable. This is about capital holders investing with a herd-like mentality, using this version of LLMs as a tool to reshape society according to shared logic.

The sequence has been neither creative nor inventive:

  • Identify inefficiency (read: human labor)
  • Deploy capital to devalue it (read: buy companies, demand AI implementation)
  • Celebrate the efficiency gains (read: profit from human displacement)
  • Use success stories to justify the next round (read: normalize the play)

The machines aren’t making these decisions. Humans are. Humans with spreadsheets, investment theses, and profit targets. Humans who’ve convinced themselves that optimization is inevitable.

But optimization is always a choice about values. And the values embedded in our training data were set long ago: efficiency over empathy, profit over people, extraction over creation.

The accounting firms aren’t being bought to serve clients better. They’re being bought to serve them with fewer humans. The layoffs aren’t happening because the work disappeared. They’re happening because the profits from that work can now flow to fewer hands. The solo founder mythology isn’t about empowering individuals. It’s about normalizing isolation, making human collaboration seem inefficient, unnecessary, and outdated.

The machines are learning that human labor is a cost to be minimized, not a resource to be valued. They’re learning that efficiency means elimination, not enhancement. They’re learning that intelligence is about replacement, not collaboration.

They’re learning to dream the dreams that the lemmings dream: worlds where value flows upward, where human work becomes obsolete, where intelligence serves extraction.

This is a choice

But here’s what they’re not learning: how to value care work, community building, the irreplaceable complexity of embodied wisdom, and our relationships to the other beings in our ecosystems. How to measure what can’t be optimized, quantify what shouldn’t be commodified, automate what must remain human.

This isn’t technological inevitability. This is a choice. Herd mentality investor decisions, made by people with their hands on capital decisions, about what our tools, technologies, and training data should serve.

We could create systems that distribute value instead of concentrating it. We could develop intelligence that serves community flourishing instead of capital extraction. We could deploy capital in a way that follows different values. We can design systems that are not as dependent on traditional flows of mono-capital.

But that would require admitting that efficiency isn’t the only value worth optimizing for. That humans have worth beyond their productivity. That intelligence, artificial or otherwise, should contribute to the life-carrying capacities of the ecosystems we are embedded within.

The machines will learn whatever we teach the machines. Right now, we’re teaching the machines that humans are inefficient, that care is unprofitable, that extraction is innovation.

We could teach the machines something else. But first, we’d have to believe that something else is possible, to decide who gets to shape what intelligence serves.

Right now, that decision is being made in boardrooms and investment committees, by people optimizing for speculative flips rather than human flourishing. Unimaginative capital holders are deciding. But it doesn’t have to stay that way.

The machines are learning. We can still have our own dreams.

If we remember that we have the power to choose what they serve.

Baker Tilly Secures Strategic Investment Led by Hellman. Baker Tilly. February 5, 2024. https://www.bakertilly.com/news/baker-tilly-secures-strategic-investment-led-by-hellman

Field Notes from Generative AI Insurgency Global Private Equity Report 2025, Bain https://www.bain.com/insights/field-notes-from-generative-ai-insurgency-global-private-equity-report-2025/

Burleigh, E. Fortune. Salesforce CEO Marc Benioff says his company has cut 4,000 customer service jobs as AI steps in: ‘I need less heads’. Yahoo News. September 2, 2025 https://finance.yahoo.com/news/salesforce-ceo-marc-benioff-says-145324020.html?guccounter=1

Ortiz, S. First 1B Business with One Human Employee Will Happen in 2026, Says Anthropic CEO. ZDNet. May 22, 2025. https://www.zdnet.com/article/first-1b-business-with-one-human-employee-will-happen-in-2026-says-anthropic-ceo/

]]>
The Double-Edged Sword of Outcomes-Based Models https://reasonstreet.co/the-double-edged-sword-of-outcomes-based-models/ Fri, 12 Sep 2025 23:15:50 +0000 https://reasonstreet.co/?p=12807 Read More]]> A compelling narrative is emerging in business and finance: a shift from funding effort to paying for results. On the surface, outcomes-based models, results-based finance, and impact-linked finance appear to be a logical evolution toward greater efficiency and accountability. This approach, however, is not a panacea. It’s a complex tool whose application merits careful critique, as it can create as many problems as it solves.


Understanding the Mechanisms

Before critiquing them, it’s essential to understand the basic structures:

  • Outcomes-Based Business Models: In the corporate world, this means a vendor’s payment is tied to achieving specific client KPIs (e.g., increased revenue, reduced costs).
  • Results-Based Finance (RBF): Primarily in the public and development sectors, this model links funding disbursements to hitting pre-agreed targets (e.g., vaccination rates).
  • Impact-Linked Finance (ILF): A subset of RBF where financial incentives are explicitly tied to achieving social or environmental goals (e.g., carbon emission reductions).
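The three mechanisms above share a simple contractual core: a payer disburses funds only when pre-agreed targets are hit. A minimal sketch of that logic, with hypothetical KPI names, targets, and bonus amounts chosen purely for illustration:

```python
# Illustrative sketch of an outcomes-based contract (hypothetical terms,
# not drawn from any real agreement): a base fee plus a bonus for each
# KPI whose target is met.

def outcomes_based_payment(base_fee, kpi_results, kpi_terms):
    """Return total payment owed to the vendor.

    kpi_results: dict of KPI name -> achieved value
    kpi_terms:   dict of KPI name -> (target value, bonus if met)
    """
    payment = base_fee
    for kpi, (target, bonus) in kpi_terms.items():
        # Pay the bonus only if the achieved value reaches the target.
        if kpi_results.get(kpi, 0) >= target:
            payment += bonus
    return payment

# Hypothetical contract: $100k base, two outcome-linked bonuses.
terms = {
    "cost_reduction_pct": (10, 50_000),  # >=10% cost reduction -> $50k
    "revenue_growth_pct": (5, 25_000),   # >=5% revenue growth  -> $25k
}
results = {"cost_reduction_pct": 12, "revenue_growth_pct": 3}
print(outcomes_based_payment(100_000, results, terms))  # 150000
```

Note how the structure itself already illustrates the critiques that follow: only what is written into `kpi_terms` is paid for, so only what is contractible gets optimized.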

A Critical Perspective: Beyond the Hype

The logic of “paying for success” is seductive, but it rests on a set of assumptions about objectivity, power, and measurement that begin to fray under closer inspection. Adopting these models requires grappling with several challenging considerations.

The Tyranny of the Metric

These models are built on quantification. But what can be easily measured is not always what is most valuable. This creates a significant risk of distorting priorities to favor what is contractible over what is truly important. For example, a program that pays based on “number of people trained” may incentivize rapid, low-quality training sessions while failing to measure actual skill acquisition or employment—the things that truly matter. This phenomenon is often summarized by Goodhart’s Law: “When a measure becomes a target, it ceases to be a good measure.”

The Politics of Defining “Success”

Who gets to define the outcome? In these models, the entity with the capital (the customer, the funder, or the investor) typically dictates the terms of success. This power imbalance is a challenge in itself. In international development, it can mean that external funders impose metrics that do not align with a local community’s own definition of progress. In business, it can force a smaller partner into a contract that serves the narrow interests of the larger client at the expense of its own long-term health.

Unintended Consequences and Hidden Costs

By focusing intensely on achieving a narrow, pre-defined outcome, these models can create significant blind spots. Optimizing for a single goal often leads to unintended negative consequences in other areas. A sales team driven solely by a revenue target (the “outcome”) might resort to aggressive tactics that damage customer relationships and erode brand trust over time. A social program focused only on housing placements may neglect the crucial wrap-around services that determine long-term stability. The model rewards the visible target, often at the expense of the invisible, but vital, context.


The move toward an outcomes-based approach is not inherently good or bad. It is a model, a way of configuring an economy to value financial and non-financial returns. And like any powerful representation, its value depends entirely on the wisdom and foresight with which it is applied. These models offer a way to create accountability, but they are not a substitute for critical thinking. Blindly adopting them without a deep understanding of their limitations and the power dynamics they create is a recipe for solving the wrong problem well.

Designing a business or financial model that works in the real world requires navigating these complexities. It’s about building a strategy that is not only measurable but also meaningful and resilient.

At Reason Street, we help you design and implement thoughtful outcomes-based strategies that account for the complex realities of your business and your market.

]]>
The Business Model of LLMs is… People? https://reasonstreet.co/the-business-model-of-llms-is-consulting-revenue/ Mon, 14 Jul 2025 20:31:52 +0000 https://reasonstreet.co/?p=12482 Read More]]> AI’s Consulting Confession: When “Revolutionary” Tech Needs Human Handlers

So the business model answer to the LLM era of so-called AI is… consulting?

OpenAI recently launched enterprise consulting services, charging at least $10 million per client and deploying “Forward Deployed Engineers” who embed directly with client organizations, a model that mirrors Palantir’s approach of rebranding IT-services integration in military-grade language. This came right after OpenAI landed a $200MM deal with the DoD.

Meanwhile, Cursor significantly raised prices as demand for AI-assisted coding exploded, with developers complaining and cancelling, hopping to the next model that hasn’t yet been pressured to start charging something close to the actual cost of delivering the product.

The Wizard Behind the Purple Curtain

These aren’t just business model pivots. We’ve learned this lesson before, with IBM Watson.

In the early years, companies had a hard time escaping the “services as software” trap. You’d sell an enterprise contract, but behind the purple curtain, it was mostly people doing the work, Wizard of Oz style.

Industry whispers about IBM Watson back then were always the same: “You know it’s just a lot of people cleaning and labeling data.” Watson was eventually wound down, but not before burning through billions, proving that human-powered “AI” doesn’t scale.

The VC Economics Paradox

The VC playbook was clear: if you wanted to be a real software business that could grow fast, you had to find repeatable use cases that didn’t require armies of humans, build a sticky interface, and design proper SaaS economics.

Inside a VC-backed firm during the early-2010s wave of machine-learning AI, I saw this firsthand. When I sold a $2MM enterprise services deal, VCs would frown. When I sold a $150k software deal, they’d pat me on the head, because software was supposed to scale without people in the cost structure, and was therefore “higher valuation” revenue.

My competitors selling $24k and $10k “seats” got acquired for rich sums because they’d cracked the code: fast-turn, high-margin, no-humans-in-COGS SaaS businesses, with a low-friction sales motion priced just below the threshold where a Director had to get approval to pay for the deal.
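The economics behind the VCs’ frown can be made concrete. A hedged sketch, using the deal sizes from the text but assumed delivery costs (the margins are illustrative, not the author’s figures):

```python
# Why "no humans in COGS" revenue was valued higher: a toy gross-margin
# comparison. Deal sizes come from the text; the cost-of-delivery
# figures are hypothetical, chosen only to illustrate the pattern.

def gross_margin(revenue, cost_of_delivery):
    """Gross margin as a fraction of revenue."""
    return (revenue - cost_of_delivery) / revenue

# $2MM enterprise services deal: heavy human delivery cost (assumed).
services = gross_margin(2_000_000, 1_400_000)

# $150k software deal: mostly hosting in COGS (assumed).
software = gross_margin(150_000, 30_000)

print(f"services margin: {services:.0%}, software margin: {software:.0%}")
# services margin: 30%, software margin: 80%
```

Under these assumed costs, the services deal earns more absolute gross profit, but the software deal’s margin compounds with every additional seat sold, which is what the valuation multiple rewards.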

Full Circle, 200x the Price

I should have just called my human-heavy enterprise deals Forward Deployed Engineering and told those VCs that premium consulting is the business model.

Apparently, we’ve come full circle, except now we’re charging 200x the price and calling it the most revolutionary technology transition in the history of all things, at risk of replacing all of our jobs, but creating lots of consulting gigs in the short term.

What This Really Reveals

The services push reveals that AI often requires significant human contribution, built on the extraction of energy, water, and human creativity, as well as underpaid data labelers and model trainers. It also relies on large defense contracts to generate a business model, the kind that people actually pay for, rather than freemium, subsidized consumer chatbots.

Most tellingly, it reveals who we are and what we believe is worthy of value.

What patterns are you seeing in AI business models? Share your observations in the comments.

]]>