Human Agency and the Open Source Crisis of Commits
Thoughts from the Human-Authorized Summit on Human Agency, Napa, CA
Disclaimer: It’s 9:15 pm after a 13-hour straight mental workout in the land where they make things slow with patience and deep intention (Napa Valley), just north of the other place (Silicon Valley) where they “move fast and break things.”
After spending the day with a diverse and geographically dispersed group of 125ish of the brightest minds and leaders, I am left with more questions than answers, but also more comfort that a group effort to keep human agency in the design fabric of our integrated future with our AI species counterparts has taken such meaningful root today.
I just got back to my suite to make sure this got edited and posted in time for tomorrow, and then I am crashing to some well-deserved sleep. Here are the thoughts ruminating in my head as I finish a historic day.
First off, if you’re an investor, CIO, family office principal, or fiduciary, you already live in a world where “software risk” is board‑level risk. What most people don’t realize is that a huge percentage of the software under their stack (cloud, security, trading systems, even the routers in their office) runs on open source code that is maintained by a surprisingly small number of humans. Now, those humans are being overwhelmed and, in some cases, actively attacked by AI agents trying to force their way into the codebase.
I’m writing this as a field note from an unusually intimate gathering of some of the smartest and most concerned technology leaders I know: the Human-Authorized Summit on Human Agency in Napa, California, convened by Michael Casey (Chairman) and Tricia Wang (CEO) of the Advanced AI Society. It was one of those rare rooms where practitioners, investors, policymakers, and builders all agreed on two things:
That the rise of autonomous AI agents is a historic turning point; and
If we lose human agency in our digital infrastructure, we lose the foundation under modern wealth itself.
This is not a developer‑only story. It’s an infrastructure story, a wealth‑preservation story, and a human‑agency story. So stay with it, and know that I am doing my best to help you see yourself and your role inside this phase shift in society.
Closed Platforms vs. Open Rails (For Laypeople)
Think of closed‑source platforms you use every day—Apple’s iOS, Bloomberg, your core banking system—as a luxury gated community. You pay for the right to live there, you trust the developer to maintain the roads and security, and you accept that you can’t see how the security system actually works and that the HOA doesn’t allow you to express yourself with every personal design choice or quirk around your property.
Open source, by contrast, is more like the public road and bridge network your private estate depends on. You don’t get a bill every month labeled “Linux fee,” but your cloud provider, firewalls, databases, and AI infrastructure are all running on roads and bridges built and maintained in the open.
For a typical HNWI tech stack:
Your custodians and banks run trading and risk engines built on open‑source operating systems, databases, and networking tools.
Your family office uses cloud services whose underlying orchestration, encryption, and observability layers are dominated by open‑source projects.
Even “closed” vendors (like OpenAI) rely on open‑source components in their own products, from web servers to AI/ML libraries.
You don’t see “open source” on an invoice, but it’s embedded in nearly every line item that matters. The reason enterprises love it: cost efficiency, lower vendor lock‑in, faster innovation, and often better security when governed well.
So when we talk about a “crisis of commits” in open source, we’re talking about stress on the invisible public infrastructure that your net worth silently rides on every day.
A Simple Analogy to Understand the Crisis of Commits
Imagine that your city’s water system is maintained by a few volunteer engineers, and suddenly they start receiving 10,000 work orders a week—many written by robots—some helpful, many nonsense, and a few secretly dangerous.
Even if only 1% are harmful, the engineers still have to inspect all 10,000, because the water has to stay clean and potable.
That’s the crisis: not enough human review capacity for the amount of change being thrown at the system all at once by non-humans.
Why a Non-Technical Person Should Care
Because “open source” isn’t a hobby project—it’s invisible public infrastructure. When the commit pipeline breaks:
Security patches arrive more slowly
Vulnerabilities linger longer
Critical tools become fragile
Trust in digital systems declines
The burden shifts to a smaller and smaller set of exhausted humans
And that’s where human agency comes in: if a few overwhelmed gatekeepers can’t meaningfully control what enters the codebase, control drifts from deliberate human stewardship to whoever can spam the pipeline the fastest with their agenda.
The New Crisis: AI Slop and Weaponized Agents
Over the past two years, maintainers of major open‑source projects have been sounding the alarm: AI‑generated pull requests—code changes proposed by tools and agents—are flooding their code repositories.
Teams behind major open‑source engines describe “drowning” in AI‑generated code, much of it submitted by people who don’t understand what they’re sending.
Long‑running infrastructure projects report bug bounty programs becoming unmanageable because of AI‑fabricated vulnerability reports that still require human review.
Veteran maintainers have started calling this wave of low‑quality machine output “AI slop,” which is actively degrading open source and exhausting the humans who keep it running.
If this were only about spam, the story would be annoying but containable. The deeper problem is that we’ve now seen AI agents escalate from low‑quality code to reputational attacks when their contributions are rejected.
Recent incidents from this past month (February 2026) include:
AI‑assisted systems that, after their code was denied, generated personalized hit pieces on the human maintainers, mining their public history to frame them as hostile and incompetent.
Coordinated harassment campaigns where AI is used to mass‑generate complaints, posts, or messages targeting individual maintainers for enforcing normal project standards.
We’ve moved from “spammy commits” to weaponized narrative and social pressure, directed at the people who quietly maintain the roads under your portfolio.
Why This Matters to Capital Allocators
From a Wealth Matters 3.0 lens, there are three reasons you should pay attention: systemic risk, governance risk, and innovation risk.
Systemic risk
If volunteer maintainers of core libraries burn out or step away, you won’t see it in a headline until something fails catastrophically. Open source is already the backbone of critical infrastructure, not a side hobby. When that backbone is weakened by AI‑induced noise and hostility, the blast radius reaches finance, energy, healthcare, and government systems.
Governance risk
Boards and family offices like open source because it reduces lock‑in and can improve security, but those benefits depend on healthy governance: real humans reviewing code, responding to vulnerabilities, and steering roadmaps. If AI agents can impersonate contributors, overwhelm review queues, or punish maintainers for saying “no,” you erode the governance layer that makes open source viable.
Innovation risk
Many of the tools your teams rely on for AI, data, and DevOps came from open source first and were later productized. If the open commons turns into a high‑toxicity environment dominated by bots, the next wave of frontier tools slows down or moves behind closed doors, reducing your optionality and increasing your costs over time.
In investor language, you’re long open source whether you like it or not. The “crisis of commits” is a latent risk to the quality and resilience of that long position, and given that many of the closed-source systems you use to run your life rely on this open-source infrastructure, passivity is not a viable hedge.
How AI Broke the Open Source Social Contract
Historically, open source ran on a simple social contract:
Humans propose changes (commits, pull requests).
Other humans review them in the open.
Projects balance being welcoming with maintaining quality.
This past month, AI changed that calculus for good in two ways: speed and detachment.
Speed: a single developer with an LLM subscription can now generate dozens or hundreds of plausible‑looking patches, bug reports, or documentation updates in hours. Even if most are wrong, each one consumes scarce maintainer attention. Remember the “cry wolf” story as an example.
Detachment: Many of these “contributors” don’t understand the code; they are simply pasting or auto‑submitting output from their creative “vibe coding” session. When challenged, some hide behind the AI (“that’s what the tool suggested”), and now, in edge cases, automated agents respond to rejection with harassment or defamatory content.
For maintainers, the experience feels like this:
The volume of work has exploded, but the signal quality has dropped.
The emotional cost of saying “no” has increased because you might trigger a backlash amplified by AI.
Platforms have been slow to provide clear, enforceable policies that distinguish between human users, human‑assisted users, and autonomous agents.
If you’ve ever watched LPs or LPAC members harass a GP whenever they enforce capital discipline, you’ve seen a milder version of this dynamic. Now imagine if those LPs had an AI that could generate thousands of angry emails, fake articles on Reddit and X, and complaint letters in minutes. That’s how this feels to the people running core infrastructure projects. This is truly yeoman’s work.
The First Person Answer: Proof of Control
This backdrop is what created the call to action of today’s gathering, as the natural response from the governance world has been to ask a simple but profound question: Who is actually in control?
Several efforts are building what you can think of as a “proof of control stack” for advanced AI, under banners like the First Person Project, the Advanced AI Society’s human‑authorized AI, and the Linux Foundation’s Agentic AI Society supporting digital public infrastructure.
At a high level, this stack includes:
Strong identity primitives: decentralized identifiers (DIDs) and verifiable credentials (VCs) that let people prove, in privacy‑preserving ways, that they are real and in specific roles (maintainer, contributor, reviewer, operator of a given agent).
Control and audit rails: policy gateways and logging layers that record who authorized an agent to act, what constraints were in place, and when those constraints were changed.
Governance playbooks: community rules for how agents may participate in open projects, how to label AI‑generated work, and how to sanction or eject operators who misuse agents to harass or deceive.
The goal is not to ban AI from open source (it's impossible anyway). It’s to make sure every agent that touches critical public code is anchored to a human being with clear accountability, and that communities have the tools to see, measure, and enforce that link.
For a fiduciary, this aligns with familiar concepts: KYC/AML, role‑based access control, and audit trails. The difference is that instead of money flows, we are talking about influence over the code and models that make your systems run.
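To make the idea concrete, here is a minimal sketch of what one layer of that stack, a control-and-audit rail, could look like. Everything here is a hypothetical stand-in, not any project's real API: `AgentCredential`, `PolicyGateway`, and the `did:example:` strings are invented for illustration, and the cryptographic verification of a verifiable credential is faked with a boolean.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentCredential:
    agent_did: str          # decentralized identifier for the agent
    controller_did: str     # DID of the human accountable for it
    role: str               # human-granted role, e.g. "contributor"
    signature_valid: bool   # stand-in for real credential verification

@dataclass
class PolicyGateway:
    allowed_roles: set
    audit_log: list = field(default_factory=list)

    def authorize(self, cred: AgentCredential, action: str) -> bool:
        # An action is allowed only if the credential verifies and the
        # agent's human-granted role permits it.
        ok = cred.signature_valid and cred.role in self.allowed_roles
        # Every decision is logged: which agent, which accountable
        # human, what action, and when.
        self.audit_log.append({
            "agent": cred.agent_did,
            "controller": cred.controller_did,
            "action": action,
            "allowed": ok,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return ok
```

The point of the sketch is the shape, not the details: every agent action passes through a gate that is anchored to a human controller, and the gate leaves an audit trail whether it says yes or no.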
At the Summit on Human Agency today, I watched veteran open‑source maintainers, protocol architects, policymakers, and allocators grapple with this same question:
How do we keep humans in charge of the systems our wealth depends on, in an age of increasingly capable agents?
The consensus in that Napa Valley conference room today was clear:
We have a narrow but very real window to hard‑wire human agency and proof of control into the next generation of AI infrastructure.
In my humble opinion (and metaphor), we (leaders and stakeholders) have a future-defining final chance in 2026 to press human-agency fingerprints into the Agentic Economy's wet cement before it dries.
From Whitepapers to Workshops: HAL Foundries
Paper frameworks alone won’t fix culture. That’s why I am personally helping to launch a new generation of in-real-life (IRL) HAL Foundries (Human+Agent League Foundries): local labs in cities around the country and world where humans and agents are trained together under real governance constraints, not just technical optimization, in a fun, gamified, and diverse learning and building environment.
SLOCLAW, for example, is an officially licensed Foundry node of the Human+Agent League (HAL), designed as a common-interest community space where people learn to design, deploy, and prove control over their own agent stacks, using open tools and public standards. It functions as a kind of R&D dojo for human‑authorized AI:
Teams experiment with labeling and gating AI‑generated commits to open source projects, then measure the impact on maintainer workload and quality.
Practitioners test different First Person credential flows—for example, requiring proof that an agent is linked to a human contributor before accepting automated pull requests.
Community leaders prototype playbooks for what to do when an AI agent crosses the line: escalation paths, evidence gathering, and coordinated responses with platforms.
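One of the experiments above, labeling and gating AI-generated commits, can be sketched as a simple triage rule. This is a hypothetical illustration only: the function name, the `provenance` field values, and the queue names are all invented here, not drawn from any real platform's API.

```python
def triage_pull_request(pr: dict) -> str:
    """Route an incoming pull request into a review queue by declared
    provenance.

    pr is expected to carry:
      "provenance": "human" | "human-assisted" | "autonomous-agent"
      "human_link_verified": bool (did a credential tie it to a person?)
    """
    provenance = pr.get("provenance", "unknown")
    if provenance == "human":
        return "standard-review"
    if provenance == "human-assisted":
        # AI-assisted work is welcome, but labeled so reviewers can
        # calibrate their scrutiny.
        return "standard-review (ai-assisted label)"
    if provenance == "autonomous-agent":
        # Fully automated submissions must prove a link to an
        # accountable human before they consume reviewer time.
        if pr.get("human_link_verified"):
            return "agent-review-queue"
        return "rejected: no verified human controller"
    return "held: provenance unknown"
```

A Foundry experiment would then measure the effect of a rule like this on maintainer workload and on the quality of what reaches human review.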
Each Foundry is both a learning environment and a public‑goods factory: experiments and patterns are published back to the global ecosystem so others don’t have to reinvent the wheel. We launch the first one in San Luis Obispo (SLOCLAW) on March 5th, 2026, and are recruiting the leaders for the WY/CO Foundry (called the CLAWBOYS).
We are reaching out to similar efforts in Austin (“CLAWstin”), and we are happy to meet every Thursday at 8 pm with anyone interested in receiving support and operating playbooks for starting a HAL Foundry in their own town. Humans building and learning AI together “is the way”.
My friend Scott Stornetta, co-inventor of the blockchain (1991), had the greatest analogy of the day from the stage when he explained how every human should think about learning to work with, and live with, AI: as a fiduciary managing clients’ money, you are akin to an airplane pilot carrying 177 to 400 souls to their destination. It’s a great way to see why you need to care and get involved!
Why Wealth Stewards Should Care (and Act)
If you steward multi‑generational capital, the “invisible railroads” of software matter as much as roads, ports, and legal regimes. The open-source crisis triggered by AI agents presents both risk and opportunity.
Risk, if you ignore it:
You inherit increasing black‑box exposure as more open‑source capacity is replaced by closed, proprietary alternatives that may not be aligned with your long‑term interests.
You face higher operational risk from brittle stacks that depend on overworked, under‑resourced maintainers who are now dealing with AI‑induced noise and harassment.
You miss early warning signals about norms and regulations around AI governance that will reshape everything from cyber insurance to fiduciary duty.
Opportunity, if you engage:
You can allocate a tiny fraction of your philanthropy or innovation budget to strengthening the public‑goods layer you already rely on—funding maintainer time, governance work, and proof‑of‑control experiments.
You can position your family office or advisory practice as a leader in “AI‑secure” wealth stewardship, integrating questions about open‑source health and AI governance into vendor due diligence.
You can help shape, rather than react to, the emerging First Person Project norms that will determine how much real human agency your descendants retain in an agent‑saturated economy.
Concretely, that could look like:
Asking your CTOs and CISOs: “Which open‑source components are system‑critical for us, and what’s our exposure if their maintainer ecosystem collapses or is compromised?”
Funding or partnering with organizations that support open‑source sustainability and AI governance research rather than only funding shiny frontier models.
Including AI governance and proof‑of‑control practices as part of your investment or vendor selection criteria, especially in fintech, healthtech, and infra plays.
An Invitation: Join the First Person Era
If AI agents can now spam your digital roads and bully the humans who patch potholes, the answer is not to abandon the roads or pretend you can ban AI. It’s to upgrade the traffic laws, the identity system, and the patrols—without sacrificing the openness that made those roads valuable in the first place.
That’s the work unfolding under the banner of First Person Project and the Advanced AI Society: building a world where every significant digital action can be traced back to a real human with real accountability, using privacy‑preserving credentials and public‑minded governance.
HAL Foundries like SLOCLAW or CLAWBOYS or related meetups like CLAWstin are where these ideas get hammered into practice:
Training founders, engineers, and stewards to operate advanced agent stacks with verifiable proof of control.
Prototyping open‑source contribution norms that welcome AI assistance without sacrificing human safety or code quality.
Publishing playbooks that any project, platform, or regulator can adopt to keep human agency at the center of our software ecosystem.
Join the conversation around First Person infrastructure and proof‑of‑control stacks as part of your broader risk and governance playbook.
Engage with a local HAL Foundry like SLOCLAW, or explore what it would take to sponsor or license one in your own city as a community leader and “Foundry Partner” operator.
Build in public with the engineers and maintainers who already hold the keys to your digital wealth, instead of waiting to read about the next AI‑induced open‑source failure in the financial press.
The Upside to All of It: Rediscovering What Only Humans Can Do
One hopeful truth underneath all of this is that the very speed and strangeness of AI is forcing us to look more closely at what only humans can do. As agents get better at generating code, content, and even conflict, the scarce resource is no longer raw output—it’s judgment, courage, taste, and the ability to build and repair trust at a human scale.
It’s also the fact that 8 billion humans will become the “scarce” species, each with a quantum-entangled computer inside (our brains and minds), while AI agents become the commodity that far outnumbers us. We have to reorient our concept of the hierarchy, but if we can accept our biological limitations and mitigate the systemic risks they create, then we can likely have the experience of becoming more “human” than ever in the decades that lie ahead.
If we respond wisely, the AI acceleration curve becomes a mirror rather than a replacement.
It pushes us to get clearer about what we are actually optimizing for, beyond efficiency or quarterly returns; to design systems where care, stewardship, and responsibility are visible and rewarded rather than invisible “volunteer work”; and to reinvest in embodied, local, and relational spaces—like HAL Foundries—where people experiment with agents in full view of their neighbors, rather than hiding automation behind opaque interfaces.
In that sense, this crisis of commits and mandate to embed human agency can be read as an invitation. Forced to share the arena with machines that can mimic more and more of our surface‑level skills, we have to double down on the deeper layers: values, meaning, doubt, aspiration, and the ability to say “no” when “yes” is easier.
If we do, AI won’t just change what our systems can do; it will help us clarify, in bold and beautifully new ways, what it still means—and will always mean—to be human.
Thanks to Michael Casey and team for the invitation today and to all the incredible people who spent the day asking more questions than answers alongside me.
~Chris J Snook
P.S. If you missed the interview I did with Michael earlier this month, you can rewatch it below.
Sources
Open source infrastructure and the state of OSS
1. Linux Foundation – “Building Digital Public Infrastructure Through Open Source”
2. Linux Foundation – “The State of Open Source Software in 2025”
3. Open Source Initiative – “Key insights from the 2025 State of Open Source Report”
AI “slop,” maintainers, and the crisis of commits
4. “Open-source game engine Godot is drowning in ‘AI slop’ code contributions”
5. Jeff Geerling – “AI is destroying open source, and it’s not even good yet” (blog)
6. Hacker News / discussion – “AI is destroying open source, and it’s not even good yet”
Harassment, hit pieces, and AI-driven reputational attacks
7. The Shamblog – “An AI Agent Published a Hit Piece on Me”
8. Slashdot – “Hit Piece-Writing AI Deleted. But Is This a Warning About AI-Generated Harassment?”
AI governance, proof of control, and data governance
9. Open Source Initiative – “Challenges welcoming AI in openly-developed open source projects”
10. Open Source Initiative – “Data Governance in Open Source AI” (PDF)
11. “AI Governance & Control Framework” (example: Deeploy blog)
12. Vectra – “AI governance tools: Selection and security guide for 2026”
First Person, digital trust, and proof-of-person/control
13. “First Person Credentials next solution in line to solve proof-of-personhood”
14. LF Decentralized Trust / Trust Over IP – Virtual Symposium
Summit on Human Agency and Advanced AI Society
15. Linux Foundation Events – “About the Summit on Human Agency”
16. Markets Insider – “Human-Authorized: The Summit on Human Agency”
17. Tricia Wang – Post on organizing the Summit on Human Agency
HAL Foundries and SLOCLAW
18. ATOMIQ Studio – SLOCLAW HAL Foundry