The Contributor Network

Directed graph of the CCQ collaboration: governance (blue, black border), contributors (green), invited scientists (amber), unavailable (red), candidates (gray). Paper versions shown as boxes. Hover any node for details. Drag to explore. Updated as the collaboration evolves.

The Making of Cooling Climate Quickly

A case study in evolving Human/Natural/Artificial Co-Intelligence.

It Began with a Question About Physics

On the morning of March 4, 2026, Jon Schull typed a question into claude.ai:

"Given that blackbody radiation increases as the fourth power of temperature, and given that the greenhouse effect is tuned to infrared radiation, and given that bare ground is much hotter than vegetated ground — what is the impact of vegetation on global warming?"

He added: "Critically research my premises and aim for an estimate couched as a percentage change in energy captured by the greenhouse effect."

This was not a policy question. It was a physics question — an attempt to ground-truth a physical intuition using the Stefan-Boltzmann law. The answer would turn out to be more consequential than either party expected.
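The T⁴ intuition behind the question is easy to ground-truth directly. A minimal sketch, assuming illustrative surface temperatures (45 °C bare ground versus 30 °C vegetated ground; these are stand-in values, not the paper's estimates):

```python
# Stefan-Boltzmann law: radiated flux grows as the fourth power of temperature.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_flux(temp_k: float) -> float:
    """Blackbody emission in W/m^2 at absolute temperature temp_k."""
    return SIGMA * temp_k ** 4

bare = radiated_flux(318.15)       # hypothetical 45 degC bare ground
vegetated = radiated_flux(303.15)  # hypothetical 30 degC vegetated ground

# A 15 K surface difference produces a disproportionate flux difference,
# because flux scales as T^4 rather than linearly in T.
print(f"bare:      {bare:.0f} W/m^2")
print(f"vegetated: {vegetated:.0f} W/m^2")
print(f"ratio:     {bare / vegetated:.2f}")
```

With these stand-in temperatures the ratio comes out near 1.2, i.e. roughly 20% more upwelling infrared from the hotter surface, which is the kind of nonlinearity the original question was probing.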


The Spark

What began as a request for a quantitative check became, over the next nine hours, a systematic multi-mechanism analysis. The session moved through the physics layer by layer: surface temperature differentials and the T⁴ forcing premium (the "Boltzmann factor"), then latent heat redistribution through evapotranspiration, then cloud albedo driven by vegetation-sourced aerosols, then the marine analog — ocean phytoplankton as biological cloud-condensation nuclei, and Hansen's recent analysis of marine cloud decline following the 2020 IMO shipping regulation.

That last mechanism — the unintended global experiment of cutting marine fuel sulphur by 80% and watching the planet warm measurably within months — became the paper's rhetorical spine. It was empirical proof that aerosol-cloud forcing was real, large, fast, and had been missing from model predictions. Not a theoretical vulnerability in carbon-centric climate accounting — an observed failure, already in the literature.

By the end of the Mar 4 session, the assistant had produced a quantified comparison table with 23 footnotes covering six distinct cooling mechanisms, each estimated in W/m². Jon named the goal: "a standalone synthesis with a non-technical narrative for a sophisticated non-specialist, followed by a comparison table, then a technical justification." That document existed — the scientific case was assembled. What remained was to make it a paper.


The Drafting Marathon

The Mar 5 session opened at 2 AM UTC. Jon arrived with a commented Word document — he had exported the prior artifact, added margin comments, and was asking for v4 in HTML. The first challenge was technical: the AI couldn't access the Google Drive folder where supplementary materials lived. Jon uploaded a zip. The assistant began integrating.

At message 20, after the assistant had produced a v4 draft with five interactive Chart.js visualizations, Jon issued an instruction that changed the session's character: "Great — now reread the document and become a co-analyst, not just a code and content jockey."

The assistant complied. It reread the full document and came back with a structural critique: the abstract was doing too little work; the shipping story needed to move earlier as the motivating evidence rather than a calibration footnote; the two-audience structure (non-technical narrative followed by technical justification) needed more explicit signposting. This was the moment the collaboration shifted from human-directs-AI to something closer to mutual editing.

Over the next twelve hours, the session moved through v5 (a strategy brief), then v6 (the first full restructured HTML with the W/m² framework as unifying metric, the shipping regulation as opening hook, and the comparison table as visual centerpiece). Jon made a key positioning decision: ERA would not frame itself as a critic of the IPCC carbon framework, but as an expander — adding mechanisms that the current framework underweights. This shaped every subsequent framing choice.


The Restructuring

They started fresh — a new Opus Extended Thinking session on Mar 6, opened with a zip containing v6 HTML and two specification files. The specification was itself a product of the collaboration: Jon had articulated editorial decisions into a machine-readable format, and the assistant had helped structure the handoff across the context limit.

The Mar 6 session was the deepest intellectual work. It produced the paper's 4-panel energy budget figure — a disaggregated version of the standard IPCC energy budget schematic, showing each mechanism (bare ground, vegetated ground, ocean, cloud) as a separate panel with W/m² annotations. It also surfaced the Boltzmann extremes insight: a Jensen's inequality argument showing that temperature variability across surface types creates a systematic bias in greenhouse forcing estimates.
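The Jensen's inequality point can be illustrated numerically. A sketch using made-up surface temperatures (not the paper's data): because T⁴ is convex, the average of T⁴ over a patchwork of hot and cool surfaces always exceeds T⁴ of the average temperature, so an emission estimate computed from a single mean temperature systematically understates the true mean emitted flux.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

# Hypothetical patchwork of surface temperatures in kelvin (illustrative only).
temps = [295.0, 303.0, 310.0, 318.0, 325.0]

mean_temp = sum(temps) / len(temps)
flux_of_mean = SIGMA * mean_temp ** 4  # emission computed at the mean temperature
mean_of_flux = sum(SIGMA * t ** 4 for t in temps) / len(temps)  # true mean emission

# Jensen's inequality for the convex function T -> T^4:
# mean(T^4) >= (mean T)^4, with equality only when all temperatures are equal.
bias = mean_of_flux - flux_of_mean
print(f"flux at mean T:  {flux_of_mean:.1f} W/m^2")
print(f"mean of fluxes:  {mean_of_flux:.1f} W/m^2")
print(f"systematic bias: {bias:.1f} W/m^2")
```

The bias is always positive for any heterogeneous temperature distribution, which is the sense in which surface-type variability creates a one-directional error in forcing estimates.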

At message 120, the assistant identified a problem it had created: "We've been adding excellent material but the paper has grown from a clean 12-sentence argument into something that risks losing its narrative drive under the weight of additions." This self-diagnosis led to a restructuring of the abstract.

The Mar 6 session hit the claude.ai context limit at message 149.


Context Limits as Friction

Context limits were the primary friction point throughout the project. Three recovery patterns emerged:

  1. Fresh session with file upload: Jon would zip the current working files and upload them to a new session with a handoff message. This worked but lost conversational context.
  2. Specification-driven handoff: For larger restructuring tasks, Jon and the assistant developed the practice of writing detailed specification files before starting a new session — capturing not just the task but the reasoning.
  3. Claude Code for deterministic operations: When the task was applying a defined set of changes rather than generating content, Claude Code was more appropriate. The final deployment and the 185 Google Doc edits were both handled this way.

The workflow that emerged — claude.ai for ideation and drafting, specification files as cross-session memory, Claude Code for deployment and deterministic editing — wasn't planned at the outset. It evolved under pressure.


The Review Cycle

After deployment to the ERA website on March 11, Jon and colleagues reviewed the paper in a shared Google Doc. The doc generated 185 editorial items — structural, substantive, and copy-edit. Claude Code read the Google Doc changes and applied them systematically to the HTML source, with a PR and review trail in GitHub.

The abstract revision — splitting a dense monolithic paragraph into four clear, independently digestible paragraphs — was the most substantive change. It also required updating the same text in three locations in the HTML file, which Claude Code handled correctly.


What Worked

The W/m² framework. Choosing a single quantitative metric as the basis for comparing every mechanism gave the paper internal coherence. Carbon accounting doesn't provide this; it measures stocks (CO₂ concentration) rather than flows (forcing rate). The W/m² framing made the comparison table possible and made the paper's central argument concrete.

The shipping regulation as narrative anchor. A natural experiment, already documented, already in the literature, already surprising to most readers. It made the paper's claim falsifiable and grounded — not theoretical.

The co-analyst relationship. The shift at message 20 of the Mar 5 session, when Jon asked the assistant to become a co-analyst rather than an executor, changed the output quality. The assistant began flagging oversold claims, proposing structural reorganizations, and catching its own errors.

Specification files as cross-session memory. The practice of writing detailed specifications before handing off to a new session — capturing decisions, not just tasks — solved the context limit problem better than any other approach tried.


The Circle Expands

By late March, the paper had left the lab. Jon began sharing it with scientists and colleagues, asking for review, inviting commentary. The responses arrived in clusters — some by email, some as Google Doc suggestions, some as forty-minute calls with colleagues who had been thinking about exactly these mechanisms for years.

The early responses sorted into types. Philip Bogdonoff, ERA's strategy advisor, had already contributed copy-edits that sharpened the prose without touching the science. Peter Bunyard arrived (Mar 24) with domain-specific corrections on gymnosperm transpiration rates and the latent heat radiation timescale. Rob de Laet turned out to be the most consequential early reviewer — a 55-minute call on Mar 24, followed by a six-mechanism stack decomposing the W/m² estimates and a letter on top-of-atmosphere (TOA) versus surface heat that reframed how the paper argued its numbers.

Anastassia Makarieva — the biotic pump theorist whose foundational work ran through the paper's third mechanism — declined co-authorship (Mar 23) but sent a detailed mechanism critique that was more useful than most acceptances would have been. Her central point: the biotic pump is not "surface cooling." It is a continental-scale water delivery and circulation system that operates at altitude, where the climate effects are global rather than local. Her critique forced a restructuring of the mechanism framing that made the science more defensible.

Others arrived through the network: Stuart Cowan at the Buckminster Fuller Institute (lunch with Jon and Sara Blenkhorn, Mar 28, Berkeley). Frederic Jennings, a PhD economist who offered co-authorship at a board meeting. Didi Pershouse with ten comments challenging the framing of soil mechanisms (Apr 8). Brian von Herzen of the Climate Foundation, whose marine permaculture work became Appendix I and who became a confirmed co-author with editor access (Apr 11). By April 2026, the contributor circle had grown to twenty-plus people across six countries. On Apr 7, v10e was sent to the full reviewer list — the first version circulated widely — along with formal invitations to Douglas Sheil, Michal Kravčík, Stefan Schwarzer, and others cited in the paper.


The Governance Pivot

On April 9, Jon convened a governance meeting with Philip Bogdonoff and Ananda Fitzsimmons. The question on the table was strategic: what kind of document was this, and how should the collaboration be formalized?

The discussion surfaced an insight that Philip articulated: the paper's credibility could be amplified not through traditional peer review — which would take eighteen months — but through a deliberate coalition-building process. CCQ would be published as a Creative Commons resource, and the scientists whose work it synthesized would be invited to build on it in their own peer-reviewed work, on the condition that they cite it. This arrangement frees collaborators to publish their own variants without rights fights, acknowledges that the paper is a synthesis across many antecedents (including substantial AI drafting), and sidesteps the authorship-credit debate that can stall coalitions of this size. The key was sequencing: show the paper first to scientists chosen for intellectual respect rather than institutional prestige, and let the coalition grow from that foundation. The working principle for growth: existing collaborators nominate new ones, and the Governance Team stages the invitations.

This became the organizing principle for the paper's next phase. A governance working document was created that afternoon — covering the rollout plan, a contribution taxonomy (distinguishing co-authors from reviewers from acknowledgments), a cover note template for scientist invitations, and an appendix on the role of AI in the process. The governance spreadsheet was extended to 21 rows. A kickoff email went to Jon, Ananda, and Philip that same evening. On Apr 11, v10f was published — 29 revisions driven primarily by Ali Bin Shahid's biome-specific analysis — and Brian von Herzen's introduction of Leon Simons marked the first scientist reached via the coalition network rather than Jon directly.


The Network as Artifact

Rob de Laet had been thinking about the coalition problem from the beginning: a paper this interdisciplinary, with this many contributors and potential contributors, needed a way to visualize who knew whom and who could open which doors. His Apr 12 proposal: a "flash mob" model — a staged sequence of personal outreach to prominent scientists, timed for coordinated social proof.

The first artifact of this strategy was a grid. Rows were the names he wanted to reach — Tier 1 scientists like Hansen and Hayhoe, literature scientists whose work was already cited, influencers and journalists who could amplify. Columns were the current co-author circle. Cells were connections: strong ones and casual ones. Filled in collaboratively, the grid would become a map of the coalition's relational reach. The CCQ agent built this as a Google Sheets tab — the "Connections" sheet, 54 rows × 31 columns, color-coded and formatted.

But the grid was a static view of a dynamic reality. The collaboration had a history — who joined when, who introduced whom, which version of the paper they contributed to. That history was a directed graph. Jon noted that the ERA website's landscape page already used vis.js for network visualization, and suggested borrowing from that code.

What followed was a half-day of rapid prototyping. The ERA landscape graph was undirected; the CCQ contributor network was inherently directed — introductions flow from introducer to introduced, contributions flow from contributors to versions, and co-authorship has a history. The directed vis.js network took shape: people nodes, version nodes, a left-to-right timeline layout, and edges encoding relationship type through line style and color.

The visualization at the top of this page encodes the collaboration's full history: Jon as progenitor of v9, the governance circle with black node borders, contributors in green, scientists invited in amber, those unavailable in red, candidates in gray. Hover over any node and an info panel appears — name, role, affiliation, contribution, date joined. The network graph is itself an artifact of the process it documents — generated by the same AI-human collaboration it depicts, and updated as the collaboration evolves.


A New Kind of Scientific Process

The traditional model of scientific publication is solitary to a fault: one or a few authors, years of work, peer review as the sole quality gate, publication as the moment of release. The social infrastructure that actually determines what gets read and cited — who knows whom, who endorses, who amplifies — is invisible and inaccessible to outsiders.

Cooling Climate Quickly is running a different experiment. The paper is being built in public (within a permissioned circle), with its governance structure documented, its contributor history visible, and its process archived. The making-of document you are reading is itself part of the artifact. So is the network graph at the top of this page.

Several features of this process are worth naming as lessons, not just observations:

  1. The AI is a collaborator, not a tool. From message 20 of the Mar 5 session onward, the assistant functioned as a co-analyst — flagging oversold claims, proposing structural reorganizations, building the bibliography, drafting correspondence, maintaining the governance spreadsheet. This is not AI-assisted writing in the sense of autocomplete. It is a division of cognitive labor.
  2. The co-author circle is a resource, not just a list. Rob's connections grid made this explicit: the value of a co-author is not only their scientific contribution but their network reach. Making this legible — as a spreadsheet, as a graph — is itself a form of scientific infrastructure.
  3. Governance makes collaboration tractable. The April 9 governance meeting produced a working document that the entire contributor circle could read and respond to. It defined who counts as a co-author versus a reviewer, what the rollout sequence would be, and what the invitation letters would say. Without that structure, a coalition of twenty-plus scientists across six countries would be ungovernable.
  4. Process documentation is scientific output. The session archives, the handoff notes, the version history, the annotated diffs — these are the record of how scientific knowledge was assembled under real conditions: context limits, conflicting feedback, iterative revision, distributed collaboration. That record is reproducible in ways that most science isn't.

The paper argues that ecosystem restoration can cool the climate faster than carbon accounting suggests. The process of making it demonstrates something adjacent: that AI-facilitated open science can assemble knowledge faster, more transparently, and with broader collaboration than the traditional model allows. Neither claim is proven yet. Both are under active investigation.

This archive documents that experiment: a paper assembled through collaboration — between human judgment and AI capability, between strategic vision and technical execution — producing a result neither could have achieved alone.