Hyperproblems: scientific challenges whose scale, complexity, novelty and interdependence overwhelm traditional research models, requiring new forms of collective intelligence, modeling, coordination and communication.

Introduction

Ten years ago, when the Paris Agreement was signed and we agreed (by the way, who's the "we"?) to keep global temperature rise well below two degrees, and ideally no more than 1.5 degrees above the pre-industrial average, there was also a sense of urgency: major shifts had to be set in motion by 2030 for the 1.5 degree goal to be met. That was the whole point of the Paris Agreement.

That agreement has failed for various reasons - almost everyone knowledgeable agrees that even 2 degrees is difficult - but at the time, I remember thinking to myself: how will research inform the pathways that keep us within the 1.5 degree ceiling? Do we have the time to do the science to inform the policies we need to be adopting?

A back-of-the-envelope calculation made me pessimistic. It takes about three years on average to go from asking a scientific question to publishing the result, assuming you have the money and the people to do the research. Even with all those enabling conditions, it takes three years to ask questions, collect enough data, analyze the data, write up the paper, submit it for peer review, receive reviewer comments and eventually have it published.

Sometimes the cycle takes a lot longer, but even under the best of conditions it's a three-year process. Which is to say that we had five cycles of publishing (from 2015 to 2030) to arrive at research-backed concrete pathways to achieve our policy goals, which even at that time struck me as insane. Why are we playing a game where you get only five shots to save the world? Why are scientific incentives and communication systems misaligned with the urgency of the problem?
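The back-of-the-envelope arithmetic above can be made explicit in a few lines; a minimal sketch, assuming the three-year average cycle and the 2015-2030 window from the text:

```python
# Back-of-the-envelope estimate: how many complete research-to-publication
# cycles fit between the Paris Agreement (2015) and the 2030 deadline,
# assuming roughly three years per cycle (question -> data -> review -> print).

def publication_cycles(start_year: int, deadline_year: int,
                       years_per_cycle: int = 3) -> int:
    """Number of complete publication cycles in the window."""
    return (deadline_year - start_year) // years_per_cycle

print(publication_cycles(2015, 2030))  # → 5, i.e. five shots to save the world
```

A slower five-year cycle - hardly unusual for large collaborations - would leave only three shots.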

There are many reasons why we aren't solving climate change; clearly politics and fossil fuel lobbies play a big role, but some of the blame should be assigned to science communication systems - the systems responsible for disseminating scientists' work to everyone else. At that time, 2030 was fifteen years away; now it's a little over four years away. I can't say science comms has become better in the intervening decade, or that we have come closer to solving the climate crisis. In fact, I am writing this post as COP30 takes place in Brazil, and the mood is downcast. We have failed to set ourselves up for success.

The Urgency of Coordination

Here's another example of the chaos that besets science - in this case touching both the practice of science and how it's communicated. Take the IPCC report. If you type "IPCC" into Google, you will get to the IPCC page, where you can download a summary or the entire report.

Then what?

If you're like most people, what do you do next? Despite LLMs being all the rage, you can't even query the IPCC report and have it answer questions, let alone access the underlying data sets to corroborate sectoral findings you might be interested in. Say I am an agricultural scientist in Indonesia. How can I use the IPCC report in real time to inform my research and my guidance to farmers? Maybe integrating IPCC reports into Large Planet Models will help turn them into living documents.
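To make "querying the report" concrete, here is a deliberately tiny sketch: report sections treated as a queryable corpus, ranked by keyword overlap with a question. The section titles and texts are invented stand-ins, not the IPCC's actual structure, and a real system would use embeddings over the actual chapters and datasets; this only illustrates the shape of the interface that is missing.

```python
# Toy retrieval over "report sections": rank sections by how many words
# of the question appear in them. Sections below are illustrative stand-ins.

sections = {
    "WG2 Ch5: Food systems": "crop yields rice maize adaptation smallholder farmers",
    "WG1 Ch11: Extremes": "heatwaves drought precipitation extremes attribution",
    "WG3 Ch7: Agriculture and forestry": "land use emissions mitigation soil carbon",
}

def rank_sections(question: str) -> list[str]:
    """Return section titles sorted by word overlap with the question."""
    words = set(question.lower().split())
    scored = [
        (sum(w in text.lower().split() for w in words), title)
        for title, text in sections.items()
    ]
    return [title for score, title in sorted(scored, reverse=True) if score > 0]

print(rank_sections("adaptation options for rice farmers facing drought"))
```

The Indonesian agricultural scientist's question would surface the food-systems chapter first, then the extremes chapter - a first step toward a report you can interrogate rather than merely download.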

We don't have any way of doing these things automatically and adaptively. We don't have science coordination and communication systems commensurate with the urgency of the problems humanity faces in the 21st century. Even the one exception I know proves the rule: the mobilization of research, government support, public health systems and commercial production to address the COVID pandemic. It's true that we raced from sequencing the viral genome to mRNA vaccines to commercial production to society-wide distribution in about a year, which is:

  • amazing and

  • not replicable.

Why isn't the COVID level of coordination the norm when it comes to challenges of universal importance? Why do we not have knowledge systems that engage with these challenges with the urgency they deserve?

Science Trek

Moving away from problems that come with moral urgency towards curiosity-driven inquiry: we have known for a while that scientific progress is slowing down, that scientists are much more risk-averse, and that it's hard to get support for bold new ideas (no, not the cranks). I am not trying to be a hater, so let me reframe these challenges in a positive way: how can we make it much easier for like-minded, competent and aligned researchers and citizen scientists to go where no one has gone before?

Let me give you a couple of examples that interest me personally:

Unicellular Cognition

For a while, I have been interested in cognitive biology, especially in the hypothesis that it doesn't take a brain to have a mind. Plants are doing information processing, and why stop there: perhaps bacteria have minds too. This is no longer wild speculation - there's an incipient field called 'basal cognition' that's beginning to investigate cognition in unicellular organisms. But if we are to move beyond philosophical speculation to replicable science, we need labs, we need modeling and we need to bring ideas from disparate disciplines together under one roof: biophysics, cell biology, machine learning, cognitive science and philosophy of mind is the smallest list I can come up with. The investigation of the mind started with the human mind and has slowly expanded to include other species.

What will it take to reframe the study of the mind with unicellular organisms at its core?

The Design of Mathematics

There's a lot of interest (hype?) in the use of AI in mathematics, with some of the best mathematicians in the world thinking it will change the way they do mathematics, or even replace them altogether. Those are important developments, but I have a slightly different take. So far, math has been a craft: individual craftspeople produce their individual artifacts (aka theorems), though they build their artifacts on top of other craftspeople's artifacts. What might the shift from craft to industrial design look like in the case of mathematics - might the mathematician-as-designer focus on designing new axiom systems and let machines turn those designs into theorems? Will aesthetic considerations play a bigger role in the math of the future - not the tacit considerations that already inform mathematical work, but explicit aesthetic schools? Will research mathematics slowly separate into design and engineering wings? What philosophy of mathematics might emerge from this shift?

I am personally invested in these problems and I have an axe to grind (yes, unicellular organisms have minds; yes, some mathematicians should become designers) and I could be terribly wrong. But I am not the only person making similar bets and it would be kind of nice if we could ride the Titanic together.

What is to be done?

Hyperproblems

I have barely scratched the surface of the theoretical and practical problems of science that are staring (hitting) us in the face. They are hyperproblems by their very nature - too big for a single mind to grok, let alone solve - but that's what's exciting about them: we have the opportunity to build new infrastructure for a new kind of science. Here's a crude typology of the problems that need addressing:

  • Exploration: How do we (notice the plural) boldly go where no one has gone before?

  • Coordination: How can we make it as easy as possible for people to start collaborating on a scientific challenge? What tools and platforms must we create for that purpose?

  • Automation: How do we incorporate AI into all aspects of doing science while preserving scientists' autonomy and control over research directions? How can AI augment scientists rather than replace them?

  • Communication: How do we modularize science communication so that it can become a continuous process with milestones?

  • Sustainability: How do we fund new kinds of science, especially: a) ideas that break new ground and b) wicked problems such as climate change that go well beyond "pure science" and have enormous consequences for humanity as well as other species?

This publication is about thinking through those infrastructural issues in public with others, and, hopefully, building systems that address the underlying challenges. There are many examples of emerging solutions (Leaflet itself is one - I am using it, aren't I? - but there are others in this space as well; e.g., Semble, another ATProto-based solution), and this publication will cover them. It will also ask how these solutions can be connected into a system for doing and communicating science in the 21st century.

I am going to end this already long essay by sharing some thoughts on the architecture of that future system.

Common Source

If hyperproblems demand new architectures of collective intelligence, what might those look like in practice?

Somewhat surprisingly, science hasn't learned as much as it could from the success of open source. We talk a lot about “open data” and “open access,” but most of our scientific organizations still behave like miniature firms: inward-looking, protective of staff and IP, optimized for local incentives rather than hyperproblems. We don’t make software, for the most part, but we can ask a similar question: can we imagine organizational structures for science that have the openness, transparency, and remixability of open source?

One model I have in mind is what I’m going to call common source. If individual labs and institutes are like organs (a liver here, a kidney there), then common source is more like breath or blood—circulating knowledge, tools, and people wherever they are needed in the larger scientific body.

In simple terms,

Common Source = Open Source + People Sharing

That is: the IP is both Free as in Beer and Free as in Freedom, and the people who are skilled in that IP are made available to the ecosystem at large rather than being locked inside a single institution.

Common source starts from a simple observation: when problems span multiple domains, no single lab, method, or discipline can own the full pipeline from question to answer. Hyperproblems like climate modeling, unicellular cognition, or the design of future mathematics all require long chains of interdependence: experiment design in one place, data collection in another, modeling and theory in a third, deployment or application in a fourth. Traditional research models try to stitch these pieces together through grants, consortia, and “collaborations,” but the underlying assumption is still firm-like: each unit defends its boundaries.

Common source flips that assumption. Rather than maximizing control, it prioritizes trust, reciprocity, and credit. It treats scientific knowledge as a shared commons and scientific labor as a mobile, relational capacity. Think about what this would mean in practice. A common source project on unicellular cognition wouldn’t just put preprints on arXiv and datasets on Zenodo. It would:

  • publish modular protocols for experiments, written to be reused and adapted in other labs;

  • share model code and simulation environments under genuinely permissive licenses;

  • maintain open design documents for new experimental setups, not just the polished results;

  • and, crucially, enable researchers, postdocs, and students to move fluidly between sites, bringing their skills and tacit knowledge with them.

Similarly, a common source push in the “design of mathematics” would mean not just open-sourcing proof assistants, but opening up the design space itself: shared repositories of axiom systems, conjecture libraries, aesthetic criteria, and interactive environments where human mathematicians and machine collaborators co-design new mathematical worlds. The “asset” isn’t the theorem; it’s the living practice of designing mathematical universes—made available to a network rather than enclosed within a single mathematician or department or even a standalone institution.

In that sense, common source is a way of doing science as if hyperproblems were the norm: assuming from the outset that no one owns the problem, no one owns the pipeline, and the point of the game is to grow the collective capacity to see, model, and act, while giving credit where credit is due.

PS: Leaflet does not allow for multiple authors in one publication as of this writing, but I am hoping that will happen soon - this is not a topic on which one human being, acting alone, can make much headway.