
We stand at a pivotal moment - humanity is on the brink of developing artificial minds that could exceed our own.

Photo: José Martín Ramírez Carrasco

Date of publication: 6 March 2025
Last Updated: 1 April 2025
Reading time: 5 mins.

Major players are racing to harness the power of artificial general intelligence (AGI). But this is not a race that can be won.

How can we trust any organization to control systems they can't predict or understand? These systems could alter civilization, outcompeting not just individuals but humanity itself.

To keep the future human, we need concrete policies now. We propose hard limits on computational power, enhanced liability frameworks, and comprehensive safety standards.

 

Technologically, all of these are possible today.


Once you’ve enjoyed this summary, go in-depth with the full essay.

This scrollytelling site presents all of the key points in ‘Keep The Future Human’ by Anthony Aguirre. It is intended to be easy to digest in just a few minutes. For a deeper dive into all the topics covered on this page, read the full essay online.


The Evolving Landscape of AI

Throughout history, humans have built tools to extend and automate our capabilities.

The Information Age brought us systems that could, for example, beat us at chess. But these were highly specialized and limited to narrow domains.

Modern AI is different. Recent breakthroughs have given us much more powerful, general-purpose systems that can write, code and solve problems.


How Modern AI Works

In a way, modern AIs have been surprisingly easy to create. They are more 'grown' than built, getting their power from sheer scale rather than clever hand-written code.

They learn by processing huge amounts of data through layers of artificial neurons. Their connection strengths (weights) gradually encode patterns and features of the training data.

The result is a dense, abstract distillation of everything the model was shown.
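To make this concrete, here is a toy sketch in Python of the underlying principle. This is a deliberately simplified illustration, not any real model's training code: a prediction is made from weighted inputs, and each weight is nudged to reduce the error.

```python
# Toy illustration of learning by weight adjustment ("gradient descent").
# Real models repeat this principle across billions of weights and vast datasets.

def predict(inputs, weights, bias):
    """A toy linear 'neuron': a weighted sum of inputs (real ones add a nonlinearity)."""
    return sum(x * w for x, w in zip(inputs, weights)) + bias

def train_step(inputs, weights, bias, target, lr=0.1):
    """One gradient-descent step on squared error."""
    error = predict(inputs, weights, bias) - target
    # Each weight moves opposite to its contribution to the error.
    weights = [w - lr * error * x for w, x in zip(weights, inputs)]
    bias = bias - lr * error
    return weights, bias

# Repeated small updates gradually encode the input-output pattern.
weights, bias = [0.0, 0.0], 0.0
for _ in range(100):
    weights, bias = train_step([1.0, 2.0], weights, bias, target=3.0)
print(predict([1.0, 2.0], weights, bias))  # approaches 3.0
```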

An appropriately-trained system could take a photo of your pet and tell you whether it's a cat or a dog. But such systems are monolithic black boxes. Unlike traditional programs, they don't follow human-authored directions we can read or change.

There is no easy way to understand how their weights encode things like 'catness' or 'dogness'. 

Systems like ChatGPT are simply trained to predict the next item in a sequence - whether words in text, pixels in images, or frames in video.


At massive scales, pursuing this simple goal leads to unexpectedly broad abilities like coding and complex problem-solving. AI capabilities have improved much faster than most AI experts predicted.
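As an illustration of that training objective, here is a minimal, hypothetical sketch (again, not any real model's code): the model assigns probabilities to candidate next tokens and is penalized according to how unlikely it considered the token that actually came next.

```python
import math

def next_token_loss(predicted_probs, true_next_token):
    """Cross-entropy loss for a single next-token prediction step."""
    p = predicted_probs.get(true_next_token, 1e-12)  # guard against log(0)
    return -math.log(p)

# Context: "the cat sat on the ..." (illustrative probabilities)
predicted_probs = {"mat": 0.6, "chair": 0.3, "moon": 0.1}
print(next_token_loss(predicted_probs, "mat"))   # ~0.51: confident and correct
print(next_token_loss(predicted_probs, "moon"))  # ~2.30: the model is "surprised"
```

Minimizing this penalty over trillions of tokens is the entire training signal; everything else emerges from scale.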


AI performance over time on a subset of the ‘Graduate-Level Google-Proof Q&A Benchmark’ (GPQA). These questions are so difficult that human experts pursuing PhDs in related areas reach only about 70% accuracy on them.


Photo: © Andreas Riemenschneider (CC)

Motivations for the AI Race

So far, simply scaling these systems up has made them smarter. The world's most powerful corporations and nations are now investing hundreds of billions in a race to develop more powerful AI.

 

Some are motivated by genuine aspirations to benefit humanity. All feel intense pressure to keep from being left behind. No one wants to engineer a disaster, but we cannot predict or control the entities this hurried process will birth.

Vladimir Vladimirovich Putin
“Artificial intelligence is the future, not only for Russia, but for all humankind... Whoever becomes the leader in this sphere will become the ruler of the world.”
António Guterres
“Almost every Government, large company and organization in the world is working on an AI strategy. But, even its own designers have no idea where their stunning technological breakthrough may lead.”
Just how powerful can artificial minds become?

AI's Ultimate Potential

What is AGI?

AI can already outperform us at specific tasks. We are now building something more profound: AI that is better than human experts at virtually any task. Such a system is an 'Artificial General Intelligence' (AGI).

How can we decide whether a system is AGI? Comparing AIs to humans can be tricky, as our skills do not always align. Definitions have been mixed and are changing over time. As AIs continue to gain power, we'll need a rigorous framework to understand their capabilities.

It's useful to think of AGI as combining Autonomy, Generality and Intelligence.


How Close Are We to AGI?

Today's systems are highly intelligent and general, but have limited autonomy. The global AI race is now focused on creating systems that outperform us at all three.

Beyond AGI is ASI, or 'Artificial Superintelligence' - a system with greater abilities than all of humanity combined. Such a system would be impossible for us to predict or meaningfully control.

AGI may naturally lead to ASI, as it learns to further improve itself.


Photo: ASKA

Many experts expect AGI within years, not decades - and major companies are banking on this.

AGI's Risks: Replacements and Runaways

Without safeguards or oversight, we will likely develop AGI within the next decade. There are dramatic risks to humanity along this path:


Power concentration

Unprecedented accumulation of power - by corporations, governments, AIs themselves, or other actors.


Massive societal disruption

Widespread replacement of human labor, collapse of social systems or economic structures.


Catastrophic events

Dramatically increased risk of devastating attacks or accidents enabled by AI capabilities.


Geopolitical instability

Automation of warfare, destabilizing shifts in global power, and increased likelihood of conflict.


Loss of human agency

Surrendering human decision-making to automated systems we cannot fully understand or control.


Environmental tipping points

Failed interventions, mismanagement, or runaway energy consumption.

Can We Control AGI?

A superhuman autonomous system would surely develop its own objectives. Even if aligned with our best interests, it would be inherently uncontrollable.

How would you direct, or even oversee, something you could not understand? How would you anticipate and avoid catastrophic disruptions to social and economic systems?

We must not race to create entities we cannot confidently control.

The only way to avoid AGI's risks is not to build it - at least, not until we are sure it's safe.

The Paths Ahead

We are not fated to replace ourselves with AGI.

We can choose not to. Though AGI might offer benefits, the existential risks far outweigh the potential gains.

We should instead invest in developing powerful 'Tool AI' that enhances, rather than eclipses, us.

The choice is clear:
Should we develop AGI that will diminish us, or controllable tools that will empower us?

We can choose a different future by investing in more powerful Tool AI systems instead. Tool AI avoids the risks of AGI by deliberately limiting its autonomy, generality, or intelligence, and by maintaining controllability.


For example, it can be:

Intelligent and general-purpose, but requiring human oversight

Examples: GPT-4, OpenAI o3, AlphaFold

General and autonomous, but of limited capability

Examples: GPT-3, GATO, an AI worm

Intelligent and autonomous, but confined to specific domains

Examples: Self-driving car, Smart trading bot

Tool AI: Empowerment and Enhancement

Tool AI stands to revolutionize many domains without as much risk of losing control. Building on current systems, we could make dramatic progress in:


Healthcare

Transform healthcare through personalized medicine and drug discovery

  • Precision Oncology

    Companies like Tempus Labs are using Tool AI to analyze patient data to identify biomarkers that can guide targeted cancer therapies.

  • Accelerated Drug Discovery

    Companies like Atomwise are using Tool AI to predict how molecules will interact, reducing both time and cost in early-stage drug development.

  • Enhanced Diagnostic Imaging

    AI-powered image analysis tools like those used by Aidoc have improved the accuracy and speed of radiological diagnoses.


Science and Engineering

Accelerate scientific research and engineering breakthroughs

  • Materials Discovery

    The Materials Project at Lawrence Berkeley National Laboratory is using AI modeling to help uncover new materials for applications like energy storage and electronics.

  • High-Energy Physics Analysis

    Tool AI is already being used to sift through massive datasets from particle physics experiments, like those at CERN.

  • Engineering Optimization

    AI algorithms are being used by companies like Airbus to optimize complex systems in fields such as aerodynamics and structural design.


Education

Provide personalized education through AI tutoring

  • Adaptive Learning Platforms

    Programs like Discovery Education’s DreamBox Math offer systems that tailor lessons in real time to improve student engagement and learning.

  • Virtual Tutoring Services

    Organizations like Carnegie Learning are offering on-demand AI-enhanced tutoring systems that provide live feedback and step-by-step problem solving.


Mental Health

Expand mental health support with AI-assisted therapy


Democracy

Strengthen democracy through better dialogue and mediation

  • Crowd Sentiment Analysis

    AI systems like Polis can aggregate and analyze public opinion, helping policymakers to better understand their constituents.

  • Misinformation Detection

    Automated tools like ClaimBuster that verify digital content are already reducing the spread of false information.

  • AI-Assisted Deliberation

    Experiments like vTaiwan are using Tool AI to facilitate large-scale conversations and consensus building.


Climate Change

Fight climate change by developing new sustainable technologies and better predictive models

  • Highly Localized Weather Forecasting

    Startups like Tomorrow.io are using Tool AI to deliver accurate, fine-grained weather forecasts that help utilities and other energy operators better plan and adapt to climate variability.

  • Global Emissions Tracking

    Coalitions like Climate TRACE are leveraging AI and satellite imagery to track global greenhouse gas emissions in near real time, improving the data available for making climate-related decisions.

These systems pose substantial risks of their own and would require careful governance. But unlike AGI, these are risks we can manage - not existential threats to humanity.


Keeping the Future Human

We can choose a future where AI enhances rather than replaces us - but only if we act decisively today.

Four essential, practical measures to prevent uncontrolled AGI

  • Standardized tracking and verification of AI computational power usage

    How could compute accounting be implemented?

    A standards organization (like NIST in the US, or the ISO/IEEE internationally) would publish a detailed technical standard for measuring the compute used to train and run AI models (see the full essay for technical details).

    AI models above a certain size would be mandated to report the compute used in their training and operation. This threshold should be high (e.g. equivalent to $25M of top-of-the-line chips doing inference for a month) to keep from interfering with Tool AI development. A rough sketch of how such an estimate might be computed appears after this list.

    Compute reporting would be ramped up, starting from initial good-faith quarterly estimates. Eventually, each model output would come with a mathematically proven measure of the compute used to generate it.

    Reporting should also be complemented by well-documented estimates of financial and energy costs.


    Existing Examples in Other Spaces

    Dangerous materials are carefully accounted for in other spaces. The U.S. Nuclear Regulatory Commission and the International Atomic Energy Agency require detailed records of all use of fissile materials.
  • Hard limits on computational power for AI systems, enforced through law and hardware

    Why have compute caps?
     
    Total computation provides a rough estimate of AI capability (and thus risk). While this estimate is very imperfect, compute is concretely measurable and verifiable. This makes it the best available meter stick for AI risk today.

    Hardware is currently the limiting factor in AGI research, requiring huge amounts of capital to assemble and operate. As a result, it is possible (for now) to set a compute cap that prevents the riskiest AI development projects without hampering other efforts.


    Existing Examples in Other Spaces
     
    Other dangerous development projects are banned by international treaties. The Chemical Weapons Convention, for example, bans the development, production, or stockpiling of chemical weapons.
  • Strict legal responsibility for developers of highly autonomous, general, and capable AI

    What could ‘enhanced liability’ look like?
     
    AI developers are already legally responsible (liable) for the harm they cause. But there are different kinds of liability. Enhanced liability would ensure that companies - and their executives - feel a strong burden of responsibility when developing AI.

    Creating and operating an advanced AI system that is high in A/G/I should be subject to the ‘strict’ liability that normally applies to inherently dangerous activities. This means that developers would be held responsible for a product’s harms by default, rather than only if a degree of ‘blameworthiness’ can be established.

    Because AI development is complex and involves many actors, ‘single-party’ liability is insufficient. ‘Joint-and-several’ liability would allow any party (including, for example, an organization’s CEO) to be held responsible for harms done.

    A legal process should allow cautious developers to gain exemption from this enhanced liability. Developers would need to show A/G/I limitations, or safety and security guarantees.


    Existing Examples in Other Spaces

    Under the Superfund Act, U.S. industries handling hazardous substances are held strictly liable for cleanup and damages, regardless of fault.
  • Comprehensive safety standards that scale with system capability and risk

    How could tiered safety standards be implemented?
     
    We would need:

    • An appropriate set of regulatory bodies, probably a new agency
    • A comprehensive framework for assessing risk
    • A framework for developers to demonstrate safety, subject to audits by independent agents
    • International agreements to harmonize norms and standards, potentially including a new international agency.
     
    With these in place, we could offer a tiered licensing system for developers. At the lowest end of scale and risk, there would be no requirements on developers. At the high end, quantitative safety, security, and controllability guarantees would be required before development. See the full essay for a more detailed set of potential tiers.


    Existing Examples in Other Spaces

    Tiered safety standards are used in U.S. biotechnology research labs. Labs are assigned a ‘Biosafety Level’ (BSL-1 through BSL-4) by Institutional Biosafety Committees (IBCs) according to federal and international standards.
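As a concrete illustration of the compute accounting proposed above, here is a minimal, hypothetical Python sketch of the kind of estimate a reporting check could rest on. It uses the widely cited rule-of-thumb approximation of roughly 6 FLOP per model parameter per training token; the threshold value below is invented for illustration only, and the essay's actual proposal is specified in the full text.

```python
# Illustrative compute-accounting sketch. The 6-FLOP-per-parameter-per-token
# rule is a common community approximation for dense models, not a standard
# mandated anywhere; the reporting threshold is a made-up placeholder that
# regulators and standards bodies would set in practice.

def estimated_training_flop(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOP per parameter per token."""
    return 6.0 * n_parameters * n_training_tokens

REPORTING_THRESHOLD_FLOP = 1e25  # hypothetical value for illustration only

# Example: a 70-billion-parameter model trained on 15 trillion tokens.
flop = estimated_training_flop(n_parameters=70e9, n_training_tokens=15e12)
print(f"Estimated training compute: {flop:.2e} FLOP")        # ~6.30e24 FLOP
print("Report required:", flop >= REPORTING_THRESHOLD_FLOP)  # False
```

Because parameter counts and training-token counts are concrete, auditable quantities, even a rough formula like this gives regulators something verifiable to anchor reporting on.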

These measures mirror existing safeguards against powerful technologies like nuclear power and human bioengineering. They would interrupt the global dynamics pushing us to recklessly develop AGI we can't control, and allow developers to instead focus on the revolutionary potential of more advanced Tool AI.


These measures are just the beginning. Maintaining human control over AI will require ongoing vigilance and adaptive governance frameworks.
 

But they would give us the time and tools we need to chart a wise course forward.


We need your help to secure our future. Whether you're a technologist, policymaker, business leader, or concerned citizen, your voice matters.

Join us in ensuring that artificial intelligence remains a tool for human flourishing rather than a runaway force that pushes us aside.

Let's keep the future human.

A note from the author

Thank you for taking the time to explore this topic with us.

I wrote this essay because as a scientist I feel it is important to tell the unvarnished truth, and because as a person I feel it is crucial for us to act quickly and decisively to tackle a world-changing issue: the development of smarter-than-human AI systems.

If we are to respond to this remarkable state of affairs with wisdom, we must be prepared to critically examine the prevailing narrative that AGI and superintelligence ‘must’ be built to secure our interests, or is ‘inevitable’ and cannot be stopped. These narratives leave us disempowered, unable to see the alternative paths ahead of us.

I hope you will join me in calling for caution in the face of recklessness, and courage in the face of greed.

I hope you will join me in calling for a human future.

– Anthony


Do you believe in a human future?

Read the full essay across 10 chapters to explore the current state of AI systems, where the technology is heading, and what we can do about it.
