Purpose

This document is the spokesperson's playbook. It defines the minimum that every PauseAI representative must be able to articulate clearly, in plain language, to any audience, from a journalist to a neighbour.

It is structured around three questions:

  1. What is the problem? — Why AI development as it stands is dangerous.
  2. What is the solution? — What PauseAI is asking for and why it's achievable.
  3. How do we say it? — The principles that guide our tone and approach.

<aside> 📌

Relationship with Core Positions: PauseAI's Core Positions document contains the full set of analytical claims that define what we believe. This messaging document deliberately highlights a subset of those positions — the ones most important for public communication — and focuses on how to explain them to everyday people. Some repetition is intentional: a spokesperson needs the key arguments ready at hand, not buried in a separate reference document. Think of Core Positions as what we stand on; this document is what we lead with and how we say it.

</aside>


Part 1: The problem: Why this matters

These are the key messages that explain why AI development is a problem that demands urgent attention. Each message is followed by guidance on how to articulate it accessibly.

1. AI is already here and it's powerful

Powerful AI systems already exist and are transforming society. This will prove to be the most transformational technology in human history.

<aside> 💬

How to say it: Don't lead with science fiction. Lead with what people already see — AI writing emails, generating images, replacing customer service jobs. Then: "And this is just the beginning. The systems being built right now are far more powerful than what you're seeing on your phone."

</aside>

2. It's getting more powerful, fast

AI systems are rapidly becoming more capable, with more resources and expertise being dedicated to the race toward superintelligence than ever before. No ceiling has been observed.

<aside> 💬

How to say it: "Every few months these systems take a major leap. The companies building them are spending tens of billions of dollars to make them smarter, and they're telling us they expect to reach human-level AI within a few years."

Credibility anchor: Point to the AI companies' own statements: OpenAI, Google DeepMind and Anthropic have all publicly stated they expect to reach AGI soon.

</aside>

3. This is urgent — we may have less than a decade

Most AI experts believe a superintelligence could be developed within ten years; some think it will come much sooner. The window for action is closing.

<aside> 💬

How to say it: "This isn't a problem for the next generation. The people building these systems say they expect to succeed within a few years. If they're even half right, we need to act now."

Credibility anchor: Surveys of AI researchers consistently show that a majority believe transformative AI will arrive within a decade. Leaders of the top AI labs have publicly acknowledged this timeline.

</aside>

4. Nobody knows how to make it safe