Purpose

This document contains PauseAI's Core Positions: the fundamental claims that define our public stance on artificial intelligence. These are the statements that every Chapter must adopt as part of the Sandbox Floor, and that any PauseAI spokesperson should be able to articulate clearly and confidently.

They are stripped of technical detail by design. For the full analytical document with supporting reasoning, see ‣.


Core Positions

  1. AI systems are real and powerful. Current artificial intelligence systems exist, have concrete capabilities, and are already having a transformational impact across society.
  2. AI is the most transformational technology of our time. Its importance will soon surpass that of the internet. This is not just a speculative bubble.
  3. AI capabilities are advancing rapidly. Progress in AI development continues to accelerate, with no indication of a ceiling; new paradigms and techniques continue to deliver gains.
  4. AI progress is likely to continue by default. Whether through scaling current architectures, algorithmic breakthroughs, or entirely new ideas, AI progress is unlikely to stagnate in the coming years absent regulation.
  5. Uncontrolled AGI poses a catastrophic risk. The creation of artificial general intelligence or superintelligence without first solving the alignment problem represents an existential or civilisation-scale threat.
  6. Alignment is extremely difficult. Ensuring that AI systems remain consistent with human values is an unsolved problem that will very likely not be solved within the next decade.
  7. AGI or superintelligence is plausible within a decade. Given the pace of progress and unprecedented investment, this possibility must be taken with the utmost seriousness — a view shared by many leading experts.
  8. Current AI risks are real and worsened by speed. Risks such as disinformation, bias, and job displacement are genuine, and they are exacerbated by the pace of AI advancement. Addressing only today's harms without accounting for the trajectory of progress would be a fundamental error.
  9. We need rational, proportionate risk management. We advocate for assessing both the severity and probability of each danger — including loss of control and misuse — and preparing appropriate responses.
  10. Society must make an informed choice. Since alignment cannot be guaranteed in time, the responsible course is societal action: enabling citizens, policymakers, and international bodies to make an explicit, informed decision about the future of AI. This is PauseAI's primary approach.

<aside> 📝

Version note — Derived from Pause AI Position on Artificial Intelligence (v2025-09-01). To be reviewed quarterly alongside the source document.

</aside>