AI Deterrence Is Our Best Option
Earlier this year, Dan Hendrycks, Eric Schmidt, and Alexandr Wang released "Superintelligence Strategy", a paper addressing the national security implications of states racing to develop artificial superintelligence (ASI): AI systems that vastly exceed human capabilities across nearly all cognitive tasks. The paper argued that no superpower would remain passive while a rival transformed an AI lead into an insurmountable geopolitical advantage. Instead, capable nations would likely threaten to preemptively sabotage any AI projects they perceived as imminent threats to their survival. With the right set of stabilizing measures, however, this impulse toward sabotage could be redirected into a deterrence framework called Mutual Assured AI Malfunction (MAIM).

Since its publication, "Superintelligence Strategy" has sparked extended debate. This essay responds to several critiques of MAIM while providing context for readers who are new to the discussion. First, we argue that developing ASI incentivizes state conflict, and that the tremendous tensions its development produces are not unique to MAIM. Second, we consider whether MAIM's proposals reduce instability. Third, we examine the issue of red lines and ask whether MAIM can effectively shape states' perceptions of risk.