The Kraken! The Risks Posed by Lethal Autonomous Weapon Systems (LAWS)

The "Kraken" in this context refers to the untamed, unpredictable, and potentially catastrophic risks posed by Lethal Autonomous Weapon Systems that operate without meaningful human control. Typed machine code, a form of programming that enforces strict rules on data types and operations at the lowest level, is argued to be the only effective mechanism for "caging" these systems: it guarantees, rather than merely tests, that they comply with safety and legal requirements.

Here is why typed machine code is essential for taming autonomous weapons:

1. Eliminating Unintended Behaviour (Safety)

Preventing "Black Box" Failures: Modern AI often operates as a "black box," making it impossible for commanders to predict behaviour in novel environments. Typed code ensures that software adheres to pre-defined, rigid parameters, making the system's actions predictable.
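One way a typed language makes behaviour predictable is by forcing every possible system state to be handled explicitly. The following Rust sketch is purely illustrative (the enum and action names are invented, not from any real weapon system): the compiler rejects the program unless every state has a defined response, so no novel state can slip through unexamined.

```rust
// Hypothetical engagement states; names are illustrative only.
#[derive(Debug, PartialEq)]
enum EngagementState {
    Searching,
    TargetIdentified,
    AwaitingHumanAuthorisation,
    Aborted,
}

// The compiler requires this `match` to be exhaustive: adding a new
// state without defining its action is a compile-time error, not a
// surprise at runtime.
fn next_action(state: &EngagementState) -> &'static str {
    match state {
        EngagementState::Searching => "continue-scan",
        EngagementState::TargetIdentified => "request-authorisation",
        EngagementState::AwaitingHumanAuthorisation => "hold-fire",
        EngagementState::Aborted => "return-to-base",
    }
}

fn main() {
    assert_eq!(next_action(&EngagementState::Aborted), "return-to-base");
}
```

The design choice here is that unpredictability is pushed from runtime to compile time: an unhandled case is a build failure, not a fielded failure.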

Reducing Unintended Engagements: DoD Directive 3000.09 requires that autonomous weapon systems minimise the probability and consequences of failures that could lead to unintended engagements, including non-combatant casualties. Strong typing and memory safety rule out whole classes of runtime errors (such as buffer overflows and memory corruption) that could cause a weapon to target civilian infrastructure or ignore safety constraints.
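As a minimal sketch of how a memory-safe, typed language neutralises the buffer-overflow class of error: in Rust, checked indexing returns an `Option` instead of reading stray memory, so an out-of-bounds access becomes a recoverable value the caller must handle. The sensor framing here is invented for illustration.

```rust
// Checked read from a (hypothetical) sensor buffer. An out-of-range
// index yields None instead of corrupting memory or leaking adjacent
// data, as an unchecked C-style read could.
fn read_sensor(buffer: &[u8], index: usize) -> Option<u8> {
    buffer.get(index).copied()
}

fn main() {
    let frame = [0x10u8, 0x20, 0x30];
    assert_eq!(read_sensor(&frame, 1), Some(0x20));
    // The overflow path fails safely and visibly.
    assert_eq!(read_sensor(&frame, 99), None);
}
```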

2. Ensuring Legal and Ethical Compliance (Accountability)

Adherence to International Humanitarian Law (IHL): Autonomous systems struggle to make qualitative, ethical judgments regarding distinction and proportionality. Typed code can be used to mathematically prove (formally verify) that a system will never, for example, authorise a strike if a human is detected in the blast radius, effectively coding legal constraints into the machine's behaviour.
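The idea of coding a legal constraint into the machine's behaviour can be sketched with the type system itself. In the hypothetical Rust fragment below (all names and the zero-humans rule are invented for illustration), a `ClearedStrike` token can only be constructed by the function that performs the blast-radius check, so the authorisation path cannot even be compiled without that check having run.

```rust
// Hypothetical sketch: the legal check is the only constructor of the
// proof token, so "authorised without checking" is unrepresentable.
mod rules_of_engagement {
    pub struct ClearedStrike {
        target_id: u32, // private field: cannot be built outside this module
    }

    impl ClearedStrike {
        pub fn target_id(&self) -> u32 {
            self.target_id
        }
    }

    /// The only way to obtain a `ClearedStrike`.
    pub fn clear_strike(target_id: u32, humans_in_blast_radius: u32) -> Option<ClearedStrike> {
        if humans_in_blast_radius == 0 {
            Some(ClearedStrike { target_id })
        } else {
            None // humans detected: no authorisation token exists
        }
    }
}

use rules_of_engagement::{clear_strike, ClearedStrike};

// This function demands proof (the token) that the check was performed.
fn authorise(strike: ClearedStrike) -> String {
    format!("strike authorised for target {}", strike.target_id())
}

fn main() {
    assert!(clear_strike(7, 2).is_none()); // humans present: blocked
    let cleared = clear_strike(7, 0).expect("no humans in radius");
    assert_eq!(authorise(cleared), "strike authorised for target 7");
}
```

This "proof token" pattern is a lightweight cousin of full formal verification: the property is enforced by construction rather than demonstrated by testing.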

Enforcing Human Control: A critical challenge with autonomous weapons is the lack of accountability when they act independently. Using typed machine code forces a "glass-box" approach, ensuring that every action is traceable, auditable, and ultimately tethered to human-designed rules rather than emergent, unpredictable AI behaviours. 
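A "glass-box" audit trail can be sketched in a few lines: every action records the identifier of the human-authored rule that produced it, so each decision is traceable after the fact. The rule identifier and action names below are hypothetical.

```rust
// Illustrative audit record tying each machine action to a
// human-designed rule, rather than an opaque emergent behaviour.
struct AuditEntry {
    rule_id: &'static str,
    action: &'static str,
}

// Applying a rule always appends an audit entry; there is no
// unlogged path to an action.
fn apply_rule(log: &mut Vec<AuditEntry>, rule_id: &'static str, action: &'static str) -> &'static str {
    log.push(AuditEntry { rule_id, action });
    action
}

fn main() {
    let mut log: Vec<AuditEntry> = Vec::new();
    apply_rule(&mut log, "ROE-4.2", "hold-fire"); // hypothetical rule id
    assert_eq!(log.len(), 1);
    assert_eq!(log[0].rule_id, "ROE-4.2");
    assert_eq!(log[0].action, "hold-fire");
}
```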

3. Creating Verifiable "Kill Chains"

Rigorous Verification: Because the stakes are high, testing alone is insufficient. Typed, low-level code allows for formal verification methods, providing mathematical certainty that the code does exactly what it is designed to do, and nothing more.
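One concrete flavour of "mathematical certainty rather than testing" is the typestate idiom, sketched below under invented names: the ordering constraint "a weapon must be armed before it can fire" is proven by the compiler, because `fire` simply does not exist on a safed weapon. No test suite is needed to rule out the bad ordering; it cannot be written.

```rust
use std::marker::PhantomData;

// Zero-sized marker types for the two (hypothetical) weapon states.
struct Safed;
struct Armed;

struct Weapon<State> {
    _state: PhantomData<State>,
}

impl Weapon<Safed> {
    fn new() -> Self {
        Weapon { _state: PhantomData }
    }
    // Arming consumes the safed weapon and returns an armed one.
    fn arm(self) -> Weapon<Armed> {
        Weapon { _state: PhantomData }
    }
}

impl Weapon<Armed> {
    // `fire` is only defined for Weapon<Armed>; calling it on
    // Weapon<Safed> is a compile error, not a runtime check.
    fn fire(self) -> &'static str {
        "fired"
    }
}

fn main() {
    let w = Weapon::<Safed>::new();
    // w.fire(); // does not compile: no `fire` on a safed weapon
    assert_eq!(w.arm().fire(), "fired");
}
```

Full formal verification of machine code goes much further than this, but the example shows the direction of travel: properties move from test reports into the program's types.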

A "Right to Hesitate": While AI is designed to match patterns rapidly, typed systems can be designed to force a "pause" or "abort" sequence whenever data falls outside defined safe or ethical boundaries, creating a right to hesitate rather than an automatic, impulsive engagement.
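A forced hesitation can be sketched as a decision function whose default outcome is to hold: any ambiguous, out-of-range, or corrupt reading falls through to the pause path, and because the result is an enum the caller cannot ignore it. The confidence threshold here is invented for illustration.

```rust
// Illustrative "hesitation by default": only a finite, very high
// confidence value can produce Engage; everything else holds.
#[derive(Debug, PartialEq)]
enum Decision {
    Engage,
    Hold, // the "pause" / abort path
}

fn decide(confidence: f64) -> Decision {
    // NaN, infinities, and anything below the (hypothetical) 0.99
    // threshold all land on Hold.
    if confidence.is_finite() && (0.0..=1.0).contains(&confidence) && confidence >= 0.99 {
        Decision::Engage
    } else {
        Decision::Hold
    }
}

fn main() {
    assert_eq!(decide(0.995), Decision::Engage);
    assert_eq!(decide(0.60), Decision::Hold);
    assert_eq!(decide(f64::NAN), Decision::Hold); // corrupt data hesitates
}
```

The safe outcome is the fall-through case, so a bug or sensor fault biases the system toward inaction rather than engagement.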

In summary, as AI-enabled, autonomous weapons become more capable, the only way to ensure they do not become uncontrollable "Krakens" is to use rigid, typed programming that treats safety and legality as non-negotiable, verifiable constraints. 

