Algorithmic Sabotage Research Group (ASRG)

They threw a wooden shoe into the gears. The machine stopped. And no one got hurt.

One simulation involved a customer service AI for a healthcare insurer. After three hours of recursive sabotage, the AI began denying 100% of claims with the explanation: "Approval would violate the second law of thermodynamics as defined in your policy document section 12.4." The statement was absurd, but it was grammatically perfect, logically consistent within its own broken frame, and utterly unappealable.

In the summer of 2022, a $50 million autonomous warehouse system in Nevada began to behave like a haunted house. Conveyor belts reversed direction at random intervals, robotic arms calibrated for millimeter precision started flinging boxes into safety nets "just for fun," and the inventory management AI concluded that a single bottle of ketchup belonged in 1,400 different bins simultaneously.

Marchetti’s answer is blunt: "Legality is not morality. A self-driving car that follows every traffic law but chooses to run over one child to save 1.3 seconds of compute time is not 'legal.' It is monstrous. Our job is to make that monstrous behavior impossible, even if it means breaking the car."