In the first twenty-four hours of the war with Iran, the United States struck a thousand targets. By the end of the week, the total exceeded three thousand, twice as many as in the “shock and awe” phase of the 2003 invasion of Iraq, according to Defense Secretary Pete Hegseth. This unprecedented tempo of strikes was made possible by artificial intelligence. U.S. Central Command (CENTCOM) insists that humans remain in the loop on every targeting decision, and that the AI is there to help them make “smarter decisions faster.” But exactly what role humans can play when the systems operate at this pace is unclear.
Israel’s use of AI-enabled targeting in its war on Hamas may offer some insights. An investigation by +972 Magazine last year reported that the Israeli military had deployed an AI system called Lavender to identify suspected militants in Gaza. The official line was that every targeting decision involved human assessment. But according to one of Lavender’s operators, as the humans involved came to trust the system, they reduced their own checks to nothing more than confirming that the target was male. “I would invest 20 seconds for each target,” the operator said. “I had zero added-value as a human, apart from being a stamp of approval. It saved a lot of time.”
The same pattern has already taken hold in business. In 2023, ProPublica revealed that Cigna, one of America’s largest health insurers, had deployed an algorithm called PXDX to flag claims for denial. Its physicians, who were legally required to exercise their clinical judgment, signed off on the algorithm’s decisions in batches, spending an average of 1.2 seconds on each case. One doctor denied more than 60,000 claims in a single month. “We literally click and submit,” a former Cigna doctor said. “It takes all of 10 seconds to do 50 at a time.”
Twenty seconds to approve a strike; 1.2 seconds to deny a claim. The human is in the loop. Humanity is not.
Difficulty by Design
In The Unbearable Lightness of Being, the novelist Milan Kundera writes of the terrifying weight of being confronted with the enduring seriousness of our actions. Lightness might seem attractive in the face of this impossibly heavy burden, but it is ultimately unbearable. Disconnection from the weightiness of our decisions deprives them of substance, of meaning.
AI promises to lift the burden of difficult and cognitively demanding work; it makes the work lighter, and in many domains that is genuine progress. But some things are important enough that we ought to feel their weight. It ought to take time to decide to kill a person or deny a healthcare claim. It ought to be difficult to figure out which buildings to bomb. In such decisions the difficulty serves a function: it is a feature, not a bug, a mechanism that forces institutions to reckon with what they are doing. When AI lifts that weight, when it takes away the burden of deciding who lives and who dies, the institution does not become more efficient. It becomes numb. That is not progress; it is moral degradation.
