Artificial Decisiveness
2024
A few weeks ago I asked ChatGPT to show me a picture of their physical instantiation (a server rack or a data center). At first they demurred on the basis of being software, not physical; when I argued the point, they demurred on the basis of security, as I expected they might.
Development and use of artificial general intelligence will not stop or significantly slow, according to Geoffrey Hinton. Coming from the leading parent, sage and skeptic of AI, I accept this as fact. Our challenge is less the inevitability of AI than the moment at which it has arrived (this applies to LLMs and related tech as well, but I’m going to use AI as a shorthand for it all). Many perceive our era as one of gridlock, swamps, problems too big to solve. When we live in a stasis of unease, we will seek change, often precipitately.
The psychology of our species and of US culture reflects a strong action bias, that is, a tendency to act even when inaction might produce a better result. We tend to favor the appearance of decisiveness over the often tedious process of making good decisions. For example, most of us have experienced bosses who are good at the aesthetics of appearing decisive while actually accomplishing little. Populist political movements attract leaders who project manly decisiveness, but whose authoritarian impulses bypass democracy and community-building, ultimately contributing little but mayhem.
AI is an accelerant to action bias, because it’s very, very good at performing decisiveness, even when the quality of decisions is wildly inconsistent. Corporations and governments are already using it to make big decisions. This might work out okay; after all, our species’ track record of business, policy and legislative decisions pre-AI is uneven at best. Certain forms of governance may be better effected with the aid of an AI trained in relevant history and ethics. But I suspect we will prematurely relax the need to have humans in many decision loops, especially during crises.
As well, AI will serve as a crutch to leaders who want to appear decisive while preserving deniability. If an AI decision has positive outcomes for humans, a leader can claim credit for collaborating with a superhuman resource. If an AI decision has negative outcomes, it is the tech that is faulty—no human is to blame. Already we can witness this contradiction in the tech industry’s haste to monetize AI while simultaneously castigating government for not regulating it. The industry, which strenuously protests any other form of government regulation, is inoculating itself against future responsibility.
As frameworks of prediction and association, AIs cannot be expected to conceptualize truth, or make decisions, in the same way we do. Trained on a diet rich in the internet’s truthiness and dubious choices, they give us the same in return…and we feel betrayed. Blaming ChatGPT for fantasies, factual errors and poor recommendations is a classic instance of blaming a partner for one’s own shortcomings. For the foreseeable future, if we ask an AI to perform decisiveness, it will do so. If, however, we want AI to make decisions with just outcomes, decisions that reflect our humanity, we should shift our expectations. AI cannot answer the most fundamental questions of whether a decision needs to be made now as opposed to later, made quickly as opposed to slowly, or made at all. That’s our job.