The world has experienced a dramatic and irreversible change over the last few days. With the US Administration requesting unrestricted access to Anthropic’s AI for use in the Department of War/Security, and with OpenAI seemingly very eager to get involved, ethics have been placed at the very heart of the conversation.
This is not about whether you support the current US Administration; it is about whether such tools should be at the center of serious military power. We have to remember that AI is a mathematically based tool used for pattern recognition and prediction. It has no feelings, it has no sense of love or loss, it has no concept of emotional or physical consequence. It does not care if people survive or not. If the desired outcome is to find a way to win, then that is what it will look to achieve.
A recent academic study used Google, OpenAI, and Anthropic AI tools to play out war-game scenarios. In 95% of those scenarios, the AI chose to use tactical nuclear weapons, crossing that ethical line 'without a qualm'.
At the time of writing, Anthropic has refused to give the Administration what it wants. Personally, I commend them for that. OpenAI, however, has entered into negotiations with the Administration, and with almost indecent haste. This sits uncomfortably with me, and yes, before anyone says it, I know that AI has been in use within the military for a while, but not without guardrails.
The question here is how my ethics and OpenAI's, or any other vendor's, align, or don't. Anyone who has read A World We Don't Want will understand the risks and questions raised around those in control of advancing technologies like AI.
Whilst war and conflict are not specifically discussed, the book does question how close some tech companies are to public data and the political scene.
Do I move away from OpenAI’s ChatGPT because they are seemingly eager to get into bed with an Administration that has attacked several countries over the last 12 months? Whilst this is only at the negotiation stage, it feels wrong to stay, and users are already fleeing in huge numbers.
So, do Anthropic’s stance and public rejection of the Administration’s request demonstrate a higher moral standard than that of OpenAI? Should we be concerned that unregulated AI is potentially about to enter the business of war? Yes.
I have repeatedly said that I do not have a problem with the technologies themselves. The challenge is what the people in control of those technologies do with them.
So ask yourself this question: do you trust Donald Trump, Pete Hegseth, and others to use an unregulated, unrestricted version of AI responsibly? I don’t.
I will be watching developments extremely closely. Interestingly, Anthropic’s refusal to bend to the Administration’s demands could result in a massive increase in market share, simply by taking an ethical stance.
For those of you old enough to remember War Games, one quote comes to mind: “A strange game. The only winning move is not to play. How about a nice game of chess?”
You can read the Sky News report by clicking on the link below.
Why did the Pentagon threaten AI company with ultimatum? | News UK Video News | Sky News