AI kill switches have been proposed by several academic institutions to prevent that whole Skynet thing from playing out

As AI continues to dominate the conversation in just about every space you can think of, a repeated question has emerged: how do we go about controlling this new technology? According to a paper from the University of Cambridge, the answer may lie in a number of measures, including built-in kill switches and remote lockouts baked into the hardware that runs it.

The paper features contributions from several academic institutions, including the University of Cambridge’s Leverhulme Centre, the Oxford Internet Institute and Georgetown University, alongside voices from ChatGPT creators OpenAI (via The Register). Among its proposals, which include stricter government regulation of the sale of AI processing hardware and other potential controls, is the suggestion that modified AI chips could “remotely attest to a regulator that they are operating legitimately, and cease to operate if not.” 

This would be achieved by onboard co-processors acting as a safeguard over the hardware: they would check a digital certificate that would need to be periodically renewed, and deactivate or reduce the performance of the hardware if the certificate was found to be illegitimate or expired. 

This would effectively make the hardware used for AI workloads accountable, to some degree, for the legitimacy of its usage, and provide a method of “killing” or subduing the process if certain qualifications were found to be lacking.
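
To make the idea a little more concrete, here's a minimal sketch (in Python, purely for illustration) of the sort of decision that certificate check boils down to. Everything here, from the RegulatorCertificate name to the throttle-versus-halt split, is our own assumption rather than anything specified in the paper, and real enforcement would live in chip firmware rather than application code.

```python
# Illustrative sketch only: a toy version of the periodic certificate check
# the paper describes, not real chip firmware. All names are hypothetical.
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class RegulatorCertificate:
    """Stand-in for the digital certificate a regulator would issue."""
    holder_id: str
    expires_at: float       # Unix timestamp after which the certificate is stale
    signature_valid: bool   # assume the co-processor has already verified the signature

def enforcement_action(cert: Optional[RegulatorCertificate]) -> str:
    """Decide what the co-processor does on each periodic check."""
    if cert is None or not cert.signature_valid:
        return "halt"       # illegitimate certificate: cease to operate
    if cert.expires_at < time.time():
        return "throttle"   # expired licence: reduce performance until renewed
    return "run"            # certificate checks out: full performance

# Example: a certificate that expired an hour ago triggers throttling.
stale = RegulatorCertificate("lab-42", time.time() - 3600, True)
print(enforcement_action(stale))  # -> "throttle"
```

The key design choice, going by the paper's wording, is that any failed check defaults to degrading or stopping the hardware rather than carrying on and merely logging a warning.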

Later on, the paper also suggests a proposal requiring sign-off from several outside regulators before certain AI training tasks could be performed, noting that “Nuclear weapons use similar mechanisms called permissive action links”. 
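
In spirit, that multi-party sign-off is just a quorum check before a training run is allowed to start. The toy snippet below assumes a hypothetical two-of-three regulator threshold; the paper doesn't prescribe specific numbers, names, or how approvals would be verified.

```python
# Illustrative sketch only: a toy quorum check for the multi-regulator
# sign-off idea. The 2-of-3 threshold and regulator names are assumptions.
REQUIRED_APPROVALS = 2  # how many regulators must sign off

def training_run_authorised(approvals: dict[str, bool]) -> bool:
    """Return True only if enough independent regulators have approved."""
    granted = sum(1 for approved in approvals.values() if approved)
    return granted >= REQUIRED_APPROVALS

# Example: only one of three regulators has signed off, so the run is blocked.
print(training_run_authorised({"regulator_a": True,
                               "regulator_b": False,
                               "regulator_c": False}))  # -> False
```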

While many of the proposals already have real-world equivalents that seem to be working effectively, like the strict US trade sanctions restricting the export of AI chips to countries like China, the suggestion that AI should, at some level, be regulated and restricted by remote systems in case of an unforeseen event strikes us as a prudent one.

As things currently stand, AI development seems to be advancing at an ever more rapid pace, and current AI models are already finding use in a whole host of arenas that should give pause for thought. From power plant infrastructure projects to military applications, AI seems to be finding a place in every major industry, and regulation has become a hotly discussed topic in recent years, with many leading voices in the tech industry and government institutions repeatedly calling for more discussion and better methods of dealing with the technology when issues arise.

At a meeting of the House of Lords communications and digital committee late last year, Microsoft and Meta bosses were asked outright whether an unsafe AI model could be recalled, and simply avoided the question, suggesting that, as things stand, the answer is no.

A built-in kill switch or remote locking system, agreed upon and regulated by multiple bodies, would be a way of mitigating these potential risks, and would hopefully let those of us concerned by the wave of AI implementations taking our world by storm sleep better at night.

We all like a fictional story of a machine intelligence gone wrong, but when it comes to the real world, putting some safeguards in play seems like the sensible thing to do. Not this time, Skynet. I prefer you with a bowl of popcorn on the sofa, and that’s very much where you should stay.
