Oxford's oldest student newspaper

Independent since 1920


    Why an AI pause would be detrimental to humanity

    Imagine you had a pet parrot. One day, you heard it say “kill all humans”. Obviously, it doesn’t actually want to kill all humans; it can’t even understand what the phrase means. It’s just regurgitating what it has heard elsewhere, perhaps from a TV programme in the background.

    An AI saying it wants to “kill all humans” is the same thing, albeit on a grander scale. It takes what has been fed into it, identifies patterns in the words, and spits out what humans ask of it. I would posit that AI poses as much of a threat to human life as a parrot does (perhaps even less, given it doesn’t have a beak).

    Nonetheless, countless reasons have been given for halting AI development, culminating in the recent petition to “pause” development for six months. Predictably, it views ever-evolving human ingenuity as a fundamentally bad thing. The irony of the petitioners saying AI “could represent a profound change in the history of life on Earth” on the internet (which, to my knowledge, is not a naturally occurring phenomenon) is not lost on me.

    Yes, “profound change” in history has included such tragedies as world wars, famines, diseases, and nuclear weapons. But it has also included the internet, penicillin, vaccines, modern agricultural methods, and countless other excellent inventions. Why can’t AI join the gallery of human progress? The petition argues that we cannot “understand, predict, or reliably control” AI. Firstly, how is pausing AI development going to help with this? Typically, understanding something requires more testing, not less. Secondly, if we limited ourselves to what we could predict, humanity would have gone nowhere. Alexander Fleming could never have predicted that leaving a petri dish out would lead to penicillin. Orville and Wilbur Wright could not have predicted that their invention would lead to transatlantic flights. Should we have paused Jonas Salk’s research until we were sure that “[its] effects [would] be positive and [its] risks [would] be manageable”?

    Moving on to the claim about jobs. I am always sceptical when technology is decried on the grounds of “taking away jobs”. Of course, I could pay hundreds of people to comb through encyclopaedias until I find what I’m looking for; or I could use Google. I could pay someone on the street to go down to Greenwich and adjust my clock based on theirs, or I could use a more accurate wristwatch. The economic process of creative destruction has made us richer and happier, and indeed helped the environment. I’m sure no one reading this article yearns for the days before the lightbulb, when whaling for lamp oil was necessary.

    The question “Should we automate away all the jobs, including the fulfilling ones?” ignores the huge number of industries which have gone bust because better alternatives were found. I, for one, am glad that I don’t have to use horse riders to deliver mail to my parents, even if it did employ more people than the current postal system. Perhaps AI will cause a similar adjustment to employment; that’s no reason to pause development. Quite the opposite, actually – why should consumers be forced to pay for a more inefficient way of doing things? New industries can and do pop up when old ones fall; whalers were replaced by lightbulb manufacturers, horse riders by telegraph operators. Consider how fast industries related to computing have sprung up. Are we really to believe that no new jobs whatsoever will be created thanks to AI?

    The petition also claims that AI will lead to “propaganda and untruth” flooding social media. Firstly, AI will only have as much power as we choose to give it. ChatGPT cannot access sites like Twitter and Facebook without its creators giving it access to a vast network of accounts. Therefore, the only threat of AI comes from nefarious actors willing to give their AI a platform on social media. This brings me to my second point, which is that any pause to AI development will not be heeded by bad actors. States like North Korea and Russia, intent on spreading discord within enemy states, are not going to listen to any pause. As Margaret Thatcher pointed out with nuclear weapons, what has been invented cannot be disinvented (of course, with the caveat that an ideal world would not contain nuclear weapons, a claim which does not hold with AI). No matter what, now that AI has been invented, it is in the hands of those who wish to do harm with it. Rather than slow down, the only logical course of action is to speed up, using AI for tasks such as detecting this nefarious content. A pause will not benefit anyone but bad actors in this regard.

    On a final note, Business Insider reported that Latitude, the company behind a much more basic AI product, pays $100,000 a month to run its servers. Given that the new Russian minimum wage is 19,242 roubles per month, Russia could afford to hire a troll farm of 423 people for the price of running an AI disinformation programme, not including development costs.
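    That figure can be checked with a back-of-envelope calculation; the rouble exchange rate is not stated in the article, so the value used below (about 81.4 roubles to the dollar, roughly the rate at the time) is an assumption:

```python
# Back-of-envelope check: how many minimum-wage workers could be hired
# for the reported monthly server bill?
SERVER_COST_USD = 100_000   # Latitude's reported monthly server cost
MIN_WAGE_RUB = 19_242       # Russian monthly minimum wage, in roubles
RUB_PER_USD = 81.4          # assumed exchange rate (not from the article)

server_cost_rub = SERVER_COST_USD * RUB_PER_USD
workers = int(server_cost_rub // MIN_WAGE_RUB)
print(workers)  # 423
```

    At that assumed rate, the server bill buys 423 minimum-wage salaries, matching the figure above.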

    Predictably, the petition decries the “out-of-control” race to develop new AI. This completely ignores how the products we use today were created. The mobile phone was famously the product of a race between two companies; the first call was made to inform the competitor that they had lost. Smartphone builders did not collaborate with each other to create the first touchscreen phones. It is only through competition that products improve. It’s ironic that the signatories of the petition include Elon Musk, a man who owes his entire career to competition in a free market economy.

    The Future of Life Institute is not improving the future of life with this petition; quite the opposite. It scaremongers about “losing control of civilisation”, as if anyone has seriously suggested giving AI the right to vote or run for office. Even if you still disagree that AI will be good for humanity, the fact is that the cat is out of the bag. No number of pauses, regulations, and bans will stop bad actors from using the technology. Unilaterally disarming ourselves is irrational. I don’t claim to know how AI will progress over the years any more than Nicolas-Joseph Cugnot could predict how cars would progress. Let’s allow it to develop to its potential, rather than shutting ourselves off from a better tomorrow.

    Image Credit: David S. Soriano / CC BY-SA 4.0 via Wikimedia Commons
