
Robert Skidelsky, Member of the British House of Lords and Professor Emeritus of Political Economy at Warwick University

Franken Tech

The emergence of generative artificial intelligence has seemingly caused panic among industry evangelists like Elon Musk, who recently called for a six-month pause in training new AI systems. But are our contemporary Victor Frankensteins sincere about tapping the brakes, or are they merely jockeying for position?

In Mary Shelley’s novel Frankenstein; or, The Modern Prometheus, scientist Victor Frankenstein famously uses dead body parts to create a hyperintelligent “superhuman” monster that – driven mad by human cruelty and isolation – ultimately turns on its creator. Since its publication in 1818, Shelley’s story of scientific research gone wrong has come to be seen as a metaphor for the danger (and folly) of trying to endow machines with human-like intelligence.

Shelley’s tale has taken on new resonance with the rapid emergence of generative artificial intelligence. On 22nd March, the Future of Life Institute issued an open letter signed by hundreds of tech leaders, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, calling for a six-month pause (or a government-imposed moratorium) in developing AI systems more powerful than OpenAI’s newly released GPT-4. “AI systems with human-competitive intelligence can pose profound risks to society and humanity,” says the letter, which currently has more than 25,000 signatories. The authors go on to warn of the “out-of-control” race “to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

Musk, currently the world’s second-richest person, is in many respects the Victor Frankenstein of our time. The famously boastful South Africa-born billionaire has already tried to automate the entire process of driving (albeit with mixed results), claimed to invent a new mode of transportation with the Boring Company’s (still hypothetical) hyperloop project, and declared his intention to “preserve the light of consciousness” by using his rocket company SpaceX to establish a colony on Mars. Musk also happens to be a co-founder of OpenAI (he resigned from the company’s board in 2018 following a failed takeover attempt).

One of Musk’s pet projects is to combine AI and human consciousness. In August 2020, Musk showcased a pig with a computer chip implanted in its brain to demonstrate the so-called “brain-machine interface” developed by his tech start-up Neuralink. When Gertrude the pig ate or sniffed straw, a graph tracked its neural activity. This technology, Musk said, could be used to treat memory loss, anxiety, addiction, and even blindness. Months later, Neuralink released a video of a monkey playing a video game with its mind thanks to an implanted device.

These stunts were accompanied by Musk’s usual braggadocio. Neuralink’s brain augmentation technology, he hoped, could usher in an era of “superhuman cognition” in which computer chips that optimise mental functions would be widely (and cheaply) available. The procedure to implant them, he has claimed, would be fully automated and minimally invasive. Every few years, as the technology improves, the chips could be taken out and replaced with a new model. This is all hypothetical, however; Neuralink is still struggling to keep its test monkeys alive.

While Musk tries to create cyborgs, humans could soon find themselves replaced by machines. In his 2005 book The Singularity Is Near, futurist Ray Kurzweil predicted that technological singularity – the point at which AI exceeds human intelligence – will occur by 2045. From then on, technological progress would be overtaken by “conscious robots” and increase exponentially, ushering in a better, post-human future. Following the singularity, according to Kurzweil, artificial intelligence in the form of self-replicating nanorobots could spread across the universe until it becomes “saturated” with intelligent (albeit synthetic) life. Echoing Immanuel Kant, Kurzweil referred to this process as the universe “waking up.”

But now that the singularity is almost upon us, Musk and company appear to be having second thoughts. The release of ChatGPT last year has seemingly caused panic among these former AI evangelists, prompting them to shift from extolling the benefits of super-intelligent machines to figuring out how to stop them from going rogue.

Unlike Google’s search engine, which presents users with a list of links, ChatGPT can answer questions fluently and coherently. Recently, a philosopher friend of mine asked ChatGPT, “Is there a distinctively female style in moral philosophy?” and sent the answers to colleagues. One found it “uncannily human.” To be sure, she wrote, “it is a pretty trite essay, but at least it is clear, grammatical, and addresses the question, which makes it better than many of our students’ essays.”

In other words, ChatGPT passes the Turing test, exhibiting intelligent behaviour that is indistinguishable from that of a human being. Already, the technology is turning out to be a nightmare for academic instructors, and its rapid evolution suggests that its widespread adoption could have disastrous consequences.

So, what is to be done? A recent policy brief by the Future of Life Institute (which is partly funded by Musk) suggests several possible ways to manage AI risks. Its proposals include mandating third-party auditing and certification, regulating access to computational power, creating “capable” regulatory agencies at the national level, establishing liability for harms caused by AI, increasing funding for safety research, and developing standards for identifying and managing AI-generated content.

But at a time of escalating geopolitical conflict and ideological polarisation, preventing new AI technologies from being weaponised, much less reaching an agreement on global standards, seems highly unlikely. Moreover, while the proposed moratorium is ostensibly meant to give industry leaders, researchers, and policymakers time to comprehend the existential risks associated with this technology and to develop proper safety protocols, there is little reason to believe that today’s tech leaders can grasp the ethical implications of their creations.

In any case, it is unclear what a pause would mean in practice. Musk, for example, is reportedly already working on an AI startup that would compete with OpenAI. Are our contemporary Victor Frankensteins sincere about pausing generative AI, or are they merely jockeying for position?