
Realistic concerns about AI: Are misplaced existential worries detracting from the important issues?


Some articles in mainstream media and academia could be paraphrased as statements that begin "I am concerned about AI proliferation because..." The answers fall broadly into two distinct categories: existential threats, and grounded harms that are proven or reasonably anticipated. Math is not going to "take over the world." That may be an odd statement, but I think it needs to be said. A basic understanding of AI reveals that it is grounded in algorithms, predictive models built on statistics, and now generative models that can create new content. Ethicists often categorize the issues and dilemmas in the AI arena, and principles act as broad guidelines for tech development and regulation. Among the numerous sets of AI principles, some concerns must take priority over others. I roughly group the priorities as: non-discrimination; individual and societal wellbeing; human rights; transparency and explainability (how and why a system arrives at its predictions or conclusions); accountability and a human in the loop; limiting the replacement of workers and of especially "human" tasks and relationships; and privacy and cybersecurity. I would personally prioritize non-discrimination and human rights. My list leaves out certain existential threats. To me, worries that AI will take over humanity, cause mass extinction, run governments, or enslave humans are the wrong worries. They distract from the many legitimate ethical issues that already affect society, and, in some cases, they have a science-fiction feel that discredits the field of AI ethics.

Photo by Vlada Karpovich on Pexels.com

AI basics dispel doomsday notions

Machine learning, put simply, uses statistical models and algorithms, which are mathematical sets of instructions, to predict outcomes. Neural networks are the mathematical systems used for machine learning and deep learning. They simulate biological processes, but they are not biological: they perform computations that are functions of their inputs. Deep learning can be supervised, semi-supervised, or unsupervised. The system passes data layer by layer to arrive at a prediction or an answer, and now to generate content; the machine "learns" through patterns it recognizes in the data. Generative AI creates content from data. The jump to generative AI is significant in that, rather than primarily sorting data and predicting outcomes, the AI can create the end item. As both robotics and generative AI advance, we observe more life-like qualities in technology, and those qualities sometimes lead to assumptions that push commentators toward the sci-fi end of the spectrum. Yet machines will never "rule the world," at least not in the sense some suggest. Jobst Landgrebe and Barry Smith, for example, argue that the human brain, central nervous system, and peripheral systems cannot be mathematically modeled in the way such a takeover would require, and that AI-driven robots and computers are unlikely to develop the social skills necessary to replace humans in society.
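To make "computations that are functions of their inputs" concrete, here is a minimal sketch of a two-layer neural network's forward pass. The layer sizes, random weights, and sigmoid activation are illustrative assumptions, not any particular deployed system; the point is that a prediction is arithmetic, not awareness.

```python
# A minimal sketch of "layer by layer" computation: a tiny two-layer
# neural network forward pass. Sizes, weights, and the sigmoid activation
# are invented for illustration, not taken from any real model.
import numpy as np

def sigmoid(z):
    # Squashes each value into (0, 1); a common activation function.
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # layer 1: 3 inputs -> 4 units
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # layer 2: 4 units -> 1 output

def predict(x):
    # Each layer is just matrix arithmetic followed by a nonlinearity.
    h = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ h + b2)

print(predict(np.array([0.2, 0.5, 0.1])))  # a number, not a mind
```

However large the model, the same principle holds: inputs flow through layers of arithmetic to produce an output.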

Some long-term concerns are unrealistic

Artificial general intelligence (AGI) is described as "a self-teaching system that can outperform humans across a wide range of disciplines." It is defined by its contrast with narrow AI. Singularity is defined as superhuman intelligence, but it is often depicted not just as solving problems faster, but as surpassing humans, being "awake," and having consciousness. One definition is "a hypothetical moment in time when artificial intelligence and other technologies have become so advanced that humanity undergoes a dramatic and irreversible change." The concept of the singularity came from a science-fiction writer, not a scientist. It is often implied that the singularity would mean AI that is conscious, potentially conscious, or emotionally and psychologically indistinguishable from humans. While the singularity and AGI are linked, their contexts differ slightly: the singularity refers to a great change in humanity resulting from AGI.

Photo by Rafael Cerqueira on Pexels.com

AGI contrasts with generative AI, which falls within narrow AI but can improve upon its sets of instructions and create content. (One article does suggest that GPT-4 is the beginning of artificial general intelligence.) Some definitions of artificial general intelligence imply complex, contextual decision making, autonomy (behavior that is not programmed), and self-awareness. Definitions of artificial general intelligence and the singularity vary somewhat. The singularity is celebrated by those who imagine it enabling indefinite human longevity, while doomsday-scenario proponents view it as a dark result of progress.

The Asilomar AI Principles include longer-term issues. One of them, the capability caution, warns against assuming upper limits on what AI can do; it is a warning not to underestimate AI's power or function. Anything that is too powerful can have a downside, and there is reason to be cautious. However, the capability caution may be behind the tech commentators who perpetuate ideas that deviate from how AI currently works. For example, Nick Bostrom created the paperclip thought experiment: the idea that an AI designed to make paperclips would divert local and then global resources entirely to paperclip making, thereby destroying humanity. He often writes about existential risk. But a utility function elevated to a super-goal does not seem likely or imminent, and various questions arise as to why a human could not stop an AI from going rogue. Hypotheticals and thought experiments may still have benefits, even if the risk is slim to none. The sketch below shows, in toy form, what optimizing a single utility function looks like.
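For readers unfamiliar with the term, here is a toy sketch, under loose illustrative assumptions, of what "a utility function with a super goal" means mechanically: an optimizer that greedily picks whichever action scores highest on one objective and values nothing else. The actions and the paperclip score are invented for illustration only.

```python
# A toy single-objective optimizer in the spirit of the paperclip
# thought experiment. Everything here is hypothetical illustration.
def utility(state):
    return state["paperclips"]  # the lone "super goal": more paperclips

def step(state, actions):
    # Greedily choose the action whose result maximizes the one utility value.
    return max((act(state) for act in actions), key=utility)

make_clip = lambda s: {**s, "paperclips": s["paperclips"] + 1}
do_nothing = lambda s: s

state = {"paperclips": 0}
for _ in range(3):
    state = step(state, [make_clip, do_nothing])
print(state)  # {'paperclips': 3}: the objective, and nothing else, drives behavior
```

The thought experiment's worry is that nothing in such a loop represents human values; the point above is that nothing in it resembles an imminent, unstoppable agent either.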

Photo by cottonbro studio on Pexels.com

Otto Barten, founder of the Existential Risk Observatory, is concerned with existential risk, an "intelligence explosion," and a "master" algorithm that would take over the world: "The AI may intentionally or, more likely, unintentionally wipe out humanity or lock humans into a perpetual dystopia. Or a malevolent actor may use AI to enslave the rest of humanity or even worse." Malevolent actors already use AI in harmful ways: spreading propaganda, running individualized advertising models, and cajoling people into bad acts. The internet is an enabler of modern slavery. Malevolent uses do not require GPT-4 or stronger technology, and there is no evidence that improved or stronger AI would exacerbate those issues. It very well could, but it may also have the power to repair them.

The same old debate

While the concept has been around since the 1980s (arguably the 1960s), techno-futurists periodically stoke fear of artificial general intelligence, or insist on preparing for it, selling the idea as a looming threat for which the world must get ready. However, simulation is not duplication. The debate has remained largely unchanged since the 1990s despite massive changes in AI. AI's likeness to humans may greatly improve, but its ability to rule the world literally, not so much (figuratively, perhaps, as AI has already had a massive impact on how people communicate, behave, work, and live).

The quick version of the debate: one side holds that a computer performing computations is not "thinking," and that building a computer system is not creating a mind. The opposite side argues that AI becoming sentient, conscious, worthy of rights, or otherwise a threat to humans through its (unexpected) decisions is a looming danger to humankind. I am firmly on the side that argues that, as life-like and sophisticated as AI becomes, and as good as it is at tasks humans do, or even at mimicking human expressions and responses to experiences and stimuli, it will never become emotionally competent, sentient, or worthy of human rights. Even as unleashed, unprogrammed machine learning continues to impress humans with its sheer power and its ability to create, mimic, predict, and perform tasks, AI will not become an independent global military power. It will, as predicted, be able to power robots and machines that kill; AI's military uses are vast, but the AI will not be an unleashed decision maker. Princeton University neuroscientist Michael Graziano views consciousness as a social construct that people attribute to themselves or others, saying, "All we do is compute a construct of awareness."

The new-but-same debate is evident in the public open letter calling for a pause on giant AI experiments. Public figures like Elon Musk and Andrew Yang have called for the hiatus. The letter mixes realistic concerns, like misinformation and worker replacement, with arguably unrealistic ones. Geoffrey Hinton worries that AI may create its own subgoals that conflict with human interests or divert human resources; he suggests AI may keep humans around to serve its purposes. And there are certainly legitimate justifications for slowing the foray into the less known and carefully assessing potential risks to populations. The value of huge datasets is also largely unexamined and may lead to erroneous conclusions, like those seen when medical research fails to include diverse populations. Yet a six-month pause on training the "most powerful" AI systems carries risks as well. The pause would not be global, and it would be difficult to police. Falling behind on AI is arguably not something the US should wish for. Others are discussing this debate about who should worry about what, and distinguishing the academic from the fantastic.

A better focus

The debate could also be reframed as a question: should we prioritize mitigating narrow AI's known, current harms, or prioritize AI's "existential threat" to humankind? To me, the answer is obvious: let's fix the current, real problems. If an algorithm caused you to lose custody of your children, subjected you to government surveillance, consistently relegated your resume to the trash, or led you to join a terrorist organization or cult (or made you the victim of one), the harm is current, strong, and real. The ability to pay attention, to enjoy in-person relationships without FOMO, and to part with your phone for long periods are similarly pressing concerns. Narrow AI has already brought societal changes and even reshaped the landscape of adolescence. Many of these concerns (and even the odd existential robot takeover) could be addressed with regulation. The regulatory approach would allow the benefits of AI, including its diagnostic abilities in medicine, its use in the search for new drugs, and its capacity to let people express themselves and take part in all sorts of research, science, and discovery, to continue.

Narrow AI already has significant power to harm (and help) society. AI has the potential to exacerbate polarization, war, and climate change. Deep fakes and the unregulated public deployment of generative AI will likely add to the already vast harms. Rather than the sci-fi version of the "existential threat" conversation (does Hinton really think AI will decide whether or not to keep humans around?), important academic and practical conversations are calling out specific AI applications for their problematic predictions, lack of explainability, and harms to individuals and society.

Of course, there is room to think about both narrow AI and the so-far fictitious (impending, or nonsense?) artificial general intelligence. People with sci-fi minds and interests, and even futurist philosophers, may dive into the what-ifs. What if robots take over the world? (The what-ifs of enhancement by way of brain-computer interfaces, bionic body parts, and the like are arguably more worthy of ethical attention; the debate between transhumanists and bioconservatives covers that field.) It does not exactly hurt that some people are looking into how AI could threaten humanity, although it does seem to spark irrational worry. My hope is that they come to see that the way AI hurts people and humanity is through discriminatory algorithms, aggressive individualized advertising models that send people down rabbit holes validating their beliefs or erroneous information, the facilitation of illegal sales of firearms and kidneys, the enabling of bullying, the reorganization of social structures around likes and rewards, and so on. And, certainly, there may be people kept up at night by fear of an AI robotic takeover. But hopefully their musings will not distract from responsible policymaking that would protect the people AI harms now.
