Could Generative AI pose an existential threat to us?



In recent days, prominent figures in the field of AI, such as Geoffrey Hinton and Sam Altman, have sparked discussions about the potential dangers of generative AI.

There is something eerie about ChatGPT-like bots. One can think of many dangers posed by current AI technology, but none of them seem existential. Andrew Ng asked the LinkedIn community to list the existential risks of AI. While it is challenging to pinpoint the exact reasons behind these concerns, there is an undeniable discomfort associated with the latest advancements in AI technology. This article is an attempt to evaluate the fears surrounding generative AI, examining its impact on various aspects of our lives and contemplating whether it poses a terminal threat to the human species.

1. The Misdemeanor Bots:

When interacting with ChatGPT-like bots, one cannot help but think of lesser dangers, such as spam emails or age-inappropriate content. Although these experiences might not amount to existential threats, they highlight the need to scrutinize the influence of generative AI on our everyday lives.

For example, when my 8-year-old son was trying out Bing, he asked it to tell him some jokes. It told him many, one of which was age-inappropriate. And the email I received from an individual in Burkina Faso claiming to have left me his inheritance had no spelling errors, unlike the one I had received earlier. I am sure these scamsters are using ChatGPT.


2. The Proliferation of Non-Original Content:

A more concerning aspect of generative AI is the potential inundation of non-original and useless text on the internet. This flood of content could make it increasingly difficult for people to access reliable information. The amplification of propaganda through generative AI could further exacerbate this issue, leading to the degradation of trustworthy sources and undermining democratic processes.


3. The Impact on Democracy and Governance:

Misinformation and manipulation of individuals through generative AI pose risks to democratic systems. The influence of social media has already demonstrated the power to sway public opinion, and generative AI could take this manipulation to new heights. The deterioration of democracy might even pave the way for alternative forms of governance. A government run by AI may not be such a bad thing. 


4. Disrupting Online Business Models:

With the integration of ChatGPT in search engines like Bing, concerns arise regarding the implications for website owners and their revenue streams. If users receive direct responses through chat interfaces, website traffic may decline, potentially impacting monetization opportunities. This raises questions about the future revenue model of search engines and the motivations of bloggers and site owners to make their content discoverable by search engine crawlers. 

Imagine ChatGPT's responses if it were always trying to sell you something. It would be annoying, but I am sure it can't annoy the whole of humanity to death.


5. AI and Human Relationships:

Changing societal dynamics and shifting sexual preferences might lead some individuals to seek AI companionship as an alternative to traditional human relationships. While this could impact human population growth, the implications for society as a whole remain uncertain.


6. The Uncertainty of Introducing a New Intelligent Agent:

The potential introduction of another intelligent agent into our ecosystem raises questions about the long-term consequences and how it may affect the balance of our natural world. Homo sapiens have survived, and may even have eliminated, other more intelligent species such as the Denisovans, Neanderthals, and Homo floresiensis.

As a species, we have given overbearing importance to intelligence, but I don't think we understand it. For hunter-gatherers, intelligence meant the ability to remember different plants, insects, animals, habitats, and so on. Over time, as we isolated ourselves from nature and cocooned ourselves in "pest-free" concrete enclosures, the need to use our other abilities has diminished. Our physical strength and endurance as a species have deteriorated. The internet and unlimited digital storage eliminate the need to remember things. Artificial intelligence takes away the need for us to reason at all. We may evolve into a species that survives as a parasite on its creation, AI. Eventually, humanity withers away without purpose.


7. Weaponization of AI:

The weaponization of AI is a tangible threat that already exists. The use of AI-powered autonomous weapons by various military forces illustrates the risks associated with this technology. In 2020, well before ChatGPT was conceived, Israel killed an Iranian nuclear scientist with an AI-assisted, remotely operated sniper gun. Major militaries across the globe have already hopped on the AI bandwagon to create a new breed of autonomous weapons, such as those used in the Ukraine war.


Many fears of AI taking over humanity are based on the assumption that AI will develop survival instincts and exhibit the behaviour of the organic beings around us. But a sentient AI would not have gone through the evolutionary path biological organisms have taken (AI could be the result of a genetic algorithm, but don't expect it to show organic behaviour :P). Even as a sentient entity, AI would not have to worry about survival, procreation, territorial conflict, and all the other struggles cellular organisms go through in the physical world. More importantly, it would not have a physical existence. So it is safe to assume that the behaviour of a sentient AI will differ from what we have seen around us for the last 10,000 years.

The question remains whether these differences will be favourable or detrimental to humanity.


"The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man."
- George Bernard Shaw


Why are people unreasonable? I think they have strong emotions that drive them to act unreasonably. Emotions, ego, identity, recognition: everything that drives us humans may not drive AI. AI is controlled by objective functions, or loss functions. Unless these loss functions somehow translate into emotions, the extrapolations from science-fiction movies may not pan out. One may argue that these bots emote today, but I don't think they do. What they concoct are "expressions of emotions," expressed through our languages. AI emotes by virtue of the text we have fed it.
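To make the point concrete, here is a minimal sketch (my own illustration, with made-up numbers, not anything from a real system): a model's entire "motivation" is a single number to be minimized. Plain gradient descent nudges a parameter toward the minimum of a quadratic loss; there is no room for greed, fear, or ambition in the update rule.

```python
def loss(w):
    # Hypothetical objective: squared distance from a target value of 3.0
    return (w - 3.0) ** 2

def grad(w):
    # Derivative of the loss with respect to w
    return 2.0 * (w - 3.0)

def train(w=0.0, lr=0.1, steps=100):
    # Repeatedly nudge w in the direction that reduces the loss.
    # This is all the "drive" the system has: arithmetic, not emotion.
    for _ in range(steps):
        w -= lr * grad(w)
    return w

final_w = train()  # converges toward the minimum at w = 3.0
```

Everything a trained model "wants" is baked into a function like `loss` above; whether such a function could ever translate into something like emotion is exactly the open question.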

Without emotions such as greed for power, money, territory, and resources, AI would turn out to be a good ally. Or so I hope.

To conclude, the concerns raised by neural-network veterans are natural and sincere. The petitions and disagreements in the community are not (solely) motivated by commercial agendas. Although we have not been able to articulate the specific existential risks to humanity from AI, they are worth discussing.
