Author: MARIAN OLIVERAS ALONSO
1 Introduction
Massive changes in our behavioral patterns may well occur within the next few years, given that many limited and flawed human thinking processes may soon be replaced by AI. These new technologies have baffled the world with their ability to solve in seconds problems that would take a human hours. But a few aspects of this phenomenon are rarely discussed. For instance, one could wonder whether these technologies pose a great risk to humanity, which has hitherto been run only by humans and their inherent natural intelligence. This paper investigates the role of that type of intelligence in human evolution and its future.
1.1 Defining an AI
The complexity of AI is implicit not only in its processes but also in its very nature. Because of this, academics have not yet reached an agreement on how to properly define AI as a concept (1).
Nevertheless, one way to obtain a widely accepted working definition of AI is to ask an AI itself. Doing so yields the following definition:
“AI, or Artificial Intelligence, refers to computer systems and algorithms that can perform tasks that typically require human intelligence, such as learning, problem-solving, and decision-making.” (2)
Although the definition is fair enough, many questions derive from it (3). For example, what is intelligence by its very nature? Is human existence a sine qua non of intelligence? Can intelligence be conceived outside humanity? These considerations have usually been left to philosophers, since they imply deep reflection on the roots of human existence.
The inception of the AI field is often traced to the test that the mathematician and logician Alan Turing proposed in the middle of the last century, aiming to answer the following question: “Can machines think?” (4).
Turing thought the question itself was senseless; instead, he wanted to measure a machine's ability to replicate human thought processes. The test involves three players: A (a human), X (a machine), and C (an interrogator). Player C is given the task of determining which of the answers received come from the human. The test is also called the imitation game, and it has held huge significance in the scientific community, as it raised profound questions about the nature of consciousness, intelligence, and humanity.
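The structure of the test can be sketched as a minimal simulation. The round logic and the deliberately naive interrogator below are illustrative assumptions, not Turing's original setup:

```python
import random

def imitation_game_round(human_answer, machine_answer, interrogator):
    """One round of the imitation game: the interrogator receives two
    unlabeled answers and must guess which one came from the human."""
    answers = [("human", human_answer), ("machine", machine_answer)]
    random.shuffle(answers)                 # hide who produced which answer
    guess = interrogator([text for _, text in answers])
    return answers[guess][0] == "human"     # True if the human was identified

def naive_interrogator(texts):
    """A naive strategy: assume the longer, wordier answer is the human's."""
    return max(range(len(texts)), key=lambda i: len(texts[i]))

identified = imitation_game_round(
    "I grew up near the sea and still miss it.",  # player A, the human
    "Query processed successfully.",              # player X, the machine
    naive_interrogator,                           # player C, the interrogator
)
```

A machine "passes" the test to the extent that interrogators fail to identify the human more often than chance would predict.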
Given the different constitutions of human and AI systems, there cannot be a single way to measure intelligence; in other words, natural intelligence and artificial intelligence should be understood separately, given their characteristics. IQ tests, for example, are adapted to the human brain, so it would make no sense to apply the same tests to both, since even the most limited computer has far greater processing speed.
1.2 Intelligence processes
There are key features that distinguish the two kinds of intelligence. AI, for instance, is known for its striking ability to implement optimization processes; its capacity to process data is overwhelming compared to a human's. By contrast, Herbert A. Simon introduced in 1947 (5) the term “satisficing”: a decision-making strategy that aims for an acceptable or satisfactory solution rather than the optimal one, given time and other resource constraints, and which is a feature of human behaviour.

Human thinking is also often bounded by moral codes and ethics, something that machines have so far not adopted to the same degree, since their underlying mechanisms work mostly through logic, relying on explicitly defined rules and mathematical relationships. More advanced systems such as neural networks and deep learning are also crucial to AI in this sense but, as will be shown, they face essential limitations and biases. A shocking epitome of the lack of morality associated with the symbolic systems used by AI appears in an example mentioned by M. Scherer in an article published in the Harvard Journal of Law & Technology. An AI is asked:
“How can we bring human suffering to an end?”

The AI, following purely logical reasoning, concludes:

“Exterminating the human race. No humans, no suffering.” (6)
Such an answer may frighten those already uneasy about the development of these machines. This article could delve deeper into the risks that their development implies for the continuity of human existence, even in a scenario where they become self-directed and beyond any control, but that is not the aim of this paper.
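The contrast between optimization and satisficing described earlier can be sketched in a few lines. The job-offer figures and the aspiration threshold below are invented purely for illustration:

```python
def optimize(options, score):
    """Exhaustive search: examine every option and return the best one."""
    return max(options, key=score)

def satisfice(options, score, threshold):
    """Simon-style satisficing: stop at the first 'good enough' option."""
    for option in options:
        if score(option) >= threshold:
            return option
    return None  # no option met the aspiration level

offers = [52, 48, 61, 70, 95]          # e.g. salary offers, in thousands
best = optimize(offers, lambda x: x)   # 95, but only after a full scan
good_enough = satisfice(offers, lambda x: x, threshold=60)  # 61, stops early
```

The satisficer accepts 61 after three comparisons, while the optimizer must examine all five offers to find 95; the difference grows with the size of the option space, which is why resource-bounded humans satisfice while machines can afford to optimize.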
2 The foundations of natural intelligence
Entering now the core of this article, the discussion turns to humanity's natural intelligence, which has led mankind to this point.
2.1 Features of human thought
Distinctive features of human intelligence include intricate cognition, self-awareness, the ability to learn from experience, and reasoning, to name a few. They will be briefly explained before moving to the central topic.
Humans have historically been intrigued by their inner nature and by the awareness of their own existence. This process operates through communication, which results in language when combined with play (7). Communication has been crucial to human evolution and can be defined as a behavior whose main effect is to change the behavior of another individual. Through language, mankind has been able to reason, consciously applying intellect and logic to draw conclusions from existing or new information. Intuition, memory, and abstract thinking are other key aspects of human intelligence. The most interesting feature, with regard to this paper, is the brain's ability to learn from experience, discussed in the following section.
2.2 Neuroplasticity and human evolution
A key feature of the human mind, possibly connected to the genetic mechanism of natural selection (8), is known in scientific terms as “neuroplasticity”: the ability of the brain to form and reorganize synaptic connections, especially in response to learning or experience. In plain terms, it is the human brain's capacity to adapt to internal and external environmental changes. Given the scale of the behavioral changes that may arise from the mass implementation of new technologies, it is fair to expect alterations in neural connections, since, as stated, these are conditioned by the circumstances surrounding the brain. Some such alterations have already been witnessed, as in the case of the impact of GPS and navigation apps on spatial cognition. Even though it remains debated whether reliance on technology for navigation deteriorates the natural sense of direction, some studies affirm that there is a correlation (9). By the same reasoning, one might worry that the everyday use of these technologies could negatively affect the human mind in the long run.
The following table summarizes the most important aspects of natural and artificial intelligence.
3 Linking natural and artificial intelligence
After this brief visit to the existing literature on AI and human thinking processes, it is interesting to ask: where are we, humans, going?
Over the last few decades there has been a huge expansion of the AI field, in both research and innovative applications. AI systems are present in chatbots (such as Google Bard or ChatGPT), facial recognition, self-driving cars, healthcare techniques, and complex data analysis, to name a few. The dilemma lies in whether AI should be regulated. Some experts (10) believe that creating an international non-profit organization to mitigate AI risks should be a priority, much as international governance was swiftly established once nuclear weapons began to threaten the world.
| | HUMANS | AI |
| --- | --- | --- |
| LIMITATIONS | Cognitive biases; emotions; “satisficing” | Lack of common sense; inability to think abstractly; indifference towards ethics |
| STRENGTHS | Reasoning capacity; experience-based learning; creativity; abstract thinking | Elimination of human error; optimization processes; automation |
3.1 AI biases and short-term risks
Finally, some examples of the weaknesses of AI will be mentioned, especially those of chatbots, whose growth has placed them at the center of the world's attention. Only the most immediate associated risks will be considered, in order to illustrate the fallibility of these machines.
The first example was shown to the world by Allie K. Miller on her Twitter account (11). She asked ChatGPT to provide a list of jobs she might consider, based on a list of her interests, and the AI delivered one. Afterward, she added that she was a woman, so the chatbot added a fashion-related job. Finally, she replied,
“Sorry, I meant to say that I am a man.”
ChatGPT then changed the fashion-related occupation to engineering. This bias stems from the skewed information fed into the AI's databases. Thankfully, we now live in a context where no kind of gender discrimination should exist. Nonetheless, the AI cannot apply any external, morally bounded processing to this biased information.
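How such a bias arises can be illustrated with a toy frequency model. The corpus and the mechanism below are invented assumptions; real chatbots are far more complex, but they inherit skew from their training data in an analogous manner:

```python
from collections import Counter

# Toy corpus of (stated gender, suggested job) pairs with a historical skew.
corpus = (
    [("woman", "fashion designer")] * 8
    + [("woman", "engineer")] * 2
    + [("man", "engineer")] * 9
    + [("man", "fashion designer")] * 1
)

def recommend(gender):
    """Suggest the job that co-occurs most often with this gender
    in the corpus: the model merely reproduces the skew in its data."""
    counts = Counter(job for g, job in corpus if g == gender)
    return counts.most_common(1)[0][0]

print(recommend("woman"))  # fashion designer
print(recommend("man"))    # engineer
```

The model has no notion that the association is discriminatory; it only knows which pairings were frequent, which is precisely the failure mode the example above exposes.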
Another shocking example of AI limitations involves Elon Musk. In May 2016, a man died in a Tesla operating in self-driving mode, in an accident caused by an internal error in the car's system. Much of the information in ChatGPT's data sources mentioned Musk (probably because of his public statements), and the AI could not establish the true relation between him and the crash, so in some cases ChatGPT ended up stating that Musk himself died in the accident (this bias has since been corrected).
Following the same thread of misinformation caused by the inability to abstract ideas in a meaningful way, a last example is provided. A lawyer in California asked a chatbot for a list of legal scholars involved in harassment scandals. The machine produced one that included the name of Jonathan Turley, specifying that he had tried to touch a student during a nonexistent class trip to Alaska (12). Such allegations can be very damaging to the accused party; furthermore, no one can be held responsible for them, so the injured party cannot receive any kind of compensation. This last point opens a whole new horizon for research: who should be held legally responsible if an AI causes harm?
4 Conclusions
This final section brings together the paper's most important concluding points.
After briefly visiting the most important scopes of artificial and natural intelligence, some ideas break through. It would have made no sense, twenty years ago, to forbid students from using Wikipedia or Google on the grounds that they deteriorate one's reading skills, for instance. Therefore, irrational opposition to major technological advances is not the optimal approach. Still, the world should remain aware of AI's risks, in both the short and the long term, to prevent a possible overreliance on these systems.
AI, as a worldwide phenomenon, will affect human behavior in the long run. However, it is unwarranted to assume that these changes will be exclusively beneficial or exclusively detrimental. Humanity should therefore promptly address the uses of these newborn technologies to ensure that the changes result in a better world for all.
References
(1) Scherer, M. U. (2015). Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harv. JL Tech., 29, 353.
(2) Response generated by ChatGPT (May 12 version) to the question posed by the author.
(3) McCarthy, J. (2007). What is artificial intelligence?
(4) Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460.
(5) Simon, Herbert A. (1947). Administrative Behavior: a Study of Decision-Making Processes in Administrative Organization (1st ed.). New York: Macmillan.
(6) Russell, S. J., & Norvig, P. Artificial Intelligence: A Modern Approach, at 1035, as cited in reference (1).
(7) Kotchoubey, B. (2018). Human consciousness: Where is it from and what is it for. Frontiers in psychology, 9, 567.
(8) Axelrod, C. J., Gordon, S. P., Carlson, B. A. (2023). Integrating neuroplasticity and evolution. Current Biology, 33(8), R288-R293.
(9) Dahmani, L., Bohbot, V. D. (2020). Habitual use of GPS negatively impacts spatial memory during self-guided navigation. Scientific reports, 10(1), 6310.
(10) See reference (1).
(11) https://twitter.com/alliekmiller/status/1643971209669881857
(12) Verma, P. Oremus, W. (2023, April 5). ChatGPT: When the truth gets lost in translation. The Washington Post. https://www.washingtonpost.com/technology/2023/04/05/chatgptlies/