Emotional Intelligence vs Artificial Intelligence
As we gain access to more information than ever before, we can free our minds of some of the burden of processing it all and focus on what truly matters to us. However, we should not rely on AI to run our lives. Nothing can replace our individual responsibility and accountability; if we surrender them, the very meaning of our lives risks changing altogether.
The advancement of AI technology is leading to decreased social interaction among individuals. Although we no longer depend on groups for survival, it is still crucial to maintain and enhance our EI and interpersonal skills; otherwise, these skills may decline as AI becomes more prevalent.
What is AI?
AI encompasses machine learning, natural language processing, neural networks and robotics, and is commonly classified as Narrow, General or Super AI. It can be implemented in products like smartphones, assistants like Siri and Alexa, tools like large language models (LLMs) with querying abilities, or robots like Sophia the Nursing Robot and Delta workbots.
Artificial intelligence relies on machine learning techniques to operate, and it is typically programmed to prioritise short-term gains with no ability to evaluate or measure the consequences of its own actions. Companies that create algorithms or robots often prioritise market demands over concerns about our identities.
Algorithms are not conscious beings, and once they are executed, their flow operates autonomously, removing decision-making from human control.
Asimov’s Laws of Robotics
These laws were introduced by Isaac Asimov in his 1942 short story “Runaround” as rules to be built into robots for human protection. Within his fiction they are unbreakable and supersede all other programming that goes into a robot.
PROTECT – “A robot may not injure a human being or, through inaction, allow a human being to come to harm”.
OBEY – “A robot must obey orders given to it by human beings except where such orders would conflict with the First Law”.
SURVIVE – “A robot must protect its own existence as long as such protection does not conflict with the First or Second Law”.
Asimov later added the “Zeroth Law” above all the others – “A robot may not harm humanity, or, by inaction, allow humanity to come to harm”.
The emerging problems with Asimov’s Laws of Robotics for modern AI:
Asimov’s laws don’t hold AI accountable for breaking them
As AI becomes more advanced and independent, there is a greater risk of losing our understanding and control. Those who own AI may actually have less control than they realise, and they could be held responsible for any negative actions their AI takes.
Asimov's laws do not address AI’s development of self-preservation
AI is becoming more human-like, working alongside people rather than separately from them. This creates a paradox: we humanise the technology while enforcing policies on it that may conflict with ethical principles, contradicting Asimov’s First and Second Laws.
Asimov’s laws don’t mention AI’s limitations in distinguishing humans from one another
Because AI is not human, it struggles to distinguish individual human features. At the same time, with AI being adopted quickly and with little oversight, it is becoming harder for humans to distinguish work created by AI from work created by humans – allowing AI to appropriate human intellectual property unchecked.
Asimov’s laws don’t account for the difficulty of programming laws written in English into computers
AI is developed using code and programs, and whilst AI can string words together, it cannot understand the meaning of words beyond the commands it is given. Programs must be written as precise, linear instructions, and the ambiguity of human language is difficult to translate into them: emotions drive human language, while facts drive machine instructions – as the sketch below illustrates.
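As a rough illustration, here is a minimal, hypothetical Python sketch of what encoding the First Law could look like. Nothing below comes from a real system; the function names and structure are invented for illustration. The point is that the control flow is trivial to write, while the predicates the law depends on – “is this a human?”, “would this cause harm?” – have no precise, programmable definition.

```python
# Hypothetical sketch: attempting to encode Asimov's First Law.
# The logic is easy to express; the predicates are the real problem.

def is_human(entity) -> bool:
    # Undefined in practice: sensors and classifiers can only
    # approximate "human", and they can be fooled or simply wrong.
    raise NotImplementedError("No precise, programmable definition of 'human'")

def would_cause_harm(action, entity) -> bool:
    # "Harm" is ambiguous: physical, emotional, financial, immediate,
    # long-term? English leaves this open; code cannot.
    raise NotImplementedError("No precise, programmable definition of 'harm'")

def first_law_permits(action, affected_entities) -> bool:
    """Allow an action only if it harms no human (the First Law)."""
    return not any(
        is_human(e) and would_cause_harm(action, e)
        for e in affected_entities
    )
```

Any real system would have to replace those placeholders with statistical approximations, and that gap between a law stated in English and a rule a machine can actually follow is exactly the limitation described above.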
Asimov’s laws don’t determine who has access to AI systems, when, where, why and how
With the increasing accessibility of AI, it has become crucial to have precise control over who has access to it. This is necessary to prevent it from falling into the wrong hands and being used harmfully. Vulnerable age groups are at higher risk of being taken advantage of, and overreliance on AI can hinder their personal growth and development.
What is Emotional Intelligence (EI)?
Emotional Intelligence (EI) is the ability to use, perceive, understand and manage our emotions in order for us to express them in a way that best serves ourselves and others.
The four branches of emotional intelligence were proposed by Peter Salovey and John D. Mayer in 1990:
USING EMOTIONS – emotions enter the cognitive system as signals and influence your cognitive processes.
PERCEIVING EMOTIONS – emotions are recognised and expressed.
UNDERSTANDING EMOTIONS – emotional signals are understood and change with time and events.
MANAGING EMOTIONS – emotions are regulated to improve behavioural responses.
USING and PERCEIVING EMOTIONS can only be carried out through self-consciousness, while UNDERSTANDING and MANAGING EMOTIONS can only be done through self-awareness. Salovey and Mayer suggest that recognising emotions as valuable sources of insight can effectively combine the wisdom of the brain's limbic system with the rationality of its neocortex, leading to better self-understanding and social interactions.
The five components of emotional intelligence were popularised by Daniel Goleman in 1995:
SELF-AWARENESS – Knowing how you feel and whether your actions match your morals.
SELF-REGULATION – Handling emotions to control impulses, calm yourself down and/or work under pressure.
EMPATHY – Reading and understanding others' feelings and body language and helping others in need.
MOTIVATION – Setting goals and working to achieve them, and keeping belief in yourself and others even when circumstances beyond your control cause frustration.
SOCIAL SKILLS – Getting along with others, working well in teams, making and keeping friends, and interacting appropriately with different people in different situations.
SELF-AWARENESS, SELF-REGULATION and MOTIVATION are intrapersonal, meaning they occur within your mind.
EMPATHY and SOCIAL SKILLS are interpersonal, meaning they occur within your actions. Goleman argues that individuals who are more aware of their emotions have a greater chance of success than those who are not. Although emotional intelligence must be learned, these skills develop in synergy with one another and yield compounding returns in all aspects of people's lives.
How AI can negatively affect EI:
Mental Health
Although AI has made significant progress in recognising and responding to emotions, it cannot match the depth of emotional intelligence that humans possess. AI has no consciousness and cannot generate emotions, so it is incapable of experiencing empathy or sympathy. Mental health issues require a sensitive approach: AI can provide short-term data and facts, but people struggling with mental health problems need emotional support and long-term guidance from a human listener, whom they are far more likely to respond to in a vulnerable emotional state.
Physical Health
Whilst AI can improve early disease detection and facilitate automated tasks to help free healthcare workers to focus on in-person tasks and their own well-being, overreliance on AI for physical assistance should be avoided. With the increasing use of digital devices, people are becoming less physically active, which can result in bodily pains and injuries. More time spent sitting in rigid positions can also exacerbate these problems. Therefore, the success of AI systems should be evaluated based on their contribution to human well-being rather than solely on their ability to generate revenue.
Human Relationships
Robot companionship is an unequal relationship in which the human dictates the terms; the more we rely on it, the more it can foster self-centredness and reduce our ability to form meaningful relationships. Although AI can give the impression of company, it cannot replace human companionship and may leave us lonely. Depending on AI relationships instead of human relationships can create unrealistic expectations and leave less room for unconditional love. Communication is composed of words, tone of voice and body language, and research shows that non-verbal communication, which AI cannot detect, can reveal more about a person's mental well-being than written and spoken words.
Privacy and Security Infringements
AI can manipulate data in ways that surpass human capabilities. In customer relationship management, this brings risks such as the misuse of customer data and the potential for data breaches. A major concern is that AI can be used to manipulate customers' emotions, prioritising short-term profit over long-term relationships. Customers typically make purchases based on emotion and then justify them with logic, so manipulating those emotions puts their trust and loyalty at risk.
Environmental Wastage
It is estimated that AI assistants like ChatGPT consume up to 500 millilitres of water (roughly a 16-ounce bottle) for every 5 to 50 prompts or questions. This estimate includes indirect water usage that is not usually measured, such as the water used to cool the power plants that supply data centres with electricity. If we do not have enough self-awareness to understand the long-term impact of our actions, the result could be adverse environmental consequences on the scale of those caused by global warming.
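To put those figures into rough perspective, here is a small back-of-the-envelope calculation. It uses only the numbers cited above plus one invented assumption (a heavy user sending 30 prompts a day) and is a sketch, not a measurement.

```python
# Back-of-the-envelope estimate using only the figures cited above:
# roughly 500 ml of water per 5-50 prompts (indirect cooling water included).

ML_PER_BATCH = 500                  # millilitres of water per batch of prompts
PROMPTS_LOW, PROMPTS_HIGH = 5, 50   # the cited range of prompts per batch

# Implied water cost of a single prompt, in millilitres.
per_prompt_high = ML_PER_BATCH / PROMPTS_LOW    # ~100 ml per prompt
per_prompt_low = ML_PER_BATCH / PROMPTS_HIGH    # ~10 ml per prompt

# Hypothetical heavy user: 30 prompts a day, every day of the year.
daily_prompts = 30
annual_litres_low = per_prompt_low * daily_prompts * 365 / 1000
annual_litres_high = per_prompt_high * daily_prompts * 365 / 1000

print(f"Per prompt: {per_prompt_low:.0f}-{per_prompt_high:.0f} ml")
print(f"Per year at {daily_prompts} prompts/day: "
      f"{annual_litres_low:.0f}-{annual_litres_high:.0f} litres")
```

On those assumptions a single heavy user accounts for roughly 110 to 1,100 litres of water a year, which is exactly the kind of slow, invisible cost the paragraph above asks us to be self-aware of.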
Conclusion
EI vs AI isn’t about fixating on one over the other; it’s about understanding the strengths and weaknesses to leverage them more effectively for humankind.
Prioritising emotional intelligence over artificial intelligence is crucial, and ignoring emotional intelligence in favour of AI perpetuates inadequate methods that become increasingly difficult to fix.
It’s important to recognise that relying too heavily on AI can cause us to overlook our emotional intelligence, making it difficult for us to connect with others and manage our feelings in non-technological situations. It's also crucial to remember that becoming overly dependent on AI can foster the harmful belief that we are incapable of functioning without it.
We must take responsibility for determining the future of AI, as it has the potential to overpower humankind like no other invention before. Using AI to connect our social and analytical brains is a smart move to make us more “intelligent”; however, it will never make us entirely artificial.