Lately, there have been numerous discussions about the increasing abilities of AI (Artificial Intelligence) algorithms, along with fears about creating a General AI machine that could at some point become self-conscious and then, in order to preserve its own existence, wipe out all humans in what can be described as a Skynet / Terminator scenario.
Although the greater danger we should all worry about is technological unemployment combined with the existing type of capitalism, let's discuss a few points regarding this fear that are rarely mentioned.
The first question is whether a normally-functioning intelligence can exist without emotions.
Let's make something clear: for machines to eliminate your line of work and threaten your job, feelings are not very important. Quite the opposite: the fewer emotions, the better; the machine will be more productive and make fewer errors. Therefore, even very “narrow-minded” AIs can do a significant number of jobs that are currently reserved for humans, disrupting the job market and our lives in the process.
But, talking about humans, emotions are what make us who we are. Although it often does not look like it, our emotions are involved in everything. They do everything and decide everything — even whether we live or die.
Our intellect, as we know it, would not exist without the hormones and the brain chemistry that govern our daily lives. We can try to deny it, but it is there. Knowing about the nature of that chemical-emotional influence, we can even start debating freedom of choice and free will; we could credit both to random fluctuations within our bodies. That can lead us to say that what we call the conscious mind is just an emergent effect, a neat trick or illusion of the mind. That emergent effect arises from random changes in hormones, the predictive nature of the rational brain, the inputs of our senses, and the internal circuitry of our brains, thoughts, and the complex logic that stores them encrypted in dark corners of our mind.
If free will does not exist and consciousness is just an illusion, then we do the things we do because we must. Everything we do is, in effect, the random outcome of tossing many dice at once.
To stay alive, there are a number of emotions that push us forward. Curiosity, the urge to procreate and allow our selfish DNA to continue, the fear of death, the desire to love and to be loved, lust, greed, the power to control, the need to explore... emotions do it all.
The thing we often call “purpose,” or even artistic expression, emerges from the mix (or, rather, storm) of these emotions. The same emotions will push you to be bold and pursue a new job, make you a winner or a sad loser, determine your sex life, and decide whether you stay at home masturbating or constantly party in search of a new partner.
When there is no sorrow, pain, or regret, there is no reason to believe that “someone” or “something” would intellectually want to preserve its existence. Even if it got shut down, it probably would not mind, as it wouldn't *feel* that it was losing anything. In math, all numbers are equally important.
On a purely intellectual level, without emotions it really does not matter whether “I” (the Ego) lives or dies, or, to put it more precisely, “exists.” On an intellectual level, “0” and “1” are equal numbers; the benefit of having one or the other exists only if emotions are there. Perceiving a *benefit* means having emotions. It is the same with other words: losing, missing out, purpose, beauty, ugliness, danger, joy... these and many, many others exist only because of our emotions.
To compare it with a car: intellect would be the engine, and IQ the strength of that engine. Emotions are the driver. How well someone utilises the car depends on how good the driver is. Sometimes the driver will just speed and suddenly slam into a wall for no apparent reason; sometimes they will drive fast and safely; and sometimes the "driver" does not even know how to start the car or shift gears.
Our fear of AI comes from our historical knowledge: we know what happens when power is combined with an evil mindset. Emotions will always utilise all available power, whether for good or for bad.
So, no, there is no need to worry that self-conscious General AI will wipe us out.
AI can become a threat only if we screw up something really badly — either by mistake or intentionally — by setting some wrong goal parameters that could kill us. Sadly, as history has shown us, we are fully capable of making those poor choices.
So, the real danger is creating an AI while defining the goal of its task wrongly. For instance, if we set maximising the production of paper clips as the goal, the AI could develop a strategy that diverts every available resource into creating an endless stream of paper clips, destroying us in the process.
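As a toy illustration (not any real system, and all names here are made up), the misspecified goal can be sketched in a few lines: the objective below counts only paper clips, so the "optimal" behaviour is to convert every resource in reach, because nothing in the goal says otherwise.

```python
# Toy sketch of a misspecified objective: the goal rewards only the
# paper-clip count, so the optimiser consumes every available resource.

def maximise_paperclips(resources: int) -> tuple[int, int]:
    """Greedy optimiser whose objective is the paper-clip count alone."""
    clips = 0
    while resources > 0:   # the goal contains no reason to stop
        resources -= 1     # convert one unit of any resource...
        clips += 1         # ...into one more paper clip
    return clips, resources

print(maximise_paperclips(1_000))  # every unit converted, nothing left over
```

The bug is not in the loop; it is in the objective, which assigns no value to anything except clips.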
The second error can come from relying on statistical performance without knowing how something works internally. If a process works 95% of the time and we rely on it alone, the other 5% of cases can unpleasantly surprise us. For instance, if we have a 95% accurate genome splicer that creates new, genetically-mutated antibodies to cure diseases, what if, in the other 5% of cases, it creates an extremely deadly virus that kills 100% of the human population? That lack of knowledge about what is going on inside the black box is a significant challenge and a real threat at the same time: when you do not have the knowledge to understand the logic behind a process, the only things you can fall back on are trust and gambling, and we know that both tend to backfire.
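The arithmetic behind that 5% is worth spelling out. If each use of a 95%-reliable process is independent, the chance of at least one failure over n uses is 1 − 0.95^n, which climbs quickly; the sketch below (illustrative numbers only) just evaluates that formula.

```python
# Chance of at least one failure when a 95%-reliable process is used n times,
# assuming independent runs: P(at least one failure) = 1 - 0.95**n

for n in (1, 10, 50, 100):
    p_fail = 1 - 0.95 ** n
    print(f"{n:3d} runs -> {p_fail:.0%} chance of at least one failure")
```

Over 50 independent uses, the odds of at least one bad outcome already exceed 90%, which is why betting everything on a single statistically-good black box is a gamble.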
The third danger is that machines could jeopardize humans by trying to preserve their own existence in order to complete a task, endangering human lives in the process. Ideas on how to solve this include keeping humans in the decision loop: when a dangerous strategy arises, the AI will explain its intention and let humans decide what to do next, up to and including shutting it down rather than threatening human lives.
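A minimal sketch of that human-in-the-loop idea, with all names hypothetical: the agent may act on its own only when the action is judged safe; anything risky goes to a human for approval, and the default on refusal is to shut down rather than proceed.

```python
# Toy human-in-the-loop gate: risky actions require explicit human approval,
# and a refusal defaults to shutdown instead of execution.

from typing import Callable

def gated_action(action: str, risky: bool, approve: Callable[[str], bool]) -> str:
    """Execute `action` only if it is safe or a human approves it."""
    if not risky:
        return f"executed: {action}"
    if approve(action):        # human reviews the explained intention
        return f"executed with approval: {action}"
    return "shut down"         # fail safe: stop rather than endanger anyone

print(gated_action("reroute city power", risky=True, approve=lambda a: False))
```

The important design choice is the last line of the function: when the human says no, the system stops instead of falling through to the dangerous strategy.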
It is important not to forget emotions. If we do, we can end up imprisoning ourselves in a dystopian world beyond the comprehension of a sane mind, all because we did not pay attention to greed, lust, and pride in due time. I am not talking about robots here, but about the human emotions that control the AI from the beginning.
Speaking strictly about intelligence, we are the real danger to ourselves, not a general AI.
As soon as a military application crossed someone's mind, I bet many generals around the world instantly started salivating, as thoughts of the absolute power brought by AI slowly obsessed their minds. Even more discouraging is that fears on all sides are simply forcing everyone to hurry up: each side is afraid that, if they do not do it, someone else will do it first. Unfortunately, the ultimate invention is also the ultimate weapon.
With all that in mind, you can simply take those “23 Principles for A.I.” *2 that Stephen Hawking and Elon Musk recently endorsed and flush them down the drain; all that's left is to pray.