Fear of Artificial Intelligence
As I was growing up in India in the late 20th century, the word intelligence loomed large and was the talk of the town. Thousands of students were pushed, in the name of motivation, to take IQ and talent search examinations to prove their worth. Intelligence was something to be aspired to, seen as a ladder to success. Intelligence has always been political. Calling a person unintelligent is not merely a comment on that person's mental faculties, but also on what that person is permitted to do or not do. Such tests make it easier to assign roles in arbitrary fashion.
From 2001: A Space Odyssey to the Terminator films, many of us are hooked on plots in which Artificial Intelligence takes over humanity. Many scientists and entrepreneurs have tried to caution us intermittently; public figures such as theoretical physicist Stephen Hawking, Bill Gates, and former Tesla chairman Elon Musk have issued repeated warnings. The idea is that if artificially intelligent robots gain consciousness and a sense of self-preservation, they might revolt against their masters and enslave humanity for good. This kind of reasoning is often associated with the fear of the singularity. The singularity is a hypothesis that the development of artificial superintelligence could result in unfathomable changes to human civilization.
The story of intelligence begins with Plato, who raised the question of philosopher kings and put the emphasis on thinking, declaring that the unexamined life is not worth living. So he propounded the idea of the cleverest ruling the rest: an intellectual meritocracy. The idea may have been radical and revolutionary at the time, but society is dynamic, not static. These ideas were taken up by Aristotle, the master of the middle path, who took the notion of the primacy of reason and used it to establish a natural social hierarchy. In his Politics he wrote that for some to rule and others to be ruled is a thing not only necessary but expedient; from the hour of their birth, some are marked out for subjection, others for rule. Hence the subjugation of women, slaves, the lower classes, and peasants. Then comes the non-human animal, at the lowest rung of the social ladder: the dim-witted animal deemed fit to be eaten.
The premise of this kind of argument is the presumption that the thought process of a superintelligent machine would be utterly alien to the human thought process. Evolutionary psychologists suggest that this fear comes from the male's innate drive to dominate others, depose masters, and take over the world. As Steven Pinker puts it, an AI might just as well develop along female lines: "fully capable of solving problems, but with no desire to annihilate innocents or dominate the civilization".
The singularity narrative takes its cue from a Manichean view of the world: good fighting evil, apocalyptic overtones, the urgency of 'we must do something now or it will be too late', an eschatological perspective on human salvation, and an appeal to fear and ignorance.
These fears generally come from a mind that takes its cue from human civilization's modus operandi and modus vivendi. Human history is full of examples of conquests, wars, rebellions, enslavement, and so on. Our consciousness is derived from and shaped by the events of human history and by how civilization has progressed. Our past has taught us how people with power maimed others. This fear has apparently taken hold of our minds: we fear that what we have done to others and to ourselves will be done to us by AI bots. Without resolving the basic contradictions in our own philosophy, it is quite difficult to make assumptions about AI's impact. What would an AI's approach be to trolley problems? What would be the impact of the singularity on the thinking process? What would be the machine's position on the utilitarian question?
Imagine a world where intelligence is not used as an excuse to justify barbarism. Would we still have this kind of fear?
Hitherto we have answered the question with our subconscious fears. If we are used to believing that the top positions should be filled by the brainiest, then, of course, we should expect to be made redundant by superior bots.
Can we imagine, as a hypothetical, giving the importance we accord to the brainiest who claim the right to rule instead to those who meditated in remote places to free themselves of worldly desires, or holding that the cleverest of all are those who returned to spread enlightenment and peace? Would we still fear those smarter robots?