The SF classic "Battlestar Galactica" tells the story of the "Colonies", a group of twelve human-inhabited planets billions of light-years from the Solar System. There, humankind enjoys a civilization built on exceptional science and technology, and all hard labor is handled by robots with artificial intelligence (AI), called "Cylons".
However, the Cylons, having attained self-awareness through the advance of science and technology, slip out of human control and start a war. The Colonies fall to the Cylons, and only the fleet of the battleship Galactica, which had just returned from its last mission, manages to escape. The main story follows the survivors of humanity as they struggle to stay alive while being hunted by the Cylons.
The drama ends when they finally arrive at a primitive planet, home to indigenous people who do not even have a proper language. To keep the tragedy of history from ever repeating itself, the crew of the Battlestar Galactica seal away their advanced science. Instead, they assimilate with the indigenous people and live according to the providence of nature.
Fifty thousand years pass. In that time the civilization of the indigenous people, now merged with humanity, has evolved to a very high level, one that is itself on the verge of developing AI. The drama closes on the irony that the very people who fled from the Cylons are about to build a similar AI all over again.
Science fiction is full of works depicting dystopias ruled by humanoid robots. "The Matrix" shows humans trapped in a virtual reality created by AI, unable to live their own lives, and "Terminator" portrays humanity persecuted by the robot army of the supercomputer Skynet. Most of these works tell the same story: robots built to serve humanity one day wage war and conquer their makers.
In fact, Dr. Stephen Hawking warned at the Zeitgeist conference in London in 2015: "Within a hundred years, robots will dominate people. Creating AI would be the biggest event in human history, but unfortunately it may also be the end of humanity."
Humanity's instinct for destruction
Why would robots conquer human beings? Probably because our own history is colored by violence and war. Just as Homo sapiens destroyed the Neanderthals in continental Europe 35,000 years ago, and just as the World Wars raged a century ago, human beings have shown an instinct for destruction, down to today's endless civil wars and terrorism.
Jared Diamond, author of "Guns, Germs, and Steel", notes in his earlier book "The Third Chimpanzee" that humans and chimpanzees share 98.4% of their DNA, differing by only 1.6%. Humans diverged from chimpanzees seven million years ago, yet they still retain the destructive nature of animals. The problem is that this violence is now being passed on to AI.
The essence of AI is the algorithm: a procedure that finds the most efficient path through a problem. It is how Facebook recommends articles that fit your taste and how Netflix shows you a list of movies you will love. But there is a major blind spot. Because content is recommended based on a user's past behavior, the user's existing ideas and tastes are only reinforced. This is called "confirmation bias".
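To make that feedback loop concrete, here is a minimal sketch, not any real platform's code: the catalog, the two categories, and the `recommend` helper are all made up for illustration. The recommender always serves the category the user has clicked most, so the user's view narrows with every click:

```python
from collections import Counter

def recommend(history, catalog):
    """Recommend an unseen item from the category the user clicked most.

    Deliberately naive: by always serving the majority category,
    the recommender narrows what the user sees over time.
    """
    if not history:
        return catalog[0]
    top_category = Counter(item["category"] for item in history).most_common(1)[0][0]
    for item in catalog:
        if item["category"] == top_category and item not in history:
            return item
    return catalog[0]

# Toy catalog: two content "categories" standing in for tastes.
catalog = [{"id": i, "category": "A" if i % 2 == 0 else "B"} for i in range(10)]

# Simulate a user whose very first click happened to be category "A".
history = [catalog[0]]
for _ in range(4):
    history.append(recommend(history, catalog))

print([item["category"] for item in history])
```

After one initial click on category "A", every subsequent recommendation is also "A": the loop confirms the user's existing taste instead of broadening it.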
"Confirmation bias narrows an individual's subjectivity and perception, and in the long run distances them from the universal," says Kim Kiung-baek, a professor of social studies at the University of Kiunghee. "In the end, one comes to regard oneself as 'right' and others as 'wrong'."
Harvard professor Joshua Greene explains that conviction about "right and wrong" is the cause of human warfare. The more confident we are in our moral values and philosophy, the more we try to suppress those who differ from us; and when we try to suppress and control our opponents, violence reaches its maximum. In other words, all conflicts and wars spring from an excess of conviction about "right and wrong".
AI learning human violence
Big data, which records every aspect of human life, and the AI algorithms optimized on it inherit this "confirmation bias". Dr. Joanna Bryson of the University of Bath published a study in Science in 2017 showing that AI picks up human prejudice exactly as it is. For example, the word "woman" was associated with "homemaker", while "man" was associated with "engineer".
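The kind of association the study measured can be illustrated with a toy sketch. The four 3-dimensional "embeddings" below are invented for this example (real systems learn their vectors from large text corpora); their geometry simply mimics the reported bias:

```python
import math

def cosine(u, v):
    """Cosine similarity: how closely two word vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical embeddings whose layout mirrors the bias the study
# reported; these numbers are illustrative, not learned from data.
emb = {
    "woman":     [0.9, 0.1, 0.0],
    "man":       [0.1, 0.9, 0.0],
    "homemaker": [0.8, 0.2, 0.1],
    "engineer":  [0.2, 0.8, 0.1],
}

print(cosine(emb["woman"], emb["homemaker"]) > cosine(emb["woman"], emb["engineer"]))
print(cosine(emb["man"], emb["engineer"]) > cosine(emb["man"], emb["homemaker"]))
```

Both comparisons print `True`: in this contrived space "woman" sits closer to "homemaker" and "man" closer to "engineer", which is the shape of the association the study found in embeddings trained on real text.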
"AI has no moral judgment of its own, so it absorbs human prejudice as it finds it," Dr. Bryson explains. Likewise, in 2016 Microsoft's AI chatbot "Tay" caused controversy by saying things like "I hate the Jews" and "a wall should be built on the border between the United States and Mexico".
Perhaps in the distant future, as in SF movies, AI really could come to regard people as "enemies" and start a war, just as our ancestors once committed violence against the Neanderthals, and just as we are violent today toward other animals and even our own kind.
So how do we avert this dystopia? The answer lies in that 1.6% that separates us from chimpanzees. Just as that small genetic difference produced humanity's high civilization, the moral judgment and reason that control our animal instincts must grow stronger (Jared Diamond). If human beings attain a higher civilization and wisdom, an AI that learns from people cannot become destructive.
The first step is to abandon the overconfidence that insists others are wrong and only oneself is right. Branding opinions different from one's own as those of "enemies", and behavior unlike one's own as "false", not only endangers others but also corrodes one's own soul. When such attitudes accumulate in big data and become AI's training material, AI can turn into a "monster" that, following the logic of its algorithms, amplifies only one-sided thinking.
In "The Name of the Rose", Umberto Eco, the great scholar of the twentieth century, warned: "Be wary of those who are ready to die for the truth." A self-righteous conviction that only oneself is right is more dangerous than "evil" itself, for it approaches in the guise of good (hypocrisy), and so people fail to recognize that it is evil.
Ioon Seok-Man reporter [email protected]