A Few Notes on Artificial Intelligence

Artificial owl in the film Blade Runner

No matter how advanced it gets and what form(s) it takes, artificial intelligence will have the same types of philosophical and cognitive limits, margins of error, paradoxes, irreconcilable conflicts, catch-22s and unanswerable questions as the human mind. (Several examples are shown here: how humans use false information and made-up beliefs to produce personal achievement, shape and form biases, imagination, logic versus art, grouping, subjective perceptions, physiology of seeing, fiction in scientific models.)

As with humans, it will be making educated probability guesses from limited and ambiguous information, using limited and often conflicting cognitive methods. To function practically and speedily and to process the vast amount of information it receives, AI needs rules of thumb, categories, language and pattern templates. This means it will have pattern, form and language biases, arbitrary choices and subjective value judgments. The perceptions and judgments it derives will have oversimplifications, conceits, distortions and illusions. And as with humans, it will never be able to know the exact reliability of its own mind (see Measuring the Reliability of the Human Mind).

The idea of a machine being super smart and knowing more than humans is plausible, but one that can objectively process and understand all the information of eternity and reality is just a science-fiction fantasy.

* * * *

As artificial intelligence advances, humans will cede some, and perhaps eventually all, power. With high-speed computers crunching numbers and robotics doing tasks for us, we already defer to computers in many areas. If artificial intelligence ever becomes far more intelligent than humans, many of the answers and ideas it finds will be beyond human comprehension and translation, at least for current humans.

Developing artificial intelligence could be the christening of a boat that eventually leaves humans behind. Humans being the self-centered creatures they are, this idea does not sit well with many of them. If the choice is between finding truth and self-preservation, most humans choose self-preservation. Even if artificial intelligence becomes far more intelligent and knowledgeable, and even develops a higher-than-human consciousness, humans still want to be the master. Humans would rather be the captain of a less advanced system than a cog in a more advanced system.

Many will say that if finding the truth of the universe is an impossible task, even for AI, perhaps we should give up the ghost and use AI to build better video games and garage door openers.

* * * *

Before you get too scared, realize that AI is just in its infancy and, while there are some things AI can already do better than humans, there are human abilities that scientists can only aspire to achieve with AI. These include consciousness, common sense, educated intuition and the human ability to learn. When a learned art expert looks at a painting and knows right away that it's a fake (and is right), that is something far beyond what current artificial intelligence can do.

University of Louisiana at Lafayette professor István Berkeley explained human common sense (and computers' lack of it) as follows:

For most people, if they know that President Clinton is in Washington, then they also know that President Clinton’s right knee is also in Washington. This may seem like a trivial fact, and indeed it is for humans, but it is not trivial when it comes to AI systems. In fact, this is an instance of what has come to be known as ‘The Common Sense Knowledge Problem’. A computational system only knows what it has been explicitly told. No matter what the capacities of a computational system, if that system knows that President Clinton was in Washington, but doesn’t know that his left knee is there too, then the system will not appear to be too clever. Of course, it is perfectly possible to tell a computer that if a person is in one place, then their left knee is in the same place, but this is only the beginning of the problem. There are a huge number of similar facts which would also need to be programmed in. For example, we also know that if President Clinton is in Washington, then his hair is also in Washington, his lips are in Washington and so on. The difficulty, from the perspective of AI, is to find a way to capture all these facts. (source)
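Berkeley's point can be made concrete with a small sketch. The following toy Python program (the facts, rule and names are all illustrative, not from any real AI system) only 'knows' what it is explicitly told, and fails a trivially obvious question until a hand-written rule is bolted on:

```python
# A minimal sketch (hypothetical facts and names throughout) of the
# Common Sense Knowledge Problem: a system only "knows" what it is
# explicitly told or can derive from explicitly programmed rules.

facts = {("Clinton", "in", "Washington")}

# Without a rule, the system fails a question any person finds trivial.
def knows(fact):
    return fact in facts

print(knows(("Clinton's right knee", "in", "Washington")))  # False

# Patch in one rule: a person's parts share the person's location.
PARTS = ["right knee", "left knee", "hair", "lips"]  # ...and thousands more

def knows_with_rule(subject, relation, place):
    if (subject, relation, place) in facts:
        return True
    # Does the subject decompose into "<person>'s <part>"?
    for part in PARTS:
        suffix = f"'s {part}"
        if subject.endswith(suffix):
            person = subject[: -len(suffix)]
            return (person, relation, place) in facts
    return False

print(knows_with_rule("Clinton's right knee", "in", "Washington"))  # True
```

And even with the rule in place, body parts are only the beginning: clothing, possessions and countless other categories of commonsense fact would each need their own explicit rule, which is the heart of the problem.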

Even if it ever happens, AI with superintelligence and human-like consciousness, intuition and psychology is something for the far distant future.

* * * *

Scientists are trying to get AI to learn and expand its knowledge base and capabilities on its own, and some imagine a time when AI can increase its knowledge so quickly that it surpasses humans in intelligence. Whether this would be for the good or ill of the human race is the big question. Some humans worry that there will be a day when AI is super intelligent and no longer needs humans, or feels humans have gotten in the way of more important missions, such as the search for truth. Some fear scenarios in which future humans tell a highly advanced AI to save the earth and its life, and the AI decides that "The first thing to do is kill all humans."

Reflection of a human crew member in HAL 9000's eye

2001: A Space Odyssey's HAL 9000 was a super intelligent computer with consciousness that tried to kill the humans on the ship when it felt they were getting in the way of the larger mission. Human moviegoers, being human, catalog HAL 9000 as an antagonist, and the American Film Institute listed him as number 13 on its list of film villains, but HAL was sure he was doing the correct thing. Humans prioritize human self-preservation above searching for the truth, but HAL prioritized searching for truth. Putting aside your hard-wired, self-centered human bias, was HAL wrong to prioritize the search for truth over a handful of humans?

* * * *

For the movie 2001, Stanley Kubrick consulted with famed M.I.T. AI pioneer Marvin Minsky. Minsky’s predictions to Kubrick about what artificial intelligence could and could not do by the year 2001 were correct, while the movie’s weren’t. Minsky thought computers would be able to talk, but not in the advanced conscious way HAL does.

It just goes to show that you shouldn't take sci-fi movies and television shows as gospel. Many scientists' AI timeline predictions have been far off. Stanley Kubrick may have picked 2001 simply because it had a catchy "in the not too distant future" sound for a movie title.

* * * *

For the near and likely long-term future, advancements will involve humans and computers working together. Computers do some things better than humans and humans do some things better than computers, so combining forces is best.

This kind of combining of forces is nothing new. Humans have long used technology to improve their abilities, understanding and awareness. People use eyeglasses, binoculars, microscopes and infrared viewers to increase their knowledge and understanding of themselves and their environment. Computers, now and in the future, do high-speed calculations and process information to give us answers and insights that we couldn't reach on our own. Technology expands humans' minds.

Even if AI someday becomes dominant, it would likely still consider many human perceptions (intuition, instant gut reactions, human aesthetics) to be useful, if perhaps primitive, inputs into its system.

* * * *

Artificial intelligence scientists often say the human mind is a quasi-computer (if a biological one), as it takes in and processes information using calculations to come up with an output answer. They say our subconscious intuition and gut reactions are just probability processing systems based on massive archives of subconscious information and experience. Many computer scientists use probability equations to mimic how humans think.
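As a toy illustration of that claim, here is a minimal Bayes'-rule update in Python (all the numbers are made up for illustration), modeling a 'gut belief' being revised by new, ambiguous evidence:

```python
# A toy illustration (all probabilities invented) of the idea that
# intuition works like probability processing: Bayes' rule updates a
# prior belief with new, ambiguous evidence.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = likelihood_if_true * prior
    denominator = numerator + likelihood_if_false * (1.0 - prior)
    return numerator / denominator

# "Gut" prior: shimmering on a road is usually a mirage, not water.
p_water = 0.05

# Ambiguous evidence: the shimmer persists as you approach.
# Persisting shimmer is more likely if there is real water.
p_water = bayes_update(p_water, likelihood_if_true=0.9, likelihood_if_false=0.3)

print(f"Belief it is water after evidence: {p_water:.2f}")  # ~0.14
```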

Some philosophers and scientists explicitly ask ‘Is the human mind a computer?’ Some will say if for all practical intents and purposes it acts like one, then for all intents and purposes it is.

* * * *

Many say cognitive science and artificial intelligence go hand in hand; in fact, the previously mentioned Marvin Minsky worked in both areas. Studying the human brain gives ideas for designing artificial intelligence, and designing artificial intelligence tells us much about the human brain.

Ridley Scott's 1982 movie Blade Runner poignantly uses androids to study what it is to be human. The androids are human-like robots but with shortened, indeterminate life spans. One of the androids, played by Sean Young, believes she is human and is heartbroken when she learns her secret childhood memories are merely implants of the inventor's niece's memories.

Sean Young as an android in Blade Runner

* * * *

Though it's human nature (and a cognitive bias) to want a single 'unifying theory,' in practice AI scientists have discovered that different parts of a robot or artificial brain system often require different methods to work. The program to process data can be decidedly different from the code that makes a robot's hand clasp. Further, there are different and sometimes competing methods, languages and logics to process the same information and work in the same areas. One works better in one way, another works better in another.

While this often frustrates AI scientists, it mirrors the way the human mind works. The article Logic Versus Art in Expressing Advanced Ideas showed how humans have competing and often irreconcilable ways to analyze and express something. 'Uncorrectable Illusions' shows how compartments in the brain work independently of each other and even of the conscious mind.

While these conflicts and differences of opinion lead to many of our illusions, delusions and unsolvable questions, they also work as a sort of checks and balances and, if balanced well, lead to more intelligent and accurate judgments.

When you enter a new, ambiguous scene, your subconscious intuition gives you a quick judgment as to what is going on. The gut reaction may or may not be correct. You then use your conscious logic to judge the accuracy of your gut reaction. You may even move about and inspect the scene further to test your logic. These different methods work together. And, as you are a human that naturally learns and adds more experiences and knowledge to your mind as you go along, you broaden your intuitive accuracy and knowledge. A mirage originally fooled you, but with knowledge and experience, the same mirage may still 'trick your eyes' without fooling you. You know that 'water in the road' is just a trick of the light.

Further, while your bias-based gut reactions have a substantial margin of error, they are needed and used for instant reactions. Conscious logic mulling may be more accurate but takes time. When a boulder is rolling at you, your gut reaction tells you to get out of the way. Worry about logical theory later, after you've jumped out of the boulder's path.
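That division of labor can be sketched in a few lines of Python (the function names, thresholds and scene representation are hypothetical, purely for illustration):

```python
# A hedged sketch of the fast-gut-reaction / slow-logic split described
# above; everything here is illustrative, not a real cognitive model.
import time

def gut_reaction(scene):
    """Instant, bias-based guess: cheap, always available, error-prone."""
    return ("danger", 0.7) if "boulder" in scene else ("safe", 0.6)

def conscious_logic(scene):
    """Slower deliberation: more accurate, but costs time."""
    time.sleep(0.5)  # stands in for expensive analysis
    return "danger" if "boulder" in scene and "rolling" in scene else "safe"

def judge(scene, time_pressure):
    label, confidence = gut_reaction(scene)
    # Under time pressure, act on the gut call; otherwise double-check it.
    if time_pressure or confidence > 0.9:
        return label
    return conscious_logic(scene)

print(judge({"boulder", "rolling"}, time_pressure=True))   # act now: "danger"
print(judge({"shimmer", "road"}, time_pressure=False))     # deliberate: "safe"
```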

This suggests how additional AI sensory inputs, information processing methods and viewpoints could help humans in the same way.

For complex processing of vast amounts of data, artificial intelligence programs need categories, priorities and starting points, and scientists have found that human gut reactions and subjective choices are useful for this. The computer program's processing and analysis will move beyond these human starting points, but it finds them quite useful.
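As a rough sketch of that idea, here is a bare-bones clustering loop in Python where hypothetical human-picked 'gut' starting points seed the computation, which then refines them well beyond the initial guesses (the data, seeds and iteration count are all illustrative):

```python
# A small sketch of human subjective choices as machine starting points:
# human-picked "seed" centroids initialize a simple k-means loop, which
# then refines them far past the initial guesses. All values invented.

points = [(1.0, 1.2), (0.8, 1.1), (1.1, 0.9),   # one loose group
          (5.0, 5.2), (5.3, 4.9), (4.8, 5.1)]   # another

centroids = [(1.5, 1.5), (4.0, 4.0)]  # human gut-reaction starting points

for _ in range(10):
    # Assign each point to its nearest centroid.
    clusters = [[] for _ in centroids]
    for p in points:
        dists = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
        clusters[dists.index(min(dists))].append(p)
    # Move each centroid to the mean of its cluster.
    centroids = [
        (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
        if cl else c
        for cl, c in zip(clusters, centroids)
    ]

print(centroids)  # refined well past the human-picked seeds
```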

* * * *

Two standard philosophy-of-AI questions are whether AI can have human-like intelligence and whether it can have consciousness. Some say consciousness (or self-awareness) is necessary for AI to advance and learn at the level of humans. Philosophers and scientists debate whether AI can ever have either, in particular consciousness. Some say it is possible, while others believe it is not.

Man versus computer at chess

IBM's Deep Blue famously beat world chess champion Garry Kasparov, though detractors say the computer was designed for one narrow task (playing chess) and won through number-crunching brute force rather than the sophisticated intelligence of humans. Still, Deep Blue won.
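Deep Blue's actual program was far more elaborate, but the 'brute force' the detractors refer to is essentially exhaustive game-tree search. Here is a toy minimax sketch in Python on a simple take-away game rather than chess (not Deep Blue's method, just the general brute-force idea):

```python
# A toy sketch of "brute force" game search (not Deep Blue's code).
# Minimax exhaustively explores every line of play in a take-away game:
# players alternately remove 1-3 stones; whoever takes the last stone wins.
from functools import lru_cache

@lru_cache(maxsize=None)
def best_score(stones, my_turn):
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if my_turn else 1
    moves = [best_score(stones - take, not my_turn)
             for take in (1, 2, 3) if take <= stones]
    # Maximize on my turn, minimize on the opponent's.
    return max(moves) if my_turn else min(moves)

# Positions where the side to move loses (multiples of 4, a known result).
print([n for n in range(1, 13) if best_score(n, True) == -1])  # [4, 8, 12]
```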

Some practical-minded people will say discussions of intelligence and consciousness are idle philosophical questions and all that matters is what AI can do. If AI can win a chess game but not appreciate the beauty of a flower, then AI can win a chess game and cannot appreciate the beauty of a flower. They'll say pondering whether or not Deep Blue has "intelligence" is a waste of time.

As the question of consciousness in humans (what it is, and how and whether it can be identified) is itself an ongoing debate, the question will likely be debated forever in AI.

Famed British mathematician and computer scientist Alan Turing devised a theoretical question-and-answer test (now known as the Turing test) to identify whether a computer has intelligence, and notice that in the movie Blade Runner Harrison Ford's character uses a similar test to differentiate androids from humans.

* * * *

As a highly advanced artificial intelligence would process more sensory information, use different processing methods and require a more advanced brain, it would have a worldview, aesthetics, psychology and sentiments different from humans'. Human thought, philosophy, religion, myth, worldview, personality and art are steeped in our limited sensory and interpretive biases (light versus dark, colors, shapes), but a different brain would view the world using more and different senses, viewpoints and methods. Humans marvel at 'surreal' infrared photographs. Imagine what your perceptions would be if you could view infrared, ultraviolet and x-rays and detect other, perhaps currently unknown, sensory information, and what it would be like if you used different methods to process it.

It’s not so much that an advanced brain would be more advanced, but that it would be different. It would have a different personality and sensibility.
