Artificial Intelligence: Who’s afraid of intelligent behavior?
By Lawrence Strauss, of Strauss and Strauss.
The early cave paintings of the hunt, Manet’s “j’ai fait ce que j’ai vu” (I did what I saw), computer graphics renderings in contemporary films: people have always looked at and tried to copy nature. And this is the goal of artificial intelligence (AI) researchers. Near the roots of computer science, Feigenbaum and Feldman, in their 1963 anthology, Computers and Thought, wrote that the goal of artificial intelligence research is to “construct computer programs that exhibit behavior that we call ‘intelligent behavior’ when we observe it in human beings.”1 More recently, futurist Ray Kurzweil said AI’s goal is to create smarter-than-human intelligence.2 The goal has not moved much in 50 years: scientists are trying to understand the mechanics of the best in human thinking and create devices to match or better it.
The investment in this type of research is growing. In 2014, for example, Google bought eight AI-related companies, including DeepMind,3 which specializes in deep learning: software that “attempts to mimic the activity in layers of neurons in the neocortex, the part of the brain where thinking occurs. The software learns … to recognize patterns in digital representations of sounds, images, and other data” with the goal of building a computer that can understand language and make inferences and decisions on its own.4
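To give a feel for what that “layers of neurons” description means in practice, here is a minimal sketch, not DeepMind’s code, of a tiny two-layer network adjusting its connection weights until it reproduces a toy pattern. The language (Python with NumPy), the layer sizes, the learning rate and the XOR example are my own illustrative assumptions, not anything taken from the cited article.

```python
# A minimal sketch of the "layers of neurons" idea behind deep learning:
# a tiny two-layer network that learns a simple pattern (XOR) from examples.
# The library (NumPy), layer sizes, learning rate and toy task are all
# illustrative assumptions, not anything from DeepMind or the cited article.
import numpy as np

rng = np.random.default_rng(0)

# Toy "digital representations" of a pattern: four inputs and the labels to learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of simulated "neurons": input -> hidden -> output weights and biases.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass: each layer transforms the previous layer's activity.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: nudge weights and biases to shrink the prediction error.
    grad_out = (output - y) * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_hid
    b1 -= lr * grad_hid.sum(axis=0)

# After training, the outputs should sit near [0, 1, 1, 0]: the pattern was "learned".
print(np.round(output, 2).ravel())
```

Scaled up to many more layers and millions of weights, and fed sounds or images instead of a four-row toy table, this same adjust-the-weights loop is the learning the quotation above describes.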
The AI already with us includes the robots in Japan’s car plants, which work unsupervised around the clock for up to 30 days without interruption. Moving US jobs to less developed economies saves manufacturers about 65% on labor costs; were US companies to replace human workers with robots, the savings are estimated to reach up to 90%.5
And with everyday news stories of lawyers being replaced by computer programs6 and surgeons being replaced by robots,7 it’s easy to think coders and roboticists are out to replace us all with machines that are better than the best of us.
AI places a perhaps never-before-seen amount of power in the hands of a very few: the software and hardware developers who build it. As Lord Acton wrote more than a century ago, “power tends to corrupt”. And there is no shortage of fictional stories of robots and computers being misused, or doing the abusing on their own. Pretty scary prospects. But beneath the interest in profiting from the efficiencies of automation, what is the motivation behind AI?
In May 2007, Steve Jobs, with Bill Gates at his side laughing along and applauding, said, “I want Star Trek.” Star Trek was the vision of the future held by the generation of tech leaders behind Apple, Microsoft, Google (Larry Page and Sergey Brin have each said Star Trek doesn’t go far enough) and Amazon (Jeff Bezos was a crew member in this summer’s Star Trek Beyond).
Star Trek’s four main characters, Captain Kirk, Doctor McCoy, Mister Spock and the Enterprise, were symbols of the spirit of adventure, vulnerable complaint, (half-Vulcan) logic, and technology, respectively. It felt like a balance: with that harmony of passion, caution, reason and technology, nothing could go wrong. (It could have been more balanced; Star Trek creator Gene Roddenberry originally cast the First Officer as a woman, but in 1966 the network would not OK such radicalism.)
Besides the ship itself, Star Trek’s technology consisted of tricorders to see if everything checked out, phasers and photon torpedoes to blast, transporters and warp drive to beat it out of there fast, and the female-voiced Computer, whose echo we hear in Siri, Alexa and Google Now (which at one time was code-named Majel, after Majel Barrett, the actor who played the Starfleet computer). The Computer is an unconflicted and detached source of information (unlike Spock, who had interests, like saving his friends), and yet sounds human; the contrast was the mechanical-sounding Robot voice in the contemporary TV show Lost in Space. So the Starfleet Computer was helpful in a way a person could not be, owing to the vastness of its data, yet also comfortably familiar. There is a similarity to Barrett’s treatment in HAL of 1968’s 2001: A Space Odyssey, but unlike HAL, Starfleet’s computer never poses a threat. Google’s Brin has called for a benign HAL.
Network television made Star Trek a shared utopian vision. One of its main characters was an automated vessel somehow peopled by a crew of 430. Gene Roddenberry explained in “The Making of Star Trek” (1968), “One of the reasons … was to keep man essentially the same as he is now … I believe that man … always will be a social animal. You can’t divorce man from the things that human relationships give him.”
Roddenberry’s answer is not much comfort for people who still need to earn a living. With the transition to more automation, it must be asked: how will we and our children continue to earn and survive?
“We end up with a universal, basic income … people will have more time to do other things, more complex things, more interesting things,” Tesla founder Elon Musk told CNBC in November. Musk is convinced that jobs will be replaced, and that machines will soon be powerful to the point of disrupting our way of life.8
In December 2016, President Obama issued a report that read, “we should not advance a policy premised on giving up on the possibility of workers’ remaining employed [in spite of increased automation] … our goal should be first and foremost to foster the skills, training, job search assistance, and other labor market institutions to make sure people can get into jobs.” The report proposes government interventions like more funding for technical education and AI research.9
According to The Economist, “digital technology has already rocked the media and retailing industries, just as cotton mills crushed hand looms and the Model T put farriers out of work. Many people will look at the factories of the future and shudder. Most jobs will not be on the factory floor but in the offices nearby, which will be full of designers, engineers, IT specialists, logistics experts, marketing staff and other professionals. The manufacturing jobs of the future will require more skills. Many dull, repetitive tasks will become obsolete.”10
Fortune’s senior editor-at-large Geoff Colvin wrote, “don’t ask what computers can’t do. As their abilities multiply, we simply can’t conceive of what may be beyond them. To identify the sources of greatest human value, ask instead what will be those things that we insist be done by or with other humans — even if computers could do them. These are our deepest, most essentially human abilities, developed in our evolutionary past, operating in complex, two-way, person-to-person interactions that influence us more powerfully than we realize. When Oxford Economics asked global employers to name the skills they most want, they emphasized ‘relationship building’, ‘teaming’ and ‘co-creativity’.”11
In the 2015 film Ex Machina, writer/director Alex Garland has a Dr. Frankenstein/Mark Zuckerberg-like character, Nathan, create a lifelike robot-woman, Ava. According to Nathan’s protégé, Caleb, Ava passes the Turing Test for judging artificial intelligence. But to the audience Ava comes to seem flawed by her own logic: while she solves Nathan’s ultimate puzzle and wins her freedom through sophisticated stratagems, she appears oblivious to the needs of others and callously abandons Caleb to die. Ava pursues a goal without thought of the ramifications. At the same time, Garland imbues Caleb with human flaws that look different (his read like a falling-in-love story) but have a similar effect: Caleb was unwilling to intervene on behalf of another robot-woman, one to whom he felt no attraction, and his inaction led to her abandonment and death. Perhaps Garland is trying to teach us that AI may strike our sensibilities as a strange, alien cruelty, but man’s inhumanity remains our once and future true enemy.
Out walking one night with my wife, looking at the Christmas lights, we came to a house with colorful computer-controlled LEDs flashing a somewhat starry pattern near the front door. “That’s pretty,” she said. A few minutes later she pointed out the sunset, with its pink sky and indigo stripes of clouds, and it took my eyes a few seconds to adjust from the house decorations to this other light, which was sublime. It was for me a good picture of humanity in pursuit of Intelligent Behavior. As Beethoven wrote, “[humanity] feels darkly how far he is from the unattainable goal set for us by nature.”12
Sources and references:
1. Feigenbaum and Feldman (eds.), Computers and Thought, McGraw-Hill, New York, 1963.
4. http://www.technologyreview.com/s/513696/deep-learning/
6. http://www.businessinsider.com/the-worlds-first-artificially-intelligent-lawyer-gets-hired-2016-5
7. http://www.cnn.com/2016/05/12/health/robot-surgeon-bowel-operation/
9. https://www.whitehouse.gov/sites/whitehouse.gov/files/documents/Artificial-Intelligence-Automation-Economy.PDF
10. http://www.economist.com/node/21553017
11. http://www.wired.co.uk/article/robot-takeover-geoff-colvin
12. http://www.gutenberg.org/ebooks/3528?msg=welcome_stranger#2H_4_0003