Here's another look at a column from my On The Net series at Asimov's Science Fiction Magazine
My friend John Kessel and I have had a longstanding disagreement about the future of artificial intelligence. Even though we have co-edited a couple of anthologies examining post-cyberpunk <tvtropes.org/pmwiki/pmwiki.php/Main/PostCyberPunk> futures and visions of the Singularity <singularity.com>, John remains skeptical about claims that we may soon be superseded by some kind of digital successor. He’s in general agreement with the celebrated mathematician Sir Roger Penrose <plus.maths.org/content/roger-penrose-knight-tiles>, who bases his critique of strong AI on its proponents’ assumption that intelligence can emerge from algorithms of sufficient number and complexity. Penrose argues that intelligence is not algorithmic and that we first need to develop an entirely different model, possibly one based on quantum mechanics. So while the expert systems to come may demonstrate some astonishing abilities, computers running algorithms will never become truly intelligent. Thus the smartest computers in the near futures imagined by writers of the Kessel School are something like Siri <apple.com/ios/siri/> on steroids.
I lean toward the position expressed by Penrose’s friend and collaborator Stephen Hawking <hawking.org.uk>, who has repeatedly warned of the dangers of unfettered AI research. At a conference last year, Hawking predicted that <techtimes.com/articles/53180/20150514/robots-will-overtake-human-intelligence-within-100-years-warns-stephen-hawking.htm> “Computers will overtake humans with AI at some point within the next 100 years. When that happens, we need to make sure the computers have goals aligned with ours.” In a recent reddit interview, he talked about a scenario “. . .where an AI becomes better than humans at AI design, so that it can recursively improve itself without human help. If this happens, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails.”
From the Spooky Serendipity Department: The day after I wrote the above, the New York Times ran an article that began, “A Google computer program trounced one of the world’s top players on Wednesday in a round of Go, which is believed to be the most complex board game ever created.” AI researchers had been using Go as a test platform since Deep Blue beat Garry Kasparov <en.wikipedia.org/wiki/Deep_Blue_versus_Garry_Kasparov> in 1997. Go was the last game where humans held sway, but no longer. Yet what is more sobering is how Google’s AI won. “AlphaGo does not try to consider all the possible moves in a match, as a traditional artificial intelligence machine like Deep Blue does. Rather, it narrows its options based on what it has learned from millions of matches played against itself and in 100,000 Go games available online.”
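For readers who code, here is a toy sketch in Python of the distinction the Times is drawing. Everything in it is hypothetical — the function names, the scoring, the pretend "learned" policy — so treat it as an illustration of pruning by a learned prior versus exhaustive evaluation, not as anything resembling AlphaGo's actual architecture.

```python
# Toy contrast: brute-force move selection vs. policy-guided selection.
# All names and numbers here are invented for illustration only.

def exhaustive_best(moves, value):
    """Deep Blue style: score every legal move and keep the best one."""
    return max(moves, key=value)

def policy_guided_best(moves, value, policy, k=5):
    """AlphaGo style (sketched): a learned policy first narrows the
    options to a handful of promising moves, and only those few are
    evaluated in depth."""
    candidates = sorted(moves, key=policy, reverse=True)[:k]
    return max(candidates, key=value)

if __name__ == "__main__":
    moves = range(361)                    # one index per point on a 19x19 board
    value = lambda m: -(m - 200) ** 2     # pretend the truly best move is 200
    policy = lambda m: -abs(m - 198)      # the learned prior is close, not perfect

    print(exhaustive_best(moves, value))             # evaluates all 361 moves -> 200
    print(policy_guided_best(moves, value, policy))  # evaluates only 5 moves  -> 200
```

The point of the sketch is the candidate list: the policy-guided version reaches the same answer while deeply evaluating only five moves out of 361, which is why learning from self-play, rather than searching everything, made Go tractable.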
But it’s not only Hawking who is worried. In January 2015, he was joined by Elon Musk, Bill Gates, and several hundred other scientists, engineers, technologists, and business people in signing a document called Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter <futureoflife.org/ai-open-letter>. While primarily a call for the world to support and monitor the direction of AI research, it includes language that addresses the concerns of many of the signatories: “We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.” (italics mine) Those paying attention were not surprised to see Musk’s name on the list; he has been an outspoken critic of unregulated AI research. Since one of his many business plans includes self-driving Teslas <cleantechnica.com/2016/01/11/tesla-announces-plans-self-driving-cars-without-driver/>, he is very familiar with the current state of the art. Here’s a compendium of his statements <techemergence.com/elon-musk-on-the-dangers-of-ai-a-catalogue-of-his-statements> about the dangers of strong AI. As just one example, consider this from an interview with CNN: “. . .the pace of (AI) progress is faster than people realize. It would be fairly obvious if you saw a robot walking around talking and behaving like a person, you’d be like ‘Whoa . . . that’s like . . . what’s that?’ . . . that would be really obvious. What’s not obvious is a huge server bank in a dark vault somewhere with an intelligence that’s potentially vastly greater than what a human mind can do. Its eyes and ears would be everywhere, every camera, every microphone, and device that’s network accessible.”
Another of the signatories of the Open Letter was the polymath/provocateur Nick Bostrom <nickbostrom.com>. You can hear his take on the prospects of developing superintelligence—and of taming it—in this TED Talk <ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are?language=en>. At the beginning of his talk, he mentions a survey <nickbostrom.com/papers/survey.pdf> he did in 2012-13. One hundred and seventy of the top researchers in AI responded. Here’s a quote from the abstract. “The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040–2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.”
TechEmergence <techemergence.com> did a more recent survey <techemergence.com/conscious-artificial-intelligence> that included several researchers who, like Friend Kessel, doubt the likelihood that we’ll be fending off superintelligent AIs anytime soon. Nonetheless, the overall results were similar to those of the Bostrom survey three years earlier. TechEmergence, by the way, is easily your best source for the very latest news on AI.
Whether or not we believe that superintelligence is in prospect, there can be no doubt that automation and expert systems are remaking the world’s economy. The idea that machines are coming for our jobs has been around since the Luddites <en.wikipedia.org/wiki/Luddite> sought to derail the Industrial Revolution <history.com/topics/industrial-revolution>. It’s obvious that automated answering systems, ATMs, and self-service checkouts have cost jobs; less obvious but no less real is the impact of digital economy giants like Amazon on unemployment. According to the World Economic Forum <weforum.org/press/2016/01/five-million-jobs-by-2020-the-real-challenge-of-the-fourth-industrial-revolution>, “. . . the nature of change over the next five years is such that as many as 7.1 million jobs could be lost through redundancy, automation, or disintermediation, with the greatest losses in white-collar office and administrative roles.” In its most recent report to Congress <whitehouse.gov/sites/default/files/docs/ERP_2016_Book_Complete JA.pdf>, President Obama’s Council of Economic Advisers points out that the occupations that are easiest to automate usually have the lowest wages. The Council estimates that workers making less than $20 an hour have an 83 percent chance of losing their jobs to automation. Those making $20-40 an hour have a 31 percent chance of being laid off, while those making above $40 are relatively secure, with only a 4 percent chance of losing their jobs. What time frame are we talking about? A 2013 Oxford University study <oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf> estimates that nearly half of all jobs in the US will be at risk by 2033 due to automation. These grim reports have led some to wonder if we are on our way to A World Without Work <theatlantic.com/magazine/archive/2015/07/world-without-work/395294/>.
Although nobody is predicting that all the jobs will disappear, the potential for catastrophic unemployment due to expansion of the digital labor force points toward the near edge of the Singularity. The late great SFWA Grandmaster Frederik Pohl <frederikpohl.com> famously wrote that “A good science fiction story should be able to predict not the automobile but the traffic jam.” If we let AI and robots stand in for cars, then a jobless future might be the traffic jam. How will we get people moving again?
One radical answer comes from a growing movement calling for governments to give everyone a universal basic income. How is this different from current welfare schemes? According to the Basic Income Earth Network<basicincome.org>, “A basic income is an income unconditionally granted to all on an individual basis, without means test or work requirement. It is a form of minimum income guarantee that differs from those that now exist in various European countries in three important ways:
it is paid to individuals rather than households; it is paid irrespective of any income from other sources; and it is paid without requiring the performance of any work or the willingness to accept a job if offered.”
Does this sound like economic science fiction to you? Or perhaps economic fantasy? Then it’s time to review the arguments for and against. This reddit wiki <reddit.com/r/BasicIncome/wiki/index#wiki_what_are_the_benefits_of_basic_income.3F> does a creditable job, or if you’ve sworn off reddit on general principles <slate.com/articles/technology/technology/2014/10/reddit_scandals_does_the_site_have_a_transparency_problem.html> there’s this admittedly polemical essay <huffingtonpost.com/scott-santens/why-should-we-support-the_b_7630162.html>. Pilot programs are springing up around the world, like Bolsa Família <web.worldbank.org/WBSITE/EXTERNAL/NEWS/0,,contentMDK:21447054~pagePK:64257043~piPK:437376~theSitePK:4607,00.html> in Brazil. Finland plans a nationwide trial <vox.com/2015/12/8/9872554/finland-basic-income-experiment> beginning in 2017.
All right, you may say. Fine for Finland and Brazil, but it will never happen here in the United States, the capital of capitalism. Except it already is, on a small scale. Haven’t you heard of Alaska’s Permanent Fund Dividend <opendemocracy.net/ourkingdom/karl-widerquist/alaska-model-citizens-income-in-practice>?
And the idea has piqued the interest of American movers and shakers in the tech industries, who, after all, can read the digital handwriting on their R&D walls. The article Why the Tech Elite Is Getting Behind Universal Basic Income <vice.com/read/something-for-everyone-0000546-v22n1> names a dozen venture capitalists, software CEOs, and academics who are interested in the basic income model. Even the conservative think tank the Cato Institute is taking it seriously. As the article puts it, “Basic income, it turns out, is in the peculiar class of political notions that can warm Leninist and libertarian hearts alike. Though it’s an essentially low-tech proposal, it appeals to Silicon Valley’s longing for simple, elegant algorithms to solve everything.”
Who is right about generalized human-level intelligence being achieved in the next hundred years? Hawking or Penrose? Kelly or Kessel? We’ll know eventually, of course, but I’m mindful of the first of Arthur C. Clarke’s famous three laws <en.wikipedia.org/wiki/Clarke's_three_laws>, which he proposed in “Hazards of Prophecy: The Failure of Imagination” back in 1962: “When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”
Me, I’m hoping that, if certain distinguished scientists are right about the imminence of superintelligence, there will be a kindly Hal2040 to cut my basic income check after the MicroGoogle™ essaybot takes over writing this column!
Copyright © 2016 James Patrick Kelly