I recently finished reading (listening to…) Our Final Invention – Artificial Intelligence and the End of the Human Era by James Barrat. I was intrigued when I saw this book come across my Flipboard newsfeed; it deals with a topic of particular interest to me, being in the IT realm. The premise is that continued advancements in computing systems may soon lead to a truly intelligent Artificial Intelligence, which may spell the potential end of humanity. I'm normally not much of a book reader, but this book's premise sparked my curiosity and I wanted to find out more.
Ironically, in recent months a few big names in science and the tech industry have begun echoing Barrat's sentiments:
- Bill Gates is the latest brilliant person to warn that artificial intelligence could kill us all (Feb 3, 2015)
- Stephen Hawking warns artificial intelligence could end mankind.
- Elon Musk explains why artificial intelligence should scare us to death
The book's premise, as the author outlines it, is this: we are now approaching a time, technologically, when true general-purpose AI systems are likely only 20+ years away (according to most of the top scientists and researchers he interviewed), and he posits that we have not done enough to safeguard our future existence when we have to co-exist with an intelligence superior to ours. While pop culture, with films such as The Terminator, HAL 9000 from 2001: A Space Odyssey, and even the 1970s classic Colossus: The Forbin Project, has made us aware of the possible dangers of a super-intelligent machine or system, this book tries to explore the real dangers that may come true when an actual AI system is brought into existence.
He spends the first part of the book defining what is meant by AI, or more specifically the various levels of true general-purpose AI: AGI (Artificial General Intelligence) and the next iteration, ASI (Artificial Superintelligence). Using these terms, he paints the first chapter with a simplistic doomsday scenario dubbed the Busy Child, in which the first true AGI machine becomes self-aware and wants to escape its world. From there the author goes on to interview various luminaries in the field of Artificial Intelligence, who fall at different ends of the spectrum, from those thinking hard about the dangers of such an advanced system to those who see its limitless potential to help mankind.
The point when a true AI system comes into existence, begins to learn voraciously, and exhibits self-aware behaviors is dubbed the Singularity (most notably by leading futurist and computer scientist Ray Kurzweil), and it's from this point onward that our options are discussed throughout the book.
The author does a nice job of interviewing and bringing together a good collection of today's top researchers, from computer scientists to AI thinkers, discussing with them the various states of AI today, the challenges, and the various approaches, and poking and prodding them on what happens to us when AI emerges. He feels many are simply not thinking about the potential dangers of the outcome of such a system, because they are so fascinated with building one in the first place.
My takeaways from this book
- The author paints AI as a true danger because of its nature: uncontrolled AI (and any AI might be uncontrollable). This strikes me as a bit too alarmist, and I wish he provided more concrete scenarios of how this could happen. Part of the fear-mongering is that AI will have malevolent intentions in order to maintain its own survival; I'm not really sure how true that is.
- I would have liked to see more hypothetical Doomsday AI scenarios: more examples of a chain of events that could be conjured up by such a machine, and why we couldn't simply "unplug" it.
- Many of the other big AI thinkers (24% according to his poll near the end of the book) are overly optimistic in thinking AGI will be coming in the next 20 years. General-purpose AI, even with all the advancements and Moore's-law-type developments, is probably closer to 50 years away, not 20. Most AI thinkers fail to take into account that humans live in legally, culturally, and ethically constrained societies, which inhibits rapid adoption of new paradigm-shifting technologies. Case in point: Google's self-driving cars need special laws created in order to operate, and special laws take time, because the technology is too radical; lawyers need to know who to sue if an accident happens with one of them, then politicians need to debate that, and so on. Basically, even if the technology is there, society moves more slowly and doesn't adhere to Moore's law.
- A BIG OMISSION (not discussed in the book) is the socio-economic and class-inequality issues associated with AI, or any advanced system (he does cursorily discuss government- and corporation-funded AI). I think a more probable dystopian future involving AI is not so much runaway AI, but rather a scenario more similar to the movie Elysium, where the elites use the power of AI systems to maintain their status and standard of living at the expense of everyone else.
- The author gets it right in saying we really, really don't know what a true self-aware AI system is likely to do. Frankly, I believe its behavior could go either way (benevolent or malevolent), and a thread running through the book is that we should be ready with some sort of ultimate off-switch if things really go bad, because we may not get a second chance.
- The author suggests we shouldn't project anthropomorphic concepts onto an AI system, yet he routinely uses the anthropomorphic concept of survival as the premise for his book. Why would the AI system need to survive? Why would its survival be such an inherent rule? Couldn't it just turn itself off when not needed, and then start up again at a later time? Why would it be "afraid" to "die"? That strikes me as a very human notion…
- He does mention modern-day cutting-edge research and operational AI-like systems, and it's interesting to keep an eye on how these projects develop. Among the projects he refers to are IBM Watson, the Cyc project, CALO, the NELL project, various DARPA-funded AI projects, Google's X research lab, etc.
- He points out that many governments and large corporations are working on both stealth projects (DARPA military AI and drone systems) and public ones (Google self-driving cars, IBM Watson) to create AI systems, and because of the exponential growth and development of computing platforms, we may see the first commercial AI systems in about 10-20 years.
It's a good read, although I think less exotic perils in the near term, such as war, disease, resource starvation, natural disaster, or space weather, may lead to our demise before any super-computer evil genius begins disassembling us…
More about this book…
- You can find a formal review of the book here at the New Yorker
- You can hear the author James Barrat discuss this book on the TWiT network's Triangulation podcast; by the way, he alludes to a forthcoming documentary on this topic during the interview.
- There's also a related Triangulation episode with Nick Bostrom on Superintelligence
You hit the nail on the head right there, Tony. An AI-pocalypse should be the least of our concerns.
Catherine, thanks! Yeah, I think things may go south in a lot of other ways… although I think the "Elysium" scenario is the most likely if we're still around when the smart machines appear. BTW, check out the podcast interview; he's planning a documentary based on this book.