Hot Best Seller

Artificial Intelligence: A Guide for Thinking Humans


A sweeping examination of the current state of artificial intelligence and how it is remaking our world. No recent scientific enterprise has proved as alluring, terrifying, and filled with extravagant promise and frustrating setbacks as artificial intelligence. The award-winning author Melanie Mitchell, a leading computer scientist, now reveals AI’s turbulent history and the recent spate of apparent successes, grand hopes, and emerging fears surrounding it. In Artificial Intelligence, Mitchell turns to the most urgent questions concerning AI today: How intelligent—really—are the best AI programs? How do they work? What can they actually do, and when do they fail? How humanlike do we expect them to become, and how soon do we need to worry about them surpassing us? Along the way, she introduces the dominant models of modern AI and machine learning, describing cutting-edge AI programs, their human inventors, and the historical lines of thought underpinning recent achievements. She meets with fellow experts such as Douglas Hofstadter, the cognitive scientist and Pulitzer Prize–winning author of the modern classic Gödel, Escher, Bach, who explains why he is “terrified” about the future of AI. She explores the profound disconnect between the hype and the actual achievements in AI, providing a clear sense of what the field has accomplished and how much further it has to go.
Interweaving stories about the science of AI and the people behind it, Artificial Intelligence brims with clear-sighted, captivating, and accessible accounts of the most interesting and provocative modern work in the field, flavored with Mitchell’s humor and personal observations. This frank, lively book is an indispensable guide to understanding today’s AI, its quest for “human-level” intelligence, and its impact on the future for us all.



30 reviews for Artificial Intelligence: A Guide for Thinking Humans

  1. 4 out of 5

    Brian Clegg

    As Melanie Mitchell makes plain, humans have limitations in their visual abilities, typified by optical illusions, but artificial intelligence (AI) struggles at a much deeper level with recognising what's going on in images. Similarly in some ways, the visual appearance of this book misleads. It's worryingly fat and bears the ascetic light blue cover of the Pelican series, which since my childhood have been markers of books that were worthy but have rarely been readable. This, however, is an excellent book, giving a clear picture of how many AI systems go about their business and the huge problems designers of such systems face. Not only does Mitchell explain the main approaches clearly, her account is readable and engaging. I read a lot of popular science books, and it's rare that I keep wanting to go back to one when I'm not scheduled to be reading it - this is one of those rare examples. We discover how AI researchers have achieved the apparently remarkable abilities of, for example, the Go champion AlphaGo, or the Jeopardy!-playing Watson. In each case these systems are tightly designed for a particular purpose and arguably have no intelligence in the broad sense. As for what's probably the most impressively broad AI application of modern times, self-driving cars, Mitchell emphasises how limited they truly are.
Like so many AI applications, the hype far exceeds the reality - when companies and individuals talk of self-driving cars being commonplace in a few years' time, it's quite clear that this could only be the case in a tightly controlled environment. One example that Mitchell explores in considerable detail is so-called adversarial attacks, a form of hacking particular to AI where, for example, those in the know can make changes to images that are invisible to the human eye but that force an AI system to interpret what it is seeing as something totally different. It's a sobering thought that, for example, by simply applying a small sticker to a stop sign on the road - unnoticeable to a human driver - an adversarial attacker can turn the sign into a speed limit sign as far as an AI system is concerned, with potentially fatal consequences. Don't get me wrong, Mitchell, a professor of computer science who has specialised in AI, is no AI luddite. But unlike many of the enthusiasts in the field (or, for that matter, those who are terrified AI is about to take over the world), she is able to give us a realistic, balanced view, showing us just how far AI has to go to come close to the more general abilities humans make use of all the time even in simple tasks. AI does a great job, for example, in something like Siri or Google Translate or unlocking a phone with a face - but AI systems still have no concept of, for example, understanding (as opposed to recognising) what is in an image. Mitchell makes it clear that where systems learn from large amounts of data, it is usually impossible to uncover how they are making decisions (which makes the EU's law requiring transparent AI decisions pretty much impossible to implement), so we really shouldn't trust them with important outcomes as they could easily be basing their outcomes on totally irrelevant inputs.
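The adversarial-attack mechanism described above can be sketched with a toy linear classifier. Everything below is invented for illustration (a real attack perturbs an image along the gradient of a trained deep network), but the principle is the same: a perturbation far too small for a human to notice flips the model's answer.

```python
import numpy as np

# Toy stand-in for an image classifier: the sign of w . x decides the label.
# The weights, "image", and labels are all invented for this sketch.
w = np.linspace(-1.0, 1.0, 784)          # fixed classifier weights
x = np.full(784, 0.5)                    # a flat grey "image"
x[w > 0] += 0.1                          # brighten some pixels so the label is clear

def predict(image):
    return "stop sign" if w @ image > 0 else "speed limit"

# Fast-gradient-sign-style perturbation: nudge each pixel a tiny amount in
# the direction that lowers the correct class's score. No pixel moves by
# more than 0.1, yet the predicted label flips.
epsilon = 0.1
x_adv = x - epsilon * np.sign(w)

print(predict(x), "->", predict(x_adv))   # stop sign -> speed limit
```

Because the toy model is linear, the worst-case perturbation is simply epsilon times the sign of the weights; for a deep network the gradient of the loss plays the same role.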
Apart from occasionally finding the explanations of the workings of types of neural network a little hard to follow, the only thing that made me raise an eyebrow was being told that Marvin Minsky 'coined the phrase "suitcase word"' - I would have thought 'derived the phrase from Lewis Carroll's term "portmanteau word"' would have been closer to reality. There have been good books on the basics of AI already, and excellent ones on the problems that 'deep learning' and big data systems throw up. But without a doubt, Mitchell's book sets a new standard in giving an understanding of what's possible and how difficult it is to go further. It should be read by every journalist, PR person and politician before they pump out yet more hype on the AI future. Recommended.

  2. 5 out of 5

    Tucker (TuckerTheReader)

    plot twist: this book was written by AI

  3. 4 out of 5

    Steve Agland

    This book should be widely read, especially by those with a technological or philosophical interest in artificial intelligence, which should be most people. It provides a succinct history of this ambitious thought-provoking field, and a beautiful overview of the current state of the art. It should be accessible to anyone unafraid of a little mathematics. Since it is such a quickly evolving field, this latter aspect may grow out of date rather quickly. But most importantly, this book is a well-argued reality check: a bucket of cold water. (You know what I mean by that analogy, but can a computer?) AI is a technological endeavour, and like other big sci-fi dreams - deep space travel, cheap clean energy, transhumanism - there is an enormous gap between our current capability (impressive though it is) and our vividly imagined end point. It's a gap that's easy to dismiss while breathlessly fretting over superintelligence and singularities, but that gap is filled with some extremely difficult challenges that we currently have little idea how to approach, let alone solve. Mitchell's prologue sets up the book as a quest to understand what is really going on in AI research, spurred by a colourful example of the starkly opposing views found in debates within and around the field. She recounts a visit by legendary AI researcher and author Douglas Hofstadter to Google's headquarters to give a talk.
Hofstadter wrote "Gödel, Escher, Bach: An Eternal Golden Braid" - a famous meditation on AI and the nature of human consciousness, which was a significant influence on many geeks, including both your humble reviewer and the author of this book. Mitchell sought out Hofstadter as a mentor as a result of reading it. Anyway, Hofstadter spoke to the younger hotshot Googlers of his aching anxiety that when general AI comes, it will reveal human consciousness as something not so special. Something explainable and easy to simulate. This was an unexpected departure from the usual AI-will-destroy-us/no-it-won't dichotomy. The Googlers, for the most part, also shared the faith that general AI was in some sense imminent, but that it would be a boon to society, and the existential angst didn't factor into their thinking. The topic of AI inspires a lot of this sort of philosophising - as it should - and much has been written on it. But like all such high-concept scientific pondering, it's healthy to rest it on a bedrock of hard technical reality. This book provides that reality. How far have we really got on the quest to build our own replacement? Mitchell begins with a sketch of the history of AI research from its birth in the 1950s, and outlines its key figures and main ideological branches. These are broadly classified as symbolic (programmed facts and rules for inferring new facts, analogous to conscious reasoning) and subsymbolic (biologically inspired structures which learn patterns and rules via lots of example data, analogous to subconscious learning). Early on the symbolists were the dominant sect, but the pendulum has swung dramatically in the other direction in the last decade or two thanks to the runaway success of "deep learning". For a more detailed survey of the many approaches to the challenge of AI, I recommend "The Master Algorithm" by Pedro Domingos, who is quoted a number of times in this book.
Mitchell then begins her coverage of the current state of the industry with a deep dive into its hottest algorithm (you hated that pun, but could a computer?). Deep neural networks are now being used everywhere, from image recognition, to automated translation, to self-driving cars. Their success has led many to speculate that this is a significant step toward the holy grail of "general AI" - that is, a system that can learn a wide range of domains and function in them all simultaneously. But Mitchell identifies a number of fatal flaws in current techniques that will limit how far they can be taken. And there are no obvious or easy solutions. Deep neural networks are very good at learning patterns from training data, but even with "big data" quantities of examples there will always be the "long tail" of rare or one-off exceptions that will cause deep neural networks to fail spectacularly. Just think of the sorts of strange things that might happen in the road in front of a self-driving car. These can cause bizarre and unpredictable outputs because the system has no "common sense" understanding of the world to fall back on. Similarly, such networks are vulnerable to malicious attack. Carefully designed and small changes to input data can induce incorrect responses, sometimes tuned to the attacker's wishes. Interestingly the changes can be so subtle that a human observer wouldn't notice the difference. This implies that the algorithms aren't really understanding the input, at least not in the way we do. And one of the most frustrating shortcomings of deep neural networks is their inability to "explain" their results. They may give correct answers but we don't know how they reached that answer, at least not in any conceptually meaningful way.
Mitchell's explanation of the ideas behind the recent breakthrough at playing the game of Go - that is, DeepMind's AlphaGo - was a very satisfying example of how various algorithmic techniques can be combined to yield spectacular results, albeit in narrow domains. AlphaGo combined deep neural networks with reinforcement learning, and Monte Carlo tree searching. Likewise, the survey of the successes and failures in the domain of natural language processing was fascinating, and provided the clearest example of the roadblock that's looming ahead in many of these subfields: that is, that current AI systems don't really understand the world. They don't understand what the words they are processing refer to. That difficult-to-articulate mapping between a word like "drink" and the abstract concept of a drink is something we humans so easily grasp. The final part of the book has a couple of chapters on these critical missing puzzle pieces. One of the key skills we employ so effortlessly as humans, but which has proven so difficult to train into a computer, is that of making analogies. We instantly recognise objects or situations as instances of an abstract idea (drinks, arguments, friends, accidents) and draw upon our experience with similar instances, applying it to new situations. We easily spot the pertinent "sameness" between two instances of a thing, and understand which differences are relevant and which aren't. As such we can generalise our knowledge powerfully. We haven't figured out how to get a machine to do this well. Mitchell rounds out the book with a "self interview", in the style of Gödel, Escher, Bach, where she asks herself many of the "big questions" related to AI and uses this to summarise her conclusions. She makes a compelling case using insider knowledge, copious examples, quotes from other leading thinkers, and an entertaining wry wit.
"And if any computers are reading this, tell me what "it" refers to in the previous sentence, and you're welcome to join in the discussion."

  4. 5 out of 5

    Marta Sarrico

    A realistic overview of the AI field, what it has accomplished so far and the long road ahead. Good for any level of expertise to get an overall idea of the techniques and especially their limits, without any of the hype that surrounds this field. There’s still a lot of work to do, AI isn’t going to take over the world in any near future but still... what a time to be alive!

  5. 5 out of 5

    howl of minerva

    Dr. Mitchell, I salute you. This is a primer that actually does its job. In a field that tends to Sarah Connorish hysteria or I-am-become-Goddish euphoria, she finds a middle way. She also manages to explain some of the nuts and bolts of how AI actually works. She does so in an intuitive, conceptual way without eye-glazing equations, absurd similes or hyperbole. A tour de force. Did she have a helping hand from the future?

  6. 4 out of 5

    Minervas Owl

    What I have learned:
    • The intuition behind neural networks, CNN, RNN, word2vec, and reinforcement learning.
    • The distinction between symbolic and subsymbolic AI. Symbolic AI (e.g., a solver for the Missionaries and Cannibals problem, expert systems) solves problems by applying human-interpretable concepts; subsymbolic AI (e.g., the perceptron, deep learning) solves problems without using such concepts.
    • Don't be too psyched by promises of AI such as "The Singularity is Near"; we've seen a repeating cycle of AI Spring and Winter since the 1950s.
    • The recent successes of deep learning are still examples of "narrow" AI. For instance, AlphaGo cannot do anything other than play Go.
    • Even in their narrow fields, the current AI systems are not perfect: they tend to perform poorly in a long tail of low-probability and unexpected events. They are also vulnerable to adversarial attacks. Not to mention the ethical challenges they encounter.
    • AI performs poorly in challenges that test its understanding of the rich meaning of a situation. Examples of such challenges include Winograd schemas, Bongard problems, and explaining why a photo is funny.
    • What makes humans different from AI: 1) common sense, a.k.a. the core intuitive knowledge of physics, biology, and psychology (e.g., parts of an object tend to move together); 2) the capabilities of abstraction and analogy.
    • Because researchers have been struggling for decades to understand and reproduce common sense, abstraction, and analogy, we are still far, far away from creating general human-level AI.
    Pros of the book: The book is an easy read. One can enjoy it without prior knowledge of AI methods such as deep learning. The author offers adequate intuitive explanations of state-of-the-art AI methods. The writing is clear and straightforward. The introductions of Winograd schemas, Bongard problems, and Hofstadter's string analogy puzzles are enchanting.
    Cons: The essence of the book is not fully elaborated until it reaches part V, the last three chapters. The first 230 pages before part V read like a deep-learning crash course interlaced with news stories. I think the crash course is necessary, but the author could have skimped on AI history and anecdotes. The book could offer a more in-depth analysis of certain topics, such as why CNN and RNN are vulnerable to adversarial examples, and how programs such as Copycat and Metacat make analogies. I also wish the book could talk more about unsupervised deep learning.
    Perhaps I am being a bit too harsh because of my wrong expectations. I read the book because I thought that, as a student of Hofstadter, Melanie Mitchell would offer some insightful philosophical analysis. But in fact, she is a professor of computer science rather than of philosophy. She intentionally planned to "entirely sidestep the question of consciousness, because it is so fraught scientifically." Overall, I think the book is worth reading. Although not in the way I expected, the book has delivered what it promised: an overview of the frontier of AI for a general audience. My reading experience has encouraged me to read Melanie's books on genetic algorithms and complexity as well.
    Stray observations:
    • Humans are also susceptible to our own types of "adversarial examples", such as visual illusions.
    • Superstitions can be understood as a result of over-fitting in reinforcement learning.
    • One way to assess the challenge of a domain for computers is to see how well simple algorithms such as random search and genetic algorithms perform on it.
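The word2vec intuition listed above can be illustrated with hand-made toy vectors. These four 3-d vectors are invented for the demo (real word2vec learns hundreds of dimensions from a corpus), but they show how an analogy reduces to vector arithmetic:

```python
import numpy as np

# Hand-made toy "word vectors" (invented, not learned): the first axis
# loosely codes "royalty", the second "male", the third "female", so
# relations between words become directions in the space.
vecs = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The classic analogy: king - man + woman should land nearest queen.
target = vecs["king"] - vecs["man"] + vecs["woman"]
best = max((w for w in vecs if w != "king"), key=lambda w: cosine(target, vecs[w]))
print(best)   # queen
```

The same nearest-neighbour-in-vector-space trick is how real embeddings answer analogy questions, just at much higher dimension and with learned coordinates.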

  7. 5 out of 5

    Pantelis Pipergias-Analytis

    Melanie Mitchell provides an excellent summary of the current endeavours across fields in AI and an interim (bleak) assessment of the field's progress towards the holy grail of strong AI. Mitchell discusses recent milestones reached by AI (most based on approaches using some form of deep learning) taking a critical point of view, devoid of the sensationalism encountered in the media. Mitchell’s writing is lucid and engaging: many technical concepts are explained in clear and vivid language. What’s more, Mitchell is one of Douglas Hofstadter’s most successful students. Thus, the book may also be read as a follow-up to Hofstadter's "GEB: An Eternal Golden Braid". Frequent references to Hofstadter's views on AI and to GEB are dispersed throughout the book and substantiate this interpretation. A lot has happened since GEB was originally published, and Mitchell's book provides a much-needed synthesis and constructive discussion of both Hofstadter's ideas and the current state-of-the-art approaches for re-creating intelligence. The book covers many areas of AI, especially considering the astounding growth of the field, but there were sections that could be further developed: for instance, the section on creativity at the end was interesting and could be developed into a stand-alone chapter. Notwithstanding, Mitchell’s AI is bound to entertain readers across disciplines and can become an accessible and balanced entry point to research on AI for a wide audience.

  8. 5 out of 5

    Jake

    I read this cover to cover in about two sittings, the first book I've done like that in a while. There are a lot of pop science books out about AI and machine learning, and a lot of them aren't very good. This is intelligent but not obscure, conversational to the point that it's almost gossipy - it reads like a Quanta article that delves just a bit deeper into its subject. Whether it's the cheating controversies, the history of AIs for games, or the speculative portions in the later part of the book, I found it all engrossing and perfectly succinct. When my friends express their fears about the singularity or artificial intelligence, this will be the book I give to them.

  9. 4 out of 5

    TS Allen

    "In any ranking of near-term worries about AI, superintelligence should be far down the list. In fact, the opposite of superintelligence is the real problem."

  10. 4 out of 5

    Florin Pitea

    Quite interesting. Recommended.

  11. 4 out of 5

    Jen

    ** I won an advance reader copy of this book for free through a Goodreads giveaway. ** This is a well-written and thought-provoking account of the history and potential future of artificial intelligence. The author writes in a style that allows both those who have a background in AI to gain new insight and those who have little to no background to follow along (I understand the basics but am by no means an expert and I had little to no trouble understanding). The author also does an excellent job of explaining misconceptions that exist about the field of AI and giving us a thoughtful look at the future of this field. Whether you are someone who is interested in going into the field or someone who fears the 'uprising', this book is an excellent place to start reading. Definitely recommended!

  12. 4 out of 5

    Thomas

    Excellent summary and overview of the current state and challenges facing artificial intelligence. Should be readily accessible to any interested reader without requiring pre-existing knowledge of the field.

  13. 5 out of 5

    Tam

    Wow, what a great read. No matter who you are, in this modern world it is highly likely that you are involved one way or another with AI. AI is so ubiquitous that at some point I wanted to dismantle the seemingly impenetrable barrier and peek inside a bit beyond the simplest definition. This book does exactly that, with such clarity. And it offers more than that. Besides providing some basic notions of various algorithms in AI, Mitchell brings to the table an in-depth discussion of the overall picture of the field, with her many years' experience in the field as an academic researcher. She doesn't have a relentless confidence in AI and its wild promises. At the same time she doesn't talk as if the endeavor is inevitably doomed. The balance is hard to keep, and is kept. I have been a bit skeptical of the current achievements of AI. Mostly I have been afraid of the misuse and the lack of transparency. Mitchell certainly mentions those aspects, but she further clarifies all the shortcomings without being dramatic or sensational. I realize I should be even more concerned. Most if not all applications are so fragile to attack (hacking), and I tremble to think of the consequences. Yes, some applications are extremely useful, but in the end the utmost importance lies in how humans regulate the scope of the use of AI. I think everyone should read this book, especially policy makers, especially young applied computer scientists.
At the same time, the beauty of the book is that Mitchell helps to elucidate how "intelligent" humans are. Many things we consider easy turn out gruellingly hard to analyze, not to mention to recreate in a machine. I think of Kahneman and his fast-thinking system. Oh yeah, we humans are trying to compete with nature, which has been building intelligent life for a few billion years. Of course it is hard.

  14. 5 out of 5

    Andreea

    Great introduction to the field of artificial intelligence for a general reader, pre-college teenagers, or even developers keen to explore new areas. It's a fair and balanced account of the algorithms and methodologies used in AI today and how they've evolved over the years. It's a wonderful antidote to all the people hyping their releases in the field and in media/journalism, and it will help you understand these cycles of hype and deflated expectations a lot better once you read it.

  15. 4 out of 5

    Anastasia

    A nice summary of what has been achieved in the field of AI and the misconceptions and work still being done.

  16. 4 out of 5

    May Ling

    Summary: Melanie Mitchell's guide is approachable in its coverage of where the world is at in AI as of 2019. I love that it presents a balanced view, and as a practitioner she doesn't try to sell AI as doing more than it currently can. p. 5 - Funny that she's getting lost in the Google Maps building. Ha! p. 13 - Hofstadter's terror had to do with making humanity mundane more than a fear of AI. p. 35 - Back-propagation (she references Perceptrons) "is a way to take the error observed at the output units (for example, a high confidence for the wrong digit in the example of figure 4) and to 'propagate' the blame for that error backward (in figure 4, this would be right to left) so as to assign proper blame to each of the weights in the network." I have no idea what she's saying, but I think basically when you realize that one layer is wrong, you then refit the previous layers. p. 53 - On how AI will be created: "...we will set up an intricate hierarchy of self-organizing systems, based largely on the reverse engineering of the human brain, and then provide for its education ... hundreds if not thousands of times faster than the comparable process for humans." p. 73 - WordNet provides something that appears to be similar to word networks. p. 72 - ImageNet: object research for adding human-labeled categories of photographs. p. 88 - "With the proliferation of deep-learning systems in real-world applications, companies are finding themselves in need of a new labeled data set for training deep neural networks." She uses the example of self-driving cars. p. 95 - The idea that people trust people in their cognitive perception but not AI. Intriguing. p. 102 - This shifts to the moral questions by starting to ask what decisions AI should be allowed to make (prison sentences, evaluation of loan applications, etc.). p. 154 - She gives the status of speech recognition and its current weaknesses in distinguishing voices, noises, etc. p. 160 - Word2Vec, an intriguing concept of placing word vectors into space as a way of unwrapping mental processes surrounding understanding word meanings. p. 175 - Translating images into sentences via captioning. Yeah. I forgot, that is really crazy that the brain does that. p. 208 - Cyc: the unwritten knowledge that humans have. It's the stuff that's so obvious we don't talk about it, but in fact you need to, because that's where you start with programming. p. 229 - EMI's music algo. p. 236 - She's still thinking about the space as a burgeoning discipline. I think the implication, if anyone thinks we're already there, is that the scope of what needs to be accomplished is still actually quite a bit bigger.
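    [Editor's note] The back-propagation passage quoted in the review above can be made concrete with a tiny numerical sketch. This is not the book's code; the network size, toy data, learning rate, and iteration count are all invented for illustration, and the error "blame" terms follow the standard sigmoid/mean-squared-error derivation rather than anything specific to the book's figure 4.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Toy data: 4 examples with 3 features each, binary targets (all invented).
    X = rng.random((4, 3))
    y = np.array([[0.0], [1.0], [1.0], [0.0]])

    W1 = rng.normal(size=(3, 5))   # input -> hidden weights
    W2 = rng.normal(size=(5, 1))   # hidden -> output weights
    lr = 0.5                       # learning rate (arbitrary choice)

    for _ in range(2000):
        # Forward pass (left to right in the book's figure).
        h = sigmoid(X @ W1)
        out = sigmoid(h @ W2)

        # Backward pass (right to left): the error observed at the
        # output units...
        err_out = (out - y) * out * (1 - out)
        # ...is propagated back through W2 to assign each hidden unit
        # its share of the blame.
        err_h = (err_out @ W2.T) * h * (1 - h)

        # Each weight is nudged in proportion to its share of the blame.
        W2 -= lr * h.T @ err_out
        W1 -= lr * X.T @ err_h

    # Final forward pass to measure the remaining error.
    out = sigmoid(sigmoid(X @ W1) @ W2)
    mse = float(np.mean((out - y) ** 2))
    ```

    Running the loop drives the mean squared error on the four toy examples toward zero; "assigning blame to each of the weights" is exactly this repeated backward pass followed by small weight updates.
    
    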

  17. 5 out of 5

    Kassie

    This was great and I bet if I wasn't so brain broken by pandemic I would have been able to finish it in a month instead of 4 because it is extremely well written and clear and just very good at explaining difficult concepts to huge dummies like me.

  18. 5 out of 5

    Donal Hurley

    This is a great overview of artificial intelligence for the lay person.

  19. 4 out of 5

    Henrik Warne

    Really great book. The author (an AI researcher) explains how many of today's state of the art AI systems work. These include image recognition, game playing (like AlphaGo) and NLP systems like Google Translate. The explanations for how those systems work are really good, with just the right amount of technical detail. We also get to see what the weak points of these systems are, and how far away we are from creating any true intelligence. I have written quite a long review/summary of it on my blog: https://henrikwarne.com/2020/05/19/ar...

  20. 5 out of 5

    Eric

    This felt like an 8th grade science report, and much of the writing was laughably bad. What became an entire book perhaps ought to have been a blog post and a bibliography. There was far too much of this pattern: “here’s what I’m about to say”; some casual, repetitive exposition; and “I just said”; littered with plenty of “as I said before”. I can do without all the prefacing and summary, thanks very much. While I fully appreciate this sort of accessible, high-level overview is important and valuable, that doesn’t mean I have to praise it and its lack of depth, style, and quality. I recommend this if you’re bored and wanting something mildly interesting, but fairly mindless, to read. Or if you’re ignorant of developments in computer science and AI in the last 60 years or so. Reach for something like GEB, or even one of the Manning books on machine learning, instead, if you prefer to think and be challenged.

  21. 4 out of 5

    Xavier Guardiola

    A fresh, down-to-earth review of the current AI craze. Melanie Mitchell is no stranger to the field (she did her Ph.D. with Douglas Hofstadter, and, quite some years ago, published the best book about genetic algorithms you can find). She's quick to pinpoint the limits of current neural-network-centric (CNNs, RNNs, DQNs) approaches to AI, highlighting the need to overcome the "barrier of meaning" and search for models that could work with 'common sense' knowledge and the capacity for sophisticated abstraction and analogy-making. Fun read, she writes very well.

  22. 5 out of 5

    Rada

    A wonderful introduction to AI, covering the latest techniques and resources in a truly accessible way. It deserves the title of “a guide for thinking humans” — I was impressed by the author’s ability to convey complex AI concepts in simple terms. A bit that I particularly liked — the recurring recipe for AI research (i.e., hype): define a narrow problem, achieve human-level performance, make big claims for the broader problem.

  23. 5 out of 5

    David Readmont-Walker

    Well explained, balanced, nuanced evaluation of the current status of AI.

  24. 5 out of 5

    Harsha Kokel

    I think the best way to describe this book is by quoting Fredrik Backman. In A Man Called Ove, Fredrik Backman describes that loving someone is like moving into a new house. At first, we love it because it's new, but gradually we learn about its nooks and crannies and love it even more. This book talks about the nooks and crannies of Artificial Intelligence. Prof. Mitchell provides insights into the current developments in Artificial Intelligence and highlights the cracks in this subject. Not to criticize the current progress but to take stock of what has happened. She brings forward a beautiful narrative on what predictions were made, what promises were fulfilled, what controversies emerged, and what is still to be done in this discipline. She takes us through the beautiful journey of AI -- from the instigation of AI at the Dartmouth Workshop in 1956, the advent of expert systems like MYCIN in the 1970s, the advent of machine learning, Deep Blue defeating the world chess champion in 1997, IBM Watson's performance at Jeopardy! in 2011, the emergence of Amazon Mechanical Turk and ImageNet in 2010, the resurgence of deep learning with AlexNet in 2012, the release of Atari game simulators in 2013, AlphaGo glamorously defeating Lee Sedol in 2016, AlphaZero playing multiple games, etc. The progress in vision, NLP, and speech recognition is very much evident in our day-to-day lives now with virtual assistants and social media.
    She informs us of how all these technologies have evolved, what bets were placed, who shaped the field, and how current systems work. I found the last chapter of this book, the barrier of meaning, most enlightening. It is based on her and her advisor's work. I found myself agreeing with the crucial conclusion she draws after describing that work. Humans use analogies and metaphors a lot to drive the knowledge or understanding of abstract concepts. That is what makes us intelligent. Current progress in AI is shallow when evaluated from this perspective. The Turing test (as described by Turing) is not the gold standard, and we need a better way to measure intelligence. This book also gave me a perspective on how research efforts have evolved the field. How small things, by chance or by choice, steered the discipline to what we see now. It inspires me to continue working in this field. Some developments in the field are missing from the book (like Bayesian networks, GNNs), some didn’t get enough attention (like WordNet, causality), and more developments have happened after the book was released (like BERT). So this book is by no means a complete guide. But that just speaks to the magnitude of the field. There is much done, but much MUCH more to be done!!

  25. 4 out of 5

    Venky

    René Descartes, a French philosopher, mathematician, and scientist, in elucidating his famous theory of dualism, expounded that there exist two kinds of foundation: mental and physical. The mental can exist outside of the body, while the body cannot think. Popularly known as mind-body dualism or Cartesian duality (after the theory’s proponent), the central tenet of this philosophy is that the immaterial mind and the material body, while being ontologically distinct substances, causally interact. The British philosopher Gilbert Ryle, in describing René Descartes’ mind-body dualism, introduced the now immortal phrase “ghost in the machine” to highlight the view of Descartes and others that mental and physical activity occur simultaneously but separately. Ray Kurzweil, the high priest of futurism and Director of Engineering at Google, takes Cartesian duality to a higher plane with his public advocacy of concepts such as technological singularity and radical life extension. Kurzweil argues that with giant leaps in the domain of Artificial Intelligence, mankind will experience radical life extension by 2045. Skeptics on the other hand bristle at this very notion, claiming such “Kurzweilian” aspirations to be mere fantasies putting to shame even the most ludicrous of pipe dreams. The advances in the field of AI have spawned a seminal debate that has a vertical cleave.
    On one side of the chasm are the undying optimists such as Ray Kurzweil, predicting a new epoch in the history of mankind, while on the other side of the divide are placed pessimists and naysayers such as Nick Bostrom, James Barrat, and even the likes of Bill Gates, Elon Musk, and Stephen Hawking, who advocate extreme caution and warn about existential risks. So what is the actual fact? Melanie Mitchell, a computer science professor at Portland State University, takes this conundrum head on in her eminently readable book, “Artificial Intelligence: A Guide for Thinking Humans.” A measured book that abhors mind-numbing technicalities and arcane elaborations, Ms. Mitchell’s work embodies a matter-of-fact narrative that seeks to demystify the future of both AI and its users. The book begins with a meeting organized by Blaise Agüera y Arcas, a computer scientist leading Google’s foray into machine intelligence. In the meeting, the genius AI pioneer and author of the Pulitzer Prize-winning book “Gödel, Escher, Bach: an Eternal Golden Braid” (or just “gee-ee-bee”), Douglas Hofstadter, expresses downright alarm at the principle of singularity being touted by Kurzweil: “If this actually happens, we will be superseded. We will be relics. We will be left in the dust.” A former research assistant of Hofstadter, Ms. Mitchell is surprised to hear such an exclamation from her mentor. This spurs her on to assess the impact of AI in an unbiased vein. Tracing the modest trajectory of the beginning of AI, Ms. Mitchell informs her reader about a small workshop at Dartmouth in 1956 where the seeds of AI were first sown.
    John McCarthy, universally acknowledged as the father of AI and the inventor of the term itself, persuaded Marvin Minsky, a fellow student at Princeton, Claude Shannon, the inventor of information theory, and Nathaniel Rochester, a pioneering electrical engineer, to help him organize “a 2 month, 10-man study of artificial intelligence to be carried out during the summer of 1956.” What began as a muted endeavor has now morphed into a creature that is both revered and reviled, in equal measure. Ms. Mitchell lends a technical element to the book by dwelling on concepts such as symbolic and sub-symbolic AI. She also lends a fascinating insight into the myriad ways in which various intrepid pioneers and computer experts attempted to distill the element of “learning” into a computer, thereby bestowing it with immense scalability and computational skills. For example, using a technique termed back-propagation, the errors observed at the output units are “propagated” backward so as to assign proper blame to each of the weights in the network. This allows back-propagation to determine how much to change each weight in order to reduce the error. The beauty of Ms. Mitchell’s explanations lies in their simplicity. She breaks down seemingly esoteric concepts into small chunks of ‘learnable’ elements. It is these kinds of techniques that enabled IBM’s Deep Blue to defeat world chess champion Garry Kasparov, and IBM’s Watson to triumph over Jeopardy! champions Ken Jennings and Brad Rutter. So with such stupendous advances, is the time when artificial intelligence surpasses human intelligence already upon us? Ms. Mitchell does not think so. Taking recourse to Alan Turing’s “argument from consciousness,” Ms. Mitchell brings to our attention Turing’s summary of the neurologist Geoffrey Jefferson’s quote: “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain—that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.” Ms. Mitchell also highlights – in a somewhat metaphysical manner – the inherent limitations of a computer to gainfully engage in the attributes of abstraction and analogy. In the words of her own mentor Hofstadter and his coauthor, the psychologist Emmanuel Sander, “Without concepts there can be no thought, and without analogies there can be no concepts.” If computers are bereft of common sense, it is not for want of their users trying to ‘embed’ some into them. A famous case in point is Douglas Lenat’s Cyc project, which ultimately turned out to be a bold, albeit futile, exercise. A computer’s inherent limitation in thinking like a human being was also demonstrated by Winograd schemas, designed precisely to be easy for humans but tricky for computers. Hector Levesque, Ernest Davis, and Leora Morgenstern, three AI researchers, proposed using a large set of Winograd schemas as an alternative to the Turing test. The authors argued that, unlike the Turing test, a test that consists of Winograd schemas forestalls the possibility of a machine giving the correct answer without actually understanding anything about the sentence. The three researchers hypothesized (in notably cautious language) that “with a very high probability, anything that answers correctly is engaging in behaviour that we would say shows thinking in people.” Finally, Ms. Mitchell concludes by declaring that machines are as yet incapable of generalizing, understanding cause and effect, or transferring knowledge from situation to situation – skills human beings begin to develop in infancy. Thus while computers won’t dethrone man anytime soon, goading them on to bring such an endeavor to fruition might not be a wise idea, after all.

  26. 5 out of 5

    Yusei Nishiyama

    I immediately took up this book after learning the author studied under Douglas Hofstadter, the author of Gödel, Escher, Bach. By describing recent developments in machine learning techniques and their limitations, the author concludes: the take-home message from this book is that we humans tend to overestimate AI advances and underestimate the complexity of our own intelligence. I totally agree with this. Although there has been great progress in AI communities and subsymbolic approaches like Deep Learning have already changed our lives, there's no intelligence as we would ascribe to humans in it. The way the media anthropomorphizes AI is misleading, and we don't need to be worried that it will become too smart and make us subordinate. The "embodiment hypothesis" is a new concept I learnt from the book: the premise that a machine cannot attain human-level intelligence without having some kind of body that interacts with the world. It sounds almost as if only humans or something as sophisticated as humans can acquire intelligence. I share the same view. This book is not nearly as captivating as "Gödel, Escher, Bach" and rather dry, but it gives you a good sense of what the latest AI technologies really entail.

  27. 4 out of 5

    Henry

    All in all a great read. The chapters somewhat trace the current state of some of the central AI topics, while also giving nice summaries of how things work under the hood. This meandering style doesn't lend itself well to delving deep into philosophical dilemmas, nor describing all the fun gnarly bits of methods du jour. But I think it did a great job of painting a wide perspective that allowed for some thoughtful insights in the last few chapters on how hard strong AI really is, and what understanding really means.

  28. 4 out of 5

    Claudiu Vădean

    Tedious attempt to survey the state of the art in AI research. It focuses mainly on deep-learning-related developments, while ignoring other avenues (brain simulation, neuroevolution, logic-based methods). I feel that Mitchell rushed to present as many applications of deep learning as possible, sacrificing depth in the process. More points are deducted for the sheer amount of repetitiveness (the "common sense" complaint, for example) and the hand-wavy approach to important questions (AI ethics). Not recommended.

  29. 4 out of 5

    Ruthann Wheeler

    This is such a computer geek book. So when I say I loved it, it tells you something about me. I'm fascinated by computer learning. In another life I would be a deep learning engineer. I recommend this book for anyone interested in how A.I. is evolving and how it may affect our lives.

  30. 5 out of 5

    Gabriel Nicholas

    The perfect book for those of us who took machine learning coursework in school and then forgot absolutely all of it.
