Stuart J. Russell quotes:

  • Some PhD physicists write software or work for hedge funds, but physics still has the problem of having very smart people and not enough opportunities for them.

  • Google or other search engines are examples of AI, and relatively simple AI, but they're still AI. That plus an awful lot of hardware to make it work fast enough.

  • When I read philosophy or neuroscience papers about consciousness, I don't get the sense we're any closer to understanding it than we were 50 years ago.

  • No one has a clue how to build a conscious machine, at all. We have less clue about how to do that than we have about how to build a faster-than-light spaceship.

  • Leg locomotion was, for decades, thought to be an incredibly difficult problem. There has been very, very painstakingly slow progress there, and robots that essentially lumbered along at one step every 15 seconds and occasionally fell over.

  • The more we learn about AI and about how the brain works, the more amazing the brain seems. Just the sheer amount of computation it does is truly incredible, especially for a couple of pounds of meat.

  • You're able to use a search engine, like Google or Bing or whatever. But those engines don't understand anything about the pages that they give you; they essentially index the pages based on the words they contain, and then they intersect that with the words in your query, and they use some tricks to figure out which pages are more important than others. But they don't understand anything. (A toy sketch of this indexing scheme appears after the quotes below.)

  • A lot of people talk about sometime around 2030, machines will be more powerful than the human brain, in terms of the raw number of computations they can do per second. But that seems completely irrelevant. We don't know how the brain is organized, how it does what it does.

  • There is no scientific theory that could lead us from a detailed map of every single neuron in someone's brain to a conscious experience. We don't even have the beginnings of a theory whose conclusion would be "such a system is conscious."

  • You have a very precisely defined goal and you build a machine that's superhuman in its capabilities for achieving goals. If it turns out that the subsequent behavior of the robot in achieving that goal is not what you want, you have a real problem.

  • If you had a system that could read all the pages and understand the context, instead of just throwing back 26 million pages to answer your query, it could actually answer the question. You could ask a real question and get an answer as if you were talking to a person who read all those millions and billions of pages, understood them, and synthesized all that information.

  • AI's ability to recognize visual categories and images is now pretty close to what human beings can manage, and probably better than a lot of people's, actually. AI can have more knowledge of detailed categories, like animals and so on.

  • The singularity has nothing to do with consciousness.

  • Most of the AI goes into figuring out which are the important pages you want, and, to some extent, what your query means and what you're likely to be after, based on your previous behavior and other information it collects about you.

  • Chess programs don't play chess the way humans play chess. We don't really know how humans play chess, but one of the things we do is spot some opportunity on the chess board, such as a move to capture the opponent's queen.

  • Everything we have of value as human beings - as a civilization - is the result of our intelligence.

  • I expect that people are going to feel differently about that once they're aware that AI systems can watch through a camera and can, in some sense, understand what it's seeing.

  • I used to say that if you gave me a trillion dollars to build a sentient or conscious machine, I would give it back. I could not honestly say I knew how it would work.

  • If human beings are losing every time, it doesn't matter whether they're losing to a conscious machine or a completely non-conscious machine; they still lost. The singularity is about the quality of decision-making, which is not consciousness at all.

  • It's really important to understand the difference between sentience and consciousness, both of which are important for human beings.

  • It's unlikely that machines would spontaneously decide they didn't like people, or that they had goals in opposition to those of human beings.

  • It's very hard to predict what kind of uses we'd make of assistants that could read and understand all the information the human race has ever generated. It could be really transformational.

  • Some people think that, inevitably, every robot that does any task is a bad thing for the human race, because it could be taking a job away. But that isn't necessarily true. You can also think of the robot as making a person more productive and enabling people to do things that are currently economically infeasible. A person plus a robot, or a fleet of robots, could do things that would be really useful.

  • The robot is not going to want to be switched off because you've given it a goal to achieve and being switched off is a way of failing - so it will do its best not to be switched off. That's a story that isn't made clear in most movies, but I think it is a real issue.

  • There are lots of companies that are really trying to collect as much information as they can about every single person on the planet, because they think it's going to be valuable - and it probably already is.

  • To my knowledge nobody - no one who is publishing papers in the main field of AI - is even working on consciousness. I think there are some neuroscientists who are trying to understand it, but I'm not aware that they've made any progress.

  • We call ourselves Homo sapiens--man the wise--because our intelligence is so important to us. For thousands of years, we have tried to understand how we think: that is, how a mere handful of matter can perceive, understand, predict, and manipulate a world far larger and more complicated than itself. The field of artificial intelligence, or AI, goes further still: it attempts not just to understand but also to build intelligent entities.

  • What AI could do is essentially be a power tool that magnifies human intelligence and gives us the ability to move our civilization forward. It might be curing disease, it might be eliminating poverty. Certainly it should include preventing environmental catastrophe. If AI could be instrumental to all those things, then I would feel it was worthwhile.

  • When people talk about the singularity, when people talk about superintelligent AI, they're not talking about sentience or consciousness. They're talking about superhuman ability to make high-quality decisions.
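
The search-engine quotes above describe, in effect, an inverted index: pages are indexed by the words they contain, a query is answered by intersecting the posting sets for its words, and some extra trick ranks the results by importance. Below is a minimal Python sketch of that idea; the tiny corpus, the names, and the crude term-frequency ranking are illustrative assumptions, not anything from Russell or from a real engine.

from collections import defaultdict

# Toy corpus standing in for web pages (an assumption for illustration).
pages = {
    "page1": "the robot lifted the box onto the shelf",
    "page2": "chess programs search many moves per second",
    "page3": "the chess robot moved the queen",
}

# Build the inverted index: word -> set of pages containing that word.
index = defaultdict(set)
for page_id, text in pages.items():
    for word in text.split():
        index[word].add(page_id)

def search(query):
    """Intersect the posting sets for the query words, then apply a
    deliberately crude 'importance' trick: rank hits by how often the
    query words occur on each page (real engines use far more signals)."""
    words = query.split()
    postings = [index[w] for w in words if w in index]
    if len(postings) < len(words):
        return []  # some query word appears on no page at all
    hits = set.intersection(*postings)
    score = lambda p: sum(pages[p].split().count(w) for w in words)
    return sorted(hits, key=score, reverse=True)

print(search("chess robot"))  # -> ['page3']

Note that nothing here "understands" a page; the program only matches and counts words, which is exactly the point the quote is making.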
