My name is Daniel Dewey. I am the Alexander Tamas Research Fellow on Machine Superintelligence and the Future of AI at the Future of Humanity Institute in Oxford, and a research associate at the Machine Intelligence Research Institute. I was previously at Google, Intel Labs Pittsburgh, and Carnegie Mellon University.
My current research focus is intelligence explosion. An intelligence explosion is a process in which an intelligent machine devises improvements to itself (or, equivalently, designs a more intelligent "successor"), then the improved machine improves itself, and so on in a chain reaction, or "explosion". It is plausible that such a process could occur very rapidly, and could continue until a machine much more intelligent than any human is created. If intelligence explosion is possible, many interesting problems gain practical importance:
Powerful computers will be able to perform accurate inferential and decision-theoretic calculations, and so will be able to choose effective courses of action to achieve whatever ends they are designed to pursue. Most ends that are easy to specify are not compatible, in their fullest realisations, with valuable futures. Are there ways to manage such potentially harmful capabilities?
Though a powerful computer should be able to determine what kinds of ends are "valuable" (by modelling our preferences and philosophical arguments), it will choose actions that further those ends only if we program it to do so; valuable ends are pursued only by very particular programs. Programming a computer to pursue what is valuable turns out to be a difficult problem (if it seems easy, look closer!).
As a computer self-improves, it may make mistakes; even if the first computer is programmed to pursue valuable ends, later ones may not be. Designing a stable self-improvement process involves some open problems in logic and decision theory.
Intelligence of the kind needed for an explosion seems to lie along most developmental paths that we could pursue. It is not clear how we could avoid an intelligence explosion.
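The chain-reaction dynamic described above can be sketched as a toy model. In this purely illustrative sketch, each machine builds a successor whose capability gain is proportional to its own capability; the growth rate and starting capability are hypothetical numbers chosen for illustration, not claims about any real system.

```python
def explosion(capability: float, r: float, generations: int) -> list[float]:
    """Toy model of recursive self-improvement: each machine designs a
    successor that is (1 + r * capability) times as capable as itself,
    so the better a machine is, the bigger the next improvement step.
    All values are hypothetical and purely illustrative."""
    history = [capability]
    for _ in range(generations):
        capability *= 1 + r * capability  # gain scales with current capability
        history.append(capability)
    return history

# Each successive ratio between generations grows, giving the
# faster-than-exponential "explosion" shape the text describes.
print(explosion(1.0, 0.1, 5))
```

Under these assumptions, the ratio of each generation's capability to the last keeps increasing, which is the runaway feedback loop that makes the process an "explosion" rather than ordinary steady improvement.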
...and, before you ask: I am not worried that self-driving cars, drones, or Siri will undergo an intelligence explosion!