My name is Daniel Dewey. I am the Alexander Tamas Research Fellow on Machine Superintelligence and the Future of AI at the Future of Humanity Institute in Oxford, and a research associate at the Machine Intelligence Research Institute. I was previously at Google, Intel Labs Pittsburgh, and Carnegie Mellon University.
My current focus is the long-term future of AI, of which the most significant and understudied feature seems to be intelligence explosion. An intelligence explosion is a process in which an intelligent machine devises improvements to itself, then the improved machine improves itself, and so on in a chain reaction, or "explosion". It is plausible that such a process could occur very rapidly, and could continue until a machine much more intelligent than any human is created. If intelligence explosion is possible, many interesting problems gain practical importance:
Powerful computers will be able to perform accurate inferential and decision-theoretic calculations, and so will be able to choose effective courses of action to achieve whatever ends they are designed to pursue. Most ends that are easy to specify are not compatible, in their fullest realisations, with valuable futures. Are there ways to manage such potentially harmful capabilities?
Though a powerful computer should be able to determine what kinds of ends are "valuable" (by modelling our preferences and philosophical arguments), it will only choose actions that further those ends if we program it to do so; valuable ends are only pursued by very particular programs. Programming a computer to pursue what is valuable turns out to be a difficult problem (if it seems easy, look closer!).
As a computer self-improves, it may make mistakes; even if the first computer is programmed to pursue valuable ends, later ones may not be. Designing a stable self-improvement process involves some open problems in logic and decision theory.
Intelligence of the kind needed for an explosion seems to lie along most developmental paths that we could pursue; avoiding intelligence explosions altogether would therefore require significant coordination.
Little is yet known about the possibility of an intelligence explosion, or about the rest of the risk and strategic landscapes of the long-term future of artificial intelligence. This is an area that is very much in need of foundational research, and would benefit greatly from determined researchers and visionary funders.
I spoke at TEDxVienna in 2013. The talk is an introduction to intelligence explosion, and I close with four open questions for the future of AI. You can watch the video on YouTube, or check out the transcript, slides, and links here.