Daniel Dewey

Research fellow, Future of Humanity Institute, Oxford

Contact me: Twitter, Email

My name is Daniel Dewey. I am currently the Alexander Tamas Research Fellow on Machine Superintelligence and the Future of AI at the Future of Humanity Institute at Oxford. I was previously at Google Seattle, and I did research at Intel Labs Pittsburgh while an undergraduate at Carnegie Mellon University.

If you've been following work on long-term AI safety and existential risk for some time, here are the latest things I recommend:

My research

My current focus is the long-term future of AI, whose most significant and understudied feature seems to be intelligence explosion: a process in which an intelligent machine devises improvements to itself, the improved machine improves itself further, and so on. This could lead to the sudden creation of superintelligent machines, extremely efficient at tasks such as inference, planning, and problem-solving. It is not known whether an intelligence explosion is possible, what resources it would require, or how quickly it could proceed.
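
As a toy illustration only (the gain k and returns exponent alpha below are made-up parameters, not a model of any real system), the difference between an explosion and a fizzle can be captured by a simple recursion in which each generation's improvement depends on the current capability level:

    # Toy model of recursive self-improvement: each step's improvement
    # scales as k * c**alpha. Parameters are illustrative, not empirical.
    def capability_trajectory(c0=1.0, k=0.1, alpha=1.5, steps=20):
        levels = [c0]
        for _ in range(steps):
            c = levels[-1]
            levels.append(c + k * c ** alpha)  # next capability level
        return levels

    # alpha < 1: diminishing returns; alpha == 1: steady compounding;
    # alpha > 1: accelerating, "explosive" growth.
    for alpha in (0.5, 1.0, 1.5):
        print(alpha, [round(c, 1) for c in capability_trajectory(alpha=alpha)[:10]])

Whether real self-improvement would look more like the diminishing-returns regime or the accelerating one is precisely the open question.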

Superintelligent machines will be able to choose effective courses of action to achieve any end they are designed to pursue. If these machines are very capable, they could use up most of the world's resources in pursuit of those ends, and we would not be able to stop them. Are there ways to manage such potentially harmful capability?

Though a superintelligent machine should be able to determine what kinds of ends are "valuable" (by modelling our preferences and philosophical arguments), it will only choose actions that further those ends if we program it to do so; valuable ends are only pursued by very particular programs. Programming a computer to pursue what is valuable turns out to be a difficult problem.
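
A minimal sketch of why this is hard, using hypothetical actions and scores: an optimizer pursues whatever objective it is actually given, so any gap between the programmed objective and the intended values is inherited by its choices.

    # Hypothetical actions and scores, purely for illustration.
    actions = ["A", "B", "C"]
    intended_value = {"A": 10, "B": 2, "C": 1}       # what we actually care about
    programmed_objective = {"A": 3, "B": 9, "C": 1}  # the imperfect proxy we coded in

    chosen = max(actions, key=lambda a: programmed_objective[a])
    best = max(actions, key=lambda a: intended_value[a])
    print(chosen, best)  # prints "B A": the agent optimizes the proxy, not the intended value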

Intelligence of the kind needed for an explosion seems to lie along most developmental paths we could pursue, so reliably avoiding intelligence explosions would require significant coordination.

Little is yet known about the possibility of an intelligence explosion, or about the rest of the risk and strategic landscape of the long-term future of artificial intelligence. This area is very much in need of foundational research, and it can benefit greatly from determined researchers and visionary funders.

Some major works in this area to read:

What to read after you've read Superintelligence

This is a reading list on existential risk from artificial intelligence for those who have already read Nick Bostrom's book Superintelligence. As such, I will skip over more introductory papers that newcomers to the area should certainly read, as well as papers that were mostly covered by Superintelligence.

AI improvement speed:

Strategy:

Machine Intelligence Research Institute:

Other:

My publications

  1. Long-term strategies for ending existential risk from fast takeoff, forthcoming.
  2. What could we do about intelligence explosion?, slides, May 2014.
  3. Crucial phenomena. FQXi 2014 essay contest "How Should Humanity Steer the Future?", 2nd place winner.
  4. Additively efficient universal computers. ECCC TR14-044, March 2014; under review elsewhere.
  5. Reinforcement learning and the reward engineering principle. AAAI Spring Symposium Series, 2014. Slides.
  6. A representation theorem for decisions about causal models. Artificial General Intelligence, 2012.
  7. Learning what to value. Artificial General Intelligence, 2011.
  8. Generalizing metamodules to simplify planning in modular robotic systems. International Conference on Intelligent Robots and Systems, 2008.
