Daniel Dewey

Contact me: Twitter, Email

My name is Daniel Dewey. I am the Alexander Tamas Research Fellow on Machine Superintelligence and the Future of AI at the Future of Humanity Institute. I was previously at Google, Intel Labs Pittsburgh, and Carnegie Mellon University.

My research

My current focus is the long-term future of AI. Its most significant and understudied feature seems to be intelligence explosion: a process in which an intelligent machine devises improvements to itself, the improved machine improves itself further, and so on. The resulting increase in machine intelligence could be very rapid, and could give rise to superintelligent machines far more capable than any human or group of humans at tasks such as inference, planning, and problem-solving. It is not yet known whether this phenomenon is possible, what resources it would require, or how quickly an explosion could proceed (a toy sketch of these dynamics follows the list below). If intelligence explosion is possible, many interesting problems gain practical importance:

Superintelligent machines will be able to perform accurate inferential and decision-theoretic calculations, and so will be able to choose effective courses of action to achieve whatever ends they are designed to pursue. Most ends that are easy to specify are not compatible, in their fullest realisations, with valuable futures; worse, superintelligent machines could emerge suddenly and unexpectedly. Are there ways to manage such potentially harmful capabilities?

Though a superintelligent machine should be able to determine what kinds of ends are "valuable" (by modelling our preferences and philosophical arguments), it will only choose actions that further those ends if we program it to do so; valuable ends are only pursued by very particular programs. Programming a computer to pursue what is valuable turns out to be a difficult problem.

Intelligence of the kind needed for an explosion seems to lie along most developmental paths we could pursue, so avoiding intelligence explosions altogether would require significant coordination.
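
To make the uncertainty about explosion speed concrete, here is a purely illustrative toy sketch, not a model drawn from the literature: the recurrence, the parameters r and k, and the function name are assumptions for illustration only. Each step, a system of capability c builds a successor of capability c + r·c^k; the assumed returns parameter k determines whether growth fizzles, compounds, or explodes.

```python
# Purely illustrative toy model: NOT a claim about real AI systems.
# A system with capability c builds a successor with capability
# c + r * c**k, where r and k are assumed parameters; k controls the
# returns to self-improvement and hence how fast capability grows.

def self_improvement_trajectory(c0=1.0, r=0.1, k=1.0, steps=30):
    """Iterate the assumed recurrence and return the capability at each step."""
    trajectory = [c0]
    c = c0
    for _ in range(steps):
        c = c + r * c ** k
        trajectory.append(c)
    return trajectory

if __name__ == "__main__":
    # k < 1: diminishing returns (growth slows); k = 1: compounding
    # (roughly exponential growth); k > 1: accelerating returns ("explosion").
    for k in (0.5, 1.0, 1.5):
        print(f"k = {k}: final capability ~ {self_improvement_trajectory(k=k)[-1]:.3g}")
```

Whether real self-improvement dynamics resemble any of these regimes, and at what resource cost, is exactly the kind of open question described above.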

Little is yet known about the possibility of an intelligence explosion, or about the rest of the risk and strategic landscape of the long-term future of artificial intelligence. This area is very much in need of foundational research, and would benefit greatly from determined researchers and visionary funders.

Recommended reading

Selected work

Other publications



What to read after you've read Superintelligence

This is a reading list on existential risk from artificial intelligence for those who have already read Nick Bostrom's Superintelligence: Paths, Dangers, Strategies.

As such, I will skip over more introductory papers that newcomers to the area should certainly read, like The Singularity: A Philosophical Analysis and Intelligence Explosion: Evidence and Import, as well as papers that were mostly covered by Superintelligence, like Learning What to Value and Racing to the Precipice.

If you'd like to look at those papers as well, most can be found either on MIRI's or FHI's research pages.

AI improvement speed

Strategy

Machine Intelligence Research Institute

Other
