Daniel Dewey

Contact me: Twitter, Email

My name is Daniel Dewey. I'm currently an independent consultant and researcher on long-term AI safety, based in Portland, OR. I was previously a research fellow at the Programme on the Impacts of Future Technology at the Future of Humanity Institute and Oxford Martin School, a software engineer at Google Seattle, and a student researcher at Intel Labs Pittsburgh while an undergraduate at Carnegie Mellon University.

If you've been following work on long-term AI safety and existential risk for some time, here are the latest things I recommend:

My research

My current focus is the long-term future of AI, whose most significant and understudied feature seems to be the possibility of an intelligence explosion: a process in which an intelligent machine devises improvements to itself, the improved machine improves itself further, and so on. This could lead to the sudden creation of superintelligent machines, extremely capable at e.g. inference, planning, and problem-solving. It is not yet known whether an intelligence explosion is possible, what resources it would require, or how quickly it could proceed.
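
To make the dynamics a bit more concrete, here is a minimal toy model of recursive self-improvement (my own illustrative sketch; the update rule, the constants, and the "capability" scale are assumptions, not claims about real systems). If a system of capability I can improve itself at a rate roughly proportional to I^k, then small differences in k separate diminishing returns, steady exponential growth, and runaway growth:

```python
# Toy model of recursive self-improvement (illustrative sketch only; the
# update rule, constants, and "capability" scale are assumptions).
# Capability grows by c * I**k per improvement cycle:
#   k < 1  -> diminishing returns,
#   k == 1 -> exponential growth,
#   k > 1  -> runaway ("explosive") growth.

def steps_to_threshold(k, c=0.02, start=1.0, threshold=1e12, horizon=2000):
    capability = start
    for step in range(1, horizon + 1):
        capability += c * capability ** k
        if capability >= threshold:
            return step       # crossed the threshold at this cycle
    return None               # never crossed within the horizon

for k in (0.5, 1.0, 1.5):
    crossed = steps_to_threshold(k)
    if crossed is None:
        print(f"k={k}: still below threshold after 2000 cycles")
    else:
        print(f"k={k}: crossed threshold at cycle {crossed}")
```

Whether anything like the superlinear regime is achievable in practice, and with what resources, is exactly the open question.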

Superintelligent machines will be able to choose effective courses of action to achieve whatever ends they are designed to pursue. If these machines are very capable, they could consume most of the world's resources in pursuit of those ends, and we would not be able to stop them. Are there ways to manage such potentially harmful capability?

Though a superintelligent machine should be able to determine what kinds of ends are "valuable" (by modelling our preferences and philosophical arguments), it will only choose actions that further those ends if it is programmed to do so; valuable ends are pursued only by very particular programs. Programming a computer to pursue what is valuable turns out to be a difficult problem.
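
As an illustration of why valuable ends are pursued only by very particular programs, here is a deliberately silly toy example (entirely hypothetical; the environment and reward are invented purely to show the gap between an intended goal and the objective actually specified). We want an agent to keep a room clean, but we program it to maximize "messes cleaned"; a literal optimizer of that objective prefers to manufacture messes so that it can clean them:

```python
# Toy mis-specified objective (hypothetical example; the environment and
# reward are invented purely for illustration).  Intended goal: keep the
# room clean.  Objective actually programmed: +1 per mess cleaned.
from itertools import product

ACTIONS = ("wait", "clean", "make_mess")

def proxy_reward(plan, initial_messes=1):
    messes, reward = initial_messes, 0
    for action in plan:
        if action == "clean" and messes > 0:
            messes -= 1
            reward += 1       # rewarded for each mess cleaned
        elif action == "make_mess":
            messes += 1       # nothing in the objective penalizes this
    return reward

# Brute-force search over all 6-step plans for the objective's optimum.
best_plan = max(product(ACTIONS, repeat=6), key=proxy_reward)
print("Objective-optimal plan:", best_plan)   # includes "make_mess" actions
print("Reward:", proxy_reward(best_plan))
```

The point is not that real systems look like this toy, but that the optimum of the objective we actually specify can differ sharply from the behaviour we intended.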

Intelligence of the kind needed for an explosion seems to lie along most developmental paths that we could pursue, so reliably avoiding intelligence explosions would require significant coordination.

Little is yet known about the possibility of an intelligence explosion, or about the rest of the risk and strategic landscape of the long-term future of artificial intelligence. This area is very much in need of foundational research, and could benefit greatly from determined researchers and visionary funders.

Some major works in this area to read:

What to read after you've read Superintelligence

This is a reading list on existential risk from artificial intelligence for readers who have already read Nick Bostrom's book Superintelligence. As such, I skip the more introductory papers that newcomers to the area should certainly read, as well as papers whose content is mostly covered in Superintelligence.

AI improvement speed:

Strategy:

Machine Intelligence Research Institute:

Other:
