Daniel Dewey

Contact me: Twitter, Email

My name is Daniel Dewey. I'm currently an independent consultant and researcher on long-term AI safety, based in Portland, OR. I was previously a research fellow at the Programme on the Impacts of Future Technology at the Future of Humanity Institute and Oxford Martin School, a software engineer at Google in Seattle, and a student researcher at Intel Labs Pittsburgh during my undergraduate studies at Carnegie Mellon University.

My research

The societal challenges posed by modern machine learning are significantly different from those posed by past developments in AI; likewise, we will probably face qualitatively different problems when AI systems are much more capable than they are today. One foreseeable challenge is the superintelligent agent control problem: the problem of ensuring that AI agents that far surpass human capabilities in most domains do not act in ways that cause catastrophic harm – not through the emergence of "intentionality" or human-like motivation, but as a side effect of highly competent pursuit of the agent's built-in goal. Though it is implausible that superintelligent agents will be built in the next few decades, some work to prepare for this development seems prudent, and some initial work is already underway.

For an introduction to this area, along with an initial reading list, I'd recommend my post at the Global Priorities Project: Three Areas of Research on the Superintelligence Control Problem.

A few of my papers on this topic: