The term Moloch has been used as a metaphor for misaligned incentives in AI development.

Peter Clarke, a researcher at the University of Edinburgh, tweeted that Moloch is a well-tuned metaphor for misaligned incentives and that it has sparked some fascinating discussions¹. Daniel Schmachtenberger and Liv Boeree have also discussed the game theory and exponential growth underlying our modern economic system, and how recent advancements in AI are poised to turn up the pressure on that system².

  1. Peter Clarke on Twitter: “Is Moloch the Right Metaphor for AI?” https://twitter.com/HeyPeterClarke/status/1645456632036618240
  2. “Misalignment, AI & Moloch | Daniel Schmachtenberger and Liv Boeree.” YouTube. https://www.youtube.com/watch?v=KCSsKV5F4xc
  3. “Liv Boeree on Moloch, Beauty Filters, Game Theory, Institutions, and AI.” Future of Life Institute Podcast. https://futureoflife.org/podcast/liv-boeree-on-moloch-beauty-filters-game-theory-institutions-and-ai/

AI Alignment Forum – Moloch

This page is about Moloch, the personification of the forces that coerce competing individuals into taking actions that, although locally optimal, ultimately leave everyone worse off.

Moloch’s Toolbox by Eliezer Yudkowsky

The title “Moloch’s Toolbox” references the concept of Moloch as a metaphor for misaligned incentives. In it, Yudkowsky argues that the standard toolbox of reusable concepts for analyzing systems is inadequate for understanding the causes of civilizational failure.

Who is Eliezer Yudkowsky?

Eliezer Shlomo Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence¹. He was one of the earliest researchers to analyze the prospect of powerful artificial intelligence².

  1. “Eliezer Yudkowsky.” Wikipedia. https://en.wikipedia.org/wiki/Eliezer_Yudkowsky
  2. “The Only Way to Deal With the Threat From AI? Shut It Down.” Time. https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
  3. Eliezer Yudkowsky (@ESYudkowsky) on Twitter. https://twitter.com/ESYudkowsky
  4. Eliezer Yudkowsky on LessWrong. https://www.lesswrong.com/users/eliezer_yudkowsky