Author: Nick Bostrom
Editor’s note: This superb analysis by one of the world’s clearest thinkers tackles one of humanity’s greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn’t become the last?
Nick Bostrom is a Swedish-born philosopher and polymath with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is a Professor at Oxford University, where he leads the Future of Humanity Institute as its founding director. (The FHI is a multidisciplinary university research center; it is also home to the Center for the Governance of Artificial Intelligence and to teams working on AI safety, biosecurity, macrostrategy, and various other technological and foundational questions.) He is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller that helped spark a global conversation about artificial intelligence. Bostrom’s widely influential work, which traverses philosophy, science, ethics, and technology, has illuminated the links between our present actions and long-term global outcomes, thereby casting a new light on the human condition.