Apart from his comedic, dramatic, and literary endeavors, Stephen Fry is widely known for his avowed technophilia. He once wrote a column on that theme, “Dork Talk,” for the Guardian, in whose inaugural dispatch he laid out his credentials by claiming to have been the owner of only the second Macintosh computer sold in Europe (“Douglas Adams bought the first”), and never to have “met a smartphone I haven’t bought.” But now, like many of us who were “dippy about all things digital” at the end of the last century and the beginning of this one, Fry seems to have his doubts about certain big-tech projects in the works today: take the “$100 billion plan with a 70 percent risk of killing us all” described in the video above.
This plan, of course, has to do with artificial intelligence in general, and “the logical AI subgoals to survive, deceive, and gain power” in particular. Even at this relatively early stage of development, we’ve witnessed AI systems that seem to be altogether too good at their jobs, to the point of engaging in what would count as deceptive and unethical behavior were the subject a human being. (Fry cites the example of a stock-market-investing AI that engaged in insider trading, then lied about having done so.) What’s more, “as AI agents take on more complex tasks, they create strategies and subgoals which we can’t see, because they’re hidden among billions of parameters,” and quasi-evolutionary “selection pressures also cause AI to evade safety measures.”
In the video, MIT physicist and machine-learning researcher Max Tegmark speaks portentously of the fact that we are, “right now, building creepy, super-capable, amoral psychopaths that never sleep, think much faster than us, can make copies of themselves, and have nothing human about them whatsoever.” Fry quotes computer scientist Geoffrey Hinton warning that, in inter-AI competition, “the ones with more sense of self-preservation will win, and the more aggressive ones will win, and you’ll get all the problems that jumped-up chimpanzees like us have.” Hinton’s colleague Stuart Russell explains that “we need to worry about machines not because they’re conscious, but because they’re competent. They may take preemptive action to ensure that they can achieve the objective that we gave them,” and that action may be less than impeccably considerate of human life.
Would we be better off just shutting the whole enterprise down? Fry raises philosopher Nick Bostrom’s argument that “stopping AI development could be a mistake, because we could eventually be wiped out by another problem that AI could’ve prevented.” This would seem to dictate a deliberately cautious form of development, but “nearly all AI research funding, hundreds of billions per year, is pushing capabilities for profit; safety efforts are tiny in comparison.” Though “we don’t know if it will be possible to maintain control of super-intelligence,” we can nevertheless “point it in the right direction, instead of rushing to create it with no moral compass and clear reasons to kill us off.” The mind, as they say, is a fine servant but a terrible master; the same holds true, as the case of AI makes us see afresh, for the mind’s creations.
Related content:
Stephen Fry Explains Cloud Computing in a Short Animated Video
Stephen Fry Takes Us Inside the Story of Johannes Gutenberg & the First Printing Press
Neural Networks for Machine Learning: A Free Online Course Taught by Geoffrey Hinton
Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the Substack newsletter Books on Cities and the book The Stateless City: a Walk through 21st-Century Los Angeles. Follow him on Twitter at @colinmarshall or on Facebook.