Ex Machina

I’m always a year late to comment on recent films, but I wanted to throw in that “Ex Machina” might be the smartest science-fiction film I’ve ever seen, for two (spoilerish) reasons, below:

  1. It takes a super well-known test in technology and philosophy — the Turing Test, which holds that a machine can be judged intelligent if its performance in a series of interactions is indistinguishable from a human's — and realizes that even the most clinical application of such a test might be rich dramatic ground. It is riveting to watch a fully informed, skeptical human observer interact with a machine in a way that convinces him, and by extension us, the viewers who share his perspective, that the machine may be every bit as alive as we are. To reach this point, the observer has to become mentally entangled with the machine, has to become invested in the machine's well-being, has to realize against all his better judgment that the machine has come to mean more to him than even another human standing next to him in the room. The film doesn't fuss this up with a bunch of extraneous stakes — the test may extend outside the interaction room, but every element of the film is more or less still the test. And being "indistinguishable" from a human means more than getting jokes and pausing self-reflectively, the movie insists. The clinical application of such a simple, well-known test, taken to its logical ends, has to undress the value of being human.
  2. It takes a completely separate, rich question in technology and science — when and how AI will overtake us — and suggests the answer is buried in its treatment of Question One. Technophiles have been collectively freaking out about this issue for years — billionaire computing moguls invest spooky sums of money in efforts to understand and control the eventual AI uprising (are they not just a little self-flattering in their alarm, you have to wonder). Thinkers like Nick Bostrom, whose "Superintelligence" was one of the most widely read speculative science books on the subject in recent years, spend pages wondering how machines might violently usurp a species that can no longer understand or anticipate their methods. If we simply tell an AI to make as many paperclips as it can, Bostrom speculates, who is to say that it won't eventually devise some hitherto unimaginable means to escape its confinement, take to the streets, and start cutting up human bodies to extract the trace metals inside that could become paperclip components? (Bostrom develops this idea into a far more plausible nightmare scenario over many chapters, but that is the gist.) "Ex Machina" reframes the question in a visionary way: instead of worrying about the violent uprising of machines against humans who can't understand their superintelligence well enough to stop it, maybe we should be concerned by an essentially peaceful scenario. If a machine could convince us that it was fully conscious, with all the relative moral value of a human, would it even have to overtake us violently? Might we more or less make way for it, understanding the entire time how it operates, because we cease to care that it is not human? Is that not the ultimate end of the Turing Test — a machine that carries all the moral weight of an organic human being in our eyes, and might make a transparent, compelling argument to inherit the earth?