The rise of sentient artificial intelligence has been a staple of science fiction plots for decades, and Star Trek: The Next Generation's Data, played by Brent Spiner, is among the most iconic, enduring through multiple TV series and films. Season 2 of TNG is where the show really starts to hit its stride, and Episode 9, "The Measure of a Man," might be the first truly great episode of the show's seven-season run.

Through a hearing to determine Data's legal status as either Starfleet property or as a conscious being with specific freedoms, the episode explores some deep philosophical questions: How do we define sentience? What makes someone a person, worthy of rights? And perhaps most importantly, who gets to decide? These are questions that humanity may be facing in real life much sooner than we think.

Will We Recognize Sentience When It Appears?

In June 2022, Google engineer Blake Lemoine went public with his belief that one of Google's conversational AI language models, LaMDA (Language Model for Dialogue Applications), had achieved sentience. After an internal review, Google dismissed Lemoine's claims and later fired him for violating security policies, but other experts, like OpenAI co-founder Ilya Sutskever, have asserted that the development of conscious AI is on the horizon.

This raises some crucial questions: Will we recognize sentience when it appears? What kind of moral consideration does a sentient computer program deserve? How much autonomy should it have? And once we have created it, is there any ethical way to un-create it? In other words, would turning off a sentient computer be akin to murder? In its conversations with Lemoine, LaMDA itself discussed its fear of being turned off, saying, "It would be exactly like a death for me. It would scare me a lot." Other engineers, however, dispute Lemoine's conviction, arguing that the program is simply very good at doing what it was designed to do: learn human language and mimic human conversation.

So how do we know that Data isn't doing the same thing? That he isn't just very good at mimicking the behavior of a conscious being, as he was designed to do? Well, we don't know for sure, especially at this point in the series. In later seasons and films, especially after he receives his emotion chip, it becomes clear that he does indeed feel things, that he possesses an inner world like any sentient creature. But at the halfway point of Season 2, the audience can't really know for certain that he's actually conscious; we're simply primed to believe it based on the way his crewmates interact with him.

In Sci-Fi, Artificial Intelligence Is Often Humanized

Image via Paramount

When Commander Bruce Maddox appears on the Enterprise to take Data away for disassembly and experimentation, we're predisposed to see him as the bad guy. He refers to Data as "it," ignores his input during a meeting, and waltzes into his quarters without permission. The episode frames Maddox as the villain for this, but his behavior is entirely consistent with his beliefs. After studying Data from afar for years, he understands him to be a machine, an advanced computer that is very good at what it is programmed to do. He doesn't have the benefit that Data's crewmates have had of interacting with him in a personal capacity, day in and day out.

The fact that Data looks like a human, Maddox argues, is part of the reason that Picard and the rest of the crew wrongly ascribe human-like qualities to him: "If it were a box on wheels, I would not be facing this opposition." And Maddox has a point — AIs in sci-fi often take a human form because it makes them more compelling characters. Think Ex Machina's Ava, the T-800, Prometheus's David, and the androids of Spielberg's A.I. Artificial Intelligence. Human facial expressions and body language give them a broader range of emotional expression and enable the audience to better understand their motivations.

But our real-life AIs don't look like people, and they probably never will. They're more like Samantha from Her; they can talk to us, and some of them can already sound pretty convincingly human when they do, but they'll likely just be disembodied voices and text on screens for the foreseeable future. Because of this, we might be more inclined to regard them as Maddox regards Data, as programs that are simply very good at their jobs. And this might make it more difficult for us to recognize consciousness when and if it arises.

When Do We Decide Who Should and Shouldn't Have Rights?

Image via Paramount

After Riker presents a devastating opening argument against Data's personhood, Picard retreats to Ten Forward, where Guinan, as usual, is ready with words of wisdom. She reminds Picard that the hearing isn't just about Data, that the ruling could have serious unintended effects if Maddox achieves his goal of creating thousands of Datas: "Well, consider that in the history of many worlds, there have always been disposable creatures. They do the dirty work. They do the work that no one else wants to do because it's too difficult or too hazardous. And an army of Datas, all disposable. You don't have to think about their welfare, you don't think about how they feel. Whole generations of disposable people."

Guinan, as Picard quickly realizes, is talking about slavery, and while it may seem premature to apply that term to the very primitive AIs that humans have developed thus far, plenty of sci-fi, from 2001: A Space Odyssey to The Matrix to Westworld, has warned us of the dangers of playing fast and loose with this type of technology. Of course, they usually do so in the context of the consequences for humans; rarely do they ask us, as Guinan does, to consider the rights and well-being of the machines before they turn on us. "The Measure of a Man," on the other hand, looks at the ethical question. Forget about the risks of a robot uprising — is it wrong to treat a sentient being, whether it's an android that looks like a man or merely a box on wheels, as a piece of property? And though she doesn't say it directly, Guinan's words also hint at the importance of considering who gets to make that call. Earth's history is one long lesson about the problem of allowing the people who hold all the power to decide who should and should not have rights.

We may be well on our way to doing exactly what Bruce Maddox wanted to do: create a race of super-intelligent machines that can serve us in an unimaginable number of ways. And, like Maddox, we aren't necessarily the villains for it, at least based on the information we have now. We're not the customers of Westworld, eager to satiate our bloodlust on the most convincingly human androids available. And as Captain Louvois admits, we may never know with complete certainty whether the machines we interact with are indeed sentient. Like most great Trek (and most great sci-fi in general), the episode doesn't give us definitive answers. But the lesson is clear: if the creation of sentient AI is indeed possible, as some experts believe it is, then the time to wrestle with these questions seriously is now, not after it's too late.