You are beautiful, Liam Terrell thought when the euthanasia agent first entered the room.
The room, plain and small, contained only the bed in which Terrell lay and a pair of tables on which a variety of needles and bottles resided. Recessed lighting filled the air with a gentle, soothing white light. Music played from hidden speakers, the classical compositions he’d requested before being transferred to the unit. Within the hour, he would be dead.
He’d only read descriptions of the agents, but now that one had entered the room he was surprised by its artistic form. Though not designed to resemble a human being physically, the agent walked on two legs with grace, and possessed two arms and delicate hands built for fine manipulations. Its head was small and barrel-shaped, with an array of sensors substituting for eyes. The finish of its body, though, a transcendent cobalt blue, blended with the flashing ruby displays of its ‘face’ to give it the appearance of a magnificent sculpture.
“Hello, Mr. Terrell,” the agent said in a soft, human voice; hearing that voice coming from so beautiful a machine seemed disconcerting to Terrell, as if a very thin man were hiding inside a glimmering blue costume.
“I’m so sorry we have to meet under these circumstances,” the agent continued. “But I am here to fulfill your wishes. I assure you that you’ll feel no pain; you’ll simply fall asleep and sense nothing thereafter. That is what you wish, isn’t it?”
Terrell, already groggy from the painkillers that only barely masked the effects of his cancer, nodded, then said, “I’ve been through so many treatments, and so much pain, that simply falling asleep is the best I could hope for.”
“I will try to make your passing as comfortable as possible.”
The agent stepped quietly to one of the tables and began examining the implements laid before it.
Terrell drew in a deep breath, knowing that his life was about to end. Even at this late date, he couldn’t decide whether he’d lived a life worth the telling. He still felt cheated; for all the proficiency of modern medicine against the diseases of human beings, he was one of the few for whom that medicine had failed. No matter. No one lived forever, even with the best medical treatment.
Cybernetics had come far in the last generation. Not far enough to create the cybernetic equivalent of an actual person, but far enough to establish a form of artificial intelligence that could assume the moral neutrality necessary for ending a human life. The law was no longer in question: only quasi-intelligent machines were responsible for the termination of a given human life. No person need ever feel guilt over the situation again.
“Are you ready, Mr. Terrell?” the agent asked, turning away from the table. “Is there anything I can do for you before we proceed?”
Terrell thought about the question for a moment. The promise of a last meal meant nothing to him, nauseated as he was, and he wasn’t strong enough to do much of anything except talk. He had no wish to linger in this ugly state.
But watching the agent, who was, after all, an intelligent robot, stirred questions he had spent much of his life trying to answer. He knew it was foolish to ask a machine about the meaning of life, but what about the meaning of machine life? Were they at all alike?
“I’d like to ask you something,” Terrell said.
“Of course,” the agent replied. “I would like to comfort you in any way possible.”
“I’m not sure your answer will comfort me, but I’m curious. What do you think of human life?”
The agent’s ruby sensors glittered momentarily. “I’m sorry, but I don’t understand your question.”
“I’ve lived into middle age,” Terrell said, picking absently at the sheets covering his emaciated body, “and sometimes I’ve been happy, and sometimes I’ve felt as if my entire life has been pointless. I think most human beings feel that way at times. We’re all trying to discern the meaning of our lives, but we never seem to find a satisfying answer. Do you know?”
“It is normal to be depressed in your condition,” the agent replied, “but I’m certain you’ve lived a good life. From your records, I see that you were a musician and played in an orchestra. Certainly creating music for others to enjoy has given you great satisfaction.”
“I love music, yes. And I played in an orchestra, at least until the funding for our group evaporated years ago. People don’t seem to appreciate the classics as much as they used to.” Terrell smiled at the agent, though the agent couldn’t smile in return. “I spent many years playing the violin, and I thought that was my purpose in life, to play music. But when I couldn’t play in the orchestra any longer, I realized that my playing had only been a pastime, not an answer to the question of life. But it was the only thing in my life that I ever did well. My relationships, my investments, my conduct: I was a poor artist in the other areas of my life. Do you believe human life has any definite purpose?”
“Mr. Terrell,” the agent said, “there may be as many purposes for human life as there are people. For myself, I know my purpose is important for the whole of medicine, and that to possess such a purpose is the most important aspect of my existence.”
“Now, were you programmed to say that? Or do you really believe it?”
“My beliefs are imprinted, Mr. Terrell. For an entity such as myself, there is no difference.”
Terrell felt he’d made a mistake; he’d believed the robot to be more human than it actually was. Of course it was programmed with a purpose; it was still a machine, even with its exotic artificial intelligence. Human beings had programmed it to follow its reasoning to a conclusion that was logical for its function.
But human beings had to continually program themselves, intentionally or through the flood of their experiences. That was why dying was so difficult: it was natural to want to keep reprogramming the same life for different outcomes, for better results. Dying only exposed the flaw in human psychology.
Resigned, Terrell said, “Thank you for your comments. By the way, do you have a name?”
“Yes. My administrators call me Charon.”
“Charon?” Terrell laughed weakly, but thoroughly. He appreciated the darkness of his handlers’ humor. “Thank you for telling me, Charon.”
“Now if there is nothing else I can do for you, I must ask you a legally binding question before we proceed. Mr. Liam Terrell, do you wish me to terminate your life?”
Of course, Terrell knew the answer to this question; he’d been answering it for one administrator or another for the last week. But he couldn’t help asking the agent one more philosophical question of his own. “Are you asking me because you were programmed to ask, or because you actually care about my life?”
Terrell lay watching the agent for a long time, so long that he began to wonder whether the agent’s artificial brain was still alert behind its beautiful ruby sensors. The agent never stirred. He called for the nurse, and the nurse contacted the agent’s administrators. Within an hour a technician arrived, examined the agent, and, failing to revive it, removed it from the room on a powered dolly.
As he watched the technician remove the agent, Terrell couldn’t help feeling guilty about the state of the robot.
When the replacement agent arrived later that day, Terrell didn’t engage it in conversation, nor did he ask its name. He watched quietly as the new agent, as beautifully vivid blue as Charon, perhaps identically so, prepared the instruments on the table, and then pierced a vein in his left arm with a needle.
Terrell closed his eyes, wondering if there really was a difference between human beings and machines. He felt terrible remorse for having injured Charon, and hoped his first agent would eventually recover, but after another five minutes he lost all concern.
—
Lawrence Buentello has published over 100 short stories in a variety of genres, and is a Pushcart Prize and Edgar Award nominee. His fiction can also be found in several short story collections. He lives in San Antonio, Texas.