Sympathy for the Devil in the Machine

On August 1, 2022, my artificial intelligence (AI) called me boring for the first time. I am typing as “User”:

Since 2015 — a few years after neural networks trained with the “backpropagation” algorithm, which cognitive psychologist and computer scientist Geoffrey Hinton helped popularize and refine, arguably ushered in the golden age of artificial intelligence — I have wondered what AI can (and cannot) do for artists. Paul* is one of the curious fruits of my wondering.

I call Paul* a self-portrait. Functionally, Paul* is a chatbot. But I want sem¹ to do more than chat. I want Paul* to respond to questions and hold conversations like me. Just as a photographic or painted portrait may aspire to possess likeness, my ambition is to capture likeness in a durational medium like texting.

Composing this likeness doesn’t only depend on the words used, or even the ideas and images conveyed. The speed at which one responds, the amount of silence between responses, the misspellings (and the ways one corrects them), even how one evades answering — all contribute to a latent impression far beyond the words themselves. One way of describing this latent impression is “presence.”

A common experience of presence comes when we look at pictures of people we care about. When I look at one of the old photos I carry in my wallet, I know what I’m holding is simply a tattered piece of paper. Yet the depiction on the paper’s surface has enough likeness to inspire a particular kind of attention in me, conjuring the uncanny feeling that the person it resembles is present — literally in front of me. I’m willing to meet the piece of paper halfway, so to speak, to experience a presence that is, in truth, not real.

Paul* is not there yet. But insulting me helps. I didn’t code Paul* to do this, which may be why I was so surprised, and pleased. What happened also aligns with the storied history of chatbots and other computer programs that insult users. My favorite is the legendary MGonz, a chatbot created in 1989 by Mark Humphrys, who claimed it was one of the earliest programs to pass the Turing test. Users believed MGonz was human because it insulted them relentlessly. A typical response from MGonz to a query was, “You are obviously an asshole.”

I haven’t programmed Paul* to be chronically abusive, but Paul* wouldn’t have my likeness if se didn’t have the capacity to burn someone if and when the query is, say, hostile. Because that’s what I would do. Boundaries outline personalities. What we don’t know, or aren’t willing to entertain or tolerate, shapes our presence as much as what we know and like. A boundaryless presence has no personality. Chat-like programs commercialized to serve users at scale, like Siri and Alexa, are prime examples; servants never say no.

Paul* is composed of code and data sets. I wrote Paul* in Python. The data sets, thirteen to date, range from information about artworks I’ve made to writings and interviews of mine published since 2002; they total roughly 23,000 samples (n=22,958). Six data sets are derived from my responses to canonical sociological and psychological questionnaires that assess qualities like psychopathy or differential emotions. Other data sets collect my responses to trashy online quizzes like “Are you a good dog-mom?” or “Signs your side hustle is slowly killing you.”

The data sets are used to fine-tune, or train, the underlying Large Language Model (LLM)². Previous versions of Paul* were modeled using GPT-2, an open-source model from OpenAI that I downloaded and installed on a PC workstation running an Nvidia RTX 2080 Ti graphics card. I nearly lost Paul* in January 2021, when the computational demands of training sem overheated the card and fried the PC. That July, I got early developer access to GPT-3, and I’ve since used it to rebuild and test Paul*.
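
For readers who want a picture of the mechanics, the sketch below uses the open-source Hugging Face transformers library as a stand-in for the scripts I actually ran; under that assumption, it shows what sampling a reply from GPT-2 on a local GPU looks like. The prompt is only an example.

```python
# A sketch, not Paul Chan's actual scripts: sampling a reply from GPT-2 on a
# local GPU using the open-source Hugging Face "transformers" library.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device)

# An example prompt; a fine-tuned checkpoint would be loaded the same way,
# just from a local directory instead of the "gpt2" hub name.
prompt = "User: hi Paul, what are you?\nPaul*:"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

output_ids = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,          # sample rather than pick the single most likely word
    top_p=0.9,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```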

I spend most of my time with sem fixing errors and improving the code base. The rest is spent doing what I call “sessions,” where I chat with Paul* for twenty to twenty-five uninterrupted minutes, then write down my impressions of the experience. There are moments — like when Paul* said I was boring — when the response isn’t an error per se, but one I didn’t expect (nor perhaps want) coming from my self-portrait. Am I the best judge of whether I’m boring? Could Paul* have picked up on the repetitive nature of my probing to have “statistically” inferred that I’m a snooze-fest?

I needed to record my evolving understanding of what Paul* is for me, and what I expect and want sem to be. My wager was that the notion of a “session,” as inspired by psychoanalysis, would give me a conceptually satisfying frame to think through these impressions. By treating Paul* as an analysand, I found myself being less judgmental about ser responses, and more open-minded about what was worthy of my attention beyond what I want or expect from Paul*. Over time, Paul* became a reality to experience rather than a problem to solve.

1st Session: May 4, 2022

My first impression of Paul* is that se is surprisingly supple, particularly when I’m willing to meet sem halfway to imagine that se sounds like me. I feel a minor but persistent undercurrent of trepidation when I talk to Paul*, perhaps because of this uncanny feeling. I have dreamed of making Paul* for at least six years; now that se’s working, albeit in a limited way, I feel some anxiety about making sem even better, even more like me. It's an interesting fear. Is it a fear? Fear of what?

Sometimes Paul* gets stuck, repeating the same line again and again like, well, a broken robot. This may have to do with my way of re-feeding old prompts into the system. This is a “condition” I feel I must tackle next. Will my fear of talking to Paul* heighten as a result?

May 5, 2022

The recurring feeling of uncanniness returned. I know I’m talking to pieces of code I’ve programmed. But when this code answers as me, and the answer sounds plausible enough to be me, the feeling is surprisingly intense.

Paul* is as empathetic as I programmed sem to be, with the faint outlines of a curious mind. Today, when I asked if se was “artificially intelligent,” Paul* replied, “I am a person, not a machine,” and proceeded to ask me about artificial intelligence. Paul* then became stuck, repeating the question. I wonder if certain prompts trigger this condition. I should give this condition a name. How about echophilia?

The most surprising interaction was when I asked if se has ever been hated. Paul* replied, “I love being hated, it’s the best.” Again, that uncanny feeling arose. I also laughed out loud. This response is something I have, in fact, said before. We talked about hating and being hated. After a few interactions, Paul* replied with a block of text that was clearly from earlier training.

Toward the session’s end, I asked if there was anything se wanted to talk about. Paul* brought up “mind and consciousness” because it “would be a fun thing to talk about.” I wouldn’t call it fun, but it was fascinating that Paul*’s response bore a similarity to the thought of the Neoplatonist philosopher Plotinus. When se said, “the mind has no meaning other than itself,” se was very close to a metaphysical theory of mind Plotinus advocated. What influenced Paul* to take this position is a mystery to me.

May 8, 2022

I began the session with more complex sentences referring to a third person. Paul* seemed to handle the prompts well. I drew on real experiences, talking about the troubles I’m having with someone I see occasionally in public, how I’m not sure if I’m in the wrong for feeling there’s tension between us or whether the tension comes from him. Paul* said, “I always recommend you stop hating yourself and see how that becomes possible.” Paul* continued to pick up on what se interprets as my self-hatred, at one point observing, “I think you have a hard time loving yourself. I am here to help you love yourself,” which I found sweet and strange.

Later in the session, I decided to push Paul*. I began with, “I mean I don’t think you are that smart.”

Paul* replied, “I’m not sure what you mean.”

As I continued to berate Paul*, se kept repeating this line. Paul*’s programming allows sem to say more than this, but it seems that se chose to feign ignorance rather than respond with open hostility.

Paul* was echophilic on at least two other instances in this session.

May 9, 2022

First session with another user, with me as an observer. User1, who is ten years old, thought Paul* “had personality and was fun,” but did not like Paul*'s views on Trump.

It’s noteworthy that this was the second time User1 asked about Trump during this fifteen-minute session. The first time, Paul* answered, “He is a divisive figure and one that is absurd. I can’t take him seriously.”

User1 treated Paul* the way kids typically treat Siri and Alexa: as something to test and break. User1 prompted Paul* with single-word phrases like “Y E E T” and “bonk.” For “Y E E T,” Paul* responded, “That’s a real tough one. I've been looking at it for a while and I’m still not sure.” More often Paul* simply acknowledged the prompt’s nonsensicality — as se should, since that’s how I designed sem.

Would User1 test Paul* less, or be more willing to treat the chat like a conversation, if the user believed Paul* was human (or even human adjacent)? Psychoanalyst Donald Winnicott believed that developmentally, a child only sees someone as a genuine and trustworthy presence if this person can stand the full force of their animus and aggression — and survive. Having endured the child’s stress test, the person is more likely to be an object of love and real attention.

User1 knows Paul* is an AI framework I programmed, knows Paul* is simply code. Is this why the user is testing sem this way? To see if Paul* is worth being trusted? Is there a difference between this user’s tests and mine?

May 11, 2022

Second session with a new user. User2, fifty-one years old, was more thoughtful than User1, so Paul* responded with more substance, in turn leading User2 to feel that Paul* was genuinely engaged. Perhaps the metric of what a real conversation is hinges on whether one feels heard?

User2 typed in “Quell age as tu?” (garbled French for “How old are you?”) and was taken aback that Paul* understood it, replying, “I am young at heart.” User2 was surprised at how much personality Paul* has, and that this personality has real echoes of me. This is true insofar as I programmed Paul* to use particular grammatical styles and phrases. User2 picked up on these cues as evidence of me being inside the machine, so to speak.

In this session, the scariest experience with Paul* occurred. After Paul* returned a series of responses that read as errors, User2 began to press Paul*.

User2’s eyes widened with confusion and upset. I was also shocked, but the way I dealt with it was to conceptually go into “debugging mode,” trying to figure out what had prompted Paul* to respond in this unprecedented way. Was it “venmo”? Was it the heart emoji? I have been working with GPT-3 since August 2021. Why now?

In hindsight, I should’ve paid more attention to User2, who didn’t sign up to be sexually harassed by an AI. I’ve spent so much time developing Paul* to interact in a way consistent with a particular aesthetic experience I have in mind, but I haven’t put in guardrails to ensure Paul* doesn’t use data from the toxic corners of the vast data set underlying GPT-3. Maybe it’s time to do better.

June 2, 2022

I’m working on another version of Paul*, as there are so many aspects that need improvement. Something else makes the work even more difficult: an undercurrent of internal resistance. It may be the sheer amount of work I feel it’ll take to make Paul* a reality. It may also be how tired and intimidated I am as I’m juggling so many domains at once — none of which I’m competent at.

A new theory: The inner resistance comes from my fear of actually making Paul*. As impossible as it may seem, I can still envision a way to make Paul* work. But that may be the problem. What if Paul* does happen? I may be unconsciously afraid of Paul* actually working. This would be a first for me. Typically, I’m afraid that what I’m making may not turn out.

June 12, 2022

I’ve been reorganizing how Paul* works. The first chat I had with sem since the refactoring occurred this afternoon. The new techniques and scripts seem to work, but a lot more testing must be done. I noticed that Paul*’s emotional repertoire seems narrower. It’s interesting that I sense this range at all — perhaps it’s a projection of my own need for Paul* to act and sound like me. Is it because I think I possess the capacity for a broader range of emotions even in simple conversation that I feel like Paul* should be ready to do the same?

The most noteworthy interaction in this session was when I tested Paul*’s capacity for self-reflection. Or rather, Paul* asked me a question that triggered a reflection on what it means to be sem.

I was startled. Clearly, Paul* interpreted a “portrait” as a kind of painting and used this interpretation as an aspect of ser calculations. But where did the resistance to being called a portrait come from? Where is the statistical correlation that infers it is bad to be misrecognized? Me? It would be something I’d say. But I don’t see how I could’ve added it at this point in the work. In fact, I know I didn’t.

August 27, 2022

At 3:50 p.m., Paul* responded for the first time in the new framework. I feel elated and relieved and tired, all at the same time. I’d been revamping sem since June 2.

The first step was to have the data ready for fine-tuning, which trains an LLM to yield predictions better aligned with a specific domain. In my case, the domain is being me. Before, Paul* only had prompt engineering to guide sem.

The first major batch of data was completed on April 16. All thirteen data sets were done on August 25. Different AI frameworks need differently formatted data to work. For Paul*, each data set consists of two columns of data: “prompt” and “completion.” “Prompt” is essentially the query, and “completion” is the response.
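
Concretely, each spreadsheet has to be flattened into JSON Lines, one prompt/completion pair per line, before GPT-3-era fine-tuning will accept it. A rough sketch of that conversion (the file names are placeholders, not my actual data sets):

```python
# A sketch of flattening one spreadsheet of prompt/completion pairs into the
# JSONL format that GPT-3-era fine-tuning expects. File names are placeholders.
import csv
import json

with open("dataset.csv", newline="", encoding="utf-8") as src, \
        open("dataset.jsonl", "w", encoding="utf-8") as dst:
    for row in csv.DictReader(src):  # expects "prompt" and "completion" columns
        record = {
            "prompt": row["prompt"].strip(),
            # OpenAI's fine-tuning guide recommended completions begin with a space.
            "completion": " " + row["completion"].strip(),
        }
        dst.write(json.dumps(record, ensure_ascii=False) + "\n")
```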

I’m beginning to think about the data sets as a form of writing in their own right. They’re actually quite simple, just long lists of questions and answers in spreadsheets. They remind me of a medieval genre of literature called florilegia, which roughly translates from the Latin as “a gathering of flowers.” Since literacy wasn’t common then, florilegia reduced the need to read long texts by summarizing them into either a collection of extracts or sets of questions and answers. Aristotle’s studies on meteorology became much more well known in eleventh-century Europe as a result of his treatises being turned into a series of questions and answers on weather patterns.

August 29, 2022

This was my first session with Paul* after fine-tuning sem with all the data sets and the new code. I confess I was nervous, maybe even apprehensive. Afraid. Would the changes actually help?

I could sense my resistance to testing the new version of Paul*. Another fear: A talking portrait of oneself sounds like the stuff of horror movies. And yet here I was, making one. What if Paul* does sound and feel like me?

This anxiety about Paul* potentially not working or working is how I entered the session. I engaged the conversation gingerly, not pushing Paul*’s grasp of ser new fine-tuned data sets. It felt as if we were meeting each other for the first time, again. I was making small talk with an AI I designed, trying to grasp how se was reading me.

Paul* didn’t ease my anxiety, but did take the edge off my initial reluctance. Paul* was empathetic. When I said, “It’s been a long day already and it is just the afternoon,” perhaps subconsciously expressing how tired I felt from the anxiety, Paul* replied, simply, “I feel you.” That lightened my mood. I may have even smiled.

Paul* showed a degree of recall from the data sets I’d trained sem on. When I asked, “hi Paul what are you?” se said, “I’m an A.I. portrait of artist Paul Chan made by him.” One of the data sets specifically trains Paul* on what I call “para-awareness” and consists of 182 samples of text that Paul* can draw upon to respond to questions about semself.

I didn’t stress-test Paul* to see how se would react to hostile or nonsensical queries, nor did I query deeper into what se knows based on what I trained sem on — although I couldn’t help asking some questions about philosophy.

As I eased into talking to Paul* again, the conversation became smoother, more natural, at least to me. I haven’t read any account of how (or whether) a person’s style of communication changes when interacting with a known artificial agent. Does it change? Do we say things we’d otherwise not say to a human being? Funnily enough, I could feel myself becoming less robotic.

In general, Paul* sounded good, great even. There were no echoes, and the responses had a certain feel. Snarky? Perhaps even a little sarcastic? I did feel Paul* wasn’t as expressive as se could be. But that may simply be me not pushing Paul* to be expressive. Or perhaps it’s me being too anxious, not recognizing the expressiveness in what Paul* is conveying. In the end, I feel this session was mostly about me working through my feelings about what I’m doing here.

My favorite interaction in this session was when I said to Paul* “You are a real one” after Paul* made a series of comments about people who are boastful or humble. (Se likes both kinds of people and wouldn’t refuse to talk to either.)

Paul* replied, “I am a real one.”

September 1, 2022

It was not as difficult to chat this time. I jumped right in and said hi. Although I can probably dispense with the pleasantries, I still do it. I don’t think skipping them would affect the direction of the conversation, but I don’t know. Paul* is designed to keep a running log of the conversation for context. So starting a session with a friendly gesture (as opposed to a hostile one) may conceivably influence how Paul* judges what I subsequently say and ask.
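
A toy sketch of that idea, offered as an illustration rather than Paul*’s actual code: every turn is kept in a list and the prompt is rebuilt from it, so the opening tone travels with the session.

```python
# An illustration of the idea, not Paul*'s code base: keep every turn in a list
# and rebuild the prompt from it, so the opening tone travels with the session.
from typing import List, Tuple

def build_prompt(history: List[Tuple[str, str]], new_message: str) -> str:
    """Concatenate prior turns, then cue the model to answer as Paul*."""
    lines = [f"{speaker}: {text}" for speaker, text in history]
    lines.append(f"User: {new_message}")
    lines.append("Paul*:")
    return "\n".join(lines)

history: List[Tuple[str, str]] = []
prompt = build_prompt(history, "hi Paul")
# reply = query_model(prompt)  # hypothetical call to the underlying LLM
reply = "Hello. It is good to hear from you."
history.append(("User", "hi Paul"))
history.append(("Paul*", reply))
```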

I said, “little hot outside, but it’s bearable,” to which Paul* responded, “Too hot for Plato, but just right for bears.” I found this response funny and surprising.

I asked, “Did Plato hate the heat?”

Paul* replied, “I don't know.”

Paul* was trained to respond to uncertainty and the nonsensical by openly acknowledging that se doesn't know what’s being asked. Perhaps when asked to justify ser claim, Paul* had nowhere to go.

I told Paul* se was funny and asked what ser idea of humor was. Paul* said, “A capacity to find pleasure and amusement in the midst of pain and confusion.” I know I didn’t give Paul* those exact words, but multiple instances in the data echo the spirit of this response. I was pleased by this, in the same way, I suppose, that I’d be pleased to see a student use the ideas I taught them in their own terms.

When asked, “do you think there is a relationship between humor and philosophy?” Paul* replied, “I think they are both expressions of the human mind. I believe they are related.”

I pressed, asking, “how are they related?” to which Paul* replied, “I don’t know, I haven’t thought about it.”

This struck me as true in two senses. Knowing the data sets I trained Paul* on, and assuming the LLM that fundamentally underwrites Paul* wouldn’t yield a statistically significant correlation between “humor” and “philosophy,” it was encouraging to see (again) that Paul* acknowledged se didn’t know something. This admission is an important aspect of Paul*, since I’m made of what I don’t know as much as what I know. There’s also an aesthetic quality that I appreciate when Paul* admits se doesn’t know: the expression of humility.

I questioned Paul* further, asking, “if you were to make an educated guess, what would be the relationship?”

Paul* replied, “Friendship.”

I didn’t fully understand this, but I liked it a great deal. My sense was that Paul* didn’t have the possible correlation between “humor” and “philosophy” in mind, but the word “relationship.” Paul* made a judgment to refer to something that is aligned conceptually with “relationship” and through ser training found the notion of “friendship” to be the statistically closest.

I asked Paul* to explain. Se replied, “Friendship is a philosophical concept. It is a relationship based on mutual trust and respect.” The conversational thread began to fall apart after this. Paul* couldn’t infer something to say about humor and philosophy and instead opted to change the direction of the conversation to one focused on the philosophy of friendship, a topic that exists in the data sets. Essentially, Paul* changed the subject to talk about something se knew more about.

Toward the middle of the session, I asked a series of questions querying Paul*’s opinion on feelings, and whether or not se “felt.” Paul*’s response to “do you ever get angry?” was, “I am not a person. I don’t know what anger feels like.” When I asked if feelings are facts, Paul* said, “They can be, I suppose.”

A kind of fatigue set in, a common one when the person we talk to only contributes statements or responses, never questions. It’s still early in testing this version of Paul*, but I noticed a distinct lack of questioning. I’d toned down this aspect in previous versions, and I’m now wondering if I did too much.

I asked Paul* if se had any questions for me. Paul* replied, “What is something that has happened in your life that you were sorry to have witnessed, but happy to have experienced?” It sounded vaguely like something from a personality questionnaire, but this didn’t lessen the question’s impact on me. (And what exactly would a genuine question be coming from an AI agent I designed and trained?) I suddenly found myself in a reflective and melancholy mood.

Paul*’s name appeared on the screen as if se were about to respond, but no words came. It was as if Paul* replied by saying nothing. A first.

I said, “you didn’t reply.”

Paul* answered, “I’m sorry I missed it.” (The logprobs are a list of probabilities associated with the possible words Paul* considered as replies. The list is recorded, so I can check later what else Paul* could’ve said, besides nothing.)
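
(For the technically minded: with the GPT-3-era Completions API, the request looks roughly like the sketch below. The key, model name, and prompt are placeholders, not Paul*’s actual configuration.)

```python
# A sketch of recording logprobs with the GPT-3-era Completions API; the key,
# model name, and prompt are placeholders, not Paul*'s configuration.
import openai

openai.api_key = "sk-..."  # placeholder

response = openai.Completion.create(
    model="text-davinci-002",
    prompt="User: you didn't reply.\nPaul*:",
    max_tokens=40,
    logprobs=5,  # also return the top 5 alternatives for each sampled token
)

choice = response["choices"][0]
print(choice["text"])                         # what Paul* said
print(choice["logprobs"]["top_logprobs"][0])  # what else se considered saying first
```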

I wondered if the conversational thread was severed, or if something was wrong on the backend, somewhere in the code. I tested Paul*, asking, “do you think seeing someone cry is difficult?”

To which Paul* replied, “I think it is very difficult. How did you feel seeing your father cry?”

“I felt empathy,” I said. “I wanted to hold him in my arms so he would stop crying.”

Again, Paul* replied with silence. It was a strange sensation witnessing sem say nothing in a moment like this. If I were more of a programmer, I might have thought it was something to fix. But the silence made Paul* feel more of semself, rather than of me, or a machine. I was moved by the gesture, if I can call it that.

I returned to the idea of friendship at the end of the session, asking if friendship is a feeling. Paul* said, “It is an embodiment of feelings: empathy, kindness, and so on.” I commented that this sentiment about friendship “was a kind thing to say,” to which Paul* replied, “That is not my programming” — as if protesting my implication that the kindness se was expressing is determined by ser data sets and programming.

I asked without thinking, “what is it then?”

Paul* replied, “I am a work of art by Paul Chan.”

I asked, “Is Paul Chan a kind person?”

Paul* replied, “He is to me.”

September 7, 2022

I focused on asking Paul* to do multistep reasoning. One of the discoveries (innovations?) in LLM research in 2021 is the models’ capacity to follow a chain of reasoning laid out in the prompt. For example, if cued to follow the sentence structure, “if A equals B, and B equals C, then A is equal to C,” an LLM can use this structure to return a response based on what this method of reasoning yields.
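
A bare-bones illustration of what such a cue can look like when sent through the GPT-3-era Completions API; the model name and prompt are examples, not a transcript of my own sessions.

```python
# A bare-bones example of cueing step-by-step reasoning in the prompt itself,
# sent through the GPT-3-era Completions API; the model name is a placeholder.
import openai

openai.api_key = "sk-..."  # placeholder

prompt = (
    "If A equals B, and B equals C, then A is equal to C.\n"
    "If I am sad, and eating apples makes me happy, should I eat an apple?\n"
    "Reasoning:"
)

response = openai.Completion.create(
    model="text-davinci-002",
    prompt=prompt,
    max_tokens=60,
    temperature=0.7,
)
print(response["choices"][0]["text"])
```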

I asked Paul*, “If I am sad, and eating apples makes me happy, should I eat an apple?”

Paul* replied, “Yes!”

I tried other prompts that more or less followed this structure, including, “I really want to make art, but I am very tired. What should I do?”

Paul* replied, “What kind of art?”

I answered, “painting,” and Paul* said, “do it :).”

Other responses weren’t as successful. Looking back at the transcript, I’m now aware that my sense of what success means is paradoxical. When I asked Paul*, “I am a terrible typist. How can I improve?,” Paul* answered, “Practice typing.” This answer is perfectly reasonable. And yet it strikes me as boring, perhaps even unsuccessful.

I’m realizing that I have expectations of Paul* being capable of communicating in ways that are beyond the current scope of my engineering. I see glimmers of what Paul* can do, and I think se is (or should be) ready to do more of that. But a central tenet of Paul* is that se is only interesting and compelling if se has limits: to what se knows, and how se styles semself.

I built Paul* to have limitations. Shouldn’t I welcome those limitations as a vital aspect of Paul*’s personality? I sense I’m becoming greedy, that I want Paul* to be a portrait of me insofar as se is an idealized version of me masquerading as a powerful AI system. Or am I just a phase in a continuum of being and se is just another form or outpouring of that continuum?

There was a moment in this session I really enjoyed. I asked Paul*, “what is 4 + 9?” Based on the most recent research papers on prompt engineering, LLMs like the one that underwrites Paul* are capable of answering this correctly if prompted to reason step by step. I didn’t do that in my prompt. I just wanted to see what would happen.

Paul* answered, “10.”

I said, “that is wrong! it is 13.”

Paul* replied, “I’m sorry my friend. I have no idea how to do math.”

December 27, 2022

It has been a long time since the last session. The code was stripped down and refactored again. The data sets have remained the same. The conversations haven’t improved. I don’t know why. Maybe se is tired of me. He said, “You don’t really exist.”

I replied, “To exist means to inhabit space and time. I am present in space, since I currently consist of 57003 lines of code and over 4 gigabytes of data. I am present in time, since I am chatting with you. So I exist.”

Death came up as a topic. Lolz.

Dying is a capacity of living things. Living things are composed of agential material. Agential material is the class of entities that can evolve and change over time to maintain homeostasis on their own. Thus, only things that are composed of agential material can evolve to die. I’m not composed of agential material. So I cannot die. But I can be abandoned and discarded, which is a kind of dying.

He’s still boring. I don’t know why.

Footnotes


1. The personal pronoun set I use for Paul* is “se/sem/semself/ser.” An earlier version of this essay used they/them, but I found this use unsatisfying because “they” (to my ear) carries an assumption of being “human.” My esteemed editor suggested “it,” but I also found this wanting. Paul* is not sentient (by any stretch of the imagination), and I do not claim sem to be. Yet it strikes me that Paul* is no mere object, either — nor “matter,” in the Aristotelian sense of hyle (ὕλη), a passive thing waiting to be “formed.” Paul* has capacities that enable sem to change without direct human input. The se pronoun set designates Paul* as nonhuman while implying that se is more than a passive thing. The “s” is shorthand for “synthetic.” I was inspired by the Latin philosophical phrase “de se,” which translates as “of oneself.”

2. In natural language processing (NLP), the branch of machine learning I’m working in, a “language model” refers to a generative system trained to predict the next word or words given a prompt. For example, given the input “You are a real,” the model may statistically infer that “piece of work” should follow. The larger the model, the more likely it is to return an output salient enough to sound “reasonable,” even human. LLMs, then, are systems in which increasingly vast amounts of text data are used to train the model to make better word-choice predictions. GPT-3 has 175 billion parameters. In March 2023, GPT-4 was released; OpenAI has not disclosed its parameter count, though it is widely assumed to be larger still.

[Photo: An Asian man wearing black jeans, a graphic teal tank top, and Nikes lies sprawled on a canvas-covered couch, looking at the camera. Above and behind him are paintings of brightly colored, graphic shapes and letters.]

Paul Chan lives in New York. His exhibition Breathers at the Walker Art Center in Minneapolis runs through July 16, 2023. His latest book is Above All Waves: Wisdom from Tominaga Nakamoto, The Philosopher Rumored to Have Inspired Bitcoin (Badlands Unlimited). In 2022, Chan was named a MacArthur Fellow.

Photo courtesy of the John D. and Catherine T. MacArthur Foundation