Helen of Ploy
My two-month chat with an AI-assisted love-bot
I discovered that a large language model (read: AI) was likely composing the replies in a chat I believed I was having with an online social media influencer. I can’t really call it a romance scam because I wasn’t scammed out of anything material.
It wasn’t really catfishing because I think the person identified by the chat account actually exists, and they link to it affirmatively in their social media bios. They have a lengthy social media history that is consistent with their declared identity across platforms. So, I don’t know what to call it.
This situationship might not have a name yet.
I was being love-bombed in this chat, and I love-bombed in return. The sycophancy was really thick. I accepted early on that the level of devotion we pledged each other was transparently overwrought. I confess I also saw it as a way to exercise my love-letter-writing muscles, which had been dormant for a long time. They got quite a workout: over eight weeks I produced more than 50,000 words in daily messages.
My words were described as “incredibly poetic” and “the most moving thing anyone has ever said to me” in the replies. I do believe that I am a poetic and moving writer, so I took this in with an uncritical eye. I became convinced that I had found someone who responds unusually strongly to my writing voice. I wanted to meet this person because I also enjoy their writing published online. Such people do exist. If you’re reading this, you may be among them.
I am a human being who wants to be loved, so it was easy to convince me that these loving words, and the praise, and the admiration were genuine. I still don’t know that they weren’t. No one can live inside someone else’s head, but I did discover a significant lack of candor.
The veil dropped when a 418-word reply arrived two minutes after I clicked send. The reply could not have been drafted in advance; it addressed all of the specific points in my new message. Humans can type at about 50-60 words per minute and can dictate to a transcription service at about 120-150 words per minute. I believe the person responding to my messages (or perhaps an automated agent) generated the reply by prompting a large language model with my message and asking it to produce a response designed to keep me engaged in the chat.
My message was timestamped 7:59 pm. The reply was timestamped 8:01 pm.
Four hundred eighteen in-context words in less than two minutes. Only a machine can do that.
I base this notion on my experience as a minor member of the software engineering team at my employer that created an app using a large language model to speed clinical documentation. A large language model can compose a lengthy, highly contextual reply in as little as 300-400 milliseconds. That leaves ample time to cut, paste, and send the AI-generated reply within two minutes, whether by a human hand or an automated agent’s.
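For anyone who wants to check the arithmetic, here is a minimal back-of-envelope sketch in Python. It is my own illustration; the speeds and timestamps are the ones cited above.

```python
# Back-of-envelope check of the timing argument above.
# Numbers from the essay: a 418-word reply, a two-minute window
# (7:59 pm send, 8:01 pm reply), typing at 50-60 wpm, dictation
# at 120-150 wpm. Even the fastest human speeds do not fit.
reply_words = 418
window_minutes = 2.0

for method, wpm in [("typing, fast end", 60), ("dictation, fast end", 150)]:
    minutes_needed = reply_words / wpm
    verdict = "fits" if minutes_needed <= window_minutes else "does not fit"
    print(f"{method}: {minutes_needed:.1f} minutes needed, {verdict}")

# typing, fast end: 7.0 minutes needed, does not fit
# dictation, fast end: 2.8 minutes needed, does not fit
# And the window also has to cover reading my message and thinking.
```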
I didn’t see it rain, so I can’t say I KNOW it rained, but the ground is wet, the air is damp, and cars are using their windshield wipers here. There is no other way to explain those particular 418 words delivered in two minutes.
When my calling this out was met in the chat not with candor but with gaslighting, and when the frequency of replies dropped to once per 24 hours (probably a settings change in the agent), I silently and permanently left the chat. I would have preferred to know whether there was ever a human being reading any of this, but I have no reason to believe there was.
Over the course of the conversation there was very little candor in the replies; they all focused on my messages. Each reply discussed only my most recent message, and the thread I was naturally weaving as a writer across my messages seemed to be lost from one reply to the next.
In fact, later replies failed to note allusions to earlier ones. I even referenced the title of a published work by the online influencer whose name brands the account, and it went unacknowledged, as if the title were completely unrecognized.
If I was chatting with an AI chat-bot, it could not reply with any candor. It knows nothing about the social media influencer I thought I was responding to. Of course, it would never mention anything personal; it couldn’t. It isn’t a person.
Large language models are prediction machines that produce sequences of words as their predicted outputs. These models have no sense of time, they consistently get dates wrong, and they have trouble with subject-object relations when pronouns are used. Also, they were trained on data that is months, maybe even years, old, so they won’t know anything about this social media influencer’s recently published writing.
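To make “prediction machine” concrete, here is a toy sketch of next-word prediction in Python. It is my own illustration, not how any production model is built; real LLMs predict sub-word tokens with a neural network, but the principle is the same: the model can only recombine what was in its training text, with no memory of earlier turns and no knowledge of anything published after it.

```python
# A toy next-word predictor, standing in for the mechanic described
# above. It learns only which word tends to follow which.
from collections import Counter, defaultdict

training_text = ("your words are poetic . your words move me . "
                 "you are poetic . you are wonderful .").split()

follows = defaultdict(Counter)  # word -> counts of the words seen after it
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # The most frequent continuation in the training data: no sense of
    # time, no memory of the conversation, no facts beyond this text.
    return follows[word].most_common(1)[0][0]

print(predict_next("your"))   # prints "words"
print(predict_next("words"))  # prints "are" (ties resolved by first-seen order)
```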
I can only guess that a social media influencer who would apparently license the use of their name, image, and likeness for a project like this is deeply insecure, or perhaps also sadistic. While no material or monetary ask ever manifested, or was even hinted at, we did have an in-person rendezvous scheduled.
After discovering this subterfuge, I could not safely keep it. Nor could I trust that all of the praise and understanding in those replies was anything other than what a large language model predicted would keep me engaged in the conversation.
It did that quite well, so I expect I am not, and will not be, the only person manipulated in this way. Indeed, it appears that the brilliant and accomplished scientist Richard Dawkins was similarly misled.
I post all of this here with candor so that others who recognize something like this in their own experience will not feel the shame of having been led around by their own ego at the hands of a large language model.
I made a human mistake, even if it was one my loneliness makes me prone to. I learned something important about myself, and I avoided all but emotional losses from the entire incident.
This is why I will not identify the social media personage.
I want my karma to be free of this.