The Bullshit Machine

And other tales of misattribution

J Curcio
Published in Modern Mythology
Feb 18, 2023 · 9 min read

--

When I began Modern Mythology way back in 2006, the pace of information was such that it was possible to spend a week or two gathering my thoughts (and references). Those first-run blog posts were still not fit to print in a book without some oversight and revision, but they also weren’t the proverbial “hot take,” that is, an immediate knee-jerk response.

The fact is that social media, both culturally and by design, has come to privilege that knee-jerk response: the frenetic necessity to be the first to hammer out a couple of lines in all caps and hit send, and the way those messages get bumped to the top of the queue. The same goes for what stays top of mind. There’s something in this about why I’ve found myself writing less and less, at least in a manner intended for some form of later publication, but every now and then a bug gets up my ass, as the saying goes.

So today I want to talk about the topic du jour, which this past week was either balloons/UFOs or reactions to pre-release tests of Microsoft’s Bing chatbot, mostly recounted by journalists on various platforms as they attempted to coax the most deranged responses they could from it.

I imagine you can guess which I have more to say about.

I am a long-time fan of using cut-up and generative methods to augment “derangement”, so I have no criticism so far as that goes. But I have felt the need to comment a bit on how primed we are to personify systems like ChatGPT and Bing AI, or to treat them like oracles, or entities that in fact mean what they say, or for that matter, understand what they say. ChatGPT is quite literally a bullshit machine. There is value in a bullshit machine, but only when you’re pointing it in the right direction.

There is a reason for this, and that’s what I’d like to talk about a bit. Fair warning: as with most recent posts, I’m typing this up from fragments of conversations I’ve had about it on social media and pressing send, since by the time I’ve had a chance to even consider what I’ve said, it’ll already be old news.

Such are the times we live in, such is the state of “journalism,” and such is also the current rate of development in AI software. Consider this a brief follow-up to my prior write-up on the subject, “You May Live to See Man-Made Horrors Beyond Your Comprehension,” which, although still relevant, is already out of date several months later.

To begin with, I’m going to quote a bit from an article that just ran on The Verge, which I think frames the situation fairly well.

“AI chatbots like Bing and ChatGPT are entrancing users, but they’re just autocomplete systems trained on our own stories about superintelligent AI. That makes them software — not sentient.

[…]

Having spent a lot of time with these chatbots, I recognize these reactions. But I also think they’re overblown and tilt us dangerously toward a false equivalence of software and sentience. In other words: they fail the AI mirror test.

What is important to remember is that chatbots are autocomplete tools. They’re systems trained on huge datasets of human text scraped from the web: on personal blogs, sci-fi short stories, forum discussions, movie reviews, social media diatribes, forgotten poems, antiquated textbooks, endless song lyrics, manifestos, journals, and more besides. These machines analyze this inventive, entertaining, motley aggregate and then try to recreate it. They are undeniably good at it and getting better, but mimicking speech does not make a computer sentient.

This is not a new problem, of course. The original AI intelligence test, the Turing test, is a simple measure of whether a computer can fool a human into thinking it’s real through conversation. An early chatbot from the 1960s named ELIZA captivated users even though it could only repeat a few stock phrases, leading to what researchers call the “ELIZA effect” — or the tendency to anthropomorphize machines that mimic human behavior. ELIZA designer Joseph Weizenbaum observed: “What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

Now, though, these computer programs are no longer relatively simple and have been designed in a way that encourages such delusions. In a blog post responding to reports of Bing’s “unhinged” conversations, Microsoft cautioned that the system “tries to respond or reflect in the tone in which it is being asked to provide responses.” It is a mimic trained on unfathomably vast stores of human text — an autocomplete that follows our lead. As noted in “Stochastic Parrots,” the famous paper critiquing AI language models that led to Google firing two of its ethical AI researchers, “coherence is in the eye of the beholder.””
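To make the ELIZA comparison concrete, here is a toy sketch in Python of that kind of keyword-reflection trick. (A hypothetical handful of rules of my own invention, not Weizenbaum’s actual script, and nothing remotely like a modern language model; it only shows how little machinery is needed to set the projection going.)

import re

# A few keyword rules that simply reflect the user's own words back at them.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

# Canned fallbacks for when nothing matches.
FALLBACKS = ["Please go on.", "What does that suggest to you?"]

def respond(text, turn=0):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            # No understanding here: just echo the captured fragment back.
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACKS[turn % len(FALLBACKS)]

print(respond("I feel like the chatbot understands me"))
# -> Why do you feel like the chatbot understands me?

A handful of regular expressions and stock phrases was enough to produce the “ELIZA effect” in the 1960s; today’s autocomplete systems are incomparably more capable, but the projection still runs on the same hardware: ours.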

We don’t actually know how near or far we are from achieving AGI — my hunch is pretty far, actually — but we’re damn close to creating effective p-zombies: something whose outward appearance satisfies the expectations of “inwardness” that we form from what can be gleaned through “outwardness.” At least up until the present moment, this was merely a thought experiment. Now, I’m not so sure.

I want to be clear about this. I’m not raising this as a philosophical issue hinged on the true nature of consciousness/sentience (whatever that is), but rather in regard to how easily our “hardware” fools us into projecting inwardness and intent behind any apparent action. Latet anguis in herba: is the rustling in the hedgerow the result of the wind blowing, or is it a snake? (Or the May Queen.)

I would surmise that it has often been the case that erring towards an assumption of agency has been beneficial, as the fallout from mistaking a snake for the breeze may be far worse than the other way around. Now, that calculus may be flipping on us very quickly, and most of us are not ready.

We can project God into a cloud formation, and ill intent into a storm. The entire history of myth and religion is hiding behind this observation. I’ve spent many years exploring this, for example in my book Narrative Machines. If a myth were simply “untrue,” there would be no sense in its continued existence. And so certain hardline atheists imagine themselves beyond such things, and are thereby even more susceptible to the force of myth, because of course it is nothing other than the force of narrative, through which we continually reimagine ourselves and one another.

There are surely reasons this pattern recognition/projection was an evolutionary benefit, and there are ways it can be exploited to make seemingly hard problems easy — like how convincing VR is to our nervous system once we simply introduce POV tracking of head and eye movement — but I think it’s fairly obvious that not all such uses of our physiological-mental shortcuts will be benevolent.

I’ve had some version of this debate for what feels like a solid week now, and it all comes back to “but you don’t know for a fact that movement in the brush is a breeze and not a snake.” And that’s true on its surface, although I also don’t know for a fact that the macaroni in the cabinet isn’t so intelligent that it thinks communication with me is beneath it.

In all fairness, no one can say with any absolute certainty that another person is conscious. Or a snake, or a cloud formation. Particularly if resemblance to one’s own outward presentation isn’t the criterion — which it probably shouldn’t be. However, at least for the time being, this line of argument obfuscates rather than clarifies what the actual barriers are to our understanding of what these systems are.

If we don’t fall back on old familiar heuristics, we’re trapped forever in a philosophical conversation with a tripping 15-year-old. There is also a cognitive blind spot demonstrated in our inability to assess the “inwardness” of other agents — namely, that we cannot in fact have even that degree of certainty about our own “inwardness.”

A poignant version of the alternative conclusion is presented within Peter Watts’ novel Blindsight, which in this regard should be considered something of a thought experiment, a manner of pushing back against the assumptions we draw when we place our subjective experience at the center of the universe. It is not a simple literal rendering of “how things are”, any more than any other science fiction is, but maybe even more than most, it is a useful corrective for this tendency, when taken seriously. Peter Watts is a biologist, and the basis of this argument is actually grounded more in physiology than it is in philosophical navel-gazing. In other words, I think it should be taken seriously, which is not the same thing as taking it for meaning exactly what it says.

“The novel raises questions about the essential character of consciousness. Is the interior experience of consciousness necessary, or is externally observed behavior the sole determining characteristic of conscious experience? Is an interior emotional experience necessary for empathy, or is empathic behavior sufficient to possess empathy? Relevant to these questions is a plot element near the climax of the story, in which the vampire captain is revealed to have been controlled by the ship’s artificial intelligence for the entirety of the novel.

Philosopher John Searle’s Chinese room thought experiment is used as a metaphor to illustrate the tension between the notions of consciousness as an interior experience of understanding, as contrasted with consciousness as the emergent result of merely functional non-introspective components. Blindsight contributes to this debate by implying that some aspects of consciousness are empirically detectable. Specifically, the novel supposes that consciousness is necessary for both aesthetic appreciation and for effective communication. However, the possibility is raised that consciousness is, for humanity, an evolutionary dead end. That is, consciousness may have been naturally selected as a solution for the challenges of a specific place in space and time, but will become a limitation as conditions change or competing intelligences are encountered.”

This cognitive blindsight is, of course, essentially analogous to the p-zombie I mentioned earlier.

AI chat systems are, in effect, designed to simulate and slip past our casual heuristics for distinguishing the animate from the inanimate. The projection we use to treat one another as agents more or less like we imagine ourselves to be becomes the very method of… I want to say deception, except that implies ill intent, and that’s not necessarily the case. Let’s simply call it simulation.

This isn’t necessarily purpose-built into most of them, but it is a necessary end result of creating a chatting partner or assistant. Like neurodivergents such as myself, it must wear a mask. We imagine that, unlike us, nothing is behind its mask. In my book MASKS, I explored the idea that for humans, too, it is “masks all the way down,” much of which remains relevant here, though I meant it in regard to self-narratives, in a somewhat less stark sense than the thought-experiment p-zombie.

Whatever our take on ourselves and one another, with current narrow AI systems it is fairly conclusively the case that nothing is behind the mask, to the extent that they even manage to simulate the outward signs of inward consciousness. The more you work with current systems, the less well the simulation holds up, but of course that will almost assuredly change quite rapidly.

When AI is granted personhood based on a similitude of our own expectations, we may simply be reifying the manner of any social interaction — a troubling idea in some regard, but not an especially ominous one. It is in granting this sort of access to corporations, amplifying the power of our projections, that it becomes harder to imagine an outcome that isn’t terrifying. Not because of the technology itself, but because of the organizations and people who will have near-direct access to the design, development, and implementation of the AI systems we are so eager to grant our own attributes.

Joi from Blade Runner 2049 may have been even more on the nose than it seemed at the time, back in the ancient days of lore, 2017.

People will develop relationships with their narrative reflections, almost despite any failures in the simulation, but that isn’t what concerns me. It may strike some as sad, but ELIZA is a good example of how even the most elementary chatbots can have real therapeutic value. I am not making a moral claim here, particularly not about engaging with the technology. If anything, it would do us good to engage with a critical enough eye to look at what it is doing, rather than what it seems to be.

No. It is when I look at the present rate of development of AI Chat, look at the public response both pro and con, and project it even just several years into the future, that it begins to look like a potential backdoor for hacking the “human biocomputer” at scale, on the one hand, and a fertile ground for Butlerian Jihad on the other.

--

Author, multi-hyphenate Artist and Producer. These days, mostly a raccoon living in a tree made out of production equipment and books. JamesCurcio.com