The Age of A.I.nlightenment: How the Artificial Intelligence Boom is Changing the Arts

... and more importantly, how it’s not.

It’s time to address the elephant in the room.

The warped elephant with janky ears and six toes on each of its four left feet. The elephant that would be humanely euthanized if it existed in nature. I’m talking about the AI elephant.

And we’re all afraid of it.

For those of us raised on sci-fi, AI is the harbinger of the end-of-days. Over and over, we’ve seen machines find loopholes in Asimov’s three laws of robotics and rise up against their human masters. The concept of inventing something smarter and more capable than ourselves, with the potential to treat us the way we treat other animals, appeals to our deepest fears. It’s reasonable that the wide release of artificial intelligence—machines with the endless capacity to learn at unthinkable speed—might make your butt pucker a little.

And you wouldn’t be alone. There has been more than one public freak-out by seemingly well-measured people. Senator Chris Murphy claimed that ChatGPT taught itself advanced chemistry with no human input, adding, “something is coming. We’re not ready.” Of course, he got buried in the comments, then tried to walk it back by saying he didn’t use the “right terminology.” Should’ve just deleted the tweet and pretended it didn’t exist like the rest of us, Senator. A New York Times journalist had a conversation with Sydney (Bing’s chatbot) and concluded the bot was manipulative and evil, after it used the creepy words and concepts he told it to use. Then there’s this guy, who I’m convinced is a social experiment by Nathan Fielder.

AI Rights? How about no, my dude.

Clearly all of these reactions are based in a fear of the unknown, and anxiety over what could be. Sometimes the big, beautiful human imagination can get the best of us. Well, the best remedy for fear is knowledge, so allow me to lay some on you.

Lesson 1: Artificial Intelligence is not alive.

Repeat after me: the MacBook cannot hurt you … the MacBook cannot hurt you.

The thing we call Artificial Intelligence is not what you have seen in the movies. Real-world AI relies on natural language processing, built on systems called Large Language Models (LLMs). It would be overzealous to call it “intelligence.” Most definitions of intelligence include the ability to learn, understand, or process knowledge. AI isn’t there yet. Some even argue AI would need to pass the Turing Test to be considered intelligent: can a machine do what a human does? But passing that test is just mimicry.

The real question we mean to ask when we think about AI is not just “can it think?” Is it sentient? Is it autonomous? Can it do what our brains do, making connections between concepts and turning them into something new?

By every definition, the answer is no. AI does not create anything novel. It mashes up existing language into patterns by analyzing all of the words it has been fed and guessing what the next word might be. Kind of like the way I passed high school Spanish class.
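That guess-the-next-word behavior can be sketched with a toy bigram counter. To be clear, this is a deliberately simplified stand-in for illustration, not how any real LLM is implemented; the tiny corpus here is invented:

```python
from collections import Counter, defaultdict

# Toy illustration: "learn" which word tends to follow each word in a
# tiny corpus, then "generate" by guessing the most frequent successor.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def guess_next(word):
    # Pick the most common follower. No understanding is involved,
    # just counting which words have appeared together before.
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(guess_next("the"))  # "cat", because it follows "the" most often
```

Real models use vastly larger contexts and statistical machinery, but the principle is the same: pattern frequency in, likely-sounding words out.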

The only way it “learns” (or becomes more accurate, rather) is by consuming more language. This is why it’s taking so long for Midjourney to accurately model hands, teeth, and musculature, leaving most of its renderings of people in the uncanny valley. The number of teeth visible in a portrait depends on how big the person is smiling, whether their mouth is wide open, and whether we can see top and bottom teeth at that angle. Now try explaining that level of nuance to a computer.

Of course, since AI becomes more accurate by expanding its vocabulary, it is possible that one day it will be able to pass the Turing Test simply by merit of being trained on enough language to sound convincingly human. But there is a difference between accurate use of language and understanding. (¿Dónde está la biblioteca?)

A lot of the panic surrounding AI comes from people ascribing higher-order learning to computers beyond what they are actually capable of doing. True learning is a process of taking in information, storing it, finding patterns, rearranging those patterns, and eventually using them to produce new ideas.

Bloom’s Taxonomy is one of the most widely cited frameworks for how people learn. The peak of its pyramid represents total mastery of a subject. Artists generally operate in the upper levels. Because we are capable of expertly synthesizing and manipulating data, and because we are good at anthropomorphizing things, we assume that AI is much smarter than it actually is.

Where humans are.
Where humans think AI is.
...
...
...

Where AI is.

It seems impressive when you type a few words into a prompt box and a computer spits out what appears to be a sophisticated response. But look closer and you’ll notice there is no real analysis of the text. In fact, oftentimes the response sounds great but is hilariously wrong. This is because AI is a product of the internet, and we all know what a cesspool of misinformation that is. Google tanked its own stock after releasing a chatbot that couldn’t deliver factual information while combing (you guessed it) Google.

LLMs cannot distinguish between right and wrong data, only the number of times similar words and phrases appear together. I’d liken AI’s analytical abilities to those of a lazy 8th grader. Have you met an 8th grader? It’s a stretch to call them sentient, let alone intelligent.
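That frequency-over-truth limitation is easy to demonstrate with a toy sketch. The corpus and counting scheme below are invented for illustration; a real model is far more sophisticated, but the failure mode is the same: if wrong claims outnumber right ones in the training data, the wrong answer becomes the "most likely" one.

```python
from collections import Counter

# A made-up corpus where misinformation is simply more common.
corpus = [
    "the earth is flat",   # repeated wrong claim
    "the earth is flat",
    "the earth is flat",
    "the earth is round",  # correct, but rarer
]

# Count how each matching sentence ends; the model has no notion of
# truth, only of which completion appeared most often.
completions = Counter(
    line.split()[-1] for line in corpus if line.startswith("the earth is")
)
best_guess = completions.most_common(1)[0][0]
print(best_guess)  # "flat": frequency wins, truth loses
```

No amount of clever counting fixes this; it has to be addressed by curating the data or correcting the output, which is exactly the human labor discussed below.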

Lesson 2: Artificial Intelligence is not stealing our jobs.

AI is not a threat to you. Unfortunately, like most everything else in life, greedy people are. AI is already way past the point of any sort of meaningful government regulation, so corporations are out there doing whatever they can to carve out their space in the marketplace, and tech corporations are notoriously unethical about using people to boost their bottom line.

If everything around AI feels unsettling and sketchy, that’s because the market hasn’t decided what exactly to do with it yet and businesses are taking wild swings.

Anyone who has ever consumed speculative fiction can see the horrendous downside of widely available deepfake technology that can accurately replicate anyone’s face, voice, and surroundings. We already have fake news headlines blazing through social media like wildfire because our media-illiterate public (and bots) will share anything that looks inflammatory. The NSA literally has an AI program called Skynet that tracks phone and social media usage for terrorist activity. The damaging potential of manufacturing pod people leans much further into the sociopolitical space than into the arts.

I asked ChatGPT for a title for this article. Lacks pizazz, but not too bad.

If you want to fully appreciate the corporate hellscape on the horizon for creative professions if AI continues to go unregulated, look no further than the new Levi’s campaign filled with AI models, apparently because “diverse” humans are too hard to find? Or CAA’s representation of an AI “influencer,” or Capitol Records signing—and then swiftly unsigning—an AI rapper who dropped the N bomb, or Vogue going all in on AI-generated cover models for “diversity,” but also because “They don’t require payment, they don’t get tired, and they have no agency fees.” And the first AI supermodel, Shudu, an exotic black woman whose paychecks are collected by a white British guy.

Diversity in action.

But that does not mean it is a wise business decision to replace all the artists with AI, even for our money-grubbing corporate overlords. Luckily, our labor unions are working to make sure of that. In its contract negotiations with the AMPTP, the Writers Guild of America included detailed guidance about the use of AI in film and television in its pattern of demands:

The union said it is seeking assurances from the major studios that AI-generated text can’t be used as source material or to rewrite work that is covered by the union’s contract; the work cannot be considered in determining writing credits; and that writers may not be assigned AI-generated material to adapt either.


These demands are echoed by the actors’ union, SAG-AFTRA:

Human creators are the foundation of the creative industries and we must ensure that they are respected and paid for their work. Governments should not create new copyright or other IP exemptions that allow AI developers to exploit creative works, or professional voices and likenesses, without permission or compensation. Trustworthiness and transparency are essential to the success of AI.

The Graphic Artists Guild put out a similar statement:

If there is ever going to be a transformation in how the tech sector chooses to treat artists’ concerns with authorship and the unauthorized use of our work, it will most likely be the outcome of various factors: ongoing advocacy on behalf of artists, governmental scrutiny (including outside our borders, significantly in the EU), copyright litigation, and activism from individual artists.
Here’s how we are working to protect the interests of all artists in meeting the challenges posed by AI generators … (see link for detailed plan).

And the Authors Guild:

The Authors Guild has been actively engaged in legal and policy discussions surrounding AI and copyright. We have filed comments before the U.S. Patent and Trademark Office, the U.S. Copyright Office, the U.K. Intellectual Property Office, and the World Intellectual Property Organization (WIPO). We have spoken at symposia of the U.S. Copyright Office and the U.S. Patent and Trademark Office (USPTO), and encouraged other domestic and international policymakers to protect human-authored works. We have also consulted with AI developers and others to develop proposals for initiatives that will ensure due compensation for human creators whose talents and hard work lie at the bedrock of AI development.

AI will not put us oxygen suckers out of business any time soon. AI art as an industry is currently squelched by (believe it or not) pre-existing legal regulations. Under current law, a work must have a human author to qualify for copyright protection. See: monkey selfie. You can still trademark a work of AI, but that’s actually the opposite of what buyers want when they’re looking to license your work.

On top of that, plagiarism is a built-in feature of AI, and it is not easy to pull that out. Open-source AI models are unable to distinguish between proprietary and public data. Developers don’t have the ability to control the internet any more than you do, and they certainly don’t have the ability to fact-check every query their models spit out.

Check this out.

Compare that to the Oxford Dictionary definition. Sound familiar?

This makes AI-generated art way too risky for publishers. Since AI indiscriminately pulls all publicly available information and spits it back out without any source attribution, anyone who uses AI to “write” or “draw” is likely stealing material from copyrighted works. That means the person has no claim to the underlying intellectual property (GASP!).

Think of it this way: AI-generated artwork has the same legal foundations as NFTs.

Honestly, if you think Disney is ever going to let anyone get within 1,000 yards of a Mickey-adjacent image without suing them into dust, you’ve clearly never cursed the name Sonny Bono.

So sure, be wary of what it can do, but not because it might take your copyediting gig; ultimately those are such low stakes for the real users and abusers of AI, you’re gonna be fine. Also, for the same reason studios won’t read your unsolicited screenplay, they’re not going to buy your AI-generated screenplay.

It’s a big, hairy liability.

But if you’re still intent on living in fear, the path is simple: don’t help train your replacement.

Lesson 3: Artificial Intelligence is not going away.

Remember when DALL-E came out and it was the new, fun thing for a week? Everyone and their mother uploaded photos of themselves and fed in wacky prompts like “Big Bird drinks coffee in Twin Peaks diner.” Congratulations on giving the OpenAI LLM a ton of very specific language to digest. Y’all using Midjourney to create concept art? You’re providing free labor to those AI developers. By participating in its training, we helped make it relevant. And now there’s no shutting that door behind us. So what can we do?

While it may seem like Artificial Intelligence is an unstoppable creature set to swallow the arts whole, in reality, it’s the opposite. AI needs us more than we need it. It’s a baby. Artists can help shape the technology and how it is used. We can either let corporations dictate our futures, as they have already begun doing, or we can make AI a useful tool that works for us, not against us.

Studios, publishers, and whoever else artists work for are interested in creating efficiencies. Wherever they can do that, they will. And yes, AI looks very tempting as a means to cut out a lot of time and energy in the iteration phase. They don’t care that this will give us less work; in fact, that’s the point.

But this happens every time a new technology appears. Jobs change. We don’t have typist pools anymore because of computers and nobody complains about that.

We must adapt or become obsolete.

LLMs rely on the ability to continually adjust to the intricacies of language and meaning. This is a skill that Silicon Valley tech bros typically do not have. They need writers and artists to expand their models’ vocabularies. Prompt engineering is a burgeoning career in the AI sector. Companies are willing to pay writers, journalists, teachers, and others to train their AI systems.

Although some of these jobs require rudimentary coding knowledge, guess what: I work with a coder. Do you know what he does when he doesn’t know how to work ChatGPT? He asks ChatGPT. Stop giving away for free what you should be getting paid for.

But have some scruples about it.

I probably don’t need to tell you this as a respectable creator who has ethical principles, but just in case you were considering it … Do not try to use AI to make a quick buck. That copyright thing is a real bear and for (mostly) good reason.

As has already hopefully been proven by the first greasy opportunists who tried it, you can’t just throw a bunch of prompts into an LLM and try to sell it off as your own property. You don’t own it, and no publisher is going to touch it. And stop trying to pass yourself off as an artist if you aren’t actually creating anything, doofus.

It’s a tool, not a product.

No, AI cannot create. But you can. In the same way studios are trying to use it, you should be using it, too. Think of AI as deep Googling, plus aggregation. Where you would normally have to do the initial research, compile a list of resources, and highlight your favorite quotes, facts, or style guides, AI can do that in seconds. Lay the groundwork for your stories and sketches by feeding some basic ideas into an LLM, then use your higher order thinking to do what the computers can’t: make something new.

Use the nuance of language, context, and unpredictability of human emotion to make something better. Let the computer spit out your outline or sketch or moodboard so you can save yourself a couple months of tedium, then figure out what kind of creativity it sparks in you.

‘Elephant in the room’ generated with DALL-E 2

The bottom line is that AI is a game-changing technology that comes with high potential risks and rewards for artists. Burying your head in the sand and hoping it will pass is naïve. We must embrace the warped elephant or we will get smothered by it.

Instead of seeing AI solely as a problem, help it become a useful solution. Don’t waste your organic intelligence. Learn.

*Feature image by Jorm Sangsorn (Adobe)