When I started down this publishing venture, I told myself that there were two topics I would avoid posting about on my feeds: politics and religion. If you want the lynch mob to come after you, those two topics are the most efficient way to do it. And for over a decade, I have successfully sat on the fence on almost every topic out there.
Sure, there were times when I piped up to say that "enough was enough" when it came to the bullying that was happening in certain corners of the social media networks. But for the most part, I never really took a stance that could be considered "political" on any of my public profiles.
Until now.
In the last year, so many things have happened within the publishing industry, endangering the livelihoods of writers, editors, publicists, publishers… basically, every single human in the industry.
I'm talking about the war against artificial intelligence (AI)… and it's not even AI's fault. The ones to blame for this AI war are the humans who are deliberately misusing and abusing the technology. And because the technology is so new, those wanting to be honest in this industry have no real way to fight what is happening, except to go public and say that it's not okay.
What is happening is far from okay. The copyright of creatives everywhere is being abused in the training of AI-creation tools. The good names of several creatives are being trashed because of false AI-generated works that appear under their names. Creatives are being forced to choose between their future earning potential and a paycheck right now, because publishers want to use their works to train AI so the publisher can generate more works like the creative's, but without the creative's input. And to top it all off, the technology at the heart of this mess is also being compromised by the shady practices of those looking to abuse it.
It's not okay, and I'm publicly taking a stance against the use of AI-generation tools within publishing.
In today's post, I am breaking my promise to myself about political posts on my public platform, because this is one topic that I can't stay silent on.
I have never been afraid of technology
For as long as I can remember, computer technology has always played a significant role in my life.
My father was a mainframe computer programmer back in the day, so I grew up with computers in the house. They were often loan machines from Dad's work, but I still got to play with the technology. At school, I would spend many a lunchtime on the computers at the back of the classroom, coding in GW-BASIC. (My fifth-grade classroom was the computer lab for the school.)
Mom resisted having a permanent computer in the house (hence the loan machines), but she finally caved when I started high school in 1989. Back then, the internet didn't really exist, and for those who had it, connecting meant an acoustic coupler: a unit where you physically placed the phone's handset onto a cradle. And the room that Mom allowed the computer to be set up in didn't have a phone connection… so no internet.
Fast forward to the 2000s, and my husband and I were among the first in our street to have fiber for our internet connection.
I instantly fell in love with my Kindle when I got my first one all those years ago (I think I'm on my fourth unit now). And my acquisition of an eInk tablet/notebook back in 2021 has changed the way I work as an editor.
I was one of the first people on Facebook when Facebook became available to the public back in 2006. And now, I spend a significant amount of time understanding the internet world and the impacts it has on writers.
But last year, a new technology came on the scene that I stood back from and haven't played with. I'm not afraid of it. I'm just concerned about how easily the technology can be abused, and I don't want to be part of that equation.
I'm talking about ChatGPT, which launched on November 30, 2022.
ChatGPT is not the problem. Humans are.
When ChatGPT first came on the scene, curiosity swept through the publishing industry. It was a new technology, and of course, we all wanted to know how we could use it to our benefit. But in early 2023, certain details came to light that shifted the entire dynamic of our attitudes.
Questions started to arise about the source material used to train ChatGPT. Was the source material still under copyright? How much of any copyrighted material was being incorporated into the AI-generated output? And what about our prompts? Were our creative ideas being given away by the algorithm to others?
There was a sudden increase in the number of self-published books that appeared to be AI-generated. That was always going to be the case: the moment any technology surfaced as a way of generating manuscripts quickly, of course people were going to use it. But publishers and producers started to push the training concept on creatives, wanting to use our work (and likenesses) to train AI so they could generate other works in our voices but without our input.
Within the last few months alone, the entire landscape of the publishing industry has shifted in ways that none of us could have foreseen, and it is all because of various issues associated with AI.
Lawsuits have been filed against OpenAI, the company behind ChatGPT, questioning the nature of the material used to train its AI models. Those cases have yet to be heard, but more lawsuits are cropping up.
Charlatans generating AI material are selling those shady books under the name and guise of other authors—and, in some cases, well-known authors. And we are seeing the scam artists taking advantage of natural disasters by putting out books on those disasters, using the technology to make a quick buck.
All of this has brought into question how the industry is going to regulate the authenticity of human-generated works. Some people have suggested that one solution could be the removal of the free ISBNs from services like Amazon, Draft2Digital, and IngramSpark—an idea that I support.
The only thing that has happened recently that seems to be going in creatives' favor is the ruling issued by U.S. Judge Beryl A. Howell that AI-generated artwork can't be copyrighted. The judge presided over a lawsuit against the U.S. Copyright Office after it refused to register a copyright for an AI-generated image made with an algorithm that Stephen Thaler created. According to various news articles, Thaler tried repeatedly to get the artwork copyrighted and was repeatedly rejected. The reason: U.S. copyright law already holds that any work created by a non-human is not copyrightable. That includes works created by monkeys and elephants.
While it is unclear where the law will eventually settle when it comes to AI, and there are more cases yet to be heard by the courts, none of that stops the misuse of the technology.
The abuse is also impacting the technology itself
The irony in this situation is that the abuse of the technology is actually causing the technology itself to break down.
Researchers have already pointed out that when you train AI on AI-generated materials that are already flawed, you perpetuate those flaws and cause the resulting "knowledge library" to collapse. As more and more AI-generated nonsense floods the market, the technology used to generate that nonsense will implode.
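To make that mechanism concrete, here is a tiny toy simulation (my own illustration, not taken from the research itself, and the word counts and number of generations are invented for the example). If each new "generation" of a model is trained only on text sampled from the previous generation's output, rare words vanish by chance and never return, so the model's vocabulary, standing in for that "knowledge library," shrinks with every pass.

```python
# A toy illustration of "model collapse" (my own sketch; the numbers are invented).
# Each generation's "model" simply reproduces words with the frequencies it saw,
# and the next generation is trained only on that synthetic output.
import random
from collections import Counter

random.seed(1)

# Generation 0: a "human" corpus with 500 distinct words, many of them rare.
corpus = [f"word{i}" for i in range(500) for _ in range(random.randint(1, 5))]
print(f"generation 0: distinct words = {len(set(corpus))}")

for generation in range(1, 8):
    # Fit the model: count how often each word appears in the current corpus.
    counts = Counter(corpus)
    vocab = list(counts)
    # Generate the next training corpus by sampling from the model's own output.
    corpus = random.choices(vocab, weights=[counts[w] for w in vocab], k=len(corpus))
    print(f"generation {generation}: distinct words = {len(set(corpus))}")
```

The distinct-word count can only ever fall: once a rare word fails to be sampled, it is gone for good. That, in miniature, is the collapse the researchers are warning about.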
While this is good news for human creatives, because humans will win in the end, it's bad news for society as a whole. Where will the nonsense stop, and how will humans combat the tsunami of fake information that the technology generates?
I'm taking a stance
When the technology first came onto the scene, I said that I wasn't going to use it until I could determine its stability and usefulness for my work. But with everything that is going on, I know that I can't trust AI-creation tools as far as I can throw them.
For the first time, I am taking a political stance and sharing that stance on my public channels.
To protect my work, I will not be using AI-creation tools during the creation process of anything that I do. I won't use AI-creation tools during my editing of client works. (It would be a waste of time, anyway.) And I won't use AI-creation tools for anything that I would have historically hired another human to do.
Everything that you read or see from me (and this includes any infographics from Black Wolf or my photography) will be generated by me—the "human" me. And I will not work with other industry professionals who are using AI-creation tools in the work that I contract them to do.
This means that my book covers will also be generated by humans, not an AI. It also means that if I ever do audiobooks, they will also be human voices. (And if I discover that someone went behind my back and did the AI thing anyway, there will be hell to pay.)
As an editor and writing coach, I will not knowingly edit AI-generated stories. If I discover that a client has sent me an AI-generated story for editing in a deliberate attempt to deceive me, I will drop them from my client books faster than their head can spin.
BUT I'm not cutting 100% of AI out of my life.
Google is driven by AI, and I use some aspect of Google every day of my life. I have an Alexa speaker in the kitchen, and she makes me laugh when she refuses to accept my husband's commands. Hell, the spam filter on my email system is a form of AI, because it learns what I mark as spam and what I don't.
I will use automated scheduling tools for my blogs and to share certain posts to my social media feeds, because I would be crazy not to. I will also use AI-assisted editing tools in my work, but notice I said assisted. The human me will still have a huge role to play in my editing process.
I might even consider using an AI-generative tool for creating ad-copy blurbs and other marketing materials... because, let's face it, distilling a 100,000-word manuscript into 200 words is a nightmare. And I HATE DOING IT! And when it comes to creating those promo images for social media... yeah, an AI can do that for me. It's another task that I'm not fond of, but I'm unlikely to hire anyone to do it for me (except perhaps my cover designer, who has kindly added some promo materials to the package deals that I get from her).
This is not a stance that I take lightly. I know that by publicly stating that I won't work with AI-generated materials as an editor, I'm hurting my chances to work with certain industry professionals. I might even be killing my chances at eventually scoring a traditional publication contract. But I have to protect myself from the firestorm that I know is coming.
Right now, enough of the market doesn't care about AI-generated art (literary, visual, or audio), which is why the scammers are doing what they're doing. But with so much in flux, and not in a good way, it's not worth compromising my standards to dive into a world that will be filled with nothing but hurt.
My stance on AI-creation tools might change one day, but not in the foreseeable future. Too many things are just… well… uncertain.
The only certainty in this mess is that only human-generated works are covered under copyright laws. And that, to me, is enough of a reason to avoid AI-generation tools, particularly for the creation part of the process.
Copyright © 2023 Judy L Mohr. All rights reserved.
This article first appeared on judylmohr.com