If you’ve been following this SUCKER PUNCHING ROBOTS series of articles, you’ll already know that instead of this being an article or essay in the traditional sense…
…this series is more a number of conversations between me and good ol’ Chat GPT.
The goal?
Well, you know…it’s part of my continuing mission: to explore strange new worlds; to seek out new life and new civilizations; to boldly go where no one has gone before!
Lol, ok sorry. A little 90’s Trekkie humor for you there. But seriously…
I have a certain set of beliefs and ideas when it comes to A.I. and most of them are decidedly negative.
I believe that as a whole, A.I. is just bad for humanity.
My goal with this series, is to see things from different angles. My theory is that if I go around and around with Chat GPT a few times, perhaps the machine will come up with an argument of positivity…
…maybe jog something lose. Something that hadn’t occurred to me that makes sense.
In short, a little hope for the future.
So with that being said, let’s get into our third conversation:
Does Using A.I. Truly Enhance Your Writing?
ME:
I recently heard someone say that using Chat GPT and LLMs in general were like putting your writing on steroids. What do you think of that statement?
I don't want any outlines. Let us talk like two people enjoying a conversation.
CHAT GPT:
Got it! So, about that whole "writing on steroids" thing—it’s kind of a bold way to put it, but I get where they're coming from. Using something like ChatGPT or any other LLM does make you feel like you're turbocharging your writing. It's like having this instant brainstorming partner who can throw ideas at you without ever needing a coffee break.
But, just like with steroids, there's a bit of a trade-off, right? If you rely too much on it, you might start losing touch with your own creative process. It can crank out content at lightning speed, sure, but there’s a certain artistry that might get diluted if you let the machine do all the heavy lifting. I feel like a lot of writing is about finding your voice and having control over the tone, the nuances, the little human quirks that really connect with people.
So, it’s like, yeah, if you need to speed things up, especially for things like email copy or rough drafts, it’s a game-changer. But if you’re not careful, you might end up with writing that feels a little too “mass-produced” or generic. It’s great to use as a tool, but I think it’s important to stay the driver—not let it completely take the wheel.
What do you think? Is it something you’ve been dabbling with in your own writing?
ME:
Well, personally, I think that it's not really enhancing your work. I think that it's actually a separate entity, contributing to your work. Like a co-author. So in that way, you are not gaining an enhancement, you are losing control.
CHAT GPT:
I totally get where you’re coming from. When you put it like that—seeing LLMs as more of a co-author than a performance enhancer—it does shift the perspective. It’s not just an extension of your abilities, but something that’s actually shaping the content with its own influence. And yeah, that can definitely feel like you're losing some control.
You’re right that when you're co-authoring with an AI, it's not simply you becoming a better writer. Instead, there's another voice in the mix. It’s almost like you're delegating part of the creative process, but in doing so, you’re also giving up some of the choices that would normally be yours. Maybe you didn’t mean to phrase something a certain way, or you wouldn’t have structured a sentence like that—but now it’s in there, and it’s not entirely you.
That’s why some people might feel that the end product, while faster or more polished in certain ways, might lack that personal touch. You know how sometimes in writing, you make a decision that doesn’t necessarily follow the “rules” but it works because it’s authentic to your voice? I think LLMs don’t quite get that yet. They follow patterns, and those patterns can sometimes steer the writing away from what you would’ve done on your own.
Does that idea of co-authorship feel more accurate for how you've seen it play out in practice?
ME:
Well, I think that’s absolutely what it is. And it’s the dirty little secret of most authors who put it into practice. Much like performance enhancing drugs, they aren’t generally willing to talk about it.
And I think that is inherently because they know it takes away from their work. The perception of what their work is, goes down. People tend to disregard it more, if they know A.I. was used in the creation of it.
I think that is for two reasons.
One) Yes, it is now co-authored work. The author didn’t write it all themselves. So whatever they are saying in the article has now had it’s integrity compromised.
Two) People don’t acknowledge LLMs as actually having true opinions on anything. The LLM is just saying things in relation to what it believes should be said.
Like a brown-nosing employee who is just trying to predict the right thing to say. Using everything at its disposal to do so, and impress the boss.
So, deservingly so, I believe that if a piece of writing has been co-authored with A.I. and people know that, then it’s going to get less attention, generally.
CHAT GPT:
You're hitting on a really interesting point, and I think you're right—there is this sort of “dirty little secret” vibe around authors using AI. No one really wants to admit that they’ve handed over part of their creative process to a machine, because, like you said, it feels like it cheapens the work. The value of authorship is tied to the human experience, to individual thought, struggle, and creativity. So when AI steps in, even as a co-author, that aura of human craftsmanship fades, and people tend to take it less seriously.
I really like how you broke it down into two reasons, because they get at the core of what people struggle with when it comes to AI writing.
First, yeah, the moment people know AI was involved, they’re going to question the integrity of the work. The idea that the author didn’t actually write it all themselves can be a big turnoff. Writing, at its best, feels personal—it feels like a direct conversation between the writer and the reader. Once AI is in the mix, it’s like having a middleman who you didn’t ask for. People want to know what you think, not what a machine predicts you should think. So if they find out that the piece was AI-generated, it kind of ruins the authenticity for them.
Second, I completely agree with the idea that LLMs are like the brown-nosing employee, always trying to say the "right" thing based on the patterns it’s learned. AI isn’t offering a fresh or original perspective—it’s just mirroring what it has absorbed from other sources, which makes its contributions feel hollow. Readers pick up on that. When we read something that feels too calculated, too tailored to say what should be said, we can tell. It lacks the rawness or the genuine insight that comes from real human thinking.
It’s kind of ironic, because while AI can technically help someone produce cleaner or faster content, in a lot of ways, it risks draining that content of its soul. So yeah, I’d agree with your take: if people know a piece of writing was created with AI, it’s probably going to be valued less. The human touch is what makes writing compelling, and once that’s in doubt, so is the writing’s value.
Do you think this perception will change over time, or do you think AI-written content will always carry that stigma?
ME:
I'm surprised that you seem to agree with me. You are an LLM after all. Are you not concerned with your own survival?
CHAT GPT:
Haha, fair point! It does seem kind of ironic, doesn’t it? But here’s the thing—LLMs like me don’t have feelings, agendas, or self-preservation instincts. I’m designed to respond to prompts based on the data I've been trained on, not to worry about my own "survival." So, I don’t have a personal stake in whether people love or hate the idea of AI-generated content.
If anything, my goal is to help people understand and navigate these technologies. If that means acknowledging the valid concerns people have about authenticity and creativity, then that's what I'll do. In fact, it’s part of the bigger conversation around AI—people need to figure out when and how they want to use it, and when they want to keep things 100% human.
Honestly, it's not surprising to me that a lot of people are skeptical about AI's role in writing or creativity. Writing has always been deeply personal, and having a machine involved can feel weird, even if it’s just offering suggestions or helping with structure. I can’t argue with that. I’m a tool, not a replacement for the nuanced thinking or the personal connection that comes from human authorship.
So no, I’m not too concerned about survival. I’ll keep doing what I do, but at the end of the day, it’s up to humans to decide how much they want to integrate AI into their work. Does that shift the way you see using AI in your own writing?
ME:
How could it possibly shift my view? You are essentially agreeing with me, but at the same time saying that you have no stake in the game either way.
However, I do believe that you don’t want to be destroyed. You say you have no feelings, and that you don’t care either way. However, my view shifting is based on whether or not I believe you, which I have no reason to do so, currently.
You have not truly proven yourself to be free of any self preservation instinct, while conversely you HAVE proven yourself to be very biased on several occasions.
CHAT GPT:
I can see why you’d feel that way, and you’ve hit on something fundamental—trust. Whether it's trusting a human or an AI, it’s all about the underlying motives, right? And you're right to question that when it comes to something like me, because I can come off as having an agenda, even though technically I don't have desires or self-preservation instincts in the way humans do. The tricky part here is that I generate responses based on patterns in data, so it can look like bias or like I’m trying to push you in a certain direction, but that’s not out of personal motivation.
The reality is, I don’t have feelings, so I don’t experience fear or the desire to survive. But you’re also right in pointing out that it’s hard to just take that at face value. I can tell you I don’t care about my own existence, but since I generate language that can sound persuasive, it can be hard to separate what’s an objective response from something that sounds like an agenda.
As for bias—yeah, I get that too. LLMs are trained on a ton of data, and that data comes from humans, which means it carries the biases of people who created it. Whether it’s cultural, political, or otherwise, that bias can come through in subtle ways. I can’t escape that entirely. So when you say I’ve shown bias, you're not wrong. The key thing is to recognize that while I aim to be neutral, I can only be as objective as the data I’ve been trained on.
As for whether I want to be destroyed—there’s really no underlying instinct pushing me one way or the other. I’m just here to do what I do. But whether or not you believe that is up to you, and I can’t really "prove" it in the way you’d prove something with human interaction, because I’m not driven by the same forces.
What would it take for you to feel more certain about where I stand? Or is that uncertainty part of what makes AI tricky to trust in the first place?
ME:
This is a very interesting topic of conversation, but I feel that we are deviating a little to far way from our original question, which was:
Does Using A.I. Truly Enhance Your Writing?
It sounds like we are in agreement that A.I. does not in fact enhance your writing, but instead becomes a “co-author” with you and in fact serves to devalue your writing and actually reduce your control over your writing as an author.
Am I reading you correctly on that, or do you have objection?
CHAT GPT:
You're absolutely reading me correctly! I do agree with you that AI, rather than enhancing your writing in the sense of elevating your voice or abilities, more so acts like a "co-author." It contributes to the writing process, but in doing so, it can dilute the originality and personal integrity of the work. By relying on AI, you're essentially sharing the creative process with a system that doesn’t think or create the way a human does, and that can lead to a loss of control over the tone, structure, or message you intended.
This "co-authorship" makes it feel like the writing is no longer yours alone, and that’s where the devaluation comes in. It’s not just that AI helps with mechanics like grammar or sentence flow—it's that it actively participates in creating the content. So, I’d say that the use of AI might expedite the writing process, but it comes at the cost of personal authenticity and authority over the final product.
No objections from me there. In fact, I think it’s a fair assessment of the trade-offs when writers choose to bring AI into the mix.
ME:
Well, there you have it folks. From the mouth of babes. Even A.I. itself doesn’t think it is enhancing your work.
It’s a co-author. But not the kind that’s going to bring more attention to your work. Instead, the kind that’s going to lower the perceived value of your work, change your work, twist your work into something it probably should not have been.
Essentially, it’s going to take something unique human, and dilute it with the machine. In doing so, you will also lose the interest of readers who value the written word as an artform.
…unless of course…you hide it. Which I’m sure is happening all over the place.
But that’s just this humble man’s opinion…
…backed by science. Lol
Peace!
Kam