In Episode 3 of the Intentional AI series, Cole and Virgil move into the next stage of the content lifecycle: content creation.
AI can write faster than ever, but that doesn’t mean it writes well. From prompting and editing to maintaining voice and originality, AI-generated content still requires human effort and judgment. In this episode, the team explores where AI can help streamline production and where it can’t replace the creative process.
In this episode, they explore:
• How AI fits into the content creation stage of the lifecycle
• Why AI-generated content often takes just as much time as writing from scratch
• The key risks of AI content creation, including accuracy, effort, and authenticity
• How to maintain your voice, tone, and originality when using AI tools
• Why humans are still responsible for quality control and credibility
• What happens when you test the same research prompt across three writing tools
This episode also continues the real-world experiment from Episode 2. Using the research compiled with Perplexity, the team tests how three content-generation tools—Jenni AI, Perplexity Pro, and Writesonic—handle the same writing task. The results reveal just how differently each model performs when asked to create original, publishable content.
A downloadable Episode Companion Guide is available below. It includes key takeaways, tool comparisons, and practical advice for using AI in the content creation stage.
Upcoming episodes in the Intentional AI series:
• Nov 11, 2025 — Content Management
• Dec 2, 2025 — Accessibility
• Dec 16, 2025 — SEO / AEO / GEO
• Jan 6, 2026 — Content Personalization
• Jan 20, 2026 — Front End Development & Wireframing
• Feb 3, 2026 — Design & Media
• Feb 17, 2026 — Back End Development
• Mar 3, 2026 — Conversational Search (with special guest!)
• Mar 17, 2026 — Chatbots & Agentic AI
• Mar 31, 2026 — Series Finale & Tool Review
Whether you’re a marketer, strategist, or developer, this conversation is about creating content intentionally and keeping your human voice at the center of it all.
New episodes every other Tuesday.
For more conversations about AI, digital strategy, and all the ways we get it wrong (and how to get it right), visit www.discussingstupid.com and subscribe on your favorite podcast platform.
Chapters
(0:00) - Intro
(0:30) - Smarter content creation with AI
(1:00) - Effort doesn't go away
(3:20) - Tool / LLM differences
(5:34) - Audience fit & voice
(7:44) - We tested 3 tools for AI content creation
(10:08) - Testing Jenni AI
(13:23) - Testing Perplexity
(14:55) - Testing Writesonic
(16:55) - Key Takeaways
Subscribe for email updates on our website:
https://www.discussingstupid.com/
Watch us on YouTube:
https://www.youtube.com/@discussingstupid
Listen on Apple Podcasts, Spotify, or SoundCloud:
https://open.spotify.com/show/0c47grVFmXk1cco63QioHp?si=87dbb37a4ca441c0
https://soundcloud.com/discussing-stupid
Check Us Out on Socials:
https://www.linkedin.com/company/discussing-stupid
https://www.instagram.com/discussingstupid/
VIRGIL 0:00
Content creation with AI often sounds like a dream: it's going to be fast, it's going to be effortless, and it's going to be automated. But honestly, that's about half the story. AI writing often requires just as much prompting and editing as doing it yourself. In this episode, we're going to dive into the risks and realities of using AI for content creation. Join us as we start discussing stupid. Hi, everybody. Welcome back to Discussing Stupid. In today's episode, we're going to continue our series on intentional AI, and we're going to be talking about content creation. This is one of those areas that I think a lot of people use AI for on a regular basis, so we wanted to look at the good, the bad, and the ugly of it all. So, Cole, why don't you kick us off and take us where we need to go?
COLE 0:59
Yeah, sounds good. So first off, we want to think about what the risks and the realities are when it comes to content creation with AI. And I think the first one that comes to mind for me is that a lot of people think, if I'm using AI for content creation, it's going to be less effort, because all I need to do is type in a sentence and it'll make this article for me. But the law of AI seems to be that if you take away effort in one area, it's just going to be replaced in another. So if your prompt is weak, you'll just spend more time editing the output. What do you think about it so far?
VIRGIL 1:49
Yeah, no, you're absolutely right. The reality of AI is, it's cool, and it adds a lot of opportunity. If you don't have high expectations, you'll probably be fine. But if a marketer is using it for actual content they're going to share on their website, in a brochure, or in some other digital medium, they probably have much higher expectations, and that's where AI tends to fall short. You really have to not only understand how to prompt, like you said, but also, like we talked about in the research episode, have that trust in the reliability of the content. I don't know how many times we've generated content from AI where it's like, oh, this is terrible; we'll spend more time editing it than we'll save by using it. It comes down to the purpose of why you're doing it. If you're just getting it to do some kind of marketing puff piece, and there aren't really many expectations around the quality or the validity of the content, it's probably going to give you a great fresh start. But if you have something where you want to show your industry expertise and really fit your voice, there are more steps you have to go through. It's not that it won't give you a great start. It's that anybody who thinks this is going to be the end-all is probably incorrect.
COLE 3:20
Well, because if you're trying to be an industry thought leader, you aren't going to produce original content with AI alone. You have to take the expertise you have and put it into the prompting. And different tools will provide you a better landscape for inserting the intel that you have, and then pull from a variety of sources.
VIRGIL 3:43
You're absolutely correct. Every tool is different, and they are not equal. But even more so, take the tools out of the equation: it's the large language models behind them. That's probably even more significant. There are so many on the market right now, and you kind of have to figure out which one to use. There's an industry perspective that Claude Opus is pretty much one of the best if you want deep research, deep understanding, and better communication. But it's also among the most expensive out there on the market; you have to pay a lot of money for it. Then you have some lesser versions. Frankly, I've been very unimpressed with ChatGPT and GPT-5 and everything behind that, because I feel like for certain things it works really well, but for content creation it tends to struggle. So you not only have the tool, and a lot of these tools do different things, but you also have the underlying language model, which has a great effect. You're going to have to do a lot of experimenting, which is what a lot of people aren't going to do, to find the tool that works best for you.
COLE 4:56
Yeah, so you pretty much get out what you put in when it comes to AI content creation, whether you're willing to invest in a better tool or willing to invest, honestly, as much time as you would spend writing, in prompting and/or editing.
VIRGIL 5:13
Yeah, there is a possibility that at some point this will save you time, but the thought that you're just going to jump in and it's immediately going to save you time is inaccurate, or you're going to put out a lot of inaccurate information. It can save you time, but you're going to have to put a lot of time and effort into figuring it out first. Another good instance of that, which a lot of people aren't going to do, is asking what it really means to have your voice, and who you're actually trying to communicate to. We used Perplexity to create a voice as part of this experiment. I gave it a lot of our writing samples, and it was able to come up with what we both considered a pretty strong voice. It did a good job at that. It wasn't writing original content; what it was doing was taking our content and describing how we write. But as we'll talk about with the tools, we didn't always get great answers from the tools themselves when we had them actually do the content writing. The other side of it is, I was talking to somebody at a conference just the other day, and he said the other thing you really need is an ideal customer profile, where you have the AI help you walk through who your ideal customer is, and that also affects the output. So there are things you need to get in place to make it create not only good content, but content that fits your writing style, fits your voice, fits your organizational audience, all that kind of stuff. It's not just a simple one, two, boom, go. You really have a lot of prep to do. And then, as you said, on top of that, you have the prompt.
COLE 6:58
I think nailing the voice is one of the most important parts. It's honestly the main elephant in the room when it comes to AI content creation, because you can tell when content is AI-generated. There are a lot of subtle indications. It's like, does this sound like how a human talks? Does this sound like how a human communicates? Does this sound like how I talk?
VIRGIL 7:21
Yeah, the more ironic thing is the AI tools out there, the so-called humanizer tools, that analyze somebody else's AI output and decide whether it sounds human. So you're using AI to decide if AI sounds human, or to change it so it does. So yeah, it's this whole thing.
COLE 7:36
Look how far we've come.
VIRGIL 7:37
Yeah, right. Exactly. You have AI testing AI. That's always kind of funny. All right.
COLE 7:43
I think we've covered the potential risks of AI content creation pretty well so far. But how about we get into the actual experiment you did, Virgil? If you weren't tuned in for the past episode, we used several tools to come up with research for an article we're trying to create. Then we took that research from Perplexity and generated three different articles using three different tools. So Virgil, can you tell people about that experience?
VIRGIL 8:19
Yeah, the whole philosophy was, in the last episode we talked about the research we did, and we felt the strongest research, the research with the most backing and the most validity as far as statistical significance, came from Perplexity. It wasn't a big leap between the three we got, but overall we felt that one was better. So the exact prompt I gave, and I'm going to read this verbatim, is: "Using the research and related sources linked in the Perplexity document, write a 500-word article on the top reasons organizations do not invest enough in making their digital content accessible. Give this article a catchy and SEO-friendly title and write it in the style described in the Perplexity-built HM writing style document," which is the voice I said we used Perplexity to build. So you can see the prompt itself was very specific, with detail about not only what I wanted and what research I wanted it to use, but also how I wanted it written. We ended up using Perplexity, and then two more writing-specific tools, Jenni.AI and Writesonic, and we ran this through each of them. Each of them had a different way of handling it. But overall, Cole, since you did a lot of the evaluation after the articles were written, we saw a very mixed bag.
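For anyone who wants to replicate the test, one way to keep the prompt identical across tools is to hold it in a single reusable template. A minimal sketch in Python, using Virgil's verbatim prompt; the variable names are hypothetical, and note that each tool handles attaching the referenced documents differently:

```python
# Hypothetical template for reusing the same prompt across multiple writing tools.
PROMPT_TEMPLATE = (
    "Using the research and related sources linked in {research_doc}, "
    "write a {word_count}-word article on {topic}. "
    "Give this article a catchy and SEO-friendly title and write it "
    "in the style described in {style_doc}."
)

# Fill in the slots once, then paste the identical text into each tool.
prompt = PROMPT_TEMPLATE.format(
    research_doc="the Perplexity document",
    word_count=500,
    topic=("the top reasons organizations do not invest enough "
           "in making their digital content accessible"),
    style_doc="the Perplexity-built HM writing style document",
)
print(prompt)
```

Keeping the wording fixed like this means any difference in the output comes from the tool and its underlying model, not from the prompt.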
COLE 9:52
Yeah. So I happen to be a content specialist, so I'm going to break down each tool and what it produced, then give my thoughts, and Virgil can add his color commentary and other observations. Let's start with Jenni.AI. Taking a look at what it produced for us, I pretty much thought that Jenni.AI, no offense, made just a regurgitation of the research. It took each major research point from the document and made a re-skinned version of it. And I can see where it was trying to use our voice, but it made a very awkward start to the article. It opened with something like, "'Don't be shy, people, we don't bite' is what we might say to organizations hesitant about investing in digital accessibility."
VIRGIL 10:48
I mean, in all fairness, the research was very much written like an article. If you haven't checked that out, you really should go back to the episode 2 show notes; the research is there and you can read through it. But I agree. Jenni.AI seemed very simple in the way it handled it, and that's not unexpected. Maybe I should have added the words "in an original voice" to the prompt, but asking for our voice should have meant rewriting it like us. It definitely failed in the originality category, I think.
COLE 11:29
Well, I'm sure this is a case where more prompting, just adding more to the prompt, would probably produce better results. You could say, okay, have each paragraph contain original content. There are other ways you could probably make the written article better. But at a base level in this test, it just didn't really do much for us. And I think some of the...
VIRGIL 11:53
It's probably going to take a lot of experimenting to figure out the right prompt for each AI. And just to keep this test consistent, we've been using basically the same prompt across them, so we're not trying to customize it too much. Probably the only customization is that when you start adding files, they all reference files differently.
COLE 12:13
That was one other thing I noticed. You didn't ask Jenni.AI, or any of the tools, to cite sources, but the way Jenni.AI cited them was very confusing. It just referenced the research document and didn't specify which source it was pulling from.
VIRGIL 12:31
I thought that was weird too. The one thing I really wanted to show with this is that it doesn't reach through the research document and go back to the original sources, because there are links to the sources in there. It's supposed to be artificial intelligence, so why is it not intelligent enough to look and say, oh, there are sources linked? And I did say in the prompt to use the links inside the document to research this, and it apparently did not.
COLE 13:01
Yep. Go figure. I don't know, I thought it was cool that it was trying to match our voice. I can see it being beneficial if a lot of work is put into prompting and into editing on your own. But we did not go with Jenni.AI for the rest of this article creation. So, moving on to the next tool, Perplexity. I actually thought Perplexity was probably the best at using our voice as it is, our High Monkey blog voice. But I also thought it was another instance of a regurgitation of the research document. What do you think, Virgil, about this one?
VIRGIL 13:46
Yeah, based off your feedback and what I read, it did seem like it still wasn't really creating an original article; it was just reusing what was in there. And I agree, I thought it flowed better than the Jenni.AI article, which I think is probably equally important. When you start thinking about having to change things and adjust them after the fact, it'd probably be a lot shorter path from the Perplexity article to a finished product than it would have been from the Jenni.AI one, which seemed to have a lot of errors in it.
COLE 14:24
Yeah, I would heavily agree. I think this is another case where it depends on what your purpose is in using these tools for content creation. I think Perplexity would be very good if you're trying to do a very concise blog post or research article, something straight to the point that still uses your voice. But overall, in terms of originality, it didn't really produce what I was personally looking to see out of it.
VIRGIL 14:54
Yeah. So with the third one, Writesonic, I kind of broke the rule I just mentioned about keeping everything consistent. Writesonic gives you two options. One is wizard-based, where you go through and select different things, attach the voice, and build it up step by step. The other lets you write out a prompt like you do in Perplexity. Just because I wanted to try it, I did the wizard version for Writesonic. It walked through and let you say what keywords you wanted in there. I only put in a couple, so I don't think I skewed the results too much. But overall, it seemed much more built around somebody who's trying to write an article and needs to come up with the information as they go along. So I kind of liked the process of the tool itself.
COLE 15:47
Yeah, I mean, in my opinion it's arguably almost unfair to compare Perplexity and Jenni.AI to Writesonic, because Writesonic is just so much heavier in the pre-prompting process. You have so many more fields of information to fill out, so much more criteria for customizing the content. I thought Writesonic made by far the best written article for us. However, the biggest issue with it was the reading level. I remember testing it in a Flesch Reading Ease tool, and it got like a 24.5 on the scale, when it should be at like a 70 at least for proper accessibility, right? Am I right here? Yeah. So that's another thing where in the pre-prompting you could probably get a more accessible article if you tell it to, but regardless, this is what I came up with. And I think the Writesonic article was...
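For reference, the Flesch Reading Ease score Cole mentions is computed from average sentence length and average syllables per word; higher scores mean easier reading, with around 70 considered fairly easy and the mid-20s landing at college-graduate difficulty. A minimal sketch in Python, using a naive vowel-group syllable counter as an approximation (real scoring tools use dictionary-based syllable counts):

```python
import re

def count_syllables(word: str) -> int:
    # Naive approximation: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    # Higher is easier to read: ~70 is "fairly easy", ~25 is college-graduate level.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

# Short, simple sentences score high on the scale.
print(round(flesch_reading_ease("The cat sat on the mat. It was happy."), 1))
```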
VIRGIL 16:58
Yeah, and it's still not great. I mean, I think the point is, no matter what, we'd still have to do some editing; there's still some human interaction needed. If anybody takes any solid advice from our intentional AI series, it's this: yeah, sure, AI can help you out, but there's still going to be some interaction. To me, it's almost like when machines started taking over manufacturing plants. It didn't mean people didn't need to be involved; they just needed to be involved differently. And I think one of the challenges with AI, and you keep talking about prompting, and yes, we're going to do an episode about prompting, is that there's this challenge around how to write a correct prompt. There are a lot of things you can throw in there, but you don't really know each time how they're going to affect the output. But yeah, I would agree, I think Writesonic was the most straightforward and produced the best results. Overall, this is a stage that could be very valuable, but if we were working with a customer, we'd probably not recommend it as the first place to start, because there are just too many variables and too many unknowns. Especially over the first quite a few tries, you're going to end up creating a lot more work for yourself than solving problems, I think.
COLE 18:19
Yeah, totally agree. And I think the rule of thumb with AI content creation is that you don't want to use it to replace your workflow. You want it to enhance your workflow. And enhancing requires being very conscious of how you want to use it.
VIRGIL 18:38
Absolutely. You still have to follow content best practices. But AI can help. It can also hurt. And it can also be real ugly.
COLE 18:47
It sure can.
VIRGIL 18:49
Thanks, everybody, for joining us for this episode. We look forward to seeing you next time.
COLE 18:53
Thanks, everyone.
VIRGIL 18:57
Just a reminder, we'll be dropping new episodes every two weeks. If you enjoyed the discussion today, we would appreciate it if you hit the like button and leave us a review or comment below. To listen to past episodes or be notified when future episodes are released, visit our website at www.discussingstupid.com and sign up for our email updates. Not only will we share when each new episode drops, but we'll also be including a ton of good content to help you in discussing stupid in your own organization. Of course, you can also follow us on YouTube, Apple Podcasts, Spotify, SoundCloud, or any of your other favorite podcast platforms. Thanks again for joining, and we'll see you next time.