S3E2 - Intentional AI: Maximizing AI for research & analysis
Discussing Stupid | October 14, 2025 | 22:07

In Episode 2 of the Intentional AI series, Cole and Virgil dive into the first real stage of the content lifecycle: research and analysis.

From brainstorming ideas to verifying data sources, AI is being used everywhere in the early stages of content creation. But how much of that information can you actually trust? In this episode, the team unpacks where AI helps, where it hurts, and why you still need to be the researcher of the research.

In this episode, they explore:

  • How AI fits into the research and analysis stage of the content lifecycle

  • The major risks of using AI for research, including accuracy, bias, and misinformation

  • Why trust, verification, and validation are now part of your job

  • Security and legal concerns around AI scraping and data usage

  • How different tools handle citations, transparency, and usability

  • Why you can’t skip the human role in confirming, editing, and contextualizing AI outputs

This episode also features the first step in a real experiment: researching a blog topic on digital accessibility using the tools Perplexity, ChatGPT, and Copilot. The results of that research will directly fuel the next episode on content creation.

A downloadable Episode Companion Guide is available below. It includes key episode takeaways, tool comparisons, and practical guidance on how to use AI responsibly during the research stage.

DS-S3-E2-CompanionDoc.pdf

Upcoming episodes in the Intentional AI series:

  • Oct 28, 2025 — Content Creation

  • Nov 11, 2025 — Content Management

  • Dec 2, 2025 — Accessibility

  • Dec 16, 2025 — SEO / AEO / GEO

  • Jan 6, 2026 — Content Personalization

  • Jan 20, 2026 — Front End Development & Wireframing

  • Feb 3, 2026 — Design & Media

  • Feb 17, 2026 — Back End Development

  • Mar 3, 2026 — Conversational Search (with special guest!)

  • Mar 17, 2026 — Chatbots & Agentic AI

  • Mar 31, 2026 — Series Finale & Tool Review

Whether you’re a marketer, strategist, or developer, this conversation is about making AI adoption intentional and keeping your critical thinking sharp.

New episodes every other Tuesday.

For more conversations about AI, digital strategy, and all the ways we get it wrong (and how to get it right), visit www.discussingstupid.com and subscribe on your favorite podcast platform.

(0:00) - Intro

(1:44) - Better research with AI

(3:46) - Risk: Trust & reliability

(5:29) - Risk: Security/legal concerns

(7:04) - Risk: Hallucinations

(9:17) - We tested 3 tools for AI research

(11:03) - Testing Perplexity

(14:38) - Testing ChatGPT

(17:45) - Testing Copilot

(19:54) - Comparing the tools and key takeaways

(20:52) - Outro

Subscribe for email updates on our website:

https://www.discussingstupid.com/

Watch us on YouTube:

https://www.youtube.com/@discussingstupid

Listen on Apple Podcasts, Spotify, or SoundCloud:

https://podcasts.apple.com/us/podcast/discussing-stupid-a-byte-sized-podcast-on-stupid-ux/id1428145024

https://open.spotify.com/show/0c47grVFmXk1cco63QioHp?si=87dbb37a4ca441c0

https://soundcloud.com/discussing-stupid

Check Us Out on Socials:

https://www.linkedin.com/company/discussing-stupid

https://www.instagram.com/discussingstupid/

https://www.facebook.com/discussingstupid

https://x.com/DiscussStupid

VIRGIL 0:00
Sometimes research with AI feels like a no-brainer. It's quick, efficient, and sounds smart. But AI is also known for making stuff up, using sketchy sources, and causing people legal headaches if you're not really paying attention to what it's doing. In this episode, we explore how to use AI responsibly for research. So join us as we start discussing stupid. So we're ready to kick this series off. We're going to be talking about the different stages of the content process, and we're going to be looking at how AI can be used for each of them. And then we're also going to be looking at, frankly, the good, the bad, and the ugly with all that. So, Cole, where are we going to start today?

COLE 0:54
First off, we're going to start by you liking and subscribing and sharing this podcast with anyone who could use this intel. And also understand that anything we reference in this episode will be available in the show notes.

VIRGIL 1:12
And yeah, we're going to have things in the show notes this time, because we're going to be talking about research and analysis in general and using AI for content. But then we're also going to walk through a specific scenario, actually testing some tools and seeing what we get back. So I think that will be very useful. So everybody who's listening and not watching, make sure that you actually look at that part.

COLE 1:44
So yeah, Virgil, let's start with the topic itself, research and analysis. What does this stage look like in the content lifecycle?

VIRGIL 1:53
When you start thinking about the content lifecycle, as a reminder, we break it down into multiple stages. We look at research and analysis, which is really ideas, then content creation, content management, and then SEO, personalization, design, coding, all that kind of stuff. Everything together is the entire content process. Most people are going to use AI for two things: one is research, the other is content creation. And that's probably because, especially with marketers, those are the most well-known features, aren't they? So when we look at research, what do we mean? It could be idea generation, maybe using AI to help generate ideas that would play well in their market. It could be that they want to do a more technical article, or just a puff piece on one of their products or services or something that's happened in the organization, and they're using the AI to make sure they have all their ducks in a row. But it's also one of the most challenging parts, because AI is essentially like a search engine consuming all this information out on the internet. You get the good side of that, which means you can ask it a question and it'll give you some really interesting information. But there are a lot of downsides as well in how it can be approached. My overall point for this entire series is that AI doesn't really replace you in the content process, but it probably shifts your responsibilities. So where there are a lot of opportunities with it, we also have to talk about some of the challenges, right? Because whatever the AI gives you from your research, you still have to verify it. Where did it come from? What types of sources is it referencing? Are we referencing Reddit forums or something like that? Trust is probably the number one challenge you're going to run into with AI. You have to ask: do I trust this information it gave me, and do I trust the AI provider? That's a really big one, because every day we see stories about AI providing false information and prompting people to do really stupid things. If you're researching an industry where you're trying to gain more traction, do you trust what it gave you? It doesn't mean you don't use it. It means you need to play the role of the editor and the research confirmer, the person who actually validates what's happening, especially when it starts regurgitating statistics. And you mentioned that a lot of AI tends to rely on social commentary such as Reddit to train itself. While that's great for it to learn to sound human, which is what it's trying to do, it can be a bit more of a reliability issue.

COLE 5:13
Right. So what I'm getting is that the responsibility isn't gone when you use AI in the creation of your content; it's just shifted to you to take a look at everything and verify it. Also, you mentioned the other day that there are a lot of security concerns with AI, like where it pulls its data from, whether it can actually pull that data, and whether people can actually use the sources it's pulling.

VIRGIL 5:43
Oh yeah. A good example is Perplexity, which is actually one of the tools we used in this particular stage to do some research on our behalf. Perplexity is very well known in the news for being in the midst of a lot of lawsuits, because it's being accused of scraping sites it's not allowed to. Now, are they absolutely doing that? I'm not a lawyer; I don't know. But in the end, there's a lot of that, so it's a reliability question. Or take something like Copilot, where you can not only scrape information from the external web but also from your internal 365 system. Are you sure that any of that information should be shared? What happens if it comes from company proprietary stuff? There's that security side of making sure AI is accessing things in the right way. And on top of that, if it's accessing your internal stuff, is it accessing only the right stuff? Security can be a really big concern, and you have to know what you're doing. Again, it goes back to playing that role of helping people understand that they have to monitor and manage this process. It can't just be something you let it do and then walk away from.

COLE 7:04
Right. And then also, this goes without saying, but I think we've seen AI do this the most out of all these potential issues with research: if it doesn't know an answer, sometimes it just makes stuff up or makes up statistics. There are a lot of issues on that front as well, wouldn't you agree?

VIRGIL 7:25
Yeah, ironically, I was watching a video of a guy walking through how to use Copilot Studio. For anybody that doesn't know, Copilot is Microsoft's AI tool, and with Copilot Studio you can build your own agent. One of the interesting things he did when he walked through this was tell it: if you don't know the answer, don't make one up; send them to our support. And that's a very real thing. AI is well known for going off on these hallucination journeys and just going off the rails, and there aren't enough guardrails on that side. But the other side is that people don't really understand it. It's the same frustration people have when they prompt, ask something, and don't get the answer they wanted; they ask again and get a completely different answer even if they did the same thing, because it's supposed to sound human, and it's really hard to narrow those answers down. So you have to be very cognizant of what you're using and how you're using it. Research is such a great opportunity because AI can consolidate all this information and really pull out conclusions. But at the same time, it can do it wrong. So you've got to be willing to go through the effort of looking at the sources it cites and validating them. Does that actually fit into the context that you want?
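
For anyone who wants to wire that same guardrail into their own tool, here is a minimal sketch of the idea using the OpenAI Python SDK. Copilot Studio expresses the same instruction through its agent settings rather than code, and the model name and support address below are placeholder assumptions, not anything from the episode.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The guardrail lives in the system message: tell the model what to do
    # when it is unsure, instead of letting it improvise an answer.
    GUARDRAIL = (
        "You are a support assistant. If you are not confident in an answer, "
        "say you do not know and direct the user to our support team "
        "(support@example.com is a placeholder) instead of guessing. "
        "Never invent statistics or sources."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": GUARDRAIL},
            {"role": "user", "content": "What is our refund policy?"},
        ],
    )
    print(response.choices[0].message.content)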

COLE 8:49
Yeah, I like that example you gave of the guy telling the AI tool not to make up answers if it doesn't know the answer. I think that should be a requirement for everyone using AI: put it in the customization. You know, there's that customization setting you can use, like adding traits to the AI. I also think people should probably ban AI from using emojis and em dashes, because you can just tell.

VIRGIL 9:15
It's very true. Well, speaking of examples, one of the fun things we decided to do was to take a piece of content, one we've actually written in the past, and walk it through each stage, testing different tools at each stage. The premise is that we want to write a blog post for our website about the top reasons organizations don't invest enough in accessibility, which you and I both know is a really big problem out there in the world. So I started by creating the question and looking at three tools. And if you're okay with it, that's where I'm going to start.

COLE 10:06
Right, yeah. Do you want to read the question that you asked or do you want me to?

VIRGIL 10:10
Sure, I can. The exact wording of the question I asked is: "Using solid research and statistical information, provide the top reasons organizations do not invest enough in making their digital content accessible." And for this particular exercise, I used three of the most well-known tools out there: Perplexity, ChatGPT, and Copilot. And all of them gave different results. A couple of things I wanted to mention: what was really important for me was getting statistical information. So don't just make this stuff up; actually use solid research and statistics. All the tools presented relatively the same information, but often in very different ways, with some different things. So first, looking at Perplexity. If I were to declare a winner, and that's not really the purpose of this, Perplexity definitely did the best job. One thing it did really well was citing its sources. They were tagged inside the content, but it also had kind of an appendix at the end. And a lot of it came from research-based things like studies and government sites, things with a little higher trust level than, say, a Reddit discussion. It was really nice because it was also very easy to go in, look at those sources, and see what they did. The other thing I really liked about Perplexity, though this shouldn't be a deal breaker for anybody, is that out of the box you can export really quickly into a document. You don't have to copy and paste; you can just export it. But overall, it gave really good results. And like we said, you can see all of these results in the download in the show notes. So for anybody thinking, well, you're not showing it on the screen: you're correct, because the results would be really long. But there were some things that frustrated me about Perplexity. One, obviously we know there are some challenges around the legitimacy of the information it's gathering and whether it's actually allowed to gather it. As a person creating content, you obviously don't want to be citing sources you're not actually allowed to cite, because you don't have permission and neither did the AI. So you're going to have to look at that.
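
For anyone who wants to script this same research question instead of using the chat interface, here is a rough sketch against Perplexity's OpenAI-compatible API. The base URL, model name, and citations handling are assumptions to verify against Perplexity's current docs, and the API key is a placeholder.

    from openai import OpenAI

    # Perplexity exposes an OpenAI-compatible chat endpoint (assumed here).
    client = OpenAI(
        api_key="YOUR_PERPLEXITY_API_KEY",  # placeholder
        base_url="https://api.perplexity.ai",
    )

    QUESTION = (
        "Using solid research and statistical information, provide the top "
        "reasons organizations do not invest enough in making their digital "
        "content accessible."
    )

    response = client.chat.completions.create(
        model="sonar",  # assumed model name; check current docs
        messages=[{"role": "user", "content": QUESTION}],
    )
    print(response.choices[0].message.content)

    # Perplexity's API also returns source URLs with the answer, so you can
    # audit where each claim came from; the field name may vary by version.
    for url in getattr(response, "citations", []) or []:
        print(url)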

COLE 12:44
How do you verify that?

VIRGIL 12:46
And that is the million-dollar question. There's not a really good way. Take Perplexity: one of the lawsuits it's in is with Amazon, and Amazon is saying Perplexity has been scraping its content without permission. There's no real easy way to tell where content came from, unless it cites Amazon and you actually know Amazon is one of the sources it's not supposed to use. But it's not like these tools are advertising that. So you somewhat have to use your own judgment in that area.

COLE 13:22
Yeah, you could ask Perplexity: hey, are these sources you're pulling legally viable to use?

VIRGIL 13:30
So another thing that goes along with this, and again it can be very frustrating, is that a lot of these tools use models; they all have a model behind them that they're using to build the AI. With a lot of tools like Perplexity and Jenni.AI, the ones that are not building their own models, you're able to select different models. But of course, in Perplexity you have to pay to be able to do that. The reality is that in the free version, you don't even know what model it's using. Plus, you also have to have knowledge of the models, and if you read through them, it's really hard to figure out what kind of model will get the best results. So there are just some things along those lines that you have to be very cautious about. At the research stage, you might get really good information, but you're going to do a ton of work around it, from the standpoint of having to actually know what's going on there. So that's just something to really keep in mind.

COLE 14:35
Yeah, that's interesting stuff. So what about ChatGPT? What was your experience like using this tool?

VIRGIL 14:41
Well, ChatGPT is obviously one of the originals, and interestingly, one of the first things I found is that it didn't cite sources out of the box. Perplexity had already cited its sources just based off my question, but for ChatGPT I actually had to add additional language to the question, "and cite sources," to get it to actually give me the sources. Because nobody wants to do a bunch of research and then have to do a bunch more research to figure out where that research came from; that's just the worst case. Once I added "and cite sources," it gave me the information, but I had to ask for it. And that points to one of the big things about prompts. You and I were even talking about maybe doing a discussion about prompts at the very end of the series, about how you have to be very specific. If anybody has tried generating images, you know that to get to the specific image you have to ask a lot of different things. So we'll be looking at the prompts throughout. The other thing is that, because it didn't get it the first time, I had to re-ask the question and figure out how to get it to give me back what I wanted. And probably one of the biggest downsides of any AI is that it kept giving me different answers. The first one had really good stats but no references. The second one didn't have stats as good; it gave me a lot more fluff, but it also cited sources. So you have that challenge of figuring out what to do, and you can't necessarily repeat something really well.
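
That run-to-run drift is partly a sampling setting. The ChatGPT web app exposes no knobs for it, but over the API you can dampen it. A minimal sketch, again assuming the OpenAI Python SDK, with the model name as a placeholder and the seed parameter being best-effort rather than a guarantee:

    from openai import OpenAI

    client = OpenAI()

    QUESTION = (
        "Using solid research and statistical information, provide the top "
        "reasons organizations do not invest enough in making their digital "
        "content accessible. Cite your sources."  # the instruction ChatGPT needed
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": QUESTION}],
        temperature=0,  # less random sampling means less answer-to-answer drift
        seed=42,        # best-effort reproducibility; not guaranteed by the API
    )
    print(response.choices[0].message.content)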

COLE 16:08
How was the quality of the sources with ChatGPT in comparison to Perplexity?

VIRGIL 16:14
I'll say ChatGPT used a lot of statistics, but it also made more assumptions. If you read its research output, it just seemed very opinion-driven, which wasn't the case with Perplexity. But one thing I really liked is how it cited the sources inside the content. It made it very easy to know that this statistic came from this source and that statistic came from that source. That was very handy. So overall, it didn't perform badly, but Perplexity is more known for research-based AI than ChatGPT, and ChatGPT is the one I feel can go off the rails a lot.

COLE 17:00
Yeah, and you can't export your answers from ChatGPT, correct?

VIRGIL 17:07
Right, yep.

COLE 17:08
So that's kind of a con as well.

VIRGIL 17:09
Yeah. And one other thing I liked about Perplexity versus ChatGPT: in its summary, Perplexity actually gave a kind of statistical significance, a rating of how reliable it thought the research really was. It found stats and then said how confident it felt about those stats, sort of: did this come from a definitive source with a good reputation? So it was a little step above.

COLE 17:36
So it's probably the more research-based AI tool that you want to use. It's the one that excels the most.

VIRGIL 17:45
Well, until you start talking about Copilot. Copilot is a little bit interesting in that it uses ChatGPT, or rather GPT from OpenAI, as its backend. But at the same time, it does some more interesting things: if you're running Copilot inside your 365 environment, it can actually leverage both external sources and internal sources. Though much like ChatGPT, it doesn't do that out of the box; you actually have to tell it to reference external and internal sources. Obviously, I've written a lot and done a lot of presentations around accessibility, so I wanted to pull that in as well, and it was actually really knowledgeable there. It can combine your internal knowledge with your external knowledge, and it did a really good job of showing me what knowledge was coming from external sources versus internal sources, which was interesting. A lot of the internal sources were actually the article we wrote but never published in the first place, so that was kind of funny. But overall, it's just not the nicest of interfaces. It's very simplistic, which is probably nice for most users, but you don't totally know what it's doing all the time. And much like ChatGPT, I couldn't export; I had to copy and paste, and I just hated that. It doesn't give you that quickly. You can add an additional statement, but then it'll probably get you different results anyway, so if you didn't ask for it up front, it's not really going to work. But it is really nice to be able to access your internal resources and prioritize those in the information. That gives you a definite leg up in research, especially if you're researching around your own things.

COLE
Yeah, I'm sure that depending on what type of research you're trying to do, Copilot might actually be better. But in our case, Perplexity seems like it's kind of the top dog.

VIRGIL 19:45
Yeah, though, like you said, the coolest thing about Copilot is that it used a bunch of emojis in its answers, little icons in there, which I don't recall asking for. But overall, I felt like the statistics were similar. They all pulled from different sources, and they definitely all went out and got some opinionated information from places like Reddit and other discussion forums. It really comes down to: can you use AI for research and analysis? Yes. And we haven't really talked about analysis, like analyzing big data sets, but I don't feel like that's the best use of our time in this session. Overall, you still have to be the researcher of the research. That's really what it comes down to. You need to become the researcher of the research, validate it, and make sure it's trustworthy. Eventually you'll get to a point where you have some trust around it, and you'll know what works because you'll know the sources it comes from, and you'll have some confidence. But overall, you're always going to have to play that editor role to make it work well.

COLE 20:53
Right. And we will be verifying all of the research that Perplexity produced for us and using that research in the creation of the actual article itself in this series, which will be the next episode.

VIRGIL 21:08
Next episode, yep.

COLE 21:10
All righty. So yeah, thanks everyone for tuning in. We had a great time talking about this stuff, so keep an eye out for the next one.

VIRGIL 21:16
Just a reminder, we'll be dropping new episodes every two weeks. If you enjoyed the discussion today, we'd appreciate it if you hit the like button and leave us a review or comment below. To listen to past episodes or be notified when future episodes are released, visit our website at www.discussingstupid.com and sign up for our email updates. Not only will we share when each new episode drops, but we'll also be including a ton of good content to help you in discussing stupid in your own organization. Of course, you can also follow us on YouTube, Apple Podcasts, Spotify, SoundCloud, or really any of your other favorite podcast platforms. Thanks again for joining, and we'll see you next time.