Dave Dougherty Media


Ep 38 - AI Fails and Evaluating AI Tools Insights

Watch the YouTube video version above or listen to the podcast below!


Ep 38 - AI Fails and Evaluating AI Tools Insights Podcast and Video Transcript

[Disclaimer: This transcription was written by AI using a tool called Descript, and has not been edited for content.]

Dave Dougherty: Hello all. Welcome to the latest episode of Enterprising Minds. Alex Pokorny, Dave Dougherty here. Alex recording remotely for the first time ever. Looking good in your little closet. So shout out to the YouTube version. If you haven't checked that out yet, we do have a video version of this.

Go check that out and see our amazing backgrounds.

So Alex, you had the main idea that we were going to talk about today, so I'll let you cue that up and we can start the discussion. Before that, though, another shout-out to like, subscribe, and share; that really does help us. And we do have an email address in our descriptions. You can send any questions, comments, or show ideas to us so that we can make the show better and address some of the things you might have questions on. So, Alex, your idea.

Alex Pokorny: All right.

Testing AI Tools with 30 Questions

Alex Pokorny: So I figured it was about time to do a check-in with different AI tools. And this was kind of an interesting one, because it was a set of questions that any common AI tool out there should be able to answer.

What I had was a set of 30 questions, and it was basically country audience insights. The first one being like Austria versus Australia versus Belgium, Brazil, China; it basically just went down the list. It had some basic stats data and then got into some harder-to-find audience insight data.

So the test was basically: okay, how far can it go? Can it actually give me all 30 answers, and can it give them to me in a format that's actually useful, so I can just do next country, next country, next country? Can this thing actually be an audience insights researcher, like a basic marketing researcher, and tell me, hey, if I've got budget, where should I put it?

Should it go to Brazil, or should it be Belgium, and stuff like that, and give me the data. So the data points I was requesting went from, like I said, easier to harder, but all are available online. The easier data would be GDP, population, adult population, and then it got a little bit harder: number of power plants, number of restaurants, number of medical schools. That last one is actually easy.

Medical schools are an extremely easy number to find, because those are public, accredited institutions; they very publicly exist. Power plants are a lot harder, but typically there are some general stats about them. Sometimes those stats get a little dated.

So it's also interesting to see what number the AI tool might pick. Would it pick the 2017 number, or something a little more recent, or would it try to make an assumption based on country size or something like that and pick something entirely different? That was kind of interesting.

Restaurants are a more difficult figure to find. And then at the end of the 30 (which I think is actually an interesting point) were also two easy questions: percentage of English speakers and percentage of Spanish speakers. This was basically from an ad copy translation perspective, but those two data points

you can get off of Wikipedia, you can get from the World Bank, you can get from the CIA World Factbook, you can get from a million places. Those are basic population stats, basic figures; ones like those are easy to find.
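For readers who want to reproduce Alex's experiment, here is a minimal sketch of the test as described: one prompt per country, asking for each data point along with its source. The exact prompt wording is an assumption, and the country and data-point lists are partial reconstructions from the conversation, not his actual prompts.

```python
# A minimal sketch of the country-insights test described above.
# Country and data-point lists are partial; prompt wording is assumed.

COUNTRIES = ["Austria", "Australia", "Belgium", "Brazil", "China"]  # partial list

# Ordered roughly from easier to harder to find, as Alex describes.
DATA_POINTS = [
    "GDP",
    "total population",
    "adult population",
    "number of power plants",
    "number of restaurants",
    "number of medical schools",
    "percentage of English speakers",
    "percentage of Spanish speakers",
]

def build_prompt(country: str) -> str:
    """One prompt per country, asking for every data point plus its source."""
    wanted = "\n".join(f"- {point}" for point in DATA_POINTS)
    return (
        f"For {country}, give me the following data points, one per line, "
        f"each with the year and the source it came from:\n{wanted}"
    )

# Paste each prompt into the assistant under test, then compare the answers
# against a reference source such as Wikipedia or the World Bank.
for country in COUNTRIES:
    print(build_prompt(country), end="\n\n")
```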

Challenges with AI Data Accuracy

Alex Pokorny: So I ran the test, starting with Copilot, and this was a paid, enterprise-level version of Copilot.

I did a couple of questions and they seemed okay. So then I asked, okay, let's see if it can handle all 30 at once. And it started pumping out answers for me, so I was like, oh my gosh, this is fantastic. It did take 10 to 15 minutes each time to run, but okay: I got Belgium done, I got Brazil done, I got China done, started getting through the data. And then it stopped.

And it said, this topic is complete, let's start a new conversation. And I couldn't do anything aside from clicking to start a new conversation. So I was like, weird. I mean, I had my prompt and my format and everything down; I thought we were doing good here. Okay, fine, start a new one. And I got sent into a different model, because when I started asking the questions, it would only give me up to basically three answers at a time.

It was always very verbose, and it always ended with an emoji, which was just funny, because I was like, remove the emoji. And it was like, okay, no problem. Da da da da da. And then there was a smiley-face sun emoji at the end of it. I was like, in your last response, is there an emoji? Yes. Sorry about that.

We'll try this again. Da da da da da. Smiley-face sun emoji at the end again. I was like, oh my gosh, just get rid of this stupid emoji. It's a very basic thing, but it definitely gave me this more casual response. It also would not put things into a single-column format, which is what I wanted so I could update an Excel sheet, with all these countries running down in a line.

It wouldn't do 30. It would give me very, very short responses, and the response length was basically the same each time. It was pretty obvious to see: sometimes it would give me three, sometimes maybe four responses to my data points, but it was always very verbose, lots of sentences for each one, and it would always stick to the same length.

So I looked into it a little bit to see if there is a credit system, to try to figure out what happened to my enterprise professional model. Did I run out of credits or tokens? What was the issue there? There might be one, but there wasn't any documentation that said it would switch models on you, that it would basically downgrade you to this public model, like the one you find in Copilot online where you can pick more precise or more casual or anything. I got that model suddenly, when I shouldn't have.

Dave Dougherty: Yeah.

Issues with AI Models and Prompts

Dave Dougherty: I know for me, when I've played around with Copilot, I've been really unimpressed. The sales video, the announcement video that they had, was awesome, and it was like, oh, I could save so much time. But then the devil's in the details, right? The way you describe things, there's a couple of things that come to mind.

One is, how did you prompt it, specifically? Secondly, the memory that's allocated for the task at hand for the model is another thing that I've run into with my own experiments. And then with all of the Copilot pieces, it's like, yes, there's Copilot for Excel, there's Copilot for Word, there's Copilot for whatever, but it's not a single Copilot.

The brand name would make you think it is, but it's Copilot for Word, it's Copilot for the individual things. So there's not that cross-sharing that would make it really powerful. They end up being smaller models that do specific tasks. Every time I've tried to use Copilot on an Excel sheet, where I would gain the most value from it, it breaks, or it says I have to convert this to a table and then we can do stuff, and then it breaks.

And now my sheet looks like crap and it's broken.

Alex Pokorny: at the same experience in the last week. Basically, it was like, okay, Copilot can do this at least. And I was like, No, no, I cannot even after creating the table, even after doing whatever else it's like, I can't do that. It's like, great,

Dave Dougherty: Right.

And I don't know if you had this experience too, but the quote-unquote help documents that they have, or the webinar recordings or whatever, it's all like: summarize my emails, tell me what tasks I didn't do. And it's like, okay, that's great. That helps me for like 15 minutes at the end of the week.

What about the rest of the week?

Comparing Different AI Models

Alex Pokorny: Yeah. I'll give a kind of TLDR real quick here, going through all of these different models, and I'll talk a little more about Perplexity, Gemini, ChatGPT, and some of the differences that I saw. In short, these models are not ready.

They're not ready to replace an entry-level employee. They're not even ready to replace an intern. An intern would have picked up some of the basics that I was requesting and would have been able to learn from them. I mean, their memory, of course, is far superior to the models' memory, right?

The fact that we even say that is crazy, but these models are starting to have a memory to them. They just don't have an extensive one, and they're not able to apply a lot of logic to it either, to really say, okay, you've been requesting this sort of information, let me keep going down this trail and anticipate the output, versus having everything be very prompted. Which is another piece of it.

The idea of prompt engineering, of having to craft a well-stated prompt, basically means we have a usability hurdle right now. The average individual off the street does not know the specifics of how to write a prompt, and if they can't go from their natural language, the layman's natural language, to a good output and a good result, that means there are still usability issues that need to be worked out. So there is a lot there, but it was an interesting check-in.

One other funny thing. I mentioned English-speaking percentage and Spanish-speaking percentage. So I did Belgium, Brazil, China; Austria and Australia, I think, were on the list as well. The country list is random, but it had to do with a Google advertising policy list. So, adding data to this policy list: the percentage of English speakers

for Belgium was 58 percent, for Brazil was 58 percent, for China was 58 percent. Also, Belgium apparently has 2 percent of the population speaking Spanish, and Brazil also 2 percent, and China also 2 percent. They also apparently all have the same number of medical schools, according to Copilot: eight. They all have eight.

Which amazed me. Brazil, huh? Only 2 percent of Brazilians speak Spanish? Just obviously wrong data. I can grab it off of Wikipedia, I can Google it, and I can grab that data. That is not a hard data point to find, you know, language speakers, and to realize that maybe, just maybe, a larger percentage of people in Brazil speak Spanish than in China.

Dave Dougherty: Well, and that matches my own experiments too. For my own personal use, I pay for Google Gemini; I had a personal Copilot subscription, which I have canceled because it just was not useful; and I do pay for ChatGPT. Honestly, for planning something, doing math, finding quick-hit information type things, Google Gemini is fantastic. For really complex things, where I want to explore an idea, or have it do step-by-step things, or remember a whole lot of context, I go to GPT a lot. Now, I do have a free version of Anthropic. It's been a while since I've used it, and they've recently done a bunch of updates to their models.

So I really should get back and play with those. Yeah.

Real-World AI Use Cases

Dave Dougherty: But for example, one of the best use cases I've discussed recently, and I forget whether I did it on the podcast or not: when Apple had their developer conference, I opened up ChatGPT on my phone and I just said, don't generate anything.

I'm going to be watching this presentation. I'm going to give you my notes from this presentation. When I tell you I'm done, then we can start, you know, working with what I've input. It said, okay, great, let me know. So then I would just hit the voice thing to dictate thoughts as I was watching it, and then would go through.

And as I was doing that, I realized, okay, a lot of the comments that I'm naturally coming up with are about the way in which the information is being presented. You know, I have an MBA in marketing communications, so that's sort of an obvious lens for me, right? And it was like, oh, this is really interesting, because you have Tim Cook

surrounded by trees and grass with this really modern building; he's on the rooftop at, you know, the Apple Infinite Loop. Every single shot that they had was either a really modern architecture look or a lot of nature. Which was in stark contrast to the sort of startup coffee shop vibe that OpenAI had when they did their show-and-tell for their, you know, desktop version and whatever else. And then Google:

they have an outdoor amphitheater, but it still looks really corporate. Looks like AstroTurf.

Alex Pokorny: Yeah.

Dave Dougherty: Yeah. Very, very...

Alex Pokorny: Set up very intentionally.

Dave Dougherty: Yeah. Very visually stark differences, right? OpenAI and Google also did live demonstrations and live presentations. Apple was all pre-recorded, so they were able to take advantage of that with a lot of outlandish transitions, like their intro, where the presenter jumps out of a plane and lands into the thing.

So it was just over the top, a way different vibe, and much cooler than, you know, the CEO walking on stage to a bunch of people clapping, which is totally on brand for them. So kudos for sticking to the brand. But I recognized that, okay, my intention initially was to write a blog on the latest event, right?

The latest insights, whatever else. Once I had all of that in the memory, all my notes in memory, I then said: here's the context, here's who I usually talk to and the platforms I usually use. So blog posts, LinkedIn, targeting marketers at this level in their career who tend to like this, that, and the other. Also, you know, check for spelling and grammar, take your time.

Because for whatever reason, telling it to slow down gives you a better result. And then I also said, take notice of the way in which I spoke my notes into this conversation, so the outputs would be similar to my voice and the way in which I present things. Boom. That outline was so good for what I was trying to do.

And that blog post is up on my website; I can put a link in. Now, the actual blog post is me, it's not the AI. But it gave me an outline: here are the topics that you kept going to along these marketing communication lines, here are the specific notes that fall in line with each. I also uploaded all of the URLs related to the announcement, you know, so:

here's this source, here's that source, here's this source. It included those links in the outline. So then all I had to do was jump into Word, hit dictate, boom, now I have my blog post based on that outline, and I'm done. When I was on the agency side, that would have been two, three days' worth of work: actually attending the event, taking notes, writing the blog, editing the blog. Now it's an afternoon. So, long story long, the main thing there is not only having the memory, but then, with the prompts, specifically calling out

who you're targeting, the purpose of what it is you're doing, and then what the expectations are. I find those go a very long way toward getting the proper result. Now, if you did that in Gemini, I don't know that it has the same memory capacity; I haven't tried it yet. Might be interesting to try out. But I find that Google just, like, it'll say, great,

yeah, I can do that, and then the output won't be what it said. Like, it struggles a lot with tables.
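Dave's prompt structure (context, audience, task, explicit expectations, and a voice cue) can be summarized in a rough template. The field names and wording below are illustrative assumptions, not his actual prompt.

```python
# A rough template of the prompt structure Dave describes. Purely a sketch;
# every field name and expectation line here is an assumption.

PROMPT_TEMPLATE = """\
Context: {context}

Audience: {audience}

Task: {task}

Expectations:
- Check spelling and grammar.
- Take your time; don't rush the structure.
- Match the tone and phrasing of the notes I dictated earlier.
- Include the announcement URLs I provided as sources in the outline.
"""

prompt = PROMPT_TEMPLATE.format(
    context="My dictated notes on the Apple developer conference keynote.",
    audience="Mid-career marketers who follow my blog and LinkedIn posts.",
    task="Draft a blog post outline that groups my notes by theme.",
)
print(prompt)
```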

Alex Pokorny: You know, that was my experience as well. And one funny thing: I was able to Google search for a lot of these different data points, searching for these audience insights, and get a quick answer right there on the SERP that was accurate, and it was the data point that I needed. I would check sources, I would go through alternative sources, different links and all the rest, but really, that little snapshot at the very, very top of the search result page was what I wanted.

If I asked Gemini, it wouldn't give me that data. So there was a very clear disconnect between those two systems, or at least the algorithms available to each one. I was able to test that for multiple countries in a row, and it was consistent: Gemini would come up with different figures than whatever the snapshot gave me.

So the Perplexity one, this is just a funny one. I ended up getting into about a 45-minute argument with Perplexity.

Dave Dougherty: Does that fall under arguing with yourself or does that?

Alex Pokorny: Oh, is it like arguing with someone online, or is it like old man yelling at clouds? Or is it just a waste of time? Because I'm sure it's a waste of time, but I was trying to get it to admit that it kept making mistakes.

There was one, power plants, where it said the country had 10, and there are easy stats that show it's over 150. And for the same country, it said there were 30,000 restaurants, for which you can find stats. I would provide it the source and it wouldn't pull the data. Look at it: there are like 1,600, not 30,000. 1,600.

It basically over-inflated that number a lot, and the power plant number it had decreased a lot. So it was wrong in two very different directions for this particular country. And I kept providing it results and sources and being like, look at this source, what is the data point?

And it would even say things that were completely wrong. Or I would ask, what are the sources that you've been getting your data from? And it would give me a list of sources that typically were privacy policies from a bunch of different sites, and they were broken links. Almost every time it was a broken link, so I would even ask it:

check that link, tell me where that data point appears. And it would say, oh, this is the data point, it's on the website. The page doesn't work. And it kept going back and forth until it would finally say, yes, the page doesn't work. And my gosh, it was so frustrating, because it would not admit fault and it would not correct itself.

It would instead claim that its source was correct. It was basically argumentative when I pointed out that the data was wrong. Whereas most of the models would say, oh, sorry, and then kind of try again. You know, they have that very apologetic response, but they're very quick to correct themselves.

This one was not, not at all. And the data out of Perplexity was by far the most wrong of any of these that actually gave me data back. It was astoundingly wrong, basically across the entire board, for just basic, simple stats and stuff like that too.

Dave Dougherty: Wow.

Alex Pokorny: That one was the most frustrating, because I've had such good luck with Perplexity in the past, when it was more around text and content creation.

Perplexity was good. But this was a research project, a data-based research project with online-available data. And like I said, most of these data points you can Google and find pretty quick, or there are a lot of different data sources, because it's basic demographics. And it was astoundingly bad.

Every model also had one other failure point, which I thought was interesting, again to the point of prompt engineering. If I gave it 30, it would give me the first two accurately, and after that it shifted into less accurate, and then eventually said data not found, data not provided. Or it would give me the first two, which was literally population and GDP,

and then just say data not found for all the rest of them. If you switch up the order, or you ask just a couple of data points at a time, then you get a lot better data, a lot better results back. Not necessarily always accurate or right, but it actually will give you data, versus just saying not provided, just nothing.

Dave Dougherty: Right.

Alex Pokorny: And formatting is a giant pain. I was trying to get these tools to actually recognize that I wanted a single column, one value per row. I can't tell you how many different ways I tried to explain that to different models. ChatGPT got it immediately. Copilot got it with some effort. Gemini struggled, but eventually kind of got there.

Perplexity had no idea and would basically refuse to do it. It just kept giving me data points, and I was like, no, this is not a column. Oh my gosh. Seriously, it was such a frustrating experience to argue with a robot. What a dumb waste of time. But in short, each one of them took a long time to search,

a long time to find data, and in the end I basically had to delete the entire worksheet. So it was a waste of three and a half to four hours of work, because I had tried all these different systems and I couldn't trust any of the data points, even when they did give me numbers. So, nothing.
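Two workarounds emerge from Alex's experience: ask for only a couple of data points per request, and pin the output format down explicitly. Here is a hedged sketch of both; the batch size and the exact format wording are assumptions.

```python
# A sketch of the two workarounds described above: small batches per request,
# plus an explicit single-column, no-prose, no-emoji format instruction.

from itertools import islice

DATA_POINTS = [
    "total population", "GDP", "adult population",
    "number of power plants", "number of restaurants",
    "number of medical schools", "percentage of English speakers",
    "percentage of Spanish speakers",
]

def chunks(seq, size):
    """Yield successive batches of at most `size` items."""
    it = iter(seq)
    while batch := list(islice(it, size)):
        yield batch

def batched_prompts(country: str, per_request: int = 2):
    """Small batches were answered far more reliably than all 30 at once."""
    for batch in chunks(DATA_POINTS, per_request):
        wanted = "; ".join(batch)
        yield (
            f"For {country}, give me only: {wanted}. "
            "Output one value per line, no prose, no emoji, in this exact "
            "order, so I can paste it into a single spreadsheet column."
        )

for prompt in batched_prompts("Belgium"):
    print(prompt, end="\n\n")
```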

Dave Dougherty: Yeah.

The Importance of Human Oversight

Dave Dougherty: And that's the thing. When we first started talking about AI, one of the things I said I was worried about, and this is just a perfect example of it: I trust that someone like you or someone like me, who has a healthy skepticism of tech while also wanting to be sort of an early adopter of it,

you know, we do the due diligence. We check it. We make sure that it's okay. But how many people are being assigned similar tasks and just saying, well, it told me this was correct? So now they're submitting that to, you know, strategic plans. That's the kind of thing that makes me nervous, where, you know, okay,

do you need to have a fact-checker on the team before you actually put the PowerPoint together? I mean, that's the thing that gets concerning, and that every business will have to deal with moving forward.

Alex Pokorny: Yeah, I'm with you on that. Just a couple of days ago, I tried to get it to give me some information I thought would have been easily available off of Google's privacy policy and data transfer consent documentation.

There were a couple of specific questions about what kinds of data points are being transferred during this particular operation. It very quickly gave me a list back, and it looked right. I asked for a source and it gave me this privacy policy. I went to the privacy policy, and most of that information was not found there.

If you searched each individual one, Googled each one, most of them were correct, but there was one still in the list that was wrong. And that one actually would have made or broken the argument that I was trying to make, as well as really jeopardized the legal aspect of how we're talking about data transfer policy.

So it was a key element, and it was buried in a list. And that one just happened, from my experience, to catch my eye, because I said, huh, that one seems odd; I don't think that's quite right. I was able to remove it, and the rest was accurate. So it's that same concern: if I had just trusted it and quickly sent it, you know, I was running low on time,

I could have just grabbed the list, shot it into an email, and sent it. I would have messed things up. I mean, it would have been wrong. So, yeah, it was just funny.
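The manual check Alex describes, confirming that a cited source actually loads and actually contains the claimed figure, can be partially automated. A minimal sketch follows; the URL and value are hypothetical, and a real check would need more than a substring match.

```python
# A minimal sketch of source verification for AI-cited figures.
# Illustrative only: a substring match is a crude stand-in for real review.

import requests  # third-party: pip install requests

def source_supports(url: str, claimed_value: str, timeout: int = 10) -> bool:
    """Return True only if the page loads and contains the claimed value."""
    try:
        resp = requests.get(url, timeout=timeout)
        resp.raise_for_status()  # broken links were a recurring failure mode
    except requests.RequestException:
        return False
    return claimed_value in resp.text

# Hypothetical example: does the cited page actually mention "1,600"?
print(source_supports("https://example.com/restaurant-stats", "1,600"))
```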

Future of AI in Marketing

Alex Pokorny: This Old Marketing just had an episode where they talked about a decline in marketing. And one of the theories they had was basically that there's a concern right now of, you know, should I hire a marketer, or will they just get replaced by AI in a couple of years?

What's the point? And from what I can tell: no. Like, honestly, no. I still have to rely upon my experience, my knowledge. And there's a whole lot more to it. Even let's say I got my 30 points for all my countries or whatever else. Great. What are you going to do with that? You've got to package it up.

How are you going to explain it? How are you going to communicate it? Who are you communicating it to? How are you going to make it useful for an organization? Just collecting a bunch of data is not useful for anybody. Communicating it, and making sure that it's useful and is used, I mean, that's when it actually hits and makes a difference to the bottom line.

Again, that takes experience. Again, that takes human effort and skills, and knowledge of, you know, personalities and politics and everything else. So...

Dave Dougherty: Yeah.

Alex Pokorny: It's not there yet.

Dave Dougherty: There is a line of thinking too, where it's like, you know, true differentiation in marketing, or branding, or, you know, running a company:

if you really distill it down, it truly is what you communicate and how you communicate it. Mostly how you communicate it. And that's the thing that can't be replaced. The importance of brand, and sticking to that brand, like we said with Apple: it informs creative choices, it informs visuals, it informs the color of your walls. So yeah, it is an interesting time. And I know there was a recent thing where, I think it was Chevron, someone high up at Chevron said, we are seriously evaluating whether or not all of the Copilot licenses we've paid for are worth it.

You know, they're $30 a month for however many people they bought them for. And if all you're getting is email summaries or, you know, calendar summaries, which I would bet is all it is for like 70 percent of people, and they're super stoked about that, I don't know that you could justify that volume of licenses.

Alex Pokorny: There are also so many alternatives. It's funny, I was just on a call with an agency, and I had Teams going, a Teams call with recording and transcript and then the AI summary basically being a part of that recap. So Copilot is just kind of auto-installed as a part of that.

So everybody starts joining the meeting, and one of the first participants from the agency to join is an AI note-taking bot. It was a couple of minutes later before somebody from the agency actually joined, and I was like, oh great, we've got two different AI recording tools going in the same meeting,

both taking transcripts and making notes on the thing. And it just showed, you know, there are alternatives, and you can run multiple; you could even try testing out some other ones. If that's all you're using Copilot for, that's a pretty limited usage, and yeah, 30 bucks a month:

Is it worth it? Are you actually getting that kind of value out of that? That's difficult to say.

Dave Dougherty: Yeah. Well, and then, you know, for the lawyers out there, the amount of things that you will have to go through for discovery now, since everything is transcribed and everything is, you know, saved to some database...

Alex Pokorny: You just need a better AI bot to do the e-discovery now.

Dave Dougherty: Which, you know, as we've talked about, is so good at discovering.

Alex Pokorny: And finding information and getting it wrong. Oh, that's the other thing I found with Copilot: it still struggles with accents. With an American accent, the transcript is pretty solid. French accents sometimes are okay, but man, you start picking other countries...

If someone is speaking English but has a strong accent, completely understandable to anybody listening, the transcript is wrong. You keep running through it and it's like, that doesn't make any sense, that's an error, it's transcribed wrong, I don't think they meant that. I tried to do a recap of a meeting that I didn't attend, and I was going through the transcript, and eventually it was like, ah, forget it, I'll just watch the thing, because the transcript got pretty bad at times and the information got pretty chopped up.

Dave Dougherty: Yeah, yeah, that's where Google, I think, has a bit of a head start, because of how much they've devoted to voice search and voice. And that is the interesting thing too: if you're not paying attention to all of the updates, which is really hard to do with all the AI companies, because every week there are multiple, each one seems to be covering a different task or a different grouping of tasks. So it might make sense that you end up having multiple AI helpers to cover those different things. And this is where, as we've talked about, strategically I think small businesses and creators have an upper hand compared to the enterprises, just because

they're freer to switch, or to just say, hey, you know what, I want to experiment, I'll pay the $20. Whereas in the enterprise, you know, go to sourcing, beg, borrow, steal, get the proper P.O., get them okayed as an approved vendor, and you're like, all right, six months later I can do the thing I wanted to do, you know, a...

Alex Pokorny: A year ago. Yeah, some time in the past. Yep. Absolutely. And now it doesn't seem important anymore.

Dave Dougherty: But I'll find a new thing. Yeah, yeah.

Conclusion and Call to Action

Dave Dougherty: So as we go to wrap up, what have you found it to be useful for? I mean, we talked a lot about how it broke, or how it's not doing everything completely perfectly, but what are some of your takeaways on what it's good for?

Alex Pokorny: I'm still back at my same analogy from over a year ago now: it's basically a blank page killer. That's what these tools are best at. If you're doing content creation, early brainstorming, early outlines of presentations, that sort of thing, it is pretty solid at giving you something good back from an idea. Or, you know, hey ChatGPT, what are some signs of a really well-functioning, advanced SEO team?

And it'll give you maybe 10 things, and you probably thought of nine of them, but maybe you didn't think of the 10th one. You know, it's starting to fuel your thoughts and push you along that brainstorming path, toward that mode of creativity, to produce something. That kind of thing it's definitely useful for. Once we start getting into requesting data and having it be a data assistant or research assistant, though,

there are a lot of struggles there, and the credibility becomes key. I mean, it's okay if I ask for 10 signs of an advanced SEO team and you don't give me one of them; that's fine. If I ask for data points and they're wrong, the data is trash, it's just unusable. So, not there yet, but I definitely see them moving in that direction, getting more intelligent about it.

The memory aspect is becoming important. And Dave, I do have one quick question for you. The blog post that you wrote by basically speaking into it, having it understand your tone of voice and then respond: is it able to maintain that for the future? Will it remember how you spoke to it that one time? If you gave it another topic, would you still be able to get your tone back?

Dave Dougherty: With ChatGPT? Yeah, they've worked a lot on memory, on going through your old conversations, which is cool: you can have the callback of, hey, go through our previous discussions and find this particular thing. Or, you know, you can go create your own custom GPT and have those as training data, as an example.

So yeah, I've been playing around a lot with that blank page piece again. I think going through the activities that I do on a regular basis, and then trying to figure out where I can automate those, that's been where I've had the most success. Because even if it's only saving four or five clicks every single day, across an entire work year,

that ends up being a lot of time, to be able to either, you know, have a cup of coffee or think slightly deeper on the issues at hand, right? So I've been experimenting a lot with the prompting piece of it, like how much context am I giving it. My kind of pro tip for it,

and I know you and I have talked about this, I forget if we've recorded it or not, but I'll just say it again: give it the context of who you're writing for, what audience you're trying to address, and some of the characteristics of that audience, like the persona piece that you should have if you're creating content or doing any kind of marketing. But then also tell it what you know and what you like; if you have examples of the writing style, or previous things you've done to target those audiences, that extra information ends up being important for the outputs. And even just the way that you phrase things can end up impacting the outputs. This whole conversation just reminds me of when the search engines were first around. They spent so much time and effort to train people how to search, right? We take that for granted now, but I mean, that was a thing: "restaurants near me."

Alex Pokorny: Yeah, absolutely. Nobody, nobody talks that way, but everyone searches that way.

Dave Dougherty: Right, right. And so now I feel like we're in the very same place, where we're learning to prompt while these companies are simultaneously attempting to figure out what we mean when we give them a bad prompt,

Alex Pokorny: Yeah.

Dave Dougherty: and then to, like, auto-correct your prompt to give you the output you're expecting. So it's going to be interesting to see how this develops. I think we'll come back to these types of conversations quite a bit as we try to figure it out. And I would be very curious, for anybody that's listening: what have you tried?

What have you run into recently? Is this a shared experience? Because I know Alex and I have had these experiences quite a bit in testing these other options. Let us know. The contact information is in the description, or reach out on any of our social channels; I'm on those, I'm checking those.

So thank you for listening. I hope this was helpful, and we'll see you in the next episode.

Alex Pokorny: Take care.