Episode 21 - Can We Trust AI Journalists? Exploring the Future of News
Watch the YouTube video version above or listen to the podcast below!
Main Topics:
In the Journalism and AI: Navigating Ethics and Trust episode of Enterprising Minds, hosts Dave Dougherty and Alex Pokorny discuss the implications of AI in journalism and media. Key topics include:
Trust in AI-Generated Content: Dave and Alex examine the trustworthiness of articles written by AI, particularly in light of the Sports Illustrated controversy involving AI writers using fake names and photos.
"Do we trust an article written by AI? If it had a fake byline and a fake author photo, would we trust it a little more?" - Alex
Ethical and IP Concerns: They delve into the ethical issues and intellectual property (IP) challenges posed by AI in journalism, including cases where AI-generated articles are passed off as human work.
"Ignorance is the worst comeback... It's on your site; if it's a third-party vendor, you approved them." - Dave Dougherty
"You can't copyright AI-generated content; there's no human to associate those rights to." - Alex Pokorny
AI's Impact on Creativity: The discussion extends to AI’s influence on the creative process, exploring the fine line between inspiration and plagiarism in AI-generated content.
"The only real difference [between AI and human creativity] is the fact that AI remembers everything verbatim whereas human beings... naturally mash things up." - Dave Dougherty
Future of Journalism with AI: They reflect on the future of journalism in the age of AI, balancing technological advancement with ethical responsibility, as well as the increasing importance of investigative journalism.
"Investigative journalism is an important thing for society." - Dave Dougherty
Episode 21 - Can We Trust AI Journalists? Exploring the Future of News Podcast & Video Transcript
[Disclaimer: This transcription was written by AI using a tool called Descript, and has not been edited for content.]
Dave Dougherty: Hey there, and welcome to the latest episode of Enterprising Minds. Alex and Dave here, doing kind of a special episode, because we feel so inclined to talk about this. But Alex, why don't you set us up, since it was your idea?
Alex Pokorny: Sure. So, a little bit of news in recent times, but there's actually been a lot of news for about a year and a half now about the use of AI in journalism.
So, what we want to talk about today is trust. Do we trust an article that was written by AI? If it had a fake byline and a fake author photo, would we trust it a little bit more? You know, where's your level at? And what do you think are the greater implications for journalism? We've seen a number of organizations now, and one of the most recent cases was Futurism's exposé of Sports Illustrated, where Sports Illustrated was caught using AI writers with completely fake names and photos.
And Sports Illustrated came back claiming that it was a third party who did all this, that it's their problem, that they've cut the relationship, and also that they believe all the content was human-written, even though it is atrocious content that makes absolutely no sense. I would almost hope an AI wrote it, because you can't say a human ever wrote that.
Um, but also there's the response and belief that somehow, "Oh yes, this was all unknown to us, we had no idea, we were completely bamboozled by this terrible thing." There are those kinds of responses, but this is not the first organization. Actually, if you go to that Futurism article, take a look at some of the outbound links, because they have a long list of other publications doing this as well.
The A.V. Club, TheStreet, a number of different publications. It's definitely not the first, but it's now more than one popular outlet, and it definitely hit the search and SEO circles; we've been hearing so much about it within our own circles and media. Lily Ray, who's a well-known speaker and presenter at many conferences, and has been a public figure since probably the early 2000s, 2005, '06, '07, back when she was more on the affiliate side before moving to SEO, she knew about this.
She actually did a presentation, presenting it as an anonymous organization that was obviously using AI, I think a number of years ago. So she's known about this for a while. Others have now seen it, and Sports Illustrated is coming back saying they had no idea.
Dave Dougherty: So, back when I did crisis management PR work:
ignorance is the worst comeback, because nobody ever believes you, right? It's on your site. If it's a third-party vendor, you approved that vendor. Internal politics, firing a vendor, fine; you can save your job, and your boss is happy you moved on to something else. Externally, with the public, that is a horrible look, you know?
Yeah, the thing that was interesting to me was that you had shared a link to Newsweek and their AI policy, just in a chat between the three of us. When I started reading through that policy, I thought it was very interesting, because it got pretty specific: we have at least three journalists look at a piece; AI does not replace the need for multiple journalists to review it; the editors will still have to touch whatever is created, whether it's by a human or an AI bot.
And I remember thinking, okay, this is at least useful. They admit that they're experimenting with it, like every company should be at this point. You're running experiments, you're seeing how you can use it, you're seeing where it falls flat. I'm happy that journalists, who have a code of ethics, are actually playing with it, as opposed to, you know, everybody else. But there was no enforcement piece to it. What happens to the journalist who passes something off as their own? Because that was something that came up in a different podcast I was listening to. I was trying to find it while you were talking, but I didn't; if I find it, I'll put it in the show notes.
There was a survey that came out showing a large percentage of people passing off AI work as their own, despite company policies saying it has to be created by humans. And this is a really big IP problem,
you know, if your company is expecting trademark or copyright or whatever else.
Alex Pokorny: Right, from an IP perspective, you can't copyright that content, because there's no human to associate those rights to. So you have that problem, and co-written pieces become murky. There's no real good ruling right now, at least from the US side, on whether or not you can be granted a copyright.
I think the big case that went through was a comic book where the illustrations were AI-generated and the copy was not, so they granted copyright only to the text part, the copy, but they would not grant a copyright to the images, which are kind of a major element of a comic book. It's an interesting case.
Yeah. Whether or not it's disclosed is the other piece. I think that's one difficulty I've seen with some of these investigations of different companies: to say whether this content is AI-created, you can run it through these different checkers, and there are some flags, but there is no definite answer at the end of the day that yes, this person, who may or may not exist, may or may not have written this. You can track down a person a lot more easily than you can determine the content's origin.
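Alex's point that checkers give "flags, but no definite answer" can be illustrated with a toy heuristic. Many detectors lean on crude statistics such as sentence-length variance (sometimes called "burstiness"); the sketch below is an invented, minimal illustration (the threshold and example strings are assumptions, not any real detector's logic), and it shows exactly why such signals can never prove authorship:

```python
import re
import statistics

def burstiness_flag(text, threshold=4.0):
    """Toy AI-content 'flag': very uniform sentence lengths are one weak
    signal some detectors use. Illustrative only; it cannot determine a
    text's actual origin."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return False  # not enough sentences to measure variance
    return statistics.stdev(lengths) < threshold

# Flat, templated rhythm trips the flag...
flat = "The town is nice. The food is good. The view is great. The air is clean."
# ...while varied, human-like rhythm does not.
varied = "Wow. I had never expected the tiny harbor town to feel so alive in October. Quiet, though."
```

A human writing tersely would trip the same flag, which is the point: these are probabilistic hints, not proof.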
Dave Dougherty: Right. And can you imagine the crazy loss of productivity for any organization if the class-action lawsuits against OpenAI and some of these other large language model companies go the way of the trade associations and the copyright holders, in terms of training the models on copyrighted material? How does that spread out to any of the content that was created?
Because here's an interesting thing I've thought about.
I tend to skew toward rights holders. I want human beings to create art. I want them to thrive financially for having good ideas that resonate with people. I don't like the commoditization of fine art; that's personal preference with all the tech stuff, but I'm saying that up front.
But when you look at the process of what an artist or a creative person, or even business people, do to become knowledgeable: we are digesting copyrighted work through books, through presentations, through whatever else, and we are influenced by those things. And part of me feels torn, because I've gone through the same process that these bots are going through.
The only real difference I can come to, when I actually do the thought experiment, is the fact that AI remembers everything verbatim, whereas human beings, we can't remember breakfast. So everything gets mashed up and torn into new things naturally, because we can't necessarily remember everything verbatim.
So is it the same, or is it just worse because of the expectations we have of the robots? You know what I'm getting at? Or am I off base?
Alex Pokorny: No, no, that's an interesting angle, because there's another one, and it's funny. We mentioned it a couple episodes back: a group that has a podcast, and they were talking about some of the fun things they're trying to do with AI to generate new ideas for things to make and post on YouTube, because every once in a while the well runs dry on ideas for what they should create as professional content creators. One of them made an interesting point about how heavily you can sample a music track before it's just copied, and how many different samples you need to create a music track.
So let's say you're sampling from three different songs. You're going to grab a little bass line element here, a little bit of the beat over there, the vocals from somewhere else, and you're going to mash them all together, and now it's your own. If you're sampling multiple sources, that's okay. If it's just one and you're just copying it, then it's not okay.
And the more you get influenced by, the more kinds of mashups will probably get produced in your head. Like the design for a particular object: you've seen hundreds of them before that fit a general look and feel, and you're going to produce something that fits in that same category, that still has that same look and feel.
So I get it, because there is a point where AI is not doing the mashup piece of it. Instead it's: this is a piece of copy from here, this is a piece of copy from there, and we're throwing them together. There are elements to it. When you're talking about the lawsuits, I wonder if maybe the future is individualized private AIs.
So you could say that this is the Wall Street Journal's AI. It's been trained on the Wall Street Journal's articles, because they have full rights to them. It also has baseline code that understands the English language, or whatever language they're going to publish in, and the grammatical rules, spelling rules, and the like. Then they can create an article: they feed it five bullet points, it pumps out three paragraphs, and they get their copy.
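Alex's "individualized private AI" idea, a model trained only on text the publisher fully owns, can be sketched in miniature. A real newsroom system would be a fine-tuned language model; this toy word-bigram Markov chain (with an invented one-line "corpus") just illustrates the core property he's describing: every word the model can emit necessarily comes from the licensed corpus.

```python
import random
from collections import defaultdict

def train_markov(corpus_words):
    """Build a word-bigram table from text the publisher fully owns."""
    table = defaultdict(list)
    for prev, nxt in zip(corpus_words, corpus_words[1:]):
        table[prev].append(nxt)
    return table

def generate(table, start, n_words, seed=0):
    """Generate text; output vocabulary is limited to the training corpus."""
    rng = random.Random(seed)  # seeded for reproducibility
    out = [start]
    for _ in range(n_words - 1):
        choices = table.get(out[-1])
        if not choices:
            break  # dead end: no continuation seen in training data
        out.append(rng.choice(choices))
    return " ".join(out)

# Hypothetical in-house corpus the publisher holds full rights to.
corpus = "the market rallied today as the market shrugged off fears".split()
table = train_markov(corpus)
text = generate(table, "the", 6)
```

The trade-off Alex raises next falls out directly: a small private corpus means a small bigram table, so the output quickly becomes repetitive.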
Would that AI be as good as the ones out there now, which are sampling from much, much larger databases? No, I don't think so. It would probably be pretty repetitive. There was one case, again from Futurism, where they did an article on BuzzFeed. BuzzFeed was trying to make these random travel guides to random places around the world, any place, every place, little tiny towns where there's really nothing there, but they were going to create a guide anyway.
And it was hilarious, because the copy loved to say "the hidden gem." So it was like, "Spain, the hidden gem of Europe," as if nobody knew about Spain. But literally every tiny East Coast town that got randomly mentioned, along with everywhere else, was called "the hidden gem."
It was so repetitive with that one phrase, and some of the other investigations into other publications ran into the same thing: the content kept reusing the same phrasing.
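The "hidden gem" tic Alex describes is easy to surface mechanically: count repeated word n-grams across a batch of articles and flag phrases that recur far more often than varied human writing would. A minimal sketch, with invented example strings standing in for the travel-guide copy:

```python
from collections import Counter

def overused_phrases(articles, n=2, min_count=3):
    """Count word n-grams across a batch of articles and return those
    repeated at least min_count times: a crude tell for templated copy."""
    counts = Counter()
    for text in articles:
        words = text.lower().replace(",", "").replace(".", "").split()
        for i in range(len(words) - n + 1):
            counts[" ".join(words[i:i + n])] += 1
    return {phrase: c for phrase, c in counts.items() if c >= min_count}

# Hypothetical excerpts mimicking the repetitive travel-guide pattern.
articles = [
    "Visit Spain, the hidden gem of Europe.",
    "Smalltown USA is a hidden gem on the coast.",
    "This hidden gem offers charm and history.",
]
flags = overused_phrases(articles, n=2, min_count=3)
```

This is roughly the kind of pattern the Futurism investigation spotted by eye; across three short snippets the bigram "hidden gem" already stands out.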
Dave Dougherty: I've run into that even with our show, using the tools for social clipping, or "give me five episode title ideas" that I ultimately don't use, but, you know, just to get a little feedback or something to respond to. There is an overemphasis when you tell it to write something for social, or blog titles, or YouTube titles: "Revolutionize your marketing," your sales, your whatever. "Unbelievable trick."
No, most things are a nicer shade of gray.
Alex Pokorny: "I almost did this, and it almost failed." Yeah.
Dave Dougherty: It's just like, man, I just don't want to do that. If it's a really funny story, or something out of the ordinary, fine, I might keep the hyperbole in there, or play it as parody.
Sure. Exactly. But, um, yeah.
Alex Pokorny: Yeah, that's a really good point to bring back to the journalism and media publication aspect. Oftentimes, when organizations have taken on some sort of AI component, they've run a pilot and published the articles, and the articles may be junk, but shortly thereafter there are typically layoffs. There are consistently statements saying, "Oh, we're going to test this, we're going to pilot it, but it won't affect any jobs, we'll keep all the jobs," and then, like, a week later they cut 25 people. That's pretty obvious. The further point I want to make is about the generic quality of the copy, and how we're losing investigative journalism, that very distinct form of journalism where the story is unique.
No one else has that story. They're diving deep into a particular topic, pulling on some thread they thought was a little odd, deciding to investigate and make a whole piece about it. We're going to lose that, because a BuzzFeed quiz is not even close to BuzzFeed's journalism, which actually produced some really good journalistic pieces, until they fired that team, until they cut the entire newsroom.
Exactly. They cut the entire thing, and you should see some of the documentaries and the like that they produced. It's so funny, because I think of BuzzFeed as the wikiHow of the early internet content-farm days. They were the content mill of content mills, man.
They were atrocious, but at some point they really got into hiring fantastic people from very reputable media and journalism companies and built up one heck of a great team. And then they slashed them all so they could go right back to the quizzes.
Dave Dougherty: Well, this is one of those things where AI is really going to force a conversation, or at least I really hope it does, because if not, we're going to a dark place. Certain business models are going to have to change, and certain business models should not, or will not, be able to stand if they are publicly traded, because you will be forced to go back to the quizzes and the fluff. Facts don't typically pay well, you know.
So yeah, you're going to need either state-supported outlets like the BBC, or nonprofit organizations in the NPR vein. Or, well, one of our friends and former colleagues, Carlos Abler, has an idea around this very topic that was shouted out on the This Old Marketing podcast. I'm going to find out what he's working on; I would hate to speak for it. But there is a potential here, and Robert Rose and Joe Pulizzi have talked about this for years: okay, say you have something like a Target that buys all of the stadium sponsorship rights in a particular town, because they're from that town and they employ a huge portion of it.
Why wouldn't you, as a marketing budget or a charity write-off or whatever you want your accountants to call it, fund the local newspaper so it doesn't have to worry so much about all of that? This is not a crazy new idea. This was happening at the turn of the last century, when you had little company towns.
Yeah. You know.
Alex Pokorny: Sweet goodness, man, that didn't end well.
Dave Dougherty: No, it didn't. But at the very least, it would fund something that I hope we're all in agreement is a good thing for society. Investigative journalism is an important thing for society, in my view. I feel like that shouldn't be a controversial statement, but who knows?
This is going online.
Alex Pokorny: But,
Dave Dougherty: Yeah, yeah, it is. I will be fascinated to see how many more companies actually put up a public AI statement. I remember, in one of our previous shows, I brought that up as a specific topic: hey, what are you guys thinking about this? I think I'm going to do one for my own personal site, just to plant a flag in the sand.
It seemed a little early-days at the time, right? Who knows. But then all this stuff keeps happening, and it's like, yeah, I think you should probably go on record and do that.
Alex Pokorny: This is definitely a topic I want to explore more when we get Ruthie back in. I want to go back to that question: would you trust an article that was written by what seems to be a human? Would you trust an article that is clearly written by AI?
And where are your lines? Where's the trigger that says, "this is too generic to be useful to me, the wikiHow of content," versus "this is fantastic and really useful," or "this is oddly repetitive"? Where are those trigger points for you that tip the lever toward "I trust this content," even on some random site you don't know?
We'll save that for the next episode with Ruthie and continue the conversation then.
Dave Dougherty: All right, thanks all. This was definitely a quick hit. Let us know if you like this format, and let us know your thoughts on journalism and AI and where you stand on those topics. Episode show notes will be on the David Orte media website.
Drop a comment on YouTube or in your podcast player; like, rate, review, share, all that good stuff. Thank you, and we will see you in the next episode.