The PressPectives Podcast | Esther Dyson: Why AI will never replace journalists

2024-11-19

In the first episode of the PressPectives podcast, Esther Dyson, a journalist, investor, the founder of Wellville and a PressReader board member, discusses AI's impact on journalism. While Dyson acknowledges AI's utility in tasks like processing press releases and gathering basic information, she emphasizes that true journalism—uncovering untold stories and holding people accountable—requires human insight and intuition, which AI lacks.  

Dyson discusses how AI can replicate writing styles but cannot replace the depth, curiosity, and original thinking behind quality journalism. She cautions against business models that exploit AI to manipulate consumer behavior and advocates for stronger liability measures to hold companies accountable for the harm caused by AI. 

She also addresses the challenges AI poses to publishing, particularly in protecting intellectual property. Dyson believes that AI-driven content can lead to a commodification of journalism, but high-quality, human-driven reporting will remain valuable. She suggests that publishers should focus on enforcing their rights and ensuring transparency in how content is used. 

Overall, Dyson is optimistic about AI's potential to assist journalists but stresses that it cannot replace the human elements that make journalism valuable. She calls for accountability, transparency, and regulation to ensure AI enhances, rather than diminishes, the role of investigative reporting in society.


Also available on Spotify and Apple Podcasts

Transcript

Ruairí Doyle

[00:06] Hello everyone and welcome to the first episode of the PressPectives podcast. I am your host, Ruairí Doyle, and I am here to help us dive into topics at the intersection of technology and publishing together with some incredible guests. So whether you are an industry pro, a curious learner or just looking for some inspiration, you are in the right place. Today's guest is none other than Esther Dyson. Esther is an author, a commentator, an accomplished investor, a philanthropist, and today one of the world's leading figures in digital technology and health.

[00:49] And full disclosure, Esther is also a director on the board of PressReader. Esther, you are very welcome to the podcast. Where in the world are you, and what are you up to?

Esther Dyson

[00:53] I'm at my wonderful home in New York City.

Ruairí Doyle

[00:55] Well, it's great to be able to connect with you from Vancouver, Canada all the way to New York. So, Esther, you have a fascinating background. Your father was a physicist, your mother a mathematician. I believe you had Nobel Prize winners over for dinner. I think we were sharing a car ride one day and you were telling me about how you watched Oppenheimer and how relatable it was to you. And I think you even grew up on campus at the Institute for Advanced Study. After that, you headed to Harvard, and then you started your career as a reporter and fact-checker. Since then, you've had, you know, numerous investments in different companies and organizations, and, of course, you're notably the founder of Wellville. But I want to go back to when you started your career as a reporter and a fact-checker. I'm curious. What originally led or motivated you to start off in that field?

Esther Dyson

[01:45] Well, I wrote for my school, actually, when I was 7, I think. I started the Dyson Gazette, which was on carbon paper, and it was, you know, what George did in 1st grade and the new baby and this kind of stuff. And there were probably 6 or 7 issues of that. Then I worked for the high school paper. And then in college, I joined the Harvard Crimson. I started out being paid for proofreading, and then I wrote articles for free. And I loved it. If I ever gave money to something, it would not be to Harvard itself but to the Crimson. And I learned a lot more working there than I ever did in my classes.

Ruairí Doyle

Yeah. Work in the field is often more practical than some of the theory that they teach us.

Esther Dyson

Well, the problem is the classes are all about remembering stuff somebody already knows. And journalism is about finding, or at least telling, stories in a way that people didn't already know. So it was just fundamentally more interesting.

Ruairí Doyle

And if we go back to the Dyson Gazette, as you called it, what originally got you into that at 7 years old?

Esther Dyson

I thought it would be cool, and I liked writing. I liked reading. I thought, yeah, we should have news. We had news in the family, and it was just fun.

Ruairí Doyle

[03:08] So, Esther, I was born in 1980, and I just about snuck in, I think, on the tail end of Gen X, or Generation X. I'm maybe part of one of the last generations to remember life without the internet and mobile phones. You know, when you were creating that publication at 7 years old, it was very raw, without all of these technological distractions that we have today. And probably when you started your career, it was without any of those things too. I'm curious. Is there anything that you miss from those days when we didn't have all of this useful technology that we have today?

Esther Dyson

[04:04] So I remember when I was working at Forbes, there was a library that was full of folders with companies' annual reports and news articles and so forth. There was no other way to find stuff. And if you wanted to talk to the company, you would find their phone number. You would ask for the PR department, and then you would say, I'm a reporter from Forbes, I'd like to set up an interview with your CEO. So I was in my early twenties, and I flew all around the country. I went down to Dallas to interview the CEO of American Airlines. I went to Wyoming to visit the coal mines, and it was just amazing.

Ruairí Doyle

[04:25] What do you think the benefits of technology have been over the years in helping journalists do their work?

Esther Dyson

[04:30] Well, it is. It's so much easier to find stuff out. And, of course, it's so much easier for your competition to find it out as well. You can get in touch with people much more easily. You can store your notes, and now you can have ChatGPT help you at least write the first draft. You know, whatever it is you want, it's easy to find. You still need to have the journalist's smarts to know what makes sense, to ask the questions that aren't being asked, to find the story that people haven't already seen or understood or found out. Of course, people's attention spans are much shorter, which is very sad. I mean, if you wanna go into the media business, there's a lot of destruction and maybe a little bit of creative destruction.

Ruairí Doyle

[05:20] And in terms of destruction or transformation, the elephant in the room today, a rather large one, for the publishing industry and the industry that we're in at PressReader is AI. It's a new wave of industrial revolution, or evolution, or whatever you want to call it. It's got all types of opportunities, challenges, and threats for publishing and, moreover, for society at large. I'm a father to a 1-year-old and a 3-year-old, and some days I'm very excited, other days I am terrified. And you're no stranger to this area; you've been commenting on it, but it's changing daily. I'm just wondering what's top of mind for you with regard to AI and publishing, or just AI in general?

Esther Dyson

[06:06] The real news is the news that people won't tell you. AI is great for looking at all the press releases and finding out who's got new products and so forth and so on. But in the end, the stories that people want to know, or want to see verified, are usually bad news, where you need journalists to go out and find what really happened. Who was responsible? How come we couldn't handle this better? The reality is the journalists need to find out the stuff that doesn't go in the press releases. I mean, once a journalist finds it, then AI can copy it and reuse it and so forth and so on. But getting people to tell the truth, whatever way you do it, is still a very human endeavor, and it requires knowing other humans. Different journalists do it in different ways. But, fundamentally, it requires treating people with respect, giving them dignity, asking them to tell their story in their way, and then understanding, oh, well, I got 3 different versions of this story. Which one is most accurate?

[07:14] And, again, the journalist has to have the wisdom to look at all the facts and all the motivations and so forth and so on and come up with a coherent story. We talk about unexplainable AI. But, you know, in the real world, the real problem is unexplainable humans, including the humans behind the businesses that use AI to implement whatever their business model might be. Again, I'm not so much scared of the AI; I'm scared of the business models that are using AI to make us spend money on things we don't need, or make kids feel inadequate so they go buy stuff they don't need.

Ruairí Doyle

[08:01] You know, on that theme, there's a role for governments and regulation to perhaps chart a course through it. I think it's fair to say that the US Congress, the European Union, and other big government bodies all over the world are still playing catch-up to big tech, as we call it, still playing catch-up to social media and its impact on society and young people in general. Do you think any lessons have been learned in the last 10 to 15 years that are preparing governments for what's about to happen, which we all understand is fundamentally going to be, in some way, positively or negatively transformative? Have any lessons been learned, or are they ready?

Esther Dyson

[08:37] I certainly want government regulation, and I want government enforcing the laws, but I'd much rather have a strong legal liability environment, because I'm not sure government can act fast enough to fix things. If you make the businesses or the individuals who caused the damage liable for that damage, then you don't need to define in advance how it was created. AI is very complicated, and so I'd rather look at the damage. And insurance works in 2 ways.

[09:17] One is it compensates people. But more importantly, an insurer can go into a company and say, we don't like what you're doing. We think this is gonna lead to damage. They don't need to have a law written. They can simply say, we're not going to insure you. Insurance was a huge factor in reducing fires in the early 19th century, because insurers would go in and do fire inspections. It wasn't because there was a law; it was because otherwise you couldn't get fire insurance. And if you didn't have the fire insurance, you couldn't get this or that permit to do something else. And I think some notion of liability, versus defining what is correct, is a much better way to keep up with the ever-changing, fast-moving things that AI is capable of.

Ruairí Doyle

[10:11] I agree, and I want to believe that as much as possible. However, just this week I was listening to The Daily podcast from The New York Times, and it was about a lawyer who has been working with the parents of Sandy Hook and then another school shooting since that. He was highlighting that, you know, I can't remember if it's case law or legislation or something, but there's something called the PLCAA related to gun law in the US that's essentially a get-out-of-jail card for gun manufacturers, and then there's another law regarding big tech, that they cannot be held liable...

Esther Dyson

[10:51] Yeah. That's what we need to change. We need to make them liable. We need to make them liable for the damage they cause, not specify what algorithms they should be using. And that's very different. And it's the same with privacy. These are things you can't define well enough in advance, because the bad guys always move ahead. But you can define the damage. And, again, then you can require companies to buy insurance to do various things. So as I said, you do need some laws, but you don't want laws that take 3 years to enact and then get changed when Congress argues and says, well, we need to add this for something else, and so we're gonna take this provision out. What you want is the people who define what makes sense looking at the money. They're looking at the potential liability. They're not looking at, are we meeting the letter of the law? And so they're going to be much more clever and flexible and creative about what they will or will not insure.

Ruairí Doyle

[12:00] Yeah. And I think one of the messages from listening to that lawyer was that, you know, the strategy is essentially to try and tackle the business models where it hurts, in their pocket, and then they will make the changes that are appropriate. But are you positive and hopeful that those steps will happen in the right sequence at the right time in terms of AI?

Esther Dyson

[12:23] It's something I'm going to be talking about a lot. Wellville ends at the end of this year, and I'm going to be writing a book, and it's going to talk about a bunch of things like this. Not that I will do it single-handedly, but I'm going to help that cause. And if I was pessimistic, I wouldn't do anything. But if you're positive, or at least optimistic, then you have a chance of meeting the challenge in alliance with a lot of other people. Again, it's less about the business model. It's more about the enforceability and the terms of the deal. It's complicated. It depends how much money you have to sue. It depends how egregiously someone has used your stuff already. I think the market's gonna turn into kind of artisanal information that's really highly valued, where people already pay for subscriptions to the Wall Street Journal, for example. Whereas if you're funded by advertising, the reality is your content is kind of fungible. The kind of content you have, the kind of audience you have, makes a huge difference.

Ruairí Doyle

[13:35] I'm also wondering, you know, what are your thoughts on publishers doing these deals with LLMs? They're part of these deals, and some of them, the bigger ones, are kind of two-way sharing of resources and technology and information. So it's not just a check for your content; it's also training and transformation of the newsroom. What should the standard be in terms of how publishers work with readers and are transparent about what has gone into the production of this content, so that we can understand? I mean, we should also note that even before LLMs, we didn't have transparency on that either. Right?

Esther Dyson

[14:14] I think, again, I go back to the food metaphor. You want supply chain transparency and visibility. You should have had it before too. Like, where did this come from? Who wrote it? And most importantly, who paid for it to be published? Did someone pay to have his article published, or was this person actually such a good analyst or reporter that the company paid that person to produce the information? Because a lot of stuff that's published now is actually paid for by the writer or by the producer of the content, rather than being paid for by the publication to a working journalist. So that's another fundamental thing. And, of course, was this advertiser-sponsored? Are the publishing side and the editing side separate? Yeah. In the end, I don't really care whether it's written by a smart AI that looked at 5 good sources or a person. On the other hand, if it was written by a specific journalist who was actually the one asking the questions of the people interviewed, again, that to me raises the value and the price. But just being transparent about where it came from is key.

Ruairí Doyle

[15:28] We've been having an internal discussion here at PressReader about, you know, will AIs, and by when might they, get as smart as a writer or a journalist? Let's take somebody like Paul Krugman, the esteemed economist, New York Times op-ed writer and, I believe, Nobel Prize winner. Do you think AIs will ever be able to do the type of work of somebody like Paul Krugman?

Esther Dyson

[16:00] No. You can never replace him. Certainly, you can build an AI that will write in the style of Paul Krugman. That's disturbing, and that's why you want to know the provenance, because something in the style of Paul Krugman is not necessarily what Paul Krugman would have said. I mean, the whole benefit of Paul Krugman, and I hope of myself and of other good journalists, is that they keep discovering new stuff. You know? The AI can parrot the old stuff, and it can sound like me, but it's not out there interviewing the people I'm interviewing, or reading the articles and the people that Paul Krugman reads and listens to to write his pieces. The mental model in Paul Krugman's head is very, very different from the model in an LLM, which is not really a model. It's a prediction engine. And the essence of what a journalist does is that they predict new stuff and discover new analogies or understandings. And that's not what LLMs do. They spit out old stuff. I mean, they do hallucinate on occasion, and I like to say one of the biggest hallucinations ever was that the Earth was round, and that was a hallucination that turned out to be true. Some hallucinations are true and some are not.

Ruairí Doyle

[17:20] Yes. However, you know, the business models that we're talking about will beg to differ, right? Because they're of the opinion, I think I heard in one report, that by 2026 they're going to run out of original content from humans. And I agree with you, by the way, on your answer around Paul Krugman; that's the position I'm taking, though there are different positions here internally. The unique human experience of Paul Krugman, or any other human or journalist for that matter, brings a whole array of variety, spontaneity, experience, education, and magic to the content creation experience that style alone cannot replicate. Right? However, the business models are purporting to us that synthetic content from 2026 onwards is going to be just as good. So I guess time will tell.

Esther Dyson

[18:25] I mean, that's why people pay $50 for, you know, some chef's cooking. I talk about information diabetes. You know, this is a society full of diabetes, which is caused mostly by bad food. And information diabetes is caused by bad information. To be healthy, we need to consume good food and good information. And, hey, that's not easy, because bad food is cheap, and it's made to be seductive. You want more of it, and bad information is the same. But, yeah, at the same time, the very best publishers will continue to provide useful stuff, and they will charge for it. I spent a lot of time in Russia. And if you wanted real information, you subscribed to business journals. You didn't read the popular papers. In the end, it's the same here. I subscribe to The Information. And, of course, as a director, I subscribe to PressReader and the kind of content it provides. And I subscribe to STAT News, which does a similar thing for health care. Those will not be the huge markets, but they will still be profitable and produced by sentient human beings who ask real questions of other real humans.

Ruairí Doyle

[19:48] So, shifting gears slightly from the publishing world and the publishing industry, you have many interests in the health and healthcare space and digital technology related to that. What's your outlook in terms of what AI can do to help in that space?

Esther Dyson

[20:05] So there's a huge amount of health care that can be automated so that the people actually working in health care, the nurses, the anesthesiologists, the doctors, the community health workers, can be human beings talking to humans. And that's probably the most exciting thing AI can do for health care. Then, off in the distance, yes, it can be used to develop new drugs. It can be used to analyze data to show the horrible impact of the food we consume, to show the impact of plastics in our bodies, and it can show people in a way that's personally meaningful: this is what will happen to you if you don't exercise, this is what will happen to you if you don't hang out with your friends. You need human connection. The notion of spending on health care is wrong. We should be investing in health, because it's a fundamental social asset. And we've sort of got data that shows that, but we need better and better data and, honestly, more convincing explanations of that to a broader public.

Ruairí Doyle

[21:18] That can only be positive. You're involved in many organizations and projects these days. You know, what inspires you to get involved in a new company or a new project?

Esther Dyson

[21:28] So my best-known saying is always make new mistakes, and learn from them and so forth. Another one is never do anything you wouldn't do for free, and then try to get paid. So that means invest in something that excites you, because life's too short to invest with people you don't like, and then they need to be doing something useful. And it's pretty broad. I've invested in logistics companies, and I was on the board of 23andMe in genetics. I'm now on the board of something called Banff Health, which eliminates treatment-resistant prostate cancer. You know, these are pretty exciting things. And, of course, PressReader itself. I don't like to invest in things that are a better version of something that already exists.

Ruairí Doyle

[22:22] One last question for you, which is a question we want to consistently ask people we invite on this show: imagine a parallel universe where you never ventured into fact-checking, journalism, publishing, technology, or health. What unexpected career or passion might you be pursuing?

Esther Dyson

[22:41] Well, I mean, probably astronaut.

Ruairí Doyle

[22:43] That's true. You've trained. Right?

Esther Dyson

[22:47] Because, of course, I grew up in that world, and I did train in Star City outside Moscow for 6 months. This is not a goal where, when I die, I'll say, oh, I didn't do this, and my life's a failure. But I would like to retire on Mars, not too soon. I'd love to be on Mars in one-third gravity in a nice, comfortable place built by Elon Musk. And yeah, I probably see it not so much as a career as sort of a final stretch.

Ruairí Doyle

[23:25] Well, Esther, it's been a pleasure. Thank you for joining us. And that's a wrap on our first episode of the PressPectives podcast. Thank you all so much for tuning in. We will be bringing you new episodes every month, each one featuring a different leader in the world of publishing.

So be sure to subscribe on Spotify, Apple Podcasts, or YouTube to catch the latest. Bye for now.

