CAPTivated

Ep 08 Battling Misinformation and AI Slop with Sree Sreenivasan


In this episode, Julius, Sage, and Hanna talk to Sree Sreenivasan, tech journalist, former Chief Digital Officer of New York City, Columbia University, and the Met, and founder of Digimentors, to discuss misinformation. The conversation traces the problem from email scams and WhatsApp forwards all the way to AI-generated "slop" and its global ramifications, and examines why Silicon Valley's tech moguls are to blame. Yet Sree remains a techno-optimist. He makes the case that improving your media literacy is now a fundamental civic duty.

Key Takeaways from Sree:

  1. Own your role. Every time you doomscroll or share without thinking, you are part of the problem. Every time you put the phone down, read a local newspaper, or correct a bad forward in a group chat, you are making a difference.
  2. Stay literate about new technology. The speed of change is only accelerating. Experiment with new tools, test AI by asking it about things you know well, and stay curious about what's coming next.
  3. Participate. A handful of Silicon Valley oligarchs are betting that your vote doesn't matter — prove them wrong. The damage being done right now flows from an election margin of 1.5%. Your vote, your voice, and your willingness to make "good trouble" matter more than you think.


Find out more about Sree on:


Some of the texts and resources we refer to in this episode:


Sree’s Media Diet

Meat and potatoes: 

Outlets: X, Fox News, The Wall Street Journal


Junk Food: YouTube Shorts

This podcast is part of CAPT’s efforts to encourage open and diverse intellectual exchange. The ideas presented by individuals on the podcast are their own and do not represent Purdue University, which adheres to a policy of institutional neutrality.

We would love to hear your thoughts on this episode! Send us feedback to captivatedpod@gmail.com


Transcript:

Sree Sreenivasan (00:00:00)
You've heard the famous line: by the time the truth puts on its pants, the lie has traveled around the world. Well, that was said not in the age of AI. That was said decades ago, when it was things like the telegraph that took things around the world. Now, it's… everybody is a telegraph operator. Everybody's a television owner. Everybody is a broadcaster.

Hanna Sistek (00:00:28)
Welcome to another episode of Captivated, a podcast hosted by the Center for American Political History, Media, and Technology at Purdue University. In each episode, we examine a specific facet of our digital public sphere: how it works and how we got here. We are here to help you sort through the noise. I'm Hannah.

Sage Goodwin (00:00:48)
I'm Sage.

Julius Freeman (00:00:49)
And I'm Julius. Our guest for today's episode is Sree Sreenivasan. It is hard to know where to begin with introducing Sree, but I'll start by saying Sree began his career as a tech journalist. He spent 20 years as a professor of journalism at Columbia, specializing in new media. Since then, he's been the Chief Digital Officer for New York City, the Metropolitan Museum of Art, and Columbia University.

Hanna Sistek (00:01:11)
Now, you may be thinking, what is a Chief Digital Officer, exactly? Well, we'll get into that in the conversation.

Sage Goodwin (00:01:19)
Sree’s also the CEO and founder of DigiMentors, a consulting company about how to use digital, social, and AI technology. That's all pretty impressive. But you should also know that Sree co-founded the South Asian Journalists Association and the Online News Association. In 2004, Newsweek named him one of the 20 most influential young South Asians, along with Norah Jones, M. Night Shyamalan, and another name you might've heard of: Kamala Harris.

Hanna Sistek (00:01:43)
We wanted to talk to Sree because for over 30 years, he's been on the real bleeding edge of thinking about journalism and tech. He's also been teaching journalists how to use new technology. So, in the ’90s, you could find him doing seminars about texting, and in the early 2000s, he was teaching people how to use email.

Sage Goodwin (00:02:00)
More recently, the New York Observer even called him the "Twitter Tutor," and I think he's one of the most insightful people out there about both the potential and problems with AI for our democracy.

Julius Freeman (00:02:09)
Like we always do, we asked Sree what he thinks is the biggest problem with the digital public sphere. He thinks it's misinformation. We had a really wide-ranging conversation about it.

Hanna Sistek (00:02:19)
Yeah, and I thought it was interesting that Sree is somebody who comes off as a techno-optimist, meaning somebody who believes that technology basically can make the world a better place. However, it's becoming harder and harder to make that case now because of the way that the "Silicon Valley bros" are behaving.

Sage Goodwin (00:02:37)
Yeah, you're definitely not gonna come away from this conversation thinking that Sree is a fan of Mark Zuckerberg or Elon Musk. Not at all. But in being a techno-optimist and someone really excited about technology, Sree, I think, makes a really good case for one of the ways that you can become more informed and more technologically and media literate, which is to just experiment and play around with these things yourself. Every time there's a new technology, a new platform, Sree is someone who is out there just seeing how it works, and he suggests it's one of the things that we should all be doing too. He talked about how you can test AI by asking it questions about things you know well, so you can gauge its answers and test its validity. One of the interesting things that he found when he put in his own information is that it told him that he was dead.

Hanna Sistek (00:03:25)
So, since Sree seems to be quite alive and kicking, we started out by asking him about his life and his unusually global upbringing.

Sree Sreenivasan (00:03:34)
I was born in Tokyo. My dad had no problem picking me out in the window. He said, "The brown one is mine." I lived there after that in Bhutan, in Moscow at the height of the Soviet Union. I used to speak fluent Russian. I was a full communist; I used to come home saying, "Lenin is God," and then I ended up in New York, and I started saying, "John Lennon is God," and became a full capitalist. I went to high school in Fiji, which is exactly as wonderful as it sounds, and then went to college in India, and I came to America for grad school and stayed here. So, that was my kind of second round in the US. I went to Columbia Journalism School to study journalism, and it was still very early days. This was 1992; word processing was the high—the new technology. And on the television side, we were using Hi8 video tapes, which are a smaller format tape that you could use, like what you would have in your camcorder when there were camcorders. I told my parents when I was 12 that I was gonna be a journalist, and they started crying immediately because no Indian parent wants that. They want engineer, doctor, you know, lawyer—anything but a journalist. And I knew this was what I wanted to do with the rest of my life, and that's how I started my global career.

Hanna Sistek (00:04:52)
So I'm really curious, as somebody who's lived in India, and I know, you know, it's—it's a doctor and engineer thing. What made you so interested in journalism to begin with?


Sree Sreenivasan (00:05:01)
I loved to write; I loved the ability to connect with people. I won a storytelling contest in fifth grade in New York City at PS 6—that was long before storytelling had anything to do with journalism. No one had ever used that phrase. Now we use it all the time, and storytelling is so important. There was a story in The New York Times about my work at The Met, where they said, "So many stories to tell for The Met's digital chief." And so that idea of being able to tell stories, I think, was very important.

I'm sure I was also impacted by all the journalists in the media who were visible to someone like me, including Peter Parker, who was of course Spider-Man; Clark Kent, who was Superman; and let's not forget Kermit the Frog was sometimes a journalist. He would wear a trench coat and a fedora. And so, when you add them all up, I was exposed to a lot of media folks as a kid, and I knew I wanted to do that. I was also interested in advertising because I loved the way that ads told stories, too. And I've always thought that if I didn't go into the work that I did, I would've been in advertising. But then that leads you to marketing, and in my current avatar, I do a lot of ads and marketing, and social media. All of this is connected in a way that they were not at all back in the 1980s.

Sage Goodwin (00:06:28)
So you are always—you are always like seeing into the future there.

Sree Sreenivasan (00:06:33)
More often by accident, but yes.

Sage Goodwin (00:06:35)
So I just wanted to go back quickly to this idea of being the Chief Digital Officer, like being the head of digital. It's a little bit like, you know, in the Barbie movie, like they say Ken's job is "beach." Like, Sree's job is digital. Can you just paint us a little bit of a picture of what a digital officer is? What is being the head of digital?

Sree Sreenivasan (00:06:58)
Sure. I was Chief Digital Officer three times, and I call that job "Chief Listening Officer" as well as Chief Digital Officer. It is the person whose job in the organization is to listen for new ideas, trends, tools, and things happening so that your organization can stay abreast of what's happening—stay ahead, not just survive, but thrive.

And at Columbia, when I was Chief Digital Officer—I was the first one there—it was to think about the future of education. This was part of the great MOOC crisis of 2012. Some of your readers will blissfully be unaware of this. MOOC was Massive Open Online Courses, where you could take classes—and you still can at Coursera, edX, and places at MIT and Harvard—where you can take classes with 50,000 other students. And places like Columbia were really worried about what this means for education.

Then I was recruited by The Met to be its first Chief Digital Officer, and there we were working on the future of culture. How do we continue to engage audiences, young and old, around museums? One of the things I learned is that there are people who live across the street from The Met, there are people who live around The Met, and there are people who live in New York City who rarely go because they're busy, or because The Met's too crowded. Whenever anybody asked me, "What is the biggest competition for a place like The Met? Is it the Guggenheim? Is it the British Museum? Is it the Louvre?" I would say it's none of those things. It's Candy Crush, it's Netflix, it's life in 20... now 2026. I was there ten years ago now, but this idea is that we're not competing with each other as museums. We're competing against attention, against time, against the busyness of modern life. That's how I used to see it.

And then finally, I was the Chief Digital Officer of New York City, working on the future of cities and citizens. I learned so much at every place I've been, and I'm grateful for every opportunity.

Sage Goodwin (00:08:59)
Yeah, that's quite an incredible career. And I guess your job is very much there, as I said, on the cutting edge—on the bleeding edge. Okay. So, as someone who is on the leading edge of thinking about tech, we wanted to ask you: what do you see as one of the biggest problems facing our digital public sphere today?

Sree Sreenivasan (00:09:20)
I will say one thing in my defense—because I am tagged with being "too digital" by people all the time—that for the last ten years, I've been publicly, in a kind of insane way, reading the print New York Times aloud on social media. We call the show the New York Times Read Along. We have also done the Washington Post Read Along, the Toronto Globe and Mail Read Along, and The Khaleej Times in Dubai. And we read the print paper on social media, and we have guests, including some fabulous journalists from The New York Times, as well as authors, writers, and all kinds of folks—Harlan Coben, the writer who seems to have at least three or four shows on Netflix all the time and two or three books on the top of the bestseller list, and Joe Stiglitz, the Nobel-winning economist.

We use this... And people say, "Why do you do this?" I say I do it because I love print. There's something magical about print, and print can be a newspaper, a magazine, or a book. We still subscribe to two or three print magazines. And when you see your name in print, or when you read an article in print, it hits your brain in a different way than it does digitally. So I still channel that love for print, and we still do this with the digital stuff, all to the point that we want to fight misinformation and disinformation. I tell people, "You don't have to subscribe to The New York Times. Subscribe to something. Subscribe to a local publication that is covering your town, your neighborhood, your region, because that's so important."

So, when you talk about what is problematic, I would say knowing what's real, knowing what is useful, knowing what's accurate, and knowing where things are going is so difficult. I'll take a minute to just talk about "AI slop," a word that's come out of nowhere and everyone's embraced it; I think it's a very good word. But I'd say what it has done is that now anything you see online, you have to question: is it real or is it AI? But it's something I've been saying for 25 years: that if you see anything online, if it's too good to be true or too bad to be true, it probably is.

What I mean by that is, you've all gotten these emails where it'll say, "My cousin and his friend went out dancing in New York City, and they woke up in a tub of..."—anybody remember this?—"...a tub of ice with a note saying, 'Your kidney's been stolen.'" Like, this was something that would make the rounds repeatedly. We had various foreign potentates offering to make you rich by giving you money. We had all kinds of scams and schemes all over the internet long before AI, so this is not an AI issue. But the quality of the AI stuff is so high and getting better and better each day that it's very difficult for people to know what they're looking at. I use a tool called Sora, which I'm sure you folks have seen. I had me climbing the Burj Khalifa, and many people said, "Well, that's fake." And so many other people said, "Oh my God, where did you have time? Aren't you scared? How did you go up there?" And this is the reality, where yes, many people can tell something's fake, but there are so many other people who cannot. The "slop" stuff is... It's a word that's easy to use to dismiss it, but we should be careful about that.

Sage Goodwin (00:12:44)
So in thinking about this danger of seeing something and thinking it's real and passing it on, I came across an article from 2009 from The New York Times about how journalism is using the internet. It tells a story about how you found out that Michael Jackson had died because you saw people retweeting a post from TMZ. You talk about how you were judging the quality of the information based on the people who shared it, and therefore their evaluation of TMZ. And they quote you as saying to your students, "Anytime you decide to forward something, you're making a big decision. You're putting your reputation in real life and in cyberspace on the line." So that's back in 2009, when Twitter was only three years old. Obviously, nearly two decades later, we're in a wildly different time now. How would you think about that now? How would you update that advice?

Sree Sreenivasan (00:13:40)
Oh, I haven't thought about that article in a while. This was minutes after I had gotten into a subway in New York, going uptown, and I saw the TMZ news. I knew this was 100% true if TMZ reported it. It sounds ridiculous because TMZ is so outlandish and out there, chasing people down and doing all the things TMZ does—but I knew this was real. This is like The Washington Post saying the President has resigned; they will not get it wrong because if they get that wrong, they're out of business. So I knew that, and I said out loud, "Oh my God, Michael Jackson's dead."

Immediately, everybody in the subway stopped and turned to look at me like, "What are you talking about?" This was a subway heading uptown in New York, where we have folks who love Michael Jackson, just as we do in Midtown and Downtown. This was a moment where I realized that if I had been wrong, that would have been bad. But I knew that they were right.

Sage Goodwin (00:14:42)
And in terms of thinking about trusting information before you pass it on, I think that is even more of a problem today. You've spoken elsewhere about—I think you called it "WhatsApp University," which is…

Sree Sreenivasan (00:14:57)
Oh, that's not my term. Hannah will know that that's something they say about India. "WhatsApp University" refers to the fact that people believe they're learning through WhatsApp forwards, which are the equivalent of email forwards or your crazy uncle sending you things on Facebook. But the power of the forward is so strong in India. You feel like you're learning things about money, health, politics, and history.

In India and places like that, it's weaponized. Here, if you make a mistake about something, you might get ostracized or canceled. But we've seen multiple things happen in India, Sri Lanka, and Myanmar, where these tools have been used to generate such a backlash that people are killed or injured as a result. For all the horrific things happening in American and Western social media, that is typically not the first thing you worry about—but in those countries, it is. We can blame the people of India, Myanmar, or Sri Lanka, but they're not the ones to blame. The ones to blame are the people in Silicon Valley who were specifically warned about these things, who were told about these things, and still didn't care. I met people in Sri Lanka who said they had begged the folks at Facebook to please add more than two moderators for the entire country—knowing that Sri Lanka has a history of ethnic, civil, and brutal warfare—and they didn't care. They didn't listen.

Hanna Sistek (00:16:40)
So, regarding the moderators, I'm thinking about WhatsApp. How would the moderators work if you have a private family group where your uncle thinks he's posting the right thing? It's a really tricky question to battle misinformation because the chambers where people talk are so fragmented.

Sree Sreenivasan (00:17:05)
100%. One of the things they have done is reduce the velocity and the ability to share on WhatsApp. You used to be able to forward things unlimited times; now, certain messages can only be forwarded to five people instead of 5,000. One thing they used to do: you’d have a picture of something offensive or incorrect, and if someone tried to correct it by saying, "Hey folks, don't share this," that correction would get separated because they couldn't be sent together. Now, you can send them together. At the server level, they are certainly aware of what's happening in these tools, and they need to do much more than they're doing. I like to place a lot of blame on Mr. Zuckerberg because he owns the biggest tools: Facebook, WhatsApp, Messenger, and Instagram. Combined, each of them has at least two billion people. We're talking about scale and reach at unprecedented levels.

Julius Freeman (00:18:16)
Yeah. Sree, it would be great to hear—because I think I can hear audience members asking—how do these crises happening in Myanmar, India, and Sri Lanka, or even America, land at the feet of Mark Zuckerberg, Elon Musk, and these Silicon Valley figures? Can you dig a little deeper into how they are the impetus for this issue?

Sree Sreenivasan (00:18:48)
It's amazing that I haven't mentioned the name Elon Musk yet, because you will see me rise up out of my chair in anger about Elon. In my mind, he competes with Donald Trump for being the worst human on the planet. The fact that they found each other is just poetically beautiful and awful at the same time. I'm very proud that I wrote in my newsletter back in 2018 that Mr. Musk should get off Twitter because being on there was bad for his other businesses. He doesn't know how to control himself. Little did I know that not only would he stay, but he would also buy the damn thing.

It was already a problematic platform, but he made it much, much worse. I said foolishly at the time, "Oh, 44 billion dollars, what a waste." No, I was wrong. It was the greatest bargain in history. He bought a presidency for 44 billion dollars. That is nothing. When you ask, Julius, this is what I mean: this is at the feet of people like this because they allowed users to use their tools in ways that were originally not intended.

We often talk about how social media corrupts the minds of young people, but I wrote a column about how it corrupts the minds of old people, too. It also ate the brains of Zuckerberg and Musk. These were people who originally had good ideas and good intentions. But when you take the algorithms they built and add the incentives of money and politics, they themselves get transformed. We can say some uneducated person got "blown up" by social media, but it happened to these leaders themselves. I have spent time with Zuckerberg and several Silicon Valley "Mughals." (Mughal is an Indian word; the empire ruled India, and from that we get the term.) We have seen what they are able to do and what happened to them.

Elon Musk had very good intentions with the electric car; he wanted to save the planet. So many people bought those cars, and now they have stickers saying, "I bought my Tesla before Elon went crazy." But he has since gone on to support political opportunities that make the planet less safe. His idea that we should all move to Mars tells you he doesn't care about this planet; he wants to burn it down and leave. That is a problem.

One last thought: Bill Gates, sainted in so many ways, but thanks to the Epstein files and what happened with Melinda, we know how problematic he is and what he has been accused of. It puts people like me, who believe technology can be used for progress and to make the world a better, friendlier place, in a position where it's almost impossible to prove that to anybody anymore. The way I trusted Bill Gates' guidance during COVID to say "vaccines matter"—now critics can throw his personal failures back at us, and we have to just take it.

Julius Freeman (00:22:52)
I think this is a great transition to start hearing your perspective on what kind of solutions we might have. As you laid it out, we have billionaires creating mechanisms of AI and social media that are fomenting misinformation and ignorance, leading to real-world violence. Someone has to be held accountable. What are your solutions to this problem?

Sree Sreenivasan (00:23:33)
Oh, it's much easier to just point out the problems and forget the solutions. Who cares, right? That's so quaint. I mean, in a serious way, I think that the more we understand what's happening with these tools, the more successful we're going to be in dealing with all of this. We have seen various efforts now to correct some of this and to find a path forward, but I don't think it's that obvious or that available. What we once viewed as things that were going to move in the correct direction forever—that's obviously changing.

And a lot of that has to do with AI and the speed at which things change. We're talking soon after Sam Altman, in an interview in New Delhi, said something outrageous. People were talking about how we have to come up with a trillion dollars of renewable energy to fuel all this AI stuff. He was asked about that, and he basically said, "Oh, that's no different than what it costs to feed a child until the age of twenty, when they're smart enough to do something." Where is the empathy? Where is the humanity of these people? And they're the ones running Silicon Valley, and therefore running the planet.

That's why I was so happy that DeepSeek exists; it shows that you can have other places where technology can rise up and have an impact, much like Spotify coming out of Scandinavia. I think it's really important that we have that because a few men and women in Silicon Valley—and again, sadly, it’s mostly men—are deciding everyone's future. I think it's really problematic. We need to encourage more voices coming out of other places. In all my presentations, I'm often asked, "Who's gonna win: Team Human or Team AI and the robots?" And I say, "I believe—I'm optimistic—that Team Human will prevail."

But then the question comes, and somebody asked me this when I wasn't ready for it—they said, "What kind of human? Will it be more Elon Musks, Mark Zuckerbergs, and the 'tech bros' like Peter Thiel—an awful, awful man—or will it be a better type of human?" It can't happen unless we change the economics, the incentives, and the abilities for people in other places to rise and do new, better, and more important things.

Hanna Sistek (00:25:59)
Sree, I wanted to just hark back to DeepSeek, the Chinese LLM. Can you talk a little bit more about it for our listeners and explain why that's a good thing?

Sree Sreenivasan (00:26:10)
I was in India in early January last year doing a series of AI talks, when, out of the blue, came this announcement that a Chinese company called DeepSeek had created a model that is now available worldwide. They built it orders of magnitude cheaper and faster than OpenAI and those other tools. Just the fact that it wasn't American, I thought, was a good thing.

We can endlessly debate whether these things are all controlled by the Chinese Communist Party and what that means, but part of me would rather have things controlled by the Chinese Communist Party than by Larry Ellison—oh my God—or whoever is Trump's closest buddy that he then hands over the keys of the most important communications tools to. I didn't have problems with the way TikTok was; it was predictable. Now they're going to put their thumb on the scale and change the way that people see TikTok. So, I hope that at least gives you some background.

But I have to say, what I tell everyone is: when you see a new platform, you test it by going in and typing something you know a lot about. In the case of all of you, you know a lot about communications or whatever your PhD is in. Ask a question about that, and then you'll see if they know something or not. Or, just type in your own name and see what it comes back with. In my case, I typed in my name, and it was going along pretty well at first. It started by saying I was born on September 9th, 1966, in New Delhi. You know I was born in Japan, and it was actually four years later in 1970, so I was okay with that.

But then it went on, and only later did I realize it was all in the past tense. Sure enough, we came to a spot where it said, "Sree died in September 2020 during the pandemic, and a lot of people mourned his passing in the digital and cultural communities." I thought that was very sweet, but I'm kind of offended that none of you showed up at the funeral!

Julius Freeman (00:28:09)
I just didn't get it in the mail! I didn't get the email.

Sree Sreenivasan (00:28:15)
But that... I mean, even this model that I'm praising thought I was dead. And it's not about me, but I've had an online footprint since 1995 or 1996, so I've been out there. I'm so easy to find—my Wikipedia page, etc. If they get that stuff wrong, then what else do they get wrong? That is the question.

Sage Goodwin (00:28:34)
So, in thinking about how individual people should be tackling the problem of how to recognize misinformation: I know you've been a big advocate of media literacy. Do you have any kind of tips and tricks? And I should just say, one of the easiest ways you can make yourself more media literate is by subscribing to Sree’s Sunday note, his newsletter, which is a kind of deep dive every week into where digital media meets our analog world. It's also got lots of tips and tricks, and it's a really good resource for thinking about media literacy.

Sree Sreenivasan (00:29:05)
Thank you. Thank you so much. Yeah, you folks can find it on Substack; it is sreenet.substack.com. I've been writing for eight years now, and I do it because I can then go back to my roots of journalism, reporting, and writing. But also, as you said, we all have to do our part. I beg all my cousins and friends that every time another cousin sends something stupid into a group, you have to correct it. Embarrass them enough that they will think twice before hitting forward.

As we think about what we can do and how we fight this stuff, we have to experiment and try to understand the tools in front of us. What is new? What's exciting? What's good? What's bad? And how can we do things with that? The new iPhone that I have, for example, has a cool feature where you can shoot a video with a front and back camera at the same time, and therefore, that adds a lot of credibility to the videos you shoot. So if I'm interviewing somebody or I'm showing a fire, you can see me—I'm shooting the fire in one direction, but the back camera works, so that puts me in that spot. Can it be faked? Of course. But when it's built into the DNA of that camera and that tool, that's a really good thing.

Some of you will remember that this sounds vaguely like a tool that blew up during COVID—super popular—called BeReal, which used the front and back camera before it disappeared. So you can learn from these tools and see what we can do to make something more authentic to help people understand that this is real and this is not.

Hanna Sistek (00:30:40)
I think that one of the problems we're dealing with here is also motivated reasoning and polarization, right? Even when things are not real, people may not really care. One example: some social scientists did a study during Trump's inauguration in 2017 regarding the photo shoot of how many people came. I think they compared that to Obama's, which had visibly much bigger crowds. And then they went and asked people—Republicans and Democrats—which photo has the bigger crowds? It was shocking; several Republicans said that the Trump picture had the bigger crowds. It's not just about recognizing what's right or wrong; it's about thinking about what we want to believe, right?

Sree Sreenivasan (00:31:26)
Well, we were told as children, "Believe your eyes, believe your ears," but how can you when things can be so easily manipulated? And then what you're talking about is a slightly different problem, where people will believe what they want to believe. You could put any kind of information or facts in front of them, and they just won't believe it because they know in their heart of hearts—or what they see in the media—that what they already believe is what drives them.

Because things are changing so quickly and every day there's news that's shocking to us, people are inured to this. They just don't believe it. This idea, for example, that there is a group of rich people who are running sex trafficking rings and running the world, turned out to be true. The people who were doing it weren't necessarily the people who were being accused of it—it might have even been the other side—but it doesn't matter, right? Because once you've broken that dam of "this is something we believe" and "this is what we consider to be true," once that's gone, you cannot bring it back. In journalism, we try things like corrections and fact-checking, but all of that is problematic because it's so hard for people to believe it.

Sage Goodwin (00:32:49)
And I think especially with what you were talking about earlier, regarding the difficulty of getting people's attention—it's one thing to get attention for the first incorrect or unreal thing, but then to additionally get attention for the debunking and the fact-checking? That's exactly the issue. It's a problem of attention and sustained attention. But on the topic of attention, if our listeners had only a couple of seconds to listen to what you have to say, what would be your three main takeaways to give them?

Sree Sreenivasan (00:33:23)
Well, I think the fact is that you have to own your role in all of this. By participating in social media, you're part of the problem. Every day that you're on it—every time you pick up your phone instead of doing something else, every time you are doomscrolling—you are part of the problem.

Therefore, every single time you don't pick up your phone, every single time you read a book, every time you pick up a magazine, or even watch Netflix rather than participate in this, you are making a difference. So, know that. That would be one thing. The other is that the speed of this stuff is changing so quickly that we have to stay literate about the technology and literate about new problems coming down, so that people can help you understand.

Finally, I would say we are responsible for the people who run our lives in terms of politicians. We must all participate and not just say, "Well, it's too late. I'm just one vote; it doesn't matter." What you have seen in the United States—the extreme damage being done in the US by Donald Trump—is from an election margin that's 1.5%. You would think, if you didn't know better, that he won 60% or 70% of the vote, or that he won by 40 points. No, he won by 1.5 points—less than 50% of the vote. It was the third-smallest victory for any president who won both the Electoral College and the popular vote; it was the 11th smallest in history among all 47 previous presidents.

From that, they're claiming a mandate. From that, they are causing havoc all over the world. Because the world is now all alliances and connections, even if you don't live in the US, you must care about what's happening here and hold your leaders' feet to the fire to say, "We're all in this together in this awful, terrible way."

We're also talking just after the birth anniversary of John Lewis, during Black History Month. John Lewis was a great congressional leader who had this idea that you should make "good trouble"—that you should speak up and speak out. Look at the people of Minnesota and how they're fighting back and what they're doing. We should all be doing that in our own countries and our own cities; we must be ready to fight back. He had a great line: "We all arrived on different ships, but we're in the same boat now." I feel like we're exactly like that—the entire planet is in one boat being run by four, five, or ten terrible "Silicon Valley bros" and a handful of terrible world leaders. We are kind of adrift on this ship, but we have to fight back and take responsibility for ourselves.

Sage Goodwin (00:36:19)
Yeah, I think that's a really important message advocating against apathy, really. So thank you for that, Sree. Before we let you go, we would love to know about your media diet. You've been talking a lot about finding who to trust and educating yourself on what you read and listen to. What we like to frame as your "meat and two veg"—what is your main news diet? And then we'll also ask you about your "junk food"—what you do for fun—and your palate cleanser.

Sree Sreenivasan (00:36:52)
Wow, that's making me hungry. Let's see. I'm still on X, which a lot of people have left. I know some of you are there—Hanna's there—but I'm still there. People ask me why, and I say, "If I'm unhappy with my electricity company, I don't boycott it. I try to make it better." And that's why I'm there.

But I also love being able to say terrible things about Elon Musk on his own platform, to his face—not that he'll ever see it. I'm throttled on there; I'm blocked, and my influence is much reduced since he took over. But that's okay; that's his prerogative. I still have hope that someday he'll get bored and move on. So I still read a lot on Twitter, and because of the things I read, the algorithm now only feeds me mostly stuff I want to see—stuff that is helpful in the work I do.

I read lots of news and a lot of newsletters. I have things just kind of coming at me all the time. I also have a lot of friends and acquaintances around the world who forward me things. I tell them, "If you see something useful or helpful, just forward it to me. You cannot over-communicate with me unless you're sending me junk. But if you send me good stuff, I want it because I'm looking for things to share; I'm looking for new ideas." Regarding that idea of a "Chief Listening Officer," I have to be my own Chief Listening Officer, so I have to listen to other things.

My family will tell you I watch way too much Fox News, and I do it because it helps me understand a little bit about what's happening in the world. I teach a class on the "evils" of Rupert Murdoch. I knew days after Obama was elected—let alone inaugurated—that he was going to be in deep trouble and unable to do the things he wanted, because on Fox, you could see the anger of white nationalism. That then manifested itself in the Tea Party, which morphed into MAGA and everything else that's happened. So I like to watch it because it gets me nice and angry. It's the same way I read The Wall Street Journal's editorial pages. The news pages are terrific—the reporting there is top-notch. The opinion pages, however, make your blood boil. But I like it. When your blood boils, it means you're ready to go out and do something about it.

Hanna Sistek (00:39:09)
I wanted to ask you, what are your top three newsletters?

Sree Sreenivasan (00:39:13)
It would be unfair to call out just one or two because there are so many good ones. But the ones I tell other people about—especially on the tech, journalism, and media sides—include one called Wonder Tools by Jeremy Caplan. He takes technology and breaks it down; he does a really good job with AI, especially.

There's another one called Journalist's Toolbox by Mike Reilley. You don't have to be a journalist to use these, but they will make you a better consumer of news and a better participant in our attention economy.

I also love reading Liza Donnelly. Liza is a New Yorker cartoonist who has adapted to the new world. You can think of nothing more classically traditional than a New Yorker cartoonist, but she has taken those skills and applied them to the digital world. She runs a great Substack and does things like live-drawing at the Oscars, the Tonys, and the Grammys. I showcase her as someone we can all learn from as we age and face new roadblocks; she was able to transform herself, where a lot of people in traditional media would just say, "Woe is me." She stepped up and has done an impressive job.

Sage Goodwin (00:40:45)
Thank you. And then just quickly, what's your "junk food"? What's your favorite thing to get away from it all?

Sree Sreenivasan (00:40:53)
I like watching YouTube Shorts. I find that YouTube Shorts is not doomscrolling because almost everything I get is basically clips from Breaking Bad, Better Call Saul, and The Sopranos. These are the shows that I love—and Seinfeld. Somehow, I've curated this little engine that only shows me these four shows.

The growth of YouTube Shorts has been amazing to see. We put our Read-Along on YouTube Shorts as well as on regular YouTube, and sometimes there are 100 to 500 times more people watching on YouTube Shorts than on regular YouTube. But I'll say one word about YouTube in general: during the pandemic, I said everyone should follow one of my hashtags—#AlwaysBeLearning—and that we should all learn new skills. I decided to learn how to make YouTube videos. Oh my God, they are so tough!

To get people to subscribe to your YouTube channel requires enormous fortitude and luck, because people treat a YouTube subscription differently than a Twitter follow or an Instagram follow. They really care; they're curating their YouTube world. Also, whatever you subscribe to often shows up on the big TV with the family, so they're very careful about that. Reaching my first thousand subscribers on YouTube took forever, whereas I have tens of thousands of followers on other platforms. It just showed me how hard it is, and my respect for these youngsters who have truly mastered YouTube—and are able to get you to subscribe and hit the notification button—went up dramatically.

Sage Goodwin (00:42:27)
Thinking about subscribing, Sree, where can our listeners find you and your insights? I know there are many places, but give us the top ones.

Sree Sreenivasan (00:42:37)
Well, the main thing I would love everybody to do is go to our Substack: sreenet.substack.com. The reason is that we are experimenting there to do more than just my newsletters, which will always be free. I don't believe that if you're trying to change people's minds, you should then ask them for money; it doesn't make sense. If you've ever seen Jehovah's Witnesses or other people standing on the street, they don't ask you for money before they try to convert you, right? They just bring you in.

The people at The New York Times who were most upset in the early 2000s about the move to a paywall were columnists like Maureen Dowd, Nick Kristof, and Tom Friedman, because they're in the job of "converting souls," and to convert people, you have to let them read your stuff. So, my main newsletter will always be free, but we're blessed to have paid subscribers now for whom we can provide a lot more content. We even created a newsletter called OKTV. You all know "Prestige TV" or "Peak TV"—well, what about just OKTV? Netflix creates so much content that we all watch endlessly, so we created that. We're going to do much more on there. I'm very grateful for everyone who subscribes, whether for the free or paid version. And then I would direct you to my Twitter account, because I like the pain.

Hanna Sistek (00:43:59)
Well, thank you.

Sage Goodwin (00:44:00)
Yes, thanks so much for joining us, Sree. It was great to have you.

Sree Sreenivasan (00:44:04)
Thank you. It's so great to be in a conversation that's not just one-on-one. You get so many different perspectives coming at you, and I love that. Thank you for that, too.

Hanna Sistek (00:44:12)
Appreciate it.

Julius Freeman (00:44:13)
This has been another episode of Captivated. It's been hosted by CAPT—you know, "CAPT-ivated." You guys get it. It's the Center for American Political History, Media, and Technology.

Hanna Sistek (00:44:27)
The ideas presented by individuals on the podcast are theirs and theirs alone. They do not represent Purdue University, which adheres to a policy of institutional neutrality. 

Sage Goodwin (00:44:37)
To learn more about this episode's guest, check out the show notes. We really enjoyed this conversation today, and we hope you got something out of it, too. Thanks for listening.