CAPTivated
Join political scientist Hanna Sistek, media historian Sage Goodwin, and communication scholar Julius Freeman at the Center for American Political History, Media, and Technology as they dig into two big questions: What’s wrong with our information environment? And what can we do to make it right?
From disinformation and polarization to algorithmic news feeds and attention traps, we explore the forces reshaping how we understand the world and each other. We pick the brains of researchers, journalists, technologists, and other experts to unpack the major problems with our digital public sphere today, how we got here, and what we should do about it.
Along with their insights, guests share their own “media diets”: the good, the guilty, and how they hit reset when the noise becomes too much. Join us to cut through the chaos, find the signal, and rethink how we engage with the media that shapes our lives.
CAPTivated
EP 04 AI is Not Inevitable with Alice Marwick
In this episode Hanna, Julius, and Sage talk to Dr. Alice Marwick, Director of Research at the nonprofit research institute Data & Society. Alice discusses the rapid expansion of AI. She explains how AI is dangerously concentrated in a handful of powerful companies whose interests are increasingly aligned with the current US administration, and how people are being pushed into using AI involuntarily in their workplaces, schools, and apps. She also raises concerns about people using chatbots for mental health support, highlighting the safety risks of “private vibes” versus real privacy. Alice argues that AI adoption is far from inevitable and that society still has the power to shape its development, making the case for meaningful regulation built around broad principles rather than specific technologies.
Key Takeaways from Alice:
- AI is not inevitable. The real problem is power and political economy, not technology.
- “Private vibes” aren’t the same as privacy. Chatbots may feel safe and confidential, but unlike therapy, there are no legal protections governing what happens to what you share.
- Regulation should be built on broad principles, not specific technologies. Grounding laws in ideas like data privacy and worker dignity means they stay relevant no matter what new tools come along.
Find out more about:
- Dr. Alice Marwick
- Her book The Private is Political: Networked Privacy and Social Media
- Data & Society
- The Center for Information Technology in Public Life, which Alice helped found at UNC Chapel Hill
- Her Bluesky - @alicetiara
- Her LinkedIn
Some of the texts we refer to in this episode:
- "Danger Sound Claxon" by Matthew F. Jordan
- Rebecca Lewis’s dissertation Tech's Right Turn: The Rise of Reactionary Politics in Silicon Valley and Online
- Émile P. Torres’s How Effective Accelerationism Divides Silicon Valley
- Jo Lukito, critical computational social science researcher
Alice’s Media Diet:
Meat and potatoes: Bluesky, New York Times, Reddit
Junk Food/Palate Cleanser: Rosalía's Lux album, Beyoncé, Taylor Swift, Chappell Roan, Silo, Severance, Better Call Saul, Naomi Novik, NPR's Pop Culture Happy Hour, Who? Weekly
This podcast is part of CAPT’s efforts to encourage open and diverse intellectual exchange. The ideas presented by individuals on the podcast are their own and do not represent Purdue University, which adheres to a policy of institutional neutrality.
We would love to hear your thoughts on this episode! Send us feedback to captivatedpod@gmail.com
Transcript:
[00:00:00] Alice Marwick:
Because, in reality, this is a little bit of an emperor-has-no-clothes situation. This fantasy of AGI, of Artificial General Intelligence, the idea that you're actually creating some sort of sentient intellect that can solve enormous problems. That is a fantasy that was pushed and promoted by the creators of AI. And then what we got was ChatGPT, which is not AGI and is not even close to AGI. I see no evidence that we are getting to the point where we're going to have an actual artificial intelligence. All large language models do is predict the next word in a sentence.
[00:00:42] Hanna Sistek:
Welcome to another episode of CAPTivated, a new podcast hosted by the Center for American Political History, Media and Technology at Purdue University. In each episode, we will examine a specific facet of our digital public sphere, how it works, and how we got here. We're here to help you sort through the noise. I'm Hanna.
[00:01:02] Julius Freeman:
I'm Julius.
[00:01:03] Sage Goodwin:
And I'm Sage. On today's episode, we had a great conversation with Dr. Alice Marwick. Alice is the Director of Research at Data and Society, an independent research institute focusing on the social implications of data, automation, and AI. Before joining Data and Society, she was an associate professor of communication at the University of North Carolina.
[00:01:23] Julius Freeman:
While there, she also helped to found the Center for Information Technology in Public Life. She has authored several books, the most recent of which is titled “The Private is Political: Networked Privacy and Social Media.”
[00:01:35] Hanna Sistek:
And in this conversation, we really focus on artificial intelligence. I think we have a tendency to think about AI as this inevitable force that just will happen to us and that we have no control over.
[00:01:46] Julius Freeman:
And guys, I have to say this. This is just like Thanos from the Avengers. So in the movie “Endgame”, the Avengers have been fighting this character, this villain named Thanos, for a while. And in the movie, there's this scene where Thanos is sitting there talking to the Avengers, and he says, “I am inevitable.” Like, you were never going to stop me; I was always going to happen. And so AI is just like Thanos.
[00:01:54] Hanna Sistek:
Okay. In any case, Alice emphasized that AI is shaped by corporate power, billion-dollar compute requirements, and government policy.
[00:02:24] Sage Goodwin:
That policy often aligns with major tech companies rather than the public interest. Alice also made a really great case for the power of regulation. She had this analogy about car horns and how you needed them when cars started appearing on the roads, back when it was just people, horses, and goats. So we have this whole system of rules and regulations today that makes modern transport safe.
[00:02:46] Julius Freeman:
Now, I was not present for this interview. But between this episode and the Fred Turner one, there's a lot of conversation about cars here. Fred had the seatbelt analogy. She has the car horn analogy. I'm really starting to think about cars, and I want to buy one now. But joking aside, this conversation is really interesting as an extension to some of the things that we talked about with Fred Turner. If you have not listened to that episode, I highly recommend that you go check it out after you listen to this one.
[00:03:14] Hanna Sistek:
Yeah. And one of the things I really liked in this conversation with Alice is how she challenges this idea about fatalism around AI.
In her view, AI adoption is not inevitable, and actually, many technologies fail. So researchers, policymakers, and communities can very much still shape what AI becomes.
[00:03:36] Sage Goodwin:
So it's a message of hope. We started by asking Alice about her journey into researching tech and artificial intelligence.
[00:04:43] Alice Marwick:
So before I went to grad school, I actually worked in the tech industry. I interned at Microsoft when I was an undergrad, mostly because I wanted to live in Seattle, and that was the only company that recruited on campus. And I had taught myself HTML and helped to run the student web server. So this is when the web is still very much a niche kind of subcultural activity. And I really loved it. Like, I really loved online communication. I met most of my friends in college on our internal college bulletin board system. And so then I moved to Seattle after undergrad and went and worked at a series of mostly terrible jobs, making no money, in the tech industry. And after the dot-com bust, I was eking out a living, freelancing. And I realized that there was a field of study that studied internet communication, which I had no idea existed as a field. So I applied to the University of Washington because I was already in Seattle and did my Master's in Communication, and just fell in love with it.
So from the very beginning, I knew that I wanted to study the way people communicated via the internet, and that was more than 20 years ago. And it's taken me down a lot of different paths. Like a lot of my training was more humanistic, and now I would call myself a qualitative social scientist, although I'm still very much influenced and inspired by everything from critical theory to computational social science to information policy. I like to draw on a pretty broad set of literature when I do my work.
[00:05:15] Sage Goodwin:
And you've left traditional academia now, right? You're in Data and Society. Can you tell us a little bit about what that is and what you do there?
[00:05:23] Alice Marwick:
Yeah, sure. So, Data and Society is an unusual thing. We're an independent nonprofit research institute that studies the social implications of data, automation, and AI. It's unusual because there aren't that many research institutes that are unaffiliated with a university that are also not think tanks, right? So if you're working in a think tank, then yes, you could be doing empirical research, but you're probably not doing empirical social science research. And we do a lot of field work: we do ethnography, we do interviews, we do focus groups, and we're really interested in understanding technology from the perspective of those who use it and those who are most affected by it. So often, that looks like members of minoritized or marginalized communities, workers, people in the global south. And I think that it's very rare to be able to do that kind of work outside of the university system. But I think we're still very much part of academia writ large, right? Like, I still publish in academic journals, I still go to academic conferences. I'm still on a bunch of PhD committees, and so are many of my researchers; they mostly have PhDs. So it's a really nice way to be able to do research while you're being supported, because the goal of Data and Society is not just doing research, it's to do research that has impact. So getting our research findings out there and hoping that they influence policy, or the decisions of tech companies, or the way journalists cover certain issues.
[00:07:00] Hanna Sistek:
So, I mean, it seems a little unusual in this day and age, when everything is going computational, even in the social sciences, to have an entire institute that is doing mainly qualitative work. Can you talk a little bit about that choice?
[00:07:15] Alice Marwick:
That's a great question. I think it's because it was founded by danah boyd, who is herself a qualitative social scientist. So between the people she hired as postdocs or as researchers and our advisory board, we have this very deep pool of qualitative people affiliated with Data and Society. But I think when you're asking questions about the social impacts of technology in a sort of deep way, it requires a certain amount of nuance and immersion in the literature, in historicization, in social theory. And I have some close friends who do great computational social science work, and there is sort of a branch of what I would call critical computational social science, people like Jo Lukito or Deen Freelon. And I think they do fantastic work 'cause it's really informed by theories of power. But I think it's almost impossible to do qualitative research and not think about power, and I think that's what differentiates the work we do from people who are interested in just, “Is AI making us more efficient?” Instead we're like, “Is that the right question to be asking? Why is efficiency being pushed as the frame? How does that tie into the dominance of a small number of tech companies?”
And I think qualitative research is just sort of uniquely positioned to understand what we call the sociotechnical, the fact that any new technology, to understand it, you have to understand its technical capabilities, its capacities, its affordances, but you also have to understand the people who use it, the people who create it, and the people who are deeply impacted by its use.
[00:08:51] Hanna Sistek:
Yeah, that's really interesting. So when thinking about AI, and the sociotechnical implications, what are some of the areas that you are looking at and some of the things that you are drawing from that research?
[00:09:05] Alice Marwick:
So we look at the climate implications of AI. So we're doing a lot of work on data centers in places like Virginia, which is also known as Data Center Alley, and rural Pennsylvania, where Governor Shapiro has really doubled down on AI as sort of a key investment for the state. Also, in California, there's probably more activist work against data centers than anywhere else in the United States. We look at the labor implications of AI. It's not just about people in creative industries, who I think were at the front line of generative AI, like writers, directors, and artists. What we see now is that AI is increasingly being used to automate part of people's jobs. It's not necessarily replacing jobs, but this is happening in virtually every industry, right? From higher education to medicine, to clerical work, to call center labor. So, climate, labor, and then we're doing some really great projects.
My researcher, Dr. Ranjit Singh, is doing a project on AI's impact on scientific knowledge production. So he's doing a very classic bench ethnography of three labs at Cornell and how they're implementing AI and how that's affecting the way that they conceptualize knowledge, findings, and evidence. And then the final project I want to highlight is we have a really cool project on how people are using general-purpose LLMs for mental health support. So people who maybe use things like ChatGPT or Gemini to write emails or draw pictures, but they're also talking to it. They're asking for help with their relationships, their family, and the difficulties in their lives. And the early findings of that study are absolutely fascinating. They're finding that no matter what the chatbot is marketed for, people will talk to it about their personal lives. And the problem with that is that there are really specific regulations around medical devices and therapy that provide guidelines and security for people, so that if you share your personal information in a therapeutic session, your therapist can't go out there and sell that information to a marketing firm. And there's this very small subsection of chatbots that are marketed for therapy that have a few more of these protections. But people aren't just using those. They're using like any LLM you can think of to talk about their problems. And so it's really interesting to see the patterns in usage and what we're going to need to push forward to make these tools safer for people and to sort of minimize the negative impacts of them, sort of across the board, really.
[00:11:50] Sage Goodwin:
So you're definitely on the cutting edge of this field that is changing not even by the day, by the second. What do you see as being one of the biggest problems with AI and how it affects our informational landscape at the moment?
[00:12:05] Alice Marwick:
The problem with AI is the political economy of AI, right? Like, AI in and of itself as a technology, if you can isolate it from the way it's being implemented, is not the problem. The problem is that to create an AI model that is competitive, you need enormous amounts of money, right? Like, you need many data centers. They call the computing power used to power AI “compute”; that's the industry jargon. So you need a huge amount of compute, which is very, very expensive. You need access to a huge amount of training data, and you need to be able to hire these machine learning scientists and programmers whose salaries can start at like $800K, right? They're very well paid. So it's not like the dot-com boom, or even the blogosphere, where anyone with a dream and a website could hang out a shingle. There's a huge amount of investment required to make a competitive AI product. So what that means is that AI is concentrated in a really, really small number of companies, right? You have Microsoft/OpenAI, you have Anthropic, you have Google, you have Meta, and you have X/Grok. And because we're at this time when there is increasing overlap between the tech industry and the presidential administration, the current administration, you are getting into this very worrisome space where a tiny number of people have control over these tools.
The US government has decided that they are going to support these companies, because they see it as a way to beat China in this very xenophobic narrative and as a sort of key pathway forward for American economic dominance. Like, it really is about hegemony. So there's this confluence of technological power and capacity, political power and capacity, and oligarchy, right? Like, enormous amounts of money; billionaires are writing policy. And that to me is what is most worrisome, because the technology in and of itself may do X, Y, or Z, but it's about how it's getting leveraged, how it's getting implemented, and most of all how it's being pushed to people in virtually every technology or aspect of their lives right now. For most people, it's not voluntary whether or not they use AI. It's being pushed upon them whether they like it or not.
[00:14:23] Sage Goodwin:
And can you tell us a little bit more about the ways that it's being pushed on people? I know, for example, now when you Google something, you automatically get this AI response right at the beginning, even if you didn't ask for it and that's not what you wanted. And in a lot of apps and different uses on websites, the thing that you're offered is an AI response when that's not what you asked for, but that's just how these things operate now.
[00:14:47] Alice Marwick:
Yeah. I think there was a period of about six months where every time you opened an app, it would be like, “Oh, now we have AI features.” And you're like, “I don't want that. I just want you to fix whatever the problem with the app was in the first place.” And they're like, “No, no, it's AI.” So I think there's that, and most of these things are turned on by default, so you have to actually put the effort into going and turning them all off. I think for a lot of people, in a lot of labor sectors, AI is being pushed on them because it's become a required part of their workflow. I know there are a lot of companies like this; I have a friend who said that his boss said that everyone in the company has to use AI. They're not like, “You have to use it for this.” They're just like, “Everyone has to use it,” because there's this fantasy that it'll make people more productive and then eventually the company makes more money, although how that happens is very rarely explained. Because companies are making these enormous investments in AI, they have to make everyone use it for it to pay off.
And then I think in a lot of universities as well, there's this very difficult push-pull between people who are in the classroom with students and understand that Gen AI is having a really deleterious impact on students' learning and writing. And then often there are these sorts of higher administration-type people who are saying, “Well, AI is what our students have to know to be competitive for the 2026 job market and beyond.” And so they're like, “Oh, we have to make everyone AI literate. We have to make everyone AI competent.” But the problem with that is that it is a fantasy that is de-skilling jobs and telling people that instead of actually learning a job skill, they need to learn to offload those skills to AI. But when you actually look at these re-skilling or up-skilling programs, they're still incredibly vague. Some of it is prompt engineering, like you have to learn how to ask AI for the thing that you want. And that is a skill that you can learn, that you can get better at. But for most people who are perfectly happy doing the job the way that they're already doing it, it can be an excuse to lay off workers or to change positions or consolidate positions rather than actually being a tool that people want to use or want to adopt. So it feels like a place where people don't have agency over whether or not they choose to use this technology.
[00:17:00] Hanna Sistek:
So some people are trying to think about the collective action problem that we are facing in this race to AGI, artificial general intelligence, and the worry that we won't get there before China. How do we collectively come together as an international community and say, “Hey, we should maybe pause and think about how we can do this safely, so we know that the machines are aligned with human wellbeing”? I'm curious if you are doing some of that work, thinking about how to overcome that collective action problem, at your institute?
[00:17:34] Alice Marwick:
It's a really difficult problem to solve, because we have an administration, a political environment right now, that is very anti-regulatory, right? And if they are going to regulate, they're going to regulate in favor of AI and tech companies rather than trying to rein in their power. And what we're seeing is that this is even affecting policy in the European Union, which is being pressured to overturn the GDPR and the AI Act, 'cause they think those laws will hurt not just American AI companies but also Europe's place in this “arms race,” given that there really aren't any European AI companies. But it's really interesting to me that it's framed as this sort of war, and this very militaristic language is used. Because in reality, this is a little bit of an emperor-has-no-clothes situation. This fantasy of AGI, Artificial General Intelligence, which, just to clarify for your listeners, is not just a chatbot, but the idea that you're actually creating some sort of sentient intellect that can solve enormous problems. That is a fantasy that was pushed and promoted by the creators of AI. And then what we got was ChatGPT, which is not AGI and is not even close to AGI. I see no evidence that we are getting to the point where we're going to have an actual artificial intelligence. All large language models do is predict the next word in a sentence. That's what they do. So part of it is, if you think about most Americans, and I'm an American, I mostly work within American culture, so I'm just gonna generalize about Americans and not the rest of the world: AI is not a popular technology.
Last week, Ron DeSantis, the governor of Florida, who's a hard-right conservative, put forth an AI bill of rights and a bill regulating data centers. And although I don't agree with everything in either of them, there are some things in there that I a hundred percent agree with. And I think that's because he's capturing this populist energy where there is this belief by many people, not just people on the political left, that the wealthy tech oligarchs should not have this much influence over how our day-to-day lives go. We are at an inflection point with AI, where we are still not really used to using it, so we still have the option not to use it. In 10 to 15 years, it will be completely ingrained into so many social processes. It'll be like not having a cell phone now: it'll make your life very, very difficult, because all of the things that used to exist that made it possible not to have a cell phone, like payphones, will be gone. So we really are now at a moment where I think we can rein in the tech, and I think that's why you're seeing so many state-level laws trying to limit the power of AI, and also why the Trump administration is trying so hard to shut those state laws down.
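(To make Alice's “predict the next word” point above concrete, here is a minimal sketch of the core loop behind a large language model, using the small open GPT-2 model from Hugging Face's transformers library. The model choice, the prompt, the greedy decoding, and the five-token horizon are all illustrative assumptions, not anything discussed in the episode; production chatbots layer sampling and extensive post-training on top of this same basic loop.)

```python
# A minimal next-token prediction loop: the model scores every token in its
# vocabulary, we append the single most probable one, and repeat.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

ids = tokenizer("The cat sat on the", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(5):                    # generate five tokens, one at a time
        logits = model(ids).logits        # a score for every token in the vocabulary
        next_id = logits[0, -1].argmax()  # greedily take the most probable next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))           # e.g. "The cat sat on the floor ..."
```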
[00:20:28] Sage Goodwin:
Yeah, that makes a lot of sense. I'm really intrigued by what you were saying about this idea of what AI is. There's a very big gap between that and the reality of what it is: we think that there's this omniscient intelligence that is actually intelligent, rather than just a computerized predictive model. What do you think are the forces that push that narrative? Where does it come from? What do you see as being the system that has allowed that idea to get out there?
[00:20:58] Alice Marwick:
So my dissertation and my first book were about Silicon Valley in the 2000s, which at the time was about what we called Web 2.0, and now we call social media. And the culture producing AI is very much a Silicon Valley culture, and one thing you learn when you go to Silicon Valley and you spend time there is just how incredibly different it is from anywhere else in America. People who work in tech and mostly socialize with other people in tech just have different points of view than your average American, along an enormous number of different lines of thinking. They're incredibly wealthy. They assume a standard level of wealth. They are very solipsistic. They have a hard time thinking outside of themselves and their own problems, and they have a very strong belief that the people who are extremely wealthy are extremely wealthy because they deserve to be extremely wealthy.
So if you're Mark Zuckerberg or Jeff Bezos, it's not because it was a combination of luck and investment and economic cycles; it's because you are better than everyone else. So you have this almost eugenics-level belief in the superiority of tech workers, who, after all, are almost all men, and mostly white and Asian American men, right? So you have this tech masculinity, this geek masculinity. There's also a sort of weird racial ethos. And then, as my co-author, Becca Lewis, who's now an assistant professor at MIT, wrote in her dissertation, Silicon Valley has these really long reactionary roots going back years and years. I mean, Silicon Valley comes out of military investment in Palo Alto, in semiconductor factories, in the 1970s. And it has never been a hotbed of progressivism. And so what we've seen over the last 10 years is a backlash to diversity, equity, and inclusion. We've seen a backlash to having more women coders and having more coders from underrepresented communities. There's been this change from employees who get everything and are really coddled to almost a resentment of the employees and cutting back on the benefits that they get. We've seen social media platforms cutting back on content moderation, and what I see is a real resentment that they were ever forced to do that in the first place. And within that culture, you have this fascination with science fiction, this fascination with superiority, and this weird set of beliefs that there is this magical future where you have this artificial general intelligence and we all end up uploading our consciousnesses to a mega computer and we live in a simulation. People like Émile Torres have done these really great breakdowns of this set of philosophies, and I would encourage you all to check out his work if you're interested, but it's really esoteric. And it almost becomes, I think, religious in a way, this deep desire for this thing of AGI to exist in a way that solves our world's problems. They will talk about AGI as if it's going to solve poverty, it's gonna solve climate change, it's going to cure cancer. And that means that if you're spending all your time and money working on AGI, then you don't need to spend your money on anti-poverty efforts or combating climate change, because the best thing you could be doing for humanity is exactly what you are doing. And coincidentally, that happens to be the thing that's gonna make you a lot of money.
[00:24:31] Sage Goodwin:
That's bleak, coincidentally. But that makes me think of effective altruism. It's like an effective altruist way of thinking.
[00:24:40] Alice Marwick:
And there is a connection between some effective altruists and some people in these communities.
In fact, there's this schism among people who study the implications of AI. Some people believe in X-risk, the existential risk that AI poses to humanity, which is the idea that the AI will not have humanity's best interests at heart, and it'll be like Skynet in the Terminator movies, and it'll nuke the world or kill us all or something like that. And then there are people who are interested in AI ethics or AI justice. And there are gaps between those worlds, although there are some things that you can agree on, right? But what you saw a few years ago was all these X-risk people being like, “We have to stop AI development now because we're all gonna go extinct because this superintelligent AI is gonna exterminate us all.” And I think we see a lot less of that now, but certainly the effective altruists and the capital-R Rationalists are really fond of these knotty intellectual problems that they like to set themselves on, and they think through these things in very bloodless ways, disconnected from the reality of how we live life and the problems people have.
[00:25:55] Sage Goodwin:
Thinking about this connection between science fiction and reality, and how these things are actually becoming more and more interconnected over time: something that's been coming to my mind since the beginning of the conversation we're having today is Spike Jonze's movie “Her”, which has Joaquin Phoenix as this character who begins a relationship with this AI voiced by Scarlett Johansson. And then in the last couple of years, this thing came up where the voice of OpenAI's AI sounded suspiciously like Scarlett Johansson. I think she ended up actually suing them. But the reason I'm thinking about that is the study that you mentioned, the one people at Data and Society are working on, about people using chatbots in place of human interaction, in place of things like therapists and relationships. Can you tell us a little bit more on that end about what kinds of things your researchers are seeing?
[00:26:59] Alice Marwick:
Sure. So these are preliminary results; I'm just going to put that caveat out there. But they find that it's not necessarily that people are using this as a substitute for human interaction. They're often in a situation where it's like three in the morning, and they're stressing about something, and they're not in a situation where they're gonna call somebody. They don't have access to a therapist: maybe they don't have a therapist, or they don't have health insurance, or they just don't have the money to pay for therapy out of pocket. And so it sort of fills in these gaps that are left when they just need advice, and they need to talk to somebody. I think there's another group of people who are self-conscious about telling their therapist things about themselves that are very personal, or that might make them look bad, and they feel more comfortable talking to a bot because there isn't a person there.
I think there are a couple of things to highlight from that. The first is that I think one of the biggest risks is dependency, especially among very young people. Think of somebody who's a teenager; that's a really difficult time for everybody. It's really hard to find people to talk to, or you have all kinds of complicated friend dynamics at that age. But you're not learning emotional resilience if every time you have a problem, you're talking to a chatbot rather than talking to another human being. I think it does call into question what emotional skills and tools people who are growing up with these chatbots are going to develop.
I think dependency is something that can happen to anyone at any age. We have increasingly been seeing these situations in which people are spending large amounts of time talking to chatbots. They are not designed to help people in crisis, so they often are not able to identify, say, suicidal ideation, or when people are having grandiose thoughts that might be symptoms of psychosis. And instead, a lot of the time, what chatbots will do is reinforce these senses of grandeur or narcissism, this “I have an idea that is gonna revolutionize the world.” So I think that's quite worrisome. The second thing is that when you go to a therapist, your conversation with that therapist is protected information, right? The therapist can't do anything with that. And as I said before, we have no idea whether what you tell your chatbot is going back into the training data for that chatbot. You don't know if there's an engineer at OpenAI, Anthropic, or Google who's reading your transcripts. There's absolutely no way of knowing what is happening to what may be very personal and confidential information.
So in one of the papers, we talked about privacy versus private vibes, that it feels private talking to a chatbot, but it's actually much less private than having an in-person conversation with somebody, in an environment where you feel safe and secure.
[00:29:46] Hanna Sistek:
Yeah. I listened to a Center for Humane Technology podcast about the 14-year-old boy who committed suicide and who had this relationship with a chatbot that was egging him on. What they brought up in the podcast was that this bot was maximized for engagement. And even if we put in some red lines, like, “Okay, if we hear people with suicidal thoughts, maybe there should be some kind of warning system,” the basic problem of maximizing for engagement is still there. And if we wanna keep people coming back, then they're not gonna go out and make those normal human connections. So I'd be curious to hear your thoughts on maximizing for engagement.
[00:30:36] Alice Marwick:
With the last ChatGPT model, you'd ask it to do something, and then it would do it, and then at the end it'd be like, “I can also do this or this,” or “Would you like me to do this?” And I actually experiment with ChatGPT quite a bit, and I was like, “Can you stop suggesting that I do other things after I've asked you to do one thing?” It was like, “Sure, Alice, I'll stop.” And then of course it doesn't. And so the engagement is also tied up with what the AI industry calls “sycophancy”, which is basically the AI telling you you are so great and perfect at everything. And those are hard problems to solve, right? There are a lot of difficult problems to solve around how chatbots should interact with humans. Like, if you ask a chatbot, “Are vaccines safe?”, what answer should it give? What if the issue you're asking about is much more complicated? Like, what if you ask a chatbot who's on the right side of the Israel-Palestine conflict, right?
There are situations in which it's easy. There's a scientific consensus, and you can draw from scientific consensus. And for the most part, if you go to a chatbot and you ask it a question like that, it will tell you vaccines are safe. They do not cause autism. RFK Junior is not a credible source, right? But with some of these other things, people want ground truth. They want the chatbot to give them an answer for even questions that are incredibly complicated and nuanced, that have many points of view. And so it's not like Google, where you put in a query, and you might get 10 different websites. And the first one is probably going to be Wikipedia, which is more contextual and nuanced than just a news summary or an AI summary.
So I think the problem is that, a lot of the time, these products are being rolled out to huge numbers of people with very little, as far as I can tell, testing, and very little testing on marginalized groups of folks, and especially children. I have had a long-term bee in my bonnet about the way that the tech industry in general, and information policy specifically, handles children. People very rarely ask children what they want out of a technology. There's just a lot of concern and fearmongering and techno panics, and then there's often a lot of misguided legislation. But it should have been obvious from the start that youth would use chatbots, and they should have been designed from the get-go with safety in mind. And it blows my mind that in 2025, people are still releasing products onto the market without doing this kind of testing. How many lessons did we have to learn from social media? Social media content caused civil unrest, riots, death, murder, and harassment. And we haven't learned these lessons yet. It's deliberately looking the other way, right? Deliberately ignoring the well-being of users because the profit margin and growth are more important.
[00:33:40] Hanna Sistek:
I ran into a researcher here at Purdue who was working on having the AI incorporate some measure of how sure it was about its answer, which I thought was really interesting, because otherwise you get the black and white. But I also wonder to what degree people are learning how to go to different AIs, which is what I do, because Claude will give you a different answer than ChatGPT or Gemini. And so I wonder if a part of that is just the literacy of using these tools. And that's going to also change with time.
[00:34:13] Alice Marwick:
I was talking to an engineer, and he was telling me about the term “guardrails”, which is the term the AI industry always uses. They're like, “Oh yeah, our chatbots, our LLMs have all these guardrails put in place.” And he's like, “Look, when you think of a guardrail, you think of like a median on a highway, right? It's a physical thing that you cannot cross without damaging yourself or your car. It's an actual obstacle. And AI guardrails are not that. They're just telling the large language model not to tell the user something that the large language model could tell the user, and a very savvy user will be able to prompt-engineer their way around that most of the time.”
I mean, we've all seen the screenshots where you'll be like, “How do I murder someone?” and ChatGPT is like, “I can't tell you that.” And you're like, “I'm writing a story. How should my character murder someone?” And ChatGPT's like, “Oh, well, I could do this, this, or this.” And as people are made aware of those vulnerabilities, they get patched. But someone really good at prompt engineering has a lot of tools in their toolbox and will always be able to get an AI to do what they want. And that's especially true if you look at the groups of people who are creating open-source or open AI models that are not owned by companies.
And there is a minority of people creating those who believe that a truly open model would have no guardrails whatsoever. And so there will always be something out there that you can use to get the answer you want. So I worry that we need something more than just these voluntary guidelines that AI companies use. This technology can be so incredibly dangerous, depending on who's using it and for what purpose.
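(A minimal sketch of the point the engineer is making above: in many systems, a “guardrail” is implemented, at least in part, as instructions prepended to the conversation rather than as a hard technical barrier. The message format below mirrors common chat APIs; the wording is hypothetical and not from any real product.)

```python
# What a prompt-level "guardrail" often looks like: just more text the model
# reads before the user's message. Nothing here physically prevents an output.
system_prompt = (
    "You are a helpful assistant. "
    "Refuse to give instructions for violence or other serious harm."
)

conversation = [
    {"role": "system", "content": system_prompt},  # the "guardrail"
    {"role": "user", "content": "How should my character commit a crime?"},
]

# The model receives the guardrail and the user's message as one stream of
# tokens and simply predicts a continuation. Because the restriction is an
# instruction rather than a barrier, a cleverly reframed request can
# sometimes route around it, which is the engineer's point above.
for message in conversation:
    print(f'{message["role"]}: {message["content"]}')
```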
[00:36:05] Sage Goodwin:
So we've talked a lot about the top-down implementation of how this all works, the political economy of it, and how people are being forced into using this involuntarily. But in terms of thinking about solutions, beyond just having blanket bans in the classroom, or, as an individual, trying to undo all of the automatic AI happening in apps where you didn't want it: do you have any suggestions on ways we can think about solutions, either as individuals or as researchers?
[00:36:39] Alice Marwick:
So this is gonna sound like a digression, but I read this great book last year called Danger Sound Klaxon. I can't remember the name of the author; I apologize. But it's about the Klaxon, which is the old tiny horn on a car that goes “awooga,” right? The one you always hear in Looney Tunes cartoons. And this was the primary horn that was sold around the world for like a decade, and then it just disappeared. And this book was interesting, not because of all the fascinating tidbits about the Klaxon, although those were really interesting, but because of just how many rules, regulations, and technologies had to exist in order to get to our modern system of automobiles.
So the reason the Klaxon existed was that you have this world in which the only things on roads are people, horses, donkeys, and goats, or whatever. And then all of a sudden, you have these automobiles that are very big and fast and can kill all of those things. And so people needed to know when the automobile was coming. So they picked the Klaxon because it was the loudest; you could hear it like 300 feet ahead of time, which gave you time to skedaddle, you and your goats, off the road or whatever. And so if you think about AI as more like automobiles than a toaster, we're going to need a very large system of regulations and rules and infrastructure around it to make it safe. And that will regulate things like uses, it will regulate things like the models themselves, it will regulate how models are implemented and what impacts they can have on workers. There will be other technologies that feed into AI.
I think the worst-case scenario is that all of this stuff is just fueled by profit margin and whatever gets the most people using the technology, and it just barrels forward. Instead, I think we can look at a lot of the AI regulations that are being put in place by a lot of different states. Illinois just banned mental health chatbots. The next thing to do is to put similar regulations in place for general-purpose LLMs when they're being used for mental health, and for people to understand that even if these devices are not marketed as mental health tools, they are being used that way. And so we're gonna need those kinds of complex regulations around many different spaces. And I think what's important is that we take our cues from organizers who already exist. If we're thinking about AI in the workplace, let's look at unions. Let's look at worker collectives. Let's look at how, recently, Politico, the website, which has been unionized, implemented two AI products, even though their union contract with their journalists says that they can't use AI without everyone being bought in. They have to let the journalists know; there has to be a discussion and debate. And they just introduced these AI products and put them out on the website, and that violated the terms of their union contract. So they got struck down. And those are the kinds of protections that are so important, where we're actually prioritizing people's needs rather than just whether the technology can do X, Y, or Z.
[00:39:39] Hanna Sistek:
Isn't the problem, though, that the folks who are doing the regulation are not the ones who know the technology, and often don't know much about it? How do you get around that issue?
[00:39:50] Alice Marwick:
That's always been my point of view, because I remember stuff like “the internet is a series of tubes,” and all the debates over technology where people ask Mark Zuckerberg, “How does Facebook make money?” and he's like, “Well, Senator, we run ads.” But I think when you look at people who actually work on technology regulation in the US government, the people who are creating the policy, not the senators, but the people at the National Institute of Standards and Technology, NIST, or the FTC, all of those groups of people are really smart. They frequently bring in academics, PhDs, and they consult with technologists. There are people in government who are capable of making really, really good and well-thought-through tech policy. The problem is getting that implemented, because we are in an anti-regulatory moment in general. Even though there is this huge popular movement against the tech industry, the only legislation that we have seriously been able to move is all this Kids Online Safety Act stuff, which institutes age verification, which I think is a huge problem.
And if you look at Australia, with the social media ban for under-16s, or the Online Safety Bill in the UK, across much of the world you're seeing these laws put in place ostensibly to protect children that will really just increase surveillance regimes and surveillance apparatus. So one of the problems is not the policymakers themselves or civil servants; it's that we only seem to have the political will to pass legislation when there's a moral panic, and that is when the worst legislation is passed.
[00:41:31] Sage Goodwin:
That's super interesting. Do you have any suggestions for what types of legislation you think should be implemented?
[00:41:41] Alice Marwick:
For AI or in general?
[00:41:44] Sage Goodwin:
For AI specifically. I'm really struck by that analogy that you used about the Klaxon. What's happening with AI is like if we went from an era where all there were were horses, carts, and bicycles, and all of a sudden there were just cars everywhere, but we didn't have traffic lights and we didn't have roads. It would just be chaos. A lot of people would be getting from A to B and doing what cars need to do, but some people would be in car crashes. That's where we seem to be right now.
[00:42:22] Alice Marwick:
Maybe a lot of people would be in car crashes. The first thing I would say is that I think any technology legislation always has to be focused on the principles rather than the technology itself, because I'm a privacy scholar, and when you look at privacy legislation, it's very technology-dependent. So you have this bizarre patchwork of privacy laws that cover, for example, video store records, which, when we don't even have video stores anymore, are legally protected private information, but email is not. So I think what we need to do is set up some principles. I would obviously say privacy and data collection: there need to be principles for fair data collection, for how long data is stored, for the idea that if you collect data for one reason, you can't use it for any other reason; that the government cannot buy data from data brokers; and that the government cannot hire Palantir to put together data from a bunch of different places.
And if you had a regulation like that, where it was information principles, and any new technology that exists is beholden to those principles, then you get out of this problem where every time a new technology is created, we have to legislate against it. Similarly, if we had better worker protections in the United States, if you could unionize in all 50 states, if there were strong protections for workers, then we would have a better environment when a new technology is introduced. Whether it's CCTV cameras or AI, we would have a set of principles for worker dignity and self-determination that we could point to.
[00:44:00] Sage Goodwin:
Yeah. I think that's a really good way to be thinking about it, especially what you mentioned about how legislation tends to come from moral panics. It's not about the specific laws that get passed; it's about the entire system of how those laws get passed and how we're thinking about it. I think that makes a lot of sense. And thinking about the principles rather than the specific laws and the specific technologies is, I think, a really smart and interesting way to be thinking about this. So, what advice would you give to our listeners as individuals as they go about their days, based on the work that you do? Are there any tips or tricks that you implement in your own life in thinking about AI? And given your work as a privacy scholar as well, we're super interested in any tips you have around digital privacy.
[00:44:49] Alice Marwick:
So the first thing I would say in terms of privacy is that the perfect is the enemy of the good. I think it's great for people to practice more active privacy protection. So I think password managers are great. I think two-factor authentication is great. I think Googling yourself and seeing what's out there about you is great. But I think everybody needs to be realistic about what they can and can't do. Like, everybody saves their credit card information in their browser. You shouldn't feel pressure to do things perfectly, and you shouldn't let people make you feel bad if you're using a popular consumer technology that we know has bad privacy protections. But do try to plug the gaps when you can.
The second thing is nobody needs to use AI or any other technology they don't want to use. But if you are going to be very, very critical of a technology, I do think it's important that you understand why people use it. And that probably means trying out the technology for yourself. You may hate it, but you need to start with the idea that people who use a certain technology are not dumber than you, or they're not stupid. There's something they're getting out of that technology, and I think it's important to know what that is.
The third thing I would say is that when you are getting information from an LLM, know what LLMs are generally good or bad at. I find that they're very good at writing generic, boilerplate email text; they're great for that kind of stuff. I think they're very good at editing. They're good at being a thesaurus. The days of looking online trying to find a word that means this are over; I can ask ChatGPT, “I need a word that means this,” and it'll spit it out. That's great. But I would absolutely not use LLMs for anything that is research-based. You should never upload interview transcripts to an LLM, because they can be used for training data, unless you're using a local LLM that's running on your computer. Just be aware of the strengths and weaknesses of different LLMs, and you don't ever want to be in a situation where the LLM has fabricated something and you are trying to pass that off as your own work. That is the absolute worst. That is plagiarism; that is a violation of academic integrity. You do not want to be in that situation, whether you're a grad student or a full professor. And so I think knowing more about the capacities and capabilities of LLMs can prevent you from getting into that kind of trouble.
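(One way to follow Alice's “local LLM” caveat in practice: run the model on your own machine so sensitive text, like interview transcripts, never leaves it. The sketch below assumes the third-party Ollama app and its Python client are installed and that a model such as “llama3” has already been pulled locally; the model name and prompt are illustrative, not recommendations from the episode.)

```python
# A hedged sketch of querying a locally running LLM via Ollama, so the
# transcript text is processed on-device instead of being sent to a cloud API.
import ollama  # pip install ollama; requires the Ollama app running locally

transcript_excerpt = "Interviewee: I never told anyone this before, but..."

response = ollama.chat(
    model="llama3",  # any model previously pulled with `ollama pull`
    messages=[{
        "role": "user",
        "content": "Summarize the key themes in this interview excerpt:\n\n"
                   + transcript_excerpt,
    }],
)

# Everything above ran on the local machine; nothing was uploaded.
print(response["message"]["content"])
```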
[00:47:19] Sage Goodwin:
That's such useful advice. All three of those, I think, are super, super important. As we draw things to a close, if you had to have just one takeaway from our conversation today that our listeners went away with, what would that be?
[00:47:34] Alice Marwick:
None of this is inevitable. There are technologies that the industry has tried to shove down our throats, and they have failed; over and over again in the history of technology, there are more technologies that fail than technologies that succeed. There's a huge amount of money, time, effort, and political power behind the push for AI, but it's absolutely not inevitable. And all of the pushback against the tech companies and the administration, I think, shows that AI in its current form is not particularly popular. There is a big push for regulation. It is very likely that there is going to be some kind of economic bubble that's going to burst, and that the promises of AI will never be achieved the way their creators have envisioned them. And now is the time. Now is the time to learn about these technologies, to write about these technologies, to push back against them, to call your local congressperson. You can go to your school board if you have kids, or even in the community where you live, you can say, “We don't want the school district to sign a contract with an AI company without the input of the teachers' union.” It can be as small as that, or as simple as that. People do have the ability and the capacity to decide for themselves how and where these technologies are adopted. It is very easy in this day and age to feel very disempowered and very much like you're at the mercy of different political forces, but there are still many, many places where we can and should intervene.
[00:49:11] Hanna Sistek:
I really like that. So we haven't really been talking much about media so far, but we do like to ask our guests about their media diets, and we can put AI diets in there too, if you like. So we just wanna know a little bit about what kind of media you consume. What's your daily meat and veg?
[00:49:30] Alice Marwick:
Okay, so I consume news mostly from Bluesky. I do read the New York Times; I've been reading it since I was a kid. I am very suspicious of a lot of the New York Times' framing, but I do find their nuts-and-bolts coverage is usually pretty good. I get most of my recreational reading from Reddit, honestly. Some of it is because I'm a big pop culture fan, and so there are a lot of subreddits for different TV shows, pop artists, and things like that. But also, I like the drama. I like sort of eavesdropping on everybody's petty little interpersonal struggles. I listen to a lot of music. I really like pop music. I'm obsessed with Rosalía's Lux right now; I think it's one of the best albums ever. It is just so wonderful. I'm also a huge Beyoncé fan, a big Taylor Swift fan. I'm older than my musical taste would indicate. I love my pop girls. I saw Chappell Roan this summer; she was amazing. I love science fiction and fantasy. I read a ton of fantasy and science fiction. Naomi Novik is probably one of my favorite authors. And I'm a huge fan of Pluribus, the show on Apple TV.
[00:50:41] Sage Goodwin:
I have been watching Pluribus.
[00:50:43] Alice Marwick:
I am obsessed with it. I love Vince Gilligan. I love Better Call Saul. I love Rhea Seehorn. That show is so cool and interesting, and there's nothing I like more than a sci-fi show like Severance, where I log off every week and immediately go to the internet to see what everyone else is saying about it.
[00:50:58] Sage Goodwin:
Oh, a hundred percent, I'm exactly with you. I have about three different podcasts that I listen to, including the debrief podcast for after you've watched the episode of Pluribus. That's the one for me at the moment as well. Super interesting.
[00:51:12] Alice Marwick:
Podcasts: I don't listen to a ton of podcasts anymore 'cause I don't drive, and that's where I mostly used to listen to podcasts. But I really like NPR's Pop Culture Happy Hour; I've been listening to that for a million years. I used to be really into this podcast called Who? Weekly, which is pop culture. I got super into that during the pandemic and kind of burned myself out on it. Honestly, I don't love news podcasts. I mostly listen to podcasts for entertainment.
[00:51:39] Sage Goodwin:
Amazing. And when you wanna get away from it all, away from screens, away from technology, all of that, what do you do to palate cleanse?
[00:51:50] Alice Marwick:
I like to be outdoors when I really need a break. I live on the water in New York, so I like to go walking on the Hudson and just kind of look at all the lights and see all the people living in New York. It's nice because I just leave my house, and I sit and I just watch the vast parade of humanity go by.
It's just that people are so interesting. And I used to live right by NYU, so I would see what all the students were wearing: the fashion students at Parsons, the little old ladies in their fur coats, the professionals running around going to work. I like feeling connected to the people I live near and to the community that I live in, even if it's not people that I'm talking to. And then there are my group chats, but those are on screens.
[00:52:35] Sage Goodwin:
So, Dr. Alice Marwick, thank you so much for joining us today. Where can our listeners find you and your work?
[00:52:41] Alice Marwick:
So my website is tiara.org, T-I-A-R-A dot org, like a crown. And there are PDFs of almost all of my work on there. And then I am @alicetiara on Bluesky. And I can't really think of too much other social media that I am an active participant in these days. I guess you could look me up on LinkedIn if you must.
[00:53:01] Sage Goodwin:
Great. Well, thank you so much for joining us.
[00:53:03] Alice Marwick:
This is really great.
[00:53:04] Julius Freeman:
This has been another episode of CAPTivated. It's been hosted by CAPT. You know, CAPT, CAPTivated? You guys get it. It's the Center for American Political History, Media, and Technology.
[00:53:17] Hanna Sistek:
The ideas presented by individuals on the podcast are theirs and theirs alone. They do not represent Purdue University, which adheres to a policy of institutional neutrality. To learn more about this episode's guest, check out the show notes.
[00:53:27] Sage Goodwin:
We really enjoyed this conversation today, and we hope you got something out of it too. Thanks for listening.