Technology and Social Media

Transcript for Season 2, Episode 1: Technology and Social Media

Kurt Andersen: Welcome to The World As You'll Know It, I'm your host, Kurt Andersen. Last season was all about the pandemic, how it looked to change the ways we live and work. This season our subject is still the shape of things to come, but specifically the future of technology. 

There has never been a moment in my life when the stakes in so many different areas seem so high, and the outcomes so up for grabs. We are at an inflection point. That used to be a technical math term before it became a meme in the 1990s, thanks to Andy Grove, the co-founder of Intel.  He wrote, “strategic inflection points can be caused by technological change, but they are more than technological change. They are full scale changes. The change can mean an opportunity to rise to new heights, but it may just as likely signal the beginning of the end.”  Inflection points are when organizations or whole societies face new facts and take action or fail to take action in response. 

So here we are, a few decades into the digital revolution that's transforming our economy and politics, our daily lives, our very understandings of reality, for better and for worse. About a different revolutionary moment centuries ago, Charles Dickens's take still applies: these are the best of times and the worst of times, an age of wisdom and foolishness, of hope and despair. I think that thinking seriously about the future comes down to addressing one big question: Will we harness technology to make life better, or let it harness us? How can we wind up closer to utopia than dystopia? And we'll get at those answers through conversations with an A-team of experts and visionaries.

First up, Sinan Aral, an MIT professor who has spent his whole professional life studying social media, and as an entrepreneur, creating and financing social media technologies. Sinan Aral, welcome to The World as You'll Know It. 

Sinan Aral: Thanks so much for having me. It's an honor. 

Kurt Andersen: So you are pretty positive about the potential of the Internet and social media specifically. But your book has a kind of negative title -- it's The Hype Machine -- and throughout it you use the phrase "promise and peril" a lot. So I do want to talk about the actual and potential good things that social media can do, might do in the future, and also the bad things that they've done and are doing and might do. But what I'm really interested in is how that technology can be used properly, be fixed, to maximize the promise and minimize the peril. But first of all, how consequential a thing is social media in terms of civilization and history? I mean, bigger than TV, as big as the printing press?

Sinan Aral: You know, if I had to think about it looking backwards from one hundred years from now, I'd say it's bigger than TV and probably not as big as the printing press. So it's on the order of magnitude of those kinds of technologies in my mind. And I think that the reason it's so consequential is that it is rewiring the central nervous system of humanity. It's putting us together, changing our information environment at scale with such speed in ways that are informing us, changing the way we decide who to vote for, what products to buy, who to love, even. If you consider these algorithms are also at the heart of dating apps, which are just another form of social media pointed at a specific decision that we make in our lives, our romantic relationships. And just to follow that thread for just two seconds, you know, algorithmically introduced matches, romantic matches, passed traditionally introduced matches in 2013. And those algorithms have certain differences. The types of people that we are matching algorithmically are different than the types of people that we would match traditionally. And if you extrapolate that romantic relationships then evolve into procreation and the future of humanity, then the genetic pool is being altered. 

Kurt Andersen: Yeah, well, it extends to this whole theme you have in the book of homophily -- like likes like, birds of a feather gather together, and all that -- which this machine intelligence is programmed to encourage. And that's just one more version of it, I guess. So you were a graduate student at MIT, studying this as it began. This has been your life for the last 20 years. Did you think mainly at the beginning, "Amazing! Awesome!" or, "This could go really south in a bad way"? Did you go back and forth on those two possibilities?

Sinan Aral: I think it's better to think about technologies like this as agnostic, and they can be used for good and evil. And, you know, one thing that I make clear in the book and that I think is becoming more and more true is that we went through a decade of techno utopianism where we thought, “Wow, you know, this technology is amazing. It's going to connect us all.” And everybody was kind of kumbaya about it. 

Kurt Andersen: That was the 90s, as I recall them.

Sinan Aral: That was the 90s. Exactly. And then we went through the flip side. We went through a decade of techno dystopianism, which we're still sort of in the middle of, I would say. But I think that we have to get past this debate about whether social media is good or evil because, as I have said, the answer is yes: it is good and it is evil, and it depends on how we use it. And that's sort of the point of the book.

Kurt Andersen: You quote Sean Parker, who helped Mark Zuckerberg create Facebook as saying, “We designed this to create a social validation feedback loop. We are exploiting a vulnerability in human psychology.” Which is amazing that he actually just said it as such. But so describe, if you will, how do social media make us want to spend more and more time using it compulsively? 

Sinan Aral: There are a couple of things there that I think are important. The first is this notion of the social brain hypothesis, which is one of the two leading hypotheses for why the human brain is so big. Relative to body weight, the human brain is very large among all of the species on the planet. And one of the leading hypotheses for why that's true is our very complex sociality, that we essentially have evolved to be a social species. Our brains evolved to process social signals, and then we invented a technology that scaled these social signals -- what people like, what people ate for dinner, pictures of their food from last night, who they're dating, et cetera -- to the tune of hundreds of trillions of social signal messages every day from millions of people. When you think about it that way, the meteoric rise of social media is not a surprise, because it's like tossing a lit match into a pool of gasoline. In addition, you have what's known as the dopamine reward cycle. Every time you get a like or a comment on some of your content, it stimulates the dopamine reward cycle in your brain. That keeps us coming back for more. And that's coupled with what's known as a variable reinforcement schedule. We have these notifications on our phone -- the lights, the buzzes, the sounds -- and those notifications could come at any moment, which means we are trained to expect, at any moment, hits of dopamine coming from our phones that represent social interactions. And so our brains are then craving more of that dopamine and expecting it at any second, which means it's very easy to get our attention. There's this thing called ghost vibrations, where people report phones feeling like they are vibrating in the pocket when they're not. And that's just a neurological impulse that has been trained into our brains from getting these vibrations all the time. So the dopamine reward system, combined with the social brain hypothesis, combined with a variable reinforcement schedule, creates this addiction-style desire for social media.
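
What a variable reinforcement schedule means in practice can be seen in a small simulation. This is a minimal sketch, not anything from Aral's book: it assumes exponentially distributed gaps between notifications, so the next hit can arrive at any moment, unlike a fixed schedule you could safely tune out.

```python
import random

random.seed(42)

def variable_schedule(n_events: int, mean_gap: float) -> list[float]:
    """Notification times with unpredictable (exponential) gaps: the
    next reward can land at any moment, so attention never disengages."""
    t, times = 0.0, []
    for _ in range(n_events):
        t += random.expovariate(1.0 / mean_gap)  # assumed timing model
        times.append(round(t, 1))
    return times

def fixed_schedule(n_events: int, gap: float) -> list[float]:
    """Fully predictable times: you can ignore the phone between rewards."""
    return [round(gap * (i + 1), 1) for i in range(n_events)]

print("variable:", variable_schedule(8, mean_gap=10.0))
print("fixed:   ", fixed_schedule(8, gap=10.0))
```

Both schedules deliver one notification every ten minutes on average; only the unpredictability differs, and that unpredictability is what keeps the checking behavior going.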

Kurt Andersen: Well, and I was just going to say addiction style, yes. But actually, as you describe it, literal addiction. Right. I mean, isn't it exaggerating our natural addiction to dopamine? 

Sinan Aral: So addiction has a clinical set of definitions to it. And I think that to label social media as addictive, we need to go a few steps further in research. And that's why I'm very careful with my words. As a scientist, I never want to overclaim beyond where the research currently stands.

Kurt Andersen: You say often in your book and in your presentations, 'oh, but let's remember the good things, the promise, the good things.' So what are the five, or however many, best things from society's, culture's, perspective?

Sinan Aral: It can be a force for good in the world, and I'll give you a few examples. When Nepal experienced its biggest earthquake in the last hundred years, Facebook spun up a "Donate Now" button and raised more money than Europe and the United States combined for relief efforts in Nepal, from 770,000 individual donors around the world, which shows you the scale of the altruism that can be, you know, provoked. People like to make fun of the "Ice Bucket Challenge," but it's hard to laugh at a quarter of a billion dollars, with a "b," raised for ALS research in eight weeks, which is a tremendous amount of good done for the world. It is powerful in mobilizing progressive social movements. For example, the founders of the Black Lives Matter movement say there would be no Black Lives Matter movement without social media. And before Black Lives Matter, it was the Snow Revolution in Russia and protest movements in Hong Kong and Ukraine and the Arab Spring. All of those things sometimes get lost when we start talking about threats to democracy, threats to elections -- which are real and which we have to think about -- but we've sometimes forgotten that this powerful technology can also be aimed at good.

Kurt Andersen: To your credit, you note that social media -- LinkedIn, Facebook, whatever -- can help people get jobs and get higher pay, and there are studies of this. On the other hand, those better jobs and higher pay go disproportionately to highly skilled workers and to men, right? So, one step forward, two steps back.

Sinan Aral: Yes, exactly. So when you're examining the promise and the peril, as I describe it, you realize that it is a very nuanced topic. It is not cut and dried. We can't continue to armchair-theorize about how social media is impacting the planet. We have to be rigorous, because the solutions require nuance. The laws that are written, the way the algorithms are written -- what I call the four levers that we have to steer social media: the money, code, norms and laws -- writing the code, writing the laws, thinking about what norms we use to adopt the technology, and also how the business models are set up: all of it requires nuance and rigor.

Kurt Andersen: Yeah. You have a kid, a six-year-old. Is he still ...

Sinan Aral: He’s 8, he just turned 8.

Kurt Andersen: Oh well, he was six when you wrote the book.

Sinan Aral: That's right.

Kurt Andersen: Well, that's interesting then, because you say how -- and I don't know if this is armchair parenting -- but, like he got no screen time of any kind. So does he get screen time? And how are you deciding if, when, how much? 

Sinan Aral: It's so interesting. So before the pandemic, he got very little screen time, almost none. And he was fine. He was sort of blissfully unaware of all the screens and so on. After the pandemic hit and he started doing school virtually, he can now fix my computer.

Kurt Andersen: Bring him in!

Sinan Aral: He's now the technological genius of the household in just a year's time. So it evolves very rapidly, obviously, as kids get older. The pandemic was a massive inflection point in our relationship to technology around the world, as I describe in the preface to the book. And, you know, I think that's going to continue. We're both limiting his screen time, but it's more than he used to get. 

Kurt Andersen: Because why? 

Sinan Aral: Well, because, you know, it's interesting to know that Steve Jobs, who invented the iPhone and the iPad, didn't allow his kids to use the iPhone and the iPad. And there's a reason for that. You know, I think that there are a number of things that are lost if you just allow the technology to work its magic on young brains. If you don't set guidelines and a framework, boundaries around it, then the dopamine reward system, the sort of gamification of engagement will take hold. And kids are very smart. If you speak to them like adults, they will typically engage you like adults. If you say here is the reason why, you know, I talked to Caia about the dopamine reward system and when he won't give up the iPad at the end of his allotted time, we talk about, “Hey, that's your dopamine reward system talking right now. And, you know, your brain doesn't want to give up what feels so good to it.” And he can understand that. 

Kurt Andersen: I worry, frankly, more about what it's doing to adults right now than about the developmental, impressionable brains of children. I was struck by one of your, I guess, groundbreaking studies: this big study of false news online and how it spreads. You had, basically, as I understand it, every tweet tweeted in Twitter's first decade, right? And discovered that, oh yes, falsehoods are much more viral, inherently more viral, than truth and reality, right?

Sinan Aral: Yes. So we did this study in cooperation with Twitter. And I have to say kudos to Twitter, they were extremely collaborative. They gave us access to the Twitter historical archive, and we studied all of the verified true and false news that ever spread on Twitter from its inception in 2006 to the end of 2016. So a decade's worth of data. And we used six -- or I think maybe it was eight -- independent fact-checking organizations to corroborate which stories were true and which stories were false. Those organizations agreed 95 to 98 percent of the time. And then we followed the diffusion, the spread, of these stories on Twitter, and we measured how quickly the true ones spread and how quickly the false ones spread. And we compared, and what we found was that false news traveled farther, faster, deeper and more broadly than the truth in every category of information that we studied, sometimes by an order of magnitude, and that false political news was the most viral. This was very troubling to us when we saw this result, and we published it on the cover of Science in 2018.
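
As a rough illustration of the kind of metrics behind "farther, faster, deeper and more broadly," here is a sketch that computes the depth, size and maximum breadth of a toy retweet cascade. The edge-list representation and the toy data are assumptions for illustration, not the study's actual pipeline.

```python
from collections import defaultdict

def cascade_stats(edges: list[tuple[str, str]], root: str) -> dict:
    """Depth and breadth of a retweet cascade given (parent, child)
    retweet edges. 'Farther' ~ depth; 'more broadly' ~ max breadth."""
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)

    depth, size, max_breadth = 0, 1, 1
    level = [root]
    while level:
        nxt = [c for node in level for c in children[node]]
        if nxt:
            depth += 1
            size += len(nxt)
            max_breadth = max(max_breadth, len(nxt))
        level = nxt
    return {"depth": depth, "size": size, "max_breadth": max_breadth}

# Toy cascade: one original tweet retweeted down two levels.
edges = [("t0", "t1"), ("t0", "t2"), ("t1", "t3"), ("t2", "t4"), ("t2", "t5")]
print(cascade_stats(edges, root="t0"))  # {'depth': 2, 'size': 6, 'max_breadth': 3}
```

Comparing these statistics across verified true and false stories is the basic shape of the analysis: false cascades showing systematically larger depth, size and breadth is what "more viral" means here.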

Kurt Andersen: And yet -- this constant "on the one hand, on the other hand" -- I was talking recently to one of my nieces, a political scientist whose specialty is this kind of spread of misinformation and disinformation. She doesn't see that there is actually a big electoral consequence in all of this raging falsehood, and it seems to me the evidence on that is mixed. So it's bad, obviously, that there are unbridled lies and fantasies and deceptions spreading around. But so far it doesn't seem to be affecting who we elect a lot.

Sinan Aral: Well, I think the jury is still out on that. And what I say in the book is that we don't know. Because the research that we would need to determine that hasn't been done. And it depends on what you mean by affecting elections and democracy. So a couple of things about that. We know that these types of messages can affect voter turnout. That is pretty well-established. We also know that Donald Trump won in 2016 by something like between 70 and 80 thousand votes in three states. We know that a Facebook experiment in 2010 among 61 million people created 800,000 additional votes in congressional elections. So there is a chance that voter turnout effects could affect elections. The evidence on it changing vote choice: “I was going to vote for Clinton, but I voted for Trump or vice versa,” is very small. It looks like that is highly unlikely. But we also know that a very large fraction of the misinformation campaigns of Russia in 2016 were dedicated to voter turnout, trying to suppress minority votes, saying you shouldn't vote or, you know, you shouldn't trust either candidate. Stay home and so on. So what effect did it have on voter turnout? We don't know. What's the likely effect on vote choice? Probably not much. We also know, however, that it matters in other types of initiatives like ballot initiatives and smaller local legislation. So you can imagine a systematic set of attacks that would affect policy, but maybe not a general presidential election, maybe would affect elections to the House of Representatives or to the Senate, perhaps more easily than a general presidential election. The bottom-line conclusion in the book is that we don't know.

Kurt Andersen: 100 percent. And I was going to say, well, it may not have decisively or definitively thrown this election this way or that election that way, but unequivocally, the research as you depict it shows that it makes us hate each other. Whatever the political results, that is bad for our culture and a lot of other cultures, not just America's. And there are these experiments you talk about in the book which show the power of these algorithmic nudges to show people a little more from the right side if they're on the left, or vice versa. They don't change their opinion, but they hate the other side less.

Sinan Aral: That's exactly right. So a few things about algorithms that point to the possibility of the promise. One is that when you nudge people to consume more information from the other side of the political spectrum, they do so and they broaden their perspective. The second is that the algorithms tend to narrow your consumption so that you consume more of what you already like and believe. But when we turn the algorithms off, you go back to your diversity-seeking self, which means that it's not a permanent effect on human beings. And we can change the effect that algorithms are having by changing the algorithms themselves. And we also know, for instance, that if you were to include sort of diversity objectives in an algorithm, that you might be able to, you know, nudge people towards reducing rather than creating political polarization. 
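
As a concrete, hypothetical version of the nudge described above, the sketch below re-ranks a candidate feed so that every third slot comes from the other side of the political spectrum. The field names and the every-third-slot rule are invented for illustration; this is not any platform's real algorithm.

```python
def nudged_feed(candidates: list[dict], user_lean: str, k: int = 5,
                cross_every: int = 3) -> list[dict]:
    """Fill a k-item feed, drawing every `cross_every`-th slot from the
    other side of the spectrum instead of the user's own side."""
    same = [c for c in candidates if c["lean"] == user_lean]
    cross = [c for c in candidates if c["lean"] != user_lean]
    feed = []
    while len(feed) < k and (same or cross):
        want_cross = (len(feed) + 1) % cross_every == 0
        pool = cross if want_cross and cross else (same or cross)
        feed.append(pool.pop(0))
    return feed

# Toy candidates: odd ids lean left, even ids lean right.
cands = [{"id": i, "lean": "left" if i % 2 else "right"} for i in range(8)]
print([c["lean"] for c in nudged_feed(cands, user_lean="left")])
# ['left', 'left', 'right', 'left', 'left'] -- one cross-spectrum item per three
```

The point of the sketch is that the nudge is a re-ranking choice, not a rebuild: the same candidate pool can produce a narrowing feed or a broadening one depending on a few lines of logic.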

Kurt Andersen: Way before there was an Internet, journalism emphasized bad news. It's the nature of news, right? If it bleeds, it leads. And along comes the Internet, along comes social media, digitalized news media, and the systematic, almost robotic pandering to outrageous, horrifying, angry, disgusting things turns these natural appetites, if you will, into these crazy, compulsive head-fillers of badness, right?

Sinan Aral: Yeah, I mean, this is tied directly to the business model. So the business model, the money, as I call it, is one of the four levers that I describe in the book. And the business model of social media is built on what's known as the attention economy. The way that business model works is that social media gets our attention and then sells our attention to advertisers as opportunities for persuasion. Once I have your attention, I can put you in front of an ad; that creates ad inventory, which is what social media sells. In order to make that business model work -- to maximize the revenue and the profit of that business model -- you have what is an engagement model. You want people engaged. And so the things that are most engaging are what generate the most profits and revenues for the companies. And what is the most engaging? Well, it turns out, things that are novel, shocking, surprising. And if we continue to follow the engagement business model, the attention economy, it will continue to favor that which is salacious and shocking and horrifying, anger-inducing, disgusting and so on. And that's where we find ourselves today. That whole economy is described in detail in the book, and there have been a lot of studies done that spell out how it works.
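
Some toy arithmetic makes the incentive explicit: under an ad model, revenue scales linearly with minutes of attention, so a 20 percent lift in engagement is a 20 percent lift in revenue. Every number below is invented for illustration.

```python
def ad_revenue(daily_users: float, minutes_per_user: float,
               ads_per_minute: float, cpm_dollars: float) -> float:
    """Daily ad revenue: impressions served times price per thousand."""
    impressions = daily_users * minutes_per_user * ads_per_minute
    return impressions * cpm_dollars / 1000.0

base = ad_revenue(1e6, 30, 1.5, 5.0)  # 30 minutes per user per day
more = ad_revenue(1e6, 36, 1.5, 5.0)  # +20% engagement
print(f"${base:,.0f} -> ${more:,.0f} per day")  # $225,000 -> $270,000
```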

Kurt Andersen: Right, the business model. Let's talk about that a little bit. I mean, it didn't have to be advertising-based, right? It could have been subscription-based. Media in America, media in the world, has gone back and forth between readers paying for everything, to ads becoming more of a thing -- or all of the thing, in television and radio -- and then, at the end of the 20th century and lately, oh no, people are paying for cable TV and people pay for Netflix and people pay for The New York Times and so forth. So it seems to me the tragic error was deciding in the early 2000s -- after the founders of Google wrote a paper saying no, never let advertising sponsor search engine results -- that that would be the way to go. Because advertisers only want as much engagement for as many minutes and hours as possible, whereas people who sell things directly to consumers don't care. I mean, HBO or Netflix doesn't care how much I watch as long as I pay my subscription. Right? That seems to me the key 'Oh, my God, that's where it went bad' moment.

Sinan Aral: I think yes and no. I think we have to be rigorous also about whether or not subscription models are a silver bullet to solve all of these problems. So a couple of things. First, it's clear that not everybody can afford a subscription to social media. So is it going to create inequality if you switch to a subscription model? Do you need progressive pricing? How are you going to manage that progressive pricing? Are you going to shut people out of meaningful human connection, access to jobs, life-saving health information, because they can't afford it? That's one.

Kurt Andersen: It's an argument. 

Sinan Aral: Number two is that it's not clear the subscription model makes engagement irrelevant, because if Netflix doesn't engage me and have me interested in its content, then I have less of a reason to pay, or less willingness to pay more, for that content. So I think that the engagement model still exists in justifying the subscription price and in increasing the subscription price. And I also think that the targeting still exists as well, because you want to understand your audience, you want to tailor content to your audience and so on. I do think that, as a first-order effect, it reduces this sort of engagement-over-all-else model of advertising, and that subscriptions could help. But it also has other effects that we have to think about before we think of subscriptions as the silver bullet solution to all of this.

Kurt Andersen: I'm with you. It is not a silver bullet. All I'm saying is, one of the important things, I think, is for everyone to understand that these are choices that are made. It could have gone this way and it could have gone that way, and it got us where we are. Of the problems you cited, I buy the idea that non-affluent people should have this access and not be locked out of it. But I think the more important thing is to say, OK, we are now making choices for the rest of time. Let's not make them as stupidly as we did in 2001.

Sinan Aral: First of all, Twitter is experimenting now with a subscription, and so they are going to start to collect data on, you know, revenues, willingness to pay and so on. And we might find that they find that sustainable and that they can make a business out of it. My guess is that they will actually do both: they will do the subscription model and they will continue the advertising model. People floated the subscription model as a possible solution to the social media crisis, and the platforms said, yes, we'll have that too, in addition to the advertising model. And you know, that's a freemium model. You pay for a subscription, you turn ads off; you don't pay for the subscription, we're still going to get ad revenue from that consumer. The other thing is that it highlights the arbitrary decisions that were made early on, as you describe, which is, OK, why did we choose the advertising model instead of the subscription model early on? I think that's a very good question. Another similar question about an arbitrary choice is, how did we land on the like button? Why is it that we are so enamored with the like button, and what does the like button do? Well, the like button prioritizes popularity over everything else. The more people that like something, the more that's going to be shown to other people, the more the society on social media is going to value that item that is liked a lot. But why don't we have a "This taught me something" button or a "wellness" button or a "health" button or an "I learned something from this" button or a "truth" button? You know, we don't have any of those. And I kind of feel like this was an arbitrary choice at the beginning as well. We sort of set the entire system to run on popularity rather than anything else. Would we create incentives for people to put out knowledge or health and wellness if we incentivized that by creating, you know, health kudos or knowledge kudos or truth kudos instead of just likes or popularity kudos?
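
A hypothetical sketch of the alternative Aral floats: score posts by a weighted mix of reaction types rather than by raw likes. The reaction names and weights are invented for illustration; no real platform exposes such a scheme.

```python
def rank_score(reactions: dict[str, int], weights: dict[str, float]) -> float:
    """Weighted sum over reaction counts, so 'knowledge kudos' can
    outweigh sheer popularity in what gets shown to other people."""
    return sum(weights.get(kind, 0.0) * count
               for kind, count in reactions.items())

weights = {"like": 1.0, "taught_me_something": 3.0, "truth": 3.0}
viral_meme = {"like": 900}
explainer = {"like": 200, "taught_me_something": 250, "truth": 100}
print(rank_score(viral_meme, weights))  # 900.0
print(rank_score(explainer, weights))   # 1250.0 -- beats raw popularity
```

The design choice is the weights: set everything except "like" to zero and you recover today's popularity-only ranking.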

Kurt Andersen: Absolutely. I was also thinking on Twitter, for instance, if you have a blue checkmark, you're verified. Couldn't there be an indication in your Twitter profile of how often you have or haven't shared complete falsehoods? 

Sinan Aral: You know, when it comes to why isn't there, like, a veracity score on profiles, one thing I also, to be fair, point out in the book is how deep that question goes. Because when you dig all the way through and you peel that onion all the way, at the core of it is the question: Who gets to decide what's true and what's false? That's not an algorithmic question. It's not a technical question. It's not an economic or business model question. It is a deeply philosophical question. And I don't think that we have a good answer to that question. I do believe that if we could come to a consensus on veracity, the scores would help us make better decisions about what to believe and share, but we first have to deal with a very deep question, which is who gets to decide what's true and what's false, and how do we make those decisions in society?

Kurt Andersen: So you are pretty much the scholar and public intellectual of this realm. You are also, you know, a coach as well as a player, or a ref as well as a player, or something. You're also this media entrepreneur and investor who has sold things to Tinder and, you know, works with these people who run these big companies. We all wear many hats and we are all not without contradictions, but does that second hat, do you feel, constrain your ability to see and say the ugly truths when necessary, or call for the most painful reforms, or any of that?

Sinan Aral: No, I think quite the opposite. So I am a scientist, entrepreneur and an investor, in that order. My science is not funded by the companies that we're discussing here, and my entrepreneurship, although it's in a related set of areas -- I think that at the end of the day, I am an objective scientist first. All of my studies are peer-reviewed and published in peer-reviewed journals. There is a whole apparatus of managing conflicts of interest that goes with being a scientist, which I follow to the letter. All of that is disclosed in every paper. And so I am an objective scientist first, and I don't get any benefit from advocating in one direction or the other on the questions that I describe in my book. And being an entrepreneur and an investor gives me an understanding that I would not have if I were just a scientist, because I know the ins and outs of how this technology is being built, was built and continues to be built. I understand the landscape of the thousands of companies that we're evaluating for investments, in terms of what's coming next in the future of this technology and in artificial intelligence and machine learning in general. And so I think all of these various experiences help me understand this new digital era even better than any one of those experiences alone.

Kurt Andersen: Having been a useful idiot of the economic right myself back in the 90s, for instance -- and having castigated myself at length in writing about it -- I wonder if you don't sometimes wonder, like, 'wow, what if I decided Facebook was irredeemable, a force for evil, and should be, whatever, crushed.' I mean, if I were you, it would be hard to come to that conclusion and say it. You know what I'm saying? I mean, you're incentivized not to say, oh my God, these companies are killing us.

Sinan Aral: Well, I mean, I have to tell you, if that were true, I'm not very good at my job, because I come out pretty strongly against these companies in this book...

Kurt Andersen: Yes, yes...

Sinan Aral: You know, and we did the studies, you know, on the cover of Science, that showed falsity traveled farther, faster, deeper and more broadly than the truth...

Kurt Andersen: Do these people, you know, anguish over what they may be doing, have done, are doing? Some of them? All of them?

Sinan Aral: Yeah. You know, I know a lot of people who work in these companies, and I have to tell you that they are good people. They're very smart, and they are people who care about the planet, about humanity. And I think that there is a difference between the overall strategies that these companies are pursuing and the day-to-day engineers, scientists and managers in the companies, who are very, very good people. And I think that it would benefit society if the top brass of these companies listened more to their employees, who have over the last two to three years -- through whistleblowing, through being outspoken critics of their own companies and so on -- tried to set a standard for a better way in the new social age. And I actually believe that the true leaders of the new social age will be the ones who eventually realize that the long-term profit-maximizing, shareholder-value-maximizing strategies that are most effective for these companies are the ones that essentially align with and maximize society's values. Because if they try to profit at the expense of society's values, they're going to be met with a tremendous amount of backlash -- both regulatory and in terms of the consumer base, and in terms of employees jumping ship and whistleblowing and so on -- such that it's not a long-term, sustainable, shareholder-value-maximizing strategy to destroy the planet in pursuit of profits. There are better ways, and I think that the true leaders of the new social age will be the ones who realize that first.

Kurt Andersen: You have some plans and ideas for how to fix this -- what should be done, in many cases small tweaks, algorithmic and otherwise, that can make this less bad. What's the basic plan?

Sinan Aral: Yeah. So the four levers are money, code, norms and laws. Money is the business model. Code is the design of the algorithms and the platforms. Norms are how we adopt and use the technology and of course, laws are regulation. And by the way, all of these four oars need to be rowing in the same direction because no one solution fixes this problem. If we take it in reverse and start with regulation, the entry ticket is to create competition in the social media economy because without competition, these platforms have no incentive to change because they're making money hand over fist. Now, when I say competition, the first thing on everybody's mind is, ‘oh, you mean break up Facebook?’ But I actually don't think that that creates sustainable competition. I mean, we can decide to break up Facebook or not, but that's not going to sustainably create competition. 

Kurt Andersen: And there may be other reasons to break up companies than the particular ones we've talked about. 

Sinan Aral: Sure. But the real solution to create competition is interoperability and social network and data portability. The reason why you have movements toward monopoly in the social economy is because the economy runs on network effects, which means the value of any of these platforms is a function of the number of users it has. And what happens in an economy that runs on network effects is that it tends toward market concentration. So if you break up the market leader without structural reforms to the economy itself, it'll just tip the next Facebook-like company into market dominance, because the economics of the social economy are such that it tends toward market concentration, because it runs on network effects. The right way to regulate structurally a market that runs on network effects is to mandate by law that these platforms have to be interoperable -- that you can be on one platform and connect with users on a different platform.
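
One stylized way to see this argument is Metcalfe's law, under which a network's value grows roughly with the square of the users you can reach. The toy model below is an assumption for illustration, not a formula from the book: with interoperability, value depends on reachable users rather than owned users, which erases the incumbent's structural advantage.

```python
def network_value(own_users: int, interoperable_with: list[int]) -> int:
    """Metcalfe-style toy: value ~ (reachable users) squared."""
    reachable = own_users + sum(interoperable_with)
    return reachable ** 2

big, small = 900, 100
print(network_value(big, []), "vs", network_value(small, []))          # 810000 vs 10000
print(network_value(big, [small]), "vs", network_value(small, [big]))  # 1000000 vs 1000000
```

In the walled-garden case the incumbent is 81 times more valuable than the entrant; once the two can exchange messages, both networks reach the same 1,000 users and compete on quality instead of size.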

Kurt Andersen: So just to get specific: I'm on Facebook. I decide I don't want to be on it, for whatever reason. I want to go to some other new platform, X, a competitive platform. What you're proposing would allow me to say, 'sorry, Facebook, I don't like what you do with my privacy, I don't like what you're doing in China, whatever I don't like about you. I'm taking my stuff and putting it over on this one' -- which is effectively impossible now, right?

Sinan Aral: Exactly. The reason people don't leave Facebook, if they don't leave Facebook, is because all of their friends and family are on Facebook, and they say, 'how am I going to connect with these people if I leave Facebook?' Well, if Facebook were mandated by law to accept messages from Sinan's new social network that I invent tomorrow, then people could choose Sinan's social network -- better on privacy, better on other policies -- and send messages to people on Facebook from this new social network. And if that was mandated by law, it would create competition. When AOL merged with Time Warner, AIM -- AOL Instant Messenger -- was the market leader with 65 percent market share.

Kurt Andersen: They invented messaging.

Sinan Aral: They invented messaging, and, by mandate in the approval of this merger, we forced AOL to be interoperable with MSN Messenger and Yahoo! Messenger. And they went from 65 percent market share to 59 percent market share in one year, then to 55 percent market share the year after that. And then they ceded the entire market to new entrants just three years after. That's an example of how rightly written interoperability legislation can create competition in a market with network effects.

Kurt Andersen: And I just want to tell people, and remind people, that it's a more complicated process than making your telephone number go with you wherever you go. But number portability wasn't a thing that existed, and then it was required to exist, and now, of course, that's how you can go from T-Mobile to Verizon. And that's what you're talking about.

Sinan Aral: People need to be able to take their social networks with them, just as they did when they took their cell phone numbers with them when they switched from Verizon to Sprint. And people say, 'oh, well, it's technically difficult to do this interoperability thing over social networks.' But it's really not that hard if you think about it. The platforms have solved much more technical challenges than this -- harder challenges, really. They all have the same messaging formats now. They have a textual format. They have a video format. They have an audio format. They all have stories now. These are very similar messaging formats that we could create standards around. And you could have a stack of five to seven interoperable messaging formats that they all had to adhere to, with APIs through which they could send messages to each other in those formats. They could build proprietary formats on top of that if they wanted to, but this would allow interoperability, which would create competition and would create incentives for them to then deliver platforms that we like.
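
As a sketch of what one standardized format in that stack might look like, here is a minimal cross-platform message type with a JSON wire form. Every field name here is hypothetical; no actual standard is being quoted.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class InteropMessage:
    """A minimal common message format of the kind described above."""
    sender: str     # e.g. "alice@platform-a" (hypothetical addressing)
    recipient: str  # e.g. "bob@platform-b"
    kind: str       # one of: "text", "image", "video", "audio", "story"
    body: str       # text payload, or a URL for media
    sent_at: str    # ISO-8601 timestamp

def to_wire(msg: InteropMessage) -> str:
    """Serialize to JSON -- what one platform's API might POST to another's."""
    return json.dumps(asdict(msg))

msg = InteropMessage("alice@platform-a", "bob@platform-b",
                     "text", "hello across platforms", "2021-05-01T12:00:00Z")
print(to_wire(msg))
```

The handful of shared fields is the whole idea: platforms stay free to innovate on top, but anything expressible in the common stack must cross the boundary.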

Kurt Andersen: And one more thing I want to mention before we move on... United States v. Microsoft didn't break up Microsoft -- didn't really want to. But the settlement did prevent Microsoft from becoming a monopoly of the new Internet, right? Allowing Facebook and Google to become Facebook and Google.

Sinan Aral: Yeah, I mean, a lot of people say, quote unquote, 'the trial is the remedy' -- that basically just going through an antitrust trial creates a lot of change in a company. And in the book I advocate forward-looking merger oversight: we should be a lot better at conducting oversight over the possible mergers and acquisitions that Facebook and other companies are making.

Kurt Andersen: I mean, I guess the horse is out of the barn, but just to show people, and Facebook, that we're serious, why not make them divest WhatsApp and Instagram now? Just to say, like, we're serious.

Sinan Aral: First of all, it's not the structural reform to the economy that would create sustained competition. Second of all, the FTC conducted that oversight and approved those mergers. And so, yes, they reserved the right in that oversight to revisit. But it sends a chilling message if they conduct oversight, approve a merger -- then somebody buys a company, invests hundreds of billions of dollars into it -- and then they say, oh, well, actually, no, you need to divest that company. I think a much better way to be serious is to create laws with penalties and punishments for breaking them that have teeth -- you know, a 20-percent-of-revenue punishment. If you have interoperability legislation, or if you're going to implement product safety regulations that make you responsible for the harms you put into society, those shouldn't be, you know, a five-million-dollar or a ten-million-dollar fine; they should be very hefty fines that make a meaningful dent.

Kurt Andersen: That's so important, the product safety liability thing, which people understand with cars, with insurance, with all of it. We don't want to get deep into the Section 230 weeds, but introducing that idea matters, because it is about product safety. Fortunately or unfortunately, it's overlaid and intertwined with free speech, which is a whole other thing. But there are product safety and liability issues here that, it seems to me, can be part of the solution.

Sinan Aral: Absolutely. I mean, you know, we've talked about the antitrust piece, but this is one small piece of the solution that I describe in the book. And I do talk about, well, where do we draw the line between free speech and hate speech? How do we reform Section 230? I think a repeal of Section 230 would be a disaster for the Internet. I don't think it's workable at all. I think that product safety regulation is important. But it's also difficult, because attributing the harm to social media is a step that is difficult to prove even in science. We talked about effects on democracy. Well, how do you show and prove the straight line between social media and effects on elections? It's not as easy in data as it is with the difference in fatalities in car crashes with and without seatbelts. That's pretty clear. The research that shows that tobacco causes cancer is pretty clear. The research that shows that Facebook harms democracy? Not as clear.

Kurt Andersen: I was thinking, your book came out late last year, before Donald Trump was kicked off Twitter and other social media platforms. But talk about the effectiveness of banning a super-spreader of falsehood. There was that study done the first week after he was off Twitter: social media shares of misinformation of all kinds -- about the big lie, about election fraud -- were cut by three quarters in a week, right? I mean, it shows you that, for better, potentially for worse as well, these companies by themselves -- nudged and legislated and regulated toward these decisions -- can stop putting out as much toxic stuff as they have.

Sinan Aral: Yeah. I mean, I think that there was a short-term reduction. I think that there was an underlying demand for that kind of content that continued and continues. And it's a sort of case of whack-a-mole: you whack a mole here, and then it pops up somewhere else. And so, you know, obviously at the same time you have Parler, which at the time was really advocating, hey, we're a place where you can say anything. And then they also had trouble sort of maintaining their investor population by advocating that. I think that there will always be new avenues of communication where the demand for, and the supply of, that kind of information continues to crop up. But yes, in the traditional mainstream social media channels, the big lie in particular, and other lies, reduced dramatically in the weeks following Donald Trump being kicked off.

Kurt Andersen: And by the way, whack-a-mole gets a bad name. Life is whack-a-mole. So, as you said, the true leaders of this social age will be people who make hard decisions and put social welfare above shareholder value. Yes, I agree. You say the most consequential decisions of the new social age are yet to come. So what are those most consequential decisions, and how hopeful are you that gazillionaires, for whom shareholder value and supremacy is everything, will make the right choices?

Sinan Aral: I'm cautiously optimistic. I think the right choices are: one, exploring new business models, to understand how you could create sustainable economic value and reduce some of the harmful effects, whether that's a subscription model or other types of models that we've discussed. Second is moving toward a more multi-objective set of recommendation algorithms that favor diversity, that favor high-quality content, that don't just rely on first-order popularity, that don't just rely on giving you more of what you... what they think you like all the time. I think we should apply that also to the friend suggestion algorithms, the "people you may know" algorithms, so that we can diversify the human social network as well, because that's becoming very cliquey. It's becoming a smaller and smaller world as we get wrapped in tightly knit clusters of very similar people, rather than keeping that diversity of connections open. And then, tamping down on harmful content. So, for instance, I think we can all agree that live streaming of mass murders, like the one we saw in Christchurch, New Zealand, is something that we can all do without on social media, and that it's very harmful for society. But there are a number of other things that I think the companies could be bold about and say, this is harmful and we're going to really rigorously and systematically cut it off at the knees, including anti-vaccine content that's known to be false. It's fine to question medical science. It's fine to question the efficacy of vaccines and so on. But there's a lot of anti-vax content out there that is known to be false, and known to be designed to be exciting and inciting, that could be clamped down on. And I think that they have to be really protective of free speech. I think that they have to be careful about how they design their policies in order to strike the right balance between free speech and harmful speech. So there are a number of things the platforms could do to take bold steps toward contributing to a brighter social age.
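
A minimal sketch of the multi-objective ranking described above: blend predicted engagement with content quality and a diversity bonus for unfamiliar topics, instead of ranking on engagement alone. The weights and field names are illustrative assumptions, not any platform's real objective function.

```python
def multi_objective_score(item: dict, seen_topics: set[str],
                          w_engage: float = 0.5, w_quality: float = 0.3,
                          w_diverse: float = 0.2) -> float:
    """Score an item on engagement, quality, and topic diversity
    rather than optimizing for predicted clicks alone."""
    diversity = 0.0 if item["topic"] in seen_topics else 1.0
    return (w_engage * item["p_click"]
            + w_quality * item["quality"]
            + w_diverse * diversity)

seen = {"politics"}
outrage = {"topic": "politics", "p_click": 0.9, "quality": 0.2}
science = {"topic": "science", "p_click": 0.5, "quality": 0.9}
print(multi_objective_score(outrage, seen))  # 0.51
print(multi_objective_score(science, seen))  # 0.72 -- wins despite fewer clicks
```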

Kurt Andersen: I have one last question connected to the future in a somewhat morbid way, which is: I noticed again and again in the research, in your book and elsewhere, that, of course, it's older people who disproportionately go to false news websites and who share falsehoods. And now you wonder -- and I'm going to ask you to comment on this without scientific rigor -- is it intrinsic to old people, or is it specific to old people now, who are not Internet natives? And to some degree, when old people die off, will younger people have a more natural ability to be savvy about what they're seeing? And will that be one of the 20 ways, 50 ways, 100 ways in which the problem is solved?

Sinan Aral: Well, I mean, I think my guesstimate is that, yes, it's a problem of now. I think that, in essence, people who are over 65 today have not had to deal with the flood of falsity coming from numerous different sources at once. They have had in the past a set of trustworthy information sources that were easier to control, and now that has splintered into an ecosystem of millions of people, billions of people, in a real-time conversation where the information could be coming from anywhere. So it's much more difficult to sort out. And they have had the greatest transition, from the reality that they knew to a new reality of digital and social media, whereas younger generations sort of grew up in this and understood it from the beginning: well, we have to be a little bit more cautious. What's the source of that? I need to take this with a grain of salt, and so on. So I don't think it's something inherently characteristic about people as they age. I think it's a moment in time where they grew up in a different reality. And I hope, like you said, that younger generations will be more prepared.

Kurt Andersen: Well, there we have a hopeful note to end on.

Sinan Aral: Perfect.

Kurt Andersen: Fingers crossed.

Sinan Aral: I love it. 

Kurt Andersen: Sinan Aral, this was just a great pleasure. Thank you so much.

Kurt Andersen: The World As You'll Know It is brought to you by Aventine, a nonprofit research institute creating and sharing work that explores how today's decisions could affect the future. The views expressed do not necessarily represent those of Aventine, its employees or affiliates. Danielle Mattoon is the editorial director of Aventine. The World As You'll Know It is produced in partnership with Pineapple Street Studios.

On the next episode of The World As You'll Know It, I'll talk with Alison Gopnik. She is one of the best known cognitive scientists around, a professor of psychology and philosophy at the University of California at Berkeley, who is helping computer engineers program A.I. to learn more like children do. 

Alison Gopnik: So I think what will happen if we actually get more sophisticated machines is that it's unlikely there'll be a sort of sense of, "Does this one have consciousness or not?" as if it's a binary, or "Is it intelligent or not?" It's that different kinds of creatures, with different kinds of functions and different kinds of computational complexity, are going to have different kinds of intelligence and probably different kinds of consciousness.
