On YouTube’s CEO’s empty rhetoric and a future where we’re back in control of the information we share.
Michael Jones | Dec 2
Last night, 60 Minutes aired an interview with YouTube CEO Susan Wojcicki to check in on the company’s efforts to root out hate speech, disinformation and radicalization. Reporter Lesley Stahl pressed Wojcicki on a number of thorny issues I’ve written about in previous Daily Readers — and will revisit in this one.
Defending the company’s policies on political ads, Wojcicki said, “For every single video, I think it’s important to look at it. Politicians are always accusing their opponents of lying.” I won’t park us here too long, but it would be wildly irresponsible to let this weak justification slide: companies like YouTube allow politicians, including our current president, to reach up to 80 million Americans with an intentionally misleading ad. There’s a clear difference between being accused of lying by a political opponent and being empowered to lie to the electorate because you have the budget to do so.
Stahl also asked Wojcicki about the artificial-intelligence algorithms YouTube uses to recommend new videos, which keep users watching so that advertisers have eyes and ears for their products and services — even when those videos are harmful to communities I’m a part of or an ally to: people of color, queer people and women. In response, Wojcicki offered much of the same boilerplate messaging we’ve grown to expect from tech executives. “You can go too far and that can become censorship. And so we have been working really hard to figure out what's the right way to balance responsibility with freedom of speech,” she said. “We think there's a lot of benefit from being able to hear from groups and underrepresented groups that otherwise we never would have heard from.”
But the algorithms aren’t designed to surface content from these “underrepresented groups” unless it counters with the same violence and vitriol that’s infiltrated our mainstream. (According to Wojcicki, YouTube has started reprogramming its algorithms in the US to recommend questionable videos much less often and to point users who search for that kind of material toward authoritative sources, like news clips. She says the amount of time Americans spend watching this controversial content has decreased by 70 percent.) At press time, YouTube had not responded to a request for comment on concerns from minority communities that it amplifies harmful content on the platform.
Tech companies are unwilling to bite the hand that feeds them
There’s no question that Wojcicki’s job of “nurturing the site’s creativity, taming the hate and handling the chaos,” as Stahl described it, is fraught with challenges. In fact, it’s after watching or reading one of these interviews that I’m especially grateful for choosing a path that lets me avoid grappling with competing interests from advertisers, consumers and creators. But Wojcicki and her contemporaries do themselves no favors when they constantly innovate to make their products even stickier (when they’re already addictive enough), look the other way (or respond too slowly) when users violate their policies, or punt to our elected officials, who, save for a few socially savvy legislators, aren’t well-versed enough in how these technologies work to bring any meaningful solutions to the table.
Sometimes it can feel frustrating to engage with tech companies when it’s obvious they’ll continue to carry on with business as usual as long as they profit from the very behavior they claim to be interested in moderating.
The internet offers consumers an infinite supply of media, so it’s usually the “borderline content” — the most shocking, outrageous and offensive stuff that stops just short of violating community guidelines — that stands out.
In a 2018 blog post, Facebook CEO Mark Zuckerberg wrote:
One of the biggest issues social networks face is that, when left unchecked, people will engage disproportionately with more sensationalist and provocative content. This is not a new phenomenon. It is widespread on cable news today and has been a staple of tabloids for more than a century. At scale it can undermine the quality of public discourse and lead to polarization. In our case, it can also degrade the quality of our services.
Our research suggests that no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average — even when they tell us afterwards they don't like the content.
This is a basic incentive problem that we can address by penalizing borderline content so it gets less distribution and engagement.
Zuckerberg’s comments show that tech companies understand their apps incentivize so-called borderline content. He’s right: Sensationalism was a media fixture before Facebook or Google (which owns YouTube) rose to dominance. But the conversation is incomplete if we ignore the ad-supported business model that is fueled by the very engagement borderline content generates. Roughly 90 percent of Facebook’s and Google’s revenue comes from ads. You’d be right to think it’s against those companies’ business interests to moderate too much.
Perhaps that’s why it seems Wojcicki attempted to manage (or reset) expectations in the 60 Minutes interview. “YouTube is always going to be different than something like traditional media where every single piece of content is produced and reviewed,” she said. “We have an open platform. But I know that I can make it better. And that’s why I’m here.”
“Silicon Valley is just like every other large center of business in the world”
I’ve long believed that tech CEOs like Wojcicki see their products and services as leaving the world better than they found it. But as Instagram cofounder Kevin Systrom said to Stella Bugbee for New York:
There was a while where everyone thought it was all about solving the world’s problems and it was all mission driven. There is a lot of that, but the idea that there’s no sense of capitalism, no sense of winning at all costs, would be misguided. People have come to realize that Silicon Valley is just like every other large center of business in the world. It’s an industry, and it has its cast of characters. It’s not all philanthropy.
This didn’t stop Roger McNamee, an investor and the subject of an excellent profile by Brian Barth published in last week’s New Yorker, from drinking the “social media as a force for good” Kool-Aid — before turning into one of the industry’s fiercest critics.
McNamee was convinced that Facebook was different. Then, in February, 2016, shortly after he retired from full-time investing, he noticed posts in his Facebook feed that purported to support Bernie Sanders but struck him as fishy. That spring, the social-media-fuelled vitriol of the Brexit campaign seemed like further proof that Facebook was being exploited to sow division among voters—and that company executives had turned a blind eye. The more McNamee listened to Silicon Valley critics, the more alarmed he became: he learned that Facebook allowed facial-recognition software to identify users without their consent, and let advertisers discriminate against viewers. (Real-estate companies, for example, could exclude people of certain races from seeing their ads.)
Ten days before the Presidential election, McNamee sent an email to Zuckerberg and [Facebook COO Sheryl] Sandberg. “I am disappointed. I am embarrassed. I am ashamed,” he wrote.
Recently Facebook has done some things that are truly horrible and I can no longer excuse its behavior … Facebook is enabling people to do harm. It has the power to stop the harm. What it currently lacks is an incentive to do so.
McNamee, according to Barth, continued:
“They were my friends. I wanted to give them a chance to do the right thing. I wasn’t expecting them to go, ‘Oh, my God, stop everything,’ but I was expecting them to take it seriously,” he said. “It was obvious they thought it was a PR problem, not a business problem, and they thought the PR problem was me.” McNamee hasn’t spoken to Sandberg or Zuckerberg since.
Palace intrigue aside, these burned bridges led to McNamee’s argument that Facebook “should be used for staying in touch with friends and family, rather than for political debates, which the platform alchemizes into screaming matches.” He told Barth that “outrage and fear are what drive their business model, so don’t engage with it. I was as addicted as anybody, but we have the power to withdraw our attention.”
The future after social media
Withdrawing our attention is one thing. But imagining a post-social-media internet, which Annalee Newitz did in an article published by the New York Times last Saturday, is another. Newitz pulls no punches in the opening paragraph of “A Better Internet Is Waiting for Us”:
Social media is broken. It has poisoned the way we communicate with each other and undermined the democratic process. Many of us just want to get away from it, but we can’t imagine a world without it. Though we talk about reforming and regulating it, “fixing” it, those of us who grew up on the internet know there’s no such thing as a social network that lasts forever. Facebook and Twitter are slowly imploding. And before they’re finally dead, we need to think about what the future will look like after social media so we can prepare for what’s next.
Throughout her piece, Newitz highlights an important distinction, credited to author and academic Cal Newport, between the social internet and social media. From “Tech companies enable creativity, but devalue creators”:
Sure, tech companies enable creativity. But they don’t value all creators — especially if you’re a woman, person of color or member of the LGBTQ+ community. Without unregulated access to attention and creativity, these tech companies likely aren’t billion-dollar businesses.
It feels impossible to own a creative business or live a modern life without these platforms. That’s what’s so insidious about the Consumption Okie-Doke. These companies are now so dominant that, as Cal Newport writes in Deep Work, “they’re inextricably intertwined into the fabric of the internet.”
People assume I’m anti-tech or anti-internet because I’m anti-social media. That’s because, in most people’s eyes, “to criticize social media, therefore, [is] to criticize the internet’s general ability to do useful things like connect people, spread information, and support activism and expression,” Newport writes.
But I’ve discovered in the two years since I quit social media that it’s possible to enjoy the connection, discovery and expression the internet enables without endorsing a small number of big companies that monetize — and damn-near monopolize — it at my expense.
Newitz spoke to a range of experts on the possibilities of life after social media, where “ad networks parasitic on human connection” like Google and Facebook can no longer make money on outrage and deception.
Science fiction writer John Scalzi imagines online profiles that begin with everything and everyone blocked by default, with news and entertainment reaching you only after you opt into them. That would protect users from “viral falsehoods, as well as mobs of strangers or bots attacking someone they disagree with.” My favorite part: “You can’t make advertising money from a system where everyone is blocked by default — companies wouldn’t be able to gather and sell your data, and you could avoid seeing ads.”
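Scalzi’s default-deny model is essentially an allowlist: nothing reaches your feed unless you’ve explicitly opted in. A minimal sketch in Python (all names here are hypothetical, invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    """Hypothetical 'blocked by default' profile: the allowlist starts empty,
    so strangers, bots and advertisers never reach the feed uninvited."""
    allowed_sources: set = field(default_factory=set)

    def opt_in(self, source: str) -> None:
        # The user must take explicit action before a source can reach them.
        self.allowed_sources.add(source)

    def feed(self, posts: list) -> list:
        # Only posts from sources the user opted into are shown.
        return [p for p in posts if p["source"] in self.allowed_sources]

profile = Profile()
posts = [
    {"source": "friend@example", "text": "hello"},
    {"source": "unknown-bot", "text": "outrage bait"},
]
print(profile.feed(posts))         # everything blocked by default: []
profile.opt_in("friend@example")
print(profile.feed(posts))         # only the opted-into friend's post
```

Note how the default is the opposite of today’s platforms, where the burden is on the user to block rather than to allow.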
Here’s an idea I’ve workshopped in private conversations with my creator friends, one that’s endorsed by Safiya Umoja Noble, a professor at the University of California, Los Angeles: “slow media”:
Instead of deploying algorithms to curate content at superhuman speeds, what if future public platforms simply set limits on how quickly content circulates?
It would be a much different media experience. “Maybe you’ll submit something and it won’t show up the next minute,” Noble said. “That might be positive. Maybe we’ll upload things and come back in a week and see if it’s there.”
That slowness would give human moderators or curators time to review content. They could quash dangerous conspiracy theories before they lead to harassment or worse. Or they could behave like old-fashioned newspaper editors, fact-checking content with the people posting it or making sure they have permission to post pictures of someone. “It might help accomplish privacy goals, or give consumers better control,” Noble said. “It’s a completely different business model.”
The key to slow media is that it puts humans back in control of the information they share.
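Mechanically, slow media amounts to a moderation queue with a mandatory delay: submissions sit in a review window before they circulate, and a human can pull anything harmful before publication. A minimal sketch of that idea, assuming a 24-hour window (the class and its methods are hypothetical, invented for illustration):

```python
import heapq

class SlowQueue:
    """Hypothetical 'slow media' queue: posts circulate only after a delay,
    giving human moderators time to review them first."""

    def __init__(self, delay_hours: float = 24):
        self.delay = delay_hours
        self._pending = []  # min-heap of (publish_time, post)

    def submit(self, post: str, now: float) -> None:
        # Nothing goes live immediately; it enters the review window.
        heapq.heappush(self._pending, (now + self.delay, post))

    def reject(self, post: str) -> None:
        # A moderator removes a post before it ever circulates.
        self._pending = [(t, p) for t, p in self._pending if p != post]
        heapq.heapify(self._pending)

    def publish_due(self, now: float) -> list:
        # Release only posts whose review window has elapsed.
        published = []
        while self._pending and self._pending[0][0] <= now:
            published.append(heapq.heappop(self._pending)[1])
        return published

q = SlowQueue(delay_hours=24)
q.submit("harmless update", now=0)
q.submit("dangerous conspiracy", now=0)
print(q.publish_due(now=1))     # [] — still inside the review window
q.reject("dangerous conspiracy")
print(q.publish_due(now=24))    # ['harmless update']
```

The design choice is the point: the delay is not a bug to be optimized away but the feature that makes human review possible at all.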
The urgency to imagine the internet after social media may be tempered by the favorable public opinion tech companies still enjoy. In Brian Barth’s aforementioned New Yorker piece, he cites a recent Pew Research Center poll finding that around half of Americans think the tech industry is having a positive impact on society (down from seven in ten in 2015). Google and Amazon came in second and third in a survey of millennials’ favorite brands conducted earlier this year. People are more worried about the behavior of banks and pharmaceutical companies than tech companies. And most have yet to meaningfully change their consumption habits.
As I wrote in “Barack Obama and the politics of technology”:
There’s an imbalance between the value tech companies provide with their social media platforms and the problems associated with their business models and public images, problems that weigh on the creative class’s ability to work and live on its own terms.
The question has never been whether these tools are convenient. What’s up for discussion is whether they’re necessary for creators and consumers to exchange ideas, experiences and money. My answer is a resounding no. And I’m betting on the future to prove me right.