Over the past week, we’ve seen a cascade of negative news stories about Facebook. To close observers, none of them reported anything new. Lost amid all this noise was the fact that Facebook is fighting a war—against both bad actors on its platform and poor public perception—and has a new general. His name is Mark Zuckerberg.
In a remarkable 4,500-word post last Thursday—which, notably, received zero press coverage—Zuckerberg finally came to grips with the steps he needs to take to solve the problem of content moderation across Facebook’s various services. In the post, and in a conference call held the same day, he came across as assertive and determined.
Here are some of the most interesting points from Zuckerberg’s post:
One of the most painful lessons I’ve learned is that when you connect two billion people, you will see all the beauty and ugliness of humanity.
After many years of dealing with content moderation issues, Facebook is finally ditching its Pollyannaish assumption that the internet is only good. If there were ever a forcing function needed to spur Zuckerberg into action, this new understanding was it.
The problem, though, is that content moderation is genuinely hard. Where does one draw the line? Is showing a nipple good, or bad? The answer is… it depends. A recent Radiolab podcast explored this in depth. After iterating several times on what kinds of breastfeeding imagery were allowed, Facebook’s content moderators had to decide what to do about an image of a teenage African girl breastfeeding a goat:
And we googled breastfeeding goats and found that this was a thing. It turns out it’s a survival practice. According to what they found, this is a tradition in Kenya that goes back centuries. In a drought, a known way to help your herd get through the drought is, if you have a woman who’s lactating, to have her nurse the baby goat along with her human kid and so there’s nothing sexual about it.
The episode ends inconclusively, but makes clear that content moderation is an endless game of whack-a-mole. People are always trying to push the boundaries of what’s acceptable, and the boundaries of what’s acceptable are always changing. As Zuckerberg noted in his post,
As with many of the biggest challenges we face, there isn’t broad agreement on the right approach, and thoughtful people come to very different conclusions on what are acceptable tradeoffs. To make this even harder, cultural norms vary widely in different countries, and are shifting rapidly.
It’s a positive development, then, that the French government will be allowed to peek behind the curtain and see Facebook’s content moderation in action. Regulators—judging by their questions at Facebook’s various public hearings—don’t really understand the scale and difficulty of the challenges Facebook faces.
The big shift over the past couple of years, though, has been from reactive moderation—waiting for user reports of problematic content before taking it down—to proactive moderation: accepting that the internet amplifies both good and bad, and actively detecting problematic content so it can be demoted or deleted.
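To make the reactive-versus-proactive distinction concrete, here is a minimal sketch, in Python, of what a proactive pass could look like. Everything in it (the Post type, the classify_risk stand-in, the thresholds) is a hypothetical illustration, not Facebook’s actual pipeline or API.

```python
from dataclasses import dataclass


@dataclass
class Post:
    id: str
    text: str


def classify_risk(post: Post) -> float:
    """Toy stand-in for a trained classifier; returns a 0-1 policy-risk score.

    A real system would call a machine-learning model here, not match keywords.
    """
    flagged_terms = {"hate", "attack"}  # illustrative only
    hits = sum(term in post.text.lower() for term in flagged_terms)
    return min(1.0, 0.5 * hits)


def moderate(post: Post, remove_at: float = 0.9, demote_at: float = 0.5) -> str:
    """Proactive pass: score content up front instead of waiting for user reports."""
    score = classify_risk(post)
    if score >= remove_at:
        return "remove"  # clear violation: take it down
    if score >= demote_at:
        return "demote"  # borderline: reduce distribution, queue for human review
    return "allow"


print(moderate(Post("1", "A photo of my dog")))  # -> allow
```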
Being proactive has resulted in important early successes. For context, it’s instructive to go back to pre-internet days. In 1974, a woman suffering from depression committed suicide on live TV, the first person ever to do so. Facebook, in effect, gave everyone a broadcast license, and a few months after it introduced its livestreaming feature in 2015, several people did the same.
This is the update Zuckerberg had to provide on this front:
Another category we prioritized was self harm. After someone tragically live-streamed their suicide, we trained our systems to flag content that suggested a risk — in this case so we could get the person help. We built a team of thousands of people around the world so we could respond to these flags usually within minutes. In the last year, we’ve helped first responders quickly reach around 3,500 people globally who needed help.
Another sign of progress has been Facebook’s attitude towards countries, like Myanmar, where Facebook effectively is the internet:
In the past year, we have prioritized identifying people and content related to spreading hate in countries with crises like Myanmar. We were too slow to get started here, but in the third quarter of 2018, we proactively identified about 63% of the hate speech we removed in Myanmar, up from just 13% in the last quarter of 2017. This is the result of investments we’ve made in both technology and people. By the end of this year, we will have at least 100 Burmese language experts reviewing content.
Importantly, Zuckerberg notes that there are now “around 30,000 people” responsible for enforcing Facebook’s content policies. But Facebook can’t be the sole arbiter of free expression, so it is creating an “independent body” to review content decisions. The goal is to have this body—something like an external Supreme Court—established by the end of 2019.
This work will now be reported alongside Facebook’s quarterly earnings. For instance, here is the most recent report for Community Standards Enforcement, along with minutes of the standards forum meeting.
Incentives Matter
In early September, Jack Dorsey, CEO of Twitter, testified before both the Senate and the House. He raised the problem of incentives: the purpose of Twitter is to act as a town square and encourage conversation, but optimizing for “likes” and “followers” may not be the right way to achieve that.
In his post, Zuckerberg wrote about a related incentive problem:
One of the biggest issues social networks face is that, when left unchecked, people will engage disproportionately with more sensationalist and provocative content. This is not a new phenomenon. It is widespread on cable news today and has been a staple of tabloids for more than a century. At scale it can undermine the quality of public discourse and lead to polarization. In our case, it can also degrade the quality of our services.
There’s a saying in journalism, “if it bleeds, it leads.” Some researchers have documented how, since the 1970s, “the mainstream commercial media in the United States changed their editorial policies […] to focus more on the police blotter.” Other work corroborates this finding.
In a new approach, Facebook will take content that people don’t like—but can’t help engaging with—and demote it. Here’s what the natural engagement pattern looks like:
Our research suggests that no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average — even when they tell us afterwards they don’t like the content.

This is a basic incentive problem that we can address by penalizing borderline content so it gets less distribution and engagement. By making the distribution curve look like the graph below where distribution declines as content gets more sensational, people are disincentivized from creating provocative content that is as close to the line as possible.
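Here is a minimal sketch of that idea, assuming a hypothetical 0-to-1 “closeness to the policy line” score from a classifier: under the natural pattern, engagement rises with closeness; the adjustment subtracts a penalty past a cutoff so distribution falls instead. The function shapes, the 0.5 cutoff, and the penalty weight are illustrative assumptions, not Facebook’s actual ranking formula.

```python
def natural_engagement(closeness_to_line: float) -> float:
    """Observed pattern: engagement rises as content approaches the policy line."""
    return closeness_to_line  # simplified monotone relationship


def adjusted_distribution(closeness_to_line: float, penalty: float = 2.0) -> float:
    """Target pattern: penalize borderline content so its distribution declines
    as it gets more sensational, instead of climbing all the way to the line."""
    return natural_engagement(closeness_to_line) - penalty * max(0.0, closeness_to_line - 0.5)


for c in (0.2, 0.5, 0.8, 1.0):
    print(c, round(adjusted_distribution(c), 2))
# 0.2 -> 0.2, 0.5 -> 0.5, 0.8 -> 0.2, 1.0 -> 0.0:
# past the midpoint, distribution falls off rather than rising toward the line.
```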
There is much more in Zuckerberg’s post, and it is very much worth reading in its entirety. The comment section under the post is also filled with positive reactions and appreciation from Facebook users.
Thinking from First Principles
Communications networks created before the advent of the “information superhighway”—the newspaper, telegraph, telephone, radio and TV—did not allow global, many-to-many participation and interaction by ordinary people. Facebook, by creating a digital representation of our real-world, physical connections, enables a type of network humans have never dealt with before. It is, in effect, a new type of technology.
And every new type of technology, going back to the discovery of fire, has been a double-edged sword, useful for both good and evil. Humans have created fixes, which frequently carry unintended consequences of their own (“revenge effects,” or “bite back,” as Edward Tenner calls them). In particular, technological capabilities can become quite complex and outstrip our ability to understand them:
Kevin Systrom, the co-founder of Instagram, noted recently that “Social media is in a pre-Newtonian moment, where we all understand that it works, but not how it works.”
Now that we are beginning to understand the incentive problems in social media, it’s time to put the pedal to the metal and fix them.
Fixing Facebook
Benedict Evans, a partner at venture capital firm a16z, made the astute observation that “Facebook is really just an extended case study of Goodhart’s Law.”
Goodhart’s Law states that “when a measure becomes a target, it ceases to be a good measure.” In Facebook’s case, since the company went public in 2012, investors have obsessed over reported metrics such as daily and monthly active users. But what if those are the wrong metrics?
Zuckerberg’s early mantra was “move fast and break things,” later amended to the “much less sexy version of move fast with stable infrastructure.”
Today, Facebook’s new marching orders should be “move fast and rebuild trust.” Trust should be the new measure. Zuckerberg gets this. In last Thursday’s call with journalists, he said,
One [of] the most basic thing that people trust us with is that people come to our services and about 100 billion times a day choose to share some text or a photo or videos with a person or a group of people or publicly and they need to know that our services are going to deliver that content to the people that they want. And that’s the most fundamental thing and I think we continue focusing on delivering that and I think people have good confidence in general that when they use their services, that’s what’s going to happen.

At the corporate level more broadly, I think people want to be able to trust our intention and that we’re going to learn and get stuff right. I don’t think anyone expects every company to get everything right the first time, but I think people expect companies to learn and to not continue making the same mistake and to improve and learn quickly once you are aware of issues.
The Media Makes Some Noise
While Facebook is making its best effort yet at addressing these issues—David Kirkpatrick, who wrote a book on Facebook, praised Zuckerberg’s tone on last week’s call as the best he has ever heard from him when it comes to taking these problems seriously—journalists continue to bash Facebook as if it were doing nothing at all.
A cynic would point out that the media has many reasons for picking on Facebook:
Facebook has turned the traditional gatekeeper media companies into commodity providers, and helped accelerate the unraveling of their business models
Journalists, who are overwhelmingly left-leaning, are angry that Facebook helped elect Trump
Facebook is a company a lot of people care about, and writing lots of scary-sounding stories about it generates pageviews and advertising revenues
Perhaps the media’s inability to appreciate the progress Facebook has made has to do with information asymmetry. Throughout Facebook’s history, there have been several episodes of folks complaining about changes the company has made, which ultimately were vindicated; the crowd, without the benefit of seeing Facebook’s internal data, was wrong.
The best-known example is, of course, the creation of the newsfeed in 2006. Antonio García Martínez, a former Facebook employee who wrote a book about his experience there, put it this way: “Journalists who cover Facebook and bristle at their haughty disdain and/or patronizing condescension, consider this illustrative example from Daniel Ellsberg, discussing a conversation he had with Kissinger.”
The Ellsberg story is an interesting example of the Dunning-Kruger effect, in which one is not only clueless about a topic but also clueless about one’s own cluelessness. Ellsberg warns Henry Kissinger that once Kissinger joins the government and gains access to enormous amounts of information he never had before, he will become incapable of learning anything from anyone without the same security clearances.
That’s how journalists perceive Facebook: while they think Facebook isn’t doing enough, the reality is that Facebook insiders have a huge informational advantage and can see things the outside world can’t.
The media’s noise has taken a toll on Facebook’s stock price and on employee morale; and while many other stocks have declined in lockstep with Facebook’s, it’s interesting to ponder whether folks are trading on all this noise.
This reminds me of an excerpt from one of my favorite books on behavioral finance, Misbehaving:
Larry Summers had just written the first of a series of papers with three of his former students about what they called ‘noise traders.’ The term ‘noise traders’ comes from Fischer Black, who had made ‘noise’ into a technical term in finance during his presidential address to the American Finance Association, using it as a contrast to the word ‘news.’ The only thing that makes an Econ [rational person] change his mind about an investment is genuine news, but Humans [normal people] might react to something that does not qualify as news, such as seeing an ad for the company behind the investment that makes them laugh. In other words, supposedly irrelevant factors (SIFs) are noise, and a noise trader, as Black and Summers use the term, makes decisions based on SIFs rather than actual news.

Summers had earlier used more colorful language to capture the idea that noise might influence asset prices. He has an infamous but unpublished solo-authored paper on this theme that starts this way: ‘THERE ARE IDIOTS. Look around.’
For Zuckerberg and his troops, the message is clear: ignore the noise, and march onwards.