2017-09-28 22:46:03
Facebook’s Ad-Targeting Problem, Captured in a Literal Shade of Gray

For a sense of the dilemma confronting Facebook over its ad-targeting system, consider the following word: confederate.

As of Wednesday, any prospective advertiser who typed that word into Facebook’s ad-targeting engine would be prompted to distribute their ad to a potential audience of more than four million users who had indicated an interest in the Confederate States of America, according to a test by The New York Times.

There are plenty of Civil War buffs, of course, as well as students and scholars who have taken an academic interest in the Confederacy. Yet Facebook must also be mindful in today’s charged political atmosphere that some might use its targeting system to reach those who support the Confederacy as part of a white nationalist worldview.

The social network recently grappled with revelations that advertisers were able to target Facebook users who used terms like “Jew hater” to describe themselves. But even after the company took steps to shut down those clearly offensive categories, other targeting terms remain that fall into a gray area. That includes categories like Confederate States, which are legitimate in principle but can be potentially problematic or misused in practice.

This illustrates the blurry lines and the policing challenge that confront Facebook in its ad targeting. After a year in which the social network has accepted more responsibility for cracking down on false or offensive material, and a week in which the company twice announced new measures to prevent abuses by advertisers, some experts said the scale of that challenge was only starting to become apparent.

“What we’re actually talking about is all of the social issues one can think of — any social issue, social debate, social strife — being reproduced in this arena,” said Sarah T. Roberts, an assistant professor at the University of California, Los Angeles, who studies content moderation on digital platforms.

“These issues are taken wholly unresolved and put into a commercial context where they’re amplified and disseminated at instantaneous speed, forever,” she added. “I have great empathy around the difficulty.”

Targeting involving contentious subjects can be done legitimately, such as by companies advertising historical books, documentaries and television shows, said Rob Goldman, Facebook’s vice president of ads products. He acknowledged situations in which certain targeting categories could be used “in malicious ways” but said, “This type of behavior is against our policies and has no place on our platform.”

Facebook said it had multiple safeguards to ensure that an ad campaign was appropriate. While its system is far from perfect — the company recently disclosed that it allowed Russian operatives using fake accounts and pages to place ads on topics that polarized American voters, like race and immigration — the company said it would block an ad that included overtly racist content or directed users to a web page promoting racist ideas.

“We are taking a hard look at our ads policies and enforcement, and are looking at ways we can do better,” Mr. Goldman said.

How do people end up in the potential audience for Facebook’s ad-targeting categories in the first place? Facebook creates an ad category corresponding to a subject through a mix of human discretion and automated processes that it declined to describe.

Facebook users then effectively sort themselves into the targeting category by liking and visiting certain pages on the social network and through other activities they engage in on the service. Facebook has said that liking a page is one signal among many that helps it place users into the categories that advertisers can target.

So if Facebook creates, say, a red wine category, people increase their likelihood of being included in it by engaging with Facebook pages dedicated to the topic.
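The self-sorting mechanism described above can be illustrated with a minimal sketch. This is purely hypothetical: the category names, page names and matching logic below are invented for illustration, and it treats liking a page as the only signal, whereas Facebook has said a like is just one signal among many in a process it declined to describe.

```python
# Hypothetical sketch of users "sorting themselves" into ad-targeting
# categories by liking topic pages. None of these names or rules reflect
# Facebook's actual, undisclosed systems.

# Each targeting category is assumed to be associated with a set of pages.
CATEGORY_PAGES = {
    "red wine": {"Napa Reds Fan Page", "Bordeaux Lovers"},
    "confederate states of america": {"Civil War History Books"},
}

def categories_for_user(liked_pages):
    """Return the targeting categories a user would fall into,
    based solely on which pages they have liked."""
    matches = []
    for category, pages in CATEGORY_PAGES.items():
        # Liking any page tied to the category places the user in it.
        if liked_pages & pages:
            matches.append(category)
    return matches

user_likes = {"Napa Reds Fan Page", "Cooking Weekly"}
print(categories_for_user(user_likes))
```

Under this toy model, the user above lands in the red wine category; an advertiser choosing that category would then reach them, which mirrors the dynamic the article describes for the Confederate States category.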

Once an ad category exists on Facebook, advertisers can push their messages to those users. Those who may be targeted in an ad campaign around the Confederate States may be Civil War buffs who visited or liked a page about the Confederacy set up by a seller of history books.

But advertisers can also gain access to people associated with Facebook pages that perpetuate false, misleading or divisive information. For example, many people who liked two pages on Facebook that frequently defend the Confederacy are likely to be included in the Confederate States of America category that advertisers can target.

One of the pages, with roughly 250,000 likes, recently included a post declaring the Confederate Army “the greatest force that ever walked the Earth,” and another post prominently featuring a quote attributed to a Confederate general: “The Army of Northern Virginia was never defeated. It merely wore itself out whipping the enemy.”

Stephanie McCurry, a Civil War historian at Columbia University, examined both pages and found them littered with “fake history,” such as the suggestion that slavery was not the central reason for secession.

Despite their potential to offend, Facebook’s Confederacy pages do not appear to run afoul of the company’s standards on issues like hate speech. Some veterans of the digital advertising business said that as long as that is the case, it should be up to advertisers to determine whether to target categories composed partly of people who like these pages.

“At the end of the day, these gray areas are dictated by the advertiser,” said Chris Bolte, a longtime ad-technology official at companies like Yahoo and Walmart. Mr. Bolte said advertisers had every right to target the audiences most likely to be interested in their products and services, unless those audiences were “obvious hate groups.”

But Ms. Roberts at U.C.L.A. argued that simply by allowing Confederate States of America and similar pages to exist and then using this content to help advertisers target people with those interests, Facebook was blessing the views expressed there as legitimate.

“We can draw a line from content that proliferates on the platform to what is extracted and monetized, made into revenue flow from advertising,” she said. “It is up to Facebook to make the decision here whether to impede that process. Whatever they decide, it is no longer possible for that to fly under the radar.”

Other Facebook ad-targeting categories that fall into the gray area of being legitimate in principle but potentially problematic or open to misuse include Wehrmacht, which refers to the Nazi-era German military, and Benito Mussolini, the Fascist Italian leader during World War II, according to a test of Facebook’s ad-targeting system by The Times.

Advertisers who target an ad using the term Wehrmacht would probably gain access to many people who liked a page dedicated to the Wehrmacht that appears to celebrate the Nazi-era military.

While scrolling through the Wehrmacht page, Robert Citino, the senior historian at the National WWII Museum in New Orleans, said, “It does implicitly accept the German propaganda view of the Wehrmacht: handsome warriors drinking beer, with their planes on a hastily constructed airfield.”

He said people who esteem the Wehrmacht can only do so while “ignoring the fact that they were massacring supposedly inferior racial groups in the Eastern campaign,” though he defended their right to pursue an interest in the Wehrmacht’s tactics and equipment.

Scott Galloway, a marketing professor at New York University and author of a forthcoming book on the big tech companies, said Facebook should not necessarily ban content that celebrates institutions like the Confederacy or the Wehrmacht and advertising that targets people interested in these subjects. But he said allowing this content and selling ads around it should reflect on Facebook the same way it would reflect on, say, CNN or The Washington Post.

“I think it’s fairly cut and dried,” he said. “Their responsibility is the same as any other media company.”