A Simple Solution to the “Facebook Problem”

In the wake of the Cambridge Analytica fiasco, many observers feel the moment is ripe for Facebook to take a major fall. There is talk that the tone-deafness and arrogance of the company – seemingly embedded in its DNA via the self-serving track record of its founder – may ultimately lead to its downfall, either through some kind of mass user exodus or massive government penalties.

Some large advertisers are also putting major pressure on Facebook, though I doubt they have enough leverage to bring about change on their own.

Change is certainly in the wind. Facebook, now dimly aware of its limitations, is virtually begging to be regulated.

The primary “time to grow up” moments for Facebook’s counterparts among the tech juggernauts have revolved around antitrust. Arguably, Microsoft grew up and took a turn towards being a more responsible global citizen following its DOJ punishments in the 2001 antitrust case. Microsoft’s market capitalization today is $700 billion – pretty good for a company that was laid low by an antitrust ruling. In light of that, one might expect that even severe punishments will still leave Facebook plenty of scope to innovate and operate at scale.

More telling may be the maturation process of the companies’ founders. Bill Gates has been noteworthy for his balanced views on technology company behavior, and especially for his thoughtful philanthropy (not window dressing). He’s very nearly a mensch.

It’s hard to picture Mark Zuckerberg “getting it” in that way.

Facebook’s violations have centered on user privacy. Everyone with any common sense has always known about Facebook’s “creep factor” – its handing advertisers an incredibly granular picture of small or large groups of users. Pushing beyond even Facebook’s usual level of creepiness is more than many of us will tolerate.

Still, given the overarching loss of privacy associated with the digital experience today – thermostats, deeply surveill-y phones, doorbells, personal assistants like Alexa chuckling at us; the widespread and guileless use of tools like Uber – can we expect major changes in the overall degree of respect for user privacy in the tech world? Unlikely.

At some level, most people in advertising (not “tech”) have always done something relatively innocuous – even though critics of advertising will reflexively point to how insidious it all is.

Scaremongers have ranted and raved about big companies “knowing things about us” so they can send us direct mail. But then, Doctors Without Borders and the Canadian Cancer Society do the same thing. Cambridge Analytica is just a cousin of many relatively innocuous means of providing data to advertisers, political parties, etc.

To me, seeing an ad for a type of bleach, or even a remarketing ad for the Westin Cape Coral, is relatively innocuous. Someone’s trying to sell me something. Sometimes they succeed!

But allowing U.S. politics – even the Presidential election – to be significantly influenced by media we the people do not understand, via mechanisms that leak our personal details to propagandists, takes a major step outside relatively innocuous commercial activity. “Tech companies” don’t like to set standards, police content, etc. Well, once you control the discourse and flow of information for billions of people, sorry, you’re no longer just a tech company. You’re part of a global regime of governance. Did you miss those classes at Harvard, Mark?

“If you choose not to decide, you still have made a choice.” Capeesh?

Beyond that, getting in bed with Cambridge Analytica for the purposes of gaming a U.S. election seems to be a major blunder even by Facebook’s own terms of reference. Surely, those terms of reference include creating or facilitating content and display advertising campaigns to dissuade militant jihadists from radicalism. Major tech companies, in their interactions with large consulting firms contracted by the U.S. government, have (reportedly) willingly participated in propaganda-style campaigns directed overseas. One assumes they facilitate such campaigns because it’s seen as a patriotic duty.

If they’re reasonably aware of how it all works, then how could they have been so clueless as to allow foreign powers to unleash propaganda on U.S. soil? (Twitter, too.) For a few extra dollars in ad revenue? Or just because they’re bad/lax at policing user accounts? This strikes me as tone-deafness to the point of incompetence. Possibly worse (remember the “patriotic duty” thing?).

Our agency’s primary expertise is PPC advertising, especially through Google AdWords. Looking at our current client roster, not a single client seeks to influence the direction of public opinion on politics or social issues. It’s all consumer products, travel, lead generation, mud endurance events, software, etc. It’s commerce, not lobbying or propaganda. (Of course, we’ve also run Facebook campaigns for our clients. Same commercial intent. If they weren’t effective, they wouldn’t run.)

A company like Google can steer clear of a lot of problems if the primary source of its ad revenue is commercial in nature, much like our client roster.

The organic parts of social media will continue to be rife with opinion. But accepting paid advertisements or covertly sharing data for political and social issues buys a platform much more trouble than it’s worth. You become part of a shadow government. That might be an advanced class at Harvard, maybe very advanced. Mark probably doesn’t have the prerequisites.

Why not scale back one’s ambitions to mediate every conversation? It’s no accident that Facebook recently announced an algorithm change to devote more time to (simple, innocuous, trivial, connecting, passion-sharing, topical) conversations with family and friends. Not too long ago, I think, this was how we all defined Facebook. Then it became something else.

Having re-emphasized the family-and-friends sharing mission, Facebook should be realistic about how much profit can be squeezed out of that: scale back, and reassess how it monetizes all that user attention. It’ll earn less profit, but it will gain clarity of purpose. The ecosystem is tired of all the opacity.

The upside in terms of revenue just isn’t worth the headache – or worse, the social damage – that may be caused by providing propagandists with a one-way mirror into our souls.

Part of Facebook’s problem is that you can boost posts. Organic is swirled in so tightly with paid that the two have become blended. It’s a bit like buying a can of red-and-yellow striped paint and expecting it to go on the walls in stripes, only to watch it come out orange. As it turns out, that’s a big problem.

Facebook should stick to what it knows. Simplify revenue streams and focus on commerce, not propaganda. (Scaremongers will still argue that companies selling us stuff is a terrible intrusion, especially if they use computers to try to get to know target markets more intimately. And regulators will still have to watch out for our privacy. But… baby steps.)

In light of the way Facebook has squelched organic reach for companies and organizations, and the current mechanism at posters’ disposal for unsquelching that reach (in part, “boosting posts”), good luck to Mark and the team as they do their utmost to, um, “fix it.”

Users have been known to squeal when it’s really obvious they’re being “sold to” or “manipulated.” What we should be most worried about is when the manipulation is so subtle that we have no clue it’s happening.

When I see an ad and decide to replace my $75 J.Fold wallet just a bit too soon with a new, $75 J.Fold wallet, I’m under no illusions that the folks at Wallet World are my friends or allies, just trying to “keep me in the loop” or “protect me from harm.” They simply want me to replace my wallet, and soon, before I go with another brand of wallet for the next two years. Let’s get back to that, shall we?