Opinion

Digital Falsities To Prime Likes

Cambridge Analytica caused quite a kerfuffle when whistleblowers outed its questionable tactic of snagging 50 million profiles from Facebook

Using big data analytics to gauge the sentiment of potential voters, or to sway them, is not wrong. Seeding fake news, priming platforms with fake profiles, gaming algorithms and, definitely, stealing profile data is wrong. There is a touch of farce to what is already becoming a broad comedy in this evolving story of Cambridge Analytica ‘stealing’ data and helping win an election. Let’s look at the actors in this piece: UK-based data analytics firm Cambridge Analytica, Facebook, a ring of secretive investors and a winning presidential poll.

On the face of it, the culprit—it seems to everyone—is Cambridge Analytica, which illegally used data sourced from a legitimate Facebook app that profiled users and their friends, creating a database of 50 million individuals. Yet you can’t but marvel at how Cambridge Analytica changed voter perceptions using just a small subset of Facebook’s data. The question everyone is asking is just how they did it. It was simple, really. On its initial profile dump, the company superimposed rich data sets—believed to be in excess of 4,000 data points per individual—covering just about every American it could lay its hands on, legally, mostly. From gender, age and ethnicity to individual hobbies and favourites, it was all mixed up in a digital soup from which tastes as well as sentiments could be teased out.

Once it had profiled and siloed the lot, it was just a matter of reaching out to each group with the right content. For that, it chose Facebook. An easy choice, given that 81 per cent of the US population is on social media, and the leader is Facebook.
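
Stripped to its bones, that enrich-and-segment workflow might look something like the toy Python sketch below. Every column name and value here (voter_id, gun_rights_score, the age bands) is invented for illustration; it is a guess at the shape of such a pipeline, not Cambridge Analytica’s actual code.

```python
# A toy sketch, not Cambridge Analytica's actual pipeline: merge a scraped profile
# dump with a bought-in enrichment file, then slice the result into segments for
# targeted messaging. Every column name here is hypothetical.
import pandas as pd

# Stand-ins for the scraped Facebook profiles and the commercially sourced data points.
profiles = pd.DataFrame({
    "voter_id": [1, 2, 3, 4],
    "age": [22, 37, 51, 64],
    "state": ["MI", "WI", "PA", "MI"],
})
enrichment = pd.DataFrame({
    "voter_id": [1, 2, 3, 4],
    "gun_rights_score": [0.8, 0.2, 0.6, 0.9],   # one of thousands of modelled attributes
    "hobby": ["hunting", "cycling", "fishing", "golf"],
})

# Superimpose the rich data set on the profile dump, keyed on a shared identifier.
voters = profiles.merge(enrichment, on="voter_id", how="left")

# Silo the lot: coarse age bands crossed with state give addressable segments.
voters["age_band"] = pd.cut(voters["age"], bins=[18, 30, 45, 60, 120],
                            labels=["18-29", "30-44", "45-59", "60+"])

for (age_band, state), group in voters.groupby(["age_band", "state"], observed=True):
    # Each segment would then get content tailored to its inferred tastes and sentiments.
    print(f"segment {age_band}/{state}: {len(group)} target(s)")
```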

The false reality

To understand how Cambridge Analytica leveraged a mix of tactics to deliver its political messaging to the audience on Facebook, one must understand how the social media giant delivers content to its users. The concept of filter bubbles, so evocatively explained by Eli Pariser in his book The Filter Bubble, is the basis of most popular social media platforms. The crux: proprietary algorithms on these platforms increasingly screen the world from their users, throwing up only the filtered content they “think” users want to see.
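
In code, the filter-bubble idea reduces to something like the toy sketch below: rank candidate posts by how closely they match what the platform already believes a user likes, so everything else quietly sinks down the feed. The interest scores and topics are invented for illustration and bear no relation to Facebook’s actual ranking system.

```python
# Minimal sketch of the filter-bubble mechanic: order posts by the user's inferred
# affinity for their topic, so content that contradicts existing interests sinks.
from typing import Dict, List, Tuple

def rank_feed(user_interests: Dict[str, float],
              candidate_posts: List[Tuple[str, str]]) -> List[str]:
    """Return post ids ordered by the user's inferred affinity for each post's topic."""
    scored = [(user_interests.get(topic, 0.0), post_id) for post_id, topic in candidate_posts]
    scored.sort(reverse=True)            # highest predicted affinity first
    return [post_id for _, post_id in scored]

# The more a topic is engaged with, the more heavily it is weighted next time round,
# which is exactly how the bubble tightens.
interests = {"immigration": 0.9, "gun_rights": 0.7, "climate": 0.1}
posts = [("p1", "climate"), ("p2", "immigration"), ("p3", "gun_rights")]
print(rank_feed(interests, posts))       # ['p2', 'p3', 'p1']
```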

Cambridge Analytica’s data scientists may have programmed profiling algorithms that gamed the logic Facebook used. The company then used seeded content to bombard target users on Facebook from every angle. In the words of a former Cambridge Analytica employee, the company “took fake news to the next level”. In other words, Cambridge Analytica created a filter bubble on steroids—constantly peppering its target users with news designed to change their perceptions about candidates and issues.

There are other dubious methods of pushing specific content to the top of users’ feeds, and Cambridge Analytica may have used them selectively. Bots or fake accounts can be leveraged to achieve this—the social media teams of many Indian political parties and independent social media outfits wield an abnormal number of such accounts. Cambridge Analytica co-founder Chris Wylie (who left the company in 2014) recently described the approach as ‘informational dominance’: “if you can capture every channel of information around a person and then inject content around them, you can change their perception of what’s actually happening.”

Muddying the waters

Believe it or not, it was Facebook that set the ball rolling in 2012, with a secret experiment to determine whether it could sway the sentiments, nay perceptions, of its users based on what they see. Without informing its users, Facebook tampered with its own news feed to prove that it could indeed alter their emotional state—and, by that logic, change their perceptions too.

So how implicated is Facebook in all this today? To start with, it is now known that Facebook learnt more than two years ago that Cambridge Analytica had improperly accessed profile data through the purported research app. More importantly, what is being termed illegal now—sharing profile data with third parties—was above board in 2012. Former President Barack Obama’s re-election team, Obama for America, used a Facebook app to boost his campaign. The app, with prior permission, asked Facebook users if it could access their friends’ data, and then boosted friend-to-friend messaging to those identified as fence-sitters. Facebook amended its terms in 2014 to stop data collected by such apps from being shared with third parties.

Cambridge Analytica denies using the data improperly sourced from Facebook, and says instead that it simply had more clarity on what the electorate looked like, based on crunching the vast amount of data it had collected from multiple sources. Facebook has, however, claimed that its policies were violated and that it would ban Cambridge Analytica from its platform.

Is this playing out here?

Closer home, speculation is building up about the data being collected by a number of polls and apps on Facebook. Could they be wheedling personally identifiable information out of users and sharing it with third parties, read political parties? That is, in fact, a difficult question to answer.

Outed And Blocked

In 2016, the three social media platforms—Twitter, Facebook and Instagram—cut off their feeds to Geofeedia. The Chicago-based company was known for its real-time analysis of those feeds to deliver relevant, context-based surveillance data to police forces. Even today, commercial feeds, without which social media analysis platforms cannot function, carry a clause that expressly prohibits the end-user analysis platform from sharing personally identifiable user data with law enforcement agencies and the government.

There are several social media tools that pollsters rely on: tools that pull out and club together influencers based on their sentiment and tonality around ‘peg’ topics, and that help build psychographic databases of potential voters who can then be reached out to.
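
A bare-bones illustration of what such a tool does is sketched below: score posts around a ‘peg’ topic for tone, then club the influencers behind them into positive, negative and neutral buckets. The word-list scoring is a toy stand-in; real tools use trained sentiment models and far richer psychographic attributes.

```python
# Toy sketch of influencer clubbing by tone around a single 'peg' topic.
from collections import defaultdict
from statistics import mean

POSITIVE = {"great", "support", "love", "win"}
NEGATIVE = {"corrupt", "fail", "hate", "scam"}

def tone(text: str) -> int:
    # Crude word-list scoring: +1 per positive word, -1 per negative word.
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def club_influencers(posts):
    """posts: iterable of (handle, text) pairs about one peg topic."""
    by_handle = defaultdict(list)
    for handle, text in posts:
        by_handle[handle].append(tone(text))
    buckets = defaultdict(list)
    for handle, scores in by_handle.items():
        avg = mean(scores)
        label = "positive" if avg > 0 else "negative" if avg < 0 else "neutral"
        buckets[label].append(handle)
    return dict(buckets)

sample = [("@a", "great win for the party"), ("@b", "corrupt and a scam"), ("@c", "rally today")]
print(club_influencers(sample))   # {'positive': ['@a'], 'negative': ['@b'], 'neutral': ['@c']}
```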

Cambridge Analytica was not the first, and may not be the last, to use analytics based on social media data to help steer voter perceptions. Voter-profiling outfits in India are a dime a dozen, and many of them claim, with varying degrees of certainty, to be able to predict the outcome of an election or sway voter sentiment. It is common knowledge that the two leading national parties rely on internal and external teams of social media analysts who have access to these tools and the data.

As long as a large voting audience stays glued to social media, the struggle to influence its perceptions of reality will continue.

(The writer developed and deployed a social media analysis tool that was put to use in Ghana’s presidential elections. He is currently consulting with PricewaterhouseCoopers in their cyber security practice.)