
Social Media Attempts to Fight the Problem

Social media platforms have made attempts to combat "fake news" through human intervention, algorithms, and even icons to help readers determine validity. What were the repercussions, and did they succeed?

 

In a 2017 National Public Radio broadcast of Weekend All Things Considered, host Ray Suarez interviews Kathy Flynn, a Mashable reporter covering Facebook's and Twitter's attempts to fight misinformation. Flynn notes that Twitter lets anyone join the platform, even users who send hateful messages, and that this hostility discourages others from joining.

 

She mentions that in 2016 Facebook began attaching red "disputed" flag icons to fake articles, but the company recently admitted that the approach wasn't working:

 

"...Red actually can enforce a message - as in, I'm reading something, and I'll remember it more 'cause there is a red label next to it," Flynn says. "That's clearly not what they would want for someone to hope if they're reading something that's fake news."

 

Now Facebook wants to incorporate related articles: underneath a fake story, the platform will surface similar posts that, ideally, give readers a sense of what kind of narrative the story is pushing.

 

Twitter is taking a very different approach, according to Flynn. The platform isn't regulating content as heavily; instead, it is hoping users who see a fake news story will retweet it to point out that it's wrong.

 

Finally, she comments on how the two platforms' approaches differ:

 

"Facebook isn't necessarily taking down particular users or particular pieces of content...if there is a fake news story, it can still be shared. But with that, if you do not buy their (ph) standards either on or off the platform, you can be out. And for a lot of people, that's what's really scary about Twitter right now is they don't really know whether they're out because these processes are slowly rolling out. And even Twitter said perhaps they'll make mistakes."

 

And mistakes they have definitely made. After implementing the "disputed" flags, Facebook said "overall that false news has decreased on Facebook," but did not provide any proof, according to Violet Blue, writing for Engadget.

 

Then in August 2017, Facebook said it would ban "pages that post hoax stories from being allowed to advertise on the social network." Blue notes this was the month that Facebook, under congressional questioning, admitted that Russian propaganda operations had used the platform's ad service to spread misinformation during the 2016 presidential election.

 

Facebook's most recent effort was announced on 6 April 2018. According to The Guardian, new controls will be implemented to ensure transparency from advertisers and users on the site. The updates are designed to prevent misinformation, especially around elections, and will identify political ads as such: each will carry a "Political Ad" label along with information about who paid for the advertisement, say Facebook executives Rob Goldman and Alex Himel.

 

Facebook said: “We are working with third parties to develop a list of key issues, which we will refine over time. To get authorized by Facebook, advertisers will need to confirm their identity and location. Advertisers will be prohibited from running political ads – electoral or issue-based – until they are authorized.”
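To make the gist of that authorization flow concrete, here is a minimal sketch in Python. It is purely illustrative and not based on Facebook's actual implementation, which is not public; all class and field names are hypothetical. The idea it captures is the one described above: a political ad may run only once the advertiser's identity and location are confirmed and a paid-for-by disclosure is attached.

```python
# Illustrative sketch of an authorization gate for political ads.
# All names here are hypothetical; this is not Facebook's real system.

from dataclasses import dataclass

@dataclass
class Advertiser:
    name: str
    identity_confirmed: bool = False
    location_confirmed: bool = False

@dataclass
class Ad:
    advertiser: Advertiser
    text: str
    is_political: bool          # electoral or issue-based
    paid_for_by: str = ""       # disclosure shown with the "Political Ad" label

def may_run(ad: Ad) -> bool:
    """Political ads are blocked until the advertiser is authorized and disclosed."""
    if not ad.is_political:
        return True
    authorized = ad.advertiser.identity_confirmed and ad.advertiser.location_confirmed
    return authorized and bool(ad.paid_for_by)

# Example: an unauthorized advertiser attempting a political ad is blocked.
acme = Advertiser(name="Example PAC")
ad = Ad(advertiser=acme, text="Vote for Issue 7", is_political=True)
assert may_run(ad) is False
```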

 

CEO Mark Zuckerberg said the company is taking big steps: "With important elections coming up in the US, Mexico, Brazil, India, Pakistan and more countries in the next year, one of my top priorities for 2018 is making sure we support positive discourse and prevent interference in these elections."

 

This new attempt will hopefully ease the minds of Facebook users, but only time will tell.

 

Clearly, there is no easy fix, which is why IEEE believes it can help address the "fake news" problem. There hasn't been any criticism of the new standard yet, but obvious challenges could arise from such a sensitive subject.

 

Algorithms themselves can often be biased, and the IEEE working group recognizes that. Although the standard is still in the early stages of development, IEEE's membership publication The Institute asked its creator, Joshua Hyman, how it will avoid bias.

 

He says the working group that decides which words and what kinds of articles the algorithm detects will constantly change: new people will be added, and those who seem biased will be removed. Hyman stresses that the standard isn't designed to block or censor anyone. The team is completely open to suggestions, and anyone who wants to join can.

 

In "Search Algorithms: Neutral or Biased?" Paul Cleverley analyzes how truly objective and unbiased even just search algorithms can be.

 

He says search algorithms undergo constant evaluation and tweaking by people in the background who make judgments about how good the results are. The same will be true of IEEE's algorithm: there will be people behind the scenes deciding which words and phrases constitute misinformation.
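As a rough illustration of where that human judgment enters, here is a hedged Python sketch. It is not the standard's actual method, which is still being developed; the phrases, weights, and function names below are invented for the example. The point is simply that whoever curates the list shapes what gets flagged, which is exactly where bias can creep in.

```python
# Illustrative only: a human-curated phrase list drives the flagging logic.
# Reviewers revise the list over time; the algorithm just scores text against it.

from typing import Dict, List

# Hypothetical, reviewer-curated phrases and weights (not from any real standard).
FLAGGED_PHRASES: Dict[str, float] = {
    "miracle cure": 0.8,
    "doctors hate this": 0.7,
    "what the media won't tell you": 0.6,
}

def misinformation_score(article_text: str) -> float:
    """Sum the weights of curated phrases found in the article, capped at 1.0."""
    text = article_text.lower()
    score = sum(weight for phrase, weight in FLAGGED_PHRASES.items() if phrase in text)
    return min(score, 1.0)

def review_queue(articles: List[str], threshold: float = 0.5) -> List[str]:
    """Route high-scoring articles to human reviewers rather than blocking them."""
    return [a for a in articles if misinformation_score(a) >= threshold]
```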

 

"Algorithms no longer just help us find what we know, but can also surface patterns to suggest what we don’t know. These data can challenge existing viewpoints and current orthodoxy," Cleverley says.

 

Standard 7011 could end up favoring certain opinions or ideologies, depending on who is involved in the process, and it could deepen the political divide if it is built by biased people. However, if IEEE does what it says and rotates the membership of the working group every so often, the algorithm could come as close to impartial as possible.

 

Although one's ideology isn't the be-all and end-all, it is important. In my next section, I'll analyze how both sides of the U.S. political spectrum have viewed the media over the past two years.
