Earlier this year, two of the world’s largest consumer goods companies — Procter & Gamble and Unilever — did something unusual. They threatened to pull their advertising from online platforms hosting toxic content, including Google, Facebook and YouTube. The statement put the spotlight firmly on an issue that had been simmering for a long time.
While social media bridges the gap between brands and consumers, it also at times leaves a trail of backlash and trolls. In some cases, brand messages could be sitting next to content that is discriminatory or pornographic or has terror links, putting companies in a spot. The alarming rate at which this is growing has prompted social media platforms to take stricter action.
“It is imperative for online platforms to make their users feel safe and ensure there are enough reasons for them to come back. Hate speech, bullying and harassment are rampant and have to be tackled at various levels. This is not a one-time activity. Intelligence needs to evolve,” says Harsh Shah, vice president, client services, Dentsu Webchutney.
Some experts argue that the ball cannot be put in social media’s court alone. Brands have a responsibility too, they say, to have effective systems and processes in place to counter instances of hate speech against them.
“While the intent is noble, monitoring the long tail of publishers and content providers on platforms such as Google, Facebook and YouTube is difficult,” says the head of a top digital agency.
“Google, for instance, manages display ads for these publishers and content providers through its AdSense network on a revenue-share basis, and limiting content is therefore not a feasible exercise for it,” he says.
Recently, ride-hailing app Ola found itself in the midst of a storm when a customer tweeted about cancelling a cab ride because of the driver’s faith. While Twitter refused to take down the post, saying it did not violate its community guidelines, Ola responded to the tweet saying it did not support discrimination, nipping the issue in the bud.
Globally too, brands have reacted quickly to hateful and negative comments, working to preserve a positive atmosphere around themselves.
MAC Cosmetics, for example, had to delete numerous racist comments some time ago after posting a picture of a black woman’s lips on its Instagram account. The derogatory remarks also saw a follow-up message from the brand saying it was for “all ages, races, sexes”.
Sportswear brand Adidas was also hit by a barrage of comments recently after an Instagram post of two pairs of feet (of a female couple) wearing the same set of shoes evoked sharp reactions. Adidas chose not to delete the comments, instead responding to each message with a kiss emoji.
No hit and run
While hate speech and trolls may not hurt a brand directly, it is very important to consumers, says Shah, that brands take a stand. Millennials and digital natives in particular are cognizant of this and expect brands to be socially conscious and responsible. A hit-and-run approach, says Shah, cannot be permitted here at all.
Some experts point to the need for brands to be aware of conversations beyond their social media accounts.
“Brands have to set in place systems to monitor such social media buzz and chatter around them as well as their competitors. If a brand rant begins, it may choose to ignore it initially. However, at some stage, it may need to take action,” says Ambi Parameswaran, founder, brand-building.com.
Brands such as State Bank of India, for instance, have borne the full brunt of Twitter users’ tendency to opine uninhibitedly. The public sector lender was flooded with a volley of complaints about poor service soon after it went live on Twitter, forcing it to go off the platform owing to its inexperience in handling the issue. The lender returned to the platform only after taking help from experts on the matter.
Fake accounts and bots are another menace confronting advertisers. Shah of Webchutney says fake accounts affect the quality of engagement between brands and consumers, prompting companies to curtail digital spends. This has far-reaching implications for the financial health of online platforms, since most depend heavily on advertising money.
“In recent years, fake accounts have been a big issue, prompting Twitter to purge millions of such accounts. Facebook too has been actively terminating bots and accounts to weed out the problem,” says Shah.
In the January-March period of calendar year 2018, for instance, Facebook disabled as many as 583 million fake accounts, most of them within minutes of registration. While the social media giant concedes its technology does not work well for detecting hate speech, it still managed to remove as many as 2.5 million pieces of such content in Q1 2018 — 38 per cent of which was flagged by its own technology. Google too is investing in artificial intelligence to help it identify users promoting vitriol and hate. For brands and online platforms, it is one step at a time.