Vaccinating Your Brand

June 29, 2017

How to Protect Your Organization and Reputation from
Social Media Attacks Stemming from Employees and
Other Individuals or Constituencies

Perception by customers, business partners, vendors and the public at large is vital to any organization’s success. Studies have shown that businesses with a clear mission, a record of compliance, and a diverse workforce generally perform better.1 Yet the task of defining and defending an organization’s brand has become increasingly difficult as a new frontier of threats evolves from social media: attacks or adverse postings from customers, employees or the public are potential brand destroyers. As social media continues to evolve and expand, organizations must find new ways to protect their reputations. In this new world, there is no single solution for every threat. Nevertheless, organizations can develop tools and strategies to respond appropriately to social media issues when they do occur.


Social media threats take a number of forms, including unauthorized social media accounts, content threats, hijacked accounts, and anti-brand causes (e.g., campaigns by activist organizations, disgruntled employees, and unsatisfied customers).2 Whether targeted, unintentional or incidental, such attacks can damage an organization’s brand.

Unauthorized Accounts

Unauthorized accounts are easy to create and are the most common social media threat an organization faces.3 They detract from an organization’s credibility, confuse customers, and damage the organization’s reputation. A study examining 100 organizations revealed that more than 1,000 unauthorized accounts were created each week.4 A separate study found that 40% of Facebook accounts claiming to represent a Fortune 100 brand were unauthorized, as were 20% of similar Twitter accounts.5 Given the quantity of unauthorized accounts, and the speed with which they appear and disappear, it is extremely difficult, if not impossible, to monitor them all.

Unauthorized accounts, or impersonation accounts, are created by attackers who “footprint” an organization.6 Attackers may identify key employees within an organization, familiarize themselves with the organization’s tone or voice, replicate the organization’s logos and service marks, and establish some semblance of legitimacy by including the organization’s mentions and news items in their postings.7

1 Vivian Hunt, Dennis Layton, & Sara Prince, “Diversity Matters,” McKinsey & Company, Feb. 2, 2015.
2 Kimberlee Morrison, 40% of Facebook Accounts For Fortune 100 Companies Are Unauthorized, Adweek (Dec. 10, 2014), available at //
3 McAfee Threat Center, “How Cybercriminals Target Social Media Accounts,” McAfee.
4 Mike Raggo, Anatomy of a Social Media Attack, DARK Reading, (Aug. 23, 2016), available at
5 Nexgate, “The State of Social Media Infrastructure 2014, Part 2: Security Threats to the Social Infrastructure of the Fortune 100,” proofpoint, Dec. 2014, available at //
6 Raggo, supra note 4.
7 Id.

Content Threats

Content threats occur when attackers hide malicious links in the comments section of an organization’s social media postings.8 Social spam has grown exponentially (a 658% increase in social media spam on branded accounts between July 2013 and June 2014),9 and 99% of malicious links posted to social media accounts led to malware and/or phishing attacks.10 Attackers post content threats for various reasons: to damage a brand, manipulate the stock market, or serve some other self-interested purpose.

Hijacked Accounts

Hijacking an account is the most difficult attack to accomplish, but it is often the most impactful.11 With access to an organization’s social media accounts, hijackers can infiltrate the company’s internal network (thereby exposing it to data breaches) and post inflammatory comments.12 A study revealed that, on average, 2.29 accounts per Fortune 100 organization exhibited hijack indicators.13

Anti-Brand Causes

Social media can be particularly dangerous when it comes to anti-brand causes, such as threats by activist organizations, because of the absence of truth-filters and the immense potential reach of each message.14 Similarly, postings from employees and customers can be devastatingly disruptive. Recently, for example, an activewear company was forced to address a situation that stemmed from hateful comments by an online blogger. Upset customers destroyed apparel they owned and shared images online of items burning, items in the toilet, and items in the garbage. The company was depicted in a negative light through no fault of its own, and it quickly acted to protect its reputation.


Organizations should dedicate adequate resources and personnel to monitoring social media accounts, take appropriate measures to monitor their “comments” sections, and have social media guidelines and/or a criticism-response plan in place.15 Preparing a defense does not ensure attacks will never occur, but interacting successfully with the public is vital to preventing them.

8 Morrison, supra note 2.
9 Id.
10 Nexgate, supra note 5.
11 Id.
12 ZeroFOX Team, Top 9 Social Media Threats of 2015, ZeroFOX (Jan. 20, 2015), available at //
13 Raggo, supra note 4.
14 As of December 31, 2016, Facebook had 1.86 billion monthly active users, with 1.5 billion of those accessing the site on their mobile devices. “Company Info,” Facebook Newsroom. As of April 2017, Twitter had 313 million monthly active users, with 1 billion unique visits each month to sites with embedded Tweets. “Company Fact,” Twitter.
15 Floyd Woodrow, How prepared is your company for a cyber-attack?, Guardian News (June 18, 2013), available at //

Dedicate Appropriate Resources, Including Verified Accounts

Organizations should use “verified” accounts.16 The presence of an active verified account may lead users to ignore unverified accounts, and an organization can more easily distance itself from impersonators. Some anti-brand incidents, for example, can be minimized by an effective customer relations department responsible for monitoring verified accounts, and by training employees who may otherwise become the subject of “viral” postings to take preventive measures.

“Comments” Section Policies

Social media platforms typically have their own “comments” policies and guidelines, but they may leave it to an organization to report harmful or offensive comments.17 Instead of waiting for the platform to act, organizations can invest in moderators to keep “comments” sections free of harmful or abusive content.18

Social Media NLRA Considerations

An organization must be mindful that social media policies attempting to control an employee’s ability to use social media may come under scrutiny from the National Labor Relations Board (NLRB). Recent decisions indicate that provisions that appear facially harmless can often be viewed as violating federal labor law (e.g., policies that are seen as chilling an employee’s ability to engage in protected concerted activity under the National Labor Relations Act (NLRA)).19 Similarly, basing employment decisions on an employee’s social media activity makes an organization vulnerable to suit and/or an NLRB unfair labor practice charge.20 However, not all concerted activities are protected by the NLRA. Statements made with reckless disregard for the truth, or that are maliciously untrue, are not protected.21 Similarly, a policy that prohibits discriminatory remarks, harassment, threats of violence, or similarly inappropriate or unlawful conduct is lawful.22

An Example of a Criticism-Response Plan

A concrete criticism-response plan helps keep roles and responsibilities from becoming confused during an incident, and minor criticisms from escalating into major issues. Guidelines help organizations avoid those issues, but they are not a substitute for discretion and business judgment; organizations must mold each response to each incident.

16 Raggo, supra note 4.
17 Paul Chaney, Managing Negative Facebook Page Comments, Practical Ecommerce (May 16, 2011), available at //
18 Aaron Lee, How to handle negative comments on social media, Ask Aaron Lee (Feb. 16, 2012), available at //
19 Howard Bloom, Labor Board Prosecutor’s Social Media Report Concludes Common Policy Provisions May Be Unlawful, Jackson Lewis P.C. (Mar. 2, 2012), available at //
20 Howard Bloom & Philip Rosen, Firings for Facebook Comments Unlawful, NLRB Rules, Jackson Lewis P.C. (Sept. 8, 2014), available at //
21 Richard Greenberg & Philip Rosen, Third Guidance on Social Media Policy Issues from NLRB Acting General Counsel Includes Sample Policy, Jackson Lewis P.C. (May 31, 2012), available at //
22 Id.

A successful criticism-response plan reacts to credible threats with urgency and expediency. Experts opine that the organizations that avoid threats most effectively typically engage with attackers quickly and directly.23 Those organizations’ responses are prompt, knowledgeable, and indicate a level of ownership and/or accountability.24 The responses that are typically best received are transparent, include sources, and carry a tone reflective of the organization’s mission.25

By way of example, one government agency created a “Web Posting Response Assessment” that evaluates comments, and offers moderators specific pre-generated responses.26 Other organizations have similar tiered approaches for identifying and avoiding potential attacks.27 The referenced Assessment Plan includes three phases of analysis: a discovery phase, an evaluative phase, and a responsive phase.28

The Discovery Phase

Organizations should consider actively searching social media to identify relevant messages.29 Some corporations preempt the discovery phase by building internal feedback mechanisms into their online presence.30 Those organizations aim to channel negative feedback internally and build goodwill with the online community. This approach relies on engaging potential attackers before their frustration manifests itself in a public comment, and on attracting other social media users to defend the organization, correct attackers, and direct attackers to the internal complaint system. One consumer electronics company created an online network where its customers shared and discussed developments in technology and technology-related products. The various community-driven sites allowed customers to share, vote on and discuss ideas that would help the company improve. This model allows an organization to monitor criticism and limits its exposure to social media attacks.
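The discovery step described above, monitoring for relevant brand mentions, can be sketched as a simple keyword filter. This is only an illustration of the idea, not any vendor's tooling: the brand terms, negative-term list, and sample posts below are all hypothetical.

```python
# Minimal sketch of a discovery-phase monitor. Assumes posts have
# already been collected as plain-text strings; all keyword lists and
# sample posts are hypothetical.

BRAND_TERMS = {"acmecorp"}                                # hypothetical brand
NEGATIVE_TERMS = {"scam", "fraud", "boycott", "worst", "broken"}

def is_relevant(post: str) -> bool:
    """True if the post mentions the brand at all."""
    text = post.lower()
    return any(term in text for term in BRAND_TERMS)

def needs_review(post: str) -> bool:
    """True if a relevant post also contains negative language."""
    text = post.lower()
    return is_relevant(post) and any(term in text for term in NEGATIVE_TERMS)

posts = [
    "AcmeCorp support fixed my issue in an hour, thanks!",
    "Total scam. AcmeCorp never shipped my order. #boycott",
    "Lovely weather today.",
]

# Only relevant, negative posts are escalated to a human moderator.
flagged = [p for p in posts if needs_review(p)]
print(flagged)
```

A production monitor would pull posts from platform APIs and apply richer relevance scoring, but the escalation logic, a brand mention plus a negative signal, takes the same shape.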

23 Matthew Gain, The 5 best ways to respond to a social media attack, Ragan (Apr. 17, 2012), available at //
24 Stephanie Marcus, How to Respond When Social Media Attacks Your Brand, American Express OPEN Forum (Apr. 24, 2010), available at //
25 360i, 3 Tips for Developing Your Brand’s Social Tone of Voice, 360i (Sept. 25, 2012), available at //
26 Noah Shachtman, Air Force Releases ‘Counter-Blog’ Marching Orders, WIRED (Jan. 8, 2009), available at //
27 Ellyn Angelotti, How to handle personal attacks on social media, Poynter (Aug. 20, 2013), available at //
28 Id.
29 Josh Catone, HOW TO: Deal With Negative Feedback in Social Media, Mashable (Feb. 21, 2010), available at //
30 Prashant Suryakumar, Market Data Relevant: The New Metrics for Social Marketing, Mashable (Jan. 11, 2011), available at //

The Evaluative Phase

After the discovery phase, the government agency’s Assessment Plan directs the moderator to the evaluative phase, which provides the moderator with a group of fixed categories into which an attacker’s message must fit. Some corporations refer to this phase as a “comment traffic system.”31 The evaluative phase is mechanical, but organizations must be mindful of being too rigid in any response. Flexibility is a tremendous tool when evaluating threats and deciding how to minimize the likelihood of an attack.

In the evaluative phase, all messages are sorted into categories. The agency’s Assessment Plan uses the following groups: messages that are factual, well-cited, and disagree with your organization’s policy; messages that are factually incorrect; messages that bash and degrade your organization; messages that post rants, hateful comments, or inappropriate content; and messages from users who have had a negative experience.32 Categorizing the message is critical to framing a response.
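As a rough illustration of how the categorization described above might be mechanized, the sketch below sorts messages into the Assessment Plan's five groups with simple keyword rules. The keyword markers are invented for illustration; a real moderator (or a trained classifier) would apply far richer judgment.

```python
# Rule-based sketch of the evaluative phase. The five labels mirror the
# Assessment Plan's categories; the keyword markers are hypothetical.

CATEGORIES = [
    ("rant_or_hate", {"hate", "!!!"}),
    ("negative_experience", {"my order", "refund", "customer service"}),
    ("degrading", {"worst company", "garbage brand"}),
    ("factually_incorrect", {"that is false", "never happened"}),
]

def categorize(message: str) -> str:
    """Assign a message to the first category whose markers it contains."""
    text = message.lower()
    for label, markers in CATEGORIES:
        if any(marker in text for marker in markers):
            return label
    # Default bucket: a factual, well-cited disagreement with policy.
    return "informed_disagreement"

print(categorize("Worst company ever, total garbage brand."))
```

The fixed category list keeps the phase mechanical, as the Assessment Plan intends, while the moderator retains discretion over the eventual response.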


The Responsive Phase

For generations, the typical response to many public relations issues, and to threatened or actual litigation, was “no comment.” That may no longer be the best course. Successfully responding to a social media attack largely mirrors avoiding the initial attack: responding quickly with knowledge of the issue, taking ownership of the attacker’s criticism, and providing meaningful responses are all vital considerations.33 As a general principle, quick and personal responses are effective tools for answering a social media attack. The source is typically engaged via the same online medium in which the attack was presented. Responses should be personalized, but drafted with the awareness that the attacker may be seeking ammunition to incite his or her cause. In the case of an unauthorized account or a content threat, it is important to disassociate the organization from the account or content, and to take visible measures to report the attempted attack. If an organization is subjected to an intentional or unintentional anti-brand attack (e.g., from an activist organization, a disgruntled employee, or an unsatisfied customer), it should consider a prompt and transparent response. As in most situations, statements that are easily supported by verifiable facts and documents are often the best responses. Additionally, users who have had a negative experience should be offered reasonable solutions, or a restatement of the organization’s position on the issue.
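The final, responsive step, pairing each category with a pre-generated response in the spirit of the agency's Assessment Plan, could look like the following sketch. The category names and template wording are hypothetical; the None entries encode the "disassociate, report, and do not engage" guidance above.

```python
# Sketch of the responsive phase: each evaluative category maps to a
# pre-generated response template, or to None where the guidance above
# says not to engage directly. All names and wording are hypothetical.

RESPONSES = {
    "informed_disagreement": (
        "Thanks for the thoughtful feedback. Here is the background on "
        "our position, with sources."
    ),
    "factually_incorrect": (
        "We would like to correct the record; our published documents "
        "address this point directly."
    ),
    "negative_experience": (
        "We are sorry you had this experience. Please contact our "
        "support team so we can make it right."
    ),
    "degrading": None,     # disassociate and report; no direct engagement
    "rant_or_hate": None,  # report to the platform; do not engage
}

def respond(category):
    """Return the pre-generated response, or None when policy is not to engage."""
    return RESPONSES.get(category)

print(respond("negative_experience"))
```

Pre-generating the templates lets moderators reply quickly and in a consistent organizational voice, while leaving discretion over whether a given message warrants any reply at all.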

Unmasking the Attacker: Litigation as an Option

First Amendment rights are highly protected, even when the speaker engages in anonymous speech on the Internet.34 Those protections extend to “attackers.” Courts will almost always favor a person’s right to speak anonymously over a company’s interest in limiting speech. Nevertheless, there are circumstances in which an organization may want to consider taking legal action.

31 Gain, supra note 23.
32 Angelotti, supra note 27.
33 Angelotti, supra note 27.
34 See Buckley v. American Constitutional Law Found., 525 U.S. 182, 197-99 (1999); McIntyre v. Ohio Elections Comm’n, 514 U.S. 334 (1995); Reno v. American Civil Liberties Union, 521 U.S. 844, 849-50 (1997).

An organization considering a lawsuit to quash an attack, or to recover damages related to an attack, is at an immediate disadvantage. Challenges include: (1) demonstrating that the attacker engaged in unlawful behavior (criminal or civil); (2) proving that “unmasking” the attacker’s identity is the least restrictive means of investigating the offense; (3) establishing that the organization’s demand for information is not motivated by a desire to suppress free speech; and (4) showing that the organization’s private interests outweigh the attacker’s First Amendment rights.

In 2001, a New Jersey court was tasked with deciding a lawsuit brought by an organization against anonymous users who had posted criticisms on an online message board.35 The company claimed the comments were unlawful.36 Before rendering its decision, the court elaborated on a number of issues organizations should consider before pursuing a lawsuit.37 The court recommended that organizations attempt to engage attackers before bringing legal action, including by providing attackers with notice that the organization will pursue a subpoena or other legal action if the attacks do not stop or the harm is not otherwise remedied.38 Providing notice allows the attacker to respond to the organization’s concerns. From the court’s perspective, notice affords attackers an opportunity to stop their actions before the matter develops into a lawsuit, or to articulate a legitimate explanation for why their speech is protected.

Recently, the Trump Administration employed a similar approach when it attempted to unmask the person(s) behind a Twitter account that had been critical of the Administration’s plans and policies.39 The Administration served Twitter with a subpoena instructing it to reveal the person(s) behind the account.40 In response, Twitter filed a lawsuit seeking to have the subpoena declared unlawful.41 Before the matter was decided by the court, the Administration withdrew its subpoena, and Twitter withdrew its complaint.

Social media attacks can lend themselves to multiple causes of action, including breach of contract, misappropriation of trade secrets, interference with prospective business advantage, and defamation. However, any organization bringing those claims faces obstacles in prosecuting the matter and in obtaining and executing a judgment. Prosecuting the claim will be difficult because identifying the appropriate person(s) is problematic: accounts are created and deleted with tremendous frequency, and attackers can create social media accounts without providing any identifying information. Even if a lawsuit forces change, the attacker can delete the account and create a new one with relative ease, thereby restarting the legal process. In addition to the procedural difficulties, an organization must prove that its interests outweigh First Amendment interests, and (depending on the cause of action) that it suffered damages attributable to the attack. The standard is highly fact-specific and is revisited on a case-by-case basis. In sum, litigation may be an option, but stopping an attack or recovering damages by way of a lawsuit will likely be a difficult and expensive proposition.

35 See Dendrite Intern., Inc. v. Doe No. 3, 342 N.J. Super. 134 (App. Div. 2001).
36 Id. at 146.
37 Id. at 141-42.
38 Id. at 141.
39 Susan Seager, Twitter Will Probably Win Lawsuit Against Trump Administration, Experts Say, The Wrap (Apr. 6, 2017), available at //
40 Tony Romm, Twitter Is Suing The Government for Trying to Unmask an Anti-Trump Account, recode (Apr. 6, 2017), available at //
41 Complaint, Twitter v. U.S. Department of Homeland Security et al., No. 3:17-cv-01916 (N.D. Cal. Apr. 6, 2017).


Organizations, and messages about organizations, can “go viral,” leading to great name recognition and, in many cases, explosive growth. Those same qualities make social media dangerous for organizations. As with all powerful tools, social media must be actively managed. It is imperative that organizations consider the threat of an intentional social media attack, as well as the possibility that information on social media will negatively impact their reputations and/or operations. That consideration process leads to the development of preventive and responsive measures. It is equally important that each situation be evaluated on its own terms; the situation should dictate the approach, not the other way around.


