Ed. note: Stock market manipulation schemes are coming to life on social media platforms and in the inboxes of unsuspecting investors. This two-part series discusses the problem and the tension between true and tainted information. In part one, we discussed the growing trend of disinformation and the SEC enforcement actions that have enabled defrauded companies to gain a foothold in fighting back against bad actors who engage in securities fraud at their stockholders’ expense. In part two, we will delve into what companies can do to get out in front of disinformation campaigns through attribution and other measures.
Pseudonymous stock manipulators have taken aim at public companies on social media outlets, spreading disinformation designed to drive stock prices up or down, depending on the position they’ve taken. “Short and distort” schemes feed on negative information about a company and its performance. “Pump and dump” schemes, which are less common, entail falsely touting a company’s worth. Although both forms of stock manipulation are punishable under the Securities and Exchange Commission’s (SEC) antifraud provisions and a litany of state securities laws, consumer protection statutes, and common law, there is a growing sense that companies cannot keep up with the pace of disinformation.
Whether companies can effectively fight for their reputations is largely a function of the facts and circumstances of the online attacks and the strategies taken proactively and reactively to counter the disinformation.
The root causes of this malfeasance are hard to ignore: first, the ease with which bad actors can spin up seemingly credible social media accounts under false personas; and second, the simple math that generates profits when manipulators successfully influence other investors to undervalue or overvalue a stock in which the manipulators have taken a diametrically opposed position. Layered onto this paradigm is the speed at which false rumors spread online and the difficulty of dispelling disinformation given the drumbeat of fresher news and the breadth of seemingly credible sources.
Parsing Free Speech And False Speech
Historically, companies have been disincentivized from taking legal action to counter short-and-distort activity, as United States courts have sided with short sellers’ contentions that some articles and social media content are protected speech. However, recent SEC actions such as the ongoing SEC v. Lemelson Capital Management litigation are potentially paving the way for companies to fight back.
The hope of many, especially company counsel, is that successful enforcement actions against those who propagate false information about companies would serve not only as a significant deterrent to would-be market manipulators, but also as a signal flare to companies and investors that joint efforts at attribution and civil litigation to recover damages are within sight. In Lemelson, the SEC charged that the investment management company and an affiliated hedge fund made false statements to shake investor confidence in Ligand Pharmaceuticals Inc., lowering its stock price and increasing the value of Lemelson’s short position. The alleged vehicles were written reports, interviews, and social media posts claiming that the company was near bankruptcy and that one of its products was obsolete. Despite the SEC action, Lemelson was undeterred and continued to publish negative content regarding Ligand, even calling upon the Department of Justice to intervene and investigate Ligand. The outcome of that litigation is still unclear.
What’s clear is that the line between defamation and constitutionally protected opinion continues to be drawn by the facts. Disclosing a pecuniary interest in the subject matter and providing documents and facts undergirding an opinion greatly increase the chances of prevailing on a motion to dismiss a defamation claim and, ostensibly, an SEC action. Courts have found that reports, tweets, and conference presentations containing sufficient disclaimers underscore that statements made about companies like Eros and Silvercorp, which sued detractors alleging defamation, were opinions based on disclosed or substantiated facts and therefore were not misleading and did not constitute defamation.
A Path Forward
With those considerations in mind, companies have successfully deterred or strategically mitigated the fallout from disinformation campaigns outside of the courtroom. The successful companies seem to be those adept at attributing online activity to specific actors, stripping away their anonymity.
Internal resources work hand in hand with cyber threat experts like Nisos (disclosure: I work at Nisos) to help companies proactively address this behavior when negative posts spin up. By observing the activity of known detractors, including the resharing of posts and collaboration with others to disseminate bursts of disinformation, companies can effectively deflate attackers who typically operate under a cloak of pseudonymity or disguised identity.
Pseudonymous identities are attributable through a variety of advanced forensic measures, including, but not limited to, stylometric clues left behind in their writing and in their connections to others within a cluster of cohorts who quickly spin up and reshare similar viewpoints. According to Joshua Mitts, an Associate Professor of Law at Columbia University who researches short-and-distort behavior and its effect on trading markets, pseudonymous authors who are perceived as trustworthy are largely new authors who have encountered few reversals in the past or have no prior history of commenting within forums. Unfortunately, new authors are not inherently trustworthy; rather, their modus operandi is often to switch to new identities after planting disinformation in the marketplace, without facing any accountability.
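To give a concrete flavor of what a stylometric clue looks like, the sketch below is a deliberately minimal illustration (not any vendor’s actual tooling): it profiles each text by its character trigram frequencies and compares accounts by cosine similarity, so posts written by the same hand under different names tend to score closer together than unrelated writing. The sample texts are invented.

```python
from collections import Counter
from math import sqrt

def char_ngrams(text, n=3):
    """Frequency profile of character n-grams, a simple stylometric feature."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(a, b):
    """Cosine similarity between two n-gram frequency profiles (0.0 to 1.0)."""
    dot = sum(a[g] * b[g] for g in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical posts: a known detractor, a new pseudonymous account, and a control.
known = "The company is on the brink of bankruptcy; its flagship drug is obsolete."
suspect = "This company is near bankruptcy and its flagship product is obsolete."
unrelated = "Quarterly earnings beat estimates; management raised full-year guidance."

print(cosine_similarity(char_ngrams(known), char_ngrams(suspect)))    # higher score
print(cosine_similarity(char_ngrams(known), char_ngrams(unrelated)))  # lower score
```

In practice, analysts combine many more signals (vocabulary, punctuation habits, posting cadence, shared reshare clusters) than a single trigram profile, but the principle of measuring textual fingerprints is the same.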
In order to proactively address the harm that truly deceptive campaigns can wreak on victim companies, Justin Zeefe, CEO of Nisos, advises that companies engage in an active defense that is both comprehensive and timely. “Companies that proactively monitor media activity, identify manipulative patterns in trading data, and bring in the right resources to monitor markets will stay ahead of the game and be prepared to quickly respond to market disinformation. Speed and preparation are key indicators of success here.”
Companies that perform digital media monitoring use platforms like Meltwater or Lexis Nexis, which can provide negative news updates that can help alert companies to disruptions before they erupt, allowing them to determine whether to respond, ignore, investigate, or all of the above.
Artificial intelligence (AI) and machine learning (ML) are also on track to help. Nasdaq’s 2019 Global Compliance Survey found that investment in AI/ML technology designed to reveal trends rose significantly, with 42% of respondents reporting that they had recently invested in it and 65% planning to invest in it over the next 12 to 24 months. Nasdaq also announced that it is using proprietary, patent-pending AI and ML technology to detect irregular and “potentially malicious” patterns in trading activity, allowing computers to learn from complex patterns within defined datasets. With its U.S. market surveillance team reviewing more than 750,000 alerts annually, identifying unusual price movements, trading errors, and potential manipulation, it makes sense for Nasdaq, and other stock exchanges, to send in the machines.
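The core idea behind surveillance of this kind can be illustrated in a deliberately simplified form; exchange-grade systems are far more sophisticated. The sketch below flags trading days whose return deviates from the average by more than a chosen number of standard deviations, the kind of price dislocation a distort campaign might produce. The price series is invented for the example.

```python
from statistics import mean, stdev

def flag_anomalies(prices, z_threshold=3.0):
    """Return the indices of days whose daily return deviates from the
    mean return by more than z_threshold standard deviations."""
    returns = [(b - a) / a for a, b in zip(prices, prices[1:])]
    mu, sigma = mean(returns), stdev(returns)
    return [i + 1 for i, r in enumerate(returns)
            if sigma and abs(r - mu) / sigma > z_threshold]

# A hypothetical price series: small daily noise, then one sharp drop on day 12.
prices = [100.0, 100.3, 99.8, 100.1, 100.4, 99.9, 100.2, 100.0, 100.3, 99.7,
          100.1, 100.2, 85.0, 85.2, 84.9, 85.1, 85.0, 85.3, 84.8]

print(flag_anomalies(prices))  # → [12]
```

A real surveillance pipeline would correlate such price flags with order-book activity and with spikes in social media chatter before escalating an alert.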
Keeping ahead of market-shifting disinformation is a full-time, full-contact activity, but it’s very much a standard operating protocol for public companies given the vulnerabilities of the ecosystem in which their stock prices fluctuate.
Jennifer DeTrani is General Counsel and EVP of Nisos, a technology-enabled cybersecurity firm. She co-founded a secure messaging platform, Wickr, where she served as General Counsel for five years. You can connect with Jennifer on Wickr (dtrain), LinkedIn or by email at email@example.com.