Last November, a ransomware collective did something unprecedented. A week after breaching a fintech company, it wrote to the U.S. government.
The criminals reported their own crime. Not to turn themselves in, though, or to give up anything they’d stolen. Quite the contrary: they wanted to wield the power of U.S. regulatory law against their victim.
The stunt reflected a broader, sweeping change to how organizations across America must now handle their data breaches. And if fear was the goal, it certainly worked.
How Things Used to Be
There was a time, not so long ago, when you didn’t really have to tell anyone about your massive data breaches.
Back in the summer of 2016, for example, Verizon announced it was acquiring Yahoo for $4.8 billion. In an associated regulatory filing with the U.S. Securities and Exchange Commission (SEC), Yahoo assured the government and its buyer that it had no knowledge of “any incidents of, or third party claims alleging” a cyber incident.
It was an interesting claim given that, between the acquisition announcement (July) and that filing (September), an individual operating under the moniker “Peace_of_Mind” started selling Yahoo account information on the Dark Web.
In fact this wasn’t a new incident, nor was it the only one, or even the most significant. In the prior few years, Yahoo had been breached on multiple occasions, by multiple entities, including Russian state intelligence services. In one case, 500 million accounts were compromised. In another, all of its approximately three billion accounts were lost to hackers, including more than 150,000 belonging to U.S. government and military personnel. The situation may have been worse still, as Yahoo’s logs only retained data for a limited period; evidence of any further intrusions from 2013 and earlier was lost to history.
Data breaches are seismic. But to avoid fallout—the embarrassment, the apologies, the money and deals lost—companies often used to hide these events from the world. This had consequences not just for the companies themselves, but also for those companies’ investors and, of course, their customers.
That’s why last year, to prevent scenarios like this from happening again, the U.S. government laid the hammer down.
The New Rules for Incident Disclosure
In July, the SEC adopted new disclosure rules that officially took effect in December. Now, organizations must rapidly disclose to the government “any cybersecurity incident they determine to be material.” Within just four business days of determining that a cyber incident is material, they’re required to file a Form 8-K that “describe[s] the material aspects of the incident's nature, scope, and timing, as well as its material impact.” The only exception: disclosure may be delayed if the U.S. Attorney General determines it would pose a substantial risk to national security or public safety.
Organizations can no longer mask, or even delay, incident disclosures. There’s a second, practical consequence to these rules, too: to meet the reporting requirements, organizations will likely need significant resources, trained people, and a plan in place before an incident ever occurs. After all, can a company without a mature incident response plan really identify the nature, scope, timing, and impact of a data breach in just four days’ time?
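The four-business-day clock is simpler to state than to track under pressure. As a rough illustration (not legal guidance), the deadline math can be sketched in a few lines of Python; this version skips weekends only, and a real compliance calendar would also have to account for federal holidays:

```python
from datetime import date, timedelta

def form_8k_deadline(determination_date: date, business_days: int = 4) -> date:
    """Return the filing deadline N business days after the date a company
    determines an incident is material. Skips weekends only; federal
    holidays are deliberately out of scope for this sketch."""
    deadline = determination_date
    remaining = business_days
    while remaining > 0:
        deadline += timedelta(days=1)
        if deadline.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return deadline

# A materiality determination on Friday, Dec 1, 2023 would run the
# clock through the following Thursday, Dec 7 (weekends excluded).
print(form_8k_deadline(date(2023, 12, 1)))  # 2023-12-07
```

Note that the clock starts at the materiality determination, not at the breach itself or its discovery, which is exactly why the “what counts as material” question below matters so much.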
There is some nuance to the SEC’s policy change, though. What counts as “material”? How much information is required to describe the nature and scope of an attack? The practical impact of the rules, and the SEC’s ability to root out non-compliance, were initially not entirely clear.
Then one hacker group decided to test out the new system for themselves.
Holding Companies to Account
MeridianLink, based in Costa Mesa, California, is a classic sort of ransomware victim. Mid-sized, valued at around $1.5 billion, it’s big enough to be an attractive target, but not so big that a breach is likely to make headlines.
On either November 7th (as reported by the attackers) or the 10th (as reported by the company), MeridianLink was breached. Its attackers—the infamous BlackCat/ALPHV ransomware collective—didn’t encrypt any of its data, but they did exfiltrate it. The company was made aware the day it happened, and quickly patched the channel through which the perpetrators had first breached its network.
At this scale, ransomware attacks typically involve a negotiation. But in the days that followed the attack, BlackCat couldn’t reach MeridianLink.
Days passed. With seemingly no prospects of negotiating a ransom payment, the group tried out a novel strategy.
“We want to bring to your attention a concerning issue regarding MeridianLink’s compliance with the recently adopted cybersecurity incident disclosure rules,” the group wrote in a complaint to the SEC. “It has come to our attention that MeridianLink, in light of a significant breach compromising customer data and operational information, has failed to file the requisite disclosure under Item 1.05 of Form 8-K within the stipulated four business days, as mandated by the new SEC rules.”
The letter worked like a charm. The hackers shared their stunt with DataBreaches.net, leading to widespread media coverage. MeridianLink was named and shamed in the public square. More importantly, a precedent was set.
Back in 2017, an independent committee found that at Yahoo, “senior executives and relevant legal staff were aware that a state-sponsored actor had accessed certain user accounts” as far back as 2014, two years before they admitted to anything. They did little, according to a 2017 SEC 10-K filing, because “failures in communication, management, inquiry and internal reporting contributed to the lack of proper comprehension and handling.”
That kind of story used to be common. Now, it seems like an artifact of a bygone era.
Cyberattack victims today have to respond to incidents with more diligence and care than ever before. They have to be prepared to a tee, with plans in place for what to do if the worst comes to pass. If not, the government might step in. And even if the government never finds out on its own, apparently, the hackers themselves will hold their victims to account instead.