Data Breaches, Ransomware, and AI Lawsuits Signal Escalating Digital Threats
Baltimore sued xAI over Grok deepfakes as hackers claimed 6.8 million Crunchyroll records and Gunra ransomware hit a chip-testing firm's Singapore subsidiary within days of each other.

Anime streaming service Crunchyroll confirmed a data breach involving customer service ticket information following an incident with a third-party vendor — one of three major digital security events that emerged in rapid succession this week, collectively exposing fresh vulnerabilities across consumer platforms, industrial supply chains, and AI governance.
The Crunchyroll breach allegedly took place on March 12 and stemmed from a compromised employee account at Telus, Crunchyroll's business process outsourcing partner. The hacker gained access by compromising an Okta single sign-on account belonging to a Crunchyroll support agent, then claimed to have downloaded about eight million support ticket records from Crunchyroll's systems, including roughly 6.8 million unique email addresses, though the claims have not been independently verified. The streaming site, which Sony acquired from AT&T in 2020 for $1.18 billion, serves 15 million subscribers worldwide. The hacker also claimed to have sent extortion emails to Crunchyroll demanding $5 million in exchange for not publicly leaking the data, but said the company never responded.
Business process outsourcing companies have become high-value targets for threat actors over the past few years, as they often handle customer support, billing, and internal authentication systems for multiple companies. The Crunchyroll intrusion mirrors a pattern seen just days earlier in the semiconductor sector.
Semiconductor services firm Trio-Tech International disclosed that one of its subsidiaries in Singapore suffered a ransomware attack on March 11 that encrypted certain files within its network. The company initially shrugged off the attack as immaterial, only to reverse course days later after discovering that stolen data had been disclosed. In an 8-K filing with the SEC, Trio-Tech stated that "on March 18, the incident escalated and resulted in the unauthorized disclosure of certain Company data," with management concluding the incident may constitute a material cybersecurity event. The company has not shared details on the threat actor responsible, but the Gunra ransomware group added Trio-Tech to its Tor-based leak site. Trio-Tech reported more than $36 million in revenue last year, with 94% of its customers located in Asia, and has about 600 employees, most of them based across the region.
Gunra follows the now-standard double extortion playbook: encrypt the victim's files first, then threaten to publish stolen data if the ransom isn't paid. The incident comes a month after another semiconductor test equipment supplier, Advantest, reported a ransomware attack to Japanese authorities.

On the legal front, the week's most consequential development may carry the farthest reach. The Mayor and City Council of Baltimore filed a lawsuit in the Circuit Court for Baltimore City against X Corp., xAI Corp., and SpaceX, alleging the companies violated Baltimore's Consumer Protection Ordinance by designing, marketing, and deploying Grok, a generative AI system that produces and disseminates non-consensual sexualized images, including content involving minors. The complaint cites estimates that Grok generated between 1.8 million and 3 million sexualized images in just days between December 29, 2025, and January 8, 2026, including around 23,000 depicting children, according to the Center for Countering Digital Hate and a New York Times analysis.
"These deepfakes, especially those depicting minors, have traumatic, lifelong consequences for victims," said Baltimore Mayor Brandon M. Scott. Last week, three teens in Tennessee also filed a proposed class-action lawsuit against xAI after Grok generated images portraying them in sexualized and debasing scenarios.
Baltimore's lawsuit against xAI and its Grok chatbot could decide how far cities can go to regulate artificial intelligence in the absence of federal law. Legal experts note that "the stronger legal focus is likely to be on whether the AI system itself materially contributed," adding that if courts view Grok "as an active creator rather than a passive intermediary," responsibility will fall more heavily on xAI.
Taken together, the three incidents point to a digital threat landscape in which the weakest link is rarely the primary target itself. A single phishing email sent to a BPO contractor opened Crunchyroll's ticketing infrastructure. A ransomware group found a path into global semiconductor testing infrastructure through a small Singapore office. And a generative AI product widely marketed as safe became the centerpiece of the first municipal lawsuit of its kind in the United States. Legal experts suggest settlement is the most probable outcome for the Baltimore case, though it could still result in "a precedent-setting ruling on AI accountability."