New Study Unveils Early-Warning Framework to Predict Surges of Online Abuse
The new findings can help move social media platforms from reacting after online harm occurs to preventing harm before it happens.
Using only the first ten comments in online threads, researchers from Rutgers and the University at Albany have developed an early-warning framework that can predict when a toxic social media conversation is likely to erupt into a wave of abusive remarks, a phenomenon the team calls "Neg Storms." Central to this framework is Comment Storm Severity (CSS), a new metric the researchers designed to track and quantify toxicity levels in discussion threads over time.
"CSS is a compact, interpretable, time-aware metric that quantifies how intensely toxicity concentrates within a short span, normalized against early baseline behavior," said co-author Associate Professor of Library and Information Science Vivek Singh. "We formally define a Neg Storm as a thread segment in which the CSS exceeds a predefined threshold, indicating a concentrated episode of harmful interaction."
Their findings are significant for social media users and platform security designers, Singh said, because "they can help shift platform security frameworks from reacting after harm occurs to preventing harm before it happens."
The paper, "Forecasting 'Neg Storms': Time-Aware Modeling of Toxic Situations in Social Media," co-authored by Singh with Irien Akter, a graduate teaching assistant, and Pradeep K. Atrey, an associate professor, both at the University at Albany, SUNY, was accepted to the Proceedings of the 2025 International Symposium on Multimedia (ISM), held December 8-10, 2025, in Naples, Italy.
"Most social media security tools look at single comments, like checking one tree at a time. But the real danger is in the forest,” Singh said. "A single tree might look fine, yet if many trees catch fire together, the whole forest burns. Online threads work the same way. A few negative comments might seem harmless, but if negativity spreads quickly across the conversation, it can overwhelm the space and cause real harm. Our hypothesis was that early signs, such as how fast people reply and how comments cluster in time, could help us forecast these storms before they fully ignite."
Their early prediction models, co-author Akter said, can accurately forecast the future occurrence of Neg Storms about 70 to 80 percent of the time (depending on the setting) by looking at just the first ten comments in a thread.
Co-author Atrey added that the team also found that "timing patterns (when comments arrive and how tightly they cluster) are more informative than just looking at the words. Combining timing and content gives the best results."
To conduct the study, the researchers analyzed thousands of conversation threads from Reddit and Instagram. They:
- Calculated CSS, which measures how sharply toxicity spikes in a thread.
- Looked only at the first ten comments to extract features like timing (how fast replies come) and content (how toxic early comments are).
- Trained models to predict whether the thread would escalate.
- Tested performance using standard measures like accuracy and ROC-AUC (a simplified sketch of this pipeline follows the list).
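The article summarizes this pipeline without implementation detail, so the following Python sketch is an illustrative stand-in: it derives simple timing and content features from a thread's first ten comments, trains a logistic regression classifier on synthetic placeholder data, and reports accuracy and ROC-AUC. The feature set, model choice, and data are assumptions, not the paper's actual configuration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

def early_features(toxicity, timestamps, n=10):
    """Illustrative features from only a thread's first n comments:
    timing (reply speed, temporal clustering) and content (early toxicity)."""
    tox = np.asarray(toxicity[:n], dtype=float)
    ts = np.asarray(timestamps[:n], dtype=float)
    gaps = np.diff(ts) if len(ts) > 1 else np.zeros(1)
    return [gaps.mean(), gaps.std(), tox.mean(), tox.max()]

# Synthetic stand-in data (the study used real Reddit and Instagram threads):
# storm-prone threads get faster replies and higher early toxicity on average.
rng = np.random.default_rng(0)
threads, labels = [], []
for _ in range(400):
    storm = rng.random() < 0.5
    gaps = rng.exponential(30.0 if storm else 120.0, size=9)
    ts = np.concatenate([[0.0], np.cumsum(gaps)])
    tox = np.clip(rng.normal(0.5 if storm else 0.2, 0.15, size=10), 0.0, 1.0)
    threads.append((tox, ts))
    labels.append(int(storm))

X = np.array([early_features(tox, ts) for tox, ts in threads])
y = np.array(labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("ROC-AUC :", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```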
Describing the practical implications of their findings for platform security, Singh said that if social media platforms can spot the early warning signs of an impending Neg Storm, they can:
- Slow down conversations that are heating up.
- Send gentle reminders or apply temporary limits.
- Flag risky threads for human review (a hypothetical triage sketch follows the list).
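The article does not describe how a platform would act on a forecast, but one hypothetical way to connect a predicted Neg Storm probability to the interventions above is a simple triage policy; the thresholds and responses below are illustrative assumptions, not from the paper.

```python
def triage(storm_probability: float) -> str:
    """Hypothetical mapping from a model's predicted Neg Storm probability to
    the interventions listed above; thresholds are illustrative only."""
    if storm_probability >= 0.8:
        return "flag thread for human review"
    if storm_probability >= 0.6:
        return "send gentle reminders or apply temporary limits"
    if storm_probability >= 0.4:
        return "slow down the conversation (e.g., rate-limit new replies)"
    return "no intervention"
```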
"For everyday users, this means fewer toxic pile-ons and safer online spaces," Singh said. "For researchers and policymakers, it introduces Neg Storms as a new concept for studying collective harm, moving from 'trees' to 'forests' in social media research."