95% of data breaches involve human error, report reveals

Human error surpassed technological flaws as a contributing factor to data breaches in 2024, with 95% of breaches involving human mistakes, according to Mimecast’s State of Human Risk Report published Tuesday.

Meanwhile, a KnowBe4 survey published Tuesday found that employees tend to overestimate their ability to detect phishing attempts, with 86% saying they could confidently identify phishing emails while nearly 50% admitted to falling for scams.

Mimecast’s report compiles information from interviews with 1,100 security and IT decision makers conducted between November and December 2024, while KnowBe4’s “Security Approaches Around the Globe: The Confidence Gap” survey includes responses from professionals in six different countries who primarily work on laptops.

Both reports highlight the growing human element in cyberattacks and the factors that lead to successful phishing attempts and breaches.

Insider risk and collaboration tools raise security concerns

Insider risk comes not only from disgruntled employees and intentional data leaks, but also from unintentional mistakes by employees who are fatigued, careless, negligent or who have fallen prey to social engineering.

Mimecast’s survey found that 43% of respondents saw an increase in internal threats or data leaks due to employee mistakes within the past 12 months, and 66% were concerned that data loss from insiders would continue to increase over the next 12 months. The estimated average cost of insider-related data exposure was $13.9 million, according to the State of Human Risk Report.

Collaboration tools were noted as a growing area of concern, although email attacks remained the most common threat, with 95% of respondents expecting to see email security challenges in 2025. Collaboration tools like Microsoft Teams and Slack saw a 7% increase in attacks over 2024, according to Mimecast, and 44% of respondents saw an increase at their own organization over the last 12 months, up from 37% the previous year.

Compared with email, collaboration tools may be an overlooked attack surface for some organizations. A majority, 67%, of respondents said most native collaboration tool security is insufficient, and 79% said collaboration tools posed new threats and security loopholes that urgently needed to be addressed, the Mimecast report stated. Additionally, 61% of respondents felt it was inevitable or likely that their organization would suffer a collaboration tool-linked attack with a negative business impact in 2025.

Many organizations are already taking measures to better secure these tools: 53% currently monitor for collaboration tool-based attacks and 34% are in the process of implementing such monitoring.

Cybersecurity budgets can be a pain point in addressing both human error risk and collaboration tool security. Only 3% of respondents reported having a sufficient budget to address all cybersecurity areas, despite 85% seeing a security budget increase in the past 12 months. More than half, 52%, of respondents said more budget was needed for cybersecurity staff and third-party services, and 57% said they needed more resources to secure collaboration tools. Fewer respondents, 47%, felt they lacked sufficient budget for email security.

Mimecast noted that human risk management (HRM) programs can be more effective than one-size-fits-all security awareness training, as these programs can better identify employees at higher risk of compromise and help address their specific weaknesses. This approach is key, as just 8% of employees account for 80% of cybersecurity incidents, according to Mimecast.

Overconfidence and underreporting can lead to security lapses

KnowBe4’s survey highlighted how employees tend to overestimate their ability to identify phishing scams, with confidence levels varying greatly between respondents from different countries; for example, only 32% of French respondents felt confident in identifying phishing, while 91% of South African respondents expressed such confidence. However, South African respondents were also the most likely to fall for scams, with 68% admitting to falling victim.

“This overconfidence fosters a dangerous blind spot—employees assume they are scam-savvy when, in reality, cybercriminals can exploit more than 30 susceptibility factors, including psychological and cognitive biases, situational awareness gaps, behavioral tendencies, and even demographic traits,” said Anna Collard, senior vice president of content strategy at KnowBe4.

Respondents to KnowBe4’s survey were most confident in identifying email phishing, with 86% saying they would be able to spot such scams, and more than 80% also said they could identify voice phishing (vishing), social media phishing and SMS phishing (smishing) schemes. However, confidence was lower for more complex social engineering attacks (67%) and deepfakes (65%), highlighting the challenge posed by evolving attacker techniques.

Victimization rates identified through the survey showed that employees were most likely to fall for email phishing, at a rate of 24%, followed by social media phishing at 17%, deepfakes at 12%, smishing at 11% and vishing at 11%.

Underreporting of security risks was also identified as a potential pitfall in KnowBe4’s report. Although most employees, 56%, said they were “very comfortable” raising security concerns, 11% said they hesitated to report such risks. German respondents were the least likely to be comfortable reporting threats, at 40%, while almost all (97%) South African respondents said they felt comfortable reporting.

The most common reason given for hesitating to report risks was not knowing how to report them (38%), while 31% of respondents said it was too difficult, 22% said they were afraid to and 20% said they didn’t want to bother security teams with their reports.

Role of AI in phishing attacks and defense

Phishing and social engineering attacks are evolving as generative AI enables attackers to create more convincing phishing emails and deepfakes, both reports note. Twelve percent of survey respondents told KnowBe4 they had fallen for deepfake scams, while 55% of information technology and security decision-makers said they were not fully prepared for AI-driven threats, according to Mimecast.

However, AI is also becoming a key tool in cyber defense, with 95% of respondents to Mimecast’s survey saying they use AI to defend against cyberattacks and/or insider threats. And organizations are beginning to incorporate AI-related components into their cybersecurity training and policies. Mimecast’s report notes that 44% of respondents said they were developing internal AI tools to defend against AI-driven attacks, 42% were training on how to use AI to avoid exploitation, 40% were creating policies on AI usage and 35% were conducting simulated AI-driven phishing attacks.

“Regularly evaluate the AI tools your team uses. Ensure they are fully integrated and aligned with organizational needs,” Mimecast recommended. “Stay updated on emerging AI tools and understand the tactics used by cybercriminals to counteract evolving threats.”