![](https://cyber-vigilance.de/wp-content/uploads/2024/11/Business-continuity.png)
No one wants to become involved in a cyberattack, whether professionally or personally. However, an increasing number of people are becoming victims of such online threats. These attacks are not only growing more common, but also more severe, with humans frequently at the centre of them.
According to IT security experts, 9 out of 10 data breaches involve a human aspect that permits information to be taken, money to be stolen, or identities to be compromised.
Cybercriminals use AI technologies to successfully target humans.
A major part of the explanation is that an attempted cyberattack today looks very different from one five, two, or even one year ago. Criminals and bad actors have long recognised that the human element provides a dependable entry point into systems, circumventing technological defences, and they leverage emerging technologies to enhance their capabilities. The same AI-powered tools and LLMs that promise to transform customer service and product development are being exploited to make bogus requests for information appear even more credible. Tools such as WormGPT and FraudGPT are spreading through hidden discussion boards and being handed around on the dark web. An email generated by WormGPT could look like this:
![](https://cyber-vigilance.de/wp-content/uploads/2024/11/image-10.webp)
Using generative AI, emails attempting to access information can be created up to 40% faster than with traditional methods. The result is deception at a much larger scale. At SoSafe, AI-generated emails used in simulated attacks were opened by 78% of employees; 65% of them went on to disclose personal information, and 21% clicked on harmful links or attachments. Furthermore, scammers can now include industry- and company-specific information while crafting grammatically correct, well-formatted messages. These new types of attack may not display the typical symptoms of phishing, such as unusual fonts, syntax, or file types. Worst of all, AI allows each attack to be tailored to themes and information that you will find interesting. The age of mass personalised spear phishing has arrived.
Technology helps to keep the hordes at bay, and it works well, but it is not enough. Professional hackers working together, given enough time, will almost certainly overcome any technical defences IT puts in place. That takes time, effort, ingenuity, and possibly zero-day exploits. It is significantly easier to target an authorised user who can bypass technical controls through the access privileges they have been granted.
Users have become our ‘main attack surface.’ This is the simplest way for an attacker to gain access to our systems, data, and resources, often through a staff member who is simply trying to help with a client request or a personal matter. A glaring illustration is what happened to the software development firm Retool. After obtaining an employee’s credentials through a smishing attack, the attackers used AI to deepfake the voices of IT workers and defeat MFA codes. This gave them access to the accounts of more than 25 cryptocurrency customers, many of whom, including Fortress Trust, lost millions in cryptocurrency as a result.
![](https://cyber-vigilance.de/wp-content/uploads/2024/11/f-Security-smishing.png)
This is the “human factor” in cyber defence. People are on the front lines and should be viewed as assets in the fight rather than as enablers of intrusions.
We need to integrate awareness and training into safe behaviours.
Education is critical for managing human risk, and the first step is becoming aware of the problem: bad actors are motivated to steal valuable information and resources from an organisation. Compliance frameworks have long recognised the importance of human-layer risk, presenting security leaders with a set of criteria to address it. However, simply checking the compliance box is insufficient, because these frameworks communicate basic best practices rather than actually influencing behaviour. Too frequently, security training ends here, yet merely meeting the minimum compliance requirement leaves workers without the tools and understanding to play an active role in a company’s defence. The most worrying aspect? According to recent NIST research, 56% of security executives still see compliance as the most significant measure of security awareness and training (SA&T) performance. While they saw compliance as an important sign of success, they also recognised that it may not reflect true efficacy in changing behaviour and attitudes.
Cyber security training can be tedious: countless slides that require a click every 15 or 30 seconds to ‘guarantee participation,’ plus meaningless quizzes with blatantly obvious answers. This is no longer fit for purpose. Confronted with a growing threat landscape, businesses must strive to move their organisations beyond the fundamentals of awareness.
Instead, programs must identify and prioritise the human risks specific to each firm, and then develop a corrective action plan to address them. This leads to the development and diffusion of behaviours that enable people to identify, understand, and respond to threats. These programs must assess a wide range of human-related dangers and devise strategies to counter them, taking into account cultural effects, motivating factors and attitudes, context, and emotional reactions. The emphasis must be on the principles that underpin safe and secure interactions with digital information and communication systems, principles that remain applicable even if the format or underlying technology changes.
Training should be engaging. Yes, it should contain essential material, but it should also ensure that people learn how to apply their knowledge, develop good security practices, and understand why these things matter. The good news: we can use time-tested psychological approaches. In practice, this means providing a multichannel experience and contextual learning opportunities wherever individuals are. Such programs break large blocks of text into bite-sized portions, using techniques such as gamification, continuous and spaced repetition, interactive components, contextual nudging, and storytelling, all while emphasising positive reinforcement over fear-based learning.
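The spaced-repetition technique mentioned above can be illustrated with a minimal scheduling sketch. This is a hypothetical example, not any particular platform's implementation: the idea is simply to lengthen the interval before a lesson is reviewed again after each correct answer, and to reset it after a miss so the topic comes back quickly.

```python
from dataclasses import dataclass

@dataclass
class TrainingModule:
    """A bite-sized security lesson tracked by a spaced-repetition scheduler."""
    name: str
    interval_days: int = 1  # days until the next review

def schedule_next_review(module: TrainingModule, answered_correctly: bool) -> int:
    """Update the review interval: double it on success (capped), reset it on failure."""
    if answered_correctly:
        # Correct answer: the learner sees this topic less often, up to ~3 months apart.
        module.interval_days = min(module.interval_days * 2, 90)
    else:
        # Miss: reschedule the topic for the very next day.
        module.interval_days = 1
    return module.interval_days

phishing = TrainingModule("Spotting phishing emails")
schedule_next_review(phishing, True)   # interval grows to 2 days
schedule_next_review(phishing, True)   # interval grows to 4 days
schedule_next_review(phishing, False)  # missed: interval resets to 1 day
```

Real systems (e.g. SM-2-style algorithms) weight the interval by answer quality rather than a simple double-or-reset, but the principle of widening gaps for mastered material is the same.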
Behaviour-based human risk management is our last hope for tackling the burnout epidemic that security personnel are experiencing today.
Companies that do not act risk overwhelming the specialists who deal with these dangers. Burnout is on the rise among security teams: over 65% of security professionals in the United States and Europe experience severe work stress, while 3.9 million cyber security roles remain unfilled worldwide. To accomplish their mission, these professionals need help from others. Cyber security must become a shared responsibility, because humans have the ability to combat cybercrime. Equipped with sound security instincts, they become the most powerful and adaptable component of a business’s defence strategy for long-term risk reduction.
The best way to understand the transformation required is through one of the most common analogies: give a man a fish, and he’ll eat for a day; teach him to fish, and he’ll never go hungry. Here, a fish represents a technological fix that may eliminate one threat but does not address the larger issue. Only by empowering frontline personnel through a comprehensive human risk management program can businesses build resilience and sustainably mitigate cyber risk, ensuring long-term viability.
How can CVA assist you in managing and reducing human risk?
CVA is a leading human risk management platform that aims to make secure behaviour second nature. We believe that people want to do the right thing but frequently need help to succeed, particularly in today’s world of rising AI-driven risk. That is why we are committed to developing security cultures that not only guard against digital threats but also involve individuals in lowering human-related risk. We teach, transfer, act, and connect.