We are seeing an escalation in the sophistication and impact of cyberattacks across the globe. This, coupled with the fact that organisations need to protect themselves against a plethora of attacks while attackers require only a single vulnerability in an organisation’s defences to be successful, is a scary prospect for business leaders charged with the security and sustainability of large organisations. Junaid Amra, Director at PwC Forensic Technology Solutions and Cybersecurity, Nina Kirsten, Economist at PwC Strategy&, and Geneviève Frydman, Political Risk Analyst at PwC Strategy&, talk solutions…
In PwC’s 22nd Annual Global CEO Survey, released in January 2019, cyber threats were identified as the fifth largest threat to global growth, following over-regulation, policy uncertainty, the availability of skills and trade conflicts.[1] As companies increasingly embrace digitisation, automation and artificial intelligence, the risk of cybercrime continues to grow.[2]
The sixth South African edition of PwC’s Global Economic Crime and Fraud Survey[3], released in 2018, highlighted that 26% of South African organisations expect cybercrime to be the most disruptive type of economic crime over the next two years.
Given the scale of the risk, it is no surprise that an emerging cybercrime insurance industry is growing rapidly too. A 2015 report by PwC suggested this industry could be worth up to USD7.5 billion globally by 2020.[4]
In South Africa, the cost of cybercrime to members of the public reached approximately R2.2 billion in 2017, according to the South African Banking Risk Information Centre.[5] This included incidents of identity theft, online fraud and scareware.
The motives for cyberattacks can broadly be classified into three types:
- Financial gain: Typically carried out by organised crime syndicates or other financially motivated perpetrators. These attacks include EFT fraud, ransomware and other schemes. Also included in this category is the theft of intellectual property, personal information and other forms of information that can be monetised.
- Economic, political and/or military advantage: Attackers in this category are essentially governments or nation states and to some extent competitors seeking strategic advantage. Targets of interest from a nation state perspective would be trade secrets, sensitive business information, emerging technologies, and critical infrastructure and information that could be used for political advantage. Competitors target strategic business information, intellectual property, sensitive customer or supplier information, among other things.
- Activism: This is often referred to as ‘hacktivism’ in the cyber world and refers to activists making use of technology to further a political agenda, ideology or to influence social change. Organisations and governments can become targets of hacktivists overnight. As an example, sponsors of the 2014 FIFA World Cup in Brazil were targeted by hacktivists. This was motivated by the fact that many Brazilians did not support the government’s decision to host the event. The aim of such hacktivist campaigns is often to make a statement or cause reputational damage.
The proliferation of cybersecurity insurance mirrors the growing popularity of political-risk insurance, with the latter providing insurance in the event of malicious damage to property, or injury and death of personnel, as a result of terrorism, political violence or war. A cyberattack can similarly bring day-to-day business operations to a grinding halt and cause long-term reputational damage. Cybersecurity insurance, even if costly, is a necessary consideration for companies embracing the Fourth Industrial Revolution.
Organisations need to close structural loopholes; higher digital walls alone will not solve the problem. They also need to look inside their walls to lower the risk of cyberattacks, as cybercriminals often target employees and customers who exhibit predictable patterns of behaviour. These modes of attack are referred to as social engineering techniques, and they coerce or entice users into performing certain actions.
Driving down the risk of a cyberattack, whether financially or politically motivated, not only minimises the risk of financial losses and a significant loss of customer trust, but also allows companies to lower the cost of cybersecurity insurance, which is a growing driver of operational cost.
Real-life example
Here is a scenario of how attackers can trick employees into installing malware on their devices: You face another pressured day at the office and receive an email from your company’s risk team stating that you have failed to respond to a mandatory staff compliance check. Action is required immediately to avoid disciplinary procedures. A surge of fear and a sense of urgency rush through you. You click on the link in the email and malware begins to download in the background. You’ve been hacked. The email was not from your company.
This example could happen to any of us. The methods used by attackers leverage insights into basic human psychology. Your employer, taking on an authoritative position, seems to send you an email. The email deliberately creates a sense of urgency and triggers a powerful lever – fear.
These tactics are understandable through the lens of behavioural economics, the study of decision-making that explains behaviour in terms of our cognitive make-up and rejects the classical economic view of people as rational, self-interested and driven by a precise weighing of the relative costs and benefits of their choices.
These are the top three behavioural traps that cybercriminals use, and the behavioural measures organisations can adopt to protect themselves and their employees:
- Tapping into the power of fear and loss aversion
The tendency to avoid pain and loss is at the core of our genetic make-up, and fear is one of the most powerful levers motivating us to act. It is a perfect tool for cybercriminals.
Whether in the form of an email from our employer informing us that we may be in breach of company policy, or a call from the bank informing us that our online bank account has been compromised and requiring us to share confidential information, such criminal attacks pressure us to act quickly or face a seemingly dangerous and painful outcome.
Since fear and loss aversion commonly prompt a panicked reaction that leads us into the arms of cybercriminals, organisations can counteract this by reminding their employees to ‘stop and think’ whenever they receive an email or a call that triggers feelings of fear and panic. Slowing employees down so that they have time to validate the sender’s email address and check for typos or other signals of a scam (a simple sketch of such checks follows this list) can significantly lower the risk of a successful cyberattack.
Furthermore, encouraging employees to check with their colleagues whether they received a similar email or call can also help them not to act on an automatic gut reaction, instead prompting further interrogation of the situation.
Behavioural science suggests salient and frequent reminders, utilising social norming techniques, can help companies instil a sense of calm and scepticism in employees to protect them against cybercriminals’ fearmongering.
- Leveraging the influence of authorities and social persuasion
The social engineering techniques used by cybercriminals attempt to leverage our willingness to submit to authority and our tendency to trust the opinions of others, a phenomenon termed herding. Thus, cybercriminals’ efforts to contact employees will generally evoke a superior authority, be it an executive at the company or a government institution. Because we have learned to trust authorities from a young age, we are less likely to question the validity of a request for information if it seems to come from a place or person of authority.
In addition to our tendency to submit to authority, we also tend to take behavioural cues from our peers. Looking to others for guidance can be helpful. For example, in a Tanzanian case study led by the Busara Center for Behavioral Economics, participants increased their savings rates by 11% when prompted by the total savings levels of peers who saved more than they did.
However, social persuasion can also cause harm. When cybercriminals pretend to have been referred by a colleague or friend, they employ herding tactics against the employee’s best interests. This method is also used by attackers on social media platforms, who gain the trust of people you are connected to by getting you to add them as a contact. In so doing, they automatically become trusted by people you know, allowing them to extend their reach.
To mitigate the risk of cyberattacks, companies can use social persuasion to their benefit by enlisting behaviour champions who visibly exhibit helpful behaviours, such as maintaining good password habits or responding appropriately to suspicious emails and calls. This form of social proofing can encourage more employees to follow the champions’ lead.
In addition, internal communication initiatives can highlight how the majority of staff follow cybersecurity best practices. For example, a campaign highlighting that “90% of your colleagues do not click on email attachments sent from unknown sources” can prove an effective method to promote the right behaviours by leveraging the principles of herding.
- Driving scarcity and urgency
Many of us have encountered phishing emails, which attempt to fool us into clicking on a link or opening an attachment that will install malicious software onto our computer, as highlighted in the example above. Such emails often elicit a sense of scarcity or urgency, prompting the recipient to act quickly.
While only some of us fall victim to phishing emails, the ability to send out thousands of emails at once, at virtually no cost, makes phishing a productive means for criminals to gain access to computers. It is often possible to spot a scam email, which may be riddled with bad grammar and spelling mistakes, but the ‘in a rush’ factor means we might not notice the typos or other suspicious mistakes in the email.
We often make poor choices when we are in a hurry. Thus, companies should encourage their employees to consider the possible intentions of any request with a level head. One way to encourage this is to make the risk of scam emails more salient.
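To make these checks concrete, the sketch below, written in Python using only the standard library, flags an email when the sender’s domain does not match a known company domain or when the wording leans on pressure language. It is a minimal illustration of the manual checks described above, not a recommendation of any particular tool; the trusted domain and keyword list are assumptions made for the example.

```python
# Illustrative sketch only: flag emails whose sender domain is unfamiliar
# or whose wording leans on urgency and fear. The trusted domain and the
# keyword list are assumptions made for this example.
from email.utils import parseaddr

TRUSTED_DOMAINS = {"example-company.com"}  # hypothetical internal domain
PRESSURE_KEYWORDS = {
    "immediately", "urgent", "disciplinary", "suspended", "final warning",
}

def scam_warnings(from_header: str, subject: str, body: str) -> list:
    """Return human-readable warnings for a potentially suspicious email."""
    warnings = []
    _name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    if domain not in TRUSTED_DOMAINS:
        warnings.append(f"Sender domain '{domain}' is not a known company domain.")
    text = f"{subject} {body}".lower()
    hits = sorted(kw for kw in PRESSURE_KEYWORDS if kw in text)
    if hits:
        warnings.append(f"Pressure language detected: {', '.join(hits)}.")
    return warnings

# The compliance-check scenario from earlier: a look-alike domain plus urgency.
print(scam_warnings(
    "Risk Team <risk.team@examp1e-company.com>",
    "Mandatory staff compliance check",
    "Action is required immediately to avoid disciplinary procedures.",
))
```

In this example, the look-alike domain (note the digit ‘1’ in ‘examp1e’) and the urgent wording both trigger warnings, mirroring the checks an alert employee would perform by hand.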
Organisations are attempting to train users to recognise attacks by conducting phishing and other simulations for employees. These simulations are effectively a cyber ‘fire drill’. Based on the work PwC is doing in this area, we see simulations targeting three distinct layers of the organisation: the board and executives, technical IT teams, and general users.
Although the simulations aimed at each level of responsibility all seek to improve awareness and decision-making when dealing with cyberattacks, the decisions required of each group differ significantly. During phishing simulations, companies send emails to their employees that are presented as normal firm communication, except for a few details that can tip off employees that the email is not legitimate.
Globally, PwC has run simulations for clients, typically starting with board members or executives and thereafter moving on to simulations aimed at IT teams and end users, or covering specific aspects required by the organisation. End-user simulations are continuous and take various forms, and phishing simulations are conducted routinely.
In these simulations, when an employee clicks on the link in the email, a message pops up informing them that they have been compromised. Through these exercises, companies can effectively demonstrate to their employees how easy it is for cybercriminals to exploit vulnerabilities, as well as which clues to look out for to avoid falling into this trap.
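For readers curious about the mechanics behind that pop-up, the click-tracking step of such a simulation can be as simple as a link in the test email that points to an internal web endpoint: when clicked, the endpoint records who clicked and serves an awareness page instead of anything harmful. The sketch below, using only Python’s standard library, is a hypothetical illustration under assumed names (host, port and log file), not a description of PwC’s tooling.

```python
# Hypothetical phishing-simulation click tracker (not any vendor's product).
# The campaign email would carry a link such as
# http://<simulation-host>:8080/track?user=<employee-id>; clicking it logs
# the event and shows an awareness page rather than installing anything.
import csv
from datetime import datetime, timezone
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

CLICK_LOG = "phishing_sim_clicks.csv"  # assumed location of the results log

AWARENESS_PAGE = b"""<html><body>
<h1>This was a simulated phishing email</h1>
<p>Had this been a real attack, malware could now be running on your device.
Check the sender's address, slow down, and report suspicious emails to the
security team.</p>
</body></html>"""

class ClickHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        parsed = urlparse(self.path)
        if parsed.path != "/track":
            self.send_error(404)
            return
        user = parse_qs(parsed.query).get("user", ["unknown"])[0]
        # Record who clicked and when, so follow-up training can be targeted.
        with open(CLICK_LOG, "a", newline="") as log:
            csv.writer(log).writerow(
                [datetime.now(timezone.utc).isoformat(), user]
            )
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(AWARENESS_PAGE)

if __name__ == "__main__":
    # Serve the tracking endpoint for the duration of the campaign.
    HTTPServer(("0.0.0.0", 8080), ClickHandler).serve_forever()
```

The resulting click log can then feed reporting on which teams need further coaching, turning the ‘fire drill’ into a feedback loop.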
Reporting channels should also be established for users to report incidents and obtain assistance. This may seem simple, but we find it to be a common gap: users are either not informed of a reporting channel or simply don’t have one. In one instance, a hacker wanted to make an ethical disclosure to a company whose systems were vulnerable and at risk of being compromised. Because the organisation had no internal process for dealing with such disclosures, he tried without success to speak to someone within the organisation and eventually released the information online, causing reputational damage.
The way forward
As organisations make strides in implementing stronger technical controls, cyberattackers are focusing their efforts on exploiting the social, emotional and psychological drivers of human behaviour. Organisations can, however, embrace the opportunity not only to strengthen the digital walls that protect them, but also to address the human aspects of cybersecurity by educating their employees and customers about the dangers as part of their cybersecurity awareness programmes. By understanding cybercriminals through the lens of behavioural economics, businesses can change their employees’ behaviour, ultimately creating a security-conscious culture that helps reduce their overall cybersecurity risk.
[1] https://www.pwc.com/gx/en/ceo-survey/2019/report/pwc-22nd-annual-global-ceo-survey.pdf
[2] https://www.pwc.co.za/en/press-room/cyber-risk-hotels.html
[3] https://www.pwc.co.za/en/assets/pdf/gecs-2018.pdf
[4] https://preview.thenewsmarket.com/Previews/PWC/DocumentAssets/400109.pdf
[5] https://citizen.co.za/news/south-africa/crime/2047717/cybercrime-costs-sa-almost-r2-2bn-a-year/