Technologies such as artificial intelligence (AI), machine learning, the Internet of Things and quantum computing are expected to unlock unprecedented levels of computing power, writes Brian Pinnock, cybersecurity expert at Mimecast. These so-called Fourth Industrial Revolution (4IR) technologies will power the future economy and bring new levels of efficiency and automation to businesses and consumers.
AI in particular holds enormous promise for organisations battling a scourge of cyberattacks, which have grown in both volume and sophistication over the past few years.
The latest data from Mimecast’s State of Email Security 2022 report found that 94% of South African organisations were targeted by email-borne phishing attacks in the past year, and six out of every ten fell victim to a ransomware attack.
Companies seeing potential of AI
To protect against such attacks, companies are increasingly looking to unlock the benefits of new technologies. The market for AI tools for cybersecurity alone is expected to grow by $19-billion between 2021 and 2025.
Locally, adoption of AI as a cyber resilience tool is also growing. Nearly a third (32%) of South African respondents in Mimecast’s State of Email Security 2022 report were already using AI or machine learning – or both – in their cyber resilience strategies. Only 9% said they currently have no plans to use AI.
But is AI a silver bullet for cybersecurity professionals looking for support with protecting their organisations?
Where AI shines – and where it doesn’t
AI should be an essential component of any organisation’s cybersecurity strategy. But it’s not an answer to every cybersecurity challenge – at least not yet. The same efficiency and automation gains that organisations can get from AI are available to threat actors too. AI is a double-edged sword that can aid organisations and the criminals attempting to breach their defences.
Used well, however, AI is a game-changer for cybersecurity. With the correct support from security teams, AI tools can be trained to help identify sophisticated phishing and social engineering attacks, and defend against the emerging threat of deepfake technology.
In recent years, AI has made significant advances in analysing video and audio, identifying irregularities more quickly than humans can. For example, AI could help combat the rise in deepfake threats by rapidly comparing a video or audio message against known original footage to detect whether the message was synthesised from manipulated, spliced-together clips.
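To make the comparison idea concrete, here is a minimal sketch that assumes a trained model has already converted each clip into a numerical embedding. The embedding step, the 128-dimension vectors and the 0.9 threshold are all illustrative stand-ins, not any vendor’s implementation: a suspect clip that closely matches none of the known originals is flagged for review.

```python
# Minimal sketch of the matching step: score a suspect clip's feature
# embedding against embeddings of known-authentic footage. Real systems
# would produce embeddings with a trained video/audio model; here random
# vectors stand in, purely to illustrate the comparison.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_possible_deepfake(suspect: np.ndarray,
                           known_originals: list[np.ndarray],
                           threshold: float = 0.9) -> bool:
    """Flag the clip if it matches no known-original embedding closely."""
    best = max(cosine_similarity(suspect, ref) for ref in known_originals)
    return best < threshold  # weak match to every original -> suspicious

# Illustrative only: random vectors stand in for model embeddings.
rng = np.random.default_rng(0)
originals = [rng.normal(size=128) for _ in range(5)]
suspect = rng.normal(size=128)
print(flag_possible_deepfake(suspect, originals))  # True: matches nothing
```

In practice the heavy lifting sits in the embedding model itself; the matching step shown here is deliberately simple.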
AI may also be susceptible to subversion by attackers, a drawback security professionals need to remain alert to. Because AI systems are designed to automatically ‘learn’ and adapt to changes in an organisation’s threat landscape, attackers may employ novel tactics to manipulate the algorithms, undermining their ability to help protect against attack.
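A toy example shows why this matters. In the hypothetical sketch below, a naive, continuously learning spam filter is poisoned by an attacker who gets enough benign-looking messages containing a chosen word labelled as legitimate, dragging the filter’s view of that word down. The messages and counts are invented for illustration.

```python
# Toy illustration of data poisoning: an attacker floods a naive,
# continuously-learning spam filter with benign-labelled messages
# containing a target word, dragging that word's spam score down.
from collections import Counter

spam_counts, ham_counts = Counter(), Counter()

def learn(message: str, is_spam: bool) -> None:
    (spam_counts if is_spam else ham_counts).update(message.lower().split())

def spam_score(word: str) -> float:
    s, h = spam_counts[word], ham_counts[word]
    return s / (s + h) if (s + h) else 0.5

# Legitimate history: "invoice" appears mostly in spam.
for _ in range(90):
    learn("urgent invoice attached", is_spam=True)
for _ in range(10):
    learn("monthly invoice attached", is_spam=False)
print(spam_score("invoice"))  # 0.9: the filter treats the word as spammy

# Poisoning: the attacker gets many harmless 'invoice' mails marked as
# legitimate (e.g. rescued from quarantine), and the model adapts.
for _ in range(300):
    learn("team lunch invoice reminder", is_spam=False)
print(spam_score("invoice"))  # 0.225: the same word now looks benign
```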
Shielding users from tracking by threat actors
A standout use of AI is its ability to shield users against location and activity tracking. Trackers are typically used by marketers to refine how they target customers, but threat actors also use them for nefarious purposes.
They embed trackers in emails or other software to reveal the user’s IP address, location and engagement with email content, as well as the device’s operating system and browser version.
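The classic example is the tracking pixel: a tiny remote image whose URL carries a per-recipient token, so the act of displaying it tells the sender who opened the email, when, and from roughly where. The sketch below shows a simplified heuristic for spotting likely pixels in an email’s HTML – the example URL and thresholds are invented.

```python
# Sketch: spot likely tracking pixels in an email's HTML -- tiny remote
# images whose URLs carry a per-recipient identifier. Heuristic only.
from html.parser import HTMLParser

class PixelFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.suspects = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        tiny = a.get("width") in ("0", "1") or a.get("height") in ("0", "1")
        src = a.get("src", "")
        # A long query string often carries a per-recipient token.
        tokenised = "?" in src and len(src.split("?", 1)[1]) > 20
        if src.startswith("http") and (tiny or tokenised):
            self.suspects.append(src)

email_html = ('<p>Hi there!</p>'
              '<img src="https://mailer.example/open?u=abc123def456ghi789jkl"'
              ' width="1" height="1">')
finder = PixelFinder()
finder.feed(email_html)
print(finder.suspects)  # the 1x1 image with a long token looks like a tracker
```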
By combining tracker data like this with user data gained from breaches – for example, a breach at a credit union or government department where personal information about the user was leaked – threat actors can develop hugely convincing attacks that could trick even the most cyber-aware users.
Tools such as Mimecast’s newly released CyberGraph can protect users by limiting threat actors’ intelligence gathering. The tool replaces trackers with proxies that shield a user’s location and engagement levels. This prevents attackers from confirming whether they are targeting the correct user and limits their ability to gather the intelligence later used in complex social engineering attacks.
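This is not how CyberGraph itself is implemented – the sketch below simply illustrates the general proxy-rewriting technique, with a hypothetical gateway address (imageproxy.example) standing in for a real one. Every remote image URL in the email is rewritten to be fetched through the gateway, so the tracker only ever sees the gateway’s address, never the reader’s.

```python
# Sketch of the general proxying technique (not Mimecast's implementation):
# rewrite remote image URLs so they are fetched via a gateway, which hides
# the reader's IP address, location and open-time from the tracker.
import re
from urllib.parse import quote

PROXY = "https://imageproxy.example/fetch?url="  # hypothetical gateway

def rewrite_remote_images(html: str) -> str:
    """Point every remote <img src> at the proxy instead of the tracker."""
    return re.sub(
        r'(<img[^>]+src=")(https?://[^"]+)(")',
        lambda m: m.group(1) + PROXY + quote(m.group(2), safe="") + m.group(3),
        html,
    )

print(rewrite_remote_images(
    '<img src="https://mailer.example/open?u=abc123" width="1" height="1">'
))
# The tracker now only ever sees requests coming from the proxy's address.
```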
For example, a criminal may want to break through the cyber defences of a financial institution. They send an initial email with no meaningful content to an employee, simply to confirm that they’re targeting the correct person and to establish their location. The user thinks little of it and deletes the email. But if that person is travelling for work, for example, the cybercriminal would see their destination and could adapt the attack by mentioning the location to create an impression of authenticity.
Similar attacks could target hybrid workers, since many employees these days spend a lot of time away from the office. If a criminal can glean information from the trackers they deploy, they could develop highly convincing social engineering attacks that could trick employees into unsafe actions. AI tools provide much-needed defence against this form of exploitation.
Empowering end-users
Despite AI’s power and potential, it is still vitally important that every employee within the organisation is trained to identify and avoid potential cyber risks.
Nine out of every ten successful breaches involve some form of human error. More than 80% of respondents in the latest State of Email Security 2022 report also believe their company is at risk from inadvertent data leaks by careless or negligent employees.
AI solutions can guide users by warning them of email addresses that could potentially be suspicious, based on factors like whether anyone in the organisation has ever engaged with the sender or if the domain is newly created. This helps employees make an informed decision on whether to act on an email.
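In simplified form, that kind of guidance amounts to scoring a handful of signals and surfacing the reasons to the user. The sketch below is illustrative only – the 30-day threshold, the lookalike domain and the function names are all invented, not any product’s actual logic.

```python
# Simplified sketch of the signals described above: warn about a sender
# when nobody in the organisation has engaged with them before and the
# sending domain was registered very recently. Thresholds are illustrative.
from datetime import date

def sender_warning(sender: str,
                   known_contacts: set[str],
                   domain_created: date,
                   today: date) -> str | None:
    reasons = []
    if sender.lower() not in known_contacts:
        reasons.append("no one here has corresponded with this address")
    if (today - domain_created).days < 30:
        reasons.append("the sending domain is less than 30 days old")
    return "Caution: " + " and ".join(reasons) if reasons else None

print(sender_warning(
    "ceo@secure-payrol.example",          # hypothetical lookalike domain
    known_contacts={"ceo@company.example"},
    domain_created=date(2022, 6, 1),
    today=date(2022, 6, 10),
))
```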
But because AI relies on data and is not completely foolproof, regular, effective cyber awareness training is still needed to empower employees with knowledge and insight into common attack types, helping them identify potential threats, avoid risky behaviour and report suspicious messages to prevent other end-users from falling victim to similar attacks.
However, fewer than a third of South African companies provide ongoing cyber awareness training, and one in five provides such training only once a year or less often.
To ensure AI – and every other cybersecurity tool – delivers on its promise to increase the organisation’s cyber resilience, companies should prioritise regular and ongoing cyber awareness training.