It’s common knowledge that organisations of all sizes face numerous and formidable cyber-threat actors, and we certainly seem to hear plenty about them: cyber-criminals, or nation-state actors bankrolled by foreign governments who seem to have god-like hacking skills. However, we focus on these threats at the expense of others at our peril.
As dangerous as the armies of state-sponsored hackers and the relentless, ruthless operatives of organised crime may be, there are cyber threats that warrant far more attention than they receive.
With that in mind, let’s talk about insider threats.
In fact, it’s the people already inside your organisation who can present risks more severe than those posed by even the most sophisticated cybercriminals. Anyone can be an insider threat. Karen from Finance. Joe from the Marketing department. A system admin, or even the janitor. Indeed, for all you know, I could be an insider threat…
Of course, I am not an insider threat, and yet I still could be. This brings us to the main reason insider threats pose such an extraordinary risk to organisations of any size: insider threats often don’t know that they have become one, and they will remain oblivious to the havoc they wreak because they don’t necessarily have any malicious intent or any other reason to think of themselves as a security risk.
There are three broad categories of insider threats:
- Malicious insider threats – those who abuse access privileges for financial gain or competitive advantage, for example by stealing and selling valuable intellectual property, selling access credentials on the dark web, embezzling funds or crippling systems to benefit a competitor.
- Negligent insider threats – those who inadvertently expose the organisation to cyber-attack through some kind of accident attributable to the unavoidable human tendency to make mistakes.
- Compromised insider threats – those who inflict damage on an organisation’s information systems either directly or indirectly (e.g., by following instructions from, or surrendering their access credentials to, a hostile third party) because their decision-making is being manipulated. People can be driven to do all manner of things if it means relief from pain, securing the safety of a loved one, or being convinced that they are ‘special’ and uniquely capable of doing something. This is where the unique capabilities of social engineers come into play…
Insider threats in any of these forms are both common and pernicious, and the reason that they are so common is tied closely to the inherent fallibility of humans. We are our own worst enemy.
Hackers routinely leverage human error and our susceptibility to social engineering attacks to execute malware and/or compromise a system without directly accessing anything themselves. In these situations, hostile threat actors create and use insider threats to do their dirty work for them.
In other cases, insider threats will manifest independently of any hacker or hostile third party. In these scenarios, oblivious individuals often inflict harm inadvertently, usually when they’ve failed to exercise due diligence (i.e., a negligent insider threat) or simply by accident.
For example, sometimes employees are granted administrative controls before they’re ready for the responsibilities that come with enhanced privileges. This frequently results in people haplessly wreaking havoc as their litany of mistakes leaves the door open for threat actors to walk right on in.
We are just getting started.
Let’s move on to malicious insider threats.
Motivated by financial difficulties, greed, boredom or frustration with their employer, malicious insider threats will deliberately compromise systems at their workplace to benefit themselves or others. Sometimes they aim to abscond with valuable intellectual property, or to siphon funds from company bank accounts. However, it is just as common for insider threats of this kind simply to provide hostile third parties with access to an organisation’s systems, outsourcing any number of nefarious tasks, such as theft, data exfiltration, damage and destruction, to others.
Insider threats are an increasingly common and severe cyber risk, and they’re also (in more ways than one) more dangerous than other, more widely publicised cyber threats such as ransomware. Unlike many hostile threat actors, insider threats have intimate knowledge of the organisation, its culture, power structures, policies and procedures. Because many insider threats sit at a managerial level, they also enjoy the advantage of holding the trust of the organisation they exploit (knowingly, willingly or otherwise), along with immediate and direct access to key systems and valuable data.
With so many advantages at their disposal, it’s no surprise that by the time an insider threat has been identified, the damage has already been done and the culprit is long gone.
Furthermore, in most cases, the higher an insider threat sits within your organisational chart, the greater the threat they pose. For example, a Project Manager or Chief Financial Officer will have greater access to sensitive data and systems than junior administrative staff, because those who hold higher positions within an organisation’s hierarchy are typically granted increased access privileges. The implications of a CFO’s account being compromised are therefore proportionately greater.
That said, the scale of the risks posed by insider threats isn’t determined solely by your position description or your level of seniority within an organisation. Of course, if your company’s CFO has a lapse of concentration or judgement and falls victim to a spear-phishing email, an attacker can use their credentials to access far more sensitive and valuable data than is available to more junior staff. However, the same or even greater risks can stem from anyone.
For example, a software developer is not an executive, but could still do just as much damage as a compromised or negligent CEO. Said developer might find a USB stick in the car park and plug it into their workstation – just to see what’s on it. The developer’s curiosity may lead to the unfortunate decision to open an enticingly labelled document, which then executes a malicious macro. Before the developer can say ‘culpability’, the workstation reaches out to a website to download and execute ransomware. To add insult to injury, the macro that ultimately facilitated the company’s ransomware infection should never have been able to run on the developer’s workstation.
This hypothetical organisation conducts rigorous and regular governance, regulation and compliance audits, including assessments of each individual’s needs relative to their role. As such, each employee must have a sound business-related reason to justify having macros enabled at their workstation – and our developer friend once had a very good reason for such access. The problem is that this very good reason had not applied for months, and no change had been made to the workstation because the developer had “accidentally” failed to notify the relevant parties during the last security audit.
Neither of these insider threats intended to cause harm to the organisation. They were fundamentally victims of human behavioural proclivities: we all make mistakes, we all get tired, we can all be distracted, we are all prone to curiosity, and most of us (whether we admit it or not) are inclined to make decisions that place others at risk for the sake of our own convenience or benefit (e.g., car crashes caused by people ‘just checking’ their phones, or showing up to work when you know you have the flu). Anyone can become an insider threat, and the root cause of the risk lies with human behaviour and decision-making.
So, what can we do to combat insider threats?
First and foremost, we can work towards changing our behaviour by educating our employees on their responsibilities and, more broadly, by improving their cyber literacy. With the right knowledge-transfer mechanisms and pedagogical know-how, even the most stubborn employee will at least become more cognisant of the potential issues their decisions could create as they go about their work.
Second, wherever we cannot rely on behavioural adaptations alone to reduce risks, we can strategically (and tactfully) enforce access control restrictions on things like shared folders, USB ports and application macros. The safest option when dealing with something prone to error is to remove it, but that’s often not an option (especially when that thing happens to be a company executive), so the next best thing is to contain and control the risks associated with the hazard.
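The least-privilege thinking behind these restrictions can be sketched in a few lines of illustrative Python. This is a minimal, hypothetical model (the role names, permissions and resources are invented for illustration, not drawn from any real product): access is denied by default, and a role can only reach what it has been explicitly granted.

```python
# Minimal, hypothetical sketch of least-privilege access control.
# Role names, permissions and resources are invented for illustration.

ROLE_PERMISSIONS = {
    "developer": {"source_repo", "build_server"},
    "finance": {"payroll_db", "bank_portal"},
    "cfo": {"payroll_db", "bank_portal", "board_reports"},
}

def is_allowed(role: str, resource: str) -> bool:
    """Deny by default: a role may touch a resource only if explicitly granted."""
    return resource in ROLE_PERMISSIONS.get(role, set())

# A compromised developer account can reach the build server,
# but cannot siphon funds from payroll systems.
print(is_allowed("developer", "build_server"))  # True
print(is_allowed("developer", "payroll_db"))    # False
```

The design point is the default-deny stance: an unrecognised role (or a privilege that was never granted) simply fails the check, so a compromised or negligent account can only damage what its role legitimately needs.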
Third, when someone does have to leave, or is terminated, the organisation should aim to sever the relationship amicably. This follows simple crime-prevention principles that have been common practice in offline environments for decades, and which revolve around removing or reducing opportunities for antisocial, harmful or criminal behaviour. By supporting a former employee’s efforts to find a new role as soon as possible, an organisation reduces the opportunity for that individual to act on their capacity to become an insider threat, increasing visibility and decreasing motivation.
These three strategies are derived from principles embedded in our Human Centric Cybersecurity (HCCS) practice, which aims to increase an organisation’s cyber resilience in general, and to guard against insider threats specifically by:
- Simulating social engineering attacks – to gauge typical human behaviours and responses and leverage findings to develop improvement programs.
- Removing any barriers to improving cyber maturity – by identifying, analysing and resolving issues/blocks generated by workplace cultures and power structures.
- Building cyber skills, policies and processes – helping staff develop their ability to identify potential insider threats and putting effective procedures in place that defuse the associated risks.
Done consistently across an organisation, HCCS helps to foster a culture of holistic cybersecurity that will outlast and outperform any purely technical ‘solution’.
Get started by contacting us at 460degrees.