The Insider Threat Problem
How to Protect the Infrastructure Against Espionage, Sabotage, and Theft from the Ones We Least Suspect
This essay offers practical guidance on information security as part of establishing good governance in private companies, non-profit organizations, and public institutions. It is the first in a nine-part series to support organizations in modernizing their information systems.
The president of a small company had caught his Information Technology (IT) manager playing video games on a number of occasions. The president finally had enough and decided to fire the manager, assuming the manager wasn’t doing his job. What the president failed to understand was that the manager had kept the computer network operating flawlessly, without a single instance of downtime. The network and computer systems were running so smoothly that the manager had plenty of free time on his hands. The manager said, “No worries,” and accepted his boss’s decision. But just minutes before his departure, he installed malicious code that would execute six months later and then self-destruct without leaving a trace of the act. The software time bomb did indeed go off, and the company’s financial system was toast. The president had no idea how it happened or whom to blame.
The foregoing dramatization is based on a true account. Damage to an information system is not always attributable to an outsider who breaches the network perimeter. An information security disaster can be created by a privileged employee who has all the access granted to her by the authority of her job position. The high-profile cases of Chelsea (Bradley) Manning and Edward Snowden should serve as a warning to corporations that the private sector needs adequate safeguards against insider threats just as much as government does.[1] An insider can arguably do more harm to an organization’s data assets than an outside hacker could dream of.
With cybersecurity becoming increasingly critical across all sectors of the economy, leaders must understand who poses a threat, not merely which technical control can stop one. Technology is important, but it is not the be-all and end-all of protecting information. A security system can go down or be bypassed. Equally important, if not more so, is understanding the motivations and attitudes of individuals who may commit a cyberattack. “Technical solutions can identify occurrences but understanding human factors allows employers to act preventatively.”[2] Information security is not simply a matter of implementing technology; it involves developing rules and standards for people to follow. A breach of cybersecurity is a people problem, not a technology issue.
At a U.S. Senate committee hearing on 3 May 2017, the then-director of the Federal Bureau of Investigation, James Comey, responded to a question about addressing insider and outsider threats in the federal government. Director Comey answered, “Technically it is a matter of law and policy. It’s about the security culture inside our organizations.”[3] His point stressed the need for information security policies.
Who is an insider? Can an outsider also be an insider?
Rowlingson explored the definition of an insider and concluded that insiders vary widely in profile: some are technically skilled with computers while others know little about them; some are employees of an organization while others are contractors affiliated with it; some use computers in a physical office while others connect remotely from home.[4] An insider isn’t necessarily someone with deep computer knowledge who can penetrate an IT system. An insider may be the least tech-savvy person in the building, prone to making mistakes. She may also be a partner who is not employed by the company but has a connection to the company’s records. Insiders can be hard to detect and easy to overlook because employees are “often viewed as responsible, honest, and patriotic because of passing a background check…”[5] It is not unusual to see a good person turn rogue. This wider view of potential perpetrators expands the traditional definition of an insider.
Knowing all of the actors who could do harm leads to the question of why. What motivates a person to commit a breach? It must be noted that a security issue is not always malicious. A violation may be unintentional: an innocent mistake made by a person who wasn’t trained on or aware of the issue. Breaches can be either malicious or non-malicious.
Studies have examined people’s proclivities to comply, or not, with security policies. Individuals were found likely to break security rules when they learn techniques to rationalize their rule-breaking behavior: denying responsibility, claiming that no one was harmed, or explaining that the act was warranted in order to complete a task.[6] The tendency to break rules can be countered by appealing to individuals’ beliefs about intrinsic benefits (accomplishment and fulfillment), intrinsic costs (guilt and embarrassment), vulnerability, safety of resources, and self-efficacy. Bulgurcu and team found that individuals who hold attitudes of intrinsic benefit and cost and who have self-efficacy intend to comply with information security policies.[7]
Fear, delivered through warning messages, may play a role in compelling users to abide by security practices. One example of a warning message is a brief note automatically inserted at the top of an incoming email to indicate that it comes from a sender unaffiliated with the recipient’s organization. Another example is a message displayed when a user is about to log in to her computer or to a website, warning that she may be fined or imprisoned for committing a prohibited act.
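As a concrete illustration, the external-sender banner described above can be implemented as a simple filter at the mail gateway. The sketch below is a minimal, hypothetical example in Python; the domain name, function name, and banner wording are assumptions for exposition, not any particular product’s configuration.

```python
# Minimal sketch (assumed names, not a specific mail gateway's API):
# prepend an external-sender banner to an inbound message when the
# sender's domain is not the organization's own.

ORG_DOMAIN = "example.org"  # hypothetical organization domain

BANNER = (
    "CAUTION: This email originated from outside the organization. "
    "Do not click links or open attachments unless you recognize "
    "the sender.\n\n"
)

def tag_external_email(sender: str, body: str) -> str:
    """Return the message body, prefixed with a warning banner
    if the sender's address is outside the organization's domain."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain != ORG_DOMAIN:
        return BANNER + body
    return body
```

A real deployment would hook this logic into the mail server’s filtering pipeline; the point here is only that the banner is a mechanical, rule-based control, which is exactly why (as discussed next) its persuasive effect on different users is not guaranteed.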
Research suggests that warning messages applied universally to all computer users can backfire. Fear-inducing messaging is a double-edged sword in view of self-efficacy. Johnston and Warkentin highlighted that some users may choose to reduce their fear rather than acknowledge the threat and comply with the warning; as a result, they recommended that organizations “devise a strategy in which end users are exposed to fear appeals with language suitable to their efficacy level.”[8] People with high self-efficacy may react differently when actions are forced upon them. An automated warning may be effective for a recent hire; the same warning may not be effective for an experienced employee who already knows how to handle himself.
Research has moved beyond analyzing the tangible consequences of cyberattacks and into understanding human behavior. In developing a comprehensive threat-analysis model, Greitzer and Hohimer included several psychosocial risk factors in which a concerning behavior, such as disgruntlement, disengagement, poor performance, or self-centeredness, could act as a catalyst for a person to carry out a malicious act.[9] In a similar vein, Willison and Warkentin examined offenders’ thought processes to understand what led them to become disgruntled, what caused them to perceive injustice, or how they rationalized away their responsibility.[10] A novel study formalized a theory of how employees across an organization can behave in positive ways to protect the organization’s data, converting insiders into stewards of information security.[11] This concept would make cybersecurity a shared responsibility among all employees rather than a duty compartmentalized for IT experts alone.
Analyses of risky behavior must be presented in a format that decision-makers can act on. The literature indicates that a substantial amount of data would need to be collected on individuals in order to consider a full range of human factors. How will all that data be summarized coherently? And more importantly, can personal privacy be preserved in light of all the data that would be saved and stored?
A monitoring tool proposed for the Department of Defense would measure user behavior much like the Fair Isaac Corporation (FICO) credit score measures creditworthiness. Just as FICO enables lenders to gauge consumers’ creditworthiness, the User Online Risk Score (UORS) model, according to Roberts, would measure an employee’s risk level in using and accessing an organization’s information resources and IT systems. Capturing data on an employee’s use of login credentials, access to physical and digital information sources, visits to websites and the Dark Web, changes in work-related stressors and events such as demotions and complaints, and influences from external sources, UORS would report behaviors across seven dimensions in a dashboard viewable by the employee and actionable for management.[12] Not only would it help the organization mitigate security risks, UORS would also benefit the insider, who would be able to see for himself where corrections are needed.
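The scoring idea can be sketched as a weighted combination of normalized behavioral indicators. The dimension names, weights, and 0–100 scale below are illustrative assumptions for exposition only; they are not Roberts’s actual UORS model.

```python
# Hypothetical sketch of a composite user risk score in the spirit of
# the UORS idea: weight several behavioral indicators (each normalized
# to 0.0-1.0) into a single 0-100 score. All names and weights are
# assumptions, not the published model.

WEIGHTS = {
    "credential_anomalies": 0.25,  # unusual logins, failed attempts
    "data_access_volume":   0.20,  # access to sensitive records
    "web_activity_risk":    0.15,  # risky site / Dark Web visits
    "work_stressors":       0.25,  # demotions, complaints, grievances
    "external_influence":   0.15,  # contact with outside solicitors
}

def risk_score(indicators: dict) -> float:
    """Combine normalized indicators (0.0-1.0) into a 0-100 score.
    Missing indicators are treated as zero risk."""
    total = 0.0
    for name, weight in WEIGHTS.items():
        value = indicators.get(name, 0.0)
        total += weight * max(0.0, min(1.0, value))  # clamp inputs
    return round(total * 100, 1)
```

For example, an employee with only a moderate work-stressor signal (`{"work_stressors": 0.5}`) would score 12.5, while elevated readings across every dimension would approach 100. A dashboard built on such a score would face exactly the privacy questions raised above.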
The literature identifies insiders as a major concern for information security. As physical boundaries blur with remote, telework, and hybrid work arrangements, and as organizations rely on contractors, grantees, and partners, the definition of an insider expands to include many individuals who hitherto would not have been considered. An outsider can indeed be an insider. Any one of these actors could commit a cyberattack that causes serious harm to an IT system or leads to a considerable loss of records.
Monitoring holds the key to mitigating and preventing security breaches, whether malicious or accidental. Technical tools can monitor the operations of an organization’s physical and technological assets. But the same technology used to protect computer hardware and software may be useless or inappropriate for supervising humans. When technology is used to monitor people, its effect can be neutralized by individuals who rationalize their own actions. Monitoring human behavior therefore requires methods devised not by computer scientists but by behavioral experts. Tools and measures grounded in psychology and social behavior are better suited to analyzing a person’s past actions and predicting how the person may react in the near and distant future. Social-scientific methods provide the more appropriate means of evaluating people’s behaviors and actions.
While this essay has focused on the insider threat as a subcategory of information security, conventional threats from outside an organization (for example, an anonymous hacker or an enemy agent) continue to pose high risks. Management must remain vigilant about all possible security breaches and must be capable of handling both the outsider threat and the insider threat.
A final point concerns the potential for an organization to abuse its power by collecting too much data on anyone who has access to information. Not only would it raise administrative costs, but reviewing too much activity may infringe on a person’s privacy and civil liberties. Overly obtrusive monitoring, the “big brother” effect, can feel creepy and may induce ill will among employees toward their employer. Balance is necessary: collect a certain, limited amount of data that effectively predicts future behavior, on one hand, without invading personal privacy, on the other. Securing data and the entire IT infrastructure should not come at the expense of individual freedom. Both data and privacy can and must be protected.
Evidence for Practice
Organizations will be better positioned to prevent security breaches when managers can identify and promptly address the motivational factors or triggers in an employee’s behavior and attitude that could give rise to an information security violation.
An information security policy needs to be crafted with human behavior, intrinsic benefits, and intrinsic costs in mind, since these shape people’s willingness to comply with mandated rules. Not doing so could enable individuals to break the rules and rationalize their behavior to escape culpability.
Fear tactics used to compel employees to comply with cybersecurity rules should be applied carefully, with attention to crafting nuanced warning messages tailored to different types of computer users. Security messages that play on human emotion may produce the opposite effect in individuals who already know the risks and voluntarily act to protect themselves.
Next Steps for Leaders
The head of the IT Department should consult with the head of the Human Resources (HR) Department on how their departments can collaborate on cybersecurity initiatives, security awareness training, and other related programming. Each department provides complementary expertise for implementing an effective information security policy that’s human-centered.
The IT Department should evaluate current security-awareness programs to determine whether they’re producing desired effects. Programs should be changed or adapted if they’re not achieving expected outcomes.
The HR Department should review procedures on disciplinary action in cases of information security violations. How does the organization hold a person accountable for a security breach? Is there due process to allow for discovery of facts and a fair hearing of the alleged violation?
The HR Department should review procedures for offboarding an employee upon separation. Does the exit interview ask questions to ascertain whether the departing employee holds any grudges or negative attitudes toward the organization? Are all of the departing employee’s login credentials, access codes, and identification cards deactivated, so that nobody can use them to gain access to any system or facility? Does the supervisor provide direct and immediate oversight as the departing employee completes final tasks in the last few days of employment?
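One way to make the offboarding review systematic is a checklist that must be fully cleared before the employee’s last day. The sketch below is a minimal illustration; the item names are assumptions drawn from the questions above, not a standard list.

```python
# Illustrative offboarding checklist (item names are assumptions):
# verify that every credential tied to a departing employee has been
# deactivated before separation is finalized.

OFFBOARDING_ITEMS = [
    "network_login_disabled",
    "email_account_disabled",
    "vpn_access_revoked",
    "badge_deactivated",
    "shared_passwords_rotated",
]

def outstanding_items(completed: set) -> list:
    """Return checklist items not yet completed, in checklist order.
    An empty result means offboarding can be signed off."""
    return [item for item in OFFBOARDING_ITEMS if item not in completed]
```

In practice HR would record each completed item as it happens; `outstanding_items(set())` returns the full list for a new case, and an empty return signals that sign-off is safe. The value of the exercise is less the code than the discipline of an explicit, auditable list.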
Open for Discussion
Have you witnessed or experienced a security breach in your organization? How did the organization respond? Do you talk about security risks and potential threats in internal meetings? Feel free to submit your answers in the comment section. If you’re not comfortable sharing your experiences, you can talk to a specialist at Peaceful Governance Institute (PGI) in private; click this link to call the PGI support hotline.
Notes
1. Stew Magnuson, (2013) “Companies Ill-Prepared to Fend Off Insider Threats,” National Defense 98(720): 30.
2. Mike Patterson, (2019) “Insider Threats: More Than Just an IT Problem,” National Defense 104(790): 18.
3. U.S. Senate Committee on the Judiciary, Full Committee Hearing, (2017) “Oversight of the Federal Bureau of Investigation,” 3 May 2017, U.S. Senate. (Accessed 27 December 2025 at https://www.judiciary.senate.gov/committee-activity/hearings/05/03/2017/oversight-of-the-federal-bureau-of-investigation.)
“Senate Judiciary Hearing with FBI Director James Comey,” Transcript of meeting, 3 May 2017, CNN. (Accessed 27 December 2025 at https://transcripts.cnn.com/show/ath/date/2017-05-03/segment/02.)
4. R. R. Rowlingson, (2005) “Inside and Out? The Information Security Threat from Insiders,” Journal of Information Warfare 4(2): 27–28, 35.
5. William E. Kelly, (2018) “Insider Threats: Enemies Within Our Government,” American Intelligence Journal 35(2): 8.
6. Mikko Siponen and Anthony Vance, (2010) “Neutralization: New Insights into the Problem of Employee Information Systems Security Policy Violations,” MIS Quarterly 34(3): 489, 496, 499.
7. Burcu Bulgurcu, Hasan Cavusoglu, and Izak Benbasat, (2010) “Information Security Policy Compliance: An Empirical Study of Rationality-Based Beliefs and Information Security Awareness,” MIS Quarterly 34(3): 540–542.
8. Allen C. Johnston and Merrill Warkentin, (2010) “Fear Appeals and Information Security Behaviors: An Empirical Study,” MIS Quarterly 34(3): 562.
9. Frank L. Greitzer and Ryan E. Hohimer, (2011) “Modeling Human Behavior to Anticipate Insider Attacks,” Journal of Strategic Security 4(2): 41–43.
10. Robert Willison and Merrill Warkentin, (2013) “Beyond Deterrence: An Expanded View of Employee Computer Abuse,” MIS Quarterly 37(1): 1–20.
11. Clay Posey, Tom L. Roberts, Paul Benjamin Lowry, Rebecca J. Bennett, and James F. Courtney, (2013) “Insiders’ Protection of Organizational Information Assets: Development of a Systematics-Based Taxonomy and Theory of Diversity for Protection-Motivated Behaviors,” MIS Quarterly 37(4): 1189–1210.
12. Stephen A. Roberts, (2021) “DoD Has Over 3.5 Million Insiders – Now What?: A User Online Risk Score Framework To Reduce The Insider Threat,” The Cyber Defense Review 6(4): 124–126.
