Forcepoint X-Labs Reveals How Cognitive Bias Leads to Reasoning Errors in Cybersecurity
New whitepaper highlights how human bias can impact decision making and business outcomes, offering unique guidance on overcoming bias through human understanding combined with advanced behavioral analytics
Dubai, United Arab Emirates - 17 June 2019: Forcepoint X-Labs, the world’s first dedicated research division to combine deep security expertise with behavioral science research, has released a whitepaper: “Thinking about Thinking – Exploring Bias in Cybersecurity with Insights from Cognitive Science”. Authored by psychologist Dr Margaret Cunningham, the whitepaper examines six universal unconscious human biases and explores how a deeper understanding of cognitive science plus the application of advanced analytics can improve decision making in cybersecurity - for both the end user and the industry.
Global cybersecurity leader Forcepoint launched the X-Labs division in March 2019 with the remit of using data insights from the entire Forcepoint product portfolio and external research to drive innovation in modern, risk-adaptive security solutions. Forcepoint examines a wide range of human biases, as well as biases in data-driven analytics, with the goal of creating more flexible and effective cloud-first cybersecurity solutions appropriate for today’s intricate landscape.
Six Human Biases Skewing Security Strategies
The whitepaper, part of Forcepoint’s series on cognitive science in cybersecurity, covers six analytical biases in-depth, exploring aggregate bias, anchoring bias, availability bias, confirmation bias, the framing effect and the fundamental attribution error.
“We are all subject to cognitive bias and reasoning errors, which can impact decisions and business outcomes in cybersecurity,” said Dr Cunningham, Principal Research Scientist at Forcepoint. “However, an exceptional human trait is that we are able to think about thinking, and thus can recognise and address these biases. By taking a different approach and avoiding those instances where automatic thinking does damage, we can improve decision making.”
“We regularly see business leaders influenced by external factors,” adds Nicolas Fischbach, global CTO at Forcepoint. “For example, if the news headlines are full of the latest privacy breach executed by foreign hackers, with dire warnings regarding outside attacks, people leading security programs tend to skew cybersecurity strategy and activity toward defending against external threats.”
This is availability bias in action, where an individual high-profile breach could cause enterprises to ignore or downplay the threats posed by malware, poor patching processes or the data-handling behavior of employees. Relying on what’s top of mind is a common human decision-making tool, but can lead to faulty conclusions.
Confirmation bias also unconsciously plagues security professionals. When individuals are exploring a theory for a particular problem, they are highly susceptible to confirming their beliefs by searching only for evidence that supports their hunch. For example, an experienced security analyst may “decide” what happened before investigating a data breach, assuming it was a malicious employee because of previous events. Expertise and experience, while valuable, can become a weakness if people regularly investigate incidents in a way that only supports their existing beliefs.
It’s not my fault, it’s PEBKAC
One social and psychological bias that impacts nearly every aspect of human behavior is the fundamental attribution error. Security professionals have been known to use the acronym PEBKAC, which stands for “Problem Exists Between Keyboard and Chair”. In other words, they blame the user for the security incident. Security engineers are not solely impacted by this bias, as end-users also blame poorly designed security environments for any incidents, or refuse to recognize their own risky behaviors.
Coping with fundamental attribution errors, and the self-serving bias, is not easy; it requires personal insight and empathy. For supervisors and leaders, acknowledging imperfections and failures can help create a more resilient and dynamic culture. Those designing complex software architectures should recognise that users will not all be as security-focused as the designers of a system. Users’ failures are not because they are “stupid”, but because they are human.
Overcoming Bias with Applied Insight
The Forcepoint X-Labs whitepaper aims to assist business leaders and cybersecurity professionals alike by improving their understanding of biases. In this way, it becomes easier to identify and mitigate the impact of flawed reasoning and decision-making conventions. The industry’s efforts to build harmony between the best characteristics of humans and the best characteristics of technology to tackle cybersecurity challenges depend on understanding and overcoming bias.
At Forcepoint, the X-Labs team is currently building a deep understanding of human behavior into its risk-adaptive security solutions, with an end goal of improving business processes and outcomes, reducing friction and enabling the business to thrive and succeed.
Forcepoint Dynamic Data Protection has human-centric behavior analytics at its core and helps security professionals deal with cognitive bias. The product computes and continuously updates a behavioral risk score against a baseline of “normal” behavior for each end-user, wherever and however that user is accessing the corporate network.
Forcepoint’s intelligent systems, informed by the individual risk assessment, then apply a range of security countermeasures to address the identified risk based on an organization’s appetite for risk. For example, Forcepoint Dynamic Data Protection can allow and monitor data access, allow access but encrypt downloads, or fully block access to sensitive files depending on the context of individual interactions with corporate data and the resulting risk score.
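The graduated response described above can be sketched as a simple policy function. The thresholds, function names, and scoring model below are illustrative assumptions for the sake of the sketch, not Forcepoint’s actual implementation or API:

```python
# Illustrative sketch of a risk-adaptive enforcement policy.
# All names, thresholds, and the scoring model are hypothetical;
# this is NOT Forcepoint's actual implementation.

from dataclasses import dataclass

@dataclass
class UserActivity:
    user: str
    risk_score: float  # continuously updated deviation from this user's baseline

def countermeasure(activity: UserActivity, risk_appetite: float = 1.0) -> str:
    """Map a behavioral risk score to an enforcement action.

    risk_appetite scales the thresholds: a risk-tolerant organization
    (appetite > 1) tolerates higher scores before escalating.
    """
    low, high = 40 * risk_appetite, 75 * risk_appetite
    if activity.risk_score < low:
        return "allow_and_monitor"   # normal behavior: observe only
    elif activity.risk_score < high:
        return "allow_but_encrypt"   # elevated risk: encrypt downloads
    else:
        return "block_access"        # high risk: block sensitive files

print(countermeasure(UserActivity("alice", 20)))   # allow_and_monitor
print(countermeasure(UserActivity("bob", 60)))     # allow_but_encrypt
print(countermeasure(UserActivity("carol", 90)))   # block_access
```

The key design point the press release describes is that the same action by two users can yield different responses, because each score is measured against that individual’s own baseline and interpreted through the organization’s risk appetite.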
“People tend to make mistakes when there is too much information, complex information, or information linked to probabilities. By pairing behavioral analytics with security countermeasures, we can decrease bias,” concludes Dr Cunningham.
Announced in March, the second product in Forcepoint’s risk-adaptive portfolio, Dynamic Edge Protection, will enable enterprises to transform their network and security architectures with seamless network and cloud connectivity, taking full advantage of cloud services in a secure manner.
Take Action to Address Bias: Questions for Cybersecurity Professionals
Forcepoint advises that security professionals and business leaders take a few moments to walk through the six biases described in the whitepaper and ask these questions:
• Do you or your colleagues make assumptions about individuals, but use group characteristics to form your assumptions?
• Have you ever been hung up on a forensic detail that you struggled to move away from to identify a new strategy for exploration?
• Has the recent news cycle swayed your company’s perception of current risks?
• When you run into the same problem, over and over again, do you slow down to think about other possible solutions or answers?
• When offered new services and products, do you assess the risk (and your risk tolerance) in a balanced way? From multiple perspectives?
• And finally, does your team take steps to recognize your own responsibility for errors or risky behaviors, and extend the same understanding to others whose errors may have been caused by environmental factors?