Effective cyber hygiene – what cyber claims tell us
Global insurance industry players Zurich, Marsh and Gallagher Re recently released studies into cyber claims and what they reveal about how to protect organisations from cyberattack. We look at their research to uncover valuable insights for insurers and their customers in building greater resilience against cyber criminals.
In the Actuaries Institute Cyber Green Paper Taylor Fry co-authored last year, we noted “a vibrant cyber insurance market will do more than provide financial recompense for risks … it can offer clear signals and incentives to business – in the form of eligibility, pricing and sharing of insights – on best practice standards”. The maturing cyber industry is starting to do just that: using the data it holds to offer insight into what best practice looks like. In the latest gesture of industry sharing, Zurich, Marsh and Gallagher Re have each published research in the past couple of months into which cybersecurity controls, among other factors, make a difference in strengthening protections.
Study 1: Using data to prioritise cybersecurity investments
Earlier this year, global broker Marsh released a study entitled Using data to prioritise cybersecurity investments.
What did this study look at?
This study combined two datasets:
- Cybersecurity posture questionnaire for individual organisations
- Historical claims data from November 2020 to November 2021, consisting of 1) Claims events that resulted in a cyber claim being paid, and 2) Notices of circumstances that didn’t cause an insured loss.
It used the two datasets to look at which cybersecurity controls have the greatest effect on decreasing the likelihood of an organisation experiencing a cyber event.
What did it find?
- The control with the largest effect on reducing the likelihood of a cyber event was ‘hardening techniques’ – applying baseline security configurations to system components, including servers, applications, operating systems, databases, and security and network devices.
Rounding out the rest of the top five were:
- Privileged access management – The organisation manages desktop/local administrator privileges via endpoint privilege management.
- Endpoint detection and response (EDR) – Marsh notes cybersecurity best practices have evolved since the 2020-2021 period it studied, with managed detection and response (MDR) and extended detection and response (XDR) superseding earlier EDR tools such as advanced endpoint security.
- Logging and monitoring – The organisation operates its own security operations centre and/or has an outsourced managed security service provider with the following capabilities at a minimum: a) Established incident alert thresholds, b) Security information and event management (SIEM) monitoring and alerting for unauthorised access connections, devices and software.
- Patched systems – The organisation’s target timeframe to patch high-severity vulnerabilities (as defined by the CVSS scoring system) across the enterprise is no more than seven calendar days after release. (CVSS – the Common Vulnerability Scoring System – is commonly used by information security teams to help prioritise remediation efforts. For more information on CVSS, see this article.) A simple sketch of how such a patching target might be monitored follows this list.
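As a rough, hedged illustration of how a patching target like this might be monitored in practice, the sketch below flags unpatched vulnerabilities whose CVSS score is high and whose age exceeds a seven-day service-level target. The severity threshold, the `Vulnerability` structure and the sample data are illustrative assumptions only – they are not drawn from the Marsh study.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative CVSS v3 severity band (scores of 7.0 or above treated as high severity).
HIGH_SEVERITY_THRESHOLD = 7.0
PATCH_SLA_DAYS = 7  # target: patch high-severity vulnerabilities within 7 calendar days

@dataclass
class Vulnerability:
    cve_id: str
    cvss_score: float
    published: date        # date the advisory/patch was released
    patched: bool = False

def overdue_high_severity(vulns: list[Vulnerability], today: date) -> list[Vulnerability]:
    """Return unpatched high-severity vulnerabilities that have exceeded the patch SLA."""
    return [
        v for v in vulns
        if not v.patched
        and v.cvss_score >= HIGH_SEVERITY_THRESHOLD
        and (today - v.published).days > PATCH_SLA_DAYS
    ]

if __name__ == "__main__":
    # Hypothetical inventory for illustration only.
    inventory = [
        Vulnerability("CVE-2021-0001", 9.8, date(2021, 11, 1)),
        Vulnerability("CVE-2021-0002", 5.3, date(2021, 11, 1)),
        Vulnerability("CVE-2021-0003", 7.5, date(2021, 11, 20), patched=True),
    ]
    for v in overdue_high_severity(inventory, today=date(2021, 11, 15)):
        print(f"{v.cve_id} (CVSS {v.cvss_score}) has exceeded the {PATCH_SLA_DAYS}-day patch target")
```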
Study 2: The 10 cyber controls that will help SMEs thwart 70% of cyberattacks
In July, Swiss-based insurer Zurich Insurance Group and researchers at the university ETH Zurich released the results of their joint study in a summary article entitled The 10 cyber controls that will help SMEs thwart 70% of cyberattacks, noting that SMEs often struggle to navigate the complexity of cyber risk due to a lack of resources or know-how.
What did this study look at?
This study focused on the cyber controls implemented by small and medium-sized enterprises (SMEs). Zurich says it cross-checked and validated the controls identified in the study against information gathered from its SME customer questionnaire, as well as benchmarking data from global customer assessments and claims.
What did it find?
Zurich listed the following 10 controls that mitigated 70% of the most common SME cyber risks:
- System monitoring
- Configuration settings
- Malicious code protection
- Baseline configuration
- Least functionality
- Continuous monitoring
- Least privilege
- Access enforcement
- Account management
- Software, firmware and information integrity.
Study 3: Can scanning technologies predict claims?
In June, reinsurance broker Gallagher Re released a study entitled Can scanning technologies predict claims? External scanning (also known as external threat scanning) looks at an organisation’s network from the outside to find, identify and help close potential entry points for unwanted intrusion (e.g. by looking for weaknesses in the network’s firewall). Insurers use external scanning information (typically expressed as an overall score) across the cyber insurance policy lifecycle, including for risk assessment and pricing, responding to emerging events and portfolio management.
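To make the idea concrete, here is a minimal sketch of the simplest form of external visibility such scans provide: checking whether a handful of commonly targeted ports accept a connection from outside the network. The host name and port list are placeholders; commercial scanning providers combine many more signals into their overall scores, and any scan should only target hosts you are authorised to test.

```python
import socket

# Ports often probed by attackers (placeholder list for illustration):
# 3389 = remote desktop protocol (RDP), 445 = SMB, 21 = FTP, 23 = Telnet.
PORTS_OF_INTEREST = [21, 23, 445, 3389]

def externally_reachable_ports(host: str, ports, timeout: float = 2.0) -> list[int]:
    """Return the subset of `ports` that accept a TCP connection from outside."""
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, timed out or unreachable
    return open_ports

if __name__ == "__main__":
    # 'scanme.example.com' is a placeholder; only scan hosts you are authorised to test.
    exposed = externally_reachable_ports("scanme.example.com", PORTS_OF_INTEREST)
    print(f"Externally reachable ports: {exposed or 'none found'}")
```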
“External scanning data is most effective at identifying the worst 20% of risks, while revenue and industry factors were more predictive of claims frequency than the external scanning data.”
Previous Gallagher Re studies found existing external scanning data challenging to use, as each provider applied its own underlying methodology and the resulting scores showed little or no relationship between the levels of risk estimated by different providers. This prompted the reinsurance broker to take a new approach to achieve more valuable insight.
What did this study look at?
To gain a better understanding of the reliability of external scanning data, the authors built a machine-learning model and combined it with historical cyber claims to capture which elements of external scanning (or technographic) data would have been most predictive of a future claim at the point of underwriting. You can find more about the modelling approach here.
They also noted that other publicly available reports on the predictive power of external scanning data have been produced in partnership with vendors, and that there was value in this study taking a vendor-agnostic approach.
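Gallagher Re doesn’t publish its model specification, but a hedged sketch of the general approach – fitting a classifier on technographic features alongside firmographic ones such as revenue and industry, then inspecting which features carry the most predictive weight – might look something like the following. The feature names, the synthetic data and the choice of scikit-learn’s gradient boosting classifier are illustrative assumptions, not the study’s actual method.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for underwriting-time data joined to subsequent claims outcomes.
rng = np.random.default_rng(0)
n = 5_000
data = pd.DataFrame({
    "revenue_m": rng.lognormal(3, 1, n),               # firmographic
    "industry_risk": rng.integers(1, 6, n),             # firmographic (coded 1-5)
    "patching_cadence_days": rng.integers(1, 90, n),    # technographic
    "exposed_ports": rng.integers(0, 10, n),             # technographic
    "web_security_score": rng.uniform(0, 1, n),          # technographic
})
# Synthetic claim indicator loosely driven by the features above.
logit = (0.02 * data["patching_cadence_days"] + 0.3 * data["exposed_ports"]
         - 2 * data["web_security_score"] + 0.3 * data["industry_risk"] - 4)
data["claim"] = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(
    data.drop(columns="claim"), data["claim"], random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Which features carried the most predictive weight?
for name, importance in sorted(
        zip(X_train.columns, model.feature_importances_), key=lambda x: -x[1]):
    print(f"{name:>22}: {importance:.2f}")
```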
What did it find?
The study offered the following insights:
- External scanning data is most effective at identifying the worst 20% of risks, which the authors note is consistent with how many insurers use external scanning data for risk selection.
- Revenue is the greatest predictor of claims. In addition to external scanning data, Gallagher Re also included organisations’ revenue and industry in the models, and these factors were more predictive of claims frequency than the external scanning data.
- When it came to technographic predictors, patching cadence – the speed at which organisations apply patches to critical external-facing vulnerabilities – was the strongest predictive indicator. This highlights “the importance for organisations to maintain a rigorous approach to vulnerability identification and patching”.
- The next strongest technographic indicator was port security. While the study noted that in the previous 18 months remote desktop protocol (RDP) had become a less likely attack vector for threat actors (largely due to security posture improvements), companies with exposed ports remain enticing targets for potential attackers.
- Finally, web security – tightening the security posture of an organisation’s external-facing web presence – was also important. When three web security risk factors (SSL/TLS configuration, web certificates and HTTP headers) were combined, they presented greater predictive potential for cyber claims than either patching cadence or port security (a rough illustration of such checks follows this list).
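As a loose illustration of what checking those web security factors might involve, the sketch below inspects a site’s TLS certificate expiry and the presence of a few common HTTP security headers, using only the Python standard library. The header list and host are assumptions for illustration; a real assessment would cover far more (cipher suites, certificate chains, redirect behaviour and so on).

```python
import socket
import ssl
from datetime import datetime, timezone
from urllib.request import urlopen

# Illustrative subset of HTTP security headers an external scan might check for.
EXPECTED_HEADERS = ["Strict-Transport-Security", "Content-Security-Policy", "X-Content-Type-Options"]

def certificate_days_remaining(host: str, port: int = 443) -> int:
    """Days until the server's TLS certificate expires."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expiry = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expiry.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

def missing_security_headers(url: str) -> list[str]:
    """Return the expected security headers absent from the site's response."""
    with urlopen(url, timeout=5) as response:
        present = {key.lower() for key in response.headers.keys()}
    return [h for h in EXPECTED_HEADERS if h.lower() not in present]

if __name__ == "__main__":
    host = "example.com"  # placeholder host for illustration
    print(f"Certificate expires in {certificate_days_remaining(host)} days")
    print(f"Missing headers: {missing_security_headers(f'https://{host}') or 'none'}")
```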
Finding perspective
When considering the three papers and how they might inform best-practice cybersecurity approaches for insurers and their customers, it’s important to bear in mind:
- The results take into account only those organisations that have taken out a cyber insurance policy. Given cyber insurance coverage ranges from about 6% to 13% of corporations in the UK, the security behaviours of organisations holding cyber insurance may differ from those of organisations that don’t.
- Insurers don’t get the full picture of cyber incidents from policyholders. For example, some organisations might not be aware an incident has taken place, and some incidents might not be significant enough to warrant paying the policy’s excess, or organisations might be worried about the reputational impact of reporting.
- Additionally, insurers don’t necessarily receive a complete picture of an organisation’s cybersecurity posture at the time of underwriting. Questions asked are open to judgment from staff filling in the information requests, and there may be other controls/behaviours/factors which are more likely to influence a cyber claim being made than the factors collected by insurers.
- The predictive power of some controls may not indicate a causal relationship between the control and a particular attack vector, but rather be representative of an organisation’s overall cybersecurity maturity level.
- The cyber landscape is evolving quickly, so what controls worked in the past should be seen as a baseline of minimum security practice, rather than controls organisations can ‘set and forget’.
- The studies looked at different jurisdictions, types of companies and time periods.
A practical path forward
Beyond the important caveats, we found some common themes across the three studies indicating the most effective controls for limiting cyber claims:
- Patching within a limited timeframe of release
- Tightening web security settings
- Ensuring privileged access management controls are in place
- Conducting continuous monitoring of attacks
- Applying hardening techniques across systems and settings.
Reassuringly, these controls are also broadly aligned with the Australian Cyber Security Centre’s Essential Eight cyber mitigation strategies.
An ever-evolving space
The insurance industry’s role in sharing best practice in this space will grow over time, as it accumulates more cyber claims data and takes advantage of technological advances that allow an increasingly dynamic understanding of an organisation’s cybersecurity posture. This understanding has the potential to assist:
- Organisations to focus their cybersecurity spend on the most effective controls
- Insurers to continue to refine their underwriting process – for example, what minimum controls to have in place to accept risk and how to better advise organisations seeking insurance
- Insurers to better identify predictive factors to price risks more accurately, and ultimately reduce claim frequency and therefore premiums
- Governments in developing and refining recommended cybersecurity controls (e.g. Australia’s Essential Eight).
Taylor Fry’s cyber risk team works with corporates, policymakers, regulators and insurers to understand and quantify cyber risk to optimise cyber resilience efforts. For more information on the services we offer, visit our Cyber Risk page.