Underwriting: Using analytics to predict fake insurance claims

Detecting early claim risk before policy issuance can not only help insurers reduce operational costs but also make life insurance more affordable

Updated: Apr 9, 2019 04:08:51 PM UTC

Jasjeet Singh is a Partner and Jayesh Raj a Senior Manager in Financial Services Analytics Advisory at EY


The Indian insurance sector needs to adopt predictive analytics to tackle the risk surrounding early claims, however small that risk may appear. Early death claims (claims received within 0–2 years of policy issuance) are a primary risk focus for Indian life insurers. Typically, early death claim rates range between 0.2% and 1.0% of policies issued, with a high proportion of such claims being fraudulent (for example, dead-man insurance or misrepresentation of health or financial information).

Detecting early-claim risk before issuing a policy can not only help insurers reduce operational costs but also make life insurance more affordable, as the saved cost is passed on to consumers. Despite the focus on reducing early-claim risk, most Indian insurers still rely on intuitive, rule-based frameworks rather than predictive analytics-driven automated workflows for underwriting. This results in high false positives (rejected cases that would not have resulted in a claim), higher physical verification costs and longer decision cycles.

So, what prevents insurance executives from investing in analytics-driven underwriting? Let’s have a look:

Myth 1: Data captured at Policy Login stage is too thin to build statistically significant models

Reality Check: While a life insurance application form might superficially seem to capture only basic customer data, a closer look reveals a wealth of information that can be leveraged to identify risk patterns around affordability, sale location, seller, product, pricing and so on. Moreover, alternative data sources like credit bureaus, social media and socio-economic indicators can further augment this information.
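As an illustration, the sketch below derives affordability and concentration signals from a handful of application-form fields. All field names and values are hypothetical, and pandas is assumed; a real insurer's schema would differ.

```python
import pandas as pd

# Hypothetical application-form records; field names and values are illustrative
apps = pd.DataFrame({
    "sum_assured":   [5_000_000, 500_000, 2_000_000],
    "annual_income": [400_000, 600_000, 1_000_000],
    "premium":       [60_000, 8_000, 25_000],
    "pincode":       ["110001", "400001", "110001"],
    "channel":       ["agency", "banca", "agency"],
})

# Affordability ratios: cover or premium far out of line with stated income
# is a classic early-claim risk signal
apps["cover_to_income"] = apps["sum_assured"] / apps["annual_income"]
apps["premium_to_income"] = apps["premium"] / apps["annual_income"]

# Concentration signal: share of applications sourced from each sale location
apps["pincode_share"] = (
    apps.groupby("pincode")["pincode"].transform("count") / len(apps)
)
```

Ratios like these, alongside seller, product and pricing fields, give a model far more to work with than the raw form suggests.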

Myth 2: Analytics solutions will involve heavy technology investments

Reality Check: With the advent of open-source programming tools like R and Python, the technology investment required to build proof-of-concept models has become minimal. Moreover, models developed in these tools can be converted into rule-based scorecards that are easy to embed into existing front-end underwriting systems.
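As a minimal sketch of that conversion, assuming scikit-learn and purely synthetic data, a shallow tree model can be exported as plain if/else rules that an existing front-end system could encode directly:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic data for illustration: one affordability-style ratio drives the flag
rng = np.random.default_rng(0)
X = rng.random((500, 2))             # e.g. scaled cover-to-income, applicant age
y = (X[:, 0] > 0.8).astype(int)      # toy "early claim" label

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Human-readable rules that can be re-implemented as a scorecard
rules = export_text(tree, feature_names=["cover_to_income_scaled", "age_scaled"])
print(rules)
```

The printed rules are ordinary threshold conditions, so no model-serving infrastructure is needed to deploy them.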

Myth 3: Claim rates in the Indian life insurance market are quite low, so the accuracy of models predicting these scenarios is bound to be low

Reality Check: Given the low early claim rates, some methodologists might argue that the event rate is too thin to build a predictive model on. However, a variety of machine learning techniques (neural networks, gradient boosting machines, etc) and statistical interventions (bootstrapping, multi-sample ensembles, etc) can be used to achieve high accuracy. Some Indian insurers have developed classification models that can identify cohorts as small as 0.5% of total issuances contributing to 50% of all early claims.
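One common pattern, sketched below on synthetic data and assuming scikit-learn, is to bootstrap the rare "early claim" class upward before fitting a gradient boosting machine, then rank applications by predicted risk rather than applying a hard 0/1 cut-off:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic portfolio with a rare event (~0.5%), driven by one risk factor
rng = np.random.default_rng(1)
n = 20_000
X = rng.normal(size=(n, 4))
y = (X[:, 0] + rng.normal(scale=0.3, size=n) > 2.7).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Bootstrap (resample with replacement) the minority class before fitting
pos = np.flatnonzero(y_tr == 1)
boot = rng.choice(pos, size=20 * len(pos), replace=True)
X_bal = np.vstack([X_tr, X_tr[boot]])
y_bal = np.concatenate([y_tr, y_tr[boot]])

clf = GradientBoostingClassifier(random_state=0).fit(X_bal, y_bal)

# Score the hold-out set and rank by predicted risk
risk = clf.predict_proba(X_te)[:, 1]
```

Ranking lets the insurer isolate a small, highly concentrated cohort, say the riskiest 0.5% of issuances, for differential treatment instead of hard rejections.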

While there is a strong case for predictive analytics-based underwriting frameworks, some key things should be kept in mind while building them:

  1. Get the “decision process” around the predictive model right: To leverage the predictive power of analytics, a holistic approach is needed to build a differential underwriting workflow for the modelled risk categories of customers: for example, the very high risk segment (0.5%) is auto-rejected, the high risk segment (1%) is referred for on-ground verification and mandatory medicals, the medium risk segment (5%) is referred for mandatory medicals, and so on.
  2. Build a business case and monitor it closely: Predictive models are designed using historical data patterns, the core assumption being that what has happened in the past will hold for the future. As such, the business cases and cut-offs developed at the design stage might not hold true later, and hence need to be monitored and calibrated continuously.
  3. Model monitoring and re-calibration: Businesses and market conditions are dynamic, so the business mix, risk patterns and risk concentrations also keep changing with time. Regular performance monitoring and periodic re-calibration are therefore imperative to ensure model accuracy over time.
  4. Business discretion around model parameters: This is quite important. Indian life insurers do not typically validate details such as a customer’s income, occupation or address. If any of these factors are determinants of a risk score, and that knowledge is shared (even internally within an organisation), it might lead to applicant data being manipulated.
  5. Patience with experimentation: After all, the Taj Mahal took years to build, and Edison famously found thousands of ways that didn’t work before perfecting the light bulb!
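The differential workflow in point 1 can be sketched as a simple mapping from model risk percentile to underwriting action. The band widths below are the illustrative percentages from the list above, not recommended values:

```python
def underwriting_action(risk_percentile: float) -> str:
    """Map a model risk percentile (0 = safest, 100 = riskiest) to an action."""
    if risk_percentile >= 99.5:   # very high risk: top 0.5% of applications
        return "auto_reject"
    if risk_percentile >= 98.5:   # high risk: next 1%
        return "on_ground_verification_and_medicals"
    if risk_percentile >= 93.5:   # medium risk: next 5%
        return "mandatory_medicals"
    return "standard_issue"       # the rest follow the normal issuance flow

print(underwriting_action(99.7))  # -> auto_reject
```

In practice the cut-offs themselves would come out of the business case in point 2 and be revisited during the re-calibration cycle in point 3.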


While predictive models can help insurers contain origination risk, there is also a strong case for the industry to share risk data. Similar to credit bureaus in banking, establishing a central insurance data custodian to maintain and share industry-wide data repositories, including risk information on geo-locations, sourcing agents, claims and high-risk customers, can enable better underwriting decisions.
