In the September 2015 edition of Australia ReView, in his article “Lessons from the Recent Australian Group Market Experience,” Colin Yellowlees, Chief Pricing Actuary for RGA Australia, commented that “without good data we are flying high up in the clouds with only the occasional glimpse of what’s happening on the ground.”
This was written from a risk management perspective, but also rings true for those of us involved in the financial reporting process, in particular for valuation actuaries.
A key focus of the valuation function is to calculate policy liabilities in respect of the business that an insurer or reinsurer has on its books and to reserve appropriately to meet these liabilities. The policy liability must provide for:
- a best estimate value of the liability of the company in respect of obligations under life insurance contracts; and
- a uniform emergence of profit in respect of life insurance contracts.
To calculate policy liabilities, a valuation actuary needs to understand the insurance contracts that the insurer has written and make estimates about the future expected financial performance of those same contracts. This calculation process typically involves projecting key financial information such as premium income, claims outgo and expenses for the lifetime of each policy, and calculating reserves in line with relevant accounting standards.
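To make this concrete, the minimal sketch below (in Python, with all parameter names, values and the single lapse decrement invented for illustration) projects expected net cashflows for one policy and discounts them to a best estimate liability. A production valuation model would, of course, work across many policies, multiple decrements and the profit-margin mechanics required by the accounting standards.

```python
# Minimal sketch of a best estimate liability for a single policy.
# All parameter names, values and the single lapse decrement are
# illustrative assumptions, not any insurer's actual model.

def best_estimate_liability(annual_premium, annual_claims, annual_expenses,
                            lapse_rate, discount_rate, years):
    """Project expected net cashflows over the policy lifetime and discount them."""
    in_force = 1.0      # probability the policy is still on the books
    liability = 0.0
    for t in range(1, years + 1):
        # Expected net outgo in year t: claims plus expenses less premiums,
        # weighted by the probability the policy remains in force.
        net_outgo = in_force * (annual_claims + annual_expenses - annual_premium)
        liability += net_outgo / (1 + discount_rate) ** t
        in_force *= 1 - lapse_rate   # survivorship into the next year
    return liability

# Example: $500 premium, $480 expected claims, $80 expenses, 10% annual
# lapses, 4% discount rate, projected over a 20-year horizon.
print(round(best_estimate_liability(500, 480, 80, 0.10, 0.04, 20), 2))
```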
So what does a valuation actuary use to inform these projections?
Policyholder and claimant data is the key input to the process. In particular, there are two major sets of data that an actuary needs:
- Current exposure data. This gives the actuary information on the types of contracts that have been written at a policyholder level, including policy information such as the benefit type (e.g. life cover, disability income) and demographic information for the policyholder (e.g. age, gender, smoking history).
- Historical exposure and claims information for the portfolio. This allows the actuary to analyse experience in historical periods such as claims incidence and termination rates (where relevant), as well as lapse rates and other experience items. This historical experience can then be used to inform assumptions about expected experience in future periods (a simple illustration of deriving such rates follows this list).
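As a rough illustration of the second data set in action, the sketch below derives crude claims incidence rates from historical exposure and claims records. The field names and age-band grouping are assumptions made for the example; a real experience investigation would split by many more dimensions and compare observed rates against an expected basis.

```python
# Minimal sketch of deriving crude claims incidence rates from historical
# exposure and claims records. Field names and the age-band grouping key
# are illustrative assumptions.

from collections import defaultdict

def incidence_rates(exposure_records, claim_records):
    """exposure_records: iterable of (age_band, exposure_years)
       claim_records:    iterable of (age_band, claim_count)"""
    exposure = defaultdict(float)
    claims = defaultdict(int)
    for age_band, years in exposure_records:
        exposure[age_band] += years
    for age_band, count in claim_records:
        claims[age_band] += count
    # Crude rate = observed claims / years of exposure, per age band.
    return {band: claims[band] / exposure[band]
            for band in exposure if exposure[band] > 0}

rates = incidence_rates(
    [("30-39", 12000.0), ("40-49", 9500.0)],
    [("30-39", 18), ("40-49", 31)],
)
print(rates)   # e.g. {'30-39': 0.0015, '40-49': 0.0033}
```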
Take the example of the Disabled Lives Reserve (DLR), which, given the volume of disability income business written in Australia and New Zealand, is a large balance on most insurers’ books. This reserve is held to cover the expected future payments for insured lives that are currently disabled.
To calculate this reserve, a valuation actuary needs an understanding of every claim that is currently open in the portfolio (“current exposure data”), together with key data on each of those claims, including what caused the claim, when it occurred and the duration for which benefits are payable. Expected claims payments are projected, with adjustments based on assumptions for the likelihood of the claimant returning to work or dying, to give a best estimate value of the claim. Historical exposure and claims information supports the setting of these assumptions, so the more granular and complete this historical dataset is, the better informed the assumption setting process will be.
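For illustration only, the following sketch values a single open claim under flat monthly termination assumptions; all rates are invented for the example. In practice, termination rates vary by cause of claim, duration, age and occupation, which is precisely why granular historical data matters.

```python
# Minimal sketch of a Disabled Lives Reserve for one open disability income
# claim: expected monthly benefit payments, reduced each month by assumed
# probabilities of recovery or death, and discounted. All rates here are
# illustrative assumptions, not calibrated termination assumptions.

def disabled_lives_reserve(monthly_benefit, months_remaining,
                           monthly_recovery_prob, monthly_death_prob,
                           annual_discount_rate):
    monthly_v = (1 + annual_discount_rate) ** (-1 / 12)  # monthly discount factor
    in_payment = 1.0   # probability the claim is still in payment
    reserve = 0.0
    for t in range(1, months_remaining + 1):
        # Survive the month's terminations before the payment falls due.
        in_payment *= 1 - monthly_recovery_prob - monthly_death_prob
        reserve += monthly_benefit * in_payment * monthly_v ** t
    return reserve

# Example: $4,000/month payable for up to 10 more years, 1.5% monthly
# recovery probability, 0.05% monthly death probability, 4% p.a. discount.
print(round(disabled_lives_reserve(4000, 120, 0.015, 0.0005, 0.04), 2))
```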
In the past, the quality of this policyholder and claimant data has left a lot to be desired, resulting in valuation actuaries having to make estimates to cover incomplete information. This means more judgement is required in the setting of reserving assumptions and, in turn, increases the mis-estimation risk in the reserving process. The industry is getting better at recording higher quality data, and poor recent market experience has added extra impetus to improve this further. These data quality improvements will allow valuation actuaries to improve their assumption sets and make better informed reserving decisions.
Aside from the lack of availability of deep and rich data, an additional challenge for reinsurers and others analysing data from multiple sources is the lack of consistency between data sets produced by the industry. For example, most insurers have their own definitions of occupation classes and categorise occupations uniquely, making rich analysis of experience across multiple insurers in these categories more difficult. Similarly, claims data is managed differently by each insurer leading to challenges in understanding such fundamentals as cause of claim, ancillary versus core benefits, etc. The consistency of this data has been improving over time as insurers have increasingly focused on standardisation (e.g. mapping claims cause data to ICD categories) but there is more that can be done by the industry in this area.
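As a rough sketch of what such standardisation can look like, the mapping below translates hypothetical insurer-specific cause-of-claim codes to ICD-10 chapter labels. The source codes and the mapping itself are invented for illustration; real mappings are considerably larger and maintained against the full ICD classification.

```python
# Minimal sketch of normalising insurer-specific cause-of-claim codes to a
# common standard. The source codes and ICD-10 chapter labels used here are
# illustrative assumptions, not any insurer's actual coding.

# Hypothetical mapping from one insurer's internal cause codes to ICD-10 chapters.
CAUSE_TO_ICD_CHAPTER = {
    "BACK_INJ":   "XIII Musculoskeletal",
    "DEPRESSION": "V Mental and behavioural",
    "CA_BREAST":  "II Neoplasms",
}

def standardise_cause(raw_code):
    """Map a raw cause code to an ICD-10 chapter, flagging unmapped codes."""
    return CAUSE_TO_ICD_CHAPTER.get(raw_code.strip().upper(), "UNMAPPED")

claims = ["back_inj", "Depression", "RSI"]
print([standardise_cause(c) for c in claims])
# ['XIII Musculoskeletal', 'V Mental and behavioural', 'UNMAPPED']
```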
In the United States, the National Association of Insurance Commissioners (NAIC) requires that insurers provide uniform standardised datasets as part of their regulatory returns. This means that insurance data is provided in the same standardised format, providing a consistent foundation for richer data analysis when this data is aggregated. Data quality is also an increasing focus for the Australian Prudential Regulation Authority (APRA), which, effective 1 July 2013, introduced Prudential Standard SPS 250 – Insurance in Superannuation and Prudential Practice Guide LPG 270 – Group Insurance Arrangements, containing new data requirements and standards for superannuation funds. Under SPS 250, superannuation entities must maintain records of “sufficient detail for a prospective insurer to properly assess the insured benefits that are made available, including, for at least the previous five years, the claims experience, membership, sum insured and premiums paid in relation to beneficiaries.”
This growing volume of quality information will be invaluable in helping valuation actuaries understand the risks to which they are exposed and value the business more appropriately. The key challenge for insurers will be ensuring that they have the right processes and people in place to reap the benefits of the increasing volumes of quality data at their disposal.