For many years, insurance was beautifully simple, its models and underlying capabilities clearly defined. Policies were developed and priced by actuaries. Biometric underwriting data was collected during the application process, and premiums were calculated to reflect the assessed risk and provide appropriate protection.
The biometric data sets were, at best, incomplete, and because they were collected at a single point in time, the longer a policy was in force, the more out of date the data became. There is nothing wrong with this: it is simply how underwriting was done, and the industry has always made the most of the data available.
Nowadays, the types of data available to insurers have expanded tremendously. People are digitally connected in ways that were not possible even a few years ago. Completely novel information is coming from both static sources and continuous live streams. Mobile phones are in virtually every pocket or purse, and wearable devices, many of which double as fashion statements, are tracking and charting every step, breath and heartbeat. Meanwhile, IoT (Internet of Things) sensors are rapidly transforming people’s relationships with their physical environments, producing rich pools of data from which actions and behavioural patterns can be inferred and used as never before.
All of these data streams are being added to the sizeable pools of historical information held in current and legacy policy administration and claims management systems. Together they are already enabling efficiencies such as predictive models that correlate disparate data streams and deliver far greater underwriting speed and simplicity.
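To make the idea concrete, the sketch below shows, in very simplified form, how a wearable-derived signal might sit alongside traditional application fields in a triage model. It is an illustrative assumption, not any insurer’s actual approach: the field names, synthetic data and scikit-learn model are hypothetical stand-ins.

```python
# Minimal, illustrative sketch: combining traditional application data with a
# wearable-derived feature to triage applications for simplified underwriting.
# All fields, thresholds and data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000
applications = pd.DataFrame({
    "age": rng.integers(20, 70, n),                  # traditional application field
    "bmi": rng.normal(27, 4, n),                     # traditional biometric field
    "avg_daily_steps": rng.normal(7_000, 2_500, n),  # wearable stream, aggregated
})
# Hypothetical label: 1 = application historically required full underwriting review
needs_full_review = (
    (applications["bmi"] > 32) | (applications["avg_daily_steps"] < 4_000)
).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    applications, needs_full_review, test_size=0.2, random_state=0
)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
print(f"Hold-out accuracy: {model.score(X_test, y_test):.2f}")
```

In practice, of course, such a model would be trained on real, validated historical outcomes and governed like any other underwriting tool.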
However, these vast commingled pools of traditional and new data can be challenging to navigate, interpret and make sense of. Currently, cloud systems are easing storage and aggregation needs, and sophisticated computer models coupled with fast-evolving technological capabilities are enabling complex analyses of traditional and non-traditional data at high speeds. Insurers that harness these tools and techniques can develop cutting-edge models that improve pricing accuracy, make customised products possible, and change the overall customer experience for the better.
Still, insurer data needs to be clearly understood, defined, and of a quality that will let actuaries and underwriters use it with confidence, as it will be used to support models and decisions that will be locked in for years, if not decades. Insurers are aware they need to learn about and adapt to this new abundance (some might say overabundance) of data, and many companies are already engaged in doing so. If this could be done easily – or, alternatively, if the right data were easily accessible – new frameworks such as dynamic pricing, dynamic underwriting and real-time claims adjudication and payment might be just as simple to use as swiping a card. This, however, is not the case, and for several good reasons.
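As a thought experiment only, the fragment below sketches what the simplest conceivable form of dynamic pricing could look like: a premium periodically re-rated from a recent activity signal. Every name, number and bound is a hypothetical assumption, and, as argued above, real re-rating would face data-quality, contractual and regulatory constraints that no short snippet can capture.

```python
# Conceptual sketch of dynamic pricing: re-rate a premium from a recent
# wearable-derived activity signal, capped at +/-10%. Entirely hypothetical.
def rerate_premium(base_premium: float, avg_daily_steps: float) -> float:
    """Return an adjusted monthly premium from an activity signal."""
    target_steps = 8_000                      # hypothetical reference activity level
    relative_activity = avg_daily_steps / target_steps
    discount = min(max(relative_activity - 1.0, -0.10), 0.10)
    return round(base_premium * (1.0 - discount), 2)

print(rerate_premium(base_premium=50.00, avg_daily_steps=10_000))  # 45.0 (more active: discount)
print(rerate_premium(base_premium=50.00, avg_daily_steps=5_000))   # 55.0 (less active: loading)
```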
Consider the various costs today of owning and using data. In the past, storage and computing were the dominant costs, but as both have become more efficient, their prices have dropped and will most likely continue to do so. Today the true costs are twofold: developing and maintaining the software that processes the data and administers its systems; and making sure the right people are in place to work with the data. Those people must be professionals who understand data domains as well as corporate needs, can develop the platforms and solutions that best serve those needs, and can collect and administer the right data appropriately. It is the rare company that is not scouring the market for professionals with the technology and data science proficiency to do this work. It is a hard task, and it is getting harder every day.
The insurance industry has clearly reached an inflection point.
Data and technology are advancing so rapidly that the gap between current and desired capabilities is widening daily. The work needed to develop a safe, secure, compliant, sustainable and continuously up-to-date solution stack is huge. Concerns related to alternative data sources, computational power, security and privacy are escalating both in importance and in the cost of managing them.
Since the insurance industry’s inception, two types of needed capabilities have remained constant: actuarial and underwriting. Supporting these needs have been ever-evolving sets of data and ways of working with them. Today, a third needed capability is clear: the ability to understand, master and then work with these burgeoning data pools.
In today’s fast-evolving data ecosystem, the ultimate objective for insurers is to empower underwriters and actuaries to focus on strengthening their domain knowledge. It may be simpler and more cost-effective for a company to develop high-functioning, integrated solution stacks at the core level than to assemble small teams to build individual stacks for each need. Indeed, freeing underwriters and actuaries from commoditised activities, and from having to understand the inner workings of their company’s solution stacks, models and analytics, enables them to focus on core activities such as optimised, customised pricing, as well as value-added activities such as improving loss prevention and fraud detection.
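As one hedged illustration of the fraud-detection work mentioned above, the sketch below scores claims for anomalies so that investigators see the unusual ones first. The claim fields and synthetic data are hypothetical, and the unsupervised scikit-learn model stands in for what a deployed system would combine with many more signals, business rules and human review.

```python
# Illustrative sketch: unsupervised anomaly scoring of claims so the most
# unusual ones are surfaced for investigation. Fields and data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
claims = pd.DataFrame({
    "claim_amount": rng.lognormal(mean=8, sigma=0.5, size=500),
    "days_since_policy_start": rng.integers(1, 3_650, size=500),
    "prior_claims": rng.poisson(0.3, size=500),
})

detector = IsolationForest(contamination=0.02, random_state=1)
claims["flagged"] = detector.fit_predict(claims) == -1  # True = review first
print(claims[claims["flagged"]].head())
```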
For most people, data is foreign, a cloudy concept that surfaces into consciousness as the hulking servers and mainframes in movies, or as computer programmers in jeans and hoodies at 3 a.m., writing code amidst junk food wrappers. The reality is that data is now deeper and broader than ever. Data for data’s sake – that is, data that cannot be analysed, or from which information cannot be extracted and used effectively and appropriately – is not worth much. Data can challenge privacy and security or present opportunities to enhance a company’s presence and reach. It is not the holy grail of our age, but the reflected perception of that which defines us.