Artificial intelligence (AI) and machine learning have the potential to transform the way actuaries and other insurance professionals work, improving efficiency and accuracy while reducing the time and effort required.
Large language models (LLMs) and natural language processing, exemplified by newer models such as ChatGPT, represent a powerful emerging use case for AI in insurance. These models are designed to solve the task of “next word prediction”: taking a sequence of input words and predicting the word that follows. The models are trained on large datasets of text, including diverse sources such as Wikipedia and books, and use statistical and probabilistic methods to predict the most probable next word.
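To make the idea of next-word prediction concrete, here is a minimal sketch using a toy bigram model. The corpus and the counting approach are illustrative assumptions only; real LLMs use neural networks trained on vastly larger datasets, but the underlying task is the same: given the words so far, pick the most probable next word.

```python
from collections import Counter, defaultdict

# A tiny, made-up corpus for illustration.
corpus = (
    "the policy covers water damage . "
    "the policy excludes flood damage . "
    "the claim involves water damage ."
).split()

# Count how often each word follows each preceding word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed next word after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("water"))  # "damage" in this toy corpus
```

Even this toy model captures the core statistical intuition: prediction quality depends entirely on the patterns present in the training text.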
AI in insurance use cases
In the insurance industry, this approach has unlocked several use cases. These models can interpret claims notes, summarize information for claims handlers, and extract information from insurance policy contracts. They can also be used for fraud detection and web scraping for commercial lines underwriting. By using AI to extract insights from unstructured data, insurance companies are improving their claims handling processes, reducing fraud, and making more accurate underwriting decisions.
Beyond understanding text, the application of predictive AI is rapidly growing in the insurance industry. Predictive AI uses statistical and machine learning techniques to understand images, analyze historical data to make predictions about future events, and uncover hidden patterns in data. For example, predictive AI can analyze images of buildings to learn about their construction properties or read claim notes to predict the likelihood of future litigation.
While AI has the potential to revolutionize the insurance industry, there are also concerns about its risks and limitations. For example, there is a risk of perpetuating biases and inaccuracies if the models are not properly developed and implemented. There is also a risk of breaching attorney-client privilege if the models are used to analyze legal documents. In some cases, generative AI tools can “hallucinate”, fabricating data and citing nonexistent sources for it.
Mitigating risks and ensuring safe AI implementation
To avoid these risks, it is important for data scientists building these AI tools, as well as their end users, to have appropriate oversight and expertise to ensure safe development and implementation. It’s also important to be transparent about how the models are being used and what data is being collected. Human supervision is crucial to carefully review the outputs and ensure the results are accurate and unbiased.
AI has the potential to improve insurance processes and outcomes by extracting valuable insights from unstructured data, making more accurate predictions, and enhancing customer service. However, it is important to approach AI with caution and ensure that the models are properly developed and implemented to avoid perpetuating biases or introducing inaccuracies. With the right approach, AI can empower insurance companies and their clients to remain competitive while delivering superior service to their customers.
Watch our webinar and explore real-life insurance applications of natural language processing and AI, including Marsh’s Blue[i] Analytics Solutions.