Businesses around the world are looking to capture advantages from the use of generative AI. These can include everything from streamlined operations to more-personalized customer experiences to enhanced decision making and innovation. But insurance professionals, including underwriters, also need to be mindful of the added risks that may come with the expanded use of this technology, and the questions that need to be asked when assessing and covering those risks.
These questions are complex, because generative AI introduces unique challenges to underwriting “traditional” insurance risks. For one thing, although generative AI is still a new technology, it has spread much faster than many other digital technologies, and there are unresolved questions about how different industries will adopt it, and for what purposes.
Additionally, the introduction of this technology has raised questions about compliance, regulation, and liability. Which entity is responsible when something goes wrong in the AI ecosystem: the creators of the technology or the end users? Or does liability lie somewhere in between? How will regulators ensure responsible use of AI technology?
AI-enhanced phishing is the tip of the iceberg
Some recent stories in the news have highlighted the potential abuses and misuses of generative AI.
For example, we’ve seen generative AI deliberately used to make phishing and similar schemes for extracting or extorting fund transfers more believable. Even more ominous is the threat that AI could be used to create a digital twin of a senior executive, impersonating that executive or issuing remarks that could cause reputational harm.
More likely to be pervasive, however, are risks arising from well-intended uses of generative AI that inadvertently open up users to liability issues and other systemic risks. The potential implications may range from cybersecurity and privacy concerns to copyright and IP issues, to bias in the recruiting and hiring process, to regulatory risks, physical risks and more.
Several recent legal filings, for example, allege that AI has lent itself to the generation of marketing, product design or other content that infringes on trademarked or copyrighted materials:
- Andersen v. Stability AI Ltd.: Several visual artists filed a class-action lawsuit against the creators of image generators, alleging that these AI tools violate copyrights by scraping images from the internet to train the AI models.
- Getty Images v. Stability AI: Getty Images filed a lawsuit alleging that Stability AI’s use of millions of Getty’s images to train its AI models infringes on Getty’s copyrights and trademarks.
- The New York Times Co. v. Microsoft Corp.: The New York Times sued OpenAI and Microsoft, claiming that the companies unlawfully used millions of Times articles to build their artificial intelligence tools.
Who is most at risk from the expanded use of AI?
At the moment, the companies developing the large language models appear to be the ones defending recent suits, as distinct from the end users of AI-based software solutions. However, companies using a technology solution could also be held liable for the AI’s end result, much as a company may be held responsible for securing its own networks and data. Companies that are not carefully tracking the evolution of this legal issue could find themselves at increased risk in the future.
Companies that seek to develop their own large language models may also face data security risks, because training can incorporate various types of personal data, including protected health or medical data. Because such data are highly regulated, it is critical that the system never inadvertently disclose confidential or personal information. This imposes new requirements for data safety.
Training generative AI models also requires a great deal of computing power, and the associated energy consumption may conflict with an organization’s sustainability goals. Directors and officers (D&O) coverage could also be implicated by a “buy versus build” strategy, and by the decision-making process used in adopting one strategy or the other.
Implications for management liability
Additional likely areas of corporate risk relate to alleged bias or discrimination in areas like hiring. The large language models used in generative AI are trained on data sets that may not be representative of the full range of job candidates. For this reason, several states are considering legislation that would control or monitor the use of generative AI in hiring decisions.
Beyond that, there is also the potential that generative AI may result in poor decision making. For example, if an insurance company is using an AI chatbot that erroneously states that the company will honor a claim, is the company responsible because the chatbot is acting as its agent, or does responsibility lie with the developer of the technology?
Given the breadth of these issues, professional liability could also be affected by how an organization develops its enterprise AI framework. That framework clearly needs to include specific guidelines or principles to ensure that AI is safely and responsibly deployed within the organization. This might include requiring that a human remain in the loop when such tools are used, especially in areas of critical business function or where protected data is implicated, and ensuring that specific decision-making steps are followed.
Staying ahead of new AI-related risks
Beyond the risks that have already manifested themselves, there are multiple types of business risk that remain theoretical today but will likely soon become real and practical.
Unlike property insurance, where the nature of the risk is largely defined, the nature of AI risk has yet to be defined and continues to evolve rapidly, particularly in the absence of robust risk and regulatory frameworks. In the meantime, the uncertainty regarding who will ultimately be responsible for certain AI-related risks makes it important that companies stay ahead of the game in this area.
To sum up, AI holds potential benefits for insureds in many areas, including opportunities to enhance companies’ cyber defenses down the line. Upcoming articles in this series will explore the different underwriting approaches that can be used to categorize and address these risks, such as:
- A risk-based approach, which analyzes the various categories of risk associated with generative AI.
- An ecosystem-based approach, which analyzes the complex chain of players that comprise an AI-driven tech stack.
- A usage-based approach, which discusses various ways in which generative AI end users are applying the technology.
All these approaches have their uses in guiding future underwriting. Without pre-empting any of these areas of discussion, it seems evident that the best general defense against AI-specific risks is likely to be a rigorous governance framework that requires continuous monitoring and discussion of both the opportunities and the challenges. This may call for a central committee or executive group charged with developing a clear view of the risk and legal landscape, so that the organization can respond in a well-prepared and thoughtful way. The caveat will be ensuring that this framework remains nimble enough to let the organization adapt to this exciting area of innovation.