
AI and professional liability: Balancing rising adoption against growing risk

AI is revolutionizing businesses by enhancing efficiency, but it also introduces new risks.


By Sal Pollaro
Executive Underwriting Officer, Professional Liability
6-minute read

Artificial intelligence (AI) is revolutionizing the operational landscapes of companies across every industry, driving efficiency, innovation and competitiveness. Global AI adoption surged to 72% in 2024.1

Most notably, 56% of businesses are using AI for customer service, 51% for cybersecurity and fraud management, and 30% for recruitment efforts.2 The professional services sector is already reaping the benefits of this sophisticated, multifaceted technology in these significant ways:

  • Data-driven insights and predictive analytics lead to better-informed risk assessments, financial planning decisions and business outcomes
  • Automation of highly manual tasks improves operational efficiencies
  • Seamless, intuitive touchpoints enhance customer experiences and promote client engagement
  • Classification and automatic encryption of data boost compliance with state, federal and international data privacy laws

Any innovation brings new and increased risk, and AI is no exception. While it delivers significant value to the professional services world, it also carries serious risks for businesses. Over half of organizations using AI report inaccuracy, intellectual property infringement and cyber vulnerability as relevant risks, and 44% indicate they’ve experienced at least one negative consequence from AI use.1 Common issues that carry far-reaching professional liability risk include:

Transparency and accountability issues result from the lack of clarity around how AI algorithms make decisions. Datasets with historical biases can lead to misinformation and discriminatory practices. For example, AI’s application as a resume review tool has resulted in claims of age and gender discrimination.

Professional errors cause damage when the AI tool makes a mistake that leads to financial loss or harm to a customer. Professionals who rely on or implement these systems could face liability. For instance, brokers and financial advisors utilizing AI to offer financial advice could face legal risks if the technology prioritizes conflicting interests or generates misleading recommendations.

An uptick in the frequency and severity of cyber events is being fueled by threat actors using AI tools to launch sophisticated cyber-attacks and bypass security measures. Attackers can draw on open-source data to generate and execute widespread phishing campaigns with more legitimate-looking, convincing workplace emails. AI deepfake scams are also increasing in severity as the technology improves its ability to create deceptively realistic videos, images and audio used to impersonate individuals or spread misinformation.


How businesses can reap AI’s benefits and mitigate its risks


Addressing evolving AI risks requires a multifaceted approach. The following four risk management practices, from clear AI usage guidelines and ongoing monitoring of AI systems to a commitment to ethical and responsible AI development and deployment, can help reduce your professional liability risk:

Stay up to date on emerging AI laws

While there isn’t an overarching federal law governing the use of AI in the US, regulators and lawmakers, including the US Department of Health and Human Services, the Federal Communications Commission and the European Union’s Parliament, have recently introduced rules to safeguard and limit the application of AI. The EU’s AI Act, the world’s first comprehensive AI law, entered into force in August 2024, and the US Senate AI Working Group issued its Roadmap for Artificial Intelligence Policy in May 2024.

Create AI usage policies and procedures that align with best business practices

Craft a policy to establish transparency and clear boundaries around your business’s AI usage. A strong policy will provide critical guideposts to frame usage while demonstrating strong corporate governance and responsibility. Use the following as a springboard for your AI policy framework:


  • Document how AI tools can be used to enhance the value of your service
  • List risk management best practices and train your team on them to ensure AI use doesn’t introduce new forms of risk
  • Build in guideposts to ensure humans remain involved in checking the accuracy of AI-generated information
  • Require that all work using generative AI include an appropriate citation, for example: “This content was generated using (tool)”
  • Document the extent of AI usage for internal business operations, like hiring decisions, and external functions, such as AI chatbots that provide customer service

Ensure human oversight across all AI systems and usage

AI must be monitored with continuous human oversight to ensure ethical usage and reduce professional liability. Form an internal AI policy team to regularly audit, evaluate and monitor AI utilization across the organization, appointing specific employees to review systems for biases and ensure regulatory compliance. Assign responsibilities to employees within each department to oversee AI usage within their area. Clearly document and communicate who is responsible for making AI-related decisions, addressing concerns and providing solutions. Require close engagement between your board of directors’ disclosure committees and AI policy teams, with updates provided at least quarterly.

Conduct phased rollouts for new AI technology

Mitigate early AI mistakes by taking a careful, methodical approach to implementing new technologies. Start by defining your goals and choosing tools that best align with business objectives. Ensure diverse datasets are used to train models, tools and systems, conduct stringent quality control to check for unintended biases and set realistic usage expectations with your teams. Invest time and resources in comprehensive employee training to foster adoption and understanding. Conduct pilot testing and implement the new technology gradually across the business to allow for corrections and adjustments throughout the rollout.


Balance innovation with robust risk management


AI presents both great risks and rewards, offering significant growth and profitability potential while creating new security and compliance challenges for those in industries susceptible to professional liability claims. Balance the use of this rapidly evolving, unparalleled technology with proactive assessment measures that limit unintended consequences and bias, so AI can transform your business for the better.


Become an Insider

Subscribe to The Ins—your go-to source for all the ins and outs of today’s insurance industry, delivered to your inbox by Markel US Specialty. 