When AI calls your patients, who is responsible?

Mike Schinnerer, Vice President of Enterprise Product Management, TNS Communications Market

Thirty-eight percent of Americans received a scam call in 2025 in which someone impersonated one of their healthcare providers. That is an eye-opening statistic for healthcare executives, security leaders and compliance officers, who must stay ahead of increasingly sophisticated AI-enabled scams that even the most inexperienced criminals can now deploy quickly and cheaply.


Hospitals, health systems and clinics were already on high alert when the American Hospital Association (AHA) issued a December 2025 advisory warning them to prepare for a growing wave of deepfake scams, in which AI-generated audio, video and text are used in phishing attacks targeting healthcare workers. Deepfake voice phishing scams are damaging enough on their own, but bad actors no longer rely on a single communication channel. We are seeing an increase in multimodal campaigns, where the initial contact may be a text message, followed by a phone call, an email or both to make the campaign appear more legitimate.

The attack that temporarily shut down Kettering Health in 2025 was a textbook example of how disruptive multimodal campaigns can be. At Kettering, a ransomware group triggered a system-wide IT outage in which patients were unable to reach staff and call center support lines. This phase of the campaign created chaos and confusion, which the group exploited by posing as members of the Kettering Health team and requesting credit card payments for medical expenses. It took Kettering weeks to restore normal operations for key services.

Imposter fraud campaigns create legal and financial liabilities for healthcare organizations. Beyond the direct losses from the attacks themselves, which can run into the millions of dollars, failure to comply with data protection laws such as HIPAA, HITECH and GDPR exposes organizations to regulatory fines.

AI accelerates the impact of voice scams

Healthcare organizations have always been a prime target for fraudsters, ranging from sophisticated global criminal operations to more localized ad hoc fraud campaigns. Artificial intelligence is driving these efforts, and healthcare stakeholders are rightly concerned, as recent survey data found that 77% of Americans are very concerned that AI technology could be used to convincingly impersonate their voice or identity to access sensitive accounts.

What is notable within the data is that bad actors are equal-opportunity attackers, using AI to target not only patients but also healthcare personnel. The same survey reveals that three-quarters of consumers are more concerned about scammers impersonating them to access confidential accounts than about receiving fraudulent calls or text messages.

While healthcare organizations face the challenge of balancing layered security with a frictionless customer experience, consumers recognize what is at stake and are willing to do their part. Eighty-four percent of Americans are willing to go through a longer login or customer verification process if it reduces the risk of bad actors accessing their sensitive accounts.

Challenge demographic assumptions

Historically, older Americans have drawn most of scammers' attention. Elder fraud costs seniors more than $3 billion in losses annually, and it has been particularly severe in high-touch industries like healthcare and insurance.

That said, the data shows that AI-powered scammers have leveled the playing field across demographics. While 38% of Americans received a scam call in 2025 in which someone impersonated a healthcare provider about their coverage, the figure rises to 53% for Generation Z and drops to 25% for baby boomers. Similarly, more Gen Z respondents (36%) than any other age group indicated that someone impersonating them had fraudulently accessed their healthcare data.

Address the expanding attack surface

Despite consumers’ growing distrust in the authenticity of communications coming from their healthcare provider, the voice channel remains a preferred communication method for patients: 65% of adults would rather communicate with their healthcare provider through a phone call than through text messages, apps or websites.

As a result, healthcare companies are prioritizing protection of the voice channel just as they do their networks, data, web, cloud and physical infrastructure. Protecting brand reputation and patients requires a comprehensive voice security strategy that includes:

  • Present critical call information. By branding calls with the company name and logo, healthcare organizations can identify themselves, giving patients a clearer picture of who is trying to reach them.
  • Prioritize call authentication. Proper call validation allows healthcare providers to confirm the origin of each call by verifying that it is coming from their identified number (see the sketch following this list).
  • Implement spoofing protection. Calls that fail authentication should be blocked before they reach patients and other stakeholders, preventing scammers from ever making contact.
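To make the authentication and blocking recommendations concrete, below is a minimal, illustrative Python sketch of how a verifier might decode the PASSporT token that STIR/SHAKEN (RFC 8224/8588) carries in a SIP Identity header and apply a blocking policy. This is a sketch under stated assumptions, not TNS's implementation: the function names are hypothetical, and a production verifier must also fetch the signer's STI certificate from the x5u URL and validate the ES256 signature, steps omitted here.

```python
# Illustrative sketch: decode the PASSporT (RFC 8225) carried in a SIP
# Identity header under STIR/SHAKEN and apply a simple blocking policy.
# NOTE: this skips cryptographic verification (fetching the STI certificate
# from the x5u URL and checking the ES256 signature), which any real
# verifier must perform before trusting these claims.
import base64
import json

def b64url_decode(part: str) -> bytes:
    # JWT segments are base64url-encoded without padding; restore it first.
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def inspect_passport(identity_header: str) -> dict:
    # An Identity header looks like: "<token>;info=<x5u>;alg=ES256;ppt=shaken".
    token = identity_header.split(";")[0].strip()
    header_b64, payload_b64, _signature = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    payload = json.loads(b64url_decode(payload_b64))
    return {
        "signing_cert_url": header.get("x5u"),       # where the STI cert lives
        "attestation": payload.get("attest"),        # "A" = full attestation
        "calling_number": payload.get("orig", {}).get("tn"),
        "called_numbers": payload.get("dest", {}).get("tn", []),
    }

def should_block(info: dict) -> bool:
    # Hypothetical policy: treat anything below full ("A") attestation as
    # unverified and block it before it reaches a patient. Real deployments
    # make this decision only after the signature has been validated.
    return info["attestation"] != "A"
```

In practice this logic runs in the carrier or enterprise voice network rather than on a handset; the point of the sketch is simply that each authenticated call carries signed, checkable claims about its origin, which is what makes the "block before it reaches patients" policy in the list above enforceable.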

Managing the risks that AI-powered imposter fraud poses to the voice channel, and evaluating emerging strategies and technologies to mitigate them, will help healthcare decision makers protect both their organizations and their patients.


About Mike Schinnerer

Mike Schinnerer is Vice President of Enterprise Product Management at TNS, with specific responsibility for the TNS Communications Market. He oversees TNS enterprise authentication and spoof protection, enterprise branded calling, phone number reputation monitoring, and TN Insights. Mike first joined TNS in 2010 and spent 12 years managing caller ID products before returning in 2024 after a role at cybersecurity startup Lookout (acquired by F-Secure).
