The following is a guest article by Lauren Spiller, Enterprise Analyst at ManageEngine
Healthcare systems are putting significant effort into AI oversight: developing training programs, implementing risk assessments, vetting tools for compliance, and creating guidelines. Yet despite these well-intentioned initiatives, a stark disconnect has emerged between official oversight and employee behavior.
According to a recent survey, 70% of healthcare office workers report that their colleagues are using unauthorized AI tools. Even more striking, 90% of healthcare IT professionals acknowledge that employees are adopting AI faster than these tools can be properly assessed and approved.
While these numbers are concerning, our findings reveal this isn’t about malicious rule-breaking. Instead, three key factors explain why well-meaning healthcare workers adopt AI faster than their organizations can keep pace: healthcare’s adopt-first culture, AI’s unprecedented accessibility, and widespread misunderstanding of data risks.
To understand why this disconnect persists, we spoke with cybersecurity expert Lee Kim and healthcare operations executive Dr. Anthony Tedeschi. Their insights reveal not only what’s driving unauthorized AI use, but how healthcare leaders can create governance that their employees will actually follow.
Adopt First, Ask Later
If healthcare’s rapid AI adoption feels overwhelming, Tedeschi offers some perspective: we’ve been here before.
The senior executive, who has overseen multiple hospital turnarounds and healthcare system transformations, recalls when texting first entered healthcare in the early 2000s. Healthcare workers started using it for everything from patient updates to shift scheduling, often before their organizations had security protocols in place.
The lesson from that era wasn’t that healthcare workers were reckless, but that their mission-driven culture made them early adopters of anything that could help patients or colleagues. Kim, a global authority on AI and digital risk who serves as Senior Principal of Cybersecurity and Privacy at HIMSS, sees this same culture as a driver of AI’s adoption challenges today: “In hospitals and healthcare organizations, from a human perspective, we’re predisposed to help people. We’re predisposed to trust each other and collaborate with each other. This is what we do.”
While valuable, this shared instinct can lead to rapid adoption of new tools without formal vetting—a pattern Tedeschi has observed throughout his career. “Any time new technology comes into healthcare, the experience I’ve had is that many people are ready to jump in and use it, oftentimes before we’ve had the chance to develop a framework.”
That’s not to say organizations aren’t developing such frameworks—in fact, they’re investing heavily in AI oversight. U.S. hospitals and healthcare organizations are actively creating internal AI governance frameworks to manage risk and promote responsible use. The American Hospital Association supports these efforts through resources and advocacy aligned with federal AI policies, while the American Medical Association provides practical toolkits to help build leadership and oversight structures.
These efforts are especially critical in the U.S., where the healthcare data governance landscape is fragmented. But even the most comprehensive governance programs face a fundamental challenge: the tools themselves have never been more accessible.
Free, Instant, and Irresistible
If there’s any one thing compounding the cultural problem of AI in healthcare, it’s the tools’ ubiquity. “I’m hearing more and more colleagues say it’s hard not to use AI,” says Tedeschi. “It’s built into our search engines now. It’s everywhere.”
Not only are these tools everywhere—they’re also largely free and instant, removing barriers that might otherwise prompt employees to seek approval first. Combine that with the misconception that AI chats are confidential, and patient data is at serious risk. “There’s this idea that I’m just here doing my work, and only I can see it,” Kim explains. “Of course, if you’re using a free AI tool, we don’t know what happens with that data.”
This awareness gap is a real problem in healthcare. Just over half (52%) of surveyed office workers think there’s no risk in using unauthorized AI tools, while 40% trust themselves to use AI responsibly even without approval. Nearly a third (32%) see no issue in using AI tools without IT approval as long as they’re on their own devices, and over a quarter (26%) think it’s okay if the tool seems low-risk.
Meanwhile, 58% of IT workers think employees don’t understand AI-related security risks at all—and the same percentage believes senior leaders also underestimate the risks of shadow AI. Those risks, Kim emphasizes, are grave: “If patient safety is at risk because of the use of an unauthorized AI tool, that’s obviously a tragedy.”
If healthcare organizations are going to close this gap, they must first understand the misconceptions that make workers and leadership alike so confident in their use of unauthorized AI tools.
The Data Safety Blind Spot
Protected health information, or PHI, encompasses some of the most sensitive data that exists about a person: fingerprints, Social Security numbers, brain scans, and psychological data, all of which healthcare organizations should handle with the utmost discretion.
Intentionally sharing PHI with an unvetted AI tool, then, should be unthinkable to any medical worker with patients in their care. But Tedeschi isn’t sure everyone understands what it means to bring PHI into an AI tool. “I think folks often use ChatGPT or another AI tool in their personal space and simply see it as a way to get additional information,” he notes. “So they rely on it. They bring those activities into the workplace.”
Kim sees this as well. “Unfortunately, there has been a lack of education with the public at large, and also in terms of the healthcare organizations themselves. Sometimes it’s simply that there are no policies on the books in terms of tools you can or can’t use.”
This lack of education is a bigger problem than most organizations realize. Employees often don’t grasp that routine interactions with AI tools can expose PHI in unexpected ways—like voice assistants that store conversations, support tickets containing patient identifiers, or seemingly anonymous case descriptions that could identify patients in small communities if the tool collects geographic data, as many free ones do.
The complexity multiplies with personal devices. As Kim points out, when healthcare workers use AI tools on phones shared with family members, they’re creating exposure risks they likely haven’t considered. “If my kid, for example, were to get my phone, what exactly would he be looking at? Would it be healthcare data?”
Closing these knowledge gaps requires a fundamental shift in how healthcare organizations approach AI governance. Rather than simply creating more policies, successful frameworks must address what’s really driving unauthorized adoption: the fact that behind these tools are real, fallible humans trying to do their best work.
Building AI Governance Frameworks for Humans
When asked what might help reduce shadow AI in their organizations, surveyed healthcare IT workers mentioned the following solutions:
- 76% suggested implementing technical controls like network monitoring and blocking specific sites (a minimal sketch of this approach follows the list)
- 64% suggested integrating approved tools into standard workflows and business applications
- 60% suggested implementing clear policies and guidelines on acceptable AI use
- 56% suggested establishing a list of approved/vetted AI tools
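For teams exploring that first suggestion, the monitoring half can start very small. The sketch below is a minimal, hypothetical example: it assumes a proxy or DNS log exported as a CSV with user, timestamp, and domain columns, and uses an illustrative (not exhaustive) list of consumer AI domains. Rather than blocking outright, it simply surfaces which users are reaching unapproved tools so IT can follow up.

```python
# Minimal sketch: flag proxy-log traffic to unapproved consumer AI domains.
# Assumptions (hypothetical): the proxy exports a CSV with header columns
# user,timestamp,domain; the domain list below is illustrative, not exhaustive.
import csv
from collections import Counter

UNAPPROVED_AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
}

def flag_shadow_ai(log_path: str) -> Counter:
    """Count requests to unapproved AI domains, grouped by user."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            if domain in UNAPPROVED_AI_DOMAINS:
                hits[row["user"]] += 1
    return hits

if __name__ == "__main__":
    # Print users with the most shadow AI activity first.
    for user, count in flag_shadow_ai("proxy_log.csv").most_common():
        print(f"{user}: {count} requests to unapproved AI tools")
```

Pairing a report like this with the education-first approach Tedeschi and Kim describe next keeps the emphasis on guiding employees toward vetted alternatives rather than punishing them.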
While these tactical solutions address immediate concerns, Tedeschi and Kim offer a more comprehensive approach that addresses the root causes of unauthorized AI use by prioritizing education. Says Tedeschi, “Let’s get our workforce educated about what AI is, what’s possible, and what some of the problems and tasks are that it’s solving.”
As far as what this education should look like, Kim emphasizes brevity: “Less is more. Gone is the era of one-hour talks or lectures. Create a video that’s one or two minutes, or 30 seconds or less.” Equally important is ensuring resources are simple and digestible. To scale this education effectively, Kim recommends “recruiting a champion or two within teams so they can help disseminate deliverables, messaging, training, etc.”
“In a train-the-trainer kind of context, you can amplify your message through peers and colleagues whom others respect,” she continues. “That respect and trust in people we identify with, look up to, and work across from is important. It shouldn’t be something cold and impersonal that doesn’t have a human face or voice to it.”
Lastly, ongoing training programs won’t get far without cross-functional leadership. As Kim suggests, “Get the key decision makers at your organization—HR, IT, legal, finance, and risk—into a room, form that AI committee, and have them regularly meet to talk about how we should deploy and execute our AI awareness or upskilling program.”
Key Takeaways
Healthcare’s AI governance problem isn’t about rogue employees—it’s about well-intentioned workers navigating their helping instincts, accessible technology, and knowledge gaps.
Traditional approaches that lean on policies and technical controls alone forget who these tools were designed for in the first place: humans. Effective governance, then, requires brief, jargon-free education over lengthy training; peer champions to scale that education; and proactive cross-functional AI committees rather than reactive oversight.
The organizations that succeed won’t have the most restrictive policies; they’ll have frameworks designed for real people who want to do their best work safely.