Understanding and Overcoming Everyday Barriers to AI Adoption in Public Health

Artificial intelligence (AI) holds enormous potential to accelerate and improve public health research and practice. Researchers are already using AI to develop research concepts and protocols, write code to aggregate and clean data, draft manuscripts, and automate other processes. U.S. federal health agencies are deploying AI to drive discovery, monitor and respond to emerging public health threats, and improve health care delivery. For overstretched teams working in health departments, community-based organizations, hospitals, and the private sector, AI could boost efficiency and generate additional insights across a broader array of public health challenges. Yet, actual adoption remains uneven both within and across different settings, despite advances that have made AI tools more accessible and powerful.  

What is holding us back?  

Many professionals across the public health ecosystem face barriers that make it difficult to engage with AI tools confidently and effectively, and many of those perceived barriers rest on misunderstandings. Left unaddressed, these underlying issues can stifle AI initiatives or cause them to fall flat, even among motivated teams. By examining some common barriers to adoption, we can better understand what needs to change to unlock AI’s full value in public health practice and research.

Lack of Familiarity and Confidence. Many professionals lack formal training in AI and experience applying it in research and practice. Even those with strong quantitative or programming backgrounds may feel uncertain about when to apply it and how to best integrate it into existing processes and workflows. For community-based researchers and early-career staff, AI can feel even more daunting because of the jargon surrounding it and the plethora of tools used to develop and implement it. Without guidance or a low-stakes entry point, engagement is often delayed or avoided. 

Fear of the “Black Box”. Transparency and accountability are central tenets in public health. Many public health professionals view AI as a “black box” that processes data in ways they cannot fully observe or explain. The lack of transparency makes it difficult to understand how inputs are transformed into outputs, which undermines the trust needed to accept new insights, make decisions, and act. Without a full understanding of how results were generated, it is difficult to explain and stand behind findings, defend recommendations, and maintain credibility with stakeholders.  

Anxiety About Job Displacement. The automation of common research tasks, such as literature reviews, data coding and cleaning, and preliminary analyses, is creating concern about job displacement for early-career staff and others with highly specialized technical skills. Will their relevance to the team be diminished? Will employers replace them with less expensive AI alternatives? Without a clear message that AI adoption is intended as a force multiplier to make teams more efficient and effective, fear can quietly stall adoption.

Unclear Roles and Collaboration Gaps. AI adoption is more than a mere technical challenge. It requires coordination across disciplines, programs, and teams. When roles around leadership, implementation, and evaluation are poorly defined, projects can stall or fail outright. For example, without a clear lead, no one takes responsibility for aligning AI tools with public health goals. If implementation is siloed within a technical team, critical domain expertise may not be incorporated into the solution. When evaluation responsibilities are vague, it becomes difficult to measure success or build trust in the results. In collaborative public health settings, AI efforts are most likely to succeed when roles are clearly defined and shared ownership is established from the start.

Lack of Access to Tools and Infrastructure. Even when teams are motivated and have the skills to work with AI, limited access to tools and infrastructure frequently stands in their way. Common barriers include restrictions on the use of cloud-based platforms, insufficient on-premises computing capacity, and the excessive cost of AI platforms that enable data governance, sharing, and AI implementation. Limited funding often prevents investment in infrastructure modernization and related support services. Security and compliance requirements can also delay or block implementation. Even adoption of free AI tools requires time for evaluation and staff training, and a reliable environment for testing and use. Without these foundational supports, AI remains out of reach for many capable teams. 

How can we accelerate AI adoption?

While the challenges to AI adoption in public health are real, they are not insurmountable. Many barriers can be addressed with small, practical steps that lower the stakes and build confidence. The key is to focus less on the technology itself and more on how people learn, collaborate, and solve problems together. By opening a dialogue about AI adoption and its obstacles, public health teams can move forward in ways that align with their values and most pressing needs.

Learn Together. Exploring AI as a team makes it less intimidating and more useful. Group learning builds shared vocabulary and encourages experimentation. This might mean hosting a lunch-and-learn, going through an online tutorial together, or debriefing after a shared experience with a new AI tool. When people learn together, they are more likely to use and trust what they have learned. 

Start with the Problem, Not the Tool. Adoption is more likely when AI is used to solve real, everyday challenges. Think of tasks that are time-consuming or repetitive, like classifying open-ended survey responses, summarizing interview transcripts, or screening documents for relevance. If the application is clear and grounded in existing work, teams will see AI as support rather than disruption. 

Pilot Small and Build Confidence. Try using AI for one manageable part of a project. For example, use it to organize meeting notes, group themes from feedback, or flag unusual trends in data. Pilots allow teams to explore benefits, spot limitations, and build skills without committing to full-scale change. 
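A pilot like "flag unusual trends in data" can start very simply, even before any AI tooling is involved. As an illustrative sketch only (the weekly counts and the two-standard-deviation threshold are assumptions for the example, not a validated surveillance method), a team might begin with a basic statistical check and then compare an AI tool's output against it:

```python
# Illustrative pilot: flag weeks whose case counts deviate sharply from the
# rest of the series, using a simple z-score rule. The data and threshold
# here are hypothetical examples, not real surveillance values.
from statistics import mean, stdev

def flag_unusual(counts, threshold=2.0):
    """Return indices of values more than `threshold` standard deviations
    from the mean of the series."""
    mu = mean(counts)
    sd = stdev(counts)
    if sd == 0:
        return []  # no variation, nothing to flag
    return [i for i, c in enumerate(counts) if abs(c - mu) / sd > threshold]

# Hypothetical weekly case counts; week index 6 contains a spike.
weekly_cases = [12, 14, 11, 13, 15, 12, 40, 13]
print(flag_unusual(weekly_cases))  # → [6]
```

A baseline this small gives the team something concrete to evaluate an AI tool against, which is exactly the kind of low-stakes learning a pilot is meant to produce.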

Define Roles and Invite Collaboration. AI adoption works best when roles are clear and inclusive. Bring interested staff into early planning, including domain experts. If technical capacity is limited, invite in collaborators from other teams or organizations. Clear roles reduce confusion, highlight hidden strengths, and ensure that AI does not stay siloed among technical staff. 

Emphasize the Human Role. Public health depends on judgment, ethics, and context, which means AI solutions require human engagement across the adoption lifecycle. Making it clear that AI is a tool for acceleration, not replacement, helps reduce fear and build trust. People are essential to framing questions, interpreting results, and keeping equity and accountability at the center.

A Path Forward 

The key to making AI a transformative force in public health is ensuring that public health professionals are equipped and empowered to use it, which means investing in clarity, trust, and shared purpose just as much as tools and infrastructure. When AI is introduced through collaboration, aligned with real-world needs, and guided by public health values, it becomes a catalyst for smarter decisions, faster responses, and better outcomes for all.