The Wary Watch: How Marylanders’ AI Awareness Shapes the State’s Path Forward
The relationship between the citizens of Maryland and the rapidly evolving field of Artificial Intelligence is no longer one of curiosity; it is one of informed, yet cautious, engagement. A comprehensive poll released in early November 2025, conducted by the Institute of Politics at the University of Maryland, Baltimore County (UMBC), confirms what many industry observers suspected: awareness of AI is nearly universal, but it is deeply tempered by significant public apprehension. This finding—high recognition paired with specific, pointed worries—presents a defining challenge for both the state’s policymakers and the technology sector aiming to integrate these powerful tools into the Maryland ecosystem.
The poll, which surveyed 810 Maryland adults between October 21 and October 25, 2025, found that a combined 97% of respondents reported hearing or reading at least “a little” about AI, with 54% having heard or read “a lot.” Alongside this pervasive awareness, a majority of respondents who are at least “a little” familiar with the technology expect AI to ultimately have a negative impact on society, while roughly three in ten remain optimistic. This nuanced apprehension is the defining characteristic of the state’s relationship with artificial intelligence as it moves from the lab into the everyday lives of its residents. It underscores that technological capability alone will not drive adoption; demonstrated trustworthiness is the new prerequisite for successful integration.
The Data Speaks: Specific Anxieties Driving Maryland’s Wariness
The power of the recent UMBC survey lies in its specificity, moving the conversation beyond generalized fear to concrete areas requiring immediate focus. The concerns identified by Maryland residents map a clear risk matrix for regulators and developers alike.
The Integrity Crisis: Misinformation and Identity
The most acute anxieties center on the authenticity of information and personal security. The late-2025 findings revealed:
- Informational Integrity: A staggering 81% of Marylanders are concerned about the spread of misinformation and political propaganda fueled by AI. This anxiety is timely, coming as federal and state bodies grapple with the potential for hyper-realistic synthetic media to impact electoral cycles and public discourse.
- Identity Protection: Closely following, 78% cited identity theft and impersonation—including the manipulation of images, video, or audio—as a significant concern. This reflects a tangible fear of digital forgery and the breakdown of trust in digital identity verification.
These figures suggest that any effective regulatory effort, whether initiated by the General Assembly or the Governor’s AI Subcabinet, must prioritize provenance standards and robust authenticity verification mechanisms for synthetic content. The public is demanding guardrails against manipulation, a sentiment echoed by lawmakers preparing for the January 2026 session who are keen to address safeguards against AI-related harms.
Socioeconomic Impact: Education and Employment
Beyond immediate informational threats, Marylanders express significant concern over the technology’s long-term societal shifts, particularly in skill development and the labor market:
- Education and Critical Thinking: 61% of respondents worried about the impact of AI on education and the development of critical thinking skills. This aligns with legislative interest in studying AI use in public schools, as one delegate noted the transformative, yet potentially dangerous, nature of the technology.
- Job Displacement: With 55% worrying that AI would displace people in the workplace, the threat of automation remains a top-tier socioeconomic concern. This figure directly informs the necessity of proactive workforce planning.
Furthermore, a slight majority (58%) noted concern over the decline of interpersonal interactions and relationships, indicating a broader anxiety about technology mediating human connection.
Usage vs. Worry: The Paradox of Adoption
What makes Maryland’s stance particularly interesting is that wariness has not translated into outright avoidance. Despite these significant concerns, the data reveals a state actively adopting the technology. More than 70% of those surveyed reported using AI tools at least occasionally, with a robust 43% using them weekly or more frequently. This duality—high concern juxtaposed with high usage—creates a compelling mandate for policy: the aim cannot be prohibition, but rather the creation of a trusted framework that allows for the “productive” use of AI while strictly mitigating the identified risks.
Policy Considerations Suggested by Public Sentiment Analysis
The concerns identified by the residents of the state offer a clear, data-driven roadmap for immediate policy focus. As Maryland continues to build upon its foundational 2025 AI Enablement Strategy, public sentiment dictates the prioritization of specific governance and transition initiatives.
Prioritizing Provenance and Accountability
The overwhelming anxiety surrounding informational integrity (81%) and identity protection (78%) suggests that any effective regulatory effort must aggressively prioritize technological solutions for provenance, authenticity verification, and accountability for synthetic content. This aligns with the broader call for “responsible innovation” found in related studies, where majorities favor government certification of AI models and post-deployment audits to ensure compliance and safety over unchecked development.
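To make “provenance and authenticity verification” concrete, the sketch below illustrates one minimal pattern: a publisher signs a hash of a media file, and a downstream platform verifies that signature before treating the content as authentic. This is an illustrative assumption rather than a description of any standard Maryland is weighing; production systems would more likely rely on richer frameworks such as C2PA-style signed manifests. The example is written in Python and assumes the third-party cryptography package is available; the file path in the demo is hypothetical.

```python
"""Minimal sketch of a provenance check for a media file.

Assumes a simple (hypothetical) scheme in which a publisher distributes an
Ed25519 signature alongside the file. Real provenance standards embed signed
manifests with far richer metadata; this only shows the core verify step.
Requires the third-party 'cryptography' package.
"""
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def file_digest(path: str) -> bytes:
    """Return the SHA-256 digest of the file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.digest()


def sign_file(path: str, private_key: Ed25519PrivateKey) -> bytes:
    """Publisher side: sign the file digest so others can check provenance."""
    return private_key.sign(file_digest(path))


def verify_file(path: str, signature: bytes, public_key: Ed25519PublicKey) -> bool:
    """Consumer side: True only if the signature matches the file and key."""
    try:
        public_key.verify(signature, file_digest(path))
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    # Hypothetical demo: sign and verify a small sample file on disk.
    import os
    import tempfile

    fd, sample_path = tempfile.mkstemp()
    os.write(fd, b"example media bytes")
    os.close(fd)

    key = Ed25519PrivateKey.generate()
    sig = sign_file(sample_path, key)
    print("authentic:", verify_file(sample_path, sig, key.public_key()))
```

The design point is simply that verification must be cheap and automatic at the point of distribution; any certification or audit regime the state adopts would need to make the equivalent of this verify step a routine check rather than an expert task.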
Maryland lawmakers, preparing for the January session, are considering proposals that directly address these issues, including potential limitations on AI-driven hiring tools and recommendations from a workgroup tasked with protecting consumers from AI harms in employment and housing. The state’s AI Governance Act (SB818) from 2024 laid the groundwork, but the 2025 poll indicates the need for concrete, enforceable standards that build upon the state’s ongoing efforts to establish mature AI governance capabilities within procurement and risk management.
Mitigating Socioeconomic Disruption through Investment
The high concern over job displacement (55%) mandates a parallel, strategic focus on economic resilience. This requires more than just acknowledging the risk; it necessitates investment in educational pathways and economic transition programs designed to upskill and reskill the workforce to withstand automation pressures.
The Governor’s 2025 AI Enablement Strategy already outlines pillars addressing this by planning to:
- Collect and analyze industry data to determine future AI skill requirements for public sector jobs.
- Meet with labor leaders to incorporate worker perspectives on potential impacts.
- Expand AI training for state workers through initiatives like the Maryland Data Academy.
These initiatives, already underway as of mid-2025, directly address the public’s primary socioeconomic fear, demonstrating a commitment to making the workforce resilient to the most significant perceived negative effect of the technology.
The Legislative Imperative: Guardrails Over Gridlock
The August 2025 UMD survey, which showed broad, bipartisan support for AI regulation, reinforces the November findings regarding wariness. Delegate Jesse Pippy articulated the public’s demand plainly, insisting that “There needs to be guardrails on AI” and citing instances in which AI produced inaccurate legal case law. For Maryland policymakers, the path forward is one of responsible stewardship, ensuring that the pursuit of AI-enhanced public services does not erode the foundational trust necessary for governance to function effectively. The focus must be on preventative regulation, as majorities of citizens find arguments for being proactive more convincing than those advocating for unconstrained innovation.
Future Trajectory of AI Integration in the State’s Ecosystem
Ultimately, the findings suggest that the future of artificial intelligence in this jurisdiction will be shaped less by technological capability and more by demonstrated trustworthiness. The public has spoken clearly: they are ready to use the tools, but only to the extent that they believe the builders, deployers, and regulators are actively mitigating the clear and present dangers they have articulated.
The Trust Dividend: Earning Everyday Adoption
The sustained usage rate, with nearly half of the state’s residents engaging with AI tools weekly, indicates a public willing to experiment and integrate. However, this adoption curve is fragile. Should a major incident involving AI-generated misinformation or identity fraud occur, the 81% and 78% concern rates could quickly harden into widespread rejection of the technology across commercial and personal spheres.
Therefore, the success of Maryland’s ongoing structured experimentation—moving away from the “opportunistic” approach of 2024—will be measured not just by efficiency gains in state agencies, but by the public perception of its safety. The state’s planned studies across critical domains like election security, public safety, and infrastructure must yield transparent, accessible findings by their target dates in late 2025 to inform the January 2026 legislative cycle.
Balancing Transparency with Competitive Advantage
The challenge for industry leaders operating within Maryland is navigating the tension between proprietary development and public demand for transparency. While the state continues to build its capacity to responsibly leverage AI to improve constituent outcomes, industry must concurrently adopt high ethical standards, especially concerning data governance and algorithmic bias, which are key components of Maryland’s 2025 Strategy.
The journey forward will be characterized by a continuous negotiation between the allure of powerful new capabilities—such as AI in healthcare or enhanced cybersecurity defenses—and the unwavering demand for safety, security, and the preservation of human-centric values in the evolving digital era. This nuanced apprehension, solidified by the November 2025 poll, serves as the state’s core directive: to champion innovation that is demonstrably accountable to its people.
For the next phase of AI integration in Maryland, the metrics of success must shift. It is no longer about how fast a model can be deployed, but how effectively its deployment can assuage the very real concerns of the populace. From the secure handling of data underpinning AI systems to the clear labeling of synthetic content, every action taken by the state and its corporate partners will either replenish or deplete the fragile reservoir of public trust that currently allows Marylanders to be both highly aware and actively engaged users of artificial intelligence.
The framework established by the AI Governance Act of 2024 and the subsequent 2025 Enablement Strategy provides the structural response; the November 2025 poll provides the necessary public mandate and prioritization list. The coming year will test Maryland’s commitment to balancing the transformative potential of AI with the public’s clear demand for ethical constraint and verifiable safety.
This balancing act is essential for maintaining the state’s position as a hub for technology and innovation. If Maryland can successfully translate its residents’ specific worries into actionable, transparent governance, it stands to create a regulatory model that fosters sustained, beneficial AI integration, moving beyond the initial phase of awareness and into an era of earned confidence.
The key takeaways for stakeholders moving into 2026 are clear: Information provenance is paramount (81% concern), personal security must be guaranteed (78% concern), and workforce adaptation cannot be an afterthought (55% concern). These figures are not abstract polling points; they are the operational requirements for achieving widespread, sustainable AI adoption within the Old Line State.
The commitment to increasing AI literacy among state workers and fostering structured experimentation reflects a positive feedback loop—the government is actively learning about the technology it seeks to govern. This engagement, combined with legislative efforts to understand AI’s role in critical domains like elections and public safety, sets a tone that is measured, deliberate, and highly responsive to its citizenry’s apprehension. Maryland’s path forward is thus illuminated by its own public consciousness: innovation must walk hand-in-hand with established security and ethical protocols.