The Hidden Risk: Uncontrolled AI in Transit Operations
By Michael A. Echols | 5/5/2025
MICHAEL A. ECHOLS
CEO, Max Cybersecurity LLC
Creator of AiM FRAME™ OT AI Maturity Framework
Former DHS Critical Infrastructure Leader

We are no longer simply modernizing transit; we are now automating it. This means artificial intelligence (AI) is appearing in places it never existed before, including signal systems, predictive maintenance platforms, video analytics, and dynamic scheduling. While these innovations promise efficiency and resilience, they can also introduce a dangerous level of complexity. This is especially true in operational technology (OT) environments, where physical consequences are real and immediate. In many cases, vendors are introducing AI to transit agencies that have little knowledge of what it does or how it is trained.
The problem is that AI does not just follow instructions. It learns, and its adaptive behavior grows stronger over time. An organization’s AI tools may be learning, adapting, and sometimes acting in ways their creators did not anticipate. Without governance, oversight, and the right controls, AI can take well-intentioned improvements and turn them into operational vulnerabilities.
Let me bring this home with an example that is recognizable to transit professionals. Imagine a transit system deploys AI to optimize power usage across its traction power substations. The model learns that certain power demands fluctuate during low ridership periods and begins suppressing loads accordingly. But one day, during an unexpected service disruption, the system fails to restore power fast enough because the AI deprioritized what it viewed as “nonessential” load zones. No human overrode the logic, and no one noticed the drift until passengers were stuck and schedules collapsed.
That is not a fluke. It is what can happen when transit agencies fail to govern the AI environment. Unfortunately, it is increasingly likely to happen because agencies are underfunded and leadership is rarely proactive about potential liability without precedent.

Governance Is Not Optional, It Is Survival
Many agencies believe they have made major strides in OT cybersecurity. But those gains are fragile, and AI integration can quietly reverse them. When machines begin making decisions that were once human, the absence of guardrails becomes a critical weakness. In transit, AI that is not aligned with mission goals can undermine safety, equity, and public trust.
We need to stop treating AI as a plug-and-play feature. It is not just smarter software; it is a decision-maker. And if we don’t define how it makes decisions, it will define that on its own.
Where the Gaps Are—and Why They Matter
AI does not just pose technical risks. It creates new governance challenges that most transit agencies are not prepared to manage. Here is where things are breaking down:
- Silent Failure in Physical Systems
AI can deprioritize anomalies it considers statistically insignificant, even if those anomalies indicate mechanical failure. This leads to undetected issues in trackside sensors, brake systems, or escalators until a catastrophic event forces manual intervention.
- Unintended Bias in Service Delivery
Left unchecked, AI can reinforce inequities by reallocating resources away from low-ridership or underserved communities. Algorithms do not understand social context unless we teach them, and in transit, equity is not optional. These requirements are core to the mission.
- Liability Without Clarity
When something goes wrong, who is accountable: the vendor, the engineer, or the AI? Without formal governance structures, no one owns the risk, which leaves agencies exposed to legal, reputational, and regulatory fallout.
- Conflicted Response During Emergencies
If AI systems are not designed to coordinate during incidents, they can act against each other, or worse, against emergency protocols. This erodes the very resilience that modernization is supposed to strengthen.
A Smarter Way Forward
Transit leaders do not need to halt AI adoption; rather, they need to govern it. The path forward is about clarity, not complexity. Start here:

- Establish AI Governance Now
Create an OT AI Steering Committee that includes cybersecurity, engineering, operations, and legal. This group should define how AI systems are evaluated, monitored, and held accountable before deployment, not after.
- Map Existing AI Footprints
You may already have AI embedded in vendor platforms, control logic, or asset management tools. Conduct an inventory so you know what is learning, adapting, or making decisions in your environment.
- Use a Maturity Framework to Get Grounded
Do not reinvent the wheel. Tools like the new AiM FRAME™ or the NIST AI Risk Management Framework can help assess your agency’s readiness, identify gaps, and prioritize investments that promote safe, reliable, and equitable AI integration.
Through organizations such as the Department of Homeland Security and APTA, we have spent years building cybersecurity programs, investing in resilience, and modernizing our systems. But without AI governance, we risk losing control over what we have worked so hard to secure. The stakes are too high to ignore.
Transit is not just about moving people. It is about doing it safely, fairly, and with intention. Let us make sure our AI understands that, too.