Risk to resilience: How to govern AI in facilities


Artificial intelligence (AI) is rapidly changing how facilities are managed, unlocking efficiencies in predictive maintenance, space utilization, and occupant comfort. But with these opportunities come new vulnerabilities. As AI becomes embedded in the systems that power critical infrastructure, facility leaders must consider not only performance but also governance, security, and trust. The stakes are high: poorly implemented AI can expose sensitive data, create compliance risks, or even open the door to cyberattacks.

In this segment, Stacy Hughes, a seasoned technology and security leader, outlines what organizations should prioritize when integrating AI into facility management. Her guidance provides a roadmap for ensuring AI delivers value safely and responsibly. For more AI insights, watch the full webinar: AI and the next era of facility care.

Transcript

ABM Contributor:
Stacy Hughes
SVP and Chief Information Security Officer

Stacy has 20+ years of experience leading complex IT initiatives for Fortune 500 financial technology clients.


The foundation: Governance structures

AI adoption in facilities must start with governance. A robust structure provides the rules, checks, and accountability needed to safeguard data and operations. Key elements include:

  • AI policies: Clear guidelines on acceptable use and organizational expectations help establish consistency and accountability.
  • Inventory of use cases: Tracking how and where AI is being applied, whether developed internally or sourced from third parties, ensures visibility across the enterprise (a minimal sketch of such an inventory appears below).
  • Data classification: Understanding what types of data are used in AI models is critical. Sensitive information requires compliance with privacy regulations and additional safeguards.
  • System integration: AI does not exist in isolation. Effective governance means ensuring AI integrates seamlessly with existing security tooling and monitoring systems.
  • Testing protocols: Rigorous validation processes confirm that AI models perform as expected and help avoid inaccuracies before deployment.

Without these pillars, organizations risk implementing AI solutions that are inconsistent, insecure, or misaligned with broader business and compliance goals.
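
To make the inventory and classification items concrete, here is a minimal Python sketch of an internal AI use-case registry. It is a sketch under stated assumptions, not a prescribed implementation: the field names, sensitivity tiers, and example entries are illustrative and are not drawn from Hughes's guidance.

```python
# A minimal sketch of an AI use-case inventory with data classification.
# Field names, tiers, and entries below are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum

class DataClass(Enum):
    PUBLIC = 1
    INTERNAL = 2
    SENSITIVE = 3   # e.g., occupant or personal data subject to privacy rules

@dataclass
class AIUseCase:
    name: str
    owner: str
    vendor: str | None                # None for internally developed models
    data_classes: set[DataClass] = field(default_factory=set)

    def needs_privacy_review(self) -> bool:
        # Sensitive data triggers additional safeguards before deployment.
        return DataClass.SENSITIVE in self.data_classes

registry = [
    AIUseCase("hvac-predictive-maintenance", "facilities-eng", vendor=None,
              data_classes={DataClass.INTERNAL}),
    AIUseCase("badge-anomaly-detection", "security-ops", vendor="ExampleVendor",
              data_classes={DataClass.SENSITIVE}),
]

for uc in registry:
    if uc.needs_privacy_review():
        print(f"{uc.name}: route to privacy review before rollout")
```

Even a lightweight registry like this gives an organization the enterprise-wide visibility the list above calls for: every model, internal or third-party, has an owner and a known data footprint.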

Ensuring reliability in AI-driven security

Cyber threats evolve daily, and AI-driven systems must evolve with them. For security-focused AI applications, Hughes highlights several best practices:

  • Model updates: AI tools must continuously integrate the latest cyber intelligence to stay ahead of new attack methods.
  • Test environments: New AI functionality should be validated in isolated environments before being deployed in production to prevent disruptions (see the sketch below).
  • Feedback loops: Continuous communication between testing and deployment phases ensures that AI models improve over time rather than stagnate.
  • Integration with security operations: AI must be embedded within existing monitoring and response workflows, providing teams with additional visibility and actionable intelligence.
  • Auditing: Regular audits verify compliance, validate model performance, and identify areas for refinement.

These measures transform AI from a theoretical advantage into a reliable part of a facility’s defense strategy.
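
As a concrete illustration of the test-environment, feedback-loop, and auditing practices above, the following Python sketch gates a hypothetical model update through an isolated test stage and records each step. The evaluate() placeholder, score threshold, and model names are assumptions for illustration, not a description of any specific tool.

```python
# A minimal sketch of gating an AI model update through an isolated test
# environment, with a simple audit trail. Names and thresholds are assumed.
import datetime

AUDIT_LOG: list[dict] = []

def audit(event: str, **details) -> None:
    # Regular audits need a durable record of what was tested and deployed.
    AUDIT_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "event": event,
        **details,
    })

def evaluate(model_version: str, environment: str) -> float:
    # Placeholder: replay recent threat data against the model in the
    # isolated environment and return an accuracy-style score.
    return 0.97

def promote(model_version: str, min_score: float = 0.95) -> bool:
    score = evaluate(model_version, environment="isolated-test")
    audit("tested", model=model_version, score=score)
    if score < min_score:
        # Feedback loop: failed results go back to the model team.
        audit("rejected", model=model_version, score=score)
        return False
    audit("promoted", model=model_version, environment="production")
    return True

promote("threat-detector-v42")
```

The design choice here is that promotion to production is impossible without a passing test-stage score, and every decision, pass or fail, lands in the same audit log that a later review can replay.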

The human element: Educating the workforce

Technology alone cannot secure facilities. Workforce readiness is essential, especially as cybercriminals increasingly weaponize AI. Malicious actors now use AI to create convincing phishing emails, deepfake audio, and even fraudulent video content. Without proper awareness, employees can become the weakest link in the security chain.

To address this, Hughes recommends embedding AI awareness into security training programs. By teaching staff to recognize suspicious communications and encouraging proactive reporting, organizations can strengthen their human firewall. Employees who understand the risks posed by AI-driven attacks are better equipped to respond effectively and reduce vulnerabilities.

Balancing opportunity with responsibility

The promise of AI in facility management is immense. From predictive maintenance to smarter energy use, the efficiency gains are clear. Yet as Hughes emphasizes, technology is only part of the equation. Roughly 30 percent of AI's value comes from the tools themselves; the remaining 70 percent depends on people and processes. Governance, testing, and education ensure that AI adoption strengthens facilities rather than introducing new risks.

Looking ahead: Integrating trust into AI strategies

The integration of AI into facilities is not a one-time project—it is an ongoing journey. Building trust requires leaders to:

  • Establish governance frameworks before large-scale adoption.
  • Treat AI as a dynamic system that requires updates, audits, and refinements.
  • Equip their workforce with the knowledge to identify threats and act responsibly.

AI will continue to evolve, and so will cyber threats. Organizations that commit to governance, resilience, and education will not only unlock AI's full potential but also protect the people and environments they serve.

With strong governance and secure systems in place, the next step is unlocking AI’s potential to elevate the client experience. See how Bob Clarke is using AI to reshape engagement—making operations more personal, responsive, and efficient.
