Compliance, AI & the future of trust: Prepare for what’s next in cybersecurity

Article
October 3, 2025

In today’s hyper-connected world, cybersecurity is no longer measured by whether an organization can keep intruders out. The real test is whether it can operate with confidence in a landscape defined by evolving regulations, expanding digital footprints, and transformative technologies like artificial intelligence.

Compliance frameworks provide a necessary foundation, but they are only the beginning. True resilience requires maturity that goes beyond checklists, governance structures that keep pace with innovation, and a culture that makes trust the central outcome of security.

In the second part of our two-part cybersecurity series, ABM leaders in security, legal, and innovation explore how organizations can meet these challenges by harmonizing compliance, governing AI responsibly, and preparing for a future where transparency is as important as technology.

ABM Contributors:


Stacy Hughes, SVP & CISO

Scott Flynn, SVP, Chief Compliance Officer & Deputy General Counsel

Kayla Oliver, Head of Products, Partnership, and Innovation

Key Takeaways:

  • Compliance is the baseline, not the finish line. Frameworks like ISO and NIST establish minimum standards, but real security maturity goes beyond checklists.
  • Adhere to the most stringent requirements. Simplify global operations by aligning controls to the toughest applicable standards and embedding defense-in-depth across every layer.
  • Govern AI with accountability and transparency. Maintain a clear inventory of AI use cases, require human oversight, and build explainability into every deployment.
  • Build cross-functional governance. Collaboration among security, legal, compliance, and innovation teams ensures AI and new technologies advance responsibly and efficiently.
  • Foster a culture of awareness and engagement. When employees proactively ask questions and seek guidance before acting, it signals true organizational maturity.
  • Prepare for regulatory uncertainty. Expect evolving requirements and regional variation. Transparent documentation and conservative governance will keep organizations ahead of new rules.
  • Trust is the ultimate differentiator. Embedding compliance, governance, and culture into every decision creates confidence with clients, regulators, and employees alike.


Compliance as the baseline

Standards such as ISO 27001 and the NIST Cybersecurity Framework have become the backbone of modern cybersecurity programs. They provide a set of minimum expectations that help organizations align on common practices, benchmark their progress, and demonstrate accountability to regulators, clients, and partners.

But compliance alone does not equal security. Frameworks are valuable because they create structure and a shared language, making it possible to discuss risks with executives. Rather than diving into technical jargon, leaders can assess maturity by whether key controls are being met consistently.

At the same time, compliance should not be treated as a finish line. It is the floor on which a security program is built, not the ceiling that defines resilience. Organizations that treat compliance as the end goal often discover gaps during real-world incidents, where culture and behavior matter more than documentation.

Navigating on a global scale

For organizations that operate across regions, the compliance landscape is rarely simple. In the United States alone, companies must track dozens of evolving state-level privacy and security laws. Add the EU's GDPR and its UK counterpart, the UK GDPR, and the challenge becomes one of constant monitoring, interpretation, and adaptation.

The practical solution is to harmonize to the most stringent standard available. Rather than creating separate rulebooks for every jurisdiction, aligning to the toughest requirements simplifies oversight and reduces risk. While this approach may require more upfront investment, it prevents the complexity and cost of fragmented compliance regimes.

Defense in depth makes this harmonization possible. By embedding controls across every layer—from physical access and encrypted devices to endpoint monitoring and email/web filtering—organizations create a security envelope that satisfies diverse regulatory requirements without creating separate silos of compliance.

Maturity beyond checklists

The difference between compliance and resilience is often cultural. A compliant organization may have all the right documents, but a resilient organization demonstrates security awareness in everyday decisions.

The clearest signal of maturity is when employees proactively engage before taking risky actions. Asking questions before connecting a new tool, sharing sensitive information, or installing software shows that security awareness has become instinctive. This behavior reflects a culture where employees see security not as a hurdle, but as a partner in doing their jobs safely.

Creating this culture requires more than policies. It requires open communication, visible leadership support, and continuous reinforcement that questions are not only welcomed but expected. In mature organizations, security is integrated into the rhythm of work rather than treated as an external enforcement function.

AI: Opportunity and risk in equal measure

AI is one of the most significant forces reshaping cybersecurity today. It introduces new possibilities for defenders, while simultaneously giving attackers new tools to scale and refine their tactics.

On the opportunity side, AI enables faster detection of unusual behavior, improved threat intelligence, and the ability to sift through massive amounts of data in real time. Used responsibly, it helps security teams stay ahead of threats and respond with greater precision.

On the risk side, AI has become a powerful weapon for adversaries. Deepfakes, voice cloning, and real-time impersonation make social engineering more convincing than ever. Attackers are using AI to craft personalized phishing at scale, creating communications that are nearly indistinguishable from legitimate ones.

The duality of AI means organizations cannot ignore it. They must adopt it to remain competitive, while also preparing for the risks it introduces. The question is not whether AI will be part of cybersecurity (it already is) but how it will be governed.

Governance as the foundation of trust

The reputational risks of AI demand governance structures that are clear, transparent, and inclusive. Poorly managed AI deployments can quickly erode client and employee confidence, whether through biased outputs, harmful recommendations, or misuse of sensitive data. At the same time, failing to defend against AI-enabled threats exposes organizations to equally significant risks.

Governance provides the guardrails that build trust. Best practices include:

  • Use-case inventories. Maintaining a log of where and how AI is used ensures transparency and prepares organizations for future disclosure requirements.
  • Risk-based human-in-the-loop requirements. No consequential or complicated decision should be fully automated. Demonstrating human oversight provides accountability when errors occur.
  • Explainability and transparency. Organizations must be able to articulate why AI made a decision and disclose when it was involved.
  • Adapted vendor reviews. Evaluating not only a vendor’s functionality but also how their AI features handle privacy, bias, and security concerns.
  • Cross-functional councils. Governance groups that bring together security, legal, compliance, and innovation ensure AI is adopted responsibly and efficiently.

When governance works well, it doesn’t slow innovation. In fact, it accelerates it by clarifying what is acceptable, streamlining reviews, and catching potential risks early. Governance is not bureaucracy—it is the foundation on which trust is built.

Preparing for an uncertain regulatory environment

The regulatory landscape for AI is in flux. Early frameworks such as the EU AI Act suggest a future focused on risk-based tiers, explainability, transparency, and human oversight. But the global environment is fragmented, with states and countries experimenting with different approaches—often moving quickly to legislate, then just as quickly revising or retracting.

Organizations cannot wait for clarity. The best strategy is to prepare as though transparency and reporting will be required, even if formal regulation lags. That means maintaining inventories, drafting policies for advanced AI now, and adopting conservative practices that assume eventual scrutiny.

At the same time, it is important to recognize that many of the outcomes of AI use are already covered by existing laws. Privacy, intellectual property, defamation, and consumer protection regulations all apply to AI outputs. The future of AI regulation is likely to combine new requirements for visibility with the application of long-standing laws to a new context.

The expanding digital footprint

AI and compliance challenges cannot be separated from the broader trend of digital expansion. Organizations now operate across a vast and growing attack surface:

  • Internet of Things (IoT) devices monitoring occupancy, energy use, and cleanliness.
  • Robotics and automation supporting facility operations.
  • Electric vehicle chargers connected to networks.
  • Payment kiosks and unattended points of sale.
  • Laptops and mobile devices in hybrid environments.

Each of these touchpoints expands the digital footprint and adds to the complexity of compliance and governance. Security programs must evolve to include these assets in their inventories, risk assessments, and controls. Governance must extend beyond the IT department to encompass every part of the business where data is generated, processed, or shared.

What organizations should do now

Moving from compliance to trust requires immediate action across several dimensions:

  1. Benchmark against the strictest standards. Align to the most stringent requirements in your footprint to simplify operations.
  2. Measure cultural maturity. Track employee engagement and encourage proactive questions as a key performance indicator.
  3. Establish AI governance councils. Create cross-functional groups that meet regularly to review use cases, vendors, and risks.
  4. Document AI usage. Maintain an up-to-date inventory and assume eventual disclosure requirements.
  5. Apply a framework for human oversight decisions. Ahead of implementing autonomous workflows, establish a decision framework for when human-in-the-loop is required.  
  6. Embed transparency. Build explainability into AI processes and disclose its role in decision-making.
  7. Expand risk models. Include IoT, robotics, EV chargers, and unattended environments in your security and compliance programs.

These actions are practical, achievable, and build a foundation that can adapt to regulatory fits and starts.

Trust as the differentiator

Compliance and AI both point toward the same outcome: trust. Compliance provides a baseline that builds client confidence, but it is culture and maturity that turn checklists into resilience. AI provides opportunities to improve security and efficiency, but only governance and transparency can turn those opportunities into lasting trust.

As digital footprints expand and regulations evolve, organizations that embed compliance, governance, and culture into every layer of their operations will stand apart. They will not only meet minimum requirements but also inspire confidence that security and trust are part of their DNA.

Cybersecurity’s next chapter will not be defined by technology alone. It will be defined by the ability to use that technology responsibly, transparently, and in ways that strengthen the trust of clients, employees, and communities.
