Video · 24 minutes · Aug 4, 2025
AI Governance in the Agentic Era - Jonathan Dambrot, Cranium

In this video, Cranium AI Chief Executive Officer Jonathan Dambrot and KPMG Trusted AI Managing Director Kelly Combs discuss the challenges and imperatives of AI governance, security, and risk management in the era of agentic AI systems.

AI Governance · Agentic Systems · AI Security · Risk Management · Enterprise Technology

Takeaways

  • Developer playgrounds and prompt improvers can be strategically assigned to different user personas to balance experimentation with accuracy in AI outputs.
  • AI steering committees and risk working groups are often limited by their inability to enforce policies at production scale, creating bottlenecks between governance and deployment speed.
  • Security-by-design pledges signed by AI vendors frequently lack real implementation, with most third-party models entering enterprises without embedded security practices.
  • Governance strategies benefit from treating AI systems as composed of multiple interoperable components, allowing more modular risk assessments and controls across reused elements.
  • Unit-of-measure thinking, which categorizes AI components like models, agents, and orchestration layers, can accelerate governance workflows by identifying reusable risk patterns.

Summary

In this video, Jonathan Dambrot, Chief Executive Officer of Cranium AI, shares his journey from KPMG to founding a leading AI security company and explores the urgent topics defining AI governance in today's rapidly evolving business environment. The conversation, guided by KPMG's Kelly Combs, opens with Dambrot's firsthand experience at KPMG of watching AI being deployed at enterprise scale without adequate security or established DevSecOps protocols. Recognizing a critical gap, he spun Cranium AI out of KPMG to address secure and trusted AI implementation, underscoring that while vast investment is pouring into AI, little of it is earmarked for security, a misalignment with potentially severe consequences for organizations.

The session explores the tension between the need for rapid, agile deployment of AI applications and the slow pace of regulatory clarity. Dambrot and Combs note that while every major enterprise is racing to implement agentic AI, existing governance and risk management practices are ill-suited to the pace and complexity of modern AI tools, especially when dealing with small software vendors. They stress the need for continuous, technically grounded governance solutions and argue that governance must evolve from a compliance afterthought into an embedded, digitized part of the development lifecycle. This shift includes using persona-based user experiences, providing differentiated governance for developers and end users, and building technical accelerators that automate risk checks within the code itself.

The discussion further addresses the challenges of explainability and security by design, emphasizing that despite industry commitments and pledges, most AI development today lacks security by design as a built-in principle. Regulators and technical teams alike grapple with scaling governance over sprawling AI deployments—such as thousands of interconnected agents—without clear methods for inventory, risk assessment, and compositional analysis. Dambrot notes that governance frameworks must catalog and monitor AI components as reusable assets and that new technological approaches, like digitized policy enforcement, are central to managing both model-specific and system-level vulnerabilities.

The session concludes with a focus on emerging workforce and leadership challenges in the agentic AI era, such as compensation models for supervisors managing both humans and AI agents, the economic and social implications of workforce reduction, and the ongoing need to nurture entrepreneurial leadership. In response to audience questions, both presenters stress that traditional phishing and fraud protections are no longer sufficient in an age of deepfakes and AI-driven attacks. They advocate for AI-enabled, automated defensive systems that monitor for data poisoning and identity-based threats, moving away from reliance on human vigilance and toward systemic, machine-driven controls.

Job Profiles

  • Chief Executive Officer (CEO)
  • Product Manager
  • Data Governance Manager
  • Chief Information Officer (CIO)
  • Chief Information Security Officer (CISO)


ABA
Content rating = A
  • Relies on reputable sources
  • Adequate structure
  • Must-know
Author rating = B
  • Has professional experience in the subject matter area
Source rating = A
  • Established, respected publisher
  • Features expert contributions
  • Maintains high editorial standards