Title: AI Governance in the Agentic Era - Jonathan Dambrot, Cranium
Resource URL: https://www.youtube.com/watch?v=5iVtvbS_4Nc
Publication Date: 2025-08-04
Format Type: Video
Reading Time: 24 minutes
Contributors: Jonathan Dambrot; Kelly Combs
Source: KPMG US (YouTube)
Keywords: AI Governance, Agentic Systems, AI Security, Risk Management, Enterprise Technology
Job Profiles: Chief Information Security Officer (CISO); Chief Information Officer (CIO); Data Governance Manager; Product Manager; Chief Executive Officer (CEO)

Synopsis: In this video, Cranium AI Chief Executive Officer Jonathan Dambrot and KPMG Trusted AI Managing Director Kelly Combs discuss the challenges and imperatives of AI governance, security, and risk management in the era of agentic AI systems.

Takeaways:
- Developer playgrounds and prompt improvers can be strategically assigned to different user personas to balance experimentation with accuracy in AI outputs.
- AI steering committees and risk working groups are often limited by their inability to enforce policies at production scale, creating bottlenecks between governance and deployment speed.
- Security-by-design pledges signed by AI vendors frequently lack real implementation, with most third-party models entering enterprises without embedded security practices.
- Governance strategies benefit from treating AI systems as composed of multiple interoperable components, allowing more modular risk assessments and controls across reused elements.
- Unit-of-measure thinking, which categorizes AI components such as models, agents, and orchestration layers, can accelerate governance workflows by identifying reusable risk patterns.

Summary: In this video, Jonathan Dambrot, Chief Executive Officer of Cranium AI, shares his journey from KPMG to founding a leading AI security company and delves into the urgent topics defining AI governance in today's rapidly evolving business environment.
The conversation, guided by an unidentified moderator, opens with Dambrot's experience witnessing firsthand, at KPMG, how AI was being deployed at enterprise scale without adequate security or established DevSecOps protocols. Recognizing a critical gap, he spun out Cranium AI to address secure and trusted AI implementation, underscoring that while vast investment is pouring into AI, little of it is earmarked for security, a misalignment with potentially severe consequences for organizations.

The session explores the tension between the need for rapid, agile deployment of AI applications and the slow pace of regulatory clarity. Dambrot and the moderator note that while every major enterprise is racing to implement agentic AI, existing governance and risk management practices are ill-suited for the pace and complexity of modern AI tools, especially when dealing with small software vendors. They stress the need for continuous, technically grounded governance solutions and highlight that governance must evolve from a compliance afterthought to an embedded, digitized part of development lifecycles. This shift includes using persona-based user experiences, providing differentiated governance for developers and end users, and building technical accelerators that automate risk checks within code itself.

The discussion further addresses the challenges of explainability and security by design, emphasizing that despite industry commitments and pledges, most AI development today lacks security by design as a built-in principle. Regulators and technical teams alike grapple with scaling governance over sprawling AI deployments, such as thousands of interconnected agents, without clear methods for inventory, risk assessment, and compositional analysis.
Dambrot notes that governance frameworks must catalog and monitor AI components as reusable assets, and that new technological approaches, such as digitized policy enforcement, are central to managing both model-specific and system-level vulnerabilities. The session concludes with a focus on emerging workforce and leadership challenges in the agentic AI era, such as compensation models for supervisors managing both humans and AI agents, the economic and social implications of workforce reduction, and the ongoing need to nurture entrepreneurial leadership. In response to audience questions, both presenters stress that traditional phishing and fraud protections are no longer sufficient in an age of deepfakes and AI-driven attacks. They advocate for AI-enabled, automated defensive systems that monitor for data poisoning and identity-based threats, moving away from reliance on human vigilance and toward systemic, machine-driven controls.

Content:

## Introduction: Setting the Stage for AI Governance

The discussion opens with an introduction of Jonathan Dambrot, Chief Executive Officer and co-founder of Cranium AI, a company specializing in AI security and trusted AI platforms. Moderated by another industry expert, the session highlights Dambrot's journey from technology-focused partner at KPMG to founder of a rapidly growing AI security company. His background spans more than two decades in technology, third-party risk management, and security. In a notable transition, Dambrot moved from KPMG, where he observed enterprise AI implementation firsthand, to Cranium AI, with a mission to address emerging threats and governance gaps in AI adoption.

### Entrepreneurial Insights and the Need for AI Security

Dambrot explains that while at KPMG, he identified a significant opportunity as large organizations rapidly adopted AI, often bypassing the development security operations (DevSecOps) processes established over the previous twenty years.
Drawing on engagement with over 100 Chief Information Security Officers (CISOs), he observed that few proactively secured AI as it moved into production. This realization led KPMG to back his vision with initial funding, which, along with additional investments, propelled Cranium AI into an influential market position. Dambrot emphasizes that despite AI being a top strategic priority, investment in AI security remains lacking.

## Balancing Speed, Risk, and Governance in the Agentic Era

The conversation addresses the dual challenge organizations face: the pressure to deploy AI quickly and the risks associated with insufficient security and governance, especially when vendors are small startups. The speakers note that while industries are eager to leverage AI's competitive advantages, many still struggle to implement the necessary safeguards at scale. They point out that regulatory enforcement around AI is currently fragmented and limited, with consistent accountability, transparency, and regulation yet to become industrywide standards. Corporate decision-makers such as Chief Information Officers (CIOs) and Chief Technology Officers (CTOs) are seeking agile, modular governance solutions that protect assets without stifling innovation. The presenters argue that previous approaches, which treated governance as a paper exercise, are no longer viable; a technical understanding of AI's impact within business environments is now essential.

## Technical Controls and Proactive Governance by Design

A case study involving a large telecommunications firm illustrates how business imperatives can demand rapid AI deployments, sometimes in as little as six weeks. This pace necessitates a rethinking of governance, integrating security and oversight as continuous, AI-driven processes rather than one-off reviews. The speakers propose incorporating personas and user experience design to tailor governance and controls for different user groups.
For example, precise output may be required for some roles, while others benefit from open experimentation in a sandbox environment. They stress the need to move from traditional "big G" governance, often perceived as burdensome, to governance that is digitized and embedded within user workflows. The discussion covers industry efforts and pledges to prioritize security by design, noting that most providers still lack robust controls in their development pipelines. Examples from large banks and technology conglomerates show that while technical accelerators and self-service tools are emerging, widespread adoption remains elusive.

## Scaling AI Oversight and Addressing Explainability Challenges

The speakers examine the rise of internal AI steering committees and risk working groups as organizations try to formalize governance policies. The main challenge lies in enforcing these policies and scaling controls across thousands of AI components, especially in federated R&D environments. Methods for cataloging, inventorying, and technically assessing both individual models and composed agentic systems are discussed, with the recommendation that even as AI solutions proliferate, policy enforcement must be automated and standardized to meet business requirements. On explainability, the speakers highlight the technical limits of understanding and auditing agentic AI. They suggest focusing on core principles such as security by design and continuous technical testing. Legal compliance remains important but is often too slow to keep pace with rapid AI changes. Organizations must find effective ways to monitor vulnerabilities and concentration risks within their AI supply chains, especially as external actors and geopolitical concerns enter the equation.

## Evolving Workforce Dynamics and Deepfake Threats

The conversation turns to workforce implications as AI and agentic systems become more prevalent.
Business leaders must rethink team structures, compensation, and skills, as managers may soon oversee both human employees and AI agents. The speakers voice concern about the social and economic consequences of workforce reductions and advocate for entrepreneurial leadership and upskilling. Audience questions address governance approaches to large language models (LLMs) in the financial industry, underscoring the importance of treating each software and model component as a managed, inventoried asset. The emergence of deepfakes and misinformation is discussed as an acute risk, with the presenters calling for AI-enabled, automated security measures to counter threats that humans alone cannot reliably detect. Ultimately, they propose that strengthening machine-driven security and governance is necessary as traditional defenses become increasingly ineffective in the agentic AI era.
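The "unit-of-measure" idea discussed above, inventorying models, agents, and orchestration layers as typed, managed assets and running automated policy checks over them, can be sketched in a few lines. This is a minimal illustrative assumption of what such a catalog might look like, not Cranium's actual platform or API; every class, rule, and component name below is hypothetical.

```python
# Hypothetical sketch of unit-of-measure AI governance: each component
# (model, agent, orchestration layer) is a typed, inventoried asset, and a
# digitized policy check runs automatically across components and the
# dependencies they reuse. Names and rules are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum


class ComponentType(Enum):
    MODEL = "model"
    AGENT = "agent"
    ORCHESTRATOR = "orchestrator"


@dataclass
class AIComponent:
    name: str
    ctype: ComponentType
    vendor: str
    security_reviewed: bool = False
    depends_on: list = field(default_factory=list)  # reused components


def policy_violations(catalog: dict) -> list:
    """Flag components reaching production without a security review,
    including reviews missing on transitively reused dependencies."""
    violations = []
    for comp in catalog.values():
        if not comp.security_reviewed:
            violations.append(f"{comp.name}: no security review")
        for dep in comp.depends_on:
            if not catalog[dep].security_reviewed:
                violations.append(f"{comp.name}: unreviewed dependency {dep}")
    return violations


# Illustrative inventory: a reviewed base model reused by an unreviewed agent.
catalog = {
    "base-model": AIComponent("base-model", ComponentType.MODEL, "vendor-a", True),
    "triage-agent": AIComponent("triage-agent", ComponentType.AGENT, "internal",
                                False, depends_on=["base-model"]),
}
print(policy_violations(catalog))  # → ['triage-agent: no security review']
```

Because reused components are assessed once and referenced by every system that composes them, a risk finding on one asset propagates automatically, which is the governance acceleration the speakers attribute to unit-of-measure thinking.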