Title: AI-Powered Learning Experiences at Scale
Resource URL: https://www.youtube.com/watch?v=EMoMaiZ95Kc
Publication Date: 2025-05-24
Format Type: Video
Reading Time: 33 minutes
Contributors: Elizabeth Speight; Jeff Fissel
Source: Training Industry (YouTube)
Keywords: AI Agents, Data Quality, Knowledge Base, Learning Technology, Instructional Design
Job Profiles: Chief Human Resources Officer (CHRO); Digital Transformation Consultant; Learning and Development Specialist; Training and Development Manager; Chief Technology Officer (CTO)

Synopsis: In this video, Liz Speight, Event Marketing Specialist at Training Industry, and Jeff Fissel, Vice President of Technology at GP Strategies, explore AI-powered learning at scale, focusing on data preparation and agent-based architectures to streamline enterprise training workflows.

Takeaways:

- Auditing and structuring your learning content sets the stage for reliable retrieval-augmented generation (RAG), preventing AI from hallucinating or indexing obsolete materials.
- Embedding domain expertise directly into AI agent workflows, such as instructional-design checklists and style rules, elevates output quality beyond generic chatbot responses.
- Building knowledge bases from vetted course files enables semantic search and virtual subject-matter-expert chatbots that both validate technical accuracy and support on-demand learning.
- Creating workspaces that ground AI interactions in specific data silos ensures that generated scripts, assessments, and branching scenarios remain contextually relevant and trustworthy.
- Rethinking L&D workflows, rather than automating existing steps, helps teams leverage AI’s intelligence while applying human wisdom to guide quality and consistency.

Summary: Training Industry’s Learning Tech Showcase, introduced by Liz Speight, frames a session on scaling AI-powered learning. Speight sets the stage by contextualizing the importance of innovation in learning technologies. Jeff Fissel, Vice President of Technology at GP Strategies, then outlines how organizations can move beyond simple chatbot trials to achieve consistent, high-quality training outputs. Fissel emphasizes that raw generative AI tools often produce unreliable results when fed unstructured or outdated content.

The presentation highlights two foundational pillars: high-quality data inputs and domain-embedded AI agents. First, practitioners must audit and curate their learning assets (videos, SCORM packages, slide decks, and documents) to ensure accuracy, context, and relevance. Structured metadata, such as skill tags, time-based outlines, and validated assessment answers, enables retrieval-augmented generation tasks to surface the correct information and minimize hallucinations. Awareness of bias in source material further safeguards against misleading outputs. Second, instructional designers and L&D leaders should architect AI agents that mirror expert workflows. By decomposing tasks (objective generation, question creation, and internal reviews) into discrete AI calls, organizations can embed instructional-design rules and style guidelines. Fissel demonstrates how the Content AIQ platform supports this process: building knowledge bases from grouped course files, creating workspaces that ground AI interactions in specific content, and deploying output agents for video scripts, assessments, and branching scenarios. Integrations with Azure OpenAI services and a synthetic media tool illustrate end-to-end automation, including multilingual script generation and direct deployment to e-learning platforms.

Fissel concludes by urging attendees to rethink traditional design processes rather than merely layering AI onto existing workflows. By codifying “what good looks like” into agent architectures and leveraging curated, structured data, L&D teams can repurpose legacy content, accelerate creation timelines, and maintain consistency at scale. The session closes with a call, reinforced by both Speight and Fissel, to experiment with custom agents, pilot small projects, and apply human wisdom to steward AI’s intelligence.

Content:

## Introduction

A professional development event welcomed learning and development professionals to explore the application of artificial intelligence in enterprise training. The session opened with logistical details, including chat-based interaction, session recording availability, and social media engagement, before introducing the first presenter, an executive responsible for technology at a leading training solutions firm.

## Overcoming the AI Hype Cycle

Drawing on Gartner’s innovation-cycle model, the presenter underscored the danger of expecting peak-of-inflated-expectations magic from generic large language models. To traverse the trough of disillusionment and reach sustainable productivity gains, organizations must move beyond ad hoc prompts in tools like ChatGPT or Copilot.

## Ensuring Data Quality

High-quality, structured data forms the bedrock of scalable AI learning experiences. Rather than indiscriminately ingesting entire repositories or SCORM packages, practitioners should curate content for currency and relevance. Structured metadata (learning objectives, skill tags, time-based outlines, and validated assessments) prevents unreliable outputs and mitigates bias. Because even a largely sound asset can contain fragments of outdated or incorrect information, teams must anticipate such partial errors and design prompts that disregard obsolete elements.
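To make this concrete, the kind of structured metadata described above might be modeled as in the minimal sketch below. This is an illustration only; the field names and the `is_current` currency check are assumptions, not Content AIQ’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class OutlineSegment:
    """One entry in a time-based outline for video or audio content."""
    start_seconds: int
    topic: str           # detected topic for this segment
    justification: str   # rationale recorded for the topic label

@dataclass
class LearningAsset:
    """A curated asset carrying the structured metadata that grounds RAG retrieval."""
    title: str
    source_file: str                 # e.g. a video, slide deck, or SCORM export
    learning_objectives: list[str]
    skill_tags: list[str]            # supports filtered, semantic retrieval
    outline: list[OutlineSegment] = field(default_factory=list)
    validated_answers: dict[str, str] = field(default_factory=dict)  # question -> vetted answer
    last_reviewed: str = ""          # ISO date; used for currency audits

def is_current(asset: LearningAsset, cutoff: str) -> bool:
    """Currency gate: exclude assets not re-reviewed since `cutoff` (ISO date)."""
    return bool(asset.last_reviewed) and asset.last_reviewed >= cutoff

# Example: only assets that pass the audit are indexed for retrieval.
catalog = [
    LearningAsset(
        title="Forklift Safety Basics",
        source_file="forklift_safety.mp4",  # hypothetical file
        learning_objectives=["Identify pre-operation inspection steps"],
        skill_tags=["warehouse-safety", "equipment-operation"],
        last_reviewed="2024-11-02",
    ),
]
indexable = [a for a in catalog if is_current(a, cutoff="2024-01-01")]
```

Gating indexing on a currency check of this sort is one way to keep obsolete fragments out of the retrieval pool, in line with the session’s warning about partially outdated content.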
## Architecting AI Agents with Expertise

Embedding domain expertise into AI workflows transforms mediocre responses into specialized outputs. By decomposing tasks such as transcript parsing, learning-objective generation, and question authoring into sequential agent calls, organizations can replicate instructional-designer methodologies at scale. Meta-review loops enable agents to self-evaluate and iterate on draft outputs before human review.

## Demonstration of the AIQ Platform

A live demonstration showcased a proprietary Content AIQ solution built on Microsoft Azure. Participants observed bulk data ingestion (video transcripts, slide decks, and learning documents) and automatic extraction of metadata, including topic detection and a justification for each segment. The platform’s knowledge-base feature organized related materials into editable collections that serve as a virtual subject-matter expert. Workspaces then grounded AI interactions within chosen knowledge bases, preventing off-topic requests. Using output agents, the presenter illustrated how to generate video scripts, podcast outlines, and assessment questions. Agents prompted users for audience details and adhered to structured output formats, ensuring consistent style and completeness. Integration with a synthetic media tool produced narrated scenes automatically, and localization workflows demonstrated both native script generation and post-process translation options.
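As an illustration of the sequential agent calls and meta-review loop described above, here is a minimal sketch assuming the `openai` Python SDK against an Azure OpenAI deployment (the session mentions Azure OpenAI integrations). The prompts, the `APPROVED` convention, and the deployment name are illustrative assumptions, not the Content AIQ implementation.

```python
import os
from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)

DEPLOYMENT = "gpt-4o"  # assumption: the name of your Azure model deployment

def agent_call(system: str, user: str) -> str:
    """One discrete agent step: a single chat completion with a role-specific system prompt."""
    resp = client.chat.completions.create(
        model=DEPLOYMENT,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return resp.choices[0].message.content or ""

def build_assessment(transcript: str, style_rules: str, max_revisions: int = 2) -> str:
    # Step 1: derive measurable learning objectives from the curated transcript.
    objectives = agent_call(
        "You are an instructional designer. Extract 3-5 measurable learning objectives.",
        transcript,
    )
    # Step 2: author questions against those objectives, honoring house style rules.
    draft = agent_call(
        "Write one multiple-choice question per objective. "
        f"Follow these style rules:\n{style_rules}",
        objectives,
    )
    # Step 3: meta-review loop. A reviewer agent critiques the draft, and the draft
    # is revised until it passes or the revision budget runs out.
    for _ in range(max_revisions):
        critique = agent_call(
            "Review these questions against the objectives. Reply APPROVED if they "
            "pass; otherwise list specific fixes.",
            f"Objectives:\n{objectives}\n\nQuestions:\n{draft}",
        )
        if critique.strip().startswith("APPROVED"):
            break
        draft = agent_call(
            "Revise the questions to address this critique.",
            f"Questions:\n{draft}\n\nCritique:\n{critique}",
        )
    return draft  # hand off to a human reviewer before publishing
```

Each step carries its own narrow system prompt, which is how the session’s point about embedding instructional-design rules into discrete calls, rather than one monolithic prompt, shows up in code.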
## Recommendations and Next Steps

Attendees were challenged to rethink traditional design processes rather than retrofit AI into existing workflows. By codifying best practices (defining “what good looks like”) and investing in data preparation, L&D teams can repurpose legacy content, accelerate new development, and maintain consistent quality. The presenter encouraged experimentation with custom AI agents, small-scale pilots, and collaboration, so that teams harness AI’s intelligence while applying human wisdom as a guiding force.