Title: New Research: Get Better Ideas with AI
Resource URL: https://podcasts.apple.com/us/podcast/new-research-get-better-ideas-with-ai-with-kian/id1721313249?i=1000645989692
Publication Date: 2025-02-20
Format Type: Podcast
Reading Time: 69 minutes
Contributors: Kian Gohar; David McRaney; Henrik Werdelin; Jeremy Utley
Source: Beyond the Prompt (Apple Podcasts)
Keywords: AI in Business; Cognitive Bias; AI-Assisted Ideation; Einstellung Effect; FIXIT Framework for AI
Job Profiles: Business Coach; Academic/Researcher; Entrepreneur; Artificial Intelligence Engineer; Business Consultant

Synopsis: In this podcast episode, co-hosts Jeremy Utley and Henrik Werdelin explore how cognitive biases affect AI-assisted problem-solving. Joined by authors David McRaney and Kian Gohar, they discuss research on AI-powered ideation, explaining why teams using AI often settle for mediocre solutions.

Takeaways:
- AI-assisted teams often generate fewer and less diverse ideas than non-AI teams due to cognitive biases.
- The "Einstellung effect" leads users to accept AI's first answers instead of iterating toward better solutions.
- AI tools work best when treated as conversational partners rather than simple answer generators.
- Effective AI-powered brainstorming requires structured workflows, including individual ideation before AI use.
- The FIXIT framework (focus, individual thought, context, iteration, and team incubation) helps teams maximize AI's potential in ideation.

Summary: This episode explores how teams use AI in problem-solving and why AI-assisted brainstorming often underperforms. Jeremy Utley and Kian Gohar share findings from their recent study, which analyzed AI's impact on team ideation. Contrary to expectations, AI-assisted teams often generate fewer ideas and lower-quality solutions than traditional brainstorming teams. The root cause is the Einstellung effect, a cognitive bias in which people fixate on their first solution and fail to explore alternatives.

The discussion highlights that most teams use AI tools like ChatGPT incorrectly, treating them as oracles instead of collaborative thought partners. This approach leads teams to accept AI-generated ideas quickly rather than iterating for better results. The key takeaway is that AI can enhance ideation, but only when teams engage in a structured conversation rather than passive querying.

To counter these challenges, the FIXIT framework is introduced:
- Focus: Clearly define the problem before using AI.
- Individual Thought: Think independently before consulting AI.
- Context: Provide AI with detailed and relevant context.
- Iteration: Treat AI as a brainstorming partner, refining responses.
- Team Incubation: Compare AI-generated insights with team input to finalize ideas.

The episode concludes with practical advice on improving AI-assisted brainstorming, including challenging AI's first answers, using role-playing techniques, and engaging in iterative conversations to push beyond average solutions.

Content:

## Episode Overview

This special crossover episode of *Beyond the Prompt* reverses roles: the usual host steps into the guest chair while a guest host, the podcaster behind *You Are Not So Smart*, leads the discussion. Also joining is a research partner whose year-long study has illuminated both the promise and the pitfalls of generative AI in organizational ideation.

### Host, Co-host, and Guests

- **Primary Host** (usually the host of *Beyond the Prompt*) becomes the guest for this episode.
- **Guest Host** (from *You Are Not So Smart*) leads the discussion.
- **Co-Host** (entrepreneurial AI practitioner) explores the business implications.
- **Research Partner** presents empirical findings on AI-assisted brainstorming.

### Research Focus

Over the past year, the research team has investigated how generative AI—specifically tools like ChatGPT—affects team creativity and problem solving. Their experimental study reveals a surprising cognitive bias that can undermine AI-augmented ideation despite the technology's apparent capabilities.

---

## Research Background

When ChatGPT emerged, practitioners and scholars alike speculated that it would revolutionize innovation workflows. Anecdotal experience suggested dramatic improvements in idea generation, yet no rigorous field study had quantified its impact on real-world teams. Motivated by questions such as "Is ideation dead?" and "How does AI change creative collaboration?", the researchers designed an experiment comparing traditional brainstorming with AI-assisted sessions in corporate contexts.

---

## Study Methodology

### Participants and Context

- Employees from multiple organizations, each tasked by their own problem owners with generating solutions to authentic business challenges (e.g., improving customer service, designing training modules, entering adjacent markets).
- Teams believed they were simply engaging in a valuable problem-solving exercise, not a research study; they were blind to the parallel condition.

### Experimental Conditions

1. **Control Group (No AI)**
   - Received a facilitated, world-class brainstorming activity.
   - Collaborated via physical or virtual whiteboard using established creativity techniques.
2. **AI-Assisted Group**
   - Received the same facilitation plus access to a generative AI tool.
   - Attended a brief primer on prompting techniques.

Each session lasted approximately two hours, concluding with teams presenting their ideas for independent evaluation by the original problem owners.

---

## Key Findings

Contrary to expectations, teams using generative AI:

- **Produced Fewer Ideas**: AI-assisted teams generated fewer ideas, on average, than the control group.
- **Achieved Only Moderate Quality**: Their outputs clustered in the "average" range rather than yielding notably superior solutions.
- **Induced Misattribution**: Evaluators (blind to condition) assumed the AI-assisted groups had produced the best ideas, illustrating a secondary bias in perception.

These results challenge the assumption that AI automatically amplifies both the quantity and quality of team ideation. Instead, a well-known cognitive bias appears central to the underperformance.

---

## Cognitive Bias in AI-Assisted Ideation

### The Einstellung Effect

The study identified **Einstellung**—the tendency to fixate on familiar solutions—as a primary barrier. When presented with initial AI suggestions, teams often accepted them without further exploration, failing to push beyond "good enough." This mirrors classic findings:

- **Luchins' Water Jug Experiment (1942)**
  - Participants trained on multi-step solutions to intermediate puzzles later applied the same complex method to a simpler puzzle, ignoring an easier approach.
- **Chess Masters (Oxford Studies)**
  - Expert players who were familiar with one tactical approach overlooked a more straightforward solution in new positions.

In the AI context, teams defaulted to early outputs rather than engaging in deeper, iterative dialogue. Although some groups adopted a more interactive stance—sharing AI responses and building on them—many lapsed into a "resting AI face," silently reviewing suggestions and abandoning further inquiry.

---

## Best Practices: The FIX IT Framework

To harness generative AI effectively, teams should adopt a structured five-step process under the acronym **FIX IT**:

1. **F – Focused Problem**
   - **Define a narrow, well-scoped question.** Avoid broad prompts (e.g., "Improve sales by 10%" is too general).
2. **I – Individual Ideation**
   - **Generate personal ideas first.** Safeguard human creativity before consulting AI.
3. **X – eXplicit Context**
   - **Provide AI with relevant background.** If uncertain, ask the AI which additional details it needs.
4. **I – Iterative Conversation**
   - **Treat AI as a thought partner.** Engage in back-and-forth, ask clarifying questions, and play devil's advocate, as in the sketch after this list.
5. **T – Team Incubation**
   - **Reunite the group.** Share AI-inspired ideas alongside human contributions, then prioritize based on feasibility, desirability, and impact.

Implementing FIX IT helps teams avoid early satisficing—settling on the first "good enough" solution—and promotes deeper exploration of the solution space.
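To make the iteration step concrete, here is a minimal sketch of what a FIX IT-style exchange can look like in code rather than in a chat window. It assumes the OpenAI Python SDK and an `OPENAI_API_KEY` environment variable; the model name, prompts, and business scenario are invented for illustration and are not material from the study.

```python
# Minimal FIX IT-style ideation loop: the model is a thought partner
# whose first answer gets challenged, not accepted.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY;
# the model name, prompts, and scenario are illustrative, not from the study.
from openai import OpenAI

client = OpenAI()

history = [
    # F - focused problem and X - explicit context, supplied up front.
    {"role": "system",
     "content": ("You are a brainstorming partner. Before proposing ideas, "
                 "ask what context you are missing.")},
    {"role": "user",
     "content": ("Problem: new customers abandon onboarding in week one. "
                 "Context: 50-person SaaS company, self-serve signups. "
                 "What else do you need to know?")},
]

# I - iterative conversation: each follow-up pushes past the first answer.
follow_ups = [
    # I - individual ideation: human ideas enter the conversation first.
    "My own ideas so far: a setup checklist and concierge calls. Build on them.",
    "Those feel like the obvious answers. Offer ten unconventional alternatives.",
    "Play devil's advocate: why would your three best ideas fail?",
]

for prompt in follow_ups:
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    history.append({"role": "assistant",
                    "content": reply.choices[0].message.content})
    history.append({"role": "user", "content": prompt})

# Final pass; the surviving ideas go back to the team for incubation (T).
final = client.chat.completions.create(model="gpt-4o", messages=history)
print(final.choices[0].message.content)
```

The mechanics are trivial; the point is behavioral. Every assistant turn is recorded in `history` and then explicitly challenged, which is the opposite of the one-shot oracle querying the study found in underperforming teams.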
---

## Anecdotes and Illustrations

### Assisted-Living Epiphany

To demonstrate AI's conversational potential, one researcher used the tool to talk through his grandmother's decision to move into assisted living. Prompted to pose diagnostic questions before offering advice, the AI showed the grandmother firsthand how a "chat" dynamic can feel empathetic, informative, and even surprising—far beyond a simple search.

### Philosophical Brainstorming

Another experiment involved iterative role-playing prompts: asking various philosophers to define a concept, then challenging and refining those definitions, culminating in a hypothetical critique from Wittgenstein. Each iteration yielded fresh insights and demonstrated how recursive AI dialogue can accelerate creative thought.

---

## Implications and Next Steps

1. **Conversational Fluency:** Organizations must train teams to interact with AI as they would with skilled human collaborators. Superficial prompts breed superficial results.
2. **Facilitation and Enablement:** Just as professional coaches guide workshop participants, AI newcomers benefit from external facilitation and scaffolded exercises.
3. **Beyond Ideation:** The journey does not end with a great idea; it continues through prototyping, testing, and dropping "acorns"—small pilots that reveal which concepts can grow into impactful solutions.

For a comprehensive presentation of the study and detailed recommendations, visit **howtofixit.ai**, where you will also find the full peer-reviewed paper and additional resources for integrating generative AI into high-performance team workflows.

---

Thank you for joining this crossover episode of *Beyond the Prompt*. If you found these insights valuable, please rate, review, and share the episode on your preferred podcast platform. Stay curious, and until next time, embrace the dialogue between human creativity and artificial intelligence.