Gen AI: Innovations, mindsets, and what’s next for 2025

What's next for GenAI?
In our closing webinar of 2024, we asked a pressing question: Can generative AI be a force for good? Joined by leaders in learning, innovation, and social impact—including Dr. Cornelia Walther (Director, POZE), Scott Provence (VP, ELB Learning), and Nikki Le (Head of Impact Evaluation, Google)—we explored the powerful, and sometimes complex, role of AI in shaping our workplaces and society.
Moderated by Josh Penzell, VP of AI Innovation and Strategy at ELB Learning, the conversation surfaced a clear message: AI isn’t here to replace us—it’s here to help us become more human.
Using AI to elevate human potential
Generative AI has the power to support—not supplant—our creativity, empathy, and judgment. Dr. Walther challenged us to shift focus: “Rather than using AI to do more for less, ask how it can help you do more of what makes you unique.”
By automating routine tasks, AI can free up time for higher-value work like strategic thinking, problem-solving, and innovation. It becomes a tool that supports personal growth, not just productivity.
AI can’t replace human intuition or empathy—but it can give us more space to use them.
AI's role in learning and development
Nikki Le emphasized that AI can personalize learning at scale—supporting employees with tailored guidance, adaptive content, and real-time feedback. Instead of one-size-fits-all courses, learners can engage in dynamic experiences that evolve with their needs and pace.
This is where L&D becomes transformative. Personalized learning journeys build agility and prepare teams for the ever-changing demands of work.
Shifting problem-solving mindsets
Scott Provence pointed out a broader shift: Organizations adopting AI must rethink how they approach challenges. Instead of generic fixes, teams are moving toward “first principles thinking”—breaking down problems to their core and rebuilding from the ground up.
This shift fosters innovation, responsiveness, and resilience—especially in complex, compliance-heavy environments.
Ethics and responsibility aren't optional
As AI becomes more deeply embedded in business operations, ethical use is non-negotiable. Bias in algorithms, lack of transparency, and regulatory gaps pose real risks. The upcoming EU AI Act will require organizations to embed AI ethics into their compliance training, making it a core part of responsible innovation.
“We are the ones training the model, selecting the data, and interpreting the results; that means we’re still accountable.”

Nikki Le, Google
Scott Provence echoed this, warning that unconscious biases can easily become embedded in AI-generated language. “AI reflects our zeitgeist,” he said. “But we have to ask—what’s being highlighted, what’s ignored, and what does that say about us?”
Building an equitable AI future
All panelists agreed: If we want this next wave of technological change to be different from past revolutions, we need to lead with values—not just velocity. Dr. Walther called for a shift in priorities: “Past revolutions focused on profit, with social impact as an afterthought. With AI, we have the chance to reverse that.”
As we head into 2025, the call to action is clear: Equip teams with AI literacy, embed ethics into training, and use this moment to build a more inclusive, just, and human-centered future.
Explore the full conversation and hear firsthand how AI can support people-first innovation across your organization in the webinar at the top of this article.
AI ranks in the top 3 essential tech skills for most companies

Train smarter, spend less
Connect with a Go1 expert to explore the best training options for your organization—no pressure, just solutions that work.