How To Evolve By Embracing AI In L&D
In my previous article, we started exploring lessons learned from a conference on how learning professionals can prepare for the changes that Artificial Intelligence (AI) and automation are bringing in the near future. This article continues with the next five calls to action for embracing AI in L&D, and also attempts to answer a common question about Large Language Models (LLMs): how smart are they at reasoning?
Key Takeaways For Embracing AI In L&D
Here are some of the takeaways from talking to industry leaders at the conference about this approach:
1. Develop A Strong Understanding Of Behavioral Science
- Study behavior change models
Gain familiarity with models like COM-B (capability, opportunity, motivation, behavior), self-determination theory, and Fogg’s behavior model to understand what drives learning motivation and engagement. Ultimately, your goal is behavior change, not just knowledge retention.
- Design for motivation
Use insights from these models to create learning experiences that motivate learners through autonomy, competence, and relatedness, increasing the likelihood of sustained behavior change.
- Test and adapt
Continuously test different strategies to motivate and engage learners, then adapt based on what resonates most effectively. Measure the right things! You must go beyond level 1 surveys and “knowledge checks” at the end of the course. For example, by shifting your focus from retrospective measures (satisfaction with content) to predictive ones (behavior drivers such as motivation, opportunity, job capabilities, and goal attainment), you can gain more actionable insights from a learning experience that you and your stakeholders can then act on.
2. Build A Network
- Follow industry experts (both internally and externally)
Follow industry leaders in L&D, AI, and future-of-work trends. Pick wisely. When it comes to embracing AI in L&D, you will find a whole range of people on a scale from “AI will solve all problems” to “AI will destroy the world.” Don’t build echo chambers where everyone is saying the same thing. Find practitioners who actually implement projects, not just blog about AI using AI. Regularly reading insights from experts helps you stay updated and inspired by emerging trends. There’s a lot of noise out there today. Let industry leaders cut through the noise and filter the dust. Otherwise, you’ll be overwhelmed.
- Join L&D communities
Engage in communities like LinkedIn groups, conferences, and forums. Networking with other professionals can provide fresh perspectives and innovative solutions. But don’t stay only in the L&D bubble! See the next point.
- Go beyond L&D and HR
Find champions within the company. Again, AI will be implemented somewhere in the business first, and it will have a direct impact on business goals. Be proactive. Learn from those early mistakes.
3. Focus On Building “Learning” Ecosystems, Not Just Programs
- Think beyond courses
By “learning,” I don’t just mean LMSs, LXPs, or other tools dedicated to training. Anything that enables, accelerates, and scales the ability of your workforce to perform their job is learning. Create ecosystems that support continuous, informal, and social learning. Experiment with using chatbots, forums, or peer coaching to foster a culture of learning in the flow of work. But also know when to get out of the way!
- Use technology to integrate learning and performance systems
Nobody gets excited about logging into their LMS or LXP. Nobody will search the LMS or LXP later to find out how to do things. Yes, AI is now included in every single learning technology application, but it is fragmented and mostly a wrapper around a Large Language Model. Integrate learning and performance systems (where employees work) behind the scenes through application programming interfaces (APIs), as sketched below. We don’t need to know where the assets are stored; we just need to be able to access them. Learning technology is any technology that supports learning. Build your alliances.
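To make the idea concrete, here is a minimal sketch in Python of what such an integration could look like. Everything here is hypothetical: the endpoints, the response shape, and the webhook URL are placeholders for whatever APIs your LMS/LXP and workflow tools actually expose.

```python
# Hypothetical sketch: surface learning content where employees already work,
# instead of making them log in to the LMS. All endpoints and field names
# below are placeholder assumptions, not a real vendor API.
import requests

LMS_API = "https://lms.example.com/api/v1"                    # placeholder LMS endpoint
CHAT_WEBHOOK = "https://chat.example.com/hooks/learning-bot"  # placeholder chat webhook


def find_learning_asset(topic: str, token: str) -> dict:
    """Search the LMS catalog for the most relevant asset on a topic."""
    resp = requests.get(
        f"{LMS_API}/catalog/search",
        params={"q": topic, "limit": 1},
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["results"][0]  # assumed response shape


def push_to_workflow(asset: dict) -> None:
    """Post a deep link into the chat tool where the employee asked the question."""
    requests.post(
        CHAT_WEBHOOK,
        json={"text": f"Need help with {asset['title']}? {asset['url']}"},
        timeout=10,
    ).raise_for_status()


if __name__ == "__main__":
    asset = find_learning_asset("giving feedback", token="YOUR_API_TOKEN")
    push_to_workflow(asset)
```

The employee never sees where the asset lives; the glue code does the searching and the surfacing, which is exactly the point of integrating behind the scenes.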
4. Strengthen Change Management Skills
- Learn change management frameworks
Familiarize yourself with frameworks like ADKAR (awareness, desire, knowledge, ability, reinforcement) or Kotter’s 8-step change model, along with the behavioral motivation models mentioned earlier.
- Address resistance to change
Develop strategies for overcoming resistance by understanding employee concerns and showing the long-term value of new learning approaches. Your AI implementation (at least for now) relies on human execution. Everyone wants change, but nobody wants to change. Start with solving specific problems for your stakeholders and the target audience. Start small, pilot, and scale from there through iterations. Bring skeptics together as testers! They will be more than happy to try to break the application and point out flaws.
5. Understand Data Security, Data Privacy, And Ethics
- Build the foundations
Do you have a data privacy council today? If not, start building one. Find out who owns data security in your organization. Partner with them on clear guidance about data classification levels: what type of data can be used where. Understand your vendors’ data security and data privacy policies. You may or may not own the data, and you may still own it after an employee separates, in which case you need to archive it. You need clear policies on how long you keep the data, along with where and how it is stored (encrypted both in transit and at rest). Be clear about what data you collect and what that data can be used for. (For example, if you collect data on skills to implement personal development programs, can someone later decide to use this data for performance evaluations?)
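One way to keep such policies honest is to encode them somewhere enforceable rather than leaving them in a slide deck. Below is a minimal, hypothetical sketch in Python; the classification levels and retention periods are illustrative assumptions, not recommendations, and your data privacy council would define the real ones.

```python
# Illustrative sketch only: classification levels and retention periods are
# made-up assumptions. The point is that policy becomes checkable in code.
from datetime import date, timedelta
from typing import Optional

# What each classification level allows, and how long records are kept.
POLICY = {
    "public":       {"allowed_in_llm_prompts": True,  "retention_days": 3650},
    "internal":     {"allowed_in_llm_prompts": True,  "retention_days": 1825},
    "confidential": {"allowed_in_llm_prompts": False, "retention_days": 730},
    "restricted":   {"allowed_in_llm_prompts": False, "retention_days": 365},
}


def can_send_to_llm(classification: str) -> bool:
    """Gate any vendor/LLM call on the record's classification level."""
    return POLICY[classification]["allowed_in_llm_prompts"]


def purge_due(created: date, classification: str, today: Optional[date] = None) -> bool:
    """True when a record has outlived its retention period and should be deleted."""
    today = today or date.today()
    return today > created + timedelta(days=POLICY[classification]["retention_days"])


assert not can_send_to_llm("confidential")
assert purge_due(date(2020, 1, 1), "restricted")
```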
How Smart Are LLMs, After All?
Finally, one of the most interesting questions I got from a conference attendee was how smart current LLMs are. Are they good at reasoning, or just at the illusion of reasoning? How much can we rely on them for reasoning, especially if we build solutions that connect LLMs directly with the audience?
LLMs are trained on huge data sets to learn patterns, which they then use to predict what comes next. With some oversimplification: you take all the data you collected and split it into a training data set and a testing data set. You train your AI model on the training data set. Once you think the model is doing well at pattern recognition, you test it on the test data it has not seen yet. It is way more complicated than that, but the point is that “smartness” and reasoning can be mistaken for pattern recognition.
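For illustration, here is a toy version of that train/test split using scikit-learn. Real LLM training is vastly more complex, but the core idea, evaluating a model on data it never saw during training, is the same.

```python
# Toy illustration of the train/test split described above.
# Real LLM training is far more complex; this only shows the core idea
# that a model is judged on data it never saw while learning patterns.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=42)  # synthetic data

# Hold out 20% of the data that the model will never see during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Smartness" here is just how well learned patterns generalize to unseen data.
print(f"Accuracy on unseen test data: {model.score(X_test, y_test):.2f}")
```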
What’s an example? Let’s say you trained your model on how to solve mathematical problems. When the model recognizes a problem, it follows the learned pattern of how to solve it. It does not have an opinion, belief, or any sort of fundamental stand on the matter. That is why, when you simply tell the model that it’s wrong, it apologizes and reconsiders the answer. Mathematical reasoning (as of today) is not LLMs’ strong suit.
A study using the GSM-Symbolic benchmark found that, across all models tested, generating versions of the same mathematical problem by replacing certain elements (such as names, roles, or numbers) can lead to inconsistent answers, indicating that problem-solving happens through pattern recognition rather than reasoning [1].
Specifically, the performance of all models declines when only the numerical values in the question are altered in the GSM-Symbolic benchmark.
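To see what this kind of perturbation looks like, here is a toy sketch of the idea behind GSM-Symbolic: the same problem template instantiated with different names and numbers. The actual benchmark uses its own templates; this is just an illustration.

```python
# Toy sketch of the GSM-Symbolic idea: the same problem, different surface
# details. The real benchmark uses its own templates; this only shows why
# a pattern-matcher can stumble when names and numbers change.
import random

TEMPLATE = (
    "{name} picks {x} apples on Monday and {y} apples on Tuesday. "
    "How many apples does {name} have in total?"
)


def make_variant(rng: random.Random) -> tuple[str, int]:
    """Instantiate the template with fresh names and numbers; answer stays x + y."""
    name = rng.choice(["Sophie", "Liam", "Mia", "Noah"])
    x, y = rng.randint(2, 40), rng.randint(2, 40)
    return TEMPLATE.format(name=name, x=x, y=y), x + y


rng = random.Random(0)
for _ in range(3):
    question, answer = make_variant(rng)
    print(question, "->", answer)

# A model that truly reasons answers every variant correctly; one that
# pattern-matches a memorized phrasing can become inconsistent across them.
```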
When you add seemingly relevant (but actually irrelevant) information to a problem, humans reason through it and simply ignore it. LLMs, however, seem to try to integrate the new information even when it is not needed, as the same study found:
Adding a single clause that seems relevant to the question causes significant performance drops (up to 65%) across all state-of-the-art models, even though the clause doesn’t contribute to the reasoning chain needed for the final answer.
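Here is an equally minimal sketch of that perturbation (the paper calls this variant GSM-NoOp): a single added sentence that sounds relevant but changes nothing about the answer. The wording below is my own toy example, not taken from the benchmark.

```python
# Toy sketch of the irrelevant-clause (GSM-NoOp-style) perturbation:
# one added sentence changes nothing about the math, yet can derail a model.
base = (
    "Sophie picks 12 apples on Monday and 9 apples on Tuesday. "
    "How many apples does Sophie have in total?"
)

# The clause sounds relevant but contributes nothing; the answer is still 21.
noop_clause = "Five of the apples picked on Monday are slightly smaller than average. "

perturbed = base.replace("How many", noop_clause + "How many")
print(perturbed)

# A human ignores the size of the apples; an LLM may try to subtract the 5,
# which is exactly the failure mode the study measured.
```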
In short, current LLMs are amazing at pattern recognition, which they can achieve at a speed and on a scale that no human can match. They’re great at pretending to be someone for soft skill practice! But they do have their limitations (as of today) in mathematical reasoning, especially when it comes to reasoning out why the answer is the answer. However, new models, such as OpenAI’s Strawberry, are attempting to change this [2].
References:
[1] GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models