This webinar on strategic L&D in the age of AI/ML features Chris Minik, who discusses the impact of AI, particularly generative AI, on learning and development. He demonstrates tools he built using GPT-4 to create personalized learning paths and outlines ethical considerations and the implications of the October 30th US executive order on AI. The session includes a group exercise applying these concepts.

This segment details how AI is transforming L&D: analyzing organizational data to predict performance and identify skill gaps, automating administrative tasks, scaling learning programs through localization and translation services, incorporating gamification for engaging learning experiences, providing immediate feedback, and predicting future learning needs.

The speaker introduces the session's topic: strategic L&D in the age of AI and machine learning, focusing on generative AI's role in learning and development. He uses a self-generated image to illustrate generative AI's capabilities and potential impact on content creation, highlighting the session's aim to explore these technologies and their implications.

This segment emphasizes the necessity of integrating AI and ML into L&D, highlighting their transformative potential in personalizing content, predicting learning needs, and enhancing data analytics. It stresses the importance of aligning AI/ML with organizational goals and outlines the steps for effective integration, including cultural shifts within the organization. Examples of learning paths for various roles are provided. A sample boot camp curriculum for upskilling data scientists is presented, illustrating how AI can be used to create structured and progressive learning experiences.
The example showcases a five-week program with a knowledge assessment, increasing complexity of classes and projects, and a final post-test, highlighting the adaptability of AI-driven training programs.

This segment provides a clear explanation of AI and machine learning, differentiating between the broader field of AI and its subset, machine learning. It emphasizes the transformative potential of these technologies in L&D, particularly in designing custom learning paths, predicting needs, and enhancing data analytics. The importance of understanding these concepts for all L&D professionals is stressed.

The speaker delves into the specifics of AI and machine learning, explaining the relationship between artificial intelligence, machine learning, deep learning, and generative AI. The explanation covers labeled and unlabeled data, highlighting how each is used in training AI models, particularly generative models. The role of deep learning in enabling generative AI's capabilities is also explained.

A demonstration of "Pathfinder," a personalized learning path generator built using GPT-4, showcases how AI can create customized learning journeys based on user input regarding learning goals, style, time commitment, and prior experience. The demo highlights the system's ability to generate a logical and relevant learning plan, adaptable to various subjects.

This segment focuses on the difference between labeled and unlabeled data in training AI models, explaining how labeled data (e.g., images tagged with descriptions) is used for specific tasks like image categorization, while unlabeled data (e.g., large text corpora) is used for pattern detection and generalization. The discussion includes semi-supervised learning, a technique used by models like GPT, which combines labeled and unlabeled data for training. The speaker explains the concept of foundation models: large machine learning models pre-trained on vast amounts of data.
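The labeled/unlabeled distinction described above can be sketched in a few lines. This is a toy illustration only; all file names and text here are invented for the example.

```python
# Labeled data: each example is paired with a target tag,
# suitable for supervised tasks like image categorization.
labeled = [
    ("photo_001.jpg", "golden retriever"),
    ("photo_002.jpg", "tabby cat"),
]

# Unlabeled data: raw text with no tags. A model can still
# learn patterns from it (e.g., which words tend to co-occur).
unlabeled = [
    "The quick brown fox jumps over the lazy dog.",
    "Learning paths should match the learner's goals.",
]

# Semi-supervised learning, as used by GPT-style models,
# combines both: pre-train on the large unlabeled corpus,
# then refine on the smaller labeled set.
print(len(labeled), "labeled examples,", len(unlabeled), "unlabeled documents")
```

The practical point for L&D teams is the cost asymmetry: labeled examples are expensive to produce, while unlabeled text is abundant, which is why foundation models lean on the latter.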
The process of fine-tuning, or post-training, these models for specific tasks is also discussed, using the example of Pathfinder, which was pre-trained on a large dataset and then fine-tuned for personalized learning path generation.

This segment showcases a practical exercise focused on managing new hires, demonstrating how AI tools can be used to personalize learning experiences based on individual needs and preferences. It illustrates the process of identifying learning styles, time commitment, and motivations to create a tailored learning plan.

This segment details the crucial steps in aligning AI and machine learning initiatives with broader business objectives. It emphasizes setting SMART goals, conducting a thorough infrastructure assessment to identify areas for improvement, and understanding current capabilities and resources before integration.

This section highlights the importance of building the right skills and knowledge within the team to effectively utilize AI tools. It stresses the need for L&D professionals to understand data analytics, AI algorithms, and ethical implications, while also focusing on selecting relevant tools based on objectives, ease of use, integration capabilities, and scalability.

This segment underscores the importance of creating a realistic timeline for AI integration, including key milestones and checkpoints to monitor progress and make necessary adjustments. It emphasizes the need for a flexible plan that allows for adaptation to challenges and changes in organizational priorities.

This portion addresses common challenges in AI integration, such as resistance to change, data privacy concerns, and content relevance. It advocates for proactive strategies like stakeholder engagement, rigorous testing, and clear data governance policies, emphasizing the importance of a collaborative approach to problem-solving and continuous learning.
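The internals of Pathfinder were not shown, but the user inputs the demo described (learning goal, style, time commitment, prior experience) could plausibly be assembled into a prompt for a GPT-4-class model along these lines. The function name, field names, and wording below are all hypothetical, not taken from the actual tool.

```python
def build_path_prompt(goal, style, hours_per_week, experience):
    """Assemble a personalized-learning-path prompt from user inputs.

    A sketch of the kind of prompt a Pathfinder-like tool might
    send to a large language model; the real prompt was not shown.
    """
    return (
        "Create a step-by-step learning path.\n"
        f"Goal: {goal}\n"
        f"Preferred learning style: {style}\n"
        f"Time commitment: {hours_per_week} hours per week\n"
        f"Prior experience: {experience}\n"
        "Order topics from foundational to advanced, and "
        "suggest a checkpoint assessment after each stage."
    )

prompt = build_path_prompt(
    goal="become proficient in SQL",
    style="hands-on projects",
    hours_per_week=5,
    experience="basic spreadsheets",
)
print(prompt.splitlines()[1])  # → Goal: become proficient in SQL
```

Keeping prompt assembly in a small, testable function like this is a reasonable pattern regardless of which model sits behind it, since the inputs come from a user-facing form.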
This deals with concerns related to the ownership and copyright of AI-generated content. We don't know for sure what data is in these models, but it can be assumed that if they were trained on content from the internet, a lot of that data is copyrighted. It may be protected by intellectual property laws; it may be trademarked. AI generally generates original content, but pieces of the training data may show up in its outputs.

There's a really good example of this that happened a couple of weeks ago. Some researchers figured out that if they told ChatGPT to repeat a certain word, I think the word was "poem," forever, it would start doing that and keep going. But there is a certain amount of randomness built into every response from a generative AI system, and so eventually, after repeating "poem" for a while, it started spitting out other random things, including someone's contact information and bits of data from here and there. It's an exploit that could theoretically be used to get data out of the system that wasn't intended to appear in ordinary responses.

Training doesn't ingest entire documents, so you'll never be able to say, "Print out the entirety of Moby Dick." That's not what an AI model is for. When the model is being trained, it looks at the text and extracts meaning. It's not a giant database of documents like the internet; it's a meaning-extracting machine. But there still are very important issues around copyright.

There are also huge issues around style. Do you own your style if you're a musician or a painter?
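The "meaning-extracting machine" point can be illustrated with a deliberately tiny analogy: a bigram model stores only word-transition statistics from its training text, not the text itself, yet short runs of the training data can still reappear verbatim in its output, which is exactly the memorization concern described above. The training sentence here is invented for illustration.

```python
import random
from collections import defaultdict

# A toy "language model": it keeps only word-transition counts,
# not the training documents themselves -- a very loose analogy
# for how generative models extract patterns rather than archive text.
text = "the model learns patterns the model does not store documents"
words = text.split()

transitions = defaultdict(list)
for a, b in zip(words, words[1:]):
    transitions[a].append(b)

# Generate by following stored transitions. Note that fragments
# of the original sentence can surface verbatim -- the same way
# memorized training data can surface in a large model's output.
random.seed(0)
out = ["the"]
for _ in range(4):
    out.append(random.choice(transitions[out[-1]]))
print(" ".join(out))
```

The analogy is crude (real models learn dense representations, not lookup tables), but it captures why "it doesn't store documents" and "training data can leak out" are both true at once.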
If someone says to ChatGPT, "Write me a song in the style of Bruce Springsteen," is this violating Bruce Springsteen's copyright or intellectual property rights in some way?

Then there's misinformation and deepfakes: we have to manage the potential for generative AI to create realistic but false information. Consent and privacy is another huge issue: respecting privacy and obtaining consent, especially when personal data or likenesses are included in training models. This is still very much not figured out, especially when the companies that developed these models trained them on a lot of things obtained just by scraping the web, on the theory that "it's on the web, it's public data." It's a big issue, and we have to ensure that generative AI systems are transparent in their operations and in their decision-making processes. Whether this eventually leads to a requirement for open-source data about what a model was trained on, we don't know yet.

The impact on jobs and industries, especially in creative fields traditionally reliant on human skills, is another issue we've got to figure out. This is something I spend a lot of time thinking about as a writer and a teacher. I like to think that I couldn't be replaced, and I look for opportunities to confirm that belief by looking at subpar examples of things ChatGPT generates and saying, "Well, it can't do humor. It can't do sarcasm. These are things I'm really good at, and it doesn't understand what it is to be human." But these systems are getting better, faster, all the time. So it is something to be thinking about.
I think, for example, that computer programming is moving from a field where programmers memorize how to do things with code to one where programmers are in charge of designing systems and then describing, in natural language, what the code should be to an AI model. The AI model then worries about actually writing the code statements, functions, and classes.

We also have to address the significant energy consumption and environmental footprint of training large-scale generative models. It takes a lot of energy to train a model, and it also takes a lot of energy to run the systems that query a model. So that's something that needs to be addressed.

Then there's safety, both for people designing and creating AI models and for people using them. We need to define what misuse is. We certainly need to prevent things like AI being used for cyber attacks or for automated weaponry. Those are a couple of the areas where AI has real potential for what we call dual use: the same model that can write love poems can also generate fake pictures of people in situations where they never were or never would be, and much worse. So we have to look at that.

All of this is going to require the development of effective governance frameworks and regulations to guide the ethical development and use of generative AI. Hopefully the result will be public trust: the public can have faith that the people making, running, and training these models are not evil, that they're regulated by someone, and that they can't just do whatever they want.

There are some limitations and challenges of generative AI. For example, generative AI models require large amounts of high-quality and diverse data for effective training.
And this can be a significant limitation. Training generative models also demands a lot of computational resources, which makes it costly and energy intensive.

The October 30th executive order is a road map for maintaining the US's leadership in AI technology, and it directly impacts how we approach learning and development in the AI sphere. So let's begin by understanding the breadth and depth of this executive order. The October 30th AI executive order is a comprehensive national strategy. Its purpose is to enhance the development and use of AI in a manner that upholds American values, promotes economic growth, and protects national security. It covers a wide scope, emphasizing AI's safe, secure, and trustworthy development and use.

The executive order revolves around three key objectives. First is advancing AI technology: the US is investing heavily in cutting-edge AI research, which means our L&D strategies must also focus on the latest AI advancements and innovations. Second is ensuring AI safety and ethics: as AI becomes more integrated into our lives, the need for it to be trustworthy and safe is paramount, and this has direct implications for how we train and educate our workforce. Third, the order promotes US leadership in AI, aiming to establish America as a hub of global AI innovation. This involves not only technological advancement but also setting standards for AI's ethical use.

As L&D professionals, your role in realizing these goals is crucial. You need to align L&D strategies with the rapid advancements in AI; this means staying abreast of the latest AI technologies and integrating them into your training programs. Furthermore, we have to focus on ethical AI training and awareness. It's essential that our workforce not only understands how to use AI, but also the ethical implications and responsibilities that come with it.
This approach supports a workforce transformation that's in line with the executive order's vision for AI. The executive order calls for significant investment in AI research and development. This translates to increased funding and resources dedicated to AI innovation, to foster a climate where breakthroughs in AI are more likely. This investment also encourages collaborations between the private sector and academic institutions. For us in L&D, it means there's a growing need to upskill the entire workforce in AI-related fields. Training programs must evolve to include cutting-edge AI knowledge and skills.

A big part of this executive order talks about AI literacy; it's become a national priority. The executive order emphasizes the need for widespread AI education and skills development. This is crucial in creating a tech-savvy workforce that can thrive in an increasingly AI-driven world. In response, L&D initiatives need to be tailored for AI competencies. This involves not only technical skills, but also an understanding of how AI impacts various aspects of business and society. We have to prepare our workforce for the challenges and opportunities that AI presents.

The order very much encourages responsible AI use, and so it's our responsibility to align with these standards. We need to ensure our employees are equipped to make decisions that reflect our commitment to ethical AI practices and align with national and international standards. There's also the new European law moving its way through the process, which looks like it's going to become the latest standard, setting the boundaries and rules that the rest of the world will be copying; it's largely in line with what is expressed in this order as well.

The AI era brings new challenges in terms of consumer data and privacy. The executive order underscores the importance of educating our workforce on AI's impact in these areas.
It's essential that employees understand how to handle consumer data responsibly and are aware of the relevant regulations and compliance requirements. Your L&D programs must naturally align with privacy and consumer protection laws. This involves training employees not just in compliance, but in understanding the broader implications of AI on consumer rights and privacy.

The order also focuses on supporting American workers, particularly in the context of AI-induced job transformations. This calls for reskilling and upskilling initiatives to prepare our workforce for an AI-centric economy. So our role is to focus on these transformations, ensuring that your workforce is not left behind as the technological landscape evolves. Again, we need to equip employees with the skills and knowledge to thrive in an economy where AI plays a central role, which is where we're headed. So, continuing: your commitment to continuous learning in AI is more important than ever. You have to stay ahead of AI innovation and