This segment delves into the critical role of trust in AI application design, particularly in educational contexts. It discusses the risks of both overtrusting and mistrusting AI systems and introduces two key strategies for building trust: explainability and control. Explainability involves clearly communicating how the AI arrives at its conclusions, using examples such as generating summaries from notes or showing the source of the information. Control focuses on empowering users to modify prompts and results, strengthening their sense of agency and understanding. The segment also emphasizes tailoring explanations to different user groups (students vs. teachers) so they remain clear and comprehensible.

This segment provides a comprehensive definition of user experience (UX), focusing on how users interact with a product or service, from onboarding through task completion. It emphasizes a user-centered design approach and highlights four key aspects of UX in AI applications: functionality, accessibility, reliability, and pleasantness. The explanation uses the example of an educational AI product for students and teachers, illustrating how different user groups have varying needs and capabilities.

Overall, this lesson covers designing user experiences for AI applications, with a focus on trust, transparency, and collaboration. Key aspects include functionality, accessibility, reliability, and pleasantness. Building trust relies on explainability (how the AI works) and user control, while collaboration is fostered through feedback loops and clear communication of the AI's capabilities and limitations. The lesson closes with a challenge: design a user opt-in/opt-out flow for data collection.
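As a minimal sketch of how these ideas might show up in application code (not part of the lesson's materials; all names below are hypothetical), the example pairs an AI answer with its sources for explainability, records thumbs-up/thumbs-down feedback to close the feedback loop, and honors a per-user opt-in flag for data collection, the scenario posed by the challenge.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class AIResponse:
    """An AI answer bundled with the sources it was drawn from (explainability)."""
    answer: str
    sources: List[str]  # e.g. note titles or URLs the summary was generated from


@dataclass
class UserSettings:
    """Per-user consent and agency controls."""
    collect_feedback: bool = False  # data collection is opt-in, so it starts off


@dataclass
class FeedbackStore:
    """Collects thumbs-up / thumbs-down signals only from users who opted in."""
    records: List[dict] = field(default_factory=list)

    def record(self, user: UserSettings, response: AIResponse, helpful: bool) -> bool:
        if not user.collect_feedback:  # honor the opt-out: store nothing
            return False
        self.records.append({"answer": response.answer, "helpful": helpful})
        return True


# Usage: show the answer together with its sources, then ask for feedback.
response = AIResponse(
    answer="Photosynthesis converts light energy into chemical energy.",
    sources=["Biology notes, chapter 4"],
)
settings = UserSettings(collect_feedback=True)  # user explicitly opted in
store = FeedbackStore()
store.record(settings, response, helpful=True)
```

Defaulting `collect_feedback` to `False` keeps data collection strictly opt-in; it is flipped to `True` only when the user explicitly consents, which is one simple way to give users the kind of control the lesson emphasizes.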