This lecture covers eigenvalue computation. Power iteration finds the eigenvalue of largest magnitude; inverse power iteration finds the one of smallest magnitude. Shifting the matrix to A - σI allows targeting eigenvalues near σ, and Rayleigh quotient iteration refines eigenvalue/eigenvector approximations. Convergence rates depend on ratios between eigenvalues, and LU factorization speeds up inverse iteration. Short code sketches illustrating these methods follow the segment summaries below.

This segment delves into problems three and four, focusing on Mahalanobis metrics and how they relate to eigenvector problems. The instructor guides students through the problem, emphasizing that the unknown is a matrix, not just a set of eigenvectors, and that the key challenge is finding a metric matrix under which the given data points end up close together. The discussion includes examples and hints to help students approach the problem.

The instructor discusses homework problems on applications of eigenproblems and least squares, highlighting the difficulties students had with Mahalanobis metrics. The segment stresses understanding the underlying concepts rather than merely completing the problems, encouraging students to treat the assignments as opportunities for learning.

The instructor explains how minimizing a quadratic energy function under a constraint leads to an eigenvector problem, connecting this to earlier lectures on registration problems and principal component analysis to give the problem broader context. The instructor deliberately stops short of the complete solution, encouraging students to work through the problem independently.

This segment highlights the inefficiency of explicitly computing the inverse of a large matrix, especially in iterative methods where the inverse would be applied repeatedly. The speaker emphasizes the numerical instability of direct inversion and advocates LU factorization as a superior alternative: factor once, then solve two triangular systems per step. This significantly reduces the computational cost and improves numerical stability, particularly for high-dimensional matrices when the iteration converges within a relatively small number of steps.

The instructor introduces the topic of eigenvalue computation, emphasizing its importance and usefulness, and presents the spectral theorem: every real symmetric matrix factors as A = QΛQᵀ with Q orthogonal and Λ real diagonal. The segment highlights the theorem's significance in linear algebra and functional analysis and sets the stage for methods that compute eigenvalues and eigenvectors of symmetric matrices.

The instructor addresses student confusion about Tikhonov regularization, clarifying the definition used in the problem set and emphasizing the importance of recognizing high-level patterns in eigenproblems and least squares problems over memorizing specific algorithms or details.

This segment examines the convergence rate of eigenvalue iteration methods. The error of power iteration shrinks like the ratio |λ2/λ1| of the two largest-magnitude eigenvalues raised to the iteration count, so a smaller ratio means faster convergence. Modifying the matrix, for example with a shift, can improve this ratio and thereby accelerate the iteration.
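To make the iteration methods concrete, here is a minimal sketch of power iteration for a symmetric matrix; the function name, tolerance, and random starting vector are illustrative choices, not the lecture's code.

```python
import numpy as np

def power_iteration(A, num_steps=500, tol=1e-10):
    """Estimate the largest-magnitude eigenvalue of a symmetric A.

    Repeatedly applies A and renormalizes; the Rayleigh quotient of the
    current iterate serves as the eigenvalue estimate.
    """
    rng = np.random.default_rng(0)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    lam = x @ A @ x
    for _ in range(num_steps):
        y = A @ x
        x = y / np.linalg.norm(y)   # renormalize to prevent overflow
        lam_new = x @ A @ x         # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol:
            return lam_new, x
        lam = lam_new
    return lam, x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
lam, v = power_iteration(A)         # lam converges to (7 + sqrt(5)) / 2
```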
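The effect of a shift on the convergence ratio can be checked numerically. A small illustration with an assumed example spectrum; the shift value σ = 5 is chosen only for this example:

```python
import numpy as np

# Power iteration error shrinks like |lambda_2 / lambda_1|^k.
# Shifting to A - sigma*I shifts every eigenvalue by sigma, which
# changes the ratio that governs convergence.
lams = np.array([10.0, 9.0, 1.0])   # assumed example spectrum
print(abs(lams[1] / lams[0]))       # 0.9 -> slow convergence

sigma = 5.0
shifted = lams - sigma              # spectrum of A - sigma*I: [5, 4, -4]
mags = sorted(abs(shifted), reverse=True)
print(mags[1] / mags[0])            # 0.8 -> faster convergence, same top eigenvector
```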
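The "factor once, solve many times" point about LU can be shown with shifted inverse iteration, which converges to the eigenvalue of A nearest the shift σ. A sketch using SciPy's lu_factor/lu_solve; names and defaults are illustrative:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def shifted_inverse_iteration(A, sigma, num_steps=100, tol=1e-10):
    """Find the eigenvalue of symmetric A closest to the shift sigma.

    Factor (A - sigma*I) once with LU, then reuse the factorization to
    solve a pair of triangular systems at every step; the explicit
    inverse is never formed.
    """
    n = A.shape[0]
    lu, piv = lu_factor(A - sigma * np.eye(n))   # O(n^3), done once
    rng = np.random.default_rng(0)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    lam = sigma
    for _ in range(num_steps):
        y = lu_solve((lu, piv), x)               # O(n^2) per step
        x = y / np.linalg.norm(y)
        lam_new = x @ A @ x                      # Rayleigh quotient of the iterate
        if abs(lam_new - lam) < tol:
            return lam_new, x
        lam = lam_new
    return lam, x
```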
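Rayleigh quotient iteration updates the shift with the current eigenvalue estimate at every step, trading a fresh linear solve per iteration for very fast (cubic, in the symmetric case) convergence. A minimal sketch, assuming a symmetric A and an initial guess x0:

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, num_steps=20, tol=1e-12):
    """Refine an eigenpair estimate for symmetric A."""
    x = x0 / np.linalg.norm(x0)
    lam = x @ A @ x                  # initial Rayleigh quotient
    n = A.shape[0]
    for _ in range(num_steps):
        try:
            # The shift changes every step, so unlike plain inverse
            # iteration the factorization cannot be reused.
            y = np.linalg.solve(A - lam * np.eye(n), x)
        except np.linalg.LinAlgError:
            break                    # shift hit an eigenvalue exactly
        x = y / np.linalg.norm(y)
        lam_new = x @ A @ x
        if abs(lam_new - lam) < tol:
            return lam_new, x
        lam = lam_new
    return lam, x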
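For reference, a Mahalanobis metric replaces the Euclidean inner product with a symmetric positive semidefinite matrix M; the homework's twist is that M itself is the unknown. A sketch of evaluating the metric for a given M (keeping with the lecture's intent, the solution method for finding M is not shown):

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance d_M(x, y)^2 = (x - y)^T M (x - y).

    M must be symmetric positive semidefinite for this to be a valid
    (pseudo)metric; M = I recovers squared Euclidean distance.
    """
    d = x - y
    return d @ M @ d
```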
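The link between constrained quadratic minimization and eigenvectors can be verified numerically: by Lagrange multipliers, minimizing xᵀAx subject to ||x|| = 1 forces Ax = λx, so the minimizer is the eigenvector with the smallest eigenvalue. A small check with an assumed example matrix:

```python
import numpy as np

# Minimize x^T A x subject to ||x|| = 1: the stationarity condition is
# A x = lambda x, so the minimizer is the bottom eigenvector.
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
eigvals, eigvecs = np.linalg.eigh(A)    # ascending eigenvalues for symmetric A
x_star = eigvecs[:, 0]                  # constrained minimizer
print(x_star @ A @ x_star, eigvals[0])  # equal: the minimum energy is lambda_min
```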
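Since definitions of Tikhonov regularization vary (the lecture clarifies which one the problem set uses), here is a sketch of the common form that penalizes the solution norm; the weight name alpha is an assumption:

```python
import numpy as np

def tikhonov_least_squares(A, b, alpha):
    """Solve min_x ||A x - b||^2 + alpha * ||x||^2.

    The normal equations become (A^T A + alpha I) x = A^T b; the added
    alpha*I term keeps the system well conditioned even when A^T A is
    nearly singular.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
```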