This segment extends the concept of least squares to image smoothing. It demonstrates how image smoothing can be framed as a minimization problem involving a derivative matrix, showcasing the versatility of the least squares approach across image processing tasks. (The final code sketch at the end of this section illustrates this formulation.)

The instructor identifies the unknowns in the image alignment problem, a transformation matrix 'a' and a shift vector 'b', and introduces a least squares energy function that measures the quality of the alignment. This clearly defines the objective function that will be minimized.

This segment details the formulation of the least squares energy function, emphasizing that it is not immediately obvious why this is a least squares problem. The speaker highlights the importance of recognizing that the energy is quadratic in 'a' and 'b'.

The lecture discusses solving linear systems, particularly the least squares problems that arise in image alignment. A key matrix, AᵀA, is shown to be symmetric and positive semi-definite. The Cholesky factorization, a computationally efficient method for solving systems with symmetric positive definite matrices such as AᵀA, is derived and explained using block matrix operations and forward substitution.

This segment introduces the practical problem of image stitching: aligning two images by finding a transformation matrix and a shift vector that map key points in one image onto the corresponding points in the other. The speaker clarifies the assumptions made (the camera transformation is approximated by a 2x2 matrix) and sets the stage for solving the problem using least squares. (The first code sketch below sets up this least squares system.)

This segment introduces a more efficient algorithm for computing the Cholesky factorization. The speaker explains how to compute the diagonal element of L by separating out row and column k, then performing a matrix multiplication to obtain the desired result. The explanation includes detailed matrix manipulations and a clear account of the indexing scheme, and it emphasizes the efficiency of this approach compared to the previous method. (The second code sketch below implements this style of algorithm.)

This segment presents the result of the Cholesky factorization, showing how the product E C Eᵀ leads to a specific structure. The speaker emphasizes the importance of the matrix's structure, particularly its first row and column, and explains why the remaining block D̃ is symmetric and positive definite. The segment concludes by highlighting this factorization as a numerically stable alternative to LU factorization that also saves storage.

This segment focuses on the crucial properties of the AᵀA matrix, symmetry and positive semi-definiteness, which frequently appears in least squares problems. The speaker explains these properties and their significance in solving linear systems, laying the groundwork for further discussion.

This segment details an empirical derivation of the Cholesky factorization, starting from a symmetric matrix and applying the same operations to its rows and columns. The speaker explains the process of incorporating a square root, grouping terms, and performing row and column substitutions to achieve a factorization, highlighting the importance of forward substitution and the resulting zero elements. The explanation includes detailed matrix manipulations and justifications for each step.
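To make the alignment formulation concrete, here is a minimal sketch, not taken from the lecture, of setting up and solving the least squares system for a 2x2 transformation matrix a and a shift vector b from matched key points. The synthetic point data, variable names, and use of NumPy are illustrative assumptions; only the normal-equations structure (the symmetric positive semi-definite MᵀM) reflects the lecture's content.

```python
import numpy as np

# Hypothetical matched key points: p[i] in image 1 should map to q[i] in image 2.
rng = np.random.default_rng(0)
p = rng.standard_normal((10, 2))
a_true = np.array([[1.1, 0.2], [-0.1, 0.9]])   # illustrative ground truth
b_true = np.array([3.0, -1.0])
q = p @ a_true.T + b_true

# Unknowns: the 2x2 matrix a and the shift b, six numbers in total.
# Each correspondence gives two linear equations; stack them as M z = r
# with z = [a11, a12, a21, a22, b1, b2].
n = p.shape[0]
M = np.zeros((2 * n, 6))
r = q.reshape(-1)
for i in range(n):
    px, py = p[i]
    M[2 * i]     = [px, py, 0, 0, 1, 0]   # q_x = a11*px + a12*py + b1
    M[2 * i + 1] = [0, 0, px, py, 0, 1]   # q_y = a21*px + a22*py + b2

# Normal equations: (MᵀM) z = Mᵀ r, where MᵀM is symmetric positive semi-definite,
# so a Cholesky-based solver applies.
z = np.linalg.solve(M.T @ M, M.T @ r)
a_hat, b_hat = z[:4].reshape(2, 2), z[4:]
```

With exact correspondences as above, a_hat and b_hat recover a_true and b_true; with noisy key points the same system yields the least squares fit.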
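The row-and-column-k Cholesky algorithm described above can be written compactly. The following is a sketch of one standard formulation: the diagonal entry of L comes from separating out row and column k and subtracting the contribution of the already-computed columns, and the entries below the diagonal follow from a forward-substitution step. The function name and the NumPy vectorization are choices of this writeup, not the lecture's notation.

```python
import numpy as np

def cholesky(A):
    """Lower-triangular L with L @ L.T == A, for symmetric positive definite A."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.zeros_like(A)
    for k in range(n):
        # Diagonal entry: subtract the contribution of columns 0..k-1 of L,
        # then take the square root (this is where positive definiteness matters).
        L[k, k] = np.sqrt(A[k, k] - L[k, :k] @ L[k, :k])
        # Entries below the diagonal in column k: a forward-substitution step.
        L[k + 1:, k] = (A[k + 1:, k] - L[k + 1:, :k] @ L[k, :k]) / L[k, k]
    return L

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])
L = cholesky(A)
assert np.allclose(L @ L.T, A)
```

Because only the single triangular factor L is stored and computed, this uses roughly half the work and storage of an LU factorization, consistent with the space-saving point made in the lecture.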
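Finally, a small sketch of the smoothing-as-least-squares idea: penalizing a difference (derivative) matrix D alongside a data-fidelity term leads to a symmetric positive definite system, exactly the kind a Cholesky solver handles. The test signal, the weight lam, and the SciPy solver calls are illustrative assumptions, not details from the lecture.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Noisy 1D signal (stand-in for an image row).
n = 100
rng = np.random.default_rng(1)
x0 = np.sin(np.linspace(0, 2 * np.pi, n)) + 0.3 * rng.standard_normal(n)

# First-difference (derivative) matrix D of shape (n-1, n).
D = np.diff(np.eye(n), axis=0)
lam = 10.0  # smoothing weight (hypothetical value)

# Minimize ||x - x0||^2 + lam * ||D x||^2 over x.
# Setting the gradient to zero gives (I + lam * DᵀD) x = x0,
# a symmetric positive definite system solved here via Cholesky.
x_smooth = cho_solve(cho_factor(np.eye(n) + lam * D.T @ D), x0)
```

Larger lam trades fidelity to x0 for smoothness; the system matrix stays symmetric positive definite for any lam > 0, so the Cholesky factorization always exists.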