This segment explains how a quadratic function can be minimized in n steps by moving along conjugate directions, vectors satisfying vᵢᵀAvⱼ = 0 for i ≠ j. Building on the previously proven lemma, it shows that successive line searches along such directions solve the linear system Ax = b exactly in n steps, which is the conjugate gradient method's key computational advantage: each iterate is optimal within a successively larger Krylov subspace, in contrast to the slower progress of gradient descent. The remaining question is how to generate conjugate directions efficiently. Gram-Schmidt orthogonalization is the obvious candidate, but it is numerically unstable and its cost grows with every iteration, since each new direction must be projected against all previously computed search directions. This limitation motivates the more efficient algorithm introduced in the next part of the lecture.
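To make the contrast concrete, here is a minimal sketch of the conjugate gradient iteration (assuming NumPy and a symmetric positive definite matrix A; the function and variable names are illustrative, not from the lecture). The point is the short recurrence p = r + βp: in exact arithmetic it keeps each new search direction A-conjugate to all previous ones without storing them or running the growing sequence of projections that an explicit Gram-Schmidt construction would require.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Minimize f(x) = 0.5 x^T A x - b^T x (equivalently, solve Ax = b)
    for symmetric positive definite A; converges in at most n steps
    in exact arithmetic."""
    n = b.shape[0]
    x = np.zeros(n)
    r = b - A @ x            # residual = negative gradient of f at x
    p = r.copy()             # first search direction
    rs_old = r @ r
    for _ in range(n):       # at most n conjugate directions are needed
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        # Short recurrence: only r and p are kept, yet the new direction
        # remains A-conjugate to every previous one.
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small SPD example: CG should reach the solution in at most n = 3 steps.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = conjugate_gradient(A, b)
print(x, np.allclose(A @ x, b))
```

A Gram-Schmidt-style construction of the same directions would instead have to keep every previous pᵢ and project each candidate against all of them, which is exactly the growing per-iteration cost the segment describes.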