This segment introduces the secant method as an alternative to Newton's method when computing derivatives is difficult or expensive. The core idea is to replace the tangent line with the secant line through the two most recent iterates. This makes the method practical when derivative calculation is impractical, such as in complex simulations, and the segment sets the stage for a discussion of its convergence properties.

This segment explains Newton's method as a fixed-point iteration on a function *g* and analyzes its convergence rate. It defines simple roots (roots where the derivative of the function is non-zero) and shows how this property determines the method's efficiency near the root, emphasizing the connection between root-finding and fixed-point iteration.

Newton's method finds roots by iterating along tangent lines, a fixed-point iteration that converges quadratically near simple roots (where the derivative is non-zero). The secant method, a derivative-free alternative, replaces tangents with secants and converges superlinearly, with order equal to the golden ratio (≈ 1.618). Hybrid methods such as Dekker's combine the speed of Newton or secant steps with the guaranteed convergence of bisection. Choosing a method means balancing the cost per iteration against the convergence rate.
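To make the contrast concrete, here is a minimal sketch of both iterations. The function names `newton` and `secant`, the tolerance, and the test function x² − 2 are illustrative choices, not from the original; the update rules themselves are the standard ones the text describes (tangent line for Newton, secant slope for the secant method).

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{k+1} = x_k - f(x_k)/f'(x_k) (tangent line).

    Requires the derivative df; converges quadratically near a simple root.
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / df(x)
    return x


def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant method: replace f'(x_k) with the slope of the secant line
    through the last two iterates -- no derivative required.
    Converges superlinearly (order ~1.618, the golden ratio).
    """
    f0 = f(x0)
    for _ in range(max_iter):
        f1 = f(x1)
        if abs(f1) < tol:
            break
        x0, x1, f0 = x1, x1 - f1 * (x1 - x0) / (f1 - f0), f1
    return x1


# Example (illustrative): find the simple root sqrt(2) of f(x) = x^2 - 2
f = lambda x: x * x - 2
df = lambda x: 2 * x
print(newton(f, df, x0=1.0))   # ~1.4142135623730951
print(secant(f, 1.0, 2.0))     # same root, but no derivative needed
```

Newton needs one `f` and one `df` evaluation per step; the secant method needs only one new `f` evaluation, which is why its slower per-step convergence can still win when derivatives are expensive.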