This lecture covers generative models trained by maximizing log-likelihood. Flow-based models are introduced first: they use invertible transformations (such as coupling layers) so that the exact likelihood can be computed efficiently via the change-of-variables formula. Autoregressive models such as PixelCNN and WaveNet instead factorize the likelihood over the data dimensions, modeling each dimension conditioned on the previous ones; they avoid latent variables entirely but pay a high cost at generation time, since samples must be produced one dimension at a time. GPT, built from transformer decoders, is also mentioned as an autoregressive approach. The lecture closes by comparing each method's advantages and disadvantages with respect to likelihood computation and generation cost.
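
To make the flow-based idea concrete, here is a minimal sketch (not code from the lecture) of a RealNVP-style affine coupling layer: because only half of the input is transformed, using a scale and shift predicted from the other half, the layer is invertible by construction and its Jacobian is triangular, so the log-determinant is just a sum of the predicted log-scales. The "networks" producing scale and shift are replaced by fixed random linear maps purely for brevity.

```python
# Sketch of an affine coupling layer; W_s and W_t stand in for the real
# scale/shift networks used in actual flow models.
import numpy as np

rng = np.random.default_rng(0)
D = 4                                            # toy data dimensionality
W_s = rng.normal(size=(D // 2, D // 2)) * 0.1    # toy "network" for log-scale
W_t = rng.normal(size=(D // 2, D // 2)) * 0.1    # toy "network" for shift

def coupling_forward(x):
    """y1 = x1;  y2 = x2 * exp(s(x1)) + t(x1).  Returns y and log|det dy/dx|."""
    x1, x2 = x[: D // 2], x[D // 2 :]
    log_s, t = W_s @ x1, W_t @ x1
    y = np.concatenate([x1, x2 * np.exp(log_s) + t])
    return y, log_s.sum()            # triangular Jacobian -> log-det is a simple sum

def coupling_inverse(y):
    """Exact inverse: x2 = (y2 - t(y1)) * exp(-s(y1))."""
    y1, y2 = y[: D // 2], y[D // 2 :]
    log_s, t = W_s @ y1, W_t @ y1
    return np.concatenate([y1, (y2 - t) * np.exp(-log_s)])

x = rng.normal(size=D)
z, log_det = coupling_forward(x)
# Change of variables: log p(x) = log N(z; 0, I) + log|det dz/dx|
log_px = -0.5 * (z @ z + D * np.log(2 * np.pi)) + log_det
print(log_px, np.allclose(coupling_inverse(z), x))   # exact likelihood, exact inverse
```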
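
The autoregressive likelihood is simply the chain rule, log p(x) = sum_i log p(x_i | x_<i). The toy sketch below (an assumed illustration, not PixelCNN, WaveNet, or GPT) uses a hand-rolled logistic conditional over binary sequences to show the trade-off mentioned above: evaluating the likelihood is one pass over the dimensions, while sampling requires one dependent step per dimension.

```python
# Toy autoregressive model over binary sequences: p(x) = prod_i p(x_i | x_<i).
# The conditional distribution is a made-up logistic model, purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
D = 8
w = rng.normal(size=D) * 0.5          # toy parameters of p(x_i = 1 | x_<i)

def cond_prob_one(x_prev):
    """p(x_i = 1 | x_<i): logistic function of the (zero-padded) history."""
    h = np.zeros(D)
    h[: len(x_prev)] = x_prev
    return 1.0 / (1.0 + np.exp(-(w @ h)))

def log_likelihood(x):
    """Likelihood evaluation: one conditional per dimension, all from the observed prefix."""
    ll = 0.0
    for i, xi in enumerate(x):
        p = cond_prob_one(x[:i])
        ll += np.log(p if xi == 1 else 1.0 - p)
    return ll

def sample():
    """Generation is inherently sequential: D dependent steps, one per dimension."""
    x = []
    for _ in range(D):
        x.append(int(rng.random() < cond_prob_one(np.array(x))))
    return np.array(x)

x = sample()
print(x, log_likelihood(x))
```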