This presentation introduces image translation models that extend CycleGAN. StarGAN handles translation across multiple domains (and datasets) with a single model, avoiding the high cost of training a separate CycleGAN for every domain pair. InstaGAN uses instance information to transform only the specified objects in an image, overcoming CycleGAN's difficulty with changes in object shape. Finally, MUNIT decomposes an image into a content code and a style code in latent space, enabling both one-to-many translation and translations that blend the content of one image with the style of another.
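
To make the content/style decomposition concrete, here is a minimal PyTorch sketch of the MUNIT-style idea: encode content and style separately, then decode any content code together with any style code. All module names, layer sizes, and the simple feature-wise modulation are illustrative assumptions, not MUNIT's actual architecture (which uses residual blocks and AdaIN).

```python
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Maps an image to a spatial content code (toy layer sizes, assumed)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class StyleEncoder(nn.Module):
    """Maps an image to a flat style vector via global pooling."""
    def __init__(self, style_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, style_dim)
    def forward(self, x):
        return self.fc(self.net(x).flatten(1))

class Decoder(nn.Module):
    """Rebuilds an image from a content code modulated by a style vector."""
    def __init__(self, style_dim=8):
        super().__init__()
        self.style_proj = nn.Linear(style_dim, 128)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, content, style):
        # Simple channel-wise scaling stands in for MUNIT's AdaIN layers.
        scale = self.style_proj(style).unsqueeze(-1).unsqueeze(-1)
        return self.net(content * scale)

# Usage: keep the content of image_a while borrowing the style of image_b
# (two-image blending), or sample a random style for one-to-many translation.
enc_c, enc_s, dec = ContentEncoder(), StyleEncoder(), Decoder()
image_a = torch.randn(1, 3, 64, 64)
image_b = torch.randn(1, 3, 64, 64)
blended = dec(enc_c(image_a), enc_s(image_b))     # content of A, style of B
sampled = dec(enc_c(image_a), torch.randn(1, 8))  # content of A, random style
```

Because the style code is just a vector, translating one input with many sampled style vectors yields the one-to-many behavior described above.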