Researchers from the Max Planck Institute for Informatics, the Saarbrücken Research Center for Visual Computing, Interaction and AI, MIT, the University of Pennsylvania, and Google have collaborated to release a GAN-based image manipulator that enables synthesis of flexible visual content with precise control over pose, shape, expression, and layout. The manipulator, called DragGAN, allows interactive point-based manipulation on the generative image manifold: users drag handle points on an image toward target points, and the generator produces the corresponding edit. The publication is titled “Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold”.
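To give a rough intuition for point-based dragging, the sketch below is a toy illustration only, not the authors' method: a hypothetical linear "generator" maps a latent code to a handle-point position, and the latent is optimized by gradient descent so the handle point moves to a user-chosen target. DragGAN itself works on a pre-trained GAN and tracks points in the generator's intermediate feature maps; everything here (the matrix `A`, the `drag` loop, the learning rate) is an invented stand-in.

```python
import numpy as np

# Toy stand-in for a generator: maps a 2-D latent code w to a 2-D
# "handle point" position. In DragGAN the generator is a pre-trained
# GAN and points are tracked in its feature maps; this linear map is
# purely illustrative.
A = np.array([[1.5, 0.3],
              [-0.2, 1.1]])

def handle_point(w):
    return A @ w

def drag(w, target, lr=0.1, steps=200):
    """Nudge the latent so the handle point moves toward the target,
    mimicking the optimize-latent / re-track iteration at toy scale."""
    for _ in range(steps):
        p = handle_point(w)
        grad = A.T @ (p - target)   # gradient of 0.5 * ||A w - target||^2
        w = w - lr * grad
    return w

w = drag(np.zeros(2), np.array([2.0, -1.0]))
print(np.allclose(handle_point(w), [2.0, -1.0], atol=1e-3))  # True
```

In the real system the latent update is driven by a motion-supervision loss on GAN features rather than a closed-form gradient, but the shape of the loop, move the point a little and re-optimize, is the same.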

Source: Holy Cow! Introducing DragGAN – Towards AI
