Stable Diffusion, the AI model that generates strikingly realistic images from text, has been updated with a bunch of new features. However, many users aren’t happy, complaining that the new software can no longer produce pictures in the styles of specific artists or generate NSFW artwork, The Verge has reported.
Version 2 does introduce a number of new features. Key among them is a new text encoder called OpenCLIP that “greatly improves the quality of the generated images compared to earlier V1 releases,” according to Stability AI, the company behind Stable Diffusion. The release also relies on an NSFW filter from LAION, which was used to strip adult content from the training data.
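For anyone who wants to try the new release rather than read about it, the v2 checkpoints are published on Hugging Face and can be loaded with the diffusers library. The snippet below is a minimal sketch, not an official example: it assumes the stabilityai/stable-diffusion-2 model ID, a CUDA GPU, and that diffusers, transformers and torch are installed. The new OpenCLIP-based text encoder comes bundled inside the pipeline.

```python
# Minimal sketch: loading Stable Diffusion 2 with Hugging Face diffusers (assumed setup).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2",  # v2 checkpoint published by Stability AI
    torch_dtype=torch.float16,
).to("cuda")

# The pipeline bundles the new OpenCLIP-based text encoder alongside the UNet and VAE.
print(type(pipe.text_encoder).__name__)

image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```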
Other features include a depth-to-image diffusion model that allows one to create transformations “that look radically different from the original but still preserve the coherence and depth from an image,” according to Stability AI. In other words, if you create a new version of an image, objects will still correctly appear in front of or behind other objects. Finally, a text-guided inpainting model makes it easy to switch out parts of an image, keeping a cat’s face while changing out its body, for example.
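To give a rough idea of how the depth-to-image model is used in practice, here is a hedged sketch built on the same diffusers library. It assumes the stabilityai/stable-diffusion-2-depth checkpoint and a hypothetical input photo named cat.png, neither of which comes from Stability AI's announcement.

```python
# Sketch of depth-to-image generation via diffusers (assumed model ID and input file).
import torch
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth",
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("cat.png")  # hypothetical input photo

# The pipeline infers a depth map from the input, then generates a new image
# that follows the prompt while keeping the original scene's depth layout.
result = pipe(
    prompt="a bronze statue of a cat in a museum",
    image=init_image,
    strength=0.7,  # how far the output is allowed to drift from the input
).images[0]
result.save("cat_statue.png")
```

The text-guided inpainting model is exposed the same way through an inpainting pipeline, which additionally takes a mask image marking the region to replace.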
However, the update makes it harder to create certain kinds of content, including photorealistic images of celebrities, nude and pornographic output, and pictures that match the style of specific artists. Users have reported that asking Stable Diffusion Version 2 to generate images in the style of Greg Rutkowski, an artist frequently imitated in AI art, no longer works as it used to. “They have nerfed the model,” said one Reddit user.
Stable Diffusion has been particularly popular for generating AI art because it’s open source and can be built upon, while rivals like DALL-E are closed models. For example, the VFX YouTube channel Corridor Crew showed off Dreambooth, a fine-tuning tool that let them generate images based on their own personal photos.
Stable Diffusion can copy artists like Rutkowski by training on their work, examining images and looking for patterns. Doing this is probably legal (though in a grey area), as we detailed in our explainer earlier this year. However, Stable Diffusion’s license agreement bans people from using the model in a way that breaks any laws.
Despite that, Rutkowski and other artists have objected to the practice. “I probably won’t be able to find my work out there because [the internet] will be flooded with AI art,” Rutkowski told MIT Technology Review. “That’s concerning.”