ControlNet: how to tame image generation with Stable Diffusion
After the first five minutes of excitement, generating images with models like Stable Diffusion starts to feel like a gacha game. Prompts help: positive and negative prompts steer the output toward something roughly related to what we write, but getting good results with prompts alone depends heavily on luck.
To obtain more controlled results, ControlNet is one of the fundamental tools: it gives us much finer control over the initial conditions of image generation.

In the example above, we can see the result of ControlNet's lineart model, which lets us start from a sketch and apply style and color to it.

Another very useful ControlNet model is OpenPose. It extracts the pose from a reference image, which we can then combine with our prompt to obtain the result we want.
