10 Stable Diffusion Terms Explained

As we traverse a world reshaped by AI, we hear words thrown around that not everyone may fully understand. So I decided to list a few that I've heard in the world of Stable Diffusion and image-generation AI, and to explain them in as simple a language as possible.

Now, even if you're not a data scientist or an AI engineer, you'll use these terms and concepts if you're trying to get an image-generation AI running on your personal computer.

So here are 10 Stable Diffusion terms explained in brief:


1. Sampling steps

When generating an image, Stable Diffusion starts from pure random noise. That noise is then refined with every iteration (each iteration is one sampling step) until a recognizable image emerges. As a general rule, more sampling steps mean a more refined, detailed image, at the cost of a longer generation time.
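
As a minimal sketch using the Hugging Face diffusers library (the model ID and step counts below are illustrative choices, not the only options):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion pipeline (any SD 1.x checkpoint works here).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Fewer steps: faster but rougher. More steps: slower but usually cleaner.
rough = pipe("a castle on a hill", num_inference_steps=10).images[0]
clean = pipe("a castle on a hill", num_inference_steps=50).images[0]
```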


2. Negative prompts 

Prompts nudge Stable Diffusion toward the images you want. Negative prompts do the opposite: they let you tell SD what you do not want in the output. Common examples are terms like "bad anatomy", "ugly", "deformed", "extra limbs", and "malformed".
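
In diffusers, the same pipeline from the sketch above accepts a negative_prompt argument:

```python
# Reusing `pipe` from the previous sketch: negative_prompt lists what to avoid.
image = pipe(
    prompt="portrait of a knight, detailed armor",
    negative_prompt="bad anatomy, ugly, deformed, extra limbs, malformed",
    num_inference_steps=30,
).images[0]
```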


3. Pre-trained Model

Stable Diffusion's base model is trained on a vast collection of diverse images, so the images it generates tend to be general-purpose. Pre-trained (fine-tuned) models let you specialize SD in a specific kind of image generation. You can, for instance, train SD on anime pictures (like we did) to make it better at generating anime images.
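
With diffusers, switching to a specialized model is just a matter of loading different weights. The model ID below is hypothetical; substitute any real anime fine-tune:

```python
import torch
from diffusers import StableDiffusionPipeline

# "someuser/anime-model" is a placeholder ID for an anime fine-tune.
anime_pipe = StableDiffusionPipeline.from_pretrained(
    "someuser/anime-model", torch_dtype=torch.float16
).to("cuda")
image = anime_pipe("a swordsman under cherry blossoms, anime style").images[0]
```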


4. LoRA

LoRA, or Low-Rank Adaptation, is similar to a fine-tuned model but is trained on a much smaller set of images (roughly 10-100) to make small adjustments to the SD model. If you've seen an AI headshot generator, you've seen a LoRA in action: headshot generators use LoRAs to train SD on your photos and create AI-generated headshots of you.
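
As a sketch, diffusers can layer a LoRA file on top of a base pipeline (the file path and the "sks person" trigger token below are illustrative):

```python
# Apply LoRA weights on top of the base model; the path is hypothetical.
pipe.load_lora_weights("loras/my_headshot_lora.safetensors")

# "sks person" stands in for whatever trigger token the LoRA was trained with.
image = pipe("professional headshot of sks person, studio lighting").images[0]
```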


5. Regularization

Regularization helps SD keep a clear grasp of the general concept you're training on. When training SD to generate a specific man, for instance, a set of regularization images of men in general keeps the model anchored, so it doesn't drift into generating a boy instead of a grown man.
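
One common workflow (used in DreamBooth-style training) is to generate a folder of generic "class" images up front and feed them to the trainer as regularization images; a sketch reusing the pipeline above:

```python
import os

# Generate generic regularization ("class") images of men. During training,
# these anchor the broad concept "man" while the model learns your subject.
os.makedirs("reg_images/man", exist_ok=True)
for i in range(20):  # real setups often use 100+ class images
    img = pipe("photo of a man", num_inference_steps=30).images[0]
    img.save(f"reg_images/man/{i:03d}.png")
```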


6. Seed

A seed is the number used to generate the random noise SD starts from. SD begins every generation with random noise before refining it into the final result, and every image generated by SD records the seed that produced it. When left unspecified, SD picks a random seed, so every run starts from different noise. If you want SD to reproduce a particular image, or generate close variations of it, you can reuse that image's seed.
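
In diffusers, the seed is supplied through a torch.Generator; fixing it makes runs reproducible:

```python
# Same seed + same prompt + same settings = the same image every time.
generator = torch.Generator("cuda").manual_seed(1234)
image = pipe(
    "a red fox in the snow",
    num_inference_steps=30,
    generator=generator,
).images[0]
```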


7. Sampler or Sampling method

Samplers in Stable Diffusion are mathematical functions that try to find the shortest, most accurate path from the random initial noise to the ideal image. Different samplers cut different paths to the final image and arrive at slightly varying outputs, while taking varying amounts of time to get there.
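
diffusers calls samplers "schedulers", and you can swap them on an existing pipeline; a sketch comparing two common ones:

```python
from diffusers import DPMSolverMultistepScheduler, EulerAncestralDiscreteScheduler

# Euler Ancestral: re-injects noise each step, giving more varied results.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
euler_a = pipe("a lighthouse at dusk", num_inference_steps=25).images[0]

# DPM++ multistep: often reaches a clean image in fewer steps.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
dpm = pipe("a lighthouse at dusk", num_inference_steps=25).images[0]
```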


8. VAE

VAE stands for variational autoencoder. The VAE compresses images into a lower-dimensional (latent) form, SD does its denoising work in that compressed space, and the VAE then decodes the result back into a full image. A good VAE improves the quality and accuracy of the decoded images, especially fine details like faces and eyes.
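
In diffusers you can swap in a different VAE on an existing pipeline. The fine-tuned VAE below is one published by Stability AI, but any compatible VAE works:

```python
import torch
from diffusers import AutoencoderKL

# Load an improved VAE and attach it to the pipeline.
vae = AutoencoderKL.from_pretrained(
    "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16
).to("cuda")
pipe.vae = vae
image = pipe("a bowl of cherries, macro photo").images[0]
```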


9. Checkpoint

Checkpoints in SD are snapshots of a model's weights saved during training. Training takes many iterations, and you can have SD save its parameters at set intervals, either to resume from that point if training fails, or to use an intermediate checkpoint as your final model if the fully trained one turns out unsatisfactory. The model files you download and load into SD (.ckpt or .safetensors) are checkpoints too.
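
Recent versions of diffusers can load a single-file checkpoint directly (the file path below is hypothetical):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a checkpoint saved partway through a training run (path is illustrative).
pipe = StableDiffusionPipeline.from_single_file(
    "checkpoints/anime-finetune-epoch10.safetensors", torch_dtype=torch.float16
).to("cuda")
```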


10. Inference

Inference is the process of generating an SD image from a given prompt; it is the technical name for running a trained model to produce an output. In everyday use, "inference" and "generate" are used interchangeably.
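
Pulling several of the terms above together, a complete inference run with diffusers looks roughly like this:

```python
import torch
from diffusers import StableDiffusionPipeline

# Prompt in, image out: one full inference pass.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="a watercolor painting of a mountain village",
    negative_prompt="blurry, low quality",
    num_inference_steps=30,
    generator=torch.Generator("cuda").manual_seed(42),
).images[0]
image.save("village.png")
```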