When you run Stable Diffusion on Google Colab, a very common issue is the error “RuntimeError: CUDA out of memory.”, after which generation stops working.
This issue is seen with older releases as well as with the newer Stable Diffusion v1.4. The Stable Diffusion CUDA out of memory error typically appears when the GPU does not have enough free memory for the model and the requested image size.
Below is the error snippet for the Stable Diffusion CUDA out of memory issue:

```
RuntimeError: CUDA out of memory. Tried to allocate ... If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
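The error message itself points at one workaround: setting `max_split_size_mb` through the `PYTORCH_CUDA_ALLOC_CONF` environment variable to reduce allocator fragmentation. A minimal sketch (the value 128 is only an example, not a recommendation; tune it for your GPU):

```python
import os

# Must be set before the first CUDA allocation, i.e. before importing torch.
# 128 MB is an example value; smaller values reduce fragmentation at some
# cost in allocator overhead.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```

In a Colab notebook you can achieve the same thing with `%env PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128` in a cell that runs before PyTorch is imported.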
To fix the error ‘RuntimeError: CUDA out of memory.’, download and install the latest version from Stable Diffusion’s official Git repository. Alternatively, you can clone basujindal’s fork of Stable Diffusion, which is optimized to use less GPU memory, and run your prompt again. This usually resolves the error.
Stable Diffusion is a latent text-to-image diffusion model that can turn text prompts into photorealistic images. All of the available model checkpoints are listed on its model card.
For more detailed model cards, please look at the model repositories linked under Model Access on the official website.
As per the official Stable Diffusion model card, version 1.4 has the following improvements:
- The stable-diffusion-v1-4 checkpoint resumed training from stable-diffusion-v1-2: 225,000 steps at 512×512 resolution on “laion-aesthetics v2 5+”, with 10% dropping of the text-conditioning to improve classifier-free guidance sampling.