LoRa Training with Kohya

EDIT: The UI has been updated. While the information below is still useful, you may want to consider looking at the new tutorial.

How to train a LoRA using Kohya. This tutorial is based on a community day given by Revolved. If you prefer a step-by-step video tutorial, please check that out instead.

Step 1: Select Kohya on the left side, then hit Select and continue until it launches. You will want to use a Medium or Large server.

Step 2: Go to the LoRA tab.

Change Model to Stable-diffusion-xl-base-1.0

Step 3: Upload Pictures. Select the 3 lines on the right side.

Select New Folder

Label the Folder train

Repeat the process inside the train folder to create a second folder where you will put your images. Mine will be gloom.

Upload your images into your new folder.

Step 4: Set up the Dataset Preparation tab under the LoRA tab.

Instance Prompt: This is the token you will use when prompting later with your LoRA.
Class Prompt: Person, Style, etc.
Training Images: /mnt/private/train/(Your Folder name)/
Repeats: Let’s select 20 for now.
Destination directory: /mnt/private/train/(Your Folder name)/

Select Prepare Training Data. It may not look like anything happened; just wait a moment.

Then, in the top right, click on the train folder (or reload) and go back to the folder you created. For me it is the gloom folder.

When you click on train then gloom you will then see some new folders created.

Click on the img folder and you should see a newly created folder. The 20 in its name stands for the “20 repeats” we selected earlier. Click on that folder.

You should now see all your images in that new folder.
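The folder name Kohya generated encodes the dataset settings: the trainer reads the repeat count from an image folder named in the pattern "(repeats)_(instance prompt) (class prompt)", e.g. 20_gloom person. A minimal sketch of that naming convention (the example folder name is hypothetical):

```python
# Parse a Kohya-style dataset folder name of the form
# "<repeats>_<instance prompt> <class prompt>", e.g. "20_gloom person".
def parse_dataset_folder(name: str) -> tuple[int, str]:
    repeats, _, prompt = name.partition("_")
    return int(repeats), prompt

repeats, prompt = parse_dataset_folder("20_gloom person")
print(repeats, prompt)  # 20 gloom person
```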

Step 5: Utilities Tab: Setting up captioning

Select BLIP Captioning.

On this page, add the directory and put the token in the “Prefix to add to BLIP caption” field. Then select Caption images. You can also caption manually or use other methods.

This may take a few moments. You can reload and go back to the folder to check on the progress, or look at the logs. A common error is mistyping the folder path.

Step 6: On the LoRA tab’s Source Model tab, make sure you have stable-diffusion-xl-base-1.0 selected.

Next go to the folders tab.

Add the file paths to the directories we created. Also add the Model output name, and in Training comment you can put the target word.

Parameters Tab!

Presets: SDXL -Lora AI_Now prodigy v1.0 (Should fill out most of the information for us)

Epochs: Take the max steps you want, divide that by the number of images you have, and divide that by your repeats (20 here). Or just put a small number like 4-6. An epoch is one pass through all your images and classification images (if you are using them).
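The rule of thumb above can be sketched as a quick calculation. The numbers here (2000 target steps, 25 images) are hypothetical examples, not recommendations:

```python
# epochs = max_steps / num_images / repeats, rounded to a whole number.
def epochs_for(max_steps: int, num_images: int, repeats: int) -> int:
    return max(1, round(max_steps / (num_images * repeats)))

print(epochs_for(2000, 25, 20))  # 4
```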

Network Rank: 64 / Network Alpha: 32
The higher your network rank and network alpha, the larger your LoRA file will be, so the fewer LoRAs you will be able to save without filling up your storage. Although it may be tempting to raise these settings, note that many trainers have found that larger does not necessarily mean better; it’s a case of diminishing returns.

We recommend batch size 1 for smaller datasets (say, under 30-50 images), as it will improve accuracy. The higher the batch size, the more general the training will be, even though it can be MUCH faster! Note that higher batch sizes need more VRAM, so you may see CUDA errors when you raise the batch size above 1, depending on which server size you are using.
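To see why a larger batch is faster, note that one epoch covers num_images × repeats samples, processed batch_size at a time. A small sketch, with illustrative numbers (25 images, 20 repeats):

```python
import math

# Steps per epoch shrink as batch size grows, since each step
# processes batch_size samples at once.
def steps_per_epoch(num_images: int, repeats: int, batch_size: int) -> int:
    return math.ceil(num_images * repeats / batch_size)

print(steps_per_epoch(25, 20, 1))  # 500
print(steps_per_epoch(25, 20, 4))  # 125
```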

Select the Advanced Tab

Scroll down to Save every N Steps and enter how often you want it to save the LoRA. I put 300 steps in this example. Keep in mind how much storage you have available; if you run out of storage the training will stop.

I ended up liking 100 steps so I can pick exactly the best one. If you have the space you can do this too, but don’t forget to delete the extra files when you are done, because they take up a lot of space. You can also use "Save Every N Epochs" instead. If you have a low number of epochs, around 10, then this ends up being the perfect amount and you don’t need Save Every N Steps.
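Since running out of storage stops the training, it is worth estimating the cost of intermediate saves up front. A rough sketch; the 400 MB per-file figure is an assumption (actual size depends on your network rank and precision, so check one of your own output files):

```python
# Estimate disk usage of intermediate checkpoints:
# one file every `save_every` steps, each `file_mb` megabytes (assumed).
def checkpoint_storage_mb(total_steps: int, save_every: int, file_mb: float) -> float:
    return (total_steps // save_every) * file_mb

print(checkpoint_storage_mb(2000, 100, 400))  # 8000.0 (20 files)
```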

Scroll down to the bottom: if you are using the Weights & Biases website (https://wandb.ai), add your API key there. This will allow you to track the progress of your training. The site is free to use, although they do have paid plans for heavy users. It will allow you to compare your trainings, view your samples, etc.

Note that TensorBoard cannot be used on RunDiffusion; we find WandB is a much improved experience anyway!

Select the Sample tab.

The sample should be something you would prompt to generate an image so you can view samples throughout the training process.

Example Prompt: gloom_shadows a mouse, 8k resolution, photograph, good quality --n bad quality, poor quality, blurry, bad composition --w 1024 --h 1024 --d 3456 --l 6.5 --s 28
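In Kohya's sample prompt syntax, the trailing flags set the generation options: --n is the negative prompt, --w and --h are width and height, --d is the seed, --l is the CFG scale, and --s is the sampler step count. A minimal sketch of splitting such a line into prompt and options (the example prompt is shortened):

```python
# Split a Kohya-style sample prompt into the positive prompt and a
# dict of its "--x value" option flags.
def parse_sample_prompt(line: str) -> tuple[str, dict[str, str]]:
    prompt, *flags = line.split(" --")
    opts = {}
    for flag in flags:
        key, _, value = flag.partition(" ")
        opts[key] = value.strip()
    return prompt.strip(), opts

prompt, opts = parse_sample_prompt(
    "gloom_shadows a mouse --n blurry --w 1024 --h 1024 --d 3456 --l 6.5 --s 28"
)
print(opts["w"], opts["l"])  # 1024 6.5
```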

Go back to the Source Model tab and save your config. Type out the full path, e.g. /mnt/private/train/myconfig.json

Now hit "Start Training!"

To view the Log while it is in process select the Server Manager tab on the left side.

You can also check the log file in your files, under logs > koya.log.

WandB will have the training data showing your samples and graphs.

Make sure to monitor your storage so you don’t run out of space while training, as that will end the training without throwing any error in the logs.

About the author
Adam Stewart
