Some styles can’t be described in words, and the model may never have been trained on them. We need a way to provide an image and tell the model, “do something like this.”

We can add a T2I-Adapter on top of Stable Diffusion to accept a style image as an additional input.

Using the technique described in Style Transfer From Text In Stable Diffusion, we just add one more step: providing a style image.

Notice that the style image impacts both the color and content of the scene.

| Style Image | Output |
| --- | --- |
| [none] | ![[Pasted image 20230624165052.png]] |
| ![[chocolate-chip-cookies.png]] | ![[Pasted image 20230624162236.png]] |
| ![[Reddit-Productivity-feature-image-in-ClickUp-blog.png]] | ![[Pasted image 20230624163213.png]] |
| ![[xzivdk4kgjxejm2qkbpb.png]] | ![[Pasted image 20230624164116.png]] |
| ![[download (1).png]] | ![[Pasted image 20230624164400.png]] |