Using Stable Diffusion, we can alter the style of an image while still leaving it recognizable as the original.

| Content | Style | Result |
| --- | --- | --- |
| Pasted image 20230624122740.png | "a bowl of salad" | Pasted image 20230623091446.png |
| sad-keanu-meme.webp | "minimalist line art" | Pasted image 20230624165052.png |

With img2img, we can provide an initial image as a starting point. Then, given a prompt and a low denoise value (~0.2), we generate a batch of slightly altered images, pick our favorite, send it back through img2img, and repeat until we have a result we like.
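The loop above can be sketched in code. This is a minimal sketch, not the exact workflow: the `refine` helper and the `generate` callback are hypothetical names, and the commented-out usage assumes the Hugging Face `diffusers` library with an illustrative model ID.

```python
from typing import Callable

def refine(image, prompt: str, rounds: int, generate: Callable):
    """Repeatedly feed an image back through img2img.

    `generate` wraps a single img2img call invoked with a low
    denoise/strength (~0.2), so each pass only nudges the style
    while preserving the overall composition.
    """
    for _ in range(rounds):
        # In the manual workflow you would generate a batch here
        # and hand-pick the best candidate before the next round.
        image = generate(image, prompt)
    return image

# Rough real-world usage (requires a GPU and a model download):
#
#   from diffusers import StableDiffusionImg2ImgPipeline
#   pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
#       "runwayml/stable-diffusion-v1-5")
#   result = refine(
#       init_image, "minimalist line art", rounds=4,
#       generate=lambda img, p: pipe(
#           prompt=p, image=img, strength=0.2).images[0])
```

Keeping the per-pass strength low is the key design choice: a single high-strength pass would discard the original composition, while many gentle passes let the style drift toward the prompt gradually.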

To further improve these results, we can pair this technique with a few ControlNet models to help keep the silhouettes consistent.

- Openpose helps keep the pose & face consistent
- Canny/Lineart/Depth help keep the structure consistent

Starting from a photo of myself, I applied this technique with the Canny & Openpose models to keep the structure consistent while the prompt changed the style.

| Input Image | Style | Result |
| --- | --- | --- |
| Pasted image 20230623095317.png | "Cyberpunk city, 4k Unreal Engine, […]" | Pasted image 20230623095327.png |
| | "Norman Rockwell, Painting, […]" | Pasted image 20230623095738.png |