Generating randomized art from neural network "style transfer"

(Update September 12, 2019: I wrote this in January 2017 and am copying it from the original blog, which is 404'd now as I no longer host it.)

Here’s a quick link to view this in-browser:

The Rave Maze

As for my process…

The idea, at first — and I’m going to still roll with this, eventually — was to create an art gallery of entirely random image combinations. Each run of neural-style needs two input images: one essentially functions as the “shape” of the resulting image, and the other is the style — the overall color, line shape, etc. Give it a good enough style and you can turn a piece of garbage into art, or turn art into a shitpost. As an example, I combined an image off a Google search for “Jesus” with an image of some pasta.

Doesn’t make any sense to look at, right?

But the result is great. Here’s a portrait of Jesus as portrayed in the Pastafarian timeline.

There’s a great website to test this out on without having to install or run the software yourself. I combined the above images with Deep Dream Generator and the result is pretty good, though processing there can take anywhere from a few minutes to 15 or more.

One reason — probably the biggest — why I haven’t played with style transfer much in the past is that it requires a lot of dependencies to be installed manually when running on Windows. With Linux? Pretty easy to install it and set it up to run CUDA. Not as easy with Windows, but after following the steps on Running Google’s Deep Dream on Windows (with or without CUDA) – The Easy Way I had a working setup.

Afterwards, I installed a few more dependencies, along with (optionally) NVIDIA’s cuDNN — which requires an account and a bunch of information to even download, but after putting in the application it’s approved immediately.

The implementation I’m using is style-transfer by fzliu, primarily because of how easy it is to install. I also use GraphicsMagick to downscale the results, and waifu2x-caffe to upscale them.
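For reference, here’s roughly how one run gets kicked off from Python. This is a minimal sketch: the -s/-c/-m/-g flags are written from memory of the repo’s README, so double-check them against your checkout.

```python
import subprocess

def run_style_transfer(content_path, style_path):
    """Shell out to fzliu's style.py for one content/style pair.

    Flag names (-s, -c, -m, -g) are from memory of the repo's README;
    verify against your checkout. The repo writes its result into its
    own output folder by default.
    """
    subprocess.run(
        [
            "python", "style-transfer/style.py",
            "-c", content_path,  # content image: the "shape" of the result
            "-s", style_path,    # style image: color, line quality, etc.
            "-m", "vgg16",       # which pretrained model to use
            "-g", "0",           # GPU id
        ],
        check=True,
    )
```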

So, after getting this working on Windows (to avoid rebooting into Linux), I wrote a Python script that would process images while I’m idle. With two folders, “content” and “style”, both filled with images (mostly in content), the script takes a random image from content and a random image from style; if the two haven’t been combined already, they’re run through neural-style and the output is thrown in a folder. It also records which pair of images was used for each output, in case I want to re-run the same combination with better settings later. I set it to only run once I’d been idle for a while, which, it turns out, tends to be no longer than 30 seconds if I’m at my computer working… but over time I increased the idle threshold to 5 minutes. Since it generally takes about 3.5 minutes to process an image on my setup, this gave me a pretty decently sized set of output images without using too much of my CPU or GPU while I work.
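Here’s a trimmed-down sketch of the idea. It isn’t my exact script: the folder names, log format, and style.py flags are placeholders, and the idle check uses the Windows GetLastInputInfo API through ctypes.

```python
import csv
import ctypes
import os
import random
import subprocess
import time

CONTENT_DIR = "content"
STYLE_DIR = "style"
LOG_PATH = "combinations.csv"  # which content/style pair made each output
IDLE_THRESHOLD = 5 * 60        # seconds; I started lower, settled on 5 minutes


class LASTINPUTINFO(ctypes.Structure):
    _fields_ = [("cbSize", ctypes.c_uint), ("dwTime", ctypes.c_uint)]


def seconds_idle():
    """Seconds since the last keyboard/mouse input (Windows only)."""
    info = LASTINPUTINFO()
    info.cbSize = ctypes.sizeof(info)
    ctypes.windll.user32.GetLastInputInfo(ctypes.byref(info))
    return (ctypes.windll.kernel32.GetTickCount() - info.dwTime) / 1000.0


def load_done_pairs():
    """Read the log so a content/style pair is never processed twice."""
    if not os.path.exists(LOG_PATH):
        return set()
    with open(LOG_PATH, newline="") as f:
        return {(row[0], row[1]) for row in csv.reader(f)}


def main():
    done = load_done_pairs()
    while True:
        if seconds_idle() < IDLE_THRESHOLD:
            time.sleep(30)  # check again in a bit
            continue
        content = random.choice(os.listdir(CONTENT_DIR))
        style = random.choice(os.listdir(STYLE_DIR))
        if (content, style) in done:
            continue
        # Shell out to neural-style; flags as in the earlier sketch.
        subprocess.run(
            ["python", "style-transfer/style.py",
             "-c", os.path.join(CONTENT_DIR, content),
             "-s", os.path.join(STYLE_DIR, style),
             "-m", "vgg16", "-g", "0"],
            check=True,
        )
        done.add((content, style))
        with open(LOG_PATH, "a", newline="") as f:
            csv.writer(f).writerow([content, style])


if __name__ == "__main__":
    main()
```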

After running this thing for a couple of days, tweaking the settings off and on, I realized that most of the images I liked best used bright, neon “80’s cyberpunk” style source images. I decided I’d make a gallery using only those colors.

The output of neural-style, if I want to have any reasonable (less than 30 minutes) processing time, is kind of bad. But if I set the output to around 512 pixels on the longest side, then downscale with GraphicsMagick and re-upscale with waifu2x, the result is pretty cool looking. Downscaling to 256px blends the colors together a bit, removing the graininess, and after upscaling to 1024px there’s a noticeable improvement. I edited my script, removed the styles I didn’t want, and re-ran it overnight.
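The post-processing step boils down to something like this. The gm convert call is standard GraphicsMagick; the waifu2x-caffe-cui flags are how I remember them, so verify against its help output.

```python
import os
import subprocess

def smooth_and_upscale(src, dst):
    """Downscale to 256px to blend out the grain, then upscale 4x with waifu2x."""
    tmp = src + ".small.png"
    # GraphicsMagick: fit within 256x256, preserving aspect ratio.
    subprocess.run(["gm", "convert", src, "-resize", "256x256", tmp], check=True)
    # waifu2x-caffe's CLI; the -i/-o/-m/-s flags are from memory.
    subprocess.run(
        ["waifu2x-caffe-cui", "-i", tmp, "-o", dst, "-m", "scale", "-s", "4"],
        check=True,
    )
    os.remove(tmp)  # clean up the intermediate file
```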

Surprisingly, running it overnight (and through part of the day before), I generated 452 files, taking up nearly half a gig of space. I went through, copied out the images I liked most, and batch-converted them to JPEG.
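The batch conversion is the easy part; a quick Pillow loop covers it (the “keepers” folder is just a name for illustration):

```python
import glob
from PIL import Image

# Convert the picked-out PNGs to JPEG. Quality 90 keeps the neon
# gradients from banding too badly; tune to taste.
for path in glob.glob("keepers/*.png"):
    Image.open(path).convert("RGB").save(path[:-4] + ".jpg", quality=90)
```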

Oh, and by this point I’d already removed the meme images. Most of the source art now is either my own or my fiancée’s artwork, screenshots of my Fallout or Skyrim mods, or anime/TV screenshots (primarily from Doctor Who, Pokémon, and Serial Experiments Lain).

Huh. Well, Frank the Bunny is looking a bit like SHODAN now.

After this, I quickly made a 3D maze to put the images into. I looked up some mirror maze floor plans and they all tend to use triangular sections of mirrors. I used this pattern but with walls instead of mirrors, then added a strobe light effect and general Moonside-style lighting.
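Just to illustrate the floor plan idea (this isn’t the generator I actually used, and all the numbers are arbitrary): lay walls along the edges of an equilateral-triangle grid and randomly keep a fraction of them, then hang one of the generated images on each surviving wall.

```python
import math
import random

SIDE = 3.0         # wall length, in whatever units your engine uses
ROWS, COLS = 6, 8  # size of the triangular grid
KEEP = 0.6         # fraction of edges that become walls

def tri_grid_walls():
    """Return (x1, z1, x2, z2) wall segments on an equilateral-triangle grid."""
    h = SIDE * math.sqrt(3) / 2  # height of one triangle row
    walls = []
    for r in range(ROWS):
        for c in range(COLS):
            # Lattice vertex; odd rows shift by half a side.
            x = c * SIDE + (SIDE / 2 if r % 2 else 0.0)
            z = r * h
            # The three edges leaving this vertex.
            edges = [
                (x, z, x + SIDE, z),          # horizontal
                (x, z, x + SIDE / 2, z + h),  # down-right diagonal
                (x, z, x - SIDE / 2, z + h),  # down-left diagonal
            ]
            walls.extend(e for e in edges if random.random() < KEEP)
    return walls
```

Each surviving segment then becomes a wall object with one of the output images as its texture.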

Oh, and after all that I totally plagiarized the new name for the thing from a level in a Yume Nikki fangame, “.flow”, which I found while looking up Moonside screenshots. Presenting the Rave Maze.

You can try it out in your browser, though it works best in the native JanusVR client.

And here’s a video: