Experiments in AI Art – Part 1 – First Steps

DALL-E, Midjourney, Stable Diffusion – unless you have REALLY not been paying attention, you will definitely have heard of at least one of those.
If you have been paying attention, you will likely have the same “witnessing a revolution” feeling I have.
Beautiful (and, unfortunately, predictably sleazy) images produced by these txt2img AI art platforms are flooding the internet.

A tiny dive into Discord or Reddit will soon have your head spinning as the pace of new tools and techniques explodes on what feels like a daily basis.
Not just still AI images either! AI image upscalers, AI animations, AI voice changers, AI music generators – the list is endless and utterly fascinating.

I posted my first AI image on Instagram on 7th September 2022:

Midjourney Prompt: An alien, riding a motorbike, in the wilderness, photorealistic

Generated on the Midjourney Discord using one of my first free image credits.
As soon as I saw the image appear, I was pretty gobsmacked.

However, it wasn’t until the 23rd of September that I became completely obsessed:

Midjourney Prompt: A man living inside a large round sealed glass terrarium, dense foliage, many kinds of plants, detailed, cinematic lighting

This was the image that really started my (somewhat) deep dive into the AI art space; it was to be the first of many that have just stopped me in my tracks when they appeared on my screen.
Looking at it again now, a few short months later, I still REALLY love it. It is so close to the image I had in my head that it’s a little scary and yes, I think it’s beautiful.
Really beautiful.
(It does scream “generated with Midjourney v3” though)

Realising I would soon have to pay for Midjourney – plus its very recognisable (at the time) “style” and the fact that I didn’t own my generated images on Midjourney’s free tier – had me researching other platforms, and I quickly came across Stable Diffusion. (The Stable Diffusion model had only been released publicly on August 22nd!)
Or, more specifically, the Automatic 1111 local install version of Stable Diffusion.
A process not for the command-line-afraid: it involves installing Python and Git, and downloading around 11GB of “stuff”, most of which I barely understood.

I was soon up and running, though, and tried the same Midjourney prompt to see what it would give me:

Stable Diffusion Prompt: a man living inside a large round sealed glass terrarium, dense foliage, many kinds of plants, detailed, cinematic lighting
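For the code-curious: I did all of this through the Automatic 1111 web UI rather than writing anything myself, but here is a minimal sketch of what the same generation looks like in Python using Hugging Face’s diffusers library. This is an alternative route, not what the web UI does for you behind the scenes, and the model and settings below are illustrative assumptions rather than my exact setup:

```python
# A rough sketch of running the same prompt with Hugging Face's diffusers
# library (v1.4 was the publicly released Stable Diffusion model at the time).
# Assumes an NVIDIA GPU with CUDA; settings are illustrative, not my exact setup.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,  # half precision to fit in modest VRAM
).to("cuda")

prompt = (
    "a man living inside a large round sealed glass terrarium, dense foliage, "
    "many kinds of plants, detailed, cinematic lighting"
)

# 50 denoising steps and a guidance scale of 7.5 are common defaults
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("terrarium.png")
```

Whichever route you take, the same basic dance is happening underneath: a text encoder turns your prompt into numbers, and a diffusion model gradually denoises random noise towards an image that matches them.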

As a person who CANNOT draw or paint at anything above the level of a 5-year-old child, my mind was on fire with the possibilities.

Or it sort of was; I was very quickly faced with “option paralysis”.
If the only limit is what you can imagine to type into the thing, what exactly do you type?

It would seem, looking around, that the kind of person who would like an anime girlfriend has very little trouble thinking of things to type :)
I, however, couldn’t be less interested in that type of thing.

It wasn’t until I seriously started to think about how I could use this technology as another tool that things began to make more sense… And that would have to wait until after I’d been to Portland, Oregon, filming for a feature documentary on Portland’s animation scene.

See part 2 for what came next and some thoughts on the ethical issues this technology raises.

(A couple of other early experiments are in the gallery below)
