My journey into AI art starts with digital glitch art. Back around April, I decided to take a leap into that realm. I had been doing digital photography for a few years, shooting at the local open jam and getting hired by a few artists for promotional work. Glitch art really caught my attention, and I dove as deep into the community as I could, eventually ending up on Rob Sheridan’s Discord server.
He was making several posts about the Volstof Institute and talked about how it was made with an AI called MidJourney. I quickly signed up for the beta, and within a few weeks I received the invite to the Discord server, getting access in late May. I was on my way to exploring the possibilities that awaited.
It was like drinking from a fire hose. There was so much happening on the Discord, with people collaborating and all of us figuring out how to use it together. My first images were kinda meh, but I started to get an understanding of how best to use MidJourney and get the results I wanted.
After getting to know MidJourney better and turning out the renderings I was looking for, I began to see just how useful and powerful it is. Probably one of the strongest use cases for these text-to-image AIs is rapidly prototyping and producing concept art. The renderings from MidJourney are rarely perfect, but they capture the style and concept you’re trying to express. This technology is incredibly powerful and will fundamentally change the industry.