Glitchwerks Chromafield

I was able to get my hands on the Glitchwerks Chromafield back in June of 2023. The Chromafield is a circuit-bent Canon A540 digital camera. Three switches toggle the glitch effects, while the camera retains all of its original features and functions.

I’ve done a bit of experimenting with the camera, and I use it often to capture glitches on the go. I’ve gotten some very vivid captures that bring a unique vibe to the world around us.

I’ve even used it as video input for a feedback loop at a live show. The colors were incredibly bright and brought a whole new dimension to the performance. I’ve also used it to capture TV footage for an instaglitch.

Overall, I’m really happy with it, and it’s a great addition to my glitch arsenal. One thing I had to do when I got it, just as an FYI, was replace the CMOS battery because it was dead. With no CMOS battery, the camera drained the AAs very quickly.

Mothership Deluxe by Gator Glitch Gear

This post has been updated since initial publication.

This has been a long time coming, but I’m finally getting around to writing gear reviews! First up, we have the Mothership Deluxe by Gator Glitch Gear. The Mothership was the first circuit bent piece of analog glitching equipment I was able to obtain. Using the Archer Super Video Processor as a base, Gator creates a device that is able to explore new dimensions within the analog glitchverse.

It took some time to get a handle on what the device is able to do, but I soon found a setting that allowed me to create “The Bend”, as I call it. Examples of The Bend are below:

The Bend has been a core component of my glitch art and is featured in a number of my pieces. I’ve also been able to get great moving effects from this device, as well as rainbows for custom logo work.

One of the issues I had with this device was keeping track of the settings I used to achieve an awesome effect. To help with this, I am announcing the release of my patch sheet template, along with 4 patch settings for the Mothership! I have them stored in a folder on my Google Drive, where you can view and download the PDFs.

One of the things I’ve found with these analog devices is that the effect has to be created by twisting the knobs in a certain order. Normally I start on the left and move right to change the settings, but sometimes you need to wiggle some of them randomly to get the desired effect.

After speaking with Gator, I realized there are a few other aspects of analog glitching and the Mothership I forgot to mention. First, the source media has an influence on what the resulting glitch will be. Going full analog (VCR, camcorder, etc.) will produce different visuals than starting from a digital source and downgrading the signal to composite. Second, the type of TV used will also influence the resulting image. For glitching, I use a generic Target-brand TV from the early 2000s; different models may or may not produce the same glitch effects I have achieved. The patch notes I provide are a guide; hopefully you can find amazing effects and different sweet spots!

If you make some interesting patches, please let me know! I’m always finding new things with this device and I’d be very keen to see how others are using it.

Psych Glitch

Resolume is a really powerful piece of VJ software that I have been exploring a lot while preparing for live visual shows. One output of these experiments was recording some footage of my analog glitch art run through a few effects.

The analog textures come alive in new ways when combined with digital processing effects. To achieve this output, I’m using the Mirror Quad and Delay RGB effects. Not every analog glitch capture has worked for this; it needs to be something that scrolls to make the desired output.
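For the curious, the core idea behind a Delay RGB-style effect (each color channel pulled from a different point in time) can be sketched in a few lines of Python. This is a conceptual toy, not Resolume's implementation; the "frames" here are hypothetical (r, g, b) tuples standing in for full video frames.

```python
from collections import deque

def delay_rgb(frames, g_delay=2, b_delay=4):
    """Toy sketch of a Delay RGB-style effect: red comes from the current
    frame, green and blue come from earlier frames in a ring buffer."""
    buffer = deque(maxlen=b_delay + 1)
    out = []
    for frame in frames:
        buffer.append(frame)
        r = buffer[-1][0]                                  # red: current frame
        g = buffer[max(0, len(buffer) - 1 - g_delay)][1]   # green: a few frames back
        b = buffer[0][2]                                   # blue: oldest frame held
        out.append((r, g, b))
    return out

# Dummy grayscale "frames" where every channel equals the frame index.
frames = [(t, t, t) for t in range(8)]
print(delay_rgb(frames))
```

Once the buffer fills, red tracks the current frame while green and blue trail behind, which is why the effect only really shines on footage that scrolls: static captures produce identical channels and no visible fringing.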

I started using the output as a backdrop for recording some jams with the Moog Sound Studio. The result is something that is visually and audibly stunning and mesmerizing.

I plan on creating a collection of these videos to sell as a pack for creatives to use in their sets or whatever creative output they explore.

Live Visuals – Reflection on the show at the Downstairs in Ithaca 8/29

I set out with two goals for this year; one was to get my art in more galleries and the second was to do one live visual show. I can happily say that both of these goals have been achieved!

Through connections I’ve made from performing live with Adam Arritola, a request was put out for someone to do live visuals for an upcoming show in Ithaca. I quickly hopped on the request and was booked to perform visuals for the gig. I had been practicing live visuals in preparation for such an opportunity and was excited to tackle the challenge.

For each artist, I created a setlist of different visual clips and effects. I listened to some of their music, asked them how they “saw” their performance, and matched my footage to that. I mixed together footage I’ve taken, custom glitch footage I’ve made, and some stock video to put together an amazing visual performance. I also used a Glitchwerks Chromafield for some live glitch visuals of the performers.

The event went really well; the artists were pleased with the show they had. It certainly added a bit of extra flair to what was going on. I was able to capture some video, which is up on YouTube:

Glitch and AI: Exploring a new digital consciousness – FUBAR 2023

I had the amazing opportunity to present at this year’s FUBAR conference. I decided on a presentation that combined both my interest in glitch art and generative AI. I’ve long been disappointed with how many generative AI systems struggle to produce images that capture the analog-CRT-glitch aesthetic.

Looking to change this, I finally decided to dive into training my own model to use with Stable Diffusion. There are several guides available, but I didn’t feel like they explained the process clearly. Doing a bit more research, I found an online service that will generate the model based on images you provide. As a bonus, it builds off the standard SD v1.5 model, so there’s a solid base to create art with.

Using Dreamlook, I created my own SD model using 44 of my analog glitch art images. The results have been amazing!

Check out my whole talk on YouTube, where I go into much more detail about the process!

Playing the SubHarmonicon Live

As someone who has been playing live regularly, my mind almost instantly tried to figure out how to perform with the various Moog synths in a live setting. On YouTube and in Facebook groups, it seemed like most of the musicians that use these instruments are in a studio setting. I challenged myself to find a way to perform live with these instruments with other musicians at the weekly open jam I attend that is mostly blues/rock/jazz.

I have very little keyboard or piano knowledge, so using the Mother-32 did not seem practical; having to program the sequencer for each song does not work well in a dynamic, live environment. I also ruled out the DFAM, as I am not looking to provide percussion. That left the SubHarmonicon as the most promising choice to use live.

I picked up a MIDI keyboard to let me change the key the SubH is playing in so I can stay in tune with the other musicians. Using the SubH’s MIDI interface, hitting a note on the keyboard changes the pitch the oscillators operate on.
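For anyone curious about what’s happening under the hood, the MIDI-note-to-pitch mapping is standard equal temperament. This little Python sketch shows the convention (not Moog’s actual firmware, just the math the keyboard-to-oscillator mapping follows):

```python
def midi_to_hz(note: int) -> float:
    """Standard 12-TET tuning: MIDI note 69 is A4 at 440 Hz,
    and each key up or down multiplies the pitch by 2**(1/12)."""
    return 440.0 * 2 ** ((note - 69) / 12)

# Hitting middle C (MIDI note 60) would retune the oscillators'
# base pitch to roughly 261.63 Hz.
print(round(midi_to_hz(60), 2))
```

This is why a plain MIDI keyboard is enough to keep the SubH in key with other musicians: every key press maps deterministically to a pitch the oscillators can follow.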

After a bit of trial and error, I started to get the hang of using the instrument in a live setting. Using the built-in sequencer, I try to adjust each note to create some sort of pattern: high/low, ascending, or descending. Dropping the VCA decay to 0 and raising the VCA attack to almost the 8 o’clock position can create a cool doppler effect. Doing the same to the VCF attack/decay can create a pulsating rhythm.
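The attack/decay tricks above amount to reshaping an AD envelope. As a rough illustration (hypothetical linear ramps, not the SubH’s actual envelope curves), here is what those two knobs are doing to the amplitude over time:

```python
def ad_envelope(attack_samples: int, decay_samples: int) -> list:
    """Build a simple linear attack/decay amplitude envelope."""
    rise = [i / attack_samples for i in range(attack_samples)]      # 0 -> 1
    fall = [1.0 - i / decay_samples for i in range(decay_samples)]  # 1 -> 0
    return rise + fall

# Long attack, near-instant decay: the note swells in and cuts off,
# similar to the doppler-like effect described above.
swell = ad_envelope(attack_samples=50, decay_samples=1)

# Short attack, long decay: a percussive pulse that fades out,
# closer to the pulsating-rhythm setting.
pulse = ad_envelope(attack_samples=1, decay_samples=50)
```

Applying the first shape to the VCA affects loudness; applying a similar shape to the VCF sweeps the filter cutoff instead, which is where the pulsating character comes from.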

To my surprise, using the SubH at the jam was very well received. Not only was there novelty around a new and weird instrument, but musicians asked me to join them on their jams. I found the SubH works well with more open-ended jams.

For live sound, I tried a few different things. Originally I wanted to go into the house PA so that I wouldn’t have to bring my own equipment, but with how I’m set up when I get there, I have no monitor facing me and it’s difficult to hear the synth through the mix. I tried one of the house amps, but the sound was incredibly muddled and still hard to hear.

I ended up going to my local guitar store and buying a Boss Katana-50 Mk-II. I tried a few other small 30-50 watt solid state amps but they just didn’t sound right. The Katana gave me probably the clearest sound I’ve heard for the SubH. Having full control over my sound at the jam has greatly improved my performance since I can actually hear myself.

A goal I had was to record my jam sessions to reflect on and reuse. I’ve been posting some of the recordings on my SoundCloud. One of my favorites is “Name of the Wind.”

Dealing with Content Stealers

I had been sitting on my glitch of the Flower City logo for a while before sharing it with the Rochester community. I submitted a print to the RoCo 6×6 fundraising show and posted it only to the r/Rochester subreddit.

I had the design up on my Threadless shop for a while, but it didn’t gain any traction. I decided to grab one for myself to wear around and maybe drum up some interest. Once the shirt arrived, I shared a picture of it on the subreddit again and got a *lot* of folks interested in the design. Not wanting to annoy the mods, I did not post a link to where the shirt was for sale. Instead, I decided to DM each person who showed interest and provide them with a link to buy the shirt.

Not long after that, someone stole my design and posted it on another POD site. This person posted the link saying “In case anyone want this” and received a bunch of upvotes. I quickly replied to the comment saying it was a scam listing and it was not from my store.

The mods were quick to remove the link; however, the scammer posted it again. I reported that link, and it was also removed. I also filed a DMCA claim with the POD’s parent company, stating that the listing infringed on my work.

The POD’s claim process stated it could take a few days, and I was worried that the listing would stay up for a long time. Luckily, the site responded to my request within 24 hours and the offending content was removed.

It’s incredibly frustrating to see someone steal your content and try to sell it in the same thread where you are trying to promote it. Whoever did it obviously has no shame. If you’re interested in grabbing the shirt from my Threadless shop, here is the link:

Exploring Synths

Music has been a part of my life since I was very young. I was always interested in it and would pretend to play on stage in front of an “audience,” using a pull-out sofa bed and a toy guitar my parents had gotten me. Around the age of 15, having a real guitar became a reality, and I never stopped practicing. Even today, I play out pretty regularly at a local open jam with a variety of other musicians.

Electronic music had interested me a bit, but never to the point of ever getting equipment. I remember visiting Alto Music once and being fascinated by a blue synthesizer that made the most bizarre sounds I had ever heard. Sadly, the price tag was well out of reach for an 18-year-old, but it is something I wish I could find again.

While scrolling on Facebook one day, I saw a mention of the Moog Sound Studio and a demo video; I don’t think it was an ad, but rather someone posting in one of the groups I was in. It caught my attention as compact and modular, so I started researching the product and modular synths in general. What caught my fascination was the ability to transform the sound in almost endless ways. I immediately saw the potential of pairing this type of music with my glitch videos to create a sonic and visual experience for people in live and recorded settings.

The Moog Sound Studio is a three-piece bundle that includes the Moog DFAM (Drummer From Another Mother), Mother-32, and Subharmonicon. I watched countless demos and started reading up on how to even operate these things, to make sure they would fit the vision I have for myself. Satisfied with what I found, I bought the complete set when it was on sale to save myself some money.

This system is amazing. I’m able to shape the sound in so many unique ways, in what seems like a never-ending rabbit hole. The creative potential of this set outshines many of my other ventures, and it has become a regular go-to for me to produce music. Since I started exploring this unique area of music, I have made it a personal goal to release a six-track album every month for the remainder of the year. It is an ambitious challenge, but something I’m using to push myself musically into new and uncharted realms.

Five albums have been released already, with one more planned. Search “Psycho Moon Project” on all streaming platforms; links to a few are below:

Main Street Arts Sprawling Visions 2023

Sprawling Visions was the first exhibit of 2023 at Main Street Arts in Clifton Springs, NY, running from January 7th to February 22nd, 2023. I found them while looking for galleries with open calls for art and hastily applied. At the time, I was only doing digital glitches based on my black-and-white photography. One of the pieces was a photograph of an old red wagon; the other was taken at Ontario Beach Park.

At the time, I was incredibly new to the art world. I hadn’t even settled on the “Distorted Reality” brand yet and was still finding my way. I knew it was a bit of a long shot to get accepted, but there’s rarely any harm in trying. To my surprise, my work was accepted, and this became the first art gallery to display my work.

I had large format prints done by WhiteWall and when they came in I was completely in awe at the quality.

I made it to the opening of the show with my wife and milled around the gallery with the other artists and viewers. Seeing my work up on the wall, alongside other artists, was a completely surreal experience. Also, they have amazing bread.

I consider this a pivotal part of my artistic career and validation that what I am doing has wider appeal. Glitch art isn’t just a niche; it has a way to transform how we view the world and challenge perceptions of reality.

I’m actively working on submitting my pieces to other galleries and hope to expand my reach in the coming year. I want to thank Brad at MSA for giving my art an opportunity to be viewed by the public.

MidJourney V4 review

The MidJourney team has been constantly improving their model to further the development of text-to-image AI systems. I started back when they were on V2 of their algorithm and previously wrote a review comparing V2 to V3 after V3 launched. V4 brings new characteristics to MJ and has changed the game a bit.

The team teased V4 for a while by talking about its development and giving us the ability to rank images before the model was released. What we saw was a big change in the realism and accuracy of the renders. Keeping in step with how they’ve released previous models, the launch happened suddenly with very little warning. Fortunately, I was able to interact with V4 a lot on the first day and into the weekend.

One note: for this review I will *not* be comparing V4 to TEST/TESTP. TEST/TESTP are based on Stable Diffusion and bring with them all the issues SD has. While those models were good, I still felt they were limited and not as fully expressive as MJ has the capability to be. For this review, I will be comparing V4 to both V2 and V3.

First off, V4 is incredibly impressive, even though it is still considered to be in alpha. The renders are very realistic and show a big improvement in what the model can deliver. The team is finally closing the gap with DALL-E in terms of photorealism and prompt accuracy while still bringing the MJ aesthetic we have grown to love. Another benefit of V4 is that it produces the traditional 1024×1024 images in a 2×2 grid, whereas TEST/TESTP only provided a 1×1 grid and used up double the GPU hours. It’s nice to have that back, as I’ve started to be more frugal with my hour usage.

To start the comparison, I will be using the same prompt with a consistent seed and the default options for all the algorithms. Below is the V3 2×2 grid render with prompt details:

/imagine a vintage 1900s photograph of a grotesque monster --seed 0420 --v 3

We’ll use the above grid as the baseline as we compare with V4. Using the same prompt and seed, below is the V4 grid output:

/imagine a vintage 1900s photograph of a grotesque monster --seed 0420 --v 4

For one more baseline, below is the V2 output of the same prompt:

/imagine a vintage 1900s photograph of a grotesque monster --seed 0420 --v 2

Immediately the differences stand out. For one, the MJ team has mentioned that V4 is all new and doesn’t borrow from V2 or V3. Given how vastly different the initial grids are, I agree with the team that this is all new. The V2 grid is somewhat similar to the V3 grid, which further supports this. While both have rendered monsters, the V4 ones are in a portrait style, while the V3 ones are a mix of portrait and full body. V4 also looks more “clean” than V3/V2. In my previous review I noted that V3 removed some of the dirt V2 had, and V4 seems like an even further departure from this. Since V4 is still in alpha, we’ll have to wait and see if there will be options to adjust the output like we were able to in V3.

When it comes to upscaling, the alpha qualities of V4 are more apparent. The upscaled renders don’t seem as deep, and the image quality appears far too bright. Running the renders through the beta upscaler helps clean this up a bit, but hopefully the quality gets further refined as they make improvements.

V3 U2 upscale
V4 U2 Upscale
V4 U2 Beta Upscale

V4 is a huge step forward for MidJourney, with renders that are more realistic and accurate as the model continues to improve.