Playing the SubHarmonicon Live

As someone who plays live regularly, my mind almost instantly started working out how to perform with the various Moog synths in a live setting. On YouTube and in Facebook groups, it seemed like most of the musicians who use these instruments stick to the studio. I challenged myself to find a way to perform live with them alongside other musicians at the weekly open jam I attend, which is mostly blues/rock/jazz.

I have very little keyboard or piano knowledge, so using the Mother-32 did not seem practical. Having to program its sequencer for each song does not work well in a dynamic, live environment. I also ruled out the DFAM, as I am not looking to provide percussion. That left the SubHarmonicon as the most promising choice to use live.

I picked up a MIDI keyboard so I can change the key the SubH is playing in and stay in tune with the other musicians. Through the SubH’s MIDI input, hitting a note on the keyboard changes the pitch the oscillators operate at.
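
For anyone curious what that looks like under the hood, here’s a minimal sketch in Python using the mido library. This isn’t my live rig; it assumes a class-compliant MIDI interface, and the port name and note number are placeholders (list your real ports with mido.get_output_names()).

```python
# Minimal sketch: send a MIDI note to the SubH to shift the pitch its oscillators track.
# The port name below is a placeholder; check mido.get_output_names() for yours.
import mido

with mido.open_output('SubHarmonicon MIDI 1') as port:  # hypothetical port name
    port.send(mido.Message('note_on', note=57, velocity=64))   # 57 = A below middle C
    # ...the oscillators follow the new root until another note arrives...
    port.send(mido.Message('note_off', note=57, velocity=0))
```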

After a bit of trial and error, I started to get the hang of using the instrument in a live setting. Using the built-in sequencer, I try to adjust each note to create some sort of pattern: alternating high/low, ascending, or descending. Dropping the VCA decay to 0 and raising the VCA attack to almost the 8 o’clock position can create a cool Doppler-like effect. Doing the same to the VCF attack/decay can create a pulsating rhythm.

To my surprise, using the SubH at the jam was very well received. Not only was there novelty around a new and weird instrument, but musicians also asked me to join them on their jams. I found the SubH works well with more open-ended jams.

For live sound, I tried a few different things. Originally I wanted to go into the house PA so that I wouldn’t have to bring my own equipment, but with how I’m set up when I get there, I have no monitor facing me and it’s difficult to hear the SubH through the mix. I tried one of the house amps, but the sound was incredibly muddled and still hard to hear.

I ended up going to my local guitar store and buying a Boss Katana-50 Mk-II. I tried a few other small 30-50 watt solid state amps but they just didn’t sound right. The Katana gave me probably the clearest sound I’ve heard for the SubH. Having full control over my sound at the jam has greatly improved my performance since I can actually hear myself.

A goal of mine has been to record my jam sessions so I can reflect on them and reuse the material. I’ve been posting some of the recordings on my SoundCloud. One of my favorites is “Name of the Wind.”

Dealing with Content Stealers

I had been sitting on my glitch of the Flower City logo for a while before sharing it with the Rochester community. I submitted a print to the RoCo 6×6 fundraising show and posted it just to the r/Rochester subreddit.

I had the design up on my Threadless shop for a while, but it didn’t gain any traction. I decided to grab one for myself to wear around and maybe drum up some interest. Once the shirt arrived, I shared a picture of it on the subreddit again and got a *lot* of folks interested in the design. Not wanting to annoy the mods, I did not post a link to where the shirt was for sale. Instead, I DMed each person who showed interest and provided them with a link to buy the shirt.

Not long after that, someone stole my design and posted it on another print-on-demand (POD) site. This person posted the link saying “In case anyone want this” and received a bunch of upvotes. I quickly replied to the comment saying it was a scam listing and not from my store.

The mods were quick to remove the link; however, the scammer posted it again. I reported that link as well and it was also removed. I also filed a DMCA claim with the POD’s parent company stating that the listing infringed on my work.

The POD showed the item had only been listed for a few days, but I was worried the listing would stay up for a long time. Luckily, the site responded to my request within 24 hours and the offending content was removed.

It’s incredibly frustrating to see someone steal your content and try to sell it in the same thread where you are trying to promote it. Whoever did it obviously has no shame. If you’re interested in grabbing the shirt from my Threadless shop, here is the link.

Exploring Synths

Music has been a part of my life since I was very young. I was always interested in it and would pretend to play on a stage, in front of an “audience” on a pull-out sofa bed, with a toy guitar my parents had gotten me. Around the age of 15, having a real guitar became a reality and I never stopped practicing. Even today, I play out pretty regularly at a local open jam with a variety of other musicians.

Electronic music had interested me a bit, but never to the point of my ever getting equipment. I remember visiting Alto Music once and being fascinated by a blue synthesizer that made the most bizarre sounds I had ever heard. Sadly, the price tag was well out of reach for an 18-year-old, but it is something I wish I could find again.

While scrolling on Facebook one day, I saw a mention of the Moog Sound Studio and a demo video; I don’t think it was an ad, just someone posting in one of the groups I was in. The compact, modular setup caught my attention, so I started researching the product and modular synths in general. What caught my fascination was the ability to transform the sound in almost endless ways. I immediately saw the potential of pairing this type of music with my glitch videos to create a sonic and visual experience for people in both live and recorded settings.

The Moog Sound Studio is a three-piece bundle that includes the Moog DFAM (Drummer From Another Mother), Mother-32, and Subharmonicon. I watched countless demos and started reading up on how to even operate these things to make sure they would fit into the vision I have for myself. Satisfied with what I found, I ended up buying the complete set when it was on sale to save myself some money.

This system is amazing. I’m able to shape the sound in so many unique ways in what seems like a never-ending rabbit hole. The creative potential of this set outshines many of my other ventures, and it has become a regular go-to for me to produce music. Since starting to explore this unique area of music, I have made it a personal goal to release a six-track album every month for the remainder of the year. It is an ambitious challenge, but something I’m using to push myself musically into new and uncharted realms.

Five albums have been released already, with one more planned. Search “Psycho Moon Project” on all streaming platforms; links to a few are below:

Main Street Arts Sprawling Visions 2023

Sprawling Visions was the first exhibit of 2023 at Main Street Arts in Clifton Springs, NY. It ran from January 7th to February 22nd, 2023. I found them while looking for galleries with open calls for art and hastily applied. At the time I was only doing digital glitches based on my black and white photography. One of the pieces was a photograph of an old red wagon and the other was taken at Ontario Beach Park.

At the time, I was incredibly new to the art world. I hadn’t even settled on the “Distorted Reality” branding yet and was still finding my way. I knew it was a bit of a long shot to get accepted, but there’s rarely any harm in trying. To my surprise, my work was accepted, and this became the first art gallery to display my work.

I had large-format prints done by WhiteWall, and when they came in I was completely in awe of the quality.

I made it to the opening of the show with my wife and milled around the gallery with the other artists and viewers. Seeing my work up on the wall alongside other artists was a completely surreal experience. Also, they have amazing bread.

I consider this a pivotal point in my artistic career and validation that what I am doing has wider appeal. Glitch art isn’t just a niche; it has a way of transforming how we view the world and challenging our perceptions of reality.

I’m actively working on submitting my pieces to other galleries and hope to expand my reach in the coming year. I want to thank Brad at MSA for giving my art an opportunity to be viewed by the public.

MidJourney V4 review

The MidJourney team has constantly been improving their model to further the development of text-to-image AI systems. I started back when they were on V2 of their algorithm and previously wrote a review comparing V2 to V3 after V3 was launched. V4 brings new characteristics to MJ and has changed the game a bit.

The team teased us with V4 for a while by talking about its development and giving us the ability to rank images before the model was released. What we saw was a big jump in the realism and accuracy of the renders. In keeping with how they’ve released previous models, the launch happened suddenly with very little warning. Fortunately, I was able to interact with V4 a lot on the first day and into the weekend.

One note: for this review I will *not* be comparing V4 to TEST/TESTP. TEST/TESTP are based on Stable Diffusion and bring with them all the issues SD has. While those models were good, I still felt they were limited and not as fully expressive as MJ is capable of being. For this review, I will be comparing V4 to both V2 and V3.

First off, V4 is incredibly impressive even though it is still considered to be in alpha. The renders are very realistic and show a big improvement in what the model can deliver. The team is finally closing the gap with Dall-E in terms of photorealism and prompt accuracy while still bringing over the MJ aesthetic we have grown to love. Another benefit of V4 is that it produces the traditional 1024×1024 images in a 2×2 grid, whereas TEST/TESTP only provided a 1×1 grid and used up double the GPU hours. It’s nice to have that back, as I’ve started to be more frugal with my hour usage.

To start the comparison, I will be using the same prompt with a consistent seed and the default options for all the algorithms. Below is the V3 2×2 grid render with prompt details:

/imagine a vintage 1900s photograph of a grotesque monster --seed 0420 --v 3

We’ll use the above grid as the baseline as we compare with V4. Using the same prompt and seed, below is the V4 grid output:

/imagine a vintage 1900s photograph of a grotesque monster --seed 0420 --v 4

For one more baseline, below is the V2 output of the same prompt:

/imagine a vintage 1900s photograph of a grotesque monster --seed 0420 --v 2

Immediately, the differences stand out. For one, the MJ team has mentioned that V4 is an all-new model that doesn’t borrow from V2 or V3. Given how vastly different the initial grids are, I agree that this is all new; the V2 grid is somewhat similar to the V3 grid, which further supports this. While both grids rendered monsters, the V4 ones are in a portrait style while the V3 ones are a mix of portrait and full body. V4 also looks more “clean” than V3/V2. I noted in my previous review that V3 removed some of the dirt V2 had, and V4 seems like an even further departure from that. Since V4 is still in alpha, we’ll have to wait and see if there will be options to adjust the output like we had in V3.

When it comes to upscaling, the alpha qualities of V4 are more apparent. The upscaled renders don’t seem as deep, and the image quality appears far too bright. Running the renders through the beta upscaler helps clean this up a bit, but hopefully the quality gets further refined as they make improvements.

V3 U2 upscale
V4 U2 Upscale
V4 U2 Beta Upscale

V4 is a huge step forward for MidJourney: the renders are more realistic and accurate, and the model keeps improving.

Digitizing the Glitch

When working with analog signal glitching, the resulting output is too corrupt for digital devices to understand. They will most often blank out and report a lost signal, especially when the glitching becomes too extreme. To correct this, you need a device called a Time Base Corrector (TBC). This video on YouTube does a good job of explaining TBCs and gives examples of some of the more popular ones.

Based on his recommendations, I was able to snag a Sima SFX9 off eBay. They’re rather hard to come by now, so I was happy to find one in working condition.

The SFX9 has some pretty cool features for video mixing. I set it up going into my CRT first just to make sure it worked. Using the two channels, I created a loop for the glitching so that the dry signal goes into Video 1 and is then used to make the wet signal on Video 2. This allows me to mix, wipe, and key the signal as an overlay.

I didn’t capture my initial tests, but I was very happy with the results. The SFX9 adds no apparent delay to the effects and it handles the glitching wonderfully. However, the resulting output is different from when I go straight into the CRT from my circuit-bent equipment.

Analog glitch straight into the CRT
Analog glitch through the SFX9

Even with the differences, being able to use the setup with digital devices such as projectors and capture cards makes the trade-off worth it. This mixer gets me one step closer to performing live visuals.

Cultures surrounding the AIs

I’ve been very fortunate to be involved early on with three of the latest text-to-image AI art systems. First was MidJourney, second was Dall-E 2, and now I’ve been able to get early access to Stable Diffusion. From a tech perspective, each of these generates a different style of image and has its own strengths and weaknesses. What is more curious is observing the cultures that have grown up around each of these tools.

MidJourney

When I got on MidJourney, there was a sense of exploration among us. We all seemed to be getting in on something new and unique, and we were working together to figure out this new tool. MidJourney attracted enthusiasts who wanted to learn and explore, together. There was a lot of collaboration and openness as we experimented with different prompts and learned how to get what we wanted out of the system.

Dall-E 2

When I first got access to Dall-E and started joining the surrounding unofficial communities, there was a strikingly different tone compared to the folks using MJ. Because access was restricted, a lot more scams showed up, with people taking advantage of that scarcity. People were charging money to run prompts, charging money for invites (that didn’t exist), and there was a much bigger sense of trying to use Dall-E for commercial purposes.

Stable Diffusion

SD is the epitome of tech bro culture. Everything surrounding their release was hype. Folks who had requested beta access were granted the ability to get onto their Discord server, but we still had to wait over 24 hours before the bot even came online. During this time, the mods, founder, and server staff continually teased us with SD’s generations and kept building the hype. It seemed like they were focused more on growing a fan base than on making sure the product was refined before launch. The founder’s messaging about bringing in “influencers” and statements about how “well funded” they are further reinforced this impression.

Dall-e 2 and Glitching

Combining glitch art with AI was only a matter of time. In my early experiments, I ran the AI output through my glitching process. With Dall-E 2 producing incredibly photorealistic renderings, I decided to see whether it could produce glitched images on its own.

It turns out Dall-E 2 can produce some really nice glitch images! Just adding the words “glitch, data mosh” to the prompt will get the AI to produce them. Combining those terms with “vaporwave” or “synthwave” can also produce some dynamic color ranges with a variety of distortions.
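
If you’d rather script this than use the web interface (all of my own experiments were in the web UI), here is a minimal sketch against OpenAI’s Images API; the prompt wording, model name, and sizes are just illustrative assumptions, not my exact settings.

```python
# Minimal sketch, assuming the OpenAI Images API; my experiments used the web UI.
# Requires OPENAI_API_KEY in the environment. Prompt text is only an example.
from openai import OpenAI

client = OpenAI()
result = client.images.generate(
    model="dall-e-2",
    prompt="glitch, data mosh, vaporwave city skyline at night",
    n=4,                 # four candidate renders
    size="1024x1024",
)
for image in result.data:
    print(image.url)     # each URL points at one generated render
```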

In the limited time I spent experimenting, the resulting renderings were stunning. In addition to creating glitch images, Dall-E 2 does a great job of generating variations on existing glitch images.

I’ve found myself using Dall-E 2 to produce variations of existing images more than I am using it to create new ones. While it is extremely powerful at image generation, the ability to make variations on existing images makes it stand out from any of the other text-to-image AI systems.

MidJourney Version3 Comparison and Review

For the past two days I have been exploring and testing the new Version3 algorithm from MidJourney. Its goals are more accurate output and a new upscaler that removes distortions and artifacts. My initial experiments showed the potential of this new algorithm, but it seemed to take away some of the “dirtiness” I came to like when I began working with MidJourney.

I decided to perform some experiments comparing the Version2 and Version3 algorithms and see how Version3 can be tweaked to regain some of the “dirt” that made V2 so special.

To start, I generated the inside of an abandoned shopping mall as if taken on a Polaroid. MidJourney V2 did a really good job of capturing the grit and small distortions that give Polaroid film its aesthetic.

/imagine a polaroid of the inside of an abandoned mall --seed 0420 --v 2

This image will serve as the reference point I will try to recreate with the V3 algorithm. By setting a seed value, the various generations will be somewhat similar.

/imagine a polaroid of the inside of an abandoned mall --seed 0420 --v 3

To start, I created a V3 generation using all the default settings. This is the most “neutral” of the V3 options for stylize and quality. Comparing the two, the V3 output appears more “clean”; in some cases the photo doesn’t look like a Polaroid at all.

/imagine a polaroid of the inside of an abandoned mall --seed 0420 --v 3 --stylize 5000

Bumping up the stylize value gives it a much cleaner look. There are hardly any distortions in the sample renders, and the subjects of the images appear too straight and perfect. The output has started to drift away from an abandoned mall.

/imagine a polaroid of the inside of an abandoned mall --seed 0420 --v 3 --stylize 20000

Pushing the stylize value further up generates more abstract renderings. As shown above, these are completely different from what a mall looks like. There are also mountains, clouds, and other structures in the resulting images that don’t make any sense. When testing these high stylize options on other prompts, the same features always show up: regardless of the subject, multi-color clouds, mountains, and other fantasy objects appear.

/imagine a polaroid of the inside of an abandoned mall --seed 0420 --v 3 --stylize 625 --q .25

Next, I decided to explore the quality options while keeping the stylize setting as low as possible. The lowest quality, .25, produced renderings closer to what the V2 algorithm did, but the images just don’t seem that appealing because the quality is so low.

/imagine a polaroid of the inside of an abandoned mall --seed 0420 --v 3 --stylize 625 --q .5

Pushing the quality up a notch while keeping the stylizing the same got me much closer to V2 renderings. The proper aesthetic for an abandoned mall is there and the images do look like they were taken with a Polaroid. I’m happy with these, but there’s still some exploring to do with other options.

/imagine a polaroid of the inside of an abandoned mall --seed 0420 --v 3 --stylize 1250 --q .5

For the final rendering, I put the stylize value back to the default and still kept the quality down a bit. This produced images that are the closest I was able to come to the V2 aesthetic of a Polaroid of an abandoned mall. Not all of the V2 grit is there, but enough of the character remains for it to be usable.

The Differences Between Dall-E and MidJourney

I was very fortunate to get access to two very powerful text-to-image AIs within a short period of time. I had been on MidJourney for a few weeks before my email to join OpenAI’s Dall-E 2 platform arrived. I honestly forgot I even applied to Dall-E so it was a very pleasant surprise.

Dall-E 2 is a much more powerful AI. It produces incredibly detailed renderings that could easily be confused with real life images. The accuracy is quite stunning and frightening at the same time.

Unicorn Loafers generated by Dall-E 2

While experimenting with different prompts, I decided to try the same prompt in both systems to see what images they would produce. From these experiments, I got the impression that MidJourney produces more abstract renderings whereas Dall-E is more literal. When using a single-word prompt, like “nothing”, MidJourney produced images that evoked the emotion or meaning of the word, whereas Dall-E produced images that were literal examples of it.

As you can see above, the differences are distinct. I know the AIs were trained on different models, but when working with them it is important to know where their strengths lie.

If I want to create an incredibly realistic rendering of a literal thing, I will use Dall-E. If I want a more abstract, concept-art-style rendering, I will use MidJourney. That’s not to say either cannot produce images like the other, but I’ve found each to be stronger at one thing, and they suit very different purposes.