https://youtu.be/UZWBeu9-HN0 I recorded this generative improvisation on 08 Oct 2022 around 4-5PM Eastern Time using Ableton Live – follow actions, the standard Max4Live LFO and stock Live MIDI and audio effect devices – and Arturia’s Buchla Easel V. The photography was taken in Massapequa, NY, USA on 18 March 2014. The title comes from a draft presentation I was working on last week.
https://youtu.be/rkE4GzUjujk This was a pretty noisy, blurry one. People always tell you to put your reverb AFTER your dirt, but I wanted to do it anyway. No one died during the making of this video. Try everything!
Teenage Engineering PO-12, Rowin Loop Station, Muza FD900 Reverb/Delay, K Pedals Fender Blender Clone – shorter vids at https://ift.tt/lgUB4Oe
https://youtu.be/aNDtpUEcAkM Recording of an electronic percussion improv using James Milton’s Poly iOS app. This was probably done on an iPhone and is from a time when I was not recording as regularly as I do now. My daily recordings started later that same month. Art was done with the Pablo Brush app on the same day. The term “Value Statement” comes from some notes I’d made the day before.
Feel free to sample this stuff – or go find the app and make your own – but please also mention my name when you do.
I am using Audiobus 3 on an iPad 9. Into this I loaded some MIDI apps: Rozeta Collider, where I pick a scale and so on, then Midiflow Randomizer, and then a Scaler. You don’t see me do anything with it here in the interest of time, but it defaults to C natural minor, so it is quantizing all the MIDI coming into it. From there, I send the MIDI into the Aparillo synth, which is where I spend most of my time. On the audio page I load no effects, because the delay and verb in Aparillo are fine with me, though you can imagine that for a longer piece I might. Then I also load AudioShare for quick stereo recording, then uploading to Dropbox and over to the PC later on.
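If you’re wondering what that scale-quantizing step actually does, here is a minimal sketch in plain Python of the general idea – snapping incoming MIDI note numbers onto C natural minor. The scale list and helper function are my own illustration, not the internals of the Scaler or any other app mentioned above.

```python
# Rough illustration (no real MIDI I/O) of quantizing incoming notes
# to C natural minor. All names here are my own stand-ins.

C_NATURAL_MINOR = [0, 2, 3, 5, 7, 8, 10]  # pitch classes: C D Eb F G Ab Bb

def quantize_to_scale(note, scale=C_NATURAL_MINOR):
    """Snap a MIDI note number (0-127) to the nearest pitch in the scale."""
    octave, pitch_class = divmod(note, 12)
    nearest = min(scale, key=lambda pc: abs(pc - pitch_class))
    return octave * 12 + nearest

# A few "wrong" incoming notes get pulled onto the scale:
for incoming in [61, 62, 64, 66, 69]:   # C#, D, E, F#, A
    print(incoming, "->", quantize_to_scale(incoming))
```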
I am not an educator and do not claim to be. I don’t 100% remember what I did here, nor do I care to create a comprehensive overview of my workflow, which might impair the artistic goals I have for the pieces as I improvise them.
I am sharing whatever I can remember to inspire other people to explore for themselves instead of watching YouTube videos when they should be exploring!!
Working in Audiobus 3, on the MIDI side I used Rozeta Particles or Collider (I’ve forgotten which) to generate notes, Midiflow Randomizer to mix it up a bit, and then ScaleBud by Cem Olkay to quantize the notes to a scale before sending them to the Kronecker synth and then over to AudioShare to record to a WAV file. I believe I added some effects between the synth and the recorder later on, but I can’t tell from this video.
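As a rough picture of that chain as a whole – note generator, randomizer, scale quantizer, synth, recorder – here is a sketch of the stages as plain Python functions. Every name below is a hypothetical stand-in; none of this is the actual code behind Rozeta, Midiflow, ScaleBud, Kronecker or AudioShare.

```python
# A loose sketch of the signal chain: generate -> randomize -> quantize -> play.
import random

C_NATURAL_MINOR = [0, 2, 3, 5, 7, 8, 10]

def generate_notes(n, base=48):
    """Stand-in for the note generator: emit n semi-random MIDI note numbers."""
    return [base + random.randrange(24) for _ in range(n)]

def randomize(notes, spread=3):
    """Stand-in for the randomizer: nudge each note up or down a little."""
    return [n + random.randint(-spread, spread) for n in notes]

def quantize(notes, scale=C_NATURAL_MINOR):
    """Stand-in for the scale quantizer: snap each note onto the scale."""
    out = []
    for n in notes:
        octave, pc = divmod(n, 12)
        out.append(octave * 12 + min(scale, key=lambda s: abs(s - pc)))
    return out

def send_to_synth(notes):
    """Stand-in for the synth + recorder stage: just print what would play."""
    for n in notes:
        print(f"note_on {n}")

send_to_synth(quantize(randomize(generate_notes(8))))
```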
The entire recording was 36 minutes long; this is just a short 90-second video excerpt from that “performance.”