Monday, May 15, 2006

Week 12

  • Practical 1 - Audio Arts - Marketing [1]
The first half of this class was mainly a question and answer session about our assignments. The second half was about the business side of music, which I found quite interesting.

David told us that if we want to sell our souls, ahem, music, to a record label, it's a good idea to present yourself differently so your CD doesn't get thrown onto the pile of generic CDs.
  • Practical 2 - Creative Computing - Supercollider (5) [2]
Argh, haven't done this yet. (Will update soon.)
  • Music Technology Forum - Presentation - Squawk Box: An Open Presentation Session [3]
For the first time this year our presentation hour was open to all the Tech students, but disappointingly only three people decided to present. It would have been nice to have more presenters, but I guess it was a busy period, and the thought of volunteering to do an extra presentation on top of all the work that was due was probably a bit too much. Perhaps the next Squawk Box should be held at a less busy time, such as the beginning or middle of semester.

The presenters were Vinny Bhagat, Patrick McCartney, and myself.

Vinny started the ball rolling by playing a piece he had started to write for a singer in the jazz department. Trying to flatter her with a song written exclusively for her, he was harshly rejected when she refused to sing her part because it wasn't her 'style'. My heart goes out to Vinny for trying. Anyway, it was quite a nice piece of music, although I thought the gong (or was it a cymbal?) loop that played throughout became a bit unnecessary.

Next up was Patrick presenting his idea for conducting a monthly composition workshop in the Mawson Lakes Planetarium in the dark. I could imagine this being a lot of fun. I have already signed up to the related blog. Hopefully it will all work out.

Finally, I finished the session (running overtime) with some pieces of music I wanted to present. I was a little disappointed that I had left the CD with Jonathan Gage's "Into Violet" at home, but you can download the song here. I hope people got something out of Ryoji Ikeda's music from Dataplex; judging from the verbal and written responses, this seemed to be the case. This presentation was a good learning experience, and it really emphasised the importance of presenting correct information. I'm sure that if I had spent a little extra time acquiring more facts about the composers, it would have made a difference.

  • Music Technology Forum - Workshop - Workshop on Nobukazu Takemura, Gutbucket, J.S. Bach, Toby Twining, Arnold Dreyblatt, Otomo Yoshihide, and Igor Stravinsky [4]
Nobukazu Takemura – “Assembler Mix”

It was an enjoyable listen. The strings in the background were nice, and the organ-sounding loop reminded me of Reich.

Gutbucket – “Snarling Wrath of Angry Goop” from Humping the American Dream

First of all, I like the meaningful song and album names. The piece was in the style of progressive rock and included a crazy (good) sax solo. Very skilled musicians, but I'm not really into this type of music right now.

J.S. Bach – "Ricercar a 6" from The Musical Offering

I thought this piece was monotonous, but that's only because I don't fully understand his music. Listening to it, however, only built on the curiosity established by Stephen's teaching of Bach in "Music in Context II". I must learn more about this genius of a composer.

Toby Twining – “Kyrie”

This was a nice piece. I really enjoyed the part with the fluttery notes and the sustain in the background. At one point it sounded as though a solo was being played with harmonics only.

Arnold Dreyblatt – "Lapse"

The piece sounded like it was created with pieces of junk, which of course reminded me of the pieces created with Milkcrate. The tuning was based on the tones of the harmonic series, and the piece contained some string harmonics.

Otomo Yoshihide w/ John Zorn – “Hardcore Chinese Opera”

It was an interesting piece of music, although I was somewhat expecting it to go for longer.

Igor Stravinsky – “Symphonies of Wind Instruments”

This piece makes use of the 'block technique', which basically means taking unrelated musical ideas and combining them. I liked this piece; it reminded me of the orchestration from old Disney cartoons.

Pfft, like I can summarise all these pieces in 125 words without going any deeper than "it was good/bad" for each piece. Well, I say stuff this 500-word limit; how am I supposed to take pride in writing crappy prose? Succinct I like, but not if it means writing boring prose that's not even worth reading. Please, someone show me a succinct and interesting summary of this workshop in 125 words (to leave word space for the other three classes). If I can see that it can be done, then perhaps I'll think differently.

Week 11

  • Practical 1 - Audio Arts - Mixing [1]
This week was a bit different from previous weeks in that it was more about analysing and listening than anything else. David played us some of his favourite tracks that he worked on. It was a great insight to hear his mixes and learn about them. He told us various things about the mixes, such as what sort of reverb he used, doubling, signal routing (e.g. through various effects), and sound design. The sound design part was very interesting to me. My experience with Milkcrate was a similar idea to what David described in this particular mix, but the difference was that his didn't sound experimental. In fact, the first time I heard it, it didn't even occur to me that most (if not all) of the percussion sounds were created using items not traditionally considered musical.
  • Practical 2 - Creative Computing - Supercollider (8) [2]
Ok, here's my patch. The SynthDefs work, but I couldn't work out how to get the Pbind to execute correctly. I suspect it had something to do with the order of execution, but I'm not sure. I think 12 hours is enough time to spend on this patch; perhaps if I find some time later on, I'll try to fix it. Special thanks go out to Adrian and Martin for doing their patches before me so I could use them as a guide for mine.

Here is the actual sound of a single note that the SynthDefs produce:

This is the result Pbind returned:

I somehow don't think this is the sound I was looking for.
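Conceptually, the patch below amplitude-modulates a high sine carrier with the random impulses of Dust. The same idea can be sketched in a few lines of Python (an illustrative analogue only, not the SC code itself; all names and parameter values here are my own):

```python
import numpy as np

rng = np.random.default_rng(0)

def am_dust(dur=1.0, sr=44100, carrier_hz=8000.0, density=8):
    """Rough Python analogue of Dust.kr amplitude-modulating a SinOsc:
    the carrier is silent except where a random impulse opens it up."""
    n = int(dur * sr)
    t = np.arange(n) / sr
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    # Dust-like modulator: about `density` random-amplitude impulses per second
    modulator = np.zeros(n)
    ticks = rng.integers(0, n, size=int(density * dur))
    modulator[ticks] = rng.random(len(ticks))
    return carrier * modulator

signal = am_dust()
```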

// Week 11

(
// Modulator
SynthDef("Modulator",
{

// Arguments
arg busout = 30,
density = 8
;

// Variables
var modulator
;

// Modulator
modulator = Dust.kr(density)
;

// Output
Out.kr(
bus: busout, // Modulator out control bus 30
channelsArray: modulator
)
}
).store;

// Carrier
SynthDef("Carrier",
{

// Arguments
arg busin = 30,
busout = 20,
freq = 8000,
dur = 2,
leg = 2
;

// Variables
var carrier,
modulator
;

// Modulator
modulator = In.kr(busin, 1);

//Carrier
carrier = SinOsc.ar(
freq: freq,
mul: modulator // Modulator in on bus 30 performing AM on Carrier
)
;

// Envelope
carrier = carrier
*
EnvGen.kr(
Env([0,0.6,1,0], [dur,0.1,0.01]), doneAction: 2
)
;
// Output
Out.ar(
bus: busout, // Sending out audio bus 20
channelsArray: carrier
)
;
}).store;

// Effect
SynthDef("Delay",
{

// Arguments
arg busin = 20,
busout = 0,
mdtime = 0.2,
deltime= 0.2,
dectime= 6
;
// Variables
var delay,
carrier
;

// Modulated Carrier
carrier = In.ar(busin, 1);

// Filter
delay = CombC.ar(
in: carrier,
maxdelaytime: mdtime,
delaytime: deltime,
decaytime: dectime
)
;

// Output
Out.ar(busout, delay)
;

}).store;
)

Synth("Modulator", addAction: \addToTail);
Synth("Carrier", addAction: \addToTail);
Synth("Delay", addAction: \addToTail);

(
// Sequencer
Pbind(
\instrument, "Delay", // NB: the key must be lowercase \instrument; \Instrument is silently ignored, so the default synth plays instead
\dectime, Pseq([6, 5, 4, 3, 2, 1, 0.5, 0.25, 0.125, 0.0612], inf)
).play;
)

  • Music Technology Forum - Presentation - Presentation by Stephen Whittington [3]
Stephen Whittington took the stand this week and talked to us about his compositions and his current work on distributed music and vocoding.

He started off by explaining that before mass communication and travel, humans lived in small scattered groups. Cultures were isolated, and there were geographical boundaries. Today, thought is conducted on a global scale, and travel that used to take months can now take only a few hours. Stephen is interested in how this thinking has translated to musical interaction. With communication protocols such as VOIP (Voice over Internet Protocol) that work over the internet, "Distributed Music" is now a global phenomenon.

Stephen defines Distributed Music as musicians performing together, but not in close proximity to each other. The thought that we are all on the same 'Spaceship' is a good example of global thinking; Stephen mentioned a quote from someone along these lines, but I can't remember who.

Stephen's other main interest lies in vocoding, which ties in with his interest in the human voice and the expression of 'utterance'.

He presented some of his compositions that used both of these technologies. The one composition that stuck in my mind was "X is Dead". For this piece he played audio through a speaker connected to the bottom of a piano, which in turn resonated its strings. At the same time, he also played the piano and spoke a series of words.

The other piece I found interesting was his involvement with "Distributed 'synchronicity' experiments". The sonic result, I thought, sounded similar to a piece from the Deus Ex: Invisible War soundtrack. I think the fantastic Alexander Brandon had some part in creating that soundtrack, which is mostly ambient with some ethnic instruments at times. You can download it for free here. The Deus Ex 1 soundtrack can be downloaded here.

I hear the 2nd years get to have a lecture with him every week on the human voice. I wish I could have attended, as the human voice is also one of my interests. That's the thing with the Tech course; it changes every year, so there is usually something new to learn from other year levels even if you are a 3rd year. I wish all lecturers were as cool as Stephen about letting students sit in on their classes… well, most lecturers I've met are.

  • Music Technology Forum - Workshop - NA
David Harris was sick this week.
  • References
    [1] Grice, David. 2006. Practical on Mixing. University of Adelaide, 23 May.
    [2] Haines, Christian. 2006. Lecture on Supercollider. University of Adelaide, 25 May.
    [3] Whittington, Stephen. 2006. Presentation on distributed music and vocoding. University of Adelaide, 25 May.

Week 10

  • Practical 1 - Audio Arts - Mixing (1) [1]
This week was an introduction to Mixing. David Grice taught us his method of approaching a Mix.

Steps:
  1. Trim Files.
  2. Organise applause for the start and end of the piece (for live performance).
  3. Label each track on desk.
  4. Make sure the fader for each track is at ‘0’ amplitude, and panned centre.
  5. START the mixing.
It’s good to start mixing from the foundation (drums) through to the bass, then harmony, and finally the lead parts.

Group all the drums in the kit into a 'Drums' group. Depending on the style of music, you might want to apply gates or EQ, or clean up noise between drum hits. EQ is useful for pulling back any resonance individual drums might have. You'd usually start with the kick, then the snare. The overheads can be panned left and right, and the other drums panned according to the drummer's perspective.

We brushed over the bass, piano, and vocals as we didn't have enough time to go into detail. David emphasised that stereo mixes should also sound good in mono, and urged us to test our mixes throughout for this. He also told us to monitor at low volume to avoid ear fatigue, but also because you can usually hear a lot more detail – as I understood it, your brain isn't as overloaded at a softer volume, so it can process more, revealing extra detail. Finally, he suggested that at some point we listen to the mix from the hallway outside the studio, to give us a perspective on the mix in a different acoustic environment.
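David's mono-compatibility point can be shown numerically. A quick Python sketch (my own illustration, not anything from the class): identical channels survive a mono fold-down, while an out-of-polarity pair cancels completely, which is exactly the kind of problem you're listening for when you check a mix in mono.

```python
import numpy as np

def mono_fold(left, right):
    """Sum a stereo pair to mono the way a mono playback chain would."""
    return 0.5 * (left + right)

sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)

in_phase = mono_fold(tone, tone)        # identical channels: unchanged
out_of_phase = mono_fold(tone, -tone)   # inverted copy: total cancellation
```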

The experience, knowledge, and personality of David Grice is exactly what this degree needs, and I hope he can teach the students here for as long as possible.
  • Practical 2 - Creative Computing - Supercollider (7)
Ok, this week we looked at streams. So far I have spent around 10 hours working on this blasted patch, and I can't work out why it's distorting. It also sounds great on the Mac, but when I take the recording to the PC it sounds like crap. I don't know what's wrong. I've spent way too long on this patch, so I'm just going to post what I've done and be done with it. Besides, I've probably permanently lost a hertz from my upper hearing range, judging by the pain in my ears. I suspect the distortion might have something to do with the resonant low-pass filter's cutoff frequency. Please, if anyone knows what's wrong with this patch, I'd love to know.

Week 10 result (crappy) - It kinda reminds me of the periodic sound of shooting out diarrhea poo. Enjoy!

// Week 10

// SynthDef

(
SynthDef("Horror",
{

// Arguments
arg midinote, // MIDI Note
dur = 2, // Duration of Event (for use in Envelope) (NOT DONE)
leg = 0.8 // Space between start of events (for use in Envelope) (NOT DONE)
;

// Variables
var bus = 21, // Bus 21
i, // Instrument
i_modfreq, // Instruments Modulating Frequency
filtercutfreq, // Cutoff Frequency of Filter
out // Output
;

// Instruments Modulating Frequency
i_modfreq = Saw.ar(
freq: 60,
mul: 0.1
)
;

// Instrument
i = LFCub.ar(
freq: midinote.midicps // Carrier Frequency
+ i_modfreq, // Modulating Frequency
mul: 0.1 // Overall Amplitude
)
;

// Envelope
i = i * EnvGen.kr(
Env.new([0,1, 0.3, 0.8, 0], [dur*leg, dur*leg, dur*leg, dur*leg],'sine'),
doneAction: 2
)
;

// To Bus 21
Out.ar(bus, i
)
;

// Resonant Low Pass Filter Cutoff Freq
// NB: FSinOsc swings negative, which drives the RLPF cutoff below 0 Hz and
// makes the filter blow up (a likely cause of the distortion), so clamp it
filtercutfreq = FSinOsc.kr(
freq: leg*1000,
mul: XLine.kr(
start: 10000, end: leg*50, dur: dur,
doneAction: 2
)
).abs.max(20)
;

// Bus 21 passed through a Resonant Low Pass Filter and assigned to 'out' variable
out = RLPF.ar(
In.ar( // Signal to be Processed
bus), // Bus 21
filtercutfreq // Cutoff Freq
)
;

// 'out' variable to Output 0 (Left) and 1 (right)
Out.ar([0,1], out
)
;
}
).load(s);
SynthDescLib.global.read
)

(

a = Prand(#[10, 66, 65, 63, 47, 2, 65, 63, 61], inf);

Pbind(
\instrument, "Horror",
\midinote, a, // pass the pattern itself; calling a.next here would fix a single pitch for every event
\dur, Prand(#[8, 4, 3, 2, 1, 0.5, 0.25], inf),
\leg, Prand(#[6, 5, 4, 3, 2], inf)
).play;
)

  • Music Technology Forum - Presentation - Presentation by Robert Chalmers [2]
For our presentation this week, we had the pleasure of listening to Robert Chalmers talk to us about copyright and the law. As he mentioned a couple of times during his talk, if he were to tell us everything it would take months or even years, so he just outlined some main points.

He started off by explaining what copyright is, what it covers, and how long it lasts. He also talked about lyrical, moral, musical, and personal rights, and how they can be exercised in different situations. He talked about infringement and fair use, and how "fair use" only really applies in America. Finally, he skimmed over P2P file-sharing systems, the iPod, and how the law views both of these things, before finishing off with questions and answers. The questions and answers were just as interesting and educational as the lecture part of the session. I found the whole session to be thoroughly enjoyable, educational, interesting, and worthwhile.
  • Music Technology Forum - Workshop - Workshop on Mr. Bungle, Stockhausen, and My Bloody Valentine [3]
Mr. Bungle – "Perfection". I think that word basically sums them up. We listened to "Love Is a Fist" and "Dead Goon". "Love Is a Fist" used the block form (John Zorn) technique and was in a crazy metal style with guitar screeches and staccato brass stabs. "Dead Goon", on the other hand, was comical, dynamic, and atmospheric, with a free feeling to the structure. Both were excellent pieces, and reminded me just how fantastic these guys were.

We then listened to Stockhausen's "Hymnen". This piece was based around noise picked up with a shortwave radio. It also contained what sounded like some tape techniques, especially near the end with the German word collage. Overall it was a very inspirational piece that reminded me of horror films in certain sections.

Lastly, we listened to "To Here Knows When" by My Bloody Valentine. To me this piece sounded like a pop tune with some of Alex Carpenter's 'noise' music layered over the top. The combination of the two didn't really appeal to me.

  • References
    [1] Grice, David. 2006. Practical on Mixing. University of Adelaide, 16 May.
    [2] Chalmers, Robert. 2006. Lecture on Copyright and the Law. University of Adelaide, 18 May.
    [3] Harris, David. 2006. Workshop on Mr. Bungle, Stockhausen, and My Bloody Valentine. University of Adelaide, 18 May.

Week 9

  • Practical 1 - Audio Arts - Reverb [1]
The purpose of reverb, like other effects, is to colour sound. It can be used for many different purposes, but one of the main ones is to acoustically emulate different environments. In a given mix you may be running several different types of reverb. You may decide to use different reverbs for:

- kick
- snare
- lead vocal
- backing vocal
- keyboards

There can be a plethora of different parameters on a given reverb (as demonstrated on the DP/4 in Studio 1), but from this picture you can see the D-Verb parameters:



A good article on how to use this crappy plugin can be found here. The article does make a good point by mentioning the fact that it can be used effectively for the less important parts of the mix to save on processing power.
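Under the hood, most algorithmic reverbs (presumably D-Verb included, though its internals aren't documented here) are built from networks of delay lines. A minimal Python sketch of the classic ingredient, a feedback comb filter, which turns a single sound into a decaying train of echoes (illustrative only; the delay and feedback values are my own):

```python
import numpy as np

def comb_filter(x, delay_samples=1500, feedback=0.7):
    """Feedback comb filter: each pass through the delay line feeds a
    scaled copy back in, producing a geometrically decaying echo train --
    the basic building block of Schroeder-style reverberators."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        delayed = y[n - delay_samples] if n >= delay_samples else 0.0
        y[n] = x[n] + feedback * delayed
    return y

impulse = np.zeros(6000)
impulse[0] = 1.0
echoes = comb_filter(impulse)   # echoes of 0.7, 0.49, 0.343... every 1500 samples
```

Real reverbs run several of these in parallel (plus allpass filters) so the echoes smear into a diffuse tail rather than a discrete flutter.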

  • Practical 2 - Creative Computing - Supercollider (6) [3]
This week we looked at 'P' functions. I've spent the whole day today doing this patch. It's actually poor time management to spend the whole day experimenting on a patch when you have a plate and a half of other work left to do, so for some reason I have avoided thinking about why I chose to do this. At this point, with my mind thoroughly "Supercollider'ised", I can't think of an answer. I wanted to incorporate a lot more into this patch, but of course, I must draw the line somewhere.

// Week 9

// SynthDef Instrument

(
SynthDef ("FM",
{

// Arguments
arg mp1 = 60,
mp2 = 67,
dur = 4, // duration of Volume Envelope
amp = 0.2,
pan = 0 // Couldn't get working with Pbind
;

// Variables
var fm
;
// UGens
fm = SinOsc.ar(
freq: ([mp1.midicps,mp2.midicps]) + // Carrier Frequency
VarSaw.ar(
freq: LFTri.kr( // Control Frequency
VarSaw.kr( // Control Frequency
20, 100),
100),
mul: LFPulse.kr( // Index
200, 1)


),
// Overall Amplitude
mul: amp
);
// Envelope
fm = fm * EnvGen.kr(
Env.perc(0, dur),
doneAction: 2
)
;

// Out
OffsetOut.ar(0,Pan2.ar(fm, pan));
}
).load(s);
SynthDescLib.global.read
)

// Pbind Sequencer

(
var explosion;
explosion = [0, 23, 3, 5, 41, 20, 16];

Pbind(
\instrument, "FM",
\mp1, Pseq([70, 66, 65, 63, 65, 66, 65, 63, 61,
63, 61, 60, 58, 57, 60, 63, 66, 65, 58, 61], inf),
\dur, Pseq([2, 2, 1, 1, 1, 1, 1, 1, 1, 0.5,
0.5, 2, 2, 1, 1, 1, 1, 1, 1] * 0.2, inf)
).play;

Pbind(
\instrument, "FM",
\mp1, Pseq([70, 66, 65, 63, 65, 66, 65, 63, 61,
63, 61, 60, 58, 57, 60, 63, 66, 65, 58, 61] * 1.015, inf),
\dur, Pseq([2, 2, 1, 1, 1, 1, 1, 1, 1, rrand(0.45, 0.6),
0.5, 2, 2, 1, 1, 1, 1, 1, 1] * 0.2, inf)
).play;

Pbind(
\instrument, "FM",
\mp1, Pseq([70, 66, 65, 63, 65, 66, 65, 63, 61,
63, 61, 60, 58, 57, 60, 63, 66, 65, 58, 61] * 1.030, inf),
\dur, Pseq([2, 2, 1, 1, 1, 1, 1, 1, 1, rrand(0.35, 0.6),
0.5, 2, 2, 1, 1, 1, 1, 1, 1] * 0.2, inf)
).play;

Pbind(
\instrument, "FM",
\mp1, Pseq(explosion +60 , inf),
\dur, 0.2,
\amp, 0.03
).play;

Pbind(
\instrument, "FM",
\mp1, Pseq(explosion +60 , inf),
\dur, 1.9,
\amp, 0.03
).play;

Pbind(
\instrument, "FM",
\mp1, Pseq(explosion +60 , inf),
\dur, 1.8,
\amp, 0.03
).play;

Pbind(
\instrument, "FM",
\mp2, Pseq(explosion +60 , inf),
\dur, 4,
\amp, 0.08
).play;

Pbind(
\instrument, "FM",
\mp2, Pseq(explosion +60 , inf),
\dur, 4.01,
\amp, 0.08
).play;
)

Result from above code


I attempted to incorporate a panning mechanism into the SynthDef (based on the SimpleTone instrument), but I couldn't get it going when I tried to use it with the Pbinds. In this case it's just unused code, but I think I'll leave it there in case I work out how to implement the panning later on.

I also attempted to incorporate these two messages:

(mp1.midicps+mp2.midicps)
And
([mp1.midicps, mp2.midicps])

as arguments (to replace the "freq" of the SinOsc in the SynthDef) that I could control with Pbind, but alas, I couldn't work this out.
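For reference, the .midicps conversion these patches lean on is just the standard equal-temperament mapping, easy to sanity-check in Python:

```python
def midicps(note):
    """MIDI note number -> frequency in Hz, 12-tone equal temperament with
    A4 = MIDI 69 = 440 Hz -- the same mapping as SuperCollider's .midicps."""
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

# note that mp1.midicps + mp2.midicps adds raw frequencies, not musical
# intervals, so the summed version lands on an unrelated carrier pitch
```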

This is what the patch sounds like when I tried to incorporate these messages as an argument


Kinda cool, but completely unintentional.

This is what it sounds like with “(mp1.midicps+mp2.midicps)” instead of the original “freq”.


  • Music Technology Forum - Presentation - Presentations by Seb Tomczak and Darren Curtis
Sebastian Tomczak talked to us this week about his honours thesis, which is to develop a cheap and easy-to-assemble musical interface for the general public. He mentioned two current interfaces (Teleo and I-CubeX) that share a similar aesthetic to his own idea, only these cost hundreds of dollars. He estimates his own design will cost $21, although this is only an estimate and the price may change depending on what features he intends to incorporate. Still, it's a very good starting price. He mentioned that his design will be based on the MJoy schematic. I find this area of new musical interfaces and electronics very interesting. It would be nice if we could have honours/masters students present several times throughout the year and talk about their progress.[4]

Darren Curtis also talked about his honours thesis, which relates to the area of sound healing: sound that has a direct impact on the physiological processes of the body. He mentioned three popular sound-healing techniques:

- Binaural Beats
- Filtering
- Gating (sonic stimulation)

He also mentioned three different biotechnologies:

- Ultrasound
- Vibroacoustics
- Biocomputer Waves

Darren has decided to use binaural beats as the focus of his thesis. Originally he planned to work with people from the psychology department and make use of their EEG machines, but unfortunately he doesn't think this is possible given the scope and timeframe. He said that perhaps in future study, as a masters or PhD student, he'll be able to incorporate this sort of technology. I also find this subject very interesting. At a presentation Darren gave late last year, I had the pleasure of listening to his sacred sound CD. I must get a copy sometime.[5]
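For the curious, the binaural beat technique Darren is focusing on is easy to sketch: each ear gets a slightly different frequency, and the listener perceives a slow pulsation at the difference. A hypothetical Python example (the carrier and beat frequencies are my own illustrative choices, not Darren's):

```python
import numpy as np

def binaural_beat(carrier_hz=440.0, beat_hz=6.0, dur=1.0, sr=44100):
    """Return a (left, right) pair of sine tones whose frequencies differ
    by beat_hz; over headphones the brain perceives a beat_hz pulsation."""
    t = np.arange(int(dur * sr)) / sr
    left = np.sin(2 * np.pi * carrier_hz * t)
    right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
    return left, right

left, right = binaural_beat()
```

The effect depends on headphone playback: mixed into one speaker, the two tones would simply beat acoustically instead.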
  • Music Technology Forum - Workshop - Workshop on Christian Marclay, and Pink Floyd [6]
This week we listened to Christian Marclay, the vinyl scratcher. The four pieces we listened to – "John Cage", "Maria Callas", "Jimi Hendrix", and "Johann Strauss" – were compilations of these composers' works, but scratched up. My favourites of the four were "John Cage", because of the interesting rhythms, and "Johann Strauss". "Johann Strauss" reminded me of someone playing a monophonic synth with the slide feature enabled. I actually found it quite humorous because it sounded as though an orchestra was completely butchering the piece. On a more serious note, it could also be thought of as an interpretation of the classical piece, much like a classical musician interprets notation.


We revisited Pink Floyd again this week. This time we listened to "Shine On You Crazy Diamond" from Wish You Were Here. I personally hadn't heard a lot of Pink Floyd outside of The Dark Side of the Moon, so this was an interesting introduction to some of their later work. Only two years after Dark Side of the Moon, but I noticed quite a change in their style.
  • References
    [1] Grice, David. 2006. Practical on Reverb. University of Adelaide, 9 May.
    [2] Sound On Sound. 2003. "Making The Most Of D-Verb". November 2003. http://www.soundonsound.com/sos/nov03/articles/protoolsnotes.htm . Accessed 15 May 2006.
    [3] Haines, Christian. 2006. Lecture on Supercollider. University of Adelaide, 11 May.
    [4] Tomczak, Seb. 2006. Presentation on Seb Tomczak's honours thesis. University of Adelaide, 11 May.
    [5] Curtis, Darren. 2006. Presentation on Darren Curtis's honours thesis. University of Adelaide, 11 May.
    [6] Harris, David. 2006. Workshop on Christian Marclay, and Pink Floyd. University of Adelaide, 11 May.

Saturday, May 06, 2006

Week 8

  • Practical 1 - Audio Arts - Recording and preparing Voice Over [1]
This week we learnt how to record and prepare voice-overs. The points to focus on are:

- clarity (condenser mic and popper stopper)
- optimal output volume with the least noise, using compression (usually a 4:1 ratio is good)
- deadest sound (a dead room is best to record in)
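A compression ratio like 4:1 means that above the threshold, every 4 dB of input produces only 1 dB of output. A quick sketch of that static curve (the threshold value is my own illustrative choice):

```python
def compress_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Static compressor curve: below the threshold the signal passes
    untouched; above it, gain is reduced so that `ratio` dB of input
    yields only 1 dB of output (4:1 by default)."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# a peak 8 dB over a -20 dB threshold comes out only 2 dB over it
```

Taming the peaks this way is what lets you raise the overall level of the voice without the loudest syllables clipping.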

The mic should be set to either cardioid or hyper-cardioid with flat settings. Setting up a music stand is also a good idea, to prevent paper rustle from nervous 'talents'.

Later in the same week I got the chance to record a voice-over for a "Life Impact" uni promotion video. Unfortunately I wasn't able to use the dead room for my recording, so instead I set up a quasi-dead room using movable walls covered in sound-absorbent material.


(click for larger image)

This setup worked pretty well and here is the final product of both voice-overs.

Female Voice



Male Voice



As well as compression, I found a light gate worked well to eliminate unwanted mouth noises between phrases. At the same time, this also got rid of any breaths between phrases. I'm not sure if this was a good idea or not, as it makes the voice sound a little unnatural, but I figured most people probably wouldn't notice with music and visuals accompanying the voice.

  • Practical 2 - Creative Computing - Supercollider (5) [2]
Well, it's week 11 now and I've finally got around to doing the Supercollider work for week 8. After going through the week 8 readings twice, there were still some things I just couldn't grasp, so I figured I wouldn't move on until I understood them. Not understanding the material after two separate readings really put me off. Thanks to my lecturer's (Christian Haines) example, I was finally able to understand what to do.

It was the ‘do’ section of Chapter 21 that I was having trouble understanding. Additionally, for some reason the example code (that didn’t require a MIDI interface) in the MIDI section returned an error – I couldn’t work out why.

Anyway, the biggest piece of knowledge I have taken away from this week's Supercollider is the 'do' message. This gives me similar functionality to the 'metro' object in Max. With this function, the doors of possibility have opened. I only wish I had more time to explore this space of possibility.
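As a sanity check on the counter arithmetic in the routine further down, here is the same modulo logic in Python: the click count wraps every six events, giving a repeating six-step pitch cycle.

```python
def routine_pitches(click):
    """Mirror the SC routine's arithmetic: wrap the click counter modulo 6,
    then derive the two pitch arguments from it."""
    cnt = click % 6
    pitch1 = 10 * (cnt * 2)   # 0, 20, 40, 60, 80, 100, then repeats
    pitch2 = 17 * (cnt * 3)   # 0, 51, 102, 153, 204, 255, then repeats
    return cnt, pitch1, pitch2
```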

FM with Iteration

// Week 8 FM

(
SynthDef ("FM",
{

// Arguments
arg mpitch1 = 60,
mpitch2 = 67,
dur = 1.5 // duration of Volume Envelope
;

// Variables
var fm
;
// Output
fm = SinOsc.ar(
freq: [mpitch1.midicps,mpitch2.midicps] + // Carrier Frequency
VarSaw.ar(
freq: MouseX.kr( // Control Frequency
0.0, 100.0),
mul: MouseY.kr( // Index
0.0, 1000)


),
mul: [0.5] // Overall Amplitude

);
fm = fm * EnvGen.kr(
Env.perc(0, dur),
doneAction: 2
)
;

// Out
Out.ar(0, fm); // fm is already a two-channel array, so bus 0 takes the left channel and bus 1 the right
}
).load(s);
)

// Routine used to execute do (Iteration)

(

r = Task({

inf.do(
{
// Arguments
arg click // bang
;

// Variables
var next = 1, // time between bang
cnt = click % 6, // Influences pitch
pitch1 = 10 * (cnt*2), // Changes Pitch
pitch2 = 17 * (cnt*3) // Changes Pitch
;
// Feedback
cnt.postln;

a = Synth.new("FM");
a.set(\mpitch1, pitch1, \mpitch2, pitch2);

// Pause
(next).wait;
}
);
});

// Start Routine
r.start;

)

  • Music Technology Forum - Presentation - Presentations by Tim Swalling, and Jasmin Ward
The title of Tim Swalling's presentation was "Bringing Music to A-Life". It centred around his honours thesis on using artificial intelligence to create music. He talked about his current research into genetic algorithms and mentioned how they're based on natural selection and cellular automata. It was an interesting presentation, and I'll be interested to see how his research develops further.[3]

Jasmin Ward spoke about the current competition she is involved in with Stephen Whittington. Her aim is to develop a system using Max/MSP that will filter audio samples into a different sound. This is supposed to be an audio representation of natural ecological filtration systems that filter dirty water into clean water. I will be interested to hear her final result.[4]
  • Music Technology Forum - Workshop - Workshop on Led Zeppelin, Pink Floyd, Frank Zappa, and Pierre Henry [5]
First off we listened to "Whole Lotta Love" by Led Zeppelin. This piece featured the theremin in concurrent sweeps with guitar squealing. To be honest, I didn't even notice the theremin, as it was mixed so well with the guitar squealing.

Next we listened to "Bike" by Pink Floyd. This piece started off like a standard rock song, and then progressed to a trippy, improvised-sounding musique concrète section. This type of music didn't really impress me, and the concrète section was nothing new to me.

We also listened to "Breathe" by Pink Floyd. This is a great rhythmic piece featuring the VCS 3, and an excellent example of evolving electronic musical ambience. Personally, I've heard this song to death, so hearing it again killed me even more.

The last piece was the best, and appropriately saved till last. This was the tape piece "Voile d'Orphée" by Pierre Henry. Unlike other tape pieces, this one seemed to have a coherent structure rather than crazy, depersonalised sound bites. It actually reminded me of many modern-day film sound design techniques. If only Pierre were doing movies today. This piece has definitely inspired me to check out more of Pierre Henry.
  • References
    [1] Grice, David. 2006. Practical on Recording and Preparing Voice-Overs. University of Adelaide, 2 May.
    [2] Haines, Christian. 2006. Lecture on Supercollider. University of Adelaide, 4 May.
    [3] Swalling, Tim. 2006. Presentation on Tim Swalling's honours thesis. University of Adelaide, 4 May.
    [4] Ward, Jasmin. 2006. Presentation on Jasmin Ward's honours thesis. University of Adelaide, 4 May.
    [5] Harris, David. 2006. Workshop on Led Zeppelin, Pink Floyd, Frank Zappa, and Pierre Henry. University of Adelaide, 4 May.