Monday, August 28, 2006

Week 5

  • Practical 1 - Audio Arts - Game Sound Analysis [1]
This is an update of the blogged stratification map from last week. Our group now has a forum which we all use to give feedback, express ideas, and report progress, among other things. So far the artists have posted two characters: a PC (player character) mage, and a hybrid dog thing which I presume to be an enemy. OK, so assuming these will be in the game, I will create an asset list for them.

Title Screen
Here is a description taken from my brief.
"The User Interface will have a background of a fully real time image of a river and bridge scene over-looking a forest or a looping video of the same scene. The time of day will be just after midday. The river will be quite deep, but flowing gently down stream. Birds could be hidden in the trees and there will be a slight breeze blowing through the forest/trees."

I will also need to create button clicks for when buttons are pressed on the title screen interface.

Birds:
- light bird ambience
- occasional bird screech from particular types of birds (perhaps this environment is something like our own, in which case: what part of the world is it, and what types of birds occupy this area?)

Trees:
- trees blowing in the wind
- insect sounds

River:
- light stream
- heavy, deep stream

Interface button clicks:
- a possible combination of:
- mouth click
- door closing (+reverb)
- footsteps on wood

Options Screen
When the user changes the sound effect volume, a sound will need to be either looped on mouse-down or triggered whenever the volume slider is moved. This could really be any sound.
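
As a rough illustration of the second approach, here's a minimal SuperCollider sketch (my own, not from the brief; the synth name, window, and blip sound are all made up): a slider that plays a short confirmation blip every time it is moved.

(
// Hypothetical sketch: a volume slider that plays a short blip on each move,
// so the user immediately hears the new sound effect volume.
s.waitForBoot({

SynthDef("volumeBlip", { arg amp = 0.5;
var sig;
// Short percussive blip; the envelope frees the synth when finished
sig = SinOsc.ar(880) * EnvGen.kr(Env.perc(0.01, 0.15), doneAction: 2);
Out.ar(0, (sig * amp).dup)
}).store;

{
var win;
win = Window.new("Options", Rect(100, 100, 300, 60)).front;
Slider.new(win, Rect(10, 10, 280, 30))
.action_({ arg slider;
// Audition the newly chosen volume
Synth("volumeBlip", [\amp, slider.value]);
});
}.defer;
});
)

Looping a sound on mouse-down would work much the same way, except a single synth would be held alive between the slider's mouseDownAction and the mouse-up, instead of firing one-shots.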

IN GAME

COMMON SFX:
- Footsteps or character movement
-- could potentially differ depending on the creature, whether it be a human or a hovering efreet
Armour
The PC will be able to equip a number of different armour types. When the armour is clicked upon and equipped, a sound will need to accompany this. The body will be divided up into certain segments and these given segments will define the type of armour used.

Miscellaneous Items
- Potions
- Gold
- Quest Items

Anything else that can be dropped from monsters or found throughout the world. Each will need its own sound.

Combat
Attack and drop sounds:
- Axe
- Bow
- Sword (etc...)
- Hand to Hand weapon slashes
Etc…

UNIQUE SFX:

NPCs (Non-Player Characters)
The NPCs will need voices for when they are clicked on to buy things, or when the player needs to talk to them to retrieve information. They may also create sound via footsteps or interaction with the world, or react vocally when the PC comes within a certain proximity of them.

The artists have so far created:

Human Mage:
- Voice
- Spell Casting

Moxen:
- Idle sounds
- ‘Noticed PC’ sound
- Attacking sound
- Footsteps

Monsters
- Idle
- Noticed PC and Incoming
- Monster Hit
- Monster Die
Etc...

  • Practical 2 - Creative Computing - MIDI Input [2]
  • Music Technology Forum - Presentation - Presentations by William Revill, Ben Probert, Jacob Morris, and Tim Gabbusch [3]
William Revill - "Neurotic Turbulence"

I absolutely loved this piece. William told us he wasn't quite happy with it because he felt it still needed some extra tweaking, but from the point of view of an audience member who isn't as close to the piece as he is, I'd have to say it was fantastic. I really liked the contrasting textures, from smooth to harsh. The combination of wind and creaking gave me an extremely vivid picture of being suspended in a blimp-like basket atop some rickety stilts, wavering with the wind, high above a large rainforest at night. Good work.

Ben Probert - 3 pieces of music

Ben played us his Creative Computing piece containing his own satanic-sounding vocal samples, his NIN remix, and an Ableton piece he slapped together in half an hour. I'd have to say his Creative Computing piece was the most enjoyable listen; it contained some interesting sound design.

Jacob Morris – "New Surroundings"

I really liked the short narrative Jacob presented with his piece, although I found it difficult to follow within the piece itself. I guess I was hoping for something longer, because by the time I thought the narrative had moved, say, halfway through part 1, the piece had finished. The piece itself was a good listen too, and I can't help imagining how it might have sounded in 5.1, for which it was originally written.

Tim Gabbusch – "Tape Piece" and Audio Arts recording

Tim's 'tape piece' brought back some nice memories of our brief brush with the analogue world. Our generation of Tech students was unfortunately the last privileged to experiment with analogue tape. I remember all the pieces in our first year being very distinct in tonal colour, but I couldn't remember hearing this particular piece. It was confirmed later on that Tim had accidentally presented the wrong piece.

His Audio Arts piece was well recorded, and well composed might I say, although I thought an extra section could have improved upon its repetitive nature. The toms might have been a bit loud, and the overall mix sounded a little bassy, but that could have just been the reproduction system.
  • Music Technology Forum - Workshop - Improvisation Groups!
As I mentioned last week, from now on I am going to devote these sessions to mastering the Juno-6. Apart from the absentees, the session went really well. From my point of view I did feel a sense of togetherness for at least 50% of the time, but unless I 'level up' my psychic skill, I can't comment on what was going on in the heads of others. I am partially to blame for the remaining 50% of the time, in which some of us seemed to be playing by ourselves as opposed to contributing to a 'whole'. Reflecting on the improvisation, I found myself constantly slipping in and out of group consciousness. In other words, at times I found myself listening to other players and adjusting my sound accordingly, and at other times I became so absorbed in my own sound that I tuned out of the 'group sound'. This is something I need to work on: maintaining awareness of our 'group sound'.
  • References
    [1] Haines, Christian. 2006. Lecture on Game Audio. University of Adelaide, 22 August.
    [2] Haines, Christian. 2006. Lecture on SuperCollider. University of Adelaide, 24 August.
    [3] Revill, William. Probert, Ben. Morris, Jacob. Gabbusch, Tim. 2006. Student Presentations. University of Adelaide, 24 August.

Thursday, August 17, 2006

Week 4

  • Practical 1 - Audio Arts - Audio Engine Analysis [1]
This week we were asked to do a "macro asset stratification map". As I understand it, this is a fancy term for spotting notes, or at least the game equivalent of spotting notes. The list is currently incomplete, as we haven't finalised how much we'll be able to get done in our given time period. Anything mentioned under 'IN GAME' is an idea we may or may not get to.

Title Screen
Here is a description taken from my brief.
"The User Interface will have a background of a fully real time image of a river and bridge scene over-looking a forest or a looping video of the same scene. The time of day will be just after midday. The river will be quite deep, but flowing gently down stream. Birds could be hidden in the trees and there will be a slight breeze blowing through the forest/trees."

I will also need to create button clicks for when buttons are pressed on the title screen interface.

Birds:
- light bird ambience
- occasional bird screech from particular types of birds (perhaps this environment is something like our own, in which case: what part of the world is it, and what types of birds occupy this area?)

Trees:
- trees blowing in the wind
- insect sounds

River:
- light stream
- heavy, deep stream

Interface button clicks:
- a possible combination of:
- mouth click
- door closing (+reverb)
- footsteps on wood

Options Screen
When the user changes the sound effect volume, a sound will need to be either looped on mouse-down or triggered whenever the volume slider is moved. This could really be any sound.

IN GAME

NPCs (Non-Player Characters)
The NPCs will need voices for when they are clicked on to buy things, or when the player needs to talk to them to retrieve information.

Equipment

Armour
The PC (Player Character) will be able to equip a number of different armour types. When the armour is clicked upon and equipped, a sound will need to accompany this. The body will be divided up into certain segments, and these given segments will define the type of armour used.

Miscellaneous Items
- Potions
- Gold
- Quest Items

Anything else that can be dropped from monsters or found throughout the world. Each will need its own sound.

Combat
Attack and drop sounds:
- Axe
- Bow
- Sword (etc...)
- Hand to Hand weapon slashes

Monsters
- Idle
- Noticed PC and Incoming
- Monster Hit
- Monster Die
  • Practical 2 - Creative Computing - GUI [2]
  • Music Technology Forum - Presentation - Presentations by Henry Reed, Matthew Mazzone, and Daniel Murtagh [3]
Henry Reed - “Lucky”

Henry presented us this week with a touching piece he wrote for his grandfather, who was an aerial photographer in World War II. The piece was written in SuperCollider, which he used to scrub samples back and forth at various speeds. The sample playback was triggered by prime numbers, and various other parameters such as playback speed, volume, and instrument note sequence were controlled via random procedures. The samples he chose conjured vivid imagery in my mind. I especially liked his choice of the "camera shutter" and "jazz music" samples. The evolving ambient backdrop was also good.

Matthew Mazzone - 3 pieces of music

The first piece Matthew played was for a game he is writing music for. It gave me vivid imagery of lush swamps with rain and plentiful insect wildlife. Very well produced, although I think the piano may have been a little loud in some spots. The second piece he played us was an "Ableton Live" piece he apparently wrote in about 30 minutes. I quite liked it, and it reminded me of the game music composer Alexander Brandon, who did the scores for Deus Ex, Unreal 1, Jazz Jackrabbit, Tyrian, and other greats. I especially liked the sparse contrast of the electric piano and guitar. The third and final piece he presented was one he wrote whilst attending another private institution. It was well produced and sounded good for a while, but I mostly found it too repetitive and lacking in variation.

Daniel Murtagh – A Heavy Metal piece

Daniel's recorded heavy metal piece, which he co-wrote, was a little shocking to my ears. I must admit I'm not really used to hearing heavy metal. I soon got over it though, and as my ears began to adjust I started to appreciate it a lot more; unfortunately, by that point it finished soon after. Daniel mentioned the synth, "Drumkit From Hell Superior", which he used for the MIDI drum kit. He claimed that it used a technology that simulated mic positions on the drum kit and also allowed drum kit bleed, all of which was supposed to make it sound more like a kit in an acoustic space. Personally the result didn't really impress me, and even though I know he used it, to me it sounded like any other MIDI drum track, just with better samples. The music was pretty good, but I thought the bass end needed a bit of a boost. The bass drum in particular sounded a bit dead; the drums occupying the higher frequency range, such as the hi-hat and crash, sounded pretty good. Overall it still sounded like a MIDI drum kit to me, and I'm not at all inspired to go out and buy the synth myself.
  • Music Technology Forum - Workshop - Improvisation Groups!
OK, we all made some more progress this week, and some of us got to play some music too. Luke got out his Jupiter-4, and Poppi her turntables. Seb and Luke were the only people able to improvise together: Seb improvised with his Wacom tablet Max patch, which was controlling Luke's audio signal from the Jupiter-4. I am still trying to work out my networking idea, but I'm getting there. I didn't have a lot of time to work on the Max patch this week, but thankfully Seb had taken the initiative to program two systems of data mapping integral to the patch: the allocation of the Max data dependent on the incoming pitch of the audio, and the amplitude scaling of participating clients. Today I applied some programming glue to stick them both together. Seb thought of the idea of using the Mac closest to the turntables as a gateway for the turntable audio. It was a good idea, but unfortunately I encountered a problem with the 'netsend~' object. For some reason I couldn't get the audio sending over the network, although I did get the plain 'netsend' object to work. I didn't get enough time to troubleshoot 'netsend~', so hopefully I can get it working next week. The original idea was to use laptops wirelessly in a network environment with Max/MSP. The reason we have retreated to the Mac lab is that only Vinny and Seb have Max, Poppi doesn't have an AirPort card, and I only have Pd (which I'll need to learn if this wireless idea is to come to fruition).

UPDATE: OK, after considering the feedback from Stephen and David, and after speaking to Ms. Doser, I have decided to scrap the whole idea of Max and netsend. It got me thinking about why I was actually pursuing this netsend idea, how it would make me feel, and ultimately what I want to get out of these improvisation sessions. Poppi seemed to be feeling the same as me about nothing getting done and this sense of a lack of togetherness. I was thinking that the netsend idea was more intellectual than emotional, and it reminded me of how I felt doing the Max patches last year for my assessment. It brought me back to my issue of interesting vs. fun and enjoyable. Netsend is interesting, but ultimately I'll be having more fun with my new focus. I figured that I want to target the more "emotional" side of music. All I want to do now is grab an analogue synth and play around with it, just like Luke is doing. Actually, I was really inspired by Chris Williams, who played a Moog at one of the EarPoke concerts during ACMC; he knew that synth back to front. I have made that my new aim for these improvisation sessions: to be able to express myself completely with one of these synths, and to break away from the 'binary' I have immersed myself in these last two years.
  • References
    [1] Haines, Christian. 2006. Lecture on Game Audio. University of Adelaide, 15 August.
    [2] Haines, Christian. 2006. Lecture on SuperCollider. University of Adelaide, 17 August.
    [3] Reed, Henry. Mazzone, Matthew. Murtagh, Daniel. 2006. Student Presentations. University of Adelaide, 17 August.

Monday, August 14, 2006

Week 3

  • Practical 1 - Audio Arts - Systems Analysis & Game Sound Analysis [1]
This week we were asked to do a Game Engine Analysis with a specific focus on sound. We were asked to look at a number of components.

Game Engine Structure: Refer to Week 2 Post

Tools:
The three main tools that make up TGE are the World Editor, the GUI Editor, and TorqueScript.

IDE:
I believe this may be part of the Torque World Editor, but I'm not sure.

Extensibility and Other Engines:
Since the birth of TGE, a number of add-on engines have become available: the Torque Shader Engine and the Torque Game Builder. There are also starter kits for various game genres, such as the RTS starter kit.

Middleware:
Because the full source is available, any sort of middleware should be able to be slotted in. In terms of audio, this means that FMOD could be used instead of OpenAL.

Systems Abstraction:
The Torque engine can be compiled for Windows, Mac, and Linux. OpenAL also allows cross-platform audio development.

Source vs. Binary:
I believe there are GUI interfaces for the World Editor which allow you to add scripts, including scripting audio events. If you want to get down to a lower level, the source is also available in C++.

  • Practical 2 - Creative Computing - In the Sand Box [2]
The sun is shining and the birds are singing my song, or maybe SuperCollider is just starting to sound good. It's week 7 now, and since the start of week 3 I have revisited this patch every week trying to work it out. Well, it turns out I was using a stereo file where a mono one was needed. The story goes that before I even started this patch, I heard from several people that you must use a 'mono' file. That didn't seem to sink into my thick head, and I spent a week or two trying to work out why my patch didn't work. I then opened up my wave file and realised it was stereo. After realising this I thought I had converted it, but a couple of weeks later I re-realised I still hadn't. Today (3/09/06), after converting the wave file to mono and referencing Poppi's patch, I finally got it going. It's reassuring that I'm not a complete airhead when it comes to understanding the code, but instead an airhead when it comes to understanding that TGrains needs a mono file. It was really worrying me, as this patch remained a dependency for the following weeks' SuperCollider work.
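
Before the patch, a note for anyone hitting the same wall: TGrains stays silent with anything but a one-channel buffer, so the file either needs converting to mono in an editor, or (at least in newer versions of SuperCollider) it can be read one channel at a time with Buffer.readChannel. A minimal sketch, assuming ~thisPath is set up as in the patch below:

// Read only the left channel of the stereo file into a mono buffer,
// so TGrains will accept it
~monofile = Buffer.readChannel(
server: s,
path: ~thisPath++"ringroad.wav",
channels: [0] // channel 0 = left
);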


(

// Global Variables
~thisPath = (PathName.new(Document.current.path)).pathOnly;

// Sound File
~soundfile = Buffer.read( // Buffer of sound file to granulate
server: s,
path: ~thisPath++"ringroad.wav"
);

~soundfilebuffer = ~soundfile.bufnum;

// TGrains SynthDef
SynthDef("Zuljin",
{

// Arguments
arg trigFreq, // Grain trigger
grainAmp, // Grain amplitude
grainPos, // Position in the sound file (in seconds) at the centre of each grain's envelope
grainPan, // Grain pan position
grainRate, // Rate of playback
grainDuration, // Grain duration
mainVolume // Main Volume
;

// Variables
var signal;

// Output
signal = Normalizer.ar(
in: TGrains.ar(
numChannels: 2, // No. of output channels
trigger: Impulse.kr( // Grain Trigger
freq: trigFreq
),
bufnum: ~soundfilebuffer, // The (mono) sample to granulate
rate: grainRate, // Playback Rate 1.0=normal, 2=twice speed
centerPos: grainPos, // Position of audio file in seconds in which grain env will reach max vol
dur: grainDuration, // Duration of grain in seconds
pan: grainPan, // Grain Panning
amp: grainAmp, // Grain Amplitude
interp: 1 // Grain interpolation 1=none, 2=linear, 4=cubic
),
level: 1.0 // Peak Normalizing Rate
);

Out.ar(0, (signal * mainVolume)) // signal is already stereo, so bus 0 covers channels 0 and 1
}).store;
)

// Performance Settings
(
~zul = Synth("Zuljin",
[
\trigFreq, 200,
\grainAmp, 0.5,
\grainPos, 2,
\grainPan, 1,
\mainVolume, 0.8
]
);

~tempo = TempoClock.new(
tempo: 1,
beats: 0,
seconds: Main.elapsedTime.ceil // 'ceil' rounds up to the next whole second
);



// Seq Grain Stream

~tempo.schedAbs(
beat: 0,
item: {

// Arguments
arg beat;

// Variables
var performance;

// Performance Data - MultiDimensional Array
performance = [

// Trigger Frequency
Array.series( // Array 0
size: 11, // [ 0, 2, 4, ... 20 ]
start: 0,
step: 2
),

// Grain Rate
[0.5, 1, 2, 20, 200], // Array 1 // wanted to do something like this [{rand(2, 3)}, {rand(10,20)}]

// Grain Duration
Array.series(
size: 50,
start: 0.1,
step: 0.1
)
];

// Performance Settings // % mousex?
~zul.set(
\trigFreq, performance.at(0).at(beat%performance.at(0).size).postln, // beat%11 (the size of Array 0)
\grainRate, performance.at(1).choose,
\grainDuration, performance.at(2).choose
);
1 // Returning 1 reschedules this function every beat (could instead return e.g. [rrand(0.5, 2), 1].choose)
}
);
)
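
About the comment next to the Grain Rate array above (wanting something like [{rand(2, 3)}, {rand(10,20)}]): one way to get exactly that is to store functions in the array and evaluate whichever one .choose picks. A small sketch, assuming rrand is what those rand calls were meant to be:

// Grain Rate as an array of functions: .choose picks a function at random,
// and .value evaluates it, giving a fresh random rate on every beat
~grainRates = [
{ rrand(0.5, 2.0) }, // slow-ish playback rates
{ rrand(10.0, 20.0) } // fast, pitched-up rates
];

// e.g. inside the scheduled function:
// ~zul.set(\grainRate, ~grainRates.choose.value);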
  • Music Technology Forum - Presentation - Presentations by Adrian Reid, David Dowling, Vinny Bhagat, and Dragos Nastasie [3]
Adrian Reid - “Forces”
I really liked the textural contrast and variation in this piece. Although these weren't extreme, they nevertheless worked together quite well. There was one particular piercing sound with a decay to noise that sounded very nice indeed. A name came to mind: "2001: A Sonic Obscurity".

David Dowling - Recording by the band "Tuscadero"
I must admit, this isn't really my type of band, but for that genre they sounded very tight and professional. The recording was pretty good overall. I'm not an expert at mixing this type of music, but I would have probably added a little more reverb, and brightened up the vocals a bit.

Dragos Nastasie - "Induced"
I really enjoyed this piece, and Dragos's expertise with Reason really shone through. It had a really nice groove, but I thought it dragged on a little too long; I was hearing changes of section in my head, but alas they didn't manifest. The highlight was the middle section, which reminded me of the development section of a sonata. I really liked the distorted pad sound contrasting with the electric keyboard.

Vinny Bhagat - "Raag Yaman"
I have no recollection of what this piece actually sounds like. All I can remember is the visualisation playing on the projector which didn't really do much for me. The piece wasn't all that bad, but I actually thought Vinny's piano playing got in the way at some points. I also thought the piano was overall a little loud compared to the rest of the mix.
  • Music Technology Forum - Workshop - Improvisation Groups!
My group talked about the ideas presented last week and tried to narrow down how they can all tie together. I didn't take many notes, so some of what I say is from memory. I believe Luke wants to play some of the old analogue synths, and Patrick wants to play his tabla. At this stage I'm not sure exactly what Dan wants to do. I think Vinny will be using his laptop. I'm not sure if Poppi knows exactly what she wants to do, but I think it's either using a turntable, or staying away from digital and incorporating some papier-mâché, if my memory serves me right. I'm not sure what Seb and Tim want to do. I want to incorporate computer networking technologies. Here is a diagram explaining what I want to do.



A number of evolving Max parameters (which could be anything; a random object spitting out values, for instance) are sent from the server to a single client, selected by the pitch of the server's input (via pitch~). The chosen client exclusively receives not only these Max parameters but also the audio signal of the server's input (via netsend~), at an amplitude scaled up relative to the number of participating clients.
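
The patch itself is in Max/MSP, but here is a SuperCollider/OSC sketch of the same routing idea, just to pin it down (the client addresses, port, and message name are all made up for illustration):

// Hypothetical sketch of the routing idea (the real patch is in Max/MSP):
// a pitch estimate selects one client, which alone receives the parameters.
(
~clients = [ // placeholder client addresses
NetAddr("192.168.0.2", 9000),
NetAddr("192.168.0.3", 9000),
NetAddr("192.168.0.4", 9000)
];

~route = { arg pitchHz, params;
var index;
// Map the (roughly exponential) 100-1000 Hz pitch range evenly across clients
index = pitchHz.clip(100, 1000).explin(100, 1000, 0, ~clients.size - 1).round.asInteger;
~clients[index].sendMsg("/params", *params);
};

~route.value(440, [0.5, 0.8]); // 440 Hz lands on the middle client
)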
  • References
    [1] Haines, Christian. 2006. Lecture on Game Audio. University of Adelaide, 8 August.
    [2] Haines, Christian. 2006. Lecture on SuperCollider. University of Adelaide, 3 August.
    [3] Reid, Adrian. Dowling, David. Nastasie, Dragos. Bhagat, Vinny. 2006. Student Presentations. University of Adelaide, 10 August.

Monday, August 07, 2006

Week 2

  • Practical 1 - Audio Arts - Systems Analysis & Game Sound Analysis [1]
This week we were asked to do a Systems Analysis for the system we are developing audio for. Here it is:

System: PC
It's too early in development to determine minimum or recommended system requirements

CPU: Average (guess)
RAM: Average (guess)
Storage: 1Gig total game (guess)
Bus Speed: Samples loaded into ram, or streaming from disk (guess)
ADC/DAC: assume lowest common denominator - built in audio AC97 (guess)

Medium: DVD/CD (guess)
Streaming off HD, not DVD/CD

Reproduction: Speaker Array/Headphones
Mono/Stereo or Multichannel 5.1/7.1 (hopefully)

Game Engine - Torque
Audio Engine - OpenAL

*supported by Creative Labs
- Option of supporting EAX 5 (unlikely support due to time constraints)

The general functionality of OpenAL is encoded in source objects, audio buffers and a single listener. A source object contains a pointer to a buffer, the velocity, position and direction of the sound, and the intensity of the sound. The listener object contains the velocity, position and direction of the listener, and the general gain applied to all sound. Buffers contain audio data in PCM format, either 8- or 16-bit, in either monaural or stereo format. The rendering engine performs all necessary calculations as far as distance attenuation, Doppler effect, etc.[2]

Sample Rate: 44.1 kHz (guess)
Bit Depth: no idea
Number of Channels: up to 7.1
  • Practical 2 - Creative Computing - Splice and Dice [3]
We basically covered BBCut this week. I was going to use CutMixer, but I wasn't completely sure how it worked or how it would fit into the rest of the patch.




// Week 2
(

// Global Variables
~thisPath = (PathName.new(Document.current.path)).pathOnly;

// Sound Files
~sf = BBCutBuffer(
filename: ~thisPath++"ringroad.wav",
beatlength: 20 // Number of beats in the sound file, used to estimate its tempo
);

~sf1 = BBCutBuffer(
filename: ~thisPath++"ringroad.wav",
beatlength: 8 // Number of beats in the sound file, used to estimate its tempo
);

~sf2 = BBCutBuffer(
filename: ~thisPath++"getit.wav",
beatlength: 8
);

~sf3 = BBCutBuffer(
filename: ~thisPath++"beat76.wav",
beatlength: 8
);

// Clock Setups
// Clock 1
~clock= ExternalClock( // Playback speed (8 and 1 are good)
tempoclock: TempoClock(
tempo: 1
)
);

~clock.play;

// Clock 2
~clock2= ExternalClock( // Playback speed (8 and 1 are good)
tempoclock: TempoClock(
tempo: 1
)
);

~clock2.play;

// Clock 3
~clock3= ExternalClock(
tempoclock: TempoClock(
tempo: 1 // Playback speed 1=normal, 0.5=half speed

)
);

~clock3.play;

// Clock 4
~clock4= ExternalClock(
tempoclock: TempoClock(
tempo: 1 // Playback speed 1=normal, 0.5=half speed

)
);

~clock4.play;

)

// The Vault Dweller (Works)
(
Routine.run({ // Routine.run plays this function as a routine straight away

a = BBCut2(
cutgroups: CutBuf1(
bbcutbuf: ~sf,
offset: 0.35
),

proc: BBCutProc11(
sdiv: 4, // How many sub divisions (how often it will cut) - can be rhythmic at higher values
barlength: 0.5, // x/4 Time Sig (how much of length to play)
phrasebars: 0.5, // The length of the current phrase is barlength*phrasebars.
numrepeats: 6, // Total number of repeats for normal cuts. (higher numbers create less frequent cut phrases)
stutterchance: 1, // Chance of repeating Cut
stutterspeed: 1, // Integer multiple of subdiv
stutterarea: 1 //a stutter is permissible within this proportion of the last bar of a phrase. 0.5 for a half bar at 4/4
)
).play(~clock);
})
)

// The Chosen One
(
Routine.run({

b = BBCut2(
cutgroups: CutBuf1(
bbcutbuf: ~sf1,
offset: 0.3
),

proc: BBCutProc11(
sdiv: 8,
barlength: 3,
phrasebars: 2,
numrepeats: 3,
stutterchance: 0.5,
stutterspeed: 3,
stutterarea: 1.0
)
).play(~clock);
})
)

// Marcus
(
Routine.run({

c = BBCut2(
cutgroups: CutBuf1(
bbcutbuf: ~sf2,
offset: 0.3
),

proc: BBCutProc11(
sdiv: 8,
barlength: 3,
phrasebars: 2,
numrepeats: 3,
stutterchance: 0.5,
stutterspeed: 3,
stutterarea: 1.0
)
).play(~clock2);
})
)

// 3rd
(
Routine.run({

d = BBCut2(
cutgroups: CutBuf1(
bbcutbuf: ~sf3,
offset: 0.3
),

proc: BBCutProc11(
sdiv: 16,
barlength: 3,
phrasebars: 2,
numrepeats: 3,
stutterchance: 0.5,
stutterspeed: 3,
stutterarea: 1.0
)
).play(~clock3);
})
)

// End all the cutters
(
a.end;
b.end;
c.end;
d.end;
)


  • Music Technology Forum - Presentation - Presentations by Luke Digance, and John Delany [4]
Luke Digance - “Concrete Harmony”

Luke presented the Musique concrète piece he did last semester. For this piece he directly challenged the Musique concrète idea of dereferencing sounds from their sources and treating them as sound objects: instead, he examined the fundamental frequency of each sound and treated it as a pitch. From there he pitch-shifted these sounds to form notes that make up chords. He basically wrote a piece of music that falls into Western harmony, but with found sounds instead of pitched instruments. Although the piece was perhaps slightly repetitive at times, the repetition was not enough to bore me. Overall I thought the piece sounded really great, and I commended him on his good idea. I could also hear the influence of Aphex Twin that he mentioned.
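
Out of curiosity, here is the underlying trick sketched in SuperCollider. This is my own reconstruction, not Luke's actual patch, and it assumes a booted server plus the stock a11wlk01.wav sample:

// Reconstruction of the idea (not Luke's patch): play one found sound
// at semitone-derived rates so the copies land on the notes of a major chord
(
~found = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav", action: {
[0, 4, 7].do({ arg semitones; // root, major third, perfect fifth
{ (PlayBuf.ar(1, ~found, rate: semitones.midiratio, doneAction: 2) * 0.3).dup }.play;
});
});
)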

John Delany – “Performance Symmetry”

John also presented his Audio Arts piece from last semester. The idea behind this piece was to record Benjamin Probert and Patrick McCartney singing on a single note but emphasising harmonics; I gathered this was harmonic singing. From there, John pitch-shifted the voices within a range of three octaves to form a symmetrical curve of pitches. First of all, I thought his score presentation was fantastic: very clear and exciting to the chemicals in my brain. I also loved the distortion he applied toward the end of the piece. From an orchestration point of view, I had never heard harmonic singing with distortion, but I thought it sounded fantastic. Come to think of it, it's probably an instrumental combination you'd least expect. My only criticism, however, is that just as I was getting into it, the piece abruptly faded out over about 300 milliseconds. When asked about this, John replied that it was part of his deliberate aesthetic. Fair enough; I have done similarly abrupt finishes in the past, but personally I would have liked it to go on for at least twice as long. It was otherwise a great piece.
  • Music Technology Forum - Workshop - Improvisation Groups!
This semester we're doing something new and experimental. For our workshop component, three groups need to work out techniques for improvisation and make a recording. The group I was chosen to be in contains the following people:

Tyrell Blackburn
Vinny Bhagat
Poppi Doser
Patrick McCartney
Daniel Murtagh
Luke Digance
Tim Swalling
Seb Tomczak

At this stage we have only started brainstorming ideas. Here are some of the ideas thrown about:

Max/MSP
Old PC sound hardware such as Adlib or PC speaker (personal interest)
Jitter
SuperCollider
Analogue equipment
- VCS 3
- Reel to Reel
Networking of machines: This is something that could very well work as all but three of us have laptops, four of which are Macs that also have wireless. I am particularly interested in exploring this area as it allows a more direct manipulation and interaction through improvisation.

At this stage I don’t feel comfortable enough with SuperCollider. I mean I could very well use it, but it would be more a practice of experimentation (and half learning the program) than actually directly realising my ideas. Experimentation is always good, but personally I would like to take my ideas to the next level with this improvisation, and I believe that falling back on Max/MSP would allow me to do this. Who knows, I may change my mind later on in the Semester...
  • References
    [1] Haines, Christian. 2006. Lecture on Game Audio. University of Adelaide, 1 August.
    [2] Wikipedia. 2006. "OpenAL". http://en.wikipedia.org/wiki/Openal. 29 July. Accessed on 31 July 2006.
    [3] Haines, Christian. 2006. Lecture on SuperCollider. University of Adelaide, 3 August.
    [4] Digance, Luke. Delany, John. 2006. Student Presentations. University of Adelaide, 3 August.