Monday, August 14, 2006

Week 3

  • Practical 1 - Audio Arts - Systems Analysis & Game Sound Analysis [1]
This week we were asked to do a game engine analysis with a specific focus on sound, looking at a number of components.

Game Engine Structure: Refer to Week 2 Post

Tools:
The three main tools that ship with TGE are the World Editor, the GUI Editor, and TorqueScript.

IDE:
I believe this may be part of the Torque World Editor, but I'm not sure.

Extensibility and Other Engines:
Since the birth of TGE, a number of add-on engines have become available. These are the Torque Shader Engine and the Torque Game Builder. There are also starter kits available for various genres of game, such as the RTS starter kit.

Middleware:
Because the full source is available, just about any middleware should be able to be slotted in. In terms of audio, this means that FMOD could be used in place of OpenAL.

Systems Abstraction:
The Torque engine can be compiled for Windows, Mac, and Linux. OpenAL likewise allows cross-platform development on the audio side.

Source vs. Binary:
I believe there are GUI interfaces for the World Editor which allow you to add scripts, including scripting audio events. If you want to get down to a lower level, the full C++ source is also available.

  • Practical 2 - Creative Computing - In the Sand Box [2]
The sun is shining, and the birds are singing my song, or maybe SuperCollider is just starting to sound good. It's week 7 now, and since the start of week 3 I have revisited this patch every week trying to work it out. Well, it turns out I wasn't using a 'mono' file. The story goes that before I even started this patch, I heard from several people that you must use a 'mono' file. That didn't sink into my thick head, and I spent a week or two trying to work out why my patch didn't work. I then opened up my wave file and realised it was stereo. After realising this I thought I had converted it, but a couple of weeks later I re-realised I still hadn't. Today (3/09/06), after converting the wave file to mono and referencing Poppi's patch, I finally got it going. It's reassuring that I'm not a complete airhead when it comes to understanding the code, just an airhead when it comes to understanding that TGrains needs a mono file. It was really worrying me, as the following weeks' SuperCollider work all depended on this patch.
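
For the record, here is the quick sanity check I could have run back in week 3. This is just my own sketch, not part of the original patch, and Buffer.readChannel may not exist in older SuperCollider builds.

(
~thisPath = (PathName.new(Document.current.path)).pathOnly;
~check = SoundFile.openRead(~thisPath++"ringroad.wav");
~check.numChannels.postln; // TGrains needs this to be 1 (mono)
~check.close;

// If it prints 2 (stereo), one workaround is to load just the left channel:
// ~soundfile = Buffer.readChannel(s, ~thisPath++"ringroad.wav", channels: [0]);
)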


(

// Global Variables
~thisPath = (PathName.new(Document.current.path)).pathOnly;

// Sound File
~soundfile = Buffer.read( // Buffer of sound file to granulate
server: s,
path: ~thisPath++"ringroad.wav"
);

~soundfilebuffer = ~soundfile.bufnum;

// TGrains SynthDef
SynthDef("Zuljin",
{

// Arguments
arg trigFreq, // Grain trigger
grainAmp, // Grain amplitude
grainPos, // Grain centre position: the point in the source file (in seconds) where the grain envelope peaks
grainPan, // Grain pan position
grainRate, // Rate of playback
grainDuration, // Grain duration
mainVolume // Main Volume
;

// Variables
var signal;

// Output
signal = Normalizer.ar(
in: TGrains.ar(
numChannels: 2, // No. of output channels
trigger: Impulse.kr( // Grain Trigger
freq: trigFreq
),
bufnum: ~soundfilebuffer, // The (mono) sample to granulate
rate: grainRate, // Playback Rate 1.0=normal, 2=twice speed
centerPos: grainPos, // Position of audio file in seconds in which grain env will reach max vol
dur: grainDuration, // Duration of grain in seconds
pan: grainPan, // Grain Panning
amp: grainAmp, // Grain Amplitude
interp: 1 // Grain interpolation 1=none, 2=linear, 4=cubic
),
level: 1.0 // Peak Normalizing Rate
);

Out.ar(0, signal * mainVolume) // signal is already stereo, so bus 0 covers outputs 0 and 1
}).store;
)

// Performance Settings
(
~zul = Synth("Zuljin",
[
\trigFreq, 200,
\grainAmp, 0.5,
\grainPos, 2,
\grainPan, 1,
\mainVolume, 0.8
]
);

~tempo = TempoClock.new(
tempo: 1,
beats: 0,
seconds: Main.elapsedTime.ceil // .ceil rounds the elapsed time up to the next whole second
);



// Seq Grain Stream

~tempo.schedAbs(
beat: 0,
item: {

// Arguments
arg beat;

// Variables
var performance;

// Performance Data - MultiDimensional Array
performance = [

// Trigger Frequency
Array.series( // Array 0
size: 11, // produces [ 0, 2, 4, ... 20 ]
start: 0,
step: 2
),

// Grain Rate
[0.5, 1, 2, 20, 200], // Array 1 // wanted to do something like this [{rand(2, 3)}, {rand(10,20)}]

// Grain Duration
Array.series(
size: 50,
start: 0.1,
step: 0.1
)
];

// Performance Settings // % mousex?
~zul.set(
\trigFreq, performance.at(0).at(beat%performance.at(0).size).postln, // wrap beat around the size of Array 0 (11 values)
\grainRate, performance.at(1).choose,
\grainDuration, performance.at(2).choose
);
1 // Return 1 to reschedule every beat (could return [rrand(0.5, 2), 1].choose.postln for varied intervals)
}
);
)
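
To stop the patch once it has been running, something like the following should do it (my own addition, assuming the variables defined above):

(
~tempo.clear; // remove the scheduled grain-stream function
~zul.free; // free the Zuljin synth
~soundfile.free; // release the sample buffer
)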
  • Music Technology Forum - Presentation - Presentations by Adrian Reid, David Dowling, Vinny Bhagat, and Dragos Nastasie [3]
Adrian Reid - “Forces”
I really liked the textural contrast and variation in this piece. Although these weren't extreme, they nevertheless worked together quite well. There was one particularly piercing sound with a decay to noise that sounded very nice indeed. A name came to mind: "2001: A Sonic Obscurity".

David Dowling - Recording by the band "Tuscadero"
I must admit, this isn't really my type of band, but for that genre they sounded very tight and professional. The recording was pretty good overall. I'm not an expert at mixing this type of music, but I would have probably added a little more reverb, and brightened up the vocals a bit.

Dragos Nastasie - "Induced"
I really enjoyed this piece, and Dragos's expertise with Reason really shone through. It had a really nice groove, but I thought it dragged on a little too long; I kept hearing changes of section in my head, but alas they never manifested. The highlight of this piece was the middle section, which reminded me of the development section of a sonata. I really liked the distorted pad sound contrasting with the electric keyboard.

Vinny Bhagat - "Raag Yaman"
I have no recollection of what this piece actually sounds like. All I can remember is the visualisation playing on the projector, which didn't really do much for me. The piece wasn't all that bad, but I actually thought Vinny's piano playing got in the way at some points. I also thought the piano was a little loud overall compared to the rest of the mix.
  • Music Technology Forum - Workshop - Improvisation Groups!
My group talked about the ideas presented last week and tried to narrow down how all these ideas can tie together. I didn't take many notes, so some of what I say is from memory. I believe Luke wants to play some of the old analogue synths, and Patrick wants to play his tabla. At this stage I'm not sure exactly what Dan wants to do. I think Vinny will be using his laptop. I'm not sure if Poppi knows exactly what she wants to do yet, but I think it's either using a turntable, or staying away from digital and incorporating some papier-mâché, if my memory serves me right. I'm not sure what Seb and Tim want to do. I want to incorporate computer networking technologies. Here is a diagram explaining what I want to do.



A number of evolving Max parameters (which could be anything, a random object spitting out values for instance) are sent out from the server to a single client, which is selected by the pitch of the input to the server (via pitch~). The chosen client exclusively receives not only these Max parameters but also the audio signal of the server input (via netsend~), at an amplitude boosted relative to the number of clients participating.
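
Since the diagram above is really a plan for a Max/MSP patch, here is a rough SuperCollider sketch of the same routing idea, just to pin down my own thinking. The client addresses and the '/params' message name are made up for illustration, the pitch-to-client mapping is arbitrary, the netsend~ audio streaming part is left out, and OSCdef is from newer SuperCollider builds (older ones would use OSCresponder).

(
var clients = [
NetAddr("192.168.0.11", 57120), // hypothetical client machines
NetAddr("192.168.0.12", 57120),
NetAddr("192.168.0.13", 57120)
];

// Track the pitch of the server's input and report it back to the language.
SynthDef("pitchRouter", {
var in, freq, hasFreq;
in = SoundIn.ar(0);
# freq, hasFreq = Pitch.kr(in);
SendReply.kr(Impulse.kr(10), '/inputPitch', freq);
}).play;

// Pick one client from the detected pitch and send it an evolving parameter.
OSCdef(\router, { |msg|
var freq = msg[3];
var which = freq.cpsmidi.round.asInteger % clients.size; // crude pitch-to-client mapping
clients[which].sendMsg("/params", 1.0.rand); // only the chosen client receives this
}, '/inputPitch');
)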
  • References
    [1] Haines, Christian. 2006. Lecture on Game Audio. University of Adelaide, 8 August.
    [2] Haines, Christian. 2006. Lecture on Supercollider. University of Adelaide, 3 August.
    [3] Reid, Adrian; Dowling, David; Nastasie, Dragos; Bhagat, Vinny. 2006. Student Presentations. University of Adelaide, 10 August.

1 Comment:

Blogger Adrian said...

I like your title for my piece, it kind of fits.

I remember when listening to Vinny's work that the visualisation looked better on his computer than on the projector screen.

9:38 am, August 21, 2006  
