Friday, October 06, 2006

Week 9

  • Practical 1 - Audio Arts - Case Study [1]
This week we did a case study on Diablo II. I found it one of the best classes we've had so far. Hearing Christian's perspective on how certain sounds were created was very educational, and has consequently made this task a lot more approachable. Playing games and understanding them is important, but after this class I've realised that listening to game sfx on their own can be equally important. Taking away the visual certainly does open up the aural in this case, and it's probably one of the best exercises one can do (at least that I've encountered) for game sound design. I guess from there it would probably be a good idea to try to recreate these game sounds.

The sampling rate of the sounds themselves was only 22,050 Hz, but as was pointed out, this doesn't make a lot of difference to the player: there is so much else going on that they never get a chance to concentrate specifically on the technical properties of the sounds. I guess as games become more advanced, so will audio engines and reproduction standards, and consequently higher-fidelity sound will become more important to the gameplay experience. That is, extra frequency range and overall audio fidelity may become essential to the player's survival. Although it's apparent to me that audio cues can tell the player a lot about what is going on, the realism of the sounds isn't especially important. From my own experience, sounds serve as associations which hold meaning. The actual realism of the sound is not important, only its attached meaning. To illustrate this point, Christian showed us that a particular frog sound in Diablo II clearly sounded as though it was created from a duck. Yet having played the game for hours on end myself, it never occurred to me that those frogs in fact sounded like ducks.
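To put that 22,050 Hz figure in perspective, here is a minimal Python sketch (my own illustration, not something shown in the class) of what that sample rate means for frequency content:

    # Minimal sketch: the Nyquist limit of a 22,050 Hz sample rate, and what
    # happens to frequencies above it. (My own illustration, not from the class.)
    import numpy as np

    sample_rate = 22_050          # rate of the Diablo II sound assets
    nyquist = sample_rate / 2     # highest frequency the format can represent
    print(f"Nyquist limit: {nyquist:.0f} Hz")   # about 11 kHz

    # Anything above the Nyquist limit folds back (aliases) into the audible band,
    # which is why assets are low-pass filtered before being downsampled.
    tone_hz = 15_000
    t = np.arange(0, 0.01, 1 / sample_rate)
    aliased_tone = np.sin(2 * np.pi * tone_hz * t)   # sampled too coarsely
    folded_hz = sample_rate - tone_hz                # alias lands at 7,050 Hz
    print(f"A {tone_hz} Hz tone sampled at {sample_rate} Hz is heard as {folded_hz} Hz")

In other words, those assets top out at roughly 11 kHz, whereas CD-quality audio at 44,100 Hz reaches just over 22 kHz, about the limit of human hearing, so the game sounds are effectively missing their top octave.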
  • Practical 2 - Creative Computing - HID [2]

  • Music Technology Forum - Workshop - Improvisation with DJ Trip (Tyson Hopprich) [3]
Words will not do justice to the concoction of endorphins and thoughts that overflowed my mind during this session. Tyson started composing with the Commodore 64, and later the Commodore Amiga. During these early years he started to establish his own sound. It's only recently that he has felt the need to migrate to the PC platform. Although he's now using more modern software such as Cubase SX, he still tries to maintain the retro 8-bit sound he developed through the tracker-based Amiga programs, while at the same time assimilating the new compositional capabilities of the PC. I found it interesting when he talked about creating his "own sound". This idea of your 'own sound' has taken a firm seat in my mind. Another person who shares a similar background to Tyson Hopprich is Jesper Kyd. From various interviews and articles I've read about him, I've discovered that Jesper also emphasises the importance of spending time to create your own unique sound. Jesper has written the music for the Hitman series of games, and I would recommend people listen to the soundtrack of the first Hitman. For Hitman I believe he used a MIDI-triggered audio engine with a custom DLS soundbank; a very effective combination of tools for creating a unique sound. He also did the soundtrack to Freedom Fighters, and you can download the title track to this game here. Probably not the best track from the game, but still good.

I found it interesting when Tyson talked about listening to himself from a third-person perspective, and from there adjusting his music dynamically depending on the audience vibe. I found Dragos' piano style nostalgic. Albert's wind-derived sounds worked very well. Matt's piano was suitable and his use of ear was impressive. He did, however, stick to the mid to mid-high range of the piano and avoided the higher and lower extremes of the register. For this sort of music, I think those extremes could have worked really well. It was difficult to pick what Adrian and Jake were doing, so I'll refrain from talking about their contribution. I will say, however, that the drum beat in the first piece, and maybe the last or second-to-last (I'm not sure which), persisted for at least half the performance too long. I realise it's hard to break away from a beat when you're the only person creating it, but some variation in the beat, a silence, or a different beat altogether could have allowed for a more dynamic performance.
  • References
    [1] Haines, Christian. 2006. Lecture on Game Audio. University of Adelaide, 12 September.
    [2] Haines, Christian. 2006. Lecture on SuperCollider. University of Adelaide, 14 September.
    [3] Sardeshmukh, Chandrakant. 2006. Presentation. University of Adelaide, 14 September.
