Over the past few decades the games industry has made startling advances in the fields that constitute it, most notably graphics and artificial intelligence (AI). With the ever-increasing capabilities of modern computer hardware, particularly processors and graphics accelerators, it has become possible to perform more complex calculations on the fly, hence the current booming popularity of physical realism in computer games. A number of companies have appeared in recent years developing real-time physical simulators that can be integrated with existing graphics engines to create a visually believable virtual world. However, a truly immersive experience requires both vision and sound. If the sounds you hear don't match what you see, the illusion of realism is ruined, no matter how impressive the visual aspect may be. With the inclusion of physics engines in many modern games, the foundation is already in place for the addition of sound synthesis. The physics engine calculates a considerable amount of data for its own purposes, and usually discards this information once the required movements have been computed. Instead of disposing of these intermediate physical calculations, we can use the data to assist in the synthesis of an accompanying effects soundtrack.
The current trend in computer games and virtual environments is to use pre-recorded sounds, known as samples, for the audio components of the system. This has the advantage of requiring minimal processor time, since samples can usually be played back without the aid of advanced filters. However, this method also has significant disadvantages.
Computer-simulated collisions, from an audio perspective at least, are usually reduced to the simple matter of which object collided with which, without taking the specifics of the collision into account. In reality the exact points of impact on the colliding objects are very important in determining the resultant positions after the collision. The same factors come into play in determining what sounds are heard. Take, for example, two cubes knocking together. If they were to collide face to face (i.e. flat surfaces together) you would get a particular sound. On the other hand, if one of the blocks was tilted at an angle so that one of its corners hit a face of the other block, you would get a moderately different sound. The difference in sound is not huge, but it is enough to be noticeable. This is where sound synthesis is advantageous: if the sounds are created on the fly, then factors such as the point and force of impact can be used to control the nature of the sounds produced.
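The influence of the contact point can be illustrated with a minimal sketch, assuming a hypothetical ideal string-like object whose mode shapes are sinusoidal (the function name and model are illustrative, not the project's actual code):

```python
import math

def modal_gains(impact_pos, length, num_modes, force):
    """Amplitude imparted to each vibration mode by a strike at
    impact_pos on an idealised string-like object of the given length.
    Mode n's gain is proportional to its mode shape sin(n*pi*x/L)
    evaluated at the impact point, scaled by the impact force."""
    return [force * math.sin(n * math.pi * impact_pos / length)
            for n in range(1, num_modes + 1)]

# A strike at the centre excites the odd modes strongly and the even
# modes not at all; an off-centre strike excites a different mix --
# hence the audibly different sound for different contact points.
centre = modal_gains(0.5, 1.0, 4, 1.0)
edge = modal_gains(0.1, 1.0, 4, 1.0)
```

Even this toy model shows why the same pair of objects can sound different from one collision to the next: the mode mixture, and hence the timbre, depends on where and how hard the contact occurs.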
This project uses modal synthesis to generate the sounds for the environment. Modal synthesis represents objects in terms of their frequency modes of vibration, allowing the information supplied by a physics engine (contact forces and so on) to stimulate these modes and produce appropriate contact sounds. The first simulation, pictured below right, shows a model of a musical instrument similar to a vibraphone. At the user's discretion, a number of balls are added to the simulation at random locations above the instrument and allowed to fall under gravity. Upon contact with the keys of the instrument, and with each other, the synthesis engine calculates the corresponding sounds based on the physical data received.
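At its core, modal synthesis renders each contact sound as a sum of exponentially damped sinusoids, one per mode. The following sketch shows this idea in isolation; the mode frequencies, dampings, and gains below are invented illustrative values, whereas in the project they would be derived from the object and from the physics engine's contact data:

```python
import math

def render_impact(freqs, dampings, gains, duration, sample_rate=44100):
    """Synthesise an impact sound as a sum of exponentially damped
    sinusoids, one per frequency mode of the struck object."""
    n_samples = int(duration * sample_rate)
    samples = []
    for i in range(n_samples):
        t = i / sample_rate
        # Each mode rings at its own frequency f and decays at its own rate d.
        s = sum(g * math.exp(-d * t) * math.sin(2 * math.pi * f * t)
                for f, d, g in zip(freqs, dampings, gains))
        samples.append(s)
    return samples

# Hypothetical modes for a small struck bar; the gains would normally be
# set per-collision from the contact force and impact position.
sound = render_impact([440.0, 1210.0, 2370.0], [8.0, 12.0, 20.0],
                      [0.8, 0.4, 0.2], duration=0.5)
```

Because the gains are recomputed for every collision, each impact in the simulation produces a slightly different sound, which is precisely what fixed samples cannot do.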