Every so often a story about a product that sounds like "being there" makes it into the hi-fi press. High-end audio equipment manufacturers often claim to be able to accurately recreate the sound of being at a live event, and recreating that experience is often perceived to be the ultimate aim of recording and producing live music events. The people who work behind the classical music sound desk often like to be called sound balancers (not engineers or designers). This implies they are subtly balancing the audio levels rather than doing any extreme mixing — that they are just transparently channelling the performance from the concert hall to the audience's living room.
But is this what's happening? And should they be trying to accurately recreate the sound of being at an event? I suggest the answer is that they currently don't, and probably shouldn't.

Sound balancers have a difficult job. They have to take the complex sound sources of around 100 musical instruments (along with the response of the concert hall and audience to those instruments) and squeeze that experience into two (or sometimes six) loudspeakers. If more loudspeakers are added it may be acoustically easier to recreate the sound of being there, but I'm still not sure that accurate recreation should be the aim. Although it could be argued that the best mixes convey a sense of the actual space in which the performance occurred, most sound engineers will admit to using artificial reverberation or other tricks to improve their mix. The very act of careful microphone positioning is the first step in acoustic mediation, and that's before any balancing occurs.

There is also a regular debate amongst sound engineers as to whether an audio mix should follow a vision mix. For example, should the sound balancer push up the level of the piccolo when the vision mixer cuts to a close-up of the instrument? One side of the argument goes: being more dynamic about the audio mix ties the audio and video together. The other side goes: the director should know the piece well enough to direct the vision in sympathy with the music, so there should already be a tie between the audio and video. If this debate is of interest, compare the BBC Proms mix on BBC1 TV with the one on Radio 3 to get an idea of the difference.

So, currently the aim is not to recreate the sound of being there. The reality of attending a classical performance (in all but the best concert halls) often includes undesirable early reflections and a non-ideal direct-to-reverberant sound balance — and that's not mentioning rustling crisp packets and audience coughing.
The sound balancer is not mixing the audio to match the best seat in the house; rather, they are creating a hyper-real audio balance that only exists in the engineer's control room. It's a virtual position in the concert hall, perhaps floating somewhere around the conductor, and from an absolute quality standpoint it probably sounds far better than the best seat in the house. So why go to an event at all, if the best-sounding seat is actually in an engineer's control room? Well, there are at least four other senses at work when you're attending a concert, and technology (although it's trying) has a long way to go before it can replicate the whole experience.