- Intensity-based localization (loosely based on "Ambisonics"; see
Winter '95 Computer Music Journal)
- For each sound source, generate 4-channel "B-format" signal:
- W: omnidirectional = signal*0.707 (0.707 is approximately 1/sqrt(2), a -3 dB pad)
- X: left-right = signal*cos(horizontal_angle)*cos(vertical_angle)
- Y: front-back = signal*sin(horizontal_angle)*cos(vertical_angle)
- Z: up-down = signal*sin(vertical_angle)
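The four encoding formulas above can be sketched directly in Python. The function and argument names are assumptions, angles are taken in radians, and the axis convention follows these notes (X = left-right, Y = front-back), which differs from the usual B-format convention:

```python
import math

def encode_bformat(sample, horizontal_angle, vertical_angle):
    """Encode one mono sample into four B-format channels (W, X, Y, Z).

    Axis convention per the notes: X is left-right, Y is front-back,
    Z is up-down. Angles are in radians.
    """
    w = sample * 0.707                                            # omnidirectional (-3 dB)
    x = sample * math.cos(horizontal_angle) * math.cos(vertical_angle)  # left-right
    y = sample * math.sin(horizontal_angle) * math.cos(vertical_angle)  # front-back
    z = sample * math.sin(vertical_angle)                               # up-down
    return (w, x, y, z)
```

In practice this runs per sample (or per block) for each source, and the resulting four-channel signals from all sources are simply summed before decoding.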
- "Decode" combined B-format signal to eight speaker signals:
- All eight speakers get W.
- Left speakers get +X; right speakers get -X
- Front speakers get +Y; rear speakers get -Y
- Upper speakers get +Z; lower speakers get -Z
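The decode step can be sketched as follows, assuming eight speakers at the corners of a cube around the listener. The speaker naming scheme is an assumption; the notes specify only the sign pattern (left/right -> ±X, front/rear -> ±Y, upper/lower -> ±Z):

```python
def decode_bformat(w, x, y, z):
    """Decode one combined B-format sample to eight speaker feeds.

    Every speaker gets W; the sign of each of X, Y, Z depends on which
    side of the listener the speaker sits. Speaker names are illustrative.
    """
    feeds = {}
    for lr, sign_x in (("left", +1), ("right", -1)):
        for fb, sign_y in (("front", +1), ("rear", -1)):
            for ud, sign_z in (("upper", +1), ("lower", -1)):
                name = f"{lr}-{fb}-{ud}"
                feeds[name] = w + sign_x * x + sign_y * y + sign_z * z
    return feeds
```

Note that the decoder never inspects source positions, only the four B-format channels, which is what makes the core position-independent.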
- Amplitude attenuation is proportional to distance_from_listener**2 (i.e., gain falls off as the inverse square of distance)
- Advantages:
- Simple computation, which leaves more CPU cycles available for
event processing, etc.
- The core of the code doesn't need to know the number of speakers
or their locations.
- Filter-based localization
- Head Related Transfer Function (cf. NASA "Convolvotron"):
- Using a "dummy head" or a human subject with microphones in the ears,
measure impulse responses from numerous source positions (around, above, below, etc.).
- Apply the filters thus obtained to adjust a sound's timbre according
to its desired position.
- PC cards with 3D sound support (e.g., Turtle Beach Montego) are
becoming available.