LEARNING TO USE THE VIRTUAL SOUND SERVER

AN INTRODUCTION TO VSS 3.1

Audio Development Group
last updated: Camille Goudeseune, 22 May 1998

This is an introduction to version 3.1 of VSS, developed at NCSA. A Reference Manual for VSS is here.

Table of Contents:

  1. FAQ: What is VSS?
  2. Architecture and signal flow
    1. Functionality
    2. GUI
  3. Running VSS
    1. Linking clients to VSS
    2. The basic client-server connection
    3. Usage
  4. Sending messages to VSS
    1. Actor message syntax
    2. Usage
    3. Actor handles
  5. Actor classes
  6. Sound authoring
  7. Audfiles
    1. Using .aud files
    2. Client-side view of .aud files
    3. Connecting clients to .aud files
    4. .aud file command syntax
    5. The simplest .aud files have no message groups
    6. Designating time in .aud files
  8. Message groups
    1. Structuring client code for AUDupdate()
    2. Using message groups
    3. Assigning data from the client to Actor messages
  9. CAVE Apps
  10. Tutorials
  11. Sound file libraries

1. FAQ: What is VSS?

Q. What is VSS?
A. VSS is the software application at the core of the NCSA Sound Server environment. It gives a client application access to the server's library functions and renders the sound, either in real time or as a sound file.

Q. Where can I get VSS?
A. At NCSA, run VSS directly from the directory /afs/ncsa/packages/vss/6.2. Elsewhere, get it from the NCSA Audio Group website.

Q. When do I run VSS?
A. VSS runs in the background as a continuous service-provider while client applications send control messages.

Q. Does VSS work like a network or multimedia server?
A. VSS operates as a "local server" to render sound at a particular workstation. VSS can respond to multiple requests from multiple clients. Client applications can be running on the VSS host platform, or on other platforms that are networked to the VSS host. Messages are sent remotely over the network using the User Datagram Protocol, or UDP. (Like the better-known TCP, UDP runs on top of IP; unlike TCP, it is connectionless and lightweight.)

Q. How many VSS jobs can I run at one time?
A. Due to hardware restrictions, at any time only one copy of VSS can run on a single SGI workstation.

Q. Does every VSS client require its own VSS?
A. No. A single VSS can service requests from multiple clients simultaneously.

Q. Can I network and synchronize multiple VSS services?
A. Yes. A client may be configured to send messages to multiple servers running on separate workstations. Synchronous client requests will produce synchronous sound events.

Q. How do I make sound with VSS?
A. For making sound...

  1. identify one or more synthesis algorithms in the sound server library;
  2. configure a C or C++ application (a client) to send messages to VSS;

In the simplest configuration a client can send messages to the sound library as C++ function calls. In most cases a text file format -- called an .aud file -- is used to facilitate communications between the client application and the server. This tutorial introduces .aud file syntax and usage.

Q. Why use an external file instead of putting the messages into client code?
A. For flexibility and for adding sounds to an application quickly: (1) the .aud file syntax provides a structured approach to making sounds, and (2) the client-server relationships are encoded in ASCII configuration files rather than compiled into the client.

Q. How do I control VSS?
A. VSS provides a library of function calls we refer to as Actor messages. Actor messages invoke the functions in Actors and are used to pass control variables to those functions.

Q. What is an Actor?
A. An Actor is a C++ class representing a transfer function or a temporal function that sends data to sound synthesis functions. Control data may be input to an Actor, or it may originate within the Actor. An Actor provides high-level sound control: a given Actor usually passes more control signals on to VSS than the number of data streams it receives from a client. Actors can be used to control other Actors or to control synthesis functions external to VSS.

Q. Where do I learn Actor messages?
A. Messages associated with each Actor in the sound server are listed in the Reference manual.


2. Architecture and signal flow

The VSS 3.1 architecture consists of

  1. a client-side library,
  2. an authoring protocol stored as a text file (".aud file"), and
  3. the executable sound server, whose sound production engines are defined as dynamically shared objects (DSOs)
(see Figure 1).


Functionality

Client
  1. Compile time: link to sound server library.
  2. Run time: make sound requests and export numerical data corresponding to client states.

Authoring file

  1. Register specific sound production messages for each type of client sound request.
  2. Map domain of client-side data or status to range of control parameters for sound production.

Server

  1. Parse message requests from client applications.
  2. Execute sound production messages, processing data sent by client according to audfile specifications.

The goal of this architecture is to minimize the sound production knowledge encoded in the client application, and isolate specific sound production configurations in the .aud files, supported by sound engines loaded into the server.

GUI

A GUI is provided to assist the sound authoring process (see Figure 2). The GUI provides a front-end for information stored in .aud files. Mappings from client data to sound production are created in the GUI and stored in .aud files. The GUI also supports and records the configuration of multiple sound production engines which respond to client sound requests. The GUI is used for authoring and testing sounds; it is not required when a client application is in operation.


3. Running VSS

VSS defaults to a sampling rate of 22 kHz and one channel (mono) playback. These may be changed on VSS's control panel while it is running, or on the command line at startup. To learn about these and other options, see Running VSS in the reference manual.

Linking clients to VSS

Sound library functions are linked to the client application at compile time. Functions are prototyped in vssClient.h and the routine library is found in libsnd.a.

To configure a client application, make sure that the appropriate directories are set at compile time, for example these could be Makefile entries:

	CC -c client.c -I/vss/include/directory
	CC -o client client.o -L/vss/library/directory -lsnd -ll -lm

where /vss/include/directory contains the file vssClient.h
and where /vss/library/directory contains the file libsnd.a
(typically these are in fact the same directory). Note that the link command which builds your client (here, the second line,

	CC -o client client.o -L/vss/library/directory -lsnd -ll -lm

) MUST use "CC". If you get link errors for symbols like __vec_new and __nw__FUi, it is because you linked with "ld" or "cc" instead.

The client code must then have

	#include "vssClient.h"

ahead of the function calls. For specific details and a working example see The Trivial Client in the Writing Client Applications section of the reference manual.

The basic client-server connection

A client's connection to VSS is initiated and terminated using these functions:

	int BeginSoundServer(void);
	void EndSoundServer(void);

BeginSoundServer() returns 1 if successful and 0 otherwise. It establishes a UDP connection and handshakes with VSS. EndSoundServer() cleans up the memory in VSS used by the client and cleanly breaks the UDP connection.

To establish a connection with VSS running on a specific remote host, use

	int BeginSoundServerAt(char * hostname);

instead of BeginSoundServer(). BeginSoundServer is sufficient for a remote connection to VSS if the server's hostname has already been set as a unix environment variable prior to launching the client application. To do this use the command (from csh)

	setenv SOUNDSERVER hostname
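
For example, a client might connect to VSS running on another machine as in the following sketch (the hostname "audiohost" is hypothetical; substitute your own, or rely on the SOUNDSERVER environment variable instead):

	#include "vssClient.h"

	/* Connect to VSS on a named remote host instead of the local machine. */
	if (!BeginSoundServerAt("audiohost"))
		exit(2);	/* no VSS reachable on that host */

	/* ... Actor messages, AUDupdate() calls, etc. ... */

	EndSoundServer();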

Usage

When initializing a connection to VSS it is a good idea to check whether the client application can find VSS. If VSS is not running or its path cannot be found, the client has the option of continuing to run silently. In the example below the client exits if there is no server connection.

	if (!BeginSoundServer() )
		exit(2);

Within the exit code of the client (i.e. after normal contact with, and use of, the server), clean up as follows:

	EndSoundServer();
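
Putting these pieces together, a minimal connection skeleton might look like the following sketch (do_sound_work() is a hypothetical stand-in for the application's own processing):

	#include <stdlib.h>
	#include "vssClient.h"

	int main(void)
	{
		if (!BeginSoundServer())	/* can we reach VSS? */
			exit(2);		/* here we choose to quit; running on silently is also an option */

		do_sound_work();		/* hypothetical: the application's own loop, including sound requests */

		EndSoundServer();		/* free the client's VSS memory and close the UDP connection */
		return 0;
	}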


4. Sending messages to VSS

Actor message syntax

The basic Actor message syntax is:
	Command Arguments;

The Arguments will typically contain an actor handle (see Actor handles below), followed by one or more parameter-setting values specific to the actor type, so that the message takes the basic form:

	Command ActorHandle value [value ...];

Examples:
	(1) SetPlaybackRate Actorhandle playrate "filename";

	(2) SendBreakpoints Actorhandle envArray;
In the first example Actorhandle and playrate are floating-point numbers and filename is a character string identifying a soundfile that is playing or that will be playing. (Note that filename must be surrounded by double quotes.) In the second example Actorhandle is a float, and envArray is an array of floats.

Usage

The way to call this syntax depends on whether the message is entered in an .aud file or called from application code (a direct C function call to the runtime library). The .aud file format is simpler than a C/C++ function call. For example in an .aud file an instance of the message in example (2) appears as follows:

	(3) SendBreakpoints my_env [ 0, 0.333, 2, 1, 5, 0 ];

In this example the term "my_env" is a user-specified variable name representing an Actor handle. Brackets are delimiters for array declarations in .aud files.

Actor message arguments may be delimited using any combination of commas and whitespace. The same holds true for delimiting of array elements.

Actor handles

Actor messages are of two types: those that return handles and those that do not. A "handle" is a floating point number identifying an Actor that is active in VSS. Handles are used for updating an Actor with new control data. When an Actor is no longer needed the handle is used to terminate the Actor and free the associated memory.

Here is the basic message syntax for messages that return Actor handles:

	ActorHandle = Create ActorType;

Generator Actors have sound-generating instances, called "Sounds". These are created similarly, with their corresponding handles returned, through messages with the basic syntax:

	SoundHandle = BeginSound ActorHandle;

Examples:

	(4) ThisActor = Create SampleActor;
	(5) ThisSound = BeginSound ThisActor, SetAmp amplitude, SetFile "filename";

Create and BeginSound are Actor messages in VSS. SampleActor is an Actor type in VSS. ThisActor and ThisSound are user-created variable names that store the handles returned by VSS. Notice that the Actor handle returned in line (4) is used as an argument in line (5).


5. Actor classes

The signal flow for sound authoring uses four Actor classes:
  1. Sound Sources
  2. Modifiers
  3. Processors
  4. Message Groups

Sound Sources are audio signal generators; they produce a signal which is passed to the computer's audio hardware and presented to an external audio signal path, which typically includes an audio amplifier and output transducers such as loudspeakers or headphones. Sound Sources are organized in Sound Groups. A Sound Group is a user-defined collection of sources of a common engine type. A sound group allows a parameter change to be performed synchronously to multiple sound sources of a given type.

Modifiers are functions which modify a control parameter. A Modifier can generate or modify a control signal. An example is an Envelope, a piecewise-linear function applied in timesteps as a scalar to a parameter value, to result in a linear change of the parameter value over a specified duration. Another example is a Mapper which rescales a range of values in a control data stream.

Processors receive an audio signal and perform digital signal processing operations to output a modification of the audio signal. An example is a Directional Mixer which distributes an audio signal to multiple outputs. Another example is a Reverberator which creates time-delayed filtered repetitions of an audio signal.

Message Groups encapsulate data from a client and specify how that data is applied to control sound synthesis. Message Groups provide a structured manner by which events occurring in the client may be tied to Actor Messages in VSS, through the .aud file interface. Thus, Message Groups are used to define and control the nature of the interaction between the running client and the Actors in VSS.

Events occurring in the client are characterized by the computational state of the client and by the interaction between the user and the client. These events may be indexed by time, by value, or both. Through Message Groups, a mapping is established between client-side states (their occurrence in time, and the conditions they represent) and server-side Actors.

Message Groups are the only Actor class directly visible to the client. Message Groups are referred to within the client by their handle name and by the data passed in their argument array. The mapping of client-side events to sound is then constructed external to the client code wherever possible, using Actor Messages within the .aud file.


6. Sound authoring

Sound authoring is a creative and interpretive process which includes a data analysis task, a user interface analysis task, and an orchestration task. The process unfolds roughly as follows; the GUI work cycle follows the same basic steps.


7. Audfiles

Audfiles are currently ASCII files for run-time configuration of VSS. Audfiles are used to initialize the mapping between data and sound requests from a client, and sound messages passed to VSS. A client can open multiple audfiles. An audfile defines the VSS messages associated with Message Groups named in the client application.

Using .aud files

Using .aud files a client application does not have to recompile when the mapping of data to sound is altered. This supports rapid prototyping and rapid refinement. "Plug and play" of sound algorithms -- in the form of message groups -- is made possible by borrowing structures from existing .aud files and calling those structures from new client applications. Message groups are an .aud file structure discussed in section 8.

Client-side view of .aud files

Here we discuss how to structure a client application to interact with .aud files. The syntax for the commands discussed is as follows:

	int AUDinit(const char * filename);

	void AUDupdate(int handle, char *MessageGroupName, int Numfloats, float *dataArray);

	void AUDterminate(int handle);

There are three basic steps involved in preparing a client application to communicate with .aud files:

First: Establish client connections and link as described in section 3.

	#include "vssClient.h"

/* Near the top of C++ client application */ if (!BeginSoundServer() ) exit(2);

/* Here are hypotehtical interaction, simulation and graphic display events */ while(not_done) {

interactions = check_control_devices(); status = update_simulation_states(interactions); update_environment_graphics(status); not_done = query_status(status); }

/* okay we're getting ready to exit */ EndSoundServer();

Second: After initializing contact with the server, open one or more .aud files. The .aud file handle returned by AUDinit() allows the client application to access items in more than one .aud file and to clean up VSS memory using AUDterminate() when the client application closes. If AUDinit returns a number less than zero, a syntax error was detected in the .aud file.

	#include "vssClient.h"

/* create variables for file handles*/ float handle1, handle2;

if (!BeginSoundServer() ) exit(2);

/* open the audfiles of choice */ int hContinuous = AUDinit("AUD/continuous_sounds.aud"); int hConditional = AUDinit("AUD/conditional_sounds.aud"); if (hContinuous<0 || hConditional<0) exit(3);

while(not_done) {

interactions = check_control_devices(); status = update_simulation_states(interactions); update_environment_graphics(status); not_done = query_status(status); }

/* time to clean up */ AUDterminate(hContinuous); AUDterminate(hConditional); EndSoundServer();

Third: use AUDupdate() to make particular sounds when the correct conditions occur. The sounds are created according to a Message Group which is specified in the .aud file. The name of the Message Group is not declared in the client application. The .aud file contains a call which initializes a Message Group of the correct name; this must happen before the client can use that name in an AUDupdate(). When VSS receives the name from the client it looks to see if a Message Group of that name has been initialized.

	#include "vssClient.h"

float handle1, handle2;

/* create variables for sending data to message groups */ float dataArray1[arraySize1], dataArray2[arraySize2];

if (!BeginSoundServer() ) exit(2);

int hContinuous = AUDinit("AUD/continuous_sounds.aud"); int hConditional = AUDinit("AUD/conditional_sounds.aud"); if (hContinuous<0 || hConditional<0) exit(3);

while (not_done) {

interactions = check_control_devices(); status = update_simulation_states(interactions); update_environment_graphics(status); not_done = query_status(status);

/* update and transmit curent state to VSS */ /* values needed for continuous events*/ dataArray1 = get_relevant_data(status);

AUDupdate(hContinuous, "OneMessageGroup", arraySize1, dataArray1);

/* values needed for special events */ if (status == alarmcondition) dataArray2 = get_special_data(status);

if (special) AUDupdate(hConditional, "AnotherMessageGroup", arraySize2, dataArray2);

not_done = query_status(status); }

AUDterminate(hContinuous); AUDterminate(hConditional); EndSoundServer();

The dataArray is an array of floats declared in the client application. The application developer identifies values from the application dynamics or interface state, to be used with specific Message Groups declared in .aud files. The client app places these values in the arrays used in AUDupdate(). Further examples are provided in the Using .aud files section of the VSS reference manual.

Connecting clients to .aud files

In order to create an .aud file you must know what the client application is doing and under what conditions it calls various message groups. You also need to know the order of variables stored in the data arrays passed to message groups. There is no way to automatically tell the .aud file where the variables have come from, just as there is no way to automatically tell the client how the outgoing values are going to be used by the .aud file. When creating a message group you explicitly write the .aud file and the client data arrays to agree with one another. Of course, once the structure and contents of the data arrays is defined in the client, variations of usage can be created in alternative versions of an .aud file, without changing (recompiling) the client.
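
One way to keep the client and the .aud file in agreement is to document the array layout once in the client, for instance with named indices. This is only a suggested convention, not part of VSS; the message group name "FlyBy", the field layout, and the client variables here are hypothetical:

	/* Positions in the data array sent to the "FlyBy" message group.
	   The .aud file's references into this array (see section 8) must use the same positions. */
	enum { iAmp = 0, iPitch = 1, iDistance = 2, cFlyByData = 3 };

	float flyByData[cFlyByData];
	flyByData[iAmp]      = currentAmplitude;	/* hypothetical client state */
	flyByData[iPitch]    = currentPitch;
	flyByData[iDistance] = distanceToListener;
	AUDupdate(hContinuous, "FlyBy", cFlyByData, flyByData);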

.aud file command syntax

.aud file commands are instructions for transmitting Actor messages within VSS. The .aud file stores a configuration of commands for initializing Actors and initializing the messages that Actors will receive during run-time. Commands fall into one of two forms:

	handleName = commandName, arg1, arg2,...;
or

	commandName, arg1, arg2,...;
The former form is for commands which return handles to be referenced later by other commands. arg1 is usually, but not always, a handle allocated by a previous message. Commas, spaces, or both may be used as delimiters between commands and arguments. Both /* C-style */ and // C++-style comments work in .aud files.

The simplest .aud files have no message groups

In the simplest case an .aud file need not have any message groups. All of the control information can be hard-coded in the .aud file itself; when the .aud file is loaded by AUDinit() it executes all of its instructions. Without message groups it cannot be controlled from the client after it is initialized: all of its contents are sent into VSS, which executes them until the messages self-terminate or VSS is explicitly reset.

We provide a simple client for playing this sort of .aud file. It is called audTest and it only calls AUDinit(). AUDupdate() is not called; therefore no values are passed interactively to VSS. See the audtest tutorial for introductory examples of .aud file syntax, before studying message groups.

Designating time in .aud files

Usually the client application controls the temporal structure and dynamics of the sound environment. VSS provides a number of time-based functions to elaborate and differentiate the temporal consequences of client-based events. These include EnvelopeActor, LaterActor, and SequenceActor. In .aud files the sleep command may be used to create time delays.

8. Message groups

A Message Group is a datatype declared in an .aud file to manage a set of Actor messages. A Message Group supports the specification of any number of Actor messages listed in a particular order. The list of messages in a Message Group is executed as a group when the Message Group's name is passed to an AUDupdate() call in the client application. The syntax is:

	void AUDupdate
	    (int handle, char *MessageGroupName, int Numfloats, float *dataArray);

A valid name corresponds to a Message Group declared in the .aud file.

The array of floats (dataArray) is updated elsewhere by the client application. Then dataArray is passed to AUDupdate(), which passes the updated array to the Actor messages in the message group. This is the most important feature of the message group: the client application does not have to define the usage of the data in dataArray. The assignment of values from dataArray to arguments of the Actor messages is performed in Message Groups in the .aud file.

Because the .aud file defines the usage of the values in dataArray, the client application may send arrays of arbitrary size to AUDupdate(). Values sent to VSS in dataArray that are not needed may simply be ignored by the Message Group in the .aud file.

Structuring client code for AUDupdate()

The type of sound determines how often VSS needs to receive control data from the client. Updates to the data array and corresponding calls to AUDupdate() should be positioned in the client code according to the frequency of the particular state changes involved. There are two classes of AUDupdate() usage: conditional calls, and continuous calls. Conditional calls only occur when special conditions are present, such as the collision of two objects or the push of a wand button. These conditions are tested in the client app. Continuous calls are required for updating sounds that may need to change regularly according to continuously changing system states. The localization of sounds according to moving objects or a moving listener is an example of a continuous sound modification situation.

Some message groups may not require control data from the client application. These messages are called with a null array of size 0:

	AUDupdate(file_handle, "message_with_no_variables", 0, NULL);

Using message groups

A Message group is defined using two commands:

	messageHandle = Create MessageGroup;
	AddMessage messageHandle, commandName, args,...;
The .aud file control flow uses five steps:

  1. Load DSOs for required Actors
  2. Create and name the Actors which will be manipulated through the messages
  3. Create and name message groups
  4. Send setup messages (like BeginSound or SetFreq) to those Actors
  5. Add a list of messages to each message group for the Actor to respond to

The AddMessage command completes the link between the client and the server. The array values passed through the call to AUDupdate() are "picked out" here and applied to the Actor referenced by ActorHandle. In this manner the client can call any Actor message on the list of available commands: "commandName" can be any Actor command.

In the following example .aud file, a SampleActor is created and its handle is used as an argument in messages that set amplitude, set the playback sample rate, and play a soundfile whenever the "Play" message group is invoked by AUDupdate().

	SampHandle = Create SampleActor;
	Play = Create MessageGroup;
	AddMessage Play SetAmp SampHandle *0;
	AddMessage Play SetPlaybackRate SampHandle *1;
	AddMessage Play PlaySample SampHandle "mysoundfile";

Assigning data from the client to Actor messages

In the above AddMessage syntax, note the use of "*" before the arguments and the corresponding use of integers as arguments in the Actor message. The *'s indicate that 0 and 1 are NOT hard-coded floating-point values for those arguments; instead, the 0 and 1 are indices into the 0th and 1st positions of the data array which the client passes to AUDupdate(). With this syntax the client application can pass a new amplitude value and playback sample rate each time the soundfile is played.
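
On the client side, the matching call might look like the following sketch. Here hPlayAud is the hypothetical handle returned by AUDinit() for this .aud file, and the two values are arbitrary:

	float playData[2];
	playData[0] = 0.5;	/* picked out by *0: the amplitude for SetAmp */
	playData[1] = 1.0;	/* picked out by *1: the playback rate for SetPlaybackRate */
	AUDupdate(hPlayAud, "Play", 2, playData);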


9. CAVE Apps

In what order do I put these calls in my CAVE app?

It's best, before firing up the whole CAVE, to verify that the runtime environment is intact (in particular, that VSS is running and your .aud files are okay). Therefore, call BeginSoundServer() and AUDinit() first, before calling any CAVExxx() functions.

Also, note that CAVEExit() actually exits the program, so call AUDterminate() and EndSoundServer() before CAVEExit(), or just call CAVEHalt() instead of CAVEExit().

In other words:

  1. BeginSoundServer()
  2. AUDinit()
  3. CAVEConfigure(), CAVEMalloc(), CAVEInit(), CAVEDisplay()
  4. main loop: AUDupdate()'s.
  5. AUDterminate()
  6. EndSoundServer()
  7. CAVEExit()

A few more hints:

Put the libraries -lsnd -lcave -lsphere near the beginning, not near the end, of the link line in your makefile. Certainly put them before low-level things like -lm -lgl. For some reason, you'll get link errors on the SGI if you put libraries "out of order."

Use sginap() and gr_osview to balance cpu load. For a given audio result, you may need to increase the duration of sginap()'s in your client code. Watch gr_osview's CPU usage: as it gets near 100%, you'll hear interruptions in the audio; you may also identify occasions in running your app which demand extra resources. If you have several SGI's, you can tell your app to be a client of several sound servers at once, distribute the sound computation among the machines, and use an analog mixer to combine the machines' audio outputs.
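
For example, here is a sketch of yielding the processor once per pass through the main loop; the right number of ticks depends on your frame rate and on how much audio you are computing:

	#include <unistd.h>	/* declares sginap() on IRIX */

	while (not_done) {
		/* ... graphics, simulation, AUDupdate() calls ... */
		sginap(1);	/* give up the CPU for about one clock tick (roughly 10 ms) */
	}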

You may prefer to call BeginSoundServer() after CAVEConfigure(), not before, in order to automatically pick up the environment variable $SOUNDSERVER which the cave configuration files can specify for you. Note carefully how this mechanism works, though.

10. Tutorials

See this page for further tutorial examples.


11. Sound file libraries

Note: VSS requires aiff files, 16-bit mono or stereo. On the SGI other file formats (AIFC, WAV, AU) may be converted using the soundfiler utility or the "sfconvert" command.

Caution: most web sounds are very low quality due to reduced sampling rates and bit depths. Conversion does not restore quality once a sound has been encoded at a lower resolution.

Sound file Library (at NCSA only): /afs/ncsa/projects/blanca/public/sounds
Some of these have not been converted into aiff format.

Soundfile repositories

www.synthzone.com/sampling.htm is a huge list of sites containing samples and other stuff. If that's overkill, try the following:
http://netvet.wustl.edu/sounds.htm animal sounds
http://www.ts.umu.se/~larserik/drpage/ 44k 16bit mono drums
http://hyperreal.com/music/machines/samples.html drum & bass sounds
ftp://207.120.130.2/WAV_Files/ Peavey. lotsa stuff. (but weird sampling rates)
http://soundamerica.com/ Huge. Go here first.
http://www.sound-dimensions.com/sbytes.html amateur miscellany
http://www.vionline.com/sound.html "classic" FX
http://wcarchive.cdrom.com/pub/demos/music/samples/ tons of music

Finally, www.xoom.com is enormous, if you don't mind "registering".

Finally finally, search the whole web if you can guess the filename someone might have used. Go to Altavista and enter queries like

Click on Altavista's help button to figure out what these mean. Netscape tip: shift-click on the link to force Netscape to save the file instead of merely trying to play it.