
Audio Object Properties

Client Signal Flow

The chart below gives you an idea of the client-side signal flow and gain structure inside Engage. All AudioObject parameters are local to the respective instance; only Space parameters are global.

Controlling the gain

You can control the overall gain of an audio object by using one of the respective setters, and read the current value back with the matching getters:
audioObject.setGainDb(-6.0);
audioObject.setGainLinear(0.5);
const gainLinear = audioObject.getGainLinear();
const gainDb = audioObject.getGainDb();
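The two scales are related by gainDb = 20 · log10(gainLinear). A minimal, SDK-independent sketch of the conversion (the helper names below are our own, not part of the Engage API):

```javascript
// dB <-> linear amplitude conversion: linear = 10^(dB / 20)
const dbToLinear = (db) => Math.pow(10, db / 20);
const linearToDb = (lin) => 20 * Math.log10(lin);

console.log(dbToLinear(-6)); // ≈ 0.501 (-6 dB roughly halves the amplitude)
console.log(linearToDb(2));  // ≈ 6.02 (doubling the amplitude adds ~6 dB)
```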

Setting the reverb send level

By default, sources don't contribute to the reverb effect. But you can change that by setting their reverb send level:
// setting the reverb send level in dB
audioObject.setReverbSendDb(-12.0);
// or on a linear scale
audioObject.setReverbSendLinear(0.25);
Sometimes it is desirable to make the reverb send level dependent on the distance between the source and the listener. This can be achieved by assigning a DistanceModel to the reverb send (see Distance Model).
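As an illustration of how a distance-dependent level could behave, here is an SDK-independent sketch of a standard inverse-distance rolloff. The function name and parameters are our own; the actual DistanceModel types and their parameters are described in the Distance Model section:

```javascript
// inverse-distance rolloff: gain = ref / (ref + rolloff * (d - ref)) for d >= ref
function inverseDistanceGain(distance, refDistance = 1.0, rolloff = 1.0) {
  const d = Math.max(distance, refDistance); // no boost below the reference distance
  return refDistance / (refDistance + rolloff * (d - refDistance));
}

console.log(inverseDistanceGain(1)); // 1 (no attenuation at the reference distance)
console.log(inverseDistanceGain(3)); // ≈ 0.333
```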


Occlusion

If you want to create the feeling that a bigger object sits between an AudioObject and the listener, you can increase the occlusion value of a source. This dampens the audio signal and reduces its volume, simulating occlusion/obstruction.
The occlusion value has to be between 0 and 1, with 0.0 meaning no occlusion and 1.0 full occlusion.
audioObject.setOcclusion(0.5);
const objectOcclusion = audioObject.getOcclusion();
const participantOcclusion = participant.getOcclusion();


Directivity

Real audio sources are not omnidirectional, but have a certain directivity. That means they sound different depending on whether you are standing in front of or behind them, for example. In Engage, the directivity of an audio object is controlled by a set of parameters: the innerAngle defines the angular spread around the frontal direction within which sound is emitted unattenuated. The outerAngle defines the angular spread around the frontal direction outside of which sound is emitted attenuated, with the outerGain and outerLowPass values defining the gain and the amount of low-pass filtering applied beyond the outer angle. For directions between the inner and the outer angle, attenuation values are interpolated.
audioObject.setDirectivity(Math.PI / 2, 3 * Math.PI / 2, 0.1, 0.7);
const objectDirectivity = audioObject.getDirectivity();
participant.setDirectivity(Math.PI / 2, 3 * Math.PI / 2, 0.1, 0.7);
const participantDirectivity = participant.getDirectivity();
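To make the interplay of these parameters concrete, here is an SDK-independent sketch of the attenuation described above, assuming a simple linear interpolation between the inner and the outer angle (Engage's actual interpolation curve may differ, and the low-pass part is omitted):

```javascript
// gain for a direction `angle` radians off the source's frontal axis
function directivityGain(angle, innerAngle, outerAngle, outerGain) {
  const halfInner = innerAngle / 2;
  const halfOuter = outerAngle / 2;
  const a = Math.abs(angle);
  if (a <= halfInner) return 1.0;       // inside the inner angle: unattenuated
  if (a >= halfOuter) return outerGain; // outside the outer angle: full attenuation
  const t = (a - halfInner) / (halfOuter - halfInner);
  return 1.0 + t * (outerGain - 1.0);   // interpolate in between
}

console.log(directivityGain(0, Math.PI / 2, 3 * Math.PI / 2, 0.1));       // 1 (front)
console.log(directivityGain(Math.PI, Math.PI / 2, 3 * Math.PI / 2, 0.1)); // 0.1 (behind)
```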

Position Mode

Audio objects are typically rendered as heard from the listener's position. This mode is called scene-locked (PositionMode.scenelocked), and it is the default. However, sometimes it is desirable to render audio objects in a head-locked fashion, i.e. the rendered direction and distance of an audio object are independent of the listener's position and orientation. This is especially useful for audio objects that are attached to the listener, such as a head-locked GUI or a head-locked voice chat. To enable head-locked rendering, set the position mode of an audio object to PositionMode.headlocked:
audioObject.setPositionMode(PositionMode.headlocked);
// the rendered direction is always set to 'front'
audioObject.setPosition(0, 0, -1);

Rendering Mode

By default, audio objects are rendered spatially, i.e. they emit sound from a specific point in space. However, sometimes it is desirable to pipe audio data directly to the output, without any spatialization. The rendering mode can be set via the respective setters in AudioObject (AudioObject.setRenderingMode) and RemoteParticipant (RemoteParticipant.setRenderingMode).
If the rendering mode is set to DirectThrough, the audio object will not be spatialized and the position, direct path gain, reverb send gain, and the occlusion value will be ignored. Only the overall gain will be applied to the audio data.
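Conceptually, DirectThrough rendering reduces to a per-sample multiplication with the overall linear gain. A minimal sketch of that behavior (not the SDK's actual processing code):

```javascript
// DirectThrough: no spatialization, only the overall (linear) gain is applied
function renderDirectThrough(samples, gainLinear) {
  return samples.map((s) => s * gainLinear);
}

console.log(renderDirectThrough([0.5, -0.25, 1.0], 0.5)); // [0.25, -0.125, 0.5]
```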


© 2024 atmoky, GmbH.
All rights reserved.
