
Audio Object Properties

Client Signal Flow

The chart below gives you an idea of the client-side signal flow and gain structure inside Engage. All AudioObject parameters are local to the respective instance; only Space parameters are global.


Controlling the gain

You can control the overall gain of an audio object by using one of the respective setters:
// set and get the gain in linear scale
audioObject.setGainLinear(0.5);
const gainLinear = audioObject.getGainLinear();
 
// set and get the gain in decibels
audioObject.setGainDb(-3.0);
const gainDb = audioObject.getGainDb();

Setting the reverb send level

By default, sources don't contribute to the reverb effect. You can change that by setting their reverb send level:
// setting the reverb send level in dB
audioObject.setReverbSendDecibel(-6.0);
participant.setReverbSendDecibel(-6.0);
 
// or in linear scale
audioObject.setReverbSendLinear(0.5);
Sometimes it is desirable to make the reverb send level dependent on the distance between the source and the listener. This can be achieved by assigning a DistanceModel to the reverb send (see Distance Model).
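As a rough, hypothetical sketch (the setter name and the way a DistanceModel is constructed are assumptions here; the Distance Model page describes the actual API), the idea is to create a distance model and attach it to the reverb send:
// Hypothetical sketch: setReverbSendDistanceModel and the DistanceModel
// construction are assumptions, see the Distance Model documentation.
const reverbDistanceModel = new DistanceModel();
audioObject.setReverbSendDistanceModel(reverbDistanceModel);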

Occlusion

If you want to create the impression that a larger object sits between an AudioObject and the listener, you can increase the occlusion value of a source. This dampens the audio signal and reduces its volume, simulating occlusion/obstruction.
The occlusion value has to be between 0 and 1, where 0.0 means no occlusion and 1.0 means full occlusion.
// set and get the occlusion of an audio object
audioObject.setOcclusion(0.5);
const objectOcclusion = audioObject.getOcclusion();
 
// set and get the occlusion of a remote participant
participant.setOcclusion(0.3);
const participantOcclusion = participant.getOcclusion();

Position Mode

Audio objects are typically rendered as heard from the listener's position. This mode is called scene-locked (PositionMode.scenelocked) and is the default. However, sometimes it is desirable to render audio objects in a head-locked fashion, i.e. the rendered direction and distance of an audio object are independent of the listener's position and orientation. This is especially useful for audio objects that are attached to the listener, such as a head-locked GUI or a head-locked voice chat. To enable head-locked rendering, set the position mode of an audio object to PositionMode.headlocked:
audioObject.setPositionMode(PositionMode.headlocked);
 
// the rendered direction is always set to 'front'
audioObject.setPosition(0, 0, -1);

Rendering Mode

By default, audio objects are rendered spatially, i.e. they emit sound from a specific point in space. However, sometimes it is desirable to pipe audio data directly to the output, without any spatialization. The rendering mode can be set via the respective setters on AudioObject (AudioObject.setRenderingMode) and RemoteParticipant (RemoteParticipant.setRenderingMode):
audioObject.setRenderingMode(RenderingMode.DirectThrough);
If the rendering mode is set to DirectThrough, the audio object will not be spatialized and the position, direct path gain, reverb send gain, and the occlusion value will be ignored. Only the overall gain will be applied to the audio data.
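For instance, a non-diegetic element such as background music can be routed straight to the output while still honoring its overall gain. A minimal sketch using only the setters shown in this section:
// bypass spatialization for this object
audioObject.setRenderingMode(RenderingMode.DirectThrough);
 
// the overall gain is still applied
audioObject.setGainDb(-6.0);
 
// position, reverb send, and occlusion have no effect in this mode
audioObject.setPosition(0, 0, -1); // ignored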
