IAudioFrameObserver
The audio frame observer.
You can call registerAudioFrameObserver to register or unregister the IAudioFrameObserver audio frame observer.
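A minimal registration sketch is shown below. It assumes an initialized RtcEngine instance named engine; only one callback body is shown, and the remaining IAudioFrameObserver methods are elided for brevity.
import java.nio.ByteBuffer;

engine.registerAudioFrameObserver(new IAudioFrameObserver() {
    @Override
    public boolean onRecordAudioFrame(String channelId, int type, int samplesPerChannel,
            int bytesPerSample, int channels, int samplesPerSec,
            ByteBuffer buffer, long renderTimeMs, int avsync_type) {
        // Inspect or modify the captured PCM here.
        return false; // The return value has no practical meaning.
    }
    // ...implement the remaining IAudioFrameObserver callbacks here...
});
// Unregistering goes through the same method (typically by passing null).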
onEarMonitoringAudioFrame
Gets the in-ear monitoring audio frame.
public abstract boolean onEarMonitoringAudioFrame(int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type);
You can set the audio data format for this callback in either of two ways:
- Method 1: Call setEarMonitoringAudioFrameParameters to set the audio data format and registerAudioFrameObserver to register the audio frame observer object. The SDK calculates the sampling interval from the parameters set in these methods and triggers the onEarMonitoringAudioFrame callback at that interval.
- Method 2: After calling registerAudioFrameObserver to register the audio frame observer object, set the positions to observe in the return value of the getObservedAudioFramePosition callback and the audio data format in the return value of the getEarMonitoringAudioParams callback. The SDK calculates the sampling interval from that return value and triggers the onEarMonitoringAudioFrame callback at that interval.
- Method 1 takes precedence over method 2: if the audio data format is set with method 1, the settings made through method 2 are ignored. A sketch of method 1 follows this list.
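A minimal sketch of method 1, assuming an initialized RtcEngine named engine and an already-constructed observer; the parameter order of setEarMonitoringAudioFrameParameters and the Constants field are assumptions based on the 4.x Java SDK:
engine.setEarMonitoringAudioFrameParameters(
        16000,  // sample rate (Hz)
        1,      // channels: mono
        Constants.RAW_AUDIO_FRAME_OP_MODE_READ_ONLY, // assumed mode constant
        160);   // samples per call: 160 / 16000 = 0.01 s between callbacks
engine.registerAudioFrameObserver(observer);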
Parameters
- type
- The audio frame type.
- samplesPerChannel
- The number of samples per channel in the audio frame.
- bytesPerSample
- The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
- channels
- The number of channels.
- 1: Mono.
- 2: Stereo. If the channel uses stereo, the data is interleaved.
- samplesPerSec
- The sample rate (Hz) of the audio frame.
- buffer
- The audio buffer. The buffer size = samplesPerChannel x channels x bytesPerSample.
- renderTimeMs
- The timestamp (ms) of the audio frame. You can use this timestamp to synchronize audio and video frames in audio or video scenarios, including scenarios that use external video sources.
- avsync_type
- Reserved for future use.
Returns
The return value has no practical meaning.
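The buffer size relation above can be used to copy the frame out of the ByteBuffer. A minimal sketch, written inside the callback body:
// Inside onEarMonitoringAudioFrame:
// bufferSize = samplesPerChannel * channels * bytesPerSample
int size = samplesPerChannel * channels * bytesPerSample;
byte[] pcm = new byte[size];
buffer.rewind();
buffer.get(pcm); // for stereo, samples are interleaved: L0 R0 L1 R1 ...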
onMixedAudioFrame
Retrieves the mixed captured and playback audio frame.
public abstract boolean onMixedAudioFrame(int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type);
You can set the audio data format for this callback in either of two ways:
- Method 1: Call setMixedAudioFrameParameters to set the audio data format and registerAudioFrameObserver to register the audio frame observer object. The SDK calculates the sampling interval from the parameters set in these methods and triggers the onMixedAudioFrame callback at that interval.
- Method 2: After calling registerAudioFrameObserver to register the audio frame observer object, set the positions to observe in the return value of the getObservedAudioFramePosition callback and the audio data format in the return value of the getMixedAudioParams callback. The SDK calculates the sampling interval from that return value and triggers the onMixedAudioFrame callback at that interval.
- Method 1 takes precedence over method 2: if the audio data format is set with method 1, the settings made through method 2 are ignored.
Parameters
- type
- The audio frame type.
- samplesPerChannel
- The number of samples per channel in the audio frame.
- bytesPerSample
- The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
- channels
- The number of channels.
- 1: Mono.
- 2: Stereo. If the channel uses stereo, the data is interleaved.
- samplesPerSec
- The sample rate (Hz) of the audio frame.
- buffer
- The audio buffer. The buffer size = samplesPerChannel x channels x bytesPerSample.
- renderTimeMs
- The timestamp (ms) of the audio frame. You can use this timestamp to synchronize audio and video frames in audio or video scenarios, including scenarios that use external video sources.
- avsync_type
- Reserved for future use.
Returns
The return value has no practical meaning.
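As a worked example of the parameters above, the duration of one frame follows directly from samplesPerChannel and samplesPerSec:
// Duration of one mixed frame:
// e.g. 480 samples per channel at 48000 Hz -> 480 / 48000 = 0.01 s = 10 ms.
double frameMs = 1000.0 * samplesPerChannel / samplesPerSec;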
onPlaybackAudioFrame
Gets the raw audio frame for playback.
public abstract boolean onPlaybackAudioFrame(int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type);
You can set the audio data format for this callback in either of two ways:
- Method 1: Call setPlaybackAudioFrameParameters to set the audio data format and registerAudioFrameObserver to register the audio frame observer object. The SDK calculates the sampling interval from the parameters set in these methods and triggers the onPlaybackAudioFrame callback at that interval.
- Method 2: After calling registerAudioFrameObserver to register the audio frame observer object, set the positions to observe in the return value of the getObservedAudioFramePosition callback and the audio data format in the return value of the getPlaybackAudioParams callback. The SDK calculates the sampling interval from that return value and triggers the onPlaybackAudioFrame callback at that interval.
- Method 1 takes precedence over method 2: if the audio data format is set with method 1, the settings made through method 2 are ignored.
Parameters
- type
- The audio frame type.
- samplesPerChannel
- The number of samples per channel in the audio frame.
- bytesPerSample
- The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
- channels
- The number of channels.
- 1: Mono.
- 2: Stereo. If the channel uses stereo, the data is interleaved.
- samplesPerSec
- The sample rate (Hz) of the audio frame.
- buffer
- The audio buffer. The buffer size = samplesPerChannel x channels x bytesPerSample.
- renderTimeMs
- The timestamp (ms) of the audio frame. You can use this timestamp to synchronize audio and video frames in audio or video scenarios, including scenarios that use external video sources.
- avsync_type
- Reserved for future use.
Returns
The return value has no practical meaning.
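A sketch of measuring the playback level inside this callback; it assumes 16-bit little-endian PCM (bytesPerSample == 2), which is the common case but should be checked against the actual parameters.
import java.nio.ByteOrder;
import java.nio.ShortBuffer;

// Inside onPlaybackAudioFrame, assuming 16-bit little-endian PCM:
ShortBuffer samples = buffer.order(ByteOrder.LITTLE_ENDIAN).asShortBuffer();
int n = samplesPerChannel * channels;
long sumSq = 0;
for (int i = 0; i < n; i++) {
    short s = samples.get(i);
    sumSq += (long) s * s;
}
double rms = Math.sqrt((double) sumSq / n); // 0 .. 32767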
onPlaybackAudioFrameBeforeMixing
Retrieves the audio frame of each subscribed remote user before mixing.
public abstract boolean onPlaybackAudioFrameBeforeMixing(int userId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type);
Parameters
- userId
- The user ID of the subscribed remote user.
- type
- The audio frame type.
- samplesPerChannel
- The number of samples per channel in the audio frame.
- bytesPerSample
- The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
- channels
- The number of channels.
- 1: Mono.
- 2: Stereo. If the channel uses stereo, the data is interleaved.
- samplesPerSec
- The sample rate (Hz) of the audio frame.
- buffer
- The audio buffer. The buffer size = samplesPerChannel x channels x bytesPerSample.
- renderTimeMs
- The timestamp (ms) of the audio frame. You can use this timestamp to synchronize audio and video frames in audio or video scenarios, including scenarios that use external video sources.
- avsync_type
- Reserved for future use.
Returns
The return value has no practical meaning.
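Because this callback fires once per subscribed remote user, any state you keep should be keyed by userId. A minimal sketch of such an observer fragment (the map and its use are illustrative):
import java.nio.ByteBuffer;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative per-user bookkeeping, keyed by userId.
private final Map<Integer, Long> lastRenderTimeByUser = new ConcurrentHashMap<>();

@Override
public boolean onPlaybackAudioFrameBeforeMixing(int userId, int type, int samplesPerChannel,
        int bytesPerSample, int channels, int samplesPerSec,
        ByteBuffer buffer, long renderTimeMs, int avsync_type) {
    lastRenderTimeByUser.put(userId, renderTimeMs); // track per-user activity
    return false;
}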
onRecordAudioFrame
Gets the captured audio frame.
public abstract boolean onRecordAudioFrame(String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type);
You can set the audio data format for this callback in either of two ways:
- Method 1: Call setRecordingAudioFrameParameters to set the audio data format and registerAudioFrameObserver to register the audio frame observer object. The SDK calculates the sampling interval from the parameters set in these methods and triggers the onRecordAudioFrame callback at that interval.
- Method 2: After calling registerAudioFrameObserver to register the audio frame observer object, set the positions to observe in the return value of the getObservedAudioFramePosition callback and the audio data format in the return value of the getRecordAudioParams callback. The SDK calculates the sampling interval from that return value and triggers the onRecordAudioFrame callback at that interval.
- Method 1 takes precedence over method 2: if the audio data format is set with method 1, the settings made through method 2 are ignored.
Parameters
- channelId
- The channel ID.
- type
- The audio frame type.
- samplesPerChannel
- The number of samples per channel in the audio frame.
- bytesPerSample
- The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
- channels
- The number of channels.
- 1: Mono.
- 2: Stereo. If the channel uses stereo, the data is interleaved.
- samplesPerSec
- The sample rate (Hz) of the audio frame.
- buffer
- The audio buffer. The buffer size = samplesPerChannel x channels x bytesPerSample.
- renderTimeMs
- The timestamp (ms) of the audio frame. You can use this timestamp to synchronize audio and video frames in audio or video scenarios, including scenarios that use external video sources.
- avsync_type
- Reserved for future use.
Returns
The return value has no practical meaning.
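A sketch of persisting the captured frames for offline inspection; the output path is illustrative, and in production the stream should be opened once and the write moved off the callback thread.
import java.io.FileOutputStream;
import java.io.IOException;

// Inside onRecordAudioFrame: append the raw PCM to a per-channel file.
try (FileOutputStream out =
        new FileOutputStream("/sdcard/capture_" + channelId + ".pcm", true)) {
    byte[] data = new byte[buffer.remaining()];
    buffer.get(data);
    out.write(data);
} catch (IOException e) {
    // log and continue; never block the audio callback for long
}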
getRecordAudioParams
Sets the audio format for the onRecordAudioFrame callback.
public abstract AudioParams getRecordAudioParams();
Register this callback by implementing it in the observer you pass to the registerAudioFrameObserver method. After the audio observer is registered successfully, the SDK triggers this callback, and you can set the audio data format in its return value.
The SDK calculates the sampling interval from the AudioParams you set in the return value and triggers the onRecordAudioFrame callback at that interval. The calculation formula is: Sample interval (sec) = samplesPerCall / (sampleRate × channel).
Ensure that the sample interval ≥ 0.01 (s).
Returns
The captured audio data. See AudioParams.
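A minimal sketch of an implementation; the AudioParams constructor order (sample rate, channels, mode, samples per call) and the Constants field are assumptions based on the 4.x Java SDK. The chosen values satisfy the interval rule: 160 / (16000 × 1) = 0.01 s.
@Override
public AudioParams getRecordAudioParams() {
    // 16 kHz mono, read-only, 160 samples per callback -> 10 ms interval.
    return new AudioParams(16000, 1,
            Constants.RAW_AUDIO_FRAME_OP_MODE_READ_ONLY, 160);
}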
getMixedAudioParams
Sets the audio format for the onMixedAudioFrame callback.
public abstract AudioParams getMixedAudioParams();
Register this callback by implementing it in the observer you pass to the registerAudioFrameObserver method. After the audio observer is registered successfully, the SDK triggers this callback, and you can set the audio data format in its return value.
The SDK calculates the sampling interval from the AudioParams you set in the return value and triggers the onMixedAudioFrame callback at that interval. The calculation formula is: Sample interval (sec) = samplesPerCall / (sampleRate × channel).
Ensure that the sample interval ≥ 0.01 (s).
Returns
The mixed captured and playback audio data. See AudioParams.
getPlaybackAudioParams
Sets the audio format for the onPlaybackAudioFrame callback.
public abstract AudioParams getPlaybackAudioParams();
Register this callback by implementing it in the observer you pass to the registerAudioFrameObserver method. After the audio observer is registered successfully, the SDK triggers this callback, and you can set the audio data format in its return value.
The SDK calculates the sampling interval from the AudioParams you set in the return value and triggers the onPlaybackAudioFrame callback at that interval. The calculation formula is: Sample interval (sec) = samplesPerCall / (sampleRate × channel).
Ensure that the sample interval ≥ 0.01 (s).
Returns
The audio data for playback. See AudioParams.
getEarMonitoringAudioParams
Sets the audio format for the onEarMonitoringAudioFrame callback.
public abstract AudioParams getEarMonitoringAudioParams();
- Since
- v4.0.1
Register this callback by implementing it in the observer you pass to the registerAudioFrameObserver method. After the audio observer is registered successfully, the SDK triggers this callback, and you can set the audio data format in its return value.
The SDK calculates the sampling interval from the AudioParams you set in the return value and triggers the onEarMonitoringAudioFrame callback at that interval. The calculation formula is: Sample interval (sec) = samplesPerCall / (sampleRate × channel).
Ensure that the sample interval ≥ 0.01 (s).
Returns
The audio data of in-ear monitoring. See AudioParams.
getObservedAudioFramePosition
Sets the frame position for the audio observer.
public abstract int getObservedAudioFramePosition();
You can set one or more positions you need to observe by modifying the return value of getObservedAudioFramePosition based on your scenario requirements:
To observe multiple positions, combine them with the bitwise OR operator (|); see the sketch after the list below. To conserve system resources, reduce the observed frame positions to those you actually need.
Returns
- POSITION_PLAYBACK(0x0001): This position can observe the mixed playback audio of all remote users, corresponding to the onPlaybackAudioFrame callback.
- POSITION_RECORD(0x0002): This position can observe the captured audio of the local user, corresponding to the onRecordAudioFrame callback.
- POSITION_MIXED(0x0004): This position can observe the mixed audio of the local user and all remote users, corresponding to the onMixedAudioFrame callback.
- POSITION_BEFORE_MIXING(0x0008): This position can observe the audio of a single remote user before mixing, corresponding to the onPlaybackAudioFrameBeforeMixing callback.
- POSITION_EAR_MONITORING(0x0010): This position can observe the in-ear monitoring audio of the local user, corresponding to the onEarMonitoringAudioFrame callback.
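A minimal sketch combining two positions with the bitwise OR; the hex values are the ones listed above, written as literals in case the named constants differ by SDK version.
@Override
public int getObservedAudioFramePosition() {
    // POSITION_PLAYBACK (0x0001) | POSITION_RECORD (0x0002)
    return 0x0001 | 0x0002;
}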