IAudioFrameObserverBase

The audio frame observer.

You can call RegisterAudioFrameObserver to register or unregister the IAudioFrameObserverBase audio frame observer.

FAudioFrame

Raw audio data.

USTRUCT(BlueprintType)
struct FAudioFrame {

	GENERATED_BODY()
	UPROPERTY(VisibleAnywhere, BlueprintReadWrite, Category = "Agora|AudioFrame")
	EAUDIO_FRAME_TYPE type;
	UPROPERTY(VisibleAnywhere, BlueprintReadWrite, Category = "Agora|AudioFrame")
	int samplesPerChannel;
	UPROPERTY(VisibleAnywhere, BlueprintReadWrite, Category = "Agora|AudioFrame")
	EBYTES_PER_SAMPLE bytesPerSample;
	UPROPERTY(VisibleAnywhere, BlueprintReadWrite, Category = "Agora|AudioFrame")
	int channels;
	UPROPERTY(VisibleAnywhere, BlueprintReadWrite, Category = "Agora|AudioFrame")
	int samplesPerSec;
	UPROPERTY(VisibleAnywhere, BlueprintReadWrite, Category = "Agora|AudioFrame")
	int64 buffer;
	UPROPERTY(VisibleAnywhere, BlueprintReadWrite, Category = "Agora|AudioFrame")
	int64 renderTimeMs;
	UPROPERTY(VisibleAnywhere, BlueprintReadWrite, Category = "Agora|AudioFrame")
	int avsync_type;
};

Attributes

type

The type of the audio frame. See EAUDIO_FRAME_TYPE.

samplesPerChannel
The number of samples per channel in the audio frame.
bytesPerSample
The number of bytes per sample. For PCM, this parameter is generally set to 16 bits (2 bytes).
channels
The number of audio channels (for stereo, the data is interleaved).
  • 1: Mono.
  • 2: Stereo.
samplesPerSec
The sample rate (samples per second) of the audio data.
buffer

The data buffer of the audio frame. When the audio frame uses a stereo channel, the data buffer is interleaved.

The size of the data buffer is calculated as follows: buffer size = samplesPerChannel × channels × bytesPerSample.

renderTimeMs

The timestamp (ms) of the external audio frame.

You can use this timestamp to restore the order of captured audio frames, and to synchronize audio and video frames in video scenarios, including scenarios where external video sources are used.

avsync_type
Reserved for future use.

FOnMixedAudioFrame

Retrieves the mixed captured and playback audio frame.

DECLARE_DYNAMIC_MULTICAST_DELEGATE_TwoParams(FOnMixedAudioFrame, const FString, channelId, const FAudioFrame&, audioFrame);
To ensure that the data format of the mixed captured and playback audio frame meets your expectations, Agora recommends that you choose one of the following two ways to set the data format:
  • Method 1: After calling SetMixedAudioFrameParameters to set the audio data format and RegisterAudioFrameObserver to register the audio frame observer object, the SDK calculates the sampling interval according to the parameters set in the methods, and triggers the FOnMixedAudioFrame callback according to the sampling interval.
  • Method 2: After calling RegisterAudioFrameObserver to register the audio frame observer object, set the audio data format in the return value of the FGetObservedAudioFramePosition callback. The SDK then calculates the sampling interval according to the return value of the FGetMixedAudioParams callback, and triggers the FOnMixedAudioFrame callback according to the sampling interval.
Note:
  • The priority of method 1 is higher than that of method 2. If method 1 is used to set the audio data format, the setting of method 2 is invalid.

Parameters

audioFrame
The raw audio data. See FAudioFrame.
channelId
The channel ID.

Returns

Without practical meaning.

FOnPlaybackAudioFrame

Gets the raw audio frame for playback.

DECLARE_DYNAMIC_MULTICAST_DELEGATE_TwoParams(FOnPlaybackAudioFrame, const FString, channelId, const FAudioFrame&, audioFrame);
To ensure that the data format of audio frame for playback is as expected, Agora recommends that you choose one of the following two methods to set the audio data format:
  • Method 1: After calling SetPlaybackAudioFrameParameters to set the audio data format and RegisterAudioFrameObserver to register the audio frame observer object, the SDK calculates the sampling interval according to the parameters set in the methods, and triggers the FOnPlaybackAudioFrame callback according to the sampling interval.
  • Method 2: After calling RegisterAudioFrameObserver to register the audio frame observer object, set the audio data format in the return value of the FGetObservedAudioFramePosition callback. The SDK then calculates the sampling interval according to the return value of the FGetPlaybackAudioParams callback, and triggers the FOnPlaybackAudioFrame callback according to the sampling interval.
Note:
  • The priority of method 1 is higher than that of method 2. If method 1 is used to set the audio data format, the setting of method 2 is invalid.

Parameters

audioFrame
The raw audio data. See FAudioFrame.
channelId
The channel ID.

Returns

Without practical meaning.

FOnRecordAudioFrame

Gets the captured audio frame.

DECLARE_DYNAMIC_MULTICAST_DELEGATE_TwoParams(FOnRecordAudioFrame, const FString, channelId, const FAudioFrame&, audioFrame);
To ensure that the format of the captured audio frame is as expected, you can choose one of the following two methods to set the audio data format:
  • Method 1: After calling SetRecordingAudioFrameParameters to set the audio data format and RegisterAudioFrameObserver to register the audio frame observer object, the SDK calculates the sampling interval according to the parameters set in the methods, and triggers the FOnRecordAudioFrame callback according to the sampling interval.
  • Method 2: After calling RegisterAudioFrameObserver to register the audio frame observer object, set the audio data format in the return value of the FGetObservedAudioFramePosition callback. The SDK then calculates the sampling interval according to the return value of the FGetRecordAudioParams callback, and triggers the FOnRecordAudioFrame callback according to the sampling interval.
Note:
  • The priority of method 1 is higher than that of method 2. If method 1 is used to set the audio data format, the setting of method 2 is invalid.

Parameters

audioFrame
The raw audio data. See FAudioFrame.
channelId
The channel ID.

Returns

Without practical meaning.

FGetObservedAudioFramePosition

Sets the frame position for the audio observer.

DECLARE_DYNAMIC_MULTICAST_DELEGATE(FGetObservedAudioFramePosition);
After you successfully register the audio data observer, the SDK uses this callback at each audio frame processing node to determine whether to trigger the corresponding audio frame callbacks.

You can set one or more positions to observe by modifying the return value of FGetObservedAudioFramePosition based on your scenario requirements.

To observe multiple positions, combine them with the bitwise OR operator (|). To conserve system resources, you can reduce the number of frame positions that you want to observe.

Returns

Returns a bitmask that sets the observation position, with the following values:
  • AUDIO_FRAME_POSITION_PLAYBACK(0x0001): This position can observe the playback audio mixed by all remote users, corresponding to the FOnPlaybackAudioFrame callback.
  • AUDIO_FRAME_POSITION_RECORD(0x0002): This position can observe the collected local user's audio, corresponding to the FOnRecordAudioFrame callback.
  • AUDIO_FRAME_POSITION_MIXED(0x0004): This position can observe the playback audio mixed by the local user and all remote users, corresponding to the FOnMixedAudioFrame callback.
  • AUDIO_FRAME_POSITION_BEFORE_MIXING(0x0008): This position can observe the audio of a single remote user before mixing, corresponding to the FOnPlaybackAudioFrameBeforeMixing callback.
  • AUDIO_FRAME_POSITION_EAR_MONITORING(0x0010): This position can observe the in-ear monitoring audio of the local user, corresponding to the FOnEarMonitoringAudioFrame callback.

FGetRecordAudioParams

Sets the audio format for the FOnRecordAudioFrame callback.

DECLARE_DYNAMIC_MULTICAST_DELEGATE(FGetRecordAudioParams);

You need to register the callback when calling the RegisterAudioFrameObserver method. After you successfully register the audio observer, the SDK triggers this callback, and you can set the audio format in the return value of this callback.

Attention:

The SDK calculates the sampling interval according to the AudioParams you set in the return value of this callback, and triggers the FOnRecordAudioFrame callback at that interval. The calculation formula is: sample interval (s) = samplesPerCall/(sampleRate × channels).

Ensure that the sample interval is not less than 0.01 s.

Returns

The captured audio data. See AudioParams.

FGetMixedAudioParams

Sets the audio format for the FOnMixedAudioFrame callback.

DECLARE_DYNAMIC_MULTICAST_DELEGATE(FGetMixedAudioParams);

You need to register the callback when calling the RegisterAudioFrameObserver method. After you successfully register the audio observer, the SDK triggers this callback, and you can set the audio format in the return value of this callback.

Attention:

The SDK calculates the sampling interval according to the AudioParams you set in the return value of this callback, and triggers the FOnMixedAudioFrame callback at that interval. The calculation formula is: sample interval (s) = samplesPerCall/(sampleRate × channels).

Ensure that the sample interval is not less than 0.01 s.

Returns

The mixed captured and playback audio data. See AudioParams.

FGetPlaybackAudioParams

Sets the audio format for the FOnPlaybackAudioFrame callback.

DECLARE_DYNAMIC_MULTICAST_DELEGATE(FGetPlaybackAudioParams);

You need to register the callback when calling the RegisterAudioFrameObserver method. After you successfully register the audio observer, the SDK triggers this callback, and you can set the audio format in the return value of this callback.

Attention:

The SDK calculates the sampling interval according to the AudioParams you set in the return value of this callback, and triggers the FOnPlaybackAudioFrame callback at that interval. The calculation formula is: sample interval (s) = samplesPerCall/(sampleRate × channels).

Ensure that the sample interval is not less than 0.01 s.

Returns

The audio data for playback. See AudioParams.