IMediaEngine
The IMediaEngine class.
createCustomAudioTrack
Creates a custom audio track.
virtual rtc::track_id_t createCustomAudioTrack(rtc::AUDIO_TRACK_TYPE trackType, const rtc::AudioTrackConfig& config) = 0;
Details
- Since
- v4.2.0
To publish a custom audio source, follow these steps:
- Call this method to create a custom audio track and get the audio track ID.
- Call joinChannel [2/2] to join the channel. In ChannelMediaOptions, set publishCustomAudioTrackId to the audio track ID that you want to publish, and set publishCustomAudioTrack to true.
- Call pushAudioFrame and specify trackId as the audio track ID set in step 2. You can then publish the corresponding custom audio source in the channel.
Parameters
- trackType
- The type of the custom audio track. See AUDIO_TRACK_TYPE. Attention: If AUDIO_TRACK_DIRECT is specified for this parameter, you must set publishMicrophoneTrack to false in ChannelMediaOptions when calling joinChannel [2/2] to join the channel; otherwise, joining the channel fails and returns the error code -2.
- config
- The configuration of the custom audio track. See AudioTrackConfig.
Returns
- If the method call is successful, the audio track ID is returned as the unique identifier of the audio track.
- If the method call fails, 0xffffffff is returned.
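Sample code
The following is a minimal sketch (not official sample code) of the publishing flow above, assuming an initialized IRtcEngine and an IMediaEngine pointer obtained through queryInterface with AGORA_IID_MEDIA_ENGINE. The token and channel name are placeholders, and the AudioTrackConfig field used here (enableLocalPlayback) should be checked against your SDK header.

#include "IAgoraRtcEngine.h"
#include "IAgoraMediaEngine.h"

using namespace agora;
using namespace agora::rtc;

// rtcEngine: an initialized IRtcEngine; mediaEngine: typically obtained with
// agora::util::AutoPtr<media::IMediaEngine> and queryInterface(rtcEngine, AGORA_IID_MEDIA_ENGINE).
void publishCustomAudio(IRtcEngine* rtcEngine, media::IMediaEngine* mediaEngine) {
  // Step 1: create a mixable custom audio track and keep its ID.
  AudioTrackConfig trackConfig;
  trackConfig.enableLocalPlayback = false;  // assumed field: do not play this track back locally
  track_id_t customTrackId =
      mediaEngine->createCustomAudioTrack(AUDIO_TRACK_MIXABLE, trackConfig);

  // Step 2: join the channel and publish the custom track instead of the microphone.
  ChannelMediaOptions options;
  options.publishCustomAudioTrack = true;
  options.publishCustomAudioTrackId = static_cast<int>(customTrackId);
  options.publishMicrophoneTrack = false;
  options.clientRoleType = CLIENT_ROLE_BROADCASTER;
  rtcEngine->joinChannel("<your token>", "<your channel>", 0, options);

  // Step 3: feed PCM data with pushAudioFrame, passing customTrackId as trackId
  // (see the pushAudioFrame sketch below). When the track is no longer needed:
  // mediaEngine->destroyCustomAudioTrack(customTrackId);
}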
destroyCustomAudioTrack
Destroys the specified audio track.
virtual int destroyCustomAudioTrack(rtc::track_id_t trackId) = 0;
Details
- Since
- v4.2.0
Parameters
- trackId
- The custom audio track ID returned in createCustomAudioTrack.
Returns
- 0: Success.
- < 0: Failure.
pullAudioFrame
Pulls the remote audio data.
virtual int pullAudioFrame(IAudioFrameObserver::AudioFrame* frame) = 0;
After a successful call of this method, the app pulls the decoded and mixed audio data for playback.
Call timing
Call this method after joining a channel.
Before calling this method, call setExternalAudioSink and set enabled to true to enable and configure external audio rendering.
Restrictions
- After calling this method, the app proactively pulls the audio data from the SDK. Based on the audio parameters you set, the SDK adjusts its frame buffer to help the app handle latency, effectively avoiding audio playback jitter.
- If you register the onPlaybackAudioFrame callback instead, the SDK pushes the audio data to the app through that callback, and any delay in processing the audio frames may result in audio jitter.
This method is only used for retrieving audio data after remote mixing. If you need to get audio data from different audio processing stages such as capture and playback, you can register the corresponding callbacks by calling registerAudioFrameObserver.
Parameters
- frame
- Pointer to AudioFrame.
Returns
- 0: Success.
- < 0: Failure.
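Sample code
A hedged sketch of the pull mode: enable the external audio sink before joining, then pull 10 ms frames on the app's own playback thread. The AudioFrame field names and the FRAME_TYPE_PCM16 / TWO_BYTES_PER_SAMPLE enum values are assumptions taken from the media base header; verify them against your SDK version.

#include <cstdint>
#include <vector>
#include "IAgoraMediaEngine.h"

using namespace agora;

// Before joining the channel, enable the external sink (48 kHz stereo in this sketch):
//   mediaEngine->setExternalAudioSink(true, 48000, 2);

// Pulls one 10 ms frame of decoded, mixed remote audio into pcm.
// Call this repeatedly from your own playback thread after joining the channel.
bool pullOneFrame(media::IMediaEngine* mediaEngine, std::vector<int16_t>& pcm) {
  const int sampleRate = 48000;
  const int channels = 2;
  const int samplesPerChannel = sampleRate / 100;  // 10 ms of audio
  pcm.resize(static_cast<size_t>(samplesPerChannel) * channels);

  media::IAudioFrameObserver::AudioFrame frame;
  frame.type = media::IAudioFrameObserver::FRAME_TYPE_PCM16;
  frame.samplesPerChannel = samplesPerChannel;
  frame.bytesPerSample = rtc::TWO_BYTES_PER_SAMPLE;  // 16-bit PCM
  frame.channels = channels;
  frame.samplesPerSec = sampleRate;
  frame.buffer = pcm.data();

  // On success, pcm holds the mixed remote audio; hand it to your playback device.
  return mediaEngine->pullAudioFrame(&frame) == 0;
}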
pushAudioFrame
Pushes the external audio frame.
virtual int pushAudioFrame(IAudioFrameObserver::AudioFrame* frame, rtc::track_id_t trackId = 0) = 0;
Call this method to push external audio frames through the audio track.
Call timing
- Call createCustomAudioTrack to create a custom audio track and get the audio track ID.
- Call joinChannel [2/2] to join the channel. In ChannelMediaOptions, set publishCustomAudioTrackId to the audio track ID that you want to publish, and set publishCustomAudioTrack to true.
Restrictions
None.
Parameters
- frame
- The external audio frame. See AudioFrame.
- trackId
- The audio track ID. If you want to publish a custom external audio source, set this parameter to the ID of the corresponding custom audio track you want to publish.
Returns
- 0: Success.
- < 0: Failure.
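Sample code
A minimal sketch of pushing 10 ms of externally captured PCM on a custom audio track. The AudioFrame field names and enum values are assumptions based on the media base header; customTrackId is the ID returned by createCustomAudioTrack.

#include <cstdint>
#include "IAgoraMediaEngine.h"

using namespace agora;

// pcm16 points to 10 ms of interleaved 16-bit PCM captured by the app.
int pushPcm10ms(media::IMediaEngine* mediaEngine, int16_t* pcm16,
                int sampleRate, int channels, rtc::track_id_t customTrackId) {
  media::IAudioFrameObserver::AudioFrame frame;
  frame.type = media::IAudioFrameObserver::FRAME_TYPE_PCM16;
  frame.samplesPerChannel = sampleRate / 100;        // 10 ms per push
  frame.bytesPerSample = rtc::TWO_BYTES_PER_SAMPLE;  // 16-bit PCM
  frame.channels = channels;
  frame.samplesPerSec = sampleRate;
  frame.buffer = pcm16;
  frame.renderTimeMs = 0;  // 0: let the SDK assign the timestamp

  // trackId must match the custom audio track published in ChannelMediaOptions.
  return mediaEngine->pushAudioFrame(&frame, customTrackId);
}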
pushVideoFrame
Pushes the external raw video frame to the SDK through video tracks.
virtual int pushVideoFrame(base::ExternalVideoFrame* frame, unsigned int videoTrackId = 0) = 0;
Details
- Call createCustomVideoTrack to create a video track and get the video track ID.
- Call joinChannel [2/2] to join the channel. In ChannelMediaOptions, set customVideoTrackId to the video track ID that you want to publish, and set publishCustomVideoTrack to true.
- Call this method and specify videoTrackId as the video track ID set in step 2. You can then publish the corresponding custom video source in the channel.
- If you no longer need to capture external video data, you can call destroyCustomVideoTrack to destroy the custom video track.
- If you only want to use the external video data for local preview and not publish it in the channel, you can call muteLocalVideoStream to stop sending the video stream, or call updateChannelMediaOptions to set publishCustomVideoTrack to false.
Applicable scenarios
Since v4.2.3, the SDK supports the ID3D11Texture2D video format, which is widely used in game scenarios. When you need to push this type of video frame to the SDK, call this method, set the format member of the video frame to VIDEO_TEXTURE_ID3D11TEXTURE2D, and set the d3d11_texture_2d and texture_slice_index members.
Parameters
- frame
- The external raw video frame to be pushed. See ExternalVideoFrame.
- videoTrackId
- The video track ID returned by calling the createCustomVideoTrack method. The default value is 0.
Returns
- 0: Success.
- < 0: Failure.
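Sample code
A hedged sketch of pushing one I420 frame on a custom video track. The ExternalVideoFrame member names used here (type, format, buffer, stride, height, timestamp) are assumptions taken from the media base header, and videoTrackId is the ID returned by createCustomVideoTrack.

#include <cstdint>
#include "IAgoraMediaEngine.h"

using namespace agora;
using namespace agora::media;

// yuvI420 points to a contiguous I420 buffer of width x height pixels.
int pushI420Frame(IMediaEngine* mediaEngine, uint8_t* yuvI420,
                  int width, int height, unsigned int videoTrackId) {
  base::ExternalVideoFrame frame;
  frame.type = base::ExternalVideoFrame::VIDEO_BUFFER_RAW_DATA;  // raw (non-texture) buffer
  frame.format = base::VIDEO_PIXEL_I420;
  frame.buffer = yuvI420;
  frame.stride = width;   // stride of the Y plane, in pixels
  frame.height = height;
  frame.timestamp = 0;    // 0: let the SDK use the current time

  return mediaEngine->pushVideoFrame(&frame, videoTrackId);
}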
registerAudioFrameObserver
Registers an audio frame observer object.
virtual int registerAudioFrameObserver(IAudioFrameObserver* observer) = 0;
Call this method to register an audio frame observer object, that is, to register the callbacks. When you need the SDK to trigger the onMixedAudioFrame, onRecordAudioFrame, onPlaybackAudioFrame, onPlaybackAudioFrameBeforeMixing, or onEarMonitoringAudioFrame callback, use this method to register the corresponding callbacks.
Call timing
Call this method before joining a channel.
Restrictions
None.
Parameters
- observer
- The observer instance. See IAudioFrameObserver. Set the value as NULL to release the instance. Agora recommends calling this method after receiving onLeaveChannel to release the audio observer object.
Returns
- 0: Success.
- < 0: Failure.
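Sample code
A hedged sketch of the registration pattern. Only onRecordAudioFrame and onPlaybackAudioFrame are shown; IAudioFrameObserver declares further pure virtual methods (onMixedAudioFrame, onEarMonitoringAudioFrame, onPlaybackAudioFrameBeforeMixing, and the audio parameter getters) that a concrete implementation must also override, and the exact signatures can differ between SDK versions.

#include "IAgoraMediaEngine.h"

using namespace agora;

class MyAudioFrameObserver : public media::IAudioFrameObserver {
 public:
  bool onRecordAudioFrame(const char* channelId, AudioFrame& audioFrame) override {
    // Locally captured audio; inspect or modify the PCM in audioFrame.buffer here.
    return true;
  }
  bool onPlaybackAudioFrame(const char* channelId, AudioFrame& audioFrame) override {
    // Mixed playback audio before it is sent to the audio device.
    return true;
  }
  // The remaining pure virtual methods of IAudioFrameObserver are omitted in this
  // sketch; implement them before instantiating the class.
};

// Registration lifecycle:
//   mediaEngine->registerAudioFrameObserver(&myObserver);  // before joining the channel
//   mediaEngine->registerAudioFrameObserver(NULL);         // after onLeaveChannel, to release it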
registerFaceInfoObserver
Registers or unregisters a facial information observer.
virtual int registerFaceInfoObserver(IFaceInfoObserver* observer) = 0;
Details
- Since
- v4.3.1
You can call this method to register the onFaceInfo callback to receive the facial information processed by the Agora speech driven extension. When registering a facial information observer with this method, you can register the callbacks in the IFaceInfoObserver class as needed. After the observer is successfully registered, the SDK triggers the registered callback each time it captures facial information converted by the speech driven extension.
- Call this method before joining a channel.
- Before calling this method, you need to make sure that the speech driven extension has been enabled by calling enableExtension.
Applicable scenarios
Facial information processed by the Agora speech driven extension is BS (Blend Shape) data that complies with ARKit standards. You can further process the BS data using third-party 3D rendering engines, for example, to drive an avatar's mouth movements in sync with the speech.
Parameters
- observer
- Facial information observer, see IFaceInfoObserver. If you need to unregister a facial information observer, pass in NULL.
Returns
- 0: Success.
- < 0: Failure.
registerVideoEncodedFrameObserver
Registers a receiver object for encoded video frames.
virtual int registerVideoEncodedFrameObserver(IVideoEncodedFrameObserver* observer) = 0;
Details
If you only want to observe encoded video frames (such as H.264 format) without decoding and rendering the video, Agora recommends that you implement an IVideoEncodedFrameObserver class and register it through this method.
If you want to obtain the raw video data of some remote users (referred to as group A) and the encoded video data of other remote users (referred to as group B), follow these steps:
- Call registerVideoFrameObserver to register the raw video frame observer before joining the channel.
- Call registerVideoEncodedFrameObserver to register the encoded video frame observer before joining the channel.
- After joining the channel, get the user IDs of group B users through onUserJoined, and then call setRemoteVideoSubscriptionOptions to set encodedFrameOnly to true for this group of users.
- Call muteAllRemoteVideoStreams(false) to start receiving the video streams of all remote users. Then:
  - The raw video data of group A users can be obtained through the callback in IVideoFrameObserver, and the SDK renders the data by default.
  - The encoded video data of group B users can be obtained through the callback in IVideoEncodedFrameObserver.
- Call this method before joining a channel.
Parameters
- observer
- The video frame observer object. See IVideoEncodedFrameObserver.
Returns
- 0: Success.
- < 0: Failure.
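Sample code
A hedged sketch of the group B side of the steps above: an encoded-frame observer registered before joining, plus switching a remote user to encoded-only subscription once onUserJoined reports the uid. The onEncodedVideoFrameReceived signature and the VideoSubscriptionOptions encodedFrameOnly field follow recent 4.x headers; verify them against your SDK version.

#include <cstddef>
#include <cstdint>
#include "IAgoraRtcEngine.h"
#include "IAgoraMediaEngine.h"

using namespace agora;

class MyEncodedFrameObserver : public media::IVideoEncodedFrameObserver {
 public:
  bool onEncodedVideoFrameReceived(rtc::uid_t uid, const uint8_t* imageBuffer, size_t length,
                                   const rtc::EncodedVideoFrameInfo& info) override {
    // Encoded bitstream (for example H.264 NAL units) of a group B user; hand it to
    // your own decoder or recorder.
    return true;
  }
};

// Before joining: mediaEngine->registerVideoEncodedFrameObserver(&myEncodedObserver);

// After onUserJoined reports a group B uid, subscribe to encoded frames only:
void subscribeEncodedOnly(rtc::IRtcEngine* rtcEngine, rtc::uid_t remoteUid) {
  rtc::VideoSubscriptionOptions subOptions;
  subOptions.encodedFrameOnly = true;  // deliver encoded data; skip decoding and rendering
  rtcEngine->setRemoteVideoSubscriptionOptions(remoteUid, subOptions);
}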
registerVideoFrameObserver
Registers a raw video frame observer object.
virtual int registerVideoFrameObserver(IVideoFrameObserver* observer) = 0;
If you want to observe raw video frames (such as YUV or RGBA format), Agora recommends that you implement an IVideoFrameObserver class and register it with this method.
When calling this method to register a video observer, you can register callbacks in the IVideoFrameObserver class as needed. After you successfully register the video frame observer, the SDK triggers the registered callbacks each time a video frame is received.
If you want to obtain the raw video data of some remote users (referred to as group A) and the encoded video data of other remote users (referred to as group B), follow these steps:
- Call registerVideoFrameObserver to register the raw video frame observer before joining the channel.
- Call registerVideoEncodedFrameObserver to register the encoded video frame observer before joining the channel.
- After joining the channel, get the user IDs of group B users through onUserJoined, and then call setRemoteVideoSubscriptionOptions to set encodedFrameOnly to true for this group of users.
- Call muteAllRemoteVideoStreams(false) to start receiving the video streams of all remote users. Then:
  - The raw video data of group A users can be obtained through the callback in IVideoFrameObserver, and the SDK renders the data by default.
  - The encoded video data of group B users can be obtained through the callback in IVideoEncodedFrameObserver.
Applicable scenarios
After registering the raw video observer, you can use the obtained raw video data in various video pre-processing scenarios, such as implementing virtual backgrounds or image enhancement yourself.
Call timing
Call this method before joining a channel.
Restrictions
- When network conditions deteriorate, the video resolution decreases incrementally.
- If the user adjusts the video profile, the resolution of the video returned in the callbacks also changes.
Parameters
- observer
- The observer instance. See IVideoFrameObserver. To release the instance, set the value as NULL.
Returns
- 0: Success.
- < 0: Failure.
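Sample code
A hedged sketch of a raw video observer used for custom pre-processing. Only onCaptureVideoFrame is shown, and its signature (with a VIDEO_SOURCE_TYPE parameter) follows recent 4.x headers; IVideoFrameObserver declares further virtual callbacks and configuration methods (onRenderVideoFrame, getVideoFrameProcessMode, and others) that a complete implementation overrides as needed.

#include "IAgoraMediaEngine.h"

using namespace agora;

class MyVideoFrameObserver : public media::IVideoFrameObserver {
 public:
  bool onCaptureVideoFrame(rtc::VIDEO_SOURCE_TYPE sourceType,
                           media::base::VideoFrame& videoFrame) override {
    // Locally captured frame before encoding; modify the YUV planes in place to apply
    // your own pre-processing (for example a virtual background), then return true
    // so the SDK continues to process the frame.
    return true;
  }
  // The remaining virtual methods of IVideoFrameObserver are omitted in this sketch;
  // implement the pure virtual ones before instantiating the class.
};

// Registration lifecycle:
//   mediaEngine->registerVideoFrameObserver(&myVideoObserver);  // before joining the channel
//   mediaEngine->registerVideoFrameObserver(NULL);              // to release the observer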
setExternalAudioSink
Sets the external audio sink.
virtual int setExternalAudioSink(bool enabled, int sampleRate, int channels) = 0;
After enabling the external audio sink, you can call pullAudioFrame to pull remote audio frames. The app can process the remote audio and play it with the audio effects that you want.
Applicable scenarios
This method applies to scenarios where you want to use external audio data for playback.
Call timing
Call this method before joining a channel.
Restrictions
Once you enable the external audio sink, the app will not retrieve any audio data from the onPlaybackAudioFrame callback.
Parameters
- enabled
- Whether to enable or disable the external audio sink:
  - true: Enables the external audio sink.
  - false: (Default) Disables the external audio sink.
- sampleRate
- The sample rate (Hz) of the external audio sink, which can be set as 16000, 32000, 44100, or 48000.
- channels
- The number of audio channels of the external audio sink:
- 1: Mono.
- 2: Stereo.
Returns
- 0: Success.
- < 0: Failure.
setExternalAudioSource
Sets the external audio source parameters.
virtual int setExternalAudioSource(bool enabled,
int sampleRate,
int channels,
bool localPlayback = false,
bool publish = true) = 0;
- Deprecated:
- This method is deprecated. Use createCustomAudioTrack instead.
Call timing
Call this method before joining a channel.
Restrictions
None.
Parameters
- enabled
- Whether to enable the external audio source:
  - true: Enable the external audio source.
  - false: (Default) Disable the external audio source.
- sampleRate
- The sample rate (Hz) of the external audio source, which can be set as 8000, 16000, 32000, 44100, or 48000.
- channels
- The number of channels of the external audio source, which can be set as 1 (Mono) or 2 (Stereo).
- localPlayback
- Whether to play the external audio source:
  - true: Play the external audio source.
  - false: (Default) Do not play the external audio source.
- publish
- Whether to publish audio to the remote users:
  - true: (Default) Publish audio to the remote users.
  - false: Do not publish audio to the remote users.
Returns
- 0: Success.
- < 0: Failure.
setExternalVideoSource
Configures the external video source.
virtual int setExternalVideoSource(
bool enabled, bool useTexture, EXTERNAL_VIDEO_SOURCE_TYPE sourceType = VIDEO_FRAME,
rtc::SenderOptions encodedVideoOption = rtc::SenderOptions()) = 0;
After calling this method to enable an external video source, you can call pushVideoFrame to push external video data to the SDK.
Call timing
Call this method before joining a channel.
Restrictions
Dynamic switching of video sources is not supported within the channel. To switch from an external video source to an internal video source, you must first leave the channel, call this method to disable the external video source, and then rejoin the channel.
Parameters
- enabled
- Whether to use the external video source:
  - true: Use the external video source. The SDK prepares to accept the external video frame.
  - false: (Default) Do not use the external video source.
- useTexture
- Whether to use the external video frame in the Texture format:
  - true: Use the external video frame in the Texture format.
  - false: (Default) Do not use the external video frame in the Texture format.
- sourceType
- Whether the external video frame is encoded. See EXTERNAL_VIDEO_SOURCE_TYPE.
- encodedVideoOption
- Video encoding options. This parameter needs to be set if sourceType is ENCODED_VIDEO_FRAME. To set this parameter, contact technical support.
Returns
- 0: Success.
- < 0: Failure.
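Sample code
A minimal sketch of enabling a raw external video source before joining, then publishing the externally pushed frames. The token and channel name are placeholders, and the qualification of the VIDEO_FRAME enum value should be checked against your SDK header.

#include "IAgoraRtcEngine.h"
#include "IAgoraMediaEngine.h"

using namespace agora;
using namespace agora::rtc;

// Enable the external (raw, non-encoded) video source and join the channel.
void enableExternalVideo(IRtcEngine* rtcEngine, media::IMediaEngine* mediaEngine) {
  mediaEngine->setExternalVideoSource(/*enabled=*/true, /*useTexture=*/false,
                                      media::EXTERNAL_VIDEO_SOURCE_TYPE::VIDEO_FRAME);

  ChannelMediaOptions options;
  options.publishCameraTrack = false;      // do not publish the built-in camera
  options.publishCustomVideoTrack = true;  // publish the externally pushed frames
  options.clientRoleType = CLIENT_ROLE_BROADCASTER;
  rtcEngine->joinChannel("<your token>", "<your channel>", 0, options);

  // After joining, feed frames with pushVideoFrame (see the pushVideoFrame sketch above,
  // using the default videoTrackId of 0).
}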