Video Features

Video Basic Features

disableVideo

Disables the video module.

virtual int disableVideo() = 0;
This method affects the internal engine and can be called after leaving a channel. Calling this method resets the entire engine, which may cause slower response time. Depending on your needs, you can instead control specific features of the video module with methods such as enableLocalVideo, muteRemoteVideoStream, and muteAllRemoteVideoStreams.

Timing

You can call this method before or after joining a channel:
  • If you call it before joining a channel, you enter audio-only mode.
  • If you call it after joining a channel, you switch from video mode to audio-only mode.
You can call enableVideo later to switch back to video mode.

Return Values

  • 0: Success.
  • < 0: Failure.

enableLocalVideo

Enables or disables local video capture.

virtual int enableLocalVideo(bool enabled) = 0;

This method disables or re-enables local video capture without affecting the reception of remote video streams. After calling enableVideo, local video capture is enabled by default. If you call enableLocalVideo(false) in a channel, it also stops publishing the video stream in the channel. To resume capturing, call enableLocalVideo(true) and then call updateChannelMediaOptions to set the options parameter to publish the captured local video stream in the channel.

Note:
  • You can call this method before or after joining a channel. However, if you call it before joining, the setting only takes effect after joining.
  • This method enables the internal engine and remains effective after leaving the channel.
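
For illustration, a minimal sketch of the resume flow described above, assuming engine is an initialized IRtcEngine pointer and that publishCameraTrack is the ChannelMediaOptions field that republishes the camera stream (verify the option name against your SDK version):

  #include "IAgoraRtcEngine.h"

  // Resume local video capture and republish the camera stream in the channel.
  void resumeLocalVideo(agora::rtc::IRtcEngine* engine) {
    // Re-enable capture after a previous enableLocalVideo(false) call.
    engine->enableLocalVideo(true);

    // Republish the captured local video stream in the channel.
    agora::rtc::ChannelMediaOptions options;
    options.publishCameraTrack = true;  // assumed option name for the camera track
    engine->updateChannelMediaOptions(options);
  }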

Parameters

enabled
Whether to enable local video capture:
  • true: (Default) Enable local video capture.
  • false: Disable local video capture. After disabling, remote users cannot receive the local user's video stream, but the local user can still receive remote video streams. A local camera is not required when set to false.

Return Values

  • 0: Success.
  • < 0: Failure.

enableVideo

Enables the video module.

virtual int enableVideo() = 0;

By default, the video module is disabled. Call this method to enable it. To disable the video module later, call disableVideo.

Note: This method enables the internal engine and remains effective after leaving the channel. Calling this method resets the entire engine and may cause a slower response. A successful call also resets the settings of enableLocalVideo, muteRemoteVideoStream, and muteAllRemoteVideoStreams, so use it with caution. If you only need to control specific video features, use those methods individually instead.

Timing

You can call this method before or after joining a channel:
  • If you call it before joining a channel, the video module is enabled.
  • If you call it during a voice call, the call switches to a video call.

Return Values

  • 0: Success.
  • < 0: Failure.

setVideoEncoderConfiguration

Sets the encoder configuration for the local video.

virtual int setVideoEncoderConfiguration(const VideoEncoderConfiguration& config) = 0;
Note:
  • This method and the getMirrorApplied method both support setting mirror effects. Agora recommends using only one of them; using both may cause the mirror effect to overlap and the setting to fail.
  • The config specified in this method represents the maximum values under ideal network conditions. If the video engine cannot render the video using the specified config due to unstable network conditions, it will try lower parameters in the list until a usable configuration is found.

Timing

You can call this method before or after joining a channel. If you do not need to reset the video encoding properties after joining the channel, Agora recommends calling this method before enableVideo to reduce the time to render the first video frame.
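
For example, a hedged sketch that applies a 720p, 15 fps configuration before enabling the video module; field and constant names follow VideoEncoderConfiguration in the C++ SDK headers, and engine is assumed to be an initialized IRtcEngine pointer:

  #include "IAgoraRtcEngine.h"

  // Configure the local video encoder, then enable the video module.
  void configureEncoder(agora::rtc::IRtcEngine* engine) {
    agora::rtc::VideoEncoderConfiguration config;
    config.dimensions = agora::rtc::VideoDimensions(1280, 720);      // encoded resolution
    config.frameRate = 15;                                           // encoded frame rate (fps)
    config.bitrate = agora::rtc::STANDARD_BITRATE;                   // let the SDK choose the bitrate
    config.orientationMode = agora::rtc::ORIENTATION_MODE_ADAPTIVE;  // adaptive output orientation

    engine->setVideoEncoderConfiguration(config);
    engine->enableVideo();  // setting the encoder configuration first speeds up first-frame rendering
  }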

Parameters

config
Video encoding configuration. See VideoEncoderConfiguration.

Return Values

  • 0: Success.
  • < 0: Failure.

setVideoEncoderConfigurationEx

Sets the encoder configuration for the local video.

virtual int setVideoEncoderConfigurationEx(const VideoEncoderConfiguration& config, const RtcConnection& connection) = 0;

Each configuration profile corresponds to a set of video parameters, including resolution, frame rate, and bitrate. The config parameter represents the maximum values under ideal network conditions. If the video engine cannot render the video using the specified config due to unstable network conditions, it will attempt to use lower parameters from the list until a suitable configuration is found.

Scenario

This method applies to multi-channel scenarios.

Parameters

config
Video encoder configuration. See VideoEncoderConfiguration.
connection
Connection information. See RtcConnection.

Return Values

  • 0: Success.
  • < 0: Failure.

setVideoScenario

Sets the video application scenario.

virtual int setVideoScenario(VIDEO_APPLICATION_SCENARIO_TYPE scenarioType) = 0;
Since
Available since v4.2.0.
Note: Call this method before joining a channel.

Parameters

scenarioType
Type of video application scenario. See VIDEO_APPLICATION_SCENARIO_TYPE.
  • APPLICATION_SCENARIO_MEETING (1): Suitable for meeting scenarios. The SDK automatically enables the following strategies:
    • When the low-quality video stream requires higher bitrate, the SDK uses various anti-network congestion techniques to improve its performance and ensure smooth reception.
    • The SDK monitors the number of high-quality video stream subscribers in real time and adjusts its configuration dynamically:
      • If no one subscribes to the high-quality stream, the SDK reduces its bitrate and frame rate to save upstream bandwidth.
      • If there are subscribers, the SDK resets to the last VideoEncoderConfiguration set via setVideoEncoderConfiguration; if not set, the defaults are:
        • Resolution: (Windows/macOS) 1280 × 720; (Android/iOS) 960 × 540.
        • Frame rate: 15 fps.
        • Bitrate: (Windows/macOS) 1600 Kbps; (Android/iOS) 1000 Kbps.
    • The SDK monitors the number of low-quality stream subscribers and enables/disables the stream accordingly:
      • If setDualStreamMode is set to never send low-quality streams (DISABLE_SIMULCAST_STREAM), this strategy is invalid.
      • If no one subscribes, the SDK disables the stream to save bandwidth.
      • If there are subscribers, the SDK enables the stream and resets to the last SimulcastStreamConfig set via setDualStreamMode; if not set, the defaults are:
        • Resolution: 480 × 272.
        • Frame rate: 15 fps.
        • Bitrate: 500 Kbps.
  • APPLICATION_SCENARIO_1V1 (2): Suitable for one-on-one live streaming. The SDK optimizes for low latency and high quality, improving video quality, first frame rendering speed, latency on mid- and low-end devices, and smoothness under weak network conditions. Note: This value is only for host-to-host scenarios.
  • APPLICATION_SCENARIO_LIVESHOW (3): Suitable for live show scenarios. The SDK enables multiple optimizations, including automatic audio/video frame acceleration (no need to call enableInstantMediaRendering) to reduce first frame delay and enabling B-frame encoding to improve image quality and bandwidth efficiency, even under weak networks or on low-end devices.
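
A minimal sketch of selecting the meeting scenario before joining a channel, assuming engine is an initialized IRtcEngine pointer:

  #include "IAgoraRtcEngine.h"

  // Apply the meeting scenario so the SDK can use the strategies listed above.
  void useMeetingScenario(agora::rtc::IRtcEngine* engine) {
    engine->setVideoScenario(agora::rtc::APPLICATION_SCENARIO_MEETING);
    // Join the channel afterwards; the scenario must be set before joining.
  }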

Return Values

  • 0: Success.
  • < 0: Failure.
    • -1: General error (no specific reason).
    • -4: Video application scenario not supported, possibly due to using the voice SDK instead of the video SDK.
    • -7: IRtcEngine object not initialized. Please initialize the IRtcEngine object before calling this method.

startPreview [1/2]

Enables local video preview.

virtual int startPreview() = 0;
Note:
  • Mirror mode is enabled by default for local preview.
  • After leaving the channel, local preview remains enabled. You need to call stopPreview to disable it.

Timing

You must call this method after calling enableVideo and setupLocalVideo.
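
A minimal sketch of the required call order, assuming engine is an initialized IRtcEngine pointer and nativeView is a platform-specific view handle:

  #include "IAgoraRtcEngine.h"

  // Enable video, bind the local view, then start the preview.
  void startLocalPreview(agora::rtc::IRtcEngine* engine, agora::view_t nativeView) {
    engine->enableVideo();

    agora::rtc::VideoCanvas canvas;
    canvas.view = nativeView;  // platform-specific view handle
    canvas.uid = 0;            // 0 denotes the local user
    engine->setupLocalVideo(canvas);

    engine->startPreview();
  }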

Return Values

  • 0: Success.
  • < 0: Failure.

startPreview [2/2]

Enables local video preview and specifies the video source to preview.

virtual int startPreview(VIDEO_SOURCE_TYPE sourceType) = 0;
Note:
  • Mirror mode is enabled by default for local preview.
  • After leaving the channel, local preview remains enabled. You need to call stopPreview to disable it.

Timing

You must call this method after calling enableVideo and setupLocalVideo.

Parameters

sourceType
The video source type. See VIDEO_SOURCE_TYPE.

Return Values

  • 0: Success.
  • < 0: Failure.

stopPreview [1/2]

Stops the local video preview.

virtual int stopPreview() = 0;

After calling startPreview to start the preview, if you need to stop the local video preview, you can call this method.

Timing

Call this method before joining or after leaving a channel.

Return Values

  • 0: Success.
  • < 0: Failure.

stopPreview [2/2]

Stops the local video preview.

virtual int stopPreview(VIDEO_SOURCE_TYPE sourceType) = 0;

After calling startPreview, if you need to stop the preview, you can call this method.

Timing

Call this method before joining or after leaving a channel.

Parameters

sourceType
The video source type. See VIDEO_SOURCE_TYPE.

Return Values

  • 0: Success.
  • < 0: Failure.

onFirstLocalVideoFrame

Callback triggered when the first local video frame is displayed in the local view.

virtual void onFirstLocalVideoFrame(VIDEO_SOURCE_TYPE source, int width, int height, int elapsed)

Trigger Timing

Triggered when the first local video frame is displayed in the local view.

Parameters

source
Video source type. See VIDEO_SOURCE_TYPE.
width
Width of the first local video frame in pixels.
height
Height of the first local video frame in pixels.
elapsed
Time elapsed (ms) from calling joinChannel to this callback. If startPreview was called before joining the channel, this is the time from startPreview to this event.

onFirstLocalVideoFramePublished

Callback triggered when the first local video frame is published.

virtual void onFirstLocalVideoFramePublished(VIDEO_SOURCE_TYPE source, int elapsed)
The SDK triggers this callback when any of the following conditions are met:
  • The local client enables the video module and successfully joins a channel by calling joinChannel.
  • The local client calls muteLocalVideoStream(true) followed by muteLocalVideoStream(false).
  • The local client calls disableVideo followed by enableVideo.
  • The local client successfully pushes video frames to the SDK using pushVideoFrame.

Trigger Timing

Triggered when the first local video frame is published.

Parameters

source
Video source type. See VIDEO_SOURCE_TYPE.
elapsed
Time elapsed (ms) from calling joinChannel to this callback.

onFirstRemoteVideoDecoded

Callback triggered when the first remote video frame is received and decoded.

virtual void onFirstRemoteVideoDecoded(uid_t uid, int width, int height, int elapsed) __deprecated
Deprecated
This method is deprecated.

Trigger Timing

Triggered when a remote user joins the channel and sends a video stream, or resumes sending after stopping for 15 seconds. Interruptions may be due to the remote user leaving the channel, going offline, or calling disableVideo.

Parameters

uid
Remote user ID sending the video stream.
width
Width of the video stream (pixels).
height
Height of the video stream (pixels).
elapsed
Time (ms) from calling joinChannel to this callback.

onFirstRemoteVideoFrame

Callback when the first frame of remote video is displayed.

virtual void onFirstRemoteVideoFrame(uid_t uid, int width, int height, int elapsed)
Note: The onFirstRemoteVideoFrame callback is triggered only when the video frame is rendered by the SDK. If you use custom video rendering, this callback will not be triggered. You need to implement this functionality independently outside the SDK.

Trigger Timing

Triggered when the renderer receives the first frame of remote video.

Parameters

uid
Remote user ID, i.e., the user sending the video stream.
width
Width of the video stream (pixels).
height
Height of the video stream (pixels).
elapsed
Time (ms) from calling joinChannel to triggering this callback.

onLocalVideoEvent

Callback triggered when a local video event occurs.

virtual void onLocalVideoEvent(VIDEO_SOURCE_TYPE source, LOCAL_VIDEO_EVENT_TYPE event)
Since
Available since v4.6.1.

You can use this callback to get the reason for the event.

Parameters

source
Video source type. See VIDEO_SOURCE_TYPE.
event
Local video event type. See LOCAL_VIDEO_EVENT_TYPE.

onLocalVideoStateChanged

Callback for local video state changes.

virtual void onLocalVideoStateChanged(VIDEO_SOURCE_TYPE source, LOCAL_VIDEO_STREAM_STATE state, LOCAL_VIDEO_STREAM_REASON reason)
Note: Duplicate frame detection applies only to video frames with a resolution greater than 200 × 200, a frame rate of at least 10 fps, and a bitrate below 20 Kbps. In most cases, if video capture fails, you can diagnose the issue using the reason parameter in this callback. However, on some Android devices the system does not report any error when capture fails (for example, when capture freezes), so the SDK cannot report the reason for the local video state change. In such cases, you can infer that no video frames are being captured if the reported state is LOCAL_VIDEO_STREAM_STATE_CAPTURING or LOCAL_VIDEO_STREAM_STATE_ENCODING while captureFrameRate in the onLocalVideoStats callback is 0.

Trigger Timing

The SDK triggers this callback and sets state to LOCAL_VIDEO_STREAM_STATE_FAILED and reason to LOCAL_VIDEO_STREAM_REASON_CAPTURE_FAILURE in the following scenarios:
  • The app goes to the background and the system reclaims the camera resource.
  • On Android 9 and above, the system automatically reclaims camera permissions after the app runs in the background for a while.
  • On Android 6 and above, if the camera is occupied by a third-party app and then released, the SDK triggers this callback with state as LOCAL_VIDEO_STREAM_STATE_CAPTURING and reason as LOCAL_VIDEO_STREAM_REASON_OK.
  • The camera starts normally but fails to output video frames for four consecutive seconds.
  • When the camera outputs captured video frames, if the SDK detects 15 consecutive duplicate frames, it triggers this callback with state as LOCAL_VIDEO_STREAM_STATE_CAPTURING and reason as LOCAL_VIDEO_STREAM_REASON_CAPTURE_FAILURE.

Parameters

source
Video source type. See VIDEO_SOURCE_TYPE.
state
Local video state. See LOCAL_VIDEO_STREAM_STATE.
reason
Reason for the local video state change. See LOCAL_VIDEO_STREAM_REASON.
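
For illustration, a hedged sketch of diagnosing capture failure in this callback; the handler class and logging are placeholders, and the override signature follows the declaration above (verify it against your SDK version):

  #include "IAgoraRtcEngine.h"
  #include <cstdio>

  // Event handler that inspects local video state changes to diagnose capture issues.
  class VideoStateHandler : public agora::rtc::IRtcEngineEventHandler {
   public:
    void onLocalVideoStateChanged(agora::rtc::VIDEO_SOURCE_TYPE source,
                                  agora::rtc::LOCAL_VIDEO_STREAM_STATE state,
                                  agora::rtc::LOCAL_VIDEO_STREAM_REASON reason) override {
      if (state == agora::rtc::LOCAL_VIDEO_STREAM_STATE_FAILED) {
        // Use reason to decide how to recover, e.g. prompt the user to free the camera.
        std::printf("Local video failed: source=%d reason=%d\n", (int)source, (int)reason);
      }
    }
  };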

onLocalVideoStats

Callback for local video stream statistics.

virtual void onLocalVideoStats(VIDEO_SOURCE_TYPE source, const LocalVideoStats& stats)

Trigger Timing

The SDK triggers this callback every two seconds.

Parameters

source
The video source type. See VIDEO_SOURCE_TYPE.
stats
Statistics of the local video stream. See LocalVideoStats.

onRemoteVideoStateChanged

Callback for remote video state changes.

virtual void onRemoteVideoStateChanged(uid_t uid, REMOTE_VIDEO_STATE state, REMOTE_VIDEO_STATE_REASON reason, int elapsed)
Note: This callback does not work properly when the number of users (in communication channels) or hosts (in live broadcast channels) exceeds 32.

Trigger Timing

Triggered when the remote video stream state changes.

Parameters

uid
Remote user ID whose video state changed.
state
Remote video state. See REMOTE_VIDEO_STATE.
reason
Reason for the remote video state change. See REMOTE_VIDEO_STATE_REASON.
elapsed
Time elapsed (ms) from calling joinChannel to this callback.

onRemoteVideoStats

Callback for statistics of video streams sent by remote users.

virtual void onRemoteVideoStats(const RemoteVideoStats& stats)

Trigger Timing

The SDK triggers this callback every two seconds for each remote user. If multiple users or hosts in the channel send video streams, the SDK triggers this callback multiple times accordingly.

Parameters

stats
Statistics of the remote video stream. See RemoteVideoStats.

onRemoteVideoTransportStats

Reports transport-layer statistics of each remote video stream.

virtual void onRemoteVideoTransportStats(uid_t uid, unsigned short delay, unsigned short lost, unsigned short rxKBitRate) __deprecated
Deprecated
This method is deprecated. Use onRemoteVideoStats instead.

Trigger Timing

During a call, this callback is triggered every 2 seconds when you receive video packets sent by a remote user or host.

Parameters

uid
The remote user ID, indicating the sender of the video packets.
delay
Network delay from the sender to the receiver (ms).
lost
Packet loss rate (%) of the video packets sent by the remote user.
rxKBitRate
Video receiving bitrate (Kbps).

onUserEnableLocalVideo

Callback when a remote user enables or disables local video capture.

virtual void onUserEnableLocalVideo(uid_t uid, bool enabled) __deprecated
Deprecated
This method is deprecated. Use the following enum values in the onRemoteVideoStateChanged callback instead: REMOTE_VIDEO_STATE_STOPPED and REMOTE_VIDEO_STATE_REASON_REMOTE_MUTED, REMOTE_VIDEO_STATE_DECODING and REMOTE_VIDEO_STATE_REASON_REMOTE_UNMUTED.

This callback is triggered when the remote user calls enableLocalVideo to resume or stop video capture.

Parameters

uid
The remote user ID.
enabled
Whether the specified remote user enables local video capture:
  • true: The remote user has enabled local video capture. Other users in the channel can see this user's video.
  • false: The remote user has disabled local video capture. Other users in the channel can no longer receive this user's video stream, while this user can still receive video streams from others.

onUserEnableVideo

Callback triggered when a remote user enables or disables the video module.

virtual void onUserEnableVideo(uid_t uid, bool enabled)

Once the video module is disabled, the user can only make voice calls and cannot send or receive any video.

Trigger Timing

This callback is triggered when the remote user calls enableVideo or disableVideo.

Parameters

uid
The remote user ID.
enabled
Whether the video module is enabled:
  • true: The video module is enabled.
  • false: The video module is disabled.

onUserMuteVideo

Callback for changes in remote video publishing state.

virtual void onUserMuteVideo(uid_t uid, bool muted)

This callback is triggered when the remote user calls muteLocalVideoStream to stop or resume publishing the video stream. The SDK reports the remote user's video publishing state to the local user.

Note: This callback may be inaccurate when the number of users (in communication scenario) or hosts (in live broadcast scenario) in the channel exceeds 32.

Parameters

uid
The user ID of the remote user.
muted
Whether the remote user stops publishing the video stream:
  • true: The remote user stops publishing the video stream.
  • false: The remote user resumes publishing the video stream.

onVideoPublishStateChanged

Callback for video publishing state changes.

virtual void onVideoPublishStateChanged(VIDEO_SOURCE_TYPE source, const char* channel, STREAM_PUBLISH_STATE oldState, STREAM_PUBLISH_STATE newState, int elapseSinceLastState)

Parameters

source
Video source type. See VIDEO_SOURCE_TYPE.
channel
Channel name.
oldState
Previous video publishing state. See STREAM_PUBLISH_STATE.
newState
Current video publishing state. See STREAM_PUBLISH_STATE.
elapseSinceLastState
Time elapsed from the previous state to the current state (ms).

onVideoSizeChanged

Triggered when the video size or rotation of a specified user changes.

virtual void onVideoSizeChanged(VIDEO_SOURCE_TYPE sourceType, uid_t uid, int width, int height, int rotation)
Note: The rotation parameter is always 0 on iOS.

Parameters

sourceType
Video source type. See VIDEO_SOURCE_TYPE.
uid
User ID whose video size or rotation changed. A uid of 0 indicates the local user (local preview).
width
Width of the video stream (pixels).
height
Height of the video stream (pixels).
rotation
Video rotation angle, range [0, 360).

onVideoStopped

Callback triggered when the video stops playing.

virtual void onVideoStopped() __deprecated {}
Deprecated
This method is deprecated. Use the onLocalVideoStateChanged callback with LOCAL_VIDEO_STREAM_STATE_STOPPED instead.
Note: You can use this callback to change the view configuration after the video stops playing, such as displaying another image in the view.

Camera Capture

setCameraStabilizationMode

Sets the camera stabilization mode.

virtual int setCameraStabilizationMode(CAMERA_STABILIZATION_MODE mode) = 0;

Camera stabilization mode is disabled by default. You need to call this method to enable stabilization and set the appropriate mode.

Note:
  • Camera stabilization applies only to video resolutions greater than 1280 × 720.
  • The higher the stabilization level, the smaller the field of view and the higher the latency. To improve user experience, it is recommended to set the mode parameter to CAMERA_STABILIZATION_MODE_LEVEL_1.
  • This method applies to iOS only.

Scenario

In mobile shooting, low-light environments, or when using mobile devices, you can set the camera stabilization mode to reduce shake and obtain more stable and clearer images.

Timing

You must call this method after the camera is successfully started, that is, when the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LOCAL_VIDEO_STREAM_STATE_CAPTURING (1).
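
For example, once the SDK reports LOCAL_VIDEO_STREAM_STATE_CAPTURING you might enable the lowest stabilization level; engine is an assumed IRtcEngine pointer:

  #include "IAgoraRtcEngine.h"

  // Call after the camera has started, for example from onLocalVideoStateChanged
  // when the reported state is LOCAL_VIDEO_STREAM_STATE_CAPTURING.
  void enableStabilization(agora::rtc::IRtcEngine* engine) {
    engine->setCameraStabilizationMode(agora::rtc::CAMERA_STABILIZATION_MODE_LEVEL_1);
  }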

Parameters

mode
The camera stabilization mode. See CAMERA_STABILIZATION_MODE.

Return Values

  • 0: Success.
  • < 0: Failure.

startCameraCapture

Starts video capture.

virtual int startCameraCapture(VIDEO_SOURCE_TYPE sourceType, const CameraCapturerConfiguration& config) = 0;

You can call this method and specify sourceType to start capturing video from one or more cameras.

Note: On iOS, if you want to enable multi-camera capture, you must call enableMultiCamera and set enabled to true before calling startCameraCapture.

Parameters

sourceType
Type of video source. See VIDEO_SOURCE_TYPE.
Note:
  • On iOS devices, if the device has multiple cameras or supports external cameras, up to 2 video streams can be captured simultaneously.
  • On Android devices, if the device has multiple cameras or supports external cameras, up to 4 video streams can be captured simultaneously.
  • On desktop platforms, up to 4 video streams can be captured simultaneously.
config
Video capture configuration. See CameraCapturerConfiguration.
Note: On iOS, this parameter has no effect. Use the config parameter in the enableMultiCamera method to configure video capture.
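
A hedged sketch of starting a second camera stream; the CameraCapturerConfiguration field names (format.width, format.height, format.fps) and the VIDEO_SOURCE_CAMERA_SECONDARY value are assumptions to verify against your SDK version:

  #include "IAgoraRtcEngine.h"

  // Start capturing from a second camera at 640 x 360, 15 fps.
  void startSecondCamera(agora::rtc::IRtcEngine* engine) {
    agora::rtc::CameraCapturerConfiguration config;
    config.format.width = 640;   // capture width in pixels
    config.format.height = 360;  // capture height in pixels
    config.format.fps = 15;      // capture frame rate

    engine->startCameraCapture(agora::rtc::VIDEO_SOURCE_CAMERA_SECONDARY, config);
  }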

Return Values

  • 0: Success.
  • < 0: Failure.

stopCameraCapture

Stops video capture.

virtual int stopCameraCapture(VIDEO_SOURCE_TYPE sourceType) = 0;

After calling startCameraCapture to start capturing video from one or more cameras, you can call this method and set the sourceType parameter to stop video capture from the specified camera.

Note: If you are using local video mixing, calling this method may interrupt the local video mixing. On iOS, if you want to disable multi-camera capture, you must call enableMultiCamera and set enabled to false after calling this method.

Parameters

sourceType
Type of video source. See VIDEO_SOURCE_TYPE.

Return Values

  • 0: Success.
  • < 0: Failure.

Screen Capture

getCount

Gets the number of shareable windows and screens.

virtual unsigned int getCount() = 0;
Note: getCount is available on macOS and Windows only.

Return Values

Returns the number of shareable windows and screens if the method call succeeds.

getScreenCaptureSources

Gets the list of shareable screens and windows.

virtual IScreenCaptureSourceList* getScreenCaptureSources(const SIZE& thumbSize, const SIZE& iconSize, const bool includeScreen) = 0;

You can call this method before sharing a screen or window to get a list of shareable screens and windows. Users can easily select the screen or window to share via the thumbnails in the list. The list also includes information such as window ID and screen ID, which you can use to call startScreenCaptureByWindowId or startScreenCaptureByDisplayId to start sharing.

Note: This method is only applicable to macOS and Windows platforms.

Parameters

thumbSize
Target size (width and height in pixels) for the screen or window thumbnail. The SDK scales the original image so that the longest side matches the target size without distorting the image. For example, if the original image is 400 × 300 and thumbSize is 100 × 100, the actual thumbnail size is 100 × 75. If the target size is larger than the original, the thumbnail is the original image and the SDK does not scale it. See [SIZE](https://learn.microsoft.com/en-us/windows/win32/api/windef/ns-windef-size).
iconSize
Target size (width and height in pixels) for the application icon. The SDK scales the original image so that the longest side matches the target size without distorting the image. For example, if the original image is 400 × 300 and iconSize is 100 × 100, the actual icon size is 100 × 75. If the target size is larger than the original, the icon is the original image and the SDK does not scale it. See [SIZE](https://learn.microsoft.com/en-us/windows/win32/api/windef/ns-windef-size).
includeScreen
Whether to return screen information:
  • true: Return both screen and window information.
  • false: Return only window information.

Return Values

Returns an IScreenCaptureSourceList object containing the shareable screens and windows if the method call succeeds. See IScreenCaptureSourceList.

getSourceInfo

Gets information about the specified shareable window or screen.

virtual ScreenCaptureSourceInfo getSourceInfo(unsigned int index) = 0;

After obtaining IScreenCaptureSourceList, pass in the index of the specified window or screen to get its information from ScreenCaptureSourceInfo.

Note: This method is available on macOS and Windows only.

Parameters

index
The index of the shareable window or screen. The value range is [0, getCount() ).

Return Values

Returns a ScreenCaptureSourceInfo object if the method call succeeds. See ScreenCaptureSourceInfo.
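
A hedged sketch that ties getScreenCaptureSources, getCount, getSourceInfo, and release together on desktop; the ScreenCaptureSourceInfo field names (type, sourceId) and the ScreenCaptureSourceType_Screen value are assumptions, and the sourceId cast may need adjusting for your SDK version:

  #include "IAgoraRtcEngine.h"
  #include <cstdint>

  using namespace agora::rtc;

  // Enumerate shareable sources, share the first screen found, then release the list.
  void shareFirstScreen(IRtcEngine* engine) {
    SIZE thumb = {};  // thumbnail target size; set the dimensions your UI needs
    SIZE icon = {};   // icon target size
    IScreenCaptureSourceList* sources =
        engine->getScreenCaptureSources(thumb, icon, true /*includeScreen*/);
    if (!sources) return;

    for (unsigned int i = 0; i < sources->getCount(); ++i) {
      ScreenCaptureSourceInfo info = sources->getSourceInfo(i);
      if (info.type == ScreenCaptureSourceType_Screen) {
        Rectangle region;               // empty region: share the whole screen
        ScreenCaptureParameters params; // default encoding parameters
        engine->startScreenCaptureByDisplayId((int64_t)(intptr_t)info.sourceId, region, params);
        break;
      }
    }
    sources->release();  // release the list to avoid memory leaks
  }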

queryScreenCaptureCapability

Queries the maximum frame rate supported by the device during screen sharing.

virtual int queryScreenCaptureCapability() = 0;
Since
Available since v4.2.0.

To ensure optimal performance during screen sharing, especially when enabling high frame rates like 60 fps, it is recommended that you call this method before sharing to query the maximum frame rate supported by the device. If the device does not support high frame rates, you can lower the frame rate of the screen sharing stream accordingly to avoid affecting the quality.

Note: This method is only available on Android and iOS platforms.

Return Values

  • ≥ 0: The video frame rate capability supported by the device during screen sharing, if the method call succeeds. See SCREEN_CAPTURE_FRAMERATE_CAPABILITY.
  • < 0: Failure.

release

Releases IScreenCaptureSourceList.

virtual void release() = 0;

After obtaining the list of shareable windows and screens, call this method to release IScreenCaptureSourceList to avoid memory leaks, instead of deleting the object directly.

Note: This method is available on macOS and Windows only.

setExternalMediaProjection

Configures an external MediaProjection for the SDK to capture screen video streams.

virtual int setExternalMediaProjection(void* mediaProjection) = 0;

After calling this method, the external MediaProjection you set will replace the one requested by the SDK to capture screen video streams. When screen sharing stops or IRtcEngine is destroyed, the SDK will automatically release the MediaProjection.

Note: Before calling this method, you must obtain the [MediaProjection](https://developer.android.com/reference/android/media/projection/MediaProjection) permission. This method is for Android only.

Scenario

If you have already obtained a MediaProjection, you can use your own MediaProjection directly without using the one requested by the SDK. Applicable scenarios include:
  • On customized system devices, avoid system pop-ups (such as user authorization for screen capture) and start capturing screen video streams directly.
  • In screen sharing processes involving one or more subprocesses, avoid potential errors when creating objects in subprocesses to prevent screen capture failure.
This method is applicable in multi-channel scenarios.

Timing

Call this method after calling startScreenCapture.

Parameters

mediaProjection
MediaProjection object used to capture screen video streams.

Return Values

  • 0: Success.
  • < 0: Failure.

setScreenCaptureContentHint

Sets the content hint for screen sharing.

virtual int setScreenCaptureContentHint(VIDEO_CONTENT_HINT contentHint) = 0;

This method sets the content type hint for screen sharing, so that the SDK can apply corresponding optimization algorithms based on different types of content. If you do not call this method, the default content hint is CONTENT_HINT_NONE.

Note: You can call this method before or after starting or stopping screen sharing.

Parameters

contentHint
Content hint for screen sharing. See VIDEO_CONTENT_HINT for details.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails.
    • -2: Invalid parameter.
    • -8: Invalid screen sharing state, possibly because you are already sharing another screen or window. Call stopScreenCapture to stop the current sharing and try again.

setScreenCaptureScenario

Sets the screen sharing scenario.

virtual int setScreenCaptureScenario(SCREEN_SCENARIO_TYPE screenScenario) = 0;

Call this method to set the screen sharing scenario. The SDK automatically optimizes the shared video quality and user experience based on the scenario.

Note: It is recommended to call this method before joining a channel.

Parameters

screenScenario
Screen sharing scenario. See SCREEN_SCENARIO_TYPE.

Return Values

  • 0: Success.
  • < 0: Failure.

startScreenCapture [1/2]

Starts screen capture.

virtual int startScreenCapture(const ScreenCaptureParameters2& captureParams) = 0;
You can call this method before or after joining a channel:
  • Call this method first, then call joinChannel and set publishScreenCaptureVideo to true to start screen sharing.
  • Call this method after joining the channel, then call updateChannelMediaOptions and set publishScreenCaptureVideo to true to start screen sharing.
Note:
  • This method is available on Android and iOS only.
  • On iOS, screen sharing is supported on iOS 12.0 and above.
  • If you use a custom audio source instead of SDK audio capture, Agora recommends adding keep-alive logic to your app to prevent screen sharing from being interrupted when the app goes to the background.
  • This feature requires high device performance. Agora recommends using iPhone X or above.
  • This method depends on the iOS screen sharing dynamic library AgoraReplayKitExtension.xcframework. If the library is removed, screen sharing will not work.
  • On Android, if the user does not grant screen capture permission, the SDK triggers the onPermissionError (2) callback.
  • On Android 9 and above, to prevent the app from being killed in the background, Agora recommends adding the foreground service permission android.permission.FOREGROUND_SERVICE in /app/Manifests/AndroidManifest.xml.
  • Due to performance limitations, Android TV does not support screen sharing.
  • Due to system limitations, do not change the video encoding resolution of the screen sharing stream during screen sharing on Huawei devices, as this may cause a crash.
  • Due to system limitations, some Xiaomi devices do not support capturing system audio during screen sharing.
  • To avoid system audio capture failure during screen sharing, Agora recommends setting the audio scenario to AUDIO_SCENARIO_GAME_STREAMING via setAudioScenario before joining the channel.
  • Billing for screen sharing streams is based on dimensions in ScreenVideoParameters:
    • If not set, Agora bills as 1280 × 720.
    • If set, Agora bills based on the provided value.

Scenario

Used in screen sharing scenarios.

Parameters

captureParams
The encoding parameters for screen sharing. See ScreenCaptureParameters2.
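
A minimal sketch of sharing the screen on mobile and publishing it when joining a channel; publishScreenCaptureVideo and publishScreenCaptureAudio follow this reference, while the captureVideo/captureAudio fields, publishCameraTrack, and the joinChannel overload with ChannelMediaOptions are assumptions to verify against your SDK version:

  #include "IAgoraRtcEngine.h"

  // Start screen capture, then join the channel and publish the screen tracks.
  void startMobileScreenShare(agora::rtc::IRtcEngine* engine, const char* token,
                              const char* channelId, agora::rtc::uid_t uid) {
    agora::rtc::ScreenCaptureParameters2 params;
    params.captureVideo = true;  // capture the screen video stream
    params.captureAudio = true;  // also capture system audio where supported
    engine->startScreenCapture(params);

    agora::rtc::ChannelMediaOptions options;
    options.publishScreenCaptureVideo = true;  // publish the screen video track
    options.publishScreenCaptureAudio = true;  // publish the captured system audio
    options.publishCameraTrack = false;        // do not publish the camera simultaneously
    engine->joinChannel(token, channelId, uid, options);
  }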

Return Values

  • 0: Success.
  • < 0: Failure.
    • -2 (iOS): Parameter is NULL.
    • -2 (Android): System version too low. Ensure Android API level is at least 21.
    • -3 (Android): Unable to capture system audio. Ensure Android API level is at least 29.

startScreenCapture [2/2]

Starts screen capture from the specified video source.

virtual int startScreenCapture(VIDEO_SOURCE_TYPE sourceType, const ScreenCaptureConfiguration& config) = 0;
This method supports starting screen capture from a specified video source and is suitable for multi-channel or local compositing scenarios. The Agora SDK provides several screen capture methods; choose the one that best fits your needs.
Note:
  • If you start screen capture using this method, you need to call stopScreenCapture to stop it.
  • On Windows, up to four screen capture video streams are supported.
  • On macOS, only one screen capture video stream is supported.
  • This method is available on macOS and Windows only.

Scenario

Used for screen sharing scenarios, supports multi-channel usage.

Timing

You can call this method before or after joining a channel, as follows:
  • Call this method first, then call joinChannel and set publishScreenCaptureVideo to true to start screen sharing.
  • Call this method after joining the channel, then call updateChannelMediaOptions and set publishScreenCaptureVideo to true to start screen sharing.

Parameters

sourceType
Type of video source. See VIDEO_SOURCE_TYPE.
Note: On macOS, this parameter must be set to VIDEO_SOURCE_SCREEN (2).
config
Screen capture configuration. See ScreenCaptureConfiguration.

Return Values

  • 0: Success.
  • < 0: Failure.

startScreenCaptureByDisplayId

Captures the screen by specifying a display ID.

virtual int startScreenCaptureByDisplayId(int64_t displayId, const Rectangle& regionRect, const ScreenCaptureParameters& captureParams) = 0;
Note: This method is only applicable to Windows and macOS platforms.

Timing

You can call this method either before or after joining a channel. The differences are:
  • Call this method before joining the channel, then call joinChannel and set publishScreenTrack or publishSecondaryScreenTrack to true to start screen sharing.
  • Call this method after joining the channel, then call updateChannelMediaOptions and set publishScreenTrack or publishSecondaryScreenTrack to true to start screen sharing.

Parameters

displayId
The ID of the screen to be shared.
Note: On Windows, to share both the primary and secondary screens simultaneously, set displayId to -1 when calling this method.
regionRect
(Optional) The region to share, relative to the screen. If this parameter is not set, the SDK shares the entire screen. If the specified region exceeds the screen bounds, only the visible part is shared. If the width or height is set to 0, the entire screen is shared. See Rectangle.
captureParams
Screen sharing configuration. The default video resolution is 1920 × 1080 (2,073,600 pixels). Agora calculates costs based on this parameter. The video properties of the screen sharing stream should be set through this parameter and are not related to setVideoEncoderConfiguration. See ScreenCaptureParameters.
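
For illustration, a hedged sketch of sharing an entire display with explicit encoding parameters; the dimensions, frameRate, and captureMouseCursor field names follow ScreenCaptureParameters, and engine is an assumed IRtcEngine pointer:

  #include "IAgoraRtcEngine.h"
  #include <cstdint>

  // Share a whole display on Windows/macOS with a 1080p, 15 fps screen stream.
  void shareDisplay(agora::rtc::IRtcEngine* engine, int64_t displayId) {
    agora::rtc::Rectangle region;  // default (empty) region: share the whole screen

    agora::rtc::ScreenCaptureParameters params;
    params.dimensions = agora::rtc::VideoDimensions(1920, 1080);  // encoded resolution
    params.frameRate = 15;                                        // encoded frame rate (fps)
    params.captureMouseCursor = true;                             // include the mouse cursor

    engine->startScreenCaptureByDisplayId(displayId, region, params);
  }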

Return Values

  • 0: Success.
  • < 0: Failure.
    • -2: Invalid parameter.
    • -8: Invalid screen sharing state. This may occur if you are already sharing another screen or window. Call stopScreenCapture to stop the current sharing and try again.

startScreenCaptureByScreenRect

Captures the entire screen or a portion of it by specifying a screen region.

virtual int startScreenCaptureByScreenRect(const Rectangle& screenRect, const Rectangle& regionRect, const ScreenCaptureParameters& captureParams) __deprecated = 0;
Deprecated
Deprecated since v4.5.0. Use startScreenCaptureByDisplayId instead.
You can use this method to share the entire screen or a portion of it by specifying the screen region. You can call this method either before or after joining a channel:
  • Call this method before joining the channel, then call joinChannel and set publishScreenTrack or publishSecondaryScreenTrack to true to start screen sharing.
  • Call this method after joining the channel, then call updateChannelMediaOptions and set publishScreenTrack or publishSecondaryScreenTrack to true to start screen sharing.
Note: This method is only applicable to Windows.

Parameters

screenRect
The screen's position relative to the virtual screen. See Rectangle.
regionRect
(Optional) The region's position relative to the screen. See Rectangle. If this parameter is not set, the SDK shares the entire screen. If the specified region exceeds screen bounds, only the visible part is shared. If width or height is set to 0, the entire screen is shared.
captureParams
Encoding parameters for screen sharing. Default resolution is 1920 × 1080 (2,073,600 pixels). Agora calculates costs based on this parameter. See ScreenCaptureParameters.

Return Values

  • 0: Success.
  • < 0: Failure.
    • -2: Invalid parameter.
    • -8: Invalid screen sharing state. This may occur if you are already sharing another screen or window. Call stopScreenCapture to stop the current sharing and try again.

startScreenCaptureByWindowId

Captures the entire or part of a window by specifying the window ID.

virtual int startScreenCaptureByWindowId(int64_t windowId, const Rectangle& regionRect, const ScreenCaptureParameters& captureParams) = 0;
This method supports capturing the entire or part of a window by specifying the window ID, and supports window sharing for UWP (Universal Windows Platform) applications. Agora has tested mainstream UWP applications with the latest SDK. Details are as follows:
Windows 10:
  • Chrome 76.0.3809.100: Not supported
  • Office Word 18.1903.1152.0: Supported
  • Office Excel 18.1903.1152.0: Not supported
  • Office PPT 18.1903.1152.0: Supported
  • WPS Word 11.1.0.9145: Supported
  • WPS Excel 11.1.0.9145: Supported
  • WPS PPT 11.1.0.9145: Supported
  • Built-in Media Player (all versions): Supported
Windows 8:
  • Chrome (all versions): Supported
  • Office Word (all versions): Supported
  • Office Excel (all versions): Supported
  • Office PPT (all versions): Supported
  • WPS Word 11.1.0.9098: Supported
  • WPS Excel 11.1.0.9098: Supported
  • WPS PPT 11.1.0.9098: Supported
  • Built-in Media Player (all versions): Supported
Windows 7:
  • Chrome 73.0.3683.103: Not supported
  • Office Word (all versions): Supported
  • Office Excel (all versions): Supported
  • Office PPT (all versions): Supported
  • WPS Word 11.1.0.9098: Not supported
  • WPS Excel 11.1.0.9098: Not supported
  • WPS PPT 11.1.0.9098: Supported
  • Built-in Media Player (all versions): Not supported
Note:
  • This method is applicable to macOS and Windows platforms only.
  • startScreenCaptureByWindowId relies on WGC (Windows Graphics Capture) or GDI (Graphics Device Interface) for window sharing. On systems earlier than Windows 10 2004, WGC cannot disable mouse capture, so setting captureMouseCursor to false in ScreenCaptureParameters may have no effect on those devices. See ScreenCaptureParameters.

Scenario

In screen sharing scenarios, you need to call this method to start capturing the screen video stream.

Timing

You can call this method before or after joining a channel, as follows:
  • Call this method before joining a channel, then call joinChannel to join the channel and set publishScreenTrack or publishSecondaryScreenTrack to true to start screen sharing.
  • Call this method after joining a channel, then call updateChannelMediaOptions and set publishScreenTrack or publishSecondaryScreenTrack to true to start screen sharing.

Parameters

windowId
The ID of the window to be shared.
regionRect
(Optional) Set the position of the shared region relative to the screen.
  • If this parameter is not set, the SDK shares the entire screen.
  • If the specified region exceeds the window boundary, the SDK only shares the area within the window.
  • If the specified width or height is 0, the SDK shares the entire window.
See Rectangle.
captureParams
Screen sharing configuration. The default video resolution is 1920 × 1080, i.e., 2,073,600 pixels. Agora uses the value of this parameter for billing. See ScreenCaptureParameters.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails.
    • -2: Invalid parameter.
    • -8: Invalid screen sharing state. This may be because you are already sharing another screen or window. Call stopScreenCapture to stop the current sharing and try again.

stopScreenCapture [2/2]

Stops video capture from the specified video source.

virtual int stopScreenCapture(VIDEO_SOURCE_TYPE sourceType) = 0;

If you started one or more screen captures by calling startScreenCapture, you need to call this method to stop screen capture and specify the screen to stop via the sourceType parameter.

Note: This method is available on macOS and Windows only.

Timing

You can call this method before or after joining a channel.

Parameters

sourceType
Type of video source. See VIDEO_SOURCE_TYPE.

Return Values

  • 0: Success.
  • < 0: Failure.

updateScreenCapture

Updates screen capture parameters.

virtual int updateScreenCapture(const ScreenCaptureParameters2& captureParams) = 0;
If system audio was not captured when screen sharing started and you want to update parameters to publish system audio, follow these steps:
  1. Call this method and set captureAudio to true.
  2. Call updateChannelMediaOptions and set publishScreenCaptureAudio to true to publish the captured audio.
Note:
  • This method is available on Android and iOS only.
  • On iOS, screen sharing is supported on iOS 12.0 and above.
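
A minimal sketch of the two steps above, assuming engine is an initialized IRtcEngine pointer and that captureVideo is the field that keeps video capture enabled:

  #include "IAgoraRtcEngine.h"

  // Add system audio to an ongoing mobile screen share and publish it.
  void publishScreenAudio(agora::rtc::IRtcEngine* engine) {
    // Step 1: start capturing system audio alongside the screen video.
    agora::rtc::ScreenCaptureParameters2 params;
    params.captureVideo = true;
    params.captureAudio = true;
    engine->updateScreenCapture(params);

    // Step 2: publish the captured system audio in the channel.
    agora::rtc::ChannelMediaOptions options;
    options.publishScreenCaptureAudio = true;
    engine->updateChannelMediaOptions(options);
  }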

Parameters

captureParams
The encoding parameters for screen sharing. See ScreenCaptureParameters2.

Return Values

  • 0: Success.
  • < 0: Failure.
    • -2: Invalid parameter.
    • -8: Invalid screen sharing state. This may happen if you are already sharing another screen or window. Try calling stopScreenCapture to stop the current sharing, then restart screen sharing.

updateScreenCaptureParameters

Updates screen capture parameters.

virtual int updateScreenCaptureParameters(const ScreenCaptureParameters& captureParams) = 0;
Note: This method is applicable to Windows and macOS only. Call it after starting screen or window sharing.

Parameters

captureParams
The encoding parameters for screen sharing. Video properties of the screen sharing stream should be set via this parameter only, and are independent of setVideoEncoderConfiguration. See ScreenCaptureParameters.

Return Values

  • 0: Success.
  • < 0: Failure.
    • -2: Invalid parameter.
    • -8: Invalid screen sharing state. This may happen if you are already sharing another screen or window. Try calling stopScreenCapture to stop the current sharing, then restart screen sharing.

updateScreenCaptureRegion

Updates the screen sharing region.

virtual int updateScreenCaptureRegion(const Rectangle& regionRect) = 0;

Timing

Call this method after starting screen or window sharing.

Parameters

regionRect
The position of the screen sharing region relative to the screen or window.
  • If this parameter is not set, the SDK shares the entire screen or window.
  • If the region exceeds the screen or window bounds, the SDK only shares the valid portion.
  • If the width or height is set to 0, the SDK shares the entire screen or window.
See Rectangle.

Return Values

  • 0: Success.
  • < 0: Failure.
    • -2: Invalid parameter.
    • -8: Invalid screen sharing state. This may happen if you are already sharing another screen or window. Try calling stopScreenCapture to stop the current sharing, then restart screen sharing.

Video Pre-processing and Post-processing

Video Rendering

enableInstantMediaRendering

Enables instant rendering of audio and video frames.

virtual int enableInstantMediaRendering() = 0;
Since
Added in v4.1.1.

After calling this method, the SDK enables instant rendering mode to accelerate the rendering of the first frame after the user joins the channel.

Note: Both host and audience need to call this method to experience instant rendering of audio and video frames. To disable this mode, you must call release to destroy the IRtcEngine object.

Scenario

Agora recommends enabling this mode for audience in ultra-low latency live streaming scenarios.

Timing

You must call this method before joining a channel.

Return Values

  • 0: Success.
  • < 0: Failure.

setLocalRenderMode [1/2]

Updates the display mode of the local video view.

virtual int setLocalRenderMode(media::base::RENDER_MODE_TYPE renderMode) __deprecated = 0;
Deprecated
Deprecated since v4.0.0. Use setLocalRenderMode [2/2] instead.

You can call this method after initializing the local video view to update its render mode and mirror mode. This method only affects the video view seen by the local user and does not affect the published video stream.

Note:
  • Make sure to call setupLocalVideo to initialize the local video view before calling this method.
  • You can call this method multiple times during a call to update the local video view.

Parameters

renderMode
The display mode of the local video view. See RENDER_MODE_TYPE.

Return Values

  • 0: Success.
  • < 0: Failure.

setLocalRenderMode [2/2]

Updates the display mode of the local video view.

virtual int setLocalRenderMode(media::base::RENDER_MODE_TYPE renderMode, VIDEO_MIRROR_MODE_TYPE mirrorMode) = 0;

You can call this method after initializing the local video view to update its render mode and mirror mode. This method only affects the video view seen by the local user and does not affect the published video.

Note: setLocalRenderMode only takes effect for the primary camera (PRIMARY_CAMERA_SOURCE). For scenarios using custom video capture or other video sources, use setupLocalVideo instead.

Timing

  • Make sure to call setupLocalVideo to initialize the local video view before calling this method.
  • During a call, you can call this method multiple times to update the display mode of the local video view.

Parameters

renderMode
The display mode of the local video view. See RENDER_MODE_TYPE.
mirrorMode
The mirror mode of the local video view. See VIDEO_MIRROR_MODE_TYPE.
Note: When using the front camera, the SDK enables mirror mode by default; when using the rear camera, mirror mode is disabled by default.
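
For example, a minimal sketch that switches the local preview to fit mode with mirroring disabled; call it only after setupLocalVideo has bound the local view:

  #include "IAgoraRtcEngine.h"

  // Update the local preview display mode during a call.
  void updateLocalView(agora::rtc::IRtcEngine* engine) {
    engine->setLocalRenderMode(agora::media::base::RENDER_MODE_FIT,
                               agora::rtc::VIDEO_MIRROR_MODE_DISABLED);
  }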

Return Values

  • 0: Success.
  • < 0: Failure.

setLocalRenderTargetFps

Sets the maximum frame rate for local video rendering.

virtual int setLocalRenderTargetFps(VIDEO_SOURCE_TYPE sourceType, int targetFps) = 0;

Scenario

In scenarios where video rendering frame rate is not critical (such as screen sharing or online education), you can call this method to set the maximum frame rate for local video rendering. The SDK will try to keep the actual local rendering frame rate near this value to reduce CPU usage and improve system performance.

Timing

You can call this method before or after joining a channel.

Parameters

sourceType
Video source type. See VIDEO_SOURCE_TYPE.
targetFps
Maximum frame rate (fps) for rendering the local video. Supported values: 1, 7, 10, 15, 24, 30, 60.
Note: Set this parameter to a value lower than the actual video frame rate, otherwise the setting will be invalid.

Return Values

  • 0: Success.
  • < 0: Failure.

setLocalVideoMirrorMode

Sets the mirror mode of the local video.

virtual int setLocalVideoMirrorMode(VIDEO_MIRROR_MODE_TYPE mirrorMode) __deprecated = 0;
Deprecated
This method is deprecated. Use setupLocalVideo or setLocalRenderMode instead.

Parameters

mirrorMode
The mirror mode of the local video. See VIDEO_MIRROR_MODE_TYPE.

Return Values

  • 0: Success.
  • < 0: Failure.

setRemoteRenderMode

Updates the display mode of the remote user's video view.

virtual int setRemoteRenderMode(uid_t uid, media::base::RENDER_MODE_TYPE renderMode, VIDEO_MIRROR_MODE_TYPE mirrorMode) = 0;

You can call this method after initializing the remote user's video view to update the render mode and mirror mode. This method only affects the remote video view seen by the local user.

Note:
  • Please call this method after calling setupRemoteVideo to initialize the remote view.
  • During a call, you can call this method multiple times as needed to update the display mode of the remote user's video view.

Parameters

uid
Remote user ID.
renderMode
Render mode of the remote user's view. See RENDER_MODE_TYPE.
mirrorMode
Mirror mode of the remote user's view. See VIDEO_MIRROR_MODE_TYPE.

Return Values

  • 0: Success.
  • < 0: Failure.

setRemoteRenderModeEx

Sets the video render mode for a specified remote user.

virtual int setRemoteRenderModeEx(uid_t uid, media::base::RENDER_MODE_TYPE renderMode, VIDEO_MIRROR_MODE_TYPE mirrorMode, const RtcConnection& connection) = 0;

After initializing the remote video view using setupRemoteVideo, you can call this method to update its render and mirror modes. This method only affects the local user's view.

Note:
  • Call this method after initializing the remote view with setupRemoteVideo.
  • During a call, you can call this method multiple times to update the display mode of the remote video view.

Scenario

This method applies to multi-channel scenarios.

Parameters

uid
Remote user ID.
renderMode
Video render mode. See RENDER_MODE_TYPE.
mirrorMode
Mirror mode. See VIDEO_MIRROR_MODE_TYPE.
connection
Connection information. See RtcConnection.

Return Values

  • 0: Success.
  • < 0: Failure.

setRemoteRenderTargetFps

Sets the maximum frame rate for remote video rendering.

virtual int setRemoteRenderTargetFps(int targetFps) = 0;

Scenario

In scenarios where video rendering frame rate is not critical (e.g., screen sharing, online education) or when the remote user is using a low- or mid-end device, you can call this method to set the maximum frame rate for remote client video rendering. The SDK will try to keep the actual rendering frame rate near this value to reduce CPU usage and improve system performance.

Timing

You can call this method before or after joining a channel.

Parameters

targetFps
Maximum rendering frame rate (fps) for remote video. Supported values: 1, 7, 10, 15, 24, 30, 60.
  • Set this parameter to a value lower than the actual video frame rate, otherwise the setting will not take effect.

Return Values

  • 0: Success.
  • < 0: Failure.

setRenderMode

Sets the render mode of the media player.

virtual int setRenderMode(media::base::RENDER_MODE_TYPE renderMode) = 0;

Parameters

renderMode
The render mode. See RENDER_MODE_TYPE.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails.

setupLocalVideo

Initializes the local video view.

virtual int setupLocalVideo(const VideoCanvas& canvas) = 0;

This method initializes the view for the local video stream on the local device. It only affects what the local user sees and does not affect the publishing of the local video. You can call this method to bind the local video stream to a specified video view (view) and set the rendering and mirror modes for that view. The binding remains effective after leaving the channel. To stop rendering or unbind the local video from the view, set view to NULL.

In real-time interaction scenarios, if you want to preview multiple views from different perspectives locally, you can call this method multiple times and set different positions for each view. For example, after setting the video source to the camera, configure two views with position set to POSITION_POST_CAPTURER_ORIGIN and POSITION_POST_CAPTURER respectively to preview both the original unprocessed frame and the preprocessed frame (with beauty, virtual background, watermark, etc.).

Note: If you only need to update the rendering or mirror mode of the local video view during a call, use setLocalRenderMode instead.
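
A hedged sketch of the dual-view preview described above; rawView and processedView are assumed platform view handles, and the position field and POSITION_* values should be verified against VideoCanvas in your SDK version:

  #include "IAgoraRtcEngine.h"

  // Preview both the raw captured frame and the pre-processed frame in two local views.
  void setupDualPreview(agora::rtc::IRtcEngine* engine,
                        agora::view_t rawView, agora::view_t processedView) {
    agora::rtc::VideoCanvas original;
    original.view = rawView;
    original.uid = 0;  // local user
    original.position = agora::media::base::POSITION_POST_CAPTURER_ORIGIN;  // unprocessed frame
    engine->setupLocalVideo(original);

    agora::rtc::VideoCanvas processed;
    processed.view = processedView;
    processed.uid = 0;
    processed.position = agora::media::base::POSITION_POST_CAPTURER;  // after beauty/watermark/etc.
    engine->setupLocalVideo(processed);
  }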

Scenario

After initialization, call this method to set up local video before joining a channel. In real-time interaction scenarios, if you want to preview multiple views from different perspectives locally, you can call this method multiple times with different view and positions.

Timing

You can call this method before or after joining a channel.

Parameters

canvas
Local video view and related settings. See VideoCanvas.

Return Values

  • 0: Success.
  • < 0: Failure.

setupRemoteVideo

Initializes the video view of a remote user.

virtual int setupRemoteVideo(const VideoCanvas& canvas) = 0;

This method initializes the video view of a remote video stream on the local device and only affects what the local user sees. You can use this method to bind the remote video stream to the specified view and set the rendering and mirror modes. You need to specify the remote user ID when calling this method. If the app has not yet obtained the remote user ID, set it after receiving the onUserJoined callback. To unbind the remote user from the view, set the view parameter to NULL. When the remote user leaves the channel, the SDK automatically unbinds the user. In scenarios where video mixing layout customization is needed on mobile, you can call this method and set a separate view for each sub-video stream in the mixed stream.

Note:
  • To update the rendering or mirror mode of the remote video view during a call, use the setRemoteRenderMode method.
  • When using the recording service, the app does not need to bind views because the recording service does not send video streams. If your app cannot identify the recording service, bind the remote user to the view when the SDK triggers the onFirstRemoteVideoDecoded callback.
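
A minimal sketch of binding a remote user's stream to a view once the user ID is known, typically from the onUserJoined callback; remoteView is an assumed platform view handle:

  #include "IAgoraRtcEngine.h"

  // Bind a remote user's video stream to a native view.
  void bindRemoteUser(agora::rtc::IRtcEngine* engine,
                      agora::rtc::uid_t remoteUid, agora::view_t remoteView) {
    agora::rtc::VideoCanvas canvas;
    canvas.view = remoteView;
    canvas.uid = remoteUid;
    canvas.renderMode = agora::media::base::RENDER_MODE_HIDDEN;  // crop to fill the view
    engine->setupRemoteVideo(canvas);
  }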

Parameters

canvas
The remote video view and its related settings. See VideoCanvas.

Return Values

  • 0: Success.
  • < 0: Failure.

setupRemoteVideoEx

Initializes the video view of a remote user.

virtual int setupRemoteVideoEx(const VideoCanvas& canvas, const RtcConnection& connection) = 0;

This method initializes the view for the remote video stream on the local device and only affects what you see locally. You can use this method to bind the remote video stream to a specified view and set the rendering and mirror modes for that view. You need to set the remote user's ID using VideoCanvas before the remote user joins the channel. If you haven't obtained the remote user's ID, you can set it after receiving the onUserJoined callback. If video recording is enabled, the recording service joins the channel as a virtual client, and other clients will also receive the onUserJoined callback. Do not bind this virtual client to a view, as it does not send any video stream. To unbind the remote user from the view, set the view parameter to NULL. After the remote user leaves the channel, the SDK automatically unbinds the view.

Note: To update the rendering or mirror mode of the remote video view during a call, use the setRemoteRenderModeEx method.

Scenario

This method applies to multi-channel scenarios.

Parameters

canvas
Settings for the remote video view. See VideoCanvas.
connection
Connection information. See RtcConnection.

Return Values

  • 0: Success.
  • < 0: Failure.

startMediaRenderingTracing

Enables tracing of the video frame rendering process.

virtual int startMediaRenderingTracing() = 0;
Since
Added in v4.1.1.

After successfully calling this method, the SDK tracks the rendering status of video frames in the channel and reports relevant information via the onVideoRenderingTracingResult callback.

Note:
  • If you do not call this method, the SDK starts tracking video rendering events from the time you call joinChannel. You can call this method at an appropriate time based on your application scenario to set the starting point for rendering event tracing.
  • After the local user leaves the current channel, the SDK automatically starts tracking video rendering events again when you rejoin the channel.

Scenario

You can call this method in conjunction with UI settings in your app (such as buttons or sliders) to enhance user experience. For example, call this method when the user clicks the "Join Channel" button, then use the onVideoRenderingTracingResult callback to get the rendering time of video frames to optimize related metrics.

Return Values

  • 0: Success.
  • < 0: Failure.

startMediaRenderingTracingEx

Enables tracing of the video frame rendering process.

virtual int startMediaRenderingTracingEx(const RtcConnection& connection) = 0;
Since
Added since v4.1.1.

After calling this method, the SDK starts tracing the rendering status of video frames in the channel from the time of the call and reports related information through the onVideoRenderingTracingResult callback. You can call this method in conjunction with UI controls in your app (such as buttons or sliders) to optimize user experience. For example, call this method when the user clicks the 'Join Channel' button and use the onVideoRenderingTracingResult callback to obtain the time taken for video frame rendering, thereby optimizing related metrics.

Note:
  • If you do not call this method, the SDK starts tracing video rendering events from the moment joinChannel is called. You can call this method at an appropriate time based on your actual application scenario to set the starting point for tracing video rendering events.
  • After the local user leaves the current channel, the SDK automatically starts tracing video rendering events again when the user rejoins a channel.

Scenario

This method is applicable in multi-channel scenarios.

Parameters

connection
Connection information. See RtcConnection.

Return Values

  • 0: Success.
  • < 0: Failure.

onTranscodedStreamLayoutInfo

Triggered when the local user receives a mixed video stream with video layout information.

virtual void onTranscodedStreamLayoutInfo(uid_t uid, int width, int height, int layoutCount, const VideoLayout* layoutlist)

When you first receive the mixed video from the video mixing server, or when the layout information of the mixed stream changes, the SDK triggers this callback to report the layout information of each sub-video stream in the mixed video.

Note: This callback applies to Android and iOS platforms only.

Trigger Timing

Triggered when the local user first receives the mixed video from the video mixing server, or when the layout information of the mixed stream changes.

Parameters

uid
User ID of the publisher of the mixed video stream.
width
Width of the mixed video (in pixels).
height
Height of the mixed video (in pixels).
layoutCount
Number of layout information entries in the mixed video.
layoutlist
Array of layout information for sub-video streams. See VideoLayout.
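
A handler override might look like the sketch below; it assumes the callback sits in your subclass of the engine's event handler and, for brevity, only logs the reported layout summary via printf, leaving the per-stream geometry in VideoLayout untouched.

// Hypothetical override inside a class deriving from the engine's event handler.
void onTranscodedStreamLayoutInfo(uid_t uid, int width, int height,
                                  int layoutCount,
                                  const VideoLayout* layoutlist) override {
  printf("mixed stream from uid %u: %dx%d, %d sub-streams\n",
         (unsigned)uid, width, height, layoutCount);
  (void)layoutlist;  // per-stream position and size live in each VideoLayout entry
}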

onVideoRenderingTracingResult

Callback for video frame rendering events.

virtual void onVideoRenderingTracingResult(uid_t uid, MEDIA_TRACE_EVENT currentEvent, VideoRenderingTracingInfo tracingInfo)

After calling startMediaRenderingTracing or joining a channel, the SDK triggers this callback to report video frame rendering events and various metrics during rendering. You can optimize these metrics to improve the rendering efficiency of the first video frame.

Trigger Timing

Triggered after calling startMediaRenderingTracing or joining a channel.

Parameters

uid
The user ID.
currentEvent
The current video frame rendering event. See MEDIA_TRACE_EVENT.
tracingInfo
Metrics during the video frame rendering process. See VideoRenderingTracingInfo.
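
A minimal handler override, assuming it sits in your IRtcEngineEventHandler subclass, could simply record when each milestone is reached; the individual timings are carried in the fields of tracingInfo (see VideoRenderingTracingInfo).

// Hypothetical override: record each rendering milestone for the given user.
void onVideoRenderingTracingResult(uid_t uid, MEDIA_TRACE_EVENT currentEvent,
                                   VideoRenderingTracingInfo tracingInfo) override {
  printf("uid %u reached rendering event %d\n", (unsigned)uid, (int)currentEvent);
  (void)tracingInfo;  // e.g., compare the reported timings across builds
}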

Raw Video Data

getMirrorApplied

Callback triggered each time the SDK receives a video frame, used to set whether to mirror the captured video.

virtual bool getMirrorApplied() { return false; }

If you want the obtained video data to be a mirrored image of the original video, you need to register this callback when calling registerVideoFrameObserver. After successfully registering the video frame observer, the SDK triggers this callback each time it receives a video frame. You need to set whether to mirror the video frame through the return value of this callback.

Note:
  • On Windows platform, the supported video data formats for this callback include: I420, RGBA, and TextureBuffer.
  • On Android platform, the supported video data formats for this callback include: I420, RGBA, and Texture.
  • On iOS platform, the supported video data formats for this callback include: I420, RGBA, and CVPixelBuffer.
  • On macOS platform, the supported video data formats for this callback include: I420 and RGBA.
  • This method and the setVideoEncoderConfiguration method both support setting mirror effects. It is recommended to use only one of them. Using both simultaneously may result in overlapping mirror effects, causing the setting to fail.

Timing

Triggered each time the SDK receives a video frame.

Return Values

  • true: The captured video is mirrored.
  • false: (Default) The captured video is not mirrored.
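
Within an IVideoFrameObserver subclass, mirroring is requested simply by returning true; the fragment below shows only the relevant override.

// Deliver observed frames as a mirror image of the original capture.
bool getMirrorApplied() override { return true; }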

getObservedFramePosition

Sets the frame position for the video observer.

virtual uint32_t getObservedFramePosition()
After successfully registering the video data observer, the SDK uses this callback to determine whether to trigger the onCaptureVideoFrame, onRenderVideoFrame, and onPreEncodeVideoFrame callbacks at specific video frame processing positions, so that you can observe the locally captured video data, remotely received video data, and pre-encoded video data. You can set one or more positions to observe by modifying the return value according to actual scenarios:
  • POSITION_POST_CAPTURER (1 << 0): Position after video capture, corresponds to onCaptureVideoFrame callback.
  • POSITION_PRE_RENDERER (1 << 1): Position before rendering remote video data, corresponds to onRenderVideoFrame callback.
  • POSITION_PRE_ENCODER (1 << 2): Position before encoding, corresponds to onPreEncodeVideoFrame callback.
Note:
  • Use | (bitwise OR operator) to set multiple observation positions.
  • The default observation positions are POSITION_POST_CAPTURER (1 << 0) and POSITION_PRE_RENDERER (1 << 1).
  • To save system resources, it is recommended to reduce the number of observation positions.
  • When the video processing mode is PROCESS_MODE_READ_WRITE and the observation position is POSITION_PRE_ENCODER | POSITION_POST_CAPTURER, getMirrorApplied does not take effect. You need to modify the video processing mode or observation position.

Return Values

If the method call succeeds, it returns a bitmask used to control the frame position of the video observer. See VIDEO_MODULE_POSITION.
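
For example, to observe both captured and pre-encoded frames, an IVideoFrameObserver subclass can combine positions with the bitwise OR operator (only the relevant override is shown):

// Observe frames after capture and before encoding.
uint32_t getObservedFramePosition() override {
  return POSITION_POST_CAPTURER | POSITION_PRE_ENCODER;  // (1 << 0) | (1 << 2)
}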

getRotationApplied

Callback indicating whether to rotate the captured video.

virtual bool getRotationApplied() { return false; }

If you want to rotate the captured video according to the rotation member in the VideoFrame class, make sure to register this callback when calling registerVideoFrameObserver. After successfully registering the video frame observer, the SDK triggers this callback each time it receives a video frame. You need to use the return value of this callback to set whether to rotate the video frame.

Note:
  • On Android, this callback supports video data formats: I420, RGBA, and Texture.
  • On Windows, this callback supports video data formats: I420, RGBA, and TextureBuffer.
  • On iOS, this callback supports video data formats: I420, RGBA, and CVPixelBuffer.
  • On macOS, this callback supports video data formats: I420 and RGBA.

Timing

Triggered each time the SDK receives a video frame.

Return Values

  • true: Rotate the captured video.
  • false: (Default) Do not rotate the captured video.

getVideoFormatPreference

Sets the format of raw video data output by the SDK.

virtual VIDEO_PIXEL_FORMAT getVideoFormatPreference() { return VIDEO_PIXEL_DEFAULT; }

After you call registerVideoFrameObserver to register the video frame observer, the SDK triggers this callback each time it receives a video frame. You need to set the preferred video data format in the return value of this callback.

Note: The default pixel format of raw video (VIDEO_PIXEL_DEFAULT) is as follows:
  • On Windows, the default video frame type is YUV420.
  • On Android, the default video frame type may be I420Buffer or TextureBuffer. The texture format of TextureBuffer may be OES or RGB. If the video frame type returned by calling getVideoFormatPreference is VIDEO_PIXEL_DEFAULT, you need to adapt to I420Buffer or TextureBuffer when processing video data. Cases where the video frame type is fixed to I420Buffer include but are not limited to:
    • Specific devices, such as: LG G5 SE (H848), Google Pixel 4a, Samsung Galaxy A7, or Xiaomi Max.
    • When beauty effect extensions are integrated and video denoising or low-light enhancement is enabled.
  • On iOS and macOS, the default video frame type may be I420 or CVPixelBufferRef.

Return Values

Sets the format of raw video data. See VIDEO_PIXEL_FORMAT.
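
As an illustration, an IVideoFrameObserver subclass that always wants planar YUV data regardless of the platform default could return I420; this is a fragment only, and the exact enum value name is assumed from VIDEO_PIXEL_FORMAT.

// Request I420 so the observer callbacks always deliver planar YUV data.
VIDEO_PIXEL_FORMAT getVideoFormatPreference() override { return VIDEO_PIXEL_I420; }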

getVideoFrameProcessMode

Sets the video frame processing mode.

virtual VIDEO_FRAME_PROCESS_MODE getVideoFrameProcessMode() { return PROCESS_MODE_READ_ONLY; }

After successfully registering the video frame observer, the SDK triggers this callback each time it receives a video frame. You need to set your preferred processing mode through the return value of this callback.

Timing

Triggered each time the SDK receives a video frame.

Return Values

When the method call succeeds, returns a VIDEO_FRAME_PROCESS_MODE enum value indicating the video frame processing mode you set. See VIDEO_FRAME_PROCESS_MODE.
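
If you intend to modify frames and hand them back to the SDK (see onCaptureVideoFrame below), the override needs to return the read-write mode; a one-line fragment inside an IVideoFrameObserver subclass:

// Read-write mode is required for returning modified frames to the SDK.
VIDEO_FRAME_PROCESS_MODE getVideoFrameProcessMode() override {
  return PROCESS_MODE_READ_WRITE;
}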

registerVideoFrameObserver

Registers a raw video frame observer object.

virtual int registerVideoFrameObserver(IVideoFrameObserver* observer) = 0;

Use this method to register a custom IVideoFrameObserver instance to observe raw video frames (such as YUV or RGBA format). After successful registration, the SDK triggers the corresponding callbacks when video frames are received. You can use this raw video data in scenarios such as virtual background and beauty effects.

Note: When processing video data returned in the callback, note that the width and height parameters may change under the following conditions:
  • When the network condition deteriorates, the video resolution will gradually decrease.
  • When the user adjusts the video profile, the resolution returned in the callback will also change accordingly.

Timing

Call this method before joining a channel.

Parameters

observer
Observer instance. To release the instance, set this value to NULL. See IVideoFrameObserver.

Return Values

  • 0: Success.
  • < 0: Failure.
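
A registration sketch might look as follows, assuming mediaEngine is the interface on which registerVideoFrameObserver is exposed and MyVideoFrameObserver is your IVideoFrameObserver subclass implementing the callbacks in this section:

// Hypothetical registration flow; the observer must outlive the registration.
MyVideoFrameObserver observer;
mediaEngine->registerVideoFrameObserver(&observer);   // call before joining a channel
// ... capture, render, and pre-encode callbacks fire while registered ...
mediaEngine->registerVideoFrameObserver(NULL);        // release the observer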

onCaptureVideoFrame

Callback for locally captured video frames.

virtual bool onCaptureVideoFrame(agora::rtc::VIDEO_SOURCE_TYPE sourceType, VideoFrame& videoFrame) = 0;

This callback is used to obtain raw video data captured by the local device for preprocessing. You can directly modify videoFrame in this callback and return true to send the modified video data to the SDK. To pass the preprocessed video data to the SDK, you must call getVideoFrameProcessMode to set the video processing mode to read-write (PROCESS_MODE_READ_WRITE).

Note:
  • If the video data type is RGBA, the SDK does not support processing data with an alpha channel.
  • It is recommended to ensure that the modified parameters in videoFrame match the actual data in the video frame buffer, otherwise rotation, stretching, or other abnormalities may occur in local preview or remote display.
  • The default video format returned by this callback is YUV420. If you need another format, you can set the preferred format in the getVideoFormatPreference callback.

Trigger Timing

Triggered each time the SDK captures a video frame after successfully registering the video data observer.

Parameters

sourceType
Video source type. See VIDEO_SOURCE_TYPE.
videoFrame
Output parameter representing the video frame. See VideoFrame.
Note: The default video frame data format obtained through this callback is as follows:
  • Android: I420 or RGB (GLES20.GL_TEXTURE_2D)
  • iOS: I420 or CVPixelBufferRef
  • macOS: I420 or CVPixelBufferRef
  • Windows: YUV420

Return Values

  • When the video processing mode is PROCESS_MODE_READ_ONLY:
    • true: Reserved for future use.
    • false: Reserved for future use.
  • When the video processing mode is PROCESS_MODE_READ_WRITE:
    • true: Instructs the SDK to accept the video frame.
    • false: Instructs the SDK to discard the video frame.
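
A read-write preprocessing sketch (inside an IVideoFrameObserver subclass whose getVideoFrameProcessMode returns PROCESS_MODE_READ_WRITE) might darken the luma plane in place; the I420 layout and the member names of videoFrame follow VideoFrame and are assumptions for illustration.

// Hypothetical in-place preprocessing of a locally captured I420 frame.
bool onCaptureVideoFrame(agora::rtc::VIDEO_SOURCE_TYPE sourceType,
                         VideoFrame& videoFrame) override {
  if (videoFrame.yBuffer != NULL) {
    for (int row = 0; row < videoFrame.height; ++row) {
      uint8_t* y = videoFrame.yBuffer + row * videoFrame.yStride;
      for (int col = 0; col < videoFrame.width; ++col) {
        y[col] = (uint8_t)(y[col] * 3 / 4);  // simple darkening for demonstration
      }
    }
  }
  return true;  // accept the (modified) frame
}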

onFrame

Callback triggered when the player receives a video frame.

virtual void onFrame(const VideoFrame* frame) = 0;

After registering the video frame observer, this callback is triggered each time the player receives a video frame and reports detailed information about the video frame.

Parameters

frame
Video frame information. See VideoFrame.

onPreEncodeVideoFrame

Callback for pre-encoded video frames.

virtual bool onPreEncodeVideoFrame(agora::rtc::VIDEO_SOURCE_TYPE sourceType, VideoFrame& videoFrame) = 0;

After successfully registering the video frame observer, the SDK triggers this callback each time it receives a video frame. In this callback, you can obtain the video data before encoding and process it according to your scenario. After processing, you can return the processed video data to the SDK.

Note:
  • If you want to send the preprocessed video data back to the SDK, you must call getVideoFrameProcessMode to set the video processing mode to read-write (PROCESS_MODE_READ_WRITE).
  • If you want to obtain the pre-encoded video data from the second screen capture source, you need to set the frame position to POSITION_PRE_ENCODER (1 << 2) using getObservedFramePosition.
  • The video data obtained from this callback has already been preprocessed, including cropping, rotation, and beauty effects.
  • It is recommended to ensure that the modified parameters in videoFrame match the actual situation of the video frame in the buffer, otherwise rotation anomalies, image distortion, and other issues may occur in local preview and remote video display.

Trigger Timing

Triggered when the SDK receives a video frame before encoding.

Parameters

sourceType
Video source type. See VIDEO_SOURCE_TYPE.
videoFrame
Output parameter representing the video frame before encoding. See VideoFrame.
Note: The default format of video frame data obtained through this callback is as follows:
  • Android: I420 or RGB (GLES20.GL_TEXTURE_2D).
  • iOS: I420 or CVPixelBufferRef.
  • macOS: I420 or CVPixelBufferRef.
  • Windows: YUV420.

Return Values

  • When the video processing mode is PROCESS_MODE_READ_ONLY:
    • true: Reserved for future use.
    • false: Reserved for future use.
  • When the video processing mode is PROCESS_MODE_READ_WRITE:
    • true: Instructs the SDK to accept the video frame.
    • false: Instructs the SDK to discard the video frame.

onRenderVideoFrame

Callback triggered when receiving a video frame sent by a remote user.

virtual bool onRenderVideoFrame(const char* channelId, rtc::uid_t remoteUid, VideoFrame& videoFrame) = 0;

After successfully registering the video frame observer, the SDK triggers this callback each time it receives a video frame from a remote user. You can use this callback to access the remote video data before rendering and process it according to your scenario. The default returned video format is YUV420. If you need a different format, set your preferred format in getVideoFormatPreference.

Note:
  • If you want to send the preprocessed video data back to the SDK, you must first call getVideoFrameProcessMode to set the processing mode to read-write (PROCESS_MODE_READ_WRITE).
  • If the video data you retrieve is of type RGBA, the SDK does not support processing the alpha channel.
  • It is recommended to ensure that the modified parameters in videoFrame match the actual frame data in the video frame buffer; otherwise, this may cause issues such as rotation or stretching in local preview or remote display.

Trigger Timing

This callback is triggered each time the SDK receives a video frame sent by a remote user.

Parameters

channelId
Channel ID.
remoteUid
User ID of the remote user who sent the current video frame.
videoFrame
Video frame. See VideoFrame.
Note: The default video frame data format obtained through this callback is as follows:
  • Android: I420 or RGB (GLES20.GL_TEXTURE_2D)
  • iOS: I420 or CVPixelBufferRef
  • macOS: I420 or CVPixelBufferRef
  • Windows: YUV420

Return Values

  • When the video processing mode is PROCESS_MODE_READ_ONLY:
    • true: Reserved for future use.
    • false: Reserved for future use.
  • When the video processing mode is PROCESS_MODE_READ_WRITE:
    • true: Instruct the SDK to accept the video frame.
    • false: Instruct the SDK to discard the video frame.

Encoded Video Data

registerVideoEncodedFrameObserver

Registers an observer object to receive encoded video frames.

virtual int registerVideoEncodedFrameObserver(IVideoEncodedFrameObserver* observer) = 0;

If you only need to monitor encoded video frames (such as H.264 format) without decoding and rendering them, it is recommended to register a custom IVideoEncodedFrameObserver instance using this method.

Note: Call this method before joining a channel.

Parameters

observer
Encoded video frame observer object. See IVideoEncodedFrameObserver.

Return Values

  • 0: Success.
  • < 0: Failure.
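
A minimal sketch, assuming mediaEngine exposes registerVideoEncodedFrameObserver and MyEncodedFrameObserver derives from IVideoEncodedFrameObserver:

// Hypothetical registration of an encoded-frame observer before joining.
MyEncodedFrameObserver encodedObserver;
int ret = mediaEngine->registerVideoEncodedFrameObserver(&encodedObserver);
// To receive encoded frames without decoding, also set encodedFrameOnly to
// true via setRemoteVideoSubscriptionOptions.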

onEncodedVideoFrameReceived

Reports that the receiver has received the encoded video frame sent by the remote user.

virtual bool onEncodedVideoFrameReceived(const char* channelId, rtc::uid_t uid, const uint8_t* imageBuffer, size_t length, const rtc::EncodedVideoFrameInfo& videoEncodedFrameInfo) = 0;
Since
Available since v4.6.0.

If you call the setRemoteVideoSubscriptionOptions method and set encodedFrameOnly to true, the SDK triggers this callback locally to report the received encoded video frame information.

Parameters

channelId
The channel name.
uid
The remote user ID.
imageBuffer
The buffer of the encoded video image.
length
The length of the video image data.
videoEncodedFrameInfo
Information about the encoded video frame. See EncodedVideoFrameInfo.

Return Values

  • true: Callback handled successfully.
  • false: Callback handling failed.

Custom Video Capture and Rendering

createCustomVideoTrack

Creates a custom video track.

virtual video_track_id_t createCustomVideoTrack() = 0;
To publish a custom video source, follow these steps:
  1. Call this method to create a video track and get the video track ID.
  2. Call joinChannel to join a channel. In ChannelMediaOptions, set customVideoTrackId to the video track ID you want to publish, and set publishCustomVideoTrack to true.
  3. Call pushVideoFrame and set videoTrackId to the video track ID set in step 2 to publish the corresponding custom video source in the channel.

Return Values

  • The video track ID if the method call succeeds, which serves as the unique identifier of the video track.
  • 0xffffffff if the method call fails.
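
The three steps above might be wired up as in the sketch below; the engine pointer, token, and channel name are assumptions, and the ChannelMediaOptions member names follow the parameters mentioned in step 2.

// Hypothetical publish flow for a custom video source.
agora::rtc::video_track_id_t trackId = engine->createCustomVideoTrack();
if (trackId != 0xffffffff) {                       // 0xffffffff means failure
  agora::rtc::ChannelMediaOptions options;
  options.customVideoTrackId = trackId;            // track to publish
  options.publishCustomVideoTrack = true;
  engine->joinChannel(token, "demo_channel", 0, options);
  // Afterwards, call pushVideoFrame with videoTrackId set to trackId.
}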

destroyCustomVideoTrack

Destroys the specified video track.

virtual int destroyCustomVideoTrack(video_track_id_t video_track_id) = 0;

Parameters

video_track_id
The video track ID returned by createCustomVideoTrack.

Return Values

  • 0: Success.
  • < 0: Failure.

pushVideoFrame

Pushes external raw video frames to the SDK via a video track.

virtual int pushVideoFrame(base::ExternalVideoFrame* frame, unsigned int videoTrackId = 0) = 0;
To publish a custom video source, follow these steps:
  1. Call createCustomVideoTrack to create a video track and obtain the track ID.
  2. Call joinChannel to join a channel. In ChannelMediaOptions, set customVideoTrackId to the track ID and publishCustomVideoTrack to true.
  3. Call pushVideoFrame, and set videoTrackId to the track ID from step 2 to publish the custom video source in the channel.
Note: If you only need to push one custom video source to the channel, you can directly call setExternalVideoSource, and the SDK will automatically create a video track with videoTrackId set to 0.
Warning: After calling pushVideoFrame, the custom video stream is still counted toward video duration and incurs charges even if you stop pushing external video frames to the SDK. Agora recommends taking appropriate actions to avoid such billing:
  • If you no longer need to capture external video data, call destroyCustomVideoTrack to destroy the custom video track.
  • If you only want to use external video data for local preview without publishing it to the channel, call muteLocalVideoStream to stop sending the video stream, or call updateChannelMediaOptions to set publishCustomVideoTrack to false.

Scenario

Since v4.2.3, the SDK supports the ID3D11Texture2D video format, which is widely used in gaming scenarios. To push such video frames to the SDK, call pushVideoFrame and, in frame, set format to VIDEO_TEXTURE_ID3D11TEXTURE2D and set the d3d11_texture_2d and texture_slice_index members.

Parameters

frame
The external raw video frame to be pushed. See ExternalVideoFrame.
videoTrackId
Video track ID returned by createCustomVideoTrack.
Note: If you only need to push one custom video source, set videoTrackId to 0.

Return Values

  • 0: Success.
  • < 0: Failure.
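
Pushing a single raw I420 frame through the custom track might look like this; mediaEngine, the buffer pointer, the resolution, and the timestamp helper are assumptions, and the member names follow ExternalVideoFrame.

// Hypothetical push of one externally captured I420 frame.
agora::media::base::ExternalVideoFrame frame;
frame.type = agora::media::base::ExternalVideoFrame::VIDEO_BUFFER_RAW_DATA;
frame.format = agora::media::base::VIDEO_PIXEL_I420;
frame.buffer = i420Buffer;          // contiguous I420 data supplied by the app
frame.stride = 640;                 // luma stride in pixels
frame.height = 360;
frame.timestamp = currentTimeMs();  // hypothetical millisecond clock helper
mediaEngine->pushVideoFrame(&frame, trackId);  // trackId from createCustomVideoTrack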

setExternalRemoteEglContext

Sets the EGL context used to render remote video streams.

virtual int setExternalRemoteEglContext(void* eglContext) = 0;

This method replaces the SDK's default remote EGL context, allowing you to manage the EGL context yourself. The SDK automatically releases the context when the engine is destroyed.

Note: This method is only applicable on Android.

Timing

Call this method before joining a channel.

Parameters

eglContext
EGL context used to render remote video streams.

Return Values

  • 0: Success.
  • < 0: Failure.

setExternalVideoSource

Configures an external video source.

virtual int setExternalVideoSource(bool enabled, bool useTexture, EXTERNAL_VIDEO_SOURCE_TYPE sourceType = VIDEO_FRAME, rtc::SenderOptions encodedVideoOption = rtc::SenderOptions()) = 0;

After enabling the external video source by calling this method, you can call pushVideoFrame to push external video data to the SDK.

Note: Dynamically switching video sources while in a channel is not supported. To switch from an external to an internal video source, you must leave the channel, call this method to disable the external video source, and then rejoin the channel.

Timing

Call this method before joining a channel.

Parameters

enabled
Whether to use an external video source:
  • true: Use an external video source, and the SDK will prepare to receive external video frames.
  • false: (Default) Do not use an external video source.
useTexture
Whether to use Texture format for external video frames:
  • true: Use Texture format.
  • false: (Default) Do not use Texture format.
sourceType
The type of the external video frame, that is, whether the frame is encoded. See EXTERNAL_VIDEO_SOURCE_TYPE.
encodedVideoOption
Video encoding options. Required only when sourceType is ENCODED_VIDEO_FRAME.
Note: Contact technical support to configure this parameter.

Return Values

  • 0: Success.
  • < 0: Failure.
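
A minimal setup sketch, assuming mediaEngine is the interface exposing setExternalVideoSource and that raw (non-encoded) frames will be pushed afterwards:

// Hypothetical external-source setup; call before joining a channel.
// sourceType is left at its default (VIDEO_FRAME, i.e., non-encoded frames).
mediaEngine->setExternalVideoSource(
    true,    // use an external video source
    false);  // raw buffers rather than Texture format
// Join the channel, then push frames with pushVideoFrame (videoTrackId 0 by default).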