IRtcEngine

The base interface class of the RTC SDK that implements the main functions of real-time audio and video.

IRtcEngine provides the main methods for the app to call. You must call createAgoraRtcEngine to create an IRtcEngine object before calling other APIs.

registerEventHandler

Registers the main event handler.

abstract registerEventHandler(eventHandler: IRtcEngineEventHandler): boolean;

The SDK uses the IRtcEngineEventHandler interface class to send callback event notifications to the app, and the app receives these notifications by implementing the methods of this interface class. All methods of the interface class have default (empty) implementations, so the app can implement only the events it cares about. In the callback methods, the app should not perform time-consuming operations or call APIs that may block (such as sendStreamMessage), as this may affect the SDK's operation.

Parameters

eventHandler
The callback event to be added. See IRtcEngineEventHandler.

Return Values

  • true: Method call succeeds.
  • false: Method call fails. See Error Codes for details and resolution suggestions.

addListener

Adds an IRtcEngineEvent listener.

addListener?<EventType extends keyof IRtcEngineEvent>(
      eventType: EventType,
      listener: IRtcEngineEvent[EventType]
    ): void;

After successfully calling this method, you can listen to events and retrieve data from the corresponding IRtcEngine object through IRtcEngineEvent. You can add multiple listeners for the same event as needed.

Parameters

eventType
The name of the target event to listen to. See IRtcEngineEvent.
listener
The callback function corresponding to eventType. For example, to add onJoinChannelSuccess:
const onJoinChannelSuccess = (connection: RtcConnection, elapsed: number) => {};
engine.addListener('onJoinChannelSuccess', onJoinChannelSuccess);

addVideoWatermark

Adds a local video watermark.

abstract addVideoWatermark(
    watermarkUrl: string,
    options: WatermarkOptions
  ): number;
Deprecated
This method is deprecated. Use addVideoWatermarkWithConfig instead.
This method adds a PNG image as a watermark to the local published live video stream. Users in the same live channel, audience of CDN live streaming, and capture devices can see or capture the watermark image. Currently, only one watermark can be added to the live video stream. A newly added watermark replaces the previous one. The watermark coordinates depend on the settings in the setVideoEncoderConfiguration method:
  • If the video orientation (OrientationMode) is fixed to landscape or landscape in adaptive mode, landscape coordinates are used.
  • If the video orientation is fixed to portrait or portrait in adaptive mode, portrait coordinates are used.
  • When setting watermark coordinates, the image area of the watermark must not exceed the video dimensions set in setVideoEncoderConfiguration; otherwise, the excess part will be cropped.
Note:
  • You must call this method after calling enableVideo.
  • If you only want to add a watermark during CDN streaming, you can use this method or startRtmpStreamWithTranscoding to set the watermark.
  • The watermark image must be in PNG format. This method supports all pixel formats of PNG images: RGBA, RGB, Palette, Gray, and Alpha_gray.
  • If the size of the PNG image to be added does not match the size set in this method, the SDK will scale or crop the PNG image to match the settings.
  • If local video is set to mirror mode, the local watermark will also be mirrored. To avoid mirrored watermark when local users view their own video, it is recommended not to use mirroring and watermark features together. Implement the local watermark feature at the application level.

Parameters

watermarkUrl
The local path of the watermark image to be added. This method supports adding watermark images from local absolute/relative paths.
options
The configuration options for the watermark image to be added. See WatermarkOptions.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
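The cropping rule above can be checked before calling the method. The helper below is illustrative and not part of the SDK; it only verifies that a watermark rectangle (the same x/y/width/height fields WatermarkOptions uses for its coordinate rectangles) stays inside the dimensions set in setVideoEncoderConfiguration.

```typescript
// Illustrative helper (not part of the SDK): returns true when the watermark
// rectangle lies fully inside the encoded video frame. Any portion outside
// the frame set via setVideoEncoderConfiguration is cropped by the SDK.
interface Rect {
  x: number;
  y: number;
  width: number;
  height: number;
}

function watermarkFitsFrame(
  mark: Rect,
  frameWidth: number,
  frameHeight: number
): boolean {
  return (
    mark.x >= 0 &&
    mark.y >= 0 &&
    mark.x + mark.width <= frameWidth &&
    mark.y + mark.height <= frameHeight
  );
}

// A 200x100 watermark at (1100, 50) overflows a 1280x720 frame horizontally
// (1100 + 200 = 1300 > 1280), so its right edge would be cropped.
```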

addVideoWatermarkWithConfig

Adds a watermark image to the local video stream.

abstract addVideoWatermarkWithConfig(configs: WatermarkConfig): number;
Since
Available since v4.6.2.

You can use this method to overlay a watermark image on the local video stream and configure its position, size, and visibility in the preview using WatermarkConfig.

Parameters

configs
Watermark configuration. See WatermarkConfig.

Return Values

  • 0: Success.
  • < 0: Failure.

adjustAudioMixingPlayoutVolume

Adjusts the local playback volume of the music file.

abstract adjustAudioMixingPlayoutVolume(volume: number): number;

Timing

You must call this method after calling startAudioMixing and receiving the onAudioMixingStateChanged(AudioMixingStatePlaying) callback.

Parameters

volume
Volume of the music file. The range is [0,100], where 100 (default) is the original volume.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

adjustAudioMixingPublishVolume

Adjusts the remote playback volume of the music file.

abstract adjustAudioMixingPublishVolume(volume: number): number;

This method adjusts the playback volume of the mixed music file on the remote end.

Timing

You must call this method after calling startAudioMixing and receiving the onAudioMixingStateChanged(AudioMixingStatePlaying) callback.

Parameters

volume
Volume of the music file. The range is [0,100], where 100 (default) is the original volume.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

adjustAudioMixingVolume

Adjusts the playback volume of the music file.

abstract adjustAudioMixingVolume(volume: number): number;

This method adjusts the playback volume of the mixed music file on both the local and remote ends.

Note: Calling this method does not affect the playback volume of sound effect files set by the playEffect method.

Timing

You must call this method after startAudioMixing.

Parameters

volume
Volume of the music file. The range is [0,100], where 100 (default) is the original volume.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
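The three adjustAudioMixing*Volume methods above all take a volume in [0,100]. A minimal sketch of guarding that range before calling them, assuming `engine` is an initialized IRtcEngine (the clamp helper is illustrative, not part of the SDK):

```typescript
// Illustrative guard (not part of the SDK): keep a music-file volume inside
// the documented [0, 100] range before passing it to the adjust* methods.
function clampMixingVolume(volume: number): number {
  return Math.max(0, Math.min(100, Math.round(volume)));
}

// Hedged usage sketch, assuming `engine` is an initialized IRtcEngine:
// engine.adjustAudioMixingVolume(clampMixingVolume(80));        // local + remote
// engine.adjustAudioMixingPlayoutVolume(clampMixingVolume(50)); // local only
// engine.adjustAudioMixingPublishVolume(clampMixingVolume(100)); // remote only
```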

adjustCustomAudioPlayoutVolume

Adjusts the playout volume of a custom audio capture track locally.

abstract adjustCustomAudioPlayoutVolume(
    trackId: number,
    volume: number
  ): number;

After setting the local playout volume with this method, you can call it again at any time to readjust the volume.

Note: Before calling this method, make sure you have already called the createCustomAudioTrack method to create a custom audio capture track.

Parameters

trackId
Audio track ID. Set this parameter to the custom audio track ID returned by the createCustomAudioTrack method.
volume
Playout volume of the custom captured audio, ranging from [0, 100]. 0 means mute, and 100 means original volume.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.

adjustCustomAudioPublishVolume

Adjusts the playback volume of a custom audio capture track on the remote end.

abstract adjustCustomAudioPublishVolume(
    trackId: number,
    volume: number
  ): number;

After calling this method to set the playback volume of the audio on the remote end, you can call this method again to readjust the volume.

Note: Before calling this method, make sure you have called the createCustomAudioTrack method to create a custom audio capture track.

Parameters

trackId
Audio track ID. Set this parameter to the custom audio track ID returned by the createCustomAudioTrack method.
volume
Playback volume of the custom captured audio, ranging from [0,100]. 0 means mute, 100 means original volume.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

adjustLoopbackSignalVolume

Adjusts the volume of the loopback audio signal.

abstract adjustLoopbackSignalVolume(volume: number): number;

After calling enableLoopbackRecording to enable loopback recording, you can call this method to adjust the volume of the loopback signal.

Parameters

volume
The volume of the loopback audio signal, ranging from 0 to 100. 100 (default) represents the original volume.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

adjustPlaybackSignalVolume

Adjusts the local playback volume of all remote users' audio.

abstract adjustPlaybackSignalVolume(volume: number): number;

This method adjusts the signal volume of all remote users after mixing and playing back locally. If you want to adjust the signal volume of a specific remote user locally, we recommend calling adjustUserPlaybackSignalVolume.

Timing

Can be called before or after joining a channel.

Parameters

volume
Volume, range is [0,400].
  • 0: Mute.
  • 100: (Default) Original volume.
  • 400: 4 times the original volume, with overflow protection.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
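The [0,400] scale above maps linearly onto a gain multiplier, with 100 being unity gain. A small worked example of that mapping (illustrative only, not part of the SDK):

```typescript
// Illustrative conversion (not part of the SDK): the [0, 400] volume value
// maps linearly to a gain multiplier, where 100 is the original volume.
function playbackVolumeToGain(volume: number): number {
  if (volume < 0 || volume > 400) {
    throw new RangeError('volume must be in [0, 400]');
  }
  return volume / 100;
}

// playbackVolumeToGain(400) is a 4x gain; the SDK applies overflow
// protection at this upper end of the range.
```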

adjustRecordingSignalVolume

Adjusts the volume of the recording audio signal.

abstract adjustRecordingSignalVolume(volume: number): number;

If you only need to mute the audio signal, it is recommended to use muteRecordingSignal.

Timing

Can be called before or after joining a channel.

Parameters

volume
Volume, ranging from [0,400].
  • 0: Mute.
  • 100: (Default) Original volume.
  • 400: Four times the original volume, with built-in overflow protection.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

adjustUserPlaybackSignalVolume

Adjusts the playback volume of a specified remote user.

abstract adjustUserPlaybackSignalVolume(uid: number, volume: number): number;

You can call this method during a call to adjust the playback volume of a specified remote user. To adjust the playback volume of multiple users, call this method multiple times.

Timing

You need to call this method after joining a channel.

Parameters

uid
The ID of the remote user.
volume
The volume, with a range of [0,400].
  • 0: Mute.
  • 100: (Default) Original volume.
  • 400: Four times the original volume, with overflow protection.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

clearVideoWatermarks

Removes added video watermarks.

abstract clearVideoWatermarks(): number;

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

complain

Reports call quality issues.

abstract complain(callId: string, description: string): number;

This method allows users to report call quality issues. You need to call it after leaving the channel.

Parameters

callId
Call ID. You can get this parameter by calling getCallId.
description
Description of the call. The length should be less than 800 bytes.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
    • -1: General error (not categorized).
    • -2: Invalid parameter.
    • -7: Method called before IRtcEngine is initialized.

configRhythmPlayer

Configures the virtual metronome.

abstract configRhythmPlayer(config: AgoraRhythmPlayerConfig): number;
Deprecated
This method is deprecated since v4.6.2.
  • After calling startRhythmPlayer, you can call this method to reconfigure the virtual metronome.
  • Once the virtual metronome is enabled, the SDK starts playing the specified audio files from the beginning and controls the playback duration of each file based on the beatsPerMinute setting in AgoraRhythmPlayerConfig. For example, if beatsPerMinute is set to 60, the SDK plays one beat per second. If the file duration exceeds the beat duration, the SDK only plays the portion corresponding to the beat duration.
  • By default, the sound of the virtual metronome is not published to remote users. If you want remote users to hear the metronome, set publishRhythmPlayerTrack in ChannelMediaOptions to true after calling this method.

Timing

You can call this method before or after joining a channel.

Parameters

config
Metronome configuration. See AgoraRhythmPlayerConfig.
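The beatsPerMinute rule described above determines how long each beat file plays; audio beyond the beat duration is truncated. The arithmetic can be sketched as follows (illustrative helper, not part of the SDK):

```typescript
// Illustrative arithmetic (not part of the SDK): beatsPerMinute in
// AgoraRhythmPlayerConfig determines how long each beat's audio file plays;
// any audio beyond this duration is not played.
function beatDurationMs(beatsPerMinute: number): number {
  if (beatsPerMinute <= 0) {
    throw new RangeError('beatsPerMinute must be positive');
  }
  return 60000 / beatsPerMinute;
}

// At beatsPerMinute = 60 the SDK plays one beat per second (1000 ms per beat).
```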

createCustomVideoTrack

Creates a custom video track.

abstract createCustomVideoTrack(): number;
When you need to publish a custom captured video in the channel, follow these steps:
  1. Call this method to create a video track and get the video track ID.
  2. When calling joinChannel to join the channel, set customVideoTrackId in ChannelMediaOptions to the video track ID you want to publish, and set publishCustomVideoTrack to true.
  3. Call pushVideoFrame and specify videoTrackId as the video track ID from step 2 to publish the corresponding custom video source in the channel.

Return Values

  • If the method call succeeds, returns the video track ID as the unique identifier for the video track.
  • If the method call fails, returns 0xffffffff. See Error Codes for details and resolution suggestions.
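The three steps above can be sketched as follows. The validity check is an illustrative helper (not part of the SDK) built around the documented 0xffffffff failure value; `engine` is an assumed, initialized IRtcEngine.

```typescript
// Illustrative check (not part of the SDK): createCustomVideoTrack returns
// 0xffffffff on failure, so validate the ID before using it in
// ChannelMediaOptions.customVideoTrackId or pushVideoFrame.
const INVALID_TRACK_ID = 0xffffffff;

function isValidVideoTrackId(trackId: number): boolean {
  return trackId >= 0 && trackId !== INVALID_TRACK_ID;
}

// Hedged usage sketch, assuming `engine` is an initialized IRtcEngine:
// const trackId = engine.createCustomVideoTrack();
// if (isValidVideoTrackId(trackId)) {
//   // 1. joinChannel with { customVideoTrackId: trackId,
//   //    publishCustomVideoTrack: true } in ChannelMediaOptions.
//   // 2. pushVideoFrame with videoTrackId = trackId.
// }
```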

createDataStream

Creates a data stream.

abstract createDataStream(config: DataStreamConfig): number;
Note: Within the lifecycle of IRtcEngine, each user can create up to 5 data streams. The data streams are destroyed when leaving the channel. To use them again, you need to recreate them.

Timing

This method can be called before or after joining a channel.

Parameters

config
Data stream configuration. See DataStreamConfig.

Return Values

  • The ID of the created data stream, if the method call succeeds.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
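The 5-stream lifecycle rule in the note above can be sketched with some app-side bookkeeping. This tracker is illustrative only (not part of the SDK):

```typescript
// Illustrative bookkeeping (not part of the SDK): each user may create at
// most 5 data streams within an IRtcEngine lifecycle. Streams are destroyed
// when leaving the channel, so the count resets there.
const MAX_DATA_STREAMS = 5;

class DataStreamTracker {
  private streamIds: number[] = [];

  canCreate(): boolean {
    return this.streamIds.length < MAX_DATA_STREAMS;
  }

  // Call with the ID returned by createDataStream on success.
  register(streamId: number): void {
    if (!this.canCreate()) {
      throw new Error('data stream limit (5) reached');
    }
    this.streamIds.push(streamId);
  }

  // Streams do not survive leaving the channel; recreate them as needed.
  onLeaveChannel(): void {
    this.streamIds = [];
  }
}
```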

createMediaPlayer

Creates a media player object.

abstract createMediaPlayer(): IMediaPlayer;

Before calling other APIs under the IMediaPlayer class, you need to call this method to create a media player instance. If you need to create multiple instances, you can call this method multiple times.

Timing

You can call this method before or after joining a channel.

Return Values

  • The IMediaPlayer object, if the method call succeeds.
  • An empty pointer, if the method call fails.

createVideoEffectObject

Creates an IVideoEffectObject video effect object.

abstract createVideoEffectObject(
    bundlePath: string,
    type?: MediaSourceType
  ): IVideoEffectObject;
Since
Available since v4.6.2.

Parameters

bundlePath
The path to the video effect resource bundle.
type
The media source type. See MediaSourceType.

Return Values

  • The IVideoEffectObject object, if the method call succeeds.
  • An empty pointer, if the method call fails.

destroyCustomVideoTrack

Destroys the specified video track.

abstract destroyCustomVideoTrack(videoTrackId: number): number;

Parameters

videoTrackId
The video track ID returned by the createCustomVideoTrack method.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

destroyMediaPlayer

Destroys the media player.

abstract destroyMediaPlayer(mediaPlayer: IMediaPlayer): number;

Parameters

mediaPlayer
IMediaPlayer object.

Return Values

  • ≥ 0: The method call succeeds and returns the media player ID.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

destroyRendererByConfig

Destroys multiple video renderer objects.

abstract destroyRendererByConfig(
    sourceType: VideoSourceType,
    channelId?: string,
    uid?: number
  ): void;

Parameters

sourceType
Type of video source. See VideoSourceType.
channelId
The channel ID.
uid
Remote user ID.

destroyRendererByView

Destroys a single video renderer object.

abstract destroyRendererByView(view: any): void;

Parameters

view
The HTMLElement object to be destroyed.

destroyVideoEffectObject

Destroys a video effect object.

abstract destroyVideoEffectObject(
    videoEffectObject: IVideoEffectObject
  ): number;
Since
Available since v4.6.2.

Parameters

videoEffectObject
The video effect object to destroy. See IVideoEffectObject.

Return Values

  • 0: Success.
  • < 0: Failure.

disableAudio

Disables the audio module.

abstract disableAudio(): number;

The audio module is enabled by default. You can call this method to disable it.

Note: This method resets the entire engine and has a relatively slow response time. Therefore, we recommend using the following methods to control the audio module separately:
  • enableLocalAudio: whether to enable local audio capture.
  • muteLocalAudioStream: whether to publish the local audio stream.
  • muteRemoteAudioStream: whether to subscribe to and play a remote audio stream.
  • muteAllRemoteAudioStreams: whether to subscribe to and play all remote audio streams.

Timing

Can be called before or after joining a channel. Remains effective after leaving the channel.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

disableAudioSpectrumMonitor

Disables audio spectrum monitoring.

abstract disableAudioSpectrumMonitor(): number;

After calling enableAudioSpectrumMonitor, if you want to disable audio spectrum monitoring, call this method.

Note: This method can be called before or after joining a channel.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

disableVideo

Disables the video module.

abstract disableVideo(): number;

This method disables the video module.

Note:
  • This method sets the internal engine to a disabled state, which remains effective after leaving the channel.
  • Calling this method resets the entire engine and has a relatively slow response time. Depending on your needs, you can use the following methods to control specific video module features independently:
    • enableLocalVideo: whether to enable local video capture.
    • muteLocalVideoStream: whether to publish the local video stream.
    • muteRemoteVideoStream: whether to subscribe to and play a remote video stream.
    • muteAllRemoteVideoStreams: whether to subscribe to and play all remote video streams.

Timing

Can be called before or after joining a channel:
  • If called before joining a channel, audio-only mode is enabled.
  • If called after joining a channel, the mode switches from video to audio-only. You can call the enableVideo method to switch back to video mode.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and troubleshooting tips.

enableAudio

Enables the audio module.

abstract enableAudio(): number;

The audio module is enabled by default. If you have disabled it using disableAudio, you can call this method to re-enable it.

Timing

Can be called before or after joining a channel. Remains effective after leaving the channel.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

enableAudioSpectrumMonitor

Enables audio spectrum monitoring.

abstract enableAudioSpectrumMonitor(intervalInMS?: number): number;

If you want to obtain audio spectrum data of local or remote users, register an audio spectrum observer and enable audio spectrum monitoring.

Note: This method can be called before or after joining a channel.

Parameters

intervalInMS
The time interval (ms) at which the SDK triggers the onLocalAudioSpectrum and onRemoteAudioSpectrum callbacks. The default is 100 ms. The value must not be less than 10 ms; otherwise, the method call fails.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
    • -2: Invalid parameter settings.

enableAudioVolumeIndication

Enables audio volume indication.

abstract enableAudioVolumeIndication(
    interval: number,
    smooth: number,
    reportVad: boolean
  ): number;

This method allows the SDK to periodically report to the app the volume information of the local user who is sending streams and the remote users (up to 3) with the highest instantaneous volume.

Timing

Can be called before or after joining a channel.

Parameters

interval
The time interval for the volume indication:
  • ≤ 0: Disables the volume indication feature.
  • > 0: Returns the interval for volume indication, in milliseconds. It is recommended to set it above 100 ms. Must not be less than 10 ms, otherwise the onAudioVolumeIndication callback will not be received.
smooth
The smoothing factor that specifies the sensitivity of the volume indication. The range is [0,10], and the recommended value is 3. The larger the value, the more sensitive the fluctuation; the smaller the value, the smoother the fluctuation.
reportVad
  • true: Enables the local voice detection feature. When enabled, the vad parameter in the onAudioVolumeIndication callback reports whether a human voice is detected locally.
  • false: (Default) Disables the local voice detection feature. Except in scenarios where the engine automatically performs local voice detection, the vad parameter in the onAudioVolumeIndication callback does not report whether a human voice is detected locally.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
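The interval and smooth constraints above can be validated before calling the method. This validator is illustrative (not part of the SDK); `engine` is an assumed, initialized IRtcEngine.

```typescript
// Illustrative validation (not part of the SDK), following the documented
// constraints: interval <= 0 disables reporting; a positive interval must be
// at least 10 ms (>= 100 ms recommended); smooth must lie in [0, 10].
function validateVolumeIndication(interval: number, smooth: number): boolean {
  if (interval > 0 && interval < 10) {
    return false; // the onAudioVolumeIndication callback would not fire
  }
  if (smooth < 0 || smooth > 10) {
    return false;
  }
  return true;
}

// Hedged usage sketch, assuming `engine` is an initialized IRtcEngine:
// if (validateVolumeIndication(200, 3)) {
//   engine.enableAudioVolumeIndication(200, 3, true);
// }
```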

enableCameraCenterStage

Enables or disables the Center Stage feature.

abstract enableCameraCenterStage(enabled: boolean): number;

The Center Stage feature is disabled by default. You need to call this method to enable it. To disable this feature, call this method again and set enabled to false.

Note: This method applies to iOS and macOS only. Due to high performance requirements, you need to use this feature on the following or higher-performance devices:
  • iPad:
    • 12.9-inch iPad Pro (5th generation)
    • 11-inch iPad Pro (3rd generation)
    • iPad (9th generation)
    • iPad mini (6th generation)
    • iPad Air (5th generation)
  • 2020 M1 MacBook Pro 13" + iPhone 11 (using iPhone as an external camera for MacBook)
Agora recommends calling isCameraCenterStageSupported before enabling this feature to check whether the current device supports Center Stage.

Scenario

The Center Stage feature can be widely used in scenarios such as online meetings, live shows, and online education. Hosts can enable this feature to keep themselves centered in the frame whether they move or not, ensuring a better visual presentation.

Timing

This method must be called after the camera is successfully started, i.e., after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1).

Parameters

enabled
Whether to enable the Center Stage feature:
  • true: Enable Center Stage.
  • false: Disable Center Stage.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

enableContentInspect

Enables/disables local snapshot upload.

abstract enableContentInspect(
    enabled: boolean,
    config: ContentInspectConfig
  ): number;

After enabling local snapshot upload, the SDK captures and uploads snapshots of the video sent by the local user based on the module type and frequency you set in ContentInspectConfig. After capturing, the Agora server sends a callback notification to your server via HTTPS request and uploads all snapshots to the third-party cloud storage you specify.

Note:
  • Before calling this method, make sure you have enabled the local snapshot upload service in the Agora Console.
  • If you choose the Agora proprietary plugin for video moderation (ContentInspectSupervision), you must integrate the dynamic library libagora_content_inspect_extension.dll. Deleting this library will cause the local snapshot upload feature to fail.

Timing

You can call this method either before or after joining a channel.

Parameters

enabled
Whether to enable local snapshot upload:
  • true: Enable local snapshot upload.
  • false: Disable local snapshot upload.
config
Local snapshot upload configuration. See ContentInspectConfig.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

enableCustomAudioLocalPlayback

Sets whether to play external audio sources locally.

abstract enableCustomAudioLocalPlayback(
    trackId: number,
    enabled: boolean
  ): number;

After calling this method to enable local playback of externally captured audio sources, you can call this method again and set enabled to false to stop local playback. You can call adjustCustomAudioPlayoutVolume to adjust the local playback volume of the custom audio capture track.

Note: Before calling this method, make sure you have called the createCustomAudioTrack method to create a custom audio capture track.

Parameters

trackId
Audio track ID. Set this parameter to the custom audio track ID returned by the createCustomAudioTrack method.
enabled
Whether to play the external audio source locally:
  • true: Play locally.
  • false: (Default) Do not play locally.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

enableDualStreamMode

Enables or disables dual-stream mode on the sender and sets the low-quality video stream.

abstract enableDualStreamMode(
    enabled: boolean,
    streamConfig?: SimulcastStreamConfig
  ): number;
Deprecated
This method is deprecated since v4.2.0. Use setDualStreamMode instead.
You can call this method on the sending side to enable or disable dual-stream mode. Dual-stream refers to high-quality and low-quality video streams:
  • High-quality stream: High resolution and high frame rate video stream.
  • Low-quality stream: Low resolution and low frame rate video stream.
After enabling dual-stream mode, you can call setRemoteVideoStreamType on the receiving side to choose whether to receive the high-quality or low-quality video stream.
Note:
  • This method applies to all types of streams sent by the sender, including but not limited to camera-captured video streams, screen sharing streams, and custom captured video streams.
  • To enable dual-stream in multi-channel scenarios, call the enableDualStreamModeEx method.
  • This method can be called before or after joining a channel.

Parameters

enabled
Whether to enable dual-stream mode:
  • true: Enable dual-stream mode.
  • false: (Default) Disable dual-stream mode.
streamConfig
Configuration of the low-quality video stream. See SimulcastStreamConfig.
Note: If mode is set to DisableSimulcastStream, then streamConfig will not take effect.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

enableEncryption

Enables or disables built-in encryption.

abstract enableEncryption(enabled: boolean, config: EncryptionConfig): number;

The SDK automatically disables encryption after the user leaves the channel. To re-enable encryption, you must call this method before the user joins the channel again.

Note:
  • All users in the same channel must use the same encryption mode and key when calling this method.
  • If built-in encryption is enabled, the RTMP streaming feature cannot be used.

Scenario

Scenarios with high security requirements.

Timing

This method must be called before joining a channel.

Parameters

enabled
Whether to enable built-in encryption:
  • true: Enable built-in encryption.
  • false: (Default) Disable built-in encryption.
config
Configure the built-in encryption mode and key. See EncryptionConfig.

Return Values

  • 0: Success.
  • < 0: Failure
    • -2: Invalid parameters. You need to reset the parameters.
    • -4: Incorrect encryption mode or failed to load external encryption library. Check the enum value or reload the external encryption library.
    • -7: SDK not initialized. You must create the IRtcEngine object and complete initialization before calling the API.
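The failure codes above can be surfaced to users with a small mapping. This helper is illustrative (not part of the SDK) and only restates the documented codes:

```typescript
// Illustrative mapping (not part of the SDK): the documented failure codes
// of enableEncryption and the suggested fixes.
function describeEncryptionError(code: number): string {
  switch (code) {
    case -2:
      return 'Invalid parameters: reset the EncryptionConfig values';
    case -4:
      return 'Wrong encryption mode or the external encryption library failed to load';
    case -7:
      return 'SDK not initialized: create the IRtcEngine object first';
    default:
      return code < 0 ? 'Unknown failure' : 'Success';
  }
}
```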

enableExtension

Enables or disables an extension.

abstract enableExtension(
    provider: string,
    extension: string,
    enable?: boolean,
    type?: MediaSourceType
  ): number;
Note:
  • To enable multiple extensions, call this method multiple times.
  • Once this method is called successfully, no other extensions can be loaded.

Timing

It is recommended to call this method after joining a channel.

Parameters

provider
Name of the extension provider.
extension
Name of the extension.
enable
Whether to enable the extension:
  • true: Enable the extension.
  • false: Disable the extension.
type
Media source type of the extension. See MediaSourceType.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
    • -3: The extension dynamic library is not loaded. Agora recommends checking whether the library is in the expected location or whether the library name is correct.

enableInEarMonitoring

Enables in-ear monitoring.

abstract enableInEarMonitoring(
    enabled: boolean,
    includeAudioFilters: EarMonitoringFilterType
  ): number;

This method is used to enable or disable in-ear monitoring.

Note: Users must use headphones (wired or Bluetooth) to hear the in-ear monitoring effect.

Timing

Can be called before or after joining a channel.

Parameters

enabled
Enable/disable in-ear monitoring:
  • true: Enable in-ear monitoring.
  • false: (Default) Disable in-ear monitoring.
includeAudioFilters
Type of in-ear monitoring audio filter. See EarMonitoringFilterType.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
    • -8: The current audio route is not Bluetooth or headphones. Switch the audio route to headphones or a Bluetooth device and try again.

enableInstantMediaRendering

Enables accelerated rendering of audio and video frames.

abstract enableInstantMediaRendering(): number;

After this method is successfully called, the SDK enables accelerated rendering for both video and audio, speeding up the first frame rendering and audio output after joining a channel.

Note: Both broadcaster and audience must call this method to experience accelerated rendering. Once successfully called, you can only disable accelerated rendering by calling the release method to destroy the IRtcEngine object.

Scenario

Agora recommends enabling this mode for audience users in live streaming scenarios.

Timing

This method must be called before joining a channel.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
    • -7: The IRtcEngine is not initialized when the method is called.

enableLocalAudio

Enables or disables local audio capture.

abstract enableLocalAudio(enabled: boolean): number;
When a user joins a channel, audio is enabled by default. This method can disable or re-enable local audio, i.e., stop or restart local audio capture. The difference between this method and muteLocalAudioStream is:
  • enableLocalAudio: Enables or disables local audio capture and processing. After using enableLocalAudio to disable or enable local capture, there will be a brief interruption when listening to remote playback locally.
  • muteLocalAudioStream: Stops or resumes sending the local audio stream without affecting the audio capture status.

Scenario

This method does not affect receiving and playing remote audio streams. enableLocalAudio(false) is suitable for scenarios where you only want to receive remote audio without sending locally captured audio.

Timing

This method can be called before or after joining a channel. Calling it before joining only sets the device state; it takes effect immediately after joining.

Parameters

enabled
  • true: Re-enable local audio, i.e., enable local audio capture (default);
  • false: Disable local audio, i.e., stop local audio capture.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

enableLocalVideo

Enables or disables local video capture.

abstract enableLocalVideo(enabled: boolean): number;

This method disables or re-enables local video capture without affecting the reception of remote video. After calling enableVideo, local video capture is enabled by default. If you call enableLocalVideo(false) in a channel to disable local video capture, it also stops publishing the video stream in the channel. To re-enable it, call enableLocalVideo(true), then call updateChannelMediaOptions and set the options parameter to publish the locally captured video stream to the channel. After successfully disabling or enabling local video capture, the remote user receives the onRemoteVideoStateChanged callback.

Note:
  • This method can be called before or after joining a channel, but settings made before joining take effect only after joining.
  • This method sets the internal engine to an enabled state and remains effective after leaving the channel.

Parameters

enabled
Whether to enable local video capture.
  • true: (Default) Enables local video capture.
  • false: Disables local video capture. After disabling, the remote user will not receive the local user's video stream; however, the local user can still receive the remote user's video stream. When set to false, this method does not require a local camera.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and troubleshooting tips.
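
The re-enable sequence described above (first enableLocalVideo(true), then updateChannelMediaOptions to publish again) can be sketched like this. The `publishCameraTrack` field follows the SDK's ChannelMediaOptions, but the interfaces here are hand-written stand-ins, not the real SDK types.

```typescript
// Stand-ins for the pieces of the SDK used below; real types come from the SDK.
interface MediaOptions { publishCameraTrack?: boolean }
interface VideoEngine {
  enableLocalVideo(enabled: boolean): number;
  updateChannelMediaOptions(options: MediaOptions): number;
}

// Re-enable capture and re-publish, in the order the text above requires:
// first enableLocalVideo(true), then updateChannelMediaOptions.
function resumeLocalVideo(engine: VideoEngine): number {
  const ret = engine.enableLocalVideo(true);
  if (ret !== 0) return ret; // bail out on failure
  return engine.updateChannelMediaOptions({ publishCameraTrack: true });
}

// Stub engine that records the call order.
const order: string[] = [];
const stub: VideoEngine = {
  enableLocalVideo: () => { order.push('enableLocalVideo'); return 0; },
  updateChannelMediaOptions: () => { order.push('updateChannelMediaOptions'); return 0; },
};
resumeLocalVideo(stub);
```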

enableLoopbackRecording

Enables loopback recording.

abstract enableLoopbackRecording(
    enabled: boolean,
    deviceName?: string
  ): number;

After enabling loopback recording, the sound played by the sound card will be mixed into the local audio stream and can be sent to the remote side.

Note:
  • This method can be called before or after joining a channel.
  • If you call disableAudio to disable the audio module, loopback recording will also be disabled. To re-enable loopback recording, you need to call enableAudio to enable the audio module and then call enableLoopbackRecording again.

Parameters

enabled
Whether to enable loopback recording:
  • true: Enable loopback recording. The virtual sound card name appears in the system's Sound > Output settings.
  • false: (Default) Disable loopback recording. The virtual sound card name does not appear in the system's Sound > Output settings.
deviceName
Note: Electron for UnionTech OS SDK does not support this parameter.
  • macOS: The device name of the virtual sound card. Default is empty, which means using the AgoraALD virtual sound card for recording.
  • Windows: The device name of the sound card. Default is empty, which means using the built-in sound card of the device.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

enableSoundPositionIndication

Enables/disables stereo sound for remote users.

abstract enableSoundPositionIndication(enabled: boolean): number;

To use setRemoteVoicePosition for spatial audio positioning, make sure to call this method before joining the channel to enable stereo sound for remote users.

Parameters

enabled
Whether to enable stereo sound for remote users:
  • true: Enable stereo sound.
  • false: Disable stereo sound.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

enableSpatialAudio

Enables or disables spatial audio.

abstract enableSpatialAudio(enabled: boolean): number;

After enabling spatial audio, you can call setRemoteUserSpatialAudioParams to set spatial audio parameters for remote users.

Note:
  • This method can be called before or after joining a channel.
  • This method depends on the spatial audio dynamic library libagora_spatial_audio_extension.dll. Removing the library will prevent the feature from working properly.

Parameters

enabled
Whether to enable spatial audio:
  • true: Enable spatial audio.
  • false: Disable spatial audio.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

enableVideo

Enables the video module.

abstract enableVideo(): number;

The video module is disabled by default. You need to call this method to enable it. To disable the video module later, call the disableVideo method.

Note:
  • This method sets the internal engine to an enabled state and remains effective after leaving the channel.
  • Calling this method resets the entire engine and has a relatively slow response time. Depending on your needs, you can use methods such as enableLocalVideo, muteLocalVideoStream, muteRemoteVideoStream, and muteAllRemoteVideoStreams to control specific video module features independently.
  • When this method is called in a channel, it resets the settings of enableLocalVideo, muteRemoteVideoStream, and muteAllRemoteVideoStreams, so use with caution.

Timing

This method can be called before joining a channel or during a call:
  • If called before joining a channel, the video module is enabled.
  • If called during an audio-only call, it automatically switches to a video call.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and troubleshooting tips.

enableVideoImageSource

Enables or disables the placeholder image streaming feature.

abstract enableVideoImageSource(
    enable: boolean,
    options: ImageTrackOptions
  ): number;

When publishing a video stream, you can call this method to replace the current video stream content with a custom image. After enabling this feature, you can customize the placeholder image through the ImageTrackOptions parameter. After you disable the feature, remote users once again see the video stream you are publishing.

Timing

It is recommended to call this method after joining a channel.

Parameters

enable
Whether to enable placeholder image streaming:
  • true: Enable placeholder image streaming.
  • false: (Default) Disable placeholder image streaming.
options
Placeholder image settings. See ImageTrackOptions.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

enableVirtualBackground

Enables/disables the virtual background.

abstract enableVirtualBackground(
    enabled: boolean,
    backgroundSource: VirtualBackgroundSource,
    segproperty: SegmentationProperty,
    type?: MediaSourceType
  ): number;

The virtual background feature allows you to replace the original background of the local user with a static image, dynamic video, blur effect, or separate the portrait from the background to achieve picture-in-picture. Once successfully enabled, all users in the channel can see the customized background. Call this method after enableVideo or startPreview.

Note:
  • Using a video as a virtual background may cause continuous memory usage growth, which could lead to app crashes. To avoid this, reduce the resolution and frame rate of the video.
  • This feature requires high device performance. When this method is called, the SDK automatically checks the device capability. It is recommended to use devices with the following specs:
    • CPU: i5 or better
  • It is recommended to use this feature in scenarios that meet the following conditions:
    • Use a high-definition camera and ensure even lighting.
    • The video frame contains few objects, the user is shown as a half-body portrait with minimal occlusion, and the background color is simple and different from the user's clothing.
  • This method depends on the virtual background dynamic library libagora_segmentation_extension.dll. Deleting this library will prevent the feature from functioning properly.

Parameters

enabled
Whether to enable the virtual background:
  • true: Enable virtual background.
  • false: Disable virtual background.
backgroundSource
Custom background. See VirtualBackgroundSource. To adapt the resolution of the custom background image to the SDK's video capture resolution, the SDK scales and crops the image without distortion.
segproperty
Processing properties of the background image. See SegmentationProperty.
type
Media source type for applying the effect. See MediaSourceType.
Note: This parameter only supports the following settings in this method:
  • For camera-captured local video, keep the default value PrimaryCameraSource.
  • For custom captured video, set this parameter to CustomVideoSource.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and troubleshooting.
    • -4: Device capability does not meet the requirements for using virtual background. Consider using a higher-performance device.
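
A hedged sketch of enabling a blurred background and handling the -4 capability error follows. The interfaces are hand-written stand-ins for VirtualBackgroundSource and SegmentationProperty, and the enum values BACKGROUND_BLUR and BLUR_DEGREE_HIGH are assumptions; check BackgroundSourceType and BlurDegree in the SDK before relying on them.

```typescript
// Hand-written stand-ins; the real types come from the SDK.
interface BackgroundSource { backgroundSourceType: number; blurDegree?: number }
interface SegProperty { greenCapacity?: number }
interface VbEngine {
  enableVirtualBackground(
    enabled: boolean,
    src: BackgroundSource,
    seg: SegProperty
  ): number;
}

// Assumed enum values; verify against BackgroundSourceType/BlurDegree in the SDK.
const BACKGROUND_BLUR = 3;
const BLUR_DEGREE_HIGH = 3;

function enableBlurBackground(engine: VbEngine): string {
  const ret = engine.enableVirtualBackground(
    true,
    { backgroundSourceType: BACKGROUND_BLUR, blurDegree: BLUR_DEGREE_HIGH },
    {}
  );
  // -4 means the device does not meet the capability requirements (see above).
  if (ret === -4) return 'device cannot run virtual background';
  return ret === 0 ? 'enabled' : `error ${ret}`;
}

// Stub engine simulating the -4 capability error on a low-end device.
const weakDevice: VbEngine = { enableVirtualBackground: () => -4 };
```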

enableVoiceAITuner

Enables or disables the AI voice tuner feature.

abstract enableVoiceAITuner(enabled: boolean, type: VoiceAiTunerType): number;

The AI voice tuner feature enhances voice quality and adjusts voice tone styles.

Scenario

Social and entertainment scenarios with high audio quality requirements, such as online karaoke, online podcasts, and live shows.

Timing

Can be called before or after joining a channel.

Parameters

enabled
Whether to enable the AI voice tuner feature:
  • true: Enable the AI voice tuner.
  • false: (Default) Disable the AI voice tuner.
type
AI tuner effect type. See VoiceAiTunerType.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.

enableWebSdkInteroperability

Enables interoperability with the Web SDK (applicable only in live broadcast scenarios).

abstract enableWebSdkInteroperability(enabled: boolean): number;
Deprecated: The SDK automatically enables interoperability with the Web SDK, so you no longer need to call this method.

This method enables or disables interoperability with the Web SDK. If there are users joining the channel via the Web SDK, make sure to call this method; otherwise, Web users will see a black screen when viewing the Native stream. This method is only applicable in live broadcast scenarios. In communication scenarios, interoperability is enabled by default.

Parameters

enabled
Whether to enable interoperability with the Web SDK:
  • true: Enable interoperability.
  • false: (Default) Disable interoperability.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

getAudioMixingCurrentPosition

Gets the playback progress of the music file.

abstract getAudioMixingCurrentPosition(): number;

This method gets the current playback progress of the music file in milliseconds.

Note:
  • You must call this method after calling startAudioMixing and receiving the onAudioMixingStateChanged(AudioMixingStatePlaying) callback.
  • If you need to call getAudioMixingCurrentPosition multiple times, make sure the interval between calls is greater than 500 ms.

Return Values

  • ≥ 0: Success. Returns the current playback position of the music file (ms). 0 indicates the music file has not started playing.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
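
The polling constraint above (calls at least 500 ms apart) can be wrapped in a small helper. This is an illustrative sketch: `getPosition` stands in for engine.getAudioMixingCurrentPosition(), and the helper itself is not part of the SDK.

```typescript
// Poll playback progress while respecting the >500 ms minimum interval.
// `getPosition` stands in for engine.getAudioMixingCurrentPosition().
function startProgressPolling(
  getPosition: () => number,
  onProgress: (ms: number) => void,
  intervalMs = 600 // must stay above 500 ms, per the note above
): () => void {
  if (intervalMs <= 500) throw new Error('poll interval must exceed 500 ms');
  const timer = setInterval(() => {
    const pos = getPosition();
    if (pos >= 0) onProgress(pos); // negative values are errors; skip them
  }, intervalMs);
  return () => clearInterval(timer); // call the returned function to stop polling
}
```

Start polling only after receiving onAudioMixingStateChanged(AudioMixingStatePlaying), and call the returned stop function when playback ends.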

getAudioMixingDuration

Gets the total duration of the music file.

abstract getAudioMixingDuration(): number;

This method gets the total duration of the music file in milliseconds.

Timing

You must call this method after startAudioMixing and receiving the onAudioMixingStateChanged(AudioMixingStatePlaying) callback.

Return Values

  • ≥ 0: Success. Returns the total duration of the music file (ms).
  • < 0: Failure. See Error Codes for details and resolution suggestions.

getAudioMixingPlayoutVolume

Gets the local playback volume of the music file.

abstract getAudioMixingPlayoutVolume(): number;

You can call this method to get the local playback volume of the mixed music file, which helps troubleshoot volume-related issues.

Timing

You must call this method after startAudioMixing and receiving the onAudioMixingStateChanged(AudioMixingStatePlaying) callback.

Return Values

  • ≥ 0: Success. Returns the volume value in the range [0,100].
  • < 0: Failure. See Error Codes for details and resolution suggestions.

getAudioMixingPublishVolume

Gets the remote playback volume of the music file.

abstract getAudioMixingPublishVolume(): number;

This API helps you troubleshoot volume-related issues.

Note: You need to call this method after calling startAudioMixing and receiving the onAudioMixingStateChanged(AudioMixingStatePlaying) callback.

Return Values

  • ≥ 0: Returns the volume value if the method call succeeds. The range is [0,100].
  • < 0: Failure. See Error Codes for details and resolution suggestions.

getAudioTrackCount

Gets the number of audio tracks in the current music file.

abstract getAudioTrackCount(): number;
Note:
  • You need to call this method after calling startAudioMixing and receiving the onAudioMixingStateChanged(AudioMixingStatePlaying) callback.

Return Values

  • Returns the number of audio tracks in the current music file if the method call succeeds.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

getCallId

Gets the call ID.

abstract getCallId(): string;

Each time the client joins a channel, a corresponding callId is generated to identify the call session. You can call this method to get the callId and then pass it to methods such as rate and complain.

Timing

This method must be called after joining a channel.

Return Values

  • Returns the current call ID if the method call succeeds.
  • Returns an empty string if the method call fails.

getConnectionState

Gets the current network connection state.

abstract getConnectionState(): ConnectionStateType;

Timing

Can be called before or after joining a channel.

Return Values

The current network connection state. See ConnectionStateType.

getCurrentMonotonicTimeInMs

Gets the SDK's current Monotonic Time.

abstract getCurrentMonotonicTimeInMs(): number;

Monotonic Time refers to a monotonically increasing time sequence whose value increases over time. The unit is milliseconds. In custom video and audio capture scenarios, to ensure audio-video synchronization, Agora recommends that you call this method to get the SDK's current Monotonic Time and pass this value into the timestamp parameter of the captured VideoFrame or AudioFrame.

Timing

Can be called before or after joining a channel.

Return Values

  • ≥ 0: Success. Returns the SDK's current Monotonic Time in milliseconds.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
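
The recommended usage above (stamping custom-captured frames with the SDK's monotonic clock) looks roughly like this. `getMonotonicMs` stands in for engine.getCurrentMonotonicTimeInMs(), and the frame type is a simplified illustration, not the SDK's VideoFrame.

```typescript
// Simplified frame type for illustration; the real SDK type is VideoFrame.
interface CustomVideoFrame { buffer: Uint8Array; timestamp: number }

// Stamp a custom-captured frame with the SDK's monotonic time so that audio
// and video captured separately stay in sync on the receiving side.
function stampFrame(
  buffer: Uint8Array,
  getMonotonicMs: () => number
): CustomVideoFrame {
  const ts = getMonotonicMs();
  if (ts < 0) throw new Error(`monotonic time unavailable: ${ts}`); // < 0 is failure
  return { buffer, timestamp: ts };
}

// Demo with a fixed clock value.
const frame = stampFrame(new Uint8Array(4), () => 123456);
```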

getEffectCurrentPosition

Gets the playback progress of the specified sound effect file.

abstract getEffectCurrentPosition(soundId: number): number;
Note: You need to call this method after playEffect.

Parameters

soundId
The ID of the sound effect. Each sound effect has a unique ID.

Return Values

  • If the method call succeeds, returns the playback progress of the specified sound effect file (in milliseconds).
  • < 0: Failure. See Error Codes for details and resolution suggestions.

getEffectDuration

Gets the total duration of the specified audio effect file.

abstract getEffectDuration(filePath: string): number;
Note: You must call this method after joining a channel.

Parameters

filePath
File path:
  • Windows: The absolute path or URL of the audio file, including the file name and extension. For example, C:\music\audio.mp4.
  • macOS: The absolute path or URL of the audio file, including the file name and extension. For example, /var/mobile/Containers/Data/audio.mp4.

Return Values

  • If the method call succeeds, returns the duration (ms) of the specified audio effect file.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.

getEffectsVolume

Gets the playback volume of audio effect files.

abstract getEffectsVolume(): number;

The volume range is 0~100. 100 (default) is the original file volume.

Note: You must call this method after playEffect.

Return Values

  • ≥ 0: Success. Returns the playback volume of the audio effect files.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.

getErrorDescription

Gets the description of a warning or error.

abstract getErrorDescription(code: number): string;

Parameters

code
The error code reported by the SDK.

Return Values

The specific error description.

getExtensionProperty

Retrieves detailed information about an extension.

abstract getExtensionProperty(
    provider: string,
    extension: string,
    key: string,
    bufLen: number,
    type?: MediaSourceType
  ): string;

Timing

Can be called before or after joining a channel.

Parameters

provider
Name of the extension provider.
extension
Name of the extension.
key
Key of the extension property.
bufLen
Maximum length of the extension property JSON string. The maximum is 512 bytes.
type
Media source type of the extension. See MediaSourceType.

Return Values

  • If the method call succeeds, returns the extension information.
  • If the method call fails, returns an empty string.

getFaceShapeAreaOptions

Gets options for facial area enhancement.

abstract getFaceShapeAreaOptions(
    shapeArea: FaceShapeArea,
    type?: MediaSourceType
  ): FaceShapeAreaOptions;

Call this method to get the current parameter settings for the facial area enhancement.

Scenario

When users open the facial area and intensity adjustment menu in the app, you can call this method to get the current options and update the UI accordingly.

Timing

Call this method after enableVideo.

Parameters

shapeArea
Facial area. See FaceShapeArea.
type
The media source type to apply the effect to. See MediaSourceType.
Note: This method only supports the following two settings:
  • When using the camera to capture local video, keep the default value PrimaryCameraSource.
  • To use custom captured video, set this parameter to CustomVideoSource.

Return Values

The current settings of the specified facial area. See FaceShapeAreaOptions.

getFaceShapeBeautyOptions

Gets options for facial beauty effects.

abstract getFaceShapeBeautyOptions(
    type?: MediaSourceType
  ): FaceShapeBeautyOptions;

Call this method to get the current parameter settings for facial beauty effects.

Scenario

When users open the facial beauty style and intensity menu in the app, you can call this method to get the current options and update the UI accordingly.

Timing

Call this method after enableVideo.

Parameters

type
The media source type to apply the effect to. See MediaSourceType.
Note: This method only supports the following two settings:
  • When using the camera to capture local video, keep the default value PrimaryCameraSource.
  • To use custom captured video, set this parameter to CustomVideoSource.

Return Values

The current facial beauty effect options. See FaceShapeBeautyOptions.

getNativeHandle

Gets the C++ handle of the Native SDK.

abstract getNativeHandle(): number;

This method gets the C++ handle of the Native SDK engine, used in special scenarios such as registering audio and video callbacks.

Return Values

The Native handle of the SDK engine.

getNetworkType

Gets the local network connection type.

abstract getNetworkType(): number;

You can call this method at any time to get the current network type.

Note: This method can be called before or after joining a channel.

Return Values

  • ≥ 0: Success. Returns the local network connection type.
    • 0: Network disconnected.
    • 1: LAN.
    • 2: Wi-Fi (including hotspot).
    • 3: 2G mobile network.
    • 4: 3G mobile network.
    • 5: 4G mobile network.
    • 6: 5G mobile network.
  • < 0: Failure. Returns an error code.
    • -1: Unknown network type.
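
The numeric codes above map naturally to readable labels for UI display. The mapping follows the documented return values; the helper itself is illustrative and not part of the SDK.

```typescript
// Labels for the return values of getNetworkType documented above.
const NETWORK_TYPE_LABELS: Record<number, string> = {
  0: 'disconnected',
  1: 'LAN',
  2: 'Wi-Fi',
  3: '2G',
  4: '3G',
  5: '4G',
  6: '5G',
};

function describeNetworkType(code: number): string {
  if (code < 0) return 'unknown'; // e.g. -1: unknown network type
  return NETWORK_TYPE_LABELS[code] ?? 'unrecognized';
}
```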

getNtpWallTimeInMs

Gets the current NTP (Network Time Protocol) time.

abstract getNtpWallTimeInMs(): number;

In real-time chorus scenarios, especially when the downlink is inconsistent across receiving ends due to network issues, you can call this method to get the current NTP time as a reference to align lyrics and music across multiple receivers, achieving chorus synchronization.

Return Values

The current NTP time as a Unix timestamp (milliseconds).

getScreenCaptureSources

Gets a list of shareable screen and window objects.

abstract getScreenCaptureSources(
    thumbSize: Size,
    iconSize: Size,
    includeScreen: boolean
  ): ScreenCaptureSourceInfo[];

Before screen or window sharing, you can call this method to get a list of shareable screen and window objects, allowing users to select a screen or window to share via thumbnails. The list contains important information such as window ID and screen ID. After obtaining the ID, you can call startScreenCaptureByWindowId or startScreenCaptureByDisplayId to start sharing.

Parameters

thumbSize
The target size (width and height in pixels) of the screen or window thumbnail. The SDK scales the original image to match the longest side of the target size without distortion. For example, if the original size is 400 × 300 and thumbSize is 100 × 100, the thumbnail size will be 100 × 75. If the target size is larger than the original, the original image is used without scaling.
iconSize
The target size (width and height in pixels) of the program icon. The SDK scales the original icon to match the longest side of the target size without distortion. For example, if the original size is 400 × 300 and iconSize is 100 × 100, the icon size will be 100 × 75. If the target size is larger than the original, the original icon is used without scaling.
includeScreen
Whether the SDK returns screen information in addition to window information:
  • true: SDK returns both screen and window information.
  • false: SDK returns only window information.

Return Values

ScreenCaptureSourceInfo array.
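
The thumbnail sizing rule described above can be reproduced as a pure function: scale without distortion so the image fits the target, and leave it unscaled when the target is larger than the original. This mirrors the documented 400 × 300 with a 100 × 100 target ⇒ 100 × 75 example; the helper is illustrative, not SDK behavior you can call.

```typescript
interface Size { width: number; height: number }

function fitThumbnail(original: Size, target: Size): Size {
  // Target larger than original: use the original without scaling.
  if (target.width >= original.width && target.height >= original.height) {
    return { ...original };
  }
  // Scale uniformly so the image fits inside the target without distortion.
  const scale = Math.min(
    target.width / original.width,
    target.height / original.height
  );
  return {
    width: Math.round(original.width * scale),
    height: Math.round(original.height * scale),
  };
}
```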

getUserInfoByUid

Gets user information by UID.

abstract getUserInfoByUid(uid: number): UserInfo;

After a remote user joins a channel, the SDK obtains the UID and User Account of the remote user, then caches a mapping table of UID and User Account, and triggers the onUserInfoUpdated callback locally. After receiving the callback, call this method with the UID to get the UserInfo object that contains the User Account of the specified user.

Timing

Call this method after receiving the onUserInfoUpdated callback.

Parameters

uid
User ID.

Return Values

  • The UserInfo object, if the method call succeeds.
  • null, if the method call fails.
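
A typical use of the uid ↔ User Account mapping above is resolving a display name once onUserInfoUpdated has fired. In this sketch, `lookup` stands in for engine.getUserInfoByUid(); the UserInfo shape (uid, userAccount) follows this reference, and the fallback-to-uid behavior is an illustrative app-level choice.

```typescript
interface UserInfo { uid: number; userAccount: string }

function displayNameFor(
  uid: number,
  lookup: (uid: number) => UserInfo | null
): string {
  const info = lookup(uid);
  // Fall back to the numeric uid until the uid/account mapping is cached.
  return info?.userAccount ?? String(uid);
}

// Demo: a stub cache standing in for the SDK's internal mapping table.
const table = new Map<number, UserInfo>([[42, { uid: 42, userAccount: 'alice' }]]);
const lookup = (uid: number) => table.get(uid) ?? null;
```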

getUserInfoByUserAccount

Gets user information by User Account.

abstract getUserInfoByUserAccount(userAccount: string): UserInfo;

After a remote user joins a channel, the SDK obtains the UID and User Account of the remote user, then caches a mapping table of UID and User Account, and triggers the onUserInfoUpdated callback locally. After receiving the callback, call this method with the User Account to get the UserInfo object that contains the UID of the specified user.

Timing

Call this method after receiving the onUserInfoUpdated callback.

Parameters

userAccount
User Account.

Return Values

  • The UserInfo object, if the method call succeeds.
  • null, if the method call fails.

getVolumeOfEffect

Gets the playback volume of the specified audio effect file.

abstract getVolumeOfEffect(soundId: number): number;

Parameters

soundId
The ID of the audio effect file.

Return Values

  • ≥ 0: The method call succeeds and returns the playback volume. Volume range is [0,100]. 100 is the original volume.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.

initialize

Creates and initializes IRtcEngine.

abstract initialize(context: RtcEngineContext): number;
Note: All interface functions of the IRtcEngine class are asynchronous calls unless otherwise specified. It is recommended to call the interfaces in the same thread. The SDK supports only one IRtcEngine instance per app.

Timing

Make sure to call createAgoraRtcEngine and initialize to create and initialize IRtcEngine before calling other APIs.

Parameters

context
Configuration for the IRtcEngine instance. See RtcEngineContext.

Return Values

  • 0: Success.
  • < 0: Failure.
    • -1: General error (not categorized).
    • -2: Invalid parameter.
    • -7: SDK initialization failed.
    • -22: Resource allocation failed. This error occurs when the app uses too many resources or system resources are exhausted.
    • -101: Invalid App ID.
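
The failure codes listed above can be folded into a small lookup for logging or error reporting. The code-to-message mapping comes directly from this section; the helper itself is illustrative.

```typescript
// Messages for the documented initialize() failure codes.
const INIT_ERRORS: Record<number, string> = {
  [-1]: 'general error',
  [-2]: 'invalid parameter',
  [-7]: 'SDK initialization failed',
  [-22]: 'resource allocation failed',
  [-101]: 'invalid App ID',
};

function describeInitResult(ret: number): string {
  if (ret === 0) return 'ok';
  return INIT_ERRORS[ret] ?? `unknown error ${ret}`;
}
```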

isCameraCenterStageSupported

Checks whether the camera supports Center Stage.

abstract isCameraCenterStageSupported(): boolean;

Before calling enableCameraCenterStage to enable the Center Stage feature, it is recommended to call this method to check whether the current device supports it.

Note: This method is only supported on macOS.

Timing

This method must be called after the camera is successfully started, i.e., after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1).

Return Values

  • true: The current camera supports Center Stage.
  • false: The current camera does not support Center Stage.

isFeatureAvailableOnDevice

Checks whether the device supports the specified advanced feature.

abstract isFeatureAvailableOnDevice(type: FeatureType): boolean;

Checks whether the current device meets the requirements for advanced features such as virtual background and beauty effects.

Scenario

Before using advanced audio and video features, you can check whether the current device supports them to avoid performance degradation or feature unavailability on low-end devices. Based on the return value of this method, you can decide whether to show or enable the corresponding feature buttons, or prompt users with appropriate messages when the device capability is insufficient.

Parameters

type
The advanced feature type. See FeatureType.

Return Values

  • true: The device supports the specified advanced feature.
  • false: The device does not support the specified advanced feature.

joinChannel

Sets media options and joins a channel.

abstract joinChannel(
    token: string,
    channelId: string,
    uid: number,
    options: ChannelMediaOptions
  ): number;

This method lets you set media options when joining a channel, such as whether to publish audio and video streams in the channel, and whether to automatically subscribe to all remote audio and video streams upon joining. By default, the user subscribes to the audio and video streams of all other users in the channel, which incurs usage and affects billing. To unsubscribe, set the options parameter or use the corresponding mute methods.

Note:
  • This method only supports joining one channel at a time.
  • Apps with different App IDs cannot communicate with each other.
  • Before joining a channel, make sure the App ID used to generate the Token is the same as the one used to initialize the engine with the initialize method. Otherwise, joining the channel with the Token will fail.

Timing

Call this method after initialize.

Parameters

token
A dynamic key generated on the server for authentication. See Token Authentication.
Note:
  • (Recommended) If your project enables security mode (i.e., using APP ID + Token for authentication), this parameter is required.
  • If your project only enables debug mode (i.e., using APP ID for authentication), you can join a channel without providing a Token. You will automatically leave the channel 24 hours after joining.
  • If you need to join multiple channels simultaneously or switch channels frequently, Agora recommends using a wildcard Token to avoid requesting a new Token from the server each time. See Using Wildcard Token.
channelId
Channel name. This parameter identifies the channel for real-time audio and video interaction. Users with the same App ID and channel name will join the same channel. The value must be a string no longer than 64 bytes. Supported character set (89 characters total):
  • 26 lowercase English letters a~z
  • 26 uppercase English letters A~Z
  • 10 digits 0~9
  • "!", "#", "$", "%", "&", "(", ")", "+", "-", ":", ";", "<", "=", ".", ">", "?", "@", "[", "]", "^", "_", "{", "}", "|", "~", ","
uid
User ID. This parameter identifies the user in the real-time audio and video channel. You need to set and manage the user ID yourself and ensure that each user ID in the same channel is unique. The value is a 32-bit unsigned integer. Recommended range: 1 to 2^32-1. If not specified (i.e., set to 0), the SDK automatically assigns one and returns it in the onJoinChannelSuccess callback. The application must remember and maintain this return value, as the SDK does not maintain it.
options
Channel media options. See ChannelMediaOptions.
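
The channelId constraints above can be expressed as a checkable predicate: at most 64 bytes, drawn from the listed character set. This helper is illustrative and not part of the SDK; extend the character string if your use of the channel name charset differs.

```typescript
// Characters drawn from the channelId bullet list above (all ASCII, so the
// string length equals the byte length).
const CHANNEL_CHARSET =
  'abcdefghijklmnopqrstuvwxyz' +
  'ABCDEFGHIJKLMNOPQRSTUVWXYZ' +
  '0123456789' +
  '!#$%&()+-:;<=.>?@[]^_{}|~,';

function isValidChannelId(channelId: string): boolean {
  if (channelId.length === 0 || channelId.length > 64) return false;
  return channelId
    .split('')
    .every((ch) => CHANNEL_CHARSET.indexOf(ch) !== -1);
}
```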

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and troubleshooting.
    • -2: Invalid parameters. For example, invalid Token, uid is not an integer, or ChannelMediaOptions contains invalid values. Provide valid parameters and rejoin the channel.
    • -3: IRtcEngine initialization failed. Reinitialize the IRtcEngine object.
    • -7: IRtcEngine is not initialized. Initialize the IRtcEngine object before calling this method.
    • -8: Internal state error in IRtcEngine. Possible cause: startEchoTest was called to start echo test but stopEchoTest was not called before joining the channel. Call stopEchoTest before this method.
    • -17: Join channel rejected. Possible cause: the user is already in the channel. Use onConnectionStateChanged to check the connection state. Do not call this method again unless you receive ConnectionStateDisconnected(1).
    • -102: Invalid channel name. Provide a valid channelId and rejoin the channel.
    • -121: Invalid user ID. Provide a valid uid and rejoin the channel.
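
Following the billing note above, an app that only wants to publish (not receive) can opt out of automatic subscription when joining. The autoSubscribeAudio/autoSubscribeVideo field names follow the SDK's ChannelMediaOptions, but the interfaces here are hand-written stand-ins and the stub engine is illustrative.

```typescript
// Stand-ins for the SDK types used below.
interface JoinOptions { autoSubscribeAudio?: boolean; autoSubscribeVideo?: boolean }
interface JoinEngine {
  joinChannel(token: string, channelId: string, uid: number, options: JoinOptions): number;
}

// Join without subscribing to any remote streams; uid 0 lets the SDK assign one.
function joinWithoutSubscribing(
  engine: JoinEngine, token: string, channelId: string, uid = 0
): number {
  return engine.joinChannel(token, channelId, uid, {
    autoSubscribeAudio: false, // opt out of all remote audio
    autoSubscribeVideo: false, // opt out of all remote video
  });
}

// Stub engine capturing the options that were passed.
let captured: JoinOptions | undefined;
const stub: JoinEngine = {
  joinChannel: (_t, _c, _u, opts) => { captured = opts; return 0; },
};
joinWithoutSubscribing(stub, 'token', 'demo');
```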

joinChannelWithUserAccount

Joins a channel using a User Account and Token, and sets channel media options.

abstract joinChannelWithUserAccount(
    token: string,
    channelId: string,
    userAccount: string,
    options?: ChannelMediaOptions
  ): number;

If you do not call registerLocalUserAccount to register a User Account before calling this method, the SDK automatically creates a User Account for you when joining the channel. Calling registerLocalUserAccount to register the account before calling this method can shorten the time to join the channel. After a user successfully joins a channel, the SDK subscribes to all audio and video streams from other users in the channel by default, which incurs usage and affects billing. If you want to unsubscribe, you can call the corresponding mute methods.

Note: To ensure communication quality, make sure to use the same type of user identity within the same channel. That is, either UID or User Account must be used consistently. If users join the channel via the Web SDK, ensure they use the same identity type.
  • This method only supports joining one channel at a time.
  • Apps with different App IDs cannot communicate with each other.
  • Before joining a channel, ensure that the App ID used to generate the Token is the same as the one used to initialize the engine via the initialize method. Otherwise, joining the channel with the Token will fail.

Timing

This method must be called after initialize.

Parameters

token
A dynamic key generated on your server for authentication. See Token Authentication.
Note:
  • (Recommended) If your project enables security mode (i.e., uses APP ID + Token for authentication), this parameter is required.
  • If your project only enables debug mode (i.e., uses APP ID only for authentication), you can join the channel without providing a Token. The user will automatically leave the channel after 24 hours.
  • If you need to join multiple channels simultaneously or switch channels frequently, Agora recommends using a wildcard Token to avoid requesting a new Token from the server for each new channel. See Using Wildcard Token.
userAccount
The user’s User Account. This parameter identifies the user in the real-time audio and video channel. You need to set and manage the User Account yourself, and ensure that each user in the same channel has a unique User Account. This parameter is required, must not exceed 255 bytes, and cannot be null. Supported character set (89 characters total):
  • 26 lowercase English letters a-z
  • 26 uppercase English letters A-Z
  • 10 digits 0-9
  • Space
  • "!", "#", "$", "%", "&", "(", ")", "+", "-", ":", ";", "<", "=", ".", ">", "?", "@", "[", "]", "^", "_", "{", "}", "|", "~", ","
options
Channel media settings. See ChannelMediaOptions.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.
    • -2: Invalid parameter. For example, an invalid Token, non-integer uid, or invalid ChannelMediaOptions member value. Provide valid parameters and rejoin the channel.
    • -3: IRtcEngine object initialization failed. Re-initialize the IRtcEngine object.
    • -7: IRtcEngine object not initialized. You need to initialize the IRtcEngine object before calling this method.
    • -8: Internal state error of IRtcEngine object. Possible reason: startEchoTest was called to start the echo test but stopEchoTest was not called before joining the channel. Call stopEchoTest before this method.
    • -17: Join channel request rejected. Possible reason: the user is already in the channel. Use the onConnectionStateChanged callback to check if the user is in the channel. Do not call this method again unless the state is ConnectionStateDisconnected(1).
    • -102: Invalid channel name. Provide a valid channel name in channelId and rejoin the channel.
    • -121: Invalid user ID. Provide a valid user ID in uid and rejoin the channel.

joinChannelWithUserAccountEx

Joins a channel using a User Account and Token, and sets channel media options.

abstract joinChannelWithUserAccountEx(
    token: string,
    channelId: string,
    userAccount: string,
    options: ChannelMediaOptions
  ): number;

If you do not call registerLocalUserAccount to register a User Account before calling this method, the SDK automatically creates a User Account for you when joining the channel. Calling registerLocalUserAccount to register the account before calling this method can shorten the time to join the channel. After a user successfully joins a channel, the SDK subscribes to all audio and video streams from other users in the channel by default, which incurs usage and affects billing. If you want to unsubscribe, you can set the options parameter or call the corresponding mute methods.

Note: To ensure communication quality, make sure to use the same type of user identity within the same channel. That is, either UID or User Account must be used consistently. If users join the channel via the Web SDK, ensure they use the same identity type.
  • This method only supports joining one channel at a time.
  • Apps with different App IDs cannot communicate with each other.
  • Before joining a channel, ensure that the App ID used to generate the Token is the same as the one used to initialize the engine via the initialize method. Otherwise, joining the channel with the Token will fail.
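The flow above can be sketched as follows. This is a minimal stand-in, not the real SDK: the `Engine` interface and its stub implementation are hypothetical; a real app obtains the engine via createAgoraRtcEngine and initialize instead.

```typescript
// Hypothetical stand-in for the slice of IRtcEngine used here; a real app
// calls createAgoraRtcEngine() and initialize() rather than stubbing.
interface Engine {
  joinChannelWithUserAccountEx(
    token: string,
    channelId: string,
    userAccount: string,
    options: { autoSubscribeAudio?: boolean; autoSubscribeVideo?: boolean }
  ): number;
}

const engine: Engine = {
  // The stub reports success (0), mirroring the documented return convention.
  joinChannelWithUserAccountEx: () => 0,
};

// Join with a User Account; opt out of video subscription on entry to
// avoid unwanted usage, as described in the options discussion above.
const ret = engine.joinChannelWithUserAccountEx('<your-token>', 'demo', 'alice', {
  autoSubscribeAudio: true,
  autoSubscribeVideo: false,
});
console.log(ret === 0 ? 'joined' : `join failed: ${ret}`);
```

Note that the same identity type (User Account here) must be used by all users in the channel, per the note above.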

Timing

This method must be called after initialize.

Parameters

token
A dynamic key generated on your server for authentication. See Token Authentication.
Note:
  • (Recommended) If your project enables security mode (i.e., uses APP ID + Token for authentication), this parameter is required.
  • If your project only enables debug mode (i.e., uses APP ID only for authentication), you can join the channel without providing a Token. The user will automatically leave the channel after 24 hours.
  • If you need to join multiple channels simultaneously or switch channels frequently, Agora recommends using a wildcard Token to avoid requesting a new Token from the server for each new channel. See Using Wildcard Token.
userAccount
The user’s User Account. This parameter identifies the user in the real-time audio and video channel. You need to set and manage the User Account yourself, and ensure that each user in the same channel has a unique User Account. This parameter is required, must not exceed 255 bytes, and cannot be null. Supported character set (89 characters total):
  • 26 lowercase English letters a-z
  • 26 uppercase English letters A-Z
  • 10 digits 0-9
  • Space
  • "!", "#", "$", "%", "&", "(", ")", "+", "-", ":", ";", "<", "=", ".", ">", "?", "@", "[", "]", "^", "_", "{", "}", "|", "~", ","
options
Channel media settings. See ChannelMediaOptions.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.
    • -2: Invalid parameter. For example, an invalid Token, non-integer uid, or invalid ChannelMediaOptions member value. Provide valid parameters and rejoin the channel.
    • -3: IRtcEngine object initialization failed. Re-initialize the IRtcEngine object.
    • -7: IRtcEngine object not initialized. You need to initialize the IRtcEngine object before calling this method.
    • -8: Internal state error of IRtcEngine object. Possible reason: startEchoTest was called to start the echo test but stopEchoTest was not called before joining the channel. Call stopEchoTest before this method.
    • -17: Join channel request rejected. Possible reason: the user is already in the channel. Use the onConnectionStateChanged callback to check if the user is in the channel. Do not call this method again unless the state is ConnectionStateDisconnected(1).
    • -102: Invalid channel name. Provide a valid channel name in channelId and rejoin the channel.
    • -121: Invalid user ID. Provide a valid user ID in uid and rejoin the channel.

leaveChannel

Sets channel options and leaves the channel.

abstract leaveChannel(options?: LeaveChannelOptions): number;

After calling this method, the SDK stops all audio and video interactions, leaves the current channel, and releases all session-related resources. After successfully joining a channel, you must call this method to end the call, otherwise you cannot start a new one. If you have joined multiple channels using joinChannelEx, calling this method will leave all joined channels.

Note: This method is asynchronous. When the call returns, it does not mean the user has actually left the channel. If you call release immediately after this method, the SDK will not trigger the onLeaveChannel callback.
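The asynchrony described in the note can be sketched with a hypothetical stub: the method returns immediately, and the actual departure is signaled later through the onLeaveChannel callback.

```typescript
// Hypothetical stand-in: leaveChannel() returns before the user has actually
// left; onLeaveChannel is the real completion signal.
type LeaveHandler = () => void;

class FakeEngine {
  private onLeave?: LeaveHandler;
  addListener(event: 'onLeaveChannel', cb: LeaveHandler): void {
    this.onLeave = cb;
  }
  leaveChannel(): number {
    // Simulate the SDK leaving asynchronously.
    setTimeout(() => this.onLeave?.(), 0);
    return 0; // 0 only means the request was accepted
  }
}

const engine = new FakeEngine();
let left = false;
engine.addListener('onLeaveChannel', () => { left = true; });

const ret = engine.leaveChannel();
// At this point the call has returned but the user has not left yet.
console.log(`returned ${ret}, left yet? ${left}`);
```

This is why calling release immediately after leaveChannel suppresses the onLeaveChannel callback: the engine is destroyed before the asynchronous leave completes.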

Timing

This method must be called after joining a channel.

Parameters

options
Options for leaving the channel. See LeaveChannelOptions.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.

loadExtensionProvider

Loads an extension.

abstract loadExtensionProvider(
    path: string,
    unloadAfterUse?: boolean
  ): number;

This method adds external SDK extensions (such as Marketplace extensions and SDK extension plugins) to the SDK.

Note: To load multiple extensions, call this method multiple times. This method is only supported on Windows.

Timing

Call this method immediately after initializing IRtcEngine.

Parameters

path
Path and name of the extension dynamic library. For example: /library/libagora_segmentation_extension.dll.
unloadAfterUse
Whether to automatically unload the extension after use:
  • true: Automatically unloads the extension when IRtcEngine is destroyed.
  • false: Does not automatically unload the extension until the process exits (recommended).

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

muteAllRemoteAudioStreams

Stops or resumes subscribing to all remote users' audio streams.

abstract muteAllRemoteAudioStreams(mute: boolean): number;

After this method is successfully called, the local user stops or resumes subscribing to all remote users' audio streams, including the streams of users who join the channel after this method is called.

Note: By default, the SDK subscribes to all remote users' audio streams when joining a channel. To change this behavior, set autoSubscribeAudio to false when calling joinChannel to join the channel. This disables subscription to all users' audio streams upon joining. If you call enableAudio or disableAudio after this method, the most recently called method takes effect.
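A key point above is that the mute flag also applies to users who join after the call. The sketch below models that with a hypothetical stub; `isSubscribedTo` is an illustrative helper, not a real SDK method.

```typescript
// Hypothetical stub tracking the all-remote mute state, to illustrate that
// it covers users who join the channel after muteAllRemoteAudioStreams.
class FakeEngine {
  private allMuted = false;
  muteAllRemoteAudioStreams(mute: boolean): number {
    this.allMuted = mute;
    return 0;
  }
  // Illustrative helper: a user joining later inherits the current state.
  isSubscribedTo(_uid: number): boolean {
    return !this.allMuted;
  }
}

const engine = new FakeEngine();
engine.muteAllRemoteAudioStreams(true);       // stop subscribing to everyone
const subscribed = engine.isSubscribedTo(42); // user 42 joins afterwards
console.log(subscribed);                      // still unsubscribed
```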

Timing

This method must be called after joining a channel.

Parameters

mute
Whether to stop subscribing to all remote users' audio streams:
  • true: Stop subscribing to all remote users' audio streams.
  • false: (Default) Subscribe to all remote users' audio streams.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

muteAllRemoteVideoStreams

Stops or resumes subscribing to all remote users' video streams.

abstract muteAllRemoteVideoStreams(mute: boolean): number;

After this method is successfully called, the local user stops or resumes subscribing to all remote users' video streams, including the streams of users who join the channel after this method is called.

Note: By default, the SDK subscribes to all remote users' video streams when joining a channel. To change this behavior, set autoSubscribeVideo to false when calling joinChannel to join the channel. This disables subscription to all users' video streams upon joining. If you call enableVideo or disableVideo after this method, the most recently called method takes effect.

Timing

This method must be called after joining a channel.

Parameters

mute
Whether to stop subscribing to all remote users' video streams.
  • true: Stop subscribing to all users' video streams.
  • false: (Default) Subscribe to all users' video streams.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

muteLocalAudioStream

Stops or resumes publishing the local audio stream.

abstract muteLocalAudioStream(mute: boolean): number;

This method controls whether to publish the locally captured audio stream. If the local audio stream is not published, the audio capture device is not disabled, so the audio capture state is not affected.

Timing

Can be called before or after joining a channel.

Parameters

mute
Whether to stop publishing the local audio stream.
  • true: Stop publishing.
  • false: (Default) Publish.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

muteLocalVideoStream

Stops or resumes publishing the local video stream.

abstract muteLocalVideoStream(mute: boolean): number;

This method controls whether to publish the locally captured video stream. If the local video stream is not published, the video capture device is not disabled, so the video capture state is not affected. Compared with calling enableLocalVideo(false) to stop local video capture and thus cancel publishing, this method responds faster.

Timing

Can be called before or after joining a channel.

Parameters

mute
Whether to stop sending the local video stream.
  • true: Stop sending the local video stream.
  • false: (Default) Send the local video stream.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

muteRecordingSignal

Mutes or unmutes the recording signal.

abstract muteRecordingSignal(mute: boolean): number;
If you have already called adjustRecordingSignalVolume to adjust the volume of the audio capture signal, then calling this method and setting it to true causes the SDK to:
  1. Record the adjusted volume.
  2. Mute the audio capture signal.
When you call this method again and set it to false, the recording signal is restored to the volume recorded by the SDK before muting.
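The record-then-restore behavior described above can be modeled with a hypothetical stub (the internal bookkeeping here is an illustration of the documented behavior, not the SDK's actual implementation):

```typescript
// Hypothetical model of how muteRecordingSignal interacts with
// adjustRecordingSignalVolume, per the steps documented above.
class FakeEngine {
  private volume = 100;      // current capture volume
  private savedVolume = 100; // volume recorded before muting

  adjustRecordingSignalVolume(v: number): number {
    this.volume = v;
    return 0;
  }
  muteRecordingSignal(mute: boolean): number {
    if (mute) {
      this.savedVolume = this.volume; // 1. record the adjusted volume
      this.volume = 0;                // 2. mute the capture signal
    } else {
      this.volume = this.savedVolume; // restore the recorded volume
    }
    return 0;
  }
  recordingVolume(): number {
    return this.volume;
  }
}

const engine = new FakeEngine();
engine.adjustRecordingSignalVolume(60); // lower the capture volume
engine.muteRecordingSignal(true);       // SDK records 60, then mutes
engine.muteRecordingSignal(false);      // restores the recorded volume
console.log(engine.recordingVolume());  // 60
```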

Timing

Can be called before or after joining a channel.

Parameters

mute
  • true: Mute.
  • false: (Default) Original volume.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.

muteRemoteAudioStream

Stops or resumes subscribing to the audio stream of a specified remote user.

abstract muteRemoteAudioStream(uid: number, mute: boolean): number;

Timing

This method must be called after joining a channel.

Parameters

uid
The user ID of the specified remote user.
mute
Whether to stop subscribing to the audio stream of the specified remote user.
  • true: Stop subscribing to the specified user's audio stream.
  • false: (Default) Subscribe to the specified user's audio stream.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

muteRemoteVideoStream

Stops or resumes subscribing to the video stream of a specified remote user.

abstract muteRemoteVideoStream(uid: number, mute: boolean): number;

Timing

This method must be called after joining a channel.

Parameters

uid
The user ID of the specified remote user.
mute
Whether to stop subscribing to the video stream of the specified remote user.
  • true: Stop subscribing to the specified user's video stream.
  • false: (Default) Subscribe to the specified user's video stream.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

pauseAllChannelMediaRelay

Pauses media stream forwarding to all destination channels.

abstract pauseAllChannelMediaRelay(): number;

After starting media stream forwarding across channels, if you need to pause forwarding to all channels, you can call this method. To resume forwarding, call the resumeAllChannelMediaRelay method.

Note: You must call this method after calling startOrUpdateChannelMediaRelay to start cross-channel media stream forwarding.

Return Values

  • 0: The method call was successful.
  • < 0: The method call failed. See Error Codes for details and resolution suggestions.
    • -5: This method call was rejected. There is no ongoing cross-channel media stream forwarding.

pauseAllEffects

Pauses playback of all audio effect files.

abstract pauseAllEffects(): number;

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.

pauseAudioMixing

Pauses the playback of the music file.

abstract pauseAudioMixing(): number;

After calling startAudioMixing to play a music file, call this method to pause the playback. To stop playback, call stopAudioMixing.

Timing

You need to call this method after joining a channel.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

pauseEffect

Pauses playback of an audio effect file.

abstract pauseEffect(soundId: number): number;

Parameters

soundId
The ID of the audio effect. Each audio effect has a unique ID.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.

playAllEffects

Plays all audio effect files.

abstract playAllEffects(
    loopCount: number,
    pitch: number,
    pan: number,
    gain: number,
    publish?: boolean
  ): number;

After calling preloadEffect multiple times to preload several audio effect files, you can call this method to play all preloaded audio effects.

Parameters

loopCount
The number of times to loop the audio effect:
  • -1: Loops indefinitely until stopEffect or stopAllEffects is called.
  • 0: Plays the audio effect once.
  • 1: Plays the audio effect twice.
pitch
The pitch of the audio effect. Range: [0.5,2.0]. Default is 1.0, which is the original pitch. The smaller the value, the lower the pitch.
pan
The spatial position of the audio effect. Range: [-1.0,1.0]:
  • -1.0: Audio appears on the left.
  • 0: Audio appears in the center.
  • 1.0: Audio appears on the right.
gain
The volume of the audio effect. Range: [0,100]. 100 is the default and represents the original volume. The smaller the value, the lower the volume.
publish
Whether to publish the audio effect to remote users:
  • true: Publishes the audio effect to remote users. Both local and remote users can hear it.
  • false: (Default) Does not publish the audio effect. Only local users can hear it.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.

playEffect

Plays the specified local or online audio effect file.

abstract playEffect(
    soundId: number,
    filePath: string,
    loopCount: number,
    pitch: number,
    pan: number,
    gain: number,
    publish?: boolean,
    startPos?: number
  ): number;

You can call this method multiple times with different soundID and filePath to play multiple audio effects simultaneously. For optimal user experience, it is recommended to play no more than 3 audio effects at the same time.

Note: If you need to play online audio effect files, Agora recommends caching the online files to the local device first, then calling preloadEffect to preload them into memory before calling this method for playback. Otherwise, playback failures or no sound may occur due to timeout or failure in loading online files.
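The recommended preload-then-play flow can be sketched with a hypothetical stub. The point illustrated is that soundId and filePath must match between preloadEffect and playEffect; the file path is an assumed example.

```typescript
// Hypothetical stub sketching the preload-then-play flow for audio effects.
class FakeEngine {
  private preloaded = new Map<number, string>();

  preloadEffect(soundId: number, filePath: string): number {
    this.preloaded.set(soundId, filePath);
    return 0;
  }
  playEffect(soundId: number, filePath: string, loopCount: number): number {
    // Playing with a mismatched soundId/filePath may fail or produce no
    // sound, per the notes above; the stub models that as an error.
    return this.preloaded.get(soundId) === filePath ? 0 : -2;
  }
}

const engine = new FakeEngine();
const SOUND_ID = 1;
const PATH = '/sdcard/click.mp3'; // hypothetical local cache of an online file

engine.preloadEffect(SOUND_ID, PATH);
const ret = engine.playEffect(SOUND_ID, PATH, 0); // loopCount 0: play once
console.log(ret); // 0
```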

Timing

This method can be called before or after joining a channel.

Parameters

soundId
The ID of the audio effect. Each audio effect has a unique ID.
Note: If you have preloaded the effect using preloadEffect, make sure this parameter matches the soundId used in preloadEffect.
filePath
The path of the audio file to play. Supports URLs of online files and absolute file paths, including the file name and extension. Supported formats include MP3, AAC, M4A, MP4, WAV, 3GP, etc.
Note: If you have preloaded the effect using preloadEffect, make sure this parameter matches the filePath used in preloadEffect.
loopCount
The number of times to loop the audio effect.
  • ≥ 0: Number of loops. For example, 1 means play twice in total.
  • -1: Infinite loop.
pitch
The pitch of the audio effect. Range: [0.5,2.0]. Default is 1.0, which is the original pitch. The smaller the value, the lower the pitch.
pan
The spatial position of the audio effect. Range: [-1.0,1.0]:
  • -1.0: Audio appears on the left.
  • 0.0: Audio appears in the center.
  • 1.0: Audio appears on the right.
gain
The volume of the audio effect. Range: [0.0,100.0]. Default is 100.0, which is the original volume. The smaller the value, the lower the volume.
publish
Whether to publish the audio effect to remote users:
  • true: Publishes the audio effect. Both local and remote users can hear it.
  • false: Does not publish the audio effect. Only local users can hear it.
startPos
The playback position of the audio effect file in milliseconds.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.

preloadChannel

Preloads a channel using token, channelId, and uid.

abstract preloadChannel(
    token: string,
    channelId: string,
    uid: number
  ): number;

Calling this method can reduce the time it takes for an audience member to join a channel when frequently switching channels, thereby shortening the delay before they hear the host's first audio frame and see the first video frame, improving the video experience on the audience side. If the channel has already been successfully preloaded, and the audience leaves and rejoins the channel, as long as the Token passed during preloading is still valid, re-preloading is not required.

Note: Preloading failure does not affect subsequent normal channel joining, nor does it increase the time to join the channel.
  • When calling this method, ensure the user role is set to audience and the audio scenario is not set to AudioScenarioChorus; otherwise, preloading will not take effect.
  • Ensure that the channel name, user ID, and Token passed during preloading are the same as those used when joining the channel; otherwise, preloading will not take effect.
  • Currently, one IRtcEngine instance supports preloading up to 20 channels. If this limit is exceeded, only the latest 20 preloaded channels will take effect.
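The 20-channel cap noted above can be sketched as follows. The eviction bookkeeping is a hypothetical model of "only the latest 20 preloaded channels take effect", not the SDK's internal implementation.

```typescript
// Hypothetical model of preloadChannel's documented 20-channel limit:
// exceeding it keeps only the latest 20 preloads.
class FakeEngine {
  private preloads: string[] = [];

  preloadChannel(token: string, channelId: string, uid: number): number {
    this.preloads.push(`${channelId}:${uid}`);
    if (this.preloads.length > 20) this.preloads.shift(); // drop the oldest
    return 0;
  }
  preloadCount(): number {
    return this.preloads.length;
  }
}

const engine = new FakeEngine();
for (let i = 0; i < 25; i++) {
  engine.preloadChannel('<token>', `room-${i}`, 1001);
}
console.log(engine.preloadCount()); // capped at 20
```

Remember that the channel name, uid, and Token passed here must match the later joinChannel call for the preload to take effect.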

Timing

To improve the user experience of channel preloading, Agora recommends calling this method as early as possible after confirming the channel name and user information, and before joining the channel.

Parameters

token
The dynamic key generated on the server for authentication. See Token Authentication. When the token expires, depending on the number of preloaded channels, you can pass a new token for preloading in different ways:
  • To preload one channel: call this method to pass the new token.
  • To preload multiple channels:
    • If you use a wildcard token, call updatePreloadChannelToken to update the token for all preloaded channels. When generating a wildcard token, the user ID must not be set to 0. See Using Wildcard Tokens.
    • If you use different tokens: call this method and pass your user ID, corresponding channel name, and the updated token.
channelId
The name of the channel to preload. This parameter identifies the channel for real-time audio and video interaction. Under the same App ID, users who enter the same channel name will join the same channel for audio and video interaction. This parameter is a string with a maximum length of 64 bytes. The following character set is supported (a total of 89 characters):
  • 26 lowercase English letters a~z
  • 26 uppercase English letters A~Z
  • 10 digits 0~9
  • "!", "#", "$", "%", "&", "(", ")", "+", "-", ":", ";", "<", "=", ".", ">", "?", "@", "[", "]", "^", "_", "{", "}", "|", "~", ","
uid
User ID. This parameter identifies the user in the real-time audio and video interaction channel. You must set and manage the user ID yourself and ensure that each user ID is unique within the same channel. This parameter is a 32-bit unsigned integer. Recommended range: 1 to 2^32-1. If not specified (i.e., set to 0), the SDK will automatically assign one and return it in the onJoinChannelSuccess callback. The application must remember and manage this return value, as the SDK does not maintain it.

Return Values

  • 0: Method call succeeds.
  • < 0: Method call fails. See Error Codes for details and resolution suggestions.
    • -7: IRtcEngine object is not initialized. You need to initialize the IRtcEngine object successfully before calling this method.
    • -102: Invalid channel name. You need to enter a valid channel name and rejoin the channel.

preloadChannelWithUserAccount

Preloads a channel using token, channelId, and userAccount.

abstract preloadChannelWithUserAccount(
    token: string,
    channelId: string,
    userAccount: string
  ): number;

Calling this method can reduce the time it takes for an audience member to join a channel when frequently switching channels, thereby shortening the delay before they hear the host's first audio frame and see the first video frame, improving the video experience on the audience side. If the channel has already been successfully preloaded, and the audience leaves and rejoins the channel, as long as the Token passed during preloading is still valid, re-preloading is not required.

Note: Preloading failure does not affect subsequent normal channel joining, nor does it increase the time to join the channel.
  • When calling this method, ensure the user role is set to audience and the audio scenario is not set to AudioScenarioChorus; otherwise, preloading will not take effect.
  • Ensure that the channel name, User Account, and Token passed during preloading are the same as those used when joining the channel; otherwise, preloading will not take effect.
  • Currently, one IRtcEngine instance supports preloading up to 20 channels. If this limit is exceeded, only the latest 20 preloaded channels will take effect.

Timing

To improve the user experience of channel preloading, Agora recommends calling this method as early as possible after confirming the channel name and user information, and before joining the channel.

Parameters

token
The dynamic key generated on the server for authentication. See Token Authentication. When the token expires, depending on the number of preloaded channels, you can pass a new token for preloading in different ways:
  • To preload one channel: call this method to pass the new token.
  • To preload multiple channels:
    • If you use a wildcard token, call updatePreloadChannelToken to update the token for all preloaded channels. When generating a wildcard token, the user ID must not be set to 0. See Using Wildcard Tokens.
    • If you use different tokens: call this method and pass your user ID, corresponding channel name, and the updated token.
channelId
The name of the channel to preload. This parameter identifies the channel for real-time audio and video interaction. Under the same App ID, users who enter the same channel name will join the same channel for audio and video interaction. This parameter is a string with a maximum length of 64 bytes. The following character set is supported (a total of 89 characters):
  • 26 lowercase English letters a~z
  • 26 uppercase English letters A~Z
  • 10 digits 0~9
  • "!", "#", "$", "%", "&", "(", ")", "+", "-", ":", ";", "<", "=", ".", ">", "?", "@", "[", "]", "^", "_", "{", "}", "|", "~", ","
userAccount
The user's User Account. This parameter identifies the user in the real-time audio and video interaction channel. You must set and manage the User Account yourself and ensure that each User Account is unique within the same channel. This parameter is required, must not exceed 255 bytes, and cannot be null. The following character set is supported (a total of 89 characters):
  • 26 lowercase English letters a-z
  • 26 uppercase English letters A-Z
  • 10 digits 0-9
  • space
  • "!", "#", "$", "%", "&", "(", ")", "+", "-", ":", ";", "<", "=", ".", ">", "?", "@", "[", "]", "^", "_", "{", "}", "|", "~", ","

Return Values

  • 0: Method call succeeds.
  • < 0: Method call fails. See Error Codes for details and resolution suggestions.
    • -2: Invalid parameter. For example, User Account is empty. You need to enter valid parameters and rejoin the channel.
    • -7: IRtcEngine object is not initialized. You need to initialize the IRtcEngine object successfully before calling this method.
    • -102: Invalid channel name. You need to enter a valid channel name and rejoin the channel.

preloadEffect

Loads the audio effect file into memory.

abstract preloadEffect(
    soundId: number,
    filePath: string,
    startPos?: number
  ): number;

To ensure smooth communication, be mindful of the size of the preloaded audio effect files. For supported audio formats, see What audio formats are supported by the RTC SDK.

Timing

Agora recommends calling this method before joining a channel.

Parameters

soundId
The ID of the audio effect. Each audio effect has a unique ID.
filePath
File path:
  • Windows: Absolute path or URL of the audio file, including the file name and extension. For example, C:\music\audio.mp4.
  • macOS: Absolute path or URL of the audio file, including the file name and extension. For example, /var/mobile/Containers/Data/audio.mp4.
startPos
The start position for loading the audio effect file, in milliseconds.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

queryCodecCapability

Queries the video codec capabilities supported by the SDK.

abstract queryCodecCapability(): { codecInfo: CodecCapInfo[]; size: number };

Return Values

  • If the call succeeds, it returns an object with the following properties:
    • codecInfo: An array of CodecCapInfo representing the SDK's video encoding capabilities.
    • size: The size of the CodecCapInfo array.
  • If the call times out, adjust your logic to avoid calling this method on the main thread.

queryDeviceScore

Queries the device score level.

abstract queryDeviceScore(): number;

Scenario

In high-definition or ultra-high-definition video scenarios, you can first call this method to query the device score. If the returned score is low (e.g., below 60), you should lower the video resolution accordingly to avoid affecting the video experience. The minimum device score requirement varies by business scenario. For specific recommendations, please contact technical support.
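The scenario above amounts to gating the target resolution on the score. In this sketch, `chooseResolution` is a hypothetical helper, the threshold of 60 follows the example in the text, and the hard-coded score stands in for a real queryDeviceScore call.

```typescript
// Hypothetical helper: pick a lower resolution when the device score is
// poor, per the scenario described above (threshold 60 is illustrative).
function chooseResolution(score: number): { width: number; height: number } {
  return score < 60
    ? { width: 1280, height: 720 }   // fall back from ultra-HD
    : { width: 3840, height: 2160 };
}

// In a real app: const score = engine.queryDeviceScore();
const score = 55; // hypothetical score standing in for queryDeviceScore()
const res = chooseResolution(score);
console.log(`${res.width}x${res.height}`);
```

The minimum acceptable score varies by business scenario, so treat the threshold as something to tune with technical support rather than a fixed rule.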

Return Values

  • > 0: Success. The value is the current device score, in the range [0,100]. A higher score indicates better device capability. Most devices score between 60 and 100.
  • < 0: Failure.

rate

Rates a call.

abstract rate(callId: string, rating: number, description: string): number;
Note: This method must be called after the user leaves the channel.

Parameters

callId
The call ID. You can get this parameter by calling getCallId.
rating
The rating for the call, from 1 to 5.
description
The description of the call. The length should be less than 800 bytes.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
    • -1: General error (not specifically categorized).
    • -2: Invalid parameter.

registerAudioEncodedFrameObserver

Registers an audio encoded frame observer.

abstract registerAudioEncodedFrameObserver(
    config: AudioEncodedFrameObserverConfig,
    observer: IAudioEncodedFrameObserver
  ): number;
Note:
  • Call this method after joining a channel.
  • Since this method and startAudioRecording both set audio content and quality, it is not recommended to use this method together with startAudioRecording. Otherwise, only the method called later takes effect.

Parameters

config
Settings for the encoded audio observer. See AudioEncodedFrameObserverConfig.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

registerAudioSpectrumObserver

Registers an audio spectrum observer.

abstract registerAudioSpectrumObserver(
    observer: IAudioSpectrumObserver
  ): number;

After successfully registering the audio spectrum observer and calling enableAudioSpectrumMonitor to enable audio spectrum monitoring, the SDK reports the callbacks implemented in the IAudioSpectrumObserver class at the interval you set.

Note: This method can be called before or after joining a channel.
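The two-step flow described above (register the observer, then enable monitoring) can be sketched with a hypothetical stub; the callback shape and the simulated single report are illustrative, not the real IAudioSpectrumObserver interface.

```typescript
// Hypothetical stub of the register-then-enable flow: spectrum callbacks
// only fire once enableAudioSpectrumMonitor has set an interval.
type SpectrumCb = (data: number[]) => void;

class FakeEngine {
  private observer?: SpectrumCb;

  registerAudioSpectrumObserver(cb: SpectrumCb): number {
    this.observer = cb;
    return 0;
  }
  enableAudioSpectrumMonitor(intervalMs: number): number {
    // Simulate one report at the configured interval.
    if (this.observer) this.observer([0.1, 0.2, 0.3]);
    return 0;
  }
}

const engine = new FakeEngine();
let reports = 0;
engine.registerAudioSpectrumObserver(() => { reports += 1; });
engine.enableAudioSpectrumMonitor(100); // report every 100 ms
console.log(reports); // 1
```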

Parameters

observer
The audio spectrum observer. See IAudioSpectrumObserver.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

registerExtension

Registers an extension.

abstract registerExtension(
    provider: string,
    extension: string,
    type?: MediaSourceType
  ): number;

For external extensions (such as Marketplace extensions and SDK extension plugins), after loading the extension, you need to call this method to register it. Internal SDK extensions (those included in the SDK package) are automatically loaded and registered after initializing IRtcEngine, so you do not need to call this method.

Note:
  • To register multiple extensions, you need to call this method multiple times.
  • The order in which different extensions process data in the SDK is determined by the order of registration. That is, extensions registered earlier process data first.

Timing

  • It is recommended to call this method after initializing IRtcEngine and before joining a channel.
  • For video-related extensions (such as beauty filters), you need to call this method before enabling the video module (enableVideo/enableLocalVideo).
  • Before calling this method, you need to call loadExtensionProvider to load the extension.
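The ordering requirement can be sketched with a hypothetical stub that returns -3 when registration precedes loading; the provider/extension names below are placeholders.

```typescript
// Hypothetical stub enforcing the documented ordering: load the dynamic
// library first, then register the extension.
class FakeEngine {
  private loaded = new Set<string>();

  loadExtensionProvider(path: string): number {
    this.loaded.add(path);
    return 0;
  }
  registerExtension(provider: string, extension: string): number {
    // -3: the extension dynamic library was not loaded.
    return this.loaded.size > 0 ? 0 : -3;
  }
}

// Registering before loading fails with -3.
const early = new FakeEngine().registerExtension('<provider>', '<extension>');

// Correct order: load, then register (placeholder names throughout).
const engine = new FakeEngine();
engine.loadExtensionProvider('/library/libagora_segmentation_extension.dll');
const ret = engine.registerExtension('<provider>', '<extension>');
console.log(early, ret); // -3 0
```

For video-related extensions, call both steps before enableVideo or enableLocalVideo, per the timing notes above.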

Parameters

provider
The name of the extension provider.
extension
The name of the extension.
type
The media source type of the extension. See MediaSourceType.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
    • -3: The extension dynamic library was not loaded. Agora recommends checking whether the dynamic library is placed in the expected location or whether the library name is correct.

registerLocalUserAccount

Registers the local user's User Account.

abstract registerLocalUserAccount(appId: string, userAccount: string): number;
This method registers a User Account for the local user. After successful registration, the User Account can be used to identify the local user, and the user can use it to join a channel. This method is optional. If you want users to join a channel using a User Account, you can implement it in either of the following ways:
  • Call this method to register a User Account, then call joinChannelWithUserAccount to join the channel. This shortens the time to join the channel.
  • Call joinChannelWithUserAccount directly to join the channel. The SDK automatically creates a User Account for the user when joining.
Note:
  • Ensure that the userAccount set in this method is unique within the channel.
  • To ensure communication quality, make sure to use the same type of identifier (UID or User Account) for all users in the channel. If users join the channel via Web SDK, ensure they use the same identifier type.
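The optional pre-registration path can be sketched with a hypothetical stub; the App ID, token, and account values are placeholders, and the stub only models the call order, not actual join latency.

```typescript
// Hypothetical stub of the two join paths: pre-registering the User
// Account shortens the join, but joining directly also works.
class FakeEngine {
  private registered = new Set<string>();

  registerLocalUserAccount(appId: string, userAccount: string): number {
    this.registered.add(userAccount);
    return 0;
  }
  joinChannelWithUserAccount(
    token: string,
    channelId: string,
    userAccount: string
  ): number {
    // If the account was not pre-registered, the SDK creates one on the
    // fly (slower); either way the join succeeds.
    return 0;
  }
}

const engine = new FakeEngine();
engine.registerLocalUserAccount('<app-id>', 'alice'); // optional, speeds up join
const ret = engine.joinChannelWithUserAccount('<token>', 'demo', 'alice');
console.log(ret); // 0
```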

Parameters

appId
Your project's App ID registered in the console.
userAccount
The user's User Account. This parameter identifies the user in the real-time audio and video interaction channel. You must set and manage the User Account yourself and ensure that each User Account is unique within the same channel. This parameter is required, must not exceed 255 bytes, and cannot be null. The following character set is supported (a total of 89 characters):
  • 26 lowercase English letters a-z
  • 26 uppercase English letters A-Z
  • 10 digits 0-9
  • space
  • "!", "#", "$", "%", "&", "(", ")", "+", "-", ":", ";", "<", "=", ".", ">", "?", "@", "[", "]", "^", "_", "{", "}", "|", "~", ","

Return Values

  • 0: Method call succeeds.
  • < 0: Method call fails. See Error Codes for details and resolution suggestions.
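
As an illustration of the constraints above, the following sketch (not an SDK API; the helper name is hypothetical) validates a candidate User Account against the length limit and the 89-character set before registration:

```typescript
// Illustrative helper (not an SDK API): checks a candidate User Account
// against the documented constraints — non-empty, at most 255 bytes,
// and drawn from the 89-character set listed above.
const USER_ACCOUNT_CHARS =
  'abcdefghijklmnopqrstuvwxyz' +
  'ABCDEFGHIJKLMNOPQRSTUVWXYZ' +
  '0123456789' +
  ' ' +
  '!#$%&()+-:;<=.>?@[]^_{}|~,';

function isValidUserAccount(account: string): boolean {
  if (account.length === 0) return false;
  // All allowed characters are single-byte in UTF-8, so the string
  // length equals the byte length.
  if (account.length > 255) return false;
  for (const ch of account) {
    if (!USER_ACCOUNT_CHARS.includes(ch)) return false;
  }
  return true;
}
```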

registerMediaMetadataObserver

Registers a media metadata observer to receive or send metadata.

abstract registerMediaMetadataObserver(
    observer: IMetadataObserver,
    type: MetadataType
  ): number;

You need to implement the IMetadataObserver class and specify the metadata type in this method. This method allows you to add synchronized metadata to the video stream for diverse live interactive scenarios, such as sending shopping links, e-coupons, and online quizzes.

Note: Call this method before joinChannel.

Parameters

observer
The metadata observer. See IMetadataObserver.
type
The metadata type. Currently, only VideoMetadata is supported. See MetadataType.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

release

Destroys the IRtcEngine object.

abstract release(sync?: boolean): void;

This method releases all resources used by the SDK. Some apps only use real-time audio and video communication when needed and release resources when not in use. This method is suitable for such cases. After calling this method, you can no longer use any other SDK methods or callbacks. To use real-time audio and video communication again, you must call createAgoraRtcEngine and initialize again to create a new IRtcEngine object.

Note:
  • This method is a synchronous call. You must wait for the IRtcEngine resources to be released before performing other operations (e.g., creating a new IRtcEngine object). Therefore, it is recommended to call this method on a separate thread to avoid blocking the main thread.
  • It is not recommended to call release in an SDK callback. Otherwise, a deadlock may occur because the SDK needs to wait for the callback to return before reclaiming related object resources.

Parameters

sync
Whether this method is a synchronous call:
  • true: This method is synchronous.
  • false: This method is asynchronous. Currently, only synchronous calls are supported. Do not set this parameter to false.

removeAllListeners

Removes all listeners for the specified event type.

removeAllListeners?<EventType extends keyof IRtcEngineEvent>(
      eventType?: EventType
    ): void;

Parameters

eventType
The name of the event whose listeners you want to remove. See IRtcEngineEvent.

unregisterEventHandler

Removes the specified callback event.

abstract unregisterEventHandler(eventHandler: IRtcEngineEventHandler): boolean;

This method removes the specified callback handler that was previously added via registerEventHandler.

Parameters

eventHandler
The callback event to be removed. See IRtcEngineEventHandler.

Return Values

  • true: Method call succeeds.
  • false: Method call fails. See Error Codes for details and resolution suggestions.

removeListener

Removes the specified IRtcEngineEvent listener.

removeListener?<EventType extends keyof IRtcEngineEvent>(
      eventType: EventType,
      listener: IRtcEngineEvent[EventType]
    ): void;

If you no longer need to receive notifications for an event you subscribed to with addListener, call this method to remove the corresponding listener.

Parameters

eventType
The name of the target event to stop listening to. See IRtcEngineEvent.
listener
The callback function corresponding to eventType. You must pass in the same function object that was passed to addListener. For example, to stop listening to onJoinChannelSuccess:
const onJoinChannelSuccess = (connection: RtcConnection, elapsed: number) => {};
engine.addListener('onJoinChannelSuccess', onJoinChannelSuccess);
engine.removeListener('onJoinChannelSuccess', onJoinChannelSuccess);

removeVideoWatermark

Removes a watermark image from the local video.

abstract removeVideoWatermark(id: string): number;
Since
Available since v4.6.2.

This method removes the previously added watermark image from the local video stream based on the specified unique ID.

Parameters

id
The ID of the watermark to be removed. This value must match the ID used when adding the watermark.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

renewToken

Renews the Token.

abstract renewToken(token: string): number;

This method renews the Token. The Token will expire after a certain period, and the SDK will then be unable to connect to the server.

Timing

When the SDK notifies you that the Token is about to expire or has expired (for example, through the onTokenPrivilegeWillExpire callback), Agora recommends that your server regenerate the Token and that you then call this method to pass in the new Token:

Parameters

token
The newly generated Token.

Return Values

  • 0: Method call succeeds.
  • < 0: Method call fails. See Error Codes for details and resolution suggestions.
    • -2: Invalid parameter. For example, the Token is empty.
    • -7: IRtcEngine object is not initialized. You need to initialize the IRtcEngine object successfully before calling this method.
    • -110: Invalid Token. Make sure that:
      • The user ID specified when generating the Token is the same as the one used when joining the channel,
      • The generated Token is the same as the one used to join the channel.
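
As a sketch of one way to schedule renewal: given the token's expiry timestamp (known to your server), compute how long to wait before fetching a new token and passing it to renewToken. The helper name and the 30-second margin below are illustrative choices, not SDK requirements.

```typescript
// Illustrative scheduling helper (not an SDK API): returns how many
// milliseconds to wait before renewing, leaving a safety margin so the
// new token arrives before the old one expires. Returns 0 if renewal
// is already due.
function msUntilRenewal(
  expiresAtMs: number, // token expiry time (Unix ms)
  nowMs: number,
  marginMs: number = 30_000 // renew 30 s early — an illustrative choice
): number {
  return Math.max(0, expiresAtMs - nowMs - marginMs);
}
```

An app would typically use this with a timer: request a fresh token from its server after the computed delay, then pass the result to renewToken.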

resumeAllChannelMediaRelay

Resumes media stream forwarding to all destination channels.

abstract resumeAllChannelMediaRelay(): number;

After calling the pauseAllChannelMediaRelay method, if you need to resume forwarding media streams to all destination channels, you can call this method.

Note: You must call this method after pauseAllChannelMediaRelay.

Return Values

  • 0: The method call was successful.
  • < 0: The method call failed. See Error Codes for details and resolution suggestions.
    • -5: This method call was rejected. There is no paused cross-channel media stream forwarding.

resumeAllEffects

Resumes playback of all audio effect files.

abstract resumeAllEffects(): number;

After calling pauseAllEffects to pause all audio effects, you can call this method to resume playback.

Timing

Call this method after pauseAllEffects.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

resumeAudioMixing

Resumes the playback of the music file.

abstract resumeAudioMixing(): number;

After calling pauseAudioMixing to pause the music file, call this method to resume playback.

Timing

You need to call this method after joining a channel.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

resumeEffect

Resumes playback of the specified audio effect file.

abstract resumeEffect(soundId: number): number;

Parameters

soundId
The ID of the audio effect. Each audio effect has a unique ID.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

selectAudioTrack

Specifies the playback audio track of the current music file.

abstract selectAudioTrack(index: number): number;

After getting the number of audio tracks in the music file, you can call this method to specify any track for playback. For example, if a multi-track file contains songs in different languages on different tracks, you can use this method to set the playback language.

Parameters

index
The specified playback track. The value range should be greater than or equal to 0 and less than the return value of getAudioTrackCount.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
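
The index range above can be checked before the call. The guard below is an illustrative sketch (not an SDK API); trackCount would come from getAudioTrackCount.

```typescript
// Illustrative guard (not an SDK API): an audio track index is valid
// only if it is an integer with 0 <= index < trackCount, where
// trackCount is the value returned by getAudioTrackCount.
function isValidTrackIndex(index: number, trackCount: number): boolean {
  return Number.isInteger(index) && index >= 0 && index < trackCount;
}
```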

sendCustomReportMessage

Sends a custom report message.

abstract sendCustomReportMessage(
    id: string,
    category: string,
    event: string,
    label: string,
    value: number
  ): number;

Agora provides custom data reporting and analysis services. This service is currently in a free beta phase. During the beta, you can report up to 10 data entries within 6 seconds. Each custom data entry must not exceed 256 bytes, and each string must not exceed 100 bytes. To try this service, [contact sales](mailto:support@agora.io) to enable it and agree on the custom data format.
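
The size limits can be checked before reporting. This sketch (not an SDK API) uses TextEncoder to measure UTF-8 byte lengths against the documented limits; treating the entry size as the sum of its string fields is an assumption about how the 256-byte limit is counted.

```typescript
// Illustrative pre-check (not an SDK API): verifies one custom report
// entry against the documented beta limits — each string field at most
// 100 bytes, and (as an assumption about how the limit is counted)
// the string fields together at most 256 bytes, measured in UTF-8.
function isReportEntryWithinLimits(
  id: string,
  category: string,
  event: string,
  label: string
): boolean {
  const enc = new TextEncoder();
  const sizes = [id, category, event, label].map((f) => enc.encode(f).length);
  if (sizes.some((n) => n > 100)) return false;
  return sizes.reduce((a, b) => a + b, 0) <= 256;
}
```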

sendMetaData

Sends media metadata.

abstract sendMetaData(metadata: Metadata, sourceType: VideoSourceType): number;

If the metadata is sent successfully, the receiver will receive the onMetadataReceived callback.

Parameters

metadata
The media metadata. See Metadata.
sourceType
The type of video source. See VideoSourceType.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

sendStreamMessage

Sends a data stream.

abstract sendStreamMessage(streamId: number, data: Uint8Array, length: number): number;
After calling createDataStream, you can use this method to send data stream messages to all users in the channel. The SDK imposes the following restrictions on this method:
  • Each client in the channel can have up to 5 data channels simultaneously, and the total sending bitrate shared by all data channels is limited to 30 KB/s.
  • Each data channel can send up to 60 packets per second, with each packet up to 1 KB in size.
If the method call succeeds, the remote end triggers the onStreamMessage callback, where the remote user can receive the message; if it fails, the remote end triggers the onStreamMessageError callback.
Note:
  • This method must be called after joining a channel and after calling createDataStream to create a data channel.
  • This method applies to broadcaster users only.

Parameters

streamId
Data stream ID. You can get it via createDataStream.
data
The data to be sent.
length
Length of the data.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
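
The packet-size and rate limits above can be enforced on the sending side. The sliding-window limiter below is an illustrative sketch (not part of the SDK), checking the 1 KB per-packet size, 60 packets per second, and 30 KB/s limits before each send.

```typescript
// Illustrative sender-side limiter (not an SDK API). Tracks sends in a
// one-second sliding window and rejects a packet that would exceed the
// documented limits: 1 KB per packet, 60 packets/s, 30 KB/s total.
class StreamMessageLimiter {
  private sends: { atMs: number; bytes: number }[] = [];

  canSend(bytes: number, nowMs: number): boolean {
    if (bytes > 1024) return false; // each packet at most 1 KB
    // Drop entries older than the one-second window.
    this.sends = this.sends.filter((s) => nowMs - s.atMs < 1000);
    const packets = this.sends.length;
    const total = this.sends.reduce((sum, s) => sum + s.bytes, 0);
    if (packets >= 60 || total + bytes > 30 * 1024) return false;
    this.sends.push({ atMs: nowMs, bytes });
    return true;
  }
}
```

An app would call canSend before each sendStreamMessage and queue or drop messages that would exceed the limits.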

setAINSMode

Enables or disables AI noise reduction and sets the noise reduction mode.

abstract setAINSMode(enabled: boolean, mode: AudioAinsMode): number;
You can call this method to enable AI noise reduction. This feature intelligently detects and reduces various steady and non-steady background noises while preserving voice quality, making speech clearer. Steady noise refers to noise with the same frequency at any point in time. Common examples include:
  • TV noise
  • Air conditioner noise
  • Factory machinery noise
Non-steady noise refers to noise that changes rapidly over time. Common examples include:
  • Thunder
  • Explosions
  • Cracking sounds

Scenario

In scenarios such as voice chat, online education, and online meetings, if the surrounding environment is noisy, the AI noise reduction feature can identify and reduce both steady and non-steady noises while preserving voice quality, thereby improving audio quality and user experience.

Timing

This method can be called before or after joining a channel.

Parameters

enabled
Whether to enable AI noise reduction:
  • true: Enable AI noise reduction.
  • false: (Default) Disable AI noise reduction.
mode
Noise reduction mode. See AudioAinsMode.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.

setAdvancedAudioOptions

Sets advanced audio options.

abstract setAdvancedAudioOptions(options: AdvancedAudioOptions): number;

If you have advanced requirements for audio processing, such as capturing and sending stereo audio, you can call this method to set advanced audio options.

Note: You need to call this method before joinChannel, enableAudio, and enableLocalAudio.

Parameters

options
Advanced audio options. See AdvancedAudioOptions.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.

setAudioEffectParameters

Sets parameters for SDK preset voice effects.

abstract setAudioEffectParameters(preset: AudioEffectPreset, param1: number, param2: number): number;
Call this method to configure the following for local stream publishing users:
  • 3D voice effect: Set the surround cycle of the 3D voice effect.
  • Pitch correction effect: Set the base scale and main pitch of the pitch correction effect. To allow users to adjust the pitch correction effect themselves, it is recommended to bind the base scale and main pitch configuration options to UI elements in your application.
After setting, all users in the channel can hear the effect. To achieve better voice effects, it is recommended to do the following before calling this method:
  • Call setAudioScenario to set the audio scenario to high-quality, i.e., AudioScenarioGameStreaming(3).
  • Call setAudioProfile and set profile to AudioProfileMusicHighQuality(4) or AudioProfileMusicHighQualityStereo(5).

Parameters

preset
SDK preset effects. Supports the following settings:
  • RoomAcoustics3dVoice: 3D voice effect.
    • Before using this enum, you need to set the profile parameter of setAudioProfile to AudioProfileMusicStandardStereo(3) or AudioProfileMusicHighQualityStereo(5), otherwise the setting is invalid.
    • After enabling 3D voice, users need to use audio playback devices that support stereo to hear the expected effect.
  • PitchCorrection: Pitch correction effect.
param1
  • If preset is set to RoomAcoustics3dVoice, then param1 indicates the surround cycle of the 3D voice effect. Range: [1,60], in seconds. Default is 10, meaning the voice surrounds 360 degrees in 10 seconds.
  • If preset is set to PitchCorrection, then param1 indicates the base scale of the pitch correction effect:
    • 1: (Default) Natural major scale.
    • 2: Natural minor scale.
    • 3: Japanese minor scale.
param2
  • If preset is set to RoomAcoustics3dVoice, you need to set param2 to 0.
  • If preset is set to PitchCorrection, then param2 indicates the main pitch of the pitch correction effect:
    • 1: A
    • 2: A#
    • 3: B
    • 4: (Default) C
    • 5: C#
    • 6: D
    • 7: D#
    • 8: E
    • 9: F
    • 10: F#
    • 11: G
    • 12: G#

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.
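
For readability at call sites, the param2 values for PitchCorrection map onto a chromatic scale. The lookup below is an illustrative helper (not an SDK API) built from the table above:

```typescript
// Illustrative lookup (not an SDK API): maps the documented param2
// values for PitchCorrection (1-12) to their note names, in order.
const PITCH_CORRECTION_KEYS = [
  'A', 'A#', 'B', 'C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#',
] as const;

function pitchParamForKey(key: string): number {
  const i = (PITCH_CORRECTION_KEYS as readonly string[]).indexOf(key);
  if (i < 0) throw new Error(`Unknown key: ${key}`);
  return i + 1; // param2 is 1-based
}
```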

setAudioEffectPreset

Sets the SDK's preset voice effects.

abstract setAudioEffectPreset(preset: AudioEffectPreset): number;

Call this method to set the SDK's preset voice effects for the local user who is sending the stream. This does not change the gender characteristics of the original voice. Once the effect is set, all users in the channel can hear it.

Timing

Can be called before or after joining a channel. To achieve better voice effects, it is recommended to perform the following before calling this method:
  • Call setAudioScenario to set the audio scenario to high quality, i.e., AudioScenarioGameStreaming(3).
  • Call setAudioProfile to set profile to AudioProfileMusicHighQuality(4) or AudioProfileMusicHighQualityStereo(5).

Parameters

preset
The preset audio effect option. See AudioEffectPreset.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setAudioMixingDualMonoMode

Sets the channel mode of the current audio file.

abstract setAudioMixingDualMonoMode(mode: AudioMixingDualMonoMode): number;

In stereo audio files, the left and right channels can store different audio data. Depending on your needs, you can set the channel mode to original, left, right, or mixed.

Note: This method applies only to stereo audio files.

Scenario

In KTV scenarios, the left channel of an audio file stores the accompaniment, and the right channel stores the original vocals. You can configure as follows:
  • To hear only the accompaniment, set the channel mode to left.
  • To hear both accompaniment and vocals, set the channel mode to mixed.

Timing

You need to call this method after startAudioMixing and after receiving the onAudioMixingStateChanged(AudioMixingStatePlaying) callback.

Parameters

mode
The channel mode. See AudioMixingDualMonoMode.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setAudioMixingPitch

Adjusts the pitch of the locally played music file.

abstract setAudioMixingPitch(pitch: number): number;

When mixing local vocals with a music file, you can call this method to adjust only the pitch of the music file.

Timing

You need to call this method after calling startAudioMixing and receiving the onAudioMixingStateChanged(AudioMixingStatePlaying) callback.

Parameters

pitch
Adjusts the pitch of the locally played music file in semitone steps. The default value is 0, meaning no pitch adjustment. The range is [-12,12], where each adjacent value differs by one semitone. The greater the absolute value, the more the pitch is raised or lowered.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
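
The pitch parameter is a semitone offset. This illustrative helper (not an SDK API) clamps a requested offset to the documented [-12, 12] range; the frequency-ratio comment reflects standard equal-temperament math, not a claim about SDK internals.

```typescript
// Illustrative helper (not an SDK API): clamps a semitone offset to the
// documented [-12, 12] range before passing it to setAudioMixingPitch.
// In equal temperament, n semitones corresponds to a frequency ratio of
// 2^(n/12), so +12 is one octave up and -12 one octave down.
function clampMixingPitch(semitones: number): number {
  return Math.max(-12, Math.min(12, Math.round(semitones)));
}
```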

setAudioMixingPlaybackSpeed

Sets the playback speed of the current music file.

abstract setAudioMixingPlaybackSpeed(speed: number): number;

You need to call this method after calling startAudioMixing and receiving the onAudioMixingStateChanged callback reporting the playback state as AudioMixingStatePlaying.

Parameters

speed
The playback speed of the music file. The recommended range is [50,400], where:
  • 50: 0.5x speed.
  • 100: Original speed.
  • 400: 4x speed.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
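
Since the speed parameter is a percentage of the original speed (100 = 1x), a multiplier can be converted as below. This is an illustrative helper (not an SDK API), clamped to the recommended range.

```typescript
// Illustrative conversion (not an SDK API): maps a playback multiplier
// (e.g. 1.5 for 1.5x) to the integer percentage that
// setAudioMixingPlaybackSpeed expects, clamped to the recommended
// [50, 400] range.
function speedParamFromMultiplier(multiplier: number): number {
  return Math.max(50, Math.min(400, Math.round(multiplier * 100)));
}
```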

setAudioMixingPosition

Sets the playback position of the music file.

abstract setAudioMixingPosition(pos: number): number;

This method allows you to set the playback position of the audio file, so you can play the file from a specific position instead of from the beginning.

Timing

You need to call this method after startAudioMixing and receiving the onAudioMixingStateChanged(AudioMixingStatePlaying) callback.

Parameters

pos
Integer. The position of the progress bar in milliseconds.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setAudioProfile

Sets the audio encoding profile and audio scenario.

abstract setAudioProfile(
    profile: AudioProfileType,
    scenario?: AudioScenarioType
  ): number;

Scenario

This method applies to various audio scenarios. You can choose as needed. For example, in scenarios that require high audio quality such as music education, it is recommended to set profile to AudioProfileMusicHighQuality (4) and scenario to AudioScenarioGameStreaming (3).

Timing

You can call this method before or after joining a channel.

Parameters

profile
The audio encoding profile, including sample rate, bitrate, encoding mode, and the number of channels. See AudioProfileType.
scenario
The audio scenario. The volume type of the device varies depending on the audio scenario. See AudioScenarioType.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setAudioScenario

Sets the audio scenario.

abstract setAudioScenario(scenario: AudioScenarioType): number;

Scenario

This method applies to various audio scenarios. You can choose as needed. For example, in scenarios that require high audio quality such as music education, it is recommended to set scenario to AudioScenarioGameStreaming (3).

Timing

You can call this method before or after joining a channel.

Parameters

scenario
The audio scenario. The volume type of the device varies depending on the audio scenario. See AudioScenarioType.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setBeautyEffectOptions

Sets beauty effect options.

abstract setBeautyEffectOptions(
    enabled: boolean,
    options: BeautyOptions,
    type?: MediaSourceType
  ): number;

Enables local beauty effects and sets the beauty effect options.

Note:
  • This method depends on the video enhancement dynamic library libagora_clear_vision_extension.dll. Deleting this library will prevent the feature from functioning properly.
  • This feature requires high device performance. When calling this method, the SDK automatically checks the device's capabilities.

Timing

Call this method after enableVideo or startPreview.

Parameters

enabled
Whether to enable the beauty effect:
  • true: Enable the beauty effect.
  • false: (default) Disable the beauty effect.
options
Beauty effect options. See BeautyOptions for details.
type
The media source type to apply the effect to. See MediaSourceType.
Note: This method only supports the following two settings:
  • When using the camera to capture local video, keep the default value PrimaryCameraSource.
  • To use custom captured video, set this parameter to CustomVideoSource.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
    • -4: The current device does not support this feature. Possible reasons include:
      • The device does not meet the performance requirements for beauty effects. It is recommended to use a higher-performance device.

setCameraCapturerConfiguration

Sets the camera capture configuration.

abstract setCameraCapturerConfiguration(
    config: CameraCapturerConfiguration
  ): number;

Timing

This method must be called before starting local camera capture, such as before calling startPreview or joinChannel.

Parameters

config
Camera capture configuration. See CameraCapturerConfiguration.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.

setCameraDeviceOrientation

Sets the rotation angle of the captured video.

abstract setCameraDeviceOrientation(
    type: VideoSourceType,
    orientation: VideoOrientation
  ): number;
Note:
  • This method is for Windows only.
  • This method must be called after enableVideo. The setting takes effect after the camera is successfully turned on, that is, after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1).
  • When the video capture device does not have a gravity sensor, you can call this method to manually adjust the rotation angle of the captured video.

Parameters

type
Video source type. See VideoSourceType.
orientation
The clockwise rotation angle of the captured video. See VideoOrientation.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.

setChannelProfile

Sets the channel profile.

abstract setChannelProfile(profile: ChannelProfileType): number;

You can call this method to set the channel profile. The SDK applies different optimization strategies based on the profile. For example, in the live streaming profile, the SDK prioritizes video quality. The default channel profile after SDK initialization is live streaming.

Note: To ensure the quality of real-time audio and video, all users in the same channel must use the same channel profile.

Timing

This method must be called before joining a channel.

Parameters

profile
The channel profile. See ChannelProfileType.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and troubleshooting.
    • -2: Invalid parameter.
    • -7: The SDK is not initialized.

setClientRole

Sets the user role and audience latency level in live streaming.

abstract setClientRole(
    role: ClientRoleType,
    options?: ClientRoleOptions
  ): number;

By default, the SDK sets the user role to audience. You can call this method to set the user role to broadcaster. The user role (role) determines the user's permissions at the SDK level, such as whether the user can publish streams.

Note: When the user role is set to broadcaster, the audience latency level only supports AudienceLatencyLevelUltraLowLatency. If you call this method before joining a channel and set role to BROADCASTER, the local user will not receive the onClientRoleChanged callback.

Timing

This method can be called either before or after joining a channel. If you call this method to set the user role to broadcaster before joining a channel and set the local video properties using setupLocalVideo, local video preview starts automatically when the user joins the channel. If you call this method to switch the user role after joining a channel, upon success, the SDK automatically calls muteLocalAudioStream and muteLocalVideoStream to update the publishing state of audio and video streams.

Parameters

role
The user role. See ClientRoleType.
Note: Users with the audience role cannot publish audio or video streams in the channel. To publish in live streaming, make sure the user role is switched to broadcaster.
options
Detailed user settings, including user level. See ClientRoleOptions.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and troubleshooting.
    • -1: General error (not categorized).
    • -2: Invalid parameter.
    • -5: This method call was rejected.
    • -7: The SDK is not initialized.

setCloudProxy

Sets the cloud proxy service.

abstract setCloudProxy(proxyType: CloudProxyType): number;

When a user's network access is restricted by a firewall, you need to add the IP addresses and port numbers provided by Agora to the firewall whitelist, then call this method to enable the cloud proxy and set the proxy type using the proxyType parameter. Once successfully connected to the cloud proxy, the SDK triggers the onConnectionStateChanged (ConnectionStateConnecting, ConnectionChangedSettingProxyServer) callback. To disable an already set Force UDP or Force TCP cloud proxy, call setCloudProxy(NoneProxy). To change the current cloud proxy type, first call setCloudProxy(NoneProxy), then call setCloudProxy again with the desired proxyType value.

Note:
  • It is recommended to call this method outside the channel.
  • When a user is behind an intranet firewall, the features of CDN live streaming and cross-channel media relay are not available when using Force UDP cloud proxy.
  • When using Force UDP cloud proxy, the startAudioMixing method cannot play online audio files over HTTP. CDN live streaming and cross-channel media relay use TCP-based cloud proxy.

Parameters

proxyType
The cloud proxy type. See CloudProxyType. This parameter is required. If not set, the SDK will return an error.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and troubleshooting.
    • -2: Invalid parameter.
    • -7: The SDK is not initialized.

setColorEnhanceOptions

Enables color enhancement.

abstract setColorEnhanceOptions(
    enabled: boolean,
    options: ColorEnhanceOptions,
    type?: MediaSourceType
  ): number;

Video captured by the camera may suffer from color distortion. The color enhancement feature improves color richness and accuracy by intelligently adjusting video characteristics such as saturation and contrast, making the video more vivid. You can call this method to enable color enhancement and configure its effects.

Note:
  • Call this method after enableVideo.
  • Color enhancement requires certain device performance. If the device overheats or experiences issues after enabling color enhancement, it is recommended to lower the enhancement level or disable the feature.
  • This method depends on the video enhancement dynamic library libagora_clear_vision_extension.dll. Deleting this library will prevent the feature from functioning properly.

Parameters

enabled
Whether to enable color enhancement:
  • true: Enable color enhancement.
  • false: (default) Disable color enhancement.
options
Color enhancement options used to configure the effect. See ColorEnhanceOptions.
type
The media source type to apply the effect to. See MediaSourceType.
Note: This method only supports the following two settings:
  • When using the camera to capture local video, keep the default value PrimaryCameraSource.
  • To use custom captured video, set this parameter to CustomVideoSource.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setDirectCdnStreamingAudioConfiguration

Sets the audio encoding properties when the host streams directly to the CDN.

abstract setDirectCdnStreamingAudioConfiguration(
    profile: AudioProfileType
  ): number;
Deprecated
Deprecated since v4.6.2.

setDirectCdnStreamingVideoConfiguration

Sets the video encoding properties when the host streams directly to the CDN.

abstract setDirectCdnStreamingVideoConfiguration(
    config: VideoEncoderConfiguration
  ): number;
Deprecated
Deprecated since v4.6.2.

This method only applies to video captured by the camera, screen sharing, or custom video sources. That is, it applies to video collected when publishCameraTrack or publishCustomVideoTrack is set to true in DirectCdnStreamingMediaOptions. If the resolution you set exceeds the range supported by your camera device, the SDK adapts it based on your settings and selects the closest resolution with the same aspect ratio for capture, encoding, and streaming. You can get the actual resolution of the pushed video stream via the onDirectCdnStreamingStats callback.

Parameters

config
Video encoding parameter configuration. See VideoEncoderConfiguration.
Note: When streaming directly to the CDN, the SDK currently only supports setting OrientationMode to landscape (OrientationFixedLandscape) or portrait (OrientationFixedPortrait).

Return Values

  • 0: The method call was successful.
  • < 0: The method call failed. See Error Codes for details and resolution suggestions.

setDualStreamMode

Sets the dual-stream mode on the sender and configures the low-quality video stream.

abstract setDualStreamMode(
    mode: SimulcastStreamMode,
    streamConfig?: SimulcastStreamConfig
  ): number;
By default, the SDK enables the adaptive low-quality stream mode (AutoSimulcastStream) on the sender. In this mode, the sender does not actively send the low-quality stream. A receiver with host privileges can call setRemoteVideoStreamType to request the low-quality stream, and the sender starts sending it automatically upon receiving the request.
  • If you want to change this behavior, you can call this method and set mode to DisableSimulcastStream (never send low-quality stream) or EnableSimulcastStream (always send low-quality stream).
  • To revert to the default behavior after making changes, call this method again and set mode to AutoSimulcastStream.
Note: The differences and relationship between this method and enableDualStreamMode are as follows:
  • Calling this method with mode set to DisableSimulcastStream has the same effect as calling enableDualStreamMode with enabled set to false.
  • Calling this method with mode set to EnableSimulcastStream has the same effect as calling enableDualStreamMode with enabled set to true.
  • Both methods can be called before or after joining a channel. If both are used, the settings in the method called later take effect.

Parameters

mode
The mode for sending video streams. See SimulcastStreamMode.
streamConfig
The configuration for the low-quality video stream. See SimulcastStreamConfig.
Note: If mode is set to DisableSimulcastStream, streamConfig will not take effect.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and troubleshooting.

setEarMonitoringAudioFrameParameters

Sets the audio data format for in-ear monitoring.

abstract setEarMonitoringAudioFrameParameters(
    sampleRate: number,
    channel: number,
    mode: RawAudioFrameOpModeType,
    samplesPerCall: number
  ): number;

This method sets the audio data format for the onEarMonitoringAudioFrame callback.

Note:
  • Before calling this method, you need to call enableInEarMonitoring and set includeAudioFilters to EarMonitoringFilterBuiltInAudioFilters or EarMonitoringFilterNoiseSuppression.
  • The SDK calculates the sampling interval using the samplesPerCall, sampleRate, and channel parameters in this method. The formula is: sampling interval = samplesPerCall / (sampleRate × channel). Make sure the sampling interval is not less than 0.01 seconds. The SDK triggers the onEarMonitoringAudioFrame callback based on this interval.

Parameters

sampleRate
Sampling rate (Hz) of the audio reported in onEarMonitoringAudioFrame. Can be set to 8000, 16000, 32000, 44100, or 48000.
channel
Number of audio channels reported in onEarMonitoringAudioFrame. Can be set to 1 or 2:
  • 1: Mono.
  • 2: Stereo.
mode
Usage mode of the audio frame. See RawAudioFrameOpModeType.
samplesPerCall
Number of audio samples reported in onEarMonitoringAudioFrame, typically 1024 in scenarios like CDN streaming.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
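
As a quick check of the interval formula in the note above, the snippet below computes samplesPerCall / (sampleRate × channel) for a typical configuration. The helper is illustrative and not part of the SDK.

```typescript
// Illustrative helper (not an SDK API): the documented sampling interval
// for onEarMonitoringAudioFrame, which must be at least 0.01 seconds.
function samplingIntervalSeconds(
  samplesPerCall: number,
  sampleRate: number,
  channel: number
): number {
  return samplesPerCall / (sampleRate * channel);
}

// 1024 samples at 48 kHz stereo gives ~0.0107 s, which satisfies the
// 0.01 s minimum, so these values are safe to pass to
// setEarMonitoringAudioFrameParameters(48000, 2, mode, 1024).
const interval = samplingIntervalSeconds(1024, 48000, 2);
console.log(interval >= 0.01); // true
```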

setEffectPosition

Sets the playback position of the specified audio effect file.

abstract setEffectPosition(soundId: number, pos: number): number;

After successful setting, the local audio effect file plays from the specified position.

Note: Call this method after playEffect.

Parameters

soundId
The ID of the audio effect. Each audio effect has a unique ID.
pos
The playback position of the audio effect file, in milliseconds.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setEffectsVolume

Sets the playback volume of audio effect files.

abstract setEffectsVolume(volume: number): number;

Timing

Call this method after playEffect.

Parameters

volume
Playback volume. The range is [0,100]. The default is 100, representing the original volume.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setExtensionProperty

Sets the properties of an extension.

abstract setExtensionProperty(
    provider: string,
    extension: string,
    key: string,
    value: string,
    type?: MediaSourceType
  ): number;

After enabling an extension, you can call this method to set its properties.

Note: To set properties for multiple extensions, you need to call this method multiple times.

Timing

Call this method after calling enableExtension to enable the extension.

Parameters

provider
The name of the extension provider.
extension
The name of the extension.
key
The key of the extension property.
value
The value corresponding to the extension property key.
type
The media source type of the extension. See MediaSourceType.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
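
Since each call sets a single key, configuring several properties means looping over them. The sketch below uses a hypothetical stand-in engine to show the per-key call pattern; the provider, extension, and property names are made up.

```typescript
// Hypothetical stand-in for IRtcEngine that records each call; a real app
// would call setExtensionProperty on its engine instance instead.
const calls: string[] = [];
const engine = {
  setExtensionProperty(
    provider: string,
    extension: string,
    key: string,
    value: string
  ): number {
    calls.push(`${provider}/${extension}:${key}=${value}`);
    return 0;
  },
};

// Made-up property names: one setExtensionProperty call per key.
const props: Record<string, string> = { mode: 'high_quality', strength: '5' };
for (const [key, value] of Object.entries(props)) {
  engine.setExtensionProperty('example_provider', 'example_extension', key, value);
}

console.log(calls.length); // 2
```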

setExtensionProviderProperty

Sets the properties of an extension provider.

abstract setExtensionProviderProperty(
    provider: string,
    key: string,
    value: string
  ): number;

You can call this method to set the properties of an extension provider and initialize related parameters based on the provider type.

Note: To set properties for multiple extension providers, you need to call this method multiple times.

Timing

Call this method after registerExtension and before enableExtension.

Parameters

provider
The name of the extension provider.
key
The key of the extension property.
value
The value corresponding to the extension property key.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setFaceShapeAreaOptions

Sets face shaping area options and specifies the media source.

abstract setFaceShapeAreaOptions(
    options: FaceShapeAreaOptions,
    type?: MediaSourceType
  ): number;

If the preset face shaping effects implemented in the setFaceShapeBeautyOptions method do not meet your expectations, you can use this method to set face shaping area options and fine-tune individual facial features for more refined results.

Note: Face shaping is a value-added service. See Billing Strategy for billing details.
  • This method depends on the video enhancement dynamic library libagora_clear_vision_extension.dll. Removing this library will prevent the feature from functioning properly.
  • This feature requires high device performance. When you call this method, the SDK automatically checks the device capability.

Timing

Call this method after setFaceShapeBeautyOptions.

Parameters

options
Face shaping area options. See FaceShapeAreaOptions.
type
The media source type to which the effect is applied. See MediaSourceType.
Note: In this method, this parameter only supports the following settings:
  • When capturing local video using the camera, keep the default value PrimaryCameraSource.
  • To use custom captured video, set this parameter to CustomVideoSource.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
    • -4: The current device does not support this feature. Possible reasons include:
      • The device does not meet the performance requirements for beauty effects. Consider using a higher-performance device.

setFaceShapeBeautyOptions

Sets face shaping effect options and specifies the media source.

abstract setFaceShapeBeautyOptions(
    enabled: boolean,
    options: FaceShapeBeautyOptions,
    type?: MediaSourceType
  ): number;

Call this method to enhance facial features using preset parameters to achieve effects such as face slimming, eye enlargement, and nose slimming in one step. You can also adjust the overall intensity of the effect.

Note: Face shaping is a value-added service. See Billing Strategy for billing details.
  • This method depends on the video enhancement dynamic library libagora_clear_vision_extension.dll. Removing this library will prevent the feature from functioning properly.
  • This feature requires high device performance. When you call this method, the SDK automatically checks the device capability.

Timing

Call this method after enableVideo.

Parameters

enabled
Whether to enable the face shaping effect:
  • true: Enable face shaping.
  • false: (default) Disable face shaping.
options
Face shaping style options. See FaceShapeBeautyOptions.
type
The media source type to which the effect is applied. See MediaSourceType.
Note: In this method, this parameter only supports the following settings:
  • When capturing local video using the camera, keep the default value PrimaryCameraSource.
  • To use custom captured video, set this parameter to CustomVideoSource.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
    • -4: The current device does not support this feature. Possible reasons include:
      • The device does not meet the performance requirements for beauty effects. Consider using a higher-performance device.
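
The required call order (enableVideo, then setFaceShapeBeautyOptions, then optionally setFaceShapeAreaOptions for fine-tuning) can be sketched with a hypothetical stand-in engine that just records the sequence; the options objects are omitted for brevity.

```typescript
// Hypothetical stand-in engine recording the call order required by the
// docs: enableVideo -> setFaceShapeBeautyOptions -> setFaceShapeAreaOptions.
const order: string[] = [];
const engine = {
  enableVideo(): number {
    order.push('enableVideo');
    return 0;
  },
  setFaceShapeBeautyOptions(enabled: boolean): number {
    order.push('setFaceShapeBeautyOptions');
    return 0;
  },
  setFaceShapeAreaOptions(): number {
    order.push('setFaceShapeAreaOptions');
    return 0;
  },
};

engine.enableVideo();
engine.setFaceShapeBeautyOptions(true); // apply preset shaping first
engine.setFaceShapeAreaOptions(); // then fine-tune individual features

console.log(order.join(' -> '));
```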

setFilterEffectOptions

Sets filter effect options and specifies the media source.

abstract setFilterEffectOptions(
    enabled: boolean,
    options: FilterEffectOptions,
    type?: MediaSourceType
  ): number;

Note:
  • This method depends on the video enhancement dynamic library libagora_clear_vision_extension.dll. Removing this library will prevent the feature from functioning properly.
  • This feature requires high device performance. When you call this method, the SDK automatically checks the device capability.

Timing

Call this method after enableVideo.

Parameters

enabled
Whether to enable the filter effect:
  • true: Enable filter.
  • false: (default) Disable filter.
options
Filter options. See FilterEffectOptions.
type
The media source type to which the effect is applied. See MediaSourceType.
Note: In this method, this parameter only supports the following settings:
  • When capturing local video using the camera, keep the default value PrimaryCameraSource.
  • To use custom captured video, set this parameter to CustomVideoSource.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setHeadphoneEQParameters

Sets the low and high frequency parameters of the headphone equalizer.

abstract setHeadphoneEQParameters(lowGain: number, highGain: number): number;

In spatial audio scenarios, if the expected effect is not achieved after calling the setHeadphoneEQPreset method to use a preset headphone equalizer, you can further adjust the headphone equalizer effect by calling this method.

Parameters

lowGain
Low frequency parameter of the headphone equalizer. Value range: [-10,10]. The higher the value, the deeper the sound.
highGain
High frequency parameter of the headphone equalizer. Value range: [-10,10]. The higher the value, the sharper the sound.

Return Values

  • 0: Success.
  • < 0: Failure.
    • -1: General error (not specifically classified).

setHeadphoneEQPreset

Sets a preset headphone equalizer effect.

abstract setHeadphoneEQPreset(preset: HeadphoneEqualizerPreset): number;

This method is mainly used in spatial audio scenarios. You can select a preset headphone equalizer to listen to audio and achieve the desired audio experience.

Note: If your headphones already have good equalization effects, calling this method may not significantly enhance the experience and may even degrade it.

Parameters

preset
Preset headphone equalizer effect. See HeadphoneEqualizerPreset.

Return Values

  • 0: Success.
  • < 0: Failure.
    • -1: General error (not specifically classified).

setInEarMonitoringVolume

Sets the in-ear monitoring volume.

abstract setInEarMonitoringVolume(volume: number): number;

Timing

Can be called before or after joining a channel.

Parameters

volume
The volume of in-ear monitoring. The range is [0,400].
  • 0: Mute.
  • 100: (Default) Original volume.
  • 400: Four times the original volume, with overflow protection.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
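
A UI slider feeding this method should stay within the documented [0,400] range. The clamp below is an illustrative helper, not part of the SDK.

```typescript
// Illustrative helper: clamp a requested in-ear monitoring volume to the
// documented range [0,400], where 100 is the original volume.
function clampInEarVolume(volume: number): number {
  return Math.min(400, Math.max(0, Math.round(volume)));
}

console.log(clampInEarVolume(450)); // 400 (overflow clamped)
console.log(clampInEarVolume(100)); // 100 (original volume)
// engine.setInEarMonitoringVolume(clampInEarVolume(sliderValue));
```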

setLocalRenderMode

Updates the local view display mode.

abstract setLocalRenderMode(
    renderMode: RenderModeType,
    mirrorMode?: VideoMirrorModeType
  ): number;

After initializing the local user view, you can call this method to update the rendering and mirroring mode of the local user view. This method only affects the video seen by the local user and does not affect the published video.

Note: This method only takes effect for the first camera (PrimaryCameraSource). In scenarios using custom video capture or other video sources, use the setupLocalVideo method instead.

Timing

  • Call this method after initializing the local view with setupLocalVideo.
  • You can call this method multiple times during a call to update the local view display mode.

Parameters

renderMode
Local view display mode. See RenderModeType.
mirrorMode
Local view mirror mode. See VideoMirrorModeType.
Note: If you are using the front camera, the local view mirror mode is enabled by default. If you are using the rear camera, it is disabled by default.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
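
The note above about camera-dependent mirror defaults can be made explicit in app code. The enum below mirrors VideoMirrorModeType but is declared locally as a hypothetical stand-in; the numeric values are assumptions.

```typescript
// Local stand-in for the SDK's VideoMirrorModeType (values assumed).
enum VideoMirrorModeType {
  VideoMirrorModeAuto = 0,
  VideoMirrorModeEnabled = 1,
  VideoMirrorModeDisabled = 2,
}

// Illustrative helper reflecting the documented defaults: the local preview
// is mirrored for the front camera and unmirrored for the rear camera.
function defaultLocalMirror(isFrontCamera: boolean): VideoMirrorModeType {
  return isFrontCamera
    ? VideoMirrorModeType.VideoMirrorModeEnabled
    : VideoMirrorModeType.VideoMirrorModeDisabled;
}

// engine.setLocalRenderMode(renderMode, defaultLocalMirror(true));
console.log(defaultLocalMirror(true) === VideoMirrorModeType.VideoMirrorModeEnabled); // true
```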

setLocalRenderTargetFps

Sets the maximum frame rate for local video rendering.

abstract setLocalRenderTargetFps(
    sourceType: VideoSourceType,
    targetFps: number
  ): number;

Scenario

In scenarios with low frame rate requirements for video rendering (e.g., screen sharing, online education), you can call this method to set the maximum frame rate for local video rendering. The SDK will try to match the actual rendering frame rate to this value to reduce CPU usage and improve system performance.

Timing

This method can be called before or after joining a channel.

Parameters

sourceType
Type of video source. See VideoSourceType.
targetFps
Maximum rendering frame rate (fps). Supported values: 1, 7, 10, 15, 24, 30, 60.
Note: Set this parameter to a value lower than the actual video frame rate; otherwise, the setting will not take effect.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setLocalVideoMirrorMode

Sets the local video mirror mode.

abstract setLocalVideoMirrorMode(mirrorMode: VideoMirrorModeType): number;
Deprecated
Deprecated: This method is deprecated.

Parameters

mirrorMode
The local video mirror mode. See VideoMirrorModeType.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setLocalVoiceEqualization

Sets the local voice equalization effect.

abstract setLocalVoiceEqualization(
    bandFrequency: AudioEqualizationBandFrequency,
    bandGain: number
  ): number;

Timing

Can be called before or after joining a channel.

Parameters

bandFrequency
Index of the frequency band. The range is [0,9], representing 10 frequency bands. The corresponding center frequencies are [31, 62, 125, 250, 500, 1k, 2k, 4k, 8k, 16k] Hz. See AudioEqualizationBandFrequency.
bandGain
Gain of each band in dB. The value range is [-15,15], with a default of 0.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
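
The band-index-to-frequency mapping and the gain range lend themselves to a small helper; everything below is illustrative and declared locally, not an SDK API.

```typescript
// Center frequencies (Hz) for band indices 0..9, as documented.
const EQ_BAND_HZ = [31, 62, 125, 250, 500, 1000, 2000, 4000, 8000, 16000];

// Illustrative helper: validate the band index and clamp the gain to the
// documented [-15,15] dB range before calling setLocalVoiceEqualization.
function eqArgs(bandIndex: number, gainDb: number): [number, number] {
  if (bandIndex < 0 || bandIndex > 9) {
    throw new RangeError('band index must be in [0,9]');
  }
  return [bandIndex, Math.min(15, Math.max(-15, gainDb))];
}

// Boosting the 1 kHz band (index 5) by 20 dB is clamped to 15 dB.
const [band, gain] = eqArgs(5, 20);
console.log(EQ_BAND_HZ[band], gain); // 1000 15
// engine.setLocalVoiceEqualization(band, gain);
```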

setLocalVoiceFormant

Sets the formant ratio to change the voice timbre.

abstract setLocalVoiceFormant(formantRatio: number): number;

Formant ratio is a parameter that affects the timbre of the voice. A smaller value results in a deeper voice, while a larger value results in a sharper voice. Once set, all users in the channel can hear the effect. If you want to change both timbre and pitch, it is recommended to use this method together with setLocalVoicePitch.

Scenario

In scenarios like voice live streaming, voice chat rooms, and karaoke rooms, you can call this method to set the local voice's formant ratio to change the timbre.

Timing

Can be called before or after joining a channel.

Parameters

formantRatio
Formant ratio. The range is [-1.0, 1.0]. Default is 0.0, which means no change to the original timbre.
Note: Recommended range is [-0.4, 0.6]. Effects may be suboptimal outside this range.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
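
The formant ratio can be validated against the valid and recommended ranges before the call; the helper below is illustrative, not part of the SDK.

```typescript
// Illustrative helper: clamp the formant ratio to the valid range [-1.0, 1.0]
// and flag values outside the recommended range [-0.4, 0.6], where the
// documented effect may be suboptimal.
function checkFormantRatio(ratio: number): { value: number; recommended: boolean } {
  const value = Math.min(1.0, Math.max(-1.0, ratio));
  return { value, recommended: value >= -0.4 && value <= 0.6 };
}

console.log(checkFormantRatio(0.8)); // { value: 0.8, recommended: false }
// engine.setLocalVoiceFormant(checkFormantRatio(0.8).value);
```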

setLocalVoicePitch

Sets the pitch of the local voice.

abstract setLocalVoicePitch(pitch: number): number;

Timing

Can be called before or after joining a channel.

Parameters

pitch
Voice frequency. Can be set in the range [0.5, 2.0]. Lower values result in lower pitch. Default is 1.0, meaning no pitch change.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setLocalVoiceReverb

Sets the reverb effect for the local voice.

abstract setLocalVoiceReverb(
    reverbKey: AudioReverbType,
    value: number
  ): number;

The SDK provides a simpler method setAudioEffectPreset to directly apply preset reverb effects such as pop, R&B, and KTV.

Note: This method can be called before or after joining a channel.

Parameters

reverbKey
Reverb effect key. This method supports 5 reverb keys. See AudioReverbType.
value
The value corresponding to each reverb effect key.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setLogFile

Sets the log file.

abstract setLogFile(filePath: string): number;
Deprecated
Deprecated: This method is deprecated. Set the log file path using the context parameter when calling initialize.

Sets the output log file for the SDK. All logs generated during SDK runtime will be written to this file.

Note: The app must ensure the specified directory exists and is writable.

Timing

This method must be called immediately after initialize, otherwise the logs may be incomplete.

Parameters

filePath
The full path of the log file. The log file is encoded in UTF-8.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and troubleshooting.

setLogFileSize

Sets the size of SDK output log files.

abstract setLogFileSize(fileSizeInKBytes: number): number;
Deprecated
Deprecated: This method is deprecated. Use the logConfig parameter in initialize to set the log file size instead.

By default, the SDK generates 5 SDK log files and 5 API call log files, as follows:
  • SDK log files are named: agorasdk.log, agorasdk.1.log, agorasdk.2.log, agorasdk.3.log, agorasdk.4.log.
  • API call log files are named: agoraapi.log, agoraapi.1.log, agoraapi.2.log, agoraapi.3.log, agoraapi.4.log.
  • Each SDK log file has a default size of 2,048 KB; each API call log file also has a default size of 2,048 KB. All log files are UTF-8 encoded.
  • The latest logs are always written to agorasdk.log and agoraapi.log.
  • When agorasdk.log is full, the SDK performs the following operations:
    1. Deletes agorasdk.4.log (if it exists).
    2. Renames agorasdk.3.log to agorasdk.4.log.
    3. Renames agorasdk.2.log to agorasdk.3.log.
    4. Renames agorasdk.1.log to agorasdk.2.log.
    5. Creates a new agorasdk.log file.
  • The rotation rule for agoraapi.log is the same as for agorasdk.log.
Note: This method only sets the size of the agorasdk.log file and does not affect agoraapi.log.

Parameters

fileSizeInKBytes
The size of a single agorasdk.log file in KB. Value range: [128,20480]. Default is 2,048 KB. If you set fileSizeInKBytes to less than 128 KB, the SDK automatically adjusts it to 128 KB. If you set it to more than 20,480 KB, the SDK adjusts it to 20,480 KB.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and troubleshooting.

setLogFilter

Sets the log output level.

abstract setLogFilter(filter: LogFilterType): number;
Deprecated
Deprecated: Use logConfig in initialize instead.

This method sets the SDK log output level. Different output levels can be used individually or in combination. The log levels are, in order: LogFilterOff, LogFilterCritical, LogFilterError, LogFilterWarn, LogFilterInfo, and LogFilterDebug. By selecting a level, you can see log messages for that level and all levels above it. For example, if you select LogFilterWarn, you will see log messages for LogFilterCritical, LogFilterError, and LogFilterWarn.

Parameters

filter
Log filter level. See LogFilterType.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setLogLevel

Sets the SDK log output level.

abstract setLogLevel(level: LogLevel): number;
Deprecated
Deprecated: This method is deprecated. Set the log output level via the context parameter when calling initialize.

By selecting a level, you can see log messages for that level.

Parameters

level
Log level. See LogLevel.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setLowlightEnhanceOptions

Sets low-light enhancement.

abstract setLowlightEnhanceOptions(
    enabled: boolean,
    options: LowlightEnhanceOptions,
    type?: MediaSourceType
  ): number;

You can call this method to enable low-light enhancement and configure its effect.

Note:
  • This method depends on the video enhancement dynamic library libagora_clear_vision_extension.dll. Removing this library will prevent the feature from functioning properly.
  • Low-light enhancement requires certain device performance. If the device overheats after enabling this feature, consider lowering the enhancement level or disabling the feature.
  • To achieve high-quality low-light enhancement (LowLightEnhanceLevelHighQuality), you must first enable video denoising using setVideoDenoiserOptions. The corresponding configurations are:
    • For automatic mode (LowLightEnhanceAuto), video denoising must be set to high quality (VideoDenoiserLevelHighQuality) and auto mode (VideoDenoiserAuto).
    • For manual mode (LowLightEnhanceManual), video denoising must be set to high quality (VideoDenoiserLevelHighQuality) and manual mode (VideoDenoiserManual).

Scenario

Low-light enhancement can adaptively adjust the video brightness in low-light conditions (such as backlight, cloudy days, or dark scenes) and uneven lighting environments to restore or highlight image details and improve the overall visual quality.

Timing

Call this method after enableVideo.

Parameters

enabled
Whether to enable low-light enhancement:
  • true: Enable low-light enhancement.
  • false: (default) Disable low-light enhancement.
options
Low-light enhancement options used to configure the effect. See LowlightEnhanceOptions.
type
The media source type to which the effect is applied. See MediaSourceType.
Note: In this method, this parameter only supports the following settings:
  • When capturing local video using the camera, keep the default value PrimaryCameraSource.
  • To use custom captured video, set this parameter to CustomVideoSource.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
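
The coupling between high-quality low-light enhancement and the video denoiser described in the note can be checked up front. The string literals below are stand-ins for the SDK enum members named in the note, not real identifiers.

```typescript
// Stand-in labels for the enum members named in the note above.
type DenoiserLevel = 'fast' | 'highQuality';
type EnhanceMode = 'auto' | 'manual';

// Illustrative check: high-quality low-light enhancement requires the
// denoiser at high quality with the same auto/manual mode.
function lowlightHqConfigValid(
  lowlightMode: EnhanceMode,
  denoiserLevel: DenoiserLevel,
  denoiserMode: EnhanceMode
): boolean {
  return denoiserLevel === 'highQuality' && denoiserMode === lowlightMode;
}

console.log(lowlightHqConfigValid('auto', 'highQuality', 'auto')); // true
console.log(lowlightHqConfigValid('manual', 'fast', 'manual')); // false
```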

setMaxMetadataSize

Sets the maximum size of media metadata.

abstract setMaxMetadataSize(size: number): number;

After calling registerMediaMetadataObserver, you can call this method to set the maximum size of the media metadata.

Parameters

size
The maximum size of the media metadata.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setMixedAudioFrameParameters

Sets the raw audio data format after audio mixing of captured and playback audio.

abstract setMixedAudioFrameParameters(
    sampleRate: number,
    channel: number,
    samplesPerCall: number
  ): number;

The SDK calculates the sampling interval using the samplesPerCall, sampleRate, and channel parameters in this method. The formula is: sampling interval = samplesPerCall / (sampleRate × channel). Make sure the sampling interval is not less than 0.01 seconds. The SDK triggers the onMixedAudioFrame callback based on this interval.

Timing

This method must be called before joining a channel.

Parameters

sampleRate
Sampling rate (Hz) of the audio data. Can be set to 8000, 16000, 32000, 44100, or 48000.
channel
Number of audio channels. Can be set to 1 or 2:
  • 1: Mono.
  • 2: Stereo.
samplesPerCall
Number of audio samples, typically 1024 in scenarios like CDN streaming.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
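
The 0.01-second minimum interval implies a lower bound on samplesPerCall for a given format; the helper below (not an SDK API) computes it.

```typescript
// Illustrative helper: smallest samplesPerCall satisfying the documented
// constraint samplesPerCall / (sampleRate * channel) >= 0.01 s.
// Written as /100 (equivalent to *0.01) to avoid floating-point surprises.
function minSamplesPerCall(sampleRate: number, channel: number): number {
  return Math.ceil((sampleRate * channel) / 100);
}

// 48 kHz stereo requires at least 960 samples per callback, so the typical
// value of 1024 is safe to pass to setMixedAudioFrameParameters.
console.log(minSamplesPerCall(48000, 2)); // 960
```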

setParameters

Sets SDK parameters through a JSON configuration string, used to provide technical previews or customized features.

abstract setParameters(parameters: string): number;

Parameters

parameters
Parameters in JSON string format.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setPlaybackAudioFrameBeforeMixingParameters

Sets the raw audio playback data format before mixing.

abstract setPlaybackAudioFrameBeforeMixingParameters(
    sampleRate: number,
    channel: number
    samplesPerCall: number
  ): number;

The SDK triggers the onPlaybackAudioFrameBeforeMixing callback based on the sampling interval determined by the samplesPerCall, sampleRate, and channel parameters in this method.

Timing

This method must be called before joining a channel.

Parameters

sampleRate
Sampling rate (Hz) of the audio data. Can be set to 8000, 16000, 32000, 44100, or 48000.
channel
Number of audio channels. Can be set to 1 or 2:
  • 1: Mono.
  • 2: Stereo.
samplesPerCall
Sets the number of samples in the audio data returned in the onPlaybackAudioFrameBeforeMixing callback. In RTMP streaming scenarios, it is recommended to set this to 1024.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setPlaybackAudioFrameParameters

Sets the format of the raw audio data for playback.

abstract setPlaybackAudioFrameParameters(
    sampleRate: number,
    channel: number,
    mode: RawAudioFrameOpModeType,
    samplesPerCall: number
  ): number;

The SDK calculates the sampling interval based on the samplesPerCall, sampleRate, and channel parameters in this method. The formula is: sampling interval = samplesPerCall / (sampleRate × channel). Make sure the sampling interval is no less than 0.01 seconds. The SDK triggers the onPlaybackAudioFrame callback based on this interval.

Timing

You must call this method before joining a channel.

Parameters

sampleRate
The sampling rate (Hz) of the audio data. You can set it to 8000, 16000, 24000, 32000, 44100, or 48000.
channel
The number of audio channels. You can set it to 1 or 2:
  • 1: Mono.
  • 2: Stereo.
mode
The usage mode of the audio frame. See RawAudioFrameOpModeType.
samplesPerCall
The number of audio samples per call. Typically set to 1024 in scenarios like CDN streaming.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and troubleshooting.

setRecordingAudioFrameParameters

Sets the format of the raw audio data for recording.

abstract setRecordingAudioFrameParameters(
    sampleRate: number,
    channel: number,
    mode: RawAudioFrameOpModeType,
    samplesPerCall: number
  ): number;

The SDK calculates the sampling interval based on the samplesPerCall, sampleRate, and channel parameters in this method. The formula is: sampling interval = samplesPerCall / (sampleRate × channel). Make sure the sampling interval is no less than 0.01 seconds. The SDK triggers the onRecordAudioFrame callback based on this interval.

Timing

You must call this method before joining a channel.

Parameters

sampleRate
The sampling rate (Hz) of the audio data. You can set it to 8000, 16000, 32000, 44100, or 48000.
channel
The number of audio channels. You can set it to 1 or 2:
  • 1: Mono.
  • 2: Stereo.
mode
The usage mode of the audio frame. See RawAudioFrameOpModeType.
samplesPerCall
The number of audio samples per call. Typically set to 1024 in scenarios like CDN streaming.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and troubleshooting.

setRemoteDefaultVideoStreamType

Sets the default video stream type to subscribe to.

abstract setRemoteDefaultVideoStreamType(streamType: VideoStreamType): number;

Depending on the sender's default behavior and the configuration of setDualStreamMode, the receiver's use of this method falls into the following cases:
  • By default, the SDK enables the adaptive low stream mode (AutoSimulcastStream) on the sender side. That is, the sender only sends the high stream. Only receivers with host role can call this method to request the low stream. Once the sender receives the request, it starts sending the low stream automatically. At this point, all users in the channel can call this method to switch to low stream subscription mode.
  • If the sender calls setDualStreamMode and sets mode to DisableSimulcastStream (never send low stream), then this method has no effect.
  • If the sender calls setDualStreamMode and sets mode to EnableSimulcastStream (always send low stream), then both host and audience receivers can call this method to switch to low stream subscription mode.
When receiving low video streams, the SDK dynamically adjusts the video stream size based on the size of the video window to save bandwidth and computing resources. The aspect ratio of the low stream is the same as that of the high stream. Based on the current aspect ratio of the high stream, the system automatically allocates resolution, frame rate, and bitrate for the low stream.
Note: If you call both this method and setRemoteVideoStreamType, the SDK uses the settings in setRemoteVideoStreamType.

Timing

This method can only be called before joining a channel. The SDK does not support changing the default video stream type after joining a channel.

Parameters

streamType
The default video stream type to subscribe to. See VideoStreamType.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.

setRemoteRenderMode

Updates the display mode of the remote view.

abstract setRemoteRenderMode(
    uid: number,
    renderMode: RenderModeType,
    mirrorMode: VideoMirrorModeType
  ): number;

After initializing the remote user view, you can call this method to update the rendering and mirror mode of the remote user view as displayed locally. This method only affects the video seen by the local user.

Note:
  • Call this method after initializing the remote view using setupRemoteVideo.
  • You can call this method multiple times during a call to update the display mode of the remote user view.

Parameters

uid
Remote user ID.
renderMode
The rendering mode of the remote user view. See RenderModeType.
mirrorMode
The mirror mode of the remote user view. See VideoMirrorModeType.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setRemoteRenderTargetFps

Sets the maximum frame rate for remote video rendering.

abstract setRemoteRenderTargetFps(targetFps: number): number;

Scenario

In scenarios where high video rendering frame rate is not required (e.g., screen sharing, online education), or when the remote user is using mid- to low-end devices, you can call this method to set the maximum frame rate for remote video rendering. The SDK will try to match the actual rendering frame rate to this value to reduce CPU usage and improve system performance.

Timing

You can call this method before or after joining a channel.

Parameters

targetFps
Maximum rendering frame rate (fps). Supported values: 1, 7, 10, 15, 24, 30, 60.
Note: Set this parameter to a rendering frame rate lower than the actual video frame rate, otherwise the setting will not take effect.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setRemoteSubscribeFallbackOption

Sets fallback options for subscribed audio and video streams under poor network conditions.

abstract setRemoteSubscribeFallbackOption(
    option: StreamFallbackOptions
  ): number;

Under poor network conditions, the quality of real-time audio and video may degrade. You can call this method and set option to StreamFallbackOptionVideoStreamLow or StreamFallbackOptionAudioOnly. When the downlink network is poor and audio/video quality is severely affected, the SDK will switch the video stream to a lower quality or disable it to ensure audio quality. The SDK continuously monitors network quality and resumes audio/video subscription when the network improves. When the subscribed stream falls back to audio-only or recovers to audio and video, the SDK triggers the onRemoteSubscribeFallbackToAudioOnly callback.

Parameters

option
Fallback options for the subscribed stream. See StreamFallbackOptions.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
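
A common pattern is tracking which remote users have fallen back so the UI can react (for example, showing an avatar instead of a frozen frame). The handler below mirrors the shape of onRemoteSubscribeFallbackToAudioOnly described above (parameter names are assumptions) and uses local state only.

```typescript
// Local UI state: remote users currently in audio-only fallback.
const audioOnlyUids = new Set<number>();

// Sketch of an onRemoteSubscribeFallbackToAudioOnly handler; per the docs,
// the flag is true when the stream falls back to audio-only and false when
// it recovers to audio and video. (Parameter names are assumptions.)
function handleFallback(uid: number, isFallbackOrRecover: boolean): void {
  if (isFallbackOrRecover) {
    audioOnlyUids.add(uid);
  } else {
    audioOnlyUids.delete(uid);
  }
}

handleFallback(42, true);
console.log(audioOnlyUids.has(42)); // true
handleFallback(42, false);
console.log(audioOnlyUids.has(42)); // false
```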

setRemoteUserSpatialAudioParams

Sets the spatial audio parameters for a remote user.

abstract setRemoteUserSpatialAudioParams(
    uid: number,
    params: SpatialAudioParams
  ): number;

You must call this method after enableSpatialAudio. After successfully setting the spatial audio parameters for the remote user, the local user hears the remote user with a sense of spatiality.

Parameters

uid
The user ID of the remote user. Must be the same as the user ID used when joining the channel.
params
The spatial audio parameters. See SpatialAudioParams.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setRemoteVideoStreamType

Sets the video stream type to subscribe to.

abstract setRemoteVideoStreamType(
    uid: number,
    streamType: VideoStreamType
  ): number;

Depending on the sender's default behavior and the configuration of setDualStreamMode, the receiver's use of this method falls into the following cases:
  • By default, the SDK enables the adaptive low stream mode (AutoSimulcastStream) on the sender side. That is, the sender only sends the high stream. Only receivers with host role can call this method to request the low stream. Once the sender receives the request, it starts sending the low stream automatically. At this point, all users in the channel can call this method to switch to low stream subscription mode.
  • If the sender calls setDualStreamMode and sets mode to DisableSimulcastStream (never send low stream), then this method has no effect.
  • If the sender calls setDualStreamMode and sets mode to EnableSimulcastStream (always send low stream), then both host and audience receivers can call this method to switch to low stream subscription mode.
When receiving low video streams, the SDK dynamically adjusts the video stream size based on the size of the video window to save bandwidth and computing resources. The aspect ratio of the low stream is the same as that of the high stream. Based on the current aspect ratio of the high stream, the system automatically allocates resolution, frame rate, and bitrate for the low stream.
Note:
  • This method can be called before or after joining a channel.
  • If you call both this method and setRemoteDefaultVideoStreamType, the SDK uses the settings in this method.
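A receiver-side sketch of choosing which stream to request (the numeric enum values and the uid are assumptions; a real app would use the SDK's VideoStreamType enum):

```typescript
// Assumed to mirror VideoStreamType: VideoStreamHigh = 0, VideoStreamLow = 1.
const VideoStreamHigh = 0;
const VideoStreamLow = 1;

// Small rendered tiles do not need the high stream; request the low stream
// for them to save bandwidth.
function pickStreamType(viewWidth: number, viewHeight: number): number {
  return viewWidth * viewHeight <= 320 * 180 ? VideoStreamLow : VideoStreamHigh;
}

const remoteUid = 12345; // hypothetical remote user ID
const streamType = pickStreamType(160, 90);
// With an initialized IRtcEngine:
// engine.setRemoteVideoStreamType(remoteUid, streamType);
```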

Parameters

uid
User ID.
streamType
The video stream type. See VideoStreamType.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.

setRemoteVideoSubscriptionOptions

Sets subscription options for the remote video stream.

abstract setRemoteVideoSubscriptionOptions(
    uid: number,
    options: VideoSubscriptionOptions
  ): number;
When the remote side sends dual streams, you can call this method to set subscription options for the remote video stream. The SDK's default subscription behavior for remote video streams depends on the type of registered video observer:
  • If IVideoFrameObserver is registered, the SDK subscribes to both raw and encoded data by default.
  • If IVideoEncodedFrameObserver is registered, the SDK subscribes only to encoded data by default.
  • If both observers are registered, the SDK follows the one registered later. For example, if IVideoFrameObserver is registered later, the SDK subscribes to both raw and encoded data by default.
If you want to change the default behavior above, or set different subscription options for different uids, you can call this method to configure.
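A sketch of per-uid subscription options (field names are assumed to mirror VideoSubscriptionOptions; the uid is hypothetical):

```typescript
// Sketch only: field names assumed to mirror VideoSubscriptionOptions.
const options = {
  type: 1,                // assumed value of VideoStreamType.VideoStreamLow
  encodedFrameOnly: true, // receive only encoded data for this user
};
const remoteUid = 67890;  // hypothetical remote user ID
// With an initialized IRtcEngine:
// engine.setRemoteVideoSubscriptionOptions(remoteUid, options);
```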

Parameters

uid
Remote user ID.
options
Subscription settings for the video stream. See VideoSubscriptionOptions.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.

setRemoteVoicePosition

Sets the 2D position of a remote user's voice, that is, the horizontal position.

abstract setRemoteVoicePosition(
    uid: number,
    pan: number,
    gain: number
  ): number;

Sets the 2D position and volume of a remote user's voice to help the local user determine the direction of the sound. By calling this method to set the position where the remote user's voice appears, the difference in sound between the left and right channels creates a sense of direction, allowing the user to determine the remote user's real-time position. In multiplayer online games, such as battle royale games, this method can effectively enhance the directional perception of game characters and simulate realistic scenarios.

Note:
  • To use this method, you must call enableSoundPositionIndication before joining the channel to enable stereo sound for remote users.
  • For the best listening experience, it is recommended to use wired headphones when using this method.
  • This method must be called after joining the channel.
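For example, a UI-driven app might map a speaker's horizontal on-screen position to the pan range the method expects. The mapping below is a sketch; the uid and gain values are illustrative:

```typescript
// Map a horizontal screen position (0 = far left, 1 = far right) to the
// pan range [-1.0, 1.0] used by setRemoteVoicePosition.
function screenXToPan(x: number): number {
  const clamped = Math.min(1, Math.max(0, x));
  return clamped * 2 - 1;
}

const remoteUid = 12345;        // hypothetical remote user ID
const pan = screenXToPan(0.25); // speaker left of center -> -0.5
const gain = 100.0;             // keep the original volume
// After enableSoundPositionIndication(true) and joining the channel:
// engine.setRemoteVoicePosition(remoteUid, pan, gain);
```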

Parameters

uid
The ID of the remote user.
pan
Sets the 2D position of the remote user's voice. The range is [-1.0, 1.0]:
  • (Default) 0.0: The sound appears in the center.
  • -1.0: The sound appears on the left.
  • 1.0: The sound appears on the right.
gain
Sets the volume of the remote user's voice. The range is [0.0, 100.0], and the default is 100.0, which represents the user's original volume. The smaller the value, the lower the volume.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setScreenCaptureContentHint

Sets the content type for screen sharing.

abstract setScreenCaptureContentHint(contentHint: VideoContentHint): number;

The SDK uses different algorithms to optimize the sharing effect based on the content type. If you do not call this method, the SDK defaults the content type of screen sharing to ContentHintNone, meaning no specific content type.

Note: This method can be called before or after starting screen sharing.

Parameters

contentHint
The content type of screen sharing. See VideoContentHint.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.
    • -2: Invalid parameter.
    • -8: Invalid screen sharing state. This may occur if you are already sharing another screen or window. Try calling stopScreenCapture to stop the current sharing, then restart screen sharing.

setScreenCaptureScenario

Sets the screen sharing scenario.

abstract setScreenCaptureScenario(screenScenario: ScreenScenarioType): number;

When you start screen sharing or window sharing, you can call this method to set the screen sharing scenario. The SDK adjusts the quality of the shared screen according to the scenario you set.

Note: Agora recommends calling this method before joining a channel.

Parameters

screenScenario
The screen sharing scenario. See ScreenScenarioType.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setSubscribeAudioAllowlist

Sets the audio subscription allowlist.

abstract setSubscribeAudioAllowlist(
    uidList: number[],
    uidNumber: number
  ): number;

You can call this method to specify the audio streams you want to subscribe to.

Note:
  • This method can be called before or after joining a channel.
  • The audio subscription allowlist is not affected by muteRemoteAudioStream, muteAllRemoteAudioStreams, or autoSubscribeAudio in ChannelMediaOptions.
  • After setting the allowlist, if you leave and rejoin the channel, the allowlist remains effective.
  • If a user is in both the audio subscription blocklist and allowlist, only the blocklist takes effect.
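Because each call replaces the previous list, removing a user means calling the method again with a filtered list. A sketch (uids are hypothetical):

```typescript
// Each call to setSubscribeAudioAllowlist replaces the whole list.
let allowlist = [1001, 1002, 1003]; // hypothetical user IDs
// With an initialized IRtcEngine:
// engine.setSubscribeAudioAllowlist(allowlist, allowlist.length);

// Later, to stop subscribing to user 1002, rebuild the list without that uid:
allowlist = allowlist.filter((uid) => uid !== 1002);
// engine.setSubscribeAudioAllowlist(allowlist, allowlist.length);
```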

Parameters

uidList
List of user IDs in the audio subscription allowlist. If you want to subscribe to the audio stream of a specific user, add that user's ID to this list. If you want to remove a user from the allowlist, call setSubscribeAudioAllowlist again with an updated list that excludes the user's uid.
uidNumber
Number of users in the allowlist.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setSubscribeAudioBlocklist

Sets the audio subscription blocklist.

abstract setSubscribeAudioBlocklist(
    uidList: number[],
    uidNumber: number
  ): number;

You can call this method to specify the audio streams you do not want to subscribe to.

Note:
  • This method can be called before or after joining a channel.
  • The audio subscription blocklist is not affected by muteRemoteAudioStream, muteAllRemoteAudioStreams, or autoSubscribeAudio in ChannelMediaOptions.
  • After setting the blocklist, if you leave and rejoin the channel, the blocklist remains effective.
  • If a user is in both the audio subscription blocklist and allowlist, only the blocklist takes effect.

Parameters

uidList
List of user IDs in the audio subscription blocklist. If you want to block the audio stream of a specific user, add that user's ID to this list. If you want to remove a user from the blocklist, call setSubscribeAudioBlocklist again with an updated list that excludes the user's uid.
uidNumber
Number of users in the blocklist.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setSubscribeVideoAllowlist

Sets the video subscription allowlist.

abstract setSubscribeVideoAllowlist(
    uidList: number[],
    uidNumber: number
  ): number;

You can call this method to specify the video streams you want to subscribe to.

Note:
  • This method can be called before or after joining a channel.
  • The video subscription allowlist is not affected by muteRemoteVideoStream, muteAllRemoteVideoStreams, or autoSubscribeVideo in ChannelMediaOptions.
  • After setting the allowlist, if you leave and rejoin the channel, the allowlist remains effective.
  • If a user is in both the video subscription blocklist and allowlist, only the blocklist takes effect.

Parameters

uidList
List of user IDs in the video subscription allowlist. If you want to subscribe only to the video stream of a specific user, add that user's ID to this list. If you want to remove a user from the allowlist, call setSubscribeVideoAllowlist again with an updated list that excludes the user's uid.
uidNumber
Number of users in the allowlist.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setSubscribeVideoBlocklist

Sets the video subscription blocklist.

abstract setSubscribeVideoBlocklist(
    uidList: number[],
    uidNumber: number
  ): number;

You can call this method to specify the video streams you do not want to subscribe to.

Note:
  • You can call this method either before or after joining a channel.
  • The video subscription blocklist is not affected by muteRemoteVideoStream, muteAllRemoteVideoStreams, or autoSubscribeVideo in ChannelMediaOptions.
  • After setting the blocklist, it remains effective even if you leave and rejoin the channel.
  • If a user is in both the video subscription allowlist and blocklist, only the blocklist takes effect.

Parameters

uidList
List of user IDs in the video subscription blocklist. If you want to block the video stream of a specific user, add that user's ID to this list. If you want to remove a user from the blocklist, call setSubscribeVideoBlocklist again with an updated list that excludes the user's uid.
uidNumber
Number of users in the blocklist.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setVideoDenoiserOptions

Sets video denoising.

abstract setVideoDenoiserOptions(
    enabled: boolean,
    options: VideoDenoiserOptions,
    type?: MediaSourceType
  ): number;

You can call this method to enable video denoising and configure its effect.

Note: If the denoising strength provided by this method does not meet your needs, Agora recommends that you call the setBeautyEffectOptions method to enable the beauty skin smoothing feature for better video denoising. Recommended BeautyOptions settings for strong denoising:
  • lighteningContrastLevel: LighteningContrastNormal
  • lighteningLevel: 0.0
  • smoothnessLevel: 0.5
  • rednessLevel: 0.0
  • sharpnessLevel: 0.1
  • This method depends on the video enhancement dynamic library libagora_clear_vision_extension.dll. Removing this library will prevent the feature from functioning properly.
  • Video denoising requires certain device performance. If the device overheats after enabling this feature, consider lowering the denoising level or disabling the feature.
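For reference, the recommended settings above can be expressed as an object literal (field names are assumed to mirror BeautyOptions, and the numeric value of LighteningContrastNormal is an assumption):

```typescript
// Recommended settings for strong denoising, per the note above.
// Field names are assumed to mirror BeautyOptions.
const strongDenoiseBeautyOptions = {
  lighteningContrastLevel: 1, // assumed value of LighteningContrastNormal
  lighteningLevel: 0.0,
  smoothnessLevel: 0.5,
  rednessLevel: 0.0,
  sharpnessLevel: 0.1,
};
// With an initialized IRtcEngine:
// engine.setBeautyEffectOptions(true, strongDenoiseBeautyOptions);
```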

Scenario

Poor lighting and low-end video capture devices can cause noticeable noise in video images, affecting video quality. In real-time interactive scenarios, video noise can also consume bitrate resources during encoding and reduce encoding efficiency.

Timing

Call this method after enableVideo.

Parameters

enabled
Whether to enable video denoising:
  • true: Enable video denoising.
  • false: (default) Disable video denoising.
options
Video denoising options used to configure the effect. See VideoDenoiserOptions.
type
The media source type to which the effect is applied. See MediaSourceType.
Note: In this method, this parameter only supports the following settings:
  • When capturing local video using the camera, keep the default value PrimaryCameraSource.
  • To use custom captured video, set this parameter to CustomVideoSource.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setVideoEncoderConfiguration

Sets the video encoding configuration.

abstract setVideoEncoderConfiguration(
    config: VideoEncoderConfiguration
  ): number;

Sets the encoding configuration for the local video. Each video encoding configuration corresponds to a set of video parameters, including resolution, frame rate, and bitrate.

Note:
  • The config parameter of this method specifies the maximum values achievable under ideal network conditions. If the network condition is poor, the video engine may not use this config to render the local video and will automatically downgrade to a suitable video parameter configuration.

Timing

This method can be called before or after joining a channel. If the user does not need to reset the video encoding configuration after joining the channel, it is recommended to call this method before enableVideo to speed up the time to the first frame.
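A sketch of a typical 720p configuration (field names are assumed to mirror VideoEncoderConfiguration; treating bitrate 0 as "use the SDK's standard bitrate" is also an assumption):

```typescript
// Sketch only: field names assumed to mirror VideoEncoderConfiguration.
const config = {
  dimensions: { width: 1280, height: 720 },
  frameRate: 15,
  bitrate: 0, // 0 is assumed to mean the SDK's standard bitrate
};
// Recommended before enableVideo to speed up first-frame rendering:
// engine.setVideoEncoderConfiguration(config);
```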

Parameters

config
Video encoding configuration. See VideoEncoderConfiguration.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and troubleshooting tips.

setVideoScenario

Sets the video application scenario.

abstract setVideoScenario(scenarioType: VideoApplicationScenarioType): number;

After successfully calling this method to set the video application scenario, the SDK enables best practice strategies based on the specified scenario and automatically adjusts key performance indicators to optimize video experience quality.

Note: You must call this method before joining a channel.

Parameters

scenarioType
The video application scenario. See VideoApplicationScenarioType.
ApplicationScenarioMeeting (1) is suitable for meeting scenarios. If you have called setDualStreamMode to set the low stream to never send (DisableSimulcastStream), dynamic switching of the low stream does not take effect in meeting scenarios. This enum value applies only to broadcaster-to-broadcaster scenarios. The SDK enables the following strategies for this scenario:
  • For meeting scenarios with higher requirements for low stream bitrate, multiple anti-weak network techniques are automatically enabled to improve the low stream's resistance to poor network conditions and ensure smooth reception of multiple streams.
  • Monitors the number of subscribers to the high stream in real time and dynamically adjusts the high stream configuration based on the number of subscribers:
    • If no one subscribes to the high stream, its bitrate and frame rate are automatically reduced to save upstream bandwidth and consumption.
    • If someone subscribes to the high stream, it resets to the VideoEncoderConfiguration set by the most recent call to setVideoEncoderConfiguration. If no configuration was set previously, the following values are used:
      • Video resolution: 1280 × 720
      • Frame rate: 15 fps
      • Bitrate: 1600 Kbps
  • Monitors the number of subscribers to the low stream in real time and dynamically enables or disables the low stream:
    • If no one subscribes to the low stream, it is automatically disabled to save upstream bandwidth and consumption.
    • If someone subscribes to the low stream, it is enabled and reset to the SimulcastStreamConfig set by the most recent call to setDualStreamMode. If no configuration was set previously, the following values are used:
      • Video resolution: 480 × 272
      • Frame rate: 15 fps
      • Bitrate: 500 Kbps
ApplicationScenario1v1 (2) is suitable for one-to-one video call scenarios. The SDK optimizes strategies for this scenario to meet the requirements of low latency and high video quality, improving performance in terms of image quality, first-frame rendering, latency on mid-to-low-end devices, and smoothness under poor network conditions.
ApplicationScenarioLiveshow (3) is suitable for showroom live streaming scenarios. The SDK optimizes strategies for this scenario to meet the high demands on first-frame rendering time and image clarity. For example, it enables accelerated audio and video frame rendering by default to improve the first-frame experience, eliminating the need to call enableInstantMediaRendering separately. It also enables B-frames by default to ensure high image quality and transmission efficiency, and it enhances video quality and smoothness under poor network conditions and on low-end devices.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
    • -1: General error (not specifically classified).
    • -4: Setting the video application scenario is not supported. This may occur if the audio SDK is currently in use.
    • -7: IRtcEngine object is not initialized. You need to successfully initialize the IRtcEngine object before calling this method.

setVoiceBeautifierParameters

Sets the parameters for preset voice beautifier effects.

abstract setVoiceBeautifierParameters(
    preset: VoiceBeautifierPreset,
    param1: number,
    param2: number
  ): number;
Call this method to set the gender characteristics and reverb effect of the singing beautifier. This method applies to the local user who is sending the stream. Once set, all users in the channel can hear the effect. To achieve better voice effects, it is recommended to perform the following before calling this method:
  • Call setAudioScenario to set the audio scenario to high quality, i.e., AudioScenarioGameStreaming(3).
  • Call setAudioProfile to set profile to AudioProfileMusicHighQuality(4) or AudioProfileMusicHighQualityStereo(5).
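A sketch of validating the documented parameter ranges before making the call (the helper function is hypothetical; a real app would pass the SDK's VoiceBeautifierPreset enum value):

```typescript
// Validate the documented ranges: param1 selects gender (1 = male, 2 = female);
// param2 selects reverb (1 = small room, 2 = large room, 3 = hall).
function validSingingBeautifierParams(param1: number, param2: number): boolean {
  return [1, 2].includes(param1) && [1, 2, 3].includes(param2);
}

const param1 = 2; // female singing voice
const param2 = 3; // hall reverb
// With an initialized IRtcEngine and the SDK's VoiceBeautifierPreset enum:
// engine.setVoiceBeautifierParameters(VoiceBeautifierPreset.SingingBeautifier, param1, param2);
```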

Parameters

preset
Preset effect:
  • SingingBeautifier: The singing beautifier effect.
param1
Gender characteristic of the singing voice:
  • 1: Male.
  • 2: Female.
param2
Reverb effect of the singing voice:
  • 1: Small room reverb.
  • 2: Large room reverb.
  • 3: Hall reverb.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setVoiceBeautifierPreset

Sets a preset voice beautifier effect.

abstract setVoiceBeautifierPreset(preset: VoiceBeautifierPreset): number;

Call this method to set a preset voice beautifier effect for the local user who sends the stream. After the effect is set, all users in the channel can hear the effect. You can set different beautifier effects for users depending on the scenario.

Timing

Can be called before or after joining a channel. To achieve better voice effects, it is recommended to perform the following operations before calling this method:
  • Call setAudioScenario to set the audio scenario to high-quality mode, that is, AudioScenarioGameStreaming(3).
  • Call setAudioProfile to set profile to AudioProfileMusicHighQuality(4) or AudioProfileMusicHighQualityStereo(5).

Parameters

preset
The preset voice beautifier option. See VoiceBeautifierPreset.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setVoiceConversionPreset

Sets a preset voice conversion effect.

abstract setVoiceConversionPreset(preset: VoiceConversionPreset): number;

Call this method to set a preset voice conversion effect provided by the SDK for the local user who sends the stream. After the effect is set, all users in the channel can hear the effect. You can set different voice conversion effects for users depending on the scenario.

Timing

Can be called before or after joining a channel. To achieve better voice effects, it is recommended to perform the following operations before calling this method:
  • Call setAudioScenario to set the audio scenario to high-quality mode, that is, AudioScenarioGameStreaming(3).
  • Call setAudioProfile to set profile to AudioProfileMusicHighQuality(4) or AudioProfileMusicHighQualityStereo(5).

Parameters

preset
The preset voice conversion option. See VoiceConversionPreset.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setVolumeOfEffect

Sets the playback volume of the specified audio effect file.

abstract setVolumeOfEffect(soundId: number, volume: number): number;

Timing

Call this method after playEffect.

Parameters

soundId
The ID of the specified audio effect. Each audio effect has a unique ID.
volume
Playback volume. The range is [0,100]. The default is 100, representing the original volume.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setupLocalVideo

Initializes the local view.

abstract setupLocalVideo(canvas: VideoCanvas): number;

This method initializes the local view and sets the display properties of the local user video. It only affects the video seen by the local user and does not affect the publishing of the local video. Call this method to bind the local video stream to a display window (view), and set the rendering and mirror mode of the local user view. The binding remains effective after leaving the channel. To stop rendering or unbind, call this method and set the view parameter to null to stop rendering and clear the rendering cache.

Note:
  • In Flutter, you do not need to call this method manually. Use AgoraVideoView to render local and remote views.

Scenario

In app development, this method is typically called after initialization to configure local video, and then join the channel. If you want to display multiple preview views in the local video preview, each showing different observation points in the video pipeline, you can call this method multiple times with different view values and set different observation positions for each. For example, set the video source to camera and assign two view values with position set to PositionPostCapturerOrigin and PositionPostCapturer respectively, to simultaneously preview the raw video and the processed video (beauty effects, virtual background, watermark) locally.

Timing

You can call this method before or after joining a channel.
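A sketch of a local VideoCanvas (field names and enum values are assumptions; as the note above says, in Flutter — and similarly in React Native — rendering is normally done through AgoraVideoView instead):

```typescript
// Sketch only: field names and enum values assumed to mirror VideoCanvas.
const canvas = {
  uid: 0,        // 0 refers to the local user
  renderMode: 1, // assumed RenderModeHidden
  mirrorMode: 0, // assumed MirrorModeAuto
};
// With an initialized IRtcEngine:
// engine.setupLocalVideo(canvas);
// To stop rendering and clear the cache, call again with the view set to null.
```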

Parameters

canvas
Display properties of the local video. See VideoCanvas.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

setupRemoteVideo

Initializes the remote user view.

abstract setupRemoteVideo(canvas: VideoCanvas): number;

This method binds the remote user to a display view and sets the rendering and mirror mode of the remote user view for local display. It only affects the video seen by the local user. You need to specify the remote user's ID when calling this method. It is recommended to set it before joining the channel. If the ID is not available beforehand, you can call this method upon receiving the onUserJoined callback. To unbind a remote user from a view, call this method and set view to null. After leaving the channel, the SDK clears the binding between the remote user and the view.

Note:
  • When using the recording service, since it does not send video streams, the app does not need to bind a view for it. If the app cannot identify the recording service, bind the remote user view upon receiving the onFirstRemoteVideoDecoded callback.
  • To stop rendering the view, set view to null and call this method again to stop rendering and clear the rendering cache.

Parameters

canvas
Display properties of the remote video. See VideoCanvas.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

startAudioMixing

Starts playing a music file.

abstract startAudioMixing(
    filePath: string,
    loopback: boolean,
    cycle: number,
    startPos?: number
  ): number;

For supported audio file formats, see What audio file formats does RTC SDK support. If the local music file does not exist, the file format is not supported, or the online music file URL is inaccessible, the SDK reports AudioMixingReasonCanNotOpen.

Note:
  • Using this method to play short sound effect files may result in playback failure. To play sound effects, use playEffect.
  • If you need to call this method multiple times, ensure that the interval between calls is greater than 500 ms.
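The 500 ms interval from the note above can be enforced with a small guard. This helper is illustrative, not part of the SDK:

```typescript
// Returns a function that allows a call only if at least minIntervalMs have
// elapsed since the last allowed call.
function makeCallGuard(minIntervalMs: number) {
  let last = -Infinity;
  return (nowMs: number): boolean => {
    if (nowMs - last < minIntervalMs) return false; // too soon, skip the call
    last = nowMs;
    return true;
  };
}

const canStartMixing = makeCallGuard(500);
// With an initialized IRtcEngine:
// if (canStartMixing(Date.now())) engine.startAudioMixing('/path/audio.mp4', false, -1, 0);
```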

Timing

This method can be called either before or after joining a channel.

Parameters

filePath
File path:
  • Windows: The absolute path or URL of the audio file, including the file name and extension. For example: C:\music\audio.mp4.
  • macOS: The absolute path or URL of the audio file, including the file name and extension. For example: /var/mobile/Containers/Data/audio.mp4.
loopback
Whether to play the music file only locally:
  • true: Play only locally. Only the local user can hear the music.
  • false: Publish the music file to remote users. Both local and remote users can hear the music.
cycle
Number of times to play the music file.
  • > 0: Number of times to play. For example, 1 means play once.
  • -1: Play in an infinite loop.
startPos
The playback position of the music file in milliseconds.

Return Values

  • 0: Success.
  • < 0: Failure:
    • -1: General error (not specifically categorized).
    • -2: Invalid parameter.
    • -3: SDK not ready:
      • Check whether the audio module is enabled.
      • Check the integrity of the assembly.
      • IRtcEngine initialization failed. Please reinitialize IRtcEngine.

startCameraCapture

Starts video capture using the camera.

abstract startCameraCapture(
    sourceType: VideoSourceType,
    config: CameraCapturerConfiguration
  ): number;

Call this method to start multiple camera captures simultaneously by specifying sourceType.

Parameters

sourceType
The type of video source. See VideoSourceType.
Note:
  • On desktop platforms, up to 4 video streams from camera capture are supported.
config
Video capture configuration. See CameraCapturerConfiguration.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

startDirectCdnStreaming

Starts direct CDN streaming on the host side.

abstract startDirectCdnStreaming(
    eventHandler: IDirectCdnStreamingEventHandler,
    publishUrl: string,
    options: DirectCdnStreamingMediaOptions
  ): number;
Deprecated since v4.6.2.

The SDK does not support pushing streams to the same URL more than once at the same time.

Media options: The SDK does not support setting both publishCameraTrack and publishCustomVideoTrack to true, nor both publishMicrophoneTrack and publishCustomAudioTrack to true. Configure the media options (DirectCdnStreamingMediaOptions) based on your scenario. For example, to push custom audio and video streams captured by the host, configure the media options as follows:
  • Set publishCustomAudioTrack to true and call pushAudioFrame
  • Set publishCustomVideoTrack to true and call pushVideoFrame
  • Ensure publishCameraTrack is false (default value)
  • Ensure publishMicrophoneTrack is false (default value)
Since v4.2.0, the SDK supports pushing audio-only streams. You can set publishCustomAudioTrack or publishMicrophoneTrack to true in DirectCdnStreamingMediaOptions and call pushAudioFrame to push an audio-only stream.

Parameters

eventHandler
See onDirectCdnStreamingStateChanged and onDirectCdnStreamingStats.
publishUrl
CDN streaming URL.
options
Media options for the host. See DirectCdnStreamingMediaOptions.

Return Values

  • 0: The method call was successful.
  • < 0: The method call failed. See Error Codes for details and resolution suggestions.

startEchoTest

Starts an audio and video call loop test.

abstract startEchoTest(config: EchoTestConfiguration): number;

To test whether local audio and video transmission is functioning properly, you can call this method to perform an audio and video call loop test. This checks whether the system's audio and video devices and the user's uplink and downlink networks are working properly. After starting the test, the user should speak or face the camera. The audio or video will play back after about 2 seconds. If audio plays normally, it indicates the system audio devices and network are functioning well. If video plays normally, it indicates the system video devices and network are functioning well.

Note:
  • When calling this method in a channel, ensure that no audio/video streams are being published.
  • After calling this method, you must call stopEchoTest to end the test. Otherwise, the user cannot perform another loop test or join a channel.
  • In live broadcast scenarios, only the host can call this method.

Timing

This method can be called before or after joining a channel.
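A sketch of the test configuration (field names, including intervalInSeconds, are assumed to mirror EchoTestConfiguration; the token and channel name are hypothetical placeholders):

```typescript
// Sketch only: field names assumed to mirror EchoTestConfiguration.
const echoConfig = {
  enableAudio: true,
  enableVideo: true,
  token: '<your token>',  // hypothetical placeholder
  channelId: 'echo_test', // hypothetical channel name
  intervalInSeconds: 2,   // assumed field: delay before playback
};
// With an initialized IRtcEngine:
// engine.startEchoTest(echoConfig);
// ...speak or face the camera and verify the playback...
// engine.stopEchoTest(); // required before another test or joining a channel
```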

Parameters

config
Configuration for the audio and video call loop test. See EchoTestConfiguration.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

startLastmileProbeTest

Starts a last-mile network probe test before a call.

abstract startLastmileProbeTest(config: LastmileProbeConfig): number;

Performs a last-mile network probe test before a call to report uplink and downlink bandwidth, packet loss, jitter, and round-trip time.

Timing

This method must be called before joining a channel. Do not call other methods before receiving the onLastmileQuality and onLastmileProbeResult callbacks, or this method may fail due to excessive API calls.

Parameters

config
The last-mile network probe configuration. See LastmileProbeConfig.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

startLocalAudioMixer

Starts local audio mixing.

abstract startLocalAudioMixer(config: LocalAudioMixerConfiguration): number;
This method allows you to merge multiple audio streams locally into a single stream. For example, you can merge audio from the local microphone, media player, sound card, and remote users into one stream, and then publish it to the channel.
  • To mix local audio streams, set publishMixedAudioTrack in ChannelMediaOptions to true to publish the mixed stream to the channel.
  • To mix remote audio streams, ensure the remote streams are published and subscribed to in the channel.
Note: To ensure audio quality, it is recommended not to exceed 10 audio streams in local mixing.

Scenario

You can enable this feature in the following scenarios:
  • Used with local video compositing, to synchronize and publish the audio streams related to the composed video.
  • In live streaming, users receive audio streams from the channel, mix them locally, and forward them to another channel.
  • In education, teachers can mix audio from interactions with students locally and forward the mixed stream to another channel.

Timing

This method can be called before or after joining a channel.

Parameters

config
Configuration for local audio mixing. See LocalAudioMixerConfiguration.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
    • -7: The IRtcEngine object is not initialized. You must successfully initialize the IRtcEngine object before calling this method.

startLocalVideoTranscoder

Starts local video mixing.

abstract startLocalVideoTranscoder(
    config: LocalTranscoderConfiguration
  ): number;

After calling this method, you can merge multiple video streams into a single stream locally. For example, merge camera-captured video, screen sharing, media player video, remote video, video files, images, etc., into a single stream, and then publish the mixed stream to the channel.

Note:
  • Local video mixing consumes significant CPU resources. Agora recommends using this feature on high-performance devices.
  • If you need to mix locally captured video streams, the SDK supports the following combinations:
    • On Windows: up to 4 camera video streams + 4 screen sharing streams.
    • On macOS: up to 4 camera video streams + 1 screen sharing stream.
  • When configuring mixing, ensure that the camera video stream capturing the portrait has a higher layer index than the screen sharing stream, otherwise the portrait may be covered and not displayed in the final mixed stream.

Scenario

You can enable local video mixing in scenarios such as remote meetings, live streaming, and online education. This allows users to conveniently view and manage multiple video feeds, and supports features like picture-in-picture. Here is a typical scenario for implementing picture-in-picture:
  1. Call enableVirtualBackground and set the custom background to BackgroundNone, i.e., separate the portrait from the background in the camera-captured video.
  2. Call startScreenCaptureBySourceType to start capturing the screen sharing video stream.
  3. Call this method and set the portrait video source as one of the sources in the local mixing configuration to achieve picture-in-picture in the final mixed video.
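The three steps above can be sketched as an ordered call sequence. The stub below only records call order for illustration; in a real app these are methods on an initialized IRtcEngine with their documented parameters:

```typescript
// Stub standing in for IRtcEngine, recording the order of the three
// picture-in-picture steps described above.
const calls: string[] = [];
const engineStub = {
  enableVirtualBackground: () => calls.push('enableVirtualBackground'),
  startScreenCaptureBySourceType: () => calls.push('startScreenCaptureBySourceType'),
  startLocalVideoTranscoder: () => calls.push('startLocalVideoTranscoder'),
};

engineStub.enableVirtualBackground();        // 1. separate the portrait (BackgroundNone)
engineStub.startScreenCaptureBySourceType(); // 2. capture the shared screen
engineStub.startLocalVideoTranscoder();      // 3. mix the portrait over the screen stream
```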

Parameters

config
Configuration for local video mixing. See LocalTranscoderConfiguration.
Note:
  • The maximum resolution for each video stream in the local mixing is 4096 × 2160. Exceeding this limit will cause the mixing to fail.
  • The maximum resolution for the mixed video stream is 4096 × 2160.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
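
For example, the picture-in-picture layout described above might be assembled as follows. This is a sketch: the TranscodingVideoStream and LocalTranscoderConfiguration shapes below are simplified stand-ins, and the field names and enum values are assumptions to verify against your SDK version.

```typescript
// Simplified stand-ins for the SDK's TranscodingVideoStream and
// LocalTranscoderConfiguration types; field names are assumptions.
interface TranscodingVideoStream {
  sourceType: number; // a VideoSourceType value
  x: number;          // position of this stream on the mixed canvas
  y: number;
  width: number;
  height: number;
  zOrder: number;     // higher values render on top
}

interface LocalTranscoderConfiguration {
  streamCount: number;
  videoInputStreams: TranscodingVideoStream[];
}

// Screen share as the background (zOrder 1) with the camera portrait on top
// (zOrder 2), so the portrait is not covered in the mixed stream.
const screenStream: TranscodingVideoStream = {
  sourceType: 2, // e.g. VideoSourceScreen; illustrative value
  x: 0, y: 0, width: 1280, height: 720, zOrder: 1,
};
const cameraStream: TranscodingVideoStream = {
  sourceType: 0, // e.g. a primary-camera source; illustrative value
  x: 960, y: 480, width: 320, height: 240, zOrder: 2,
};

const transcoderConfig: LocalTranscoderConfiguration = {
  streamCount: 2,
  videoInputStreams: [screenStream, cameraStream],
};

// With a real engine instance:
// engine.startLocalVideoTranscoder(transcoderConfig);
```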

startMediaRenderingTracing

Starts video frame rendering tracing.

abstract startMediaRenderingTracing(): number;

After this method is successfully called, the SDK uses the time of this call as the starting point and reports video frame rendering information via the onVideoRenderingTracingResult callback.

Note:
  • If you do not call this method, the SDK starts tracing video rendering events automatically using the time of the joinChannel call as the starting point. You can call this method at an appropriate time based on your business scenario to customize the tracing.
  • After you leave the current channel, the SDK automatically resets the tracing starting point to the time of the next joinChannel call.

Scenario

Agora recommends using this method together with UI elements in your app (such as buttons or sliders) to measure the time to first frame rendering from the user's perspective. For example, call this method when the user clicks the Join Channel button, and then use the onVideoRenderingTracingResult callback to obtain the duration of each stage in the video rendering process, allowing you to optimize each stage accordingly.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
    • -7: IRtcEngine is not initialized when the method is called.
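
The button-press flow described in the Scenario section can be sketched as follows, using a stub in place of the real engine (in an app, these calls go to the IRtcEngine returned by createAgoraRtcEngine; only the two methods used here are modeled):

```typescript
// Minimal slice of the engine surface used in this sketch.
type EngineLike = {
  startMediaRenderingTracing(): number;
  joinChannel(token: string, channelId: string, uid: number, options: object): number;
};

// Start tracing first, so the user's tap (not the joinChannel call)
// becomes the starting point of the rendering timeline.
function onJoinButtonPress(engine: EngineLike): void {
  engine.startMediaRenderingTracing();
  engine.joinChannel('<token>', 'demo_channel', 0, {});
}

// Stub engine that records call order, for illustration only.
const callOrder: string[] = [];
const stubEngine: EngineLike = {
  startMediaRenderingTracing: () => { callOrder.push('trace'); return 0; },
  joinChannel: () => { callOrder.push('join'); return 0; },
};
onJoinButtonPress(stubEngine);
```

The per-stage durations then arrive through the onVideoRenderingTracingResult callback.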

startOrUpdateChannelMediaRelay

Starts or updates cross-channel media stream forwarding.

abstract startOrUpdateChannelMediaRelay(
    configuration: ChannelMediaRelayConfiguration
  ): number;

The first successful call to this method starts cross-channel media stream forwarding. To forward streams to additional destination channels, or to stop forwarding to a channel, call this method again to add or remove destination channels. This feature supports forwarding to up to 6 destination channels. After a successful call, the SDK triggers the onChannelMediaRelayStateChanged callback to report the current cross-channel media stream forwarding state. Common states include:
  • If the onChannelMediaRelayStateChanged callback reports RelayStateRunning (2) and RelayOk (0), it means the SDK has started forwarding media streams between the source and destination channels.
  • If the onChannelMediaRelayStateChanged callback reports RelayStateFailure (3), it means an error occurred during cross-channel media stream forwarding.
Note:
  • Call this method after successfully joining a channel.
  • In a live streaming scenario, only users with the broadcaster role can call this method.
  • Contact technical support to enable the cross-channel media stream forwarding feature.
  • This feature does not support string-type UIDs.

Parameters

configuration
Cross-channel media stream forwarding configuration. See ChannelMediaRelayConfiguration.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
    • -1: General error (not specifically classified).
    • -2: Invalid parameter.
    • -8: Internal state error. Possibly because the user role is not broadcaster.
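
A relay configuration might be assembled as follows. This is a sketch: the ChannelMediaInfo and ChannelMediaRelayConfiguration shapes below are simplified stand-ins, and the field names are assumptions to verify against your SDK version.

```typescript
// Simplified stand-ins; field names are assumptions.
interface ChannelMediaInfo {
  channelName: string;
  token: string;
  uid: number;
}

interface ChannelMediaRelayConfiguration {
  srcInfo: ChannelMediaInfo;
  destInfos: ChannelMediaInfo[];
  destCount: number;
}

const relayConfig: ChannelMediaRelayConfiguration = {
  srcInfo: { channelName: 'source_channel', token: '<src_token>', uid: 0 },
  destInfos: [
    { channelName: 'dest_channel_1', token: '<dest_token_1>', uid: 10001 },
    { channelName: 'dest_channel_2', token: '<dest_token_2>', uid: 10002 },
  ],
  destCount: 2, // at most 6 destination channels are supported
};

// With a real engine instance (broadcaster role, after joining the source
// channel); calling it again with an updated destInfos adds or removes
// destination channels:
// engine.startOrUpdateChannelMediaRelay(relayConfig);
```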

startPreviewWithoutSourceType

Starts video preview.

abstract startPreviewWithoutSourceType(): number;

This method starts the local video preview.

Note:
  • Mirror mode is enabled by default for local preview.
  • After leaving the channel, the local preview remains active. You need to call stopPreview to stop the local preview.

Timing

You must call this method after enableVideo.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

startPreview

Starts video preview and specifies the video source to preview.

abstract startPreview(sourceType?: VideoSourceType): number;

This method starts the local video preview and specifies the video source to appear in the preview.

Note:
  • Mirror mode is enabled by default for local preview.
  • After leaving the channel, the local preview remains active. You need to call stopPreview to stop the local preview.

Timing

You must call this method after enableVideo.

Parameters

sourceType
The type of video source. See VideoSourceType.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
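
The required call order (enableVideo before startPreview) can be sketched with a stub engine; in an app, these calls go to the IRtcEngine returned by createAgoraRtcEngine, and the numeric sourceType value below is illustrative:

```typescript
// Minimal slice of the engine surface used in this sketch.
type EngineLike = {
  enableVideo(): number;
  startPreview(sourceType?: number): number;
};

// enableVideo must come first; 0 stands for a camera source value
// from VideoSourceType (illustrative).
function beginLocalPreview(engine: EngineLike): number {
  engine.enableVideo();
  return engine.startPreview(0);
}

// Stub that records call order, for illustration only.
const previewCalls: string[] = [];
const stubEngine: EngineLike = {
  enableVideo: () => { previewCalls.push('enableVideo'); return 0; },
  startPreview: () => { previewCalls.push('startPreview'); return 0; },
};
beginLocalPreview(stubEngine);
```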

startRhythmPlayer

Starts the virtual metronome.

abstract startRhythmPlayer(
    sound1: string,
    sound2: string,
    config: AgoraRhythmPlayerConfig
  ): number;

Deprecated since v4.6.2.

  • Once the virtual metronome is enabled, the SDK starts playing the specified audio files from the beginning and controls the playback duration of each file based on the beatsPerMinute setting in AgoraRhythmPlayerConfig. For example, if beatsPerMinute is set to 60, the SDK plays one beat per second. If the file duration exceeds the beat duration, the SDK only plays the portion corresponding to the beat duration.
  • By default, the sound of the virtual metronome is not published to remote users. If you want remote users to hear the metronome, set publishRhythmPlayerTrack in ChannelMediaOptions to true after calling this method.

Scenario

In music or physical education scenarios, instructors often use a metronome to help students practice with the correct rhythm. A measure consists of strong and weak beats, with the first beat of each measure being the strong beat and the rest being weak beats.

Timing

You can call this method before or after joining a channel.

Parameters

sound1
The absolute path or URL of the strong beat file, including the file name and extension. For example, C:\music\audio.mp4. Supported audio formats: see Supported Audio Formats by RTC SDK.
sound2
The absolute path or URL of the weak beat file, including the file name and extension. For example, C:\music\audio.mp4. Supported audio formats: see Supported Audio Formats by RTC SDK.
config
Metronome configuration. See AgoraRhythmPlayerConfig.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
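
For example, a 4/4 metronome at 60 BPM could be configured as follows. The AgoraRhythmPlayerConfig field names below (beatsPerMeasure, beatsPerMinute) are assumptions to verify against your SDK version, and the file paths are illustrative.

```typescript
// Assumed shape of AgoraRhythmPlayerConfig; verify field names.
interface AgoraRhythmPlayerConfig {
  beatsPerMeasure: number; // beats in one measure; beat 1 is the strong beat
  beatsPerMinute: number;  // 60 means the SDK plays one beat per second
}

const rhythmConfig: AgoraRhythmPlayerConfig = {
  beatsPerMeasure: 4,
  beatsPerMinute: 60,
};

// At 60 beats per minute, each beat (and thus the played portion of each
// audio file) lasts 1000 ms.
const beatDurationMs = 60_000 / rhythmConfig.beatsPerMinute;

// With a real engine instance; file paths are illustrative:
// engine.startRhythmPlayer('C:\\music\\strong.mp4', 'C:\\music\\weak.mp4', rhythmConfig);
```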

startRtmpStreamWithTranscoding

Starts pushing streams to a CDN and sets the transcoding configuration.

abstract startRtmpStreamWithTranscoding(
    url: string,
    transcoding: LiveTranscoding
  ): number;

Agora recommends using the more comprehensive server-side streaming feature. See Implement server-side CDN streaming. Call this method to push live audio and video streams to the specified CDN streaming URL and set the transcoding configuration. This method can only push media streams to one URL at a time. To push to multiple URLs, call this method multiple times. Each stream push represents a streaming task. The default maximum number of concurrent tasks is 200, meaning you can run up to 200 streaming tasks simultaneously under one Agora project. To increase the quota, contact technical support. After calling this method, the SDK triggers the onRtmpStreamingStateChanged callback locally to report the streaming status.

Note:
  • Call this method after joining a channel.
  • Only hosts in a live streaming scenario can call this method.
  • If the stream push fails and you want to retry, you must call stopRtmpStream before calling this method again. Otherwise, the SDK returns the same error code as the previous failure.

Parameters

url
The CDN streaming URL. Must be in RTMP or RTMPS format. The character length must not exceed 1024 bytes. Special characters such as Chinese characters are not supported.
transcoding
The transcoding configuration for the CDN stream. See LiveTranscoding.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.
    • -2: The URL or transcoding parameter is invalid. Check your URL or parameter settings.
    • -7: The SDK is not initialized before calling this method.
    • -19: The CDN streaming URL is already in use. Use a different URL.
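
A minimal transcoding setup might look like the following sketch. The TranscodingUser and LiveTranscoding shapes below are simplified stand-ins with assumed field names, and the URL is illustrative; verify both against your SDK version and CDN provider.

```typescript
// Simplified stand-ins; field names are assumptions.
interface TranscodingUser {
  uid: number;
  x: number;
  y: number;
  width: number;
  height: number;
}

interface LiveTranscoding {
  width: number;         // output canvas size
  height: number;
  videoBitrate: number;  // Kbps
  videoFramerate: number;
  userCount: number;
  transcodingUsers: TranscodingUser[];
}

const transcoding: LiveTranscoding = {
  width: 640,
  height: 360,
  videoBitrate: 400,
  videoFramerate: 15,
  userCount: 1,
  transcodingUsers: [{ uid: 12345, x: 0, y: 0, width: 640, height: 360 }],
};

// Must be RTMP or RTMPS and at most 1024 bytes; illustrative value.
const pushUrl = 'rtmp://push.example.com/live/stream_key';

// With a real engine instance (host role, after joining a channel):
// engine.startRtmpStreamWithTranscoding(pushUrl, transcoding);
```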

startRtmpStreamWithoutTranscoding

Starts pushing media streams without transcoding.

abstract startRtmpStreamWithoutTranscoding(url: string): number;

Agora recommends using the more advanced server-side streaming feature. See Implement Server-side Streaming. Call this method to push live audio and video streams to a specified streaming URL. This method supports pushing to only one URL at a time. To push to multiple URLs, call this method multiple times. After calling this method, the SDK triggers the onRtmpStreamingStateChanged callback locally to report the streaming status.

Note:
  • Call this method after joining a channel.
  • Only broadcasters in a live streaming scenario can call this method.
  • If the streaming fails and you want to retry, you must call stopRtmpStream before calling this method again. Otherwise, the SDK will return the same error code as the previous failure.

Parameters

url
The streaming URL. Must be in RTMP or RTMPS format. The maximum length is 1024 bytes. Special characters such as Chinese characters are not supported.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
    • -2: The URL or transcoding parameter is incorrect. Check your URL or parameter settings.
    • -7: The SDK is not initialized before calling this method.
    • -19: The streaming URL is already in use. Use a different streaming URL.

startScreenCaptureBySourceType

Starts screen capture and specifies the video source.

abstract startScreenCaptureBySourceType(
    sourceType: VideoSourceType,
    config: ScreenCaptureConfiguration
  ): number;

Note: This method is only available on macOS and Windows platforms.
  • If you start screen capture using this method, you must call stopScreenCaptureBySourceType to stop it.
  • On Windows, up to 4 screen capture video streams are supported.
  • On macOS, only 1 screen capture video stream is supported.

Scenario

In screen sharing scenarios, you need to call this method to start capturing screen video streams. The SDK provides several methods to start screen capture. Choose based on your use case:
  • startScreenCaptureByDisplayId/startScreenCaptureByWindowId: Supports capturing a single screen or window, suitable for sharing a single screen.
  • startScreenCaptureBySourceType: Supports specifying multiple video sources to capture multiple screen sharing streams, used for local compositing or multi-channel publishing scenarios.

Timing

This method can be called before or after joining a channel:
  • If called before joining a channel, then call joinChannel and set publishScreenCaptureVideo to true to start screen sharing.
  • If called after joining a channel, then call updateChannelMediaOptions and set publishScreenCaptureVideo to true to start screen sharing.

Parameters

sourceType
The type of video source. See VideoSourceType.
Note: On macOS, only VideoSourceScreen (2) is supported for this parameter.
config
Screen capture configuration. See ScreenCaptureConfiguration.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
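
A display capture might be set up as in the sketch below. The ScreenCaptureConfiguration shape is a simplified stand-in with assumed field names; the VideoSourceScreen value of 2 comes from the parameter note above, but verify everything against your SDK version.

```typescript
// Simplified stand-in; field names are assumptions.
interface ScreenCaptureConfiguration {
  isCaptureWindow: boolean; // false = capture a display, true = capture a window
  displayId: number;
  params: {
    dimensions: { width: number; height: number };
    frameRate: number;
  };
}

const VideoSourceScreen = 2; // the only sourceType valid on macOS, per the note above

const screenConfig: ScreenCaptureConfiguration = {
  isCaptureWindow: false,
  displayId: 0, // illustrative; real display IDs come from platform enumeration
  params: { dimensions: { width: 1920, height: 1080 }, frameRate: 15 },
};

// With a real engine instance:
// engine.startScreenCaptureBySourceType(VideoSourceScreen, screenConfig);
// ...and later, to stop this specific capture:
// engine.stopScreenCaptureBySourceType(VideoSourceScreen);
```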

startScreenCaptureByDisplayId

Starts capturing video stream from the specified screen.

abstract startScreenCaptureByDisplayId(
    displayId: number,
    regionRect: Rectangle,
    captureParams: ScreenCaptureParameters
  ): number;

Captures the video stream of a screen or a specific region of the screen.

Scenario

In screen sharing scenarios, you need to call this method to start capturing screen video streams. For implementation details, see Screen Sharing.

Timing

This method can be called before or after joining a channel:
  • If called before joining a channel, then call joinChannel and set publishScreenTrack or publishSecondaryScreenTrack to true to start screen sharing.
  • If called after joining a channel, then call updateChannelMediaOptions and set publishScreenTrack or publishSecondaryScreenTrack to true to start screen sharing.

Parameters

displayId
Specifies the screen ID to be shared.
Note: On Windows, if you want to share two screens (main + secondary) simultaneously, set displayId to -1 when calling this method.
regionRect
(Optional) Specifies the region to be shared relative to the entire screen. To share the entire screen, set the width and height of this parameter to 0.
captureParams
Configuration parameters for screen sharing. The default video encoding resolution is 1920 × 1080 (2,073,600 pixels), which is used for billing. See ScreenCaptureParameters.
Note: The video properties of the screen sharing stream should only be set via this parameter and are unrelated to setVideoEncoderConfiguration.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
    • -2: Invalid parameter.
    • -8: Invalid screen sharing state. This may occur if you are already sharing another screen or window. Try calling stopScreenCapture to stop the current sharing, then restart screen sharing.
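
Sharing a region of one display might look like the sketch below. Rectangle follows the shape documented here, while the ScreenCaptureParameters fields are assumptions to verify against your SDK version; the display ID is illustrative.

```typescript
interface Rectangle {
  x: number;
  y: number;
  width: number;
  height: number;
}

// Simplified stand-in; field names are assumptions.
interface ScreenCaptureParameters {
  dimensions: { width: number; height: number };
  frameRate: number;
  captureMouseCursor: boolean;
}

const displayId = 0; // illustrative; obtain real display IDs from the platform
// A 1280 x 720 region at the top-left corner of the screen.
const regionRect: Rectangle = { x: 0, y: 0, width: 1280, height: 720 };
const captureParams: ScreenCaptureParameters = {
  dimensions: { width: 1920, height: 1080 }, // the default (billing) resolution
  frameRate: 15,
  captureMouseCursor: true,
};

// With a real engine instance:
// engine.startScreenCaptureByDisplayId(displayId, regionRect, captureParams);
```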

startScreenCaptureByScreenRect

Starts capturing the video stream of a specified screen region.

abstract startScreenCaptureByScreenRect(
    screenRect: Rectangle,
    regionRect: Rectangle,
    captureParams: ScreenCaptureParameters
  ): number;

Deprecated. Use startScreenCaptureByDisplayId instead; it is strongly recommended, especially if you need to share the screen when an external display is connected.

Shares a screen or a portion of the screen. You need to specify the screen region to be shared in this method. This method can be called either before or after joining a channel. The differences are as follows:
  • If you call this method before joining a channel, then call joinChannel to join the channel and set publishScreenTrack or publishSecondaryScreenTrack to true, screen sharing starts.
  • If you call this method after joining a channel, then call updateChannelMediaOptions to update the channel media options and set publishScreenTrack or publishSecondaryScreenTrack to true, screen sharing starts.
Note: This method is only applicable to Windows platform.

Parameters

screenRect
Specifies the position of the screen to be shared relative to the virtual screen.
regionRect
Specifies the position of the region to be shared relative to the entire screen. If not specified, the entire screen is shared. See Rectangle. If the shared region exceeds the screen boundaries, only the content within the screen is shared; if width or height is set to 0, the entire screen is shared.
captureParams
Encoding configuration parameters for screen sharing. The default resolution is 1920 x 1080, i.e., 2,073,600 pixels. This pixel count is used for billing purposes. See ScreenCaptureParameters.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.
    • -2: Invalid parameter.
    • -8: Invalid screen sharing state. This may occur if you are already sharing another screen or window. Try calling stopScreenCapture to stop the current sharing, then restart screen sharing.

startScreenCaptureByWindowId

Starts capturing the video stream of a specified window.

abstract startScreenCaptureByWindowId(
    windowId: any,
    regionRect: Rectangle,
    captureParams: ScreenCaptureParameters
  ): number;
Shares a window or a portion of the window. You need to specify the window ID you want to share in this method. This method supports sharing Universal Windows Platform (UWP) application windows. Agora has tested mainstream UWP applications using the latest SDK. The results are as follows:

| OS Version | Application | Compatible Version | Supported |
| --- | --- | --- | --- |
| Windows 10 | Chrome | 76.0.3809.100 | No |
| Windows 10 | Office Word | 18.1903.1152.0 | Yes |
| Windows 10 | Office Excel | 18.1903.1152.0 | No |
| Windows 10 | Office PPT | 18.1903.1152.0 | Yes |
| Windows 10 | WPS Word | 11.1.0.9145 | Yes |
| Windows 10 | WPS Excel | 11.1.0.9145 | Yes |
| Windows 10 | WPS PPT | 11.1.0.9145 | Yes |
| Windows 10 | Built-in Media Player | All versions | Yes |
| Windows 8 | Chrome | All versions | Yes |
| Windows 8 | Office Word | All versions | Yes |
| Windows 8 | Office Excel | All versions | Yes |
| Windows 8 | Office PPT | All versions | Yes |
| Windows 8 | WPS Word | 11.1.0.9098 | Yes |
| Windows 8 | WPS Excel | 11.1.0.9098 | Yes |
| Windows 8 | WPS PPT | 11.1.0.9098 | Yes |
| Windows 8 | Built-in Media Player | All versions | Yes |
| Windows 7 | Chrome | 73.0.3683.103 | No |
| Windows 7 | Office Word | All versions | Yes |
| Windows 7 | Office Excel | All versions | Yes |
| Windows 7 | Office PPT | All versions | Yes |
| Windows 7 | WPS Word | 11.1.0.9098 | No |
| Windows 7 | WPS Excel | 11.1.0.9098 | No |
| Windows 7 | WPS PPT | 11.1.0.9098 | Yes |
| Windows 7 | Built-in Media Player | All versions | No |

Note: This method is only applicable to macOS and Windows platforms. On Windows, window sharing relies on WGC (Windows Graphics Capture) or GDI (Graphics Device Interface). On systems earlier than Windows 10 version 2004, WGC cannot disable mouse capture, so setting captureMouseCursor to false may not take effect when sharing a window. See ScreenCaptureParameters.

Scenario

In screen sharing scenarios, you need to call this method to start capturing the screen video stream. For implementation details, see Screen Sharing.

Timing

This method can be called either before or after joining a channel. The differences are as follows:
  • If you call this method before joining a channel, then call joinChannel to join the channel and set publishScreenTrack or publishSecondaryScreenTrack to true, screen sharing starts.
  • If you call this method after joining a channel, then call updateChannelMediaOptions to update the channel media options and set publishScreenTrack or publishSecondaryScreenTrack to true, screen sharing starts.

Parameters

windowId
Specifies the ID of the window to be shared.
regionRect
(Optional) Specifies the position of the region to be shared relative to the entire screen. If not specified, the entire screen is shared. See Rectangle. If the shared region exceeds the window boundaries, only the content within the window is shared; if width or height is 0, the entire window is shared.
Note: Electron for UnionTech OS SDK currently does not support this parameter.
captureParams
Configuration parameters for screen sharing. The default resolution is 1920 x 1080, i.e., 2,073,600 pixels. This pixel count is used for billing purposes. See ScreenCaptureParameters.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.
    • -2: Invalid parameter.
    • -8: Invalid screen sharing state. This may occur if you are already sharing another screen or window. Try calling stopScreenCapture to stop the current sharing, then restart screen sharing.
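
Sharing an entire window might look like the sketch below. Rectangle follows the shape documented here, the ScreenCaptureParameters fields are assumptions, and the window ID is illustrative (real IDs come from window enumeration on the platform).

```typescript
interface Rectangle {
  x: number;
  y: number;
  width: number;
  height: number;
}

// Simplified stand-in; field names are assumptions.
interface ScreenCaptureParameters {
  dimensions: { width: number; height: number };
  frameRate: number;
}

const windowId = 132358; // illustrative; enumerate windows to obtain real IDs
// Width and height of 0 mean the entire window is shared.
const wholeWindow: Rectangle = { x: 0, y: 0, width: 0, height: 0 };
const windowCaptureParams: ScreenCaptureParameters = {
  dimensions: { width: 1920, height: 1080 },
  frameRate: 15,
};

// With a real engine instance:
// engine.startScreenCaptureByWindowId(windowId, wholeWindow, windowCaptureParams);
```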

stopAllEffects

Stops playback of all audio effect files.

abstract stopAllEffects(): number;

When you no longer need to play audio effects, you can call this method to stop playback. If you only need to pause playback, call pauseAllEffects.

Timing

Call this method after playEffect.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

stopAudioMixing

Stops playing the music file.

abstract stopAudioMixing(): number;

After calling startAudioMixing to play a music file, call this method to stop playback. To pause playback instead, call pauseAudioMixing.

Timing

You need to call this method after joining a channel.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

stopCameraCapture

Stops video capture using the camera.

abstract stopCameraCapture(sourceType: VideoSourceType): number;

After calling startCameraCapture to start one or more video streams from camera capture, you can call this method to stop one or more video streams by setting sourceType.

Note: If you are using the local compositing feature, calling this method to stop video capture from the first camera will interrupt the local compositing.

Parameters

sourceType
The type of video source. See VideoSourceType.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

stopChannelMediaRelay

Stops the media stream relay across channels. Once stopped, the broadcaster leaves all destination channels.

abstract stopChannelMediaRelay(): number;

After a successful call, the SDK triggers the onChannelMediaRelayStateChanged callback. If it reports RelayStateIdle (0) and RelayOk (0), it indicates that the media stream relay has stopped.

Note: If the method call fails, the SDK triggers the onChannelMediaRelayStateChanged callback and reports the error code RelayErrorServerNoResponse (2) or RelayErrorServerConnectionLost (8). You can call the leaveChannel method to leave the channel, and the media stream relay will stop automatically.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
    • -5: The method call is rejected. There is no ongoing media stream relay.

stopDirectCdnStreaming

Stops direct CDN streaming on the host side.

abstract stopDirectCdnStreaming(): number;

Deprecated since v4.6.2.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

stopEchoTest

Stops the audio loopback test.

abstract stopEchoTest(): number;

After calling startEchoTest, you must call this method to end the test. Otherwise, the user cannot start another echo test or join a channel.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
    • -5 (ERR_REFUSED): Failed to stop the test. The test may not be running.

stopEffect

Stops playback of the specified audio effect file.

abstract stopEffect(soundId: number): number;

When you no longer need to play a specific audio effect file, you can call this method to stop playback. If you only need to pause playback, call pauseEffect.

Timing

Call this method after playEffect.

Parameters

soundId
The ID of the specified audio effect file. Each audio effect file has a unique ID.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

stopLastmileProbeTest

Stops the last-mile network probe test.

abstract stopLastmileProbeTest(): number;

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

stopLocalAudioMixer

Stops local audio mixing.

abstract stopLocalAudioMixer(): number;

After calling startLocalAudioMixer, if you want to stop local audio mixing, call this method.

Timing

This method must be called after startLocalAudioMixer.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
    • -7: The IRtcEngine object has not been initialized. You need to successfully initialize the IRtcEngine object before calling this method.

stopPreview

Stops video preview.

abstract stopPreview(sourceType?: VideoSourceType): number;

Scenario

After calling startPreview to start preview, if you want to stop the local video preview, call this method.

Timing

Call this method before joining a channel or after leaving a channel.

Parameters

sourceType
The type of video source. See VideoSourceType.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

stopRhythmPlayer

Stops the virtual metronome.

abstract stopRhythmPlayer(): number;

After calling startRhythmPlayer, you can call this method to stop the virtual metronome.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

stopRtmpStream

Stops pushing streams to a CDN.

abstract stopRtmpStream(url: string): number;

Agora recommends using the more comprehensive server-side streaming feature. See Implement server-side CDN streaming. Call this method to stop the live stream on the specified CDN streaming URL. This method can only stop one URL at a time. To stop multiple URLs, call this method multiple times. After calling this method, the SDK triggers the onRtmpStreamingStateChanged callback locally to report the streaming status.

Parameters

url
The CDN streaming URL. Must be in RTMP or RTMPS format. The character length must not exceed 1024 bytes. Special characters such as Chinese characters are not supported.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.

stopScreenCapture

Stops screen capture.

abstract stopScreenCapture(): number;

Scenario

If you call startScreenCaptureBySourceType, startScreenCaptureByWindowId, or startScreenCaptureByDisplayId to start screen capture, you need to call this method to stop it.

Timing

You can call this method either before or after joining a channel.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

stopScreenCaptureBySourceType

Stops screen capture for the specified video source.

abstract stopScreenCaptureBySourceType(sourceType: VideoSourceType): number;

Scenario

If you call startScreenCaptureBySourceType to start one or more screen captures, you need to call this method to stop screen capture and specify the screen via the sourceType parameter.

Timing

You can call this method either before or after joining a channel.

Parameters

sourceType
The type of the video source. See VideoSourceType.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

takeSnapshot

Takes a video snapshot.

abstract takeSnapshot(uid: number, filePath: string): number;

This method captures a snapshot of the specified user's video stream, generates a JPG image, and saves it to the specified path.

Note:
  • This method is asynchronous. When the call returns, the SDK has not yet completed the snapshot.
  • When used for local video snapshot, it captures the video stream specified in ChannelMediaOptions.
  • If the video is pre-processed, such as with watermark or beautification, the snapshot includes the effects of the pre-processing.

Timing

This method must be called after joining a channel.

Parameters

uid
User ID. Set to 0 to capture a snapshot of the local user's video.
filePath
The local path where the snapshot is saved. Must include the file name and extension, for example:
  • Windows: C:\Users\<user_name>\AppData\Local\Agora\<process_name>\example.jpg
  • macOS: ~/Library/Logs/example.jpg
Note: Make sure the directory exists and is writable.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
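
A snapshot of the local user might be taken as in the sketch below. The paths are illustrative (substitute a real, writable directory), and the event used to observe the asynchronous result should be verified against your SDK version.

```typescript
const localUid = 0; // 0 = snapshot the local user's video

// Platform-appropriate, writable locations; file name and .jpg extension
// are required. Both paths are illustrative.
const snapshotPathWindows = 'C:\\Users\\me\\AppData\\Local\\Agora\\demo\\example.jpg';
const snapshotPathMacOS = '/Users/me/Library/Logs/example.jpg';

// With a real engine instance (after joining a channel):
// engine.takeSnapshot(localUid, snapshotPathMacOS);
// The call is asynchronous: the method returning 0 only means the request
// was accepted; listen for the SDK's snapshot-result event to learn when
// (and whether) the JPG was actually written.
```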

takeSnapshotWithConfig

Takes a video snapshot at a specified observation point.

abstract takeSnapshotWithConfig(uid: number, config: SnapshotConfig): number;

This method captures a snapshot of the specified user's video stream, generates a JPG image, and saves it to the specified path.

Note:
  • This method is asynchronous. When the call returns, the SDK has not yet completed the snapshot.
  • When used for local video snapshot, it captures the video stream specified in ChannelMediaOptions.

Timing

This method must be called after joining a channel.

Parameters

uid
User ID. Set to 0 to capture a snapshot of the local user's video.
config
Snapshot configuration. See SnapshotConfig.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

unloadAllEffects

Releases all preloaded audio effect files from memory.

abstract unloadAllEffects(): number;

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

unloadEffect

Releases a preloaded audio effect file from memory.

abstract unloadEffect(soundId: number): number;

After loading an audio effect file into memory using preloadEffect, call this method to release the audio effect file.

Timing

You can call this method before or after joining a channel.

Parameters

soundId
The ID of the specified audio effect file. Each audio effect file has a unique ID.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

unregisterAudioEncodedFrameObserver

Unregisters the audio encoded frame observer.

abstract unregisterAudioEncodedFrameObserver(
    observer: IAudioEncodedFrameObserver
  ): number;

Parameters

observer
Audio encoded frame observer. See IAudioEncodedFrameObserver.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.

unregisterAudioSpectrumObserver

Unregisters the audio spectrum observer.

abstract unregisterAudioSpectrumObserver(
    observer: IAudioSpectrumObserver
  ): number;

After calling registerAudioSpectrumObserver, if you want to unregister the audio spectrum observer, call this method.

Note: This method can be called before or after joining a channel.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

unregisterMediaMetadataObserver

Unregisters the media metadata observer.

abstract unregisterMediaMetadataObserver(
    observer: IMetadataObserver,
    type: MetadataType
  ): number;

Parameters

observer
The metadata observer. See IMetadataObserver.
type
The metadata type. Currently, only VideoMetadata is supported. See MetadataType.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

updateChannelMediaOptions

Updates channel media options after joining the channel.

abstract updateChannelMediaOptions(options: ChannelMediaOptions): number;

Parameters

options
Channel media options. See ChannelMediaOptions.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.
    • -2: Invalid ChannelMediaOptions values. For example, using an invalid token or setting an invalid user role. You must provide valid parameters.
    • -7: The IRtcEngine object is not initialized. You must initialize the IRtcEngine object before calling this method.
    • -8: The internal state of the IRtcEngine object is incorrect. This may happen if the user is not in a channel. Use the onConnectionStateChanged callback to determine whether the user is in a channel. If you receive ConnectionStateDisconnected(1) or ConnectionStateFailed(5), the user is not in a channel. You must call joinChannel before using this method.
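
For example, a client might switch from publishing its camera to publishing a screen share mid-session. The ChannelMediaOptions field names below follow the SDK's naming pattern but are assumptions to verify against your version.

```typescript
// Simplified stand-in for ChannelMediaOptions; field names are assumptions.
interface ChannelMediaOptionsLike {
  publishCameraTrack?: boolean;
  publishScreenCaptureVideo?: boolean;
  publishMicrophoneTrack?: boolean;
}

const options: ChannelMediaOptionsLike = {
  publishCameraTrack: false,       // stop publishing the camera feed
  publishScreenCaptureVideo: true, // start publishing the screen share
};

// With a real engine instance (only valid while in a channel; the call
// fails with -8 otherwise):
// engine.updateChannelMediaOptions(options);
```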

updateLocalAudioMixerConfiguration

Updates the configuration of local audio mixing.

abstract updateLocalAudioMixerConfiguration(
    config: LocalAudioMixerConfiguration
  ): number;

After calling startLocalAudioMixer, if you want to update the configuration of local audio mixing, call this method.

Note: To ensure audio quality, it is recommended that the number of audio streams participating in local mixing does not exceed 10.

Timing

This method must be called after startLocalAudioMixer.

Parameters

config
Configuration for local audio mixing. See LocalAudioMixerConfiguration.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.
    • -7: The IRtcEngine object has not been initialized. You need to successfully initialize the IRtcEngine object before calling this method.

updateLocalTranscoderConfiguration

Updates the local video mixing configuration.

abstract updateLocalTranscoderConfiguration(
    config: LocalTranscoderConfiguration
  ): number;

After calling startLocalVideoTranscoder, if you want to update the local video mixing configuration, call this method.

Note: If you want to update the type of local video source used for mixing, such as adding a second camera or screen capture, you need to call this method after startCameraCapture or startScreenCaptureBySourceType.

Parameters

config
Configuration for local video mixing. See LocalTranscoderConfiguration.

Return Values

  • 0: Success.
  • < 0: Failure. See Error Codes for details and resolution suggestions.

updatePreloadChannelToken

Updates the wildcard token for the preloaded channel.

abstract updatePreloadChannelToken(token: string): number;

You need to manage the lifecycle of the wildcard token yourself. When the token expires, you must generate a new one on your server and pass it to the SDK using this method.

Scenario

In scenarios where frequent channel switching or multiple channels are needed, using a wildcard token avoids repeated token requests when switching channels, which speeds up the switching process and reduces the load on your token server.

Parameters

token
The new token.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and troubleshooting.
    • -2: Invalid parameter. For example, an invalid token. You must provide valid parameters and rejoin the channel.
    • -7: The IRtcEngine object is not initialized. You must initialize the IRtcEngine object before calling this method.

updateRtmpTranscoding

Updates the CDN transcoding configuration.

abstract updateRtmpTranscoding(transcoding: LiveTranscoding): number;

Agora recommends using the more comprehensive server-side streaming feature. See Implement server-side CDN streaming. After enabling transcoding streaming, you can dynamically update the transcoding configuration based on your scenario. After the configuration is updated, the SDK triggers the onTranscodingUpdated callback.

Parameters

transcoding
The transcoding configuration for the CDN stream. See LiveTranscoding.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.

updateScreenCaptureParameters

Updates the screen capture configuration parameters.

abstract updateScreenCaptureParameters(
    captureParams: ScreenCaptureParameters
  ): number;
Note: Call this method after starting screen or window sharing.

Parameters

captureParams
Encoding configuration parameters for screen sharing. See ScreenCaptureParameters.
Note: The video properties of the screen sharing stream should be set only through this parameter and are unrelated to setVideoEncoderConfiguration.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.
    • -2: Invalid parameter.
    • -8: Invalid screen sharing state. This may occur if you are already sharing another screen or window. Try calling stopScreenCapture to stop the current sharing, then restart screen sharing.

updateScreenCaptureRegion

Updates the screen capture region.

abstract updateScreenCaptureRegion(regionRect: Rectangle): number;
Note: Call this method after starting screen or window sharing.

Parameters

regionRect
The position of the region to share, relative to the entire screen or window. See Rectangle. If not specified, the entire screen or window is shared. If the shared region exceeds the screen or window boundaries, only the content within the boundaries is shared. If width or height is set to 0, the entire screen or window is shared.

Return Values

  • 0: The method call succeeds.
  • < 0: The method call fails. See Error Codes for details and resolution suggestions.
    • -2: Invalid parameter.
    • -8: Invalid screen sharing state. This may occur if you are already sharing another screen or window. Try calling stopScreenCapture to stop the current sharing, then restart screen sharing.
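Because content outside the screen or window is not shared anyway, it can be convenient to clamp a requested region to the bounds before passing it in. The Rectangle fields below (x, y, width, height) follow the common shape but should be treated as assumptions here:

```typescript
// Rectangle sketch; treat the field names as assumptions about the SDK type.
interface Rectangle {
  x: number;
  y: number;
  width: number;
  height: number;
}

// Clamp a requested capture region to the screen bounds so the region
// passed to the SDK never extends past the visible area.
function clampRegion(region: Rectangle, screenW: number, screenH: number): Rectangle {
  const x = Math.max(0, Math.min(region.x, screenW));
  const y = Math.max(0, Math.min(region.y, screenH));
  return {
    x,
    y,
    width: Math.min(region.width, screenW - x),
    height: Math.min(region.height, screenH - y),
  };
}

const region = clampRegion({ x: 100, y: 100, width: 2000, height: 2000 }, 1920, 1080);
// engine.updateScreenCaptureRegion(region);
```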