IRtcEngine
The base interface class of the RTC SDK that implements the main functions of real-time audio and video.
IRtcEngine provides the main methods for the App to call. Before calling other APIs, you must first call createAgoraRtcEngine to create an IRtcEngine object.
registerEventHandler
Registers the main callback event handler.
abstract registerEventHandler(eventHandler: IRtcEngineEventHandler): boolean;
The interface class IRtcEngineEventHandler is used by the SDK to send callback event notifications to the app. The app obtains SDK event notifications by inheriting the methods of this interface class. All methods of the interface class have default (empty) implementations. The app can inherit only the events it cares about as needed. In the callback methods, the app should not perform time-consuming operations or call APIs that may cause blocking (such as sendStreamMessage), otherwise it may affect the operation of the SDK.
Parameters
- eventHandler
- The callback event to be added. See IRtcEngineEventHandler.
Return Values
- true: The method call succeeds.
- false: The method call fails. See Error Codes for details and resolution suggestions.
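As an illustration, the sketch below registers a handler that implements only the events the app cares about. The IRtcEngineEventHandler and RtcConnection shapes are stubbed minimally here for a self-contained example; in a real app they come from the SDK package, and the SDK itself dispatches the events rather than the simulated call at the end.

```typescript
// Minimal stand-ins for the SDK types (illustration only).
interface RtcConnection { channelId: string; localUid: number; }
interface IRtcEngineEventHandler {
  onJoinChannelSuccess?: (connection: RtcConnection, elapsed: number) => void;
  onError?: (err: number, msg: string) => void;
}

const joined: string[] = [];
// Implement only the events you need; unimplemented callbacks keep
// the SDK's default (empty) behavior.
const handler: IRtcEngineEventHandler = {
  onJoinChannelSuccess: (connection, elapsed) => {
    joined.push(`${connection.channelId}:${elapsed}`);
  },
};

// In a real app: engine.registerEventHandler(handler);
// Here we simulate the SDK dispatching the event to the handler.
handler.onJoinChannelSuccess?.({ channelId: 'demo', localUid: 1 }, 120);
console.log(joined[0]); // demo:120
```

Keep the callback bodies lightweight, as noted above: heavy work inside a callback can block the SDK.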
addListener
Adds an IRtcEngineEvent listener.
addListener?<EventType extends keyof IRtcEngineEvent>(
eventType: EventType,
listener: IRtcEngineEvent[EventType]
): void;
After successfully calling this method, you can listen to events of the corresponding IRtcEngine object and get data through IRtcEngineEvent. You can add multiple listeners for the same event as needed.
Parameters
- eventType
- The name of the target event to listen to. See IRtcEngineEvent.
- listener
- The callback function corresponding to eventType. For example, to add onJoinChannelSuccess:
const onJoinChannelSuccess = (connection: RtcConnection, elapsed: number) => {};
engine.addListener('onJoinChannelSuccess', onJoinChannelSuccess);
addVideoWatermark
Adds a local video watermark.
abstract addVideoWatermark(
watermarkUrl: string,
options: WatermarkOptions
): number;
- Deprecated
- This method is deprecated. Use addVideoWatermarkWithConfig instead.
- If the video encoding orientation (OrientationMode) is fixed to landscape or landscape in adaptive mode, landscape coordinates are used for the watermark.
- If the video encoding orientation (OrientationMode) is fixed to portrait or portrait in adaptive mode, portrait coordinates are used.
- When setting the watermark coordinates, the image area of the watermark must not exceed the video dimensions set in the setVideoEncoderConfiguration method. Otherwise, the exceeding part will be cropped.
- You must call this method after calling enableVideo.
- If you only want to add a watermark for CDN streaming, you can use this method or startRtmpStreamWithTranscoding.
- The watermark image must be in PNG format. This method supports all pixel formats of PNG: RGBA, RGB, Palette, Gray, and Alpha_gray.
- If the size of the PNG image to be added does not match the size you set in this method, the SDK will scale or crop the PNG image to match the setting.
- If local video is set to mirror mode, the local watermark will also be mirrored. To avoid the watermark being mirrored when local users view the local video, it is recommended not to use both mirror and watermark features for local video. Implement the local watermark feature at the application level.
Parameters
- watermarkUrl
- The local path of the watermark image to be added. This method supports adding watermark images from local absolute/relative paths.
- options
- Settings for the watermark image to be added. See WatermarkOptions.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
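Since any part of the watermark that exceeds the encoded video dimensions is cropped, a small pre-check on the watermark rectangle can catch layout mistakes early. The helper below is hypothetical and not part of the SDK; it only restates the cropping rule described above.

```typescript
// Hypothetical helper: check that a watermark rectangle stays inside
// the video dimensions set in setVideoEncoderConfiguration. Anything
// outside this area would be cropped by the SDK.
interface Rect { x: number; y: number; width: number; height: number; }

function watermarkFits(mark: Rect, videoWidth: number, videoHeight: number): boolean {
  return (
    mark.x >= 0 &&
    mark.y >= 0 &&
    mark.x + mark.width <= videoWidth &&
    mark.y + mark.height <= videoHeight
  );
}

console.log(watermarkFits({ x: 10, y: 10, width: 200, height: 100 }, 640, 360));  // true
console.log(watermarkFits({ x: 500, y: 10, width: 200, height: 100 }, 640, 360)); // false (right edge at 700 > 640)
```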
addVideoWatermarkWithConfig
Adds a watermark image to the local video stream.
abstract addVideoWatermarkWithConfig(configs: WatermarkConfig): number;
- Since
- Available since v4.6.2.
You can use this method to overlay a watermark image on the local video stream and configure the position, size, and visibility of the watermark in the preview using WatermarkConfig.
Parameters
- configs
- Watermark configuration. See WatermarkConfig.
Return Values
- 0: Success.
- < 0: Failure.
adjustAudioMixingPlayoutVolume
Adjusts the local playback volume of the music file.
abstract adjustAudioMixingPlayoutVolume(volume: number): number;
Timing
You need to call this method after calling startAudioMixing and receiving the onAudioMixingStateChanged(AudioMixingStatePlaying) callback.
Parameters
- volume
- The volume of the music file. The range is [0,100]. 100 (default) is the original volume.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
adjustAudioMixingPublishVolume
Adjusts the remote playback volume of the music file.
abstract adjustAudioMixingPublishVolume(volume: number): number;
This method adjusts the playback volume of the mixed music file on the remote side.
Timing
You need to call this method after calling startAudioMixing and receiving the onAudioMixingStateChanged(AudioMixingStatePlaying) callback.
Parameters
- volume
- The volume of the music file. The range is [0,100]. 100 (default) is the original volume.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
adjustAudioMixingVolume
Adjusts the playback volume of the music file.
abstract adjustAudioMixingVolume(volume: number): number;
This method adjusts the playback volume of the mixed music file on both local and remote sides.
Timing
You need to call this method after startAudioMixing.
Parameters
- volume
- The volume range of the music file is 0~100. 100 (default) is the original file volume.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
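The adjustAudioMixing* family all accepts volumes in [0,100] with 100 as the original volume. A hypothetical clamp (not part of the SDK) that keeps app-supplied values inside that range:

```typescript
// Hypothetical clamp for the [0,100] music-file volume range used by
// adjustAudioMixingVolume, adjustAudioMixingPlayoutVolume, and
// adjustAudioMixingPublishVolume; not an SDK function.
function clampMixingVolume(volume: number): number {
  return Math.min(100, Math.max(0, Math.round(volume)));
}

console.log(clampMixingVolume(120)); // 100
console.log(clampMixingVolume(-5));  // 0
console.log(clampMixingVolume(80));  // 80
```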
adjustCustomAudioPlayoutVolume
Adjusts the playback volume of a custom audio capture track locally.
abstract adjustCustomAudioPlayoutVolume(
trackId: number,
volume: number
): number;
After calling this method to set the local playback volume of audio, if you want to readjust the volume, you can call this method again.
Parameters
- trackId
- Audio track ID. Set this parameter to the custom audio track ID returned by the createCustomAudioTrack method.
- volume
- Playback volume of the custom captured audio, ranging from [0, 100]. 0 means mute, 100 means original volume.
Return Values
- 0: The method call was successful.
- < 0: The method call failed. See Error Codes for details and resolution suggestions.
adjustCustomAudioPublishVolume
Adjusts the playback volume of a custom audio capture track on the remote end.
abstract adjustCustomAudioPublishVolume(
trackId: number,
volume: number
): number;
After calling this method to set the playback volume of the audio on the remote end, you can call this method again to readjust the volume.
Parameters
- trackId
- Audio track ID. Set this parameter to the custom audio track ID returned by the createCustomAudioTrack method.
- volume
- Playback volume of the custom captured audio, in the range [0,100]. 0 means mute, 100 means original volume.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
adjustPlaybackSignalVolume
Adjusts the signal volume of all remote users for local playback.
abstract adjustPlaybackSignalVolume(volume: number): number;
This method adjusts the signal volume of all remote users after mixing for local playback. If you need to adjust the signal volume of a specific remote user for local playback, it is recommended to call adjustUserPlaybackSignalVolume.
Timing
Can be called before or after joining a channel.
Parameters
- volume
- Volume, range is [0,400].
- 0: Mute.
- 100: (Default) Original volume.
- 400: 4 times the original volume, with built-in overflow protection.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
adjustRecordingSignalVolume
Adjusts the recording signal volume.
abstract adjustRecordingSignalVolume(volume: number): number;
If you only want to mute the audio signal, we recommend using muteRecordingSignal.
Timing
You can call this method before or after joining a channel.
Parameters
- volume
- The volume. The value ranges from [0,400].
- 0: Mute.
- 100: (Default) Original volume.
- 400: Four times the original volume with overflow protection.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
adjustUserPlaybackSignalVolume
Adjusts the playback volume of a specified remote user locally.
abstract adjustUserPlaybackSignalVolume(uid: number, volume: number): number;
You can call this method during a call to adjust the playback volume of a specified remote user locally. To adjust the playback volume of multiple users locally, call this method multiple times.
Timing
You must call this method after joining a channel.
Parameters
- uid
- Remote user ID.
- volume
- Volume. The range is [0,400].
- 0: Mute.
- 100: (Default) Original volume.
- 400: Four times the original volume with built-in overflow protection.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
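Because the method adjusts one user at a time, adjusting several users means one call per user. The sketch below stubs the engine so the example is self-contained; in a real app the calls go to the IRtcEngine instance after joining a channel.

```typescript
// Sketch: per-user local playback volume, one call per remote user.
// EngineLike is a stub standing in for IRtcEngine.
interface EngineLike {
  adjustUserPlaybackSignalVolume(uid: number, volume: number): number;
}

const applied: Array<[number, number]> = [];
const engine: EngineLike = {
  adjustUserPlaybackSignalVolume(uid, volume) {
    if (volume < 0 || volume > 400) return -2; // invalid parameter
    applied.push([uid, volume]);
    return 0; // success
  },
};

// Desired local playback volumes for two remote users.
const volumes: Record<number, number> = { 1001: 50, 1002: 200 };
for (const [uid, vol] of Object.entries(volumes)) {
  engine.adjustUserPlaybackSignalVolume(Number(uid), vol);
}
console.log(applied.length); // 2
```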
clearVideoWatermarks
Removes added video watermarks.
abstract clearVideoWatermarks(): number;
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
complain
Report call quality issues.
abstract complain(callId: string, description: string): number;
This method allows users to report call quality issues. It must be called after leaving the channel.
Parameters
- callId
- Call ID. You can obtain this by calling getCallId.
- description
- Description of the call. The length should be less than 800 bytes.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
- -1: General error (not specifically classified).
- -2: Invalid parameter.
- -7: Method called before IRtcEngine is initialized.
configRhythmPlayer
Configures the virtual metronome.
abstract configRhythmPlayer(config: AgoraRhythmPlayerConfig): number;
- Deprecated
- Deprecated since v4.6.2.
- After calling startRhythmPlayer, you can call this method to reconfigure the virtual metronome.
- After the virtual metronome is enabled, the SDK starts playing the specified audio files from the beginning and controls the playback duration of each file based on the beatsPerMinute you set in AgoraRhythmPlayerConfig. For example, if beatsPerMinute is set to 60, the SDK plays one beat per second. If the file duration exceeds the beat duration, the SDK only plays the portion of the audio corresponding to the beat duration.
- By default, the sound of the virtual metronome is not published to remote users. If you want remote users to hear the metronome, set publishRhythmPlayerTrack in ChannelMediaOptions to true after calling this method.
Timing
Can be called before or after joining a channel.
Parameters
- config
- Metronome configuration. See AgoraRhythmPlayerConfig.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
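The beat timing described above is simple arithmetic: each beat lasts 60000 / beatsPerMinute milliseconds, and only that much of each audio file is played per beat. A quick sketch (not an SDK call):

```typescript
// Hypothetical arithmetic for the virtual metronome's beat timing.
// With beatsPerMinute = 60, each beat lasts 1000 ms, so only the first
// 1000 ms of a longer audio file is played per beat.
function beatDurationMs(beatsPerMinute: number): number {
  return 60000 / beatsPerMinute;
}

console.log(beatDurationMs(60));  // 1000
console.log(beatDurationMs(120)); // 500
```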
createAgoraRtcEngine
Creates an IRtcEngine object.
export function createAgoraRtcEngine(): IRtcEngine {
return instance;
}
Currently, RTC SDK v4.x only supports creating one IRtcEngine object per App.
Return Values
IRtcEngine object.
createCustomVideoTrack
Creates a custom video track.
abstract createCustomVideoTrack(): number;
- Call this method to create a video track and obtain the video track ID.
- When calling joinChannel to join the channel, set customVideoTrackId in ChannelMediaOptions to the video track ID you want to publish, and set publishCustomVideoTrack to true.
- Call pushVideoFrame and specify videoTrackId as the video track ID specified in step 2 to publish the corresponding custom video source in the channel.
Return Values
- If the method call succeeds, returns the video track ID as the unique identifier of the video track.
- If the method call fails, returns 0xffffffff. See Error Codes for details and resolution suggestions.
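The steps above can be sketched as follows, with the SDK stubbed so the example runs standalone. The customVideoTrackId and publishCustomVideoTrack field names follow the ChannelMediaOptions documentation; the stubbed track-ID counter is purely illustrative.

```typescript
// Sketch of the custom-video-track flow: create a track, then carry its
// ID into ChannelMediaOptions when joining the channel.
interface ChannelMediaOptions {
  customVideoTrackId?: number;
  publishCustomVideoTrack?: boolean;
}

// Stub standing in for engine.createCustomVideoTrack().
let nextTrackId = 1;
const createCustomVideoTrack = (): number => nextTrackId++;

const trackId = createCustomVideoTrack();
const options: ChannelMediaOptions = {
  customVideoTrackId: trackId,       // the ID returned above
  publishCustomVideoTrack: true,     // publish this custom track
};
console.log(options.customVideoTrackId); // 1
```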
createDataStream
Creates a data stream.
abstract createDataStream(config: DataStreamConfig): number;
Timing
This method can be called before or after joining a channel.
Parameters
- config
- Data stream configuration. See DataStreamConfig.
Return Values
- The ID of the created data stream: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
createMediaPlayer
Creates a media player instance.
abstract createMediaPlayer(): IMediaPlayer;
Before calling other APIs under the IMediaPlayer class, you must call this method to create a media player instance. If you need multiple instances, you can call this method multiple times.
Timing
This method can be called before or after joining a channel.
Return Values
- The IMediaPlayer object, if the method call succeeds.
- An empty pointer, if the method call fails.
createVideoEffectObject
Creates an IVideoEffectObject video effect object.
abstract createVideoEffectObject(
bundlePath: string,
type?: MediaSourceType
): IVideoEffectObject;
- Since
- Available since v4.6.2.
Parameters
- bundlePath
- The path to the video effect resource bundle.
- type
- The media source type. See MediaSourceType.
Return Values
- The IVideoEffectObject object, if the method call succeeds. See IVideoEffectObject.
- An empty pointer, if the method call fails.
destroyCustomVideoTrack
Destroys the specified video track.
abstract destroyCustomVideoTrack(videoTrackId: number): number;
Parameters
- videoTrackId
- The video track ID returned by the createCustomVideoTrack method.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
destroyMediaPlayer
Destroys the media player.
abstract destroyMediaPlayer(mediaPlayer: IMediaPlayer): number;
Parameters
- mediaPlayer
- The IMediaPlayer object.
Return Values
- ≥ 0: Success. Returns the media player ID.
- < 0: Failure. See Error Codes for details and resolution suggestions.
destroyVideoEffectObject
Destroys the video effect object.
abstract destroyVideoEffectObject(
videoEffectObject: IVideoEffectObject
): number;
- Since
- Available since v4.6.2.
Parameters
- videoEffectObject
- The video effect object to destroy. See IVideoEffectObject.
Return Values
- 0: Success.
- < 0: Failure.
disableAudio
Disables the audio module.
abstract disableAudio(): number;
The audio module is enabled by default. You can call this method to disable it. Calling this method resets the entire engine and has a slower response time; to control specific audio module functions independently, use the following methods instead:
- enableLocalAudio: Whether to enable microphone capture and create a local audio stream.
- muteLocalAudioStream: Whether to publish the local audio stream.
- muteRemoteAudioStream: Whether to receive and play the remote audio stream.
- muteAllRemoteAudioStreams: Whether to receive and play all remote audio streams.
Timing
Can be called before or after joining a channel. Remains effective after leaving the channel.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
disableAudioSpectrumMonitor
Disables audio spectrum monitoring.
abstract disableAudioSpectrumMonitor(): number;
Call this method to disable audio spectrum monitoring after calling enableAudioSpectrumMonitor.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
disableVideo
Disables the video module.
abstract disableVideo(): number;
This method disables the video module.
- This method sets the internal engine to a disabled state and remains effective after leaving the channel.
- Calling this method resets the entire engine and may take longer to respond. You can use the following methods to control specific video module features as needed:
- enableLocalVideo: Whether to enable camera capture and create a local video stream.
- muteLocalVideoStream: Whether to publish the local video stream.
- muteRemoteVideoStream: Whether to receive and play the remote video stream.
- muteAllRemoteVideoStreams: Whether to receive and play all remote video streams.
Timing
- If called before joining, it enables audio-only mode.
- If called after joining, it switches from video mode to audio-only mode. You can call enableVideo to switch back to video mode.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
enableAudio
Enables the audio module.
abstract enableAudio(): number;
The audio module is enabled by default. If you have disabled it using disableAudio, you can call this method to re-enable it.
- Calling this method resets the entire engine and has a slower response time. You can use the following methods to control specific audio module functions as needed:
- enableLocalAudio: Whether to enable microphone capture and create a local audio stream.
- muteLocalAudioStream: Whether to publish the local audio stream.
- muteRemoteAudioStream: Whether to receive and play the remote audio stream.
- muteAllRemoteAudioStreams: Whether to receive and play all remote audio streams.
- When called in a channel, this method resets the settings of enableLocalAudio, muteRemoteAudioStream, and muteAllRemoteAudioStreams. Use with caution.
Timing
Can be called before or after joining a channel. Remains effective after leaving the channel.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
enableAudioSpectrumMonitor
Enables audio spectrum monitoring.
abstract enableAudioSpectrumMonitor(intervalInMS?: number): number;
If you want to obtain the audio spectrum data of local or remote users, register an audio spectrum observer and enable audio spectrum monitoring.
Parameters
- intervalInMS
- The interval (ms) at which the SDK triggers the onLocalAudioSpectrum and onRemoteAudioSpectrum callbacks. Default is 100 ms. The value must not be less than 10 ms, otherwise the method call will fail.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
- -2: Invalid parameter settings.
enableAudioVolumeIndication
Enables audio volume indication.
abstract enableAudioVolumeIndication(
interval: number,
smooth: number,
reportVad: boolean
): number;
This method allows the SDK to periodically report the volume information of the local user who is sending a stream and up to 3 remote users with the highest instantaneous volume to the app.
Timing
Can be called before or after joining a channel.
Parameters
- interval
- Sets the time interval for volume indication:
- ≤ 0: Disables the volume indication feature.
- > 0: The time interval (ms) for returning volume indications. It is recommended to set it to more than 100 ms. If it is less than 10 ms, the onAudioVolumeIndication callback may not be received.
- smooth
- Smoothness factor that specifies the sensitivity of the volume indication. The range is [0,10], and the recommended value is 3. The larger the value, the more sensitive the fluctuation; the smaller the value, the smoother the fluctuation.
- reportVad
- true: Enables local voice activity detection. When enabled, the vad parameter in the onAudioVolumeIndication callback reports whether voice is detected locally.
- false: (Default) Disables local voice activity detection. Unless the engine automatically performs local voice detection, the vad parameter in the onAudioVolumeIndication callback does not report local voice detection.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
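The interval and smooth parameter rules above can be checked before calling the method. The validator below is hypothetical, not part of the SDK; it only mirrors the documented constraints.

```typescript
// Hypothetical pre-check mirroring the enableAudioVolumeIndication
// parameter rules: interval <= 0 disables the feature (a valid call),
// an interval below 10 ms may never trigger the callback, and smooth
// must be in [0,10].
function validVolumeIndication(interval: number, smooth: number): boolean {
  if (interval > 0 && interval < 10) return false; // callback may not fire
  return smooth >= 0 && smooth <= 10;
}

console.log(validVolumeIndication(200, 3)); // true  (recommended settings)
console.log(validVolumeIndication(5, 3));   // false (interval too small)
console.log(validVolumeIndication(-1, 3));  // true  (disables the feature)
```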
enableCameraCenterStage
Enables or disables the Center Stage feature.
abstract enableCameraCenterStage(enabled: boolean): number;
Center Stage is disabled by default. You need to call this method to enable it. To disable the feature, call this method again and set enabled to false. Supported devices:
- iPad:
- 12.9-inch iPad Pro (5th generation)
- 11-inch iPad Pro (3rd generation)
- iPad (9th generation)
- iPad mini (6th generation)
- iPad Air (5th generation)
- 2020 M1 MacBook Pro 13" + iPhone 11 (using iPhone as an external camera for MacBook)
Scenario
The Center Stage feature can be widely used in scenarios such as online meetings, live shows, and online education. Hosts can enable this feature to keep themselves centered in the frame whether they move or not, ensuring a better visual presentation.
Timing
This method must be called after the camera is successfully started, i.e., after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1).
Parameters
- enabled
- Whether to enable the Center Stage feature:
- true: Enable Center Stage.
- false: Disable Center Stage.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
enableContentInspect
Enables/disables local snapshot upload.
abstract enableContentInspect(
enabled: boolean,
config: ContentInspectConfig
): number;
After local snapshot upload is enabled, the SDK captures and uploads snapshots of the video sent by the local user based on the module type and frequency you set in ContentInspectConfig. Once the snapshot is complete, the Agora server sends a callback notification to your server via HTTPS and uploads all snapshots to your specified third-party cloud storage.
- Before calling this method, make sure you have enabled the local snapshot upload service in the Agora Console.
- When using the Agora self-developed video moderation plugin (ContentInspectSupervision), you must integrate the local snapshot upload dynamic library libagora_content_inspect_extension.dll. Deleting this library prevents local snapshot upload from working properly.
Timing
Can be called before or after joining a channel.
Parameters
- enabled
- Specifies whether to enable local snapshot upload:
- true: Enable local snapshot upload.
- false: Disable local snapshot upload.
- config
- Configuration for local snapshot upload. See ContentInspectConfig.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
enableCustomAudioLocalPlayback
Sets whether to play external audio sources locally.
abstract enableCustomAudioLocalPlayback(
trackId: number,
enabled: boolean
): number;
After calling this method to enable local playback of externally captured audio sources, you can call this method again and set enabled to false to stop local playback.
You can call adjustCustomAudioPlayoutVolume to adjust the local playback volume of the custom audio capture track.
Parameters
- trackId
- Audio track ID. Set this parameter to the custom audio track ID returned by the createCustomAudioTrack method.
- enabled
- Whether to play the external audio source locally:
- true: Play locally.
- false: (Default) Do not play locally.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
enableDualStreamMode
Enables or disables the dual-stream mode and sets the low-quality video stream on the sender side.
abstract enableDualStreamMode(
enabled: boolean,
streamConfig?: SimulcastStreamConfig
): number;
- Deprecated
- Deprecated since v4.2.0. Use setDualStreamMode instead.
- High-quality stream: High resolution and high frame rate video stream.
- Low-quality stream: Low resolution and low frame rate video stream.
- This method applies to all types of streams sent by the sender, including but not limited to camera-captured video, screen sharing, and custom video streams.
- To enable dual-stream mode in multi-channel scenarios, call enableDualStreamModeEx.
- This method can be called before or after joining a channel.
Parameters
- enabled
- Whether to enable dual-stream mode:
- true: Enable dual-stream mode.
- false: (Default) Disable dual-stream mode.
- streamConfig
- Configuration for the low-quality video stream. See SimulcastStreamConfig.
Note: When mode is set to DisableSimulcastStream, setting streamConfig has no effect.
Return Values
- 0: Method call succeeds.
- < 0: Method call fails. See Error Codes for details and resolution suggestions.
enableEncryption
Enable or disable built-in encryption.
abstract enableEncryption(enabled: boolean, config: EncryptionConfig): number;
After the user leaves the channel, the SDK automatically disables encryption. To re-enable encryption, you need to call this method before the user joins the channel again.
- All users in the same channel must use the same encryption mode and key when calling this method.
- If built-in encryption is enabled, you cannot use the CDN streaming feature.
Scenario
Scenarios with high security requirements.
Timing
This method must be called before joining a channel.
Parameters
- enabled
- Whether to enable built-in encryption:
- true: Enable built-in encryption.
- false: (Default) Disable built-in encryption.
- config
- Configure the built-in encryption mode and key. See EncryptionConfig.
Return Values
- 0: Success.
- < 0: Failure
- -2: Invalid parameter. You need to re-specify the parameter.
- -4: Incorrect encryption mode or failed to load external encryption library. Check if the enum value is correct or reload the external encryption library.
- -7: SDK not initialized. You must create the IRtcEngine object and complete initialization before calling the API.
enableExtension
Enables/disables an extension.
abstract enableExtension(
provider: string,
extension: string,
enable?: boolean,
type?: MediaSourceType
): number;
- To enable multiple extensions, call this method multiple times.
- After this method is called successfully, no other extensions can be loaded.
Timing
It is recommended to call this method after joining a channel.
Parameters
- provider
- The name of the extension provider.
- extension
- The name of the extension.
- enable
- Whether to enable the extension:
- true: Enable the extension.
- false: Disable the extension.
- type
- The media source type of the extension. See MediaSourceType.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
- -3: The extension dynamic library is not loaded. Agora recommends checking whether the library is placed in the expected location or whether the library name is correct.
enableFaceDetection
Enables/disables local face detection.
abstract enableFaceDetection(enabled: boolean): number;
Timing
This method must be called after the camera is started (e.g., by calling startPreview or enableVideo).
Parameters
- enabled
- Whether to enable face detection:
- true: Enable face detection.
- false: (Default) Disable face detection.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
enableInEarMonitoring
Enables in-ear monitoring.
abstract enableInEarMonitoring(
enabled: boolean,
includeAudioFilters: EarMonitoringFilterType
): number;
Enables or disables in-ear monitoring.
Timing
You can call this method before or after joining a channel.
Parameters
- enabled
- Whether to enable in-ear monitoring:
- true: Enable in-ear monitoring.
- false: (Default) Disable in-ear monitoring.
- includeAudioFilters
- The type of audio filter for in-ear monitoring. See EarMonitoringFilterType.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
- -8: Make sure the current audio route is set to Bluetooth or headphones.
enableInstantMediaRendering
Enables accelerated rendering of audio and video frames.
abstract enableInstantMediaRendering(): number;
After successfully calling this method, the SDK enables accelerated rendering for both video and audio frames, which speeds up the time to first frame and first audio after a user joins a channel.
Scenario
Agora recommends enabling this mode for audience users in live streaming scenarios.
Timing
This method must be called before joining a channel.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
- -7: IRtcEngine not initialized before calling the method.
enableLocalAudio
Enables or disables the local audio capture.
abstract enableLocalAudio(enabled: boolean): number;
The difference between this method and muteLocalAudioStream is as follows:
- enableLocalAudio: Enables or disables local audio capture and processing. When you disable or enable local capture using enableLocalAudio, there is a brief interruption in the local playback of remote audio.
- muteLocalAudioStream: Stops or resumes sending the local audio stream without affecting the state of audio capture.
Scenario
This method does not affect receiving and playing remote audio streams. enableLocalAudio(false) is suitable for scenarios where you only want to receive remote audio without sending locally captured audio.
Timing
You can call this method before or after joining a channel. If you call it before joining, it only sets the device state. It takes effect immediately after joining the channel.
Parameters
- enabled
- true: (Default) Re-enables the local audio function, that is, starts local audio capture.
- false: Disables the local audio function, that is, stops local audio capture.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
enableLocalVideo
Enables or disables local video capture.
abstract enableLocalVideo(enabled: boolean): number;
This method disables or re-enables local video capture without affecting the reception of remote video.
After calling enableVideo, local video capture is enabled by default.
If you call enableLocalVideo(false) in a channel, it stops local video capture and also stops publishing the video stream in the channel. To re-enable it, call enableLocalVideo(true), then call updateChannelMediaOptions and set the options parameter to publish the locally captured video stream to the channel.
After successfully enabling or disabling local video capture, the remote side triggers the onRemoteVideoStateChanged callback.
- This method can be called before or after joining a channel, but the settings take effect only after joining the channel.
- This method sets the internal engine to the enabled state and remains effective after leaving the channel.
Parameters
- enabled
- Whether to enable local video capture.
- true: (Default) Enables local video capture.
- false: Disables local video capture. After disabling, remote users will not receive the local user's video stream, but the local user can still receive remote video streams. When set to false, this method does not require a local camera.
Return Values
- 0: Method call succeeds.
- < 0: Method call fails. See Error Codes for details and resolution suggestions.
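The re-enable flow described above can be sketched with a stubbed engine. The publishCameraTrack field is assumed here as the relevant ChannelMediaOptions field for republishing camera video; verify it against your SDK version.

```typescript
// Sketch: stop local video capture in-channel, then re-enable it and
// republish via updateChannelMediaOptions. The engine is stubbed; the
// publishCameraTrack field name is an assumption, not confirmed above.
interface ChannelMediaOptions { publishCameraTrack?: boolean; }
interface EngineLike {
  enableLocalVideo(enabled: boolean): number;
  updateChannelMediaOptions(options: ChannelMediaOptions): number;
}

const calls: string[] = [];
const engine: EngineLike = {
  enableLocalVideo: (enabled) => { calls.push(`enableLocalVideo:${enabled}`); return 0; },
  updateChannelMediaOptions: (o) => { calls.push(`publish:${o.publishCameraTrack}`); return 0; },
};

engine.enableLocalVideo(false); // stops capture and publishing
engine.enableLocalVideo(true);  // restarts capture...
engine.updateChannelMediaOptions({ publishCameraTrack: true }); // ...then republish
console.log(calls.join(',')); // enableLocalVideo:false,enableLocalVideo:true,publish:true
```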
enableMultiCamera
Enables or disables multi-camera capture.
abstract enableMultiCamera(
enabled: boolean,
config: CameraCapturerConfiguration
): number;
To enable multi-camera capture, follow these steps:
- Call this method to enable multi-camera capture.
- Call startPreview to start local video preview.
- Call startCameraCapture and set sourceType to specify the second camera to start capturing.
- Call joinChannelEx and set publishSecondaryCameraTrack to true to publish the video stream captured by the second camera in the channel.
To disable multi-camera capture, follow these steps:
- Call stopCameraCapture.
- Call this method and set enabled to false.
Supported devices:
- iPhone XR
- iPhone XS
- iPhone XS Max
- iPad Pro (3rd generation and later)
- If called before startPreview, the local video preview will display the images captured by both cameras.
- If called after startPreview, the SDK will first stop the current camera capture, then start both the original and second cameras. The local video preview will briefly go black and then automatically recover.
Parameters
- enabled
- Whether to enable multi-camera video capture mode:
- true: Enable multi-camera capture mode. The SDK uses multiple cameras to capture video.
- false: Disable multi-camera capture mode. The SDK uses only a single camera to capture video.
- config
- Capture configuration for the second camera. See CameraCapturerConfiguration.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
enableSoundPositionIndication
Enables/disables stereo sound for remote users.
abstract enableSoundPositionIndication(enabled: boolean): number;
To use setRemoteVoicePosition for spatial audio positioning, make sure to call this method before joining a channel to enable stereo sound for remote users.
Parameters
- enabled
- Whether to enable stereo sound for remote users:
- true: Enable stereo sound.
- false: Disable stereo sound.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
enableSpatialAudio
Enables or disables spatial audio.
abstract enableSpatialAudio(enabled: boolean): number;
After enabling spatial audio, you can call setRemoteUserSpatialAudioParams to set spatial audio parameters for remote users.
- This method can be called before or after joining a channel.
- This method depends on the spatial audio dynamic library libagora_spatial_audio_extension.dll. Removing this library will prevent the feature from working properly.
Parameters
- enabled
- Whether to enable spatial audio:
- true: Enable spatial audio.
- false: Disable spatial audio.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
enableVideo
Enables the video module.
abstract enableVideo(): number;
The video module is disabled by default and needs to be enabled by calling this method. To disable the video module later, call the disableVideo method.
- This method sets the internal engine to the enabled state and remains effective after leaving the channel.
- Calling this method resets the entire engine and has a relatively slow response time. Depending on your needs, you can use the following methods to independently control specific video module functions:
- enableLocalVideo: Whether to start camera capture and create a local video stream.
- muteLocalVideoStream: Whether to publish the local video stream.
- muteRemoteVideoStream: Whether to receive and play remote video streams.
- muteAllRemoteVideoStreams: Whether to receive and play all remote video streams.
- When called in a channel, this method resets the settings of enableLocalVideo, muteRemoteVideoStream, and muteAllRemoteVideoStreams, so use with caution.
Timing
- If called before joining a channel, it enables the video module.
- If called during an audio-only call, the call automatically switches to a video call.
Return Values
- 0: Method call succeeds.
- < 0: Method call fails. See Error Codes for details and resolution suggestions.
enableVideoImageSource
Enables or disables image placeholder streaming.
abstract enableVideoImageSource(
enable: boolean,
options: ImageTrackOptions
): number;
When publishing a video stream, you can call this method to replace the current video stream with a custom image. After enabling this feature, you can customize the placeholder image through the ImageTrackOptions parameter. After disabling the feature, remote users see the currently published video stream again.
Timing
It is recommended to call this method after joining the channel.
Parameters
- enable
- Whether to enable image placeholder streaming:
- true: Enable image placeholder streaming.
- false: (Default) Disable image placeholder streaming.
- options
- Placeholder image settings. See ImageTrackOptions.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
enableVirtualBackground
Enables/disables the virtual background.
abstract enableVirtualBackground(
enabled: boolean,
backgroundSource: VirtualBackgroundSource,
segproperty: SegmentationProperty,
type?: MediaSourceType
): number;
The virtual background feature allows replacing the local user's original background with a static image, dynamic video, blur effect, or separating the portrait from the background to create a picture-in-picture effect. Once enabled successfully, all users in the channel can see the customized background. Call this method after enableVideo or startPreview.
- Using a video as the virtual background increases memory usage over time, which may cause the app to crash. To avoid this, reduce the resolution and frame rate of the video.
- This feature requires high device performance. The SDK automatically checks the device capability when calling this method. Recommended devices include:
- Snapdragon 700 series 750G and above
- Snapdragon 800 series 835 and above
- Dimensity 700 series 720 and above
- Kirin 800 series 810 and above
- Kirin 900 series 980 and above
- Devices with A9 chip and above:
- iPhone 6S and above
- iPad Air 3rd generation and above
- iPad 5th generation and above
- iPad Pro 1st generation and above
- iPad mini 5th generation and above
- Recommended usage scenarios:
- Use a high-definition camera and ensure even lighting.
- Few objects in the frame, half-body portrait with minimal occlusion, and a background color distinct from clothing.
- This method depends on the virtual background dynamic library libagora_segmentation_extension.dll. Deleting this library will prevent the feature from working.
Parameters
- enabled
- Whether to enable virtual background:
- true: Enable virtual background.
- false: Disable virtual background.
- backgroundSource
- Custom background. See VirtualBackgroundSource. To adapt the resolution of the custom background image to the SDK's video capture resolution, the SDK scales and crops the image without distortion.
- segproperty
- Processing properties of the background image. See SegmentationProperty.
- type
- Media source type for applying the effect. See MediaSourceType.
Note: This parameter only supports the following settings:
- For video captured by the camera, use the default PrimaryCameraSource.
- For custom captured video, set this parameter to CustomVideoSource.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
- -4: Device capability does not meet the requirements for using the virtual background. Consider using a higher-performance device.
enableVoiceAITuner
Enables or disables the AI tuner feature.
abstract enableVoiceAITuner(enabled: boolean, type: VoiceAiTunerType): number;
The AI tuner feature enhances voice quality and adjusts voice tone style.
Scenario
Social and entertainment scenarios with high audio quality requirements, such as online karaoke, online podcasts, and live shows.
Timing
Can be called before or after joining the channel.
Parameters
- enabled
- Whether to enable the AI tuner feature:
- true: Enable the AI tuner feature.
- false: (Default) Disable the AI tuner feature.
- type
- AI tuner effect type. See VoiceAiTunerType.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
enableWebSdkInteroperability
Enables interoperability with the Web SDK (live broadcast only).
abstract enableWebSdkInteroperability(enabled: boolean): number;
- Deprecated
- Deprecated: This method is deprecated. The SDK automatically enables interoperability with the Web SDK, so you do not need to call this method.
This method enables or disables interoperability with the Web SDK. If there are users joining the channel via the Web SDK, make sure to call this method. Otherwise, Web users may see a black screen from the Native side. This method is applicable only in live broadcast scenarios. In communication scenarios, interoperability is enabled by default.
Parameters
- enabled
- Whether to enable interoperability with the Web SDK:
- true: Enable interoperability.
- false: (default) Disable interoperability.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
getAudioDeviceInfo
Gets the audio device information.
abstract getAudioDeviceInfo(): DeviceInfo;
After calling this method, you can get whether the audio device supports ultra-low latency capture and playback.
- This method can be called before or after joining a channel.
Return Values
- Non-null: The method call succeeds.
- Null: The method call fails. See Error Codes for details and resolution suggestions.
getAudioDeviceManager
Gets the IAudioDeviceManager object to manage audio devices.
abstract getAudioDeviceManager(): IAudioDeviceManager;
Return Values
An IAudioDeviceManager object.
getAudioMixingCurrentPosition
Gets the playback progress of the music file.
abstract getAudioMixingCurrentPosition(): number;
This method gets the current playback progress of the music file, in milliseconds.
- You need to call this method after calling startAudioMixing and receiving the onAudioMixingStateChanged(AudioMixingStatePlaying) callback.
- If you need to call getAudioMixingCurrentPosition multiple times, make sure the interval between calls is greater than 500 ms.
Return Values
- ≥ 0: Success. Returns the current playback position of the music file (ms). 0 means the music file has not started playing.
- < 0: Failure. See Error Codes for details and resolution suggestions.
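Given the 500 ms minimum interval noted above, a small throttling wrapper can cache the last value between polls. This is a sketch: makeThrottledGetter is an illustrative helper, getPosition stands in for engine.getAudioMixingCurrentPosition(), and the caller supplies the current time in milliseconds.

```typescript
// Sketch: return a cached position unless at least minIntervalMs has passed
// since the last real poll. `getPosition` is a stand-in for
// engine.getAudioMixingCurrentPosition().
function makeThrottledGetter(
  getPosition: () => number,
  minIntervalMs = 500
): (nowMs: number) => number {
  let lastCallAt = -Infinity;
  let lastValue = 0;
  return (nowMs: number): number => {
    if (nowMs - lastCallAt >= minIntervalMs) {
      lastCallAt = nowMs;
      lastValue = getPosition();
    }
    return lastValue; // cached between polls
  };
}

// Example: the underlying getter advances by 100 per real poll.
let polls = 0;
const poll = makeThrottledGetter(() => ++polls * 100);
const p0 = poll(0);   // real poll -> 100
const p1 = poll(300); // within 500 ms, cached -> 100
const p2 = poll(600); // real poll -> 200
```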
getAudioMixingDuration
Gets the total duration of the music file.
abstract getAudioMixingDuration(): number;
This method gets the total duration of the music file, in milliseconds.
Timing
You need to call this method after startAudioMixing and receiving the onAudioMixingStateChanged(AudioMixingStatePlaying) callback.
Return Values
- ≥ 0: Success. Returns the duration of the music file.
- < 0: Failure. See Error Codes for details and resolution suggestions.
getAudioMixingPlayoutVolume
Gets the local playback volume of the music file.
abstract getAudioMixingPlayoutVolume(): number;
You can call this method to get the local playback volume of the mixed music file, which helps troubleshoot volume-related issues.
Timing
You need to call this method after startAudioMixing and receiving the onAudioMixingStateChanged(AudioMixingStatePlaying) callback.
Return Values
- ≥ 0: Success. Returns the volume value, range is [0,100].
- < 0: Failure. See Error Codes for details and resolution suggestions.
getAudioMixingPublishVolume
Gets the remote playback volume of the music file.
abstract getAudioMixingPublishVolume(): number;
This API helps developers troubleshoot volume-related issues.
Timing
You need to call this method after startAudioMixing and receiving the onAudioMixingStateChanged(AudioMixingStatePlaying) callback.
Return Values
- ≥ 0: Success. Returns the volume value, range is [0,100].
- < 0: Failure. See Error Codes for details and resolution suggestions.
getAudioTrackCount
Gets the audio track index of the current music file.
abstract getAudioTrackCount(): number;
- You need to call this method after calling startAudioMixing and receiving the onAudioMixingStateChanged(AudioMixingStatePlaying) callback.
Return Values
- Returns the audio track index of the current music file if the method call succeeds.
- < 0: Failure. See Error Codes for details and resolution suggestions.
getCallId
Get the call ID.
abstract getCallId(): string;
Each time the client joins a channel, a corresponding callId is generated to identify the call session. You can call this method to obtain the callId parameter, then pass it to methods like rate and complain.
Timing
This method must be called after joining a channel.
Return Values
- Returns the current call ID if the method call succeeds.
- Returns an empty string if the method call fails.
getCameraMaxZoomFactor
Gets the maximum zoom factor supported by the camera.
abstract getCameraMaxZoomFactor(): number;
- This method must be called after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1).
Return Values
The maximum zoom factor supported by the device camera.
getConnectionState
Gets the current network connection state.
abstract getConnectionState(): ConnectionStateType;
Timing
Can be called before and after joining a channel.
Return Values
The current network connection state. See ConnectionStateType.
getCurrentMonotonicTimeInMs
Gets the current Monotonic Time from the SDK.
abstract getCurrentMonotonicTimeInMs(): number;
Monotonic Time refers to a monotonically increasing time sequence whose value increases over time. The unit is milliseconds. In scenarios such as custom video capture and custom audio capture, to ensure audio-video synchronization, Agora recommends that you call this method to get the current Monotonic Time from the SDK and pass this value to the timestamp parameter of the captured VideoFrame or AudioFrame.
Timing
Can be called before and after joining a channel.
Return Values
- ≥ 0: Success. Returns the current Monotonic Time (milliseconds) from the SDK.
- < 0: Failure. See Error Codes for details and resolution suggestions.
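To illustrate the synchronization point above, the sketch below stamps custom-captured audio and video frames from one shared clock. The local clock here is purely illustrative; in a real app the value would come from engine.getCurrentMonotonicTimeInMs().

```typescript
// Stamp each custom-captured frame with the same monotonic clock so audio
// and video can be aligned downstream.
interface TimedFrame {
  kind: 'audio' | 'video';
  timestamp: number; // milliseconds from the shared monotonic clock
}

function stampFrame(kind: 'audio' | 'video', nowMs: () => number): TimedFrame {
  return { kind, timestamp: nowMs() };
}

// Local monotonic stand-in clock, for illustration only: advances 10 ms per call.
let t = 0;
const clock = (): number => (t += 10);

const audioFrame = stampFrame('audio', clock); // timestamp: 10
const videoFrame = stampFrame('video', clock); // timestamp: 20
```

Because both frame types read the same monotonically increasing clock, their timestamps are directly comparable, which is the property the SDK relies on for audio-video synchronization.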
getEffectCurrentPosition
Gets the playback progress of the specified sound effect file.
abstract getEffectCurrentPosition(soundId: number): number;
Parameters
- soundId
- ID of the sound effect. Each sound effect has a unique ID.
Return Values
- If the method call succeeds, returns the playback progress of the specified sound effect file (in milliseconds).
- < 0: Failure. See Error Codes for details and resolution suggestions.
getEffectDuration
Gets the total duration of the specified sound effect file.
abstract getEffectDuration(filePath: string): number;
Parameters
- filePath
- File path:
- Android: The file path must include the file name and extension. Supports URL addresses for online files, URI addresses for local files, absolute paths, or paths starting with /assets/. Accessing local files via an absolute path may cause permission issues; using a URI address is recommended. For example: content://com.android.providers.media.documents/document/audio%3A14441.
- iOS: The absolute path or URL of the audio file, including the file name and extension. For example: /var/mobile/Containers/Data/audio.mp4.
Return Values
- If the method call succeeds, returns the duration of the specified sound effect file (in milliseconds).
- < 0: Failure. See Error Codes for details and resolution suggestions.
getEffectsVolume
Gets the playback volume of the sound effect file.
abstract getEffectsVolume(): number;
Volume range is 0~100. 100 (default) is the original file volume.
Return Values
- Volume of the sound effect file.
- < 0: Failure. See Error Codes for details and resolution suggestions.
getErrorDescription
Gets the warning or error description.
abstract getErrorDescription(code: number): string;
Parameters
- code
- The error code reported by the SDK.
Return Values
The specific error description.
getExtensionProperty
Gets detailed information about the plugin.
abstract getExtensionProperty(
provider: string,
extension: string,
key: string,
bufLen: number,
type?: MediaSourceType
): string;
Timing
Can be called before or after joining a channel.
Parameters
- provider
- The name of the plugin provider.
- extension
- The name of the plugin.
- key
- The key of the plugin property.
- type
- The media source type of the plugin. See MediaSourceType.
- bufLen
- The maximum length of the plugin property JSON string. Maximum value is 512 bytes.
Return Values
- If the method call succeeds, returns the plugin information.
- If the method call fails, returns an empty string.
getFaceShapeAreaOptions
Gets face shaping area options.
abstract getFaceShapeAreaOptions(
shapeArea: FaceShapeArea,
type?: MediaSourceType
): FaceShapeAreaOptions;
Call this method to get the current parameter settings of a face shaping area.
Scenario
When users open the facial area and intensity adjustment menu in the app, you can call this method to get the current options and update the UI accordingly.
Timing
Call this method after enableVideo.
Parameters
- shapeArea
- The face shaping area. See FaceShapeArea.
- type
- The media source type to apply the effect to. See MediaSourceType.
Note: This method only supports the following two settings:
- For local video captured by the camera, keep the default value PrimaryCameraSource.
- For custom captured video, set this parameter to CustomVideoSource.
Return Values
- If the method call succeeds, returns a FaceShapeAreaOptions object.
- If the method call fails, returns null.
getFaceShapeBeautyOptions
Gets face beauty effect options.
abstract getFaceShapeBeautyOptions(
type?: MediaSourceType
): FaceShapeBeautyOptions;
Call this method to get the current parameter settings of the face beauty effect.
Scenario
When users open the facial beauty style and intensity menu in the app, you can call this method to get the current options and update the UI accordingly.
Timing
Call this method after enableVideo.
Parameters
- type
- The media source type to apply the effect to. See MediaSourceType.
Note: This method only supports the following two settings:
- For local video captured by the camera, keep the default value PrimaryCameraSource.
- For custom captured video, set this parameter to CustomVideoSource.
Return Values
- If the method call succeeds, returns a FaceShapeBeautyOptions object.
- If the method call fails, returns null.
getLocalSpatialAudioEngine
Gets the ILocalSpatialAudioEngine object.
abstract getLocalSpatialAudioEngine(): ILocalSpatialAudioEngine;
Return Values
An ILocalSpatialAudioEngine object.
getMediaEngine
Gets the IMediaEngine object.
abstract getMediaEngine(): IMediaEngine;
Return Values
IMediaEngine object.
getNativeHandle
Gets the C++ handle of the Native SDK.
abstract getNativeHandle(): number;
This method gets the C++ handle of the Native SDK engine, used in special scenarios such as registering audio and video callbacks.
Return Values
The Native handle of the SDK engine.
getNetworkType
Gets the local network connection type.
abstract getNetworkType(): number;
You can call this method at any time to get the current network type in use.
Return Values
- ≥ 0: Success. Returns the local network connection type.
- 0: Network disconnected.
- 1: LAN.
- 2: Wi-Fi (including hotspot).
- 3: 2G mobile network.
- 4: 3G mobile network.
- 5: 4G mobile network.
- 6: 5G mobile network.
- < 0: Failure. Returns an error code.
- -1: Unknown network connection type.
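A small helper can translate the return codes listed above into readable labels. This is a sketch; networkTypeLabel and its label strings are illustrative, not part of the SDK.

```typescript
// Map getNetworkType return codes (documented above) to readable labels.
function networkTypeLabel(code: number): string {
  const labels: Record<number, string> = {
    0: 'Disconnected',
    1: 'LAN',
    2: 'Wi-Fi',
    3: '2G mobile network',
    4: '3G mobile network',
    5: '4G mobile network',
    6: '5G mobile network',
  };
  // -1 (unknown) and any unexpected code fall through to 'Unknown'.
  return labels[code] ?? 'Unknown';
}
```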
getNtpWallTimeInMs
Gets the current NTP (Network Time Protocol) time.
abstract getNtpWallTimeInMs(): number;
In real-time chorus scenarios, especially when downlink inconsistencies occur at different receiving ends due to network issues, you can call this method to get the current NTP time as the reference time to align lyrics and music across multiple receivers for chorus synchronization.
Return Values
The current NTP time as a Unix timestamp (milliseconds).
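For the chorus scenario above, if every receiver knows the NTP time at which playback started, the position to align lyrics and music to is simply the elapsed NTP time, clamped to the track length. This is a sketch with hypothetical function and parameter names.

```typescript
// Given a shared NTP start time, compute the playback position (ms) every
// receiver should seek to, clamped to [0, durationMs].
function chorusPositionMs(
  startNtpMs: number,  // NTP time when the song started (agreed by all clients)
  nowNtpMs: number,    // current NTP time, e.g. from getNtpWallTimeInMs()
  durationMs: number   // total length of the track
): number {
  const elapsed = nowNtpMs - startNtpMs;
  return Math.max(0, Math.min(elapsed, durationMs));
}
```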
getUserInfoByUid
Gets user information by UID.
abstract getUserInfoByUid(uid: number): UserInfo;
After a remote user joins the channel, the SDK obtains the UID and User Account of the remote user, then caches a mapping table containing the UID and User Account of the remote user, and triggers the onUserInfoUpdated callback locally. After receiving the callback, call this method and pass in the UID to get the UserInfo object containing the specified user's User Account.
Timing
Call after receiving the onUserInfoUpdated callback.
Parameters
- uid
- User ID.
Return Values
- The UserInfo object, if the method call succeeds.
- null, if the method call fails.
getUserInfoByUserAccount
Gets user information by User Account.
abstract getUserInfoByUserAccount(userAccount: string): UserInfo;
After a remote user joins the channel, the SDK obtains the UID and User Account of the remote user, then caches a mapping table containing the UID and User Account of the remote user, and triggers the onUserInfoUpdated callback locally. After receiving the callback, call this method and pass in the User Account to get the UserInfo object containing the specified user's UID.
Timing
Call after receiving the onUserInfoUpdated callback.
Parameters
- userAccount
- User Account.
Return Values
- The UserInfo object, if the method call succeeds.
- null, if the method call fails.
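The cached mapping table described above can be mirrored locally. The sketch below is a hypothetical directory updated from onUserInfoUpdated, supporting lookup in both directions in the same spirit as getUserInfoByUid and getUserInfoByUserAccount.

```typescript
// Hypothetical local mirror of the SDK's uid <-> User Account mapping table.
interface UserInfo {
  uid: number;
  userAccount: string;
}

class UserDirectory {
  private byUid = new Map<number, UserInfo>();
  private byAccount = new Map<string, UserInfo>();

  // Call this from the onUserInfoUpdated callback.
  onUserInfoUpdated(info: UserInfo): void {
    this.byUid.set(info.uid, info);
    this.byAccount.set(info.userAccount, info);
  }

  getUserInfoByUid(uid: number): UserInfo | null {
    return this.byUid.get(uid) ?? null;
  }

  getUserInfoByUserAccount(userAccount: string): UserInfo | null {
    return this.byAccount.get(userAccount) ?? null;
  }
}

const dir = new UserDirectory();
dir.onUserInfoUpdated({ uid: 42, userAccount: 'alice' });
```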
getVersion
getVideoDeviceManager
Gets the IVideoDeviceManager object to manage video devices.
abstract getVideoDeviceManager(): IVideoDeviceManager;
Return Values
An IVideoDeviceManager object.
getVolumeOfEffect
Gets the playback volume of the specified audio effect file.
abstract getVolumeOfEffect(soundId: number): number;
Parameters
- soundId
- The ID of the audio effect file.
Return Values
- ≥ 0: The method call succeeds and returns the playback volume. The volume range is [0,100], where 100 is the original volume.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
initialize
Creates and initializes IRtcEngine.
abstract initialize(context: RtcEngineContext): number;
Timing
Make sure to call createAgoraRtcEngine and initialize to create and initialize IRtcEngine before calling other APIs.
Parameters
- context
- Configuration for the IRtcEngine instance. See RtcEngineContext.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails.
- -1: General error (not specifically classified).
- -2: Invalid parameter set.
- -7: SDK initialization failed.
- -22: Resource allocation failed. This error is returned when the App uses too many resources or system resources are exhausted.
- -101: Invalid App ID.
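A wrapper can surface the failure codes above as exceptions. This is a sketch: checkedInitialize and the message strings are illustrative, and the initialize argument stands in for engine.initialize(context).

```typescript
// Messages for the initialize() failure codes documented above.
const INIT_ERRORS: Record<number, string> = {
  [-1]: 'General error',
  [-2]: 'Invalid parameter set',
  [-7]: 'SDK initialization failed',
  [-22]: 'Resource allocation failed',
  [-101]: 'Invalid App ID',
};

// Run initialize() and throw a descriptive error on any negative return.
function checkedInitialize(initialize: () => number): void {
  const ret = initialize();
  if (ret < 0) {
    throw new Error(INIT_ERRORS[ret] ?? `Unknown error ${ret}`);
  }
}
```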
isCameraAutoExposureFaceModeSupported
Checks whether the device supports auto exposure.
abstract isCameraAutoExposureFaceModeSupported(): boolean;
Return Values
- true: The device supports auto exposure.
- false: The device does not support auto exposure.
isCameraAutoFocusFaceModeSupported
Checks whether the device supports face auto-focus.
abstract isCameraAutoFocusFaceModeSupported(): boolean;
- This method must be called after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1).
Return Values
- true: The device supports face auto-focus.
- false: The device does not support face auto-focus.
isCameraCenterStageSupported
Checks whether the camera supports Center Stage.
abstract isCameraCenterStageSupported(): boolean;
Before calling enableCameraCenterStage to enable the Center Stage feature, you are advised to call this method to check whether the current device supports Center Stage.
Timing
You must call this method after the camera is successfully started, that is, after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1).
Return Values
- true: The current camera supports Center Stage.
- false: The current camera does not support Center Stage.
isCameraExposurePositionSupported
Checks whether the device supports manual exposure.
abstract isCameraExposurePositionSupported(): boolean;
- This method must be called after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1).
Return Values
- true: The device supports manual exposure.
- false: The device does not support manual exposure.
isCameraExposureSupported
Checks whether the current camera supports exposure adjustment.
abstract isCameraExposureSupported(): boolean;
- You must call this method after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1).
- It is recommended to call this method before using setCameraExposureFactor to adjust the exposure factor, to check whether the current camera supports exposure adjustment.
- This method checks whether the camera currently in use, that is, the camera specified by setCameraCapturerConfiguration, supports exposure adjustment.
Return Values
- true: The current camera supports exposure adjustment.
- false: The current camera does not support exposure adjustment.
isCameraFaceDetectSupported
Checks whether the device camera supports face detection.
abstract isCameraFaceDetectSupported(): boolean;
- This method is only applicable to Android and iOS.
- You must call this method after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1).
Return Values
- true: The device camera supports face detection.
- false: The device camera does not support face detection.
isCameraFocusSupported
Checks whether the device supports manual focus.
abstract isCameraFocusSupported(): boolean;
- You must call this method after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1).
Return Values
- true: The device supports manual focus.
- false: The device does not support manual focus.
isCameraTorchSupported
Checks whether the device supports keeping the flashlight on.
abstract isCameraTorchSupported(): boolean;
- You must call this method after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1).
- In general, the app uses the front camera by default. If the front camera does not support keeping the flashlight on, this method returns false. To check whether the rear camera supports this feature, call switchCamera to switch cameras first, then call this method.
- On iPads running iOS 15, even if isCameraTorchSupported returns true, system limitations may still prevent setCameraTorchOn from turning on the flashlight.
Return Values
- true: The device supports keeping the flashlight on.
- false: The device does not support keeping the flashlight on.
isCameraZoomSupported
Checks whether the device supports camera zoom.
abstract isCameraZoomSupported(): boolean;
Timing
You must call this method after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1).
Return Values
- true: The device supports camera zoom.
- false: The device does not support camera zoom.
isFeatureAvailableOnDevice
Checks whether the device supports the specified advanced feature.
abstract isFeatureAvailableOnDevice(type: FeatureType): boolean;
Checks whether the current device meets the requirements for advanced features such as virtual background and beauty effects.
Scenario
Before using advanced audio and video features, you can check whether the current device supports them to avoid performance degradation or feature unavailability on low-end devices. Based on the return value of this method, you can decide whether to show or enable the corresponding feature buttons, or prompt users with appropriate messages when the device capability is insufficient.
Parameters
- type
- The type of advanced feature. See FeatureType.
Return Values
- true: The device supports the specified advanced feature.
- false: The device does not support the specified advanced feature.
isSpeakerphoneEnabled
Checks whether the speakerphone is enabled.
abstract isSpeakerphoneEnabled(): boolean;
Timing
This method can be called before or after joining a channel.
Return Values
- true: The speakerphone is enabled, and audio is routed to the speaker.
- false: The speakerphone is not enabled, and audio is routed to a non-speaker device (earpiece, headset, etc.).
joinChannel
Sets media options and joins a channel.
abstract joinChannel(
token: string,
channelId: string,
uid: number,
options: ChannelMediaOptions
): number;
This method allows you to set media options when joining a channel, such as whether to publish audio and video streams in the channel, and whether to automatically subscribe to all remote audio and video streams. By default, the user subscribes to the audio and video streams of all other users in the channel, which generates usage and affects billing. To unsubscribe, set the options parameter accordingly or call the corresponding mute methods.
- This method only supports joining one channel at a time.
- Apps with different App IDs cannot communicate with each other.
- Before joining a channel, make sure the App ID used to generate the Token is the same as the one used in the initialize method to initialize the engine, otherwise joining the channel with the Token will fail.
Timing
You must call this method after initialize.
Parameters
- token
- A dynamic key generated on the server for authentication. See Token Authentication.
Note:
- (Recommended) If your project enables the security mode, i.e., uses APP ID + Token for authentication, this parameter is required.
- If your project only enables debug mode, i.e., uses only the APP ID for authentication, you can join the channel without a Token. The user will automatically leave the channel 24 hours after successfully joining.
- If you need to join multiple channels simultaneously or switch channels frequently, Agora recommends using a wildcard Token to avoid requesting a new Token from the server each time. See Use Wildcard Token.
- channelId
- Channel name. This parameter identifies the channel for real-time audio and video interaction. Users with the same App ID and channel name will join the same channel. This parameter is a string of up to 64 bytes. Supported character set (89 characters total):
- 26 lowercase English letters a~z
- 26 uppercase English letters A~Z
- 10 digits 0~9
- Space
- "!", "#", "$", "%", "&", "(", ")", "+", "-", ":", ";", "<", "=", ".", ">", "?", "@", "[", "]", "^", "_", "{", "}", "|", "~", ","
- uid
- User ID. This parameter identifies the user in the real-time audio and video interaction channel. You must set and manage the user ID yourself and ensure it is unique within the same channel. This parameter is a 32-bit unsigned integer. Recommended range: 1 to 2^32-1. If not specified (i.e., set to 0), the SDK automatically assigns one and returns it in the onJoinChannelSuccess callback. The application must remember and maintain this value; the SDK does not maintain it.
- options
- Channel media options. See ChannelMediaOptions.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
- -2: Invalid parameter. For example, an invalid Token is used, uid is not an integer, or a ChannelMediaOptions member is invalid. You need to provide valid parameters and rejoin the channel.
- -3: IRtcEngine object initialization failed. You need to reinitialize the IRtcEngine object.
- -7: The IRtcEngine object is not initialized. You must initialize the IRtcEngine object before calling this method.
- -8: Internal state error of the IRtcEngine object. Possible reason: startEchoTest was called to start an echo test, but stopEchoTest was not called before calling this method. You must call stopEchoTest before this method.
- -17: The join channel request is rejected. Possible reason: the user is already in the channel. Use the onConnectionStateChanged callback to check whether the user is in the channel, and do not call this method again until you receive the ConnectionStateDisconnected (1) state.
- -102: Invalid channel name. You must provide a valid channel name in channelId and rejoin the channel.
- -121: Invalid user ID. You must provide a valid user ID in uid and rejoin the channel.
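Some of the channelId and uid constraints above can be pre-checked on the client before calling joinChannel. This is a sketch: the helper names are illustrative; the space character is included because the documented total of 89 characters implies it (as in the User Account character list), and the byte-length check is valid only because the charset is ASCII-only.

```typescript
// The documented channel-name character set: 26 lowercase + 26 uppercase +
// 10 digits + space + 26 punctuation marks = 89 characters.
const CHANNEL_CHARSET =
  'abcdefghijklmnopqrstuvwxyz' +
  'ABCDEFGHIJKLMNOPQRSTUVWXYZ' +
  '0123456789' +
  ' ' +
  '!#$%&()+-:;<=.>?@[]^_{}|~,';

function isValidChannelId(channelId: string): boolean {
  // For this ASCII-only charset, string length equals byte length.
  if (channelId.length === 0 || channelId.length > 64) return false;
  return [...channelId].every((ch) => CHANNEL_CHARSET.includes(ch));
}

function isValidUid(uid: number): boolean {
  // 32-bit unsigned integer; 0 means "let the SDK assign one".
  return Number.isInteger(uid) && uid >= 0 && uid <= 2 ** 32 - 1;
}
```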
joinChannelWithUserAccount
Joins a channel using a User Account and Token, and sets channel media options.
abstract joinChannelWithUserAccount(
token: string,
channelId: string,
userAccount: string,
options?: ChannelMediaOptions
): number;
If you have not called registerLocalUserAccount to register a User Account before calling this method, the SDK automatically creates one for you. Calling registerLocalUserAccount before this method can reduce the time it takes to join the channel. After successfully joining a channel, the user subscribes to all other users' audio and video streams by default, which results in usage and affects billing. If you want to unsubscribe, you can do so by calling the corresponding mute methods.
- This method only supports joining one channel at a time.
- Apps with different App IDs cannot communicate with each other.
- Before joining a channel, make sure the App ID used to generate the Token is the same as the one used in the initialize method to initialize the engine, otherwise joining the channel with the Token will fail.
Timing
You must call this method after initialize.
Parameters
- token
- A dynamic key generated on your server for authentication. See Use Token Authentication.
Note:
- (Recommended) If your project enables the security mode, i.e., uses APP ID + Token for authentication, this parameter is required.
- If your project only enables debug mode, i.e., uses only the APP ID for authentication, you can join the channel without a Token. The user will automatically leave the channel 24 hours after successfully joining.
- If you need to join multiple channels simultaneously or switch channels frequently, Agora recommends using a wildcard Token to avoid requesting a new Token from the server each time. See Use Wildcard Token.
- userAccount
- User Account. This parameter identifies the user in the real-time audio and video interaction channel. You must set and manage the User Account yourself and ensure it is unique within the same channel. This parameter is required, must not exceed 255 bytes, and cannot be null. Supported character set (89 characters total):
- 26 lowercase English letters a-z
- 26 uppercase English letters A-Z
- 10 digits 0-9
- Space
- "!", "#", "$", "%", "&", "(", ")", "+", "-", ":", ";", "<", "=", ".", ">", "?", "@", "[", "]", "^", "_", "{", "}", "|", "~", ","
- options
- Channel media options. See ChannelMediaOptions.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
- -2: Invalid parameter. For example, an invalid Token is used, uid is not an integer, or a ChannelMediaOptions member is invalid. You need to provide valid parameters and rejoin the channel.
- -3: IRtcEngine object initialization failed. You need to reinitialize the IRtcEngine object.
- -7: IRtcEngine object is not initialized. You must initialize the IRtcEngine object before calling this method.
- -8: Internal state error of the IRtcEngine object. Possible reason: startEchoTest was called to start an echo test, but stopEchoTest was not called before calling this method. You must call stopEchoTest before this method.
- -17: Join channel request is rejected. Possible reason: the user is already in the channel. Use the onConnectionStateChanged callback to check if the user is in the channel. Do not call this method again unless you receive the ConnectionStateDisconnected(1) state.
- -102: Invalid channel name. You must provide a valid channel name in channelId and rejoin the channel.
- -121: Invalid user ID. You must provide a valid user ID in uid and rejoin the channel.
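The userAccount constraints above (required, at most 255 bytes, drawn from the 89-character set) can be checked client-side before joining. A minimal validator sketch; isValidUserAccount is a hypothetical app-side helper, not an SDK API:

```typescript
// Validates a User Account against the documented rules: non-empty,
// at most 255 bytes, and limited to the 89 supported characters
// (a-z, A-Z, 0-9, space, and the listed punctuation).
const USER_ACCOUNT_CHARS = /^[a-zA-Z0-9 !#$%&()+\-:;<=.>?@[\]^_{}|~,]+$/;

function isValidUserAccount(account: string): boolean {
  if (account.length === 0) return false;
  // The limit is in bytes, not characters; measure the UTF-8 encoding.
  const bytes = new TextEncoder().encode(account).length;
  if (bytes > 255) return false;
  return USER_ACCOUNT_CHARS.test(account);
}
```

Running this check before joinChannelWithUserAccount can surface a bad account before the SDK returns a -2 error.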
joinChannelWithUserAccountEx
Joins a channel using a User Account and Token, and sets channel media options.
abstract joinChannelWithUserAccountEx(
token: string,
channelId: string,
userAccount: string,
options: ChannelMediaOptions
): number;
Before calling this method, if you have not called registerLocalUserAccount to register a User Account, the SDK automatically creates one for you when you join the channel. Calling registerLocalUserAccount before this method shortens the time needed to join the channel.
After successfully joining a channel, the user subscribes to all remote users' audio and video streams by default, which incurs usage and affects billing. To unsubscribe, you can set the options parameter or call the corresponding mute methods.
- This method only supports joining one channel at a time.
- Apps with different App IDs cannot communicate with each other.
- Before joining a channel, ensure the App ID used to generate the Token is the same as the one used to initialize the engine with initialize, otherwise joining the channel with Token will fail.
Timing
This method must be called after initialize.
Parameters
- token
- A dynamic key generated on your server for authentication. See Token Authentication.
Note:
- (Recommended) If your project has enabled the security mode using APP ID + Token for authentication, this parameter is required.
- If your project is in debug mode using only APP ID for authentication, you can join the channel without a Token. You will automatically leave the channel 24 hours after joining.
- If you need to join multiple channels or switch frequently, Agora recommends using a wildcard Token to avoid requesting a new Token from your server each time. See Using Wildcard Token.
- userAccount
- The user's User Account. This parameter identifies the user in the real-time audio and video channel. You must set and manage the User Account yourself and ensure that each user in the same channel has a unique User Account. This parameter is required and must not exceed 255 bytes or be null. Supported character set (89 characters total):
- 26 lowercase letters a-z
- 26 uppercase letters A-Z
- 10 digits 0-9
- Space
- "!" "#" "$" "%" "&" "(" ")" "+" "-" ":" ";" "<" "=" "." ">" "?" "@" "[" "]" "^" "_" "{" "}" "|" "~" ","
- options
- Channel media options. See ChannelMediaOptions.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
- -2: Invalid parameter. For example, the Token is invalid, the uid is not an integer, or ChannelMediaOptions contains invalid values. Provide valid parameters and rejoin the channel.
- -3: IRtcEngine initialization failed. Reinitialize the IRtcEngine object.
- -7: IRtcEngine is not initialized. Initialize IRtcEngine before calling this method.
- -8: Internal state error of IRtcEngine. Possible reason: startEchoTest was called but stopEchoTest was not called before joining the channel. Call stopEchoTest before this method.
- -17: Join channel request rejected. Possible reason: the user is already in the channel. Use the onConnectionStateChanged callback to check if the user is in the channel. Do not call this method again unless the state is ConnectionStateDisconnected(1).
- -102: Invalid channel name. Provide a valid channelId and rejoin the channel.
- -121: Invalid user ID. Provide a valid uid and rejoin the channel.
leaveChannel
Sets channel options and leaves the channel.
abstract leaveChannel(options?: LeaveChannelOptions): number;
After calling this method, the SDK stops all audio and video interactions, leaves the current channel, and releases all session-related resources. You must call this method after successfully joining a channel to end the call; otherwise, you cannot start a new call. If you have joined multiple channels using joinChannelEx, calling this method leaves all joined channels.
Timing
Call this method after joining a channel.
Parameters
- options
- Options for leaving the channel. See LeaveChannelOptions.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
loadExtensionProvider
Loads a plugin.
abstract loadExtensionProvider(
path: string,
unloadAfterUse?: boolean
): number;
This method adds external SDK plugins (such as marketplace plugins and SDK extension plugins) to the SDK.
Timing
Call this method immediately after initializing IRtcEngine.
Parameters
- path
- The path and name of the plugin dynamic library. For example: /library/libagora_segmentation_extension.dll.
- unloadAfterUse
- Whether to automatically unload the plugin after use:
- true: Automatically unloads the plugin when IRtcEngine is destroyed.
- false: (Recommended) Does not automatically unload the plugin until the process exits.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
muteAllRemoteAudioStreams
Stops or resumes subscribing to all remote users' audio streams.
abstract muteAllRemoteAudioStreams(mute: boolean): number;
After successfully calling this method, the local user stops or resumes subscribing to all remote users' audio streams, including those who join the channel after the method is called.
If you do not want to subscribe to remote audio streams from the start, you can instead set autoSubscribeAudio to false in ChannelMediaOptions when calling joinChannel.
If enableAudio or disableAudio is called after this method, the latter will take effect.
Timing
This method must be called after joining a channel.
Parameters
- mute
- Whether to stop subscribing to all remote users' audio streams:
- true: Stop subscribing to all remote users' audio streams.
- false: (Default) Subscribe to all remote users' audio streams.
Return Values
- 0: Method call succeeds.
- < 0: Method call fails. See Error Codes for details and resolution suggestions.
muteAllRemoteVideoStreams
Stops or resumes subscribing to all remote users' video streams.
abstract muteAllRemoteVideoStreams(mute: boolean): number;
After successfully calling this method, the local user stops or resumes subscribing to all remote users' video streams, including those who join the channel after the method is called.
If you do not want to subscribe to remote video streams from the start, you can instead set autoSubscribeVideo to false in ChannelMediaOptions when calling joinChannel.
If enableVideo or disableVideo is called after this method, the latter will take effect.
Timing
This method must be called after joining a channel.
Parameters
- mute
- Whether to stop subscribing to all remote users' video streams.
- true: Stop subscribing to all users' video streams.
- false: (Default) Subscribe to all users' video streams.
Return Values
- 0: Method call succeeds.
- < 0: Method call fails. See Error Codes for details and resolution suggestions.
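A common pattern combines these mute-all methods with the per-user mute methods described below: mute everyone, then selectively re-subscribe. The sketch works against a minimal local stand-in for the engine surface (the interface below is an assumption for illustration, not the SDK type), and whether per-user calls override a prior mute-all can depend on SDK version, so verify against your SDK's behavior:

```typescript
// Minimal stand-in for the subscription-related slice of IRtcEngine.
interface SubscriptionControls {
  muteAllRemoteAudioStreams(mute: boolean): number;
  muteRemoteAudioStream(uid: number, mute: boolean): number;
}

// Mute everyone, then re-subscribe only to the given speakers.
function listenOnlyTo(engine: SubscriptionControls, speakerUids: number[]): void {
  engine.muteAllRemoteAudioStreams(true); // stop subscribing to all remote audio
  for (const uid of speakerUids) {
    engine.muteRemoteAudioStream(uid, false); // re-subscribe per speaker
  }
}
```
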
muteLocalAudioStream
Stops or resumes publishing the local audio stream.
abstract muteLocalAudioStream(mute: boolean): number;
This method controls whether to publish the locally captured audio stream. Not publishing the local audio stream does not disable the audio capturing device, so it does not affect the audio capture status.
Timing
Can be called before or after joining a channel.
Parameters
- mute
- Whether to stop publishing the local audio stream.
- true: Stop publishing.
- false: (Default) Publish.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
muteLocalVideoStream
Stops or resumes publishing the local video stream.
abstract muteLocalVideoStream(mute: boolean): number;
This method controls whether to publish the locally captured video stream. Not publishing the local video stream does not disable the video capturing device, so it does not affect the video capture status.
Compared to calling enableLocalVideo(false) to disable video capture and thus stop publishing the local video stream, this method responds faster.
Timing
Can be called before or after joining a channel.
Parameters
- mute
- Whether to stop sending the local video stream.
- true: Stop sending the local video stream.
- false: (Default) Send the local video stream.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
muteRecordingSignal
Mutes or unmutes the recording signal.
abstract muteRecordingSignal(mute: boolean): number;
This method mutes or unmutes the audio capture (recording) signal. If you have called adjustRecordingSignalVolume to adjust the recording volume, calling this method:
- With mute set to false: records at the adjusted volume.
- With mute set to true: mutes the audio capture signal.
Timing
Can be called before or after joining the channel.
Parameters
- mute
- Whether to mute the recording signal:
- true: Mute.
- false: (Default) Keep the original volume.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
muteRemoteAudioStream
Stops or resumes subscribing to the specified remote user's audio stream.
abstract muteRemoteAudioStream(uid: number, mute: boolean): number;
Timing
This method must be called after joining a channel.
Parameters
- uid
- The user ID of the specified user.
- mute
- Whether to stop subscribing to the specified remote user's audio stream.
- true: Stop subscribing to the specified user's audio stream.
- false: (Default) Subscribe to the specified user's audio stream.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
muteRemoteVideoStream
Stops or resumes subscribing to the video stream of a specified remote user.
abstract muteRemoteVideoStream(uid: number, mute: boolean): number;
Timing
You must call this method after joining a channel.
Parameters
- uid
- The user ID of the specified remote user.
- mute
- Whether to stop subscribing to the video stream of the specified remote user.
- true: Stop subscribing to the video stream of the specified user.
- false: (Default) Subscribe to the video stream of the specified user.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
pauseAllChannelMediaRelay
Pauses media stream forwarding to all destination channels.
abstract pauseAllChannelMediaRelay(): number;
After starting media stream forwarding across channels, you can call this method to pause forwarding to all channels. To resume forwarding, call the resumeAllChannelMediaRelay method.
Return Values
- 0: The method call was successful.
- < 0: The method call failed. See Error Codes for details and resolution suggestions.
- -5: This method call was rejected. No ongoing cross-channel media stream forwarding exists.
pauseAllEffects
Pauses playback of all audio effect files.
abstract pauseAllEffects(): number;
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
pauseAudioMixing
Pauses the playback of a music file.
abstract pauseAudioMixing(): number;
After you call the startAudioMixing method to play a music file, call this method to pause the playback. To stop the playback, call stopAudioMixing.
Timing
You must call this method after joining a channel.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
pauseEffect
Pauses playback of the specified audio effect file.
abstract pauseEffect(soundId: number): number;
Parameters
- soundId
- The ID of the audio effect. Each audio effect has a unique ID.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
playAllEffects
Plays all audio effect files.
abstract playAllEffects(
loopCount: number,
pitch: number,
pan: number,
gain: number,
publish?: boolean
): number;
After calling preloadEffect multiple times to preload multiple audio effect files, you can call this method to play all preloaded audio effect files.
Parameters
- loopCount
- The number of times the audio effect loops:
- -1: Loops indefinitely until stopEffect or stopAllEffects is called.
- 0: Plays the audio effect once.
- 1: Plays the audio effect twice.
- pitch
- The pitch of the audio effect. The range is [0.5,2.0]. The default value is 1.0, which represents the original pitch. The smaller the value, the lower the pitch.
- pan
- The spatial position of the audio effect. The range is [-1.0,1.0]:
- -1.0: The audio effect appears on the left.
- 0: The audio effect appears in the center.
- 1.0: The audio effect appears on the right.
- gain
- The volume of the audio effect. The range is [0,100]. 100 is the default value, representing the original volume. The smaller the value, the lower the volume.
- publish
- Whether to publish the audio effect to remote users:
- true: Publishes the audio effect to remote users. Both local and remote users can hear it.
- false: (Default) Does not publish the audio effect to remote users. Only local users can hear it.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
playEffect
Plays the specified local or online audio effect file.
abstract playEffect(
soundId: number,
filePath: string,
loopCount: number,
pitch: number,
pan: number,
gain: number,
publish?: boolean,
startPos?: number
): number;
You can call this method multiple times with different soundId and filePath values to play multiple audio effect files simultaneously. For optimal user experience, it is recommended not to play more than 3 audio effects at the same time.
Timing
This method can be called before or after joining a channel.
Parameters
- soundId
- The ID of the audio effect. Each audio effect has a unique ID.
Note: If you have preloaded the audio effect using preloadEffect, make sure this parameter matches the soundId set in preloadEffect.
- filePath
- The path of the file to play. Supports URL of online files and absolute path of local files. Must include the file name and extension. Supported audio formats include MP3, AAC, M4A, MP4, WAV, 3GP, etc.
Note: If you have preloaded the audio effect using preloadEffect, make sure this parameter matches the filePath set in preloadEffect.
- loopCount
- The number of times the audio effect loops.
- ≥ 0: Number of loops. For example, 1 means loop once, i.e., play twice in total.
- -1: Loop indefinitely.
- pitch
- The pitch of the audio effect. The range is [0.5,2.0]. The default value is 1.0, which represents the original pitch. The smaller the value, the lower the pitch.
- pan
- The spatial position of the audio effect. The range is [-1.0,1.0], for example:
- -1.0: The audio effect appears on the left
- 0.0: The audio effect appears in the center
- 1.0: The audio effect appears on the right
- gain
- The volume of the audio effect. The range is [0.0,100.0]. The default value is 100.0, which represents the original volume. The smaller the value, the lower the volume.
- publish
- Whether to publish the audio effect to remote users:
- true: Publishes the audio effect to remote users. Both local and remote users can hear it.
- false: Does not publish the audio effect to remote users. Only local users can hear it.
- startPos
- The playback position of the audio effect file in milliseconds.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
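The pitch, pan, and gain arguments have documented valid ranges ([0.5, 2.0], [-1.0, 1.0], and [0, 100]); clamping them before the call avoids passing out-of-range values. A hypothetical app-side helper, not an SDK API:

```typescript
// Clamps playEffect tuning arguments to their documented ranges:
// pitch [0.5, 2.0], pan [-1.0, 1.0], gain [0, 100].
function clampEffectParams(pitch: number, pan: number, gain: number) {
  const clamp = (v: number, lo: number, hi: number) => Math.min(hi, Math.max(lo, v));
  return {
    pitch: clamp(pitch, 0.5, 2.0),
    pan: clamp(pan, -1.0, 1.0),
    gain: clamp(gain, 0, 100),
  };
}
```
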
preloadChannel
Preloads a channel using token, channelId, and uid.
abstract preloadChannel(
token: string,
channelId: string,
uid: number
): number;
Calling this method reduces the time it takes for a viewer to join a channel when frequently switching channels, thereby shortening the time to hear the host's first audio frame and see the first video frame, improving the viewer's video experience. If the channel has already been preloaded successfully, and the viewer leaves and rejoins the channel, as long as the Token used during preload is still valid, there is no need to preload again.
- When calling this method, ensure the user role is set to audience and the audio scenario is not AudioScenarioChorus, otherwise preload will not take effect.
- Ensure the channel name, user ID, and Token passed during preload match those used when joining the channel, otherwise preload will not take effect.
- A single IRtcEngine instance supports up to 20 preloaded channels. If exceeded, only the latest 20 preloaded channels are effective.
Timing
To improve the user experience of preloading channels, Agora recommends calling this method as early as possible after confirming the channel name and user information, and before joining the channel.
Parameters
- token
- A dynamic key generated on your server for authentication. See Token Authentication.
When the Token expires, depending on the number of preloaded channels, you can provide a new Token in different ways:
- For one preloaded channel: call this method again with the new Token.
- For multiple preloaded channels:
- If using a wildcard Token, call updatePreloadChannelToken to update the Token for all preloaded channels. When generating a wildcard Token, the user ID must not be 0. See Using Wildcard Token.
- If using different Tokens: call this method with the user ID, channel name, and the updated Token.
- channelId
- The name of the channel to preload. This parameter identifies the channel for real-time audio and video interaction. Users with the same App ID and channel name join the same channel.
This parameter must be a string no longer than 64 bytes. Supported character set (89 characters total):
- 26 lowercase letters a~z
- 26 uppercase letters A~Z
- 10 digits 0~9
- "!" "#" "$" "%" "&" "(" ")" "+" "-" ":" ";" "<" "=" "." ">" "?" "@" "[" "]" "^" "_" "{" "}" "|" "~" ","
- uid
- User ID. This parameter identifies the user in the real-time audio and video channel. You must set and manage the user ID yourself and ensure uniqueness within the channel. This parameter is a 32-bit unsigned integer. Recommended range: 1 to 2^32-1. If not specified (i.e., set to 0), the SDK automatically assigns one and returns it in the onJoinChannelSuccess callback. The application must store and manage this value; the SDK does not manage it.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
- -7: IRtcEngine is not initialized. Initialize IRtcEngine before calling this method.
- -102: Invalid channel name. Provide a valid channel name and rejoin the channel.
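Because only the latest 20 preloaded channels stay effective per engine instance, an app that preloads aggressively may want to track what it has preloaded. A sketch of such bookkeeping; the tracker is a hypothetical app-side utility, and preloadChannel is called through a minimal local stand-in interface rather than the real SDK type:

```typescript
// Minimal stand-in for the preload slice of IRtcEngine.
interface PreloadCapable {
  preloadChannel(token: string, channelId: string, uid: number): number;
}

const MAX_PRELOADED = 20; // documented per-instance limit

// Preloads a channel and keeps only the most recent MAX_PRELOADED channel
// names, mirroring the SDK rule that only the latest 20 preloads stay effective.
function preloadTracked(
  engine: PreloadCapable,
  tracked: string[],
  token: string,
  channelId: string,
  uid: number
): string[] {
  const ret = engine.preloadChannel(token, channelId, uid);
  if (ret !== 0) return tracked; // preload failed; tracking unchanged
  const next = tracked.filter((c) => c !== channelId);
  next.push(channelId);
  return next.slice(-MAX_PRELOADED);
}
```
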
preloadChannelWithUserAccount
Preloads a channel using token, channelId, and userAccount.
abstract preloadChannelWithUserAccount(
token: string,
channelId: string,
userAccount: string
): number;
Calling this method can reduce the time it takes for a viewer to join a channel when frequently switching channels, thereby shortening the time to hear the first audio frame and see the first video frame from the host, and improving the video experience for viewers. If the channel has already been successfully preloaded, and the viewer leaves and rejoins the channel, as long as the token used during preloading is still valid, there is no need to preload it again.
- When calling this method, make sure the user role is set to audience and the audio scenario is not set to AudioScenarioChorus, otherwise preloading will not take effect.
- Make sure the channelId, userAccount, and token passed during preloading are the same as those used when joining the channel later; otherwise, preloading will not take effect.
- Currently, one IRtcEngine instance supports preloading up to 20 channels. If this limit is exceeded, only the latest 20 preloaded channels take effect.
Timing
To enhance the user experience of preloading channels, Agora recommends that you call this method as early as possible after confirming the channel name and user information, and before joining the channel.
Parameters
- token
- A dynamic key generated on your server for authentication. See Use Token Authentication.
When the token expires, depending on the number of preloaded channels, you can pass a new token in different ways:
- For a single preloaded channel: call this method to pass the new token.
- For multiple preloaded channels:
- If you use a wildcard token, call updatePreloadChannelToken to update the token for all preloaded channels. When generating a wildcard token, the user ID must not be 0. See Use Wildcard Token.
- If you use different tokens: call this method and pass your user ID, corresponding channel name, and the updated token.
- channelId
- The name of the channel to preload. This parameter identifies the channel for real-time audio and video interaction. Under the same App ID, users with the same channel name will join the same channel for interaction.
This parameter must be a string within 64 bytes. The supported character set includes 89 characters:
- 26 lowercase English letters a~z
- 26 uppercase English letters A~Z
- 10 digits 0~9
- "!", "#", "$", "%", "&", "(", ")", "+", "-", ":", ";", "<", "=", ".", ">", "?", "@", "[", "]", "^", "_", "{", "}", "|", "~", ","
- userAccount
- The user's User Account. This parameter identifies the user in the real-time audio and video interaction channel. You need to set and manage the User Account yourself and ensure that each user in the same channel has a unique User Account. This parameter is required, must not exceed 255 bytes, and cannot be null. The supported character set includes 89 characters:
- 26 lowercase English letters a-z
- 26 uppercase English letters A-Z
- 10 digits 0-9
- space
- "!", "#", "$", "%", "&", "(", ")", "+", "-", ":", ";", "<", "=", ".", ">", "?", "@", "[", "]", "^", "_", "{", "}", "|", "~", ","
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
- -2: Invalid parameter. For example, the User Account is empty. You need to provide valid parameters and rejoin the channel.
- -7: The IRtcEngine object is not initialized. You need to initialize the IRtcEngine object successfully before calling this method.
- -102: Invalid channel name. You need to provide a valid channel name and rejoin the channel.
preloadEffect
Loads the audio effect file into memory.
abstract preloadEffect(
soundId: number,
filePath: string,
startPos?: number
): number;
To ensure smooth communication, pay attention to the size of the audio effect files you preload. Supported audio formats for preloading are listed in Supported Audio Formats.
Timing
Agora recommends calling this method before joining a channel.
Parameters
- soundId
- The ID of the audio effect. Each audio effect has a unique ID.
- filePath
- File path:
- Android: The file path must include the file name and extension. Supports URL of online files, URI of local files, absolute path, or paths starting with /assets/. Accessing local files via absolute path may require permissions. It is recommended to use URI to access local files. For example: content://com.android.providers.media.documents/document/audio%3A14441.
- iOS: The absolute path or URL of the audio file. Must include the file name and extension. For example: /var/mobile/Containers/Data/audio.mp4.
- startPos
- The start position for loading the audio effect file, in milliseconds.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
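preloadEffect and playEffect must agree on soundId and filePath. One way to keep them in sync is a small registry that hands out a stable, unique soundId per file path. SoundIdRegistry is a hypothetical app-side helper, not an SDK API:

```typescript
// Assigns a stable, unique soundId to each audio effect file path, so the
// same id can be reused for preloadEffect, playEffect, and pauseEffect.
class SoundIdRegistry {
  private ids = new Map<string, number>();
  private nextId = 1;

  idFor(filePath: string): number {
    let id = this.ids.get(filePath);
    if (id === undefined) {
      id = this.nextId++;
      this.ids.set(filePath, id);
    }
    return id;
  }
}
```
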
queryCameraFocalLengthCapability
Queries the focal length capabilities supported by the camera.
abstract queryCameraFocalLengthCapability(): {
focalLengthInfos: FocalLengthInfo[];
size: number;
};
To enable wide-angle or ultra-wide-angle camera modes, it is recommended to call this method first to check whether the device supports the corresponding focal length capabilities. Then, based on the query result, call setCameraCapturerConfiguration to adjust the camera's focal length configuration for optimal capture performance.
Return Values
- focalLengthInfos: An array of FocalLengthInfo objects that include the camera's direction and focal length type.
- size: The number of focal length entries actually returned.
queryCodecCapability
Queries the video codec capabilities supported by the SDK.
abstract queryCodecCapability(): { codecInfo: CodecCapInfo[]; size: number };
Return Values
- If the call succeeds, returns an object with the following properties:
- codecInfo: An array of CodecCapInfo, representing the SDK's video encoding capabilities.
- size: The size of the CodecCapInfo array.
- If the call times out, modify your logic to avoid calling this method on the main thread.
queryDeviceScore
Queries the device score level.
abstract queryDeviceScore(): number;
Scenario
In high-definition or ultra-high-definition video scenarios, you can first call this method to query the device score. If the returned score is low (e.g., below 60), you should lower the video resolution accordingly to avoid affecting the video experience. The minimum device score requirement varies by business scenario. For specific recommendations, please contact technical support.
Return Values
- > 0: Success. The value is the current device score, ranging from [0,100]. A higher value indicates stronger device capability. Most device scores range from 60 to 100.
- < 0: Failure.
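Following the guidance above (a score below 60 suggests lowering the resolution), an app might map the score to an encoder preset before configuring video. The thresholds and resolutions below are illustrative assumptions, not SDK recommendations:

```typescript
// Picks a capture resolution from the device score returned by
// queryDeviceScore. Thresholds here are illustrative; tune them per
// business scenario (the doc suggests lowering resolution below score 60).
function resolutionForScore(score: number): { width: number; height: number } {
  if (score < 0) throw new Error('queryDeviceScore failed');
  if (score >= 85) return { width: 1920, height: 1080 };
  if (score >= 60) return { width: 1280, height: 720 };
  return { width: 640, height: 360 }; // low-score device: reduce resolution
}
```
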
queryScreenCaptureCapability
Queries the maximum frame rate supported by the device for screen sharing.
abstract queryScreenCaptureCapability(): number;
Scenario
In screen sharing scenarios, if you want to enable high frame rates (e.g., 60 fps) but are unsure whether the device supports it, you can call this method to query the maximum frame rate supported by the device. If high frame rates are not supported, you can lower the frame rate of the screen sharing stream accordingly to ensure the expected sharing experience.
Return Values
- If the method call succeeds, returns the maximum frame rate supported by the device. See ScreenCaptureFramerateCapability.
- <0: The method call fails. See Error Codes for details and resolution suggestions.
rate
Rates a call.
abstract rate(callId: string, rating: number, description: string): number;
Parameters
- callId
- Call ID. You can get this parameter by calling getCallId.
- rating
- Rating for the call, from 1 (lowest) to 5 (highest).
- description
- Description of the call. The length must be less than 800 bytes.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and troubleshooting.
- -1: General error (not classified).
- -2: Invalid parameter.
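The rating and description arguments have documented bounds (1 to 5, and under 800 bytes); checking them before calling rate avoids a -2 error. A hypothetical app-side pre-check, not an SDK API:

```typescript
// Validates rate() arguments: integer rating in [1, 5] and a description
// shorter than 800 bytes (measured as UTF-8, since the limit is in bytes).
function isValidRating(rating: number, description: string): boolean {
  if (!Number.isInteger(rating) || rating < 1 || rating > 5) return false;
  return new TextEncoder().encode(description).length < 800;
}
```
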
registerAudioEncodedFrameObserver
Registers an audio encoded frame observer.
abstract registerAudioEncodedFrameObserver(
config: AudioEncodedFrameObserverConfig,
observer: IAudioEncodedFrameObserver
): number;
- Call this method after joining a channel.
- Since this method and startAudioRecording both set audio content and quality, it is not recommended to use this method together with startAudioRecording. Otherwise, only the method called later will take effect.
Parameters
- config
- Configuration for the encoded audio observer. See AudioEncodedFrameObserverConfig.
- observer
- The encoded audio observer. See IAudioEncodedFrameObserver.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
registerAudioSpectrumObserver
Registers an audio spectrum observer.
abstract registerAudioSpectrumObserver(
observer: IAudioSpectrumObserver
): number;
After successfully registering an audio spectrum observer and calling enableAudioSpectrumMonitor to enable audio spectrum monitoring, the SDK reports callbacks implemented in the IAudioSpectrumObserver class at the interval you set.
Parameters
- observer
- The audio spectrum observer. See IAudioSpectrumObserver.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
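Spectrum callbacks deliver an array of band magnitudes, which a typical consumer reduces to something displayable, such as the dominant band. The payload shape below is a local stand-in for illustration (the real SDK type is AudioSpectrumData; verify its exact field names), and peakBand is a hypothetical helper:

```typescript
// Local stand-in for the spectrum payload delivered to IAudioSpectrumObserver
// callbacks; the real SDK type is AudioSpectrumData (field names may differ).
interface SpectrumDataLike {
  audioSpectrumData: number[];
}

// Returns the index of the strongest spectrum band, or -1 if the data is empty.
function peakBand(data: SpectrumDataLike): number {
  const spectrum = data.audioSpectrumData;
  if (spectrum.length === 0) return -1;
  let peak = 0;
  for (let i = 1; i < spectrum.length; i++) {
    if (spectrum[i] > spectrum[peak]) peak = i;
  }
  return peak;
}
```
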
registerExtension
Registers an extension.
abstract registerExtension(
provider: string,
extension: string,
type?: MediaSourceType
): number;
For external SDK extensions (such as marketplace plugins and SDK extension plugins), after loading the plugin, you need to call this method to register it. Internal SDK plugins (included in the SDK package) are automatically loaded and registered after initializing IRtcEngine, so you don't need to call this method.
- To register multiple plugins, call this method multiple times.
- The order in which different plugins process data in the SDK is determined by the order in which they are registered. That is, plugins registered earlier process data first.
Timing
- It is recommended to call this method after initializing IRtcEngine and before joining a channel.
- For video-related plugins (such as beauty filters), call this method before enabling the video module (enableVideo/enableLocalVideo).
- Before calling this method, you need to call loadExtensionProvider to load the plugin.
Parameters
- provider
- The name of the plugin provider.
- extension
- The name of the plugin.
- type
- The media source type of the plugin. See MediaSourceType.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
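The note above says plugins process data in registration order. That ordering guarantee can be pictured as a simple pipeline where each registered stage transforms the output of the stage registered before it; this is an illustration of the ordering rule, not SDK code:

```typescript
// Illustrates "registered earlier processes data first": each stage receives
// the output of the stage registered before it.
type Stage = (frame: string) => string;

class ExtensionPipeline {
  private stages: Stage[] = [];

  register(stage: Stage): void {
    this.stages.push(stage);
  }

  process(frame: string): string {
    // reduce runs the stages left to right, i.e. in registration order.
    return this.stages.reduce((data, stage) => stage(data), frame);
  }
}
```
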
registerLocalUserAccount
Registers the local user's User Account.
abstract registerLocalUserAccount(appId: string, userAccount: string): number;
To join a channel with a User Account, you can choose either of the following ways:
- Call registerLocalUserAccount to register the account, then call joinChannelWithUserAccount to join the channel. This can reduce the time to join the channel.
- Directly call joinChannelWithUserAccount to join the channel.
Note:
- Ensure the userAccount set in this method is unique within the channel.
- To ensure communication quality, make sure all users in a channel use the same type of identifier. That is, all users in the same channel must use either UID or User Account. If users join via the Web SDK, ensure they also use the same identifier type.
Parameters
- appId
- The App ID of your project registered in the console.
- userAccount
- The user's User Account. This parameter identifies the user in the real-time audio and video interaction channel. You need to set and manage the User Account yourself and ensure that each user in the same channel has a unique User Account. This parameter is required, must not exceed 255 bytes, and cannot be null. The supported character set includes 89 characters:
- 26 lowercase English letters a-z
- 26 uppercase English letters A-Z
- 10 digits 0-9
- space
- "!", "#", "$", "%", "&", "(", ")", "+", "-", ":", ";", "<", "=", ".", ">", "?", "@", "[", "]", "^", "_", "{", "}", "|", "~", ","
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
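As a sketch, the userAccount constraints above can be checked on the client before registering (isValidUserAccount is a hypothetical helper, not an SDK API):

```typescript
// The 89 characters the documentation allows in a User Account.
const ALLOWED_CHARS = new Set(
  'abcdefghijklmnopqrstuvwxyz' +
  'ABCDEFGHIJKLMNOPQRSTUVWXYZ' +
  '0123456789' +
  ' ' +
  '!#$%&()+-:;<=.>?@[]^_{}|~,'
);

// Hypothetical helper: non-empty, at most 255 bytes (all allowed characters
// are single-byte ASCII, so character count equals byte count), and drawn
// only from the supported character set.
function isValidUserAccount(account: string): boolean {
  if (account.length === 0 || account.length > 255) return false;
  for (const ch of account) {
    if (!ALLOWED_CHARS.has(ch)) return false;
  }
  return true;
}
```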
registerMediaMetadataObserver
Registers a media metadata observer to receive or send metadata.
abstract registerMediaMetadataObserver(
observer: IMetadataObserver,
type: MetadataType
): number;
You need to implement the IMetadataObserver class yourself and specify the metadata type in this method. This method allows you to add synchronized metadata to the video stream for interactive live streaming, such as sending shopping links, e-coupons, and online quizzes.
Parameters
- observer
- The metadata observer. See IMetadataObserver.
- type
- The metadata type. Currently, only VideoMetadata is supported. See MetadataType.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
release
Destroys the IRtcEngine object.
abstract release(sync?: boolean): void;
This method releases all resources used by the SDK. Some apps only use real-time audio and video communication when needed, and release the resources when not in use for other operations. This method is suitable for such cases. After calling this method, you can no longer use other methods and callbacks of the SDK. To use the real-time audio and video communication function again, you must call createAgoraRtcEngine and initialize in sequence to create a new IRtcEngine object.
- This method is a synchronous call. You need to wait for the IRtcEngine resources to be released before performing other operations (such as creating a new IRtcEngine object), so it is recommended to call this method in a sub-thread to avoid blocking the main thread.
- It is not recommended to call release in the SDK's callback, otherwise the SDK will wait for the callback to return before recycling the related object resources, which may cause a deadlock.
Parameters
- sync
- Whether the method is a synchronous call:
- true: The method is synchronous.
- false: The method is asynchronous. Currently, only synchronous calls are supported; do not set this parameter to false.
removeAllListeners
Removes all listeners for the specified event type.
removeAllListeners?<EventType extends keyof IRtcEngineEvent>(
eventType?: EventType
): void;
Parameters
- eventType
- The name of the event whose listeners you want to remove. If you do not specify an event, listeners for all events are removed. See IRtcEngineEvent.
unregisterEventHandler
Removes the specified callback event.
abstract unregisterEventHandler(
eventHandler: IRtcEngineEventHandler
): boolean;
This method removes the specified callback handler that was previously added by calling registerEventHandler.
Parameters
- eventHandler
- The callback event to be removed. See IRtcEngineEventHandler.
Return Values
- true: The method call succeeds.
- false: The method call fails. See Error Codes for details and resolution suggestions.
removeListener
Removes the specified IRtcEngineEvent listener.
removeListener?<EventType extends keyof IRtcEngineEvent>(
eventType: EventType,
listener: IRtcEngineEvent[EventType]
): void;
For some callback functions that have been listened to, if you no longer need to receive callback messages after receiving the corresponding event, you can call this method to remove the corresponding listener.
Parameters
- eventType
- The name of the event to stop listening to. See IRtcEngineEvent.
- listener
- The callback function corresponding to eventType. You must pass in the same function object that was passed to addListener. For example, to stop listening to onJoinChannelSuccess:
const onJoinChannelSuccess = (connection: RtcConnection, elapsed: number) => {};
engine.addListener('onJoinChannelSuccess', onJoinChannelSuccess);
engine.removeListener('onJoinChannelSuccess', onJoinChannelSuccess);
removeVideoWatermark
Removes the watermark image from the local video.
abstract removeVideoWatermark(id: string): number;
- Since
- Available since v4.6.2.
This method removes a previously added watermark image from the local video stream based on the specified unique ID.
Parameters
- id
- The ID of the watermark to be removed. This value must match the ID used when adding the watermark.
Return Values
- 0: Success.
- < 0: Failure.
renewToken
Renews the token.
abstract renewToken(token: string): number;
This method is used to renew the token. The token will expire after a certain period, after which the SDK will not be able to connect to the server.
Timing
- When receiving the onTokenPrivilegeWillExpire callback indicating the token is about to expire;
- When receiving the onRequestToken callback indicating the token has expired;
- When receiving the onConnectionStateChanged callback with ConnectionChangedTokenExpired (9).
Parameters
- token
- The newly generated token.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
- -2: Invalid parameter. For example, the token is empty.
- -7: The IRtcEngine object is not initialized. You need to initialize the IRtcEngine object successfully before calling this method.
- -110: Invalid token. Make sure:
- The user ID specified when generating the token matches the one used to join the channel,
- The generated token matches the one used to join the channel.
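The renewal flow can be sketched as follows. fetchToken stands in for a request to your own token server, and the engine interface below is narrowed to the single method this sketch needs; both are assumptions for illustration:

```typescript
// Narrowed view of the engine: only the method this sketch uses.
interface TokenRenewer {
  renewToken(token: string): number; // 0 on success, < 0 on failure
}

// Call this from onTokenPrivilegeWillExpire (or when onConnectionStateChanged
// reports ConnectionChangedTokenExpired) to pass a fresh token to the SDK.
async function renewOnExpiryWarning(
  engine: TokenRenewer,
  fetchToken: () => Promise<string> // hypothetical request to your token server
): Promise<number> {
  const freshToken = await fetchToken();
  return engine.renewToken(freshToken);
}
```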
resumeAllChannelMediaRelay
Resumes media stream forwarding to all destination channels.
abstract resumeAllChannelMediaRelay(): number;
After calling the pauseAllChannelMediaRelay method, you can call this method to resume media stream forwarding to all destination channels.
Return Values
- 0: The method call was successful.
- < 0: The method call failed. See Error Codes for details and resolution suggestions.
- -5: This method call was rejected. No paused cross-channel media stream forwarding exists.
resumeAllEffects
Resumes playback of all audio effect files.
abstract resumeAllEffects(): number;
After calling pauseAllEffects to pause all audio effect files, you can call this method to resume playback.
Timing
This method must be called after pauseAllEffects.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
resumeAudioMixing
Resumes the playback of a music file.
abstract resumeAudioMixing(): number;
After you call pauseAudioMixing to pause the playback of a music file, call this method to resume playback.
Timing
You must call this method after joining a channel.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
resumeEffect
Resumes playing the specified audio effect file.
abstract resumeEffect(soundId: number): number;
Parameters
- soundId
- The ID of the audio effect. Each audio effect has a unique ID.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
selectAudioTrack
Specifies the audio track to play in the current music file.
abstract selectAudioTrack(index: number): number;
After retrieving the number of audio tracks in a music file, you can call this method to specify any track for playback. For example, if different tracks in a multi-track file contain songs in different languages, you can use this method to set the playback language.
- For supported audio file formats, see What audio file formats does the RTC SDK support?.
- You must call this method after calling startAudioMixing and receiving the onAudioMixingStateChanged (AudioMixingStatePlaying) callback.
Parameters
- index
- The specified audio track to play. The value must be greater than or equal to 0 and less than the return value of getAudioTrackCount.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
sendCustomReportMessage
Sends a custom report message.
abstract sendCustomReportMessage(
id: string,
category: string,
event: string,
label: string,
value: number
): number;
Agora provides custom data reporting and analytics services. This service is currently in a free beta period. During the beta, you can send up to 10 custom data messages within 6 seconds. Each message must not exceed 256 bytes, and each string must not exceed 100 bytes. To try this service, please contact sales (support@agora.io) to enable it and agree on the custom data format.
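A client-side guard for the beta rate limit might look like this sliding-window counter (ReportLimiter is a hypothetical helper, not part of the SDK):

```typescript
// Tracks send timestamps and enforces "at most 10 messages per 6 seconds".
class ReportLimiter {
  private timestamps: number[] = [];

  constructor(
    private readonly maxMessages: number = 10,
    private readonly windowMs: number = 6000
  ) {}

  // Returns true if a message may be sent at nowMs, and records it.
  tryAcquire(nowMs: number): boolean {
    this.timestamps = this.timestamps.filter((t) => nowMs - t < this.windowMs);
    if (this.timestamps.length >= this.maxMessages) return false;
    this.timestamps.push(nowMs);
    return true;
  }
}
```

Call tryAcquire(Date.now()) before each sendCustomReportMessage call and skip or queue the message when it returns false.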
sendMetaData
Sends media metadata.
abstract sendMetaData(metadata: Metadata, sourceType: VideoSourceType): number;
If the media metadata is sent successfully, the receiver will receive the onMetadataReceived callback.
Parameters
- metadata
- The media metadata. See Metadata.
- sourceType
- The type of video source. See VideoSourceType.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
sendStreamMessage
Sends a data stream.
abstract sendStreamMessage(
streamId: number,
data: Uint8Array,
length: number
): number;
- Each client in the channel can have up to 5 data channels simultaneously, and the total sending bitrate of all data channels is limited to 30 KB/s.
- Each data channel can send up to 60 packets per second, with a maximum size of 1 KB per packet.
- This method must be called after joining a channel and after creating a data channel using createDataStream.
- This method applies to broadcaster users only.
Parameters
- streamId
- Data stream ID. Obtained via createDataStream.
- data
- Data to be sent.
- length
- Length of the data.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
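To stay under the 1 KB per-packet limit, larger payloads have to be split before calling sendStreamMessage. A minimal sketch (chunkPayload is a hypothetical helper; reassembly on the receiver side is up to the app):

```typescript
const MAX_PACKET_BYTES = 1024; // documented per-packet limit

// Splits a payload into packets of at most maxBytes each.
function chunkPayload(data: Uint8Array, maxBytes: number = MAX_PACKET_BYTES): Uint8Array[] {
  const chunks: Uint8Array[] = [];
  for (let offset = 0; offset < data.length; offset += maxBytes) {
    chunks.push(data.subarray(offset, offset + maxBytes));
  }
  return chunks;
}
```

Remember the other documented limits too: at most 60 packets per second per data channel and 30 KB/s across all channels, so large payloads may also need to be paced over time.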
setAINSMode
Enables or disables AI noise reduction and sets the noise reduction mode.
abstract setAINSMode(enabled: boolean, mode: AudioAinsMode): number;
This method can reduce the following types of noise:
- Steady-state noise, for example:
- TV noise
- Air conditioner noise
- Factory machinery noise, etc.
- Non-steady-state (transient) noise, for example:
- Thunder
- Explosions
- Cracking sounds, etc.
- This method depends on the AI noise reduction dynamic library. Removing the dynamic library will cause the feature to fail. For the name of the AI noise reduction dynamic library, see Plugin List.
- Currently, it is not recommended to enable this feature on devices running Android 6.0 or below.
Scenario
In scenarios such as voice chat, online education, and online meetings, if the surrounding environment is noisy, the AI noise reduction feature can identify and reduce both steady and non-steady noises while preserving voice quality, thereby improving audio quality and user experience.
Timing
This method can be called before or after joining the channel.
Parameters
- enabled
- Whether to enable AI noise reduction:
- true: Enable AI noise reduction.
- false: (Default) Disable AI noise reduction.
- mode
- Noise reduction mode. See AudioAinsMode.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
setAdvancedAudioOptions
Sets advanced audio options.
abstract setAdvancedAudioOptions(options: AdvancedAudioOptions): number;
If you have advanced requirements for audio processing, such as capturing and sending stereo sound, you can call this method to set advanced audio options.
Parameters
- options
- Advanced audio options. See AdvancedAudioOptions.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
setAudioEffectParameters
Sets parameters for SDK preset voice effects.
abstract setAudioEffectParameters(
preset: AudioEffectPreset,
param1: number,
param2: number
): number;
Call this method to set the following parameters for the local user who sends an audio stream:
- 3D voice effect: Set the surround cycle for the 3D voice effect.
- Pitch correction effect: Set the base scale and tonic pitch. To allow users to adjust pitch correction effects easily, it is recommended to bind the base scale and tonic pitch settings to your application's UI elements.
To achieve better audio quality, it is recommended to do the following before calling this method:
- Call setAudioScenario to set the audio scenario to high-quality, i.e., AudioScenarioGameStreaming (3).
- Call setAudioProfile to set profile to AudioProfileMusicHighQuality (4) or AudioProfileMusicHighQualityStereo (5).
Note:
- This method can be called before or after joining the channel.
- Do not set the profile parameter of setAudioProfile to AudioProfileSpeechStandard (1) or AudioProfileIot (6), otherwise this method will not take effect.
- This method works best for voice processing and is not recommended for audio data containing music.
- After calling setAudioEffectParameters, avoid calling the following methods as they will override the effects set by setAudioEffectParameters:
- This method depends on the voice beautifier dynamic library libagora_audio_beauty_extension.dll. Removing the dynamic library will cause the feature to fail.
Parameters
- preset
- SDK preset audio effects. The following are supported:
- RoomAcoustics3dVoice: 3D voice effect.
- Before using this enum, you need to set the profile parameter of setAudioProfile to AudioProfileMusicStandardStereo (3) or AudioProfileMusicHighQualityStereo (5), otherwise the enum setting is invalid.
- After enabling 3D voice, users must use audio playback devices that support stereo to hear the expected effect.
- PitchCorrection: Pitch correction effect.
- param1
- If preset is set to RoomAcoustics3dVoice, param1 represents the surround cycle of the 3D voice effect. Value range: [1,60] seconds. Default is 10, which means the voice surrounds 360 degrees in 10 seconds.
- If preset is set to PitchCorrection, param1 represents the base scale:
- 1: (Default) Major natural scale.
- 2: Minor natural scale.
- 3: Minor pentatonic scale.
- param2
- If preset is set to RoomAcoustics3dVoice, set param2 to 0.
- If preset is set to PitchCorrection, param2 represents the tonic pitch:
- 1: A
- 2: A#
- 3: B
- 4: (Default) C
- 5: C#
- 6: D
- 7: D#
- 8: E
- 9: F
- 10: F#
- 11: G
- 12: G#
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
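The documented parameter combinations can be validated before the call; a sketch (areEffectParamsValid is a hypothetical helper whose string names mirror the AudioEffectPreset members):

```typescript
type EffectPreset = 'RoomAcoustics3dVoice' | 'PitchCorrection';

// Checks the documented (preset, param1, param2) combinations.
function areEffectParamsValid(preset: EffectPreset, param1: number, param2: number): boolean {
  if (preset === 'RoomAcoustics3dVoice') {
    // param1: surround cycle in seconds [1, 60]; param2 must be 0.
    return param1 >= 1 && param1 <= 60 && param2 === 0;
  }
  // PitchCorrection: param1 is the base scale (1-3), param2 the tonic pitch (1-12).
  return param1 >= 1 && param1 <= 3 && param2 >= 1 && param2 <= 12;
}
```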
setAudioEffectPreset
Sets the SDK's preset voice effects.
abstract setAudioEffectPreset(preset: AudioEffectPreset): number;
Call this method to set the SDK's preset voice effects for the local user who is sending the stream. This does not change the gender characteristics of the original voice. After setting the effect, all users in the channel can hear it.
- Do not set the profile parameter of setAudioProfile to AudioProfileSpeechStandard (1) or AudioProfileIot (6), otherwise this method will not take effect.
- If you call setAudioEffectPreset and set an enum other than RoomAcoustics3dVoice or PitchCorrection, do not call setAudioEffectParameters, or the effect set by setAudioEffectPreset will be overridden.
- After calling setAudioEffectPreset, it is not recommended to call the following methods, or the effect set by setAudioEffectPreset will be overridden:
- This method depends on the voice beautifier dynamic library libagora_audio_beauty_extension.dll. Deleting this library will cause the feature to fail to start properly.
Timing
- Call setAudioScenario to set the audio scenario to high-quality mode, i.e., AudioScenarioGameStreaming (3).
- Call setAudioProfile to set the profile to AudioProfileMusicHighQuality (4) or AudioProfileMusicHighQualityStereo (5).
Parameters
- preset
- Preset audio effect option. See AudioEffectPreset.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setAudioMixingDualMonoMode
Sets the channel mode of the current audio file.
abstract setAudioMixingDualMonoMode(mode: AudioMixingDualMonoMode): number;
In stereo audio files, the left and right channels can store different audio data. Depending on your needs, you can set the channel mode to original, left channel, right channel, or mixed mode.
Scenario
- To hear only the accompaniment, set the channel mode to left.
- To hear both accompaniment and vocals, set the channel mode to mixed.
Timing
You must call this method after startAudioMixing and after receiving the onAudioMixingStateChanged(AudioMixingStatePlaying) callback.
Parameters
- mode
- The channel mode. See AudioMixingDualMonoMode.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setAudioMixingPitch
Adjusts the pitch of the music file played locally.
abstract setAudioMixingPitch(pitch: number): number;
When mixing local vocals with a music file, you can call this method to adjust only the pitch of the music file.
Timing
You must call this method after calling startAudioMixing and receiving the onAudioMixingStateChanged(AudioMixingStatePlaying) callback.
Parameters
- pitch
- Adjusts the pitch of the music file played locally in semitone steps. The default value is 0, which means no pitch adjustment. The valid range is [-12,12]. Each adjacent value differs by one semitone. The greater the absolute value, the more the pitch is raised or lowered.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
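Since each step is one semitone, a pitch value maps to a frequency ratio of 2^(pitch/12). A sketch that clamps to the documented range and computes the resulting ratio (both helpers are hypothetical, not SDK APIs):

```typescript
// Clamp a requested shift to the documented [-12, 12] semitone range.
function clampPitch(pitch: number): number {
  return Math.max(-12, Math.min(12, Math.round(pitch)));
}

// Frequency ratio of a semitone shift: +12 semitones doubles the frequency.
function pitchFrequencyRatio(semitones: number): number {
  return Math.pow(2, clampPitch(semitones) / 12);
}
```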
setAudioMixingPlaybackSpeed
Sets the playback speed of the current music file.
abstract setAudioMixingPlaybackSpeed(speed: number): number;
You must call this method after calling startAudioMixing and receiving the onAudioMixingStateChanged callback reporting the playback state as AudioMixingStatePlaying.
Parameters
- speed
- The playback speed of the music file. The recommended range is [50,400], where:
- 50: 0.5x speed.
- 100: Original speed.
- 400: 4x speed.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
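The speed parameter is the playback multiplier times 100. A hypothetical converter from a multiplier to the parameter value, clamped to the recommended range:

```typescript
// 0.5x -> 50, 1x -> 100, 4x -> 400; clamped to the recommended [50, 400].
function toSpeedParam(multiplier: number): number {
  const raw = Math.round(multiplier * 100);
  return Math.max(50, Math.min(400, raw));
}
```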
setAudioMixingPosition
Sets the playback position of the music file.
abstract setAudioMixingPosition(pos: number): number;
This method sets the playback position of an audio file, allowing you to play from a specific point instead of from the beginning.
Timing
You must call this method after startAudioMixing and after receiving the onAudioMixingStateChanged(AudioMixingStatePlaying) callback.
Parameters
- pos
- An integer. The position in the progress bar, in milliseconds.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setAudioProfile
Sets the audio encoding profile and scenario.
abstract setAudioProfile(
profile: AudioProfileType,
scenario?: AudioScenarioType
): number;
AudioScenarioGameStreaming(3). In this scenario, the SDK switches to media volume to avoid the issue.
Scenario
This method applies to various audio scenarios. You can choose as needed. For example, in scenarios that require high audio quality such as music education, it is recommended to set profile to AudioProfileMusicHighQuality (4) and scenario to AudioScenarioGameStreaming (3).
Timing
You can call this method before or after joining a channel.
Parameters
- profile
- The audio encoding profile, including sample rate, bitrate, encoding mode, and the number of channels. See AudioProfileType.
- scenario
- The audio scenario. The volume type of the device varies depending on the audio scenario. See AudioScenarioType.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and troubleshooting.
setAudioScenario
Sets the audio scenario.
abstract setAudioScenario(scenario: AudioScenarioType): number;
AudioScenarioGameStreaming(3). In this scenario, the SDK switches to media volume to avoid the issue.
Scenario
This method applies to various audio scenarios. You can choose as needed. For example, in scenarios that require high audio quality such as music education, it is recommended to set scenario to AudioScenarioGameStreaming (3).
Timing
You can call this method before or after joining a channel.
Parameters
- scenario
- The audio scenario. The volume type of the device varies depending on the audio scenario. See AudioScenarioType.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and troubleshooting.
setAudioSessionOperationRestriction
Sets the SDK’s operation permissions on the Audio Session.
abstract setAudioSessionOperationRestriction(
restriction: AudioSessionOperationRestriction
): number;
By default, both the SDK and the app have permission to operate the Audio Session. If you want only the app to operate the Audio Session, you can call this method to restrict the SDK’s permission. You can call this method before or after joining a channel. Once this method is called to restrict the SDK’s operation permission, the restriction takes effect when the SDK attempts to change the Audio Session.
- This method applies only to the iOS platform.
- This method does not restrict the app’s permission to operate the Audio Session.
Parameters
- restriction
- The SDK’s operation permission on the Audio Session. See AudioSessionOperationRestriction. This parameter is a bit mask, and each bit corresponds to a permission.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and troubleshooting.
setBeautyEffectOptions
Sets beauty effect options.
abstract setBeautyEffectOptions(
enabled: boolean,
options: BeautyOptions,
type?: MediaSourceType
): number;
Enables the local beauty effect and sets the beauty effect options.
- This method depends on the video enhancement dynamic library libagora_clear_vision_extension.dll. Removing this library will prevent the feature from working properly.
- This feature has high performance requirements. When calling this method, the SDK automatically checks the capabilities of the current device.
Timing
Call this method after enableVideo or startPreview.
Parameters
- enabled
- Whether to enable the beauty effect:
- true: Enable the beauty effect.
- false: (default) Disable the beauty effect.
- options
- Beauty options. See BeautyOptions for details.
- type
- The media source type to apply the effect to. See MediaSourceType.
Note: This method only supports the following two settings:
- For local video captured by the camera, keep the default value PrimaryCameraSource.
- For custom captured video, set this parameter to CustomVideoSource.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
- -4: The current device does not support this feature. Possible reasons:
- The device does not meet the performance requirements for beauty effects. Consider using a higher-performance device.
- The device runs a version lower than Android 5.0, which does not support this operation. Consider upgrading the OS or using a different device.
setCameraAutoExposureFaceModeEnabled
Enables or disables auto exposure.
abstract setCameraAutoExposureFaceModeEnabled(enabled: boolean): number;
Parameters
- enabled
- Whether to enable auto exposure:
- true: Enable auto exposure.
- false: Disable auto exposure.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and troubleshooting.
setCameraAutoFocusFaceModeEnabled
Enables or disables face auto focus.
abstract setCameraAutoFocusFaceModeEnabled(enabled: boolean): number;
By default, the SDK disables face auto focus on Android and enables it on iOS. To configure face auto focus manually, call this method.
Timing
You must call this method after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1).
Parameters
- enabled
- Whether to enable face auto focus:
- true: Enable face auto focus.
- false: Disable face auto focus.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and troubleshooting.
setCameraCapturerConfiguration
Sets the camera capture configuration.
abstract setCameraCapturerConfiguration(
config: CameraCapturerConfiguration
): number;
Timing
You must call this method before enabling local camera capture, such as before calling startPreview or joinChannel.
Parameters
- config
- Camera capture configuration. See CameraCapturerConfiguration.
Note: You do not need to set the deviceId parameter in this method.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and troubleshooting.
setCameraDeviceOrientation
Sets the rotation angle of the captured video.
abstract setCameraDeviceOrientation(
type: VideoSourceType,
orientation: VideoOrientation
): number;
- You must call this method after enableVideo. The setting takes effect after the camera is successfully turned on, that is, when the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1).
- If the video capture device does not support gravity sensing, you can call this method to manually adjust the rotation angle of the captured video frame.
Parameters
- type
- The type of video source. See VideoSourceType.
- orientation
- The rotation angle of the captured video. See VideoOrientation.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setCameraExposureFactor
Sets the exposure factor of the current camera.
abstract setCameraExposureFactor(factor: number): number;
When the lighting in the shooting environment is insufficient or too bright, it can affect the quality of the captured video. To achieve better video effects, you can use this method to adjust the exposure factor of the camera.
- You must call this method after enableVideo. The setting takes effect after the camera is successfully turned on, that is, when the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1).
- It is recommended that you call isCameraExposureSupported before using this method to check whether the current camera supports adjusting the exposure factor.
- When you call this method, it sets the exposure factor for the currently used camera, which is the one specified in setCameraCapturerConfiguration.
Parameters
- factor
- The exposure factor of the camera. The default value is 0, which means using the camera's default exposure. The larger the value, the greater the exposure. If the video image is overexposed, you can lower the exposure factor; if the video image is underexposed and dark details are lost, you can increase the exposure factor. If the specified exposure factor exceeds the supported range of the device, the SDK automatically adjusts it to the supported range. On Android, the range is [-20.0,20.0]; on iOS, the range is [-8.0,8.0].
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
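The SDK clamps out-of-range factors itself; this hypothetical helper mirrors the documented per-platform ranges so an app UI can display the value that will actually take effect:

```typescript
// Android supports [-20.0, 20.0]; iOS supports [-8.0, 8.0].
function clampExposureFactor(factor: number, platform: 'android' | 'ios'): number {
  const limit = platform === 'android' ? 20.0 : 8.0;
  return Math.max(-limit, Math.min(limit, factor));
}
```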
setCameraExposurePosition
Sets the manual exposure position.
abstract setCameraExposurePosition(
positionXinView: number,
positionYinView: number
): number;
- You must call this method after enableVideo. The setting takes effect after the camera is successfully turned on, that is, when the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1).
- After this method is successfully called, the local client triggers the onCameraExposureAreaChanged callback.
Parameters
- positionXinView
- The X coordinate of the touch point relative to the view.
- positionYinView
- The Y coordinate of the touch point relative to the view.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setCameraFocusPositionInPreview
Sets the manual focus position and triggers focusing.
abstract setCameraFocusPositionInPreview(
positionX: number,
positionY: number
): number;
- You must call this method after enableVideo. The setting takes effect after the camera is successfully turned on, that is, when the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1).
- After this method is successfully called, the local client triggers the onCameraFocusAreaChanged callback.
Parameters
- positionX
- The X coordinate of the touch point relative to the view.
- positionY
- The Y coordinate of the touch point relative to the view.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setCameraStabilizationMode
Sets the camera stabilization mode.
abstract setCameraStabilizationMode(mode: CameraStabilizationMode): number;
Camera stabilization is disabled by default. You need to call this method to enable and set an appropriate stabilization mode.
- Camera stabilization only takes effect when the video resolution is greater than 1280 × 720.
- The higher the stabilization level, the smaller the camera's field of view and the greater the camera delay. To ensure user experience, we recommend setting the mode parameter to CameraStabilizationModeLevel1.
Scenario
In scenarios such as mobile shooting, low-light environments, or handheld shooting, you can set the camera stabilization mode to reduce the impact of camera shake and obtain more stable and clearer images.
Timing
You must call this method after the camera is successfully started, that is, after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1).
Parameters
- mode
- Camera stabilization mode. See CameraStabilizationMode.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
setCameraTorchOn
Sets whether to turn on the flashlight.
abstract setCameraTorchOn(isOn: boolean): number;
- You must call this method after enableVideo. The setting takes effect after the camera is successfully turned on, that is, when the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1).
Parameters
- isOn
- Whether to turn on the flashlight:
- true: Turn on the flashlight.
- false: (default) Turn off the flashlight.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setCameraZoomFactor
Sets the zoom factor of the camera.
abstract setCameraZoomFactor(factor: number): number;
Some iOS devices have rear cameras composed of multiple lenses, such as dual cameras (wide-angle and ultra-wide-angle) or triple cameras (wide-angle, ultra-wide-angle, and telephoto). For such composite lenses with ultra-wide-angle capabilities, you can call setCameraCapturerConfiguration and set cameraFocalLengthType to CameraFocalLengthDefault (0) (standard lens), then call this method to set the camera zoom factor to a value less than 1.0 to achieve an ultra-wide-angle shooting effect.
- You must call this method after enableVideo. The setting takes effect after the camera is successfully turned on, that is, when the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1).
Parameters
- factor
- The zoom factor of the camera. For devices that do not support ultra-wide-angle, the range is from 1.0 to the maximum zoom factor; for devices that support ultra-wide-angle, the range is from 0.5 to the maximum zoom factor. You can use getCameraMaxZoomFactor to get the maximum zoom factor supported by the device.
Return Values
- If the method call succeeds: returns the set factor value.
- If the method call fails: returns a value < 0.
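A sketch of the valid zoom range described above (clampZoomFactor is a hypothetical helper; in a real app the maximum comes from getCameraMaxZoomFactor):

```typescript
// Lower bound is 0.5 on devices with ultra-wide support, 1.0 otherwise.
function clampZoomFactor(factor: number, maxZoom: number, supportsUltraWide: boolean): number {
  const min = supportsUltraWide ? 0.5 : 1.0;
  return Math.max(min, Math.min(maxZoom, factor));
}
```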
setChannelProfile
Sets the channel profile.
abstract setChannelProfile(profile: ChannelProfileType): number;
You can call this method to set the channel profile. The SDK uses different optimization strategies for different scenarios. For example, in the live streaming scenario, it prioritizes video quality. The default channel profile after SDK initialization is live streaming.
Timing
You must call this method before joining a channel.
Parameters
- profile
- The channel profile. See ChannelProfileType.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
- -2: Invalid parameter.
- -7: The SDK is not initialized.
setClientRole
Sets the user role and audience latency level in a live streaming scenario.
abstract setClientRole(
role: ClientRoleType,
options?: ClientRoleOptions
): number;
By default, the SDK sets the user role to audience. You can call this method to set the user role to broadcaster. The user role (role) determines the user's permissions at the SDK level, such as whether they can publish streams.
The audience latency level (options) defaults to AudienceLatencyLevelUltraLowLatency (ultra-low latency).
If you call this method before joining the channel and set role to BROADCASTER, the local onClientRoleChanged callback is not triggered.
Timing
You can call this method either before or after joining a channel. If you call this method after joining the channel to switch user roles and the call succeeds, the SDK automatically calls muteLocalAudioStream and muteLocalVideoStream to update the publishing state of audio and video streams.
Parameters
- role
- The user role. See ClientRoleType.
Note: Users with the audience role cannot publish audio or video streams in the channel. When publishing in a live streaming scenario, ensure the user role is switched to broadcaster.
- options
- User-specific settings, including user level. See ClientRoleOptions.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
- -1: General error (not categorized).
- -2: Invalid parameter.
- -5: The call was rejected.
- -7: The SDK is not initialized.
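The typical call pattern, sketched against a minimal stand-in so the snippet is self-contained; in an app you would call these on the engine returned by createAgoraRtcEngine. The enum values mirror ClientRoleType and AudienceLatencyLevelType in this SDK.

```typescript
// Local stand-ins mirroring the SDK enums.
enum ClientRoleType {
  ClientRoleBroadcaster = 1,
  ClientRoleAudience = 2,
}
enum AudienceLatencyLevelType {
  AudienceLatencyLevelLowLatency = 1,
  AudienceLatencyLevelUltraLowLatency = 2,
}
interface ClientRoleOptions {
  audienceLatencyLevel?: AudienceLatencyLevelType;
}

// Recording stand-in for the engine: same call shape, logs each call.
const roleCalls: Array<[ClientRoleType, ClientRoleOptions | undefined]> = [];
const engine = {
  setClientRole(role: ClientRoleType, options?: ClientRoleOptions): number {
    roleCalls.push([role, options]);
    return 0; // the real method returns 0 on success
  },
};

// Promote the local user to broadcaster before publishing streams:
engine.setClientRole(ClientRoleType.ClientRoleBroadcaster);
// Later, demote to audience, keeping ultra-low latency (the default level):
engine.setClientRole(ClientRoleType.ClientRoleAudience, {
  audienceLatencyLevel:
    AudienceLatencyLevelType.AudienceLatencyLevelUltraLowLatency,
});
```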
setCloudProxy
Sets the cloud proxy service.
abstract setCloudProxy(proxyType: CloudProxyType): number;
When a user's network is restricted by a firewall, you need to add the IP addresses and port numbers provided by Agora to the firewall whitelist, then call this method to enable the cloud proxy and set the proxy type via the proxyType parameter.
After successfully connecting to the cloud proxy, the SDK triggers the onConnectionStateChanged (ConnectionStateConnecting, ConnectionChangedSettingProxyServer) callback.
To disable a configured Force UDP or Force TCP cloud proxy, call setCloudProxy(NoneProxy).
To change the configured cloud proxy type, first call setCloudProxy(NoneProxy), then call setCloudProxy again with the desired proxyType.
- It is recommended to call this method outside the channel.
- If the user is in an intranet firewall environment, the features of CDN live streaming and cross-channel media relay are not available when using Force UDP cloud proxy.
- When using Force UDP cloud proxy, online audio files using HTTP protocol cannot be played via startAudioMixing. CDN live streaming and cross-channel media relay use TCP cloud proxy.
Parameters
- proxyType
- Cloud proxy type. See CloudProxyType. This parameter is required. If not set, the SDK returns an error.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and troubleshooting.
- -2: Invalid parameter.
- -7: SDK not initialized.
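The reset-then-set rule above can be expressed as a small helper that computes the call sequence for changing an already-configured proxy type. The helper is illustrative; the enum values mirror CloudProxyType (NoneProxy = 0, UdpProxy = 1, TcpProxy = 2).

```typescript
// Local stand-in mirroring the SDK enum.
enum CloudProxyType {
  NoneProxy = 0,
  UdpProxy = 1,
  TcpProxy = 2,
}

// Returns the proxy types to pass to setCloudProxy, in order, to move
// from the current configuration to the desired one.
function proxySwitchSequence(
  current: CloudProxyType,
  next: CloudProxyType
): CloudProxyType[] {
  if (current === next) return [];
  // An active Force UDP/TCP proxy must be cleared with NoneProxy
  // before a different type is set.
  return current === CloudProxyType.NoneProxy
    ? [next]
    : [CloudProxyType.NoneProxy, next];
}

// Switching from Force UDP to Force TCP requires two calls:
const sequence = proxySwitchSequence(
  CloudProxyType.UdpProxy,
  CloudProxyType.TcpProxy
);
// sequence.forEach((type) => engine.setCloudProxy(type));
```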
setColorEnhanceOptions
Sets color enhancement options.
abstract setColorEnhanceOptions(
enabled: boolean,
options: ColorEnhanceOptions,
type?: MediaSourceType
): number;
Video captured by the camera may have color distortion. The color enhancement feature intelligently adjusts video characteristics such as saturation and contrast to improve color richness and accuracy, resulting in more vivid video. You can call this method to enable the color enhancement feature and set its effect.
- Call this method after enableVideo.
- Color enhancement requires certain device performance. If the device overheats or encounters issues after enabling it, consider lowering the enhancement level or disabling the feature.
- This method depends on the video enhancement dynamic library libagora_clear_vision_extension.dll. Removing this library will prevent the feature from working properly.
Parameters
- enabled
- Whether to enable the color enhancement feature:
- true: Enable color enhancement.
- false: (default) Disable color enhancement.
- options
- Color enhancement options used to set the enhancement effect. See ColorEnhanceOptions.
- type
- The media source type to apply the effect to. See MediaSourceType.
Note: This method only supports the following two settings:
- For local video captured by the camera, keep the default value PrimaryCameraSource.
- For custom captured video, set this parameter to CustomVideoSource.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
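A hedged sketch of the options object passed to this method. The field names follow ColorEnhanceOptions (strengthLevel and skinProtectLevel, both in [0.0, 1.0]); raising strengthLevel can distort skin tones, which a higher skinProtectLevel counteracts.

```typescript
// Color enhancement options: both fields take values in [0.0, 1.0].
const colorOptions = {
  strengthLevel: 0.5,    // overall enhancement intensity
  skinProtectLevel: 1.0, // protect skin tones from over-saturation
};
// After enableVideo():
// engine.setColorEnhanceOptions(true, colorOptions);
```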
setDefaultAudioRouteToSpeakerphone
Sets the default audio route.
abstract setDefaultAudioRouteToSpeakerphone(
defaultToSpeaker: boolean
): number;
The SDK's default audio routes in different scenarios are as follows:
- Voice call: Earpiece
- Voice live streaming: Speaker
- Video call: Speaker
- Video live streaming: Speaker
Timing
Call this method before joining a channel. To switch the audio route after joining a channel, call setEnableSpeakerphone.
Parameters
- defaultToSpeaker
- Whether to use the speaker as the default audio route:
- true: Set the default audio route to speaker.
- false: Set the default audio route to earpiece.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setDirectCdnStreamingAudioConfiguration
Sets the audio encoding profile for direct CDN streaming from the host.
abstract setDirectCdnStreamingAudioConfiguration(
profile: AudioProfileType
): number;
- Deprecated
- Deprecated since v4.6.2.
This method is only effective for audio collected from the microphone or custom audio sources, i.e., when publishMicrophoneTrack or publishCustomAudioTrack is set to true in DirectCdnStreamingMediaOptions.
Parameters
- profile
- The audio encoding profile for direct CDN streaming. See AudioProfileType.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
setDirectCdnStreamingVideoConfiguration
Sets the video encoding profile for direct CDN streaming from the host.
abstract setDirectCdnStreamingVideoConfiguration(
config: VideoEncoderConfiguration
): number;
- Deprecated
- Deprecated since v4.6.2.
This method is only effective for video collected from the camera, screen sharing, or custom video sources, i.e., when publishCameraTrack or publishCustomVideoTrack is set to true in DirectCdnStreamingMediaOptions.
If the resolution you set exceeds the capabilities of your camera device, the SDK adapts the resolution to the closest supported value with the same aspect ratio for capturing, encoding, and streaming. You can use the onDirectCdnStreamingStats callback to get the actual resolution of the pushed video stream.
Parameters
- config
- Video encoding configuration. See VideoEncoderConfiguration.
Note: When streaming directly to CDN, the SDK currently only supports setting OrientationMode to landscape (OrientationFixedLandscape) or portrait (OrientationFixedPortrait).
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
setDualStreamMode
Sets the dual-stream mode and configures the low-quality video stream on the sender side.
abstract setDualStreamMode(
mode: SimulcastStreamMode,
streamConfig?: SimulcastStreamConfig
): number;
By default, the SDK enables the auto mode for the low-quality video stream (AutoSimulcastStream) on the sender side, meaning the sender does not actively send the low-quality stream. A receiver with the host role can call setRemoteVideoStreamType to request the low-quality stream, and the sender starts sending it automatically upon receiving the request.
- To change this behavior, call this method and set mode to DisableSimulcastStream (never send the low-quality stream) or EnableSimulcastStream (always send the low-quality stream).
- To revert to the default behavior after making changes, call this method again and set mode to AutoSimulcastStream.
- Calling this method with mode set to DisableSimulcastStream is equivalent to calling enableDualStreamMode with enabled set to false.
- Calling this method with mode set to EnableSimulcastStream is equivalent to calling enableDualStreamMode with enabled set to true.
- Both methods can be called before or after joining a channel. If both are used, the settings from the later call take precedence.
Parameters
- mode
- The mode for sending video streams. See SimulcastStreamMode.
- streamConfig
- Configuration for the low-quality video stream. See SimulcastStreamConfig.
Note: When mode is set to DisableSimulcastStream, streamConfig has no effect.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
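The relationship between the three modes and sender behavior, sketched as a small predicate. The enum values mirror SimulcastStreamMode (AutoSimulcastStream = -1, DisableSimulcastStream = 0, EnableSimulcastStream = 1); the commented-out call and its streamConfig fields are illustrative, see SimulcastStreamConfig for the real ones.

```typescript
// Local stand-in mirroring the SDK enum.
enum SimulcastStreamMode {
  AutoSimulcastStream = -1,
  DisableSimulcastStream = 0,
  EnableSimulcastStream = 1,
}

// Whether the sender publishes the low-quality stream before any receiver
// asks for it; in auto mode it waits for a setRemoteVideoStreamType request,
// and in disable mode it never sends it.
function sendsLowQualityImmediately(mode: SimulcastStreamMode): boolean {
  return mode === SimulcastStreamMode.EnableSimulcastStream;
}

// Typical use: always send the low-quality stream with a custom config.
// engine.setDualStreamMode(SimulcastStreamMode.EnableSimulcastStream, {
//   dimensions: { width: 160, height: 120 }, // illustrative values
//   framerate: 5,
// });
```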
setEarMonitoringAudioFrameParameters
Sets the audio data format for in-ear monitoring.
abstract setEarMonitoringAudioFrameParameters(
sampleRate: number,
channel: number,
mode: RawAudioFrameOpModeType,
samplesPerCall: number
): number;
This method sets the audio data format for the onEarMonitoringAudioFrame callback.
- Before calling this method, you need to call enableInEarMonitoring and set includeAudioFilters to EarMonitoringFilterBuiltInAudioFilters or EarMonitoringFilterNoiseSuppression.
- The SDK calculates the sampling interval using the samplesPerCall, sampleRate, and channel parameters in this method. The formula is: sampling interval = samplesPerCall / (sampleRate × channel). Ensure the interval is no less than 0.01 seconds. The SDK triggers the onEarMonitoringAudioFrame callback based on this interval.
Parameters
- sampleRate
- The sample rate (Hz) of the audio reported in onEarMonitoringAudioFrame, can be set to 8000, 16000, 32000, 44100, or 48000.
- channel
- The number of audio channels reported in onEarMonitoringAudioFrame, can be set to 1 or 2:
- 1: Mono.
- 2: Stereo.
- mode
- The usage mode of the audio frame. See RawAudioFrameOpModeType.
- samplesPerCall
- The number of audio samples reported in onEarMonitoringAudioFrame, typically 1024 in scenarios like CDN streaming.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
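The sampling-interval constraint above, expressed as a small validator you can run before calling the method. The helper is illustrative, not part of the SDK.

```typescript
// sampling interval = samplesPerCall / (sampleRate × channel),
// which must be no less than 0.01 seconds.
function earMonitoringInterval(
  sampleRate: number,
  channel: number,
  samplesPerCall: number
): number {
  return samplesPerCall / (sampleRate * channel);
}

// 1024 samples at 48 kHz stereo gives ~0.0107 s, satisfying the minimum.
const interval = earMonitoringInterval(48000, 2, 1024);
const valid = interval >= 0.01;
// if (valid) {
//   engine.setEarMonitoringAudioFrameParameters(48000, 2, mode, 1024);
// }
```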
setEffectPosition
Sets the playback position of the specified audio effect file.
abstract setEffectPosition(soundId: number, pos: number): number;
After the setting is successful, the local audio effect file starts playing from the specified position.
Parameters
- soundId
- The ID of the audio effect. Each audio effect has a unique ID.
- pos
- The playback position of the audio effect file, in milliseconds.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setEffectsVolume
Sets the playback volume of audio effect files.
abstract setEffectsVolume(volume: number): number;
Timing
You need to call this method after playEffect.
Parameters
- volume
- Playback volume. The range is [0,100]. The default value is 100, which means the original volume.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setEnableSpeakerphone
Enables or disables speakerphone playback.
abstract setEnableSpeakerphone(speakerOn: boolean): number;
For the default audio routes in different scenarios, see setDefaultAudioRouteToSpeakerphone.
- This method only sets the audio route used by the user in the current channel and does not affect the SDK's default audio route. If the user leaves the current channel and joins a new one, the SDK's default audio route will still be used.
- If the user uses external audio playback devices such as Bluetooth or wired headsets, this method has no effect, and audio will only be played through the external device. If multiple external devices are connected, audio will be played through the most recently connected device.
Scenario
If the SDK's default audio route or the setting of setDefaultAudioRouteToSpeakerphone does not meet your needs, you can call this method to switch the current audio route.
Timing
Call this method after joining a channel.
Parameters
- speakerOn
- Whether to enable speakerphone playback:
- true: Enable. Audio route is speaker.
- false: Disable. Audio route is earpiece.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
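How the two audio-route APIs divide responsibilities, sketched with a recording stand-in for the engine (the real methods have the same shapes and return 0 on success).

```typescript
// Recording stand-in: logs each call instead of touching real audio routes.
const routeCalls: string[] = [];
const engine = {
  setDefaultAudioRouteToSpeakerphone(defaultToSpeaker: boolean): number {
    routeCalls.push(`default:${defaultToSpeaker}`);
    return 0;
  },
  setEnableSpeakerphone(speakerOn: boolean): number {
    routeCalls.push(`speaker:${speakerOn}`);
    return 0;
  },
};

// Before joining: make the speaker the default route for every channel.
engine.setDefaultAudioRouteToSpeakerphone(true);
// After joining: switch this channel's playback to the earpiece without
// changing the default used by later channels.
engine.setEnableSpeakerphone(false);
```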
setExtensionProperty
Sets a plugin property.
abstract setExtensionProperty(
provider: string,
extension: string,
key: string,
value: string,
type?: MediaSourceType
): number;
After enabling a plugin, you can call this method to set its properties.
Timing
Call this method after calling enableExtension to enable the plugin.
Parameters
- provider
- The name of the plugin provider.
- extension
- The name of the plugin.
- key
- The key of the plugin property.
- value
- The value corresponding to the plugin property key.
- type
- The media source type of the plugin. See MediaSourceType.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setExtensionProviderProperty
Sets a property for the plugin provider.
abstract setExtensionProviderProperty(
provider: string,
key: string,
value: string
): number;
You can call this method to set properties for the plugin provider and initialize related parameters based on the provider type.
Timing
Call this method after registerExtension and before enableExtension.
Parameters
- provider
- The name of the plugin provider.
- key
- The key of the plugin property.
- value
- The value corresponding to the plugin property key.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setExternalMediaProjection
Sets an external MediaProjection to capture screen video streams.
abstract setExternalMediaProjection(mediaProjection: any): number;
After successfully calling this method, the external MediaProjection you set will replace the MediaProjection obtained by the SDK to capture screen video streams.
When screen sharing stops or IRtcEngine is destroyed, the SDK automatically releases the MediaProjection.
Note: Ensure that your app has obtained the MediaProjection permission before calling this method.
Scenario
If you want to manage the MediaProjection yourself, you can use your own MediaProjection to replace the one obtained by the SDK. Here are two usage scenarios:
- On customized system devices, you can avoid system pop-ups (which require user permission for screen capture) and directly start screen capture.
- In a screen sharing process with one or more subprocesses, this avoids errors caused by creating objects in subprocesses that may lead to capture failure.
Timing
This method must be called before startScreenCapture.
Parameters
- mediaProjection
- A MediaProjection object used to capture screen video streams.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
setExternalRemoteEglContext
Sets the EGL context for rendering remote video streams.
abstract setExternalRemoteEglContext(eglContext: any): number;
By setting this method, developers can replace the SDK's default remote EGL context, facilitating unified EGL context management. When the engine is destroyed, the SDK automatically releases the EGL context.
Scenario
This method is applicable in scenarios where remote video rendering is done using video data in Texture format.
Timing
This method must be called before joining a channel.
Parameters
- eglContext
- The EGL context object used for rendering remote video streams.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
setFaceShapeAreaOptions
Sets face shape area options and specifies the media source.
abstract setFaceShapeAreaOptions(
options: FaceShapeAreaOptions,
type?: MediaSourceType
): number;
If the preset face shaping effects implemented in the setFaceShapeBeautyOptions method do not meet your expectations, you can use this method to set face shape area options to fine-tune individual facial features for more refined face shaping effects.
- On Android, this method is only supported on Android 4.4 and above.
- This method depends on the video enhancement dynamic library libagora_clear_vision_extension.dll. Removing this library will cause the feature to fail.
- This feature has high performance requirements. When calling this method, the SDK automatically checks the capabilities of the current device.
Timing
Call this method after setFaceShapeBeautyOptions.
Parameters
- options
- Face shape area options. See FaceShapeAreaOptions.
- type
- The media source type to which the effect is applied. See MediaSourceType.
Note: In this method, only the following two settings are supported:
- When using the camera to capture local video, keep the default value PrimaryCameraSource.
- To use custom captured video, set this parameter to CustomVideoSource.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
- -4: The current device does not support this feature. Possible reasons include:
- The device does not meet the performance requirements for beauty effects. Consider using a higher-performance device.
- The device runs a version lower than Android 4.4, which is not supported. Consider upgrading the OS or changing the device.
setFaceShapeBeautyOptions
Sets face shaping effect options and specifies the media source.
abstract setFaceShapeBeautyOptions(
enabled: boolean,
options: FaceShapeBeautyOptions,
type?: MediaSourceType
): number;
Call this method to apply preset parameters for facial modifications such as face slimming, eye enlargement, and nose slimming in one go, and to adjust the overall intensity of the effects.
- On Android, this method is only supported on Android 4.4 and above.
- This method depends on the video enhancement dynamic library libagora_clear_vision_extension.dll. Removing this library will cause the feature to fail.
- This feature has high performance requirements. When calling this method, the SDK automatically checks the capabilities of the current device.
Timing
Call this method after enableVideo.
Parameters
- enabled
- Whether to enable face shaping effects:
- true: Enable face shaping.
- false: (Default) Disable face shaping.
- options
- Face shaping style options. See FaceShapeBeautyOptions.
- type
- The media source type to which the effect is applied. See MediaSourceType.
Note: In this method, only the following two settings are supported:
- When using the camera to capture local video, keep the default value PrimaryCameraSource.
- To use custom captured video, set this parameter to CustomVideoSource.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
- -4: The current device does not support this feature. Possible reasons include:
- The device does not meet the performance requirements for beauty effects. Consider using a higher-performance device.
- The device runs a version lower than Android 4.4, which is not supported. Consider upgrading the OS or changing the device.
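The required ordering — enableVideo, then the overall preset style via setFaceShapeBeautyOptions, then per-area fine-tuning via setFaceShapeAreaOptions — sketched with a recording stand-in. The empty option objects are placeholders; see FaceShapeBeautyOptions and FaceShapeAreaOptions for the real fields.

```typescript
// Recording stand-in: captures the call order instead of applying effects.
const calls: string[] = [];
const engine = {
  enableVideo(): number {
    calls.push("enableVideo");
    return 0;
  },
  setFaceShapeBeautyOptions(enabled: boolean, options: object): number {
    calls.push("beauty");
    return 0;
  },
  setFaceShapeAreaOptions(options: object): number {
    calls.push("area");
    return 0;
  },
};

engine.enableVideo();
// Enable the overall preset style first (placeholder options):
engine.setFaceShapeBeautyOptions(true, {});
// Then fine-tune an individual facial area (placeholder options):
engine.setFaceShapeAreaOptions({});
```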
setFilterEffectOptions
Sets filter effect options and specifies the media source.
abstract setFilterEffectOptions(
enabled: boolean,
options: FilterEffectOptions,
type?: MediaSourceType
): number;
- This method depends on the video enhancement dynamic library libagora_clear_vision_extension.dll. Removing this library will cause the feature to fail.
- This feature has high performance requirements. When calling this method, the SDK automatically checks the capabilities of the current device.
Timing
Call this method after enableVideo.
Parameters
- enabled
- Whether to enable filter effects:
- true: Enable filter effects.
- false: (Default) Disable filter effects.
- options
- Filter options. See FilterEffectOptions.
- type
- The media source type to which the effect is applied. See MediaSourceType.
Note: In this method, only the following two settings are supported:
- When using the camera to capture local video, keep the default value PrimaryCameraSource.
- To use custom captured video, set this parameter to CustomVideoSource.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setHeadphoneEQParameters
Sets the low and high frequency parameters of the headphone equalizer.
abstract setHeadphoneEQParameters(lowGain: number, highGain: number): number;
In spatial audio scenarios, if the expected effect is not achieved after calling setHeadphoneEQPreset to use a preset headphone equalizer effect, you can call this method to further adjust the headphone equalizer.
Parameters
- lowGain
- Low frequency parameter of the headphone equalizer. Range: [-10, 10]. The higher the value, the deeper the sound.
- highGain
- High frequency parameter of the headphone equalizer. Range: [-10, 10]. The higher the value, the sharper the sound.
Return Values
- 0: Success.
- < 0: Failure.
- -1: General error (not specifically classified).
setHeadphoneEQPreset
Sets a preset headphone equalizer effect.
abstract setHeadphoneEQPreset(preset: HeadphoneEqualizerPreset): number;
This method is mainly used in spatial audio scenarios. You can select a preset headphone equalizer to listen to audio and achieve the desired audio experience.
Parameters
- preset
- Preset headphone equalizer effect. See HeadphoneEqualizerPreset.
Return Values
- 0: Success.
- < 0: Failure.
- -1: General error (not specifically classified).
setInEarMonitoringVolume
Sets the in-ear monitoring volume.
abstract setInEarMonitoringVolume(volume: number): number;
Timing
Can be called before or after joining the channel.
Parameters
- volume
- Volume, range: [0,400].
- 0: Mute.
- 100: (Default) Original volume.
- 400: 4 times the original volume, with built-in overflow protection.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
- -2: Invalid parameter setting, such as in-ear monitoring volume out of range (< 0 or > 400).
setLocalRenderMode
Updates the local view display mode.
abstract setLocalRenderMode(
renderMode: RenderModeType,
mirrorMode?: VideoMirrorModeType
): number;
After initializing the local user view, you can call this method to update the rendering and mirror mode of the local user view. This method only affects the video image seen by the local user and does not affect the publishing of the local video.
Timing
- You can call this method multiple times during a call to update the display mode of the local user view.
Parameters
- renderMode
- The display mode of the local view. See RenderModeType.
- mirrorMode
- The mirror mode of the local view. See VideoMirrorModeType.
Note: If you use the front camera, the mirror mode of the local user view is enabled by default; if you use the rear camera, the mirror mode is disabled by default.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setLocalRenderTargetFps
Sets the maximum frame rate for local video rendering.
abstract setLocalRenderTargetFps(
sourceType: VideoSourceType,
targetFps: number
): number;
Scenario
In scenarios with low frame rate requirements for video rendering (e.g., screen sharing, online education), you can call this method to set the maximum frame rate for local video rendering. The SDK will try to match the actual rendering frame rate to this value to reduce CPU usage and improve system performance.
Timing
This method can be called before or after joining a channel.
Parameters
- sourceType
- The type of video source. See VideoSourceType.
- targetFps
- Maximum rendering frame rate (fps). Supported values: 1, 7, 10, 15, 24, 30, 60.
Note: Set this parameter to a value lower than the actual frame rate of the video, otherwise the setting will not take effect.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setLocalVideoMirrorMode
Sets the local video mirror mode.
abstract setLocalVideoMirrorMode(mirrorMode: VideoMirrorModeType): number;
- Deprecated
- Deprecated: This method is deprecated.
Parameters
- mirrorMode
- The mirror mode of the local video. See VideoMirrorModeType.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setLocalVoiceEqualization
Sets the local voice equalization effect.
abstract setLocalVoiceEqualization(
bandFrequency: AudioEqualizationBandFrequency,
bandGain: number
): number;
Timing
Can be called before or after joining a channel.
Parameters
- bandFrequency
- Index of the frequency band. The value range is [0,9], representing 10 frequency bands. The corresponding center frequencies are [31, 62, 125, 250, 500, 1k, 2k, 4k, 8k, 16k] Hz. See AudioEqualizationBandFrequency.
- bandGain
- Gain of each band in dB. The value range is [-15,15], and the default is 0.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
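A sketch of applying a simple "bass boost" curve across the 10 bands: positive gain on the low bands, tapering to 0. Band indices 0 through 9 map to the center frequencies listed above; gains are in dB within [-15, 15]. The gain values here are arbitrary illustrative choices.

```typescript
// One gain (dB) per band, index 0 = 31 Hz ... index 9 = 16 kHz.
const bassBoostGains = [6, 5, 4, 2, 0, 0, 0, 0, 0, 0];

// Sanity-check the gains against the documented [-15, 15] range.
const inRange = bassBoostGains.every((g) => g >= -15 && g <= 15);

// bassBoostGains.forEach((gain, band) =>
//   engine.setLocalVoiceEqualization(band, gain)
// );
```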
setLocalVoiceFormant
Sets the formant ratio to change the voice timbre.
abstract setLocalVoiceFormant(formantRatio: number): number;
Formant ratio is a parameter that affects the timbre of the voice. A smaller value results in a deeper voice, while a larger value results in a sharper voice. After setting the formant ratio, all users in the channel can hear the effect. If you want to change both timbre and pitch, Agora recommends using it together with setLocalVoicePitch.
Scenario
In scenarios like voice live streaming, voice chat rooms, and karaoke rooms, you can call this method to set the local voice's formant ratio to change the timbre.
Timing
Can be called before or after joining a channel.
Parameters
- formantRatio
- Formant ratio. The value range is [-1.0, 1.0]. The default is 0.0, which means no change to the original timbre.
Note: Agora recommends a value range of [-0.4, 0.6]. Effects outside this range may sound suboptimal.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setLocalVoicePitch
Sets the local voice pitch.
abstract setLocalVoicePitch(pitch: number): number;
Timing
Can be called before or after joining a channel.
Parameters
- pitch
- Voice frequency. Can be set within the range [0.5, 2.0]. The smaller the value, the lower the pitch. The default is 1.0, meaning no pitch change.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setLocalVoiceReverb
Sets local voice reverb effects.
abstract setLocalVoiceReverb(
reverbKey: AudioReverbType,
value: number
): number;
The SDK provides a simpler method setAudioEffectPreset to directly achieve preset reverb effects such as Pop, R&B, and KTV.
Parameters
- reverbKey
- Reverb effect key. There are 5 reverb effect keys in total. See AudioReverbType.
- value
- Value corresponding to each reverb effect key.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setLogFile
Sets the log file.
abstract setLogFile(filePath: string): number;
- Deprecated
- Deprecated: This method is deprecated. Please set the log file path via the context parameter when calling initialize.
Sets the SDK's output log file. All logs generated during SDK runtime will be written to this file.
Timing
This method must be called immediately after initialize, otherwise the log output may be incomplete.
Parameters
- filePath
- Full path of the log file. The log file is UTF-8 encoded.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and troubleshooting.
setLogFileSize
Sets the size of the SDK log output file.
abstract setLogFileSize(fileSizeInKBytes: number): number;
- Deprecated
- Deprecated: This method is deprecated. Use the logConfig parameter in initialize instead to set the log file size.
- SDK log file names: agorasdk.log, agorasdk.1.log, agorasdk.2.log, agorasdk.3.log, agorasdk.4.log.
- API call log file names: agoraapi.log, agoraapi.1.log, agoraapi.2.log, agoraapi.3.log, agoraapi.4.log.
- Each SDK log file has a default size of 2,048 KB; API call log files also default to 2,048 KB. All log files are UTF-8 encoded.
- The latest logs are always written to agorasdk.log and agoraapi.log.
- When agorasdk.log is full, the SDK performs the following operations in order:
- Delete the agorasdk.4.log file (if it exists).
- Rename agorasdk.3.log to agorasdk.4.log.
- Rename agorasdk.2.log to agorasdk.3.log.
- Rename agorasdk.1.log to agorasdk.2.log.
- Create a new agorasdk.log file.
- The rollover rules for agoraapi.log are the same as for agorasdk.log.
Note: This method only sets the size of the agorasdk.log file and does not affect agoraapi.log.
Parameters
- fileSizeInKBytes
- The size of a single agorasdk.log log file in KB. The valid range is [128,20480], and the default is 2,048 KB. If you set fileSizeInKBytes to less than 128 KB, the SDK automatically adjusts it to 128 KB. If you set it to more than 20,480 KB, the SDK automatically adjusts it to 20,480 KB.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
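The documented clamping of fileSizeInKBytes to [128, 20480] KB can be reproduced as a tiny helper for predicting the effective file size. The helper is illustrative, not part of the SDK.

```typescript
// Mirrors the SDK's adjustment: values below 128 KB become 128 KB,
// values above 20,480 KB become 20,480 KB.
function effectiveLogFileSizeKB(requested: number): number {
  return Math.min(Math.max(requested, 128), 20480);
}

// engine.setLogFileSize(effectiveLogFileSizeKB(4096));
```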
setLogFilter
Sets the log output level.
abstract setLogFilter(filter: LogFilterType): number;
- Deprecated
- Deprecated: Use logConfig in initialize instead.
This method sets the log output level of the SDK. Different output levels can be used individually or in combination. The log levels in order are: LogFilterOff, LogFilterCritical, LogFilterError, LogFilterWarn, LogFilterInfo, and LogFilterDebug.
When you select a level, you will see logs for that level and all levels before it.
For example, if you select LogFilterWarn, you will see logs for LogFilterCritical, LogFilterError, and LogFilterWarn.
Parameters
- filter
- Log filter level. See LogFilterType.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setLogLevel
Sets the log output level of the SDK.
abstract setLogLevel(level: LogLevel): number;
- Deprecated
- Deprecated: This method is deprecated. Set the log output level via the context parameter when calling initialize.
When you select a level, you will see log information for that level.
Parameters
- level
- Log level. See LogLevel.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setLowlightEnhanceOptions
Enables low-light enhancement.
abstract setLowlightEnhanceOptions(
enabled: boolean,
options: LowlightEnhanceOptions,
type?: MediaSourceType
): number;
You can call this method to enable low-light enhancement and configure its effect.
- This method depends on the video enhancement dynamic library libagora_clear_vision_extension.dll. Removing this library will cause the feature to fail.
- Low-light enhancement has certain performance requirements. If the device overheats or experiences issues after enabling this feature, consider lowering the enhancement level or disabling the feature.
- To achieve high-quality low-light enhancement (LowLightEnhanceLevelHighQuality), you must first call setVideoDenoiserOptions to enable video denoising. The mapping is as follows:
- When low-light enhancement is in auto mode (LowLightEnhanceAuto), video denoising must be set to high quality (VideoDenoiserLevelHighQuality) and auto mode (VideoDenoiserAuto).
- When low-light enhancement is in manual mode (LowLightEnhanceManual), video denoising must be set to high quality (VideoDenoiserLevelHighQuality) and manual mode (VideoDenoiserManual).
Scenario
Low-light enhancement can adaptively adjust the video brightness in low-light conditions (such as backlight, cloudy days, or dark scenes) and uneven lighting environments to restore or highlight image details and improve the overall visual quality.
Timing
Call this method after enableVideo.
Parameters
- enabled
- Whether to enable low-light enhancement:
- true: Enable low-light enhancement.
- false: (Default) Disable low-light enhancement.
- options
- Low-light enhancement options to configure the effect. See LowlightEnhanceOptions.
- type
- The media source type to which the effect is applied. See MediaSourceType.
Note: In this method, only the following two settings are supported:
 - When using the camera to capture local video, keep the default value PrimaryCameraSource.
 - To use custom captured video, set this parameter to CustomVideoSource.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
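The pairing between low-light enhancement and video denoising described above can be sketched as a small helper. The string literals mirror the enum names in this reference; `requiredDenoiserFor` itself is a hypothetical illustration, not an SDK API.

```typescript
type LowlightMode = 'LowLightEnhanceAuto' | 'LowLightEnhanceManual';
type DenoiserMode = 'VideoDenoiserAuto' | 'VideoDenoiserManual';

interface DenoiserSetting {
  level: string;      // always high quality for high-quality low-light enhancement
  mode: DenoiserMode; // must match the low-light enhancement mode
}

// Returns the denoiser setting to apply via setVideoDenoiserOptions
// before enabling high-quality low-light enhancement in the given mode.
function requiredDenoiserFor(lowlightMode: LowlightMode): DenoiserSetting {
  return {
    level: 'VideoDenoiserLevelHighQuality',
    mode: lowlightMode === 'LowLightEnhanceAuto'
      ? 'VideoDenoiserAuto'
      : 'VideoDenoiserManual',
  };
}
```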
setMaxMetadataSize
Sets the maximum size of media metadata.
abstract setMaxMetadataSize(size: number): number;
After calling registerMediaMetadataObserver, you can call this method to set the maximum size of media metadata.
Parameters
- size
- The maximum size of media metadata.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setMixedAudioFrameParameters
Sets the raw audio data format after audio capture and playback mixing.
abstract setMixedAudioFrameParameters(
sampleRate: number,
channel: number,
samplesPerCall: number
): number;
The SDK calculates the sampling interval using the samplesPerCall, sampleRate, and channel parameters in this method. The formula is: sampling interval = samplesPerCall / (sampleRate × channel). Ensure the interval is no less than 0.01 seconds. The SDK triggers the onMixedAudioFrame callback based on this interval.
Timing
This method must be called before joining a channel.
Parameters
- sampleRate
- The sample rate (Hz) of the audio data, can be set to 8000, 16000, 32000, 44100, or 48000.
- channel
- The number of audio channels, can be set to 1 or 2:
- 1: Mono.
- 2: Stereo.
- samplesPerCall
- The number of audio samples, typically 1024 in scenarios like CDN streaming.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
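The sampling-interval formula above can be checked with a short calculation; `samplingIntervalSec` is a hypothetical helper for illustration, not an SDK API.

```typescript
// sampling interval = samplesPerCall / (sampleRate × channel),
// which must be at least 0.01 seconds.
function samplingIntervalSec(
  samplesPerCall: number,
  sampleRate: number,
  channel: number
): number {
  return samplesPerCall / (sampleRate * channel);
}

// e.g. 1024 samples at 48000 Hz stereo:
const interval = samplingIntervalSec(1024, 48000, 2); // ≈ 0.0107 s
const isValid = interval >= 0.01;                     // true: interval is long enough
```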
setParameters
Sets SDK parameters through a JSON configuration string, used for technical preview or customized features.
abstract setParameters(parameters: string): number;
Parameters
- parameters
- Parameters in JSON string format.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setPlaybackAudioFrameBeforeMixingParameters
Sets the raw audio playback data format before mixing.
abstract setPlaybackAudioFrameBeforeMixingParameters(
sampleRate: number,
channel: number,
samplesPerCall: number
): number;
The SDK triggers the onPlaybackAudioFrameBeforeMixing callback based on this sampling interval.
Timing
This method must be called before joining a channel.
Parameters
- sampleRate
- The sample rate (Hz) of the audio data, can be set to 8000, 16000, 32000, 44100, or 48000.
- channel
- The number of audio channels, can be set to 1 or 2:
- 1: Mono.
- 2: Stereo.
- samplesPerCall
- Sets the number of audio samples returned in the onPlaybackAudioFrameBeforeMixing callback. In RTMP streaming scenarios, it is recommended to set this to 1024.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setPlaybackAudioFrameParameters
Sets the data format of the playback raw audio.
abstract setPlaybackAudioFrameParameters(
sampleRate: number,
channel: number,
mode: RawAudioFrameOpModeType,
samplesPerCall: number
): number;
The SDK calculates the sampling interval using the samplesPerCall, sampleRate, and channel parameters in this method. The formula is: sampling interval = samplesPerCall / (sampleRate × channel). Ensure that the sampling interval is no less than 0.01 seconds. The SDK triggers the onPlaybackAudioFrame callback based on this interval.
Timing
You must call this method before joining a channel.
Parameters
- sampleRate
- The sample rate (Hz) of the audio data. You can set it to 8000, 16000, 24000, 32000, 44100, or 48000.
- channel
- The number of audio channels. You can set it to 1 or 2:
- 1: Mono.
- 2: Stereo.
- mode
- The operation mode of the audio frame. See RawAudioFrameOpModeType.
- samplesPerCall
- The number of audio samples per call. Typically set to 1024 in scenarios such as CDN streaming.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and troubleshooting.
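Under the same interval constraint (samplesPerCall / (sampleRate × channel) ≥ 0.01 s), a hypothetical helper can derive the smallest valid samplesPerCall for a given format:

```typescript
// Smallest samplesPerCall that keeps the sampling interval at or
// above 0.01 seconds. Dividing by 100 instead of multiplying by 0.01
// avoids floating-point rounding on exact multiples.
function minSamplesPerCall(sampleRate: number, channel: number): number {
  return Math.ceil((sampleRate * channel) / 100);
}

const minStereo48k = minSamplesPerCall(48000, 2); // 960 samples
const minMono441k = minSamplesPerCall(44100, 1);  // 441 samples
```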
setRecordingAudioFrameParameters
Sets the data format of the recorded raw audio.
abstract setRecordingAudioFrameParameters(
sampleRate: number,
channel: number,
mode: RawAudioFrameOpModeType,
samplesPerCall: number
): number;
The SDK calculates the sampling interval using the samplesPerCall, sampleRate, and channel parameters in this method. The formula is: sampling interval = samplesPerCall / (sampleRate × channel). Ensure that the sampling interval is no less than 0.01 seconds. The SDK triggers the onRecordAudioFrame callback based on this interval.
Timing
You must call this method before joining a channel.
Parameters
- sampleRate
- The sample rate (Hz) of the audio data. You can set it to 8000, 16000, 32000, 44100, or 48000.
- channel
- The number of audio channels. You can set it to 1 or 2:
- 1: Mono.
- 2: Stereo.
- mode
- The operation mode of the audio frame. See RawAudioFrameOpModeType.
- samplesPerCall
- The number of audio samples per call. Typically set to 1024 in scenarios such as CDN streaming.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and troubleshooting.
setRemoteDefaultVideoStreamType
Sets the default video stream type to subscribe to.
abstract setRemoteDefaultVideoStreamType(streamType: VideoStreamType): number;
- By default, the SDK enables adaptive low-quality stream mode (AutoSimulcastStream) on the sender side, meaning the sender only sends the high-quality stream. Only receivers with host role can call this method to request the low-quality stream. Once the sender receives the request, it starts sending the low-quality stream. At this point, all users in the channel can call this method to switch to low-quality stream subscription mode.
- If the sender calls setDualStreamMode and sets mode to DisableSimulcastStream (never send low-quality stream), this method has no effect.
- If the sender calls setDualStreamMode and sets mode to EnableSimulcastStream (always send low-quality stream), both host and audience receivers can call this method to switch to low-quality stream subscription mode.
Timing
You must call this method before joining a channel. The SDK does not support changing the default video stream type after joining a channel.
Parameters
- streamType
- The default video stream type to subscribe to. See VideoStreamType.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setRemoteRenderMode
Updates the remote view display mode.
abstract setRemoteRenderMode(
uid: number,
renderMode: RenderModeType,
mirrorMode: VideoMirrorModeType
): number;
After initializing the remote user view, you can call this method to update the rendering and mirror mode of the remote user view as displayed locally. This method only affects the video image seen by the local user.
- You can call this method multiple times during a call to update the display mode of the remote user view.
Parameters
- uid
- Remote user ID.
- renderMode
- The rendering mode of the remote user view. See RenderModeType.
- mirrorMode
- The mirror mode of the remote user view. See VideoMirrorModeType.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setRemoteRenderTargetFps
Sets the maximum frame rate for remote video rendering.
abstract setRemoteRenderTargetFps(targetFps: number): number;
Scenario
In scenarios where high video rendering frame rate is not required (e.g., screen sharing, online education), or when the remote user is using mid- to low-end devices, you can call this method to set the maximum frame rate for remote video rendering. The SDK will try to match the actual rendering frame rate to this value to reduce CPU usage and improve system performance.
Timing
This method can be called before or after joining a channel.
Parameters
- targetFps
- Maximum rendering frame rate (fps). Supported values: 1, 7, 10, 15, 24, 30, 60.
Note: Set this parameter to a value lower than the actual frame rate of the video, otherwise the setting will not take effect.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setRemoteSubscribeFallbackOption
Sets the fallback option for subscribed audio and video streams under poor network conditions.
abstract setRemoteSubscribeFallbackOption(
option: StreamFallbackOptions
): number;
Under poor network conditions, the quality of real-time audio and video may degrade. You can call this method and set option to StreamFallbackOptionVideoStreamLow or StreamFallbackOptionAudioOnly. When the downlink network is weak and audio/video quality is severely affected, the SDK will switch the video stream to a lower stream or disable the video stream to ensure audio quality. The SDK continuously monitors network quality and resumes audio and video subscription when conditions improve.
When the subscribed stream falls back to audio or recovers to audio and video, the SDK triggers the onRemoteSubscribeFallbackToAudioOnly callback.
Parameters
- option
- Fallback option for the subscribed stream. See StreamFallbackOptions.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setRemoteUserSpatialAudioParams
Sets the spatial audio parameters for a remote user.
abstract setRemoteUserSpatialAudioParams(
uid: number,
params: SpatialAudioParams
): number;
You need to call this method after enableSpatialAudio. After successfully setting the spatial audio parameters for the remote user, the local user will hear spatial audio effects from the remote user.
Parameters
- uid
- User ID. Must be the same as the user ID used when joining the channel.
- params
- The spatial audio parameters. See SpatialAudioParams.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setRemoteVideoStreamType
Sets the video stream type to subscribe to.
abstract setRemoteVideoStreamType(
uid: number,
streamType: VideoStreamType
): number;
- By default, the SDK enables adaptive low-quality stream mode (AutoSimulcastStream) on the sender side, meaning the sender only sends the high-quality stream. Only receivers with host role can call this method to request the low-quality stream. Once the sender receives the request, it starts sending the low-quality stream. At this point, all users in the channel can call this method to switch to low-quality stream subscription mode.
- If the sender calls setDualStreamMode and sets mode to DisableSimulcastStream (never send low-quality stream), this method has no effect.
- If the sender calls setDualStreamMode and sets mode to EnableSimulcastStream (always send low-quality stream), both host and audience receivers can call this method to switch to low-quality stream subscription mode.
- You can call this method either before or after joining a channel.
- If you call both this method and setRemoteDefaultVideoStreamType, the SDK uses the settings from this method.
Parameters
- uid
- The user ID.
- streamType
- The video stream type. See VideoStreamType.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
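The interaction with setDualStreamMode described above can be sketched as a pure function. The enum-like string names mirror this reference; the function is illustrative only, not an SDK API.

```typescript
type DualStreamMode =
  | 'AutoSimulcastStream'
  | 'DisableSimulcastStream'
  | 'EnableSimulcastStream';
type StreamType = 'VideoStreamHigh' | 'VideoStreamLow';

// Which stream the receiver effectively gets, given the sender's
// setDualStreamMode setting and the receiver's requested type.
function effectiveStreamType(
  senderMode: DualStreamMode,
  requested: StreamType
): StreamType {
  // If the sender never sends the low-quality stream, requesting it has no effect.
  if (senderMode === 'DisableSimulcastStream') return 'VideoStreamHigh';
  return requested;
}
```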
setRemoteVideoSubscriptionOptions
Sets the subscription options for remote video streams.
abstract setRemoteVideoSubscriptionOptions(
uid: number,
options: VideoSubscriptionOptions
): number;
- If IVideoFrameObserver is registered, both raw and encoded data are subscribed by default.
- If IVideoEncodedFrameObserver is registered, only encoded data is subscribed by default.
- If both observers are registered, the default behavior follows the later registered observer. For example, if IVideoFrameObserver is registered later, both raw and encoded data are subscribed by default.
If you want to set subscription options for the video streams of specific remote users (uids), you can call this method.
Parameters
- uid
- Remote user ID.
- options
- Subscription settings for the video stream. See VideoSubscriptionOptions.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setRemoteVoicePosition
Sets the 2D position of a remote user's voice, i.e., horizontal position.
abstract setRemoteVoicePosition(
uid: number,
pan: number,
gain: number
): number;
Sets the 2D position and volume of a remote user's voice to help the local user locate the sound source. By calling this method, you can set the position where the remote user's voice appears. The difference between the left and right channels creates a sense of direction, allowing the user to determine the real-time position of the remote user. In multiplayer online games such as battle royale, this method can effectively enhance the sense of direction of game characters and simulate a real environment.
- Before using this method, you must call enableSoundPositionIndication before joining the channel to enable stereo sound for remote users.
- For the best audio experience, it is recommended to use wired headphones when using this method.
- This method must be called after joining a channel.
Parameters
- uid
- The ID of the remote user.
- pan
- Sets the 2D position of the remote user's voice. The range is [-1.0, 1.0]:
- (Default) 0.0: Voice appears in the center.
- -1.0: Voice appears on the left.
- 1.0: Voice appears on the right.
- gain
- Sets the volume of the remote user's voice. The range is [0.0, 100.0], with a default of 100.0, representing the original volume. The smaller the value, the lower the volume.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
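The documented ranges for pan and gain can be enforced with a small guard before calling the method; `clampVoicePosition` is a hypothetical helper, not an SDK API.

```typescript
// Clamps pan into [-1.0, 1.0] and gain into [0.0, 100.0], the ranges
// documented for setRemoteVoicePosition.
function clampVoicePosition(
  pan: number,
  gain: number
): { pan: number; gain: number } {
  return {
    pan: Math.min(1.0, Math.max(-1.0, pan)),
    gain: Math.min(100.0, Math.max(0.0, gain)),
  };
}

// Usage sketch (engine is an IRtcEngine instance, uid a remote user ID):
// const p = clampVoicePosition(rawPan, rawGain);
// engine.setRemoteVoicePosition(uid, p.pan, p.gain);
```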
setRouteInCommunicationMode
Selects the audio route in communication volume mode.
abstract setRouteInCommunicationMode(route: number): number;
This method is used to switch the audio route from a Bluetooth headset to the earpiece, wired headset, or speaker in communication volume mode (MODE_IN_COMMUNICATION).
Timing
Can be called before or after joining a channel.
Parameters
- route
- The desired audio route:
- -1: The system default audio route.
- 0: Headset with microphone.
- 1: Earpiece.
- 2: Headset without microphone.
- 3: Built-in speaker.
- 4: (Not supported) External speaker.
- 5: Bluetooth headset.
- 6: USB device.
Return Values
The return value of this method has no practical meaning.
setScreenCaptureContentHint
Sets the content type of screen sharing.
abstract setScreenCaptureContentHint(contentHint: VideoContentHint): number;
The SDK optimizes the sharing experience using different algorithms based on the content type. If you do not call this method, the SDK defaults the screen sharing content type to ContentHintNone, meaning no specific content type.
Parameters
- contentHint
- The content type of screen sharing. See VideoContentHint.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
- -2: Invalid parameter.
- -8: Invalid screen sharing state. This may occur if you are already sharing another screen or window. Try calling stopScreenCapture to stop the current sharing, then restart screen sharing.
setScreenCaptureScenario
Sets the screen sharing scenario.
abstract setScreenCaptureScenario(screenScenario: ScreenScenarioType): number;
When starting screen or window sharing, you can call this method to set the screen sharing scenario. The SDK adjusts the shared video quality based on the scenario you set.
Parameters
- screenScenario
- The screen sharing scenario. See ScreenScenarioType.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
setSubscribeAudioAllowlist
Sets the audio subscription allowlist.
abstract setSubscribeAudioAllowlist(
uidList: number[],
uidNumber: number
): number;
You can call this method to specify the audio streams you want to subscribe to.
- This method can be called before or after joining a channel.
- The audio subscription allowlist is not affected by muteRemoteAudioStream, muteAllRemoteAudioStreams, or the autoSubscribeAudio setting in ChannelMediaOptions.
- After setting the allowlist, if you leave and rejoin the channel, the allowlist remains effective.
- If a user is included in both the audio subscription allowlist and blocklist, only the blocklist takes effect.
Parameters
- uidList
- List of user IDs in the audio subscription allowlist.
If you want to subscribe to a specific user's audio stream, add that user's ID to this list. To remove a user from the allowlist, call setSubscribeAudioAllowlist again with an updated list that excludes the uid of the user you want to remove.
- uidNumber
- Number of users in the allowlist.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setSubscribeAudioBlocklist
Sets the audio subscription blocklist.
abstract setSubscribeAudioBlocklist(
uidList: number[],
uidNumber: number
): number;
You can call this method to specify the audio streams you do not want to subscribe to.
- This method can be called before or after joining a channel.
- The audio subscription blocklist is not affected by muteRemoteAudioStream, muteAllRemoteAudioStreams, or the autoSubscribeAudio setting in ChannelMediaOptions.
- After setting the blocklist, if you leave and rejoin the channel, the blocklist remains effective.
- If a user is included in both the audio subscription allowlist and blocklist, only the blocklist takes effect.
Parameters
- uidList
- List of user IDs in the subscription blocklist.
If you want to exclude a specific user's audio stream from being subscribed to, add that user's ID to this list. To remove a user from the blocklist, call setSubscribeAudioBlocklist again with an updated list that excludes the uid of the user you want to remove.
- uidNumber
- Number of users in the blocklist.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
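The precedence rule shared by the allowlist and blocklist (the blocklist always wins) can be sketched as follows. `shouldSubscribeAudio` is a hypothetical helper, and the default-subscribe behavior when no allowlist is set is an assumption for illustration.

```typescript
// Decides whether a remote user's audio stream is subscribed to,
// given the current allowlist and blocklist.
function shouldSubscribeAudio(
  uid: number,
  allowlist: number[],
  blocklist: number[]
): boolean {
  if (blocklist.includes(uid)) return false;           // blocklist always wins
  if (allowlist.length > 0) return allowlist.includes(uid); // allowlist restricts when set
  return true; // assumption: subscribe by default when no allowlist is set
}
```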
setSubscribeVideoAllowlist
Sets the video subscription allowlist.
abstract setSubscribeVideoAllowlist(
uidList: number[],
uidNumber: number
): number;
You can call this method to specify the video streams you want to subscribe to.
- You can call this method either before or after joining a channel.
- The video subscription allowlist is not affected by muteRemoteVideoStream, muteAllRemoteVideoStreams, or autoSubscribeVideo in ChannelMediaOptions.
- After setting the allowlist, if you leave and rejoin the channel, the allowlist remains effective.
- If a user is in both the video subscription blocklist and allowlist, only the blocklist takes effect.
Parameters
- uidList
- User ID list of the video subscription allowlist.
If you want to subscribe only to a specific user's video stream, add that user's ID to this list. To remove a user from the allowlist, you need to call the setSubscribeVideoAllowlist method again and update the video subscription allowlist to exclude the uid of the user you want to remove.
- uidNumber
- The number of users in the allowlist.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setSubscribeVideoBlocklist
Sets the video subscription blocklist.
abstract setSubscribeVideoBlocklist(
uidList: number[],
uidNumber: number
): number;
You can call this method to specify the video streams you do not want to subscribe to.
- You can call this method either before or after joining a channel.
- The video subscription blocklist is not affected by muteRemoteVideoStream, muteAllRemoteVideoStreams, or autoSubscribeVideo in ChannelMediaOptions.
- After setting the blocklist, if you leave and rejoin the channel, the blocklist remains effective.
- If a user is in both the video subscription blocklist and allowlist, only the blocklist takes effect.
Parameters
- uidList
- User ID list of the video subscription blocklist.
If you want to exclude a specific user's video stream from being subscribed to, add that user's ID to this list. To remove a user from the blocklist, you need to call the setSubscribeVideoBlocklist method again and update the user ID list to exclude the uid of the user you want to remove.
- uidNumber
- The number of users in the blocklist.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setVideoDenoiserOptions
Enables video denoising.
abstract setVideoDenoiserOptions(
enabled: boolean,
options: VideoDenoiserOptions,
type?: MediaSourceType
): number;
You can call this method to enable video denoising and configure its effect.
- This method depends on the video enhancement dynamic library libagora_clear_vision_extension.dll. Removing this library will cause the feature to fail.
- Video denoising has certain performance requirements. If the device overheats or experiences issues after enabling this feature, consider lowering the denoising level or disabling the feature.
Scenario
Poor lighting and low-end video capture devices can cause noticeable noise in video images, affecting video quality. In real-time interactive scenarios, video noise can also consume bitrate resources during encoding and reduce encoding efficiency.
Timing
Call this method after enableVideo.
Parameters
- enabled
- Whether to enable video denoising:
- true: Enable video denoising.
- false: (Default) Disable video denoising.
- options
- Video denoising options to configure the effect. See VideoDenoiserOptions.
- type
- The media source type to which the effect is applied. See MediaSourceType.
Note: In this method, only the following two settings are supported:
 - When using the camera to capture local video, keep the default value PrimaryCameraSource.
 - To use custom captured video, set this parameter to CustomVideoSource.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setVideoEncoderConfiguration
Sets video encoding properties.
abstract setVideoEncoderConfiguration(
config: VideoEncoderConfiguration
): number;
Sets the encoding properties for local video. Each video encoding configuration corresponds to a series of video-related parameter settings, including resolution, frame rate, and bitrate.
- The
configparameter of this method sets the maximum values achievable under ideal network conditions. If the network is poor, the video engine will not use thisconfigto render local video and will automatically downgrade to suitable video parameters.
Timing
This method must be called before joining a channel. It is recommended to call this method before enableVideo to speed up the time to first frame.
Parameters
- config
- Video encoding parameter configuration. See VideoEncoderConfiguration.
Return Values
- 0: Method call succeeds.
- < 0: Method call fails. See Error Codes for details and resolution suggestions.
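A plausible configuration sketch follows, assuming VideoEncoderConfiguration exposes dimensions, frameRate, and bitrate fields; check the actual type definition in your SDK version before use.

```typescript
// Example encoding target: 960×540 at 15 fps, 1000 Kbps. Under poor
// network conditions the SDK treats these as maximums and downgrades
// automatically, per the documentation above.
const encoderConfig = {
  dimensions: { width: 960, height: 540 }, // target resolution
  frameRate: 15,                           // target frame rate (fps)
  bitrate: 1000,                           // target bitrate (Kbps)
};

// Usage sketch (engine is an IRtcEngine instance); call before joining
// a channel, ideally before enableVideo:
// engine.setVideoEncoderConfiguration(encoderConfig);
```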
setVideoScenario
Sets the video application scenario.
abstract setVideoScenario(scenarioType: VideoApplicationScenarioType): number;
After successfully calling this method to set the video application scenario, the SDK applies best practice strategies based on the specified scenario, automatically adjusting key performance indicators to optimize video experience quality.
Parameters
- scenarioType
- Video application scenario. See VideoApplicationScenarioType.
ApplicationScenarioMeeting(1) is suitable for meeting scenarios. If the user has called setDualStreamMode to set the low stream to always not send (DisableSimulcastStream), the meeting scenario has no effect on dynamic switching of the low stream. This enum value only applies to broadcaster vs broadcaster scenarios. The SDK enables the following strategies for this scenario:
- For high bitrate requirements of low streams in meeting scenarios, multiple weak network resistance technologies are automatically enabled to improve the low stream's resistance and ensure smoothness when subscribing to multiple streams.
- Real-time monitoring of the number of subscribers to the high stream, dynamically adjusting high stream configuration:
- When no one subscribes to the high stream, bitrate and frame rate are automatically reduced to save upstream bandwidth and consumption.
- When someone subscribes to the high stream, it resets to the VideoEncoderConfiguration set by the user's most recent call to setVideoEncoderConfiguration. If not previously set, the following values are used:
- Resolution: 960 × 540
- Frame rate: 15 fps
- Bitrate: 1000 Kbps
- Real-time monitoring of the number of subscribers to the low stream, dynamically enabling or disabling the low stream:
- When no one subscribes to the low stream, it is automatically disabled to save upstream bandwidth and consumption.
- When someone subscribes to the low stream, it is enabled and reset to the SimulcastStreamConfig set by the user's most recent call to setDualStreamMode. If not previously set, the following values are used:
- Resolution: 480 × 272
- Frame rate: 15 fps
- Bitrate: 500 Kbps
ApplicationScenario1v1(2) is suitable for 1v1 video call scenarios. The SDK optimizes strategies for low latency and high video quality, improving performance in image quality, first frame rendering, latency on mid-to-low-end devices, and smoothness under weak networks.
ApplicationScenarioLiveshow(3) is suitable for showroom live streaming scenarios. For this scenario's high requirements on first frame rendering time and image clarity, the SDK applies optimizations such as enabling audio/video frame accelerated rendering by default to enhance first frame experience (no need to call enableInstantMediaRendering), and enabling B-frames by default to ensure high image quality and transmission efficiency. It also enhances video quality and smoothness under weak networks and on low-end devices.
Return Values
- 0: Method call succeeds.
- < 0: Method call fails. See Error Codes for details and resolution suggestions.
- -1: General error (not specifically categorized).
- -4: Setting video scenario is not supported. Possible reason: using an audio-only SDK.
- -7: IRtcEngine object is not initialized. You need to initialize the IRtcEngine object successfully before calling this method.
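The fallback encoder values that the meeting scenario uses when setVideoEncoderConfiguration and setDualStreamMode were never called (listed above) can be captured as plain data for reference:

```typescript
// Defaults applied in ApplicationScenarioMeeting when no explicit
// configuration was set, per the documentation above.
const meetingHighStreamDefaults = {
  width: 960,
  height: 540,
  frameRateFps: 15,
  bitrateKbps: 1000,
};

const meetingLowStreamDefaults = {
  width: 480,
  height: 272,
  frameRateFps: 15,
  bitrateKbps: 500,
};
```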
setVoiceBeautifierParameters
Sets parameters for preset voice beautifier effects.
abstract setVoiceBeautifierParameters(
preset: VoiceBeautifierPreset,
param1: number,
param2: number
): number;
Timing
- Call setAudioScenario to set the audio scenario to high-quality mode, i.e., AudioScenarioGameStreaming(3).
- Call setAudioProfile to set the profile to AudioProfileMusicHighQuality(4) or AudioProfileMusicHighQualityStereo(5).
- This method can be called before or after joining a channel.
- Do not set the profile parameter of setAudioProfile to AudioProfileSpeechStandard(1) or AudioProfileIot(6), otherwise this method will not take effect.
- This method is optimized for voice and is not recommended for audio data containing music.
- After calling setVoiceBeautifierParameters, it is not recommended to call the following methods, or the effect set by setVoiceBeautifierParameters will be overridden:
- This method depends on the beautifier dynamic library libagora_audio_beauty_extension.dll. Deleting this library will cause the feature to fail to start properly.
Parameters
- preset
- Preset effect:
SINGING_BEAUTIFIER: Singing beautifier.
- param1
- Gender characteristics of the singing voice:
1: Male voice.2: Female voice.
- param2
- Reverb effect of the singing voice:
1: Small room reverb.2: Large room reverb.3: Hall reverb.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setVoiceBeautifierPreset
Sets a predefined voice beautifier effect.
abstract setVoiceBeautifierPreset(preset: VoiceBeautifierPreset): number;
Call this method to set a predefined voice beautifier effect for the local user who sends the stream. After the effect is set, all users in the channel can hear it. You can choose different beautifier effects for different scenarios.
- Do not set the profile parameter in setAudioProfile to AudioProfileSpeechStandard(1) or AudioProfileIot(6), or this method will not take effect.
- This method works best for voice processing. It is not recommended to use it for audio data that contains music.
- After calling setVoiceBeautifierPreset, it is not recommended to call the following methods, or the effect set by setVoiceBeautifierPreset will be overridden:
- This method depends on the voice beautifier dynamic library libagora_audio_beauty_extension.dll. If the library is deleted, this feature will not work properly.
Timing
- Call setAudioScenario to set the audio scenario to high-quality mode, i.e., AudioScenarioGameStreaming(3).
- Call setAudioProfile and set profile to AudioProfileMusicHighQuality(4) or AudioProfileMusicHighQualityStereo(5).
Parameters
- preset
- Predefined voice beautifier effect option. See VoiceBeautifierPreset.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setVoiceConversionPreset
Sets a predefined voice conversion effect.
abstract setVoiceConversionPreset(preset: VoiceConversionPreset): number;
Call this method to set a predefined voice conversion effect for the local user who sends the stream. After the effect is set, all users in the channel can hear it. You can choose different voice conversion effects for different scenarios.
- Do not set the profile parameter in setAudioProfile to AudioProfileSpeechStandard(1) or AudioProfileIot(6), or this method will not take effect.
- This method works best for voice processing. It is not recommended to use it for audio data that contains music.
- After calling setVoiceConversionPreset, it is not recommended to call the following methods, or the effect set by setVoiceConversionPreset will be overridden:
- This method depends on the voice beautifier dynamic library libagora_audio_beauty_extension.dll. If the library is deleted, this feature will not work properly.
Timing
- Call setAudioScenario to set the audio scenario to high-quality mode, i.e., AudioScenarioGameStreaming(3).
- Call setAudioProfile and set profile to AudioProfileMusicHighQuality(4) or AudioProfileMusicHighQualityStereo(5).
Parameters
- preset
- Predefined voice conversion effect option: VoiceConversionPreset.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
setVolumeOfEffect
Sets the playback volume of the specified audio effect file.
abstract setVolumeOfEffect(soundId: number, volume: number): number;
Timing
You need to call this method after playEffect.
Parameters
- soundId
- The ID of the specified audio effect. Each audio effect has a unique ID.
- volume
- Playback volume. The range is [0,100]. The default value is 100, which means the original volume.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
startAudioMixing
Starts playing a music file.
abstract startAudioMixing(
filePath: string,
loopback: boolean,
cycle: number,
startPos?: number
): number;
For supported audio file formats, see What audio formats does the RTC SDK support. If the local music file does not exist, the file format is not supported, or the online music file URL is inaccessible, the SDK reports AudioMixingReasonCanNotOpen.
- Using this method to play short sound effect files may result in failure. If you need to play sound effects, use playEffect instead.
- If you need to call this method multiple times, make sure the interval between calls is greater than 500 ms.
- When calling this method on Android, note the following:
- Make sure to use a device running Android 4.2 or later with API Level 16 or higher.
- If playing an online music file, avoid using a redirect URL. Redirect URLs may not open on some devices.
- If calling this method on an emulator, ensure the music file is located in the /sdcard/ directory and is in MP3 format.
Timing
You can call this method before or after joining a channel.
Parameters
- filePath
- File path:
- Android: File path, must include the file name and extension. Supports URL addresses for online files, URI addresses for local files, absolute paths, or paths starting with /assets/. Accessing local files via absolute path may cause permission issues; using URI addresses is recommended. For example: content://com.android.providers.media.documents/document/audio%3A14441.
- iOS: Absolute path or URL of the audio file, must include the file name and extension. For example: /var/mobile/Containers/Data/audio.mp4.
- loopback
- Whether to play the music file only locally:
- true: Play the music file locally only. Only the local user can hear the music.
- false: Publish the locally played music file to remote users. Both local and remote users can hear the music.
- cycle
- Number of times to play the music file.
- > 0: Number of times to play. For example, 1 means play once.
- -1: Play in an infinite loop.
- startPos
- Playback position of the music file in milliseconds.
Return Values
- 0: Success.
- < 0: Failure:
- -1: General error (uncategorized).
- -2: Invalid parameter.
- -3: SDK not ready:
- Check if the audio module is enabled.
- Check the integrity of the assembly.
- IRtcEngine initialization failed. Please reinitialize IRtcEngine.
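The 500 ms minimum interval between startAudioMixing calls noted above can be enforced on the app side. The sketch below is a hypothetical guard, not part of the SDK; `AudioMixingThrottle` is a name introduced here for illustration.

```typescript
// Hypothetical app-side guard (not part of the SDK): enforces the documented
// rule that consecutive startAudioMixing calls must be more than 500 ms apart.
class AudioMixingThrottle {
  private lastCallMs = -Infinity;

  // Returns true if a call is allowed at time `nowMs`, and records it;
  // returns false if the previous allowed call was 500 ms ago or less.
  tryCall(nowMs: number): boolean {
    if (nowMs - this.lastCallMs <= 500) {
      return false; // too soon; the SDK may reject rapid repeated calls
    }
    this.lastCallMs = nowMs;
    return true;
  }
}

// Usage sketch, assuming `engine` is an initialized IRtcEngine:
// const throttle = new AudioMixingThrottle();
// if (throttle.tryCall(Date.now())) {
//   engine.startAudioMixing('/assets/music.mp3', false, 1);
// }
```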
startCameraCapture
Starts video capture using the camera.
abstract startCameraCapture(
sourceType: VideoSourceType,
config: CameraCapturerConfiguration
): number;
Call this method to start multiple camera captures simultaneously by specifying sourceType.
On iOS, to enable multi-camera capture, you need to call enableMultiCamera and set enabled to true before calling this method.
Parameters
- sourceType
- Type of video source. See VideoSourceType.
Note:
- iOS devices support up to 2 video streams from camera capture (requires devices with multiple cameras or external camera support).
- Android devices support up to 4 video streams from camera capture (requires devices with multiple cameras or external camera support).
- config
- Video capture configuration. See CameraCapturerConfiguration.
Note: On iOS, this parameter has no effect. Use the config parameter in enableMultiCamera to configure video capture.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
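The per-platform limits on simultaneous camera streams noted above can be checked before starting another capture. The sketch below is a hypothetical helper, not part of the SDK; `canStartAnotherCameraCapture` is a name introduced here for illustration.

```typescript
// Hypothetical helper (not part of the SDK): checks the documented limits on
// simultaneous camera capture streams (up to 2 on iOS, up to 4 on Android).
function canStartAnotherCameraCapture(
  platform: 'ios' | 'android',
  activeCameraStreams: number
): boolean {
  const limit = platform === 'ios' ? 2 : 4;
  return activeCameraStreams < limit;
}

// Usage sketch, assuming `engine` is an initialized IRtcEngine:
// if (canStartAnotherCameraCapture('android', current)) {
//   engine.startCameraCapture(sourceType, config);
// }
```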
startDirectCdnStreaming
Starts direct CDN streaming from the host.
abstract startDirectCdnStreaming(
eventHandler: IDirectCdnStreamingEventHandler,
publishUrl: string,
options: DirectCdnStreamingMediaOptions
): number;
- Deprecated
- Deprecated since v4.6.2.
When calling this method, do not set both publishCameraTrack and publishCustomVideoTrack to true, nor both publishMicrophoneTrack and publishCustomAudioTrack to true. You can configure the media options (DirectCdnStreamingMediaOptions) based on your scenario. For example:
If you want to push custom audio and video streams from the host, set the media options as follows:
- Set publishCustomAudioTrack to true and call pushAudioFrame.
- Set publishCustomVideoTrack to true and call pushVideoFrame.
- Ensure publishCameraTrack is false (default).
- Ensure publishMicrophoneTrack is false (default).
You can also set publishCustomAudioTrack or publishMicrophoneTrack to true in DirectCdnStreamingMediaOptions and call pushAudioFrame to push audio-only streams.
Parameters
- eventHandler
- See onDirectCdnStreamingStateChanged and onDirectCdnStreamingStats.
- publishUrl
- CDN streaming URL.
- options
- Media options for the host. See DirectCdnStreamingMediaOptions.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
startEchoTest
Starts an audio and video call loopback test.
abstract startEchoTest(config: EchoTestConfiguration): number;
To test whether local audio/video sending and receiving are functioning properly, you can call this method to start an audio and video call loopback test, which checks whether the system's audio/video devices and the user's uplink/downlink network are working correctly. After the test starts, the user should speak or face the camera. The audio or video will play back after about 2 seconds. If audio plays back normally, it means the system audio devices and network are working. If video plays back normally, it means the system video devices and network are working.
- When calling this method in a channel, ensure no audio/video streams are being published.
- After calling this method, you must call stopEchoTest to end the test. Otherwise, the user cannot perform another loopback test or join a channel.
- In live streaming scenarios, only the host can call this method.
Timing
This method can be called before or after joining a channel.
Parameters
- config
- Configuration for the audio and video call loopback test. See EchoTestConfiguration.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
startLastmileProbeTest
Starts a last-mile network probe test before a call.
abstract startLastmileProbeTest(config: LastmileProbeConfig): number;
Starts a last-mile network probe test before a call to provide feedback on uplink and downlink bandwidth, packet loss, jitter, and round-trip time.
Timing
This method must be called before joining a channel. Do not call other methods until you receive the onLastmileQuality and onLastmileProbeResult callbacks, or this method may fail due to frequent API calls.
Parameters
- config
- Configuration for the last-mile network probe. See LastmileProbeConfig.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
startLocalAudioMixer
Starts local audio mixing.
abstract startLocalAudioMixer(config: LocalAudioMixerConfiguration): number;
- To mix locally captured audio, set publishMixedAudioTrack in ChannelMediaOptions to true to publish the mixed audio stream to the channel.
- To mix remote audio streams, ensure that the remote streams are published in the channel and have been subscribed to.
Scenario
- Used with local video compositing, to synchronize and publish the audio streams related to the composed video.
- In live streaming, users receive audio streams from the channel, mix them locally, and forward them to another channel.
- In education, teachers can mix audio from interactions with students locally and forward the mixed stream to another channel.
Timing
This method can be called before or after joining a channel.
Parameters
- config
- Configuration for local audio mixing. See LocalAudioMixerConfiguration.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
- -7: IRtcEngine object is not initialized. You must initialize the IRtcEngine object before calling this method.
startLocalVideoTranscoder
Starts local video compositing.
abstract startLocalVideoTranscoder(
config: LocalTranscoderConfiguration
): number;
After calling this method, you can merge multiple video streams locally into a single stream. For example, merge video from the camera, screen sharing, media player, remote users, video files, images, etc., into one stream, and then publish the composited stream to the channel.
- Local compositing consumes significant CPU resources. Agora recommends enabling this feature on high-performance devices.
- If you need to composite locally captured video streams, the SDK supports the following combinations:
- On Android and iOS, up to 2 camera video streams (requires device support for dual cameras or external cameras) + 1 screen sharing stream.
- When configuring compositing, ensure that the camera video stream capturing the portrait has a higher layer index than the screen sharing stream. Otherwise, the portrait may be covered and not appear in the final composited stream.
Scenario
- Call enableVirtualBackground and set the custom background to BackgroundNone, i.e., separate the portrait from the background in the camera-captured video.
- Call startScreenCaptureBySourceType to start capturing the screen sharing video stream.
- Call this method and set the portrait video source as one of the sources in the local mixing configuration to achieve picture-in-picture in the final mixed video.
Timing
- To composite locally captured video streams, call this method after startCameraCapture or startScreenCapture.
- To publish the composited stream to the channel, set publishTranscodedVideoTrack to true in ChannelMediaOptions when calling joinChannel or updateChannelMediaOptions.
Parameters
- config
- Local compositing configuration. See LocalTranscoderConfiguration.
Note:
- The maximum resolution for each video stream in the compositing is 4096 × 2160. Exceeding this limit will cause the compositing to fail.
- The maximum resolution of the composited video stream is 4096 × 2160.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
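The 4096 × 2160 resolution ceiling noted above applies to each input stream and to the composited output, so a pre-flight check can avoid a failed compositing call. The sketch below is a hypothetical helper, not part of the SDK; `withinTranscoderLimit` is a name introduced here for illustration.

```typescript
// Hypothetical helper (not part of the SDK): validates that a video stream's
// resolution stays within the documented 4096 × 2160 compositing limit.
function withinTranscoderLimit(width: number, height: number): boolean {
  return width <= 4096 && height <= 2160;
}

// Usage sketch, assuming `config` is a LocalTranscoderConfiguration whose
// stream dimensions were checked with withinTranscoderLimit beforehand:
// engine.startLocalVideoTranscoder(config);
```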
startMediaRenderingTracing
Starts video frame rendering tracing.
abstract startMediaRenderingTracing(): number;
After this method is successfully called, the SDK uses the time of the call as the starting point and reports video frame rendering information through the onVideoRenderingTracingResult callback.
- If you do not call this method, the SDK uses the time of calling joinChannel to join the channel as the default starting point and automatically starts tracing video rendering events. You can call this method at an appropriate time based on your business scenario to customize the tracing point.
- After leaving the current channel, the SDK automatically resets the tracing point to the next time you join a channel.
Scenario
Agora recommends using this method together with UI elements in your app (such as buttons or sliders) to measure the time to first frame rendering from the user's perspective. For example, call this method when the user clicks the Join Channel button, and then use the onVideoRenderingTracingResult callback to obtain the duration of each stage in the video rendering process, allowing you to optimize each stage accordingly.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
- -7: IRtcEngine is not initialized before calling the method.
startOrUpdateChannelMediaRelay
Starts or updates cross-channel media stream forwarding.
abstract startOrUpdateChannelMediaRelay(
configuration: ChannelMediaRelayConfiguration
): number;
- If the onChannelMediaRelayStateChanged callback reports RelayStateRunning(2) and RelayOk(0), it means the SDK has started forwarding media streams between the source and destination channels.
- If the callback reports RelayStateFailure(3), it means an error occurred in cross-channel media stream forwarding.
- Call this method after successfully joining a channel.
- In a live streaming scenario, only users with the broadcaster role can call this method.
- To use the cross-channel media stream forwarding feature, you need to contact technical support to enable it.
- This feature does not support string-type UIDs.
Parameters
- configuration
- Configuration for cross-channel media stream forwarding. See ChannelMediaRelayConfiguration.
Return Values
- 0: The method call was successful.
- < 0: The method call failed. See Error Codes for details and resolution suggestions.
- -1: General error (not specifically classified).
- -2: Invalid parameter.
- -8: Internal state error. Possibly because the user role is not broadcaster.
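The state and code pairs reported through onChannelMediaRelayStateChanged, described above, can be folded into a single app-facing status. The sketch below is a hypothetical mapping, not part of the SDK; `relayStatus` is a name introduced here for illustration.

```typescript
// Hypothetical mapping (not part of the SDK) for interpreting the
// onChannelMediaRelayStateChanged callback, using the documented codes:
// state 2 = RelayStateRunning with code 0 = RelayOk means relaying started;
// state 3 = RelayStateFailure means the relay hit an error.
function relayStatus(state: number, code: number): 'running' | 'failed' | 'other' {
  if (state === 2 && code === 0) return 'running';
  if (state === 3) return 'failed';
  return 'other';
}
```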
startPreviewWithoutSourceType
Starts video preview.
abstract startPreviewWithoutSourceType(): number;
This method starts the local video preview.
- Local preview enables mirroring by default.
- After leaving the channel, local preview remains active. You need to call stopPreview to stop the local preview.
Timing
This method must be called after enableVideo.
Return Values
- 0: Method call succeeds.
- < 0: Method call fails. See Error Codes for details and resolution suggestions.
startPreview
Starts video preview and specifies the video source for preview.
abstract startPreview(sourceType?: VideoSourceType): number;
This method starts local video preview and specifies the video source to appear in the preview.
- Local preview enables mirror mode by default.
- After leaving the channel, the local preview remains active. You need to call stopPreview to stop the local preview.
Timing
You must call this method after enableVideo.
Parameters
- sourceType
- The type of video source. See VideoSourceType.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
startRhythmPlayer
Starts the virtual metronome.
abstract startRhythmPlayer(
sound1: string,
sound2: string,
config: AgoraRhythmPlayerConfig
): number;
- Deprecated
- Deprecated since v4.6.2.
- After the virtual metronome is enabled, the SDK starts playing the specified audio files from the beginning and controls the playback duration of each file based on the beatsPerMinute you set in AgoraRhythmPlayerConfig. For example, if beatsPerMinute is set to 60, the SDK plays one beat per second. If the file duration exceeds the beat duration, the SDK only plays the portion of the audio corresponding to the beat duration.
- By default, the sound of the virtual metronome is not published to remote users. If you want remote users to hear the sound of the virtual metronome, you can set publishRhythmPlayerTrack in ChannelMediaOptions to true after calling this method.
Scenario
In music or physical education scenarios, instructors often use a metronome to help students practice with the correct rhythm. A measure consists of strong and weak beats, with the first beat of each measure being the strong beat and the rest being weak beats.
Timing
Can be called before or after joining a channel.
Parameters
- sound1
- The absolute path or URL of the strong beat file, including the file name and extension. For example, C:\music\audio.mp4. For supported audio formats, see Supported Audio Formats in RTC SDK.
- sound2
- The absolute path or URL of the weak beat file, including the file name and extension. For example, C:\music\audio.mp4. For supported audio formats, see Supported Audio Formats in RTC SDK.
- config
- Metronome configuration. See AgoraRhythmPlayerConfig.
Return Values
- 0: Success.
- < 0: Failure.
- -22: Audio file not found. Please provide valid sound1 and sound2.
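The beatsPerMinute behavior described above determines how much of each audio file the SDK plays per beat. The sketch below is a hypothetical helper, not part of the SDK; `beatDurationMs` is a name introduced here for illustration.

```typescript
// Hypothetical helper (not part of the SDK): derives the per-beat playback
// window, in milliseconds, from the beatsPerMinute value in
// AgoraRhythmPlayerConfig. At 60 BPM the SDK plays one beat per second, and
// audio beyond this window is truncated.
function beatDurationMs(beatsPerMinute: number): number {
  return 60_000 / beatsPerMinute;
}
```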
startRtmpStreamWithTranscoding
Starts pushing media streams to a CDN and sets the transcoding configuration.
abstract startRtmpStreamWithTranscoding(
url: string,
transcoding: LiveTranscoding
): number;
Agora recommends using the more comprehensive server-side CDN streaming service. See Implement server-side CDN streaming. Call this method to push live audio and video streams to the specified CDN streaming URL and set the transcoding configuration. This method can only push media streams to one URL at a time. To push to multiple URLs, call this method multiple times. Each push stream represents a streaming task. The maximum number of concurrent tasks is 200 by default, which means you can run up to 200 streaming tasks simultaneously under one Agora project. To increase the quota, contact technical support. After calling this method, the SDK triggers the onRtmpStreamingStateChanged callback locally to report the streaming status.
- Call this method after joining a channel.
- Only hosts in a live streaming scenario can call this method.
- If the streaming fails and you want to restart it, you must call stopRtmpStream before calling this method again. Otherwise, the SDK returns the same error code as the previous failure.
Parameters
- url
- The CDN streaming URL. The format must be RTMP or RTMPS. The character length must not exceed 1024 bytes. Chinese characters and other special characters are not supported.
- transcoding
- The transcoding configuration for CDN streaming. See LiveTranscoding.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
- -2: The URL or transcoding parameter is invalid. Check your URL or parameter settings.
- -7: The SDK is not initialized before calling this method.
- -19: The CDN streaming URL is already in use. Use another CDN streaming URL.
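The URL constraints above (RTMP or RTMPS scheme, at most 1024 bytes, no Chinese or other non-ASCII characters) can be checked before calling the method to avoid a -2 error. The sketch below is a hypothetical pre-flight check, not part of the SDK; `isValidRtmpUrl` is a name introduced here for illustration.

```typescript
// Hypothetical pre-flight check (not part of the SDK) for the documented CDN
// streaming URL constraints: RTMP/RTMPS scheme, <= 1024 bytes, ASCII only.
function isValidRtmpUrl(url: string): boolean {
  const hasScheme = url.startsWith('rtmp://') || url.startsWith('rtmps://');
  const withinLength = new TextEncoder().encode(url).length <= 1024;
  const asciiOnly = /^[\x00-\x7F]*$/.test(url);
  return hasScheme && withinLength && asciiOnly;
}

// Usage sketch, assuming `engine` is an initialized IRtcEngine and
// `transcoding` is a configured LiveTranscoding object:
// if (isValidRtmpUrl(url)) {
//   engine.startRtmpStreamWithTranscoding(url, transcoding);
// }
```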
startRtmpStreamWithoutTranscoding
Starts RTMP streaming without transcoding.
abstract startRtmpStreamWithoutTranscoding(url: string): number;
Agora recommends using the more advanced server-side streaming feature. See Implement server-side streaming. Call this method to push live audio and video streams to the specified RTMP streaming URL. This method can push to only one URL at a time. To push to multiple URLs, call this method multiple times. After calling this method, the SDK triggers the onRtmpStreamingStateChanged callback locally to report the streaming status.
- Call this method after joining a channel.
- Only hosts in live streaming scenarios can call this method.
- If the streaming fails and you want to restart it, make sure to call stopRtmpStream first before calling this method again. Otherwise, the SDK will return the same error code as the previous failed attempt.
Parameters
- url
- The RTMP or RTMPS streaming URL. The maximum length is 1024 bytes. Chinese characters and special characters are not supported.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and troubleshooting.
- -2: Invalid URL or transcoding parameter. Check your URL or parameter settings.
- -7: SDK not initialized before calling this method.
- -19: The RTMP streaming URL is already in use. Use a different URL.
startScreenCapture
Starts screen capture.
abstract startScreenCapture(captureParams: ScreenCaptureParameters2): number;
- The billing standard for screen sharing streams is based on the dimensions value in ScreenVideoParameters:
- If not specified, billing is based on 1280 × 720.
- If specified, billing is based on the value you provide.
- On iOS, screen sharing is only supported on iOS 12.0 and later.
- On iOS, if you use custom audio capture instead of SDK audio capture, to prevent screen sharing from stopping when the app goes to the background, it is recommended to implement a keep-alive mechanism.
- On iOS, this feature requires high device performance. It is recommended to use it on iPhone X or later.
- On iOS, this method depends on the screen sharing dynamic library AgoraReplayKitExtension.xcframework. Removing this library will cause screen sharing to fail.
- On Android, if the user does not grant screen capture permission to the app, the SDK triggers the onPermissionError(2) callback.
- On Android 9 and later, to prevent the system from killing the app when it goes to the background, it is recommended to add the foreground service permission android.permission.FOREGROUND_SERVICE in /app/Manifests/AndroidManifest.xml.
- Due to Android performance limitations, screen sharing is not supported on Android TV.
- Due to Android system limitations, when using Huawei phones for screen sharing, to avoid crashes, do not change the video encoding resolution during sharing.
- Due to Android system limitations, some Xiaomi phones do not support capturing system audio during screen sharing.
- To improve the success rate of capturing system audio during screen sharing, it is recommended to set the audio scenario to AudioScenarioGameStreaming using the setAudioScenario method before joining the channel.
Scenario
In screen sharing scenarios, you need to call this method to start capturing screen video streams. For implementation details, see Screen Sharing.
Timing
- If you call this method before joining a channel, screen sharing starts when you call joinChannel with publishScreenCaptureVideo set to true.
- If you call this method after joining a channel, screen sharing starts when you call updateChannelMediaOptions with publishScreenCaptureVideo set to true.
Parameters
- captureParams
- The configuration for screen sharing encoding parameters. See ScreenCaptureParameters2.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
- -2 (iOS): Parameter is null.
- -2 (Android): System version too low. Make sure the Android API level is at least 21.
- -3 (Android): Cannot capture system audio. Make sure the Android API level is at least 29.
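Since -2 means different things on iOS and Android, interpreting the return value requires the platform as context. The sketch below is a hypothetical mapping of the documented codes, not part of the SDK; `describeScreenCaptureError` is a name introduced here for illustration.

```typescript
// Hypothetical mapping (not part of the SDK) from the documented
// startScreenCapture return codes to app-facing hints.
function describeScreenCaptureError(
  code: number,
  platform: 'ios' | 'android'
): string {
  if (code === 0) return 'success';
  if (code === -2 && platform === 'ios') return 'parameter is null';
  if (code === -2 && platform === 'android') return 'requires Android API level 21 or higher';
  if (code === -3 && platform === 'android') return 'system audio capture requires Android API level 29 or higher';
  return 'see Error Codes';
}
```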
stopAllEffects
Stops playing all audio effect files.
abstract stopAllEffects(): number;
When you no longer need to play audio effect files, you can call this method to stop playback. If you only want to pause playback, call pauseAllEffects.
Timing
You need to call this method after playEffect.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
stopAudioMixing
Stops playing the music file.
abstract stopAudioMixing(): number;
After calling the startAudioMixing method to play a music file, you can call this method to stop playback. If you only need to pause playback, call pauseAudioMixing instead.
Timing
You must call this method after joining a channel.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
stopCameraCapture
Stops capturing video from the camera.
abstract stopCameraCapture(sourceType: VideoSourceType): number;
After calling startCameraCapture to start one or more camera video streams, you can call this method and specify sourceType to stop one or more of the camera video captures.
On iOS, to stop multi-camera capture, you also need to call enableMultiCamera and set enabled to false.
If you are using the local composite layout feature, calling this method to stop video capture from the first camera will interrupt the local composite layout.
Parameters
- sourceType
- The type of video source. See VideoSourceType.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
stopChannelMediaRelay
Stops the media stream relay across channels. Once stopped, the host leaves all destination channels.
abstract stopChannelMediaRelay(): number;
After a successful call, the SDK triggers the onChannelMediaRelayStateChanged callback. If it reports RelayStateIdle (0) and RelayOk (0), it indicates that the media stream relay has stopped.
If the relay fails, the SDK reports RelayErrorServerNoResponse (2) or RelayErrorServerConnectionLost (8). You can call the leaveChannel method to leave the channel, and the media stream relay will automatically stop.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and troubleshooting.
- -5: The method call is rejected. There is no ongoing media stream relay across channels.
stopDirectCdnStreaming
Stops direct CDN streaming from the host.
abstract stopDirectCdnStreaming(): number;
- Deprecated
- Deprecated since v4.6.2.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
stopEchoTest
Stops the audio call loopback test.
abstract stopEchoTest(): number;
After calling startEchoTest, you must call this method to end the test. Otherwise, the user will not be able to perform the next audio/video call loopback test or join a channel.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
- -5 (ERR_REFUSED): Failed to stop the test. The test may not be running.
stopEffect
Stops playing the specified audio effect file.
abstract stopEffect(soundId: number): number;
When you no longer need to play a specific audio effect file, you can call this method to stop playback. If you only want to pause playback, call pauseEffect.
Timing
You need to call this method after playEffect.
Parameters
- soundId
- The ID of the specified audio effect file. Each audio effect file has a unique ID.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
stopLastmileProbeTest
Stops the last mile network probe test before a call.
abstract stopLastmileProbeTest(): number;
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
stopLocalAudioMixer
Stops local audio mixing.
abstract stopLocalAudioMixer(): number;
After calling startLocalAudioMixer, if you want to stop local audio mixing, call this method.
Timing
This method must be called after startLocalAudioMixer.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
- -7: The IRtcEngine object is not initialized. You need to successfully initialize the IRtcEngine object before calling this method.
stopLocalVideoTranscoder
Stops local video compositing.
abstract stopLocalVideoTranscoder(): number;
After calling startLocalVideoTranscoder, if you want to stop local video compositing, call this method.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
stopPreview
Stops video preview.
abstract stopPreview(sourceType?: VideoSourceType): number;
Scenario
After calling startPreview to start preview, if you want to stop the local video preview, call this method.
Timing
Call this method before joining or after leaving the channel.
Parameters
- sourceType
- The type of video source. See VideoSourceType.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
stopRhythmPlayer
Stops the virtual metronome.
abstract stopRhythmPlayer(): number;
After calling startRhythmPlayer, you can call this method to stop the virtual metronome.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
stopRtmpStream
Stops CDN streaming.
abstract stopRtmpStream(url: string): number;
Agora recommends using the more comprehensive server-side CDN streaming service. See Implement server-side CDN streaming. Call this method to stop the live streaming to the specified CDN streaming URL. This method can only stop streaming to one URL at a time. To stop streaming to multiple URLs, call this method multiple times. After calling this method, the SDK triggers the onRtmpStreamingStateChanged callback locally to report the streaming status.
Parameters
- url
- The CDN streaming URL. The format must be RTMP or RTMPS. The character length must not exceed 1024 bytes. Chinese characters and other special characters are not supported.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
stopScreenCapture
Stops screen capture.
abstract stopScreenCapture(): number;
Scenario
If you call startScreenCaptureBySourceType, startScreenCaptureByWindowId, or startScreenCaptureByDisplayId to start screen capture, you need to call this method to stop it.
Timing
This method can be called before or after joining the channel.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
switchCamera
Switches between front and rear cameras.
abstract switchCamera(): number;
You can call this method during the app's runtime to dynamically switch between cameras based on the actual availability, without restarting the video stream or reconfiguring the video source.
This method applies only when the video source type is set to VideoSourceCamera (0) when calling startCameraCapture.
Timing
You must call this method after the camera is successfully turned on, that is, when the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1).
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
takeSnapshot
Takes a snapshot of the video.
abstract takeSnapshot(uid: number, filePath: string): number;
This method captures a snapshot of the specified user's video stream, generates a JPG image, and saves it to the specified path.
- When this method returns, the SDK has not actually captured the snapshot.
- When used for local video snapshot, it captures the video stream specified for publishing in ChannelMediaOptions.
- If the video has been pre-processed, such as adding watermark or beauty effects, the snapshot will include those effects.
Timing
This method must be called after joining a channel.
Parameters
- uid
- User ID. Set to 0 to capture the local user's video.
- filePath
- Local path to save the snapshot, including file name and format. For example:
- iOS: /App Sandbox/Library/Caches/example.jpg
- Android: /storage/emulated/0/Android/data/<package name>/files/example.jpg
Note: Make sure the directory exists and is writable.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
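The documented example paths differ per platform, so a small helper can assemble the filePath argument. The sketch below is hypothetical and not part of the SDK; the base directories mirror the documented examples only, and a real app should resolve its sandbox paths with a file-system library.

```typescript
// Hypothetical helper (not part of the SDK): builds a snapshot path from the
// documented per-platform examples. The base directories are illustrative.
function snapshotPath(platform: 'ios' | 'android', fileName: string): string {
  const base =
    platform === 'ios'
      ? '/App Sandbox/Library/Caches'
      : '/storage/emulated/0/Android/data/<package name>/files';
  return `${base}/${fileName}`;
}

// Usage sketch, assuming `engine` is an initialized IRtcEngine; uid 0 means
// the local user:
// engine.takeSnapshot(0, snapshotPath('ios', 'example.jpg'));
```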
takeSnapshotWithConfig
Takes a video snapshot at a specified observation point.
abstract takeSnapshotWithConfig(uid: number, config: SnapshotConfig): number;
This method captures a snapshot of the specified user's video stream, generates a JPG image, and saves it to the specified path.
- When this method returns, the SDK has not actually captured the snapshot.
- When used for local video snapshot, it captures the video stream specified for publishing in ChannelMediaOptions.
Timing
This method must be called after joining a channel.
Parameters
- uid
- User ID. Set to 0 to capture the local user's video.
- config
- Snapshot settings. See SnapshotConfig.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
unloadAllEffects
Releases all preloaded audio effect files from memory.
abstract unloadAllEffects(): number;
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
unloadEffect
Releases a preloaded audio effect file from memory.
abstract unloadEffect(soundId: number): number;
After calling preloadEffect to load an audio effect file into memory, call this method to release the file.
Timing
You can call this method either before or after joining a channel.
Parameters
- soundId
- The ID of the specified audio effect file. Each audio effect file has a unique ID.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
unregisterAudioEncodedFrameObserver
Unregisters an audio encoded frame observer.
abstract unregisterAudioEncodedFrameObserver(
observer: IAudioEncodedFrameObserver
): number;
Parameters
- observer
- Audio encoded frame observer. See IAudioEncodedFrameObserver.
Return Values
- 0: The method call was successful.
- < 0: The method call failed. See Error Codes for details and resolution suggestions.
unregisterAudioSpectrumObserver
Unregisters the audio spectrum observer.
abstract unregisterAudioSpectrumObserver(
observer: IAudioSpectrumObserver
): number;
Call this method to unregister the audio spectrum observer after calling registerAudioSpectrumObserver.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
unregisterMediaMetadataObserver
Unregisters the media metadata observer.
abstract unregisterMediaMetadataObserver(
observer: IMetadataObserver,
type: MetadataType
): number;
Parameters
- observer
- The metadata observer. See IMetadataObserver.
- type
- The metadata type. Currently, only
VideoMetadatais supported. See MetadataType.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
updateChannelMediaOptions
Updates the channel media options after joining a channel.
abstract updateChannelMediaOptions(options: ChannelMediaOptions): number;
Parameters
- options
- The channel media options. See ChannelMediaOptions.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
- -2: The value of a ChannelMediaOptions member is invalid. For example, an illegal token is used or an invalid user role is set. You need to provide valid parameters.
- -7: The IRtcEngine object is not initialized. You need to initialize the IRtcEngine object before calling this method.
- -8: The internal state of the IRtcEngine object is incorrect. A possible reason is that the user is not in a channel. You can determine whether the user is in a channel through the onConnectionStateChanged callback: if you receive ConnectionStateDisconnected (1) or ConnectionStateFailed (5), the user is not in a channel. Call joinChannel before calling this method.
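A minimal sketch of the call pattern, using a mock engine and a pared-down stand-in for ChannelMediaOptions (the real type has many more members):

```typescript
// Stand-in for ChannelMediaOptions; the real SDK exports this type.
interface ChannelMediaOptions {
  publishMicrophoneTrack?: boolean;
  publishCameraTrack?: boolean;
}

// Mock engine used only to illustrate the 0 / < 0 return convention.
const engine = {
  updateChannelMediaOptions(options: ChannelMediaOptions): number {
    return options ? 0 : -2; // -2: invalid parameter
  },
};

// Stop publishing the microphone while keeping the camera published.
const ret = engine.updateChannelMediaOptions({
  publishMicrophoneTrack: false,
  publishCameraTrack: true,
});
console.log(ret === 0 ? 'options updated' : `update failed: ${ret}`);
```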
updateLocalAudioMixerConfiguration
Updates the configuration for local audio mixing.
abstract updateLocalAudioMixerConfiguration(
config: LocalAudioMixerConfiguration
): number;
After calling startLocalAudioMixer, if you want to update the configuration for local audio mixing, call this method.
Timing
This method must be called after startLocalAudioMixer.
Parameters
- config
- Configuration for local audio mixing. See LocalAudioMixerConfiguration.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
- -7: The IRtcEngine object is not initialized. You need to successfully initialize the IRtcEngine object before calling this method.
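A sketch of the call, assuming a mock engine (the LocalAudioMixerConfiguration stand-in below shows only one member; the real type also carries the list of audio streams to mix):

```typescript
// Stand-in for LocalAudioMixerConfiguration.
interface LocalAudioMixerConfiguration {
  syncWithLocalMic?: boolean;
}

// Mock engine: returns -7 when "not initialized", mirroring the
// error code documented above.
const initialized = true;
const engine = {
  updateLocalAudioMixerConfiguration(config: LocalAudioMixerConfiguration): number {
    if (!initialized) return -7; // engine not initialized
    return config ? 0 : -2;
  },
};

// After startLocalAudioMixer, adjust the mixing configuration.
const ret = engine.updateLocalAudioMixerConfiguration({ syncWithLocalMic: true });
console.log('return:', ret);
```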
updateLocalTranscoderConfiguration
Updates the local video compositing configuration.
abstract updateLocalTranscoderConfiguration(
config: LocalTranscoderConfiguration
): number;
After calling startLocalVideoTranscoder, if you want to update the local video compositing configuration, call this method.
Parameters
- config
- Configuration for local video compositing. See LocalTranscoderConfiguration.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
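As an illustration only (the stand-in types below are simplified; the real LocalTranscoderConfiguration describes the composite output and the full set of input streams), updating the composite after startLocalVideoTranscoder might look like:

```typescript
// Simplified stand-ins for the SDK's compositing types.
interface TranscodingVideoStream {
  sourceType: number; // a VideoSourceType value in the real SDK
  x: number;
  y: number;
  width: number;
  height: number;
}
interface LocalTranscoderConfiguration {
  streamCount: number;
  videoInputStreams: TranscodingVideoStream[];
}

// Mock engine: a configuration is valid only if streamCount matches
// the number of input streams supplied.
const engine = {
  updateLocalTranscoderConfiguration(config: LocalTranscoderConfiguration): number {
    return config.streamCount === config.videoInputStreams.length ? 0 : -2;
  },
};

// Reposition the single input stream inside the composite.
const config: LocalTranscoderConfiguration = {
  streamCount: 1,
  videoInputStreams: [{ sourceType: 1, x: 0, y: 0, width: 360, height: 240 }],
};
const ret = engine.updateLocalTranscoderConfiguration(config);
console.log('return:', ret);
```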
updatePreloadChannelToken
Updates the wildcard token for the preloaded channel.
abstract updatePreloadChannelToken(token: string): number;
You need to manage the lifecycle of the wildcard token yourself. When the wildcard token expires, you need to generate a new one on your server and pass it in through this method.
Scenario
In scenarios where frequent channel switching or multiple channels are needed, using a wildcard token avoids repeated token requests when switching channels, which speeds up the switching process and reduces the load on your token server.
Parameters
- token
- The new token.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
- -2: The passed parameter is invalid. For example, an illegal token is used. You need to provide valid parameters and rejoin the channel.
- -7: The IRtcEngine object is not initialized. You need to initialize the IRtcEngine object before calling this method.
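A sketch of the refresh flow. The fetchWildcardToken helper below is a hypothetical app-side function standing in for your own server round-trip; it is not an SDK API:

```typescript
// Mock engine: rejects an empty token with -2, accepts otherwise.
const engine = {
  updatePreloadChannelToken(token: string): number {
    return token.length > 0 ? 0 : -2;
  },
};

// Hypothetical helper: in a real app this would request a fresh
// wildcard token from your token server.
function fetchWildcardToken(): string {
  return 'new-wildcard-token'; // placeholder value
}

// When the current wildcard token is about to expire, hand the SDK a new one.
const ret = engine.updatePreloadChannelToken(fetchWildcardToken());
console.log(ret === 0 ? 'token refreshed' : `refresh failed: ${ret}`);
```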
updateRtmpTranscoding
Updates the transcoding configuration for CDN streaming.
abstract updateRtmpTranscoding(transcoding: LiveTranscoding): number;
Agora recommends using the more comprehensive server-side CDN streaming service. See Implement server-side CDN streaming. After enabling transcoding streaming, you can dynamically update the transcoding configuration based on your scenario. After the update, the SDK triggers the onTranscodingUpdated callback.
Parameters
- transcoding
- The transcoding configuration for CDN streaming. See LiveTranscoding.
Return Values
- 0: The method call succeeds.
- < 0: The method call fails. See Error Codes for details and resolution suggestions.
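A minimal sketch of the update-then-callback sequence described above, using a mock engine (the LiveTranscoding stand-in shows only three of the real type's members):

```typescript
// Stand-in for LiveTranscoding.
interface LiveTranscoding {
  width: number;
  height: number;
  videoBitrate: number;
}

// Track callback firings; the real SDK delivers onTranscodingUpdated
// through the registered event handler.
let updatedCount = 0;
const onTranscodingUpdated = (): void => {
  updatedCount++;
};

// Mock engine: validates the layout, then "fires" the callback.
const engine = {
  updateRtmpTranscoding(t: LiveTranscoding): number {
    if (t.width <= 0 || t.height <= 0) return -2; // invalid parameter
    onTranscodingUpdated();
    return 0;
  },
};

// Switch the CDN stream to a 16:9 layout.
const ret = engine.updateRtmpTranscoding({ width: 640, height: 360, videoBitrate: 800 });
console.log(`return: ${ret}, callbacks fired: ${updatedCount}`);
```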
updateScreenCapture
Updates the parameter configuration for screen capture.
abstract updateScreenCapture(captureParams: ScreenCaptureParameters2): number;
To publish the audio captured from the screen:
- Call this method and set captureAudio to true.
- Call updateChannelMediaOptions and set publishScreenCaptureAudio to true.
- This method is applicable to Android and iOS only.
- On iOS, screen sharing is supported on iOS 12.0 and later.
Parameters
- captureParams
- Encoding parameter configuration for screen sharing. See ScreenCaptureParameters2.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
- -2: The parameter passed in is invalid.
- -8: The screen sharing state is invalid. This may be because you are already sharing another screen or window. Try calling stopScreenCapture to stop the current sharing and start screen sharing again.
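The two-step audio-publishing flow above can be sketched as follows. The engine and types are mocks for illustration; the member names (captureAudio, publishScreenCaptureAudio) follow the parameters named in this section:

```typescript
// Pared-down stand-ins for the two option types involved.
interface ScreenCaptureParameters2 {
  captureAudio?: boolean;
  captureVideo?: boolean;
}
interface ChannelMediaOptions {
  publishScreenCaptureAudio?: boolean;
}

// Mock engine that records the call order.
const calls: string[] = [];
const engine = {
  updateScreenCapture(p: ScreenCaptureParameters2): number {
    calls.push(`updateScreenCapture(captureAudio=${p.captureAudio})`);
    return 0;
  },
  updateChannelMediaOptions(o: ChannelMediaOptions): number {
    calls.push(`updateChannelMediaOptions(publishScreenCaptureAudio=${o.publishScreenCaptureAudio})`);
    return 0;
  },
};

// Step 1: start capturing system audio alongside the screen video.
engine.updateScreenCapture({ captureAudio: true, captureVideo: true });
// Step 2: publish the captured audio to the channel.
engine.updateChannelMediaOptions({ publishScreenCaptureAudio: true });
console.log(calls.join('\n'));
```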
updateScreenCaptureParameters
Updates the parameter configuration for screen capture.
abstract updateScreenCaptureParameters(
captureParams: ScreenCaptureParameters
): number;
- Call this method after screen sharing or window sharing is enabled.
Parameters
- captureParams
- Encoding parameter configuration for screen sharing. See ScreenCaptureParameters.
Note: The video properties of the screen sharing stream only need to be set through this parameter and are not related to setVideoEncoderConfiguration.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
- -2: The parameter passed in is invalid.
- -8: The screen sharing state is invalid. This may be because you are already sharing another screen or window. Try calling stopScreenCapture to stop the current sharing and start screen sharing again.
updateScreenCaptureRegion
Updates the screen capture region.
abstract updateScreenCaptureRegion(regionRect: Rectangle): number;
Parameters
- regionRect
- The screen capture region. See Rectangle.
Return Values
- 0: Success.
- < 0: Failure. See Error Codes for details and resolution suggestions.
- -2: The parameter passed in is invalid.
- -8: The screen sharing state is invalid. This may be because you are already sharing another screen or window. Try calling stopScreenCapture to stop the current sharing and start screen sharing again.
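A minimal sketch of the call, with a mock engine and a Rectangle stand-in (the real SDK exports Rectangle; the coordinate semantics are defined there):

```typescript
// Stand-in for Rectangle.
interface Rectangle {
  x: number;
  y: number;
  width: number;
  height: number;
}

// Mock engine: -2 for a degenerate region, 0 otherwise.
const engine = {
  updateScreenCaptureRegion(r: Rectangle): number {
    return r.width > 0 && r.height > 0 ? 0 : -2;
  },
};

// Narrow the shared area to the top-left quarter of a 1920x1080 screen.
const ret = engine.updateScreenCaptureRegion({ x: 0, y: 0, width: 960, height: 540 });
console.log(ret === 0 ? 'region updated' : `failed: ${ret}`);
```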