Interface IAgoraRTC

The entry point of the Agora Web SDK.

Events

audio-context-state-changed

  • audio-context-state-changed(currState: AudioContextState | "interrupted", prevState: AudioContextState | "interrupted" | undefined): void
  • since 4.20.0

    Callback for changes in the Audio Context state. The "interrupted" state in this callback is currently only triggered on iOS devices.
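
    A minimal listener sketch (it only logs the transition; adapt the handling to your app):

    AgoraRTC.on("audio-context-state-changed", (currState, prevState) => {
      console.log("Audio context state changed:", prevState, "->", currState);
    });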

    Parameters

    • currState: AudioContextState | "interrupted"

      The current state, with possible values:

      • AudioContextState: Enumerated values are detailed in BaseAudioContext.state.
      • "interrupted": Audio and video playback is interrupted by a system phone call or another app. You call the resumeAudioContext method to resume audio and video playback.
    • prevState: AudioContextState | "interrupted" | undefined

      The previous state, with possible values:

      • AudioContextState: Enumerated values are detailed in BaseAudioContext.state.
      • "interrupted": Audio and video playback is interrupted by a system phone call or another app.
      • undefined: No previous state.

    Returns void

autoplay-failed

  • autoplay-failed(): void
  • since 4.18.0

    If you need a more flexible way of listening to autoplay failures, Agora recommends that you call IAgoraRTC.on and pass in this callback to replace onAutoplayFailed.

    AgoraRTC.on("autoplay-failed", (info) => {
      console.log("Autoplay failed!", info.state, info.device);
    });
    

    Returns void

camera-changed

  • camera-changed(deviceInfo: DeviceInfo): void
  • since 4.18.0

    If you need a more flexible way of listening to camera device changes, Agora recommends that you call IAgoraRTC.on and pass in this callback to replace onCameraChanged.

    AgoraRTC.on("camera-changed", (info) => {
      console.log("Camera changed!", info.state, info.device);
    });
    

    Parameters

    • deviceInfo: DeviceInfo

      The information of the video capture device. See DeviceInfo.

    Returns void

microphone-changed

  • microphone-changed(deviceInfo: DeviceInfo): void
  • since 4.18.0

    If you need a more flexible way of listening to microphone device changes, Agora recommends that you call IAgoraRTC.on and pass in this callback to replace onMicrophoneChanged.

    AgoraRTC.on("microphone-changed", (info) => {
      console.log("Microphone changed!", info.state, info.device);
    });
    

    Parameters

    • deviceInfo: DeviceInfo

      The information of the audio sampling device. See DeviceInfo.

    Returns void

playback-device-changed

  • playback-device-changed(deviceInfo: DeviceInfo): void
  • since 4.18.0

    If you need a more flexible way of listening to audio playback device changes, Agora recommends that you call IAgoraRTC.on and pass in this callback to replace onPlaybackDeviceChanged.

    AgoraRTC.on("playback-device-changed", (info) => {
      console.log("Playback device changed!", info.state, info.device);
    });
    

    Parameters

    • deviceInfo: DeviceInfo

      The information of the audio playback device. See DeviceInfo.

    Returns void

security-policy-violation

  • security-policy-violation(): void
  • since 4.18.0

    If you need a more flexible way of listening to CSP rule violations, Agora recommends that you call IAgoraRTC.on and pass in this callback to replace onSecurityPolicyViolation.

    AgoraRTC.on("security-policy-violation", (info) => {
      console.log("Security policy violation!", info.state, info.device);
    });
    

    Returns void

Global Callback Properties

Optional onAudioAutoplayFailed

onAudioAutoplayFailed: function

Occurs when the autoplay of an audio track fails.

deprecated

from v4.6.0. Use onAutoplayFailed instead.

If multiple tracks call play and all trigger autoplay blocking, the SDK triggers onAudioAutoplayFailed multiple times.

The autoplay failure is caused by browsers' autoplay blocking, which does not affect video tracks.

In the Agora Web SDK, once the user has interacted with the webpage, the autoplay blocking is removed. You can deal with the issue in either of the following ways:

  • If you do not want to receive the onAudioAutoplayFailed callback, ensure that the user has interacted with the webpage before RemoteAudioTrack.play or LocalAudioTrack.play is called.
  • If you cannot guarantee a user interaction before the call of RemoteAudioTrack.play or LocalAudioTrack.play, you can display a button and instruct the user to click it in the onAudioAutoplayFailed callback.

As the number of visits on a webpage increases, the browser adds the webpage to the autoplay whitelist, but this information is not accessible by JavaScript.

The following example shows how to display a button for the user to click when autoplay fails.

 let isAudioAutoplayFailed = false;
 AgoraRTC.onAudioAutoplayFailed = () => {
  if (isAudioAutoplayFailed) return;

  isAudioAutoplayFailed = true;
  const btn = document.createElement("button");
  btn.innerText = "Click me to resume the audio playback";
  btn.onclick = () => {
    isAudioAutoplayFailed = false;
    btn.remove();
  };
  document.body.append(btn);
};

If multiple audio tracks call play, onAudioAutoplayFailed is triggered multiple times. The example uses the isAudioAutoplayFailed flag to avoid repeatedly creating buttons.

Type declaration

    • (): void
    • Returns void

Optional onAutoplayFailed

onAutoplayFailed: function
since 4.6.0

Occurs when the autoplay of an audio track or a video track fails.

Different from onAudioAutoplayFailed, if multiple tracks call play and all trigger autoplay blocking, the SDK triggers onAutoplayFailed only once before a user gesture for removing the autoplay blocking occurs.

The autoplay failure of audible media is caused by browsers' autoplay blocking. On most web browsers, inaudible media are not affected by autoplay blocking. However, on iOS Safari with low power mode enabled, or other iOS in-app browsers that implement a custom autoplay policy, such as WeChat browser, the autoplay of inaudible media is blocked.

In the Agora Web SDK, once the user has interacted with the webpage, the autoplay blocking is removed. You can deal with the issue in either of the following ways:

  • If you do not want to receive the onAutoplayFailed callback, ensure that the user has interacted with the webpage before RemoteTrack.play or LocalTrack.play is called.
  • If you cannot guarantee a user interaction before the call of RemoteTrack.play or LocalTrack.play, you can display a button and instruct the user to click it in the onAutoplayFailed callback.

As the number of visits on a webpage increases, the browser may add the webpage to the autoplay whitelist, but this information is not accessible by JavaScript.

The following example demonstrates how to display a button for the user to click when autoplay fails.

 AgoraRTC.onAutoplayFailed = () => {
  const btn = document.createElement("button");
  btn.innerText = "Click me to resume the audio playback";
  btn.onclick = () => {
    btn.remove();
  };
  document.body.append(btn);
};

Since the SDK only triggers onAutoplayFailed once before a user gesture that removes the autoplay blocking occurs, you do not need to maintain the state of isAutoPlayFailed as you did for the onAudioAutoplayFailed callback.

Type declaration

    • (): void
    • Returns void

Optional onCameraChanged

onCameraChanged: function

Occurs when a video capture device is added or removed.

AgoraRTC.onCameraChanged = (info) => {
  console.log("camera changed!", info.state, info.device);
};

Parameters

  • deviceInfo: The information of the video capture device. See DeviceInfo.

Type declaration

Optional onMicrophoneChanged

onMicrophoneChanged: function

Occurs when an audio sampling device is added or removed.

AgoraRTC.onMicrophoneChanged = (info) => {
  console.log("microphone changed!", info.state, info.device);
};

Parameters

  • deviceInfo: The information of the device. See DeviceInfo.

Type declaration

Optional onPlaybackDeviceChanged

onPlaybackDeviceChanged: function
since 4.1.0

Occurs when an audio playback device is added or removed.

AgoraRTC.onPlaybackDeviceChanged = (info) => {
  console.log("speaker changed!", info.state, info.device);
};

Parameters

  • deviceInfo: The information of the device. See DeviceInfo.

Type declaration

Optional onSecurityPolicyViolation

onSecurityPolicyViolation: function
since 4.15.0

Occurs when Agora-related services cause CSP (Content Security Policy) violations.

When Agora fails to load a resource or send a request due to CSP violations, the SDK triggers this callback. After receiving this callback, modify your CSP configuration to ensure that you can access Agora-related services.
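
A minimal handler sketch that logs the violation (the property names come from the standard SecurityPolicyViolationEvent):

AgoraRTC.onSecurityPolicyViolation = (event) => {
  console.log("CSP violation:", event.violatedDirective, event.blockedURI);
};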

Type declaration

    • (event: SecurityPolicyViolationEvent): void
    • Parameters

      • event: SecurityPolicyViolationEvent

      Returns void

Other Properties

VERSION

VERSION: string

The version of the Agora Web SDK.

Agora Core Methods

createClient

  • Creates a local client object for managing a call.

    This is usually the first step of using the Agora Web SDK.
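
    A typical call (the "rtc" profile and "vp8" codec shown here are also the documented defaults):

    const client = AgoraRTC.createClient({ mode: "rtc", codec: "vp8" });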

    Parameters

    • config: ClientConfig

      The configurations for the client object, including the channel profile and codec. The default codec is "vp8" and the default channel profile is "rtc". See ClientConfig for details.

    Returns IAgoraRTCClient

Local Track Methods

createBufferSourceAudioTrack

  • Creates an audio track from an audio file or AudioBuffer object.

    This method works with both local and online audio files, supporting the following formats:

    • MP3.
    • AAC.
    • Other audio formats supported by the browser.

    Parameters

    Returns Promise<IBufferSourceAudioTrack>

    Unlike other audio track objects, this audio track object adds methods for audio playback control, such as playing, pausing, seeking, and querying the playback status.
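
    For example, a minimal sketch that creates a track from an online audio file and plays it locally (the URL is a placeholder):

    const bufferTrack = await AgoraRTC.createBufferSourceAudioTrack({
      source: "https://example.com/music.mp3", // placeholder URL
    });
    // Start processing the audio buffer before playing or publishing the track.
    bufferTrack.startProcessAudioBuffer();
    bufferTrack.play();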

createCameraVideoTrack

  • Creates a video track from the video captured by a camera.
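
    For example (the "720p_1" encoder preset is one of the SDK's predefined profiles; adjust it as needed):

    const videoTrack = await AgoraRTC.createCameraVideoTrack({ encoderConfig: "720p_1" });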

    Parameters

    • Optional config: CameraVideoTrackInitConfig

      Configurations for the captured video, such as the capture device and the encoder configuration.

    Returns Promise<ICameraVideoTrack>

createCustomAudioTrack

createCustomVideoTrack

createMicrophoneAudioTrack

createScreenVideoTrack

  • Creates a video track for screen sharing.

    Parameters

    • config: ScreenVideoTrackInitConfig

      Configurations for the screen-sharing video, such as encoder configuration and capture configuration.

    • withAudio: "enable"

      Whether to share the audio of the screen sharing input source when sharing the screen.

      • enable: Share the audio.
      • disable: (Default) Do not share the audio.
      • auto: Share the audio, dependent on whether the browser supports this function.
      • ScreenAudioTrackInitConfig: Customized initialization configurations for audio sharing, including the 3A processing parameters (AEC, AGC, ANS).

        Note:

        • This function is only available on desktop browsers that support the Web SDK; it is not available on mobile devices. For the specific list of supported browsers, see Supported platforms.
        • Additional information on browser versions and feature support across different operating systems:
          • On macOS, Chrome 74 or later supports audio and video sharing, only when sharing Chrome tabs. Firefox and Safari 14 or later support window and screen sharing, but do not support audio sharing.
          • On Windows, Chrome 74 or later and Edge support audio sharing when sharing the screen and browser tabs, but not when sharing application windows. Firefox supports window and screen sharing, but does not support audio sharing.
          • On ChromeOS, Chrome supports audio sharing when sharing the screen and browser tabs, but not when sharing application windows.
        • For the audio sharing to take effect, the end user must check Share audio in the pop-up window when sharing the screen.

    Returns Promise<[ILocalVideoTrack, ILocalAudioTrack]>

  • Creates a video track for screen sharing.

    Parameters

    • config: ScreenVideoTrackInitConfig

      Configurations for the screen-sharing video, such as encoder configuration and capture configuration.

    • withAudio: "disable"

      Whether to share the audio of the screen sharing input source when sharing the screen.

      • enable: Share the audio.
      • disable: (Default) Do not share the audio.
      • auto: Share the audio, dependent on whether the browser supports this function.
      • ScreenAudioTrackInitConfig: Customized initialization configurations for audio sharing, including the 3A processing parameters (AEC, AGC, ANS).

        Note:

        • This function is only available on desktop browsers that support the Web SDK; it is not available on mobile devices. For the specific list of supported browsers, see Supported platforms.
        • Additional information on browser versions and feature support across different operating systems:
          • On macOS, Chrome 74 or later supports audio and video sharing, only when sharing Chrome tabs. Firefox and Safari 14 or later support window and screen sharing, but do not support audio sharing.
          • On Windows, Chrome 74 or later and Edge support audio sharing when sharing the screen and browser tabs, but not when sharing application windows. Firefox supports window and screen sharing, but does not support audio sharing.
          • On ChromeOS, Chrome supports audio sharing when sharing the screen and browser tabs, but not when sharing application windows.
        • For the audio sharing to take effect, the end user must check Share audio in the pop-up window when sharing the screen.

    Returns Promise<ILocalVideoTrack>

    • If withAudio is enable, then this method returns a list containing a video track for screen sharing and an audio track. If the end user does not check Share audio, the SDK throws an error.
    • If withAudio is disable, then this method returns a video track for screen sharing.
    • If withAudio is auto or ScreenAudioTrackInitConfig, then the SDK attempts to share the audio on browsers supporting this function.
      • If the end user checks Share audio, then this method returns a list containing a video track for screen sharing and an audio track.
      • If the end user does not check Share audio, then this method only returns a video track for screen sharing.
  • Creates a video track for screen sharing.

    Parameters

    • config: ScreenVideoTrackInitConfig

      Configurations for the screen-sharing video, such as encoder configuration and capture configuration.

    • Optional withAudio: "enable" | "disable" | "auto" | ScreenAudioTrackInitConfig

      Whether to share the audio of the screen sharing input source when sharing the screen.

      • enable: Share the audio.
      • disable: (Default) Do not share the audio.
      • auto: Share the audio, dependent on whether the browser supports this function.
      • ScreenAudioTrackInitConfig: Customized initialization configurations for audio sharing, including the 3A processing parameters (AEC, AGC, ANS).

        Note:

        • This function is only available on desktop browsers that support the Web SDK; it is not available on mobile devices. For the specific list of supported browsers, see Supported platforms.
        • Additional information on browser versions and feature support across different operating systems:
          • On macOS, Chrome 74 or later supports audio and video sharing, only when sharing Chrome tabs. Firefox and Safari 14 or later support window and screen sharing, but do not support audio sharing.
          • On Windows, Chrome 74 or later and Edge support audio sharing when sharing the screen and browser tabs, but not when sharing application windows. Firefox supports window and screen sharing, but does not support audio sharing.
          • On ChromeOS, Chrome supports audio sharing when sharing the screen and browser tabs, but not when sharing application windows.
        • For the audio sharing to take effect, the end user must check Share audio in the pop-up window when sharing the screen.

    Returns Promise<[ILocalVideoTrack, ILocalAudioTrack] | ILocalVideoTrack>

    • If withAudio is enable, then this method returns a list containing a video track for screen sharing and an audio track. If the end user does not check Share audio, the SDK throws an error.
    • If withAudio is disable, then this method returns a video track for screen sharing.
    • If withAudio is auto or ScreenAudioTrackInitConfig, then the SDK attempts to share the audio on browsers supporting this function.
      • If the end user checks Share audio, then this method returns a list containing a video track for screen sharing and an audio track.
      • If the end user does not check Share audio, then this method only returns a video track for screen sharing.
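
    A sketch of handling the two possible return shapes when withAudio is "auto" (the "1080p_1" preset is just an example value):

    const result = await AgoraRTC.createScreenVideoTrack({ encoderConfig: "1080p_1" }, "auto");
    // With "auto", the result is either [videoTrack, audioTrack] or a single video track.
    const [screenVideoTrack, screenAudioTrack] = Array.isArray(result) ? result : [result, null];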

Logger Methods

disableLogUpload

  • disableLogUpload(): void
  • Disables log upload.

    The log-upload function is disabled by default. If you have called enableLogUpload, then call this method when you need to stop uploading the log.

    Returns void

enableLogUpload

  • enableLogUpload(): void
  • Enables log upload.

    Call this method to enable log upload to Agora’s server.

    The log-upload function is disabled by default. To enable this function, you must call this method before calling any other method.

    If a user fails to join the channel, the log information (for that user) is unavailable on Agora's server.

    Returns void

setLogLevel

  • setLogLevel(level: number): void
  • Sets the output log level of the SDK.

    Choose a level to see the logs at that level and all the levels preceding it. The log levels follow the sequence of NONE, ERROR, WARNING, INFO, and DEBUG.

    For example, if you call AgoraRTC.setLogLevel(1);, you can see logs at the INFO, WARNING, and ERROR levels.

    Parameters

    • level: number

      The output log level.

      • 0: DEBUG. Output all API logs.
      • 1: INFO. Output logs of the INFO, WARNING and ERROR level.
      • 2: WARNING. Output logs of the WARNING and ERROR level.
      • 3: ERROR. Output logs of the ERROR level.
      • 4: NONE. Do not output any log.

    Returns void

Media Devices Methods

getCameras

  • getCameras(skipPermissionCheck?: boolean): Promise<MediaDeviceInfo[]>
  • Enumerates the video capture devices available, such as cameras.

    If this method call succeeds, the SDK returns a list of video input devices in an array of MediaDeviceInfo objects.

    Calling this method turns on the camera briefly for the device permission request. On browsers including Chrome 67+, Firefox 70+, and Safari 12+, the SDK cannot get accurate device information without permission for the media device.
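
    For example, analogous to the getDevices snippet below:

    AgoraRTC.getCameras().then(cameras => {
      console.log("first camera:", cameras[0] && cameras[0].label);
    }).catch(e => {
      console.log("get cameras error!", e);
    });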

    Parameters

    • Optional skipPermissionCheck: boolean

      Whether to skip the permission check. If you set this parameter as true, the SDK does not trigger the request for media device permission. In this case, the retrieved media device information may be inaccurate.

      • true: Skip the permission check.
      • false: (Default) Do not skip the permission check.

    Returns Promise<MediaDeviceInfo[]>

getDevices

  • getDevices(skipPermissionCheck?: boolean): Promise<MediaDeviceInfo[]>
  • Enumerates the media input and output devices available, such as microphones, cameras, and headsets.

    If this method call succeeds, the SDK returns a list of media devices in an array of MediaDeviceInfo objects.

    Note:

    • Calling this method turns on the camera and microphone briefly for the device permission request. On browsers including Chrome 67+, Firefox 70+, and Safari 12+, the SDK cannot get accurate device information without permission for the media device.
    • The MediaDeviceInfo.deviceId property of a device may change. For example, it is reset when the user clears cookies. Agora does not recommend using the deviceId property to implement your business logic.
    getDevices().then(devices => {
      console.log("first device id", devices[0].deviceId);
    }).catch(e => {
      console.log("get devices error!", e);
    });
    

    Parameters

    • Optional skipPermissionCheck: boolean

      Whether to skip the permission check. If you set this parameter as true, the SDK does not trigger the request for media device permission. In this case, the retrieved media device information may be inaccurate.

      • true: Skip the permission check.
      • false: (Default) Do not skip the permission check.

    Returns Promise<MediaDeviceInfo[]>

getElectronScreenSources

  • Gets the sources for screen-sharing through Electron.

    If your Electron environment has contextIsolation: true set, calling this method throws an error. In that case, you need to get the screen source ID yourself through the contextBridge.exposeInMainWorld method.

    // preload.js
    
    const {
      contextBridge, desktopCapturer
    } = require("electron");
    
    contextBridge.exposeInMainWorld(
      "electronDesktopCapturer", {
        getSources: async (...args) => {
          const sources = await desktopCapturer.getSources(...args);
          return sources;
        }
      }
    );
    
    // renderer.js
    (async () => {
      const sources = await window.electronDesktopCapturer.getSources(["window", "screen"]);
      const source = sources[0];   // For example only; you should build a UI for the user to select the exact source.
      const screenVideoTrack = await AgoraRTC.createScreenVideoTrack({ electronScreenSourceId: source.id });
    })()
    

    If this method call succeeds, the SDK returns a list of screen sources in an array of ElectronDesktopCapturerSource objects.

    Parameters

    • Optional type: ScreenSourceType

      The type of screen sources (window/application/screen) to get. See ScreenSourceType. If it is left empty, this method gets all the available sources.

    Returns Promise<ElectronDesktopCapturerSource[]>

getMicrophones

  • getMicrophones(skipPermissionCheck?: boolean): Promise<MediaDeviceInfo[]>
  • Enumerates the audio sampling devices available, such as microphones.

    If this method call succeeds, the SDK returns a list of audio input devices in an array of MediaDeviceInfo objects.

    Calling this method turns on the microphone briefly for the device permission request. On browsers including Chrome 67+, Firefox 70+, and Safari 12+, the SDK cannot get accurate device information without permission for the media device.

    Parameters

    • Optional skipPermissionCheck: boolean

      Whether to skip the permission check. If you set this parameter as true, the SDK does not trigger the request for media device permission. In this case, the retrieved media device information may be inaccurate.

      • true: Skip the permission check.
      • false: (Default) Do not skip the permission check.

    Returns Promise<MediaDeviceInfo[]>

getPlaybackDevices

  • getPlaybackDevices(skipPermissionCheck?: boolean): Promise<MediaDeviceInfo[]>
  • since 4.1.0

    Enumerates the audio playback devices available, such as speakers.

    If this method call succeeds, the SDK returns a list of audio playback devices in an array of MediaDeviceInfo objects.

    • This method is supported on Chrome, Firefox, and Edge, but is not supported on Safari.
    • Calling this method turns on the microphone briefly for the device permission request. On browsers including Chrome 67+ and Firefox 70+, the SDK cannot get accurate device information without permission for the media device.

    Parameters

    • Optional skipPermissionCheck: boolean

      Whether to skip the permission check. If you set this parameter as true, the SDK does not trigger the request for media device permission. In this case, the retrieved media device information may be inaccurate.

      • true: Skip the permission check.
      • false: (Default) Do not skip the permission check.

    Returns Promise<MediaDeviceInfo[]>

Other Methods

checkAudioTrackIsActive

  • Checks whether an audio track is active.

    The SDK determines whether an audio track is active by checking whether the volume changes during the specified time frame.

    Agora recommends calling this method before starting a call to check the availability of the audio sampling device. You can pass an audio track created from the audio sampled by a microphone to this method to check whether the microphone works.

    Notes:

    • The check may fail in a quiet environment. Agora suggests you instruct the end user to speak or make some noise when calling this method.
    • If an audio track is muted, this method returns false.
    • Do not call this method frequently as the check may affect web performance.
    const audioTrack = await AgoraRTC.createMicrophoneAudioTrack({ microphoneId });
    AgoraRTC.checkAudioTrackIsActive(audioTrack).then(result => {
      console.log(`${ microphoneLabel } is ${ result ? "available" : "unavailable" }`);
    }).catch(e => {
      console.log("check audio track error!", e);
    });
    

    Parameters

    • track: ILocalAudioTrack | IRemoteAudioTrack

      The local or remote audio track to be checked.

    • Optional timeout: number

      The time frame (ms) for checking. The default value is 5,000 ms.

    Returns Promise<boolean>

    Whether the volume in the specified audio track changes during the specified time frame:

    • true: The volume changes. For the microphone audio track, it means the audio sampling device works.
    • false: The volume does not change. Possible reasons:
      • The audio sampling device does not work properly.
      • The volume in the customized audio track does not change.
      • The audio track is muted.

checkSystemRequirements

  • checkSystemRequirements(): boolean
  • Checks the compatibility of the current browser.

    Use this method before calling createClient to check if the SDK is compatible with the web browser.
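
    For example:

    if (!AgoraRTC.checkSystemRequirements()) {
      console.warn("This browser is not supported by the Agora Web SDK.");
    }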

    Returns boolean

    • true: The SDK is compatible with the current web browser.
    • false: The SDK is incompatible with the current web browser.

checkVideoTrackIsActive

  • Checks whether a video track is active.

    The SDK determines whether a video track is active by checking for image changes during the specified time frame.

    Agora recommends calling this method before starting a call to check the availability of the video capture device. You can pass the camera video track as a parameter in this method to check whether the camera works.

    Notes:

    • If a video track is muted, this method returns false.
    • Do not call this method frequently as the check may affect web performance.
    const videoTrack = await AgoraRTC.createCameraVideoTrack({ cameraId });
    AgoraRTC.checkVideoTrackIsActive(videoTrack).then(result => {
      console.log(`${ cameraLabel } is ${ result ? "available" : "unavailable" }`);
    }).catch(e => {
      console.log("check video track error!", e);
    });
    

    Parameters

    • track: ILocalVideoTrack | IRemoteVideoTrack

      The local or remote video track to be checked.

    • Optional timeout: number

      The time frame (ms) for checking. The default value is 5,000 ms.

    Returns Promise<boolean>

    Whether the image in the specified video track changes during the specified time frame:

    • true: The image changes. For the camera video track, it means the video capture device works.
    • false: The image does not change. Possible reasons:
      • The video capture device does not work properly or is blocked.
      • The video track is muted.

createChannelMediaRelayConfiguration

createMicrophoneAndCameraTracks

  • Creates an audio track and a video track.

    Creates an audio track from the audio sampled by a microphone and a video track from the video captured by a camera.

    Calling this method differs from calling createMicrophoneAudioTrack and createCameraVideoTrack separately:

    • This method call requests access to the microphone and the camera at the same time, so users only need to grant device permissions once.
    • Calling createMicrophoneAudioTrack and createCameraVideoTrack requests access to the microphone and the camera separately, so users need to grant device permissions twice.

    Parameters

    • Optional audioConfig: MicrophoneAudioTrackInitConfig

      Configurations for the sampled audio, such as the capture device and the encoder configurations.

    • Optional videoConfig: CameraVideoTrackInitConfig

      Configurations for the captured video, such as the capture device and the encoder configurations.

    Returns Promise<[IMicrophoneAudioTrack, ICameraVideoTrack]>
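
    A minimal sketch (both configuration objects are optional and omitted here):

    const [microphoneTrack, cameraTrack] = await AgoraRTC.createMicrophoneAndCameraTracks();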

getListeners

  • getListeners(event: string): Function[]
  • Gets all the listeners for a specified event.

    Parameters

    • event: string

      The event name.

    Returns Function[]

getSupportedCodec

  • getSupportedCodec(): Promise<object>
  • Gets the codecs that the browser supports.

    This method gets a list of the codecs supported by the SDK and the web browser. The Agora Web SDK supports video codecs VP8 and H.264, and audio codec OPUS.

    Note:

    • The method works with all major browsers. It gets an empty list if it does not recognize the browser or the browser does not support WebRTC.
    • The returned codec list is based on the SDP used by the web browser and for reference only.
    • Some Android phones claim to support H.264 but have problems in communicating with other platforms using this codec, in which case we recommend VP8 instead.
    AgoraRTC.getSupportedCodec().then(result => {
      console.log(`Supported video codec: ${result.video.join(",")}`);
      console.log(`Supported audio codec: ${result.audio.join(",")}`);
    });
    

    Returns Promise<object>

    A Promise object. In the .then(function(result){}) callback, result has the following properties:

    • video: array, the supported video codecs. The array may include "H264", "VP8", or be empty.
    • audio: array, the supported audio codecs. The array may include "OPUS", or be empty.

off

  • off(event: string, listener: Function): void
  • Removes the listener for a specified event.

    Parameters

    • event: string

      The event name.

    • listener: Function

      The callback that corresponds to the event listener.

    Returns void

on

  • on(event: "camera-changed", listener: typeof event_camera_changed): void
  • on(event: "microphone-changed", listener: typeof event_microphone_changed): void
  • on(event: "playback-device-changed", listener: typeof event_playback_device_changed): void
  • on(event: "autoplay-failed", listener: typeof event_autoplay_failed): void
  • on(event: "security-policy-violation", listener: typeof event_security_policy_violation): void
  • on(event: "audio-context-state-changed", listener: typeof event_audio_context_state_changed): void

once

  • once(event: string, listener: Function): void
  • Listens for a specified event once.

    When the specified event happens, the SDK triggers the callback that you pass and then removes the listener.

    Parameters

    • event: string

      The event name.

    • listener: Function

      The callback to trigger.

    Returns void

preload

  • preload(appid: string, channel: string, token: string | null, uid?: UID | null): Promise<void>
  • Preload channels using appid, channel, token, and uid.

    When a viewer switches channels frequently, calling this method reduces the time it takes to join a channel, shortening the time before the viewer hears the first frame of the host's audio and sees the first frame of the video, and thus improving the viewing experience.

    If a channel has been preloaded successfully, the viewer does not need to preload it again before rejoining after leaving, as long as the token passed in during preloading is still valid.

    Note:

    • Preload is only valid for two minutes.
    • In order to protect page performance, this method adopts a one-time best-effort strategy and cannot guarantee success. However, a failed preload will not affect the viewer's ability to join the channel normally, nor will it increase the time taken to join the channel.
    • The system caches the data of up to 10 of the most recent preloads.
    • Currently this method does not support forwarding via proxy.
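
    A minimal sketch, assuming appid, channel, token, and uid are already defined; the later join call reuses the same arguments (the "live" profile here is only an example fitting the viewer scenario):

    // Preload before the viewer actually joins; a failed preload does not block joining.
    await AgoraRTC.preload(appid, channel, token, uid);
    // Later, join with the same arguments to benefit from the preloaded data.
    const client = AgoraRTC.createClient({ mode: "live", codec: "vp8" });
    await client.join(appid, channel, token, uid);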

    Parameters

    • appid: string

      The App ID of your Agora project.

    • channel: string

      A string that provides a unique channel name for the call. The length must be within 64 bytes. Supported character scopes:

      • All lowercase English letters: a to z.
      • All uppercase English letters: A to Z.
      • All numeric characters: 0 to 9.
      • The space character.
      • Punctuation characters and other symbols, including: "!", "#", "$", "%", "&", "(", ")", "+", "-", ":", ";", "<", "=", ".", ">", "?", "@", "[", "]", "^", "_", "{", "}", "|", "~", ",".
    • token: string | null

      The token generated at your server:

    • Optional uid: UID | null

      The user ID, an integer or a string, ASCII characters only. Ensure this ID is unique. If you set the uid to null, the Agora server assigns an integer uid.

      • If you use a number as the user ID, it should be a 32-bit unsigned integer with a value ranging from 0 to (2^32 - 1).
      • If you use a string as the user ID, the maximum length is 255 characters.
      To ensure a better end-user experience, Agora recommends using a number as the user ID.

      Note:

      • All users in the same channel should have the same type (number or string) of uid.
      • You can use string UIDs to interoperate with the Native SDK 2.8 or later. Ensure that the Native SDK uses the User Account to join the channel. See Use String User Accounts.
      • To ensure the data accuracy in Agora Analytics, Agora recommends that you specify uid for each user and ensure it is unique.

    Returns Promise<void>

processExternalMediaAEC

  • processExternalMediaAEC(element: HTMLMediaElement): void
  • since 4.5.0

    Enables the AEC (Acoustic Echo Canceller) for the audio played on the local client.

    Consider a scenario where multiple users play a media file at the same time, such as watching a movie together. If user A plays the media file through HTMLMediaElement on Chrome with a speaker, the SDK captures the audio played through the speaker together with user A's voice, so the other users hear both the audio sent by user A and the audio played locally, which sounds like an echo. To deal with this echo issue, call processExternalMediaAEC and pass in the HTMLMediaElement to enable the AEC for the audio played on the local client.

    <audio crossOrigin="anonymous" src="http://www.test.com/test.mp3" id="audioDom"></audio>
    <script>
      const element = document.getElementById("audioDom");
      AgoraRTC.processExternalMediaAEC(element);
    </script>
    

    Note: If you play a cross-origin media file, you must set the crossOrigin property in HTMLMediaElement as "anonymous" to allow the SDK to capture the media.

    Parameters

    • element: HTMLMediaElement

      The HTMLMediaElement object to which the echo cancellation is applied.

    Returns void

registerExtensions

  • registerExtensions(extensions: IExtension<any>[]): void

removeAllListeners

  • removeAllListeners(event?: string): void
  • Removes all listeners for a specified event.

    Parameters

    • Optional event: string

      The event name. If left empty, all listeners for all events are removed.

    Returns void

resumeAudioContext

  • resumeAudioContext(): void
  • Resumes audio and video playback.

    On some versions of iOS devices, the app call might not automatically resume after being interrupted by a WeChat call or system phone call. You can call this method to resume the app call.

    Agora recommends that you listen for the "audio-context-state-changed" event using IAgoraRTC.on and handle the following in the callback (a sketch follows the list below):

    • When the state changes to "interrupted", display a pop-up to notify the user that the app call is interrupted and needs to be resumed by clicking a button. After the user clicks the button, call resumeAudioContext.
    • When the state changes to "running", close the pop-up.

    Returns void

setArea

  • setArea(area: AREAS[] | object): void
  • since 4.2.0

    Sets the region for connection.

    This advanced feature applies to scenarios that have regional restrictions.

    By default, the SDK connects to nearby Agora servers. After specifying the region, the SDK connects to the Agora servers within that region.

    // Specify the region for connection as North America.
    AgoraRTC.setArea({
      areaCode:"NORTH_AMERICA"
    })
    
    // Exclude Mainland China from the regions for connection.
    AgoraRTC.setArea({
      areaCode:"GLOBAL",
      excludedArea:"CHINA"
    })
    

    Parameters

    • area: AREAS[] | object

      The region for connection. For supported regions, see AREAS. Choose either of the following ways to specify the region for connection:

      • Set the areaCode parameter to specify only one region for connection.
      • Set the areaCode parameter to specify a large region and the excludedArea parameter to specify a small region. The region for connection is the large region excluding the small region. You can only specify the large region as "GLOBAL".

    Returns void