The current state, with possible values:
- AudioContextState: Enumerated values are detailed in BaseAudioContext.state.
- "interrupted": Audio and video playback is interrupted by a system phone call or another app. Call the resumeAudioContext method to resume audio and video playback.

The previous state, with possible values:
- AudioContextState: Enumerated values are detailed in BaseAudioContext.state.
- "interrupted": Audio and video playback is interrupted by a system phone call or another app.
- undefined: No previous state.

The device information. See DeviceInfo.
The device information. See DeviceInfo.
The device information. See DeviceInfo.
Occurs when the autoplay of an audio track fails.
Occurs when a video capture device is added or removed.
AgoraRTC.onCameraChanged = (info) => {
console.log("camera changed!", info.state, info.device);
};
Parameters
Occurs when an audio sampling device is added or removed.
AgoraRTC.onMicrophoneChanged = (info) => {
console.log("microphone changed!", info.state, info.device);
};
Parameters
The version of the Agora Web SDK.
Creates a local client object for managing a call.
This is usually the first step of using the Agora Web SDK.
The configurations for the client object, including the channel profile and codec. The default codec is "vp8" and the default channel profile is "rtc". See ClientConfig for details.
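As a minimal sketch (assuming AgoraRTC is loaded globally, for example via a script tag), creating a client with the default-equivalent configuration might look like this:

```javascript
// Default-equivalent configuration: "rtc" channel profile and "vp8" codec.
const clientConfig = { mode: "rtc", codec: "vp8" };

// Creating the client is typically the first SDK call an app makes.
// Assumes AgoraRTC is available globally (e.g., loaded via a <script> tag).
function createCallClient() {
  return AgoraRTC.createClient(clientConfig);
}
```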
Creates an audio track from an audio file or AudioBuffer object.
This method works with both local and online audio files, supporting the following formats:
Configurations such as the file path, caching strategies, and encoder configuration.
Unlike other audio track objects, this audio track object adds methods for audio playback control, such as playing, pausing, seeking, and querying playback status.
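A sketch of typical usage, assuming a global AgoraRTC object; the URL is a placeholder for any supported local or online audio file:

```javascript
// Plays a background-music file locally. The URL is a placeholder; pass any
// supported local or online audio file. Assumes a global AgoraRTC object.
async function playBackgroundMusic(url) {
  const track = await AgoraRTC.createBufferSourceAudioTrack({ source: url });
  track.startProcessAudioBuffer({ loop: true }); // start processing the audio buffer, looping
  track.play();                                  // play the audio locally
  return track;
}
```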
Creates a video track from the video captured by a camera.
Configurations for the captured video, such as the capture device and the encoder configuration.
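A sketch of capturing from a specific camera (assumptions: "cameraId" comes from AgoraRTC.getCameras(), "720p_1" is an encoder preset name, and "local-player" is a placeholder DOM element id):

```javascript
// Captures video from a specific camera with a preset encoder configuration.
// "cameraId" would come from AgoraRTC.getCameras(); "720p_1" is a preset name
// and "local-player" a placeholder DOM element id.
async function startCamera(cameraId) {
  const videoTrack = await AgoraRTC.createCameraVideoTrack({
    cameraId,
    encoderConfig: "720p_1",
  });
  videoTrack.play("local-player"); // render the local preview
  return videoTrack;
}
```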
Creates a customized audio track.
This method creates a customized audio track from a MediaStreamTrack object.
Configurations for the customized audio track.
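As a sketch, any source that yields a MediaStreamTrack can feed this method; here the track is assumed to come from getUserMedia:

```javascript
// Wraps an audio MediaStreamTrack from getUserMedia in a custom audio track.
// Any source that yields a MediaStreamTrack (Web Audio, WebRTC, etc.) works.
async function createTrackFromUserMedia() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const [mediaStreamTrack] = stream.getAudioTracks();
  return AgoraRTC.createCustomAudioTrack({ mediaStreamTrack });
}
```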
Creates a customized video track.
This method creates a customized video track from a MediaStreamTrack object.
Configurations for the customized video track. See CustomVideoTrackInitConfig.
As of v4.17.1, you can set the resolution and frame rate (in addition to the sending bitrate) for a customized video track by config.
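A sketch of wrapping a canvas capture as a custom video track; the frameRate hint relies on the v4.17.1 behavior noted above:

```javascript
// Wraps the MediaStreamTrack captured from a canvas in a custom video track.
// The frameRate hint relies on the v4.17.1 behavior noted above.
function createTrackFromCanvas(canvas) {
  const [mediaStreamTrack] = canvas.captureStream(30).getVideoTracks();
  return AgoraRTC.createCustomVideoTrack({
    mediaStreamTrack,
    frameRate: 30, // optional encoder hint
  });
}
```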
Creates an audio track from the audio sampled by a microphone.
Configurations for the sampled audio, such as the capture device and the encoder configuration. See MicrophoneAudioTrackInitConfig.
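A sketch of capturing from a specific microphone ("microphoneId" is assumed to come from AgoraRTC.getMicrophones(); the AEC flag is from MicrophoneAudioTrackInitConfig):

```javascript
// Captures audio from a specific microphone with echo cancellation enabled.
// "microphoneId" would come from AgoraRTC.getMicrophones().
async function startMicrophone(microphoneId) {
  return AgoraRTC.createMicrophoneAudioTrack({
    microphoneId,
    AEC: true, // acoustic echo cancellation
  });
}
```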
Creates a video track for screen sharing.
Configurations for the screen-sharing video, such as encoder configuration and capture configuration.
Whether to share the audio of the screen sharing input source when sharing the screen.
- enable: Share the audio.
- disable: (Default) Do not share the audio.
- auto: Share the audio, dependent on whether the browser supports this function.

Note:
- This function is only available on desktop browsers that support the Web SDK; it is not available on mobile devices. For the specific list of supported browsers, see Supported platforms.
- Additional information on browser versions and feature support across different operating systems:
- On macOS, Chrome 74 or later supports audio and video sharing, only when sharing Chrome tabs. Firefox and Safari 14 or later support window and screen sharing, but do not support audio sharing.
- On Windows, Chrome 74 or later and Edge support audio sharing when sharing the screen and browser tabs, but not when sharing application windows. Firefox supports window and screen sharing, but does not support audio sharing.
- On ChromeOS, Chrome supports audio sharing when sharing the screen and browser tabs, but not when sharing application windows.
- For the audio sharing to take effect, the end user must check Share audio in the pop-up window when sharing the screen.
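A sketch of typical usage (assumptions: AgoraRTC is global and "1080p_1" is an illustrative encoder preset name). With withAudio set to "auto", the result may be a single video track or a [videoTrack, audioTrack] pair, so it helps to normalize it:

```javascript
// With withAudio set to "auto", the result may be a single video track or a
// [videoTrack, audioTrack] pair, so normalize it before use.
function normalizeScreenTracks(result) {
  return Array.isArray(result)
    ? { videoTrack: result[0], audioTrack: result[1] }
    : { videoTrack: result, audioTrack: null };
}

async function startScreenShare() {
  const result = await AgoraRTC.createScreenVideoTrack(
    { encoderConfig: "1080p_1" }, // capture/encoder configuration
    "auto"                        // share audio only where the browser supports it
  );
  return normalizeScreenTracks(result);
}
```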
- If withAudio is enable, this method returns a list containing a video track for screen sharing and an audio track. If the end user does not check Share audio, the SDK throws an error.
- If withAudio is disable, this method returns a video track for screen sharing.
- If withAudio is auto, the SDK attempts to share the audio on browsers supporting this function.
Disables log upload.
The log-upload function is disabled by default. If you have called enableLogUpload, call this method when you need to stop uploading logs.
Enables log upload.
Call this method to enable log upload to Agora’s server.
The log-upload function is disabled by default. To enable it, you must call this method before calling any other methods.
If a user fails to join the channel, the log information (for that user) is unavailable on Agora's server.
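A sketch of toggling log upload (assuming a global AgoraRTC object); the enable call must run before any other SDK call so the whole session's logs are uploaded:

```javascript
// Toggles log upload. enableLogUpload must run before any other SDK call for
// the whole session's logs to be uploaded.
function configureLogUpload(enabled) {
  if (enabled) {
    AgoraRTC.enableLogUpload();
  } else {
    AgoraRTC.disableLogUpload();
  }
}
```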
Sets the output log level of the SDK.
Choose a level to see logs at and above that severity. The log levels, in order of increasing verbosity, are NONE, ERROR, WARNING, INFO, and DEBUG.
For example, if you call AgoraRTC.setLogLevel(1), you can see logs at the INFO, WARNING, and ERROR levels.
The output log level.
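A sketch with the numeric mapping written out for clarity; the mapping is assumed from the order above, where setLogLevel(1) selects INFO:

```javascript
// Numeric log levels, assumed from the order above: setLogLevel(1) selects
// INFO, showing INFO, WARNING, and ERROR logs.
const LOG_LEVELS = { DEBUG: 0, INFO: 1, WARNING: 2, ERROR: 3, NONE: 4 };

function showInfoAndAbove() {
  AgoraRTC.setLogLevel(LOG_LEVELS.INFO);
}
```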
Enumerates the video capture devices available, such as cameras.
If this method call succeeds, the SDK returns a list of video input devices in an array of MediaDeviceInfo objects.
Calling this method briefly turns on the camera to trigger the device permission request. On browsers including Chrome 67+, Firefox 70+, and Safari 12+, the SDK cannot get accurate device information without permission for the media device.
Whether to skip the permission check. If you set this parameter as true, the SDK does not trigger the request for media device permission. In this case, the retrieved media device information may be inaccurate.
- true: Skip the permission check.
- false: (Default) Do not skip the permission check.

Enumerates the media input and output devices available, such as microphones, cameras, and headsets.
If this method call succeeds, the SDK returns a list of media devices in an array of MediaDeviceInfo objects.
Note:
- Calling this method briefly turns on the camera and microphone to trigger the device permission request. On browsers including Chrome 67+, Firefox 70+, and Safari 12+, the SDK cannot get accurate device information without permission for the media device.
- The MediaDeviceInfo.deviceId property of a device may change. For example, it is reset when the user clears cookies. Agora does not recommend using the deviceId property to implement your business logic.
getDevices().then(devices => {
console.log("first device id", devices[0].deviceId);
}).catch(e => {
console.log("get devices error!", e);
});
Whether to skip the permission check. If you set this parameter as true, the SDK does not trigger the request for media device permission. In this case, the retrieved media device information may be inaccurate.
- true: Skip the permission check.
- false: (Default) Do not skip the permission check.

Gets the sources for screen-sharing through Electron.
If your Electron environment sets contextIsolation: true, calling this function throws an error. In that case, you need to get the screen source ID yourself with the contextBridge.exposeInMainWorld method:
// preload.js
const {
contextBridge, desktopCapturer
} = require("electron");
contextBridge.exposeInMainWorld(
"electronDesktopCapturer", {
getSources: async (...args) => {
const sources = await desktopCapturer.getSources(...args);
return sources;
}
}
);
// renderer.js
(async () => {
  const sources = await window.electronDesktopCapturer.getSources({ types: ["window", "screen"] });
  const source = sources[0]; // for example only; build a UI so the user can select the exact source
  const screenVideoTrack = await AgoraRTC.createScreenVideoTrack({ electronScreenSourceId: source.id });
})();
If this method call succeeds, the SDK returns a list of screen sources in an array of ElectronDesktopCapturerSource objects.
The type of screen sources (window/application/screen) to get. See ScreenSourceType. If it is left empty, this method gets all the available sources.
Enumerates the audio sampling devices available, such as microphones.
If this method call succeeds, the SDK returns a list of audio input devices in an array of MediaDeviceInfo objects.
Calling this method briefly turns on the microphone to trigger the device permission request. On browsers including Chrome 67+, Firefox 70+, and Safari 12+, the SDK cannot get accurate device information without permission for the media device.
Whether to skip the permission check. If you set this parameter as true, the SDK does not trigger the request for media device permission. In this case, the retrieved media device information may be inaccurate.
- true: Skip the permission check.
- false: (Default) Do not skip the permission check.

Whether to skip the permission check. If you set this parameter as true, the SDK does not trigger the request for media device permission. In this case, the retrieved media device information may be inaccurate.
- true: Skip the permission check.
- false: (Default) Do not skip the permission check.

Checks whether an audio track is active.
The SDK determines whether an audio track is active by checking whether the volume changes during the specified time frame.
Agora recommends calling this method before starting a call to check the availability of the audio sampling device. You can pass the audio track from the audio sampled by a microphone as a parameter in this method to check whether the microphone works.
Notes:
- The check may fail in a quiet environment. Agora suggests you instruct the end user to speak or make some noise when calling this method.
- If an audio track is muted, this method returns false.
- Do not call this method frequently, as the check may affect web performance.
const audioTrack = await AgoraRTC.createMicrophoneAudioTrack({ microphoneId });
AgoraRTC.checkAudioTrackIsActive(audioTrack).then(result => {
console.log(`${ microphoneLabel } is ${ result ? "available" : "unavailable" }`);
}).catch(e => {
console.log("check audio track error!", e);
});
The local or remote audio track to be checked.
The time frame (ms) for checking. The default value is 5,000 ms.
Whether the volume in the specified audio track changes during the specified time frame:
- true: The volume changes. For the microphone audio track, it means the audio sampling device works.
- false: The volume does not change. Possible reasons:

Checks the compatibility of the current browser.
Use this method before calling createClient to check if the SDK is compatible with the web browser.
- true: The SDK is compatible with the current web browser.
- false: The SDK is incompatible with the current web browser.

Checks whether a video track is active.
The SDK determines whether a video track is active by checking for image changes during the specified time frame.
Agora recommends calling this method before starting a call to check the availability of the video capture device. You can pass the camera video track as a parameter in this method to check whether the camera works.
Notes:
- If a video track is muted, this method returns false.
- Do not call this method frequently, as the check may affect web performance.
const videoTrack = await AgoraRTC.createCameraVideoTrack({ cameraId });
AgoraRTC.checkVideoTrackIsActive(videoTrack).then(result => {
console.log(`${ cameraLabel } is ${ result ? "available" : "unavailable" }`);
}).catch(e => {
console.log("check video track error!", e);
});
The local or remote video track to be checked.
The time frame (ms) for checking. The default value is 5,000 ms.
Whether the image in the specified video track changes during the specified time frame:
- true: The image changes. For the camera video track, it means the video capture device works.
- false: The image does not change. Possible reasons:

Creates an object for configuring the media stream relay.
Creates an audio track and a video track.
Creates an audio track from the audio sampled by a microphone and a video track from the video captured by a camera.
Calling this method differs from calling createMicrophoneAudioTrack and createCameraVideoTrack separately:
- This method call requires access to the microphone and the camera at the same time. In this case, users only need to do authorization once.
- Calling createMicrophoneAudioTrack and createCameraVideoTrack requires access to the microphone and the camera separately. In this case, users need to do authorization twice.
Configurations for the sampled audio, such as the capture device and the encoder configurations.
Configurations for the captured video, such as the capture device and the encoder configurations.
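A sketch of the single-prompt flow described above (assumptions: AgoraRTC is global; the AEC flag and "720p_1" preset are illustrative configuration values):

```javascript
// One permission prompt covers both devices; the result destructures into an
// audio track and a video track, in that order.
async function startLocalTracks() {
  const [audioTrack, videoTrack] = await AgoraRTC.createMicrophoneAndCameraTracks(
    { AEC: true },              // audio (microphone) configuration
    { encoderConfig: "720p_1" } // video (camera) configuration
  );
  return { audioTrack, videoTrack };
}
```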
Gets all the listeners for a specified event.
The event name.
Gets the codecs that the browser supports.
This method gets a list of the codecs supported by the SDK and the web browser. The Agora Web SDK supports video codecs VP8 and H.264, and audio codec OPUS.
Note:
- The method works with all major browsers. It gets an empty list if it does not recognize the browser or the browser does not support WebRTC.
- The returned codec list is based on the SDP used by the web browser and for reference only.
- Some Android phones claim to support H.264 but have problems in communicating with other platforms using this codec, in which case we recommend VP8 instead.
AgoraRTC.getSupportedCodec().then(result => {
console.log(`Supported video codec: ${result.video.join(",")}`);
console.log(`Supported audio codec: ${result.audio.join(",")}`);
});
A Promise object. In the .then(function(result){}) callback, result has the following properties:
- video: array, the supported video codecs. The array may include "H264", "VP8", or be empty.
- audio: array, the supported audio codecs. The array may include "OPUS", or be empty.

Removes the listener for a specified event.
The event name.
The callback that corresponds to the event listener.
The event name.
See event_camera_changed.
The event name.
The event name.
The event name.
The event name.
The event name.
Listens for a specified event once.
When the specified event happens, the SDK triggers the callback that you pass and then removes the listener.
The event name.
The callback to trigger.
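A sketch of a one-shot listener; the "camera-changed" event name is an assumption based on the onCameraChanged callback documented above:

```javascript
// Reacts to the first camera change only; the listener is removed after it
// fires. The "camera-changed" event name is assumed from onCameraChanged.
function listenForFirstCameraChange() {
  AgoraRTC.once("camera-changed", (info) => {
    console.log("camera changed once:", info.state, info.device);
  });
}
```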
Preloads channels using appid, channel, token, and uid.
Calling this method reduces the time it takes to join a channel when a viewer switches channels frequently, shortening the time before the viewer hears the host's first audio frame and sees the first video frame, and improving the viewing experience.
If the current channel has been preloaded successfully and the viewer needs to join the channel again after joining or leaving the channel, there is no need to re-preload the channel as long as the token passed in during preloading is still valid.
Note:
- Preload is only valid for two minutes.
- In order to protect page performance, this method adopts a one-time best-effort strategy and cannot guarantee success. However, a failed preload will not affect the viewer's ability to join the channel normally, nor will it increase the time taken to join the channel.
- The system caches up to 10 latest preloading data.
- Currently this method does not support forwarding via proxy.
The App ID of your Agora project.
A string that provides a unique channel name for the call. The length must be within 64 bytes. Supported character scopes:
The token generated at your server:
The user ID, an integer or a string, ASCII characters only. Ensure this ID is unique. If you set the uid to null, the Agora server assigns an integer uid.
Note:
- All users in the same channel should have the same type (number or string) of uid.
- You can use string UIDs to interoperate with the Native SDK 2.8 or later. Ensure that the Native SDK uses the User Account to join the channel. See Use String User Accounts.
- To ensure the data accuracy in Agora Analytics, Agora recommends that you specify uid for each user and ensure it is unique.
The HTMLMediaElement object to which the echo cancellation is applied.
The extension instance.
Removes all listeners for a specified event.
The event name. If left empty, all listeners for all events are removed.
Resumes audio and video playback.
On some versions of iOS devices, the app call might not automatically resume after being interrupted by a WeChat call or system phone call. You can call this method to resume the app call.
Agora recommends that you listen for the "audio-context-state-changed" event using IAgoraRTC.on, and handle the following in the event_audio_context_state_changed callback:
- If the state is "interrupted", display a pop-up to notify the user that the app call is interrupted and needs to be resumed by clicking a button. After the user clicks the button, call resumeAudioContext.
- If the state is "running", close the pop-up.

The region for connection. For supported regions, see AREAS. Choose either of the following ways to specify the region for connection:
- Set the areaCode parameter to specify only one region for connection.
- Set the areaCode parameter to specify a large region and the excludedArea parameter to specify a small region. The region for connection is the large region excluding the small region. You can only specify the large region as "GLOBAL".
The entry point of the Agora Web SDK.