Last updated: 2021-09-22 17:45:00

    Use Cases

    TRTC supports four room entry modes, among which video call (VideoCall) and audio call (AudioCall) are classified as call mode, while interactive video live streaming (Live) and interactive audio live streaming (VoiceChatRoom) are classified as live streaming mode.
    In call mode, there can be a maximum of 300 members in a single TRTC room, and up to 30 of them can speak at the same time. This service is suitable for various scenarios such as one-to-one video call, video conferencing with up to 300 attendees, online medical diagnosis, video interview, video customer service, and online werewolf.

    How It Works

    The TRTC service consists of two types of server nodes: access servers and proxy servers:

    • Access server
      Backed by the highest-quality lines and high-performance servers, this type of node is ideal for processing end-to-end low-latency co-anchoring calls, but its fees per unit time are relatively high.
    • Proxy server
      With general lines and average-performance servers, this type of node is suitable for processing high-concurrency playback of pulled streams, and its fees per unit time are low.

    In call mode, all users in a TRTC room are assigned to access servers: each user is an "anchor" who can speak at any time (up to 30 concurrent upstreams are supported). This makes call mode well suited to scenarios such as online conferencing, but it limits a single room to 300 members.

    Sample Code

    You can log in to [GitHub] to get the sample code related to this document.

    Note:

    If your access to GitHub is slow, you can directly download TXLiteAVSDK_TRTC_Android_latest.zip.

    Directions

    Step 1. Integrate the SDK

    You can integrate the TRTC SDK into your project in the following ways:

    Method 1. Automatically load the SDK (aar)

    The TRTC SDK has been released to the JCenter repository, and you can configure Gradle to download updates automatically.
    Open the project you want to integrate the SDK into in Android Studio (the TRTC-API-Example project has this integration completed, and its sample code is for your reference), then complete the integration by making the following simple changes to the app/build.gradle file:

    1. Add the TRTC SDK dependencies to dependencies.
      dependencies {
        implementation 'com.tencent.liteav:LiteAVSDK_TRTC:latest.release'
      }
      
    2. In defaultConfig, specify the CPU architecture to be used by the application.
      Note:

      Currently, the TRTC SDK supports armeabi, armeabi-v7a, and arm64-v8a.

      defaultConfig {
        ndk {
            abiFilters "armeabi", "armeabi-v7a", "arm64-v8a"
        }
      }
      
    3. Click Sync Now to sync the SDK.
      If JCenter can be connected to, the SDK will be automatically downloaded and integrated into the project.

    Method 2. Download the ZIP package for manual integration

    You can directly download the ZIP package and integrate the SDK into your project as instructed in Quick Integration (Android).

    Step 2. Configure application permissions

    Add the permissions to request camera, mic, and network access in the AndroidManifest.xml file.

    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.RECORD_AUDIO" />
    <uses-permission android:name="android.permission.CAMERA" />
    <uses-permission android:name="android.permission.READ_PHONE_STATE" />
    <uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
    <uses-permission android:name="android.permission.BLUETOOTH" />
    <uses-feature android:name="android.hardware.camera" />
    <uses-feature android:name="android.hardware.camera.autofocus" />
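    Note that declaring these permissions in the manifest is not sufficient on Android 6.0 (API 23) and later: CAMERA, RECORD_AUDIO, and the two storage permissions are "dangerous" permissions that must also be granted at runtime. As a rough plain-Java sketch (the class and method names here are hypothetical, not SDK APIs), the following filters the declared permissions down to the ones that need a runtime request:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical helper (not part of the TRTC SDK): picks out the declared
// permissions that are "dangerous" on Android 6.0+ and therefore also need
// a runtime request (e.g. via ActivityCompat.requestPermissions()).
public class RuntimePermissions {
    private static final Set<String> DANGEROUS = new HashSet<>(Arrays.asList(
            "android.permission.CAMERA",
            "android.permission.RECORD_AUDIO",
            "android.permission.WRITE_EXTERNAL_STORAGE",
            "android.permission.READ_EXTERNAL_STORAGE"));

    public static List<String> needingRuntimeRequest(List<String> declared) {
        List<String> result = new ArrayList<>();
        for (String p : declared) {
            if (DANGEROUS.contains(p)) {
                result.add(p); // normal permissions are granted at install time
            }
        }
        return result;
    }
}
```

    In a real activity you would request the returned permissions before calling startLocalPreview() or startLocalAudio().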
    

    Step 3. Initialize an SDK instance and listen on the event callback

    1. Call the sharedInstance() API to create a TRTCCloud instance.
      // Create a `trtcCloud` instance
      mTRTCCloud = TRTCCloud.sharedInstance(getApplicationContext());
      mTRTCCloud.setListener(new TRTCCloudListener());
      
    2. Call setListener() to register the event callback and listen for relevant events and error notifications.
      // Listen for error notifications, which indicate that the SDK cannot continue to run
      @Override
      public void onError(int errCode, String errMsg, Bundle extraInfo) {
          Log.d(TAG, "sdk callback onError");
          if (activity != null) {
              Toast.makeText(activity, "onError: " + errMsg + "[" + errCode + "]", Toast.LENGTH_SHORT).show();
              if (errCode == TXLiteAVCode.ERR_ROOM_ENTER_FAIL) {
                  activity.exitRoom();
              }
          }
      }
      

    Step 4. Assemble the room entry parameter TRTCParams

    When calling the enterRoom() API, you need to enter a key parameter TRTCParams, which includes the following required fields:

    Parameter | Field Type | Description | Example
    sdkAppId  | Numeric    | Application ID. You can view the SDKAppID in the TRTC Console. | 1400000123
    userId    | String     | User ID, which can contain only letters (a–z and A–Z), digits (0–9), underscores, and hyphens. | test_user_001
    userSig   | String     | User signature calculated based on userId. For the calculation method, please see How to Calculate UserSig. | eJyrVareCeYrSy1SslI...
    roomId    | Numeric    | Room ID. String-type room IDs are not supported by default, as they lower the room entry speed. If you need to use string-type room IDs, please submit a ticket for assistance. | 29834
    Note:

    In TRTC, users with the same userId cannot be in the same room at the same time; otherwise, there will be a conflict.
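    The userId character rule in the table above can be checked on the client before room entry. A minimal plain-Java sketch (the helper name is hypothetical, not an SDK API):

```java
import java.util.regex.Pattern;

// Hypothetical client-side check (not an SDK API): a userId may contain
// only letters, digits, underscores, and hyphens, per the table above.
public class UserIdCheck {
    private static final Pattern VALID = Pattern.compile("^[A-Za-z0-9_-]+$");

    public static boolean isValidUserId(String userId) {
        return userId != null && !userId.isEmpty() && VALID.matcher(userId).matches();
    }
}
```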

    Step 5. Create and enter a room

    1. Call enterRoom() to enter the audio/video room specified by roomId in the TRTCParams parameter. If the room does not exist, the SDK will automatically create it with the roomId value as the room number.
    2. Please set the appropriate appScene parameter according to the actual application scenario. An incorrect selection may lead to higher lagging rate or lower video definition than expected.
      • For video calls, please set TRTC_APP_SCENE_VIDEOCALL.
      • For audio calls, please set TRTC_APP_SCENE_AUDIOCALL.
    3. After successful room entry, the SDK will call back the onEnterRoom(result) event. If result is greater than 0, the room entry succeeds, and the specific value indicates the time in milliseconds (ms) used for entering the room; if result is less than 0, the room entry fails, and the specific value indicates the error code of the failure.
    public void enterRoom() {
      TRTCCloudDef.TRTCParams trtcParams = new TRTCCloudDef.TRTCParams();
      trtcParams.sdkAppId = sdkappid;
      trtcParams.userId = userid;
      trtcParams.roomId = 908;
      trtcParams.userSig = usersig;
      mTRTCCloud.enterRoom(trtcParams, TRTCCloudDef.TRTC_APP_SCENE_VIDEOCALL);
    }

    @Override
    public void onEnterRoom(long result) {
      if (result > 0) {
          toastTip("Entered room successfully; the total time used is [" + result + "] ms");
      } else {
          toastTip("Failed to enter the room; the error code is [" + result + "]");
      }
    }
    
    Note:

    • If the room entry fails, the SDK will also call back the onError event and return the parameters errCode (error code), errMsg (error message), and extraInfo (reserved parameter).
    • If you are already in a room, you must call exitRoom() to exit the current room first before entering the next room.

    To avoid unexpected events, make sure that the same appScene value is used by different clients.
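    The onEnterRoom(result) convention described in step 3 can be summarized in a small plain-Java helper (illustrative only; the class and method names are not SDK APIs):

```java
// Hypothetical helper mirroring the onEnterRoom(result) convention above:
// result > 0 means success (value = room entry time in ms);
// result < 0 means failure (value = error code).
public class EnterRoomResult {
    public static String describe(long result) {
        if (result > 0) {
            return "Entered room successfully in " + result + " ms";
        }
        return "Failed to enter room, error code " + result;
    }
}
```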

    Step 6. Subscribe to remote audio/video streams

    The SDK supports both automatic subscription and manual subscription.

    Automatic subscription mode (default)

    In automatic subscription mode, after room entry, the SDK will automatically receive audio streams from other users in the room to achieve the best "instant broadcasting" effect:

    1. When another user in the room is upstreaming audio data, you will receive the onUserAudioAvailable() event notification, and the SDK will automatically play back the audio of the remote user.
    2. You can block the audio data of a specified userId through muteRemoteAudio(userId, true) or all remote users through muteAllRemoteAudio(true). After that, the SDK will no longer pull the audio data of the corresponding remote users.
    3. When another user in the room is upstreaming video data, you will receive the onUserVideoAvailable() event notification; however, since the SDK has not received instructions on how to display the video data at this time, video data will not be processed automatically. You need to associate the video data of the remote user with the display view by calling the startRemoteView(userId, view) method.
    4. You can specify the display mode of the remote video image through setRemoteViewFillMode():
      • Fill indicates the fill mode where the image may be scaled up proportionally or cropped, but no black bars will exist.
      • Fit indicates the fit mode where the image may be scaled down proportionally to fit the screen, but black bars may exist.
    5. You can block the video data of a specified userId through stopRemoteView(userId) or all remote users through stopAllRemoteView(). After that, the SDK will no longer pull the video data of the corresponding remote users.
    @Override
    public void onUserVideoAvailable(String userId, boolean available) {
      TXCloudVideoView remoteView = remoteViewDic.get(userId);
      if (available) {
          mTRTCCloud.startRemoteView(userId, remoteView);
          mTRTCCloud.setRemoteViewFillMode(userId, TRTC_VIDEO_RENDER_MODE_FIT);
      } else {
          mTRTCCloud.stopRemoteView(userId);
      }
    }
    
    Note:

    If you do not call startRemoteView() to subscribe to the video stream immediately after receiving the onUserVideoAvailable() event callback, the SDK will stop receiving remote video data within 5 seconds.

    Manual subscription mode

    You can switch the SDK to manual subscription mode through the setDefaultStreamRecvMode() API. In this mode, the SDK will not automatically receive the audio/video data of other users in the room; you need to request it explicitly through API calls.

    1. Before room entry, call the setDefaultStreamRecvMode(false, false) API to set the SDK to manual subscription mode.
    2. When another user in the room is upstreaming audio data, you will receive the onUserAudioAvailable() event notification. At this point, manually subscribe to the user's audio data by calling muteRemoteAudio(userId, false); after receiving it, the SDK will decode and play it back.
    3. When another user in the room is upstreaming video data, you will receive the onUserVideoAvailable() event notification. At this point, manually subscribe to the user's video data by calling startRemoteView(userId, remoteView); after receiving it, the SDK will decode and render it.
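    The manual-subscription flow above amounts to tracking, for each userId, whether a stream is available and whether you have opted in. A plain-Java model of that bookkeeping (illustrative only; the class and method names are not SDK APIs):

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative model of manual-subscription bookkeeping (not SDK code):
// a stream plays only when it is both available (onUserAudioAvailable /
// onUserVideoAvailable fired with true) and explicitly subscribed.
public class SubscriptionTracker {
    private final Set<String> available = new HashSet<>();
    private final Set<String> subscribed = new HashSet<>();

    public void onAvailable(String userId, boolean isAvailable) {
        if (isAvailable) { available.add(userId); } else { available.remove(userId); }
    }

    public void subscribe(String userId)   { subscribed.add(userId); }    // e.g. muteRemoteAudio(userId, false)
    public void unsubscribe(String userId) { subscribed.remove(userId); } // e.g. muteRemoteAudio(userId, true)

    public boolean isPlaying(String userId) {
        return available.contains(userId) && subscribed.contains(userId);
    }
}
```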

    Step 7. Publish the local audio/video stream

    1. Call startLocalAudio() to enable local mic capture and encode and send the captured audio.
    2. Call startLocalPreview() to enable local camera capture and encode and send the captured video.
    3. Call setLocalViewFillMode() to set the display mode of the local video image:
      • Fill indicates the fill mode where the image may be scaled up proportionally or cropped, but no black bars will exist.
      • Fit indicates the fit mode where the image may be scaled down proportionally to fit the screen, but black bars may exist.
    4. Call setVideoEncoderParam() to set the encoding parameter of the local video, which determines the image quality of the video watched by other users in the room.
      // Sample code: send local audio/video streams
      mTRTCCloud.setLocalViewFillMode(TRTC_VIDEO_RENDER_MODE_FIT);
      mTRTCCloud.startLocalPreview(mIsFrontCamera, mLocalView);
      mTRTCCloud.startLocalAudio();
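    The Fill/Fit distinction described in step 3 boils down to which scale factor is applied to the video image. A small plain-Java illustration (not SDK code; the class and method names are hypothetical):

```java
// Illustrative math for the Fill vs Fit render modes (not SDK code):
// Fill scales by the larger of the two ratios (covers the view, may crop);
// Fit scales by the smaller (fits inside the view, may show black bars).
public class RenderScale {
    public static double fillScale(int videoW, int videoH, int viewW, int viewH) {
        return Math.max((double) viewW / videoW, (double) viewH / videoH);
    }

    public static double fitScale(int videoW, int videoH, int viewW, int viewH) {
        return Math.min((double) viewW / videoW, (double) viewH / videoH);
    }
}
```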
      

    Step 8. Exit the current room

    Call the exitRoom() method to exit the room. The SDK needs to disable and release hardware devices such as the camera and mic during room exit. Therefore, room exit is not completed as soon as the method is called. Only after the onExitRoom() callback is received can the room exit be considered completed.

    // Please wait for the `onExitRoom` event callback after calling the room exit method
    mTRTCCloud.exitRoom();
    @Override
    public void onExitRoom(int reason) {
      Log.i(TAG, "onExitRoom: reason = " + reason);
    }
    
    Note:

    If multiple audio/video SDKs are integrated into your application, please enable other SDKs only after receiving the onExitRoom callback; otherwise, hardware occupancy issues may occur.