Last updated: 2020-08-12 10:02:33

    Usage

    What is Hadoop-COS?

    Hadoop-COS is a tool that integrates Tencent Cloud COS with big data computing frameworks such as Apache Hadoop, Spark, and Tez, allowing you to read and write COS data just as you would with HDFS. It can also be used as deep storage for Druid and other query and analysis engines.
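    For example, once Hadoop-COS is configured, standard HDFS-style shell commands work against a cosn:// path. A minimal sketch (the bucket name below is a placeholder; substitute your own bucket and APPID):

    # List and upload files through the cosn:// scheme (bucket name is illustrative)
    hadoop fs -ls cosn://examplebucket-appid/
    hadoop fs -put ./localfile.txt cosn://examplebucket-appid/dir/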

    How do I use the Hadoop-COS jar file with self-built Hadoop?

    Edit the Hadoop-COS POM file to make sure the Hadoop version it builds against matches that of your Hadoop deployment. Next, place the Hadoop-COS jar and the COS Java SDK jar in the hadoop/share/hadoop/common/lib directory. For more information, see Hadoop-COS.
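    A minimal sketch of the install step (jar names and versions are illustrative; use the artifacts you actually built or downloaded):

    # Copy the Hadoop-COS connector and the COS Java SDK into Hadoop's common lib directory
    cp hadoop-cos-X.Y.Z.jar $HADOOP_HOME/share/hadoop/common/lib/
    cp cos_api-bundle-X.Y.Z.jar $HADOOP_HOME/share/hadoop/common/lib/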

    CosFileSystem Class Not Found

    Why do I receive the following message during loading, indicating that the class CosFileSystem was not found: “Error: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.CosFileSystem not found”?

    Possible cause
    The configuration was loaded correctly, but the Hadoop classpath does not include the location of the Hadoop-COS jar.

    Solution
    Add the location of the Hadoop-COS jar to the Hadoop classpath.
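    A minimal sketch, assuming the jar sits under $HADOOP_HOME/share/hadoop/common/lib (path and jar name are illustrative):

    # Check whether the jar is already visible on the classpath
    hadoop classpath | tr ':' '\n' | grep -i cos

    # If it is not, append it via hadoop-env.sh or your shell profile
    export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HADOOP_HOME/share/hadoop/common/lib/hadoop-cos-X.Y.Z.jar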

    Why am I prompted that the class CosFileSystem was not found when using Apache Hadoop?

    The COS connector comes in two versions: the one bundled with Apache Hadoop and Tencent Cloud's own Hadoop-COS. They use different values for fs.cosn.impl and fs.AbstractFileSystem.cosn.impl, so configuring the class name from one version while running the jar of the other causes this ClassNotFoundException:

    • Apache Hadoop:
      <property>
            <name>fs.cosn.impl</name>
            <value>org.apache.hadoop.fs.cosn.CosNFileSystem</value>
      </property>
      <property>
            <name>fs.AbstractFileSystem.cosn.impl</name>
            <value>org.apache.hadoop.fs.cosn.CosN</value>
      </property>
    • Tencent Cloud Hadoop-COS:
      <property>
            <name>fs.cosn.impl</name>
            <value>org.apache.hadoop.fs.CosFileSystem</value>
      </property>
      <property>
            <name>fs.AbstractFileSystem.cosn.impl</name>
            <value>org.apache.hadoop.fs.CosN</value>
      </property>

    Frequency Control and Bandwidth

    Why am I receiving a 503 error?

    In big data scenarios, a high level of concurrency may trigger the COS frequency control policy, resulting in a 503 Reduce your request rate error. You can have failed requests retried by configuring the fs.cosn.maxRetries parameter, which defaults to a maximum of 200 retries.
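    For example, to set the retry limit explicitly in core-site.xml (200 is the documented default; raise it only if throttling persists):

    <property>
            <name>fs.cosn.maxRetries</name>
            <value>200</value>
    </property>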

    Why does my bandwidth limit setting seem to be ineffective?

    The bandwidth limit setting fs.cosn.traffic.limit (in b/s) is supported only by Hadoop-COS versions tagged 5.8.3 or later. For more information, see Hadoop-COS on GitHub.
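    A sample core-site.xml entry; the value is illustrative (838860800 b/s equals 100 MB/s):

    <property>
            <name>fs.cosn.traffic.limit</name>
            <value>838860800</value>
    </property>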

    Parts

    How do I set a reasonable part size for a multipart upload via Hadoop-COS?

    Hadoop-COS uploads large files to COS via concurrent uploads of multiple parts. You can control the size of each part by configuring fs.cosn.upload.part.size (in bytes).

    Because a COS multipart upload allows at most 10,000 parts for a single file, you need to estimate the largest file you may need to upload in order to determine the value of this parameter. For example, with a part size of 8 MB, you can upload a single file of up to 78 GB (8 MB × 10,000 parts). The maximum supported part size is 2 GB, which means the largest supported single file size is 19 TB. If the number of parts exceeds 10,000, a 400 error is thrown; if you encounter this error, check whether this parameter is configured correctly.
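    For example, to set an 8 MB part size (8 MB = 8388608 bytes) in core-site.xml:

    <property>
            <name>fs.cosn.upload.part.size</name>
            <value>8388608</value>
    </property>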

    Why can’t I see a large file immediately after it was uploaded to COS?

    Hadoop-COS uploads all large files, i.e., those larger than the part size (fs.cosn.upload.part.size), through multipart upload. The file becomes visible in COS only after all of its parts have been uploaded. Currently, COS does not support append operations.

    Buffers

    Which buffer type should I choose for uploads? What is the difference between them?

    You can choose a buffer type by setting fs.cosn.upload.buffer to one of the following three values (a sample configuration follows the list):

    • mapped_disk (default): Make sure fs.cosn.tmp.dir points to a directory with enough free space, to avoid filling up the disk at runtime.
    • direct_memory: Uses JVM off-heap memory (outside JVM control; not recommended).
    • non_direct_memory: Uses JVM on-heap memory; we recommend configuring it to 128 MB.
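    A sample configuration for the non_direct_memory type with the recommended 128 MB buffer (128 MB = 134217728 bytes):

    <property>
            <name>fs.cosn.upload.buffer</name>
            <value>non_direct_memory</value>
    </property>
    <property>
            <name>fs.cosn.upload.buffer.size</name>
            <value>134217728</value>
    </property>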

    Why do I get the following error message when I set the buffer type to mapped_disk: create buffer failed. buffer type: mapped_disk, buffer factory:org.apache.hadoop.fs.buffer.CosNMappedBufferFactory?

    Possible cause
    You do not have read or write permission for the temporary directory used by Hadoop-COS. The directory used by default is /tmp/hadoop_cos and can be customized by configuring fs.cosn.tmp.dir.

    Solution
    Grant read and write permissions on the temporary directory to the account that runs Hadoop-COS.
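    A minimal sketch of the check and fix, using the default path documented above (the user and group names are placeholders):

    # Check who owns the temporary directory and what its permissions are
    ls -ld /tmp/hadoop_cos

    # Give the account that runs the Hadoop process ownership and read/write access
    chown -R hadoop-user:hadoop-group /tmp/hadoop_cos
    chmod -R u+rw /tmp/hadoop_cos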

    Runtime Exceptions

    What should I do if the following exception is thrown when I perform computing tasks: java.net.ConnectException: Cannot assign requested address (connect failed) (state=42000,code=40000)?

    Generally, this exception occurs when too many short-lived (non-persistent) TCP connections are established within a short period of time. After a connection is closed, the local port enters a 60-second TIME_WAIT period by default instead of being reclaimed immediately. During this period, no local ports are available to establish a new socket connection with the server.

    Solution

    Modify the following kernel parameters in the /etc/sysctl.conf file:

    net.ipv4.tcp_timestamps = 1                  # Enables TCP timestamps
    net.ipv4.tcp_tw_reuse = 1                    # Allows sockets in TIME_WAIT status to be reused for new TCP connections
    net.ipv4.tcp_tw_recycle = 1                  # Enables fast reclamation of sockets in TIME_WAIT status
    net.ipv4.tcp_syncookies = 1                  # Enables SYN cookies (default: 0). When the SYN waiting queue overflows, cookies defend against small-scale SYN attacks.
    net.ipv4.tcp_fin_timeout = 10                # Waiting time after a port is released
    net.ipv4.tcp_keepalive_time = 1200           # Interval at which TCP sends keepalive messages (default: 2 hours). Change it to 20 minutes.
    net.ipv4.ip_local_port_range = 1024 65000    # Port range for outbound connections (default: 32768-61000). Change it to 1024-65000.
    net.ipv4.tcp_max_tw_buckets = 10240          # Maximum number of sockets in TIME_WAIT status (default: 180000). Beyond this, new TIME_WAIT sockets are released immediately. Reduce this value to keep fewer sockets in TIME_WAIT status.
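    After saving the file, apply the changes (a standard sysctl step, not specific to Hadoop-COS):

    sysctl -p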

    When I upload a file, why does the exception “java.lang.Thread.State: TIMED_WAITING (parking)” occur, with “org.apache.hadoop.fs.BufferPool.getBuffer” and “java.util.concurrent.locks.LinkedBlockingQueue.poll” appearing in the stack trace?

    Possible cause

    You may have repeatedly initialized the buffer without actually triggering the write operation.

    Solution

    Change the configuration to the following:

    <property>
            <name>fs.cosn.upload.buffer</name>
            <value>mapped_disk</value>
    </property>
    <property>
            <name>fs.cosn.upload.buffer.size</name>
            <value>-1</value>
    </property>
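    According to the Hadoop-COS documentation, a buffer size of -1 removes the upfront size limit for the mapped_disk buffer type, so the buffer is bounded only by the free disk space under fs.cosn.tmp.dir.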
