The HDFS TO COS tool is used to copy data from HDFS to Tencent Cloud COS.
Linux or Windows
JDK v1.7 or v1.8
For more information on environment installation and configuration, please see Java Installation and Configuration.
Sync the `core-site.xml` file of the HDFS cluster to be copied to the `conf` directory. The `core-site.xml` file contains the configuration information of the NameNode.
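For illustration, the NameNode address in `core-site.xml` is typically carried by the `fs.defaultFS` property (the host name and port below are placeholders; replace them with your cluster's values):

```xml
<configuration>
  <!-- Address of the HDFS NameNode; host and port are placeholders -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode-host:8020</value>
  </property>
</configuration>
```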
In the `cos_info.conf` configuration file, configure the bucket, region, and API key information. The bucket name is formed by connecting a user-defined string and the system-generated `APPID` with a hyphen, for example, `examplebucket-1250000000`.
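A sketch of what such a configuration might look like. The exact key names depend on the tool version; treat the names below as placeholders and consult the `cos_info.conf` template shipped with the tool:

```ini
# Placeholder key names -- check the cos_info.conf shipped with the tool
bucket=examplebucket-1250000000
region=ap-guangzhou
secret_id=YOUR_SECRET_ID
secret_key=YOUR_SECRET_KEY
```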
If a command-line parameter conflicts with the configuration file, the command-line parameter prevails.
Linux is used as an example below.
The execution result is as shown below:
Copy from HDFS to COS. If a file with the same name as the file to be copied already exists in COS, the existing file will be overwritten.
./hdfs_to_cos_cmd --hdfs_path=/tmp/hive --cos_path=/hdfs/20170224/
Copy from HDFS to COS. If a file with the same name and length as the file to be copied already exists in COS, the copy will be skipped (this is suitable for repeated copies).
./hdfs_to_cos_cmd --hdfs_path=/tmp/hive --cos_path=/hdfs/20170224/ -skip_if_len_match
Only the length is checked here, because computing digests of the files in Hadoop would incur very high overhead.
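The skip decision can be sketched as follows. This is a simplified illustration, not the tool's actual code; `remote_size` stands in for a length already listed from COS:

```python
from typing import Optional

def should_skip(local_size: int, remote_size: Optional[int],
                skip_if_len_match: bool) -> bool:
    """Return True if the upload can be skipped.

    Only file lengths are compared; computing a digest for every HDFS
    file would require reading it in full, which is far more expensive.
    """
    if remote_size is None:        # no object with this name in COS yet
        return False
    if not skip_if_len_match:      # default behavior: always overwrite
        return False
    return local_size == remote_size
```

For example, `should_skip(1024, 1024, True)` returns `True`, while omitting the flag (`should_skip(1024, 1024, False)`) forces an overwrite.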
Copy from HDFS to COS. If a Har directory (Hadoop archive file) exists in HDFS, the `.har` files can be automatically decompressed by specifying the `--decompress_har` parameter.
./hdfs_to_cos_cmd --decompress_har --hdfs_path=/tmp/hive --cos_path=/hdfs/20170224/
If the `--decompress_har` parameter is not specified, the Har directory will be copied as an ordinary HDFS directory; that is, the files in the Har directory, such as `index` and `masterindex`, will be copied as-is.
- `conf`: configuration file directory, which stores `core-site.xml` and `cos_info.conf`
- `log`: log directory
- `src`: Java source program
- `dep`: compiled executable JAR package
Please make sure that the entered configuration information is correct, including the bucket, region, and API key information. The bucket name is formed by connecting the user-defined string and the system-generated `APPID` with a hyphen, such as `examplebucket-1250000000`. Please also make sure that the time on the server is in sync with Beijing time (a difference of about one minute is acceptable; if the difference is large, set the server time correctly).
Please make sure that the server where the copy program runs can also access the DataNodes. It can happen that the NameNode uses a public IP address and is reachable, while the DataNodes holding the blocks use private IP addresses and cannot be accessed directly. It is therefore recommended to run the copy program on a Hadoop node, so that both the NameNode and the DataNodes are reachable.
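A quick way to verify reachability before starting a copy is a plain TCP connect test against each node, sketched below. The helper and the host names are illustrative; the ports also vary by cluster (8020 is a common NameNode RPC port, and the DataNode data-transfer port is typically 50010 on Hadoop 2.x and 9866 on Hadoop 3.x):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder hosts): check the NameNode and every DataNode
# the copy program will read blocks from.
# for host in ["namenode-host", "datanode-1", "datanode-2"]:
#     print(host, can_connect(host, 9866))
```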
Please use the current account to download a file with the Hadoop command, check whether everything is correct, and then use the synchronization tool to sync the data in Hadoop.
By default, files that already exist in COS will be overwritten on repeated upload, unless you explicitly specify the `-skip_if_len_match` parameter, which skips files during upload if their length matches that of the existing file.
The `cos_path` is treated as a directory by default, and the files copied from HDFS will be stored under this directory.
To copy data from Tencent Cloud EMR HDFS to COS, you are advised to use the high-performance DistCp tool. For more information, please see Migrating Data Between HDFS and COS.