This article shows how to use the Hadoop distcp command to copy files across clusters.
Hadoop provides the distcp command for copying data between different Hadoop clusters.

Basic usage: hadoop distcp -pbc hdfs://namenode1/test hdfs://namenode2/test

A distcp copy runs as a map-only MapReduce job: it uses Map tasks but no Reduce phase.
usage: distcp OPTIONS [source_path...] <target_path>
              OPTIONS
 -append                Reuse existing data in target files and append new
                        data to them if possible
 -async                 Should distcp execution be blocking
 -atomic                Commit all changes or none
 -bandwidth <arg>       Specify bandwidth per map in MB
 -delete                Delete from target, files missing in source
 -diff <arg>            Use snapshot diff report to identify the difference
                        between source and target
 -f <arg>               List of files that need to be copied
 -filelimit <arg>       (Deprecated!) Limit number of files copied to <= n
 -i                     Ignore failures during copy
 -log <arg>             Folder on DFS where distcp execution logs are saved
 -m <arg>               Max number of concurrent maps to use for copy
 -mapredSslConf <arg>   Configuration for ssl config file, to use with
                        hftps://
 -overwrite             Choose to overwrite target files unconditionally,
                        even if they exist.
 -p <arg>               preserve status (rbugpcaxt)(replication, block-size,
                        user, group, permission, checksum-type, ACL, XATTR,
                        timestamps). If -p is specified with no <arg>, then
                        preserves replication, block size, user, group,
                        permission, checksum type and timestamps. raw.*
                        xattrs are preserved when both the source and
                        destination paths are in the /.reserved/raw
                        hierarchy (HDFS only). raw.* xattr preservation is
                        independent of the -p flag. Refer to the DistCp
                        documentation for more details.
 -sizelimit <arg>       (Deprecated!) Limit number of files copied to <= n
                        bytes
 -skipcrccheck          Whether to skip CRC checks between source and
                        target paths.
 -strategy <arg>        Copy strategy to use. Default is dividing work
                        based on file sizes
 -tmp <arg>             Intermediate work path to be used for atomic commit
 -update                Update target, copying only missing files or
                        directories
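A few common combinations of the options above can be sketched as shell commands (the hostnames namenode1/namenode2 and the /test paths follow the earlier example and are placeholders; adjust them for your clusters):

```shell
# Incremental sync: copy only files missing or changed on the target,
# and delete target files that no longer exist on the source.
hadoop distcp -update -delete hdfs://namenode1/test hdfs://namenode2/test

# Limit the copy to 20 concurrent map tasks, each capped at 50 MB/s,
# to avoid saturating the link between clusters.
hadoop distcp -m 20 -bandwidth 50 hdfs://namenode1/test hdfs://namenode2/test

# Preserve block size, user, group and permission bits on copied files.
hadoop distcp -pbugp hdfs://namenode1/test hdfs://namenode2/test
```

Note that -update compares file size and checksum, so it pairs naturally with -delete for keeping a target directory in sync with the source.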
Because the RPC protocol versions differ between Hadoop releases, you cannot copy directly between clusters of different versions with hadoop distcp hdfs://namenode1/test hdfs://namenode2/test.
For copies between different Hadoop versions, use HftpFileSystem instead. This is a read-only file system, so DistCp must be run on the destination cluster (more precisely, on TaskTrackers that can write to the destination cluster). The source URL takes the form hftp://<dfs.http.address>/<path> (by default, dfs.http.address is <namenode>:50070).
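Following the note above, a cross-version copy over HFTP might look like this; the command must be launched from the destination cluster, and 50070 is the default dfs.http.address port (substitute your own if it differs):

```shell
# Run on the destination cluster: read from the older source cluster over
# the read-only HFTP interface, write into the local (destination) HDFS.
hadoop distcp hftp://namenode1:50070/test hdfs://namenode2/test
```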