Official documentation: http://hadoop.apache.org/docs/r1.2.1/file_system_shell.html
1. Log in to the master node and switch to the hdfs user
[hdfs@cdhm1 ~]# su - hdfs
2. List the subdirectories and files under a directory
[hdfs@cdhm1 ~]$ hadoop fs -ls /
Found 2 items
drwxrwxrwt   - hdfs supergroup          0 2017-05-23 16:39 /tmp
drwxr-xr-x   - hdfs supergroup          0 2017-05-24 15:45 /user
3. Create a test directory in the Hadoop file system
[hdfs@cdhm1 ~]$ hadoop fs -mkdir /user/hdfs/test
Listing the directory shows test, so the creation succeeded:
[hdfs@cdhm1 ~]$ hadoop fs -ls /user/hdfs/
Found 4 items
drwx------   - hdfs supergroup          0 2017-05-26 08:00 /user/hdfs/.Trash
drwxr-xr-x   - hdfs supergroup          0 2017-06-05 15:19 /user/hdfs/.sparkStaging
drwx------   - hdfs supergroup          0 2017-05-24 15:46 /user/hdfs/.staging
drwxr-xr-x   - hdfs supergroup          0 2017-07-03 10:19 /user/hdfs/test
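A small side note, as a hedged sketch: like the Unix `mkdir -p`, `hadoop fs -mkdir -p` creates any missing parent directories in one call (the nested path below is a made-up example, not from the original steps).

```shell
# Create a nested directory chain in one call; -p creates
# any missing parents (hypothetical example path).
hadoop fs -mkdir -p /user/hdfs/test/input/2017

# Recursively list the chain to verify it was created:
hadoop fs -ls -R /user/hdfs/test
```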
4. Delete the test directory
[hdfs@cdhm1 ~]$ hadoop fs -rmr /user/hdfs/test
17/07/03 10:46:06 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 1440 minutes, Emptier interval = 0 minutes.
Moved: 'hdfs://cdhm1:8020/user/hdfs/test' to trash at: hdfs://cdhm1:8020/user/hdfs/.Trash/Current
Listing again confirms that the test directory is gone:
[hdfs@cdhm1 ~]$ hadoop fs -ls /user/hdfs/
Found 3 items
drwx------   - hdfs supergroup          0 2017-07-03 10:46 /user/hdfs/.Trash
drwxr-xr-x   - hdfs supergroup          0 2017-06-05 15:19 /user/hdfs/.sparkStaging
drwx------   - hdfs supergroup          0 2017-05-24 15:46 /user/hdfs/.staging
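As the "Moved ... to trash" message shows, the delete did not destroy the data: it moved it into the user's .Trash directory, where it is kept for the deletion interval (1440 minutes above) before being purged. A hedged sketch of working with the trash, assuming the same paths as above (note that `-rmr` is deprecated in newer releases in favor of `-rm -r`):

```shell
# Delete and bypass the trash entirely (no recovery possible):
hadoop fs -rm -r -skipTrash /user/hdfs/test

# Restore a trashed directory by moving it back out of .Trash
# (the trash keeps the full original path under Current):
hadoop fs -mv /user/hdfs/.Trash/Current/user/hdfs/test /user/hdfs/test

# Force an immediate purge of trash checkpoints older than the interval:
hadoop fs -expunge
```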
5. Download a file from the Hadoop file system to the local machine
[hdfs@cdhm1 ~]$ hadoop fs -get /user/hdfs/test2
Checking locally, test2 was downloaded successfully:
[hdfs@cdhm1 ~]$ ls
test2  test.txt
6. Upload a local file to the Hadoop file system
[hdfs@cdhm1 ~]$ hadoop fs -put test.txt /user/hdfs/
Listing /user/hdfs confirms that test.txt was uploaded:
[hdfs@cdhm1 ~]$ hadoop fs -ls /user/hdfs/
Found 5 items
drwx------   - hdfs supergroup          0 2017-07-03 10:46 /user/hdfs/.Trash
drwxr-xr-x   - hdfs supergroup          0 2017-06-05 15:19 /user/hdfs/.sparkStaging
drwx------   - hdfs supergroup          0 2017-05-24 15:46 /user/hdfs/.staging
-rw-r--r--   3 hdfs supergroup          0 2017-07-03 11:13 /user/hdfs/test.txt
drwxr-xr-x   - hdfs supergroup          0 2017-07-03 11:04 /user/hdfs/test2
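To gain confidence that an upload arrived intact, the put and get steps above can be combined into a round-trip check. A minimal sketch, assuming the same test.txt as in the steps above (the .copy filename is made up for the comparison):

```shell
# Upload the local file, then download it under a different name:
hadoop fs -put test.txt /user/hdfs/
hadoop fs -get /user/hdfs/test.txt ./test.txt.copy

# Byte-for-byte comparison; prints nothing and exits 0 if identical:
diff test.txt test.txt.copy && echo "round trip OK"

# Or just compare sizes without downloading:
hadoop fs -du /user/hdfs/test.txt
```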