This article explains how to configure and use the HDFS trash (recycle bin), a situation many people run into in practice, and walks through enabling it, deleting and restoring files, and bypassing it when needed.
HDFS creates a trash directory for each user: /user/<username>/.Trash/.
Every file or directory deleted through the shell is held in the trash for a retention period. Deleted items first land in the Current subdirectory; a checkpointer periodically rolls Current into a timestamped checkpoint, and once a checkpoint outlives the retention period, HDFS deletes it permanently and the data can no longer be recovered. The sketch below shows how to inspect this layout.
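A minimal sketch for inspecting the trash area (output omitted; checkpoint directory names are timestamps assigned by HDFS and will vary):
# List the trash root: "Current" holds newly deleted items, while any
# timestamped sibling directories are older checkpoints awaiting expiry.
[hadoop@hadoop002 hadoop]$ hdfs dfs -ls /user/hadoop/.Trash
# Drill into Current to see recently deleted files and directories.
[hadoop@hadoop002 hadoop]$ hdfs dfs -ls /user/hadoop/.Trash/Current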
1. The trash is disabled by default. Under the default configuration, a file deleted from HDFS cannot be recovered.
[hadoop@hadoop002 hadoop]$ hdfs dfs -rm /gw_test.log2
Deleted /gw_test.log2
2. To enable the trash, edit core-site.xml
[hadoop@hadoop002 hadoop]$ vi etc/hadoop/core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop002:9000</value>
</property>
<!-- How often (in minutes) the checkpointer running on the NameNode rolls the Current directory into a checkpoint; default: 0, meaning the value of fs.trash.interval is used -->
<property>
<name>fs.trash.checkpoint.interval</name>
<value>0</value>
</property>
<!-- How many minutes a checkpoint under .Trash is kept before being deleted; a value set on the server takes precedence over the client; default: 0, meaning the trash is disabled -->
<property>
<name>fs.trash.interval</name>
<value>1440</value><!-- retention period in minutes (24 hours) -->
</property>
</configuration>
[hadoop@hadoop002 hadoop]$
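With these settings in place, the trash cycle runs automatically; a checkpoint can also be forced by hand with hdfs dfs -expunge (a minimal sketch):
# Force a trash checkpoint immediately instead of waiting for the interval:
# Current is rolled into a new checkpoint, and checkpoints older than
# fs.trash.interval minutes are deleted.
[hadoop@hadoop002 hadoop]$ hdfs dfs -expunge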
3. Restart HDFS so the new configuration takes effect
# Stop HDFS
[hadoop@hadoop002 hadoop]$ sbin/stop-dfs.sh
Stopping namenodes on [hadoop002]
hadoop002: no namenode to stop
hadoop002: no datanode to stop
Stopping secondary namenodes [hadoop002]
hadoop002: no secondarynamenode to stop
# Start HDFS
[hadoop@hadoop002 hadoop]$ sbin/start-dfs.sh
Starting namenodes on [hadoop002]
hadoop002: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-hadoop002.out
hadoop002: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-datanode-hadoop002.out
Starting secondary namenodes [hadoop002]
hadoop002: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-hadoop002.out
[hadoop@hadoop002 hadoop]$
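To double-check that the new value is picked up, the configuration can be queried directly (a sketch; hdfs getconf reads the configuration files visible to the client, so given the core-site.xml above it should print 1440):
# Query the effective trash retention setting from the local configuration.
[hadoop@hadoop002 hadoop]$ hdfs getconf -confKey fs.trash.interval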
4. With the trash enabled, a deleted file is moved into the trash at /user/hadoop/.Trash/Current
# Delete the file /gw_test.log3
[hadoop@hadoop002 hadoop]$ hdfs dfs -rm /gw_test.log3
18/05/25 15:27:47 INFO fs.TrashPolicyDefault: Moved: 'hdfs://hadoop002:9000/gw_test.log3' to trash at: hdfs://hadoop002:9000/user/hadoop/.Trash/Current/gw_test.log3
# Check the root directory: gw_test.log3 is gone
[hadoop@hadoop002 hadoop]$ hdfs dfs -ls /
Found 3 items
drwxr-xr-x - root root 0 2018-05-23 13:16 /root
drwx------ - hadoop supergroup 0 2018-05-22 11:23 /tmp
drwxr-xr-x - hadoop supergroup 0 2018-05-22 11:22 /user
[hadoop@hadoop002 hadoop]$
# List the contents of the trash directory
[hadoop@hadoop002 hadoop]$ hdfs dfs -ls /user/hadoop/.Trash/Current
Found 1 items
-rw-r--r-- 1 hadoop supergroup 25 2018-05-23 13:04 /user/hadoop/.Trash/Current/gw_test.log3
5. Restore a file
# Move the file out of the trash to restore it
[hadoop@hadoop002 hadoop]$ hdfs dfs -mv /user/hadoop/.Trash/Current/gw_test.log3 /gw_test.log3
# Verify the file is back in the root directory
[hadoop@hadoop002 hadoop]$ hdfs dfs -ls /
Found 4 items
-rw-r--r-- 1 hadoop supergroup 25 2018-05-23 13:04 /gw_test.log3
drwxr-xr-x - root root 0 2018-05-23 13:16 /root
drwx------ - hadoop supergroup 0 2018-05-22 11:23 /tmp
drwxr-xr-x - hadoop supergroup 0 2018-05-22 11:22 /user
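Restoring is just a move, so the destination's parent directory must exist; if it was deleted too, recreate it first (a sketch; /data/logs is a hypothetical path used only for illustration):
# Recreate the original parent directory, then move the file back.
# /data/logs is hypothetical; substitute the file's real original location.
[hadoop@hadoop002 hadoop]$ hdfs dfs -mkdir -p /data/logs
[hadoop@hadoop002 hadoop]$ hdfs dfs -mv /user/hadoop/.Trash/Current/gw_test.log3 /data/logs/gw_test.log3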
6. Delete a file, bypassing the trash
# The -skipTrash option bypasses the trash: the file is deleted immediately and cannot be recovered
[hadoop@hadoop002 hadoop]$ hdfs dfs -rm -skipTrash /gw_test.log3
Deleted /gw_test.log3
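The same flag applies to directories when combined with -r; since nothing goes to the trash, double-check the path before running it (a sketch; /tmp/old_logs is a hypothetical directory):
# Recursively delete a directory, bypassing the trash entirely.
# /tmp/old_logs is hypothetical; once this runs, the data is gone for good.
[hadoop@hadoop002 hadoop]$ hdfs dfs -rm -r -skipTrash /tmp/old_logs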
That covers the configuration and use of the HDFS trash. Thanks for reading.