This article walks through how to investigate small files in HDFS by analyzing the fsimage. The steps are described in detail and should make a useful reference; interested readers are encouraged to read to the end!
On the Cloudera Manager monitoring page, most of the HDFS machines were showing alerts along the lines of "Concerning: The DataNode has xxxxxx blocks. Warning threshold: 500,000 blocks." CM's advice for this alert:
This is a DataNode health check that verifies whether a DataNode holds too many blocks. A DataNode with too many blocks may suffer degraded performance: it needs a larger Java heap and may experience long garbage-collection pauses. A high block count can also indicate the presence of many small files. HDFS is not optimized for handling many small files, and processing time can suffer when operating across them.
If only some DataNodes have a high block count, running the HDFS rebalance command can resolve the problem by moving data between DataNodes. If the rebalance command reports the cluster as balanced without fixing the block imbalance, the problem is related to the presence of many small files; see the HDFS documentation for best practices on resolving it. If many small files are not a concern for your use case, consider disabling this health test. If all DataNodes have high block counts and the issue is unrelated to small files, you should add more DataNodes.
Approach: confirm whether the HDFS cluster really does contain a large number of small files, merge small files as actual needs dictate, and clean up and archive historical data in a timely fashion.
First, fetch the latest fsimage from the NameNode to a local directory:

# Usage: hdfs dfsadmin [-fetchImage <local directory>]
$ hdfs dfsadmin -fetchImage /data
20/10/27 17:48:04 INFO namenode.TransferFsImage: Opening connection to http://namenode1:50070/imagetransfer?getimage=1&txid=latest
20/10/27 17:48:04 INFO namenode.TransferFsImage: Image Transfer timeout configured to 60000 milliseconds
20/10/27 17:48:05 INFO namenode.TransferFsImage: Transfer took 1.14s at 357022.87 KB/s
$ ll -h /data/fsimage_0000000000930647029
-rw-r----- 1 app app 397M Oct 27 17:48 /data/fsimage_0000000000930647029
Following the official Offline Image Viewer Guide, read the fsimage and dump it as delimited text, as follows:
$ export HADOOP_OPTS="-Xmx32G"  # increase the heap size to avoid OOM
$ hdfs oiv -i /data/fsimage_0000000000930647029 -o /data/fsimage.csv -p Delimited -delimiter ","
20/10/27 19:35:45 INFO offlineImageViewer.PBImageTextWriter: Loading string table
20/10/27 19:35:45 INFO offlineImageViewer.FSImageHandler: Loading 20 strings
20/10/27 19:35:45 INFO offlineImageViewer.PBImageTextWriter: Loading inode references
20/10/27 19:35:45 INFO offlineImageViewer.FSImageHandler: Loading inode references
20/10/27 19:35:45 INFO offlineImageViewer.FSImageHandler: Loaded 0 inode references
20/10/27 19:35:45 INFO offlineImageViewer.PBImageTextWriter: Loading directories
20/10/27 19:35:45 INFO offlineImageViewer.PBImageTextWriter: Loading directories in INode section.
20/10/27 19:35:52 INFO offlineImageViewer.PBImageTextWriter: Found 224875 directories in INode section.
20/10/27 19:35:52 INFO offlineImageViewer.PBImageTextWriter: Finished loading directories in 6598ms
20/10/27 19:35:52 INFO offlineImageViewer.PBImageTextWriter: Loading INode directory section.
20/10/27 19:35:54 INFO offlineImageViewer.PBImageTextWriter: Scanned 214127 INode directories to build namespace.
20/10/27 19:35:54 INFO offlineImageViewer.PBImageTextWriter: Finished loading INode directory section in 1784ms
20/10/27 19:35:54 INFO offlineImageViewer.PBImageTextWriter: Found 3697297 INodes in the INode section
20/10/27 19:36:55 INFO offlineImageViewer.PBImageTextWriter: Outputted 3697297 INodes.
$ head /data/fsimage.csv
Path,Replication,ModificationTime,AccessTime,PreferredBlockSize,BlocksCount,FileSize,NSQUOTA,DSQUOTA,Permission,UserName,GroupName
/user/hive/warehouse/test.db/table1/partitionday=20200914/partitionhour=16/000151_0.lzo,3,2020-09-15 04:15,2020-09-15 04:15,134217728,1,438472,0,0,-rwxrwxrwt,user1,hive
/user/hive/warehouse/test.db/table1/partitionday=20200914/partitionhour=12/000151_0.lzo,3,2020-09-15 04:15,2020-09-15 04:15,134217728,1,437489,0,0,-rwxrwxrwt,user1,hive
/user/hive/warehouse/test.db/table1/partitionday=20200914/partitionhour=11/000151_0.lzo,3,2020-09-15 04:15,2020-09-15 04:15,134217728,1,437340,0,0,-rwxrwxrwt,user1,hive
/user/hive/warehouse/test.db/table1/partitionday=20200914/partitionhour=09/000151_0.lzo,3,2020-09-15 04:15,2020-09-15 04:15,134217728,1,435482,0,0,-rwxrwxrwt,user1,hive
/user/hive/warehouse/test.db/table1/partitionday=20200914/partitionhour=05/000151_0.lzo,3,2020-09-15 04:15,2020-09-15 04:15,134217728,1,420584,0,0,-rwxrwxrwt,user1,hive
/user/hive/warehouse/test.db/table1/partitionday=20200914/partitionhour=07/000151_0.lzo,3,2020-09-15 04:15,2020-09-15 04:15,134217728,1,432046,0,0,-rwxrwxrwt,user1,hive
/user/hive/warehouse/test.db/table1/partitionday=20200914/partitionhour=19/000151_0.lzo,3,2020-09-15 04:15,2020-09-15 04:15,134217728,1,436986,0,0,-rwxrwxrwt,user1,hive
/user/hive/warehouse/test.db/table1/partitionday=20200914/partitionhour=14/000151_0.lzo,3,2020-09-15 04:15,2020-09-15 04:15,134217728,1,436246,0,0,-rwxrwxrwt,user1,hive
/user/hive/warehouse/test.db/table1/partitionday=20200914/partitionhour=10/000193_0.lzo,3,2020-09-15 04:15,2020-09-15 04:15,134217728,1,443284,0,0,-rwxrwxrwt,user1,hive
$ sed -i -e "1d" /data/fsimage.csv  # drop the header row from fsimage.csv
Create a Hive table over the CSV dump so it can be queried:

CREATE TABLE `fsimage_info_csv`(
  `path` string,
  `replication` int,
  `modificationtime` string,
  `accesstime` string,
  `preferredblocksize` bigint,
  `blockscount` int,
  `filesize` bigint,
  `nsquota` string,
  `dsquota` string,
  `permission` string,
  `username` string,
  `groupname` string)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
  'field.delim'=',',
  'serialization.format'=',')
STORED AS INPUTFORMAT
  'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
  'hdfs://nameservice1/user/hive/warehouse/fsimage_info_csv';
$ hdfs dfs -put /data/fsimage.csv /user/hive/warehouse/fsimage_info_csv/
You can also use the fsimage to look at the distribution of file sizes, as follows:
$ hdfs oiv -p FileDistribution -i fsimage_0000000000930647029 -o fs_distribution
$ cat fs_distribution
Processed 0 inodes.
Processed 1048576 inodes.
Processed 2097152 inodes.
Processed 3145728 inodes.
Size          NumFiles
0             209746
2097152       2360944
4194304       184952
6291456       121774
8388608       37136
// middle omitted
10485760      36906
12582912      51616
14680064      19209
16777216      14617
18874368      7655
20971520      5625
23068672      26746
25165824      112429
27262976      10304
29360128      12315
31457280      11966
33554432      15739
35651584      10180
115425148928  1
totalFiles = 3472422
totalDirectories = 224875
totalBlocks = 3401315
totalSpace = 122170845300822
maxFileSize = 115423398874
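The histogram above can be post-processed to quantify how much of the namespace sits in small files. Below is a minimal Python sketch (mine, not from the original article) that parses the `Size`/`NumFiles` pairs from a `FileDistribution` report and tallies files at or below a cutoff; it assumes the report format shown above, skipping the progress and summary lines:

```python
def small_file_share(lines, cutoff=4 * 1024 * 1024):
    """Parse 'Size NumFiles' pairs from `hdfs oiv -p FileDistribution`
    output and return (files_in_buckets_at_or_below_cutoff, total_files)."""
    small = total = 0
    for line in lines:
        parts = line.split()
        # skip the header, 'Processed ... inodes.' lines, and 'totalX = N' summaries
        if len(parts) != 2 or not all(p.isdigit() for p in parts):
            continue
        size, num = int(parts[0]), int(parts[1])
        total += num
        if size <= cutoff:
            small += num
    return small, total

# a few buckets copied from the report above
report = """Size NumFiles
0 209746
2097152 2360944
4194304 184952
6291456 121774
115425148928 1"""

small, total = small_file_share(report.splitlines())
print(small, total)
```

Note that `FileDistribution` buckets are labeled by their upper bound, so the cutoff here is applied to the bucket label, not to individual file sizes.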
As an example, count the number of small files per directory, aggregated over the first few path components:
SELECT dir_path, COUNT(*) AS small_file_num
FROM (
  SELECT relative_size, dir_path
  FROM (
    SELECT (CASE filesize < 4194304 WHEN TRUE THEN 'small' ELSE 'large' END) AS relative_size,
           concat('/', split(PATH, '\/')[1],
                  '/', split(PATH, '\/')[2],
                  '/', split(PATH, '\/')[3],
                  '/', split(PATH, '\/')[4],
                  '/', split(PATH, '\/')[5]) AS dir_path
    FROM DEFAULT.fsimage_info_csv
    WHERE permission NOT LIKE 'd%'  -- files only: directories always report filesize 0
  ) t1
  WHERE relative_size = 'small'
) t2
GROUP BY dir_path
ORDER BY small_file_num;
The corresponding (anonymized) output looks like this:
/data/load/201905032130                   1
// middle omitted
/user/hive/warehouse/teset.db/table1      2244
/user/hive/warehouse/teset.db/table2      2244
/user/hive/warehouse/teset.db/table3      2244
/user/hive/warehouse/teset.db/table4      2246
/user/hive/warehouse/teset.db/table5      2246
/user/hive/warehouse/teset.db/table6      2248
/user/hive/warehouse/teset.db/table7      2508
/user/hive/warehouse/teset.db/table8      3427
Time taken: 53.929 seconds, Fetched: 32947 row(s)
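For a quick local check, the same per-directory aggregation can be sketched in Python directly over fsimage.csv, without a Hive round-trip. This is a minimal sketch of my own: it assumes the Delimited column order shown earlier (header already removed) and reuses the 4 MiB "small" threshold:

```python
from collections import Counter

def small_files_by_dir(rows, depth=5, cutoff=4 * 1024 * 1024):
    """Count files below `cutoff` bytes, grouped by the first `depth` path
    components. `rows` are lines of fsimage.csv in the Delimited order:
    Path,Replication,ModificationTime,AccessTime,PreferredBlockSize,
    BlocksCount,FileSize,NSQUOTA,DSQUOTA,Permission,UserName,GroupName."""
    counts = Counter()
    for row in rows:
        cols = row.split(",")
        path, filesize, permission = cols[0], int(cols[6]), cols[9]
        if permission.startswith("d"):   # skip directories (filesize is 0)
            continue
        if filesize >= cutoff:
            continue
        prefix = "/" + "/".join(path.lstrip("/").split("/")[:depth])
        counts[prefix] += 1
    return counts

# hypothetical sample rows in the same format as the dump above
rows = [
    "/user/hive/warehouse/test.db/t1/day=20200914/f1.lzo,3,2020-09-15 04:15,2020-09-15 04:15,134217728,1,438472,0,0,-rwxrwxrwt,user1,hive",
    "/user/hive/warehouse/test.db/t1/day=20200914/f2.lzo,3,2020-09-15 04:15,2020-09-15 04:15,134217728,1,437489,0,0,-rwxrwxrwt,user1,hive",
    "/user/hive/warehouse/test.db,0,2020-09-15 04:15,1970-01-01 08:00,0,0,0,-1,-1,drwxrwxrwt,user1,hive",
]
print(small_files_by_dir(rows))
```

On a 400 MB dump with millions of rows this still finishes quickly, since it is a single streaming pass; for repeated ad-hoc queries the Hive table remains the more flexible option.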
Based on the directories identified, trace back to the jobs that write to them and try to optimize those jobs to avoid producing small files
Merge and archive existing small files promptly
Clean up historical small files promptly
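As one illustration of the first two points for Hive-produced data, Hive can be told to merge its own output at write time. The properties below are standard Hive merge settings; the threshold values are illustrative choices of mine, not values from the original cluster:

```sql
-- merge small output files at the end of map-only and map-reduce jobs
SET hive.merge.mapfiles=true;
SET hive.merge.mapredfiles=true;
-- trigger a merge pass when the job's average output file is below ~16 MB
SET hive.merge.smallfiles.avgsize=16777216;
-- target size for merged files, ~256 MB
SET hive.merge.size.per.task=268435456;
```

With these set, a small-file-producing INSERT gets an extra merge stage, trading a little job latency for far fewer blocks on the NameNode.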