Hadoop SecondaryNameNode exception: Inconsistent checkpoint fields
Symptoms: with no traffic at all, the NameNode process sat at 100% CPU with very high memory usage, and its log showed no errors.
Meanwhile the SecondaryNameNode logged the following error:
java.io.IOException: Inconsistent checkpoint fields.
LV = -57 namespaceID = 371613059 cTime = 0 ; clusterId = CID-b8a5f273-515a-434c-87c0-4446d4794c85 ; blockpoolId = BP-1082677108-127.0.0.1-1433842542163.
Expecting respectively: -57; 1687946377; 0; CID-603ff285-de5a-41a0-85e8-f033ea1916fc; BP-2591078-127.0.0.1-1433770362761.
    at org.apache.hadoop.hdfs.server.namenode.CheckpointSignature.validateStorageInfo(CheckpointSignature.java:134)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:531)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:395)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$1.run(SecondaryNameNode.java:361)
    at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:411)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:357)
    at java.lang.Thread.run(Thread.java:662)
This exception can have many causes. One of them is that the edit logs in the SecondaryNameNode's data directory are inconsistent with the NameNode's current data version (note the mismatched namespaceID and clusterID in the error above).
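A quick way to confirm the mismatch is to compare the VERSION files of the two storage directories. A minimal sketch, assuming the NameNode directory sits alongside the namesecondary directory shown later in this article (adjust both paths to your installation):

```shell
# Hypothetical paths based on this article's hadoop.tmp.dir layout;
# change them to match your cluster.
NN_VERSION=/opt/hadoop-2.5.1/dfs/tmp/dfs/name/current/VERSION
SNN_VERSION=/opt/hadoop-2.5.1/dfs/tmp/dfs/namesecondary/current/VERSION

# Any difference in namespaceID or clusterID means the
# SecondaryNameNode's checkpoint data is stale.
diff <(grep -E 'namespaceID|clusterID' "$NN_VERSION") \
     <(grep -E 'namespaceID|clusterID' "$SNN_VERSION")
```

If `diff` prints differing lines, the checkpoint directory no longer belongs to this namespace and can be cleared as described below.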
Solution:
Manually delete the files under the SecondaryNameNode's checkpoint directory, then restart Hadoop:
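The steps above can be sketched as follows. This assumes the namesecondary path shown below and the standard sbin restart scripts; moving the directory aside instead of deleting it outright leaves a backup in case anything goes wrong:

```shell
# On the SecondaryNameNode host. Path assumes this article's
# hadoop.tmp.dir layout; adjust to your installation.
SNN_DIR=/opt/hadoop-2.5.1/dfs/tmp/dfs/namesecondary
mv "$SNN_DIR/current" "$SNN_DIR/current.bak.$(date +%Y%m%d)"

# Restart HDFS; the SecondaryNameNode rebuilds its checkpoint
# directory from the NameNode on the next checkpoint cycle.
/opt/hadoop-2.5.1/sbin/stop-dfs.sh
/opt/hadoop-2.5.1/sbin/start-dfs.sh
```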
Inspection showed that the edit logs under the SecondaryNameNode directory were in fact from long ago:
/opt/hadoop-2.5.1/dfs/tmp/dfs/namesecondary/current
[root@hbase current]# ll
total 116
-rw-r--r-- 1 root root   42 Jun  8  2015 edits_0000000000000000001-0000000000000000002
-rw-r--r-- 1 root root 8991 Jun  8  2015 edits_0000000000000000003-0000000000000000089
-rw-r--r-- 1 root root 4370 Jun  8  2015 edits_0000000000000000090-0000000000000000123
-rw-r--r-- 1 root root 3817 Jun  9  2015 edits_0000000000000000124-0000000000000000152
-rw-r--r-- 1 root root 2466 Jun  9  2015 edits_0000000000000000153-0000000000000000172
-rw-r--r-- 1 root root 2466 Jun  9  2015 edits_0000000000000000173-0000000000000000192
-rw-r--r-- 1 root root 2466 Jun  9  2015 edits_0000000000000000193-0000000000000000212
-rw-r--r-- 1 root root 2466 Jun  9  2015 edits_0000000000000000213-0000000000000000232
-rw-r--r-- 1 root root 2466 Jun  9  2015 edits_0000000000000000233-0000000000000000252
-rw-r--r-- 1 root root 2466 Jun  9  2015 edits_0000000000000000253-0000000000000000272
-rw-r--r-- 1 root root 2466 Jun  9  2015 edits_0000000000000000273-0000000000000000292
-rw-r--r-- 1 root root 2466 Jun  9  2015 edits_0000000000000000293-0000000000000000312
-rw-r--r-- 1 root root 2466 Jun  9  2015 edits_0000000000000000313-0000000000000000332
-rw-r--r-- 1 root root 2466 Jun  9  2015 edits_0000000000000000333-0000000000000000352
-rw-r--r-- 1 root root 2466 Jun  9  2015 edits_0000000000000000353-0000000000000000372
-rw-r--r-- 1 root root 2466 Jun  9  2015 edits_0000000000000000373-0000000000000000392
-rw-r--r-- 1 root root 2466 Jun  9  2015 edits_0000000000000000393-0000000000000000412
-rw-r--r-- 1 root root 6732 Jun  9  2015 edits_0000000000000000413-0000000000000000468
-rw-r--r-- 1 root root 4819 Jun  9  2015 edits_0000000000000000469-0000000000000000504
-rw-r--r-- 1 root root 2839 Jun  9  2015 fsimage_0000000000000000468
-rw-r--r-- 1 root root   62 Jun  9  2015 fsimage_0000000000000000468.md5
-rw-r--r-- 1 root root 2547 Jun  9  2015 fsimage_0000000000000000504
-rw-r--r-- 1 root root   62 Jun  9  2015 fsimage_0000000000000000504.md5
-rw-r--r-- 1 root root  199 Jun  9  2015 VERSION
The fix above assumes hadoop.tmp.dir has been configured. If it has not, the edit log files cannot be located this way; configure the property in core-site.xml (or hdfs-site.xml).
The hadoop.tmp.dir parameter specifies the base of HDFS's default temporary paths, and it is best to set it explicitly. If a DataNode inexplicably fails to start, for example after adding a new node, deleting the tmp directory under this path is usually enough to fix it. Be aware, however, that if you delete this directory on the NameNode machine, you will have to re-run the NameNode format command.
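A minimal core-site.xml fragment for this setting, with the value chosen to match the directory layout shown above (the namesecondary directory lives under ${hadoop.tmp.dir}/dfs/ by default):

```xml
<property>
  <name>hadoop.tmp.dir</name>
  <value>/opt/hadoop-2.5.1/dfs/tmp</value>
  <description>Base for other temporary directories.</description>
</property>
```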