Here I'd like to share how to deal with UNHEALTHY nodes on YARN. I hope you get something out of this article; let's dig into it together!
My setup is three virtual machines: hadoop001, hadoop002, and hadoop003.
Checking the ResourceManager web UI (port 23188 on this cluster), I found Unhealthy Nodes listed, and the count of active nodes was lower than it should be.
In addition, check from the command line:
$ yarn node -list -all
Total Nodes:4
         Node-Id      Node-State  Node-Http-Address  Number-of-Running-Containers
 hadoop001:34354       UNHEALTHY    hadoop001:23999                             0
 hadoop002:60027         RUNNING    hadoop002:23999                             0
 hadoop001:50623       UNHEALTHY    hadoop001:23999                             0
 hadoop003:39700       UNHEALTHY    hadoop003:23999                             0
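If you only want the problem nodes, the node CLI can also filter by state; the -states flag takes a comma-separated list of node states:

$ yarn node -list -states UNHEALTHY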
The ResourceManager log shows:
2016-09-10 12:02:05,953 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Added node hadoop002:60027 cluster capacity: <memory:4096, vCores:4>
2016-09-10 12:02:05,990 INFO org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: Node hadoop001:50623 reported UNHEALTHY with details: 1/1 local-dirs are bad: /data/disk1/data/yarn/local; 1/1 log-dirs are bad: /opt/beh/logs/yarn/userlog
2016-09-10 12:02:05,991 INFO org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: hadoop001:50623 Node Transitioned from RUNNING to UNHEALTHY
2016-09-10 12:02:05,993 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Removed node hadoop001:50623 cluster capacity: <memory:2048, vCores:2>
2016-09-10 12:02:06,378 INFO org.apache.hadoop.yarn.util.RackResolver: Resolved hadoop003 to /default-rack
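The same details are exposed per node through the CLI. For example, querying the unhealthy registration from the listing above prints a node report whose Health-Report line carries the bad local-dirs/log-dirs message:

$ yarn node -status hadoop001:50623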
And the NodeManager log shows:
2016-09-10 12:02:02,869 INFO org.mortbay.log: jetty-6.1.26.cloudera.4
2016-09-10 12:02:02,905 INFO org.mortbay.log: Extract jar:file:/opt/beh/core/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.6.0-cdh6.4.4.jar!/webapps/node to /tmp/Jetty_0_0_0_0_23999_node____tgfx6h/webapp
2016-09-10 12:02:03,242 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:23999
2016-09-10 12:02:03,242 INFO org.apache.hadoop.yarn.webapp.WebApps: Web app /node started at 23999
2016-09-10 12:02:03,735 INFO org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules
2016-09-10 12:02:03,775 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out 0 NM container statuses: []
2016-09-10 12:02:03,783 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Registering with RM using containers :[]
2016-09-10 12:02:03,822 INFO org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
2016-09-10 12:02:03,824 INFO org.apache.hadoop.io.retry.RetryInvocationHandler: Exception while invoking registerNodeManager of class ResourceTrackerPBClientImpl over rm2 after 1 fail over attempts. Trying to fail over after sleeping for 2138ms.
java.net.ConnectException: Call From hadoop002/192.168.30.22 to hadoop002:23125 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
    at org.apache.hadoop.ipc.Client.call(Client.java:1472)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy27.registerNodeManager(Unknown Source)
    at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:68)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy28.registerNodeManager(Unknown Source)
    at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:257)
    at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:191)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:264)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:463)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:509)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705)
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
    at org.apache.hadoop.ipc.Client.call(Client.java:1438)
    ... 19 more
2016-09-10 12:02:05,965 INFO org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider: Failing over to rm1
2016-09-10 12:02:05,996 INFO org.apache.hadoop.yarn.server.nodemanager.security.NMContainerTokenSecretManager: Rolling master-key for container-tokens, got key with id -1513537506
2016-09-10 12:02:05,998 INFO org.apache.hadoop.yarn.server.nodemanager.security.NMTokenSecretManagerInNM: Rolling master-key for container-tokens, got key with id 701920721
2016-09-10 12:02:05,999 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Registered with ResourceManager as hadoop002:60027 with total resource of <memory:2048, vCores:2>
2016-09-10 12:02:05,999 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Notifying ContainerManager to unblock new container-requests
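The ConnectException here is not the root problem: the NodeManager simply tried the standby ResourceManager first (rm1/rm2 are this cluster's HA ids, as the ConfiguredRMFailoverProxyProvider lines show), got refused on hadoop002:23125, then failed over to the active RM and registered successfully. If a NodeManager never gets past this point, it is worth confirming which RM is actually active:

$ yarn rmadmin -getServiceState rm1
$ yarn rmadmin -getServiceState rm2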
By default, the NodeManager checks its local directories (local-dirs; log-dirs are checked the same way) every two minutes to determine which of them are usable. Note that once a disk has been judged unusable, it will not be marked usable again until the NodeManager is restarted, even if the disk later recovers. When the number of good disks falls below a configured minimum, the whole machine is marked unhealthy and no further tasks are scheduled on it. (In this case, though, as shown below, the directories came back as soon as space was freed, without a restart.)
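The checker is driven by a few yarn-site.xml properties. The names below are the stock Hadoop 2.x settings and the values in the comments are the upstream defaults; a distribution such as CDH may override them, so verify against your own config:

# yarn.nodemanager.disk-health-checker.interval-ms -- check period, default 120000 ms (2 minutes)
# yarn.nodemanager.disk-health-checker.min-healthy-disks -- minimum fraction of good dirs, default 0.25
# yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage -- default 90.0;
#   a directory is marked bad once its disk crosses this utilization, which is what bit us here
# Quick look at the values in effect (assuming HADOOP_CONF_DIR is set):
$ grep -B1 -A2 'disk-health-checker' $HADOOP_CONF_DIR/yarn-site.xml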
Checking disk usage on my VMs showed that the disks on hadoop001 and hadoop003 were nearly full. After deleting unneeded files to free up space, the UNHEALTHY nodes recovered immediately.
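A minimal sketch of that check and cleanup, using the directories from the logs above (what is safe to delete depends entirely on your own layout):

# See how full the filesystems backing local-dirs and log-dirs are
$ df -h /data/disk1 /opt/beh/logs
# Find the biggest offenders under the full mount
$ du -sh /data/disk1/* 2>/dev/null | sort -rh | head
# ...then delete only files you know are disposable, until utilization is back under the threshold

Afterwards the node list looks healthy again: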
$ yarn node -list -all
Total Nodes:4
         Node-Id      Node-State  Node-Http-Address  Number-of-Running-Containers
 hadoop001:34354         RUNNING    hadoop001:23999                             0
 hadoop002:60027         RUNNING    hadoop002:23999                             0
 hadoop003:39700         RUNNING    hadoop003:23999                             0
 hadoop001:50623            LOST    hadoop001:23999                             0
Why are there two entries for hadoop001? Because I had modified the configuration and restarted that NodeManager once, so it registered with the ResourceManager twice: the stale registration shows as LOST while the new one is RUNNING. This does not affect normal use, and the LOST entry goes away once YARN is restarted.
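If the stale LOST entry bothers you, a full YARN restart clears it. With the stock scripts, assuming a standard Hadoop layout:

$ $HADOOP_HOME/sbin/stop-yarn.sh
$ $HADOOP_HOME/sbin/start-yarn.sh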
Having finished this article, I trust you now have some understanding of how to handle UNHEALTHY nodes on YARN. To learn more, you are welcome to follow the Yisu Cloud industry news channel. Thanks for reading!