

An Example Analysis of Hadoop 2 NameNode HA

Published: 2021-12-09 15:25:25  Source: 億速云  Reads: 113  Author: 小新  Category: Cloud Computing

This article walks through an example NameNode HA deployment on Hadoop 2 in detail. It should be a useful reference for anyone setting up an HA cluster.

The experiment uses Hadoop 2.5.2 on five virtual machines, all running CentOS 6.6. The VMs' IP addresses and hostnames are:
192.168.63.171    node1.zhch
192.168.63.172    node2.zhch
192.168.63.173    node3.zhch
192.168.63.174    node4.zhch
192.168.63.175    node5.zhch

Passwordless SSH, firewall configuration, and JDK installation are not covered again here. Roles are assigned as follows: node1 is the active NameNode, node2 is the standby NameNode, and node3, node4, and node5 are DataNodes; ZooKeeper and JournalNode will also be deployed on node1, node2, and node3.
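The IP-to-hostname mappings above are assumed to be present in /etc/hosts on every node. A sketch of the relevant fragment (adapt to your own network):

```
# /etc/hosts fragment (same on all five nodes)
192.168.63.171    node1.zhch
192.168.63.172    node2.zhch
192.168.63.173    node3.zhch
192.168.63.174    node4.zhch
192.168.63.175    node5.zhch
```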

1. Set Up the ZooKeeper Cluster
See the ZooKeeper cluster section of the Storm 0.9.4 installation guide for the ensemble setup; once installed, start and verify each node:

[yyl@node1 ~]$ zkServer.sh start
JMX enabled by default
Using config: /home/yyl/program/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[yyl@node1 ~]$ zkServer.sh status
JMX enabled by default
Using config: /home/yyl/program/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
[yyl@node2 ~]$ zkServer.sh start
JMX enabled by default
Using config: /home/yyl/program/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[yyl@node2 ~]$ zkServer.sh status
JMX enabled by default
Using config: /home/yyl/program/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: leader
[yyl@node3 ~]$ zkServer.sh start
JMX enabled by default
Using config: /home/yyl/program/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[yyl@node3 ~]$ zkServer.sh status
JMX enabled by default
Using config: /home/yyl/program/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
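Since the ensemble setup itself is deferred to the Storm guide, here is a minimal zoo.cfg sketch consistent with the three-node ensemble used below. The dataDir path is an assumption; each node also needs a myid file under dataDir containing its own server number:

```
# conf/zoo.cfg (identical on node1-3); myid on each node holds 1, 2, or 3
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/yyl/program/zookeeper-3.4.6/data
clientPort=2181
server.1=node1.zhch:2888:3888
server.2=node2.zhch:2888:3888
server.3=node3.zhch:2888:3888
```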


2. Configure the Hadoop Environment
 

## Unpack the tarball
[yyl@node1 program]$ tar -zxf hadoop-2.5.2.tar.gz 
## Create working directories
[yyl@node1 program]$ mkdir hadoop-2.5.2/name
[yyl@node1 program]$ mkdir hadoop-2.5.2/data
[yyl@node1 program]$ mkdir hadoop-2.5.2/journal
[yyl@node1 program]$ mkdir hadoop-2.5.2/tmp

## Configure hadoop-env.sh
[yyl@node1 program]$ cd hadoop-2.5.2/etc/hadoop/
[yyl@node1 hadoop]$ vim hadoop-env.sh
export JAVA_HOME=/usr/lib/java/jdk1.7.0_80

## Configure yarn-env.sh
[yyl@node1 hadoop]$ vim yarn-env.sh
export JAVA_HOME=/usr/lib/java/jdk1.7.0_80

## Configure slaves
[yyl@node1 hadoop]$ vim slaves
node3.zhch
node4.zhch
node5.zhch

## Configure core-site.xml
[yyl@node1 hadoop]$ vim core-site.xml
<configuration>
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>
<property>
  <name>io.file.buffer.size</name>
  <value>131072</value> 
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>file:/home/yyl/program/hadoop-2.5.2/tmp</value>
</property>
<property>
  <name>hadoop.proxyuser.hadoop.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hadoop.groups</name>
  <value>*</value>
</property>
<property>
  <name>ha.zookeeper.quorum</name>
  <value>node1.zhch:2181,node2.zhch:2181,node3.zhch:2181</value>
</property>
<property>
  <name>ha.zookeeper.session-timeout.ms</name>
  <!-- 1000 ms is a very aggressive ZooKeeper session timeout; consider a larger value in production -->
  <value>1000</value>
</property>
</configuration>

## Configure hdfs-site.xml
[yyl@node1 hadoop]$ vim hdfs-site.xml
<configuration>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/yyl/program/hadoop-2.5.2/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/yyl/program/hadoop-2.5.2/data</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
<property>
  <name>dfs.permissions.enabled</name>
  <value>false</value>
</property>
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>node1.zhch:9000</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>node2.zhch:9000</value>
</property>
<property>
  <name>dfs.namenode.servicerpc-address.mycluster.nn1</name>
  <value>node1.zhch:53310</value>
</property>
<property>
  <name>dfs.namenode.servicerpc-address.mycluster.nn2</name>
  <value>node2.zhch:53310</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn1</name>
  <value>node1.zhch:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn2</name>
  <value>node2.zhch:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://node1.zhch:8485;node2.zhch:8485;node3.zhch:8485/mycluster</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/yyl/.ssh/id_rsa</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.connect-timeout</name>
  <value>30000</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/home/yyl/program/hadoop-2.5.2/journal</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
<property>
  <name>ha.failover-controller.cli-check.rpc-timeout.ms</name>
  <value>60000</value>
</property>
<property>
  <name>ipc.client.connect.timeout</name>
  <value>60000</value>
</property>
<property>
  <name>dfs.image.transfer.bandwidthPerSec</name>
  <value>4194304</value>
</property>
</configuration>

## Configure mapred-site.xml
[yyl@node1 hadoop]$ cp mapred-site.xml.template mapred-site.xml
[yyl@node1 hadoop]$ vim mapred-site.xml
<configuration>
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<property>
  <name>mapreduce.jobhistory.address</name>
  <!-- this property is single-valued; run the JobHistory server on one node -->
  <value>node1.zhch:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>node1.zhch:19888</value>
</property>
</configuration>

## Configure yarn-site.xml
[yyl@node1 hadoop]$ vim yarn-site.xml 
<configuration>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
  <name>yarn.resourcemanager.address</name>
  <value>node1.zhch:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>node1.zhch:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>node1.zhch:8031</value>
</property>
<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>node1.zhch:8033</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>node1.zhch:8088</value>
</property>
</configuration>

## Distribute to all nodes
[yyl@node1 hadoop]$ cd /home/yyl/program/
[yyl@node1 program]$ scp -rp hadoop-2.5.2 yyl@node2.zhch:/home/yyl/program/
[yyl@node1 program]$ scp -rp hadoop-2.5.2 yyl@node3.zhch:/home/yyl/program/
[yyl@node1 program]$ scp -rp hadoop-2.5.2 yyl@node4.zhch:/home/yyl/program/
[yyl@node1 program]$ scp -rp hadoop-2.5.2 yyl@node5.zhch:/home/yyl/program/
## Set the Hadoop environment variables on every node
[yyl@node1 ~]$ vim .bash_profile 
export HADOOP_PREFIX=/home/yyl/program/hadoop-2.5.2
export HADOOP_COMMON_HOME=$HADOOP_PREFIX
export HADOOP_HDFS_HOME=$HADOOP_PREFIX
export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
export HADOOP_YARN_HOME=$HADOOP_PREFIX
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
export PATH=$PATH:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin


3. Create the znode
Start ZooKeeper on every node, then run the following command on one of the NameNode hosts to create a znode in ZooKeeper for tracking HA state:

[yyl@node1 ~]$ hdfs zkfc -formatZK
## Verify that the znode was created:
[yyl@node3 ~]$ zkCli.sh 
[zk: localhost:2181(CONNECTED) 0] ls /
[hadoop-ha, zookeeper]
[zk: localhost:2181(CONNECTED) 1] ls /hadoop-ha
[mycluster]
[zk: localhost:2181(CONNECTED) 2]


4. Start the JournalNodes
Run the following on node1.zhch, node2.zhch, and node3.zhch: hadoop-daemon.sh start journalnode

[yyl@node1 ~]$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-journalnode-node1.zhch.out
[yyl@node1 ~]$ jps
1126 QuorumPeerMain
1349 JournalNode
1395 Jps
[yyl@node2 ~]$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-journalnode-node2.zhch.out
[yyl@node2 ~]$ jps
1524 JournalNode
1570 Jps
1376 QuorumPeerMain
[yyl@node3 ~]$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-journalnode-node3.zhch.out
[yyl@node3 ~]$ jps
1289 JournalNode
1126 QuorumPeerMain
1335 Jps


5. NameNodes

## On the active NameNode, format the NameNode and JournalNode directories with hadoop namenode -format
[yyl@node1 ~]$ hadoop namenode -format

## Start the active NameNode
[yyl@node1 ~]$ hadoop-daemon.sh start namenode
starting namenode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-namenode-node1.zhch.out
[yyl@node1 ~]$ jps
1478 NameNode
1561 Jps
1126 QuorumPeerMain
1349 JournalNode

## Sync metadata to the standby NameNode
[yyl@node2 ~]$ hdfs namenode -bootstrapStandby

## Start the standby NameNode
[yyl@node2 ~]$ hadoop-daemon.sh start namenode
starting namenode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-namenode-node2.zhch.out
[yyl@node2 ~]$ jps
1524 JournalNode
1626 NameNode
1709 Jps
1376 QuorumPeerMain
## On both NameNode hosts, enable automatic failover by starting the ZKFC daemon
[yyl@node1 ~]$ hadoop-daemon.sh start zkfc
starting zkfc, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-zkfc-node1.zhch.out
[yyl@node1 ~]$ jps
1624 DFSZKFailoverController
1478 NameNode
1682 Jps
1126 QuorumPeerMain
1349 JournalNode
[yyl@node2 ~]$ hadoop-daemon.sh start zkfc
starting zkfc, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-zkfc-node2.zhch.out
[yyl@node2 ~]$ jps
1524 JournalNode
1746 DFSZKFailoverController
1626 NameNode
1800 Jps
1376 QuorumPeerMain


6. Start the DataNodes and YARN

[yyl@node1 ~]$ hadoop-daemons.sh start datanode
node4.zhch: starting datanode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-datanode-node4.zhch.out
node3.zhch: starting datanode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-datanode-node3.zhch.out
node5.zhch: starting datanode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-datanode-node5.zhch.out

[yyl@node1 ~]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/yyl/program/hadoop-2.5.2/logs/yarn-yyl-resourcemanager-node1.zhch.out
node4.zhch: starting nodemanager, logging to /home/yyl/program/hadoop-2.5.2/logs/yarn-yyl-nodemanager-node4.zhch.out
node3.zhch: starting nodemanager, logging to /home/yyl/program/hadoop-2.5.2/logs/yarn-yyl-nodemanager-node3.zhch.out
node5.zhch: starting nodemanager, logging to /home/yyl/program/hadoop-2.5.2/logs/yarn-yyl-nodemanager-node5.zhch.out
[yyl@node1 ~]$ jps
1763 ResourceManager
1624 DFSZKFailoverController
1478 NameNode
1126 QuorumPeerMain
1349 JournalNode
2028 Jps
[yyl@node3 ~]$ jps
1289 JournalNode
1462 NodeManager
1367 DataNode
1126 QuorumPeerMain
1559 Jps


On subsequent startups, as long as the ZooKeeper ensemble is already running, all of the processes and services above can be started with just two commands:
start-dfs.sh
start-yarn.sh

NameNode status can be checked through the web UI:
http://node1.zhch:50070    http://node2.zhch:50070
or from the command line:
[yyl@node1 ~]$ hdfs haadmin -getServiceState nn1
active
[yyl@node1 ~]$ hdfs haadmin -getServiceState nn2
standby
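Besides hdfs haadmin, each NameNode also reports its HA state through its JMX servlet at http://&lt;host&gt;:50070/jmx. A small Python sketch for reading it (the bean name Hadoop:service=NameNode,name=NameNodeStatus follows standard Hadoop JMX conventions; verify it against your Hadoop version):

```python
import json
from urllib.request import urlopen

JMX_QUERY = "/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"

def ha_state_from_jmx(payload: str) -> str:
    """Extract the HA state ("active"/"standby") from a NameNode JMX JSON response."""
    beans = json.loads(payload).get("beans", [])
    for bean in beans:
        if bean.get("name") == "Hadoop:service=NameNode,name=NameNodeStatus":
            return bean.get("State", "unknown")
    return "unknown"

def ha_state(host: str, port: int = 50070) -> str:
    """Query a live NameNode; requires network access to the cluster."""
    with urlopen(f"http://{host}:{port}{JMX_QUERY}") as resp:
        return ha_state_from_jmx(resp.read().decode("utf-8"))
```

With the cluster from this walkthrough running, ha_state("node1.zhch") would return "active" or "standby", matching the hdfs haadmin output above.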

7. Failover Test
On the active NameNode host, find the NameNode process ID with jps, kill it with kill -9, and watch whether the other NameNode transitions from standby to active:

[yyl@node1 ~]$ jps
1763 ResourceManager
1624 DFSZKFailoverController
1478 NameNode
2128 Jps
1126 QuorumPeerMain
1349 JournalNode
[yyl@node1 ~]$ kill -9 1478
[yyl@node1 ~]$ hdfs haadmin -getServiceState nn2
active
[yyl@node1 ~]$ hadoop-daemon.sh start namenode
starting namenode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-namenode-node1.zhch.out
[yyl@node1 ~]$ hdfs haadmin -getServiceState nn1
standby

That concludes "An Example Analysis of Hadoop 2 NameNode HA". Thanks for reading, and I hope it proves helpful!
