1. Environment
OS: CentOS 7.0, 64-bit
namenode01 192.168.0.220
namenode02 192.168.0.221
datanode01 192.168.0.222
datanode02 192.168.0.223
datanode03 192.168.0.224
2. Configure the base environment
Add local hosts-file resolution entries on all machines:
[root@namenode01 ~]# tail -5 /etc/hosts
192.168.0.220 namenode01
192.168.0.221 namenode02
192.168.0.222 datanode01
192.168.0.223 datanode02
192.168.0.224 datanode03
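Rather than editing /etc/hosts five times, you can append the entries once on namenode01 and push the file out. This is only a sketch, and it assumes root can ssh to the other nodes (typing the root password at each prompt is fine at this stage):

[root@namenode01 ~]# for host in namenode02 datanode01 datanode02 datanode03; do
    # overwrite each node's hosts file with the one prepared on namenode01
    scp /etc/hosts root@$host:/etc/hosts
done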
Create a hadoop user on all five machines and set its password to hadoop; only namenode01 is shown as an example.
[root@namenode01 ~]# useradd hadoop
[root@namenode01 ~]# passwd hadoop
Changing password for user hadoop.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.
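The same two commands have to be repeated on the other four machines. If root can ssh to them (again, entering the root password at each prompt is fine), a loop like this sketch saves the repetition; note that passwd --stdin is CentOS-specific:

[root@namenode01 ~]# for host in namenode02 datanode01 datanode02 datanode03; do
    # create the user, then set its password non-interactively
    ssh root@$host 'useradd hadoop && echo hadoop | passwd --stdin hadoop'
done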
Configure passwordless SSH logins between the hadoop users on all five machines.
# On namenode01
[root@namenode01 ~]# su - hadoop
[hadoop@namenode01 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
1c:7e:89:9d:14:9a:10:fc:69:1e:11:3d:6d:18:a5:01 hadoop@namenode01
The key's randomart image is:
+--[ RSA 2048]----+
|    .o.E++=.     |
|    ...o++o      |
|     .+ooo       |
|     o== o       |
|     oS.=        |
|      ..         |
|                 |
|                 |
|                 |
+-----------------+
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode01
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode02
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode01
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode02
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode03
# Verify the result
[hadoop@namenode01 ~]$ ssh namenode01 hostname
namenode01
[hadoop@namenode01 ~]$ ssh namenode02 hostname
namenode02
[hadoop@namenode01 ~]$ ssh datanode01 hostname
datanode01
[hadoop@namenode01 ~]$ ssh datanode02 hostname
datanode02
[hadoop@namenode01 ~]$ ssh datanode03 hostname
datanode03

# On namenode02
[root@namenode02 ~]# su - hadoop
[hadoop@namenode02 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
a9:f5:0d:cb:c9:88:7b:71:f5:71:d8:a9:23:c6:85:6a hadoop@namenode02
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|             . o.|
|         . ...o.o|
|     S +....o    |
|    +.E.O o.     |
|   o ooB o .     |
|     ..          |
|    ..           |
+-----------------+
[hadoop@namenode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode01
[hadoop@namenode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode02
[hadoop@namenode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode01
[hadoop@namenode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode02
[hadoop@namenode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode03
# Verify the result
[hadoop@namenode02 ~]$ ssh namenode01 hostname
namenode01
[hadoop@namenode02 ~]$ ssh namenode02 hostname
namenode02
[hadoop@namenode02 ~]$ ssh datanode01 hostname
datanode01
[hadoop@namenode02 ~]$ ssh datanode02 hostname
datanode02
[hadoop@namenode02 ~]$ ssh datanode03 hostname
datanode03

# On datanode01
[root@datanode01 ~]# su - hadoop
[hadoop@datanode01 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
48:72:20:69:64:e7:81:b7:03:64:41:5e:fa:88:db:5e hadoop@datanode01
The key's randomart image is:
+--[ RSA 2048]----+
|       +O+=      |
|       +=*.o     |
|      .ooo.o     |
|     . oo+ .     |
|. . ...  S       |
|       o         |
|. . E            |
| . .             |
|  .              |
+-----------------+
[hadoop@datanode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode01
[hadoop@datanode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode02
[hadoop@datanode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode01
[hadoop@datanode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode02
[hadoop@datanode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode03
# Verify the result
[hadoop@datanode01 ~]$ ssh namenode01 hostname
namenode01
[hadoop@datanode01 ~]$ ssh namenode02 hostname
namenode02
[hadoop@datanode01 ~]$ ssh datanode01 hostname
datanode01
[hadoop@datanode01 ~]$ ssh datanode02 hostname
datanode02
[hadoop@datanode01 ~]$ ssh datanode03 hostname
datanode03

# On datanode02
[hadoop@datanode02 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
32:aa:88:fa:ce:ec:51:6f:de:f4:06:c9:4e:9c:10:31 hadoop@datanode02
The key's randomart image is:
+--[ RSA 2048]----+
|          E.     |
|          ..     |
|           .     |
|          .      |
|       . o+So    |
|      . o oB     |
|     . . oo..    |
|.+ o o o...      |
|=+B .  ...       |
+-----------------+
[hadoop@datanode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode01
[hadoop@datanode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode02
[hadoop@datanode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode01
[hadoop@datanode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode02
[hadoop@datanode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode03
# Verify the result
[hadoop@datanode02 ~]$ ssh namenode01 hostname
namenode01
[hadoop@datanode02 ~]$ ssh namenode02 hostname
namenode02
[hadoop@datanode02 ~]$ ssh datanode01 hostname
datanode01
[hadoop@datanode02 ~]$ ssh datanode02 hostname
datanode02
[hadoop@datanode02 ~]$ ssh datanode03 hostname
datanode03

# On datanode03
[root@datanode03 ~]# su - hadoop
[hadoop@datanode03 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
f3:f3:3c:85:61:c6:e4:82:58:10:1f:d8:bf:71:89:b4 hadoop@datanode03
The key's randomart image is:
+--[ RSA 2048]----+
|  o=.            |
| ..o.. .         |
|  o.+ * .        |
|   . . E O       |
|    S B o        |
|    o. . .       |
|      o .        |
|      +.         |
|      o.         |
+-----------------+
[hadoop@datanode03 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode01
[hadoop@datanode03 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode02
[hadoop@datanode03 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode01
[hadoop@datanode03 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode02
[hadoop@datanode03 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode03
# Verify the result
[hadoop@datanode03 ~]$ ssh namenode01 hostname
namenode01
[hadoop@datanode03 ~]$ ssh namenode02 hostname
namenode02
[hadoop@datanode03 ~]$ ssh datanode01 hostname
datanode01
[hadoop@datanode03 ~]$ ssh datanode02 hostname
datanode02
[hadoop@datanode03 ~]$ ssh datanode03 hostname
datanode03
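Instead of repeating the verification on each node by hand, the full 5x5 login mesh can be tested from namenode01 with a nested loop. This is only a sketch over the hostnames above; BatchMode makes any remaining password prompt fail loudly instead of hanging:

[hadoop@namenode01 ~]$ for src in namenode01 namenode02 datanode01 datanode02 datanode03; do
    for dst in namenode01 namenode02 datanode01 datanode02 datanode03; do
        # hop to $src, then from there to $dst; each node uses its own key
        echo -n "$src -> $dst: "
        ssh $src "ssh -o BatchMode=yes $dst hostname"
    done
done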
3. Install the JDK
[root@namenode01 ~]# wget http://download.oracle.com/otn-pub/java/jdk/8u74-b02/jdk-8u74-linux-x64.tar.gz?AuthParam=1461828883_648d68bc6c7b0dfd253a6332a5871e06
[root@namenode01 ~]# tar xf jdk-8u74-linux-x64.tar.gz -C /usr/local/
# Create the environment-variable profile
[root@namenode01 ~]# cat /etc/profile.d/java.sh
JAVA_HOME=/usr/local/jdk1.8.0_74
JAVA_BIN=/usr/local/jdk1.8.0_74/bin
JRE_HOME=/usr/local/jdk1.8.0_74/jre
PATH=$PATH:/usr/local/jdk1.8.0_74/bin:/usr/local/jdk1.8.0_74/jre/bin
CLASSPATH=/usr/local/jdk1.8.0_74/jre/lib:/usr/local/jdk1.8.0_74/lib:/usr/local/jdk1.8.0_74/jre/lib/charsets.jar
export JAVA_HOME PATH
# Load the environment variables
[root@namenode01 ~]# source /etc/profile.d/java.sh
[root@namenode01 ~]# which java
/usr/local/jdk1.8.0_74/bin/java
# Test the result
[root@namenode01 ~]# java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)
# Copy the profile and the unpacked JDK to the other four machines
[root@namenode01 ~]# scp -r /usr/local/jdk1.8.0_74 namenode02:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/jdk1.8.0_74 datanode01:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/jdk1.8.0_74 datanode02:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/jdk1.8.0_74 datanode03:/usr/local/
[root@namenode01 ~]# scp /etc/profile.d/java.sh namenode02:/etc/profile.d/
100%  308  0.3KB/s  00:00
[root@namenode01 ~]# scp /etc/profile.d/java.sh datanode01:/etc/profile.d/
100%  308  0.3KB/s  00:00
[root@namenode01 ~]# scp /etc/profile.d/java.sh datanode02:/etc/profile.d/
100%  308  0.3KB/s  00:00
[root@namenode01 ~]# scp /etc/profile.d/java.sh datanode03:/etc/profile.d/
# Test the result, using namenode02 as an example
[root@namenode02 ~]# source /etc/profile.d/java.sh
[root@namenode02 ~]# java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)
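To spot-check the JDK on every node without logging in one by one, a loop like the following sketch works (run as root on namenode01, over the host list above):

[root@namenode01 ~]# for host in namenode02 datanode01 datanode02 datanode03; do
    echo "== $host =="
    # load the copied profile remotely, then print the Java version
    ssh $host 'source /etc/profile.d/java.sh; java -version'
done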
4. Install Hadoop
# Download the Hadoop release
[root@namenode01 ~]# wget http://apache.fayea.com/hadoop/common/hadoop-2.5.2/hadoop-2.5.2.tar.gz
[root@namenode01 ~]# tar xf hadoop-2.5.2.tar.gz -C /usr/local/
[root@namenode01 ~]# chown -R hadoop.hadoop /usr/local/hadoop-2.5.2/
[root@namenode01 ~]# ln -sv /usr/local/hadoop-2.5.2/ /usr/local/hadoop
‘/usr/local/hadoop’ -> ‘/usr/local/hadoop-2.5.2/’
# Add the Hadoop environment-variable profile
[root@namenode01 ~]# cat /etc/profile.d/hadoop.sh
HADOOP_HOME=/usr/local/hadoop
PATH=$HADOOP_HOME/bin:$PATH
export HADOOP_HOME PATH
# Switch to the hadoop user and check that the JDK environment works
[root@namenode01 ~]# su - hadoop
Last login: Thu Apr 28 15:17:16 CST 2016 from datanode01 on pts/1
[hadoop@namenode01 ~]$ java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)
# Now edit the Hadoop configuration files
# In the Hadoop environment file, point JAVA_HOME at the installed JDK
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_74
# Edit core-site.xml
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/core-site.xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/temp</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
</configuration>
# Edit hdfs-site.xml
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/data/hdfs/dfs/name</value>  <!-- namenode directory -->
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/data/hdfs/data</value>  <!-- datanode directory -->
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>  <!-- must match fs.defaultFS in core-site.xml -->
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>namenode01,namenode02</value>  <!-- the two namenodes -->
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.namenode01</name>
    <value>namenode01:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.namenode02</name>
    <value>namenode02:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.namenode01</name>
    <value>namenode01:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.namenode02</name>
    <value>namenode02:50070</value>
  </property>
  <!-- the namenodes write their edits to the journalnodes; list every journalnode host -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485;datanode02:8485;datanode03:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/data/hdfs/journal</value>  <!-- journalnode directory -->
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>  <!-- how to fence the failed active namenode -->
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>  <!-- key used for inter-host authentication -->
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>6000</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>false</value>  <!-- disable automatic failover for now; ZooKeeper will handle switching later -->
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>  <!-- number of replicas; the default is 3 -->
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
# Edit yarn-site.xml
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>namenode01:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>namenode01:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>namenode01:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>namenode01:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>namenode01:8088</value>  <!-- 8088 is the default web UI port; it must not collide with the admin port -->
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>15360</value>
  </property>
</configuration>
# Edit mapred-site.xml
[hadoop@namenode01 ~]$ cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.http.address</name>
    <value>namenode01:50030</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>namenode01:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>namenode01:19888</value>
  </property>
</configuration>
# Edit the slaves file
[hadoop@namenode01 ~]$ cat /usr/local/hadoop/etc/hadoop/slaves
datanode01
datanode02
datanode03
# On namenode01, switch back to the root user and create the data directory
[root@namenode01 ~]# mkdir /data/hdfs
[root@namenode01 ~]# chown hadoop.hadoop /data/hdfs/
# Copy the hadoop user's environment profile to the other four machines
[root@namenode01 ~]# scp /etc/profile.d/hadoop.sh namenode02:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/hadoop.sh datanode01:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/hadoop.sh datanode02:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/hadoop.sh datanode03:/etc/profile.d/
# Copy the Hadoop installation to the other four machines
[root@namenode01 ~]# scp -r /usr/local/hadoop-2.5.2/ namenode02:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/hadoop-2.5.2/ datanode01:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/hadoop-2.5.2/ datanode02:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/hadoop-2.5.2/ datanode03:/usr/local/
# Fix directory ownership on each node; namenode02 shown as an example
[root@namenode02 ~]# chown -R hadoop.hadoop /usr/local/hadoop-2.5.2/
[root@namenode02 ~]# ln -sv /usr/local/hadoop-2.5.2/ /usr/local/hadoop
‘/usr/local/hadoop’ -> ‘/usr/local/hadoop-2.5.2/’
[root@namenode02 ~]# ll /usr/local |grep hadoop
lrwxrwxrwx 1 root   root    24 Apr 28 17:19 hadoop -> /usr/local/hadoop-2.5.2/
drwxr-xr-x 9 hadoop hadoop 139 Apr 28 17:16 hadoop-2.5.2
# Create the data directory
[root@namenode02 ~]# mkdir /data/hdfs
[root@namenode02 ~]# chown -R hadoop.hadoop /data/hdfs/
# Check the JDK environment
[root@namenode02 ~]# su - hadoop
Last login: Thu Apr 28 15:12:24 CST 2016 on pts/0
[hadoop@namenode02 ~]$ java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)
[hadoop@namenode02 ~]$ which hadoop
/usr/local/hadoop/bin/hadoop
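Keep in mind that any later edit to the files under /usr/local/hadoop/etc/hadoop on namenode01 has to be propagated again. A small sketch of a sync loop over the same host list:

# Push the edited configuration files from namenode01 to the other nodes
for host in namenode02 datanode01 datanode02 datanode03; do
    scp /usr/local/hadoop/etc/hadoop/{hadoop-env.sh,core-site.xml,hdfs-site.xml,yarn-site.xml,mapred-site.xml,slaves} \
        $host:/usr/local/hadoop/etc/hadoop/
done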
5. Start Hadoop
# Run "hadoop-daemon.sh start journalnode" on all servers, as the hadoop user.
# Only the namenode01 output is shown.
[hadoop@namenode01 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /usr/local/hadoop-2.5.2/logs/hadoop-hadoop-journalnode-namenode01.out
# On namenode01
[hadoop@namenode01 ~]$ hadoop namenode -format
# Note: only a first startup needs "hadoop namenode -format"; a non-first startup
# runs "hdfs namenode -initializeSharedEdits" instead. The distinction deserves a
# word of explanation. A first startup means HA was configured at installation
# time and HDFS holds no data yet, so namenode01 must be formatted. A non-first
# startup means a non-HA HDFS was already running with data on it and a second
# namenode is being added to enable HA; in that case namenode01 runs
# initializeSharedEdits to initialize the journalnodes and share its edits files
# with them.
# Start the namenode daemons
# On namenode01
[hadoop@namenode01 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start namenode
# On namenode02: pull the formatted metadata over from namenode01
[hadoop@namenode02 ~]$ hdfs namenode -bootstrapStandby
# Start the datanode daemons
[hadoop@datanode01 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start datanode
[hadoop@datanode02 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start datanode
[hadoop@datanode03 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start datanode
# Verify the results
# Check namenode01
[hadoop@namenode01 ~]$ jps
2467 NameNode      # namenode role
2270 JournalNode
2702 Jps
# Check namenode02
[hadoop@namenode01 ~]$ ssh namenode02 jps
2264 JournalNode
2680 Jps
# Check datanode01
[hadoop@namenode01 ~]$ ssh datanode01 jps
2466 Jps
2358 DataNode      # datanode role
2267 JournalNode
# Check datanode02
[hadoop@namenode01 ~]$ ssh datanode02 jps
2691 Jps
2612 DataNode      # datanode role
2265 JournalNode
# Check datanode03
[hadoop@namenode01 ~]$ ssh datanode03 jps
11987 DataNode     # datanode role
12067 Jps
11895 JournalNode
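Because dfs.ha.automatic-failover.enabled is false, the namenodes come up in standby state, and one of them has to be promoted by hand before HDFS will serve requests. A minimal sketch using the stock hdfs haadmin commands, to be run once the standby namenode has also been started (hadoop-daemon.sh start namenode on namenode02); the inline output is what you would expect to see, not a captured transcript:

# Query each namenode's current HA state
[hadoop@namenode01 ~]$ hdfs haadmin -getServiceState namenode01
standby
[hadoop@namenode01 ~]$ hdfs haadmin -getServiceState namenode02
standby
# Promote namenode01 to active by hand
[hadoop@namenode01 ~]$ hdfs haadmin -transitionToActive namenode01
[hadoop@namenode01 ~]$ hdfs haadmin -getServiceState namenode01
active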
6. Set up the ZooKeeper high-availability environment
# Download the software; install it as the root user
[root@namenode01 ~]# wget http://apache.fayea.com/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
# Unpack under /usr/local and fix the ownership
[root@namenode01 ~]# tar xf zookeeper-3.4.6.tar.gz -C /usr/local/
[root@namenode01 ~]# chown -R hadoop.hadoop /usr/local/zookeeper-3.4.6/
# Edit the zookeeper configuration file
[root@namenode01 ~]# cp /usr/local/zookeeper-3.4.6/conf/zoo_sample.cfg /usr/local/zookeeper-3.4.6/conf/zoo.cfg
[root@namenode01 ~]# egrep -v "^#|^$" /usr/local/zookeeper-3.4.6/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/hdfs/zookeeper/data
dataLogDir=/data/hdfs/zookeeper/logs
clientPort=2181
server.1=namenode01:2888:3888
server.2=namenode02:2888:3888
server.3=datanode01:2888:3888
server.4=datanode02:2888:3888
server.5=datanode03:2888:3888
# Configure the zookeeper environment variables
[root@namenode01 ~]# cat /etc/profile.d/zookeeper.sh
export ZOOKEEPER_HOME=/usr/local/zookeeper-3.4.6
export PATH=$PATH:$ZOOKEEPER_HOME/bin
# On namenode01, create the data directories and the myid file
[root@namenode01 ~]# mkdir -p /data/hdfs/zookeeper/{data,logs}
[root@namenode01 ~]# tree /data/hdfs/zookeeper
/data/hdfs/zookeeper
├── data
└── logs
[root@namenode01 ~]# echo "1" >/data/hdfs/zookeeper/data/myid
[root@namenode01 ~]# cat /data/hdfs/zookeeper/data/myid
1
[root@namenode01 ~]# chown -R hadoop.hadoop /data/hdfs/zookeeper
[root@namenode01 ~]# ll /data/hdfs/
total 0
drwxrwxr-x 3 hadoop hadoop 17 Apr 29 10:05 dfs
drwxrwxr-x 3 hadoop hadoop 22 Apr 29 10:05 journal
drwxr-xr-x 4 hadoop hadoop 28 Apr 29 10:42 zookeeper
# Copy the zookeeper installation directory and its profile to the remaining machines; copying to namenode02 is shown as an example
[root@namenode01 ~]# scp -r /usr/local/zookeeper-3.4.6 namenode02:/usr/local/
[root@namenode01 ~]# scp /etc/profile.d/zookeeper.sh namenode02:/etc/profile.d/
# On namenode02, create the directories and the myid file, and fix the ownership
[root@namenode02 ~]# chown -R hadoop.hadoop /usr/local/zookeeper-3.4.6/
[root@namenode02 ~]# ll /usr/local/ |grep zook
drwxr-xr-x 10 hadoop hadoop 4096 Apr 29 10:47 zookeeper-3.4.6
[root@namenode02 ~]# mkdir -p /data/hdfs/zookeeper/{data,logs}
[root@namenode02 ~]# echo "2" >/data/hdfs/zookeeper/data/myid
[root@namenode02 ~]# cat /data/hdfs/zookeeper/data/myid
2
[root@namenode02 ~]# chown -R hadoop.hadoop /data/hdfs/zookeeper
[root@namenode02 ~]# ll /data/hdfs/ |grep zook
drwxr-xr-x 4 hadoop hadoop 28 Apr 29 10:50 zookeeper
# On datanode01, create the directories and the myid file, and fix the ownership
[root@datanode01 ~]# chown -R hadoop.hadoop /usr/local/zookeeper-3.4.6/
[root@datanode01 ~]# ll /usr/local/ |grep zook
drwxr-xr-x 10 hadoop hadoop 4096 Apr 29 10:48 zookeeper-3.4.6
[root@datanode01 ~]# mkdir -p /data/hdfs/zookeeper/{data,logs}
[root@datanode01 ~]# echo "3" >/data/hdfs/zookeeper/data/myid
[root@datanode01 ~]# cat /data/hdfs/zookeeper/data/myid
3
[root@datanode01 ~]# chown -R hadoop.hadoop /data/hdfs/zookeeper
[root@datanode01 ~]# ll /data/hdfs/ |grep zook
drwxr-xr-x 4 hadoop hadoop 28 Apr 29 10:54 zookeeper
# On datanode02, create the directories and the myid file, and fix the ownership
[root@datanode02 ~]# chown -R hadoop.hadoop /usr/local/zookeeper-3.4.6/
[root@datanode02 ~]# ll /usr/local/ |grep zook
drwxr-xr-x 10 hadoop hadoop 4096 Apr 29 10:49 zookeeper-3.4.6
[root@datanode02 ~]# mkdir -p /data/hdfs/zookeeper/{data,logs}
[root@datanode02 ~]# echo "4" >/data/hdfs/zookeeper/data/myid
[root@datanode02 ~]# cat /data/hdfs/zookeeper/data/myid
4
[root@datanode02 ~]# chown -R hadoop.hadoop /data/hdfs/zookeeper
[root@datanode02 ~]# ll /data/hdfs/ |grep zook
drwxr-xr-x 4 hadoop hadoop 28 Apr 29 10:56 zookeeper
# On datanode03, create the directories and the myid file, and fix the ownership
[root@datanode03 ~]# chown -R hadoop.hadoop /usr/local/zookeeper-3.4.6/
[root@datanode03 ~]# ll /usr/local/ |grep zook
drwxr-xr-x 10 hadoop hadoop 4096 Apr 29 18:49 zookeeper-3.4.6
[root@datanode03 ~]# mkdir -p /data/hdfs/zookeeper/{data,logs}
[root@datanode03 ~]# echo "5" >/data/hdfs/zookeeper/data/myid
[root@datanode03 ~]# cat /data/hdfs/zookeeper/data/myid
5
[root@datanode03 ~]# chown -R hadoop.hadoop /data/hdfs/zookeeper
[root@datanode03 ~]# ll /data/hdfs/ |grep zook
drwxr-xr-x 4 hadoop hadoop 28 Apr 29 18:57 zookeeper
# Start zookeeper as the hadoop user on all 5 machines
# Start on namenode01
[hadoop@namenode01 ~]$ /usr/local/zookeeper-3.4.6/bin/zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
# Start on namenode02
[hadoop@namenode02 ~]$ /usr/local/zookeeper-3.4.6/bin/zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
# Start on datanode01
[hadoop@datanode01 ~]$ /usr/local/zookeeper-3.4.6/bin/zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
# Start on datanode02
[hadoop@datanode02 ~]$ /usr/local/zookeeper-3.4.6/bin/zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
# Start on datanode03
[hadoop@datanode03 ~]$ /usr/local/zookeeper-3.4.6/bin/zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
# Check the result on namenode01
[hadoop@namenode01 ~]$ jps
2467 NameNode
3348 QuorumPeerMain    # zookeeper process
3483 Jps
2270 JournalNode
[hadoop@namenode01 ~]$ zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
# Check the result on namenode02
[hadoop@namenode01 ~]$ ssh namenode02 jps
2264 JournalNode
2888 QuorumPeerMain
2936 Jps
[hadoop@namenode01 ~]$ ssh namenode02 'zkServer.sh status'
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
# Check the result on datanode01
[hadoop@namenode01 ~]$ ssh datanode01 jps
2881 QuorumPeerMain
2358 DataNode
2267 JournalNode
2955 Jps
[hadoop@namenode01 ~]$ ssh datanode01 'zkServer.sh status'
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
# Check the result on datanode02
[hadoop@namenode01 ~]$ ssh datanode02 jps
2849 QuorumPeerMain
2612 DataNode
2885 Jps
2265 JournalNode
[hadoop@namenode01 ~]$ ssh datanode02 'zkServer.sh status'
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
# Check the result on datanode03
[hadoop@namenode01 ~]$ ssh datanode03 jps
11987 DataNode
12276 Jps
12213 QuorumPeerMain
11895 JournalNode
[hadoop@namenode01 ~]$ ssh datanode03 'zkServer.sh status'
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: leader
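To check the whole ensemble at a glance instead of node by node, a short loop like this sketch (same host list as above, run as the hadoop user on namenode01) prints each server's mode; exactly one node should report leader and the other four follower:

for host in namenode01 namenode02 datanode01 datanode02 datanode03; do
    echo -n "$host: "
    # suppress the JMX banner (stderr) and keep only the Mode line
    ssh $host '/usr/local/zookeeper-3.4.6/bin/zkServer.sh status' 2>/dev/null | grep Mode
done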