Software required for this cluster:
VMwareworkstation64_12.5.7.0
Xmanager Enterprise 4 Build 0182
CentOS-7-x86_64-DVD-1708
hadoop-2.8.2.tar
jdk-8u151-linux-x64.tar
Three hosts:
master 192.168.1.140
slave1 192.168.1.141
slave2 192.168.1.142
1. Install the virtual machines
(1) Install a base virtual machine
(2) Clone it to create the other two nodes
2. Set the hostname
hostnamectl set-hostname master (run on each node with its own name: master, slave1, or slave2)
3. Configure a static IP
vi /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.140
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=192.168.1.254
(use IPADDR=192.168.1.141 on slave1 and 192.168.1.142 on slave2)
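For the new address to take effect, restart the network service and confirm it (a minimal check, assuming the interface is named ens33 as above):
systemctl restart network
ip addr show ens33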
4. Edit the /etc/hosts file
Add:
192.168.1.140 master
192.168.1.141 slave1
192.168.1.142 slave2
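To confirm that name resolution works on every node, a quick check:
ping -c 1 master
ping -c 1 slave1
ping -c 1 slave2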
5. Stop the firewall
systemctl stop firewalld
Disable it at boot:
systemctl disable firewalld
6. Disable SELinux
vi /etc/sysconfig/selinux
SELINUX=disabled
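The file change only applies after a reboot; to also switch SELinux to permissive mode immediately:
setenforce 0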
7. Passwordless SSH login
ssh-keygen (press Enter at every prompt)
On master, append id_rsa.pub to authorized_keys:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Copy authorized_keys to the /root/.ssh/ directory on slave1 and slave2:
scp ~/.ssh/authorized_keys root@slave1:/root/.ssh/
scp ~/.ssh/authorized_keys root@slave2:/root/.ssh/
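A quick login test from master (the first connection will ask to confirm each host key):
ssh slave1 hostname
ssh slave2 hostname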
8. Remove the preinstalled JDK
rpm -qa | grep java
Remove every Java package the query lists (substitute the package names reported on your system), e.g.:
rpm -e java-1.6.0-openjdk-1.6.0.0-1.66.1.13.0.el6.x86_64 --nodeps
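A one-liner that removes everything the query returns (a convenience sketch; review the list first):
rpm -qa | grep java | xargs -r rpm -e --nodeps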
9. Configure a local yum repository
mkdir /yum
cd /run/media/root/CentOS\ 7\ x86_64/Packages/
cp * /yum
cd /yum
rpm -ivh deltarpm-3.6-3.el7.x86_64.rpm
rpm -ivh python-deltarpm-3.6-3.el7.x86_64.rpm
rpm -ivh createrepo-0.9.9-28.el7.noarch.rpm
createrepo .
cd /etc/yum.repos.d/
rm -rf *
vi yum.repo
[local]
name=yum.repo
baseurl=file:///yum
gpgcheck=0
enabled=1
yum clean all
yum repolist
10. Configure the FTP service
yum -y install ftp* vsftpd*
On the slaves, point the yum repo at the master with baseurl=ftp://192.168.1.140/pub
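A minimal sketch of the remaining FTP steps on master, assuming vsftpd's default anonymous root of /var/ftp (if anonymous access is off, set anonymous_enable=YES in /etc/vsftpd/vsftpd.conf):
cp -r /yum/* /var/ftp/pub/
systemctl start vsftpd
systemctl enable vsftpd
On each slave, /etc/yum.repos.d/yum.repo then becomes:
[local]
name=yum.repo
baseurl=ftp://192.168.1.140/pub
gpgcheck=0
enabled=1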
11. Install packages with yum
yum -y install libc*
yum -y install openssh*
yum -y install man*
yum -y install compat-libstdc++-33*
yum -y install libaio-0.*
yum -y install libaio-devel*
yum -y install sysstat-9.*
yum -y install glibc-2.*
yum -y install glibc-devel-2.* glibc-headers-2.*
yum -y install ksh-2*
yum -y install libgcc-4.*
yum -y install libstdc++-4.*
yum -y install libstdc++-4.*.i686*
yum -y install libstdc++-devel-4.*
yum -y install gcc-4.*x86_64*
yum -y install gcc-c++-4.*x86_64*
yum -y install elfutils-libelf-0*x86_64* elfutils-libelf-devel-0*x86_64*
yum -y install elfutils-libelf-0*i686* elfutils-libelf-devel-0*i686*
yum -y install libtool-ltdl*i686*
yum -y install ncurses*i686*
yum -y install ncurses*
yum -y install readline*
yum -y install unixODBC*
yum -y install zlib
yum -y install zlib*
yum -y install openssl*
yum -y install patch*
yum -y install git*
yum -y install lzo-devel* zlib-devel* gcc* autoconf* automake* libtool*
yum -y install lzop*
yum -y install lrzsz*
yum -y install nc*
yum -y install glibc*
yum -y install gzip*
yum -y install zlib*
yum -y install gcc*
yum -y install gcc-c++*
yum -y install make*
yum -y install protobuf*
yum -y install protoc*
yum -y install cmake*
yum -y install openssl-devel*
yum -y install ncurses-devel*
yum -y install unzip*
yum -y install telnet*
yum -y install telnet-server*
yum -y install wget*
yum -y install svn*
yum -y install ntpdate*
Disable unnecessary services:
chkconfig autofs off
chkconfig acpid off
chkconfig sendmail off
chkconfig cups-config-daemon off
chkconfig cups off
chkconfig xfs off
chkconfig lm_sensors off
chkconfig gpm off
chkconfig openibd off
chkconfig pcmcia off
chkconfig cpuspeed off
chkconfig nfslock off
chkconfig iptables off
chkconfig ip6tables off
chkconfig rpcidmapd off
chkconfig apmd off
chkconfig arptables_jf off
chkconfig microcode_ctl off
chkconfig rpcgssd off
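Note that CentOS 7 is systemd-based: chkconfig only covers legacy SysV scripts, and several of the services above do not exist on a stock install. A sketch of the systemd equivalent, silently skipping units that are absent:
for svc in autofs cups sendmail nfs-lock; do systemctl disable "$svc" 2>/dev/null; done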
12. Install Java
Upload jdk-8u151-linux-x64.tar.gz to /usr/local
Extract: tar xzvf jdk-8u151-linux-x64.tar.gz
Rename: mv jdk1.8.0_151/ java
Edit the /etc/profile file:
export JAVA_HOME=/usr/local/java
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin
Apply the environment variables
source /etc/profile
Use scp to copy /etc/profile to the slave nodes slave1 and slave2, then run source /etc/profile on each:
scp /etc/profile root@slave1:/etc/profile
scp /etc/profile root@slave2:/etc/profile
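A quick check that the JDK is on the PATH (it should report version 1.8.0_151):
java -version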
13. Install Hadoop
Upload the Hadoop package to /usr/local/
Extract: tar xzvf hadoop-2.8.2.tar.gz
Rename: mv hadoop-2.8.2 hadoop
Edit the /etc/profile file:
export HADOOP_HOME=/usr/local/hadoop
#export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib:$HADOOP_PREFIX/lib/native"
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native
export HADOOP_COMMON_LIB_NATIVE_DIR=/usr/local/hadoop/lib/native
export HADOOP_OPTS="-Djava.library.path=/usr/local/hadoop/lib"
#export HADOOP_ROOT_LOGGER=DEBUG,console
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Apply the environment variables
source /etc/profile
Edit the configuration files
cd /usr/local/hadoop/etc/hadoop
(1)vi hadoop-env.sh
Change the JAVA_HOME line to: export JAVA_HOME=/usr/local/java
(2)vi core-site.xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://192.168.1.140:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/usr/local/hadoop/tmp</value>
<description>Abase for other temporary directories.</description>
</property>
<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.root.groups</name>
<value>*</value>
</property>
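hadoop.tmp.dir above points at a directory that does not exist yet; create it on every node:
mkdir -p /usr/local/hadoop/tmp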
(3)Edit the yarn-site.xml file
vi yarn-site.xml
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>192.168.1.140:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>192.168.1.140:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>192.168.1.140:8035</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>192.168.1.140:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>192.168.1.140:8088</value>
</property>
(4)Edit the hdfs-site.xml file
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop/hdfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop/hdfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>192.168.1.140:9001</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
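The NameNode and DataNode directories configured above should exist before the first start; create them on every node:
mkdir -p /usr/local/hadoop/hdfs/name /usr/local/hadoop/hdfs/data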
(5)Edit mapred-site.xml (Hadoop 2.8.2 ships it as a template, so create it first)
cp mapred-site.xml.template mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
(6)Edit the slaves file (master is listed too, so it also runs a DataNode and NodeManager)
192.168.1.140
192.168.1.141
192.168.1.142
14. Copy the configured files from master to slave1 and slave2, and apply the environment variables (see the sketch below)
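A sketch of the distribution step, assuming the same /usr/local layout on every node:
scp -r /usr/local/java root@slave1:/usr/local/
scp -r /usr/local/hadoop root@slave1:/usr/local/
scp /etc/profile root@slave1:/etc/profile
(repeat for slave2, then run source /etc/profile on each slave)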
15. Format HDFS (run once, on master only)
hdfs namenode -format
16. Start the cluster
start-all.sh
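start-all.sh is deprecated in Hadoop 2.x; the equivalent two-step form is:
start-dfs.sh
start-yarn.sh
Expected jps output on each node: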
master :
[root@master ~]# jps
4705 SecondaryNameNode
8712 Jps
4299 NameNode
4891 ResourceManager
5035 NodeManager
4447 DataNode
slave1:
[root@slave1 ~]# jps
2549 DataNode
2729 NodeManager
3258 Jps
slave2:
[root@slave2 ~]# jps
3056 Jps
2596 DataNode
2938 NodeManager
17. In a browser, open the master address plus :8088 (the MapReduce/YARN web UI)
and the master address plus :50070 (the HDFS web UI)