In this post I will share how to set up Hadoop 2 in pseudo-distributed mode; I hope you find it useful. Let's get into it.
Single-node pseudo-distributed setup
1. Download Hadoop
2. Install the JDK and set the environment variables
export JAVA_HOME=/usr/local/java/jdk1.7.0_79
export JRE_HOME=$JAVA_HOME/jre
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
[root@iZ281cu2lqjZ etc]# source /etc/profile
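A quick way to confirm the JDK and the environment variables took effect (the jdk1.7.0_79 path simply mirrors the example above):
java -version              # should report version 1.7.0_79
echo $JAVA_HOME            # should print /usr/local/java/jdk1.7.0_79
ls "$JAVA_HOME/bin/javac"  # a full JDK (not just a JRE) should be present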
3. Create the users and group
groupadd hadoop
useradd -g hadoop yarn
useradd -g hadoop hdfs
useradd -g hadoop mapred
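A small sanity check that the three accounts ended up in the hadoop group (exact output format depends on your distribution):
id yarn     # should show gid=...(hadoop)
id hdfs
id mapred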
4. Create the data and log directories
mkdir -p /var/data/hadoop/hdfs/nn
mkdir -p /var/data/hadoop/hdfs/snn
mkdir -p /var/data/hadoop/hdfs/dn
chown hdfs:hadoop /var/data/hadoop/hdfs -R
mkdir -p /var/log/hadoop/yarn
chown yarn:hadoop /var/log/hadoop/yarn -R
Then change into the Hadoop installation directory (/opt/yarn/hadoop-2.7.1 in this setup, per the logs below) and prepare its logs directory:
mkdir logs
chmod g+w logs
chown yarn:hadoop . -R
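A hedged check that the directories ended up with the intended owners (the /opt/yarn/hadoop-2.7.1 path is taken from the logs later in this article):
ls -ld /var/data/hadoop/hdfs/nn /var/data/hadoop/hdfs/snn /var/data/hadoop/hdfs/dn   # owner should be hdfs:hadoop
ls -ld /var/log/hadoop/yarn /opt/yarn/hadoop-2.7.1/logs                              # owner should be yarn:hadoop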
5. Configure core-site.xml
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>hadoop.http.staticuser.user</name>
<value>hdfs</value>
</property>
fs.default.name specifies the NameNode's hostname and RPC port (in Hadoop 2 this key is deprecated in favor of fs.defaultFS, but it is still accepted);
hadoop.http.staticuser.user sets the default static user for the Hadoop web UIs to hdfs.
6. Configure hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>1</value> <!-- the default is 3 -->
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/var/data/hadoop/hdfs/nn</value>
</property>
<property>
<name>fs.checkpoint.dir</name>
<value>file:/var/data/hadoop/hdfs/snn</value>
</property>
<property>
<name>fs.checkpoint.edit.dir</name>
<value>file:/var/data/hadoop/hdfs/snn</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/var/data/hadoop/hdfs/dn</value>
</property>
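Once core-site.xml and hdfs-site.xml are saved, you can read the effective values back with hdfs getconf; a sketch, assuming you run it from the Hadoop bin directory used later in this article:
cd /opt/yarn/hadoop-2.7.1/bin
./hdfs getconf -confKey fs.defaultFS      # expect hdfs://localhost:9000 (fs.default.name maps to this key)
./hdfs getconf -confKey dfs.replication   # expect 1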
7. Configure mapred-site.xml (in Hadoop 2.7.x this file is usually created by copying mapred-site.xml.template in the same directory)
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
This sets the MapReduce framework name so that jobs run on YARN.
8. Configure yarn-site.xml
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
These two properties enable the MapReduce shuffle auxiliary service in the NodeManager; it is not enabled by default.
9. Adjust the Java heap size
hadoop-env.sh
HADOOP_HEAPSIZE="500"
yarn-env.sh
YARN_HEAPSIZE=500
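One way to append these settings is sketched below; mind the quoting, because an unbalanced double quote in yarn-env.sh is exactly what produces the "unexpected EOF while looking for matching" warnings seen in step 12 (paths assumed from this article's install location):
cd /opt/yarn/hadoop-2.7.1/etc/hadoop
echo 'HADOOP_HEAPSIZE="500"' >> hadoop-env.sh
echo 'YARN_HEAPSIZE=500'     >> yarn-env.sh
# after editing, check that both scripts still parse cleanly:
bash -n hadoop-env.sh && bash -n yarn-env.sh && echo "syntax OK"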
10. Format HDFS
Switch to the hdfs user, change into the Hadoop bin directory, and run:
./hdfs namenode -format
(Watch the spelling: my first attempt, ./hdfss namenode -format, hit a problem before I corrected it to ./hdfs.)
11. Start the HDFS daemons
[hdfs@localhost sbin]$ ./hadoop-daemon.sh start namenode
starting namenode, logging to /opt/yarn/hadoop-2.7.1/logs/hadoop-hdfs-namenode-localhost.out
[hdfs@localhost sbin]$ ./hadoop-daemon.sh start secondarynamenode
starting secondarynamenode, logging to /opt/yarn/hadoop-2.7.1/logs/hadoop-hdfs-secondarynamenode-localhost.out
[hdfs@localhost sbin]$ ./hadoop-daemon.sh start datanode
starting datanode, logging to /opt/yarn/hadoop-2.7.1/logs/hadoop-hdfs-datanode-localhost.out
Use jps to check the running processes. Result:
[hdfs@localhost sbin]$ jps
3915 SecondaryNameNode
3969 DataNode
3833 NameNode
4047 Jps
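With all three HDFS daemons up, a small smoke test (run as the hdfs user from the Hadoop bin directory; the directory name below is just an example) confirms the filesystem is writable:
cd /opt/yarn/hadoop-2.7.1/bin
./hdfs dfsadmin -report            # should list one live datanode
./hdfs dfs -mkdir -p /tmp/smoketest
./hdfs dfs -ls /                   # /tmp should now appear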
12. Start YARN
As the yarn user, start the ResourceManager and NodeManager from the sbin directory (the ResourceManager is started the same way with ./yarn-daemon.sh start resourcemanager; the jps output below shows both running):
[yarn@localhost sbin]$ ./yarn-daemon.sh start nodemanager
/opt/yarn/hadoop-2.7.1/etc/hadoop/yarn-env.sh: line 121: unexpected EOF while looking for matching `"'
/opt/yarn/hadoop-2.7.1/etc/hadoop/yarn-env.sh: line 124: syntax error: unexpected end of file
starting nodemanager, logging to /opt/yarn/hadoop-2.7.1/logs/yarn-yarn-nodemanager-localhost.out
/opt/yarn/hadoop-2.7.1/etc/hadoop/yarn-env.sh: line 121: unexpected EOF while looking for matching `"'
/opt/yarn/hadoop-2.7.1/etc/hadoop/yarn-env.sh: line 124: syntax error: unexpected end of file
(These warnings point to an unbalanced double quote introduced while editing yarn-env.sh in step 9; fix the quoting and they disappear. The NodeManager still started, as the jps output below shows.)
[yarn@localhost sbin]$ jps
4132 ResourceManager
4567 Jps
4456 NodeManager
13. Verify
Visit http://<server-ip>:50070 (NameNode web UI)
and http://<server-ip>:8088 (ResourceManager web UI).
Finally, you can run one of the examples bundled with the Hadoop distribution as a check (see the sketch below). That is all there is to a basic pseudo-distributed installation.
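For example, the pi job bundled with the distribution exercises HDFS and YARN end to end; the jar path below assumes the 2.7.1 layout used in this article, and the job should be run as a user with HDFS write permission (e.g. the hdfs user):
cd /opt/yarn/hadoop-2.7.1
./bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar pi 4 100
# a successful run finishes with an estimated value of Pi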
Troubleshooting: if a web UI cannot be reached, the corresponding daemon is not running (50070 is served by the NameNode, 8088 by the ResourceManager). In my case YARN had not started successfully:
running ./start-yarn.sh from sbin reported
localhost: Error: JAVA_HOME is not set and could not be found.
The fix is to set JAVA_HOME to an absolute path in hadoop-env.sh (a sketch follows below).
Restart YARN and the problem is resolved.
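One way to apply that fix, assuming the JDK path used earlier in this article:
cd /opt/yarn/hadoop-2.7.1/etc/hadoop
grep '^export JAVA_HOME' hadoop-env.sh    # shows the current setting
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/local/java/jdk1.7.0_79|' hadoop-env.sh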
Configuring HBase:
Edit hbase-env.sh
and set (uncommenting the line): export JAVA_HOME=/usr/local/java/jdk1.7.0_79
Edit hbase-site.xml:
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:9000/hbase</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
</configuration>
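Before starting HBase it is worth confirming that the hbase.rootdir URI actually resolves to the running HDFS; a sketch, run as a user with HDFS access:
/opt/yarn/hadoop-2.7.1/bin/hdfs dfs -ls hdfs://localhost:9000/
# should list the HDFS root without errors; HBase creates /hbase itself on first start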
Start HBase:
[root@iZ281cu2lqjZ bin]# ./start-hbase.sh
root@localhost's password:
localhost: starting zookeeper, logging to /usr/local/hbase/hbase-1.1.4/bin/../logs/hbase-root-zookeeper-iZ281cu2lqjZ.out
starting master, logging to /usr/local/hbase/hbase-1.1.4/bin/../logs/hbase-root-master-iZ281cu2lqjZ.out
starting regionserver, logging to /usr/local/hbase/hbase-1.1.4/bin/../logs/hbase-root-1-regionserver-iZ281cu2lqjZ.out
[root@iZ281cu2lqjZ bin]# jps
1597 DataNode
3180 ResourceManager
3463 NodeManager
1462 NameNode
8680 HRegionServer
1543 SecondaryNameNode
8536 HQuorumPeer
8597 HMaster
8729 Jps
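A quick check that the cluster is serving requests, piping a command into the HBase shell (run from the HBase bin directory shown above):
cd /usr/local/hbase/hbase-1.1.4/bin
echo "status" | ./hbase shell
# expect something like: 1 active master, 0 backup masters, 1 servers, 0 dead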
Done.
Having read this article, you should now have a reasonable understanding of how to set up Hadoop 2 in pseudo-distributed mode. Thanks for reading!