This article walks through a "hadoop installation file configuration tutorial". The explanations are simple and clear and easy to follow, so work through the steps below to study and learn how to install and configure Hadoop.
1. This is a single-node setup for now: the namenode and datanode both run on the same machine. The Hadoop version is 2.7.2 and the JDK package is jdk-8u131-linux-x64.rpm.
2. Install the JDK
rpm -ivh jdk-8u131-linux-x64.rpm
3. Generate an SSH key pair
ssh-keygen -t rsa
A .ssh directory is created automatically under root's home directory.
4. Append the public key to authorized_keys, as shown below.
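A minimal sketch, assuming the key pair was generated as root with the default file names:
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys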
5. Tighten the permissions, for example as shown below.
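Assuming the default locations under /root:
chmod 700 /root/.ssh
chmod 600 /root/.ssh/authorized_keys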
6. Turn off the firewall (example below).
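On CentOS 7 with firewalld this would be (on CentOS 6, use service iptables stop instead):
systemctl stop firewalld
systemctl disable firewalld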
7. Extract the Hadoop archive
tar zxf hadoop-2.7.2.tar.gz
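The /etc/profile settings in the next step assume Hadoop lives under /hadoop_soft, so if you follow the same layout, extract it there, for example:
mkdir -p /hadoop_soft
tar zxf hadoop-2.7.2.tar.gz -C /hadoop_soft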
8. Edit /etc/profile and add the following
#java
export JAVA_HOME=/usr/java/default
export PATH=$PATH:$JAVA_HOME/bin
#hadoop
export HADOOP_HOME=/hadoop_soft/hadoop-2.7.2
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=/hadoop_soft/hadoop-2.7.2/lib/native"
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_COMMON_LIB_NATIVE_DIR"
##export LD_LIBRARY_PATH=/hadoop_soft/hadoop-2.7.2/lib/native/:$LD_LIBRARY_PATH
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
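After saving the file, reload it so the variables take effect in the current shell:
source /etc/profile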
9. Edit the configuration files under hadoop-2.7.2/etc/hadoop/. The settings are listed below as property name: value pairs.
(1) core-site.xml, where fs.defaultFS is the node name and address of the namenode:
fs.defaultFS: hdfs://192.168.1.120:9000
hadoop.tmp.dir: /hadoop_soft/hadoop-2.7.2/current/tmp
fs.trash.interval: 4320
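Each pair goes into a <property> element inside <configuration>; a sketch of core-site.xml (the other files below follow the same structure):
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.1.120:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/hadoop_soft/hadoop-2.7.2/current/tmp</value>
  </property>
  <property>
    <name>fs.trash.interval</name>
    <value>4320</value>
  </property>
</configuration>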
(2) hdfs-site.xml:
dfs.namenode.name.dir: /hadoop_soft/hadoop-2.7.2/current/dfs/name
dfs.datanode.data.dir: /hadoop_soft/hadoop-2.7.2/current/data
dfs.replication: 1
dfs.webhdfs.enabled: true
dfs.permissions.superusergroup: staff
dfs.permissions.enabled: false
(3) yarn-site.xml:
yarn.resourcemanager.hostname: 192.168.1.120
yarn.nodemanager.aux-services: mapreduce_shuffle
yarn.nodemanager.aux-services.mapreduce.shuffle.class: org.apache.hadoop.mapred.ShuffleHandler
yarn.resourcemanager.address: 192.168.1.120:18040
yarn.resourcemanager.scheduler.address: 192.168.1.120:18030
yarn.resourcemanager.resource-tracker.address: 192.168.1.120:18025
yarn.resourcemanager.admin.address: 192.168.1.120:18141
yarn.resourcemanager.webapp.address: 192.168.1.120:18088
yarn.log-aggregation-enable: true
yarn.log-aggregation.retain-seconds: 86400
yarn.log-aggregation.retain-check-interval-seconds: 86400
yarn.nodemanager.remote-app-log-dir: /tmp/logs
yarn.nodemanager.remote-app-log-dir-suffix: logs
(4) Copy mapred-site.xml.template to mapred-site.xml (command below) and set the properties listed after it.
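Assuming you are in the hadoop-2.7.2/etc/hadoop/ directory:
cp mapred-site.xml.template mapred-site.xml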
mapreduce.framework.name: yarn
mapreduce.jobtracker.http.address: 192.168.1.120:50030
mapreduce.jobhistory.address: 192.168.1.120:10020
mapreduce.jobhistory.webapp.address: 192.168.1.120:19888
mapreduce.jobhistory.done-dir: /jobhistory/done
mapreduce.jobhistory.intermediate-done-dir: /jobhistory/done_intermediate
mapreduce.job.ubertask.enable: true
(5) Edit the slaves file and add the host's IP:
192.168.1.120
(6) In hadoop-env.sh, find the JAVA_HOME line and point it at the JDK, as shown below.
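Assuming the same JDK path used in /etc/profile above, the line would read:
export JAVA_HOME=/usr/java/default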
10. Format the filesystem
hdfs namenode -format
11. Start the cluster: hadoop-2.7.2/sbin/start-all.sh
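Note that start-all.sh does not start the JobHistory server configured in mapred-site.xml; if you want it running, start it separately from the sbin directory:
mr-jobhistory-daemon.sh start historyserver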
12. Verify with jps
6433 NameNode
6532 DataNode
7014 NodeManager
6762 SecondaryNameNode
6910 ResourceManager
7871 Jps
13. Basic Hadoop commands
hadoop fs -mkdir /hadoop-test
hadoop fs -find / -name hadoop-test
hadoop fs -put NOTICE.txt /hadoop-test/
hadoop fs -rm -R /hadoop-test
Thank you for reading. That concludes the "hadoop installation file configuration tutorial". After working through this article you should have a better grasp of installing and configuring Hadoop, though the details still need to be verified in practice on your own systems. This is 億速云; we will keep publishing more articles on related topics, so stay tuned!