Hadoop 3.0.3 wasn't working out, so: upload hadoop-2.6.0.tar.gz to /usr, run chown -R hadoop:hadoop hadoop-2.6.0, and rm the 3.0.3 directory
2. Configure the Java and Hadoop environment variables in /etc/profile
Configure passwordless SSH login (see my earlier notes)
3. Configuration files
Set the Java environment in hadoop-env.sh
core-site.xml
The official docs don't mention this port-9000 setting, but without it start-dfs.sh fails with the following error:
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
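A minimal core-site.xml that supplies the missing address, as a sketch: the hadoop.tmp.dir value is taken from the storage directory shown in the format log below, and localhost stands in for whatever host name you use (here the host was zui):

```xml
<configuration>
  <!-- NameNode RPC endpoint; this is the address start-dfs.sh complains about -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <!-- base directory for HDFS data; matches /usr/hadoop-2.6.0/data/tmp/dfs/name in the format log -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/hadoop-2.6.0/data/tmp</value>
  </property>
</configuration>
```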
hdfs-site.xml
Parameter | Description | Default | Config file | Example value |
dfs.name.dir | The namenode's metadata directories, comma-separated; HDFS redundantly copies the metadata into each of them. They are usually on different block devices; directories that do not exist are ignored. | ${hadoop.tmp.dir}/dfs/name | hdfs-site.xml | /hadoop/hdfs/name |
dfs.name.edits.dir | The namenode's transaction (edits) file directories, comma-separated; HDFS redundantly copies the edits files into each of them. They are usually on different block devices; directories that do not exist are ignored. | ${dfs.name.dir} | hdfs-site.xml | ${ |
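For a pseudo-distributed setup a single property is usually enough in hdfs-site.xml; a sketch matching the defaultReplication=1 that the format log below reports:

```xml
<configuration>
  <!-- one copy of each block is enough on a single-node cluster -->
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```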
4. Format the filesystem
# hadoop namenode -format
[root@zui hadoop]# hadoop namenode -format  (because root is used here, start-dfs.sh cannot start the namenode / datanode and secondarynamenode unless it is also run as root; yarn is unaffected)
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
18/07/23 17:03:28 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = zui/182.61.17.191
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.6.0
STARTUP_MSG: classpath =/***********各種jar包的path/
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e34 96499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on 2014-11-13T21:10 Z
STARTUP_MSG: java = 1.8.0_152
************************************************************/
18/07/23 17:03:29 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
18/07/23 17:03:29 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-cb98355b-6a1d-47a2-964c-48dc32752b55
18/07/23 17:03:30 INFO namenode.FSNamesystem: No KeyProvider found.
18/07/23 17:03:30 INFO namenode.FSNamesystem: fsLock is fair:true
18/07/23 17:03:30 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
18/07/23 17:03:30 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
18/07/23 17:03:30 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
18/07/23 17:03:30 INFO blockmanagement.BlockManager: The block deletion will start around 2018 Jul 23 17:03:30
18/07/23 17:03:30 INFO util.GSet: Computing capacity for map BlocksMap
18/07/23 17:03:30 INFO util.GSet: VM type = 64-bit
18/07/23 17:03:30 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
18/07/23 17:03:30 INFO util.GSet: capacity = 2^21 = 2097152 entries
18/07/23 17:03:30 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
18/07/23 17:03:30 INFO blockmanagement.BlockManager: defaultReplication= 1
18/07/23 17:03:30 INFO blockmanagement.BlockManager: maxReplication= 512
18/07/23 17:03:30 INFO blockmanagement.BlockManager: minReplication= 1
18/07/23 17:03:30 INFO blockmanagement.BlockManager: maxReplicationStreams= 2
18/07/23 17:03:30 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
18/07/23 17:03:30 INFO blockmanagement.BlockManager: replicationRecheckInterval= 3000
18/07/23 17:03:30 INFO blockmanagement.BlockManager: encryptDataTransfer= false
18/07/23 17:03:30 INFO blockmanagement.BlockManager: maxNumBlocksToLog= 1000
18/07/23 17:03:30 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
18/07/23 17:03:30 INFO namenode.FSNamesystem: supergroup = supergroup
18/07/23 17:03:30 INFO namenode.FSNamesystem: isPermissionEnabled = true
18/07/23 17:03:30 INFO namenode.FSNamesystem: HA Enabled: false
18/07/23 17:03:30 INFO namenode.FSNamesystem: Append Enabled: true
18/07/23 17:03:31 INFO util.GSet: Computing capacity for map INodeMap
18/07/23 17:03:31 INFO util.GSet: VM type = 64-bit
18/07/23 17:03:31 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
18/07/23 17:03:31 INFO util.GSet: capacity = 2^20 = 1048576 entries
18/07/23 17:03:31 INFO namenode.NameNode: Caching file names occuring more than 10 times
18/07/23 17:03:31 INFO util.GSet: Computing capacity for map cachedBlocks
18/07/23 17:03:31 INFO util.GSet: VM type = 64-bit
18/07/23 17:03:31 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
18/07/23 17:03:31 INFO util.GSet: capacity = 2^18 = 262144 entries
18/07/23 17:03:31 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
18/07/23 17:03:31 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
18/07/23 17:03:31 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension= 30000
18/07/23 17:03:31 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
18/07/23 17:03:31 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
18/07/23 17:03:31 INFO util.GSet: Computing capacity for map NameNodeRetryCache
18/07/23 17:03:31 INFO util.GSet: VM type = 64-bit
18/07/23 17:03:31 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
18/07/23 17:03:31 INFO util.GSet: capacity = 2^15 = 32768 entries
18/07/23 17:03:31 INFO namenode.NNConf: ACLs enabled? false
18/07/23 17:03:31 INFO namenode.NNConf: XAttrs enabled? true
18/07/23 17:03:31 INFO namenode.NNConf: Maximum size of an xattr: 16384
18/07/23 17:03:31 INFO namenode.FSImage: Allocated new BlockPoolId: BP-702429615-182.61.17.191-1532336611838
18/07/23 17:03:31 INFO common.Storage: Storage directory /usr/hadoop-2.6.0/data/tmp/dfs/name has been successfully formatted.
18/07/23 17:03:32 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
18/07/23 17:03:32 INFO util.ExitUtil: Exiting with status 0
18/07/23 17:03:32 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at zui/182.61.17.191
************************************************************/
The format succeeded. I've pasted the printed output above, since studying this in depth means analyzing it.
5. Start HDFS
Run start-dfs.sh
Check the result with jps
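After start-dfs.sh, jps should list roughly the daemons below (the process IDs here are placeholders, not from the real session):

```
$ start-dfs.sh
$ jps
2481 NameNode
2601 DataNode
2786 SecondaryNameNode
2920 Jps
```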
6. Visit in a browser: http://public-ip:50070/
And a full-size screenshot for good measure
Full reference: https://blog.csdn.net/liuge36/article/details/78353930
Any resemblance is pure plagiarism.
2018-07-23
Resource scheduling in Hadoop: YARN
mapred-site.xml
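A minimal mapred-site.xml telling MapReduce to run on YARN; this is the standard single-node setting from the Hadoop docs:

```xml
<configuration>
  <!-- run MapReduce jobs on YARN rather than the local runner -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```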
yarn-site.xml
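And the matching minimal yarn-site.xml, again the standard single-node setting:

```xml
<configuration>
  <!-- auxiliary shuffle service that MapReduce jobs need -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```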
Switch to the hadoop user before running start-yarn.sh, since passwordless SSH was configured under the hadoop user; as root you would have to enter the password again and again.
Because the earlier start-dfs.sh runs were done as root, the log files give Permission denied for the hadoop user.
Check as follows:
Assign the logs directory's user and group to hadoop. (Tip: whichever user you configured passwordless login under, do all later Hadoop operations as that same user. 1. Under any other user you will be typing the password endlessly; a hundred operations each prompting for a password will drive you mad. 2. Even if you started as root and then wised up and switched back to hadoop, some generated files are owned by root:root; if hadoop also needs those directories it plainly has no permission. If a check turns up 100 such files, with luck a single chown -R fixes them all; without luck, try running chown 100 times.)
Run start-yarn.sh again
Check: why are the namenode and datanode processes not shown, even though http://182.61.**.***:50070 is still reachable?
Enter the address in the browser. OK, seeing the result below means the pseudo-distributed setup is complete.