This article covers the steps for deploying Hadoop 2.7 with Ambari 2.6. Real-world deployments tend to hit a few common snags, so the walkthrough below also records how those were handled.
Apache Ambari is a web-based tool for provisioning, managing, and monitoring Apache Hadoop clusters. It provides centralized management of most Hadoop components, including HDFS, MapReduce, Hive, Pig, HBase, ZooKeeper, Sqoop, and HCatalog, and is one of the top five Hadoop management tools. Ambari can install a secured (Kerberos-based) Hadoop cluster, provides role-based user authentication, authorization, and auditing, and integrates with LDAP and Active Directory for user management.
Ambari was chosen over CDH for this deployment because, at the time, the latest CDH release only supported Hadoop 2.6.x, while the latest Ambari supported Hadoop 2.7.3.
I. Installation and deployment follow the official site http://ambari.apache.org/ and the Jianshu guide https://www.jianshu.com/p/73f9670f71cf, in five main steps (a command sketch follows this list):
1. Set up passwordless SSH trust between the nodes
2. Disable the firewall and SELinux
3. Install ambari-server
4. Run ambari-server setup
5. Deploy the Hadoop components from the web UI
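A minimal sketch of steps 1-4 on CentOS 6 (the OS visible in the startup logs later in this article); the repository URL and peer hostname are assumptions, so substitute the values for your environment:
# 1. passwordless SSH trust: generate a key on the Ambari server, copy it to every node
ssh-keygen -t rsa
ssh-copy-id root@prod-hadoop-master-02    # repeat for each node
# 2. disable the firewall and SELinux (the config edit takes effect after reboot)
service iptables stop && chkconfig iptables off
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# 3. add the Ambari repo and install the server (repo URL is an assumption; check the install guide for your Ambari version)
wget -O /etc/yum.repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.6.2.0/ambari.repo
yum install -y ambari-server
# 4. interactive setup (JDK, database), then start the server
ambari-server setup
ambari-server start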
II. Steps for adding a new node:
1. Note that the SSH private key to supply is the /root/.ssh/id_rsa file on the master1 node, prod-hadoop-master-01
2. Register the node
3. Services can be installed now or added afterwards
4. The default configuration is fine
5. Confirm nothing has changed, then start the deployment
6. Wait for the installation to finish; you can also return to the dashboard and let the remaining installation complete in the background
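To confirm the new node registered with the server, you can list registered hosts through the Ambari REST API (the admin password and server hostname here are assumptions):
curl -u admin:admin -H 'X-Requested-By: ambari' http://prod-hadoop-master-01:8080/api/v1/hosts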
III. Supplement: installing components that Ambari does not bundle:
1. Fix: ambari-server and ambari-agent put their data directories under / by default
# stop the agent before moving its data directory
ambari-agent stop
mv /var/lib/ambari-agent /data/disk1/
ln -s /data/disk1/ambari-agent /var/lib/ambari-agent
# relocate the HDP stack directory the same way
mv /usr/hdp /data/disk1/
ln -s /data/disk1/hdp /usr/hdp
ambari-agent start
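A quick check that the relocation worked: both paths should now be symlinks into /data/disk1, and the agent should come back up cleanly.
ls -ld /var/lib/ambari-agent /usr/hdp
ambari-agent status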
2. Integrating Presto with Ambari
References:
https://www.jianshu.com/p/0b5f52a959d5
https://github.com/prestodb/ambari-presto-service/releases
https://github.com/prestodb/ambari-presto-service/releases/download/v1.2/ambari-presto-1.2.tar.gz
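Download the release tarball linked above onto the Ambari server:
wget https://github.com/prestodb/ambari-presto-service/releases/download/v1.2/ambari-presto-1.2.tar.gz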
[root@prod-hadoop-master-01 ~]# tar zxvf ambari-presto-1.2.tar.gz -C /var/lib/ambari-server/resources/stacks/HDP/2.6/services/
ambari-presto-1.2/
ambari-presto-1.2/configuration/
ambari-presto-1.2/configuration/connectors.properties.xml
ambari-presto-1.2/configuration/jvm.config.xml
ambari-presto-1.2/configuration/config.properties.xml
ambari-presto-1.2/configuration/node.properties.xml
ambari-presto-1.2/HISTORY.rst
ambari-presto-1.2/themes/
ambari-presto-1.2/themes/theme.json
ambari-presto-1.2/Makefile
ambari-presto-1.2/setup.py
ambari-presto-1.2/MANIFEST.in
ambari-presto-1.2/PKG-INFO
ambari-presto-1.2/package/
ambari-presto-1.2/package/scripts/
ambari-presto-1.2/package/scripts/presto_cli.py
ambari-presto-1.2/package/scripts/presto_worker.py
ambari-presto-1.2/package/scripts/presto_coordinator.py
ambari-presto-1.2/package/scripts/__init__.py
ambari-presto-1.2/package/scripts/params.py
ambari-presto-1.2/package/scripts/download.ini
ambari-presto-1.2/package/scripts/common.py
ambari-presto-1.2/package/scripts/presto_client.py
ambari-presto-1.2/setup.cfg
ambari-presto-1.2/ambari_presto.egg-info/
ambari-presto-1.2/ambari_presto.egg-info/dependency_links.txt
ambari-presto-1.2/ambari_presto.egg-info/not-zip-safe
ambari-presto-1.2/ambari_presto.egg-info/PKG-INFO
ambari-presto-1.2/ambari_presto.egg-info/top_level.txt
ambari-presto-1.2/ambari_presto.egg-info/SOURCES.txt
ambari-presto-1.2/LICENSE
ambari-presto-1.2/README.md
ambari-presto-1.2/metainfo.xml
ambari-presto-1.2/requirements.txt
[root@prod-hadoop-master-01 ~]# cd /var/lib/ambari-server/resources/stacks/HDP/2.6/services/
[root@prod-hadoop-master-01 services]# ls
ACCUMULO ATLAS FALCON HBASE HIVE KERBEROS MAHOUT PIG RANGER_KMS SPARK SQOOP stack_advisor.pyc STORM TEZ ZEPPELIN
ambari-presto-1.2 DRUID FLUME HDFS KAFKA KNOX OOZIE RANGER SLIDER SPARK2 stack_advisor.py stack_advisor.pyo SUPERSET YARN ZOOKEEPER
[root@prod-hadoop-master-01 services]# mv ambari-presto-1.2/ PRESTO
[root@prod-hadoop-master-01 services]# chmod -R +x PRESTO/*
[root@prod-hadoop-master-01 services]# ambari-server restart
Then add the Presto service from the Ambari web UI: one coordinator node and two worker nodes.
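Once the service is running, a quick smoke test with the Presto CLI is possible (the coordinator hostname, port, and CLI command name below are all assumptions; use whatever http-server.http.port is set to in config.properties, since Presto's stock 8080 would clash with Ambari's own 8080):
presto --server prod-hadoop-master-01:8285 --catalog hive --execute 'SELECT 1'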
3. Installing Kylin
Reference: https://blog.csdn.net/vivismilecs/article/details/72763665
Download and install:
tar -zxvf apache-kylin-2.3.1-hbase1x-bin.tar.gz -C /hadoop/
cd /hadoop/
# the paths below and in the startup log imply the extracted dir was linked (or renamed) to /hadoop/kylin
ln -s apache-kylin-2.3.1-bin kylin
chown -R hdfs:hadoop kylin/
vim /etc/profile    # add: export KYLIN_HOME=/hadoop/kylin
source /etc/profile
echo $KYLIN_HOME
/hadoop/kylin
Switch to the hdfs user and check that the environment is correctly set up:
su hdfs
hive (enters the Hive CLI; quit; to exit)
hbase shell (enters the HBase shell; Ctrl+C to exit)
[hdfs@prod-hadoop-data-01 kylin]$ bin/check-env.sh
Retrieving hadoop conf dir...
KYLIN_HOME is set to /hadoop/kylin
hdfs is not in the sudoers file. This incident will be reported.
Failed to create hdfs:///kylin/spark-history. Please make sure the user has right to access hdfs:///kylin/spark-history
Troubleshooting: the hdfs user needs sudo rights, so grant them from root:
[hdfs@prod-hadoop-data-01 kylin]$ exit
[root@prod-hadoop-data-01 hadoop]# vim /etc/sudoers.d/waagent
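The edit itself is not shown in the original; a minimal entry that clears the "hdfs is not in the sudoers file" error would be the line below (a broad grant, shown as an assumption; scope it more tightly in production):
hdfs ALL=(ALL) NOPASSWD: ALL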
Verify again:
[hdfs@prod-hadoop-data-01 kylin]$ bin/check-env.sh
Retrieving hadoop conf dir...
KYLIN_HOME is set to /hadoop/kylin
Start Kylin:
[hdfs@prod-hadoop-data-01 kylin]$ bin/kylin.sh start
Retrieving hadoop conf dir...
KYLIN_HOME is set to /hadoop/kylin
Retrieving hive dependency...
Retrieving hbase dependency...
Retrieving hadoop conf dir...
Retrieving kafka dependency...
Retrieving Spark dependency...
Start to check whether we need to migrate acl tables
Retrieving hadoop conf dir...
KYLIN_HOME is set to /hadoop/kylin
Retrieving hive dependency...
Retrieving hbase dependency...
Retrieving hadoop conf dir...
Retrieving kafka dependency...
Retrieving Spark dependency...
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/hadoop/apache-kylin-2.3.1-bin/tool/kylin-tool-2.3.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data/disk1/hdp/2.6.5.0-292/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/hadoop/apache-kylin-2.3.1-bin/spark/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2018-05-24 14:23:21,974 INFO [main] common.KylinConfig:319 : Loading kylin-defaults.properties from file:/hadoop/apache-kylin-2.3.1-bin/tool/kylin-tool-2.3.1.jar!/kylin-defaults.properties
2018-05-24 14:23:22,016 DEBUG [main] common.KylinConfig:278 : KYLIN_CONF property was not set, will seek KYLIN_HOME env variable
2018-05-24 14:23:22,019 INFO [main] common.KylinConfig:99 : Initialized a new KylinConfig from getInstanceFromEnv : 494317290
2018-05-24 14:23:22,120 INFO [main] persistence.ResourceStore:86 : Using metadata url kylin_metadata@hbase for resource store
2018-05-24 14:23:24,034 DEBUG [main] hbase.HBaseConnection:181 : Using the working dir FS for HBase: hdfs://prod-hadoop-master-01.hadoop:8020
2018-05-24 14:23:24,034 INFO [main] hbase.HBaseConnection:258 : connection is null or closed, creating a new one
2018-05-24 14:23:24,168 INFO [main] zookeeper.RecoverableZooKeeper:120 : Process identifier=hconnection-0x7561db12 connecting to ZooKeeper ensemble=prod-hadoop-master-01.hadoop:2181,prod-hadoop-master-02.hadoop:2181,prod-hadoop-data-01.hadoop:2181
2018-05-24 14:23:24,176 INFO [main] zookeeper.ZooKeeper:100 : Client environment:zookeeper.version=3.4.6-292--1, built on 05/11/2018 07:09 GMT
2018-05-24 14:23:24,176 INFO [main] zookeeper.ZooKeeper:100 : Client environment:host.name=prod-hadoop-data-01.hadoop
2018-05-24 14:23:24,176 INFO [main] zookeeper.ZooKeeper:100 : Client environment:java.version=1.8.0_91
2018-05-24 14:23:24,177 INFO [main] zookeeper.ZooKeeper:100 : Client environment:java.vendor=Oracle Corporation
2018-05-24 14:23:24,177 INFO [main] zookeeper.ZooKeeper:100 : Client environment:java.home=/usr/local/java
2018-05-24 14:23:24,182 INFO [main] zookeeper.ZooKeeper:100 : Client environment:java.class.path=/hadoop/kylin/tool/kylin-tool-2.3.1.jar:1.8.1.jar:/hadoop/kylin/spark/jars/hadoop-mapreduce-client-jobclient-2.7.3.jar:/hadoop/kylin/spark/jars/chill-java-0.8.0.jar:jar:/hadoop/kylin/spark/jars/xercesImpl-2.9.1.jar:/hadoop/kylin/spark/jars/netty-3.8.0.Final.jar:/usr/hdp/current/ext/hbase/*
2018-05-24 14:23:24,191 INFO [main] zookeeper.ZooKeeper:100 : Client environment:java.library.path=:/usr/hdp/2.6.5.0-292/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.5.0-292/hadoop/lib/native/Linux-amd64-64:/data/disk1/hdp/2.6.5.0-292/hadoop/lib/native
2018-05-24 14:23:24,191 INFO [main] zookeeper.ZooKeeper:100 : Client environment:java.io.tmpdir=/tmp
2018-05-24 14:23:24,191 INFO [main] zookeeper.ZooKeeper:100 : Client environment:java.compiler=<NA>
2018-05-24 14:23:24,193 INFO [main] zookeeper.ZooKeeper:100 : Client environment:os.name=Linux
2018-05-24 14:23:24,193 INFO [main] zookeeper.ZooKeeper:100 : Client environment:os.arch=amd64
2018-05-24 14:23:24,193 INFO [main] zookeeper.ZooKeeper:100 : Client environment:os.version=2.6.32-696.18.7.el6.x86_64
2018-05-24 14:23:24,193 INFO [main] zookeeper.ZooKeeper:100 : Client environment:user.name=hdfs
2018-05-24 14:23:24,194 INFO [main] zookeeper.ZooKeeper:100 : Client environment:user.home=/home/hdfs
2018-05-24 14:23:24,194 INFO [main] zookeeper.ZooKeeper:100 : Client environment:user.dir=/hadoop/apache-kylin-2.3.1-bin
2018-05-24 14:23:24,195 INFO [main] zookeeper.ZooKeeper:438 : Initiating client connection, connectString=prod-hadoop-master-01.hadoop:2181,prod-hadoop-master-02.hadoop:2181,prod-hadoop-data-01.hadoop:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@66b72664
2018-05-24 14:23:24,237 INFO [main-SendThread(prod-hadoop-data-01.hadoop:2181)] zookeeper.ClientCnxn:1019 : Opening socket connection to server prod-hadoop-data-01.hadoop/172.20.3.6:2181. Will not attempt to authenticate using SASL (unknown error)
2018-05-24 14:23:24,246 INFO [main-SendThread(prod-hadoop-data-01.hadoop:2181)] zookeeper.ClientCnxn:864 : Socket connection established, initiating session, client: /172.20.3.6:50746, server: prod-hadoop-data-01.hadoop/172.20.3.6:2181
2018-05-24 14:23:24,256 INFO [main-SendThread(prod-hadoop-data-01.hadoop:2181)] zookeeper.ClientCnxn:1279 : Session establishment complete on server prod-hadoop-data-01.hadoop/172.20.3.6:2181, sessionid = 0x163882326e1003b, negotiated timeout = 60000
2018-05-24 14:23:24,892 DEBUG [main] hbase.HBaseConnection:181 : Using the working dir FS for HBase: hdfs://prod-hadoop-master-01.hadoop:8020
2018-05-24 14:23:24,944 INFO [main] imps.CuratorFrameworkImpl:224 : Starting
2018-05-24 14:23:24,947 INFO [main] zookeeper.ZooKeeper:438 : Initiating client connection, connectString=prod-hadoop-master-01.hadoop:2181,prod-hadoop-master-02.hadoop:2181,prod-hadoop-data-01.hadoop:2181 sessionTimeout=120000 watcher=org.apache.curator.ConnectionState@67207d8a
2018-05-24 14:23:24,950 INFO [main-SendThread(prod-hadoop-master-02.hadoop:2181)] zookeeper.ClientCnxn:1019 : Opening socket connection to server prod-hadoop-master-02.hadoop/172.20.3.5:2181. Will not attempt to authenticate using SASL (unknown error)
2018-05-24 14:23:24,951 INFO [main-SendThread(prod-hadoop-master-02.hadoop:2181)] zookeeper.ClientCnxn:864 : Socket connection established, initiating session, client: /172.20.3.6:60080, server: prod-hadoop-master-02.hadoop/172.20.3.5:2181
2018-05-24 14:23:24,952 DEBUG [main] util.ZookeeperDistributedLock:143 : 6616@prod-hadoop-data-01 trying to lock /kylin/kylin_metadata/create_htable/kylin_metadata/lock
2018-05-24 14:23:24,957 INFO [main-SendThread(prod-hadoop-master-02.hadoop:2181)] zookeeper.ClientCnxn:1279 : Session establishment complete on server prod-hadoop-master-02.hadoop/172.20.3.5:2181, sessionid = 0x3638801b4480045, negotiated timeout = 60000
2018-05-24 14:23:24,962 INFO [main-EventThread] state.ConnectionStateManager:228 : State change: CONNECTED
2018-05-24 14:23:25,031 INFO [main] util.ZookeeperDistributedLock:155 : 6616@prod-hadoop-data-01 acquired lock at /kylin/kylin_metadata/create_htable/kylin_metadata/lock
2018-05-24 14:23:25,036 DEBUG [main] hbase.HBaseConnection:337 : Creating HTable 'kylin_metadata'
2018-05-24 14:23:27,822 INFO [main] client.HBaseAdmin:789 : Created kylin_metadata
2018-05-24 14:23:27,823 DEBUG [main] hbase.HBaseConnection:350 : HTable 'kylin_metadata' created
2018-05-24 14:23:27,824 DEBUG [main] util.ZookeeperDistributedLock:223 : 6616@prod-hadoop-data-01 trying to unlock /kylin/kylin_metadata/create_htable/kylin_metadata/lock
2018-05-24 14:23:27,833 INFO [main] util.ZookeeperDistributedLock:234 : 6616@prod-hadoop-data-01 released lock at /kylin/kylin_metadata/create_htable/kylin_metadata/lock
2018-05-24 14:23:28,105 DEBUG [main] hbase.HBaseConnection:181 : Using the working dir FS for HBase: hdfs://prod-hadoop-master-01.hadoop:8020
2018-05-24 14:23:28,105 INFO [main] hbase.HBaseConnection:258 : connection is null or closed, creating a new one
2018-05-24 14:23:28,106 INFO [main] zookeeper.RecoverableZooKeeper:120 : Process identifier=hconnection-0xf339eae connecting to ZooKeeper ensemble=prod-hadoop-master-01.hadoop:2181,prod-hadoop-master-02.hadoop:2181,prod-hadoop-data-01.hadoop:2181
2018-05-24 14:23:28,106 INFO [main] zookeeper.ZooKeeper:438 : Initiating client connection, connectString=prod-hadoop-master-01.hadoop:2181,prod-hadoop-master-02.hadoop:2181,prod-hadoop-data-01.hadoop:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@2822c6ff
2018-05-24 14:23:28,109 INFO [main-SendThread(prod-hadoop-data-01.hadoop:2181)] zookeeper.ClientCnxn:1019 : Opening socket connection to server prod-hadoop-data-01.hadoop/172.20.3.6:2181. Will not attempt to authenticate using SASL (unknown error)
2018-05-24 14:23:28,109 INFO [main-SendThread(prod-hadoop-data-01.hadoop:2181)] zookeeper.ClientCnxn:864 : Socket connection established, initiating session, client: /172.20.3.6:50760, server: prod-hadoop-data-01.hadoop/172.20.3.6:2181
2018-05-24 14:23:28,115 INFO [main-SendThread(prod-hadoop-data-01.hadoop:2181)] zookeeper.ClientCnxn:1279 : Session establishment complete on server prod-hadoop-data-01.hadoop/172.20.3.6:2181, sessionid = 0x163882326e1003c, negotiated timeout = 60000
2018-05-24 14:23:28,138 INFO [close-hbase-conn] hbase.HBaseConnection:137 : Closing HBase connections...
2018-05-24 14:23:28,144 INFO [close-hbase-conn] client.ConnectionManager$HConnectionImplementation:1703 : Closing zookeeper sessionid=0x163882326e1003c
2018-05-24 14:23:28,152 INFO [close-hbase-conn] zookeeper.ZooKeeper:684 : Session: 0x163882326e1003c closed
2018-05-24 14:23:28,152 INFO [main-EventThread] zookeeper.ClientCnxn:524 : EventThread shut down
2018-05-24 14:23:28,154 INFO [Thread-8] zookeeper.ZooKeeper:684 : Session: 0x3638801b4480045 closed
2018-05-24 14:23:28,154 INFO [main-EventThread] zookeeper.ClientCnxn:524 : EventThread shut down
2018-05-24 14:23:28,162 INFO [close-hbase-conn] client.ConnectionManager$HConnectionImplementation:2167 : Closing master protocol: MasterService
2018-05-24 14:23:28,163 INFO [close-hbase-conn] client.ConnectionManager$HConnectionImplementation:1703 : Closing zookeeper sessionid=0x163882326e1003b
2018-05-24 14:23:28,168 INFO [main-EventThread] zookeeper.ClientCnxn:524 : EventThread shut down
2018-05-24 14:23:28,169 INFO [close-hbase-conn] zookeeper.ZooKeeper:684 : Session: 0x163882326e1003b closed
A new Kylin instance is started by hdfs. To stop it, run 'kylin.sh stop'
Check the log at /hadoop/kylin/logs/kylin.log
Web UI is at http://<hostname>:7070/kylin
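The default web UI credentials for a fresh Kylin install are ADMIN/KYLIN; change them after the first login.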
That concludes the steps for deploying Hadoop 2.7 with Ambari 2.6. Thanks for reading.