I. Environment
1. ZooKeeper cluster
10.10.103.144:2181,10.10.103.246:2181,10.10.103.62:2181
2. Metastore database
10.10.103.246:3306
II. Installation
1. Install and configure the database
yum -y install mysql55-server mysql55
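The package installs MySQL but does not start it; before running the SQL below, bring the service up (a quick sketch, assuming the service is named mysqld on this distribution):

service mysqld start
chkconfig mysqld on    # optional: start MySQL on boot
mysql -u root          # open a mysql prompt, then run the SQL below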
CREATE DATABASE IF NOT EXISTS metastore;
GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'localhost' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'10.10.103.246' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'127.0.0.1' IDENTIFIED BY 'hive';
USE metastore;
SOURCE /usr/lib/hive/scripts/metastore/upgrade/mysql/hive-schema-1.1.0.mysql.sql;  # this script errors out partway; after it fails, run the txn schema script below
SOURCE /usr/lib/hive/scripts/metastore/upgrade/mysql/hive-txn-schema-0.13.0.mysql.sql;
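To confirm the schema loaded, you can list the metastore tables (a minimal check, assuming the grants above took effect):

mysql -u hive -phive metastore -e "SHOW TABLES;"    # should list metastore tables such as DBS, TBLS and TXNS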
2. Install Hive
yum -y install hive hive-jdbc hive-metastore hive-server2
3. Configuration
vim /etc/hive/conf/hive-site.xml

<?xml version="1.0"?>
<configuration>
  <property>
    <name>hive.execution.engine</name>
    <value>spark</value>
  </property>
  <property>
    <name>hive.enable.spark.execution.engine</name>
    <value>true</value>
  </property>
  <property>
    <name>spark.master</name>
    <value>yarn-client</value>
  </property>
  <property>
    <name>spark.eventLog.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>spark.eventLog.dir</name>
    <value>hdfs://mycluster:8020/spark-log</value>
  </property>
  <property>
    <name>spark.serializer</name>
    <value>org.apache.spark.serializer.KryoSerializer</value>
  </property>
  <property>
    <name>spark.executor.memory</name>
    <value>1g</value>
  </property>
  <property>
    <name>spark.driver.memory</name>
    <value>1g</value>
  </property>
  <property>
    <name>spark.executor.extraJavaOptions</name>
    <value>-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"</value>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://10.10.103.246:9083</value>
  </property>
  <property>
    <name>hive.metastore.local</name>
    <value>false</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://10.10.103.246/metastore</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
  </property>
  <property>
    <name>datanucleus.autoCreateSchema</name>
    <value>false</value>
  </property>
  <property>
    <name>datanucleus.fixedDatastore</name>
    <value>true</value>
  </property>
  <property>
    <name>datanucleus.autoStartMechanism</name>
    <value>SchemaTable</value>
  </property>
  <property>
    <name>hive.support.concurrency</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.zookeeper.quorum</name>
    <value>10.10.103.144:2181,10.10.103.246:2181,10.10.103.62:2181</value>
  </property>
  <property>
    <name>hive.aux.jars.path</name>
    <value>file:///usr/lib/hive/lib/zookeeper.jar</value>
  </property>
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
  </property>
</configuration>
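Because spark.eventLog.dir points at hdfs://mycluster:8020/spark-log, that directory has to exist on HDFS before jobs run; a sketch, assuming the hdfs superuser is available:

sudo -u hdfs hadoop fs -mkdir -p /spark-log
sudo -u hdfs hadoop fs -chmod 777 /spark-log    # wide-open permissions for a test cluster; tighten for production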
4. Start the metastore service
/etc/init.d/hive-metastore start
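The metastore should now be serving Thrift on port 9083, matching hive.metastore.uris; a quick check:

netstat -tlnp | grep 9083    # expect a LISTEN entry owned by the metastore's java process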
5. Verification
[root@ip-10-10-103-246 conf]# hive
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
17/05/12 15:04:47 WARN conf.HiveConf: HiveConf of name hive.metastore.local does not exist
17/05/12 15:04:47 WARN conf.HiveConf: HiveConf of name hive.enable.spark.execution.engine does not exist

Logging initialized using configuration in file:/etc/hive/conf.dist/hive-log4j.properties
WARNING: Hive CLI is deprecated and migration to Beeline is recommended.
hive> create table navy1(ts BIGINT,line STRING);
OK
Time taken: 0.925 seconds
hive> select count(*) from navy1;
Query ID = root_20170512150505_8f7fb28e-cf32-4efc-bb95-6add37f13fb6
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Spark Job = f045ab15-baaa-40e7-9641-d821fa313abe
Running with YARN Application = application_1494472050574_0014
Kill Command = /usr/lib/hadoop/bin/yarn application -kill application_1494472050574_0014

Query Hive on Spark job[0] stages: 0 1

Status: Running (Hive on Spark job[0])
Job Progress Format
CurrentTime StageId_StageAttemptId: SucceededTasksCount(+RunningTasksCount-FailedTasksCount)/TotalTasksCount [StageCost]
2017-05-12 15:05:30,835 Stage-0_0: 0(+1)/1    Stage-1_0: 0/1
2017-05-12 15:05:33,853 Stage-0_0: 1/1 Finished    Stage-1_0: 1/1 Finished
Status: Finished successfully in 16.05 seconds
OK
0
Time taken: 19.325 seconds, Fetched: 1 row(s)
hive>
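As the startup banner notes, the Hive CLI is deprecated in favor of Beeline. Since hive-server2 was installed above, the same check can go through it once started; a sketch, assuming HiveServer2 on its default port 10000 with no authentication configured:

/etc/init.d/hive-server2 start
beeline -u jdbc:hive2://10.10.103.246:10000 -n hive -e "select count(*) from navy1;"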
6. Problems encountered
Error:
hive> select count(*) from test;
Query ID = root_20170512143232_48d9f363-7b60-4414-9310-e6348104f476
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/HBaseConfiguration
	at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.initiateSparkConf(HiveSparkClientFactory.java:74)
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.setup(SparkSessionManagerImpl.java:81)
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:102)
	at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:111)
	at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:99)
	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214)
	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1979)
	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1692)
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1424)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1208)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1198)
	at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:220)
	at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:172)
	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:383)
	at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:775)
	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:693)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.HBaseConfiguration
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	... 24 more
FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. org/apache/hadoop/hbase/HBaseConfiguration
Solution: the stack trace shows HiveSparkClientFactory.initiateSparkConf failing because org.apache.hadoop.hbase.HBaseConfiguration is not on Hive's classpath, so install HBase:
yum -y install hbase
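Installing the hbase package places the HBase jars where Hive's launcher script can add them to the classpath; to confirm the jar providing the missing class is present (path assumed from the standard package layout):

ls /usr/lib/hbase/hbase*.jar    # one of these jars contains org.apache.hadoop.hbase.HBaseConfiguration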