This article introduces how to install and configure Hive, including how to store the metastore in MySQL and how to connect with the beeline client.
Hive Overview
(1) Hive does not support OLTP processing.
(2) Hive 1.2 and later requires Java 1.7 or newer.
Hive Installation
(1) Hive can be installed on any machine, provided that the Hadoop software is present on it (the HDFS and YARN daemons do not need to be running), because Hive uses some of the jar files that ship with Hadoop; a sketch of pointing Hive at the Hadoop installation follows the extraction step below.
(2) By default, Hive 1.x creates a metastore_db directory for its metadata in whatever directory it is started from. This is inconvenient to use, because different users end up seeing different content, so MySQL can be used to store the metadata instead.
Download link:
http://mirror.olnevhost.net/pub/apache/hive/
[root@Darren2 local]# tar -zxvf apache-hive-1.2.2-bin.tar.gz
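Optionally, Hive can be pointed at the Hadoop installation explicitly through conf/hive-env.sh (not required if HADOOP_HOME is already exported in the shell); the path below matches the Hadoop location used later in this walkthrough and is otherwise an assumption:
[root@Darren2 local]# cd apache-hive-1.2.2-bin/conf
[root@Darren2 conf]# cp hive-env.sh.template hive-env.sh
[root@Darren2 conf]# vim hive-env.sh
# point Hive at the local Hadoop installation (path assumed)
HADOOP_HOME=/usr/local/hadoop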
[root@Darren2 apache-hive-1.2.2-bin]# bin/hive
hive> create database testdb;
hive> show databases;
hive> use testdb;
hive> create table t1(c1 int,c2 string)
> row format delimited
> fields terminated by ',';
[root@Darren2 hive]# hdfs dfs -ls -R /user/hive/warehouse/
drwxr-xr-x - root supergroup 0 2017-11-25 14:25 /user/hive/warehouse/testdb.db
drwxr-xr-x - root supergroup 0 2017-11-25 14:25 /user/hive/warehouse/testdb.db/t1
[root@Darren2 hive]# cat /tmp/t1.data
1,aaa
2,bbb
3,ccc
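The transcript above does not show the step that loads /tmp/t1.data into t1; with the file shown above, it would typically be loaded from the local filesystem like this:
hive> load data local inpath '/tmp/t1.data' into table t1;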
hive> select * from t1;
1 aaa
2 bbb
3 ccc
hive> select * from t1 where c2 = 'bbb';
2 bbb
hive> select count(*) from t1 group by c1;
Query ID = root_20171125143038_249cc07f-270b-422c-a165-4da49e05e6c7
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1511577448141_0003, Tracking URL = http://Darren2:8088/proxy/application_1511577448141_0003/
Kill Command = /usr/local/hadoop/bin/hadoop job -kill job_1511577448141_0003
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2017-11-25 14:30:51,203 Stage-1 map = 0%, reduce = 0%
2017-11-25 14:31:00,747 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.52 sec
2017-11-25 14:31:09,057 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 2.86 sec
MapReduce Total cumulative CPU time: 2 seconds 860 msec
Ended Job = job_1511577448141_0003
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 2.86 sec HDFS Read: 6747 HDFS Write: 6 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 860 msec
OK
1
1
1
Time taken: 31.234 seconds, Fetched: 3 row(s)
You can also watch the job's progress in a browser at http://192.168.163.102:8088/
Configuring MySQL to store the metadata
Create the corresponding hive database in MySQL; when Hive starts, it will generate many metastore tables inside it.
(1) Create the hive-site.xml configuration file with the MySQL connection settings
[root@Darren2 conf]# vim /usr/local/hive-1.2.2/conf/hive-site.xml
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>147258</value>
  </property>
</configuration>
(2) Download the MySQL JDBC driver mysql-connector-java-5.1.45.tar.gz
https://dev.mysql.com/downloads/file/?id=474257
After extracting the archive, place the mysql-connector-java-5.1.45-bin.jar file into the hive-1.2.2/lib directory.
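Assuming the driver archive was downloaded to /usr/local (the download location is an assumption), the extract-and-copy steps would look like:
[root@Darren2 local]# tar -zxvf mysql-connector-java-5.1.45.tar.gz
[root@Darren2 local]# cp mysql-connector-java-5.1.45/mysql-connector-java-5.1.45-bin.jar /usr/local/hive-1.2.2/lib/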
(3) Create the hive database in MySQL
root@localhost [(none)]>create database hive;
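The hive-site.xml above connects as the MySQL root user; if you prefer a dedicated account instead, something like the following would work (the user name and password are placeholders, and ConnectionUserName/ConnectionPassword in hive-site.xml must be changed to match):
root@localhost [(none)]>create user 'hive'@'localhost' identified by 'hivepass';
root@localhost [(none)]>grant all privileges on hive.* to 'hive'@'localhost';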
(4) Test
[root@Darren2 conf]# cd /usr/local/hive-1.2.2/bin/
[root@Darren2 bin]# ./hive
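If the connection settings are correct, the first launch of Hive creates the metastore tables inside the hive database. One way to verify this from MySQL (DBS, TBLS and similar names are the standard Hive metastore tables):
root@localhost [(none)]>use hive;
root@localhost [hive]>show tables;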
Connecting to hiveserver2 with the beeline client
Start the hiveserver2 service on one node; whether it started successfully can be confirmed by checking that it is listening on port 10000 (see the check after the command below). Then connect to hiveserver2 from another node with the beeline client, using the user root and an empty password.
# Start hiveserver2
[root@Darren2 bin]# ./hiveserver2
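Whether hiveserver2 came up can be checked by looking for a listener on port 10000, for example:
[root@Darren2 ~]# netstat -lntp | grep 10000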
# Connect from another node:
[root@Darren2 bin]# ./beeline
beeline> !connect jdbc:hive2://192.168.163.102:10000
Connecting to jdbc:hive2://192.168.163.102:10000
Enter username for jdbc:hive2://192.168.163.102:10000: root
Enter password for jdbc:hive2://192.168.163.102:10000:
Connected to: Apache Hive (version 1.2.2)
Driver: Hive JDBC (version 1.2.2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://192.168.163.102:10000> show databases;
+----------------+--+
| database_name |
+----------------+--+
| default |
| testdb1 |
| testdb2 |
+----------------+--+
That concludes the walkthrough of installing and configuring Hive. Thanks for reading.