
Hive Basic SQL Syntax (DML)

Published: 2020-08-06 10:30:48  Author: wangkunj  Category: Big Data
DML Operations (Data Manipulation Language)

Reference: the official Hive DML documentation.

  • Since UPDATE and DELETE are rarely used in Hive, they are not covered here. This article focuses on the LOAD and INSERT operations.
1. LOAD (loading data)

LOAD loads files into tables (Loading files into tables).

  • The syntax given in the official documentation:

    LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename [PARTITION (partcol1=val1, partcol2=val2 ...)]
    LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename [PARTITION (partcol1=val1, partcol2=val2 ...)] [INPUTFORMAT 'inputformat' SERDE 'serde'] (3.0 or later)
  • 1. When loading data into a table, Hive performs no transformation. The load is a pure copy/move operation: the data file is simply copied or moved to the location corresponding to the Hive table.

  • 2. The load target can be a table or a partition. If the table is partitioned, a specific partition must be selected by giving values for all partition columns.

  • 3. filepath can be a single file or a directory; in either case it is treated as a set of files.

LOCAL: the input file is on the local file system (Linux); without LOCAL, Hive looks for the file on HDFS.
OVERWRITE: if the table already contains data, it is deleted first and then the new data is inserted; without this keyword the data is simply appended to the table.
PARTITION: if the table is partitioned, data can be loaded into a specific partition (a partitioned-load sketch follows below).
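
The transcript below loads into a non-partitioned table only, so here is a minimal sketch of a partitioned load. The table order_log, its columns, and the file path are hypothetical and not part of the original example; the PARTITION clause names the target partition explicitly:

# Hypothetical table partitioned by dt (a date string)
hive> create table order_log (id int, amount double)
    > PARTITIONED BY (dt string)
    > ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
# Load a local file into the dt='2018-06-24' partition only
hive> LOAD DATA LOCAL INPATH '/home/hadoop/order_log.txt'
    > OVERWRITE INTO TABLE order_log PARTITION (dt='2018-06-24');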

# Create an employee table
hive> create table emp 
    > (empno int, ename string, job string, mgr int, hiredate string, salary double, comm double, deptno int)
    > ROW FORMAT DELIMITED 
    > FIELDS TERMINATED BY '\t' ;
OK
Time taken: 0.651 seconds
# Load emp.txt from the local file system into the table
hive> LOAD DATA LOCAL INPATH '/home/hadoop/emp.txt' OVERWRITE INTO TABLE emp; 
Loading data to table default.emp
Table default.emp stats: [numFiles=1, numRows=0, totalSize=886, rawDataSize=0]
OK
Time taken: 1.848 seconds
# View the table data
hive> select * from emp;
OK
7369    SMITH   CLERK   7902    1980-12-17      800.0   NULL    20
7499    ALLEN   SALESMAN        7698    1981-2-20       1600.0  300.0   30
7521    WARD    SALESMAN        7698    1981-2-22       1250.0  500.0   30
7566    JONES   MANAGER 7839    1981-4-2        2975.0  NULL    20
7654    MARTIN  SALESMAN        7698    1981-9-28       1250.0  1400.0  30
7698    BLAKE   MANAGER 7839    1981-5-1        2850.0  NULL    30
7782    CLARK   MANAGER 7839    1981-6-9        2450.0  NULL    10
7788    SCOTT   ANALYST 7566    1987-4-19       3000.0  NULL    20
7839    KING    PRESIDENT       NULL    1981-11-17      5000.0  NULL    10
7844    TURNER  SALESMAN        7698    1981-9-8        1500.0  0.0     30
7876    ADAMS   CLERK   7788    1987-5-23       1100.0  NULL    20
7900    JAMES   CLERK   7698    1981-12-3       950.0   NULL    30
7902    FORD    ANALYST 7566    1981-12-3       3000.0  NULL    20
7934    MILLER  CLERK   7782    1982-1-23       1300.0  NULL    10
# Load again without the OVERWRITE keyword
hive>  LOAD DATA LOCAL INPATH '/home/hadoop/emp.txt'  INTO TABLE emp;
# Check again: the data has been appended
hive> select * from emp;
OK
7369    SMITH   CLERK   7902    1980-12-17      800.0   NULL    20
7499    ALLEN   SALESMAN        7698    1981-2-20       1600.0  300.0   30
7521    WARD    SALESMAN        7698    1981-2-22       1250.0  500.0   30
7566    JONES   MANAGER 7839    1981-4-2        2975.0  NULL    20
7654    MARTIN  SALESMAN        7698    1981-9-28       1250.0  1400.0  30
7698    BLAKE   MANAGER 7839    1981-5-1        2850.0  NULL    30
7782    CLARK   MANAGER 7839    1981-6-9        2450.0  NULL    10
7788    SCOTT   ANALYST 7566    1987-4-19       3000.0  NULL    20
7839    KING    PRESIDENT       NULL    1981-11-17      5000.0  NULL    10
7844    TURNER  SALESMAN        7698    1981-9-8        1500.0  0.0     30
7876    ADAMS   CLERK   7788    1987-5-23       1100.0  NULL    20
7900    JAMES   CLERK   7698    1981-12-3       950.0   NULL    30
7902    FORD    ANALYST 7566    1981-12-3       3000.0  NULL    20
7934    MILLER  CLERK   7782    1982-1-23       1300.0  NULL    10
7369    SMITH   CLERK   7902    1980-12-17      800.0   NULL    20
7499    ALLEN   SALESMAN        7698    1981-2-20       1600.0  300.0   30
7521    WARD    SALESMAN        7698    1981-2-22       1250.0  500.0   30
7566    JONES   MANAGER 7839    1981-4-2        2975.0  NULL    20
7654    MARTIN  SALESMAN        7698    1981-9-28       1250.0  1400.0  30
7698    BLAKE   MANAGER 7839    1981-5-1        2850.0  NULL    30
7782    CLARK   MANAGER 7839    1981-6-9        2450.0  NULL    10
7788    SCOTT   ANALYST 7566    1987-4-19       3000.0  NULL    20
7839    KING    PRESIDENT       NULL    1981-11-17      5000.0  NULL    10
7844    TURNER  SALESMAN        7698    1981-9-8        1500.0  0.0     30
7876    ADAMS   CLERK   7788    1987-5-23       1100.0  NULL    20
7900    JAMES   CLERK   7698    1981-12-3       950.0   NULL    30
7902    FORD    ANALYST 7566    1981-12-3       3000.0  NULL    20
7934    MILLER  CLERK   7782    1982-1-23       1300.0  NULL    10
Time taken: 0.137 seconds, Fetched: 28 row(s)
# Load once more with OVERWRITE
hive> LOAD DATA LOCAL INPATH '/home/hadoop/emp.txt' OVERWRITE INTO TABLE emp; 
# The existing data has been overwritten
hive> select * from emp;
OK
7369    SMITH   CLERK   7902    1980-12-17      800.0   NULL    20
7499    ALLEN   SALESMAN        7698    1981-2-20       1600.0  300.0   30
7521    WARD    SALESMAN        7698    1981-2-22       1250.0  500.0   30
7566    JONES   MANAGER 7839    1981-4-2        2975.0  NULL    20
7654    MARTIN  SALESMAN        7698    1981-9-28       1250.0  1400.0  30
7698    BLAKE   MANAGER 7839    1981-5-1        2850.0  NULL    30
7782    CLARK   MANAGER 7839    1981-6-9        2450.0  NULL    10
7788    SCOTT   ANALYST 7566    1987-4-19       3000.0  NULL    20
7839    KING    PRESIDENT       NULL    1981-11-17      5000.0  NULL    10
7844    TURNER  SALESMAN        7698    1981-9-8        1500.0  0.0     30
7876    ADAMS   CLERK   7788    1987-5-23       1100.0  NULL    20
7900    JAMES   CLERK   7698    1981-12-3       950.0   NULL    30
7902    FORD    ANALYST 7566    1981-12-3       3000.0  NULL    20
7934    MILLER  CLERK   7782    1982-1-23       1300.0  NULL    10
Time taken: 0.164 seconds, Fetched: 14 row(s)
2. INSERT into tables (Inserting data into Hive Tables from queries)
  • The syntax given in the official documentation:
Standard syntax:
INSERT OVERWRITE TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]] select_statement1 FROM from_statement;
INSERT INTO TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...)] select_statement1 FROM from_statement;

Hive extension (multiple inserts):
FROM from_statement
INSERT OVERWRITE TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]] select_statement1
[INSERT OVERWRITE TABLE tablename2 [PARTITION ... [IF NOT EXISTS]] select_statement2]
[INSERT INTO TABLE tablename2 [PARTITION ...] select_statement2] ...;
FROM from_statement
INSERT INTO TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...)] select_statement1
[INSERT INTO TABLE tablename2 [PARTITION ...] select_statement2]
[INSERT OVERWRITE TABLE tablename2 [PARTITION ... [IF NOT EXISTS]] select_statement2] ...;

Hive extension (dynamic partition inserts):
INSERT OVERWRITE TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...) select_statement FROM from_statement;
INSERT INTO TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...) select_statement FROM from_statement;

The official documentation lists quite a lot of syntax and it looks intimidating at first, but once sorted out it is straightforward. Breaking it down:

  • 1. Standard syntax: INSERT OVERWRITE TABLE tablename1 select_statement1 FROM from_statement; is just a plain insert statement.
  • 2. The PARTITION keyword can be used to insert into a specific partition.
  • 3. OVERWRITE chooses whether existing data is overwritten.
  • 4. Insert statements run as MapReduce jobs.
  • 5. Multiple inserts: several INSERT clauses share a single FROM, so the source is scanned once and written to multiple tables or partitions.
  • 6. Dynamic partition inserts: the partition values are taken from the query result (see the sketch after the examples below).

Note: there are two insert forms here, i.e., with the OVERWRITE keyword (overwrite) and without it (append).

# insert overwrite
hive> insert overwrite table emp2 select * from emp;
Query ID = hadoop_20180624141010_3063d504-ff2f-4003-843f-7dca60f1dd7e
Total jobs = 3
Launching Job 1 out of 3
...
OK
Time taken: 19.554 seconds
hive> select * from emp2;
OK
7369    SMITH   CLERK   7902    1980-12-17      800.0   NULL    20
7499    ALLEN   SALESMAN        7698    1981-2-20       1600.0  300.0   30
7521    WARD    SALESMAN        7698    1981-2-22       1250.0  500.0   30
7566    JONES   MANAGER 7839    1981-4-2        2975.0  NULL    20
7654    MARTIN  SALESMAN        7698    1981-9-28       1250.0  1400.0  30
7698    BLAKE   MANAGER 7839    1981-5-1        2850.0  NULL    30
7782    CLARK   MANAGER 7839    1981-6-9        2450.0  NULL    10
7788    SCOTT   ANALYST 7566    1987-4-19       3000.0  NULL    20
7839    KING    PRESIDENT       NULL    1981-11-17      5000.0  NULL    10
7844    TURNER  SALESMAN        7698    1981-9-8        1500.0  0.0     30
7876    ADAMS   CLERK   7788    1987-5-23       1100.0  NULL    20
7900    JAMES   CLERK   7698    1981-12-3       950.0   NULL    30
7902    FORD    ANALYST 7566    1981-12-3       3000.0  NULL    20
7934    MILLER  CLERK   7782    1982-1-23       1300.0  NULL    10
Time taken: 0.143 seconds, Fetched: 14 row(s)

# insert into (append)
hive> insert into table emp2 select * from emp;
Query ID = hadoop_20180624141010_3063d504-ff2f-4003-843f-7dca60f1dd7e
Total jobs = 3
...
OK
Time taken: 18.539 seconds
hive> select * from emp2;
OK
7369    SMITH   CLERK   7902    1980-12-17      800.0   NULL    20
7499    ALLEN   SALESMAN        7698    1981-2-20       1600.0  300.0   30
7521    WARD    SALESMAN        7698    1981-2-22       1250.0  500.0   30
7566    JONES   MANAGER 7839    1981-4-2        2975.0  NULL    20
7654    MARTIN  SALESMAN        7698    1981-9-28       1250.0  1400.0  30
7698    BLAKE   MANAGER 7839    1981-5-1        2850.0  NULL    30
7782    CLARK   MANAGER 7839    1981-6-9        2450.0  NULL    10
7788    SCOTT   ANALYST 7566    1987-4-19       3000.0  NULL    20
7839    KING    PRESIDENT       NULL    1981-11-17      5000.0  NULL    10
7844    TURNER  SALESMAN        7698    1981-9-8        1500.0  0.0     30
7876    ADAMS   CLERK   7788    1987-5-23       1100.0  NULL    20
7900    JAMES   CLERK   7698    1981-12-3       950.0   NULL    30
7902    FORD    ANALYST 7566    1981-12-3       3000.0  NULL    20
7934    MILLER  CLERK   7782    1982-1-23       1300.0  NULL    10
7369    SMITH   CLERK   7902    1980-12-17      800.0   NULL    20
7499    ALLEN   SALESMAN        7698    1981-2-20       1600.0  300.0   30
7521    WARD    SALESMAN        7698    1981-2-22       1250.0  500.0   30
7566    JONES   MANAGER 7839    1981-4-2        2975.0  NULL    20
7654    MARTIN  SALESMAN        7698    1981-9-28       1250.0  1400.0  30
7698    BLAKE   MANAGER 7839    1981-5-1        2850.0  NULL    30
7782    CLARK   MANAGER 7839    1981-6-9        2450.0  NULL    10
7788    SCOTT   ANALYST 7566    1987-4-19       3000.0  NULL    20
7839    KING    PRESIDENT       NULL    1981-11-17      5000.0  NULL    10
7844    TURNER  SALESMAN        7698    1981-9-8        1500.0  0.0     30
7876    ADAMS   CLERK   7788    1987-5-23       1100.0  NULL    20
7900    JAMES   CLERK   7698    1981-12-3       950.0   NULL    30
7902    FORD    ANALYST 7566    1981-12-3       3000.0  NULL    20
7934    MILLER  CLERK   7782    1982-1-23       1300.0  NULL    10
Time taken: 0.132 seconds, Fetched: 28 row(s)
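
The dynamic partition insert from item 6 above is not demonstrated in this transcript. A minimal sketch, assuming a hypothetical table emp_part with the same columns as emp but with deptno as the partition column (not part of the original article); Hive normally requires hive.exec.dynamic.partition.mode=nonstrict when every partition column is dynamic:

# Hypothetical target table: deptno becomes the partition column
hive> create table emp_part
    > (empno int, ename string, job string, mgr int, hiredate string, salary double, comm double)
    > PARTITIONED BY (deptno int)
    > ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
hive> set hive.exec.dynamic.partition.mode=nonstrict;
# The partition value is taken from the last column of the SELECT (deptno)
hive> insert overwrite table emp_part PARTITION (deptno)
    > select empno, ename, job, mgr, hiredate, salary, comm, deptno from emp;

Each distinct deptno value in the query result becomes its own partition directory under the table location.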
  • Inserting values into tables (manually inserts one or more records; runs a MapReduce job and is rarely used)
  • Official syntax:
    INSERT INTO TABLE tablename [PARTITION (partcol1[=val1], partcol2[=val2] ...)] VALUES values_row [, values_row ...]
hive> create table stu(
    > id int,
    > name string
    > )
    >  ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
OK
Time taken: 0.253 seconds
hive> insert into table stu values (1,"zhangsan");
Query ID = hadoop_20180624141010_3063d504-ff2f-4003-843f-7dca60f1dd7e
Total jobs = 3
...
OK
Time taken: 16.589 seconds
hive> select * from stu;
OK
1       zhangsan
Time taken: 0.123 seconds, Fetched: 1 row(s)
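
The VALUES syntax above also accepts several value rows in one statement. A minimal sketch (the extra rows are hypothetical and not from the original transcript):

# Multiple value rows can be supplied in a single INSERT (also runs an MR job)
hive> insert into table stu values (2,"lisi"),(3,"wangwu");
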
3. Exporting data (Writing data into the filesystem from queries)

Query results can be written into the file system with an INSERT statement:

  • The syntax given in the official documentation:
    
    Standard syntax:
    INSERT OVERWRITE [LOCAL] DIRECTORY directory1
    [ROW FORMAT row_format] [STORED AS file_format] (Note: Only available starting with Hive 0.11.0)
    SELECT ... FROM ...

Hive extension (multiple inserts):
FROM from_statement
INSERT OVERWRITE [LOCAL] DIRECTORY directory1 select_statement1
[INSERT OVERWRITE [LOCAL] DIRECTORY directory2 select_statement2] ...

row_format
: DELIMITED [FIELDS TERMINATED BY char [ESCAPED BY char]] [COLLECTION ITEMS TERMINATED BY char]
[MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
[NULL DEFINED AS char] (Note: Only available starting with Hive 0.13)

LOCAL: with the LOCAL keyword the result is written to the local file system; without it, the result is written to HDFS (see the sketch after the examples below).
STORED AS: specifies the storage format of the exported files.

hive> insert overwrite local directory '/home/hadoop/stu' ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' select * from stu;
Query ID = hadoop_20180624153131_97271014-abcf-4e70-b318-7de85d27c97f
Total jobs = 1
...
OK
Time taken: 15.09 seconds
# Result
[hadoop@hadoop000 stu]$ pwd
/home/hadoop/stu
[hadoop@hadoop000 stu]$ cat 000000_0 
1       zhangsan

# Export multiple result sets (multi-insert)
hive> from emp
    > INSERT OVERWRITE  LOCAL DIRECTORY '/home/hadoop/tmp/hivetmp1'
    > ROW FORMAT DELIMITED FIELDS TERMINATED BY "\t"
    > select empno, ename  
    > INSERT OVERWRITE  LOCAL DIRECTORY '/home/hadoop/tmp/hivetmp2'
    > ROW FORMAT DELIMITED FIELDS TERMINATED BY "\t"
    > select ename;
Query ID = hadoop_20180624153131_97271014-abcf-4e70-b318-7de85d27c97f
Total jobs = 1
...
OK
Time taken: 16.261 seconds
# Result
[hadoop@hadoop000 tmp]$ cd hivetmp1
[hadoop@hadoop000 hivetmp1]$ ll
total 4
-rw-r--r-- 1 hadoop hadoop 154 Jun 24 15:39 000000_0
[hadoop@hadoop000 hivetmp1]$ cat 000000_0 
7369    SMITH
7499    ALLEN
7521    WARD
7566    JONES
7654    MARTIN
7698    BLAKE
7782    CLARK
7788    SCOTT
7839    KING
7844    TURNER
7876    ADAMS
7900    JAMES
7902    FORD
7934    MILLER
[hadoop@hadoop000 hivetmp1]$ cd ..
[hadoop@hadoop000 tmp]$ cd hivetmp2
[hadoop@hadoop000 hivetmp2]$ cat 000000_0 
SMITH
ALLEN
WARD
JONES
MARTIN
BLAKE
CLARK
SCOTT
KING
TURNER
ADAMS
JAMES
FORD
MILLER
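
Both export examples above use LOCAL, so the results land on the local file system. Without LOCAL the output directory is on HDFS instead. A minimal sketch (the path /tmp/emp_export is hypothetical; ROW FORMAT on the export requires Hive 0.11.0 or later, per the syntax above):

# Write the query result to an HDFS directory
hive> INSERT OVERWRITE DIRECTORY '/tmp/emp_export'
    > ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
    > select empno, ename, salary from emp;
# Inspect the exported file on HDFS
[hadoop@hadoop000 ~]$ hdfs dfs -cat /tmp/emp_export/000000_0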

Reference: https://blog.csdn.net/yu0_zhang0/article/details/79007784
