This article gives a detailed, example-driven look at local MR in apache-hive-1.2.1. It is shared here as a practical reference; hopefully you will take something useful away from it.
Many of the SQL statements run in Hive are small: little data and little computation. Executing such small queries in fully distributed fashion is not worth it, because the SQL itself may only take about 10 seconds to run, while generating and scheduling the distributed job can take on the order of a minute. For these small jobs Hive can use local MR mode, i.e. local execution: the input data is pulled back to the client and the query is executed there in a single process.
Whether local MR kicks in is decided by three parameters:
hive.exec.mode.local.auto=true — whether to enable automatic local MR mode
hive.exec.mode.local.auto.input.files.max=4 — maximum number of input files; the default is 4
hive.exec.mode.local.auto.inputbytes.max=134217728 — maximum total size of the input files; the default is 128 MB
Note:
hive.exec.mode.local.auto is the precondition: only when it is set to true can local MR mode be chosen at all.
hive.exec.mode.local.auto.input.files.max and hive.exec.mode.local.auto.inputbytes.max are combined with a logical AND: local MR is used only when both limits are satisfied (see the short CLI sketch below).
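For reference, the current values can be inspected and overridden per session from the Hive CLI. A minimal sketch; these set commands only affect the current session:

hive> set hive.exec.mode.local.auto;                    ==> prints the current value of the property
hive> set hive.exec.mode.local.auto=true;               ==> enable automatic local mode for this session
hive> set hive.exec.mode.local.auto.input.files.max;    ==> default: 4
hive> set hive.exec.mode.local.auto.inputbytes.max;     ==> default: 134217728 (128 MB)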
Test tables used below:
t_1 ==> 5 files
t_2 ==> 2 files
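These counts can be verified from inside the Hive CLI with a dfs listing. A minimal sketch, assuming the default warehouse location /user/hive/warehouse (adjust the path to your installation):

hive> dfs -ls /user/hive/warehouse/t_1;    ==> should list the 5 data files backing t_1
hive> dfs -ls /user/hive/warehouse/t_2;    ==> should list the 2 data files backing t_2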
hive> set hive.exec.mode.local.auto=false;
hive> select * from t_2 order by id;
Query ID = hadoop_20160125132157_d767beb0-f674-4962-ac3c-8fbdd2949d01
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1453706740954_0006, Tracking URL = http://hftest0001.webex.com:8088/proxy/application_1453706740954_0006/
Kill Command = /home/hadoop/hadoop-2.7.1/bin/hadoop job -kill job_1453706740954_0006
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2016-01-25 13:22:19,210 Stage-1 map = 0%, reduce = 0%
2016-01-25 13:22:26,497 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.47 sec
2016-01-25 13:22:40,207 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 3.68 sec
MapReduce Total cumulative CPU time: 3 seconds 680 msec
Ended Job = job_1453706740954_0006
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 3.68 sec HDFS Read: 5465 HDFS Write: 32 SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 680 msec
OK
... ...

hive> set hive.exec.mode.local.auto=true;
hive> select * from t_2 order by id;
Automatically selecting local only mode for query    ==> local mode was selected
Query ID = hadoop_20160125132322_9649b904-ad87-47fa-89ad-5e5f67315ac8
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Job running in-process (local Hadoop)
2016-01-25 13:23:27,192 Stage-1 map = 100%, reduce = 100%
Ended Job = job_local1850780899_0002
MapReduce Jobs Launched:
Stage-Stage-1: HDFS Read: 1464 HDFS Write: 1618252652 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
... ...
hive> set hive.exec.mode.local.auto=true;
hive> select * from t_1 order by id;
Query ID = hadoop_20160125132411_3ecd7ee9-8ccb-4bcc-8582-6d797c13babd
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Cannot run job locally: Number of Input Files (= 5) is larger than hive.exec.mode.local.auto.input.files.max(= 4)    ==> 5 > 4, so Hive falls back to distributed execution
Starting Job = job_1453706740954_0007, Tracking URL = http://hftest0001.webex.com:8088/proxy/application_1453706740954_0007/
Kill Command = /home/hadoop/hadoop-2.7.1/bin/hadoop job -kill job_1453706740954_0007
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2016-01-25 13:24:38,775 Stage-1 map = 0%, reduce = 0%
2016-01-25 13:24:52,115 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.55 sec
2016-01-25 13:24:59,548 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 3.84 sec
MapReduce Total cumulative CPU time: 3 seconds 840 msec
Ended Job = job_1453706740954_0007
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 3.84 sec HDFS Read: 5814 HDFS Write: 56 SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 840 msec
OK
... ...

hive> set hive.exec.mode.local.auto=true;
hive> set hive.exec.mode.local.auto.input.files.max=5;    ==> raise the input-file limit to 5
hive> select * from t_1 order by id;
Automatically selecting local only mode for query    ==> local mode was selected
Query ID = hadoop_20160125132558_db2f4fca-f6bf-4b91-9569-c779a3b13386
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Job running in-process (local Hadoop)
2016-01-25 13:26:03,232 Stage-1 map = 100%, reduce = 100%
Ended Job = job_local264155444_0003
MapReduce Jobs Launched:
Stage-Stage-1: HDFS Read: 1920 HDFS Write: 1887961792 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
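The transcripts above only exercise the file-count threshold. The byte threshold participates in the same AND condition; a minimal sketch of how it could be demonstrated (the output is not captured here, and 100 bytes is a deliberately artificial value):

hive> set hive.exec.mode.local.auto=true;
hive> set hive.exec.mode.local.auto.inputbytes.max=100;    ==> deliberately tiny byte limit, just for illustration
hive> select * from t_2 order by id;

Assuming t_2's data files total more than 100 bytes, Hive should decline local mode and submit the job to the cluster even though the file count (2) is under the limit, mirroring the AND relationship described earlier.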
That wraps up this example analysis of local MR in apache-hive-1.2.1. Hopefully the content above has been of some help; if you found the article useful, please share it so more people can see it.