Following the previous article 《初探IBM大數據處理平臺BigInsights(1)》, this post covers some basic Hadoop commands and runs a simple WordCount program with MapReduce.
1. Create a test directory on HDFS
hadoop fs -mkdir /user/biadmin/test
2. Copy a file into the test directory
hadoop fs -put /var/adm/ibmvmcoc-postinstall/BIlicense_en.txt /user/biadmin/test
3. Verify that the file is now in the test directory
biadmin@bivm:/etc/ibmvmcoc-postinstall> hadoop fs -ls /user/biadmin/test
Found 1 items
-rw-r--r-- 1 biadmin biadmin 62949 2016-01-01 22:34 /user/biadmin/test/BIlicense_en.txt
4. Run a simple MapReduce job
WordCount is a small Java program for Hadoop MapReduce that counts how many times each word occurs in a text. For more on WordCount, see http://wiki.apache.org/hadoop/WordCount
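The core logic is easy to illustrate. Below is a simplified Python sketch of what the map and reduce phases compute together (not the actual Java source, which distributes this work across the cluster):

```python
from collections import Counter

def wordcount(lines):
    """Count word occurrences the way WordCount does:
    map phase emits (token, 1) per whitespace-delimited token,
    reduce phase sums the counts per token."""
    counts = Counter()
    for line in lines:
        for token in line.split():  # map: one (token, 1) pair per word
            counts[token] += 1      # reduce: sum counts for each word
    return counts

print(wordcount(["to be or", "not to be"]))
```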
The program is hadoop-example.jar; the input is the test directory created above, and the output goes to the WordCount_output subdirectory. If this directory does not exist, it is created automatically.
biadmin@bivm:/etc/ibmvmcoc-postinstall> hadoop jar /opt/ibm/biginsights/IHC/hadoop-example.jar wordcount /user/biadmin/test WordCount_output
16/01/01 22:36:08 INFO input.FileInputFormat: Total input paths to process : 1
16/01/01 22:36:18 INFO mapred.JobClient: Running job: job_201601012120_0001
16/01/01 22:36:19 INFO mapred.JobClient: map 0% reduce 0%
16/01/01 22:37:58 INFO mapred.JobClient: map 100% reduce 0%
16/01/01 22:39:07 INFO mapred.JobClient: map 100% reduce 100%
16/01/01 22:39:14 INFO mapred.JobClient: Job complete: job_201601012120_0001
16/01/01 22:39:15 INFO mapred.JobClient: Counters: 29
16/01/01 22:39:15 INFO mapred.JobClient: File System Counters
16/01/01 22:39:15 INFO mapred.JobClient: FILE: BYTES_READ=33219
16/01/01 22:39:15 INFO mapred.JobClient: FILE: BYTES_WRITTEN=419738
16/01/01 22:39:15 INFO mapred.JobClient: HDFS: BYTES_READ=63073
16/01/01 22:39:15 INFO mapred.JobClient: HDFS: BYTES_WRITTEN=24073
16/01/01 22:39:15 INFO mapred.JobClient: org.apache.hadoop.mapreduce.JobCounter
16/01/01 22:39:15 INFO mapred.JobClient: TOTAL_LAUNCHED_MAPS=1
16/01/01 22:39:15 INFO mapred.JobClient: TOTAL_LAUNCHED_REDUCES=1
16/01/01 22:39:15 INFO mapred.JobClient: DATA_LOCAL_MAPS=1
16/01/01 22:39:15 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=95300
16/01/01 22:39:15 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=50249
16/01/01 22:39:15 INFO mapred.JobClient: FALLOW_SLOTS_MILLIS_MAPS=0
16/01/01 22:39:15 INFO mapred.JobClient: FALLOW_SLOTS_MILLIS_REDUCES=0
16/01/01 22:39:15 INFO mapred.JobClient: org.apache.hadoop.mapreduce.TaskCounter
16/01/01 22:39:15 INFO mapred.JobClient: MAP_INPUT_RECORDS=755
16/01/01 22:39:15 INFO mapred.JobClient: MAP_OUTPUT_RECORDS=9865
16/01/01 22:39:15 INFO mapred.JobClient: MAP_OUTPUT_BYTES=102036
16/01/01 22:39:15 INFO mapred.JobClient: MAP_OUTPUT_MATERIALIZED_BYTES=33219
16/01/01 22:39:15 INFO mapred.JobClient: SPLIT_RAW_BYTES=124
16/01/01 22:39:15 INFO mapred.JobClient: COMBINE_INPUT_RECORDS=9865
16/01/01 22:39:15 INFO mapred.JobClient: COMBINE_OUTPUT_RECORDS=2322
16/01/01 22:39:15 INFO mapred.JobClient: REDUCE_INPUT_GROUPS=2322
16/01/01 22:39:15 INFO mapred.JobClient: REDUCE_SHUFFLE_BYTES=33219
16/01/01 22:39:15 INFO mapred.JobClient: REDUCE_INPUT_RECORDS=2322
16/01/01 22:39:15 INFO mapred.JobClient: REDUCE_OUTPUT_RECORDS=2322
16/01/01 22:39:15 INFO mapred.JobClient: SPILLED_RECORDS=4644
16/01/01 22:39:15 INFO mapred.JobClient: CPU_MILLISECONDS=22130
16/01/01 22:39:15 INFO mapred.JobClient: PHYSICAL_MEMORY_BYTES=538050560
16/01/01 22:39:15 INFO mapred.JobClient: VIRTUAL_MEMORY_BYTES=3549384704
16/01/01 22:39:15 INFO mapred.JobClient: COMMITTED_HEAP_BYTES=2097152000
16/01/01 22:39:15 INFO mapred.JobClient: File Input Format Counters
16/01/01 22:39:15 INFO mapred.JobClient: Bytes Read=62949
16/01/01 22:39:15 INFO mapred.JobClient: org.apache.hadoop.mapreduce.lib.output.FileOutputFormat$Counter
16/01/01 22:39:15 INFO mapred.JobClient: BYTES_WRITTEN=24073
The WordCount_output directory was created automatically:
biadmin@bivm:/etc/ibmvmcoc-postinstall> hadoop fs -ls WordCount_output
Found 3 items
-rw-r--r-- 1 biadmin biadmin 0 2016-01-01 22:39 WordCount_output/_SUCCESS
drwx--x--x - biadmin biadmin 0 2016-01-01 22:36 WordCount_output/_logs
-rw-r--r-- 1 biadmin biadmin 24073 2016-01-01 22:39 WordCount_output/part-r-00000
biadmin@bivm:~> hadoop fs -cat WordCount_output/*00
names, 1
national 1
nature 1
necessary 4
negligence 5
negligence, 4
negligence. 1
negligence; 2
neither 3
net 1
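Note that "negligence", "negligence,", "negligence." and "negligence;" appear as separate keys in the output above: the stock WordCount tokenizes on whitespace only and does not strip punctuation. A quick Python illustration of this behavior:

```python
from collections import Counter

text = "negligence, or negligence. Gross negligence"
tokens = text.split()   # whitespace-only split; punctuation stays attached to the word
counts = Counter(tokens)
print(counts)           # 'negligence,', 'negligence.' and 'negligence' are distinct keys
```

To merge these variants, the mapper would need to normalize tokens (e.g. lowercase them and strip punctuation) before emitting them.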
The steps above ran MapReduce from the command line. IBM BigInsights also provides a web-based alternative: open the Applications tab and switch to Manage to see the predefined applications. Under Test there is a WordCount application; open it and click "Deploy".
Then switch to Run, where the WordCount application now appears.
Select WordCount, enter the directory containing the file to count and an output directory, then click Run to start the job.
Similarly, the HDFS file system can also be managed through the web interface, including creating, deleting, and modifying directories and files.
Opening the JobTracker in a browser (http://192.168.133.135:50030/jobtracker.jsp) shows the most recently run MapReduce jobs; clicking a Job ID shows more detail.
The JobTracker is a master service: after Hadoop starts, the JobTracker accepts jobs, schedules each job's sub-tasks to run on TaskTrackers, monitors them, and reruns any task that fails.
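The monitor-and-rerun behavior can be sketched conceptually as follows (a toy Python illustration only, not Hadoop's actual implementation; real scheduling across TaskTrackers, data locality, and speculative execution are far more involved):

```python
def run_with_retries(tasks, max_attempts=4):
    """Run each named task; rerun a task if it fails,
    giving up on the whole job after max_attempts failures of one task."""
    results = {}
    for name, task in tasks.items():
        for attempt in range(1, max_attempts + 1):
            try:
                results[name] = task()
                break                   # task succeeded, move to the next one
            except Exception:
                if attempt == max_attempts:
                    raise               # job fails after repeated task failures
    return results
```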