This article looks at what Distinct Count is used for in practice and how to implement it in Hive, Pig, and Spark.
Hive
In big-data scenarios, one of the most important reporting metrics is UV (Unique Visitor), i.e. the number of distinct users within a time window. For example, to see the per-app user counts for a given week, the HiveQL looks like:
select app, count(distinct uid) as uv
from log_table
where week_cal = '2016-03-27'
group by app
Pig
Similarly, in Pig:
-- all users
define DISTINCT_COUNT(A, a) returns dist {
    B = foreach $A generate $a;
    unique_B = distinct B;
    C = group unique_B all;
    $dist = foreach C generate SIZE(unique_B);
}
A = load '/path/to/data' using PigStorage() as (app, uid);
B = DISTINCT_COUNT(A, uid);

-- per app
A = load '/path/to/data' using PigStorage() as (app, uid);
B = distinct A;
C = group B by app;
D = foreach C generate group as app, COUNT($1) as uv;
-- suitable for small-cardinality scenarios
D = foreach C generate group as app, SIZE($1) as uv;
DataFu provides Pig with a cardinality-estimation UDF, datafu.pig.stats.HyperLogLogPlusPlus, which implements the HyperLogLog++ algorithm and gives a much faster, approximate Distinct Count:
define HyperLogLogPlusPlus datafu.pig.stats.HyperLogLogPlusPlus();
A = load '/path/to/data' using PigStorage() as (app, uid);
B = group A by app;
C = foreach B generate group as app, HyperLogLogPlusPlus($1) as uv;
Spark
In Spark, after loading the data into an RDD, the per-app Distinct Count can be computed with a chain of transformations (map, distinct, reduceByKey):
rdd.map { row => (row.app, row.uid) }
  .distinct()
  .map { line => (line._1, 1) }
  .reduceByKey(_ + _)

// or
rdd.map { row => (row.app, row.uid) }
  .distinct()
  .mapValues { _ => 1 }
  .reduceByKey(_ + _)

// or
rdd.map { row => (row.app, row.uid) }
  .distinct()
  .map(_._1)
  .countByValue()
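As a variation, the deduplication and the per-app count can be fused into a single shuffle by aggregating a set of uids per app. This is only a sketch under the same assumptions as above (records exposing String-typed app and uid fields), and it only suits modest per-app cardinalities, since each set is held in memory:

import scala.collection.mutable

// Sketch: build a per-app set of uids in one aggregation, then take its size.
val uvByApp = rdd
  .map { row => (row.app, row.uid) }
  .aggregateByKey(mutable.HashSet.empty[String])(
    (set, uid) => set += uid,   // within a partition: add the uid to the app's set
    (s1, s2) => s1 ++= s2       // across partitions: merge the two sets
  )
  .mapValues(_.size)            // UV per app = number of distinct uids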
Spark also provides an API for approximate Distinct Count:
rdd.map { row => (row.app, row.uid) }
  .countApproxDistinctByKey(0.001)
The implementation is based on the HyperLogLog algorithm:
The algorithm used is based on streamlib's implementation of "HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardinality Estimation Algorithm", available here.
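The same estimator is also available for a non-keyed total as countApproxDistinct; the argument is the targeted relative standard deviation, so smaller values trade memory for accuracy. A minimal sketch, assuming the rdd above and a roughly 1% target:

// Exact total UV: requires a full shuffle to deduplicate uids.
val exactUv = rdd.map(_.uid).distinct().count()

// Approximate total UV via HyperLogLog, targeting ~1% relative standard deviation.
val approxUv = rdd.map(_.uid).countApproxDistinct(0.01)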
或者,將Schema化的RDD轉成DataFrame后,registerTempTable然后執行sql命令亦可:
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._   // needed for rdd.toDF()

val df = rdd.toDF()
df.registerTempTable("app_table")
val appUsers = sqlContext.sql("select app, count(distinct uid) as uv from app_table group by app")
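The DataFrame API can also express both the exact and the approximate forms without going through SQL text. A sketch, assuming the df defined above; on newer Spark versions the HyperLogLog-based aggregate is exposed as approx_count_distinct (on 1.x it was called approxCountDistinct):

import org.apache.spark.sql.functions.{countDistinct, approx_count_distinct}

// Exact distinct count of uid per app.
val exact = df.groupBy("app").agg(countDistinct("uid").as("uv"))

// HyperLogLog++-based estimate, targeting ~1% relative standard deviation.
val approx = df.groupBy("app").agg(approx_count_distinct("uid", 0.01).as("uv"))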