您好,登錄后才能下訂單哦!
Topics in this installment:
1. Handling empty RDDs in Spark Streaming
2. Ways to stop a StreamingContext
A Spark Streaming application generates RDDs according to the batch duration we set. A generated RDD may contain partitions with no data, yet foreachPartition still executes: it acquires compute resources and runs the computation anyway, wasting cluster resources. Empty RDDs should therefore be filtered out at runtime; see the following code for reference:
package com.dt.spark.sparkstreaming

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object OnlineForeachRDD2DB {
  def main(args: Array[String]) {
    val conf = new SparkConf()             // Create the SparkConf object
    conf.setAppName("OnlineForeachRDD2DB") // Set the application name, visible in the monitoring UI while the program runs
    conf.setMaster("spark://Master:7077")  // Run the program against the Spark cluster

    /**
     * Set the batch duration to control how frequently jobs are generated,
     * and create the entry point for Spark Streaming execution.
     */
    val ssc = new StreamingContext(conf, Seconds(300))

    val lines = ssc.socketTextStream("Master", 9999)
    val words = lines.flatMap(line => line.split(" "))
    val wordCounts = words.map(word => (word, 1)).reduceByKey(_ + _)

    wordCounts.foreachRDD { rdd =>
      /**
       * What goes wrong when rdd is empty?
       * Even with no elements, foreachPartition still runs, still performs the
       * database write (or the write to HDFS), and still acquires compute
       * resources to do the work -- a pure waste of resources, so empty rdds
       * must be filtered out.
       *
       * For example, rdd.count() > 0 works, but rdd.count() triggers a job;
       * rdd.isEmpty() also triggers a job, because of the take(1):
       *   def isEmpty(): Boolean = withScope {
       *     partitions.length == 0 || take(1).length == 0
       *   }
       *
       * rdd.partitions.isEmpty only checks whether length equals 0, i.e.
       * whether any partitions exist at all:
       *   def isEmpty: Boolean = { length == 0 }
       * Note: rdd.isEmpty() and rdd.partitions.isEmpty are two different concepts.
       */
      if (rdd.partitions.length > 0) {
        rdd.foreachPartition { partitionOfRecords =>
          if (partitionOfRecords.hasNext) { // Check whether this partition actually contains data
            val connection = ConnectionPool.getConnection()
            partitionOfRecords.foreach(record => {
              val sql = "insert into streaming_itemcount(item,rcount) values('" + record._1 + "'," + record._2 + ")"
              val stmt = connection.createStatement()
              stmt.executeUpdate(sql)
              stmt.close()
            })
            ConnectionPool.returnConnection(connection)
          }
        }
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
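To see the difference concretely: rdd.partitions reflects driver-side metadata only, while rdd.isEmpty() actually touches the data. A minimal sketch, assuming a live SparkContext named sc (the empty RDD is purely illustrative):

val rdd = sc.parallelize(Seq.empty[Int], numSlices = 4)
println(rdd.partitions.length) // 4 -- partitions exist; no job is launched
println(rdd.isEmpty())         // true -- launches a job, because it calls take(1)

In other words, the rdd.partitions.length > 0 guard above is free but coarse (an RDD can have partitions that are all empty), which is why the code also checks partitionOfRecords.hasNext inside each partition.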
2. Ways to stop a Spark Streaming application
The first way stops the application outright, regardless of whether the received data has finished processing.
The second way stops it only after all received data has been fully processed; this is the way generally used.
The first way:
/**
 * Stop the execution of the streams immediately (does not wait for all received data
 * to be processed). By default, if `stopSparkContext` is not specified, the underlying
 * SparkContext will also be stopped. This implicit behavior can be configured using the
 * SparkConf configuration spark.streaming.stopSparkContextByDefault.
 *
 * @param stopSparkContext If true, stops the associated SparkContext. The underlying SparkContext
 *                         will be stopped regardless of whether this StreamingContext has been
 *                         started.
 */
def stop(
    stopSparkContext: Boolean = conf.getBoolean("spark.streaming.stopSparkContextByDefault", true)
  ): Unit = synchronized {
  stop(stopSparkContext, false)
}
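Invoking the first way is a single call; a minimal sketch (note that by default it also stops the SparkContext):

ssc.stop()                         // stop immediately; also stops the SparkContext by default
ssc.stop(stopSparkContext = false) // stop the streams but keep the SparkContext for reuse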
The second way:
/**
 * Stop the execution of the streams, with option of ensuring all received data
 * has been processed.
 *
 * @param stopSparkContext if true, stops the associated SparkContext. The underlying SparkContext
 *                         will be stopped regardless of whether this StreamingContext has been
 *                         started.
 * @param stopGracefully if true, stops gracefully by waiting for the processing of all
 *                       received data to be completed
 */
def stop(stopSparkContext: Boolean, stopGracefully: Boolean): Unit = {
  var shutdownHookRefToRemove: AnyRef = null
  if (AsynchronousListenerBus.withinListenerThread.value) {
    throw new SparkException("Cannot stop StreamingContext within listener thread of" +
      " AsynchronousListenerBus")
  }
  synchronized {
    try {
      state match {
        case INITIALIZED =>
          logWarning("StreamingContext has not been started yet")
        case STOPPED =>
          logWarning("StreamingContext has already been stopped")
        case ACTIVE =>
          scheduler.stop(stopGracefully)
          // Removing the streamingSource to de-register the metrics on stop()
          env.metricsSystem.removeSource(streamingSource)
          uiTab.foreach(_.detach())
          StreamingContext.setActiveContext(null)
          waiter.notifyStop()
          if (shutdownHookRef != null) {
            shutdownHookRefToRemove = shutdownHookRef
            shutdownHookRef = null
          }
          logInfo("StreamingContext stopped successfully")
      }
    } finally {
      // The state should always be Stopped after calling `stop()`, even if we haven't started yet
      state = STOPPED
    }
  }
  if (shutdownHookRefToRemove != null) {
    ShutdownHookManager.removeShutdownHook(shutdownHookRefToRemove)
  }
  // Even if we have already stopped, we still need to attempt to stop the SparkContext because
  // a user might stop(stopSparkContext = false) and then call stop(stopSparkContext = true).
  if (stopSparkContext) sc.stop()
}
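In practice the graceful variant is usually paired with a bounded wait. A minimal sketch, assuming a socket source on localhost:9999 and an illustrative 10-minute timeout:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object GracefulStopDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("GracefulStopDemo").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(5))

    ssc.socketTextStream("localhost", 9999).count().print()
    ssc.start()

    // Block for at most 10 minutes; awaitTerminationOrTimeout returns false on timeout.
    if (!ssc.awaitTerminationOrTimeout(10 * 60 * 1000L)) {
      // The second way: wait until all received data is processed, then stop everything.
      ssc.stop(stopSparkContext = true, stopGracefully = true)
    }
  }
}

Alternatively, setting the configuration spark.streaming.stopGracefullyOnShutdown to true makes the JVM shutdown hook take the graceful stop path automatically.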