This article walks through how Spark 3.0.1 integrates with Delta 0.7.0 and how Delta implements its custom SQL on top of Spark.
When using Delta, we have to specify its particular data source format, for example:

```scala
val data = spark.range(5, 10)
data.write.format("delta").mode("overwrite").save("/tmp/delta-table")

val df = spark.read.format("delta").load("/tmp/delta-table")
df.show()
```
So how does this delta data source get plugged into Spark? Let's analyze it.
Go straight to DataStreamWriter, where the data source class is resolved:

```scala
val cls = DataSource.lookupDataSource(source, df.sparkSession.sessionState.conf)
val disabledSources =
  df.sparkSession.sqlContext.conf.disabledV2StreamingWriters.split(",")
val useV1Source = disabledSources.contains(cls.getCanonicalName) ||
  // file source v2 does not support streaming yet.
  classOf[FileDataSourceV2].isAssignableFrom(cls)
```
The key is the DataSource.lookupDataSource method (excerpt):

```scala
def lookupDataSource(provider: String, conf: SQLConf): Class[_] = {
  val provider1 = backwardCompatibilityMap.getOrElse(provider, provider) match {
    case name if name.equalsIgnoreCase("orc") &&
        conf.getConf(SQLConf.ORC_IMPLEMENTATION) == "native" =>
      classOf[OrcDataSourceV2].getCanonicalName
    case name if name.equalsIgnoreCase("orc") &&
        conf.getConf(SQLConf.ORC_IMPLEMENTATION) == "hive" =>
      "org.apache.spark.sql.hive.orc.OrcFileFormat"
    case "com.databricks.spark.avro" if conf.replaceDatabricksSparkAvroEnabled =>
      "org.apache.spark.sql.avro.AvroFileFormat"
    case name => name
  }
  val provider2 = s"$provider1.DefaultSource"
  val loader = Utils.getContextOrSparkClassLoader
  val serviceLoader = ServiceLoader.load(classOf[DataSourceRegister], loader)
  // ...
```
This uses ServiceLoader.load, which is Java's SPI (Service Provider Interface) mechanism; the details are well documented elsewhere, so let's jump straight to the key part, ServiceLoader.LazyIterator:

```java
private class LazyIterator implements Iterator<S> {
    Class<S> service;
    ClassLoader loader;
    Enumeration<URL> configs = null;
    Iterator<String> pending = null;
    String nextName = null;

    private LazyIterator(Class<S> service, ClassLoader loader) {
        this.service = service;
        this.loader = loader;
    }

    private boolean hasNextService() {
        if (nextName != null) {
            return true;
        }
        if (configs == null) {
            try {
                String fullName = PREFIX + service.getName();
                if (loader == null)
                    configs = ClassLoader.getSystemResources(fullName);
                else
                    configs = loader.getResources(fullName);
            } catch (IOException x) {
                fail(service, "Error locating configuration files", x);
            }
        }
        // ...
```
The loader.getResources call looks up a specific file on the classpath and returns every match it finds. For Spark, the service class is DataSourceRegister, so the file being looked up is META-INF/services/org.apache.spark.sql.sources.DataSourceRegister. Spark's own built-in data source implementations are in fact loaded the same way.
Delta's META-INF/services/org.apache.spark.sql.sources.DataSourceRegister file contains org.apache.spark.sql.delta.sources.DeltaDataSource. Note that DeltaDataSource is built on the DataSource v1 API. This explains the basic mechanism by which the Delta data source hooks into Spark.
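The SPI lookup can be demonstrated outside Spark with plain Java. This is a minimal sketch with made-up names (Greeter, HelloGreeter; none of these are Spark or Delta APIs): it writes a META-INF/services entry into a temp directory, puts that directory on a URLClassLoader, and lets ServiceLoader discover the implementation, just as Spark discovers DataSourceRegister implementations:

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

public class SpiDemo {
    // The service interface, playing the role of DataSourceRegister.
    public interface Greeter { String greet(); }

    // A provider implementation, playing the role of DeltaDataSource.
    public static class HelloGreeter implements Greeter {
        public String greet() { return "hello from SPI"; }
    }

    // Register HelloGreeter under META-INF/services and discover it via ServiceLoader.
    public static List<String> discover() {
        try {
            Path dir = Files.createTempDirectory("spi-demo");
            Path services = dir.resolve("META-INF").resolve("services");
            Files.createDirectories(services);
            // File name = service interface's binary name; content = provider class name.
            Files.write(services.resolve(Greeter.class.getName()),
                        List.of(HelloGreeter.class.getName()));

            // A classloader whose classpath includes the temp directory.
            ClassLoader loader = new URLClassLoader(
                new URL[] { dir.toUri().toURL() }, SpiDemo.class.getClassLoader());

            List<String> greetings = new ArrayList<>();
            for (Greeter g : ServiceLoader.load(Greeter.class, loader)) {
                greetings.add(g.greet());
            }
            return greetings;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(discover());
    }
}
```

A real jar like delta-core ships the META-INF/services file inside the jar instead of a temp directory, but the lookup path is identical.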
Next, let's start from the SparkSession configuration that Delta requires:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession
  .builder()
  .appName("...")
  .master("...")
  .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
  .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
  .getOrCreate()
```
Note the line config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension"). The Spark configuration documentation explains spark.sql.extensions as follows:
A comma-separated list of classes that implement Function1[SparkSessionExtensions, Unit] used to configure Spark Session extensions. The classes must have a no-args constructor. If multiple extensions are specified, they are applied in the specified order. For the case of rules and planner strategies, they are applied in the specified order. For the case of parsers, the last parser is used and each parser can delegate to its predecessor. For the case of function name conflicts, the last registered function name is used.
In short, it is a hook for extending the SparkSession, allowing you to extend Spark SQL's parsing and logical planning; the feature has existed since Spark 2.2.0.
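To make the mechanism concrete, here is a toy sketch (in Java; the names Extensions, MyExtension, and the string-based "parsers" are all made up for illustration and are not Spark APIs): Spark instantiates each class listed in spark.sql.extensions via its no-args constructor and calls it with a SparkSessionExtensions object, on which the extension registers builders.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.UnaryOperator;

// Toy stand-in for SparkSessionExtensions: it only collects parser builders.
class Extensions {
    final List<UnaryOperator<String>> parserBuilders = new ArrayList<>();

    void injectParser(UnaryOperator<String> builder) {
        parserBuilders.add(builder);
    }

    // Mirrors buildParser's foldLeft: each injected builder wraps the parser so far.
    String buildParser(String initial) {
        String parser = initial;
        for (UnaryOperator<String> b : parserBuilders) {
            parser = b.apply(parser);
        }
        return parser;
    }
}

// Toy stand-in for DeltaSparkSessionExtension: a no-args class applied to Extensions.
class MyExtension implements Consumer<Extensions> {
    @Override
    public void accept(Extensions extensions) {
        extensions.injectParser(inner -> "MyParser(" + inner + ")");
    }
}

public class ExtensionDemo {
    @SuppressWarnings("unchecked")
    public static String run() {
        try {
            Extensions extensions = new Extensions();
            // Spark does roughly this for every class named in spark.sql.extensions:
            // instantiate via the no-args constructor, then apply to the extensions object.
            Consumer<Extensions> ext = (Consumer<Extensions>)
                Class.forName("MyExtension").getDeclaredConstructor().newInstance();
            ext.accept(extensions);
            return extensions.buildParser("SparkSqlParser");
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

The result, "MyParser(SparkSqlParser)", previews exactly what happens below: the injected parser ends up wrapping the built-in one.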
Take a look at the io.delta.sql.DeltaSparkSessionExtension class:

```scala
class DeltaSparkSessionExtension extends (SparkSessionExtensions => Unit) {
  override def apply(extensions: SparkSessionExtensions): Unit = {
    extensions.injectParser { (session, parser) =>
      new DeltaSqlParser(parser)
    }
    extensions.injectResolutionRule { session =>
      new DeltaAnalysis(session, session.sessionState.conf)
    }
    extensions.injectCheckRule { session =>
      new DeltaUnsupportedOperationsCheck(session)
    }
    extensions.injectPostHocResolutionRule { session =>
      new PreprocessTableUpdate(session.sessionState.conf)
    }
    extensions.injectPostHocResolutionRule { session =>
      new PreprocessTableMerge(session.sessionState.conf)
    }
    extensions.injectPostHocResolutionRule { session =>
      new PreprocessTableDelete(session.sessionState.conf)
    }
  }
}
```
The DeltaSqlParser class provides Delta's own SQL syntax. What exactly does it support, and how? Look at the extensions.injectParser code:

```scala
private[this] val parserBuilders = mutable.Buffer.empty[ParserBuilder]

private[sql] def buildParser(
    session: SparkSession,
    initial: ParserInterface): ParserInterface = {
  parserBuilders.foldLeft(initial) { (parser, builder) =>
    builder(session, parser)
  }
}

/**
 * Inject a custom parser into the [[SparkSession]]. Note that the builder is passed a session
 * and an initial parser. The latter allows for a user to create a partial parser and to delegate
 * to the underlying parser for completeness. If a user injects more parsers, then the parsers
 * are stacked on top of each other.
 */
def injectParser(builder: ParserBuilder): Unit = {
  parserBuilders += builder
}
```
We can see that buildParser constructs the DeltaSqlParser we injected, and in doing so DeltaSqlParser's delegate field is assigned the value of initial. buildParser itself is called by BaseSessionStateBuilder:

```scala
/**
 * Parser that extracts expressions, plans, table identifiers etc. from SQL texts.
 *
 * Note: this depends on the `conf` field.
 */
protected lazy val sqlParser: ParserInterface = {
  extensions.buildParser(session, new SparkSqlParser(conf))
}
```
So the actual argument bound to initial is a SparkSqlParser; in other words, SparkSqlParser becomes DeltaSqlParser's delegate. Now look at DeltaSqlParser's parsePlan method:

```scala
override def parsePlan(sqlText: String): LogicalPlan = parse(sqlText) { parser =>
  builder.visit(parser.singleStatement()) match {
    case plan: LogicalPlan => plan
    case _ => delegate.parsePlan(sqlText)
  }
}
```
This is where ANTLR4 comes in: if DeltaSqlParser's own grammar can parse the statement into a logical plan, it does so; otherwise it delegates to SparkSqlParser. The parsing proper is the job of the DeltaSqlAstBuilder class:
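This try-your-own-grammar-then-delegate pattern can be sketched without ANTLR or Spark. In the toy Java example below, all names (SqlParser, BaseParser, DeltaLikeParser) are made up: the outer parser recognizes only the statements its grammar knows (here, crudely, anything starting with VACUUM) and hands everything else to the parser it wraps, mirroring DeltaSqlParser.parsePlan falling back to delegate.parsePlan:

```java
import java.util.Locale;

public class ParserChainDemo {
    interface SqlParser {
        String parsePlan(String sqlText);
    }

    // Stand-in for SparkSqlParser: the fallback that handles everything.
    static class BaseParser implements SqlParser {
        public String parsePlan(String sqlText) {
            return "BasePlan(" + sqlText + ")";
        }
    }

    // Stand-in for DeltaSqlParser: recognizes only its own statements
    // and delegates the rest, like DeltaSqlParser.parsePlan does.
    static class DeltaLikeParser implements SqlParser {
        private final SqlParser delegate;

        DeltaLikeParser(SqlParser delegate) { this.delegate = delegate; }

        public String parsePlan(String sqlText) {
            String upper = sqlText.trim().toUpperCase(Locale.ROOT);
            if (upper.startsWith("VACUUM")) {
                return "VacuumTableCommand(" + sqlText + ")";
            }
            return delegate.parsePlan(sqlText); // not our grammar: delegate
        }
    }

    public static void main(String[] args) {
        SqlParser parser = new DeltaLikeParser(new BaseParser());
        System.out.println(parser.parsePlan("VACUUM '/tmp/delta-table'"));
        System.out.println(parser.parsePlan("SELECT 1"));
    }
}
```

The real DeltaSqlParser makes the same decision via its generated ANTLR visitor rather than string matching: the #passThrough rule returns null for statements outside Delta's grammar, which triggers the delegation.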
```scala
class DeltaSqlAstBuilder extends DeltaSqlBaseBaseVisitor[AnyRef] {

  /**
   * Create a [[VacuumTableCommand]] logical plan. Example SQL:
   * {{{
   *   VACUUM ('/path/to/dir' | delta.`/path/to/dir`) [RETAIN number HOURS] [DRY RUN];
   * }}}
   */
  override def visitVacuumTable(ctx: VacuumTableContext): AnyRef = withOrigin(ctx) {
    VacuumTableCommand(
      Option(ctx.path).map(string),
      Option(ctx.table).map(visitTableIdentifier),
      Option(ctx.number).map(_.getText.toDouble),
      ctx.RUN != null)
  }

  override def visitDescribeDeltaDetail(
      ctx: DescribeDeltaDetailContext): LogicalPlan = withOrigin(ctx) {
    DescribeDeltaDetailCommand(
      Option(ctx.path).map(string),
      Option(ctx.table).map(visitTableIdentifier))
  }

  override def visitDescribeDeltaHistory(
      ctx: DescribeDeltaHistoryContext): LogicalPlan = withOrigin(ctx) {
    DescribeDeltaHistoryCommand(
      Option(ctx.path).map(string),
      Option(ctx.table).map(visitTableIdentifier),
      Option(ctx.limit).map(_.getText.toInt))
  }

  override def visitGenerate(ctx: GenerateContext): LogicalPlan = withOrigin(ctx) {
    DeltaGenerateCommand(
      modeName = ctx.modeName.getText,
      tableId = visitTableIdentifier(ctx.table))
  }

  override def visitConvert(ctx: ConvertContext): LogicalPlan = withOrigin(ctx) {
    ConvertToDeltaCommand(
      visitTableIdentifier(ctx.table),
      Option(ctx.colTypeList).map(colTypeList => StructType(visitColTypeList(colTypeList))),
      None)
  }

  override def visitSingleStatement(ctx: SingleStatementContext): LogicalPlan = withOrigin(ctx) {
    visit(ctx.statement).asInstanceOf[LogicalPlan]
  }

  protected def visitTableIdentifier(ctx: QualifiedNameContext): TableIdentifier = withOrigin(ctx) {
    ctx.identifier.asScala match {
      case Seq(tbl) => TableIdentifier(tbl.getText)
      case Seq(db, tbl) => TableIdentifier(tbl.getText, Some(db.getText))
      case _ => throw new ParseException(s"Illegal table name ${ctx.getText}", ctx)
    }
  }

  override def visitPassThrough(ctx: PassThroughContext): LogicalPlan = null
}
```
So where do methods like visitVacuumTable and visitDescribeDeltaDetail come from? Look at DeltaSqlBase.g4:

```antlr
singleStatement
    : statement EOF
    ;

// If you add keywords here that should not be reserved, add them to 'nonReserved' list.
statement
    : VACUUM (path=STRING | table=qualifiedName)
        (RETAIN number HOURS)? (DRY RUN)?                               #vacuumTable
    | (DESC | DESCRIBE) DETAIL (path=STRING | table=qualifiedName)      #describeDeltaDetail
    | GENERATE modeName=identifier FOR TABLE table=qualifiedName        #generate
    | (DESC | DESCRIBE) HISTORY (path=STRING | table=qualifiedName)
        (LIMIT limit=INTEGER_VALUE)?                                    #describeDeltaHistory
    | CONVERT TO DELTA table=qualifiedName
        (PARTITIONED BY '(' colTypeList ')')?                           #convert
    | .*?                                                               #passThrough
    ;
```
This is ANTLR4 grammar syntax; consult the ANTLR documentation if it is unfamiliar. Note that both Spark and Delta use ANTLR's visitor pattern: each labeled alternative (#vacuumTable, #convert, and so on) yields a corresponding visitXxx method in the generated base visitor.
Now compare this with the operations listed in Delta's documentation: Vacuum, Describe History, Describe Detail, Generate, Convert to Delta, and Convert a Delta table back to a Parquet table.
They match up one to one: the Vacuum operation corresponds to the vacuumTable rule, Convert to Delta corresponds to the convert rule, and so on.
In fact, since Delta extends Spark through these public extension points, we can follow the same approach to extend Spark with our own SQL syntax.