Many newcomers are unclear about how to perform row-column conversion in Spark, that is, converting between wide and narrow tables. To help solve this problem, this article walks through it in detail; anyone with this need can learn from it, and hopefully you will gain something.
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession, SQLContext, Row, functions as F
from pyspark.sql.functions import array, col, explode, struct, lit

conf = SparkConf().setAppName("test").setMaster("local[*]")
sc = SparkContext(conf=conf)
spark = SQLContext(sc)

# df is the source DataFrame; the columns listed in `by` are kept as-is and
# excluded from the conversion
def df_columns_to_line(df, by):
    # Cast every column to string so all melted values share one type
    df_a = df.select([col(c).cast("string") for c in df.columns])
    # Split the remaining columns into names and type descriptions
    cols, dtypes = zip(*((c, t) for (c, t) in df_a.dtypes if c not in by))
    # Spark SQL arrays support only homogeneous element types
    assert len(set(dtypes)) == 1, "All columns have to be of the same type"
    # Create and explode an array of (column_name, column_value) structs
    kvs = explode(array([
        struct(lit(c).alias("feature"), col(c).alias("value")) for c in cols
    ])).alias("kvs")
    return df_a.select(by + [kvs]).select(by + ["kvs.feature", "kvs.value"])

df = sc.parallelize([(1, 0.0, 0.6), (1, 0.6, 0.7)]).toDF(["A", "col_1", "col_2"])
df_row_data = df_columns_to_line(df, ["A"])
df.show()
df_row_data.show()
>>> df.show()
+---+-----+-----+
|  A|col_1|col_2|
+---+-----+-----+
|  1|  0.0|  0.6|
|  1|  0.6|  0.7|
+---+-----+-----+

>>> df_row_data.show()
+---+-------+-----+
|  A|feature|value|
+---+-------+-----+
|  1|  col_1|  0.0|
|  1|  col_2|  0.6|
|  1|  col_1|  0.6|
|  1|  col_2|  0.7|
+---+-------+-----+
Note that feature and value are the two new column names defined for the result after the original multiple column names are converted into row data.
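As an aside, the same wide-to-narrow melt can also be written with Spark SQL's built-in stack() generator. Below is a minimal sketch against the example DataFrame above (df_melted is just an illustrative name); like the array/explode approach, stack() requires the value columns to share one type.

# Equivalent melt using the stack() generator: stack(n, name1, value1, ...)
# emits n rows of (name, value) pairs per input row
df_melted = df.selectExpr(
    "A",
    "stack(2, 'col_1', col_1, 'col_2', col_2) as (feature, value)"
)
df_melted.show()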
# Collect the distinct feature names to use as pivot values
df_features = df_row_data.select('feature').distinct().collect()
# Use a list comprehension rather than map() so this also works on Python 3,
# where map() returns an iterator instead of a list
features = [r.feature for r in df_features]
df_column_data = df_row_data.groupby("A").pivot('feature', features) \
    .agg(F.first('value', ignorenulls=True))
df_column_data.show()
+---+-----+-----+
|  A|col_2|col_1|
+---+-----+-----+
|  1|  0.6|  0.0|
+---+-----+-----+
Converting rows back to columns is comparatively simple: we transform the result above directly. The key is the use of the pivot function.
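If the distinct feature values are not collected in advance, pivot() can also infer them itself, at the cost of an extra Spark job to compute the distinct values; a minimal sketch (df_column_data2 is just an illustrative name):

# Without an explicit values list, Spark first runs a job to find the
# distinct values of 'feature', then pivots on them
df_column_data2 = df_row_data.groupBy("A").pivot("feature") \
    .agg(F.first("value", ignorenulls=True))
df_column_data2.show()

Passing the values explicitly, as in the main example, skips that extra job and is therefore cheaper when the pivot values are already known.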
Did the above content help you? If you would like a deeper understanding of the relevant knowledge or to read more related articles, please follow the 億速云 industry news channel. Thank you for supporting 億速云.