This article introduces DataSet programming in Apache Flink. The walkthrough is fairly detailed; interested readers can use it as a reference and will hopefully find it helpful.
DataSet programs in Flink are regular programs that implement transformations on data sets (e.g. filtering, mapping, joining, grouping). A data set is initially created from a source (e.g. by reading a file or loading a local collection), and results are returned through a sink, which may write to a local (or distributed) file system or print to the terminal. Flink programs run in a variety of contexts, standalone or embedded in other programs, and execution can happen in a local JVM or on a cluster.
Source ===> Flink(transformation)===> Sink
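The Source → transformation → Sink flow can be pictured, outside of Flink, with plain Scala collections. This is only an analogy for the shape of a DataSet program, not Flink API code:

```scala
object PipelineShapeDemo {
  def main(args: Array[String]): Unit = {
    // Source: a local collection standing in for a data source
    val source = List(1, 2, 3, 4, 5)
    // Transformation: filter and map, mirroring DataSet operators
    val transformed = source.filter(_ % 2 == 0).map(_ * 10)
    // Sink: write to the console, like DataSet.print()
    transformed.foreach(println)
  }
}
```

The real Flink operators have the same flavor but run lazily and, on a cluster, in parallel.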
readTextFile(path) / TextInputFormat - Reads files line-wise and returns them as Strings.

readTextFileWithValue(path) / TextValueInputFormat - Reads files line-wise and returns them as StringValues. StringValues are mutable strings.

readCsvFile(path) / CsvInputFormat - Parses files of comma (or another char) delimited fields. Returns a DataSet of tuples or POJOs. Supports the basic Java types and their Value counterparts as field types.

readFileOfPrimitives(path, Class) / PrimitiveInputFormat - Parses files of new-line (or another char sequence) delimited primitive data types such as String or Integer.

readFileOfPrimitives(path, delimiter, Class) / PrimitiveInputFormat - Parses files of new-line (or another char sequence) delimited primitive data types such as String or Integer using the given delimiter.
fromCollection(Collection)
fromCollection(Iterator, Class)
fromElements(T ...)
fromParallelCollection(SplittableIterator, Class)
generateSequence(from, to)
Collection-based sources are mostly used in development environments or while learning, because they make it easy to fabricate whatever data we need. Below, a collection source is implemented in both Java and Scala; the data is simply the numbers 1 to 10.
```java
import org.apache.flink.api.java.ExecutionEnvironment;

import java.util.ArrayList;
import java.util.List;

public class JavaDataSetSourceApp {

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment executionEnvironment = ExecutionEnvironment.getExecutionEnvironment();
        fromCollection(executionEnvironment);
    }

    public static void fromCollection(ExecutionEnvironment env) throws Exception {
        List<Integer> list = new ArrayList<Integer>();
        for (int i = 1; i <= 10; i++) {
            list.add(i);
        }
        env.fromCollection(list).print();
    }
}
```
```scala
import org.apache.flink.api.scala.ExecutionEnvironment

object DataSetSourceApp {

  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    fromCollection(env)
  }

  def fromCollection(env: ExecutionEnvironment): Unit = {
    import org.apache.flink.api.scala._
    val data = 1 to 10
    env.fromCollection(data).print()
  }
}
```
In the local folder E:\test\input there is a file hello.txt with the following content:
hello world welcome hello welcome
```scala
def main(args: Array[String]): Unit = {
  val env = ExecutionEnvironment.getExecutionEnvironment
  //fromCollection(env)
  textFile(env)
}

def textFile(env: ExecutionEnvironment): Unit = {
  val filePathFilter = "E:/test/input/hello.txt"
  env.readTextFile(filePathFilter).print()
}
```
readTextFile takes two parameters: first, the file path (local, or e.g. hdfs://host:port/file/path); second, the charset (UTF-8 by default if omitted).
Can a folder be specified? Let's pass a folder path directly:
```scala
def main(args: Array[String]): Unit = {
  val env = ExecutionEnvironment.getExecutionEnvironment
  //fromCollection(env)
  textFile(env)
}

def textFile(env: ExecutionEnvironment): Unit = {
  //val filePathFilter = "E:/test/input/hello.txt"
  val filePathFilter = "E:/test/input"
  env.readTextFile(filePathFilter).print()
}
```
The program runs and prints the expected result, so readTextFile also accepts a folder: it traverses the folder and reads every file inside it.
```java
public static void main(String[] args) throws Exception {
    ExecutionEnvironment executionEnvironment = ExecutionEnvironment.getExecutionEnvironment();
    // fromCollection(executionEnvironment);
    textFile(executionEnvironment);
}

public static void textFile(ExecutionEnvironment env) throws Exception {
    String filePath = "E:/test/input/hello.txt";
    // String filePath = "E:/test/input";
    env.readTextFile(filePath).print();
}
```
The same holds in Java: you can specify either a file or a folder, and if you specify a folder, every file inside it is traversed and read.
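What readTextFile does when given a directory can be mimicked with the JDK's file APIs. This is only an illustrative sketch (it creates its own temp directory rather than E:/test/input, and it assumes Scala 2.13 for scala.jdk.CollectionConverters), not Flink's implementation:

```scala
import java.nio.file.Files
import scala.jdk.CollectionConverters._

object ReadDirectoryDemo {
  def main(args: Array[String]): Unit = {
    // Stand-in for E:/test/input: a temp directory holding two text files
    val dir = Files.createTempDirectory("input")
    Files.write(dir.resolve("hello.txt"), java.util.Arrays.asList("hello world welcome"))
    Files.write(dir.resolve("other.txt"), java.util.Arrays.asList("hello welcome"))

    // Enumerate the regular files in the directory and read each line by line,
    // which is essentially what readTextFile does for a directory argument
    val lines = Files.list(dir).iterator().asScala
      .filter(Files.isRegularFile(_))
      .flatMap(p => Files.readAllLines(p).asScala)
      .toList
    lines.foreach(println)
  }
}
```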
Create a CSV file with the following content:
```
name,age,job
Tom,26,cat
Jerry,24,mouse
sophia,30,developer
```
CSV files are read with readCsvFile, whose parameters are:
```scala
filePath: String,
lineDelimiter: String = "\n",
fieldDelimiter: String = ",",       // field delimiter
quoteCharacter: Character = null,
ignoreFirstLine: Boolean = false,   // whether to skip the first line
ignoreComments: String = null,
lenient: Boolean = false,
includedFields: Array[Int] = null,  // which columns of the file to read
pojoFields: Array[String] = null)
```
The code to read the CSV file:
```scala
def csvFile(env: ExecutionEnvironment): Unit = {
  import org.apache.flink.api.scala._
  val filePath = "E:/test/input/people.csv"
  env.readCsvFile[(String, Int, String)](filePath, ignoreFirstLine = true).print()
}
```
To read only the first two columns, specify includedFields:
```scala
env.readCsvFile[(String, Int)](filePath, ignoreFirstLine = true, includedFields = Array(0, 1)).print()
```
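One way to picture what ignoreFirstLine and includedFields do is to hand-parse the same CSV with plain Scala. This is a simplified sketch (no quote-character handling, unlike the real CsvInputFormat):

```scala
object CsvByHand {
  def main(args: Array[String]): Unit = {
    // The people.csv content from above, as lines
    val raw = List(
      "name,age,job",
      "Tom,26,cat",
      "Jerry,24,mouse",
      "sophia,30,developer")
    val fieldDelimiter = ","
    val ignoreFirstLine = true
    val includedFields = Array(0, 1) // keep only the name and age columns

    val rows = (if (ignoreFirstLine) raw.drop(1) else raw)
      .map(_.split(fieldDelimiter))              // split each line into fields
      .map(cols => includedFields.map(cols(_)))  // keep only the selected columns
      .map { case Array(name, age) => (name, age.toInt) }
    rows.foreach(println)
  }
}
```

The Flink reader does the same selection, but lazily and with type conversion driven by the tuple, case class, or POJO you ask for.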
The examples above use a Tuple to specify the record type. How about a custom case class?
```scala
def csvFile(env: ExecutionEnvironment): Unit = {
  import org.apache.flink.api.scala._
  val filePath = "E:/test/input/people.csv"
  // env.readCsvFile[(String, Int, String)](filePath, ignoreFirstLine = true).print()
  // env.readCsvFile[(String, Int)](filePath, ignoreFirstLine = true, includedFields = Array(0, 1)).print()
  env.readCsvFile[MyCaseClass](filePath, ignoreFirstLine = true, includedFields = Array(0, 1)).print()
}

case class MyCaseClass(name: String, age: Int)
```
How about a POJO?
Create a POJO class, People:
```java
public class People {

    private String name;
    private int age;
    private String job;

    @Override
    public String toString() {
        return "People{" +
                "name='" + name + '\'' +
                ", age=" + age +
                ", job='" + job + '\'' +
                '}';
    }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }

    public String getJob() { return job; }
    public void setJob(String job) { this.job = job; }
}
```
```scala
env.readCsvFile[People](filePath, ignoreFirstLine = true, pojoFields = Array("name", "age", "job")).print()
```
```java
public static void csvFile(ExecutionEnvironment env) throws Exception {
    String filePath = "E:/test/input/people.csv";
    DataSource<Tuple2<String, Integer>> types = env.readCsvFile(filePath)
            .ignoreFirstLine()
            .includeFields("11")
            .types(String.class, Integer.class);
    types.print();
}
```
This takes only the first and second columns.
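In the Java API, "11" is a column mask: reading left to right, each character keeps (1) or skips (0) one column. How such a mask maps to column indices can be sketched like this (an illustration of the idea, not Flink's internal code):

```scala
object ColumnMaskDemo {
  def main(args: Array[String]): Unit = {
    // Collect the positions of every '1' in the mask
    def maskToIndices(mask: String): Seq[Int] =
      mask.zipWithIndex.collect { case ('1', i) => i }

    println(maskToIndices("11"))  // keeps the first two columns
    println(maskToIndices("101")) // keeps columns 0 and 2, skips column 1
  }
}
```

So includeFields("11") on the three-column people.csv keeps name and age and drops job.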
Reading into a POJO:
```java
env.readCsvFile(filePath).ignoreFirstLine().pojoType(People.class, "name", "age", "job").print();
```
```scala
import org.apache.flink.configuration.Configuration

def main(args: Array[String]): Unit = {
  val env = ExecutionEnvironment.getExecutionEnvironment
  //fromCollection(env)
  //textFile(env)
  //csvFile(env)
  readRecursiveFiles(env)
}

def readRecursiveFiles(env: ExecutionEnvironment): Unit = {
  val filePath = "E:/test/nested"
  val parameter = new Configuration()
  parameter.setBoolean("recursive.file.enumeration", true)
  env.readTextFile(filePath).withParameters(parameter).print()
}
```
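What recursive.file.enumeration enables can be pictured with the JDK's Files.walk, which visits nested directories. This sketch builds its own temp tree instead of E:/test/nested and is only an illustration of the traversal, not Flink's implementation (assumes Scala 2.13 for scala.jdk.CollectionConverters):

```scala
import java.nio.file.Files
import scala.jdk.CollectionConverters._

object RecursiveEnumerationDemo {
  def main(args: Array[String]): Unit = {
    // Stand-in for E:/test/nested: a root directory with a nested sub-directory
    val root = Files.createTempDirectory("nested")
    val sub = Files.createDirectory(root.resolve("sub"))
    Files.write(root.resolve("top.txt"), java.util.Arrays.asList("top level"))
    Files.write(sub.resolve("deep.txt"), java.util.Arrays.asList("nested level"))

    // Files.walk traverses the whole tree, like recursive.file.enumeration = true;
    // without recursion only top.txt would be found
    val lines = Files.walk(root).iterator().asScala
      .filter(Files.isRegularFile(_))
      .flatMap(p => Files.readAllLines(p).asScala)
      .toList
    lines.foreach(println)
  }
}
```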
```scala
def readCompressionFiles(env: ExecutionEnvironment): Unit = {
  val filePath = "E:/test/my.tar.gz"
  env.readTextFile(filePath).print()
}
```
Compressed files can be read directly. Compression improves space utilization but increases CPU load, so it is a trade-off: tune and pick the approach that fits each situation, because not every optimization delivers the result you want. If the cluster's CPU is already under pressure, reading compressed files is not a good idea.
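The underlying mechanism can be sketched with the JDK's GZIPInputStream: decompression happens while streaming, trading CPU for disk space. This sketch writes and reads a plain gzip text file (a tar archive inside, as in my.tar.gz, would need separate handling) and is an illustration, not Flink's code:

```scala
import java.io._
import java.util.zip.{GZIPInputStream, GZIPOutputStream}
import scala.io.Source

object GzipReadDemo {
  def main(args: Array[String]): Unit = {
    val file = File.createTempFile("demo", ".gz")
    // Write a small gzip-compressed text file
    val out = new OutputStreamWriter(new GZIPOutputStream(new FileOutputStream(file)), "UTF-8")
    out.write("hello world\nwelcome\n")
    out.close()

    // Read it back line by line, decompressing on the fly
    val src = Source.fromInputStream(new GZIPInputStream(new FileInputStream(file)), "UTF-8")
    src.getLines().foreach(println)
    src.close()
    file.delete()
  }
}
```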
That wraps up this look at DataSet programming in Apache Flink. Hopefully the material above was of some help and taught you something new. If you found the article worthwhile, feel free to share it so more people can see it.