Danny Chan / Apache Hudi Committer
Liu Dalong (刘大龙) / Alibaba engineer

Flink + Hudi: Build a Streaming Data Lake Using Apache Flink and Apache Hudi

Agenda
#1 Apache Hudi 101
#2 Flink Hudi Integration
#3 Flink Hudi Use Case
#4 Apache Hudi Roadmap

#1 Apache Hudi 101

Why Data Lake?
- Traditional data warehouse (circa 2015): Teradata, Vertica. MPP databases with coupled compute and storage.
- Cloud-native data warehouse (circa 2018): Redshift, Snowflake, BigQuery. Transactions, but private storage formats.
- Data lake (circa 2021): Hudi, Delta Lake, Iceberg. Transactions on an open format.

Apache Hudi: A Data Lake Platform
Hudi is not just a table format!

Timeline Service
Every operation on the table is recorded as an instant, which consists of an action, a time, and a state. All actions performed on the table at different instants of time construct instantaneous views of the table.
- Actions: commit, delta_commit, clean, compaction, rollback, savepoint
- States: requested, inflight, completed

File Format
A base path holds several partitions; each partition has multiple file groups with unique IDs; each file group has several file slices. A file slice consists of a base file (*.parquet) and log files (*.log.*).

Copy On Write
The incremental data set is cached as an in-memory index. While scanning the base file, the writer looks up the index and merges records where possible. When the latest file slice size hits the threshold, the writer rolls over to a new file group.

Merge On Read
The incremental data set (new data) is always appended to the latest version of the log file. The log file handle rolls over to a new file group automatically when its size hits the threshold (1 GB by default).

#2 Flink Hudi Integration

Flink Writer Pipeline
rowdata-to-hoodie conversion -> BucketAssigner -> StreamWriter (with a coordinator) -> Cleaner. Records are shuffled by record key before the bucket assigner and by bucket ID before the stream writer.

Small File Strategy
The file groups in a partition are divided equally among the bucket assign tasks, and each task applies the small file strategy independently. In the illustration, among the file groups managed by bucket assign task-1, FG-1 and FG-2 would be chosen for writing, while bucket assign task-2 would write to a new file group. [Figure: file group sizes for FG-1 to FG-6 compared against the target file size.]

Write State Machine
1. On start-up, the write task sends its bootstrap metadata to the coordinator for instant initialization.
2. The write task triggers an eager data flush when it hits the memory threshold.
3. The write task flushes all remaining data on the checkpoint barrier.
4. The write task unblocks the data flush after a successful checkpoint.

Writing Modes
Flink Hudi provides several modes for different data ingestion scenarios; users can configure the writing mode with a Flink SQL option.
- INSERT: log data sets
- UPSERT: upsert data sets
- BULK_INSERT: bulk load of history data
- INDEX BOOTSTRAP: load the index of history data for an incremental data set

Reading Modes
Flink Hudi provides several modes for different data reading views; users can configure the reading mode with a Flink SQL option.
- SNAPSHOT: latest snapshot data set
- TIME TRAVEL: snapshot data before a given instant
- READ OPTIMIZED: optimized (base-file-only) view for MOR tables
- INCREMENTAL: history data within a commit range
- STREAMING: streaming read from a start offset

SQL Example

  CREATE TABLE h_table (
    uuid VARCHAR(20),
    name VARCHAR(10),
    age INT,
    ts TIMESTAMP(3),
    `partition` VARCHAR(20)
  ) WITH (
    'connector' = 'hudi',
    'path' = 'xxx/h_table',
    'table.type' = 'MERGE_ON_READ',
    'write.bucket_assign.tasks' = '4',
    'write.tasks' = '4',
    'read.streaming.enabled' = 'true',
    'read.streaming.check-interval' = '4'
  );
#3 Flink Hudi Use Case

Case 1: DB To Warehouse, using cdc-connector and cdc-format.
Case 2: Near-Real-Time OLAP, with streaming JOIN and AGG.
Case 3: Incremental ETL, layered as ODS -> DWD -> DWS -> ADS.

#4 Apache Hudi Roadmap
- Streaming Semantics Enhancement
- Trino Connector
- Record Level Index
- ID-based Schema Evolution
- Secondary Index
- Metastore Catalog
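Case 1 (DB To Warehouse) above is typically wired as a Flink CDC source feeding the Hudi table. A minimal sketch assuming a MySQL source and the mysql-cdc connector from flink-cdc-connectors; the hostname, credentials, and database/table names are all placeholders:

```sql
-- Hypothetical MySQL changelog source (Case 1: DB To Warehouse).
CREATE TABLE users_cdc (
  uuid VARCHAR(20),
  name VARCHAR(10),
  age INT,
  ts TIMESTAMP(3),
  `partition` VARCHAR(20),
  PRIMARY KEY (uuid) NOT ENFORCED
) WITH (
  'connector' = 'mysql-cdc',     -- from flink-cdc-connectors
  'hostname' = 'localhost',      -- placeholder
  'port' = '3306',
  'username' = 'flink',          -- placeholder
  'password' = 'secret',         -- placeholder
  'database-name' = 'app_db',    -- placeholder
  'table-name' = 'users'         -- placeholder
);

-- Continuously apply the changelog to the Hudi table from the SQL example.
INSERT INTO h_table SELECT * FROM users_cdc;
```

The changelog produced by the CDC source is applied to the Hudi table as upserts, which corresponds to the UPSERT writing mode described earlier.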