
4. Yuzhao, Dalong: Building a Streaming Data Lake Platform with Flink + Hudi


Danny Chan / Apache Hudi Committer
Liu Dalong / Engineer, Alibaba

Build streaming data lake using Apache Flink and Apache Hudi

Agenda:
#1 Apache Hudi 101
#2 Flink Hudi Integration
#3 Flink Hudi Use Case
#4 Apache Hudi Roadmap

#1 Apache Hudi 101

Why Data Lake?
The warehouse landscape has evolved in three waves:
1. Traditional data warehouse (2015): Teradata, Vertica. MPP databases that couple computation and storage.
2. Cloud-native data warehouse (2018): Redshift, Snowflake, BigQuery. Transactions, but on private storage formats.
3. Data lake (2021): Hudi, Delta Lake, Iceberg. Transactions on an open format.

Hudi: Data Lake Platform
Hudi is not just a table format!

Timeline Service
An instant consists of an action, a time, and a state. Actions: commit, delta_commit, clean, compaction, rollback, savepoint. States: requested, inflight, completed. All actions performed on the table at different instants of time construct instantaneous views of the table.

File Format
A base path has several partitions; each partition has multiple file groups with unique IDs; each file group has several file slices, made up of base files (*.parquet) and log files (*.log.*).

Copy On Write
The incremental data set is cached as an in-memory index. While scanning the base file, the writer looks up the index and merges the record if possible. When the latest file slice size hits the threshold, the writer rolls over to a new file group.

Merge On Read
The incremental data set (new data) is always appended to the latest version of the log file. The log file handle rolls over to a new file group automatically when its size hits the threshold (default 1 GB).

#2 Flink Hudi Integration

Flink Writer Pipeline
Rowdata-to-hoodie conversion, then the bucket assigner, then the stream writer (with an operator coordinator), then the cleaner. Records are shuffled by record key into the bucket assign tasks, then by bucket ID into the write tasks.

Small File Strategy
The file groups in a partition are divided equally among the bucket assign tasks, and each task applies the small file strategy independently: a task first routes new records into the small file groups it manages (for example, task-1 choosing FG-1 and FG-2 to write), and writes to a new file group only when none of its file groups qualifies.

Write State Machine
1. On start-up, the write task sends bootstrap metadata to the coordinator for instant initialization.
2. The write task triggers an eager data flush when hitting the memory threshold.
3. The write task flushes all buffered data on the checkpoint barrier.
4. The write task unblocks the data flush on successful checkpoints.

Writing Modes
Flink Hudi provides several modes for different data ingestion scenarios; users can configure the writing mode with a Flink SQL option:
- INSERT: log data set
- UPSERT: upsert data set
- BULK INSERT: history data bulk load
- INDEX BOOTSTRAP: load the history data index for the incremental data set

Reading Modes
Flink Hudi provides several modes for different data reading views; users can configure the reading mode with a Flink SQL option:
- SNAPSHOT: latest snapshot data set
- READ OPTIMIZED: optimized view for MOR tables
- INCREMENTAL: history data within a range
- TIME TRAVEL: snapshot data before an instant
- STREAMING: streaming read with a start offset

SQL Example

CREATE TABLE h_table (
  uuid VARCHAR(20),
  name VARCHAR(10),
  age INT,
  ts TIMESTAMP(3),
  `partition` VARCHAR(20)
) WITH (
  'connector' = 'hudi',
  'path' = 'xxx/h_table',
  'table.type' = 'MERGE_ON_READ',
  'write.bucket_assign.tasks' = '4',
  'write.tasks' = '4',
  'read.streaming.enabled' = 'true',
  'read.streaming.check-interval' = '4'
);

#3 Flink Hudi Use Case

Case 1: DB to warehouse, via a CDC connector or a CDC format.
Case 2: Near-real-time OLAP, with JOIN and AGG over the lake tables.
Case 3: Incremental ETL across the ODS, DWD, DWS, and ADS layers.

#4 Apache Hudi Roadmap

- Streaming semantics enhancement
- Trino connector
- Record-level index
- ID-based schema evolution
- Secondary index
- Metastore catalog
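Case 1 (DB to warehouse) can be sketched in Flink SQL. This is a minimal, illustrative sketch, assuming the Flink CDC `mysql-cdc` connector is available on the classpath and that a Hudi sink table like the `h_table` from the SQL example already exists; the source table name, schema, and connection values below are hypothetical, not from the deck.

```sql
-- Hypothetical MySQL source captured as a changelog stream.
CREATE TABLE mysql_users (
  uuid VARCHAR(20),
  name VARCHAR(10),
  age INT,
  ts TIMESTAMP(3),
  `partition` VARCHAR(20),
  PRIMARY KEY (uuid) NOT ENFORCED
) WITH (
  'connector' = 'mysql-cdc',
  'hostname' = 'localhost',
  'port' = '3306',
  'username' = 'flink',
  'password' = '******',
  'database-name' = 'app',
  'table-name' = 'users'
);

-- Continuously upsert the change stream into the Hudi table.
INSERT INTO h_table SELECT * FROM mysql_users;
```

Note that Hudi commits on the Flink side are driven by checkpoints, so checkpointing must be enabled on the job for data to become visible to readers.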
