1. Introduction

Apache Hudi supports datasets with several partition layouts, such as multi-level partitions, single partitions, date partitions, and non-partitioned datasets. Users can pick the layout that fits their needs. The sections below walk through how to configure each partition type in Hudi.
2. Partition Handling

To illustrate how Hudi handles the different partition types, assume that the records written to Hudi use the following schema:
{"type" : "record","name" : "HudiSchemaDemo","namespace" : "hoodie.HudiSchemaDemo","fields" : [ {"name" : "age","type" : [ "long", "null" ]}, {"name" : "location","type" : [ "string", "null" ]}, {"name" : "name","type" : [ "string", "null" ]}, {"name" : "sex","type" : [ "string", "null" ]}, {"name" : "ts","type" : [ "long", "null" ]}, {"name" : "date","type" : [ "string", "null" ]} ]}
A concrete record looks like this:
{"name": "zhangsan","ts": 1574297893837,"age": 16,"location": "beijing","sex":"male","date":"2020/08/16"}
2.1 Single partition

A single-partition table uses one field as the partition field. That field can be either a non-date-format field (such as location) or a date-format field (such as date).
2.1.1 Partitioning on a non-date-format field

To use the location field above as the partition field, configure the write to Hudi and the Hive sync as follows:
df.write().format("org.apache.hudi")
  .options(getQuickstartWriteConfigs())
  .option(DataSourceWriteOptions.TABLE_TYPE_OPT_KEY(), "COPY_ON_WRITE")
  .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY(), "ts")
  .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY(), "name")
  .option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY(), partitionFields)
  .option(DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY(), keyGenerator)
  .option(TABLE_NAME, tableName)
  .option("hoodie.datasource.hive_sync.enable", true)
  .option("hoodie.datasource.hive_sync.table", tableName)
  .option("hoodie.datasource.hive_sync.username", "root")
  .option("hoodie.datasource.hive_sync.password", "123456")
  .option("hoodie.datasource.hive_sync.jdbcurl", "jdbc:hive2://localhost:10000")
  .option("hoodie.datasource.hive_sync.partition_fields", hivePartitionFields)
  .option("hoodie.datasource.write.table.type", "COPY_ON_WRITE")
  .option("hoodie.embed.timeline.server", false)
  .option("hoodie.datasource.hive_sync.partition_extractor_class", hivePartitionExtractorClass)
  .mode(saveMode)
  .save(basePath);
The following configuration options are worth noting (a sketch with the concrete values for this example follows the list):

- DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY() is set to location;
- hoodie.datasource.hive_sync.partition_fields is set to location, the same field used as the Hudi partition field;
- DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY() is set to org.apache.hudi.keygen.SimpleKeyGenerator, or can be left unset, since org.apache.hudi.keygen.SimpleKeyGenerator is the default;
- hoodie.datasource.hive_sync.partition_extractor_class is set to org.apache.hudi.hive.MultiPartKeysValueExtractor;
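Putting these together, the variables referenced in the write snippet above would be bound roughly as follows. This is a sketch: tableName and basePath are taken from the synced Hive table shown below, while saveMode is an assumption.

```java
// Bindings for the non-date single-partition example.
String partitionFields = "location";
String keyGenerator = "org.apache.hudi.keygen.SimpleKeyGenerator";
String hivePartitionFields = "location";
String hivePartitionExtractorClass = "org.apache.hudi.hive.MultiPartKeysValueExtractor";
String tableName = "notDateFormatSinglePartitionDemo";
String basePath = "file:/tmp/hudi-partitions/notDateFormatSinglePartitionDemo";
// Assumed save mode (org.apache.spark.sql.SaveMode); use whatever mode fits the job.
SaveMode saveMode = SaveMode.Overwrite;
```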
After the write completes, the table synced to Hive is created with the following DDL:

CREATE EXTERNAL TABLE `notdateformatsinglepartitiondemo`(
  `_hoodie_commit_time` string,
  `_hoodie_commit_seqno` string,
  `_hoodie_record_key` string,
  `_hoodie_partition_path` string,
  `_hoodie_file_name` string,
  `age` bigint,
  `date` string,
  `name` string,
  `sex` string,
  `ts` bigint)
PARTITIONED BY (`location` string)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
  'org.apache.hudi.hadoop.HoodieParquetInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
  'file:/tmp/hudi-partitions/notDateFormatSinglePartitionDemo'
TBLPROPERTIES (
  'last_commit_time_sync'='20200816154250',
  'transient_lastDdlTime'='1597563780')
Query the table notdateformatsinglepartitiondemo. Tip: before querying, place hudi-hive-sync-bundle-xxx.jar under $HIVE_HOME/lib.
(Figure: query result for the table notdateformatsinglepartitiondemo)
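The table can also be read back through the Spark DataSource rather than Hive. The sketch below is an illustration only, not part of the original article; the glob depth and the filter value are assumptions based on the example record.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class ReadDemoTable {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("ReadDemoTable")
                .master("local[2]")
                .getOrCreate();

        String basePath = "file:/tmp/hudi-partitions/notDateFormatSinglePartitionDemo";

        // Snapshot query: glob one partition level down to the data files.
        // Recent Hudi versions also accept load(basePath) directly.
        Dataset<Row> snapshot = spark.read()
                .format("org.apache.hudi")
                .load(basePath + "/*/*");

        snapshot.filter("location = 'beijing'")
                .select("name", "age", "location", "date", "ts")
                .show(false);
    }
}
```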
2.1.2 Partitioning on a date-format field

To use the date field above as the partition field, the core configuration options are as follows (a sketch of the full write call follows the list):

- DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY() is set to date;
- hoodie.datasource.hive_sync.partition_fields is set to date, the same field used as the Hudi partition field;
- DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY() is set to org.apache.hudi.keygen.SimpleKeyGenerator, or can be left unset, since org.apache.hudi.keygen.SimpleKeyGenerator is the default;
- hoodie.datasource.hive_sync.partition_extractor_class is set to org.apache.hudi.hive.SlashEncodedDayPartitionValueExtractor;
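For completeness, here is a sketch of the write call with these values plugged in. It mirrors the non-date example above; the table name, base path, and save mode are illustrative assumptions, everything else follows from the list.

```java
df.write().format("org.apache.hudi")
  .options(getQuickstartWriteConfigs())
  .option(DataSourceWriteOptions.TABLE_TYPE_OPT_KEY(), "COPY_ON_WRITE")
  .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY(), "ts")
  .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY(), "name")
  // Date-format partition field (values like 2020/08/16).
  .option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY(), "date")
  .option(DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY(), "org.apache.hudi.keygen.SimpleKeyGenerator")
  .option(TABLE_NAME, "dateFormatSinglePartitionDemo")              // illustrative table name
  .option("hoodie.datasource.hive_sync.enable", true)
  .option("hoodie.datasource.hive_sync.table", "dateFormatSinglePartitionDemo")
  .option("hoodie.datasource.hive_sync.username", "root")
  .option("hoodie.datasource.hive_sync.password", "123456")
  .option("hoodie.datasource.hive_sync.jdbcurl", "jdbc:hive2://localhost:10000")
  .option("hoodie.datasource.hive_sync.partition_fields", "date")
  // Day-level extractor for slash-encoded date partition paths.
  .option("hoodie.datasource.hive_sync.partition_extractor_class",
          "org.apache.hudi.hive.SlashEncodedDayPartitionValueExtractor")
  .option("hoodie.datasource.write.table.type", "COPY_ON_WRITE")
  .option("hoodie.embed.timeline.server", false)
  .mode(SaveMode.Overwrite)                                         // illustrative save mode
  .save("file:/tmp/hudi-partitions/dateFormatSinglePartitionDemo"); // illustrative base path
```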