prashantwason commented on a change in pull request #5179:
URL: https://github.com/apache/hudi/pull/5179#discussion_r839337341
##########
File path:
hudi-common/src/main/java/org/apache/hudi/common/model/HoodiePartitionMetadata.java
##########
@@ -117,30 +133,119 @@ public void trySave(int taskPartitionId) {
}
}
+  private String getMetafileExtension() {
+    // To be backwards compatible, there is no extension to the properties file base partition metafile
+    return format.isPresent() ? format.get().getFileExtension() : "";
+  }
+
+  /**
+   * Write the partition metadata in the correct format in the given file path.
+   *
+   * @param filePath Path of the file to write
+   * @throws IOException
+   */
+  private void writeMetafile(Path filePath) throws IOException {
+    if (format.isPresent()) {
+      Schema schema = HoodieAvroUtils.getRecordKeySchema();
+
+      switch (format.get()) {
+        case PARQUET:
+          // Since we are only interested in saving metadata to the footer, the schema, blocksizes and other
+          // parameters are not important.
+          MessageType type = Types.buildMessage().optional(PrimitiveTypeName.INT64).named("dummyint").named("dummy");
+          HoodieAvroWriteSupport writeSupport = new HoodieAvroWriteSupport(type, schema, Option.empty());
+          try (ParquetWriter writer = new ParquetWriter(filePath, writeSupport, CompressionCodecName.UNCOMPRESSED, 1024, 1024)) {
+            for (String key : props.stringPropertyNames()) {
+              writeSupport.addFooterMetadata(key, props.getProperty(key));
+            }
+          }
+          break;
+        case ORC:
+          // Since we are only interested in saving metadata to the footer, the schema, blocksizes and other
+          // parameters are not important.
+          OrcFile.WriterOptions writerOptions = OrcFile.writerOptions(fs.getConf()).fileSystem(fs)
+              .setSchema(AvroOrcUtils.createOrcSchema(schema));
+          try (Writer writer = OrcFile.createWriter(filePath, writerOptions)) {
+            for (String key : props.stringPropertyNames()) {
+              writer.addUserMetadata(key, ByteBuffer.wrap(props.getProperty(key).getBytes()));
+            }
+          }
+          break;
+        default:
Review comment:
I don't think the HFile format is currently queryable through standard query
engines. We have invented how records are saved within the HFile format, so for
it to be usable we would also need to implement a RecordReader, etc. Also,
there is no BaseFileUtils implementation for HFile (no HFileUtils) like there
is for Parquet (ParquetUtils), which makes me think that HFile is not fully
supported on the query side.
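
For context, the backwards-compatible path (the empty extension returned by
`getMetafileExtension()`) keeps the partition metafile as a plain
`java.util.Properties` file. A minimal, self-contained sketch of that
round trip, using only the JDK and hypothetical property keys (not Hudi's
actual helpers or key names):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.Properties;

public class PartitionMetafileSketch {

  // Serialize the properties to bytes and load them back, mimicking a
  // writer and a reader of the extension-less, properties-format metafile.
  static Properties roundTrip(Properties props) {
    try {
      ByteArrayOutputStream out = new ByteArrayOutputStream();
      props.store(out, "partition metadata");
      Properties loaded = new Properties();
      loaded.load(new ByteArrayInputStream(out.toByteArray()));
      return loaded;
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  public static void main(String[] args) {
    Properties props = new Properties();
    // Hypothetical keys, for illustration only.
    props.setProperty("commitTime", "20220330120000");
    props.setProperty("partitionDepth", "3");
    Properties loaded = roundTrip(props);
    System.out.println(loaded.getProperty("commitTime"));
  }
}
```

The Parquet/ORC branches above store the same key/value pairs as footer or
user metadata instead, which is what makes the metafile readable by footer-aware
tooling; HFile has no such query-side reader today, hence the concern.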
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]