jackye1995 commented on issue #3700:
URL: https://github.com/apache/iceberg/issues/3700#issuecomment-991357906
I had some discussion with Steven and Ryan in Slack. The Flink checkpoint
serialization is handled by `FlinkManifestUtil`, which delegates to the core
library for serde:
```java
static ManifestFile writeDataFiles(OutputFile outputFile, PartitionSpec spec, List<DataFile> dataFiles)
    throws IOException {
  ManifestWriter<DataFile> writer =
      ManifestFiles.write(FORMAT_V2, spec, outputFile, DUMMY_SNAPSHOT_ID);

  // The writer must be closed before toManifestFile() can be called, hence the
  // try-with-resources on an alias of the same writer.
  try (ManifestWriter<DataFile> closeableWriter = writer) {
    closeableWriter.addAll(dataFiles);
  }

  return writer.toManifestFile();
}

static List<DataFile> readDataFiles(ManifestFile manifestFile, FileIO io) throws IOException {
  try (CloseableIterable<DataFile> dataFiles = ManifestFiles.read(manifestFile, io)) {
    return Lists.newArrayList(dataFiles);
  }
}
```
The write-time schema is fixed through `MetadataV1/2` even when new fields are
added to `DataFile`, so there should not be any compatibility issue.
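To see that fixed schema concretely, one can dump the Avro writer schema embedded in the manifest's file header. A minimal sketch using only plain Avro APIs; the class name is illustrative, and it assumes a manifest already exists at the path from the test below:

```java
import java.io.File;
import java.io.IOException;

import org.apache.avro.Schema;
import org.apache.avro.file.DataFileReader;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;

public class DumpManifestSchema {
  public static void main(String[] args) throws IOException {
    // The writer schema is stored in the Avro file header, so this shows exactly
    // what the writing Iceberg version serialized, independent of the reader.
    try (DataFileReader<GenericRecord> reader = new DataFileReader<>(
        new File("/Users/yzhaoqin/0.12-out.avro"), new GenericDatumReader<>())) {
      Schema writerSchema = reader.getSchema();
      System.out.println(writerSchema.toString(true));
    }
  }
}
```

Since fields added to `DataFile` in later releases are optional in the manifest schema, an older writer schema like this one should still resolve cleanly under a newer reader.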
I tried the following using 0.12 and 0.13:
In 0.12:
```java
OutputFile outputFile = icebergTable.io().newOutputFile("file:/Users/yzhaoqin/0.12-out.avro");
FlinkManifestUtil.writeDataFiles(
    outputFile,
    icebergTable.spec(),
    Lists.newArrayList(icebergTable.currentSnapshot().addedFiles()));
```
In 0.13:
```java
List<DataFile> dataFiles = FlinkManifestUtil.readDataFiles(
    new GenericManifestFile(
        icebergTable.io().newInputFile("file:/Users/yzhaoqin/0.12-out.avro"),
        icebergTable.spec().specId()),
    icebergTable.io());
```
and it worked.
So far I have not been able to reproduce the issue. @stevenzwu @openinx, please
let me know if you are able to get a repro.
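If anyone does hit the failure, a self-contained skeleton along these lines would make the repro easy to share. This is only a sketch: `roundTrip`, the `table` argument, and the path are illustrative, and since `FlinkManifestUtil`'s methods are package-private it would need to live in a test under the same package:

```java
// Illustrative repro skeleton: write the manifest with one Iceberg version,
// read it back with another, and diff the DataFile paths.
static void roundTrip(Table table, String path) throws IOException {
  OutputFile out = table.io().newOutputFile(path);
  List<DataFile> written = Lists.newArrayList(table.currentSnapshot().addedFiles());
  FlinkManifestUtil.writeDataFiles(out, table.spec(), written);

  ManifestFile manifest =
      new GenericManifestFile(table.io().newInputFile(path), table.spec().specId());
  List<DataFile> read = FlinkManifestUtil.readDataFiles(manifest, table.io());

  for (int i = 0; i < written.size(); i++) {
    System.out.printf("%s -> %s%n", written.get(i).path(), read.get(i).path());
  }
}
```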