lamber-ken commented on a change in pull request #1542:
URL: https://github.com/apache/incubator-hudi/pull/1542#discussion_r412008079
##########
File path: hudi-cli/src/main/java/org/apache/hudi/cli/commands/RepairsCommand.java
##########
@@ -147,14 +149,16 @@ public String overwriteHoodieProperties(
   public void removeCorruptedPendingCleanAction() {
     HoodieTableMetaClient client = HoodieCLI.getTableMetaClient();
-    HoodieActiveTimeline activeTimeline = HoodieCLI.getTableMetaClient().getActiveTimeline();
-
-    activeTimeline.filterInflightsAndRequested().getInstants().forEach(instant -> {
+    HoodieTimeline cleanerTimeline = HoodieCLI.getTableMetaClient().getActiveTimeline().getCleanerTimeline();
+    LOG.info("Inspecting pending clean metadata in timeline for corrupted files");
+
+    cleanerTimeline.filterInflightsAndRequested().getInstants().forEach(instant -> {
       try {
         CleanerUtils.getCleanerPlan(client, instant);
-      } catch (IOException e) {
-        LOG.warn("try to remove corrupted instant file: " + instant);
+      } catch (AvroRuntimeException e) {
Review comment:
`AvroRuntimeException` will never be caught here. `Not an Avro data file` is an `IOException`.
```
// org.apache.avro.file.DataFileReader#openReader
public static <D> FileReader<D> openReader(SeekableInput in, DatumReader<D> reader)
    throws IOException {
if (in.length() < MAGIC.length)
throw new IOException("Not an Avro data file");
// read magic header
byte[] magic = new byte[MAGIC.length];
in.seek(0);
for (int c = 0; c < magic.length; c = in.read(magic, c, magic.length-c)) {}
in.seek(0);
if (Arrays.equals(MAGIC, magic)) // current format
return new DataFileReader<D>(in, reader);
if (Arrays.equals(DataFileReader12.MAGIC, magic)) // 1.2 format
return new DataFileReader12<D>(in, reader);
throw new IOException("Not an Avro data file");
}
```
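A minimal standalone sketch of the point above (not the actual Hudi code; `openReader` here is a simplified stand-in for `DataFileReader#openReader`): the "Not an Avro data file" error is a checked `IOException`, so a `catch` clause for a `RuntimeException` subtype such as `AvroRuntimeException` would compile but never fire for this failure. Only a `catch (IOException e)` handles it.

```java
import java.io.IOException;

public class CatchDemo {
    // Simplified stand-in for DataFileReader.openReader: Avro rejects a file
    // shorter than its magic header by throwing a checked IOException.
    static void openReader(byte[] bytes) throws IOException {
        final int MAGIC_LENGTH = 4; // Avro's magic header is 4 bytes
        if (bytes.length < MAGIC_LENGTH) {
            throw new IOException("Not an Avro data file");
        }
    }

    public static void main(String[] args) {
        try {
            openReader(new byte[0]); // simulate a corrupted/empty plan file
        } catch (IOException e) {
            // This branch runs; a catch for AvroRuntimeException would be
            // skipped because IOException is not a subtype of it.
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```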
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]