dmgcodevil edited a comment on issue #1980: URL: https://github.com/apache/iceberg/issues/1980#issuecomment-750765052
Ok, based on my debugging, the _problem_ is in this [line](https://github.com/apache/iceberg/blob/1b66bdfc084ac73fe999299d041aa2e5677f43c9/parquet/src/main/java/org/apache/iceberg/parquet/ParquetWriter.java#L130): `writer.getPos()` returns the correct size of the actual Parquet file saved on disk, but `writeStore.isColumnFlushNeeded()` returns true and adds extra bytes. In my case, the actual file size is `4351656` and the row count is `470099` at the moment `length()` is called, while `ColumnWriteStoreV1.rowCountForNextSizeCheck` == `470100`, so `writeStore.isColumnFlushNeeded()` returns true:

```java
public boolean isColumnFlushNeeded() {
  // rowCount == 470099
  // rowCountForNextSizeCheck == 470100
  return rowCount + 1 >= rowCountForNextSizeCheck;
}
```

Maybe it can be fixed by adding an extra check:

```java
@Override
public void close() throws IOException {
  flushRowGroup(true);
  writeStore.close();
  writer.end(metadata);
  this.closed = true; // new flag indicates that the writer was closed
}

@Override
public long length() {
  try {
    if (closed) {
      return writer.getPos();
    } else {
      return writer.getPos() + (writeStore.isColumnFlushNeeded() ? writeStore.getBufferedSize() : 0);
    }
  } catch (IOException e) {
    throw new RuntimeIOException(e, "Failed to get file length");
  }
}
```

cc @rdblue
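To make the overestimate concrete, here is a minimal standalone sketch (not Iceberg code; all names and the `1024`-byte buffered size are hypothetical) that simulates the flush-needed check and the proposed `closed` flag with the numbers from this issue:

```java
// Hypothetical simulation of the length() overestimate described above.
// None of this is actual Iceberg/Parquet code.
public class LengthEstimateSketch {

  // Mirrors the isColumnFlushNeeded() logic quoted above.
  static boolean isColumnFlushNeeded(long rowCount, long rowCountForNextSizeCheck) {
    return rowCount + 1 >= rowCountForNextSizeCheck;
  }

  // Mirrors the proposed length() fix: after close() everything is flushed,
  // so the writer position alone is the true file size.
  static long length(long pos, long bufferedSize, boolean closed,
                     long rowCount, long rowCountForNextSizeCheck) {
    if (closed) {
      return pos;
    }
    return pos + (isColumnFlushNeeded(rowCount, rowCountForNextSizeCheck) ? bufferedSize : 0);
  }

  public static void main(String[] args) {
    long pos = 4351656L;          // actual on-disk file size from the issue
    long rowCount = 470099L;      // rows written when length() is called
    long nextCheck = 470100L;     // rowCountForNextSizeCheck
    long buffered = 1024L;        // assumed non-zero buffered-size estimate

    // Without the closed flag, the flush-needed branch fires and
    // length() reports pos + buffered even though the file is complete.
    System.out.println(length(pos, buffered, false, rowCount, nextCheck)); // pos + 1024
    // With the closed flag, length() matches the real file size.
    System.out.println(length(pos, buffered, true, rowCount, nextCheck));  // pos
  }
}
```

The simulation shows why the extra check matters: `rowCount + 1 >= rowCountForNextSizeCheck` is true at exactly `470099` rows, so only the `closed` flag prevents the buffered-size term from being added after the file is finalized.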
