amogh-jahagirdar commented on code in PR #13555:
URL: https://github.com/apache/iceberg/pull/13555#discussion_r2214381600
##########
spark/v4.0/spark/src/main/java/org/apache/iceberg/spark/source/SparkWriteBuilder.java:
##########
@@ -120,12 +120,15 @@ public WriteBuilder overwrite(Filter[] filters) {
   @Override
   public Write build() {
     // The write schema should only include row lineage in the output if it's an overwrite
-    // operation.
+    // operation or if it's a compaction.
     // In any other case, only null row IDs and sequence numbers would be produced which
     // means the row lineage columns can be excluded from the output files
-    boolean writeIncludesRowLineage = TableUtil.supportsRowLineage(table) && overwriteFiles;
+    boolean writeIncludesRowLineage =
+        TableUtil.supportsRowLineage(table)
+            && (overwriteFiles || writeConf.rewrittenFileSetId() != null);
     StructType sparkWriteSchema = dsSchema;
-    if (writeIncludesRowLineage) {
+    if (writeIncludesRowLineage
+        && !dsSchema.exists(field -> field.name().equals(MetadataColumns.ROW_ID.name()))) {
Review Comment:
   I don't think this `dsSchema` check should be necessary anymore; let me
double-check this.
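   For context, the guard being questioned only keeps the row-lineage column out of the write schema when the incoming dataset schema does not already carry it. A minimal stand-alone sketch of that containment check, using a plain Java list of field names in place of Spark's `StructType` and assuming the lineage metadata column is named `_row_id` (both are simplifications, not the actual Iceberg/Spark API):

```java
import java.util.List;

public class RowLineageSchemaCheck {
  // Assumed name of the row-lineage metadata column (stand-in for
  // MetadataColumns.ROW_ID.name() in the real code).
  static final String ROW_ID = "_row_id";

  // Mirrors the dsSchema.exists(...) predicate: true when the incoming
  // write schema already contains the row-lineage column by name.
  static boolean schemaHasRowId(List<String> fieldNames) {
    return fieldNames.stream().anyMatch(name -> name.equals(ROW_ID));
  }

  public static void main(String[] args) {
    // Plain data schema: the guarded branch would add the lineage column.
    System.out.println(schemaHasRowId(List.of("id", "data")));
    // Schema that already carries _row_id: the branch would be skipped.
    System.out.println(schemaHasRowId(List.of("id", "data", ROW_ID)));
  }
}
```

   Whether the incoming schema can still arrive with `_row_id` already present is exactly what the reviewer is double-checking; if it cannot, the `exists` guard is dead code.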
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]