aokolnychyi commented on a change in pull request #2362:
URL: https://github.com/apache/iceberg/pull/2362#discussion_r615086184
##########
File path: spark3/src/main/java/org/apache/iceberg/spark/source/SparkWrite.java
##########
@@ -513,17 +512,22 @@ protected WriterFactory(PartitionSpec spec, FileFormat format, LocationProvider
   @Override
   public DataWriter<InternalRow> createWriter(int partitionId, long taskId, long epochId) {
-    OutputFileFactory fileFactory = new OutputFileFactory(
-        spec, format, locations, io.value(), encryptionManager.value(), partitionId, taskId);
-    SparkAppenderFactory appenderFactory = new SparkAppenderFactory(properties, writeSchema, dsSchema, spec);
+    Table table = tableBroadcast.getValue();
+
+    OutputFileFactory fileFactory = new OutputFileFactory(table, format, partitionId, taskId);
+    SparkAppenderFactory appenderFactory = new SparkAppenderFactory(table, writeSchema, dsSchema);
+
+    PartitionSpec spec = table.spec();
+    FileIO io = table.io();
+
     if (spec.isUnpartitioned()) {
-      return new Unpartitioned3Writer(spec, format, appenderFactory, fileFactory, io.value(), targetFileSize);
Review comment:
Actually, I remember why I did not do this. If we were to expose writing
data with a partition spec that is not current, then we would need to pass the
correct partition spec here instead of consuming the default one from the table
object. I think it should be alright to keep it as is for now.
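The concern above can be illustrated with a minimal, self-contained sketch (the `Table`, `PartitionSpec`, and `writerSpec` names here are illustrative stand-ins, not the actual Iceberg classes): a table tracks all historical partition specs but exposes only the current one via `spec()`, so a writer factory that reads `table.spec()` can only ever write with the current spec. Supporting a non-current spec would mean passing a spec (or spec ID) explicitly:

```java
import java.util.HashMap;
import java.util.Map;

public class SpecSketch {
    // Illustrative stand-in for a partition spec: an ID plus a description.
    record PartitionSpec(int specId, String description) {}

    // Illustrative stand-in for a table that has evolved its partition spec.
    static class Table {
        private final Map<Integer, PartitionSpec> specsById = new HashMap<>();
        private int currentSpecId;

        void addSpec(PartitionSpec spec, boolean makeCurrent) {
            specsById.put(spec.specId(), spec);
            if (makeCurrent) {
                currentSpecId = spec.specId();
            }
        }

        // What the factory in the diff consumes: always the current spec.
        PartitionSpec spec() {
            return specsById.get(currentSpecId);
        }

        // What writing with a non-current spec would need instead.
        PartitionSpec specById(int specId) {
            return specsById.get(specId);
        }
    }

    // Current-spec-only path, as in the PR's WriterFactory.
    static String writerSpec(Table table) {
        return table.spec().description();
    }

    // Hypothetical explicit-spec path that the reviewer chose not to expose.
    static String writerSpec(Table table, int specId) {
        return table.specById(specId).description();
    }

    public static void main(String[] args) {
        Table table = new Table();
        table.addSpec(new PartitionSpec(0, "days(ts)"), true);
        table.addSpec(new PartitionSpec(1, "hours(ts)"), true); // spec evolved

        System.out.println(writerSpec(table));    // prints "hours(ts)"
        System.out.println(writerSpec(table, 0)); // prints "days(ts)"
    }
}
```

The sketch shows why keeping the table-derived spec is fine only as long as writes always target the current spec, which is exactly the trade-off the comment accepts "for now".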
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.