singhpk234 commented on code in PR #5143:
URL: https://github.com/apache/iceberg/pull/5143#discussion_r908177019


##########
spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/source/SparkBatch.java:
##########
@@ -65,18 +63,24 @@ public InputPartition[] planInputPartitions() {
     Broadcast<Table> tableBroadcast = sparkContext.broadcast(SerializableTable.copyOf(table));
     String expectedSchemaString = SchemaParser.toJson(expectedSchema);
 
-    InputPartition[] readTasks = new InputPartition[tasks.size()];
+    InputPartition[] readTasks = new InputPartition[tasks().size()];
 
     Tasks.range(readTasks.length)
         .stopOnFailure()
         .executeWith(localityEnabled ? ThreadPools.getWorkerPool() : null)
         .run(index -> readTasks[index] = new ReadTask(
-            tasks.get(index), tableBroadcast, expectedSchemaString,
+            tasks().get(index), tableBroadcast, expectedSchemaString,

Review Comment:
   I see. My understanding was that we would need it, since tasks() could be
called from multiple threads (we are running in a thread pool). Though I now
see that on L66 above we would already have called tasks(), which would have
populated `tasks` anyway, and since we made the method itself
[synchronized](https://github.com/apache/iceberg/blob/master/spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/source/SparkCopyOnWriteScan.java#L134),
each of these accesses will now be sequential even when `tasks` is already
non-null.
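
   For context, a minimal sketch of the lazy, synchronized caching pattern
under discussion (the class and method bodies here are hypothetical stand-ins,
not the actual Iceberg code):

   ```java
   import java.util.Arrays;
   import java.util.List;

   class LazyTasksExample {
     private List<String> tasks = null; // cached result, populated on first call

     // synchronized guarantees only one thread plans tasks and that the
     // cached list is safely published to the other worker threads
     synchronized List<String> tasks() {
       if (tasks == null) {
         tasks = planTasks(); // expensive planning happens exactly once
       }
       return tasks;
     }

     private List<String> planTasks() {
       return Arrays.asList("task-0", "task-1", "task-2");
     }

     void planInputPartitions() {
       // capturing the list once means worker threads never re-enter the
       // synchronized getter; calling tasks().get(index) per element would
       // acquire the monitor on every access instead
       List<String> cached = tasks();
       for (String task : cached) {
         // build a ReadTask per element here
       }
     }
   }
   ```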
   
   All in all, I don't have a strong reason for it; it's just something that
caught my eye :) !



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

