RussellSpitzer commented on a change in pull request #2362:
URL: https://github.com/apache/iceberg/pull/2362#discussion_r616222642



##########
File path: spark2/src/main/java/org/apache/iceberg/spark/source/Reader.java
##########
@@ -105,8 +102,8 @@
   private List<CombinedScanTask> tasks = null; // lazy cache of tasks
   private Boolean readUsingBatch = null;
 
-  Reader(Table table, Broadcast<FileIO> io, Broadcast<EncryptionManager> encryptionManager,
-      boolean caseSensitive, DataSourceOptions options) {
+  Reader(SparkSession spark, Table table, boolean caseSensitive, DataSourceOptions options) {
+    this.sparkContext = new JavaSparkContext(spark.sparkContext());

Review comment:
       This is valid, but it raises my heart rate. I would prefer we use
       JavaSparkContext.fromSparkContext so that when I see this I don't read it as
       creating a brand new context :)
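
       For illustration, a minimal sketch of the two forms (assuming a SparkSession
       in scope named `spark`, as in the constructor above). Both wrap the session's
       existing SparkContext rather than starting a new one; the static factory just
       makes that intent explicit at the call site:

           import org.apache.spark.SparkContext;
           import org.apache.spark.api.java.JavaSparkContext;
           import org.apache.spark.sql.SparkSession;

           SparkSession spark = SparkSession.builder().getOrCreate();
           SparkContext sc = spark.sparkContext();

           // Constructor form: wraps the existing SparkContext, but the `new`
           // keyword can read as if a second context were being created.
           JavaSparkContext viaConstructor = new JavaSparkContext(sc);

           // Factory form: same wrapping, with the intent spelled out.
           JavaSparkContext viaFactory = JavaSparkContext.fromSparkContext(sc);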




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
