jaceklaskowski commented on code in PR #40638:
URL: https://github.com/apache/spark/pull/40638#discussion_r1162889479


##########
sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/PartitionReaderFactory.java:
##########
@@ -37,15 +37,15 @@
 public interface PartitionReaderFactory extends Serializable {
 
   /**
-   * Returns a row-based partition reader to read data from the given {@link InputPartition}.
+   * @return a row-based partition reader to read data from the given {@link InputPartition}.
    * <p>
    * Implementations probably need to cast the input partition to the concrete

Review Comment:
   Can we add `@note` here so that `@return` stays a one-liner?
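
A hedged sketch of the doc shape the reviewer seems to be asking for. The stand-in types below are simplified placeholders, not the real Spark interfaces, and `@note` is a Scaladoc tag rather than standard Javadoc, so a plain `javadoc` run would need a custom-tag flag such as `-tag note:a:Note:` (Spark's doc build may already handle this):

```java
import java.io.Serializable;

// Hypothetical, simplified stand-ins for the real Spark connector types.
interface InputPartition {}
interface PartitionReader<T> {}

interface PartitionReaderFactory extends Serializable {
  /**
   * @return a row-based partition reader to read data from the given
   *         {@link InputPartition}
   * @note Implementations probably need to cast the input partition to the
   *       concrete {@link InputPartition} class defined for the data source.
   */
  PartitionReader<Object> createReader(InputPartition partition);
}
```

Note that `javac` ignores doc comments entirely, so the non-standard `@note` tag only matters to the doc tooling, not to compilation.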



##########
sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/PartitionReaderFactory.java:
##########
@@ -55,7 +55,7 @@ default PartitionReader<ColumnarBatch> createColumnarReader(InputPartition parti
   }
 
   /**
-   * Returns true if the given {@link InputPartition} should be read by Spark in a columnar way.
+   * @return true if the given {@link InputPartition} should be read by Spark in a columnar way.
    * This means, implementations must also implement {@link #createColumnarReader(InputPartition)}

Review Comment:
   and here



##########
sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/PartitionReaderFactory.java:
##########
@@ -37,15 +37,15 @@
 public interface PartitionReaderFactory extends Serializable {
 
   /**
-   * Returns a row-based partition reader to read data from the given {@link InputPartition}.
+   * @return a row-based partition reader to read data from the given {@link InputPartition}.
    * <p>
    * Implementations probably need to cast the input partition to the concrete
    * {@link InputPartition} class defined for the data source.
    */
   PartitionReader<InternalRow> createReader(InputPartition partition);
 
   /**
-   * Returns a columnar partition reader to read data from the given {@link InputPartition}.
+   * @return a columnar partition reader to read data from the given {@link InputPartition}.
    * <p>
    * Implementations probably need to cast the input partition to the concrete

Review Comment:
   same here



##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/DataSourceV2ScanExecBase.scala:
##########
@@ -65,7 +63,11 @@ trait DataSourceV2ScanExecBase extends LeafExecNode {
   }
 
   override def vectorTypes: Option[Seq[String]] = {
-    Option(readerFactory.getVectorTypes.get.asScala.toSeq)
+    val vectorTypes = readerFactory.getVectorTypes
+    if (vectorTypes.isPresent) {
+      vectorTypes.get()

Review Comment:
   What does this line do? Looks like a missing `return`?
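
For context on the question: in Scala, `if` is an expression and the last expression of each branch is that branch's value, so `vectorTypes.get()` (plus whatever the truncated hunk applies to it) would be the result of the `true` branch rather than a dropped statement; no `return` keyword is required as long as the branch yields the intended value. The pattern is sketched below in Java with hypothetical names, since the rest of the hunk is not visible:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;

class VectorTypesSketch {
  // Hypothetical reconstruction of the pattern under review: unwrap an
  // Optional of column vector class names, or fall back to empty.
  static Optional<List<String>> vectorTypes(Optional<String[]> javaTypes) {
    if (javaTypes.isPresent()) {
      // In Java this value must be returned explicitly; in Scala the last
      // expression of the branch is the branch's value automatically.
      return Optional.of(Arrays.asList(javaTypes.get()));
    } else {
      return Optional.empty();
    }
  }
}
```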



##########
sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/PartitionReaderFactory.java:
##########
@@ -68,7 +68,7 @@ default boolean supportColumnarReads(InputPartition partition) {
   }
 
   /**
-   * Returns exact java types of the columns that are output in columnar processing mode.
+   * @return exact java types of the columns that are output in columnar processing mode.

Review Comment:
   "Fully-qualified Java class names of"?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

