Yaohua628 commented on code in PR #39996:
URL: https://github.com/apache/spark/pull/39996#discussion_r1105937406


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormat.scala:
##########
@@ -233,20 +239,24 @@ object FileFormat {
       fileSize: Long,
       fileModificationTime: Long): InternalRow =
     updateMetadataInternalRow(new GenericInternalRow(fieldNames.length), fieldNames,
-      filePath, fileSize, fileModificationTime)
+      filePath, fileSize, 0L, 0L, fileModificationTime)

Review Comment:
   I see this function is currently only used by the metadata filter
optimization: could we update the function doc and add a comment mentioning
that, since that's basically why it's safe to set `0L` for both
`fileBlockStart` and `fileBlockLength`? Thanks!
   
   We could also consider using `-1L` and `-1L`, or even `0L` and `fileSize`,
respectively.
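   
   For illustration, here are the candidate `(start, length)` placeholder
pairs side by side. This is just a sketch; `BlockPlaceholders` is a throwaway
name for this comment, not anything in Spark:
   
   ```scala
   object BlockPlaceholders {
     // What the diff does today: zero for both start and length.
     val zeros: (Long, Long) = (0L, 0L)
   
     // Sentinel values that read unambiguously as "not populated".
     val sentinels: (Long, Long) = (-1L, -1L)
   
     // Describe the whole file as a single block.
     def wholeFile(fileSize: Long): (Long, Long) = (0L, fileSize)
   }
   ```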



##########
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileMetadataStructSuite.scala:
##########
@@ -799,4 +828,36 @@ class FileMetadataStructSuite extends QueryTest with SharedSparkSession {
       }
     }
   }
+
+  test("SPARK-42423: Add metadata column file block start and length") {
+    withSQLConf(
+        SQLConf.LEAF_NODE_DEFAULT_PARALLELISM.key -> "1",
+        SQLConf.FILES_MAX_PARTITION_BYTES.key -> "1") {
+      withTempPath { path =>
+        spark.range(2).write.json(path.getCanonicalPath)
+        assert(path.listFiles().count(_.getName.endsWith("json")) == 1)
+
+        val df = spark.read.json(path.getCanonicalPath)
+          .select("id", METADATA_FILE_BLOCK_START, METADATA_FILE_BLOCK_LENGTH)
+        assert(df.rdd.partitions.length > 1)
+        val res = df.collect()
+        assert(res.length == 2)
+        assert(res.head.getLong(0) == 0)
+        assert(res.head.getLong(1) == 0)
+        assert(res.head.getLong(2) > 0)
+        assert(res(1).getLong(0) == 1L)
+        assert(res(1).getLong(1) > 0)
+        assert(res(1).getLong(2) > 0)
+
+        val df2 = spark.read.json(path.getCanonicalPath)
+          .where("_metadata.File_bLoCk_start > 0 and _metadata.file_SizE > 0")
+          .select("id", METADATA_FILE_BLOCK_START, METADATA_FILE_BLOCK_LENGTH)

Review Comment:
   Could we check the files listed by the `queryExecution` to make sure the
file-listing optimization still applies to the other `_metadata` fields? For
example, with `_metadata.file_block_start > 0 and _metadata.file_size > 10`,
can we still filter out files with `size <= 10`?
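   
   A rough sketch of that check (assuming the test also writes a file of 10
bytes or fewer so there is something to prune; the surrounding names here are
illustrative, while `FileSourceScanExec` and `selectedPartitions` are
existing internals):
   
   ```scala
   import org.apache.spark.sql.execution.FileSourceScanExec
   
   val pruned = spark.read.json(path.getCanonicalPath)
     .where("_metadata.file_block_start > 0 and _metadata.file_size > 10")
   val scan = pruned.queryExecution.sparkPlan.collectFirst {
     case s: FileSourceScanExec => s
   }.get
   // The size predicate should prune files at listing time even though the
   // block-start predicate can't be evaluated per file up front.
   val listedFiles = scan.selectedPartitions.flatMap(_.files)
   assert(listedFiles.forall(_.getLen > 10))
   ```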


