zhztheplayer commented on code in PR #11225:
URL: https://github.com/apache/incubator-gluten/pull/11225#discussion_r2577078554


##########
backends-velox/src/main/scala/org/apache/gluten/utils/ParquetMetadataUtils.scala:
##########
@@ -119,18 +134,35 @@ object ParquetMetadataUtils {
    * Parquet metadata. In this case, the Parquet scan should fall back to vanilla Spark since Velox
    * doesn't yet support Spark legacy datetime.
    */
-  private def isTimezoneFoundInMetadata(
+  private def isUnsupportedMetadata(
       fileStatus: LocatedFileStatus,
       conf: Configuration,
-      parquetOptions: ParquetOptions): Boolean = {
-    val footerFileMetaData =
+      parquetOptions: ParquetOptions): Option[String] = {
+    val footer =
       try {
-        ParquetFooterReader.readFooter(conf, fileStatus, SKIP_ROW_GROUPS).getFileMetaData
+        ParquetFooterReader.readFooter(conf, fileStatus, ParquetMetadataConverter.NO_FILTER)
       } catch {
         case _: RuntimeException =>
          // Ignored, as it could be a "Not a Parquet file" exception.
-          return false
+          return None
+      }
+    val validationChecks = Seq(
+      validateCodec(footer),
+      isTimezoneFoundInMetadata(footer, parquetOptions)

Review Comment:
   > Yes, it has a config to control, we could union all the 3 checks and 
config, please union the config in #11233, I will union the check
   
   Yes, let's use `spark.gluten.sql.fallbackUnexpectedMetadataParquet` for the newly added checks. Whether to also remove the old option for encryption validation is still open for discussion.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
