jinchengchenghh commented on issue #9335:
URL: 
https://github.com/apache/incubator-gluten/issues/9335#issuecomment-3088721846

   Hit several exceptions:
   
   1. testDeleteNonExistingRecords
   // Because the metadata is missing, canDeleteUsingMetadata returns false, so
OptimizeMetadataOnlyDeleteFromTable cannot rewrite the optimized logical plan
from ReplaceData to DeleteFromTable.
   
   
   org.junit.ComparisonFailure: Operation must match 
   Expected :delete
   Actual   :overwrite
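A minimal sketch of the decision the test is exercising (the names model the rule's behavior; this is not the real Spark/Iceberg API): a metadata-only delete is possible only when file metadata alone proves which files match the predicate. Otherwise the plan stays a copy-on-write ReplaceData, and the resulting snapshot operation is "overwrite" rather than "delete".

```java
// Hypothetical model of OptimizeMetadataOnlyDeleteFromTable's outcome.
public class MetadataDeleteModel {
    // canDeleteUsingMetadata is true only when file stats alone can decide
    // which files the predicate matches; without metadata it returns false.
    static String plannedOperation(boolean canDeleteUsingMetadata) {
        return canDeleteUsingMetadata ? "delete" : "overwrite";
    }

    public static void main(String[] args) {
        // With metadata missing, the rule cannot fire and the test observes
        // "overwrite" where it expected "delete".
        System.out.println(plannedOperation(false));
        System.out.println(plannedOperation(true));
    }
}
```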
   
   2. testDeleteWithNullConditions
   // Because the returned partition value is wrong: a null partition value comes back as the String "null".
   java.lang.AssertionError: Should have expected rows: row 1 col 1 contents should match expected: null<null> but was: java.lang.String<null>
   Expected :null<null>
   Actual   :java.lang.String<null>
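A minimal sketch of what this assertion is catching: a true null and the literal String "null" print identically, but compare differently, which is exactly the `null<null>` vs `java.lang.String<null>` difference in the failure message.

```java
// Demonstrates why the assertion fails: the reader returns the String
// "null" where a real null partition value was expected.
public class NullPartitionValue {
    public static void main(String[] args) {
        Object expected = null;       // real null partition value
        Object actual = "null";       // String "null" from the reader
        System.out.println(expected); // both print the same text: null
        System.out.println(actual);
        // ...but they are not equal, so the row comparison fails.
        System.out.println(java.util.Objects.equals(expected, actual));
    }
}
```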
   
   3. RowToVeloxColumnarExec does not support the type timestamptz; schema validation needs to be added to the Iceberg write path.
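A hedged sketch of the suggested validation (the helper and type names are illustrative, not Gluten's real API): reject types that RowToVeloxColumnarExec cannot convert, such as timestamptz, before the write starts instead of failing mid-execution.

```java
import java.util.List;
import java.util.Set;

// Hypothetical pre-write schema check for the Iceberg write path.
public class IcebergWriteSchemaCheck {
    // Types RowToVeloxColumnarExec cannot convert (illustrative set).
    static final Set<String> UNSUPPORTED = Set.of("timestamptz");

    static void validate(List<String> columnTypes) {
        for (String t : columnTypes) {
            if (UNSUPPORTED.contains(t)) {
                throw new UnsupportedOperationException(
                    "Not supported type: " + t);
            }
        }
    }
}
```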
   
   4. If the input data location is 
/var/folders/63/845y6pk53dx_83hpw8ztdchw0000gn/T/junit13654523122631037893/junit1452945322859670916/data/
 (ends with /), the exception below is thrown. Velox should normalize the input 
location if it requires no trailing slash.
   17:41:22.516 ERROR org.apache.iceberg.spark.source.BaseReader: Error reading 
file(s): 
/var/folders/63/845y6pk53dx_83hpw8ztdchw0000gn/T/junit13654523122631037893/junit1452945322859670916/data//c5ef8e75-5287-449c-b4f9-5becdd8281e7.parquet
   java.lang.NullPointerException: Could not find InputFile associated with 
FileScanTask
        at 
org.apache.iceberg.relocated.com.google.common.base.Preconditions.checkNotNull(Preconditions.java:922)
        at 
org.apache.iceberg.spark.source.BatchDataReader.open(BatchDataReader.java:94)
        at 
org.apache.iceberg.spark.source.BatchDataReader.open(BatchDataReader.java:41)
        at org.apache.iceberg.spark.source.BaseReader.next(BaseReader.java:146)
        at 
org.apache.spark.sql.execution.datasources.v2.PartitionIterator.hasNext(DataSourceRDD.scala:120)
        at 
org.apache.spark.sql.execution.datasources.v2.MetricsIterator.hasNext(DataSourceRDD.scala:158)
        at 
org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.$anonfun$hasNext$1(DataSourceRDD.scala:63)
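A minimal sketch of the normalization suggested above (assuming a simple string join is where the double slash arises): strip any trailing slash from the data location before appending the file name, so that `data/` + `/` + file does not produce the `data//...parquet` path that fails the InputFile lookup.

```java
// Illustrative location normalization; not Velox's actual code.
public class LocationNormalize {
    // Drop trailing slashes so joining never produces "//".
    static String normalize(String location) {
        while (location.endsWith("/")) {
            location = location.substring(0, location.length() - 1);
        }
        return location;
    }

    static String join(String location, String fileName) {
        return normalize(location) + "/" + fileName;
    }

    public static void main(String[] args) {
        // Both forms of the location yield the same normalized path.
        System.out.println(join("/tmp/data/", "a.parquet"));
        System.out.println(join("/tmp/data", "a.parquet"));
    }
}
```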
   
   

