mani-sethu opened a new issue, #10100:
URL: https://github.com/apache/incubator-gluten/issues/10100

   ### Backend
   
   VL (Velox)
   
   ### Bug description
   
   [Actual behavior]
   I am reading a Delta table with Spark and getting the error below. It started occurring only after I pulled the latest changes from the main branch of Gluten.
   ```
   Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 4) (10.193.40.6 executor 2): org.apache.gluten.exception.GlutenException: org.apache.gluten.exception.GlutenException: Exception: VeloxRuntimeError
   Error Source: RUNTIME
   Error Code: INVALID_STATE
   Reason: No magic bytes found at end of the Parquet file
   Retriable: False
   Expression: strncmp(copy.data() + readSize - 4, "PAR1", 4) == 0
   Context: Split [Hive: path/_delta_log/00000000000000000072.checkpoint.parquet 0 - 297373] Task Gluten_Stage_0_TID_4_VTID_2
   Additional Context: Operator: TableScan[0] 0
   Function: loadFileMetaData
   File: /root/src/weiting/gluten/ep/build-velox/build/velox_ep/velox/dwio/parquet/reader/ParquetReader.cpp
   Line: 216
   Stack trace:
   # 0  _ZN8facebook5velox7process10StackTraceC1Ei
   # 1  _ZN8facebook5velox14VeloxExceptionC1EPKcmS3_St17basic_string_viewIcSt11char_traitsIcEES7_S7_S7_bNS1_4TypeES7_
   # 2
   ```
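   The failing expression, `strncmp(copy.data() + readSize - 4, "PAR1", 4) == 0`, is Velox verifying that the file ends with Parquet's 4-byte footer magic `PAR1`; a valid Parquet file begins and ends with that marker. As a quick sanity check (a standalone sketch, not part of Gluten), one can test the checkpoint file directly to see whether it is truncated or corrupt on disk:

   ```python
   def has_parquet_magic(path: str) -> bool:
       """Return True if the file ends with Parquet's 4-byte footer magic b"PAR1"."""
       with open(path, "rb") as f:
           f.seek(0, 2)                # seek to end of file
           size = f.tell()
           if size < 8:                # too small to hold header magic + footer magic
               return False
           f.seek(-4, 2)               # the last 4 bytes of a valid file are b"PAR1"
           return f.read(4) == b"PAR1"
   ```

   If this returns False for `_delta_log/00000000000000000072.checkpoint.parquet`, the file itself is incomplete or corrupt, which points at the checkpoint writer or the storage layer rather than the Velox reader.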
   
   
   ### Gluten version
   
   Gluten-1.3, main branch
   
   ### Spark version
   
   Spark-3.5.x
   
   ### Spark configurations
   
   _No response_
   
   ### System information
   
   _No response_
   
   ### Relevant logs
   
   ```bash
   
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

