MilanTyagi2004 commented on issue #61225:
URL: https://github.com/apache/doris/issues/61225#issuecomment-4073519839

   Hi, thanks for pointing me to #59984. I went through that PR in detail.
   
   It looks related, but I don't believe this issue is fully covered by that fix.
   
   #59984 mainly fixes column resolution for equality delete using Iceberg 
field IDs instead of column names. However, in this case, the crash happens 
during the normal scan path (not specifically tied to equality delete), and 
there are two distinct failure patterns:
   
   1. The SIGSEGV in ByteArrayDictDecoder::_decode_values indicates invalid 
dictionary access (likely due to incorrect column reader initialization or 
mismatched metadata).
   
   2. The std::out_of_range in StructNode::children_column_exists suggests 
missing columns during schema traversal, where map::at is used without checking 
existence.
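
   To illustrate failure pattern 1, here is a minimal, hypothetical sketch (names and structure are mine, not Doris's actual decoder): a dictionary decode loop that validates each index against the dictionary size. A corrupt page or a mismatched column binding can carry indices outside the dictionary, which an unchecked decoder turns into an out-of-bounds read (SIGSEGV); validating up front converts that into a diagnosable error.

   ```cpp
   #include <cstdint>
   #include <stdexcept>
   #include <string>
   #include <vector>

   // Hypothetical sketch: decode dictionary-encoded indices into values.
   // An index produced under the wrong column binding can exceed the
   // dictionary size; checking it avoids an out-of-bounds read.
   std::vector<std::string> decode_dict_values(const std::vector<std::string>& dict,
                                               const std::vector<uint32_t>& indices) {
       std::vector<std::string> out;
       out.reserve(indices.size());
       for (uint32_t idx : indices) {
           if (idx >= dict.size()) {
               // Fail loudly instead of reading past the dictionary buffer.
               throw std::out_of_range("dictionary index " + std::to_string(idx) +
                                       " out of range (dict size " +
                                       std::to_string(dict.size()) + ")");
           }
           out.push_back(dict[idx]);
       }
       return out;
   }
   ```

   The same shape of check would apply wherever the reader trusts page data to index into decoder-owned buffers.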
   
   From the stack trace, the failure occurs in:
   ParquetReader → IcebergTableReader → decode_values
   
   This suggests that column/schema mapping may still be inconsistent in 
non-equality-delete paths or nested schema handling, leading to:
   
   * wrong column binding → invalid dictionary decode (SIGSEGV)
   * missing struct fields → map::at exception
   
   So this seems like a similar root cause class (schema evolution + column 
resolution), but affecting a different execution path than #59984.
   
   I’m planning to:
   
   * verify whether field ID–based resolution is consistently applied in all 
scan paths
   * check nested struct handling in TableSchemaChangeHelper
   * add defensive checks to avoid unsafe map::at access
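   
   For the third item, the change I have in mind is roughly the following sketch (hypothetical names; the real lookup lives in StructNode/TableSchemaChangeHelper): replace the unchecked map::at with a find-based lookup so a field missing after schema evolution yields a handled "not found" result instead of an uncaught std::out_of_range.

   ```cpp
   #include <map>
   #include <string>

   // Hypothetical stand-in for a child column descriptor.
   struct Column {
       int field_id;
   };

   // Defensive lookup: returns false when the field is absent, letting the
   // caller decide how to handle it (e.g. fill nulls for a dropped column),
   // rather than throwing from map::at during schema traversal.
   bool try_get_child(const std::map<std::string, Column>& children,
                      const std::string& name, Column* out) {
       auto it = children.find(name);
       if (it == children.end()) {
           return false;
       }
       *out = it->second;
       return true;
   }
   ```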
   
   If this direction makes sense, I can take this issue and work on a fix.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

