vaibhawvipul opened a new pull request, #3808:
URL: https://github.com/apache/datafusion-comet/pull/3808
## Which issue does this PR close?
Closes #3760.
## Rationale for this change
When running the Spark SQL test suites with the `native_datafusion` scan, tests that expect errors for duplicate/ambiguous fields in case-insensitive mode fail, because DataFusion's Parquet reader does not enforce Spark's case-sensitivity validation. Instead of detecting the duplicates and raising the proper Spark error, the native reader silently returns wrong results or falls back to Spark.
## What changes are included in this PR?
**Native duplicate field detection (Rust):**
- Added per-column duplicate detection in `schema_adapter.rs` via `check_column_duplicate()`, which checks each `Column` expression in the physical plan for ambiguous case-insensitive matches against the original physical schema.

**Removed plan-time fallback (Scala):**
- Removed the fallback block in `CometScanRule.scala` that detected duplicate field names at plan time and fell back to Spark; duplicates are now detected at read time in the native reader.

**Spark SQL test diffs (3.4.3, 3.5.8, 4.0.1):**
- Removed `IgnoreCometNativeDataFusion` annotations for issue #3760 from `FileBasedDataSourceSuite` and `ParquetFilterSuite`.
- Adapted error interception in tests to handle both Spark's `SparkException` (`FAILED_READ_FILE`) wrapper and Comet's direct `SparkRuntimeException`.
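The native check can be sketched as follows. This is a minimal illustration of the idea only: the real `check_column_duplicate()` in `schema_adapter.rs` operates on the plan's `Column` expressions and the Arrow physical schema, so its actual signature differs, and the plain `String` error here is a stand-in for Comet's Spark-compatible error.

```rust
/// Count how many fields in the physical schema match `name`
/// case-insensitively. If more than one matches, the lookup is ambiguous
/// and the reader should raise Spark's duplicate-field error instead of
/// silently picking one field.
/// (Hypothetical simplified signature; the PR's version works on the
/// physical plan's Column expressions and Arrow schema.)
fn check_column_duplicate(name: &str, physical_fields: &[&str]) -> Result<(), String> {
    let matches = physical_fields
        .iter()
        .filter(|f| f.eq_ignore_ascii_case(name))
        .count();
    if matches > 1 {
        Err(format!(
            "Found duplicate field(s) \"{name}\" in case-insensitive mode"
        ))
    } else {
        Ok(())
    }
}

fn main() {
    // A physical Parquet schema whose fields collide case-insensitively.
    let fields = ["id", "ID", "name"];
    // Looking up "id" matches both "id" and "ID": ambiguous, so it errors.
    assert!(check_column_duplicate("id", &fields).is_err());
    // "name" matches exactly one field, so the lookup is fine.
    assert!(check_column_duplicate("name", &fields).is_ok());
    println!("ok");
}
```

Doing the check at read time, per column, is what lets the error surface from the native reader itself rather than requiring a conservative plan-time fallback to Spark.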
## How are these changes tested?
- Rust and Scala tests
- Spark SQL tests verified:
  - "Spark native readers should respect spark.sql.caseSensitive - parquet"
  - "SPARK-25207: exception when duplicate fields in case-insensitive mode"
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]