wForget opened a new pull request, #2459:
URL: https://github.com/apache/datafusion-comet/pull/2459

   ## Which issue does this PR close?
   
   <!--
   We generally require a GitHub issue to be filed for all bug fixes and 
enhancements and this helps us generate change logs for our releases. You can 
link an issue to this PR using the GitHub syntax. For example `Closes #123` 
indicates that this PR will close issue #123.
   -->
   
   Closes #2457.
   
   ## Rationale for this change
   
   In the failing test case of 
https://github.com/apache/datafusion-comet/pull/2444, I found that a struct 
type with duplicate field names gets deduplicated when converted to an Arrow 
type, which causes RowToColumnar to silently lose some columns.
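   
   The validation can be sketched roughly as follows. This is a simplified 
illustration, not the actual Comet code: the `StructField` case class and the 
exception type are stand-ins (the real check in 
`org.apache.spark.sql.comet.util.Utils.toArrowField` throws a 
`SparkException`, per the stack trace below), but the duplicate-name detection 
and the message follow the behavior this PR introduces:
   
   ```scala
   // Hypothetical stand-in for Spark's StructField, to keep the sketch
   // self-contained without a Spark dependency.
   case class StructField(name: String, dataType: String)
   
   // Reject Arrow Struct schemas with duplicated field names up front,
   // instead of letting the deduplicated schema later fail with an
   // opaque IndexOutOfBoundsException in CometStructVector.getChild.
   def checkNoDuplicateFieldNames(fields: Seq[StructField]): Unit = {
     val names = fields.map(_.name)
     if (names.distinct.length != names.length) {
       throw new UnsupportedOperationException(
         s"Duplicated field names in Arrow Struct are not allowed, " +
           s"got [${names.mkString(", ")}].")
     }
   }
   ```
   
   Calling this with a schema like `[a, a, b, b]` fails immediately with the 
explicit message shown in the "after" output below.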
   
   ## What changes are included in this PR?
   
   <!--
   There is no need to duplicate the description in the issue here but it is 
sometimes worth providing a summary of the individual changes in this PR.
   -->
   
   Adds an explicit check during the Spark-to-Arrow schema conversion in 
`Utils.toArrowField` that rejects Arrow Struct types containing duplicated 
field names, so the conversion fails fast with a descriptive error.
   
   ## How are these changes tested?
   
   This PR does not fix the underlying issue, but it makes the error message 
much clearer.
   
   Before this change, the failing test case in #2444 reported:
   
   ```
   [info] - Struct Star Expansion *** FAILED *** (2 seconds, 840 milliseconds)
   [info]   org.apache.spark.SparkException: Job aborted due to stage failure: 
Task 0 in stage 1230.0 failed 1 times, most recent failure: Lost task 0.0 in 
stage 1230.0 (TID 1549) (96a252fb2e73 executor driver): 
java.lang.IndexOutOfBoundsException: Index 2 out of bounds for length 2
   [info]       at 
java.base/jdk.internal.util.Preconditions.outOfBounds(Preconditions.java:64)
   [info]       at 
java.base/jdk.internal.util.Preconditions.outOfBoundsCheckIndex(Preconditions.java:70)
   [info]       at 
java.base/jdk.internal.util.Preconditions.checkIndex(Preconditions.java:248)
   [info]       at java.base/java.util.Objects.checkIndex(Objects.java:374)
   [info]       at java.base/java.util.ArrayList.get(ArrayList.java:459)
   [info]       at 
org.apache.comet.vector.CometStructVector.getChild(CometStructVector.java:55)
   [info]       at 
org.apache.spark.sql.vectorized.ColumnarRow.getInt(ColumnarRow.java:113)
   [info]       at 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown
 Source)
   [info]       at 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown
 Source)
   [info]       at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
   [info]       at scala.collection.Iterator$$anon$11.next(Iterator.scala:496)
   ```
   
   After this change:
   
   ```
   Job aborted due to stage failure: Task 0 in stage 7.0 failed 1 times, most 
recent failure: Lost task 0.0 in stage 7.0 (TID 9) (10.5.155.10 executor 
driver): org.apache.spark.SparkException: Duplicated field names in Arrow 
Struct are not allowed, got [a, a, b, b].
        at org.apache.spark.sql.comet.util.Utils$.toArrowField(Utils.scala:160)
        at 
org.apache.spark.sql.comet.util.Utils$.$anonfun$toArrowSchema$1(Utils.scala:198)
        at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
        at scala.collection.Iterator.foreach(Iterator.scala:943)
        at scala.collection.Iterator.foreach$(Iterator.scala:943)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
        at scala.collection.IterableLike.foreach(IterableLike.scala:74)
        at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
        at org.apache.spark.sql.types.StructType.foreach(StructType.scala:102)
        at scala.collection.TraversableLike.map(TraversableLike.scala:286)
        at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
        at org.apache.spark.sql.types.StructType.map(StructType.scala:102)
        at org.apache.spark.sql.comet.util.Utils$.toArrowSchema(Utils.scala:197)
        at 
org.apache.spark.sql.comet.execution.arrow.CometArrowConverters$ArrowBatchIterBase.<init>(CometArrowConverters.scala:54)
        at 
org.apache.spark.sql.comet.execution.arrow.CometArrowConverters$RowToArrowBatchIter.<init>(CometArrowConverters.scala:108)
        at 
org.apache.spark.sql.comet.execution.arrow.CometArrowConverters$.rowToArrowBatchIter(CometArrowConverters.scala:140)
        at 
org.apache.spark.sql.comet.CometSparkToColumnarExec.$anonfun$doExecuteColumnar$3(CometSparkToColumnarExec.scala:128)
        at 
org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:893)
        at 
org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:893)
        at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:367)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:331)
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

