tanelk commented on pull request #31964:
URL: https://github.com/apache/spark/pull/31964#issuecomment-855834147


   @sarutak 
   
   I ran `git bisect` and it seems that this PR introduced a regression. The following test
   ```
     test("SPARK-XXXXX: special char in column name") {
       withTempDir { dir =>
         val file = new File(dir, "output.csv")
         val fileContent =
           """
             |a / b
             |val1
             |""".stripMargin
   
         FileUtils.writeStringToFile(file, fileContent, StandardCharsets.UTF_8)
   
         spark.read
           .option("header", true)
           .csv(file.toString)
           .where(col("a / b").isNotNull)
           .show()
       }
     }
   ```
   fails with:
   ```
   [info] - SPARK-XXXXX: special char in column name *** FAILED *** (2 seconds, 547 milliseconds)
   [info]   org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1) (archlinux executor driver): java.lang.IllegalArgumentException: `a / b` does not exist. Available: a / b
   [info]       at org.apache.spark.sql.types.StructType.$anonfun$fieldIndex$1(StructType.scala:306)
   [info]       at scala.collection.immutable.Map$Map1.getOrElse(Map.scala:119)
   [info]       at org.apache.spark.sql.types.StructType.fieldIndex(StructType.scala:305)
   [info]       at org.apache.spark.sql.catalyst.OrderedFilters.$anonfun$predicates$4(OrderedFilters.scala:61)
   [info]       at org.apache.spark.sql.catalyst.OrderedFilters.$anonfun$predicates$4$adapted(OrderedFilters.scala:61)
   [info]       at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
   [info]       at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
   [info]       at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
   [info]       at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
   [info]       at scala.collection.TraversableLike.map(TraversableLike.scala:238)
   [info]       at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
   [info]       at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:198)
   [info]       at org.apache.spark.sql.catalyst.OrderedFilters.$anonfun$predicates$3(OrderedFilters.scala:61)
   [info]       at org.apache.spark.sql.catalyst.OrderedFilters.$anonfun$predicates$3$adapted(OrderedFilters.scala:50)
   [info]       at scala.collection.immutable.List.foreach(List.scala:392)
   [info]       at org.apache.spark.sql.catalyst.OrderedFilters.<init>(OrderedFilters.scala:50)
   [info]       at org.apache.spark.sql.catalyst.csv.UnivocityParser.<init>(UnivocityParser.scala:103)
   [info]       at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat.$anonfun$buildReader$1(CSVFileFormat.scala:138)
   [info]       at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:148)
   [info]       at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:133)
   [info]       at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:117)
   [info]       at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:165)
   [info]       at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:94)
   [info]       at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
   [info]       at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
   [info]       at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
   [info]       at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
   [info]       at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:344)
   [info]       at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
   [info]       at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
   [info]       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
   [info]       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
   [info]       at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
   [info]       at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
   [info]       at org.apache.spark.scheduler.Task.run(Task.scala:131)
   [info]       at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
   [info]       at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437)
   [info]       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
   [info]       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   [info]       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   [info]       at java.lang.Thread.run(Thread.java:748)
   ```
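
   For what it's worth, the error message reads as if the filter that reaches the parser carries the column name already wrapped in backticks, while the data schema holds the raw name. A minimal sketch of what `StructType.fieldIndex` then sees (my reading of the trace, not a confirmed root cause):
   ```scala
   import org.apache.spark.sql.types.{StringType, StructField, StructType}

   // The CSV schema inferred from the header holds the raw column name.
   val schema = StructType(Seq(StructField("a / b", StringType)))

   schema.fieldIndex("a / b")    // resolves, returns 0
   schema.fieldIndex("`a / b`")  // throws IllegalArgumentException:
                                 //   `a / b` does not exist. Available: a / b
   ```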
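
   I would also expect the same read to succeed with CSV filter pushdown disabled, since `UnivocityParser` then receives no filters and `OrderedFilters` never performs the failing `fieldIndex` lookup. A workaround sketch, not verified here, reusing `spark`, `file` and `col` from the test above:
   ```scala
   // Assumption: with pushdown off, no filters reach the CSV parser,
   // so the quoted-vs-raw name mismatch is never exercised.
   spark.conf.set("spark.sql.csv.filterPushdown.enabled", "false")

   spark.read
     .option("header", true)
     .csv(file.toString)
     .where(col("a / b").isNotNull)
     .show()
   ```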

