Github user maropu commented on the issue:
https://github.com/apache/spark/pull/16928
@HyukjinKwon The current patch behaves a bit differently between the csv
and json cases when `_corrupt_record` has a type other than `StringType`;
in the json case it hits `requirement failed` on the executor side and, in
the csv case, it hits an `AnalysisException` on the driver side (see:
https://github.com/apache/spark/pull/16928/files#diff-a549ac2e19ee7486911e2e6403444d9dR109).
If we need to keep all the json behaviours, we need to drop the code that
throws the `AnalysisException` in the csv case. WDYT?
A json case:
```
scala> Seq("""{"a": "a", "b" :
1}""").toDF().write.text("/Users/maropu/Desktop/data")
scala> val dataSchema = StructType(StructField("a", IntegerType, true) :: StructField("b", StringType, true) :: Nil)
scala> spark.read.schema(dataSchema.add("_corrupt_record", StringType)).option("mode", "PERMISSIVE").json("/Users/maropu/Desktop/data").show()
+----+----+-------------------+
| a| b| _corrupt_record|
+----+----+-------------------+
|null|null|{"a": "a", "b" : 1}|
+----+----+-------------------+
scala> spark.read.schema(dataSchema.add("_corrupt_record", IntegerType)).option("mode", "PERMISSIVE").json("/Users/maropu/Desktop/data").show()
17/02/21 02:18:04 ERROR Executor: Exception in task 0.0 in stage 5.0 (TID 8)
java.lang.IllegalArgumentException: requirement failed
at scala.Predef$.require(Predef.scala:212)
at org.apache.spark.sql.catalyst.json.JacksonParser$$anonfun$1.apply$mcVI$sp(JacksonParser.scala:61)
at org.apache.spark.sql.catalyst.json.JacksonParser$$anonfun$1.apply(JacksonParser.scala:61)
at org.apache.spark.sql.catalyst.json.JacksonParser$$anonfun$1.apply(JacksonParser.scala:61)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.sql.catalyst.json.JacksonParser.<init>(JacksonParser.scala:61)
at org.apache.spark.sql.execution.datasources.json.JsonFileFormat$$anonfun$buildReader$1.apply(JsonFileFormat.scala:106)
at org.apache.spark.sql.execution.datasources.json.JsonFileFormat$$anonfun$buildReader$1.apply(JsonFileFormat.scala:105)
```
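For context, the trace suggests the failure comes from a bare `require` evaluated while constructing `JacksonParser` on an executor. A rough sketch of that check (the identifiers are my guesses from the frames above, not the actual code):
```
import org.apache.spark.sql.types.{StringType, StructType}

// Sketch of the executor-side check implied by JacksonParser.scala:61 in the
// trace above (names are assumptions): a bare require with no message, which
// is why the error is just "requirement failed".
def checkCorruptFieldType(schema: StructType, columnNameOfCorruptRecord: String): Unit = {
  schema.find(_.name == columnNameOfCorruptRecord).foreach { field =>
    require(field.dataType == StringType)
  }
}
```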
A csv case:
```
scala> Seq("0,2013-111-11
12:13:14").toDF().write.text("/Users/maropu/Desktop/data")
scala> val dataSchema = StructType(StructField("a", IntegerType, true) :: StructField("b", TimestampType, true) :: Nil)
scala> spark.read.schema(dataSchema.add("_corrupt_record", StringType)).option("mode", "PERMISSIVE").csv("/Users/maropu/Desktop/data").show()
+----+----+--------------------+
| a| b| _corrupt_record|
+----+----+--------------------+
|null|null|0,2013-111-11 12:...|
+----+----+--------------------+
scala> spark.read.schema(dataSchema.add("_corrupt_record", IntegerType)).option("mode", "PERMISSIVE").csv("/Users/maropu/Desktop/data").show()
org.apache.spark.sql.AnalysisException: A field for corrupt records must be a string type and nullable;
at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat$$anonfun$buildReader$1.apply$mcVI$sp(CSVFileFormat.scala:112)
at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat$$anonfun$buildReader$1.apply(CSVFileFormat.scala:109)
at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat$$anonfun$buildReader$1.apply(CSVFileFormat.scala:109)
at scala.Option.map(Option.scala:146)
```
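By contrast, the csv path validates the schema up front on the driver, which is why no task even starts. A minimal sketch of that kind of check (the helper name is hypothetical, and I use `require` here because `AnalysisException` is not constructible outside the sql package):
```
import org.apache.spark.sql.types.{StringType, StructType}

// Hypothetical driver-side validation in the spirit of what
// CSVFileFormat.buildReader does before launching any tasks; the real code
// throws AnalysisException with a similar message.
def verifyCorruptRecordField(dataSchema: StructType, columnName: String): Unit = {
  dataSchema.find(_.name == columnName).foreach { field =>
    require(field.dataType == StringType && field.nullable,
      "A field for corrupt records must be a string type and nullable")
  }
}

// Failing here on the driver beats the json behaviour above, where the same
// mistake only surfaces once an executor task constructs the parser.
```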