GitHub user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16928#discussion_r101911027
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala ---
    @@ -45,24 +46,35 @@ private[csv] class UnivocityParser(
       // A `ValueConverter` is responsible for converting the given value to a desired type.
       private type ValueConverter = String => Any
     
    +  private val inputSchema = columnNameOfCorruptRecord.map { fn =>
    +    StructType(schema.filter(_.name != fn))
    +  }.getOrElse(schema)
    +
       private val valueConverters =
    -    schema.map(f => makeConverter(f.name, f.dataType, f.nullable, options)).toArray
    +    inputSchema.map(f => makeConverter(f.name, f.dataType, f.nullable, options)).toArray
     
       private val parser = new CsvParser(options.asParserSettings)
     
       private var numMalformedRecords = 0
     
       private val row = new GenericInternalRow(requiredSchema.length)
     
    -  private val indexArr: Array[Int] = {
    +  private val shouldHandleCorruptRecord = columnNameOfCorruptRecord.isDefined
    +  private val corruptIndex = columnNameOfCorruptRecord.flatMap { fn =>
    +    requiredSchema.getFieldIndex(fn)
    +  }.getOrElse(-1)
    +
    +  private val indexArr: Array[(Int, Int)] = {
         val fields = if (options.dropMalformed) {
          // If `dropMalformed` is enabled, then it needs to parse all the values
           // so that we can decide which row is malformed.
           requiredSchema ++ schema.filterNot(requiredSchema.contains(_))
         } else {
           requiredSchema
         }
    -    fields.map(schema.indexOf(_: StructField)).toArray
    +    fields.zipWithIndex.filter { case (_, i) => i != corruptIndex }.map { case (f, i) =>
    +      (inputSchema.indexOf(f), i)
    +    }.toArray
    --- End diff --
    
    I see, I got it. You meant to support an arbitrarily located corrupt-record column. CSV parsing depends on the original schema's order as initially loaded, so IMHO it is okay to force this field to be located at the end. I can't imagine a user adding the corrupt column in the middle of the schema.
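
    For context, a minimal sketch of the usage I have in mind, with the corrupt column appended last. The path, field names, and `local[*]` master are hypothetical; `columnNameOfCorruptRecord` is the option this PR wires up for CSV:

        import org.apache.spark.sql.SparkSession
        import org.apache.spark.sql.types._

        val spark = SparkSession.builder().master("local[*]").getOrCreate()

        // Data columns first, corrupt-record column appended at the end.
        val schema = new StructType()
          .add("id", IntegerType)
          .add("name", StringType)
          .add("_corrupt_record", StringType)

        val df = spark.read
          .option("mode", "PERMISSIVE")
          .option("columnNameOfCorruptRecord", "_corrupt_record")
          .schema(schema)
          .csv("/tmp/people.csv")  // hypothetical input path

        df.show()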
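
    And to double-check we mean the same thing by "dependent on the original schema's order", here is a standalone sketch of what the new `indexArr` computes, with hypothetical field names. `fieldNames.indexOf` stands in for `getFieldIndex`, which is `private[sql]`, and `fields` is just `requiredSchema` since `dropMalformed` is off here:

        import org.apache.spark.sql.types._

        val schema = new StructType()
          .add("a", StringType).add("b", StringType).add("_corrupt_record", StringType)
        val requiredSchema = new StructType()
          .add("b", StringType).add("_corrupt_record", StringType)
        val columnNameOfCorruptRecord = Some("_corrupt_record")

        // Tokens are parsed against the data columns only, so drop the corrupt column.
        val inputSchema = columnNameOfCorruptRecord.map { fn =>
          StructType(schema.filter(_.name != fn))
        }.getOrElse(schema)

        val corruptIndex = columnNameOfCorruptRecord
          .map(fn => requiredSchema.fieldNames.indexOf(fn))
          .getOrElse(-1)

        // Pairs of (token index in inputSchema, slot in the output row), skipping corrupt.
        val indexArr: Array[(Int, Int)] = requiredSchema.zipWithIndex.filter {
          case (_, i) => i != corruptIndex
        }.map { case (f, i) => (inputSchema.indexOf(f), i) }.toArray

        println(indexArr.mkString(", "))  // prints (1,0): "b" is token 1, row slot 0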

