Github user MaxGekk commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22237#discussion_r222811744
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala ---
    @@ -550,59 +550,93 @@ case class JsonToStructs(
           s"Input schema ${nullableSchema.catalogString} must be a struct, an 
array or a map.")
       }
     
    -  // This converts parsed rows to the desired output by the given schema.
    -  @transient
    -  lazy val converter = nullableSchema match {
    -    case _: StructType =>
    -      (rows: Seq[InternalRow]) => if (rows.length == 1) rows.head else null
    -    case _: ArrayType =>
    -      (rows: Seq[InternalRow]) => rows.head.getArray(0)
    -    case _: MapType =>
    -      (rows: Seq[InternalRow]) => rows.head.getMap(0)
    +  abstract class RowParser {
    --- End diff ---
    
    Sure, I just didn't want to mix it into other classes; sub-classes should extend only `RowParser`.
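
    For readers following along, here is a minimal sketch of the kind of `RowParser` hierarchy being discussed, with one subclass per supported schema kind. The subclass names and the `forSchema` helper below are hypothetical illustrations for this comment, not the code in the PR:

    ```scala
    import org.apache.spark.sql.catalyst.InternalRow
    import org.apache.spark.sql.catalyst.util.{ArrayData, MapData}
    import org.apache.spark.sql.types.{ArrayType, DataType, MapType, StructType}

    // Base of the small hierarchy: each subclass turns the rows produced by
    // the JSON parser into the value shape required by the target schema.
    abstract class RowParser {
      def convert(rows: Seq[InternalRow]): Any
    }

    // One subclass per supported schema kind (names here are illustrative).
    class StructRowParser extends RowParser {
      // A struct result is a single row; anything else yields null.
      override def convert(rows: Seq[InternalRow]): InternalRow =
        if (rows.length == 1) rows.head else null
    }

    class ArrayRowParser extends RowParser {
      // An array result is stored in the first (and only) column of the row.
      override def convert(rows: Seq[InternalRow]): ArrayData =
        rows.head.getArray(0)
    }

    class MapRowParser extends RowParser {
      // Likewise, a map result lives in the first column of the row.
      override def convert(rows: Seq[InternalRow]): MapData =
        rows.head.getMap(0)
    }

    object RowParser {
      // Mirrors the old pattern match on nullableSchema when picking a converter.
      def forSchema(schema: DataType): RowParser = schema match {
        case _: StructType => new StructRowParser
        case _: ArrayType  => new ArrayRowParser
        case _: MapType    => new MapRowParser
      }
    }
    ```

    The design point is that each conversion lives in its own small subclass rooted at `RowParser`, rather than being mixed into unrelated classes.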


---
