Github user MaxGekk commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21192#discussion_r185197844
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JSONOptions.scala ---
    @@ -120,8 +120,26 @@ private[sql] class JSONOptions(
           enc
       }
     
    -  val lineSeparatorInRead: Option[Array[Byte]] = lineSeparator.map { lineSep =>
    -    lineSep.getBytes(encoding.getOrElse("UTF-8"))
    +  /**
    +   * A sequence of bytes between two consecutive json records in read.
    +   * Format of the `lineSep` option is:
    +   *   selector (1 char) + separator spec (any length) | sequence of chars
    +   *
    +   * Currently the following selectors are supported:
    +   * - 'x' + sequence of bytes in hexadecimal format. For example: "x0a 0d".
    --- End diff --
    
    Your approach looks interesting, but your PRs https://github.com/apache/spark/pull/20125 and https://github.com/apache/spark/pull/16611 got stuck. Were there any specific reasons for that?
    
    > Maybe, we could try a similar approach.
    
    We could, but it would require extending the public API more broadly than these changes need. Do you think that makes sense? Actually, I would consider `lineSep` as a string in JSON format, like:
    
    ```scala
    .option("lineSep", "[0x00, 0x0A, 0x00, 0x0D]")
    ```
    or
    ```scala
    .option("lineSep", """{"sep": "\r\n", "encoding": "UTF-16LE"}""")
    ```
    but for the common case of a simple `lineSep`, this approach looks pretty redundant:
    ```scala
    .option("lineSep", ",") vs .option("lineSep", """{"sep": ","}""")
    ```
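
    For illustration, here is a minimal sketch of how the selector format from the diff (a `'x'` selector followed by space-separated hex bytes, falling back to a plain character sequence) could be parsed. The object and method names are hypothetical, not Spark's actual implementation:

    ```scala
    // Hypothetical sketch: decode the proposed `lineSep` option format.
    // "x0a 0d" -> bytes 0x0A, 0x0D; any other string is encoded with the charset.
    object LineSepParser {
      def parse(lineSep: String, encoding: String = "UTF-8"): Array[Byte] = {
        if (lineSep.startsWith("x")) {
          // Hex selector: each whitespace-separated token is one byte.
          lineSep.drop(1).trim.split("\\s+").map(h => Integer.parseInt(h, 16).toByte)
        } else {
          // Plain separator: encode the characters with the configured charset.
          lineSep.getBytes(encoding)
        }
      }
    }
    ```

    With this sketch, `LineSepParser.parse("x0a 0d")` yields the bytes `0x0A, 0x0D`, while `LineSepParser.parse("\r\n")` falls through to the charset-based path.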

