Github user sr11231 commented on the issue:

    https://github.com/apache/spark/pull/17758
  
    Still, when you load the JSON file through `Dataset[String]` by doing 
`spark.read.json(spark.read.textFile("json.file"))`, Spark does not throw any 
error and you get a DataFrame with duplicate columns. Is that expected 
behaviour, or is it actually a bug?
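
    A minimal reproduction sketch of the call described above (the file name 
`"json.file"` and its contents are assumptions for illustration; the file would 
need to contain a record with a duplicate key, e.g. `{"a": 1, "a": 2}`, and a 
local Spark runtime is required):

    ```scala
    import org.apache.spark.sql.SparkSession

    object DuplicateColumnRepro {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .master("local[*]")
          .appName("duplicate-column-repro")
          .getOrCreate()

        // Load the raw JSON lines as a Dataset[String] first,
        // then parse them with the JSON reader, as in the comment above.
        val ds = spark.read.textFile("json.file")
        val df = spark.read.json(ds)

        // If the reported behaviour holds, no error is thrown here and the
        // printed schema contains the same column name twice.
        df.printSchema()

        spark.stop()
      }
    }
    ```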

