The reason is simply that the JSON data source depends on Hadoop's
LineRecordReader when it first reads the files. That reader splits input
at line boundaries, so each line has to be a complete JSON object on its
own (the JSON Lines format) rather than part of one multi-line document.
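As an illustration (a minimal sketch in plain Python, not Spark or Hadoop itself), here is why line-based splitting requires one JSON object per line:

```python
import json

# JSON Lines: one complete JSON object per line, which is what a
# line-based record reader (like Hadoop's LineRecordReader) assumes.
jsonl = '{"name": "a", "value": 1}\n{"name": "b", "value": 2}\n'

# Each line is independently parseable, so the file can be split at
# line boundaries without breaking any record.
records = [json.loads(line) for line in jsonl.splitlines() if line.strip()]
print(records)  # two complete records

# A pretty-printed "standard" JSON document spans multiple lines, so a
# single line taken from it is not valid JSON on its own:
pretty = '{\n  "name": "a",\n  "value": 1\n}'
try:
    json.loads(pretty.splitlines()[0])  # just "{"
except json.JSONDecodeError:
    print("a single line of a multi-line document is not valid JSON")
```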

There is a workaround for this, described in this link:

I hope this is helpful.


2016-10-16 11:20 GMT+09:00 WangJianfei <wangjianfe...@otcaix.iscas.ac.cn>:

> Hi devs:
>    I'm doubt about the design of spark.read.json,  why the json file is not
> a standard json file, who can tell me the internal reason. Any advice is
> appreciated.
> --
> View this message in context: http://apache-spark-developers-list.1001551.n3.nabble.com/Why-the-json-file-used-by-sparkSession-read-json-must-be-a-valid-json-object-per-line-tp19464.html
> Sent from the Apache Spark Developers List mailing list archive at
> Nabble.com.
> ---------------------------------------------------------------------
> To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
