> From what you said, I guess that skipping faulty lines will be possible in
> later versions?
>
>
> Kind regards,
> Simon
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/jsonFile-function-in-SQLContext-does-not-work-tp8273p8293.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>> xecutor.java:908)
>> java.lang.Thread.run(Thread.java:662) Driver stacktrace:
>>
>>
>>
>> Is the only possible reason that some of these 4.3 million JSON objects
>> are not valid JSON, or could there be another explanation?
>> And if that is the reason, is there some way to tell the function to just
>> skip faulty lines?
>>
>>
>> Thanks,
>> Durin
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-spark-user-list.1001560.n3.nabble.com/jsonFile-function-in-SQLContext-does-not-work-tp8273p8278.html
>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>
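Until jsonFile itself can skip bad records, the usual workaround is to pre-filter the text before the JSON parser ever sees it. A minimal sketch of such a line-level filter (plain Python, no Spark dependency; the Spark usage in the trailing comment is an assumption about the 1.0-era SQLContext API, not a documented jsonFile option):

```python
import json

def is_valid_json_object(line):
    """Keep only lines that parse as a single JSON object."""
    stripped = line.strip()
    if not stripped:
        # A blank line is the empty string, which the parser rejects.
        return False
    try:
        # Accept only complete JSON objects, not bare arrays/scalars.
        return isinstance(json.loads(stripped), dict)
    except ValueError:
        return False

# With Spark (assumption, 1.0-era API):
#   lines = sc.textFile(path).filter(is_valid_json_object)
#   schemaRDD = sqlContext.jsonRDD(lines)
```

This trades a second parse per line for robustness: faulty lines are dropped up front instead of failing the whole job.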
> def jsonFile(path: String): SchemaRDD
> Loads a JSON file (one object per line), returning the result as a
> SchemaRDD.
>
> I would assume this should work. However, executing this code returns this
> error:
>
> 14/06/25 10:05:09 WARN scheduler.TaskSetManager: Lost TID 11 (task 0.0:11)
> 14/06/25 10:05:09 WARN scheduler.TaskSetManager: Loss was due to
> com.fasterxml.jackson.databind.JsonMappingException
> com.fasterxml.jackson.databind.JsonMappingException: No content to map due
> to end-of-input
> at [Source: java.io.StringReader@238df2e4; line: 1, column: 1]
> at
> com.fasterxml.jackson.databind.JsonMappingException.from(JsonMappingException.java:164)
> ...
>
>
> Does anyone know where the problem lies?
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/jsonFile-function-in-SQLContext-does-not-work-tp8273.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
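The location in the exception above is the giveaway: "line 1, column 1" of a StringReader means Jackson was handed an empty string. Since jsonFile expects one JSON object per line, a single blank line in the input file reproduces exactly this failure. A minimal sketch of the corresponding fix (plain Python; filtering before parsing is an assumed workaround, not a jsonFile feature):

```python
def drop_blank_lines(lines):
    # One JSON object per line: a blank line yields the empty string,
    # which Jackson rejects with
    # "No content to map due to end-of-input".
    return [line for line in lines if line.strip()]

# With Spark (assumption, 1.0-era API):
#   clean = sc.textFile(path).filter(lambda l: l.strip())
#   schemaRDD = sqlContext.jsonRDD(clean)
```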