The 'Failed to parse a value' error was the cause of the execution failure.

Can you share the structure of your JSON file?

Maybe try the latest 1.6.0 RC to see if the problem goes away.
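
For what it's worth, that exception usually means schema inference produced an empty struct, StructType(), for some field (e.g. it only ever saw {} there), while another record has a plain string in the same position. A quick way to check in spark-shell (just a sketch; the path is from your mail, and the records in the comments are a hypothetical mismatch, not your actual data):

  val df = sqlContext.read.json("/home/eranw/Workspace/JSON/sample/sample2.json")
  // Look for a field printed as a struct with no children -- that would be
  // the empty StructType() the parser later trips over.
  df.printSchema()

  // Hypothetical input that would reproduce the error: the same field is
  // an empty object in one record and a string in the next.
  //   {"payload": {}}
  //   {"payload": "free text"}

If that is the case, making the field's type consistent across records (or supplying an explicit schema) should fix it.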

Thanks

On Sat, Dec 19, 2015 at 1:55 PM, Eran Witkon <eranwit...@gmail.com> wrote:

> Hi,
> I tried the following code in spark-shell on spark1.5.2:
>
> val df = sqlContext.read.json("/home/eranw/Workspace/JSON/sample/sample2.json")
> df.count()
>
> 15/12/19 23:49:40 ERROR Executor: Managed memory leak detected; size = 67108864 bytes, TID = 3
> 15/12/19 23:49:40 ERROR Executor: Exception in task 0.0 in stage 4.0 (TID 3)
> java.lang.RuntimeException: Failed to parse a value for data type StructType() (current token: VALUE_STRING).
> at scala.sys.package$.error(package.scala:27)
> at org.apache.spark.sql.execution.datasources.json.JacksonParser$.convertField(JacksonParser.scala:172)
> at org.apache.spark.sql.execution.datasources.json.JacksonParser$$anonfun$parseJson$1$$anonfun$apply$1.apply(JacksonParser.scala:251)
> at org.apache.spark.sql.execution.datasources.json.JacksonParser$$anonfun$parseJson$1$$anonfun$apply$1.apply(JacksonParser.scala:246)
> at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
> at org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.processInputs(TungstenAggregationIterator.scala:365)
> at org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.start(TungstenAggregationIterator.scala:622)
> at org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1.org$apache$spark$sql$execution$aggregate$TungstenAggregate$$anonfun$$executePartition$1(TungstenAggregate.scala:110)
>
> Am I doing something wrong?
> Eran
>
