[ https://issues.apache.org/jira/browse/LIVY-379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gyorgy Gal updated LIVY-379:
----------------------------
    Fix Version/s: 0.10.0
                       (was: 0.9.0)

This issue has been moved to the 0.10.0 release as part of a bulk update. If 
you feel it was moved inappropriately, feel free to provide justification 
and reset the Fix Version to 0.9.0.

> When a Livy job contains an Option field, an ExecutionException is thrown
> -------------------------------------------------------------------------
>
>                 Key: LIVY-379
>                 URL: https://issues.apache.org/jira/browse/LIVY-379
>             Project: Livy
>          Issue Type: Bug
>          Components: API, RSC
>    Affects Versions: 0.4.0
>         Environment: spark 2.1.1 
> hadoop 2.7.3
> hbase 1.2.1
> livy 0.4.0
> chill 0.8.0
>            Reporter: zzzhy
>            Priority: Major
>             Fix For: 0.10.0
>
>         Attachments: image-2017-07-12-16-18-55-988.png
>
>
> !image-2017-07-12-16-18-55-988.png|thumbnail! 
> The error stack trace:
> {code:java}
> java.util.concurrent.ExecutionException, cause -> scala.MatchError: None (of class scala.None$)
> org.apache.spark.storage.BlockManager.putIterator(BlockManager.scala:732)
> org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:1281)
> org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:122)
> org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:88)
> org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
> org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:56)
> org.apache.spark.SparkContext.broadcast(SparkContext.scala:1410)
> org.apache.spark.rdd.NewHadoopRDD.<init>(NewHadoopRDD.scala:78)
> org.apache.spark.SparkContext$$anonfun$newAPIHadoopRDD$1.apply(SparkContext.scala:1142)
> org.apache.spark.SparkContext$$anonfun$newAPIHadoopRDD$1.apply(SparkContext.scala:1132)
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
> org.apache.spark.SparkContext.withScope(SparkContext.scala:701)
> org.apache.spark.SparkContext.newAPIHadoopRDD(SparkContext.scala:1132)
> ...
> com.cloudera.livy.rsc.driver.BypassJob.call(BypassJob.java:40)
> com.cloudera.livy.rsc.driver.BypassJob.call(BypassJob.java:27)
> com.cloudera.livy.rsc.driver.JobWrapper.call(JobWrapper.java:57)
> com.cloudera.livy.rsc.driver.BypassJobWrapper.call(BypassJobWrapper.java:42)
> com.cloudera.livy.rsc.driver.BypassJobWrapper.call(BypassJobWrapper.java:27)
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> java.lang.Thread.run(Thread.java:745)
> {code}
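>
> For reference, a {{scala.MatchError: None}} is raised whenever a pattern match covers {{Some(_)}} but not {{None}}. A minimal, self-contained sketch of the error class (this is an illustration, not Livy or Spark code; here the job's Option field ends up as an unmatched {{None}} somewhere in the serialization/broadcast path shown above):
> {code:scala}
> // A match that omits the None case throws scala.MatchError
> // when the Option is empty.
> def describe(v: Option[String]): String = v match {
>   case Some(s) => s
>   // no `case None =>` branch: passing None throws
>   // scala.MatchError: None (of class scala.None$)
> }
>
> describe(Some("ok")) // returns "ok"
> describe(None)       // throws scala.MatchError
> {code}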



--
This message was sent by Atlassian Jira
(v8.20.10#820010)