Thanks Saisai. I saw the following in the YARN container logs; I think this is what killed the SparkContext.
16/01/28 17:38:29 INFO yarn.ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
Unknown/unsupported param List(--properties-file, /tmp/hadoop-xactly/nm-local-dir/usercache/nir/appcache/application_1453752281504_3427/container_1453752281504_3427_01_000002/__spark_conf__/__spark_conf__.properties)

Usage: org.apache.spark.deploy.yarn.ApplicationMaster [options]
Options:
  --jar JAR_PATH        Path to your application's JAR file
  --class CLASS_NAME    Name of your application's main class
  --primary-py-file     A main Python file
  --py-files PY_FILES   Comma-separated list of .zip, .egg, or .py files to
                        place on the PYTHONPATH for Python apps.
  --args ARGS           Arguments to be passed to your application's main class.
                        Multiple invocations are possible, each will be passed
                        in order.
  --num-executors NUM   Number of executors to start (Default: 2)
  --executor-cores NUM  Number of cores for the executors (Default: 1)
  --executor-memory MEM Memory per executor (e.g. 1000M, 2G) (Default: 1G)

But if you are saying that creating a SparkContext manually in the application still works, then I'll investigate more on my side. It's just that before I dig deeper, I wanted to know whether it is still supported.

Nir

On Thu, Jan 28, 2016 at 7:47 PM, Saisai Shao <sai.sai.s...@gmail.com> wrote:

> I think I met this problem before; it might be due to a race condition
> during the exit period. The approach you mentioned is still valid; the
> problem only occurs when stopping the application.
>
> Thanks
> Saisai
>
> On Fri, Jan 29, 2016 at 10:22 AM, Nirav Patel <npa...@xactlycorp.com> wrote:
>
>> Hi, we were using Spark 1.3.1 and launching our Spark jobs in yarn-client
>> mode programmatically, by creating SparkConf and SparkContext objects
>> manually. It was inspired by the Spark self-contained application example
>> here:
>>
>> https://spark.apache.org/docs/1.5.2/quick-start.html#self-contained-applications
>>
>> The only additional configuration we would provide was YARN-related,
>> such as executor instances, cores, etc.
>>
>> However, after upgrading to Spark 1.5.2, the application breaks on the
>> line `val sparkContext = new SparkContext(sparkConf)`:
>>
>> 16/01/28 17:38:35 ERROR util.Utils: Uncaught exception in thread main
>> java.lang.NullPointerException
>>   at org.apache.spark.network.netty.NettyBlockTransferService.close(NettyBlockTransferService.scala:152)
>>   at org.apache.spark.storage.BlockManager.stop(BlockManager.scala:1228)
>>   at org.apache.spark.SparkEnv.stop(SparkEnv.scala:100)
>>   at org.apache.spark.SparkContext$$anonfun$stop$12.apply$mcV$sp(SparkContext.scala:1749)
>>   at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1185)
>>   at org.apache.spark.SparkContext.stop(SparkContext.scala:1748)
>>   at org.apache.spark.SparkContext.<init>(SparkContext.scala:593)
>>
>> So is this approach still supposed to work? Or must I use the
>> SparkLauncher class with Spark 1.5.2?
>> Thanks
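
For reference, here is a minimal sketch of the programmatic yarn-client setup discussed in this thread. It assumes the Spark 1.5.x API; the app name and the executor settings are illustrative placeholders, not values taken from the thread.

    import org.apache.spark.{SparkConf, SparkContext}

    object YarnClientExample {
      def main(args: Array[String]): Unit = {
        // Configure everything in code instead of through spark-submit.
        // All values below are placeholders.
        val sparkConf = new SparkConf()
          .setAppName("yarn-client-example")
          .setMaster("yarn-client")              // run against YARN in client mode
          .set("spark.executor.instances", "2")
          .set("spark.executor.cores", "1")
          .set("spark.executor.memory", "1g")

        val sc = new SparkContext(sparkConf)     // the line that fails after the 1.5.2 upgrade
        try {
          // Trivial job to confirm that executors actually come up.
          val n = sc.parallelize(1 to 100).count()
          println(s"count = $n")
        } finally {
          sc.stop()
        }
      }
    }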
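
And a sketch of the SparkLauncher alternative raised at the end of the thread (available since Spark 1.4). The Spark home, jar path, and main class below are placeholders.

    import org.apache.spark.launcher.SparkLauncher

    object LauncherExample {
      def main(args: Array[String]): Unit = {
        // Launches the application as a child spark-submit process.
        val process = new SparkLauncher()
          .setSparkHome("/opt/spark-1.5.2")         // placeholder install path
          .setAppResource("/path/to/your-app.jar")  // placeholder application jar
          .setMainClass("com.example.YourApp")      // placeholder main class
          .setMaster("yarn-client")
          .setConf(SparkLauncher.EXECUTOR_MEMORY, "1g")
          .launch()                                 // returns a java.lang.Process
        val exitCode = process.waitFor()
        println(s"Application finished with exit code $exitCode")
      }
    }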