Hi
Are we talking about Spark Streaming here?
Depending on what is streamed, you can work out an exit strategy through
the total number of messages streamed in, or through a time window: monitor
the duration and exit if the duration exceeds the window allocated (not to
be confused with Spark Streaming's windowing operations).
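The time-window exit strategy described above can be sketched in plain Python. This is a minimal illustration of the idea only; `process_batch` and `batches` are hypothetical stand-ins for whatever streaming handler and source you actually use:

```python
import time

def run_with_deadline(process_batch, batches, window_seconds):
    """Consume batches, but exit once the allotted time window is exceeded.

    process_batch and batches are hypothetical stand-ins for the real
    streaming handler and source.
    """
    start = time.monotonic()
    seen = 0
    for batch in batches:
        # duration > window allocated: time to exit
        if time.monotonic() - start > window_seconds:
            break
        process_batch(batch)
        seen += 1
    return seen
```

The count-based variant works the same way, comparing `seen` against a total-message budget instead of checking the clock.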
Hey all,
I have created a Spark job that runs successfully, but if I do not call sc.stop()
at the end, the job hangs. It shows some "cleaned accumulator 0" messages but
never finishes.
I intend to use these jobs in production via spark-submit and schedule them in
cron.
Is that the best practice?
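For a batch job launched from cron via spark-submit, a common pattern is to guarantee sc.stop() runs even when the job body throws, using try/finally. A minimal sketch of that pattern; `FakeContext` here is a stand-in so the example runs anywhere, not the real SparkContext API beyond stop():

```python
class FakeContext:
    """Stand-in for SparkContext; only the stop() contract matters here."""
    def __init__(self):
        self.stopped = False

    def stop(self):
        self.stopped = True

def run_job(sc, job):
    """Run the job body and always stop the context, even on failure."""
    try:
        return job(sc)
    finally:
        # without stop(), non-daemon threads can keep the driver alive
        sc.stop()
```

With a real SparkContext, wrapping the driver logic the same way lets the cron-scheduled spark-submit process exit cleanly instead of hanging.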
Hadoop 2.4.0. Here are the relevant logs from executor 1136:
16/03/18 21:26:58 INFO mapred.SparkHadoopMapRedUtil: attempt_201603182126_0276_m_000484_0: Committed
16/03/18 21:26:58 INFO executor.Executor: Finished task 484.0 in stage 276.0 (TID 59663). 1080 bytes result sent to driver
16/03/18
Which version of Hadoop do you use?
> Requesting to kill executor(s) 1136
Can you find more information on executor 1136 ?
Thanks
On Fri, Mar 18, 2016 at 4:16 PM, Nezih Yigitbasi <
nyigitb...@netflix.com.invalid> wrote:
> Hi Spark experts,
> I am using Spark 1.5.2 on YARN with dynamic
Hi Spark experts,
I am using Spark 1.5.2 on YARN with dynamic allocation enabled. I see in
the driver/application master logs that the app is marked as SUCCEEDED and
then SparkContext.stop() is called. However, this stop sequence takes > 10
minutes to complete, and the YARN resource manager kills the
Hello Spark developers,
After upgrading to Spark 1.4 on Mesos 0.22.1, existing code started to throw
this exception when calling sparkContext.stop():
(SparkListenerBus) [ERROR -
org.apache.spark.Logging$class.logError(Logging.scala:96)] Listener
EventLoggingListener threw an exception
What is it for? When do we call it?
Thanks!
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/SparkContext-stop-tp17826.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
It is used to shut down the context when you're done with it, but if you're
using a context for the lifetime of your application I don't think it
matters.
I use this in my unit tests, because they start up local contexts and you
can't have multiple local contexts open, so each test must stop its
You don't have to call it if you just exit your application, but it's useful
for example in unit tests if you want to create and shut down a separate
SparkContext for each test.
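The per-test create/stop discipline described above can be sketched with the standard unittest lifecycle hooks. `StubContext` is a stand-in so this runs without Spark; in real tests you would construct a local SparkContext in setUp instead:

```python
import unittest

class StubContext:
    """Stand-in for a local SparkContext; only stop() matters here."""
    def __init__(self):
        self.stopped = False

    def stop(self):
        self.stopped = True

class MyJobTest(unittest.TestCase):
    def setUp(self):
        # one context per test; only one local context may be active at a time
        self.sc = StubContext()

    def tearDown(self):
        # stop it so the next test can create its own context
        self.sc.stop()

    def test_context_is_fresh(self):
        self.assertFalse(self.sc.stopped)
```

The key point is that tearDown always runs, so a failing test still releases its context for the next one.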
Matei
On Oct 31, 2014, at 10:39 AM, Evan R. Sparks <evan.spa...@gmail.com> wrote:
In cluster settings if you don't
Actually, if you don't call SparkContext.stop(), the event log
information that is used by the history server will be incomplete, and
your application will never show up in the history server's UI.
If you don't use that functionality, then you're probably ok not
calling it as long as your