Folks,
We recently upgraded to Spark 2.3.1 and started seeing that Spark jobs leave a _temporary directory in S3 even after the write to S3 has finished. The temporary directory is never cleaned up.
We are on Hadoop 2.8. Is there a way to control this?
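One setting that is often tried for leftover _temporary directories is the Hadoop FileOutputCommitter algorithm version. Whether it helps depends on which committer your write path actually uses, so treat this as a sketch under that assumption rather than a confirmed fix (`your_job.py` is a placeholder for your application):

```shell
# Hedged example: switch the FileOutputCommitter to algorithm version 2,
# which moves task output into place at task commit instead of job commit,
# leaving less _temporary state to clean up at the end of the job.
# Verify the effect against your own Spark/Hadoop/S3 setup.
spark-submit \
  --conf spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version=2 \
  your_job.py
```

The same property can also be set in spark-defaults.conf or on the SparkSession builder instead of the command line.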
--
Sent from: http://apache-spark-user-list
Can anyone please take a look and share your thoughts here?
All,
We have a problem with the Spark Worker. The worker goes down whenever we cannot get the Spark master up and running before starting the worker.
It does try to re-register with the master (ReregisterWithMaster) for up to 16 attempts:
1. The first 6 attempts are made at intervals of approximately 10 seconds.
2. The next 10 at
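The two-phase retry schedule described above can be sketched generically. This is a hypothetical illustration of the pattern, not Spark's actual Worker code; the 60-second interval for the later attempts is an assumed placeholder, since the original message is truncated before stating it:

```python
import time

def register_with_master(connect, initial_attempts=6, later_attempts=10,
                         initial_interval=10, later_interval=60,
                         sleep=time.sleep):
    """Try connect() on a two-phase retry schedule.

    Makes `initial_attempts` tries spaced `initial_interval` seconds apart,
    then `later_attempts` more spaced `later_interval` seconds apart
    (assumed value; the real interval is unknown here). Returns the
    1-based attempt number on success, or None if every attempt fails.
    """
    total = initial_attempts + later_attempts
    for attempt in range(1, total + 1):
        if connect():
            return attempt
        if attempt < total:
            # Short interval during the first phase, longer afterwards.
            sleep(initial_interval if attempt <= initial_attempts
                  else later_interval)
    return None
```

Injecting `sleep` makes the schedule testable without real delays; a worker that gives up after the final attempt, as described above, corresponds to the `None` return.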