Hey Mayur,

Thanks for the suggestion; I didn't realize that was configurable. I don't think I'm running out of memory, though these errors do seem to go away when I turn off the spark.streaming.unpersist configuration and use spark.cleaner.ttl instead. Do you know of any known issues with the unpersist option?
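For anyone following along, here's roughly how I'm setting these (a config sketch, not my exact code; the `spark.akka.askTimeout` key is from the 0.9.x docs and worth double-checking for your version):

```scala
import org.apache.spark.SparkConf

// Sketch of the settings being discussed in this thread, for Spark 0.9.x.
val conf = new SparkConf()
  .setAppName("kafka-streaming-job")
  .set("spark.streaming.unpersist", "false") // disable automatic RDD unpersist
  .set("spark.cleaner.ttl", "3600")          // fall back to TTL-based cleanup (seconds);
                                             // must exceed any window/retention you use
  .set("spark.akka.askTimeout", "60")        // seconds; the ask timeout behind the
                                             // AskTimeoutException in the trace below
```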
On Sat, May 31, 2014 at 12:17 AM, Mayur Rustagi <mayur.rust...@gmail.com> wrote:

> You can increase your akka timeout, which should give you some more life.
> Are you running out of memory by any chance?
>
> Mayur Rustagi
> Ph: +1 (760) 203 3257
> http://www.sigmoidanalytics.com
> @mayur_rustagi <https://twitter.com/mayur_rustagi>
>
> On Sat, May 31, 2014 at 6:52 AM, Michael Chang <m...@tellapart.com> wrote:
>
>> I'm running some Kafka streaming Spark contexts (on 0.9.1), and they
>> seem to be dying after 10 or so minutes with a lot of these errors. I
>> can't really tell what's going on here, except that maybe the driver is
>> unresponsive somehow? Has anyone seen this before?
>>
>> 14/05/31 01:13:30 ERROR BlockManagerMaster: Failed to remove RDD 12635
>>
>> akka.pattern.AskTimeoutException: Timed out
>>     at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:334)
>>     at akka.actor.Scheduler$$anon$11.run(Scheduler.scala:118)
>>     at scala.concurrent.Future$InternalCallbackExecutor$.scala$concurrent$Future$InternalCallbackExecutor$$unbatchedExecute(Future.scala:691)
>>     at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:688)
>>     at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(Scheduler.scala:455)
>>     at akka.actor.LightArrayRevolverScheduler$$anon$12.executeBucket$1(Scheduler.scala:407)
>>     at akka.actor.LightArrayRevolverScheduler$$anon$12.nextTick(Scheduler.scala:411)
>>     at akka.actor.LightArrayRevolverScheduler$$anon$12.run(Scheduler.scala:363)
>>     at java.lang.Thread.run(Thread.java:744)
>>
>> Thanks,
>>
>> Mike