> at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
> at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
> at scala.concurrent.Await$.result(package.scala:…)
> at org.apache.spark.streaming.receiver.ReceiverSupervisorImpl.pushAndReportBlock(ReceiverSupervisorImpl.scala:166)
> at org.apache.spark.streaming.receiver.ReceiverSupervisorImpl.pushArrayBuffer(ReceiverSupervisorImpl.scala:127)
> at org.apache.spark.streaming.receiver.ReceiverSupervisorImpl$$anon$2.onPushBlock(ReceiverSupervisorImpl.scala:112)
> at org.apache.spark.streaming.receiver.BlockGenerator.pushBlock(BlockGenerator.scala:…)
> at org.apache.spark.streaming.receiver.BlockGenerator.org$apache$spark$streaming$receiver$BlockGenerator$$keepPushingBlocks(BlockGenerator.scala:155)
> at org.apache.spark.streaming.receiver.BlockGenerator$$anon$1.run(BlockGenerator.scala:87)
>
> Has anyone run into this before?
>
> --
> View this message in context: htt…-Error-in-block-pushing-thread-tp22356.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
I am running a standalone Spark Streaming cluster, connected to multiple
RabbitMQ endpoints. The application will run for 20-30 minutes before
raising the following error:

> WARN 2015-04-01 21:00:53,944 org.apache.spark.storage.BlockManagerMaster.logWarning.71: Failed to remove RDD 22 - Ask time…
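
One thing worth checking (a sketch, not a confirmed fix): in Spark 1.x, both the BlockManagerMaster "ask" in the warning above and the Await.result in the block-pushing thread are bounded by the Akka ask timeout, which defaults to 30 seconds. Raising it can keep a loaded driver from tripping these timeouts while the root cause is investigated. A minimal spark-defaults.conf fragment, assuming Spark 1.3-era configuration keys (the values here are illustrative, not recommendations):

```
# spark-defaults.conf (sketch; keys from Spark 1.x, values illustrative)
# Timeout for Akka "ask" calls, used by BlockManagerMaster RDD/block
# removal and by the receiver's block-pushing thread (default: 30).
spark.akka.askTimeout    120
# Overall Akka communication timeout in seconds (default: 100).
spark.akka.timeout       200
```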