Hi,

I have a Spark Streaming application running on YARN that consumes from a JMS
source. I have checkpointing and the WAL enabled to ensure zero data loss.
However, when I suddenly kill my application and restart it, sometimes it
recovers the data from the WAL, but sometimes it doesn't! In all cases, I
can see the WAL written correctly on HDFS.
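For reference, this is roughly the setup pattern I'm following — a minimal sketch, assuming a standard receiver-based stream; the app name, checkpoint path, and batch interval below are placeholders, not my actual values. As I understand it, recovery only kicks in when the context is built through StreamingContext.getOrCreate against the checkpoint directory:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object JmsStreamingApp {
  // Hypothetical checkpoint location; must be a fault-tolerant FS like HDFS.
  val checkpointDir = "hdfs:///user/walid/checkpoints"

  def createContext(): StreamingContext = {
    val conf = new SparkConf()
      .setAppName("jms-streaming") // placeholder name
      // WAL must be enabled before the context is created
      .set("spark.streaming.receiver.writeAheadLog.enable", "true")
    val ssc = new StreamingContext(conf, Seconds(10))
    ssc.checkpoint(checkpointDir)
    // ... create the JMS receiver stream and output operations here ...
    ssc
  }

  def main(args: Array[String]): Unit = {
    // getOrCreate is what triggers recovery: if a valid checkpoint exists,
    // the old context (and its WAL data) is restored; otherwise
    // createContext() is invoked to build a fresh one.
    val ssc = StreamingContext.getOrCreate(checkpointDir, () => createContext())
    ssc.start()
    ssc.awaitTermination()
  }
}
```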

Can someone explain to me why my WAL is sometimes ignored on restart? Under
what conditions does Spark decide whether or not to recover from the WAL?

Thanks,
Walid.