Any ideas? This is important to us because we use Kafka direct streaming and
save processed offsets manually as the last step in the job, which is how we
achieve at-least-once semantics.
But see what happens when a new batch is scheduled after a job fails:
- suppose we start from offset 10, loaded from ZooKeeper
- the job starts
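For context, the at-least-once pattern being described (process the batch first, persist offsets last) can be sketched as below; `OffsetStore`, `Record`, and `runBatch` are hypothetical stand-ins for illustration, not Spark or Kafka API:

```scala
// Minimal sketch of at-least-once via manual offset commits.
// In a real job the store would be ZooKeeper and the offsets would
// come from each batch's OffsetRange; this is an in-memory stand-in.
case class Record(offset: Long, value: String)

class OffsetStore {
  private var committed: Long = 10L // e.g. loaded from ZooKeeper at startup
  def lastCommitted: Long = committed
  def commit(offset: Long): Unit = { committed = offset }
}

def runBatch(store: OffsetStore, batch: Seq[Record]): Unit = {
  batch.foreach(r => ())               // 1. process the records
  store.commit(batch.last.offset + 1)  // 2. save offsets as the LAST step
}
// If the job crashes between steps 1 and 2, the stored offset does not
// advance, so the same batch is reprocessed next time: at-least-once.
```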
One more thing we can try: before committing an offset, we can verify the
latest offset of that partition (in ZooKeeper) against the fromOffset in the
OffsetRange.
Just a thought...
Let me know if it works..
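That verification amounts to a simple guard: commit only when the stored offset for the partition matches the fromOffset of the range about to be committed. A sketch, where `zkOffsets` stands in for a ZooKeeper lookup and the `OffsetRange` case class merely mirrors the shape of Spark's class rather than being the real API:

```scala
// Sketch of the check: before committing, compare the offset stored
// for (topic, partition) with the fromOffset of the OffsetRange.
case class OffsetRange(topic: String, partition: Int,
                       fromOffset: Long, untilOffset: Long)

def safeToCommit(zkOffsets: Map[(String, Int), Long],
                 range: OffsetRange): Boolean =
  zkOffsets.get((range.topic, range.partition)) match {
    case Some(stored) => stored == range.fromOffset // no gap, no stale replay
    case None         => range.fromOffset == 0L     // nothing stored yet
  }
```

A mismatch means another attempt of the batch (or a different instance) already moved the offset, so the commit should be skipped.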
On Tue, Oct 27, 2015 at 9:00 PM, Cody Koeninger wrote:
If you want to make sure that your offsets are increasing without gaps...
one way to do that is to enforce that invariant when you're saving to your
database. That would probably mean using a real database instead of
ZooKeeper, though.
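Enforcing that invariant in the store itself could look like the sketch below. This is an in-memory stand-in, not a real implementation; a real version would do the check-and-update atomically in a database transaction (e.g. a conditional UPDATE), which is why a real database fits better than ZooKeeper here:

```scala
// Sketch of an offset store that rejects commits which would create
// a gap or move backwards, so stored offsets only ever increase.
class MonotonicOffsetStore {
  private var state = Map.empty[Int, Long] // partition -> next expected offset

  def commit(partition: Int, fromOffset: Long, untilOffset: Long): Boolean = {
    val expected = state.getOrElse(partition, 0L)
    if (fromOffset == expected && untilOffset > fromOffset) {
      state += partition -> untilOffset
      true                 // contiguous, forward-moving commit accepted
    } else {
      false                // gapped or out-of-order commit rejected
    }
  }

  def current(partition: Int): Long = state.getOrElse(partition, 0L)
}
```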
On Tue, Oct 27, 2015 at 4:13 AM, Krot Viacheslav wrote:
Actually a great idea, I didn't even think about that. Thanks a lot!
On Tue, Oct 27, 2015 at 17:29, varun sharma wrote:
> One more thing we can try: before committing an offset, we can verify the
> latest offset of that partition (in ZooKeeper) against the fromOffset in the
> OffsetRange.
Hi all,
I wonder what the correct way is to stop a streaming application if some job
fails?
What I have now:
val ssc = new StreamingContext
ssc.start()
try {
  ssc.awaitTermination()
} catch {
  // awaitTermination rethrows the exception that killed the job
  case e: Throwable => ssc.stop(stopSparkContext = true, stopGracefully = false)
}
It works, but there is one problem...
+1, wanted to do the same.
On Mon, Oct 26, 2015 at 8:58 PM, Krot Viacheslav wrote:
> Hi all,
>
> I wonder what the correct way is to stop a streaming application if some
> job fails?
> What I have now:
>
> val ssc = new StreamingContext
>
> ssc.start()
> try {
>