Hi,

Thanks for your help. My problem is with NumberAvgBolt, which is overwhelmed by 
the spout tuples. To sum up, if I understand correctly, I have to implement 
reliability as in this example: 
http://www.datasalt.com/2012/01/real-time-feed-processing-with-storm/

With this solution, couldn't I run into memory overflow problems if I receive 
too many messages from my sources?
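For context, this is roughly how I understand the bolt side of the article's 
pattern (a simplified sketch, not my actual NumberAvgBolt; the sum/count fields 
are just placeholders):

import java.util.Map;

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Tuple;

// Simplified sketch: the point is only that every input tuple gets acked,
// so the spout's pending count can go back down.
public class NumberAvgBolt extends BaseRichBolt {

    private OutputCollector collector;
    private long count;
    private double sum;

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple input) {
        sum += ((Number) input.getValue(0)).doubleValue(); // assumes one numeric field
        count++;
        // Ack (or fail) each tuple; un-acked tuples stay pending until they time
        // out and keep the spout's pending count from shrinking.
        collector.ack(input);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // no downstream stream in this sketch
    }
}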

Best regards,
Charlie QUILLARD

________________________________
From: Jungtaek Lim <[email protected]>
Sent: Wednesday, July 8, 2015 23:18
To: [email protected]
Subject: Re: Problem receiving massive tuples

Hi, Charlie.

The issue is in your Spout. A Spout shouldn't stay long in nextTuple(), because 
the Spout handles its events (including calling ack, fail, and nextTuple) in an 
event loop with just one thread.

In other words, back pressure cannot work in your Spout, because max spout 
pending works by checking the pending queue size before calling nextTuple(): if 
it is greater than max spout pending, Storm skips calling nextTuple().

If you want to follow max spout pending strictly, nextTuple() should emit only 
one tuple, but that is not a hard rule.

Please note that max spout pending only works in ack mode, and for now it is 
the only way for Storm to handle back pressure.
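To illustrate, here is a minimal sketch of a spout whose nextTuple() emits at 
most one tuple and returns immediately (the class name and the queue feeding it 
are placeholders, not your actual code):

import java.util.Map;
import java.util.Queue;
import java.util.UUID;
import java.util.concurrent.ConcurrentLinkedQueue;

import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;

public class NumberSpout extends BaseRichSpout {

    private SpoutOutputCollector collector;
    // Placeholder for whatever feeds the spout (e.g. filled by a separate reader thread).
    private Queue<Integer> buffer;

    @Override
    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
        this.buffer = new ConcurrentLinkedQueue<Integer>();
    }

    @Override
    public void nextTuple() {
        // Emit at most one tuple per call and return right away, so the same
        // thread can keep calling ack()/fail() and re-check max spout pending.
        Integer number = buffer.poll();
        if (number == null) {
            return; // nothing available; do not loop or block here
        }
        // Emitting with a message id makes the tuple count against max spout
        // pending until a bolt acks it.
        collector.emit(new Values(number), UUID.randomUUID().toString());
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("number"));
    }
}

With conf.setMaxSpoutPending(...) set on the topology Config, this caps how 
many un-acked tuples are in flight, so a slow bolt slows the spout down instead 
of being flooded.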

Hope this helps.

Thanks,
Jungtaek Lim (HeartSaVioR)

On Thursday, July 9, 2015, charlie quillard <[email protected]> wrote:

Hi,


I began my performance tests on Storm, and in my full use case, when I send many 
tuples (> 1000), I can get a "core dump" because my bolt cannot process all my 
spout tuples.

For testing, I put together a gist: 
https://gist.github.com/episanchez/a5c101bdf637a5ff2e28 , and when I did not 
add a 1 millisecond sleep, I had the same problem.

So I would like to know how to fix it without adding a sleep.


Thanks in advance,

Charlie QUILLARD


--
Name : Jungtaek Lim
Blog : http://www.heartsavior.net / http://dev.heartsavior.net
Twitter : http://twitter.com/heartsavior
LinkedIn : http://www.linkedin.com/in/heartsavior
