Thanks for the thorough and quick answer.

Two follow-up questions. The context is low-latency stream manipulation (ideally processing a message the moment it arrives, or at most a few milliseconds later).


1. What happens if the LBQ contains 10 items while the Storm topology does not 
call nextTuple because of backoffs? Wouldn't it be better for another Amazon 
SQS client to handle these items? Or are you assuming that a single Storm 
topology is the sole handler of these items?


2. If by backoff you mean that the Storm topology cannot handle any more 
messages, or that maxSpoutPending has been reached, then ignore this question. 
If by backoff you mean exponential backoff, then I am worried about a message 
arriving in the queue while nextTuple is not called for a long time (more than 
a few milliseconds).


Regards,

Itai

________________________________
From: Michael Rose <[email protected]>
Sent: Friday, July 18, 2014 11:27 PM
To: [email protected]
Subject: Re: Acking is delayed by 5 seconds (in disruptor queue ?)

I have no experience with multilang spouts; however, my impression from the 
docs is that you should be handling your own multiplexing if you're writing a 
ShellSpout. Otherwise, if you block for 5 seconds while emitting a tuple, you 
cannot process an ack until that's done. I'd experiment with this: if you 
change topology.sleep.spout.wait.strategy.time.ms to 500ms and you don't block 
in your spout (instead returning "sync"), it should back off just as it does 
with a normal spout (see 
https://github.com/apache/incubator-storm/blob/master/storm-core/src/jvm/backtype/storm/spout/ShellSpout.java;
"sync" is a no-op).

The post you linked to was mine, and for a long time that was true (especially 
in 0.6 and 0.7). Since Storm 0.8, the spout wait strategy backs off 
automatically when no tuples are emitted. The only time I've intentionally 
blocked in a spout after 0.8.0 is to control throughput (e.g. only allow 10/s 
during development). I've never built a multilang spout before.

Spouts, like bolts, run in a single-threaded context, so blocking at all 
prevents acks/fails/emits from being processed until the thread is unblocked. 
That is why it's best to have another thread deal with IO and asynchronously 
feed a concurrent data structure that the spout can consume. For example, in 
our internal Amazon SQS client, an IO thread continuously fetches up to 10 
messages per get and pushes them into a LinkedBlockingQueue (when the queue is 
full, only the IO thread blocks, until the spout's emits clear up room).
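
The pattern looks roughly like this (an illustrative sketch rather than our 
actual SQS spout; fetchBatch() is a hypothetical stand-in for the real receive 
call, written against the 0.9.x backtype.storm APIs):

    import backtype.storm.spout.SpoutOutputCollector;
    import backtype.storm.task.TopologyContext;
    import backtype.storm.topology.OutputFieldsDeclarer;
    import backtype.storm.topology.base.BaseRichSpout;
    import backtype.storm.tuple.Fields;
    import backtype.storm.tuple.Values;

    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.LinkedBlockingQueue;

    public class QueueBackedSpout extends BaseRichSpout {
        // Bounded queue: when full, put() blocks the IO thread only,
        // never the spout thread.
        private final LinkedBlockingQueue<String> queue =
                new LinkedBlockingQueue<String>(1000);
        private SpoutOutputCollector collector;

        @Override
        public void open(Map conf, TopologyContext context,
                         SpoutOutputCollector collector) {
            this.collector = collector;
            Thread io = new Thread(new Runnable() {
                public void run() {
                    while (!Thread.currentThread().isInterrupted()) {
                        for (String msg : fetchBatch()) { // e.g. up to 10 SQS messages per get
                            try {
                                queue.put(msg); // parks the IO thread until emits clear room
                            } catch (InterruptedException e) {
                                Thread.currentThread().interrupt();
                                return;
                            }
                        }
                    }
                }
            });
            io.setDaemon(true);
            io.start();
        }

        @Override
        public void nextTuple() {
            String msg = queue.poll(); // non-blocking; null when the queue is empty
            if (msg != null) {
                collector.emit(new Values(msg), msg); // msg doubles as the message id here
            }
            // on null, just return: the spout wait strategy backs off automatically
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("message"));
        }

        // Hypothetical stand-in for the real receive call (e.g. SQS ReceiveMessage).
        private List<String> fetchBatch() {
            return Collections.emptyList();
        }
    }

The bounded capacity gives you backpressure for free: a slow topology fills 
the queue, put() parks the IO thread, and fetching resumes as soon as emits 
drain it.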


Michael Rose (@Xorlev, https://twitter.com/xorlev)
Senior Platform Engineer, FullContact (http://www.fullcontact.com/)
[email protected]


On Fri, Jul 18, 2014 at 1:34 PM, Itai Frenkel <[email protected]> wrote:

So can you please explain this sentence from the multilang documentation?


"Also like ISpout, if you have no tuples to emit for a next, you should sleep 
for a small amount of time before syncing. ShellSpout will not automatically 
sleep for you"

https://storm.incubator.apache.org/documentation/Multilang-protocol.html


I read it as: "Unless you sleep for a small amount of time before syncing, the 
ShellSpout would serialize one "nextTuple" message every 1ms (see 
configuration below), which would require many more CPU cycles."

topology.spout.wait.strategy: "backtype.storm.spout.SleepSpoutWaitStrategy"
topology.sleep.spout.wait.strategy.time.ms: 1



You can also refer to the answer here, which describes regular Spouts sleeping 
as well:

https://groups.google.com/forum/#!topic/storm-user/OSjaVgTK5m0


Regards,

Itai



________________________________
From: Michael Rose <[email protected]>
Sent: Friday, July 18, 2014 10:18 PM
To: [email protected]
Subject: Re: Acking is delayed by 5 seconds (in disruptor queue ?)

Run your producer code in another thread to fill an LBQ, and poll that from 
nextTuple instead.

You should never be blocking yourself inside a spout.
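
The crux is draining the queue with the non-blocking poll() rather than 
take(); something like this fragment inside nextTuple (queue and collector 
standing in for your spout's fields):

    // take() would park the spout thread and stall acks/fails -- avoid it here.
    String msg = queue.poll(); // returns null immediately when the LBQ is empty
    if (msg != null) {
        collector.emit(new Values(msg), msg);
    }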


Michael Rose (@Xorlev, https://twitter.com/xorlev)
Senior Platform Engineer, FullContact (http://www.fullcontact.com/)
[email protected]


On Fri, Jul 18, 2014 at 1:03 PM, Itai Frenkel <[email protected]> wrote:

Hello again,


Attached is a simplified reproduction (without the ShellSpout, but the concepts 
are the same).


It seems that ack() and nextTuple() are always called on the same thread. That 
means there is an inherent tradeoff: either nextTuple sleeps for only a few ms 
(and then the ShellSpout serializes a lot of nextTuple messages), or nextTuple 
sleeps longer, but then the ack is delayed.


Is there a way around this limitation?


Itai

________________________________
From: Itai Frenkel <[email protected]>
Sent: Thursday, July 17, 2014 9:42 PM
To: [email protected]
Subject: Acking is delayed by 5 seconds (in disruptor queue ?)

Hello,

I have noticed that an ack takes 5 seconds to pass from the bolt to the spout 
(see debug log below). It is a simple topology with 1 spout, 1 bolt, and 1 
acker, all running on the same worker. The spout and the bolt are a ShellSpout 
and a ShellBolt, respectively.

It looks like the message is delayed in the LMAX Disruptor(?) queue.
How can I reduce this delay to ~1ms?

Regards,
Itai


2014-07-17 18:30:30 b.s.t.ShellBolt [INFO] Shell msg: Sent process to tuple 
2759481868963667531
2014-07-17 18:30:30 b.s.d.task [INFO] Emitting: bolt __ack_ack 
[-357211617823660063 -3928495599512172728]
2014-07-17 18:30:30 b.s.t.ShellBolt [INFO] Shell msg: Bolt sent ack to tuple 
2759481868963667531
2014-07-17 18:30:30 b.s.d.executor [INFO] Processing received message source: 
bolt:2, stream: __ack_ack, id: {}, [-357211617823660063 -3928495599512172728]
2014-07-17 18:30:30 b.s.d.task [INFO] Emitting direct: 3; __acker __ack_ack 
[-357211617823660063]
2014-07-17 18:30:35 b.s.d.executor [INFO] Processing received message source: 
__acker:1, stream: __ack_ack, id: {}, [-357211617823660063]
2014-07-17 18:30:35 b.s.d.executor [INFO] Acking message 1138




