Hello again,
replies in-line:

[-- cut --]
>
In a pull-based approach, if the producer is creating more work, we can implement
> logic to add more worker actors to the system. These additional workers
> will take the surge of work created by the fast producer. This maintains
> balance, or the capability of the system to handle sudden load.
>
Sure, that's a standard tactic for scaling on demand.
It also assumes you are able to add those workers fast enough, which simply
may not be true.



Nothing against Akka Streams, but its back pressure will force the
> producer to push work at a slow rate.
>
No: the protocol used by Reactive Streams can adapt, and when subscribers
are faster than the publisher, they can signal far larger demand than the
publisher is producing at.
We call this "dynamic push-pull", which describes how the protocol behaves
quite well. It means demand can be signalled quite rarely, as in this
example:

We can signal much more demand if the subscriber is fast or has large
buffers:
S: DEMAND 10,000
P: SEND
P: SEND
P: SEND
...       # 100 MORE SENDS
S: DEMAND 10,000
P: SEND   # SEND NO. 104

This effectively looks like push "for a while": P can push 10,000 elements
without waiting for any more demand.
Signalling demand can be interleaved with signalling data, so the publisher
may never have to wait before publishing.

If demand is depleted, it looks like pull again, because the publisher
can't publish to this subscriber.
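The demand bookkeeping described above can be sketched in a few lines of plain Scala. This is a hypothetical illustration, not Akka's actual implementation: `DemandTracker`, `request` and `tryEmit` are made-up names standing in for the subscriber's `Subscription.request(n)` and the publisher's emit path.

```scala
// Hypothetical sketch of the demand bookkeeping in the Reactive Streams
// protocol: the subscriber grants batches of demand, and the publisher
// may push freely until that demand is used up.
final class DemandTracker {
  private var outstanding: Long = 0L

  // Subscriber side: grant n more elements (think Subscription.request(n)).
  def request(n: Long): Unit = outstanding += n

  // Publisher side: try to emit one element; returns false once demand
  // is depleted, at which point the protocol "looks like pull again".
  def tryEmit(): Boolean =
    if (outstanding > 0) { outstanding -= 1; true } else false
}

// A fast subscriber signals a large batch of demand up front, so the
// publisher can emit 103 elements without waiting for another signal:
val fast = new DemandTracker
fast.request(10000)
val pushed = (1 to 103).count(_ => fast.tryEmit())
```

With `request(1)` instead, the same tracker degenerates into the one-element-per-round-trip behaviour of the naive variant below.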

Compare that with the naive implementation, which uses the same protocol.
We avoid it because the per-element overhead would suck, but perhaps that's
exactly what a subscriber needs, because it can't cope with more than one
element at a time (has no buffer, is slow, whatever):
S: DEMAND 1
P: SEND
S: DEMAND 1
P: SEND
S: DEMAND 1
P: SEND


What are your thoughts on taking the approach of pull-based processing
> with the capability of adding more workers or worker nodes on the fly?
>
Of course, that's one of our most often recommended patterns ever :-)
As I said before, it still assumes the "work dispatcher" is able to keep
up with the incoming work requests that it then delegates to these worker
nodes – this usually holds, but is not guaranteed.
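The shape of that work-pulling pattern can be sketched in plain Scala, with the actors elided for brevity. This is an illustrative sketch, not a library API: `WorkDispatcher`, `submit` and `pull` are made-up names, and the queue stands in for the dispatcher's mailbox.

```scala
import scala.collection.mutable

// Hypothetical sketch of the pull-based worker pattern: the dispatcher
// buffers incoming work, and idle workers ask for the next item instead
// of having work pushed at them. Note the queue here is unbounded, which
// is exactly the gap that congestion control has to close.
final class WorkDispatcher[T] {
  private val pending = mutable.Queue.empty[T]

  // Producer side: enqueue work as it arrives.
  def submit(work: T): Unit = pending.enqueue(work)

  // Worker side: pull the next item, if any. Scaling out just means more
  // workers calling this method; a slow worker simply pulls less often.
  def pull(): Option[T] =
    if (pending.nonEmpty) Some(pending.dequeue()) else None

  def backlog: Int = pending.size
}

val dispatcher = new WorkDispatcher[Int]
(1 to 5).foreach(dispatcher.submit)
val first = dispatcher.pull()
```

Adding workers on the fly is just adding more callers of `pull()`; the part that is not guaranteed is the dispatcher keeping `backlog` bounded.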

To guarantee behaviour like this, proper congestion control must be
implemented within the system – such as the Reactive Streams protocol (or
TCP, or your home-grown congestion control; there are plenty of these).

-- 
Cheers,
Konrad 'ktoso' Malawski
hAkker @ Typesafe

<http://typesafe.com>
