Hey,

It would be more complicated to "replace" AtLeastOnceDelivery with your 
> demand-driven proposal - the entire point of ALOD is that it fights back 
> the fact that messages can get lost and nodes can go down.
> Effectively what you're proposing is to switch from "re-sending until I 
> get confirmations" (push) to "pulling all the time" (pull), the catch here 
> is – "*what if the demand messages get lost?*", so you'd have to add 
> re-delivery of the demand tokens themselves anyway.
>

True, the demand can get lost as well. Hmm... and that would in fact be a 
problem for any "reactive stream" between remote actors. It would make 
things more complex, but still doable, at least in a peer-to-peer setting 
(without routers). And it would help with the potential flooding of the 
destination when it comes back after being absent for a longer time. But as 
I understand it, the idea is not complete nonsense ;) 

By the way - isn't dropping demand messages also a problem in the current 
remote-streams implementation?
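
To make the pull idea concrete, here's a rough sketch of what I have in mind. All the names (Demand, Elem, PullingConsumer) are mine, not an Akka API; the trick is to make demand cumulative, so that a re-sent demand token is idempotent and a lost one is simply repaired by the next tick:

```scala
import akka.actor.{Actor, ActorRef}
import scala.concurrent.duration._

// Hypothetical protocol messages
case class Demand(upToSeqNr: Long) // cumulative, so re-sent demand is idempotent
case class Elem(seqNr: Long, payload: Any)
case object RedeliverDemand

class PullingConsumer(producer: ActorRef, batchSize: Int) extends Actor {
  import context.dispatcher
  private var requestedUpTo = 0L
  private var received = 0L

  override def preStart(): Unit = {
    requestMore()
    // Periodic tick: if a Demand (or data) message got lost, re-issue the demand.
    context.system.scheduler.schedule(2.seconds, 2.seconds, self, RedeliverDemand)
  }

  private def requestMore(): Unit = {
    requestedUpTo = received + batchSize
    producer ! Demand(requestedUpTo)
  }

  def receive = {
    case Elem(seqNr, payload) if seqNr > received =>
      received = seqNr
      // ...process payload...
      if (received == requestedUpTo) requestMore()
    case RedeliverDemand if received < requestedUpTo =>
      producer ! Demand(requestedUpTo) // same cumulative value: safe to repeat
  }
}
```

The producer would only send elements up to the highest `upToSeqNr` it has seen, so duplicate demand tokens cause no over-delivery.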
 

> Also imagine that you're trying to send M1 to A1, the A node goes down, it 
> restarts. You could keep redelivering the M1 message, which would trigger 
> the *starting* of the A1 actor (it could be persistent actor, in a shard, 
> which starts when it gets a message),
> then the push mode of ALOD will revive this A1 guy and deliver the M1 
> message. This would not work in a just pull based model - you'd have to 
> revive *everyone* on A after a restart just in order to start asking 
> around in the cluster if someone didn't have a message they wanted to send 
> to these A# actor – where as with the "retry (push)" model, they are just 
> started whenever there really is some message to be delivered to them, no 
> need to start them and "ask around".
>

Sure, as we move away from peer-to-peer to more actors, things do get more 
complex, but then, if you want to have back-pressure, you need some kind of 
feedback. I'd see it as a tradeoff - either lazily started actors or 
backpressure.

If the sharded actors are aggregate roots, for example, then lazy loading 
makes perfect sense. But if they are workers, of which there are only a 
couple per host, then this wouldn't be a problem. It just depends on the 
type of work they are supposed to do.
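
For reference, the "retry (push)" model you describe is roughly what akka-persistence's AtLeastOnceDelivery gives today (the exact `deliver` signature depends on the Akka version, so take this as a sketch):

```scala
import akka.actor.ActorPath
import akka.persistence.{AtLeastOnceDelivery, PersistentActor}

case class Msg(deliveryId: Long, payload: String)
case class Confirm(deliveryId: Long)
case class MsgSent(payload: String)
case class MsgConfirmed(deliveryId: Long)

class PushSender(destination: ActorPath) extends PersistentActor with AtLeastOnceDelivery {
  override def persistenceId = "push-sender"

  override def receiveCommand = {
    case payload: String =>
      // deliver() keeps re-sending until confirmDelivery() is called, and the
      // re-sent Msg is what (re)starts a sharded/persistent destination.
      persist(MsgSent(payload)) { e =>
        deliver(destination, deliveryId => Msg(deliveryId, e.payload))
      }
    case Confirm(deliveryId) =>
      persist(MsgConfirmed(deliveryId))(e => confirmDelivery(e.deliveryId))
  }

  override def receiveRecover = {
    case MsgSent(payload)         => deliver(destination, deliveryId => Msg(deliveryId, payload))
    case MsgConfirmed(deliveryId) => confirmDelivery(deliveryId)
  }
}
```

So in the push model the sender carries all the state, which is exactly why the destination can stay lazy.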
 

> I'd also like to make sure what you mean by "reactive" when you use it in 
> this proposal – I assume you mean the *reactive*-streams "reactive", as 
> in "abides to the reactive streams protocol", and akka-streams of course 
> drive those using messaging (in most cases).
>

Yes, reactive streams, mental shortcut :)
 

> If so, then yes – we do plan to support reactive-streams over the network, 
> in our case those will be actor's and messages of course, and yes, we'll 
> need to implement a reliable redelivery transport for those messages.
>

Great to hear :)
 

> We're not there yet, but we definitely will cross that bridge when we get 
> there :-)
>
> Let's move on to the Router example;
> Well, this is pretty much what we deal with nowadays with elements like 
> Broadcast 
> <https://github.com/akka/akka/blob/release-2.3-dev/akka-stream/src/main/scala/akka/stream/javadsl/FlowGraph.scala#L165>
>  
> / Balance 
> <https://www.google.com/url?q=https%3A%2F%2Fgithub.com%2Fakka%2Fakka%2Fblob%2Frelease-2.3-dev%2Fakka-stream%2Fsrc%2Fmain%2Fscala%2Fakka%2Fstream%2Fjavadsl%2FFlowGraph.scala%23L209&sa=D&sntz=1&usg=AFQjCNFSrMR25-LKR9NaD5WOGaYkn7az4g>
>  and *FlexiRoute* 
> <https://www.google.com/url?q=https%3A%2F%2Fgithub.com%2Fakka%2Fakka%2Fblob%2Frelease-2.3-dev%2Fakka-stream%2Fsrc%2Fmain%2Fscala%2Fakka%2Fstream%2Fjavadsl%2FFlexiRoute.scala&sa=D&sntz=1&usg=AFQjCNF5wRj3RGifRFHYYzVy6qJr7Fb17A>
> .
> Especially FlexiRoute should be of interest for you (in this example).
>

I'm wondering how many more undiscovered features there are in the code ;) 
But I guess that will change once the docs are there :)
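
For other readers of the thread, the shape of demand-driven balancing looks roughly like this (the graph DSL has been changing between milestones, so treat the exact names as approximate):

```scala
import akka.stream.scaladsl.{Balance, FlowGraph, Sink, Source}

// Balance routes each element to whichever downstream has demand,
// so back-pressure from one worker doesn't stall the other.
FlowGraph.closed() { implicit b =>
  import FlowGraph.Implicits._
  val balance = b.add(Balance[Int](2))
  Source(1 to 100) ~> balance.in
  balance.out(0) ~> Sink.foreach[Int](n => println(s"worker-1: $n"))
  balance.out(1) ~> Sink.foreach[Int](n => println(s"worker-2: $n"))
}.run() // given an implicit materializer in scope
```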
 

> As for the last proposal... I think it's either missing some details, or 
> is wishful thinking.
> How would you without a central entity be able to guarantee that you're 
> properly balancing values among all the B side actors?
> If you can just peer to peer between then you could simply just use 
> point-to-point streams, and if that's not doable, there will be some form 
> of router anyway doing the routing between A and B actors.
>

Right, well, originally I was wondering if Akka could replace 
Kafka+Zookeeper's message streams (which can be used to implement the 
scenario above: a pool of producers and a pool of consumers, all 
potentially on different hosts, streaming messages reliably through 
Kafka). With Kafka's delivery model you bind each consumer to a number of 
partitions, so it would be as you describe: a kind of point-to-point 
stream, which gets re-balanced when a node goes down.

Going this route, there could be a cluster-singleton service which assigns 
B-actors to A-actors and creates streams between them. These could be the 
"reactive message streams" from above. And to solve the demand-splitting 
problem (when one B has two As assigned), there could simply be more 
consumer-actors than producer-actors.
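
A minimal sketch of such an assigner (all names are hypothetical; it would be deployed via ClusterSingletonManager, which I've left out):

```scala
import akka.actor.{Actor, ActorRef, Terminated}

// Hypothetical registration/assignment protocol
case class RegisterProducer(ref: ActorRef)
case class RegisterConsumer(ref: ActorRef)
case class StreamTo(consumer: ActorRef)

class StreamAssigner extends Actor {
  private var producers = Vector.empty[ActorRef]
  private var consumers = Vector.empty[ActorRef]

  def receive = {
    case RegisterProducer(p) => producers :+= p; context.watch(p); rebalance()
    case RegisterConsumer(c) => consumers :+= c; context.watch(c); rebalance()
    case Terminated(ref) =>
      producers = producers.filterNot(_ == ref)
      consumers = consumers.filterNot(_ == ref)
      rebalance() // reassign streams when a node/actor goes down
  }

  // Round-robin assignment: each producer gets a dedicated consumer, which
  // avoids demand-splitting as long as there are at least as many Bs as As.
  private def rebalance(): Unit =
    if (consumers.nonEmpty)
      producers.zipWithIndex.foreach { case (p, i) =>
        p ! StreamTo(consumers(i % consumers.size))
      }
}
```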

Thanks!
Adam

--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.