Hi Alex,
The main concern you express is ensuring that messages sent from an
EventsourcedProcessor are eventually delivered to their recipients, which is
what a durable message queue (formerly called PersistentChannel) is for. The
main use of the non-persistent Channel is to deduplicate messages, but
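The deduplication role of Channel can be illustrated with a small self-contained model (illustrative only, not the Akka API): each delivery carries an id, and a redelivery with an already-seen id is dropped.

```scala
// Self-contained sketch of Channel-style deduplication (illustrative,
// not the Akka API): remember which delivery ids were already processed
// and drop redeliveries carrying the same id.
class Deduplicator {
  private var seen = Set.empty[Long]

  // Returns true exactly once per delivery id.
  def accept(deliveryId: Long): Boolean =
    if (seen.contains(deliveryId)) false
    else { seen += deliveryId; true }
}
```

A redelivered message reuses its original id, so accept returns false the second time and the duplicate is discarded.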
Hi Lawrence,
On 21 May 2014, at 17:38, Lawrence Wagerfield lawre...@dmz.wagerfield.com wrote:
Interesting, thanks for the heads-up regarding the deprecation of Processor :)
While we're on the topic of event-sourcing:
Is it legal to send messages on recovery? These are side-effecting, but will
In event sourcing principles you don't perform external side effects during
replay/recovery. That doesn't prevent you from doing it, if you have a good
reason for it.
Sometimes it is more useful to send messages immediately after recovery
than during recovery. Later events in the recovery might
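The principle above can be modeled in a few lines (a self-contained sketch, not the Akka persistence API itself): replayed events only rebuild state, and outbound messages are produced only once recovery has completed.

```scala
// Self-contained model (not the Akka persistence API): side effects are
// suppressed while recovery is running, and a message can be sent right
// after recovery completes instead.
class Counter(notify: String => Unit) {
  private var count = 0
  private var recoveryRunning = true

  def applyEvent(n: Int): Unit = {
    count += n
    if (!recoveryRunning) notify(s"count is now $count") // live events only
  }

  def recoveryCompleted(): Unit = {
    recoveryRunning = false
    notify(s"recovered, count=$count") // message sent immediately after recovery
  }
}
```

Replaying the same events is then idempotent with respect to the outside world: only the post-recovery summary and subsequent live events trigger notifications.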
One thing unanswered from the previous email:
Is it in the spirit of akka-stream/reactive streams to implement your own
producers? Or should all producers (publishers) be created by the framework?
In theory reactive streams aim to be usable between frameworks - so an Rx
Producer would be
On Monday, May 26, 2014 2:25:52 PM UTC+2, Konrad Malawski wrote:
One thing unanswered from the previous email:
Is it in the spirit of akka-stream/reactive streams to implement your
own producers? Or should all producers (publishers) be created by the
framework?
In theory reactive
Hi Adam
- is it reasonable (thinking about reactive streams in general) to have an
actor which produces elements on-demand (instead of providing a
collection/iterator/() => as is currently supported)? As far as I
understand the current implementation, subscribers explicitly ask
publishers
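That demand-driven interaction can be sketched without any framework (the names here are illustrative, not the reactive-streams API): the subscriber calls request(n) and the producer generates exactly n elements on demand.

```scala
// Self-contained sketch of demand-driven production: nothing is produced
// until the subscriber signals demand via request(n).
class OnDemandProducer {
  private var nextElement = 0

  // Generate exactly n elements, on demand, instead of draining a
  // pre-existing collection or iterator.
  def request(n: Int): Seq[Int] =
    (1 to n).map { _ => nextElement += 1; nextElement }
}
```

The producer keeps only its generation state; backpressure falls out naturally because elements exist only after demand for them has been signalled.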
Hey Adam,
Patrik has right now opened a ticket and started exposing the ActorProducer
abstraction :-)
https://github.com/akka/akka/issues/15288 This will help a lot in
implementing external producers. :-)
On Mon, May 26, 2014 at 2:52 PM, Endre Varga endre.va...@typesafe.com wrote:
Hi Adam
-
On Mon, May 26, 2014 at 07:41:53AM +0200, Patrik Nordwall wrote:
Hi Eugene,
I have not looked at the code yet, but in the config you have two
consistent hashing routers. The envelope is unwrapped when the message passes
through the first consistent hashing router. Could that be the reason?
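A self-contained model of the suspected problem (it mimics the behavior of Akka's ConsistentHashableEnvelope but is not Akka code): the first router consumes the envelope, so a second chained router sees only the bare message with no hash key left.

```scala
// Toy model of consistent-hashing routing with an envelope: the envelope
// is unwrapped by the first router that sees it, so a second chained
// router receives the inner message without any hash key.
case class HashEnvelope(message: Any, hashKey: Any)

def extractKey(msg: Any): (Option[Any], Any) = msg match {
  case HashEnvelope(inner, key) => (Some(key), inner) // envelope unwrapped here
  case other                    => (None, other)      // no key available anymore
}
```

The second router would need the message itself to carry a hash key, which is exactly what implementing a ConsistentHashable-style trait achieves.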
Well,
Hi,
The docs also advise against auto-downing. However, I do not really get the
alternative. Manual downing would be unworkable, because it could render your
application unavailable for too long. So should I implement some strategy in my
Akka solution, or in some external monitoring system?
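For reference, the relevant cluster setting (assuming an Akka 2.3-era configuration) defaults to off; the recommended pattern is to leave it off and instead down unreachable nodes deliberately, for example from a monitoring component that calls Cluster(system).down(address) once it has decided the node is really gone.

```
# Akka 2.3-era cluster configuration (assumed version): auto-downing is
# disabled by default, and the docs advise leaving it that way in production.
akka.cluster {
  auto-down-unreachable-after = off
}
```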
How
It *seems* that I've fixed this weird behavior by making the messages
sent to the cluster implement the *ConsistentHashable* trait.
At least, the sample project and the components from my prototype are
both working well now.
So it turns out that in the case where there are several
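The fix described above can be sketched as follows; the trait is defined inline as a stand-in so the snippet is self-contained, but in a real project you would extend akka.routing.ConsistentHashingRouter.ConsistentHashable instead, and UserCommand is a hypothetical message type.

```scala
// Stand-in trait so this sketch compiles on its own; in Akka 2.3 you would
// extend akka.routing.ConsistentHashingRouter.ConsistentHashable instead.
trait ConsistentHashable { def consistentHashKey: Any }

// Hypothetical cluster message: it carries its own hash key, so chained
// consistent hashing routers can both route it without needing an envelope.
case class UserCommand(userId: String, payload: String) extends ConsistentHashable {
  override def consistentHashKey: Any = userId
}
```

Because the key travels with the message rather than in an envelope, it survives being unwrapped by the first router.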
- another thing is whether the streams are meant to be more local, or remote
as well? There's currently the TCP stream implementation, which I guess
would indicate remote as well (and in such scenarios the need for
backpressure arises quite naturally, maybe even more so than locally), but
Awesome, subscribed :)
Thanks,
Adam
On Monday, May 26, 2014 3:04:30 PM UTC+2, Konrad Malawski wrote:
Hey Adam,
Patrik has right now opened a ticket and started exposing the
ActorProducer abstraction :-)
https://github.com/akka/akka/issues/15288 This will help a lot in
implementing