Hi dong,
On 20.02.14 04:31, dong wrote:
I've been playing with the new Akka persistence module, and have the
following questions that I hope to get answered.
1. The document says "If a processor emits more than one outbound
message per inbound Persistent message it *must* use a separate
channel for each outbound message to ensure that confirmations are
uniquely identifiable..." Is this because p.withPayload(...) and the
Persistent(...) method reuse the current message's id, so if we call
either method more than once, the processor will emit multiple
messages with the same id?
Yes, akka-persistence doesn't generate (and write) new sequence numbers
for outbound messages. Generating new sequence numbers for outbound
messages would make this usage rule obsolete but would significantly
lower throughput. We decided to go for higher throughput.
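To make the rule above concrete, here is a minimal sketch in plain Scala. The `Msg` class and its `withPayload` are a hypothetical stand-in, not the actual akka-persistence classes; they only model the behavior described above, namely that the sequence number is assigned once on the inbound write and reused for every derived outbound message:

```scala
// Hypothetical model (NOT the real akka-persistence classes): a persistent
// message whose sequence number is assigned once, on the inbound write.
case class Msg(processorId: String, sequenceNr: Long, payload: Any) {
  // withPayload keeps the original sequence number, mirroring the rule
  // quoted above: no new number is generated for an outbound message.
  def withPayload(p: Any): Msg = copy(payload = p)
}

object SeqNrReuseDemo {
  def main(args: Array[String]): Unit = {
    val inbound = Msg("processor-1", 42L, "some-command")
    val outA = inbound.withPayload("event-a")
    val outB = inbound.withPayload("event-b")
    // Both outbound messages share processorId and sequenceNr, so only a
    // distinct channelId per outbound message keeps confirmations unique.
    println(outA.sequenceNr == outB.sequenceNr) // prints "true"
  }
}
```

Because the two outbound messages are indistinguishable by (processorId, sequenceNr) alone, routing each through its own channel restores uniqueness via the channel id.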
2. I think this implies that channels compare a new message's id with the
largest id ever seen and discard messages whose ids are smaller than
or equal to the last id seen. Do they?
No, replayed messages contain information which channel destinations
confirmed their delivery. If a channel encounters a replayed message
that contains a confirmation with the same channel id, it ignores that
message. A confirmation is a persistent (processorId, sequenceNr,
channelId) 3-tuple, where (processorId, sequenceNr) identify a
persistent message.
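The check described above can be sketched as follows. This is an illustrative model of the replay-time filtering, not the actual channel internals; the names `ReplayedMsg`, `confirmedBy`, and `shouldDeliver` are made up for the example:

```scala
// A confirmation is a persistent (processorId, sequenceNr, channelId) triple.
case class Confirmation(processorId: String, sequenceNr: Long, channelId: String)

// Model: a replayed message carries the ids of the channels that have
// already confirmed its delivery.
case class ReplayedMsg(processorId: String, sequenceNr: Long, confirmedBy: Set[String])

object ChannelFilter {
  // A channel delivers a replayed message unless a confirmation with its
  // own channelId exists; no "largest id seen" comparison is involved.
  def shouldDeliver(channelId: String, m: ReplayedMsg): Boolean =
    !m.confirmedBy.contains(channelId)
}

object ChannelFilterDemo {
  def main(args: Array[String]): Unit = {
    val replayed = ReplayedMsg("processor-1", 7L, confirmedBy = Set("channel-a"))
    println(ChannelFilter.shouldDeliver("channel-a", replayed)) // prints "false"
    println(ChannelFilter.shouldDeliver("channel-b", replayed)) // prints "true"
  }
}
```

Note that each channel only consults confirmations carrying its own channel id, which is why the same replayed message can still be delivered by a different channel that never confirmed it.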
(I guess I should start reading the code.)
3. Were channels designed to be used one-way or two-way? If my
previous guess about the channel's id-check mechanism is correct,
channels should be one-way only. Just want to make sure I've got it
right.
They are one-way.
4. If one processor accepts persistent messages from multiple
channels, then to deal with potential re-delivery of the same
messages, I guess the processor should keep a 'last-seen-id' for
each channel and do an id check, right?
Only if you assume that messages cannot be lost. This is reasonable to
assume for local channel destinations but not for remote destinations.
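Under that assumption, the per-channel id check could look like the sketch below. `DedupDestination` and `accept` are illustrative names, not an akka-persistence API; the point is only the last-seen bookkeeping per channel:

```scala
import scala.collection.mutable

// Sketch of a destination that receives from several channels and keeps a
// last-seen sequence number per channel. Dropping messages with smaller or
// equal numbers is only safe if messages cannot be lost, which is
// reasonable for local channel destinations but not for remote ones.
class DedupDestination {
  private val lastSeen = mutable.Map.empty[String, Long] // channelId -> seqNr

  // Returns true if the message is new and should be processed.
  def accept(channelId: String, sequenceNr: Long): Boolean = {
    val last = lastSeen.getOrElse(channelId, 0L)
    if (sequenceNr <= last) false // duplicate re-delivery: drop it
    else { lastSeen(channelId) = sequenceNr; true }
  }
}

object DedupDemo {
  def main(args: Array[String]): Unit = {
    val dest = new DedupDestination
    println(dest.accept("channel-a", 1L)) // prints "true"
    println(dest.accept("channel-a", 1L)) // prints "false" (re-delivery)
    println(dest.accept("channel-b", 1L)) // prints "true" (different channel)
  }
}
```

With a remote destination, a lost message would leave a gap in the sequence, and this simple "smaller or equal" check could not tell a duplicate from a late arrival, which is why the assumption matters.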
5. In a hello-persistence example I'm writing, I use a Casbah
MongoDB journal plugin (the author is nice, btw), and I randomly
get a "Persistent commands not supported" error. Does anyone know
what this implies: an application logic error, or a journal
plugin incompatibility?
Seems you're sending Persistent messages to an Eventsourced processor.
Eventsourced processors do not support command sourcing.
6. Is there a way to customize the message id generation logic? Say I
want my ids to start from 1000000 and increment by rand() % 3?
No.
Thank you :)
--
Martin Krasser
blog: http://krasserm.blogspot.com
code: http://github.com/krasserm
twitter: http://twitter.com/mrt1nz
--
Read the docs: http://akka.io/docs/
Check the FAQ: http://akka.io/faq/
Search the archives: https://groups.google.com/group/akka-user
---
You received this message because you are subscribed to the Google Groups "Akka User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/groups/opt_out.