On Tue, Jun 23, 2015 at 10:39 PM, Chanan Braunstein
chanan.braunst...@pearson.com wrote:
Hi,
I need to use a custom serializer for Avro in Akka Persistence for certain
classes. I created the serializer and registered it for a set of events in
some of the Persistent classes. However, prior
I don't think Akka Persistence and PersistentActor is the right tool for
saving the products.
Sounds like you should use an ordinary database call.
You could use a PersistentActor for generating the product IDs, if that
requires persistent state (e.g. a sequence number).
/Patrik
On Sun, Jun 21,
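A PersistentActor used purely as an ID generator, as suggested above, might look like this minimal sketch (all names are hypothetical, Akka 2.3 persistence API assumed):

```scala
import akka.actor.Props
import akka.persistence.PersistentActor

// Hypothetical messages; names are not from the original thread.
case object NextProductId
final case class IdAllocated(id: Long)

// Minimal sketch: a PersistentActor that hands out sequential product IDs,
// recovering the counter from its journal after a restart.
class ProductIdGenerator extends PersistentActor {
  override def persistenceId: String = "product-id-generator"

  private var lastId: Long = 0L

  override def receiveRecover: Receive = {
    case IdAllocated(id) => lastId = id // replay journal to restore the counter
  }

  override def receiveCommand: Receive = {
    case NextProductId =>
      persist(IdAllocated(lastId + 1)) { evt =>
        lastId = evt.id
        sender() ! evt.id // reply only after the event is journaled
      }
  }
}

object ProductIdGenerator {
  val props: Props = Props[ProductIdGenerator]
}
```

The product documents themselves would still go through an ordinary database call, as suggested; only the counter state lives in the journal.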
On Tue, Jun 23, 2015 at 8:06 AM, Kostas kougios
kostas.koug...@googlemail.com wrote:
Hi, using akka cluster and DistributedPubSubExtension, I am creating a
topic.
The receiving actor (driver) subscribes to the topic:
private val mediator = DistributedPubSubExtension(system).mediator
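For context, a minimal subscriber built around that mediator might look like the following sketch (the topic name is illustrative; in Akka 2.3 DistributedPubSubExtension lives in the akka-contrib module):

```scala
import akka.actor.{Actor, ActorLogging}
import akka.contrib.pattern.{DistributedPubSubExtension, DistributedPubSubMediator}
import DistributedPubSubMediator.{Subscribe, SubscribeAck}

// Sketch: the receiving actor (driver) subscribes itself to a topic.
// The topic name "jobs" is an assumption, not from the original thread.
class Driver extends Actor with ActorLogging {
  private val mediator = DistributedPubSubExtension(context.system).mediator

  mediator ! Subscribe("jobs", self)

  def receive = {
    case ack: SubscribeAck =>
      log.info("Subscribed: {}", ack)
    case msg: String =>
      log.info("Got published message: {}", msg)
  }
}

// A publisher on any node sends through its own mediator reference:
//   mediator ! DistributedPubSubMediator.Publish("jobs", "hello subscribers")
```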
Seems interesting
Does the Aggregator actor use the actor-per-request pattern? The examples provided
by the documentation always finish processing one request by calling context.stop.
Thanks
-Original Message-
From: Jim Hazen jimhazen2...@gmail.com
Sent: 24/06/2015 00:23
To:
Hi,
I'm trying to send messages to an actor from the sink of a stream. I tried
it with Sink.actorRef method, but this method does not provide the back
pressure signal from the destination actor. Are there any alternative ways
to send the output of a stream to an actor?
Thanks in advance.
BR,
I am using the latest akka.
I ended up doing a context().watch(RemoteActor) in Actor A.
This seems to work even if the remote node crashes.
Is this the expected behavior when death watching clustered nodes: The
watcher gets a Terminated message also on jvm crashes?
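A minimal sketch of that death-watch setup (names hypothetical):

```scala
import akka.actor.{Actor, ActorLogging, ActorRef, Terminated}

// Sketch: Actor A watches a remote actor. Terminated is delivered both on a
// graceful stop and when the remote node is removed from the cluster (e.g.
// after a JVM crash), at which point the association is quarantined.
class WatchingActor(remote: ActorRef) extends Actor with ActorLogging {
  context.watch(remote)

  def receive = {
    case Terminated(`remote`) =>
      log.warning("Watched remote actor {} is gone (stopped or node removed)", remote)
      context.stop(self)
    case msg =>
      remote forward msg
  }
}
```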
On Wednesday, 24 June 2015
You can run with akka.loglevel=DEBUG, but also INFO level should show
pretty well what the cluster is doing (if you know what to look for).
I have a hard time understanding what you are seeing and what you expect.
First thing you must clarify is if members are downed and removed, or if
they are
AFAIK akka.io.TcpConnection will send you messages on socket events,
including when there is data available. Then you can do logical message
framing however you want. We have code to extract messages from TCP data
streams in typical ways for the payments industry. It can get a bit complex
in
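As one illustration of such framing, here is a framework-free sketch assuming a 2-byte big-endian length header, a common ISO 8583 transport convention (the header size and encoding vary by deployment):

```scala
// TcpConnection delivers arbitrary chunks, so a handler must buffer bytes
// and only emit complete frames. This sketch assumes each message is
// preceded by a 2-byte big-endian length header.
object Framing {
  /** Splits `buffer` into complete frames plus the leftover bytes. */
  def extractFrames(buffer: Vector[Byte]): (List[Vector[Byte]], Vector[Byte]) = {
    if (buffer.length < 2) (Nil, buffer) // header not complete yet
    else {
      val len = ((buffer(0) & 0xff) << 8) | (buffer(1) & 0xff)
      if (buffer.length < 2 + len) (Nil, buffer) // frame body not complete yet
      else {
        val frame = buffer.slice(2, 2 + len)
        val (rest, leftover) = extractFrames(buffer.drop(2 + len))
        (frame :: rest, leftover)
      }
    }
  }
}
```

On each Received(data) event you would append the bytes to the buffer, call extractFrames, hand complete frames to the codec, and keep the leftover for the next chunk.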
1) Actor A (Frontend Actor) is created.
2) A message is sent to a backend Actor (running on a node in the cluster)
via a consistent-hashing router. Actor A is the sender of this message.
3) The Cluster node which received the message from A crashes
How can Actor A be notified that the node crashed?
You can use ActorSubscriber to implement a Sink as an actor that can
communicate with your destination actor. It is documented here:
http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-RC3/scala/stream-integrations.html#ActorSubscriber
-Endre
On Wed, Jun 24, 2015 at 12:42 PM, Vishnu
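A minimal ActorSubscriber along the lines of that documentation page might look like this sketch (names are hypothetical; the exact factory for wrapping it as a Sink varies across the 1.0 release candidates):

```scala
import akka.actor.{Actor, ActorLogging, ActorRef, Props}
import akka.stream.actor.{ActorSubscriber, WatermarkRequestStrategy}
import akka.stream.actor.ActorSubscriberMessage.{OnComplete, OnError, OnNext}

// Sketch: a Sink actor that forwards stream elements to a destination actor.
// The request strategy propagates backpressure upstream; each element
// arrives wrapped in OnNext.
class ForwardingSubscriber(destination: ActorRef)
    extends ActorSubscriber with ActorLogging {

  // Keep at most 10 elements in flight between upstream and this actor.
  override val requestStrategy = WatermarkRequestStrategy(10)

  def receive = {
    case OnNext(elem) => destination ! elem
    case OnComplete   => context.stop(self)
    case OnError(t)   => log.error(t, "stream failed"); context.stop(self)
  }
}

object ForwardingSubscriber {
  def props(destination: ActorRef): Props = Props(new ForwardingSubscriber(destination))
}
```

Note that WatermarkRequestStrategy only throttles delivery into this actor; to backpressure on acknowledgements from the destination itself, MaxInFlightRequestStrategy is the fitting choice, requesting more elements only as the destination confirms them.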
Thanks for reporting!
On Mon, Jun 22, 2015 at 11:35 PM, Michael Frank syntaxjoc...@gmail.com
wrote:
the current documentation appears to be incorrect, which I believe was
the source of confusion. On line 20 of the code block in the section
"Actors and shared mutable state" on page
Hi Avi,
I've found this project [1] particularly useful for implementing your
DDD modeling with Akka. Kudos to Paweł Kaczor. In our implementation,
we've changed some backends, using Kafka and Cassandra instead of Event
Store, and built some other building blocks, but this
Are you assembling the jar files into one fat jar?
/Patrik
On Tue, Jun 23, 2015 at 6:15 PM, Harit Himanshu
harit.subscripti...@gmail.com wrote:
Hello there!
I am trying to run akka-remoting and I see issues when I try to run it on
a separate JVM process
$ java -jar
I tried to follow the documentation as close as possible.
(http://doc.akka.io/docs/akka-stream-and-http-experimental/current/scala/stream-integrations.html)
I have an application that I want to wake up every 30 seconds and read a
document from the web. It'll parse that document into my own
No OutOfMemory, the third node is running fine. Except it can be the
leader, and in that case I have two leaders...
I think I have reproduced it in the following program (let me know if you
want the complete maven setup or similar):
application.conf:
akka {
actor.provider =
Thank you, Juanjo!
2015-06-24 16:07 GMT+03:00 Juan José Vázquez Delgado
juanjo.vazquez.delg...@tecsisa.com:
Hi Avi,
I've found this project [1] particularly useful for implementing your
DDD modeling with Akka. Kudos to Paweł Kaczor. In our implementation,
we've changed
Hi Konrad,
Sorry for the split posting (quite appropriate given we are talking split
brain, though ^_^ ). How should I fix it?
Anyways: thanks for the reply. As I mentioned there, is there any risk of
race condition in performing this operation?
Thanks,
D.
On Tuesday, 23 June 2015
On Wed, Jun 24, 2015 at 2:32 PM, Anders Båtstrand ander...@gmail.com
wrote:
Sorry about the confusion, I am probably using some terminology wrong. I
will try again.
This problem is happening on all my clusters under load, using Akka 2.3.11.
I am using auto-down-after-unreachable, so nodes
Feel free to steal anything out of there you'd like, I was just playing
around while being over-typesafe and over-organized. Were I to start a
framework (I cringe at that word, more set of best practices and some
helpers) I would definitely include an easy way for ScalaTags and ScalaCSS
and
Regarding the codec, we already have an implementation for that; the problem
is reading the data in a suitable way. My first thought is that TcpConnection
could read the data stream and send Received(data), with data as an ISO 8583
ByteString, to the handler. After all, I don't know whether I have to do
framing (your suggestion) or
On Wed, Jun 24, 2015 at 4:57 PM, Anders Båtstrand ander...@gmail.com
wrote:
No OutOfMemory, the third node is running fine. Except it can be the
leader, and in that case I have two leaders...
What are you using the leader for? There is no guarantee that there will
not be more than one leader.
I am using the cluster singleton, my mistake. I was somehow believing the
leader always had the singleton...
Anyway, it might be that https://github.com/akka/akka/issues/17479 is
related. I am not downing any node manually, however, and a node will never
down itself, right? Anyway, this bug
I have attached my logs showing the problem.
I do now think that the problem is the same as the bug you mention. I can
read the following:
2015-06-24 17:51:54,693 INFO Cluster(akka://my-system)
my-system-akka.actor.default-dispatcher-3 - Cluster Node
[akka.tcp://my-system@machine2:15552] -
Yes, of course. You might also get a Terminated message if the other system is
temporarily – but too long – unreachable. In that case the other system will
remain quarantined, i.e. all communication from it will be blocked, in order to
make sure Terminated means what it suggests: no zombies ;-)
Yes Michael, you are right. I based my example code on the documentation
you referenced, assuming it was correct.
What would be the correct way to do this: the same as when sender is
involved, or is there something special here?
// is this the safe way to proceed?
val s = self
Future {
On Wed, Jun 24, 2015 at 8:20 PM, Frederic frederica...@gmail.com wrote:
Yes Michael, you are right. I based my example code on the documentation
you referenced, assuming it was correct.
What would be the correct way to do this: the same as when sender is
involved, or is there something special
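For reference, a sketch of the commonly recommended pattern: self is a stable value and safe to close over, but sender() must be captured into a local val before the Future body, because by the time the Future runs sender() may refer to a different message (or use pipeTo, which evaluates sender() immediately). Names here are illustrative:

```scala
import akka.actor.Actor
import akka.pattern.pipe
import scala.concurrent.Future

class Worker extends Actor {
  import context.dispatcher // ExecutionContext for the Future

  def receive = {
    case "work" =>
      val s = self            // self is stable, but a local val is clearest
      val replyTo = sender()  // must be captured OUTSIDE the Future body
      Future {
        21 * 2                // some long-running computation
      }.foreach { r => replyTo ! r; s ! "done" }

    case "work-pipe" =>
      // Equivalent and often cleaner: pipeTo captures sender() immediately,
      // before the Future completes, so it is safe as written.
      Future(21 * 2) pipeTo sender()
  }
}
```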