My application.conf:

akka {
  loglevel = INFO
  extensions = [com.romix.akka.serialization.kryo.KryoSerializationExtension$]
  actor {
    provider = akka.remote.RemoteActorRefProvider
    serializers {
      kryo = com.romix.akka.serialization.kryo.KryoSerializer
    }
  }
}
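For the kryo serializer above to actually be used, message classes also need serialization bindings. A sketch of the extra section (com.example.MyMessage is a placeholder class name, not from the thread):

```
akka.actor.serialization-bindings {
  # bind your message class (placeholder name) to the kryo serializer
  "com.example.MyMessage" = kryo
}
```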
Never mind. I tested Akka 2.3.4 and apparently the issue in question has been
fixed.
Regards,
Jim
On Wednesday, August 6, 2014 2:03:58 PM UTC-10, Jim Newsham wrote:
I can't seem to find the akka 2.3.1 artifacts targeting scala 2.11. Are
these available somewhere? We'd like to upgrade to
Is it possible to reduce the average message overhead?
200 bytes of extra cost per remote message doesn't look good...
On Thursday, August 7, 2014 1:45:12 PM UTC+8, Sean Zhong wrote:
Hi Michael,
I used Wireshark to capture the traffic. I found that for each message sent (the
message is sent with
Hi Brian,
No, PubSub works within a cluster – it needs to know which nodes to send
messages to, right?
However you could have a subscriber that will mediate the messages to the
other cluster via Cluster Client –
http://doc.akka.io/docs/akka/2.3.4/contrib/cluster-client.html
Would that help in your
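As a sketch of the Cluster Client route mentioned above (based on the linked 2.3 contrib docs): the receiving cluster starts the receptionist extension via configuration and registers the mediating subscriber actor with it, so clients outside the cluster can reach it.

```
# On the receiving cluster (application.conf): start the receptionist,
# which Cluster Client connects to from outside the cluster
akka.extensions = ["akka.contrib.pattern.ClusterReceptionistExtension"]
```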
Hello Jim,
yes, as indicated by the issue: https://github.com/akka/akka/issues/15109
it's resolved.
Related answer – we have not published Akka 2.3.1 for Scala 2.11 because at
that time Scala 2.11 was not yet available as stable.
We do not plan to back-release 2.3.1 and instead suggest using
I have a problem that involves synchronising outbound messages from a parent
actor and its child actor. This particular problem is with regard to
forwarding failure messages to clients.
Here is the example:
I have a service actor that receives a request from a client actor.
The service
Instead of mutating state from within the future I would use the pipeTo
pattern. Using pipeTo you can send the result of a future to an actor (e.g.
to self). There you can safely change state, as you are in
single-threaded-illusion-land again...
HTH
Cheers,
Michael
On Thursday, August 7
Sorry, still early. I missed the part where you said that you don't want to
use pipeTo because of the transaction. Not sure if that is a problem at all,
though. From what I see, you use the transaction to make sure nothing
happens with the values between your zcard and zrange calls; afterwards it's
On Thu, Aug 7, 2014 at 10:05 AM, √iktor Ҡlang viktor.kl...@gmail.com
wrote:
Or add compression.
This is the Akka wire-level envelope; it cannot be directly controlled by
users (unless someone writes a new transport, of course).
-Endre
On Aug 7, 2014 9:52 AM, Endre Varga
Hi Syed,
As the very first step, can you tell us what is the Akka version you are
using? If it is not Akka 2.3.4, please try to upgrade to 2.3.4 and see if
the issue still remains.
-Endre
On Thu, Aug 7, 2014 at 12:12 AM, Ryan Tanner ryan.tan...@gmail.com wrote:
When those large messages are
You can do wire-level compression.
On Thu, Aug 7, 2014 at 10:09 AM, Endre Varga endre.va...@typesafe.com
wrote:
On Thu, Aug 7, 2014 at 10:05 AM, √iktor Ҡlang viktor.kl...@gmail.com
wrote:
Or add compression.
This is the Akka wire-level envelope; it cannot be directly controlled by
users
Hi,
When I upgraded from Akka 2.2.3 to Akka 2.3.4, I found the message
throughput dropped by about 30%.
My benchmark looks like this:
4 machines, each machine has 1 source actor and 1 target actor. Each source
actor will randomly deliver a 100-byte message at a time to any target
actor.
I use
compressed link/interface
Is this configuration inside the Akka conf? I cannot find the documentation; do you
have a pointer to this?
On Thursday, August 7, 2014 4:58:05 PM UTC+8, √ wrote:
Hi Sean,
On Thu, Aug 7, 2014 at 10:49 AM, Sean Zhong cloc...@gmail.com
wrote:
Hi Viktor,
I tried it and this looks very promising, since all processors now go
into Open state. However, without the reader I'm deep into encoding hell
because my files are in US-ASCII and my db is in UTF-8:
invalid byte sequence for encoding UTF8: 0x00
And I can't just sanitize the files beforehand...
That would be completely outside of Akka.
On Thu, Aug 7, 2014 at 11:01 AM, Sean Zhong clock...@gmail.com wrote:
compressed link/interface
Is this configuration inside the Akka conf? I cannot find the documentation; do you
have a pointer to this?
On Thursday, August 7, 2014 4:58:05 PM UTC+8, √
:( encoding hell
On Thu, Aug 7, 2014 at 11:10 AM, Jasper lme...@excilys.com wrote:
I tried it and this looks very promising, since all processors now go
into Open state. However, without the reader I'm deep into encoding hell
because my files are in US-ASCII and my db is in UTF-8:
invalid
Hi,
Turns out there was a bug in our homebrew jdbc-snapshot implementation.
The loaded SelectedSnapshot was populated with Option(state) instead of
just the state, so the following line in ShardCoordinator was not executed:
case SnapshotOffer(_, state: State) =>
log.debug(receiveRecover
Great to hear you've found the problem!
We'll provide a TCK for journal plugins with the next (minor) release, so I
suggest grinding your custom plugin with it to see if it's
really valid :-)
Happy hakking!
On Thu, Aug 7, 2014 at 11:30 AM, Morten Kjetland m...@kjetland.com wrote:
Hi,
I will try to get a min code set to reproduce this. I will post updates
here.
On Thursday, August 7, 2014 5:47:51 PM UTC+8, Patrik Nordwall wrote:
Could you please share the benchmark source code?
/Patrik
On Thu, Aug 7, 2014 at 11:45 AM, Endre Varga endre...@typesafe.com
Can it be the case that you have a lot of system message traffic between
your systems? Do you have lots of remote deployed actors maybe?
All actors (4 source, 4 target) are created as remote-deployed actors.
On Thursday, August 7, 2014 5:45:34 PM UTC+8, drewhk wrote:
Hi Sean,
This is
java.lang.NoSuchMethodError:
com.google.common.io.Closeables.closeQuietly(Ljava/io/Closeable;)V
NSME is essentially always a sign of classpath issues, either having the
wrong version of the lib on the classpath, the wrong version first on the
classpath or not having the dependency on the
Hi Lawrence,
In general, exactly one entity in a distributed system should be
responsible for deciding about success / failure,
otherwise there always will be a race of some kind.
In your case though, the problem arises because the service actor does not
know if the transaction actor has
Hi,
Thanks for the details. I am looking into this. But the shutdown starts
well ahead of this, running with no ack for some time before Closing
it. I will definitely look into this NSME issue, but is this cluster shutdown
caused by something else? The same code is running fine in Play
https://lh5.googleusercontent.com/-HoimNVHLEVs/U-NbcKNJOYI/D7Q/kESkTGWcAdQ/s1600/001.png
Cluster: 4 machines, each machine has 1 source actor and 1 target actor.
All actors started remotely by a master.
Test scenario: each source actor will randomly deliver a 100-byte message
at
Hi Mani,
I had the same issue (NSME in the leveldb code) when I bumped an
application to akka 2.3.4 and guava 17. As soon as I dropped the guava
dependency everything was fine again.
I just checked the pom and Play 2.3 seems to have a dependency on Guava 16,
while the leveldb in
Just verified: the method Closeables.closeQuietly was deprecated since at
least Guava 14, and was removed in Guava 16. Guava 17 then introduced two
closeQuietly methods, which have different parameter types though...
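One way out, sketched as an sbt override (assuming an sbt build; the version is picked as one that still ships the old method, per the analysis above), is to pin Guava to a pre-removal release so leveldb's call to Closeables.closeQuietly(Closeable) resolves at runtime:

```scala
// build.sbt sketch: force a Guava version that still has
// Closeables.closeQuietly(Closeable), avoiding the NoSuchMethodError
dependencyOverrides += "com.google.guava" % "guava" % "15.0"
```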
Cheers,
Michael
On Thursday, August 7, 2014 13:42:46 UTC+2, Michael wrote:
I made a diff and tried to use the old Akka 2.2.3 config when running with Akka
2.3.4.
Here is the diff:
--- akka.2.3.4.conf.json Thu Aug 7 19:33:37 2014
+++ akka2.2.3.conf.json Thu Aug 7 19:32:35 2014
@@ -13,13 +13,10 @@
},
default-dispatcher: {
attempt-teamwork: on,
-
Hi Konrad,
I appreciate your response.
There are two approaches to work processing: either pull or push. Pull
seems to be the better approach for processing
work (http://blog.goconspire.com/post/64901258135/akka-at-conspire-part-5-the-importance-of-pulling).
Note: pull-based work processing
Some kind of intermediate actor that would merge incoming messages and
produce one stream from them?
On Thursday, August 7, 2014 13:15:29 UTC+2, √ wrote:
Hi!
A Consumer (in the future, Subscriber) can only be connected to one
producer, and as such you need to merge
Michael,
Thank you for your response.
Here is what I'm struggling with.
In order to use the pipeTo pattern I'll need access to the transaction (tran) and
the FIRST Future (zf) in the actor where I'm piping the Future to, because
the SECOND Future depends on the value (z) of the FIRST. How can I do
I finally narrowed down the config item: akka.actor.remote.use-dispatcher
The default setting for Akka 2.3.4 remoting is
akka.actor.remote.use-dispatcher = akka.remote.default-remote-dispatcher;
when I change it to akka.actor.remote.use-dispatcher = , then the
performance is the same or better with
The default-remote-dispatcher config is:
### Default dispatcher for the remoting subsystem
default-remote-dispatcher {
type = Dispatcher
executor = fork-join-executor
fork-join-executor {
# Min number of threads to cap factor-based parallelism number to
Something like that.
This somehow looks like I have to know right of the beginning how much
streams there are to merge.
Unfortunately my components come and go. They don't know any Flow to merge
with, they just know the consumer.
They do some work, emit some events and then eventually die...
I finally found out it was the remote dispatcher parallelism setting! When
I changed parallelism-max to 10, the performance was greatly improved.
default-remote-dispatcher: {
executor: fork-join-executor,
fork-join-executor: {
parallelism-max: 10,
Sean,
Yes the separate dispatcher might be the cause. This default was added to
protect the remoting subsystem from load in userspace (learning from some
past problems). Feel free to reconfigure it to anything that works for you
-- the default might be conservative indeed.
-Endre
On Thu, Aug
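Sean's fix above can be written as a small application.conf override. A sketch (the value 10 is the one reported in this thread, not a general recommendation):

```
# Raise the thread cap of the dedicated remoting dispatcher,
# which Endre notes is conservatively sized by default
akka.remote.default-remote-dispatcher {
  fork-join-executor {
    parallelism-max = 10
  }
}
```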
Hello,
Thanks for the details. It clarified my thoughts a lot. This conversation
has shed more light on reactive streams, and on how reactive streams and
pull mode fit better into a design/architecture.
On Thursday, 7 August 2014 18:07:31 UTC+5:30, Konrad Malawski wrote:
Hello again,
replies
Hi,
*Deploy In Amazon:*
I am currently working on a simple application based on the pull mode of
processing work.
I want to deploy the same app on Amazon EC2 instances. Also, how do I
get this app to auto-scale in Amazon? I searched the web but did not find any
good articles which could explain
Alright, I fixed it; it was stupid. But actually it didn't solve anything...
Here's how I did it:
doJDBCStuff():
val cpManager = conn.unwrap(classOf[PGConnection]).getCopyAPI
val stringBytes: Array[Byte] = batchStrings.toString().map(_.toByte).toArray
val copy = cpManager.copyIn(s"COPY
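A likely culprit in the snippet above is `map(_.toByte)`, which truncates every UTF-16 char to its low 8 bits and mangles any non-ASCII character before it reaches Postgres. A minimal, self-contained sketch of the difference (the string value here is made up for illustration):

```scala
import java.nio.charset.StandardCharsets

val batch = "héllo" // hypothetical batch content with one non-ASCII char

// Wrong: truncates each char to a single byte; 'é' becomes a lone 0xE9,
// which is not a valid UTF-8 sequence on its own
val truncated: Array[Byte] = batch.map(_.toByte).toArray

// Better: encode the whole string explicitly as UTF-8;
// 'é' becomes the two-byte sequence 0xC3 0xA9
val utf8: Array[Byte] = batch.getBytes(StandardCharsets.UTF_8)

println(truncated.length) // 5 bytes, one per char
println(utf8.length)      // 6 bytes, 'é' takes two
```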
Hello Soumya,
how about mapping over the Futures in a for comprehension, like this, and
then sending the result back to the actor so it can set the value (*don't*
modify a var from a Future; it's not safe, as the Future executes on a
different thread and the actor guarantees no longer hold).
case
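A minimal, self-contained sketch of that for comprehension, with stand-ins for the Redis calls (zcard/zrange here are hypothetical stubs, not real client calls). Inside an actor you would `result pipeTo self` rather than block:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Hypothetical stand-ins for the Redis calls discussed in the thread
def zcard: Future[Long] = Future(3L)
def zrange(count: Long): Future[Seq[String]] =
  Future(Seq.tabulate(count.toInt)(i => s"item-$i"))

// The second future depends on the value (z) produced by the first
val result: Future[Seq[String]] =
  for {
    z     <- zcard     // FIRST: how many entries there are
    items <- zrange(z) // SECOND: fetch that many entries
  } yield items

// Blocking here only to show the value; in an actor, pipe `result` to self
// and update state when the message arrives
println(Await.result(result, 3.seconds))
```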
Are you suggesting the default decider combined with a one-for-one strategy
with a max retry attempt of 1, combined with the following code?
override def preRestart(exception: Throwable, message: Option[Any]): Unit = {
  client ! exception
  context stop self
}
On Thursday, August 7, 2014 12:29:05 PM UTC+1, Konrad Malawski wrote:
Hi Greg and Akka!
Nice to hear that you find the prototype interesting. I'll try to have a
look into your scheduler code as soon as I can. I will also look into
moving my existing code to a new github repo (which is not a fork of Akka
repo) for any further development.
Greetings
Odd
On Tue,
What I'm playing at is:
Assumptions:
I'm assuming we're talking about all these actors in the same JVM; nothing
you wrote hints at a clustered env.
Execution:
If your actor reaches the point in the code where it `client ! result` and
does *nothing *(bold italic nothing, as in stopping :-))
It certainly makes sense. I wouldn't expect the send/stop operation to fail
any more than I would expect the whole supervision framework to fail.
What I'm trying to defend against ultimately comes down to programmer
error. It's quite likely that I'm being irrational in my perception of how
Hi,
I am playing around with Java 8 and Akka.
In Scala I liked to use currying in some scenarios in combination with Akka
Props factory methods.
I tried to use a similar approach with classes from the new
java.util.function package and would be interested to know if there are
better
Hi Endre,
I was using 2.3.3. Let me run with 2.3.4 and see if it makes a difference.
thx
-Syed
On Thursday, August 7, 2014 1:19:29 AM UTC-7, drewhk wrote:
Hi Syed,
As the very first step, can you tell us what is the Akka version you are
using? If it is not Akka 2.3.4, please try to upgrade to
Hi Ryan,
In my test it's a very large string and I just get the string back from the
message (i.e. it gets deserialized); I'm not doing anything further. This
test is just to check how large a message can be sent across. I will
attempt again with 2.3.4 to see if it makes a difference.
thx
On Wednesday, August 6, 2014 9:08:14 AM UTC+1, Martin Krasser wrote:
Kafka maintains an offset for each partition separately and a partition
is bound to a single node (disregarding replication). For example, if a
Kafka topic is configured to have 2 partitions, each partition starts
with
Can someone (Martin?) please post some rough performance and scalability
numbers per backing storage type? I see these DDD/ES/CQRS discussions lead
to consumer-developer limitations based on performance and scalability, but
I have not seen any actual numbers. So please post numbers in events
On Thursday, August 7, 2014 7:34:15 PM UTC+1, Vaughn Vernon wrote:
I vote that you need to have a single sequence across all events in an
event store. This is going to cover probably 99% of all actor persistence
needs and it is going to make using akka-persistence way easier.
If that was
I want to deploy the same App in Amazon Cloud EC2 instances. Also how to
get this app auto scaling in amazon. I ran through web but did not find any
good articles which could explain this step by step. Waiting for help in
this regard.
That's more of an EC2 question than an Akka one.
On 7 Aug 2014 at 20:57, ahjohannessen ahjohannes...@gmail.com wrote:
On Thursday, August 7, 2014 7:34:15 PM UTC+1, Vaughn Vernon wrote:
I vote that you need to have a single sequence across all events in an event
store. This is going to cover probably 99% of all actor persistence needs
Hey Konrad,
thanks for your input. I'll look into it.
On Tuesday, August 5, 2014 6:22:30 PM UTC+2, Konrad Malawski wrote:
Hi Maatary,
This is more of an architectural question, not really limited to Actors.
If I get that right you're worried about evolution in system B forcing
system A
FYI - Just tried with 2.3.4 and it doesn't change the behavior..
On Thursday, August 7, 2014 10:38:38 AM UTC-7, Syed Ahmed wrote:
Hi Ryan,
In my test it's a very large string and I just get the string back from the
message (i.e. it gets deserialized); I'm not doing anything further. This
I am sure you have already thought of this, Patrik, but if you leave full
ordering to the store implementation, it could still have unnecessary
limitations if the implementor chooses to support sequence only for
persistenceId. One very big limitation is, if the store doesn't support
single
Hello,
I'm trying to design a database actor to handle all the database
activity. The domain entities are GPS locations and users. I was thinking
of having two database actors, one with CRUD operations for GPS locations
and the second with CRUD operations for the users. I'm not sure if
Unfortunately there is no way to reduce this overhead without changing the
wire-level format, which we cannot do now. As you correctly observe,
practically all the overhead comes from the paths of the destination and
sender actors. In the future we have plans to implement a scheme which