Roland: I'm really sad to see you leave. You have been an amazing leader
and representative for the Akka project for years. Thanks for all the hard
work. Akka would not be where it is without you.
Patrik: Congrats on the new role. I can't think of a better new leader of
the
But why build the views (read model) using Persistence Query by
processing all events from the bottom of the journal, when there is a
possibility of significantly reducing this process in both time and
resources?
On Friday, 11 March 2016 at 10:42:09 UTC+1, Patrik Nordwall
Sorry, I've forgotten to mention. We are using Cassandra plugin.
On Friday, 11 March 2016 at 10:03:41 UTC+1, Konrad Malawski
wrote:
>
> But which datastores?
>
> The event journals are often different entities than the SnapshotStore.
> If it's the same, it's technically doable,
Hello!
I have looked but could not find any information anywhere about the ability
to read snapshots from the snapshot store using Akka Persistence Query. Do
you know of any way to do such a thing?
--
>> Read the docs: http://akka.io/docs/
>> Check the FAQ: http://doc.akka.io/docs/akka/current/additional/faq.html
On Fri, Mar 11, 2016 at 10:24 AM, wrote:
> Sorry, I've forgotten to mention. We are using Cassandra plugin.
>
That feature is not supported by the Cassandra plugin.
Snapshots should only be seen as an optimization for the recovery process
of persistent actors.
/Patrik
Hi there,
Very glad you're enjoying Akka HTTP, and thanks for the question – it seems
the docs could be made clearer (please let us know what led you to this
conclusion so we can fix it; full answer below):
I can use the Akka Http Websockets as described in the documentation, but it
looks like it
If you are calling .get on the message in your preRestart, that is not safe:
the actor might restart for reasons other than a message, and then the option
will be empty. In general, just .get-ing an option is an antipattern;
someone made it an Option because it can be empty, so you need to deal with
that possibility.
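The alternatives to .get can be sketched with plain Scala (the value name `lastMessage` below is illustrative, not from the thread): pattern matching or Option.fold force you to handle the empty case explicitly.

```scala
// Illustrative sketch: safer alternatives to Option.get.
// `lastMessage` stands in for an optional last-received message.
val lastMessage: Option[String] = None

// lastMessage.get would throw NoSuchElementException here.

// Pattern matching handles both cases explicitly:
val viaMatch = lastMessage match {
  case Some(msg) => s"failed while processing $msg"
  case None      => "failed outside message processing"
}

// fold does the same more compactly: first argument is the empty case.
val viaFold = lastMessage.fold("failed outside message processing")(msg =>
  s"failed while processing $msg")
```

Both forms make the "option can be empty" case visible at the call site instead of deferring it to a runtime exception.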
In general, the sources and sinks should take care of cleaning up after
themselves rather than having some downstream/upstream element report
cancel/failure/completion through a side channel. If you are implementing a
custom Source, its OutHandler will get an onDownstreamFinish, and if you are
writing a
Hi. Thanks Patrik.
I am using shutdown() and then awaitTermination(). Both don't block and
return immediately. My previous test starts two nodes and the
awaitTermination()
blocks as expected and everything works fine. The only test which fails is
the one which works in single-node mode.
I've
Thanks.
I would like to keep monitoring counters in the actor system, so they can
be exposed via a service to the external world.
On Thursday, March 10, 2016 at 5:48:12 PM UTC-6, rrodseth wrote:
>
> You can also use alsoTo to send stream elements to an actor or special
> purpose Sink.
>
> On
Me too :(
I'll prepare a minimal example. Typically when I do this the problem becomes
clear and I can fix my test code :)
Regards.
On Fri, Mar 11, 2016 at 1:43 PM, Patrik Nordwall
wrote:
> Please share minimized code of the problem. We use this all over the place
>
Hopefully I can explain this correctly...
I stumbled into code using akka streams that's laid out like this:
- A Kafka source
- a flow that partitions decoded Kafka messages by a field (Partition graph
stage)
- to each partition, a Subscriber/Publisher 'Flow' actor gets assigned,
that sends
Hi Eduardo,
If you have an actor that is blocking indefinitely, the actor system
termination will never complete; could this be the case? If it is, you
should be able to see it by getting a thread dump from the JVM and finding
one of your actors blocking one of the dispatcher threads.
--
Johan
Hi Biniam,
from the stack trace it is pretty obvious that you should look at
AbstractActor.groovy—which is not an Akka source file.
Regards,
Roland
> 11 mar 2016 kl. 12:28 skrev Biniam Asnake :
>
> Hello everyone,
>
> Please guide me in the right direction to solve
Hi Richard
Yes, that sounds right.
You could also just keep the future as a direct reference in your actor and
then .foreach(binding => binding.unbind()) in preRestart/postStop to make
the server lifecycle follow the actor lifecycle.
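The pattern of tying a resource's lifecycle to the actor's hooks can be sketched library-agnostically (the names `ServerBindingStub`, `postStop` as a plain method, etc. are illustrative stand-ins, not the Akka HTTP API): keep a handle to the started resource and release it in the stop hook.

```scala
// Illustrative sketch (not Akka HTTP API): a resource handle kept as a
// field and released in a postStop-style hook, so the resource's
// lifecycle follows its owner's lifecycle.
final class ServerBindingStub {
  private var bound = true
  def unbind(): Unit = bound = false
  def isBound: Boolean = bound
}

final class OwnerWithLifecycle {
  // The handle is kept as a direct reference, as suggested in the thread.
  private val binding = new ServerBindingStub

  def isServerUp: Boolean = binding.isBound

  // Stands in for an actor's preRestart/postStop hooks.
  def postStop(): Unit = binding.unbind()
}

val owner = new OwnerWithLifecycle
val before = owner.isServerUp // bound while the owner is alive
owner.postStop()
val after = owner.isServerUp  // unbound once the owner stops
```

The point is that no external coordination is needed: whoever stops the owner automatically releases the resource.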
--
Johan Andrén
Akka Team, Lightbend Inc.
Hi.
Hum... I think that is not the case. In fact the methods shutdown()
and awaitTermination()
simply don't block at all, and the next test says that port 12551 is
already bound. If my previous test starts two nodes, everything works fine
and the awaitTermination() method waits for the node
Hi,
To notice that you lost contact with the other end of a TCP socket you have
to read or write the socket, this is probably best achieved by providing
some type of heartbeat in your own protocol.
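The heartbeat idea can be sketched without any networking library (all names below, such as `HeartbeatState` and `isStale`, are made up for illustration): record when the last message arrived and treat the peer as gone once a timeout elapses without one.

```scala
// Illustrative, library-agnostic sketch of application-level heartbeating:
// the connection is considered stale when no heartbeat has arrived within
// the allowed interval, prompting a write (which surfaces the dead socket)
// or a reconnect.
final case class HeartbeatState(lastSeenMillis: Long)

def isStale(state: HeartbeatState, nowMillis: Long, timeoutMillis: Long): Boolean =
  nowMillis - state.lastSeenMillis > timeoutMillis

val state = HeartbeatState(lastSeenMillis = 1000L)

// Within the timeout: peer still considered alive.
val aliveAt3s = isStale(state, nowMillis = 3000L, timeoutMillis = 5000L)

// Past the timeout: time to close the socket or reconnect.
val staleAt7s = isStale(state, nowMillis = 7000L, timeoutMillis = 5000L)
```

In a real protocol both sides would also periodically send a heartbeat message, so that an idle but healthy connection keeps refreshing `lastSeenMillis`.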
--
Johan Andrén
Akka Team, Lightbend Inc.
Option.get does not return null for an empty option, it throws an exception.
--
Johan Andrén
Akka Team, Lightbend Inc.
Here is my implementation in AbstractActor.
AbstractActor extends UntypedActor and all other actor classes extend this
AbstractActor class.
I am using the Groovy language, and the `?.` operator is a null-safe call (e.g.
message?.get()?.toString() means: if message is not null, call get on the
message. If get is
Hi Marek,
It is not the header that is null; the exception is thrown because there
was a body that was too big. The reason you get null in the logs is that we
have a descriptive toString, but the exception's message field is null, so
that is what you get in the logs.
I have added a ticket,
Hi Alex,
Did you look at the sample TCP client server in the docs?
http://doc.akka.io/docs/akka/2.4.2/scala/stream/stream-io.html
In your code you pass the incoming messages through a Flow.fold, but fold
only ever emits when upstream has completed, so this will make it so that
it only ever can
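The fold-vs-scan distinction can be illustrated with plain Scala collections, whose foldLeft/scanLeft mirror the streaming semantics (this is an analogy, not akka-stream code): foldLeft produces a single value only once the whole input is consumed, while scanLeft emits every intermediate result.

```scala
val incoming = List(1, 2, 3, 4)

// foldLeft: one result, available only after the whole input is consumed.
// A streams Flow.fold behaves analogously: nothing is emitted downstream
// until upstream completes, so an open TCP connection never emits.
val folded = incoming.foldLeft(0)(_ + _)

// scanLeft: one element per step, so "downstream" sees every intermediate
// result. The streaming analogue is Flow.scan.
val scanned = incoming.scanLeft(0)(_ + _)
```

For a long-lived connection, the scan-style behavior is usually what you want, since it produces output while the stream is still running.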
But which datastores?
The event journals are often different entities than the SnapshotStore.
If it's the same, it's technically doable, however then it's implementation
dependent – there is no general answer about this :)
--
Cheers,
Konrad 'ktoso' Malawski
Akka @ Lightbend
On 11 March 2016
You must use shutdown followed by awaitTermination.
(note that awaitTermination is replaced by something else in 2.4.x, see
deprecation)
In TestKit there is a helper method to shutdown the actor system, await and
verify.
On Fri, Mar 11, 2016 at 1:22 PM, Eduardo Fernandes
Please share minimized code of the problem. We use this all over the place
so I'm pretty sure your code is not correct.
On Fri, Mar 11, 2016 at 1:39 PM, Eduardo Fernandes wrote:
> Hi. Thanks Patrik.
>
> I am using shutdown() and then awaitTermination(). Both don't block and
>
Normally you don't rebuild the views (read model).
If you really need to, you can start a PersistentActor with the same
persistenceId, but make sure that you don't write anything from it.
On Fri, 11 March 2016 at 10:59, wrote:
> But why perform building views(read model), using
Johan, hi!
Sorry, I haven't grokked it exactly :)
Do I understand correctly that in the code above I must eliminate all
complete()/cancel() calls and instead only take care of my own cleanup?
In other words, do `finish` events propagate to other stages
independently of my stage's in/out handlers?
Hello -
I like using Akka Http in my actor systems because it is lightweight. I
use it for normal Rest/Json requests/responses for CRUD operations, however
I now also have a need for websockets.
I can use the Akka Http Websockets as described in the documentation, but
it looks like it
Or, to reformulate: what does a transparent stage look like?
Running several cluster singletons, occasionally I'll get this constantly
in the logs over and over:
"DEBUG [ClusterSingletonProxy] Trying to identify singleton..."
It will never stop. How can I trap this, or get a listener for it happening
more than N times, so that I can take some sort of