I'm using FST for serialization and am happy with it. It works reasonably
well out of the box.
To get it to work with remoting, I had to register specific FST serializers
for ActorRef and Deploy; the first falls back to
Serialization.serializedActorPath and the latter delegates to the
and everything goes as it should.
thanks,
Bert
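For what it's worth, the fallback idea can be sketched in plain Scala (no Akka or FST on the classpath — the real code registers an FST serializer that calls akka.serialization.Serialization.serializedActorPath; here ActorRef and the resolver are simple stand-ins):

```scala
// Hypothetical stand-in for akka.actor.ActorRef: the ref is identified
// by its full path string, which is what serializedActorPath produces.
final case class ActorRef(path: String)

// "Serialize": write the ref on the wire as its full path string.
def toWire(ref: ActorRef): String = ref.path

// "Deserialize": the receiving side resolves the path back to a ref
// (in real Akka this would go through the ActorSystem's provider).
def fromWire(path: String): ActorRef = ActorRef(path)

val ref      = ActorRef("akka.tcp://sys@host:2552/user/worker")
val restored = fromWire(toWire(ref))
```

The point is only that the ref round-trips as a string; the actual FST registration hooks this logic into FST's serializer interface.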
On Thursday, February 11, 2016 at 5:36:17 PM UTC+1, Bert Robben wrote:
>
> Hi,
>
> I'm trying to use sharding and that works reasonably well (my entities are
> being created across my cluster and they respond to my requests).
>
the Props. No need to
> involve grandpa.
>
> fre 12 feb. 2016 kl. 11:43 skrev Bert Robben <bert.rob...@gmail.com>:
>
>> Thanks Filippo. I think I figured out now what was going wrong.
>>
>> I was confused between the shardId and the entityId.
>>
>>
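For anyone else tripping over the same confusion, here is a minimal plain-Scala sketch of the usual relationship (numberOfShards and the hashing scheme are assumptions for illustration, not taken from this thread):

```scala
// entityId names one entity; shardId names the group (shard) it lives in.
// A common scheme derives the shard deterministically from the entity id,
// so the same entity always lands on the same shard.
val numberOfShards = 10

def shardId(entityId: String): String =
  // mask to keep the hash non-negative before taking the modulus
  ((entityId.hashCode & Int.MaxValue) % numberOfShards).toString
```

Many entityIds map to one shardId; mixing the two up means messages for one entity get routed as if they named a whole shard.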
; not be necessary.
>
> On Fri, Feb 12, 2016 at 12:26 PM, Bert Robben <bert.rob...@gmail.com>
> wrote:
>
>> I'm not sure what you mean by "entity type name" and grandpa.
>>
>>
>> But I checked the docs again, and it indeed is already mentioned that
> Filippo De Luca <m...@filippodeluca.com> wrote:
>
>> Hi Bert,
>> the only way that I am aware of to get the id is using the one you have
>> posted:
>>
>> context().parent().path().name()
>>
>> It is weird that the Actors get destroyed. They should no
Hi,
I'm getting strange results when using sharding with state-store-mode =
ddata.
This is my scenario:
* I have a cluster with 4 nodes (A1, B1, A2, B2). In this cluster there are
3 shards:
- I have one set of primary actors sharded across all these nodes (A1,
B1, A2, B2).
- I also have
I kill the node (to simulate that the node crashes); the cluster detects
this as unreachable and, because I have auto-down configured as "yes", the
master DOWNs it after some seconds (I have auto-down-unreachable-after =
10s).
I can see clearly in the logs that the node is DOWNed.
And I wait
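For completeness, the relevant bit of the application.conf for the scenario above looks like this (a sketch showing only the setting named in the post; auto-down is the old built-in downing mechanism):

```
akka.cluster {
  # down an unreachable node automatically after this timeout
  auto-down-unreachable-after = 10s
}
```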
Sure. https://github.com/akka/akka/issues/19917 it is.
On Mon, Feb 29, 2016 at 5:17 PM, Endre Varga <endre.va...@lightbend.com>
wrote:
>
>
> On Mon, Feb 29, 2016 at 5:03 PM, Bert Robben <bert.rob...@gmail.com>
> wrote:
>
>> I kill the node (to simulate that the
for this?
Bert
On Monday, February 29, 2016 at 5:03:11 PM UTC+1, Bert Robben wrote:
>
> I kill the node (to simulate that the node crashes); the cluster detects
> this as unreachable and because I have auto-down configured as "yes", the
> master DOWNs it after some seconds (I have auto-
So should I read this as "sharding with ddata is not usable in small
clusters with unreliable nodes" ? That would be a bummer.
On Monday, February 29, 2016 at 2:42:43 PM UTC+1, Marek Żebrowski wrote:
>
> I had a similar problem - basically the ShardCoordinator needs to read data
> from a majority of
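The mode being discussed is selected with a single setting (standard Akka config key):

```
# use Distributed Data instead of persistence for sharding state
akka.cluster.sharding.state-store-mode = ddata
```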
Hi,
I'm trying to use sharding and that works reasonably well (my entities are
being created across my cluster and they respond to my requests).
However, I notice that after each call to my entity, it is destroyed. Each
subsequent call to the entity then recreates it. This is not ideal for me since
I'm using cluster sharding in distributed data mode and am trying to
understand under which circumstances the sharding goes out-of-sync such
that a single entity is allocated on two different nodes at the same time.
I'm asking this question because I'm evaluating to what extent sharding +
dd
Hi,
I'm deploying my akka cluster with mesos and would like to do rolling
upgrades with marathon. My app has a few cluster singletons and is also using
cluster sharding (with ddata). The cluster is small for now (between 2 and
4 nodes).
So the scenario is the following
(1) I have a happy cluster
only
>> one node in that list, then when that node is restarted, it will not be
>> able to join the already existing cluster, but will form a cluster on its
>> own.
>>
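A seed list with more than one entry avoids that failure mode (host names below are made up for illustration):

```
# two hypothetical seed nodes; a restarted node re-joins the existing
# cluster via the other entry instead of forming a cluster on its own
akka.cluster.seed-nodes = [
  "akka.tcp://MySystem@host-a:2552",
  "akka.tcp://MySystem@host-b:2552"
]
```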
>> On Mon, Aug 8, 2016 at 6:19 PM, Bert Robben <bert@gmail.com> wrote:
>>
That is kind of the point.
>
> If your old nodes already use enable-additional-serialization-bindings then
> such a rollout would work, since both sides always use protobuf anyway.
> --
> Konrad `ktoso` Malawski
> Akka <http://akka.io> @ Lightbend <http://lightbend.com>
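The flag mentioned above is a single config key; as I understand it, it has to be set on all nodes (old and new) before starting the rollout:

```
# opt in to the protobuf-based serializers for Akka's internal messages
akka.actor.enable-additional-serialization-bindings = on
```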
Given I have a cluster running Akka version 2.4.16 (using various Akka
distributed features such as sharding, cluster singleton, remote actor
creation, etc). Would it be possible to do a rolling upgrade of the cluster
and gradually upgrade all nodes to 2.4.17 ?
From
I'm puzzled by the meaning of Actor.receive() and context.become(..). I was
under the impression that Actor.receive() gives you the partial function
that determines how incoming messages are handled at this moment.
context().become(...) allows you to replace that behavior by another one.
But
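The relationship can be modeled in plain Scala (no Akka; Behavior, handle, and the messages are stand-ins): receive supplies the initial handler, and become swaps in a replacement that is used for all subsequent messages.

```scala
import scala.collection.mutable.ListBuffer

// A message handler is a partial function, like Akka's Actor.Receive.
type Receive = PartialFunction[Any, Unit]

// Stand-in for an actor's behavior slot: `initial` plays the role of
// Actor.receive, and `become` replaces the current handler.
final class Behavior(initial: Receive) {
  private var current: Receive = initial
  def become(next: Receive): Unit = current = next
  def handle(msg: Any): Unit =
    if (current.isDefinedAt(msg)) current(msg)
}

val log = ListBuffer.empty[String]
val b   = new Behavior({ case "ping" => log += "pong" })
b.handle("ping")                           // handled by the initial receive
b.become({ case "ping" => log += "PONG" })
b.handle("ping")                           // handled by the new behavior
```

So receive is only the starting behavior; after become, the original partial function is no longer consulted (unless it is restored, e.g. with Akka's unbecome).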
Hi,
I understood from the following fragment of the remoting docs that, with
the latest version (I'm using 2.4.14), it's possible to completely
disable Java serialization.
"For compatibility reasons, the current remoting still uses Java
serialization for some classes, however you can
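If I read the docs right, the shape of the config is roughly the following (an assumed sketch; key names as I understand the 2.4.x reference.conf, so please double-check against your version):

```
akka.actor {
  # opt in to the newer non-Java serializers for internal messages
  enable-additional-serialization-bindings = on
  # refuse to fall back to Java serialization at all
  allow-java-serialization = off
}
```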