Re: [akka-user] akka-http WebSockets with separate Sink and Source

2017-09-15 Thread Jakub Janeček
Yes, I am sure. This is the only output I get from the console (I even tried with println just to be sure :) ): [DEBUG] [09/16/2017 07:43:11.335] [main] [EventStream(akka://system)] logger log1-Logging$DefaultLogger started [DEBUG] [09/16/2017 07:43:11.336] [main] [EventStream(akka://system)]

Re: [akka-user] akka-http WebSockets with separate Sink and Source

2017-09-15 Thread Konrad “ktoso” Malawski
You seem to be logging at debug level there - are you sure your logging configuration will actually log/print those statements? On September 16, 2017 at 5:18:19, Jakub Janeček (janecek.ja...@gmail.com) wrote: > Hello, I am trying to implement a simple WebSockets server using akka-http
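
One quick way to check the point Konrad raises, as a minimal sketch (the system name and values here are assumptions, not taken from the thread): debug statements only reach the console if akka.loglevel (and, before the logger starts, akka.stdout-loglevel) allow them.

import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object LoggingCheck extends App {
  // Raise the log level explicitly; in a real project this would live in application.conf.
  val config = ConfigFactory.parseString(
    """
      |akka.loglevel = "DEBUG"
      |akka.stdout-loglevel = "DEBUG"
    """.stripMargin
  ).withFallback(ConfigFactory.load())

  val system = ActorSystem("system", config)
  system.log.debug("debug logging reaches the console")
}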

[akka-user] akka-http WebSockets with separate Sink and Source

2017-09-15 Thread Jakub Janeček
Hello, I am trying to implement a simple WebSockets server using akka-http with separate Sink and Source. According to the documentation I should be using the handleMessagesWithSinkSource method; however, if I do so, I am not able to receive any Message. I am able to send some back to the client
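
For context, a minimal sketch of a WebSocket route with independent inbound and outbound halves (host, port, and the handlers are illustrative, not from the thread). The routing directive handleWebSocketMessages takes a single Flow, so the two halves are combined with Flow.fromSinkAndSource; handleMessagesWithSinkSource on UpgradeToWebSocket is, as far as I know, the model-level counterpart of the same idea.

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.ws.{Message, TextMessage}
import akka.http.scaladsl.server.Directives._
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Sink, Source}
import scala.concurrent.duration._

object WsServer extends App {
  implicit val system = ActorSystem("ws")
  implicit val materializer = ActorMaterializer()

  // Inbound messages are only printed; outbound is an independent ticking source.
  val inSink    = Sink.foreach[Message](m => println(s"received: $m"))
  val outSource = Source.tick(1.second, 1.second, TextMessage("tick"))

  val route =
    path("ws") {
      handleWebSocketMessages(Flow.fromSinkAndSource(inSink, outSource))
    }

  Http().bindAndHandle(route, "localhost", 8080)
}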

[akka-user] Serialize a throwable for remote actors

2017-09-15 Thread Kevin Osborn
I am using clustered persistent actors, so my actors are going to be sending a response back to the sender. Normally this is just going to be a CommandSuccess message, which is a simple ack object. And since this is remote, I want to use protobuf; I am using ScalaPB. So, in this case, it
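
A common approach for the failure case is not to serialize the Throwable itself but to map it onto a plain data message carrying the exception class name and message. A minimal sketch, with all names hypothetical and no particular .proto file assumed (a real setup would use a ScalaPB-generated message instead of the case class):

// Carry failure details across the wire as data rather than serializing the Throwable.
final case class CommandFailure(exceptionClass: String, message: String)

object CommandFailure {
  def fromThrowable(t: Throwable): CommandFailure =
    CommandFailure(t.getClass.getName, Option(t.getMessage).getOrElse(""))
}

// Receiving side: surface the failure without needing the original exception
// class on the classpath.
final class RemoteCommandException(failure: CommandFailure)
  extends RuntimeException(s"${failure.exceptionClass}: ${failure.message}")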

Re: [akka-user] Removing programmatically and dynamically Node from Cluster

2017-09-15 Thread Justin du coeur
On Fri, Sep 15, 2017 at 9:48 AM, Sebastian Oliveri wrote: > I wrote your implementation, but as a local actor in every node that lives in memory as long as the instance is up and running. > I thought about possible edge cases around modeling it as a local actor, but I

Re: [akka-user] Removing programmatically and dynamically Node from Cluster

2017-09-15 Thread Sebastian Oliveri
Justin, I wrote your implementation, but as a local actor in every node that lives in memory as long as the instance is up and running. I thought about possible edge cases around modeling it as a local actor, but I cannot come up with a case that would break the scenario. I am thinking
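
For reference, a minimal sketch of what a per-node local actor for this could look like (the message names, and the choice between leave and down, are assumptions for illustration, not the implementation discussed in the thread): it reacts to an explicit request by asking Cluster to remove the given member.

import akka.actor.{Actor, ActorLogging, Address, Props}
import akka.cluster.Cluster

final case class LeaveNode(address: Address)
final case class DownNode(address: Address)

class NodeRemover extends Actor with ActorLogging {
  private val cluster = Cluster(context.system)

  def receive: Receive = {
    case LeaveNode(address) =>
      log.info("Asking {} to leave the cluster gracefully", address)
      cluster.leave(address)
    case DownNode(address) =>
      log.info("Marking {} as down", address)
      cluster.down(address)
  }
}

object NodeRemover {
  def props: Props = Props(new NodeRemover)
}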

[akka-user] Akka Stream Kafka - Avro Deserialization

2017-09-15 Thread Yaser Arshad
Hi, Using the Kafka source in Akka Streams Kafka, I am trying to deserialize an Avro object (generated with the avro-tools library) using KafkaAvroDeserializer. If I use the Confluent Kafka consumer, I can deserialize it like this: def consumerProperties() = { val props = new Properties
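
A rough sketch of wiring that same deserializer into Akka Streams Kafka instead of the plain Confluent consumer (broker address, registry URL, topic, and group id below are placeholders): ConsumerSettings accepts key and value Deserializer instances directly, and the KafkaAvroDeserializer can be configured with the schema registry URL before being passed in.

import akka.actor.ActorSystem
import akka.kafka.{ConsumerSettings, Subscriptions}
import akka.kafka.scaladsl.Consumer
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Sink
import io.confluent.kafka.serializers.KafkaAvroDeserializer
import org.apache.kafka.common.serialization.StringDeserializer
import scala.collection.JavaConverters._

object AvroConsumer extends App {
  implicit val system = ActorSystem("kafka-avro")
  implicit val materializer = ActorMaterializer()

  // Configure the value deserializer with the schema registry before handing it over.
  val avroDeserializer = new KafkaAvroDeserializer()
  avroDeserializer.configure(
    Map[String, AnyRef]("schema.registry.url" -> "http://localhost:8081").asJava,
    false // isKey
  )

  val consumerSettings =
    ConsumerSettings(system, new StringDeserializer, avroDeserializer)
      .withBootstrapServers("localhost:9092")
      .withGroupId("avro-consumer-group")

  Consumer
    .plainSource(consumerSettings, Subscriptions.topics("my-avro-topic"))
    .runWith(Sink.foreach(record => println(record.value())))
}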

Re: [akka-user] Asking a lot of actors inside an iteration in an Akka scheduler and waiting for everyone's reply results in a stop of the actor system and webserver

2017-09-15 Thread 'Simon Jacobs' via Akka User List
Hey Johan, thanks for your reply! > Calling .get on a CompletableFuture blocks the thread until the future is completed, don't do that. That's the point: I need to call .get because otherwise the actor stops working. I also tried to collect the stages and then wait for the response:
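
For what it's worth, a common non-blocking shape for this kind of fan-out (actor and message names below are made up for illustration, not Simon's actual code) is to combine all the ask futures and pipe the aggregated result back to the scheduling actor as a message, so the actor never has to call .get:

import akka.actor.{Actor, ActorRef}
import akka.pattern.{ask, pipe}
import akka.util.Timeout
import scala.concurrent.Future
import scala.concurrent.duration._

case object Tick
final case class Query(id: Int)
final case class Answer(id: Int, payload: String)
final case class AllAnswers(answers: Seq[Answer])

class Coordinator(workers: Seq[ActorRef]) extends Actor {
  import context.dispatcher
  implicit val timeout: Timeout = 5.seconds

  def receive: Receive = {
    case Tick =>
      // Ask every worker, combine the futures, and deliver the combined result
      // to ourselves as a single message; the actor's thread is never blocked.
      val replies: Future[Seq[Answer]] =
        Future.sequence(workers.zipWithIndex.map { case (worker, i) =>
          (worker ? Query(i)).mapTo[Answer]
        })
      replies.map(AllAnswers(_)).pipeTo(self)

    case AllAnswers(answers) =>
      println(s"received ${answers.size} answers")
  }
}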

Re: [akka-user] Akka Persistence and avoiding var in favor of context.become

2017-09-15 Thread Konrad “ktoso” Malawski
Become “should” work AFAIR, though I’d recommend using Akka 2.5 (it’s binary compatible with 2.4, so you can just upgrade). I could not find whether we fixed anything about become in persistent actors; there were fixes, but a very long time ago, in 2.3. I would recommend avoiding become with
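
For illustration only, since Konrad's caveat above is cut off in the archive: a sketch of the become-based style being asked about (the command, event, and persistenceId names are made up). Command handling carries its state in parameterised behaviors instead of a var; replayed events always go to receiveRecover, so recovery accumulates state locally and installs the matching behavior once replay completes.

import akka.persistence.{PersistentActor, RecoveryCompleted}

final case class Add(item: String)
case object Close
final case class Added(item: String)
case object Closed

class Cart extends PersistentActor {
  override def persistenceId: String = "cart-1"

  // Behavior-carried state for live commands: no var needed here.
  private def open(items: List[String]): Receive = {
    case Add(item) =>
      persist(Added(item)) { _ => context.become(open(item :: items)) }
    case Close =>
      persist(Closed) { _ => context.become(closed(items)) }
  }

  private def closed(items: List[String]): Receive = {
    case Add(_) => sender() ! "cart is closed"
  }

  override def receiveCommand: Receive = open(Nil)

  // Replayed events go through this handler, so state is accumulated here and
  // the final behavior is installed once when recovery finishes.
  private var recoveredItems: List[String] = Nil
  private var recoveredClosed: Boolean = false

  override def receiveRecover: Receive = {
    case Added(item) => recoveredItems = item :: recoveredItems
    case Closed      => recoveredClosed = true
    case RecoveryCompleted =>
      context.become(if (recoveredClosed) closed(recoveredItems) else open(recoveredItems))
  }
}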