[akka-user] Re: can anyone help me to solve this error

2017-07-20 Thread Marek Żebrowski
look at your cluster port configuration: netty.tcp { hostname = "127.0.0.1" port = 0 } means that the app can choose the first free port, and in your case it chose 1170 > listening on addresses :[akka.tcp://ClusterSystem@127.0.0.1:1170] and cluster seed nodes are configured for p
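
A minimal sketch of a fixed-port configuration that matches a seed-node entry, assuming classic 2.x remoting; the host and port values are illustrative:

    akka.remote.netty.tcp {
      hostname = "127.0.0.1"
      # A fixed port instead of 0, so the seed-node list can point at it.
      port = 2551
    }
    akka.cluster.seed-nodes = [
      "akka.tcp://ClusterSystem@127.0.0.1:2551"
    ]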

Re: [akka-user] Re: Problem with clean shutdown using artery in cluster

2016-11-29 Thread Marek Żebrowski
Thanks for the detailed explanation. I'll try to adapt it to my system.

[akka-user] Re: Problem with clean shutdown using artery in cluster

2016-11-28 Thread Marek Żebrowski
Guido, how exactly do you use registerShutdownHook(ActorSystem system)? Is it the last call in the application's main()? I tried to write something similar - I created a thread: val shutdownThread = new Thread(new Runnable() { override def run() = { new SigIntBarrier().await() application

[akka-user] Re: Problem with clean shutdown using artery in cluster

2016-11-28 Thread Marek Żebrowski
Thanks! I'll try this approach in my system.

[akka-user] Re: Problem with clean shutdown using artery in cluster

2016-11-24 Thread Marek Żebrowski
Akka version 2.4.14

[akka-user] Problem with clean shutdown using artery in cluster

2016-11-24 Thread Marek Żebrowski
I encountered a problem while trying to migrate from the old remoting to Artery, in trying to implement graceful shutdown. It is rather old code; now I see that there is another way to wait for leaving: http://doc.akka.io/docs/akka/2.4/scala/cluster-usage.html#How_To_Cleanup_when_Member_is_Removed I tried
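
A minimal sketch of the cleanup pattern from the linked cluster-usage page, assuming a plain Akka 2.4 cluster node; the system name is illustrative:

    import akka.actor.ActorSystem
    import akka.cluster.Cluster

    val system = ActorSystem("ClusterSystem")
    val cluster = Cluster(system)

    // Ask this node to leave the cluster gracefully...
    cluster.leave(cluster.selfAddress)
    // ...and terminate the ActorSystem only once the member is actually removed.
    cluster.registerOnMemberRemoved {
      system.terminate()
    }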

[akka-user] thread-pool-executor with fixed-pool-size clarification

2016-09-30 Thread Marek Żebrowski
I'm trying to understand how thread-pool-executor works. In my config I have: queue-ec { type = Dispatcher executor = "thread-pool-executor" thread-pool-executor { fixed-pool-size = 1 } throughput = 1 } and I have a piece of code that executes the following loop - it basically is a Futur
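
A minimal sketch of how such a dispatcher behaves, assuming the queue-ec block above is in application.conf; the loop body is illustrative:

    import scala.concurrent.{ExecutionContext, Future}
    import akka.actor.ActorSystem

    val system = ActorSystem("test")
    // fixed-pool-size = 1 backs the executor with exactly one thread, so
    // Futures scheduled on this dispatcher run strictly one at a time, in order.
    implicit val ec: ExecutionContext = system.dispatchers.lookup("queue-ec")

    (1 to 10).foreach { i =>
      Future(println(s"task $i on ${Thread.currentThread().getName}"))
    }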

[akka-user] Re: Subscribing to PersistentShardCoordinator startup failure

2016-08-31 Thread Marek Żebrowski
We experienced a similar problem many times and gave up on using persistence mode in the shard coordinator - "ddata" works much better for us. Even when a failure happens, which usually occurs during a rolling restart, it is way easier to recover - no need to tamper with persistent storage. So if you d

[akka-user] Re: Simple HTTP client

2016-07-12 Thread Marek Żebrowski
Hi, have a look at http://gatling.io/ - it is built on Akka and works very well as a traffic generator. On Thursday, 7 July 2016 at 14:31:57 UTC+2, Alexandru Dinca wrote: > > Hi, > > I am new to Akka and async programming. I'm trying to make a simple > traffic generator for a web-app.

[akka-user] Re: Using Actors to download a file from S3; only the first 12 actors work in a parallel way, the rest work in a serialized manner. Why?

2016-07-05 Thread Marek Żebrowski
The problem with that code is the while loop - it busy-waits inside the Actor: it just consumes CPU and does not do what you want. Instead, pass an instance of S3ProgressListener to txmanager.download(request, file_ref, listener) and handle listener changes there. Probably you will not get any speedup by go
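
A minimal sketch of the listener-based approach, assuming the AWS SDK v1 TransferManager; the bucket, key, and file path are illustrative:

    import java.io.File
    import com.amazonaws.event.{ProgressEvent, ProgressListener}
    import com.amazonaws.services.s3.model.GetObjectRequest
    import com.amazonaws.services.s3.transfer.TransferManagerBuilder

    val txManager = TransferManagerBuilder.defaultTransferManager()
    val request = new GetObjectRequest("my-bucket", "my-key")
    val download = txManager.download(request, new File("/tmp/my-key"))

    // React to transfer progress events instead of spinning in a while loop.
    download.addProgressListener(new ProgressListener {
      override def progressChanged(event: ProgressEvent): Unit =
        if (download.isDone) println(s"download finished: ${download.getState}")
    })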

[akka-user] Re: How to handle zero (almost) downtime deployments of sharded persistent system.

2016-06-10 Thread Marek Żebrowski
We use sharding, but with our own persistence, and without the remember-entities feature. We do rolling restarts (yes, not advised, lots of pain when the Akka cluster goes crazy during a restart), so we just shut down shard nodes one by one and start new ones. Usually it works. In theory it should be possible t

[akka-user] Re: Akka HTTP client overflow strategy

2016-05-30 Thread Marek Żebrowski
Faced with a similar problem, we went the following route: 1. increased max connections to 64 for a host - the default of 4 is usually way too low for high-traffic scenarios 2. added a queue that holds incoming requests, with overflow strategy DropNew. Yes, it is not a perfect solution, but nothing crashes, in w
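
A minimal sketch of that queue-in-front-of-the-pool approach, assuming 2.4-era akka-http and akka-stream APIs; the host name, buffer size, and pool setting are illustrative:

    import scala.concurrent.{Future, Promise}
    import scala.util.{Failure, Success}
    import akka.actor.ActorSystem
    import akka.http.scaladsl.Http
    import akka.http.scaladsl.model.{HttpRequest, HttpResponse}
    import akka.stream.{ActorMaterializer, OverflowStrategy}
    import akka.stream.scaladsl.{Sink, Source}

    implicit val system = ActorSystem()
    implicit val mat = ActorMaterializer()
    import system.dispatcher

    // Pool size raised via: akka.http.host-connection-pool.max-connections = 64
    val poolFlow = Http().cachedHostConnectionPool[Promise[HttpResponse]]("example.com")

    val queue = Source
      .queue[(HttpRequest, Promise[HttpResponse])](1024, OverflowStrategy.dropNew)
      .via(poolFlow)
      .to(Sink.foreach {
        case (Success(resp), p) => p.success(resp)
        case (Failure(e), p)    => p.failure(e)
      })
      .run()

    // Offers beyond the 1024-element buffer are dropped instead of crashing the pool.
    def dispatch(req: HttpRequest): Future[HttpResponse] = {
      val p = Promise[HttpResponse]()
      queue.offer(req -> p).flatMap(_ => p.future)
    }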

Re: [akka-user] akka-http - proper usage of HeaderDirectives.headerValueByType

2016-03-29 Thread Marek Żebrowski
Nice, thanks! More examples, explanations, and learning materials are very welcome for akka-http, to make it more approachable. Btw, some simplification of the http-client usage would help, too. Internally we came up with something like: trait HttpClient { implicit def system: ActorSystem implic

[akka-user] akka-http - proper usage of HeaderDirectives.headerValueByType

2016-03-29 Thread Marek Żebrowski
I'm trying to write an authenticator for akka-http using the Authorization header. I came up with something like: def logUserIn(token:String):Future[Option[UserId]] = ??? val withUserId: Directive1[User.Id] = { extractExecutionContext.flatMap { implicit ec => headerValueByType[Authorization]().flatMap { authH
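
A minimal sketch completing the directive above, assuming bearer-token credentials; UserId stands in for User.Id from the original post:

    import scala.concurrent.Future
    import akka.http.scaladsl.model.headers.{Authorization, OAuth2BearerToken}
    import akka.http.scaladsl.server.{AuthorizationFailedRejection, Directive1}
    import akka.http.scaladsl.server.Directives._

    type UserId = String // placeholder
    def logUserIn(token: String): Future[Option[UserId]] = ???

    val withUserId: Directive1[UserId] =
      headerValueByType[Authorization]().flatMap {
        // Only bearer tokens are handled here; other schemes are rejected.
        case Authorization(OAuth2BearerToken(token)) =>
          onSuccess(logUserIn(token)).flatMap {
            case Some(userId) => provide(userId)
            case None         => reject(AuthorizationFailedRejection)
          }
        case _ => reject(AuthorizationFailedRejection)
      }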

Re: [akka-user] [2.4.2] - http-client - lots of: Error in stage [One2OneBidi]: Inner stream finished before inputs completed. Outputs might have been truncated

2016-03-24 Thread Marek Żebrowski
Thanks for comforting me!

[akka-user] [2.4.2] - http-client - lots of: Error in stage [One2OneBidi]: Inner stream finished before inputs completed. Outputs might have been truncated

2016-03-24 Thread Marek Żebrowski
I am seeing lots of errors of this kind: akka://scamandrill/user/StreamSupervisor-3/flow-25-0-unknown-operation] Error in stage [One2OneBidi]: Inner stream finished before inputs completed. Outputs might have been truncated. (akka.http.impl.util.One2OneBidiFlow$OutputTruncationException$ Where

Re: [akka-user] Akka remote communication roadmap?

2016-03-20 Thread Marek Żebrowski
Great news! From my experience, the current remoting (and the cluster that depends on it) is way too fragile to be used on boxes that are not in the same data center, and in such a co-located setup UDP is perfectly fine. I expect that people trying to deploy in any "enterprise" or "corporate" environments

Re: [akka-user] akka.http.scaladsl.model.EntityStreamSizeException: null when processing Chunked POST request [2.4.2]

2016-03-14 Thread Marek Żebrowski
Thanks! I assume it is the akka.http.parsing.max-content-length config. On Friday, 11 March 2016 at 15:44:15 UTC+1, Akka Team wrote: > > Hi Marek, > > It is not the header that is null; the exception is thrown because there > was a body that was too big. The reason you get null in the logs
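
The setting in question, as a minimal sketch assuming the 2.4-era config layout; the 16m value is illustrative:

    # Default is 8m; raise it if legitimate chunked uploads exceed that limit.
    akka.http.parsing.max-content-length = 16m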

[akka-user] akka.http.scaladsl.model.EntityStreamSizeException: null when processing Chunked POST request [2.4.2]

2016-03-07 Thread Marek Żebrowski
I have an akka-http server app that processes chunked POST requests. The client is also akka-http. Once in a while I observe a failure with the following stacktrace: 00:00:34.638 [imageresizer-akka.actor.default-dispatcher-20265] ERROR a.a.ActorSystemImpl: Error during processing of request HttpRequest(Http

[akka-user] Re: DDShardCoordinator gets confused after some nodes are removed

2016-02-29 Thread Marek Żebrowski
That is my experience - but I do strange things, like rolling restarts.

[akka-user] Re: DDShardCoordinator gets confused after some nodes are removed

2016-02-29 Thread Marek Żebrowski
I had a similar problem - basically the shard coordinator needs to read data from a majority of the nodes, which seems impossible in a small cluster while nodes are added/removed

Re: [akka-user] Akka remote communication roadmap?

2016-02-22 Thread Marek Żebrowski
It seems that it is not as easy as it originally appeared - specifically, error handling and handling failed writes need some work. My results for akka-io-remote compared to netty3: I used the code posted some time ago on this list (source: https://gist.github.com/ibalashov/381f323ca976c3364c84) just becau

Re: [akka-user] Akka remote communication roadmap?

2016-02-21 Thread Marek Żebrowski
After adding framing to https://github.com/marekzebrowski/akka-remote-io it seems to work fine - the echo server started responding to all messages properly. There is still a lot of work to do (error handling, configurability, UDP support, proper dispatcher usage), but it is probably not that far away f

Re: [akka-user] Akka remote communication roadmap?

2016-02-21 Thread Marek Żebrowski
I see my error - I just write the payload without any framing - I need to adjust the protocol to match the netty implementation.
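
A minimal sketch of length-prefix framing over raw TCP, assuming a 4-byte big-endian length header (whether that matches the netty transport's exact wire format is an assumption); buffer handling is simplified:

    import java.nio.ByteOrder
    import akka.util.{ByteString, ByteStringBuilder}

    implicit val byteOrder: ByteOrder = ByteOrder.BIG_ENDIAN

    // On write: prepend the payload length so the reader knows where a frame ends.
    def frame(payload: ByteString): ByteString =
      new ByteStringBuilder().putInt(payload.length).result() ++ payload

    // On read: peel off complete frames, keeping any partial frame for later.
    def unframe(buffer: ByteString): (Vector[ByteString], ByteString) = {
      var rest = buffer
      var frames = Vector.empty[ByteString]
      while (rest.length >= 4 && rest.length >= 4 + rest.iterator.getInt) {
        val len = rest.iterator.getInt
        frames :+= rest.drop(4).take(len)
        rest = rest.drop(4 + len)
      }
      (frames, rest)
    }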

Re: [akka-user] Akka remote communication roadmap?

2016-02-20 Thread Marek Żebrowski
I wondered how hard it could be to write an akka-remote transport using akka.io only. It turned out that it is not that hard to get started: https://github.com/marekzebrowski/akka-remote-io - roughly ~200 lines for a proof-of-concept TCP implementation. Probably I'm doing something wrong, as a simple test case

Re: [akka-user] How to detect sharding start failures and Singleton start failures?

2016-02-18 Thread Marek Żebrowski
Probably yes - we didn't investigate very thoroughly which conditions are OK.

Re: [akka-user] How to detect sharding start failures and Singleton start failures?

2016-02-18 Thread Marek Żebrowski
10:42:17 UTC+1, Filippo De Luca wrote: > > I agree with you. I think a message on the eventBus will solve it. > > What about ddata? You say it does not allow scaling up or down, is that > correct? > > On 18 February 2016 at 09:33, Marek Żebrowski > wrote: > &

Re: [akka-user] How to detect sharding start failures and Singleton start failures?

2016-02-18 Thread Marek Żebrowski
ould stop the actor >> system, but that might be too harsh. >> >> Please open an issue, and a pull request would also be very welcome. >> >> Thanks, >> Patrik >> >> On Wed, Feb 17, 2016 at 9:00 AM, Marek Żebrowski < >> marek.zebrow...@gmail.com> wr

[akka-user] How to detect sharding start failures and Singleton start failures?

2016-02-17 Thread Marek Żebrowski
We observe problems with both cluster sharding and cluster singletons. With sharding, the usual problem is a corrupted journal that prevents the sharding coordinator from starting. In our situation the easiest thing to do is to delete all data from the journal and restart it - the problem is that I can't find a wa

Re: [akka-user] ReliableDeliverySupervisor trying to connect node that is no longer in cluster

2016-02-02 Thread Marek Żebrowski
p jvm 2016-02-02 14:48 GMT+01:00 Patrik Nordwall : > > > On Tue, Feb 2, 2016 at 2:07 PM, Marek Żebrowski > wrote: > >> Yes, I'm using cluster sharding with persistence >> > > Then it is probably the sharding that triggers these connection attempts >

Re: [akka-user] ReliableDeliverySupervisor trying to connect node that is no longer in cluster

2016-02-02 Thread Marek Żebrowski
Yes, I'm using cluster sharding with persistence. On Tuesday, 2 February 2016 at 13:18:43 UTC+1, Patrik Nordwall wrote: > > What version are you using? Are you using Cluster Sharding? > /Patrik > > On Tue, Feb 2, 2016 at 10:58 AM, Marek Żebrowski > wrote: >

[akka-user] ReliableDeliverySupervisor trying to connect node that is no longer in cluster

2016-02-02 Thread Marek Żebrowski
We have a setup in which some nodes are auto-scaled. Even after a clean node exit (DOWN), other nodes try to communicate with the node that has already left: WARN a.r.ReliableDeliverySupervisor: Association with remote system [akka.tcp://sgact...@app-2016-01-31-224114.as.sgrouples.com:2552] has failed, addr

[akka-user] Re: [akka-http] easy to use client api

2016-01-26 Thread Marek Żebrowski
Or, closer to Typesafe: https://www.playframework.com/documentation/2.5.x/ScalaWS On Tuesday, 26 January 2016 at 12:16:39 UTC+1, Marek Żebrowski wrote: > > There are some: > > http://dispatch.databinder.net/Dispatch.html > rapture.io has some http client also I rem

[akka-user] Re: [akka-http] easy to use client api

2016-01-26 Thread Marek Żebrowski
There are some: http://dispatch.databinder.net/Dispatch.html rapture.io has an http client too. I remember Twitter Finagle has one as well, but I think it depends on the rest of Finagle: https://twitter.github.io/finagle/guide/Clients.html Probably Dispatch has the widest usage. On Tuesday, 26 January 201

Re: [akka-user] [akka-http-2.0.2] multipart and fields and fileupload problem

2016-01-18 Thread Marek Żebrowski
For now I took a different approach, manually parsing the parts: formData.parts.map { part => if (part.filename.isDefined) { val destination = File.createTempFile("akka-http-upload", ".tmp") val fileInfo = FileInfo(part.name, part.filename.get, part.entity.contentType) part.entity.d
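
A minimal sketch completing the manual-parsing approach above, assuming a recent akka-stream FileIO API and an implicit Materializer and ExecutionContext in scope; the strict-read timeout and Left/Right encoding are illustrative:

    import java.io.File
    import scala.concurrent.duration._
    import akka.http.scaladsl.server.directives.FileInfo
    import akka.stream.scaladsl.FileIO

    val parsedParts = formData.parts.mapAsync(parallelism = 1) { part =>
      if (part.filename.isDefined) {
        // File part: stream the bytes straight to a temp file.
        val destination = File.createTempFile("akka-http-upload", ".tmp")
        val info = FileInfo(part.name, part.filename.get, part.entity.contentType)
        part.entity.dataBytes
          .runWith(FileIO.toPath(destination.toPath))
          .map(_ => part.name -> Left((info, destination)))
      } else {
        // Ordinary field: read its value into memory.
        part.entity.toStrict(5.seconds).map(strict => part.name -> Right(strict.data.utf8String))
      }
    }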

Re: [akka-user] [akka-http-2.0.2] multipart and fields and fileupload problem

2016-01-18 Thread Marek Żebrowski
Thanks!

[akka-user] [akka-http-2.0.2] multipart and fields and fileupload problem

2016-01-18 Thread Marek Żebrowski
I'm trying to use file upload together with form fields in one endpoint. My smallest use case that demonstrates the failure: the request is 'multipart/form-data' with fields as in the example; if the uploaded "image" is large enough (about 1 MB) I get the error: MalformedFormFieldRejection(sizes,Substream Source cannot

[akka-user] Re: ANNOUNCE: Akka 2.4.0 Released

2015-09-30 Thread Marek Żebrowski
Right on time. Maybe cluster sharding with the additions will be more cooperative during cluster rolling restarts. I'm giving it a try ASAP!

Re: [akka-user] [2.3.13] remoting port open after ActorSystem shutdown in tests

2015-09-09 Thread Marek Żebrowski
e to > investigate the issue. Can you actually share a standalone test case that I > can use to investigate further? > > -Endre > > > > On Mon, Sep 7, 2015 at 12:33 PM, Marek Żebrowski > wrote: > >> I'm using AkkaSystem in a test. >> system config has enabl

[akka-user] [2.3.13] remoting port open after ActorSystem shutdown in tests

2015-09-07 Thread Marek Żebrowski
I'm using an ActorSystem in a test. The system config has remoting enabled: akka { actor { creation-timeout = 5s } remote { enabled-transports = ["akka.remote.netty.tcp"] netty.tcp { hostname = "localhost" port = 7337 } } cluster { auto-join = off } } I try to
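
A minimal sketch of waiting for full termination between tests on 2.3.x, so the remoting port is released before the next ActorSystem binds it; the system name is illustrative:

    import akka.actor.ActorSystem

    val system = ActorSystem("remote-test")
    try {
      // ... run the test ...
    } finally {
      system.shutdown()
      // Block until remoting and all actors have actually stopped; without
      // this, the next test may find port 7337 still bound.
      system.awaitTermination()
    }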

Re: [akka-user] Re: 2.3.12. Akka cluster init timeouts

2015-08-27 Thread Marek Żebrowski
Probably you want to add a note to the docs at http://doc.akka.io/docs/akka/2.3.12/scala/cluster-usage.html#Cluster_Dispatcher On Thursday, 27 August 2015 at 09:41:41 UTC+2, Marek Żebrowski wrote: > > Thanks a lot!!! > > On Thursday, 27 August 2015 at 09:17:13 UTC+

Re: [akka-user] Re: 2.3.12. Akka cluster init timeouts

2015-08-27 Thread Marek Żebrowski
art cluster with 1 dispatcher thread, previously 5 > threads were required to avoid the risk of deadlock. > > The immediate workaround for you is to change the configuration of your > cluster-dispatcher. Use at least pool size of 5 threads. > > /Patrik > > On Wed, Aug 26,
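
A minimal sketch of the workaround quoted above, assuming the cluster-dispatcher layout from the 2.3 cluster-usage docs; only the pool size matters here:

    cluster-dispatcher {
      type = "Dispatcher"
      executor = "fork-join-executor"
      fork-join-executor {
        # At least 5 threads to avoid the startup deadlock on 2.3.12.
        parallelism-min = 5
        parallelism-max = 8
      }
    }
    akka.cluster.use-dispatcher = cluster-dispatcher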

Re: [akka-user] Re: 2.3.12. Akka cluster init timeouts

2015-08-26 Thread Marek Żebrowski
, Patrik Nordwall wrote: > > > On Wed, Aug 26, 2015 at 3:50 PM, Marek Żebrowski > wrote: > >> The problem still persists. >> 1. I changed the boot procedure to wait until the cluster starts. It is done by: >> >> class ClusterWaiter(p: Promise[Boole
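
A minimal sketch of the ClusterWaiter idea quoted above, assuming it completes the promise once this node reaches MemberUp; details beyond the truncated snippet are guesses:

    import scala.concurrent.Promise
    import akka.actor.Actor
    import akka.cluster.Cluster
    import akka.cluster.ClusterEvent.{InitialStateAsEvents, MemberUp}

    class ClusterWaiter(p: Promise[Boolean]) extends Actor {
      val cluster = Cluster(context.system)

      override def preStart(): Unit =
        cluster.subscribe(self, initialStateMode = InitialStateAsEvents, classOf[MemberUp])

      override def postStop(): Unit = cluster.unsubscribe(self)

      def receive = {
        case MemberUp(member) if member.address == cluster.selfAddress =>
          // Our own node has joined the cluster: boot can proceed.
          p.trySuccess(true)
          context.stop(self)
      }
    }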

Re: [akka-user] Re: 2.3.12. Akka cluster init timeouts

2015-08-26 Thread Marek Żebrowski
led On Wednesday, 26 August 2015 at 15:50:30 UTC+2, Marek Żebrowski wrote: > > The problem still persists. > 1. I changed the boot procedure to wait until the cluster starts. It is done by: > > class ClusterWaiter(p: Promise[Boolean]) extends Actor { > override def

Re: [akka-user] Re: 2.3.12. Akka cluster init timeouts

2015-08-26 Thread Marek Żebrowski
kka.tcp://sgact...@app1.groupl.es:2552] - Starting up... suggests that the Cluster extension is starting several times, from different threads. Maybe that is the root cause of the problem? Maybe there is a race in extension startup / cluster startup? On Thursday, 6 August 2015 at 11:59:39 UTC+2,

Re: [akka-user] Re: 2.3.12. Akka cluster init timeouts

2015-08-06 Thread Marek Żebrowski
> do you do any blocking (long running) tasks on the default-dispatcher? > /Patrik > > On Mon, Aug 3, 2015 at 10:42 AM, Marek Żebrowski > wrote: > >> That failure leads to another failure: actor name not unique >> >> 08:11:39.893 [sgActors-akka.actor.default-disp

[akka-user] Re: 2.3.12. Akka cluster init timeouts

2015-08-03 Thread Marek Żebrowski
That failure leads to another failure: actor name not unique. 08:11:39.893 [sgActors-akka.actor.default-dispatcher-17] ERROR > a.c.ClusterCoreSupervisor: actor name [cluster] is not unique! > akka.actor.InvalidActorNameException: actor name [cluster] is not unique!

[akka-user] 2.3.12. Akka cluster init timeouts

2015-08-03 Thread Marek Żebrowski
Quite often, especially on slower machines (test, integration), the Akka cluster extension times out: 08:11:39.642 [sgActors-akka.actor.default-dispatcher-4] ERROR Cluster(akka://sgActors): Failed to startup Cluster. You can try to increase 'akka.actor.creation-timeout'. java.util.concurrent.Timeou

[akka-user] Re: How to trace Disassociation cause? [2.3.9]

2015-04-03 Thread Marek Żebrowski
I gather that the other side is breaking the connection: 2015-04-03 12:07:53,806 INFO akka.actor.LocalActorRef akka://sgActors/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FsgActors%4010.90.23.151%3A39036-1 - Message [akka.remote.transport.AssociationHandle$Disassociated] from

[akka-user] Re: How to trace Disassociation cause? [2.3.9]

2015-04-03 Thread Marek Żebrowski
If I add more nodes, they are disconnected at almost exactly the same time: 11:24:42.232 [sgActors-akka.actor.default-dispatcher-62] WARN a.r.ReliableDeliverySupervisor: Association with remote system [akka.tcp://sgActors@10.89.144.8:2555] has failed, address is now gated for [0] ms. Reason is:

[akka-user] Re: How to trace Disassociation cause? [2.3.9]

2015-04-03 Thread Marek Żebrowski
The same situation also happens if both nodes are on the same machine, so it's probably not a network issue but rather some problem in the code - maybe some silent exception that causes the EndpointWriter to disassociate, but there is no trace of it in the logs.

[akka-user] How to trace Disassociation cause? [2.3.9]

2015-04-02 Thread Marek Żebrowski
I run a very small, 3-node cluster on EC2 and I observe constant disassociations. Heartbeats are exchanged, nothing is lost: 05:52:06.702 [sgActors-akka.actor.default-dispatcher-62] DEBUG a.c.ClusterHeartbeatSender: Cluster Node [akka.tcp://sgact...@app1.sgrouples.com:2552] - Heartbeat to [

Re: [akka-user] 2.3.9 - cluster unstable on EC2

2015-03-26 Thread Marek Żebrowski
l, less than a few hundred bytes. I'll try changing akka.cluster.failure-detector.acceptable-heartbeat-pause to see if it makes a difference. Thanks! On Thursday, 26 March 2015 at 14:10:29 UTC+1, Patrik Nordwall wrote: > > > On Wed, Mar 25, 2015 at 8:59 AM,
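
A minimal sketch of that tuning, assuming the 2.3 failure-detector config layout; the 6s pause is illustrative, not a value from the thread:

    akka.cluster.failure-detector {
      threshold = 12
      # Tolerate longer GC/network pauses before marking a node unreachable.
      acceptable-heartbeat-pause = 6s
    }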

[akka-user] 2.3.9 - cluster unstable on EC2

2015-03-25 Thread Marek Żebrowski
I'm trying to run a small cluster, just 3 nodes, on EC2. I configured dispatchers and increased the failure-detector threshold as recommended: { cluster { use-dispatcher = cluster-dispatcher failure-detector { threshold = 12 } auto-down-unreachable-after = 10s retry-unsu