Look at your cluster port configuration:

netty.tcp {
  hostname = "127.0.0.1"
  port = 0
}

port = 0 means that the app can choose the first free port, and in your case it chose 1170:

> listening on addresses :[akka.tcp://ClusterSystem@127.0.0.1:1170]

and the cluster seed nodes are configured for p
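A likely fix (the port value 2551 below is illustrative, not from this thread) is to pin the node to the port the seed list expects, instead of letting it pick a random one:

akka.remote.netty.tcp {
  hostname = "127.0.0.1"
  # fixed port - must match the entry in the seed-nodes list
  port = 2551
}
akka.cluster.seed-nodes = ["akka.tcp://ClusterSystem@127.0.0.1:2551"]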
Thanks for the detailed explanation. I'll try to adapt it to my system.
Guido, how exactly do you use registerShutdownHook(ActorSystem system)? Is it the last call in the application's main()?
I tried to write something similar - I created a thread:

val shutdownThread = new Thread(new Runnable() {
  override def run() = {
    new SigIntBarrier().await()
    application
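For comparison, here is my guess at what registerShutdownHook might do - a sketch, not Guido's actual code, and the timeout is illustrative:

import akka.actor.ActorSystem
import scala.concurrent.Await
import scala.concurrent.duration._

def registerShutdownHook(system: ActorSystem): Unit =
  Runtime.getRuntime.addShutdownHook(new Thread(new Runnable {
    override def run(): Unit = {
      system.terminate() // begin graceful shutdown
      Await.ready(system.whenTerminated, 30.seconds) // block the hook until done
    }
  }))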
Thanks! I'll try this approach in my system.
Akka version 2.4.14
I encountered a problem when trying to migrate from the old remoting to Artery, while trying to implement graceful shutdown.
It is rather old code; now I see that there is another way to wait for leaving:
http://doc.akka.io/docs/akka/2.4/scala/cluster-usage.html#How_To_Cleanup_when_Member_is_Removed
I tried
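For reference, the approach on that docs page is roughly this (my sketch of the linked pattern; system.terminate() as the cleanup action is illustrative):

import akka.actor.ActorSystem
import akka.cluster.Cluster

val system = ActorSystem("ClusterSystem")
Cluster(system).registerOnMemberRemoved {
  // runs once this member has been removed from the cluster
  system.terminate()
}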
I'm trying to understand how the thread-pool-executor works.
In my config I have:

queue-ec {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    fixed-pool-size = 1
  }
  throughput = 1
}

and I have a piece of code that executes the following loop - basically it is a Futur
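For context, this is roughly how such a dispatcher is looked up and used as an ExecutionContext (a sketch; doWork is a placeholder):

import akka.actor.ActorSystem
import scala.concurrent.{ExecutionContext, Future}

val system = ActorSystem("example")
// look up the dispatcher configured above by its config path
implicit val ec: ExecutionContext = system.dispatchers.lookup("queue-ec")

// with fixed-pool-size = 1 these Futures run one at a time on a single thread
Future { doWork() }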
We experienced a similar problem many times and gave up on using persistence mode in the shard coordinator - "ddata" works much better for us. Even when a failure happens, which usually occurs during a rolling restart, it is way easier to recover - no need to tamper with persistent storage. So if you d
Hi,
have a look at http://gatling.io/
It is built on Akka, and is very good as a traffic generator.
On Thursday, 7 July 2016 at 14:31:57 UTC+2, Alexandru Dinca wrote:
>
> Hi,
>
> I am new to Akka and async programming. I'm trying to make a simple
> traffic generator for a web-app.
The problem with that code is the while loop - it is busy-waiting inside the actor; it just consumes CPU and does not do what you want.
Instead, pass an instance of S3ProgressListener to txmanager.download(request, file_ref, listener) and handle the listener events there.
Probably you will not get any speedup by go
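Roughly like this - a sketch assuming the AWS SDK v1 TransferManager API, where DownloadFinished is a made-up message for illustration:

import com.amazonaws.event.{ProgressEvent, ProgressEventType}
import com.amazonaws.services.s3.transfer.PersistableTransfer
import com.amazonaws.services.s3.transfer.internal.S3ProgressListener

// notify the actor when the transfer completes instead of busy-waiting
val listener = new S3ProgressListener {
  override def onPersistableTransfer(transfer: PersistableTransfer): Unit = ()
  override def progressChanged(event: ProgressEvent): Unit =
    if (event.getEventType == ProgressEventType.TRANSFER_COMPLETED_EVENT)
      self ! DownloadFinished(file_ref) // illustrative message to the actor
}
txmanager.download(request, file_ref, listener)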
We use sharding, but with our own persistence and without the remember-entities feature.
We do rolling restarts (yes, not advised - lots of pain when the Akka cluster goes crazy during a restart), so we just shut down shard nodes one by one and start new ones. Usually it works.
In theory it should be possible t
Faced with a similar problem, we went the following route (see the sketch after this list):
1. Increased max connections per host to 64 - the default of 4 is usually way too low for high-traffic scenarios.
2. Added a queue that holds incoming requests, with the DropNew overflow strategy. Yes, it is not a perfect solution, but nothing crashes, in w
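A rough sketch of the queue part (sizes and names are illustrative; handleResponse is a placeholder, and the per-host limit would be raised via akka.http.host-connection-pool.max-connections):

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.HttpRequest
import akka.stream.{ActorMaterializer, OverflowStrategy}
import akka.stream.scaladsl.{Sink, Source}

implicit val system = ActorSystem()
implicit val materializer = ActorMaterializer()

// requests offered beyond the buffer are dropped (DropNew), so nothing crashes;
// parallelism 64 matches the raised per-host connection limit
val queue = Source.queue[HttpRequest](bufferSize = 1024, OverflowStrategy.dropNew)
  .mapAsync(parallelism = 64)(Http().singleRequest(_))
  .to(Sink.foreach(handleResponse))
  .run()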
Nice, thanks! More examples, explanations and learning materials are very welcome for akka-http, to make it more approachable.
Btw, some simplification of the http-client usage would help, too.
Internally we came up with something like:

trait HttpClient {
  implicit def system: ActorSystem
  implic
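The trait got cut off there; my reconstruction of the rough idea (everything past the second member is a guess):

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{HttpRequest, HttpResponse, Uri}
import akka.stream.Materializer
import scala.concurrent.Future

trait HttpClient {
  implicit def system: ActorSystem
  implicit def materializer: Materializer

  // convenience wrapper over the connection-pool client
  def get(uri: Uri): Future[HttpResponse] =
    Http()(system).singleRequest(HttpRequest(uri = uri))
}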
I'm trying to write an authenticator for akka-http using the Authorization header. I came up with something like:

def logUserIn(token: String): Future[Option[UserId]] = ???

val withUserId: Directive1[User.Id] = {
  extractExecutionContext.flatMap { implicit ec =>
    headerValueByType[Authorization]().flatMap { authH
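It continues roughly like this (my reconstruction from here on; I assume UserId and User.Id are the same alias, and that the header carries an OAuth2 bearer token):

import akka.http.scaladsl.model.headers.{Authorization, OAuth2BearerToken}
import akka.http.scaladsl.server.{AuthorizationFailedRejection, Directive1}
import akka.http.scaladsl.server.Directives._

val withUserId: Directive1[UserId] =
  extractExecutionContext.flatMap { implicit ec =>
    headerValueByType[Authorization]().flatMap { authHeader =>
      authHeader.credentials match {
        case OAuth2BearerToken(token) =>
          onSuccess(logUserIn(token)).flatMap {
            case Some(userId) => provide(userId) // authenticated
            case None         => reject(AuthorizationFailedRejection)
          }
        case _ => reject(AuthorizationFailedRejection) // not a bearer token
      }
    }
  }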
Thanks for comforting me!
I am seeing lots of errors of this kind:

akka://scamandrill/user/StreamSupervisor-3/flow-25-0-unknown-operation]
Error in stage [One2OneBidi]: Inner stream finished before inputs
completed. Outputs might have been truncated.
(akka.http.impl.util.One2OneBidiFlow$OutputTruncationException$

Where
Great news! From my experience, the current remoting (and the cluster that depends on it) is way too fragile to be used on boxes that are not in the same data center, and in the co-located case UDP is perfectly fine.
I expect that people trying to deploy in "enterprise" or "corporate" environments
Thanks! I assume it is the akka.http.parsing.max-content-length config setting.
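i.e. something like this (the value is illustrative; 8m is the default, as far as I remember):

akka.http.parsing.max-content-length = 16m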
On Friday, 11 March 2016 at 15:44:15 UTC+1, Akka Team wrote:
>
> Hi Marek,
>
> It is not the header that is null, the exception is thrown because there
> was a body that was too big. The reason you get null in the logs
I have an akka-http server app that processes chunked POST requests.
The client is also akka-http.
Once in a while I observe a failure with the following stack trace:

00:00:34.638 [imageresizer-akka.actor.default-dispatcher-20265] ERROR
a.a.ActorSystemImpl: Error during processing of request
HttpRequest(Http
That is my experience - but I do strange things, like rolling restarts.
I had a similar problem - basically the shard coordinator needs to read data from a majority of the nodes, which seems impossible in a small cluster while nodes are being added/removed.
It seems that it is not as easy as it originally appeared - specifically, error handling and handling failed writes need some work.
My results for akka.io-based remoting compared to netty3:
I use code posted some time ago on this list (source: https://gist.github.com/ibalashov/381f323ca976c3364c84), just becau
After adding framing to
https://github.com/marekzebrowski/akka-remote-io
it seems to work fine - the echo server started to respond to all messages properly.
There is still lots of work to do (error handling, configurability, UDP support, proper dispatcher usage), but it is probably not that far away f
I see my error - I just write the payload without any framing - I need to adjust the protocol to match the netty implementation.
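For the record, a minimal sketch of the length-prefix framing I mean, assuming a 4-byte big-endian length header (which is what netty's LengthFieldPrepender uses by default):

import java.nio.ByteOrder
import akka.util.{ByteString, ByteStringBuilder}

implicit val byteOrder: ByteOrder = ByteOrder.BIG_ENDIAN

// prepend the payload length so the receiver can split the TCP byte
// stream back into discrete messages
def frame(payload: ByteString): ByteString =
  new ByteStringBuilder().putInt(payload.length).result() ++ payload

// the read side would buffer until the advertised length has arrived
// before emitting a message (not shown)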
I wondered how hard it could be to write an akka-remote transport using akka.io only.
It turned out that it is not that hard to get started:
https://github.com/marekzebrowski/akka-remote-io
It is roughly ~200 lines for a proof-of-concept TCP implementation.
Probably I'm doing something wrong, as a simple test case
Probably yes - we didn't investigate very thoroughly which conditions are OK.
10:42:17 UTC+1, Filippo De Luca wrote:
>
> I agree with you. I think a message on eventBus will solve it.
>
> What about ddata? You say it does not allow scaling up or down; is that
> correct?
>
> On 18 February 2016 at 09:33, Marek Żebrowski > wrote:
>
ould stop the actor
>> system, but that might be too harsh.
>>
>> Please open an issue, and a pull request would also be very welcome.
>>
>> Thanks,
>> Patrik
>>
>> On Wed, Feb 17, 2016 at 9:00 AM, Marek Żebrowski <
>> marek.zebrow...@gmail.com> wr
We observe problems with both cluster sharding and cluster singletons.
With sharding, the usual problem is a corrupted journal that prevents the shard coordinator from starting. In our situation the easiest thing to do is to delete all data from the journal and restart it - the problem is that I can't find a wa
p jvm
2016-02-02 14:48 GMT+01:00 Patrik Nordwall :
>
>
> On Tue, Feb 2, 2016 at 2:07 PM, Marek Żebrowski > wrote:
>
>> Yes, I'm using cluster sharding with persistence
>>
>
> Then it is probably the sharding that triggers these connection attempts
>
Yes, I'm using cluster sharding with persistence
On Tuesday, 2 February 2016 at 13:18:43 UTC+1, Patrik Nordwall wrote:
>
> What version are you using? Are you using Cluster Sharding?
> /Patrik
>
> On Tue, Feb 2, 2016 at 10:58 AM, Marek Żebrowski > wrote:
>
We have a setup in which some nodes are auto-scaled.
Even after a clean node exit (DOWN), other nodes try to communicate with the already-left node:

WARN a.r.ReliableDeliverySupervisor: Association with remote system
[akka.tcp://sgact...@app-2016-01-31-224114.as.sgrouples.com:2552] has
failed, addr
Or, closer to Typesafe:
https://www.playframework.com/documentation/2.5.x/ScalaWS
On Tuesday, 26 January 2016 at 12:16:39 UTC+1, Marek Żebrowski wrote:
>
> There are some:
>
> http://dispatch.databinder.net/Dispatch.html
> rapture.io has some http client also I rem
There are some:
http://dispatch.databinder.net/Dispatch.html
rapture.io also has an http client, as I remember.
Twitter Finagle has one, but it depends on the rest of Finagle, I think:
https://twitter.github.io/finagle/guide/Clients.html
Probably Dispatch has the widest usage.
On Tuesday, 26 January 201
For now I took a different approach, manually parsing the parts:

formData.parts.map { part =>
  if (part.filename.isDefined) {
    val destination = File.createTempFile("akka-http-upload", ".tmp")
    val fileInfo = FileInfo(part.name, part.filename.get,
      part.entity.contentType)
    part.entity.d
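A fuller sketch of the same idea (my reconstruction from here on - it assumes an implicit materializer and execution context in scope, and the strict timeout is illustrative):

import java.io.File
import scala.concurrent.duration._
import akka.stream.scaladsl.{FileIO, Sink}

formData.parts.mapAsync(parallelism = 1) { part =>
  if (part.filename.isDefined) {
    // file part: stream the bytes to a temp file
    val destination = File.createTempFile("akka-http-upload", ".tmp")
    part.entity.dataBytes
      .runWith(FileIO.toPath(destination.toPath))
      .map(_ => part.name -> destination.getPath)
  } else {
    // ordinary form field: small, so read it into memory
    part.toStrict(5.seconds).map(strict => part.name -> strict.entity.data.utf8String)
  }
}.runWith(Sink.seq)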
Thanks!
I'm trying to use file upload together with form fields in one endpoint. My smallest use case that demonstrates the failure:
the request is 'multipart/form-data' with fields as in the example;
if the uploaded "image" is large enough (about 1 MB) I get the error:
MalformedFormFieldRejection(sizes,Substream Source cannot
Just in time. Maybe cluster sharding with these additions will be more cooperative during cluster rolling restarts. I'm giving it a try ASAP!
e to
> investigate the issue. Can you actually share a standalone test case that I
> can use to investigate further?
>
> -Endre
>
>
>
> On Mon, Sep 7, 2015 at 12:33 PM, Marek Żebrowski > wrote:
>
>> I'm using AkkaSystem in a test.
>> system config has enabl
I'm using AkkaSystem in a test. The system config has remoting enabled:

akka {
  actor {
    creation-timeout = 5s
  }
  remote {
    enabled-transports = ["akka.remote.netty.tcp"]
    netty.tcp {
      hostname = "localhost"
      port = 7337
    }
  }
  cluster {
    auto-join = off
  }
}
I try to
Probably you want to add a note to the docs at
http://doc.akka.io/docs/akka/2.3.12/scala/cluster-usage.html#Cluster_Dispatcher
On Thursday, 27 August 2015 at 09:41:41 UTC+2, Marek Żebrowski wrote:
>
> Thanks a lot!!!
>
> On Thursday, 27 August 2015 at 09:17:13 UTC+
art cluster with 1 dispatcher thread, previously 5
> threads were required to avoid the risk of deadlock.
>
> The immediate workaround for you is to change the configuration of your
> cluster-dispatcher. Use at least pool size of 5 threads.
>
> /Patrik
>
> On Wed, Aug 26,
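For reference, the workaround config would look something like this (a sketch following the docs' cluster-dispatcher example; names and values are illustrative):

cluster-dispatcher {
  type = Dispatcher
  executor = "fork-join-executor"
  fork-join-executor {
    parallelism-min = 5  # at least 5 threads, per the advice above
    parallelism-max = 5
  }
}
akka.cluster.use-dispatcher = cluster-dispatcher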
Patrik Nordwall wrote:
>
>
>
> On Wed, Aug 26, 2015 at 3:50 PM, Marek Żebrowski > wrote:
>
>> Problem still persists.
>> 1. I changed boot procedure to wait until cluster starts. it is done by:
>>
>> class ClusterWaiter(p: Promise[Boole
led
On Wednesday, 26 August 2015 at 15:50:30 UTC+2, Marek Żebrowski wrote:
>
> Problem still persists.
> 1. I changed boot procedure to wait until cluster starts. it is done by:
>
> class ClusterWaiter(p: Promise[Boolean]) extends Actor {
> override def
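The snippet is cut off; a hedged reconstruction of the idea (the membership-event details are my guess):

import akka.actor.Actor
import akka.cluster.Cluster
import akka.cluster.ClusterEvent.MemberUp
import scala.concurrent.Promise

class ClusterWaiter(p: Promise[Boolean]) extends Actor {
  private val cluster = Cluster(context.system)
  cluster.subscribe(self, classOf[MemberUp]) // be notified of members going Up

  override def receive: Receive = {
    case MemberUp(member) if member.address == cluster.selfAddress =>
      p.trySuccess(true) // this node has joined - boot can proceed
      context.stop(self)
  }
}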
kka.tcp://sgact...@app1.groupl.es:2552] - Starting up...
suggests that the Cluster extension is starting several times, from different threads. Maybe that is the root cause of the problem? Maybe there is a race in extension startup / cluster startup?
On Thursday, 6 August 2015 at 11:59:39 UTC+2,
> do you do any blocking (long running) tasks on the default-dispatcher?
> /Patrik
>
> On Mon, Aug 3, 2015 at 10:42 AM, Marek Żebrowski > wrote:
>
>> That failure leads to another failure: actor name not unique
>>
>> 08:11:39.893 [sgActors-akka.actor.default-disp
That failure leads to another failure: the actor name is not unique.

08:11:39.893 [sgActors-akka.actor.default-dispatcher-17] ERROR
> a.c.ClusterCoreSupervisor: actor name [cluster] is not unique!
> akka.actor.InvalidActorNameException: actor name [cluster] is not unique!
Quite often, especially on slower machines (test, integration), the Akka cluster extension times out:

08:11:39.642 [sgActors-akka.actor.default-dispatcher-4] ERROR
Cluster(akka://sgActors): Failed to startup Cluster. You can try to
increase 'akka.actor.creation-timeout'.
java.util.concurrent.Timeou
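The obvious workaround is the one the message itself suggests (the value here is illustrative):

# give slower machines more time to create the cluster actors
akka.actor.creation-timeout = 60s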
I gather that the other side is breaking the connection:

2015-04-03 12:07:53,806 INFO akka.actor.LocalActorRef
akka://sgActors/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FsgActors%4010.90.23.151%3A39036-1
- Message [akka.remote.transport.Association
Handle$Disassociated] from
If I add more nodes, they are disconnected at almost exactly the same time:
11:24:42.232 [sgActors-akka.actor.default-dispatcher-62] WARN
a.r.ReliableDeliverySupervisor: Association with remote system
[akka.tcp://sgActors@10.89.144.8:2555] has failed, address is now gated for
[0] ms. Reason is:
The same situation also happens if both nodes run on the same machine, so it is probably not a network issue, but rather some problem in the code - maybe some silent exception that causes the EndpointWriter to disassociate, though there is no trace of it in the logs.
I run a very small, 3-node cluster on EC2 and I observe constant disassociations.
Heartbeats are exchanged, nothing is lost:
05:52:06.702 [sgActors-akka.actor.default-dispatcher-62] DEBUG
a.c.ClusterHeartbeatSender: Cluster Node
[akka.tcp://sgact...@app1.sgrouples.com:2552] - Heartbeat to
[
l, less than a few hundred bytes.
I'll try changing
akka.cluster.failure-detector.acceptable-heartbeat-pause
to see if it makes a difference.
Thanks!
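i.e. something like this (the default is 3s, as far as I remember; the value here is a guess):

akka.cluster.failure-detector.acceptable-heartbeat-pause = 10s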
On Thursday, 26 March 2015 at 14:10:29 UTC+1, Patrik Nordwall wrote:
>
>
>
> On Wed, Mar 25, 2015 at 8:59 AM,
I'm trying to run a small cluster, just 3 nodes on EC2.
I configured dispatchers and increased the failure-detector threshold as recommended:

{
  cluster {
    use-dispatcher = cluster-dispatcher
    failure-detector {
      threshold = 12
    }
    auto-down-unreachable-after = 10s
    retry-unsu