Re: [akka-user] Akka Cluster Failing for Huge Data File

2017-05-16 Thread Kunal Ghosh
Is it because I am running the application on a single physical machine that it is taking more time to process? On Tuesday, May 16, 2017 at 9:22:28 AM UTC+5:30, Kunal Ghosh wrote:
> Hey Patrik, thanks for the help! Your solution worked! Now when I run my applic…

Re: [akka-user] Akka Cluster Failing for Huge Data File

2017-05-17 Thread Kunal Ghosh
We would be able to do that in Lightbend's commercial support.
> Regards,
> Patrik
> On Wed, May 17, 2017 at 6:51 AM, Kunal Ghosh <kunal@gmail.com> wrote:
>> Is it because I am running the application on a single physical machine, the…

Re: [akka-user] Akka Cluster Failing for Huge Data File

2017-05-17 Thread Kunal Ghosh
…default) the files should be deleted when the actor system is terminated, but not if the process is killed abruptly.
> On Wed, May 17, 2017 at 12:42 PM, Kunal Ghosh <kunal@gmail.com> wrote:
>> I just had to delete approx. 300 GB of aeron-temp files in…

Re: [akka-user] Akka Cluster Failing for Huge Data File

2017-05-12 Thread Kunal Ghosh
…May 8, 2017 at 6:37:17 PM UTC+5:30, Patrik Nordwall wrote:
> The port number for the seed-nodes does not match canonical.port = 25520. Replacing akka.tcp with akka is correct, and if you have that in the code somewhere it must be changed there also.
> On Mon, May 8, 2017 a…

Re: [akka-user] Akka Cluster Failing for Huge Data File

2017-05-15 Thread Kunal Ghosh
"java.io.File" = kryo
"java.util.HashMap" = kryo
"org.iceengine.compare.engine.ICECompare$CompareType" = kryo
"org.iceengine.common.CompareChain" = kryo
"org.apache.commons.collections.comparators.ComparatorChain" = kryo
"[Lor…
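For context, bindings like the ones quoted above normally live under `akka.actor.serialization-bindings` in `application.conf`. The fragment below is a hypothetical sketch, not the poster's actual config: the serializer class shown (chill-akka's `AkkaSerializer`) is an assumption about how the `kryo` name was registered, and the class list is taken from the quoted preview.

```hocon
# Hypothetical application.conf fragment (structure assumed, not from the post)
akka.actor {
  serializers {
    # Assumption: "kryo" was registered to some kryo-based serializer,
    # e.g. chill-akka's. The original post does not show this part.
    kryo = "com.twitter.chill.akka.AkkaSerializer"
  }
  serialization-bindings {
    "java.io.File" = kryo
    "java.util.HashMap" = kryo
    "org.iceengine.compare.engine.ICECompare$CompareType" = kryo
    "org.apache.commons.collections.comparators.ComparatorChain" = kryo
  }
}
```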

Re: [akka-user] Akka Cluster Failing for Huge Data File

2017-05-18 Thread Kunal Ghosh
…68,965,517 bytes/s
>> - 10,000 bytes message: 12,392 msg/s, 123,924,183 bytes/s
>> On Wed, May 17, 2017 at 1:50 PM, Kunal Ghosh <kunal@gmail.com> wrote:
>>> Thanks, I really appreciate your help! When I ru…

Re: [akka-user] Akka Cluster Failing for Huge Data File

2017-05-22 Thread Kunal Ghosh
…wrote:
> I don't think the bottleneck is in Artery/Aeron. We have measured very good throughput, such as:
> - 100 bytes message: 689,655 msg/s, 68,965,517 bytes/s
> - 10,000 bytes message: 12,392 msg/s, 123,924,183 bytes/s

[akka-user] Akka Cluster Failing for Huge Data File

2017-05-05 Thread Kunal Ghosh
Hi, my application uses an Akka cluster which has one master node and two child seed nodes. The master node reads data from an input file and sends it over to both child nodes for evaluation (processing). The application works fine for a smaller data file, e.g. a file with 43 rows, but when the input file…
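The distribution step described above (master reads rows, deals them out to the child nodes) can be sketched in plain Java. This is only an illustration of the row-splitting logic; the Akka actor plumbing is omitted, and the class and method names (`RowDistributor`, `distribute`) are hypothetical, not from the original post.

```java
import java.util.ArrayList;
import java.util.List;

public class RowDistributor {
    // Deal the input rows out to nChildren child nodes in round-robin
    // order -- the simplest version of "send rows to both children".
    static List<List<String>> distribute(List<String> rows, int nChildren) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < nChildren; i++) batches.add(new ArrayList<>());
        for (int i = 0; i < rows.size(); i++) {
            batches.get(i % nChildren).add(rows.get(i));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> rows = List.of("r0", "r1", "r2", "r3", "r4");
        List<List<String>> out = distribute(rows, 2);
        System.out.println(out.get(0));  // [r0, r2, r4]
        System.out.println(out.get(1));  // [r1, r3]
    }
}
```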

Re: [akka-user] Akka Cluster Failing for Huge Data File

2017-05-08 Thread Kunal Ghosh
…should be small (a few 100 kB at most). Otherwise they will prevent other messages from getting through, such as cluster heartbeat messages. Split the large message into smaller messages, or transfer it on a side channel such as Akka HTTP or Stream TCP. I'd also recommend that you try the…
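The "split the large message into smaller messages" advice quoted above can be sketched in plain Java. The names (`Chunker`, `split`) and the 100 kB ceiling are illustrative, taken only from the size guidance in the quote:

```java
import java.util.ArrayList;
import java.util.List;

public class Chunker {
    // Split a large payload into chunks of at most maxBytes each, so that
    // every chunk can be sent as its own small actor message instead of
    // one huge message that starves cluster heartbeats.
    static List<byte[]> split(byte[] payload, int maxBytes) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < payload.length; off += maxBytes) {
            int len = Math.min(maxBytes, payload.length - off);
            byte[] chunk = new byte[len];
            System.arraycopy(payload, off, chunk, 0, len);
            chunks.add(chunk);
        }
        return chunks;
    }

    public static void main(String[] args) {
        byte[] big = new byte[250_000];            // pretend 250 kB batch of rows
        List<byte[]> parts = split(big, 100_000);  // 100 kB ceiling per message
        System.out.println(parts.size());          // 3
    }
}
```

Each chunk would then be sent as a separate message (with a sequence number and a final marker in a real protocol, so the receiver can reassemble).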

[akka-user] Akka | Work - Pull pattern running into problem !

2017-10-12 Thread Kunal Ghosh
Hi, I am using Akka with the work-pulling pattern (producer-consumer) in my application. (Ref - https://github.com/MartinKanters/java-akka-workpulling-example) I have a single producer and multiple consumers. My machine has a CPU with 16 cores and 32 threads. The number of consumers is configurable. When I…
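The work-pulling idea referenced above can be illustrated with plain JDK primitives (no Akka; all names here are illustrative, not from the linked example): workers *pull* tasks from a shared bounded queue rather than having work pushed at them, so a fast producer blocks on `put()` whenever the consumers fall behind.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class WorkPullingDemo {
    static int run(int nWorkers, int nTasks) throws InterruptedException {
        // Small bounded buffer: this is what gives backpressure.
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(8);
        AtomicInteger done = new AtomicInteger();
        Integer poison = -1;  // shutdown marker; real tasks are >= 0

        Runnable worker = () -> {
            try {
                while (true) {
                    Integer task = queue.take();      // pull the next task
                    if (task.equals(poison)) return;  // stop on marker
                    done.incrementAndGet();           // "process" the task
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        Thread[] workers = new Thread[nWorkers];
        for (int i = 0; i < nWorkers; i++) {
            workers[i] = new Thread(worker);
            workers[i].start();
        }

        for (int t = 0; t < nTasks; t++) queue.put(t);        // blocks when full
        for (int i = 0; i < nWorkers; i++) queue.put(poison); // one marker per worker
        for (Thread w : workers) w.join();
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(4, 1000));  // 1000
    }
}
```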

[akka-user] Re: Akka | Work - Pull pattern running into problem !

2017-10-12 Thread Kunal Ghosh
Yes, the producer is fast and the consumers cannot keep up with it; that is why I used the work-pulling pattern. Does Akka Streams have a solution for the producer outrunning the consumers? Or is there any other option? On Thursday, October 12, 2017 at 7:30:21 PM UTC+5:30, Bwmat wrote:
> Sounds like your…
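Akka Streams does address exactly this case: its stages propagate demand upstream, so a producer only emits as fast as consumers request (backpressure). As a library-free illustration of the same demand-driven idea, the sketch below uses Java's built-in Flow API (`java.util.concurrent`), where `SubmissionPublisher.submit` blocks once the subscriber's buffer is saturated. This is an illustration only, not the poster's code.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;
import java.util.concurrent.atomic.AtomicInteger;

public class BackpressureDemo {
    static int runDemo() throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        AtomicInteger received = new AtomicInteger();

        // Subscriber that requests one element at a time: the publisher's
        // bounded buffer makes submit() block when this subscriber lags,
        // which is the backpressure being asked about.
        Flow.Subscriber<Integer> slow = new Flow.Subscriber<>() {
            Flow.Subscription sub;
            public void onSubscribe(Flow.Subscription s) { sub = s; s.request(1); }
            public void onNext(Integer item) { received.incrementAndGet(); sub.request(1); }
            public void onError(Throwable t) { done.countDown(); }
            public void onComplete() { done.countDown(); }
        };

        try (SubmissionPublisher<Integer> pub = new SubmissionPublisher<>()) {
            pub.subscribe(slow);
            for (int i = 0; i < 100; i++) pub.submit(i);  // blocks if buffer is full
        }  // close() signals onComplete
        done.await();
        return received.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo());  // 100
    }
}
```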