Re: [akka-user] Akka Cluster Failing for Huge Data File

2017-05-22 Thread Kunal Ghosh
Hi Patrik, thanks for all your help. I found a solution for the issue I was facing last time: akka.actor.serialize-messages was 'on', and I switched it to 'off'. Now, while running, the program gets stuck after processing 1 - 15000 rows; it never throws any error or exception. The processing is done
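For reference, serialize-messages = on forces every message, even local ones, through serialization as a verification aid, which is costly outside of tests. A minimal application.conf sketch of the setting in question (the surrounding akka block is the standard namespace, nothing project-specific):

akka {
  actor {
    # Verifies that all messages are serializable.
    # Useful in tests, but adds heavy overhead in production.
    serialize-messages = off
  }
}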

Re: [akka-user] Akka Cluster Failing for Huge Data File

2017-05-18 Thread Kunal Ghosh
Hi Patrik, one request: could you please tell me whether the slow performance is because I am serializing 150 files (classes), or whether there is a problem with my configuration? I am stuck; please help. On Thursday, May 18, 2017 at 2:13:28 PM UTC+5:30, Patrik Nordwall wrote: > > and I mean 10,000 bytes

Re: [akka-user] Akka Cluster Failing for Huge Data File

2017-05-18 Thread Patrik Nordwall
and I mean 10,000 bytes message, ofc. On Thu, May 18, 2017 at 10:42 AM, Patrik Nordwall wrote: > I don't think the bottleneck is in Artery/Aeron. We have measured very > good throughput, such as: > - 100 bytes message: 689,655 msg/s, 68,965,517 bytes/s > -

Re: [akka-user] Akka Cluster Failing for Huge Data File

2017-05-18 Thread Patrik Nordwall
I don't think the bottleneck is in Artery/Aeron. We have measured very good throughput, such as:
- 100 bytes message: 689,655 msg/s, 68,965,517 bytes/s
- 10,000 bytes message: 12,392 msg/s, 123,924,183 bytes/s
On Wed, May 17, 2017 at 1:50 PM, Kunal Ghosh wrote: >
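(Those two figures are internally consistent: 689,655 msg/s × 100 bytes is about 69 MB/s, and 12,392 msg/s × 10,000 bytes is about 124 MB/s, which lines up with the quoted bytes/s numbers.)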

Re: [akka-user] Akka Cluster Failing for Huge Data File

2017-05-17 Thread Kunal Ghosh
Thanks, I really appreciate your help!!! When I run my application on a 43-row (2 KB) input file, it creates a 1.14 GB file in c:\Users\admin\AppData\Local\Temp. When I open up that file in an editor, apart from other things, I see my data in it. I think it is due to this writing of data to the file that the

Re: [akka-user] Akka Cluster Failing for Huge Data File

2017-05-17 Thread Patrik Nordwall
When you use the embedded media driver (that is the default), the files should be deleted when the actor system is terminated, but not if the process is killed abruptly. On Wed, May 17, 2017 at 12:42 PM, Kunal Ghosh wrote: > I just had to delete approx. 300 GB of aeron-temp
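A minimal sketch of what "terminated" means in practice here: terminate the system on normal JVM exit and wait for it, so the embedded Aeron media driver can remove its files under java.io.tmpdir (the system name "ClusterSystem" and the 10-second timeout are illustrative assumptions; a hard kill still bypasses this):

import akka.actor.ActorSystem;
import java.util.concurrent.TimeUnit;

public class GracefulShutdown {
    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("ClusterSystem");
        // On normal JVM exit, terminate the actor system and block until it
        // is down, giving the embedded media driver a chance to clean up.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            system.terminate();
            try {
                system.getWhenTerminated().toCompletableFuture()
                      .get(10, TimeUnit.SECONDS);
            } catch (Exception e) {
                // best effort during shutdown
            }
        }));
    }
}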

Re: [akka-user] Akka Cluster Failing for Huge Data File

2017-05-17 Thread Kunal Ghosh
I just had to delete approx. 300 GB of aeron-temp files in c:\Users\admin\AppData\Local\Temp. How do I clean up the Aeron files? On Wednesday, May 17, 2017 at 12:46:54 PM UTC+5:30, Patrik Nordwall wrote: > > Performance debugging/tuning is not something I can help with in free OSS > support. We would

Re: [akka-user] Akka Cluster Failing for Huge Data File

2017-05-17 Thread Patrik Nordwall
Performance debugging/tuning is not something I can help with in free OSS support. We would be able to do that in Lightbend's commercial support. Regards, Patrik On Wed, May 17, 2017 at 6:51 AM, Kunal Ghosh wrote: > Is it because I am running application on single

Re: [akka-user] Akka Cluster Failing for Huge Data File

2017-05-16 Thread Kunal Ghosh
Is it because I am running the application on a single physical machine that it is taking more time to process? On Tuesday, May 16, 2017 at 9:22:28 AM UTC+5:30, Kunal Ghosh wrote: > > Hey Patrik, thanks for the help!! > Your solution worked! > Now when I run my application in non

Re: [akka-user] Akka Cluster Failing for Huge Data File

2017-05-15 Thread Kunal Ghosh
Hey Patrik, thanks for the help!! Your solution worked! Now, when I run my application in a non-clustered environment with a round-robin pool (no. of instances = 8), it takes 23 seconds to process 2 million rows of data. But when I run the same application in a clustered environment, it took 23 minutes
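For reference, the cluster-aware flavor of that same pool can be declared in application.conf; this is only a sketch, and the router path /workerRouter plus the per-node instance count are assumptions, not the actual settings from this thread:

akka.actor.deployment {
  /workerRouter {
    router = round-robin-pool
    cluster {
      # Deploy routees onto cluster member nodes instead of only locally.
      enabled = on
      max-nr-of-instances-per-node = 4
      allow-local-routees = off
    }
  }
}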

Re: [akka-user] Akka Cluster Failing for Huge Data File

2017-05-12 Thread Justin du coeur
My rule of thumb is that you should never, *ever* use *idstrategy = "incremental"*. What that is saying is that, at serialization time, if it encounters a type that isn't registered yet, it *makes up a registration out of thin air.* This is essentially guaranteed to fail unless the order of
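The safer alternative is explicit registration, so every node agrees on the same class-to-id mapping regardless of message order. A sketch in akka-kryo-serialization's config namespace (the ids 20 and 21 are arbitrary; the class names are the ones from this thread):

akka.actor.kryo {
  # With "explicit", unregistered classes fail fast instead of being
  # assigned ids on the fly, so all nodes stay consistent.
  idstrategy = "explicit"
  mappings {
    "org.iceengine.compare.engine.ICEUniqueSource" = 20
    "org.iceengine.compare.engine.ICEColSource" = 21
  }
}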

Re: [akka-user] Akka Cluster Failing for Huge Data File

2017-05-12 Thread Kunal Ghosh
Hi, how do I set generics for ObjectArraySerializer in Kryo? public class ICEUniqueSource { private final ICEColSource[] _columns; } public class ICEColSource { } I get the following error -- 00:11 TRACE: [kryo] Write field: _columns (org.iceengine.compare.engine.ICEUniqueSource) pos=788 00:11
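For what it's worth, a sketch of registering the array type with a pre-configured ObjectArraySerializer, against the Kryo 4.x DefaultArraySerializers API (whether this is enough to resolve the trace above is an assumption):

import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.serializers.DefaultArraySerializers.ObjectArraySerializer;

Kryo kryo = new Kryo();
kryo.register(ICEColSource.class);

// Give the serializer the concrete element type up front so it does not
// have to write per-element class information.
ObjectArraySerializer ser = new ObjectArraySerializer(kryo, ICEColSource[].class);
ser.setElementsAreSameType(true);  // the array holds exactly ICEColSource, no subclasses
ser.setElementsCanBeNull(false);   // skip per-element null flags
kryo.register(ICEColSource[].class, ser);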

Re: [akka-user] Akka Cluster Failing for Huge Data File

2017-05-08 Thread Patrik Nordwall
The port number for the seed-nodes does not match canonical.port = 25520. Replacing akka.tcp with akka is correct, and if you have that in the code somewhere, it must be changed there also. On Mon, May 8, 2017 at 2:36 PM, Kunal Ghosh wrote: > Thanks @Patrik Your help is
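For reference, a consistent pair of settings looks like this; a sketch in which the system name ClusterSystem and host 127.0.0.1 are placeholders:

akka {
  remote.artery {
    enabled = on
    canonical.hostname = "127.0.0.1"
    canonical.port = 25520
  }
  cluster {
    # Artery addresses use the akka:// scheme (not akka.tcp://), and the
    # port here must match canonical.port above.
    seed-nodes = [
      "akka://ClusterSystem@127.0.0.1:25520"
    ]
  }
}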

Re: [akka-user] Akka Cluster Failing for Huge Data File

2017-05-08 Thread Kunal Ghosh
Thanks @Patrik, your help is much appreciated!! Below is my configuration for Kryo serialization and the Artery remote implementation in the application.conf file. Please go through it and tell me whether it is correct. Also, I have a question: is changing the configuration enough, or will I have

Re: [akka-user] Akka Cluster Failing for Huge Data File

2017-05-06 Thread Patrik Nordwall
First, don't use Java serialization, for performance and security reasons. Second, actor messages should be small (a few 100 kB at most); otherwise they will prevent other messages from getting through, such as cluster heartbeat messages. Split the large message into smaller messages, or transfer it on
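A minimal sketch of the splitting approach on the sending side: stream the file as fixed-size row batches, each one its own message, so heartbeats are never starved behind one huge payload (BATCH_SIZE, RowBatch, and workerRouter are hypothetical names for illustration):

import akka.actor.ActorRef;
import java.io.BufferedReader;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

// A small message type carrying one batch of rows; bind it to the
// non-Java serializer (e.g. Kryo) discussed elsewhere in this thread.
final class RowBatch {
    final List<String> rows;
    RowBatch(List<String> rows) { this.rows = rows; }
}

final class BatchSender {
    static final int BATCH_SIZE = 1000;

    // Called from inside the master actor; 'self' is its own ActorRef.
    static void sendInBatches(ActorRef workerRouter, ActorRef self) throws Exception {
        List<String> batch = new ArrayList<>(BATCH_SIZE);
        try (BufferedReader reader = Files.newBufferedReader(Paths.get("input.csv"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                batch.add(line);
                if (batch.size() == BATCH_SIZE) {
                    workerRouter.tell(new RowBatch(new ArrayList<>(batch)), self);
                    batch.clear();
                }
            }
        }
        if (!batch.isEmpty()) {
            workerRouter.tell(new RowBatch(batch), self);
        }
    }
}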

[akka-user] Akka Cluster Failing for Huge Data File

2017-05-05 Thread Kunal Ghosh
Hi, my application uses an Akka cluster which has one master node and two child seed nodes. The master node reads data from the input file and sends it over to both child nodes for evaluation (processing). The application works fine for a smaller data file, e.g. a file with 43 rows, but when the input file