Did you call Socket.setTcpNoDelay with true?  The buffering from Nagle's 
algorithm has a big impact if you didn't.
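
Roughly like this, in case it helps -- the host, port, and payload below are 
just placeholders, not your actual setup:

    import java.io.ObjectOutputStream;
    import java.net.Socket;

    public class NoDelaySender {
        public static void main(String[] args) throws Exception {
            // "remote-host" and 9999 stand in for wherever your receiver runs
            Socket socket = new Socket("remote-host", 9999);

            // Disable Nagle's algorithm so small writes go out immediately
            // instead of being coalesced while the stack waits for ACKs
            socket.setTcpNoDelay(true);

            ObjectOutputStream out =
                    new ObjectOutputStream(socket.getOutputStream());
            out.writeObject("event");   // stand-in for your simulation events
            out.flush();
            out.close();
            socket.close();
        }
    }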

NIO probably won't help -- the JDK uses it underneath anyway.

Also standard RMI is much faster than JERI in my experience.

- Tim

On Sep 17, 2010, at 7:28 AM, Silas De Munck <silas.demu...@ua.ac.be> wrote:

> Hello,
> 
> In the context of a distributed discrete event simulation, I need to be able 
> to send objects at a high rate between program instances on different hosts. 
> Initially I implemented this using remote calls to an object instance living 
> on the other side. This performs fairly well, but since I essentially only 
> need a one-way streaming connection with asynchronous sending, I thought 
> RMI was not the best fit for this. Therefore I added a TCP 
> implementation using Sockets and ObjectStreams.
> 
> Now comparing the RMI and the socket implementation, the results are somewhat 
> strange. I expected the sockets to perform better because of the reduced 
> overhead, but this isn't the case. My (naive) socket implementation only 
> reaches about 25% of the throughput rate of the RMI implementation.
> 
> Does RMI use a different (faster) serialization implementation than the one 
> used with ObjectInput/OutputStream?
> Are there any other differences that could explain the performance difference?
> 
> Any pointers on where to start investigating this issue would be very much 
> appreciated.
> 
> Regards,
> 
> Silas
> 
> 
> 
> 
> -- 
> 
> Silas De Munck
> PhD Student
> 
> Computational Modeling and Programming (COMP)
> University of Antwerp
> Middelheimlaan 1
> 2020 Antwerpen, Belgium
> G2.07, Department of Computer Science and Mathematics
> 
> e-mail: silas.demu...@ua.ac.be