Re: [JBoss-dev] jbossmq message transport times

2002-01-18 Thread Loren Rosen

* The test I'm running is using 'localhost' (at least, I think so-- it uses
the same infrastructure as the existing perf tests in the testsuite
module).

* It perhaps isn't surprising that MacOS X has TCPNODELAY defaulted to
false (and I bet MacOS Server doesn't). What had me puzzled was why that
would affect loopback tests. I might mention that earlier testing with
varying packet sizes did indeed show a stairstep response time with 200 ms
increases at each jump (in fact it wasn't just a random dart throw that
motivated my change to the TCPNODELAY).

* Even creating an empty message creates a packet with more than 200
bytes-- due to the overhead for the message id and so forth. On the other
hand, there is the application level acknowledgement packet, which is
indeed a tinygram.

* There is likely to be a bunch of work needed if we really want to support
high performance message traffic over high latency and/or low bandwidth
links. Changing the buffer sizes isn't the half of it.

Jeff Tulley wrote:

 Ok, I just got the run-down on buffer sizes and TCPNODELAY.

 The answer is, of course, IT DEPENDS.

 If what you are sending out always will be 100 or 200 bytes, the buffer
 size will not affect you very much - you were sending tiny-grams out
 before and will continue to do so even after setting TCPNODELAY.  The
 only difference is that before you were only sending a tiny-gram out
 every 200-400ms (good from a network traffic point of view, very BAD
 from a developer's point of view).  Then again, if these small messages
 are few and far between, I wouldn't sweat sending out the small packets
 at all.

 If you have a feel for typical MQ packet sizes you can come up with a
 better buffer size.  For instance, even if most messages are 2-4K, if
 you have occasional ones at 10K, then set your buffer to 10K.  The extra
 buffer is most likely worth the increase in efficiency. (Here again
 there is a fine line between efficiency and wasted, unused buffer
 space).

 Actually, a further refinement:  the typical TCP segment size (the
 Ethernet MSS) is 1460 bytes.  So try to set your buffer size to a
 multiple of this; instead of 16K you would do 16 x 1460 = 23360 bytes.
 That way you are maximizing the chance that all data out will
 completely fill up multiple packets instead of having a few full and
 then a partially empty packet every time.  I would not go much above
 this 16 multiplier unless all data sent is always way larger than
 that, but this is not a hard rule.

 Complicated enough?  It definitely will be worth it to implement this.
 1000's of packets per second vs 5 packets per second (or as Loren saw,
 about 2.5 per second) is a HUGE deal.

 Jeff Tulley  ([EMAIL PROTECTED])
 (801)861-5322
 Novell, Inc., the leading provider of Net business solutions
 http://www.novell.com

  Hiram Chirino [EMAIL PROTECTED] 1/16/02 1:59:34 PM 
 So,

   Are you saying that if we do TCPNODELAY, just make sure we setup a
 BIG
 BufferedOutputStream and make sure we got our flush()es in the right
 places???  The question is: how BIG should the buffer be???

 Regards,
 Hiram

 From: Jeff Tulley [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Subject: RE: [JBoss-dev] jbossmq message transport times
 Date: Wed, 16 Jan 2002 10:03:23 -0700
 
 Actually it is even a little more complicated than that.  Nagle's
 algorithm by itself would not cause such delays, but what happened is
 that at the other end somebody (I forget who) came up with the idea of
 a
 delayed ACK.  Since an ACK is a small packet, and one ACK can
 acknowledge receipt for multiple packets, the delayed ACK algorithm
 tries to send an ACK only every other packet.  This saves the wire
 from
 having too many tiny-gram ACK packets.
 
 Combined with the Nagle algorithm, this causes death of communication
 since neither side will budge.  Eventually the delayed ACK algorithm
 times out (typically 200 ms, depending on the OS), and the ACK is
 sent.
 So, you get down to about 5 packets per second throughput.
 
 There are two other algorithms, which modify Nagle's, to get around this
 problem.  One, the Doupnik algorithm, uses information from the
 data
 to determine if there is more coming, and it sends the last bit
 immediately without waiting for the buffer to fill up.  So, you get
 full
 packets until the last one, which will be a tiny gram, but you do
 not
 get the 200 ms delay for every packet since the receiving end is
 continually getting information.  See
 http://netlab1.usu.edu/pub/misc/draft-doupnik-tcpimpl-nagle-mode-00.txt

 for information.
 
 The other algorithm is called the Minshall Algorithm.  I do not know
 a
 lot about it, but it is along the same lines.  See
 http://lists.w3.org/Archives/Public/ietf-discuss/1998Dec/0025.html
 for a discussion.
 
 The problem with setting TCPNODELAY is that it disables Nagle's
 altogether and you get the wire flooded with a lot of tiny packets.
 But, in the web space it is exactly these tiny packets which trigger the
 problem, and otherwise you lose throughput. [...]

Re: [JBoss-dev] jbossmq message transport times

2002-01-18 Thread Loren Rosen

I made two changes which I'll describe here. I haven't yet investigated the
mechanics of submitting patches.

The changes are both in the org.jboss.mq.il.oil package. Note there are analogous
places in the other il subpackages that may also need changing-- my current test
evidently doesn't exercise the other code paths.

In OILServerIL, add a line to the createConnection method:

private void createConnection()
   throws Exception
{
   socket = new Socket(addr, port);

   // add the next line
   socket.setTcpNoDelay(true);

   in = new ObjectInputStream(new BufferedInputStream(socket.getInputStream()));
   out = new ObjectOutputStream(new BufferedOutputStream(socket.getOutputStream()));
   out.flush();
}

The same line gets added in OILServerILService

public void run()
{
   try
   {
      while (running)
      {
         Socket socket = null;
         try
         {
            socket = serverSocket.accept();
         }
         catch (java.io.InterruptedIOException e)
         {
            continue;
         }

         // it's possible that the service is no longer
         // running but it got a connection, no point in
         // starting up a thread!
         //
         if (!running)
         {
            if (socket != null)
            {
               try
               {
                  socket.close();
               }
               catch (Exception e)
               {
                  // do nothing
               }
            }
            return;
         }

         try
         {
            socket.setSoTimeout(0);

            // add next line
            socket.setTcpNoDelay(true);

            new Thread(new Client(socket), "OIL Worker").start();
         }
         catch (IOException ie)
         {
            if (log.isDebugEnabled())
            {
               log.debug("IOException processing client connection", ie);
               log.debug("Dropping client connection, server will not terminate");
            }
         }
      }
   }
   catch (SocketException e)
   {
      // There is no easy way (other than string comparison) to
      // determine if the socket exception is caused by connection
      // reset by peer. In this case, it's okay to ignore both
      // SocketException and IOException.
      log.warn("SocketException occurred (Connection reset by peer?). " +
               "Cannot initialize the OILServerILService.");
   }
   catch (IOException e)
   {
      log.warn("IOException occurred. Cannot initialize the OILServerILService.");
   }
   finally
   {
      try
      {
         serverSocket.close();
      }
      catch (Exception e)
      {
         log.debug("error closing server socket", e);
      }
      return;
   }
}


Christian Riege wrote:

 hi,

 On Wed, 2002-01-16 at 10:58, Sacha Labourey wrote:

  I guess that if you want to modify this, you need to make it optional.
 
  The TCPNODELAY flag is related to the Nagle's algorithm

 true.

  This algorithm is made to avoid sending very small packets each time you
  send data through your socket [...]

 primarily useful for telnet/ssh style applications.

  I am not familiar with JBossMQ code base but if you think that the
  implementation could, at some time, send many small messages and would

 JBossMQ sends packets whenever a message needs to be delivered via the
 IL (except for the InVM layer but we can put that aside).

  suffer from sending a packet each time, then you should most probably make
  this setting optional. But, on the contrary, if you think that your packets
  (*all* packets, even ACKs) always have a sufficient size and you don't want
  Nagle to take care of you, then it is up to you.

 ... don't forget the new pinger thread which tries to ping the server
 every once in a while to ensure it's still up and running.

 bottom line: TCP_NODELAY should be set to *enabled* whenever you're in a
 low latency / high bandwidth situation, i.e. a LAN or something similar.
 I guess this applies to at the very least 80% of the JBossMQ
 installations out there.

 I would disable TCP_NODELAY only when in a very low bandwidth /
 high latency situation.

 Loren, can you pls. submit a patch via SourceForge (or send it to this
 list) and we'll get this integrated into the codebase.

 Regarding Mac OS/X in this situation: it could very well be that their
 default mode of operation is to have TCP_NODELAY disabled by default,
 even when running on the loopback device -- you ARE using 127.0.0.1
 after all, right? On my Linux box this is a switch when configuring the
 kernel (at least it has been when I was still building my own kernels
 ... which hasn't happened since 2.0.something).

 Regards,
 Christian


Re: [JBoss-dev] jbossmq message transport times

2002-01-18 Thread Scott M Stark

The TcpNoDelay flag should be a configurable attribute
exposed through the org.jboss.mq.il.oil.OILServerILServiceMBean.
Patches should be submitted through sourceforge as cvs diffs.
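
A minimal sketch of what such an attribute might look like (the attribute
name EnableTcpNoDelay, its default, and the surrounding skeleton are
illustrative assumptions, not the actual JBossMQ source):

   // In org.jboss.mq.il.oil.OILServerILServiceMBean (sketch)
   public interface OILServerILServiceMBean
   {
      // ...existing attributes omitted...

      /** Should TCP_NODELAY be set on the sockets used by the OIL layer? */
      boolean getEnableTcpNoDelay();
      void setEnableTcpNoDelay(boolean enable);
   }

   // In OILServerILService (sketch)
   private boolean enableTcpNoDelay = true;

   public boolean getEnableTcpNoDelay() { return enableTcpNoDelay; }
   public void setEnableTcpNoDelay(boolean enable) { this.enableTcpNoDelay = enable; }

   // ...then inside run(), after accept():
   //    socket.setSoTimeout(0);
   //    socket.setTcpNoDelay(enableTcpNoDelay);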


Scott Stark
Chief Technology Officer
JBoss Group, LLC


- Original Message -
From: Loren Rosen [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, January 18, 2002 1:58 PM
Subject: Re: [JBoss-dev] jbossmq message transport times


 I made two changes which I'll describe here. I haven't yet investigated
the
 mechanics of submitting patches.

 The changes are both in the org.jboss.mq.il.oil package. Note there are
analogous
 places in the other il subpackages that may also need changing-- my
current test
 evidently doesn't exercise the other code paths.

 In OILServerIL, add a line to the createConnection method:

 private void createConnection()
    throws Exception
 {
    socket = new Socket(addr, port);

    // add the next line
    socket.setTcpNoDelay(true);







RE: [JBoss-dev] jbossmq message transport times

2002-01-16 Thread Sacha Labourey

Hello,

I guess that if you want to modify this, you need to make it optional.

The TCPNODELAY flag is related to Nagle's algorithm.

This algorithm is made to avoid sending very small packets each time you
send data through your socket; instead it waits 400ms (generally), or for a
full buffer (or for a message from the other side, if I remember well), and
sends a packet that wraps more data. Thus, it tends to minimize the amount
of small data that is sent.

I am not familiar with the JBossMQ code base, but if you think that the
implementation could, at some time, send many small messages and would
suffer from sending a packet each time, then you should most probably make
this setting optional. But, on the contrary, if you think that your packets
(*all* packets, even ACKs) always have a sufficient size and you don't want
Nagle to take care of you, then it is up to you.
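
One minimal way to make the flag optional (a sketch only; the system
property name is hypothetical, and the thread elsewhere settles on exposing
it as an MBean attribute instead):

   // e.g. run with -Djbossmq.oil.tcpnodelay=true
   boolean noDelay = Boolean.getBoolean("jbossmq.oil.tcpnodelay");
   socket.setTcpNoDelay(noDelay);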

Cheers,


Sacha

 -----Original Message-----
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED]] On Behalf Of
 Hiram Chirino
 Sent: Wednesday, 16 January 2002 04:17
 To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
 Subject: Re: [JBoss-dev] jbossmq message transport times



 I can't think of a reason that it would break anything...





Re: [JBoss-dev] jbossmq message transport times

2002-01-16 Thread Lennart Petersson

On 2002-01-16 02:37:12, Loren Rosen [EMAIL PROTECTED] wrote:

I'm
testing on MacOS X, which for our purposes is just another Unix flavor
with an oddball GUI.

Hi hi hi I'm waiting for my PB TI to arrive... really longing for Unix with an 
oddball GUI :)

/Lennart






RE: [JBoss-dev] jbossmq message transport times

2002-01-16 Thread Christian Riege

hi,

On Wed, 2002-01-16 at 10:58, Sacha Labourey wrote:

 I guess that if you want to modify this, you need to make it optional.
 
 The TCPNODELAY flag is related to the Nagle's algorithm

true.

 This algorithm is made to avoid sending very small packets each time you
 send data through your socket [...]

primarily useful for telnet/ssh style applications.

 I am not familiar with JBossMQ code base but if you think that the
 implementation could, at some time, send many small messages and would

JBossMQ sends packets whenever a message needs to be delivered via the
IL (except for the InVM layer but we can put that aside).

 suffer from sending a packet each time, then you should most probably make
 this setting optional. But, on the contrary, if you think that your packets
 (*all* packets, even ACKs) always have a sufficient size and you don't want
 Nagle to take care of you, then it is up to you.

... don't forget the new pinger thread which tries to ping the server
every once in a while to ensure it's still up and running.

bottom line: TCP_NODELAY should be set to *enabled* whenever you're in a
low latency / high bandwidth situation, i.e. a LAN or something similar.
I guess this applies to at the very least 80% of the JBossMQ
installations out there.

I would disable TCP_NODELAY only when in a very low bandwidth /
high latency situation.

Loren, can you pls. submit a patch via SourceForge (or send it to this
list) and we'll get this integrated into the codebase.

Regarding Mac OS/X in this situation: it could very well be that their
default mode of operation is to have TCP_NODELAY disabled by default,
even when running on the loopback device -- you ARE using 127.0.0.1
after all, right? On my Linux box this is a switch when configuring the
kernel (at least it has been when I was still building my own kernels
... which hasn't happened since 2.0.something).

Regards,
Christian






RE: [JBoss-dev] jbossmq message transport times

2002-01-16 Thread Jeff Tulley

Actually it is even a little more complicated than that.  Nagle's
algorithm by itself would not cause such delays, but what happened is
that at the other end somebody (I forget who) came up with the idea of a
delayed ACK.  Since an ACK is a small packet, and one ACK can
acknowledge receipt for multiple packets, the delayed ACK algorithm
tries to send an ACK only every other packet.  This saves the wire from
having too many tiny-gram ACK packets.

Combined with the Nagle algorithm, this causes death of communication
since neither side will budge.  Eventually the delayed ACK algorithm
times out (typically 200 ms, depending on the OS), and the ACK is sent. 
So, you get down to about 5 packets per second throughput.
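
For the curious, here is a small self-contained probe of this interaction
(a sketch, not JBossMQ code; whether the ~200 ms stall actually shows up
depends on the OS TCP stack). It does two small writes per request (a
write-write-read pattern) and times the round trips with and without
TCP_NODELAY:

   // Sketch only: the second small write can be held back by Nagle until
   // the first is ACKed, and the receiver's delayed ACK may not fire for
   // ~200 ms, so each loop can stall.
   import java.io.*;
   import java.net.*;

   public class NagleProbe
   {
      public static void main(String[] args) throws Exception
      {
         final ServerSocket server = new ServerSocket(0);

         // Trivial responder: read a 2-byte request, answer with 1 byte.
         new Thread(new Runnable() {
            public void run() {
               try {
                  Socket s = server.accept();
                  InputStream in = s.getInputStream();
                  OutputStream out = s.getOutputStream();
                  byte[] req = new byte[2];
                  while (true) {
                     int n = 0;
                     while (n < 2) {
                        int r = in.read(req, n, 2 - n);
                        if (r < 0) return;
                        n += r;
                     }
                     out.write(0);
                  }
               } catch (IOException ignored) {}
            }
         }).start();

         boolean noDelay = args.length > 0 && args[0].equals("nodelay");
         Socket s = new Socket("127.0.0.1", server.getLocalPort());
         s.setTcpNoDelay(noDelay);
         InputStream in = s.getInputStream();
         OutputStream out = s.getOutputStream();

         int loops = 50;
         long start = System.currentTimeMillis();
         for (int i = 0; i < loops; i++) {
            out.write(1);    // first small write goes out immediately
            out.write(2);    // second small write is the one Nagle may hold
            in.read();       // wait for the 1-byte reply
         }
         long elapsed = System.currentTimeMillis() - start;
         System.out.println("TCP_NODELAY=" + noDelay + ": "
               + (elapsed / (double) loops) + " ms per round trip");
         s.close();
         server.close();
      }
   }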

There are two other algorithms, which modify Nagle's, to get around this
problem.  One, the Doupnik algorithm, uses information from the data
to determine if there is more coming, and it sends the last bit
immediately without waiting for the buffer to fill up.  So, you get full
packets until the last one, which will be a tiny gram, but you do not
get the 200 ms delay for every packet since the receiving end is
continually getting information.  See
http://netlab1.usu.edu/pub/misc/draft-doupnik-tcpimpl-nagle-mode-00.txt
   for information.

The other algorithm is called the Minshall Algorithm.  I do not know a
lot about it, but it is along the same lines.  See
http://lists.w3.org/Archives/Public/ietf-discuss/1998Dec/0025.html 
for a discussion.

The problem with setting TCPNODELAY is that it disables Nagle's
altogether and you get the wire flooded with a lot of tiny packets. 
But, in the web space it is exactly these tiny packets which trigger the
problem, and otherwise you lose throughput.  The best solution, lacking
the ability to change your OS TCP stack, is to turn Nagle's off (with
the TCPNODELAY), and then use a bigger buffer for your data.  Then you
are using the wire more efficiently in that you are not flooding it with
tiny-grams.

I wouldn't know all of this except the coworker in the office next to
me took a class from Doupnik himself, and has recently just finished
working with NetWare's TCP engineers to implement the Doupnik algorithm.
 He found that most programs he dealt with in testing for the problem
turned off Nagle's algorithm for efficiency.  Some OS's have this fix in
them, some do not.  (Most do not???)

If we turn off Nagle's algorithm in that code, we definitely need to look
at the buffer size if we want the most efficient data transfer.

Jeff Tulley  ([EMAIL PROTECTED])
(801)861-5322
Novell, Inc., the leading provider of Net business solutions
http://www.novell.com


 Sacha Labourey [EMAIL PROTECTED] 1/16/02 2:58:30 AM

Hello,

I guess that if you want to modify this, you need to make it optional.

The TCPNODELAY flag is related to the Nagle's algorithm

This algorithm is made to avoid sending very small packets each time you
send data through your socket; instead it waits 400ms (generally), or for a
full buffer (or for a message from the other side, if I remember well), and
sends a packet that wraps more data. Thus, it tends to minimize the amount
of small data that is sent.

I am not familiar with the JBossMQ code base, but if you think that the
implementation could, at some time, send many small messages and would
suffer from sending a packet each time, then you should most probably make
this setting optional. But, on the contrary, if you think that your packets
(*all* packets, even ACKs) always have a sufficient size and you don't want
Nagle to take care of you, then it is up to you.

Cheers,


Sacha

 -----Original Message-----
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED]] On Behalf Of
 Hiram Chirino
 Sent: Wednesday, 16 January 2002 04:17
 To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
 Subject: Re: [JBoss-dev] jbossmq message transport times



 I can't think of a reason that it would break anything...





RE: [JBoss-dev] jbossmq message transport times

2002-01-16 Thread Hiram Chirino

So,

  Are you saying that if we do TCPNODELAY, just make sure we setup a BIG 
BufferedOutputStream and make sure we got our flush()es in the right 
places???  The question is: how BIG should the buffer be???

Regards,
Hiram

From: Jeff Tulley [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: RE: [JBoss-dev] jbossmq message transport times
Date: Wed, 16 Jan 2002 10:03:23 -0700

Actually it is even a little more complicated than that.  Nagle's
algorithm by itself would not cause such delays, but what happened is
that at the other end somebody (I forget who) came up with the idea of a
delayed ACK.  Since an ACK is a small packet, and one ACK can
acknowledge receipt for multiple packets, the delayed ACK algorithm
tries to send an ACK only every other packet.  This saves the wire from
having too many tiny-gram ACK packets.

Combined with the Nagle algorithm, this causes death of communication
since neither side will budge.  Eventually the delayed ACK algorithm
times out (typically 200 ms, depending on the OS), and the ACK is sent.
So, you get down to about 5 packets per second throughput.

There are two other algorithms, which modify Nagle's, to get around this
problem.  One, the Doupnik algorithm, uses information from the data
to determine if there is more coming, and it sends the last bit
immediately without waiting for the buffer to fill up.  So, you get full
packets until the last one, which will be a tiny gram, but you do not
get the 200 ms delay for every packet since the receiving end is
continually getting information.  See
http://netlab1.usu.edu/pub/misc/draft-doupnik-tcpimpl-nagle-mode-00.txt
for information.

The other algorithm is called the Minshall Algorithm.  I do not know a
lot about it, but it is along the same lines.  See
http://lists.w3.org/Archives/Public/ietf-discuss/1998Dec/0025.html
for a discussion.

The problem with setting TCPNODELAY is that it disables Nagle's
altogether and you get the wire flooded with a lot of tiny packets.
But, in the web space it is exactly these tiny packets which trigger the
problem, and otherwise you lose throughput.  The best solution, lacking
the ability to change your OS TCP stack, is to turn Nagle's off (with
the TCPNODELAY), and then use a bigger buffer for your data.  Then you
are using the wire more efficiently in that you are not flooding it with
tiny-grams.

I wouldn't know all of this except the coworker in the office next to
me took a class from Doupnik himself, and has recently just finished
working with NetWare's TCP engineers to implement the Doupnik algorithm.
  He found that most programs he dealt with in testing for the problem
turned off Nagle's algorithm for efficiency.  Some OS's have this fix in
them, some do not.  (Most do not???)

If we turn off Nagle's algorithm in that code, we definitely need to look
at the buffer size if we want the most efficient data transfer.

Jeff Tulley  ([EMAIL PROTECTED])
(801)861-5322
Novell, Inc., the leading provider of Net business solutions
http://www.novell.com


  Sacha Labourey [EMAIL PROTECTED] 1/16/02 2:58:30 AM
 
Hello,

I guess that if you want to modify this, you need to make it optional.

The TCPNODELAY flag is related to the Nagle's algorithm

This algorithm is made to avoid sending very small packets each time you
send data through your socket; instead it waits 400ms (generally), or for a
full buffer (or for a message from the other side, if I remember well), and
sends a packet that wraps more data. Thus, it tends to minimize the amount
of small data that is sent.

I am not familiar with the JBossMQ code base, but if you think that the
implementation could, at some time, send many small messages and would
suffer from sending a packet each time, then you should most probably make
this setting optional. But, on the contrary, if you think that your packets
(*all* packets, even ACKs) always have a sufficient size and you don't want
Nagle to take care of you, then it is up to you.

Cheers,


   Sacha

  -----Original Message-----
  From: [EMAIL PROTECTED]
  [mailto:[EMAIL PROTECTED]] On Behalf Of
  Hiram Chirino
  Sent: Wednesday, 16 January 2002 04:17
  To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
  Subject: Re: [JBoss-dev] jbossmq message transport times
 
 
 
  I can't think of a reason that it would break anything...



RE: [JBoss-dev] jbossmq message transport times

2002-01-16 Thread Jeff Tulley

Ok, I just got the run-down on buffer sizes and TCPNODELAY.

The answer is, of course, IT DEPENDS.

If what you are sending out always will be 100 or 200 bytes, the buffer
size will not affect you very much - you were sending tiny-grams out
before and will continue to do so even after setting TCPNODELAY.  The
only difference is that before you were only sending a tiny-gram out
every 200-400ms (good from a network traffic point of view, very BAD
from a developer's point of view).  Then again, if these small messages
are few and far between, I wouldn't sweat sending out the small packets
at all.

If you have a feel for typical MQ packet sizes you can come up with a
better buffer size.  For instance, even if most messages are 2-4K, if
you have occasional ones at 10K, then set your buffer to 10K.  The extra
buffer is most likely worth the increase in efficiency. (Here again
there is a fine line between efficiency and wasted, unused buffer
space).

Actually, a further refinement:  the typical TCP segment size (the Ethernet
MSS) is 1460 bytes.  So try to set your buffer size to a multiple of this;
instead of 16K you would do 16 x 1460 = 23360 bytes.  That way you are
maximizing the chance that all data out will completely fill up multiple
packets instead of having a few full and then a partially empty packet
every time.  I would not go much above this 16 multiplier unless all data
sent is always way larger than that, but this is not a hard rule.
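
Put concretely (a sketch only; the stream setup is illustrative, not the
actual OIL code, and "request" stands for whatever object is being sent):

   // 16 segments of 1460 bytes, per the arithmetic above
   static final int BUFFER_SIZE = 16 * 1460;   // = 23360 bytes

   // wrap the socket stream once...
   socket.setTcpNoDelay(true);
   ObjectOutputStream out = new ObjectOutputStream(
         new BufferedOutputStream(socket.getOutputStream(), BUFFER_SIZE));

   // ...then flush at logical message boundaries, not per field
   out.writeObject(request);
   out.flush();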

Complicated enough?  It definitely will be worth it to implement this. 
1000's of packets per second vs 5 packets per second (or as Loren saw,
about 2.5 per second) is a HUGE deal.

Jeff Tulley  ([EMAIL PROTECTED])
(801)861-5322
Novell, Inc., the leading provider of Net business solutions
http://www.novell.com

 Hiram Chirino [EMAIL PROTECTED] 1/16/02 1:59:34 PM 
So,

  Are you saying that if we do TCPNODELAY, just make sure we setup a
BIG 
BufferedOutputStream and make sure we got our flush()es in the right 
places???  The question is: how BIG should the buffer be???

Regards,
Hiram

From: Jeff Tulley [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: RE: [JBoss-dev] jbossmq message transport times
Date: Wed, 16 Jan 2002 10:03:23 -0700

Actually it is even a little more complicated than that.  Nagle's
algorithm by itself would not cause such delays, but what happened is
that at the other end somebody (I forget who) came up with the idea of
a
delayed ACK.  Since an ACK is a small packet, and one ACK can
acknowledge receipt for multiple packets, the delayed ACK algorithm
tries to send an ACK only every other packet.  This saves the wire
from
having too many tiny-gram ACK packets.

Combined with the Nagle algorithm, this causes death of communication
since neither side will budge.  Eventually the delayed ACK algorithm
times out (typically 200 ms, depending on the OS), and the ACK is
sent.
So, you get down to about 5 packets per second throughput.

There are two other algorithms, which modify Nagle's, to get around this
problem.  One, the Doupnik algorithm, uses information from the
data
to determine if there is more coming, and it sends the last bit
immediately without waiting for the buffer to fill up.  So, you get
full
packets until the last one, which will be a tiny gram, but you do
not
get the 200 ms delay for every packet since the receiving end is
continually getting information.  See
http://netlab1.usu.edu/pub/misc/draft-doupnik-tcpimpl-nagle-mode-00.txt

for information.

The other algorithm is called the Minshall Algorithm.  I do not know
a
lot about it, but it is along the same lines.  See
http://lists.w3.org/Archives/Public/ietf-discuss/1998Dec/0025.html 
for a discussion.

The problem with setting TCPNODELAY is that it disables Nagle's
altogether and you get the wire flooded with a lot of tiny packets.
But, in the web space it is exactly these tiny packets which trigger
the
problem, and otherwise you lose throughput.  The best solution,
lacking
the ability to change your OS TCP stack, is to turn Nagle's off (with
the TCPNODELAY), and then use a bigger buffer for your data.  Then
you
are using the wire more efficiently in that you are not flooding it
with
tiny-grams.

I wouldn't know all of this except the coworker in the office next to
me took a class from Doupnik himself, and has recently just finished
working with NetWare's TCP engineers to implement the Doupnik
algorithm.
  He found that most programs he dealt with in testing for the
problem
turned off Nagle's algorithm for efficiency.  Some OS's have this fix
in
them, some do not.  (Most do not???)

If we turn off Nagle's algorithm in that code, we definitely need to look
at the buffer size if we want the most efficient data transfer.

Jeff Tulley  ([EMAIL PROTECTED])
(801)861-5322
Novell, Inc., the leading provider of Net business solutions
http://www.novell.com 


  Sacha Labourey [EMAIL PROTECTED] 1/16/02 2:58:30 AM
Hello,

I guess that if you want to modify this, you need to make it optional. [...]

[JBoss-dev] jbossmq message transport times

2002-01-15 Thread Loren Rosen

As a start on measuring jbossmq performance, I timed sending and
receiving a 1-kbyte nondurable message (in a loop to amortize JIT effects
and the like).  Both client and server were on the same machine. The
average round trip time was 400 milliseconds, which is not good at all.
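
(For reference, the shape of that measurement as a rough sketch -- the JNDI
names and the plain-JMS setup here are assumptions for illustration, not the
actual testsuite code:)

   import javax.jms.*;
   import javax.naming.InitialContext;

   public class RoundTripTimer
   {
      public static void main(String[] args) throws Exception
      {
         InitialContext ctx = new InitialContext();
         QueueConnectionFactory qcf =
            (QueueConnectionFactory) ctx.lookup("ConnectionFactory");
         Queue queue = (Queue) ctx.lookup("queue/testQueue");

         QueueConnection conn = qcf.createQueueConnection();
         QueueSession session =
            conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
         QueueSender sender = session.createSender(queue);
         sender.setDeliveryMode(DeliveryMode.NON_PERSISTENT);  // nondurable
         QueueReceiver receiver = session.createReceiver(queue);
         conn.start();

         byte[] payload = new byte[1024];   // 1-kbyte body
         int loops = 100;                   // amortize JIT warm-up
         long start = System.currentTimeMillis();
         for (int i = 0; i < loops; i++)
         {
            BytesMessage msg = session.createBytesMessage();
            msg.writeBytes(payload);
            sender.send(msg);
            receiver.receive();             // block for the round trip
         }
         long elapsed = System.currentTimeMillis() - start;
         System.out.println("average round trip: " +
                            (elapsed / (double) loops) + " ms");

         conn.close();
      }
   }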

Some investigation showed that most of the time was spent idling in the
message transport code. I tried a few things and eventually discovered
that explicitly setting TCPNODELAY on the sockets fixed the problem--
round trip times dropped down to a few tens of milliseconds.

I can certainly submit a patch with the one-line change, but before I
do-- is this really the right fix? Will it make something else worse? If
no one else has seen the same problem, perhaps it's O/S specific. I'm
testing on MacOS X, which for our purposes is just another Unix flavor
with an oddball GUI.

