Re: [twsocket] TWSocketServer and backlog

2007-12-05 Thread [EMAIL PROTECTED]
No worries!  Here's the update on this:  I have
*slightly* modified my application based on the
suggestions and insight I received from this list. 
When I say slightly I mean a lot, but that sounds
too ominous :)

First, I switched to TWSocketThrdServer without a
hitch (hurray! for Arno's hard work on it).  This
introduced a new bottleneck: the database calls
needed to be synchronized, and so it was basically
the same as running on a single thread but with the
additional overhead of thread creation and
synchronization.  The end result was that it made my
server slower.

Which brings me to the second change I made:  I
re-factored all database requests into a separate
thread and completely de-coupled the other two
working threads from having to perform any database
access or additional management logic; they just post
messages to the new thread with the data they need
stored in the database.  This introduced a whole new
level of complexity: that of inter-thread
communications and how to cope with un-received
posted messages if the thread needs to abort
unexpectedly (since the other threads now send and
forget and expect the database thread to do its thing).
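
For illustration, here is a minimal Delphi sketch of that send-and-forget
hand-off, using plain Win32 thread messages.  All names (WM_DB_WORK,
TDbWorkItem, PostDbWork) are hypothetical; this is not code from the
thread, just the pattern it describes:

uses Windows, Messages;

const
  WM_DB_WORK = WM_USER + 1;          // hypothetical private message ID

type
  PDbWorkItem = ^TDbWorkItem;
  TDbWorkItem = record
    FileName : string;               // whatever the database thread must store
    ClientId : Integer;
  end;

// Sender side (e.g. the receiver thread): allocate, fill, post, forget.
procedure PostDbWork(DbThreadId: DWORD; const AFileName: string;
  AClientId: Integer);
var
  Item : PDbWorkItem;
begin
  New(Item);                         // New/Dispose handle the managed string
  Item^.FileName := AFileName;
  Item^.ClientId := AClientId;
  if not PostThreadMessage(DbThreadId, WM_DB_WORK, WPARAM(Item), 0) then
    Dispose(Item);                   // receiver already gone: reclaim, don't leak
end;

// Receiver side (database thread): ownership of the pointer passes here.
procedure DbThreadLoop;
var
  Msg  : TMsg;
  Item : PDbWorkItem;
begin
  while GetMessage(Msg, 0, 0, 0) do  // exits when WM_QUIT arrives
    if Msg.message = WM_DB_WORK then
    begin
      Item := PDbWorkItem(Msg.wParam);
      try
        // ... write Item^ to the database ...
      finally
        Dispose(Item);               // always free what was posted
      end;
    end;
end;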

And this led me to the final change in my
application:  The new database thread became the
overall manager of the application:  it governs all
other threads, instructs them when to start and stop,
and appropriately deals with anybody's impromptu
demise.  So now I have 3 worker threads:

  1. Queue Manager:  performs all database access,
inter-thread management, application initialization
and recovery, and overall management of the entire queue.

  2. Queue Receiver: accepts incoming client
connections and stores their data into the queue,
then posts the necessary information to the Manager,
making sure state is always recoverable in case of
failure.

  3. Queue Dispatcher: periodically scans the queue
and sends the messages via SMTP, then posts a message
to the Manager announcing the result (success or
failure) so that it can update the database record
and remove the message from the queue.  It also
receives notifications from the Manager whenever new
messages of higher priority arrive, so that it can
interrupt its current scan and address those.

   Overall, the new design is more elegant and
flexible, and still very stable; but more
importantly, it is now considerably faster than
before (orders of magnitude), and none of the
connection issues that I was encountering are
manifesting anymore.  For the sake of comparison, it
now takes about 30 to 40 seconds to send 1000
messages to the Queue Server.  And that's with a
backlog of 50.  A backlog of 5 takes a few seconds
more because (at most) a handful of connections need
to be retried (10061 error).  A backlog of 10
succeeds without retries and takes roughly the same time.

   This means that it was my application design which
was impeding the performance of TWSocketServer, and
not an inherent issue with TWSocket itself (DOH!). 
System resources are limited, of course, so in my
opinion our empirical analysis on the usage of the
backlog is still valid:  a larger number seems to
affect performance negatively without any overall
gain in availability, especially under heavy stress.

   In conclusion, as Arno and Wilfried suggested from
the beginning (and as Francois has always claimed),
TWSocket is fast, efficient and fully capable of
handling thousands of concurrent connections,
provided there are sufficient resources for it, and
that no _extensive_processing_ is competing with the
socket communication.  How's that for an endorsement :)

   Thanks to all of you who offered help and suggestions.

   Cheers!
-dZ.






Re: [twsocket] TWSocketServer and backlog

2007-12-05 Thread Hoby Smith
Hey DZ.  Sorry, I didn't mean to drop out of this email thread.  I have just
been slammed for the last week and didn't have a chance to respond to any
of the further posts on this (they were buried in a very long inbox).  From
what I see, Wilfried and Arno helped you out more than I would have anyway.
Also, sorry I misunderstood your initial post about this.  Story of my
life... always coming in to the middle of a conversation confused and
broke... ;)

BTW, the pocket calculator comment was LOL... :)

Re: [twsocket] TWSocketServer and backlog

2007-11-30 Thread DZ-Jay

On Nov 29, 2007, at 14:20, Arno Garrels wrote:

 Hard to tell, a good compromise is using TWSocketServer given
 that any lengthy task is run in worker threads. I think separating
 socket IO work from other tasks by using worker threads for those
 tasks considered lengthy is the way to go. The definition of
 lengthy, however, is another story. <g>

The problem I have is that most of the processing is atomic to the 
client transaction, that is,



Re: [twsocket] TWSocketServer and backlog

2007-11-29 Thread DZ-Jay
Wait, I'm sorry, I perhaps did not explain correctly:  It was taking 5 
to 7 minutes for the server to *process* the client's request to 
completion, not the connection.  My tests, although quick and dirty, 
are intended to check the behaviour of my application as a whole, not 
just the connection.

For the sake of understanding, here's a brief explanation of my project:
It's an e-mail queue server; it has a listening thread running 
TWSocketServer, which will receive requests from my queue client.  The 
client communicates with a custom line-based protocol, and sends the 
message data, which will then be stored in the queue (filesystem) by 
the listening thread.  A separate thread periodically scans the queue 
directories and dispatches the messages to the SMTP server.  The client 
builds and encodes the entire message on the fly, line by line as the 
data is sent, to avoid having the entire thing in memory at once.  But 
that's not really important to this discussion (I'm just proud of it 
:).

A large message may take a few seconds to transmit.  My tests all send 
the same message: a multi-part message with alternative text and html 
parts, and a small (4Kb) binary attachment, encoded in MIME.  The whole 
thing was about 14Kb (I think, I'm not sure).  I was sending 1000 of 
these.

 1. What kind of machine is it?  Commodore 64?  TS-1000? TRS-80?  Just
 kidding... ;)

He, he, he.  If it was processing 1000 *connections* in 5 minutes, I'd 
say a pocket calculator!

 2. Is your client class on the server initiating a bunch of additional
 processes, like database lookups or something?

Not the client, but the server is performing some integrity checks, 
file IO, and eventually storing a record of the transaction in the 
database.  The client does indeed build the message on the fly, even 
encoding all content as lines are sent to the server (I'm sorry, there 
I go again, but I think this is pretty nifty :), but it doesn't start 
doing that until after the connection is established and the message 
data is going to be sent.

Plus both the server and the client were running on the same 
development machine, along with Delphi-hog-my-memory-2006, in debug 
mode, with no optimizations.  Moreover, the client test app has a 
TMemo, displaying the progress, and in my rush to make a quick and 
dirty test, the test app does not free the client objects (all 1000 of 
them) until it finishes.

So the slowness wasn't unexpected.  The point of my previous message 
was to show the difference between two tests, when the only variable 
was the backlog value: a backlog of 5 took less than half the time to 
do the exact same work as a backlog of 500.

The problem that I see is that the TWSocketServer seems to be taking 
too long (relatively speaking) to accept the connections.  My client 
seems to be able to send lots of connection requests before a single 
one is established, thus overwhelming the backlog queue.  Of 
course, it could be my application that is preventing TWSocketServer 
from doing its work effectively, and if so, then perhaps I should 
consider using a multi-threaded server.  I cringe at that thought, 
though, because I had so many problems getting TWSocketThrdServer to 
run properly (due to my own lack of experience with this sort of 
thing).

Any recommendations would be greatly appreciated.

dZ.


-- 
DZ-Jay [TeamICS]
http://www.overbyte.be/eng/overbyte/teamics.html



Re: [twsocket] TWSocketServer and backlog

2007-11-29 Thread DZ-Jay

On Nov 29, 2007, at 06:10, Wilfried Mestdagh wrote:

 Hello DZ-Jay,

 So conclusion is that increasing the backlog does:
- decrease the performance for accepting connections
- decrease the overall performance of the application

This seems to be the conclusion of my tests and Hoby's.

 Also:
  - connecting clients should have a range of retries when refused,
    possibly with a small random delay.

Agreed.


 For your application:
 Seems you have a lot of processing to do. While your code is executing,
 no incoming socket can be accepted. Then maybe execute a certain block
 (the most time-consuming one) in a separate thread? The easiest way to
 exchange data between threads is by posting a message with a pointer
 to the data in the WParam argument. The pointer can be freed in the
 custom message handler.

I will consider this.  Thank you, Wilfried.  However, the queue manager 
(listening) thread does not have a single large block of 
long-running code, but very small blocks that each do a little work, 
which may be affecting performance:
1. It runs TWSocketServer, so it has to process all incoming 
connections and client communications.
2. For each client, it parses the incoming requests from the client to 
determine what needs to be done, and which state they are in.  The 
request is a string in CMD:VALUE format (see the parsing sketch below).
3. If it's message data, it writes to the filesystem.
4. When done successfully, it logs to the database.
5. When a message posted by the dispatcher thread announces that a 
message has been dispatched, the manager thread needs to log this to 
the database.

And all throughout it is writing to a log file (at least while 
debugging), which needs to be synchronized among all threads.
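
For illustration, the CMD:VALUE request format mentioned in point 2
could be split with a helper as small as the following (hypothetical
code, not DZ's actual parser):

uses SysUtils;

// Split a request line of the form 'CMD:VALUE' into its two parts.
// Returns False when no colon is present (a malformed request).
function ParseRequest(const Line: string; out Cmd, Value: string): Boolean;
var
  P : Integer;
begin
  P := Pos(':', Line);
  Result := P > 0;
  if Result then
  begin
    Cmd   := UpperCase(Trim(Copy(Line, 1, P - 1)));
    Value := Copy(Line, P + 1, MaxInt);
  end;
end;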

I will try to do some analysis to determine which portions are the 
bottleneck and see if they could be offloaded to a separate thread.

I do not mind too much right now if the server runs a little slow; we 
can always re-factor it and optimize it in the future.  But what I 
would like to avoid is rejecting connections too often because of a 
full backlog (which seems to be happening right now).

Perhaps I should run the TWSocketServer on its own thread, and post 
messages from the clients to the queue manager thread to do the work? 
Although this seems too complex and expensive.  It almost looks like 
each client should run on its own thread... :(

dZ.

-- 
DZ-Jay [TeamICS]
http://www.overbyte.be/eng/overbyte/teamics.html



Re: [twsocket] TWSocketServer and backlog

2007-11-29 Thread Arno Garrels
DZ-Jay wrote:
 On Nov 29, 2007, at 06:10, Wilfried Mestdagh wrote:
 
 Hello DZ-Jay,
 
 So conclusion is that increasing the backlog does:
- decrease the performance for accepting connections
- decrease the overall performance of the application
 
 This seems to be the conclusion of my tests and Hoby's.

Strange, I never noticed anything like that.

 
 The easiest way to exchange data between threads is by posting a
 message with a pointer to the data in the WParam argument. The
 pointer can be freed in the custom message handler.

That's indeed the fastest way since the thread does not have to wait. 

 Perhaps I should run the TWSocketServer on its own thread, and post
 messages from the clients to the queue manager thread to do the work?
 Although this seems too complex and expensive.  It almost looks like
 each client should run on its own thread... :(

I'm not that sure: 

1 - Stressing a server with 100 connection attempts per second is most
likely not a real-world scenario, except under DoS attacks.
2 - Run your stress tester against IIS or other servers; I found that
they were not able to accept more clients per second than my server.  
3 - I played with different designs. 
a) Listening sockets in one thread, client sockets in another thread(s).
   This introduces a new problem: clients are accepted very fast,
   however the listening thread must synchronize with the client
   thread(s), which may take longer than with the current TWSocketServer.
   I worked around that by posting just the socket handle to the thread,
   which was fast, however also rather complicated when it comes to
   handling all the client stuff/pool in the threads.
b) Listening sockets in one thread, one thread per client.
   AFAIR, without a thread pool, accepting clients was slower than
   with TWSocketServer.
c) I even hacked together a server that used M$ overlapped sockets;
   this was a rather disappointing exercise since performance was
   the same as with (a). 

The goal is to accept clients as fast as possible; once they are 
connected, it won't hurt to let them wait some milliseconds.
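
To make design (a) concrete, here is a minimal sketch of the hand-off it
describes.  It assumes ICS's TWSocket.Accept and TWSocket.Dup, and a
worker thread that runs its own message loop; the class, field and
message names are hypothetical:

const
  WM_CLIENT_HANDLE = WM_USER + 2;          // hypothetical message ID

// Listening thread: accept immediately, hand off the raw handle, forget.
procedure TListener.SrvSessionAvailable(Sender: TObject; ErrCode: Word);
var
  HSock : TSocket;
begin
  if ErrCode <> 0 then
    Exit;
  HSock := FSrvSocket.Accept;              // accept as fast as possible
  if not PostMessage(FWorkerWnd, WM_CLIENT_HANDLE, WPARAM(HSock), 0) then
    closesocket(HSock);                    // worker gone: don't leak the handle
end;

// Worker thread: wrap the handle in a TWSocket of its own.  The worker
// must pump messages, since ICS sockets are message driven.
procedure TWorker.WMClientHandle(var Msg: TMessage);
var
  Client : TWSocket;
begin
  Client := TWSocket.Create(nil);
  Client.OnDataAvailable := ClientDataAvailable;
  Client.Dup(TSocket(Msg.WParam));         // attach to the accepted socket
end;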

Before you rewrite your application, I suggest you code some test
apps with different designs and compare their performance.

--
Arno Garrels

 


Re: [twsocket] TWSocketServer and backlog

2007-11-29 Thread [EMAIL PROTECTED]
--- Original Message ---
From: Arno Garrels[mailto:[EMAIL PROTECTED]

 The easiest way to exchange data between threads is by posting
a message with a pointer to the data in the WParam argument. The
pointer can be freed in the custom message handler.

 That's indeed the fastest way since the thread does not
have to wait. 

However, if the main thread notified the slave
threads to quit, the last thread that quits may post
messages (before receiving the WM_QUIT message) to
the first one and fail, which will cause the memory
in the message to not be freed (until the application
finally quits).  I don't know if this is a real
concern, though.

 1 - Stressing a server with 100 connection attempts
per second is most
 likely not a real world scenario, except upon DoS
attacks.

I agree.  However, this is very easily done by a
brain-dead developer using my queue client class in a
simple 'for' loop to send a lot of messages at once,
say, an announcement to all our customers.  I would
like to prevent this as much as possible by improving
connection acceptance speed on the server, or else
I'll have to cripple the client somehow.  Do not
underestimate the tenacity of morons. :)

 2 - Run your stress tester against IIS or other
servers, I found that
 they were not able to accept more clients per
second than my server.  

I'm sure this is true.

I am able to avoid the whole issue by responsibly
designing the client application:  send the next
connection request after the first one triggers
OnSessionConnected, or connect only a few clients
at a time, then pause until they are done.  This not
only improves performance of the server, but it
prevents an inadvertent DoS attack from an
application that needs to send lots of messages at once.
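
For instance, the pacing could be as simple as chaining the connects.
A hypothetical sketch, assuming ICS also fires OnSessionConnected with a
non-zero ErrCode when a connect fails (TSenderPool and its fields are
made-up names):

// Start one connection at a time; the next client is launched only when
// the previous connect has completed, so the backlog is never flooded.
procedure TSenderPool.StartNext;
var
  Cli : TWSocket;
begin
  if FNext >= FClients.Count then
    Exit;                                  // all clients processed
  Cli := TWSocket(FClients[FNext]);
  Cli.OnSessionConnected := ClientConnected;
  Cli.Proto := 'tcp';
  Cli.Addr  := FHost;
  Cli.Port  := FPort;
  Cli.Connect;                             // asynchronous; returns at once
end;

procedure TSenderPool.ClientConnected(Sender: TObject; ErrCode: Word);
begin
  Inc(FNext);                              // previous attempt finished
  StartNext;                               // only now launch the next one
end;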

 3 - I played with different designs. 

Which would you consider to work best?

 The goal is to accept clients as fast as possible,
once they are 
 connected it won't hurt to let them wait some
milliseconds.

This is indeed my goal.

Would it make sense to have a pool of listening
sockets in a separate (single) thread that will post
a message to the (single) working thread with the
socket handle?  That way the connections can be
established quickly, and my server can continue doing
its processing within a single thread so that I don't
have to redesign it right now.

   -dZ.



Re: [twsocket] TWSocketServer and backlog

2007-11-29 Thread Arno Garrels
[EMAIL PROTECTED] wrote:
 --- Original Message ---
 From: Arno Garrels[mailto:[EMAIL PROTECTED]
 
 The easiest way to exchange data between threads is by posting a
 message with a pointer to the data in the WParam argument. The
 pointer can be freed in the custom message handler.
 
 That's indeed the fastest way since the thread does not
 have to wait.
 
 However, if the main thread notified the slave
 threads to quit, the last thread that quits may post
 messages (before receiving the WM_QUIT message) to
 the first one and fail, which will cause the memory
 in the message to not be freed (until the application
 finally quits).  I don't know if this is a real
 concern, though.

When the process dies, the memory is given back to the 
OS anyway, so no problem.  PostMessage() will return 
False on failure; in that case the memory can be
released by the calling thread.

 
 1 - Stressing a server with 100 connection attempts per second is
 most likely not a real world scenario, except upon DoS
 attacks.
 
 I agree.  However, 
 

It's easy to DoS any server ;-) 

 2 - Run your stress tester against IIS or other servers, I found that
 they were not able to accept more clients per
 second than my server.
 
 I'm sure this is true.

 3 - I played with different designs.
 
 Which would you consider to work best?

Hard to tell, a good compromise is using TWSocketServer given
that any lengthy task is run in worker threads. I think separating
socket IO work from other tasks by using worker threads for those
tasks considered lengthy is the way to go. The definition of 
lengthy, however, is another story. <g> 

--
Arno Garrels 




Re: [twsocket] TWSocketServer and backlog

2007-11-28 Thread Paul
I always use 500, no problems yet

Paul


- Original Message - 
From: [EMAIL PROTECTED]
To: twsocket@elists.org
Sent: Wednesday, November 28, 2007 6:27 PM
Subject: [twsocket] TWSocketServer and backlog


 Hello:
While stress-testing my application, I noticed
 that I am able to send substantially more
 connections in the time it takes the TWSocketServer
 to handle the incoming requests, causing the default
 backlog to fill up quickly.  Obviously, I can
 increase the number, but seeing that the default is 5
 (which seems rather low to me), I'm thinking that
 perhaps there may be a concern in setting this too high.
 
Does anybody know what I should take into
 consideration before changing this value, and if
 there are any concerns with it being too high?
 
Thanks,
-dZ.
 


Re: [twsocket] TWSocketServer and backlog

2007-11-28 Thread [EMAIL PROTECTED]
--- Original Message ---
From: Paul[mailto:[EMAIL PROTECTED]
 
 I always use 500, no problems yet

Thanks for the quick reply.

Then, is there a particular reason why it defaults to
5? It seems too low for all but the most trivial
applications (given that spawning the client object
and dupping the socket seems to take a relatively
long time).

   -dZ.



Re: [twsocket] TWSocketServer and backlog

2007-11-28 Thread Arno Garrels
Paul wrote:
 I always use 500, no problems yet

But the listen backlog queue is limited in size depending
on the OS (I cannot recall the values, however it's far less
than 500, AFAIR).  The more blocking the server behaves, the
earlier you get 10061 back from a connect.  A simple test is with
the TcpSrv demo: with logging to the memo enabled I get the first
error 10061 after 100-200 connects (10 ms intervals); turning off
logging to the memo establishes several thousand connections without
any error easily.
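
For reference, the queue size is a property of the ICS server socket.
A minimal sketch, assuming the ListenBacklog property name (the
component name and port are made up):

// Keep the backlog modest and let clients retry on WSAECONNREFUSED
// (10061); Winsock may silently cap larger values anyway.
WSocketServer1.Addr          := '0.0.0.0';
WSocketServer1.Port          := '2525';
WSocketServer1.ListenBacklog := 5;
WSocketServer1.Listen;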

--

Arno Garrels
   

 


Re: [twsocket] TWSocketServer and backlog

2007-11-28 Thread Wilfried Mestdagh
Hello dz,

a client application should do at least a few (or infinitely many)
retries if a connection fails, so normally there is no need to increase
it.  On the other hand, it does no harm to increase it.

---
Rgds, Wilfried [TeamICS]
http://www.overbyte.be/eng/overbyte/teamics.html
http://www.mestdagh.biz



Re: [twsocket] TWSocketServer and backlog

2007-11-28 Thread Wilfried Mestdagh
Hello dz,

   I think 5 is the Winsock default value.

---
Rgds, Wilfried [TeamICS]
http://www.overbyte.be/eng/overbyte/teamics.html
http://www.mestdagh.biz



Re: [twsocket] TWSocketServer and backlog

2007-11-28 Thread [EMAIL PROTECTED]
Hello:

The problem with retrying is that it is not the same
as a server full error when the maximum number of
clients is reached; 10061 is essentially a port not
open error, which is the same error you would get if
the server is not running.  So there is no real way
to know whether the listener is currently busy with a
full backlog, or the server is listening on a
different port or disabled completely.

I will certainly increase the backlog on my server,
but will also consider building a number of retries
in the connection routine of the client class.
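
Such a retry might look like this sketch (hypothetical names; it assumes
ICS reports the refused connect through OnSessionConnected with
ErrCode = WSAECONNREFUSED, i.e. 10061, and FAttempts/MAX_RETRIES are
made-up bookkeeping):

procedure TQueueClient.SessionConnected(Sender: TObject; ErrCode: Word);
begin
  if ErrCode = 0 then
    SendRequest                            // connected: start the protocol
  else if (ErrCode = WSAECONNREFUSED) and (FAttempts < MAX_RETRIES) then
  begin
    Inc(FAttempts);
    Sleep(50 + Random(200));               // small random delay, as suggested
                                           // (a timer would block less)
    TWSocket(Sender).Connect;              // try the same server again
  end
  else
    ReportFailure(ErrCode);                // give up: not just a busy backlog
end;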

   Thanks for the help.
 -dZ.





Re: [twsocket] TWSocketServer and backlog

2007-11-28 Thread Hoby Smith
FYI... I ran into an issue with some test code I wrote a few months ago,
which related to the backlog setting, as well as the annoying issue with
Winsock running out of local ports.  In my test, I was attempting to see how
many connections could be handled by a particular process over a period of
time.

I believe my results showed that increasing this value can have a very
negative effect on performance.  Basically, the issue is inherent in how the
TCP stack is implemented, not in how a particular app services the stack.  I
found that surpassing a particular connection rate threshold would result in
an exponential growth in processing time on the listening stack.  Meaning, the
TCP stack performance decreases dramatically as you increase the number of
pending connections, when the listening socket is receiving a high rate of
connection requests.  My assumption is that this is due to the increased
overhead in managing the backlog queue.  Given this, I made two
observations, which may be wrong, but made sense to me.

First, this is why the Winsock default is 5.  I imagine that the Winsock
stack implementation was designed with the perspective that if the backlog
is actually filling up enough to reach 5 or more, then something is wrong.
Probably, a couple more might be ok, but my results showed that as you
increased this value under heavy load, your connection rate was very
unpredictable, as well as unstable (lots of failed connects).  For the
TCP/IP stack to be effective, it must be responsive enough to handle the low
level connection requests in a timely fashion.  If not, then you have a
major low level servicing problem or the machine is seriously overloaded
with TCP requests.  In which case, you want to get connection errors, rather
than an overloaded backlog scenario.

Second, increasing this value surely creates a greater DoS attack surface,
making you more vulnerable to bursts of socket open requests, and surely
would make the effects of such an attack even worse.  This might also be why
the Winsock default is 5.  However, as I personally don't think that there
is really a practical solution to a well-designed DoS attack, then this
might not really be relevant.  Nonetheless, it might be something you need
to consider.

So, given that, I personally don't recommend increasing the value.  If your
app can't service the stack with a backlog setting close to 5, then your
system is just overloaded or not responsive for some reason.

Anyway, that is what I determined from my testing results.  If anyone has
found to the contrary, please feel free to correct me... :)


Re: [twsocket] TWSocketServer and backlog

2007-11-28 Thread [EMAIL PROTECTED]
Hello:
Thank you for your very informative response.  I
was performing some tests on my server application by
continually increasing the backlog value with some
mixed results, which seem to coincide with your
empirical analysis.

I kept increasing the backlog value up until I
reached 1000, but to my surprise, I noticed that the
connections started failing after about 230 requests,
out of 1000 clients.  These were the first 230
requests, so the backlog queue was significantly less
than its maximum.  I also thought I noticed that the
server was taking longer to respond, but didn't think
much of it at the time.

However, after reading your post I decided to try
once again with a backlog of 5, and set a retry loop
every time a connection failed.  As expected, the
connections started failing almost immediately after
the test started.  But much to my surprise, the
connections were handled quicker -- sometimes orders
of magnitude faster than before!

As a reference, using my localhost as the server
and client, with a test application spawning 1000
clients to connect one right after the other, and
re-trying if they failed, it took about 5 to 7
minutes to process the entire lot; while it only took
about 2 minutes to process with a backlog of 5.  The
test with a backlog limit of 5 retried much more
times, of course, but when connections were
established, they were processed faster.

Still, it seems to me that TWSocketServer is
taking too long to process incoming connections, as
many connections can be queued in the backlog while
it's instantiating the client and dupping the socket.
 Any thoughts on this?

-dZ.



Re: [twsocket] TWSocketServer and backlog

2007-11-28 Thread Hoby Smith
Hmm... If it is taking your system 5 to 7 MINUTES to process 1000 connect
/ disconnect cycles, then something is very wrong.  

I would have to rerun my tests, but I am thinking that I was doing  1K
connect / disconnects in about 10 to 15 seconds when running both server and
client on a single core P4.  Perhaps a little faster using several client
instances at the same time, although the performance maxed quickly on a
single core CPU.  I believe it was much faster on a 4 way Xeon machine I
tested it on. I can get more specific stats for you, if you want them.

But, whatever my specific results were, 5 to 7 MINUTES is just WAY off.

1. What kind of machine is it?  Commodore 64?  TS-1000? TRS-80?  Just
kidding... ;)

2. Is your client class on the server initiating a bunch of additional
processes, like database lookups or something?  

3. Do you have any problems with other apps on your system running slow?
Perhaps you have a bad driver or resource conflict with your NIC? 

Just some thoughts...

Hoby
