It was tested a few years ago, maybe on J502 or J503; I can't remember. Since then I have not tested or used sdasync.

My wild guess is that J can only keep a finite message queue, so when J is busy executing a long-running sentence and many TCP packets arrive, Windows sends many messages to J and J's message queue overflows. I remember hearing beep sounds at the time.

Eric Iverson wrote:
As far as I know, the use of sdasync should not result in lost messages. It is possible that bugs in pre-601 releases made it possible for the sdasync event to be missed (that is, the socket messages have integrity, but J has missed the event to process them). In those cases the use of a timer event or wd'tnosgs' would have been a workaround.

As far as I know sdasync should work properly in 601. Evidence to the contrary (either specific to J or for sdasync in general) would be very interesting.

That said, I think the use of sdasync should be avoided for non-trivial applications. It is not portable. And for a serious app it would be better to separate the processing of critical information exchange from the user interface.
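The workaround Eric mentions (a timer event or wd'tnosgs') amounts to polling the socket on a schedule instead of trusting every async notification to arrive. A minimal portable sketch of the same idea in Python, using select with a timeout (this is an editorial illustration, not part of J's API):

```python
import select


def poll_loop(sock, handle_data, poll_interval=0.25):
    """Poll a socket on a timer instead of relying on async events.

    Even if an edge-triggered notification were missed, the next
    timer tick re-checks readiness, so no buffered data is stranded.
    """
    sock.setblocking(False)
    while True:
        readable, _, _ = select.select([sock], [], [], poll_interval)
        if readable:
            chunk = sock.recv(4096)
            if not chunk:          # peer closed the connection
                return
            handle_data(chunk)
        # timer tick: fall through and poll again
```

The key property is that readiness is re-derived from the socket's state on every tick, so a lost notification only adds latency, never data loss.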

----- Original Message ----- From: "Miller, Raul D" <[EMAIL PROTECTED]>
To: "General forum" <[email protected]>
Sent: Tuesday, January 16, 2007 10:01 AM
Subject: RE: [Jgeneral] Web subject area in Wiki


Bill Lam wrote:
If I could throw my two cents in: do not use sdasync even though it is
available, because some events will go missing under heavy load. To
make matters worse, a missing event does not always cause a visible
error.

If sdasync works that way in the context of TCP, then Microsoft's
implementation of TCP is severely broken.

The whole purpose of TCP is to ensure that packets are presented
to the application in sequence, with no gaps in the sequence.
If it can't do that in a timely fashion, it's supposed to
end the connection.

More specifically, the OS is not supposed to acknowledge a packet to
the other end until that data is safely buffered for delivery to
the application.

In general, operating systems are expected to have limited
space for buffers, and this is just one of the many issues
which can cause packet loss.
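Those finite kernel buffers are exactly what makes backpressure visible to an application: once the receiver stops reading and the buffers fill, a non-blocking sender is told to wait rather than having data silently dropped. A small illustrative Python sketch over a local socket pair (the chunk size and limit are arbitrary):

```python
def fill_until_backpressure(sender, chunk=b"x" * 4096, limit=10_000):
    """Write to a non-blocking socket until the kernel buffers fill.

    Returns the number of bytes the OS accepted before it pushed
    back with BlockingIOError. Nothing is lost: the unread bytes sit
    in the kernel buffers until the receiver drains them.
    """
    sender.setblocking(False)
    sent = 0
    for _ in range(limit):
        try:
            sent += sender.send(chunk)
        except BlockingIOError:    # buffers full: back off, don't drop
            break
    return sent
```

Every byte the sender was credited with is still retrievable by the receiver; the OS refuses new data instead of discarding buffered data.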

Put differently, sdasync should be able to drop events without
impacting the reliability of the TCP connection.  TCP means
that the OS is telling the remote side of the connection how
many bytes have been received by the application.  If the
remote side does not receive acknowledgement of what it's
sent in a timely fashion, it's supposed to [a] send the
information again, and [b] decrease the rate at which it
sends for this connection.
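The practical consequence for sdasync is that a missed readiness event can delay data but cannot corrupt or reorder it: whenever the application finally reads, the bytes come out in sequence. A sketch of that guarantee over a local TCP connection, with a deliberately slow receiver standing in for a long-running J sentence (the delay is artificial):

```python
import socket
import threading
import time


def busy_receiver_gets_everything(payload, delay=0.2):
    """Send `payload` while the receiver is 'busy', then verify that
    a late read still sees every byte, in order (TCP's guarantee)."""
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("127.0.0.1", 0))
    listener.listen(1)
    port = listener.getsockname()[1]

    def send_all():
        s = socket.create_connection(("127.0.0.1", port))
        s.sendall(payload)
        s.close()

    t = threading.Thread(target=send_all)
    t.start()
    conn, _ = listener.accept()
    time.sleep(delay)              # simulate a long-running sentence
    received = bytearray()
    while True:                    # drain everything, however late
        chunk = conn.recv(65536)
        if not chunk:
            break
        received.extend(chunk)
    t.join()
    conn.close()
    listener.close()
    return bytes(received)
```

If events were the only record of arriving data this test could fail; because the data itself lives in the kernel's buffers, the late reader still recovers it byte for byte.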

That said, do you have specific knowledge of sdasync failing
in the context of TCP?  Or, were you extrapolating from your
experiences with this sort of mechanism in some other context?

Thanks,



--
regards,
bill
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm