>If I have a client and a server exchanging messages and if the server
>decides to go down for whatever reasons, the client can still send
>successfully one message before throwing an exception that tells me the
>communication channel is no longer valid.

In fact, Send() probably only adds the message to a buffer; the actual
transmission happens asynchronously, possibly some time later.
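The same buffering behavior can be observed outside .NET, since it comes from the TCP stack rather than from any particular class library. Here is a small Python sketch (Python used purely for illustration; the .NET Socket class wraps the same Winsock/BSD semantics): a throwaway local server accepts a connection and immediately closes it, yet the client's first send() still "succeeds" because the data merely enters the local buffer. Only a later send, after the reset has come back, raises an error. The server, port, and timings are all artifacts of the demo, not anything from the original thread.

```python
import socket
import threading
import time

def demo():
    # Tiny server that accepts one connection and immediately closes it,
    # mimicking the remote host going down.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]

    def server():
        conn, _ = srv.accept()
        conn.close()  # the remote endpoint closes its socket

    t = threading.Thread(target=server)
    t.start()

    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", port))
    t.join()
    time.sleep(0.2)  # give the remote close time to propagate

    results = []
    for _ in range(3):
        try:
            # send() returns as soon as the bytes are buffered locally;
            # it does NOT prove the peer received (or will receive) them.
            cli.send(b"hello")
            results.append("sent")
        except OSError:
            # a later send fails once the peer's reset has arrived
            results.append("failed")
        time.sleep(0.2)

    cli.close()
    srv.close()
    return results
```

Typically the first send reports success even though the server is already gone, and a subsequent send raises the "connection is no longer valid" style of error the original poster described.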

>First one is successfully sent and received. The second one is successfully
>sent, but obviously not received since the server closed its socket. The
>third one fails to be sent and the client throws an exception.
>Evidently, my question is why the second message is successfully sent since
>the server socket was closed. In the documentation, the Socket.Close method
>is said to "close the remote host connection and release all managed and
>unmanaged resources associated with the Socket".

This is indeed quite odd. However, although I personally think it *should*
throw an exception, such behavior would not help in the general case. For
example, the remote host may be in the process of shutting down the
connection while you are sending new data. Your endpoint is not yet aware
that the data won't be received (the socket still appears connected,
because network communication involves some delay).

Note that if you add another Send() just after the one that should have
failed, that second one will fail. But this is still unreliable behavior -
see above.

> I guess it must have to do with the managed
> resources that are not garbage-collected very fast, or something.

No, I definitely think there is no such relationship.

However, this opens up another question: how can a TCP endpoint be sure
that all its data were delivered to the destination? The TCP protocol, of
course, knows that. The problem is whether the Socket class lets you find
it out. It seems there is no way to know it for sure (see Socket.Connected
and TcpClient.GetStream().Flush() - all of them are unreliable). I have
not done much TCP/IP socket programming in .NET, so I am not certain, but
I suspect the most reliable method is to make the server acknowledge the
data it has received (perhaps at the end of the exchange).

Why could such a design make sense? Note that even if you knew that the
remote server had received your data (that is, its TCP layer had received
and acknowledged the data), the application (the server) might never read
it - it could just close the socket and exit. Hence the conclusion: if you
need to know for sure that the remote app has received (read) the data,
make it send you back a confirmation. (An app-level confirmation instead
of a protocol-level one.)
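An app-level confirmation might look like the following sketch (again in Python for illustration; `send_with_ack`, the `b"OK"` token, and the loopback server are all invented for this example, not part of any existing API). The client blocks until the server application - not just its TCP stack - has actually read the data and replied:

```python
import socket
import threading

def send_with_ack(sock, payload, ack=b"OK"):
    """Send payload and block until the peer confirms it read the data.

    TCP's own ACKs only mean the peer's TCP layer got the bytes; this
    application-level reply means the application itself consumed them.
    """
    sock.sendall(payload)
    got = sock.recv(len(ack))  # blocks until the server replies
    if got != ack:
        raise ConnectionError("peer did not acknowledge the data")
    return True

def demo():
    # Throwaway local server used only to exercise send_with_ack().
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]

    def server():
        conn, _ = srv.accept()
        conn.recv(1024)        # the application actually reads the data...
        conn.sendall(b"OK")    # ...and only then confirms it did
        conn.close()

    t = threading.Thread(target=server)
    t.start()

    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", port))
    ok = send_with_ack(cli, b"important message")
    cli.close()
    t.join()
    srv.close()
    return ok
```

If the server had closed its socket before reading, recv() on the client would return empty or raise, so the sender learns the data was not consumed - which is exactly what no protocol-level mechanism can tell you.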

And of course, all Windows networking APIs are built on Winsock, so the
behavior should be consistent across them.

Marek

===================================
This list is hosted by DevelopMentor - http://www.develop.com

View archives and manage your subscription(s) at http://discuss.develop.com
