I'm new to Haskell and have reached an impasse in understanding the behaviour 
of sockets. 

I see that under the hood Network.Socket sockets are set to non-blocking. 
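
For reference, a quick way to check that the underlying fd really is set 
non-blocking (just a sketch, assuming network >= 3.1 for withFdSocket and the 
unix package for queryFdOption):

    import Network.Socket (Socket, withFdSocket)
    import System.Posix.IO (FdOption (NonBlockingRead), queryFdOption)
    import System.Posix.Types (Fd (Fd))

    -- True if O_NONBLOCK is set on the socket's file descriptor
    isNonBlocking :: Socket -> IO Bool
    isNonBlocking sk =
      withFdSocket sk $ \fd -> queryFdOption (Fd fd) NonBlockingRead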

Presumably, when a non-blocking socket's send buffer is full, a send should 
return immediately having written 0 bytes. 

I've found that setting the send buffer size causes send to truncate the 
ByteString to the buffer size, but that successive sends continue to succeed 
when the buffer should be full. 

In the server code I set the send buffer to 1, then attempted to overflow it:

    -- send here is Network.Socket.ByteString.send;
    -- the string literals assume OverloadedStrings
    handleConnection conn = do
      setSocketOption conn SendBuffer 1
      s1 <- send conn "abc"
      putStrLn $ "Bytes sent: " ++ show s1
      s2 <- send conn "def"
      putStrLn $ "Bytes sent: " ++ show s2
      s3 <- send conn "ghi"
      putStrLn $ "Bytes sent: " ++ show s3
      close conn

And in the client I delay the recv by 1 second:

  -- recv here is Network.Socket.ByteString.recv;
  -- threadDelay comes from Control.Concurrent
  setSocketOption sk RecvBuffer 1
  threadDelay (1 * 10^6)
  b1 <- recv sk 1
  B8.putStrLn b1
  b2 <- recv sk 1
  B8.putStrLn b2
  b3 <- recv sk 1
  B8.putStrLn b3

The server immediately outputs:

Bytes sent: 1
Bytes sent: 1
Bytes sent: 1

The client waits for a second and then outputs:

a
d
g

What's going on?  I expected the second and third send operations to return 0 
bytes sent, because the send buffer can only hold 1 byte.

The crux of my enquiry is this: how can my application know when to pause in 
generating its chunked output, if send doesn't block and a non-blocking send 
apparently succeeds even when the send buffer should be full?
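
For context, the kind of producer loop I have in mind looks roughly like this 
(sendChunks is just an illustrative name of mine, not a library function):

    import qualified Data.ByteString as B
    import Network.Socket (Socket)
    import Network.Socket.ByteString (send)

    -- generate-and-send loop; the question is how it should pause when the
    -- peer is slow and the send buffer fills up
    sendChunks :: Socket -> [B.ByteString] -> IO ()
    sendChunks _    []             = return ()
    sendChunks conn (chunk : rest) = do
      n <- send conn chunk
      if n < B.length chunk
        then sendChunks conn (B.drop n chunk : rest)  -- retry the unsent remainder
        else sendChunks conn rest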

More generally, isn't polling sockets with system calls something to be 
avoided in favour of blocking calls and lightweight Haskell threads?
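
The style I was hoping to write is roughly one forkIO'd handler per 
connection, with plain blocking calls and the RTS I/O manager doing the 
multiplexing (a sketch, not my actual code):

    import Control.Concurrent (forkIO)
    import Control.Monad (forever, void)
    import Network.Socket (Socket, accept)

    -- one lightweight Haskell thread per accepted connection
    acceptLoop :: Socket -> (Socket -> IO ()) -> IO ()
    acceptLoop lsock handler = forever $ do
      (conn, _peer) <- accept lsock
      void $ forkIO (handler conn)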

Hope someone can help clear up the confusion. 