At 11:25 AM -0700 11/7/03, Michael Downey wrote:

> The limitation I believe you are talking about is the transmit and receive
> buffer sizes for the socket. These generally can't be bigger than 64K on
> most systems. But a TCP/IP socket is by definition a STREAM-oriented
> device, so if you write an 800 MB buffer to a socket, the I/O layer likely
> has to do many low-level writes to get all that data pushed out. On VMS
> they don't treat the socket that way: they do only one SYS$QIO call for
> the buffer and don't check whether the buffer is bigger than 64K. Really
> they should have checked the size and done a number of SYS$QIOs to get
> the full buffer pushed out.
>
> TCP/IP generally won't create a single packet with the whole buffer inside
> it. UDP does this, but it is not stream-oriented. At the lowest layer the
> TCP/IP driver has to query the transport protocol for the biggest packet
> size the transport will take. It then has to break the message given to it
> into a number of packets of that size and wait for acknowledgements for
> those packets. So if a program does a write of 800 MB, the TCP/IP driver
> should take the first 64K into its buffer and write that out. Once it gets
> all the acknowledgements back, it takes the next 64K into its buffer and
> writes that out as well. Until the full 800 MB buffer is written out, the
> calling program is blocked.
>
> An example of where this works on VMS is in file I/O. You can open a file
> and write an 800 MB buffer to it, and the write will block you until all
> the data is written to the file. That is because the file layer properly
> handles the 64K limit that VMS has with SYS$QIO. I don't know why they did
> not do the same thing when they wrote the TCP/IP layer for VMS.
>
> If you have a different understanding of how that works, please let me
> know, as I currently use this information in an abstraction layer that we
> use for both UNIX and VMS.
Thanks for the explanation. I had a feeling I was oversimplifying somewhat. I believe writes to files also had a similar limitation at some point in the past, but the CRTL took it upon itself to break up large writes. It would have to do the same thing for sockets, and I agree it probably should. Regardless of whether and when that happens, I think it's reasonable to expect a Perl program that is inserting a large number of rows into a database to make some sensible assumption about the maximum number of rows it can insert in one go.

--
________________________________________
Craig A. Berry
mailto:[EMAIL PROTECTED]

"... getting out of a sonnet is much more
 difficult than getting in."
                  Brad Leithauser
