"Craig A. Berry" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > I've done a bit of digging on this issue, and it looks to me like > there are plenty of bad assumptions to go around. Perhaps TCP/IP > services should provide a way to send packets larger than 64K, though > I'm not sure if that's even possible using a QIO interface (upon > which the CRTL socket interface is clearly based) since the spot > where it writes the number of bytes transferred in the IOSB is a > 16-bit word. It is hard to find where, if anywhere, the 64K limit > for sockets is documented. > > Even on linux or other operating systems, the maximum buffer sizes > are not guaranteed to be greater than 64K and are generally tuneable > by the system manager. So unless you control all the pieces, > including networking hardware, you can't assume very large packets > won't cause problems. > > IO::Socket should probably check the maximum chunk it can send and > return an error if it's passed something too large. > > Net::MySQL should probably not assume that an entire query will be > sent as one packet, though the same assumption may exist on the > server side. > > The application should definitely not assume that it can suck an > entire file into memory and send it to the database in one insert > statement. Even if you manage to lift the 64K limit, you may > eventually run into a 256K or 1MB or whatever limit. Pretty trivial > changes to the script could do the inserts in more manageable pieces, > say 1000 rows at a time. This is by far the easiest thing to change > and should be done regardless of whether changes are possible in the > underlying layers. > -- The limitation I believe you are talking about is the transmit and receive buffer sizes for the socket. These generally can't be bigger than 64K on most systems. But a TCP/IP socket is by definition a STREAM oriented device. So if you write a 800 Meg buffer to a socket it means that the I/O layer likely has to do many low level writes to get all that data pushed out. On VMS they don't treat the socket that way and only do 1 SYS$QIO call for the buffer and don't check to make sure the buffer isn't bigger then 64K. Really they should of checked the size and did a number of SYS$QIOs to get the full buffer pushed out. TCP/IP generally won't create a single packet with the whole buffer inside it. UDP does this but it is not stream oriented. Really at the lowest layer the TCP/IP driver would have to query the transport protocol to see what the biggest packet size that the transport will take. It then has to break up the message given to it into a number of packets of the queried size and then wait for acknowledges for those packets. So if a program does a write of 800 Megs the TCP/IP driver should take the first 64K into it's buffer and write that out. Once it gets all the acknowledges back it will then take the next 64K into it's buffer and write that out as well. Until the full 800 Meg buffer is written out the calling program is blocked. An example of where this works on VMS is in file I/O. You can open a file and write an 800 Meg buffer to the file using write and it will block you until all the data is written to the file. That is because the file layer is properly handling the 64K limit that VMS has with SYS$QIO. I don't know why they did not do the same thing when they wrote the TCP/IP layer for VMS.
If you have a different understanding of how this works, please let
me know, as I currently rely on this behaviour in an abstraction
layer that we use on both UNIX and VMS.

Michael Downey
