Re: pty(4) 1024 bytes buffer limit
On Fri, 9 Sep 2011 09:38:31 -0400 Matthew Mondor wrote:
> On Fri, 9 Sep 2011 00:26:43 +0000 (UTC)
> chris...@astron.com (Christos Zoulas) wrote:
>
> > Please file a PR about this. I've been meaning to fix it.
>
> Thanks, I will.

For reference and to close this thread, the relevant PR was kern/45352,
which was fixed and closed; thanks to Christos for the fixes and to the
others who posted hints.
--
Matt
Re: pty(4) 1024 bytes buffer limit
On Fri, Sep 09, 2011 at 05:11:17PM -0700, Erik Fair wrote:
>
> ... unless they're using jumbo frames. Potentially 9Kbytes, depending upon
> NICs and switches.

...which reminds me that someday, I will eventually finish my "9k MTU
demonstrated harmful" informational RFC.

The original research which seemed to show a benefit for 9K MTU was done
with NFS over UDP (so IP layer fragmentation instead of TCP layer
segmentation) on systems with a 16K page size and a page-based memory
coherency algorithm for multiple processors. Guess what the NFS RPC size
limit was set to? Right, 8K. If they'd used 16K there, they would have
concluded they needed a 17K MTU.

In practice, a large MTU helps, particularly for receive (modern adapters
do TCP segmentation offload on send, so you get the whole efficiency
benefit of a large MTU for your stack, and then some), but 9K is a very
bad choice of size: on most systems it means you allocate 4K three times
and waste the last 3K of it. The FDDI MTU of 4K would have been a much
better choice; for some applications 8K-plus-headers is good too.

Thor
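[Editorial aside: Thor's allocation arithmetic can be checked numerically.
This small Python sketch is not from the thread; it assumes a 4K page (or
cluster) size and simple ceiling-division allocation, which is the model
his "allocate 4K three times" remark implies.]

```python
PAGE = 4096  # assumed page/cluster size, per Thor's "allocate 4K" model


def cluster_waste(mtu_bytes, page=PAGE):
    """Return (pages allocated, bytes wasted) for one buffer of mtu_bytes."""
    pages = -(-mtu_bytes // page)           # ceiling division
    return pages, pages * page - mtu_bytes


for mtu in (1500, 4096, 8192, 9000):
    pages, waste = cluster_waste(mtu)
    print(f"MTU {mtu:5d}: {pages} page(s), {waste:5d} bytes wasted")
```

For 9000 this gives three 4K allocations with 3288 bytes (about 3K) of
waste, matching the figure above; 4096 and 8192 waste nothing.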
Re: pty(4) 1024 bytes buffer limit
On Sep 8, 2011, at 14:45, Thor Lancelot Simon wrote:

> On Thu, Sep 08, 2011 at 11:26:29AM -0400, Matthew Mondor wrote:
> >
> > It would be nice to for instance be able to use an MTU of 3000 so that
> > there are less context switches, but unfortunately tracing the
> > processes show that 1024 bytes are read from the pty devices at most.
>
> Are you sure using an MTU of 3000 would do much of anything? Since
> almost all peers are connected by Ethernet somewhere along the line,
> you are unlikely to ever see packets larger than 1500 minus Ethernet
> framing size.

... unless they're using jumbo frames. Potentially 9Kbytes, depending upon
NICs and switches.

Erik
Re: pty(4) 1024 bytes buffer limit
On Fri, Sep 09, 2011 at 09:37:08AM -0400, Matthew Mondor wrote:
> and OpenBSD with it being absent on Linux. But pppd also uses the
> in-kernel ppp support I think, which is probably different than Linux's.

The IP payload traffic should never go to userland, so I don't see how
the pty limit could be relevant here.

Martin
Re: pty(4) 1024 bytes buffer limit
On Fri, 09 Sep 2011 08:30:51 +1000 matthew green wrote:

> > I looked at the various tty(4) termios(4) and pty(4) without finding an
> > option to change the buffer size. Is there a way at all to change it?
>
> there's no option. infact, it's all hard coded as magic 1024 constants
> in about 4 places in sys/kern. i kept meaning to fix that, but haven't
> gotten around to it.

Thanks for the confirmation,
--
Matt
Re: pty(4) 1024 bytes buffer limit
On Fri, 9 Sep 2011 00:26:43 +0000 (UTC)
chris...@astron.com (Christos Zoulas) wrote:

> Please file a PR about this. I've been meaning to fix it.

Thanks, I will.
--
Matt
Re: pty(4) 1024 bytes buffer limit
On Thu, 8 Sep 2011 17:45:38 -0400 Thor Lancelot Simon wrote:

> On Thu, Sep 08, 2011 at 11:26:29AM -0400, Matthew Mondor wrote:
> >
> > It would be nice to for instance be able to use an MTU of 3000 so that
> > there are less context switches, but unfortunately tracing the
> > processes show that 1024 bytes are read from the pty devices at most.
>
> Are you sure using an MTU of 3000 would do much of anything? Since
> almost all peers are connected by Ethernet somewhere along the line,
> you are unlikely to ever see packets larger than 1500 minus Ethernet
> framing size.

Indeed, I could even avoid IP fragmentation with a low enough MTU, which
is what I tried in the initial setup (and am still using, because of the
1024-byte limit).

> How did you determine that the bottleneck for your application was
> context switches? That the 1024-byte read size you're seeing is
> actually internal to the tty layer or ppp rather than application
> imposed in userspace?

I'm not sure that the bottleneck really is user context switches, but I
highly suspect it, as the forwarding daemon is mostly idle while it can't
seem to send faster than about 178KB/sec (when using an MTU small enough
to avoid the 1024-byte limit, without which performance drops even more).
If I could test with a higher MTU to move more work down into the kernel
and network, I could confirm or disprove it :)

I wrote the application as a test, so I am controlling the buffer size,
and am invoking pppd with the wanted mru setting. While it's not
impossible that pppd imposes the limit, I've found some threads when
searching, with people complaining about the same pty limit on NetBSD
and OpenBSD, and with it being absent on Linux. But pppd also uses the
in-kernel ppp support I think, which is probably different from Linux's.

Also, although I didn't carefully inspect the whole if_ppp code, I didn't
see anything suggesting 1024 would be a limit, yet in the pty code I do
see TTYHOG:

/usr/include/sys/tty.h:#define TTYHOG 1024

To definitively test whether it's really a pty/tty limitation, I could
write a small program and see; that's probably the best way to confirm.
--
Matt
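[Editorial aside: the small test program Matthew proposes can be sketched
from userland. This illustrative Python version (not from the thread)
fills a pty's input queue in non-blocking mode and reports how much the
kernel accepted; the exact count is OS-dependent (TTYHOG was 1024 on
NetBSD at the time, while other systems buffer more).]

```python
import errno
import fcntl
import os
import pty
import tty

master, slave = pty.openpty()
tty.setraw(slave)  # raw mode: no echo or line editing to muddy the count

# Non-blocking writes on the master, so we stop once the input queue fills
# instead of blocking forever with nobody reading the slave side.
flags = fcntl.fcntl(master, fcntl.F_GETFL)
fcntl.fcntl(master, fcntl.F_SETFL, flags | os.O_NONBLOCK)

total = 0
try:
    while total < (1 << 20):  # safety bound; real queues are far smaller
        total += os.write(master, b"x" * 256)
except OSError as e:
    if e.errno != errno.EAGAIN:
        raise

print("pty input queue accepted", total, "bytes before blocking")
```

On a NetBSD of that era one would expect the reported figure to be bounded
by TTYHOG plus any intermediate buffering; a kernel fix raising the
constant would raise this number accordingly.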
Re: pty(4) 1024 bytes buffer limit
On Fri, Sep 09, 2011 at 08:38:59AM +0200, Rhialto wrote:
> > There's also a possible issue with direct selection data transfer
> > versus INCR data transfer, but in xterm's case that is unlikely to be
> > what's behind your problem. It's hard to be sure; you outline
> > conditions under which you see misbehaviour but you don't say what the
> > misbehaviour actually is.
>
> Sorry. What I'm seeing is that I have selected more than 1024 characters
> in one xterm, and when I paste it in another, I only get the first 1024
> of those.
>
> The way I checked the number was to type in one shell the "wc" command,
> then paste into that xterm. The output from wc (if needed, forced to end
> with one or two ^Ds) tells me its input is only 1024 bytes.

That's a known xterm bug. Christos fixed it in -current quite some time
ago; I'm not sure offhand if the fixes made it into -5, but they ought to
if they haven't.

--
David A. Holland
dholl...@netbsd.org
Re: pty(4) 1024 bytes buffer limit
On Thu 08 Sep 2011 at 22:56:29 -0400, Mouse wrote:
> There's also a possible issue with direct selection data transfer
> versus INCR data transfer, but in xterm's case that is unlikely to be
> what's behind your problem. It's hard to be sure; you outline
> conditions under which you see misbehaviour but you don't say what the
> misbehaviour actually is.

Sorry. What I'm seeing is that I have selected more than 1024 characters
in one xterm, and when I paste it in another, I only get the first 1024
of those.

The way I checked the number was to type in one shell the "wc" command,
then paste into that xterm. The output from wc (if needed, forced to end
with one or two ^Ds) tells me its input is only 1024 bytes.

I suddenly realise that the problem could also be on the source side of
the paste, because if I paste into a gvim window, the limitation seems to
occur too. Though I'm not sure if there isn't a pty in there somewhere
too.

> /~\ The ASCII Mouse

-Olaf.
--
___ Olaf 'Rhialto' Seibert -- There's no point being grown-up if you
\X/ rhialto/at/xs4all.nl   -- can't be childish sometimes. -The 4th Doctor
Re: pty(4) 1024 bytes buffer limit
> I wonder if the pty buffer size is what is limiting my pastes in
> xterms.

Could be. But there is another possible culprit.

I've long had issues with pastes in my own terminal emulator. Every once
in a while I have a look. The most recent look led to this fragment of
the main .c file, in the code responsible for writing data to the pty:

[n is set to the amount of data available to be written]

	/*
	 * We really shouldn't need to limit n.  But if we don't, we
	 * write in bursts of 900 bytes, which ends up overflowing
	 * typical TTYHOG values with the echo before we get a chance
	 * to consume any of it.
	 *
	 * Using 256 is theoretically wrong, since we have no a priori
	 * reason to believe that the TTYHOG (or local equivalent)
	 * value is as large as this expects (ca. 530 or greater).  But
	 * it seems to work well enough in practice.  Blech.
	 */
	if (n > 256) n = 256;

There's also a possible issue with direct selection data transfer
versus INCR data transfer, but in xterm's case that is unlikely to be
what's behind your problem. It's hard to be sure; you outline
conditions under which you see misbehaviour but you don't say what the
misbehaviour actually is.

/~\ The ASCII Mouse
\ / Ribbon Campaign
 X  Against HTML	mo...@rodents-montreal.org
/ \ Email!		7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B
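[Editorial aside: the workaround in Mouse's fragment, bounding each write
burst so the echo can be drained before the input queue overflows,
translates directly to other languages. This is a hypothetical Python
rendering for illustration; the function name and chunk size are invented,
not from Mouse's emulator.]

```python
import os

CHUNK = 256  # conservative burst size; see the TTYHOG headroom rationale


def write_paste(fd, data):
    """Write paste data to a pty master in small bursts, so a real
    emulator could drain the echoed output between bursts instead of
    overflowing the slave's input queue."""
    written = 0
    while written < len(data):
        written += os.write(fd, data[written:written + CHUNK])
        # (a real emulator would read and display pending echo here)
    return written
```

The loop also copes with short writes, which `os.write` is allowed to
return on any file descriptor.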
Re: pty(4) 1024 bytes buffer limit
In article <201109081526.p88fqt1q002...@ginseng.pulsar-zone.net>,
Matthew Mondor wrote:

>Hello,
>
>I've been wondering if it was possible to change the pty(4) internal
>buffer size, as I noticed that ppp tunnels cannot use a larger frame
>size. Because of this, it seems that the optimal MTU be 856, which is
>so small that context switches become the bottleneck.
>
>It would be nice to for instance be able to use an MTU of 3000 so that
>there are less context switches, but unfortunately tracing the
>processes show that 1024 bytes are read from the pty devices at most.
>
>I looked at the various tty(4) termios(4) and pty(4) without finding an
>option to change the buffer size. Is there a way at all to change it?

Please file a PR about this. I've been meaning to fix it.

christos
re: pty(4) 1024 bytes buffer limit
> I've been wondering if it was possible to change the pty(4) internal
> buffer size, as I noticed that ppp tunnels cannot use a larger frame
> size. Because of this, it seems that the optimal MTU be 856, which is
> so small that context switches become the bottleneck.
>
> It would be nice to for instance be able to use an MTU of 3000 so that
> there are less context switches, but unfortunately tracing the
> processes show that 1024 bytes are read from the pty devices at most.
>
> I looked at the various tty(4) termios(4) and pty(4) without finding an
> option to change the buffer size. Is there a way at all to change it?

there's no option. in fact, it's all hard coded as magic 1024 constants
in about 4 places in sys/kern. i kept meaning to fix that, but haven't
gotten around to it.

i used to bump it to 10K to avoid the xterm vs. size problem, but that
was fixed a while ago by christos, and i think that patch has been
removed from my source trees.

.mrg.
Re: pty(4) 1024 bytes buffer limit
On Thu, Sep 08, 2011 at 11:26:29AM -0400, Matthew Mondor wrote:
>
> It would be nice to for instance be able to use an MTU of 3000 so that
> there are less context switches, but unfortunately tracing the
> processes show that 1024 bytes are read from the pty devices at most.

Are you sure using an MTU of 3000 would do much of anything? Since
almost all peers are connected by Ethernet somewhere along the line,
you are unlikely to ever see packets larger than 1500 minus Ethernet
framing size.

How did you determine that the bottleneck for your application was
context switches? That the 1024-byte read size you're seeing is
actually internal to the tty layer or ppp rather than application
imposed in userspace?

Thor
Re: pty(4) 1024 bytes buffer limit
I wonder if the pty buffer size is what is limiting my pastes in xterms.

I notice them when pasting more than a few text lines (about 23 medium
filled lines is the limit in a quick test I just did; wc tells me 1025
characters, including an extra newline to terminate the final partial
line) into vi inside screen in an xterm, but the limit equally applies
if I paste into a "wc" from stdin directly in an xterm (so screen and
vim are not to blame).

I'm pretty sure also that in the past there was no such limit, or at
least it was much higher.

-Olaf.
--
___ Olaf 'Rhialto' Seibert -- There's no point being grown-up if you
\X/ rhialto/at/xs4all.nl   -- can't be childish sometimes. -The 4th Doctor
pty(4) 1024 bytes buffer limit
Hello,

I've been wondering if it is possible to change the pty(4) internal
buffer size, as I noticed that ppp tunnels cannot use a larger frame
size. Because of this, it seems that the optimal MTU is 856, which is
so small that context switches become the bottleneck.

It would be nice, for instance, to be able to use an MTU of 3000 so that
there are fewer context switches, but unfortunately tracing the
processes shows that at most 1024 bytes are read from the pty devices.

I looked at the various tty(4), termios(4) and pty(4) man pages without
finding an option to change the buffer size. Is there a way at all to
change it?

Thanks,
--
Matt