Re: NFS writes lock up system with -o tcp,-w32768

2011-03-30 Thread Claudio Jeker
On Wed, Mar 30, 2011 at 03:07:13PM +0200, Walter Haidinger wrote:
 On 29.03.2011 22:42, Claudio Jeker wrote:
  Here is a possible fix. The problem was that, because of the way NFS
  uses the socket API, it did not turn off the sendbuffer scaling, which
  reset the size of the socket back to 17376 bytes; that is a no-go when
  a buffer of more than 17k is generated by NFS. It is better to
  initialize sb_wat in soreserve(), which is called by NFS and all
  attach functions.
  
  Please test and report back.
 
 Thanks for the patch. Glad to test it.
 
 Well, the good news: No more lockups, neither in a VM nor on real hardware.
 
 Everything is also pretty fine with larger buffers, but with small buffers
 (e.g. -o tcp,-r=512,-w=512), the system doesn't respond sometimes, like
 short freezes of a couple of seconds as if there is a pause while some
 buffers are emptied.

Buffers below 8k are stupid. For TCP just use 32k or even 64k. 512-byte
buffers are silly; they get internally rounded up, since the smallest
packet seems to be 512 bytes of data plus header. This will give you TCP
send and recv buffers of around 1200 bytes. No wonder it is slow as hell.
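
To make that round-up concrete, here is a hypothetical back-of-the-envelope
sketch. HDR_SLOP is an assumed per-RPC header allowance picked purely for
illustration, not a real kernel constant; the actual reservation formula
lives in the NFS socket setup code:

	#include <stdio.h>

	#define HDR_SLOP 88	/* assumed header allowance, illustrative only */

	int
	main(void)
	{
		int wsize = 512;
		/* payload plus header slop, doubled for buffering */
		int reserve = (wsize + HDR_SLOP) * 2;	/* = 1200 bytes */

		printf("wsize %d -> socket reserve ~%d bytes\n",
		    wsize, reserve);
		return 0;
	}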
 
 This is also visible in the runtime, i.e. the time stats for putting a
 16 MiB file (dd if=/dev/urandom of=/nfs/foo bs=4096 count=4096); the
 first column is the buffer size used for the NFS mount:
 
    512:  1m9.07s real   0m0.01s user  0m1.13s system
   1024:  0m20.13s real  0m0.00s user  0m1.23s system
   2048:  0m5.59s real   0m0.00s user  0m1.13s system
   4096:  0m2.07s real   0m0.00s user  0m0.86s system
   8192:  0m1.41s real   0m0.00s user  0m0.91s system
  16384:  0m1.19s real   0m0.00s user  0m0.82s system
  32768:  0m1.11s real   0m0.00s user  0m0.76s system
 
 Writing a 64 MiB file:
 
    512:  6m2.83s real   0m0.03s user  0m5.78s system
   1024:  2m58.62s real  0m0.03s user  0m5.45s system
   2048:  1m12.66s real  0m0.07s user  0m4.66s system
   4096:  0m27.60s real  0m0.05s user  0m4.47s system
   8192:  0m11.68s real  0m0.01s user  0m3.85s system
  16384:  0m6.50s real   0m0.00s user  0m3.64s system
  32768:  0m6.15s real   0m0.00s user  0m3.22s system
 
 ktrace dumps for all dd runs are available; I can put
 them somewhere for download if required.
 

The default block size is 8k; for smaller buffers the overhead of headers
and round-trip time is just too big.
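
A rough way to see why: if every write RPC costs one round trip, throughput
is about wsize / (rtt + wsize/bandwidth). The sketch below is a
back-of-the-envelope model with assumed numbers (0.2 ms RTT, 100 Mbit/s
link), not measurements from this thread:

	#include <stdio.h>

	int
	main(void)
	{
		/* Assumed link parameters, purely illustrative. */
		const double rtt = 0.2e-3;	/* 0.2 ms round trip */
		const double bw = 12.5e6;	/* 100 Mbit/s, in bytes/s */
		const int sizes[] = { 512, 1024, 2048, 4096, 8192,
		    16384, 32768 };
		size_t i;

		for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
			double w = sizes[i];
			/* one synchronous write RPC per wsize bytes */
			double tput = w / (rtt + w / bw);
			printf("%6d: %6.2f MB/s\n", sizes[i], tput / 1e6);
		}
		return 0;
	}

The model reproduces the shape of the measurements above: at 512 bytes the
round trip dominates completely, while past 8k the link itself becomes the
limit.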

-- 
:wq Claudio



Re: NFS writes lock up system with -o tcp,-w32768

2011-03-30 Thread Mark Kettenis
 Date: Tue, 29 Mar 2011 22:42:47 +0200
 From: Claudio Jeker cje...@diehard.n-r-g.com
 
 Here is a possible fix. The problem was that, because of the way NFS
 uses the socket API, it did not turn off the sendbuffer scaling, which
 reset the size of the socket back to 17376 bytes; that is a no-go when
 a buffer of more than 17k is generated by NFS. It is better to
 initialize sb_wat in soreserve(), which is called by NFS and all attach
 functions.

This no longer does the sbcheckreserve() dance though.  Is that alright?

 Index: netinet/tcp_usrreq.c
 ===================================================================
 RCS file: /cvs/src/sys/netinet/tcp_usrreq.c,v
 retrieving revision 1.105
 diff -u -p -r1.105 tcp_usrreq.c
 --- netinet/tcp_usrreq.c	10 Oct 2010 22:02:50 -0000	1.105
 +++ netinet/tcp_usrreq.c	29 Mar 2011 20:26:55 -0000
 @@ -653,15 +653,7 @@ tcp_attach(so)
  	int error;
  
  	if (so->so_snd.sb_hiwat == 0 || so->so_rcv.sb_hiwat == 0) {
 -		/* if low on memory only allow smaller then default buffers */
 -		if (so->so_snd.sb_wat == 0 ||
 -		    sbcheckreserve(so->so_snd.sb_wat, tcp_sendspace))
 -			so->so_snd.sb_wat = tcp_sendspace;
 -		if (so->so_rcv.sb_wat == 0 ||
 -		    sbcheckreserve(so->so_rcv.sb_wat, tcp_recvspace))
 -			so->so_rcv.sb_wat = tcp_recvspace;
 -
 -		error = soreserve(so, so->so_snd.sb_wat, so->so_rcv.sb_wat);
 +		error = soreserve(so, tcp_sendspace, tcp_recvspace);
  		if (error)
  			return (error);
  	}
 Index: kern/uipc_socket2.c
 ===================================================================
 RCS file: /cvs/src/sys/kern/uipc_socket2.c,v
 retrieving revision 1.51
 diff -u -p -r1.51 uipc_socket2.c
 --- kern/uipc_socket2.c	24 Sep 2010 02:59:45 -0000	1.51
 +++ kern/uipc_socket2.c	29 Mar 2011 20:18:46 -0000
 @@ -353,6 +353,8 @@ soreserve(struct socket *so, u_long sndc
  		goto bad;
  	if (sbreserve(&so->so_rcv, rcvcc))
  		goto bad2;
 +	so->so_snd.sb_wat = sndcc;
 +	so->so_rcv.sb_wat = rcvcc;
  	if (so->so_rcv.sb_lowat == 0)
  		so->so_rcv.sb_lowat = 1;
  	if (so->so_snd.sb_lowat == 0)


Re: NFS writes lock up system with -o tcp,-w32768

2011-03-30 Thread Walter Haidinger
On 30.03.2011 15:23, Claudio Jeker wrote:
 Buffers below 8k are stupid. For TCP just use 32k or even 64k.
 512-byte buffers are silly; they get internally rounded up, since the
 smallest packet seems to be 512 bytes of data plus header. This will
 give you TCP send and recv buffers of around 1200 bytes. No wonder it
 is slow as hell.

Throughput isn't the issue. The system gets unusable with sizes < 2048.
The machine freezes; it takes a couple of seconds for the next shell
prompt to appear, like under really heavy load (I'd say way > 30).

Of course bufsizes that small make no sense, and your patch eliminates
the lockups, but they show there is still some bug. I'd expect slow NFS
transfers, but not behavior as if under heavy load. (*)

This is just to let you know; maybe you want to have a further look.

Why did I test with small buffer sizes too? Well, I got another email
which said this about the mount options, obviously regarding the buffer
sizes:

When you jackfuck that knob with other values, what is the result?
 Troubleshooting isn't only for others, son!

A reminder that this is an OpenBSD list... ;-)
Luckily I always make sure to have my asbestos on when dealing with it!

Walter

PS: (*) No, I'm sorry, I don't have a patch that fixes that.



Re: NFS writes lock up system with -o tcp,-w32768

2011-03-30 Thread Claudio Jeker
On Wed, Mar 30, 2011 at 08:34:24PM +0200, Mark Kettenis wrote:
  Date: Tue, 29 Mar 2011 22:42:47 +0200
  From: Claudio Jeker cje...@diehard.n-r-g.com
  
  Here is a possible fix. The problem was that, because of the way NFS
  uses the socket API, it did not turn off the sendbuffer scaling, which
  reset the size of the socket back to 17376 bytes; that is a no-go when
  a buffer of more than 17k is generated by NFS. It is better to
  initialize sb_wat in soreserve(), which is called by NFS and all
  attach functions.
 
 This no longer does the sbcheckreserve() dance though.  Is that alright?
 

The code that was there previously was a bit strange. When sb_hiwat == 0
is true, sb_wat is 0 as well, and sbcheckreserve() would only cause the
watermark to be set to the default, which is tcp_sendspace/tcp_recvspace.
So inside the sb_hiwat == 0 branch, sbcheckreserve() was never run and
the defaults were used anyway.
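
Spelled out, the removed tcp_attach() branch looked like this; the code
is from the hunk quoted below, the annotation is mine:

	if (so->so_snd.sb_hiwat == 0 || so->so_rcv.sb_hiwat == 0) {
		/*
		 * On a freshly attached socket sb_wat is still 0, so
		 * the first half of each || is always true and the ||
		 * short-circuits: sbcheckreserve() never runs, and
		 * sb_wat always ends up set to the default.
		 */
		if (so->so_snd.sb_wat == 0 ||
		    sbcheckreserve(so->so_snd.sb_wat, tcp_sendspace))
			so->so_snd.sb_wat = tcp_sendspace;
		if (so->so_rcv.sb_wat == 0 ||
		    sbcheckreserve(so->so_rcv.sb_wat, tcp_recvspace))
			so->so_rcv.sb_wat = tcp_recvspace;
	}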

-- 
:wq Claudio

  Index: netinet/tcp_usrreq.c
  ===================================================================
  RCS file: /cvs/src/sys/netinet/tcp_usrreq.c,v
  retrieving revision 1.105
  diff -u -p -r1.105 tcp_usrreq.c
  --- netinet/tcp_usrreq.c	10 Oct 2010 22:02:50 -0000	1.105
  +++ netinet/tcp_usrreq.c	29 Mar 2011 20:26:55 -0000
  @@ -653,15 +653,7 @@ tcp_attach(so)
   	int error;
   
   	if (so->so_snd.sb_hiwat == 0 || so->so_rcv.sb_hiwat == 0) {
  -		/* if low on memory only allow smaller then default buffers */
  -		if (so->so_snd.sb_wat == 0 ||
  -		    sbcheckreserve(so->so_snd.sb_wat, tcp_sendspace))
  -			so->so_snd.sb_wat = tcp_sendspace;
  -		if (so->so_rcv.sb_wat == 0 ||
  -		    sbcheckreserve(so->so_rcv.sb_wat, tcp_recvspace))
  -			so->so_rcv.sb_wat = tcp_recvspace;
  -
  -		error = soreserve(so, so->so_snd.sb_wat, so->so_rcv.sb_wat);
  +		error = soreserve(so, tcp_sendspace, tcp_recvspace);
   		if (error)
   			return (error);
   	}
  Index: kern/uipc_socket2.c
  ===================================================================
  RCS file: /cvs/src/sys/kern/uipc_socket2.c,v
  retrieving revision 1.51
  diff -u -p -r1.51 uipc_socket2.c
  --- kern/uipc_socket2.c	24 Sep 2010 02:59:45 -0000	1.51
  +++ kern/uipc_socket2.c	29 Mar 2011 20:18:46 -0000
  @@ -353,6 +353,8 @@ soreserve(struct socket *so, u_long sndc
   		goto bad;
   	if (sbreserve(&so->so_rcv, rcvcc))
   		goto bad2;
  +	so->so_snd.sb_wat = sndcc;
  +	so->so_rcv.sb_wat = rcvcc;
   	if (so->so_rcv.sb_lowat == 0)
   		so->so_rcv.sb_lowat = 1;
   	if (so->so_snd.sb_lowat == 0)



Re: NFS writes lock up system with -o tcp,-w32768

2011-03-30 Thread Claudio Jeker
On Wed, Mar 30, 2011 at 08:36:45PM +0200, Walter Haidinger wrote:
 On 30.03.2011 15:23, Claudio Jeker wrote:
  Buffers below 8k are stupid. For TCP just use 32k or even 64k.
  512-byte buffers are silly; they get internally rounded up, since the
  smallest packet seems to be 512 bytes of data plus header. This will
  give you TCP send and recv buffers of around 1200 bytes. No wonder it
  is slow as hell.
 
 Throughput isn't the issue. The system gets unusable with sizes < 2048.
 The machine freezes; it takes a couple of seconds for the next shell
 prompt to appear, like under really heavy load (I'd say way > 30).
 
 Of course bufsizes that small make no sense, and your patch eliminates
 the lockups, but they show there is still some bug. I'd expect slow NFS
 transfers, but not behavior as if under heavy load. (*)

NFS is a strange beast, and I guess running with too small buffers results
in such side effects. This has nothing to do with the buffer scaling but
more with the way NFS works.

 This is just to let you know; maybe you want to have a further look.

I'm not interested. Maybe someone else likes to dig deep into NFS.
I guess there is a reason why the default is 8k.
 
 Why did I test with small buffer sizes too? Well, I got another email
 which said this about the mount options, obviously regarding the buffer
 sizes:
 
 When you jackfuck that knob with other values, what is the result?
  Troubleshooting isn't only for others, son!
  
 A reminder that this is an OpenBSD list... ;-)
 Luckily I always make sure to have my asbestos on when dealing with it!

-- 
:wq Claudio



Re: NFS writes lock up system with -o tcp,-w32768

2011-03-30 Thread Bret S. Lambert
On Wed, Mar 30, 2011 at 09:54:45PM +0200, Claudio Jeker wrote:
 On Wed, Mar 30, 2011 at 08:36:45PM +0200, Walter Haidinger wrote:
  On 30.03.2011 15:23, Claudio Jeker wrote:
   Buffers below 8k are stupid. For TCP just use 32k or even 64k.
   512-byte buffers are silly; they get internally rounded up, since
   the smallest packet seems to be 512 bytes of data plus header. This
   will give you TCP send and recv buffers of around 1200 bytes. No
   wonder it is slow as hell.
  
  Throughput isn't the issue. The system gets unusable with sizes < 2048.
  The machine freezes; it takes a couple of seconds for the next shell
  prompt to appear, like under really heavy load (I'd say way > 30).
  
  Of course bufsizes that small make no sense, and your patch
  eliminates the lockups, but they show there is still some bug. I'd
  expect slow NFS transfers, but not behavior as if under heavy
  load. (*)
 
 NFS is a strange beast, and I guess running with too small buffers
 results in such side effects. This has nothing to do with the buffer
 scaling but more with the way NFS works.

NFS has enough bugs to open a special exhibit at the zoo.

I have no idea if I'll ever have enough courage to dive back into it again.

 
  This is just to let you know; maybe you want to have a further look.
 
 I'm not interested. Maybe someone else likes to dig deep into NFS.
 I guess there is a reason why the default is 8k.
  
  Why did I test with small buffer sizes too? Well, I got another email
  which said this about the mount options, obviously regarding the
  buffer sizes:
  
  When you jackfuck that knob with other values, what is the result?
   Troubleshooting isn't only for others, son!
   
  A reminder that this is an OpenBSD list... ;-)
  Luckily I always make sure to have my asbestos on when dealing with
  it!
 
 -- 
 :wq Claudio



horribly slow fsck_ffs pass1 performance

2011-03-30 Thread Amit Kulkarni
Hi,

In fsck_ffs's pass1.c, checking just takes forever on large partitions,
and also when a very high number of files is stored on the partition
(the used-inode count gets high).

fsck's main limitation is in pass1.c.

In pass1.c I found out that it in fact proceeds to check all inodes,
but there's a misleading comment there which says "Find all allocated
blocks". So the original intent was to check only used inodes in that
code block, but somebody deleted that part of the code, compared to
FreeBSD. Is there any special reason not to build a used-inode list and
then only go through it, as FreeBSD does? I know they added some stuff
in the last year, but that part of the code has existed for a long time
and we don't have it. Why not?

I was reading CVS revision 1.46 of pass1.c in FreeBSD.
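
For reference, the shape of the FreeBSD approach as I understand it.
This is an illustrative standalone sketch with made-up names and sizes,
not the actual pass1.c code: read each cylinder group's inode-allocation
bitmap first, trim the never-used tail, and only examine inodes that are
actually marked allocated:

	#include <stdio.h>
	#include <string.h>

	#define IPG 64			/* assumed inodes per cylinder group */

	static int
	isset_bit(const unsigned char *map, int i)
	{
		return map[i / 8] & (1 << (i % 8));
	}

	int
	main(void)
	{
		unsigned char inosused[IPG / 8];
		int i, lastino, checked = 0;

		/* pretend only the first 10 inodes were ever allocated */
		memset(inosused, 0, sizeof(inosused));
		for (i = 0; i < 10; i++)
			inosused[i / 8] |= 1 << (i % 8);

		/* find the highest allocated inode in this group */
		for (lastino = IPG; lastino > 0; lastino--)
			if (isset_bit(inosused, lastino - 1))
				break;

		/* scan only [0, lastino) instead of all IPG inodes */
		for (i = 0; i < lastino; i++)
			if (isset_bit(inosused, i))
				checked++;	/* fsck would check the inode here */

		printf("checked %d of %d inodes\n", checked, IPG);
		return 0;
	}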

Thanks