Re: vinum in 4.x poor performer?

2005-02-12 Thread Michael L. Squires

On Wed, 9 Feb 2005, Marc G. Fournier wrote:
I read that somewhere, but then every example shows 256k as being the stripe
size :(  Now, with a 5-drive RAID5 array (which I'll be moving that server
to over the next couple of weeks), 256k isn't an issue?  Or is there
something better I should set it to?

The 4.10 vinum(8) man page shows 512K in its example, but later on it says
that if cylinder groups are 32MB in size you should avoid a power of 2,
which would place all superblocks and inodes on the same subdisk, and that
an odd size (479kB is the example given) should be chosen instead.
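
For what it's worth, a minimal initial config along those lines might look
like the following (just a sketch, not taken from the man page; the fifth
drive, d4 on /dev/da5s1a, is hypothetical and only there to match the
5-drive array you mention):

drive d0 device /dev/da1s1a
drive d1 device /dev/da2s1a
drive d2 device /dev/da3s1a
drive d3 device /dev/da4s1a
drive d4 device /dev/da5s1a
volume vm
 plex org raid5 479k
 sd length 0 drive d0
 sd length 0 drive d1
 sd length 0 drive d2
 sd length 0 drive d3
 sd length 0 drive d4

which would be loaded with "vinum create <configfile>", the same way the
512k config shown further down in this thread was created.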

Mike Squires


Re: 99% CPU usage in System (Was: Re: vinum in 4.x poor performer?)

2005-02-09 Thread Marc G. Fournier
On Wed, 9 Feb 2005, Chad Leigh -- Shire.Net LLC wrote:
On Feb 9, 2005, at 6:34 PM, Marc G. Fournier wrote:
Most odd, there definitely has to be a problem with the Dual-Xeon system
... doing the same vmstat on my other vinum-based system, which is running
more but is only a Dual-PIII, shows major idle time:

# vmstat 5
 procs      memory      page                    disks     faults      cpu
 r b w     avm    fre  flt  re  pi  po  fr  sr da0 da1   in   sy  cs us sy id
20 1 0 4088636 219556 1664   1   2   1 3058 217   0   0  856 7937 2186 51 15 34
20 1 0 4115372 224220  472   0   0   0 2066   0   0  35  496 2915 745  7  7 86
10 1 0 4125252 221788  916   0   0   0 2513   0   2  71  798 4821 1538  6 11 83
 9 1 0   36508 228452  534   0   0   2 2187   0   0  46  554 3384 1027  3  8 89
11 1 0   27672 218828  623   0   6   0 2337   0   0  61  583 2607 679  3  9 88
16 1 0    5776 220540  989   0   0   0 2393   0   9  32  514 3247 1115  3  8 90

Which leads me further to believe this is a Dual-Xeon problem, and much 
further away from believing it has anything to do with software RAID :(
I only use AMD, so I cannot provide specifics, but look in the BIOS at boot 
time and see if there is anything strange looking in the settings.
Unfortunately, I'm dealing with remote servers, so without something
specific to get a remote tech to check, BIOS-related stuff will have to
wait until I can visit the servers personally :(

Chad

On Wed, 9 Feb 2005, Marc G. Fournier wrote:
still getting this:
# vmstat 5
procs      memory      page                    disks     faults      cpu
r b w     avm    fre  flt  re  pi  po  fr  sr da0 da1   in   sy  cs us sy id
11 2 0 3020036 267944  505   2   1   1 680  62   0   0  515 4005 918  7 38 55
19 2 0 3004568 268672  242   0   0   0 277   0   0   3  338 2767 690  1 99  0
21 2 0 2999152 271240  135   0   0   0 306   0   6   9  363 1749 525  1 99  0
13 2 0 3001508 269692   87   0   0   0  24   0   3   3  302 1524 285  1 99  0
17 2 0 3025892 268612   98   0   1   0  66   0   5   6  312 1523 479  3 97  0

Is there a way of determining what is sucking up so much Sys time?  Stuff
like perl scripts running and such would use 'user time', no?  I've got
some high-CPU processes running, but would expect them to be shooting up
the 'user time' ...

USER PID %CPU %MEM   VSZ  RSS  TT  STAT STARTED  TIME COMMAND
setiathome 21338 16.3  0.2  7888 7408  ??  RJ9:05PM   0:11.35 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_queuerun -v 0
setiathome 21380 15.1  0.1  2988 2484  ??  RsJ   9:06PM   0:02.42 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -r -d postgresql.org 
-l pgsql-sql -P10 -p10
setiathome 21384 15.5  0.1  2988 2484  ??  RsJ   9:06PM   0:02.31 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -r -d postgresql.org 
-l pgsql-docs -P10 -p10
setiathome 21389 15.0  0.1  2720 2216  ??  RsJ   9:06PM   0:02.06 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -r -d postgresql.org 
-l pgsql-hackers -P10 -p10
setiathome 21386 13.7  0.1  2720 2216  ??  RsJ   9:06PM   0:02.03 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -r -d postgresql.org 
-l pgsql-ports -P10 -p10
setiathome 21387 13.2  0.1  2724 2220  ??  RsJ   9:06PM   0:01.92 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -r -d postgresql.org 
-l pgsql-interfaces -P10 -p10
setiathome 21390 14.6  0.1  2724 2216  ??  RsJ   9:06PM   0:01.93 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -o -d postgresql.org 
-l pgsql-performance -P10 -p10
setiathome 21330 12.0  0.2  8492 7852  ??  RJ9:05PM   0:15.55 
/usr/bin/perl -wT /dev/fd/3//usr/local/www/mj/mj_wwwusr (perl5.8.5)
setiathome  7864  8.9  0.2  8912 8452  ??  RJ7:20PM  29:54.88 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_trigger -t hourly

Is there some way of finding out where all the Sys time is being used?
Something more fine-grained than what vmstat/top shows?

On Wed, 9 Feb 2005, Loren M. Lang wrote:
On Wed, Feb 09, 2005 at 02:32:30AM -0400, Marc G. Fournier wrote:
Is there a command that I can run that provides me the syscall/sec value,
that I could use in a script?  I know vmstat reports it, but is there an
easier way than having to parse the output? A perl module, maybe, that
already does it?
vmstat shouldn't be too hard to parse, try the following:
vmstat|tail -1|awk '{print $15;}'
To print out the 15th field of vmstat.  Now if you want vmstat to keep
running every five seconds or something, it's a little more complicated:
vmstat 5|grep -v 'procs\|avm'|awk '{print $15;}'
Thanks ...
On Wed, 9 Feb 2005, Marc G. Fournier wrote:
On Tue, 8 Feb 2005, Dan Nelson wrote:
Details on the array's performance, I think.  Software RAID5 will
definitely have poor write performance (logging disks solve that
problem but vinum doesn't do that), but should have excellent read
rates.  From this output, however:
systat -v output help:
   4 usersLoad  4.64  5.58  5.77
Proc:r  p  d  s  wCsw  Trp  Sys  Int  Sof  Flt
   24 9282   949 8414

Re: 99% CPU usage in System (Was: Re: vinum in 4.x poor performer?)

2005-02-09 Thread Chad Leigh -- Shire . Net LLC
On Feb 9, 2005, at 6:34 PM, Marc G. Fournier wrote:
Most odd, there definitely has to be a problem with the Dual-Xeon system
... doing the same vmstat on my other vinum-based system, which is running
more but is only a Dual-PIII, shows major idle time:

# vmstat 5
 procs      memory      page                    disks     faults      cpu
 r b w     avm    fre  flt  re  pi  po  fr  sr da0 da1   in   sy  cs us sy id
20 1 0 4088636 219556 1664   1   2   1 3058 217   0   0  856 7937 2186 51 15 34
20 1 0 4115372 224220  472   0   0   0 2066   0   0  35  496 2915 745  7  7 86
10 1 0 4125252 221788  916   0   0   0 2513   0   2  71  798 4821 1538  6 11 83
 9 1 0   36508 228452  534   0   0   2 2187   0   0  46  554 3384 1027  3  8 89
11 1 0   27672 218828  623   0   6   0 2337   0   0  61  583 2607 679  3  9 88
16 1 0    5776 220540  989   0   0   0 2393   0   9  32  514 3247 1115  3  8 90

Which leads me further to believe this is a Dual-Xeon problem, and 
much further away from believing it has anything to do with software 
RAID :(
I only use AMD, so I cannot provide specifics, but look in the BIOS at 
boot time and see if there is anything strange looking in the settings.

Chad

On Wed, 9 Feb 2005, Marc G. Fournier wrote:
still getting this:
# vmstat 5
procs      memory      page                    disks     faults      cpu
r b w     avm    fre  flt  re  pi  po  fr  sr da0 da1   in   sy  cs us sy id
11 2 0 3020036 267944  505   2   1   1 680  62   0   0  515 4005 918  7 38 55
19 2 0 3004568 268672  242   0   0   0 277   0   0   3  338 2767 690  1 99  0
21 2 0 2999152 271240  135   0   0   0 306   0   6   9  363 1749 525  1 99  0
13 2 0 3001508 269692   87   0   0   0  24   0   3   3  302 1524 285  1 99  0
17 2 0 3025892 268612   98   0   1   0  66   0   5   6  312 1523 479  3 97  0

Is there a way of determining what is sucking up so much Sys time?
Stuff like perl scripts running and such would use 'user time', no?
I've got some high-CPU processes running, but would expect them to be
shooting up the 'user time' ...

USER PID %CPU %MEM   VSZ  RSS  TT  STAT STARTED  TIME 
COMMAND
setiathome 21338 16.3  0.2  7888 7408  ??  RJ9:05PM   0:11.35 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_queuerun -v 0
setiathome 21380 15.1  0.1  2988 2484  ??  RsJ   9:06PM   0:02.42 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -r -d 
postgresql.org -l pgsql-sql -P10 -p10
setiathome 21384 15.5  0.1  2988 2484  ??  RsJ   9:06PM   0:02.31 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -r -d 
postgresql.org -l pgsql-docs -P10 -p10
setiathome 21389 15.0  0.1  2720 2216  ??  RsJ   9:06PM   0:02.06 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -r -d 
postgresql.org -l pgsql-hackers -P10 -p10
setiathome 21386 13.7  0.1  2720 2216  ??  RsJ   9:06PM   0:02.03 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -r -d 
postgresql.org -l pgsql-ports -P10 -p10
setiathome 21387 13.2  0.1  2724 2220  ??  RsJ   9:06PM   0:01.92 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -r -d 
postgresql.org -l pgsql-interfaces -P10 -p10
setiathome 21390 14.6  0.1  2724 2216  ??  RsJ   9:06PM   0:01.93 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -o -d 
postgresql.org -l pgsql-performance -P10 -p10
setiathome 21330 12.0  0.2  8492 7852  ??  RJ9:05PM   0:15.55 
/usr/bin/perl -wT /dev/fd/3//usr/local/www/mj/mj_wwwusr (perl5.8.5)
setiathome  7864  8.9  0.2  8912 8452  ??  RJ7:20PM  29:54.88 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_trigger -t hourly

Is there some way of finding out where all the Sys time is being
used? Something more fine-grained than what vmstat/top shows?

On Wed, 9 Feb 2005, Loren M. Lang wrote:
On Wed, Feb 09, 2005 at 02:32:30AM -0400, Marc G. Fournier wrote:
Is there a command that I can run that provides me the syscall/sec value,
that I could use in a script?  I know vmstat reports it, but is there an
easier way than having to parse the output? A perl module, maybe, that
already does it?
vmstat shouldn't be too hard to parse, try the following:
vmstat|tail -1|awk '{print $15;}'
To print out the 15th field of vmstat.  Now if you want vmstat to keep
running every five seconds or something, it's a little more complicated:
vmstat 5|grep -v 'procs\|avm'|awk '{print $15;}'
Thanks ...
On Wed, 9 Feb 2005, Marc G. Fournier wrote:
On Tue, 8 Feb 2005, Dan Nelson wrote:
Details on the array's performance, I think.  Software RAID5 will
definitely have poor write performance (logging disks solve that
problem but vinum doesn't do that), but should have excellent read
rates.  From this output, however:
systat -v output help:
   4 usersLoad  4.64  5.58  5.77
Proc:r  p  d  s  wCsw  Trp  Sys  Int  Sof  Flt
   24 9282   949 8414*  678  349 8198
54.6%Sys   0.2%Intr 45.2%User  0.0%Nice  0.0%Idl
Disks   da0   da1   da2   da3   da4 pass0 pass1
KB/t   5.32  9.50 12.52 16.00  9.00  0.00  0.00
tps  23 2 4 3 1 0 0
MB/s   0.12  0.01 

Re: 99% CPU usage in System (Was: Re: vinum in 4.x poor performer?)

2005-02-09 Thread Marc G. Fournier
Most odd, there definitely has to be a problem with the Dual-Xeon system
... doing the same vmstat on my other vinum-based system, which is running
more but is only a Dual-PIII, shows major idle time:

# vmstat 5
 procs      memory      page                    disks     faults      cpu
 r b w     avm    fre  flt  re  pi  po  fr  sr da0 da1   in   sy  cs us sy id
20 1 0 4088636 219556 1664   1   2   1 3058 217   0   0  856 7937 2186 51 15 34
20 1 0 4115372 224220  472   0   0   0 2066   0   0  35  496 2915 745  7  7 86
10 1 0 4125252 221788  916   0   0   0 2513   0   2  71  798 4821 1538  6 11 83
 9 1 0   36508 228452  534   0   0   2 2187   0   0  46  554 3384 1027  3  8 89
11 1 0   27672 218828  623   0   6   0 2337   0   0  61  583 2607 679  3  9 88
16 1 0    5776 220540  989   0   0   0 2393   0   9  32  514 3247 1115  3  8 90
Which leads me further to believe this is a Dual-Xeon problem, and much 
further away from believing it has anything to do with software RAID :(

On Wed, 9 Feb 2005, Marc G. Fournier wrote:
still getting this:
# vmstat 5
procs      memory      page                    disks     faults      cpu
r b w     avm    fre  flt  re  pi  po  fr  sr da0 da1   in   sy  cs us sy id
11 2 0 3020036 267944  505   2   1   1 680  62   0   0  515 4005 918  7 38 55
19 2 0 3004568 268672  242   0   0   0 277   0   0   3  338 2767 690  1 99  0
21 2 0 2999152 271240  135   0   0   0 306   0   6   9  363 1749 525  1 99  0
13 2 0 3001508 269692   87   0   0   0  24   0   3   3  302 1524 285  1 99  0
17 2 0 3025892 268612   98   0   1   0  66   0   5   6  312 1523 479  3 97  0
Is there a way of determining what is sucking up so much Sys time?  Stuff
like perl scripts running and such would use 'user time', no?  I've got some
high-CPU processes running, but would expect them to be shooting up the 'user
time' ...

USER PID %CPU %MEM   VSZ  RSS  TT  STAT STARTED  TIME COMMAND
setiathome 21338 16.3  0.2  7888 7408  ??  RJ9:05PM   0:11.35 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_queuerun -v 0
setiathome 21380 15.1  0.1  2988 2484  ??  RsJ   9:06PM   0:02.42 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -r -d postgresql.org -l 
pgsql-sql -P10 -p10
setiathome 21384 15.5  0.1  2988 2484  ??  RsJ   9:06PM   0:02.31 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -r -d postgresql.org -l 
pgsql-docs -P10 -p10
setiathome 21389 15.0  0.1  2720 2216  ??  RsJ   9:06PM   0:02.06 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -r -d postgresql.org -l 
pgsql-hackers -P10 -p10
setiathome 21386 13.7  0.1  2720 2216  ??  RsJ   9:06PM   0:02.03 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -r -d postgresql.org -l 
pgsql-ports -P10 -p10
setiathome 21387 13.2  0.1  2724 2220  ??  RsJ   9:06PM   0:01.92 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -r -d postgresql.org -l 
pgsql-interfaces -P10 -p10
setiathome 21390 14.6  0.1  2724 2216  ??  RsJ   9:06PM   0:01.93 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -o -d postgresql.org -l 
pgsql-performance -P10 -p10
setiathome 21330 12.0  0.2  8492 7852  ??  RJ9:05PM   0:15.55 
/usr/bin/perl -wT /dev/fd/3//usr/local/www/mj/mj_wwwusr (perl5.8.5)
setiathome  7864  8.9  0.2  8912 8452  ??  RJ7:20PM  29:54.88 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_trigger -t hourly

Is there some way of finding out where all the Sys time is being used?
Something more fine-grained than what vmstat/top shows?

On Wed, 9 Feb 2005, Loren M. Lang wrote:
On Wed, Feb 09, 2005 at 02:32:30AM -0400, Marc G. Fournier wrote:
Is there a command that I can run that provides me the syscall/sec value,
that I could use in a script?  I know vmstat reports it, but is there an
easier way than having to parse the output? A perl module, maybe, that
already does it?
vmstat shouldn't be too hard to parse, try the following:
vmstat|tail -1|awk '{print $15;}'
To print out the 15th field of vmstat.  Now if you want vmstat to keep
running every five seconds or something, it's a little more complicated:
vmstat 5|grep -v 'procs\|avm'|awk '{print $15;}'
Thanks ...
On Wed, 9 Feb 2005, Marc G. Fournier wrote:
On Tue, 8 Feb 2005, Dan Nelson wrote:
Details on the array's performance, I think.  Software RAID5 will
definitely have poor write performance (logging disks solve that
problem but vinum doesn't do that), but should have excellent read
rates.  From this output, however:
systat -v output help:
   4 usersLoad  4.64  5.58  5.77

Proc:r  p  d  s  wCsw  Trp  Sys  Int  Sof  Flt
   24 9282   949 8414*  678  349 8198

54.6%Sys   0.2%Intr 45.2%User  0.0%Nice  0.0%Idl

Disks   da0   da1   da2   da3   da4 pass0 pass1
KB/t   5.32  9.50 12.52 16.00  9.00  0.00  0.00
tps  23 2 4 3 1 0 0
MB/s   0.12  0.01  0.05  0.04  0.01  0.00  0.00
% busy3 1 1 1 0 0 0
, it looks like your disks aren't being touched at all.  You are doing
over 9 syscalls/second, though, which is mighty high.  The 50% Sys
do

99% CPU usage in System (Was: Re: vinum in 4.x poor performer?)

2005-02-09 Thread Marc G. Fournier
still getting this:
# vmstat 5
 procs      memory      page                    disks     faults      cpu
 r b w     avm    fre  flt  re  pi  po  fr  sr da0 da1   in   sy  cs us sy id
11 2 0 3020036 267944  505   2   1   1 680  62   0   0  515 4005 918  7 38 55
19 2 0 3004568 268672  242   0   0   0 277   0   0   3  338 2767 690  1 99  0
21 2 0 2999152 271240  135   0   0   0 306   0   6   9  363 1749 525  1 99  0
13 2 0 3001508 269692   87   0   0   0  24   0   3   3  302 1524 285  1 99  0
17 2 0 3025892 268612   98   0   1   0  66   0   5   6  312 1523 479  3 97  0
Is there a way of determining what is sucking up so much Sys time?  Stuff
like perl scripts running and such would use 'user time', no?  I've got
some high-CPU processes running, but would expect them to be shooting up
the 'user time' ...

USER PID %CPU %MEM   VSZ  RSS  TT  STAT STARTED  TIME COMMAND
setiathome 21338 16.3  0.2  7888 7408  ??  RJ9:05PM   0:11.35 /usr/bin/perl 
-wT /usr/local/majordomo/bin/mj_queuerun -v 0
setiathome 21380 15.1  0.1  2988 2484  ??  RsJ   9:06PM   0:02.42 /usr/bin/perl 
-wT /usr/local/majordomo/bin/mj_enqueue -r -d postgresql.org -l pgsql-sql -P10 
-p10
setiathome 21384 15.5  0.1  2988 2484  ??  RsJ   9:06PM   0:02.31 /usr/bin/perl 
-wT /usr/local/majordomo/bin/mj_enqueue -r -d postgresql.org -l pgsql-docs -P10 
-p10
setiathome 21389 15.0  0.1  2720 2216  ??  RsJ   9:06PM   0:02.06 /usr/bin/perl 
-wT /usr/local/majordomo/bin/mj_enqueue -r -d postgresql.org -l pgsql-hackers 
-P10 -p10
setiathome 21386 13.7  0.1  2720 2216  ??  RsJ   9:06PM   0:02.03 /usr/bin/perl 
-wT /usr/local/majordomo/bin/mj_enqueue -r -d postgresql.org -l pgsql-ports 
-P10 -p10
setiathome 21387 13.2  0.1  2724 2220  ??  RsJ   9:06PM   0:01.92 /usr/bin/perl 
-wT /usr/local/majordomo/bin/mj_enqueue -r -d postgresql.org -l 
pgsql-interfaces -P10 -p10
setiathome 21390 14.6  0.1  2724 2216  ??  RsJ   9:06PM   0:01.93 /usr/bin/perl 
-wT /usr/local/majordomo/bin/mj_enqueue -o -d postgresql.org -l 
pgsql-performance -P10 -p10
setiathome 21330 12.0  0.2  8492 7852  ??  RJ9:05PM   0:15.55 /usr/bin/perl 
-wT /dev/fd/3//usr/local/www/mj/mj_wwwusr (perl5.8.5)
setiathome  7864  8.9  0.2  8912 8452  ??  RJ7:20PM  29:54.88 /usr/bin/perl 
-wT /usr/local/majordomo/bin/mj_trigger -t hourly
Is there some way of finding out where all the Sys time is being used?
Something more fine-grained than what vmstat/top shows?
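
One crude way to narrow it down (just a sketch, using only the tools already
mentioned in this thread; the field number assumes the two disk columns shown
in the vmstat output above, and the interval is arbitrary):

#!/bin/sh
# Sample the system-call rate and the top CPU consumers together every
# five seconds, so a jump in the 'sy' column can be matched to a process.
while :; do
        date
        vmstat | tail -1 | awk '{print "syscalls/sec:", $15}'
        ps -axo pid,pcpu,time,command | sort -rn -k 2 | head -5
        echo "----"
        sleep 5
done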

On Wed, 9 Feb 2005, Loren M. Lang wrote:
On Wed, Feb 09, 2005 at 02:32:30AM -0400, Marc G. Fournier wrote:
Is there a command that I can run that provides me the syscall/sec value,
that I could use in a script?  I know vmstat reports it, but is there an
easier way than having to parse the output? A perl module, maybe, that
already does it?
vmstat shouldn't be too hard to parse, try the following:
vmstat|tail -1|awk '{print $15;}'
To print out the 15th field of vmstat.  Now if you want vmstat to keep
running every five seconds or something, it's a little more complicated:
vmstat 5|grep -v 'procs\|avm'|awk '{print $15;}'
Thanks ...
On Wed, 9 Feb 2005, Marc G. Fournier wrote:
On Tue, 8 Feb 2005, Dan Nelson wrote:
Details on the array's performance, I think.  Software RAID5 will
definitely have poor write performance (logging disks solve that
problem but vinum doesn't do that), but should have excellent read
rates.  From this output, however:
systat -v output help:
   4 usersLoad  4.64  5.58  5.77

Proc:r  p  d  s  wCsw  Trp  Sys  Int  Sof  Flt
   24 9282   949 8414*  678  349 8198

54.6%Sys   0.2%Intr 45.2%User  0.0%Nice  0.0%Idl

Disks   da0   da1   da2   da3   da4 pass0 pass1
KB/t   5.32  9.50 12.52 16.00  9.00  0.00  0.00
tps  23 2 4 3 1 0 0
MB/s   0.12  0.01  0.05  0.04  0.01  0.00  0.00
% busy3 1 1 1 0 0 0
, it looks like your disks aren't being touched at all.  You are doing
over 9 syscalls/second, though, which is mighty high.  The 50% Sys
doesn't look good either.  You may have a runaway process doing some
syscall over and over.  If this is not an MPSAFE syscall (see
/sys/kern/syscalls.master ), it will also prevent other processes from
making non-MPSAFE syscalls, and in 4.x that's most of them.
Wow, that actually pointed me in the right direction, I think ... I just
killed an http process that was using a lot of CPU, and syscalls dropped
down to a numeric value again ... I'm still curious as to why this only
seems to affect my Dual-Xeon box though :(
Thanks ...

Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664

Re: vinum in 4.x poor performer?

2005-02-09 Thread Marc G. Fournier
On Wed, 9 Feb 2005, Mark A. Garcia wrote:
Marc G. Fournier wrote:
Self-followup .. the server config is as follows ... did I maybe
misconfigure the array?

# Vinum configuration of neptune.hub.org, saved at Wed Feb  9 00:13:52 2005
drive d0 device /dev/da1s1a
drive d1 device /dev/da2s1a
drive d2 device /dev/da3s1a
drive d3 device /dev/da4s1a
volume vm
plex name vm.p0 org raid5 1024s vol vm
sd name vm.p0.s0 drive d0 plex vm.p0 len 142314496s driveoffset 265s plexoffset 0s
sd name vm.p0.s1 drive d1 plex vm.p0 len 142314496s driveoffset 265s 
plexoffset 1024s
sd name vm.p0.s2 drive d2 plex vm.p0 len 142314496s driveoffset 265s 
plexoffset 2048s
sd name vm.p0.s3 drive d3 plex vm.p0 len 142314496s driveoffset 265s 
plexoffset 3072s

based on an initial config file that looks like:
neptune# cat /root/raid5
drive d0 device /dev/da1s1a
drive d1 device /dev/da2s1a
drive d2 device /dev/da3s1a
drive d3 device /dev/da4s1a
volume vm
 plex org raid5 512k
 sd length 0 drive d0
 sd length 0 drive d1
 sd length 0 drive d2
 sd length 0 drive d3
It's worth pointing out that your performance on the raid-5 can change for
the better if you avoid having the stripe size be a power of 2.  This is
especially true if the number (n) of disks is itself a power of two (2^n).
I read that somewhere, but then every example shows 256k as being the
stripe size :(  Now, with a 5-drive RAID5 array (which I'll be moving that
server to over the next couple of weeks), 256k isn't an issue?  Or is
there something better I should set it to?


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664


Re: vinum in 4.x poor performer?

2005-02-09 Thread Mark A. Garcia
Olivier Nicole wrote:
All servers run RAID5 .. only one other is using vinum, the other 3 are 
using hardware RAID controllers ...
   


Come on, of course a software solution will be slower than a hardware
solution. What would you expect? :))
(Given it is the same disk type/speed/controller...)
 

Usually this is the case, but it's also very dependent on the hardware
RAID controller.  There are situations where a software RAID (vinum in
this case) can outperform some hardware controllers under specific
circumstances, i.e. sequential reads with a very large stripe size.  An
example is an image server where the average image might be 3MB.  With a
434kB stripe size, reading one image would take ~7 transfers of data; a
larger stripe size of 5MB would greatly improve performance, since only
one transfer would occur, even though the 2MB difference between the
stripe and the average file carries no usable data for that read.  Vinum
optimizes the transfer to the exact 3MB of the file, whereas some hardware
controllers would transfer the whole 5MB stripe, adding some bandwidth
latency and transfer time.  Again, it's a matter of specific cases, and
assuming 'performance' based on differing conduits for data transfer can
just skirt the real issue, if there is any.

Cheers,
-.mag


Re: vinum in 4.x poor performer?

2005-02-09 Thread Mark A. Garcia
Marc G. Fournier wrote:
Self-followup .. the server config is as follows ... did I maybe
misconfigure the array?

# Vinum configuration of neptune.hub.org, saved at Wed Feb  9 00:13:52 2005
drive d0 device /dev/da1s1a
drive d1 device /dev/da2s1a
drive d2 device /dev/da3s1a
drive d3 device /dev/da4s1a
volume vm
plex name vm.p0 org raid5 1024s vol vm
sd name vm.p0.s0 drive d0 plex vm.p0 len 142314496s driveoffset 265s plexoffset 0s
sd name vm.p0.s1 drive d1 plex vm.p0 len 142314496s driveoffset 265s 
plexoffset 1024s
sd name vm.p0.s2 drive d2 plex vm.p0 len 142314496s driveoffset 265s 
plexoffset 2048s
sd name vm.p0.s3 drive d3 plex vm.p0 len 142314496s driveoffset 265s 
plexoffset 3072s

based on an initial config file that looks like:
neptune# cat /root/raid5
drive d0 device /dev/da1s1a
drive d1 device /dev/da2s1a
drive d2 device /dev/da3s1a
drive d3 device /dev/da4s1a
volume vm
 plex org raid5 512k
 sd length 0 drive d0
 sd length 0 drive d1
 sd length 0 drive d2
 sd length 0 drive d3
It's worth pointing out that your performance on the raid-5 can change
for the better if you avoid having the stripe size be a power of 2.
This is especially true if the number (n) of disks is itself a power of two (2^n).

Cheers,
-.mag


Re: vinum in 4.x poor performer?

2005-02-09 Thread Loren M. Lang
On Wed, Feb 09, 2005 at 02:32:30AM -0400, Marc G. Fournier wrote:
> 
> Is there a command that I can run that provides me the syscall/sec value,
> that I could use in a script?  I know vmstat reports it, but is there an
> easier way than having to parse the output? A perl module, maybe, that
> already does it?

vmstat shouldn't be too hard to parse, try the following:

vmstat|tail -1|awk '{print $15;}'

To print out the 15th field of vmstat.  Now if you want vmstat to keep
running every five seconds or something, it's a little more complicated:

vmstat 5|grep -v 'procs\|avm'|awk '{print $15;}'
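
A small variation on the same idea (again just parsing vmstat, nothing
fancier): skip the first sample, which shows averages since boot rather than
the current rate, and only print when the rate looks suspicious.  The
5000/sec threshold and the field number ($15 is the 'sy' column when two
disk columns are displayed) are arbitrary assumptions, so adjust to taste:

vmstat 5 | awk 'NR > 3 && $15 > 5000 { print "high syscall rate:", $15 }'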

> 
> Thanks ...
> 
> On Wed, 9 Feb 2005, Marc G. Fournier wrote:
> 
> >On Tue, 8 Feb 2005, Dan Nelson wrote:
> >
> >>Details on the array's performance, I think.  Software RAID5 will
> >>definitely have poor write performance (logging disks solve that
> >>problem but vinum doesn't do that), but should have excellent read
> >>rates.  From this output, however:
> >>
> >>>systat -v output help:
> >>>4 usersLoad  4.64  5.58  5.77
> >>
> >>>Proc:r  p  d  s  wCsw  Trp  Sys  Int  Sof  Flt
> >>>24 9282   949 8414*  678  349 8198
> >>
> >>>54.6%Sys   0.2%Intr 45.2%User  0.0%Nice  0.0%Idl
> >>
> >>>Disks   da0   da1   da2   da3   da4 pass0 pass1
> >>>KB/t   5.32  9.50 12.52 16.00  9.00  0.00  0.00
> >>>tps  23 2 4 3 1 0 0
> >>>MB/s   0.12  0.01  0.05  0.04  0.01  0.00  0.00
> >>>% busy3 1 1 1 0 0 0
> >>
> >>, it looks like your disks aren't being touched at all.  You are doing
> >>over 9 syscalls/second, though, which is mighty high.  The 50% Sys
> >>doesn't look good either.  You may have a runaway process doing some
> >>syscall over and over.  If this is not an MPSAFE syscall (see
> >>/sys/kern/syscalls.master ), it will also prevent other processes from
> >>making non-MPSAFE syscalls, and in 4.x that's most of them.
> >
> >Wow, that actually pointed me in the right direction, I think ... I just
> >killed an http process that was using a lot of CPU, and syscalls dropped
> >down to a numeric value again ... I'm still curious as to why this only
> >seems to affect my Dual-Xeon box though :(
> >
> >Thanks ...
> >
> >
> >Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
> >Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664
> >
> 
> 
> Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
> Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664

-- 
I sense much NT in you.
NT leads to Bluescreen.
Bluescreen leads to downtime.
Downtime leads to suffering.
NT is the path to the darkside.
Powerful Unix is.

Public Key: ftp://ftp.tallye.com/pub/lorenl_pubkey.asc
Fingerprint: B3B9 D669 69C9 09EC 1BCD  835A FAF3 7A46 E4A3 280C
 


Re: vinum in 4.x poor performer?

2005-02-08 Thread Marc G. Fournier
Is there a command that I can run that provides me the syscall/sec value,
that I could use in a script?  I know vmstat reports it, but is there an
easier way than having to parse the output? A perl module, maybe, that
already does it?

Thanks ...
On Wed, 9 Feb 2005, Marc G. Fournier wrote:
On Tue, 8 Feb 2005, Dan Nelson wrote:
Details on the array's performance, I think.  Software RAID5 will
definitely have poor write performance (logging disks solve that
problem but vinum doesn't do that), but should have excellent read
rates.  From this output, however:
systat -v output help:
4 usersLoad  4.64  5.58  5.77

Proc:r  p  d  s  wCsw  Trp  Sys  Int  Sof  Flt
24 9282   949 8414*  678  349 8198

54.6%Sys   0.2%Intr 45.2%User  0.0%Nice  0.0%Idl

Disks   da0   da1   da2   da3   da4 pass0 pass1
KB/t   5.32  9.50 12.52 16.00  9.00  0.00  0.00
tps  23 2 4 3 1 0 0
MB/s   0.12  0.01  0.05  0.04  0.01  0.00  0.00
% busy3 1 1 1 0 0 0
, it looks like your disks aren't being touched at all.  You are doing
over 9 syscalls/second, though, which is mighty high.  The 50% Sys
doesn't look good either.  You may have a runaway process doing some
syscall over and over.  If this is not an MPSAFE syscall (see
/sys/kern/syscalls.master ), it will also prevent other processes from
making non-MPSAFE syscalls, and in 4.x that's most of them.
Wow, that actually pointed me in the right direction, I think ... I just
killed an http process that was using a lot of CPU, and syscalls dropped
down to a numeric value again ... I'm still curious as to why this only
seems to affect my Dual-Xeon box though :(

Thanks ...

Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664

Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664


Re: vinum in 4.x poor performer?

2005-02-08 Thread Marc G. Fournier
On Tue, 8 Feb 2005, Dan Nelson wrote:
Details on the array's performance, I think.  Software RAID5 will
definitely have poor write performance (logging disks solve that
problem but vinum doesn't do that), but should have excellent read
rates.  From this output, however:
systat -v output help:
4 usersLoad  4.64  5.58  5.77

Proc:r  p  d  s  wCsw  Trp  Sys  Int  Sof  Flt
24 9282   949 8414*  678  349 8198

54.6%Sys   0.2%Intr 45.2%User  0.0%Nice  0.0%Idl

Disks   da0   da1   da2   da3   da4 pass0 pass1
KB/t   5.32  9.50 12.52 16.00  9.00  0.00  0.00
tps  23 2 4 3 1 0 0
MB/s   0.12  0.01  0.05  0.04  0.01  0.00  0.00
% busy3 1 1 1 0 0 0
, it looks like your disks aren't being touched at all.  You are doing
over 9 syscalls/second, though, which is mighty high.  The 50% Sys
doesn't look good either.  You may have a runaway process doing some
syscall over and over.  If this is not an MPSAFE syscall (see
/sys/kern/syscalls.master ), it will also prevent other processes from
making non-MPSAFE syscalls, and in 4.x that's most of them.
Wow, that actually pointed me in the right direction, I think ... I just
killed an http process that was using a lot of CPU, and syscalls dropped
down to a numeric value again ... I'm still curious as to why this only
seems to affect my Dual-Xeon box though :(

Thanks ...

Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664


Re: vinum in 4.x poor performer?

2005-02-08 Thread Dan Nelson
In the last episode (Feb 09), Marc G. Fournier said:
> On Wed, 9 Feb 2005, Greg 'groggy' Lehey wrote:
> >On Tuesday,  8 February 2005 at 23:21:54 -0400, Marc G. Fournier wrote:
> >>I have a Dual-Xeon server with 4G of RAM, with its primary file
> >>system consisting of 4x73G SCSI drives running RAID5 using vinum
> >>... the operating system is currently FreeBSD 4.10-STABLE #1: Fri
> >>Oct 22 15:06:55 ADT 2004 ... swap usage is 0% (6149) ... and it
> >>performs worse than any of my other servers, and I have less
> >>running on it than the other servers ...
> >>
> >>I also have HTT disabled on this server ... and softupdates enabled
> >>on the file system ...
> >>
> >>That said ... am I hitting limits of software raid or is there
> >>something I should be looking at as far as performance is
> >>concerned?  Maybe something I have misconfigured?
> >
> >Based on what you've said, it's impossible to tell.  Details would
> >be handy.
> 
> Like?  I'm not sure what would be useful for this one ... I just sent
> in my current drive config ... something else useful?

Details on the array's performance, I think.  Software RAID5 will
definitely have poor write performance (logging disks solve that
problem but vinum doesn't do that), but should have excellent read
rates.  From this output, however:
 
> systat -v output help:
> 4 usersLoad  4.64  5.58  5.77 

> Proc:r  p  d  s  wCsw  Trp  Sys  Int  Sof  Flt
> 24 9282   949 8414*  678  349 8198

> 54.6%Sys   0.2%Intr 45.2%User  0.0%Nice  0.0%Idl  

> Disks   da0   da1   da2   da3   da4 pass0 pass1   
> KB/t   5.32  9.50 12.52 16.00  9.00  0.00  0.00   
> tps  23 2 4 3 1 0 0   
> MB/s   0.12  0.01  0.05  0.04  0.01  0.00  0.00   
> % busy3 1 1 1 0 0 0   

, it looks like your disks aren't being touched at all.  You are doing
over 9 syscalls/second, though, which is mighty high.  The 50% Sys
doesn't look good either.  You may have a runaway process doing some
syscall over and over.  If this is not an MPSAFE syscall (see
/sys/kern/syscalls.master ), it will also prevent other processes from
making non-MPSAFE syscalls, and in 4.x that's most of them.
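
If you want to see *which* syscall a runaway process is hammering, here is
a rough sketch with the stock ktrace/kdump tools (the PID and file name are
made up, and the field positions assume typical kdump output):

ktrace -f /tmp/suspect.out -p 12345    # attach to the suspect process
sleep 10
ktrace -C                              # stop the tracing again
kdump -f /tmp/suspect.out | \
    awk '$3 == "CALL" { sub(/\(.*/, "", $4); print $4 }' | \
    sort | uniq -c | sort -rn | head

That gives a quick histogram of syscall names, which you can then check
against /sys/kern/syscalls.master as suggested above.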

-- 
Dan Nelson
[EMAIL PROTECTED]


Dual-Xeon vs Dual-PIII (Was: Re: vinum in 4.x poor performer?)

2005-02-08 Thread Marc G. Fournier
The more I'm looking at this, the less I can believe my 'issue' is with 
vinum ... based on one of my other machines, it just doesn't *feel* right 


I have two servers that are fairly similar in config ... both running 
vinum RAID5 over 4 disks ... one is the Dual-Xeon that I'm finding 
"problematic" with 73G Seagate drives, and the other is the Dual-PIII with 
36G Seagate drives ...

The reason that I'm finding it hard to believe that my problem is with 
vinum is that the Dual-PIII is twice as loaded as the Dual-Xeon, but 
hardly seems to break a sweat ...

In fact, out of all my servers (3xDual-PIII, 1xDual-Athlon and 
1xDual-Xeon), only the Dual-Xeon doesn't seem to be able to perform ...

Now, out of all of the servers, only the Dual-Xeon, of course, supports 
HTT, which I *believe* is disabled, but from dmesg:

Copyright (c) 1992-2004 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
The Regents of the University of California. All rights reserved.
FreeBSD 4.10-STABLE #1: Fri Oct 22 15:06:55 ADT 2004
[EMAIL PROTECTED]:/usr/obj/usr/src/sys/kernel
Timecounter "i8254"  frequency 1193182 Hz
CPU: Intel(R) Xeon(TM) CPU 2.40GHz (2392.95-MHz 686-class CPU)
  Origin = "GenuineIntel"  Id = 0xf27  Stepping = 7
  
Features=0xbfebfbff
  Hyperthreading: 2 logical CPUs
real memory  = 4026466304 (3932096K bytes)
avail memory = 3922362368 (3830432K bytes)
Programming 24 pins in IOAPIC #0
IOAPIC #0 intpin 2 -> irq 0
Programming 24 pins in IOAPIC #1
Programming 24 pins in IOAPIC #2
FreeBSD/SMP: Multiprocessor motherboard: 4 CPUs
 cpu0 (BSP): apic id:  0, version: 0x00050014, at 0xfee0
 cpu1 (AP):  apic id:  1, version: 0x00050014, at 0xfee0
 cpu2 (AP):  apic id:  6, version: 0x00050014, at 0xfee0
 cpu3 (AP):  apic id:  7, version: 0x00050014, at 0xfee0
 io0 (APIC): apic id:  8, version: 0x00178020, at 0xfec0
 io1 (APIC): apic id:  9, version: 0x00178020, at 0xfec81000
 io2 (APIC): apic id: 10, version: 0x00178020, at 0xfec81400
Preloaded elf kernel "kernel" at 0x80339000.
Warning: Pentium 4 CPU: PSE disabled
Pentium Pro MTRR support enabled
Using $PIR table, 19 entries at 0x800f2f30
It's showing "4 CPUs" ... but:
machdep.hlt_logical_cpus: 1
which, from /usr/src/UPDATING, indicates that the HTT "cpus" aren't enabled:
20031022:
Support for HyperThread logical CPUs has now been enabled by
default.  As a result, the HTT kernel option no longer exists.
Instead, the logical CPUs are always started so that they can
handle interrupts.  However, the extra logical CPUs are prevented
from executing user processes by default.  To enable the logical
CPUs, change the value of the machdep.hlt_logical_cpus from 1 to
0.  This value can also be set from the loader as a tunable of
the same name.
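
So, to double-check or change it from a shell (per that UPDATING note; these
are plain sysctl operations, nothing exotic, and they work over a remote
login with no BIOS visit needed):

sysctl machdep.hlt_logical_cpus          # 1 = logical HTT CPUs halted
sysctl -w machdep.hlt_logical_cpus=0     # 0 = let them run user processes
# or set it at boot time as a loader tunable in /boot/loader.conf:
# machdep.hlt_logical_cpus="0"
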
Finally ... top shows:
last pid: 73871;  load averages:  9.76,  9.23,  8.16
   up 9+02:02:26  00:57:06
422 processes: 8 running, 413 sleeping, 1 zombie
CPU states: 19.0% user,  0.0% nice, 81.0% system,  0.0% interrupt,  0.0% idle
Mem: 2445M Active, 497M Inact, 595M Wired, 160M Cache, 199M Buf, 75M Free
Swap: 2048M Total, 6388K Used, 2041M Free
  PID USERNAME   PRI NICE  SIZERES STATE  C   TIME   WCPUCPU COMMAND
28298 www 64   0 28136K 12404K CPU2   2  80:59 24.51% 24.51% httpd
69232 excalibur   64   0 80128K 76624K RUN2   2:55 16.50% 16.50% lisp.run
72879 www 64   0 22664K  9444K RUN0   0:12 12.94% 12.94% httpd
14154 www 64   0 36992K 22880K RUN0  55:07 12.70% 12.70% httpd
69758 www 63   0 15400K  8756K RUN0   0:18 11.87% 11.87% httpd
 7553 nobody   2   0   158M   131M poll   0  33:19  8.98%  8.98% nsd
70752 setiathome   2   0 14644K 14084K select 2   0:47  8.98%  8.98% perl
71191 setiathome   2   0 13220K 12804K select 0   0:29  8.40%  8.40% perl
70903 setiathome   2   0 14224K 13676K select 0   0:42  7.37%  7.37% perl
33932 setiathome   2   0 21712K 21144K select 0   2:23  4.59%  4.59% perl
In this case ... 0% idle, 81% in system?
As a comparison the Dual-PIII/vinum server looks like:
last pid: 90614;  load averages:  3.64,  2.41,  2.69
  up 3+08:45:17  
00:59:27
955 processes: 12 running, 942 sleeping, 1 zombie
CPU states: 63.9% user,  0.0% nice, 32.6% system,  3.5% interrupt,  0.0% idle
Mem: 2432M Active, 687M Inact, 563M Wired, 147M Cache, 199M Buf, 5700K Free
Swap: 8192M Total, 12M Used, 8180M Free, 12K In
  PID USERNAME   PRI NICE  SIZERES STATE  C   TIME   WCPUCPU COMMAND
90506 scrappy 56   0 19384K 14428K RUN0   0:06 22.98% 16.41% postgres
90579 root57   0  3028K  2156K CPU1   1   0:04 26.23% 14.45% top
90554 pgsql   -6   0 12784K  7408K RUN1   0:04 18.76% 11.87% postgres
90529 pgsql   54   0 14448K  8568K RUN0   0:03 16.90% 11.28% postgres
90560 

Re: vinum in 4.x poor performer?

2005-02-08 Thread Marc G. Fournier
On Wed, 9 Feb 2005, Greg 'groggy' Lehey wrote:
On Tuesday,  8 February 2005 at 23:21:54 -0400, Marc G. Fournier wrote:
I have a Dual-Xeon server with 4G of RAM, with its primary file system
consisting of 4x73G SCSI drives running RAID5 using vinum ... the
operating system is currently FreeBSD 4.10-STABLE #1: Fri Oct 22 15:06:55
ADT 2004 ... swap usage is 0% (6149) ... and it performs worse than any of
my other servers, and I have less running on it than the other servers ...
I also have HTT disabled on this server ... and softupdates enabled on the
file system ...
That said ... am I hitting limits of software raid or is there something I
should be looking at as far as performance is concerned?  Maybe something
I have misconfigured?
Based on what you've said, it's impossible to tell.  Details would be
handy.
Like?  I'm not sure what would be useful for this one ... I just sent in 
my current drive config ... something else useful?

systat -v output help:
4 usersLoad  4.64  5.58  5.77  Feb  9 00:24
Mem:KBREALVIRTUAL VN PAGER  SWAP PAGER
Tot   Share  TotShareFree in  out in  out
Act 1904768  137288  3091620   381128  159276 count
All 3850780  221996  1078752   605460 pages
 7921 zfod   Interrupts
Proc:r  p  d  s  wCsw  Trp  Sys  Int  Sof  Flt242 cow 681 total
24 9282   949 8414*  678  349 8198 566916 wireahd0 irq16
  2527420 act  67 ahd1 irq17
54.6%Sys   0.2%Intr 45.2%User  0.0%Nice  0.0%Idl   608208 inact   157 em0 irq18
|||||||||| 146620 cache   200 clk irq0
===>12656 free257 rtc irq8
  daefr
Namei Name-cacheDir-cache7363 prcfr
Calls hits% hits% react
4610646005  100   130 pdwake
  pdpgs
Disks   da0   da1   da2   da3   da4 pass0 pass1   intrn
KB/t   5.32  9.50 12.52 16.00  9.00  0.00  0.00204096 buf
tps  23 2 4 3 1 0 0  1610 dirtybuf
MB/s   0.12  0.01  0.05  0.04  0.01  0.00  0.00512000 desiredvnodes
% busy3 1 1 1 0 0 0397436 numvnodes
   166179 freevnodes
Drives da1 -> da4 are used in the vinum array; da0 is just the system drive
...


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664


Re: vinum in 4.x poor performer?

2005-02-08 Thread Marc G. Fournier
Self-followup .. the server config is as follows ... did I maybe
misconfigure the array?

# Vinum configuration of neptune.hub.org, saved at Wed Feb  9 00:13:52 2005
drive d0 device /dev/da1s1a
drive d1 device /dev/da2s1a
drive d2 device /dev/da3s1a
drive d3 device /dev/da4s1a
volume vm
plex name vm.p0 org raid5 1024s vol vm 
sd name vm.p0.s0 drive d0 plex vm.p0 len 142314496s driveoffset 265s plexoffset 0s
sd name vm.p0.s1 drive d1 plex vm.p0 len 142314496s driveoffset 265s plexoffset 1024s
sd name vm.p0.s2 drive d2 plex vm.p0 len 142314496s driveoffset 265s plexoffset 2048s
sd name vm.p0.s3 drive d3 plex vm.p0 len 142314496s driveoffset 265s plexoffset 3072s

based on an initial config file that looks like:
neptune# cat /root/raid5
drive d0 device /dev/da1s1a
drive d1 device /dev/da2s1a
drive d2 device /dev/da3s1a
drive d3 device /dev/da4s1a
volume vm
 plex org raid5 512k
 sd length 0 drive d0
 sd length 0 drive d1
 sd length 0 drive d2
 sd length 0 drive d3
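
As a sanity check on what vinum actually built (just a sketch; note that the
"1024s" stripe in the saved config above is the same 512k, only expressed in
512-byte sectors):

vinum printconfig    # dumps the saved configuration, as quoted above
vinum list           # lists volumes, plexes and subdisks with sizes and state
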
On Tue, 8 Feb 2005, Marc G. Fournier wrote:
I have a Dual-Xeon server with 4G of RAM, with its primary file system 
consisting of 4x73G SCSI drives running RAID5 using vinum ... the operating 
system is currently FreeBSD 4.10-STABLE #1: Fri Oct 22 15:06:55 ADT 2004 ... 
swap usage is 0% (6149) ... and it performs worse than any of my other
servers, and I have less running on it than the other servers ...

I also have HTT disabled on this server ... and softupdates enabled on the 
file system ...

That said ... am I hitting limits of software raid or is there something I 
should be looking at as far as performance is concerned?  Maybe something I 
have misconfigured?



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664
On Tue, 8 Feb 2005, Marc G. Fournier wrote:
On Wed, 9 Feb 2005, Olivier Nicole wrote:
and it performs worse than any of
my other servers, and I have less running on it than the other servers ...
What are your other servers? What RAID system/level?
All servers run RAID5 .. only one other is using vinum, the other 3 are using 
hardware RAID controllers ...


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664

Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664


Re: vinum in 4.x poor performer?

2005-02-08 Thread Olivier Nicole
> All servers run RAID5 .. only one other is using vinum, the other 3 are 
> using hardware RAID controllers ...


Come on, of course a software solution will be slower than a hardware
solution. What would you expect? :))

(Given it is the same disk type/speed/controller...)

Olivier


Re: vinum in 4.x poor performer?

2005-02-08 Thread Marc G. Fournier
On Wed, 9 Feb 2005, Olivier Nicole wrote:
and it performs worse than any of
my other servers, and I have less running on it than the other servers ...
What are your other servers? What RAID system/level?
All servers run RAID5 .. only one other is using vinum, the other 3 are 
using hardware RAID controllers ...


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664


Re: vinum in 4.x poor performer?

2005-02-08 Thread Greg 'groggy' Lehey
On Tuesday,  8 February 2005 at 23:21:54 -0400, Marc G. Fournier wrote:
>
> I have a Dual-Xeon server with 4G of RAM, with its primary file system
> consisting of 4x73G SCSI drives running RAID5 using vinum ... the
> operating system is currently FreeBSD 4.10-STABLE #1: Fri Oct 22 15:06:55
> ADT 2004 ... swap usage is 0% (6149) ... and it performs worse than any of
> my other servers, and I have less running on it than the other servers ...
>
> I also have HTT disabled on this server ... and softupdates enabled on the
> file system ...
>
> That said ... am I hitting limits of software raid or is there something I
> should be looking at as far as performance is concerned?  Maybe something
> I have misconfigured?

Based on what you've said, it's impossible to tell.  Details would be
handy.

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.




Re: vinum in 4.x poor performer?

2005-02-08 Thread Olivier Nicole
> and it performs worse than any of
> my other servers, and I have less running on it than the other servers ...

What are your other servers? What RAID system/level?

Of course a software RAID5 is slower than a plain file system on a
disk.

Olivier


vinum in 4.x poor performer?

2005-02-08 Thread Marc G. Fournier
I have a Dual-Xeon server with 4G of RAM, with its primary file system 
consisting of 4x73G SCSI drives running RAID5 using vinum ... the 
operating system is currently FreeBSD 4.10-STABLE #1: Fri Oct 22 15:06:55 
ADT 2004 ... swap usage is 0% (6149) ... and it performs worse than any of
my other servers, and I have less running on it than the other servers ...

I also have HTT disabled on this server ... and softupdates enabled on the 
file system ...

That said ... am I hitting limits of software raid or is there something I 
should be looking at as far as performance is concerned?  Maybe something 
I have misconfigured?



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664