Re: graid3 - requirements or manpage wrong?

2004-11-26 Thread Brian Szymanski
 That is not completely fair to vinum.

 I've been running vinum now for the better part of 3-4 years, and even with a
 set of very flaky Seagate IDE drives I never lost a byte.
 Vinum has served me well, and I trust gvinum will get there as well.
 I just left my fileserver at 5.1, which I know is not an option for
 everybody.

Are you using vinum RAID-5? I'm considering rolling back to 5.1 myself if
someone attests that things just work there with RAID-5, then waiting for
gvinum to mature before getting my machine back on stable.

Also, when did vinum stop working in favor of gvinum? Was it with 5.3?
Could I expect 5.2.1 to work? Pardon the barrage of questions, but it
would take me hours to test each case, so if anyone knows, drop me a line.
Thanks!

Brian Szymanski
[EMAIL PROTECTED]




Re: graid3 - requirements or manpage wrong?

2004-11-26 Thread Willem Jan Withagen
Brian Szymanski wrote:
That is not completely fair to vinum.
I've been running vinum now for the better part of 3-4 years, and even with a
set of very flaky Seagate IDE drives I never lost a byte.
Vinum has served me well, and I trust gvinum will get there as well.
I just left my fileserver at 5.1, which I know is not an option for
everybody.

Are you using vinum RAID-5? I'm considering rolling back to 5.1 myself if
someone attests that things just work there with RAID-5, then waiting for
gvinum to mature before getting my machine back on stable.
Also, when did vinum stop working in favor of gvinum? Was it with 5.3?
Could I expect 5.2.1 to work? Pardon the barrage of questions, but it
would take me hours to test each case, so if anyone knows, drop me a line.
Thanks!
[~wjw] [EMAIL PROTECTED] uname -a
FreeBSD files.digiware.nl 5.1-RELEASE-p11 FreeBSD 5.1-RELEASE-p11 #3: Sat Dec 20 16:16:35 CET 2003 [EMAIL PROTECTED]:/mnt2/obj/usr/src51/src/sys/GENERIC  i386

[~wjw] [EMAIL PROTECTED] vinum l
4 drives:
D vinumdrive1   State: up   /dev/ad7s1h A: 0/58143 MB (0%)
D vinumdrive0   State: up   /dev/ad6s1h A: 0/58143 MB (0%)
D vinumdrive3   State: up   /dev/ad5s1h A: 0/58143 MB (0%)
D vinumdrive2   State: up   /dev/ad4s1h A: 0/58143 MB (0%)
1 volumes:
V vinum0        State: up   Plexes:   1 Size:    170 GB
1 plexes:
P vinum0.p0  R5 State: up   Subdisks: 4 Size:    170 GB
4 subdisks:
S vinum0.p0.s0  State: up   D: vinumdrive0  Size: 56 GB
S vinum0.p0.s1  State: up   D: vinumdrive1  Size: 56 GB
S vinum0.p0.s2  State: up   D: vinumdrive2  Size: 56 GB
S vinum0.p0.s3  State: up   D: vinumdrive3  Size: 56 GB
Note that this is vinum in its most simple state:
- 4* whole disk in vinum.
- NO root or swap or other complicating issues.
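
For reference, the vinum config that builds a volume like this would look
roughly as follows (a sketch only - the 512 kB stripe size and the subdisk
order are my assumptions, not read back from the listing above):

  # one vinum drive per dedicated disk partition
  drive vinumdrive0 device /dev/ad6s1h
  drive vinumdrive1 device /dev/ad7s1h
  drive vinumdrive2 device /dev/ad4s1h
  drive vinumdrive3 device /dev/ad5s1h
  volume vinum0
    # a single RAID-5 plex across all four drives
    plex org raid5 512k
      # length 0 means: use all remaining space on the drive
      sd length 0 drive vinumdrive0
      sd length 0 drive vinumdrive1
      sd length 0 drive vinumdrive2
      sd length 0 drive vinumdrive3

fed to the tool with: vinum create <configfile>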
This server is only doing one simple thing: NFS and SMB serving. Even Samba is
still way behind, at 2.2.8.
And I have not tried venturing to 5.2.1. I only went to 5.1 on this
box, because it was the last of the Mohicans in my home server park not
running >= 5. So I wanted to get rid of the 4.x tree.
And as you can see I have not tinkered with this box for almost a year.

--WjW


Re: graid3 - requirements or manpage wrong?

2004-11-26 Thread David Gilbert
 Brian == Brian Szymanski [EMAIL PROTECTED] writes:

 That is not completely fair to vinum.

 I've been running vinum now for the better part of 3-4 years, and even
 with a set of very flaky Seagate IDE drives I never lost a byte.
 Vinum has served me well, and I trust gvinum will get there as
 well.  I just left my fileserver at 5.1, which I know is not an
 option for everybody.

Brian Are you using vinum RAID-5? I'm considering rolling back to 5.1
Brian myself if someone attests that things just work there with
Brian RAID-5, then waiting for gvinum to mature before getting my machine
Brian back on stable.

Brian Also, when did vinum stop working in favor of gvinum? Was it
Brian with 5.3?  Could I expect 5.2.1 to work? Pardon the barrage of
Brian questions, but it would take me hours to test each case, so if
Brian anyone knows, drop me a line.  Thanks!

In 5.3, it appears that you can load vinum or gvinum.  Vinum appears
to have the functionality (and bugs) that it had back in 5.1.  The
only missing function seems to be the ability to swap to a vinum
volume.
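
For example (module names assumed here - load one or the other, never both
at once):

  kldload vinum        # the classic vinum, as in 5.1/5.2
  kldload geom_vinum   # the GEOM-based gvinum rewrite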

Dave.

-- 

|David Gilbert, Independent Contractor.   | Two things can only be |
|Mail:   [EMAIL PROTECTED]|  equal if and only if they |
|http://daveg.ca  |   are precisely opposite.  |
=GLO


Re: graid3 - requirements or manpage wrong?

2004-11-26 Thread Brian Szymanski

 Brian == Brian Szymanski [EMAIL PROTECTED] writes:

 That is not completely fair to vinum.

 I've been running vinum now for the better part of 3-4 years, and even
 with a set of very flaky Seagate IDE drives I never lost a byte.
 Vinum has served me well, and I trust gvinum will get there as
 well.  I just left my fileserver at 5.1, which I know is not an
 option for everybody.

 Brian Are you using vinum RAID-5? I'm considering rolling back to 5.1
 Brian myself if someone attests that things just work there with
 Brian RAID-5, then waiting for gvinum to mature before getting my machine
 Brian back on stable.

 Brian Also, when did vinum stop working in favor of gvinum? Was it
 Brian with 5.3?  Could I expect 5.2.1 to work? Pardon the barrage of
 Brian questions, but it would take me hours to test each case, so if
 Brian anyone knows, drop me a line.  Thanks!

 In 5.3, it appears that you can load vinum or gvinum.  Vinum appears
 to have the functionality (and bugs) that it had back in 5.1.  The
 only missing function seems to be the ability to swap to a vinum
 volume.

Actually I experienced a number of bugs with vinum in 5.3 that proved
fatal to a root vinum install (in fact, everything on the second ATA
channel was marked down after every reboot if I recall correctly).

As for the swap: why would you want to do that? It was my understanding
that the kernel load-balances swap requests across drives?

Cheers,
Brian

 Dave.

 --

 |David Gilbert, Independent Contractor.   | Two things can only be |
 |Mail:   [EMAIL PROTECTED]|  equal if and only if they |
 |http://daveg.ca  |   are precisely opposite.  |
 =GLO



-- 
Brian Szymanski
[EMAIL PROTECTED]




Re: graid3 - requirements or manpage wrong?

2004-11-26 Thread Paul Mather
Brian Szymanski wrote:
As for the swap: why would you want to do that? It was my understanding
that the kernel load-balances swap requests across drives?
 

You'd want to do it not for load-balancing but for fault tolerance.  
With a RAID 1/3/5 setup you could have a drive fail and still have 
swapping (and hence the system) continue to work.  That's not the same 
as (or true of) having multiple swap partitions with the system 
balancing load over all of them.
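
A minimal sketch of such a setup with gmirror(8) on 5.3 (the partition names
are assumptions):

  # mirror two swap-sized partitions; swap survives either disk failing
  gmirror label -b round-robin gmswap /dev/ad0s1b /dev/ad1s1b
  swapon /dev/mirror/gmswap

or made permanent with an /etc/fstab line:

  /dev/mirror/gmswap  none  swap  sw  0  0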

Cheers,
Paul.


Re: graid3 - requirements or manpage wrong?

2004-11-25 Thread Vallo Kallaste
On Wed, Nov 24, 2004 at 07:33:55PM +0100, Eirik Øverby
[EMAIL PROTECTED] wrote:

 OK I see, makes sense. So it's not really a raid3 issue, but an 
 implementation issue.
 The only problem then is - gvinum being in a completely unusable state 
 (for raid5 anyway), what are my alternatives? I have four 160GB IDE
 drives, and I want capacity+redundancy. Performance is a non-issue, 
 really. What do I do - in software?

Submit code is the standard answer. Vinum and now gvinum (I have not
tried the latter; your words) have never had a reliable RAID-5
implementation. That is my experience only. Yes, I am frustrated
about the current state of FreeBSD, and because of it I'm forced
to use other OSes, for reliability reasons. For a person who's been
with FreeBSD since 2.0.5 that's a sad future, but nevertheless I'm
unsubscribing from the remaining FreeBSD lists until things
(hopefully) improve, and to save you all from further rants.
-- 
Vallo Kallaste


Re: graid3 - requirements or manpage wrong?

2004-11-25 Thread Willem Jan Withagen
Vallo Kallaste wrote:
On Wed, Nov 24, 2004 at 07:33:55PM +0100, Eirik Øverby
[EMAIL PROTECTED] wrote:

OK I see, makes sense. So it's not really a raid3 issue, but an 
implementation issue.
The only problem then is - gvinum being in a completely unusable state
(for raid5 anyway), what are my alternatives? I have four 160GB IDE
drives, and I want capacity+redundancy. Performance is a non-issue,
really. What do I do - in software?

Submit code is the standard answer. Vinum and now gvinum (I have not
tried the latter; your words) have never had a reliable RAID-5
implementation. That is my experience only. Yes, I am frustrated
about the current state of FreeBSD, and because of it I'm forced
to use other OSes, for reliability reasons. For a person who's been
with FreeBSD since 2.0.5 that's a sad future, but nevertheless I'm
unsubscribing from the remaining FreeBSD lists until things
(hopefully) improve, and to save you all from further rants.
That is not completely fair to vinum.
I've been running vinum now for the better part of 3-4 years, and even with a set
of very flaky Seagate IDE drives I never lost a byte.
Vinum has served me well, and I trust gvinum will get there as well.
I just left my fileserver at 5.1, which I know is not an option for everybody.

--WjW


Re: graid3 - requirements or manpage wrong?

2004-11-25 Thread Brian Szymanski
The only problem then is - gvinum being in a completely unusable state
(for raid5 anyway), what are my alternatives? I have four 160GB IDE
drives, and I want capacity+redundancy. Performance is a non-issue,
really. What do I do - in software?

What's unusable about it? I have 4 250GB ATA drives, desiring capacity +
redundancy, but don't care about speed, much like you, and gvinum RAID-5
has suited me just fine these past few weeks. It eats a lot of system CPU
when there is heavy I/O to the RAID-5, but I've booted up with a drive
unplugged and it worked fine in degraded mode, so I'm content...
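
(If you want to repeat that test, it boils down to something like the
following - I'm assuming gvinum's list output matches classic vinum's:

  gvinum list   # every subdisk should show State: up beforehand
  # shut down, unplug one drive, boot again:
  gvinum list   # the volume stays up with one subdisk down

and the volume keeps serving I/O, just without redundancy until the drive
comes back.)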

 Vinum and now gvinum (I have not tried the latter; your words) have
 never had a reliable RAID-5 implementation. That is my experience only.

This is the first I've heard of such problems. Vinum has served me well
in the past, although I've never used RAID-5 before... If there are known
bugs, I'd appreciate someone sending me a link to where I can read more.

Cheers,
Brian Szymanski
[EMAIL PROTECTED]




Re: graid3 - requirements or manpage wrong?

2004-11-24 Thread Pawel Jakub Dawidek
On Wed, Nov 24, 2004 at 10:54:07AM +0100, Eirik Øverby wrote:
+ to the best of my ability I have been investigating the 'real'
+ requirements of a raid-3 array, and cannot see how the following text
+ from graid3(8) can possibly be correct - and if it is, then the
+ implementation must be wrong or incomplete (emphasis added):
+ 
+ label  Create a RAID3 device.  The last given component will contain
+        parity data, all the rest - regular data.  ***Number of compo-
+        nents has to be equal to 3, 5, 9, 17, etc. (2^n + 1).***
+ 
+ I might be wrong, but I cannot see how a raid-3 array should require
+ (2^n + 1) drives - I am fairly certain I have seen raid-3 arrays
+ consisting of four drives, for example. This is also what I had hoped to
+ accomplish.

This requirement exists because we want the sector size to be a power of 2
(UFS needs it).
In RAID3 we want to send every I/O request to all components at once,
which is why the sector size must be N*512, where N is a power of 2,
AND because graid3 uses one parity component we need N+1 providers.
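
A quick way to see the arithmetic (plain sh; the component counts are only
examples):

  # an (N+1)-component graid3 array exports N * 512-byte sectors
  for n in 2 3 4 8; do
      echo "$((n + 1)) components -> $((n * 512))-byte sectors"
  done
  # 3 components -> 1024-byte sectors  (2^10, fine)
  # 4 components -> 1536-byte sectors  (not a power of 2, UFS says no)
  # 5 components -> 2048-byte sectors  (2^11, fine)
  # 9 components -> 4096-byte sectors  (2^12, fine)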

-- 
Pawel Jakub Dawidek   http://www.FreeBSD.org
[EMAIL PROTECTED]   http://garage.freebsd.pl
FreeBSD committer Am I Evil? Yes, I Am!




Re: graid3 - requirements or manpage wrong?

2004-11-24 Thread Eirik Øverby
On 24. Nov 2004, at 18:11, Pawel Jakub Dawidek wrote:
On Wed, Nov 24, 2004 at 10:54:07AM +0100, Eirik Øverby wrote:
+ to the best of my ability I have been investigating the 'real'
+ requirements of a raid-3 array, and cannot see how the following text
+ from graid3(8) can possibly be correct - and if it is, then the
+ implementation must be wrong or incomplete (emphasis added):
+ 
+ label  Create a RAID3 device.  The last given component will contain
+        parity data, all the rest - regular data.  ***Number of compo-
+        nents has to be equal to 3, 5, 9, 17, etc. (2^n + 1).***
+ 
+ I might be wrong, but I cannot see how a raid-3 array should require
+ (2^n + 1) drives - I am fairly certain I have seen raid-3 arrays
+ consisting of four drives, for example. This is also what I had hoped to
+ accomplish.

This requirement exists because we want the sector size to be a power of 2
(UFS needs it).
In RAID3 we want to send every I/O request to all components at once,
which is why the sector size must be N*512, where N is a power of 2,
AND because graid3 uses one parity component we need N+1 providers.
OK I see, makes sense. So it's not really a raid3 issue, but an 
implementation issue.
The only problem then is - gvinum being in a completely unusable state 
(for raid5 anyway), what are my alternatives? I have four 160GB IDE
drives, and I want capacity+redundancy. Performance is a non-issue, 
really. What do I do - in software?

/Eirik

-- 
Pawel Jakub Dawidek                       http://www.FreeBSD.org
[EMAIL PROTECTED]                           http://garage.freebsd.pl
FreeBSD committer                         Am I Evil? Yes, I Am!
