Re: FreeBSD Software RAID

2009-05-27 Thread Matthew Seaman

Gary Gatten wrote:

What about with PAE and/or other extension schemes?


Doesn't help with the KVM requirement, and still only provides a 4GB address
space for any single process.


If it's just memory requirements, can I assume if I don't have a $hit
load of storage and billions of files it will work ok with 4GB of RAM?
I guess I'm just making sure there isn't some bug that only exists on
the i386 architecture?


ZFS should work on i386.  As far as I know there aren't any killer bugs that
are architecture specific, but I'm no expert. Unless your aim is to learn about
ZFS I personally wouldn't bother with it on an i386 system: you'll almost
certainly get a lot better performance and a lot less grief out of UFS under
those conditions.

Cheers,

Matthew

--
Dr Matthew J Seaman MA, D.Phil.               7 Priory Courtyard, Flat 3
PGP: http://www.infracaninophile.co.uk/pgpkey Ramsgate, Kent, CT11 9PW





Re: FreeBSD Software RAID

2009-05-27 Thread Wojciech Puchar

> I really don't have any hard data on ZFS performance relative to UFS + geom.

so please test yourself :)


Re: FreeBSD Software RAID

2009-05-27 Thread Wojciech Puchar

> ZFS should work on i386.  As far as I know there aren't any killer bugs that
> are architecture specific, but I'm no expert. Unless your aim is to learn

Unless someone assumes that pointers are 4 bytes and writes that into a C
program, the code will work as well in 64-bit mode as in 32-bit mode.



Re: FreeBSD Software RAID

2009-05-27 Thread Kirk Strauser
On Wednesday 27 May 2009 09:52:42 am Wojciech Puchar wrote:
  ZFS should work on i386.  As far as I know there aren't any killer bugs
  that are architecture specific, but I'm no expert. Unless your aim is to
  learn

 unless someone assumes that pointers are 4 bytes and writes that into a C
 program, the code will work as well in 64-bit mode as in 32-bit mode.

Wojciech, I have to ask: are you actually a programmer or are you repeating 
things you've read elsewhere?  I can think of a whole list of reasons why code 
written to target a 64-bit system would be non-trivial to port to 32-bit, 
particularly if performance is an issue.
-- 
Kirk Strauser


Re: FreeBSD Software RAID

2009-05-27 Thread Wojciech Puchar

> > ... the code will work as well in 64-bit mode as in 32-bit mode.
>
> Wojciech, I have to ask: are you actually a programmer or are you repeating

yes i am. if you are interested: i have written programs for x86, ARM
(ARM7TDMI), MIPS32 (4Kc), and once for alpha. I have quite good knowledge
of ARM and MIPS assembly; my x86 assembly is quite outdated, as i wrote my
last assembly program when the 486 was a new CPU.

> things you've read elsewhere?

you have probably mistaken me for some other people on this list who do
that.

If you read my posts on this list (and maybe others) you know that the
last thing i do is repeat well-known and popular opinions :)

> I can think of a whole list of reasons why code
> written to target a 64-bit system would be non-trivial to port to 32-bit,

are you talking about performance, or about whether it works at all?

i have already written a lot of programs, and after moving to 64-bit
(amd64) only one wasn't working right after recompiling - because i had
assumed that a pointer is 4 bytes long.

do you have any other examples of code non-portability between amd64 and
i386?

I say between amd64 and i386 because there are more issues with other
archs, where for example unaligned memory access is not allowed.
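
For illustration, a minimal C sketch (an invented example, not code from the
thread) of the kind of pointer-size assumption that breaks:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int x = 42;
    int *p = &x;

    /* WRONG: assumes a pointer fits in 32 bits.  On i386 this happens to
     * work; on amd64 the cast truncates the upper 32 bits, so the
     * round-trip pointer is garbage and dereferencing it is undefined. */
    uint32_t truncated = (uint32_t)(uintptr_t)p;
    int *back = (int *)(uintptr_t)truncated;

    /* CORRECT: uintptr_t is defined to be wide enough for any pointer. */
    uintptr_t ok = (uintptr_t)p;

    /* prints 4 on i386, 8 on amd64 */
    printf("sizeof(void *) = %zu\n", sizeof(void *));
    (void)back; (void)ok;
    return 0;
}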




Re: FreeBSD Software RAID

2009-05-27 Thread Kirk Strauser
On Wednesday 27 May 2009 11:40:51 am Wojciech Puchar wrote:

 are you talking about performance, or about whether it works at all?

Both, really.  If they have to code up macros to support identical operations 
(such as addition) on both platforms, and accidentally forget to use the macro 
in some place, then voila: untested code.
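
A hypothetical sketch (mine, not ZFS's actual macros) of what such an
operation macro could look like, and of how forgetting it goes wrong:

#include <stdint.h>

/* Hypothetical: a 64-bit counter carried as two 32-bit halves on a
 * platform without native 64-bit arithmetic. */
typedef struct { uint32_t hi, lo; } u64_pair;

/* Every addition must go through the macro so the carry is propagated. */
#define U64_ADD(dst, a, b)                                          \
    do {                                                            \
        uint32_t _lo = (a).lo + (b).lo;                             \
        (dst).hi = (a).hi + (b).hi + (_lo < (a).lo); /* carry in */ \
        (dst).lo = _lo;                                             \
    } while (0)

/* Forget the macro once - e.g. write dst.lo = a.lo + b.lo directly -
 * and it still compiles, but the carry into the high word is silently
 * lost: exactly the "untested code" failure described above. */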

 do you have any other examples of code non-portability between amd64 and
 i386?

You're also forgetting that this isn't high-level programming where you get to 
lean on a cross-platform libc or similar.  This is literally interfacing with 
the hardware, and there are a whole boatload of subtle incompatibilities when 
handling stuff at that level.
-- 
Kirk Strauser


Re: FreeBSD Software RAID

2009-05-27 Thread Wojciech Puchar



> are you talking about performance, or about whether it works at all?
>
> Both, really.  If they have to code up macros to support identical operations

OK, talking about performance:

- 64-bit addition/subtraction on a 32-bit computer: 2 instructions instead
of one (ADD+ADC)

- 64-bit NOT, XOR, AND, OR, compare/test etc. - 2 instead of one

- multiply - depends on the machine; something like 7-8 times longer (4
multiplies plus additions) to do a 64-bit x 64-bit multiply.

But how often do you multiply 2 longs in C? Actually VERY rarely.

the only exception i can think of now is RSA/DSA asymmetric key generation
and processing.

- every operation on 32-bit or smaller values - the same
- all branching - the same
- external memory access - depends on the chipset/CPU, not the mode - the same

now do

cc -O2 -S <some C program>

and look at the resulting assembly output to see how much performance could
really be gained.
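
For example (an invented snippet, not from the thread), feed this to
cc -O2 -S on both archs and compare the output:

#include <stdint.h>

/* On amd64 this compiles to a single 64-bit ADD (plus the return);
 * on i386 the compiler emits ADD for the low words followed by ADC
 * (add-with-carry) for the high words - two instructions for one
 * C-level operation, as described above. */
uint64_t add64(uint64_t a, uint64_t b)
{
    return a + b;
}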



about checksumming in ZFS - it could be much faster on a 64-bit arch, if
only memory speed and latency weren't the limit.  but they are, so any
performance difference in that case would be rather marginal.
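
For reference, ZFS's default checksums are fletcher-style sums; a rough
sketch of such a loop (my approximation, not the ZFS source) shows why it
tends to be memory-bound:

#include <stdint.h>
#include <stddef.h>

/* Fletcher4-style checksum: one sequential pass over the buffer with a
 * few 64-bit additions per 32-bit word.  The arithmetic is trivial, so
 * once the buffer no longer fits in cache the loop is waiting on RAM,
 * not on the ALU - wider registers alone buy little here. */
void fletcher4(const uint32_t *data, size_t nwords, uint64_t sum[4])
{
    uint64_t a = 0, b = 0, c = 0, d = 0;

    for (size_t i = 0; i < nwords; i++) {
        a += data[i];
        b += a;
        c += b;
        d += c;
    }
    sum[0] = a; sum[1] = b; sum[2] = c; sum[3] = d;
}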



> (such as addition) on both platforms, and accidentally forget to use the macro
> in some place, then voila: untested code.
>
> > do you have any other examples of code non-portability between amd64 and
> > i386?
>
> You're also forgetting that this isn't high-level programming where you get to
> lean on a cross-platform libc or similar.  This is literally interfacing with
> the hardware, and there are a whole boatload of subtle incompatibilities when
> handling stuff at that level.

we were talking about C code. if not - please be more specific, as i don't
understand what you are talking about.

and no - ZFS does not sit at the hardware-interface level; it doesn't talk
directly to hardware.


Re: FreeBSD Software RAID

2009-05-27 Thread David Kelly
On Wed, May 27, 2009 at 11:52:33AM -0500, Kirk Strauser wrote:
 On Wednesday 27 May 2009 11:40:51 am Wojciech Puchar wrote:
 
  you talk about performance or if it work at all?
 
 Both, really.  If they have to code up macros to support identical
 operations (such as addition) on both platforms, and accidentally
 forget to use the macro in some place, then voila: untested code.

I haven't looked at the ZFS code but this sort of thing is exactly why
all code I write uses int8_t, int16_t, int32_t, uint8_t, ... even when
the first thing I have to do with a new compiler is to work out the
proper typedefs to create them.
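
For example (a sketch under the assumption of a pre-C99 compiler with a
plain ILP32 data model; C99 compilers ship these in <stdint.h>):

/* Worked out once per platform; these choices fit a typical ILP32
 * compiler where char/short/int are 8/16/32 bits: */
typedef signed char      int8_t;
typedef unsigned char    uint8_t;
typedef short            int16_t;
typedef unsigned short   uint16_t;
typedef int              int32_t;
typedef unsigned int     uint32_t;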

-- 
David Kelly N4HHE, dke...@hiwaay.net

Whom computers would destroy, they must first drive mad.


Re: FreeBSD Software RAID

2009-05-27 Thread Wojciech Puchar

> I haven't looked at the ZFS code but this sort of thing is exactly why
> all code I write uses int8_t, int16_t, int32_t, uint8_t, ... even when
> the first thing I have to do with a new compiler is to work out the
> proper typedefs to create them.

int, short and char are portable; only other things must be defined this
way.

int8_t, int16_t etc. is just unneeded work. anyway - it's just defines,
having no effect on the compiled code and its performance.



Re: FreeBSD Software RAID

2009-05-27 Thread David Kelly
On Wed, May 27, 2009 at 09:24:17PM +0200, Wojciech Puchar wrote:
 I haven't looked at the ZFS code but this sort of thing is exactly why
 all code I write uses int8_t, int16_t, int32_t, uint8_t, ... even when
 the first thing I have to do with a new compiler is to work out the
 proper typedefs to create them.
 
 int, short and char are portable, only other things must be defined this 
 way.

No, they are not portable. int is 16 bits on many systems I work with.
char is sometimes signed, sometimes not. uint8_t is never signed and
always unambiguous.

 int8_t, int16_t etc. is just unneeded work. anyway - it's just defines,
 having no effect on the compiled code and its performance.

No, they are not just defines, I said typedef. Typedef is subject to
stricter checking by the compiler.

Packing and alignment in structs is a big portability problem.
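
A small invented example of the struct problem he is pointing at:

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

struct record {
    uint8_t  tag;    /* 1 byte, followed by padding...            */
    uint64_t value;  /* ...whose size depends on the ABI: 64-bit  */
                     /* integers are 4-byte aligned in structs on */
                     /* i386 but 8-byte aligned on amd64.         */
};

int main(void)
{
    /* Prints 12/4 on i386 but 16/8 on amd64 - so writing this struct
     * to disk on one arch and reading it back on the other breaks. */
    printf("sizeof = %zu, offsetof(value) = %zu\n",
           sizeof(struct record), offsetof(struct record, value));
    return 0;
}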

-- 
David Kelly N4HHE, dke...@hiwaay.net

Whom computers would destroy, they must first drive mad.


Re: FreeBSD Software RAID

2009-05-27 Thread Roland Smith
On Wed, May 27, 2009 at 09:24:17PM +0200, Wojciech Puchar wrote:
  I haven't looked at the ZFS code but this sort of thing is exactly why
  all code I write uses int8_t, int16_t, int32_t, uint8_t, ... even when
  the first thing I have to do with a new compiler is to work out the
  proper typedefs to create them.
 
 int, short and char are portable, 

Not completely, at least as far as C is concerned. I'd say that char and
long are portable, but not short and int.

According to K&R (and I don't think this has changed in later
standards), a char is defined as one byte. Short, int and long can vary
but short and int must be at least 16 bits, and a long must be at least
32 bits. Additionally a short may not be longer than an int which may
not be longer than a long. But the size of an int depends on hardware
platform and compiler data model.
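
Those guarantees can be written down directly; a sketch (mine, using the
later C11 _Static_assert):

#include <limits.h>

/* The minimum ranges the C standard actually promises: */
_Static_assert(sizeof(char) == 1,               "char is one byte");
_Static_assert(sizeof(short) * CHAR_BIT >= 16,  "short: at least 16 bits");
_Static_assert(sizeof(int)   * CHAR_BIT >= 16,  "int: at least 16 bits");
_Static_assert(sizeof(long)  * CHAR_BIT >= 32,  "long: at least 32 bits");
_Static_assert(sizeof(short) <= sizeof(int) &&
               sizeof(int)   <= sizeof(long),   "short <= int <= long");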

Roland
-- 
R.F.Smith   http://www.xs4all.nl/~rsmith/
[plain text _non-HTML_ PGP/GnuPG encrypted/signed email much appreciated]
pgp: 1A2B 477F 9970 BA3C 2914  B7CE 1277 EFB0 C321 A725 (KeyID: C321A725)




Re: FreeBSD Software RAID

2009-05-26 Thread Howard Jones
Wojciech Puchar wrote:
 you are right. you can't be happy about a warm house without getting really
 cold some time :)

 that's why it's excellent that ZFS (and a few other things) is included
 in FreeBSD but is COMPLETELY optional.

Well, I switched from the heater that doesn't work and is poorly
documented (gvinum) to the one that does and is (zfs, albeit mostly
documented by Sun), and so far I am warm :-)

Once I'd increased kmem, at least. I did get a panic before that, but
now I am shuffling data happily and slightly faster than gvinum did, and
memory has levelled off at about 160MB for zfs. I'll be keeping my
previous hardware RAID in one piece for a little while though, I think,
just in case! (old Adaptec card with a 2TB limit on containers).


Re: FreeBSD Software RAID

2009-05-26 Thread Steve Bertrand
Howard Jones wrote:
[quoted message trimmed]

I moved my AMANDA tapeless backup system to ZFS well over a year ago.
It's got four 500GB SATA drives.

At first, it would panic frequently sometime during the backup. The
backups peak at ~400Mbps of network traffic. I adopted the following
script to write out the memory usage during the backup, so I could
better tune the system (sorry, I can't recall where I found this code snip):

#!/bin/sh

# Kernel text: sum the sizes (column 4, hex) of all modules reported by
# kldstat; "16i" puts dc(1) into base-16 input mode for the addition.
TEXT=`/sbin/kldstat | /usr/bin/awk 'BEGIN {print "16i 0";} NR>1 \
{print toupper($4) "+"} END {print "p"}' | dc`

# Kernel malloc arenas: total the KiB counts from vmstat -m, times 1024.
DATA=`/usr/bin/vmstat -m | sed -Ee \
'1s/.*/0/;s/.* ([0-9]+)K.*/\1+/;$s/$/1024*p/' | dc`

TOTAL=$((DATA + TEXT))
DATE=`/bin/date | awk '{print $4}'`

# Append "HH:MM:SS <MiB in use>" to the log.
/bin/echo $DATE `/bin/echo $TOTAL | \
/usr/bin/awk '{print $1/1048576}'` >> /home/steve/mem.usage

Cronned every minute, I'd end up with a file like this:

19:16:01 500.205
19:17:02 485.699
19:18:01 474.305
19:19:01 473.265
19:20:01 471.874
19:21:02 471.94

...the next day, I'd be able to review this file to see what the memory
 usage was at the time of the panic/reboot.

I found that:

vm.kmem_size=1536M
vm.kmem_size_max=1536M

made the system extremely stable, and since then:

amanda# uptime
 9:01AM  up 81 days, 17:06,

I'm about to upgrade the system to -STABLE today...

Steve




Re: FreeBSD Software RAID

2009-05-26 Thread Adam Vande More
Sweet, thanks for the info.  Building one of those boxes is next on the list.

On 5/26/09, Steve Bertrand st...@ibctech.ca wrote:
[quoted message trimmed]



-- 
Adam Vande More
Systems Administrator
Mobility Sales


Re: FreeBSD Software RAID

2009-05-26 Thread Kirk Strauser
On Monday 25 May 2009 08:57:48 am Howard Jones wrote:

 I was half-considering switching to ZFS, but the most positive thing I
 could find written about that (as implemented on FreeBSD) is that it
 doesn't crash that much, so perhaps not. That was from a while ago
 though.

Wojciech hates it for some reason, but I wouldn't let that deter you.  I'm 
using ZFS on several production machines now and it's been beautifully solid 
the whole time.  It has several huge advantages over UFS:

  - Filesystem sizes are dynamic.  They all grow and shrink inside the same 
pool, so you don't have to worry about making one too large or too small.

  - You can sort of think of a ZFS filesystem as a directory with a set of 
configurable, inheritable attributes.  Set your /usr/ports to use compression, 
and tell /home to keep two copies of everything for safety's sake.

  - Snapshots aren't painful.

It's been 100% reliable on every amd64 machine I've put it on (but avoid it on 
x86!).  7-STABLE hasn't required any tuning since February or so.

UFS and gstripe/gmirror/graid* are good, but ZFS has spoiled me and I won't be 
going back.
-- 
Kirk Strauser


RE: FreeBSD Software RAID

2009-05-26 Thread Gary Gatten
Why avoid ZFS on x86?

[quoted message trimmed]


Re: FreeBSD Software RAID

2009-05-26 Thread cpghost
On Tue, May 26, 2009 at 01:15:41PM -0500, Gary Gatten wrote:
 Why avoid ZFS on x86?

That's because ZFS works best with huge amounts of (kernel) RAM, and
32-bit i386 doesn't provide enough addressing space.

Btw, I've tried ZFS on two FreeBSD/amd64 test machines with 8GB and
16GB of RAM, and it looks very promising. I wouldn't put it on
production servers yet, but will eventually, once FreeBSD's ZFS
integration matures and stabilizes.

-cpghost.

-- 
Cordula's Web. http://www.cordula.ws/


Re: FreeBSD Software RAID

2009-05-26 Thread Matthew Seaman

Gary Gatten wrote:

Why avoid ZFS on x86?


Because in order to deal most effectively with disk arrays of 100s or 1000s
of GB as are typical nowadays, ZFS requires more than the 4GB of addressable
RAM[*] that the i386 arch can provide.

You can make ZFS work on i386, but it requires very careful tuning and is not
going to work brilliantly well for particularly large or high-throughput
filesystems.

Cheers,

Matthew

[*] Technically, it requires more than the typical 2GB of kernel memory that
is the default on i386.  KVM under 64bit architectures can be *much* bigger
than that.

--
Dr Matthew J Seaman MA, D.Phil.               7 Priory Courtyard, Flat 3
PGP: http://www.infracaninophile.co.uk/pgpkey Ramsgate, Kent, CT11 9PW





RE: FreeBSD Software RAID

2009-05-26 Thread Gary Gatten
What about with PAE and/or other extension schemes?

If it's just memory requirements, can I assume if I don't have a $hit
load of storage and billions of files it will work ok with 4GB of RAM?
I guess I'm just making sure there isn't some bug that only exists on
the i386 architecture?



[quoted message trimmed]


Re: FreeBSD Software RAID

2009-05-26 Thread Kirk Strauser
On Tuesday 26 May 2009 01:44:51 pm Gary Gatten wrote:
 What about with PAE and/or other extension schemes?

 If it's just memory requirements, can I assume if I don't have a $hit
 load of storage and billions of files it will work ok with 4GB of RAM?
 I guess I'm just making sure there isn't some bug that only exists on
 the i386 architecture?

My understanding is that it's much more than just the memory addressing.  
ZFS is thoroughly 64-bit and uses 64-bit math pervasively.  That means you 
have to emulate all those operations with 2 32-bit values, and on the 
register-starved x86 platform you end up with absolutely horrible performance.  
Furthermore, it's just not that well tested.  Sun designed ZFS for 64-bit 
systems and I think 32-bit support was pretty much an afterthought.
-- 
Kirk Strauser


Re: FreeBSD Software RAID

2009-05-26 Thread Wojciech Puchar

> Wojciech hates it for some reason, but I wouldn't let that deter you.  I'm

the reason == incredibly low performance.

of course, if you have an overmuscled CPU not much used for anything else -
it may not be a problem.



RE: FreeBSD Software RAID

2009-05-26 Thread Gary Gatten
10-4, thanks!

[quoted message trimmed]


RE: FreeBSD Software RAID

2009-05-26 Thread Wojciech Puchar

> - Filesystem sizes are dynamic.  They all grow and shrink inside the same
> pool, so you don't have to worry about making one too large or too small.

there are actually almost no separate filesystems - just one filesystem
with many upper-level descriptors and a separate per-filesystem quota.

just to make happy those who like to have a separate filesystem for many
things.

i always make one filesystem for /, unless it's a multiple-disk config and
i want some data to be physically on a different drive - for example a
highly loaded squid cache.



Re: FreeBSD Software RAID

2009-05-26 Thread Wojciech Puchar

> You can make ZFS work on i386, but it requires very careful tuning and is not
> going to work brilliantly well for particularly large or high-throughput
> filesystems.

you mean high transfer rates, like reading/writing huge files? anyway, not
faster than properly configured UFS plus maybe gstripe/gmirror.

for small files it's only fast when they fit in the cache - same as UFS.


RE: FreeBSD Software RAID

2009-05-26 Thread Wojciech Puchar

> ZFS is thoroughly 64-bit and uses 64-bit math pervasively.  That means
> you have to emulate all those operations with 2 32-bit values, and on the
> register-starved x86 platform you end up with absolutely horrible
> performance.

no, this difference isn't that great. it doesn't use much less CPU on the
same processor with i386 and amd64 kernels - i checked it.

no precise measurements, but there is no more than a 20% performance
difference - comparable to most programs run in i386 and amd64 mode.

so no horrible performance on i386 - or, if you prefer, always horrible
performance no matter what CPU mode.

while the x86 architecture doesn't have many registers -
EAX,EBX,ECX,EDX,ESI,EDI,EBP,ESP, 8 total (+EIP) - it doesn't affect
programs that much, as all modern x86 processors perform memory-operand
instructions in a single cycle (or more than one of them per cycle).

anyway, the extra 8 registers and PC-relative addressing are very useful;
this roughly 20% performance difference is because of them.

if you mean the gain from 64-bit registers when calculating block checksums
in ZFS - that is for sure memory-bandwidth and latency limited, not
CPU-power limited.



Re: FreeBSD Software RAID

2009-05-26 Thread Matthew Seaman

Wojciech Puchar wrote:
> > You can make ZFS work on i386, but it requires very careful tuning and is
> > not going to work brilliantly well for particularly large or
> > high-throughput filesystems.
>
> you mean high transfer rates, like reading/writing huge files? anyway, not
> faster than properly configured UFS plus maybe gstripe/gmirror.


I mean high-throughput, as in bytes-per-second.  Whether that consists of a
very large number of small files or fewer larger ones is pretty much immaterial.


> for small files it's only fast when they fit in the cache - same as UFS.


For any files, it's a lot faster when they can be served out of cache.  That's
true for any filesystem.  It's only when you get beyond the capacity of your
caches that things get interesting.

I really don't have any hard data on ZFS performance relative to UFS + geom.
However my feeling is that UFS will win at small scales, but that ZFS will
close the gap as the scale increases, and that ZFS is the clear winner when
you consider things other than direct performance -- manageability, resilience
to hardware failure or disk errors, etc.  Of course, small scale (ie. about
the same size as a single drive) is hundreds of GB nowadays, and growing.

Cheers,

Matthew

--
Dr Matthew J Seaman MA, D.Phil.               7 Priory Courtyard, Flat 3
PGP: http://www.infracaninophile.co.uk/pgpkey Ramsgate, Kent, CT11 9PW





Re: FreeBSD Software RAID

2009-05-25 Thread Mister Olli
Hi, 

I remember building a RAID5 on gvinum with 3 500GB hard drives some
months ago, and it took horribly long to initialize the raid5 (several
hours).

It seems to be a one-time job: since the raid finished its
initialization, the machine starts up/reboots within normal times.

The documentation is a sore point, yes ;-)
I also got my basic know-how about gvinum and raid-1 from a blog, and
could read on from there with what I needed using the man pages. but it
was hard..

Regards
---
Mr. Olli


On Mon, 2009-05-25 at 14:57 +0100, Howard Jones wrote:
 Hi,
 
 Can anyone with experience of software RAID point me in the right
 direction please? I've used gmirror before with no trouble, but nothing
 fancier.
 
 I have a set of brand new 1TB drives, a Sil3124 SATA card and a FreeBSD
 7.1-p4 system.
 
 I created a RAID 5 set with gvinum:
 drive d0 device /dev/ad4s1a
 drive d1 device /dev/ad6s1a
 drive d2 device /dev/ad8s1a
 drive d3 device /dev/ad10s1a
 volume jumbo
 plex org raid5 256k
 sd drive d0
 sd drive d1
 sd drive d2
 sd drive d3
 
 and it shows as up and happy. If I reboot, all the subdisks show as
 stale, and so the plex is down. It seems to be doing a rebuild, although
 it wasn't before, and would newfs, mount and accept data onto the new
 plex before the reboot.
 
 Is there any way to avoid having to wait while gvinum apparently
 calculates the parity on all those zeroes?
 
 Am I missing some step to 'liven up' the plex before the first reboot?
 (loader.conf has the correct line to load gvinum at boot) I tried again,
 with 'gvinum start jumbo' before rebooting, and that made no difference.
 
 Also is the configuration file format actually documented anywhere? I
 got that example from someone's blog, but the gvinum manpage doesn't
 mention the format at all! It *does* have a few pages dedicated to
 things that don't work, which was handy... :-) The handbook is still
 talking about ccd and vinum, and mostly covers the complications of
 booting of such a device.
 
 On the subject of documentation, I'm also assuming that this:
 S jumbo.p0.s2   State: I 1% D: d2   Size: 931 GB
 means it's 1% through initialising, because the states or the output of
 'list' aren't described in the manual either.
 
 I was half-considering switching to ZFS, but the most positive thing I
 could find written about that (as implemented on FreeBSD) is that it
 doesn't crash that much, so perhaps not. That was from a while ago though.
 
 Does anyone use software RAID5 (or RAIDZ) for data they care about?
 
 Cheers,
 
 Howie



RE: FreeBSD Software RAID

2009-05-25 Thread Graeme Dargie


[quoted message trimmed]


I have been running ZFS RAIDZ for 5 months on a 7.1 amd64 install, and I
have to say my experience has been mostly good. Initially I had an issue
with a pci sata card causing drives to disconnect, but after investing in
a new motherboard with 6 sata ports everything has been smooth. I did have
to replace a disk last week as it was showing checksum, read and write
errors. ZFS rebuilt 2TB of data in around 5 hours and did not lose any
files at all.

Regards

Graeme



Re: FreeBSD Software RAID

2009-05-25 Thread Valentin Bud
On Mon, May 25, 2009 at 7:30 PM, Graeme Dargie a...@tangerine-army.co.uk wrote:



 [quoted message trimmed]


I have been using ZFS for about half a year. I just have mirroring with 2
drives. Never had a problem with it. I would go with ZFS in the future too.
And yes, the server is in production and it has all sorts of important data.

a great day,
v


-- 
network warrior since 2005


Re: FreeBSD Software RAID

2009-05-25 Thread Wojciech Puchar

i use gmirror, but i once tried gvinum and it didn't work well.

i think: simply use mirroring. ZFS will introduce 100 times more problems
than it solves



Re: FreeBSD Software RAID

2009-05-25 Thread David Kelly
On Mon, May 25, 2009 at 07:37:59PM +0300, Valentin Bud wrote:
 On Mon, May 25, 2009 at 7:30 PM, Graeme Dargie 
 a...@tangerine-army.co.ukwrote:
 
  Can anyone with experience of software RAID point me in the right
  direction please? I've used gmirror before with no trouble, but nothing
  fancier.

[76 lines trimmed]

 I have been using ZFS for about half a year. I just have mirroring
 with 2 drives. Never had a problem with it. I would go with ZFS in the
 future too. And yes, the server is in production and it has all sorts of
 important data.

I have looked at ZFS recently. Appears to be a memory hog, needs about 1
GB especially if large file transfers may occur over gigabit ethernet
to/from other machines.

-- 
David Kelly N4HHE, dke...@hiwaay.net

Whom computers would destroy, they must first drive mad.


Re: FreeBSD Software RAID

2009-05-25 Thread Wojciech Puchar


> I have looked at ZFS recently. Appears to be a memory hog, needs about 1
> GB especially if large file transfers may occur over gigabit ethernet

while it CAN be set up on a 256MB machine with a few flags in loader.conf
(it should be autotuned anyway) - it generally takes as much memory as is
available, and LOTS of CPU power.

with similar operations ZFS takes 10-20 TIMES more CPU than UFS, and it's
NOT faster than properly configured UFS. doesn't make any sense



RE: FreeBSD Software RAID

2009-05-25 Thread Graeme Dargie


[quoted message trimmed]

Ok, granted, this is a server sat in my house and it is not a mission
critical server in a large business; personally I can live with ZFS
taking a bit longer vs the resilience. Just looking at my system at the
moment, I have 1.8GB of free ram from a total of 4GB.


Regards 

Graeme



Re: FreeBSD Software RAID

2009-05-25 Thread David Kelly
On Mon, May 25, 2009 at 07:09:15PM +0200, Wojciech Puchar wrote:
 
 I have looked at ZFS recently. Appears to be a memory hog, needs
 about 1 GB especially if large file transfers may occur over gigabit
 ethernet

 while it CAN be set up on a 256MB machine with a few flags in
 loader.conf (it should be autotuned anyway) - it generally takes as much
 memory as is available, and LOTS of CPU power.
 
 with similar operations ZFS takes 10-20 TIMES more CPU than UFS, and
 it's NOT faster than properly configured UFS. doesn't make any sense

It makes a certain degree of sense. Sometimes things have to be done
wrong for us to realize how good we had it before. How would we know how
great FreeBSD is if we didn't have Linux? I had to look at ZFS to decide
not to use it when I rebuild my storage this week due to a failing
drive.

-- 
David Kelly N4HHE, dke...@hiwaay.net

Whom computers would destroy, they must first drive mad.


RE: FreeBSD Software RAID

2009-05-25 Thread Wojciech Puchar

> Ok, granted, this is a server sat in my house and it is not a mission
> critical server in a large business; personally I can live with ZFS
> taking a bit longer vs the resilience.

simply gmirror and UFS give the same. much simpler, much faster.

but of course lots of people like to make their life harder


Re: FreeBSD Software RAID

2009-05-25 Thread Wojciech Puchar

> It makes a certain degree of sense. Sometimes things have to be done
> wrong for us to realize how good we had it before. How would we know how
> great FreeBSD is if we didn't have Linux? I had to look at ZFS to decide
> not to use it when I rebuild my storage this week due to a failing
> drive.

you are right. you can't be happy about a warm house without getting really
cold some time :)

that's why it's excellent that ZFS (and a few other things) is included in
FreeBSD but is COMPLETELY optional.




RE: FreeBSD Software RAID

2009-05-25 Thread Graeme Dargie


[quoted message trimmed]

No, I am not making life harder at all ... I have 6x500GB hard disks I
want in a good solid raid 5 type configuration. So you are somewhat wide
of the mark in your assumptions.





RE: FreeBSD Software RAID

2009-05-25 Thread Wojciech Puchar

> but of course lots of people like to make their life harder
>
> No, I am not making life harder at all ... I have 6x500GB hard disks I
> want in a good solid raid 5 type configuration. So you are somewhat wide
> of the mark in your assumptions.

that's a reason. just don't forget that RAID-Z is MUCH closer to RAID3 than
to RAID5: every block is spread across all the data drives, so you get the
random-access speed of a single drive, just with higher sequential transfer.



Re: FreeBSD - software raid

2003-06-19 Thread Bill Moran
Moti Levy wrote:
> Hi,
> before I do the unthinkable and use Linux for a server, I ask for your
> help.
> I have a set of 4 IDE drives.
> I need to build a file server that'll run samba/nfs

I've done this.  Works very well.

> I want to use all 4 drives as a raid 5 array and use the combined space
> for storage.
> is there a way to do it with FreeBSD?
> I looked at ccd and vinum but as far as my understanding goes, I can't
> use them during setup, but rather have to build the system first and then
> use ccd/vinum on all BUT the system disk.
> am I wrong?

Yes.

> is there a solution out there?

You can use all four drives in your Vinum array.  There used to be a
restriction that the root partition could not be part of the Vinum array,
but that's no longer the case, see the Vinum section of the handbook:
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/vinum-vinum.html
And specifically this:
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/vinum-root.html
--
Bill Moran
Potential Technologies
http://www.potentialtech.com


Re: FreeBSD - software raid

2003-06-19 Thread Moti Levy
well,
it says:

For this to be possible at all, the following requirements must be met for
the root volume:

  a. The root volume must not be striped or RAID-5.

  b. The root volume must not contain more than one concatenated subdisk
     per plex.

and i want:

  I want to use all 4 drives as a raid 5 array and use the combined space
  for storage.

thanks for your reply.

please feel free to correct me again if i'm wrong

Moti



[quoted message trimmed]






Re: FreeBSD - software raid

2003-06-19 Thread Bill Moran
Moti Levy wrote:
> well,
> it says:
> For this to be possible at all, the following requirements must be met
> for the root volume:
>   a. The root volume must not be striped or RAID-5.
>   b. The root volume must not contain more than one concatenated subdisk
>      per plex.
> and i want:
> > I want to use all 4 drives as a raid 5 array and use the combined space

Could you please configure your mailer not to break off lines like this.  It
makes your messages very difficult to read.

In answer to your question, I pose another question:

Do you really need so much space on the root partition that you must stripe
it?  Why not just mirror the root partition and then stripe the remaining
partitions?  My root partition is only 200M, and that's more than I've ever
needed.

If you ABSOLUTELY MUST stripe the root partition, then you are correct, you
can't do it.  Buy a hardware raid card.

I can't see any logical reason for such a requirement, however.
[quoted message trimmed]


--
Bill Moran
Potential Technologies
http://www.potentialtech.com