James Carlson <[email protected]> writes:

> Harry Putnam writes:
>> I wanted to partition and slice up my disk to be able to work with
>> raidz.  Maybe that isn't really important and all the same stuff
>> applies to  any zpool.  I don't know yet... I'm just getting
>> started. 

> I'm not sure that's a wise idea in the first place.  A RAID volume
> tries to spread the I/O load across the members.  If you do that with
> multiple partitions, you're naturally going to get the lowest possible
> performance -- causing the disk head to bounce back and forth between
> partitions on each access.
>

Thanks for the detailed info about how RAID volumes work, and the snafu
that might arise from partitioning... good info.

I must not have made myself clear.  

But first, the same thing could be said about the point Brian raised.
It's probably not the best use of zfs to have several boot OSes on a
laptop and partition up the disk to allow that.  But as in my case, it
is a way for someone to experiment with zfs and raidz with limited
resources... that is, 1 disk or only a partial disk.

In my case I plan to use whole disks once I kind of get the idea of
how to work with zfs and raidz.  So whatever I do to the partitioned
disk, and however slow it makes zfs, the experiment is aimed at
learning the zfs commands and manipulations before putting any
valuable data on the server.

For now it is an expendable experiment.
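For that kind of expendable experiment, there is an even cheaper route
than slicing up the disk: ZFS will accept plain files as vdevs.  A
rough sketch, assuming a Solaris/OpenSolaris host with ZFS installed
(the /var/tmp/zfstest path and pool name "testpool" are just
placeholders I made up):

```shell
# Scratch directory for file-backed vdevs (hypothetical location)
mkdir -p /var/tmp/zfstest

# mkfile (Solaris) creates fixed-size files; 128m each is plenty
# of room above ZFS's minimum vdev size for practice purposes
for f in d1 d2 d3; do
    mkfile 128m /var/tmp/zfstest/$f
done

# Build a raidz1 pool out of the three files and inspect it
zpool create testpool raidz1 \
    /var/tmp/zfstest/d1 /var/tmp/zfstest/d2 /var/tmp/zfstest/d3
zpool status testpool

# Practice scrubs, offlining/replacing a member, etc., then:
zpool destroy testpool
```

Performance is of course meaningless this way (same head-thrashing
problem James describes, and worse), but for learning the commands it
costs nothing and risks nothing.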

> It's "redundant array of inexpensive *disks*," not "partitions."  As
> Dave said, your best bet is to feed ZFS a whole disk, and let it do as
> it will with it.

That is something I realized going in, although not with the kind of
knowledge you have on the subject.

Once I kind of get it, that disk will be reformatted back into 1 disk,
and several more bought and installed.  I also have to get a
recognized sata controller into the mix to make use of the full sata
disks I already have on hand.  At that point I'll have 3 IDE and 2
SATA disks, 1200GB in total, to put into a raidz1 configuration.
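When that hardware is in place, the whole-disk pool is a one-liner.  A
sketch with made-up device names (run `format` to see the real ones on
your controller; pool name "tank" is arbitrary):

```shell
# Five whole disks into a single raidz1 vdev.  Device names below
# are placeholders; IDE and SATA disks will enumerate differently.
zpool create tank raidz1 c0d0 c0d1 c1d0 c2t0d0 c2t1d0

# Verify the layout
zpool status tank
```

One caveat worth knowing up front: raidz1 capacity is (N-1) times the
*smallest* member, so mixing IDE and SATA disks of different sizes
means the usable space will come out below the (5-1)/5 of 1200GB you
might expect if the drives aren't all the same size.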

_______________________________________________
opensolaris-discuss mailing list
[email protected]
