Re: [zfs-discuss] zfs with PERC 6/i card?

2009-03-04 Thread Brian Hechinger
On Wed, Mar 04, 2009 at 10:59:04AM +1100, Julius Roberts wrote:
 
  However I would expect that if you could present 8 raid0 luns to
  the host then that should be at least a decent config to start
  using for ZFS.
 
 I can confirm that we are doing that here (with 3 drives) and it's
 been fine for almost a year now.

Jules, (or anyone who knows the answer)

Even though it probably doesn't matter much since you only have a single
disk in each raid0, what did you set the PERC's stripe size to?  (I can't
recall exactly what terminology the card uses for it, and I don't have a
PERC in front of me to check right now.)  I've got two raid0 volumes with
two disks in each, and I was wondering what stripe width at the hardware
level would work best with ZFS.
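
To make the question concrete, suppose the two raid0 LUNs end up mirrored
by ZFS, and the interaction that matters is between the PERC's stripe
element size and ZFS's 128K default recordsize.  Roughly (device names
below are placeholders, not my actual ones):

  # let ZFS handle redundancy across the two hardware raid0 LUNs
  zpool create tank mirror c1t0d0 c1t1d0

  # recordsize defaults to 128K; presumably the PERC stripe element
  # should divide evenly into this, which is really what I'm asking
  zfs get recordsize tank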

Thanks!

-brian
-- 
Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix. -- IRC User (http://www.bash.org/?841435)


Re: [zfs-discuss] zfs with PERC 6/i card?

2009-03-04 Thread Sriram Narayanan
On Wed, Mar 4, 2009 at 5:29 AM, Julius Roberts hooliowobb...@gmail.com wrote:
 I would like to hear if anyone is using ZFS with this card and how you set
 it up, and what, if any, issues you've had with that set up.

 However I would expect that if you could present 8 raid0 luns to
 the host then that should be at least a decent config to start
 using for ZFS.

 I can confirm that we are doing that here (with 3 drives) and it's
 been fine for almost a year now.


I've done exactly this myself.

I have two 200 GB SATA disks for the OS, and four 146 GB SAS disks for
the data pool. I've just downloaded and installed Nexenta.
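
For what it's worth, a four-disk data pool like this can go either raidz
or two mirrored pairs; a rough sketch of both (device names are
illustrative only, not necessarily what I used):

  # option 1: single raidz vdev across the four SAS disks
  zpool create data raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0

  # option 2: two mirrored pairs (better random IOPS, less capacity)
  # zpool create data mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0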

-- Sriram


Re: [zfs-discuss] zfs with PERC 6/i card?

2009-03-04 Thread Julius Roberts
2009/3/5 Brian Hechinger wo...@4amlunch.net:
 Even though it probably really doesn't matter since you only have a single
 disk in each raid0, what did you set the PERC's stripe size to?  (I can't
 think of what terminology is actually used for it and don't have a PERC in
 front of me to check on currently) I've got two raid0 volumes with two disks
 in each, and I was wondering what stripe width at the hardware level would
 work best with ZFS.

Hi mate, to be honest, I don't know.  I just left it at the default :)

-- 
Kind regards, Jules


Re: [zfs-discuss] zfs with PERC 6/i card?

2009-03-03 Thread James C. McPherson
On Tue, 03 Mar 2009 09:50:51 -0800
Kristin Amundsen avatarofsl...@gmail.com wrote:

 I am trying to set up OpenSolaris on a Dell 2950 III that has 8 SAS drives
 connected to a PERC 6/i card.  I am wondering what the best way is to
 configure the RAID in the BIOS for ZFS.
 
 Part of the problem is that there seems to be some confusion inside Dell
 as to what can be done with the card.  Their tech support suggested making
 the 8 drives show up as 8 raid0 devices.  I researched online to see if I
 could find anyone doing that, and the only person I found indicated there
 were issues with needing to reboot the machine because the controller
 would take drives totally offline when there were problems.  The sales rep
 I have been working with said the card can be configured with a no-raid
 option.  I am not sure why tech support did not know about this (I spent a
 long time talking with them about whether we could turn RAID off on the
 machine).  I could not find anyone talking about running ZFS on a system
 configured this way.
 
 I would like to hear if anyone is using ZFS with this card and how you set
 it up, and what, if any, issues you've had with that set up.


Gday Kristin,
I didn't specifically test ZFS with this card when I was 
making the changes for 

6712499 mpt should identify and report Dell SAS6/iR family of controllers

However I would expect that if you could present 8 raid0 luns to 
the host then that should be at least a decent config to start
using for ZFS. 
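
As a sketch of what I mean -- with eight single-disk raid0 LUNs visible to
the host, you would hand them all to ZFS and let it provide the redundancy
(the device names below are made up):

  # e.g. double-parity raidz2 across the eight LUNs
  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
      c1t4d0 c1t5d0 c1t6d0 c1t7d0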


James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog


Re: [zfs-discuss] zfs with PERC 6/i card?

2009-03-03 Thread Julius Roberts
 I would like to hear if anyone is using ZFS with this card and how you set
 it up, and what, if any, issues you've had with that set up.

 However I would expect that if you could present 8 raid0 luns to
 the host then that should be at least a decent config to start
 using for ZFS.

I can confirm that we are doing that here (with 3 drives) and it's
been fine for almost a year now.

Jules


Re: [zfs-discuss] zfs with PERC 6/i card?

2009-03-03 Thread Bryant Eadon

Julius Roberts wrote:
  I would like to hear if anyone is using ZFS with this card and how you set
  it up, and what, if any, issues you've had with that set up.

  However I would expect that if you could present 8 raid0 luns to
  the host then that should be at least a decent config to start
  using for ZFS.

 I can confirm that we are doing that here (with 3 drives) and it's
 been fine for almost a year now.

I've accomplished that with a similar RAID card by setting each drive to
'JBOD' mode in the RAID BIOS; this properly presents individual devices to
the OS so ZFS can do its thing.  One thing to note, however, is that if you
remove a drive and reboot without replacing it, the device names may shift
forward by one, causing havoc with a ZFS pool.
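
If that does happen, an export/import cycle is usually enough to sort it
out, since ZFS locates its disks by the labels it wrote rather than by
device name (pool name below is just an example):

  # re-scan the devices and reassemble the pool from its on-disk labels
  zpool export tank
  zpool import tank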


Additionally, depending on the OS, be careful about attaching removable
storage at boot, as this may also shift the device names -- I've personally
experienced it on FreeBSD 7.1 with a USB stick attached during reboot.  The
stick happened to take the name of the first drive on this RAID device.
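
On FreeBSD the usual workaround is to put GEOM labels on the disks and
build the pool on those instead of the raw da/ad names, so renumbering at
boot no longer matters (label and device names below are examples):

  # write a label to each disk, then reference it via /dev/label/...
  glabel label disk0 /dev/da0
  glabel label disk1 /dev/da1
  zpool create tank mirror label/disk0 label/disk1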



-Bryant