[zfs-discuss] Dell PERC H200: drive failed to power up

2012-05-16 Thread Sašo Kiselkov
Hi, I'm getting weird errors while trying to install OpenIndiana 151a on a Dell R715 with a PERC H200 (based on an LSI SAS2008). Any time the OS tries to access the drives (for whatever reason), I get this dumped into syslog: genunix: WARNING: Device

Re: [zfs-discuss] Dell PERC H200: drive failed to power up

2012-05-16 Thread Sašo Kiselkov
On 05/16/2012 09:45 AM, Koopmann, Jan-Peter wrote: Hi, are those DELL branded WD disks? DELL tends to manipulate the firmware of the drives so that power handling with Solaris fails. If this is the case here: Easiest way to make it work is to modify /kernel/drv/sd.conf and add an entry
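For reference, the kind of entry being suggested looks roughly like this (a sketch only: the inquiry string must match the actual drive, e.g. as reported by iostat -En, and the power-condition property assumes a reasonably recent sd driver):

    # /kernel/drv/sd.conf -- vendor ID padded to 8 characters, then product ID
    sd-config-list = "ATA     WDC WD1003FBYX", "power-condition:false";

After editing, the sd driver has to re-read its configuration (update_drv -vf sd) or the machine has to be rebooted.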

[zfs-discuss] ZFS and zpool for NetApp FC LUNs

2012-05-16 Thread Bruce McGill
Hi All, I have FC LUNs from NetApp FAS 3240 mapped to two SPARC T4-2 clustered nodes running Veritas Cluster Server software. For now, the configuration on NetApp is as follows: /vol/EBUSApp/EBUSApp 100G online MBSUN04 : 0 MBSUN05 : 0
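Not from the original mail, but as a hedged illustration of what a ZFS layout on such a LUN can look like once it is visible to Solaris as an MPxIO device (the device, pool and dataset names below are made up):

    # the NetApp LUN shows up as a single scsi_vhci device (hypothetical name)
    zpool create ebuspool c0t60A98000433469764E4A413759474D43d0
    zfs create ebuspool/EBUSApp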

Re: [zfs-discuss] Migration of a Thumper to bigger HDDs

2012-05-16 Thread Joerg Schilling
Jim Klimov jimkli...@cos.ru wrote: We know that large redundancy is highly recommended for big HDDs, so in-place autoexpansion of the raidz1 pool onto 3Tb disks is out of the question. Before I started to use my thumper, I reconfigured it to use RAID-Z2. This allows me to just replace disks
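As a sketch of the layout Joerg describes (disk names are hypothetical): a double-parity raidz2 vdev lets single disks be swapped out one at a time without ever dropping below redundancy.

    zpool create thumperpool raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0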

Re: [zfs-discuss] ZFS and zpool for NetApp FC LUNs

2012-05-16 Thread Hung-sheng Tsao
IMHO, just use the whole VCS/VxVM/VxFS stack. Sent from my iPhone On May 16, 2012, at 5:20 AM, Bruce McGill brucemcgill@gmail.com wrote: Hi All, I have FC LUNs from NetApp FAS 3240 mapped to two SPARC T4-2 clustered nodes running Veritas Cluster Server software. For now, the

Re: [zfs-discuss] ZFS and zpool for NetApp FC LUNs

2012-05-16 Thread Brian Wilson
Hi Bruce, My opinions and two cents are inline. Take them with appropriate amounts of salt ;) On 05/16/12 04:20 AM, Bruce McGill wrote: Hi All, I have FC LUNs from NetApp FAS 3240 mapped to two SPARC T4-2 clustered nodes running Veritas Cluster Server software. For now, the configuration

[zfs-discuss] Hard Drive Choice Question

2012-05-16 Thread Paul Kraus
I have a small server at home (HP Proliant Micro N36) that I use for file, DNS, DHCP, etc. services. I currently have a zpool of four mirrored 1 TB Seagate ES2 SATA drives. Well, it was a zpool of four until last night when one of the drives died. ZFS did its job and all the data is still OK.

Re: [zfs-discuss] Dell PERC H200: drive failed to power up

2012-05-16 Thread Sašo Kiselkov
On 05/16/2012 10:17 AM, Koopmann, Jan-Peter wrote: One thing came up while trying this - I'm on a text install image system, so my / is a ramdisk. Any ideas how I can change the sd.conf on the USB disk or reload the driver configuration on the fly? I tried looking for the file on the USB
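For the archive: the usual way to have the sd driver re-read sd.conf without a reboot is update_drv, assuming the file being edited is the one the running (ramdisk) root actually uses:

    # edit the copy the live kernel sees
    vi /kernel/drv/sd.conf
    # force the sd driver to re-read its .conf file
    update_drv -vf sd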

Re: [zfs-discuss] Migration of a Thumper to bigger HDDs

2012-05-16 Thread Jim Klimov
2012-05-16 6:18, Bob Friesenhahn wrote: You forgot IDEA #6 where you take advantage of the fact that zfs can be told to use sparse files as partitions. This is rather like your IDEA #3 but does not require that disks be partitioned. This is somewhat the method of making missing devices when
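The sparse-file trick looks roughly like this (sizes, paths and pool names are hypothetical); mkfile -n allocates no blocks up front, so the fake "disk" costs nothing until data actually lands on it:

    # create a ~3 TB sparse file and use it as a temporary vdev
    mkfile -n 3072g /bigfs/fakedisk0
    zpool create scratch raidz1 /bigfs/fakedisk0 c1t1d0 c1t2d0
    # later, swap a real disk in for the placeholder
    zpool replace scratch /bigfs/fakedisk0 c1t3d0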

Re: [zfs-discuss] Migration of a Thumper to bigger HDDs

2012-05-16 Thread Jim Klimov
2012-05-16 13:30, Joerg Schilling wrote: Jim Klimov jimkli...@cos.ru wrote: We know that large redundancy is highly recommended for big HDDs, so in-place autoexpansion of the raidz1 pool onto 3Tb disks is out of the question. Before I started to use my thumper, I reconfigured it to use

Re: [zfs-discuss] Migration of a Thumper to bigger HDDs

2012-05-16 Thread bofh
On Wed, May 16, 2012 at 1:45 PM, Jim Klimov jimkli...@cos.ru wrote: Your idea actually evolved for me into another (#7?), which is simple and apparent enough to be ingenious ;) DO use the partitions, but split the 2.73Tb drives into a roughly 2.5Tb partition followed by a 250Gb partition of
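A rough sketch of how such a split could be used, assuming each 3 TB drive has already been relabeled (e.g. with format(1M)) into a ~2.5 TB slice 0 and a ~250 GB slice 1, and that the small slices are meant to stand in for the old pool's same-sized vdevs while the large slices form a new pool (all device and pool names are made up):

    # small slice stands in for an old-pool vdev of the same size
    zpool replace oldpool c0t1d0 c1t0d0s1
    # large slices form a new pool alongside it
    zpool create newpool raidz2 c1t0d0s0 c1t1d0s0 c1t2d0s0 c1t3d0s0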

Re: [zfs-discuss] Migration of a Thumper to bigger HDDs

2012-05-16 Thread Jim Klimov
Hello fellow BOFH, I also went by that title in a previous life ;) 2012-05-16 21:58, bofh wrote: Err, why go to all that trouble? Replace one disk per pool. Wait for resilver to finish. Replace next disk. Once all/enough disks have been replaced, turn on autoexpand, and you're done. As
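Spelled out as commands, the sequence bofh describes is just the following (device names hypothetical):

    # repeat for each disk, waiting for the resilver to finish in between
    zpool replace tank c0t0d0 c1t0d0
    zpool status tank        # wait for "resilver completed" before the next swap
    # once every disk in the vdev is on the larger drives:
    zpool set autoexpand=on tank
    zpool list tank          # the extra capacity shows up here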

Re: [zfs-discuss] Migration of a Thumper to bigger HDDs

2012-05-16 Thread bofh
There's something going on then. I have 7x 3TB disk at home, in raidz3, so about 12TB usable. 2.5TB actually used. Scrubbing takes about 2.5 hours. I had done the resilvering as well, and that did not take 15 hours/drive. Copying 3TBs onto 2.5 SATA drives did take more than a day, but a 2.5

Re: [zfs-discuss] Migration of a Thumper to bigger HDDs

2012-05-16 Thread Bob Friesenhahn
On Wed, 16 May 2012, Jim Klimov wrote: Your idea actually evolved for me into another (#7?), which is simple and apparent enough to be ingenious ;) DO use the partitions, but split the 2.73Tb drives into a roughly 2.5Tb partition followed by a 250Gb partition of the same size as vdevs of the

Re: [zfs-discuss] Migration of a Thumper to bigger HDDs

2012-05-16 Thread Joerg Schilling
bofh goodb...@gmail.com wrote: There's something going on then. I have 7x 3TB disk at home, in raidz3, so about 12TB usable. 2.5TB actually used. Scrubbing takes about 2.5 hours. I had done the resilvering as well, and that did not take 15 hours/drive. Copying 3TBs onto 2.5 SATA drives

Re: [zfs-discuss] Migration of a Thumper to bigger HDDs

2012-05-16 Thread Jim Klimov
2012-05-16 22:21, bofh wrote: There's something going on then. I have 7x 3TB disk at home, in raidz3, so about 12TB usable. 2.5TB actually used. Scrubbing takes about 2.5 hours. I had done the resilvering as well, and that did not take 15 hours/drive. That is the critical moment ;) The

[zfs-discuss] zfs_arc_max values

2012-05-16 Thread Paynter, Richard
Does anyone know what the minimum value for zfs_arc_max should be set to? Does it depend on the amount of memory on the system, and - if so - is there a formula, or percentage, to use to determine what the minimum value is? Thanks Richard Paynter

Re: [zfs-discuss] zfs_arc_max values

2012-05-16 Thread Richard Elling
On May 16, 2012, at 12:35 PM, Paynter, Richard richard.payn...@infocrossing.com wrote: Does anyone know what the minimum value for zfs_arc_max should be set to? Does it depend on the amount of memory on the system, and – if so – is there a formula, or percentage, to use to determine what
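For completeness, the usual way zfs_arc_max is pinned on Solaris/illumos is an /etc/system entry; the 4 GB value below is purely illustrative, not a recommended minimum:

    * /etc/system -- cap the ARC at 4 GB (value is in bytes)
    set zfs:zfs_arc_max = 0x100000000

The current value can be inspected live with mdb -k, but what a sensible floor is really depends on how much memory the rest of the workload needs.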

Re: [zfs-discuss] Migration of a Thumper to bigger HDDs

2012-05-16 Thread Jim Klimov
2012-05-15 19:17, casper@oracle.com wrote: Your old release of Solaris (nearly three years old) doesn't support disks over 2TB, I would think. (A 3TB is 3E12, the 2TB limit is 2^41 and the difference is around 800Gb) While this was proven correct by my initial experiments, it seems that
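The arithmetic in the parenthetical checks out, taking the marketing 3 TB as 3x10^12 bytes:

    $ echo '2^41' | bc
    2199023255552
    $ echo '3 * 10^12 - 2^41' | bc
    800976744448        # roughly 800 GB over the 2^41-byte limit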