Re: [zfs-discuss] ZFS + EMC Cx310 Array (JBOD? Or Single MetaLUN?)

2009-05-07 Thread Robert Milkowski
On Wed, 6 May 2009, Miles Nordin wrote: "re" == Richard Elling richard.ell...@gmail.com writes: re> We forget because it is no longer a problem ;-) bug number? re> I think it is disingenuous to compare an enterprise-class RAID re> array with the random collection of hardware on which

Re: [zfs-discuss] ZFS + EMC Cx310 Array (JBOD? Or Single MetaLUN?)

2009-05-07 Thread Robert Milkowski
On Thu, 7 May 2009, Robert Milkowski wrote: On Wed, 6 May 2009, Miles Nordin wrote: "re" == Richard Elling richard.ell...@gmail.com writes: re> We forget because it is no longer a problem ;-) bug number? re> I think it is disingenuous to compare an enterprise-class RAID re> array

Re: [zfs-discuss] Monitoring ZFS host memory use

2009-05-07 Thread Carson Gaspar
Bob Friesenhahn wrote: It seems like this Nagios script is not very useful since the notion of free memory has become antiquated. Not true. The script is simply not intelligent enough. There are really 3 broad kinds of RAM usage: A) Unused B) Unfreeable by the kernel (normal process
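
On Solaris, category C is directly measurable; a minimal sketch of the relevant probes (standard tools, not taken from the thread):

    # ZFS ARC size in bytes (freeable kernel memory, category C)
    kstat -p zfs:0:arcstats:size
    # Truly unused pages (category A); multiply by `pagesize` for bytes
    kstat -p unix:0:system_pages:freemem
    # Kernel-wide breakdown, including ARC and free lists (requires root)
    echo "::memstat" | mdb -k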

Re: [zfs-discuss] Compression/copies on root pool RFE

2009-05-07 Thread Carson Gaspar
Richard Elling wrote: Miles Nordin wrote: AIUI the later BE's are clones of the first, and not all blocks will be rewritten, so it's still an issue. no? In practice, yes, they are clones. But whether it is an issue depends on what the issue is. As I see it, the issue is that someone wants
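
The clone point is easy to see in practice: compression applies only to blocks written after the property is set, so blocks a cloned BE shares with its origin stay uncompressed. A hedged illustration (dataset names assume a standard rpool layout):

    # Enable compression for the boot environments
    zfs set compression=on rpool/ROOT
    # compressratio reflects only data written after the change;
    # blocks inherited from the clone origin keep their on-disk form
    zfs get compression,compressratio rpool/ROOT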

Re: [zfs-discuss] ZFS + EMC Cx310 Array (JBOD? Or Single MetaLUN?)

2009-05-07 Thread Richard Elling
Miles Nordin wrote: "re" == Richard Elling richard.ell...@gmail.com writes: re> We forget because it is no longer a problem ;-) bug number? PSARC 2007/567 re> I think it is disingenuous to compare an enterprise-class RAID re> array with the random collection of

Re: [zfs-discuss] Monitoring ZFS host memory use

2009-05-07 Thread Fajar A. Nugraha
On Wed, May 6, 2009 at 10:17 PM, Richard Elling richard.ell...@gmail.com wrote: Fajar A. Nugraha wrote: IMHO it's probably best to set a limit on ARC size and treat it like any other memory used by applications. There are a few cases where this makes sense, but not many. The ARC will
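
For context, capping the ARC is done with the zfs_arc_max tunable in /etc/system; the 4 GB value below is purely illustrative:

    # /etc/system -- limit the ZFS ARC to 4 GB (value in bytes);
    # takes effect at the next reboot
    set zfs:zfs_arc_max = 0x100000000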

Re: [zfs-discuss] Compression/copies on root pool RFE

2009-05-07 Thread Richard Elling
Carson Gaspar wrote: Richard Elling wrote: Miles Nordin wrote: AIUI the later BE's are clones of the first, and not all blocks will be rewritten, so it's still an issue. no? In practice, yes, they are clones. But whether it is an issue depends on what the issue is. As I see it, the

Re: [zfs-discuss] Monitoring ZFS host memory use

2009-05-07 Thread Richard Elling
Fajar A. Nugraha wrote: On Wed, May 6, 2009 at 10:17 PM, Richard Elling richard.ell...@gmail.com wrote: Fajar A. Nugraha wrote: IMHO it's probably best to set a limit on ARC size and treat it like any other memory used by applications. There are a few cases where this

[zfs-discuss] Anyone willing to sponsor an ARC case for grub2?

2009-05-07 Thread C. Bergström
Hi. I'm not exactly familiar with the ARC/sponsor process, but thought I'd toss this out since Vladimir 'phcoder' Serbinenko mentioned the benefits of his port for grub2. I think by doing some sort of formal process we'll actually get feedback about the best way to move forward. There are

Re: [zfs-discuss] Compression/copies on root pool RFE

2009-05-07 Thread Casper.Dik
I'll call bull* on that. Microsoft has an admirably simple installation and 88% of the market. Apple has another admirably simple installation and 10% of the market. Solaris has less than 1% of the market and has had a very complex installation process. You can't win that battle by increasing

Re: [zfs-discuss] Monitoring ZFS host memory use

2009-05-07 Thread Moore, Joe
Carson Gaspar wrote: Not true. The script is simply not intelligent enough. There are really 3 broad kinds of RAM usage: A) Unused B) Unfreeable by the kernel (normal process memory) C) Freeable by the kernel (buffer cache, ARC, etc.) Monitoring usually should focus on keeping (A+C)
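
A minimal sketch of such a check, in the spirit of the Nagios script under discussion (the kstat names are standard; the 10% floor is an assumed threshold, not from the thread):

    #!/bin/sh
    # Warn when unused (A) plus ARC-freeable (C) memory drops below 10% of RAM
    pgsz=`pagesize`
    kstat -p unix:0:system_pages:freemem unix:0:system_pages:physmem \
          zfs:0:arcstats:size | nawk -v pgsz=$pgsz '
        /freemem/       { free = $2 * pgsz }
        /physmem/       { phys = $2 * pgsz }
        /arcstats:size/ { arc  = $2 }
        END {
            pct = (free + arc) * 100 / phys
            if (pct < 10) { printf("CRITICAL: %d%% reclaimable\n", pct); exit 2 }
            printf("OK: %d%% reclaimable\n", pct)
        }'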

Re: [zfs-discuss] Monitoring ZFS host memory use

2009-05-07 Thread Bob Friesenhahn
On Thu, 7 May 2009, Moore, Joe wrote: Carson Gaspar wrote: Not true. The script is simply not intelligent enough. There are really 3 broad kinds of RAM usage: A) Unused B) Unfreeable by the kernel (normal process memory) C) Freeable by the kernel (buffer cache, ARC, etc.) Monitoring usually

[zfs-discuss] Areca 1160 ZFS

2009-05-07 Thread Gregory Skelton
Hi Everyone, I want to start out by saying ZFS has been a lifesaver to me and to the scientific collaboration I work for. I can't imagine working with the TBs of data that we do without the snapshots or the ease of moving the data from one pool to another. Right now I'm trying to set up a

Re: [zfs-discuss] ZFS + EMC Cx310 Array (JBOD? Or Single MetaLUN?)

2009-05-07 Thread Miles Nordin
"re" == Richard Elling richard.ell...@gmail.com writes: re> PSARC 2007/567 oh, failmode? We were not talking about panics. We're talking about corrupted pools. Many of the systems in bugs related to this PSARC are not even using a SAN and are not reporting problems similar to the one I
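
For reference, PSARC 2007/567 introduced the per-pool failmode property, which governs behavior on catastrophic device loss rather than block-level corruption; that distinction is the point being argued. The property itself (pool name hypothetical):

    # wait (default) blocks I/O until the device returns,
    # continue returns EIO to new writes, panic crashes the host
    zpool set failmode=continue tank
    zpool get failmode tank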

Re: [zfs-discuss] Areca 1160 ZFS

2009-05-07 Thread Mike Gerdts
On Thu, May 7, 2009 at 3:29 PM, Gregory Skelton gskel...@gravity.phys.uwm.edu wrote: Hi Everyone, I want to start out by saying ZFS has been a life saver to me, and the scientific collaboration I work for. I can't imagine working with the TB's of data that we do, without the snapshots or the

Re: [zfs-discuss] Areca 1160 ZFS

2009-05-07 Thread Mark J Musante
On Thu, 7 May 2009, Mike Gerdts wrote: Perhaps you have changed the configuration of the array since the last reconfiguration boot. If you run devfsadm and then run format, does it see more disks? Another thing to check is whether the controller has a JBOD mode as opposed to passthrough.
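
Spelling out the suggested check (flags are standard devfsadm options):

    # Rebuild /dev device links and prune stale entries, verbosely
    devfsadm -Cv
    # Non-interactive disk listing; compare against what the array exports
    echo | format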

Re: [zfs-discuss] Areca 1160 ZFS

2009-05-07 Thread milosz
With pass-through disks on Areca controllers you have to set the LUN ID (I believe) using the volume command. When you issue a volume info, your disk IDs should look like this (if you want Solaris to see the disks): 0/1/0 0/2/0 0/3/0 0/4/0 etc. The middle part there (again, I think that's

Re: [zfs-discuss] Areca 1160 ZFS

2009-05-07 Thread Gregory Skelton
Thanks for all your help, changing the mode from RAID to JBOD did the trick. I was hoping to have a RAID 1+0 for the OS, but I guess with Areca it's all or nothing. Cheers, Gregory On Fri, 8 May 2009, James C. McPherson wrote: On Thu, 07 May 2009 16:59:01 -0400 milosz mew...@gmail.com
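
Worth noting for the RAID 1+0 wish: with the controller in JBOD mode, OS redundancy can come from ZFS itself by mirroring the root pool. A sketch with hypothetical device names (x86; installgrub makes the second disk bootable):

    # Attach a second disk to turn the root pool into a mirror
    zpool attach rpool c1t0d0s0 c1t1d0s0
    # Install the boot blocks on the new half of the mirror
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0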