[zfs-discuss] Monitoring ZFS host memory use

2009-05-06 Thread Troy Nancarrow (MEL)
Hi, Please forgive me if my searching-fu has failed me in this case, but I've been unable to find any information on how people are going about monitoring and alerting regarding memory usage on Solaris hosts using ZFS. The problem is not that the ZFS ARC is using up the memory, but that the

Re: [zfs-discuss] Compression/copies on root pool RFE

2009-05-06 Thread Casper . Dik
On Tue, May 5, 2009 at 6:09 PM, Ellis, Mike mike.el...@fmr.com wrote: PS: At one point the old JumpStart code was encumbered, and the community wasn't able to assist. I haven't looked at the next-gen jumpstart framework that was delivered as part of the OpenSolaris SPARC preview. Can anyone

Re: [zfs-discuss] Monitoring ZFS host memory use

2009-05-06 Thread Fajar A. Nugraha
On Wed, May 6, 2009 at 1:08 PM, Troy Nancarrow (MEL) troy.nancar...@foxtel.com.au wrote: So how are others monitoring memory usage on ZFS servers? I think you can get the amount of memory zfs arc uses with arcstat.pl. http://www.solarisinternals.com/wiki/index.php/Arcstat IMHO it's probably

Re: [zfs-discuss] Monitoring ZFS host memory use

2009-05-06 Thread Thomas Maier-Komor
Troy Nancarrow (MEL) schrieb: Hi, Please forgive me if my searching-fu has failed me in this case, but I've been unable to find any information on how people are going about monitoring and alerting regarding memory usage on Solaris hosts using ZFS. The problem is not that the ZFS ARC is

Re: [zfs-discuss] Compression/copies on root pool RFE

2009-05-06 Thread Richard Elling
Ellis, Mike wrote: How about a generic zfs options field in the JumpStart profile? (essentially an area where options can be specified that are all applied to the boot-pool (with provisions to deal with a broken-out-var)) We had this discussion a while back and, IIRC, it was expected that
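The generic options field Mike proposes might look like the profile sketch below. The `pool` and `bootenv` lines follow the existing ZFS-root JumpStart keywords; the `zfs_options` keyword is hypothetical, a sketch of the RFE under discussion rather than actual JumpStart syntax:

```
install_type    initial_install
# Existing ZFS-root JumpStart keywords: pool name, sizes, vdev layout,
# and a broken-out /var dataset in the new boot environment
pool            rpool auto auto auto mirror c0t0d0s0 c0t1d0s0
bootenv         installbe bename myBE dataset /var
# Hypothetical keyword (not real JumpStart syntax): options applied
# wholesale to the boot pool's root dataset at install time
zfs_options     compression=on copies=2
```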

Re: [zfs-discuss] SAS 15K drives as L2ARC

2009-05-06 Thread Richard Elling
Roger Solano wrote: Hello, Does it make any sense to use a bunch of 15K SAS drives as L2ARC cache for several TBs of SATA disks? For example: A STK2540 storage array with this configuration: * Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs. Alternatively, you can purchase non-Sun 500

Re: [zfs-discuss] Compression/copies on root pool RFE

2009-05-06 Thread Rich Teer
On Wed, 6 May 2009, Richard Elling wrote: popular interactive installers much more simplified. I agree that interactive installation needs to remain as simple as possible. How about offering a choice at installation time: Custom or default? Those that don't want/need the interactive

Re: [zfs-discuss] Monitoring ZFS host memory use

2009-05-06 Thread Richard Elling
Fajar A. Nugraha wrote: On Wed, May 6, 2009 at 1:08 PM, Troy Nancarrow (MEL) troy.nancar...@foxtel.com.au wrote: So how are others monitoring memory usage on ZFS servers? I think you can get the amount of memory zfs arc uses with arcstat.pl.

Re: [zfs-discuss] Monitoring ZFS host memory use

2009-05-06 Thread Bob Friesenhahn
On Wed, 6 May 2009, Troy Nancarrow (MEL) wrote: Please forgive me if my searching-fu has failed me in this case, but I've been unable to find any information on how people are going about monitoring and alerting regarding memory usage on Solaris hosts using ZFS. The problem is not that the ZFS

Re: [zfs-discuss] Compression/copies on root pool RFE

2009-05-06 Thread Lori Alt
This sounds like a good idea to me, but it should be brought up on the caiman-disc...@opensolaris.org mailing list, since this is not just, or even primarily, a zfs issue. Lori Rich Teer wrote: On Wed, 6 May 2009, Richard Elling wrote: popular interactive installers much more simplified.

Re: [zfs-discuss] Compression/copies on root pool RFE

2009-05-06 Thread Blake
On Wed, May 6, 2009 at 11:14 AM, Rich Teer rich.t...@rite-group.com wrote: On Wed, 6 May 2009, Richard Elling wrote: popular interactive installers much more simplified. I agree that interactive installation needs to remain as simple as possible. How about offering a choice at installation

Re: [zfs-discuss] Monitoring ZFS host memory use

2009-05-06 Thread Richard Elling
Bob Friesenhahn wrote: On Wed, 6 May 2009, Troy Nancarrow (MEL) wrote: Please forgive me if my searching-fu has failed me in this case, but I've been unable to find any information on how people are going about monitoring and alerting regarding memory usage on Solaris hosts using ZFS. The

Re: [zfs-discuss] Monitoring ZFS host memory use

2009-05-06 Thread Bob Friesenhahn
On Wed, 6 May 2009, Richard Elling wrote: Memory is meant to be used. 96% RAM use is good since it represents an effective use of your investment. Actually, I think a percentage of RAM is a bogus metric to measure. For example, on a 2TBytes system, you would be wasting 80 GBytes. Perhaps
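Bob's arithmetic checks out if 2 TBytes is read as 2000 GBytes; a fixed utilization percentage implies an ever-larger absolute amount of idle RAM as systems grow:

```python
# Bob's point: "percent of RAM used" is a poor alert metric, because the
# absolute amount of idle memory a fixed percentage implies scales with
# system size. Figures below match the thread's example.
def idle_gb(total_gb, used_pct):
    """GBytes left unused at a given utilization percentage."""
    return total_gb * (100 - used_pct) / 100

# 96% use of a 2 TByte (~2000 GByte) system leaves 80 GBytes idle.
print(idle_gb(2000, 96))  # -> 80.0
```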

Re: [zfs-discuss] Compression/copies on root pool RFE

2009-05-06 Thread Miles Nordin
re == Richard Elling richard.ell...@gmail.com writes: re Note: in the Caiman world, this is only an issue for the first re BE. Later BEs can easily have other policies. -- richard AIUI the later BE's are clones of the first, and not all blocks will be rewritten, so it's still an

Re: [zfs-discuss] Monitoring ZFS host memory use

2009-05-06 Thread Paul Choi
Ben Rockwood's written a very useful util called arc_summary: http://www.cuddletech.com/blog/pivot/entry.php?id=979 It's really good for looking at ARC usage (including memory usage). You might be able to make some guesses based on kstat -n zfs_file_data and kstat -n zfs_file_data_buf. Look for
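On Solaris, `kstat -p zfs:0:arcstats:size` prints the ARC's current size in bytes in parsable name/value form. A minimal parser for that output might look like the sketch below; the sample line is hardcoded for illustration, since the kstat itself exists only on a Solaris host (where you would capture it via `subprocess`):

```python
# Parse the output of `kstat -p zfs:0:arcstats:size` into a byte count.
# The sample line is illustrative; on a real Solaris host you would read it
# from: subprocess.run(["kstat", "-p", "zfs:0:arcstats:size"], ...).
sample = "zfs:0:arcstats:size\t6442450944\n"

def arc_size_bytes(kstat_line):
    name, value = kstat_line.split()
    if not name.endswith(":arcstats:size"):
        raise ValueError("unexpected kstat: " + name)
    return int(value)

print(arc_size_bytes(sample) / 2**30)  # -> 6.0 (GiB)
```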

Re: [zfs-discuss] Compression/copies on root pool RFE

2009-05-06 Thread Mike Gerdts
On Wed, May 6, 2009 at 2:54 AM, casper@sun.com wrote: On Tue, May 5, 2009 at 6:09 PM, Ellis, Mike mike.el...@fmr.com wrote: PS: At one point the old JumpStart code was encumbered, and the community wasn't able to assist. I haven't looked at the next-gen jumpstart framework that was

Re: [zfs-discuss] SAS 15K drives as L2ARC

2009-05-06 Thread Scott Lawson
Roger Solano wrote: Hello, Does it make any sense to use a bunch of 15K SAS drives as L2ARC cache for several TBs of SATA disks? For example: A STK2540 storage array with this configuration: * Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs. * Tray 2: Twelve (12) 1 TB @ 7200 SATA

Re: [zfs-discuss] Compression/copies on root pool RFE

2009-05-06 Thread Richard Elling
Miles Nordin wrote: re == Richard Elling richard.ell...@gmail.com writes: re Note: in the Caiman world, this is only an issue for the first re BE. Later BEs can easily have other policies. -- richard AIUI the later BE's are clones of the first, and not all blocks will

Re: [zfs-discuss] SAS 15K drives as L2ARC

2009-05-06 Thread Bob Friesenhahn
On Thu, 7 May 2009, Scott Lawson wrote: A STK2540 storage array with this configuration: * Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs. * Tray 2: Twelve (12) 1 TB @ 7200 SATA HDDs. Just thought I would point out that these are hardware backed RAID arrays. You might be better off using

Re: [zfs-discuss] ZFS + EMC Cx310 Array (JBOD ? Or Singe MetaLUN ?)

2009-05-06 Thread Richard Elling
Miles Nordin wrote: djm == Darren J Moffat darr...@opensolaris.org writes: djm If you only present a single lun to ZFS it may not be able to djm repair any detected errors. And also the problems with pools becoming corrupt and unimportable, especially when the SAN reboots

Re: [zfs-discuss] ZFS + EMC Cx310 Array (JBOD ? Or Singe MetaLUN ?)

2009-05-06 Thread Miles Nordin
re == Richard Elling richard.ell...@gmail.com writes: re We forget because it is no longer a problem ;-) bug number? re I think it is disingenuous to compare an enterprise-class RAID re array with the random collection of hardware on which Solaris re runs. compare with a

Re: [zfs-discuss] SAS 15K drives as L2ARC

2009-05-06 Thread Scott Lawson
Bob Friesenhahn wrote: On Thu, 7 May 2009, Scott Lawson wrote: A STK2540 storage array with this configuration: * Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs. * Tray 2: Twelve (12) 1 TB @ 7200 SATA HDDs. Just thought I would point out that these are hardware backed RAID arrays. You

Re: [zfs-discuss] SAS 15K drives as L2ARC

2009-05-06 Thread Bob Friesenhahn
On Thu, 7 May 2009, Scott Lawson wrote: Something nice about the STK2540 solution is that if the server system dies, the STK2540s can quickly be swung over to another system via a quick 'zpool import'. Sure, provided they have it attached to a fibre channel switch or have a nice long fibre lead.
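The swing-over Scott describes would, in outline, be the following; pool and host roles here are illustrative:

```shell
# On the original host, if it is still reachable:
zpool export tank

# On the standby host, once the STK2540's LUNs are visible to it:
zpool import tank

# If the original host died without exporting, force the import:
zpool import -f tank
```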

Re: [zfs-discuss] SAS 15K drives as L2ARC

2009-05-06 Thread David Magda
On May 6, 2009, at 20:46, Bob Friesenhahn wrote: After all this discussion, I am not sure if anyone adequately answered the original poster's question as to whether a 2540 with SAS 15K drives would provide substantial synchronous write throughput improvement when used as an L2ARC device.

Re: [zfs-discuss] SAS 15K drives as L2ARC

2009-05-06 Thread Adam Leventhal
After all this discussion, I am not sure if anyone adequately answered the original poster's question as to whether a 2540 with SAS 15K drives would provide substantial synchronous write throughput improvement when used as an L2ARC device. I was under the impression that the L2ARC was to
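The distinction behind Adam's reply: the L2ARC is a read cache and does not absorb synchronous writes; those are the job of a separate intent-log (slog) device. In zpool terms, with illustrative device names:

```shell
# L2ARC: a read cache -- speeds up random reads, does nothing for sync writes
zpool add tank cache c3t0d0

# slog: a separate ZIL device -- this is what accelerates synchronous writes
zpool add tank log c3t1d0
```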

Re: [zfs-discuss] SAS 15K drives as L2ARC

2009-05-06 Thread erik.ableson
On 7 May 09, at 04:03, Adam Leventhal wrote: After all this discussion, I am not sure if anyone adequately answered the original poster's question as to whether a 2540 with SAS 15K drives would provide substantial synchronous write throughput improvement when used as an L2ARC device. I