Re: [zfs-discuss] Petabyte pool?

2013-03-15 Thread Ray Van Dolson
On Fri, Mar 15, 2013 at 06:09:34PM -0700, Marion Hakanson wrote: Greetings, Has anyone out there built a 1-petabyte pool? I've been asked to look into this, and was told low performance is fine, workload is likely to be write-once, read-occasionally, archive storage of gene sequencing

Re: [zfs-discuss] Petabyte pool?

2013-03-15 Thread Ray Van Dolson
On Fri, Mar 15, 2013 at 06:31:11PM -0700, Marion Hakanson wrote: rvandol...@esri.com said: We've come close: admin@mes-str-imgnx-p1:~$ zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT datapool 978T 298T 680T 30% 1.00x ONLINE - syspool 278G 104G

Re: [zfs-discuss] [discuss] Hardware Recommendations: SAS2 JBODs

2012-11-13 Thread Ray Van Dolson
On Tue, Nov 13, 2012 at 03:08:04PM -0500, Peter Tripp wrote: Hi folks, I'm in the market for a couple of JBODs. Up until now I've been relatively lucky with finding hardware that plays very nicely with ZFS. All my gear currently in production uses LSI SAS controllers (3801e, 9200-16e,

Re: [zfs-discuss] IOzone benchmarking

2012-05-04 Thread Ray Van Dolson
On Thu, May 03, 2012 at 07:35:45AM -0700, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Ray Van Dolson System is a 240x2TB (7200RPM) system in 20 Dell MD1200 JBODs. 16 vdevs of 15 disks each -- RAIDZ3

[zfs-discuss] IOzone benchmarking

2012-05-01 Thread Ray Van Dolson
I'm trying to run some IOzone benchmarking on a new system to get a feel for baseline performance. Unfortunately, the system has a lot of memory (144GB), but I have some time so am approaching my runs as follows: Throughput: iozone -m -t 8 -T -r 128k -o -s 36G -R -b bigfile.xls IOPS:
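A hedged reconstruction of both runs (flags as given in the thread; the 8 x 36G working set is sized to exceed the 144GB of RAM, and the shell redirect on the second command is an assumption, since archive rendering tends to strip the > character):

    # Throughput: 8 threads, 128k records, O_SYNC writes, Excel report to bigfile.xls
    iozone -m -t 8 -T -r 128k -o -s 36G -R -b bigfile.xls
    # IOPS: write/rewrite, read/reread and random I/O, reported in ops/sec
    iozone -O -i 0 -i 1 -i 2 -e -+n -r 128K -s 288G > iops.txt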

Re: [zfs-discuss] IOzone benchmarking

2012-05-01 Thread Ray Van Dolson
On Tue, May 01, 2012 at 03:21:05AM -0700, Gary Driggs wrote: On May 1, 2012, at 1:41 AM, Ray Van Dolson wrote: Throughput: iozone -m -t 8 -T -r 128k -o -s 36G -R -b bigfile.xls IOPS: iozone -O -i 0 -i 1 -i 2 -e -+n -r 128K -s 288G > iops.txt Do you expect to be reading

Re: [zfs-discuss] IOzone benchmarking

2012-05-01 Thread Ray Van Dolson
On Tue, May 01, 2012 at 07:18:18AM -0700, Bob Friesenhahn wrote: On Mon, 30 Apr 2012, Ray Van Dolson wrote: I'm trying to run some IOzone benchmarking on a new system to get a feel for baseline performance. Unfortunately, benchmarking with IOzone is a very poor indicator of what

Re: [zfs-discuss] ZFS on Linux vs FreeBSD

2012-04-25 Thread Ray Van Dolson
On Wed, Apr 25, 2012 at 05:48:57AM -0700, Paul Archer wrote: This may fall into the realm of a religious war (I hope not!), but recently several people on this list have said/implied that ZFS was only acceptable for production use on FreeBSD (or Solaris, of course) rather than Linux with ZoL.

[zfs-discuss] Unable to allocate dma memory for extra SGL

2012-01-10 Thread Ray Van Dolson
Hi all; We have a Solaris 10 U9 x86 instance running on Silicon Mechanics / SuperMicro hardware. Occasionally under high load (ZFS scrub for example), the box becomes non-responsive (it continues to respond to ping but nothing else works -- not even the local console). Our only solution is to

Re: [zfs-discuss] Unable to allocate dma memory for extra SGL

2012-01-10 Thread Ray Van Dolson
). There are two internally mounted Intel X-25E's -- these double as the rootpool and ZIL devices. There is an 80GB X-25M mounted to the expander along with the 1TB drives operating as L2ARC. On Jan 10, 2012, at 21:07, Ray Van Dolson rvandol...@esri.com wrote: Hi all; We have a Solaris 10 U9 x86

[zfs-discuss] ZFS + Dell MD1200's - MD3200 necessary?

2012-01-05 Thread Ray Van Dolson
We are looking at building a storage platform based on Dell HW + ZFS (likely Nexenta). Going Dell because they can provide solid HW support globally. Are any of you using the MD1200 JBOD with head units *without* an MD3200 in front? We are being told that the MD1200's won't daisy chain unless

Re: [zfs-discuss] ZFS + Dell MD1200's - MD3200 necessary?

2012-01-05 Thread Ray Van Dolson
Yep, we are doing this. Just trying to sanity check the suggested config against what folks are doing in the wild as our Dell partner doesn't seem to think it should/can be done without the MD3200. They may have ulterior motives of course. :) Thanks, Ray On 6 Jan 2012, at 01:28, Ray Van Dolson

Re: [zfs-discuss] Resolving performance issue w/ deduplication (NexentaStor)

2011-12-30 Thread Ray Van Dolson
for the pointer. Ray On Dec 30, 2011, at 2:03, Ray Van Dolson rvandol...@esri.com wrote: On Thu, Dec 29, 2011 at 10:59:04PM -0800, Fajar A. Nugraha wrote: On Fri, Dec 30, 2011 at 1:31 PM, Ray Van Dolson rvandol...@esri.com wrote: Is there a non-disruptive way to undeduplicate everything

Re: [zfs-discuss] Resolving performance issue w/ deduplication (NexentaStor)

2011-12-30 Thread Ray Van Dolson
Thanks for you response, Richard. On Fri, Dec 30, 2011 at 09:52:17AM -0800, Richard Elling wrote: On Dec 29, 2011, at 10:31 PM, Ray Van Dolson wrote: Hi all; We have a dev box running NexentaStor Community Edition 3.1.1 w/ 24GB (we don't run dedupe on production boxes -- and we do pay

[zfs-discuss] Resolving performance issue w/ deduplication (NexentaStor)

2011-12-29 Thread Ray Van Dolson
Hi all; We have a dev box running NexentaStor Community Edition 3.1.1 w/ 24GB RAM (we don't run dedupe on production boxes -- and we do pay for Nexenta licenses on prod as well) and an 8.5TB pool with deduplication enabled (1.9TB or so in use). Dedupe ratio is only 1.26x. The box has an

Re: [zfs-discuss] Resolving performance issue w/ deduplication (NexentaStor)

2011-12-29 Thread Ray Van Dolson
On Thu, Dec 29, 2011 at 10:59:04PM -0800, Fajar A. Nugraha wrote: On Fri, Dec 30, 2011 at 1:31 PM, Ray Van Dolson rvandol...@esri.com wrote: Is there a non-disruptive way to undeduplicate everything and expunge the DDT? AFAIK, no  zfs send/recv and then back perhaps (we have the extra
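A minimal sketch of the send/recv approach mentioned above, with hypothetical dataset names; dedup is a write-time property, so the data must be rewritten into a dataset where dedup is off, and the DDT only shrinks once the old dataset is destroyed:

    zfs set dedup=off datapool                           # new writes are no longer deduped
    zfs snapshot datapool/fs@undedup
    zfs send datapool/fs@undedup | zfs receive datapool/fs-new
    # after verifying the copy, destroy the old dataset to release its DDT entries
    zfs destroy -r datapool/fs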

[zfs-discuss] ZFS in front of MD3000i

2011-10-24 Thread Ray Van Dolson
We're setting up ZFS in front of an MD3000i (and attached MD1000 expansion trays). The rule of thumb is to let ZFS manage all of the disks, so we wanted to expose each MD3000i spindle via a JBOD mode of some sort. Unfortunately, it doesn't look like the MD3000i supports this (though this[1] post seems to

Re: [zfs-discuss] Replacement for X25-E

2011-09-22 Thread Ray Van Dolson
On Thu, Sep 22, 2011 at 12:46:42PM -0700, Brandon High wrote: On Tue, Sep 20, 2011 at 12:21 AM, Markus Kovero markus.kov...@nebula.fi wrote: Hi, I was wondering do you guys have any recommendations as replacement for Intel X25-E as it is being EOL’d? Mainly as for log device. The Intel

Re: [zfs-discuss] Replacement for X25-E

2011-09-22 Thread Ray Van Dolson
On Thu, Sep 22, 2011 at 01:21:26PM -0700, Brandon High wrote: On Thu, Sep 22, 2011 at 12:53 PM, Ray Van Dolson rvandol...@esri.com wrote: It seems to perform similarly to the X-25E as well (3300 IOPS for random writes).  Perhaps the drive can be overprovisioned as well? My impression

Re: [zfs-discuss] Replacement for X25-E

2011-09-22 Thread Ray Van Dolson
On Thu, Sep 22, 2011 at 01:34:09PM -0700, Bob Friesenhahn wrote: On Thu, 22 Sep 2011, Brandon High wrote: The 20GB 311 only costs ~ $100 though. The 100GB Intel 710 costs ~ $650. The 311 is a good choice for home or budget users, and it seems that the 710 is much bigger than it needs to

Re: [zfs-discuss] Intel 320 as ZIL?

2011-08-15 Thread Ray Van Dolson
On Fri, Aug 12, 2011 at 06:53:22PM -0700, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Ray Van Dolson For ZIL, I suppose we could get the 300GB drive and overcommit to 95%! What kind of benefit does

Re: [zfs-discuss] Intel 320 as ZIL?

2011-08-15 Thread Ray Van Dolson
On Mon, Aug 15, 2011 at 01:38:36PM -0700, Brandon High wrote: On Thu, Aug 11, 2011 at 1:00 PM, Ray Van Dolson rvandol...@esri.com wrote: Are any of you using the Intel 320 as ZIL?  It's MLC based, but I understand its wear and performance characteristics can be bumped up significantly

Re: [zfs-discuss] Intel 320 as ZIL?

2011-08-12 Thread Ray Van Dolson
On Thu, Aug 11, 2011 at 09:17:38PM -0700, Cooper Hubbell wrote: Which 320 series drive are you targeting, specifically? The ~$100 80GB variant should perform as well as the more expensive versions if your workload is more random from what I've seen/read. ESX NFS-attached datastore activity.

[zfs-discuss] Intel 320 as ZIL?

2011-08-11 Thread Ray Van Dolson
Are any of you using the Intel 320 as ZIL? It's MLC based, but I understand its wear and performance characteristics can be bumped up significantly by increasing the overprovisioning to 20% (dropping usable capacity to 80%). Anyone have experience with this? Ray

Re: [zfs-discuss] Intel 320 as ZIL?

2011-08-11 Thread Ray Van Dolson
On Thu, Aug 11, 2011 at 01:10:07PM -0700, Ian Collins wrote: On 08/12/11 08:00 AM, Ray Van Dolson wrote: Are any of you using the Intel 320 as ZIL? It's MLC based, but I understand its wear and performance characteristics can be bumped up significantly by increasing the overprovisioning

[zfs-discuss] Adjusting HPA from Solaris on Intel 320 SSD's

2011-07-18 Thread Ray Van Dolson
Is there a way to tweak the HPA (Host Protected Area) on an Intel 320 SSD using native Solaris commands? In this case, we'd like to shrink the usable space so as to improve performance per recommendation in Intel Solid-State Drive 320 Series in Server Storage Applications section 4.1. hdparm on
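The thread asks for a native Solaris tool and none is named, so the following is only the commonly cited workaround (an assumption, not from the post): boot a Linux live image and clamp the drive's reported capacity with hdparm, which maps onto the Intel paper's over-provisioning recommendation:

    hdparm -N /dev/sdX              # show current and native max sector counts
    hdparm -Np125000000 /dev/sdX    # illustrative count: clamps an 80GB drive to ~64GB

The -Np form makes the new Host Protected Area setting persist across power cycles.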

Re: [zfs-discuss] Should Intel X25-E not be used with a SAS Expander?

2011-06-02 Thread Ray Van Dolson
On Thu, Jun 02, 2011 at 11:19:25AM -0700, Josh Simon wrote: I don't believe this to be the reason since there are other SATA (single-port) SSD drives listed as approved in that same document. Upon further research I found some interesting links that may point to a potentially different

Re: [zfs-discuss] Should Intel X25-E not be used with a SAS Expander?

2011-06-02 Thread Ray Van Dolson
On Thu, Jun 02, 2011 at 11:39:13AM -0700, Donald Stahl wrote: Yup; reset storms affected us as well (we were using the X-25 series for ZIL/L2ARC).  Only the ZIL drives were impacted, but it was a large impact :) What did you see with your reset storm? Were there log errors in

[zfs-discuss] Tuning disk failure detection?

2011-05-10 Thread Ray Van Dolson
We recently had a disk fail on one of our whitebox (SuperMicro) ZFS arrays (Solaris 10 U9). The disk began throwing errors like this: May 5 04:33:44 dev-zfs4 scsi: [ID 243001 kern.warning] WARNING: /pci@0,0/pci8086,3410@9/pci15d9,400@0 (mpt_sas0): May 5 04:33:44 dev-zfs4

Re: [zfs-discuss] Tuning disk failure detection?

2011-05-10 Thread Ray Van Dolson
On Tue, May 10, 2011 at 02:42:40PM -0700, Jim Klimov wrote: In a recent post r-mexico wrote that they had to parse system messages and manually fail the drives on a similar, though different, occasion: http://opensolaris.org/jive/message.jspa?messageID=515815#515815 Thanks Jim, good
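If the goal is simply to make the sd driver give up on a dying disk sooner, one frequently cited Solaris/illumos tunable is sd-config-list in /kernel/drv/sd.conf; the sketch below is hedged (the vendor/product string and value are illustrative, and the exact tuple format varies by release):

    # /kernel/drv/sd.conf -- shorten retry behavior for a specific drive model
    sd-config-list = "ATA     WDC WD2002FYPS-0", "retries-timeout:2";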

Re: [zfs-discuss] Tuning disk failure detection?

2011-05-10 Thread Ray Van Dolson
On Tue, May 10, 2011 at 03:57:28PM -0700, Brandon High wrote: On Tue, May 10, 2011 at 9:18 AM, Ray Van Dolson rvandol...@esri.com wrote: My question is -- is there a way to tune the MPT driver or even ZFS itself to be more/less aggressive on what it sees as a failure scenario? You didn't

Re: [zfs-discuss] Deduplication Memory Requirements

2011-05-06 Thread Ray Van Dolson
On Wed, May 04, 2011 at 08:49:03PM -0700, Edward Ned Harvey wrote: From: Tim Cook [mailto:t...@cook.ms] That's patently false.  VM images are the absolute best use-case for dedup outside of backup workloads.  I'm not sure who told you/where you got the idea that VM images are not ripe

[zfs-discuss] Permanently using hot spare?

2011-05-05 Thread Ray Van Dolson
Have a failed drive on a ZFS pool (three RAIDZ2 vdevs, one hot spare). The hot spare kicked in and all is well. Is it possible to just make that hot spare disk -- already resilvered into the pool -- a permanent part of the pool? We could then throw in a new disk and mark it as a spare and avoid

Re: [zfs-discuss] Permanently using hot spare?

2011-05-05 Thread Ray Van Dolson
On Thu, May 05, 2011 at 03:13:06PM -0700, TianHong Zhao wrote: Just detach the faulty disk, then the spare will become the normal disk once it's finished resilvering. #zpool detach pool fault_device_name Then you need to add the new spare: #zpool add pool spare new_spare_device There seems to be a
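Put together, the procedure above with hypothetical device names (note these are zpool, not zfs, subcommands):

    zpool detach datapool c0t5d0       # drop the faulted disk; the spare becomes a full member
    zpool add datapool spare c0t9d0    # the replacement disk becomes the new hot spare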

[zfs-discuss] Deduplication Memory Requirements

2011-05-04 Thread Ray Van Dolson
There are a number of threads (this one[1] for example) that describe memory requirements for deduplication. They're pretty high. I'm trying to get a better understanding... on our NetApps we use 4K block sizes with their post-process deduplication and get pretty good dedupe ratios for VM
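For scale, a rough worked estimate using the commonly quoted figure of about 320 bytes of core per DDT entry (block counts assume the 1.9TB in-use figure from the NexentaStor thread above):

    1.9 TB / 128 KB records ~= 14.5M blocks x 320 B ~=  4.6 GB of DDT
    1.9 TB /   4 KB blocks  ~=  464M blocks x 320 B ~=  148 GB of DDT

which is why NetApp-style 4K post-process dedup ratios don't translate directly to ZFS. Running zdb -S poolname will simulate dedup on an existing pool and print the table histogram without enabling anything.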

Re: [zfs-discuss] Deduplication Memory Requirements

2011-05-04 Thread Ray Van Dolson
On Wed, May 04, 2011 at 12:29:06PM -0700, Erik Trimble wrote: On 5/4/2011 9:57 AM, Ray Van Dolson wrote: There are a number of threads (this one[1] for example) that describe memory requirements for deduplication. They're pretty high. I'm trying to get a better understanding... on our

Re: [zfs-discuss] Deduplication Memory Requirements

2011-05-04 Thread Ray Van Dolson
On Wed, May 04, 2011 at 02:55:55PM -0700, Brandon High wrote: On Wed, May 4, 2011 at 12:29 PM, Erik Trimble erik.trim...@oracle.com wrote: I suspect that NetApp does the following to limit their resource usage: they presume the presence of some sort of cache that can be dedicated

Re: [zfs-discuss] Deduplication Memory Requirements

2011-05-04 Thread Ray Van Dolson
On Wed, May 04, 2011 at 03:49:12PM -0700, Erik Trimble wrote: On 5/4/2011 2:54 PM, Ray Van Dolson wrote: On Wed, May 04, 2011 at 12:29:06PM -0700, Erik Trimble wrote: (2) Block size: a 4k block size will yield better dedup than a 128k block size, presuming reasonable data turnover

Re: [zfs-discuss] Deduplication Memory Requirements

2011-05-04 Thread Ray Van Dolson
On Wed, May 04, 2011 at 04:51:36PM -0700, Erik Trimble wrote: On 5/4/2011 4:44 PM, Tim Cook wrote: On Wed, May 4, 2011 at 6:36 PM, Erik Trimble erik.trim...@oracle.com wrote: On 5/4/2011 4:14 PM, Ray Van Dolson wrote: On Wed, May 04, 2011 at 02:55:55PM

Re: [zfs-discuss] detach configured log devices?

2011-03-16 Thread Ray Van Dolson
On Wed, Mar 16, 2011 at 09:33:58AM -0700, Jim Mauro wrote: With ZFS, Solaris 10 Update 9, is it possible to detach configured log devices from a zpool? I have a zpool with 3 F20 mirrors for the ZIL. They're coming up corrupted. I want to detach them, remake the devices and reattach them to

Re: [zfs-discuss] Good SLOG devices?

2011-03-01 Thread Ray Van Dolson
On Tue, Mar 01, 2011 at 08:03:42AM -0800, Roy Sigurd Karlsbakk wrote: Hi I'm running OpenSolaris 148 on a few boxes, and newer boxes are getting installed as we speak. What would you suggest for a good SLOG device? It seems some new PCI-E-based ones are hitting the market, but will those

Re: [zfs-discuss] Good SLOG devices?

2011-03-01 Thread Ray Van Dolson
On Tue, Mar 01, 2011 at 09:56:35AM -0800, Roy Sigurd Karlsbakk wrote: a) do you need an SLOG at all? Some workloads (asynchronous ones) will never benefit from an SLOG. We're planning to use this box for CIFS/NFS, so we'll need an SLOG to speed things up. b) form factor. at least one

[zfs-discuss] multipath used inadvertantly?

2011-02-15 Thread Ray Van Dolson
I'm troubleshooting an existing Solaris 10U9 server (x86 whitebox) and noticed its device names are extremely hairy -- very similar to the multipath device names: c0t5000C50026F8ACAAd0, etc, etc. mpathadm seems to confirm: # mpathadm list lu /dev/rdsk/c0t50015179591CE0C1d0s2
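A hedged sketch of confirming and, if unwanted, disabling MPxIO on Solaris 10 (stmsboot rewrites the driver configuration and prompts for a reboot; on some Solaris 10 builds you may need to scope it with -D mpt):

    mpathadm list lu    # enumerate multipath logical units
    stmsboot -L         # map MPxIO device names back to the original paths
    stmsboot -d         # disable multipathing system-wide

Device names change back after the reboot, so anything referencing the c0t5000...d0 names needs updating; zpools themselves normally re-import cleanly by devid.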

Re: [zfs-discuss] [storage-discuss] multipath used inadvertantly?

2011-02-15 Thread Ray Van Dolson
in the kernel you have storage multipathing enabled. (Check with modinfo.) On 2/15/2011 3:53 PM, Ray Van Dolson wrote: I'm troubleshooting an existing Solaris 10U9 server (x86 whitebox) and noticed its device names are extremely hairy -- very similar to the multipath device names

Re: [zfs-discuss] [storage-discuss] multipath used inadvertantly?

2011-02-15 Thread Ray Van Dolson
it doesn't matter if the system is rebooted. ZFS should be able to identify the devices by their internal device IDs but I can't speak for unknown hardware. When you make hardware changes, always have current backups. Thanks, Cindy On 02/15/11 14:32, Ray Van Dolson wrote: Thanks Torrey

[zfs-discuss] cfgadm MPxIO aware yet in Solaris 10 U9?

2011-02-15 Thread Ray Van Dolson
I just replaced a failing disk on one of my servers running Solaris 10 U9. The system was MPxIO enabled and I now have the old device hanging around in the cfgadm list. I understand from searching around that cfgadm may not be MPxIO aware -- at least not in Solaris 10. I see a fix was pushed to
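One hedged cleanup step (an assumption, not confirmed in the thread) is to prune the dangling device links left behind by the swap, which at least tidies the /dev namespace even if cfgadm still lists the stale attachment point:

    devfsadm -Cv    # clean up (-C) dangling /dev links, verbosely (-v)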

Re: [zfs-discuss] native ZFS on Linux

2011-02-12 Thread Ray Van Dolson
On Sat, Feb 12, 2011 at 09:18:26AM -0800, David E. Anderson wrote: I see that Pinguy OS, an uber-Ubuntu o/s, includes native ZFS support. Any pointers to more info on this? Probably using this[1]. Ray [1] http://kqstor.com/

Re: [zfs-discuss] Fwd: native ZFS on Linux

2011-02-12 Thread Ray Van Dolson
currently. Ray -- Forwarded message -- From: C. Bergström codest...@osunix.org Date: 2011/2/12 Subject: Re: [zfs-discuss] native ZFS on Linux To: Cc: zfs-discuss@opensolaris.org Ray Van Dolson wrote: On Sat, Feb 12, 2011 at 09:18:26AM -0800, David E. Anderson wrote

Re: [zfs-discuss] Looking for 3.5 SSD for ZIL

2010-12-23 Thread Ray Van Dolson
On Thu, Dec 23, 2010 at 07:35:29AM -0800, Deano wrote: If anybody does know of any source to the secure erase/reformatters, I’ll happily volunteer to do the port and then maintain it. I’m currently in talks with several SSD and driver chip hardware peeps with regard to getting datasheets for

Re: [zfs-discuss] Looking for 3.5 SSD for ZIL

2010-12-22 Thread Ray Van Dolson
On Wed, Dec 22, 2010 at 05:43:35AM -0800, Jabbar wrote: Hello, I was thinking of buying a couple of SSD's until I found out that Trim is only supported with SATA drives. I'm not sure if TRIM will work with ZFS. I was concerned that with trim support the SSD life and write throughput will

[zfs-discuss] Moving rpool disks

2010-11-15 Thread Ray Van Dolson
We need to move the disks comprising our mirrored rpool on a Solaris 10 U9 x86_64 (not SPARC) system. We'll be relocating both drives to a different controller in the same system (should go from c1* to c0*). We're curious as to what the best way is to go about this? We'd love to be able to just
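A common suggestion for this situation (an assumption on my part, not taken from the thread): let ZFS find the disks by devid after a reconfiguration boot, and only update the BIOS boot-device setting by hand:

    touch /reconfigure    # request a reconfiguration boot
    init 6                # reboot; devices are re-enumerated under c0*
    zpool status rpool    # verify the mirror came back healthy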

Re: [zfs-discuss] X4540 RIP

2010-11-09 Thread Ray Van Dolson
On Mon, Nov 08, 2010 at 11:51:02PM -0800, matthew patton wrote: I have this with 36 2TB drives (and 2 separate boot drives). http://www.colfax-intl.com/jlrid/SpotLight_more_Acc.asp?L=134S=58B=2267 That's just a Supermicro SC847. http://www.supermicro.com/products/chassis/4U/?chs=847

[zfs-discuss] NFS/SATA lockups (svc_cots_kdup no slots free sata port time out)

2010-10-19 Thread Ray Van Dolson
I have a Solaris 10 U8 box (142901-14) running as an NFS server with a 23 disk zpool behind it (three RAIDZ2 vdevs). We have a single Intel X-25E SSD operating as an slog ZIL device attached to a SATA port on this machine's motherboard. The rest of the drives are in a hot-swap enclosure.

Re: [zfs-discuss] Multiple SLOG devices per pool

2010-10-13 Thread Ray Van Dolson
On Tue, Oct 12, 2010 at 08:49:00PM -0700, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Ray Van Dolson I have a pool with a single SLOG device rated at Y iops. If I add a second (non-mirrored) SLOG device

Re: [zfs-discuss] Bursty writes - why?

2010-10-12 Thread Ray Van Dolson
On Tue, Oct 12, 2010 at 12:09:44PM -0700, Eff Norwood wrote: The NFS client in this case was VMWare ESXi 4.1 release build. What happened is that the file uploader behavior was changed in 4.1 to prevent I/O contention with the VM guests. That means when you go to upload something to the

[zfs-discuss] Multiple SLOG devices per pool

2010-10-12 Thread Ray Van Dolson
I have a pool with a single SLOG device rated at Y iops. If I add a second (non-mirrored) SLOG device also rated at Y iops will my zpool now theoretically be able to handle 2Y iops? Or close to that? Thanks, Ray
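The operation itself is a one-liner (hypothetical device name); ZFS spreads slog writes across all log devices in a pool, so aggregate IOPS can approach 2Y, subject to the rest of the write path keeping up:

    zpool add datapool log c2t1d0    # second, non-mirrored slog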

[zfs-discuss] ZFS disk space monitoring with SNMP

2010-10-01 Thread Ray Van Dolson
Hey folks; Running on Solaris 10 U9 here. How do most of you monitor disk usage / capacity on your large zpools remotely via SNMP tools? Net SNMP seems to be using a 32-bit unsigned integer (based on the MIB) for hrStorageSize and friends, and thus we're not able to get accurate numbers for

Re: [zfs-discuss] ZFS disk space monitoring with SNMP

2010-10-01 Thread Ray Van Dolson
On Fri, Oct 01, 2010 at 03:00:16PM -0700, Volker A. Brandt wrote: Hello Ray, hello list! Running on Solaris 10 U9 here. How do most of you monitor disk usage / capacity on your large zpools remotely via SNMP tools? Net SNMP seems to be using a 32-bit unsigned integer (based on the
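One hedged workaround for the 32-bit hrStorage counters is to sidestep HOST-RESOURCES-MIB entirely and publish pool usage through Net-SNMP's extend mechanism (pool name is hypothetical; on Solaris 10 the SMA config lives under /etc/sma/snmp):

    # snmpd.conf
    extend zpoolcap /usr/sbin/zpool list -H -o capacity datapool

The value is then readable as a string via NET-SNMP-EXTEND-MIB, which avoids the integer overflow at the cost of some parsing on the poller side.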

Re: [zfs-discuss] SCSI write retry errors on ZIL SSD drives...

2010-09-21 Thread Ray Van Dolson
Just wanted to post a quick follow-up to this. Original thread is here[1] -- not quoted for brevity. Andrew Gabriel suggested[2] that this could possibly be some workload triggered issue. We wanted to rule out a driver problem and so we tested various configurations under Solaris 10U9 and

[zfs-discuss] Best practice for Sol10U9 ZIL -- mirrored or not?

2010-09-16 Thread Ray Van Dolson
Best practice in Solaris 10 U8 and older was to use a mirrored ZIL. With the ability to remove slog devices in Solaris 10 U9, we're thinking we may get more bang for our buck to use two slog devices for improved IOPS performance instead of needing the redundancy so much. Any thoughts on this?
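The two configurations in question, with hypothetical device names:

    zpool add datapool log mirror c2t0d0 c2t1d0    # mirrored slog: redundancy
    zpool add datapool log c2t0d0 c2t1d0           # two independent slogs: more IOPS

With U9's log-device removal, the failure of an unmirrored slog is survivable (only sync writes in flight during a simultaneous crash are at risk), which is what shifts the trade-off toward the second form.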

Re: [zfs-discuss] dedicated ZIL/L2ARC

2010-09-14 Thread Ray Van Dolson
On Tue, Sep 14, 2010 at 06:59:07AM -0700, Wolfraider wrote: We are looking into the possibility of adding dedicated ZIL and/or L2ARC devices to our pool. We are looking into getting 4 x 32GB Intel X25-E SSD drives. Would this be a good solution to slow write speeds? We are currently sharing

Re: [zfs-discuss] NFS performance issue

2010-09-08 Thread Ray Van Dolson
On Wed, Sep 08, 2010 at 01:20:58PM -0700, Dr. Martin Mundschenk wrote: Hi! I searched the web for hours, trying to solve the NFS/ZFS low performance issue on my just setup OSOL box (snv134). The problem is discussed in many threads but I've found no solution. On a nfs shared volume, I

Re: [zfs-discuss] 4k block alignment question (X-25E)

2010-09-02 Thread Ray Van Dolson
On Tue, Aug 31, 2010 at 12:47:49PM -0700, Brandon High wrote: On Mon, Aug 30, 2010 at 3:05 PM, Ray Van Dolson rvandol...@esri.com wrote: I want to fix (as much as is possible) a misalignment issue with an X-25E that I am using for both OS and as an slog device. It's pretty easy to get

Re: [zfs-discuss] 4k block alignment question (X-25E)

2010-08-31 Thread Ray Van Dolson
On Mon, Aug 30, 2010 at 10:11:32PM -0700, Christopher George wrote: I was wondering if anyone had a benchmarking showing this alignment mattered on the latest SSDs. My guess is no, but I have no data. I don't believe there can be any doubt whether a Flash based SSD (tier1 or not) is

Re: [zfs-discuss] 4k block alignment question (X-25E)

2010-08-30 Thread Ray Van Dolson
On Mon, Aug 30, 2010 at 03:37:52PM -0700, Eric D. Mudama wrote: On Mon, Aug 30 at 15:05, Ray Van Dolson wrote: I want to fix (as much as is possible) a misalignment issue with an X-25E that I am using for both OS and as an slog device. This is on x86 hardware running Solaris 10U8

Re: [zfs-discuss] 4k block alignment question (X-25E)

2010-08-30 Thread Ray Van Dolson
On Mon, Aug 30, 2010 at 03:56:42PM -0700, Richard Elling wrote: comment below... On Aug 30, 2010, at 3:42 PM, Ray Van Dolson wrote: On Mon, Aug 30, 2010 at 03:37:52PM -0700, Eric D. Mudama wrote: On Mon, Aug 30 at 15:05, Ray Van Dolson wrote: I want to fix (as much as is possible

Re: [zfs-discuss] 4k block alignment question (X-25E)

2010-08-30 Thread Ray Van Dolson
On Mon, Aug 30, 2010 at 04:12:48PM -0700, Edho P Arief wrote: On Tue, Aug 31, 2010 at 6:03 AM, Ray Van Dolson rvandol...@esri.com wrote: In any case -- any thoughts on whether or not I'll be helping anything if I change my slog slice starting cylinder to be 4k aligned even though slice 0

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-28 Thread Ray Van Dolson
On Sat, Aug 28, 2010 at 05:50:38AM -0700, Eff Norwood wrote: I can't think of an easy way to measure pages that have not been consumed since it's really an SSD controller function which is obfuscated from the OS, and add the variable of over provisioning on top of that. If anyone would like

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Ray Van Dolson
On Fri, Aug 27, 2010 at 05:51:38AM -0700, David Magda wrote: On Fri, August 27, 2010 08:46, Eff Norwood wrote: Saso is correct - ESX/i always uses F_SYNC for all writes and that is for sure your performance killer. Do a snoop | grep sync and you'll see the sync write calls from VMWare. We
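A hedged version of the suggested snoop check (the interface name is an assumption; NFSv3 sync activity shows up as COMMIT operations and stable-flagged writes on the NFS port):

    snoop -d e1000g0 port 2049 | grep -i commit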

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Ray Van Dolson
On Fri, Aug 27, 2010 at 11:57:17AM -0700, Marion Hakanson wrote: markwo...@yahoo.com said: So the question is with a proper ZIL SSD from SUN, and a RAID10... would I be able to support all the VM's or would it still be pushing the limits a 44 disk pool? If it weren't a closed

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Ray Van Dolson
On Fri, Aug 27, 2010 at 12:46:42PM -0700, Mark wrote: It does, its on a pair of large APC's. Right now we're using NFS for our ESX Servers. The only iSCSI LUN's I have are mounted inside a couple Windows VM's. I'd have to migrate all our VM's to iSCSI, which I'm willing to do if it would

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Ray Van Dolson
On Fri, Aug 27, 2010 at 01:22:15PM -0700, John wrote: Wouldn't it be possible to saturate the SSD ZIL with enough backlogged sync writes? What I mean is, doesn't the ZIL eventually need to make it to the pool, and if the pool as a whole (spinning disks) can't keep up with 30+ vm's of write

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Ray Van Dolson
On Fri, Aug 27, 2010 at 03:51:39PM -0700, Eff Norwood wrote: By all means please try it to validate it yourself and post your results from hour one, day one and week one. In a ZIL use case, although the data set is small it is always writing a small ever changing (from the SSDs perspective)

Re: [zfs-discuss] SCSI write retry errors on ZIL SSD drives...

2010-08-25 Thread Ray Van Dolson
On Wed, Aug 25, 2010 at 11:47:38AM -0700, Andreas Grüninger wrote: Ray Supermicro does not support the use of SSDs behind an expander. You must put the SSD in the head or use an interposer card; see here:

[zfs-discuss] SCSI write retry errors on ZIL SSD drives...

2010-08-24 Thread Ray Van Dolson
I posted a thread on this once long ago[1] -- but we're still fighting with this problem and I wanted to throw it out here again. All of our hardware is from Silicon Mechanics (SuperMicro chassis and motherboards). Up until now, all of the hardware has had a single 24-disk expander / backplane

Re: [zfs-discuss] SCSI write retry errors on ZIL SSD drives...

2010-08-24 Thread Ray Van Dolson
On Tue, Aug 24, 2010 at 04:46:23PM -0700, Andrew Gabriel wrote: Ray Van Dolson wrote: I posted a thread on this once long ago[1] -- but we're still fighting with this problem and I wanted to throw it out here again. All of our hardware is from Silicon Mechanics (SuperMicro chassis

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-16 Thread Ray Van Dolson
On Mon, Aug 16, 2010 at 08:35:05AM -0700, Tim Cook wrote: No, no they don't. You're under the misconception that they no longer own the code just because they released a copy as GPL. That is not true. Anyone ELSE who uses the GPL code must release modifications if they wish to distribute it

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-16 Thread Ray Van Dolson
On Mon, Aug 16, 2010 at 08:48:31AM -0700, Joerg Schilling wrote: Ray Van Dolson rvandol...@esri.com wrote: I absolutely guarantee Oracle can and likely already has dual-licensed BTRFS. Well, Oracle obviously would want btrfs to stay as part of the Linux kernel rather than die

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-16 Thread Ray Van Dolson
On Mon, Aug 16, 2010 at 08:55:49AM -0700, Tim Cook wrote: Why would they obviously want that? When the project started, they were competing with Sun. They now own Solaris; they no longer have a need to produce a competing product. I would be EXTREMELY surprised to see Oracle continue to

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-16 Thread Ray Van Dolson
On Mon, Aug 16, 2010 at 08:58:20AM -0700, Garrett D'Amore wrote: On Mon, 2010-08-16 at 08:52 -0700, Ray Van Dolson wrote: On Mon, Aug 16, 2010 at 08:48:31AM -0700, Joerg Schilling wrote: Ray Van Dolson rvandol...@esri.com wrote: I absolutely guarantee Oracle can and likely already

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-16 Thread Ray Van Dolson
On Mon, Aug 16, 2010 at 08:57:19AM -0700, Joerg Schilling wrote: C. Bergström codest...@osunix.org wrote: I absolutely guarantee Oracle can and likely already has dual-licensed BTRFS. No.. talk to Chris Mason.. it depends on the linux kernel too much already to be available under

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-16 Thread Ray Van Dolson
On Mon, Aug 16, 2010 at 09:08:52AM -0700, Ray Van Dolson wrote: On Mon, Aug 16, 2010 at 08:57:19AM -0700, Joerg Schilling wrote: C. Bergström codest...@osunix.org wrote: I absolutely guarantee Oracle can and likely already has dual-licensed BTRFS. No.. talk to Chris Mason

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-16 Thread Ray Van Dolson
On Mon, Aug 16, 2010 at 09:15:12AM -0700, Tim Cook wrote: Or, for all you know, Chris Mason's contract has a non-compete that states if he leaves Oracle he's not allowed to work on any project he was a part of for five years. The business motivation would be to set the competition back a

Re: [zfs-discuss] ZFS development moving behind closed doors

2010-08-13 Thread Ray Van Dolson
On Fri, Aug 13, 2010 at 02:01:07PM -0700, C. Bergström wrote: Gary Mills wrote: If this information is correct, http://opensolaris.org/jive/thread.jspa?threadID=133043 further development of ZFS will take place behind closed doors. Opensolaris will become the internal development

Re: [zfs-discuss] Adding ZIL to pool questions

2010-08-01 Thread Ray Van Dolson
On Sun, Aug 01, 2010 at 12:36:28PM -0700, Gregory Gee wrote: Jim, that ACARD looks really nice, but out of the price range for a home server. Edward, disabling ZIL might be ok, but let me characterize what my home server does and tell me if disabling ZIL is ok. My home OpenSolaris server

[zfs-discuss] Using a zvol from your rpool as zil for another zpool

2010-07-02 Thread Ray Van Dolson
We have a server with a couple X-25E's and a bunch of larger SATA disks. To save space, we want to install Solaris 10 (our install is only about 1.4GB) to the X-25E's and use the remaining space on the SSD's for ZIL attached to a zpool created from the SATA drives. Currently we do this by

Re: [zfs-discuss] Using a zvol from your rpool as zil for another zpool

2010-07-02 Thread Ray Van Dolson
However, SVM+UFS is more annoying to work with as far as LiveUpgrade is concerned. We'd love to use a ZFS root, but that requires that the entire SSD be dedicated as an rpool leaving no space for ZIL. Or does it? It appears that we could do a: # zfs create -V 24G rpool/zil On our
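Completing the sketch from the message above (pool names hypothetical); whether this layering is actually safe is the open question the thread goes on to debate:

    zfs create -V 24G rpool/zil
    zpool add datapool log /dev/zvol/dsk/rpool/zil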

Re: [zfs-discuss] Using a zvol from your rpool as zil for another zpool

2010-07-02 Thread Ray Van Dolson
On Fri, Jul 02, 2010 at 03:40:26AM -0700, Ben Taylor wrote: We have a server with a couple X-25E's and a bunch of larger SATA disks. To save space, we want to install Solaris 10 (our install is only about 1.4GB) to the X-25E's and use the remaining space on the SSD's for ZIL

Re: [zfs-discuss] Using a zvol from your rpool as zil for another zpool

2010-07-02 Thread Ray Van Dolson
On Fri, Jul 02, 2010 at 08:18:48AM -0700, Erik Ableson wrote: Le 2 juil. 2010 à 16:30, Ray Van Dolson rvandol...@esri.com a écrit : On Fri, Jul 02, 2010 at 03:40:26AM -0700, Ben Taylor wrote: We have a server with a couple X-25E's and a bunch of larger SATA disks. To save space, we

Re: [zfs-discuss] What happens when unmirrored ZIL log device is removed ungracefully

2010-06-30 Thread Ray Van Dolson
On Wed, Jun 30, 2010 at 09:47:15AM -0700, Edward Ned Harvey wrote: From: Arne Jansen [mailto:sensi...@gmx.net] Edward Ned Harvey wrote: Due to recent experiences, and discussion on this list, my colleague and I performed some tests: Using solaris 10, fully upgraded. (zpool 15

Re: [zfs-discuss] OCZ Devena line of enterprise SSD

2010-06-17 Thread Ray Van Dolson
On Thu, Jun 17, 2010 at 09:42:44AM -0700, F. Wessels wrote: I just looked it up again and as far as I can see the super cap is present in the MLC version as well as the SLC. Very nice. A pair of the 50GB SLC model would be great for ZIL. Might continue to stick with the X-25M for L2ARC though

Re: [zfs-discuss] Deduplication and ISO files

2010-06-07 Thread Ray Van Dolson
On Fri, Jun 04, 2010 at 01:10:44PM -0700, Ray Van Dolson wrote: On Fri, Jun 04, 2010 at 01:03:32PM -0700, Brandon High wrote: On Fri, Jun 4, 2010 at 12:37 PM, Ray Van Dolson rvandol...@esri.com wrote: Makes sense.  So, as someone else suggested, decreasing my block size may improve

[zfs-discuss] Deduplication and ISO files

2010-06-04 Thread Ray Van Dolson
I'm running zpool version 23 (via ZFS fuse on Linux) and have a zpool with deduplication turned on. I am testing how well deduplication will work for the storage of many, similar ISO files and so far am seeing unexpected results (or perhaps my expectations are wrong). The ISO's I'm testing with

Re: [zfs-discuss] Deduplication and ISO files

2010-06-04 Thread Ray Van Dolson
On Fri, Jun 04, 2010 at 11:16:40AM -0700, Brandon High wrote: On Fri, Jun 4, 2010 at 9:30 AM, Ray Van Dolson rvandol...@esri.com wrote: The ISO's I'm testing with are the 32-bit and 64-bit versions of the RHEL5 DVD ISO's.  While both have their differences, they do contain a lot of similar

Re: [zfs-discuss] Deduplication and ISO files

2010-06-04 Thread Ray Van Dolson
On Fri, Jun 04, 2010 at 01:03:32PM -0700, Brandon High wrote: On Fri, Jun 4, 2010 at 12:37 PM, Ray Van Dolson rvandol...@esri.com wrote: Makes sense.  So, as someone else suggested, decreasing my block size may improve the deduplication ratio. It might. It might make your performance tank
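A hedged sketch of the block-size experiment under discussion (dataset name hypothetical; recordsize only affects files written after the change, so existing ISOs must be re-copied):

    zfs set recordsize=8k datapool/isos
    zfs set dedup=on datapool/isos

Smaller records raise the odds that identical content inside two ISOs lands on matching block boundaries, but as noted above they also multiply the DDT entry count, so performance can tank.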

Re: [zfs-discuss] New SSD options

2010-05-24 Thread Ray Van Dolson
This thread has grown giant, so apologies for screwing up threading with an out of place reply. :) So, as far as SF-1500 based SSD's, the only ones currently in existence are the Vertex 2 LE and Vertex 2 EX, correct (I understand the Vertex 2 Pro was never mass produced)? Both of these are based

Re: [zfs-discuss] New SSD options

2010-05-24 Thread Ray Van Dolson
On Mon, May 24, 2010 at 11:30:20AM -0700, Ray Van Dolson wrote: This thread has grown giant, so apologies for screwing up threading with an out of place reply. :) So, as far as SF-1500 based SSD's, the only ones currently in existence are the Vertex 2 LE and Vertex 2 EX, correct (I

Re: [zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore

2010-05-05 Thread Ray Van Dolson
On Wed, May 05, 2010 at 04:31:08PM -0700, Bob Friesenhahn wrote: On Thu, 6 May 2010, Ian Collins wrote: Bob and Ian are right. I was trying to remember the last time I installed Solaris 10, and the best I can recall, it was around late fall 2007. The fine folks at Oracle have been making

Re: [zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore

2010-05-05 Thread Ray Van Dolson
On Wed, May 05, 2010 at 05:09:40PM -0700, Erik Trimble wrote: On Wed, 2010-05-05 at 19:03 -0500, Bob Friesenhahn wrote: On Wed, 5 May 2010, Ray Van Dolson wrote: From a zfs standpoint, Solaris 10 does not seem to be behind the currently supported OpenSolaris release. Well, being
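For reference, the usual ZFS analogue of ufsdump for a full-system backup (snapshot and target names hypothetical); zfs send -R preserves descendant datasets, snapshots, and properties:

    zfs snapshot -r rpool@full
    zfs send -R rpool@full | gzip > /backup/rpool.full.zfs.gz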

[zfs-discuss] ZFS monitoring - best practices?

2010-04-08 Thread Ray Van Dolson
We're starting to grow our ZFS environment and really need to start standardizing our monitoring procedures. OS tools are great for spot troubleshooting and sar can be used for some trending, but we'd really like to tie this into an SNMP based system that can generate graphs for us (via RRD or
