Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-15 Thread Carson Gaspar
Richard Elling wrote: ... As you can see, so much has changed, hopefully for the better, that running performance benchmarks on old software just isn't very interesting. NB. Oracle's Sun OpenStorage systems do not use Solaris 10 and if they did, they would not be competitive in the market. The

Re: [zfs-discuss] Another user loses his pool (10TB) in this case and 40 days work

2010-02-15 Thread Orvar Korvar
Yes, if you value your data you should move from USB drives to normal, directly attached drives. I have heard that USB can do some strange things; a normal connection such as SATA is more reliable.

Re: [zfs-discuss] ZFS slowness under domU high load

2010-02-15 Thread Bogdan Ćulibrk
zfs ml wrote: sorry, scratch the above - I didn't see this: 9. domUs have ext3 mounted with: noatime,commit=120 Is the write traffic because you are backing up to the same disks that the domUs live on? Yes, it is.

Re: [zfs-discuss] ZFS slowness under domU high load

2010-02-15 Thread Bogdan Ćulibrk
Kjetil and Richard, thanks for this. Kjetil Torgrim Homme wrote: Bogdan Ćulibrk b...@default.rs writes: What are my options from here? To move onto a zvol with a greater blocksize? 64k? 128k? Or will I run into other trouble going that way when I have small reads coming from the domU (ext3 with
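
For reference, moving a domU disk to a zvol with a larger block size means creating a new zvol and copying the data over, since volblocksize can only be set at creation time. A minimal sketch, with the pool, size and volume names as placeholders:

  # zfs create -V 32G -o volblocksize=64K tank/domu-disk1
  # zfs get volblocksize tank/domu-disk1

Whether 64k or 128k helps depends on how well it matches the ext3 I/O pattern inside the guest; small random reads will still touch whole blocks, so read amplification remains a trade-off.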

[zfs-discuss] zfs questions wrt unused blocks

2010-02-15 Thread heinz zerbes
Gents, we want to understand the mechanism of ZFS a bit better. Q: what is the design/algorithm of ZFS in terms of reclaiming unused blocks? Q: what criteria does ZFS use to start reclaiming blocks? The issue at hand is an LDOM or zone running in a virtual (thin-provisioned) disk on an NFS
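
As background for the question, the space a dataset or zvol actually holds can be broken down from the command line; a minimal sketch, with pool and dataset names as placeholders:

  # zfs list -o space tank/ldom1
  # zpool list tank

Freed blocks show up here as a drop in USED, but whether the thin-provisioned backing file on the NFS server ever shrinks is a separate question, since ZFS writes copy-on-write and does not hand freed space back to the underlying storage.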

Re: [zfs-discuss] SSD and ZFS

2010-02-15 Thread Tracey Bernath
For those following the saga: with the prefetch problem fixed, and data coming off the L2ARC instead of the disks, the system switched from I/O bound to CPU bound. I opened up the throttles with some explicit PARALLEL hints in the Oracle queries, and we were finally able to max out the single SSD:
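
For anyone wanting to confirm that reads really are being served from the L2ARC rather than the disks, the ARC kstats can be sampled directly; a minimal sketch (statistic names as on current OpenSolaris builds):

  # kstat -p zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses
  # kstat -p zfs:0:arcstats:l2_size

A rising l2_hits count relative to l2_misses indicates the cache device is doing the work.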

Re: [zfs-discuss] [networking-discuss] Help needed on big transfers failure with e1000g

2010-02-15 Thread Arnaud Brand
Hi, sending to zfs-discuss too since this seems to be related to the zfs receive operation. The following only holds true when the replication stream (i.e. the delta between snap1 and snap2) is more than about 800GB. If I proceed with this command the transfer fails after some variable amount
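
The exact command is not quoted in this excerpt, but an incremental replication of this kind is typically of the form below; host and dataset names are placeholders:

  # zfs send -i tank/data@snap1 tank/data@snap2 | ssh recvhost zfs receive -F backup/data

With streams of several hundred GB, anything that interrupts the pipe (a network hiccup, an ssh timeout) aborts the whole receive, and at this point a zfs receive cannot be resumed, so the entire stream has to be resent.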

Re: [zfs-discuss] available space

2010-02-15 Thread Cindy Swearingen
Hi Charles, What kind of pool is this? The SIZE and AVAIL amounts will vary depending on the ZFS redundancy and whether the deflated or inflated amounts are displayed. I attempted to explain the differences in the zpool list/zfs list display, here:
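
For a concrete comparison, the two views can be put side by side; a minimal sketch with a placeholder pool name:

  # zpool list tank
  # zfs list -r tank

For a raidz/raidz2 pool, zpool list reports the raw (inflated) capacity of all devices including parity, while zfs list reports the deflated, usable space after redundancy, which is why the two numbers differ.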

Re: [zfs-discuss] available space

2010-02-15 Thread Charles Hedrick
Thanks. That makes sense. This is raidz2.

[zfs-discuss] ZFS Volume Destroy Halts I/O

2010-02-15 Thread Nick
I've seen threads like this around this ZFS forum, so forgive me if I'm covering old ground. I currently have a ZFS configuration where individual drives are presented to my OpenSolaris machine and I'm using ZFS to do a RAIDZ-1 on the drives. I have several filesystems and volumes on this

Re: [zfs-discuss] ZFS Volume Destroy Halts I/O

2010-02-15 Thread Bob Friesenhahn
On Mon, 15 Feb 2010, Nick wrote: I'm using the latest Opensolaris dev build (132) and I have my storage pools and volumes upgraded to the latest available versions. I am using deduplication on my ZFS volumes, set at the highest volume level, so I'm not sure if this has an impact. Can anyone
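
When a large deduplicated volume is destroyed, every freed block has to be checked against the dedup table, and if the DDT does not fit in the ARC that turns into a long stream of random reads that can stall other I/O. The size of the table can be inspected; a minimal sketch with a placeholder pool name:

  # zpool get dedupratio tank
  # zdb -DD tank

The zdb output includes a DDT histogram and the approximate on-disk and in-core size of the entries.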

[zfs-discuss] Plan for upgrading a ZFS based SAN

2010-02-15 Thread Tiernan OToole
Good morning all. I am in the process of building my V1 SAN for media storage in house, and I am already thinking of the V2 build... Currently there are 8 250GB HDDs and 3 500GB disks. The 8 250s are in a RAIDZ2 array, and the 3 500s will be in RAIDZ1... At the moment, the current case is quite

Re: [zfs-discuss] Plan for upgrading a ZFS based SAN

2010-02-15 Thread James Dickens
Yes, send and receive will do the job; see the zfs manpage for details. James Dickens http://uadmin.blogspot.com On Mon, Feb 15, 2010 at 11:56 AM, Tiernan OToole lsmart...@gmail.com wrote: Good morning all. I am in the process of building my V1 SAN for media storage in house, and I am already
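
For a whole-pool migration, the usual pattern is a recursive snapshot followed by a replication stream; a minimal sketch, with the old pool, new host and new pool names as placeholders:

  # zfs snapshot -r tank@migrate
  # zfs send -R tank@migrate | ssh newhost zfs receive -Fd newpool

The -R flag preserves the dataset hierarchy, properties and snapshots, and a later incremental send (-i or -I) can catch up changes made while the first copy was running.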

Re: [zfs-discuss] ZFS Volume Destroy Halts I/O

2010-02-15 Thread Nick
There is no doubt that it is both a bug and expected behavior, and it is related to deduplication being enabled. Is it expected because it's a bug, or is it a bug that is not going to be fixed and so I should expect it? Is there a bug/defect I can keep an eye on in one of the OpenSolaris

[zfs-discuss] Shrink the slice used for zpool?

2010-02-15 Thread Yi Zhang
Hi, I recently installed OpenSolaris 2009.06 on a 10GB primary partition on my laptop. I noticed there wasn't any option for customizing the slices inside the Solaris partition. After installation, there was only a single slice (0) occupying the entire partition. Now the problem is that I need to

Re: [zfs-discuss] ZFS Volume Destroy Halts I/O

2010-02-15 Thread Eric Schrock
On 02/15/10 10:26, Nick wrote: There is no doubt that it is both a bug and expected behavior and is related to deduplication being enabled. Is it expected because it's a bug, or is it a bug that is not going to be fixed and so I should expect it? Is there a bug/defect I can keep an eye on

Re: [zfs-discuss] Shrink the slice used for zpool?

2010-02-15 Thread Casper . Dik
Hi, I recently installed OpenSolaris 2009.06 on a 10GB primary partition on my laptop. I noticed there wasn't any option for customizing the slices inside the Solaris partition. After installation, there was only a single slice (0) occupying the entire partition. Now the problem is that I need to

Re: [zfs-discuss] ZFS Volume Destroy Halts I/O

2010-02-15 Thread Nick
Thanks!

Re: [zfs-discuss] Shrink the slice used for zpool?

2010-02-15 Thread Yi Zhang
On Mon, Feb 15, 2010 at 1:48 PM, casper@sun.com wrote: Hi, I recently installed OpenSolaris 2009.06 on a 10GB primary partition on my laptop. I noticed there wasn't any option for customizing the slices inside the Solaris partition. After installation, there was only a single slice (0)

Re: [zfs-discuss] Shrink the slice used for zpool?

2010-02-15 Thread Richard Elling
On Feb 15, 2010, at 11:15 AM, Yi Zhang wrote: On Mon, Feb 15, 2010 at 1:48 PM, casper@sun.com wrote: Hi, I recently installed OpenSolaris 2009.06 on a 10GB primary partition on my laptop. I noticed there wasn't any option for customizing the slices inside the Solaris partition.

Re: [zfs-discuss] Shrink the slice used for zpool?

2010-02-15 Thread Darren J Moffat
On 15/02/2010 19:15, Yi Zhang wrote: Can you create a zvol and use that for UFS? Slow, but ... Casper. Casper, thanks for the tip! Actually I'm not sure if this would work for me. I wanted to use directio to bypass the file system cache when reading/writing files. That's why I chose UFS

Re: [zfs-discuss] Shrink the slice used for zpool?

2010-02-15 Thread Yi Zhang
Thank you, Darren and Richard. I think this gives what I wanted. Yi On Mon, Feb 15, 2010 at 3:13 PM, Darren J Moffat darren.mof...@sun.com wrote: On 15/02/2010 19:15, Yi Zhang wrote: Can you create a zvol and use that for ufs?  Slow, but ... Casper Casper, thanks for the tip! Actually
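
The approach that came out of this thread (a UFS file system on a zvol, mounted with forcedirectio) looks roughly like the sketch below; the zvol name, size and mount point are placeholders:

  # zfs create -V 4G rpool/ufsvol
  # newfs /dev/zvol/rdsk/rpool/ufsvol
  # mkdir -p /mnt/ufs
  # mount -F ufs -o forcedirectio /dev/zvol/dsk/rpool/ufsvol /mnt/ufs

One caveat: forcedirectio bypasses the UFS page cache, but the zvol underneath is still served through the ZFS ARC, so this is not a full end-to-end directio path; setting primarycache=metadata on the zvol can reduce the double caching.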

Re: [zfs-discuss] zfs promote

2010-02-15 Thread Cindy Swearingen
Hi-- From your pre-promotion output, both fs1-patch and snap1 are referencing the same 16.4 GB, which makes sense. I don't see how fs1 could be a clone of fs1-patch because it should be REFER'ing 16.4 GB as well in your pre-promotion zfs list. If you snapshot, clone, and promote, then the

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-15 Thread Peter Tribble
On Wed, Feb 10, 2010 at 10:06 PM, Brian E. Imhoff beimh...@hotmail.com wrote: I am in the proof-of-concept phase of building a large ZFS/Solaris-based SAN box, and am experiencing absolutely poor, unusable performance. ... From here, I discover the iscsi target on our Windows Server 2008 R2
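
For context, a zvol exported as an iSCSI LUN on a recent OpenSolaris build is usually set up through COMSTAR along these lines; this is a hedged sketch (pool, volume and LU GUID are placeholders, and older setups may use the legacy iscsitgt/shareiscsi path instead):

  # zfs create -V 500G tank/lun0
  # sbdadm create-lu /dev/zvol/rdsk/tank/lun0
  # stmfadm add-view <lu-guid-from-sbdadm>
  # itadm create-target

Synchronous write behaviour from the initiator and the lack of a separate log device are worth checking when a target like this feels far slower than the raw disks.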

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-15 Thread Bob Beverage
On Wed, Feb 10, 2010 at 10:06 PM, Brian E. Imhoff beimh...@hotmail.com wrote: I've seen exactly the same thing. Basically, terrible transfer rates with Windows and the server sitting there completely idle. I am also seeing this behaviour. It started somewhere around snv111 but I am not

Re: [zfs-discuss] ZFS slowness under domU high load

2010-02-15 Thread Daniel Carosone
On Mon, Feb 15, 2010 at 01:45:57PM +0100, Bogdan Ćulibrk wrote: One more thing regarding SSD: would it be useful to throw in an additional SAS/SATA drive to serve as L2ARC? I know an SSD is the most logical thing to use as L2ARC, but will a conventional drive be of *any* help as L2ARC? Only in

Re: [zfs-discuss] SSD and ZFS

2010-02-15 Thread Daniel Carosone
On Sun, Feb 14, 2010 at 11:08:52PM -0600, Tracey Bernath wrote: Now, to add the second SSD ZIL/L2ARC for a mirror. Just to be clear: mirror the ZIL by all means, but don't mirror L2ARC; just add more devices and let them load-balance. This is especially true if you're sharing SSD writes with the ZIL, as
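
A hedged sketch of that layout, with placeholder device names: the log devices are mirrored, while the cache devices are simply listed and ZFS balances reads across them:

  # zpool add tank log mirror c1t2d0 c1t3d0
  # zpool add tank cache c2t0d0 c2t1d0

Cache device contents can always be rebuilt from the pool, which is why mirroring them buys nothing; a lost log device, on the other hand, can cost recent synchronous writes, hence the mirror.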

Re: [zfs-discuss] Reading ZFS config for an extended period

2010-02-15 Thread taemun
Just thought I'd chime in for anyone who had read this - the import operation completed this time, after 60 hours of disk grinding. :)

Re: [zfs-discuss] Reading ZFS config for an extended period

2010-02-15 Thread Khyron
The DDT is stored within the pool, IIRC, but there is an RFE open to allow you to store it on a separate top-level VDEV, like a SLOG. The other thing I've noticed with all of the "destroyed a large dataset with dedup enabled and it's taking forever to import/destroy/insert function here" questions

Re: [zfs-discuss] Reading ZFS config for an extended period

2010-02-15 Thread Rob Logan
RFE open to allow you to store [DDT] on a separate top-level VDEV Hmm, add to this the spare, log and cache vdevs, and it's getting to the point of making another pool and thinly provisioning volumes to maintain partitioning flexibility. taemun: hey, thanks for closing the loop!

Re: [zfs-discuss] Reading ZFS config for an extended period

2010-02-15 Thread taemun
The system in question has 8GB of RAM. It never paged during the import (unless I was asleep at that point, but anyway). It ran for 52 hours, then started showing 47% kernel CPU usage. At that stage, DTrace stopped responding, so iopattern died, as did iostat. It was also increasing RAM usage

Re: [zfs-discuss] zfs questions wrt unused blocks

2010-02-15 Thread Richard Elling
On Feb 15, 2010, at 8:43 PM, heinz zerbes wrote: Gents, We want to understand the mechanism of zfs a bit better. Q: what is the design/algorithm of zfs in terms of reclaiming unused blocks? Q: what criteria is there for zfs to start reclaiming blocks The answer to these questions is too

Re: [zfs-discuss] Reading ZFS config for an extended period

2010-02-15 Thread Markus Kovero
The other thing I've noticed with all of the "destroyed a large dataset with dedup enabled and it's taking forever to import/destroy/insert function here" questions is that the process runs so, so, so much faster with 8+ GiB of RAM. Almost to a man, everyone who reports these 3, 4, or
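
A rough way to gauge whether the DDT fits in RAM on a given system; the pool name is a placeholder and the per-entry figure is an approximation, not an exact value:

  # zdb -D tank | grep entries
  (total DDT entries) x (roughly 250-320 bytes per in-core entry) = approximate ARC footprint

If that figure approaches or exceeds installed RAM, destroys and imports of deduped datasets will be dominated by random reads of the table from disk.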

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-15 Thread Ragnar Sundblad
On 15 feb 2010, at 23.33, Bob Beverage wrote: On Wed, Feb 10, 2010 at 10:06 PM, Brian E. Imhoff beimh...@hotmail.com wrote: I've seen exactly the same thing. Basically, terrible transfer rates with Windows and the server sitting there completely idle. I am also seeing this behaviour.