Re: [zfs-discuss] improve meta data performance

2010-02-19 Thread Kjetil Torgrim Homme
Chris Banal cba...@gmail.com writes: We have a SunFire X4500 running Solaris 10U5 which does about 5-8k nfs ops of which about 90% are meta data. In hindsight it would have been significantly better to use a mirrored configuration, but we opted for 4 x (9+2) raidz2 at the time. We can not
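As a rough, hypothetical sketch of the trade-off (device names invented, six disks shown rather than the full 4 x (9+2) layout): a raidz2 vdev behaves like roughly one disk for small random reads, so a metadata-heavy NFS load sees far fewer IOPS than the same disks arranged as mirror pairs.

   # one raidz2 vdev: every small random read touches the whole group
   zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

   # the same six disks as three mirror pairs: three independent vdevs,
   # roughly three times the random-read IOPS for metadata workloads
   zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0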

Re: [zfs-discuss] ZFS mirrored boot disks

2010-02-19 Thread Terry Hull
Interestingly, with the machine running, I can pull the first drive in the mirror, replace it with an unformatted one, format it, mirror rpool over to it, install the boot loader, and at that point the machine will boot with no problems. It's just when the first disk is missing that I have a

Re: [zfs-discuss] ZFS mirrored boot disks

2010-02-19 Thread Fajar A. Nugraha
On Fri, Feb 19, 2010 at 7:42 PM, Terry Hull t...@nrg-inc.com wrote: Interestingly, with the machine running, I can pull the first drive in the mirror, replace it with an unformatted one, format it, mirror rpool over to it, install the boot loader, and at that point the machine will boot with

[zfs-discuss] Growing ZFS Volume with SMI/VTOC label

2010-02-19 Thread Tony MacDoodle
Is it possible to grow a ZFS volume on a SPARC system with a SMI/VTOC label without losing data as the OS is built on this volume? Thanks ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Disk controllers changing the names of disks

2010-02-19 Thread Markus Kovero
I am curious how admins are dealing with controllers like the Dell Perc 5 and 6 that can change the device name on a disk if a disk fails and the machine reboots. These controllers are not nicely behaved in that they happily fill in the device numbers for the physical drive that is

Re: [zfs-discuss] Growing ZFS Volume with SMI/VTOC label

2010-02-19 Thread Tony MacDoodle
So in a ZFS boot disk configuration (rpool) in a running environment, it's not possible? On Fri, Feb 19, 2010 at 9:25 AM, casper@sun.com wrote: Is it possible to grow a ZFS volume on a SPARC system with a SMI/VTOC label without losing data as the OS is built on this volume? Sure as

Re: [zfs-discuss] Growing ZFS Volume with SMI/VTOC label

2010-02-19 Thread Casper . Dik
So in a ZFS boot disk configuration (rpool) in a running environment, it's not possible? The example I have grows the rpool while running from the rpool. But you need a recent version of zfs to grow the pool while it is in use. On Fri, Feb 19, 2010 at 9:25 AM, casper@sun.com wrote:
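As a minimal sketch of that procedure (device name hypothetical, and assuming a build recent enough to have the autoexpand property): after the slice under the pool has been enlarged with format(1M), ZFS can be told to pick up the new size while the pool stays imported.

   # zpool set autoexpand=on rpool
   # zpool online -e rpool c0t0d0s0
   # zpool list rpool

The online -e step expands the vdev to the new slice size; zpool list should then show the larger SIZE.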

Re: [zfs-discuss] ZFS mirrored boot disks

2010-02-19 Thread David Dyer-Bennet
On Fri, February 19, 2010 00:32, Terry Hull wrote: I have a machine with the Supermicro 8 port SATA card installed. I have had no problem creating a mirrored boot disk using the oft-repeated scheme: prtvtoc /dev/rdsk/c4t0d0s2 | fmthard -s - /dev/rdsk/c4t1d0s2 zpool attach rpool c4t0d0s0
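The usual completion of that scheme, so that either disk remains bootable on its own, is to put the boot loader on the second disk as well once the resilver finishes; on an x86 system like this one that is roughly:

   # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4t1d0s0

(On SPARC the equivalent step uses installboot with the platform bootblk instead.)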

[zfs-discuss] ZFS unit of compression

2010-02-19 Thread Thanos Makatos
Hello. I want to know what the unit of compression in ZFS is. Is it 4 KB or larger? Is it tunable? Thanks. Thanos -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

Re: [zfs-discuss] ZFS unit of compression

2010-02-19 Thread Darren J Moffat
On 19/02/2010 15:43, Thanos Makatos wrote: Hello. I want to know what the unit of compression in ZFS is. Is it 4 KB or larger? Is it tunable? I don't understand what you mean. For user data, ZFS compresses ZFS blocks; these would be 512 bytes minimum up to 128k maximum and depend on the
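Put another way, compression is applied per ZFS block, so the effective unit is governed by the dataset's recordsize (and by the actual file size for files smaller than one record). A quick way to see this on a hypothetical dataset:

   # zfs set compression=on tank/data
   # zfs set recordsize=16k tank/data
   # zfs get recordsize,compressratio tank/data

With compression=on the default algorithm is lzjb; lowering recordsize caps the size of the unit that gets compressed, at some cost in compression ratio.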

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-19 Thread Edward Ned Harvey
One more thing I'd like to add here: The PERC cache measurably and significantly accelerates small disk writes. However, for read operations, it is insignificant compared to system ram, both in terms of size and speed. There is no significant performance improvement by enabling adaptive

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-19 Thread Günther
hello, i have made some benchmarks with my napp-it zfs-server. screenshot: www.napp-it.org/bench.pdf - 2gb vs 4gb vs 8gb ram - mirror vs raidz vs raidz2 vs raidz3 - dedup

[zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread Felix Buenemann
Hi, I'm currently testing a Mtron Pro 7500 16GB SLC SSD as a ZIL device and seeing very poor performance for small file writes via NFS. Copying a source code directory with around 4000 small files to the ZFS pool over NFS without the SSD log device yields around 1000 IOPS (pool of 8 sata
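For anyone repeating this kind of comparison, the slog is attached and detached roughly like this (device name hypothetical; removing a log device needs a reasonably recent build):

   # zpool add tank log c5t0d0
   # zpool iostat -v tank 5
   # zpool remove tank c5t0d0

zpool iostat -v shows the log device on its own line, so it is easy to confirm the synchronous writes are actually landing on the SSD during the NFS copy.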

Re: [zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread Bob Friesenhahn
On Fri, 19 Feb 2010, Felix Buenemann wrote: So it is apparent, that the SSD has really poor random writes. But I was under the impression, that the ZIL is mostly sequential writes or was I misinformed here? Maybe the cache syncs bring the device to its knees? That's what it seems like.
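One way people isolate the cost of those cache flushes, strictly for testing, is the zfs_nocacheflush tunable in /etc/system (it takes effect after a reboot and is unsafe for any device whose write cache is not nonvolatile):

   * testing only: stop ZFS from issuing cache-flush commands to devices
   set zfs:zfs_nocacheflush = 1

If IOPS jump dramatically with this set, the flushes are indeed what is bringing the SSD to its knees.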

Re: [zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread Felix Buenemann
On 19.02.10 19:30, Bob Friesenhahn wrote: On Fri, 19 Feb 2010, Felix Buenemann wrote: So it is apparent, that the SSD has really poor random writes. But I was under the impression, that the ZIL is mostly sequential writes or was I misinformed here? Maybe the cache syncs bring the device to

Re: [zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread David Dyer-Bennet
On Fri, February 19, 2010 12:50, Felix Buenemann wrote: Too bad, I'm getting ~1000 IOPS with an Intel X25-M G2 MLC and around 300 with a regular USB stick, so 50 IOPS is really poor for an SLC SSD. Well, but the Intel X25-M is the drive that really first cracked the problem (earlier

Re: [zfs-discuss] Idiots Guide to Running a NAS with ZFS/OpenSolaris

2010-02-19 Thread Orvar Korvar
I can strongly recommend this series of articles http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/ Very good! :o) -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

Re: [zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread Bob Friesenhahn
On Fri, 19 Feb 2010, David Dyer-Bennet wrote: Too bad, I'm getting ~1000 IOPS with an Intel X25-M G2 MLC and around 300 with a regular USB stick, so 50 IOPS is really poor for an SLC SSD. Well, but the Intel X25-M is the drive that really first cracked the problem (earlier high-performance

Re: [zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread Felix Buenemann
On 19.02.10 20:50, Bob Friesenhahn wrote: On Fri, 19 Feb 2010, David Dyer-Bennet wrote: Too bad, I'm getting ~1000 IOPS with an Intel X25-M G2 MLC and around 300 with a regular USB stick, so 50 IOPS is really poor for an SLC SSD. Well, but the Intel X25-M is the drive that really first

Re: [zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread David Dyer-Bennet
On Fri, February 19, 2010 13:50, Bob Friesenhahn wrote: On Fri, 19 Feb 2010, David Dyer-Bennet wrote: Too bad, I'm getting ~1000 IOPS with an Intel X25-M G2 MLC and around 300 with a regular USB stick, so 50 IOPS is really poor for an SLC SSD. Well, but the Intel X25-M is the drive that

Re: [zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread Marion Hakanson
felix.buenem...@googlemail.com said: I think I'll try one of those inexpensive battery-backed PCI RAM drives from Gigabyte and see how many IOPS they can pull. Another poster, Tracy Bernath, got decent ZIL IOPS from an OCZ Vertex unit. Dunno if that's sufficient for your purposes, but it

Re: [zfs-discuss] SSDs with a SCSI SCA interface?

2010-02-19 Thread Eric Sproul
On 12/ 4/09 02:06 AM, Erik Trimble wrote: Hey folks. I've looked around quite a bit, and I can't find something like this: I have a bunch of older systems which use Ultra320 SCA hot-swap connectors for their internal drives. (e.g. v20z and similar) I'd love to be able to use modern

[zfs-discuss] rule of thumb for scrub

2010-02-19 Thread Harry Putnam
I think I asked this before but apparently have lost track of the answers I got. I'm wanting a general rule of thumb for how often to `scrub'. My setup is a home NAS and general zfs server so it does not see heavy use. I'm up to build 129 and do update fairly often, just the last few builds

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-19 Thread Richard Elling
On Feb 19, 2010, at 8:35 AM, Edward Ned Harvey wrote: One more thing I’d like to add here: The PERC cache measurably and significantly accelerates small disk writes. However, for read operations, it is insignificant compared to system ram, both in terms of size and speed. There is no

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-19 Thread Ragnar Sundblad
On 18 feb 2010, at 13.55, Phil Harman wrote: ... Whilst the latest bug fixes put the world to rights again with respect to correctness, it may be that some of our performance workarounds are still unsafe (i.e. if my iSCSI client assumes all writes are synchronised to nonvolatile storage,

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-19 Thread Ragnar Sundblad
On 19 feb 2010, at 17.35, Edward Ned Harvey wrote: The PERC cache measurably and significantly accelerates small disk writes. However, for read operations, it is insignificant compared to system ram, both in terms of size and speed. There is no significant performance improvement by

Re: [zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread Felix Buenemann
On 19.02.10 21:29, Marion Hakanson wrote: felix.buenem...@googlemail.com said: I think I'll try one of those inexpensive battery-backed PCI RAM drives from Gigabyte and see how many IOPS they can pull. Another poster, Tracy Bernath, got decent ZIL IOPS from an OCZ Vertex unit. Dunno if

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-19 Thread Ross Walker
On Feb 19, 2010, at 4:57 PM, Ragnar Sundblad ra...@csc.kth.se wrote: On 18 feb 2010, at 13.55, Phil Harman wrote: ... Whilst the latest bug fixes put the world to rights again with respect to correctness, it may be that some of our performance workarounds are still unsafe (i.e. if my iSCSI

Re: [zfs-discuss] rule of thumb for scrub

2010-02-19 Thread Cindy Swearingen
Hi Harry, Our current scrubbing guideline is described here: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide Run zpool scrub on a regular basis to identify data integrity problems. If you have consumer-quality drives, consider a weekly scrubbing schedule. If you have
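There is no built-in scheduler for scrubs, so the usual arrangement is a root crontab entry; for example, a weekly scrub of a hypothetical pool named tank every Sunday at 03:00:

   0 3 * * 0 /usr/sbin/zpool scrub tank

zpool status shows scrub progress and the date of the last completed scrub.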

Re: [zfs-discuss] Lost disk geometry

2010-02-19 Thread Daniel Carosone
On Fri, Feb 19, 2010 at 01:15:17PM -0600, David Dyer-Bennet wrote: On Fri, February 19, 2010 13:09, David Dyer-Bennet wrote: Anybody know what the proper geometry is for a WD1600BEKT-6-1A13? It's not even in the data sheets any more! any such geometry has been entirely fictitious since

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-19 Thread Phil Harman
On 19/02/2010 21:57, Ragnar Sundblad wrote: On 18 feb 2010, at 13.55, Phil Harman wrote: Whilst the latest bug fixes put the world to rights again with respect to correctness, it may be that some of our performance workarounds are still unsafe (i.e. if my iSCSI client assumes all writes

Re: [zfs-discuss] Disk controllers changing the names of disks

2010-02-19 Thread Freddie Cash
On FreeBSD, I avoid this issue completely by labelling either the entire disk (via glabel(8)) or individual slices/partitions (via either glabel(8) or gpt labels). Use the label name to build the vdevs. Then it doesn't matter where the drive is connected, or how the device node is
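A minimal sketch of that approach on FreeBSD, with hypothetical disk and label names:

   # glabel label disk01 /dev/da1
   # glabel label disk02 /dev/da2
   # zpool create tank mirror label/disk01 label/disk02

Because the pool references /dev/label/* nodes, the vdevs resolve correctly no matter which controller port the drives end up on after a reboot.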

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-19 Thread Neil Perrin
If I understand correctly, ZFS nowadays will only flush data to nonvolatile storage (such as a RAID controller NVRAM), and not all the way out to disks. (To solve performance problems with some storage systems, and I believe that it also is the right thing to do under normal circumstances.)

Re: [zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread Eugen Leitl
On Fri, Feb 19, 2010 at 11:17:29PM +0100, Felix Buenemann wrote: I found the Hyperdrive 5/5M, which is a half-height drive bay sata ramdisk with battery backup and auto-backup to compact flash at power failure. Promises 65,000 IOPS and thus should be great for ZIL. It's pretty reasonable

Re: [zfs-discuss] Lost disk geometry

2010-02-19 Thread David Dyer-Bennet
On Fri, February 19, 2010 16:21, Daniel Carosone wrote: On Fri, Feb 19, 2010 at 01:15:17PM -0600, David Dyer-Bennet wrote: On Fri, February 19, 2010 13:09, David Dyer-Bennet wrote: Anybody know what the proper geometry is for a WD1600BEKT-6-1A13? It's not even in the data sheets any

Re: [zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread Ragnar Sundblad
On 19 feb 2010, at 23.40, Eugen Leitl wrote: On Fri, Feb 19, 2010 at 11:17:29PM +0100, Felix Buenemann wrote: I found the Hyperdrive 5/5M, which is a half-height drive bay sata ramdisk with battery backup and auto-backup to compact flash at power failure. Promises 65,000 IOPS and thus

Re: [zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread Daniel Carosone
On Fri, Feb 19, 2010 at 11:51:29PM +0100, Ragnar Sundblad wrote: On 19 feb 2010, at 23.40, Eugen Leitl wrote: On Fri, Feb 19, 2010 at 11:17:29PM +0100, Felix Buenemann wrote: I found the Hyperdrive 5/5M, which is a half-height drive bay sata ramdisk with battery backup and auto-backup to

Re: [zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread Toby Thain
On 19-Feb-10, at 5:40 PM, Eugen Leitl wrote: On Fri, Feb 19, 2010 at 11:17:29PM +0100, Felix Buenemann wrote: I found the Hyperdrive 5/5M, which is a half-height drive bay sata ramdisk with battery backup and auto-backup to compact flash at power failure. Promises 65,000 IOPS and thus should

Re: [zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread Rob Logan
A UPS plus disabling zil, or disabling synchronization, could possibly achieve the same result (or maybe better) IOPS-wise. Even with the fastest slog, disabling zil will always be faster... (fewer bytes to move) This would probably work given that your computer never crashes in an
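For reference, on builds of this era disabling the ZIL is a system-wide tunable in /etc/system; it needs a reboot, applies to every pool on the host, and means NFS clients can silently lose writes that were already acknowledged if the server crashes:

   * disables the ZIL for all pools -- data-loss risk on crash
   set zfs:zil_disable = 1

Later builds replace this with a per-dataset sync property, which is a much finer-grained knob.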

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-19 Thread Ragnar Sundblad
On 19 feb 2010, at 23.20, Ross Walker wrote: On Feb 19, 2010, at 4:57 PM, Ragnar Sundblad ra...@csc.kth.se wrote: On 18 feb 2010, at 13.55, Phil Harman wrote: ... Whilst the latest bug fixes put the world to rights again with respect to correctness, it may be that some of our

[zfs-discuss] l2arc current usage (population size)

2010-02-19 Thread Christo Kutrovsky
Hello, How do you tell how much of your l2arc is populated? I've been looking for a while now, can't seem to find it. Must be easy, as this blog entry shows it over time: http://blogs.sun.com/brendan/entry/l2arc_screenshots And follow up, can you tell how much of each data set is in the arc

Re: [zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread Thomas Garner
These are the same as the acard devices we've discussed here previously; earlier hyperdrive models were their own design.  Very interesting, and my personal favourite, but I don't know of anyone actually reporting results yet with them as ZIL. Here's one report:

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-19 Thread Ragnar Sundblad
On 19 feb 2010, at 23.22, Phil Harman wrote: On 19/02/2010 21:57, Ragnar Sundblad wrote: On 18 feb 2010, at 13.55, Phil Harman wrote: Whilst the latest bug fixes put the world to rights again with respect to correctness, it may be that some of our performance workarounds are still

Re: [zfs-discuss] l2arc current usage (population size)

2010-02-19 Thread Tomas Ögren
On 19 February, 2010 - Christo Kutrovsky sent me these 0,5K bytes: Hello, How do you tell how much of your l2arc is populated? I've been looking for a while now, can't seem to find it. Must be easy, as this blog entry shows it over time:
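The population counters live in the arcstats kstats, so a quick way to watch the L2ARC fill (values are in bytes) is:

   # kstat -p zfs:0:arcstats:l2_size
   # kstat -p zfs:0:arcstats:l2_hdr_size
   # kstat -p zfs:0:arcstats | grep l2_

The last form lists all the L2ARC counters, including hit and miss rates, which is essentially what the graphs in that blog entry are built from.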

Re: [zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread Ragnar Sundblad
On 20 feb 2010, at 02.34, Rob Logan wrote: A UPS plus disabling zil, or disabling synchronization, could possibly achieve the same result (or maybe better) IOPS-wise. Even with the fastest slog, disabling zil will always be faster... (fewer bytes to move) This would probably work given

Re: [zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread Felix Buenemann
On 20.02.10 01:33, Toby Thain wrote: On 19-Feb-10, at 5:40 PM, Eugen Leitl wrote: On Fri, Feb 19, 2010 at 11:17:29PM +0100, Felix Buenemann wrote: I found the Hyperdrive 5/5M, which is a half-height drive bay sata ramdisk with battery backup and auto-backup to compact flash at power