Hi, I currently have 4x 1 TB drives in a raidz configuration. I want to add another 2x 1 TB drives; however, if I simply zpool add them, I will only gain an extra 1 TB of space, since that creates a second raidz vdev inside the existing tank/pool. Is there a way to add my new drives into the existing raidz, without losing even more space and without rebuilding the entire pool from scratch? If not, is this something being worked on currently? Thanks, and merry Xmas!
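
For concreteness, what I'm trying to avoid is something along these lines (c1t4d0 and c1t5d0 are just placeholders for the two new disks):

  # adds a second, separate raidz1 vdev (~1 TB usable)
  zpool add tank raidz c1t4d0 c1t5d0
  # adding them as a mirror gives the same ~1 TB usable
  zpool add tank mirror c1t4d0 c1t5d0

Either way the new disks end up in their own vdev rather than widening the existing raidz.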

On 25 Dec 2009, at 20:00, zfs-discuss-requ...@opensolaris.org wrote:

Today's Topics:

  1. Re: Benchmarks results for ZFS + NFS, using SSD's as slog devices (ZIL) (Freddie Cash)
  2. Re: Benchmarks results for ZFS + NFS, using SSD's as slog devices (ZIL) (Richard Elling)
  3. Re: Troubleshooting dedup performance (Michael Herf)
  4. ZFS write bursts cause short app stalls (Saso Kiselkov)


----------------------------------------------------------------------

Message: 1
Date: Thu, 24 Dec 2009 17:34:32 PST
From: Freddie Cash <fjwc...@gmail.com>
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Benchmarks results for ZFS + NFS, using SSD's as slog devices (ZIL)
Message-ID: <2086438805.291261704902840.javamail.tweb...@sf-app1>
Content-Type: text/plain; charset=UTF-8

Mattias Pantzare wrote:
> That would leave us with three options:
>
> 1) Deal with it and accept performance as it is.
> 2) Find a way to speed things up further for this workload.
> 3) Stop trying to use ZFS for this workload.

Option 4 is to re-do your pool using fewer disks per raidz2 vdev, giving more vdevs to the pool and thus increasing the IOPS for the whole pool.

14 disks in a single raidz2 vdev is going to give horrible IO, regardless of how fast the individual disks are.

Redoing it with 6-disk raidz2 vdevs, or even 8-disk raidz2 vdevs, will give you much better throughput.
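
For example (device names below are placeholders), a 12-disk pool laid out as two 6-disk raidz2 vdevs would be created roughly like this:

  zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

Each raidz2 keyword starts a new vdev, and ZFS stripes writes across both of them.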

Freddie
--
This message posted from opensolaris.org


------------------------------

Message: 2
Date: Thu, 24 Dec 2009 17:39:11 -0800
From: Richard Elling <richard.ell...@gmail.com>
To: Freddie Cash <fjwc...@gmail.com>
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Benchmarks results for ZFS + NFS, using SSD's as slog devices (ZIL)
Message-ID: <b8134afb-e6f1-4c62-a93b-d5826587b...@gmail.com>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes

On Dec 24, 2009, at 5:34 PM, Freddie Cash wrote:

> Mattias Pantzare wrote:
>> That would leave us with three options:
>>
>> 1) Deal with it and accept performance as it is.
>> 2) Find a way to speed things up further for this workload.
>> 3) Stop trying to use ZFS for this workload.
>
> Option 4 is to re-do your pool using fewer disks per raidz2 vdev, giving more vdevs to the pool and thus increasing the IOPS for the whole pool.
>
> 14 disks in a single raidz2 vdev is going to give horrible IO, regardless of how fast the individual disks are.
>
> Redoing it with 6-disk raidz2 vdevs, or even 8-disk raidz2 vdevs, will give you much better throughput.

At this point it is useful to know that if you do not have a
separate log, then the ZIL uses the pool and its data protection
scheme.  In other words, each ZIL write will be a raidz2 stripe
with its associated performance.
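
For reference, a dedicated slog device can be added to an existing pool roughly like this (pool and device names are placeholders):

  # single SSD as a dedicated log device
  zpool add tank log c3t0d0
  # or mirrored across two SSDs for safety
  zpool add tank log mirror c3t0d0 c3t1d0

With a slog in place, synchronous writes land on the SSD first instead of being spread across the raidz2 stripe.
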
 -- richard



------------------------------

Message: 3
Date: Thu, 24 Dec 2009 21:22:28 -0800
From: Michael Herf <mbh...@gmail.com>
To: Richard Elling <richard.ell...@gmail.com>
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Troubleshooting dedup performance
Message-ID:
        <c65729770912242122k1c3f9cf4hdfa1c17789393...@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

FWIW, I just disabled prefetch, and my dedup + zfs recv seems to be
running visibly faster (somewhere around 3-5x faster).

echo zfs_prefetch_disable/W0t1 | mdb -kw

Anyone else see a result like this?

I'm using the "read" bandwidth of the sending pool, as reported by "zpool
iostat -x 5", to estimate the transfer rate, since I assume the write rate
will be lower when dedup is working.

mike

p.s. Note to set it back to the default behavior:
echo zfs_prefetch_disable/W0t0 | mdb -kw
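
If the change holds up, the usual way to make a ZFS tunable like this persist across reboots is via /etc/system (double-check the variable name against your build first):

  * in /etc/system
  set zfs:zfs_prefetch_disable = 1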


------------------------------

Message: 4
Date: Fri, 25 Dec 2009 18:57:32 +0100
From: Saso Kiselkov <skisel...@gmail.com>
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] ZFS write bursts cause short app stalls
Message-ID: <4b34fd0c.8090...@gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

I've started porting a video streaming application to OpenSolaris on ZFS,
and am hitting some pretty weird performance issues. What I'm trying to do
is run 77 concurrent video capture processes (roughly 430 Mbit/s in total),
all writing into separate files on a 12 TB J4200 storage array. The disks in
the array are arranged into a single RAID-0 ZFS volume (though I've tried
different RAID levels; none helped). CPU performance is not an issue (barely
35% utilization on a single-CPU quad-core X2250). I/O bottlenecks can also
be ruled out, since the storage array's sequential write performance is
around 600 MB/s.

The problem is the bursty behavior of ZFS writes. All the capture processes
do, in essence, is poll() on a socket, then read() any available data from
it and write() it to a file. The poll() call is done with a timeout of
250 ms, the expectation being that if no data arrives within 0.25 seconds,
the input is dead and recording stops (I tried increasing this value, but
the problem still arises, although not as frequently). When ZFS decides to
commit a transaction group to disk (every 30 seconds), the system stalls for
a short time and, depending on the number of capture processes currently
running, the poll() call (which usually blocks for 1-2 ms) takes on the
order of hundreds of ms, sometimes even longer. I figured I might be able to
resolve this by lowering the txg timeout to something like 1-2 seconds (I
need ZFS to write as soon as data arrives, since it will likely never be
overwritten), but I couldn't find any tunable parameter for it anywhere on
the net. On FreeBSD, I think this can be done via the vfs.zfs.txg_timeout
sysctl. A glimpse into the source at
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/txg.c
(around line 40) made me worry that somebody may have hard-coded this value
into the kernel, in which case I'd be pretty much screwed on OpenSolaris.
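
For what it's worth, this is roughly what I'd expect the knob to look like if it is a tunable global rather than a compile-time constant (the variable name is my assumption from skimming txg.c and needs verifying against the actual build):

  # read the current value
  echo zfs_txg_timeout/D | mdb -k
  # lower it to 1 second at runtime
  echo zfs_txg_timeout/W0t1 | mdb -kw
  # or persistently, in /etc/system:
  #   set zfs:zfs_txg_timeout = 1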

Any help would be greatly appreciated.

Regards,
--
Saso


------------------------------



End of zfs-discuss Digest, Vol 50, Issue 123
********************************************
