Chris Banal cba...@gmail.com writes:
We have a SunFire X4500 running Solaris 10U5 which does about 5-8k NFS
ops, of which about 90% are metadata. In hindsight it would have been
significantly better to use a mirrored configuration, but we opted for
4 x (9+2) raidz2 at the time. We can not
Interestingly, with the machine running, I can pull the first drive in the
mirror, replace it with an unformatted one, format it, mirror rpool over to it,
install the boot loader, and at that point the machine will boot with no
problems. It's just when the first disk is missing that I have a
On Fri, Feb 19, 2010 at 7:42 PM, Terry Hull t...@nrg-inc.com wrote:
Interestingly, with the machine running, I can pull the first drive in the
mirror, replace it with an unformatted one, format it, mirror rpool over to
it, install the boot loader, and at that point the machine will boot with
Is it possible to grow a ZFS volume on a SPARC system with a SMI/VTOC label
without losing data as the OS is built on this volume?
Thanks
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
I am curious how admins are dealing with controllers like the Dell PERC 5 and
6 that can change the device name of a disk if a disk fails and the machine
reboots. These controllers are not nicely behaved in that they happily fill
in the device numbers for the physical drive that is
So in a ZFS boot disk configuration (rpool) in a running environment, it's
not possible?
On Fri, Feb 19, 2010 at 9:25 AM, casper@sun.com wrote:
Is it possible to grow a ZFS volume on a SPARC system with a SMI/VTOC
label
without losing data as the OS is built on this volume?
Sure as
So in a ZFS boot disk configuration (rpool) in a running environment, it's
not possible?
The example I have grows the rpool while running from the rpool.
But you need a recent version of zfs to grow the pool while it is in use.
On Fri, Feb 19, 2010 at 9:25 AM, casper@sun.com wrote:
On Fri, February 19, 2010 00:32, Terry Hull wrote:
I have a machine with the Supermicro 8 port SATA card installed. I have
had no problem creating a mirrored boot disk using the oft-repeated
scheme:
prtvtoc /dev/rdsk/c4t0d0s2 | fmthard -s - /dev/rdsk/c4t1d0s2
zpool attach rpool c4t0d0s0 c4t1d0s0
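The oft-repeated scheme can be sketched end to end as a small script. This is a sketch, not the poster's exact procedure: device names c4t0d0/c4t1d0 are the ones from this thread, and it assumes an x86 rpool with GRUB (on SPARC you would use installboot instead of installgrub).

```shell
#!/bin/sh
# Mirror the boot disk: copy the label to the new disk, attach it to
# rpool, then make it bootable. Adjust device names for your system.
SRC=c4t0d0
DST=c4t1d0
if [ -e "/dev/rdsk/${SRC}s2" ]; then
  # Replicate the source disk's VTOC label onto the new disk.
  prtvtoc "/dev/rdsk/${SRC}s2" | fmthard -s - "/dev/rdsk/${DST}s2"
  # Attach the new slice as the second half of the rpool mirror.
  zpool attach -f rpool "${SRC}s0" "${DST}s0"
  # Install the GRUB boot blocks so the new half is bootable on its own.
  installgrub /boot/grub/stage1 /boot/grub/stage2 "/dev/rdsk/${DST}s0"
  status="attached; wait for resilver to finish (zpool status rpool)"
else
  status="skipped: ${SRC} not present on this system"
fi
echo "$status"
```

Waiting for the resilver to complete before pulling the old disk is the step most often forgotten.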
Hello.
I want to know what is the unit of compression in ZFS. Is it 4 KB or larger? Is
it tunable?
Thanks.
Thanos
--
This message posted from opensolaris.org
On 19/02/2010 15:43, Thanos Makatos wrote:
Hello.
I want to know what is the unit of compression in ZFS. Is it 4 KB or larger? Is
it tunable?
I don't understand what you mean.
For user data, ZFS compresses ZFS blocks; these would be 512 bytes minimum
up to 128K maximum and depend on the
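On the tunability question: the compression unit follows the dataset's block size, and recordsize is settable per dataset. A minimal sketch, assuming the hypothetical dataset name tank/fs:

```shell
#!/bin/sh
# ZFS compresses per block; recordsize caps the block size, so it is
# effectively the tunable upper bound on the compression unit.
# "tank/fs" is an example dataset name, not one from this thread.
if command -v zfs >/dev/null 2>&1; then
  zfs set compression=lzjb tank/fs
  zfs set recordsize=32k tank/fs   # compression now works on blocks up to 32K
  zfs get recordsize,compressratio tank/fs
  status="queried"
else
  status="skipped: zfs tools not available on this system"
fi
echo "$status"
```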
One more thing I'd like to add here:
The PERC cache measurably and significantly accelerates small disk writes.
However, for read operations, it is insignificant compared to system ram,
both in terms of size and speed. There is no significant performance
improvement by enabling adaptive
hello
i have made some benchmarks with my napp-it zfs-server

screenshot: http://www.napp-it.org/bench.pdf

- 2gb vs 4gb vs 8gb ram
- mirror vs raidz vs raidz2 vs raidz3
- dedup
Hi,
I'm currently testing a Mtron Pro 7500 16GB SLC SSD as a ZIL device and
seeing very poor performance for small file writes via NFS.
Copying a source code directory with around 4000 small files to the ZFS
pool over NFS without the SSD log device yields around 1000 IOPS (pool
of 8 sata
On Fri, 19 Feb 2010, Felix Buenemann wrote:
So it is apparent that the SSD has really poor random writes.
But I was under the impression that the ZIL is mostly sequential writes, or
was I misinformed here?
Maybe the cache syncs bring the device to its knees?
That's what it seems like.
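As a back-of-envelope check on the numbers in this thread (4000 small files over NFS, and the sync-write IOPS figures quoted for the various log devices), elapsed copy time is roughly files divided by the device's sync-write IOPS, since each small file costs at least one synchronous ZIL commit:

```shell
#!/bin/sh
# Rough model: one sync ZIL commit per small file copied over NFS,
# so wall time ~= files / sync-write IOPS of the log device.
files=4000
ssd=$((files / 50))      # the Mtron SLC at its measured ~50 IOPS
usb=$((files / 300))     # a USB stick at ~300 IOPS
mlc=$((files / 1000))    # an Intel X25-M at ~1000 IOPS
echo "${files} files: SLC ${ssd}s, USB stick ${usb}s, X25-M ${mlc}s"
```

Which is why the 50 IOPS device feels so much worse than even a USB stick, despite being SLC.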
On 19.02.10 19:30, Bob Friesenhahn wrote:
On Fri, 19 Feb 2010, Felix Buenemann wrote:
So it is apparent, that the SSD has really poor random writes.
But I was under the impression, that the ZIL is mostly sequential
writes or was I misinformed here?
Maybe the cache syncs bring the device to
On Fri, February 19, 2010 12:50, Felix Buenemann wrote:
Too bad, I'm getting ~1000 IOPS with an Intel X25-M G2 MLC and around
300 with a regular USB stick, so 50 IOPS is really poor for an SLC SSD.
Well, but the Intel X25-M is the drive that really first cracked the
problem (earlier
I can strongly recommend this series of articles
http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
Very good! :o)
On Fri, 19 Feb 2010, David Dyer-Bennet wrote:
Too bad, I'm getting ~1000 IOPS with an Intel X25-M G2 MLC and around
300 with a regular USB stick, so 50 IOPS is really poor for an SLC SSD.
Well, but the Intel X25-M is the drive that really first cracked the
problem (earlier high-performance
On 19.02.10 20:50, Bob Friesenhahn wrote:
On Fri, 19 Feb 2010, David Dyer-Bennet wrote:
Too bad, I'm getting ~1000 IOPS with an Intel X25-M G2 MLC and around
300 with a regular USB stick, so 50 IOPS is really poor for an SLC SSD.
Well, but the Intel X25-M is the drive that really first
On Fri, February 19, 2010 13:50, Bob Friesenhahn wrote:
On Fri, 19 Feb 2010, David Dyer-Bennet wrote:
Too bad, I'm getting ~1000 IOPS with an Intel X25-M G2 MLC and around
300 with a regular USB stick, so 50 IOPS is really poor for an SLC SSD.
Well, but the Intel X25-M is the drive that
felix.buenem...@googlemail.com said:
I think I'll try one of these inexpensive battery-backed PCI RAM drives from
Gigabyte and see how many IOPS they can pull.
Another poster, Tracy Bernath, got decent ZIL IOPS from an OCZ Vertex unit.
Dunno if that's sufficient for your purposes, but it
On 12/ 4/09 02:06 AM, Erik Trimble wrote:
Hey folks.
I've looked around quite a bit, and I can't find something like this:
I have a bunch of older systems which use Ultra320 SCA hot-swap
connectors for their internal drives. (e.g. v20z and similar)
I'd love to be able to use modern
I think I asked this before but apparently have lost track of the
answers I got.
I'm wanting a general rule of thumb for how often to `scrub'.
My setup is a home NAS and general zfs server so it does not see heavy
use.
I'm up to build 129 and do update fairly often, just the last few
builds
On Feb 19, 2010, at 8:35 AM, Edward Ned Harvey wrote:
One more thing I’d like to add here:
The PERC cache measurably and significantly accelerates small disk writes.
However, for read operations, it is insignificant compared to system ram,
both in terms of size and speed. There is no
On 18 feb 2010, at 13.55, Phil Harman wrote:
...
Whilst the latest bug fixes put the world to rights again with respect to
correctness, it may be that some of our performance workarounds are still
unsafe (i.e. if my iSCSI client assumes all writes are synchronised to
nonvolatile storage,
On 19 feb 2010, at 17.35, Edward Ned Harvey wrote:
The PERC cache measurably and significantly accelerates small disk writes.
However, for read operations, it is insignificant compared to system ram,
both in terms of size and speed. There is no significant performance
improvement by
On 19.02.10 21:29, Marion Hakanson wrote:
felix.buenem...@googlemail.com said:
I think I'll try one of these inexpensive battery-backed PCI RAM drives from
Gigabyte and see how many IOPS they can pull.
Another poster, Tracy Bernath, got decent ZIL IOPS from an OCZ Vertex unit.
Dunno if
On Feb 19, 2010, at 4:57 PM, Ragnar Sundblad ra...@csc.kth.se wrote:
On 18 feb 2010, at 13.55, Phil Harman wrote:
...
Whilst the latest bug fixes put the world to rights again with
respect to correctness, it may be that some of our performance
workarounds are still unsafe (i.e. if my iSCSI
Hi Harry,
Our current scrubbing guideline is described here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
Run zpool scrub on a regular basis to identify data integrity problems.
If you have consumer-quality drives, consider a weekly scrubbing
schedule. If you have
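A weekly schedule like the guide suggests can be a single root crontab entry; as a config fragment (the pool name "tank" is an example):

```
# root crontab entry: scrub the pool every Sunday at 03:00
# min hour dom mon dow  command
0 3 * * 0 /usr/sbin/zpool scrub tank
```

Scrubs run at low priority, but scheduling them off-hours still avoids competing with peak load.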
On Fri, Feb 19, 2010 at 01:15:17PM -0600, David Dyer-Bennet wrote:
On Fri, February 19, 2010 13:09, David Dyer-Bennet wrote:
Anybody know what the proper geometry is for a WD1600BEKT-6-1A13? It's
not even in the data sheets any more!
any such geometry has been entirely fictitious since
On 19/02/2010 21:57, Ragnar Sundblad wrote:
On 18 feb 2010, at 13.55, Phil Harman wrote:
Whilst the latest bug fixes put the world to rights again with respect to
correctness, it may be that some of our performance workarounds are still unsafe
(i.e. if my iSCSI client assumes all writes
On FreeBSD, I avoid this issue completely by labelling either the entire disk
(via glabel(8)) or individual slices/partitions (via either glabel(8) or gpt
labels). Use the label name to build the vdevs. Then it doesn't matter where
the drive is connected, or how the device node is
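The FreeBSD labelling approach can be sketched as follows; the disk names (ada0/ada1) and label names are examples, not from the original post:

```shell
#!/bin/sh
# Label whole disks so vdev paths survive controller renumbering,
# then build the pool from the labels rather than raw device nodes.
if command -v glabel >/dev/null 2>&1; then
  glabel label disk0 /dev/ada0
  glabel label disk1 /dev/ada1
  # The pool references label/diskN, which follows the physical disk
  # no matter which port or device node it lands on after a reboot.
  zpool create tank mirror label/disk0 label/disk1
  status="labelled and pooled"
else
  status="skipped: glabel is FreeBSD-only"
fi
echo "$status"
```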
If I understand correctly, ZFS nowadays will only flush data to
non volatile storage (such as a RAID controller NVRAM), and not
all the way out to disks. (To solve performance problems with some
storage systems, and I believe that it also is the right thing
to do under normal circumstances.)
On Fri, Feb 19, 2010 at 11:17:29PM +0100, Felix Buenemann wrote:
I found the Hyperdrive 5/5M, which is a half-height drive bay sata
ramdisk with battery backup and auto-backup to compact flash at power
failure.
Promises 65,000 IOPS and thus should be great for ZIL. It's pretty
reasonable
On Fri, February 19, 2010 16:21, Daniel Carosone wrote:
On Fri, Feb 19, 2010 at 01:15:17PM -0600, David Dyer-Bennet wrote:
On Fri, February 19, 2010 13:09, David Dyer-Bennet wrote:
Anybody know what the proper geometry is for a WD1600BEKT-6-1A13?
It's
not even in the data sheets any
On 19 feb 2010, at 23.40, Eugen Leitl wrote:
On Fri, Feb 19, 2010 at 11:17:29PM +0100, Felix Buenemann wrote:
I found the Hyperdrive 5/5M, which is a half-height drive bay sata
ramdisk with battery backup and auto-backup to compact flash at power
failure.
Promises 65,000 IOPS and thus
On Fri, Feb 19, 2010 at 11:51:29PM +0100, Ragnar Sundblad wrote:
On 19 feb 2010, at 23.40, Eugen Leitl wrote:
On Fri, Feb 19, 2010 at 11:17:29PM +0100, Felix Buenemann wrote:
I found the Hyperdrive 5/5M, which is a half-height drive bay sata
ramdisk with battery backup and auto-backup to
On 19-Feb-10, at 5:40 PM, Eugen Leitl wrote:
On Fri, Feb 19, 2010 at 11:17:29PM +0100, Felix Buenemann wrote:
I found the Hyperdrive 5/5M, which is a half-height drive bay sata
ramdisk with battery backup and auto-backup to compact flash at power
failure.
Promises 65,000 IOPS and thus should
A UPS plus disabling the ZIL, or disabling synchronization, could possibly
achieve the same result (or maybe better) IOPS-wise.
Even with the fastest slog, disabling the ZIL will always be faster...
(fewer bytes to move)
This would probably work given that your computer never crashes
in an
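On Solaris of this vintage, the global switch being discussed is the zil_disable tunable; as a config fragment in /etc/system:

```
* /etc/system: disable the ZIL globally. Sync-write guarantees are
* lost on crash or power failure -- for benchmarking only, and
* dangerous for NFS/iSCSI clients that trust their commits.
set zfs:zil_disable = 1
```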
On 19 feb 2010, at 23.20, Ross Walker wrote:
On Feb 19, 2010, at 4:57 PM, Ragnar Sundblad ra...@csc.kth.se wrote:
On 18 feb 2010, at 13.55, Phil Harman wrote:
...
Whilst the latest bug fixes put the world to rights again with respect to
correctness, it may be that some of our
Hello,
How do you tell how much of your l2arc is populated? I've been looking for a
while now, can't seem to find it.
Must be easy, as this blog entry shows it over time:
http://blogs.sun.com/brendan/entry/l2arc_screenshots
And follow up, can you tell how much of each data set is in the arc
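One way to answer the first question is via the arcstats kstats: l2_size reports how many bytes the L2ARC currently holds, and l2_hits/l2_misses show how much of the read load it absorbs (this is what the blog graphs are built from). A sketch that degrades gracefully on systems without kstat:

```shell
#!/bin/sh
# Query L2ARC population and hit/miss counters from the zfs arcstats.
if command -v kstat >/dev/null 2>&1; then
  kstat -p zfs:0:arcstats:l2_size
  kstat -p zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses
  status="queried"
else
  status="skipped: kstat not available on this system"
fi
echo "$status"
```

Per-dataset residency in the ARC is harder; the arcstats are pool-global.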
These are the same as the acard devices we've discussed here
previously; earlier hyperdrive models were their own design. Very
interesting, and my personal favourite, but I don't know of anyone
actually reporting results yet with them as ZIL.
Here's one report:
On 19 feb 2010, at 23.22, Phil Harman wrote:
On 19/02/2010 21:57, Ragnar Sundblad wrote:
On 18 feb 2010, at 13.55, Phil Harman wrote:
Whilst the latest bug fixes put the world to rights again with respect to
correctness, it may be that some of our performance workarounds are still
On 19 February, 2010 - Christo Kutrovsky sent me these 0,5K bytes:
Hello,
How do you tell how much of your l2arc is populated? I've been looking for a
while now, can't seem to find it.
Must be easy, as this blog entry shows it over time:
On 20 feb 2010, at 02.34, Rob Logan wrote:
An UPS plus disabling zil, or disabling synchronization, could possibly
achieve the same result (or maybe better) iops wise.
Even with the fastest slog, disabling zil will always be faster...
(less bytes to move)
This would probably work given
On 20.02.10 01:33, Toby Thain wrote:
On 19-Feb-10, at 5:40 PM, Eugen Leitl wrote:
On Fri, Feb 19, 2010 at 11:17:29PM +0100, Felix Buenemann wrote:
I found the Hyperdrive 5/5M, which is a half-height drive bay sata
ramdisk with battery backup and auto-backup to compact flash at power