Re: [zfs-discuss] A resilver record?

2011-03-20 Thread taemun
769G resilvered on a 500G drive? I'm guessing there was a whole bunch of activity (and probably snapshot creation) happening alongside the resilver. On 20 March 2011 18:57, Ian Collins i...@ianshome.com wrote: Has anyone seen a resilver longer than this for a 500G drive in a raidz2 vdev?
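For reference, resilver progress (and the resilvered byte count that can keep climbing past the drive's capacity while the pool stays busy) is visible from zpool status; a minimal check, assuming a hypothetical pool called tank:

    zpool status -v tank
    # the scan/scrub line reports something like "resilver in progress, NN% done"
    # and the replaced disk shows an "NNNG resilvered" figure in the config section;
    # that figure counts blocks written during the resilver, not disk capacity,
    # so ongoing writes and snapshots can push it well past 500G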

Re: [zfs-discuss] External SATA drive enclosures + ZFS?

2011-02-27 Thread taemun
On 28 February 2011 02:06, Edward Ned Harvey opensolarisisdeadlongliveopensola...@nedharvey.com wrote: Take that a step further. Anything external is unreliable. I have used USB, eSATA, and Firewire external devices. They all work. The only question is for how long. eSATA has no need

Re: [zfs-discuss] deduplication requirements

2011-02-07 Thread taemun
On 6 February 2011 01:34, Michael michael.armstr...@gmail.com wrote: Hi guys, I'm currently running 2 zpools each in a raidz1 configuration, totalling around 16TB of usable data. I'm running it all on an OpenSolaris-based box with 2GB memory and an old Athlon 64 3700 CPU, I understand this is
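A back-of-the-envelope check of why 2GB of RAM is nowhere near enough for dedup at this scale (assuming the commonly quoted rule of thumb of roughly 320 bytes of DDT per unique block, and an average 128 KiB block size, both of which are only estimates):

    16 TiB / 128 KiB per block  =  2^27  ~  134 million unique blocks
    134 million blocks x 320 B  ~  40 GiB of dedup table

With only 2GB of RAM the DDT lives almost entirely on the pool, and every write or free turns into extra random reads to fetch DDT entries.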

Re: [zfs-discuss] Replace block devices to increase pool size

2011-02-06 Thread taemun
If autoexpand = on, then yes.
    zpool get autoexpand pool
    zpool set autoexpand=on pool
The expansion is vdev specific, so if you replaced the mirror first, you'd get that much (the extra 2TB) without touching the raidz. Cheers, On 7 February 2011 01:41, Achim Wolpers achim...@googlemail.com
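A slightly fuller sketch of the sequence being described, with hypothetical device names:

    zpool set autoexpand=on pool
    zpool replace pool c1t0d0 c2t0d0   # swap in a larger disk; repeat for the other half of the mirror
    zpool list pool                    # extra capacity appears once every disk in that vdev is larger

Each top-level vdev only grows once all of its member disks have been replaced with bigger ones, which is why the mirror can expand independently of the raidz.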

Re: [zfs-discuss] ZFS and spindle speed (7.2k / 10k / 15k)

2011-02-02 Thread taemun
Uhm. Higher RPM = higher linear speed of the head above the platter = higher throughput. If the bit pitch (ie the size of each bit on the platter) is the same, then surely a higher linear speed corresponds with a larger number of bits per second? So if all other things being equal includes the
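A rough worked version of that argument (idealised; real drive models also differ in platter density and diameter): at a given radius the linear head speed is proportional to RPM, so with the same bit pitch

    15000 rpm / 7200 rpm  ~  2.1x the bits passing under the head per second

i.e. roughly double the sustained transfer rate, before rotational latency and seek times are even considered.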

Re: [zfs-discuss] Lower latency ZIL Option?: SSD behind Controller BB Write Cache

2011-01-28 Thread taemun
Comments below. On 29 January 2011 00:25, Edward Ned Harvey opensolarisisdeadlongliveopensola...@nedharvey.com wrote: This was something interesting I found recently. Apparently for flash manufacturers, flash hard drives are like the pimple on the butt of the elephant. A vast majority of

Re: [zfs-discuss] Migrating zpool to new drives with 4K Sectors

2011-01-06 Thread taemun
zpool replace will copy across onto the new disk with the same old ashift=9, whereas you want ashift=12 for 4KB drives (sector size = 2^ashift bytes). You'd need to make a new pool (or add a vdev to an existing pool) with the modified tools in order to get proper performance out of 4KB drives. On 7 January 2011

Re: [zfs-discuss] very slow boot: stuck at mounting zfs filesystems

2010-12-08 Thread taemun
Dedup? Taking a long time to boot after a hard reboot after a lockup? I'll bet that it hard locked whilst deleting some files or a dataset that was dedup'd. After the delete is started, it spends *ages* cleaning up the DDT (the table containing a list of dedup'd blocks). If you hard lock in the
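If you suspect this, the size of the dedup table can be inspected once the pool does finally come up; a quick look, assuming a hypothetical pool named tank:

    zdb -DD tank              # prints a DDT histogram: entry counts, on-disk and in-core sizes
    zpool get dedupratio tank

Freeing dedup'd data means updating (or removing) one DDT entry per block, which is what turns a large delete into hours of small random I/O.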

Re: [zfs-discuss] 3TB HDD in ZFS

2010-12-06 Thread taemun
On 6 December 2010 21:43, Fred Liu fred_...@issi.com wrote: 3TB HDD needs UEFI not the traditional BIOS and OS support. Fred Fred: http://www.anandtech.com/show/3858/the-worlds-first-3tb-hdd-seagate-goflex-desk-3tb-review/2 Namely: a feature of GPT is 64-bit LBA support. With 64-bit LBAs
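The arithmetic behind that: a legacy MBR partition table stores 32-bit LBAs, so with 512-byte sectors it tops out at

    2^32 x 512 B = 2 TiB

which is why a 3TB drive needs GPT (64-bit LBAs) to expose its full capacity. Whether the firmware is UEFI or a traditional BIOS only matters for booting from the drive, not for using it as a data disk.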

Re: [zfs-discuss] 3TB HDD in ZFS

2010-12-06 Thread taemun
On 7 December 2010 13:25, Brandon High bh...@freaks.com wrote: There shouldn't be any problems using a 3TB drive with Solaris, so long as you're using a 64-bit kernel. Recent versions of zfs should properly recognize the 4k sector size as well. I think you'll find that these 3TB, 4KiB

Re: [zfs-discuss] 3TB HDD in ZFS

2010-12-06 Thread taemun
On 7 December 2010 13:55, Tim Cook t...@cook.ms wrote: It's based on a jumper on most new drives. Can you back that up with anything? I've never seen anything but requests for a jumper that forces the firmware to export 4KiB sectors. WD EARS at launch provided the ability to force the

Re: [zfs-discuss] Seagate ST32000542AS and ZFS perf

2010-12-01 Thread taemun
On 2 December 2010 16:17, Miles Nordin car...@ivy.net wrote:
    t == taemun tae...@gmail.com writes:
    t I would note that the Seagate 2TB LP has a 0.32% Annualised
    t Failure Rate.
    bullshit.
Apologies, should have read: Specified Annualised Failure Rate

Re: [zfs-discuss] Seagate ST32000542AS and ZFS perf

2010-11-29 Thread taemun
On 29 November 2010 20:39, GMAIL piotr.jasiukaj...@gmail.com wrote: Does anyone use Seagate ST32000542AS disks with ZFS? I wonder if the performance is not that ugly as with WD Green WD20EARS disks. I'm using these drives for one of the vdevs in my pool. The pool was created with ashift=12

Re: [zfs-discuss] Recomandations

2010-11-29 Thread taemun
On 29 November 2010 15:03, Erik Trimble erik.trim...@oracle.com wrote: I'd have to re-look at the ZFS Best Practices Guide, but I'm pretty sure the recommendation of 7, 9, or 11 disks was for a raidz1, NOT a raidz2. Due to #5 above, best performance comes with an EVEN number of data disks in
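The usual reasoning behind that guideline, as a worked example (a rule of thumb, not a hard requirement): a raidz2 vdev of N disks has N-2 data disks, and each 128 KiB record is split evenly across them, so

    6-disk raidz2:  128 KiB / 4 data disks = 32 KiB per disk    (power-of-two, aligns cleanly)
    7-disk raidz2:  128 KiB / 5 data disks = 25.6 KiB per disk  (awkward split)

hence the preference for an even (ideally power-of-two) number of data disks in each raidz vdev.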

Re: [zfs-discuss] Seagate ST32000542AS and ZFS perf

2010-11-29 Thread taemun
On 30 November 2010 03:09, Krunal Desai mov...@gmail.com wrote: I assume it either: 1. does a really good job of 512-byte emulation that results in little to no performance degradation ( http://consumer.media.seagate.com/2010/06/the-digital-den/advanced-format-drives-with-smartalign/

Re: [zfs-discuss] ashift and vdevs

2010-11-26 Thread taemun
On 27 November 2010 08:05, Krunal Desai mov...@gmail.com wrote: One new thought occurred to me; I know some of the 4K drives emulate 512 byte sectors, so to the host OS, they appear to be no different than other 512b drives. With this additional layer of emulation, I would assume that ashift

[zfs-discuss] ashift and vdevs

2010-11-23 Thread taemun
zdb -C shows an ashift value on each vdev in my pool; I was just wondering if it is vdev-specific or pool-wide. Google didn't seem to know. I'm considering a mixed pool with some advanced format (4KB sector) drives, and some normal 512B sector drives, and was wondering if the ashift can be set
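To answer the per-vdev vs pool-wide part empirically: ashift is recorded per top-level vdev, and zdb prints it under each one. A quick check, with a hypothetical pool name:

    zdb -C tank | grep -w ashift
    # expect one ashift line per top-level vdev, e.g.
    #     ashift: 9     (512-byte-sector vdev)
    #     ashift: 12    (4 KiB-sector vdev)

so a mixed pool can legitimately carry different ashift values on different vdevs.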

Re: [zfs-discuss] ashift and vdevs

2010-11-23 Thread taemun
Cheers for the links, David, but you'll note that I've commented on the blog you linked (i.e., I was aware of it). The zpool-12 binary linked from http://digitaldj.net/2010/11/03/zfs-zpool-v28-openindiana-b147-4k-drives-and-you/ worked perfectly on my SX11 installation. (It threw some error on b134, so

Re: [zfs-discuss] vdev failure - pool loss ?

2010-10-19 Thread taemun
Tuomas: My understanding is that the copies functionality doesn't guarantee that the extra copies will be kept on a different vdev. So that isn't entirely true. Unfortunately. On 20 October 2010 07:33, Tuomas Leikola tuomas.leik...@gmail.com wrote: On Mon, Oct 18, 2010 at 8:18 PM, Simon Breden

Re: [zfs-discuss] Help - Deleting files from a large pool results in less free space!

2010-10-07 Thread taemun
Forgive me, but isn't this incorrect:
    mv /pool1/000 /pool1/000d
    rm -rf /pool1/000
Shouldn't that last line be rm -rf /pool1/000d ?? On 8 October 2010 04:32, Remco Lengers re...@lengers.com wrote: any snapshots? zfs list -t snapshot ..Remco On 10/7/10 7:24 PM,
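Either way, the space freed by the rm only comes back once no snapshot still references those blocks; a couple of commands that make this visible, using the pool name from the thread:

    zfs list -t snapshot -r pool1                             # any snapshots holding the old data?
    zfs list -o name,used,usedbysnapshots,avail -r pool1

If a snapshot predates the delete, the blocks simply move from the live dataset's usage into the snapshot's, and free space doesn't grow until the snapshot is destroyed.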

Re: [zfs-discuss] resilver that never finishes

2010-09-18 Thread taemun
But all of which have newer code, today, than onnv-134. On 18 September 2010 22:20, Tom Bird t...@marmot.org.uk wrote: On 18/09/10 13:06, Edho P Arief wrote: On Sat, Sep 18, 2010 at 7:01 PM, Tom Bird t...@marmot.org.uk wrote: All said and done though, we will have to live with snv_134's

Re: [zfs-discuss] New SSD options

2010-05-22 Thread taemun
Basic electronics, go! The linked capacitor from Elna ( http://www.elna.co.jp/en/capacitor/double_layer/catalog/pdf/dk_e.pdf) has an internal resistance of 30 ohms. Intel rate their 32GB X25-E at 2.4W active (we aren't interested in idle power usage; if it's idle, we don't need the capacitor in
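Filling in the arithmetic this post is heading towards, using only the figures quoted there and assuming the drive runs from the 5V rail: 2.4W at 5V means the drive draws

    I = P / V = 2.4 W / 5 V  ~  0.48 A

and pushing 0.48 A through a capacitor with ~30 ohms of internal resistance would drop

    V = I x R  ~  0.48 A x 30 ohm  ~  14.4 V

far more than the rail itself, so a supercap with that ESR cannot come close to powering the drive directly during a cache flush.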

Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread taemun
iostat -xen 1 will provide the same device names as the rest of the system (as well as show error columns). zpool status will show you which drive is in which pool. As for the controllers, cfgadm -al groups them nicely. t On 23 May 2010 03:50, Brian broco...@vt.edu wrote: I am new to
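A compact version of that sequence, with a hypothetical pool name:

    zpool status tank      # which cXtYdZ devices belong to which vdev/pool
    iostat -xen 1          # per-device throughput, service times and error counters
    cfgadm -al             # which controller/port each device hangs off
    format                 # optional: cross-check device names against disk models

Correlating a hot device in iostat with its vdev in zpool status usually identifies the disk that is dragging the pool down.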

[zfs-discuss] ZFS on-disk DDT block arrangement

2010-04-06 Thread taemun
I was wondering if someone could explain why the DDT is seemingly (from empirical observation) kept in a huge number of individual blocks, randomly written across the pool, rather than just a large binary chunk somewhere. Having been a victim of the really long times it takes to destroy a

Re: [zfs-discuss] ZFS and 4kb sector Drives (All new western digital GREEN Drives?)

2010-03-27 Thread taemun
I'm not entirely convinced there is no problem here. I had a WD EADS 1.5TB die, and the warranty replacement drive was an EARS. So, first foray into 4k sectors. I had 8x EADS in a raidz set, and had replaced the broken one with a 1.5TB Seagate 7200rpm - which was obviously faster. Just replacing back,

Re: [zfs-discuss] Q : recommendations for zpool configuration

2010-03-19 Thread taemun
A pool with a 4-wide raidz2 is a completely nonsensical idea. It has the same amount of accessible storage as two striped mirrors. And would be slower in terms of IOPS, and be harder to upgrade in the future (you'd need to keep adding four drives for every expansion with raidz2 - with mirrors you

Re: [zfs-discuss] Reading ZFS config for an extended period

2010-02-15 Thread taemun
Just thought I'd chime in for anyone who had read this - the import operation completed this time, after 60 hours of disk grinding. :)

Re: [zfs-discuss] Reading ZFS config for an extended period

2010-02-15 Thread taemun
The system in question has 8GB of ram. It never paged during the import (unless I was asleep at that point, but anyway). It ran for 52 hours, then started doing 47% kernel cpu usage. At this stage, dtrace stopped responding, and so iopattern died, as did iostat. It was also increasing ram usage

Re: [zfs-discuss] Reading ZFS config for an extended period

2010-02-13 Thread taemun
After around four days the process appeared to have stalled (no audible hard drive activity). I restarted with milestone=none; deleted /etc/zfs/zpool.cache, restarted, and went zpool import tank. (also allowed root login to ssh, so I could make new ssh sessions if required.) Now I can watch the

[zfs-discuss] Reading ZFS config for an extended period

2010-02-11 Thread taemun
Can anyone comment about whether the on-boot Reading ZFS config is any slower/better/whatever than deleting zpool.cache, rebooting and manually importing? I've been waiting more than 30 hours for this system to come up. There is a pool with 13TB of data attached. The system locked up whilst

Re: [zfs-discuss] Reading ZFS config for an extended period

2010-02-11 Thread taemun
Do you think that more RAM would help this progress faster? We've just hit 48 hours. No visible progress (although that doesn't really mean much). It is presently in a system with 8GB of ram, I could try to move the pool across to a system with 20GB of ram, if that is likely to expedite the