769G resilvered on a 500G drive? I'm guessing there was a whole bunch of
activity (and probably snapshot creation) happening alongside the resilver.
On 20 March 2011 18:57, Ian Collins i...@ianshome.com wrote:
Has anyone seen a resilver longer than this for a 500G drive in a raidz2
vdev?
On 28 February 2011 02:06, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
Take that a step further. Anything external is unreliable. I have used
USB, eSATA, and Firewire external devices. They all work. The only
question is for how long.
eSATA has no need
On 6 February 2011 01:34, Michael michael.armstr...@gmail.com wrote:
Hi guys,
I'm currently running 2 zpools each in a raidz1 configuration, totalling
around 16TB usable data. I'm running it all on an OpenSolaris based box with
2GB memory and an old Athlon 64 3700 CPU, I understand this is
If autoexpand = on, then yes.
zpool get autoexpand pool
zpool set autoexpand=on pool
The expansion is vdev specific, so if you replaced the mirror first, you'd
get that much (the extra 2TB) without touching the raidz.
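A quick way to watch this, assuming the pool is named "pool" (hypothetical name):
zpool iostat -v pool
lists capacity per top-level vdev, so you can confirm the mirror grew while the raidz stayed put.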
Cheers,
On 7 February 2011 01:41, Achim Wolpers achim...@googlemail.com
Uhm. Higher RPM = higher linear speed of the head above the platter = higher
throughput. If the bit pitch (ie the size of each bit on the platter) is the
same, then surely a higher linear speed corresponds with a larger number of
bits per second?
So if all other things being equal includes the
Comments below.
On 29 January 2011 00:25, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
This was something interesting I found recently. Apparently for flash
manufacturers, flash hard drives are like the pimple on the butt of the
elephant. A vast majority of
zfs replace will copy across onto the new disk with the same old ashift=9,
whereas you want ashift=12 for 4KB-sector drives. (sector size = 2^ashift bytes)
You'd need to make a new pool (or add a vdev to an existing pool) with the
modified tools in order to get proper performance out of 4KB drives.
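If you want to check what a pool already has (a sketch, assuming a pool named "tank"):
zdb -C tank | grep ashift
Each top-level vdev reports its own value: 9 means 512B sectors, 12 means 4KB.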
On 7 January 2011
Dedup? Taking a long time to boot after a hard reboot following a lockup?
I'll bet that it hard locked whilst deleting some files or a dataset that
was dedup'd. After the delete is started, it spends *ages* cleaning up the
DDT (the table containing a list of dedup'd blocks). If you hard lock in the
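You can gauge the size of the job beforehand; a sketch, assuming a pool named "tank":
zdb -D tank
prints a DDT summary (unique vs duplicated block counts), and zdb -DD adds a
histogram. Hundreds of millions of entries means a very long cleanup.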
On 6 December 2010 21:43, Fred Liu fred_...@issi.com wrote:
A 3TB HDD needs UEFI rather than the traditional BIOS, plus OS support.
Fred
Fred:
http://www.anandtech.com/show/3858/the-worlds-first-3tb-hdd-seagate-goflex-desk-3tb-review/2
Namely:
a feature of GPT is 64-bit LBA support. With 64-bit LBAs
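The arithmetic: traditional MBR partitioning uses 32-bit LBAs, so with 512B
sectors the biggest addressable disk is 2^32 x 512B = 2TiB. GPT's 64-bit LBAs
lift that limit, which is why a 3TB drive needs GPT, and UEFI only if you want
to boot from it.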
On 7 December 2010 13:25, Brandon High bh...@freaks.com wrote:
There shouldn't be any problems using a 3TB drive with Solaris, so
long as you're using a 64-bit kernel. Recent versions of zfs should
properly recognize the 4k sector size as well.
I think you'll find that these 3TB, 4KiB
On 7 December 2010 13:55, Tim Cook t...@cook.ms wrote:
It's based on a jumper on most new drives.
Can you back that up with anything? I've never seen anything but requests
for a jumper that forces the firmware to export 4KiB sectors.
WD EARS at launch provided the ability to force the
On 2 December 2010 16:17, Miles Nordin car...@ivy.net wrote:
t == taemun tae...@gmail.com writes:
t I would note that the Seagate 2TB LP has a 0.32% Annualised
t Failure Rate.
bullshit.
Apologies, that should have read: "Specified Annualised Failure Rate"
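For context on why a spec-sheet AFR invites scepticism: assuming 24x7 operation,
a 0.32% AFR implies an MTBF of about 8760 / 0.0032 ≈ 2.7 million hours, over 300
years. That's a figure extrapolated from accelerated testing, not something ever
observed in the field.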
On 29 November 2010 20:39, GMAIL piotr.jasiukaj...@gmail.com wrote:
Does anyone use Seagate ST32000542AS disks with ZFS?
I wonder if the performance is as ugly as with the WD Green WD20EARS
disks.
I'm using these drives for one of the vdevs in my pool. The pool was created
with ashift=12
On 29 November 2010 15:03, Erik Trimble erik.trim...@oracle.com wrote:
I'd have to re-look at the ZFS Best Practices Guide, but I'm pretty sure
the recommendation of 7, 9, or 11 disks was for a raidz1, NOT a raidz2. Due
to #5 above, best performance comes with an EVEN number of data disks in
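To make that concrete: ZFS splits each (default) 128KiB record evenly across
the data disks, so 4 data disks see a clean 128/4 = 32KiB each, whereas 5 data
disks see 25.6KiB each, which isn't a whole number of sectors, and the writes
stop lining up with sector boundaries.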
On 30 November 2010 03:09, Krunal Desai mov...@gmail.com wrote:
I assume it either:
1. does a really good job of 512-byte emulation that results in little
to no performance degradation
(http://consumer.media.seagate.com/2010/06/the-digital-den/advanced-format-drives-with-smartalign/
On 27 November 2010 08:05, Krunal Desai mov...@gmail.com wrote:
One new thought occurred to me; I know some of the 4K drives emulate 512
byte sectors, so to the host OS, they appear to be no different than other
512b drives. With this additional layer of emulation, I would assume that
ashift
zdb -C shows an ashift value on each vdev in my pool, I was just wondering if
it is vdev specific, or pool wide. Google didn't seem to know.
I'm considering a mixed pool with some advanced format (4KB sector)
drives, and some normal 512B sector drives, and was wondering if the ashift
can be set
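For what it's worth, the config dump makes it visible per vdev. An illustrative
(hand-typed, not verbatim) fragment of zdb -C output for a two-vdev pool:
children[0]:
    type: 'raidz'
    ashift: 12
children[1]:
    type: 'raidz'
    ashift: 9
so in principle a mixed pool just carries a different ashift on each top-level vdev.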
Cheers for the links David, but you'll note that I've commented on the blog
you linked (i.e., I was aware of it). The zpool-12 binary linked from
http://digitaldj.net/2010/11/03/zfs-zpool-v28-openindiana-b147-4k-drives-and-you/
worked perfectly on my SX11 installation. (It threw some error on b134, so
Tuomas:
My understanding is that the copies functionality doesn't guarantee that
the extra copies will be kept on a different vdev. So that isn't entirely
true. Unfortunately.
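For reference, turning it on is just (assuming a dataset named tank/fs):
zfs set copies=2 tank/fs
As I understand it, ZFS tries to spread the ditto copies far apart, preferring
different vdevs, but it's best-effort only: lose a whole top-level vdev and the
pool is gone regardless of copies.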
On 20 October 2010 07:33, Tuomas Leikola tuomas.leik...@gmail.com wrote:
On Mon, Oct 18, 2010 at 8:18 PM, Simon Breden
Forgive me, but isn't this incorrect:
---
mv /pool1/000 /pool1/000d
---
rm -rf /pool1/000
Shouldn't that last line be
rm -rf /pool1/000d
??
On 8 October 2010 04:32, Remco Lengers re...@lengers.com wrote:
any snapshots?
*zfs list -t snapshot*
..Remco
On 10/7/10 7:24 PM,
But all of which have newer code, today, than onnv-134.
On 18 September 2010 22:20, Tom Bird t...@marmot.org.uk wrote:
On 18/09/10 13:06, Edho P Arief wrote:
On Sat, Sep 18, 2010 at 7:01 PM, Tom Birdt...@marmot.org.uk wrote:
All said and done though, we will have to live with snv_134's
Basic electronics, go!
The linked capacitor from Elna (
http://www.elna.co.jp/en/capacitor/double_layer/catalog/pdf/dk_e.pdf) has an
internal resistance of 30 ohms.
Intel rate their 32GB X25-E at 2.4W active (we aren't interested in idle
power usage; if it's idle, we don't need the capacitor in
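Back-of-envelope, assuming a 5V supply: 2.4W at 5V is 0.48A, and pushing 0.48A
through a 30 ohm internal resistance would drop 0.48 x 30 = 14.4V, nearly three
times the rail. A capacitor with that ESR can't come close to powering the
drive while it flushes its cache.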
iostat -xen 1 will provide the same device names as the rest of the system
(as well as show error columns).
zpool status will show you which drive is in which pool.
As for the controllers, cfgadm -al groups them nicely.
t
On 23 May 2010 03:50, Brian broco...@vt.edu wrote:
I am new to
I was wondering if someone could explain why the DDT is seemingly
(from empirical observation) kept in a huge number of individual blocks,
randomly written across the pool, rather than just a large binary chunk
somewhere.
Having been victim of the really long times it takes to destroy a
I'm not entirely convinced there is no problem here. I had a WD EADS
1.5TB die; the warranty replacement drive was an EARS. So, first foray into
4k sectors.
I had 8x EADS in a raidz set, had replaced the broken one with a 1.5TB
Seagate 7200rpm - which was obviously faster.
Just replacing back,
A pool with a 4-wide raidz2 is a completely nonsensical idea. It has the
same amount of accessible storage as two striped mirrors. And would be
slower in terms of IOPS, and be harder to upgrade in the future (you'd need
to keep adding four drives for every expansion with raidz2 - with mirrors
you
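Worked through with four 2TB drives: raidz2 yields (4 - 2) x 2TB = 4TB usable;
two 2-way mirrors also yield 2 x 2TB = 4TB usable. Identical capacity, but the
mirrors read and write with roughly twice the IOPS and can be grown two disks
at a time.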
Just thought I'd chime in for anyone who had read this - the import
operation completed this time, after 60 hours of disk grinding.
:)
The system in question has 8GB of ram. It never paged during the
import (unless I was asleep at that point, but anyway).
It ran for 52 hours, then kernel CPU usage jumped to 47%. At this
stage, dtrace stopped responding, and so iopattern died, as did
iostat. It was also increasing RAM usage
After around four days the process appeared to have stalled (no
audible hard drive activity). I restarted with milestone=none, deleted
/etc/zfs/zpool.cache, restarted, and ran zpool import tank. (also
allowed root login to ssh, so I could make new ssh sessions if
required.) Now I can watch the
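For anyone wanting to replicate it, the sequence was roughly (OpenSolaris
paths, pool named tank):
add -m milestone=none to the grub kernel line and boot
rm /etc/zfs/zpool.cache
reboot
zpool import tank
With the cache file gone, boot no longer blocks on the import, and you can run
it from a session you can actually watch.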
Can anyone comment on whether the on-boot "Reading ZFS config" is
any slower/better/whatever than deleting zpool.cache, rebooting and
manually importing?
I've been waiting more than 30 hours for this system to come up. There
is a pool with 13TB of data attached. The system locked up whilst
Do you think that more RAM would help this progress faster? We've just
hit 48 hours. No visible progress (although that doesn't really mean
much).
It is presently in a system with 8GB of RAM; I could try to move the
pool across to a system with 20GB of RAM, if that is likely to
expedite the