On Thu, 10 Jan 2013, Jim Klimov wrote:
On 2013-01-10 08:51, Jason wrote:
Hi,
One of my server's ZFS pools faulted and it shows the following:
        NAME        STATE     READ WRITE CKSUM
        backup      UNAVAIL      0     0     0  insufficient replicas
          raidz2-0  UNAVAIL      0     0     0  insufficient replicas
without accelerator (gnu dd with oflag=sync). Not bad at all.
This could be just good enough for small businesses and moderate-sized
pools.
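
For reference, a synchronous write test along these lines can be run with GNU
dd; the path, block size and count below are placeholder values, not the ones
from my runs:

  # hypothetical sync-write test against a pool (or NFS-mounted) path
  dd if=/dev/zero of=/pool/testfile bs=8k count=10000 oflag=sync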
Michael
--
Michael Hase
edition-software GmbH
http://edition-software.de
On Tue, 17 Jul 2012, Bob Friesenhahn wrote:
On Tue, 17 Jul 2012, Michael Hase wrote:
If you were to add a second vdev (i.e. stripe) then you should see very
close to 200% due to the default round-robin scheduling of the writes.
My expectation would be > 200%, as 4 disks are involved.
sorry to insist, but still no real answer...
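
For what it's worth, adding a second vdev as suggested above would look roughly
like this; the pool and device names are just placeholders, not the actual
layout under test:

  # hypothetical: add a second mirror vdev so writes get striped across both
  zpool add ptest mirror c13t5d0 c13t6d0
  # watch how the writes are spread over the vdevs
  zpool iostat -v ptest 1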
On Mon, 16 Jul 2012, Bob Friesenhahn wrote:
On Tue, 17 Jul 2012, Michael Hase wrote:
So only one thing left: mirror should read 2x
I don't think that a mirror should necessarily read 2x faster even though the
potential is there to do so.
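
One way to check whether the limit is the single reader rather than the mirror
itself is to compare one stream against two concurrent streams; file names and
sizes here are made up for illustration:

  # single sequential reader (hypothetical test file)
  dd if=/ptest/f1 of=/dev/null bs=1024k
  # two concurrent readers - combined throughput should approach 2x if the
  # mirror really schedules reads across both disks
  dd if=/ptest/f1 of=/dev/null bs=1024k &
  dd if=/ptest/f2 of=/dev/null bs=1024k &
  wait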
On Mon, 16 Jul 2012, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Michael Hase
got some strange results, please see
attachments for exact numbers and pool config:
seq write factor seq read factor
On Mon, 16 Jul 2012, Bob Friesenhahn wrote:
On Mon, 16 Jul 2012, Michael Hase wrote:
This is my understanding of zfs: it should load balance read requests even
for a single sequential reader. zfs_prefetch_disable is at its default of 0. And
I can see exactly this scaling behaviour with SAS disks
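
In case it helps, on Solaris/OpenSolaris the live value of that tunable can be
checked with mdb; this is just a sketch of the usual approach, not something
from my test setup:

  # check the live value (run as root)
  echo zfs_prefetch_disable/D | mdb -k
  # to change it persistently, the usual way is a line in /etc/system:
  #   set zfs:zfs_prefetch_disable = 1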
On Mon, 16 Jul 2012, Bob Friesenhahn wrote:
On Mon, 16 Jul 2012, Stefan Ring wrote:
It is normal for reads from mirrors to be faster than for a single disk
because reads can be scheduled from either disk, with different I/Os being
handled in parallel.
That assumes that there *are* outstanding I/Os to schedule.
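
One way to see whether both sides of the mirror are actually servicing reads is
to watch per-device I/O while the test runs; the pool name is taken from the
config further down, purely as an illustration:

  # per-vdev read distribution while the benchmark is running
  zpool iostat -v ptest 1
  # or system-wide per-disk statistics
  iostat -xn 1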
than
expected, especially for a simple mirror. Any ideas?
Thanks,
Michael
--
Michael Hase
http://edition-software.de

  pool: ptest
state: ONLINE
scan: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        ptest       ONLINE       0     0     0
          c13t4d0   ONLINE
The discussion is really old: writing many small files on an NFS-mounted ZFS
filesystem is slow without an SSD ZIL due to the synchronous nature of the NFS
protocol itself. But there is something I don't really understand. My tests on
an old Opteron box with 2 small U160 SCSI arrays and a zpool with 4 mirrors
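
For completeness, adding a separate SSD log device would look roughly like
this; pool and device names are placeholders, not the hardware described above:

  # hypothetical: attach an SSD as a separate ZIL (slog) device
  zpool add tank log c4t1d0
  # verify the log device shows up in the pool layout
  zpool status tank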
So I ended up recreating the zpool from scratch; there seems to be no chance to
repair anything. All data lost - luckily nothing really important. Never had
such an experience with mirrored volumes on SVM/ODS since Solaris 2.4.
Just to clarify things: there was no messing with the underlying disk devices
Hi Victor,
the kernel panic in bug 6424466 resulted from overwriting some areas of the
disks; in this case I would expect at least strange things - ok, not exactly a
panic. In my case there was no messing around with the underlying disks. The
fix only seems to avoid the panic and mentions no
I have a strange problem with a faulted zpool (two-way mirror):
[EMAIL PROTECTED];0]~# zpool status poolm
pool: poolm
state: FAULTED
scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        poolm       UNAVAIL      0     0     0  insufficient replicas