Ahhh! So that's how the formula works. That makes perfect sense.
Let's take my case as a scenario:
Each of my vdevs is a 10-disk RAIDZ2 (8 data + 2 parity). Using a 128K stripe, I'll
have 128K/8 = 16K per data drive and 16K per parity drive. That
fits both 512B and 4KB sectors.
It works in my case.
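(A quick sketch of that arithmetic, using this thread's disk counts rather than any general rule:)

  # Per-disk share of a 128K record on a 10-disk RAIDZ2 (8 data + 2 parity)
  recordsize_kb=128
  data_disks=8
  per_disk_kb=$((recordsize_kb / data_disks))
  echo "${per_disk_kb}K per data disk"   # 16K, an even multiple of both 512B and 4KB sectors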
On 10/09/2010 04:24, Bill Sommerfeld wrote:
C) Does zfs send | zfs receive mean it will defrag?
Scores so far:
1 No
2 Yes
Maybe. If there is sufficient contiguous free space in the destination
pool, files may be less fragmented.
But if you do incremental sends of multiple snapshots, you may ...
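(For concreteness, the kind of send/receive being scored here looks like this; the pool and dataset names are made up for illustration:)

  zfs snapshot tank/data@defrag
  zfs send tank/data@defrag | zfs receive newpool/data
  # incremental follow-up between two snapshots:
  zfs send -i tank/data@defrag tank/data@defrag2 | zfs receive newpool/data

Whether the received copy ends up less fragmented depends on the free space layout of the destination pool, as noted above.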
Ok, now I know it's not related to the I/O performance, but to the ZFS itself.
At some time all 3 pools were locked in that way:
                 extended device statistics                      ---- errors ----
    r/s  w/s  kr/s  kw/s  wait  actv  wsvc_t  asvc_t  %w  %b  s/w  h/w  trn  tot  device
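(Those columns match Solaris 'iostat -xne' output; a periodic capture along these lines would show whether devices are wedged with nonzero actv and no completions:)

  # extended device statistics plus error counters, one sample per second
  iostat -xne 1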
I don't have any errors from fmdump or syslog.
The machine is a Sun Fire X4275; I don't use the mpt or lsi drivers.
It could be a bug in a driver, since I see this on two identical machines.
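(For reference, these are the standard Solaris places to check, not output from these machines:)

  fmdump -eV                    # FMA error-report log, verbose
  tail -50 /var/adm/messages    # recent kernel/syslog messages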
On Fri, Sep 10, 2010 at 9:51 PM, Carson Gaspar car...@taltos.org wrote:
You are both right. More below...
On Sep 10, 2010, at 2:06 PM, Piotr Jasiukajtis wrote:
On Sep 9, 2010, at 5:55 PM, Fei Xu wrote:
Just to update the status and findings.
Thanks for the update.
I've checked TLER settings and they are off by default.
I moved the source pool to another chassis and did the 3.8TB send again. This
time, no problems at all! The difference is:
1.
On Sep 9, 2010, at 6:39 AM, Marty Scholes wrote:
Erik wrote:
Actually, your biggest bottleneck will be the IOPS limits of the drives. A 7200RPM SATA drive tops out at 100 IOPS.
Yup. That's it.
So, if you need to do 62.5e6 IOPS, and the rebuild drive can do just 100
IOPS, that means you need 62.5e6 / 100 = 625,000 seconds, roughly 7.2 days,
just for the rebuild.
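(A sanity check of that arithmetic, using the figures from Erik's post:)

  # Back-of-the-envelope rebuild-time estimate
  ios=62500000      # total I/Os the rebuild must perform
  iops=100          # sustained IOPS of one 7200RPM SATA drive
  secs=$((ios / iops))
  echo "$secs seconds ~= $((secs / 86400)) days"   # 625000 seconds ~= 7 days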
By the way, in HDTune I saw that C7: Ultra DMA CRC error count is a little high, which indicates a potential connection issue. Maybe all of this is caused by the enclosure?
Bingo!
You are right. I've done a lot of tests and narrowed the defect down to the
problem hardware. The two pools work fine.
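(A side note for anyone chasing the same symptom: HDTune's C7 is SMART attribute 199, UDMA_CRC_Error_Count, which smartmontools can read directly. The device path below is only an example:)

  smartctl -A /dev/rdsk/c0t0d0 | grep -i udma_crc
  # a raw value that keeps climbing points at the cable/backplane/enclosure,
  # not at the platters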
bash-3.00# uname -a
SunOS testxx10 5.10 Generic_142910-17 i86pc i386 i86pc
bash-3.00# zpool upgrade -v
This system is currently running ZFS pool version 22.
The following versions are supported:
VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version