Hi,
My OmniOS host is experiencing slow ZFS writes (around 30 times slower).
iostat reports the error below even though the pool is healthy. This has been
happening for the past 4 days, though no change was made to the system. Are
the hard disks faulty?
Please help.
root@host:~# zpool status -v
pool:
Hello,
I'm updating Devilator, the performance data collector for Orca and FreeBSD, to
include ZFS monitoring. So far I am graphing the ARC and L2ARC size, L2ARC
writes and reads, and several hit/misses data pairs.
Any suggestions to improve it? What other variables can be interesting?
An
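For FreeBSD, the ARC counters come from the `kstat.zfs.misc.arcstats` sysctl tree, and a derived hit-ratio graph is often more readable than raw hits/misses counters. A minimal sketch of that calculation, run here against sample output rather than a live system (the numbers below are made up for illustration):

```shell
# Sample FreeBSD sysctl output (illustrative numbers, not real measurements).
cat > /tmp/arcstats.txt <<'EOF'
kstat.zfs.misc.arcstats.hits: 9000000
kstat.zfs.misc.arcstats.misses: 1000000
EOF

# Compute the overall ARC hit ratio. On a live system, replace the file with:
#   sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
awk -F': ' '/hits/ {h=$2} /misses/ {m=$2} END {printf "hit ratio: %.1f%%\n", 100*h/(h+m)}' /tmp/arcstats.txt
```

The same pattern works for any counter pair under `arcstats` (e.g. demand vs. prefetch hits), so the collector only needs one parsing pass per sample interval.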
On Mon, Feb 11, 2013 at 9:53 AM, Borja Marcos bor...@sarenet.es wrote:
Hello,
I'm updating Devilator, the performance data collector for Orca and
FreeBSD to include ZFS monitoring. So far I am graphing the ARC and L2ARC
size, L2ARC writes and reads, and several hit/misses data pairs.
Any
On Feb 11, 2013, at 4:56 PM, Tim Cook wrote:
The zpool iostat output has all sorts of statistics I think would be
useful/interesting to record over time.
Yes, thanks :) I think I will add them, I just started with the esoteric ones.
Anyway, still there's no better way to read it than
root@host:~# fmadm faulty
--------------- ------------------------------------ -------------- ---------
TIME            EVENT-ID                             MSG-ID         SEVERITY
--------------- ------------------------------------ -------------- ---------
Jan 05 08:21:09 7af1ab3c-83c2-602d-d4b9-f9040db6944a ZFS-8000-HC Major
Host :
Hi Roy,
You are right. So it looks like a redistribution issue. Initially there
were two vdevs with 24 disks (disks 0-23) for close to a year, after which
we added 24 more disks and created additional vdevs. The initial
vdevs are filled up, and so write speed declined. Now how to find files
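Per-vdev fill levels can be checked with `zpool list -v` (on builds that support it) or `zpool iostat -v`; a large gap in the CAP column between the old and new vdevs confirms the imbalance. A minimal sketch using awk on sample output (the pool/vdev names and numbers below are made up for illustration, not taken from the poster's system):

```shell
# Sample `zpool list -v` style output (illustrative names and numbers).
cat > /tmp/zpool_list.txt <<'EOF'
NAME        SIZE  ALLOC   FREE  CAP
tank       87.0T  44.8T  42.2T  51%
  raidz2-0  21.8T  20.7T   1.1T  95%
  raidz2-1  21.8T  20.7T   1.1T  95%
  raidz2-2  21.8T   1.7T  20.1T   7%
  raidz2-3  21.8T   1.7T  20.1T   7%
EOF

# Print each vdev's fill level; when the old vdevs sit near full,
# ZFS steers almost all new writes to the emptier ones.
awk 'NR > 2 {print $1, $5, "full"}' /tmp/zpool_list.txt
```

Until bp_rewrite (or a similar rebalancing feature) lands, the workaround usually suggested is to copy the data and delete the originals, so the new blocks get allocated across the emptier vdevs.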
Hi,
Does anyone know if there is any progress on bp_rewrite? It's much awaited to
solve the redistribution issue and to allow moving vdevs.
Regards,
Ram
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss