What does ::zio_state show?
Dave
On 08/25/10 07:41, Bryan Leaman wrote:
Hi, I've been following these forums for a long time but this is my first post.
I'm looking for some advice on debugging an issue. I've been looking at all
the bug reports and updates through b146 but I can't find a
Charles,
Is it just ZFS that hangs (or what appears to be a slowdown or
blocking), or does the whole system hang?
A couple of questions:
What does iostat show during the time period of the slowdown?
What does mpstat show during the time of the slowdown?
You can look at the metadata
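A minimal way to capture both stats during the slowdown might look like this (the interval and output paths are just examples):

```shell
# Sample disk and CPU activity every 5 seconds while the slowdown is occurring.
# Run in separate terminals (or background them) and keep the output for later.
iostat -xn 5 > /var/tmp/iostat.out &   # per-device service times and %busy
mpstat 5 > /var/tmp/mpstat.out &       # per-CPU usr/sys/idle and cross-calls
```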
Charles,
Just like UNIX, there are several ways to drill down on the problem. I
would probably start with a live crash dump (savecore -L) when you see
the problem. Another method would be to grab output from multiple stats
commands during the problem so you can drill down later. I would
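For reference, a live dump can be taken without halting the system; the target directory here is only an example (check dumpadm for where dumps normally land):

```shell
# Take a live crash dump while the system keeps running; writes the dump
# files into the given directory. Must be run as root.
savecore -L /var/crash
```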
Have you tried setting zfs_recover and aok in /etc/system or setting
them with mdb?
Read how to set via /etc/system
http://opensolaris.org/jive/thread.jspa?threadID=114906
mdb debugger
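The two approaches look roughly like this (the /etc/system settings take effect at boot; the mdb route patches the running kernel and does not persist):

```shell
# In /etc/system (requires a reboot):
#   set zfs:zfs_recover = 1
#   set aok = 1

# Or change the live kernel with mdb (immediate, not persistent):
echo "zfs_recover/W 1" | mdb -kw
echo "aok/W 1" | mdb -kw
```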
How do you know it is dedup causing the problem?
You can check to see how much dedup activity there is by looking at the threads (look for ddt):
mdb -k
::threadlist -v
or dtrace it.
fbt:zfs:ddt*:entry
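The fbt probe above can be turned into a one-liner that tallies which ddt functions are firing; a sketch:

```shell
# Count calls into the dedup table (ddt) code paths; Ctrl-C prints the tally.
dtrace -n 'fbt:zfs:ddt*:entry { @[probefunc] = count(); }'
```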
You can disable dedup. I believe existing deduped data stays deduped until
it gets overwritten. I'm not sure what send
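Disabling dedup is a per-dataset property change; pool/fs below is a placeholder dataset name:

```shell
# Stop deduplicating new writes; already-deduped blocks remain deduped
# until they are rewritten.
zfs set dedup=off pool/fs

# Confirm the setting:
zfs get dedup pool/fs
```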
Interesting thread. So how would you go about fixing this?
I suspect you have to track down the vnode, the znode_t, and eventually
modify one of the kernel buffers for the znode_phys_t. If you're left with
the decision to completely rebuild, then repairing it this way might be
the only choice some people have.
I'm working on a scenario in which file system activity appears to
cause the ARC to evict metadata. I would like to have a way to
prefer keeping the metadata in cache over ZFS file data
What I've noticed is that on import of a zpool the arc_meta_used goes up
significantly. ZFS meta data
Good idea. That provides options, but it would be nice to be able to set
a low-water mark on what can be taken away from the ARC metadata cache
without having to have something like an SSD.
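The current metadata usage and its limit can be read from the arcstats kstat, e.g.:

```shell
# Inspect ARC metadata usage vs. its limit (values in bytes).
kstat -p zfs:0:arcstats:arc_meta_used
kstat -p zfs:0:arcstats:arc_meta_limit
```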
Dave
On 10/01/10 14:02, Freddie Cash wrote:
On Fri, Oct 1, 2010 at 11:46 AM, David Blasingame Oracle
You might want to check this post.
http://opensolaris.org/jive/thread.jspa?threadID=122156
Dave
On 10/12/10 07:30, Alexander Lesle wrote:
Hello guys,
I want to build a new NAS and I am searching for a controller.
At supermicro I found this new one with the LSI 2008 controller.
a diff to list the file differences between snapshots
http://arc.opensolaris.org/caselog/PSARC/2010/105/mail
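Usage of the snapshot diff from that case would look something like this (the dataset and snapshot names are placeholders):

```shell
# List files added (+), removed (-), modified (M), or renamed (R)
# between two snapshots of the same dataset.
zfs diff pool/fs@snap1 pool/fs@snap2
```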
Dave
On 10/13/10 15:48, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of dirk schelfhout
Wanted to test
The vmdump.0 is a compressed crash dump. You will need to convert it to
a format that can be read:
# savecore -f ./vmdump.0 ./
This will create a couple of files, but the ones you will need next are
unix.0 and vmcore.0. Use mdb to print out the stack:
# mdb unix.0 vmcore.0
run the
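A few common dcmds for a first look at the dump, for example:

```shell
# Inside mdb on the extracted dump:
#   ::status - panic string and dump summary
#   ::stack  - stack trace of the panic thread
#   ::msgbuf - last kernel messages before the dump
echo "::status" | mdb unix.0 vmcore.0
echo "::stack"  | mdb unix.0 vmcore.0
```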
At 16:17, David Blasingame Oracle wrote:
Can you clarify what you mean by ZFS write performance issues? A single
kstat isn't very helpful, at least not to me. A few samples over a couple
of seconds while you are hitting the write performance issue might be
more useful. A zpool iostat 1 may also help.
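To capture the kind of samples described, something like this works (the interval and count are arbitrary):

```shell
# Pool-wide bandwidth and IOPS, one sample per second for 10 seconds.
zpool iostat 1 10

# Per-vdev breakdown, same cadence:
zpool iostat -v 1 10
```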
As far as the kstat data
Hi All,
In reading the ZFS Best practices, I'm curious if this statement is
still true about 80% utilization.
From:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
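Current pool utilization is easy to check against that threshold, e.g.:

```shell
# The CAP column shows percent of pool capacity in use; the Best Practices
# guide's advice is to keep it below roughly 80%.
zpool list
```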