When tuning recordsize for things like databases, we try to recommend that the customer's recordsize match the I/O size of the database record.
On this filesystem I have:
- file links, which are rather static
- small files (about 8 kB) that keep changing
- big files (1 MB - 20 MB)
The field ms_smo.smo_objsize in the metaslab struct is the size of the space map data on disk.
I checked the size of the space maps in memory:

::walk spa | ::walk metaslab | ::print struct metaslab ms_map.sm_root.avl_numnodes

I got 1 GB.
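For scale: each AVL node in a loaded space map is one in-core free segment, so the memory figure above translates into a segment count. A back-of-the-envelope sketch, assuming roughly 64 bytes per in-core segment structure (a guess for illustration, not a sizeof taken from the source):

```python
# Rough estimate: how many free segments does ~1 GB of in-core space map
# imply?  SEG_BYTES is an assumed per-segment cost (AVL node plus
# start/end offsets), not an exact figure from the ZFS source.
SEG_BYTES = 64

def segments_for(mem_bytes, seg_bytes=SEG_BYTES):
    """Number of free-space segments that fit in mem_bytes of space map."""
    return mem_bytes // seg_bytes

print(segments_for(1 << 30))   # ~16.8 million segments -> heavy fragmentation
```

Millions of tiny segments in the space map is consistent with the fragmentation symptoms described below.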
But only some metaslabs are loaded:
::walk spa | ::walk metaslab | ::print struct metaslab
After a few hours with dtrace and source-code browsing I found that in my space map there are no 128K blocks left.
Try this on your ZFS:

dtrace -n 'fbt::metaslab_group_alloc:return /arg1 == -1/ {}'

If you get probes firing, then you have the same problem.
Allocating from space map works like
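The sentence trails off here, but the mechanism can be sketched. As a rough model (not the actual ZFS code): free space is kept as sorted segments, and a first-fit allocator walks them looking for room, with the allocation aligned to its largest power-of-two factor, as metaslab_ff_alloc does with align = size & -size:

```python
# Hypothetical model of first-fit allocation from a space map.
# `segments` is a sorted list of (start, end) free extents; an allocation
# of `size` must start at an offset aligned to size's largest power-of-two
# factor.  Returns the offset, or -1 on failure (like metaslab_group_alloc).

def ff_alloc(segments, size):
    align = size & -size                      # largest power of two dividing size
    for start, end in segments:
        offset = (start + align - 1) & -align # round start up to alignment
        if offset + size <= end:
            return offset
    return -1
```

With a fragmented map the walk finds no segment that can hold an aligned 128K block, so the function returns -1 and the dtrace probe above fires.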
If you want to know which block sizes you cannot allocate:

dtrace -n 'fbt::metaslab_group_alloc:entry { self->s = arg1; }' \
       -n 'fbt::metaslab_group_alloc:return /arg1 != -1/ { self->s = 0; }' \
       -n 'fbt::metaslab_group_alloc:return /self->s && arg1 == -1/ { @s = quantize(self->s); self->s = 0; }' \
       -n 'tick-10s { printa(@s); }'
Łukasz wrote:
> After a few hours with dtrace and source-code browsing I found that in my
> space map there are no 128K blocks left.
Actually, you may still have some free segments of 128k or more, but alignment requirements will not allow allocating from them. Consider the following example:
1. Space
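The example is cut off above, but the effect can be illustrated with made-up offsets: a contiguous free segment can be 128K or larger and still contain no 128K-aligned 128K block, so an aligned allocation from it fails.

```python
KB = 1024

def has_aligned_block(start, end, size):
    """True if the free extent [start, end) contains a `size` block
    starting at an offset aligned to `size`."""
    aligned = -(-start // size) * size   # round start up to a multiple of size
    return aligned + size <= end

# A 150K free segment: more than enough room for an unaligned 128K block...
start, end = 100 * KB, 250 * KB
assert end - start >= 128 * KB
# ...but the only 128K-aligned offset inside it is 128K, which leaves just
# 122K before the segment ends, so an aligned 128K allocation fails.
print(has_aligned_block(start, end, 128 * KB))   # False
```

This is why the space map can report plenty of free space while 128K allocations keep failing.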
Hello,
I'm investigating a problem with ZFS over NFS. The problems started about 2 weeks ago; most nfs threads are hanging in txg_wait_open. The sync thread is consuming one processor all the time, and the average spa_sync time from entry to return is 2 minutes. I can't use dtrace to examine
This system exhibits the symptoms of:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6495013
Moving to nevada would certainly help, as it has many more bug fixes and performance improvements over S10U3.
--
Prabahar.
Łukasz wrote:
> Hello,
> I'm investigating a problem with ZFS over