On Feb 08, 2008 18:09 +0000, Ricardo Correia wrote:
> On Thu, 2008-02-07 at 22:51 -0700, Neil Perrin wrote:
> > I believe when a prototype using 1K dnodes was tested it showed an
> > unacceptable (30%?) hit on some benchmarks. So if we can possibly
> > avoid increasing the dnode size (by default) then we should do so.
> 
> 
> Do you know the reason for such a performance hit?
> 
> Even with 1K dnode sizes, if the dnodes didn't have any extended
> attributes and since metadata compression is enabled, the on-disk size
> of metadnode blocks should remain approximately the same, right?

One of the default behaviours of the DMU is that if the dnode is larger,
more blkptr_t entries are added to consume whatever space is left after
the bonus buffer.  Unless the bonus buffer size was also increased in
this test, or the number of blkptrs in the dnode is limited (as we have
done with the large dnode patch), then any file data will have its block
pointers stored directly in the dnode instead of in an indirect block.
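
To make that concrete, here is a rough sketch of the arithmetic (not a
copy of the actual dnode code; nblkptr_for_dnode() and the 320-byte
bonus length are just illustrative, though if I recall correctly
SPA_BLKPTRSHIFT is 7 and DNODE_CORE_SIZE is 64 in the current headers):

  #include <stdio.h>

  /* sizeof(blkptr_t) == 1 << SPA_BLKPTRSHIFT == 128 bytes */
  #define SPA_BLKPTRSHIFT  7
  /* fixed dnode header before the blkptr array and bonus buffer */
  #define DNODE_CORE_SIZE  64

  /* Hypothetical helper: whatever space is left in the dnode after the
   * fixed header and the bonus buffer is filled with blkptr_t slots. */
  static int
  nblkptr_for_dnode(int dnode_size, int bonus_len)
  {
          return (dnode_size - DNODE_CORE_SIZE - bonus_len) >>
              SPA_BLKPTRSHIFT;
  }

  int
  main(void)
  {
          /* 512-byte dnode, 320-byte bonus buffer: 1 blkptr */
          printf("512B dnode: %d blkptrs\n", nblkptr_for_dnode(512, 320));
          /* 1K dnode, same bonus buffer: 5 blkptrs, so small files get
           * their block pointers in the dnode rather than via an
           * indirect block */
          printf("1K dnode:   %d blkptrs\n", nblkptr_for_dnode(1024, 320));
          return (0);
  }

So unless the bonus buffer grows to fill the extra 512 bytes, or nblkptr
is capped as in Kalpak's patch, a 1K dnode picks up four extra block
pointers and the on-disk layout of small files changes along with it.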

For some workloads (small file read/write access) this would be an
improvement, but for workloads which look a lot at the [dz]node attributes
(e.g. find, ls -l) it would be worse.

> Could it be because the metadnode object became twice the size (logical
> size) and therefore required another level of indirect blocks which, as
> a consequence, required an additional disk seek for each metadnode block
> read?
> 
> It would be interesting to run some benchmarks with Kalpak's large dnode
> patch.

Definitely.

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.
