Steven Whitehouse <swhit...@redhat.com> writes:

> If you are worried about read I/O, then I'd look carefully at the
> fragmentation using filefrag on a few representative files to see how
> they are laid out on disk.

A running qcow2 image, using a backing file:

- running qcow2 is 822MB with 3002 extents

- backing file is 2.2GB with 2893 extents

Another saved image, used as a read-only backing file for running VMs,
is 7.5GB with 9640 extents.
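
(For reference, those numbers come from plain filefrag runs along these
lines; the path below is only a placeholder for the actual image:

    filefrag /var/lib/libvirt/images/vm.qcow2
    vm.qcow2: 3002 extents found

filefrag -v would give the per-extent layout if more detail is useful.)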

> There are other possible causes of
> performance issues too - do you have the fs mounted noatime (which we
> recommend for most use cases) for example?

Right, I missed that one; I need to plan some downtime to remount.
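
(The plan would simply be to add noatime to the mount options, e.g. an
fstab entry roughly like the following, with the device and mount point
being placeholders for this setup:

    /dev/vg_san/lv_images  /var/lib/images  gfs2  noatime,nodiratime  0 0

hence the need to schedule the remount.)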

> Running a filesystem which is close to the capacity limit can generate
> fragmentation over time, 80% would usually be ok, and more recent
> versions of GFS2 are better than older ones at avoiding fragmentation
> in such circumstances,

It's running on Ubuntu Trusty with a 3.13 kernel and gfs2-utils 3.1.6.
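
(On the capacity side, checking usage is just a df away, e.g.:

    df -h /var/lib/images

with the mount point above as a placeholder for the GFS2 mount, to see
how close it sits to that 80% threshold.)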

Thanks.
-- 
Daniel Dehennin
Retrieve my GPG key: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
Fingerprint: 3E69 014E 5C23 50E8 9ED6  2AAD CC1E 9E5B 7A6F E2DF
