Am Tue, 7 Feb 2017 14:50:04 -0500
schrieb "Austin S. Hemmelgarn" <ahferro...@gmail.com>:

> > Also does autodefrag works with nodatacow (ie with snapshot)  or
> > are these exclusive ?  
> I'm not sure about this one.  I would assume based on the fact that
> many other things don't work with nodatacow and that regular defrag
> doesn't work on files which are currently mapped as executable code
> that it does not, but I could be completely wrong about this too.

Technically, there's nothing that prevents autodefrag from working on
nodatacow files. The question is: is it really necessary? Conventional
file systems have no autodefrag either, and fragmentation is less of a
problem there because they are essentially nodatacow anyway. Simply
defrag the database file once and you're done. Transactional MySQL uses
huge data files, probably preallocated. It should simply work with
nodatacow.
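For illustration, setting this up could look like the following sketch
(the /var/lib/mysql path is just an example; note that the No_COW
attribute only takes effect for files created after it is set, so it's
best applied to an empty directory before the database is populated):

```shell
# Mark the (still empty) data directory nodatacow; the attribute is
# inherited by files created in it afterwards.
chattr +C /var/lib/mysql

# One-time recursive defragmentation of any files migrated in before
# the attribute was set. Best done while the database is stopped.
btrfs filesystem defragment -r /var/lib/mysql
```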

On the other hand: using snapshots clearly introduces fragmentation
over time. If autodefrag kicks in (assuming it is supported for
nodatacow), it will slowly unshare all the data. That somewhat defeats
the purpose of having snapshots in the first place for this scenario.

In conclusion, I'd recommend running some maintenance scripts from time
to time: one to re-share identical blocks, and one to defragment the
current workspace.
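A minimal sketch of such a maintenance pair, assuming duperemove is
installed and /data is the subvolume in question (both are just example
choices, and the ordering matters, since defragmentation can unshare
extents again):

```shell
# Re-share identical blocks across the subvolume and its snapshots.
# The hashfile makes repeated runs incremental instead of full rescans.
duperemove -dr --hashfile=/var/tmp/dedupe.hash /data

# Then defragment the live working set; -t sets the target extent size.
btrfs filesystem defragment -r -t 32M /data
```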

The bees daemon comes to mind here... I haven't tried it, but it sounds
like it could fill that gap:

https://github.com/Zygo/bees

Another option comes to mind: XFS now supports shared-extent (reflink)
copies. You could simply do a cold copy of the database with this
feature, resulting in the same effect as a snapshot without the other
performance problems of btrfs. Though, the fragmentation issue would
remain, and I think there's no dedupe application for XFS yet.
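Such a reflink copy is just a cp flag on a reflink-capable file system
(XFS formatted with reflink support, or btrfs); the file name here is
only an example:

```shell
# Create a space-efficient cold copy: data extents are shared until
# either copy is modified, much like a per-file snapshot.
cp --reflink=always ibdata1 ibdata1.backup
```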

-- 
Regards,
Kai

Replies to list-only preferred.

--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
