Hello there,
I'm planning to use btrfs for a medium-sized webserver. It is commonly
recommended to set nodatacow for database files to avoid performance
degradation. However, nodatacow apparently disables some of my main
motivations for using btrfs: checksumming and (probably) incremental
backups with send/receive (please correct me if I'm wrong on this).
Also, the databases are among the most important data on my webserver,
so it is precisely there that I would like those features to work.
My question is: are there strategies for avoiding nodatacow on databases
that are suitable and safe on a production server?
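For context, nodatacow is usually applied per-directory rather than filesystem-wide. A minimal sketch of the conventional approach I'm trying to avoid (the path is just an example):

```shell
# Conventional approach (the one this question wants to avoid):
# mark an *empty* directory No_COW so files created in it inherit
# the attribute. Note: chattr +C only takes effect on empty files
# and directories, and on btrfs it also disables checksumming for
# the affected files.
mkdir -p /var/lib/mysql
chattr +C /var/lib/mysql
lsattr -d /var/lib/mysql   # should list the 'C' attribute
```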
I thought about the following:
- in mysql/mariadb: setting "innodb_file_per_table" should avoid having
a few very big database files.
- in mysql/mariadb: adapting database schema to store blobs into
dedicated tables.
- btrfs: set autodefrag, or a cron job to regularly defrag only the
database files, to avoid performance degradation due to fragmentation
- turn on compression on either btrfs or mariadb
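Taken together, the items above might look roughly like the following config fragments. Paths, schedule, and compression choice are illustrative assumptions on my part, not tested recommendations:

```shell
# /etc/fstab entry (illustrative): autodefrag + transparent compression
# UUID=...  /var/lib/mysql  btrfs  autodefrag,compress=zstd  0 0

# Alternative to autodefrag: a nightly defrag via cron. Caveat:
# defragmenting unshares extents, so existing snapshots may consume
# more space afterwards.
# 0 3 * * * root btrfs filesystem defragment -r -czstd /var/lib/mysql

# my.cnf fragment: one tablespace file per InnoDB table
# [mysqld]
# innodb_file_per_table = 1
```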
Is this likely to give me ok-ish performance? What other possibilities
are there?
Thanks for your recommendations.
ingvar
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to [email protected]
More majordomo info at http://vger.kernel.org/majordomo-info.html