I use aufs to merge multiple btrfs raid1 volumes together. This might seem
   an odd thing to do at first, but it's actually been quite useful for me
   for a few reasons:
   *  If you have, say, 8 disks in a raid10 configuration and start seeing
   strange filesystem behavior, or you simply decide to change out the
   underlying filesystem, you need an equal amount of capacity (another 8
   disks) to accomplish the migration. By using smaller 2-disk volumes and
   striping across them with aufs, I can accomplish the same task with only
   2 additional disks (see the example mount after this list). It takes a
   lot more hand-holding, but requires fewer dollars.
   * It allows me to be more comfortable running a less well-tested
   filesystem. None of the data on my volumes is "critical", but I would
   prefer not to lose it. By keeping the volumes small, filesystem
   corruption on one volume cannot magically take out the data on the
   others. I would lose some data, but not everything.
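
   For reference, the setup looks roughly like this (the paths and branch
   count below are just an illustration, not my exact layout): each 2-disk
   btrfs raid1 volume is mounted on its own, and aufs unions them into a
   single tree:

   # each /mnt/volN is a separate 2-disk btrfs raid1 filesystem
   mount -t aufs -o br=/mnt/vol1=rw:/mnt/vol2=rw:/mnt/vol3=rw,create=pmfs none /mnt/pool

   New files land on whichever branch the create policy picks, so any one
   volume can be drained and swapped out with only 2 spare disks.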

   I've been using aufs in this manner for years now, and it has been very
   nice: I've bounced between btrfs and zfs a few times, upgraded drives a
   few at a time, etc. The only thing I've run into that has caused me any
   real trouble is that none of the create policies are exactly what I
   want. pmfs is very close, but it would be nice if you could set a
   low-water mark similar to the 'low' value for the mfsrr create policy.
   Perhaps this could be called 'pmfsrr' (although I'm not sure that's
   quite the right name for it). Here is how I would envision it working:
   create=pmfsrr:low[:second]
   Selects a writable branch where the parent dir exists, same as tdp mode.
   When the parent dir exists on multiple writable branches, aufs selects
   the one with the most free space, same as mfs mode. If every branch
   where the parent dir exists has less than 'low' bytes free (a percentage
   would actually be more favorable, imo), the writable branch with the
   most free space overall is selected.
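
   To make the intended behavior concrete, here is a rough userspace sketch
   in C. This is not aufs code: branch_t, pick_branch(), and the sample
   numbers are all made up for illustration, and I'm assuming the low-water
   fallback considers every writable branch, not just the ones holding the
   parent dir:

   /* Userspace sketch of the proposed pmfsrr create policy; the types
    * and data are invented purely to illustrate the selection order. */
   #include <stdio.h>

   typedef struct {
       const char *path;       /* branch mount point          */
       int         writable;   /* branch is mounted rw        */
       int         has_parent; /* parent dir exists on branch */
       long long   free_bytes; /* free space on the branch    */
   } branch_t;

   /* Returns the index of the branch a new file should land on,
    * or -1 if there is no writable branch at all. */
   static int pick_branch(const branch_t *br, int n, long long low)
   {
       int best_parent = -1;   /* best branch holding the parent dir */
       int best_any    = -1;   /* best writable branch overall       */

       for (int i = 0; i < n; i++) {
           if (!br[i].writable)
               continue;
           if (best_any < 0 || br[i].free_bytes > br[best_any].free_bytes)
               best_any = i;
           if (br[i].has_parent &&
               (best_parent < 0 ||
                br[i].free_bytes > br[best_parent].free_bytes))
               best_parent = i;
       }

       /* pmfs part: prefer the parent-dir branch with the most free
        * space, as long as it still has at least 'low' bytes free. */
       if (best_parent >= 0 && br[best_parent].free_bytes >= low)
           return best_parent;

       /* Low-water fallback: behave like plain mfs across all
        * writable branches. */
       return best_any;
   }

   int main(void)
   {
       branch_t br[] = {
           { "/mnt/vol1", 1, 1,  2LL << 30 }, /* parent here, 2 GiB free */
           { "/mnt/vol2", 1, 0, 50LL << 30 }, /* no parent, 50 GiB free  */
       };
       long long low = 10LL << 30;            /* 10 GiB low-water mark   */

       int i = pick_branch(br, 2, low);
       printf("create on %s\n", i >= 0 ? br[i].path : "(none)");
       return 0;
   }

   With those numbers, the branch holding the parent dir is under the
   10 GiB low-water mark, so the create falls through to the branch with
   the most free space overall, just as plain mfs would pick.
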
   This would be extremely handy to automatically prevent a single branch from
   hitting 100% full while still keeping data as tightly grouped as possible.
   I apologize if this idea has already been suggested, but my searches
   didn't find anything that seemed to match, so I figured I would throw it
   out there and see what others on this list thought. If others like this
   idea and there is a better place to submit a feature request, let me
   know.
   Thanks for your time!
   --
   Michael Johnson - MJ