Re: filesystem full when it's not? out of inodes? huh?

2012-03-02 Thread Brian J. Murrell
On 12-02-26 06:00 AM, Hugo Mills wrote:
 
> The option that nobody's mentioned yet is to use mixed mode. This
> is the -M or --mixed option when you create the filesystem. It's
> designed specifically for small filesystems, and removes the
> data/metadata split for more efficient packing.

Cool.

> As mentioned before, you probably need to upgrade to 3.2 or 3.3-rc5
> anyway. There were quite a few fixes in the ENOSPC/allocation area
> since then.

I've upgraded to the Ubuntu Precise kernel, which looks to be 3.2.6, with
btrfs-tools 0.19+20100601-3ubuntu3.  That would appear to be a btrfs-progs
snapshot from 2010-06-01, and (unsurprisingly) I don't see the -M option
in mkfs.btrfs.
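
(For what it's worth, once the tools are new enough, the mixed-mode route
would look roughly like the following -- only a sketch, since -M is a
mkfs-time option and therefore means recreating and repopulating the
filesystem; the device path is just the one used elsewhere in this thread:

  # mkfs.btrfs -M -L mint_usr /dev/rootvol/mint_usr
  # mount /dev/rootvol/mint_usr /usr
)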

So I went digging and I just wanted to verify what I think I am seeing.

Looking at

http://git.kernel.org/?p=linux/kernel/git/stable/linux-stable.git;a=commit;h=67377734fd24c32cbdfeb697c2e2bd7fed519e75

it would appear that the mixed data+metadata code landed in the kernel
back in September 2010.  Is that correct?

And looking at

http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-progs.git;a=commit;h=b8802ae3fa0c70d4cfc3287ed07479925973b0ac

the userspace support for this landed in December 2010.  Is that right?

If my archeology is correct, then I only need to update my btrfs-tools,
yes?  Is 2010-06-01 really the last time the tools were considered
stable, or is Ubuntu just being conservative and/or lazy about updating?

Cheers,
b.





subvolume nomenclature

2012-03-02 Thread Brian J. Murrell
I seem to have the following subvolumes of my filesystem:

# btrfs sub li /
ID 256 top level 5 path @
ID 257 top level 5 path @home
ID 258 top level 5 path @/etc/apt/oneiric

I *think* the last one is there due to a:

# btrfsctl -s oneiric /

that I did prior to doing an upgrade.  I can't seem to figure out the
nomenclature to delete it though:

# btrfs sub de /@/etc/apt/oneiric
ERROR: error accessing '/@/etc/apt/oneiric'

I've tried lots of other combinations with no luck.

Can anyone give me a hint (or the answer :-) )?

Cheers,
b.





Re: subvolume nomenclature

2012-03-02 Thread Brian J. Murrell
On 12-03-02 08:36 AM, cwillu wrote:
 
> Try btrfs sub delete /etc/apt/oneiric, assuming that that's the path
> where you actually see it.

Well, there is a root filesystem at /etc/apt/oneiric:

# ls /etc/apt/oneiric/
bin   etc initrd.img.old  mnt   root  selinux  tmp  vmlinuz
boot  homelib opt   run   srv  usr  vmlinuz.old
dev   initrd.img  media   proc  sbin  sys  var

but it doesn't delete:

# btrfs subvolume delete /etc/apt/oneiric
Delete subvolume '/etc/apt/oneiric'
ERROR: cannot delete '/etc/apt/oneiric' - Device or resource busy

and doesn't unmount:

# umount /etc/apt/oneiric
umount: /etc/apt/oneiric: not mounted

Cheers,
b.
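
(One thing that may be worth trying -- assuming the "busy" error comes from
deleting the subvolume through the running root -- is to mount the top-level
subvolume (id 5) somewhere else and delete the snapshot via its @-relative
path there.  A sketch only; /mnt/top is a scratch mount point and the root
device name here is a placeholder:

  # mkdir -p /mnt/top
  # mount -o subvolid=5 /dev/mapper/rootvol-mint_root /mnt/top
  # btrfs subvolume delete /mnt/top/@/etc/apt/oneiric
  # umount /mnt/top

Whether that gets around the EBUSY depends on what is actually holding the
subvolume busy.)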





Re: filesystem full when it's not? out of inodes? huh?

2012-02-26 Thread Brian J. Murrell
On 12-02-26 02:37 PM, Daniel Lee wrote:
> 3.22GB + (896MB * 2) = 5GB
>
> There's no mystery here, you're simply out of space.
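
(Working that quoted arithmetic out against the btrfs fi df output from the
original report, and assuming the DUP profiles really do cost two copies:

  Data                           3.22 GB
  Metadata, DUP     2 x 896 MB = 1.75 GB
  System, DUP       2 x 8 MB   = 0.016 GB
  Metadata + System, single     ~0.012 GB
  ---------------------------------------
  total allocated               ~5.0 GB

so essentially every byte of the 5GB device is already allocated to some
chunk, even though the chunks themselves are not full.)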

Except for the mystery that I had to expand the filesystem to something
between 20GB and 50GB in order to complete the operation, after which I
could reduce it back down to 5GB.

Cheers,
b.






Re: filesystem full when it's not? out of inodes? huh?

2012-02-26 Thread Brian J. Murrell
On 12-02-26 02:19 AM, Jérôme Poulin wrote:
 
> What would be interesting is getting an eye on btrfs fi df of your
> filesystem to see what part is getting full, or maybe just do a
> balance.

I did try a balance.  As I mentioned subsequently, I ended up having
to grow the filesystem to roughly 10x its data requirement (somewhere
between 20 and 50GB) in order to get that kernel-headers .deb to unpack,
and after it unpacked I was able to shrink back down to 5G, so it seems
the problem was something worse than just less-than-ideal metadata
allocation.

> I have been running 3.0.0 for quite a while without any problem,
> metadata grew a bit too much (1.5 TB for 2 TB of data) and balance
> fixed it back to 50 GB of metadata then 20 GB after deleting some
> snapshots.

Interesting data point, thanks.

b.







Re: filesystem full when it's not? out of inodes? huh?

2012-02-26 Thread Brian J. Murrell
On 12-02-26 02:52 PM, Daniel Lee wrote:
> What's mysterious about that?

What's mysterious about needing to grow the filesystem to over 20GB to
unpack 10MB of (small, so yes, many) files?

> When you shrink it btrfs is going to throw
> away unused data to cram it all in the requested space and you had empty
> space that was taken up by the metadata allocation.

The shrinking is a secondary mystery.  It's the need for more than 20GB
of space for less than 3GB of files that's the major one.

> Did you compare
> btrfs fi df after you shrank it with before?

I didn't unfortunately.

b.





filesystem full when it's not? out of inodes? huh?

2012-02-25 Thread Brian J. Murrell
I have a 5G /usr btrfs filesystem on a 3.0.0-12-generic kernel that is
returning ENOSPC when it's only 75% full:

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/rootvol-mint_usr
  5.0G  2.8G  967M  75% /usr

And yet I can't even unpack a linux-headers package onto it, which
should need nowhere near 967MB.  dpkg says it will need 10MB:

$ sudo apt-get install -f
Reading package lists... Done
Building dependency tree   
Reading state information... Done
Correcting dependencies... Done
The following extra packages will be installed:
  linux-headers-3.0.0-16-generic
The following NEW packages will be installed:
  linux-headers-3.0.0-16-generic
0 upgraded, 1 newly installed, 0 to remove and 2 not upgraded.
264 not fully installed or removed.
Need to get 0 B/851 kB of archives.
After this operation, 10.8 MB of additional disk space will be used.
Do you want to continue [Y/n]? y
(Reading database ... 180246 files and directories currently installed.)
Unpacking linux-headers-3.0.0-16-generic (from 
.../linux-headers-3.0.0-16-generic_3.0.0-16.28_i386.deb) ...
dpkg: error processing 
/var/cache/apt/archives/linux-headers-3.0.0-16-generic_3.0.0-16.28_i386.deb 
(--unpack):
 unable to install new version of 
`/usr/src/linux-headers-3.0.0-16-generic/include/config/dvb/tuner/dib0070.h': 
No space left on device

And indeed, using dd I am able to create a 967MB file:

$ sudo dd if=/dev/zero of=/usr/bigfile bs=1M count=1000
dd: writing `/usr/bigfile': No space left on device
967+0 records in
966+0 records out
1012924416 bytes (1.0 GB) copied, 16.1545 s, 62.7 MB/s

strace yields this as the cause of the ENOSPC:

8213  rename("/usr/src/linux-headers-3.0.0-16-generic/include/config/dvb/tuner/dib0070.h.dpkg-new",
 "/usr/src/linux-headers-3.0.0-16-generic/include/config/dvb/tuner/dib0070.h" <unfinished ...>
...
8213  <... rename resumed> ) = -1 ENOSPC (No space left on device)

So this starts to feel like some kind of inode count limitation.  But I
didn't think btrfs had inode count limitations.  Here's the df stats on
the filesystem:

$ btrfs filesystem df /usr
Data: total=3.22GB, used=3.22GB
System, DUP: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=896.00MB, used=251.62MB
Metadata: total=8.00MB, used=0.00

I don't know if that's useful or not.

Any ideas?

Cheers
b.
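
(For what it's worth, the usual first suggestion when allocation rather than
data looks like the culprit is a balance, which repacks chunks and can return
under-used ones to the free pool -- a sketch, using the older syntax that the
0.19-era tools understand:

  # btrfs filesystem balance /usr
  # btrfs filesystem df /usr

though on a filesystem this close to fully allocated a balance can itself
fail with ENOSPC.)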





Re: filesystem full when it's not? out of inodes? huh?

2012-02-25 Thread Brian J. Murrell
On 12-02-25 09:37 PM, Fahrzin Hemmati wrote:

> Nope, still in heavy development, though you should upgrade to 3.2.

I recall being told I should upgrade to 2.6.36 (or was it .37 or .38) at
one time.  Seems like one should always upgrade.  :-/

> Also, the devs mentioned in several places it's not friendly to small
> drives, and I'm pretty sure 5GB is considered tiny.

But it won't ever get taken seriously if it can't be used on
regular-sized filesystems.  I shouldn't have to allocate an 80G
filesystem for 3G of data just so that the filesystem isn't tiny.

> I don't think you need to separate /usr out to it's own disk. You could
> instead create a single drive with multiple subvolumes for /, /var,
> /usr, etc.

The point is to separate filesystems that can easily fill up with
application data growth from filesystems where being filled has more
fatal effects.

That said, I don't think having /var as a subvolume in the same pool as
/ and /usr achieves that usage isolation, does it?  Isn't /var still
allowed to consume all of the space that it, / and /usr share, given
that they are all subvolumes in the same pool?

> When you have Ubuntu use btrfs for /, it creates @ and @home
> for / and /home, respectively,

Yes, I had noticed that.  I also didn't immediately see anything that
prevents /home from filling / as I describe above.

Cheers,
b.
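
(For what it's worth, per-subvolume limits are exactly what the later qgroups
feature provides; on a qgroup-capable kernel and btrfs-progs -- which the
3.0/3.2 kernels in this thread are not -- it is roughly:

  # btrfs quota enable /
  # btrfs qgroup limit 10G /home

so a runaway /home could be capped without giving it its own pool.  A sketch
only, not something available with the tools discussed here.)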





Re: filesystem full when it's not? out of inodes? huh?

2012-02-25 Thread Brian J. Murrell
On 12-02-25 09:10 PM, Fahrzin Hemmati wrote:
> btrfs is horrible for small filesystems (like a 5GB drive). df -h says
> you have 967MB available, but btrfs (at least by default) allocates 1GB
> at a time to data/metadata. This means that your 10MB file is too big
> for the current allocation and requires a new data chunk, or another
> 1GB, which you don't have.

So increasing the size of the filesystem should suffice then?  How much
bigger?  10G?  Nope.  Still not big enough:

# lvextend -L+1G /dev/rootvol/mint_usr; btrfs fi resize max /usr; df -h /usr
  Extending logical volume mint_usr to 10.00 GiB
  Logical volume mint_usr successfully resized
Resize '/usr' of 'max'
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/rootvol-mint_usr
   10G  2.8G  6.0G  32% /usr
test ~ # apt-get install -y -f
Reading package lists... Done
Building dependency tree   
Reading state information... Done
Correcting dependencies... Done
The following extra packages will be installed:
  linux-headers-3.0.0-16-generic
The following NEW packages will be installed:
  linux-headers-3.0.0-16-generic
0 upgraded, 1 newly installed, 0 to remove and 2 not upgraded.
264 not fully installed or removed.
Need to get 0 B/851 kB of archives.
After this operation, 10.8 MB of additional disk space will be used.
(Reading database ... 180246 files and directories currently installed.)
Unpacking linux-headers-3.0.0-16-generic (from 
.../linux-headers-3.0.0-16-generic_3.0.0-16.28_i386.deb) ...
dpkg: error processing 
/var/cache/apt/archives/linux-headers-3.0.0-16-generic_3.0.0-16.28_i386.deb 
(--unpack):
 unable to install new version of 
`/usr/src/linux-headers-3.0.0-16-generic/include/config/dvb/usb.h': No space 
left on device

20G maybe?  Nope:

# lvextend -L20G /dev/rootvol/mint_usr; btrfs fi resize max /usr; df -h /usr
  Extending logical volume mint_usr to 20.00 GiB
  Logical volume mint_usr successfully resized
Resize '/usr' of 'max'
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/rootvol-mint_usr
   20G  2.8G   16G  15% /usr
test ~ # apt-get install -y -f
Reading package lists... Done
Building dependency tree   
Reading state information... Done
Correcting dependencies... Done
The following extra packages will be installed:
  linux-headers-3.0.0-16-generic
The following NEW packages will be installed:
  linux-headers-3.0.0-16-generic
0 upgraded, 1 newly installed, 0 to remove and 2 not upgraded.
264 not fully installed or removed.
Need to get 0 B/851 kB of archives.
After this operation, 10.8 MB of additional disk space will be used.
(Reading database ... 180246 files and directories currently installed.)
Unpacking linux-headers-3.0.0-16-generic (from 
.../linux-headers-3.0.0-16-generic_3.0.0-16.28_i386.deb) ...
dpkg: error processing 
/var/cache/apt/archives/linux-headers-3.0.0-16-generic_3.0.0-16.28_i386.deb 
(--unpack):
 unable to install new version of 
`/usr/src/linux-headers-3.0.0-16-generic/include/config/ncpfs/packet/signing.h':
 No space left on device

Maybe 50G?  Yup:

# apt-get install -y -f
Reading package lists... Done
Building dependency tree   
Reading state information... Done
Correcting dependencies... Done
The following extra packages will be installed:
  linux-headers-3.0.0-16-generic
The following NEW packages will be installed:
  linux-headers-3.0.0-16-generic
0 upgraded, 1 newly installed, 0 to remove and 2 not upgraded.
264 not fully installed or removed.
Need to get 0 B/851 kB of archives.
After this operation, 10.8 MB of additional disk space will be used.
(Reading database ... 180246 files and directories currently installed.)
Unpacking linux-headers-3.0.0-16-generic (from 
.../linux-headers-3.0.0-16-generic_3.0.0-16.28_i386.deb) ...
Setting up linux-image-3.0.0-16-generic (3.0.0-16.28) ...
...
# df -h /usr
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/rootvol-mint_usr
   50G  2.8G   43G   7% /usr

So I guess I need a 50G btrfs filesystem for 2.8G worth of data?

Does that really seem right?  I suppose to be fair it could have been
some other value between 20G and 50G since I didn't test values in
between.  So still, I need some amount more than 20G of space to store
2.8G of data?

Surely there is something going on here other than just "btrfs sucks for
small filesystems".

b.
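
(In hindsight, the data point that would pin this down is the chunk-level
view at each failure -- something like the following after every resize
attempt, so the allocated-vs-used split is visible rather than just df's
summary:

  # btrfs filesystem df /usr
  # df -h /usr
)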








Re: efficiency of btrfs cow

2011-03-23 Thread Brian J. Murrell
On 11-03-06 11:06 AM, Calvin Walton wrote:
 
> To see exactly what's going on, you should use the btrfs filesystem df
> command to see how space is being allocated for data and metadata
> separately:

OK.  So with an empty filesystem, before my first copy (i.e. the base
from which the next copy will CoW), df reports:

Filesystem   1K-blocks  Used Available Use% Mounted on
/dev/mapper/btrfs--test-btrfs--test
                     922746880        56 922746824   1% /mnt/btrfs-test

and btrfs fi df reports:

Data: total=8.00MB, used=0.00
Metadata: total=1.01GB, used=24.00KB
System: total=12.00MB, used=4.00KB

after the first copy df and btrfs fi df report:

Filesystem   1K-blocks  Used Available Use% Mounted on
/dev/mapper/btrfs--test-btrfs--test
 922746880 121402328 801344552  14% /mnt/btrfs-test

root@linux:/mnt/btrfs-test# cat .snapshots/monthly.22/metadata/btrfs_df-stop
Data: total=110.01GB, used=109.26GB
Metadata: total=5.01GB, used=3.26GB
System: total=12.00MB, used=24.00KB

So it's clear that total usage (as reported by df) was 121,402,328KB but
Metadata has two values:

Metadata: total=5.01GB, used=3.26GB

What's the difference between total and used?  And for that matter,
what's the difference between the total and used for Data
(total=110.01GB, used=109.26GB)?

Even if I take the largest values (i.e. the total values) for Data and
Metadata (each converted to KB first) and add them up, I get
120,607,211.52 KB, which is not quite the 121,402,328 KB that df reports.
There is a 795,116.48 KB discrepancy.

In any case, which value from btrfs fi df should I be subtracting from
df's accounting to get a real accounting of the amount of data used?

Cheers,
b.
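
(My understanding, for what it's worth: "total" is the space btrfs has
reserved in chunks of each type, while "used" is what is actually occupied
inside those chunks.  So the closest thing to real data usage is the sum of
the "used" figures, here roughly:

  Data used       109.26 GB
  Metadata used     3.26 GB
  --------------------------
                  ~112.5 GB actually occupied

with most of the remaining difference from df's figure being
allocated-but-not-yet-used chunk space.)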





Re: efficiency of btrfs cow

2011-03-23 Thread Brian J. Murrell
On 11-03-23 11:53 AM, Chester wrote:
> I'm not a developer, but I think it goes something like this:
> btrfs doesn't write the filesystem on the entire device/partition at
> format time, rather, it dynamically increases the size of the
> filesystem as data is used. That's why formating a disk in btrfs can
> be so fast.

Indeed, this much is understood, which is why I am using btrfs fi df to
try to determine how much of the increase in raw device usage is the
dynamic allocation of metadata.

Cheers,
b.





btrfs fi df units

2011-03-19 Thread Brian J. Murrell
I notice that when I issue a btrfs fi df, the result is in units of GB
(for a large filesystem -- maybe it's smaller for smaller filesystems).
Is there any way to force the units?  I'd like to see KB granularity if
possible.

Cheers,
b.
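
(For what it's worth, the 2011-era tool has no unit switch that I'm aware of;
later btrfs-progs grew unit options for this command, along the lines of:

  # btrfs filesystem df -k /mnt/btrfs-test      # KiB
  # btrfs filesystem df -b /mnt/btrfs-test      # raw bytes

-- a sketch against newer tools, not something the 0.19 snapshots offer.)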





efficiency of btrfs cow

2011-03-06 Thread Brian J. Murrell
I have a backup volume on an ext4 filesystem that is using rsync and
its --link-dest option to create hard-linked incremental backups.  I
am sure everyone here is familiar with the technique, but in case anyone
isn't, each backup is effectively doing:

# cp -al /backup/previous-backup/ /backup/current-backup
# rsync -aAHX ... --exclude /backup / /backup/current-backup

The shortcoming of this, of course, is that a change of just 1 byte in a
(possibly huge) file requires the whole file to be recopied to the
backup.

btrfs and its CoW capability to the rescue -- again, no surprise to
anyone here.

So I replicated a few of the directories in my backup volume to a btrfs
volume, using snapshots for each backup to take advantage of CoW and,
with any luck, avoid duplicating an entire file where only some subset
of it has changed.

Overall, it seems I saw success.  Most backups on btrfs were smaller
than their source, and, summed over all of the replicated backups, the
usage was less.  Some, however, were significantly larger.  Here's the
analysis:

  Backup       btrfs   ext4  btrfs/ext4
  -----------  ------  -----  ---------
monthly.22:  112GiB 113GiB  98%
monthly.21:   14GiB  14GiB  95%
monthly.20:   19GiB  20GiB  94%
monthly.19:   12GiB  13GiB  94%
monthly.18:5GiB   6GiB  87%
monthly.17:   11GiB  12GiB  92%
monthly.16:8GiB  10GiB  82%
monthly.15:   16GiB  11GiB 146%
monthly.14:   19GiB  20GiB  94%
monthly.13:   21GiB  22GiB  96%
monthly.12:   61GiB  67GiB  91%
monthly.11:   24GiB  22GiB 106%
monthly.10:   22GiB  19GiB 114%
 monthly.9:   12GiB  13GiB  90%
 monthly.8:   15GiB  17GiB  91%
 monthly.7:9GiB  11GiB  87%
 monthly.6:8GiB   9GiB  85%
 monthly.5:   16GiB  18GiB  91%
 monthly.4:   13GiB  15GiB  89%
 monthly.3:   11GiB  19GiB  62%
 monthly.2:   29GiB  22GiB 134%
 monthly.1:   23GiB  24GiB  94%
 monthly.0:5GiB   5GiB  94%
 Total:  497GiB 512GiB  96%

btrfs use is calculated from the difference in the df values for the
filesystem before and after each backup.  ext4 (rsync, really) use is
calculated with du -xks on the whole backup volume which, as you know,
only counts a multiply hard-linked file's space once.

So as you can see, for the most part, btrfs and CoW was more efficient,
but in some cases (i.e. monthly.15, monthly.11, monthly.10, monthly.2)
it was less efficient.

Taking the biggest anomaly, monthly.15, a du of just that directory on
both the btrfs and ext4 filesystems shows results I would expect:

btrfs: 136,876,580 monthly.15
ext4:  142,153,928 monthly.15

Yet the before and after df results show the btrfs usage higher than
ext4.  Is there some periodic jump in overhead used by btrfs that
would account for this mysterious increased usage in some of the copies?

Any other ideas for the anomalous results?

Cheers,
b.





Re: efficiency of btrfs cow

2011-03-06 Thread Brian J. Murrell
On 11-03-06 11:06 AM, Calvin Walton wrote:
 
> There actually is such a periodic jump in overhead,

Ahh.  So my instincts were correct.

> caused by the way
> which btrfs dynamically allocates space for metadata as needed by the
> creation of new files, which it does whenever the free metadata space
> ratio reaches a threshold (it's probably more complicated than that, but
> close enough for now).

Sounds fair enough.

> To see exactly what's going on, you should use the btrfs filesystem df
> command to see how space is being allocated for data and metadata
> separately:
>
> ayu ~ # btrfs fi df /
> Data: total=266.01GB, used=249.35GB
> System, DUP: total=8.00MB, used=36.00KB
> Metadata, DUP: total=3.62GB, used=1.93GB
> ayu ~ # df -h /
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/sda4             402G  254G  145G  64% /
>
> If you use the btrfs tool's df command to account for space in your
> testing, you should get much more accurate results.

Indeed!  Unfortunately that tool seems to be completely silent on my system:

# btrfs filesystem df /mnt/btrfs-test/
# btrfs filesystem df /mnt/btrfs-test

Where /mnt/btrfs-test is where the device I created the btrfs filesystem
on is mounted, i.e.:

# grep btrfs /proc/mounts
/dev/mapper/btrfs--test-btrfs--test /mnt/btrfs-test btrfs rw,relatime 0 0

My btrfs-tools appears to be from 20101101.  The changelog says:

  * Merging upstream version 0.19+20101101.

Cheers,
b.





Re: efficiency of btrfs cow

2011-03-06 Thread Brian J. Murrell
On 11-03-06 11:17 AM, Calvin Walton wrote:
 
> To add a bit to this: if you *do not* use the --inplace option on rsync,
> rsync will rewrite the entire file, instead of updating the existing
> file!

Of course.  As I mentioned to Fajar previously, I am indeed using
--inplace when copying from the existing archive to the new btrfs archive.

> This of course negates some of the benefits of btrfs's COW support when
> doing incremental backups.

Absolutely.

b.






Re: efficiency of btrfs cow

2011-03-06 Thread Brian J. Murrell
On 11-03-06 11:02 AM, Fajar A. Nugraha wrote:
 
> If you have snapshots anyway, why not :
> - create a snapshot before each backup run
> - use the same directory (e.g. just /backup), no need to cp anything
> - add --inplace to rsync

Which is exactly what I am doing.  There is no cp involved in making
the btrfs copies of the existing backup.  It's simply rsync -aAXH ...
--inplace from the existing backup archive to the new btrfs archive.

Cheers,
b.
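
(Concretely, the scheme Fajar describes comes out to something like the
sketch below, assuming /backup is itself a btrfs subvolume; the paths are
placeholders for the real backup volume and source:

  # btrfs subvolume snapshot /backup /backup/monthly.$(date +%Y%m%d)
  # rsync -aAXH --inplace --delete --exclude /backup / /backup/

i.e. snapshot first to preserve the previous state, then let rsync --inplace
rewrite only the changed parts of the live copy, so CoW keeps unchanged
blocks shared with the snapshots.)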






how to know when all space from a snapshot delete is freed?

2011-03-02 Thread Brian J. Murrell
For some time after I issue a snapshot delete, space in the volume
continues to be freed.  It starts freeing quite quickly, then the
progress slows, and then speeds up again.

Given that the return from the snapshot delete command is immediate and
the space is freed asynchronously, how can I determine absolutely that
the snapshot has been entirely removed and the space freeing operation
is complete?

Cheers,
b.
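
(For what it's worth, current btrfs-progs have a direct answer to this -- a
sketch, not something the tools of this era offer; the snapshot path is just
an example:

  # btrfs subvolume delete /backup/monthly.22
  # btrfs subvolume sync /backup    # returns once deleted subvolumes are fully cleaned

With older tools about the best available is polling btrfs fi df until the
numbers stop changing.)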





Re: interesting use case for multiple devices and delayed raid?

2009-04-01 Thread Brian J. Murrell
On Wed, 2009-04-01 at 21:13 +1100, Dmitri Nikulin wrote:
 
> I assume you mean read bandwidth, since write bandwidth cannot be
> increased by mirroring, only striping.

No, I mean write bandwidth.  You can get increased write bandwidth from
a mirror if you (initially) write each block to only one side of it --
effectively striping -- and then update the other half of the mirror
lazily (i.e. delayed) when the filesystem has idle bandwidth.  One of
the stipulations was that the use pattern is peaks and valleys, not
sustained usage.

Yes, you would lose the data that was written to a failed mirror before
the filesystem got a chance to do the lazy mirror updating later on.
That was a stipulation in my original requirements too.

> If you intend to stripe first,
> then mirror later as time permits,

Yeah, that's one way to describe it.

> this is the kind of sophistication
> you will need to write in the program code itself.

Why?  A filesystem that already does its own mirroring and striping (as
I understand btrfs does) should be able to handle this itself.  Much
better to do it in the filesystem than for each application to have to
handle it itself.

> A filesystem is a handy abstraction, but you are by no means limited
> to using it. If you have very special needs, you can get pretty far by
> writing your own meta-filesystem to add semantics you don't have in
> your kernel filesystem of choice.

Of course.  But I am floating this idea as a feature of btrfs given that
it already has many of the components needed.

> This is handled by DragonFly BSD's HAMMER filesystem. A master gets
> written to, and asynchronously updates a slave, even over a network.
> It is transactionally consistent and virtually impossible to corrupt
> as long as the disk media is stable. However as far as I know it won't
> spread reads, so you'll still get the performance of one disk.

More importantly, it won't spread writes.

> A more complete solution, that requires no software changes, would be
> to have 3 or 4 disks. A stripe for really fast reads and writes, and
> another disk (or another stripe) to act as a slave to the data being
> written to the primary stripe. This seems to do what you want, at a
> small price premium.

No.  That's not really what I am describing at all.

I apologize if my original description was unclear.  Hopefully it is
clearer now.

b.

