Hiya,
I'm trying to move a btrfs FS that's on a hardware raid 5 (6TB
large, 4TB of which are in use) to another machine with 3 3TB HDs
and preserve all the subvolumes/snapshots.
Is there a way to do that without using a software/hardware raid
on the new machine (that is just use btrfs
2011-03-21 16:24:50 +0000, Stephane Chazelas:
[...]
I'm trying to move a btrfs FS that's on a hardware raid 5 (6TB
large, 4TB of which are in use) to another machine with 3 3TB HDs
and preserve all the subvolumes/snapshots.
[...]
I tried one approach: export an LVM snapshot of the old fs as an
nbd
2011-03-22 18:06:29 -0600, cwillu:
I can mount it back, but not if I reload the btrfs module, in which case I
get:
[ 1961.328280] Btrfs loaded
[ 1961.328695] device fsid df4e5454eb7b1c23-7a68fc421060b18b devid 1
transid 118 /dev/loop0
[ 1961.329007] btrfs: failed to read the system
2011-03-23 12:13:45 +0700, Fajar A. Nugraha:
On Mon, Mar 21, 2011 at 11:24 PM, Stephane Chazelas
stephane.chaze...@gmail.com wrote:
AFAICT, compression is enabled at mount time and would
only apply to newly created files. Is there a way to compress
files already in a btrfs filesystem
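(Not answered in this fragment, but for the record: one route often suggested is to rewrite the files through defragmentation with compression requested. A hedged dry-run sketch, since flag support varies across btrfs-progs versions; check btrfs-filesystem(8) before running it for real:)

```shell
# Hedged sketch (not from this thread): recompress existing files in
# place by rewriting them via defragment. Printed as a dry run here;
# drop "echo" to actually run it (and verify -r/-c exist in your progs).
echo btrfs filesystem defragment -r -czlib /path/to/fs
```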
On Fri, 2011-04-01 at 14:12 +0200, Helmut Hullen wrote:
Hallo, Struan,
Du meintest am 01.04.11:
1) Is the balancing operation expected to take many hours (or days?)
on a filesystem such as this? Or are there known issues with the
algorithm that are yet to be addressed?
Maybe.
2011-04-03 21:35:00 +0200, Helmut Hullen:
Hallo, Stephane,
Du meintest am 03.04.11:
balancing about 2 TByte needed about 20 hours.
[...]
Hugo has explained the limits of regarding
dmesg | grep relocating
or (simpler) the last lines of dmesg and looking for the
2011-04-04 20:07:54 +0100, Stephane Chazelas:
[...]
4.7 more days to go. And I reckon it will have written about 9
TB to disk by that time (which is the total size of the volume,
though only 3.8TB are occupied).
Yes - that's the pessimistic estimation. As Hugo has explained it can
2011-03-28 14:17:48 +0100, Stephane Chazelas:
[...]
So here is how I transferred a 6TB btrfs on one 6TB raid5 device
(on host src) over the network onto a btrfs on 3 3TB hard drives
[...]
I then did a btrfs fi balance again and let it run through. However here is
what I get:
[...]
Sorry
2011-04-09 10:11:41 +0100, Hugo Mills:
[...]
# df /srv/MM
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdd1      5846053400 1593436456 2898463184  36% /srv/MM
# btrfs filesystem df /srv/MM
Data, RAID0: total=1.67TB, used=1.48TB
System, RAID1:
2011-04-10 18:13:51 +0800, Miao Xie:
[...]
# df /srv/MM
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdd1      5846053400 1593436456 2898463184  36% /srv/MM
# btrfs filesystem df /srv/MM
Data, RAID0: total=1.67TB, used=1.48TB
System, RAID1:
2011-04-06 12:43:50 +0100, Stephane Chazelas:
[...]
The rate is going down. It's now down to about 14kB/s
[658654.295752] btrfs: relocating block group 3919858106368 flags 20
[671932.913235] btrfs: relocating block group 3919589670912 flags 20
[686189.296126] btrfs: relocating block group
2011-04-12 15:22:57 +0800, Miao Xie:
[...]
But the df command's algorithm doesn't simulate the above allocation
correctly: the simulated allocation just allocates the stripes from
two disks, and then these two disks have no free space, but the third
disk still has 1.2TB free space, df
Hiya,
I've not found much detail on what the ssd btrfs mount option
did. Would it make sense to enable it for a fs on a USB flash
drive?
I'm using btrfs (over LVM) on a Live Linux USB stick to benefit
from btrfs's compression and am trying to improve the
performance.
Would anybody have any
2011-05-19 21:04:54 +0200, Hubert Kario:
On Wednesday 18 of May 2011 00:02:52 Stephane Chazelas wrote:
Hiya,
I've not found much detail on what the ssd btrfs mount option
did. Would it make sense to enable it for a fs on a USB flash
drive?
yes, enabling discard is pointless though
2011-05-19 15:54:23 -0600, cwillu:
[...]
Try with the ssd_spread mount option.
[...]
Thanks. I'll try that.
I wonder now what credit to give to recommendations like in
http://www.patriotmemory.com/forums/showthread.php?3696-HOWTO-Increase-write-speed-by-aligning-FAT32
2011-05-21 14:58:21 +0200, Tomasz Chmielewski:
I have a btrfs filesystem (2.6.39) which is mounted but otherwise not used:
# lsof -n|grep /mnt/btrfs
Processes with open fds are one thing. You could also have loop
devices set up on it, for instance.
#
I noticed that whenever I do sync,
2011-05-22 11:52:37 +0200, Tomasz Chmielewski:
[...]
Can you try running these commands yourself:
iostat -k 1 /your/btrfs/device
And in a second terminal:
while true; do sync ; done
To see if your btrfs makes writes on sync each time?
[...]
Yes it does. And I see:
Hiya,
I get a btrfs sub list output that I don't understand:
# btrfs sub list /backup/
ID 257 top level 5 path u1/linux/lvm+btrfs/storage/data/data
ID 260 top level 5 path u2/linux/lvm/linux/var/data
ID 262 top level 5 path u1/linux/lvm+btrfs/storage/data/snapshots/2010-10-11
ID 263 top level 5
2011-05-26 22:22:03 +0100, Stephane Chazelas:
[...]
I get a btrfs sub list output that I don't understand:
# btrfs sub list /backup/
ID 257 top level 5 path u1/linux/lvm+btrfs/storage/data/data
ID 260 top level 5 path u2/linux/lvm/linux/var/data
ID 262 top level 5 path u1/linux/lvm+btrfs
2011-05-27 10:21:03 +0200, Andreas Philipp:
[...]
What do those top-level IDs mean by the way?
The top-level ID associated with a subvolume is NOT the ID of this
particular subvolume but of the subvolume containing it. Since the
root/initial (sub-)volume always has ID 5, the subvolumes of
Is there a way to derive the subvolume ID from the stat(2)
st_dev, by the way.
# btrfs sub list .
ID 256 top level 5 path a
ID 257 top level 5 path b
# zstat +dev . a b
. 27
a 28
b 29
Are the dev numbers allocated in the same order as the
subvolids? Would there be any /sys, /proc, ioctl
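(zstat is zsh's stat builtin; the same per-path device numbers can be read with GNU stat(1). A minimal sketch on plain directories, where every entry shares one st_dev; on btrfs, each subvolume would instead report its own anonymous device number:)

```shell
# Print st_dev for each path, like the zsh "zstat +dev" call above.
# Plain directories on one filesystem all share a device number;
# btrfs subvolumes each get their own.
d=$(mktemp -d)
mkdir "$d/a" "$d/b"
for f in "$d" "$d/a" "$d/b"; do
  printf '%s\t%s\n' "$f" "$(stat -c %d "$f")"
done
```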
2011-05-27 10:12:24 +0100, Hugo Mills:
[skipped useful clarification]
That's all rather dense, and probably too much information. Hope
it's helpful, though.
[...]
It is, thanks.
How would one end up in a situation where the output of btrfs
sub list . has:
ID 287 top level 285 path data
2011-05-27 10:45:23 +0100, Hugo Mills:
[...]
How could a subvolume 285 become a top level?
How does one get a subvolume with a top-level other than 5?
This just means that subvolume 287 was created (somewhere) inside
subvolume 285.
Due to the way that the FS trees and subvolumes
2011-05-27 13:49:52 +0200, Andreas Philipp:
[...]
Thanks, I can understand that. What I don't get is how one creates
a subvol with a top-level other than 5. I might be missing the
obvious, though.
If I do:
btrfs sub create A
btrfs sub create A/B
btrfs sub snap A A/B/C
A, A/B,
Hiya,
please consider this:
~# truncate -s1G ./a
~# mkfs.btrfs ./a
~# sudo mount -o loop ./a /mnt/1
~# cd /mnt/1
/mnt/1# ls
/mnt/1# btrfs sub c A
Create subvolume './A'
/mnt/1# btrfs sub c A/B
Create subvolume 'A/B'
/mnt/1# touch A/inA A/B/inB
/mnt/1# btrfs sub snap A A.snap
Create a snapshot of
2011-06-02 01:39:41 +0100, Stephane Chazelas:
[...]
/mnt/1# zstat +device ./**/*
. 25
A 26
A/B 27
A/B/inB 27
A/inA 26
A.snap 28
A.snap/B 23
A.snap/inA 28
Why does A.snap/B have a different st_dev from A.snap's?
[...]
If I create another snap of A or A.snap, the B in there gets
Hiya,
I've got a btrfs FS with 84 subvolumes in it (some created
with btrfs sub create, some with btrfs sub snap of the other
ones). There's no nesting of subvolumes at all (all direct
children of the root subvolume).
The btrfs subvolume list is only showing 80 subvolumes. The 4
missing ones (1
2011-06-29 15:37:47 +0100, Stephane Chazelas:
[...]
I found
http://thread.gmane.org/gmane.comp.file-systems.btrfs/8123/focus=8208
which looks like the same issue, with Li Zefan saying he had a
fix, but I couldn't find any mention that it was actually fixed.
Has anybody got any update
2011-06-30 11:18:42 +0200, Andreas Philipp:
[...]
After that, I posted a patch to fix btrfs-progs, which Chris
agreed on:
http://marc.info/?l=linux-btrfs&m=129238454714319&w=2
[...]
Great. Thanks a lot
It fixes my problem indeed.
Which brings me to my next question: where to
Looks like this was missing in integration-20110626 for the
readonly snapshot patch:
diff --git a/btrfs.c b/btrfs.c
index e117172..be6ece5 100644
--- a/btrfs.c
+++ b/btrfs.c
@@ -49,7 +49,7 @@ static struct Command commands[] = {
/*
avoid short commands different for the
2011-06-30 22:55:15 +0200, Andreas Philipp:
On 30.06.2011 14:34, Stephane Chazelas wrote:
Looks like this was missing in integration-20110626 for the
readonly snapshot patch:
diff --git a/btrfs.c b/btrfs.c
index e117172..be6ece5
Hi,
I just got one of those:
[ 8203.192107] ------------[ cut here ]------------
[ 8203.192146] kernel BUG at
/media/data/mattems/src/linux-2.6-3.0.0~rc5/debian/build/source_amd64_none/fs/btrfs/inode.c:1583!
[ 8203.192210] invalid opcode: 0000 [#1] SMP
[ 8203.192246] CPU 1
[ 8203.192256]
Hiya,
I've got a server using brtfs to implement a backup system.
Basically, every night, for a few machines, I rsync (and other
methods) their file systems into one btrfs subvolume each and
then snapshot it.
On that server, the btrfs fs is on 3 3TB drives, mounted with
compress-force. Every
2011-07-03 13:38:57 -0600, cwillu:
On Sun, Jul 3, 2011 at 1:09 PM, Stephane Chazelas
stephane_chaze...@yahoo.fr wrote:
[...]
Now, on a few occasions (actually, most of the time), when I
rsynced the data (about 2.5TB) onto the external drive, the
system would crash after some time with Out
2011-07-06 09:11:11 +0100, Stephane Chazelas:
[...]
extent_map delayed_node btrfs_inode_cache btrfs_free_space_cache
(in bytes)
[...]
01:00 267192640 668595744 2321646000 3418048
01:10 267192640 668595744 2321646000 3418048
01:20 267192640 668595744 2321646000 3418048
01
2011-07-07 16:20:20 +0800, Li Zefan:
[...]
btrfs_inode_cache is a slab cache for in memory inodes, which is of
struct btrfs_inode.
[...]
Thanks Li.
If that's a cache, the system should be able to reuse the space
there when it's low on memory, shouldn't it? What would be the
conditions where
2011-07-08 11:06:08 -0400, Chris Mason:
[...]
So the invalidate opcode in btrfs-fixup-0 is the big problem. We're
either failing to write because we weren't able to allocate memory (and
not dealing with it properly) or there is a bigger problem.
Does the btrfs-fixup-0 oops come before or
2011-07-08 16:41:23 +0100, Stephane Chazelas:
2011-07-08 11:06:08 -0400, Chris Mason:
[...]
So the invalidate opcode in btrfs-fixup-0 is the big problem. We're
either failing to write because we weren't able to allocate memory (and
not dealing with it properly) or there is a bigger
2011-07-08 12:17:54 -0400, Chris Mason:
[...]
Jun 5 00:58:10 BUG: Bad page state in process rsync pfn:1bfdf
Jun 5 00:58:10 page:ea61f8c8 count:0 mapcount:0 mapping:(null) index:0x2300
Jun 5 00:58:10 page flags: 0x110(dirty)
Jun 5 00:58:10 Pid:
2011-07-08 12:15:08 -0400, Chris Mason:
[...]
You described this workload as rsync, is there anything else running?
[...]
Nope. Nothing else. And at least initially, that was onto an
empty drive so basic copy.
rsync --archive --xattrs --hard-links --numeric-ids --sparse --acls
Cheers,
2011-07-08 12:15:08 -0400, Chris Mason:
[...]
I'd definitely try without -o compress_force.
[...]
Just started that over the night.
I'm running a dstat -df at the same time and I'm seeing a
substantial amount of disk writes on the disks that hold the
source FS (and I'm rsyncing from read-only
2011-07-09 08:09:55 +0100, Stephane Chazelas:
2011-07-08 16:12:28 -0400, Chris Mason:
[...]
I'm running a dstat -df at the same time and I'm seeing a
substantial amount of disk writes on the disks that hold the
source FS (and I'm rsyncing from read-only snapshot subvolumes
in case
2011-07-08 12:17:54 -0400, Chris Mason:
[...]
How easily can you recompile your kernel with more debugging flags?
That should help narrow it down. I'm looking for CONFIG_SLAB_DEBUG (or
slub) and CONFIG_DEBUG_PAGEALLOC
[...]
I tried that (with CONFIG_DEBUG_SLAB_LEAK as well) but no
difference
2011-07-09 13:25:00 -0600, cwillu:
On Sat, Jul 9, 2011 at 11:09 AM, Stephane Chazelas
stephane_chaze...@yahoo.fr wrote:
2011-07-08 11:06:08 -0400, Chris Mason:
[...]
I would do two things. First, I'd turn off compress_force. There's no
explicit reason for this, it just seems like
2011-07-09 08:09:55 +0100, Stephane Chazelas:
2011-07-08 16:12:28 -0400, Chris Mason:
[...]
I'm running a dstat -df at the same time and I'm seeing a
substantial amount of disk writes on the disks that hold the
source FS (and I'm rsyncing from read-only snapshot subvolumes
in case
2011-07-10 08:44:34 -0400, Chris Mason:
[...]
Great, we're on the right track. Does it trigger with mount -o compress
instead of mount -o compress_force?
[...]
It does trigger. I get that same invalid opcode.
BTW, I tried with CONFIG_SLUB and slub_debug and no more useful
information than
2011-07-11 02:00:51 +0200, krz...@gmail.com :
Documentation says that btrfs-image zeros data. The feature request is for
disabling this. btrfs-image could be used to copy a filesystem to
another drive (for example with snapshots, when copying it file by
file would take much longer time or actually
2011-07-11 14:39:18 +0200, krz...@gmail.com :
2011/7/11 Stephane Chazelas stephane_chaze...@yahoo.fr:
[...]
See also
http://thread.gmane.org/gmane.comp.file-systems.btrfs/9675/focus=9820
for a way to transfer btrfs fs.
(Add a layer of copy-on-write on the original devices (LVM
2011-07-11 11:00:19 -0400, Chris Mason:
Excerpts from Stephane Chazelas's message of 2011-07-11 05:01:21 -0400:
2011-07-10 19:37:28 +0100, Stephane Chazelas:
2011-07-10 08:44:34 -0400, Chris Mason:
[...]
Great, we're on the right track. Does it trigger with mount -o compress
2011-07-11 12:25:51 -0400, Chris Mason:
[...]
Also, when I resume the rsync (so it doesn't transfer the
already transfered files), it does BUG() again.
Ok, could you please send along the exact rsync command you were
running?
[...]
I did earlier, but here it is again:
rsync --archive
2011-07-11 10:01:21 +0100, Stephane Chazelas:
[...]
Same without dmcrypt. So to sum up, BUG() reached in btrfs-fixup
thread when doing an
- rsync (though I also got (back when on ubuntu and 2.6.38) at
least one occurrence using bsdtar | bsdtar)
- of a large amount of data (with a large
Still on my btrfs-based backup system. I still see one BUG()
reached in btrfs-fixup per boot time, no memory exhaustion
anymore. There is now however something new: write performance
is down to a few bytes per second.
I've got a few processes (rsync, patched ntfsclone, shells
mostly) writing to
2011-07-16 13:12:10 +0100, Stephane Chazelas:
[...]
ntfsclone (patched to only write modified clusters):
# strace -Te write -p 4717
Process 4717 attached - interrupt to quit
write(1, "65.16 percent completed\r", 25) = 25 <0.008996>
write(1, "65.16 percent completed\r", 25) = 25 <0.743358>
write
2011-07-16 13:12:10 +0100, Stephane Chazelas:
Still on my btrfs-based backup system. I still see one BUG()
reached in btrfs-fixup per boot time, no memory exhaustion
anymore. There is now however something new: write performance
is down to a few bytes per second.
[...]
The condition
2011-07-17 10:17:37 +0100, Stephane Chazelas:
2011-07-16 13:12:10 +0100, Stephane Chazelas:
Still on my btrfs-based backup system. I still see one BUG()
reached in btrfs-fixup per boot time, no memory exhaustion
anymore. There is now however something new: write performance
is down
2011-07-18 11:39:12 +0100, Stephane Chazelas:
2011-07-17 10:17:37 +0100, Stephane Chazelas:
2011-07-16 13:12:10 +0100, Stephane Chazelas:
Still on my btrfs-based backup system. I still see one BUG()
reached in btrfs-fixup per boot time, no memory exhaustion
anymore. There is now
2011-07-18 20:37:25 +0100, Stephane Chazelas:
2011-07-18 11:39:12 +0100, Stephane Chazelas:
2011-07-17 10:17:37 +0100, Stephane Chazelas:
2011-07-16 13:12:10 +0100, Stephane Chazelas:
Still on my btrfs-based backup system. I still see one BUG()
reached in btrfs-fixup per boot time
2011-07-25 17:38:10 +0100, Jeremy Sanders:
I'm afraid this is a rather old kernel, 2.6.35.13-92.fc14.x86_64, but this
error looks rather similar to
http://www.spinics.net/lists/linux-btrfs/msg11053.html
Has this been fixed? I was simultaneously doing rsyncs into different
subvolumes
2011-07-28 07:23:43 +0100, Stephane Chazelas:
Hiya, I got the below last night. That was 3 minutes after a
bunch of rsync and ntfsclone processes started.
It's the first time it happens. I upgraded from 3.0rc6 to 3.0
yesterday.
[...]
And again this morning, though at that point only one
2011-06-06 12:19:56 +0200, Marek Otahal:
Hello,
the issue happens every time I have to hard power-off my notebook
(suspend problems).
With kernel 2.6.39 the partition is unmountable; the solution is to boot a 2.6.38
kernel which
1/ is able to mount the partition,
2/ by doing that fixes
Hiya,
Recently,
a btrfs file system of mine started to behave very poorly with
some btrfs kernel tasks taking 100% of CPU time.
# btrfs fi show /dev/sdb
Label: none uuid: b3ce8b16-970e-4ba8-b9d2-4c7de270d0f1
Total devices 3 FS bytes used 4.25TB
devid 2 size 2.73TB used
2011-09-27 10:15:09 +0100, Stephane Chazelas:
[...]
a btrfs file system of mine started to behave very poorly with
some btrfs kernel tasks taking 100% of CPU time.
# btrfs fi show /dev/sdb
Label: none uuid: b3ce8b16-970e-4ba8-b9d2-4c7de270d0f1
Total devices 3 FS bytes used 4.25TB
2011-09-27 10:15:09 +0100, Stephane Chazelas:
[...]
btrfs-transacti R running task0 963 2 0x
880143af7730 0001 ff10 880143af77b0
8801456da420 e86aa840 1000
ffe4 8801462ba800
2011-10-02 16:38:21 +0200, krz...@gmail.com :
Also I think there are no real tools to find out which
directories are subvolumes/snapshots
[...]
On my system (debian), there's the mountpoint command (from the
initscripts package from
http://savannah.nongnu.org/projects/sysvinit) that will tell you
2011-10-21, 00:39(+03), Nikos Voutsinas:
[...]
## Comment: Of course /dev/sdh is not mounted.
mount |grep /dev/sdh
root@lxc:~#
[...]
Note that mount(8) uses /etc/mtab to find out what is mounted.
And if that file is not a symlink to /proc/mounts, the
information is not necessarily correct.
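(A quick way to check which situation you're in; a sketch assuming Linux with GNU userland:)

```shell
# mount(8) historically reads /etc/mtab; only /proc/mounts reflects the
# kernel's actual mount table. If mtab is not a symlink to it, its
# contents can go stale.
if [ -L /etc/mtab ]; then
  printf 'mtab -> %s (kernel-backed, trustworthy)\n' "$(readlink /etc/mtab)"
else
  printf 'mtab is a regular file; it may disagree with /proc/mounts\n'
fi
```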
2011-10-23, 17:19(+02), Mathijs Kwik:
[...]
For this case (my laptop) I can stick to file-based rsync, but I think
some guarantees should exist at the block level. Many virtual machines
and cloud hosting services (like ec2) provide block-level snapshots.
With xfs, I can freeze the filesystem
2011-10-24, 09:59(-04), Edward Ned Harvey:
[...]
If you are reading the raw device underneath btrfs, you are
not getting the benefit of the filesystem checksumming. If
you encounter an undetected read/write error, it will silently
pass. Your data will be corrupted, you'll never know about it
2011-10-25, 07:46(-04), Edward Ned Harvey:
[...]
My suggestion to the OP of this thread is to use rsync for now, wait for
btrfs send, or switch to zfs.
[...]
rsync won't work if you've got snapshot volumes though (unless
you're prepared to have a backup copy thousands of times the
size of the
This morning, I ran into some strange behavior when doing a tail -f
on a log file.
cat log runs successfully, but
tail -f log hangs.
Running a strace shows it hanging on lseek(3, 0, SEEK_CUR...
3 being the fd for that log file.
In dmesg:
[59881.520030] INFO: task btrfs-delalloc-:763 blocked for more
I don't quite understand the behavior of btrfs fi defrag
~# truncate -s2G ~/a
~# mkfs.btrfs ~/a
nodesize 4096 leafsize 4096 sectorsize 4096 size 2.00GB
~# mount -o loop ~/a /mnt/1
/mnt/1# cd x
/mnt/1# df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/loop1 2.0G 64K
2011-10-28, 10:25(+08), Li Zefan:
[...]
# df . -h
Filesystem        Size  Used Avail Use% Mounted on
/home/lizf/tmp/a  2.0G  409M  1.4G  23% /mnt
OK, why are we not gaining space after compression though?
And I was not surprised, as there's a regression.
With this fix:
2011-10-28, 07:57(+07), Fajar A. Nugraha:
[...]
Already got 'em. Everything that tries to even think about modifying stuff
(btrfs-zero-log, btrfsck, and btrfs-debug-tree) all dump core:
Your last resort (for now, anyway) might be using restore from
Josef's btrfs-progs:
Hiya,
trying to restore a FS from a backup (tgz) on a freshly made
btrfs this morning, I got ENOSPCs after about 100MB out of 4GB
have been extracted. strace indicates that the ENOSPC are upon
the open(O_WRONLY).
Restoring with:
mkfs.btrfs /dev/mapper/VG_USB-root
mount -o compress-force,ssd $_
Hiya,
Before setting up a new RAID10 btrfs array with 6 drives, I
wanted to check how well it behaved in case of disk failure.
I've not been too impressed. Is RAID10 btrfs support only
meant for reading performance improvement?
My test method was:
Use the device-mapper to have devices mapped
2011-11-17 17:09:25 +, Stephane CHAZELAS:
[...]
Before setting up a new RAID10 btrfs array with 6 drives, I
wanted to check how well it behaved in case of disk failure.
I've not been too impressed. Is RAID10 btrfs support only
meant for reading performance improvement?
My test method
2011-09-06, 11:21(-05), Andrew Carlson:
I was doing some testing with writing out data to a btrfs filesystem
with the compress-force option. With 1 program running, I saw
btrfs-delalloc taking about 1 CPU worth of time, much as could be
expected. I then started up 2 programs at the same time,
2011-11-22, 09:47(-05), Chris Mason:
On Tue, Nov 22, 2011 at 02:30:07PM +, Stephane CHAZELAS wrote:
2011-09-06, 11:21(-05), Andrew Carlson:
I was doing some testing with writing out data to a btrfs filesystem
with the compress-force option. With 1 program running, I saw
btrfs-delalloc
Hiya,
yes, you'll probably think that is crazy, but after observing
better performance with btrfs in some workloads on md RAID5
than btrfs builtin RAID10, I thought I'd try btrfs on zfs
(in-kernel, not fuse) zvol (on raidz) just for a laugh.
While this procedure worked for ext4 and xfs, for
Hiya,
is there any recommendation out there to setup a btrfs FS on top
of hardware or software raid5 or raid6 wrt stripe/stride alignment?
From mkfs.btrfs, it doesn't look like there's much that can be
adjusted that would help, and what I'm asking might not even
make sense for btrfs but I
2011-11-23, 09:08(-08), Blair Zajac:
On Nov 23, 2011, at 9:04 AM, Stephane CHAZELAS wrote:
Hiya,
is there any recommendation out there to setup a btrfs FS on top
of hardware or software raid5 or raid6 wrt stripe/stride alignment?
Isn't the advantage of having btrfs do all the raiding
2011-12-07, 12:35(-06), BJ Quinn:
I've got a 6TB btrfs array (two 3TB drives in a RAID 0). It's
about 2/3 full and has lots of snapshots. I've written a
script that runs through the snapshots and copies the data
efficiently (rsync --inplace --no-whole-file) from the main
6TB array to a backup
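(The script described isn't shown in the thread; here is a hypothetical sketch of that approach. All names are made up, and the commands are echoed as a dry run, since the real thing needs root and a mounted btrfs:)

```shell
# Replay each snapshot onto the backup volume with rsync --inplace
# --no-whole-file, snapshotting after every pass so the backup grows by
# roughly the delta only. Dry run: drop "echo" to actually copy.
SRC=$(mktemp -d)   # stand-in for the snapshot directory on the 6TB array
BAK=$(mktemp -d)   # stand-in for the backup volume
mkdir "$SRC/2011-12-01" "$SRC/2011-12-02"
for snap in "$SRC"/*; do
  name=${snap##*/}
  echo rsync -a --inplace --no-whole-file "$snap/" "$BAK/current/"
  echo btrfs subvolume snapshot -r "$BAK/current" "$BAK/$name"
done
```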
2011-12-08, 10:49(-05), Phillip Susi:
On 12/7/2011 1:49 PM, BJ Quinn wrote:
What I need isn't really an equivalent zfs send -- my script can do
that. As I remember, zfs send was pretty slow too in a scenario like
this. What I need is to be able to clone a btrfs array somehow -- dd
would be
2012-05-28 12:37:00 -0600, Bruce Guenter:
Is there any way to mark existing snapshots as read-only? Making new
ones read-only is easy enough, but what about existing ones?
[...]
you can always do
btrfs sub snap -r vol vol-ro
btrfs sub del vol
mv vol-ro vol
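(Wrapped up as a shell function for clarity; a sketch only, not tested against a real filesystem, and only safe if nothing writes to the volume between the snapshot and the rename:)

```shell
# Turn an existing subvolume read-only in place, per the recipe above:
# snapshot it read-only, delete the original, rename the snapshot back.
# DRYRUN=1 just prints the commands; unset it to run them (needs root).
make_ro() {
  vol=$1
  run() { if [ -n "${DRYRUN:-}" ]; then printf '%s\n' "$*"; else "$@"; fi; }
  run btrfs subvolume snapshot -r "$vol" "$vol.ro" &&
  run btrfs subvolume delete "$vol" &&
  run mv "$vol.ro" "$vol"
}
DRYRUN=1 make_ro /mnt/vol
```

Note that snapshots are not recursive, so this only behaves as expected on a subvolume with no nested subvolumes.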
--
Stephane