Re: cloning single-device btrfs file system onto multi-device one

2011-04-17 Thread Hubert Kario
On Monday 21 of March 2011 17:24:50 Stephane Chazelas wrote:
 Hiya,
 
 I'm trying to move a btrfs FS that's on a hardware raid 5 (6TB
 large, 4 of which are in use) to another machine with 3 3TB HDs
 and preserve all the subvolumes/snapshots.
 
 Is there a way to do that without using a software/hardware raid
 on the new machine (that is just use btrfs multi-device).
 
 If fewer than 3TB were occupied, I suppose I could just resize
 it so that it fits on one 3TB hd, then copy device to device
 onto a 3TB disk, add the 2 other ones and do a balance, but
 here, I can't do that.
 
 I suspect that if compression was enabled, the FS could fit on
 3 TB, but AFAICT, compression is enabled at mount time and would
 only apply to newly created files. Is there a way to compress
 files already in a btrfs filesystem?

You can compress files already on disk using
btrfs filesystem defragment -c /path/to/file
but defragmenting breaks the extent sharing with snapshots (at least it did
2 months ago, I don't know if that's still true)

 
 Any help would be appreciated.
 Stephane

-- 
Hubert Kario
QBS - Quality Business Software
ul. Ksawerów 30/85
02-656 Warszawa
POLAND
tel. +48 (22) 646-61-51, 646-74-24
fax +48 (22) 646-61-50


Re: cloning single-device btrfs file system onto multi-device one

2011-04-06 Thread Helmut Hullen
Hello, Evert,

You wrote on 05.04.11:

 I then did a btrfs fi balance again and let it run through. However
 here is what I get:

 $ df -h /mnt
 Filesystem            Size  Used Avail Use% Mounted on
 /dev/sdb              8.2T  3.5T  3.2T  53% /mnt

 Only 3.2T left. How would I reclaim the missing space?

 $ sudo btrfs fi show
 Label: none  uuid: ...
        Total devices 3 FS bytes used 3.43TB
        devid    4 size 2.73TB used 1.17TB path /dev/sdc
        devid    3 size 2.73TB used 1.17TB path /dev/sdb
        devid    2 size 2.70TB used 1.14TB path /dev/sda4
 $ sudo btrfs fi df /mnt
 Data, RAID0: total=3.41TB, used=3.41TB
 System, RAID1: total=16.00MB, used=232.00KB
 Metadata, RAID1: total=35.25GB, used=20.55GB

 So that kind of worked but that is of little use to me as 2TB
 kind of disappeared under my feet in the process.

 From my limited understanding, btrfs will write metadata in raid1 by
 default. So, this could be where your 2TB has gone.

 I am assuming you used raid0 for the three new disks?

No - take a look at the (shown) output of btrfs fi df /mnt. DATA is RAID0.

Best regards!
Helmut


Re: cloning single-device btrfs file system onto multi-device one

2011-04-06 Thread Helmut Hullen
Hello, Stephane,

You wrote on 28.03.11:

 I then did a btrfs fi balance again and let it run through. However
 here is what I get:

 $ df -h /mnt
 Filesystem            Size  Used Avail Use% Mounted on
 /dev/sdb              8.2T  3.5T  3.2T  53% /mnt

 Only 3.2T left. How would I reclaim the missing space?

 $ sudo btrfs fi show
 Label: none  uuid: ...
 Total devices 3 FS bytes used 3.43TB
 devid    4 size 2.73TB used 1.17TB path /dev/sdc
 devid    3 size 2.73TB used 1.17TB path /dev/sdb
 devid    2 size 2.70TB used 1.14TB path /dev/sda4
 $ sudo btrfs fi df /mnt
 Data, RAID0: total=3.41TB, used=3.41TB
 System, RAID1: total=16.00MB, used=232.00KB
 Metadata, RAID1: total=35.25GB, used=20.55GB

 So that kind of worked but that is of little use to me as 2TB
 kind of disappeared under my feet in the process.

It may not please you - I've seen this nasty effect too. Reproducible.

I presume that balance eats disk space.

But balance seems to be necessary if I want to delete a device, and  
deleting a device is one of the features I'd like to use with btrfs  
(adding a new bigger device, then deleting an old smaller device).
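
For what it's worth, the sequence I have in mind looks roughly like this
(a sketch only - device names are examples, and exact behaviour may differ
between kernel versions):

btrfs device add /dev/sdd /mnt       # add the new, bigger disk
btrfs filesystem balance /mnt        # spread existing chunks over all disks
btrfs device delete /dev/sdc /mnt    # drop the old disk; its chunks are
                                     # relocated to the remaining devices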

Best regards!
Helmut


Re: cloning single-device btrfs file system onto multi-device one

2011-04-06 Thread Arne Jansen
On 28.03.2011 15:17, Stephane Chazelas wrote:
 
 I then did a btrfs fi balance again and let it run through. However here is
 what I get:
 
 $ df -h /mnt
 Filesystem            Size  Used Avail Use% Mounted on
 /dev/sdb              8.2T  3.5T  3.2T  53% /mnt
 
 Only 3.2T left. How would I reclaim the missing space?
 
 $ sudo btrfs fi show
 Label: none  uuid: ...
 Total devices 3 FS bytes used 3.43TB
 devid    4 size 2.73TB used 1.17TB path /dev/sdc
 devid    3 size 2.73TB used 1.17TB path /dev/sdb
 devid    2 size 2.70TB used 1.14TB path /dev/sda4
 $ sudo btrfs fi df /mnt
 Data, RAID0: total=3.41TB, used=3.41TB
 System, RAID1: total=16.00MB, used=232.00KB
 Metadata, RAID1: total=35.25GB, used=20.55GB
 
 So that kind of worked but that is of little use to me as 2TB
 kind of disappeared under my feet in the process.
 
 Any idea, anyone?
 

This can just be a miscalculation. Can you please send the output
of btrfs-debug-tree -d /dev/sdc? Shouldn't be too long.

Thanks,
Arne


Re: cloning single-device btrfs file system onto multi-device one

2011-04-06 Thread Stephane Chazelas
2011-03-28 14:17:48 +0100, Stephane Chazelas:
[...]
 So here is how I transferred a 6TB btrfs on one 6TB raid5 device
 (on host src) over the network onto a btrfs on 3 3TB hard drives
[...]
 I then did a btrfs fi balance again and let it run through. However here is
 what I get:
[...]

Sorry, it didn't run through and it is still running (after 9
days) and there are indications it could still be running 8 years from
now (see other thread). There hasn't been any change in the
amount of free space reported by df since the beginning of the
balance (there still are 2TB missing).

Cheers,
Stephane


Re: cloning single-device btrfs file system onto multi-device one

2011-04-06 Thread Arne Jansen
On 06.04.2011 14:05, Stephane Chazelas wrote:
 2011-04-06 10:25:00 +0200, Arne Jansen:
 On 28.03.2011 15:17, Stephane Chazelas wrote:

 I then did a btrfs fi balance again and let it run through. However here is
 what I get:

 $ df -h /mnt
 Filesystem            Size  Used Avail Use% Mounted on
 /dev/sdb              8.2T  3.5T  3.2T  53% /mnt

 Only 3.2T left. How would I reclaim the missing space?

 $ sudo btrfs fi show
 Label: none  uuid: ...
 Total devices 3 FS bytes used 3.43TB
 devid    4 size 2.73TB used 1.17TB path /dev/sdc
 devid    3 size 2.73TB used 1.17TB path /dev/sdb
 devid    2 size 2.70TB used 1.14TB path /dev/sda4
 $ sudo btrfs fi df /mnt
 Data, RAID0: total=3.41TB, used=3.41TB
 System, RAID1: total=16.00MB, used=232.00KB
 Metadata, RAID1: total=35.25GB, used=20.55GB

 So that kind of worked but that is of little use to me as 2TB
 kind of disappeared under my feet in the process.

 Any idea, anyone?


 This can just be a miscalculation. Can you please send the output
 of btrfs-debug-tree -d /dev/sdc? Shouldn't be too long.
 [..]
 
 Hi Arne,
 
 Here it is below (compressed and b64-uuencoded as it's about 1MB
 large)
 
 begin-base64 600 bdt.log.xz

The tree says:
2 x 28.25 GB in metadata
3.41 TB in data
16 MB in system

so I'd say the calculation of Avail in df is just wrong.
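
As a rough cross-check against the numbers quoted above (my arithmetic, not
taken from the tree dump): raw capacity is 2.73 + 2.73 + 2.70 = 8.16 TB,
and raw usage is about 3.41 TB of data (RAID0, one copy) plus 2 x ~30 GB of
metadata and 2 x 16 MB of system (both RAID1), i.e. roughly 3.5 TB. That
matches the Used column, but it leaves about 4.7 TB genuinely free, not the
3.2 TB that df reports as Avail.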

-Arne




Re: cloning single-device btrfs file system onto multi-device one

2011-04-05 Thread Evert Vorster
Hi there.

From my limited understanding, btrfs will write metadata in raid1 by
default. So, this could be where your 2TB has gone.

I am assuming you used raid0 for the three new disks?

Also, hard-stopping a btrfs is a no-no...

Kind regards,
-Evert-

On Mon, Mar 28, 2011 at 6:17 AM, Stephane Chazelas
stephane.chaze...@gmail.com wrote:
 2011-03-22 18:06:29 -0600, cwillu:
  I can mount it back, but not if I reload the btrfs module, in which case I 
  get:
 
  [ 1961.328280] Btrfs loaded
  [ 1961.328695] device fsid df4e5454eb7b1c23-7a68fc421060b18b devid 1 
  transid 118 /dev/loop0
  [ 1961.329007] btrfs: failed to read the system array on loop0
  [ 1961.340084] btrfs: open_ctree failed

 Did you rescan all the loop devices (btrfs dev scan /dev/loop*) after
 reloading the module, before trying to mount again?

 Thanks. That probably was the issue, that and using too big
 files on too small volumes I'd guess.

 I've tried it in real life and it seemed to work to some extent.
 So here is how I transferred a 6TB btrfs on one 6TB raid5 device
 (on host src) over the network onto a btrfs on 3 3TB hard drives
 (on host dst):

 on src:

 lvm snapshot -L100G -n snap /dev/VG/vol
 nbd-server 12345 /dev/VG/snap

 (if you're not lucky enough to have used lvm there, you can use
 nbd-server's copy-on-write feature).

 on dst:

 nbd-client src 12345 /dev/nbd0
 mount /dev/nbd0 /mnt
 btrfs device add /dev/sdb /dev/sdc /dev/sdd /mnt
  # in reality it was /dev/sda4 (a little under 3TB), /dev/sdb,
  # /dev/sdc
 btrfs device delete /dev/nbd0 /mnt

 That was relatively fast (about 18 hours) but failed with an
 error. Apparently, it managed to fill up the 3 3TB drives (as
 shown by btrfs fi show). Usage for /dev/nbd0 was at 16MB though
 (?!)

 I then did a btrfs fi balance /mnt. I could see usage on the
 drives go down quickly. However, that was writing data onto
 /dev/nbd0 so was threatening to fill up my LVM snapshot. I then
 cancelled that by doing a hard reset on dst (couldn't find
 any other way). And then:

 Upon reboot, I mounted /dev/sdb instead of /dev/nbd0 in case
 that made a difference and then ran the

 btrfs device delete /dev/nbd0 /mnt

 again, which this time went through.

 I then did a btrfs fi balance again and let it run through. However here is
 what I get:

 $ df -h /mnt
 Filesystem            Size  Used Avail Use% Mounted on
 /dev/sdb              8.2T  3.5T  3.2T  53% /mnt

 Only 3.2T left. How would I reclaim the missing space?

 $ sudo btrfs fi show
 Label: none  uuid: ...
        Total devices 3 FS bytes used 3.43TB
        devid    4 size 2.73TB used 1.17TB path /dev/sdc
        devid    3 size 2.73TB used 1.17TB path /dev/sdb
        devid    2 size 2.70TB used 1.14TB path /dev/sda4
 $ sudo btrfs fi df /mnt
 Data, RAID0: total=3.41TB, used=3.41TB
 System, RAID1: total=16.00MB, used=232.00KB
 Metadata, RAID1: total=35.25GB, used=20.55GB

 So that kind of worked but that is of little use to me as 2TB
 kind of disappeared under my feet in the process.

 Any idea, anyone?

 Thanks
 Stephane




-- 
-Evert-


Re: cloning single-device btrfs file system onto multi-device one

2011-03-28 Thread Stephane Chazelas
2011-03-22 18:06:29 -0600, cwillu:
  I can mount it back, but not if I reload the btrfs module, in which case I 
  get:
 
  [ 1961.328280] Btrfs loaded
  [ 1961.328695] device fsid df4e5454eb7b1c23-7a68fc421060b18b devid 1 
  transid 118 /dev/loop0
  [ 1961.329007] btrfs: failed to read the system array on loop0
  [ 1961.340084] btrfs: open_ctree failed
 
 Did you rescan all the loop devices (btrfs dev scan /dev/loop*) after
 reloading the module, before trying to mount again?

Thanks. That probably was the issue, that and using too big
files on too small volumes I'd guess.

I've tried it in real life and it seemed to work to some extent.
So here is how I transferred a 6TB btrfs on one 6TB raid5 device
(on host src) over the network onto a btrfs on 3 3TB hard drives
(on host dst):

on src:

lvm snapshot -L100G -n snap /dev/VG/vol
nbd-server 12345 /dev/VG/snap

(if you're not lucky enough to have used lvm there, you can use
nbd-server's copy-on-write feature).
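
A sketch of that copy-on-write variant (hedged - I'm quoting the flag from
memory and it may differ between nbd-server versions; client writes then go
to a temporary diff file instead of the exported device):

nbd-server 12345 /dev/VG/vol -c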

on dst:

nbd-client src 12345 /dev/nbd0
mount /dev/nbd0 /mnt
btrfs device add /dev/sdb /dev/sdc /dev/sdd /mnt
  # in reality it was /dev/sda4 (a little under 3TB), /dev/sdb,
  # /dev/sdc
btrfs device delete /dev/nbd0 /mnt

That was relatively fast (about 18 hours) but failed with an
error. Apparently, it managed to fill up the 3 3TB drives (as
shown by btrfs fi show). Usage for /dev/nbd0 was at 16MB though
(?!)

I then did a btrfs fi balance /mnt. I could see usage on the
drives go down quickly. However, that was writing data onto
/dev/nbd0 so was threatening to fill up my LVM snapshot. I then
cancelled that by doing a hard reset on dst (couldn't find
any other way). And then:

Upon reboot, I mounted /dev/sdb instead of /dev/nbd0 in case
that made a difference and then ran the 

btrfs device delete /dev/nbd0 /mnt

again, which this time went through.

I then did a btrfs fi balance again and let it run through. However here is
what I get:

$ df -h /mnt
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb              8.2T  3.5T  3.2T  53% /mnt

Only 3.2T left. How would I reclaim the missing space?

$ sudo btrfs fi show
Label: none  uuid: ...
Total devices 3 FS bytes used 3.43TB
devid    4 size 2.73TB used 1.17TB path /dev/sdc
devid    3 size 2.73TB used 1.17TB path /dev/sdb
devid    2 size 2.70TB used 1.14TB path /dev/sda4
$ sudo btrfs fi df /mnt
Data, RAID0: total=3.41TB, used=3.41TB
System, RAID1: total=16.00MB, used=232.00KB
Metadata, RAID1: total=35.25GB, used=20.55GB

So that kind of worked but that is of little use to me as 2TB
kind of disappeared under my feet in the process.

Any idea, anyone?

Thanks
Stephane


Re: cloning single-device btrfs file system onto multi-device one

2011-03-28 Thread Stephane Chazelas
2011-03-23 12:13:45 +0700, Fajar A. Nugraha:
 On Mon, Mar 21, 2011 at 11:24 PM, Stephane Chazelas
 stephane.chaze...@gmail.com wrote:
  AFAICT, compression is enabled at mount time and would
  only apply to newly created files. Is there a way to compress
  files already in a btrfs filesystem?
 
 You need to select the files manually (not possible to select a
 directory), but yes, it's possible using btrfs filesystem defragment
 -c
[...]

Thanks. However I find that for files that have snapshots, it
ends up increasing disk usage instead of reducing it (size of
the file + size of the compressed file, instead of size of the
file).

If I do the btrfs fi de on both the volume and its snapshot, I
end up with some benefit only if the compression ratio is over
2 (and with more snapshots, there's little chance of getting any
benefit at all). Also, with dozens of snapshots on a 4TB volume,
it's likely to take weeks to do.
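
To illustrate the break-even with made-up numbers (purely hypothetical
sizes): a 10GB file shared with one snapshot occupies 10GB; defragmenting
both copies with -c at a 1.5:1 ratio costs 2 x 6.7GB = 13.3GB, whereas a
3:1 ratio gets it down to 2 x 3.3GB = 6.7GB.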

Is there a way around that?

Thanks
Stephane


Re: cloning single-device btrfs file system onto multi-device one

2011-03-22 Thread Stephane Chazelas
2011-03-21 16:24:50 +, Stephane Chazelas:
[...]
 I'm trying to move a btrfs FS that's on a hardware raid 5 (6TB
 large, 4 of which are in use) to another machine with 3 3TB HDs
 and preserve all the subvolumes/snapshots.
[...]

I tried one approach: export an LVM snapshot of the old fs as an
nbd device, mount it from the new machine (/dev/nbd0), then add
the new disks to the FS (btrfs device add) and then delete
/dev/nbd0, which I'd hope would relocate all the extents onto the
new disks.

I did some experiments with some loop devices but got all sorts
of results with different versions of kernels (debian unstable
2.6.37 and 2.6.38 amd64).

Here is what I did:

dd seek=512 bs=1M of=./a < /dev/null   # create sparse backing files (512M, 256M, 256M)
dd seek=256 bs=1M of=./b < /dev/null
dd seek=256 bs=1M of=./c < /dev/null
mkfs.btrfs ./a
losetup /dev/loop0 ./a
losetup /dev/loop1 ./b
losetup /dev/loop2 ./c
mount /dev/loop0 /mnt
yes | head -c 300M > /mnt/test   # write ~300M of test data
btrfs device add /dev/loop1 /mnt
btrfs device add /dev/loop2 /mnt
# btrfs filesystem balance /mnt
btrfs device delete /dev/loop0 /mnt

In 2.6.38, upon the balance as well as upon the delete, it
seemed to go into a loop, with the system at 70% I/O wait and
btrfs: found 1 extents
showing up 2 to 3 times per second in dmesg. I tried leaving it on for a few
hours and it didn't help. The only thing I could do was reboot.
Disk usage of the a, b, c files was not increasing, though
dstat -d showed some disk writing at ~500kB/s (so I suppose it
was writing the same blocks over and over and seeking a lot).

In 2.6.37, I managed to have it working once, though I don't
know how, and I never managed to reproduce it.

Upon the delete, I can see some relocations in dmesg output, but
then:

# btrfs device delete /dev/loop0 /mnt
ERROR: error removing the device '/dev/loop0'
(no error in dmesg)

Upon umount, here is what I find in dmesg:

[...]
[ 1802.357205] btrfs: relocating block group 0 flags 2
[ 1860.193351] [ cut here ]
[ 1860.193373] WARNING: at 
/build/buildd-linux-2.6_2.6.37-2-amd64-bITS0h/linux-2.6-2.6.37/debian/build/source_amd64_none/fs/btrfs/volumes.c:544
 __btrfs_close_devices+0xb5/0xd0 [btrfs]()
[ 1860.193379] Hardware name: MacBookPro4,1
[ 1860.193382] Modules linked in: btrfs libcrc32c hidp vboxnetadp vboxnetflt 
vboxdrv ip6table_filter ip6_tables ebtable_nat ebtables acpi_cpufreq mperf 
cpufreq_powersave cpufreq_userspace cpufreq_conservative cpufreq_stats 
ipt_MASQUERADE iptable_nat nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 xt_state 
nf_conntrack ipt_REJECT xt_tcpudp iptable_filter ip_tables x_tables bridge stp 
parport_pc ppdev lp parport sco bnep rfcomm l2cap kvm_intel binfmt_misc kvm 
deflate ctr twofish_generic twofish_x86_64 twofish_common camellia serpent 
blowfish cast5 des_generic cbc cryptd aes_x86_64 aes_generic xcbc rmd160 
sha512_generic sha256_generic sha1_generic hmac crypto_null af_key fuse nfsd 
exportfs nfs lockd fscache nfs_acl auth_rpcgss sunrpc loop dm_crypt 
snd_hda_codec_realtek snd_hda_intel snd_hda_codec snd_hwdep snd_pcm_oss 
snd_mixer_oss snd_pcm uvcvideo videodev nouveau btusb bluetooth snd_seq_midi 
lib80211_crypt_tkip snd_rawmidi snd_seq_midi_event v4l1_compat rfkill snd_seq 
bcm5974 wl(P) ttm drm_kms_helper v4l2_compat_ioctl32 snd_timer snd_seq_device 
drm i2c_i801 i2c_algo_bit snd tpm_tis soundcore video snd_page_alloc lib80211 
joydev i2c_core tpm tpm_bios battery ac applesmc input_polldev evdev pcspkr 
mbp_nvidia_bl output power_supply processor thermal_sys button ext4 mbcache 
jbd2 crc16 raid10 raid456 async_raid6_recov async_pq raid6_pq async_xor xor 
async_memcpy async_tx raid1 raid0 multipath linear md_mod nbd dm_mirror 
dm_region_hash dm_log dm_mod zlib_deflate crc32c sg sd_mod sr_mod cdrom 
crc_t10dif hid_apple usbhid hid ata_generic sata_sil24 uhci_hcd ata_piix libata 
ehci_hcd scsi_mod usbcore sky2 firewire_ohci firewire_core crc_itu_t nls_base 
[last unloaded: uinput]
[ 1860.193550] Pid: 14808, comm: umount Tainted: PW   2.6.37-2-amd64 #1
[ 1860.193552] Call Trace:
[ 1860.193561]  [81047084] ? warn_slowpath_common+0x78/0x8c
[ 1860.193577]  [a0c74a4b] ? __btrfs_close_devices+0xb5/0xd0 [btrfs]
[ 1860.193593]  [a0c74a83] ? btrfs_close_devices+0x1d/0x70 [btrfs]
[ 1860.193610]  [a0c53e64] ? close_ctree+0x2cd/0x32f [btrfs]
[ 1860.193616]  [8110580d] ? dispose_list+0xa7/0xb9
[ 1860.193627]  [a0c3d1f3] ? btrfs_put_super+0x10/0x1d [btrfs]
[ 1860.193633]  [810f5c67] ? generic_shutdown_super+0x5c/0xd4
[ 1860.193638]  [810f5d1e] ? kill_anon_super+0x9/0x40
[ 1860.193642]  [810f5794] ? deactivate_locked_super+0x1e/0x3d
[ 1860.193647]  [8110928e] ? sys_umount+0x2cf/0x2fa
[ 1860.193653]  [81009a12] ? system_call_fastpath+0x16/0x1b
[ 1860.193656] ---[ end trace 4e4b8320dc6e70cc ]---


I can mount it back, but not if I reload the btrfs module, in which case I get:

[ 1961.328280] Btrfs loaded
[ 1961.328695] device fsid df4e5454eb7b1c23-7a68fc421060b18b devid 1 transid 
118 /dev/loop0
[ 

Re: cloning single-device btrfs file system onto multi-device one

2011-03-22 Thread cwillu
 I can mount it back, but not if I reload the btrfs module, in which case I 
 get:

 [ 1961.328280] Btrfs loaded
 [ 1961.328695] device fsid df4e5454eb7b1c23-7a68fc421060b18b devid 1 transid 
 118 /dev/loop0
 [ 1961.329007] btrfs: failed to read the system array on loop0
 [ 1961.340084] btrfs: open_ctree failed

Did you rescan all the loop devices (btrfs dev scan /dev/loop*) after
reloading the module, before trying to mount again?
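
Something along these lines (illustrative only - adjust the device glob and
mountpoint to whatever you actually use):

rmmod btrfs && modprobe btrfs
btrfs device scan /dev/loop*
mount /dev/loop0 /mnt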


Re: cloning single-device btrfs file system onto multi-device one

2011-03-22 Thread Fajar A. Nugraha
On Mon, Mar 21, 2011 at 11:24 PM, Stephane Chazelas
stephane.chaze...@gmail.com wrote:
 AFAICT, compression is enabled at mount time and would
 only apply to newly created files. Is there a way to compress
 files already in a btrfs filesystem?

You need to select the files manually (not possible to select a
directory), but yes, it's possible using btrfs filesystem defragment
-c

# mount -o loop /tmp/test.img /mnt/tmp
# cd /mnt/tmp
# dd if=/dev/zero of=100M.bin bs=1M count=100;sync;df -h .
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 1.20833 s, 86.8 MB/s
Filesystem            Size  Used Avail Use% Mounted on
/dev/loop0            1.0G  101M  794M  12% /mnt/tmp
# /sbin/btrfs fi de -c /mnt/tmp/100M.bin;sync;df -h .
Filesystem            Size  Used Avail Use% Mounted on
/dev/loop0            1.0G  3.5M  891M   1% /mnt/tmp

For a whole filesystem, you might be able to automate it using shell
script with find . -type f ...
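
A rough sketch of what I mean (untested, and the mountpoint is only an
example - it just runs the defragment on every regular file, one at a time):

find /mnt/tmp -xdev -type f -exec btrfs filesystem defragment -c {} \;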

-- 
Fajar


cloning single-device btrfs file system onto multi-device one

2011-03-21 Thread Stephane Chazelas
Hiya,

I'm trying to move a btrfs FS that's on a hardware raid 5 (6TB
large, 4 of which are in use) to another machine with 3 3TB HDs
and preserve all the subvolumes/snapshots.

Is there a way to do that without using a software/hardware raid
on the new machine (that is just use btrfs multi-device).

If fewer than 3TB were occupied, I suppose I could just resize
it so that it fits on one 3TB hd, then copy device to device
onto a 3TB disk, add the 2 other ones and do a balance, but
here, I can't do that.
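
For reference, that approach would have looked something like the sketch
below (hypothetical device names, and again not applicable here since more
than 3TB are in use):

btrfs filesystem resize 2900G /mnt              # shrink the fs to fit one 3TB disk
umount /mnt
dd if=/dev/old_raid of=/dev/sdb bs=1M count=3000000   # copy only the shrunken part
mount /dev/sdb /mnt
btrfs device add /dev/sdc /dev/sdd /mnt         # add the two remaining disks
btrfs filesystem balance /mnt                   # spread data across all three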

I suspect that if compression was enabled, the FS could fit on
3 TB, but AFAICT, compression is enabled at mount time and would
only apply to newly created files. Is there a way to compress
files already in a btrfs filesystem?

Any help would be appreciated.
Stephane
