btrfs send receive: No space left on device

2018-10-17 Thread Libor Klepáč
Hello,
I have a new 32GB SSD in my Intel NUC, installed Debian 9 on it, and I am using
btrfs as the root filesystem.
Then I created subvolumes /system and /home and moved the system there.
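
Roughly, the layout was created like this (a minimal sketch only, the exact
steps may have differed; /dev/sda3 is the device shown in the usage output
below):

# mount the top-level volume (subvolid=5) and create the subvolumes
mount -o subvolid=5 /dev/sda3 /mnt/btrfs/ssd
btrfs subvolume create /mnt/btrfs/ssd/system
btrfs subvolume create /mnt/btrfs/ssd/home
# the installed system was then copied into /mnt/btrfs/ssd/system and
# /etc/fstab adjusted to mount subvol=/system as / (see the fstab below)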

The system was installed using kernel 4.9.x and the filesystem was created
using btrfs-progs 4.7.x. Details follow.

Main filesystem:

# btrfs filesystem usage /mnt/btrfs/ssd/
Overall:
Device size:  29.08GiB
Device allocated:  4.28GiB
Device unallocated:   24.80GiB
Device missing:  0.00B
Used:  2.54GiB
Free (estimated): 26.32GiB  (min: 26.32GiB)
Data ratio:   1.00
Metadata ratio:   1.00
Global reserve:   16.00MiB  (used: 0.00B)

Data,single: Size:4.00GiB, Used:2.48GiB
   /dev/sda3   4.00GiB

Metadata,single: Size:256.00MiB, Used:61.05MiB
   /dev/sda3 256.00MiB

System,single: Size:32.00MiB, Used:16.00KiB
   /dev/sda3  32.00MiB

Unallocated:
   /dev/sda3  24.80GiB

#/etc/fstab
UUID=d801da52-813d-49da-bdda-87fc6363e0ac   /mnt/btrfs/ssd  btrfs 
noatime,space_cache=v2,compress=lzo,commit=300,subvolid=5 0   0
UUID=d801da52-813d-49da-bdda-87fc6363e0ac   /   btrfs   
noatime,space_cache=v2,compress=lzo,commit=300,subvol=/system 0   0
UUID=d801da52-813d-49da-bdda-87fc6363e0ac   /home   btrfs   
noatime,space_cache=v2,compress=lzo,commit=300,subvol=/home 0   0

-
Then I installed a kernel from backports:
4.18.0-0.bpo.1-amd64 #1 SMP Debian 4.18.6-1~bpo9+1
and btrfs-progs 4.17.

For backups, I created a 16GB iSCSI device on my QNAP, created a filesystem
on it, and mounted it like this:
LABEL=backup   /mnt/btrfs/backup   btrfs   
noatime,space_cache=v2,compress=lzo,subvolid=5,nofail,noauto 0   0

After a send/receive operation on the /home subvolume, usage looks like this:

# btrfs filesystem usage /mnt/btrfs/backup/
Overall:
Device size:  16.00GiB
Device allocated:  1.27GiB
Device unallocated:   14.73GiB
Device missing:  0.00B
Used:  844.18MiB
Free (estimated): 14.92GiB  (min: 14.92GiB)
Data ratio:   1.00
Metadata ratio:   1.00
Global reserve:   16.00MiB  (used: 0.00B)

Data,single: Size:1.01GiB, Used:833.36MiB
   /dev/sdb   1.01GiB

Metadata,single: Size:264.00MiB, Used:10.80MiB
   /dev/sdb  264.00MiB

System,single: Size:4.00MiB, Used:16.00KiB
   /dev/sdb   4.00MiB

Unallocated:
   /dev/sdb   14.73GiB


The problem is that during send/receive of the system subvolume, it runs out of space:

# btrbk run /mnt/btrfs/ssd/system/ -v  
btrbk command line client, version 0.26.1  (Wed Oct 17 09:51:20 2018)
Using configuration: /etc/btrbk/btrbk.conf
Using transaction log: /var/log/btrbk.log
Creating subvolume snapshot for: /mnt/btrfs/ssd/system
[snapshot] source: /mnt/btrfs/ssd/system
[snapshot] target: /mnt/btrfs/ssd/_snapshots/system.20181017T0951
Checking for missing backups of subvolume "/mnt/btrfs/ssd/system" in 
"/mnt/btrfs/backup/"
Creating subvolume backup (send-receive) for: 
/mnt/btrfs/ssd/_snapshots/system.20181016T2034
No common parent subvolume present, creating full backup...
[send/receive] source: /mnt/btrfs/ssd/_snapshots/system.20181016T2034
[send/receive] target: /mnt/btrfs/backup/system.20181016T2034
mbuffer: error: outputThread: error writing to  at offset 0x4b5bd000: 
Broken pipe
mbuffer: warning: error during output to : Broken pipe
WARNING: [send/receive] (send=/mnt/btrfs/ssd/_snapshots/system.20181016T2034, 
receive=/mnt/btrfs/backup) At subvol 
/mnt/btrfs/ssd/_snapshots/system.20181016T2034
WARNING: [send/receive] (send=/mnt/btrfs/ssd/_snapshots/system.20181016T2034, 
receive=/mnt/btrfs/backup) At subvol system.20181016T2034
ERROR: rename o77417-5519-0 -> 
lib/modules/4.18.0-0.bpo.1-amd64/kernel/drivers/watchdog/pcwd_pci.ko failed: No 
space left on device
ERROR: Failed to send/receive btrfs subvolume: 
/mnt/btrfs/ssd/_snapshots/system.20181016T2034  -> /mnt/btrfs/backup
[delete] options: commit-after
[delete] target: /mnt/btrfs/backup/system.20181016T2034
WARNING: Deleted partially received (garbled) subvolume: 
/mnt/btrfs/backup/system.20181016T2034
ERROR: Error while resuming backups, aborting
Created 0/2 missing backups
WARNING: Skipping cleanup of snapshots for subvolume "/mnt/btrfs/ssd/system", 
as at least one target aborted earlier
Completed within: 116s  (Wed Oct 17 09:53:16 2018)

Backup Summary (btrbk command line client, version 0.26.1)

Date:   Wed Oct 17 09:51:20 2018
Config: /etc/btrbk/btrbk.conf
Filter: subvolume=/mnt/btrfs/ssd/system

Legend:
===  up-to-date subvolume (source snapshot)
+++  created subvolume (source 
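
For reference, the failing transfer can be reproduced by hand with something
like the sketch below (paths follow the btrbk layout above; the watch interval
is arbitrary), to rule out btrbk itself and to see how allocation on the 16GiB
target develops while the stream is received:

# manual full send/receive of the failing snapshot
btrfs send /mnt/btrfs/ssd/_snapshots/system.20181016T2034 | btrfs receive /mnt/btrfs/backup/
# in a second terminal, watch chunk allocation on the backup filesystem
watch -n 5 btrfs filesystem usage /mnt/btrfs/backup/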

BTRFS and cyrus mail server

2017-02-08 Thread Libor Klepáč
Hello,
inspired by the recent discussion on BTRFS vs. databases, I wanted to ask
about the suitability of BTRFS for hosting a Cyrus IMAP server spool. I
haven't found any recent article on this topic.

I'm preparing a migration of our mail server to Debian Stretch, i.e. kernel 4.9
for now. We are using XFS for storage at the moment, and I will migrate to the
new server using imapsync. Both are virtual machines running on VMware on Dell
hardware; the disks are on battery-backed hardware RAID controllers, over VMFS.

I'm considering using BTRFS, but I'm a little concerned after reading this
mailing list ;)

I'm interested in using:
 - compression (emails should compress well, right?)
 - maybe deduplication, later (Cyrus does it now by hardlinking messages with
   the same content)
 - snapshots for history
 - send/receive for offsite backup
 - also, what about data inlining; should it be turned off? (A sketch of the
   mount options I have in mind follows this list.)
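
For context, the mount options I have in mind for the spool look roughly like
the entry below (illustrative only: the label and mount point are made up, and
max_inline=0 simply reflects the inlining experience mentioned further down,
not a tested mail-spool configuration):

# candidate /etc/fstab entry for the Cyrus spool
LABEL=cyrus-spool  /var/spool/cyrus  btrfs  noatime,compress=lzo,max_inline=0,space_cache=v2  0  0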

Our Cyrus pool consists of ~520GB of data in ~2.5 million files across ~2000
mailboxes.
We have a message size limit of ~25MB, so emails are not bigger than that.
There are, however, bigger files: the per-mailbox cache/index files of Cyrus
(some of them are around 300MB), and these are also the most frequently
modified files.
The rest of the files (messages) are usually written just once.

---
A year ago I started using btrfs on a backup server as storage for 4 BackupPC
instances running in containers (backups are then sent offsite with btrbk).
After switching off data inlining I'm satisfied and everything works
(send/receive is sometimes slow, but I guess that's because of the SATA disks
on the receive side).


Thanks for your opinions,

Libor



Re: Send-receive performance

2016-07-29 Thread Libor Klepáč
On Friday, 22 July 2016 13:27:15 CEST, Libor Klepáč wrote:
> Hello,
> 
> On Friday, 22 July 2016 14:59:43 CEST, Henk Slager wrote:
> 
> > On Wed, Jul 20, 2016 at 11:15 AM, Libor Klepáč <libor.kle...@bcom.cz>
> > wrote:
> 
> > > Hello,
> > > we use BackupPC to back up our hosting machines.
> > > 
> > > I have recently migrated it to btrfs, so we can use send/receive for
> > > offsite backups of our backups.
> > > 
> > > I have several btrfs volumes, each hosting an nspawn container, which
> > > runs in a /system subvolume and has BackupPC data in a /backuppc
> > > subvolume. I use btrbk to do the snapshots and the transfer.
> > > The local side is set to keep 5 daily snapshots, the remote side to
> > > hold some history (not much yet, I have been using it this way for a
> > > few weeks).
> > > 
> > > If you know BackupPC's behaviour: for every backup (even incremental),
> > > it creates the full directory tree of each backed-up machine, even if
> > > there are no modified files, and places one small file in each
> > > directory that holds some info for BackupPC. So after a few days I ran
> > > into ENOSPC on one volume, because my metadata grew because of
> > > inlining. I switched from mdata=DUP to mdata=single (now I see it's
> > > possible to change the inline file size, right?).
> > 
> > I would try mounting both the send and receive volumes with max_inline=0.
> > Then, for all small new and changed files, the file data will be
> > stored in data chunks and not inline in the metadata chunks.
> 
> 
> OK, I will try. Is there a way to move existing files from metadata to
> data chunks? Something like btrfs balance with the convert filter?
>
Written on 25.7.2016:
I will recreate on new filesystems and do a new send/receive.

Written on 29.7.2016:
I created new filesystems, or copied the data into new subvolumes after
mounting with max_inline=0.
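
Concretely, "copied to new subvolumes after mounting with max_inline=0"
amounts to something like this sketch (paths are illustrative; the point is
that max_inline=0 only affects newly written files, so existing data has to be
rewritten):

# mount (or remount) the volume so that new writes are never inlined
mount -o remount,max_inline=0 /mnt/btrfs/as
# create a fresh subvolume and copy the data into it as a full rewrite
btrfs subvolume create /mnt/btrfs/as/backuppc_new
cp -a --reflink=never /mnt/btrfs/as/backuppc/. /mnt/btrfs/as/backuppc_new/
# once verified, drop the old subvolume and move the new one into place
btrfs subvolume delete /mnt/btrfs/as/backuppc
mv /mnt/btrfs/as/backuppc_new /mnt/btrfs/as/backuppc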

The difference is remarkable, for example:
before:
--
btrfs filesystem usage  /mnt/btrfs/as/
Overall:
Device size: 320.00GiB
Device allocated: 144.06GiB
Device unallocated:  175.94GiB
Device missing:  0.00B
Used: 122.22GiB
Free (estimated): 176.33GiB  (min: 88.36GiB)
Data ratio:   1.00
Metadata ratio:   1.00
Global reserve:  512.00MiB  (used: 40.86MiB)

Data,single: Size:98.00GiB, Used:97.61GiB
   /dev/sdb   98.00GiB

Metadata,single: Size:46.00GiB, Used:24.61GiB
   /dev/sdb   46.00GiB

System,DUP: Size:32.00MiB, Used:16.00KiB
   /dev/sdb   64.00MiB

Unallocated:
   /dev/sdb  175.94GiB

after:
---
   
btrfs filesystem usage  /mnt/btrfs/as/
Overall:
Device size: 320.00GiB
Device allocated: 137.06GiB
Device unallocated:  182.94GiB
Device missing:  0.00B
Used: 54.36GiB
Free (estimated): 225.15GiB  (min: 133.68GiB)
Data ratio:   1.00
Metadata ratio:   1.00
Global reserve:  512.00MiB  (used: 0.00B)

Data,single: Size:91.00GiB, Used:48.79GiB
   /dev/sdb   91.00GiB

Metadata,single: Size:46.00GiB, Used:5.58GiB
   /dev/sdb   46.00GiB

System,DUP: Size:32.00MiB, Used:16.00KiB
   /dev/sdb   64.00MiB

Unallocated:
   /dev/sdb  182.94GiB


> 
> > That you changed the metadata profile from dup to single is unrelated in
> > principle. single for metadata instead of dup means half the write I/O
> > for the hard disks, so in that sense it might speed up send actions a
> > bit. I guess almost all the time is spent in seeks.
> 
> 
> Yes, I just didn't realize that so many files would end up in metadata
> structures, and it caught me by surprise.
> 

 
> > It looks like the send part is the speed bottleneck; you can test
> > and isolate it by doing a dummy send, piping it to  | mbuffer >
> > /dev/null  and seeing what speed you get.
> 
> 
> I tried that already; I did an incremental send to a file:
> #btrfs send -v -p ./backuppc.20160712/  ./backuppc.20160720_1/ | pv > /mnt/
> data1/send
> At subvol ./backuppc.20160720_1/
> joining genl thread
> 18.9GiB 21:14:45 [ 259KiB/s]
> 
> I copied it over scp to the receiver at 50.9MB/s.
> Now I will try receive.
>

Written on 25.7.2016:
Receive did 1GB of those 19GB over the weekend, so I canceled it...

Written on 29.7.2016:
Even with clean filesystems mounted with max_inline=0, send/receive was slow.
I tried to unmount all the filesystems, unload the btrfs module, and load it
again. Send/receive was still slow.

Then I set vm.dirty_bytes to 102400, and after that I set
vm.dirty_background_ratio = 10
vm.dirty_ratio = 20

And voila, the speed went up dramatically; it has now transferred about 10GB
in 30 minutes!
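
For reference, the writeback tuning above expressed as sysctl commands (a
minimal sketch; the values are simply the ones from this message, and note
that vm.dirty_bytes and vm.dirty_ratio are mutually exclusive, so setting the
ratios replaces the earlier vm.dirty_bytes setting):

# apply at runtime (not persistent across reboots)
sysctl -w vm.dirty_background_ratio=10
sysctl -w vm.dirty_ratio=20
# to persist, put "vm.dirty_background_ratio = 10" and "vm.dirty_ratio = 20"
# into a file under /etc/sysctl.d/, e.g. 90-writeback.conf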

Libor

Re: Btrfs send to send out metadata and data separately

2016-07-29 Thread Libor Klepáč
Hello,

just a little question about receiver point 0), see below.

On Friday, 29 July 2016 20:40:38 CEST, Qu Wenruo wrote:
> Hi Filipe, and maintainers,
> 
> Receive will do the following thing first before recovering the
> subvolume/snapshot:
> 
> 0) Create temporary dir for data extents
> Create a new dir with temporary name($data_extent), to put data
> extents into it.

Are these the directories named like "o4095916-21925-0" on the receiver side?

I'm in the middle of a send/receive and I already have over 300 thousand of
them. I was always told that a directory with lots of entries suffers in
performance (but I never used btrfs before :), is that true?

Should it be structured a little (subdirectories based on extent number, for
example)?

Libor

Re: Send-receive performance

2016-07-22 Thread Libor Klepáč
Hello,

On Friday, 22 July 2016 14:59:43 CEST, Henk Slager wrote:
> On Wed, Jul 20, 2016 at 11:15 AM, Libor Klepáč <libor.kle...@bcom.cz> wrote:
> > Hello,
> > we use BackupPC to back up our hosting machines.
> > 
> > I have recently migrated it to btrfs, so we can use send/receive for
> > offsite backups of our backups.
> > 
> > I have several btrfs volumes, each hosting an nspawn container, which
> > runs in a /system subvolume and has BackupPC data in a /backuppc
> > subvolume. I use btrbk to do the snapshots and the transfer.
> > The local side is set to keep 5 daily snapshots, the remote side to
> > hold some history (not much yet, I have been using it this way for a
> > few weeks).
> > 
> > If you know BackupPC's behaviour: for every backup (even incremental),
> > it creates the full directory tree of each backed-up machine, even if
> > there are no modified files, and places one small file in each
> > directory that holds some info for BackupPC. So after a few days I ran
> > into ENOSPC on one volume, because my metadata grew because of
> > inlining. I switched from mdata=DUP to mdata=single (now I see it's
> > possible to change the inline file size, right?).
> I would try mounting both the send and receive volumes with max_inline=0.
> Then, for all small new and changed files, the file data will be
> stored in data chunks and not inline in the metadata chunks.

OK, I will try. Is there a way to move existing files from metadata to data
chunks? Something like btrfs balance with the convert filter?

> That you changed the metadata profile from dup to single is unrelated in
> principle. single for metadata instead of dup means half the write I/O
> for the hard disks, so in that sense it might speed up send actions a
> bit. I guess almost all the time is spent in seeks.

Yes, I just didn't realize that so many files would end up in metadata
structures, and it caught me by surprise.

> 
> > My problem is that on some volumes send/receive is relatively fast (rates
> > in MB/s or hundreds of kB/s), but on the biggest volume (biggest both in
> > space and in contained filesystem trees) the rate is just 5-30kB/s.
> > 
> > Here is the btrbk progress, copied:
> > 785MiB 47:52:00 [12.9KiB/s] [4.67KiB/s]
> > 
> > i.e. ~785MB in 48 hours.
> > 
> > The receiver has high I/O wait (90-100%) when I push data using btrbk.
> > When I run dd over ssh it can do 50-75MB/s.
> 
> It looks like the send part is the speed bottleneck; you can test
> and isolate it by doing a dummy send, piping it to  | mbuffer >
> /dev/null  and seeing what speed you get.

I tried that already; I did an incremental send to a file:
#btrfs send -v -p ./backuppc.20160712/  ./backuppc.20160720_1/ | pv > /mnt/
data1/send
At subvol ./backuppc.20160720_1/
joining genl thread
18.9GiB 21:14:45 [ 259KiB/s]

I copied it over scp to the receiver at 50.9MB/s.
Now I will try receive.


> > The sending machine is Debian Jessie with kernel 4.5.0-0.bpo.2-amd64
> > (upstream 4.5.3) and btrfs-progs 4.4.1. It is a virtual machine running on
> > a volume exported from an MD3420, 4 SAS disks in RAID10.
> > 
> > The receiving machine is Debian Jessie on a Dell T20 with 4x3TB disks in
> > MD RAID5; the kernel is 4.4.0-0.bpo.1-amd64 (upstream 4.4.6), btrfs-progs
> > 4.4.1.
> > 
> > The BTRFS volumes were created using those listed versions.
> > 
> > Sender:
> > -
> > #mount | grep hosting
> > /dev/sdg on /mnt/btrfs/hosting type btrfs
> > (rw,noatime,space_cache,subvolid=5,subvol=/) /dev/sdg on
> > /var/lib/container/hosting type btrfs
> > (rw,noatime,space_cache,subvolid=259,subvol=/system) /dev/sdg on
> > /var/lib/container/hosting/var/lib/backuppc type btrfs
> > (rw,noatime,space_cache,subvolid=260,subvol=/backuppc)
> > 
> > #btrfs filesystem usage /mnt/btrfs/hosting
> > 
> > Overall:
> > Device size: 840.00GiB
> > Device allocated:815.03GiB
> > Device unallocated:   24.97GiB
> > Device missing:  0.00B
> > Used:522.76GiB
> > Free (estimated):283.66GiB  (min: 271.18GiB)
> > Data ratio:   1.00
> > Metadata ratio:   1.00
> > Global reserve:  512.00MiB  (used: 0.00B)
> > 
> > Data,single: Size:710.98GiB, Used:452.29GiB
> > 
> >/dev/sdg  710.98GiB
> > 
> > Metadata,single: Size:103.98GiB, Used:70.46GiB
> > 
> >/dev/sdg  103.98GiB
> 
> This is a very large metadata/data ratio. Large and scattered
> metadata, even on fast rotational media, will result in slow send
> operation in my experience (incremental send, about 10G metadata

Send-receive performance

2016-07-20 Thread Libor Klepáč
Hello,
we use BackupPC to back up our hosting machines.

I have recently migrated it to btrfs, so we can use send/receive for offsite
backups of our backups.

I have several btrfs volumes, each hosting an nspawn container, which runs in
a /system subvolume and has BackupPC data in a /backuppc subvolume.
I use btrbk to do the snapshots and the transfer.
The local side is set to keep 5 daily snapshots, the remote side to hold some
history (not much yet, I have been using it this way for a few weeks).

If you know BackupPC's behaviour: for every backup (even incremental), it
creates the full directory tree of each backed-up machine, even if there are
no modified files, and places one small file in each directory that holds
some info for BackupPC.
So after a few days I ran into ENOSPC on one volume, because my metadata grew
because of inlining.
I switched from mdata=DUP to mdata=single (now I see it's possible to change
the inline file size, right?).

My problem is that on some volumes send/receive is relatively fast (rates in
MB/s or hundreds of kB/s), but on the biggest volume (biggest both in space
and in contained filesystem trees) the rate is just 5-30kB/s.

Here is the btrbk progress, copied:
785MiB 47:52:00 [12.9KiB/s] [4.67KiB/s]

i.e. ~785MB in 48 hours.

The receiver has high I/O wait (90-100%) when I push data using btrbk.
When I run dd over ssh it can do 50-75MB/s.

The sending machine is Debian Jessie with kernel 4.5.0-0.bpo.2-amd64 (upstream
4.5.3) and btrfs-progs 4.4.1. It is a virtual machine running on a volume
exported from an MD3420, 4 SAS disks in RAID10.

The receiving machine is Debian Jessie on a Dell T20 with 4x3TB disks in MD
RAID5; the kernel is 4.4.0-0.bpo.1-amd64 (upstream 4.4.6), btrfs-progs 4.4.1.

The BTRFS volumes were created using those listed versions.

Sender:
-
#mount | grep hosting
/dev/sdg on /mnt/btrfs/hosting type btrfs 
(rw,noatime,space_cache,subvolid=5,subvol=/)
/dev/sdg on /var/lib/container/hosting type btrfs 
(rw,noatime,space_cache,subvolid=259,subvol=/system)
/dev/sdg on /var/lib/container/hosting/var/lib/backuppc type btrfs 
(rw,noatime,space_cache,subvolid=260,subvol=/backuppc)

#btrfs filesystem usage /mnt/btrfs/hosting
Overall:
Device size: 840.00GiB
Device allocated: 815.03GiB
Device unallocated:   24.97GiB
Device missing:  0.00B
Used: 522.76GiB
Free (estimated): 283.66GiB  (min: 271.18GiB)
Data ratio:   1.00
Metadata ratio:   1.00
Global reserve:  512.00MiB  (used: 0.00B)

Data,single: Size:710.98GiB, Used:452.29GiB
   /dev/sdg  710.98GiB

Metadata,single: Size:103.98GiB, Used:70.46GiB
   /dev/sdg  103.98GiB

System,DUP: Size:32.00MiB, Used:112.00KiB
   /dev/sdg   64.00MiB

Unallocated:
   /dev/sdg   24.97GiB

# btrfs filesystem show /mnt/btrfs/hosting
Label: 'BackupPC-BcomHosting'  uuid: edecc92a-646a-4585-91a0-9cbb556303e9
Total devices 1 FS bytes used 522.75GiB
devid1 size 840.00GiB used 815.03GiB path /dev/sdg

#Receiver:
#mount | grep hosting
/dev/mapper/vgPecDisk2-lvHostingBackupBtrfs on /mnt/btrfs/hosting type btrfs 
(rw,noatime,space_cache,subvolid=5,subvol=/)

#btrfs filesystem usage /mnt/btrfs/hosting/
Overall:
Device size: 896.00GiB
Device allocated: 604.07GiB
Device unallocated:  291.93GiB
Device missing:  0.00B
Used: 565.98GiB
Free (estimated): 313.62GiB  (min: 167.65GiB)
Data ratio:   1.00
Metadata ratio:   1.00
Global reserve:  512.00MiB  (used: 55.80MiB)

Data,single: Size:530.01GiB, Used:508.32GiB
   /dev/mapper/vgPecDisk2-lvHostingBackupBtrfs   530.01GiB

Metadata,single: Size:74.00GiB, Used:57.65GiB
   /dev/mapper/vgPecDisk2-lvHostingBackupBtrfs74.00GiB

System,DUP: Size:32.00MiB, Used:80.00KiB
   /dev/mapper/vgPecDisk2-lvHostingBackupBtrfs64.00MiB

Unallocated:
   /dev/mapper/vgPecDisk2-lvHostingBackupBtrfs   291.93GiB

#btrfs filesystem show /mnt/btrfs/hosting/
Label: none  uuid: 2d7ea471-8794-42ed-bec2-a6ad83f7b038
Total devices 1 FS bytes used 564.56GiB
devid1 size 896.00GiB used 604.07GiB path 
/dev/mapper/vgPecDisk2-lvHostingBackupBtrfs



What can I do about it? I tried to defragment the /backuppc subvolume (without
the recursive option); should I do it for all snapshots/subvolumes on both
sides? Would upgrading to a 4.6.x kernel help (there is 4.6.3 in backports)?
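
In case it helps narrow this down, here is a sketch of the isolation tests I
plan to try (paths and snapshot names are illustrative):

# 1) raw send speed, no network and no receive involved
btrfs send -p /mnt/btrfs/hosting/backuppc.20160712 /mnt/btrfs/hosting/backuppc.20160720_1 | pv > /dev/null
# 2) network throughput alone
dd if=/dev/zero bs=1M count=1024 | ssh receiver 'cat > /dev/null'
# 3) receive speed alone, fed from a stream file copied to the receiver
pv /tmp/send-stream | btrfs receive /mnt/btrfs/hosting/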

Thanks for any answer.

With regards,

Libor





crash on linux 2.6.39-1-amd64(debian)

2011-06-11 Thread Libor Klepáč
Hi, last week I had a crash on btrfs.
After the power-down (magic SysRq did not work) I wasn't able to boot; the
system was running out of memory. I tried btrfsck from another Debian system
run from a USB flash stick; it produced some errors, but I was able to mount
my filesystem.

HW info:
Dell Latitude 13 with an OCZ-VERTEX2 120GB SSD

System info:
Debian unstable with 2.6.39-1-amd64
btrfs-tools 0.19+20101101-1

The btrfs filesystem is mounted on /home, built on top of the (now degraded)
/dev/md0.

Mount info:
/dev/md0 on /home type btrfs (rw,noatime,ssd,compress,space_cache)

I'm running snapbtr at *:12 every hour; the crash happened shortly after it
ran, and all snapshot subvolumes are gone. I have seen similar crashes before,
on 2.6.38, without any problems after reboot.

I'm also using autofs to mount some Samba shares from a work server. I think
this kind of crash was affected by autofs, because I remember autofs causing
kernel oopses several times.

Kernel oops (not complete, sorry):
http://klepac.cz/btrfs/07062011104.jpg
After reboot:
http://klepac.cz/btrfs/07062011105.jpg
After fsck:
http://klepac.cz/btrfs/07062011106.jpg

With regards,
Libor




Raid1 resync

2011-03-11 Thread Libor Klepáč
Hi,
before my SSD crashed, my /home partition was sitting on an md RAID1
consisting of a partition on the local disk and an iSCSI LUN on the network;
it was created with bitmaps and the iSCSI LUN was set write-mostly.
So when I came to work, I fired up iSCSI, added the LUN to the RAID, and a
partial resync went ahead.
Before leaving work, I failed the iSCSI disk and kicked it out of the RAID.
I would like to use something like this with btrfs, without mdraid below it
(see the mdadm sketch below).

Is it somehow possible?
1) I can't go below 2 devices.
2) Does btrfs support resyncing only the changed blocks, like mdraid with
bitmaps?
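
For reference, a rough sketch of the mdraid workflow described above (device
names are purely illustrative; this is the behaviour I would like to get from
btrfs RAID1 without mdraid underneath):

# create the mirror with an internal write-intent bitmap; the iSCSI LUN is marked write-mostly
mdadm --create /dev/md0 --level=1 --raid-devices=2 --bitmap=internal /dev/sda3 --write-mostly /dev/sdX1
# evening: fail and remove the iSCSI leg before leaving work
mdadm /dev/md0 --fail /dev/sdX1
mdadm /dev/md0 --remove /dev/sdX1
# morning: log in to the iSCSI target again and re-add the leg;
# thanks to the bitmap, only the blocks changed in the meantime are resynced
mdadm /dev/md0 --re-add /dev/sdX1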

Thanks
Libor




Re: Raid1 resync

2011-03-11 Thread Libor Klepáč
Hi,
sorry for my bad wording; I have had no crash with btrfs. I'm considering
switching to btrfs on the SSD disk coming back from a warranty claim.
I was just asking whether it's possible to switch my old setup (described in
my first mail) from mdraid to btrfs' internal RAID, i.e. whether it's possible
to resync the changed data on the iSCSI disk when I come to work in the
morning.

With regards
Libor

PS: If this mailing list is for bug/patch reporting only, then I'm sorry for
making unwanted mail traffic.


On Friday, 11 March 2011 15:41:25, Wenyi Liu wrote:
 Hi Libor:
 Maybe you can attach your oops first.
 
 ---
 Best Regards,
 Liu Wenyi
 
 2011/3/11 Libor Klepáč libor.kle...@bcom.cz
 
  Hi,
  before my SSD crashed, my /home partition was sitting on an md RAID1
  consisting of a partition on the local disk and an iSCSI LUN on the network;
  it was created with bitmaps and the iSCSI LUN was set write-mostly.
  So when I came to work, I fired up iSCSI, added the LUN to the RAID, and a
  partial resync went ahead.
  Before leaving work, I failed the iSCSI disk and kicked it out of the RAID.
  I would like to use something like this with btrfs, without mdraid below
  it.
  
  Is it somehow possible?
  1) I can't go below 2 devices.
  2) Does btrfs support resyncing only the changed blocks, like mdraid with
  bitmaps?
  
  Thanks
  Libor

