Re: how to repair or access broken btrfs?

2017-11-14 Thread Stefan Priebe - Profihost AG
On 14.11.2017 at 18:45, Andrei Borzenkov wrote: > On 14.11.2017 12:56, Stefan Priebe - Profihost AG wrote: >> Hello, >> >> after a controller firmware bug / failure I have a broken btrfs. >> >> # parent transid verify failed on 181846016 wanted 143404 found 143399 >> >> running repair, fsck or

[PATCH 05/10] writeback: convert the flexible prop stuff to bytes

2017-11-14 Thread Josef Bacik
From: Josef Bacik The flexible proportions were all page based, but now that we are doing metadata writeout that can be smaller or larger than page size we need to account for this in bytes instead of number of pages. Signed-off-by: Josef Bacik ---

[PATCH 01/10] remove mapping from balance_dirty_pages*()

2017-11-14 Thread Josef Bacik
From: Josef Bacik The only reason we pass in the mapping is to get the inode in order to see if writeback cgroups is enabled, and even then it only checks the bdi and a super block flag. balance_dirty_pages() doesn't even use the mapping. Since balance_dirty_pages*() works on a

[PATCH 04/10] lib: add a __fprop_add_percpu_max

2017-11-14 Thread Josef Bacik
From: Josef Bacik This helper allows us to add an arbitrary amount to the fprop structures. Signed-off-by: Josef Bacik --- include/linux/flex_proportions.h | 11 +-- lib/flex_proportions.c | 9 + 2 files changed, 14 insertions(+), 6

[PATCH 02/10] writeback: convert WB_WRITTEN/WB_DIRTIED counters to bytes

2017-11-14 Thread Josef Bacik
From: Josef Bacik These are counters that constantly go up in order to do bandwidth calculations. It isn't important what the units are in, as long as they are consistent between the two of them, so convert them to count bytes written/dirtied, and allow the metadata accounting

Re: Need help with incremental backup strategy (snapshots, defragmenting & performance)

2017-11-14 Thread Dave
On Tue, Nov 14, 2017 at 3:50 AM, Roman Mamedov wrote: > > On Mon, 13 Nov 2017 22:39:44 -0500 > Dave wrote: > > > I have my live system on one block device and a backup snapshot of it > > on another block device. I am keeping them in sync with hourly

[PATCH 03/10] lib: add a batch size to fprop_global

2017-11-14 Thread Josef Bacik
From: Josef Bacik The flexible proportion stuff has been used to track how many pages we are writing out over a period of time, so it counts everything in single increments. If we want to use another base value we need to be able to adjust the batch size to fit the units we'll

[PATCH 09/10] Btrfs: kill the btree_inode

2017-11-14 Thread Josef Bacik
From: Josef Bacik In order to more efficiently support sub-page blocksizes we need to stop allocating pages from pagecache for our metadata. Instead switch to using the account_metadata* counters for making sure we are keeping the system aware of how much dirty metadata we have,

[PATCH 08/10] export radix_tree_iter_tag_set

2017-11-14 Thread Josef Bacik
From: Josef Bacik We use this in btrfs for metadata writeback. Acked-by: Matthew Wilcox Signed-off-by: Josef Bacik --- lib/radix-tree.c | 1 + 1 file changed, 1 insertion(+) diff --git a/lib/radix-tree.c b/lib/radix-tree.c index

[PATCH 07/10] writeback: introduce super_operations->write_metadata

2017-11-14 Thread Josef Bacik
From: Josef Bacik Now that we have metadata counters in the VM, we need to provide a way to kick writeback on dirty metadata. Introduce super_operations->write_metadata. This allows file systems to deal with writing back any dirty metadata we need based on the writeback needs of

[PATCH 06/10] writeback: add counters for metadata usage

2017-11-14 Thread Josef Bacik
From: Josef Bacik Btrfs has no bounds except memory on the amount of dirty memory that we have in use for metadata. Historically we have used a special inode so we could take advantage of the balance_dirty_pages throttling that comes with using pagecache. However as we'd like to

[PATCH 10/10] btrfs: rework end io for extent buffer reads

2017-11-14 Thread Josef Bacik
From: Josef Bacik Now that the only thing that keeps ebs alive is io_pages and its refcount, we need to hold the eb ref for the entire end io call so we don't get it removed out from underneath us. Also the hooks make no sense for us now, so rework this to be cleaner.

Re: A partially failing disk in raid0 needs replacement

2017-11-14 Thread Chris Murphy
On Tue, Nov 14, 2017 at 5:38 AM, Adam Borowski wrote: > On Tue, Nov 14, 2017 at 10:36:22AM +0200, Klaus Agnoletti wrote: >> I used to have 3x2TB in a btrfs in raid0. A few weeks ago, one of the > ^ >> 2TB disks started giving me I/O

Re: A partially failing disk in raid0 needs replacement

2017-11-14 Thread Chris Murphy
On Tue, Nov 14, 2017 at 5:48 AM, Roman Mamedov wrote: > On Tue, 14 Nov 2017 10:36:22 +0200 > Klaus Agnoletti wrote: > >> Obviously, I want /dev/sdd emptied and deleted from the raid. > > * Unmount the RAID0 FS > > * copy the bad drive using

Tiered storage?

2017-11-14 Thread Roy Sigurd Karlsbakk
Hi all I've been following this project on and off for quite a few years, and I wonder if anyone has looked into tiered storage on it. With tiered storage, I mean hot data lying on fast storage and cold data on slow storage. I'm not talking about caching (where you just keep a copy of the hot

Re: A partially failing disk in raid0 needs replacement

2017-11-14 Thread Chris Murphy
On Tue, Nov 14, 2017 at 1:36 AM, Klaus Agnoletti wrote: > Btrfs v3.17 Unrelated to the problem but this is pretty old. > Linux box 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) Also pretty old kernel. > x86_64 GNU/Linux > klaus@box:~$ sudo btrfs --version >

Re: [PATCH] btrfs/154: test for device dynamic rescan

2017-11-14 Thread Anand Jain
On 11/14/2017 08:12 PM, Eryu Guan wrote: On Mon, Nov 13, 2017 at 10:25:41AM +0800, Anand Jain wrote: Make sure missing device is included in the alloc list when it is scanned on a mounted FS. This test case needs btrfs kernel patch which is in the ML [PATCH] btrfs: handle dynamically

[PATCH v2] btrfs/154: test for device dynamic rescan

2017-11-14 Thread Anand Jain
Make sure missing device is included in the alloc list when it is scanned on a mounted FS. This test case needs btrfs kernel patch which is in the ML [PATCH] btrfs: handle dynamically reappearing missing device Without the kernel patch, the test will run, but reports as failed, as the device
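For readers wanting to try the v2 test case, an xfstests run is typically driven as in the sketch below. The checkout path, loop devices, and mount points are illustrative placeholders, not values from the thread; the function is defined but deliberately not executed.

```shell
# Sketch: running only btrfs/154 from an xfstests checkout.
# XFSTESTS_DIR and the devices are hypothetical placeholders.
XFSTESTS_DIR=/path/to/xfstests
run_btrfs_154() {
  cd "$XFSTESTS_DIR" || return 1
  # xfstests needs test and scratch devices it may freely mkfs
  export TEST_DEV=/dev/loop0  TEST_DIR=/mnt/test
  export SCRATCH_DEV=/dev/loop1  SCRATCH_MNT=/mnt/scratch
  ./check btrfs/154   # without the kernel patch, the test is expected to fail
}
echo "xfstests sketch defined, not executed"
```

As the posting notes, the test runs either way; only the pass/fail result depends on whether the kernel patch is applied.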

Re: [PATCH] btrfs: handle dynamically reappearing missing device

2017-11-14 Thread kbuild test robot
Hi Anand, Thank you for the patch! Yet something to improve: [auto build test ERROR on btrfs/next] [also build test ERROR on v4.14 next-20171114] [if your patch is applied to the wrong git tree, please drop us a note to help improve the system] url: https://github.com/0day-ci/linux/commits

Re: Tiered storage?

2017-11-14 Thread waxhead
As a regular BTRFS user I can tell you that there is no such thing as hot data tracking yet. Some people seem to use bcache together with btrfs and come asking for help on the mailing list. Raid5/6 have received a few fixes recently, and it *may* soon be worth trying out raid5/6 for data, but

Re: [PATCH v2] btrfs/154: test for device dynamic rescan

2017-11-14 Thread Eryu Guan
On Wed, Nov 15, 2017 at 11:05:15AM +0800, Anand Jain wrote: > Make sure missing device is included in the alloc list when it is > scanned on a mounted FS. > > This test case needs btrfs kernel patch which is in the ML > [PATCH] btrfs: handle dynamically reappearing missing device > Without the

Re: A partially failing disk in raid0 needs replacement

2017-11-14 Thread Roman Mamedov
On Tue, 14 Nov 2017 10:36:22 +0200 Klaus Agnoletti wrote: > Obviously, I want /dev/sdd emptied and deleted from the raid. * Unmount the RAID0 FS * copy the bad drive using `dd_rescue`[1] into a file on the 6TB drive (noting how much of it is actually unreadable --
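The steps above can be sketched as a shell sequence. Device names, mount points, and the image path are examples only; the function is defined but deliberately not run, since these commands are destructive on real hardware.

```shell
# Recovery sketch for a failing raid0 member (all names are examples).
BAD=/dev/sdd                 # the failing 2TB member
IMG=/mnt/6tb/sdd.img         # image file on the healthy 6TB drive
recover() {
  umount /mnt/raid0                      # 1. unmount the RAID0 filesystem
  ddrescue -d "$BAD" "$IMG" "$IMG.map"   # 2. GNU ddrescue; the map file records unreadable areas
  LOOP=$(losetup --find --show "$IMG")   # 3. expose the image as a block device
  btrfs device scan                      # 4. re-scan so btrfs sees the loop device as a member
  mount /mnt/raid0                       # 5. mount with the image standing in for the bad disk
}
echo "recovery steps sketched, not executed"
```

The map file matters: if ddrescue is interrupted, re-running the same command resumes and retries only the still-unread regions.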

Re: A partially failing disk in raid0 needs replacement

2017-11-14 Thread Patrik Lundquist
On 14 November 2017 at 09:36, Klaus Agnoletti wrote: > > How do you guys think I should go about this? I'd clone the disk with GNU ddrescue. https://www.gnu.org/software/ddrescue/ -- To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in the body of a

Re: A partially failing disk in raid0 needs replacement

2017-11-14 Thread Austin S. Hemmelgarn
On 2017-11-14 07:48, Roman Mamedov wrote: On Tue, 14 Nov 2017 10:36:22 +0200 Klaus Agnoletti wrote: Obviously, I want /dev/sdd emptied and deleted from the raid. * Unmount the RAID0 FS * copy the bad drive using `dd_rescue`[1] into a file on the 6TB drive

Re: A partially failing disk in raid0 needs replacement

2017-11-14 Thread Austin S. Hemmelgarn
On 2017-11-14 03:36, Klaus Agnoletti wrote: Hi list I used to have 3x2TB in a btrfs in raid0. A few weeks ago, one of the 2TB disks started giving me I/O errors in dmesg like this: [388659.173819] ata5.00: exception Emask 0x0 SAct 0x7fff SErr 0x0 action 0x0 [388659.175589] ata5.00:

Re: A partially failing disk in raid0 needs replacement

2017-11-14 Thread Adam Borowski
On Tue, Nov 14, 2017 at 10:36:22AM +0200, Klaus Agnoletti wrote: > I used to have 3x2TB in a btrfs in raid0. A few weeks ago, one of the ^ > 2TB disks started giving me I/O errors in dmesg like this: > > [388659.188988] Add. Sense: Unrecovered read error -

Re: Read before you deploy btrfs + zstd

2017-11-14 Thread Austin S. Hemmelgarn
On 2017-11-14 02:34, Martin Steigerwald wrote: Hello David. David Sterba - 13.11.17, 23:50: while 4.14 is still fresh, let me address some concerns I've seen on linux forums already. The newly added ZSTD support is a feature that has broader impact than just the runtime compression. The

Re: [PATCH] btrfs/154: test for device dynamic rescan

2017-11-14 Thread Eryu Guan
On Mon, Nov 13, 2017 at 10:25:41AM +0800, Anand Jain wrote: > Make sure missing device is included in the alloc list when it is > scanned on a mounted FS. > > This test case needs btrfs kernel patch which is in the ML > [PATCH] btrfs: handle dynamically reappearing missing device > Without the

Re: Need help with incremental backup strategy (snapshots, defragmenting & performance)

2017-11-14 Thread Roman Mamedov
On Tue, 14 Nov 2017 10:14:55 +0300 Marat Khalili wrote: > Don't keep snapshots under rsync target, place them under ../snapshots > (if snapper supports this): > Or, specify them in --exclude and avoid using --delete-excluded. Both are good suggestions, in my case each system does

Re: Need help with incremental backup strategy (snapshots, defragmenting & performance)

2017-11-14 Thread Roman Mamedov
On Mon, 13 Nov 2017 22:39:44 -0500 Dave wrote: > I have my live system on one block device and a backup snapshot of it > on another block device. I am keeping them in sync with hourly rsync > transfers. > > Here's how this system works in a little more detail: > > 1. I

how to repair or access broken btrfs?

2017-11-14 Thread Stefan Priebe - Profihost AG
Hello, after a controller firmware bug / failure I have a broken btrfs. # parent transid verify failed on 181846016 wanted 143404 found 143399 running repair, fsck or zero-log always results in the same failure message: extent-tree.c:2725: alloc_reserved_tree_block: BUG_ON `ret` triggered, value
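The usual escalation for a transid-mismatch filesystem, as commonly suggested on this list, can be sketched as below. The device name is a placeholder, nothing here is executed, and `--repair` in particular can make a damaged tree worse; image the device before attempting any of it.

```shell
DEV=/dev/sdX   # placeholder for the damaged device -- take an image first
attempt_recovery() {
  mount -o ro,usebackuproot "$DEV" /mnt    # try older tree roots, read-only
  btrfs rescue zero-log "$DEV"             # only helps when the log tree is damaged
  btrfs restore "$DEV" /recovered          # offline file extraction, writes nothing to DEV
  btrfs check --repair "$DEV"              # last resort
}
echo "recovery options sketched, not executed"
```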

Re: [PATCH 4/4] Btrfs: btrfs_dedupe_file_range() ioctl, remove 16MiB restriction

2017-11-14 Thread Timofey Titovets
Sorry, I was thinking that I could test that and send you some feedback, but for now there is no time. I will check that later and try adding memory reuse. So, just ignore the patches for now. Thanks 2017-10-10 20:36 GMT+03:00 David Sterba : > On Tue, Oct 03, 2017 at 06:06:04PM +0300,

RE: Read before you deploy btrfs + zstd

2017-11-14 Thread Paul Jones
> -Original Message- > From: linux-btrfs-ow...@vger.kernel.org [mailto:linux-btrfs- > ow...@vger.kernel.org] On Behalf Of Martin Steigerwald > Sent: Tuesday, 14 November 2017 6:35 PM > To: dste...@suse.cz; linux-btrfs@vger.kernel.org > Subject: Re: Read before you deploy btrfs + zstd > >

Re: [GIT PULL] Btrfs changes for 4.15

2017-11-14 Thread David Sterba
On Tue, Nov 14, 2017 at 07:39:11AM +0800, Qu Wenruo wrote: > > - extend mount options to specify zlib compression level, -o compress=zlib:9 > > However the support for it has a big problem, it will cause wild memory > access for "-o compress" mount option. > > Kernel ASAN can detect it easily
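The new syntax from the pull request looks like this in practice (device and mount point are examples); note Qu's warning above that the initial support had a wild-memory-access bug triggered by a bare `-o compress`:

```shell
# mount with an explicit zlib level (4.15+):
#   mount -o compress=zlib:9 /dev/sdX /mnt/data
# equivalent /etc/fstab line:
#   UUID=<fs-uuid>  /data  btrfs  compress=zlib:9  0  0
# plain "compress=zlib" keeps the previous default level
```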

Re: A partially failing disk in raid0 needs replacement

2017-11-14 Thread Klaus Agnoletti
Hi Roman I almost understand :-) - however, I need a bit more information: How do I copy the image file to the 6TB without screwing the existing btrfs up when the fs is not mounted? Should I remove it from the raid again? Also, as you might have noticed, I have a bit of an issue with the entire

Re: A partially failing disk in raid0 needs replacement

2017-11-14 Thread Klaus Agnoletti
Hi Austin Good points. Thanks a lot. /klaus On Tue, Nov 14, 2017 at 2:14 PM, Austin S. Hemmelgarn wrote: > On 2017-11-14 03:36, Klaus Agnoletti wrote: >> >> Hi list >> >> I used to have 3x2TB in a btrfs in raid0. A few weeks ago, one of the >> 2TB disks started giving me

Btrfs progs pre-release 4.14-rc1

2017-11-14 Thread David Sterba
Hi, a pre-release has been tagged. Changes: * build: libzstd now required by default * check: more lowmem mode repair enhancements * subvol set-default: also accept path * prop set: compression accepts no/none, same as "" * filesystem usage: enable for filesystem on top of a seed

Re: A partially failing disk in raid0 needs replacement

2017-11-14 Thread Klaus Agnoletti
Hi Roman, If you look at the 'show' command, the failing disk is sorta out of the fs, so maybe removing the 6TB disk again will divide the data already on the 6TB disk (which isn't more than 300something gigs) to the 2 well-functioning disks. Still, as putting the dd-image of the 2TB disk on the

Re: A partially failing disk in raid0 needs replacement

2017-11-14 Thread Kai Krakow
On Tue, 14 Nov 2017 17:48:56 +0500, Roman Mamedov wrote: > [1] Note that "ddrescue" and "dd_rescue" are two different programs > for the same purpose, one may work better than the other. I don't > remember which. :) One is a perl implementation and is the one working worse.

Re: A partially failing disk in raid0 needs replacement

2017-11-14 Thread Roman Mamedov
On Tue, 14 Nov 2017 15:09:52 +0100 Klaus Agnoletti wrote: > Hi Roman > > I almost understand :-) - however, I need a bit more information: > > How do I copy the image file to the 6TB without screwing the existing > btrfs up when the fs is not mounted? Should I remove it

Re: Read before you deploy btrfs + zstd

2017-11-14 Thread Martin Steigerwald
David Sterba - 14.11.17, 19:49: > On Tue, Nov 14, 2017 at 08:34:37AM +0100, Martin Steigerwald wrote: > > Hello David. > > > > David Sterba - 13.11.17, 23:50: > > > while 4.14 is still fresh, let me address some concerns I've seen on > > > linux > > > forums already. > > > > > > The newly added

Re: how to repair or access broken btrfs?

2017-11-14 Thread Andrei Borzenkov
On 14.11.2017 12:56, Stefan Priebe - Profihost AG wrote: > Hello, > > after a controller firmware bug / failure I have a broken btrfs. > > # parent transid verify failed on 181846016 wanted 143404 found 143399 > > running repair, fsck or zero-log always results in the same failure message: >

Re: Read before you deploy btrfs + zstd

2017-11-14 Thread David Sterba
On Tue, Nov 14, 2017 at 08:34:37AM +0100, Martin Steigerwald wrote: > Hello David. > > David Sterba - 13.11.17, 23:50: > > while 4.14 is still fresh, let me address some concerns I've seen on linux > > forums already. > > > > The newly added ZSTD support is a feature that has broader impact than

Re: Read before you deploy btrfs + zstd

2017-11-14 Thread David Sterba
On Mon, Nov 13, 2017 at 11:50:46PM +0100, David Sterba wrote: > Up to now, there are no bootloaders supporting ZSTD. I've tried to implement the support to GRUB, still incomplete and hacky but most of the code is there. The ZSTD implementation is copied from kernel. The allocators need to be