Re: RedHat 7.4 Release Notes: "Btrfs has been deprecated" - wut?
On 2017-08-12 15:42, Christoph Hellwig wrote:
> On Sat, Aug 12, 2017 at 02:10:18AM +0200, Christoph Anton Mitterer wrote:
>> Qu Wenruo wrote:
>>> Although Btrfs can disable data CoW, nodatacow also disables data checksums, which is another main feature of btrfs.
>> Then the two should probably be decoupled and support for nodatacow+checksumming be implemented?!
> And how are you going to write your data and checksum atomically when doing in-place updates?
Exactly, that's the main reason I can figure out why btrfs disables checksums for nodatacow. Thanks, Qu -- To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
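The atomicity problem Hellwig raises can be sketched with a toy model (Python, purely illustrative; this is not btrfs code): with in-place (nodatacow) updates, the data block and its checksum are two separate writes, so a crash between them leaves a state that is indistinguishable from corruption.

```python
import zlib

def write_block(store, data, crash_before_csum=False):
    """In-place update: data and checksum are two separate writes."""
    store["data"] = data
    if crash_before_csum:
        return  # simulated power loss between the two writes
    store["csum"] = zlib.crc32(data)

def verify(store):
    return zlib.crc32(store["data"]) == store["csum"]

store = {}
write_block(store, b"old contents")
assert verify(store)

# Crash mid-update: new data reached disk, checksum still covers old data.
write_block(store, b"new contents", crash_before_csum=True)
print(verify(store))  # False -- looks exactly like corruption
```

CoW sidesteps this because the new data goes to a new location and the checksum-bearing metadata is switched over atomically; without CoW there is no such single commit point.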
Re: Btrfs data recovery
Christian Rene Thelen posted on Sun, 13 Aug 2017 19:12:48 +0200 as excerpted: > I have formatted an encrypted disk, containing an LVM with a btrfs system. > > All superblocks appear to be destroyed; the btrfs-progs tools can't find > the root tree anymore and scalpel, binwalk, foremost & co return only > scrap. The filesystem was on an ssd and mounted with -o compression=lzo. > > How screwed am I? Any chances to recover some files? Is there a > plausible way to rebuild the superblock manually? Checking the raw image > with xxd gives me not a single readable word. > > I managed to decrypt the LV and dd it to an image. What can I do? Sysadmin's rule #1 of backups: The value of your data is not defined by arbitrary claims, but by the number of backups you consider it worth the trouble to make. No backups? Then you defined the data as worth less to you than the trouble and resources it would take to make them, and actions, or the lack thereof, unlike words, are facts that don't lie. So regardless, you're not screwed: if you had backups you can always recover from them, and if you didn't, then you considered the time and trouble of making backups worth more than the data itself, so in either case you saved what your actions defined as most important to you, and actions don't lie. It sounds like you saved the really important time and resources you would otherwise have put into making those backups, so you can be happy: the data was self-evidently worth less to you than the time and resources you saved. =:^) Meanwhile/alternatively, because I've learned the value of my data as defined by backups too... Consider the lesson of Hurricane Katrina. During the hurricane and the immediate aftermath, Intercosmos/directNIC (a hosting company located in New Orleans) had a small team that stayed on-site, keeping the servers up and the data available, and blogging about their experience.
Many sysadmins and other technically inclined users were glued to that blog, living for each update. I was certainly among them. (2005) https://www.feld.com/archives/2005/09/blogging-from-a-new-orleans-data-center.html But at the same time I was seeing the wider news out of New Orleans. The looting. The people who /thought/ they were safe on that bridge, only to be slain by the police that were /supposed/ to be protecting them. The aftermath, with the raw sewage, and bloated and decaying animal and occasional human bodies floating by. Of course that got me thinking about /real/ tragedy. If you are still relatively healthy, have a home to go to at night, food on the table, in the fridge, or money to buy it at the burger/taco/sandwich shop down the street, and a family and/or friends likewise fortunate, you have the /truly/ important stuff, and with a bit of perspective you can see the triviality, in the bigger picture, of losing some data. Even if that data was irreplaceable family photos, consider how much more fortunate you are than the folks who just lost all that and more to a fire or flood... or as refugees just robbed of the last /truly/ valuable thing they had other than life itself: their family, or part of it, washed overboard. https://en.wikipedia.org/wiki/Death_of_Alan_Kurdi (2015) And if your lack of backups defined the data as trivial and you now regret it, well... be glad you'll live another day and get the chance to create more... this time defining the data as more valuable than what you lost, by having more and/or more frequently updated backups thereof. -- Duncan - List replies preferred. No HTML msgs. "Every nonfree program has a lord, a master -- and if you use the program, he is your master." Richard Stallman
Re: [PATCH v4 4/5] squashfs: Add zstd support
On Sun, Aug 13, 2017 at 9:31 AM, Geert Uytterhoeven wrote: > On Fri, Aug 4, 2017 at 10:19 PM, Nick Terrell wrote: >> --- a/fs/squashfs/decompressor.c >> +++ b/fs/squashfs/decompressor.c >> @@ -65,6 +65,12 @@ static const struct squashfs_decompressor >> squashfs_zlib_comp_ops = { >> }; >> #endif >> >> +#ifndef CONFIG_SQUASHFS_ZSTD >> +static const struct squashfs_decompressor squashfs_zstd_comp_ops = { >> + NULL, NULL, NULL, NULL, ZSTD_COMPRESSION, "zstd", 0 > > Can you please use designated initializers? I prefer it as it is. It matches the coding style that I used in the rest of that file to declare the "unsupported" compressor entries (if this patch used a different style it would look odd). There are no pointers to functions being assigned here, and it makes for a short and concise one-liner. Phillip > >> +}; >> +#endif >> + >> static const struct squashfs_decompressor squashfs_unknown_comp_ops = { >> NULL, NULL, NULL, NULL, 0, "unknown", 0 >> }; > >> +const struct squashfs_decompressor squashfs_zstd_comp_ops = { >> + .init = zstd_init, >> + .free = zstd_free, >> + .decompress = zstd_uncompress, >> + .id = ZSTD_COMPRESSION, >> + .name = "zstd", >> + .supported = 1 >> +}; > > ... like you did here. > > Gr{oetje,eeting}s, > > Geert > > -- > Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- > ge...@linux-m68k.org > > In personal conversations with technical people, I call myself a hacker. But > when I'm talking to journalists I just say "programmer" or something like > that. > -- Linus Torvalds
Re: btrfs issue with mariadb incremental backup
Hi Chris, Thank you very much for your suggestion. I don't remember at which step I made the mistake that left the mysql subvolume with a Received UUID. I have done the following and it works as it should now. [root@backuplogC7 mariadb]# btrfs subvolume snapshot mbroken_201708070830 rw_201708070830 Create a snapshot of 'mbroken_201708070830' in './rw_201708070830' [root@backuplogC7 mariadb]# btrfs sub list . ID 257 gen 542 top level 5 path mbroken ID 317 gen 576 top level 5 path mbroken_201708070830 ID 318 gen 568 top level 5 path mbroken_201708080830 ID 319 gen 569 top level 5 path mbroken_201708090830 ID 320 gen 570 top level 5 path mbroken_201708100830 ID 321 gen 571 top level 5 path mbroken_201708110830 ID 322 gen 572 top level 5 path mbroken_201708120830 ID 323 gen 573 top level 5 path mbroken_201708130830 ID 324 gen 543 top level 5 path mysql ID 348 gen 576 top level 5 path rw_201708070830 [root@backuplogC7 mariadb]# btrfs subvolume snapshot mbroken_201708080830 rw_201708080830 Create a snapshot of 'mbroken_201708080830' in './rw_201708080830' [root@backuplogC7 mariadb]# btrfs subvolume snapshot mbroken_201708090830 rw_201708090830 Create a snapshot of 'mbroken_201708090830' in './rw_201708090830' [root@backuplogC7 mariadb]# btrfs subvolume snapshot mbroken_201708100830 rw_201708100830 Create a snapshot of 'mbroken_201708100830' in './rw_201708100830' [root@backuplogC7 mariadb]# btrfs subvolume snapshot mbroken_201708110830 rw_201708110830 Create a snapshot of 'mbroken_201708110830' in './rw_201708110830' [root@backuplogC7 mariadb]# btrfs subvolume snapshot mbroken_201708120830 rw_201708120830 Create a snapshot of 'mbroken_201708120830' in './rw_201708120830' [root@backuplogC7 mariadb]# btrfs subvolume snapshot mbroken_201708130830 rw_201708130830 Create a snapshot of 'mbroken_201708130830' in './rw_201708130830' [root@backuplogC7 mariadb]# btrfs sub list .
ID 257 gen 542 top level 5 path mbroken ID 317 gen 576 top level 5 path mbroken_201708070830 ID 318 gen 577 top level 5 path mbroken_201708080830 ID 319 gen 578 top level 5 path mbroken_201708090830 ID 320 gen 579 top level 5 path mbroken_201708100830 ID 321 gen 580 top level 5 path mbroken_201708110830 ID 322 gen 581 top level 5 path mbroken_201708120830 ID 323 gen 582 top level 5 path mbroken_201708130830 ID 324 gen 543 top level 5 path mysql ID 348 gen 576 top level 5 path rw_201708070830 ID 349 gen 577 top level 5 path rw_201708080830 ID 350 gen 578 top level 5 path rw_201708090830 ID 351 gen 579 top level 5 path rw_201708100830 ID 352 gen 580 top level 5 path rw_201708110830 ID 353 gen 581 top level 5 path rw_201708120830 ID 354 gen 582 top level 5 path rw_201708130830 [root@backuplogC7 mariadb]# btrfs subvolume list -a -R . | grep "3ad0334a-4063-654c-add6-b1cbcdeaa639" ID 257 gen 542 top level 5 received_uuid 3ad0334a-4063-654c-add6-b1cbcdeaa639 path mbroken ID 317 gen 576 top level 5 received_uuid 3ad0334a-4063-654c-add6-b1cbcdeaa639 path mbroken_201708070830 ID 318 gen 577 top level 5 received_uuid 3ad0334a-4063-654c-add6-b1cbcdeaa639 path mbroken_201708080830 ID 319 gen 578 top level 5 received_uuid 3ad0334a-4063-654c-add6-b1cbcdeaa639 path mbroken_201708090830 ID 320 gen 579 top level 5 received_uuid 3ad0334a-4063-654c-add6-b1cbcdeaa639 path mbroken_201708100830 ID 321 gen 580 top level 5 received_uuid 3ad0334a-4063-654c-add6-b1cbcdeaa639 path mbroken_201708110830 ID 322 gen 581 top level 5 received_uuid 3ad0334a-4063-654c-add6-b1cbcdeaa639 path mbroken_201708120830 ID 323 gen 582 top level 5 received_uuid 3ad0334a-4063-654c-add6-b1cbcdeaa639 path mbroken_201708130830 [root@backuplogC7 mariadb]# btrfs sub snap -r rw_201708070830 mysql_201708070830 Create a readonly snapshot of 'rw_201708070830' in './mysql_201708070830' [root@backuplogC7 mariadb]# btrfs sub snap -r rw_201708080830 mysql_201708080830 Create a readonly snapshot of 'rw_201708080830' in 
'./mysql_201708080830' [root@backuplogC7 mariadb]# btrfs sub snap -r rw_201708090830 mysql_201708090830 Create a readonly snapshot of 'rw_201708090830' in './mysql_201708090830' [root@backuplogC7 mariadb]# btrfs sub snap -r rw_201708100830 mysql_201708100830 Create a readonly snapshot of 'rw_201708100830' in './mysql_201708100830' [root@backuplogC7 mariadb]# btrfs sub snap -r rw_201708110830 mysql_201708110830 Create a readonly snapshot of 'rw_201708110830' in './mysql_201708110830' [root@backuplogC7 mariadb]# btrfs sub snap -r rw_201708120830 mysql_201708120830 Create a readonly snapshot of 'rw_201708120830' in './mysql_201708120830' [root@backuplogC7 mariadb]# btrfs sub snap -r rw_201708130830 mysql_201708130830 Create a readonly snapshot of 'rw_201708130830' in './mysql_201708130830' [root@backuplogC7 mariadb]# btrfs sub list . ID 257 gen 542 top level 5 path mbroken ID 317 gen 576 top level 5 path mbroken_201708070830 ID 318 gen 577 top level 5 path mbroken_201708080830 ID 319 gen 578 top level 5 path mbroken_201708090830 ID 320 gen 579 top level 5 path mbroken_201708100830 ID 32
Re: btrfs issue with mariadb incremental backup
Hi "A L", Thank you very much for your suggestion. I've got it; it works properly now. What I have done is the following: mv all current mysql and snapshot mysql subvolumes to mbroken and mbroken_yymmddhhmm; btrfs subvolume snapshot mbroken to mysql (to make it a rw subvolume without Received UUID); btrfs subvolume snapshot mbroken_yymmddhhmm to rw_yymmddhhmm (to make it a rw subvolume without Received UUID); btrfs subvolume snap -r rw_yymmddhhmm to mysql_yymmddhhmm (to make a ro snapshot). After this, just send the first ro snapshot in full (mysql_201708070830) and send incrementals until the end (mysql_201708130830). [root@backuplogC7 mariadb]# btrfs subvolume snapshot mbroken_201708070830 rw_201708070830 Create a snapshot of 'mbroken_201708070830' in './rw_201708070830' [root@backuplogC7 mariadb]# btrfs sub list . ID 257 gen 542 top level 5 path mbroken ID 317 gen 576 top level 5 path mbroken_201708070830 ID 318 gen 568 top level 5 path mbroken_201708080830 ID 319 gen 569 top level 5 path mbroken_201708090830 ID 320 gen 570 top level 5 path mbroken_201708100830 ID 321 gen 571 top level 5 path mbroken_201708110830 ID 322 gen 572 top level 5 path mbroken_201708120830 ID 323 gen 573 top level 5 path mbroken_201708130830 ID 324 gen 543 top level 5 path mysql ID 348 gen 576 top level 5 path rw_201708070830 [root@backuplogC7 mariadb]# btrfs subvolume snapshot mbroken_201708080830 rw_201708080830 Create a snapshot of 'mbroken_201708080830' in './rw_201708080830' [root@backuplogC7 mariadb]# btrfs subvolume snapshot mbroken_201708090830 rw_201708090830 Create a snapshot of 'mbroken_201708090830' in './rw_201708090830' [root@backuplogC7 mariadb]# btrfs subvolume snapshot mbroken_201708100830 rw_201708100830 Create a snapshot of 'mbroken_201708100830' in './rw_201708100830' [root@backuplogC7 mariadb]# btrfs subvolume snapshot mbroken_201708110830 rw_201708110830 Create a snapshot of 'mbroken_201708110830' in './rw_201708110830' [root@backuplogC7 mariadb]# btrfs subvolume snapshot
mbroken_201708120830 rw_201708120830 Create a snapshot of 'mbroken_201708120830' in './rw_201708120830' [root@backuplogC7 mariadb]# btrfs subvolume snapshot mbroken_201708130830 rw_201708130830 Create a snapshot of 'mbroken_201708130830' in './rw_201708130830' [root@backuplogC7 mariadb]# btrfs sub list . ID 257 gen 542 top level 5 path mbroken ID 317 gen 576 top level 5 path mbroken_201708070830 ID 318 gen 577 top level 5 path mbroken_201708080830 ID 319 gen 578 top level 5 path mbroken_201708090830 ID 320 gen 579 top level 5 path mbroken_201708100830 ID 321 gen 580 top level 5 path mbroken_201708110830 ID 322 gen 581 top level 5 path mbroken_201708120830 ID 323 gen 582 top level 5 path mbroken_201708130830 ID 324 gen 543 top level 5 path mysql ID 348 gen 576 top level 5 path rw_201708070830 ID 349 gen 577 top level 5 path rw_201708080830 ID 350 gen 578 top level 5 path rw_201708090830 ID 351 gen 579 top level 5 path rw_201708100830 ID 352 gen 580 top level 5 path rw_201708110830 ID 353 gen 581 top level 5 path rw_201708120830 ID 354 gen 582 top level 5 path rw_201708130830 [root@backuplogC7 mariadb]# btrfs subvolume list -a -R . 
| grep "3ad0334a-4063-654c-add6-b1cbcdeaa639" ID 257 gen 542 top level 5 received_uuid 3ad0334a-4063-654c-add6-b1cbcdeaa639 path mbroken ID 317 gen 576 top level 5 received_uuid 3ad0334a-4063-654c-add6-b1cbcdeaa639 path mbroken_201708070830 ID 318 gen 577 top level 5 received_uuid 3ad0334a-4063-654c-add6-b1cbcdeaa639 path mbroken_201708080830 ID 319 gen 578 top level 5 received_uuid 3ad0334a-4063-654c-add6-b1cbcdeaa639 path mbroken_201708090830 ID 320 gen 579 top level 5 received_uuid 3ad0334a-4063-654c-add6-b1cbcdeaa639 path mbroken_201708100830 ID 321 gen 580 top level 5 received_uuid 3ad0334a-4063-654c-add6-b1cbcdeaa639 path mbroken_201708110830 ID 322 gen 581 top level 5 received_uuid 3ad0334a-4063-654c-add6-b1cbcdeaa639 path mbroken_201708120830 ID 323 gen 582 top level 5 received_uuid 3ad0334a-4063-654c-add6-b1cbcdeaa639 path mbroken_201708130830 [root@backuplogC7 mariadb]# btrfs sub snap -r rw_201708070830 mysql_201708070830 Create a readonly snapshot of 'rw_201708070830' in './mysql_201708070830' [root@backuplogC7 mariadb]# btrfs sub snap -r rw_201708080830 mysql_201708080830 Create a readonly snapshot of 'rw_201708080830' in './mysql_201708080830' [root@backuplogC7 mariadb]# btrfs sub snap -r rw_201708090830 mysql_201708090830 Create a readonly snapshot of 'rw_201708090830' in './mysql_201708090830' [root@backuplogC7 mariadb]# btrfs sub snap -r rw_201708100830 mysql_201708100830 Create a readonly snapshot of 'rw_201708100830' in './mysql_201708100830' [root@backuplogC7 mariadb]# btrfs sub snap -r rw_201708110830 mysql_201708110830 Create a readonly snapshot of 'rw_201708110830' in './mysql_201708110830' [root@backuplogC7 mariadb]# btrfs sub snap -r rw_201708120830 mysql_201708120830 Create a readonly snapshot of 'rw_201708120830' in './mysql_201708120830' [root@backuplogC7 mariadb]# btrfs sub snap -r rw_
Building a BTRFS test machine
Would it be useful to build a BTRFS test machine that performs both software tests (btrfs send | btrfs receive, reading/writing random data, etc.) and hardware tests, such as abrupt power-off or physically removing a disk from a RAID-X array? If it would be useful, what tests should it cover?
Re: [RFC] Checksum of the parity
On Sun, Aug 13, 2017 at 8:45 PM, Chris Murphy wrote: > Further, the error detection of corrupt reconstruction is why I say > Btrfs is not subject *in practice* to the write hole problem. [2] > > [1] > I haven't tested the raid6 normal read case where a stripe contains > corrupt data strip and corrupt P strip, and Q strip is good. I expect > instead of EIO, we get a reconstruction from Q, and then both data and > P get fixed up, but I can't find it in comments or code. Yes, that's what I would expect (which theoretically makes the odds of successful recovery better on RAID6, possibly "good enough"), but I have no clue how that actually gets handled right now (I guess the current code isn't that thorough). > [2] > Is Btrfs subject to the write hole problem manifesting on disk? I'm > not sure, sadly I don't read the code well enough. But if all Btrfs > raid56 writes are full stripe CoW writes, and if the prescribed order > guarantees still happen: data CoW to disk > metadata CoW to disk > > superblock update, then I don't see how the write hole happens. Write > hole requires: RMW of a stripe, which is a partial stripe overwrite, > and a crash during the modification of the stripe making that stripe > inconsistent as well as still pointed to by metadata. I guess the problem is that the stripe size or the stripe element size is (sort of) fixed; I'm not sure which one, but I guess it's the latter, in which case the actual stripe size depends on the number of devices. It is also relatively big, much bigger than the usual 4k sector size or even the leaf size, which now defaults to 16k (I set this to 4k myself). So a partial stripe update (RMW) is certainly possible during generic use. This is why I threw the idea around a few months ago of resurrecting that old (but dead-looking / stuck) project to make the stripe (element) size configurable by the user.
That would allow making the stripe size equal to the filesystem sector size on a limited set of setups (for example, 5 or 6 HDDs with 512-byte physical sectors in RAID-5 or RAID-6 respectively), which would, as I understand it, practically eliminate the problem, at least on the filesystem side. I am not sure whether the HDD's volatile write cache, or at least its internal re-ordering feature, would still need to be disabled to really avoid inconsistencies between stripe elements. I can't recall ever seeing partially written sectors (we would know, since these are checksummed in place and thus appear unreadable if partially written); I guess there is usually enough charge in some small capacitor to finish writing the current sector after the power gets cut.
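The write-hole scenario discussed above can be modelled with plain XOR parity (a toy sketch, not btrfs code): a partial-stripe RMW that is interrupted after updating a data strip but before updating parity leaves the stripe internally inconsistent, so a later reconstruction after a disk loss silently returns wrong data.

```python
from functools import reduce

def xor(*strips):
    """XOR equal-length strips byte-by-byte (RAID5-style parity)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*strips))

# Three data strips plus parity, all consistent.
d = [b"AAAA", b"BBBB", b"CCCC"]
p = xor(*d)
assert xor(d[1], d[2], p) == d[0]  # reconstructing strip 0 works

# RMW write hole: strip 1 is rewritten in place, crash before parity update.
d[1] = b"XXXX"   # new data reached the disk
                 # ...power lost here, p was NOT updated

# Later the disk holding strip 0 dies; reconstruction is silently wrong.
rebuilt = xor(d[1], d[2], p)
print(rebuilt == b"AAAA")  # False -- the write hole
```

With btrfs data checksums the wrong reconstruction would at least be detected on read; the point in the thread is that full-stripe CoW writes (or a stripe size matching the sector size) would avoid creating the inconsistent window in the first place.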
Re: btrfs issue with mariadb incremental backup
From: siranee...@tpc.co.th -- Sent: 2017-08-13 - 16:00 > Hi "A L", > > As per your suggestion, does it mean I have to clear the > "Received UUID" on all my current subvolumes and send them again as full snapshots? You need to clear the 'Received UUID' on all subvolumes in order to use send-receive safely. Once you have done that, you can start over with new incremental sends (after the initial full send). I suggest removing any existing snapshots of those volumes to avoid confusion. >>> mysql >>> mysql_201708060830 >>> mysql_201708070830 >>> mysql_201708080830 >>> mysql_201708090830 >>> mysql_201708100830 >>> mysql_201708110830 >>> mysql_201708120830 >>> mysql_201708130830 > > The next day that I take a snapshot with the script it will be > mysql_201708140830 > > which will be sent incrementally from mysql_201708130830 to > mysql_201708140830 > > Is this correct? Yes, but your current snapshots can't be used because of the 'Received UUID'. > > I do not understand the URL that you gave me. btrbk is a backup tool for btrfs. It detects the problem with the 'Received UUID'. The FAQ explains how to remove the 'Received UUID' from the source volumes and snapshots. It is safer to remove all existing snapshots than to try to fix them and risk the integrity of the data. > > "I'm getting an error: Aborted: "Received UUID" is set > > You probably restored a backup with send-receive, and made it read/write > using btrfs > property set. This is bad, as all snapshots and backups will inherit this > identical > "Received UUID", which results in all these subvolumes being treated as > "containing the same data". > > To fix this, create a "proper" snapshot: > > - This matches your suggestion for the subvolume "mysql" > > # cd /mnt/btr_pool > # mv mysubvolume mysubvolume.broken > # btrfs subvolume snapshot mysubvolume.broken mysubvolume > > Now, mysubvolume should have an empty "Received UUID".
Note that in order to > have a > clean environment, you also need to fix all subvolumes (snapshots as well as > backups) that you created with the broken subvolume. > > Check if there are more broken subvolumes: > > # btrfs subvolume show mysubvolume.broken > # btrfs subvolume list -a -R /mnt/btr_pool | grep <"Received UUID" from above> > # btrfs subvolume list -a -R /mnt/btr_backup | grep <"Received UUID" from > above> > > - This guide seem that I have to clear <"Received UUID" > only the subvolume > "mysql" > and the others ("mysql_201708070830" should using btrfs subvolume snapshot -r > instead of btrfs subvolume snapshot. Is this correct? > > Now clean all subvolume listed (same as above, but using btrfs subvolume > snapshot -r > now). Then delete all the broken subvolumes: > > # btrfs subvolume delete *.broken > > Finally, you should have a clean environment, and btrbk will not complain any > more. > > > Best Regards, > Siranee Jaraswachirakul. > >> >> >> On 8/13/2017 12:52 PM, siranee...@tpc.co.th wrote: >>> Hi "A L", >>> >>> [root@backuplogC7 ~]# btrfs sub show /var/lib/mariadb/mysql >>> /var/lib/mariadb/mysql >>> Name: mysql >>> UUID: 92f319c5-e132-3249-9b13-d39ee77a2b44 >>> Parent UUID:- >>> Received UUID: 3ad0334a-4063-654c-add6-b1cbcdeaa639 >>> Creation time: 2017-06-21 13:27:41 +0700 >>> Subvolume ID: 257 >>> Generation: 539 >>> Gen at creation:9 >>> Parent ID: 5 >>> Top level ID: 5 >>> Flags: - >>> Snapshot(s): >>> mysql_201708060830 >>> mysql_201708070830 >>> mysql_201708080830 >>> mysql_201708090830 >>> mysql_201708100830 >>> mysql_201708110830 >>> mysql_201708120830 >>> mysql_201708130830 >>> >>> yes I think it has Received UUID because I restored the source from snapshot >>> mysql_201708040830 for prove that the local snapshot was work. >>> >>> How to clear the Received UUID ? >>> What to do next? >> You need to make a read-write snapshot of /var/lib/mariadb/mysql and >> then remove the old subvolume and all its snapshots. 
>> >> Example from https://github.com/digint/btrbk/blob/master/doc/FAQ.md >> >> # cd /mnt/btr_pool >> # mv mysubvolume mysubvolume.broken >> # btrfs subvolume snapshot mysubvolume.broken mysubvolume >> >> You can do the same with each of your snaps
Re: btrfs issue with mariadb incremental backup
On Sun, Aug 13, 2017 at 4:52 AM, wrote: > Hi "A L", > > [root@backuplogC7 ~]# btrfs sub show /var/lib/mariadb/mysql > /var/lib/mariadb/mysql > Name: mysql > UUID: 92f319c5-e132-3249-9b13-d39ee77a2b44 > Parent UUID: - > Received UUID: 3ad0334a-4063-654c-add6-b1cbcdeaa639 > Creation time: 2017-06-21 13:27:41 +0700 > Subvolume ID: 257 > Generation: 539 > Gen at creation: 9 > Parent ID: 5 > Top level ID: 5 > Flags: - > Snapshot(s): > mysql_201708060830 > mysql_201708070830 > mysql_201708080830 > mysql_201708090830 > mysql_201708100830 > mysql_201708110830 > mysql_201708120830 > mysql_201708130830 > > Yes, I think it has a Received UUID because I restored the source from snapshot > mysql_201708040830 to prove that the local snapshot worked. > > How do I clear the Received UUID? > What do I do next? I'm using btrfs-progs 4.12, and I just did a btrfs send/receive from file system A to B; and then on B I took a rw snapshot of the ro snapshot. The ro snapshot has a Received UUID, but the rw snapshot made from it does not. So now I'm confused how you have a rw snapshot with a Received UUID set. Did you ever use 'btrfs property set' to change a ro snapshot to rw? That's the only way I can think it's possible. The only other thing is that maybe the behavior has changed since btrfs-progs 4.4. The changelogs show there are improvements in send/receive, in particular in 4.8.3 and 4.8.4, but also others. I have no idea which are related. But anyway, this is one of the reasons why the expert users on this list almost always say to use something newer, because it's just too hard to remember, even with changelogs, what's fixed and where. My suggestion is to investigate moving to the kernel 4.9 series; and it should be safe to move to btrfs-progs 4.12 now. Any new features in 4.12 that can't be supported with an older kernel should fail gracefully. No matter what, you must have separate backups.
You can't depend exclusively on one backup, no matter the method. It's fine to use send/receive for backup1. But backup2 should be a conventional backup, e.g. XFS with rsync. Basically this is a game of risk, and the more independent backups you have, the more failure cases (especially user error) you can recover from, at the expense of some complexity. -- Chris Murphy
Re: btrfs issue with mariadb incremental backup
On Sun, Aug 13, 2017 at 4:49 AM, wrote: > [root@backuplogC7 ~]# btrfs send /var/lib/mariadb/mysql_201708090830 | ssh > 192.168.45.166 btrfs receive /var/lib/mariadb > At subvol /var/lib/mariadb/mysql_201708090830 > At subvol mysql_201708090830 > [root@backuplogC7 ~]# rsync -avnc /var/lib/mariadb/mysql_201708090830/ > root@192.168.45.166:/var/lib/mariadb/mysql_201708090830/ > sending incremental file list > ./ > > sent 3773 bytes received 19 bytes 842.67 bytes/sec > total size is 718361496 speedup is 189441.32 (DRY RUN) OK, so the full send is sane. That suggests the file systems on both sides are OK, and that the problem is related strictly to incremental send/receive. It suggests that in some critical way either an origin parent or its copy on the destination is not identical. Why will be hard to figure out, and I'm not even sure it's worth it. Right now you basically have a workaround: a known good full send. You can start doing incremental send/receive again using this full send/receive snapshot as the parent (with -p). Where confusion can happen is if you have to restore a snapshot: I think you must commit to a whole new canonical tree, with the reverse send/receive being a full one, NOT incremental. I'm asserting this; I do not know if it is true, and I don't know if the tools should prevent reverse incremental send/receive, but the very idea of reversing an incremental send/receive seems logically problematic to me. Any time you reverse the send/receive direction it must be full. I think. I did once get into trouble with this myself, having to do a restore from backup to a new file system. I then took a rw snapshot of it and made that the canonical live subvolume. I then ro-snapshotted it and tried to do an incremental send *back* to backup, and it failed with an error I don't remember, but I think it was send/receive versions later than 4.4 where there's some extra sanity checking to prevent this.
And I was able to work around it with the -c (clone) option. Anyway, I got sufficiently confused myself after all of that that I pretty much gave up on such a strategy, blew away all the ro snapshots on both sides, and started from scratch with a new full send, then resumed the incrementals unidirectionally. And any time they are reversed I assume it must be a full send. Anyway, I think it's perilous to reverse directions while also trying to do incremental send/receive without a lot of testing to thoroughly understand what's going on. And I basically got to "fuck this, not worth the complexity". -- Chris Murphy
Re: [RFC] Checksum of the parity
On Sun, Aug 13, 2017 at 8:16 AM, Goffredo Baroncelli wrote: > Hi all, > > In the BTRFS wiki, on the status page, the RAID5/6 line reports that the parity is not checksummed. This has been reported several times on the ML, and also on other sites (e.g. Phoronix), as a BTRFS defect. > > However I was unable to understand it, and I suspect that this is a false myth. > > So my question is: could the fact that in BTRFS RAID5/6 the parity is not checksummed be considered a defect? > > My goal is to verify whether there is a rationale for requiring the parity to be checksummed, and if not, I would like to remove this from the wiki. It is not a per se defect. If parity is corrupt, and parity is needed for reconstruction, the reconstruction will be corrupt, but that is then detected and we get EIO. [1] Further, the error detection of corrupt reconstruction is why I say Btrfs is not subject *in practice* to the write hole problem. [2] [1] I haven't tested the raid6 normal read case where a stripe contains a corrupt data strip and a corrupt P strip, and the Q strip is good. I expect instead of EIO we get a reconstruction from Q, and then both data and P get fixed up, but I can't find it in comments or code. https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/tree/fs/btrfs/raid56.c?h=v4.12.7 line 1851, I'm not sure where we're at exactly at this line; it seems like it must be a scrub, because P & Q are not relevant if the data is good. [2] Is Btrfs subject to the write hole problem manifesting on disk? I'm not sure, sadly I don't read the code well enough. But if all Btrfs raid56 writes are full stripe CoW writes, and if the prescribed order guarantees still happen: data CoW to disk > metadata CoW to disk > superblock update, then I don't see how the write hole happens. Write hole requires: RMW of a stripe, which is a partial stripe overwrite, and a crash during the modification of the stripe, making that stripe inconsistent as well as still pointed to by metadata.
-- Chris Murphy
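Why unchecksummed parity is caught in practice can be sketched with the same toy XOR model (illustrative only, not btrfs code): data blocks carry checksums, so a reconstruction computed from corrupt parity fails the data checksum and results in EIO rather than silently wrong data.

```python
import zlib
from functools import reduce

def xor(*strips):
    """XOR equal-length strips byte-by-byte (RAID5-style parity)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*strips))

data = [b"AAAA", b"BBBB", b"CCCC"]
csums = [zlib.crc32(s) for s in data]   # btrfs checksums data, not parity
parity = xor(*data)

parity = b"\xff" + parity[1:]           # bitrot hits the parity strip

# Disk holding strip 0 fails; rebuild it from the (corrupt) parity.
rebuilt = xor(data[1], data[2], parity)

# The data checksum catches the bad reconstruction -> EIO, not bad data.
print(zlib.crc32(rebuilt) == csums[0])  # False
```

This is the thread's argument: checksumming the parity itself would only let you blame the parity strip earlier; the existing data checksums already prevent a corrupt reconstruction from being returned to the reader.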
Re: Btrfs data recovery
On Sun, Aug 13, 2017 at 11:12 AM, Christian Rene Thelen wrote: > I have formated an encrypted disk, containing a LVM with a btrfs system. > > All superblocks appear to be destroyed; This is an unclear description. I don't understand the exact layout of the storage stack, and what part of it you formatted. For example, I can't tell if the whole block device is encrypted, a partition is encrypted, or if it's the LV that's encrypted. And I can't tell if the formatting was a mistake, and what you accidentally formatted. I can't tell if the encrypted device opens without error, or if the LV is discovered. You need to be really clear, because any changes you make dramatically increase the chance of total data loss. What do you get for: $ sudo btrfs rescue super-recover -v /dev/mapper/... This should be the logical block device that contains the Btrfs file system, the device you would mount (if it weren't damaged). It's possible but somewhat unlikely that all of the supers are damaged; it depends on the size of the file system and what you formatted. -- Chris Murphy
Re: Btrfs data recovery
On Sun, Aug 13, 2017 at 07:12:48PM +0200, Christian Rene Thelen wrote:
> I have formated an encrypted disk, containing a LVM with a btrfs system.

What did you format it as? (i.e. what are the locations of the damaged blocks?)

> All superblocks appear to be destroyed; the btrfs-progs tools can't
> find the root tree anymore and scalpel, binwalk, foremost & co
> return only scrap. The filesystem was on an ssd and mounted with -o
> compression=lzo.

The compression would explain the junk you're getting from the carving tools. They tend to rely on being able to identify sequences of bytes as something recognisable -- compression defeats that by reducing everything to (statistically) random bits.

> How screwed am I?

Quite badly.

> Any chances to recover some files?

The compression isn't helping, as noted above. The metadata will be uncompressed, though, so that should be readable, depending on how much was formatted/damaged in the original incident.

> Is there a plausible way to rebuild the superblock manually?
> Checking the raw image with xxd gives me not a single readable word.

That's unsurprising. Metadata isn't human-readable, and nor is compressed data.

Did you ever balance this filesystem? More particularly, did you ever balance the metadata? If you did, then there's a good chance it wasn't at the front of the device, and so has a much smaller chance of being damaged.

> I managed to decrypt the LV and dd it to an image. What can I do?

btrfs-find-root may be able to find some of the tree heads. That at minimum is the information you need in order to reconstruct the superblock (well, that plus the UUID, but the UUID is going to be all over the place -- it shouldn't be hard to find that if the rest is discoverable).

That said, recovering this is going to be somewhere between very hard and miraculous.

Hugo.

--
Hugo Mills             | But somewhere along the line, it seems
hugo@... carfax.org.uk | That pimp became cool, and punk mainstream.
http://carfax.org.uk/ | PGP: E2AB1DE4 | Machinae Supremacy, Rise
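On the question of rebuilding the superblock: Btrfs keeps superblock copies at fixed byte offsets (64 KiB primary, mirrors at 64 MiB and 256 GiB on sufficiently large devices), each with the magic "_BHRfS_M" at byte 0x40 of the superblock. So checking an image for surviving copies is a small script; a sketch (the image path is whatever you dd'd the decrypted LV to):

```python
# Toy scan of a disk image for surviving btrfs superblock copies.
# Offsets/magic are from the on-disk format; everything else is a sketch.
BTRFS_MAGIC = b"_BHRfS_M"  # at byte 0x40 inside each superblock
SUPER_OFFSETS = [0x10000, 0x4000000, 0x40_0000_0000]  # 64 KiB, 64 MiB, 256 GiB

def find_supers(image_path):
    """Return the offsets of superblock copies whose magic survived."""
    hits = []
    with open(image_path, "rb") as f:
        for off in SUPER_OFFSETS:
            f.seek(off + 0x40)
            if f.read(8) == BTRFS_MAGIC:  # read() past EOF returns b""
                hits.append(off)
    return hits
```

If no offset hits, all three copies really are gone and you are down to btrfs-find-root as Hugo describes.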
Btrfs data recovery
I have formatted an encrypted disk containing an LVM with a btrfs system.

All superblocks appear to be destroyed; the btrfs-progs tools can't find the root tree anymore, and scalpel, binwalk, foremost & co. return only scrap. The filesystem was on an SSD and mounted with -o compression=lzo.

How screwed am I? Any chance to recover some files? Is there a plausible way to rebuild the superblock manually? Checking the raw image with xxd gives me not a single readable word.

I managed to decrypt the LV and dd it to an image. What can I do?
lazytime mount option—no support in Btrfs
Back in 2014 Ted Ts'o introduced the lazytime mount option for ext4, and shortly thereafter a more generic VFS implementation, which was then merged into mainline. His early patches included support for Btrfs, but those changes were removed before the feature was merged. His changelog includes the following note about the removal:

- Per Christoph's suggestion, drop support for btrfs and xfs for now, issues with how btrfs and xfs handle dirty inode tracking. We can add btrfs and xfs support back later or at the end of this series if we want to revisit this decision.

My reading of current mainline shows that Btrfs still lacks any support for lazytime. Has any thought been given to adding lazytime support to Btrfs?

Thanks,
Adam
[RFC] Checksum of the parity
Hi all,

In the BTRFS wiki, on the status page, in the RAID5/6 line it is reported that the parity is not checksummed. This has been reported several times on the ML and also on other sites (e.g. Phoronix) as a BTRFS defect.

However, I was unable to understand it, and I suspect this is a false myth.

So my question is: could the fact that in BTRFS RAID5/6 the parity is not checksummed be considered a defect?

My goal is to verify whether there is a rationale for requiring the parity to be checksummed, and if not, I would like to remove this from the wiki.

BR
G.Baroncelli

--
gpg @keyserver.linux.it: Goffredo Baroncelli
Key fingerprint BBF5 1610 0B64 DAC6 5F7D 17B2 0EDA 9B37 8B82 E0B5
Re: RedHat 7.4 Release Notes: "Btrfs has been deprecated" - wut?
On 08/12/2017 02:12 PM, Hugo Mills wrote:
> On Sat, Aug 12, 2017 at 01:51:46PM +0200, Christoph Anton Mitterer wrote:
>> On Sat, 2017-08-12 at 00:42 -0700, Christoph Hellwig wrote: [...]
>> good, but csum is not
>
> I don't think this is a particularly good description of the
> problem. I'd say it's more like this:
>
> If you write data and metadata separately (which you have to do in
> the nodatacow case), and the system halts between the two writes, then
> you either have the new data with the old csum, or the old data with
> the new csum. Both data and csum are "good", but good from different
> states of the FS. In both cases (data first or metadata first), the
> csum doesn't match the data, and so you now have an I/O error reported
> when trying to read that data.
>
> You can't easily fix this, because when the data and csum don't
> match, you need to know the _reason_ they don't match -- is it because
> the machine was interrupted during write (in which case you can fix
> it), or is it because the hard disk has had someone write data to it
> directly, and the data is now toast (in which case you shouldn't fix
> the I/O error)?

I am still inclined to think that this kind of problem could be solved using a journal. If you track which blocks are updated in a transaction, together with their checksums, then if the transaction is interrupted you can always rebuild the data/checksum pair. In case of an interrupted transaction:
- all CoW data is trashed
- some NOCOW data might be written
- all metadata (which is CoW) is trashed

Supposing Btrfs logged, for each transaction, which NOCOW data blocks will be updated and their checksums, then when a transaction is interrupted you know which blocks have to be checked, and you can verify whether the checksum matches and correct any mismatch.
Logging the checksum as well could help to identify whether:
- the data is old
- the data is updated
- the updated data is correct

The same approach could also be used to solve the issue related to the infamous RAID5/6 write hole: by logging which blocks are updated, in case of an aborted transaction you can check the parity that has to be rebuilt.

> Basically, nodatacow bypasses the very mechanisms that are meant to
> provide consistency in the filesystem.
>
> Hugo.

--
gpg @keyserver.linux.it: Goffredo Baroncelli
Key fingerprint BBF5 1610 0B64 DAC6 5F7D 17B2 0EDA 9B37 8B82 E0B5
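The intent-log idea above can be sketched as: before overwriting a NOCOW block, record (block, new checksum) in a journal; after a crash, only journaled blocks can have a data/csum mismatch, and the logged new checksum tells you whether the data is old, updated, or genuinely corrupt. A toy model in Python (my own naming, not actual Btrfs structures):

```python
import zlib

class ToyNocowFS:
    """Toy model of checksummed in-place (NOCOW) writes guarded by an intent log."""
    def __init__(self):
        self.blocks = {}    # block number -> data, overwritten in place
        self.csums = {}     # block number -> committed checksum ("metadata")
        self.journal = []   # intent log: (blkno, new_csum), written first

    def write_nocow(self, blkno, data, crash_at=None):
        # 1. Log which block will change and its new checksum.
        self.journal.append((blkno, zlib.crc32(data)))
        if crash_at == "before_data":
            return
        # 2. Overwrite the data in place (no CoW).
        self.blocks[blkno] = data
        if crash_at == "before_csum":
            return
        # 3. Commit the checksum and retire the journal entry.
        self.csums[blkno] = zlib.crc32(data)
        self.journal.pop()

    def recover(self):
        """After a crash, only journaled blocks need checking."""
        corrupt = []
        for blkno, new_csum in self.journal:
            cur = zlib.crc32(self.blocks.get(blkno, b""))
            if cur == self.csums.get(blkno):
                pass                           # old data, old csum: consistent
            elif cur == new_csum:
                self.csums[blkno] = new_csum   # new data landed: adopt new csum
            else:
                corrupt.append(blkno)          # neither old nor new: real corruption
        self.journal.clear()
        return corrupt

fs = ToyNocowFS()
fs.write_nocow(7, b"v1")                          # clean write
fs.write_nocow(7, b"v2", crash_at="before_csum")  # crash between data and csum
assert fs.recover() == []                         # repairable, not an I/O error
assert fs.csums[7] == zlib.crc32(b"v2")
```

This also shows the remaining cost of the idea: every NOCOW write now needs an extra ordered journal write, which is exactly the overhead nodatacow is meant to avoid.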
Re: btrfs issue with mariadb incremental backup
Hi "A L",

Regarding your suggestion: does it mean I have to clear the "Received UUID" on all my current subvolumes and send them as full snapshots?

>> mysql
>> mysql_201708060830
>> mysql_201708070830
>> mysql_201708080830
>> mysql_201708090830
>> mysql_201708100830
>> mysql_201708110830
>> mysql_201708120830
>> mysql_201708130830

The next snapshot the script takes will be mysql_201708140830, which will be sent incrementally from mysql_201708130830 to mysql_201708140830. Is this correct?

I don't fully understand the URL you gave me, which says:

  I'm getting an error: Aborted: "Received UUID" is set

  You probably restored a backup with send-receive, and made it read/write using btrfs property set. This is bad, as all snapshots and backups will inherit this identical "Received UUID", which results in all these subvolumes being treated as "containing the same data". To fix this, create a "proper" snapshot:

- This matches your suggestion for the subvolume "mysql":

  # cd /mnt/btr_pool
  # mv mysubvolume mysubvolume.broken
  # btrfs subvolume snapshot mysubvolume.broken mysubvolume

  Now, mysubvolume should have an empty "Received UUID". Note that in order to have a clean environment, you also need to fix all subvolumes (snapshots as well as backups) that you created from the broken subvolume. Check if there are more broken subvolumes:

  # btrfs subvolume show mysubvolume.broken
  # btrfs subvolume list -a -R /mnt/btr_pool | grep <"Received UUID" from above>
  # btrfs subvolume list -a -R /mnt/btr_backup | grep <"Received UUID" from above>

- This guide seems to say that I only have to clear the "Received UUID" on the subvolume "mysql", and that the others (e.g. "mysql_201708070830") should use btrfs subvolume snapshot -r instead of btrfs subvolume snapshot. Is this correct?

  Now clean all subvolumes listed (same as above, but using btrfs subvolume snapshot -r now).
Then delete all the broken subvolumes:

  # btrfs subvolume delete *.broken

Finally, you should have a clean environment, and btrbk will not complain any more.

Best Regards,
Siranee Jaraswachirakul.

> On 8/13/2017 12:52 PM, siranee...@tpc.co.th wrote:
>> Hi "A L",
>>
>> [root@backuplogC7 ~]# btrfs sub show /var/lib/mariadb/mysql
>> /var/lib/mariadb/mysql
>> Name: mysql
>> UUID: 92f319c5-e132-3249-9b13-d39ee77a2b44
>> Parent UUID: -
>> Received UUID: 3ad0334a-4063-654c-add6-b1cbcdeaa639
>> Creation time: 2017-06-21 13:27:41 +0700
>> Subvolume ID: 257
>> Generation: 539
>> Gen at creation: 9
>> Parent ID: 5
>> Top level ID: 5
>> Flags: -
>> Snapshot(s):
>> mysql_201708060830
>> mysql_201708070830
>> mysql_201708080830
>> mysql_201708090830
>> mysql_201708100830
>> mysql_201708110830
>> mysql_201708120830
>> mysql_201708130830
>>
>> yes I think it has Received UUID because I restored the source from snapshot
>> mysql_201708040830 for prove that the local snapshot was work.
>>
>> How to clear the Received UUID ?
>> What to do next?
> You need to make a read-write snapshot of /var/lib/mariadb/mysql and
> then remove the old subvolume and all its snapshots.
>
> Example from https://github.com/digint/btrbk/blob/master/doc/FAQ.md
>
> # cd /mnt/btr_pool
> # mv mysubvolume mysubvolume.broken
> # btrfs subvolume snapshot mysubvolume.broken mysubvolume
>
> You can do the same with each of your snapshots too, and send them as
> full snapshots (without -p).
>
> ~A
>
>> Best Regards,
>> Siranee Jaraswachirakul.
>>
>>> Have you checked that there is no Received UUID on the source subvolume?
>>>
>>> # btrfs sub show volume/mysql/
>>> volume/mysql
>>> Name: mysql
>>> UUID: 8a94524e-a956-c14b-bb8d-d453627f27d5
>>> Parent UUID: -
>>> Received UUID: -
>>> Creation time: 2017-04-17 11:46:20 +0200
>>> Subvolume ID: 1469
>>> Generation: 122934
>>> Gen at creation: 78671
>>> Parent ID: 5
>>> Top level ID: 5
>>> Flags: -
>>> Snapshot(s):
>>>
>>> There is no Received UUID here.
>>> If it has, then btrfs send-receive will
>>> have problems, sin
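The grep step in the quoted guide can be taken one step further: parse the `btrfs subvolume list -a -R` output and group subvolumes by Received UUID to see every subvolume sharing the broken one. A sketch (the field layout is assumed to match the `received_uuid <uuid> ... path <path>` form of that command's output; the sample lines are made up from the thread):

```python
def find_by_received_uuid(listing, received_uuid):
    """Return paths of subvolumes whose received_uuid field matches."""
    hits = []
    for line in listing.splitlines():
        fields = line.split()
        if "received_uuid" in fields and "path" in fields:
            uuid = fields[fields.index("received_uuid") + 1]
            if uuid == received_uuid:
                hits.append(fields[fields.index("path") + 1])
    return hits

# Sample output in the assumed format; '-' means no Received UUID.
sample = """\
ID 257 gen 539 top level 5 received_uuid 3ad0334a-4063-654c-add6-b1cbcdeaa639 path mysql
ID 313 gen 220 top level 5 received_uuid - path mysql_201708070830
ID 316 gen 199 top level 5 received_uuid 3ad0334a-4063-654c-add6-b1cbcdeaa639 path mysql_201708080830
"""
print(find_by_received_uuid(sample, "3ad0334a-4063-654c-add6-b1cbcdeaa639"))
```

Every path it prints is a subvolume that needs the snapshot-and-delete treatment described above.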
Re: btrfs issue with mariadb incremental backup
On 8/13/2017 12:52 PM, siranee...@tpc.co.th wrote:
> Hi "A L",
>
> [root@backuplogC7 ~]# btrfs sub show /var/lib/mariadb/mysql
> /var/lib/mariadb/mysql
> Name: mysql
> UUID: 92f319c5-e132-3249-9b13-d39ee77a2b44
> Parent UUID: -
> Received UUID: 3ad0334a-4063-654c-add6-b1cbcdeaa639
> Creation time: 2017-06-21 13:27:41 +0700
> Subvolume ID: 257
> Generation: 539
> Gen at creation: 9
> Parent ID: 5
> Top level ID: 5
> Flags: -
> Snapshot(s):
> mysql_201708060830
> mysql_201708070830
> mysql_201708080830
> mysql_201708090830
> mysql_201708100830
> mysql_201708110830
> mysql_201708120830
> mysql_201708130830
>
> yes I think it has Received UUID because I restored the source from snapshot
> mysql_201708040830 for prove that the local snapshot was work.
>
> How to clear the Received UUID ?
> What to do next?

You need to make a read-write snapshot of /var/lib/mariadb/mysql and then remove the old subvolume and all its snapshots.

Example from https://github.com/digint/btrbk/blob/master/doc/FAQ.md

# cd /mnt/btr_pool
# mv mysubvolume mysubvolume.broken
# btrfs subvolume snapshot mysubvolume.broken mysubvolume

You can do the same with each of your snapshots too, and send them as full snapshots (without -p).

~A

> Best Regards,
> Siranee Jaraswachirakul.
>
>> Have you checked that there is no Received UUID on the source subvolume?
>>
>> # btrfs sub show volume/mysql/
>> volume/mysql
>> Name: mysql
>> UUID: 8a94524e-a956-c14b-bb8d-d453627f27d5
>> Parent UUID: -
>> Received UUID: -
>> Creation time: 2017-04-17 11:46:20 +0200
>> Subvolume ID: 1469
>> Generation: 122934
>> Gen at creation: 78671
>> Parent ID: 5
>> Top level ID: 5
>> Flags: -
>> Snapshot(s):
>>
>> There is no Received UUID here. If it has, then btrfs send-receive will
>> have problems, since all snapshots of the source subvolume will have the
>> same Received UUID and it can't tell the differences between the snapshots.
>>
>> On 8/13/2017 5:40 AM, siranee...@tpc.co.th wrote:
>>> Hi Chris,
>>>
>>> I started as your suggestion again. The diff occured since snapshot
>>> mysql_201708090830 manually send. What should I do next?
>>>
>>> - delete all the bad/mismatching snapshots only on the destination computer.
>>> [root@joytest ~]# date
>>> Sun Aug 13 10:27:23 ICT 2017
>>> [root@joytest ~]# cd /var/lib/mariadb
>>> [root@joytest mariadb]# btrfs sub list .
>>> ID 313 gen 220 top level 5 path mysql_201708070830
>>> ID 316 gen 199 top level 5 path mysql_201708080830
>>> ID 318 gen 205 top level 5 path mysql_201708090830
>>> ID 320 gen 211 top level 5 path mysql_201708100830
>>> ID 322 gen 219 top level 5 path mysql_201708110830
>>> ID 323 gen 219 top level 5 path mysql_201708120830
>>> ID 324 gen 224 top level 5 path mysql_201708130830
>>> ID 325 gen 225 top level 5 path mysql
>>> [root@joytest mariadb]# btrfs sub del mysql_201708130830
>>> Delete subvolume (no-commit): '/var/lib/mariadb/mysql_201708130830'
>>> [root@joytest mariadb]# btrfs sub del mysql_201708120830
>>> Delete subvolume (no-commit): '/var/lib/mariadb/mysql_201708120830'
>>> [root@joytest mariadb]# btrfs sub del mysql_201708110830
>>> Delete subvolume (no-commit): '/var/lib/mariadb/mysql_201708110830'
>>> [root@joytest mariadb]# btrfs sub del mysql_201708100830
>>> Delete subvolume (no-commit): '/var/lib/mariadb/mysql_201708100830'
>>> [root@joytest mariadb]# btrfs sub del mysql_201708090830
>>> Delete subvolume (no-commit): '/var/lib/mariadb/mysql_201708090830'
>>> [root@joytest mariadb]# btrfs sub sync .
>>> [root@joytest mariadb]# systemctl status mariadb
>>> ● mariadb.service - MariaDB database server
>>>    Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; vendor preset: disabled)
>>>    Active: failed (Result: exit-code) since Sun 2017-08-13 09:07:00 ICT; 1h 24min ago
>>>   Process: 19871 ExecStartPost=/usr/libexec/mariadb-wait-ready $MAINPID (code=exited, status=1/FAILURE)
>>>   Process: 19870 ExecStart=/usr/bin/mysqld_safe --basedir=/usr (code=exited, status=0/SUCCESS)
>>>   Process: 19842 ExecStartPre=/usr/libexec/mariadb-prepare-db-dir %n (code=exited, status=0/SUCCESS)
>>>  Main PID: 19870 (code=exited, status=0/SUCCESS)
>>>
>>> Aug 13 09:06:58 joytest systemd[1]: Starting MariaDB database server...
>>> Aug 13 09:06:58 joytest mysqld_safe[19870]: 170813 09:06:58 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.
>>> Aug 13 09:06:58 joytest mysqld_safe[19870]: 170813 09:06:58 mysqld_safe Starting mysqld daemon with databases from /var/lib/mariadb/mysql
>>> Aug 13 09:07:00 joyte
Re: btrfs issue with mariadb incremental backup
Hi "A L",

[root@backuplogC7 ~]# btrfs sub show /var/lib/mariadb/mysql
/var/lib/mariadb/mysql
Name: mysql
UUID: 92f319c5-e132-3249-9b13-d39ee77a2b44
Parent UUID: -
Received UUID: 3ad0334a-4063-654c-add6-b1cbcdeaa639
Creation time: 2017-06-21 13:27:41 +0700
Subvolume ID: 257
Generation: 539
Gen at creation: 9
Parent ID: 5
Top level ID: 5
Flags: -
Snapshot(s):
mysql_201708060830
mysql_201708070830
mysql_201708080830
mysql_201708090830
mysql_201708100830
mysql_201708110830
mysql_201708120830
mysql_201708130830

Yes, I think it has a Received UUID because I restored the source from snapshot mysql_201708040830 to prove that the local snapshot worked.

How do I clear the Received UUID?
What should I do next?

Best Regards,
Siranee Jaraswachirakul.

> Have you checked that there is no Received UUID on the source subvolume?
>
> # btrfs sub show volume/mysql/
> volume/mysql
> Name: mysql
> UUID: 8a94524e-a956-c14b-bb8d-d453627f27d5
> Parent UUID: -
> Received UUID: -
> Creation time: 2017-04-17 11:46:20 +0200
> Subvolume ID: 1469
> Generation: 122934
> Gen at creation: 78671
> Parent ID: 5
> Top level ID: 5
> Flags: -
> Snapshot(s):
>
> There is no Received UUID here. If it has, then btrfs send-receive will
> have problems, since all snapshots of the source subvolume will have the
> same Received UUID and it can't tell the differences between the snapshots.
>
> On 8/13/2017 5:40 AM, siranee...@tpc.co.th wrote:
>> Hi Chris,
>>
>> I started as your suggestion again. The diff occured since snapshot
>> mysql_201708090830 manually send. What should I do next?
>>
>> - delete all the bad/mismatching snapshots only on the destination computer.
>> [root@joytest ~]# date
>> Sun Aug 13 10:27:23 ICT 2017
>> [root@joytest ~]# cd /var/lib/mariadb
>> [root@joytest mariadb]# btrfs sub list .
>> ID 313 gen 220 top level 5 path mysql_201708070830
>> ID 316 gen 199 top level 5 path mysql_201708080830
>> ID 318 gen 205 top level 5 path mysql_201708090830
>> ID 320 gen 211 top level 5 path mysql_201708100830
>> ID 322 gen 219 top level 5 path mysql_201708110830
>> ID 323 gen 219 top level 5 path mysql_201708120830
>> ID 324 gen 224 top level 5 path mysql_201708130830
>> ID 325 gen 225 top level 5 path mysql
>> [root@joytest mariadb]# btrfs sub del mysql_201708130830
>> Delete subvolume (no-commit): '/var/lib/mariadb/mysql_201708130830'
>> [root@joytest mariadb]# btrfs sub del mysql_201708120830
>> Delete subvolume (no-commit): '/var/lib/mariadb/mysql_201708120830'
>> [root@joytest mariadb]# btrfs sub del mysql_201708110830
>> Delete subvolume (no-commit): '/var/lib/mariadb/mysql_201708110830'
>> [root@joytest mariadb]# btrfs sub del mysql_201708100830
>> Delete subvolume (no-commit): '/var/lib/mariadb/mysql_201708100830'
>> [root@joytest mariadb]# btrfs sub del mysql_201708090830
>> Delete subvolume (no-commit): '/var/lib/mariadb/mysql_201708090830'
>> [root@joytest mariadb]# btrfs sub sync .
>> [root@joytest mariadb]# systemctl status mariadb
>> ● mariadb.service - MariaDB database server
>>    Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; vendor preset: disabled)
>>    Active: failed (Result: exit-code) since Sun 2017-08-13 09:07:00 ICT; 1h 24min ago
>>   Process: 19871 ExecStartPost=/usr/libexec/mariadb-wait-ready $MAINPID (code=exited, status=1/FAILURE)
>>   Process: 19870 ExecStart=/usr/bin/mysqld_safe --basedir=/usr (code=exited, status=0/SUCCESS)
>>   Process: 19842 ExecStartPre=/usr/libexec/mariadb-prepare-db-dir %n (code=exited, status=0/SUCCESS)
>>  Main PID: 19870 (code=exited, status=0/SUCCESS)
>>
>> Aug 13 09:06:58 joytest systemd[1]: Starting MariaDB database server...
>> Aug 13 09:06:58 joytest mysqld_safe[19870]: 170813 09:06:58 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.
>> Aug 13 09:06:58 joytest mysqld_safe[19870]: 170813 09:06:58 mysqld_safe Starting mysqld daemon with databases from /var/lib/mariadb/mysql
>> Aug 13 09:07:00 joytest systemd[1]: mariadb.service: control process exited, code=exited status=1
>> Aug 13 09:07:00 joytest systemd[1]: Failed to start MariaDB database server.
>> Aug 13 09:07:00 joytest systemd[1]: Unit mariadb.service entered failed state.
>> Aug 13 09:07:00 joytest systemd[1]: mariadb.service failed.
Re: btrfs issue with mariadb incremental backup
Hi Chris,

> Try deleting mysql_201708090830/ snapshot on the destination. And
> resend but this time do a full send of that snapshot, don't use -p. I
> wonder if a full send, rather than incremental, makes a difference.
> Follow it up with the rsync command to compare origin and destination.

Yes, it's different.

[root@backuplogC7 ~]# btrfs send /var/lib/mariadb/mysql_201708090830 | ssh 192.168.45.166 btrfs receive /var/lib/mariadb
At subvol /var/lib/mariadb/mysql_201708090830
At subvol mysql_201708090830
[root@backuplogC7 ~]# rsync -avnc /var/lib/mariadb/mysql_201708090830/ root@192.168.45.166:/var/lib/mariadb/mysql_201708090830/
sending incremental file list
./

sent 3773 bytes  received 19 bytes  842.67 bytes/sec
total size is 718361496  speedup is 189441.32 (DRY RUN)

Best Regards,
Siranee Jaraswachirakul

> On Sat, Aug 12, 2017 at 9:40 PM, wrote:
>
>> [root@backuplogC7 ~]# rsync -avnc /var/lib/mariadb/mysql_201708090830/
>> root@192.168.45.166://var/lib/mariadb/mysql_201708090830/
>> sending incremental file list
>> ./
>> ib_logfile1
>> ibdata1
>>
>> sent 3779 bytes  received 25 bytes  507.20 bytes/sec
>> total size is 718361496  speedup is 188843.72 (DRY RUN)
>
> OK so I don't think this can be a sync related problem. That snapshot
> has been committed to disk days ago. There's definitely something
> wrong with the incremental send/receive, but it's unclear whether this
> is a kernel bug (send side) or btrfs-progs (receive side), or if
> there's any chance of file system corruption/confusion happening with
> either of the two subvolumes on the origin or the subvolume (parent)
> on the destination.
>
> So that means you're really in the weeds on what to do next.
>
> Try deleting mysql_201708090830/ snapshot on the destination. And
> resend but this time do a full send of that snapshot, don't use -p. I
> wonder if a full send, rather than incremental makes a difference.
> Follow it up with the rsync command to compare origin and destination.
>
> --
> Chris Murphy
Re: [PATCH v4 4/5] squashfs: Add zstd support
On Fri, Aug 4, 2017 at 10:19 PM, Nick Terrell wrote:
> --- a/fs/squashfs/decompressor.c
> +++ b/fs/squashfs/decompressor.c
> @@ -65,6 +65,12 @@ static const struct squashfs_decompressor squashfs_zlib_comp_ops = {
> };
> #endif
>
> +#ifndef CONFIG_SQUASHFS_ZSTD
> +static const struct squashfs_decompressor squashfs_zstd_comp_ops = {
> +	NULL, NULL, NULL, NULL, ZSTD_COMPRESSION, "zstd", 0

Can you please use designated initializers?

> +};
> +#endif
> +
> static const struct squashfs_decompressor squashfs_unknown_comp_ops = {
> 	NULL, NULL, NULL, NULL, 0, "unknown", 0
> };

> +const struct squashfs_decompressor squashfs_zstd_comp_ops = {
> +	.init = zstd_init,
> +	.free = zstd_free,
> +	.decompress = zstd_uncompress,
> +	.id = ZSTD_COMPRESSION,
> +	.name = "zstd",
> +	.supported = 1
> +};

... like you did here.

Gr{oetje,eeting}s,

Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- ge...@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds