On Sat, 10 Apr 2021 17:06:22 -0600
Chris Murphy wrote:
> Right. The block device (partition containing the Btrfs file system)
> must be exclusively used by one kernel, host or guest. Dom0 or DomU.
> Can't be both.
>
> The only exception I'm aware of is virtiofs or virtio-9p, but I
> haven't mess
On Sat, 10 Apr 2021 13:38:57 +
Paul Leiber wrote:
> d) Perhaps the complete BTRFS setup (Xen, VMs, pass through the partition,
> Samba share) is flawed?
I kept reading and reading to find where you say you unmounted it on the host,
and then... :)
> e) Perhaps it is wrong to mount the BTRFS
On Fri, 26 Mar 2021 08:09:50 +
"Wulfhorst, Heiner" wrote:
> > You got the right lead there, but I believe it's
> >
> > btrfs filesystem resize 3:max /
> >
> > --
> > With respect,
> > Roman
>
> Thanks a lot, you got me one step further (but directly stuck again).
> Unfortunately it now j
On Thu, 25 Mar 2021 15:47:20 +
"Wulfhorst, Heiner" wrote:
> But btrfs won't extend its filesystem:
> # btrfs filesystem resize max /
> Resize '/' of 'max'
...
> btrfs filesystem resize / 3:max
You got the right lead there, but I believe it's
btrfs filesystem resize 3:max /
--
With resp
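A minimal sketch of the devid-qualified resize, assuming the filesystem is
mounted at / and the device that grew is devid 3 (verify the devid first):
# btrfs filesystem show /
# btrfs filesystem resize 3:max /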
On Sat, 23 Jan 2021 16:42:33 +0800
Qu Wenruo wrote:
> For the worst case, btrfs can allocate a 128 MiB file extent, and have
> good luck to write 127MiB into the extent. It will take 127MiB + 128MiB of
> space; not until the last 1MiB of the original extent gets freed can the full
> 128MiB be freed.
Do
On Sun, 10 Jan 2021 11:34:27 +0100
" " wrote:
> I'm trying to transfer a btrfs snapshot via the network.
>
> First attempt: neither NC program exits after the transfer is complete.
> When I ctrl-C the sending side, the receiving side exits OK.
It is a common annoyance that NC doesn't exit
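A hedged sketch of one way around it (host name, port and paths assumed; flag
spellings differ between netcat variants -- OpenBSD nc uses -N to shut the
socket down on EOF, traditional netcat uses -q 0):
receiver# nc -l 9000 | btrfs receive /mnt/backup
sender#   btrfs send /mnt/snapshots/snap1 | nc -N receiver.example 9000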
On Tue, 5 Jan 2021 11:24:24 +
Graham Cobb wrote:
> used that approach as the (old) NAS I was using had a very old linux
> version and didn't even run btrfs.
One anecdote --
I do use an old D-Link DNS-323 NAS with old kernel and distro (older Debian),
and only ~60 MB of RAM to serve an 8 TB d
On Mon, 21 Dec 2020 12:05:37 -0500
Remi Gauvin wrote:
> I suggest making a new Read/Write subvolume to put your snapshots into
>
> btrfs subvolume create .my_snapshots
> btrfs subvolume snapshot -r /mnt_point /mnt_point/.my_snapshots/snapshot1
It sounds like this could plant a misconception rig
On Sat, 19 Dec 2020 23:59:45 +0100
Ulli Horlacher wrote:
> Ok, I was able to extend the btrfs filesystem via a loopback device.
>
> What is the suggested way to do this at boot time?
>
> For now I have in /etc/rc.local:
>
> cd /nfs/rusnas/fex
> for d in spool_[1-9].btrfs; do
> echo -n "$d ==
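A minimal sketch of the boot-time attach, with names assumed from the post
above rather than taken from the actual script: attach each image file as a
loop device, let btrfs rescan, then mount any one member of the filesystem.
# cd /nfs/rusnas/fex
# for d in spool_[1-9].btrfs; do losetup --find "$d"; done
# btrfs device scan
# mount /dev/loop0 /mnt/fex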
On Thu, 17 Dec 2020 13:30:08 +0100
Ulli Horlacher wrote:
> root@fextest:/nfs/rusnas/fex# mount disk1.btrfs /mnt/tmp
> root@fextest:/nfs/rusnas/fex# df -TH /mnt/tmp
> Filesystem Type Size Used Avail Use% Mounted on
> /dev/loop2 btrfs 16T 3.7M 16T 1% /mnt/tmp
You see here 'mount'
On Mon, 14 Dec 2020 23:27:51 +0100
Ian Kumlien wrote:
> Aaaand sorry, turns out that my raid device was 1 fifth of its
> original size and it had to be manually remedied...
>
> Now lets see if data survives this...
So, how did it go, and was all the data OK?
This was a really nasty release, there
On Tue, 22 Oct 2019 11:00:07 +0200
Chris Murphy wrote:
> Hi,
>
> So XFS has these
>
> [49621.415203] XFS (loop0): Mounting V5 Filesystem
> [49621.58] XFS (loop0): Ending clean mount
> ...
> [49621.58] XFS (loop0): Ending clean mount
> [49641.459463] XFS (loop0): Unmounting Filesystem
>
On Wed, 16 Oct 2019 15:45:54 +0800
Qu Wenruo wrote:
> dirty fixes (a special branch for the user
> to do, never intended to upstream).
Still it would be nice to get a btrfs check mode which would include trying
destructive actions, as in accepting the loss of some part of user data
(outright del
On Wed, 21 Aug 2019 13:42:53 -0600
Chris Murphy wrote:
> Why do this? a) compression for home, b) encryption for home, c) home
> is portable because it's a file, d) I still get btrfs snapshots
> anywhere (I tend to snapshot subvolumes inside of cryptohome; but
Storing Btrfs on Btrfs really feels
Hello,
I have a number of VM images in sparse NOCOW files, with:
# du -B M -sc *
...
46030M  total
and:
# du -B M -sc --apparent-size *
...
96257M  total
But despite there being nothing else on the filesystem and no snapshots,
# df -B M .
... 1M-blocks Used Avai
On Sat, 18 May 2019 11:18:31 +0200
Michael Laß wrote:
>
> > Am 18.05.2019 um 06:09 schrieb Chris Murphy :
> >
> > On Fri, May 17, 2019 at 11:37 AM Michael Laß wrote:
> >>
> >>
> >> I tried to reproduce this issue: I recreated the btrfs file system, set up
> >> a minimal system and issued fs
On Tue, 16 Apr 2019 09:46:39 +0200
Daniel Brunner wrote:
> Hi,
>
> thanks for the quick response.
>
> The filesystem went read-only on its own right at the first read error.
> I unmounted all mounts and rebooted (just to be sure).
>
> I ran the command you suggested with --progress
> All outpu
On Thu, 21 Feb 2019 22:01:24 -0500
"Martin K. Petersen" wrote:
> Consequently, many of the modern devices that claim to support discard
> to make us software folks happy (or to satisfy purchase order
> requirements) complete the commands without doing anything at all.
> We're simply wasting que
On Mon, 11 Feb 2019 22:09:02 -0500
Zygo Blaxell wrote:
> Still reproducible on 4.20.7.
>
> The behavior is slightly different on current kernels (4.20.7, 4.14.96)
> which makes the problem a bit more difficult to detect.
>
> # repro-hole-corruption-test
> i: 91, status: 0, bytes_ded
On Tue, 29 Jan 2019 23:15:18 +
Hans van Kranenburg wrote:
> So, what I was thinking of is:
>
> * Use dm-integrity on partitions on the individual disks
> * Use mdadm RAID10 on top (which is then able to repair bitrot)
> * Use LVM on top
> * Etc...
You never explicitly say what's the whole i
On Thu, 6 Dec 2018 06:11:46 +
Robert White wrote:
> So it would be dog-slow, but it would be neat if BTRFS had a mount
> option to convert any TRIM command from above into the write of a zero,
> 0xFF, or trash block to the device below if that device doesn't support
> TRIM. Real TRIM suppo
Hello,
To migrate my FS to a different physical disk, I have added a new empty device
to the FS, then ran the remove operation on the original one.
Now my FS has only devid 2:
Label: 'p1' uuid: d886c190-b383-45ba-9272-9f00c6a10c50
Total devices 1 FS bytes used 36.63GiB
devid
On Thu, 22 Nov 2018 22:07:25 +0900
Tomasz Chmielewski wrote:
> Spot on!
>
> Removed "discard" from fstab and added "ssd", rebooted - no more
> btrfs-cleaner running.
Recently there has been a bugfix for TRIM in Btrfs:
btrfs: Ensure btrfs_trim_fs can trim the whole fs
https://patchwork.k
On Thu, 15 Nov 2018 11:39:58 -0700
Juan Alberto Cirez wrote:
> Is BTRFS mature enough to be deployed on a production system to underpin
> the storage layer of a 16+ ipcameras-based NVR (or VMS if you prefer)?
What are you looking to gain from using Btrfs on an NVR system? It doesn't
sound like
On Sat, 10 Nov 2018 03:08:01 +0900
Tomasz Chmielewski wrote:
> After upgrading from kernel 4.16.1 to 4.19.1 and a clean restart, the fs
> no longer mounts:
Did you try rebooting back to 4.16.1 to see if it still mounts there?
--
With respect,
Roman
On Tue, 9 Oct 2018 09:52:00 -0600
Chris Murphy wrote:
> You'll be left with three files. /big_file and root/big_file will
> share extents, and snapshot/big_file will have its own extents. You'd
> need to copy with --reflink for snapshot/big_file to have shared
> extents with /big_file - or dedupl
On Fri, 14 Sep 2018 19:27:04 +0200
Rafael Jesús Alcántara Pérez wrote:
> BTRFS info (device sdc1): use lzo compression, level 0
> BTRFS warning (device sdc1): 'recovery' is deprecated, use
> 'usebackuproot' instead
> BTRFS info (device sdc1): trying to use backup root at mount time
> BTRF
On Fri, 17 Aug 2018 23:17:33 +0200
Martin Steigerwald wrote:
> > Do not consider SSD "compression" as a factor in any of your
> > calculations or planning. Modern controllers do not do it anymore,
> > the last ones that did are SandForce, and that's 2010 era stuff. You
> > can check for yourself
On Fri, 17 Aug 2018 14:28:25 +0200
Martin Steigerwald wrote:
> > First off, keep in mind that the SSD firmware doing compression only
> > really helps with wear-leveling. Doing it in the filesystem will help
> > not only with that, but will also give you more space to work with.
>
> While also
On Tue, 14 Aug 2018 16:41:11 +0300
Dmitrii Tcvetkov wrote:
> If usebackuproot doesn't help then filesystem is beyond repair and you
> should try to refresh your backups with "btrfs restore" and restore from
> them[1].
>
> [1]
> https://btrfs.wiki.kernel.org/index.php/FAQ#How_do_I_recover_from_
Hello,
On two machines I have subvolumes where I backup other hosts' root filesystems
via rsync. These subvolumes have the +c attribute on them.
During the backup, sometimes I get tons of messages like these in dmesg:
[Wed Jul 25 20:58:22 2018] BTRFS error (device dm-8): error inheriting props
On Mon, 2 Jul 2018 08:19:03 -0700
Marc MERLIN wrote:
> I actually have fewer snapshots than this per filesystem, but I backup
> more than 10 filesystems.
> If I used as many snapshots as you recommend, that would already be 230
> snapshots for 10 filesystems :)
(...once again me with my rsync :)
On Fri, 29 Jun 2018 00:22:10 -0700
Marc MERLIN wrote:
> On Fri, Jun 29, 2018 at 12:09:54PM +0500, Roman Mamedov wrote:
> > On Thu, 28 Jun 2018 23:59:03 -0700
> > Marc MERLIN wrote:
> >
> > > I don't waste a week recreating the many btrfs send/receive relatio
On Thu, 28 Jun 2018 23:59:03 -0700
Marc MERLIN wrote:
> I don't waste a week recreating the many btrfs send/receive relationships.
Consider not using send/receive, and switching to regular rsync instead.
Send/receive is very limiting and cumbersome, not least because of what you
described. And i
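A hedged sketch of the rsync-plus-snapshots pattern being suggested (paths
assumed): rsync into an ordinary writable subvolume, then take a read-only
snapshot of it after each run; --inplace rewrites only changed blocks, which
preserves extent sharing with the earlier snapshots.
# rsync -aHAX --delete --inplace --numeric-ids /source/ /mnt/backup/current/
# btrfs subvolume snapshot -r /mnt/backup/current /mnt/backup/$(date +%Y-%m-%d)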
On Mon, 14 May 2018 11:36:26 +0300
Nikolay Borisov wrote:
> So what made you have these expectations? Is it codified somewhere
> (docs/man pages etc)? I'm fine with that semantics IF this is what
> people expect.
"Compression ...does not work for NOCOW files":
https://btrfs.wiki.kernel.org/index.
On Mon, 14 May 2018 11:10:34 +0300
Nikolay Borisov wrote:
> But if we have mounted the fs with FORCE_COMPRESS shouldn't we disregard
> the inode flags, presumably the admin knows what he is doing?
Please don't. Personally I always assumed chattr +C would prevent both CoW and
compression, and use
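For reference, a short sketch: the No_COW attribute only takes effect on files
created after it is set, so it is usually applied to a directory so that new
files inherit it (directory name assumed).
# mkdir /var/lib/images
# chattr +C /var/lib/images
# lsattr -d /var/lib/images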
On Sat, 10 Mar 2018 16:50:22 +0100
Adam Borowski wrote:
> Since we're on a btrfs mailing list, if you use qemu, you really want
> sparse format:raw instead of qcow2 or preallocated raw. This also works
> great with TRIM.
Agreed, that's why I use RAW. QCOW2 would add a second layer of COW on top
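A hedged sketch of such a setup (file name and bus choice assumed; virtio-scsi
is used here because it passes discard through to the sparse raw file):
# qemu-img create -f raw vm.img 40G
# qemu-system-x86_64 ... \
    -device virtio-scsi-pci,id=scsi0 \
    -drive file=vm.img,format=raw,if=none,id=d0,discard=unmap \
    -device scsi-hd,drive=d0,bus=scsi0.0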
On Sat, 10 Mar 2018 15:19:05 +0100
Christoph Anton Mitterer wrote:
> TRIM/discard... not sure how far this is really a solution.
It is the solution in a great many usage scenarios; I don't know enough about
your particular one, though.
Note you can use it on HDDs too, even without QEMU and the
On Fri, 12 Jan 2018 17:49:38 + (GMT)
"Konstantin V. Gavrilenko" wrote:
> Hi list,
>
> just wondering whether it is possible to mount two subvolumes with different
> mount options, i.e.
>
> |
> |- /a defaults,compress-force=lza
You can use different compression algorithms across the
On Fri, 15 Dec 2017 01:39:03 +0100
Ian Kumlien wrote:
> Hi,
>
> Running a 4.14.3 kernel, this just happened, but there should have
> been another 20 gigs or so available.
>
> The filesystem seems fine after a reboot though
What are your mount options, and can you show the output of "btrfs fi
d
On Sat, 18 Nov 2017 02:08:46 +0100
Hans van Kranenburg wrote:
> It's using send + balance at the same time. There's something that makes
> btrfs explode when you do that.
>
> It's not new in 4.14, I have seen it in 4.7 and 4.9 also, various
> different explosions in kernel log. Since that happen
On Thu, 16 Nov 2017 16:12:56 -0800
Marc MERLIN wrote:
> On Thu, Nov 16, 2017 at 11:32:33PM +0100, Holger Hoffstätte wrote:
> > Don't pop the champagne just yet, I just read that apparently 4.14 broke
> > bcache for some people [1]. Not sure how much that affects you, but it might
> > well make thi
On Tue, 14 Nov 2017 15:09:52 +0100
Klaus Agnoletti wrote:
> Hi Roman
>
> I almost understand :-) - however, I need a bit more information:
>
> How do I copy the image file to the 6TB without screwing the existing
> btrfs up when the fs is not mounted? Should I remove it from the raid
> again?
On Tue, 14 Nov 2017 10:36:22 +0200
Klaus Agnoletti wrote:
> Obviously, I want /dev/sdd emptied and deleted from the raid.
* Unmount the RAID0 FS
* copy the bad drive using `dd_rescue`[1] into a file on the 6TB drive
(noting how much of it is actually unreadable -- chances are it's mostl
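A hedged sketch of the copy step using GNU ddrescue (the dd_rescue tool
referenced above has a different syntax); device and target paths are assumed,
and the map file records which areas could not be read:
# ddrescue /dev/sdd /mnt/6tb/sdd.img /mnt/6tb/sdd.map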
On Mon, 13 Nov 2017 22:39:44 -0500
Dave wrote:
> I have my live system on one block device and a backup snapshot of it
> on another block device. I am keeping them in sync with hourly rsync
> transfers.
>
> Here's how this system works in a little more detail:
>
> 1. I establish the baseline by
On Tue, 14 Nov 2017 10:14:55 +0300
Marat Khalili wrote:
> Don't keep snapshots under rsync target, place them under ../snapshots
> (if snapper supports this):
> Or, specify them in --exclude and avoid using --delete-excluded.
Both are good suggestions, in my case each system does have its own
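A minimal sketch of the exclusion approach (paths assumed), keeping the
snapshot directory out of rsync's view entirely instead of relying on
--delete-excluded:
# rsync -a --delete --exclude=/.snapshots/ source.example:/data/ /mnt/backup/data/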
On Wed, 1 Nov 2017 11:32:18 +0200
Nikolay Borisov wrote:
> Fallocating a file in btrfs goes through several stages. The one before
> actually
> inserting the fallocated extents is to create a qgroup reservation, covering
> the desired range. To this end there is a loop in btrfs_fallocate which
On Wed, 1 Nov 2017 01:00:08 -0400
Dave wrote:
> To reconcile those conflicting goals, the only idea I have come up
> with so far is to use btrfs send-receive to perform incremental
> backups as described here:
> https://btrfs.wiki.kernel.org/index.php/Incremental_Backup .
Another option is to ju
On Thu, 26 Oct 2017 09:40:19 -0600
Cheyenne Wills wrote:
> Briefly when I upgraded a system from 4.0.5 kernel to 4.9.5 (and
> later) I'm seeing a blocked task timeout with heavy IO against a
> multi-lun btrfs filesystem. I've tried a 4.12.12 kernel and am still
> getting the hang.
There is now
On Wed, 18 Oct 2017 09:24:01 +0800
Qu Wenruo wrote:
>
>
> On 2017-10-18 04:43, Cameron Kelley wrote:
> > Hey btrfs gurus,
> >
> > I have a 4 disk btrfs filesystem that has suddenly stopped mounting
> > after a recent reboot. The data is in an odd configuration due to
> > originally being in a
On Tue, 3 Oct 2017 10:54:05 +
Hugo Mills wrote:
>There are other possibilities for missing space, but let's cover
> the obvious ones first.
One more obvious thing would be files that are deleted, but still kept open by
some app (possibly even from network, via NFS or SMB!). @Frederic, di
On Tue, 26 Sep 2017 16:50:00 + (UTC)
Ferry Toth wrote:
> https://www.phoronix.com/scan.php?page=article&item=linux414-bcache-
> raid&num=2
>
> I think it might be idle hope to think bcache can be used as an SSD cache
> for btrfs to significantly improve performance..
My personal real-world
On Tue, 12 Sep 2017 12:32:14 +0200
Adam Borowski wrote:
> discard in the guest (not supported over ide and virtio, supported over scsi
> and virtio-scsi)
IDE does support discard in QEMU; I use that all the time.
It got broken briefly in QEMU 2.1 [1], but then fixed again.
[1] https://bugs.deb
On Thu, 31 Aug 2017 07:45:55 -0400
"Austin S. Hemmelgarn" wrote:
> If you use dm-cache (what LVM uses), you need to be _VERY_ careful and
> can't use it safely at all with multi-device volumes because it leaves
> the underlying block device exposed.
It locks the underlying device so it can't b
On Thu, 31 Aug 2017 12:43:19 +0200
Marco Lorenzo Crociani wrote:
> Hi,
> this 37T filesystem took some time to mount. It has 47
> subvolumes/snapshots and is mounted with
> noatime,compress=zlib,space_cache. Is it normal, due to its size?
If you could implement SSD caching in front of your FS
On Mon, 28 Aug 2017 15:03:47 +0300
Nikolay Borisov wrote:
> when the cleaner thread runs again the snapshot's root item is going to
> be deleted for good and you no longer will see it.
Oh, that's pretty sweet -- it means there's actually a way to reliably wait
for cleaner work to be done on all
On Tue, 22 Aug 2017 18:57:25 +0200
Ulli Horlacher wrote:
> On Tue 2017-08-22 (21:45), Roman Mamedov wrote:
>
> > It is beneficial to not have snapshots in-place. With a local directory of
> > snapshots, issuing things like "find", "grep -r" or even &quo
On Tue, 22 Aug 2017 17:45:37 +0200
Ulli Horlacher wrote:
> In perl I have now:
>
> $root = $volume;
> while (`btrfs subvolume show "$root" 2>/dev/null` !~ /toplevel subvolume/) {
> $root = dirname($root);
> last if $root eq '/';
> }
>
>
If you are okay with rolling your own solutions like
On Tue, 22 Aug 2017 16:24:51 +0200
Ulli Horlacher wrote:
> On Tue 2017-08-22 (15:44), Peter Becker wrote:
> > Is use: https://github.com/jf647/btrfs-snap
> >
> > 2017-08-22 15:22 GMT+02:00 Ulli Horlacher :
> > > With Netapp/waffle you have automatic hourly/daily/weekly snapshots.
> > > You can f
On Wed, 16 Aug 2017 12:48:42 +0100 (BST)
"Konstantin V. Gavrilenko" wrote:
> I believe the chunk size of 512kb is even worse for performance than the
> default settings on my HW RAID of 256kb.
It might be, but that does not explain the original problem reported at all.
If mdraid performance wo
On Fri, 4 Aug 2017 12:44:44 +0500
Roman Mamedov wrote:
> > What is 0x98f94189, is it not a csum of a block of zeroes by any chance?
>
> It does seem to be something of that sort
Actually, I think I know what happened.
I used "dd bs=1M conv=sparse" to copy source FS ont
On Fri, 4 Aug 2017 12:18:58 +0500
Roman Mamedov wrote:
> What I find weird is why the expected csum is the same on all of these.
> Any idea what this might point to as the cause?
>
> What is 0x98f94189, is it not a csum of a block of zeroes by any chance?
It does seem to be somet
Hello,
I migrated my home dir to a LUKS dm-crypt device some time ago, and today
during a scheduled backup a few files turned out to be unreadable, with csum
errors from Btrfs in dmesg.
What I find weird is why the expected csum is the same on all of these.
Any idea what this might point to as
On Wed, 02 Aug 2017 11:17:04 +0200
Thomas Wurfbaum wrote:
> A restore does also not help:
> mainframe:~ # btrfs restore /dev/sdb1 /mnt
> parent transid verify failed on 29392896 wanted 1486833 found 1486836
> parent transid verify failed on 29392896 wanted 1486833 found 1486836
> parent transid
On Tue, 1 Aug 2017 10:14:23 -0600
Liu Bo wrote:
> This aims to fix the write hole issue on btrfs raid5/6 setups by adding a
> separate disk as a journal (aka raid5/6 log), so that after unclean
> shutdown we can make sure data and parity are consistent on the raid
> array by replaying the journal.
C
On Sun, 30 Jul 2017 18:14:35 +0200
"marcel.cochem" wrote:
> I am pretty sure that not all data is lost as I can grep through the
> 100 GB SSD partition. But my question is, if there is a tool to rescue
> all (intact) data and maybe have only a few corrupt files which can't
> be recovered.
There
On Mon, 31 Jul 2017 11:12:01 -0700
Liu Bo wrote:
> Superblock and chunk tree root is OK, looks like the header part of
> the tree root is now all-zero, but I'm unable to think of a btrfs bug
> which can lead to that (if there is, it is a serious enough one)
I see that the FS is being mounted wit
On Fri, 28 Jul 2017 17:40:50 +0100 (BST)
"Konstantin V. Gavrilenko" wrote:
> Hello list,
>
> I am stuck with a problem of btrfs slow performance when using compression.
>
> when the compress-force=lzo mount flag is enabled, the performance drops to
> 30-40 mb/s and one of the btrfs processes
On Mon, 24 Jul 2017 09:46:34 -0400
"Austin S. Hemmelgarn" wrote:
> > I am a little bit confused because the balance command has been running for
> > 12 hours and only 3GB of data has been touched. This would mean the whole
> > balance process (new disc has 8TB) would run a long, long time... and
> > is us
On Fri, 21 Jul 2017 13:00:56 +0800
Anand Jain wrote:
>
>
> On 07/18/2017 02:30 AM, David Sterba wrote:
> > So it basically looks good, I could not resist and rewrote the changelog
> > and comments. There's one code fix:
> >
> > On Mon, Jul 17, 2017 at 04:52:58PM +0300, Timofey Titovets wrote:
On Tue, 18 Jul 2017 16:57:10 +0500
Roman Mamedov wrote:
> if a block written consists of zeroes entirely, instead of writing zeroes to
> the backing storage, converts that into an "unmap" operation
> (FALLOC_FL_PUNCH_HOLE[1]).
BTW I found that it is very easy to "offl
Hello,
Qemu/KVM has this nice feature in its storage layer, "detect-zeroes=unmap".
Basically the VM host detects if a block written by the guest consists of
zeroes entirely, and instead of writing zeroes to the backing storage,
converts that into an "unmap" operation (FALLOC_FL_PUNCH_HOLE[1]).
I
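A hedged sketch of enabling it on a drive (file name assumed); note that
detect-zeroes=unmap only turns zero writes into discards when discard=unmap is
also set, otherwise it behaves like detect-zeroes=on:
# qemu-system-x86_64 ... \
    -drive file=guest.img,format=raw,if=virtio,discard=unmap,detect-zeroes=unmap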
On Wed, 5 Jul 2017 22:10:35 -0600
Daniel Brady wrote:
> parent transid verify failed
Typically in Btrfs terms this means "you're screwed": fsck will not fix it, and
nobody will know how to fix it or what the cause is either. Time to restore from
backups! Or look into "btrfs restore" if you don't ha
On Thu, 8 Jun 2017 19:57:10 +0200
Hans van Kranenburg wrote:
> There is an improvement with subvolume delete + nossd that is visible
> between 4.7 and 4.9.
I don't remember if I asked before, but did you test on 4.4? The two latest
longterm series are 4.9 and 4.4. 4.7 should be abandoned and for
On Wed, 7 Jun 2017 15:09:02 +0200
Adam Borowski wrote:
> On Wed, Jun 07, 2017 at 01:10:26PM +0300, Timofey Titovets wrote:
> > 2017-06-07 13:05 GMT+03:00 Stefan G. Weichinger :
> > > Am 2017-06-07 um 11:37 schrieb Timofey Titovets:
> > >
> > >> btrfs scrub start /mnt_path do this trick
> > >>
> >
On Sun, 21 May 2017 19:54:05 +0300
Timofey Titovets wrote:
> Sorry, I know about the subpagesize-blocksize patch set, but I don't
> understand where you see a conflict?
>
> Can you explain what you mean?
>
> By PAGE_SIZE i mean fs cluster size in my patch set.
This appears to be exactly the conf
On Fri, 19 May 2017 11:55:27 +0300
Pasi Kärkkäinen wrote:
> > > Try saving your data with "btrfs restore" first
> >
> > First post, he tried that. No luck. Tho that was with 4.4 userspace.
> > It might be worth trying with the 4.11-rc or soon to be released 4.11
> > userspace, tho...
> >
On Thu, 18 May 2017 04:09:38 +0200
Łukasz Wróblewski wrote:
> I will try when stable 4.12 comes out.
> Unfortunately I do not have a backup.
> Fortunately, these data are not so critical.
> Some private photos and videos of youth.
> However, I would be very happy if I could get it back.
Try savi
On Fri, 12 May 2017 20:36:44 +0200
Kai Krakow wrote:
> My concern is with failure scenarios of some SSDs which die unexpectedly and
> horribly. I found some reports of older Samsung SSDs which failed
> suddenly and unexpectedly, in a way that the drive completely died:
> No more data access, everyth
On Thu, 11 May 2017 09:19:28 -0600
Chris Murphy wrote:
> On Thu, May 11, 2017 at 8:56 AM, Marat Khalili wrote:
> > Sorry if question sounds unorthodox, Is there some simple way to read (and
> > backup) all BTRFS metadata from volume?
>
> btrfs-image
Hm, I thought that's for debugging only, and
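For reference, a short sketch of both directions (device names assumed).
btrfs-image captures metadata only, -s sanitizes file names, and the -r restore
direction overwrites the target device:
# btrfs-image /dev/sdb1 /tmp/metadata.img
# btrfs-image -r /tmp/metadata.img /dev/sdc1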
On Wed, 10 May 2017 09:48:07 +0200
Martin Steigerwald wrote:
> Yet, when it comes to btrfs check? Its still quite rudimentary if you ask me.
>
Indeed it is. It may or may not be possible to build a perfect fsck, but IMO
for the time being, what's most sorely missing is some sort of a knowing
On Wed, 10 May 2017 09:02:46 +0200
Stefan Priebe - Profihost AG wrote:
> how to fix bad key ordering?
You should clarify whether the FS in question mounts (read-write? read-only?)
and what the kernel messages are if it does not.
--
With respect,
Roman
On Mon, 8 May 2017 20:05:44 +0200
"Janos Toth F." wrote:
> May be someone more talented will be able to assist you but in my
> experience this kind of damage is fatal in practice (even if you could
> theoretically fix it, it's probably easier to recreate the fs and
> restore the content from back
Hello,
It appears that during some trouble with HDD cables and controllers, I got some
disk corruption.
As a result, after a short period of time my Btrfs went read-only, and now does
not mount anymore.
[Sun May 7 23:08:02 2017] BTRFS error (device dm-8): parent transid verify
failed on 13799
On Tue, 2 May 2017 23:17:11 -0700
Marc MERLIN wrote:
> On Tue, May 02, 2017 at 11:00:08PM -0700, Marc MERLIN wrote:
> > David,
> >
> > I think you maintain btrfs-progs, but I'm not sure if you're in charge
> > of check --repair.
> > Could you comment on the bottom of the mail, namely:
> > > fai
On Fri, 28 Apr 2017 11:13:36 +0200
Christophe de Dinechin wrote:
> Since we memset tmpl, max_size==0. This does not seem consistent with nr = 1.
> In check_extent_refs, we will call:
>
> set_extent_dirty(root->fs_info->excluded_extents,
>rec->start,
>rec
On Thu, 27 Apr 2017 08:52:30 -0500
Gerard Saraber wrote:
> I could just reboot the system and be fine for a week or so, but is
> there any way to diagnose this?
`btrfs fi df` for a start.
Also obligatory questions: do you have a lot of snapshots, and do you use
qgroups?
--
With respect,
Roman
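For the diagnosis step, a hedged sketch of the usual space-accounting commands
(mount point assumed):
# btrfs filesystem df /mnt/data
# btrfs filesystem usage /mnt/data
# btrfs subvolume list -s /mnt/data | wc -l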
On Tue, 18 Apr 2017 03:23:13 + (UTC)
Duncan <1i5t5.dun...@cox.net> wrote:
> Without reading the links...
>
> Are you /sure/ it's /all/ ssds currently on the market? Or are you
> thinking narrowly, those actually sold as ssds?
>
> Because all I've read (and I admit I may not actually be cur
On Mon, 17 Apr 2017 07:53:04 -0400
"Austin S. Hemmelgarn" wrote:
> General info (not BTRFS specific):
> * Based on SMART attributes and other factors, current life expectancy
> for light usage (normal desktop usage) appears to be somewhere around
> 8-12 years depending on specifics of usage (as
On Sun, 9 Apr 2017 06:38:54 +
Paul Jones wrote:
> -Original Message-
> From: linux-btrfs-ow...@vger.kernel.org
> [mailto:linux-btrfs-ow...@vger.kernel.org] On Behalf Of Hans van Kranenburg
> Sent: Sunday, 9 April 2017 6:19 AM
> To: linux-btrfs
> Subject: About free space fragmentati
On Mon, 3 Apr 2017 11:30:44 +0300
Marat Khalili wrote:
> You may want to look here: https://www.synology.com/en-global/dsm/Btrfs
> . Somebody forgot to tell Synology, which already supports btrfs in all
> hardware-capable devices. I think Rubicon has been crossed in
> 'mass-market NAS[es]', fo
On Sun, 2 Apr 2017 09:30:46 +0300
Andrei Borzenkov wrote:
> On 02.04.2017 03:59, Duncan wrote:
> >
> > 4) In fact, since an in-place convert is almost certainly going to take
> > more time than a blow-away and restore from backup,
>
> This caught my eye. Why? In-place convert just needs to recre
On Mon, 27 Mar 2017 13:32:47 -0600
Chris Murphy wrote:
> How about if qgroups are enabled, then non-root user is prevented from
> creating new subvolumes?
That sounds like: if you turn your headlights on in a car, then the in-vehicle
air conditioner randomly stops working. :)
Two things only vaguel
On Mon, 27 Mar 2017 16:49:47 +0200
Christian Theune wrote:
> Also: the idea of migrating on btrfs also has its downside - the performance
> of “mkdir” and “fsync” is abysmal at the moment. I’m waiting for the current
> shrinking job to finish but this is likely limited to the “find free space”
On Mon, 27 Mar 2017 15:20:37 +0200
Christian Theune wrote:
> (Background info: we’re migrating large volumes from btrfs to xfs and can
> only do this step by step: copying some data, shrinking the btrfs volume,
> extending the xfs volume, rinse repeat. If someone should have any
> suggestions to
On Sat, 25 Mar 2017 23:00:20 -0400
"J. Hart" wrote:
> I have a Btrfs filesystem on a backup server. This filesystem has a
> directory to hold backups for filesystems from remote machines. In this
> directory is a subdirectory for each machine. Under each machine
> subdirectory is one direct
On Fri, 17 Mar 2017 10:27:11 +0100
Lionel Bouton wrote:
> Hi,
>
> On 17/03/2017 at 09:43, Hans van Kranenburg wrote:
> > btrfs-debug-tree -b 3415463870464
>
> Here is what it gives me back :
>
> btrfs-debug-tree -b 3415463870464 /dev/sdb
> btrfs-progs v4.6.1
> checksum verify failed on 34154
On Thu, 16 Feb 2017 13:37:53 +0200
Imran Geriskovan wrote:
> What are your experiences for btrfs regarding 4.10 and 4.11 kernels?
> I'm still on 4.8.x. I'd be happy to hear from anyone using 4.1x for
> a very typical single disk setup. Are they reasonably stable/good
> enough for this case?
You
On Tue, 14 Feb 2017 10:30:43 -0500
"Austin S. Hemmelgarn" wrote:
> I was just experimenting with snapshots on 4.9.0, and came across some
> unexpected behavior.
>
> The simple explanation is that if you snapshot a subvolume, any files in
> the subvolume that have the NOCOW attribute will not h
On Tue, 7 Feb 2017 09:13:25 -0500
Peter Zaitsev wrote:
> Hi Hugo,
>
> For the use case I'm looking for I'm interested in having snapshot(s)
> open at all time. Imagine for example snapshot being created every
> hour and several of these snapshots kept at all time providing quick
> recovery po