On Thu, Mar 6, 2014 at 7:25 PM, Zach Brown z...@redhat.com wrote:
On Thu, Mar 06, 2014 at 07:01:07PM -0500, Josef Bacik wrote:
Zach found this deadlock that would happen like this
And this fixes it. It's run through a few times successfully.
I'm not sure if my issue is related to this or
On Wed, Mar 12, 2014 at 11:24 AM, Josef Bacik jba...@fb.com wrote:
On 03/12/2014 08:56 AM, Rich Freeman wrote:
After a number of reboots the system became stable, presumably
whatever race condition btrfs was hitting followed a favorable
path.
I do have a 2GB btrfs-image pre-dating my
On Wed, Mar 12, 2014 at 12:34 PM, Rich Freeman
r-bt...@thefreemanclan.net wrote:
On Wed, Mar 12, 2014 at 11:24 AM, Josef Bacik jba...@fb.com wrote:
On 03/12/2014 08:56 AM, Rich Freeman wrote:
After a number of reboots the system became stable, presumably
whatever race condition btrfs
On Sat, Mar 15, 2014 at 7:51 AM, Duncan 1i5t5.dun...@cox.net wrote:
1) Does running the snapper cleanup command from that cron job manually
trigger the problem as well?
As you can imagine I'm not too keen to trigger this often. But yes, I
just gave it a shot on my SSD and cleaning a few days
I've been getting blocked tasks on 3.15.1 generally at times when the
filesystem is somewhat busy (such as doing a backup via scp/clonezilla
writing to the disk).
A week ago I had enabled snapper for a day which resulted in a daily
cleanup of about 8 snapshots at once, which might have
On Fri, Jun 27, 2014 at 9:06 AM, Duncan 1i5t5.dun...@cox.net wrote:
Hopefully that problem's fixed on 3.16-rc2+, but as of yet there's not
enough 3.16-rc2+ reports out there from folks experiencing issues with
3.15 blocked tasks to rightfully say.
Any chance that it was backported to 3.15.2?
On Fri, Jun 27, 2014 at 11:52 AM, Chris Murphy li...@colorremedies.com wrote:
On Jun 27, 2014, at 9:14 AM, Rich Freeman r-bt...@thefreemanclan.net wrote:
I got another block this morning and failed to capture a log before my
terminals gave out. I switched back to 3.15.0 for the moment
On Fri, Jun 27, 2014 at 8:22 PM, Chris Samuel ch...@csamuel.org wrote:
On Fri, 27 Jun 2014 05:20:41 PM Duncan wrote:
If I'm not mistaken the fix for the 3.16 series bug was:
ea4ebde02e08558b020c4b61bb9a4c0fcf63028e
Btrfs: fix deadlocks with trylock on tree nodes.
That patch applies
On Tue, Jul 22, 2014 at 10:53 AM, Chris Mason c...@fb.com wrote:
Thanks for the help in tracking this down everyone. We'll get there!
Are you all running multi-disk systems (from a btrfs POV, more than one
device?) I don't care how many physical drives this maps to, just does
btrfs think
On Wed, Aug 13, 2014 at 7:54 AM, Martin Steigerwald mar...@lichtvoll.de wrote:
Am Dienstag, 12. August 2014, 15:44:59 schrieb Liu Bo:
This has been reported and discussed for a long time, and this hang occurs
in both 3.15 and 3.16.
Liu, is this safe for testing yet?
I'm more than happy to
On Fri, Aug 22, 2014 at 8:04 AM, Austin S Hemmelgarn
ahferro...@gmail.com wrote:
I personally use Gentoo Unstable on all my systems, so I build all my
kernels locally anyway, and stay pretty much in-line with the current
stable Mainline kernel.
Gentoo Unstable probably means gentoo-sources,
On Fri, Aug 22, 2014 at 3:35 AM, Duncan 1i5t5.dun...@cox.net wrote:
No claim to be a dev, btrfs or otherwise, here, but I believe in this
case you /are/ being too paranoid.
Both btrfs send and receive only deal with data/metadata they know how to
deal with. If it's corrupt in some way or if
On Wed, Sep 10, 2014 at 9:06 AM, Austin S Hemmelgarn
ahferro...@gmail.com wrote:
Normally, you shouldn't need to run balance at all on most BTRFS
filesystems, unless your usage patterns vary widely over time (I'm
actually a good example of this, most of the files in my home directory
are
On Thu, Sep 25, 2014 at 5:21 PM, Holger Hoffstätte
holger.hoffstae...@googlemail.com wrote:
That's why I mentioned adding a second device - that will immediately
allow cleanup with headroom. An additional 8GB tmpfs volume can work
wonders.
If you add a single 8GB tmpfs to a RAID1 btrfs
On Sun, Oct 12, 2014 at 6:14 AM, Martin Steigerwald mar...@lichtvoll.de wrote:
Am Freitag, 10. Oktober 2014, 10:37:44 schrieb Chris Murphy:
On Oct 10, 2014, at 6:53 AM, Bob Marley bobmar...@shiftmail.org wrote:
On 10/10/2014 03:58, Chris Murphy wrote:
* mount -o recovery
Enable
On Thu, Oct 9, 2014 at 10:19 AM, Petr Janecek jane...@ucw.cz wrote:
I have trouble finishing btrfs balance on five disk raid10 fs.
I added a disk to 4x3TB raid10 fs and ran btrfs balance start
/mnt/b3, which segfaulted after a few hours, probably because of the BUG
below. btrfs check does not
On Thu, Oct 2, 2014 at 3:27 AM, Tomasz Chmielewski t...@virtall.com wrote:
Got this when running balance with 3.17.0-rc7:
[173475.410717] kernel BUG at fs/btrfs/relocation.c:931!
I just started a post on another thread with this exact same issue on
3.17.0. I started a balance after adding a
On Sun, Oct 12, 2014 at 7:11 AM, David Arendt ad...@prnet.org wrote:
This weekend I finally had time to try btrfs send again on the newly
created fs. Now I am running into another problem:
btrfs send returns: ERROR: send ioctl failed with -12: Cannot allocate
memory
In dmesg I see only the
On Mon, Oct 13, 2014 at 4:27 PM, David Arendt ad...@prnet.org wrote:
From my own experience and based on what other people are saying, I
think there is a random btrfs filesystem corruption problem in kernel
3.17 at least related to snapshots, therefore I decided to post using
another subject
On Mon, Oct 13, 2014 at 4:48 PM, john terragon jterra...@gmail.com wrote:
I think I just found a consistent simple way to trigger the problem
(at least on my system). And, as I guessed before, it seems to be
related just to readonly snapshots:
1) I create a readonly snapshot
2) I do some
On Mon, Oct 13, 2014 at 4:55 PM, Rich Freeman
r-bt...@thefreemanclan.net wrote:
On Mon, Oct 13, 2014 at 4:48 PM, john terragon jterra...@gmail.com wrote:
After rebooting (or remounting) I consistently see the corruption
with the usual multitude of these in dmesg
parent transid verify
On Mon, Oct 13, 2014 at 5:22 PM, john terragon jterra...@gmail.com wrote:
I'm using compress=no so compression doesn't seem to be related, at
least in my case. Just read-only snapshots on 3.17 (although I haven't
tried 3.16).
I was using lzo compression, and hence my comment about turning it
On Tue, Oct 14, 2014 at 10:48 AM, Suman C schakr...@gmail.com wrote:
The new drive shows up as sdb. btrfs fi show still prints drive missing.
I mounted the filesystem with ro,degraded and
tried adding the new sdb drive, which results in the following error.
(-f because the new drive has a fs from
On Wed, Oct 15, 2014 at 10:30 AM, Josef Bacik jba...@fb.com wrote:
We've found it, the Fedora guys are reverting the bad patch now, we'll get
the fix sent back to stable shortly. Sorry about that.
After reverting this commit, can the bad snapshots be
deleted/repaired/etc without wiping and
On Fri, Oct 17, 2014 at 8:53 AM, Chris Mason c...@fb.com wrote:
This sounds like the problem fixed with some patches to our extent mapping
code that went in with the merge window. I've cherry picked a few for
stable and I'm running them through tests now. They are in my stable-3.17
branch,
On Mon, Oct 20, 2014 at 10:04 AM, Zygo Blaxell zblax...@furryterror.org wrote:
On Fri, Oct 17, 2014 at 08:17:37AM +, Hugo Mills wrote: On Fri, Oct 17,
2014 at 10:10:09AM +0200, Tomasz Torcz wrote:
On Fri, Oct 17, 2014 at 04:02:03PM +0800, Liu Bo wrote:
Recently I've observed some
On Tue, Oct 21, 2014 at 5:29 AM, Duncan 1i5t5.dun...@cox.net wrote:
David Sterba posted on Mon, 20 Oct 2014 18:34:03 +0200 as excerpted:
On Thu, Oct 16, 2014 at 01:33:37PM +0200, David Sterba wrote:
I'd like to make it default with the 3.17 release of btrfs-progs.
Please let me know if you
On Thu, Oct 23, 2014 at 10:35 PM, Zygo Blaxell
ce3g8...@umail.furryterror.org wrote:
- single profile: we can tolerate zero missing disks,
so we don't allow rw mounts even if degraded.
That seems like the wrong logic here. By all means mount read-only by
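The profile-tolerance rule being debated in that snippet can be sketched as a tiny lookup. This is a hypothetical illustration of the policy under discussion, not actual kernel code; the counts are the commonly cited redundancy levels for each profile:

```python
# Hypothetical sketch of the degraded-mount policy under discussion:
# how many missing devices each btrfs profile can lose while still
# being able to read all data. Not actual kernel code.
TOLERATED_MISSING = {
    "single": 0, "dup": 0, "raid0": 0,
    "raid1": 1, "raid10": 1, "raid5": 1, "raid6": 2,
}

def allow_rw_degraded(profile, missing_devices):
    """Allow a read-write degraded mount only if the profile can still
    reconstruct all of its data with this many devices missing."""
    return missing_devices <= TOLERATED_MISSING[profile]

print(allow_rw_degraded("raid1", 1))   # one missing mirror is tolerable
print(allow_rw_degraded("single", 1))  # single profile tolerates none
```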
On Fri, Oct 24, 2014 at 12:07 PM, Zygo Blaxell
ce3g8...@umail.furryterror.org wrote:
We could also leave this as an option to the user mount -o
degraded-and-I-want-to-lose-my-data, but in my opinion the use
case is very, very exceptional.
Well, it is only exceptional
On Mon, Oct 13, 2014 at 11:12 AM, Rich Freeman
r-bt...@thefreemanclan.net wrote:
On Thu, Oct 9, 2014 at 10:19 AM, Petr Janecek jane...@ucw.cz wrote:
I have trouble finishing btrfs balance on five disk raid10 fs.
I added a disk to 4x3TB raid10 fs and ran btrfs balance start
/mnt/b3, which
On Tue, Oct 28, 2014 at 9:12 AM, E V eliven...@gmail.com wrote:
I've seen dead locks on 3.16.3. Personally, I'm staying with 3.14
until something newer stabilizes, haven't had any issues with it. You
might want to try the latest 3.14, though I think there should be a
new one pretty soon with
On Tue, Oct 28, 2014 at 9:33 AM, Duncan 1i5t5.dun...@cox.net wrote:
Since it's not an option here I've not looked into it too closely
personally, and don't know if it'll fit your needs, but if it does, it
may well be simpler to substitute it into the existing backup setup
without rewriting the
On Thu, Oct 30, 2014 at 9:02 PM, Tobias Holst to...@tobby.eu wrote:
Addition:
I found some posts here about a general file system corruption in 3.17
and 3.17.1 - is this the cause?
Additionally I am using ro-snapshots - maybe this is the cause, too?
Anyway: Can I fix that or do I have to
On Tue, Nov 25, 2014 at 6:13 PM, Chris Murphy li...@colorremedies.com wrote:
A few years ago companies including Western Digital started shipping
large cheap drives, think of the green drives. These had very high
TLER (Time Limited Error Recovery) settings, a.k.a. SCT ERC. Later
they
How does btrfs raid5 handle mixed-size disks? The docs weren't
terribly clear on this.
Suppose I have 4x3TB and 1x1TB disks. Using conventional lvm+mdadm in
raid5 mode I'd expect to be able to fit about 10TB of space on those
(2TB striped across 4 disks plus 1TB striped across 5 disks after
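The lvm+mdadm-style arithmetic in that question can be checked with a short greedy simulation. This is a sketch under the assumption that each stripe is laid across every disk that still has free space (which is how the 10TB estimate above is derived); it is not btrfs's actual chunk allocator:

```python
def raid5_capacity(disks):
    """Greedy per-stripe allocation: repeatedly stripe across all disks
    with free space; a stripe of width n stores n-1 units of data.
    Stops once fewer than two disks have space left."""
    disks = sorted(disks)
    usable = 0
    while len(disks) >= 2:
        width = len(disks)
        chunk = disks[0]                 # limited by the smallest remaining disk
        usable += chunk * (width - 1)
        disks = [d - chunk for d in disks[1:]]   # smallest disk is now full
        disks = [d for d in disks if d > 0]
    return usable

# 4x3TB + 1x1TB: 1TB across 5 disks (4TB data) + 2TB across 4 disks (6TB data)
print(raid5_capacity([3, 3, 3, 3, 1]))  # -> 10, matching the ~10TB estimate
```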
On Tue, Mar 24, 2015 at 2:31 AM, Anand Jain anand.j...@oracle.com wrote:
Do you have this fix ..
[PATCH] Btrfs: release path before starting transaction in can_nocow_extent
Could you try?
I believe I already have this patch. 3.18.9 contains this:
commit
On Mon, Mar 23, 2015 at 7:22 PM, Hugo Mills h...@carfax.org.uk wrote:
On Mon, Mar 23, 2015 at 11:10:46PM +, Martin wrote:
As titled:
Does btrfs have dedup (on raid1 multiple disks) that can be enabled?
The current state of play is on the wiki:
On Sun, Mar 29, 2015 at 7:43 AM, Kai Krakow hurikha...@gmail.com wrote:
With the planned performance improvements, I'm guessing the best way will
become mounting the root subvolume (subvolid 0) and letting duperemove work
on that as a whole - including crossing all fs boundaries.
Why cross
On Mon, Mar 23, 2015 at 4:23 AM, Anand Jain anand.j...@oracle.com wrote:
Do you still have the problem ? Can you pls confirm on the latest btrfs ?
Since I am fixing the devices part of btrfs, I am a bit nervous.
I'm having a similar problem. I'm getting some kind of btrfs
corruption that
On Thu, Mar 26, 2015 at 8:07 PM, Martin m_bt...@ml1.co.uk wrote:
Anyone with any comments on how well duperemove performs for TB-sized
volumes?
Took many hours but less than a day for a few TB - I'm not sure
whether it is smart enough to take less time on subsequent scans like
bedup.
Does
On Wed, Mar 25, 2015 at 6:55 AM, Marc Cousin cousinm...@gmail.com wrote:
On 25/03/2015 02:19, David Sterba wrote:
as it reads the pre/post snapshots and deletes them if the diff is
empty. This adds some IO stress.
I couldn't find a clear explanation in the documentation. Does it mean
that
On Mon, Mar 23, 2015 at 9:22 AM, Rich Freeman
r-bt...@thefreemanclan.net wrote:
I'm having a similar problem. I'm getting some kind of btrfs
corruption that causes a panic/reboot, and then the initramfs won't
mount root for 3.18.9, but it will mount it for 3.18.8.
Running on 3.18.8
On Mon, Apr 13, 2015 at 12:58 PM, Greg KH gre...@linuxfoundation.org wrote:
On Mon, Apr 13, 2015 at 07:28:38PM +0500, Roman Mamedov wrote:
On Thu, 2 Apr 2015 10:17:47 -0400
Chris Mason c...@fb.com wrote:
Hi stable friends,
Can you please backport this one to 3.19.y. It fixes a bug
On Wed, Apr 1, 2015 at 2:50 AM, Anand Jain anand.j...@oracle.com wrote:
Eric found something like this and has a fix with in the email.
Sub: I think btrfs: fix leak of path in btrfs_find_item broke stable
trees ...
I don't mind trying this patch if the maintainers recommend it. I'm
still
On Mon, Jul 27, 2015 at 1:20 AM, Duncan 1i5t5.dun...@cox.net wrote:
Philip Seeger posted on Sun, 26 Jul 2015 22:39:04 +0200 as excerpted:
Hi,
50% of the time when booting, the system goes into safe mode because my 12x
4TB RAID10 btrfs is taking too long to mount from fstab.
This won't help,
On Sun, Aug 9, 2015 at 8:47 AM, Hugo Mills h...@carfax.org.uk wrote:
On Sun, Aug 09, 2015 at 02:29:53PM +0200, Jim MacBaine wrote:
Hi,
How does btrfs handle raid1 on a bunch of uneven sized disks? Can I
just keep adding arbitrarily sized disks to an existing raid1 and
expect the file system
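The raid1-on-uneven-disks question above has a well-known back-of-the-envelope answer: two copies of every chunk, each on a different device. A sketch of that formula (an illustration under the assumption that the allocator can always pair free space optimally; not from the truncated reply):

```python
def raid1_capacity(disks):
    # Two copies of every chunk, always on two different devices:
    # capacity is half the total space, unless one disk is so large
    # that the other disks run out of room to hold second copies.
    total = sum(disks)
    return min(total / 2, total - max(disks))

print(raid1_capacity([3, 3, 3, 1]))  # half of 10TB total
print(raid1_capacity([6, 1, 1]))     # limited by the two small disks
```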
On Wed, Oct 14, 2015 at 4:53 PM, Donald Pearson
wrote:
>
> Personally I would still recommend zfs on illumos in production,
> because it's nearly unshakeable and the creative things you can do to
> deal with problems are pretty remarkable. The unfortunate reality is
>
On Wed, Oct 14, 2015 at 10:47 PM, Zygo Blaxell
wrote:
>
> I wouldn't describe dedup+defrag as unsafe. More like insane. You won't
> lose any data, but running both will waste a lot of time and power.
> Either one is OK without the other, or applied to
On Wed, Oct 14, 2015 at 9:47 PM, Chris Murphy wrote:
>
> For that matter, now that GlusterFS has checksums and snapshots...
Interesting - I haven't kept up with that. Does it actually do
end-to-end checksums? That is, compute the checksum at the time of
storage, store
On Sat, Oct 17, 2015 at 12:36 PM, Xavier Gnata wrote:
> 2) Disabling copy-on-write for just the VM image directory.
Unless this has changed, doing this will also disable checksumming. I
don't see any reason why it has to, but it does. So, I avoid using
this at all
On Wed, Oct 14, 2015 at 1:09 AM, Zygo Blaxell
wrote:
>
> I wouldn't try to use dedup on a kernel older than v4.1 because of these
> fixes in 4.1 and later:
I would assume that these would be ported to the other longterm
kernels like 3.18 at some point?
> Do dedup
What is the current state of Dedup and Defrag in btrfs? I seem to
recall there having been problems a few months ago and I've stopped
using it, but I haven't seen much news since.
I'm interested both in the 3.18 and subsequent kernel series.
--
Rich
On Wed, Sep 16, 2015 at 12:45 PM, Martin Tippmann
wrote:
> From reading the list I understand that btrfs is still very much work
> in progress and performance is not a top priority at this stage but I
> don't see why it shouldn't perform at least equally well as
On Sun, Oct 4, 2015 at 8:03 AM, Lionel Bouton
wrote:
>
> This focus on single reader RAID1 performance surprises me.
>
> 1/ AFAIK the kernel md RAID1 code behaves the same (last time I checked
> you need 2 processes to read from 2 devices at once) and I've never
On Sun, Sep 27, 2015 at 10:45 PM, Duncan <1i5t5.dun...@cox.net> wrote:
> But I think part of reasoning behind the relatively low priority this
> issue has received is that it's a low visibility issue not really
> affecting most people running btrfs, either because they're not running
> on ssd or
On Fri, Sep 25, 2015 at 9:25 AM, Bostjan Skufca wrote:
>
> Similar here: I am sticking with 3.19.2 which has proven to work fine for me
I'd recommend still tracking SOME stable series. I'm sure there were
fixes in 3.19 for btrfs (to say nothing of other subsystems) that
you're
On Sat, Sep 19, 2015 at 9:26 PM, Jim Salter wrote:
>
> ZFS, by contrast, works like absolute gangbusters for KVM image storage.
I'd be interested in what allows ZFS to handle KVM image storage well,
and whether this could be implemented in btrfs. I'd think that the
fragmentation
On Fri, Sep 25, 2015 at 7:20 AM, Austin S Hemmelgarn
wrote:
> On 2015-09-24 17:07, Sjoerd wrote:
>>
>> Maybe a silly question for most of you, but the wiki states to always try
>> to
>> use the latest kernel with btrfs. Which one would be best:
>> - 4.2.1 (currently latest
On Mon, Oct 5, 2015 at 7:16 AM, Lionel Bouton
wrote:
> According to the bad performance -> unstable logic, md would then be the
> least stable RAID1 implementation, which doesn't make sense to me.
>
The argument wasn't that bad performance meant that something was
On Fri, Dec 25, 2015 at 11:34 PM, Chris Murphy wrote:
> I would then also try to reproduce with 4.2.8 or 4.3.3 because those
> have ~25% more backports than made it to 4.1.15, so there's an off chance
> it's fixed there.
I take it that those backports are in the queue
On Wed, Mar 9, 2016 at 4:45 PM, Marc MERLIN wrote:
> On Wed, Mar 09, 2016 at 02:21:26PM -0700, Chris Murphy wrote:
>> > I have a very stripped down docker image that actually mounts portion of
>> > of my root filesystem read only.
>> > While it's running out of a btrfs
On Tue, Mar 1, 2016 at 11:27 AM, Hugo Mills wrote:
>
>Definitely don't use parity RAID on 3.19. It's not really something
> I'd trust, personally, even on 4.4, except for testing purposes.
++ - raid 5/6 are fairly unstable at this point. Raid 1 should be just fine.
>
On Sun, Mar 6, 2016 at 4:07 PM, Chris Murphy <li...@colorremedies.com> wrote:
> On Sun, Mar 6, 2016 at 5:01 AM, Rich Freeman <ri...@gentoo.org> wrote:
>
>> I think it depends on how you define "old." I think that 3.18.28
>> would be fine as it is a suppor
On Fri, Sep 30, 2016 at 8:38 PM, Jeff Mahoney <je...@suse.com> wrote:
> On 9/30/16 5:07 PM, Rich Freeman wrote:
>> On Fri, Sep 30, 2016 at 4:55 PM, Jeff Mahoney <je...@suse.com> wrote:
>>> This looks like a use-after-free on one of the pages used for
>>>
I'm not sure if this is related to the same issue or not, but I just
started getting a new BUG, followed by a panic. (I've also enabled
network console capture so that you won't have to squint at photos.)
Original BUG is:
[14740.444257] ------------[ cut here ]------------
[14740.444293] kernel
Here is another trace, similar to the original issue, but I have a bit
more detail on this one and it is available as text which if nothing
else is more convenient so I'll go ahead and paste this. I don't
intend to keep pasting these unless I get something that looks
different.
I only posted the
On Fri, Sep 30, 2016 at 4:55 PM, Jeff Mahoney wrote:
> This looks like a use-after-free on one of the pages used for
> compression. Can you post the output of objdump -Dr
> /lib/modules/$(uname -r)/kernel/fs/btrfs/btrfs.ko somewhere?
>
Sure:
On Thu, Sep 22, 2016 at 1:41 PM, Jeff Mahoney <je...@suse.com> wrote:
> On 9/22/16 8:18 AM, Rich Freeman wrote:
>> I have been getting panics consistently after doing a btrfs replace
>> operation on a raid1 and rebooting. I linked a photo of the panic; I
>> haven't been
68 matches