Hi!
I'm using btrfs on this system:
Linux qmap02 3.11.0-18-generic #32-Ubuntu SMP Tue Feb 18 21:11:14 UTC
2014 x86_64 x86_64 x86_64 GNU/Linux
During a task with heavy I/O (it's a database server), suddenly every
disk-accessing process freezes. Dmesg outputs the following when this
happened:
[
While raid1 metadata with raid5 data would seem to be a non-useful
configuration, I've taken to using raid10 metadata with raid0 data (and if I'm
right, my logic could probably be extended to claim that r10 metadata would be
a good choice with r5 data).
In theory this would preserve some copy o
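For reference, a minimal sketch of how such a mixed-profile filesystem could
be created or converted (device names and mount point are hypothetical, and
the raid5/6 code is still young at this point):
# mkfs.btrfs -m raid10 -d raid5 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# btrfs balance start -mconvert=raid10 -dconvert=raid5 /mnt
The first command sets the profiles at creation time; the second converts an
existing filesystem in place via balance filters.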
2014-03-23 21:47 GMT+04:00 Hugo Mills :
>> Hello. Sorry for writing to the btrfs mailing list, but personal mail
>> rejects my message, saying:
>> : host 10.101.1.19[10.101.1.19] said: 554 5.4.6 Hop
>> count exceeded - possible mail loop (in reply to end of DATA command)
>
>He's moved to Fac
Hi all,
While fuzzing with trinity inside KVM tools guest running latest -next kernel
I've stumbled on the following spew.
This is a result of a failed allocation in alloc_extent_state_atomic() which
triggers a BUG_ON when the return value is NULL. It's a bit weird that it
BUGs on a failed allocation
My name is Macus Donald. Please, I have no intentions of causing you any pains;
I'm seeking your assistance only on a humanitarian basis. I want you to assist me
ensure that my estate and money are used for an Orphanage Home Project. If you
are interested and you need more details contact me on my
On Sun, Mar 23, 2014 at 10:52:29PM +0000, Hugo Mills wrote:
> On Sun, Mar 23, 2014 at 03:44:35PM -0700, Marc MERLIN wrote:
> > If I lose 2 drives on a raid5, -m raid1 should ensure I haven't lost my
> > metadata.
> > From there, would I indeed have small files that would be stored entirely on
> > s
Ok, thanks to the help I got from you, and my own experiments, I've
written this:
http://marc.merlins.org/perso/btrfs/post_2014-03-23_Btrfs-Raid5-Status.html
If someone reminds me how to edit the btrfs wiki, I'm happy to copy that
there, or give anyone permission to take part or all of what I wrote.
On Sun, Mar 23, 2014 at 03:44:35PM -0700, Marc MERLIN wrote:
> If I lose 2 drives on a raid5, -m raid1 should ensure I haven't lost my
> metadata.
> From there, would I indeed have small files that would be stored entirely on
> some of the drives that didn't go missing, and therefore I could recover
If I lose 2 drives on a raid5, -m raid1 should ensure I haven't lost my
metadata.
From there, would I indeed have small files that would be stored entirely on
some of the drives that didn't go missing, and therefore I could recover
some data with 2 missing drives?
Or is it kind of pointless/waste
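To make the question concrete, a sketch of the setup and recovery attempt
being described (all device names hypothetical; whether the degraded mount
actually succeeds with two missing raid5 members is exactly what's being
asked here):
# mkfs.btrfs -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
(two of the four devices then go missing)
# mount -o degraded,ro /dev/sdb /mnt
# btrfs restore /dev/sdb /tmp/recovered
btrfs restore works on the unmounted device and can pull out whatever files
still have all their extents on the surviving disks.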
On Sun, Mar 23, 2014 at 12:10:17PM -0700, Marc MERLIN wrote:
> On Sun, Mar 23, 2014 at 05:34:09PM +0000, Hugo Mills wrote:
> >xaba on IRC has just pointed out that it looks like you're running
> > this on a mounted filesystem -- it needs to be unmounted for
> > btrfs-image to work reliably.
>
Hi,
I am going through the process of replacing a bad drive in a RAID 1
mirror. The filesystem wouldn't mount because of the missing device,
and the btrfs man pages were not helpful in resolving it. Specifically,
it would have been very useful to me if this part of the wiki were
included somewhere
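For anyone else stuck at the same point, a sketch of the usual sequence for
a one-device-missing raid1 (device names hypothetical):
# mount -o degraded /dev/sdb1 /mnt
# btrfs device add /dev/sdc1 /mnt
# btrfs device delete missing /mnt
Newer tools can also do the add and delete in one step with btrfs replace
start.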
On Wed, Mar 19, 2014 at 10:53:33AM -0600, Chris Murphy wrote:
>
> On Mar 19, 2014, at 9:40 AM, Marc MERLIN wrote:
> >
> > After adding a drive, I couldn't quite tell if it was striping over 11
> > drives or 10, but it felt that at least at times, it was striping over 11
> > drives with write fai
On Sun, Mar 23, 2014 at 05:34:09PM +0000, Hugo Mills wrote:
>xaba on IRC has just pointed out that it looks like you're running
> this on a mounted filesystem -- it needs to be unmounted for
> btrfs-image to work reliably.
Sorry, I didn't realize that, although it makes sense. btrfs-image rea
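For the record, a sketch of the intended usage on an unmounted filesystem
(device and output paths hypothetical):
# umount /mnt/btrfs_backupcopy
# btrfs-image -c9 -t4 /dev/sdb1 /tmp/btrfs-metadata.img
-c9 picks the highest compression level and -t4 uses four threads; the image
contains only metadata, not file data.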
On Sun, Mar 23, 2014 at 11:09:07AM -0700, Marc MERLIN wrote:
> I found out that if a drive used to be part of a raid set that is now mounted
> and running without it, btrfs apparently decides that the drive is still part
> of the mounted raid set and in use.
> As a result, I had to eventually dd 0's o
I found out that if a drive used to be part of a raid set that is now mounted
and running without it, btrfs apparently decides that the drive is still part
of the mounted raid set and in use.
As a result, I had to eventually dd 0's over it, btrfs device scan, and finally
I was able to use it again.
btr
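A less drastic alternative to zeroing the whole drive is to clobber just the
primary btrfs superblock, which lives at the 64KiB offset (device name
hypothetical):
# dd if=/dev/zero of=/dev/sdf bs=4096 seek=16 count=1
# btrfs device scan
On systems that have it, wipefs -a /dev/sdf does the same job more cleanly.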
On Sun, Mar 23, 2014 at 09:36:19PM +0400, Vasiliy Tolstov wrote:
> Hello. Sorry for writing to the btrfs mailing list, but personal mail
> rejects my message, saying:
> : host 10.101.1.19[10.101.1.19] said: 554 5.4.6 Hop
> count exceeded - possible mail loop (in reply to end of DATA command)
Hello. Sorry for writing to the btrfs mailing list, but personal mail
rejects my message, saying:
: host 10.101.1.19[10.101.1.19] said: 554 5.4.6 Hop
count exceeded - possible mail loop (in reply to end of DATA command)
Final-Recipient: rfc822; chris.ma...@fusionio.com
Action: failed
Status: 5.0.0
On Sun, Mar 23, 2014 at 10:03:14AM -0700, Marc MERLIN wrote:
> On Sun, Mar 23, 2014 at 04:28:25PM +0000, Hugo Mills wrote:
> >Before you do this, can you take a btrfs-image of your metadata,
> > and add a report to bugzilla.kernel.org? You're not the only person
> > who's had this problem recently
On Sun, Mar 23, 2014 at 04:28:25PM +0000, Hugo Mills wrote:
>Before you do this, can you take a btrfs-image of your metadata,
> and add a report to bugzilla.kernel.org? You're not the only person
> who's had this problem recently, and I suspect there's something
> still lurking in there that ne
On Sun, Mar 23, 2014 at 09:20:00AM -0700, Marc MERLIN wrote:
> Both
> legolas:/mnt/btrfs_pool2# btrfs balance start -v -dusage=5 /mnt/btrfs_pool2
> legolas:/mnt/btrfs_pool2# btrfs balance start -v -dusage=0 /mnt/btrfs_pool2
> failed unfortunately.
>
> On Sun, Mar 23, 2014 at 12:26:32PM +0000, Duncan wrote:
On Sun, Mar 23, 2014 at 04:18:43PM +0000, Hugo Mills wrote:
> On Sun, Mar 23, 2014 at 08:25:17AM -0700, Marc MERLIN wrote:
> > I'm still doing some testing so that I can write some howto.
> >
> > I got that far after a rebalance (mmmh, that took 2 days with little
> > data, and unfortunately 5 deadlocks and reboots).
Both
legolas:/mnt/btrfs_pool2# btrfs balance start -v -dusage=5 /mnt/btrfs_pool2
legolas:/mnt/btrfs_pool2# btrfs balance start -v -dusage=0 /mnt/btrfs_pool2
failed unfortunately.
On Sun, Mar 23, 2014 at 12:26:32PM +0000, Duncan wrote:
> When it rains, it pours. What you're missing is that this is
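For context, -dusage=N limits the balance to data chunks that are at most N%
full, so -dusage=0 should only touch completely empty chunks; comparing total
versus used allocation first shows whether there is anything for the filter
to reclaim:
# btrfs fi df /mnt/btrfs_pool2
# btrfs balance start -v -dusage=5 /mnt/btrfs_pool2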
On Sun, Mar 23, 2014 at 08:25:17AM -0700, Marc MERLIN wrote:
> I'm still doing some testing so that I can write some howto.
>
> I got that far after a rebalance (mmmh, that took 2 days with little
> data, and unfortunately 5 deadlocks and reboots).
>
> polgara:/mnt/btrfs_backupcopy# btrfs fi show
I'm still doing some testing so that I can write some howto.
I got that far after a rebalance (mmmh, that took 2 days with little
data, and unfortunately 5 deadlocks and reboots).
polgara:/mnt/btrfs_backupcopy# btrfs fi show
Label: backupcopy uuid: eed9b55c-1d5a-40bf-a032-1be6980648e1
Tot
On Sunday, 23 March 2014, 15:51:34, you wrote:
> I was expecting either a speed improvement after rebalance, or no
> noticeable effect, but I am extremely disappointed to see that now (and
> after having rebooted), my system has become slow like hell, takes at least
> 10x longer to boot and op
Hi there,
# uname -r
3.13.6-1-ARCH
# btrfs --version
Btrfs v3.12
After having read the recent discussion about rebalance, I ran it as a test
on my laptop with a 1TB HD, whose current situation (after rebalance) is:
# btrfs fi sh
Label: TETHYS uuid: 9a1ca6f4-1c1b-4a62-a84b-b388066084dc
I was using my laptop when, suddenly, it froze. I forced it to shut
down, but when I tried to turn it back on, /home, a btrfs partition,
couldn't mount. I tried to mount it with the recovery mount option but
it didn't help: http://pastebin.com/8C8MEyK9. Then I tried to recover
it with btrfs-zero-log
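For readers hitting the same wall, a sketch of the usual escalation at the
time, in increasing order of destructiveness (device name hypothetical):
# mount -o recovery /dev/sda3 /home
# btrfs-zero-log /dev/sda3
# btrfs restore /dev/sda3 /mnt/rescue
The recovery mount option falls back to backup tree roots, btrfs-zero-log
discards a possibly-corrupt log tree (losing the last few seconds of writes),
and btrfs restore copies files off the unmounted device as a last resort.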
Marc MERLIN posted on Sun, 23 Mar 2014 00:01:44 -0700 as excerpted:
> legolas:/mnt/btrfs_pool2# btrfs balance .
> ERROR: error during balancing '.' - No space left on device
> But:
> # btrfs fi show `pwd`
> Label: btrfs_pool2 uuid: [...]
> Total devices 1 FS bytes used 646.41GiB
> de
Jon Nelson posted on Sat, 22 Mar 2014 18:21:02 -0500 as excerpted:
>>> # btrfs fi df /
>>> Data, single: total=1.80TiB, used=832.22GiB
>>> System, DUP: total=8.00MiB, used=204.00KiB
>>> Metadata, DUP: total=5.50GiB, used=5.00GiB
[The 0-used single listings left over from filesystem creation omit
On Sun, Mar 23, 2014 at 12:01:44AM -0700, Marc MERLIN wrote:
> legolas:/mnt/btrfs_pool2# btrfs balance .
> ERROR: error during balancing '.' - No space left on device
> There may be more info in syslog - try dmesg | tail
> [ 8454.159635] BTRFS info (device dm-1): relocating block group 288329039872
On 2014/03/22 11:11 PM, Marc MERLIN wrote:
Please consider adding a blank line between quotes; it makes them just a bit
more readable :)
Np.
On Sat, Mar 22, 2014 at 11:02:24PM +0200, Brendan Hide wrote:
- it doesn't create writeable snapshots on the destination in case you want
to use the co
legolas:/mnt/btrfs_pool2# btrfs balance .
ERROR: error during balancing '.' - No space left on device
There may be more info in syslog - try dmesg | tail
[ 8454.159635] BTRFS info (device dm-1): relocating block group 288329039872 flags 1
[ 8590.167294] BTRFS info (device dm-1): relocating block g
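A common workaround when balance has no free space to work with is to lend
the filesystem a temporary device, balance, then remove it again (paths and
sizes hypothetical):
# truncate -s 8G /var/tmp/scratch.img
# losetup /dev/loop0 /var/tmp/scratch.img
# btrfs device add /dev/loop0 /mnt/btrfs_pool2
# btrfs balance start -dusage=5 /mnt/btrfs_pool2
# btrfs device delete /dev/loop0 /mnt/btrfs_pool2
# losetup -d /dev/loop0
The device delete migrates any chunks that landed on the loop device back to
the real disk before removing it.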