On Mon, Mar 03, 2014 at 12:09:11PM -0500, Josef Bacik wrote:
> Ok I lied I just went ahead and did it, please let me know if this
> fixes it
This looked promising, but I still have the problem.
PM: Syncing filesystems ... done.
PM: Preparing system for mem sleep
Freezing user space processes ...
On Mon, Mar 03, 2014 at 05:18:33PM -0500, Josef Bacik wrote:
> > Maybe it will work if we cancel the scrub as opposed to pausing it,
> > but of course it's not ideal. Is that the next step?
>
> Sigh I thought the PM stuff called freeze_fs() but I think that was
> just tuxonice. I don't have a qui
On Wed, Mar 05, 2014 at 06:24:40PM +0100, Swâmi Petaramesh wrote:
> Hello,
>
> (Not having received a single answer, I repost this...)
I got your post, and posted myself about bedup not working at all for me,
and got no answer either.
As far as I can tell, it's entirely unmaintained and was like
I have a file server with 4 cpu cores and 5 btrfs devices:
Label: btrfs_boot uuid: e4c1daa8-9c39-4a59-b0a9-86297d397f3b
Total devices 1 FS bytes used 48.92GiB
devid 1 size 79.93GiB used 73.04GiB path /dev/mapper/cryptroot
Label: varlocalspace uuid: 9f46dbe2-1344-44c3-b0fb-af28
Maybe just btrfs send/receive that is taking a btrfs-wide lock?
Or btrfs scrub maybe?
Thanks,
Marc
On Wed, Mar 12, 2014 at 08:18:08AM -0700, Marc MERLIN wrote:
> I have a file server with 4 cpu cores and 5 btrfs devices:
> Label: btrfs_boot uuid: e4c1daa8-9c39-4a59-b0a9-86297d397f3b
>
On Sun, Mar 09, 2014 at 11:33:50AM +, Hugo Mills wrote:
> discard is, except on the very latest hardware, a synchronous command
> (it's a limitation of the SATA standard), and therefore results in
> very very poor performance.
Interesting. How do I know if a given SSD will hang on discard?
Is
ve
to finish first. It took so long that I don't want to do it again :)
Marc
On Thu, Mar 13, 2014 at 06:48:13PM -0700, Marc MERLIN wrote:
> Can anyone comment on this.
>
> Are others seeing some btrfs operations on filesystem/diskA hang/deadlock
> other btrfs operations on filesyst
On Thu, Mar 13, 2014 at 09:39:02PM -0600, Chris Murphy wrote:
>
> On Mar 13, 2014, at 8:11 PM, Marc MERLIN wrote:
>
> > On Sun, Mar 09, 2014 at 11:33:50AM +, Hugo Mills wrote:
> >> discard is, except on the very latest hardware, a synchronous command
> >&g
On Fri, Mar 14, 2014 at 12:07:54PM +, Duncan wrote:
> Marc MERLIN posted on Thu, 13 Mar 2014 22:17:50 -0700 as excerpted:
>
> > On Thu, Mar 13, 2014 at 09:39:02PM -0600, Chris Murphy wrote:
> >>
> >> On Mar 13, 2014, at 8:11 PM, Marc MERLIN wrote:
> >>
On Fri, Mar 14, 2014 at 08:46:09PM +, Holger Hoffstätte wrote:
> On Fri, 14 Mar 2014 15:57:41 -0400, Martin K. Petersen wrote:
>
> > So right now I'm afraid we don't have a good way for a user to determine
> > whether a device supports queued trims or not.
>
> Mount with discard, unpack kerne
On Sat, Mar 15, 2014 at 11:26:27AM +, Duncan wrote:
> Chris Samuel posted on Sat, 15 Mar 2014 17:48:56 +1100 as excerpted:
>
> > $ sudo smartctl --identify /dev/sdb | fgrep 'Trim bit in DATA SET
> > MANAGEMENT'
> > 169 0 1 Trim bit in DATA SET MANAGEMENT command
> > supported
On Sun, Mar 16, 2014 at 12:22:05PM -0400, Martin K. Petersen wrote:
> queued trim, not even a prototype. I went out and bought a 840 EVO this
> morning because the general lazyweb opinion seemed to indicate that this
> drive supports queued trim. Well, it doesn't. At least not in the 120GB
> versio
I just created this array:
polgara:/mnt/btrfs_backupcopy# btrfs fi show
Label: backupcopy uuid: 7d8e1197-69e4-40d8-8d86-278d275af896
Total devices 10 FS bytes used 220.32GiB
devid 1 size 465.76GiB used 25.42GiB path /dev/dm-0
devid 2 size 465.76GiB used 25.40GiB path
On Sun, Mar 16, 2014 at 05:12:10PM -0600, Chris Murphy wrote:
>
> On Mar 16, 2014, at 4:55 PM, Chris Murphy wrote:
>
> > Then use btrfs replace start.
>
> Looks like in 3.14rc6 replace isn't yet supported. I get "dev_replace cannot
> yet handle RAID5/RAID6".
>
> When I do:
> btrfs device add
On Sun, Mar 16, 2014 at 05:23:25PM -0600, Chris Murphy wrote:
>
> On Mar 16, 2014, at 5:17 PM, Marc MERLIN wrote:
>
> > - but no matter how I remove the faulty drive, there is no rebuild on a
> > new drive procedure that works yet
> >
> > Correct?
>
&g
On Sun, Mar 16, 2014 at 07:06:23PM -0600, Chris Murphy wrote:
>
> On Mar 16, 2014, at 6:51 PM, Marc MERLIN wrote:
> >
> >
> > polgara:/mnt/btrfs_backupcopy# btrfs device delete /dev/mapper/crypt_sde1
> > `pwd`
> > ERROR: error removing the device '/
On Sun, Mar 16, 2014 at 08:56:35PM -0600, Chris Murphy wrote:
> >>> polgara:/mnt/btrfs_backupcopy# btrfs device delete /dev/mapper/crypt_sde1
> >>> `pwd`
> >>> ERROR: error removing the device '/dev/mapper/crypt_sde1' - Invalid
> >>> argument
> >>
> >> You didn't specify a mount point, is the re
ck? So while a day out,
> hourly snapshots are nice, a year out, they're just noise.
I'm happy to share my script with others if that helps:
http://marc.merlins.org/linux/scripts/btrfs-snaps
Or for the list archives/google:
------
On Sun, Mar 16, 2014 at 11:12:43PM -0600, Chris Murphy wrote:
>
> On Mar 16, 2014, at 9:44 PM, Marc MERLIN wrote:
>
> > On Sun, Mar 16, 2014 at 08:56:35PM -0600, Chris Murphy wrote:
> >
> >>> If I add a device, isn't it going to grow my raid to make it bi
On Tue, Mar 18, 2014 at 09:02:07AM +, Duncan wrote:
> First just a note that you hijacked Mr Manana's patch thread. Replying
(...)
I did, I use mutt, I know about In-Reply-To, I was tired, I screwed up,
sorry, and there was no undo :)
> Since you don't have to worry about the data I'd sugges
On Wed, Mar 19, 2014 at 12:32:55AM -0600, Chris Murphy wrote:
>
> On Mar 19, 2014, at 12:09 AM, Marc MERLIN wrote:
> >
> > 7) you can remove a drive from an array, add files, and then if you plug
> > the drive in, it apparently gets auto sucked in back in the array.
My server died last night during a btrfs send/receive to a btrfs raid5 array.
Here are the logs. Is this anything known or with a possible workaround?
Thanks,
Marc
btrfs-rmw-2: page allocation failure: order:1, mode:0x8020
CPU: 1 PID: 12499 Comm: btrfs-rmw-2 Not tainted
3.14.0-rc5-amd64-i915-pre
On Wed, Mar 19, 2014 at 12:20:08PM -0400, Chris Mason wrote:
> On 03/19/2014 11:45 AM, Marc MERLIN wrote:
> >My server died last night during a btrfs send/receive to a btrfs raid5
> >array
> >
> >Here are the logs. Is this anything known or with a possible workaro
On Wed, Mar 19, 2014 at 10:53:33AM -0600, Chris Murphy wrote:
> > Yes, although it's limited, you apparently only lose new data that was added
> > after you went into degraded mode and only if you add another drive where
> > you write more data.
> > In real life this shouldn't be too common, even i
On Thu, Mar 20, 2014 at 12:13:36AM +, Chris Mason wrote:
> >Should I double it?
> >
> >For now, I have the copy running again, and it's been going for 8 hours
> >without failure on the old kernel but of course that doesn't mean my 2TB
> >copy will complete without hitting the bug again.
>
> So
On Thu, Mar 20, 2014 at 01:44:20AM +0100, Tobias Holst wrote:
> I tried the RAID6 implementation of btrfs and I looks like I had the
> same problem. Rebuild with "balance" worked but when a drive was
> removed when mounted and then readded, the chaos began. I tried it a
> few times. So when a drive
On Thu, Mar 20, 2014 at 11:30:33AM -0400, Josef Bacik wrote:
> Yeah there's a way to make suspend run commands while it goes down,
> you'll want to make it do btrfs scrub cancel on your btrfs fses. If you
> search the archives you'll see we've covered this recently and the guy
> posted the scri
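Josef's suggestion above — cancel any running scrub before the freezer kicks in — can be sketched as a system-sleep hook. This is an untested sketch, not the script from the archives he refers to; the install path, and the way systemd passes "pre"/"post" to hooks in that directory, follow the systemd-sleep(8) convention, and the /proc/mounts parsing is my own assumption:

```python
#!/usr/bin/env python3
# Illustrative hook, e.g. /usr/lib/systemd/system-sleep/btrfs-scrub-cancel
# (hypothetical path). systemd invokes hooks in that directory with "pre"
# before suspend and "post" after resume.
import subprocess
import sys

def btrfs_mountpoints(mounts_lines):
    """Return sorted mount points of btrfs filesystems from /proc/mounts lines."""
    mounts = set()
    for line in mounts_lines:
        fields = line.split()
        # /proc/mounts fields: device, mountpoint, fstype, options, dump, pass
        if len(fields) >= 3 and fields[2] == "btrfs":
            mounts.add(fields[1])
    return sorted(mounts)

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "pre":
        with open("/proc/mounts") as f:
            for mnt in btrfs_mountpoints(f):
                # Cancel any running scrub so the freezer doesn't hang
                # against scrub's worker threads during suspend.
                subprocess.call(["btrfs", "scrub", "cancel", mnt])
```

A matching `btrfs scrub resume` in the "post" branch could restart the scrub after wakeup, at the cost of it racing with whatever woke the machine up.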
On Sun, Mar 16, 2014 at 10:42:24PM -0700, Marc MERLIN wrote:
> On Thu, Mar 06, 2014 at 09:33:24PM +, Duncan wrote:
> > However, best snapshot management practice does progressive snapshot
> > thinning, so you never have more than a few hundred snapshots to manage
> >
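The progressive thinning Duncan describes — recent snapshots kept dense, older ones sparse — can be sketched as a keep/delete policy. The exact tiers below (a day of hourlies, a week of dailies, weeklies beyond that) are illustrative assumptions, not numbers from the thread:

```python
from datetime import datetime, timedelta

def thin_snapshots(snapshots, now):
    """Return the subset of snapshot datetimes to keep.

    Illustrative tiers: keep every snapshot less than a day old,
    one per calendar day up to a week, one per ISO week beyond that.
    """
    keep = set()
    seen_days = set()
    seen_weeks = set()
    for snap in sorted(snapshots, reverse=True):  # newest first
        age = now - snap
        if age <= timedelta(days=1):
            keep.add(snap)
        elif age <= timedelta(days=7):
            if snap.date() not in seen_days:
                seen_days.add(snap.date())
                keep.add(snap)
        else:
            week = snap.isocalendar()[:2]  # (ISO year, ISO week number)
            if week not in seen_weeks:
                seen_weeks.add(week)
                keep.add(snap)
    return keep

# A month of hourly snapshots collapses to a few dozen:
now = datetime(2014, 3, 17)
snaps = [now - timedelta(hours=h) for h in range(24 * 30)]
kept = thin_snapshots(snaps, now)
print(len(snaps), "->", len(kept))
```

Anything not in the returned set would be fed to `btrfs subvolume delete`.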
After 1.5 days of running, the machine I was doing btrfs receive on got
stuck with this (note, the traces are not all the same).
The machine is not dead, but any IO that goes through btrfs seems dead.
If you want Sysrq-W, let me know.
Is there anything I can try to unwedge or prevent this problem
On Wed, Jan 08, 2014 at 12:02:06AM -0800, Marc MERLIN wrote:
> On Tue, Jan 07, 2014 at 10:53:29AM +, Hugo Mills wrote:
> >You need to move /mnt/btrfs_pool2/tmp_read_only_new to a different
> > name as well. The send stream contains the name of the subvolume it
> > wan
On Fri, Mar 21, 2014 at 02:24:45PM -0400, Josef Bacik wrote:
> >Is there anything I can try to unwedge or prevent this problem next time I
> >try?
>
> Sysrq+w would be nice so I can see what everybody is doing. Thanks,
Sure thing. There you go
http://marc.merlins.org/tmp/sysreq-w-btrfs.txt
(too
On Fri, Mar 21, 2014 at 05:51:23PM -0700, Marc MERLIN wrote:
> On Fri, Mar 21, 2014 at 02:24:45PM -0400, Josef Bacik wrote:
> > >Is there anything I can try to unwedge or prevent this problem next time I
> > >try?
> >
> > Sysrq+w would be nice so I can see
On Sat, Mar 22, 2014 at 09:44:05PM +0200, Brendan Hide wrote:
> Hi, Marc
>
> Feel free to use ideas from my own script. Some aspects in my script are
> more mature and others are frankly pathetic. ;)
>
> There are also quite a lot of TODOs throughout my script that aren't
> likely to get the ur
After deleting a huge directory tree in my /home subvolume, syncing
snapshots now fails with:
ERROR: rmdir o1952777-157-0 failed. No such file or directory
Error line 156 with status 1
DIE: Code dump:
153 if [[ -n "$init" ]]; then
154 btrfs send "$src_newsnap" | $ssh btrfs receive "$
Please consider adding a blank line between quotes, it makes them just a bit
more readable :)
On Sat, Mar 22, 2014 at 11:02:24PM +0200, Brendan Hide wrote:
> >- it doesn't create writeable snapshots on the destination in case you want
> >to use the copy as a live filesystem
> One of the issues wit
On Fri, Mar 21, 2014 at 11:47:18PM -0700, Marc MERLIN wrote:
> On Fri, Mar 21, 2014 at 05:51:23PM -0700, Marc MERLIN wrote:
> > On Fri, Mar 21, 2014 at 02:24:45PM -0400, Josef Bacik wrote:
> > > >Is there anything I can try to unwedge or prevent this problem next time
legolas:/mnt/btrfs_pool2# btrfs balance .
ERROR: error during balancing '.' - No space left on device
There may be more info in syslog - try dmesg | tail
[ 8454.159635] BTRFS info (device dm-1): relocating block group 288329039872
flags 1
[ 8590.167294] BTRFS info (device dm-1): relocating block g
I'm still doing some testing so that I can write some howto.
I got that far after a rebalance (mmmh, that took 2 days with little
data, and unfortunately 5 deadlocks and reboots).
polgara:/mnt/btrfs_backupcopy# btrfs fi show
Label: backupcopy uuid: eed9b55c-1d5a-40bf-a032-1be6980648e1
Tot
Both
legolas:/mnt/btrfs_pool2# btrfs balance start -v -dusage=5 /mnt/btrfs_pool2
legolas:/mnt/btrfs_pool2# btrfs balance start -v -dusage=0 /mnt/btrfs_pool2
failed unfortunately.
On Sun, Mar 23, 2014 at 12:26:32PM +, Duncan wrote:
> When it rains, it pours. What you're missing is that this is
On Sun, Mar 23, 2014 at 04:18:43PM +, Hugo Mills wrote:
> On Sun, Mar 23, 2014 at 08:25:17AM -0700, Marc MERLIN wrote:
> > I'm still doing some testing so that I can write some howto.
> >
> > I got that far after a rebalance (mmmh, that took 2 days with little
>
On Sun, Mar 23, 2014 at 04:28:25PM +, Hugo Mills wrote:
>Before you do this, can you take a btrfs-image of your metadata,
> and add a report to bugzilla.kernel.org? You're not the only person
> who's had this problem recently, and I suspect there's something
> still lurking in there that ne
I found out that when a drive used to be part of a raid system that is now
mounted and running without it, btrfs apparently decides that the drive is
still part of the mounted raidset and in use.
As a result, I had to eventually dd 0's over it, btrfs device scan, and finally
I was able to use it again.
btr
On Sun, Mar 23, 2014 at 11:09:07AM -0700, Marc MERLIN wrote:
> I found out that a drive that used to be part of a raid system that is mounted
> and running without it, btrfs apparently decides that the drive is part of
> the mounted
> raidset and in use.
> As a result, I had to ev
On Sun, Mar 23, 2014 at 05:34:09PM +, Hugo Mills wrote:
>xaba on IRC has just pointed out that it looks like you're running
> this on a mounted filesystem -- it needs to be unmounted for
> btrfs-image to work reliably.
Sorry, I didn't realize that, although it makes sense. btrfs-image rea
On Wed, Mar 19, 2014 at 10:53:33AM -0600, Chris Murphy wrote:
>
> On Mar 19, 2014, at 9:40 AM, Marc MERLIN wrote:
> >
> > After adding a drive, I couldn't quite tell if it was striping over 11
> > drive2 or 10, but it felt that at least at times, it was striping
On Sun, Mar 23, 2014 at 12:10:17PM -0700, Marc MERLIN wrote:
> On Sun, Mar 23, 2014 at 05:34:09PM +, Hugo Mills wrote:
> >xaba on IRC has just pointed out that it looks like you're running
> > this on a mounted filesystem -- it needs to be unmounted for
> > bt
If I lose 2 drives on a raid5, -m raid1 should ensure I haven't lost my
metadata.
From there, would I indeed have small files that would be stored entirely on
some of the drives that didn't go missing, and therefore I could recover
some data with 2 missing drives?
Or is it kind of pointless/waste
Ok, thanks to the help I got from you, and my own experiments, I've
written this:
http://marc.merlins.org/perso/btrfs/post_2014-03-23_Btrfs-Raid5-Status.html
If someone reminds me how to edit the btrfs wiki, I'm happy to copy that
there, or give anyone permission to take part of all of what I wrot
On Sun, Mar 23, 2014 at 10:52:29PM +, Hugo Mills wrote:
> On Sun, Mar 23, 2014 at 03:44:35PM -0700, Marc MERLIN wrote:
> > If I lose 2 drives on a raid5, -m raid1 should ensure I haven't lost my
> > metadata.
> > From there, would I indeed have small files that w
On Mon, Mar 24, 2014 at 07:17:12PM +, Martin wrote:
> Thanks for the very good summary.
>
> So... In very brief summary, btrfs raid5 is very much a work in progress.
If you know how to use it, which I didn't until now, it's technically very
usable as is. The corner cases are in having a fai
On Mon, Mar 24, 2014 at 06:38:30PM +, Duncan wrote:
> Marc MERLIN posted on Sun, 23 Mar 2014 09:25:06 -0700 as excerpted:
>
> > On Sun, Mar 23, 2014 at 04:18:43PM +, Hugo Mills wrote:
> >> On Sun, Mar 23, 2014 at 08:25:17AM -0700, Marc MERLIN wrote:
> >>
On Mon, Mar 24, 2014 at 07:19:14PM +, Duncan wrote:
> Marc MERLIN posted on Sun, 23 Mar 2014 11:58:16 -0700 as excerpted:
>
> > On Sun, Mar 23, 2014 at 11:09:07AM -0700, Marc MERLIN wrote:
> >> I found out that a drive that used to be part of a raid system that is
>
On Tue, Mar 25, 2014 at 01:11:43AM +, Martin wrote:
> Yes, looking good, but for my usage I need the option to run ok with a
> failed drive. So, that's one to keep a development eye on for continued
> progress...
So it does run with a failed drive, it'll just fill the logs with write
errors,
I had a tree with some hundreds of thousands of files (less than 1 million)
on top of md raid5.
It took 18H to rm it in 3 tries:
gargamel:/mnt/dshelf2/backup/polgara# time rm -rf current.todel/
real    1087m26.491s
user    0m2.448s
sys     4m42.012s
gargamel:/mnt/dshelf2/backup/polgara# btrfs fi show /
On Tue, Mar 25, 2014 at 12:13:50PM +, Martin wrote:
> On 25/03/14 01:49, Marc MERLIN wrote:
> > I had a tree with some amount of thousand files (less than 1 million)
> > on top of md raid5.
> >
> > It took 18H to rm it in 3 tries:
I ran another test after typing t
On Fri, Mar 28, 2014 at 11:45:03PM +, Hugo Mills wrote:
> On Fri, Mar 28, 2014 at 04:38:09PM -0700, Lists wrote:
> > On 03/28/2014 02:42 PM, Avi Miller wrote:
> > >Have you considered Oracle Linux? We are continually backporting btrfs
> > >fixes and enhancements to our Unbreakable Enterprise K
I had a look at
http://bj0z.wordpress.com/2011/04/27/determining-snapshot-size-in-btrfs/#comment-35
but it's quite old and does not work anymore since userland became
incompatible with it.
Has anyone seen something newer or have a newer fixed version of this?
Thanks,
Marc
--
"A mouse is a device
I had someone asking me about this bug:
Btrfs file content missmatch incrementally sending subvolumes containing
systemd journal files
https://bugzilla.kernel.org/show_bug.cgi?id=66941
Specifically:
Just to note, I have also this issue with other files:
jkarlson/.config/chromium/Default/Archived
On Sat, Mar 22, 2014 at 02:04:56PM -0700, Marc MERLIN wrote:
> After deleting a huge directory tree in my /home subvolume, syncing
> snapshots now fails with:
>
> ERROR: rmdir o1952777-157-0 failed. No such file or directory
So, I'm ok again after I deleted my destination snapsh
On Thu, Mar 20, 2014 at 11:21:27PM -0400, sepero...@gmx.com wrote:
> Hello all. I submit bugs to different foss projects regularly, but I
> don't really have a bug report this time. I have a broken filesystem
> to report. And I have no idea how to reproduce it.
>
> I am including a link to the fil
On Sun, Mar 30, 2014 at 02:13:35PM +0100, Filipe David Manana wrote:
> On Sun, Mar 30, 2014 at 1:42 PM, Hugo Mills wrote:
> > On Sat, Mar 29, 2014 at 08:22:02PM -0700, Marc MERLIN wrote:
> >> On Sat, Mar 22, 2014 at 02:04:56PM -0700, Marc MERLIN wrote:
> >> > After
On Sun, Mar 30, 2014 at 04:14:59PM +0100, Filipe David Manana wrote:
> > Cool, thanks for fixing those.
> > Is that meant to make it in 3.14 final, or is it going to be 3.15?
>
> My guess is 3.15.
Understood. I'll see if I can find your btrfs patches and apply them to
3.14.
Thanks for letting me
On Sat, Mar 29, 2014 at 05:21:23PM -0700, Marc MERLIN wrote:
> I had a look at
> http://bj0z.wordpress.com/2011/04/27/determining-snapshot-size-in-btrfs/#comment-35
> but it's quite old and does not work anymore since userland became
> incompatible with it.
>
> Has anyone
On Wed, Apr 02, 2014 at 09:24:10AM -0400, Chris Mason wrote:
> On 04/02/2014 04:29 AM, Qu Wenruo wrote:
> >Convert the old btrfs man pages to new asciidoc and split the huge
> >btrfs man page into subcommand man page.
> >
> >The asciidoc style and Makefile things are mostly simplified from git
> >D
On Wed, Apr 02, 2014 at 04:29:35PM +0800, Qu Wenruo wrote:
> Convert man page for btrfs-zero-log
>
> Signed-off-by: Qu Wenruo
> ---
> Documentation/Makefile | 2 +-
> Documentation/btrfs-zero-log.txt | 39 +++
> 2 files changed, 40 insertions(+), 1
On Wed, Apr 02, 2014 at 04:29:25PM +0800, Qu Wenruo wrote:
> +If the source device is not available anymore, or if the -r option is set,
> +the data is built only using the RAID redundancy mechanisms.
> +After completion of the operation, the source device is removed from the
> +filesystem.
Would
static void btrfs_release_extent_buffer_page(struct extent_buffer *eb,
unsigned long start_idx)
{
unsigned long index;
unsigned long num_pages;
struct page *page;
int mapped = !test_bit(EXTENT_BUFFER_DUMMY, &eb->bflags)
On Sat, Apr 05, 2014 at 08:37:21AM -0700, Marc MERLIN wrote:
> static void btrfs_release_extent_buffer_page(struct extent_buffer *eb,
> unsigned long start_idx)
> {
> unsigned long index;
> unsigned long num_pages;
>
On Sat, Apr 05, 2014 at 04:00:27PM -0600, cwillu wrote:
> >> +'btrfs-zero-log' will remove the log tree if log tree is corrupt, which
> >> will
> >> +allow you to mount the filesystem again.
> >> +
> >> +The common case where this happens has been fixed a long time ago,
> >> +so it is unlikely tha
On Sat, Apr 05, 2014 at 03:02:03PM -0700, Marc MERLIN wrote:
> On Sat, Apr 05, 2014 at 04:00:27PM -0600, cwillu wrote:
> > >> +'btrfs-zero-log' will remove the log tree if log tree is corrupt, which
> > >> will
> > >> +allow you to mount the f
On Sat, Apr 05, 2014 at 11:03:46PM +0100, Hugo Mills wrote:
>As far as I recall, -orecovery is read-write. -oro,recovery is
> read-only.
Yes, we both corrected my Email at the same time :)
Actually it's better/worse than that. From my notes at
http://marc.merlins.org/perso/btrfs/2014-03.html#
On Fri, Apr 04, 2014 at 04:20:41PM +0100, Filipe David Borba Manana wrote:
> This new send flag makes send calculate first the amount of new file data (in
> bytes)
> the send root has relatively to the parent root, or for the case of a
> non-incremental
> send, the total amount of file data we wi
On Sun, Apr 06, 2014 at 05:57:38PM +0100, Filipe David Manana wrote:
> > I looked around and found nothing that looked similar enough.
> > Obviously it's an assert, so I can run without it, but my source being
> > very different from yours just made me want to check that this was most
> > likely ok
I was debugging why my backup failed to run, and eventually found it was
stuck on sync:
14080 18:18 btrfs_tree_read_lock sync
This was hung for hours on this lock.
Strangely, it looks like taking my sysrq-w hung the machine pretty hard for
close to 30sec, but this seems to have un
On Mon, Apr 07, 2014 at 12:10:52PM -0400, Josef Bacik wrote:
> On 04/07/2014 12:05 PM, Marc MERLIN wrote:
> >I was debugging why my backup failed to run, and eventually found it was
> >stuck on sync:
> >14080 18:18 btrfs_tree_read_lock sync
> >
> &g
On Mon, Apr 07, 2014 at 03:32:13PM -0400, Chris Mason wrote:
> >You're recommending that I try btrfs-next on a 3.15 pre kernel, correct?
> >If so would it be likely to fix my filesystem and let me go back to a
> >stable 3.14? (I'm a bit wary about running some unstable 3.15 on it :).
>
> Right no
Soon after upgrading my server from 3.14rc5 to 3.14.0, my server went
into a crash loop.
Unfortunately because I used btrfs on my root filesystem, the problem
didn't get logged because btrfs failing on a separate array prevented
another thread from using btrfs to write on a healthy /var filesystem.
S
esystem is almost 2TB, so the image will again be big)
Marc
On Tue, Apr 08, 2014 at 08:36:09AM -0700, Marc MERLIN wrote:
> Soon after upgrading my server from 3.14rc5 to 3.14.0, my server went
> into a crash loop.
>
> Unfortunately because I used btrfs on my root filesystem, the probl
On Tue, Apr 08, 2014 at 07:49:14PM -0400, Chris Mason wrote:
>
>
> On 04/08/2014 06:09 PM, Marc MERLIN wrote:
> >I forgot to add that while I'm not sure if anyone ended up looking at the
> >last image I made regarding
> >https://bugzilla.kernel.org/show_bug.cgi?
On Tue, Apr 08, 2014 at 09:31:25PM -0700, Marc MERLIN wrote:
> On Tue, Apr 08, 2014 at 07:49:14PM -0400, Chris Mason wrote:
> >
> >
> > On 04/08/2014 06:09 PM, Marc MERLIN wrote:
> > >I forgot to add that while I'm not sure if anyone ended up looking at t
On Tue, Apr 08, 2014 at 10:31:39PM -0700, Marc MERLIN wrote:
> On Tue, Apr 08, 2014 at 09:31:25PM -0700, Marc MERLIN wrote:
> > On Tue, Apr 08, 2014 at 07:49:14PM -0400, Chris Mason wrote:
> > >
> > >
> > > On 04/08/2014 06:09 PM, Marc MERLIN wrote:
> >
On Wed, Apr 09, 2014 at 11:46:13AM -0400, Chris Mason wrote:
> Downloading the image now. I'd just run a readonly btrfsck /dev/xxx
http://marc.merlins.org/tmp/btrfs-raid0-image-fsck.txt (6MB)
I admit to not knowing how to read that output; I've only ever seen
thousands of lines of output from it
On Mon, Apr 07, 2014 at 01:00:02PM -0700, Marc MERLIN wrote:
> On Mon, Apr 07, 2014 at 03:32:13PM -0400, Chris Mason wrote:
> > >You're recommending that I try btrfs-next on a 3.15 pre kernel, correct?
> > >If so would it be likely to fix my filesystem and let me go
If you have other suggestions/comments, please share :)
Thanks,
Marc
On Tue, Mar 25, 2014 at 09:41:42AM -0700, Marc MERLIN wrote:
> On Tue, Mar 25, 2014 at 12:13:50PM +, Martin wrote:
> > On 25/03/14 01:49, Marc MERLIN wrote:
> > > I had a tree with some amount of thousand files
On Tue, Apr 08, 2014 at 09:42:03AM +0800, Qu Wenruo wrote:
> >I had debian add this to the initramfs initrd by default so that someone
> >can recover their root filesystem with this command if it won't mount.
> >
> >What got fixed is the kernel used to oops and crash, and now it gives a
> >nice "ca
On Fri, Apr 11, 2014 at 10:43:52AM +0800, Qu Wenruo wrote:
> Add device management related paragraph to better explain btrfs device
> management.
Thank you for these updates.
> Cc: Marc MERLIN
> Signed-off-by: Qu Wenruo
(comments below)
> ---
> Documentation/btrf
On Fri, Apr 11, 2014 at 10:43:53AM +0800, Qu Wenruo wrote:
> Add more explain on btrfs-zero-log about when to use it.
>
> Cc: Marc MERLIN
> Signed-off-by: Qu Wenruo
Reviewed-by: Marc MERLIN
Looks good, thank you.
Marc
> ---
> Documentation/btrfs-zero-log.txt | 18 +
On Sat, Apr 12, 2014 at 12:15:14AM +1000, Chris Samuel wrote:
> On Tue, 25 Mar 2014 09:41:42 AM Marc MERLIN wrote:
>
> > 4 hours to stat 700K files. That's bad...
> > Even 11mn to restat them just to count them looks bad too.
>
> One way to get an idea of where t
On Fri, Apr 11, 2014 at 07:36:28PM +0200, David Sterba wrote:
> On Thu, Apr 10, 2014 at 11:10:48PM -0700, Marc MERLIN wrote:
> > > --- a/Documentation/btrfs-replace.txt
> > > +++ b/Documentation/btrfs-replace.txt
> > > @@ -13,6 +13,9 @@ DESCRIPTION
> > > ---
On Mon, Apr 07, 2014 at 07:17:31PM +0200, George Eleftheriou wrote:
> Thank you too for the enlightenment. Not just now but so many times in
> the past (just the compilation of your list interventions is a wiki in
> its own right).
>
> Me too, I've been meaning to create a wiki account for quite s
On Fri, Apr 04, 2014 at 04:09:06PM +0100, Hugo Mills wrote:
> > - Generally speaking, does LZO compression improve or degrade performance ?
> > I'm not able to figure it out clearly.
>
>Yes, it improves or degrades performance. :)
>
>It'll depend entirely on what you're doing with it. If
(I have a btrfsck running, looks like it may take over an hour, so I'll post
that after it's finished)
Other suggestions welcome.
Thanks,
Marc
On Mon, Mar 24, 2014 at 06:49:56PM -0700, Marc MERLIN wrote:
> I had a tree with some amount of thousand files (less than 1 million)
> on top
On Sat, Apr 12, 2014 at 01:25:42PM -0700, Marc MERLIN wrote:
> (I have a btrfsck running, looks like it may take over an hour, so I'll post
> that after it's finished)
I ran out of patience after 18 hours of waiting since it seemed to have not
progressed after 12 hours (it was s
On Sun, Apr 13, 2014 at 07:57:00AM -0700, Marc MERLIN wrote:
> On Sat, Apr 12, 2014 at 01:25:42PM -0700, Marc MERLIN wrote:
> > (I have a btrfsck running, looks like it may take over an hour, so I'll post
> > that after it's finished)
>
> I ran out of patience aft
On Sun, Apr 13, 2014 at 04:02:36AM +, Duncan wrote:
> What happens if you simply mount it ro, without the recovery option? Is
> it still normal-speed or is that slow as a rw mount?
I just tried in ro without recovery, and I could copy data out at 52MB/s
for a big file, so that's quite good.
On Mon, Apr 14, 2014 at 09:16:44AM +0800, Qu Wenruo wrote:
> >Generally, would you agree to putting more links to the wiki in man pages
> >since man pages are not forever but sure take a long time to update on the
> >installed based and the wiki can be up to date for everyone right away?
> >(I'm no
On Mon, Apr 14, 2014 at 10:28:36AM +, Duncan wrote:
> But you might well be the first report where the devs have good access to
> enough detail to actually trace down the problem.
>
> > I'll wait until tomorrow night to see if the devs want anything else out
> > of it, but otherwise I'll wipe
Can you help me design this right?
Long story short, I'm wondering if I can use btrfs send to copy sub
subvolumes (by snapshotting a parent subvolume, and hopefully getting
all the children underneath). My reading so far, says no.
So, ideally I would have:
/mnt/btrfs_pool/backup: backup is a sub
I was looking at using qgroups for my backup server, which will be
filled with millions of files in subvolumes with snapshots.
I read a warning that quota groups had performance issues, at least in
the past.
Is it still true?
If there is a performance issue, is it as simple as just turning off
q
On Sun, Apr 20, 2014 at 11:39:22PM -0600, Chris Murphy wrote:
>
> On Apr 20, 2014, at 1:46 PM, Marc MERLIN wrote:
>
> > Can you help me design this right?
> >
> > Long story short, I'm wondering if I can use btrfs send to copy sub
> > subvolumes (b