to delete subvolume 176188 during send
I don't see any zombie btrfs send processes lying around. Is there
any way to delete this volume? Do I just need a reboot?
-Matt
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to major
ed on Fri, 01 Dec 2017 18:06:23 +0100 as
excerpted:
On 12/01/2017 05:31 PM, Matt McKinnon wrote:
Sorry, I missed your in-line reply:
2) How big is this filesystem? What does your `btrfs fi df
/mountpoint` say?
# btrfs fi df /export/
Data, single: total=30.45TiB, used=30.25TiB
System, DUP:
Right. The file system is 48T, with 17T available, so we're not quite
pushing it yet.
So far so good on the space_cache=v2 mount. I'm surprised this isn't on
the gotcha page in the wiki; it may end up making a world of difference
to the users here.
Thanks again,
Matt
On
Thanks, I'll give space_cache=v2 a shot.
My mount options are: rw,relatime,space_cache,autodefrag,subvolid=5,subvol=/
Sorry, I missed your in-line reply:
1) The one right above, btrfs_write_out_cache, is the write-out of the
free space cache v1. Do you see this for multiple seconds going on, and
does it match the time when it's writing X MB/s to disk?
It seems to only last until the next watch update.
[] i
These seem to come up most often:
[] transaction_kthread+0x133/0x1c0 [btrfs]
[] kthread+0x109/0x140
[] ret_from_fork+0x25/0x30
Thanks for this. Here's what I get:
[] transaction_kthread+0x133/0x1c0 [btrfs]
[] kthread+0x109/0x140
[] ret_from_fork+0x25/0x30
...
[] io_schedule+0x16/0x40
[] get_request+0x23e/0x720
[] blk_queue_bio+0xc1/0x3a0
[] generic_make_request+0xf8/0x2a0
[] submit_bio+0x75/0x150
[] btrfs_map_bio+0xe
.12.8-custom
# btrfs --version
btrfs-progs v4.13.3
Yes, I know I'm a bit behind there...
-Matt
Hi All,
Been having issues on one machine and I was wondering if I could get
some help tracking the issue down.
# uname -a
Linux riperton 4.13.5-custom #1 SMP Sat Oct 7 18:28:16 EDT 2017 x86_64
x86_64 x86_64 GNU/Linux
# btrfs --version
btrfs-progs v4.13.3
# btrfs fi show
Label: none uuid:
g off-site. So far the
btrfs-transaction and memory spikes have not returned.
-Matt
On 05/09/2017 03:14 PM, Liu Bo wrote:
On Fri, May 05, 2017 at 09:24:32AM -0400, Matt McKinnon wrote:
Too little information. Is IO happening at the same time? Is
compression on? Deduplicated? Lots of subvol
om 30G to under 2G.
-Matt
, but after running a full defrag of the file
system, and also enabling the 'autodefrag' mount option, the problem
still persists.
What's the best way to figure out what btrfs is chugging away at here?
Kernel: 4.10.13-custom
btrfs-progs: v4.10.2
-Matt
over log tree)
[ 709.355570] BTRFS error (device sda1): cleaner transaction attach
returned -30
[ 709.548919] BTRFS error (device sda1): open_ctree failed
-Matt
expected csum 0
Jan 27 19:42:47 my_machine kernel: [ 335.033249] BTRFS warning (device
sda1): csum failed ino 28472371 off 8077312 csum 4031878292 expected csum 0
Can these be ignored?
On 01/25/2017 04:06 PM, Liu Bo wrote:
On Mon, Jan 23, 2017 at 03:03:55PM -0500, Matt McKinnon wrote:
Wondering what to do about this error which says 'reboot needed'. It has
happened three times in the past week:
Jan 23 14:16:17 my_machine kernel: [ 2568.595648] BTRFS error (device
sda1): err add delayed dir index item(index: 23810) into the deletion
tree of the delayed node(root id: 257, ino
s not apparent in the
previous kernel (4.7).
The poster mentioned some suggestions from Duncan here:
https://mail-archive.com/linux-btrfs@vger.kernel.org/msg60083.html
But those are not visible in the thread. What suggestions were given to
help alleviate this pain?
-Matt
5 PM, Chris Murphy wrote:
On Tue, Aug 9, 2016 at 6:29 PM, Matt McKinnon wrote:
Spoke too soon. Do I need to continue to run with that mount option in
place?
It shouldn't be necessary. Something's still wrong for some reason,
even with DUP metadata being CoW'd so someone else is goin
free space
cache for block group 23113395863552, rebuilding it now
then a crash dump.
Remounted with -o clear_cache,nospace_cache and the balance completed.
Running a larger balance now.
Will umount, and remount with default options to see if that works.
-Matt
On 08/10/2016 03:09 AM, g6094
64-linux-gnu/libc.so.6(__libc_start_main+0xf5)[0x7faad34cdf45]
btrfs[0x40a0f9]
and we crashed out of the check there.
-Matt
On 08/09/2016 08:06 PM, Chris Murphy wrote:
On Tue, Aug 9, 2016 at 6:01 PM, Chris Murphy wrote:
On Tue, Aug 9, 2016 at 5:15 PM, Matt McKinnon wrote:
Hello,
Our server
, Matt McKinnon wrote:
Hello,
Our server recently crashed and was rebooted. When it returned our BTRFS
volume is mounting read-only:
What happens when you try mounting with -o usebackuproot ?
If that fails, what output do you get for 'btrfs check' (without
--repair)? If you onl
-o usebackuproot worked well.
After the file system settled, performing a sync and a clean umount, a
normal mount works now as well.
Anything I should be doing going forward?
Thanks,
Matt
On 08/09/2016 08:01 PM, Chris Murphy wrote:
On Tue, Aug 9, 2016 at 5:15 PM, Matt McKinnon wrote
Hello,
Our server recently crashed and was rebooted. When it returned our
BTRFS volume is mounting read-only:
[ 142.395093] BTRFS: error (device sda1) in
btrfs_run_delayed_refs:2963: errno=-17 Object already exists
[ 142.404418] BTRFS info (device sda1): forced readonly
I tried upgrading
> On 15 Jul 2016, at 14:10, Austin S. Hemmelgarn wrote:
>
> On 2016-07-15 05:51, Matt wrote:
>> Hello
>>
>> I glued together 6 disks in linear LVM fashion (no RAID) to obtain one large
>> file system (see below). One of the 6 disks failed. What is the
lti-disk btrfs filesystem.
Would some variant of "btrfs balance" do something helpful?
Any help is appreciated!
Regards,
Matt
# btrfs fi show
Label: none uuid: d82fff2c-0232-47dd-a257-04c67141fc83
Total devices 6 FS bytes used 16.83TiB
devid 1 size 3.64TiB used 3.47
On 2015-08-25 09:44, Miguel Negrão wrote:
> Hi list,
>
> This weekend had my first btrfs horror story.
>
> system: 3.13.0-49-lowlatency, btrfs-progs v4.1.2
>
> A disclaimer: I know 3.13 is very out of date, but the requirement of
> keeping the kernel up to date clashes with my requirement of keeping
allow a length of 1,000,000 bytes to be passed as it is equal to the
file lengths and would be internally extended to the end of the block
(1,015,808), allowing one set of extents to be shared completely between
the full length of both files.
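The proposed relaxation can be sketched as follows. This is a Python model for illustration, not the kernel code; the 4096-byte block size is an assumption, and the specific byte counts in the mail come from a particular test file:

```python
BLOCK_SIZE = 4096  # assumed block size for illustration


def round_up_to_block(length, block=BLOCK_SIZE):
    """Round a byte length up to the end of its containing block."""
    return -(-length // block) * block


def dedupe_length_allowed(length, file_size, block=BLOCK_SIZE):
    """Model of the relaxed check: accept block-aligned lengths as
    before, and additionally accept an unaligned length when it reaches
    end-of-file, since the range can then be extended internally to the
    end of the last block."""
    if length % block == 0:
        return True
    return length == file_size
```

Under this model, passing the full 1,000,000-byte length for two files of exactly that size is accepted, because the length equals the file size; an unaligned length that falls short of end-of-file is still rejected.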
Signed-off-by: Matt Robinson
---
fs/btrfs/ioctl.c | 21
it is a serious issue.
Could someone look at making the clean-up process more sensitive to when the
system is idle? MD RAID is very good at this, and it should be possible to set
this up.
Best Regards,
Matt Grant
Hi All,
As David hasn't got back to me I'm guessing that he is too busy with
other things at present. If anyone else is able to spare the time to
review my patch and give me feedback that would be very much
appreciated.
Many Thanks,
Matt
On 3 March 2015 at 00:27, Zygo Blaxell wr
, hint :-)
URL for github is:
https://github.com/grantma/pybtrfs.git
There is also a shell script there that can be called from cron.
Please get back to me if you have any questions.
Standard no-warranty disclaimers apply to the code. It's GPLv3
licensed.
Best Regards,
Matt Grant
Hi David,
Have you had a chance to look at this? Am very happy to answer
further questions, adjust my implementation, provide a different kind
of test case, etc.
Many Thanks,
Matt
On 28 January 2015 at 19:46, Matt Robinson wrote:
> On 28 January 2015 at 12:55, David Sterba wrote:
>&g
On Thu, Feb 26, 2015 at 11:04 PM, Chris Mason wrote:
>
>
> On Thu, Feb 26, 2015 at 4:49 PM, Matt wrote:
>>
>> Hi linux-btrfs list,
>>
>> Hi Chris, Hi Josef,
>>
>>
>> it seemingly happened in the past and now it seems to happen again:
>
aving fetched the latest state of Chris' repo
Am I missing something?
It would be really nice to have a repo where all of the latest Btrfs
patches are stored and accessible - and a clear picture of why this
weirdness happens.
Sorry if this was already asked in the past, since I'm not aware
On 28 January 2015 at 12:55, David Sterba wrote:
> On Mon, Jan 26, 2015 at 06:05:51PM +0000, Matt Robinson wrote:
>> It is not currently possible to deduplicate the last block of files
>> whose size is not a multiple of the block size, as the btrfs_extent_same
>> ioctl retu
.
Signed-off-by: Matt Robinson
---
fs/btrfs/ioctl.c | 21 ++---
1 file changed, 14 insertions(+), 7 deletions(-)
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index d49fe8a..a407d8a 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -2871,14 +2871,16 @@ static int
ransid verify failed on 20809493159936 wanted
4486137218058286914 found 390978
I have been sending incremental snapshot dumps over to an identical file
server as backups. Everything checks out OK there. Do I try to run
check with --repair first, and fall back to my backup if that fails?
-Matt
zo compression on an Intel SSD.
Last time this happened I had the partition formatted with zlib/gzip
compression. This time it's with lzo and also happening.
The problem is that rsync can't be killed off, so the load will
increase over time, the only option being to reboot via Magic SysRq K
RAW
whether that makes a difference.
It would also be interesting to see the output of:
cryptsetup luksDump
in particular the fields Version, Cipher name, Cipher mode, and Hash spec.
Interesting find indeed! Thanks for sharing the finding.
I'm currently using Btrfs on an encrypted
reduction with 2 threads on 6 drives
- 10% speed reduction with 2 threads on 3 drives
- 5% speed reduction with 2 threads on 1 drive
I only have 12 slots on my HBA card, but I wonder if 24 drives would
reduce the speed to 25% with 2 threads?
Matt
make btrfs fs...
___
12 drives
e the same results with multiple drives in
your raid...
Thanks,
Matt
On Thu, Apr 25, 2013 at 2:10 PM, Josef Bacik wrote:
> On Thu, Apr 25, 2013 at 03:01:18PM -0600, Matt Pursley wrote:
>> Ok, awesome, let me know how it goes.. I don't have the raid
>> formatted to btr
Ok, awesome, let me know how it goes.. I don't have the raid
formatted to btrfs right now, but I could probably do that in about 30
minutes or so.
Thanks Josef,
Matt
On Thu, Apr 25, 2013 at 1:39 PM, Josef Bacik wrote:
> On Thu, Apr 25, 2013 at 01:52:44PM -0600, Matt Pursley wrote
Hey Josef,
Were you able to look into this any further?
It's still pretty reproducible on my machine...
Thanks,
Matt
On Thu, Apr 18, 2013 at 2:58 PM, Josef Bacik wrote:
> This is strange, and I can't see any reason why this would happen. I'll try
> and
> repr
is issue here...
https://bugzilla.kernel.org/show_bug.cgi?id=56771
Thanks,
Matt
___ mdraid6 + ext4 ___
kura1 / # mount | grep -i /var/data
/dev/md0 on /var/data type ext4 (rw)
kura1 / # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
[linear] [multipath]
On Tue, Apr 16, 2013 at 11:55 PM, Sander wrote:
> Matt Pursley wrote (ao):
>> I have an LSI HBA card (LSI SAS 9207-8i) with 12 7200rpm SAS drives
>> attached. When it's formatted with mdraid6+ext4 I get about 1200MB/s
>> for multiple streaming random reads with iozone.
s
seem to change the behaviour.
Anyone know any reasons why I would see the speed drop when going from
one to more than one stream at a time with btrfs raid6? We would like
to use btrfs (mostly for snapshots), but we do need to get the full
1200MB/s streaming speeds too.
Thanks,
Matt
d a chance to test it
yet with the new btrfs-progs - haven't suspended meanwhile)
Kind Regards
Matt
On Wed, Feb 16, 2011 at 1:27 AM, Dan Magenheimer
wrote:
>> -Original Message-
>> From: Matt [mailto:jackdac...@gmail.com]
>> Sent: Tuesday, February 15, 2011 5:12 PM
>> To: Minchan Kim
>> Cc: Dan Magenheimer; gre...@suse.de; Chris Mason; linux-
&g
On Mon, Feb 14, 2011 at 4:35 AM, Minchan Kim wrote:
> On Mon, Feb 14, 2011 at 10:29 AM, Matt wrote:
>> On Mon, Feb 14, 2011 at 1:24 AM, Matt wrote:
>>> On Mon, Feb 14, 2011 at 12:08 AM, Matt wrote:
>>>> On Wed, Feb 9, 2011 at 1:03 AM, Dan Magenheimer
>>&
On Mon, Feb 14, 2011 at 8:59 PM, Matt wrote:
> On Mon, Feb 14, 2011 at 1:29 AM, Matt wrote:
>> On Mon, Feb 14, 2011 at 1:24 AM, Matt wrote:
>>> On Mon, Feb 14, 2011 at 12:08 AM, Matt wrote:
>>>> On Wed, Feb 9, 2011 at 1:03 AM, Dan Magenheimer
>>>> wro
rnel.org/majordomo-info.html
>
Hi Andrew,
you could try the following patch to speed up dm-crypt:
https://patchwork.kernel.org/patch/365542/
I'm using it on top of a highly-patched 2.6.37 kernel;
not sure if exactly that version was included in 2.6.38
there are some additional handles
heavy writes as well
Jan 5 16:56:46 linuscs101 kernel: [ 3666.496742] [ cut here
]
Jan 5 16:56:46 linuscs101 kernel: [ 3666.496754] WARNING: at
fs/btrfs/inode.c:2143 btrfs_orphan_commit_root+0xb0/0xc0()
Jan 5 16:56:46 linuscs101 kernel: [ 3666.496756] Hardware name
fs
filesystems and haven't seen any corruptions since then
(ext4 got "fixed" since 2.6.37-rc6, xfs showed no problems from the start)
http://git.eu.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=1449032be17abb69116dbc393f67ceb8bd034f92
(is the actual temporary fix fo
On Wed, Dec 15, 2010 at 8:25 PM, Matt wrote:
> On Wed, Dec 15, 2010 at 8:16 PM, Andi Kleen wrote:
>>> I have a question though: the deactivation of multiple page-io
>>> submission support most likely only would affect bigger systems or
>>> also desktop systems (like
orkaround.
> The problem with the other path still really needs to be tracked down.
>
> -Andi
>
> --
> a...@linux.intel.com -- Speaking for myself only.
>
ok,
thanks for the clarification
Regards
Matt
ation of multiple page-io
submission support most likely only would affect bigger systems or
also desktop systems (like mine) ?
Regards
Matt
df917459422cb2aecac440febc8879d410
3) bd2d0210cf22f2bd0cef72eb97cf94fc7d31d8cc
1 -> 3 (earlier -> later)
Regards
Matt
output of mount of the system-partition of the
system I was running the kernel on - where the [more observable]
corruption was observed (checkout
bd2d0210cf22f2bd0cef72eb97cf94fc7d31d8cc)
-> this output got generated while I mounted it from my working (no
corruption observed) system with 2.6.36 kernel - I don't know if it&
On Sat, Dec 4, 2010 at 8:38 PM, Mike Snitzer wrote:
> On Sat, Dec 04 2010 at 2:18pm -0500,
> Matt wrote:
>
>> On Wed, Dec 1, 2010 at 10:23 PM, Mike Snitzer wrote:
>> > Matt and Jon,
>> >
>> > If you'd be up to it: could you try testing your dm-cry
on top of that.
>>
>> With unpatched dmcrypt (IOW with Linus' git)? Then it must be ext4 or
>> dm-core problem because there were no patches for dm-crypt...
>
> Matt and Jon,
>
> If you'd be up to it: could you try testing your dm-crypt+ext4
> corruption rep
On Wed, Dec 1, 2010 at 5:52 PM, Mike Snitzer wrote:
> On Wed, Dec 01 2010 at 11:05am -0500,
> Matt wrote:
>
>> On Mon, Nov 15, 2010 at 12:24 AM, Matt wrote:
>> > On Sun, Nov 14, 2010 at 10:54 PM, Milan Broz wrote:
>> >> On 11/14/2010 10:49 PM, Matt wrote:
&
On Mon, Nov 15, 2010 at 12:24 AM, Matt wrote:
> On Sun, Nov 14, 2010 at 10:54 PM, Milan Broz wrote:
>> On 11/14/2010 10:49 PM, Matt wrote:
>>> only with the dm-crypt scaling patch I could observe the data-corruption
>>
>> even with v5 I sent on Friday?
>>
&g
On Sun, Nov 14, 2010 at 10:54 PM, Milan Broz wrote:
> On 11/14/2010 10:49 PM, Matt wrote:
>> only with the dm-crypt scaling patch I could observe the data-corruption
>
> even with v5 I sent on Friday?
>
> Are you sure that it is not related to some fs problem in 2.6.37-rc
it seemingly caused corruptions right from the start (the mentioned
corruption of /etc/env.d/02opengl being the most obvious candidate, and
probably even more), with those corruptions being anticipated over longer
uptime and heavy use-patterns (such as re-compiling the whole system).
I don't k
From: Matt Lupfer
Fixes innocuous style issues identified by the checkpatch script.
Signed-off-by: Matt Lupfer
Reviewed-by: Ben Chociej
Reviewed-by: Conor Scott
Reviewed-by: Steve French
---
fs/btrfs/async-thread.c |2 +-
fs/btrfs/disk-io.c |4 ++--
fs/btrfs/export.c
bt the command will have rsync's update or delete abilities.
But, maybe it could.
Questionable
- May be faster than dd/resize, or it may be just as slow as rsync is
with hard links. And I am talking about dozens to thousands of
snapshots, and millions to billions of files.
Matt
ans on the matter?
Thanks,
Matt