On Mon, Apr 11, 2016 at 3:51 PM, lenovomi wrote:
> Hi,
>
> I didn't try mount -o ro; when I tried to mount it via eSATA I got a
> kernel panic immediately. Then I connected the enclosure with the drives via
> USB and tried to mount it:
OK so try '-o ro,recovery' and report back what you get.
>
> https:
Mark Fasheh wrote on 2016/04/11 11:09 -0700:
On Mon, Apr 11, 2016 at 09:05:47AM +0800, Qu Wenruo wrote:
Mark Fasheh wrote on 2016/04/08 12:18 -0700:
On Fri, Apr 08, 2016 at 03:10:35PM +0200, Holger Hoffstätte wrote:
[cc: Mark and Qu]
On 04/08/16 13:51, Holger Hoffstätte wrote:
On 04/08/1
On 2016/04/12 3:04, David Sterba wrote:
> The size of root item is more than 400 bytes, which is quite a lot of
> stack space. As we do IO from inside the subvolume ioctls, we should
> keep the stack usage low in case the filesystem is on top of other
> layers (NFS, device mapper, iscsi, etc).
>
>
Hi,
I didn't try mount -o ro; when I tried to mount it via eSATA I got a
kernel panic immediately. Then I connected the enclosure with the drives via
USB and tried to mount it:
https://bpaste.net/show/641ab9172539
plugged in via USB -> mounted one of the drives at random: mount /dev/sda /mnt/brtfs
I was told on
Using the offwakecputime bpf script I noticed most of our time was spent waiting
on the delayed ref throttling. This is what is supposed to happen, but
sometimes the transaction can commit and then we're waiting for throttling that
doesn't matter anymore. So change this stuff to be a little smart
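A rough sketch of that idea, with invented names rather than the actual btrfs code: keep waiting on delayed-ref throttling only while the transaction that produced those refs is still open, and bail out once it has committed, since the commit has already run the refs.

#include <stdbool.h>

/* Hypothetical types; the real code works with btrfs_transaction etc. */
struct transaction {
	bool committed;			/* set when the transaction commits */
};

struct throttle_state {
	struct transaction *trans;
	unsigned long pending_refs;	/* delayed refs still queued */
	unsigned long threshold;	/* backlog above which we throttle */
};

static bool should_keep_throttling(const struct throttle_state *ts)
{
	/* The commit already flushed the delayed refs; waiting gains nothing. */
	if (ts->trans->committed)
		return false;

	/* Otherwise keep throttling while the backlog stays too large. */
	return ts->pending_refs > ts->threshold;
}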
On Mon, Apr 11, 2016 at 2:08 PM, lenovomi wrote:
> Hi,
>
>
> I was running a CuBox with kernel 4.4.0 and btrfs raid1 ...
>
> Now it has somehow failed and I'm getting a kernel panic during boot...
>
> https://bpaste.net/show/0455daa876de
>
> I tried to connect the eSATA box with 3.x
>
> https://bpast
Hi,
I was running a CuBox with kernel 4.4.0 and btrfs raid1 ...
Now it has somehow failed and I'm getting a kernel panic during boot...
https://bpaste.net/show/0455daa876de
I tried to connect the eSATA box with 3.x
https://bpaste.net/show/98732bc6ce49
Any idea? Does it mean that whole volume i
On Mon, Apr 11, 2016 at 09:05:47AM +0800, Qu Wenruo wrote:
>
>
> Mark Fasheh wrote on 2016/04/08 12:18 -0700:
> >On Fri, Apr 08, 2016 at 03:10:35PM +0200, Holger Hoffstätte wrote:
> >>[cc: Mark and Qu]
> >>
> >>On 04/08/16 13:51, Holger Hoffstätte wrote:
> >>>On 04/08/16 13:14, Filipe Manana wrot
The size of root item is more than 400 bytes, which is quite a lot of
stack space. As we do IO from inside the subvolume ioctls, we should
keep the stack usage low in case the filesystem is on top of other
layers (NFS, device mapper, iscsi, etc).
Signed-off-by: David Sterba
---
fs/btrfs/ioctl.c
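As a sketch of the pattern the patch describes (illustrative only, not the actual hunk in ioctl.c; it assumes btrfs's internal headers for struct btrfs_root_item are in scope), the 400+ byte root item moves from the stack onto the heap:

#include <linux/slab.h>
#include <linux/errno.h>

/* Illustrative helper name; the real change is inside create_subvol(). */
static int example_create_subvol(struct btrfs_root_item **out)
{
	struct btrfs_root_item *root_item;

	/* 400+ bytes that used to live on the stack, now heap-allocated */
	root_item = kzalloc(sizeof(*root_item), GFP_KERNEL);
	if (!root_item)
		return -ENOMEM;

	/* ... fill in and use *root_item exactly as before ... */

	*out = root_item;	/* caller releases it with kfree() */
	return 0;
}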
Hi,
Using the gcc option -fstack-usage I measured the stack usage and tried to get
rid of the worst offenders. The best improvement was in create_subvol, where we
stored a 400B+ structure on the stack, but otherwise it's not always clear why
the stack usage is that high. Most functions consume less tha
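For reference, a minimal standalone demonstration of how -fstack-usage reports large stack frames (not btrfs code; the file name and byte count below are illustrative):

/*
 * Compile with:  gcc -fstack-usage -c stack_demo.c
 * gcc then writes stack_demo.su with one line per function, roughly:
 *     stack_demo.c:10:5:big_on_stack   416   static
 * Large on-stack objects like 'buf' below show up immediately.
 */
#include <string.h>

int big_on_stack(const char *src)
{
	char buf[400];		/* large stack object, flagged in the .su file */

	strncpy(buf, src, sizeof(buf) - 1);
	buf[sizeof(buf) - 1] = '\0';
	return (int)strlen(buf);
}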
The key variable occupies 17 bytes and key_start is used only once, so we can
simply reuse the existing 'key' for that purpose. As the key is not a simple
type, the compiler does not do this by itself.
Signed-off-by: David Sterba
---
fs/btrfs/scrub.c | 19 +--
1 file changed, 9 insertions(+), 10
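An illustrative sketch of the idea, not the scrub.c hunk itself; demo_key stands in for struct btrfs_key (a packed 17-byte aggregate), and because it is an aggregate rather than a scalar the compiler keeps separate stack slots for separate variables, so the merge has to be done by hand:

struct demo_key {			/* stand-in for struct btrfs_key */
	unsigned long long objectid;
	unsigned char type;
	unsigned long long offset;
} __attribute__((packed));

static void search_from(const struct demo_key *k) { (void)k; /* placeholder */ }

void scan_range(unsigned long long start)
{
	struct demo_key key = { .objectid = start, .type = 0, .offset = 0 };

	/*
	 * Before the change a second 'struct demo_key key_start' held these
	 * same values and was passed here exactly once; reusing 'key' drops
	 * those 17 bytes from the stack frame.
	 */
	search_from(&key);

	/* 'key' is then free to be reinitialized for the rest of the scan. */
	key.offset = (unsigned long long)-1;
	search_from(&key);
}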
Signed-off-by: David Sterba
---
fs/btrfs/send.c | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
index 19b7bf4284ee..8f6f9d6d14df 100644
--- a/fs/btrfs/send.c
+++ b/fs/btrfs/send.c
@@ -6022,10 +6022,13 @@ long btrfs_ioctl_send(struc
Signed-off-by: David Sterba
---
fs/btrfs/send.c | 13 -
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
index 02967374d0d9..53a40a7077a2 100644
--- a/fs/btrfs/send.c
+++ b/fs/btrfs/send.c
@@ -6059,10 +6059,13 @@ long btrfs_ioctl_send(str
Signed-off-by: David Sterba
---
fs/btrfs/ioctl.c | 13 -
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index 21423dd15da4..0cb80379e6f6 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -3468,13 +3468,16 @@ static int btrfs_clo
Signed-off-by: David Sterba
---
fs/btrfs/send.c | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
index 8f6f9d6d14df..fc9d7f6212c1 100644
--- a/fs/btrfs/send.c
+++ b/fs/btrfs/send.c
@@ -6031,10 +6031,13 @@ long btrfs_ioctl_send(struc
We're going to use the argument multiple times later.
Signed-off-by: David Sterba
---
fs/btrfs/send.c | 14 --
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
index fc9d7f6212c1..ab1b4d259836 100644
--- a/fs/btrfs/send.c
+++ b/fs/btrfs/
Signed-off-by: David Sterba
---
fs/btrfs/send.c | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
index ab1b4d259836..02967374d0d9 100644
--- a/fs/btrfs/send.c
+++ b/fs/btrfs/send.c
@@ -6047,10 +6047,13 @@ long btrfs_ioctl_send(struc
Hi,
inspired by a recent fix where we tried to kmalloc a 64k nodesize buffer,
without the vmalloc fallback, and failed. This series adds the "kmalloc-first
and vmalloc-fallback" logic to more places, namely to the buffers used during
send. If the memory is not fragmented, kmalloc succeeds and does
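The idiom being added looks roughly like the following sketch (the helper name is invented, not necessarily how the series structures it); callers release the buffer with kvfree(), which handles both allocation types:

#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/mm.h>

static void *alloc_send_buf(size_t size)
{
	void *buf;

	/* Try the cheap physically contiguous allocation first, quietly. */
	buf = kmalloc(size, GFP_KERNEL | __GFP_NOWARN);
	if (buf)
		return buf;

	/* Memory is fragmented: fall back to virtually contiguous pages. */
	return vmalloc(size);
}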
On Mon, Dec 14, 2015 at 06:29:32PM -0800, Liu Bo wrote:
> Now we force the creation of an empty block group to keep the data profile
> alive; however, in the example below, we eventually get an empty block group
> while we're trying to get more space for other types (metadata/system),
>
> - Before,
> block gro
On 2016-04-09 03:24, Duncan wrote:
Yauhen Kharuzhy posted on Fri, 08 Apr 2016 22:53:00 +0300 as excerpted:
On Fri, Apr 08, 2016 at 03:23:28PM -0400, Austin S. Hemmelgarn wrote:
On 2016-04-08 12:17, Chris Murphy wrote:
I would personally suggest adding a per-filesystem node in sysfs to
handle
On Mon, Apr 11, 2016 at 3:48 AM, Jérôme Poulin wrote:
> Sorry for the confusion, allow me to clarify and I will summarize with
> what I learned since I now understand that corruption was present
> before the disk went bad.
>
> Note that this BTRFS was once on an MD RAID5 on LVM on LUKS before
> being m