On Fri, Nov 25, 2016 at 4:17 AM, Roman Mamedov wrote:
> On Fri, 25 Nov 2016 12:05:57 +0100
> Niccolò Belli wrote:
>
>> This is something pretty unbelievable, so I had to repeat it several times
>> before finding the courage to actually post it to the
Hi
I have a problem mounting my 3 disk raid1.
This happened after upgrading from Kubuntu 14.04 to 16.04.
The raid1 started as a 2 disk raid1 and was grown to a 3 disk raid1 a
while back. The physical disks have dm-crypt LUKS on top.
Does anyone have any advice?
Information about the system follows.
Hi,
I have comments regarding the code organization, not really the raid56
functionality itself.
On Fri, Oct 28, 2016 at 10:31:36AM +0800, Qu Wenruo wrote:
> For anyone who wants to try it, it can be fetched from my repo:
> https://github.com/adam900710/btrfs-progs/tree/fsck_scrub
>
> Currently, I
On Fri, Jun 03, 2016 at 12:05:14PM -0700, Liu Bo wrote:
> @@ -6648,6 +6648,7 @@ int btrfs_read_chunk_tree(struct btrfs_root *root)
> struct btrfs_key found_key;
> int ret;
> int slot;
> + u64 total_dev = 0;
>
> root = root->fs_info->chunk_root;
>
> @@ -6689,6
So I rebooted with 4.9rc6 with the patch inspired by the thread
"[PATCH] btrfs: limit the number of asynchronous delalloc pages to
reasonable value", but at 512K pages, ie:
diff -u2 fs/btrfs/inode.c ../linux-4.9-rc6/fs/btrfs/
--- fs/btrfs/inode.c	2016-11-13 13:32:32.0 -0500
+++
On Tue, Nov 08, 2016 at 06:27:12AM -0500, Sanidhya Solanki wrote:
> On Tue, 8 Nov 2016 10:20:43 +0800
> Qu Wenruo wrote:
>
> > Introduce the following trace points:
> > qgroup_update_reserve
> > qgroup_meta_reserve
> >
> > These trace points are handy to trace qgroup
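Trace points like these are enabled at runtime through tracefs. A sketch of how they would be used once the patch lands (paths assume the tracepoints are compiled in and keep the names proposed above; requires root):

```shell
# enable the proposed btrfs qgroup tracepoints
echo 1 > /sys/kernel/debug/tracing/events/btrfs/qgroup_update_reserve/enable
echo 1 > /sys/kernel/debug/tracing/events/btrfs/qgroup_meta_reserve/enable
# stream the captured events as they fire
cat /sys/kernel/debug/tracing/trace_pipe
```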
On Fri, 25 Nov 2016 12:01:37 +0000 (UTC)
Duncan <1i5t5.dun...@cox.net> wrote:
> Obviously this can be a HUGE problem on spinning rust due to its seek times,
> a problem zero-seek-time ssds don't have
They are not strictly zero seek time either. Sure you don't have the issue of
moving the
Ulli Horlacher posted on Fri, 25 Nov 2016 09:28:40 +0100 as excerpted:
> I have vmware and virtualbox VMs on btrfs SSD.
>
> I read in
> https://btrfs.wiki.kernel.org/index.php/SysadminGuide
#When_To_Make_Subvolumes
>
> certain types of data (databases, VM images and similar typically
>
On Fri, 25 Nov 2016 12:05:57 +0100
Niccolò Belli wrote:
> This is something pretty unbelievable, so I had to repeat it several times
> before finding the courage to actually post it to the mailing list :)
>
> After dozens of data loss I don't trust my btrfs partition
So it's a btrfs problem:
I hit the hang again with 4.8.7, and I can't reproduce it when the ES data is stored on ext4.
Trace from 4.8.7:
Nov 25 14:09:30 msq-k1-srv-ids-01 kernel: INFO: task
btrfs-transacti:4143 blocked for more than 120 seconds.
Nov 25 14:09:30 msq-k1-srv-ids-01 kernel: Not tainted 4.8.0-1-amd64
This is something pretty unbelievable, so I had to repeat it several times
before finding the courage to actually post it to the mailing list :)
After dozens of data loss I don't trust my btrfs partition that much, so I
make a backup copy with dd weekly. Yesterday I was going to do some
On Thu, Nov 24, 2016 at 02:18:29AM +, fdman...@kernel.org wrote:
> From: Filipe Manana
>
> We were setting the qgroup_rescan_running flag to true only after the
> rescan worker started (which is a task run by a queue). So if a user
> space task starts a rescan and
On Fri, Nov 25, 2016 at 09:07:45AM +0100, Christoph Hellwig wrote:
> this series has a few patches that switch btrfs to use the proper helpers for
> accessing bio internals. This helps to prepare for supporting multi-page
> bio_vecs, which are currently under development.
>
> Changes since v1:
>
Hi all,
this series has a few patches that switch btrfs to use the proper helpers for
accessing bio internals. This helps to prepare for supporting multi-page
bio_vecs, which are currently under development.
Changes since v1:
- fixed two compression related bugs
- various minor cleanups
-
And remove the bogus check for a NULL return value from kmap, which
can't happen. While we're at it: I don't think that kmapping up to 256
pages will work without deadlocks on highmem machines; a better idea would
be to use vm_map_ram to map all of them into a single virtual address
range.
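For reference, the vm_map_ram pattern being suggested looks roughly like this (a sketch against the ~4.9-era kernel API; the function names are hypothetical and error handling is elided):

```c
/* Sketch: map an array of pages into one contiguous virtual range
 * instead of kmapping them one at a time (~4.9-era API). */
#include <linux/vmalloc.h>
#include <linux/mm.h>

static void *map_compressed_pages(struct page **pages, unsigned int nr)
{
	/* One virtually contiguous mapping for all pages, avoiding
	 * per-page kmap() and its deadlock risk on highmem machines. */
	return vm_map_ram(pages, nr, -1, PAGE_KERNEL);
}

static void unmap_compressed_pages(void *addr, unsigned int nr)
{
	vm_unmap_ram(addr, nr);
}
```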
FWIW, I still see the lockdep splat in btrfs in 4.9-rc5+
[  159.698343] =============================================
[  159.698345] [ INFO: possible recursive locking detected ]
[  159.698347] 4.9.0-rc5+ #136 Tainted: G        W
[  159.698348] ---------------------------------------------
Hi,
btrfs-progs version 4.8.4 has been released.
Changes:
* check: support for clearing space cache v2 (free-space-tree)
* send:
  * more sanity checks (with tests), cleanups
  * fix for fstests/btrfs/038 and btrfs/117 failures
* build:
  * fix compilation of standalone ioctl.h,
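The space cache clearing mentioned above is driven from the check subcommand; assuming an unmounted filesystem on /dev/sdX (a placeholder device name), the invocation would look like:

```shell
# clear the v2 free space cache (free-space-tree) on an unmounted fs
btrfs check --clear-space-cache v2 /dev/sdX
```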
I have vmware and virtualbox VMs on btrfs SSD.
I read in
https://btrfs.wiki.kernel.org/index.php/SysadminGuide#When_To_Make_Subvolumes
certain types of data (databases, VM images and similar typically big
files that are randomly written internally) may require CoW to be
disabled
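CoW can be disabled per directory with the C file attribute; the usual approach is to set it on an empty directory before creating any VM images, since the attribute only takes effect for files created afterwards. A sketch (the path is just an example):

```shell
# create the images directory with CoW disabled before adding files
mkdir /var/lib/vm-images
chattr +C /var/lib/vm-images
# new files created inside now inherit nodatacow
lsattr -d /var/lib/vm-images    # the 'C' flag should be listed
```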
Pass the full bio to the decompression routines and use bio iterators
to iterate over the data in the bio.
Signed-off-by: Christoph Hellwig
---
fs/btrfs/compression.c | 123 +
fs/btrfs/compression.h | 12 ++---
fs/btrfs/lzo.c
Use bio_for_each_segment_all to iterate over the segments instead.
This requires a bit of reshuffling so that we only look up the ordered
item once inside the bio_for_each_segment_all loop.
Signed-off-by: Christoph Hellwig
Reviewed-by: Omar Sandoval
---
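The iteration pattern these patches move to looks roughly like this (a sketch of the helper's use against the ~4.9-era API, not the actual patch body; the function name is hypothetical):

```c
/* Sketch: walk every bvec of a bio we own (e.g. in an end_io handler);
 * bio_for_each_segment_all must not be used on bios still in flight. */
#include <linux/bio.h>

static void for_each_page_in_bio(struct bio *bio)
{
	struct bio_vec *bvec;
	int i;

	bio_for_each_segment_all(bvec, bio, i) {
		struct page *page = bvec->bv_page;
		/* look up the ordered extent once per iteration here,
		 * then process page at bvec->bv_offset for bvec->bv_len */
		(void)page;
	}
}
```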
Use the bvec offset and len members to prepare for multipage bvecs.
Signed-off-by: Christoph Hellwig
---
fs/btrfs/compression.c | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 77042a8..3c1c25c
Rework the loop a little bit to use the generic bio_for_each_segment_all
helper for iterating over the bio.
Signed-off-by: Christoph Hellwig
Reviewed-by: Omar Sandoval
---
fs/btrfs/file-item.c | 32 +++-
1 file changed, 11 insertions(+),
Instead of using bi_vcnt to calculate it.
Signed-off-by: Christoph Hellwig
Reviewed-by: Omar Sandoval
---
fs/btrfs/compression.c | 7 ++-
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index
Any chance to get someone to look at this or the next bug report?
On Mon, Nov 14, 2016 at 04:35:29AM -0800, Christoph Hellwig wrote:
> btrfs/130 [ 384.645337] run fstests btrfs/130 at 2016-11-14
> 12:33:26
> [ 384.827333] BTRFS: device fsid bf118b00-e2e0-4a96-a177-765789170093 devid
> 1
Just use bio_for_each_segment_all to iterate over all segments.
Signed-off-by: Christoph Hellwig
Reviewed-by: Omar Sandoval
---
fs/btrfs/raid56.c | 16 ++--
1 file changed, 6 insertions(+), 10 deletions(-)
diff --git a/fs/btrfs/raid56.c
Just use bio_for_each_segment_all to iterate over all segments.
Signed-off-by: Christoph Hellwig
Reviewed-by: Omar Sandoval
---
fs/btrfs/inode.c | 7 +++
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index