Using the structure btrfs_sector_sum to keep the checksum value is
unnecessary, because the extents that btrfs_sector_sum points to are
contiguous; we can find the expected checksums from btrfs_ordered_sum's
bytenr and the offset, so we can remove btrfs_sector_sum's bytenr. After
removing bytenr,
It was fixed by Wei Yongjun
http://marc.info/?l=linux-btrfs&m=136910396606489&w=2
Thanks
Miao
On Tue, 18 Jun 2013 22:57:41 +0100, Djalal Harouni wrote:
> btrfs_check_trunc_cache_free_space() tries to check if there is enough
> space for cache inode truncation but it fails.
>
> Currently this fu
btrfs_check_trunc_cache_free_space() tries to check if there is enough
space for cache inode truncation but it fails.
Currently this function always returns success even if there is not
enough space. Fix this by returning the -ENOSPC error code.
Signed-off-by: Djalal Harouni
---
Totally untested
This has plagued us forever and I'm so over working around it. When we truncate
down to a non-page-aligned offset we call btrfs_truncate_page to zero out
the end of the page and write it back to disk; this keeps us from exposing
stale data if we truncate back up from that point. The prob
Josef Bacik fusionio.com> writes:
>
> On Tue, Jun 11, 2013 at 11:43:30AM -0400, Sage Weil wrote:
> > I'm also seeing this hang regularly with both 3.9 and 3.10-rc5. Is this
> > a known problem? In this case there is no powercycling; just a regular
> > ceph-osd workload.
[...]
I'm able to
Quoting Josef Bacik (2013-06-18 12:37:06)
> On Tue, Jun 11, 2013 at 11:43:30AM -0400, Sage Weil wrote:
> > I'm also seeing this hang regularly with both 3.9 and 3.10-rc5. Is this
> > a known problem? In this case there is no powercycling; just a regular
> > ceph-osd workload.
> >
>
> Have
Quoting Sage Weil (2013-06-18 11:56:37)
> On Wed, 12 Jun 2013, Sage Weil wrote:
> > On Tue, 11 Jun 2013, Chris Mason wrote:
> > > Quoting Sage Weil (2013-06-11 11:43:30)
> > > > I'm also seeing this hang regularly with both 3.9 and 3.10-rc5. Is this
> > > > a known problem? In this case there
On Tue, Jun 11, 2013 at 11:43:30AM -0400, Sage Weil wrote:
> I'm also seeing this hang regularly with both 3.9 and 3.10-rc5. Is this
> a known problem? In this case there is no powercycling; just a regular
> ceph-osd workload.
>
Have you gotten sysrq+w? Can you tell me where
log_one_exte
On Wed, 12 Jun 2013, Sage Weil wrote:
> On Tue, 11 Jun 2013, Chris Mason wrote:
> > Quoting Sage Weil (2013-06-11 11:43:30)
> > > I'm also seeing this hang regularly with both 3.9 and 3.10-rc5. Is this
> > > a known problem? In this case there is no powercycling; just a regular
> > > ceph-osd
From: Harald Hoyer
Given the following /etc/fstab entries:
/dev/sda3 /mnt/foo btrfs subvol=foo,ro 0
/dev/sda3 /mnt/bar btrfs subvol=bar,rw 0
you can't issue:
$ mount /mnt/foo
$ mount /mnt/bar
You would have to do:
$ mount /mnt/foo
$ mount -o remount,rw /mnt/foo
$ mount --bind -o remount,ro /
Does anything show up in dmesg when you mount?
If mount just hangs, do an alt-sysrq-w, and then post what that sends to dmesg.
On Tue, Jun 18, 2013 at 11:13:37AM +0800, Miao Xie wrote:
> From: Josef Bacik
>
> Patch "Btrfs: remove btrfs_sector_sum structure" introduced a problem
> that we copied the checksum value to the wrong address when doing
> relocation.
>
> The reason is:
> It is very likely that one ordered extent
Thanks for the reply.
On Tue, 2013-06-18 at 06:04 +, Duncan wrote:
[...]
> 1) I had a similar issue some time back that turned out to be a
> corrupted space-cache. Try mounting with the "nospace_cache" option. If
> that works that's it; mount with the "clear_cache" option to clear the
>
My multi-device btrfs (3*2TB) won't mount anymore.
The fs was created with lubuntu 13.04 (amd64) and the default kernel
with -n 8192 -d single -m raid1, first with two devices; the third was
added later.
Quotas were enabled after fs creation (btrfs quota enable) but
nothing else, there are several