Balance creates a reloc_root for each fs root and records
last_snapshot to filter shared blocks. The side effect of setting
last_snapshot is that it breaks the nocow attribute of files.
Since the extents are no longer shared by the relocation tree after the
balance, we can restore the old last_snapshot.
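The interaction can be modeled with a toy check. This is a simplified illustration, not btrfs's actual code; the function name and arguments are made up for the sketch. A nocow overwrite is only safe when the extent is known not to be shared with a snapshot, i.e. when its generation is newer than the root's last_snapshot, so bumping last_snapshot during balance makes every pre-existing extent look potentially shared and forces COW:

```c
#include <stdint.h>

/* Toy model of the nocow eligibility test (illustrative only):
 * an extent may be overwritten in place only if it cannot be
 * shared with a snapshot, i.e. it was created strictly after
 * the root's last_snapshot generation. */
static int can_nocow(uint64_t extent_gen, uint64_t last_snapshot)
{
        return extent_gen > last_snapshot;
}
```

In this model, balance raising last_snapshot to the current generation makes every older extent fail the test; restoring the old value after relocation, when the extents are no longer shared by the relocation tree, re-enables nocow for them.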
On Wed, 5 Jun 2013 15:36:36 +0200, David Sterba wrote:
> On Wed, Jun 05, 2013 at 10:34:08AM +0800, Miao Xie wrote:
>> On Tue, 4 Jun 2013 16:26:57 -0700, Zach Brown wrote:
>>> On Tue, Jun 04, 2013 at 07:16:53PM -0400, Chris Mason wrote:
Quoting Zach Brown (2013-06-04 18:17:54)
> Hi gang,
On 05/06/13 17:24, David Sterba wrote:
> On Wed, Jun 05, 2013 at 04:43:29PM +0100, Hugo Mills wrote:
>> OK, so you've got plenty of space to allocate. There were some
>> issues in this area (block reserves and ENOSPC, and I think
>> specifically addressing the issue of ENOSPC when there's space
On Wed, Jun 05, 2013 at 04:43:29PM +0100, Hugo Mills wrote:
> OK, so you've got plenty of space to allocate. There were some
> issues in this area (block reserves and ENOSPC, and I think
> specifically addressing the issue of ENOSPC when there's space
> available to allocate) that were fixed bet
On Wed, Jun 05, 2013 at 04:59:57PM +0100, Martin wrote:
> On 05/06/13 16:43, Hugo Mills wrote:
> > On Wed, Jun 05, 2013 at 04:28:33PM +0100, Martin wrote:
> >> btrfs fi df:
> >>
> >> Data, RAID1: total=2.85TB, used=2.84TB
> >> Data: total=8.00MB, used=0.00
> >> System, RAID1: total=8.00MB, used=412.00K
On 05/06/13 16:43, Hugo Mills wrote:
> On Wed, Jun 05, 2013 at 04:28:33PM +0100, Martin wrote:
>> On 05/06/13 16:05, Hugo Mills wrote:
>>> On Wed, Jun 05, 2013 at 03:57:42PM +0100, Martin wrote:
Dear Devs,
I have x4 4TB HDDs formatted with:
mkfs.btrfs -L bu-16TB_0 -d raid1 -m raid1 /dev/sd[cdef]
On Mon, May 20, 2013 at 02:59:27PM +0200, Holger Fischer wrote:
> Dear BTRFS-Community,
>
> as far as I understand I believe it would make sense to apply that one
> upstream:
Thanks for bringing it up.
> as described, it ... Fixes FTBFS on alpha and ia64 ...
>
> > cat 02-ftbfs.patch
> Author
On Wed, Jun 05, 2013 at 04:28:33PM +0100, Martin wrote:
> On 05/06/13 16:05, Hugo Mills wrote:
> > On Wed, Jun 05, 2013 at 03:57:42PM +0100, Martin wrote:
> >> Dear Devs,
> >>
> >> I have x4 4TB HDDs formatted with:
> >>
> >> mkfs.btrfs -L bu-16TB_0 -d raid1 -m raid1 /dev/sd[cdef]
> >>
> >>
> >
On 05/06/13 16:05, Hugo Mills wrote:
> On Wed, Jun 05, 2013 at 03:57:42PM +0100, Martin wrote:
>> Dear Devs,
>>
>> I have x4 4TB HDDs formatted with:
>>
>> mkfs.btrfs -L bu-16TB_0 -d raid1 -m raid1 /dev/sd[cdef]
>>
>>
>> /etc/fstab mounts with the options:
>>
>> noatime,noauto,space_cache,inode_cache
On Wed, 5 Jun 2013, Josef Bacik wrote:
> On Tue, Jun 04, 2013 at 09:59:05PM -0600, Sage Weil wrote:
> > Hi-
> >
> > I'm pretty reliably triggering the following bug after powercycling an
> > active btrfs + ceph workload and then trying to remount. Is this a known
> > issue?
> >
>
> Yeah sorry it's fixed in 3.10
Dear Devs,
I have x4 4TB HDDs formatted with:
mkfs.btrfs -L bu-16TB_0 -d raid1 -m raid1 /dev/sd[cdef]
/etc/fstab mounts with the options:
noatime,noauto,space_cache,inode_cache
All on kernel 3.8.13.
Upon using rsync to copy some heavily hardlinked backups from ReiserFS,
I've seen:
The fo
On Wed, Jun 05, 2013 at 03:57:42PM +0100, Martin wrote:
> Dear Devs,
>
> I have x4 4TB HDDs formatted with:
>
> mkfs.btrfs -L bu-16TB_0 -d raid1 -m raid1 /dev/sd[cdef]
>
>
> /etc/fstab mounts with the options:
>
> noatime,noauto,space_cache,inode_cache
>
>
> All on kernel 3.8.13.
>
>
> Upon using rsync to copy some heavily hardlinked backups from ReiserFS,
Dear Devs,
I have x4 4TB HDDs formatted with:
mkfs.btrfs -L bu-16TB_0 -d raid1 -m raid1 /dev/sd[cdef]
/etc/fstab mounts with the options:
noatime,noauto,space_cache,inode_cache
All on kernel 3.8.13.
Upon using rsync to copy some heavily hardlinked backups from ReiserFS,
I've so far had var
On Tue, Jun 04, 2013 at 10:03:41PM -0400, Jörn Engel wrote:
> I have seen a lot of boilerplate code that either follows the pattern of
> while (!list_empty(head)) {
>         pos = list_entry(head->next, struct foo, list);
>         list_del(&pos->list);
>         ...
>
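For reference, the pattern under discussion can be sketched with a self-contained stand-in for the kernel's list.h. The name while_list_drain comes from this thread, but its exact shape here is only an assumption, not a settled API:

```c
#include <stddef.h>

/* Minimal doubly-linked list, standing in for the kernel's list.h. */
struct list_head {
        struct list_head *next, *prev;
};

#define LIST_HEAD_INIT(name) { &(name), &(name) }

static void list_del(struct list_head *entry)
{
        entry->next->prev = entry->prev;
        entry->prev->next = entry->next;
}

static void list_add(struct list_head *item, struct list_head *head)
{
        item->next = head->next;
        item->prev = head;
        head->next->prev = item;
        head->next = item;
}

static int list_empty(const struct list_head *head)
{
        return head->next == head;
}

#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))
#define list_entry(ptr, type, member) container_of(ptr, type, member)

/* One possible shape of the proposed macro: unlink the first entry
 * on each iteration until the list is empty, so the boilerplate
 * list_entry/list_del pair disappears from callers. */
#define while_list_drain(pos, head, type, member)               \
        for (; !list_empty(head) &&                             \
               (pos = list_entry((head)->next, type, member),   \
                list_del(&pos->member), 1); )

struct foo {
        int val;
        struct list_head list;
};

/* Drain a list of struct foo, summing the values as entries are
 * unlinked; returns the sum and leaves the list empty. */
static int drain_sum(struct list_head *head)
{
        struct foo *pos;
        int sum = 0;

        while_list_drain(pos, head, struct foo, list)
                sum += pos->val;
        return sum;
}
```

Each pass through the loop body sees pos already unlinked, which is the point of the pattern: the body may free or re-queue the entry without a separate safe-iteration cursor.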
On Wed, Jun 05, 2013 at 08:53:50AM +0200, Arne Jansen wrote:
> On 05.06.2013 04:09, Jörn Engel wrote:
> > On Tue, 4 June 2013 14:44:35 -0400, Jörn Engel wrote:
> >>
> >> Or while_list_drain?
>
> I'm fine with while_list_drain, although a name starting with list_
> like all other list macros would
I noticed that I was getting these errors on a bigger file system with more
snapshots that had been removed. This check is bogus since we won't increment
rec->found_ref if we don't find a REF_KEY _and_ a DIR_ITEM, so we only have to
worry about there being no references to a root if it actually has a ro
On Wed, Jun 05, 2013 at 10:34:08AM +0800, Miao Xie wrote:
> On Tue, 4 Jun 2013 16:26:57 -0700, Zach Brown wrote:
> > On Tue, Jun 04, 2013 at 07:16:53PM -0400, Chris Mason wrote:
> >> Quoting Zach Brown (2013-06-04 18:17:54)
> >>> Hi gang,
> >>>
> >>> I finally sat down to fix that readdir hang t
Quoting Josef Bacik (2013-06-05 08:54:40)
> On Tue, Jun 04, 2013 at 09:59:05PM -0600, Sage Weil wrote:
> > Hi-
> >
> > I'm pretty reliably triggering the following bug after powercycling an
> > active btrfs + ceph workload and then trying to remount. Is this a known
> > issue?
> >
>
> Yeah sorry it's fixed in 3.10
On Tue, Jun 04, 2013 at 09:59:05PM -0600, Sage Weil wrote:
> Hi-
>
> I'm pretty reliably triggering the following bug after powercycling an
> active btrfs + ceph workload and then trying to remount. Is this a known
> issue?
>
Yeah sorry it's fixed in 3.10, I really need to send the patch back
On Tue, 4 Jun 2013 19:23:18 +0300, Alex Lyakas wrote:
[...]
> P.S: should I open a bugzilla for this?
Yes.
Otherwise the bug report gets lost.