Daily snapshots work well with kernel 3.14 and above (I had problems with 3.13
and earlier). I have snapshots every 15 minutes on some subvolumes.
Very large numbers of snapshots can cause performance problems. I suggest
keeping below 1000 snapshots at this time.
You can use send/recv functionality
When a page-aligned start and len are passed to extent_fiemap(), the result is
good, but when start and len are not aligned, e.g. when start = 1 and len =
4095 are passed to extent_fiemap(), it returns no extent.
The problem is that both start and len are rounded down, which can shrink the
queried range to nothing. This patch will ro
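Roughly the shape of the rounding logic at issue, as a sketch only
(fiemap_align_range is a made-up helper, not the actual patch): round the
start of the range down but the end up, so an unaligned request never
collapses to an empty range.

static void fiemap_align_range(u64 start, u64 len, u64 sectorsize,
                               u64 *aligned_start, u64 *aligned_len)
{
        u64 end = start + len;

        *aligned_start = round_down(start, sectorsize);
        /*
         * Rounding the end down as well is the bug: start = 1,
         * len = 4095 collapses to an empty range and extent_fiemap()
         * finds no extents.
         */
        *aligned_len = round_up(end, sectorsize) - *aligned_start;
}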
Duncan <1i5t5.duncan cox.net> writes:
>
> Gareth Clay posted on Tue, 15 Jul 2014 14:35:22 +0100 as excerpted:
>
> > I noticed yesterday that the mount points on my btrfs RAID1 filesystem
> > had become read-only. On a reboot, the filesystem fails to mount. I
> > wondered if someone here might b
I've a couple of questions on incremental backups. I've read the wiki
page, and would like to confirm my understanding of some features, and
also see if other features are possible that are not mentioned. I'm
looking to replace my existing backup solution, and hoping to match the
features I current
Hi, the following patches try to fix a long outstanding issue with qgroups
and snapshot deletion. The core problem is that btrfs_drop_snapshot will
skip shared extents during its tree walk. This results in an inconsistent
qgroup state once the drop is processed. We also have a bug where qgroup
ite
We want this to debug qgroup changes on live systems.
Signed-off-by: Mark Fasheh
Reviewed-by: Josef Bacik
---
 fs/btrfs/qgroup.c            |  3 +++
 fs/btrfs/super.c             |  1 +
 include/trace/events/btrfs.h | 56 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 60 insertions(+)
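For anyone curious what the new tracepoints look like: a qgroup event in
include/trace/events/btrfs.h would follow the usual TRACE_EVENT() pattern,
roughly as below (the event name and fields here are illustrative, not
copied from the patch).

TRACE_EVENT(btrfs_qgroup_account,

        TP_PROTO(u64 qgid, u64 rfer, u64 excl),

        TP_ARGS(qgid, rfer, excl),

        TP_STRUCT__entry(
                __field(u64, qgid)
                __field(u64, rfer)
                __field(u64, excl)
        ),

        TP_fast_assign(
                __entry->qgid = qgid;
                __entry->rfer = rfer;
                __entry->excl = excl;
        ),

        TP_printk("qgid=%llu rfer=%llu excl=%llu",
                  __entry->qgid, __entry->rfer, __entry->excl)
);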
During its tree walk, btrfs_drop_snapshot() will skip any shared
subtrees it encounters. This is incorrect when we have qgroups
turned on as those subtrees need to have their contents
accounted. In particular, the case we're concerned with is when
removing our snapshot root leaves the subtree with
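The shape of the fix, as a rough sketch (helper names are illustrative, and
the 3.16-era quota_enabled field is assumed): instead of unconditionally
skipping a shared node during the walk, account the extents of its subtree
first when quotas are on.

static int maybe_account_shared_subtree(struct btrfs_trans_handle *trans,
                                        struct btrfs_root *root,
                                        struct extent_buffer *node,
                                        int refs)
{
        struct btrfs_fs_info *fs_info = root->fs_info;

        /* Not shared, or quotas are off: the plain skip is fine. */
        if (refs == 1 || !fs_info->quota_enabled)
                return 0;

        /*
         * Walk the shared subtree and record its extents for qgroup
         * accounting before btrfs_drop_snapshot() skips over it.
         */
        return account_shared_subtree(trans, root, node);
}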
ulist_add() can return '1' on success, which qgroup_subtree_accounting()
doesn't take into account. As a result, that value can be bubbled up to
callers, causing an error to be printed. Fix this by only returning the
value of ulist_add() when it indicates an error.
Signed-off-by: Mark Fasheh
---
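(Not part of the patch, just to illustrate the convention being fixed:
ulist_add() returns 1 when it inserted a new element, 0 when the element
was already present, and < 0 on error, so only the negative case should be
propagated.)

static int add_root_to_ulist(struct ulist *roots, u64 root_id)
{
        int ret;

        ret = ulist_add(roots, root_id, 0, GFP_NOFS);
        if (ret < 0)
                return ret;     /* a real error, e.g. -ENOMEM */

        return 0;               /* both 0 and 1 mean success */
}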
btrfs_drop_snapshot() leaves subvolume qgroup items on disk after
completion. This wastes space and also can cause problems with snapshot
creation. If a new snapshot tries to claim the deleted subvolume's id,
btrfs will get -EEXIST from add_qgroup_item() and go read-only.
We can partially fix this
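A rough sketch of the cleanup idea (the call site and helper are
illustrative, assuming btrfs_remove_qgroup()'s 3.16-era signature): once
the drop finishes, delete the dropped subvolume's level-0 qgroup so a new
snapshot reusing the id doesn't hit -EEXIST in add_qgroup_item().

static int cleanup_dropped_subvol_qgroup(struct btrfs_trans_handle *trans,
                                         struct btrfs_fs_info *fs_info,
                                         u64 subvol_id)
{
        int ret;

        ret = btrfs_remove_qgroup(trans, fs_info, subvol_id);
        if (ret == -ENOENT)     /* already gone, nothing to clean up */
                ret = 0;
        return ret;
}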
From: Josef Bacik
Before, I extended the no_quota arg to btrfs_dec/inc_ref because I didn't
understand how snapshot delete was using it and assumed that we needed the
quota operations there. With Mark's work this has turned out to be not the
case, we _always_ need to use no_quota for btrfs_dec/in
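For illustration only (assuming the 3.16-era prototypes, where no_quota is
the last argument of btrfs_inc_ref()/btrfs_dec_ref()), the snapshot-delete
path would then always pass 1 there:

static int drop_buffer_refs(struct btrfs_trans_handle *trans,
                            struct btrfs_root *root,
                            struct extent_buffer *buf,
                            int full_backref)
{
        /* Snapshot delete never wants quota ops on this path. */
        return btrfs_dec_ref(trans, root, buf, full_backref,
                             1 /* no_quota */);
}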
> > > @@ -515,7 +515,8 @@ static int write_buf(struct file *filp, const void *buf, u32 len, loff_t *off)
> >
> > Though this probably wants to be rewritten in terms of kernel_write().
> > That'd give an opportunity to get rid of the sctx->send_off and have it
> > use f_pos in the filp.
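Something along these lines, perhaps (an untested sketch assuming the
pointer-based kernel_write() prototype; sctx->send_off goes away and the
filp's own f_pos carries the position):

static int write_buf(struct file *filp, const void *buf, u32 len)
{
        u32 pos = 0;

        while (pos < len) {
                ssize_t ret = kernel_write(filp, (const char *)buf + pos,
                                           len - pos, &filp->f_pos);
                if (ret < 0)
                        return ret;
                pos += ret;
        }
        return 0;
}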
> On 16 July 2014 at 19:20 Zach Brown wrote:
>
>
> On Tue, Jul 15, 2014 at 09:17:17PM +0200, Fabian Frederick wrote:
> > Fix the following sparse warning:
> > fs/btrfs/send.c:518:51: warning: incorrect type in argument 2 (different
> > address spaces)
> > fs/btrfs/send.c:518:51: expected char
On Thu, Jul 17, 2014 at 01:03:01AM -0700, Christoph Hellwig wrote:
> On Wed, Jul 16, 2014 at 02:37:56PM -0700, Luis R. Rodriguez wrote:
> > From: "Luis R. Rodriguez"
> >
> > This makes the implementation simpler by stuffing the struct on
> > the driver and just letting the driver insert it onto and r
Hugo Mills posted on Thu, 17 Jul 2014 09:41:53 +0100 as excerpted:
>> and are there any combinations of possibly conflicting mount options
>> one should be aware of (compression, autodefrag, cache clearing)? Is it
>> advisable to use the same mount options for all mounts pointing to the
>> same ph
On Thu, Jul 17, 2014 at 01:02:06PM +, philippe.simo...@swisscom.com wrote:
> Hi Hugo
>
> > -Original Message-
> > From: Hugo Mills [mailto:h...@carfax.org.uk]
> > Sent: Thursday, July 17, 2014 1:13 PM
> > To: Simonet Philippe, INI-ON-FIT-NW-IPE
> > Cc: linux-btrfs@vger.kernel.org
> > S
[ deadlocks during rsync in 3.15 with compression enabled ]
Hi everyone,
I still haven't been able to reproduce this one here, but I'm going
through a series of tests with lzo compression forced and every
operation forced to ordered. Hopefully it'll kick it out soon.
While I'm hammering away,
Hi Hugo
> -Original Message-
> From: Hugo Mills [mailto:h...@carfax.org.uk]
> Sent: Thursday, July 17, 2014 1:13 PM
> To: Simonet Philippe, INI-ON-FIT-NW-IPE
> Cc: linux-btrfs@vger.kernel.org
> Subject: Re: NFS FILE ID not unique when exporting many btrfs subvolumes
>
> On Thu, Jul 17, 20
On 07/17/2014 04:08 AM, Liu Bo wrote:
> xfstests generic/127 detected this problem.
>
> With commit 7fc34a62ca4434a79c68e23e70ed26111b7a4cf8, now fsync will only flush
> data within the passed range. This is the cause of the above problem,
> -- btrfs's fsync has a stage called 'sync log' which
On Thu, Jul 17, 2014 at 10:40:14AM +, philippe.simo...@swisscom.com wrote:
> I have a problem using btrfs/nfs to store my vmware images.
[snip]
> - vmware is basing its NFS files locks on the nfs fileid field returned from
> a NFS GETATTR request for the file being locked
>
> http://kb.
'btrfs filesystem defrag' has an option '-t', whose manpage says
"Any extent bigger than threshold given by -t option, will be
considered already defragged. Use 0 to take the kernel default, and
use 1 to say every single extent must be rewritten."
Here 'use 0' still works; it refers to the defaul
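For concreteness, the threshold check in the kernel presumably has roughly
this shape (the helper name and the 256KiB default are my assumptions, not
taken from the thread):

static int extent_already_defragged(u64 extent_len, u32 thresh)
{
        if (thresh == 0)
                thresh = 256 * 1024;    /* assumed kernel default */

        /* Extents at or above the threshold are left alone. */
        return extent_len >= thresh;
}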
xfstest btrfs/023, which does the following tests
create_group_profile "raid0"
check_group_profile "RAID0"
create_group_profile "raid1"
check_group_profile "RAID1"
create_group_profile "raid10"
check_group_profile "RAID10"
create_group_profile "raid5"
check_group_profile "RAID5"
create_group_pr
Original Message
Subject: Re: Is it safe to mount subvolumes of already-mounted volumes
(even with different options)?
From: Sebastian Ochmann
To: Chris Murphy, zhe.zhang.resea...@gmail.com
Date: 2014-07-17 15:58
Hello,
I need to clarify, I'm _not_ sharing a drive between
On Thu, Jul 17, 2014 at 10:02:01AM +0200, Swâmi Petaramesh wrote:
> Hi there,
>
> For a few days now, I have noticed that "btrfs fi df /" displays an entry about
> "unknown" used space, and I can see this on several Fedora machines, so it is
> not an issue related to a given system...
>
> Does an
On Thu, Jul 17, 2014 at 12:18:37AM +0200, Sebastian Ochmann wrote:
> I'm sharing a btrfs-formatted drive between multiple computers and each of
> the machines has a separate home directory on that drive. The root of the
> drive is mounted at /mnt/tray and the home directory for machine {hostname}
>
xfstests generic/127 detected this problem.
With commit 7fc34a62ca4434a79c68e23e70ed26111b7a4cf8, now fsync will only flush
data within the passed range. This is the cause of the above problem,
-- btrfs's fsync has a stage called 'sync log' which will wait for all the
ordered extents it has recorded
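To make the failure mode concrete, here is a rough sketch of one
conservative workaround, not the actual fix (fsync_flush_range is a made-up
helper): widen writeback so every ordered extent the log has recorded is
flushed before 'sync log' waits on it.

static int fsync_flush_range(struct inode *inode, loff_t start, loff_t end)
{
        /*
         * Deliberately ignore 'end': flush everything from start
         * onward so no recorded ordered extent is left without
         * writeback before the log sync waits on it.
         */
        return filemap_fdatawrite_range(inode->i_mapping, start, LLONG_MAX);
}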
On Wed, Jul 16, 2014 at 02:37:56PM -0700, Luis R. Rodriguez wrote:
> From: "Luis R. Rodriguez"
>
> This makes the implementation simpler by stuffing the struct on
> the driver and just letting the driver insert it onto and remove it
> from the sb list. This avoids the kzalloc() completely.
Again, NA
Hi there,
For a few days now, I have noticed that "btrfs fi df /" displays an entry about
"unknown" used space, and I can see this on several Fedora machines, so it is
not an issue related to a given system...
Does anybody know what these "unknown" data are?
i.e.:
# btrfs fi df /
Data, single:
Hello,
I need to clarify, I'm _not_ sharing a drive between multiple computers
at the _same_ time. It's a portable device which I use at different
locations with different computers. I just wanted to give a rationale
for mounting the whole drive to some mountpoint and then also part of
that d