This is one I have not seen before.
When running a simple, well-tested and well-used script that makes
backups using btrfs send | receive, I got these two errors:
At subvol snapshot
ERROR: rename o131621-1091-0 ->
usr/lib/node_modules/node-gyp/gyp/pylib/gyp/MSVSVersion.py failed: No
space left
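For what it's worth, ENOSPC during receive often means btrfs's own space accounting is exhausted (typically metadata) even when df shows free space. A diagnostic sketch, with `/mnt/backup` as a made-up placeholder for the receive target; the destructive step is left commented out:

```shell
target=/mnt/backup   # placeholder -- point at the real receive target

if command -v btrfs >/dev/null 2>&1; then
    # Per-chunk accounting: metadata can be exhausted while data chunks
    # still have room, which yields ENOSPC during receive.
    btrfs filesystem usage "$target" 2>&1 || true
    # If metadata is tight, compacting half-empty data chunks frees raw
    # space it can grow into (uncomment to actually run):
    # btrfs balance start -dusage=50 "$target"
    msg="inspected $target"
else
    msg="btrfs-progs not installed; commands shown for illustration only"
fi
echo "$msg"
```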
I'm using btrfs and snapper on a system with an SSD. On this system
when I run `snapper -c root ls` (where `root` is the snapper config
for /), the process takes a very long time and top shows the following
process using 100% of the CPU:
kworker/u8:6+btrfs-qgroup-rescan
I have multiple
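The `btrfs-qgroup-rescan` kworker points at quota groups: snapper's space-aware cleanup enables qgroups, and snapshot deletion can trigger full rescans. If per-subvolume accounting isn't needed, disabling quotas is the usual remedy. A sketch, assuming `/` is the snapper-managed mount point; the irreversible step is commented out:

```shell
mnt=/   # assumed mount point of the snapper-managed filesystem

if command -v btrfs >/dev/null 2>&1; then
    # Shows whether qgroups are active; fails harmlessly if quotas are off.
    btrfs qgroup show "$mnt" 2>&1 || true
    # Disabling quotas stops the rescan worker, at the cost of losing the
    # accounting snapper's SPACE_LIMIT cleanup relies on (uncomment to run):
    # btrfs quota disable "$mnt"
    result="checked qgroups on $mnt"
else
    result="btrfs-progs not installed; sketch only"
fi
echo "$result"
```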
On Wed, Sep 19, 2018 at 12:12:03AM -0400, Zygo Blaxell wrote:
> On Mon, Sep 10, 2018 at 07:06:46PM +1000, Dave Chinner wrote:
> > On Thu, Sep 06, 2018 at 11:53:06PM -0400, Zygo Blaxell wrote:
> > > On Thu, Sep 06, 2018 at 06:38:09PM +1000, Dave Chinner wrote:
> > > >
On Thu, Sep 06, 2018 at 11:53:06PM -0400, Zygo Blaxell wrote:
> On Thu, Sep 06, 2018 at 06:38:09PM +1000, Dave Chinner wrote:
> > On Fri, Aug 31, 2018 at 01:10:45AM -0400, Zygo Blaxell wrote:
> > > On Thu, Aug 30, 2018 at 04:27:43PM +1000, Dave Chinner wrote:
> > > >
On Fri, Aug 31, 2018 at 01:10:45AM -0400, Zygo Blaxell wrote:
> On Thu, Aug 30, 2018 at 04:27:43PM +1000, Dave Chinner wrote:
> > On Thu, Aug 23, 2018 at 08:58:49AM -0400, Zygo Blaxell wrote:
> > > On Mon, Aug 20, 2018 at 08:33:49AM -0700, Darrick J. Wong wrote:
> > > &
On Thu, Aug 23, 2018 at 08:58:49AM -0400, Zygo Blaxell wrote:
> On Mon, Aug 20, 2018 at 08:33:49AM -0700, Darrick J. Wong wrote:
> > On Mon, Aug 20, 2018 at 11:09:32AM +1000, Dave Chinner wrote:
> > > - is documenting rejection on request alignment grounds
> > > (i
On Mon, Aug 20, 2018 at 08:17:18PM -0500, Eric Sandeen wrote:
>
>
> On 8/20/18 7:49 PM, Dave Chinner wrote:
> > Upon successful completion of this ioctl, the number of
> > bytes successfully deduplicated is returned in bytes_deduped
> > and a status
On Mon, Aug 20, 2018 at 08:33:49AM -0700, Darrick J. Wong wrote:
> On Mon, Aug 20, 2018 at 11:09:32AM +1000, Dave Chinner wrote:
> > So why was this dedupe request even accepted by the kernel? Well,
> > I think there's a bug in the check just above this:
> >
> >
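For reference, the dedupe request being discussed (FIDEDUPERANGE) can be driven from userspace with xfs_io's `dedupe` command. A self-contained sketch with invented temp files; on a filesystem without dedupe support the ioctl fails and the sketch just reports that:

```shell
dir=$(mktemp -d)
# Two byte-identical 128 KiB files; dedupe requires identical,
# block-aligned source and destination ranges.
dd if=/dev/zero of="$dir/src" bs=64k count=2 2>/dev/null
cp "$dir/src" "$dir/dst"

if command -v xfs_io >/dev/null 2>&1; then
    # FIDEDUPERANGE via xfs_io: dedupe <srcfile> <srcoff> <dstoff> <len>,
    # applied to the open destination file.
    xfs_io -c "dedupe $dir/src 0 0 128k" "$dir/dst" \
        || echo "dedupe not supported on this filesystem"
    outcome="attempted dedupe"
else
    outcome="xfs_io not installed; sketch only"
fi
echo "$outcome"
rm -rf "$dir"
```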
[cc linux-fsdevel now, too]
On Mon, Aug 20, 2018 at 09:11:26AM +1000, Dave Chinner wrote:
> [cc linux-...@vger.kernel.org]
>
> On Fri, Aug 17, 2018 at 09:39:24AM +0100, fdman...@kernel.org wrote:
> > From: Filipe Manana
> >
> > Test that deduplication of an
on bug we need to fix via an "oh, by the way" comment in
a commit message for a regression test.
Cheers,
Dave.
> Signed-off-by: Filipe Manana
> ---
> tests/generic/505 | 84
> +++
> tests/generic/505.out | 33 ++
A slight change here to _require_xfs_io_command as well, so that tests
> which simply fail with "Inappropriate ioctl" can be caught in the
> common case.
>
> Signed-off-by: Eric Sandeen <sand...@redhat.com>
> ---
>
> Now with new and improved sequential V4 vers
put
>
> V3: lowercase local vars, simplify max label len function
Looks good now, but I wondered about one thing the test doesn't
cover: can you clear the label by setting it to a null string?
i.e. you check max length bounds, but don't check empty string
behaviour...
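The empty-string case is easy to probe from userspace. A dry-run sketch (the commands are recorded, not executed, and `SCRATCH_DEV`/`SCRATCH_MNT` values are stand-ins for the fstests variables):

```shell
SCRATCH_DEV=/dev/sdb1      # stand-ins for the fstests variables
SCRATCH_MNT=/mnt/scratch

# Dry run: record the commands rather than execute them.
plan=$(cat <<EOF
btrfs filesystem label $SCRATCH_MNT ''   # empty string should clear the label
blkid -s LABEL $SCRATCH_DEV              # expect no LABEL= token afterwards
EOF
)
echo "$plan"
```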
Cheers,
Dave.
--
Dave
On Mon, May 14, 2018 at 06:26:07PM -0500, Eric Sandeen wrote:
> On 5/14/18 6:11 PM, Dave Chinner wrote:
> > On Mon, May 14, 2018 at 12:09:16PM -0500, Eric Sandeen wrote:
> >> This tests the online label ioctl that btrfs has, which has been
> >> recently propose
+
> +# And that it is still there when it's unmounted
> +_scratch_unmount
> +blkid -s LABEL $SCRATCH_DEV | _filter_scratch | sed -e "s/ $//g"
Ok, so "LABEL" here is a special blkid match token....
> +# And that it persists after a remount
> +_scratch_mount
&
On Tue, May 08, 2018 at 10:06:44PM -0400, Jeff Mahoney wrote:
> On 5/8/18 7:38 PM, Dave Chinner wrote:
> > On Tue, May 08, 2018 at 11:03:20AM -0700, Mark Fasheh wrote:
> >> Hi,
> >>
> >> The VFS's super_block covers a variety of filesystem functionality.
ields we need for a
> subvolume namespace into their own structure.
I'm not convinced yet - it still feels like it's the wrong layer to
be solving the multiple namespace per superblock problem
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
asn't actually guaranteed inode changes
made prior to the fsync to be persistent on disk. i.e. that's a
violation of ordered metadata semantics and probably a bug.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Fri, Apr 13, 2018 at 10:27:56PM -0500, Vijay Chidambaram wrote:
> Hi Dave,
>
> Thanks for the reply.
>
> I feel like we are not talking about the same thing here.
>
> What we are asking is: if you perform
>
> fsync(symlink)
> crash
>
> can we expect it t
On Fri, Apr 13, 2018 at 09:39:27AM -0500, Jayashree Mohan wrote:
> Hey Dave,
>
> Thanks for clarifying the crash recovery semantics of strictly
> metadata ordered filesystems. We had a follow-up question in this
> case.
>
> On Fri, Apr 13, 2018 at 8:16 AM, Amir Goldstei
g strictly ordered metadata
recovery semantics, so it should behave the same way as ext4 and
XFS in tests like these. If it doesn't, then there's filesystem bugs
that need fixing...
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
PROG $MKFS_OPTIONS $mixed_opt -b $fssize $SCRATCH_DEV
> ;;
> jfs)
Makes sense.
Reviewed-by: Dave Chinner <dchin...@redhat.com>
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
> _scratch_mkfs_sized $((256 * 1024 * 1024)) >>$seqres.full 2>&1
But this uses a filesystem larger than the mixed mode threshold in
_scratch_mkfs_sized(). Please update the generic threshold rather
than special c
On Mon, Apr 09, 2018 at 11:00:52AM +0100, Filipe Manana wrote:
> On Mon, Apr 9, 2018 at 10:51 AM, Dave Chinner <da...@fromorbit.com> wrote:
> > On Sun, Apr 08, 2018 at 10:07:54AM +0800, Eryu Guan wrote:
> >> On Thu, Apr 05, 2018 at 10:56:14PM +0100, fdman...@kernel.org wr
t;fsync" \
> > +$SCRATCH_MNT/baz
You also cannot assume that two separate preallocations beyond EOF
are going to be contiguous (i.e. it could be two separate extents).
What you should just be checking is that there are extents allocated
covering EOF to 3MB, not the exactl
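One way to assert coverage rather than contiguity is to fiemap the range and eyeball the mapped extents. A sketch with an invented temp file; it degrades to a message where xfs_io or FIEMAP is unavailable:

```shell
f=$(mktemp)
if command -v xfs_io >/dev/null 2>&1; then
    # Preallocate 1M at EOF twice -- the two ranges may or may not be
    # merged into one extent, so check coverage, not extent count.
    xfs_io -c "falloc -k 1m 1m" -c "falloc -k 2m 1m" "$f" 2>/dev/null
    result=$(xfs_io -c "fiemap -v 1m 2m" "$f" 2>&1) \
        || result="fiemap unsupported here"
else
    result="xfs_io not installed; sketch only"
fi
echo "$result"
rm -f "$f"
```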
it was *completed*. If we've only replayed up to the
FUA write with 1:63 in it, then no metadata writes should have been
*issued* with 1:396 in it as the LSN that is stamped into metadata
is only updated on log IO completion
On first glance, this implies a bug in the underlying device write
rep
Can anyone give me any ideas why this error would happen? The receive
directory started empty. Snapshot 3 exists at both source and target.
# mkdir /.snapshots/bw538/
# btrfs send -p /mnt/backup/root/laptop/3/snapshot/
/mnt/backup/root/laptop/4/snapshot/ | btrfs receive /.snapshots/bw538/
At
I want to exclude my ~/.cache directory from snapshots. The obvious
way to do this is to mount a btrfs subvolume at that location.
However, I also want the ~/.cache directory to be nodatacow. Since the
parent volume is COW, I believe it isn't possible to mount the
subvolume with different mount
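An alternative that avoids a separate subvolume and mount entirely: the No_COW attribute can be set on an empty directory, and files created inside it afterwards inherit it. A sketch with a temp directory standing in for ~/.cache; the flag only has effect on btrfs and only for files created after it is set:

```shell
cache=$(mktemp -d)    # stand-in for ~/.cache; must be empty when flagged

if command -v chattr >/dev/null 2>&1; then
    # +C = No_COW; reports an error on filesystems that don't support it
    chattr +C "$cache" 2>/dev/null || echo "No_COW not supported here"
    state=$(lsattr -d "$cache" 2>/dev/null || echo "lsattr unavailable")
else
    state="chattr not installed; sketch only"
fi
echo "$state"
rm -rf "$cache"
```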
lay...@redhat.com>
> Reviewed-by: Jan Kara <j...@suse.cz>
Documentation helps a lot in understanding all this. Thanks for
adding it into the patch!
Acked-by: Dave Chinner <dchin...@redhat.com>
--
Dave Chinner
da...@fromorbit.com
On Tue, Jan 09, 2018 at 09:10:57AM -0500, Jeff Layton wrote:
> From: Jeff Layton <jlay...@redhat.com>
>
> If XFS_ILOG_CORE is already set then go ahead and increment it.
>
> Signed-off-by: Jeff Layton <jlay...@redhat.com>
> Acked-by: Darrick J. Wong <darrick
On Tue, Jan 09, 2018 at 09:10:54AM -0500, Jeff Layton wrote:
> From: Jeff Layton <jlay...@redhat.com>
>
> Signed-off-by: Jeff Layton <jlay...@redhat.com>
> Acked-by: Darrick J. Wong <darrick.w...@oracle.com>
Looks ok, but I haven't tested it at all.
Acked-by: Dav
On Wed, Jan 03, 2018 at 02:59:21PM +0100, Jan Kara wrote:
> On Wed 03-01-18 13:32:19, Dave Chinner wrote:
> > I think we could probably block ->write_metadata if necessary via a
> > completion/wakeup style notification when a specific LSN is reached
> > by the log
On Tue, Jan 02, 2018 at 11:13:06AM -0500, Josef Bacik wrote:
> On Wed, Dec 20, 2017 at 03:30:55PM +0100, Jan Kara wrote:
> > On Wed 20-12-17 08:35:05, Dave Chinner wrote:
> > > On Tue, Dec 19, 2017 at 01:07:09PM +0100, Jan Kara wrote:
> > > > On Wed 13-12-1
On Tue, Dec 19, 2017 at 01:07:09PM +0100, Jan Kara wrote:
> On Wed 13-12-17 09:20:04, Dave Chinner wrote:
> > On Tue, Dec 12, 2017 at 01:05:35PM -0500, Josef Bacik wrote:
> > > On Tue, Dec 12, 2017 at 10:36:19AM +1100, Dave Chinner wrote:
> > > > On Mon, Dec 11, 2
inode: inode to check
> @@ -248,7 +250,7 @@ inode_query_iversion(struct inode *inode)
> {
> u64 cur, old, new;
>
> - cur = atomic64_read(&inode->i_version);
> + cur = inode_peek_iversion_raw(inode);
> for (;;) {
> /* If flag is already set, then
, so maybe it would be better to split relatively isolated
functionality like this out while it's being reworked and you're
already touching every file that uses it?
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Tue, Dec 12, 2017 at 01:05:35PM -0500, Josef Bacik wrote:
> On Tue, Dec 12, 2017 at 10:36:19AM +1100, Dave Chinner wrote:
> > On Mon, Dec 11, 2017 at 04:55:31PM -0500, Josef Bacik wrote:
> > > From: Josef Bacik <jba...@fb.com>
> > >
> > > Now that we
On Mon, Dec 11, 2017 at 02:12:28PM -0800, Joe Perches wrote:
> On Tue, 2017-12-12 at 08:43 +1100, Dave Chinner wrote:
> > On Sat, Dec 09, 2017 at 09:00:18AM -0800, Joe Perches wrote:
> > > On Sat, 2017-12-09 at 09:36 +1100, Dave Chinner wrote:
> > > > 1. Usin
their dirty metadata before sync() returns,
even though it is not necessary to provide correct sync()
semantics
Mind you, writeback invocation is so convoluted now I could easily
be mis-interpretting this code, but it does seem to me like this
code is going to have some unintended behaviou
On Sun, Dec 10, 2017 at 08:23:15PM -0800, Matthew Wilcox wrote:
> On Mon, Dec 11, 2017 at 10:57:45AM +1100, Dave Chinner wrote:
> > i.e. the fact the cmpxchg failed may not have anything to do with a
> > race condtion - it failed because the slot wasn't empty like
On Sat, Dec 09, 2017 at 09:00:18AM -0800, Joe Perches wrote:
> On Sat, 2017-12-09 at 09:36 +1100, Dave Chinner wrote:
> > 1. Using lockdep_set_novalidate_class() for anything other
> > than device->mutex will throw checkpatch warnings. Nice. (*)
> []
> > (*)
On Tue, Oct 31, 2017 someone wrote:
>
>
> > 2. Put $HOME/.cache on a separate BTRFS subvolume that is mounted
> > nocow -- it will NOT be snapshotted
I did exactly this. It serves the purpose of avoiding snapshots.
However, today I saw the following at
https://wiki.archlinux.org/index.php/Btrfs
On Fri, Dec 08, 2017 at 03:01:31PM -0800, Matthew Wilcox wrote:
> On Thu, Dec 07, 2017 at 11:38:43AM +1100, Dave Chinner wrote:
> > > > cmpxchg is for replacing a known object in a store - it's not really
> > > > intended for doing initial inserts after a lookup tells
use the RCU checking because it knows that every reference is protected
> by either the spinlock or the RCU lock.
>
> Dave was saying that he has a tree which has to be protected by a mutex
> because of where it is in the locking hierarchy, and I was vigorously
> declining his propo
Who-ever adds semaphore checking to lockdep can add those
annotations. The externalisation of the development cost of new
lockdep functionality is one of the problems here.
-Dave.
(*) checkpatch.pl is considered mostly harmful round here, too,
but that's another rant
(**) the frequent occurren
On Fri, Dec 08, 2017 at 01:45:52PM +0900, Byungchul Park wrote:
> On Fri, Dec 08, 2017 at 09:22:16AM +1100, Dave Chinner wrote:
> > On Thu, Dec 07, 2017 at 11:06:34AM -0500, Theodore Ts'o wrote:
> > > On Wed, Dec 06, 2017 at 06:06:48AM -0800, Matthew Wilcox wrote:
> > >
problem, you'd be happier, right?
I'd be much happier if it wasn't turned on by default in the first
place. We gave plenty of warnings that there were still unsolved
false positive problems with the new checks in the storage stack.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.
ckdep as being too annyoing and a waste of developer
> time when trying to figure what is a legitimate locking bug versus
> lockdep getting confused.
>
> I can't even disable the new Lockdep feature which is throwing
> lots of new false positives --- it's just all or nothing.
>
On Wed, Dec 06, 2017 at 06:06:48AM -0800, Matthew Wilcox wrote:
> On Wed, Dec 06, 2017 at 07:44:04PM +1100, Dave Chinner wrote:
> > On Tue, Dec 05, 2017 at 08:45:49PM -0800, Matthew Wilcox wrote:
> > > That said, using xa_cmpxchg() in the dquot code looked like the right
> &g
On Tue, Dec 05, 2017 at 08:45:49PM -0800, Matthew Wilcox wrote:
> On Wed, Dec 06, 2017 at 02:14:56PM +1100, Dave Chinner wrote:
> > > The other conversions use the normal API instead of the advanced API, so
> > > all of this gets hidden away. For example, the inode cache d
On Tue, Dec 05, 2017 at 06:02:08PM -0800, Matthew Wilcox wrote:
> On Wed, Dec 06, 2017 at 12:36:48PM +1100, Dave Chinner wrote:
> > > - if (radix_tree_preload(GFP_NOFS))
> > > - return -ENOMEM;
> > > -
> > > INIT_LIST_HEAD(>list_node);
> >
On Tue, Dec 05, 2017 at 06:05:15PM -0800, Matthew Wilcox wrote:
> On Wed, Dec 06, 2017 at 12:45:49PM +1100, Dave Chinner wrote:
> > On Tue, Dec 05, 2017 at 04:40:46PM -0800, Matthew Wilcox wrote:
> > > From: Matthew Wilcox <mawil...@microsoft.com>
> > >
ault y", so should be turned on. But
it's not? And there's no obvious HMM menu config option, either
What a godawful mess Kconfig has turned into.
I'm just going to enable TRANSPARENT_HUGEPAGE - madness awaits me if
I follow the other path down the rat hole
Ok, it build t
On Wed, Dec 06, 2017 at 12:45:49PM +1100, Dave Chinner wrote:
> On Tue, Dec 05, 2017 at 04:40:46PM -0800, Matthew Wilcox wrote:
> > From: Matthew Wilcox <mawil...@microsoft.com>
> >
> > I looked through some notes and decided this was version 4 of the XArray.
&g
On Tue, Dec 05, 2017 at 04:40:46PM -0800, Matthew Wilcox wrote:
> From: Matthew Wilcox <mawil...@microsoft.com>
>
> I looked through some notes and decided this was version 4 of the XArray.
> Last posted two weeks ago, this version includes a *lot* of changes.
> I'd like
tions. Turning that around
so that a larger XFS structure and algorithm is now protected by an
opaque internal lock from generic storage structure the forms part
of the larger structure seems like a bad design pattern to me...
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
--
To unsu
On Tue, Nov 21, 2017 at 04:52:53AM -0800, Matthew Wilcox wrote:
> On Tue, Nov 21, 2017 at 05:48:15PM +1100, Dave Chinner wrote:
> > On Mon, Nov 20, 2017 at 08:32:40PM -0800, Matthew Wilcox wrote:
> > > On Mon, Nov 20, 2017 at 05:37:53PM -0800, Darrick J. Wong wrote:
> >
On Mon, Nov 20, 2017 at 08:32:40PM -0800, Matthew Wilcox wrote:
> On Mon, Nov 20, 2017 at 05:37:53PM -0800, Darrick J. Wong wrote:
> > On Tue, Nov 21, 2017 at 09:27:49AM +1100, Dave Chinner wrote:
> > > First thing I noticed was that "xa" as a prefix is already qu
On Mon, Nov 20, 2017 at 01:51:00PM -0800, Matthew Wilcox wrote:
> On Tue, Nov 21, 2017 at 07:26:06AM +1100, Dave Chinner wrote:
> > On Mon, Nov 20, 2017 at 08:18:29AM -0800, Matthew Wilcox wrote:
> > > On Fri, Nov 17, 2017 at 11:39:25AM -0800, Darrick J. Wong wrote:
>
solving this problem. The XArray is going to introduce a set
> of entries which can be stored to locations in the page cache that I'm
> calling 'wait entries'.
What's this XArray thing you speak of?
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Tue, Nov 14, 2017 at 3:50 AM, Roman Mamedov <r...@romanrm.net> wrote:
>
> On Mon, 13 Nov 2017 22:39:44 -0500
> Dave <davestechs...@gmail.com> wrote:
>
> > I have my live system on one block device and a backup snapshot of it
> > on another block device. I
On Wed, Nov 1, 2017 at 1:15 AM, Roman Mamedov <r...@romanrm.net> wrote:
> On Wed, 1 Nov 2017 01:00:08 -0400
> Dave <davestechs...@gmail.com> wrote:
>
>> To reconcile those conflicting goals, the only idea I have come up
>> with so far is to use btrfs send-receive
ny thoughts about how we could efficiently support accounting for
variable sized, non-page based metadata with this generic
infrastructure?
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Sat, Nov 4, 2017 at 1:25 PM, Chris Murphy <li...@colorremedies.com> wrote:
>
> On Sat, Nov 4, 2017 at 1:26 AM, Dave <davestechs...@gmail.com> wrote:
> > On Mon, Oct 30, 2017 at 5:37 PM, Chris Murphy <li...@colorremedies.com>
> > wrote:
> >>
> &g
On Mon, Oct 30, 2017 at 5:37 PM, Chris Murphy wrote:
>
> That is not a general purpose file system. It's a file system for admins who
> understand where the bodies are buried.
I'm not sure I understand your comment...
Are you saying BTRFS is not a general purpose file
On Thu, Nov 2, 2017 at 4:46 PM, Kai Krakow <hurikha...@gmail.com> wrote:
> Am Wed, 1 Nov 2017 02:51:58 -0400
> schrieb Dave <davestechs...@gmail.com>:
>
>> >
>> >> To reconcile those conflicting goals, the only idea I have come up
>> >&g
On Thu, Nov 2, 2017 at 7:07 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
> On 2017-11-01 21:39, Dave wrote:
>> I'm going to make this change now. What would be a good way to
>> implement this so that the change applies to the $HOME/.cache of each
>> user?
On Thu, Nov 2, 2017 at 5:16 PM, Kai Krakow wrote:
>
> You may want to try btrfs autodefrag mount option and see if it
> improves things (tho, the effect may take days or weeks to apply if you
> didn't enable it right from the creation of the filesystem).
>
> Also,
On Thu, Nov 2, 2017 at 7:17 AM, Austin S. Hemmelgarn
wrote:
>> And the worst performing machine was the one with the most RAM and a
>> fast NVMe drive and top of the line hardware.
>
> Somewhat nonsensically, I'll bet that NVMe is a contributing factor in this
> particular
Has this been discussed here? Has anything changed since it was written?
Parity-based redundancy (RAID5/6/triple parity and beyond) on BTRFS
and MDADM (Dec 2014) – Ronny Egners Blog
On Wed, Nov 1, 2017 at 8:21 AM, Austin S. Hemmelgarn
wrote:
>> The cache is in a separate location from the profiles, as I'm sure you
>> know. The reason I suggested a separate BTRFS subvolume for
>> $HOME/.cache is that this will prevent the cache files for all
>>
On Wed, Nov 1, 2017 at 1:48 PM, Peter Grandi wrote:
>> When defragmenting individual files on a BTRFS filesystem with
>> COW, I assume reflinks between that file and all snapshots are
>> broken. So if there are 30 snapshots on that volume, that one
>> file will suddenly
On Wed, Nov 1, 2017 at 9:31 AM, Duncan <1i5t5.dun...@cox.net> wrote:
> Dave posted on Tue, 31 Oct 2017 17:47:54 -0400 as excerpted:
>
>> 6. Make sure Firefox is running in multi-process mode. (Duncan's
>> instructions, while greatly appreciated and very useful, left me
&g
On Wed, Nov 1, 2017 at 4:34 AM, Marat Khalili wrote:
>> We do experience severe performance problems now, especially with
>> Firefox. Part of my experiment is to reduce the number of snapshots on
>> the live volumes, hence this question.
>
> Just for statistics, how many snapshots
On Wed, Nov 1, 2017 at 2:19 AM, Marat Khalili wrote:
> You seem to have two tasks: (1) same-volume snapshots (I would not call them
> backups) and (2) updating some backup volume (preferably on a different
> box). By solving them separately you can avoid some complexity...
Yes, it
On Wed, Nov 1, 2017 at 1:15 AM, Roman Mamedov <r...@romanrm.net> wrote:
> On Wed, 1 Nov 2017 01:00:08 -0400
> Dave <davestechs...@gmail.com> wrote:
>
>> To reconcile those conflicting goals, the only idea I have come up
>> with so far is to use btrfs send-receive
Our use case requires snapshots. btrfs snapshots are the best solution we
have found for our requirements, and over the last year snapshots have
proven their value to us.
(For this discussion I am considering both the "root" volume and the
"home" volume on a typical desktop workstation. Also, all
On Tue, Oct 31, 2017 at 7:06 PM, Peter Grandi
wrote:
>
> Also nothing forces you to defragment a whole filesystem, you
> can just defragment individual files or directories by using
> 'find' with it.
Thanks for that info. When defragmenting individual files on a
many unexpected
changes when using sync, so we do not use it.
On Thu, Sep 21, 2017 at 7:09 AM, Duncan <1i5t5.dun...@cox.net> wrote:
> Dave posted on Wed, 20 Sep 2017 02:38:13 -0400 as excerpted:
>
>> Here's my scenario. Some months ago I built an over-the-top powerful
>>
This is a very helpful thread. I want to share an interesting related story.
We have a machine with 4 btrfs volumes and 4 Snapper configs. I
recently discovered that Snapper timeline cleanup had been turned off for
3 of those volumes. In the Snapper configs I found this setting:
On Mon, Oct 09, 2017 at 09:00:51AM -0400, Josef Bacik wrote:
> On Mon, Oct 09, 2017 at 04:17:31PM +1100, Dave Chinner wrote:
> > On Sun, Oct 08, 2017 at 10:25:10PM -0400, Josef Bacik wrote:
> > > > Integrating into fstests means it will be immediately available to
> > &
On Sun, Oct 08, 2017 at 10:25:10PM -0400, Josef Bacik wrote:
> On Mon, Oct 09, 2017 at 11:51:37AM +1100, Dave Chinner wrote:
> > On Fri, Oct 06, 2017 at 05:09:57PM -0400, Josef Bacik wrote:
> > > Hello,
> > >
> > > One thing that comes up a lot every LSF is the
omparison options so you can compare against averages of all
> previous runs and such.
Yup, that fits exactly into what fstests is for... :P
Integrating into fstests means it will be immediately available to
all fs developers, it'll run on everything that everyone already has
setup for filesystem te
On Tue, Oct 03, 2017 at 01:40:51PM -0700, Matthew Wilcox wrote:
> On Wed, Oct 04, 2017 at 07:10:35AM +1100, Dave Chinner wrote:
> > On Tue, Oct 03, 2017 at 03:19:18PM +0200, Martin Steigerwald wrote:
> > > [repost. I didn't notice autocompletion gave me wrong address
On Tue, Oct 03, 2017 at 03:19:18PM +0200, Martin Steigerwald wrote:
> [repost. I didn't notice autocompletion gave me wrong address for fsdevel,
> blacklisted now]
>
> Hello.
>
> What do you think of
>
> http://open-zfs.org/wiki/Projects/ZFS_Channel_Programs
Domain not
These are great suggestions. I will test several of them (or all of
them) and report back with my results once I have done the testing.
Thank you! This is a fantastic mailing list.
P.S. I'm inclined to stay with Firefox, but I will definitely test
Chromium vs Firefox after making a series of
ure
> + # invalidate the page cache
> + $XFS_IO_PROG -f -c "fadvise -d 0 128K" $SCRATCH_MNT/foobar |
> _filter_xfs_io
> +
> + enable_io_failure
> + od -x $SCRATCH_MNT/foobar > /dev/null &
why are you using od to read the data when the output is piped
On Tue, Sep 19, 2017 at 3:37 PM, Andrei Borzenkov <arvidj...@gmail.com> wrote:
> 18.09.2017 09:10, Dave wrote:
>> I use snap-sync to create and send snapshots.
>>
>> GitHub - wesbarnett/snap-sync: Use snapper snapshots to backup to external
>> drive
>>
>On Thu 2017-08-31 (09:05), Ulli Horlacher wrote:
>> When I do a
>> btrfs filesystem defragment -r /directory
>> does it defragment really all files in this directory tree, even if it
>> contains subvolumes?
>> The man page does not mention subvolumes on this topic.
>
>No answer so far :-(
>
>But
On Fri, Sep 15, 2017 at 6:01 AM, Ulli Horlacher
wrote:
>
> On Fri 2017-09-15 (06:45), Andrei Borzenkov wrote:
>
> > The actual question is - do you need to mount each individual btrfs
> > subvolume when using encfs?
>
> And even worse it goes with ecryptfs: I do not
On Mon, Sep 18, 2017 at 12:23 AM, Andrei Borzenkov <arvidj...@gmail.com> wrote:
>
> 18.09.2017 05:31, Dave wrote:
> > Sometimes when using btrfs send-receive, I get errors like this:
> >
> > ERROR: parent determination failed for
> >
> > When this happens
new subject for new question
On Mon, Sep 18, 2017 at 1:37 PM, Andrei Borzenkov wrote:
> >> What scenarios can lead to "ERROR: parent determination failed"?
> >
> > The man page for btrfs-send is reasonably clear on the requirements
> > btrfs imposes. If you want to use
On Mon, Sep 18, 2017 at 12:23 AM, Andrei Borzenkov <arvidj...@gmail.com> wrote:
> 18.09.2017 05:31, Dave wrote:
>> Sometimes when using btrfs send-receive, I get errors like this:
>>
>> ERROR: parent determination failed for
>>
>> When this happens, b
Sometimes when using btrfs send-receive, I get errors like this:
ERROR: parent determination failed for
When this happens, btrfs send-receive backups fail. And all subsequent
backups fail too.
The issue seems to stem from the fact that an automated cleanup
process removes certain earlier
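When chasing this, the usual check is whether the parent snapshot named with `-p` still exists on the source, and whether its counterpart on the target still carries a matching Received UUID. A dry-run sketch with invented paths (commands recorded, not executed):

```shell
plan=$(cat <<'EOF'
# Source side: the -p parent must still exist and be read-only.
btrfs subvolume show /snapshots/parent
# Target side: the received copy must keep its Received UUID; if a
# cleanup job deleted the snapshot (or the UUID was cleared), every
# later incremental receive fails with "parent determination failed".
btrfs subvolume show /backup/parent
EOF
)
echo "$plan"
```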
On Mon, Sep 11, 2017 at 11:19 PM, Andrei Borzenkov <arvidj...@gmail.com> wrote:
> 11.09.2017 20:53, Axel Burri wrote:
>> On 2017-09-08 06:44, Dave wrote:
>>> I'm referring to the link below. Using "btrfs subvolume snapshot -r"
>>> copies the Received
olume read-write, I
> recommend to use "btrfs subvolume snapshot ".
>
> There is a FAQ entry on btrbk on how to fix this:
>
> https://github.com/digint/btrbk/blob/master/doc/FAQ.md#im-getting-an-error-aborted-received-uuid-is-set
>
>
> On 2017-09-07 15:34, Dave wrote:
> >
l.
Thanks for any further feedback, including answers to my questions and
comments about whether this is a known issue.
On Thu, Sep 7, 2017 at 8:39 AM, Dave <davestechs...@gmail.com> wrote:
>
> Hello. Can anyone further explain this issue ("you have a Received UUID on
> the
> The problem can be that you have a Received UUID on the source volume. This
> breaks send-receive.
>
> From: Dave <davestechs...@gmail.com> -- Sent: 2017-09-07 - 06:43
>
>> Here is more info and a possible (shocking) explanation. This
>> aggregates my prio
Here is more info and a possible (shocking) explanation. This
aggregates my prior messages and it provides an almost complete set of
steps to reproduce this problem.
Linux srv 4.9.41-1-lts #1 SMP Mon Aug 7 17:32:35 CEST 2017 x86_64 GNU/Linux
btrfs-progs v4.12
My steps:
[root@srv]# sync
recent files are MISSING
Any ideas what could be causing this problem with incremental backups?
On Wed, Sep 6, 2017 at 3:23 PM, Dave <davestechs...@gmail.com> wrote:
>
> Here is more info on this problem. I can reproduce this without using my
> script. Simple btrfs commands
I'm running Arch Linux on BTRFS. I use Snapper to take hourly
snapshots and it works without any issues.
I have a bash script that uses send | receive to transfer snapshots to
a couple external HDD's. The script runs daily on a systemd timer. I
set all this up recently and I first noticed that it