On Friday 25 September 2015 13:51:34 Hugo Mills wrote:
> On Fri, Sep 25, 2015 at 03:36:18PM +0200, Sjoerd wrote:
> > Thanks all for the feedback. Still doubting though to go for 4.2.1 or not.
> > Main reason is that I am currently running 4.1.7 on my laptop which seems
> > to work fine and had
On 25 September 2015 at 15:51, Hugo Mills wrote:
> On Fri, Sep 25, 2015 at 03:36:18PM +0200, Sjoerd wrote:
>> Thanks all for the feedback. Still doubting though to go for 4.2.1 or not.
>> Main reason is that I am currently running 4.1.7 on my laptop which seems to
>> work fine
I want to set some per-transaction flags, so instead of adding yet another int,
let's just convert the current two int indicators to flags and add a flags field
for future use. Thanks,
Signed-off-by: Josef Bacik
---
V1->V2: fixed the wrong bit set in my conversion
On Thu, Sep 24, 2015 at 09:18:51AM +0800, Qu Wenruo wrote:
> After a lecture on ext* given by my teammate, Wang Xiaoguang, I'm more
> convinced that, at least for converting from ext*, a separate chunk type
> will not be a good idea.
Thanks for the additional research.
> For above almost empty ext4
On Thu, Sep 10, 2015 at 10:34:13AM +0800, Qu Wenruo wrote:
> btrfs-progs: fsck: Add check for extent and parent chunk type
> btrfs-progs: utils: Check nodesize against features
Applied the two, thanks.
> btrfs-progs: convert: force convert to use mixed block groups
> btrfs-progs: util:
On 09/24/15 22:56, Josef Bacik wrote:
> We have a mechanism to make sure we don't lose updates for ordered extents
> that
> were logged in the transaction that is currently running. We add the ordered
> extent to a transaction list and then the transaction waits on all the ordered
> extents in
On 09/25/15 13:05, Holger Hoffstätte wrote:
> Tried this and unexpectedly didn't get any lockups or 'splosions during normal
> operation, but balance now seems very slow and sits idle most of the time.
Meh.. this doesn't seem to have anything to do with this particular patch
after all. Whoopdedoo.
On Thu, Aug 06, 2015 at 11:05:55AM +0800, Zhao Lei wrote:
> Scrub outputs the following error message in my test:
> ERROR: scrubbing /var/ltf/tester/scratch_mnt failed for device id 5
> (Success)
>
> It is caused by a broken kernel and fs, but we need to avoid
> outputting both "error and
On Wed, May 13, 2015 at 05:15:34PM +0800, Qu Wenruo wrote:
> Before the patch, btrfs-progs will only read
> sizeof(struct btrfs_super_block) and restore it into super_copy.
>
> This makes the checksum check for the superblock impossible.
> Change it to read the whole superblock.
>
> Signed-off-by: Qu
Reject copies that don't have the COPY_FR_REFLINK flag set.
Signed-off-by: Anna Schumaker
Reviewed-by: David Sterba
---
fs/btrfs/ioctl.c | 4
1 file changed, 4 insertions(+)
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index
From: Zach Brown
This rearranges the existing COPY_RANGE ioctl implementation so that the
.copy_file_range file operation can call the core loop that copies file
data extent items.
The extent copying loop is lifted up into its own function. It retains
the core btrfs error
The way to think about this is that the destination filesystem reads the
data from the source file and processes it accordingly. This is
especially important to avoid an infinite loop when doing a "server to
server" copy on NFS.
Signed-off-by: Anna Schumaker
---
From: Zach Brown
Add a copy_file_range() system call for offloading copies between
regular files.
This gives an interface to underlying layers of the storage stack which
can copy without reading and writing all the data. There are a few
candidates that should support copy
On Fri, Sep 25, 2015 at 2:26 PM, Jogi Hofmüller wrote:
> That was right while the RAID was in degraded state and rebuilding.
On the guest:
Aug 28 05:17:01 vm kernel: [140683.741688] BTRFS info (device vdc):
disk space caching is enabled
Aug 28 05:17:13 vm kernel: [140695.575896]
I still want to do an in-kernel copy even if the files are on different
mountpoints, and NFS has a "server to server" copy that expects two
files on different mountpoints. Let's have individual filesystems
implement this check instead.
Signed-off-by: Anna Schumaker
copy_file_range() is a new system call for copying ranges of data
completely in the kernel. This gives filesystems an opportunity to
implement some kind of "copy acceleration", such as reflinks or
server-side-copy (in the case of NFS).
Signed-off-by: Anna Schumaker
From: Zach Brown
Add sys_copy_file_range to the x86 syscall tables.
Signed-off-by: Zach Brown
[Anna Schumaker: Update syscall number in syscall_32.tbl]
Signed-off-by: Anna Schumaker
---
arch/x86/entry/syscalls/syscall_32.tbl | 1 +
The NFS server will need some kind of fallback for filesystems that don't
have any kind of copy acceleration, and it should be generally useful to
have an in-kernel copy to avoid lots of switches between kernel and user
space.
I make this configurable by adding two new flags. Users who only want
Copy system calls came up during Plumbers a while ago, mostly because several
filesystems (including NFS and XFS) are currently working on copy acceleration
implementations. We haven't heard from Zach Brown in a while, so I volunteered
to push his patches upstream so individual filesystems don't
On Wed, May 13, 2015 at 05:15:35PM +0800, Qu Wenruo wrote:
> Now btrfs-progs will have much stricter superblock checks, based on the
> kernel superblock checks.
>
> This should at least prevent some hostile crafted images from crashing
> commands like btrfsck.
>
> Signed-off-by: Qu Wenruo
On Fri, Sep 25, 2015 at 9:25 AM, Bostjan Skufca wrote:
>
> Similar here: I am sticking with 3.19.2 which has proven to work fine for me
I'd recommend still tracking SOME stable series. I'm sure there were
fixes in 3.19 for btrfs (to say nothing of other subsystems) that
you're
From offlist host logs (the Btrfs errors happen in a VM guest), I
think this is a hardware problem.
Aug 28 07:04:22 host kernel: [41367948.153031] sas:
sas_scsi_find_task: task 0x880e85c09c00 is done
Aug 28 07:04:22 host kernel: [41367948.153033] sas:
sas_eh_handle_sas_errors: task
On Tue, Jul 28, 2015 at 03:53:58PM +0800, Zhaolei wrote:
> From: Zhao Lei
>
> Anthony Plack reported an output bug on the mailing list:
> title: btrfs-progs SCRUB reporting aborted but still running - minor
>
> btrfs scrub status report it was aborted but
Thanks for the heart-warming recommendation, this is also what I generally do.
In this case (as I vaguely remember) the reasoning for going with
3.19.x at the time was that I was hitting some btrfs issues around
3.16, and at the same time eyeing the btrfs changesets going into mainline.
This, combined
On Mon, Jul 27, 2015 at 07:32:37PM +0800, Zhaolei wrote:
> From: Zhao Lei
>
> fsck-tests.sh failed and showed the following message on my node:
> # ./fsck-tests.sh
> [TEST] 001-bad-file-extent-bytenr
> disk-io.c:1444: write_dev_supers: Assertion `ret !=
On Tue, Jun 09, 2015 at 03:57:40PM +0800, Qu Wenruo wrote:
> When testing under libguestfs, btrfs-convert will never succeed in fixing
> the chunk map, and always fails.
>
> But in that case, it's already a mountable btrfs.
> So it's better to inform the user with a different error message for that case.
>
> The
On Thu, Aug 06, 2015 at 11:05:54AM +0800, Zhao Lei wrote:
> A switch statement is more suitable for outputting the corresponding
> message for an errno.
>
> Suggested-by: David Sterba
> Signed-off-by: Zhao Lei
Applied, thanks.
Hi Linus,
My for-linus-4.3 branch has a few fixes:
git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs.git
for-linus-4.3
This is an assorted set I've been queuing up:
Jeff Mahoney tracked down a tricky one where we ended up starting IO on
the wrong mapping for special files in
Hello all,
I have kind of a serious problem with one of my disks.
The controller of one of my external drives died (WD Studio). The disk
is alright though. I cracked open the case, got the drive out and
connected it via a SATA-USB interface.
Now, mounting the filesystem is not possible.
On 2015-09-25 04:37, Qu Wenruo wrote:
Stephane Lesimple reported a qgroup rescan bug:
[92098.841309] general protection fault: [#1] SMP
[92098.841338] Modules linked in: ...
[92098.841814] CPU: 1 PID: 24655 Comm: kworker/u4:12 Not tainted
4.3.0-rc1 #1
[92098.841868] Workqueue:
On 25/09/15 21:48, Anna Schumaker wrote:
> The NFS server will need some kind of fallback for filesystems that don't
> have any kind of copy acceleration, and it should be generally useful to
> have an in-kernel copy to avoid lots of switches between kernel and user
> space.
>
> I make this
On Fri, Sep 25, 2015 at 11:45:44PM +0200, Marcel Bischoff wrote:
> Hello all,
>
> I have kind of a serious problem with one of my disks.
>
> The controller of one of my external drives died (WD Studio). The
> disk is alright though. I cracked open the case, got the drive out
> and connected it
On Tue, Sep 22, 2015 at 7:45 AM, David Sterba wrote:
> On Wed, Sep 02, 2015 at 06:05:17PM -0700, Justin Maggard wrote:
>> v2: Fix stupid error while making formatting changes...
>
> I haven't noticed any difference between the patches, what exactly did
> you change?
>
I broke
Sjoerd posted on Fri, 25 Sep 2015 15:40:39 +0200 as excerpted:
> Is it better to use raw devices for a RAID setup or make one partition
> on the drive and then create your RAID from there?
> Right now I have one setup that uses raw, but I get messages "unknown
> partition table" all the time in my
On Fri, Sep 25, 2015 at 1:48 PM, Anna Schumaker
wrote:
> The NFS server will need some kind of fallback for filesystems that don't
> have any kind of copy acceleration, and it should be generally useful to
> have an in-kernel copy to avoid lots of switches between kernel
Marcel Bischoff posted on Fri, 25 Sep 2015 23:45:44 +0200 as excerpted:
> Hello all,
>
> I have kind of a serious problem with one of my disks.
>
> The controller of one of my external drives died (WD Studio). The disk
> is alright though. I cracked open the case, got the drive out and
>
Bostjan Skufca posted on Fri, 25 Sep 2015 16:34:16 +0200 as excerpted:
> Similar here: I am sticking with 3.19.2 which has proven to work fine
> for me (backup systems with btrfs on lvm, lots of snapshots/subvolumes
> and occasional rebalance, no fancy/fresh stuff like btrfs-raid, online
>
Anand Jain wrote on 2015/09/25 14:54 +0800:
On 09/21/2015 10:10 AM, Qu Wenruo wrote:
Just the same as the mount time check, use the new btrfs_check_degraded() to
do a per-chunk check.
Signed-off-by: Qu Wenruo
---
fs/btrfs/super.c | 11 +++
1 file changed, 7
From: Qu Wenruo
Just the same as the mount time check, use the new btrfs_check_degraded() to
do a per-chunk check.
Signed-off-by: Qu Wenruo
Btrfs: use btrfs_error instead of btrfs_err during remount
apply on top of the patch
[PATCH 1/1] Btrfs:
Thanks Anand,
I'm OK with both new patches.
Thanks for the modification.
Qu
Anand Jain wrote on 2015/09/25 16:30 +0800:
Qu,
Strictly speaking IMO it should be reported to the user on the CLI
terminal, and no logging is required. Since it's not that easy to get
that at this point, I am OK with
From: Qu Wenruo
Now use btrfs_check_degraded() to do the mount time degraded check.
With this patch, now we can mount with the following case:
# mkfs.btrfs -f -m raid1 -d single /dev/sdb /dev/sdc
# wipefs -a /dev/sdc
# mount /dev/sdb /mnt/btrfs -o degraded
As the
Qu,
Strictly speaking IMO it should be reported to the user on the CLI
terminal, and no logging is required. Since it's not that easy to get
that at this point, I am OK with logging it as an error. Since we are
failing the task (mount), an error is better.
I have made that change on top of the
On Sat, Sep 19, 2015 at 9:26 PM, Jim Salter wrote:
>
> ZFS, by contrast, works like absolute gangbusters for KVM image storage.
I'd be interested in what allows ZFS to handle KVM image storage well,
and whether this could be implemented in btrfs. I'd think that the
fragmentation
Followup from my observation wrt. "Btrfs: change how we wait for
pending ordered extents" and balance sitting idle:
On Thu, Sep 24, 2015 at 4:47 PM, Josef Bacik wrote:
> I want to set some per transaction flags, so instead of adding yet another int
> lets just convert the current
I suspect that the answer most likely boils down to "the ARC".
ZFS uses an Adaptive Replacement Cache instead of a standard FIFO, which
keeps blocks in cache longer if they have been accessed again while cached. This
means much higher cache hit rates, which also means minimizing the
effects of
On 2015-09-25 08:48, Rich Freeman wrote:
On Sat, Sep 19, 2015 at 9:26 PM, Jim Salter wrote:
ZFS, by contrast, works like absolute gangbusters for KVM image storage.
I'd be interested in what allows ZFS to handle KVM image storage well,
and whether this could be implemented
Hi,
the commit "Btrfs: incremental send, check if orphanized dir inode needs
delayed rename" causes incremental send/receive to fail if a file is
unlinked and then reflinked to the same location from the parent
snapshot. An xfstest reproducing the issue is attached.
Regards,
Martin
From
btrfs_error() and btrfs_std_error() do the same thing
and call _btrfs_std_error(), so consolidate them together.
The main motivation is that btrfs_error() is closely
named to btrfs_err(): one handles the error action, the other
logs the error, so they should not be named so similarly.
Signed-off-by:
On 09/21/2015 10:10 AM, Qu Wenruo wrote:
Just the same as the mount time check, use the new btrfs_check_degraded() to
do a per-chunk check.
Signed-off-by: Qu Wenruo
---
fs/btrfs/super.c | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git
On 09/21/2015 10:10 AM, Qu Wenruo wrote:
Now use btrfs_check_degraded() to do the mount time degraded check.
With this patch, now we can mount with the following case:
# mkfs.btrfs -f -m raid1 -d single /dev/sdb /dev/sdc
# wipefs -a /dev/sdc
# mount /dev/sdb /mnt/btrfs -o degraded
As
On Fri, Sep 25, 2015 at 02:43:01PM +0800, Anand Jain wrote:
> btrfs_error() and btrfs_std_error() do the same thing
> and call _btrfs_std_error(), so consolidate them together.
> And the main motivation is that btrfs_error() is closely
> named with btrfs_err(), one handles error action the
On Thu, Sep 24, 2015 at 08:13:33PM +0100, Luis de Bethencourt wrote:
> reada is using -1 instead of the -ENOMEM defined macro to specify that
> a buffer allocation failed. Since the error number is propagated, the
> caller will get a -EPERM which is the wrong error condition.
>
> Also, updating
On Wed, Sep 16, 2015 at 05:40:46PM +0800, Zhao Lei wrote:
> +static inline void __veprintf(const char *prefix, const char *format,
> + va_list ap)
> +{
> + if (prefix)
> + fprintf(stderr, "%s", prefix);
> + vfprintf(stderr, format, ap);
I'm not sure
Thanks all for the feedback. I'm still unsure whether to go for 4.2.1 or not.
The main reason is that I am currently running 4.1.7 on my laptop, which seems
to work fine, and I had some issues with the 4.2.0 kernel. No issues that I
think were btrfs-related, but more related to my nvidia card. Anyway
On 09/25/2015 08:30 AM, Holger Hoffstätte wrote:
Followup from my observation wrt. "Btrfs: change how we wait for
pending ordered extents" and balance sitting idle:
On Thu, Sep 24, 2015 at 4:47 PM, Josef Bacik wrote:
I want to set some per transaction flags, so instead of
On Fri, 25 Sep 2015 09:12:15 -0400
Rich Freeman wrote:
> I'll just say that my btrfs stability has gone WAY up when I stopped
> following this advice and instead followed a recent longterm. Right
> now I'm following 3.18. There were some really bad corruption issues
Pretty much bog-standard, as ZFS goes. Nothing different than what's
recommended for any generic ZFS use.
* set blocksize to match hardware blocksize - 4K drives get 4K
blocksize, 8K drives get 8K blocksize (Samsung SSDs)
* LZO compression is a win. But it's not like anything sucks without
On Fri, Sep 25, 2015 at 03:36:18PM +0200, Sjoerd wrote:
> Thanks all for the feedback. Still doubting though to go for 4.2.1 or not.
> Main reason is that I am currently running 4.1.7 on my laptop which seems to
> work fine and had some issues with the 4.2.0 kernel. No issues I think that
>
On Fri, Sep 25, 2015 at 7:20 AM, Austin S Hemmelgarn
wrote:
> On 2015-09-24 17:07, Sjoerd wrote:
>>
>> Maybe a silly question for most of you, but the wiki states to always try
>> to
>> use the latest kernel with btrfs. Which one would be best:
>> - 4.2.1 (currently latest
On 2015-09-25 09:12, Jim Salter wrote:
Pretty much bog-standard, as ZFS goes. Nothing different than what's
recommended for any generic ZFS use.
* set blocksize to match hardware blocksize - 4K drives get 4K
blocksize, 8K drives get 8K blocksize (Samsung SSDs)
* LZO compression is a win. But
Is it better to use raw devices for a RAID setup, or to make one partition on
the drive and then create your RAID from there?
Right now I have one setup that uses raw devices, but I get messages "unknown
partition table" all the time in my logs.
I am planning to create a RAID 5 setup (seems to be stable
Now the memory is OK.
I found this patch, but I am not sure if it's the correct one.
https://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg33496.html
Can you help me please, which patch should I use?
Frantisek
2015-09-23 17:22 GMT+02:00 Vackář František :
> Yes, I have
On Fri, Sep 25, 2015 at 11:36:55AM +0200, Vackář František wrote:
> Now the memory is OK.
>
> I found this patch, but I am not sure if it's the correct one.
> https://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg33496.html
>
> Can you help me please, which patch should I use?
Yes,
On Tue, Sep 15, 2015 at 04:46:23PM +0800, Anand Jain wrote:
> A fsid can be mounted multiple times, with different subvolids.
> And we don't have to scan a mount point if we already have
> it in the scanned list.
>
> This nicely avoids the following warning with multiple
> subvol mounts on
2015-09-25 16:52 GMT+03:00 Jim Salter :
> Pretty much bog-standard, as ZFS goes. Nothing different than what's
> recommended for any generic ZFS use.
>
> * set blocksize to match hardware blocksize - 4K drives get 4K blocksize, 8K
> drives get 8K blocksize (Samsung SSDs)
> * LZO
On 2015-09-25 10:02, Timofey Titovets wrote:
2015-09-25 16:52 GMT+03:00 Jim Salter :
Pretty much bog-standard, as ZFS goes. Nothing different than what's
recommended for any generic ZFS use.
* set blocksize to match hardware blocksize - 4K drives get 4K blocksize, 8K
drives