On 04.08.2016 18:53, Lutz Vieweg wrote:
>
> I was today hit by what I think is probably the same bug:
> A btrfs on a close-to-4TB sized block device, only half filled
> to almost exactly 2 TB, suddenly says "no space left on device"
> upon any attempt to write to it. The filesystem was NOT
On 30.07.2016 22:02, Chris Murphy wrote:
> Short version: When systemd-logind's logind.conf sets KillUserProcesses=yes,
> and the user does "sudo btrfs scrub start" in e.g. GNOME Terminal, and
> then logs out of the shell, the user space operation is killed, and
> btrfs scrub status reports that the
On 21.07.2016 14:56, Chris Mason wrote:
> On 07/20/2016 01:50 PM, Gabriel C wrote:
>>
>> After 24h of running the program and Thunderbird, all is still fine here.
>>
>> I'll let it run one more day, but it looks very good.
>>
>
> Thanks for your time i
On 20.07.2016 15:50, Chris Mason wrote:
>
>
> On 07/19/2016 08:11 PM, Gabriel C wrote:
>>
>>
>> On 19.07.2016 13:05, Chris Mason wrote:
>>> On Mon, Jul 11, 2016 at 11:28:01AM +0530, Chandan Rajendra wrote:
>>>> Hi Chris,
>>>>
>>
On 19.07.2016 13:05, Chris Mason wrote:
> On Mon, Jul 11, 2016 at 11:28:01AM +0530, Chandan Rajendra wrote:
>> Hi Chris,
>>
>> I am able to reproduce the issue with the 'short-write' program. But before
>> the call trace associated with btrfs_destroy_inode(), I see the following
>> call
>>
On 08.07.2016 14:41, Chris Mason wrote:
On 07/08/2016 05:57 AM, Gabriel C wrote:
2016-07-07 21:21 GMT+02:00 Chris Mason <c...@fb.com>:
On 07/07/2016 06:24 AM, Gabriel C wrote:
Hi,
while running Thunderbird on Linux 4.6.3 and 4.7.0-rc6 (didn't test
other versions)
I t
2016-07-08 14:41 GMT+02:00 Chris Mason <c...@fb.com>:
>
>
> On 07/08/2016 05:57 AM, Gabriel C wrote:
>>
>> 2016-07-07 21:21 GMT+02:00 Chris Mason <c...@fb.com>:
>>>
>>>
>>>
>>> On 07/07/2016 06:24 AM, Gabriel C wrote:
>>
2016-07-07 21:21 GMT+02:00 Chris Mason <c...@fb.com>:
>
>
> On 07/07/2016 06:24 AM, Gabriel C wrote:
>>
>> Hi,
>>
>> while running Thunderbird on Linux 4.6.3 and 4.7.0-rc6 (didn't test
>> other versions)
>> I trigger the following :
>
>
1 size 1.36TiB used 37.06GiB path /dev/sda1
btrfs fi df /
Data, single: total=32.00GiB, used=30.43GiB
System, DUP: total=32.00MiB, used=16.00KiB
Metadata, DUP: total=2.50GiB, used=1.04GiB
GlobalReserve, single: total=368.00MiB, used=0.00B
Regards,
Gabriel C
--
To unsubscribe from this list
On Sat, 15 Aug 2015 07:40:40 +0800, Anand Jain wrote:
Hello,
As of now, btrfs sysfs does not include attributes for the volume-manager
part in its layout. That support is being developed, and there are
two candidate layouts below, so I have a quick survey to find out which
will be
On 18/11/2014 11:39, Robert White wrote:
Howdy,
How does one get the exact size (in blocks preferably, but bytes okay)
of the filesystem inside a partition? I know how to get the partition
size, but that's not useful when shrinking a partition...
dev_item.total_bytes in
Hello,
For over a year now, I've been experimenting with stacked filesystems
as a way to save on resources. A basic OS layer is shared among
Containers, each of which stacks a layer with modifications on top of
it. This approach means that Containers share buffer cache and
loaded
Now... since the snapshot's FS tree is a direct duplicate of the
original FS tree (actually, it's the same tree, but they look like
different things to the outside world), they share everything --
including things like inode numbers. This is OK within a subvolume,
because we have the
On Tue, 23 Jul 2013 21:30:13 CEST, Hugo Mills wrote:
On Tue, Jul 23, 2013 at 07:47:41PM +0200, Gabriel de Perthuis wrote:
Now... since the snapshot's FS tree is a direct duplicate of the
original FS tree (actually, it's the same tree, but they look like
different things to the outside
On Sat, 20 Jul 2013 17:15:50 +0200, Jason Russell wrote:
I've also noted that this excessive HDD chatter does not occur
immediately after a fresh format with Arch on a btrfs root.
I've made some deductions/assumptions:
This only seems to occur with btrfs roots.
This only happens after some
On Mon, 15 Jul 2013 13:55:51 -0700, Zach Brown wrote:
I'd get rid of all this code by only copying each input argument on to
the stack as it's needed and by getting rid of the writable output
struct fields. (more on this later)
As I said, I'd get rid of the output fields. Like the other
---
The matching kernel patch is here:
https://github.com/g2p/linux/tree/v3.10%2Bextent-same (rebased on 3.10, fixing
a small conflict)
Requires the btrfs-extent-same command:
- http://permalink.gmane.org/gmane.comp.file-systems.btrfs/26579
- https://github.com/markfasheh/duperemove
On Thu, 20 Jun 2013 10:16:22 +0100, Hugo Mills wrote:
On Thu, Jun 20, 2013 at 10:47:53AM +0200, Clemens Eisserer wrote:
Hi,
I've observed a rather strange behaviour while trying to mount two
identical copies of the same image to different mount points.
Each modification to one image is also
Instead of redirecting to a different block device, Btrfs could and
should refuse to mount an already-mounted superblock when the block
device doesn't match, somewhere in or below btrfs_mount. Registering
extra, distinct superblocks for an already mounted raid is a different
matter, but that
Thank you for your reply. I appreciate it. Unfortunately this issue is a deal
killer for us. The ability to take very fast snapshots and replicate them to
another site is key for us. We just can't use Btrfs with this setup. That's
too bad. Good luck and thank you.
The issue we were
On 11/06/2013 22:31, Mark Fasheh wrote:
Perhaps this isn't a limitation per se, but extent-same requires read/write
access to the files we want to dedupe. During my last series I had a
conversation with Gabriel de Perthuis about access checking where we tried
to maintain the ability
On 11/06/2013 23:04, Mark Fasheh wrote:
On Tue, Jun 11, 2013 at 10:56:59PM +0200, Gabriel de Perthuis wrote:
What I found however is that neither of these is a great idea ;)
- We want to require that the inode be open for writing so that an
unprivileged user can't do things like run
+#define BTRFS_MAX_DEDUPE_LEN (16 * 1024 * 1024)
+#define BTRFS_ONE_DEDUPE_LEN (1 * 1024 * 1024)
+
+static long btrfs_ioctl_file_extent_same(struct file *file,
+void __user *argp)
+{
+ struct btrfs_ioctl_same_args *args;
+ struct
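How a userspace caller might drive a long range against those two limits can be sketched as follows. This is purely illustrative: `dedupe_chunks` is a hypothetical helper, not part of the patch; only the two length constants come from the code above.

```python
# Sketch of working within the patch's limits: BTRFS_MAX_DEDUPE_LEN caps one
# call at 16 MiB, and work proceeds in BTRFS_ONE_DEDUPE_LEN (1 MiB) steps.
MAX_DEDUPE_LEN = 16 * 1024 * 1024
ONE_DEDUPE_LEN = 1 * 1024 * 1024

def dedupe_chunks(offset, length):
    """Split a requested range into 1 MiB pieces, capped at 16 MiB total."""
    length = min(length, MAX_DEDUPE_LEN)
    chunks = []
    while length > 0:
        step = min(length, ONE_DEDUPE_LEN)
        chunks.append((offset, step))
        offset += step
        length -= step
    return chunks

print(len(dedupe_chunks(0, 32 * 1024 * 1024)))  # 16: a 32 MiB request is capped
```

A caller wanting to dedupe more than 16 MiB would simply loop, issuing one capped call per iteration.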
On Sat, 25 May 2013 00:38:27 CEST, Mark Fasheh wrote:
On Fri, May 24, 2013 at 09:50:14PM +0200, Gabriel de Perthuis wrote:
Sure. Actually, you got me thinking about some sanity checks... I need to
add at least this check:
if (btrfs_root_readonly(root))
        return -EROFS;
On Fri, 17 May 2013 18:54:38 +0800, Anand Jain wrote:
The idea was to introduce /dev/mapper to look for btrfs disks.
However, I found that we first need to consolidate the disk-scan
procedure into one function, which would help tune it consistently
across btrfs-progs. As of now both fi show and
Just scan /dev/block/*. That contains all block devices.
Oh, this is about finding nicer names. Never mind.
A user of a workstation has a home directory /home/john as a subvolume. I
wrote a cron job to make read-only snapshots of it under /home/john/backup
which was fortunate as they just ran a script that did something like
rm -rf ~.
Apart from copying dozens of gigs of data back, is there a
We want this for btrfs_extent_same. Basically readpage and friends do their
own extent locking but for the purposes of dedupe, we want to have both
files locked down across a set of readpage operations (so that we can
compare data). Introduce this variant and a flag which can be set for
How will it compare to bcache? I'm currently thinking about buying an SSD,
but bcache requires some effort to migrate the storage over. And after all
those hassles I am not even sure it would work easily with a
dracut-generated initramfs.
On a side note: dm-cache, which is
Do you plan to support deduplication on a finer-grained basis than file
level? As an example, in the end it could be interesting to deduplicate 1M
blocks of huge files. Backups of VM images come to my mind as a good
candidate. While my current backup script[1] takes care of this by using
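The candidate-finding step for that kind of finer-grained dedupe (1 MiB blocks of big files such as VM images) can be sketched in userspace. This is only an illustration of the idea, not bedup's or the referenced script's actual implementation; `duplicate_blocks` is a hypothetical helper.

```python
import hashlib

BLOCK = 1024 * 1024  # the 1 MiB granularity discussed above

def duplicate_blocks(paths):
    """Map content hash -> [(path, offset), ...] for blocks seen more than once."""
    seen = {}
    for path in paths:
        with open(path, "rb") as f:
            offset = 0
            while True:
                block = f.read(BLOCK)
                if not block:
                    break
                digest = hashlib.sha256(block).hexdigest()
                seen.setdefault(digest, []).append((path, offset))
                offset += len(block)
    return {h: locs for h, locs in seen.items() if len(locs) > 1}
```

Each surviving group is a set of (file, offset) ranges that an extent-same ioctl could then be asked to share, with the kernel re-verifying the bytes under its own locking.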
On Tue, 07 May 2013 23:58:08 +0200, Kai Krakow wrote:
Gabriel de Perthuis g2p.c...@gmail.com wrote:
On a side note: dm-cache, which is already in-kernel, does not need the
backing storage to be reformatted.
On the other hand dm-cache is somewhat complex to assemble, and letting
the system
On Wed, 08 May 2013 01:04:38 +0200, Kai Krakow wrote:
Gabriel de Perthuis g2p.c...@gmail.com wrote:
It sounds simple, and was sort-of prompted by the new syscall taking
short ranges, but it is tricky figuring out a sane heuristic (when to
hash, when to bail, when to submit without comparing
The search ioctl skips items that are too large for a result buffer, but
inline items of a certain size occurring before any search result is
found would trigger an overflow and stop the search entirely.
Bug: https://bugzilla.kernel.org/show_bug.cgi?id=57641
Signed-off-by: Gabriel de Perthuis <g2p.code+bt...@gmail.com>
---
(resent, with the correct header to have stable copied)
fs/btrfs/ioctl.c | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index 2c02310..f49b62f 100644
--- a/fs/btrfs/ioctl.c
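The behavior the patch is after can be illustrated outside the kernel (a sketch, not the ioctl.c code; `fill_results` is a hypothetical model of the result-buffer copy loop): an item too large for the buffer is skipped rather than aborting the search, even when it appears before any result has been copied out.

```python
def fill_results(items, buf_size):
    """items: (key, size) pairs in search order; returns keys copied out."""
    out, used = [], 0
    for key, size in items:
        if size > buf_size:
            continue  # can never fit; skip it rather than stop the search
        if used + size > buf_size:
            break     # buffer full for this call; a later call resumes here
        out.append(key)
        used += size
    return out

# An oversized inline item first in line no longer ends the search:
print(fill_results([("inline", 5000), ("a", 100), ("b", 200)], 4096))
```

The bug being fixed corresponds to taking the `break` path (or overflowing) on the oversized first item, which returned an empty result and stopped iteration entirely.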
On Sun, 05 May 2013 12:07:17 +0200, Kai Krakow wrote:
Hey list,
I wonder if it is possible to deduplicate read-only snapshots.
Background:
I'm using a bash/rsync script[1] to back up my whole system on a nightly
basis to an attached USB3 drive into a scratch area, then take a snapshot
Hello
If I want to manage a complete disk with btrfs, what's the Best
Practice? Would it be best to create the btrfs filesystem on
/dev/sdb, or would it be better to create just one partition from
start to end and then do mkfs.btrfs /dev/sdb1?
Partitions (GPT) are always more flexible and
#define BTRFS_IOC_DEV_REPLACE _IOWR(BTRFS_IOCTL_MAGIC, 53, \
struct btrfs_ioctl_dev_replace_args)
+#define BTRFS_IOC_DEDUP_REGISTER _IO(BTRFS_IOCTL_MAGIC, 54)
This number has already been used by the offline dedup patches.
On Sat, Apr 20, 2013 at 05:49:25PM +0200, Gabriel de Perthuis wrote:
Hi,
The following series of patches implements in btrfs an ioctl to do
offline deduplication of file extents.
I am a fan of this patch, the API is just right. I just have a few tweaks
to suggest to the argument checking
Hi,
The following series of patches implements in btrfs an ioctl to do
offline deduplication of file extents.
I am a fan of this patch, the API is just right. I just have a few
tweaks to suggest to the argument checking.
At first the 1M limitation on the length was a bit inconvenient, but
There is a missing dependency: liblzo2-dev
I suggest amending the wiki to add liblzo2-dev to the
apt-get line for Ubuntu/Debian.
Added. Other distros may need some additions too.
Anyone can edit the wiki, as the spambots will attest; a ConfirmEdit
captcha at signup would be
Hello,
I have a filesystem that has become unusable because of a balance I can't
stop. It is very close to full, and the balance is preventing me from
growing it.
It was started like this:
sudo btrfs filesystem balance start -v -musage=60 -dusage=60 /srv/backups
It has been stuck at 0% across
On Sat, 02 Mar 2013 17:12:37 +0600, Roman Mamedov wrote:
Mount with the skip_balance option
https://btrfs.wiki.kernel.org/index.php/Mount_options then you can issue
btrfs fi balance cancel and it will succeed.
Excellent, thank you.
I had just thought of doing the same thing with ro and it
Hi,
After mounting the system with noatime the problem disappeared, like
magic.
Incidentally, the current version of bedup uses a private mountpoint with
noatime whenever you don't give it the path to a mounted volume. You can
use it with no arguments or designate a filesystem by its
Here is what I see in my kern.log (see below).
For me this first happened when the filesystem was close to full (less
than 1GB left), but someone on the irc channel mentioned a similar
problem on suspend to ram.
The files that have checksum failures end up with their first 4k filled
with 0x01
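Files damaged in the way described could be spotted with a small check. This is a hypothetical helper matching the symptom in the report (first 4 KiB all 0x01 bytes), not a fix or anything from btrfs-progs:

```python
def first_page_is_0x01(path, page_size=4096):
    """True if the file's first 4 KiB is entirely 0x01 bytes, per the report."""
    with open(path, "rb") as f:
        head = f.read(page_size)
    return len(head) == page_size and head == b"\x01" * page_size
```

Walking a tree with this predicate would give a quick inventory of affected files to restore from backup.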
On Fri, 02 Nov 2012 13:02:32 +0100, Goffredo Baroncelli wrote:
On 2012-11-02 12:18, Martin Steigerwald wrote:
Metadata, DUP is displayed as 3,50GB on the device level and as 1,75GB
in total. I understand the logic behind this, but this could be a bit
confusing.
But it makes sense: Showing
On Fri, 02 Nov 2012 20:31:56 +0100, Goffredo Baroncelli wrote:
On 11/02/2012 08:05 PM, Gabriel wrote:
On Fri, 02 Nov 2012 13:02:32 +0100, Goffredo Baroncelli wrote:
On 2012-11-02 12:18, Martin Steigerwald wrote:
Metadata, DUP is displayed as 3,50GB on the device level and as
1,75GB in total
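The two figures are consistent: DUP keeps two copies of every metadata byte on one device, so the device-level number is twice the logical one. A one-line conversion makes the relationship explicit (a sketch; the profile table here is illustrative, using names as printed by btrfs fi df):

```python
# Copies stored on disk per logical byte, by allocation profile.
COPIES = {"single": 1, "DUP": 2, "RAID1": 2}

def device_level(logical_gb, profile):
    """Space occupied on the device for a given logical amount."""
    return logical_gb * COPIES[profile]

print(device_level(1.75, "DUP"))  # 3.5, matching the two numbers above
```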
On Fri, 02 Nov 2012 22:06:04 +, Hugo Mills wrote:
On Fri, Nov 02, 2012 at 07:05:37PM +, Gabriel wrote:
On Fri, 02 Nov 2012 13:02:32 +0100, Goffredo Baroncelli wrote:
On 2012-11-02 12:18, Martin Steigerwald wrote:
Metadata, DUP is displayed as 3,50GB on the device level and as 1,75GB
On Fri, 02 Nov 2012 21:46:35 +, Michael Kjörling wrote:
On 2 Nov 2012 20:40 +, from g2p.c...@gmail.com (Gabriel):
Now that I've started bikeshedding, here is something that I would
find pretty much ideal:
Data  Metadata  System  Unallocated
On Thu, 01 Nov 2012 06:06:57 +0100, Arne Jansen wrote:
On 11/01/2012 02:28 AM, Shane Spencer wrote:
That's Plan B. I'll be making a btrfs stream decoder and doing in
place edits. I need to move stuff around to other filesystem types
otherwise I'd just store the stream or apply the stream to
On Thu, 01 Nov 2012 12:29:36 +0100, Arne Jansen wrote:
On 01.11.2012 12:00, Gabriel wrote:
On Thu, 01 Nov 2012 06:06:57 +0100, Arne Jansen wrote:
On 11/01/2012 02:28 AM, Shane Spencer wrote:
That's Plan B. I'll be making a btrfs stream decoder and doing in
place edits. I need to move stuff
On Thu, 25 Oct 2012 23:26:14 -0700, Darrick J. Wong wrote:
Now, here's my proposal for fixing that:
A BTRFS_IOC_SAME_RANGE ioctl would be ideal. Takes two file
descriptors, two offsets, one length, does some locking, checks that
the ranges are identical (returns EINVAL if not), and defers to
As for online dedupe (which seems useful for reducing writes), would it
be useful if one could, given a write request, compare each of the
dirty pages in that request against whatever else the fs has loaded in
the page cache, and try to dedupe against that? We could probably
speed up the
To see the problem, create many hardlinks to the same file (120 should do it),
then look up paths by inode with:
ls -i
btrfs inspect inode-resolve -v $ino /mnt/btrfs
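The hardlink setup can be scripted; only the inode-resolve step needs an actual btrfs mount (the /mnt/btrfs path above), while the links themselves work on any POSIX filesystem. Paths and the helper name here are hypothetical:

```python
import os
import tempfile

def make_hardlinks(count=120):
    """Create `count` hardlinks to one file; return (inode, link count)."""
    d = tempfile.mkdtemp()
    src = os.path.join(d, "link0")
    open(src, "w").close()
    for i in range(1, count):
        os.link(src, os.path.join(d, "link%d" % i))
    st = os.stat(src)
    return st.st_ino, st.st_nlink

ino, nlink = make_hardlinks()
print(nlink)  # 120 names for one inode; feed `ino` to inode-resolve
```

With the directory created on a btrfs mount, passing the printed inode number to `btrfs inspect inode-resolve -v` reproduces the path-listing behavior under discussion.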
I noticed the memory layout of the fspath->val data had some irregularities
(some unnecessary gaps that stop appearing about
Also skip invalid high OIDs, to prevent spurious warnings.
Signed-off-by: Gabriel de Perthuis <g2p.code+bt...@gmail.com>
---
send-utils.c | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/send-utils.c b/send-utils.c
index a43d47e..03ca72a 100644
--- a/send-utils.c
+++ b/send
This message is more explicit than ERROR: could not resolve root_id,
the message that will be shown immediately before `btrfs send` bails.
Also skip invalid high OIDs.
Signed-off-by: Gabriel de Perthuis <g2p.code+bt...@gmail.com>
---
send-utils.c | 6 ++
1 file changed, 6 insertions(+)
diff
This fixes a bug which causes the first character of each filename in
the destination to be omitted.
Signed-off-by: Eduard - Gabriel Munteanu eduard.munte...@linux360.ro
---
bcp |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/bcp b/bcp
index 5729e91..c6b4bef 100755