On Wednesday 05 January 2011 03:11:36 Shaohua Li wrote:
Did you notice the comment above the function? ;-)
You should really add the new ioctls to compat_sys_ioctl, not
to the COMPATIBLE_IOCTL() list, in order to make the behavior
consistent between 32- and 64-bit user space. The main
On Wednesday 05 January 2011 10:09:20 Arnd Bergmann wrote:
Thanks, fixed them.
The patch you posted still uses COMPATIBLE_IOCTL. Wrong patch?
On a second look, I noticed that you now have both the COMPATIBLE_IOCTL
and the case statement in compat_sys_ioctl. The former can be
dropped.
On Wednesday 05 January 2011 03:17:16 Shaohua Li wrote:
On Tue, 2011-01-04 at 17:40 +0800, Arnd Bergmann wrote:
Have you tried passing just a single metadata_incore_ent
at the ioctl and looping in user space? I would guess the
extra overhead of that would be small enough, but that might
Hello, Chris
I have a bunch of random fixes of the space management in
git://repo.or.cz/linux-btrfs-devel.git space-manage
They are the ENOSPC fixes, as well as fixes for the df command.
The first one and the last one fix the wrong free space information reported
by the df command. The second one
Josef has implemented mixed data/metadata chunks; we must add those chunks'
space just as we do for data chunks.
Signed-off-by: Miao Xie mi...@cn.fujitsu.com
---
fs/btrfs/super.c | 7 +++----
1 files changed, 3 insertions(+), 4 deletions(-)
diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
index
- make it return the start position and length of the max free space when it
cannot find a suitable free space.
- make it more readable
Signed-off-by: Miao Xie mi...@cn.fujitsu.com
---
fs/btrfs/extent-tree.c | 4 +-
fs/btrfs/volumes.c | 155
We cannot write data into files when there is only a tiny amount of free space in the filesystem.
Reproduce steps:
# mkfs.btrfs /dev/sda1
# mount /dev/sda1 /mnt
# dd if=/dev/zero of=/mnt/tmpfile0 bs=4K count=1
# dd if=/dev/zero of=/mnt/tmpfile1 bs=4K count=99
(fill the filesystem)
# umount
With this patch, we change the handling when we cannot get enough free
extents of the default size.
Implementation:
1. Look up a suitable free extent on each device and keep the search result.
If no suitable free extent is found, keep the max free extent instead.
2. If we get enough suitable
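The two-step search above (a per-device suitable extent, with a max-free-extent fallback) can be sketched as a toy model. This is illustrative Python only, not btrfs code; the function name and data layout are invented for the example:

```python
# Toy model of the search described above (not btrfs code; the
# function name and data layout are invented for illustration).

def find_free_extents(devices, wanted):
    """devices maps a device name to its free extents as (start, length)
    pairs. For each device, keep a suitable extent (length >= wanted) if
    one exists; otherwise keep that device's max free extent, so the
    caller can fall back to allocating a smaller chunk."""
    result = {}
    for name, extents in devices.items():
        suitable = [e for e in extents if e[1] >= wanted]
        if suitable:
            result[name] = suitable[0]  # step 1: a suitable free extent
        else:
            # no suitable extent: keep the max free extent instead
            result[name] = max(extents, key=lambda e: e[1])
    return result

found = find_free_extents(
    {"sda": [(0, 4096), (8192, 65536)],
     "sdb": [(0, 1024), (4096, 2048)]},   # sdb has nothing big enough
    wanted=16384,
)
print(found)  # sda yields a suitable extent, sdb only its max free extent
```

The point of keeping the max free extent is that the caller can retry with a smaller stripe size instead of failing with ENOSPC outright.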
When we store data with a RAID profile in btrfs on two or more disks of
different sizes, the df command shows there is some free space in the
filesystem, but the user cannot in fact write any data; df shows the wrong
free space information for btrfs.
# mkfs.btrfs -d raid1 /dev/sda9 /dev/sda10
#
There are two tiny problems:
- One is that when we check whether the chunk size is greater than the max
chunk size, we should take mirrors into account, but the original code didn't.
- The other is that btrfs shouldn't use the size of the residual free space as
the length of a dup chunk when doing
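A toy numeric illustration of the two fixes. This is hypothetical Python, not the actual btrfs allocator; the formula and names are invented purely to show the idea:

```python
# Hypothetical illustration of the two fixes (not real btrfs code):
# 1) compare the chunk's on-disk footprint (logical size * mirrors)
#    against the max chunk size, and
# 2) don't hand a dup chunk the whole residual free space, since both
#    copies must fit in it.

def chunk_alloc_size(requested, max_chunk_size, ncopies, free_space):
    size = min(requested, max_chunk_size // ncopies)  # fix 1: count mirrors
    size = min(size, free_space // ncopies)           # fix 2: both copies fit
    return size

GiB = 2**30
# 10 GiB requested, 8 GiB max chunk, 2 copies (dup), 6 GiB free:
print(chunk_alloc_size(10 * GiB, 8 * GiB, 2, 6 * GiB) // GiB)  # -> 3
```

Without fix 1 the logical size alone would be compared against the cap; without fix 2 a dup chunk sized to the whole residual free space could not store its second copy.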
Hi, Mitch
Could you test the third version of this patchset for me?
Thanks
Miao
On Mon, 3 Jan 2011 23:46:00 -0600, Mitch Harder wrote:
2010/12/29 Miao Xie mi...@cn.fujitsu.com:
Hello, Chris
I have a bunch of random fixes of the space management in
git://repo.or.cz/linux-btrfs-devel.git
This program does very basic dedup. It searches a directory recursively and
scans all of the files looking for 64k extents and hashing them to figure out if
there are any duplicates. After that it calls the btrfs same-extent ioctl for
all of the duplicates in order to dedup the space on
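The detection phase described here can be sketched in Python. This is an illustrative stand-in, not Josef's actual tool, and it stops before the same-extent ioctl step:

```python
# Sketch of the detection phase only: hash fixed-size 64k extents and
# group identical ones. The real tool then hands the duplicate list to
# the btrfs same-extent ioctl; that step is omitted here.
import hashlib
import os

EXTENT_SIZE = 64 * 1024  # 64k extents, as in the description

def scan(root):
    """Walk `root` recursively and return {sha256_hexdigest:
    [(path, offset), ...]} for every duplicated full 64k extent."""
    seen = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for fname in filenames:
            path = os.path.join(dirpath, fname)
            with open(path, "rb") as f:
                offset = 0
                while True:
                    chunk = f.read(EXTENT_SIZE)
                    if len(chunk) < EXTENT_SIZE:
                        break  # ignore the short tail extent
                    digest = hashlib.sha256(chunk).hexdigest()
                    seen.setdefault(digest, []).append((path, offset))
                    offset += EXTENT_SIZE

    # Only extents that occur more than once are dedup candidates.
    return {h: locs for h, locs in seen.items() if len(locs) > 1}
```

Files are chunked at fixed 64k boundaries, so only aligned duplicates are found; that simplification matches the description of the tool.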
On Wed, Jan 05, 2011 at 11:20:51AM -0500, Josef Bacik wrote:
This program does very basic dedup. It searches a directory recursively and
scans all of the files looking for 64k extents and hashing them to figure out
if there are any duplicates. After that it calls the btrfs same extent
Here are patches to do offline deduplication for Btrfs. It works well for the
cases it's expected to; I'm looking for feedback on the ioctl interface and
such. I'm well aware there are missing features in the userspace app (like
being able to set a different blocksize). If this interface is
This program does very basic dedup. It searches a directory recursively and
scans all of the files looking for 64k extents and hashing them to figure out if
there are any duplicates. After that it calls the btrfs same-extent ioctl for
all of the duplicates in order to dedup the space on
This adds the ability for userspace to tell btrfs which extents match each other.
You pass in
-a logical offset
-a length
-a hash type (currently only sha256 is supported)
-the hash
-a list of file descriptors with their logical offset
and this ioctl will split up the extent on the target file
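The exact struct layout is not shown in this thread; purely as an illustration, the fields listed above could be laid out as in this hypothetical ctypes sketch (every name here is invented, not the real interface):

```python
# Hypothetical layout built only from the fields named above; the real
# ioctl's struct is not shown in the thread, so all names are invented.
import ctypes

class SameExtentInfo(ctypes.Structure):
    """One destination: a file descriptor plus its logical offset."""
    _fields_ = [
        ("fd", ctypes.c_int64),
        ("logical_offset", ctypes.c_uint64),
    ]

class SameExtentArgs(ctypes.Structure):
    """Fixed header: target offset/length, hash type and value, then a
    count of SameExtentInfo entries appended after the header."""
    _fields_ = [
        ("logical_offset", ctypes.c_uint64),  # target extent start
        ("length", ctypes.c_uint64),          # target extent length
        ("hash_type", ctypes.c_uint32),       # only sha256 in the posting
        ("hash", ctypes.c_uint8 * 32),        # sha256 digest is 32 bytes
        ("dest_count", ctypes.c_uint64),      # number of trailing entries
    ]
```

In a real variable-length ioctl argument the destination entries would follow the fixed header in the same buffer; this sketch only captures the fields the posting names.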
2011/1/5 Miao Xie mi...@cn.fujitsu.com:
Hello, Chris
I have a bunch of random fixes of the space management in
git://repo.or.cz/linux-btrfs-devel.git space-manage
They are the ENOSPC fixes, as well as fixes for the df command.
The first one and the last one fix the wrong free space
Josef Bacik wrote:
Basically I think online dedup is a huge waste of time and completely useless.
I couldn't disagree more. First, let's consider what is the
general-purpose use-case of data deduplication. What are the resource
requirements to perform it? How do these resource requirements
Josef Bacik wrote:
This adds the ability for userspace to tell btrfs which extents match
each other. You pass in
-a logical offset
-a length
-a hash type (currently only sha256 is supported)
-the hash
-a list of file descriptors with their logical offset
and this ioctl will split up
On Wed, Jan 05, 2011 at 05:42:42PM +, Gordan Bobic wrote:
Josef Bacik wrote:
Basically I think online dedup is a huge waste of time and completely useless.
I couldn't disagree more. First, let's consider what is the
general-purpose use-case of data deduplication. What are the resource
On Wed, Jan 05, 2011 at 07:41:13PM +0100, Diego Calleja wrote:
On Wednesday, 5 January 2011 18:42:42, Gordan Bobic wrote:
So by doing the hash indexing offline, the total amount of disk I/O
required effectively doubles, and the amount of CPU spent on doing the
hashing is in no way
On Wed, 2011-01-05 at 14:46 -0500, Josef Bacik wrote:
Blah blah blah, I'm not having an argument about which is better because I
simply do not care. I think dedup is silly to begin with, and online dedup
even sillier. The only reason I did offline dedup was because I was just
toying around
On Wed, Jan 5, 2011 at 11:46 AM, Josef Bacik jo...@redhat.com wrote:
Dedup is only useful if you _know_ you are going to have duplicate
information, so the two major use cases that come to mind are:
1) Mail server. You have small files, probably less than 4k (blocksize), that
you are storing
On Wed, Jan 05, 2011 at 07:58:13PM +, Lars Wirzenius wrote:
On Wed, 2011-01-05 at 14:46 -0500, Josef Bacik wrote:
Blah blah blah, I'm not having an argument about which is better because I
simply do not care. I think dedup is silly to begin with, and online dedup
even sillier. The
On 01/05/2011 06:41 PM, Diego Calleja wrote:
On Wednesday, 5 January 2011 18:42:42, Gordan Bobic wrote:
So by doing the hash indexing offline, the total amount of disk I/O
required effectively doubles, and the amount of CPU spent on doing the
hashing is in no way reduced.
But there are
On 01/05/2011 07:01 PM, Ray Van Dolson wrote:
On Wed, Jan 05, 2011 at 07:41:13PM +0100, Diego Calleja wrote:
On Wednesday, 5 January 2011 18:42:42, Gordan Bobic wrote:
So by doing the hash indexing offline, the total amount of disk I/O
required effectively doubles, and the amount of CPU
On Wed, Jan 05, 2011 at 11:01:41AM -0800, Ray Van Dolson wrote:
On Wed, Jan 05, 2011 at 07:41:13PM +0100, Diego Calleja wrote:
On Wednesday, 5 January 2011 18:42:42, Gordan Bobic wrote:
So by doing the hash indexing offline, the total amount of disk I/O
required effectively
On Wed, Jan 5, 2011 at 12:15 PM, Josef Bacik jo...@redhat.com wrote:
Yeah for things where you are talking about sending it over the network or
something like that, every little bit helps. I think deduplication is far more
interesting and useful at an application level than at a filesystem
On 01/05/2011 07:46 PM, Josef Bacik wrote:
Blah blah blah, I'm not having an argument about which is better because I
simply do not care. I think dedup is silly to begin with, and online dedup even
sillier.
Offline dedup is more expensive - so why are you of the opinion that it
is less
On Wed, 2011-01-05 at 19:58 +, Lars Wirzenius wrote:
(For my script, see find-duplicate-chunks in
http://code.liw.fi/debian/pool/main/o/obnam/obnam_0.14.tar.gz or get the
current code using "bzr get http://code.liw.fi/obnam/bzr/trunk/".
http://braawi.org/obnam/ is the home page of the backup
Hello people of BTRFS :)
I'm writing because (as the subject says) I have a problem mounting a
btrfs after a power failure.
About the context...
This is a pretty simple BTRFS setup: a 3ware 9690SA-4I RAID controller
with RAID-5 composed of 4x1TB drives. No software RAID, no BTRFS RAID.
My
On 01/05/2011 09:14 PM, Diego Calleja wrote:
In fact, there are cases where online dedup is clearly much worse. For
example, cases where people suffer duplication, but it takes a lot of
time (several months) to hit it. With online dedup, you need to enable
it all the time to get deduplication,
On 01/06/2011 12:22 AM, Spelic wrote:
On 01/05/2011 09:46 PM, Gordan Bobic wrote:
On 01/05/2011 07:46 PM, Josef Bacik wrote:
Offline dedup is more expensive - so why are you of the opinion that
it is less silly? And comparison by silliness quotient still sounds
like an argument over which is
On Wed, 2011-01-05 at 17:42 +0800, Arnd Bergmann wrote:
On Wednesday 05 January 2011 03:17:16 Shaohua Li wrote:
On Tue, 2011-01-04 at 17:40 +0800, Arnd Bergmann wrote:
Have you tried passing just a single metadata_incore_ent
at the ioctl and looping in user space? I would guess the
On 01/05/2011 09:46 PM, Gordan Bobic wrote:
On 01/05/2011 07:46 PM, Josef Bacik wrote:
Offline dedup is more expensive - so why are you of the opinion that
it is less silly? And comparison by silliness quotient still sounds
like an argument over which is better.
If I can say my opinion, I
Hello. I have two questions:
1) When can btrfs be used in systems that can be rebooted unexpectedly?
Is btrfs fsck ready to use? After a power failure or hard reboot, can the
filesystem be damaged and not be corrected?
2) When I use a Xen domU virtual machine, can I dynamically change the capacity
of
Excerpts from Gordan Bobic's message of 2011-01-05 12:42:42 -0500:
Josef Bacik wrote:
Basically I think online dedup is a huge waste of time and completely useless.
I couldn't disagree more. First, let's consider what is the
general-purpose use-case of data deduplication. What are the
On 01/06/2011 02:03 AM, Gordan Bobic wrote:
That's just alarmist. AES is being cryptanalyzed because everything
uses it. And the news of its insecurity is somewhat exaggerated (for
now at least).
Who cares... the fact of not being much used is a benefit for RIPEMD /
blowfish-twofish
On Wed, Jan 5, 2011 at 5:03 PM, Gordan Bobic gor...@bobich.net wrote:
On 01/06/2011 12:22 AM, Spelic wrote:
Definitely agree that it should be a per-directory option, rather than per
mount.
JOOC, would the dedupe table be done per directory, per mount, per
sub-volume, or per volume? The
On Wednesday, January 05, 2011 08:19:04 pm Spelic wrote:
I'd just make it always use the fs block size. No point in making it
variable.
Agreed. What is the reason for variable block size?
First post on this list - so far I was mostly just reading, to learn more about
fs design, but this is
Hi, Kitayama-san
Firstly, thanks for your test.
On Sat, 1 Jan 2011 00:43:41 +0900, Itaru Kitayama wrote:
Hi Miao,
The HEAD of the perf-improve branch fails to boot on my virtual machine.
The system calls btrfs_delete_delayed_dir_index() with trans block_rsv set
to NULL, thus selects, in
On Thursday 06 January 2011, Shaohua Li wrote:
I don't understand. Adding a case statement in compat_sys_ioctl means we will
do compat_ioctl_check_table(). If I add COMPATIBLE_IOCTL(), then the check
will succeed, we will go to the found_handler code path and execute
do_vfs_ioctl, which is what
On Thu, 2011-01-06 at 15:38 +0800, Arnd Bergmann wrote:
On Thursday 06 January 2011, Shaohua Li wrote:
I don't understand. Adding a case statement in compat_sys_ioctl means we will
do compat_ioctl_check_table(). If I add COMPATIBLE_IOCTL(), then the check
will succeed, we will go to the
On Wed, 2011-01-05 at 17:26 +0800, Arnd Bergmann wrote:
On Wednesday 05 January 2011 10:09:20 Arnd Bergmann wrote:
Thanks, fixed them.
After patch 1/5 is changed, this patch still applies but with fuzzy
hunks. Below is the latest refreshed patch.
Subject: add metadata_readahead ioctl in vfs