Hi!
Today I started my backup script, which rsyncs my system to an external
3.5-inch 2 TB hard disk, with the wrong destination dir. I only noticed
after more than 150 GiB of data had been copied twice instead of being
diffed against an existing subvolume.
Thus I rm -rf'ed the misplaced backup and started the
Henning Rohlfs wrote (ao):
- space_cache was enabled, but it seemed to make the problem worse.
It's no longer in the mount options.
space_cache is a one-time mount option which enables the space cache. Not
supplying it anymore as a mount option has no effect (check dmesg | grep btrfs).
Sander
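A quick way to confirm this from the shell; the device, mountpoint, and option string below are illustrative, not taken from Henning's system:

```shell
# Hypothetical /proc/mounts entry for a btrfs filesystem. Once space_cache
# has been enabled, the option keeps showing up here (and in dmesg) even if
# you stop passing it at mount time:
line='/dev/sda2 / btrfs rw,relatime,space_cache 0 0'
case "$line" in
  *space_cache*) echo 'space_cache is active' ;;
  *)             echo 'space_cache is not active' ;;
esac
```

In practice you would grep the real /proc/mounts; note that the clear_cache mount option only clears and rebuilds the cache, it does not turn it off.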
On Mon, 2011-06-20 at 23:51 +0200, Henning Rohlfs wrote:
Hello,
I migrated my system to btrfs (raid1) a few months ago. Since then
the performance has been pretty bad, but recently it's gotten
unbearable: a simple sync called while the system is idle can take 20 to
60 seconds. Creating or deleting files often has several seconds of
latency,
Peter Stuge pe...@stuge.se wrote:
Hey,
defragging btrfs does not seem to work for me. I have run the filefrag
command over the whole fs and (manually) tried to defrag a few heavily
fragmented files, but I can't get it to work (a file still has the same
number of extents and they are horrendously
Excerpts from John Wyzer's message of 2011-04-30 18:33:20 -0400:
Excerpts from Mitch Harder's message of Sun May 01 00:16:53 +0200 2011:
Hmm.
Tried it and it gives me about 50 lines of
FIBMAP: Invalid argument
and then:
large_file: 1 extent found
Is that the way
Hi,
Using compression is not a problem, but in order to reduce the maximum
amount of RAM we need to uncompress an extent, we enforce a max size on
the extent. So you'll tend to have more extents, but they should be
close together on disk.
Could you please do a filefrag -v on the file?
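As a sketch of what to look for in that filefrag -v output: an extent is "close" to the previous one when its physical start directly follows the previous extent's physical end. The sample lines below only mimic the -v column layout; the offsets are made up:

```shell
# Sample lines in the filefrag -v column layout (ext: logical: physical: length).
# Offsets are hypothetical. An extent whose physical start is prev_end + 1 is
# contiguous with its predecessor; anything else costs a real seek:
printf '%s\n' \
  '0: 0..31: 1000..1031: 32:' \
  '1: 32..63: 1032..1063: 32:' \
  '2: 64..95: 5000..5031: 32:' |
awk -F'[:. ]+' '{ if (NR > 1) print $1 ": " ($4 == prev + 1 ? "adjacent" : "gap"); prev = $5 }'
```

Compressed extents capped at ~128K are harmless when they chain up "adjacent" like this; runs of "gap" lines are what actually hurts on rotating disks.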
Hi,
Ok, looks like we could be doing a little better job when compression is
on to build out a bigger extent. This shouldn't be causing trouble on
an ssd at all but on your rotating disk it'll be slightly slower.
Still, most of these extents are somewhat close together; this is roughly
On 3 May 2011 19:30, Bernhard Schmidt be...@birkenwald.de wrote:
[]
The way the defrag ioctl works is that it schedules things for defrag
but doesn't force out the IO immediately unless you use -f.
So, to test the result of the defrag, you need to either wait a bit or
run sync.
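Put together, the test sequence looks like this; the path is hypothetical, and the defragment line needs a mounted btrfs filesystem, so it is left commented here:

```shell
# Schedule the defrag, then force the IO out, either via -f on the
# defragment call itself or with an explicit sync, before re-checking:
# btrfs filesystem defragment -f /path/to/large_file   # hypothetical path
sync                  # flush scheduled IO so filefrag sees the new layout
echo "sync rc=$?"     # sync exits 0 on success
# filefrag /path/to/large_file                         # then recount extents
```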
On 03.05.2011 16:54, Daniel J Blueman wrote:
Did so, no change. See my
On Tue, May 3, 2011 at 9:41 AM, Daniel J Blueman
daniel.blue...@gmail.com wrote:
It does seem to be the case generally; on 2.6.39-rc5, writing to a fresh
filesystem using rsync with BTRFS compression enabled, 128KB extents
seem very common [1] (filefrag inconsistency noted).
Defragmenting with
Excerpts from Mitch Harder's message of Sat Apr 30 19:33:16 +0200 2011:
Also, please note that 'btrfs filesystem defragment -v /' will
defragment the directory structure, but not the files.
[...]
To defragment your entire volume, you'll need a command like:
# for file in $(find PATH/TO/BTRFS/VOL/ -type f); do btrfs
filesystem defragment ${file}; done
Resending this to the list since I did not notice that I had been replying
only to Chris Mason for the last few emails...
Excerpts from John Wyzer's message of Sat Apr 30 08:57:06 +0200 2011:
Excerpts from Chris Mason's message of Fri Apr 29 22:48:58 +0200 2011:
http://bayimg.com/NahClAadn
On Sat, Apr 30, 2011 at 3:40 PM, John Wyzer john.wy...@gmx.de wrote:
If you just want to see your fragmentation you can use the 'filefrag'
program from e2fsprogs:
# for file in $(find PATH/TO/BTRFS/VOL/ -type f); do filefrag
${file}; done | sort -n -k 2 | less
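The sort -n -k 2 stage in that loop orders the output by extent count so the most fragmented files cluster at one end; with -r they come first. A quick sketch on sample filefrag-style lines (file names made up):

```shell
# Sample one-line filefrag outputs (hypothetical files); sorting numerically
# on field 2 (the extent count) and reversing puts the worst file on top:
printf '%s\n' \
  'access.log: 12 extents found' \
  'big.db: 340 extents found' \
  'note.txt: 1 extent found' |
sort -rn -k 2 | head -n 1
```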
Mitch Harder wrote:
To defragment your entire volume, you'll need a command like:
# for file in $(find PATH/TO/BTRFS/VOL/ -type f); do btrfs
filesystem defragment ${file}; done
Suggest:
find /path/to/btrfs/vol -type f -exec btrfs filesystem defragment '{}' ';'
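The -exec form matters because $(find ...) word-splits file names containing spaces. A throwaway demo, using temporary files only and nothing btrfs-specific:

```shell
# Create two scratch files, one with a space in its name:
dir=$(mktemp -d)
touch "$dir/plain" "$dir/with space"

# Command substitution splits "with space" into two bogus words:
set -- $(find "$dir" -type f)
echo "word-split args: $#"        # 3 args for 2 files

# -exec hands each path to the command intact, spaces and all:
find "$dir" -type f -exec sh -c 'echo "intact: $1"' _ '{}' ';' | wc -l

rm -rf "$dir"
```

find -print0 piped to xargs -0 is the other whitespace-safe variant, with fewer process spawns per file.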
Currently on
commit 7cf96da3ec7ca225acf4f284b0e904a1f5f98821
Author: Tsutomu Itoh t-i...@jp.fujitsu.com
Date: Mon Apr 25 19:43:53 2011 -0400
Btrfs: cleanup error handling in inode.c
merged into 2.6.38.4
I'm on a btrfs filesystem that has been used for some time. Let's say nine
months.