Hello, Chris.
On Sun, Mar 20, 2011 at 08:10:51PM -0400, Chris Mason wrote:
I went through a number of benchmarks with the explicit
blocking/spinning code and back then it was still significantly faster
than the adaptive spin. But it is definitely worth doing these again,
how many dbench
For datacow control, the corresponding inode flags are needed.
This is for btrfs use.
v1->v2:
- Change FS_COW_FL to another bit due to a conflict with upstream e2fsprogs
Signed-off-by: Liu Bo <liubo2...@cn.fujitsu.com>
---
 include/linux/fs.h | 2 ++
 1 file changed, 2 insertions(+), 0
Data compression and data cow are controlled across the entire FS by mount
options right now; ioctls are needed to set this on a per-file or
per-directory basis. This has been proposed previously, but VFS developers
wanted us to use generic ioctls rather than btrfs-specific ones.
According to
Excerpts from Al Viro's message of 2011-03-21 01:17:25 -0400:
On Mon, Mar 07, 2011 at 11:58:13AM -0500, Chris Mason wrote:
Thanks, these both look good but I'll test here as well. Are you
planning on pushing for .38?
No, but .39 would be nice ;-) Do you want that to go through btrfs
Excerpts from Miao Xie's message of 2011-03-21 01:05:22 -0400:
On Sun, 20 Mar 2011 20:33:34 -0400, Chris Mason wrote:
Excerpts from Miao Xie's message of 2011-03-18 05:24:46 -0400:
Changelog V3 -> V4:
- Fix a nested lock, reported by Itaru Kitayama, by updating the space
cache
This patch makes the free space cluster refilling code a little easier to
understand, and fixes some things with the bitmap part of it. Currently we
either want to refill a cluster with
1) All normal extent entries (those without bitmaps)
2) A bitmap entry with enough space
The current code has
We have been creating bitmaps for small extents unconditionally forever. This
was great when testing to make sure the bitmap stuff was working, but is
overkill normally. So instead of always adding small chunks of free space to
bitmaps, only start doing it if we go past half of our extent
Hiya,
I'm trying to move a btrfs FS that's on a hardware RAID 5 (6TB
large, 4TB of which are in use) to another machine with three 3TB HDs
and preserve all the subvolumes/snapshots.
Is there a way to do that without using a software/hardware raid
on the new machine (that is just use btrfs
Hello,
Here are the results with voluntary preemption. I've moved to a
beefier machine for testing. It's dual Opteron 2347, so dual socket,
eight core. The memory is limited to 1GiB to force IOs and the disk
is the same OCZ Vertex 60gig SSD. /proc/stat is captured before and
after dbench 50.
On Mon, Mar 21, 2011 at 05:59:55PM +0100, Tejun Heo wrote:
I'm running DFL again just in case but SIMPLE or SPIN seems to be a
much better choice.
Got 644.176 MB/sec, so yeah the custom locking is definitely worse
than just using a mutex.
Thanks.
--
tejun
Excerpts from Tejun Heo's message of 2011-03-21 12:59:55 -0400:
Hello,
Here are the results with voluntary preemption. I've moved to a
beefier machine for testing. It's dual Opteron 2347, so dual socket,
eight core. The memory is limited to 1GiB to force IOs and the disk
is the same OCZ
On Mon, Mar 21, 2011 at 04:57:13PM +0800, liubo wrote:
@@ -4581,8 +4583,6 @@ static struct inode *btrfs_new_inode(struct btrfs_trans_handle *trans,
 	location->offset = 0;
 	btrfs_set_key_type(location, BTRFS_INODE_ITEM_KEY);
-	btrfs_inherit_iflags(inode, dir);
-
 	if
Hi,
I decided to try btrfs for a few file systems on my not-too-critical home
server, including my root fs. Most file systems are on a RAID5 MD software
array, but my rootfs is btrfs in RAID1 across 3 partitions.
I got hit by the Intel Sandy Bridge SATA chipset bug, so eventually the 3rd
Hello,
On Mon, Mar 21, 2011 at 01:24:37PM -0400, Chris Mason wrote:
Very interesting. Ok, I'll definitely rerun my benchmarks as well. I
used dbench extensively during the initial tuning, but you're keeping
memory low in order to force IO.
This case doesn't really hammer on the locks,
Hi,
I noticed that btrfs_getattr() is filling stat->dev with an anonymous device
(for the per-snapshot root?):
stat->dev = BTRFS_I(inode)->root->anon_super.s_dev;
but /proc/pid/maps uses the real block device:
dev = inode->i_sb->s_dev;
This results in some unfortunate behavior for lsof as it reports
On 03/22/2011 01:43 AM, Johann Lombardi wrote:
On Mon, Mar 21, 2011 at 04:57:13PM +0800, liubo wrote:
@@ -4581,8 +4583,6 @@ static struct inode *btrfs_new_inode(struct btrfs_trans_handle *trans,
 	location->offset = 0;
 	btrfs_set_key_type(location, BTRFS_INODE_ITEM_KEY);
-
Hi Miao,
Here is an excerpt of the boot log from a kernel with the V4 patch applied:
===
[ INFO: possible circular locking dependency detected ]
2.6.36-xie+ #117
---
vgs/1210 is trying to acquire lock:
On Tue, 22 Mar 2011 11:33:10 +0900, Itaru Kitayama wrote:
Here is an excerpt of the boot log from a kernel with the V4 patch applied:
===
[ INFO: possible circular locking dependency detected ]
2.6.36-xie+ #117
On Tue, 22 Mar 2011 11:12:37 +0800
Miao Xie <mi...@cn.fujitsu.com> wrote:
We can't fix it that way, because the worker threads may be doing
insertions or deletions at the same time,
and we may lose some directory items.
Ok.
Maybe we can fix it by adding a reference for the delayed directory items,