Variable 'p' is no longer used, so remove it.
Signed-off-by: Tsutomu Itoh t-i...@jp.fujitsu.com
---
fs/btrfs/send.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
index ed897dc..96a826a 100644
--- a/fs/btrfs/send.c
+++ b/fs/btrfs/send.c
@@ -3479,7
If one of the copies of the superblock is zero, that does not
confirm that btrfs isn't present on that disk. When we have more
than one copy of the superblock, we should let the for loop
continue to check the other copies. The following test case and
results justify the fix.
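The loop behaviour described above can be sketched in userspace C. This is a minimal illustration, not the actual kernel code: the copy layout and the read_sb_copy() helper are hypothetical stand-ins, with a zero generation standing in for a zeroed superblock copy.

```c
#include <assert.h>
#include <stdint.h>

#define SB_COPIES 3

/* Hypothetical helper: returns the generation stored in superblock
 * copy i, or 0 if that copy is zeroed/unreadable. */
static uint64_t read_sb_copy(const uint64_t *copies, int i)
{
	return copies[i];
}

/* Scan all superblock copies; a single zeroed copy must NOT end the
 * scan. Returns the highest generation seen, or 0 if none is valid. */
static uint64_t scan_superblocks(const uint64_t *copies)
{
	uint64_t best = 0;
	int i;

	for (i = 0; i < SB_COPIES; i++) {
		uint64_t gen = read_sb_copy(copies, i);

		if (gen == 0)
			continue;	/* keep checking the other copies */
		if (gen > best)
			best = gen;
	}
	return best;
}
```

With copies {0, 7, 7}, the first zeroed copy no longer hides the two valid ones.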
Dear Devs,
I have a number of esata disk packs holding 4 physical disks each where
I wish to use the disk packs aggregated for 16TB and up to 64TB backups...
Can btrfs...?
1:
Mirror data such that there is a copy of data on each *disk pack* ?
Note that esata shows just the disks as individual
On Thu, Apr 18, 2013 at 02:45:24PM +0100, Martin wrote:
Dear Devs,
I have a number of esata disk packs holding 4 physical disks each where
I wish to use the disk packs aggregated for 16TB and up to 64TB backups...
Can btrfs...?
1:
Mirror data such that there is a copy of data on each
Hi,
xfstests loop has hit this after a day; the failing test was 276. The
sources are the btrfs-next/linus-base branch. I hit this some time ago
with 3.9.0-rc4-default+ .
[64394.422743] BUG: unable to handle kernel NULL pointer dereference at 0078
[64394.426716] IP: [a0010e0f]
On Thu, Apr 18, 2013 at 04:42:18PM +0200, David Sterba wrote:
xfstests loop has hit this after a day, failing test was 276.
sorry it's test 273
Apart from the dates, this sounds highly plausible :-)
If the hashing is done before the compression and the compression is
done for isolated blocks, then this could even work!
Any takers? ;-)
For a performance enhancement, keep a hash tree in memory for the n
most recently used/seen
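The "hash tree for the n most recently seen blocks" idea above can be sketched as a tiny userspace cache. This is a toy, not a proposed implementation: it uses a fixed-size array with round-robin eviction instead of a real tree or LRU list, and the block hashes are assumed to be 64-bit values.

```c
#include <stdbool.h>
#include <stdint.h>

#define CACHE_SLOTS 4	/* "n" most recently seen block hashes */

struct hash_cache {
	uint64_t slot[CACHE_SLOTS];
	bool used[CACHE_SLOTS];
	int next;		/* round-robin eviction cursor */
};

/* Returns true if this block hash was already seen (dedup candidate);
 * otherwise records it, evicting the oldest entry when full. */
static bool cache_check_insert(struct hash_cache *c, uint64_t h)
{
	int i;

	for (i = 0; i < CACHE_SLOTS; i++)
		if (c->used[i] && c->slot[i] == h)
			return true;

	c->slot[c->next] = h;
	c->used[c->next] = true;
	c->next = (c->next + 1) % CACHE_SLOTS;
	return false;
}
```

A hit means the candidate block may be a duplicate and is worth a byte-for-byte compare before sharing extents; a miss just remembers the hash.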
On Fri, Apr 12, 2013 at 09:44:53AM +0200, Stefan Behrens wrote:
On Fri, 12 Apr 2013 08:58:27 +0800, Wang Shilong wrote:
btrfs subvolume list gets a new option --fields=... which allows one
to specify which pieces of information about subvolumes shall be
printed. This is necessary because this
On 18/04/13 15:06, Hugo Mills wrote:
On Thu, Apr 18, 2013 at 02:45:24PM +0100, Martin wrote:
Dear Devs,
I have a number of esata disk packs holding 4 physical disks each
where I wish to use the disk packs aggregated for 16TB and up to
64TB backups...
Can btrfs...?
1:
Mirror data
On Thu, Apr 11, 2013 at 06:22:08PM +0200, Stefan Behrens wrote:
+static char *all_field_items[] = {
+	[BTRFS_LIST_OBJECTID]	= "rootid",
+	[BTRFS_LIST_GENERATION]	= "gen",
+	[BTRFS_LIST_CGENERATION]	= "cgen",
+	[BTRFS_LIST_OGENERATION]	= "ogen",
+
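A table like the one in the patch lends itself to a simple reverse lookup when parsing the --fields= string. The sketch below is an illustration of that pattern only, not the btrfs-progs code: the enum names and lookup_field() helper are made up for the example.

```c
#include <string.h>

enum field_id {
	FIELD_ROOTID,
	FIELD_GEN,
	FIELD_CGEN,
	FIELD_OGEN,
	FIELD_MAX,
};

/* Designated initializers keep the name table in sync with the enum. */
static const char *field_names[] = {
	[FIELD_ROOTID]	= "rootid",
	[FIELD_GEN]	= "gen",
	[FIELD_CGEN]	= "cgen",
	[FIELD_OGEN]	= "ogen",
};

/* Map one --fields= token to its index, or -1 if unknown. */
static int lookup_field(const char *name)
{
	int i;

	for (i = 0; i < FIELD_MAX; i++)
		if (strcmp(field_names[i], name) == 0)
			return i;
	return -1;
}
```

Unknown tokens return -1, which the option parser can turn into a usage error.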
1) Right now scrub_stripe() is looping in some unnecessary cases:
* when the found extent item's objectid is outside the dev extent's range
but we haven't finished scanning all the range within the dev extent
* when all the items have been processed but we haven't finished scanning all the
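The first case can be modelled in a few lines. This is a toy model of the exit condition, not the scrub_stripe() code: items stand in for sorted extent start offsets, and the point is that an item past the dev extent's end should break out of the loop rather than keep iterating.

```c
#include <stdint.h>

/* Toy model: 'items' is a sorted list of extent start offsets. The scan
 * stops as soon as an item falls past 'range_end' instead of walking
 * the remaining items. Returns how many items it visited. */
static int scan_dev_extent(const uint64_t *items, int nitems,
			   uint64_t range_start, uint64_t range_end)
{
	int visited = 0;
	int i;

	for (i = 0; i < nitems; i++) {
		visited++;
		if (items[i] >= range_end)
			break;		/* past the dev extent: stop */
		if (items[i] < range_start)
			continue;	/* before the range: skip ahead */
		/* ... scrub the extent at items[i] ... */
	}
	return visited;
}
```

With items {10, 20, 30, 40} and a range ending at 25, the scan touches three items and breaks, instead of visiting all four.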
On Thu, Apr 18, 2013 at 05:29:10PM +0100, Martin wrote:
On 18/04/13 15:06, Hugo Mills wrote:
On Thu, Apr 18, 2013 at 02:45:24PM +0100, Martin wrote:
Dear Devs,
I have a number of esata disk packs holding 4 physical disks each
where I wish to use the disk packs aggregated for 16TB and
Hugo Mills wrote:
On Thu, Apr 18, 2013 at 02:45:24PM +0100, Martin wrote:
Dear Devs,
snip
Note that esata shows just the disks as individual physical disks, 4 per
disk pack. Can physical disks be grouped together to force the RAID data
to be mirrored across all the nominated groups?
On Wed, Apr 17, 2013 at 07:50:09PM -0600, Matt Pursley wrote:
Hey All,
Here are the results of making and reading back a 13GB file on
mdraid6 + ext4, mdraid6 + btrfs, and btrfsraid6 + btrfs.
Seems to show that:
1) mdraid6 + ext4 can do ~1100 MB/s for these sequential reads with
either
On 18/04/13 20:44, Hugo Mills wrote:
On Thu, Apr 18, 2013 at 05:29:10PM +0100, Martin wrote:
On 18/04/13 15:06, Hugo Mills wrote:
On Thu, Apr 18, 2013 at 02:45:24PM +0100, Martin wrote:
Dear Devs,
I have a number of esata disk packs holding 4 physical disks
each where I wish to use the
On 18/04/13 20:48, Alex Elsayed wrote:
Hugo Mills wrote:
On Thu, Apr 18, 2013 at 02:45:24PM +0100, Martin wrote:
Dear Devs,
snip
Note that esata shows just the disks as individual physical disks, 4 per
disk pack. Can physical disks be grouped together to force the RAID data
to be mirrored
fget() returns NULL on error, so we should check for NULL.
Signed-off-by: Tsutomu Itoh t-i...@jp.fujitsu.com
---
fs/btrfs/send.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
index 96a826a..f892e0e 100644
--- a/fs/btrfs/send.c
+++
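The diff above is truncated, but the pattern being fixed is the usual one for a lookup that signals failure with NULL. Here is a userspace analogue only (fopen() in place of the kernel's fget(); the helper name is made up), showing the check-before-use shape:

```c
#include <stdio.h>

/* Userspace analogue of the fix: a lookup that can fail returns NULL,
 * and the caller must check before dereferencing. */
static long file_size_or_err(const char *path)
{
	FILE *f = fopen(path, "rb");
	long sz;

	if (!f)			/* like fget(): NULL means failure */
		return -1;	/* bail out instead of dereferencing */

	fseek(f, 0, SEEK_END);
	sz = ftell(f);
	fclose(f);
	return sz;
}
```

Without the NULL check, a bad fd (or here, a missing path) turns into a NULL pointer dereference instead of a clean error return.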
Martin wrote:
snip
Or perhaps include the same Ceph code routines into btrfs?...
That's actually what I was thinking. The CRUSH code is actually already
pretty well factored out - it lives in net/ceph/crush/ in the kernel source
tree, and is treated as part of 'libceph' (which is used by