Hi,
I've observed a rather strange behaviour while trying to mount two
identical copies of the same image to different mount points.
Each modification to one image is also performed in the second one.
Example:
dd if=/dev/sda? of=image1 bs=1M
cp image1 image2
mount -o loop image1 m1
mount -o loop image2 m2
On Thu, Jun 20, 2013 at 3:47 PM, Clemens Eisserer <linuxhi...@gmail.com> wrote:
Hi,
I've observed a rather strange behaviour while trying to mount two
identical copies of the same image to different mount points.
Each modification to one image is also performed in the second one.
Example:
dd
On Thu, Jun 20, 2013 at 10:47:53AM +0200, Clemens Eisserer wrote:
Hi,
I've observed a rather strange behaviour while trying to mount two
identical copies of the same image to different mount points.
Each modification to one image is also performed in the second one.
Example:
dd
Hi
On Mon, Jun 17, 2013 at 11:43 PM, Alexander Skwar
<alexanders.mailinglists+nos...@gmail.com> wrote:
Hello Josef
On Mon, Jun 17, 2013 at 11:21 PM, Josef Bacik <jba...@fusionio.com> wrote:
Pull down my tree
git://github.com/josefbacik/btrfs-progs.git
and build and run the fsck in there and
On Thu, 20 Jun 2013 10:16:22 +0100, Hugo Mills wrote:
On Thu, Jun 20, 2013 at 10:47:53AM +0200, Clemens Eisserer wrote:
Hi,
I've observed a rather strange behaviour while trying to mount two
identical copies of the same image to different mount points.
Each modification to one image is also
On Thu, Jun 20, 2013 at 10:22:07AM +, Gabriel de Perthuis wrote:
On Thu, 20 Jun 2013 10:16:22 +0100, Hugo Mills wrote:
On Thu, Jun 20, 2013 at 10:47:53AM +0200, Clemens Eisserer wrote:
Hi,
I've observed a rather strange behaviour while trying to mount two
identical copies of the
Instead of redirecting to a different block device, Btrfs could and
should refuse to mount an already-mounted superblock when the block
device doesn't match, somewhere in or below btrfs_mount. Registering
extra, distinct superblocks for an already mounted raid is a different
matter, but that
On Thu, Jun 20, 2013 at 10:41:53AM +, Gabriel de Perthuis wrote:
Instead of redirecting to a different block device, Btrfs could and
should refuse to mount an already-mounted superblock when the block
device doesn't match, somewhere in or below btrfs_mount. Registering
extra, distinct
For skinny metadata, key.offset stores levels rather than extent length.
Signed-off-by: Liu Bo <bo.li@oracle.com>
---
btrfs-image.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/btrfs-image.c b/btrfs-image.c
index 739ae35..e5ff795 100644
--- a/btrfs-image.c
+++ b/btrfs-image.c
A device can be added to the device list without getting a name, so we may
access illegal addresses while opening devices by their name.
Signed-off-by: Liu Bo <bo.li@oracle.com>
---
volumes.c | 4 ++++
1 files changed, 4 insertions(+), 0 deletions(-)
diff --git a/volumes.c b/volumes.c
Patches 1-3 are bug fixes for several places.
Patch 4 adds btrfs-image support of multiple disks restore.
Liu Bo (4):
Btrfs-progs: fix misuse of skinny metadata in btrfs-image
Btrfs-progs: skip opening devices that are missing
Btrfs-progs: delete fs_devices itself from fs_uuid list before
This adds a 'btrfs-image -m' option, which lets us restore an image that
is built from a btrfs of multiple disks onto several disks altogether.
This aims to address the following case,
$ mkfs.btrfs -m raid0 sda sdb
$ btrfs-image sda image.file
$ btrfs-image -r image.file sdc
-
so we can
Otherwise we will access illegal addresses while searching the fs_uuid list.
Signed-off-by: Liu Bo <bo.li@oracle.com>
---
disk-io.c |1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/disk-io.c b/disk-io.c
index 21b410d..2892300 100644
--- a/disk-io.c
+++ b/disk-io.c
@@
On Thu, Jun 20, 2013 at 08:05:30PM +0800, Liu Bo wrote:
This adds a 'btrfs-image -m' option, which lets us restore an image that
is built from a btrfs of multiple disks onto several disks altogether.
This aims to address the following case,
$ mkfs.btrfs -m raid0 sda sdb
$ btrfs-image sda
On Thu, Jun 20, 2013 at 08:24:32AM -0400, Josef Bacik wrote:
On Thu, Jun 20, 2013 at 08:05:30PM +0800, Liu Bo wrote:
This adds a 'btrfs-image -m' option, which lets us restore an image that
is built from a btrfs of multiple disks onto several disks altogether.
This aims to address the
Quoting Josef Bacik (2013-06-20 08:24:32)
On Thu, Jun 20, 2013 at 08:05:30PM +0800, Liu Bo wrote:
This adds a 'btrfs-image -m' option, which lets us restore an image that
is built from a btrfs of multiple disks onto several disks altogether.
This aims to address the following case,
$
On Thu, Jun 20, 2013 at 08:22:12AM -0500, Kevin O'Kelley wrote:
Thank you for your reply. I appreciate it. Unfortunately this issue
is a deal killer for us. The ability to take very fast snapshots and
replicate them to another site is key for us. We just can't use Btrfs
with this setup. That's
Thank you for your reply. I appreciate it. Unfortunately this issue is a deal
killer for us. The ability to take very fast snapshots and replicate them to
another site is key for us. We just can't use Btrfs with this setup. That's
too bad. Good luck and thank you.
The issue we were
Thank you for your reply. I appreciate it. Unfortunately this issue is a deal
killer for us. The ability to take very fast snapshots and replicate them to
another site is key for us. We just can't use Btrfs with this setup. That's too
bad. Good luck and thank you.
Sent from my iPhone
On Jun
On Thu, Jun 20, 2013 at 08:39:19AM -0400, Josef Bacik wrote:
On Thu, Jun 20, 2013 at 08:24:32AM -0400, Josef Bacik wrote:
On Thu, Jun 20, 2013 at 08:05:30PM +0800, Liu Bo wrote:
This adds a 'btrfs-image -m' option, which lets us restore an image that
is built from a btrfs of multiple disks
On Thu, Jun 20, 2013 at 08:39:19AM -0400, Josef Bacik wrote:
On Thu, Jun 20, 2013 at 08:24:32AM -0400, Josef Bacik wrote:
On Thu, Jun 20, 2013 at 08:05:30PM +0800, Liu Bo wrote:
This adds a 'btrfs-image -m' option, which lets us restore an image that
is built from a btrfs of multiple disks
@@ -3380,6 +3382,10 @@ static int update_space_info(struct btrfs_fs_info *info, u64 flags,
 	if (!found)
 		return -ENOMEM;
+	ret = percpu_counter_init(&found->total_bytes_pinned, 0);
+	if (ret)
+		return ret;
+
Leaks *found if percpu_counter_init() fails.
try_to_writeback_inodes_sb_nr returns 1 if writeback is already underway, which
is completely fraking useless for us as we need to make sure pages are actually
written before we go and check if there are ordered extents. So replace this
with an open coding of try_to_writeback_inodes_sb_nr minus
Ping.
Is there any reason why the btrfs progs (except for btrfs-show-super)
don't validate the super block's checksum?
thanks
On Mon, Jun 10, 2013 at 8:51 PM, Filipe David Borba Manana
<fdman...@gmail.com> wrote:
After finding a super block in a device also validate its
checksum. This
On Thu, Jun 20, 2013 at 09:26:15AM -0700, Zach Brown wrote:
@@ -3380,6 +3382,10 @@ static int update_space_info(struct btrfs_fs_info *info, u64 flags,
 	if (!found)
 		return -ENOMEM;
+	ret = percpu_counter_init(&found->total_bytes_pinned, 0);
+	if (ret)
+
In order to be able to detect the case that a filesystem is mounted
with an old kernel, add a uuid-tree-gen field like the free space
cache is doing it. It is part of the super block and written with
each commit. Old kernels do not know this field and don't update it.
Signed-off-by: Stefan
This tree is not created by mkfs.btrfs. Therefore when a filesystem
is mounted writable and the UUID tree does not exist, this tree is
created if required. The tree is also added to the fs_info structure
and initialized, but this commit does not yet read or write UUID tree
elements.
Mapping UUIDs to subvolume IDs is an operation with a high effort
today. Today, the algorithm even has quadratic effort (based on the
number of existing subvolumes), which means that it takes minutes
to send/receive a single subvolume if 10,000 subvolumes exist. But
even linear effort would be
When a new subvolume or snapshot is created, a new UUID item is added
to the UUID tree. Such items are removed when the subvolume is deleted.
The ioctl to set the received subvolume UUID is also touched and will
now also add this received UUID into the UUID tree together with the
ID of the
If the filesystem was mounted with an old kernel that was not
aware of the UUID tree, this is detected by looking at the
uuid_tree_generation field of the superblock (similar to how
the free space cache is doing it). If a mismatch is detected
at mount time, a thread is started that does two
Mapping UUIDs to subvolume IDs is an operation with a high effort
today. Today, the algorithm even has quadratic effort (based on the
number of existing subvolumes), which means that it takes minutes
to send/receive a single subvolume if 10,000 subvolumes exist. But
even linear effort would be
When the UUID tree is initially created, a task is spawned that
walks through the root tree. For each found subvolume root_item,
the uuid and received_uuid entries in the UUID tree are added.
This is such a quick operation so that in case somebody wants
to unmount the filesystem while the task is
This should never be needed, but since all functions are there
to check and rebuild the UUID tree, a mount option is added that
allows forcing this check and rebuild procedure.
Signed-off-by: Stefan Behrens <sbehr...@giantdisaster.de>
---
fs/btrfs/ctree.h | 1 +
fs/btrfs/disk-io.c | 3 ++-
+/* for items that use the BTRFS_UUID_KEY */
+#define BTRFS_UUID_ITEM_TYPE_SUBVOL 0 /* for UUIDs assigned to subvols */
+#define BTRFS_UUID_ITEM_TYPE_RECEIVED_SUBVOL 1 /* for UUIDs assigned to
+* received subvols */
+
+/* a sequence of such
On Wed, 19 Jun 2013, Sage Weil wrote:
Hi Chris,
On Tue, 18 Jun 2013, Chris Mason wrote:
[...]
Very long way of saying I think we're one release_path short. Sage, I
haven't tested this at all yet, I was hoping to trigger it first.
diff --git a/fs/btrfs/tree-log.c
Quoting Sage Weil (2013-06-20 17:56:19)
On Wed, 19 Jun 2013, Sage Weil wrote:
Hi Chris,
On Tue, 18 Jun 2013, Chris Mason wrote:
[...]
Very long way of saying I think we're one release_path short. Sage, I
haven't tested this at all yet, I was hoping to trigger it first.
On Thu, 20 Jun 2013, Chris Mason wrote:
Quoting Sage Weil (2013-06-20 17:56:19)
On Wed, 19 Jun 2013, Sage Weil wrote:
Hi Chris,
On Tue, 18 Jun 2013, Chris Mason wrote:
[...]
Very long way of saying I think we're one release_path short. Sage, I
haven't tested this at all
Quoting Sage Weil (2013-06-20 21:00:21)
On Thu, 20 Jun 2013, Chris Mason wrote:
Awesome, thanks for getting the traces for us. Looks like this one has
been around since v3.7, so I'm not going to try and sneak it into the
3.10 final. I'll have it in the next merge window and for stable.
On Thu, 20 Jun 2013, Chris Mason wrote:
Quoting Sage Weil (2013-06-20 21:00:21)
On Thu, 20 Jun 2013, Chris Mason wrote:
Awesome, thanks for getting the traces for us. Looks like this one has
been around since v3.7, so I'm not going to try and sneak it into the
3.10 final. I'll have
Quoting Liu Bo (2013-06-20 08:05:30)
This adds a 'btrfs-image -m' option, which lets us restore an image that
is built from a btrfs of multiple disks onto several disks altogether.
I'd like to pull this in, could you please rebase it against my current
master?
Thanks!
-chris
--
To unsubscribe
Quoting Jon Nelson (2013-06-18 13:19:04)
Josef Bacik jbacik at fusionio.com writes:
On Tue, Jun 11, 2013 at 11:43:30AM -0400, Sage Weil wrote:
I'm also seeing this hang regularly with both 3.9 and 3.10-rc5. Is this
is a known problem? In this case there is no powercycling; just a
On Thu, Jun 20, 2013 at 09:10:24PM -0400, Chris Mason wrote:
Quoting Liu Bo (2013-06-20 08:05:30)
This adds a 'btrfs-image -m' option, which lets us restore an image that
is built from a btrfs of multiple disks onto several disks altogether.
I'd like to pull this in, could you please rebase
Is this what you are looking for?
After this, the CPU gets stuck and I have to reboot.
[360491.932226] [ cut here ]
[360491.932261] kernel BUG at
/home/abuild/rpmbuild/BUILD/kernel-desktop-3.9.6/linux-3.9/fs/btrfs/ctree.c:1144!
[360491.932312] invalid opcode: [#1]
On Jun 20, 2013, at 7:46 PM, Jon Nelson <jnel...@jamponi.net> wrote:
Is this what you are looking for?
If you're able to reproduce while you're remoted in via ssh, then if you get
the dmesg at least you won't have to spend time trying to save it somewhere
since you'll have it on the remote