does 'ceph mds dump' list pool 3 in the data_pools line?
Yes. It lists the pool ids I wanted to put data in.
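For context, a minimal sketch of the check being discussed, assuming the CLI of that era and the pool id 3 from the question:
$ ceph mds add_data_pool 3          # allow cephfs to place file data in pool 3
$ ceph mds dump | grep data_pools   # pool 3 should now be listed here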
-- Forwarded message --
From: hemant surale hemant.sur...@gmail.com
Date: Thu, Nov 29, 2012 at 2:59 PM
Subject: Re: OSD daemon changes port no
To: Sage Weil
This one fixes a race, which qemu also had in the iscsi block driver,
between cancellation and I/O completion.
qemu_rbd_aio_cancel was not synchronously waiting for the end of
the command.
To achieve this it introduces a new status flag which uses
-EINPROGRESS.
Changes since last PATCH:
- fixed
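To make the idea concrete, here is a minimal standalone C sketch of the pattern the cover letter describes; all names are invented, and this is not the qemu code itself:

#include <errno.h>
#include <stdio.h>

/* stand-in for the qemu AIO control block */
struct aiocb_sketch {
    int status;     /* -EINPROGRESS while the command is in flight */
    int cancelled;  /* set by cancel, checked by the completion path */
};

/* stand-in for qemu_aio_wait(): dispatch one pending completion */
static void wait_one_event(struct aiocb_sketch *acb)
{
    acb->status = 0;  /* the completion clears the in-flight flag */
}

static void cancel_sketch(struct aiocb_sketch *acb)
{
    acb->cancelled = 1;
    /* synchronously wait for the command to finish, so the completion
     * can never run after cancel has returned */
    while (acb->status == -EINPROGRESS)
        wait_one_event(acb);
}

int main(void)
{
    struct aiocb_sketch acb = { -EINPROGRESS, 0 };
    cancel_sketch(&acb);
    printf("status after cancel: %d\n", acb.status);
    return 0;
}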
Hi Josh,
On 28.11.2012 19:51, Josh Durgin wrote:
No idea how to achieve this with git send-email ;-( But still more
important is the patch for discards...
Use git format-patch, edit the patch file (it includes the basic
headers already), then send it with git send-email.
done /
Hi,
I'm using RBD to store VM images, and they're accessed through the
kernel client (Xen VMs).
In the client dmesg log, I see periodically:
Nov 29 10:46:48 b53-04 kernel: [160055.012206] libceph: osd8
10.208.2.213:6806 socket closed
Nov 29 10:46:48 b53-04 kernel: [160055.013635] libceph: osd8
On Thu, Nov 22, 2012 at 11:00:19AM +0100, Stefan Priebe wrote:
@@ -406,10 +401,11 @@ static void qemu_rbd_complete_aio(RADOSCB *rcb)
acb->ret = r;
}
}
+acb->status = 0;
+
I suggest doing this in the BH. The qemu_aio_wait() loop in
qemu_rbd_aio_cancel() needs to
This one fixes a race, which qemu also had in the iscsi block driver,
between cancellation and I/O completion.
qemu_rbd_aio_cancel was not synchronously waiting for the end of
the command.
To achieve this it introduces a new status flag which uses
-EINPROGRESS.
Changes since PATCHv3:
- removed
Hi,
I hope I've done everything correctly. I've sent a new v4 patch.
On 29.11.2012 14:58, Stefan Hajnoczi wrote:
On Thu, Nov 22, 2012 at 11:00:19AM +0100, Stefan Priebe wrote:
@@ -406,10 +401,11 @@ static void qemu_rbd_complete_aio(RADOSCB *rcb)
acb->ret = r;
}
In __unregister_request(), there is a call to list_del_init()
referencing a request that was the subject of a call to
ceph_osdc_put_request() on the previous line. This is not
safe, because the request structure could have been freed
by the time we reach the list_del_init().
Fix this by
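The message is cut off here, but the general shape of such a fix is to take the request off the list while the reference is still held, and only then drop it. A compilable sketch with invented names, not the actual kernel patch:

#include <stdio.h>
#include <stdlib.h>

struct req_sketch {
    int refcount;
    struct req_sketch *next;  /* stand-in for the list linkage */
};

static void put_request(struct req_sketch *req)
{
    if (--req->refcount == 0)
        free(req);  /* after this, req must not be dereferenced */
}

static void unregister_sketch(struct req_sketch **list, struct req_sketch *req)
{
    /* unsafe order: put_request(req); *list = req->next;  -- use after free */
    *list = req->next;  /* unlink first, while our reference is live... */
    put_request(req);   /* ...then drop the reference */
}

int main(void)
{
    struct req_sketch *head = calloc(1, sizeof *head);
    head->refcount = 1;
    unregister_sketch(&head, head);
    printf("list is now %s\n", head ? "non-empty" : "empty");
    return 0;
}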
- Original message -
From: Stefan Priebe s.pri...@profihost.ag
To: qemu-de...@nongnu.org
Cc: stefa...@gmail.com, josh durgin josh.dur...@inktank.com,
ceph-devel@vger.kernel.org, pbonz...@redhat.com,
Stefan Priebe s.pri...@profihost.ag
Sent: Thursday, 29 November 2012 15:28:35
@@ -574,6 +570,12 @@ static void qemu_rbd_aio_cancel(BlockDriverAIOCB *blockacb)
{
RBDAIOCB *acb = (RBDAIOCB *) blockacb;
acb->cancelled = 1;
+
+while (acb->status == -EINPROGRESS) {
+qemu_aio_wait();
+}
+
There should be a
On Thu, 29 Nov 2012, Andrey Korolyov wrote:
$ ceph osd down -
osd.0 is already down
$ ceph osd down ---
osd.0 is already down
the same for ``+'', ``/'', ``%'' and so on - I think that for the osd
subsystem the ceph cli should explicitly accept only non-negative
integers, refusing all other
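A small C sketch of the strict parse Andrey seems to be asking for (names invented; this is not the actual ceph CLI code): accept only a non-negative decimal integer as an osd id and reject everything else.

#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

static int parse_osd_id(const char *s, long *out)
{
    char *end;
    errno = 0;
    long v = strtol(s, &end, 10);
    /* reject empty input, trailing junk, overflow, and negatives:
     * "-", "---", "+", "/", "%" all fail one of these tests */
    if (errno || end == s || *end != '\0' || v < 0 || v > INT_MAX)
        return -EINVAL;
    *out = v;
    return 0;
}

int main(void)
{
    long id;
    printf("\"-\" -> %d\n", parse_osd_id("-", &id));  /* rejected */
    printf("\"0\" -> %d\n", parse_osd_id("0", &id));  /* accepted */
    return 0;
}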
Hi Sylvain,
Can you attach/post the whole log somewhere? I'm curious what is leading
up to it not having secret_id=0. Ideally with 'debug auth = 20' and
'debug osd = 20' and 'debug ms = 1'.
Thanks!
sage
On Thu, 29 Nov 2012, Sylvain Munaut wrote:
Hi,
I'm using RBD to store VM images and
Reviewed-by: Sage Weil s...@inktank.com
On Thu, 29 Nov 2012, Alex Elder wrote:
In __unregister_request(), there is a call to list_del_init()
referencing a request that was the subject of a call to
ceph_osdc_put_request() on the previous line. This is not
safe, because the request structure
On Thu, Nov 29, 2012 at 8:34 PM, Sage Weil s...@inktank.com wrote:
On Thu, 29 Nov 2012, Andrey Korolyov wrote:
$ ceph osd down -
osd.0 is already down
$ ceph osd down ---
osd.0 is already down
the same for ``+'', ``/'', ``%'' and so on - I think that for the osd
subsystem the ceph cli should explicitly
On 11/29/2012 07:01 PM, Andrey Korolyov wrote:
On Thu, Nov 29, 2012 at 8:34 PM, Sage Weil s...@inktank.com wrote:
On Thu, 29 Nov 2012, Andrey Korolyov wrote:
$ ceph osd down -
osd.0 is already down
$ ceph osd down ---
osd.0 is already down
the same for ``+'', ``/'', ``%'' and so on - I think
I'm getting the assert failure below with the following test:
ceph_mount(cmount, "/otherdir");
ceph_getcwd(cmount);
--
client/Inode.h: In function 'Dentry* Inode::get_first_parent()' thread
7fded47c8780 time 2012-11-29 11:49:00.890184
client/Inode.h: 165: FAILED assert(!dn_set.empty())
ceph
On 11/29/2012 01:52 PM, Noah Watkins wrote:
I'm getting the assert failure below with the following test:
ceph_mount(cmount, "/otherdir");
This should fail with ENOENT if you check the return code.
-sam
ceph_getcwd(cmount);
--
client/Inode.h: In function 'Dentry*
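As an aside, the return-code check Sam refers to would look roughly like this; a hedged sketch assuming the libcephfs C API, with "/nosuchdir" as a made-up nonexistent path:

#include <stdio.h>
#include <string.h>
#include <cephfs/libcephfs.h>

int main(void)
{
    struct ceph_mount_info *cmount;
    ceph_create(&cmount, NULL);
    ceph_conf_read_file(cmount, NULL);
    int r = ceph_mount(cmount, "/nosuchdir");
    if (r < 0)  /* -ENOENT if the directory does not exist */
        fprintf(stderr, "mount failed: %s\n", strerror(-r));
    else
        ceph_unmount(cmount);
    ceph_release(cmount);
    return 0;
}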
Oh, let me clarify. /otherdir exists, and the mount succeeds.
- Noah
On Thu, Nov 29, 2012 at 11:58 AM, Sam Lang sam.l...@inktank.com wrote:
On 11/29/2012 01:52 PM, Noah Watkins wrote:
I'm getting the assert failure below with the following test:
ceph_mount(cmount, "/otherdir");
This
Here is the full test case:
TEST(LibCephFS, MountRootChdir) {
struct ceph_mount_info *cmount;
/* create mount and new directory */
ASSERT_EQ(ceph_create(&cmount, NULL), 0);
ASSERT_EQ(ceph_conf_read_file(cmount, NULL), 0);
ASSERT_EQ(ceph_mount(cmount, "/"), 0);
On 11/22/2012 12:04 PM, Nick Bartos wrote:
Here are the ceph log messages (including the libceph kernel debug
stuff you asked for) from a node boot with the rbd command hung for a
couple of minutes:
Nick, I have put together a branch that includes two fixes
that might be helpful. I don't
On 11/29/2012 02:12 PM, Noah Watkins wrote:
Here is the full test case:
Sorry - I was assuming it was just an issue with checking the return
code. I've pushed a one-line fix to wip-mount-subdir. You can
cherry-pick to your branch if you want.
-sam
TEST(LibCephFS, MountRootChdir) {
Forwarding to the list, as I forgot to hit reply-all ...
Can you attach/post the whole log somewhere? I'm curious what is leading
up to it not having secret_id=0. Ideally with 'debug auth = 20' and
'debug osd = 20' and 'debug ms = 1'.
Well, without the debug options there isn't anything else
This one fixes a race, which qemu also had in the iscsi block driver,
between cancellation and I/O completion.
qemu_rbd_aio_cancel was not synchronously waiting for the end of
the command.
To achieve this it introduces a new status flag which uses
-EINPROGRESS.
Changes since PATCHv4:
- removed
Hi Paolo,
On 29.11.2012 16:23, Paolo Bonzini wrote:
+qemu_vfree(acb->bounce);
This vfree is not needed, since the BH will run and do the free.
new patch v5 sent.
Greets,
Stefan
On 11/30/2012 01:22 AM, Isaac Otsiabah wrote:
The command below, which adds a new osd to the crushmap to enable it to
receive data, has changed and does not work anymore.
ceph osd crush set {id} {name}
Please, what is the new command to add a new osd to the crushmap to enable it
to
From: Yan, Zheng zheng.z@intel.com
A rename operation can call predirty_journal_parents() twice, so the
directory fragment's rstat can be modified twice. But only the
first modification is journaled, because EMetaBlob::add_dir() does
not update an existing dirlump.
Signed-off-by: Yan, Zheng
From: Yan, Zheng zheng.z@intel.com
Hi,
The 1st patch fixes an rstat accounting bug. The 5th patch fixes a
journal replay bug; the fix requires a minor disk format change.
These patches are also in:
git://github.com/ukernel/ceph.git wip-mds
Regards
Yan, Zheng
From: Yan, Zheng zheng.z@intel.com
The migrator calls eval() for imported caps after importing a
directory tree. We should do the same thing after importing an
inode.
Signed-off-by: Yan, Zheng zheng.z@intel.com
---
src/mds/Migrator.cc | 1 +
1 file changed, 1 insertion(+)
diff --git
From: Yan, Zheng zheng.z@intel.com
handle_client_openc() calls handle_client_open() if the linkage isn't
null. handle_client_open() calls rdlock_path_pin_ref() which returns
mdr->in[0] directly because mdr->done_locking is true. The problem here
is that mdr->in[0] can be NULL if the linkage is
From: Yan, Zheng zheng.z@intel.com
At that point, the request already auth pins some objects.
So CDir::fetch() should ignore the can_auth_pin check.
Signed-off-by: Yan, Zheng zheng.z@intel.com
---
src/mds/Server.cc | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git
From: Yan, Zheng zheng.z@intel.com
Otherwise we may wrongly increase mds->sessionmap.version, which
will confuse future journal replays that involve the sessionmap.
Signed-off-by: Yan, Zheng zheng.z@intel.com
---
src/mds/Server.cc | 2 ++
src/mds/events/EUpdate.h | 8 ++--
From: Yan, Zheng zheng.z@intel.com
Otherwise the osd may truncate the object to a larger size.
Signed-off-by: Yan, Zheng zheng.z@intel.com
---
net/ceph/osd_client.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index ccbdfbb..f8b0e56
Hi Sage, Community,
I am unable to use 2 directories to direct data to 2 different
pools. I did the following experiment.
Created 2 pools, host and ghost, to separate data placement.
--//crushmap file
---
#
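The attached crushmap is cut off above. Purely as an illustration of the shape such an experiment takes (the bucket names host/ghost come from the message; rule numbers and sizes are invented), a per-pool rule might look like:

rule host_rule {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        step take host
        step choose firstn 0 type osd
        step emit
}

with a matching ghost_rule taking the ghost bucket, and each pool pointed at its rule:

$ ceph osd pool set host crush_ruleset 3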