The osd can return EOLDSNAPC only for ceph_sync_write, so move the
related code after the call to ceph_sync_write.
Signed-off-by: Jianpeng Ma majianp...@gmail.com
---
fs/ceph/file.c | 18 ++
1 file changed, 10 insertions(+), 8 deletions(-)
diff --git a/fs/ceph/file.c
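For context, a minimal sketch of the control flow being described, loosely
following the 3.11-era ceph_aio_write() in fs/ceph/file.c; the names match
that file but the body is illustrative, not the literal diff:

static ssize_t ceph_aio_write_sketch(struct kiocb *iocb,
				     const struct iovec *iov,
				     unsigned long nr_segs, loff_t pos)
{
	struct file *file = iocb->ki_filp;
	ssize_t ret;

retry_snap:
	if (file->f_flags & O_DIRECT) {
		/* only the sync path sends the write straight to the
		 * OSD, so only it can come back with -EOLDSNAPC */
		ret = ceph_sync_write(file, iov->iov_base, iov->iov_len,
				      pos, &iocb->ki_pos);
		if (ret == -EOLDSNAPC)
			goto retry_snap;	/* snapc changed; redo write */
	} else {
		/* the buffered path never returns -EOLDSNAPC, so no
		 * retry check is needed on the common exit path */
		ret = generic_file_aio_write(iocb, iov, nr_segs, pos);
	}
	return ret;
}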
Hi Sage,
During the discussions about continuous integration at the CDS this week (
http://youtu.be/cGosx5zD4FM?t=1h16m05s ) you mentioned that github was able to
keep track of the successive versions of a pull request commit, even in the
case of a rebase. I just tried the following:
a)
Stefan,
I see the same behavior and I theorize it is linked to an issue detailed
in another thread [0]. Do your VM guests ever hang while your cluster is
HEALTH_OK, as described in that other thread?
[0] http://comments.gmane.org/gmane.comp.file-systems.ceph.user/2982
A few observations:
On Wed, 7 Aug 2013, James Harper wrote:
Hi James,
Here is a somewhat simpler patch; does this work for you? Note that if
you do something like /etc/init.d/ceph status osd.123 where osd.123 isn't in
ceph.conf then you get a status 1 instead of 3. But for the
/etc/init.d/ceph status mds
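For reference, LSB reserves the exit codes of a status action: 0 means
running, 1 means the program is dead but a pid file remains, and 3 means
not running, which is why 3 would be the natural code for a daemon that
isn't in ceph.conf at all. A quick check, on a host with the init script
installed:

/etc/init.d/ceph status osd.123
echo $?   # 3 = "not running"; the case described above yields 1 instead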
Hey Loic, that's great!
Seems like quite a bit of work, but I'm sure people will find it
helpful. Feel free to add it to the CDS page for reference. Thanks.
Best Regards,
Patrick McGarry
Director, Community || Inktank
http://ceph.com || http://inktank.com
@scuttlemonkey || @ceph ||
Hello,
I don't know if it's useful, but I can also reproduce this bug with:
rbd kernel 3.10.4
ceph osd 0.61.4
image format 2
rbd formatted with xfs; after some snapshots and mount/umount tests (no
writes on the file system), the xfs mount segfaults and the kernel shows the same log.
Cheers,
Laurent
Looks good; I've applied this to the tree. Can you review the below patch
while we are looking at this code?
Thanks!
sage
From 26d0d7b213d87db0ef46e885ae749c27395c11b1 Mon Sep 17 00:00:00 2001
From: Sage Weil s...@inktank.com
Date: Thu, 8 Aug 2013 09:39:44 -0700
Subject: [PATCH] ceph: replace
Hi Mike,
On 08.08.2013 16:05, Mike Dawson wrote:
Stefan,
I see the same behavior and I theorize it is linked to an issue detailed
in another thread [0]. Do your VM guests ever hang while your cluster is
HEALTH_OK, as described in that other thread?
[0]
There are a number of subsystems that the clients use, so a number of
knobs that matter. The logging/authentication name (the type.id
name), by default, is 'client.admin'. Some of the relevant logging
knobs are ms/monc; of course there are usually effects caused by the
daemons too, so the
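As a concrete example (paths and values illustrative, not prescriptive,
assuming a stock setup), the usual way to turn these up for the CLI client
is either a [client] section in ceph.conf or one-off flags, since any
config option can also be given on the command line with dashes:

[client]
        debug ms = 1
        debug monc = 10
        log file = /var/log/ceph/$name.log

or, for a single command:

ceph --debug-ms 1 --debug-monc 10 -s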
On Thu, 8 Aug 2013, Loic Dachary wrote:
Hi Sage,
During the discussions about continuous integration at the CDS this week (
http://youtu.be/cGosx5zD4FM?t=1h16m05s ) you mentioned that github was able
to keep track of the successive versions of a pull request commit, even in
the case of
Hi,
Trying to use Ubuntu precise virtual machines as teuthology targets (making
sure they have 2GB of RAM because ceph-test-dbg will not even install with 1GB
of RAM ;-) and installing the key with
wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
| sudo apt-key
On Thu, 8 Aug 2013, Andreas Bluemle wrote:
Hi,
maybe this is the wrong list - but I am looking for
logging support for the /usr/bin/ceph administration
command.
This is built from the same source as the ceph daemons.
For the daemons, logging is controlled for the different
On Wed, 7 Aug 2013, James Harper wrote:
Hi James,
Here is a somewhat simpler patch; does this work for you? Note that if
you do something like /etc/init.d/ceph status osd.123 where osd.123 isn't in
ceph.conf then you get a status 1 instead of 3. But for the
/etc/init.d/ceph
Yes, the procedure didn't change.
If you're on debian I could also send you prebuilt .debs for blktap
and for a patched xen version that includes userspace RBD support.
If you have any issues, I can be found on ceph's IRC under the 'tnt' nick.
I've had a few occasions where tapdisk has
On Fri, 9 Aug 2013, James Harper wrote:
On Wed, 7 Aug 2013, James Harper wrote:
Hi James,
Here is a somewhat simpler patch; does this work for you? Note that if
you do something like /etc/init.d/ceph status osd.123 where osd.123 isn't
in
ceph.conf then you get a status 1
Looks good; I've applied this to the tree. Can you review the below patch
while we are looking at this code?
Thanks!
sage
From 26d0d7b213d87db0ef46e885ae749c27395c11b1 Mon Sep 17 00:00:00 2001
From: Sage Weil s...@inktank.com
Date: Thu, 8 Aug 2013 09:39:44 -0700
Subject: [PATCH] ceph: replace
On Fri, 9 Aug 2013, majianpeng wrote:
Looks good; I've applied this to the tree. Can you review the below patch
while we are looking at this code?
Thanks!
sage
From 26d0d7b213d87db0ef46e885ae749c27395c11b1 Mon Sep 17 00:00:00 2001
From: Sage Weil s...@inktank.com
Date: Thu, 8 Aug 2013
Hi Milosz!
I have a few comments below on invalidate_page:
On Wed, 7 Aug 2013, Milosz Tanski wrote:
Adding support for fscache to the Ceph filesystem. This would bring it on
par with some of the other network filesystems in Linux (like NFS, AFS,
etc...)
In order to mount the filesystem
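On the invalidate_page point, a minimal sketch of what the hook has to do
for fscache, assuming the 3.x-era fscache API; ceph_fscache_cookie() is a
made-up accessor for the inode's cookie here, not something from the
posted series:

static void ceph_invalidate_fscache_page_sketch(struct inode *inode,
						struct page *page)
{
	struct fscache_cookie *cookie = ceph_fscache_cookie(inode);

	/* wait out any write-to-cache still in flight, then drop the
	 * page's cache state before the VM invalidates it */
	if (PageFsCache(page)) {
		fscache_wait_on_page_write(cookie, page);
		fscache_uncache_page(cookie, page);
	}
}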
I've translated the blueprint that came out of the CDS rsockets session
into tickets in the tracker:
http://tracker.ceph.com/issues/5912
http://tracker.ceph.com/issues/5913
http://tracker.ceph.com/issues/5914
http://tracker.ceph.com/issues/5915
http://tracker.ceph.com/issues/5916
This includes
Hi Yan,
I just found the wip-zfs branch in your repo and noticed your comment on
the wiki; somehow I missed that (and your message in #ceph-summit) during
the actual session!
This branch looks great! My one suggestion is that the FileStore code
would be cleaner if we implement a
Hi,
Now it’s a bit hard for us to track the bugs, review the submissions, and track
the blueprints. We do have a bug tracking system, but most of the time it
doesn’t connect with a github submit link. We have email review, pull
requests, and also some internal mechanism inside Inktank; we do
Hi,
Now it's a bit hard for us to track the bugs, review the submissions, and
track the blueprints. We do have a bug tracking system, but most of the
time it doesn't connect with a github submit link. We have email review,
pull requests, and also some internal mechanism inside Inktank; we