On Nov 12, 2013 2:38 AM, Berant Lemmenes ber...@lemmenes.com wrote:
I noticed the same behavior on my dumpling cluster. They wouldn't show up
after boot, but after a service restart they were there.
I haven't tested a node reboot since I upgraded to emperor today. I'll
give it a shot tomorrow.
Hi all!
long time no see!
I want to use the function rados_exec, and I found the class
cls_crypto.cc in the source code of Ceph;
so I ran the function like this:
rados_exec(ioctx, "foo_object", "crypto", "md5", buf, sizeof(buf), buf2,
sizeof(buf2))
and the function return
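For reference, a minimal sketch of what such a call looks like with the librados C API. The class and method names ("crypto", "md5") follow the snippet above; whether they exist depends on cls_crypto actually being built and loaded on the OSDs, and the buffer sizes here are illustrative:

```c
#include <stdio.h>
#include <string.h>
#include <rados/librados.h>

/* Sketch: invoke the "md5" method of the "crypto" object class on an
 * object.  Assumes ioctx is an already-open rados_ioctx_t and that the
 * OSDs have loaded cls_crypto. */
int call_md5(rados_ioctx_t ioctx, const char *oid)
{
    char in[] = "some data to hash";
    char out[128];

    int ret = rados_exec(ioctx, oid, "crypto", "md5",
                         in, sizeof(in), out, sizeof(out));
    if (ret < 0) {
        fprintf(stderr, "rados_exec failed: %d\n", ret);
        return ret;
    }
    /* on success, ret is the number of bytes written into out */
    printf("got %d bytes back\n", ret);
    return 0;
}
```

Note that the class and method arguments are C strings, so passing bare identifiers like `crypto` or `md5` without quotes would not compile.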
Hi guys,
I use ceph-deploy to manage my cluster, but it fails while creating the
OSDs; the process seems to hang at creating the first OSD. By the way,
SELinux is disabled, and my ceph-disk is patched according to this page:
http://www.spinics.net/lists/ceph-users/msg03258.html
can you guys give
Sorry, just spotted you're mounting on sdc. Can you chuck out a partx -v
/dev/sda to see if there's anything odd about the data currently on there?
-Michael
On 12/11/2013 18:22, Michael wrote:
As long as there's room on the SSD for the partitioner, it'll just use
the conf value for the osd journal
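For context, the conf value being referred to is `osd journal size` in ceph.conf; a typical fragment (the size here is illustrative):

```ini
[osd]
; journal size in MB; the partitioner will carve a partition of this
; size out of the journal device if there is room
osd journal size = 10240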
We probably do need to go over it again and account for PG splitting.
On Fri, Nov 8, 2013 at 9:26 AM, Gregory Farnum g...@inktank.com wrote:
After you increase the number of PGs, *and* increase the pgp_num to do the
rebalancing (this is all described in the docs; do a search), data will move
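As a rough illustration of sizing pg_num before such a change, the commonly cited rule of thumb is about 100 PGs per OSD divided by the replica count, rounded up to a power of two. This is a heuristic sketch, not an official formula for every workload:

```python
def recommended_pg_num(num_osds, replicas=3, pgs_per_osd=100):
    """Rule-of-thumb pg_num: ~100 PGs per OSD divided by the replica
    count, rounded up to the next power of two."""
    raw = num_osds * pgs_per_osd / replicas
    pg = 1
    while pg < raw:
        pg *= 2
    return pg

# e.g. 60 OSDs with 3 replicas -> 60*100/3 = 2000 -> 2048
print(recommended_pg_num(60))
```

After picking a value, both settings must be raised, e.g. `ceph osd pool set data pg_num 2048` followed by `ceph osd pool set data pgp_num 2048`; as noted above, data only rebalances once pgp_num is increased.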
I think we removed the experimental warning in cuttlefish. It
probably wouldn't hurt to do it in bobtail, particularly if you test it
extensively on a test cluster first. However, we didn't do extensive
testing on it until cuttlefish. I would upgrade to cuttlefish
(actually, dumpling or emperor,
I built Ceph version 0.72 with --with-libzfs on Ubuntu 13.04 after
installing ZFS from the ppa:zfs-native/stable repository. The ZFS version
is v0.6.2-1.
I do have a few questions and comments on Ceph using ZFS-backed OSDs.
As ceph-deploy does not show support for ZFS, I used the instructions
at:
On Tue, Nov 12, 2013 at 3:43 PM, Eric Eastman eri...@aol.com wrote:
I built Ceph version 0.72 with --with-libzfs on Ubuntu 13.04 after installing
ZFS from the ppa:zfs-native/stable repository. The ZFS version is v0.6.2-1.
I do have a few questions and comments on Ceph using ZFS-backed OSDs.
As
On 11/12/2013 04:43 PM, Eric Eastman wrote:
I built Ceph version 0.72 with --with-libzfs on Ubuntu 13.04 after
installing ZFS from the ppa:zfs-native/stable repository. The ZFS version
is v0.6.2-1.
I do have a few questions and comments on Ceph using ZFS-backed OSDs.
As ceph-deploy does not show
Hi,
we're experiencing the same problem. We have a cluster with 6 machines and 60
OSDs (Supermicro 2U, 24 disks max, LSI controller). We have three R300s as
monitor nodes and two more R300s as iSCSI targets. We are using targetcli, too.
I don't need to say we have a cluster, public and
Out of curiosity - can you live-migrate instances with this setup?
On Nov 12, 2013, at 10:38 PM, Dmitry Borodaenko dborodae...@mirantis.com
wrote:
And to answer my own question, I was missing a meaningful error
message: what the ObjectNotFound exception I got from librados didn't
tell me
Still working on it, watch this space :)
On Tue, Nov 12, 2013 at 3:44 PM, Dinu Vlad dinuvla...@gmail.com wrote:
Out of curiosity - can you live-migrate instances with this setup?
On Nov 12, 2013, at 10:38 PM, Dmitry Borodaenko dborodae...@mirantis.com
wrote:
And to answer my own
Since the disk is failing and you have 2 other copies, I would take osd.0 down.
This means that Ceph will not attempt to read the bad disk, either for clients
or to make another copy of the data:
* Not sure about the syntax of this for the version of ceph you are running
ceph osd down 0
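A hedged sketch of the commands involved (syntax as of roughly the dumpling era; check the docs for your release). Marking the OSD out makes CRUSH stop mapping data to it and re-replicates its PGs elsewhere, while stopping the daemon keeps anything from reading the failing disk:

```shell
# stop CRUSH from mapping data to osd.0; its PGs re-replicate elsewhere
ceph osd out 0

# stop the daemon so nothing reads the failing disk
# (sysvinit-style releases)
service ceph stop osd.0

# watch recovery progress
ceph -w
```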
On Tue, Nov 12, 2013 at 7:28 PM, Joao Eduardo Luis joao.l...@inktank.comwrote:
This looks an awful lot like you started another instance of an OSD with
the same ID while another was running. I'll walk you through the log lines
that point me towards this conclusion. Would still be weird if
On Wed, Nov 13, 2013 at 6:43 AM, Eric Eastman eri...@aol.com wrote:
I built Ceph version 0.72 with --with-libzfs on Ubuntu 13.04 after installing
ZFS from the ppa:zfs-native/stable repository. The ZFS version is v0.6.2-1.
I do have a few questions and comments on Ceph using ZFS-backed OSDs.
As
Hi all!
I am trying to use the rados_exec method, which allows librados users to call
custom methods!
My Ceph version is 0.62. It worked for the class cls_rbd, since that is already
built and loaded into the Ceph class directory (/usr/local/lib/rados-class), but
I do not know how to build and load a
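For illustration, a minimal object class follows the same skeleton as cls_rbd in the 0.6x source tree. The class and method names here ("hello", "say_hello") are made up, and the exact header paths and flags should be checked against your tree; the .so is then placed in the directory named by the `osd class dir` option:

```cpp
// Sketch of a minimal RADOS object class, modeled on cls_rbd.
// Build it as libcls_hello.so against the Ceph source tree and
// drop it into the OSDs' "osd class dir".
#include "objclass/objclass.h"

CLS_VER(1, 0)
CLS_NAME(hello)

cls_handle_t h_class;
cls_method_handle_t h_say_hello;

// Method body runs inside the OSD; 'in' is the caller's input
// buffer, 'out' is returned to the caller.
static int say_hello(cls_method_context_t hctx,
                     bufferlist *in, bufferlist *out)
{
  out->append("hello from the OSD");
  return 0;
}

void __cls_init()
{
  cls_register("hello", &h_class);
  cls_register_cxx_method(h_class, "say_hello",
                          CLS_METHOD_RD, say_hello, &h_say_hello);
}
```

Once the OSDs have loaded it, a client would reach it with something like `rados_exec(ioctx, oid, "hello", "say_hello", ...)`.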