On 03/25/2013 10:35 AM, Chen, Xiaoxi wrote:
OK, but my VM didn't crash; it was the ceph-osd daemon that crashed. So is it safe for me
to say the issue I hit is a different one (not #3737)?
Yes, then it surely is a different issue. Actually, you just said Ceph
crashed, with no mention of an OSD, so it was not clear.
Hi,
Apologies if this is already a known bug (though I didn't find it).
If we try to map a device that doesn't exist, we get an immediate and
reproducible kernel BUG (see the P.S.). We hit this by accident
because we forgot to add the --pool ourpool option.
This works:
[root@afs245 /]# rbd map
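For illustration, a minimal sketch of the two invocations (the image name "someimage" is a hypothetical placeholder, not our actual image):

rbd map --pool ourpool someimage   # works: the image exists in ourpool
rbd map someimage                  # kernel BUG: without --pool, rbd uses the default 'rbd' pool, where the image does not exist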
Let me rephrase it to make it clearer:
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Chen, Xiaoxi
Sent: March 25, 2013 17:02
To: 'ceph-users@lists.ceph.com' (ceph-users@lists.ceph.com)
Cc: ceph-de...@vger.kernel.org
Subject: [ceph-users] Ceph Crash at
The keyring.* vs key.* distinction in mkcephfs appears correct. Can you
attach your ceph.conf? It looks a bit like no daemons are defined.
sage
On Mon, 25 Mar 2013, Steve Carter wrote:
Although it doesn't attempt to log in to my other machines as I thought it was
designed to do, as I know
There have been several important fixes that we've backported to bobtail
that users are hitting in the wild. Most notably, there was a problem with
pool names containing - and _ that OpenStack users were hitting, and excessive
memory usage by ceph-osd and other daemons related to the trimming of in-memory logs.
Sage,
Sure, here you go:
[global]
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
max open files = 4096
[mon]
mon data = /data/${name}
keyring = /data/${name}/keyring
[osd]
osd data = /data/${name}
keyring =
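For reference, the "no daemons are defined" comment above refers to the per-daemon sections an mkcephfs-style ceph.conf carries; a minimal sketch of what those sections would look like follows (the host names and monitor address here are placeholders, not values from this cluster):

[mon.a]
host = node1
mon addr = 192.168.0.10:6789

[osd.0]
host = node1

[osd.1]
host = node2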
On 03/25/2013 04:07 PM, peter_j...@dell.com wrote:
Hi,
I have a couple of HW provisioning questions regarding SSDs for OSD
journals.
I’d like to provision 12 OSDs per node, and there are enough CPU
cycles and memory.
Each OSD is allocated one 3TB HDD for OSD data – these 12 * 3TB