Re: [ceph-users] Ceph Crash at sync_thread_timeout after heavy random writes.

2013-03-25 Thread Wolfgang Hennerbichler
On 03/25/2013 10:35 AM, Chen, Xiaoxi wrote: OK, but my VM didn't crash; it was the ceph-osd daemon that crashed. So is it safe for me to say the issue I hit is a different one (not #3737)? Yes, then it surely is a different issue. Actually, you just said Ceph crashed, with no mention of an OSD, so it was

[ceph-users] kernel BUG when mapping a nonexistent rbd device

2013-03-25 Thread Dan van der Ster
Hi, Apologies if this is already a known bug (though I didn't find it). If we try to map a device that doesn't exist, we get an immediate and reproducible kernel BUG (see the P.S.). We hit this by accident because we forgot to add the --pool ourpool. This works: [root@afs245 /]# rbd map
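
For context, a minimal sketch of the two invocations being described (the image name "myimage" is hypothetical, since the preview is truncated before the image name appears):

  # Mapping by image name alone makes rbd look in the default 'rbd' pool;
  # presumably that is how a nonexistent device ended up being mapped here,
  # triggering the kernel BUG reported above.
  rbd map myimage

  # Passing the pool explicitly maps the intended image:
  rbd map myimage --pool ourpool
  # equivalently:
  rbd map ourpool/myimage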

Re: [ceph-users] Ceph Crash at sync_thread_timeout after heavy random writes.

2013-03-25 Thread Chen, Xiaoxi
Rephrasing to make it clearer. From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Chen, Xiaoxi Sent: March 25, 2013 17:02 To: 'ceph-users@lists.ceph.com' (ceph-users@lists.ceph.com) Cc: ceph-de...@vger.kernel.org Subject: [ceph-users] Ceph Crash at

Re: [ceph-users] Weird problem with mkcephfs

2013-03-25 Thread Sage Weil
The keyring.* vs key.* distinction in mkcephfs appears correct. Can you attach your ceph.conf? It looks a bit like no daemons are defined. sage On Mon, 25 Mar 2013, Steve Carter wrote: Although it doesn't attempt to log in to my other machines as I thought it was designed to do, as I know

[ceph-users] v0.56.4 released

2013-03-25 Thread Sage Weil
Several important fixes have been backported to bobtail for issues that users are hitting in the wild. Most notably, there was a problem with pool names containing - and _ that OpenStack users were hitting, and high memory usage by ceph-osd and other daemons related to the trimming of in-memory logs.

Re: [ceph-users] Weird problem with mkcephfs

2013-03-25 Thread Steve Carter
Sage, Sure, here you go:
[global]
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
max open files = 4096
[mon]
mon data = /data/${name}
keyring = /data/${name}/keyring
[osd]
osd data = /data/${name}
keyring =
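
Sage's observation that "no daemons are defined" points at the absence of per-daemon sections; a minimal sketch of the [mon.X]/[osd.N] sections mkcephfs expected in that era (the hostnames, IDs, and address below are hypothetical):

  [mon.a]
  host = node1
  mon addr = 192.168.1.10:6789

  [osd.0]
  host = node1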

Re: [ceph-users] SSD Capacity and Partitions for OSD Journals

2013-03-25 Thread Matthieu Patou
On 03/25/2013 04:07 PM, peter_j...@dell.com wrote: Hi, I have a couple of HW provisioning questions regarding SSDs for OSD journals. I'd like to provision 12 OSDs per node, and there are enough CPU cycles and memory. Each OSD is allocated one 3TB HDD for OSD data – these 12 * 3TB
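
On sizing the journal partitions, the rule of thumb from the Ceph documentation of that era is osd journal size = 2 * (expected throughput * filestore max sync interval); a worked sketch with assumed numbers (roughly 100 MB/s sustained per 3TB HDD, the default 5 s sync interval):

  # per-journal size: 2 * 100 MB/s * 5 s = 1000 MB, so ~1-2 GB each is ample
  # 12 journals per node: 12 * 2 GB = ~24 GB of SSD capacity for journals
  # corresponding ceph.conf setting (value in MB, hypothetical):
  [osd]
  osd journal size = 2048

Capacity-wise that is small; the tighter constraints are usually the SSD's sustained write throughput and endurance shared across the 12 journals.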