Re: [ceph-users] OSD Weights

2013-02-14 Thread Sébastien Han
Hi, As far as I know Ceph won't attempt to do any weight modifications. If you use the default CRUSH map, every device gets a default weight of 1. However this value can be modified while the cluster runs. Simply update the CRUSH map like so: # ceph osd crush reweight {name} {weight} If you
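[A minimal sketch of the runtime reweight plus a quick check; the OSD name osd.3 and the weight 2.0 are placeholders:]

  # raise the CRUSH weight of osd.3 to 2.0 while the cluster is running
  ceph osd crush reweight osd.3 2.0
  # confirm the new weight in the CRUSH hierarchy
  ceph osd tree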

urgent journal conf on ceph.conf

2013-02-14 Thread charles L
Please can someone help me with the ceph.conf for 0.56.2? I have two servers for STORAGE with 3 TB hard drives each and two SSDs each. I want to put the OSD data on the hard drives and the osd journal on the SSDs, and I want to know how the osd journal configuration is pointed at the SSD. My SSD is /dev/sdb. I

Re: OSD dies after seconds

2013-02-14 Thread Jesus Cuenca
I upgraded to ceph 0.56-3 but the problem persists... The OSD starts but exits after a second: 2013-02-14 12:18:34.504391 7fae613ea760 10 journal _open journal is not a block device, NOT checking disk write cache on '/var/lib/ceph/osd/ceph-0/journal' 2013-02-14 12:18:34.504400 7fae613ea760 1
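[To capture more detail around a startup crash like this, one option — a sketch, assuming a bobtail-era ceph-osd and OSD id 0 — is to run the daemon in the foreground with journal and filestore logging turned up:]

  # run osd.0 in the foreground, logging to stderr, with verbose
  # journal and filestore debugging
  ceph-osd -i 0 -d --debug_journal 20 --debug_filestore 20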

Re: urgent journal conf on ceph.conf

2013-02-14 Thread Wido den Hollander
On 02/14/2013 11:24 AM, charles L wrote: Please can someone help me with the ceph.conf for 0.56.2? I have two servers for STORAGE with 3 TB hard drives each and two SSDs each. I want to put the OSD data on the hard drives and the osd journal on the SSDs, and I want to know how the osd journal configuration is set

Re: urgent journal conf on ceph.conf

2013-02-14 Thread Joao Eduardo Luis
Including ceph-users, as it feels like this belongs there :-) On 02/14/2013 01:47 PM, Wido den Hollander wrote: On 02/14/2013 11:24 AM, charles L wrote: Please can someone help me with the ceph.conf for 0.56.2? I have two servers for STORAGE with 3 TB hard drives each and two SSDs each. I want

Re: [ceph-users] urgent journal conf on ceph.conf

2013-02-14 Thread Sébastien Han
+1 for Wido. Moreover, if you want to store the journal on a block device, you should partition your journal disk and assign one partition per OSD, e.g. /dev/sdb1, 2, 3. Again, osd journal = /dev/osd$id/journal is wrong; if you use this directive, it must point to a filesystem, because the
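[Putting the thread's advice together, a sketch of the relevant ceph.conf fragment, assuming one server (hostname storage1, hypothetical) with two OSDs on the 3 TB disks and the SSD partitioned into /dev/sdb1 and /dev/sdb2:]

  [osd.0]
      host = storage1
      osd data = /var/lib/ceph/osd/ceph-0
      # raw SSD partition; osd journal size is only needed for file-based journals
      osd journal = /dev/sdb1

  [osd.1]
      host = storage1
      osd data = /var/lib/ceph/osd/ceph-1
      osd journal = /dev/sdb2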

radosgw: Update a key's meta data

2013-02-14 Thread Sylvain Munaut
Hi, I was wondering how I could update a key's metadata, like the Content-Type. The solution on S3 seems to be to copy the key onto itself, replacing the metadata. If I do that in ceph, will it work? And more importantly, will it be done intelligently (i.e. without copying the actual file data
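[For reference, the copy-onto-itself pattern looks like this with boto; the endpoint, credentials, and object names below are placeholders, and whether radosgw short-circuits the data copy is exactly the open question in this thread:]

  import boto.s3.connection

  # Endpoint and credentials are placeholders for a local radosgw.
  conn = boto.s3.connection.S3Connection(
      aws_access_key_id='ACCESS_KEY',
      aws_secret_access_key='SECRET_KEY',
      host='radosgw.example.com',
      is_secure=False,
      calling_format=boto.s3.connection.OrdinaryCallingFormat())

  bucket = conn.get_bucket('mybucket')
  key = bucket.get_key('myobject')
  # Supplying metadata makes boto send x-amz-metadata-directive: REPLACE,
  # i.e. the key is copied onto itself with the new metadata applied.
  key.copy(bucket.name, key.name,
           metadata={'Content-Type': 'application/json'},
           preserve_acl=True)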

osdc/ObjectCacher.cc: 834: FAILED assert(ob->last_commit_tid < tid)

2013-02-14 Thread Martin Mailand
Hi List, I can reproduce this assertion reliably; how can I help debug it? -martin (Reading database ... 52246 files and directories currently installed.) Preparing to replace linux-firmware 1.79 (using .../linux-firmware_1.79.1_all.deb) ... Unpacking replacement linux-firmware

Re: osdc/ObjectCacher.cc: 834: FAILED assert(ob->last_commit_tid < tid)

2013-02-14 Thread Sage Weil
Hi Martin- On Thu, 14 Feb 2013, Martin Mailand wrote: Hi List, I can reproduce this assertion reliably; how can I help debug it? Can you describe the workload? Are the OSDs also running 0.56.2(+)? Any other activity on the server side (data migration, OSD failure, etc.) that may have

Re: Simple doc update pull request

2013-02-14 Thread Sage Weil
Merged. Thanks, Travis! On Thu, 14 Feb 2013, Travis Rhoden wrote: Hey folks, I submitted a pull request for some simple doc updates related to cephx and creating new keys/clients. Please take a look when possible. https://github.com/ceph/ceph/pull/56 - Travis

Re: Questions on some minor issues when upgrading from 0.48 to 0.56

2013-02-14 Thread Daniel Hoang
Thanks Wido for the clarifications. I guess this means that I can update the OSD cluster to 0.56, and a client that was compiled with the old librados2 0.48 should still be able to access the cluster without any change. A client compiled with the new librados2 0.56 (API level 0.48) has

Re: osdc/ObjectCacher.cc: 834: FAILED assert(ob->last_commit_tid < tid)

2013-02-14 Thread Martin Mailand
Hi Sage, everything is on 0.56.2 and the cluster is healthy. I can reproduce it with an apt-get upgrade within the VM; the VM OS is Ubuntu 12.04. Most of the time the assertion happens when the firmware .deb is updated. See the log in my first email. But I use a custom-built qemu (1.4-rc1),

Re: [ceph] Fix more performance issues found by cppcheck (#51)

2013-02-14 Thread Gregory Farnum
Hey Danny, I've merged in most of these (commit ffda2eab4695af79abdc9ed9bf001c3cd662a1f2) but had comments on a couple: d99764e8c72a24eaba0542944f497cc2d9e154b4 is a patch on gtest. We did import that wholesale into our repository, as that's what they recommend, but I'd prefer to get patches by

Re: [ceph-commit] [ceph/ceph] e330b7: mon: create fail_mds_gid() helper; make 'ceph mds ...

2013-02-14 Thread Gregory Farnum
On Thu, Feb 14, 2013 at 11:39 AM, GitHub nore...@github.com wrote: Branch: refs/heads/master Home: https://github.com/ceph/ceph Commit: e330b7ec54f89ca799ada376d5615e3c1dfc54f0 https://github.com/ceph/ceph/commit/e330b7ec54f89ca799ada376d5615e3c1dfc54f0 Author: Sage Weil

Re: [ceph-commit] [ceph/ceph] e330b7: mon: create fail_mds_gid() helper; make 'ceph mds ...

2013-02-14 Thread Sage Weil
On Thu, 14 Feb 2013, Gregory Farnum wrote: In the tests I ran last night on this branch I saw some Valgrind warnings in the OSDs and Monitors, but I couldn't figure out any way for this series to have caused them so I assume they're latent and pop up occasionally in master? In any case, please

Further thoughts on fsck for CephFS

2013-02-14 Thread Gregory Farnum
Sage sent out an early draft of what we were thinking about doing for fsck on CephFS at the beginning of the week, but it was a bit incomplete and still very much a work in progress. I spent a good chunk of today thinking about it more so that we can start planning ticket-level chunks of work. The

Re: slow requests, hunting for new mon

2013-02-14 Thread Chris Dunlop
On 2013-02-12, Chris Dunlop ch...@onthe.net.au wrote: Hi, What are likely causes for 'slow requests' and 'monclient: hunting for new mon' messages? E.g.: 2013-02-12 16:27:07.318943 7f9c0bc16700 0 monclient: hunting for new mon ... 2013-02-12 16:27:45.892314 7f9c13c26700 0 log [WRN] : 6 slow

Re: slow requests, hunting for new mon

2013-02-14 Thread Sage Weil
On Fri, 15 Feb 2013, Chris Dunlop wrote: On 2013-02-12, Chris Dunlop ch...@onthe.net.au wrote: Hi, What are likely causes for 'slow requests' and 'monclient: hunting for new mon' messages? E.g.: 2013-02-12 16:27:07.318943 7f9c0bc16700 0 monclient: hunting for new mon ... 2013-02-12
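[Two useful starting points when chasing these on a bobtail cluster — a sketch; the admin socket path assumes the default layout, and dump_ops_in_flight assumes a 0.56-era daemon:]

  # which requests are slow, and on which OSDs
  ceph health detail
  # on the affected OSD host: what operations are currently in flight
  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_ops_in_flight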

Re: [ceph-users] snapshot, clone and mount a VM-Image

2013-02-14 Thread Josh Durgin
On 02/14/2013 12:53 PM, Sage Weil wrote: Hi Jens- On Thu, 14 Feb 2013, Jens Kristian Søgaard wrote: Hi Sage, block device level. We plan to implement an incremental backup function for the relative change between two snapshots (or a snapshot and the head). It's O(n) the size of the device

Mon losing touch with OSDs

2013-02-14 Thread Chris Dunlop
G'day, In an otherwise seemingly healthy cluster (ceph 0.56.2), what might cause the mons to lose touch with the osds? I imagine a network glitch could cause it, but I can't see any issues in any other system logs on any of the machines on the network. Having (mostly?) resolved my previous slow

Re: Mon losing touch with OSDs

2013-02-14 Thread Sage Weil
Hi Chris, On Fri, 15 Feb 2013, Chris Dunlop wrote: G'day, In an otherwise seemingly healthy cluster (ceph 0.56.2), what might cause the mons to lose touch with the osds? I imagine a network glitch could cause it, but I can't see any issues in any other system logs on any of the machines
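[A quick sketch of checking both sides of that relationship; default paths and names assumed:]

  # is the monitor quorum stable?
  ceph quorum_status
  # which OSDs does the cluster currently consider up/in?
  ceph osd dump
  # failure reports from peer OSDs typically appear in the monitor log
  grep -i 'fail' /var/log/ceph/ceph-mon.*.log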