Re: [ceph-users] ceph-deploy Errors - Fedora 21

2015-01-02 Thread Ken Dreyer
On 01/02/2015 12:38 PM, Travis Rhoden wrote: > Hello, I believe this is a problem specific to Fedora packaging. The Fedora package for ceph-deploy is a bit different than the ones hosted at ceph.com. Can you please tell me the output of "rpm -q python-remoto"? I

Re: [ceph-users] rbd map hangs

2015-01-02 Thread Dyweni - Ceph-Users
Your OSDs are full. The cluster will block until space is freed up and both OSDs leave the full state. You have 2 OSDs, so I'm assuming you are running a replica size of 2? A quick (but risky) method might be to reduce your replica count down to 1 to get the cluster unblocked, clean up space, then go
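A minimal sketch of that workaround, assuming the data lives in the default "rbd" pool (any pool name would do); note that with a single replica, any disk failure during the cleanup loses data:

  # Drop the replica count on the affected pool to unblock I/O
  ceph osd pool set rbd size 1
  # ...delete data or add capacity, then restore redundancy
  ceph osd pool set rbd size 2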

Re: [ceph-users] Ceph-deploy install and pinning on Ubuntu 14.04

2015-01-02 Thread Travis Rhoden
Hi Giuseppe, ceph-deploy does try to do some pinning for the Ceph packages. Those settings should be found at /etc/apt/preferences.d/ceph.pref If you find something is incorrect there, please let us know what it is and we can look into it! - Travis On Sat, Dec 20, 2014 at 11:32 AM, Giusep
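For reference, an apt preferences entry of roughly this shape is what such a pin file contains; the exact origin and priority that ceph-deploy writes may differ, so treat this only as an illustration of the format:

  Explanation: illustrative pin giving the ceph.com repository
  Explanation: priority over the distribution packages
  Package: *
  Pin: origin ceph.com
  Pin-Priority: 1001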

[ceph-users] rbd map hangs

2015-01-02 Thread Max Power
After I tried to copy some files into an rbd device I ran into an "osd full" state. So I restarted my server and wanted to remove some files from the filesystem again. But at this moment I cannot execute "rbd map" anymore and I do not know why. This all happened in my testing environment and this is
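A few read-only commands that may help confirm what the cluster and the kernel client are doing in this situation:

  ceph health detail    # shows which OSDs are flagged full/nearfull
  ceph df               # per-pool and raw utilisation
  rbd showmapped        # any images already mapped on this host
  dmesg | tail          # kernel rbd/libceph messages while "rbd map" hangs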

Re: [ceph-users] ceph-deploy Errors - Fedora 21

2015-01-02 Thread Travis Rhoden
Hello, I believe this is a problem specific to Fedora packaging. The Fedora package for ceph-deploy is a bit different than the ones hosted at ceph.com. Can you please tell me the output of "rpm -q python-remoto"? I believe the problem is that the python-remoto package is too old, and there is n
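The check Travis is asking for, plus the obvious follow-up if the package turns out to be behind (the exact required version is not given here):

  rpm -q python-remoto ceph-deploy
  sudo yum update python-remoto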

Re: [ceph-users] Weighting question

2015-01-02 Thread Gregory Farnum
The meant-for-human-consumption free space estimates and things won't be accurate if you weight evenly instead of by size, but otherwise things should work just fine -- you'll simply get full OSD warnings when you have 1TB/OSD. -Greg On Thu, Jan 1, 2015 at 3:10 PM Lindsay Mathieson <lindsay.mathie
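A sketch of the even-weighting approach Greg describes, with made-up OSD ids; utilisation then has to be watched by hand, because the smaller drive fills first:

  ceph osd crush reweight osd.0 1.0    # 3TB spinner
  ceph osd crush reweight osd.1 1.0    # 1TB drive, same weight
  ceph osd tree                        # confirm the weights took effect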

Re: [ceph-users] Adding Crush Rules

2015-01-02 Thread Gregory Farnum
I'm on my phone at the moment, but I think if you run "ceph osd crush rule" it will prompt you with the relevant options? On Tue, Dec 30, 2014 at 6:00 PM Lindsay Mathieson <lindsay.mathie...@gmail.com> wrote: > Is there a command to do this without decompiling/editing/compiling the crush set?
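The relevant subcommands, with placeholder rule, root, bucket-type and pool names:

  ceph osd crush rule ls                               # list existing rules
  ceph osd crush rule dump                             # show them in detail
  ceph osd crush rule create-simple ssd-rule ssd-root host
  # then point a pool at the new rule (id taken from the dump output)
  ceph osd pool set mypool crush_ruleset 1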

Re: [ceph-users] RadosGW slow gc

2015-01-02 Thread Gregory Farnum
You can store radosgw data in a regular EC pool without any caching in front. I suspect this will work better for you, as part of the slowness is probably the OSDs trying to look up all the objects in the ec pool before deleting them. You should be able to check if that's the case by looking at the
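If the gc backlog itself is the suspect, these radosgw-admin calls show and drain it; the EC pool creation is only a placeholder example (name, PG counts and erasure profile would need to fit the cluster):

  radosgw-admin gc list --include-all    # objects queued for deletion
  radosgw-admin gc process               # run a gc pass now
  # a plain EC pool that bucket data could live in, no cache tier in front
  ceph osd pool create .rgw.buckets 128 128 erasure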

Re: [ceph-users] ceph-deploy Errors - Fedora 21

2015-01-02 Thread Ken Dreyer
On 12/29/2014 08:24 PM, deeepdish wrote: > Hello. I’m having an issue with ceph-deploy on Fedora 21. - Installed ceph-deploy via ‘yum install ceph-deploy' - created non-root user - assigned sudo privs as per documentation - http://ceph.com/docs/master/rados/deployment/preflight-c
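The preflight-style sudo setup referred to above usually boils down to something like the following ("cephdeploy" is only an example user name; check the linked docs for the exact steps):

  sudo useradd -d /home/cephdeploy -m cephdeploy
  sudo passwd cephdeploy
  echo "cephdeploy ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephdeploy
  sudo chmod 0440 /etc/sudoers.d/cephdeploy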

Re: [ceph-users] Weird scrub problem

2015-01-02 Thread Samuel Just
If the file structure is corrupted, then all bets are kind of off. You'd have to characterize precisely the kind of corruption you want handled and add a feature request for that. -Sam On Sat, Dec 27, 2014 at 5:14 PM, Andrey Korolyov wrote: > On Sat, Dec 27, 2014 at 4:09 PM, Andrey Korolyov wrot

Re: [ceph-users] Is there an negative relationship between storage utilization and ceph performance?

2015-01-02 Thread Udo Lembke
Hi again, ... after a long time! I have now changed the whole ceph cluster from xfs to ext4 (60 OSDs), changed tunables and filled the cluster again, so I can compare the bench values. For my setup the cluster runs better with ext4 than with xfs - latency dropped from ~14ms to ~8ms. (rados -p test benc
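A typical rados bench run of the kind referenced above, against a throwaway pool named "test":

  rados -p test bench 60 write -t 16 --no-cleanup   # 60s write test, keep the objects
  rados -p test bench 60 seq -t 16                  # sequential reads of those objects
  rados -p test bench 60 rand -t 16                 # random reads
  rados -p test cleanup                             # remove the benchmark objects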

Re: [ceph-users] Weighting question

2015-01-02 Thread Dyweni - Ceph-Users
On 2015-01-01 14:04, Lindsay Mathieson wrote: On Thu, 1 Jan 2015 08:27:33 AM Dyweni - Ceph-Users wrote: You might see a little improvement on the writes (since the spinners have to work too), but the reads should have the most improvement (since ceph only has to read from the ssd). Couple of t

Re: [ceph-users] redundancy with 2 nodes

2015-01-02 Thread Mark Kirkwood
On 01/01/15 23:16, Christian Balzer wrote: Hello, On Thu, 01 Jan 2015 18:25:47 +1300 Mark Kirkwood wrote: but I agree that you should probably not get a HEALTH OK status when you have just set up 2 (or in fact any even number of) monitors... HEALTH WARN would make more sense, with a wee message

Re: [ceph-users] Weighting question

2015-01-02 Thread Lindsay Mathieson
On Thu, 1 Jan 2015 08:50:20 AM you wrote: > http://ceph.com/docs/master/rados/operations/crush-map/#primary-affinity > This may help you too: > http://cephnotes.ksperis.com/blog/2014/08/20/ceph-primary-affinity H - so if I have three OSD's on a Node (looking at get two extra d
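The primary-affinity knob from those links, as a rough sketch with invented OSD ids and values; affinity ranges from 0.0 to 1.0 and only influences which replica is chosen as primary for reads:

  # ceph.conf must allow it first:
  #   mon osd allow primary affinity = true
  ceph osd primary-affinity osd.2 1.0    # prefer the SSD-backed OSD as primary
  ceph osd primary-affinity osd.0 0.5    # de-prefer the spinners
  ceph osd primary-affinity osd.1 0.5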

[ceph-users] Worthwhile setting up Cache tier with small leftover SSD partions?

2015-01-02 Thread Lindsay Mathieson
Expanding my tiny ceph setup from 2 OSD's to six, and two extra SSD's for journals (IBM 530 120GB). Yah, I know the 5300's would be much better. Assuming I use 10GB per OSD for journal and 5GB spare to improve the SSD lifetime, that leaves 85GB spare per SSD. Is it worthwhile setting up a
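If the leftover partitions do become a cache tier, the setup is roughly the following; pool names, PG counts and the size cap are placeholders, and a CRUSH rule restricting the cache pool to the SSD partitions is assumed but not shown:

  ceph osd pool create ssd-cache 64 64
  ceph osd tier add rbd ssd-cache
  ceph osd tier cache-mode ssd-cache writeback
  ceph osd tier set-overlay rbd ssd-cache
  # keep the cache well under the ~85GB of spare space per SSD
  ceph osd pool set ssd-cache target_max_bytes 64424509440   # ~60 GiB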

Re: [ceph-users] Weighting question

2015-01-02 Thread Dyweni - Ceph-Users
On 2015-01-01 08:27, Dyweni - Ceph-Users wrote: Hi, I'm going to take a stab at this, since I've just recently/am currently dealing with this/something similar myself. On 2014-12-31 21:59, Lindsay Mathieson wrote: As mentioned before :) we have two osd nodes with one 3TB osd each. (replica

Re: [ceph-users] Weighting question

2015-01-02 Thread Dyweni - Ceph-Users
Hi, I'm going to take a stab at this, since I've just recently/am currently dealing with this/something similar myself. On 2014-12-31 21:59, Lindsay Mathieson wrote: As mentioned before :) we have two osd nodes with one 3TB osd each. (replica 2) About to add a smaller (1TB) faster drive to

Re: [ceph-users] redundancy with 2 nodes

2015-01-02 Thread Jiri Kanicky
Hi, I noticed this message after shutting down the other node. You might be right that I need 3 monitors: 2015-01-01 15:47:35.990260 7f22858dd700 0 monclient: hunting for new mon. But what is quite unexpected is that you cannot run even "ceph status" on the running node to find out the state o
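That behaviour is expected: with 2 monitors a majority is still 2, so losing either one loses quorum and every monitor-backed command, including "ceph status", hangs while the client hunts for a monitor. A sketch of adding a third monitor with ceph-deploy, using a placeholder hostname:

  ceph-deploy mon add mon3                    # "mon3" is a placeholder host
  ceph quorum_status --format json-pretty     # confirm all three monitors joined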

Re: [ceph-users] Weighting question

2015-01-02 Thread Christian Balzer
On Thu, 01 Jan 2015 13:59:57 +1000 Lindsay Mathieson wrote: > As mentioned before :) we have two osd nodes with one 3TB osd each (replica 2). About to add a smaller (1TB) faster drive to each node. From the docs, normal practice would be to weight it in accordance with size, i.e 3 for
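The conventional size-based weighting, with invented OSD ids and host names, for comparison with the even-weighting variant discussed earlier in the thread:

  ceph osd crush add osd.2 1.0 host=node1    # new 1TB drive, weight roughly = size in TB
  ceph osd crush reweight osd.0 3.0          # existing 3TB drive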

Re: [ceph-users] Not running multiple services on the same machine?

2015-01-02 Thread Lindsay Mathieson
On Fri, 2 Jan 2015 09:59:36 PM Gregory Farnum wrote: > The only technical issue I can think of is that you don't want to put kernel clients on the same OS as an OSD (due to deadlock scenarios under memory pressure and writeback). The only kernel client is the cephfs driver? qemu rbd client is

Re: [ceph-users] Not running multiple services on the same machine?

2015-01-02 Thread Gregory Farnum
I think it's just for service isolation that people recommend splitting them. The only technical issue I can think of is that you don't want to put kernel clients on the same OS as an OSD (due to deadlock scenarios under memory pressure and writeback). -Greg On Sat, Dec 27, 2014 at 12:11 PM Christo