Re: [ceph-users] How to remove mds from cluster

2014-12-30 Thread debian Only
ceph 0.87, Debian 7.5, anyone can help? 2014-12-29 20:03 GMT+07:00 debian Only onlydeb...@gmail.com: I want to move the mds from one host to another. How do I do it? What I did is below, but ceph health is not ok and the mds was not removed: root@ceph06-vm:~# ceph mds rm 0 mds.ceph06-vm
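For readers following this thread, the usual sequence for retiring an MDS daemon on a pre-Jewel release such as 0.87 looks roughly like this (hostname taken from the thread; a sketch, not a verified procedure):

    # on the host currently running the MDS
    /etc/init.d/ceph stop mds.ceph06-vm     # stop the daemon first
    ceph mds fail 0                         # mark rank 0 as failed
    ceph auth del mds.ceph06-vm             # remove the daemon's cephx key
    # then drop the [mds.ceph06-vm] section from ceph.conf and remove its data dir,
    # e.g. /var/lib/ceph/mds/ceph-ceph06-vm, before deploying the MDS on the new host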

Re: [ceph-users] Ceph data consistency

2014-12-30 Thread Chen, Xiaoxi
Hi, First of all, the data is safe since it's persisted in the journal; if an error occurs on the OSD data partition, replaying the journal will get the data back. Also, there is a wbthrottle there: you can configure how much data (IOs, bytes, inodes) you want to remain in memory. A background thread
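The throttle mentioned here is exposed through the filestore wbthrottle options; a hedged ceph.conf sketch for an XFS-backed OSD (the numbers are only illustrative, check your release's defaults):

    [osd]
    # start the background flusher once this much dirty data has accumulated
    filestore wbthrottle xfs bytes start flusher = 41943040
    filestore wbthrottle xfs ios start flusher = 500
    filestore wbthrottle xfs inodes start flusher = 500
    # hard limits: new writes block until the flusher catches up
    filestore wbthrottle xfs bytes hard limit = 419430400
    filestore wbthrottle xfs ios hard limit = 5000
    filestore wbthrottle xfs inodes hard limit = 5000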

Re: [ceph-users] Ceph data consistency

2014-12-30 Thread Paweł Sadowski
On 12/30/2014 09:40 AM, Chen, Xiaoxi wrote: Hi, First of all, the data is safe since it's persisted in the journal; if an error occurs on the OSD data partition, replaying the journal will get the data back. Agree. Data are safe in the journal. But when the journal is flushed, data are moved to the filestore and

Re: [ceph-users] Improving Performance with more OSD's?

2014-12-30 Thread Eneko Lacunza
Hi, On 29/12/14 15:12, Christian Balzer wrote: 3rd Node - Monitor only, for quorum - Intel Nuc - 8GB RAM - CPU: Celeron N2820 Uh oh, a bit weak for a monitor. Where does the OS live (on this and the other nodes)? The leveldb (/var/lib/ceph/..) of the monitors likes it fast, SSDs preferably.

Re: [ceph-users] How to remove mds from cluster

2014-12-30 Thread Lindsay Mathieson
On Tue, 30 Dec 2014 03:11:25 PM debian Only wrote: ceph 0.87, Debian 7.5, anyone can help? 2014-12-29 20:03 GMT+07:00 debian Only onlydeb...@gmail.com: I want to move the mds from one host to another. How do I do it? What I did is below, but ceph health is not ok and the mds was not removed:

Re: [ceph-users] Block and NAS Services for Non Linux OS

2014-12-30 Thread Eneko Lacunza
Hi Steven, Welcome to the list. On 30/12/14 11:47, Steven Sim wrote: This is my first posting and I apologize if the content or query is not appropriate. My understanding of Ceph is that the block and NAS services are provided through specialized (albeit open source) kernel modules for Linux. What

Re: [ceph-users] Improving Performance with more OSD's?

2014-12-30 Thread Eneko Lacunza
Hi, On 30/12/14 11:55, Lindsay Mathieson wrote: On Tue, 30 Dec 2014 11:26:08 AM Eneko Lacunza wrote: have a small setup with such a node (only 4 GB RAM, another 2 good nodes for OSD and virtualization) - it works like a charm and CPU max is always under 5% in the graphs. It only peaks when

Re: [ceph-users] Improving Performance with more OSD's?

2014-12-30 Thread Lindsay Mathieson
On Tue, 30 Dec 2014 11:26:08 AM Eneko Lacunza wrote: have a small setup with such a node (only 4 GB RAM, another 2 good nodes for OSD and virtualization) - it works like a charm and CPU max is always under 5% in the graphs. It only peaks when backups are dumped to its 1TB disk using NFS.

Re: [ceph-users] Ceph PG Incomplete = Cluster unusable

2014-12-30 Thread Christian Eichelmann
Hi Nico and all others who answered, After some more trying to somehow get the PGs into a working state (I've tried force_create_pg, which was putting them into the creating state. But that was obviously not true, since after rebooting one of the containing OSDs it went back to incomplete), I decided to
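For readers hitting the same state, the usual first steps for inspecting incomplete PGs look roughly like this (the PG id is a placeholder; note that force_create_pg recreates an empty PG and therefore discards its data, so it is a last resort):

    ceph health detail | grep incomplete    # list the affected PG ids
    ceph pg dump_stuck inactive             # show PGs that are not serving I/O
    ceph pg 2.5f query                      # peering state and which OSDs it is blocked on
    ceph pg force_create_pg 2.5f            # last resort: recreate the PG empty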

Re: [ceph-users] Ceph PG Incomplete = Cluster unusable

2014-12-30 Thread Eneko Lacunza
Hi Christian, Have you tried to migrate the disk from the old storage (pool) to the new one? I think it should show the same problem, but I think it'd be a much easier path to recover than the posix copy. How full is your storage? Maybe you can customize the crushmap, so that some OSDs
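The pool-to-pool migration suggested here can be done entirely inside RADOS; a hedged sketch, assuming an image named vm-disk (hypothetical) and a source image that is still readable:

    # straight copy between pools
    rbd cp oldpool/vm-disk newpool/vm-disk
    # or stream it through export/import and pick the destination image format explicitly
    rbd export oldpool/vm-disk - | rbd import --image-format 2 - newpool/vm-disk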

Re: [ceph-users] Ceph PG Incomplete = Cluster unusable

2014-12-30 Thread Christian Eichelmann
Hi Eneko, I was trying an rbd cp before, but that was hanging as well. But I couldn't find out if the source image was causing the hang or the destination image. That's why I decided to try a posix copy. Our cluster is still nearly empty (12TB / 867TB). But as far as I understood (if not, somebody

Re: [ceph-users] Ceph PG Incomplete = Cluster unusable

2014-12-30 Thread Eneko Lacunza
Hi Christian, Do the new pool's pgs also show as incomplete? Did you notice anything remarkable in the ceph logs during the new pool's image format? On 30/12/14 12:31, Christian Eichelmann wrote: Hi Eneko, I was trying an rbd cp before, but that was hanging as well. But I couldn't find out if the source

Re: [ceph-users] Ceph PG Incomplete = Cluster unusable

2014-12-30 Thread Christian Eichelmann
Hi Eneko, nope, the new pool has all pgs active+clean, no errors during image creation. The format command just hangs, without error. On 30.12.2014 12:33, Eneko Lacunza wrote: Hi Christian, Do the new pool's pgs also show as incomplete? Did you notice anything remarkable in the ceph logs during the

Re: [ceph-users] Block and NAS Services for Non Linux OS

2014-12-30 Thread Nick Fisk
I'm working on something very similar at the moment to present RBDs to ESXi hosts. I'm going to run 2 or 3 VMs on the local ESXi storage to act as iSCSI proxy nodes. They will run a pacemaker HA setup with the RBD and LIO iSCSI resource agents to provide a failover iSCSI target which maps back
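In outline, what each proxy node ends up doing per RBD is roughly the following (image and IQN names are made up; in the actual setup pacemaker's RBD and LIO resource agents drive these steps rather than a shell):

    rbd map rbd/esx-datastore1              # exposes the image as a local block device
    targetcli /backstores/block create name=esx-ds1 dev=/dev/rbd/rbd/esx-datastore1
    targetcli /iscsi create iqn.2014-12.com.example:esx-ds1
    targetcli /iscsi/iqn.2014-12.com.example:esx-ds1/tpg1/luns create /backstores/block/esx-ds1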

Re: [ceph-users] Block and NAS Services for Non Linux OS

2014-12-30 Thread Eneko Lacunza
Hi Steven, On 30/12/14 13:26, Steven Sim wrote: You mentioned that machines see a QEMU IDE/SCSI disk, and they don't know whether it's on Ceph, NFS, local storage, LVM, ... so it works OK for any VM guest OS. But what if I want the Ceph cluster to serve a whole range of clients in the data center,

Re: [ceph-users] cephfs kernel module reports error on mount

2014-12-30 Thread Jiri Kanicky
Hi. I get the same message on Debian Jessie, while CephFS mounts and works fine. Jiri. On 18/12/2014 01:00, John Spray wrote: Hmm, from a quick google it appears you are not the only one who has seen this symptom with mount.ceph. Our mtab code appears to have diverged a bit from

[ceph-users] Crush Map and SSD Pools

2014-12-30 Thread Lindsay Mathieson
I looked at the section for setting up different pools with different OSDs (e.g. an SSD pool): http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds It seems to make the assumption that the SSDs and platters all live on separate hosts. Not the

Re: [ceph-users] Crush Map and SSD Pools

2014-12-30 Thread Erik Logtenberg
Hi Lindsay, Actually you just set up two entries for each host in your crush map, one for HDDs and one for SSDs. My OSDs look like this:
# id    weight  type name               up/down reweight
-6      1.8     root ssd
-7      0.45            host ceph-01-ssd
0       0.45                    osd.0   up
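A rule that points a pool at the ssd root of such a map would look roughly like this (standard crush rule syntax; the ruleset number is arbitrary), after which the pool is switched over with "ceph osd pool set POOL crush_ruleset 1":

    rule ssd {
            ruleset 1
            type replicated
            min_size 1
            max_size 10
            step take ssd
            step chooseleaf firstn 0 type host
            step emit
    }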

[ceph-users] Weights: Hosts vs. OSDs

2014-12-30 Thread Nico Schottelius
Good evening, for some time we have the problem that ceph stores too much data on a host with small disks. Originally we used weight 1 = 1 TB, but we reduced the weight for this particular host further to keep it somehow alive. Our setup currently consists of 3 hosts: wein: 6x 136G (fest
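With the weight 1 = 1 TB convention, the small host is usually tuned per OSD rather than per host; a hedged example for one of the 136 GB disks (the osd id is made up):

    # 136 GB is roughly 0.136 TB, so:
    ceph osd crush reweight osd.12 0.136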

[ceph-users] calamari dashboard missing usage data after adding/removing ceph nodes

2014-12-30 Thread Brian Jarrett
Cephistas, I've been running a Ceph cluster for several months now. I started out with a VM called master as the admin node and a monitor, and two Dell servers as OSD nodes (called Node1 and Node2); I also made them monitors so I had 3 monitors. After I got that all running fine, I added

Re: [ceph-users] calamari dashboard missing usage data after adding/removing ceph nodes

2014-12-30 Thread Michael Kuriger
Hi Brian, I had this problem when I upgraded to firefly (or possibly giant) – At any rate, the data values changed at some point and calamari needs a slight update. Check this file: /opt/calamari/venv/lib/python2.6/site-packages/calamari_rest_api-0.1-py2.6.egg/calamari_rest/views/v1.py diff
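For others hitting this: the field names in the "ceph df" JSON output changed in Giant from total_space / total_used / total_avail (in KB) to total_bytes / total_used_bytes / total_avail_bytes, so the edit to v1.py is essentially a rename along these lines. A sketch only, run once, and check the actual file first:

    cd /opt/calamari/venv/lib/python2.6/site-packages/calamari_rest_api-0.1-py2.6.egg/calamari_rest/views
    sed -i.bak -e 's/total_used/total_used_bytes/' \
               -e 's/total_space/total_bytes/' \
               -e 's/total_avail/total_avail_bytes/' v1.py
    # restart apache / calamari afterwards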

Re: [ceph-users] Cache tiers flushing logic

2014-12-30 Thread Eric Eastman
On Tue, Dec 30, 2014 at 7:56 AM, Erik Logtenberg e...@logtenberg.eu wrote: Hi, I use a cache tier on SSD's in front of the data pool on HDD's. I don't understand the logic behind the flushing of the cache however. If I start writing data to the pool, it all ends up in the cache pool at

Re: [ceph-users] calamari dashboard missing usage data after adding/removing ceph nodes

2014-12-30 Thread Brian Jarrett
It took me a minute to realize the original for those lines was given last. LOL Thanks! Those changes and a restart of Apache worked perfectly. I'd like to know how those values get populated, and why it changed from total_used to total_used_bytes, etc. On Tue, Dec 30, 2014 at 10:39 AM,

Re: [ceph-users] Cache tiers flushing logic

2014-12-30 Thread Erik Logtenberg
Hi Erik, I have tiering working on a couple test clusters. It seems to be working with Ceph v0.90 when I set: ceph osd pool set POOL hit_set_type bloom ceph osd pool set POOL hit_set_count 1 ceph osd pool set POOL hit_set_period 3600 ceph osd pool set POOL
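The knobs that decide when the tiering agent actually flushes and evicts are pool properties as well; a hedged continuation of the settings above (values purely illustrative):

    ceph osd pool set POOL target_max_bytes 100000000000   # the agent needs an absolute size target
    ceph osd pool set POOL cache_target_dirty_ratio 0.4    # start flushing dirty objects at 40% of the target
    ceph osd pool set POOL cache_target_full_ratio 0.8     # start evicting clean objects at 80% of the target
    ceph osd pool set POOL cache_min_flush_age 600
    ceph osd pool set POOL cache_min_evict_age 1800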

Re: [ceph-users] Weights: Hosts vs. OSDs

2014-12-30 Thread Lindsay Mathieson
On Tue, 30 Dec 2014 05:07:31 PM Nico Schottelius wrote: While writing this I noted that the relation / factor is exactly 5.5 times wrong, so I *guess* that ceph treats all hosts with the same weight (even though it looks differently to me in the osd tree and the crushmap)? I believe if you

Re: [ceph-users] Cache tiers flushing logic

2014-12-30 Thread Eric Eastman
On Tue, Dec 30, 2014 at 12:38 PM, Erik Logtenberg e...@logtenberg.eu wrote: Hi Erik, I have tiering working on a couple test clusters. It seems to be working with Ceph v0.90 when I set: ceph osd pool set POOL hit_set_type bloom ceph osd pool set POOL hit_set_count 1 ceph osd pool set

Re: [ceph-users] Crush Map and SSD Pools

2014-12-30 Thread Lindsay Mathieson
On Tue, 30 Dec 2014 04:18:07 PM Erik Logtenberg wrote: As you can see, I have four hosts: ceph-01 ... ceph-04, but eight host entries. This works great. you have - host ceph-01 - host ceph-01-ssd Don't the host names have to match the real host names? -- Lindsay

Re: [ceph-users] Weights: Hosts vs. OSDs

2014-12-30 Thread Nico Schottelius
Hey Lindsay, Lindsay Mathieson [Wed, Dec 31, 2014 at 06:23:10AM +1000]: On Tue, 30 Dec 2014 05:07:31 PM Nico Schottelius wrote: While writing this I noted that the relation / factor is exactly 5.5 times wrong, so I *guess* that ceph treats all hosts with the same weight (even though it

Re: [ceph-users] Crush Map and SSD Pools

2014-12-30 Thread Erik Logtenberg
No, bucket names in crush map are completely arbitrary. In fact, crush doesn't really know what a host is. It is just a bucket, like rack or datacenter. But they could be called cat and mouse just as well. The only reason to use host names is for human readability. You can then use crush rules

Re: [ceph-users] Crush Map and SSD Pools

2014-12-30 Thread Lindsay Mathieson
On Tue, 30 Dec 2014 10:38:14 PM Erik Logtenberg wrote: No, bucket names in crush map are completely arbitrary. In fact, crush doesn't really know what a host is. It is just a bucket, like rack or datacenter. But they could be called cat and mouse just as well. Hmmm, I tried that earlier and

Re: [ceph-users] Crush Map and SSD Pools

2014-12-30 Thread Erik Logtenberg
If you want to be able to start your osd's with /etc/init.d/ceph init script, then you better make sure that /etc/ceph/ceph.conf does link the osd's to the actual hostname :) Check out this snippet from my ceph.conf: [osd.0] host = ceph-01 osd crush location = host=ceph-01-ssd root=ssd [osd.1]
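Continuing that snippet, a fuller (entirely hypothetical) picture of how the two buckets per host map back to the config might be:

    [osd.0]
    host = ceph-01
    osd crush location = host=ceph-01-ssd root=ssd
    [osd.1]
    host = ceph-01
    osd crush location = host=ceph-01 root=default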

[ceph-users] Adding Crush Rules

2014-12-30 Thread Lindsay Mathieson
Is there a command to do this without decompiling/editing/compiling the crush map? Makes me nervous ... -- Lindsay
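For simple cases, yes: replicated rules and bucket moves can be done live from the CLI without touching the compiled map. A sketch (the names are made up):

    # create a replicated rule that picks hosts under the 'ssd' root
    ceph osd crush rule create-simple ssd-rule ssd host
    # add a new bucket and move an existing one, no decompile needed
    ceph osd crush add-bucket ssd root
    ceph osd crush move ceph-01-ssd root=ssd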

Re: [ceph-users] Crush Map and SSD Pools

2014-12-30 Thread Lindsay Mathieson
On Tue, 30 Dec 2014 11:25:40 PM Erik Logtenberg wrote: If you want to be able to start your osd's with /etc/init.d/ceph init script, then you better make sure that /etc/ceph/ceph.conf does link the osd's to the actual hostname I tried again and it was ok for a short while, then *something*

[ceph-users] One more issue with Calamari dashboard and monitor numbers

2014-12-30 Thread Brian Jarrett
Cephistas, I have one other (admittedly minor) issue. The number of Monitors listed on the dashboard (in the Mon section) says 2/2 Quorum, but in the Hosts section it correctly says 3 Reporting 3 Mon/3 OSD. Any idea how I can get the dashboard to display the correct number of monitors in the