[ceph-users] Long interruption when increasing placement groups

2018-07-03 Thread fcid
Hello ceph community, Last week I was increasing the PGs in a pool used for RBD, in an attempt to reach 1024 PGs (from 128 PGs). The increments were 32 each time, and after creating the new placement groups I triggered a re-balance of the data using the pgp_num parameter. Everything was fine
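A minimal sketch of the stepwise approach described in that message, assuming the standard `ceph` CLI is installed and the pool is named `rbd`; the pool name, target, and step size are illustrative, and the JSON layout of `ceph status` can differ slightly between releases.

```python
#!/usr/bin/env python3
"""Grow a pool's pg_num in small steps, raising pgp_num after each step so
data rebalances incrementally instead of all at once. Sketch only."""
import json
import subprocess
import time

POOL = "rbd"      # illustrative pool name
TARGET = 1024     # final pg_num
STEP = 32         # increment per round, as described in the thread


def ceph(*args):
    """Run a ceph CLI command and return its stdout."""
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout


def current_pg_num():
    out = json.loads(ceph("osd", "pool", "get", POOL, "pg_num", "-f", "json"))
    return out["pg_num"]


def wait_for_clean():
    """Poll until every PG reports active+clean before the next increment."""
    while True:
        status = json.loads(ceph("status", "-f", "json"))
        # pgs_by_state lives under pgmap in Luminous-era output
        states = {s["state_name"] for s in status["pgmap"]["pgs_by_state"]}
        if states <= {"active+clean"}:
            return
        time.sleep(30)


while current_pg_num() < TARGET:
    nxt = min(current_pg_num() + STEP, TARGET)
    ceph("osd", "pool", "set", POOL, "pg_num", str(nxt))    # create new PGs
    ceph("osd", "pool", "set", POOL, "pgp_num", str(nxt))   # trigger re-balance
    wait_for_clean()                                        # let the cluster settle
```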

Re: [ceph-users] Error in osd_client.c, request_reinit

2017-12-13 Thread fcid
That's a relief. Thank you, Ilya. On 12/12/2017 04:57 PM, Ilya Dryomov wrote: On Tue, Dec 12, 2017 at 8:18 PM, fcid <f...@altavoz.net> wrote: Hello everyone, We had an incident regarding a client which rebooted after experiencing some issues with a ceph cluster. The other clients who c

[ceph-users] Error in osd_client.c, request_reinit

2017-12-12 Thread fcid
Hello everyone, We had an incident regarding a client which rebooted after experiencing some issues with a ceph cluster. The other clients who consume RBD images from the same ceph cluster showed an error at the time of the reboot in logs related to libceph. The error looks like this: Dec

Re: [ceph-users] Small-cluster performance issues

2017-08-22 Thread fcid
ar 100% busy utilization. filestore_max_sync_interval: I do not recommend decreasing this to 0.1; I would keep it at 5 sec. osd_op_threads: do not increase this unless you have enough cores. But adding disks is the way to go. Maged On 2017-08-22 20:08, fcid wrote:
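A hedged sketch of how the two OSD options mentioned in that reply could be applied at runtime, assuming the standard `ceph` CLI and a FileStore-era (Jewel/Luminous) cluster; the values simply echo the advice above and are not a recommendation beyond it.

```python
#!/usr/bin/env python3
"""Apply the tuning advice above to all running OSDs via injectargs, which
takes effect immediately but is not persistent across restarts. Sketch only."""
import subprocess

SETTINGS = {
    "filestore_max_sync_interval": "5",  # keep the ~5 s default, do not drop to 0.1
    "osd_op_threads": "2",               # leave alone unless you have spare cores
}

for opt, val in SETTINGS.items():
    subprocess.run(
        ["ceph", "tell", "osd.*", "injectargs", f"--{opt}={val}"],
        check=True,
    )

# To persist, add the same keys under [osd] in /etc/ceph/ceph.conf, e.g.:
#   [osd]
#   filestore_max_sync_interval = 5
#   osd_op_threads = 2
```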

Re: [ceph-users] Small-cluster performance issues

2017-08-22 Thread fcid
osd_op_threads: do not increase this unless you have enough cores. I'll look into this today too. But adding disks is the way to go. Maged On 2017-08-22 20:08, fcid wrote: Hello everyone, I've been using ceph to provide storage using RBD for 60 KVM virtual machi

[ceph-users] Small-cluster performance issues

2017-08-22 Thread fcid
Hello everyone, I've been using ceph to provide storage using RBD for 60 KVM virtual machines running on Proxmox. The ceph cluster we have is very small (2 OSDs + 1 mon per node, and a total of 3 nodes) and we are having some performance issues, like high latencies (apply lat: ~0.5 s;
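For reference, a small sketch of how per-OSD commit/apply latencies like the ones quoted above can be pulled from the cluster so a slow disk stands out; it assumes the `ceph` CLI, and the JSON layout of `ceph osd perf` differs slightly between releases.

```python
#!/usr/bin/env python3
"""Print per-OSD commit/apply latency, worst first. Sketch only."""
import json
import subprocess

out = subprocess.run(["ceph", "osd", "perf", "-f", "json"],
                     check=True, capture_output=True, text=True).stdout
data = json.loads(out)
# older releases put the list at the top level, newer ones nest it under "osdstats"
infos = data.get("osd_perf_infos") or data.get("osdstats", {}).get("osd_perf_infos", [])

for osd in sorted(infos, key=lambda o: o["perf_stats"]["apply_latency_ms"], reverse=True):
    stats = osd["perf_stats"]
    print(f"osd.{osd['id']}: commit {stats['commit_latency_ms']} ms, "
          f"apply {stats['apply_latency_ms']} ms")
```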

Re: [ceph-users] VM disk operation blocked during OSDs failures

2016-11-07 Thread fcid
-of-object-replicas). So if size=2 and min_size=1, then an OSD failure means blocked operations to all objects located on the failed OSD until they have been replicated again. On Sat, Nov 5, 2016 at 9:04 AM, fcid <f...@altavoz.net> wrote: Dear ce
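A brief sketch of how the size/min_size values discussed in that reply can be checked per pool; nothing is assumed beyond the standard `ceph` CLI, since pool names come from the cluster itself.

```python
#!/usr/bin/env python3
"""List each pool's size and min_size; I/O to a PG blocks once the number of
available replicas falls below min_size. Sketch only."""
import json
import subprocess


def ceph_json(*args):
    out = subprocess.run(["ceph", *args, "-f", "json"],
                         check=True, capture_output=True, text=True).stdout
    return json.loads(out)


for pool in ceph_json("osd", "pool", "ls"):
    size = ceph_json("osd", "pool", "get", pool, "size")["size"]
    min_size = ceph_json("osd", "pool", "get", pool, "min_size")["min_size"]
    print(f"{pool}: size={size} min_size={min_size}")
```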

[ceph-users] VM disk operation blocked during OSDs failures

2016-11-04 Thread fcid
Dear ceph community, I'm working on a small ceph deployment for testing purposes, in which I want to test the high availability features of Ceph and how clients are affected during outages in the cluster. This small cluster is deployed using 3 servers, on which are running 2 OSDs and 1