Re: [ceph-users] Whole cluster flapping

2018-08-02 Thread CUZA Frédéric
for all. Regards, From: Brent Kennedy Sent: 31 July 2018 23:36 To: CUZA Frédéric ; 'ceph-users' Subject: RE: [ceph-users] Whole cluster flapping I have had this happen during large data movements. It stopped happening after I went to 10Gb (from 1Gb), though. What I had done is inject a setting
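Brent does not say which setting he injected, so the following is only a sketch of the usual runtime throttling knobs one would reach for in this situation; the values are illustrative, not his:

    # runtime-only: lower concurrent backfill/recovery work per OSD
    ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1'
    # verify the running values on one OSD (run on the node hosting osd.0)
    ceph daemon osd.0 config show | grep -E 'osd_max_backfills|osd_recovery_max_active'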

[ceph-users] Whole cluster flapping

2018-07-31 Thread CUZA Frédéric
Hi Everyone, I just upgraded our cluster to Luminous 12.2.7 and I deleted a quite large pool that we had (120 TB). Our cluster is made of 14 nodes, each composed of 12 OSDs (1 HDD -> 1 OSD), and we have SSDs for the journals. After I deleted the large pool my cluster started flapping on all OSDs.
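Not part of the original post, but the usual first-aid step while diagnosing mass flapping is to stop the monitors from marking OSDs down or out while you investigate; a minimal sketch:

    # temporarily prevent OSDs from being marked down or out
    ceph osd set nodown
    ceph osd set noout
    # ... investigate the flapping ...
    # remove the flags once the cluster is stable again
    ceph osd unset nodown
    ceph osd unset noout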

Re: [ceph-users] Whole cluster flapping

2018-08-07 Thread CUZA Frédéric
the mons mark them as down due to no response. Also check the OSD logs to see if they are actually crashing and restarting, and the disk I/O usage (e.g. with iostat). Regards, Webert Lima DevOps Engineer at MAV Tecnologia Belo Horizonte - Brasil IRC NICK - WebertRLZ On Tue, Jul 31, 2018 at 7:23 AM CUZA
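A quick sketch of the checks Webert suggests (paths assume systemd-managed Luminous OSDs; the OSD id 12 is a placeholder):

    # extended per-device I/O statistics, refreshed every 5 seconds
    iostat -xm 5
    # recent log of one OSD daemon on its host (systemd)
    journalctl -u ceph-osd@12 --since "1 hour ago"
    # or follow the OSD log file directly
    tail -f /var/log/ceph/ceph-osd.12.log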

Re: [ceph-users] Whole cluster flapping

2018-08-07 Thread CUZA Frédéric
[detail]` Regards, Webert Lima DevOps Engineer at MAV Tecnologia Belo Horizonte - Brasil IRC NICK - WebertRLZ On Tue, Aug 7, 2018 at 5:46 AM CUZA Frédéric <frederic.c...@sib.fr> wrote: It's been over a week now and the whole cluster keeps flapping, it is never the same OSDs that g

Re: [ceph-users] Whole cluster flapping

2018-08-08 Thread CUZA Frédéric
numbers. Regards, Webert Lima DevOps Engineer at MAV Tecnologia Belo Horizonte - Brasil IRC NICK - WebertRLZ On Tue, Aug 7, 2018 at 10:47 AM CUZA Frédéric <frederic.c...@sib.fr> wrote: The pool is already deleted and no longer present in the stats. Regards, From: ceph-users mailto:ceph

Re: [ceph-users] Whole cluster flapping

2018-08-28 Thread CUZA Frédéric
: Webert de Souza Lima ; CUZA Frédéric Cc: ceph-users Subject: RE: [ceph-users] Whole cluster flapping Hi again Frederic, It may be worth looking at the recovery sleep. osd recovery sleep - Description: Time in seconds to sleep before the next recovery or backfill op. Increasing this value will slow down
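A minimal sketch of how that option could be applied; the 0.1 value is only an example, not something given in the thread:

    # runtime change on all OSDs
    ceph tell osd.* injectargs '--osd_recovery_sleep 0.1'
    # persist it in ceph.conf
    [osd]
    osd recovery sleep = 0.1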

Re: [ceph-users] Be careful with orphans find (was Re: Lost TB for Object storage)

2018-07-20 Thread CUZA Frédéric
Hi Matthew, Thanks for the advice, but we are no longer using orphans find since the problem does not seem to be solved by it. Regards, -----Original Message----- From: Matthew Vernon Sent: 20 July 2018 11:03 To: CUZA Frédéric ; ceph-users@lists.ceph.com Subject: Be careful with orphans

[ceph-users] Lost TB for Object storage

2018-07-19 Thread CUZA Frédéric
Hi Guys, We are running a Ceph Luminous 12.2.6 cluster. The cluster is used both for RBD storage and Ceph Object Storage and has about 742 TB of raw space. We have an application that pushes snapshots of our VMs through RGW; all seems to be fine except that we have a discrepancy between what the S3
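Not part of the original message, but a sketch of the usual commands for cross-checking RGW usage numbers against what an S3 client reports (bucket and uid names are placeholders):

    # per-bucket object count and size as RGW sees it
    radosgw-admin bucket stats --bucket=my-bucket
    # per-user totals; --sync-stats refreshes the counters first
    radosgw-admin user stats --uid=my-user --sync-stats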

Re: [ceph-users] rbd.ReadOnlyImage: [errno 30]

2019-06-05 Thread CUZA Frédéric
Thank you all for your quick answers. I think that will solve our problem. This is what we came up with: rbd -c /etc/ceph/Oceph.conf --keyring /etc/ceph/Oceph.client.admin.keyring export rbd/disk_test - | rbd -c /etc/ceph/Nceph.conf --keyring /etc/ceph/Nceph.client.admin.keyring import -
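The destination image name is cut off in the archive; a sketch of the full pipeline, assuming the target image is also called rbd/disk_test on the new cluster:

    # stream the image from the old (O) cluster straight into the new (N) cluster
    rbd -c /etc/ceph/Oceph.conf --keyring /etc/ceph/Oceph.client.admin.keyring \
        export rbd/disk_test - \
    | rbd -c /etc/ceph/Nceph.conf --keyring /etc/ceph/Nceph.client.admin.keyring \
        import - rbd/disk_test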

Re: [ceph-users] Multiple rbd images from different clusters

2019-06-05 Thread CUZA Frédéric
Sent: 04 June 2019 14:11 To: Burkhard Linke Cc: ceph-users Subject: Re: [ceph-users] Multiple rbd images from different clusters On Tue, Jun 4, 2019 at 8:07 AM Jason Dillaman wrote: > On Tue, Jun 4, 2019 at 4:45 AM Burkhard Linke wrote: > > Hi,

[ceph-users] Multiple rbd images from different clusters

2019-06-04 Thread CUZA Frédéric
Hi everyone, We want to migrate data from one cluster (Hammer) to a new one (Mimic). We do not wish to upgrade the current cluster as all the hardware is EOS, and we are upgrading the configuration of the servers. We can't find a "proper" way to mount two rbd images from two different clusters on the
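For reference, the usual way to talk to two clusters from one client is to keep a separate conf/keyring pair per cluster under /etc/ceph/ and select it with --cluster (or with -c/--keyring as in the export/import answer above); a minimal sketch with placeholder cluster names old and new:

    # assumes /etc/ceph/old.conf + /etc/ceph/old.client.admin.keyring
    #     and /etc/ceph/new.conf + /etc/ceph/new.client.admin.keyring
    rbd --cluster old ls rbd
    rbd --cluster new ls rbd
    # map an image from each cluster on the same host
    rbd --cluster old map rbd/disk_test
    rbd --cluster new map rbd/disk_test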

[ceph-users] Weird behaviour of ceph-deploy

2019-06-14 Thread CUZA Frédéric
Hi everyone, I am facing a strange behaviour from ceph-deploy. I am trying to add a new node to our cluster: ceph-deploy install --no-adjust-repos sd0051 Everything seems to work fine, but the new bucket (host) is not created in the crushmap, and when I try to add a new osd to that host, the osd is
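Not from the thread, but if the host bucket simply never appears, it can be created and placed by hand in the CRUSH map; a sketch, assuming the default root (osd id and weight are placeholders):

    # create the missing host bucket and attach it under the default root
    ceph osd crush add-bucket sd0051 host
    ceph osd crush move sd0051 root=default
    # then place the OSD under it (weight 1.0 is a placeholder)
    ceph osd crush create-or-move osd.42 1.0 host=sd0051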

Re: [ceph-users] Weird behaviour of ceph-deploy

2019-06-14 Thread CUZA Frédéric
-deploy admin sd0051 and nothing changed. When I do the install, no .conf is pushed to the new node. Regards, From: ceph-users On behalf of CUZA Frédéric Sent: 14 June 2019 18:28 To: ceph-users@lists.ceph.com Subject: [ceph-users] Weird behaviour of ceph-deploy Hi everyone, I am facing
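A sketch of the commands usually used to get ceph.conf and the admin keyring onto a new node with ceph-deploy (run from the deploy directory that holds the cluster's ceph.conf):

    # push the current ceph.conf to the node, overwriting any stale copy
    ceph-deploy --overwrite-conf config push sd0051
    # copy ceph.conf plus the client.admin keyring to /etc/ceph on the node
    ceph-deploy admin sd0051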

Re: [ceph-users] Weird behaviour of ceph-deploy

2019-06-17 Thread CUZA Frédéric
to the host where it is created and I can't move it to this host right now. Regards, From: ceph-users On behalf of CUZA Frédéric Sent: 15 June 2019 00:34 To: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Weird behaviour of ceph-deploy Little update: I checked one osd I've installed, even

Re: [ceph-users] Weird behaviour of ceph-deploy

2019-06-18 Thread CUZA Frédéric
Things are not evolving. If I find an alternative way to add new OSD nodes in the future, I'll note it here. I'm abandoning ceph-deploy since it seems to be buggy. Regards, From: ceph-users On behalf of CUZA Frédéric Sent: 18 June 2019 10:40 To: Brian Topping Cc: ceph-users@lists.ceph.com

Re: [ceph-users] Weird behaviour of ceph-deploy

2019-06-18 Thread CUZA Frédéric
Topping Sent: 17 June 2019 16:39 To: CUZA Frédéric Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Weird behaviour of ceph-deploy I don't have an answer for you, but it will help others if you show: 1. Versions of all nodes involved and the multi-master configuration 2. Confirm

[ceph-users] Ceph Buckets Backup

2019-09-26 Thread CUZA Frédéric
Hi everyone, Has anyone ever made a backup of a Ceph bucket to Amazon Glacier? If so, did you use a script that uses the API to "migrate" the objects? If no one uses Amazon S3 for this, how do you make those backups? Thanks in advance. Regards,
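Not from the thread, but one common pattern is to treat RGW as an S3 source and copy to an AWS bucket using the Glacier storage class; a sketch with placeholder endpoint, bucket and profile names:

    # pull the bucket from the local RGW endpoint (placeholder URL and bucket)
    aws --profile rgw --endpoint-url http://rgw.example.com:7480 \
        s3 sync s3://vm-snapshots /backup/vm-snapshots
    # push it to AWS with the GLACIER storage class (placeholder bucket)
    aws --profile aws s3 sync /backup/vm-snapshots s3://offsite-backup/vm-snapshots \
        --storage-class GLACIER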

[ceph-users] Missing field "host" in logs sent to Graylog

2019-09-30 Thread CUZA Frédéric
Hi everyone, We are facing a problem where we cannot read the logs sent to Graylog because they are missing one mandatory field: GELF message (received from ) has empty mandatory "host" field. Does anyone know what we are missing? I know there was someone facing the same issue but it seems that
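For context, these are the Graylog-related options Ceph exposes; a sketch of the settings one would double-check in ceph.conf (host and port are placeholders, and this is only where we would start looking, not a confirmed fix for the missing field):

    [global]
    # daemon logs to Graylog
    log_to_graylog = true
    log_graylog_host = graylog.example.com
    log_graylog_port = 12201
    # cluster log (ceph.log) to Graylog
    mon_cluster_log_to_graylog = true
    mon_cluster_log_to_graylog_host = graylog.example.com
    mon_cluster_log_to_graylog_port = 12201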

[ceph-users] ceph-ansible / block-db block-wal

2019-10-30 Thread CUZA Frédéric
Hi Everyone, Does anyone know how to point block-db and block-wal at specific devices with ceph-ansible? In ceph-deploy it is quite easy: ceph-deploy osd create osd_host08 --data /dev/sdl --block-db /dev/sdm12 --block-wal /dev/sdn12 --bluestore On my data nodes I have 12 HDDs and 2 SSDs; I use those SSDs for
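Not from the thread, but in ceph-ansible the per-OSD db/wal placement is usually expressed through lvm_volumes in the OSD group vars; a sketch, assuming a ceph-ansible release that accepts partition paths here (older versions expect LV/VG names via db_vg/wal_vg, so check the docs for your version):

    # group_vars/osds.yml (hypothetical excerpt)
    osd_scenario: lvm
    lvm_volumes:
      - data: /dev/sdl
        db: /dev/sdm12
        wal: /dev/sdn12

    # roughly what that maps to on the node via ceph-volume, which ceph-ansible drives
    ceph-volume lvm create --bluestore --data /dev/sdl \
        --block.db /dev/sdm12 --block.wal /dev/sdn12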

[ceph-users] Understand ceph df details

2020-01-21 Thread CUZA Frédéric
Hi everyone, I'm trying to understand where the difference comes from between the output of the command "ceph df detail" and the result I get when I run this script:

    total_bytes=0
    while read user; do
        echo $user
        bytes=$(radosgw-admin user stats --uid=${user} | grep total_bytes_rounded | tr -dc "0-9")
        if
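The script is cut off in the archive; a complete sketch of the kind of loop it appears to be. The user list source (radosgw-admin user list piped through jq) and the final comparison are assumptions on my part, not the original script:

    # hypothetical reconstruction: sum radosgw-admin per-user usage across all RGW users
    total_bytes=0
    for user in $(radosgw-admin user list | jq -r '.[]'); do
        echo "$user"
        bytes=$(radosgw-admin user stats --uid="$user" --sync-stats \
                | grep total_bytes_rounded | tr -dc "0-9")
        if [ -n "$bytes" ]; then
            total_bytes=$((total_bytes + bytes))
        fi
    done
    echo "total bytes (radosgw-admin): $total_bytes"
    # compare against the pool-level view
    ceph df detail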