for all.
Regards,
From: Brent Kennedy
Sent: 31 July 2018 23:36
To: CUZA Frédéric; 'ceph-users'
Subject: RE: [ceph-users] Whole cluster flapping
I have had this happen during large data movements. It stopped happening after I
went to 10Gb (from 1Gb) though. What I had done was inject a setting
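(The message is cut off before naming the setting. For reference, runtime injection in general looks like the sketch below; osd_heartbeat_grace and its value are only an illustration, not necessarily what was changed here.)

ceph tell osd.* injectargs '--osd_heartbeat_grace 60'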
Hi Everyone,
I just upgraded our cluster to Luminous 12.2.7 and deleted a quite large pool
that we had (120 TB).
Our cluster is made of 14 nodes, each composed of 12 OSDs (1 HDD -> 1 OSD);
we have SSDs for the journals.
After I deleted the large pool, my cluster started flapping on all OSDs:
the mons mark them as down due to no response.
Also check the OSD logs to see if they are actually crashing and restarting,
and check disk I/O usage (i.e. iostat).
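For example (the OSD id and time window below are placeholders):

journalctl -u ceph-osd@12 --since "1 hour ago" | grep -iE 'heartbeat|fail|abort'   # look for crash/heartbeat messages
iostat -x 2   # extended per-device I/O statistics, refreshed every 2 seconds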
Regards,
Webert Lima
DevOps Engineer at MAV Tecnologia
Belo Horizonte - Brasil
IRC NICK - WebertRLZ
On Tue, Jul 31, 2018 at 7:23 AM CUZA Frédéric <frederic.c...@sib.fr> wrote:
[detail]`
Regards,
Webert Lima
DevOps Engineer at MAV Tecnologia
Belo Horizonte - Brasil
IRC NICK - WebertRLZ
On Tue, Aug 7, 2018 at 5:46 AM CUZA Frédéric <frederic.c...@sib.fr> wrote:
It’s been over a week now and the whole cluster keeps flapping; it is never the
same OSDs that go down.
numbers.
Regards,
Webert Lima
DevOps Engineer at MAV Tecnologia
Belo Horizonte - Brasil
IRC NICK - WebertRLZ
On Tue, Aug 7, 2018 at 10:47 AM CUZA Frédéric <frederic.c...@sib.fr> wrote:
The pool is already deleted and no longer present in the stats.
Regards,
From: ceph-users <ceph…>
To: Webert de Souza Lima; CUZA Frédéric
Cc: ceph-users
Subject: RE: [ceph-users] Whole cluster flapping
Hi again Frederic,
It may be worth looking at a recovery sleep.
osd recovery sleep
Description:
Time in seconds to sleep before the next recovery or backfill op. Increasing this
value will slow down recovery operations, while client operations will be less impacted.
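A sketch of inspecting and raising it at runtime (the value is only an illustration):

ceph daemon osd.0 config get osd_recovery_sleep   # run on that OSD's host, shows the current value
ceph tell osd.* injectargs '--osd_recovery_sleep 0.1'   # throttle recovery on all OSDs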
Hi Matthew,
Thanks for the advice but we are no longer using orphans find since the problem
does not seem to be solved with it.
Regards,
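For reference, the scan referred to above is normally run along these lines (pool name and job id are placeholders), and as the subject below warns, its output should be treated carefully:

radosgw-admin orphans find --pool=default.rgw.buckets.data --job-id=orphans-scan-1
radosgw-admin orphans list-jobs   # list previously started scan jobs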
-----Original Message-----
From: Matthew Vernon
Sent: 20 July 2018 11:03
To: CUZA Frédéric; ceph-users@lists.ceph.com
Subject: Be careful with orphans
Hi Guys,
We are running a Ceph Luminous 12.2.6 cluster.
The cluster is used both for RBD storage and Ceph Object Storage and has about
742 TB of raw space.
We have an application that pushes snapshots of our VMs through RGW. All seems to
be fine, except that we have a discrepancy between what the S3
Thank you all for your quick answers.
I think that will solve our problem.
This is what we came up with:
rbd -c /etc/ceph/Oceph.conf --keyring /etc/ceph/Oceph.client.admin.keyring export rbd/disk_test - \
  | rbd -c /etc/ceph/Nceph.conf --keyring /etc/ceph/Nceph.client.admin.keyring import - rbd/disk_test
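If more than one image needs to move, the same pipe can be wrapped in a loop; a sketch, assuming the source pool is named rbd and the same conf/keyring paths as above:

for img in $(rbd -c /etc/ceph/Oceph.conf --keyring /etc/ceph/Oceph.client.admin.keyring ls rbd); do
  rbd -c /etc/ceph/Oceph.conf --keyring /etc/ceph/Oceph.client.admin.keyring export "rbd/${img}" - \
    | rbd -c /etc/ceph/Nceph.conf --keyring /etc/ceph/Nceph.client.admin.keyring import - "rbd/${img}"
done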
Sent: 04 June 2019 14:11
To: Burkhard Linke
Cc: ceph-users
Subject: Re: [ceph-users] Multiple rbd images from different clusters
On Tue, Jun 4, 2019 at 8:07 AM Jason Dillaman wrote:
>
> On Tue, Jun 4, 2019 at 4:45 AM Burkhard Linke wrote:
> >
> > Hi,
> >
Hi everyone,
We want to migrate data from one cluster (Hammer) to a new one (Mimic). We do
not wish to upgrade the existing cluster, as all the hardware is EOS and we are
upgrading the configuration of the servers.
We can't find a "proper" way to mount two rbd images from two different clusters
on the same host.
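For what it's worth, a minimal sketch of addressing both clusters from one client, reusing the per-cluster conf/keyring files mentioned in this thread (pool and image names are examples); the export/import pipe quoted above avoids mapping entirely:

rbd -c /etc/ceph/Oceph.conf --keyring /etc/ceph/Oceph.client.admin.keyring map rbd/disk_test
rbd -c /etc/ceph/Nceph.conf --keyring /etc/ceph/Nceph.client.admin.keyring map rbd/disk_test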
Hi everyone,
I am facing strange behaviour from ceph-deploy.
I am trying to add a new node to our cluster:
ceph-deploy install --no-adjust-repos sd0051
Everything seems to work fine, but the new bucket (host) is not created in the
crushmap, and when I try to add a new OSD to that host, the OSD does not end up
under it. I also ran ceph-deploy admin sd0051 and nothing changed.
When I do the install, no .conf is pushed to the new node.
Regards,
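A hedged workaround (not confirmed in the thread): create the missing host bucket by hand and attach it to the crush root, so that OSDs created on that node land under it:

ceph osd crush add-bucket sd0051 host
ceph osd crush move sd0051 root=default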
From: ceph-users On Behalf Of CUZA Frédéric
Sent: 14 June 2019 18:28
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Weird behaviour of ceph-deploy
Hi everyone,
I am facing
to the host where it is created, and I can't move it to this host right now.
Regards,
From: ceph-users On Behalf Of CUZA Frédéric
Sent: 15 June 2019 00:34
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Weird behaviour of ceph-deploy
Little update:
I checked one OSD I've installed; even
Things are not evolving. If I find an alternative way to add new OSD nodes in
the future, I'll note it here.
I'm abandoning ceph-deploy since it seems to be buggy.
Regards,
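One possible alternative, sketched here rather than taken from the thread: once the packages, ceph.conf and the bootstrap-osd keyring are on the new node, OSDs can be created directly with ceph-volume (the device name is an example):

ceph-volume lvm create --bluestore --data /dev/sdb
ceph osd tree   # check that the new OSD shows up under the expected host bucket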
From: ceph-users On Behalf Of CUZA Frédéric
Sent: 18 June 2019 10:40
To: Brian Topping
Cc: ceph-users@lists.ceph.com
From: Brian Topping
Sent: 17 June 2019 16:39
To: CUZA Frédéric
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Weird behaviour of ceph-deploy
I don’t have an answer for you, but it will help others to have shown:
1. Versions of all nodes involved and multi-master configuration
2. Confirm
Hi everyone,
Has anyone ever made a backup of a Ceph bucket into Amazon Glacier?
If so, did you use a script that uses the API to "migrate" the objects?
If you don't use Amazon S3, how do you make those backups?
Thanks in advance.
Regards,
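Not an answer from the thread, but one commonly used approach is to mirror the RGW bucket to AWS with rclone and the Glacier storage class; the remote and bucket names below are placeholders, with both remotes defined as S3 backends in rclone.conf:

rclone sync cephrgw:vm-snapshots aws:vm-snapshots-backup --s3-storage-class GLACIER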
Hi everyone,
We are facing a problem where we cannot read logs sent to Graylog because they are
missing one mandatory field:
GELF message (received from …) has empty mandatory "host" field.
Does anyone know what we are missing?
I know there was someone facing the same issue, but it seems that
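For reference, these are the ceph.conf options involved in shipping daemon logs to Graylog (values are examples); the thread does not establish what causes the empty "host" field:

[global]
log_to_graylog = true
err_to_graylog = true
log_graylog_host = graylog.example.com
log_graylog_port = 12201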
Hi Everyone,
Does anyone know how to point block-db and block-wal at a device with ceph-ansible?
In ceph-deploy it is quite easy:
ceph-deploy osd create osd_host08 --data /dev/sdl --block-db /dev/sdm12 --block-wal /dev/sdn12 --bluestore
On my data nodes I have 12 HDDs and 2 SSDs; I use those SSDs for
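A sketch of the ceph-ansible equivalent, assuming the lvm scenario in group_vars/osds.yml; exact variable names, and whether raw partitions are accepted for db/wal, depend on the ceph-ansible release:

osd_scenario: lvm
lvm_volumes:
  - data: /dev/sdl
    db: /dev/sdm12
    wal: /dev/sdn12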
Hi everyone,
I'm trying to understand the difference between the output of the command:
ceph df detail
and the result I get when I run this script:
total_bytes=0
# NOTE: the loop input and the "if" body were truncated in the original mail;
# feeding uids from "radosgw-admin user list" (via jq) and summing is a reconstruction.
while read -r user; do
  echo "$user"
  bytes=$(radosgw-admin user stats --uid="${user}" | grep total_bytes_rounded | tr -dc "0-9")
  if [ -n "$bytes" ]; then
    total_bytes=$((total_bytes + bytes))
  fi
done < <(radosgw-admin user list | jq -r '.[]')
echo "total_bytes=${total_bytes}"