From: Philippe Van Hecke
Sent: 04 February 2019 07:27
To: Sage Weil
Cc: ceph-users@lists.ceph.com; Belnet Services; ceph-de...@vger.kernel.org
Subject: Re: [ceph-users] Luminous cluster in very bad state need some
assistance.
Sage,
Not during the network flap or before the flap, but after I had already tried […]
{
    "osd": "51",
    "status": "osd is down"
},
{
    "osd": "63",
    "status": "osd is down"
},
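The status entries above come from Ceph's JSON output (the command that produced them is cut off in the snippet). A small helper to pull the down OSD ids out of entries shaped like that fragment; the surrounding list and the sample values are assumptions for illustration:

```python
import json

# Entries shaped like the fragment above; wrapping them in a list is an assumption.
sample = """
[
    {"osd": "51", "status": "osd is down"},
    {"osd": "63", "status": "osd is down"}
]
"""

def down_osds(entries):
    """Return the ids of OSDs whose status reports them down."""
    return [int(e["osd"]) for e in entries if e["status"] == "osd is down"]

print(down_osds(json.loads(sample)))  # -> [51, 63]
```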
On Mon, 4 Feb 2019, Philippe Van Hecke wrote:
> So I restarted the OSD, but it stopped again after some time. This had an effect
> on the cluster, and the cluster is now in a partial recovery process.
>
> Please find here the log file of osd.49 after this restart.
From: Philippe Van Hecke
Sent: 04 February 2019 07:42
To: Sage Weil
Cc: ceph-users@lists.ceph.com; Belnet Services
Subject: Re: [ceph-users] Luminous cluster in very bad state need some
assistance.
root@ls-node-5-lcl:~# ceph-objectstore-tool --data-path
/var/lib/ceph
From: Sage Weil
Sent: 04 February 2019 07:26
To: Philippe Van Hecke
Cc: ceph-users@lists.ceph.com; Belnet Services
Subject: Re: [ceph-users] Luminous cluster in very bad state need some
assistance.
On Mon, 4 Feb 2019, Philippe Van Hecke wrote:
> Hi Sage,
>
> I tried the following:
>
> ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-49/ --journal
> /var/lib/ceph/osd/ceph-49/journal
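The quoted command is cut off before its `--op` argument. For context, an export-then-remove cycle with ceph-objectstore-tool on a Filestore OSD generally follows the shape below; the PG id is a hypothetical placeholder, and the commands are only printed here rather than executed, because `--op remove` is destructive.

```shell
#!/bin/sh
# Sketch only: PGID is a hypothetical placeholder; the OSD must be stopped
# before ceph-objectstore-tool touches its data directory.
OSD=49
PGID="1.2ab"   # hypothetical; substitute the PG actually reported as broken

print_recovery_cmds() {
    echo "systemctl stop ceph-osd@${OSD}"
    echo "ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-${OSD}/ \
--journal-path /var/lib/ceph/osd/ceph-${OSD}/journal \
--pgid ${PGID} --op export --file /root/pg-${PGID}.export"
    echo "ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-${OSD}/ \
--journal-path /var/lib/ceph/osd/ceph-${OSD}/journal \
--pgid ${PGID} --op remove --force"
    echo "systemctl start ceph-osd@${OSD}"
}

print_recovery_cmds
```

Keeping the export file around means the PG can be re-imported later if the removal turns out to be the wrong call.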
Sage,
Not during the network flap or before the flap, but after I had already tried the
ceph-objectstore-tool remove/export, with no possibility to do it.
And the conf file never had the "ignore_les" option. I was not even aware of the
existence of this option, and it seems preferable to forget […]
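For reference, the "ignore_les" being discussed is presumably the developer option `osd_find_best_info_ignore_history_les` (an assumption based on the name); if someone had set it, it would show up in ceph.conf roughly like this. It is shown only so it can be recognised, since setting it can silently lose data:

```ini
[osd]
# DANGEROUS developer option; never leave this set on a production cluster.
osd_find_best_info_ignore_history_les = true
```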
From: Sage Weil
Sent: 04 February 2019 06:59
To: Philippe Van Hecke
Cc: ceph-users@lists.ceph.com; Belnet Services
Subject: Re: [ceph-users] Luminous cluster in very bad state need some
assistance.
On Mon, 4 Feb 2019, Philippe Van Hecke wrote:
> Hi Sage, first of all thanks for your help.
On Mon, 4 Feb 2019, Sage Weil wrote:
> On Mon, 4 Feb 2019, Philippe Van Hecke wrote:
> > Hi Sage, first of all thanks for your help.
> >
> > Please find here
> > https://filesender.belnet.be/?s=download=dea0edda-5b6a-4284-9ea1-c1fdf88b65e9
Something caused the version number on this PG to reset, […]
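Ceph prints a PG's last_update as an `epoch'version` pair (for example `2554'157332` in `ceph pg <pgid> query` output), so a reset shows up as that pair going backwards between peering attempts. A minimal comparator, with hypothetical example values:

```python
def parse_pg_version(s):
    """Split Ceph's epoch'version notation into comparable integers."""
    epoch, _, version = s.partition("'")
    return int(epoch), int(version)

def version_went_backwards(before, after):
    """True if the PG's last_update regressed, e.g. after a bad peering."""
    return parse_pg_version(after) < parse_pg_version(before)

# Hypothetical values: a PG that was at 2554'157332 now reports 0'0.
print(version_went_backwards("2554'157332", "0'0"))  # -> True
```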
On Mon, 4 Feb 2019, Philippe Van Hecke wrote:
> Hi Sage, first of all thanks for your help.
>
> Please find here
> https://filesender.belnet.be/?s=download=dea0edda-5b6a-4284-9ea1-c1fdf88b65e9
> the osd log with debug info for osd.49. And indeed, if all the buggy osds can
> restart, that may be […]
From: Sage Weil
Sent: 03 February 2019 18:25
To: Philippe Van Hecke
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Luminous cluster in very bad state need some
assistance.
On Sun, 3 Feb 2019, Philippe Van Hecke wrote:
> Hello,
> I am working for BELNET, the Belgian National Research Network.
>
> We currently manage a Luminous Ceph cluster on Ubuntu 16.04,
> with 144 HDD OSDs spread across two data centers, 6 OSD nodes
> in each data center. The OSDs are 4 TB SATA […]