Hi,
Quoting Stefan Kooman (ste...@bit.nl):
> Hi,
>
> We see the following in the logs after we start a scrub for some osds:
>
> ceph-osd.2.log:2017-12-14 06:50:47.180344 7f0f47db2700 0
> log_channel(cluster) log [DBG] : 1.2d8 scrub starts
> ceph-osd.2.log:2017-12-14 06:50:47.180915 7f0f47db270
-Original Message-
From: Paul Emmerich [mailto:paul.emmer...@croit.io]
Sent: Thursday, 12 April 2018 11:04
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users] Ceph scrub logs: _scan_snaps no head for
$object?
Usually the problem is not that you are missing snapshot data, but that
you got too many
>
>
> -Original Message-
> From: Paul Emmerich [mailto:paul.emmer...@croit.io]
> Sent: Tuesday, 10 April 2018 20:14
> To: Marc Roos
> Cc: ceph-users
> Subject: Re: [ceph-users] Ceph scrub logs: _scan_snaps no head for
> $object?
>
> Hi,
>
>
> you'll
-Original Message-
From: Paul Emmerich [mailto:paul.emmer...@croit.io]
Sent: Tuesday, 10 April 2018 20:14
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users] Ceph scrub logs: _scan_snaps no head for
$object?
Hi,
you'll usually see this if there are "orphaned" snapshot objects. One
common cau
Hi,
you'll usually see this if there are "orphaned" snapshot objects. One
common cause for this is pre-12.2.2 clients trying to delete RBD snapshots
on images with a separate data pool (i.e., erasure-coded pools).
They send the snapshot requests to the wrong pool and you end up with lots
of problems.
Paul
2018-04
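A quick way to check whether any pre-luminous / pre-12.2.2 clients are still
connecting is to look at what the cluster reports about daemons and connected
clients; a minimal sketch using standard Luminous-era commands:

  # Versions of the running mon/mgr/osd/mds daemons:
  ceph versions

  # Release/feature bits of currently connected clients, which shows
  # whether older librados/librbd clients are still talking to the cluster:
  ceph features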
I have this on an rbd pool with images/snapshots that were created
in Luminous
> Hi Stefan, Mehmet,
>
> Are these clusters that were upgraded from prior versions, or fresh
> luminous installs?
>
>
> This message indicates that there is a stray clone object with no
> associated head or s
I have found one image, but how do I know which snapshot to delete? I
have multiple
-Original Message-
From: c...@elchaka.de [mailto:c...@elchaka.de]
Sent: Sunday, 8 April 2018 13:30
To: ceph-users
Subject: Re: [ceph-users] Ceph scrub logs: _scan_snaps no head for
$object?
Am
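To the question above about which snapshot a stray clone belongs to, a
minimal sketch; the pool, image and object names below are only placeholders:

  # Snapshots of the image, with their numeric snap ids:
  rbd snap ls rbd/myimage

  # For the object named in the _scan_snaps message, list which snap ids
  # still reference clones of it:
  rados -p rbd listsnaps rbd_data.1f2a3b4c5d6e.0000000000000000

  # Clone entries whose snap id no longer shows up in "rbd snap ls"
  # point at leftover snapshot data.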
>From: c...@elchaka.de [mailto:c...@elchaka.de]
>Sent: Sunday, 8 April 2018 10:44
>To: ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] Ceph scrub logs: _scan_snaps no head for
>$object?
>
>Hi Marc,
>
>On 7 April 2018 18:32:40 CEST, Marc Roos wrote:
>>
>>How do you resolve these i
Subject: Re: [ceph-users] Ceph scrub logs: _scan_snaps no head for
$object?
Hi Marc,
On 7 April 2018 18:32:40 CEST, Marc Roos wrote:
>
>How do you resolve these issues?
>
In my case I could get rid of this by deleting the existing snapshots.
- Mehmet
>
>Apr 7 22:39:21 c03 ceph-
Hi Marc,
On 7 April 2018 18:32:40 CEST, Marc Roos wrote:
>
>How do you resolve these issues?
>
In my case I could get rid of this by deleting the existing snapshots.
- Mehmet
>
>Apr 7 22:39:21 c03 ceph-osd: 2018-04-07 22:39:21.928484 7f0826524700
>-1
>osd.13 pg_epoch: 19008 pg[17.13( v 1
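For reference, removing RBD snapshots is done with the usual rbd commands;
a small sketch with placeholder pool/image/snapshot names:

  # Remove a single named snapshot:
  rbd snap rm rbd/myimage@mysnap

  # Or remove all snapshots of an image at once:
  rbd snap purge rbd/myimage

  # Protected snapshots (ones with clones) have to be unprotected, and
  # their clones flattened, before they can be removed.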
How do you resolve these issues?
Apr 7 22:39:21 c03 ceph-osd: 2018-04-07 22:39:21.928484 7f0826524700 -1
osd.13 pg_epoch: 19008 pg[17.13( v 19008'6019891
(19008'6018375,19008'6019891] local-lis/les=18980/18981 n=3825
ec=3636/3636 lis/c 18980/18980 les/c/f 18981/18982/0 18980/18980/18903)
[4
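When the full _scan_snaps message names an object like
rbd_data.<prefix>.<offset>, the prefix can be matched against an image's
block_name_prefix to find out which image is affected; a rough sketch with
placeholder names:

  # Show the data object prefix of one image:
  rbd info rbd/myimage | grep block_name_prefix

  # Or scan every image in the pool for a matching prefix:
  for img in $(rbd ls rbd); do
      echo -n "$img: "
      rbd info "rbd/$img" | grep block_name_prefix
  done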
Sage wrote (Tue, 2 Jan 2018 17:57:32 +0000 (UTC)):
Hi Stefan, Mehmet,
Hi Sage,
Sorry for the *extremely late* response!
Are these clusters that were upgraded from prior versions, or fresh
luminous installs?
My cluster was initially installed with Jewel (10.2.1) and has seen some
minor updates
On 01/04/2018 11:53 PM, Stefan Kooman wrote:
OpenNebula 5.4.3 (issuing rbd commands to ceph cluster).
Yes! And which librbd is installed on the "commands issuer"?
k
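One way to check which librbd a client host is actually using (package
names may differ per distribution, and the qemu PID below is a placeholder):

  # RPM-based systems:
  rpm -q librbd1
  # Debian/Ubuntu:
  dpkg -l | grep librbd

  # Or check which librbd a running qemu-kvm process has loaded:
  lsof -p <qemu-pid> | grep librbd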
Quoting Konstantin Shalygin (k0...@k0ste.ru):
> On 01/04/2018 11:38 PM, Stefan Kooman wrote:
> >Only luminous clients. Mostly rbd (qemu-kvm) images.
>
> Who manages your images? Maybe OpenStack Cinder?
OpenNebula 5.4.3 (issuing rbd commands to ceph cluster).
Gr. Stefan
On 01/04/2018 11:38 PM, Stefan Kooman wrote:
Only luminous clients. Mostly rbd (qemu-kvm) images.
Who manages your images? Maybe OpenStack Cinder?
k
Quoting Konstantin Shalygin (k0...@k0ste.ru):
> >This is still a pre-production cluster. Most tests have been done
> >using rbd. We did make some rbd clones / snapshots here and there.
>
> What clients did you use?
Only luminous clients. Mostly rbd (qemu-kvm) images.
Gr. Stefan
On 3 January 2018 08:59:41 CET, Stefan Kooman wrote:
>Quoting Sage Weil (s...@newdream.net):
>> Hi Stefan, Mehmet,
>>
>> Are these clusters that were upgraded from prior versions, or fresh
>> luminous installs?
>
>Fresh luminous install... The cluster was installed with
>12.2.0, and later upg
This is still a pre-production cluster. Most tests have been done
using rbd. We did make some rbd clones / snapshots here and there.
What clients did you use?
k
Quoting Sage Weil (s...@newdream.net):
> Hi Stefan, Mehmet,
>
> Are these clusters that were upgraded from prior versions, or fresh
> luminous installs?
Fresh luminous install... The cluster was installed with
12.2.0, and later upgraded to 12.2.1 and 12.2.2.
> This message indicates that there
Hi Stefan, Mehmet,
Are these clusters that were upgraded from prior versions, or fresh
luminous installs?
This message indicates that there is a stray clone object with no
associated head or snapdir object. That normally should never
happen--it's presumably the result of a (hopefully old) bug
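Once any leftover snapshots have been cleaned up, the PG that logged the
message can simply be scrubbed again to see whether the warning is gone; a
small sketch (the PG id is just the one from the log excerpt above):

  # Re-run a deep scrub on the affected PG:
  ceph pg deep-scrub 17.13

  # Follow the cluster log for the scrub result:
  ceph -w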
So finally it logs "scrub ok", but what does " _scan_snaps no head for ..."
mean?
Does this indicate a problem?
Ceph 12.2.2 with bluestore on lvm
I think this is because you have snaps created by a client before 11.2.1.
See http://tracker.ceph.com/issues/19413
I have already come across this o
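Related to old clients creating snapshots: the cluster can report, and if
desired raise, the minimum client release it accepts; a hedged sketch using
standard commands:

  # Show the currently required minimum client release:
  ceph osd dump | grep require_min_compat_client

  # Optionally refuse pre-luminous clients altogether:
  ceph osd set-require-min-compat-client luminous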
Hi Stefan,
On 14 December 2017 09:48:36 CET, Stefan Kooman wrote:
>Hi,
>
>We see the following in the logs after we start a scrub for some osds:
>
>ceph-osd.2.log:2017-12-14 06:50:47.180344 7f0f47db2700 0
>log_channel(cluster) log [DBG] : 1.2d8 scrub starts
>ceph-osd.2.log:2017-12-14 06:50:47.