Re: [ceph-users] Pool statistics via API

2019-10-15 Thread Ernesto Puerta
Hi Sinan,

Unfortunately, Ceph Dashboard v2 didn't show up until the Mimic release
(Luminous ships v1).

Kind regards,

Ernesto Puerta

He / Him / His

Senior Software Engineer, Ceph

Red Hat 



On Mon, Oct 14, 2019 at 4:17 AM Sinan Polat  wrote:

> Hi Ernesto,
>
> I just opened the Dashboard and there is no menu at the top-right. Also no
> "?". I have a menu at the top-left which has the following items: Cluster
> health, Cluster, Block and Filesystems.
>
> Running Ceph version 12.2.8-89.
>
> Kind regards,
> Sinan Polat
>
> On 11 October 2019 at 22:09, Sinan Polat wrote:
>
> Hi Ernesto,
>
> Thanks for the information! I didn't know about the existence of the REST
> Dashboard API. I will check that out. Thanks again!
>
> Sinan
>
> On 11 Oct 2019 at 21:06, Ernesto Puerta wrote:
>
> Hi Sinan,
>
> If it's in the Dashboard, it sure comes from the Dashboard REST API (which
> is an API completely unrelated to the RESTful Module).
>
> To check the Dashboard REST API, log in there and click on the top-right
> "?" menu, and in the dropdown, click on "API". That will lead you to the
> Swagger/OpenAPI spec of the Dashboard. You will likely want to explore the
> "/pool" and "/block" endpoints. The API page will give you ready-to-use
> curl commands (the only thing you'd need to renew, once expired, is the
> authorization token).
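> 
> For illustration, the commands it hands out look roughly like this (a sketch
> only - the exact endpoints, port and payload come from the API page itself,
> so treat the host, paths and credentials below as placeholders):
> 
> # get an auth token from the Dashboard (self-signed TLS, hence -k)
> curl -k -X POST -H 'Content-Type: application/json' \
>   -d '{"username": "admin", "password": "secret"}' \
>   https://dashboard-host:8443/api/auth
> 
> # use the returned token to query the pool endpoint
> curl -k -H 'Authorization: Bearer <token>' \
>   https://dashboard-host:8443/api/pool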
>
> Kind regards,
>
> Ernesto Puerta
>
> He / Him / His
>
> Senior Software Engineer, Ceph
>
> Red Hat 
>
> 
>
>
> On Thu, Oct 10, 2019 at 2:16 PM Sinan Polat  wrote:
>
> Hi,
>
> Currently I am getting the pool statistics (especially USED/MAX AVAIL) via
> the command line:
> ceph df -f json-pretty | jq '.pools[] | select(.name == "poolname") | .stats.max_avail'
> ceph df -f json-pretty | jq '.pools[] | select(.name == "poolname") | .stats.bytes_used'
>
> Command "ceph df" does not show the (total) size of the provisioned RBD
> images. It only shows the real usage.
>
> I managed to get the total size of provisioned images using the Python rbd
> module https://docs.ceph.com/docs/master/rbd/api/librbdpy/
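> 
> (Roughly what I did there, for reference - a sketch with python-rbd, where
> the pool name and conf path are placeholders for my setup:)
> 
> import rados
> import rbd
> 
> cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
> cluster.connect()
> ioctx = cluster.open_ioctx('poolname')
> 
> provisioned = 0
> for name in rbd.RBD().list(ioctx):      # all image names in the pool
>     with rbd.Image(ioctx, name) as img:
>         provisioned += img.size()       # provisioned size in bytes
> print(provisioned)
> 
> ioctx.close()
> cluster.shutdown()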
>
> Using the same Python module I also would like to get the USED/MAX AVAIL
> per pool. That should be possible using rbd.RBD().pool_stats_get, but
> unfortunately my python-rbd version doesn't support that (running 12.2.8).
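> 
> (For completeness, on a newer python-rbd what I'm after would be roughly the
> following, reusing the ioctx from the snippet above - just a sketch, I can't
> test it on 12.2.8 since the method isn't there:)
> 
> stats = rbd.RBD().pool_stats_get(ioctx)   # Nautilus-era python-rbd
> print(stats)                              # dict of per-pool RBD usage counters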
>
> So I went ahead and enabled the dashboard to see if the data is present in
> the dashboard and it seems it is. Next step is to enable the restful module
> and access this information, right? But unfortunately the restful api
> doesn't provide this information.
>
> My question is, how can I access the USED/MAX AVAIL information of a pool
> without using the ceph command line and without upgrading my python-rbd
> package?
>
> Kind regards
> Sinan Polat


[ceph-users] New User Question - /etc/ceph/ceph.conf

2019-10-15 Thread Dave Hall

Hello.

My apologies if this has been asked and answered previously...

I'm just getting started with Ceph, but I will soon deploy a PB+ cluster 
- likely based on Debian 9 and Luminous (until Mimic is available for 
Debian).


As I'm working through some of the beginner installation procedures such as

https://docs.ceph.com/docs/master/install/manual-deployment

I'm not seeing much direction to add content to /etc/ceph/ceph.conf.  
Yet, all of the services for my initial node seem to start just fine 
after every reboot.


So, is it that ceph.conf isn't as important in recent releases as it was 
earlier, or am I missing something?  Is there a document indicating 
the purpose and required content of this file that I haven't found yet?
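
(For concreteness: the only content I have in the file so far is roughly the 
minimal block the manual-deployment guide builds up for bootstrapping the first 
monitor - the values here are placeholders for my lab node, not a recommendation:)

[global]
fsid = <cluster uuid from the bootstrap step>
mon initial members = node1
mon host = 192.168.0.10
public network = 192.168.0.0/24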


Thanks.

-Dave

--
Dave Hall
Binghamton University
kdh...@binghamton.edu  
607-760-2328 (Cell)
607-777-4641 (Office)



Re: [ceph-users] tcmu-runner: mismatched sizes for rbd image size

2019-10-15 Thread Mike Christie
On 10/14/2019 09:01 AM, Kilian Ries wrote:
> 
> @Mike
> 
> 
> Did you have the chance to update download.ceph.com repositories for the
> new version?

No. I have updated the upstream repos with the needed patches and made
new releases there. I appear to be hitting a bug with jenkins and can't
even make devel builds on shaman.

I am working with a lab person on it right now.

> 
> 
> I just tested the packages from shaman in our DEV environment and it
> seems to fix the issue - after updating the packages I was not able to
> reproduce the error again and tcmu-runner starts up without any errors ;)
> 
> 
> 
> *From:* Mike Christie 
> *Sent:* Thursday, 3 October 2019 00:20:51
> *To:* Kilian Ries; dilla...@redhat.com
> *Cc:* ceph-users@lists.ceph.com
> *Subject:* Re: [ceph-users] tcmu-runner: mismatched sizes for rbd image size
>  
> On 10/02/2019 02:15 PM, Kilian Ries wrote:
>> Ok, I just compared my local python files and the git commit you sent me
>> - it really looks like I have the old files installed. All the changes
>> are missing in my local files.
>> 
>> 
>> 
>> Where can I get a new ceph-iscsi-config package that has the fix
>> included? I have installed version:
> 
> They are on shaman only right now:
> 
> https://4.chacra.ceph.com/r/ceph-iscsi-config/master/24deeb206ed2354d44b0f33d7d26d475e1014f76/centos/7/flavors/default/noarch/
> 
> https://4.chacra.ceph.com/r/ceph-iscsi-cli/master/4802654a6963df6bf5f4a968782cfabfae835067/centos/7/flavors/default/noarch/
> 
> The shaman rpms above have one bug we just fixed in ceph-iscsi-config,
> where gwcli commands can take minutes if DNS is not set up correctly.
> 
> I am going to try and get download.ceph.com updated.
> 
>> 
>> ceph-iscsi-config-2.6-2.6.el7.noarch
>> 
>> *From:* ceph-users  on behalf of
>> Kilian Ries 
>> *Sent:* Wednesday, 2 October 2019 21:04:45
>> *To:* dilla...@redhat.com
>> *Cc:* ceph-users@lists.ceph.com
>> *Subject:* Re: [ceph-users] tcmu-runner: mismatched sizes for rbd image size
>>  
>> 
>> Yes, I created all four LUNs with these sizes:
>> 
>> 
>> lun0 - 5120G
>> 
>> lun1 - 5121G
>> 
>> lun2 - 5122G
>> 
>> lun3 - 5123G
>> 
>> 
>> It's always one GB more per LUN... Is there a newer ceph-iscsi-config
>> package than the one I have installed?
>> 
>> 
>> ceph-iscsi-config-2.6-2.6.el7.noarch
>> 
>> 
>> Then I could try to update the package and see if the error is fixed ...
>> 
>> 
>> *From:* Jason Dillaman 
>> *Sent:* Wednesday, 2 October 2019 16:00:03
>> *To:* Kilian Ries
>> *Cc:* ceph-users@lists.ceph.com
>> *Subject:* Re: [ceph-users] tcmu-runner: mismatched sizes for rbd image size
>>  
>> On Wed, Oct 2, 2019 at 9:50 AM Kilian Ries  wrote:
>>>
>>> Hi,
>>>
>>>
>>> I'm running a Ceph Mimic cluster with 4 iSCSI gateway nodes. The cluster was 
>>> set up via ceph-ansible v3.2-stable. I just checked my nodes and saw that 
>>> only two of the four configured iSCSI gw nodes are working correctly. I first 
>>> noticed it via gwcli:
>>>
>>>
>>> ###
>>>
>>>
>>> $gwcli -d ls
>>>
>>> Traceback (most recent call last):
>>>
>>>   File "/usr/bin/gwcli", line 191, in 
>>>
>>> main()
>>>
>>>   File "/usr/bin/gwcli", line 103, in main
>>>
>>> root_node.refresh()
>>>
>>>   File "/usr/lib/python2.7/site-packages/gwcli/gateway.py", line 87, in 
>>> refresh
>>>
>>> raise GatewayError
>>>
>>> gwcli.utils.GatewayError
>>>
>>>
>>> ###
>>>
>>>
>>> I investigated and noticed that both "rbd-target-api" and "rbd-target-gw" 
>>> were not running. I was not able to restart them via systemd. I then found 
>>> that even tcmu-runner is not running, and it exits with the following error:
>>>
>>>
>>>
>>> ###
>>>
>>>
>>> tcmu_rbd_check_image_size:827 rbd/production.lun1: Mismatched sizes. RBD 
>>> image size 5498631880704. Requested new size 5497558138880.
>>>
>>>
>>> ###
>>>
>>>
>>> Now I have the situation that two nodes are running correctly and two can't 
>>> start tcmu-runner. I don't know where the image size mismatches are coming 
>>> from - I haven't configured or resized any of the images.
>>>
>>>
>>> Is there any chance to get my two iSCSI gw nodes working again?
>> 
>> It sounds like you are potentially hitting [1]. The ceph-iscsi-config
>> library thinks your image size is 5TiB but you actually have a 5121GiB
>> (~5.001TiB) RBD image. Any clue how your RBD image got to be 1GiB
>> larger than an even 5TiB?
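>> 
>> (For what it's worth, the two numbers in the tcmu-runner error work out to
>> exactly that: 5498631880704 / 2^30 = 5121 GiB on the RBD side, while the
>> requested 5497558138880 / 2^30 = 5120 GiB, i.e. an even 5 TiB - so the image
>> really is exactly 1 GiB larger than what ceph-iscsi-config has recorded.)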
>> 
>>>
>>>
>>>
>>> The following packets are installed:
>>>
>>>
>>> rpm -qa |egrep "ceph|iscsi|tcmu|rst|kernel"
>>>
>>>
>>> libtcmu-1.4.0-106.gd17d24e.el7.x86_64
>>>
>>> ceph-iscsi-cli-2.7-2.7.el7.noarch
>>>
>>> kernel-3.10.0-957.5.1.el7.x86_64
>>>
>>> ceph-base-13.2.5-0.el7.x86_64
>>>
>>> ceph-iscsi-config-2.6-2.6.el7.noarch
>>>
>>> ceph-common-13.2.5-0.el7.x86_64
>>>
>>> ceph-selinux-13.2.5-0.el7.x86_64
>>>
>>> 

Re: [ceph-users] hanging slow requests: failed to authpin, subtree is being exported

2019-10-15 Thread Kenneth Waegeman

Hi Robert, all,


On 23/09/2019 17:37, Robert LeBlanc wrote:

On Mon, Sep 23, 2019 at 4:14 AM Kenneth Waegeman
 wrote:

Hi all,

When syncing data with rsync, I'm often getting blocked slow requests,
which also block access to this path.


2019-09-23 11:25:49.477 7f4f401e8700  0 log_channel(cluster) log [WRN]
: slow request 31.895478 seconds old, received at 2019-09-23
11:25:17.598152: client_request(client.38352684:92684 lookup
#0x100152383ce/vsc42531 2019-09-23 11:25:17.598077 caller_uid=0,
caller_gid=0{0,}) currently failed to authpin, subtree is being exported
2019-09-23 11:26:19.477 7f4f401e8700  0 log_channel(cluster) log [WRN]
: slow request 61.896079 seconds old, received at 2019-09-23
11:25:17.598152: client_request(client.38352684:92684 lookup
#0x100152383ce/vsc42531 2019-09-23 11:25:17.598077 caller_uid=0,
caller_gid=0{0,}) currently failed to authpin, subtree is being exported
2019-09-23 11:27:19.478 7f4f401e8700  0 log_channel(cluster) log [WRN]
: slow request 121.897268 seconds old, received at 2019-09-23
11:25:17.598152: client_request(client.38352684:92684 lookup
#0x100152383ce/vsc42531 2019-09-23 11:25:17.598077 caller_uid=0,
caller_gid=0{0,}) currently failed to authpin, subtree is being exported
2019-09-23 11:29:19.488 7f4f401e8700  0 log_channel(cluster) log [WRN]
: slow request 241.899467 seconds old, received at 2019-09-23
11:25:17.598152: client_request(client.38352684:92684 lookup
#0x100152383ce/vsc42531 2019-09-23 11:25:17.598077 caller_uid=0,
caller_gid=0{0,}) currently failed to authpin, subtree is being exported
2019-09-23 11:33:19.680 7f4f401e8700  0 log_channel(cluster) log [WRN]
: slow request 482.087927 seconds old, received at 2019-09-23
11:25:17.598152: client_request(client.38352684:92684 lookup
#0x100152383ce/vsc42531 2019-09-23 11:25:17.598077 caller_uid=0,
caller_gid=0{0,}) currently failed to authpin, subtree is being exported
2019-09-23 11:36:09.881 7f4f401e8700  0 log_channel(cluster) log [WRN]
: slow request 32.677511 seconds old, received at 2019-09-23
11:35:37.217113: client_request(client.38347357:111963 lookup
#0x20005b0130c/testing 2019-09-23 11:35:37.217015 caller_uid=0,
caller_gid=0{0,}) currently failed to authpin, subtree is being exported
2019-09-23 11:36:39.881 7f4f401e8700  0 log_channel(cluster) log [WRN]
: slow request 62.678132 seconds old, received at 2019-09-23
11:35:37.217113: client_request(client.38347357:111963 lookup
#0x20005b0130c/testing 2019-09-23 11:35:37.217015 caller_uid=0,
caller_gid=0{0,}) currently failed to authpin, subtree is being exported
2019-09-23 11:37:39.891 7f4f401e8700  0 log_channel(cluster) log [WRN]
: slow request 122.679273 seconds old, received at 2019-09-23
11:35:37.217113: client_request(client.38347357:111963 lookup
#0x20005b0130c/testing 2019-09-23 11:35:37.217015 caller_uid=0,
caller_gid=0{0,}) currently failed to authpin, subtree is being exported
2019-09-23 11:39:39.892 7f4f401e8700  0 log_channel(cluster) log [WRN]
: slow request 242.684667 seconds old, received at 2019-09-23
11:35:37.217113: client_request(client.38347357:111963 lookup
#0x20005b0130c/testing 2019-09-23 11:35:37.217015 caller_uid=0,
caller_gid=0{0,}) currently failed to authpin, subtree is being exported
2019-09-23 11:41:19.893 7f4f401e8700  0 log_channel(cluster) log [WRN]
: slow request 962.305681 seconds old, received at 2019-09-23
11:25:17.598152: client_request(client.38352684:92684 lookup
#0x100152383ce/vsc42531 2019-09-23 11:25:17.598077 caller_uid=0,
caller_gid=0{0,}) currently failed to authpin, subtree is being exported
2019-09-23 11:43:39.923 7f4f401e8700  0 log_channel(cluster) log [WRN]
: slow request 482.712888 seconds old, received at 2019-09-23
11:35:37.217113: client_request(client.38347357:111963 lookup
#0x20005b0130c/testing 2019-09-23 11:35:37.217015 caller_uid=0,
caller_gid=0{0,}) currently failed to authpin, subtree is being exported
2019-09-23 11:51:40.236 7f4f401e8700  0 log_channel(cluster) log [WRN]
: slow request 963.037049 seconds old, received at 2019-09-23
11:35:37.217113: client_request(client.38347357:111963 lookup
#0x20005b0130c/testing 2019-09-23 11:35:37.217015 caller_uid=0,
caller_gid=0{0,}) currently failed to authpin, subtree is being exported
2019-09-23 11:57:20.308 7f4f401e8700  0 log_channel(cluster) log [WRN]
: slow request 1922.719287 seconds old, received at 2019-09-23
11:25:17.598152: client_request(client.38352684:92684 lookup
#0x100152383ce/vsc42531 2019-09-23 11:25:17.598077 caller_uid=0,
caller_gid=0{0,}) currently failed to authpin, subtree is being exported
2019-09-23 12:07:40.621 7f4f401e8700  0 log_channel(cluster) log [WRN]
: slow request 1923.409501 seconds old, received at 2019-09-23
11:35:37.217113: client_request(client.38347357:111963 lookup
#0x20005b0130c/testing 2019-09-23 11:35:37.217015 caller_uid=0,
caller_gid=0{0,}) currently failed to authpin, subtree is being exported
2019-09-23 12:29:20.639 7f4f401e8700  0 log_channel(cluster) log [WRN]
: slow request 3843.057602 seconds old, 

Re: [ceph-users] mds failing to start 14.2.2

2019-10-15 Thread Kenneth Waegeman

Hi Zheng,

Thanks, that made me realize I had forgotten to remove some 'temporary-key' 
entries left over from the earlier inconsistency issue. Once those were 
removed, the MDS started again.


Thanks again!

Kenneth

On 12/10/2019 04:26, Yan, Zheng wrote:



On Sat, Oct 12, 2019 at 1:10 AM Kenneth Waegeman <kenneth.waege...@ugent.be> wrote:


Hi all,

After solving some pg inconsistency problems, my fs is still in
trouble: my MDSs are crashing with this error:


>     -5> 2019-10-11 19:02:55.375 7f2d39f10700  1 mds.1.564276
rejoin_start
>     -4> 2019-10-11 19:02:55.385 7f2d3d717700  5 mds.beacon.mds01
> received beacon reply up:rejoin seq 5 rtt 1.01
>     -3> 2019-10-11 19:02:55.495 7f2d39f10700  1 mds.1.564276
> rejoin_joint_start
>     -2> 2019-10-11 19:02:55.505 7f2d39f10700  5 mds.mds01
> handle_mds_map old map epoch 564279 <= 564279, discarding
>     -1> 2019-10-11 19:02:55.695 7f2d33f04700 -1
>

> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.2.2/rpm/el7/BUILD/ceph-14.2.2/src/mds/mdstypes.h: In function 'static void
> dentry_key_t::decode_helper(std::string_view, std::string&,
> snapid_t&)' thread 7f2d33f04700 time 2019-10-11 19:02:55.703343
>

> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.2.2/rpm/el7/BUILD/ceph-14.2.2/src/mds/mdstypes.h:
> 1229: FAILED ceph_assert(i != string::npos)
>
>  ceph version 14.2.2 (4f8fa0a0024755aae7d95567c63f11d6862d55be)
> nautilus (stable)
>  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char
> const*)+0x14a) [0x7f2d43393046]
>  2: (ceph::__ceph_assertf_fail(char const*, char const*, int, char
> const*, char const*, ...)+0) [0x7f2d43393214]
>  3: (CDir::_omap_fetched(ceph::buffer::v14_2_0::list&,
> std::map std::less, std::allocator ceph::buffer::v14_2_0::list> > >&, bool, int)+0xa68) [0x556a17ecbaa8]
>  4: (C_IO_Dir_OMAP_Fetched::finish(int)+0x54) [0x556a17ee0034]
>  5: (MDSContext::complete(int)+0x70) [0x556a17f5e710]
>  6: (MDSIOContextBase::complete(int)+0x16b) [0x556a17f5e9ab]
>  7: (Finisher::finisher_thread_entry()+0x156) [0x7f2d433d8386]
>  8: (()+0x7dd5) [0x7f2d41262dd5]
>  9: (clone()+0x6d) [0x7f2d3ff1302d]
>
>  0> 2019-10-11 19:02:55.695 7f2d33f04700 -1 *** Caught signal
> (Aborted) **
>  in thread 7f2d33f04700 thread_name:fn_anonymous
>
>  ceph version 14.2.2 (4f8fa0a0024755aae7d95567c63f11d6862d55be)
> nautilus (stable)
>  1: (()+0xf5d0) [0x7f2d4126a5d0]
>  2: (gsignal()+0x37) [0x7f2d3fe4b2c7]
>  3: (abort()+0x148) [0x7f2d3fe4c9b8]
>  4: (ceph::__ceph_assert_fail(char const*, char const*, int, char
> const*)+0x199) [0x7f2d43393095]
>  5: (ceph::__ceph_assertf_fail(char const*, char const*, int, char
> const*, char const*, ...)+0) [0x7f2d43393214]
>  6: (CDir::_omap_fetched(ceph::buffer::v14_2_0::list&,
> std::map std::less, std::allocator ceph::buffer::v14_2_0::list> > >&, bool, int)+0xa68) [0x556a17ecbaa8]
>  7: (C_IO_Dir_OMAP_Fetched::finish(int)+0x54) [0x556a17ee0034]
>  8: (MDSContext::complete(int)+0x70) [0x556a17f5e710]
>  9: (MDSIOContextBase::complete(int)+0x16b) [0x556a17f5e9ab]
>  10: (Finisher::finisher_thread_entry()+0x156) [0x7f2d433d8386]
>  11: (()+0x7dd5) [0x7f2d41262dd5]
>  12: (clone()+0x6d) [0x7f2d3ff1302d]
>  NOTE: a copy of the executable, or `objdump -rdS ` is
> needed to interpret this.
>
> [root@mds02 ~]# ceph -s
>   cluster:
>     id: 92bfcf0a-1d39-43b3-b60f-44f01b630e47
>     health: HEALTH_WARN
>     1 filesystem is degraded
>     insufficient standby MDS daemons available
>     1 MDSs behind on trimming
>     1 large omap objects
>
>   services:
>     mon: 3 daemons, quorum mds01,mds02,mds03 (age 4d)
>     mgr: mds02(active, since 3w), standbys: mds01, mds03
>     mds: ceph_fs:2/2 {0=mds02=up:rejoin,1=mds01=up:rejoin(laggy or
> crashed)}
>     osd: 535 osds: 533 up, 529 in
>
>   data:
>     pools:   3 pools, 3328 pgs
>     objects: 376.32M objects, 673 TiB
>     usage:   1.0 PiB used, 2.2 PiB / 3.2 PiB avail
>     pgs: 3315 active+clean
>  12   active+clean+scrubbing+deep
>  1    active+clean+scrubbing
>
Does someone have an idea where to go from here? ☺


Looks like the omap for a dirfrag is corrupted. Please check the mds log 
(debug_mds = 10) to find which omap is corrupted. Basically, all omap 
keys of a dirfrag should be in the format _.
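
For example, something along these lines (the daemon, pool and object names 
are placeholders for your setup):

# raise the mds debug level on the MDS host, via the admin socket
ceph daemon mds.mds01 config set debug_mds 10

# once the log points at a dirfrag, its omap keys can be listed directly;
# dirfrag objects typically live in the metadata pool, named <inode-hex>.<frag-hex>
rados -p <metadata pool> listomapkeys <inode-hex>.<frag-hex>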



Thanks!

K

Re: [ceph-users] problem returning mon back to cluster

2019-10-15 Thread Harald Staub

On 14.10.19 16:31, Nikola Ciprich wrote:

On Mon, Oct 14, 2019 at 01:40:19PM +0200, Harald Staub wrote:

Probably same problem here. When I try to add another MON, "ceph
health" becomes mostly unresponsive. One of the existing ceph-mon
processes uses 100% CPU for several minutes. Tried it on 2 test
clusters (14.2.4, 3 MONs, 5 storage nodes with around 2 hdd osds
each). To avoid errors like "lease timeout", I temporarily increase
"mon lease" from 5 to 50 seconds.

Not sure how bad it is from a customer PoV. But it is a problem by
itself to be several minutes without "ceph health", when there is an
increased risk of losing the quorum ...


Hi Harry,

thanks a lot for your reply! Not sure we're experiencing the same issue;
I don't have it on any other cluster. When this is happening to you, does
only "ceph health" stop working, or does it also block all client IO?


Hi Nik

Yes, you are right, client I/O is not affected. Also, stopping and 
starting an existing MON is ok. But adding a MON without increasing "mon 
lease" as mentioned led to quorum flapping, so this might be similar.


Cheers
 Harry


[ceph-users] Past_interval start interval mismatch (last_clean_epoch reported)

2019-10-15 Thread Huseyin Cotuk
Hi all,

I also hit bug #24866 in my test environment. According to the logs, the 
last_clean_epoch for the specified OSD/PG is 17703, but the interval starts at 
17895, so the OSD fails to start. There are some other OSDs in the same state. 

2019-10-14 18:22:51.908 7f0a275f1700 -1 osd.21 pg_epoch: 18432 pg[18.51( v 
18388'4 lc 18386'3 (0'0,18388'4] local-lis/les=18430/18431 n=1 ec=295/295 lis/c 
18430/17702 les/c/f 18431/17703/0 18428/18430/18421) [11,21]/[11,21,20] r=1 
lpr=18431 pi=[17895,18430)/3 crt=18388'4 lcod 0'0 unknown m=1 mbc={}] 18.51 
past_intervals [17895,18430) start interval does not contain the required bound 
[17703,18430) start

The cause is that pg 18.51 went clean in epoch 17703, but 17895 was reported to the monitor. 
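
(One way to see the epoch the cluster itself has recorded - a sketch, assuming 
the PG's primary OSD is still up - is to query the PG and compare the value 
with the one in the failing OSD's log:)

ceph pg 18.51 query | jq .info.history.last_epoch_clean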

I am using the latest stable version of Mimic (13.2.6).

Any idea how to fix it? Is there any way to bypass this check or fix the 
reported epoch #?

Thanks in advance. 

Best regards,
Huseyin Cotuk
hco...@gmail.com 
