Please open a tracker issue for rgw, and also provide some additional journalctl
logs and info (Ceph version, OS version, etc.):
http://tracker.ceph.com/projects/rgw
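For anyone gathering that information, a rough sketch of the commands involved; the systemd unit name below is a placeholder for a typical deployment, not something taken from this thread:

```shell
# Sketch of the requested info gathering; the unit name is a placeholder.
# Guarded so it degrades gracefully on hosts without Ceph installed.
if command -v ceph >/dev/null; then
    ceph --version                          # Ceph release, e.g. 12.1.0
fi
cat /etc/os-release                         # OS name and version
if command -v journalctl >/dev/null; then
    # Recent radosgw journal entries for the local instance:
    journalctl -u "ceph-radosgw@rgw.$(hostname -s)" --since "2 hours ago" || true
fi
```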

On Mon, Jul 24, 2017 at 9:03 AM, Vaibhav Bhembre <[email protected]>
wrote:

> I am seeing the same issue on upgrade to Luminous v12.1.0 from Jewel.
> I am not using Keystone or OpenStack either and my radosgw daemon
> hangs as well. I have to restart it to resume processing.
>
> 2017-07-24 00:23:33.057401 7f196096a700  0 ERROR: keystone revocation
> processing returned error r=-22
> 2017-07-24 00:38:33.057524 7f196096a700  0 ERROR: keystone revocation
> processing returned error r=-22
> 2017-07-24 00:53:33.057648 7f196096a700  0 ERROR: keystone revocation
> processing returned error r=-22
> 2017-07-24 01:08:33.057749 7f196096a700  0 ERROR: keystone revocation
> processing returned error r=-22
> 2017-07-24 01:23:33.057878 7f196096a700  0 ERROR: keystone revocation
> processing returned error r=-22
> 2017-07-24 01:38:33.057964 7f196096a700  0 ERROR: keystone revocation
> processing returned error r=-22
> 2017-07-24 01:53:33.058098 7f196096a700  0 ERROR: keystone revocation
> processing returned error r=-22
> 2017-07-24 02:08:33.058225 7f196096a700  0 ERROR: keystone revocation
> processing returned error r=-22
>
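A side note that may help decode those lines: radosgw logs negated errno values, so r=-22 is errno 22, EINVAL ("Invalid argument") -- which would be consistent (my assumption, not confirmed in this thread) with a revocation pass being attempted against the empty rgw_keystone_url shown below. A quick check:

```python
import errno
import os

# radosgw returns negated errno values; r=-22 therefore maps to errno 22.
code = 22
print(errno.errorcode[code])  # -> EINVAL
print(os.strerror(code))      # -> Invalid argument
```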
> The following are my keystone config options:
>
> "rgw_keystone_url": ""
> "rgw_keystone_admin_token": ""
> "rgw_keystone_admin_user": ""
> "rgw_keystone_admin_password": ""
> "rgw_keystone_admin_tenant": ""
> "rgw_keystone_admin_project": ""
> "rgw_keystone_admin_domain": ""
> "rgw_keystone_barbican_user": ""
> "rgw_keystone_barbican_password": ""
> "rgw_keystone_barbican_tenant": ""
> "rgw_keystone_barbican_project": ""
> "rgw_keystone_barbican_domain": ""
> "rgw_keystone_api_version": "2"
> "rgw_keystone_accepted_roles": "Member"
> "rgw_keystone_accepted_admin_roles": ""
> "rgw_keystone_token_cache_size": "10000"
> "rgw_keystone_revocation_interval": "900"
> "rgw_keystone_verify_ssl": "true"
> "rgw_keystone_implicit_tenants": "false"
> "rgw_s3_auth_use_keystone": "false"
>
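One more observation from the timestamps above: the errors arrive exactly 900 seconds apart, matching rgw_keystone_revocation_interval in the dump, so the failing pass is on that timer. Until a fix lands, a possible stopgap is to stretch the timer in ceph.conf; the section name below is a placeholder for your rgw instance, and whether this fully silences the messages is untested here:

```ini
# Hypothetical stopgap, not a fix: run the (failing) revocation pass far
# less often. Section name is a placeholder for your rgw instance.
[client.rgw.gateway1]
rgw keystone revocation interval = 86400   ; seconds; default shown above was 900
```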
> Is this fixed in RC2 by any chance?
>
> On Thu, Jun 29, 2017 at 3:11 AM, Martin Emrich
> <[email protected]> wrote:
> > Since upgrading to 12.1, our Object Gateways hang after a few hours; I only
> > see these messages in the log file:
> >
> >
> >
> > 2017-06-29 07:52:20.877587 7fa8e01e5700  0 ERROR: keystone revocation
> > processing returned error r=-22
> >
> > 2017-06-29 08:07:20.877761 7fa8e01e5700  0 ERROR: keystone revocation
> > processing returned error r=-22
> >
> > 2017-06-29 08:07:29.994979 7fa8e11e7700  0 process_single_logshard: Error in
> > get_bucket_info: (2) No such file or directory
> >
> > 2017-06-29 08:22:20.877911 7fa8e01e5700  0 ERROR: keystone revocation
> > processing returned error r=-22
> >
> > 2017-06-29 08:27:30.086119 7fa8e11e7700  0 process_single_logshard: Error in
> > get_bucket_info: (2) No such file or directory
> >
> > 2017-06-29 08:37:20.878108 7fa8e01e5700  0 ERROR: keystone revocation
> > processing returned error r=-22
> >
> > 2017-06-29 08:37:30.187696 7fa8e11e7700  0 process_single_logshard: Error in
> > get_bucket_info: (2) No such file or directory
> >
> > 2017-06-29 08:52:20.878283 7fa8e01e5700  0 ERROR: keystone revocation
> > processing returned error r=-22
> >
> > 2017-06-29 08:57:30.280881 7fa8e11e7700  0 process_single_logshard: Error in
> > get_bucket_info: (2) No such file or directory
> >
> > 2017-06-29 09:07:20.878451 7fa8e01e5700  0 ERROR: keystone revocation
> > processing returned error r=-22
> >
> >
> >
> > FYI: we do not use Keystone or OpenStack.
> >
> >
> >
> > This started after upgrading from jewel (via kraken) to luminous.
> >
> >
> >
> > What could I do to fix this?
> >
> > Is there some “fsck”-like consistency check and repair for the radosgw
> > buckets?
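There is in fact a per-bucket check built into radosgw-admin. A sketch follows; the bucket name is a placeholder, and --fix behaviour on the 12.1 release candidates is untested here, so verify before relying on it:

```shell
# Hedged sketch: radosgw-admin's per-bucket index consistency check.
# "mybucket" is a placeholder bucket name.
if command -v radosgw-admin >/dev/null; then
    radosgw-admin bucket check --bucket=mybucket || true   # report inconsistencies
    # radosgw-admin bucket check --bucket=mybucket --fix   # attempt repairs
fi
```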
> >
> >
> >
> > Thanks,
> >
> >
> >
> > Martin
> >
> >
> >
> >
> > _______________________________________________
> > ceph-users mailing list
> > [email protected]
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
