Sent: [...] May 2019 10:14
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] RGW Bucket unable to list buckets 100TB bucket
On 10/05/2019 08:42, EDH - Manuel Rios Fernandez wrote:
> Hi,
>
> Yesterday night we added 2 Intel Optane NVMe.
>
> Generated 4 partitions to get the max pe[...] maybe this will help
> allow the software to complete the listing?
>
> Best Regards,
> Manuel
>
> -----Original Message-----
> From: Matt Benjamin
> Sent: Friday, May 3, 2019 15:47
> To: EDH - Manuel Rios Fernandez
> CC: ceph-users
> Subject: Re: [ceph
-----Original Message-----
From: ceph-users On behalf of EDH - Manuel Rios Fernandez
Sent: Saturday, May 4, 2019 15:53
To: 'Matt Benjamin'
CC: 'ceph-users'
Subject: Re: [ceph-users] RGW Bucket unable to list buckets 100TB bucket
Hi Folks,
The user is telling us that their software drops [...]
> -----Original Message-----
> From: EDH - Manuel Rios Fernandez
> Sent: Friday[...]
42f/Volume_Unknown_fbf0ea7a-af96-4dd4-9ad5-dbf6efdeefdc%24/20190430074414/0.cbrevision:get_obj:http status=206
> 2019-05-03 15:37:28.959 7f4a68484700 1 ====== req done req=0x55f2fde20970 op status=-104 http_status=206 ======
>
>
-----Original Message-----
From: EDH - Manuel Rios Fernandez
Sent: Friday, May 3, 2019 15:12
To: 'Matt Benjamin'
CC: 'ceph-users'
Subject: RE: [ceph-users] RGW Bucket unable to list buckets 100TB bucket
Hi Matt,
Thanks for your help.
We have made the changes, plus a[...]
Sent: Friday, May 3, 2019 14:00
To: EDH - Manuel Rios Fernandez
CC: ceph-users
Subject: Re: [ceph-users] RGW Bucket unable to list buckets 100TB bucket
Hi Folks,
Thanks for sharing your ceph.conf along with the behavior.
There are some odd things there.
1. rgw_num_rados_handles is deprecated; it should be 1 (the default),
but changing it may require you to check and retune the values for
objecter_inflight_ops and objecter_inflight_op_bytes to[...]
Hi,
We have a Ceph deployment on version 13.2.5, with several buckets holding
millions of files.
services:
mon: 3 daemons, quorum CEPH001,CEPH002,CEPH003
mgr: CEPH001(active)
osd: 106 osds: 106 up, 106 in
rgw: 2 daemons active
data:
pools: 17 pools, 7120 pgs
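For buckets this large, one first diagnostic step (a sketch: the bucket name below is a placeholder, and these are standard radosgw-admin commands rather than anything prescribed in this thread) is to check the bucket's stats and per-shard object limits, since an under-sharded index is a common cause of listings that never complete:

```shell
# Placeholder bucket name; substitute your own.
# Shows per-bucket stats, including object counts and index shard info.
radosgw-admin bucket stats --bucket=mybucket

# Flags buckets whose objects-per-shard count is over the configured limit.
radosgw-admin bucket limit check
```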