So, my pool size has increased to the point where the autoscaler suggested an
increase of pg_num (from 100 to 512). Autoscaler mode is “on”, but no change
happens.
ceph osd pool ls detail reports:
…
pool 10 'rbd1' replicated size 1 min_size 1 crush_rule 0 object_hash rjenkins
pg_num 100
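For what it's worth, here is a minimal sketch of the autoscaler's decision rule as I understand it from the docs (a simplification, not Ceph's actual code): the target pg_num is rounded to a power of two, and the autoscaler only acts when the current value is off from the target by more than a factor of 3. By that rule, 100 -> 512 is a factor of 5.12 and *should* trigger a change, which matches the surprise above that nothing happens.

```python
# Hypothetical simplification of the pg_autoscaler rule (not Ceph source):
# round the raw target to a power of two, act only on a >3x discrepancy.

def nearest_power_of_two(n: int) -> int:
    """Round n to the nearest power of two."""
    if n <= 1:
        return 1
    lower = 1 << (n.bit_length() - 1)   # largest power of two <= n
    upper = lower << 1                  # smallest power of two > n
    return lower if n - lower < upper - n else upper

def autoscaler_would_act(current_pg_num: int, target_pg_num: int) -> bool:
    """True if current differs from target by more than a factor of 3."""
    ratio = target_pg_num / current_pg_num
    return ratio > 3 or ratio < 1 / 3

target = nearest_power_of_two(500)      # suppose the raw target is ~500
print(target)                           # -> 512
print(autoscaler_would_act(100, 512))   # 512/100 = 5.12 > 3 -> True
```

Since by this sketch the change should happen, it may be worth checking `ceph osd pool autoscale-status` for what the autoscaler actually computes.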
On 16/9/19 18:52, Wido den Hollander wrote:
>
> On 9/14/19 4:24 AM, Alfred wrote:
>> Hi ceph users,
>>
>>
>> If I understand correctly the "min_compat_client" option in the OSD map
>> was replaced in Luminous with "require_min_compat_client".
>>
>> After upgrading a cluster to Luminous and
Thanks for responding!
It's good to hear that the primary OSD has some smarts when dealing with
partial reads, and that seems to line up with what I was seeing, i.e. I would
have expected drastically worse performance otherwise with our large object
sizes and tiny block sizes.
I am still
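The performance concern above can be made concrete with back-of-envelope arithmetic (a sketch; the 64 MiB object size and 4 KiB block size are made-up numbers for illustration, not figures from this thread): if the primary OSD had to read a whole object to serve a small read, the read amplification would be enormous.

```python
# Hypothetical numbers for illustration: compare the bytes needed for a
# small client read against the bytes that would be read if the primary
# OSD fetched the entire object instead of just the requested extent.

OBJECT_SIZE = 64 * 1024 * 1024   # 64 MiB object (assumed)
BLOCK_SIZE = 4 * 1024            # 4 KiB client read (assumed)

partial_read_bytes = BLOCK_SIZE  # partial read: only the extent is read
full_read_bytes = OBJECT_SIZE    # naive read: the whole object comes off disk

amplification = full_read_bytes // partial_read_bytes
print(amplification)             # -> 16384x more I/O per small read
```

Which is why, if partial reads were not handled, "drastically worse performance" is exactly what you would expect.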
On 9/14/19 4:24 AM, Alfred wrote:
> Hi ceph users,
>
>
> If I understand correctly the "min_compat_client" option in the OSD map
> was replaced in Luminous with "require_min_compat_client".
>
> After upgrading a cluster to Luminous and setting
> set-require-min-compat-client to jewel, the
Hi Team,
In Ceph 14.2.2, the ceph dashboard does not have set-ssl-certificate.
We are trying to enable the Ceph dashboard, and supplying the SSL certificate
and key does not work.
cn5.chn5au1c1.cdn ~# ceph dashboard set-ssl-certificate -i dashboard.crt
no valid command found; 10 closest
Hi, cephers
Recently, when using s3cmd to upload a large file, the last POST request
(the one that completes the multipart upload) appeared to fail. But in fact
the file upload succeeded.
Some key points:
1. s3cmd sends the POST request, but the server takes 30s to respond; why
does this POST request sometimes need 30s to finish,
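For context on why that final request can be slow: with S3 multipart uploads the parts are transferred first, and the upload ends with a single CompleteMultipartUpload POST that the server processes in one go, so its latency tends to grow with the number of parts. A sketch of the part arithmetic (the 15 MB chunk size is s3cmd's default --multipart-chunk-size-mb; the 10 GiB file size is assumed for illustration):

```python
import math

# Sketch with assumed numbers: how many parts a large upload produces.
# The final CompleteMultipartUpload POST lists every part, and the server
# assembles all of them before answering, so more parts generally means
# a slower completion request.

CHUNK_SIZE = 15 * 1024 * 1024          # s3cmd default: 15 MB chunks
file_size = 10 * 1024 * 1024 * 1024    # 10 GiB file (assumed)

num_parts = math.ceil(file_size / CHUNK_SIZE)
print(num_parts)                       # -> 683 parts in the completion POST
```

If s3cmd's own timeout is shorter than that server-side assembly time, the client can report a failure even though the upload actually completed, which would match the behaviour described above.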
Hi Igor,
On 12.09.19 at 19:34, Igor Fedotov wrote:
> Hi Stefan,
>
> thanks for the update.
>
> Relevant PR from Paul mentions kernels (4.9+):
> https://github.com/ceph/ceph/pull/23273
>
> Not sure how correct this is. That's all I have.
>
> Try asking Sage/Paul...
>
> Also could you please