But how can you calculate that before you have any data? Say I have 360 TB of free
space; how large should the index pool be for that?
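One common approach is to size the index pool by expected object count rather than raw capacity, since the bucket index stores a small omap entry per object. A back-of-envelope sketch, assuming roughly 200 bytes of omap data per object and a 3x replicated index pool (both figures are assumptions for illustration, not official guidance):

```python
# Back-of-envelope RGW index pool sizing sketch.
# All numbers are assumptions for illustration, not official guidance.
bytes_per_object = 200            # assumed average bucket-index omap entry size
replication = 3                   # assumed size=3 on the index pool
expected_objects = 1_000_000_000  # hypothetical: one billion objects

index_bytes = expected_objects * bytes_per_object * replication
print(f"estimated index pool usage: {index_bytes / 2**40:.2f} TiB")
```

Under these assumptions even a billion objects only needs on the order of half a TiB of index space, which is why the object count matters far more than the 360 TB of data capacity.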
Istvan Szabo
Senior Infrastructure Engineer
---
Agoda Services Co., Ltd.
Yeah, I see the same: 6 servers have NVMe drives, and today they were all maxed
out on the IOPS side, but I don't understand why. The user issued HEAD
operations, sometimes around 6 per minute; the cluster was doing about 60-70k
IOPS. How can that max out 6 NVMe drives, each of which should be able to serve
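One thing worth checking is write amplification: client-side IOPS multiply on the backend through replication plus journal/metadata writes, so the per-drive load can be far higher than the client number suggests. A rough sketch with hypothetical factors (the amplification value is an assumption, not a measurement from your cluster):

```python
# Hypothetical back-of-envelope check: how much backend load do 70k
# client IOPS create across 6 NVMe drives? Both factors are assumptions.
client_iops = 70_000
replication = 3            # assumed size=3 pool
backend_amplification = 2  # assumed journal/metadata overhead per replica write

backend_iops = client_iops * replication * backend_amplification
per_drive = backend_iops / 6
print(f"~{per_drive:,.0f} backend IOPS per drive")
```

With these assumed factors each drive sees around 70k backend IOPS, which can already be heavy for a drive that is also serving omap/metadata traffic with sync writes.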
I would like to mount CephFS directly from a Windows client, but I keep getting
the error below.
PS C:\Program Files\Ceph\bin> .\ceph-dokan.exe -l x
2021-07-15T17:41:30.365Eastern Daylight Time 4 -1 monclient(hunting):
handle_auth_bad_method server allowed_methods [2] but i only support [2]
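For reference, ceph-dokan expects a client config and keyring on the Windows side; below is a minimal sketch of what that might look like. The paths, option set, and monitor address are placeholders on my part, and this is not a confirmed fix for the auth error above:

```ini
; Hypothetical minimal C:\ProgramData\ceph\ceph.conf for ceph-dokan.
; mon_host and the keyring path are placeholders.
[global]
    mon_host = <monitor-address>
    keyring = C:\ProgramData\ceph\keyring
    log file = C:\ProgramData\ceph\out.log
```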
If I run the commands now, I still don't get any information returned with
ceph fs perf stats.
cephfs-top still says there is no cluster ceph.
The clients are all el7 & el8 kernel clients and show up in the Dashboard.
When I execute "ceph tell mds. client ls", I can see all clients
are there.
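One thing to rule out (an assumption on my side, since I can't see your cluster): `ceph fs perf stats` is served by the MGR `stats` module, which has to be enabled before the query returns anything:

```shell
# Check whether the MGR stats module is on, and enable it if not;
# `ceph fs perf stats` returns nothing useful without it.
ceph mgr module ls | grep -i stats
ceph mgr module enable stats
ceph fs perf stats
```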
Hello ceph-users, does anyone have an idea why I'm getting this?
$ radosgw-admin user stats --uid someone --reset-stats
ERROR: could not reset user stats: (75) Value too large for defined data
type
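Not a fix, but for decoding the error: the (75) is an errno, and on Linux errno 75 is EOVERFLOW, which is where the "Value too large for defined data type" text comes from:

```python
import errno
import os

# On Linux, errno 75 is EOVERFLOW ("Value too large for defined data type"),
# which is the code radosgw-admin is surfacing here.
print(errno.EOVERFLOW, os.strerror(errno.EOVERFLOW))
```

So the message is the generic strerror text for EOVERFLOW; the interesting question is which internal counter or field overflows during the stats reset.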
On Thu, Jul 15, 2021 at 5:18 PM Eugen Block wrote:
>
> Hi,
>
> I just set up a virtual one-node cluster (16.2.5) to check out
> cephfs-top. Regarding the number of clients I was a little surprised,
> too; in the first couple of minutes the number switched back and forth
> between 0 and 1 although
The contradicting outputs seem to be from immediate queries. You need to wait a
few seconds for 'perf stats' (and, of course, cephfs-top) to display the
correct metrics. I hope you have ongoing I/O while running 'perf stats' and
cephfs-top. What is your kernel version?
On 15/07/21 5:17 pm,
Hi,
I just set up a virtual one-node cluster (16.2.5) to check out
cephfs-top. Regarding the number of clients I was a little surprised,
too; in the first couple of minutes the number switched back and forth
between 0 and 1 although I had not connected any client yet. But after
a while
Hi guys,
I remember someone in the Ceph community deploying Ceph on 1U nodes with 16 x 3.5" HDDs.
With the current chip shortage, Supermicro won't deliver new nodes until October...
I'm looking for the model and brand of that 1U, 16-HDD server; I think it was Asus or
ASRock server nodes... but can someone post the server
Hi,
I'm facing something strange in Ceph (v12.2.13, FileStore). I have two
clusters with the same config (kernel, network, disks, ...). One of them
has 3 ms latency, the other has 100 ms latency. On both, physical disk
write latency is less than 1 ms.
In the cluster with 100 ms latency on write, when I
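When comparing the two clusters, it can help to pull per-OSD commit latencies (e.g. from `ceph osd perf`) and look for outliers rather than averages. A small sketch over hypothetical numbers (the OSD names and values below are made up for illustration):

```python
# Hypothetical per-OSD commit latencies in ms, in the shape `ceph osd perf`
# reports them; real values would come from the cluster, not this dict.
commit_ms = {"osd.0": 98, "osd.1": 105, "osd.2": 1, "osd.3": 2}

average = sum(commit_ms.values()) / len(commit_ms)
floor = min(commit_ms.values())
outliers = {osd: ms for osd, ms in commit_ms.items() if ms > 10 * floor}
print(f"average {average:.1f} ms, outliers: {sorted(outliers)}")
```

A high cluster-wide average combined with sub-millisecond disk latency often points at a few slow OSDs or the network path between nodes rather than the disks themselves.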
Hi,
How can I know what size of NVMe drive is needed for my index pool? At the
moment I'm using 6 x 1.92 TB NVMe (overkill), but I have no idea how it is used.
Thanks
What do you mean? You can check pool usage via the 'ceph df detail' output.
> On 15 Jul 2021, at 07:53, Szabo, Istvan (Agoda)
> wrote:
>
> How can I know what size of NVMe drive is needed for my index pool? At the
> moment I'm using 6 x 1.92 TB NVMe (overkill), but I have no idea