Hi,
Note: the firewall is disabled on all hosts.
Regards.
On Fri, 15 Mar 2024 at 06:42, wodel youchi wrote:
Hi,
I recreated the cluster again, and this is the result.
This is my initial bootstrap:
cephadm --image 192.168.2.36:4000/ceph/ceph:v18 bootstrap \
  --initial-dashboard-user admin \
  --initial-dashboard-password adminpass --dashboard-password-noupdate \
  --registry-url 192.168.2.36:4000 \
  --registry
Also, before anyone asks: I have just gone over every client attached to
this filesystem (via native CephFS or NFS) and checked for deleted files.
There are three deleted files in total, amounting to about 200G.
On 15/03/2024 10:05 am, Thorne Lawler wrote:
Igor,
Yes. Just a bit.
root@pmx101:/mnt/pve/iso# getfattr -n ceph.dir.rentries .
# file: .
ceph.dir.rentries="67"
On 15/03/2024 4:56 am, Bailey Allison wrote:
Hey All,
It might be easier to check using CephFS dir stats via getfattr, e.g.
getfattr -n ceph.dir.rentries /path/to/dir
Regards,
Bailey
Igor,
Yes. Just a bit.
root@pmx101:/mnt/pve/iso# du -h | wc -l
10
root@pmx101:/mnt/pve/iso# du -h
0 ./snippets
0 ./tmp
257M ./xcp_nfs_sr/2ba36cf5-291a-17d2-b510-db1a295ce0c2
5.5T ./xcp_nfs_sr/5aacaebb-4469-96f9-729e-fe45eef06a14
5.5T ./xcp_nfs_sr
0 ./failover_test
11G
Hello, I'm looking for suggestions on how to track bucket creation over the
S3 API, and bucket usage (number of objects and size) of all buckets over time.
In our RGW setup we have a custom client panel where roughly 85% of the
buckets are created, which makes it easy for us to track those newly created
bucke
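One option (not from the thread) is to poll `radosgw-admin bucket stats --format=json` on a schedule and record each bucket's object count and size. A minimal Python sketch of the summarizing step; the `"rgw.main"`, `"num_objects"`, and `"size"` field names are assumptions to verify against the JSON your radosgw-admin version emits:

```python
import json

def summarize(stats_json: str) -> dict:
    """Map bucket name -> (num_objects, size_bytes) from the JSON that
    `radosgw-admin bucket stats --format=json` prints (field names assumed)."""
    out = {}
    for b in json.loads(stats_json):
        usage = b.get("usage", {}).get("rgw.main", {})
        out[b["bucket"]] = (usage.get("num_objects", 0), usage.get("size", 0))
    return out

# Tiny made-up sample of the expected JSON shape:
sample = '[{"bucket": "demo", "usage": {"rgw.main": {"num_objects": 3, "size": 4096}}}]'
print(summarize(sample))  # {'demo': (3, 4096)}
```

Storing these snapshots with a timestamp (e.g. in the same database behind the client panel) gives per-bucket growth over time without touching the RGW usage log.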
Hey All,
It might be easier to check using CephFS dir stats via getfattr, e.g.
getfattr -n ceph.dir.rentries /path/to/dir
Regards,
Bailey
> -Original Message-
> From: Igor Fedotov
> Sent: March 14, 2024 1:37 PM
> To: Thorne Lawler ; ceph-users@ceph.io;
> etienne.men...@ubisoft.com; v
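For scripting, the same check Bailey suggests can be done from Python with os.getxattr (Linux-only). A sketch, assuming the path sits on a CephFS mount:

```python
import os

def dir_rentries(path: str) -> int:
    # ceph.dir.rentries = recursive count of inodes (files + dirs) under path,
    # the same virtual xattr that `getfattr -n ceph.dir.rentries` reads.
    return int(os.getxattr(path, "ceph.dir.rentries"))
```

On a non-CephFS path the call simply fails with OSError, which makes it easy to tell you are querying the wrong mount.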
Hi,
> On 14 Mar 2024, at 19:29, Denis Polom wrote:
>
> so the metric itself is in milliseconds, and after dividing by _count it's in seconds?
>
>
These are two metrics used for long-running averages [1]; the query that
produces a result in seconds looks like this:
(irate(ceph_osd_op_r_latency_sum[1m]) / irate(ce
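The arithmetic behind that query is just the ratio of the deltas of the two cumulative counters. A small sketch (sample values are made up; it assumes the sum counter is in seconds, as the quoted query implies):

```python
def avg_latency_seconds(sum_t1: float, sum_t0: float,
                        count_t1: int, count_t0: int) -> float:
    """Mean op latency over [t0, t1]: ceph_osd_op_r_latency_sum is a
    cumulative total of op latencies and ..._count is the cumulative op
    count, so delta(sum) / delta(count) is mean seconds per op."""
    ops = count_t1 - count_t0
    if ops == 0:
        return 0.0  # no ops in the interval; avoid division by zero
    return (sum_t1 - sum_t0) / ops

# e.g. 50 ops that together took 0.25 s -> 5 ms average
print(avg_latency_seconds(12.75, 12.50, 1050, 1000))  # 0.005
```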
Thorn,
you might want to assess the number of files on the mounted fs by running
"du -h | wc". Does it differ drastically from the number of objects in the
pool (~3.8 M)?
And just in case - please run "rados lssnap -p cephfs.shared.data".
Thanks,
Igor
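The same tally can be scripted; a rough Python stand-in for `du | wc` (not from the thread), just for comparing against the pool's object count. Note one file may span several RADOS objects (default 4 MiB stripes), so only a large mismatch is meaningful:

```python
import os

def count_entries(root: str) -> int:
    """Count all files and directories under root, recursively."""
    total = 0
    for _dirpath, dirnames, filenames in os.walk(root):
        total += len(dirnames) + len(filenames)
    return total
```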
On 3/14/2024 1:42 AM, Thorne Lawler wrote:
Hi,
> On 14 Mar 2024, at 16:44, Denis Polom wrote:
>
> do you know if there is some table of Ceph metrics and units that should be
> used for them?
>
> I am currently struggling with
>
> ceph_osd_op_r_latency_sum
>
> ceph_osd_op_w_latency_sum
>
> if they are in ms or seconds?
>
> Any idea ple
Hi,
I am creating a new Ceph cluster using Reef.
This is my host-specs file:
[root@controllera config]# cat hosts-specs2.yml
service_type: host
hostname: computehci01
addr: 20.1.0.2
location:
  chassis: chassis1
---
service_type: host
hostname: computehci02
addr: 20.1.0.3
location:
  chassis: chassi
Hi guys,
do you know if there is some table of Ceph metrics and units that should
be used for them?
I am currently struggling with
ceph_osd_op_r_latency_sum
ceph_osd_op_w_latency_sum
Are they in ms or in seconds?
Any idea please?
Thx!
Hi all,
I have just set up a small Ceph cluster with CephFS.
The setup is Reef 18.2.1 on Debian bookworm.
The system is up and running the way it should,
though I have a problem with CephFS snapshots.
From reading the docs, I should be able to make a
snapshot in any directory in the filesystem.
I
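For context, creating a CephFS snapshot is just a mkdir inside the hidden `.snap` directory of any directory (snapshots may need enabling first with `ceph fs set <fs> allow_new_snaps true`, and the client needs the `s` flag in its MDS caps). A minimal sketch; the mount point and snapshot name are illustrative assumptions:

```python
import os

def make_snapshot(directory: str, name: str) -> str:
    """On CephFS, mkdir under .snap takes a snapshot of the whole subtree."""
    snap_path = os.path.join(directory, ".snap", name)
    os.mkdir(snap_path)  # on CephFS this creates a snapshot, not a normal dir
    return snap_path
```

Removing the snapshot is the reverse: `rmdir` on the same `.snap/<name>` path.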
Yup, that does look like a huge difference.
@Pedro Gonzalez Gomez @Aashish Sharma
@Ankush Behl Could you guys help
here? Did we miss any fixes for 18.2.2?
Regards,
On Thu, Mar 14, 2024 at 2:17 AM Harry G Coin wrote:
> Thanks! Oddly, all the dashboard checks you suggest appear normal, yet