Then it's probably something different. Does that happen with every
volume/image, or was it just this one time?
Quote from 徐蕴:
Hi Eugen,
Thank you for sharing your experience. I will dig into the OpenStack
cinder logs to check if something happened. The strange thing is that the
volume I deleted was not
Hi,
since we upgraded to Luminous we have had an issue with snapshot
deletion that could be related: when a largish (a few TB) snapshot gets
deleted, we see a spike in the load of the OSD daemons, followed by a brief
flap of the daemons themselves.
It seems that while the snapshot would have been
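If the load spike does come from snapshot trimming, a common way to throttle
it is to slow the snap trimmer down on the OSDs. A sketch (the values are
examples to be tuned for your cluster, not recommendations):

# Make the snap trimmer sleep between trim operations
$ ceph tell osd.* injectargs '--osd_snap_trim_sleep 0.5'
# Limit how many snap trims a PG runs concurrently
$ ceph tell osd.* injectargs '--osd_pg_max_concurrent_snap_trims 1'

Injected values do not survive an OSD restart; to keep them, set the same
options in ceph.conf as well.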
Hi Eugen,
Thank you for sharing your experience. I will dig into the OpenStack cinder logs to
check if something happened. The strange thing is that the volume I deleted was not
created from a snapshot, nor does it have any snapshots. And yet the rbd_id.xxx,
rbd_header.xxx and rbd_object_map.xxx objects were deleted,
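If the image's metadata objects are gone but the space was not freed, one
thing to check is whether the data objects are still lying around. A sketch,
where the pool name "volumes" is a placeholder and xxx stands for the image
ID taken from the rbd_header.xxx object name (rbd_data.<id> uses the same id
as rbd_header.<id>):

# List any leftover data objects belonging to the deleted image
$ rados -p volumes ls | grep 'rbd_data.xxx'

If that still returns objects, the deletion was interrupted before the data
objects were removed, which is where the cinder logs may help.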
the situation is:
health: HEALTH_WARN
1 pools have many more objects per pg than average
$ ceph health detail
MANY_OBJECTS_PER_PG 1 pools have many more objects per pg than average
pool cephfs_data objects per pg (315399) is more than 1227.23 times
cluster average (257)
$ ceph df
RAW
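The MANY_OBJECTS_PER_PG warning means cephfs_data holds far more objects per
PG than the cluster average, i.e. the pool likely has too few PGs. A sketch
of how to inspect and raise the count (256 is only an example value; size it
for your OSD count, and on pre-Nautilus releases raise pgp_num as well):

# Current PG count of the pool
$ ceph osd pool get cephfs_data pg_num
# Raise it gradually and let the cluster rebalance between steps
$ ceph osd pool set cephfs_data pg_num 256
$ ceph osd pool set cephfs_data pgp_num 256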
Hi,
we ran some benchmarks with a few samples of Seagate's new HDDs that some
of you might find interesting:
Blog post:
https://croit.io/2020/01/06/2020-01-06-benchmark-mach2
GitHub repo with scripts and raw data:
https://github.com/croit/benchmarks/tree/master/mach2-disks
Tl;dr: way faster
I think there is something wrong with the cephfs_data pool.
I created a new pool "cephfs_data2" and copied the data from
"cephfs_data" to "cephfs_data2" using this command:
$ rados cppool cephfs_data cephfs_data2
$ ceph df detail
RAW STORAGE:
CLASS SIZE AVAIL
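Note that rados cppool is generally not a safe way to migrate a CephFS data
pool: it does not preserve snapshots, and CephFS refers to data pools by pool
ID, so the copy cannot simply be swapped in under the old name. A sketch of
the usual alternative (the filesystem name "cephfs" and the mount point
/mnt/cephfs are placeholders):

# Attach the new pool as an additional data pool of the filesystem
$ ceph fs add_data_pool cephfs cephfs_data2
# Point a directory at the new pool via file layouts; files created
# under it from now on will be stored in cephfs_data2
$ setfattr -n ceph.dir.layout.pool -v cephfs_data2 /mnt/cephfs/somedir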