Any other ideas?
> On 15.01.2020 at 15:50, Oskar Malnowicz wrote:
>
> the situation is:
>
> health: HEALTH_WARN
> 1 pools have many more objects per pg than average
>
> $ ceph health detail
> MANY_OBJECTS_PER_PG 1 pools have many more objects per pg than average
> pool cephfs_data
I think there is something wrong with the cephfs_data pool.
I created a new pool "cephfs_data2" and copied the data from the
"cephfs_data" to the "cephfs_data2" pool using this command:
$ rados cppool cephfs_data cephfs_data2
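(Note that, as far as I understand, `rados cppool` only copies the objects; the
new pool does not automatically become part of the file system. Assuming the
file system is simply named "cephfs" (a placeholder here), it would still have
to be registered as a data pool, roughly:
$ ceph fs add_data_pool cephfs cephfs_data2
$ ceph fs ls
and the default data pool of an existing file system cannot simply be swapped
out this way.)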
$ ceph df detail
RAW STORAGE:
CLASS SIZE AVAIL
the situation is:
health: HEALTH_WARN
1 pools have many more objects per pg than average
$ ceph health detail
MANY_OBJECTS_PER_PG 1 pools have many more objects per pg than average
pool cephfs_data objects per pg (315399) is more than 1227.23 times cluster average (257)
$ ceph df
RAW
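In case it helps to narrow the warning down: the PG count of the pool can be
checked and, if needed, raised so the objects spread over more PGs. A rough
sketch (the target value 256 is just a placeholder, not a recommendation):
$ ceph osd pool get cephfs_data pg_num
$ ceph osd pool set cephfs_data pg_num 256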
I executed the commands from above again ("Recovery from missing
metadata objects") and now the MDS daemons start.
Still the same situation as before :(
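For reference, the "Recovery from missing metadata objects" steps from the
CephFS disaster-recovery documentation are roughly the following, run against
the data pool:
$ cephfs-data-scan init
$ cephfs-data-scan scan_extents cephfs_data
$ cephfs-data-scan scan_inodes cephfs_data
$ cephfs-data-scan scan_links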
On 14.01.20 at 22:36, Oskar Malnowicz wrote:
> I just restarted the MDS daemons and now they crash during boot.
>
> -36> 2020-01-14
I just restarted the MDS daemons and now they crash during boot.
-36> 2020-01-14 22:33:17.880 7fc9bbeaa700  2 mds.0.13470 Booting: 0: opening inotable
-35> 2020-01-14 22:33:17.880 7fc9bbeaa700  2 mds.0.13470 Booting: 0: opening sessionmap
-34> 2020-01-14 22:33:17.880 7fc9bbeaa700  2
This was the new state. The results are the same as Florian's.
$ time cephfs-data-scan scan_extents cephfs_data
cephfs-data-scan scan_extents cephfs_data  1.86s user 1.47s system 21% cpu 15.397 total
$ time cephfs-data-scan scan_inodes cephfs_data
cephfs-data-scan scan_inodes cephfs_data 2.76s user
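If the scans ever have to be repeated on a larger pool, the documentation
mentions they can be split across several workers, along these lines
(4 workers assumed, one invocation per worker with worker_n = 0..3):
$ cephfs-data-scan scan_extents --worker_n 0 --worker_m 4 cephfs_data
$ cephfs-data-scan scan_inodes --worker_n 0 --worker_m 4 cephfs_data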
I'm asking you to get the new state of the file system tree after
recovering from the data pool. Florian wrote that before I asked you
to do this...
How long did it take to run the cephfs-data-scan commands?
On Tue, Jan 14, 2020 at 11:58 AM Oskar Malnowicz
wrote:
>
> As Florian already wrote,
As Florian already wrote, `du -hc` shows a total usage of 31G, but `ceph
df` shows us a usage of 2.1 TiB:
# du -hs
31G
# ceph df
cephfs_data 6 2.1 TiB 2.48M 2.1 TiB 25.00 3.1 TiB
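One way to cross-check where the 2.1 TiB comes from is the per-pool object
statistics:
$ rados df
Counting the data pool's objects directly also works, but can be slow on
large pools:
$ rados -p cephfs_data ls | wc -l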
On 14.01.20 at 20:44, Patrick Donnelly wrote:
> On Tue, Jan 14, 2020 at 11:40 AM Oskar
On Tue, Jan 14, 2020 at 11:40 AM Oskar Malnowicz
wrote:
>
> I ran these commands, but still the same problems
Which problems?
> $ cephfs-data-scan scan_extents cephfs_data
>
> $ cephfs-data-scan scan_inodes cephfs_data
>
> $ cephfs-data-scan scan_links
> 2020-01-14 20:36:45.110 7ff24200ef80 -1
I ran these commands, but still the same problems:
$ cephfs-data-scan scan_extents cephfs_data
$ cephfs-data-scan scan_inodes cephfs_data
$ cephfs-data-scan scan_links
2020-01-14 20:36:45.110 7ff24200ef80 -1 mds.0.snap updating last_snap 1 -> 27
$ cephfs-data-scan cleanup cephfs_data
do you
On Tue, Jan 14, 2020 at 11:24 AM Oskar Malnowicz
wrote:
>
> $ ceph daemon mds.who flush journal
> {
> "message": "",
> "return_code": 0
> }
>
>
> $ cephfs-table-tool 0 reset session
> {
> "0": {
> "data": {},
> "result": 0
> }
> }
>
> $ cephfs-table-tool 0 reset
$ ceph daemon mds.who flush journal
{
    "message": "",
    "return_code": 0
}
$ cephfs-table-tool 0 reset session
{
    "0": {
        "data": {},
        "result": 0
    }
}
$ cephfs-table-tool 0 reset snap
{
    "result": 0
}
$ cephfs-table-tool 0 reset inode
{
    "0": {
        "data":
Please try flushing the journal:
ceph daemon mds.foo flush journal
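and then re-check the stray counters afterwards, e.g.:
ceph daemon mds.foo perf dump | grep stray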
The problem may be caused by this bug: https://tracker.ceph.com/issues/43598
As for what to do next, you would likely need to recover the deleted
inodes from the data pool so you can retry deleting the files:
Hello Patrick,
"purge_queue": {
"pq_executing_ops": 0,
"pq_executing": 0,
"pq_executed": 5097138
},
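(Those numbers come from the MDS admin socket, e.g.
`ceph daemon mds.$hostname perf dump | grep -A 4 purge_queue`.)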
We already restarted the MDS daemons, but no change.
There are no other health warnings than the one Florian already
mentioned.
Cheers, Oskar
On 14.01.20 at
On Tue, Jan 14, 2020 at 5:15 AM Florian Pritz
wrote:
> `ceph daemon mds.$hostname perf dump | grep stray` shows:
>
> > "num_strays": 0,
> > "num_strays_delayed": 0,
> > "num_strays_enqueuing": 0,
> > "strays_created": 5097138,
> > "strays_enqueued": 5097138,
> > "strays_reintegrated": 0,
> >