Any other ideas?
> On 15.01.2020 at 15:50, Oskar Malnowicz wrote:
>
> the situation is:
>
> health: HEALTH_WARN
> 1 pools have many more objects per pg than average
>
> $ ceph health detail
MANY_OBJECTS_PER_PG 1 pools have many more objects per pg than average
I think there is something wrong with the cephfs_data pool.
I created a new pool "cephfs_data2" and copied the data from the
"cephfs_data" pool to the "cephfs_data2" pool using this command:
$ rados cppool cephfs_data cephfs_data2
$ ceph df detail
RAW STORAGE:
CLASS SIZE AVAIL
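A quick sanity check after such a copy (a sketch, assuming only the pool names used above) is to compare object counts and PG counts of both pools:

# Sketch: compare the copied pool with the original.
# "rados df" lists per-pool object counts and sizes; "ceph osd pool get" shows pg_num.
$ rados df | grep cephfs_data
$ ceph osd pool get cephfs_data pg_num
$ ceph osd pool get cephfs_data2 pg_num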
the situation is:
health: HEALTH_WARN
1 pools have many more objects per pg than average
$ ceph health detail
MANY_OBJECTS_PER_PG 1 pools have many more objects per pg than average
pool cephfs_data objects per pg (315399) is more than 1227.23 times
cluster average (257)
$ ceph df
RAW
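In case it is useful, that warning is about the objects-per-PG skew of the pool; the usual ways to clear it are to give cephfs_data more PGs or to let the autoscaler pick a value (a sketch; the pg_num value 256 is only an illustration, not a recommendation for this cluster):

# Sketch: reduce the objects-per-PG skew of cephfs_data.
$ ceph osd pool autoscale-status                       # see what the autoscaler would choose
$ ceph osd pool set cephfs_data pg_autoscale_mode on   # Nautilus+: let it adjust pg_num itself
# or raise pg_num manually (256 is just an example value):
$ ceph osd pool set cephfs_data pg_num 256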
I executed the commands from above again ("Recovery from missing
metadata objects") and now the MDS daemons start.
Still the same situation as before :(
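For reference, the steps behind that doc section boil down to roughly the following (a sketch based on the linked disaster-recovery page; it assumes the data pool is cephfs_data and that the filesystem is taken offline first, so double-check the docs before running any of it):

# Sketch of the "Recovery from missing metadata objects" procedure
# (data pool name assumed to be cephfs_data; run only with the fs offline).
$ cephfs-data-scan init
$ cephfs-data-scan scan_extents cephfs_data
$ cephfs-data-scan scan_inodes cephfs_data
$ cephfs-data-scan scan_links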
On 14.01.20 at 22:36, Oskar Malnowicz wrote:
> I just restarted the MDS daemons and now they crash during boot.
>
>
; table 0
-1> 2020-01-14 22:33:17.912 7fc9ba6a7700 -1 /build/ceph-14.2.5/src/mds/journal.cc: In function 'virtual void ESession::replay(MDSRank*)' thread 7fc9ba6a7700 time 2020-01-14 22:33:17.912135
/build/ceph-14.2.5/src/mds/journal.cc: 1655: FAILED ceph_assert(g_conf()->mds_wipe_sessions)
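That assert fires while replaying an ESession journal event, and per the same disaster-recovery page the usual way past it is to wipe the MDS session table; the assert itself checks the mds_wipe_sessions option, which can be enabled temporarily. A sketch (only with the MDS stopped, and set the option back to false afterwards):

# Sketch: reset the MDS session table (from the disaster-recovery docs).
$ cephfs-table-tool all reset session
# The assert checks g_conf()->mds_wipe_sessions, so it can also be bypassed
# temporarily via that option (remember to turn it off again afterwards):
$ ceph config set mds mds_wipe_sessions true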
>
> On Tue, Jan 14, 2020 at 11:58 AM Oskar Malnowicz
> wrote:
>> As Florian already wrote, `du -hc` shows a total usage of 31G, but `ceph
>> df` shows a usage of 2.1 TiB
>>
>> # du -hs
>> 31G
>>
>> # ceph df
>> cephfs_data 6 2.1 TiB
As Florian already wrote, `du -hc` shows a total usage of 31G, but `ceph
df` shows a usage of 2.1 TiB
# du -hs
31G
# ceph df
POOL            ID    STORED    OBJECTS    USED       %USED    MAX AVAIL
cephfs_data      6    2.1 TiB     2.48M    2.1 TiB    25.00      3.1 TiB
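One way to see whether that space is orphaned objects left behind in the data pool (a sketch; the object listing can take a while on a large pool):

# Sketch: count the objects actually present in the data pool and compare
# with what "ceph df" reports for it.
$ rados -p cephfs_data ls | wc -l
$ ceph df detail | grep cephfs_data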
On 14.01.20 at 20:44, Patrick Donnelly wrote:
> On Tue, Jan 14, 2020 at 11:40 AM Oskar Malnowicz wrote:
Do you have any other ideas?
On 14.01.20 at 20:32, Patrick Donnelly wrote:
> On Tue, Jan 14, 2020 at 11:24 AM Oskar Malnowicz
> wrote:
>> $ ceph daemon mds.who flush journal
>> {
>> "message": "",
>> "return_code": 0
>> }
> ... deleted inodes from the data pool so you can retry deleting the files:
> https://docs.ceph.com/docs/master/cephfs/disaster-recovery-experts/#recovery-from-missing-metadata-objects
>
>
> On Tue, Jan 14, 2020 at 9:30 AM Oskar Malnowicz
> wrote:
>> Hello Patrick,
>>
>>
Hello Patrick,
"purge_queue": {
"pq_executing_ops": 0,
"pq_executing": 0,
"pq_executed": 5097138
},
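Those counters come from the MDS admin socket; a minimal way to pull just that section (a sketch; the daemon name mds.a and the use of jq are assumptions, not taken from this cluster):

# Sketch: dump only the purge_queue counters ("mds.a" is a placeholder for
# the local MDS daemon name; jq merely filters the JSON output).
$ ceph daemon mds.a perf dump | jq '.purge_queue'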
We already restarted the MDS daemons, but no change.
There are no other health warnings than the one Florian already
mentioned.
cheers Oskar