Hi,

On Mon, 23 Sep 2019, Koebbe, Brian wrote:
> Our cluster has a little over 100 RBDs.  Each RBD is snapshotted on a 
> typical "frequently"/hourly/daily/monthly schedule.
> A while back, a 4th monitor was temporarily added to the cluster, and it 
> took hours to synchronize with the other 3.
> While trying to figure out why that addition took so long, we discovered that 
> our monitors have what seems like a really large number of osd_snap keys:
> 
> ceph-monstore-tool /var/lib/ceph/mon/xxxxxx dump-keys | awk '{print $1}' | uniq -c
>     153 auth
>       2 config
>      10 health
>    1441 logm
>       3 mdsmap
>     313 mgr
>       1 mgr_command_descs
>       3 mgr_metadata
>     163 mgrstat
>       1 mkfs
>     323 mon_config_key
>       1 mon_sync
>       6 monitor
>       1 monitor_store
>      32 monmap
>     120 osd_metadata
>       1 osd_pg_creating
> 5818618 osd_snap
>   41338 osdmap
>     754 paxos
> 
> A few questions:
> 
> Could this be the cause of the slow addition/synchronization?

Probably!

> Is this apparently unbounded number of osd_snap keys expected?

Maybe.  Can you send me the output of 'ceph osd dump'?  Also, if you don't 
mind, could you rerun the dump above and grep out just the osd_snap keys, so 
I can see what they look like and whether they match the osd map contents?
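
Something like this should do it, reusing the monstore path from your 
command and just filtering on the first column of the dump-keys output:

  ceph-monstore-tool /var/lib/ceph/mon/xxxxxx dump-keys | awk '$1 == "osd_snap"'

Attaching that output (or just the first few hundred lines, if it is huge) 
would be plenty.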

Thanks!
sage


> If trimming/compacting them would help, how would one do that?
> 
> Thanks,
> Brian
> 