Today I was surprised to find our cluster in HEALTH_WARN, and
searching the documentation was no help at all.

Does anybody have an idea how to cure the dreaded "failing to respond
to cache pressure" message? As I understand it, it tells me that a
client is not responding to the MDS's request to prune its cache, but
I have no idea what is causing the problem or how to cure it.
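
For what it's worth, here is what I was planning to check next. The
admin socket on the active MDS (HOST7 in my case) should show how many
caps each client session holds and how full the cache is. This is just
a sketch, assuming the daemon name matches the mdsmap and that these
admin socket commands are available on hammer:

# ceph daemon mds.HOST7 session ls    # per-client sessions, incl. num_caps held
# ceph daemon mds.HOST7 perf dump     # "mds" section: inodes in cache vs. limit

I would expect the clients named in the warning to be the ones holding
a large num_caps that never shrinks.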

I'm using the kernel CephFS driver on kernel 4.4.14.
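
On the client side, the kernel driver exposes its cap usage through
debugfs, and the only workaround I have found mentioned so far is
forcing the kernel to drop its dentry/inode caches so the caps can be
released. A sketch, assuming debugfs is mounted at /sys/kernel/debug:

# cat /sys/kernel/debug/ceph/*/caps          # caps held by this mount (total/avail/used)
# sync; echo 2 > /proc/sys/vm/drop_caches    # drop dentries+inodes; caps should follow

I'd rather understand the root cause than keep dropping caches, though.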

# ceph --version
ceph version 0.94.7 (d56bdf93ced6b80b07397d57e3fa68fe68304432)

# ceph -s
    cluster a2fba9c1-4ca2-46d8-8717-a8e42db14bb1
     health HEALTH_WARN
            mds0: Client HOST1 failing to respond to cache pressure
            mds0: Client HOST2 failing to respond to cache pressure
            mds0: Client HOST3 failing to respond to cache pressure
            mds0: Client HOST4 failing to respond to cache pressure
     monmap e2: 5 mons at 
{HOST10=1.2.3.10:6789/0,HOST5=1.2.3.5:6789/0,HOST6=1.2.3.6:6789/0,HOST7=1.2.3.7:6789/0,HOST11=1.2.3.11:6789/0}
            election epoch 188, quorum 0,1,2,3,4 HOST10,HOST5,HOST6,HOST7,HOST11
     mdsmap e777: 1/1/1 up {0=HOST7=up:active}, 2 up:standby
     osdmap e8293: 61 osds: 61 up, 60 in
      pgmap v3149484: 6144 pgs, 3 pools, 706 GB data, 650 kobjects
            1787 GB used, 88434 GB / 90264 GB avail
                6144 active+clean
  client io 31157 kB/s rd, 1567 kB/s wr, 647 op/s
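
If the answer turns out to be simply that the MDS cache is too small
for our working set, would it be safe to bump mds_cache_size (hammer's
default is 100000 inodes, I believe) on a live cluster? Something like
the following, with the value only a placeholder:

# ceph tell mds.0 injectargs '--mds_cache_size 200000'

Or would that just mask the real problem of clients not releasing caps?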