Hi,

I'm seeing the same issue running MDS version 11.2.0 with 4.10 kernel
clients.
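
For anyone else triaging this, the versions on each side can be checked
with the stock commands below (the MDS rank here is illustrative; adjust
to your cluster):

    ceph tell mds.0 version    # report the version of the MDS at rank 0
    uname -r                   # kernel version, run on each kernel-client host
    ceph-fuse --version        # userspace client version, if ceph-fuse is used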

Regards
Jose

On 10/05/17 at 09:11, gjprabu wrote:
> Hi John,
>
>     Thanks for your reply. We are using ceph version 10.2.2 for both the
> client and the MDS.
>
> Regards
> Prabu GJ
>
>
> ---- On Wed, 10 May 2017 12:29:06 +0530, John Spray
> <[email protected]> wrote ----
>
>     On Thu, May 4, 2017 at 7:28 AM, gjprabu <[email protected]> wrote:
>     > Hi Team,
>     >
>     > We are running CephFS with 5 OSDs, 3 mons and 1 MDS. There is a
>     > health warning, "failing to respond to cache pressure". Kindly
>     > advise how to fix this issue.
>
>     This is usually due to buggy old clients, and occasionally due to a
>     buggy old MDS. What client and MDS versions are you using?
>
>     John
>
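
As a follow-up to John's question, the MDS admin socket can also show which
clients are actually holding capabilities and how many; the daemon name below
is taken from the fsmap in the status output, so substitute your own:

    # run on the host carrying the active MDS (intcfs-osd1 in this cluster)
    ceph daemon mds.intcfs-osd1 session ls   # per-client sessions, incl. num_caps
    ceph daemon mds.intcfs-osd1 perf dump    # counters, see the "mds" section

On pre-Luminous releases like these, the cache limit is the inode-count
option mds_cache_size (default 100000); raising it can quiet the warning,
but as John says, updating buggy old clients is the real fix.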
>     >
>     >
>     > cluster b466e09c-f7ae-4e89-99a7-99d30eba0a13
>     > health HEALTH_WARN
>     > mds0: Client integ-hm8-1.csez.zohocorpin.com failing to respond
>     > to cache pressure
>     > mds0: Client integ-hm5 failing to respond to cache pressure
>     > mds0: Client integ-hm9 failing to respond to cache pressure
>     > mds0: Client integ-hm2 failing to respond to cache pressure
>     > monmap e2: 3 mons at
>     > {intcfs-mon1=192.168.113.113:6789/0,intcfs-mon2=192.168.113.114:6789/0,intcfs-mon3=192.168.113.72:6789/0}
>     > election epoch 16, quorum 0,1,2 intcfs-mon3,intcfs-mon1,intcfs-mon2
>     > fsmap e79409: 1/1/1 up {0=intcfs-osd1=up:active}, 1 up:standby
>     > osdmap e3343: 5 osds: 5 up, 5 in
>     > flags sortbitwise
>     > pgmap v13065759: 564 pgs, 3 pools, 5691 GB data, 12134 kobjects
>     > 11567 GB used, 5145 GB / 16713 GB avail
>     > 562 active+clean
>     > 2 active+clean+scrubbing+deep
>     > client io 8090 kB/s rd, 29032 kB/s wr, 25 op/s rd, 129 op/s wr
>     >
>     >
>     > Regards
>     > Prabu GJ
>     >

