Re: [ceph-users] cephfs kernel client io performance decreases extremely

2019-12-29 Thread renjianxinlover
Hi Stefan,
  could you please provide further guidance?
Brs


renjianxinlover
renjianxinlo...@163.com
On 12/28/2019 21:44, renjianxinlover wrote:
Sorry, what I said before was unclear.
Currently, my MDS is running on the same node as certain OSDs, where an SSD drive
serves as the cache device.


renjianxinlover
renjianxinlo...@163.com
On 12/28/2019 15:49, Stefan Kooman wrote:
Quoting renjianxinlover (renjianxinlo...@163.com):
Hi Nathan, thanks for your quick reply!
The command 'ceph status' outputs a warning, including about ten clients failing to
respond to cache pressure;
in addition, on the MDS node, 'iostat -x 1' shows the drive IO usage of the MDS within
five seconds as follows:

You should run this iostat -x 1 on the OSD nodes ... the MDS is not doing
any IO in and of itself as far as Ceph is concerned.
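For example, checking drive utilization across the OSD nodes and mapping a busy device
back to its OSD could look like the sketch below (host and device names such as osd1-osd3
are placeholders, not taken from this thread; adjust to your cluster):

    # run iostat on every OSD host to spot saturated data devices
    for host in osd1 osd2 osd3; do        # hypothetical host names
        ssh "$host" iostat -x 1 5
    done

    # on the busy host: which OSD lives on which device
    ceph-volume lvm list

    # or ask the cluster what a given OSD reports about its host and devices
    ceph osd metadata 0 | grep -E '"hostname"|"devices"'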

Gr. Stefan


--
| BIT BV  https://www.bit.nl/    Kamer van Koophandel 09090351
| GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl


Re: [ceph-users] cephfs kernel client io performance decreases extremely

2019-12-28 Thread renjianxinlover
Sorry, what I said before was unclear.
Currently, my MDS is running on the same node as certain OSDs, where an SSD drive
serves as the cache device.


renjianxinlover
renjianxinlo...@163.com
On 12/28/2019 15:49, Stefan Kooman wrote:
Quoting renjianxinlover (renjianxinlo...@163.com):
Hi Nathan, thanks for your quick reply!
The command 'ceph status' outputs a warning, including about ten clients failing to
respond to cache pressure;
in addition, on the MDS node, 'iostat -x 1' shows the drive IO usage of the MDS within
five seconds as follows:

You should run this iostat -x 1 on the OSD nodes ... the MDS is not doing
any IO in and of itself as far as Ceph is concerned.

Gr. Stefan


--
| BIT BV  https://www.bit.nl/    Kamer van Koophandel 09090351
| GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl


Re: [ceph-users] cephfs kernel client io performance decreases extremely

2019-12-27 Thread renjianxinlover
Hi Nathan, thanks for your quick reply!
The command 'ceph status' outputs a warning, including about ten clients failing to
respond to cache pressure;
in addition, on the MDS node, 'iostat -x 1' shows the drive IO usage of the MDS within
five seconds as follows:
Device:  rrqm/s  wrqm/s      r/s     w/s     rkB/s    wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
sda        0.00    0.00  6400.00    0.00  49992.00     0.00    15.62     0.96   0.15    0.15    0.00   0.08  51.60

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.80    0.00    1.09    0.65    0.00   94.47

Device:  rrqm/s  wrqm/s      r/s     w/s     rkB/s    wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
sda        0.00    0.00  2098.00    0.00  16372.00     0.00    15.61     0.28   0.14    0.14    0.00   0.11  23.60

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.93    0.00    1.28    1.40    0.00   93.39

Device:  rrqm/s  wrqm/s      r/s     w/s     rkB/s    wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
sda        0.00    0.00  4488.00    0.00  35056.00     0.00    15.62     0.60   0.13    0.13    0.00   0.10  42.80

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.96    0.00    0.86    1.15    0.00   94.03

Device:  rrqm/s  wrqm/s      r/s     w/s     rkB/s    wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
sda        0.00    0.00  3666.00    6.00  28768.00    28.00    15.68     0.50   0.14    0.14    0.67   0.10  35.60

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.72    0.00    0.27    0.04    0.00   96.97

Device:  rrqm/s  wrqm/s      r/s     w/s     rkB/s    wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
sda        0.00    0.00    14.00   14.00    108.00    80.00    13.43     0.01   0.29    0.29    0.29   0.29   0.80


Any clue?


Brs


renjianxinlover
renjianxinlo...@163.com
On 12/27/2019 00:04, Nathan Fish wrote:
I would start by viewing "ceph status", checking drive IO with "iostat -x 1
/dev/sd{a..z}", and watching the CPU/RAM usage of the active MDS. If "ceph status" warns
that the MDS cache is oversized, that may be an easy fix.
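As a rough sketch of that "easy fix" (the daemon name and the new limit below are
assumptions, not values from this thread), one could compare the MDS cache usage against
its limit and raise the limit if the host has spare RAM:

    # on the MDS host: current cache usage
    ceph daemon mds.<your-mds-name> cache status

    # current limit, in bytes
    ceph daemon mds.<your-mds-name> config get mds_cache_memory_limit

    # raise the limit cluster-wide, e.g. to 4 GiB (value in bytes)
    ceph config set mds mds_cache_memory_limit 4294967296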



On Thu, Dec 26, 2019 at 7:33 AM renjianxinlover  wrote:

Hello,
   Recently, after deleting some filesystem data in a small-scale Ceph cluster,
some clients' IO performance became bad, especially latency. For example, opening
a tiny text file with vim may take nearly twenty seconds. I am not clear
about how to diagnose the cause; could anyone give some guidance?


Brs
renjianxinlover
renjianxinlo...@163.com


[ceph-users] cephfs kernel client io performance decreases extremely

2019-12-26 Thread renjianxinlover
Hello,
   Recently, after deleting some filesystem data in a small-scale Ceph cluster,
some clients' IO performance became bad, especially latency. For example, opening
a tiny text file with vim may take nearly twenty seconds. I am not clear
about how to diagnose the cause; could anyone give some guidance?
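As a possible first pass at diagnosing this (a sketch only; the mount path and MDS name
below are placeholders), one might compare client-side latency with what the MDS itself
reports:

    # from a slow client: time a pure metadata operation
    time stat /mnt/cephfs/some/tiny/file.txt    # hypothetical path

    # overall filesystem and cluster health, including slow-request warnings
    ceph fs status
    ceph health detail

    # live per-second MDS counters (request rate, reply latency), on the MDS host
    ceph daemonperf mds.<your-mds-name>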


Brs
renjianxinlover
renjianxinlo...@163.com


[ceph-users] ceph mds/db transfer

2019-01-28 Thread renjianxinlover
Hi, professor,
   Recently, I intend to make a big adaptation to our local small-scale Ceph
cluster. The job mainly includes two parts:
(1) MDS metadata: switch the metadata storage medium to SSD.
(2) OSD BlueStore WAL: switch the WAL storage medium to SSD.
   Now, we are doing some research and testing, but we have doubts and concerns about
cluster stability and availability.
   So, could you please guide us with an outlined solution or steps?
   Thanks very much!
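For reference, a minimal sketch of the kind of commands such a change involves (this is an
outline under assumptions, not a tested procedure: it presumes the new drives carry the
"ssd" device class, the metadata pool is named cephfs_metadata, and the WAL moves by
redeploying OSDs with ceph-volume; please verify on a test cluster first):

    # (1) pin the CephFS metadata pool to SSD-backed OSDs via a CRUSH rule
    ceph osd crush rule create-replicated ssd-meta default host ssd
    ceph osd pool set cephfs_metadata crush_rule ssd-meta   # data migrates gradually

    # (2) put the BlueStore DB/WAL on SSD when (re)deploying an OSD
    ceph-volume lvm create --bluestore --data /dev/sdb \
        --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2   # hypothetical devices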
Ren   
Brs




[ceph-users] cephfs performance degraded very fast

2019-01-22 Thread renjianxinlover
Hi,
   At some point, due to cache pressure or a caps release failure, client apps' mounts
got stuck.
   My use case is a Kubernetes cluster with automatic kernel client mounts on the nodes.
   Is anyone facing the same issue, or does anyone have a related solution?
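In case it helps, a hedged sketch of how one might inspect which clients hold many caps
and, as a last resort, evict a stuck session (the daemon name and session id are
placeholders):

    # on the active MDS host: list client sessions and their caps counts
    ceph daemon mds.<your-mds-name> session ls

    # cache and caps pressure on the MDS
    ceph daemon mds.<your-mds-name> cache status

    # last resort: evict a stuck client session (id from 'session ls' output)
    ceph tell mds.0 client evict id=<session-id>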
Brs