Thank you for your reply. Additional information is below.
GFS2 partition in question:
[root@apps03 ~]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/SAN5020-order      800G  435G  366G  55% /sanstorage/data0/images/order   <- the one that suffered the slowdown
/dev/mapper/SAN5020-SNPimage   3.0T  484G  2.6T  16% /sanstorage/data0/ShotNPrint/SNP_image
/dev/mapper/SAN5020-data0      1.3T  828G  475G  64% /sanstorage/data0/images/album

Normal nodes, copying a 200 MB file to the affected GFS2 partition:
[root@apps01 ~]# time cp 200m.test /sanstorage/data0/images/order/
real    0m1.073s
user    0m0.008s
sys     0m0.996s

[root@apps08 ~]# time cp 200m.test /sanstorage/data0/images/order/
real    0m5.046s
user    0m0.004s
sys     0m1.398s

Affected node, currently at roughly 1/30th the normal throughput
(200 MB in ~31 s is about 6.5 MB/s, versus ~190 MB/s on apps01).
I cannot reproduce the extreme slowdown on demand, as the problem
recurs stochastically, but it has not been rare this week.
[root@apps03 ~]# time cp 200m.test /sanstorage/data0/images/order
real    0m30.885s
user    0m0.006s
sys     0m6.348s
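
As a side note, the next time it recurs the same test can be repeated
independently of the page cache with dd and direct I/O (the target file
name below is just a placeholder):

dd if=/dev/zero of=/sanstorage/data0/images/order/ddtest.bin bs=1M count=200 oflag=direct   # O_DIRECT write, bypasses page cache
rm -f /sanstorage/data0/images/order/ddtest.bin                                             # clean up the test file

dd reports the achieved throughput on completion, which makes the
node-to-node comparison direct.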

Affected node, writing to different exports from the same SAN
storage (an IBM DS5020):
[root@apps03 ~]# time cp 200m.test /sanstorage/data0/ShotNPrint/SNP_image
real    0m2.353s
user    0m0.006s
sys     0m2.033s

[root@apps03 ~]# time cp 200m.test /sanstorage/data0/images/album
real    0m2.319s
user    0m0.010s
sys     0m1.798s

The connection topology is a single fibre pair from each node to a
central SAN switch in a star topology, with no redundant path. If
this were a hardware issue, writes to SAN5020-SNPimage and
SAN5020-data0 should be affected too, but that is not the case.
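
For completeness, the state of the single FC path on apps03 can be read
from sysfs (assuming the HBA driver exposes the standard fc_host
attributes):

cat /sys/class/fc_host/host*/port_state                      # expect "Online"
cat /sys/class/fc_host/host*/speed                           # negotiated link speed
cat /sys/class/fc_host/host*/statistics/link_failure_count   # should stay constant across runs

A port_state other than Online, or a climbing link_failure_count,
would point back at the fibre despite the per-export difference.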
