I'm experiencing poor AFS performance on a Sun StorEdge 3511 Fibre Channel-to-SATA array, running the OpenAFS 1.4.1 server on SPARC Solaris 9 9/05 HW.
At first I thought that having UFS logging disabled was the culprit, but I have since enabled UFS logging (and I am using the namei fileserver), and performance still stinks: it took 1.5 hours to move a 3.2 GB volume to this server, which works out to roughly 0.6 MB/s. Things seem fine except on the Fibre Channel disks. Here is a snippet from "iostat 2":

       tty          sd1            ssd0           ssd1           ssd2          cpu
 tin tout   kps tps serv   kps tps serv   kps tps serv   kps tps serv  us sy wt id
   3   41     0   0    0   575  82   60     0   0    0     0   0    0   0  0 26 73
  21  125     0   0    0   544  78   78     0   0    0     0   0    0   0  1 26 72
   0   40     0   0    0   663  96   67     0   0    0     0   0    0   0  1 23 76
   0   40     0   0    0   442  50   23     0   0    0     0   0    0   0  0 25 75
  21   47     0   0    0   398  50   23     0   0    0     0   0    0   0  0 25 75
  20  102     0   0    0   388  51   24  1558   6   41     0   0    0   0  2 28 69
   0   40     0   0    0   425  54   24 45657  56  140     0   0    0   1 13 62 25
   0   41     3   1    9   735  57   22 46988  60  136     0   0    0   0 14 57 28
   0   41     0   0    0   826  66   23 44894  63  138     0   0    0   0 14 59 27
   0   41     0   0    0   951  73   24 46346  61  137     0   0    0   0 14 59 26
   0   66     1   1    7   812  73   35 10235  48  116     0   0    0   0  6 36 58
   0   41     0   0    0  1026  89   31   228   2   21     0   0    0   0  2 24 74

kps on ssd0 (/vicepa) stays around 300-400. The ~44000 kps on ssd1 is from when I ran "dd if=/dev/zero of=/vicepc/dummy" against a different disk on the same array, so the array itself can clearly sustain much higher sequential throughput.

My bonnie++ performance numbers for /vicepa are here:
http://www.coe.uncc.edu/~jwedgeco/bonnie.html

What could AFS be doing that causes the performance to stink?

Thanks,
Jason

_______________________________________________
OpenAFS-info mailing list
[email protected]
https://lists.openafs.org/mailman/listinfo/openafs-info
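[Editor's note: a quick sanity check of the figures above, not part of the original post. On Solaris, iostat's kps/tps/serv columns are KB transferred per second, transfers per second, and average service time in milliseconds, so the average request size and the implied throughput can be computed directly from the snippet:]

    # Sanity-check the iostat figures quoted above.
    # kps = KB/s, tps = transfers/s (Solaris iostat), so kps/tps is the
    # average request size in KB.

    def avg_request_kb(kps, tps):
        """Average I/O request size in KB for one iostat interval."""
        return kps / tps

    # ssd0 (/vicepa) during the volume move: ~400 KB/s at ~60 transfers/s
    afs_req = avg_request_kb(400, 60)       # ~6.7 KB per I/O: small writes

    # ssd1 during the sequential dd run: ~45000 KB/s at ~56 transfers/s
    dd_req = avg_request_kb(45000, 56)      # ~800 KB per I/O: large streaming writes

    # Effective rate of the volume move: 3.2 GB in 1.5 hours
    move_mb_s = 3.2 * 1024 / (1.5 * 3600)   # ~0.6 MB/s, consistent with ssd0's kps

    print(f"AFS avg request:  {afs_req:.1f} KB")    # 6.7 KB
    print(f"dd avg request:   {dd_req:.1f} KB")     # 803.6 KB
    print(f"volume move rate: {move_mb_s:.2f} MB/s")  # 0.61 MB/s

[The contrast between ~7 KB requests with 20-80 ms service times on ssd0 and ~800 KB requests during the dd run suggests the fileserver is issuing many small writes, a pattern SATA arrays handle far worse than streaming I/O; that interpretation is the editor's, not the poster's.]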
