Re: I/O delays

2010-10-20 Thread Devin Bougie
Just in case anyone is interested in the resolution to this problem: it appears 
that the poor drive performance was caused by vibrations from the chassis fans. 
After replacing nearly every component in the system (hard drive, motherboard, 
SATA cable, memory, and CPUs) to no avail, we discovered that our chassis had 
a mix of 6.8W and 2.8W fans.  Once all five chassis fans were of the 2.8W 
variety, our disk performance problems disappeared.

For what it's worth, a quick search produces a few reports of vibrations 
degrading disk performance. For example:
http://www.dtc.umn.edu/publications/reports/2005_08.pdf
http://etbe.coker.com.au/2009/04/22/vibration-strange-sata-performance/

Finally, to quantify the improvement in disk speed, here are the results 
(showing the average and range) of each test from three runs of bonnie++:
bonnie++ -d /mnt/scratch/test_dab66 -n 0 -s 98304 -f

ORIGINAL FAN CONFIGURATION (two 2.8W and three 6.8W)
Writes AVG = 5348; 4327 - 6369
Rewrites AVG = 2489.5; 2284 - 2695
Reads AVG = 9575.5; 7778 - 11373
Random Seeks AVG = 68.4; 65.3 - 71.5

CORRECT FAN CONFIGURATION (five 2.8W)
Writes AVG = 53901.333; 43143 - 60430
Rewrites AVG = 27684; 20178 - 31942
Reads AVG = 75541.333; 65704 - 80717
Random Seeks AVG = 181.333; 117.9 - 218.6
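For a sense of scale, the write-throughput gain works out to roughly tenfold. A quick back-of-the-envelope check, using the (rounded) averages quoted above — this is an illustrative calculation, not part of the original benchmark runs:

```shell
# Ratio of average write throughput after vs. before the fan fix,
# taken from the bonnie++ averages above (fractional parts dropped).
before=5348
after=53901
speedup=$(awk -v a="$before" -v b="$after" 'BEGIN { printf "%.1f", b / a }')
echo "write speedup: ${speedup}x"
```

The rewrite, read, and seek numbers all show a similar factor, which is consistent with the drive having been throttled by vibration rather than by any single bad component.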

Many thanks for everyone's time and help.

Sincerely,
Devin


Re: I/O delays

2010-09-09 Thread Devin Bougie
Hi Stijn,

Thank you very much for your reply.

On Sep 9, 2010, at 2:13 AM, Stijn De Weirdt wrote:
> so there is only 1 disk in your system according to dmesg (a 500GB SATA
> disk) and dd reports speeds > 1GB/s. you are either the lucky owner of a
> wonder SATA disk or you are measuring linux caching ;) 

Yes, indeed =).

> when you run the tests, can you also run
> watch grep Dirty /proc/meminfo
> and check if it starts increasing, what the maximum is and when it
> starts decreasing.

It appears to start decreasing at 1GB.  With /proc/sys/vm/dirty_ratio set to 
40, the maximum I've seen is 1030708 kB.  With it set to 2, the max I've seen 
is 991704 kB.

> if you have default SL settings  /proc/sys/vm/dirty_ratio is 40, so
> that's 19GB of dirty memory allowed. with your dd you won't reach that,
> but default /proc/sys/vm/dirty_expire_centisecs is 3000, so after 30
> seconds, the system will start to write the dirty memory away, which is
> probably when the "reduced performance" starts to hit you.
> 
> you can safely lower /proc/sys/vm/dirty_ratio to 2 (which is still
> almost 1GB of dirty memory) to get more stable performance (i think
> recent kernels do this automatically based on total memory size)
> 
> and if you want to measure your disk performance, use iozone/bonnie/IOR
> with sync options (or add 'sync' to your timing stats)

The system is currently in use and I haven't yet had a chance to run any real 
disk benchmarks.  For what it's worth, here's the dd output with a subsequent 
"time sync".  During the sync, the system as a whole becomes sluggish (as we 
only have a single drive used both for the OS and user data) and it takes 
several seconds for commands like "top" to return.  We also see comparable 
behavior when using ext3 or XFS.

[da...@lnx4103 ~]% time dd if=/dev/zero of=/mnt/scratch/test bs=1M count=1K ; 
time sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.928643 seconds, 1.2 GB/s

real    0m1.077s
user    0m0.000s
sys     0m1.068s

real    1m35.341s
user    0m0.000s
sys     0m0.182s
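To fold the flush into a single timed command rather than running a separate "time sync", something like the following works. This is just a sketch writing a small temporary file, not the 1GB test above:

```shell
# Write a small file and time it including the final sync, so the page
# cache can't make the write look faster than the disk really is.
testfile=$(mktemp)
time ( dd if=/dev/zero of="$testfile" bs=1M count=8 2>/dev/null && sync )
size=$(wc -c < "$testfile")
rm -f "$testfile"
echo "wrote ${size} bytes"
```

With the sync included, the reported wall-clock time reflects actual disk throughput instead of memory bandwidth.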

> btw, try dstat (sort of iostat and vmstat combined, with colors ;)

Yes, dstat is very nice!

We will let you know whether or not things improve after replacing the disk.

Thanks again,
Devin


Re: I/O delays

2010-09-08 Thread Stijn De Weirdt
hi devin,

so there is only 1 disk in your system according to dmesg (a 500GB SATA
disk) and dd reports speeds > 1GB/s. you are either the lucky owner of a
wonder SATA disk or you are measuring linux caching ;) 

when you run the tests, can you also run
watch grep Dirty /proc/meminfo
and check if it starts increasing, what the maximum is and when it
starts decreasing.
if you have default SL settings  /proc/sys/vm/dirty_ratio is 40, so
that's 19GB of dirty memory allowed. with your dd you won't reach that,
but default /proc/sys/vm/dirty_expire_centisecs is 3000, so after 30
seconds, the system will start to write the dirty memory away, which is
probably when the "reduced performance" starts to hit you.

you can safely lower /proc/sys/vm/dirty_ratio to 2 (which is still
almost 1GB of dirty memory) to get more stable performance (i think
recent kernels do this automatically based on total memory size)
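the arithmetic behind those two thresholds, for a 48GB box like devin's (illustrative only — the kernel computes this against dirtyable memory, so real limits are slightly lower):

```shell
# Dirty-memory limits implied by vm.dirty_ratio on a 48GB machine
# (rough arithmetic; matches the 19GB / ~1GB figures in the text)
mem_kb=$((48 * 1024 * 1024))                      # 48GB expressed in kB
gb_at_40=$(( mem_kb * 40 / 100 / 1024 / 1024 ))   # dirty_ratio=40
mb_at_2=$(( mem_kb * 2 / 100 / 1024 ))            # dirty_ratio=2
echo "dirty_ratio=40 allows ${gb_at_40}GB dirty; dirty_ratio=2 allows ${mb_at_2}MB"
```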

and if you want to measure your disk performance, use iozone/bonnie/IOR
with sync options (or add 'sync' to your timing stats)

btw, try dstat (sort of iostat and vmstat combined, with colors ;)


stijn


Re: I/O delays

2010-09-08 Thread Devin Bougie
Hi Konstantin,

Thank you for the reply.  Here is the output of smartctl.  We will see if a 
different drive model helps.

Sincerely,
Devin

[r...@lnx4103 ~]# smartctl -a /dev/sda
smartctl version 5.38 [x86_64-redhat-linux-gnu] Copyright (C) 2002-8 Bruce Allen
Home page is http://smartmontools.sourceforge.net/

=== START OF INFORMATION SECTION ===
Device Model:     WDC WD5002ABYS-02B1B0
Serial Number:    WD-WCASYDXX
Firmware Version: 02.03B03
User Capacity:    500,107,862,016 bytes
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   8
ATA Standard is:  Exact ATA specification draft version not indicated
Local Time is:    Wed Sep  8 23:34:17 2010 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x84) Offline data collection activity
was suspended by an interrupting 
command from host.
Auto Offline Data Collection: Enabled.
Self-test execution status:  (   0) The previous self-test routine completed
without error or no self-test has ever 
been run.
Total time to complete Offline 
data collection: (9480) seconds.
Offline data collection
capabilities:(0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off 
support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities:(0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability:(0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine 
recommended polling time:(   2) minutes.
Extended self-test routine
recommended polling time:( 112) minutes.
Conveyance self-test routine
recommended polling time:(   5) minutes.
SCT capabilities:  (0x303f) SCT Status supported.
SCT Feature Control supported.
SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   197   197   051    Pre-fail  Always       -       900
  3 Spin_Up_Time            0x0027   239   236   021    Pre-fail  Always       -       1050
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       42
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       401
 10 Spin_Retry_Count        0x0032   100   253   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       40
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       39
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       2
194 Temperature_Celsius     0x0022   121   112   000    Old_age   Always       -       26
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   100   253   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   100   253   000    Old_age   Offline      -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]


SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testi

Re: I/O delays

2010-09-08 Thread Devin Bougie
Hi Steve,

On Sep 8, 2010, at 5:03 PM, Steven Timm wrote:
> Devin--what is the output of the command "dmesg"
> Are there any kernel traces or bugs in that output?

Unfortunately I don't see anything unusual in syslog or the output of dmesg.

Thanks,
Devin



[Attachment: dmesg.log]


Re: I/O delays

2010-09-08 Thread Konstantin Olchanski
On Wed, Sep 08, 2010 at 04:42:43PM -0400, Devin Bougie wrote:
> Hi, All.  We are seeing periodic I/O delays on a new large compute node ...


Hi, you do not say what your disks are (model number as reported by smartctl),
but I have seen the same symptoms you describe with some WDC 2TB "advanced
format" disks.  The problem vanished with the next shipment of these 2TB
disks, which also happened to have a slightly different model number.

So I say: if you see disk delays, try different disks (different maker,
different model, different capacity).


-- 
Konstantin Olchanski
Data Acquisition Systems: The Bytes Must Flow!
Email: olchansk-at-triumf-dot-ca
Snail mail: 4004 Wesbrook Mall, TRIUMF, Vancouver, B.C., V6T 2A3, Canada


Re: I/O delays

2010-09-08 Thread Steven Timm

Devin--what is the output of the command "dmesg"
Are there any kernel traces or bugs in that output?

Steve


On Wed, 8 Sep 2010, Devin Bougie wrote:


Hi, All.  We are seeing periodic I/O delays on a new large compute node using 
two Xeon X5670 CPU's (hyper-threaded for a total of 24 processors) with 48GB 
memory.  The system is running the latest kernel on a fully updated x86_64 
SL5.5.  So far my attempts at changing schedulers and BIOS configurations have 
not helped.  This is a unique system for us, and I have not been able to 
reproduce this on any other x86_64 SL5.5 systems.  Various benchmarks we've run 
(most extensively using Mathematica) show the system performing very well.  So 
far we've only seen the problem when performing disk I/O.

Please let me know if there is any more information I can provide, and any 
suggestions would be greatly appreciated.

Many thanks,
Devin

--

Here is an example of how we can reproduce the delays by repeatedly writing 
modest amounts of data.

--
[da...@lnx4103 ~]% time dd if=/dev/zero of=/mnt/scratch/test bs=1M count=1K
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.914825 seconds, 1.2 GB/s

real    0m2.037s
user    0m0.000s
sys     0m1.050s
[da...@lnx4103 ~]% time dd if=/dev/zero of=/mnt/scratch/test bs=1M count=1K
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.903858 seconds, 1.2 GB/s

real    0m1.129s
user    0m0.001s
sys     0m1.125s
[da...@lnx4103 ~]% time dd if=/dev/zero of=/mnt/scratch/test bs=1M count=1K
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.923554 seconds, 1.2 GB/s

real    0m1.141s
user    0m0.001s
sys     0m1.138s
[da...@lnx4103 ~]% time dd if=/dev/zero of=/mnt/scratch/test bs=1M count=1K
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.911595 seconds, 1.2 GB/s

real    1m27.200s
user    0m0.001s
sys     0m1.148s
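An intermittent stall like the fourth run above is easier to catch when each run's wall-clock time is recorded in a loop. A rough sketch (small writes to a temporary file for illustration, rather than the 1GB writes to /mnt/scratch):

```shell
# Repeat a timed write-plus-sync and print per-run wall-clock seconds;
# an occasional much larger number is the stall showing up.
target=$(mktemp)
runs=0
for i in 1 2 3 4; do
  start=$(date +%s)
  dd if=/dev/zero of="$target" bs=1M count=4 2>/dev/null
  sync
  end=$(date +%s)
  echo "run $i: $(( end - start ))s"
  runs=$(( runs + 1 ))
done
rm -f "$target"
```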

--

Here is a look at "vmstat -S M 5" before, during, and after the above tests.

--
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
0  0  0  46937 35   118400 481   437  0  0 98  2  0
0  0  0  46937 35   118400 118 1017   85  0  0 100  0  0
0  0  0  46937 35   118400 0 5 1013   93  0  0 100  0  0
0  0  0  46937 35   118400 042 1017   84  0  0 100  0  0
0  1  0  46937 35   118400 1 2 1005   91  0  0 100  0  0
0  0  0  46937 35   118400 017 1016   90  0  0 100  0  0
0  0  0  46937 35   118400 0 5 1012   89  0  0 100  0  0
0  0  0  46937 35   118400 0 5 1010   85  0  0 100  0  0
0  1  0  46919 35   118600   59612 1071  224  0  0 99  1  0
0  0  0  46919 35   118600 070 1019   89  0  0 100  0  0
0  0  0  46919 35   118600 011 1015   98  0  0 100  0  0
0  0  0  46919 35   118600 0 7 1021  112  0  0 100  0  0
1  0  0  47780 3833700 031 1035  760  0  2 98  0  0
0  5  0  46929 38   113500 0 26431 1040  341  0  1 94  5  0
0  6  0  46986 38   108100 2  9840 1032  158  0  0 84 16  0
0  6  0  47054 39   10130065 13113 1052  186  0  0 85 15  0
0  5  0  47137 3994800 2 13006 1036  171  0  0 86 14  0
0  7  0  47208 3988100 0 13135 1035  161  0  0 86 14  0
0  6  0  47258 3983300 0  9736 1031  150  0  0 84 16  0
0  6  0  47300 3979300 0  8656 1025  131  0  0 83 17  0
0  5  0  47349 3975000 0 2 1027  569  0  0 89 11  0
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
0  3  0  47437 3966000 0 26781 1042 1372  0  0 88 12  0
0  5  0  47517 3958500 0 13008 1041  177  0  0 85 15  0
0  5  0  47571 3953100 0 13107 1030  148  0  0 83 17  0
0  5  0  47622 3948400 0  9734 1033  150  0  0 87 13  0
0  5  0  47668 3944000 0  6554 1029  134  0  0 87 12  0
0  5  0  47711 3939900 1  9832 1027  146  0  0 83 17  0
0  4  0  47772 3934000 0 13006 1036  149  0  0 85 15  0
0  5  0  47841 3927500 0 13110 1037  164  0  0 86 14  0
0  5  0  47909 3921200 0  6754 1037  310  0  0 87 13  0
0  1  0  46894 39   118600 0 11842 1031 1825  0  1 93  6  0
0  4  0  46892 39   118600 0 27131 1055  104  0  0 95  5  0
0  4  0  46893 39   118600 0  9834 1025  110  0  0 93  6  0
0  5