David Rees wrote:
> On Tue, Feb 26, 2008 at 4:39 PM, David Rees <[EMAIL PROTECTED]> wrote:
>>  So there you go. IMO, unless you are willing to overhaul your storage
>>  system or slightly increase the risk of data corruption (IMO,
>>  data=writeback instead of the default data=ordered should be a large
>>  gain for you and is very safe), you are going to continue to fall
>>  further behind in your nightly cleanup runs.
> 
> I forgot to mention, this link may be informative:
> 
> http://wiki.centos.org/HowTos/Disk_Optimization
> 
> But I think it covers most of the topics in this thread already.

I guess theory and testing that theory on your own system are two 
different things.

Only now have I really tested the different I/O schedulers to get some 
numbers. I changed the scheduler on the iSCSI target, as that is the 
device where the writes actually take place.
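
For anyone who wants to repeat this: the scheduler can be checked and 
switched at runtime through sysfs, roughly like below (this assumes a 
2.6 kernel and that the exported disk is sda on the target, as in the 
iostat output further down):

  # list the available schedulers; the active one is shown in square brackets
  cat /sys/block/sda/queue/scheduler

  # switch to another scheduler on the fly, e.g.
  echo anticipatory > /sys/block/sda/queue/scheduler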


After a short period of testing, this is how I would rank the 
schedulers (1 being better than 2; within each line the first item 
seemed only slightly better than the second, e.g. anticipatory was only 
slightly better than noop):

1. anticipatory, or noop
2. deadline, or cfq


Note: these results may not be very scientific, as the conditions were 
not constant the whole time (BackupPC was fetching data from hosts in 
the background).
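
For reference, the tables below are consecutive per-device iostat 
reports for the target's disk, sampled at roughly 10-second intervals 
(judging from the totals versus the per-second rates); a command along 
these lines produces this kind of output:

  # device-only report for sda, repeated every 10 seconds
  iostat -d sda 10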


1. The Anticipatory elevator introduces a controlled delay before 
dispatching the I/O, in an attempt to aggregate and/or re-order 
requests, improving locality and reducing disk seeks.
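
(While anticipatory is the active scheduler, its knobs appear under the 
iosched directory; antic_expire is the anticipation window mentioned 
above. Exact names and units may differ between kernel versions, so 
treat this as a pointer rather than a recipe:)

  # tunables exist only while anticipatory is the active scheduler
  ls /sys/block/sda/queue/iosched/
  cat /sys/block/sda/queue/iosched/antic_expire   # the anticipation window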

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             158,58      2846,31         0,00      28520          0

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             168,03      1572,03         0,00      15736          0

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             144,47      1207,11     15554,15      12216     157408

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              85,01       600,20     24815,98       6008     248408

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             119,28       279,72      1537,66       2800      15392

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             164,47      1370,06       598,00      13728       5992

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             151,55      1778,22         0,00      17800          0

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             189,51      1854,15         0,00      18560          0

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             118,36      1386,77     19005,73      14048     192528

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             164,84      1914,89      2836,36      19168      28392



2. The NOOP scheduler is a simple FIFO queue; it uses a minimal amount 
of CPU/instructions per I/O and performs only basic request merging 
before dispatching the I/O.
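
(If one of these turns out to be the clear winner, the choice can also 
be made the system-wide default at boot, instead of echoing into sysfs 
after every reboot: append elevator=<name> to the kernel line, which on 
CentOS lives in /boot/grub/grub.conf. Kernel version and root device 
below are just placeholders:)

  # /boot/grub/grub.conf -- kernel line with the default elevator set to noop
  kernel /vmlinuz-<version> ro root=<root-device> elevator=noop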

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             173,63      2318,48         0,00      23208          0

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             130,27       801,60     16715,28       8024     167320

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             160,04      1774,23        91,91      17760        920

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             117,18      1251,55         0,00      12528          0

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             166,43      1904,50         0,00      19064          0

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             168,36      1821,16         0,00      18248          0

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             187,11      2106,69         0,00      21088          0

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             133,80      1248,80     13886,40      12488     138864

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             128,10       980,80       529,60       9808       5296

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             178,11      2333,93       125,77      23456       1264



3. The Deadline elevator uses a deadline algorithm to minimize I/O 
latency for a given I/O request.
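
(Deadline's expiry times are tunable as well while it is active; 
read_expire/write_expire are the per-request deadlines in milliseconds, 
and writes_starved sets how many times reads are preferred over writes 
before a write batch is dispatched. Again, check the exact names on 
your kernel:)

  ls /sys/block/sda/queue/iosched/
  cat /sys/block/sda/queue/iosched/read_expire   # read deadline, in ms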

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             216,73      2408,76         0,00      24184          0

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             132,30      1426,40      2812,00      14264      28120

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             205,28      3076,37         0,00      30856          0

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             190,81      3359,84         0,00      33632          0

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             185,03      2435,93         0,00      24408          0

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             161,94      1783,02         0,00      17848          0

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             186,70      1896,00         0,00      18960          0

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             117,47       540,52      4150,10       5416      41584

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             115,20       779,20      1395,20       7792      13952

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             123,68       387,61      1509,69       3880      15112



4. CFQ maintains a scalable per-process I/O queue and attempts to 
distribute the available I/O bandwidth equally among all I/O requests. 
CFQ is well suited for mid-to-large multi-processor systems and for 
systems which require balanced I/O performance over multiple LUNs and 
I/O controllers.
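
(Worth noting: CFQ is the only one of the four that honours per-process 
I/O priorities, so with cfq you can also deprioritize BackupPC itself 
with ionice - but only on the box where cfq is actually the active 
scheduler, which over iSCSI means the initiator, not the target. A 
sketch, assuming the usual BackupPC process names and that your 
util-linux ships ionice:)

  # run the nightly cleanup with "idle" I/O priority (class 3)
  ionice -c3 -p $(pidof BackupPC_nightly)
  # or give the dumps the lowest best-effort priority (class 2, level 7)
  ionice -c2 -n7 -p $(pidof BackupPC_dump)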

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             214,21      2567,79       126,44      25832       1272

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             178,62      2579,02         0,00      25816          0

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             157,64      1870,13         0,00      18720          0

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             138,36      1677,52         0,00      16792          0

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             117,38      1397,00      3396,60      13984      34000

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             143,91      1688,62         0,00      16920          0

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             149,50      1868,00       252,00      18680       2520

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             126,67      1469,73       979,82      14712       9808

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             135,86       501,90      1353,05       5024      13544

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             132,87       401,20      1293,11       4016      12944





-- 
Tomasz Chmielewski
http://wpkg.org
