I have yet to find someone who can give me a straight answer to this.  I have
a file that is badly overflowed (actually, I have a few files this way).  It
is affecting our batch processing with ridiculously long completion times.
What would be the best way to improve performance with this file?  Do I
increase the split load so that splits occur less frequently?  Any assistance
would be greatly appreciated.
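
For reference, here is the sort of change I have been considering, assuming
UniVerse's CONFIGURE.FILE is the right tool for a dynamic (type 30) file and
treating the 90 and 800000 below as placeholder values rather than
recommendations:

    CONFIGURE.FILE STOCK SPLIT.LOAD 90
    CONFIGURE.FILE STOCK MINIMUM.MODULUS 800000
    ANALYZE.FILE STOCK

The idea would be that the first command raises the split threshold so splits
happen less often, the second pre-allocates groups so the file is not left to
split its way up on its own, and the last re-checks the statistics afterward.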

Thanks,
Kevin


The stats of the file:

File name ..................   STOCK                                            
Pathname ...................   STOCK                                            
File type ..................   DYNAMIC                                          
Hashing Algorithm ..........   GENERAL                                          
No. of groups (modulus) ....   777564 current ( minimum 1, 14835 empty,         
                                            247100 overflowed, 28403 badly )    
Number of records ..........   4770913                                          
Large record size ..........   1628 bytes                                       
Number of large records ....   8                                                
Group size .................   2048 bytes                                       
Load factors ...............   80% (split), 50% (merge) and 80% (actual)        
Total size .................   2185496576 bytes                                 
Total size of record data ..   1253392129 bytes                                 
Total size of record IDs ...   32664403 bytes                                   
Unused space ...............   899435948 bytes                                  
Total space for records ....   2185492480 bytes                                 

File name ..................   STOCK                                            
                               Number per group ( total of 777564 groups )      
                               Average    Minimum    Maximum     StdDev         
Group buffers ..............      1.37          1          4       0.53         
Records ....................      6.14          1         26       3.55         
Large records ..............      0.00          1          1       0.00         
Data bytes .................   1611.95        194       6893     936.26         
Record ID bytes ............     42.01          4        179      24.39         
Unused bytes ...............   1156.74         12       3560     576.33         
Total bytes ................   2810.69       2048       8192       0.00         
                                                                                
                                                                                
                               Number per record ( total of 4770913 records )
                               Average    Minimum    Maximum     StdDev         
Data bytes .................    262.72        184       2061      25.79         
Record ID bytes ............      6.85          4         15       0.40         
Total bytes ................    269.56        188       2076      25.93         
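
If my arithmetic is right (please check it), the overall sizing looks about
correct already: 4770913 records at an average of 269.56 bytes is roughly
1.29 GB of record data plus IDs, and at an 80% split load in 2048-byte groups
that works out to approximately

    4770913 * 269.56 / (2048 * 0.80) = ~785,000 groups

which is close to the current modulus of 777564.  That makes me suspect the
overflow comes from uneven loading across the groups rather than from too few
groups overall.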

