I never had to use the interface to DFSORT, and that was when I was sorting
stuff so large it crashed RAID servers (mostly SYSLOGs).

I never had the sort in Pipelines "stall" unless I was also asking it to do a
"unique" record filter as well.  Straight collates were always fine.  We still
use it internally to search through a filesystem for backlevels of message IDs
that LookAt has surfaced in the past.  That can be 3/4 million records coming
in from multiple files (OK, I'm talking z/VM here).  I think John Hartmann,
the original author of Pipelines, did his reads and work below the normal
application layer (at least in VM), so it was incredibly fast, and for
straight sorts it didn't seem to choke on any size of data.  I personally
believe the DFSORT interface was there in case you already had such a beast
and wanted to continue using it, not because the "sort" in Pipelines couldn't
handle the volume.
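For what it's worth, the difference between a straight collate and a collate
with a "unique" filter can be sketched in ordinary Python (a rough
illustration only, not CMS Pipelines internals; the function and the sample
message IDs are made up):

```python
import heapq

def collate(*sorted_streams, unique=False):
    """Merge pre-sorted record streams into one sorted stream.

    With unique=True, drop consecutive duplicates after the merge --
    that extra comparison per record is what a "unique" filter adds
    on top of a straight collate.
    """
    prev = object()  # sentinel: never equal to any real record
    for rec in heapq.merge(*sorted_streams):
        if unique and rec == prev:
            continue
        prev = rec
        yield rec

# Two already-sorted message-ID lists, as if read from two files
a = ["ABC100I", "ABC200E", "ABC200E"]
b = ["ABC150W", "ABC200E"]
print(list(collate(a, b)))               # straight collate: all records
print(list(collate(a, b, unique=True)))  # duplicates dropped
```

Both paths stream record by record, which is why a straight collate of
pre-sorted inputs can stay fast regardless of total volume.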

Most of the base stages in BatchPipes/SmartBatch were identical.  I assume it
would perform similarly.

Kevin.

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
