In message <[EMAIL PROTECTED]>, Jeffrey Butera
<[EMAIL PROTECTED]> writes
> I'm going to ask yet another dumb question - UniData 6.1.4 on Solaris
> (soon to be 7.1.x).
> I'm selecting a bunch of records and then outputting data from them into
> an ASCII file in _HOLD_. If I open a sequential file, write the data
> line-by-line (WRITESEQF) and close the file, it takes about 5 minutes.
> If I save the data in an @FM-delimited record and then write the record
> out at the end, it takes about 3 seconds.
BE WARNED. We had exactly the same thing the other way round - indeed I
rewrote a lot of our routines to use WRITESEQ instead of building an
array in BASIC.
> I'm well aware that writing sequentially is doing a whole lot more disk
> I/O, but I can't believe the difference in speed. Are there any
> subtleties other than disk I/O? I seem to recall some discussion about
> sequential files and maintaining a pointer to the current position in
> the file, but I'm foggy on these topics...
Our system was short of RAM - 16 meg for 32 users (PI/Open on an EXL
7330). It thrashed enough under normal load, even before you started to
try and build large (and I mean LARGE) strings in BASIC...
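For what it's worth, the two approaches in the thread can be sketched roughly as below. This is a sketch only - the file name EXPORT.TXT, the variable names, and the loop shape are all invented for illustration, and you'd build LINE from your actual record data. One subtlety worth checking in your docs: if memory serves, WRITESEQF (unlike plain WRITESEQ) forces each write to be flushed to disk before it returns, so every line pays a physical I/O - that alone could explain a gap of this size.

```
* Approach 1: one sequential write per line (the slow case in the
* original post). Note WRITESEQF flushes each write to disk.
OPENSEQ '_HOLD_', 'EXPORT.TXT' TO SEQ.FILE ELSE STOP 'Cannot open'
LOOP
   REMOVE ID FROM ID.LIST SETTING MORE
   LINE = ID   ;* build the real output line from the record here
   WRITESEQF LINE TO SEQ.FILE ELSE STOP 'Write failed'
WHILE MORE DO REPEAT
CLOSESEQ SEQ.FILE

* Approach 2: accumulate one @FM-delimited dynamic array in memory,
* then write the whole record to _HOLD_ in a single operation.
REC = ''
LOOP
   REMOVE ID FROM ID.LIST SETTING MORE
   REC<-1> = ID   ;* append as a new field (one field per output line)
WHILE MORE DO REPEAT
OPEN '_HOLD_' TO F.HOLD ELSE STOP 'Cannot open _HOLD_'
WRITE REC TO F.HOLD, 'EXPORT.TXT'
```

The trade-off cuts both ways, as Wol's warning above shows: each REC<-1> append can force the runtime to grow and copy the string, so on a RAM-starved machine a very large dynamic array can thrash where buffered sequential writes would not.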
Cheers,
Wol
--
Anthony W. Youngman <[EMAIL PROTECTED]>
'Yings, yow graley yin! Suz ae rikt dheu,' said the blue man, taking the
thimble. 'What *is* he?' said Magrat. 'They're gnomes,' said Nanny. The man
lowered the thimble. 'Pictsies!' Carpe Jugulum, Terry Pratchett 1998
Visit the MaVerick web-site - <http://www.maverick-dbms.org> Open Source Pick
-------
u2-users mailing list
[email protected]
To unsubscribe please visit http://listserver.u2ug.org/