My theory on using files for anything is:
Disk access is expensive :)
Hard drives are pretty much the slowest components
you could have to deal with (aside from other, even
slower, external storage media).
Memory access and processor cache access times are
measured in nanoseconds.
Hard drive access times are measured in milliseconds.
A millisecond is 1/1,000 of a second; a nanosecond is
1/1,000,000,000 of a second.
So would you rather have 5 ns (5/1,000,000,000 of a
second), or around 5 ms for a nice fast SCSI hard drive,
which is 5/1,000 of a second?
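To put those two figures side by side, here's a quick back-of-the-envelope sketch (Python, just for the arithmetic; the variable names are mine, but the 5 ns and 5 ms numbers are the ones above):

```python
# Rough latency comparison: RAM access vs. a fast SCSI disk seek.
ram_access_s = 5e-9    # ~5 ns for a memory access
disk_access_s = 5e-3   # ~5 ms for a disk seek

ratio = disk_access_s / ram_access_s
print(f"the disk access is roughly {ratio:,.0f} times slower")
```

Six orders of magnitude, which is why you want to stay in RAM whenever you can.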
Although having a 10K RPM SCSI hard drive can make file
access dreamy <G>, memory is still faster.
Generally speaking, unless you have to use virtual memory
like Dave suggested, many small file writes are much less
efficient, because each separate file write you throw at
the OS (kernel) requires the kernel to grab a file handle.
So each separate write has (more) computational overhead,
whereas as long as you stay within your system's physical
RAM, it is generally going to be more efficient.
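You can see the per-write open/close overhead directly. This thread is about CFML, but the principle is language-agnostic, so here's a hedged Python sketch (file path, row count, and row format are my own illustrative choices; actual timings will vary with your OS and disk):

```python
import os
import tempfile
import time

rows = [f"record {i}\n" for i in range(4000)]
path = os.path.join(tempfile.mkdtemp(), "out.txt")

# Many small writes: open, append, close for every record,
# so the kernel hands out a file handle each time.
t0 = time.perf_counter()
for row in rows:
    with open(path, "a") as f:
        f.write(row)
many_small = time.perf_counter() - t0

# One big write: build the whole string in RAM first,
# then hit the disk once.
t0 = time.perf_counter()
with open(path, "w") as f:
    f.write("".join(rows))
one_big = time.perf_counter() - t0

print(f"4,000 opens: {many_small:.4f}s  vs  one write: {one_big:.4f}s")
```

On most machines the single buffered write wins by a wide margin, for exactly the file-handle reason above; but, as Dave notes below, if building the string pushes you into virtual memory the comparison can flip.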
Now, what would be fun? Non-volatile memory which retains
all the data, so that if you powered down your system,
when you turned it back on it would be in the exact same
place you were before. Some companies are working on
non-volatile RAM, and it should be available sometime
within the next couple of years.
Jeremy Allen
[EMAIL PROTECTED]
-----Original Message-----
From: Dave Watts [mailto:[EMAIL PROTECTED]]
Sent: Sunday, September 10, 2000 3:04 PM
To: '[EMAIL PROTECTED]'
Cc: '[EMAIL PROTECTED]'
Subject: RE: cffile and cfloop ?
> > This is much more efficient, Bud. However, it depends on
> > how long the cumulative records are.
> >
> > I haven't tested this on 4.5+, but under 4.01, if I tried to
> > write a variable to disk that was over about 32k, the CF server
> > would spike at 100% CPU and lock up completely, requiring a reboot.
>
> Well, as usual I had to test it. And as usual, I'm wrong. LOL
>
> I ran it my way on a query with 3915 records. It created a file 179
> KB. It didn't lock up, and that's on Win98 with 4.01 and a whopping
> 64 MB RAM. But, it did take 4 minutes. Your way took 10 seconds. LOL
> So, I think it's safe to say that your way is more efficient AND
> safer. :)
>
> Learn something new every day here. I would think that one large
> write would be quicker than 4,000 small ones. Certainly not 2,400%
> slower. :)
Don't be too hasty with your conclusions!
There are two things which will affect the efficiency of your file writes,
as far as I can tell. The first is how many times you open and close the
file, and the second is how much data needs to be buffered in memory before
being written to file.
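Dave's two factors can be traded off against each other: you can open the file once (avoiding the per-write handle overhead) while still writing incrementally (avoiding buffering the whole result in memory). A hedged Python sketch of that middle ground; the 64 KB buffer size and the generator are my own illustrative choices, not anything from the thread:

```python
import os
import tempfile

# Generator: only one row exists in RAM at a time.
rows = (f"record {i}\n" for i in range(4000))
path = os.path.join(tempfile.mkdtemp(), "out.txt")

# Open once; let the runtime's buffer batch the small writes
# into larger physical disk writes.
with open(path, "w", buffering=64 * 1024) as f:
    for row in rows:
        f.write(row)

print(os.path.getsize(path), "bytes written")
```

This keeps both costs low on a memory-starved box like the 64 MB Win98 machine below, at the price of holding the file open for the whole loop.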
You might find on a server that's better prepared to handle operations in
memory, your method is much more efficient. For example, if you do it on an
NT server that has physical memory to spare, you may be better off with your
method. I've found this to be the case for some very large writes I've had
to do in the past. On your 64 MB Win98 machine, it may have had to use
virtual memory to complete the operation, which of course requires writing
to and reading from the disk.
The moral of this story, if there is one, is that there are lots of
variables that enter into a "which is faster" question - often so many that
you can't arrive at a general answer.
Dave Watts, CTO, Fig Leaf Software
http://www.figleaf.com/
voice: (202) 797-5496
fax: (202) 797-5444
----------------------------------------------------------------------------
--
Archives: http://www.mail-archive.com/[email protected]/
To Unsubscribe visit
http://www.houseoffusion.com/index.cfm?sidebar=lists&body=lists/cf_talk or
send a message to [EMAIL PROTECTED] with 'unsubscribe' in
the body.