[EMAIL PROTECTED] wrote on 09/03/2006 01:11:43 AM:

> Anyone can create an example of extremes to invalidate any suggestion.

I didn't really see one million records as being extreme.  That's really 
not a very big file in today's world.  Since many people here are working 
on large systems, I just wanted to give them a heads-up before they 
implement logic that drags down their systems.  You never said this was 
for only small sets of records, so I felt people needed to be aware of the 
limitations.  Many people who come here are new to the MV world, and we 
need to look out for them.

> If I were intending to process 5 million records as you would suggest, I
> would write a simple program to create the csv's. I create many of these
> as programs for their recurring use. I use download for the one-shot
> simple projects.

The problem is that somebody could inherit the system years later, see 
something called DOWNLOAD, and assume it will work for their needs. Unless 
there are built-in warnings that say they should only process VERY small 
data sets with this logic, they're going to get bitten.  And we all know 
that one-time programs often make it into production.  Who here has never had 
that happen?  :-)
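
(For what it's worth, by "a simple program" I picture something along the 
lines of the rough UniVerse BASIC sketch below - one SELECT and a READNEXT 
loop, no per-record EXECUTEs.  The file name, field positions, and output 
path are just placeholders, not anything from the original request.)

      OPEN 'CUSTOMER.FILE' TO CUST.FILE ELSE STOP 'Cannot open CUSTOMER.FILE'
      OPENSEQ '/tmp/customers.csv' TO CSV.FILE ELSE
         CREATE CSV.FILE ELSE STOP 'Cannot create /tmp/customers.csv'
      END
      SELECT CUST.FILE            ;* build the select list once
      LOOP
         READNEXT CUST.ID ELSE EXIT
         READ CUST.REC FROM CUST.FILE, CUST.ID THEN
            ;* real code would quote/escape fields that may contain commas
            OUT.LINE = CUST.ID : ',' : CUST.REC<1> : ',' : CUST.REC<2>
            WRITESEQ OUT.LINE TO CSV.FILE ELSE STOP 'Write to CSV failed'
         END
      REPEAT
      CLOSESEQ CSV.FILE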

> Besides, so what if it did 5 million executes. These systems can handle
> it.

I know the common thinking these days is that the hardware can handle 
anything that programmers throw at it, and that programmer time is more 
expensive than hardware.  Therefore, programmers should just put something 
together that works and let the system handle it.  If not, you can always 
buy more hardware.  When programmers think that way, their employers end 
up bringing in people like me to make it right - not that I'm complaining.  :-)

With one line of BASIC code you can consume an entire CPU.  If your system 
has four CPUs, that's 25% of the entire system's capacity!  Run that 
program in four different sessions and you've bogged down the entire 
system.  Yes, hardware has come down in price over the years, but nobody's 
giving away CPUs yet.  Also, the fact is that faster, more powerful 
systems very often amplify inefficient coding practices and can create 
some very nasty bottlenecks.
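
If that sounds like an exaggeration, a throwaway one-liner like the following 
(the loop bound is arbitrary) will happily pin one CPU for as long as it runs:

      FOR I = 1 TO 2000000000 ; NEXT I   ;* does nothing useful, just burns cycles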

> Time or processor consumption wasn't an issue in the original request

Again, just warning people.  It's my belief that system efficiency should 
always be a key consideration in developing software.  I acknowledge that 
you shouldn't agonize over every CPU cycle, but it doesn't take that much 
longer to consider how coding and implementation practices will impact the 
entire system.  Unfortunately, performance is generally seen as a problem 
for the SAs and DBAs to solve.  In the VAST majority of cases that I've 
come across, the best performance improvements come from cleaning up the 
application code.

Please understand that I'm not picking on anyone here.  Just trying to 
help people understand the consequences of the decisions they make while 
developing systems.  When systems perform sluggishly, there's a tendency 
to blame that oddball Uni-whatchamacallit database.  The more we all do to 
keep the application humming along, the better off we'll all be.  And if 
anybody out there isn't sure how to make that happen, there are plenty of 
us out here that are willing to help out.

[Wipes brow and steps down from soapbox]
