On Thu, 6 Jan 2005 09:38:45 -0800
Josh Berkus <josh@agliodbs.com> wrote:

> Frank,
> >   Now that's rich.  I don't think I've ever seen a database perform
> >   worse after it was normalized.  In fact, I can't even think of a
> >   situation where it could!
> Oh, there are some.    For example, Primer's issues around his dating 
> database; it turned out that a fully normalized design resulted in
> very bad select performance because of the number of joins involved. 
> Of course, the method that did perform well was *not* a simple
> denormalization, either.
> The issue with denormalization is, I think, that a lot of developers
> cut their teeth on the likes of MS Access, Sybase 2 or Informix 1.0,
> where a poor-performing join often didn't complete at all.   As a
> result, they got into the habit of "preemptive tuning"; that is, doing
> things "for performance reasons" when the system was still in the
> design phase, before they even know what the performance issues
> *were*.     
> Not that this was a good practice even then, but the average software
> project allocates grossly inadequate time for testing, so you can see
> how it became a bad habit.  And most younger DBAs learn their skills
> on the job from the older DBAs, so the misinformation gets passed
> down.

  Yeah, the more I thought about it: I built a fraud detection system
  for a phone company years ago that, when completely normalized,
  couldn't get the sub-second response the users wanted. It was Oracle,
  and we didn't have the best DBA in the world.

  I ended up having to push about 20% of the deep call details into
  flat files, and surprisingly enough it was faster to grep the flat
  files than to use the database, because of, as was previously
  mentioned, all of the joins.
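  For anyone curious, here's a toy sketch of the tradeoff being
  discussed. The schema and data are entirely hypothetical (not the
  actual fraud system), using SQLite only because it's easy to run
  inline; the point is just that the denormalized copy answers the
  same question without a join, at the cost of redundant data.

```python
# Toy sketch of the normalization tradeoff from the thread:
# a normalized call-detail schema needs a join per lookup, while a
# denormalized copy answers the same question from one flat table.
# Schema and data are hypothetical, not from the original system.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Normalized: customer attributes live in their own table.
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT)")
cur.execute("CREATE TABLE calls (id INTEGER PRIMARY KEY,"
            " customer_id INTEGER, minutes REAL)")
cur.executemany("INSERT INTO customers VALUES (?, ?)",
                [(1, "east"), (2, "west")])
cur.executemany("INSERT INTO calls VALUES (?, ?, ?)",
                [(1, 1, 3.5), (2, 1, 10.0), (3, 2, 7.25)])

# Denormalized: the region is copied onto every call row.
cur.execute("CREATE TABLE calls_flat (id INTEGER PRIMARY KEY,"
            " region TEXT, minutes REAL)")
cur.execute("""INSERT INTO calls_flat
               SELECT c.id, cu.region, c.minutes
               FROM calls c JOIN customers cu ON cu.id = c.customer_id""")

# Same question, asked with and without the join.
joined = cur.execute("""SELECT cu.region, SUM(c.minutes)
                        FROM calls c
                        JOIN customers cu ON cu.id = c.customer_id
                        GROUP BY cu.region ORDER BY cu.region""").fetchall()
flat = cur.execute("""SELECT region, SUM(minutes)
                      FROM calls_flat
                      GROUP BY region ORDER BY region""").fetchall()
print(joined == flat)  # same answer; the flat table skips the join
```

  Whether the join actually costs enough to matter is exactly what you
  should measure before denormalizing, which was Josh's point above.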

   Frank Wiles <[EMAIL PROTECTED]>
