Hello David,

Apologies for my delayed response.  A friend of mine suggested something
along the lines of creating all my SQL queries and running them at once
rather than individually.  I have definitely considered this approach;
where it falls apart, though, is that I am doing lookups on each row before
I write (which is a major speed issue at this point), because I need to see
whether the product already exists.  If it does not, I create a new entry
for the product; if it does exist, I check whether anything has changed,
because we need to audit changes.
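
One way to cut down the per-row lookups would be to fetch the existing rows
for a whole batch in a single SELECT ... IN (...) and do the exists/changed
check in application code.  A minimal sketch using Python's sqlite3 module
(the products table and sku/price columns here are hypothetical stand-ins
for the real schema):

```python
import sqlite3

# Hypothetical schema: a products table keyed by a unique 'sku' column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (sku TEXT PRIMARY KEY, price REAL)")
conn.execute("INSERT INTO products VALUES ('A1', 9.99), ('B2', 4.50)")

incoming = [("A1", 10.99), ("B2", 4.50), ("C3", 2.00)]  # rows from the feed

# One lookup for the whole batch instead of one SELECT per row.
skus = [sku for sku, _ in incoming]
placeholders = ",".join("?" * len(skus))
existing = dict(conn.execute(
    f"SELECT sku, price FROM products WHERE sku IN ({placeholders})", skus))

to_insert, to_update = [], []
for sku, price in incoming:
    if sku not in existing:
        to_insert.append((sku, price))      # brand-new product
    elif existing[sku] != price:
        to_update.append((price, sku))      # changed -> also needs an audit entry

conn.executemany("INSERT INTO products VALUES (?, ?)", to_insert)
conn.executemany("UPDATE products SET price = ? WHERE sku = ?", to_update)
conn.commit()
```

This keeps the exists/changed/audit decision, but pays one round trip per
batch instead of one per row.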

On the other hand, using the 'ON DUPLICATE KEY' clause (which I am not
familiar with yet, but will be looking up), maybe there is a way I can use
that to create the audit entries when the row already exists.

As far as the Memory engine goes, I have played with it a little bit.
However, you are restricted in that you can't have any BLOB/TEXT columns,
so the best I can do is a VARCHAR for data that might actually exceed 255
bytes/chars (I think that is the limit).  That inherently kills my
'Description' column, which could (on rare occasions) exceed 1k or more.

One thing I have played with is throwing the products table into an SQLite
database and putting the database file on a RAM drive, and that has helped
performance.  But we are still talking about a lot of reads/writes, and I
am also not a huge fan of SQLite's column type structure (not bashing it by
any means, and most definitely not discounting its use in the future; I am
just more comfortable with MySQL tables/structure).  I have also not
stepped into storing MySQL databases on a RAM drive -- I am sure that is a
much more complex scenario, and when this project goes into production I
don't want to get stuck fighting hosting packages that require multitudes
of RAM for a RAM-drive setup.  Basically, I am trying to optimize my
approach with disk-based database structures, keeping memory usage to a
minimum while keeping things fast (yeah, I know, an uphill battle).
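
For what it's worth, a cheaper alternative to the RAM drive might be to
batch the writes into one transaction: SQLite syncs to disk once per
COMMIT, so a single commit covering thousands of rows removes most of the
per-row disk overhead even on a plain disk file.  A sketch (file path and
table are illustrative):

```python
import os
import sqlite3
import tempfile

# A disk-based database file, no RAM drive involved.
path = os.path.join(tempfile.mkdtemp(), "products.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE products (sku TEXT PRIMARY KEY, price REAL)")

rows = [(f"SKU{i:05d}", float(i)) for i in range(10_000)]

# All 10,000 inserts ride in one transaction; SQLite performs the
# expensive sync-to-disk once, at COMMIT, instead of once per row.
conn.executemany("INSERT INTO products VALUES (?, ?)", rows)
conn.commit()
```

This keeps memory usage low while recovering much of the speed a RAM drive
would buy.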

Now, also thinking about the steps you listed with the temp table: would
you create that temp table in memory, reading each database record and
writing the new record into the temp table, and then dump the temp table
over top of the current table (in blocks of rows at a time)?
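
If I am reading the temp-table idea right (this is just my guess at what
you meant), the incoming rows get staged in a temporary table and then
merged into the main table with one set-based statement, rather than
copied over row by row.  In SQLite terms it might look like this (the
`WHERE true` is SQLite's required disambiguation when INSERT ... SELECT is
combined with an upsert):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (sku TEXT PRIMARY KEY, price REAL)")
conn.execute("INSERT INTO products VALUES ('A1', 9.99)")

# 1. Stage the incoming feed in a temp table (kept in SQLite's temp store).
conn.execute("CREATE TEMP TABLE staging (sku TEXT PRIMARY KEY, price REAL)")
conn.executemany("INSERT INTO staging VALUES (?, ?)",
                 [("A1", 10.99), ("C3", 2.00)])

# 2. One set-based merge into the main table, instead of a
#    lookup-then-write round trip per row.
conn.execute("""
INSERT INTO products (sku, price)
SELECT sku, price FROM staging WHERE true
ON CONFLICT(sku) DO UPDATE SET price = excluded.price
""")
conn.commit()
```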

I hope my logic in this approach is well explained.  I have a large amount
of data, and I want to keep the management of that data as simple and
accurate as possible.  At the same time, I feel that all this checking /
comparing / reading of data over and over may be overdoing it.

Thanks again for your post; I will give it strong consideration while I am
refining the whole process.

Aaron Murray

--
View this message in context: 
http://zend-framework-community.634137.n4.nabble.com/Database-Optimization-Normalization-tp3426978p3432263.html
Sent from the Zend Framework mailing list archive at Nabble.com.

-- 
List: [email protected]
Info: http://framework.zend.com/archives
Unsubscribe: [email protected]

