prakash reddy wrote:
> Yes, you are all right from the database perspective.
>
> But our requirement stands.
>
> I work for Convergys and we have a well-built product, Infinys.
>
> Infinys is a telecom billing product.
>
> We have a table in Infinys called COSTEDEVENT. This table is the core
> table of our product. It holds the details of each and every event
> (telephone call details / SMS / MMS / GPRS etc.) generated in the
> network, so the minimum number of records in this table is 5 million,
> and that is a minimum figure.
>
> Some of our clients maintain more than 15 million records in this table.
>
> We cannot query this table directly because of performance issues, so
> we wrote a plugin that makes a copy of every event that goes into the
> core table and puts it in another table, and we query that table
> instead. Now we want to cache everything so that we get better
> performance from this data.
>
> We have to cache this data and we are not supposed to query the
> database, so this is a requirement from our perspective.
>
> I need some help with this.
>
> Any suggestions are welcome!
>
> Thanks.
>
> Cheers,
> Krishna Kanth.

I don't know if you got a satisfactory reply to this, but I've done a 
LOT of high-performance database work.  15 million records is quite a 
few, but nothing unmanageable.

My first question is - if you were to directly query the database all 
the time, what sort of patterns are there in both queries AND results? 
Caches work best if you can cache based on known query/result patterns.

Can you cache just the most-requested data or do you have to cache it 
all?  This relates to the previous pattern question.  In general, those 
are the two types of caches to choose from: a full in-memory copy of 
the data set, or a smaller cache that keeps only the hottest entries 
and evicts the rest.
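Caching only the most-requested data can be sketched as an LRU (least-recently-used) cache in front of the database. This is a minimal illustration, not anything from Infinys: the `fetch` callback, `QueryCache` name, and entry limit are all made up for the example.

```python
from collections import OrderedDict

class QueryCache:
    """Keep only the most-recently-requested query results in memory."""

    def __init__(self, fetch, max_entries=1000):
        self.fetch = fetch          # callback that actually hits the database
        self.max_entries = max_entries
        self.entries = OrderedDict()

    def get(self, query_key):
        if query_key in self.entries:
            # Cache hit: mark this entry as most recently used.
            self.entries.move_to_end(query_key)
            return self.entries[query_key]
        # Cache miss: fall through to the database, then remember the result.
        result = self.fetch(query_key)
        self.entries[query_key] = result
        if len(self.entries) > self.max_entries:
            # Evict the least-recently-used entry to stay within the limit.
            self.entries.popitem(last=False)
        return result
```

The point is that `max_entries` bounds memory use, so you can size the cache to your hardware while still absorbing the bulk of repeated queries.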

How much hardware do you have available for this caching system?  This 
requires running inventory.

Can you archive portions of the database?  The less data there is in 
the table, the faster the database server will run queries on it.  If 
you can move rows that are, say, older than two weeks into a separate 
but identical archive table, your main table becomes a lot smaller. 
Then keep a secondary archive table for really old data (e.g. data 
older than 6 months).
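The archive step is just a copy-then-delete inside one transaction. Here's a toy demonstration using Python's built-in sqlite3; the two-column table and the cutoff date are invented, since I obviously don't know the real COSTEDEVENT schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Main table and an identically-shaped archive table.
conn.executescript("""
    CREATE TABLE costed_event (id INTEGER PRIMARY KEY, event_date TEXT);
    CREATE TABLE costed_event_archive (id INTEGER PRIMARY KEY, event_date TEXT);
""")
conn.executemany(
    "INSERT INTO costed_event VALUES (?, ?)",
    [(1, "2007-01-01"), (2, "2007-06-01"), (3, "2007-06-20")],
)

CUTOFF = "2007-06-10"  # e.g. "everything older than two weeks"

# Copy old rows into the archive table, then delete them from the
# main table -- both inside one transaction so a crash can't lose rows.
with conn:
    conn.execute(
        "INSERT INTO costed_event_archive "
        "SELECT * FROM costed_event WHERE event_date < ?", (CUTOFF,))
    conn.execute("DELETE FROM costed_event WHERE event_date < ?", (CUTOFF,))
```

Run on a schedule (nightly, say), this keeps the hot table small without ever discarding data.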

Can you forgo ACID on the main table?  For instance, MySQL InnoDB 
tables run slower than MyISAM because InnoDB has full ACID support 
(transactions, crash recovery) while MyISAM does not.  You take a 
greater risk of table corruption after a crash without ACID, but the 
tradeoff is a _drastic_ improvement in performance.

My last question:  How complex are your queries?  Anything beyond simple 
SELECT statements is going to be detrimental to performance on 15 
million records.

I'm assuming you've got well-designed indexes on the 'core' table for 
the appropriate fields.  If you don't, I recommend creating them, 
though you'll have to play some balancing games to avoid chewing up all 
available hard drive space.  The downside to indexes is that INSERT 
queries take longer to complete, but searching becomes much faster.  I 
also assume you regularly take the database offline and run 
optimization queries (e.g. OPTIMIZE TABLE).  You also should be using 
persistent connections if you aren't.
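To see an index change a query plan with your own eyes, here's a toy illustration using Python's built-in sqlite3 (MySQL's EXPLAIN output looks different, but the principle is identical; the table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE costed_event (caller TEXT, cost REAL)")

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row describes the strategy.
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(row[-1] for row in rows)

query = "SELECT cost FROM costed_event WHERE caller = '555-0100'"
before = plan(query)   # without an index: a full table scan

# The index trades slower INSERTs (and extra disk) for fast lookups.
conn.execute("CREATE INDEX idx_caller ON costed_event (caller)")
after = plan(query)    # with the index: an index search
```

On 15 million rows, that difference between a scan and an index search is the difference between minutes and milliseconds.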

-- 
Thomas Hruska
CubicleSoft President
Ph: 517-803-4197

*NEW* MyTaskFocus 1.1
Get on task.  Stay on task.

http://www.CubicleSoft.com/MyTaskFocus/
