Hi Alexander,

All our applications are web-based, so there is no other application that could feed the cache.
But I think that when a table and its indexes are dropped, the cached data belonging to that table is invalidated as well.


So would it make sense to delete only the data in the table, and not the table itself, so that the cached data and all the other built-in optimisation strategies are preserved? What I have in mind is sketched below.
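Roughly like this (the table name is only a placeholder, and I would still have to verify that fastload accepts an existing table that has merely been emptied):

    -- keep the table, its indexes, and the catalog entries; remove only the rows
    DELETE FROM sales_facts
    -- then load the new data with fastload into the now-empty table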

Our strategy of dropping tables was justified in the past: ADABAS D as the database, 20% of today's data volume, other machines with a single processor, no raw devices, and less RAM. Back then a load of 5 GB of data took about 2 hours; today it takes about 15 minutes.
I think it is time to change the loading strategy to get better performance, but I want to make sure that the change will bring the desired results (there are a lot of scripts to adapt).


rgds.

Albert



Schroeder, Alexander wrote:
Hello,

it looks to me like the first query did not hit the data cache and was punished
with lots of data I/O. All following queries then possibly feed from the cache
to a certain extent and do not need to access the hard disk as often.


As dropping, loading, and backup already take some time, and are done before the server is 'switched on' for the application, executing
the nasty query beforehand from some client program (e.g. dbmcli) may simply feed the cache, so that the first query from the application isn't really the first
one ...
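A minimal sketch of such a warm-up call, with placeholder database, user, and table names (the COUNT(*) stands in for your actual 'nasty' statement, and the exact option spelling should be double-checked against your 7.4 dbmcli):

    dbmcli -d MYDB -u dbm,secret -uSQL dba,secret \
        sql_execute "SELECT COUNT(*) FROM sales_facts"

Any statement that touches the same pages as the nasty query would do; the point is only that those pages end up in the data cache before the application connects.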


Alexander Schröder
SAP DB, SAP Labs Berlin



-----Original Message-----
From: Albert Steckenborn [mailto:[EMAIL PROTECTED]
Sent: Monday, December 15, 2003 4:47 PM
To: [EMAIL PROTECTED]
Subject: Tuning Question tables (5GB) with complete reload each day


Hi folks,


Hopefully somebody can give me a hint on my questions.

We are running some applications with tables of up to 5 GB on SAP DB 7.4 on Linux.
Each day we receive the complete data again, in partial steps (ranging from 0.1 GB to 1.5 GB) at different times.


At the moment we drop the whole table and its indexes four times per day and recreate everything, loading the new data with fastload into our standby database server. On that server we also do everything else that needs to be done (creating indexes and so on). Then we run a backup of the database, switch the applications to that server, and run a recovery of the backup on the main database server.
After this we switch back to the main server. The per-table part of the cycle is sketched below.
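For reference, roughly like this (the table name and column definitions are only placeholders; the fastload and backup/recovery steps are driven by external tools and are noted as comments):

    -- four times per day, per table:
    DROP TABLE sales_facts
    CREATE TABLE sales_facts (
        id     FIXED(10) NOT NULL PRIMARY KEY,
        region CHAR(3),
        amount FIXED(12,2)
    )
    -- bulk-load the new data with fastload on the standby server
    -- then recreate the secondary indexes
    CREATE INDEX idx_sales_region ON sales_facts (region)
    -- finally: backup on standby, switch applications, recover on main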


Now our Problem:
The first selects (over 1,500,000 rows) after this need up to 30 seconds to return their results. That is definitely too much. There is no room left for optimising the indexes (we have done our best; EXPLAIN looks fine). With the second select, the result comes back immediately (0 seconds, thanks to caching).
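Just for completeness, this is how we check the plans (the statement here is only a placeholder, not one of our real queries):

    EXPLAIN SELECT * FROM sales_facts WHERE region = 'EMEA'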


Now the Question:
Do you think it would make sense, for performance reasons, to drop only the data (and reload the new data) rather than the whole table with its indexes?


Or maybe you have another idea of what we could do.

With best rgds.

Albert


