Hi folks,

perhaps somebody can give me a hint on my questions.

We are running some applications with tables of up to 5 GB on SAPDB 7.4 on Linux.
Each day we receive the complete data again, delivered in partial steps (ranging in size from 0.1 GB to 1.5 GB) at different times.


At the moment we drop the whole table and its indexes four times per day, create everything new, and load the new data with FASTLOAD into our standby database server. On that server we also do everything else that has to be done (creating indexes and so on). Then we run a backup of the database, switch the applications to this server, and recover that backup on the main database server.
After this we switch back to the main server.


Now our problem:
The first SELECT after this (over 1,500,000 rows) needs up to 30 seconds to return its result. That is definitely too much. There is nothing left to gain from optimising the indexes (we have done our best, and EXPLAIN looks fine). With the second SELECT, the result comes back immediately (0 seconds, because the data is then cached).


Now the question:
Do you think it would make sense, for performance reasons, to delete only the data (and reload the new data) rather than dropping the whole table with its indexes?
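To make the idea concrete, here is a minimal Python sketch of that alternative, written against the generic Python DB-API. The connection object, the table name SALES_DATA, and its columns are placeholders for illustration only, not our real schema, and the "?" markers assume a qmark-style driver. The point is simply DELETE plus bulk INSERT in one transaction, so the table definition and the indexes stay in place:

    # Sketch only: replace the rows of an existing table without dropping it,
    # so the table definition and its indexes survive the reload.
    # conn - any Python DB-API 2.0 connection to the database
    # rows - the new data as a list of (id, amount, loaded_at) tuples
    def reload_rows(conn, rows):
        cur = conn.cursor()
        cur.execute("DELETE FROM SALES_DATA")  # rows go away, table and indexes stay
        cur.executemany(
            "INSERT INTO SALES_DATA (ID, AMOUNT, LOADED_AT) VALUES (?, ?, ?)",
            rows)
        conn.commit()  # one transaction per reload

Of course, deleting a few GB of rows is not free either, which is exactly what I am unsure about.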


Or perhaps you have another idea of what we can do.

With best regards,

Albert




