Albert Steckenborn wrote:
>
> Hi Alexander,
>
> all our applications are web-based, so there is no other application
> that could feed the cache.
> But I think that when the table and its indexes are dropped, the cached
> data belonging to that table is thrown away as well.
>
> So does it make any sense to delete only the data of the table and not
> the table itself, in order to keep the cached data and all the other
> built-in optimisation strategies?
>
> Our strategy of dropping the tables was justified in the past, with
> ADABAS D as the database, 20% of today's data volume, and machines with
> a single processor, no raw devices and less RAM. Today a load of 5 GB of
> data takes about 15 minutes; in the past it took 2 hours.
> I think it is time to change the loading strategy to get better
> performance, but I want to be sure that the change will bring the
> desired results (there are a lot of scripts to change).
>
> rgds.
>
> Albert
It seems there is a misunderstanding on one of our sides. As far as I
understood it, the deletion, creation and loading of the table is done on one
database (let's call it A), then a backup is taken and recovered on database
B. Afterwards NO data is in the cache of B and the first SELECT takes too
long.

As Alexander says: one first SELECT will feed the cache. And people who are
able to write and run scripts with such a complex SQL/backup mixture will
also be able to add one more SQL call (a SELECT that feeds the cache) to this
script, in the same 'application' that does the DROP TABLE ... (a small
sketch of such a warm-up call is appended after the quoted thread at the end
of this mail).

Your idea that deleting the data instead of dropping the table would change
anything seems to be a misinterpretation. The only difference is whether the
meta-data of the table (and the root page of the data) does or does not have
to be created anew. There is no table-specific cache anyway, and there is no
data in the global cache that could be reused.

On database A, which does the loading of the table, there WOULD be data in
the cache if the normal load were used instead of FASTLOAD. But that does not
help database B in any case; only the (possible) SELECTs on A would run
against a fed cache. Did you check the speed of a normal load instead of a
fastload?

Why do you switch back to your main server (different hardware/speed?)?

On the other hand: perhaps it is not only me who does not really understand
why the whole table has to be filled from scratch 4 times a day. Are there
really so many changes? Why don't you update the data instead of using this
back-and-forth between servers? Some information on why this decision was
made would help.

Elke
SAP Labs Berlin

>
> Schroeder, Alexander schrieb:
> > Hello,
> >
> > it looks to me like the 1st query did not hit the data cache and was
> > punished with lots of data I/O. All following queries then possibly
> > feed from the cache to a certain extent and need not access the hard
> > disk too often.
> >
> > As dropping, loading, and backup already take some time, and are done
> > before the server is 'switched on' for the application, executing the
> > nasty query beforehand from some client program (e.g. dbmcli) may just
> > feed the cache, so that the 1st query from the application isn't
> > really the 1st one ...
> >
> > Alexander Schröder
> > SAP DB, SAP Labs Berlin
> >
> >
> >>-----Original Message-----
> >>From: Albert Steckenborn [mailto:[EMAIL PROTECTED]
> >>Sent: Monday, December 15, 2003 4:47 PM
> >>To: [EMAIL PROTECTED]
> >>Subject: Tuning Question tables (5GB) with complete reload each day
> >>
> >>
> >>Hi folks,
> >>
> >>perhaps somebody can give me a hint on my questions.
> >>
> >>We are running some applications with tables of up to 5 GB on SAP DB
> >>7.4 on Linux.
> >>Each day we receive the whole data set again, in partial steps (with
> >>sizes from 0.1 GB to 1.5 GB) at different times.
> >>
> >>At the moment we drop the whole table and its indexes 4 times per day,
> >>create everything anew and load the new data with fastload on our
> >>standby database server. On this database server we also do everything
> >>else that has to be done (creating indexes and so on). Then we run a
> >>backup of the database, switch the applications to this server and run
> >>a recovery of our backup on the main database server.
> >>After this we switch back to the main server.
> >>
> >>Now our problem:
> >>The first SELECTs (over 1,500,000 rows) after this take up to 30
> >>seconds to return their results. That is definitely too much.
> >>There is no further way to optimise the indexes (we have done our
> >>best; EXPLAIN looks fine). With the second SELECT the result comes
> >>back immediately (0 seconds, because of caching).
> >>
> >>Now the question:
> >>Do you think it would make sense, for performance reasons, to delete
> >>only the data (and reload the new data) instead of dropping the whole
> >>table with its indexes?
> >>
> >>Or perhaps you have another idea of what we could do.
> >>
> >>With best rgds.
> >>
> >>Albert
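
Below is a minimal sketch of the warm-up call suggested above, assuming the
load/backup/recover scripts are plain shell scripts and that dbmcli is used
to fire the SQL, as Alexander hinted. Every name in it is a placeholder
(DBNAME, the dbm and application users, BIGTABLE, SOMECOLUMN), and the exact
dbmcli invocation and sql_* command syntax should be checked against the
documentation of the installed 7.4 version:

    #!/bin/sh
    # Hypothetical warm-up step (sketch only): run it on the main server
    # after the recovery has finished and BEFORE the applications are
    # switched back, so the first application SELECT is no longer the first
    # one to hit the disk.  DBNAME, dbm,dbm, appuser,apppwd, BIGTABLE and
    # SOMECOLUMN are placeholders.
    dbmcli -d DBNAME -u dbm,dbm <<EOF
    sql_connect appuser,apppwd
    sql_execute SELECT COUNT(SOMECOLUMN) FROM BIGTABLE
    sql_release
    EOF

Ideally the SELECT is replaced by the application's own slow first query, so
that exactly the same index and data pages end up in the data cache; an
aggregate over a column that is not covered by any index is just a simple
way to force the table's data pages to be read.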