Filip Sergeys wrote:

hey Filip, * !

i've created my own script now; yours was of great help. here is what i did differently:

2) dynamically build a command script that will be executed by dbmcli in
the very last step. Start with bringing database in admin mode and do
util_connect

that's what i do now, too. it's only the second-best solution, since you cannot detect errors at the moment they happen. but see below...
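to make the idea concrete, here is a minimal sketch of that approach (all names here are placeholders, not the actual script):

```shell
# hypothetical sketch: collect the whole restore run in one command file
# and hand it to dbmcli only in the very last step. CMDFILE is a placeholder.
CMDFILE=$(mktemp)
cat > "$CMDFILE" <<'EOF'
db_admin
util_connect
EOF
# ... recover_start / recover_replace lines get appended here later ...
# final step (not executed in this sketch), roughly:
#   dbmcli -d MYDB -u dbm,dbm -i "$CMDFILE"
head -2 "$CMDFILE"
```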


3) look for the counter file and extract the lognumber of the last
applied log file

i do not use a counter file. i look at the backup history to find the last successfully imported log fragment. it works roughly like this:
backup_history_open - make sure to see the current version
backup_history_list -c LABEL,ACTION,RC -Inverted > $tmp


# columns: type+fragment | action | return code
l=`tail -n +3 $tmp | sed -e 's/^LOG_\([0-9]*\)|RESTORE *| *0|/\1/' -e t -e '/.*/d' | head -1`


... now $l is the last fragment. of course this only works when the history-list page we look at contains the matching line; my script aborts when it detects that it cannot parse this page.
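for illustration, the same extraction run against a fabricated two-line history (the column layout is assumed from the sed pattern above, newest entries first because of -Inverted):

```shell
# fabricated backup_history_list output: two header lines, then entries
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
LABEL|ACTION|RC
-----|------|--
LOG_000000112|RESTORE |  0|
LOG_000000111|RESTORE |  0|
EOF
# keep only RESTORE lines with return code 0, take the newest one
l=$(tail -n +3 "$tmp" | sed -e 's/^LOG_\([0-9]*\)|RESTORE *| *0|/\1/' -e t -e '/.*/d' | head -1)
echo "$l"   # prints 000000112
```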

4) build a loop that scans over every .arch file (archived logfile)
5) if the lognumber in the .arch file is lower than the last applied log
number -> skip it
6) if the lognumber in the .arch file is equal to the last applied log
number -> apply that log again. (this is needed because then you are
sure to have applied the last log PAGE, as already described in earlier
mails). The first log you apply should always start with "recover_start
ARCH LOG $lognr"

yep.

7) subsequent logfiles are applied with the command "recover_replace
ARCH </path/to/logfiles/> $lognr"
8) update your counter file with the last lognumber you applied
9) end with recover_cancel
10) util_release (don't forget this one)
11) bring the database back to sleep
12) execute the dynamically built script

yep.
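steps 4-11 could be sketched like this (a toy run against fabricated .arch files; directory, naming scheme and variable names are all assumptions, not Filip's or my actual script):

```shell
# fabricated setup: three archived logs, number 111 was applied last
ARCHDIR=$(mktemp -d)
CMDFILE=$(mktemp)
touch "$ARCHDIR/110.arch" "$ARCHDIR/111.arch" "$ARCHDIR/112.arch"
last=111
first=1
for f in "$ARCHDIR"/*.arch; do
    nr=$(basename "$f" .arch)          # assumes files are named <lognr>.arch
    [ "$nr" -lt "$last" ] && continue  # (5) older than last applied -> skip
    if [ "$first" -eq 1 ]; then
        # (6) re-apply the last log so the final log page is covered
        echo "recover_start ARCH LOG $nr" >> "$CMDFILE"
        first=0
    else
        echo "recover_replace ARCH $ARCHDIR/ $nr" >> "$CMDFILE"   # (7)
    fi
done
echo "recover_cancel" >> "$CMDFILE"    # (9)
echo "util_release"   >> "$CMDFILE"    # (10)
echo "db_offline"     >> "$CMDFILE"    # (11)
cat "$CMDFILE"
```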

to make sure that importing my logs worked out, i check the backup history again to see whether the last fragment is indeed the last file i tried to import. if there is a mismatch, i abort and show both the script and the result. but that hasn't happened yet.

so this works, hooray. but: my DB crashes every time. i even changed LOG_SEGMENT_SIZE in both instances (letting it be determined automatically), but that changed nothing. the crash seems to do no harm, though.

filip: may i ask you to check your knldiag.err for lines like this:

2005-01-27 16:44:37 18778 ERR 8 Admin ERROR 'cancelled' CAUSED EMERGENCY SHUTDOWN

perhaps your kernel crashes too, but you don't notice because you db_offline it anyway.
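a simple way to check (shown here against a fabricated log; the real knldiag.err lives in the instance's run directory, whose path depends on your installation):

```shell
# fabricated knldiag.err containing one emergency-shutdown line
log=$(mktemp)
cat > "$log" <<'EOF'
2005-01-27 16:44:37 18778 ERR 8 Admin ERROR 'cancelled' CAUSED EMERGENCY SHUTDOWN
EOF
if grep -q "CAUSED EMERGENCY SHUTDOWN" "$log"; then
    echo "kernel reported an emergency shutdown"
fi
```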

sap-folks: is this supposed to happen? anything i can do to prevent it?

what i need to do now is to run export/import cycles automatically for several days and see if this really works :)

thanks for your help,
        Raimund

--
7th RedDot user conference of the RedDot Usergroup e.V. on 31.1.2005
Pinuts presents a new development, http://www.pinuts.de/news

Pinuts media+science GmbH                 http://www.pinuts.de
Dipl.-Inform. Raimund Jacob               [EMAIL PROTECTED]
Krausenstr. 9-10                          voice : +49 30 59 00 90 322
10117 Berlin                              fax   : +49 30 59 00 90 390
Germany


--
MaxDB Discussion Mailing List
For list archives: http://lists.mysql.com/maxdb
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
