Thanks for the replies, and sorry for the late answer ;) Here
comes a batch answer ...

> -----Original Message-----
> From: Brunzema, Martin [mailto:[EMAIL PROTECTED]
> Sent: Monday, 24 February 2003 09:38
> To: 'Tobias Oberstein'; [EMAIL PROTECTED]
> Subject: RE: Is the Log writer lazy by default?

Hi Martin,

> > I've read through the manuals and sources and found the following
> > kernel options:
> >
> >
> > #define         PAN_DELAY_LW                  "_DELAY_LOGWRITER"
> > #define         PAN_DELAY_COMMIT              "_DELAY_COMMIT"
> >
> >
> > but the functions seem to be unused (never called). Any hints
> > from the insiders?
>
> Hi,
>
> these parameters are not used in 7.4 anymore. Instead, an implicit
> group commit is forced by the new implementation in 7.4:
>
> User tasks are now able to reserve space in the log queue in
> parallel. The page is written by the LogWriter once all these pending
> reserve operations are completed. If you have high parallelism
> of small transactions, then more than one transaction will write
> its commit entry with a single page flush on the log volume.
> Also, not just one page may be flushed, but a block of pages if the
> throughput is high enough. This enables us to write more than one
> page per rotation.
>
> regards, Martin
>

Thanks for clarifying. The log writer strategy sounds quite
sophisticated.
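For my own understanding, here is a minimal toy model of such an implicit
group commit (my own sketch, not the actual SAP DB 7.4 kernel code):
several transactions reserve space in the log queue, and one page flush
then makes all of their commit entries durable at once.

```python
class GroupCommitLog:
    """Toy model of implicit group commit: many pending commit
    entries become durable with a single flush (hypothetical
    sketch, not the real kernel implementation)."""

    def __init__(self):
        self.pending = []   # commit entries reserved but not yet on disk
        self.durable = []   # entries already "on the log volume"
        self.flushes = 0    # number of physical page flushes

    def reserve(self, entry):
        # A user task reserves space in the log queue.
        self.pending.append(entry)

    def flush(self):
        # The log writer flushes once all pending reservations are
        # complete; one flush covers every waiting commit entry.
        if self.pending:
            self.durable.extend(self.pending)
            self.pending.clear()
            self.flushes += 1

log = GroupCommitLog()
for txn in ("T1", "T2", "T3"):
    log.reserve("COMMIT " + txn)
log.flush()
print(log.flushes, len(log.durable))  # → 1 3
```

So three parallel committers cost one disk flush instead of three, which
is exactly why high parallelism of small transactions helps here.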

Btw., for comparison, I recently stumbled across a technical note on
the internals of Oracle's log writing process that I found worth reading:

http://www.ixora.com.au/notes/redo_write_triggers.htm

(there are more internals at http://www.ixora.com.au/notes, e.g. on
kernelized asynchronous I/O)

Anyway, from what I understand, I'm not benefiting from group commit,
since I've got a single client session and there can be at most _one_
outstanding request ("order") on a single session. Issuing multiple
order requests over one database session without first waiting for each
request to return is an error, right?
I got this from ftp://ftp.sap.com/pub/sapdb/misc/xorder7.doc.
Is it still up to date?

-------------------------------------------------------------------------------
From: Sven Köhler <[EMAIL PROTECTED]>
Subject: Re: Is the Log writer lazy by default?
Date: Sat, 22 Feb 2003 20:09:28 +0100

Hi Sven,

> > I wonder, this cannot be the case if log entries were written
> > fully synchronous. If the Log writer UKT would flush it's log
> > queue on every COMMIT, the performance could be at max. :
> >
> > 7.200 RPM / 60 = 120 records / sec
>
> I don't understand that calculation, could you explain it a bit?

Sorry, what I meant is this: the magnetic platter in my PC's harddisk
rotates at 7200 rotations per minute, that is, 120 rotations per second.
Let's assume the disk's read/write head doesn't move at all, which seems
reasonable, since writing log pages amounts to just _appending_ to the
log file (with single logging and one file in the log area).
Then the "end of file" location on the platter passes the read/write
head 120 times per second, giving a chance to update the last page in
the log file.
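In other words, the arithmetic behind my 120 records/sec ceiling is just
this (a back-of-the-envelope bound, assuming one durable log write per
platter rotation):

```python
# Upper bound on synchronous commits/sec when every COMMIT must wait
# for the log page's on-disk location to pass under the head, which
# happens once per rotation.
rpm = 7200
rotations_per_second = rpm / 60
max_sync_commits_per_second = rotations_per_second
print(max_sync_commits_per_second)  # → 120.0
```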

>
> > That is: Is the Log writer lazy by default? E.g. does it wait
> > until log pages are full before flushing them to disk? Or is this
> > Win32-specific behavior where Windows plainly ignores requests for
> > syncing file writes? How does SAP DB force a sync on Win32?
>
> Perhaps SAPDB does a sync and Windows flushes its buffers. That might
> only mean that the data is within the harddisk's cache memory.

Yes, you were right. My fault: I had not deactivated "write caching"
on my harddisk. After doing that, performance comes down to the values
I predicted. The details follow in another posting. Lesson learned:
never run critical data on disks with write caching enabled.


-------------------------------------------------------------------------------
From: [EMAIL PROTECTED] (Hauke Fath)
Newsgroups: tangro.lists.sap.sapdb-general
Subject: Re: Is the Log writer lazy by default?

Hi Hauke,

> > I wonder, this cannot be the case if log entries were written
> > fully synchronous. If the Log writer UKT would flush its log
> > queue on every COMMIT, the performance could be at max.:
> >
> > 7200 RPM / 60 = 120 records / sec
> >
> > My harddisk spins at 7200 rpm. Best case: the disk head does not
> > move at all, and log pages are written out just as their location
> > on disk passes the head. This happens exactly 120 times/sec.
>
> Your understanding of harddisk operation appears to be a leeetle bit
> (like, a dozen years) outdated.  ;)
>
> Modern disks use a cache that is large compared to the size of a track
> and effectively decouple bus transfers and physical disk access.
>

I was aware of caches inside harddisks, but I didn't know that one has
to explicitly switch "write caching" off to get truly durable
synchronous writes. I naively thought that an fsync() would just _tell_
the harddisk to bypass the write cache. As it appears, this is not
always the case.

ACID guarantees "durability" as soon as a COMMIT returns to the client.
At that time the data may still reside in the write cache (if enabled),
and a power outage may result in losing that already-committed data.
See the link below to an IBM page for their statement on the case.

> And more: The calculation above implies that you write one page _per
> revolution_, while it is probably safe to assume that you can write
> blocks continuously on a track (modulo FS layout policy which is not
> relevant for raw devices) almost up to the physical transfer rate of the
> disk hardware.
>
>         hauke

I was INSERTing varchar(100) values, committing after every single
INSERT. The sustained sequential write bandwidth of the harddisk is
irrelevant in that case. It's all about RPM.

I did a bunch of experiments to clear this matter up further; see
another posting.

Cheers,
Tobias

_______________________________________________
sapdb.general mailing list
[EMAIL PROTECTED]
http://listserv.sap.com/mailman/listinfo/sapdb.general
