At 13:49 16/03/01 -0500, Jan Wieck wrote:
>
>    Similar problem as with shared memory - size. If a long-running
>    backend of a multi-thousand-table database needs to send access
>    stats per table - and had accessed them all up to now - it'll be
>    a lot of wasted bandwidth.

Not if you only send totals for individual counters when they change; some
stats may never be resynced, but for the most part it will work. Also, does
Unix allow interrupts to occur as a result of data arriving in a pipe? If
so, how about the following (a couple of sketches follow the list):

- All backends to do *blocking* IO to the collector.

- Collector to receive an interrupt when a message arrives; while in the
interrupt it reads the pipe buffer into a local queue, and returns from the
interrupt (first sketch below).

- Main-line code processes the queue and writes it to a memory-mapped file
for durability (second sketch below).

- If the collector dies, the postmaster starts another immediately, which
clears the backlog of data in the pipe and then remaps the file.

- Each backend keeps its own local copy of its counters, which the
collector can *possibly* ask for when it restarts.
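
For what it's worth, here is a minimal sketch of the interrupt-on-pipe idea,
assuming BSD/Linux-style fcntl() semantics (F_SETOWN plus O_ASYNC delivers
SIGIO when data arrives on the read end); whether O_ASYNC works on pipes at
all is platform-dependent, and names like stats_pipe, local_queue and
on_sigio are invented for illustration, not taken from the backend code:

#define _GNU_SOURCE                     /* exposes O_ASYNC with glibc */
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int stats_pipe[2];               /* [0] read end (collector), [1] write end (backends) */
static char local_queue[65536];         /* crude stand-in for the collector's local queue */
static volatile size_t queued = 0;

/* SIGIO handler: drain whatever is in the pipe into the local queue. */
static void on_sigio(int signo)
{
    char    buf[4096];
    ssize_t n;

    (void) signo;
    while ((n = read(stats_pipe[0], buf, sizeof(buf))) > 0)
    {
        size_t room = sizeof(local_queue) - queued;
        size_t copy = (size_t) n < room ? (size_t) n : room;

        memcpy(local_queue + queued, buf, copy);
        queued += copy;
    }
}

int main(void)
{
    struct sigaction sa;

    if (pipe(stats_pipe) < 0)
        return 1;

    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_sigio;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGIO, &sa, NULL);

    /* Deliver SIGIO to this process when data arrives on the read end. */
    fcntl(stats_pipe[0], F_SETOWN, getpid());
    fcntl(stats_pipe[0], F_SETFL,
          fcntl(stats_pipe[0], F_GETFL) | O_ASYNC | O_NONBLOCK);

    /* Pretend to be a backend: an ordinary blocking write into the pipe. */
    write(stats_pipe[1], "rel 16384: +1 scan\n", 19);

    /* Main-line code: once the handler has queued something, process it
     * (here we just print it). */
    while (queued == 0)
        usleep(1000);
    printf("queued %lu bytes: %.*s",
           (unsigned long) queued, (int) queued, local_queue);
    return 0;
}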
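
And a similarly hedged sketch of the memory-mapped-file step the main-line
code would perform; the file name, size and function name here are made up,
but msync() is what actually forces the mapped pages out to disk:

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define STATS_FILE "pgstat.mmap"        /* hypothetical file name */
#define STATS_SIZE 65536

/* Copy the collector's local queue into a memory-mapped file and sync it. */
static void flush_queue_to_file(const char *queue, size_t queued)
{
    int   fd = open(STATS_FILE, O_RDWR | O_CREAT, 0600);
    char *map;

    if (fd < 0)
        return;
    if (ftruncate(fd, STATS_SIZE) == 0)
    {
        map = mmap(NULL, STATS_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (map != MAP_FAILED)
        {
            memcpy(map, queue, queued < STATS_SIZE ? queued : STATS_SIZE);
            msync(map, STATS_SIZE, MS_SYNC);    /* push the update to disk */
            munmap(map, STATS_SIZE);
        }
    }
    close(fd);
}

int main(void)
{
    const char sample[] = "rel 16384: +1 scan\n";

    flush_queue_to_file(sample, sizeof(sample) - 1);
    return 0;
}

A restarted collector could then mmap() the same file again to pick up the
last state that made it to disk, which is the "remaps the file" step above.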




----------------------------------------------------------------
Philip Warner                    |     __---_____
Albatross Consulting Pty. Ltd.   |----/       -  \
(A.B.N. 75 008 659 498)          |          /(@)   ______---_
Tel: (+61) 0500 83 82 81         |                 _________  \
Fax: (+61) 0500 83 82 82         |                 ___________ |
Http://www.rhyme.com.au          |                /           \|
                                 |    --________--
PGP key available upon request,  |  /
and from pgp5.ai.mit.edu:11371   |/
