Pavan Deolasee wrote:
> On 1/26/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> >
> >
> > I think what he's suggesting is deliberately not updating the hint bits
> > during a SELECT ...
>
>
> No, I was suggesting doing it in bgwriter so that we may not need to do that
> during a SELECT. Of course, we need to investigate more and have numbers to pr
Tom Lane wrote:
> Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> > I'd like to see still more evidence that it's a problem before we start
> > changing that piece of code. It has served us well for years.
>
> What I see here is mostly evidence suggesting that we should consider
> raising NUM_CLOG_BUFFERS, rather than anyt
On 1/26/07, Tom Lane <[EMAIL PROTECTED]> wrote:
I think what he's suggesting is deliberately not updating the hint bits
during a SELECT ...
No, I was suggesting doing it in bgwriter so that we may not need to do that
during a SELECT. Of course, we need to investigate more and have numbers to pr
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Pavan Deolasee wrote:
>> It would also be interesting to investigate whether early setting of
>> hint bits can reduce subsequent writes of blocks. A typical case would
>> be a large table being updated heavily for a while, followed by SELECT
>> queries.
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> I'd like to see still more evidence that it's a problem before we start
> changing that piece of code. It has served us well for years.
What I see here is mostly evidence suggesting that we should consider
raising NUM_CLOG_BUFFERS, rather than anyt
On 1/26/07, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
Maybe have the bgwriter update hint bits as it evicts pages out of the
cache? It could result in pg_clog read traffic for each page that needs
eviction; not such a hot idea.
I thought once we enhance clog so that there are no clog reads,
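To make the eviction idea concrete, here is a rough Python sketch (invented names, not PostgreSQL source) of what "bgwriter sets hint bits on eviction" could look like; the per-tuple clog lookup is exactly where Alvaro's worry about extra pg_clog traffic shows up:

```python
# Conceptual sketch only, not PostgreSQL source: the bgwriter resolves
# transaction status and records it as hint bits just before writing an
# evicted page, so later SELECTs on that page skip the clog lookup.

COMMITTED, ABORTED, IN_PROGRESS = "committed", "aborted", "in_progress"

# Simplified stand-ins for the t_infomask hint-bit flags
HEAP_XMIN_COMMITTED = 0x1
HEAP_XMIN_INVALID = 0x2

def set_hint_bits_on_eviction(page, clog):
    """Hint every unhinted tuple on an about-to-be-evicted page.
    Returns True if any bit was set (the page write picks it up)."""
    dirtied = False
    for tup in page["tuples"]:
        if tup["infomask"] & (HEAP_XMIN_COMMITTED | HEAP_XMIN_INVALID):
            continue  # already hinted, nothing to look up
        # This lookup is the cost Alvaro points out: it may force a
        # pg_clog read for every page that needs eviction.
        status = clog.get(tup["xmin"], IN_PROGRESS)
        if status == COMMITTED:
            tup["infomask"] |= HEAP_XMIN_COMMITTED
            dirtied = True
        elif status == ABORTED:
            tup["infomask"] |= HEAP_XMIN_INVALID
            dirtied = True
        # in-progress transactions stay unhinted; cannot decide yet
    return dirtied
```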
Pavan Deolasee wrote:
> On 1/26/07, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> >
> >Heikki Linnakangas wrote:
> >> I'd like to see still more evidence that it's a problem before we start
> >> changing that piece of code. It has served us well for years.
> >
> >So the TODO could be "investigate whether caching pg_clog and/or
> >pg_subtrans in local memory can be useful for vacuum performance".
On 1/26/07, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
Heikki Linnakangas wrote:
> I'd like to see still more evidence that it's a problem before we start
> changing that piece of code. It has served us well for years.
So the TODO could be "investigate whether caching pg_clog and/or
pg_subtrans in local memory can be useful for vacuum performance".
Heikki Linnakangas wrote:
> I'd like to see still more evidence that it's a problem before we start
> changing that piece of code. It has served us well for years.
So the TODO could be "investigate whether caching pg_clog and/or
pg_subtrans in local memory can be useful for vacuum performance".
I'd like to see still more evidence that it's a problem before we start
changing that piece of code. It has served us well for years.
Bruce Momjian wrote:
Is there a TODO here?
---
Heikki Linnakangas wrote:
Pavan Deolasee wrote:
Is there a TODO here?
---
Heikki Linnakangas wrote:
> Pavan Deolasee wrote:
> > Another simpler solution for VACUUM would be to read the entire CLOG file
> > in local memory. Most of the transaction status queries can be sat
Pavan Deolasee wrote:
Another simpler solution for VACUUM would be to read the entire CLOG file
in local memory. Most of the transaction status queries can be satisfied from
this local copy and the normal CLOG is consulted only when the status is
unknown (TRANSACTION_STATUS_IN_PROGRESS)
The clo
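A minimal sketch of that local-copy idea, in Python for brevity (the class and names are hypothetical; real pg_clog packs two status bits per xid):

```python
# Hypothetical sketch of Pavan's suggestion: VACUUM snapshots the CLOG
# once into backend-local memory, answers most status queries from the
# snapshot, and falls back to the shared CLOG only for transactions that
# were still in progress when the snapshot was taken.

class LocalClogCache:
    def __init__(self, shared_clog):
        self.snapshot = dict(shared_clog)  # one-time read at VACUUM start
        self.shared = shared_clog
        self.misses = 0                    # shared-CLOG consultations

    def status(self, xid):
        s = self.snapshot.get(xid, "in_progress")
        if s != "in_progress":
            return s  # satisfied locally, no shared CLOG traffic
        # status was unknown at snapshot time: consult the real CLOG
        self.misses += 1
        return self.shared.get(xid, "in_progress")
```

The appeal is that a table vacuumed long after its updates finished would resolve nearly every xid from the snapshot, leaving `misses` near zero.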
On 1/25/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
Pavan Deolasee wrote:
>
> Also is it worth optimizing on the total read() system calls which might not
> cause physical I/O, but still consume CPU?
I don't think it's worth it, but now that we're talking about it: What
I'd like to d
Pavan Deolasee wrote:
On 1/24/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
Hmm. So there is some activity there. Could you modify the patch to
count how many of those reads came from OS cache? I'm thinking of doing
a gettimeofday() call before and after read, and counting how many
calls fin
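The measurement Heikki describes could be prototyped along these lines (a Python stand-in for the C patch; the 1 ms cache-hit threshold is a guess that would need tuning per machine):

```python
# Sketch of the proposed measurement: time each read() and count the ones
# fast enough to have been served from the OS cache rather than disk.
import os
import time

CACHE_HIT_THRESHOLD = 0.001  # 1 ms: assumed cutoff, tune for the hardware

def timed_read(fd, nbytes):
    start = time.monotonic()  # stand-in for the gettimeofday() pair
    data = os.read(fd, nbytes)
    elapsed = time.monotonic() - start
    return data, elapsed < CACHE_HIT_THRESHOLD

def count_cached_reads(path, blocksize=8192):
    """Read a file block by block; return (likely_cache_hits, total_reads)."""
    hits = total = 0
    fd = os.open(path, os.O_RDONLY)
    try:
        while True:
            data, cached = timed_read(fd, blocksize)
            if not data:
                break
            total += 1
            hits += cached
    finally:
        os.close(fd)
    return hits, total
```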
On 1/24/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
Pavan Deolasee wrote:
> I have just counted the number of read/write calls on the CLOG blocks. As
> you can see, the total number of CLOG reads jumped from 545323 to 1181851,
> i.e. 1181851 - 545323 = 636528 CLOG block reads for 1554697 pages of stock
> table.
On Wed, 2007-01-24 at 09:32 +0530, Pavan Deolasee wrote:
> On a typical desktop class 2 CPU Dell machine, we have seen pgbench
> clocking more than 1500 tps. That implies CLOG would get filled up in less
> than 262144/1500 = 174 seconds. VACUUM on accounts table takes much
> longer to trigger.
Y
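The quoted arithmetic, spelled out:

```python
# Reproducing the back-of-the-envelope figure from the message above:
# at ~1500 tps, 262144 transactions' worth of CLOG status accumulates
# in under three minutes, long before VACUUM reaches the table.
transactions = 262144  # the CLOG capacity figure used in the message
tps = 1500             # the observed pgbench throughput
seconds_to_fill = transactions // tps
print(seconds_to_fill)  # 174, matching the post
```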
Pavan Deolasee wrote:
I have just counted the number of read/write calls on the CLOG blocks. As you
can see, the total number of CLOG reads jumped from 545323 to 1181851, i.e.
1181851 - 545323 = 636528 CLOG block reads for 1554697 pages of stock table.
Hmm. So there is some activity there. Could
On 1/24/07, Tom Lane <[EMAIL PROTECTED]> wrote:
"Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> On a typical desktop class 2 CPU Dell machine, we have seen pgbench
> clocking more than 1500 tps.
Only if you had fsync off, or equivalently a disk drive that lies about
write-complete. You could po
"Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> On a typical desktop class 2 CPU Dell machine, we have seen pgbench
> clocking more than 1500 tps.
Only if you had fsync off, or equivalently a disk drive that lies about
write-complete. You could possibly achieve such rates in a non-broken
configura
On 1/23/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
Pavan Deolasee wrote:
> Another source of I/O is perhaps the CLOG read/writes for checking
> transaction status. If we are talking about large tables like accounts in
> pgbench or customer/stock in DBT2, the tables are vacuumed much later
On 1/23/07, Tom Lane <[EMAIL PROTECTED]> wrote:
"Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> Would it help to set the status of the XMIN/XMAX of tuples early enough such
> that the heap page is still in the buffer cache, but late enough such that
> the XMIN/XMAX transactions are finished? How
"Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> Would it help to set the status of the XMIN/XMAX of tuples early enough such
> that the heap page is still in the buffer cache, but late enough such that
> the XMIN/XMAX transactions are finished? How about doing it when the
> bgwriter is about to wri
Pavan Deolasee wrote:
Another source of I/O is perhaps the CLOG read/writes for checking
transaction status. If we are talking about large tables like accounts in
pgbench or customer/stock in DBT2, the tables are vacuumed much later than
the actual UPDATEs. I don't have any numbers to prove yet,
On 1/22/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
I've been looking at the way we do vacuums.
The fundamental performance issue is that a vacuum generates
nheapblocks+nindexblocks+ndirtyblocks I/Os. Vacuum cost delay helps to
spread the cost like part payment, but the total is the same.
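The cost statement reduces to a tiny model (illustrative only): cost delay changes the pacing, not the total.

```python
# Toy model of the point above: vacuum cost delay spreads the same total
# I/O over more wall-clock time; it does not reduce the total.
def vacuum_total_ios(nheapblocks, nindexblocks, ndirtyblocks):
    # total I/Os a vacuum performs, per the formulation in the message
    return nheapblocks + nindexblocks + ndirtyblocks

def rounds_with_cost_delay(total_ios, ios_per_round):
    # ceiling division: how many work/sleep rounds the delay slices it into
    return -(-total_ios // ios_per_round)
```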
Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
> Vacuum is done in 3 phases:
> 1. Scan heap
> 2. Vacuum index
> 3. Vacuum heap
> A variation of the scheme would be to keep scanning pages that are in
> cache, until the tid list reaches a predefined size, instead of keeping
> track of which pages
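A sketch of that variation, with all names illustrative: keep scanning and collecting dead tids, and run the index and heap vacuum phases per batch whenever the list hits its cap.

```python
# Illustrative sketch of the batched variation of vacuum's 3 phases:
# phase 1 (scan heap) accumulates dead tids until a size cap, then
# phases 2 (vacuum index) and 3 (vacuum heap) run for just that batch,
# and scanning resumes where it left off.
def batched_vacuum(heap_pages, max_tids, vacuum_index, vacuum_heap):
    dead_tids = []
    for pageno, page in enumerate(heap_pages):
        for lineno, tuple_is_dead in enumerate(page):
            if tuple_is_dead:
                dead_tids.append((pageno, lineno))
        if len(dead_tids) >= max_tids:  # tid list reached the cap
            vacuum_index(dead_tids)      # phase 2 for this batch
            vacuum_heap(dead_tids)       # phase 3 for this batch
            dead_tids.clear()
    if dead_tids:                        # final partial batch
        vacuum_index(dead_tids)
        vacuum_heap(dead_tids)
```

The cap bounds the memory for the tid list while still amortizing index passes over many heap pages.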
On Mon, Jan 22, 2007 at 02:51:47PM +, Heikki Linnakangas wrote:
> I've been looking at the way we do vacuums.
>
> The fundamental performance issue is that a vacuum generates
> nheapblocks+nindexblocks+ndirtyblocks I/Os. Vacuum cost delay helps to
> spread the cost like part payment, but the
On Mon, 2007-01-22 at 13:41 +, Heikki Linnakangas wrote:
> Any thoughts before I start experimenting?
Probably only to detail the various use cases we are discussing.
My thoughts on various use cases are:
- small table with frequent update/delete, heap and indexes all/mostly
cached
e.g. Coun