Grzegorz Jaskiewicz <[EMAIL PROTECTED]> writes:
> On Mar 5, 2007, at 2:36 AM, Tom Lane wrote:
>> I'm also less than convinced that it'd be helpful for a big seqscan:
>> won't reading a new disk page into memory via DMA cause that memory to
>> get flushed from the processor cache anyway?
> Nope. DMA is writing directly into main memory.
> So either way, it isn't in processor cache after the read.
> So how can there be any performance benefit?
It's the copy from kernel IO cache to the buffer cache that is L2
sensitive. When the shared buffer cache is polluted, it thrashes the L2
cache. When the number of pages being written to
Hi Tom,
> Now this may only prove that the disk subsystem on this
> machine is too cheap to let the system show any CPU-related
> issues.
Try it with a warm IO cache. As I posted before, we see double the
performance of a VACUUM from a table in IO cache when the shared buffer
cache isn't bein
On Mar 5, 2007, at 2:36 AM, Tom Lane wrote:
n into account.
I'm also less than convinced that it'd be helpful for a big seqscan:
won't reading a new disk page into memory via DMA cause that memory to
get flushed from the processor cache anyway?
Nope. DMA is writing directly into main memory.
"Luke Lonergan" <[EMAIL PROTECTED]> writes:
>> So either way, it isn't in processor cache after the read.
>> So how can there be any performance benefit?
> It's the copy from kernel IO cache to the buffer cache that is L2
> sensitive. When the shared buffer cache is polluted, it thrashes the L2
Hi Tom,
> Even granting that your conclusions are accurate, we are not
> in the business of optimizing Postgres for a single CPU architecture.
I think you're missing my/our point:
The Postgres shared buffer cache algorithm appears to have a bug. When
there is a sequential scan the blocks are
Luke Lonergan wrote:
The Postgres shared buffer cache algorithm appears to have a bug. When
there is a sequential scan the blocks are filling the entire shared
buffer cache. This should be "fixed".
My proposal for a fix: ensure that when relations larger (much larger?)
than buffer cache are sc
On one fine day, Mon, 2007-03-05 at 03:51, Luke Lonergan wrote:
> Hi Tom,
>
> > Even granting that your conclusions are accurate, we are not
> > in the business of optimizing Postgres for a single CPU architecture.
>
> I think you're missing my/our point:
>
> The Postgres shared buffer ca
"Luke Lonergan" <[EMAIL PROTECTED]> writes:
> I think you're missing my/our point:
> The Postgres shared buffer cache algorithm appears to have a bug. When
> there is a sequential scan the blocks are filling the entire shared
> buffer cache. This should be "fixed".
No, this is not a bug; it is
* Tom Lane:
> That makes absolutely zero sense. The data coming from the disk was
> certainly not in processor cache to start with, and I hope you're not
> suggesting that it matters whether the *target* page of a memcpy was
> already in processor cache. If the latter, it is not our bug to fix.
On one fine day, Mon, 2007-03-05 at 04:15, Tom Lane wrote:
> "Luke Lonergan" <[EMAIL PROTECTED]> writes:
> > I think you're missing my/our point:
>
> > The Postgres shared buffer cache algorithm appears to have a bug. When
> > there is a sequential scan the blocks are filling the entire shar
Hi
Thanks for a lot of feedback and good ideas on the restartable vacuum.
Here is a new design overview of it based on previous discussions.
There are several ideas to address the problem of long running VACUUM
in a defined maintenance window. One idea might be: when maintenance
time is running
> > The Postgres shared buffer cache algorithm appears to have a bug.
> > When there is a sequential scan the blocks are filling the entire
> > shared buffer cache. This should be "fixed".
>
> No, this is not a bug; it is operating as designed. The
> point of the current bufmgr algorithm
Gavin Sherry wrote:
On Mon, 5 Mar 2007, Mark Kirkwood wrote:
To add a little to this - forgetting the scan resistant point for the
moment... cranking down shared_buffers to be smaller than the L2 cache
seems to help *any* sequential scan immensely, even on quite modest HW:
(snipped)
When I'v
Hi Mark,
> lineitem has 1535724 pages (11997 MB)
>
> Shared Buffers  Elapsed  IO rate (from vmstat)
> --------------  -------  ---------------------
> 400MB           101 s    122 MB/s
> 2MB             100 s
> 1MB             97 s
> 768KB           93 s
> 512KB           86 s
> 256KB
"Luke Lonergan" <[EMAIL PROTECTED]> writes:
> The evidence seems to clearly indicate reduced memory writing due to an
> L2 related effect.
You might try using valgrind's cachegrind tool which I understand can actually
emulate various processors' cache to show how efficiently code uses it. I
hav
"Jim C. Nasby" <[EMAIL PROTECTED]> wrote:
> > * Aggressive freezing
> > we will use OldestXmin as the threshold to freeze tuples in
> > dirty pages or pages that have some dead tuples. Or, many UNFROZEN
> > pages still remain after vacuum and they will cost us in the next
> > vacuum preventing XID
Hi all,
I'd like to see the indexam API changes needed by the bitmap indexam to
be committed soon. Has anyone looked at the proposed API in the latest
patch? Any thoughts?
I'm quite happy with it myself, with a few reservations:
- All the getbitmap implementations except the new bitmap index
Heikki,
On Mon, 5 Mar 2007, Heikki Linnakangas wrote:
> Hi all,
>
> I'd like to see the indexam API changes needed by the bitmap indexam to
> be committed soon. Has anyone looked at the proposed API in the latest
> patch? Any thoughts?
Thanks for looking at it!
>
> I'm quite happy with it mysel
Is there any plan for supporting XQuery or XPath in 8.3?
--
Tatsuo Ishii
SRA OSS, Inc. Japan
On 3/5/07, Tatsuo Ishii <[EMAIL PROTECTED]> wrote:
Is there any plan for supporting XQuery or XPath in 8.3?
I've submitted patch for simple XPath 1.0 support (based on libxml2):
http://archives.postgresql.org/pgsql-patches/2007-03/msg00088.php
This function does XML parsing at query time. So,
From: "Nikolay Samokhvalov" <[EMAIL PROTECTED]>
Subject: [HACKERS] Re: [HACKERS] XQuery or XPath support
Date: Mon, 5 Mar 2007 14:51:43 +0300
Message-ID: <[EMAIL PROTECTED]>
> On 3/5/07, Tatsuo Ishii <[EMAIL PROTECTED]> wrote:
> > Is there any plan for supporting XQuery or XPath in 8.3?
>
> I've subm
On 3/5/07, Tatsuo Ishii <[EMAIL PROTECTED]> wrote:
From: "Nikolay Samokhvalov" <[EMAIL PROTECTED]>
> I've submitted patch for simple XPath 1.0 support (based on libxml2):
> http://archives.postgresql.org/pgsql-patches/2007-03/msg00088.php
But contrib/README.xml2 stated:
"This version of the XML
SE-PostgreSQL 8.2.3-1.0 alpha was released as follows.
The purpose of this version is to get any feedback from the open source
community like requirements, your opinion, bug reports and so on.
The developer welcomes anything to improve.
> On 3/5/07, Tatsuo Ishii <[EMAIL PROTECTED]> wrote:
> > From: "Nikolay Samokhvalov" <[EMAIL PROTECTED]>
> > > I've submitted patch for simple XPath 1.0 support (based on libxml2):
> > > http://archives.postgresql.org/pgsql-patches/2007-03/msg00088.php
> >
> > But contrib/README.xml2 stated:
> >
>
On 3/5/07, Tatsuo Ishii <[EMAIL PROTECTED]> wrote:
The XPath support is 1.0 or 2.0?
1.0
--
Best regards,
Nikolay
Hi,
What is the opinion of the list as to the best way of measuring if the
following implementation is ok?
http://archives.postgresql.org/pgsql-hackers/2007-01/msg00752.php
As mentioned in earlier mails, this will reduce the per-backend usage of
memory by an amount which will be a fraction (sin
Overview
CREATE INDEX, CREATE INDEX CONCURRENTLY and VACUUM FULL all need some
adaptation to work correctly with HOT.
[This summary and proposal supersedes all previous proposals by me
regarding utilities with HOT]
The Problem
---
With HOT, CREATE INDEX may find tuples that are
On Fri, 2007-03-02 at 21:53 -0500, Bruce Momjian wrote:
> Simon Riggs wrote:
>
> > It would also be very useful to have a version of pgstattuple that
> > worked with heaps, so test cases can be written that examine the header
> > fields, info flags etc. It would be useful to be able to specify th
Simon Riggs wrote:
>
> - VACUUM FULL - The best solution, for now, is to make VACUUM FULL
> perform a reindex on all indexes on the table. Chilling may require us
> to modify considerably more index entries than previously. UPDATE & WAIT
> would be very good, but probably should wait for the next
NikhilS <[EMAIL PROTECTED]> writes:
> What is the opinion of the list as to the best way of measuring if the
> following implementation is ok?
> http://archives.postgresql.org/pgsql-hackers/2007-01/msg00752.php
> As mentioned in earlier mails, this will reduce the per-backend usage of
> memory by a
Hello
This proposal is about access management to custom variables. Currently any
user can modify them, and there is no way to protect a value:
Premises:
* variables are controlled from modules
* syntax of custom variables is without changes
* all modules are safe
Functions:
* reset_custom_variable(cu
A closer reading, however, shows that at least for cases like intarray,
btree_gist, etc., the detoasting of an index value is being done in the
gist decompress function, so the value seen via GISTENTRY in the other
functions should already have been detoasted once.
Right, any stored value form i
On Mon, 2007-03-05 at 21:39 +0530, Pavan Deolasee wrote:
> Simon Riggs wrote:
> >
> > - VACUUM FULL - The best solution, for now, is to make VACUUM FULL
> > perform a reindex on all indexes on the table. Chilling may require us
> > to modify considerably more index entries than previously. UPDA
"Simon Riggs" <[EMAIL PROTECTED]> writes:
> The first function reads a single block from a file, returning the
> complete page as a bytea of length BLCKSZ.
> CREATE OR REPLACE FUNCTION bufpage_get_raw_page(text, int4)
> RETURNS bytea ...
Directly from the file? What if the version in buffers i
Mark Kirkwood <[EMAIL PROTECTED]> writes:
> Shared Buffers  Elapsed  IO rate (from vmstat)
> --------------  -------  ---------------------
> 400MB           101 s    122 MB/s
> 2MB             100 s
> 1MB             97 s
> 768KB           93 s
> 512KB           86 s
> 256KB           77 s
> 1
Simon Riggs wrote:
On Mon, 2007-03-05 at 21:39 +0530, Pavan Deolasee wrote:
Currently each tuple is moved individually. You'd need to inspect the
whole HOT chain on a page, calculate space for that and then try to move
them all in one go. I was originally thinking that would be a problem,
but
On Mon, 2007-03-05 at 11:39 -0500, Tom Lane wrote:
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
> > The first function reads a single block from a file, returning the
> > complete page as a bytea of length BLCKSZ.
> > CREATE OR REPLACE FUNCTION bufpage_get_raw_page(text, int4)
> > RETURNS bytea .
Tom Lane wrote:
Mark Kirkwood <[EMAIL PROTECTED]> writes:
Shared Buffers  Elapsed  IO rate (from vmstat)
--------------  -------  ---------------------
400MB           101 s    122 MB/s
2MB             100 s
1MB             97 s
768KB           93 s
512KB           86 s
256KB           77
Hi Tom,
On 3/5/07 8:53 AM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> Hm, that seems to blow the "it's an L2 cache effect" theory out of the
> water. If it were a cache effect then there should be a performance
> cliff at the point where the cache size is exceeded. I see no such
> cliff, in fact t
Simon Riggs wrote:
The main point is to get a set of functions that can be used directly in
additional regression tests as well as diagnostics. ISTM we need to
*prove* HOT works, not just claim it. I'm very open to different
approaches as to how we might do this.
Functions to support regre
On Mon, 2007-03-05 at 12:29 -0500, Andrew Dunstan wrote:
> Simon Riggs wrote:
> >
> > The main point is to get a set of functions that can be used directly in
> > additional regression tests as well as diagnostics. ISTM we need to
> > *prove* HOT works, not just claim it. I'm very open to different
"Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> Isn't the size of the shared buffer pool itself acting as a performance
> penalty in this case ? May be StrategyGetBuffer() needs to make multiple
> passes over the buffers before the usage_count of any buffer is reduced
> to zero and the buffer is cho
Tom Lane wrote:
> NikhilS <[EMAIL PROTECTED]> writes:
>> What is the opinion of the list as to the best way of measuring if the
>> following implementation is ok?
>> http://archives.postgresql.org/pgsql-hackers/2007-01/msg00752.php
>> As mentioned in earlier mails, this will reduce the per-backend
ITAGAKI Takahiro <[EMAIL PROTECTED]> writes:
> This is a stand-alone patch for aggressive freezing. I'll propose
> to use OldestXmin instead of FreezeLimit as the freeze threshold
> in the circumstances below:
I think it's a really bad idea to freeze that aggressively under any
circumstances excep
Tom,
> Yes, autovacuum is off, and bgwriter shouldn't have anything useful to
> do either, so I'm a bit at a loss what's going on --- but in any case,
> it doesn't look like we are cycling through the entire buffer space
> for each fetch.
I'd be happy to DTrace it, but I'm a little lost as to whe
Tom,
On 3/5/07 8:53 AM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> Hm, that seems to blow the "it's an L2 cache effect" theory out of the
> water. If it were a cache effect then there should be a performance
> cliff at the point where the cache size is exceeded. I see no such
> cliff, in fact the
I wrote:
> "Pavan Deolasee" <[EMAIL PROTECTED]> writes:
>> Isn't the size of the shared buffer pool itself acting as a performance
>> penalty in this case ? May be StrategyGetBuffer() needs to make multiple
>> passes over the buffers before the usage_count of any buffer is reduced
>> to zero and th
On Sun, 2007-03-04 at 11:54 +, Simon Riggs wrote:
> > (2) sync_scan_offset: Start a new scan this many pages before a
> > currently running scan to take advantage of the pages
> > that are likely already in cache.
>
> I'm somewhat dubious about this parameter, I have to say, even though I
> a
Here's four more points on the curve - I'd use a "Dirac delta function" for
your curve fit ;-)
Shared_buffers  Select Count  Vacuum
(KB)            (s)           (s)
==============  ============  ======
248             5.52          2.46
368             4.77          2.40
552
Tom,
> I seem to recall that we've previously discussed the idea of letting the
> clock sweep decrement the usage_count before testing for 0, so that a
> buffer could be reused on the first sweep after it was initially used,
> but that we rejected it as being a bad idea. But at least with large
>
Tom Lane wrote:
Nope, Pavan's nailed it: the problem is that after using a buffer, the
seqscan leaves it with usage_count = 1, which means it has to be passed
over once by the clock sweep before it can be re-used. I was misled in
the 32-buffer case because catalog accesses during startup had le
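The clock-sweep behaviour Tom describes can be sketched with a toy model (names and structure are illustrative, not the actual bufmgr code): a buffer a seqscan just touched has usage_count = 1, so the sweep must pass and decrement it once before it becomes a victim.

```python
# Toy model of the clock-sweep buffer replacement discussed above.
# A buffer touched by a seqscan is left with usage_count = 1, so the
# hand must pass it once (decrementing to 0) before it can be reused.

class ClockSweep:
    def __init__(self, nbuffers):
        self.usage = [0] * nbuffers   # usage_count per buffer
        self.hand = 0                 # clock hand position

    def touch(self, buf):
        # Unpinning a used buffer bumps usage_count (capped in real code).
        self.usage[buf] += 1

    def get_victim(self):
        # Sweep until a buffer with usage_count == 0 is found,
        # decrementing counts as the hand passes over them.
        skipped = 0
        while True:
            buf = self.hand
            self.hand = (self.hand + 1) % len(self.usage)
            if self.usage[buf] == 0:
                return buf, skipped
            self.usage[buf] -= 1
            skipped += 1

pool = ClockSweep(8)
for b in range(8):        # a seqscan touches every buffer once
    pool.touch(b)
victim, skipped = pool.get_victim()
print(victim, skipped)    # 0 8
```

The point of the model: with N buffers all at usage_count = 1, the next read pays an O(N) sweep all at once, then O(1) for a while, matching the nonuniform cost described later in the thread.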
"Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> I am wondering whether seqscan would set the usage_count to 1 or to a higher
> value. usage_count is incremented while unpinning the buffer. Even if
> we use
> page-at-a-time mode, won't the buffer itself would get pinned/unpinned
> every time seqsca
"Tom Lane" <[EMAIL PROTECTED]> writes:
> I seem to recall that we've previously discussed the idea of letting the
> clock sweep decrement the usage_count before testing for 0, so that a
> buffer could be reused on the first sweep after it was initially used,
> but that we rejected it as being a b
Tom Lane wrote:
ITAGAKI Takahiro <[EMAIL PROTECTED]> writes:
This is a stand-alone patch for aggressive freezing. I'll propose
to use OldestXmin instead of FreezeLimit as the freeze threshold
in the circumstances below:
I think it's a really bad idea to freeze that aggressively under any
circu
Florian G. Pflug wrote:
There could be a GUC vacuum_freeze_limit, and the actual FreezeLimit
would be calculated as
GetOldestXmin() - vacuum_freeze_limit
We already have that. It's called vacuum_freeze_min_age, and the default
is 100 million transactions.
IIRC we added it late in the 8.2 re
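The threshold arithmetic being discussed can be sketched as follows; this is a simplification (real XID comparison treats the special low XIDs differently), and the function name is illustrative, not a backend symbol.

```python
# Sketch of the freeze-threshold arithmetic above: the limit is
# OldestXmin minus vacuum_freeze_min_age, modulo 32-bit XID wraparound.
# Simplified: real code also special-cases the first few reserved XIDs.

VACUUM_FREEZE_MIN_AGE = 100_000_000   # 8.2 default: 100 million xacts
XID_MOD = 2 ** 32                      # XIDs are 32-bit counters

def freeze_limit(oldest_xmin, min_age=VACUUM_FREEZE_MIN_AGE):
    # Tuples with xmin older than this limit are frozen by VACUUM.
    return (oldest_xmin - min_age) % XID_MOD

print(freeze_limit(250_000_000))   # 150000000
```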
I am taking vacation time March 7-17 and will be offline for that
period. Tom, Neil, and others will be handling patches during that
time. However, they are not able to update the patch queue.
When I return to email, I will process all outstanding requests well
before feature freeze April 1.
--
Heikki Linnakangas wrote:
Florian G. Pflug wrote:
There could be a GUC vacuum_freeze_limit, and the actual FreezeLimit
would be calculated as
GetOldestXmin() - vacuum_freeze_limit
We already have that. It's called vacuum_freeze_min_age, and the default
is 100 million transactions.
IIRC we
On Mon, 2007-03-05 at 10:46 -0800, Josh Berkus wrote:
> Tom,
>
> > I seem to recall that we've previously discussed the idea of letting the
> > clock sweep decrement the usage_count before testing for 0, so that a
> > buffer could be reused on the first sweep after it was initially used,
> > but t
"Simon Riggs" <[EMAIL PROTECTED]> writes:
> Itagaki-san and I were discussing in January the idea of cache-looping,
> whereby a process begins to reuse its own buffers in a ring of ~32
> buffers. When we cycle back round, if usage_count==1 then we assume that
> we can reuse that buffer. This avoid
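The cache-looping idea can be illustrated with a small sketch (buffer ids, the allocation callback, and the usage_count test here are all simplified placeholders, not the proposed implementation):

```python
# Illustrative sketch of the "ring of ~32 buffers" idea above: a seqscan
# cycles through a small private ring instead of flooding the whole
# shared pool, falling back to a fresh buffer if someone else pinned one.

RING_SIZE = 32

class ScanRing:
    def __init__(self):
        self.ring = []      # buffer ids owned by this scan
        self.next = 0       # slot to reuse on wraparound

    def buffer_for_next_page(self, alloc_fresh, usage_count):
        # Fill the ring first; afterwards cycle through it, reusing a
        # slot only if nobody else bumped its usage_count above 1.
        if len(self.ring) < RING_SIZE:
            buf = alloc_fresh()
            self.ring.append(buf)
            return buf
        buf = self.ring[self.next]
        if usage_count(buf) > 1:
            # Someone else is using it: drop it from the ring, grab fresh.
            buf = alloc_fresh()
            self.ring[self.next] = buf
        self.next = (self.next + 1) % RING_SIZE
        return buf

counter = iter(range(10_000))
ring = ScanRing()
bufs = [ring.buffer_for_next_page(lambda: next(counter), lambda b: 1)
        for _ in range(100)]
print(len(set(bufs)))   # 32: a 100-page scan touches only 32 buffers
```

The design payoff is exactly the L2 argument above: the working set of the copy loop stays small enough to remain cache-resident.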
This sounds like a good idea.
- Luke
Msg is shrt cuz m on ma treo
-----Original Message-----
From: Simon Riggs [mailto:[EMAIL PROTECTED]
Sent: Monday, March 05, 2007 02:37 PM Eastern Standard Time
To: Josh Berkus; Tom Lane; Pavan Deolasee; Mark Kirkwood; Gavin Sherry;
Luke Lonergan; PG
I'm a bit embarrassed to bring this up here because I don't know much
about storage layout and indexing. It's probably a silly notion, but if
so, could someone please tell me how and why?
First I'll describe the situation that leads me to write this. I'm seeing
some performance problems in an ap
On Mon, 2007-03-05 at 14:41 -0500, Tom Lane wrote:
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
> > Itagaki-san and I were discussing in January the idea of cache-looping,
> > whereby a process begins to reuse its own buffers in a ring of ~32
> > buffers. When we cycle back round, if usage_count==1
On Mon, 2007-03-05 at 03:51 -0500, Luke Lonergan wrote:
> The Postgres shared buffer cache algorithm appears to have a bug. When
> there is a sequential scan the blocks are filling the entire shared
> buffer cache. This should be "fixed".
>
> My proposal for a fix: ensure that when relations lar
On Mon, 2007-03-05 at 11:10 +0200, Hannu Krosing wrote:
> > My proposal for a fix: ensure that when relations larger (much larger?)
> > than buffer cache are scanned, they are mapped to a single page in the
> > shared buffer cache.
>
> How will this approach play together with synchronized scan pa
Jeroen T. Vermeulen wrote:
[Q: Is there some other transparent optimization for values that correlate
with insertion/update order?]
So I was wondering whether it would make sense to have a more compact kind
of index. One that partitions the value range of a given column into
sub-ranges, and for
"Simon Riggs" <[EMAIL PROTECTED]> writes:
> Best way is to prove it though. Seems like not too much work to have a
> private ring data structure when the hint is enabled. The extra
> bookkeeping is easily going to be outweighed by the reduction in mem->L2
> cache fetches. I'll do it tomorrow, if no
On Mon, 2007-03-05 at 09:09 +, Heikki Linnakangas wrote:
> In fact, the pages that are left in the cache after the seqscan finishes
> would be useful for the next seqscan of the same table if we were smart
> enough to read those pages first. That'd make a big difference for
> seqscanning a t
"Pavel Stehule" <[EMAIL PROTECTED]> writes:
> * reset_custom_variable(cusvar); ... set default from postgresql.conf
> * revoke_custom_variable(READ|MODIFY, cusvar, roleid);
> * grant_custom_variable(READ|MODIFY, cusvar, roleid);
This seems pointlessly complex. An unprivileged user can only SET t
Jeff Davis <[EMAIL PROTECTED]> writes:
> Absolutely. I've got a parameter in my patch "sync_scan_offset" that
> starts a seq scan N pages before the position of the last seq scan
> running on that table (or a current seq scan if there's still a scan
> going).
Strikes me that expressing that param
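The sync_scan_offset behaviour Jeff describes reduces to a small wraparound calculation; this sketch is illustrative only (the function name is not from the patch):

```python
# Sketch of the sync_scan_offset idea above: a new scan starts N pages
# before the reported position of an in-progress scan on the same table,
# hoping those pages are still in cache, wrapping around the relation.

def synced_start_page(last_scan_pos, nblocks, offset):
    # Wrap around block 0 if the offset crosses the start of the file.
    return (last_scan_pos - offset) % nblocks

# A 1000-block table; another scan last reported block 40, so an offset
# of 100 pages wraps to near the end of the relation.
print(synced_start_page(40, 1000, 100))   # 940
print(synced_start_page(500, 1000, 100))  # 400
```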
On Tue, March 6, 2007 03:17, Heikki Linnakangas wrote:
> I think you've just described a range-encoded bitmap index. The idea is
> to divide the range of valid values into a some smallish number of
> subranges, and for each of these boundary values you store a bitmap
> where you set the bit repres
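A toy version of the range-encoded bitmap index Heikki describes, assuming the common "less-than-or-equal" encoding (class and method names are illustrative):

```python
# Toy range-encoded bitmap index: the value range is split into a few
# subranges, and for each boundary we keep a bitmap of rows whose value
# is <= that boundary. "value <= x" then needs exactly one bitmap.

import bisect

class RangeBitmapIndex:
    def __init__(self, values, boundaries):
        self.nrows = len(values)
        self.boundaries = sorted(boundaries)
        # bitmaps[i] holds the row ids whose value is <= boundaries[i]
        self.bitmaps = [set() for _ in self.boundaries]
        for row, v in enumerate(values):
            for i, b in enumerate(self.boundaries):
                if v <= b:
                    self.bitmaps[i].add(row)

    def rows_le(self, x):
        # Candidate rows for "value <= x": bitmap of the smallest
        # boundary >= x (exact when x is a boundary value, otherwise
        # a superset that needs a recheck against the heap).
        i = bisect.bisect_left(self.boundaries, x)
        if i == len(self.boundaries):
            return set(range(self.nrows))   # x is above all boundaries
        return self.bitmaps[i]

values = [5, 42, 17, 99, 63, 8]
idx = RangeBitmapIndex(values, boundaries=[25, 50, 75, 100])
print(sorted(idx.rows_le(25)))   # [0, 2, 5]
```

A range predicate `a < value <= b` is then the set difference of two boundary bitmaps, which is what makes the encoding attractive for values that correlate with insertion order.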
On Mon, 2007-03-05 at 15:30 -0500, Tom Lane wrote:
> Jeff Davis <[EMAIL PROTECTED]> writes:
> > Absolutely. I've got a parameter in my patch "sync_scan_offset" that
> > starts a seq scan N pages before the position of the last seq scan
> > running on that table (or a current seq scan if there's sti
"Simon Riggs" <[EMAIL PROTECTED]> writes:
> The earlier objections to AdminPack were about functions that write to
> files. These functions just read data, not write them. So there's no
> objection there, AFAICS.
Au contraire, both reading and writing are issues. But I had
misunderstood your orig
On Mar 3, 2007, at 23:19 , Robert Treat wrote:
A similar idea we've been kicking around would be having a set storage
parameter = nologging option for alter table which would, as it's name
implies, cause the system to ignore writing wal logs for the table,
much like
it does for temp tables n
"Pavel Stehule" <[EMAIL PROTECTED]> writes:
> * reset_custom_variable(cusvar); ... set default from postgresql.conf
> * revoke_custom_variable(READ|MODIFY, cusvar, roleid);
> * grant_custom_variable(READ|MODIFY, cusvar, roleid);
This seems pointlessly complex. An unprivileged user can only SE
Jeff Davis <[EMAIL PROTECTED]> writes:
> On Mon, 2007-03-05 at 15:30 -0500, Tom Lane wrote:
>> Strikes me that expressing that parameter as a percentage of
>> shared_buffers might make it less in need of manual tuning ...
> The original patch was a percentage of effective_cache_size, because in
>
Jeff Davis wrote:
On Mon, 2007-03-05 at 15:30 -0500, Tom Lane wrote:
Jeff Davis <[EMAIL PROTECTED]> writes:
Absolutely. I've got a parameter in my patch "sync_scan_offset" that
starts a seq scan N pages before the position of the last seq scan
running on that table (or a current seq scan if the
Tom Lane wrote:
So the
problem is not so much the clock sweep overhead as that it's paid in a
very nonuniform fashion: with N buffers you pay O(N) once every N reads
and O(1) the rest of the time. This is no doubt slowing things down
enough to delay that one read, instead of leaving it nicely I
Mark Kirkwood <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Mark, can you detect "hiccups" in the read rate using
>> your setup?
> I think so, here's the vmstat output for 400MB of shared_buffers during
> the scan:
Hm, not really a smoking gun there. But just for grins, would you try
this pat
Pavel Stehule wrote:
"Pavel Stehule" <[EMAIL PROTECTED]> writes:
> * reset_custom_variable(cusvar); ... set default from postgresql.conf
> * revoke_custom_variable(READ|MODIFY, cusvar, roleid);
> * grant_custom_variable(READ|MODIFY, cusvar, roleid);
This seems pointlessly complex. An unprivi
ISTM you are trying to do too much. We need to get the base functionality,
as described by Tom in the thread I referred you to, working first. Extra
stuff could be added later if necessary.
cheers
I don't want to build a cathedral. Now is the time for discussion, no? I am
collecting any arguments.
Tom Lane wrote:
Hm, not really a smoking gun there. But just for grins, would you try
this patch and see if the numbers change?
Applied to 8.2.3 (don't have lineitem loaded in HEAD yet) - no change
that I can see:
procs ---memory-- ---swap-- -io --system--
cp
Mark Kirkwood <[EMAIL PROTECTED]> writes:
> Elapsed time is exactly the same (101 s). Is is expected that HEAD would
> behave differently?
Offhand I don't think so. But what I wanted to see was the curve of
elapsed time vs shared_buffers?
regards, tom lane
Pavel Stehule wrote:
ISTM you are trying to do too much. We need to get the base
functionality, as described by Tom in the thread I referred you to,
working first. Extra stuff could be added later if necessary.
cheers
I don't want to build a cathedral. Now is the time for discussion, no? I am
On Mon, 2007-03-05 at 21:03 +, Heikki Linnakangas wrote:
> Another approach I proposed back in December is to not have a variable
> like that at all, but scan the buffer cache for pages belonging to the
> table you're scanning to initialize the scan. Scanning all the
> BufferDescs is a fairl
[EMAIL PROTECTED] (Bruce Momjian) writes:
> Add GUC temp_tablespaces to provide a default location for temporary
> objects.
> Jaime Casanova
I hadn't looked at this patch before, but now that I have, it is
rather broken.
In the first place, it makes no provision for RemovePgTempFiles() to
clean u
Andrew Dunstan <[EMAIL PROTECTED]> writes:
> If you think there's a
> case for some extra functionality to be exposed, maybe you could provide
> some more examples / use cases.
I think what Pavel is on about is making use of not-known-to-C-code
custom variables as all-purpose intrasession storag
Simon Riggs wrote:
On Mon, 2007-03-05 at 14:41 -0500, Tom Lane wrote:
"Simon Riggs" <[EMAIL PROTECTED]> writes:
Itagaki-san and I were discussing in January the idea of cache-looping,
whereby a process begins to reuse its own buffers in a ring of ~32
buffers. When we cycle back round, if usage
Tom Lane wrote:
Andrew Dunstan <[EMAIL PROTECTED]> writes:
If you think there's a
case for some extra functionality to be exposed, maybe you could provide
some more examples / use cases.
I think what Pavel is on about is making use of not-known-to-C-code
custom variables as all-purpos
Tom Lane wrote:
But what I wanted to see was the curve of
elapsed time vs shared_buffers?
Of course! (let's just write that off to me being pre-coffee...)
With the patch applied:
Shared Buffers  Elapsed  vmstat IO rate
--------------  -------  --------------
400MB           101 s    122 MB/
Attached you'll find a patch that I've been kicking around for a
while that I'd like to propose for inclusion in 8.3. I attempted to
submit this through the original xml2 author (as far back as the 7.4
days) but got no response.
It's really fairly trivial, but I will be using the features it
p
Mark Kirkwood <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> But what I wanted to see was the curve of
>> elapsed time vs shared_buffers?
> ...
> Looks *very* similar.
Yup, thanks for checking.
I've been poking into this myself. I find that I can reproduce the
behavior to some extent even with
"Tom Lane" <[EMAIL PROTECTED]> writes:
> I don't see any good reason why overwriting a whole cache line oughtn't be
> the same speed either way.
I can think of a couple theories, but I don't know if they're reasonable. The
one the comes to mind is the inter-processor cache coherency protocol. Wh
Hi Tom,
Good info - it's the same in Solaris, the routine is uiomove (Sherry wrote it).
- Luke
Msg is shrt cuz m on ma treo
-----Original Message-----
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Monday, March 05, 2007 07:43 PM Eastern Standard Time
To: Mark Kirkwood
Cc: Pavan D
Am playing with this now ... sorry for delay ...
--On Wednesday, February 28, 2007 12:58:04 -0500 Tom Lane <[EMAIL PROTECTED]>
wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
>> Joshua D. Drake wrote:
>>> We should add this to the mailing list
Gregory Stark <[EMAIL PROTECTED]> writes:
> What happens if VACUUM comes across buffers that *are* already in the buffer
> cache. Does it throw those on the freelist too?
Not unless they have usage_count 0, in which case they'd be subject to
recycling by the next clock sweep anyway.
On Wed, 28 Feb 2007, Tom Lane wrote:
> AFAICT, the footer in question tries to make it illegal for us even to
> have the message in our mail archives. If I were running the PG lists,
> I would install filters that automatically reject mails containing such
> notices, with a message like "Your cor
Gavin Sherry <[EMAIL PROTECTED]> writes:
> On Wed, 28 Feb 2007, Tom Lane wrote:
>> AFAICT, the footer in question tries to make it illegal for us even to
>> have the message in our mail archives. If I were running the PG lists,
>> I would install filters that automatically reject mails containing
On Thu, 22 Feb 2007, Jim C. Nasby wrote:
It would also be extremely useful to make checkpoint stats visible
somewhere in the database (presumably via the existing stats
mechanism)... I'm thinking just tracking how many pages had to be
flushed during a checkpoint would be a good start.
I'm in
On Wed, 21 Feb 2007, Robert Treat wrote:
My impression of this is that DBA's would typically want to run this for a
short period of time to get their systems tuned and then it pretty much
becomes chatter. Can you come up with an idea of what information DBA's need
to know?
I am structing the