On Sun, 2006-12-10 at 18:09 -0500, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
The EA case is pretty straightforward though;
Well, no it's not, as you'll recall if you re-read the prior discussions.
The killer problem is that it's unclear whether the early termination of
the query
On Fri, 2006-12-08 at 11:05 +0900, Takayuki Tsunakawa wrote:
I understand that checkpoints occur during crash
recovery and PITR, so time for those operations would get longer.
A restorepoint happens during recovery, not a checkpoint. The recovery
is merely repeating the work of the checkpoint
Kevin Grittner [EMAIL PROTECTED] wrote:
We have not experienced any increase in I/O, just a smoothing. Keep in
mind that the file system cache will collapse repeated writes to the
same location until things settle, and the controller's cache also has a
chance of doing so. If we just push
Tom Lane [EMAIL PROTECTED] writes:
We might be able to finesse the protocol problem by teaching EA to
respond to query cancel by emitting the data-so-far as a NOTICE (like it
used to do many moons ago), rather than a standard query result, then
allowing the query to error out. However
Simon Riggs wrote:
Intermediate results are always better than none at all. I do understand
what a partial execution would look like - frequently it is the
preparatory stages that slow a query down - costly sorts, underestimated
hash joins etc. Other times it is loop underestimation, which can
Hello,
From: ITAGAKI Takahiro [EMAIL PROTECTED]
Takayuki Tsunakawa [EMAIL PROTECTED] wrote:
I'm afraid it is difficult for system designers to expect steady
throughput/response time, as long as PostgreSQL depends on the
flushing of file system cache. How does Oracle provide stable
Mr. Riggs,
Thank you for teaching me the following. I seem to have misunderstood.
I'll learn more.
From: Simon Riggs [EMAIL PROTECTED]
On Fri, 2006-12-08 at 11:05 +0900, Takayuki Tsunakawa wrote:
I understand that checkpoints occur during crash
recovery and PITR, so time for those operations
I wonder how the other big DBMS, IBM DB2, handles this. Is Itagaki-san
referring to DB2?
DB2 would also open data files with O_SYNC option and page_cleaners
(counterparts of bgwriter) would exploit AIO if available on the system.
Inaam Rana
EnterpriseDB http://www.enterprisedb.com
Hi all,
When I switched to the newest version of pgbuildfarm, I noticed that
--with-ldap (now by default) didn't work on UnixWare.
This is because, on Unixware, ldap needs lber and resolv.
Not being a configure guru, I made the change below locally and that
works for me.
Surely, one of you
ohp@pyrenet.fr wrote:
Hi all,
When I switched to the newest version of pgbuildfarm, I noticed that
--with-ldap (now by default) didn't work on UnixWare.
This is because, on Unixware, ldap needs lber and resolv.
Not being a configure guru, I made the change below locally and that
works for me.
Thanks for replying,
You are right but I have no knowledge on how to propagate this to
Makefile.unixware.
On Mon, 11 Dec 2006, Andrew Dunstan wrote:
Hm, in psql if I set FETCH_COUNT to a nonzero value I suddenly find I'm unable
to use SELECT ... FOR UPDATE.
I suspect this is unnecessary, that the only reason cursors can't hold locks
is because we don't support the kind of read-write operations that clients may
expect to be able to issue
Andrew Dunstan [EMAIL PROTECTED] writes:
The right way to do this I think is to put an entry adjusting LIBS in
src/makefiles/Makefile.unixware, but first it looks like we need to
propagate the with-ldap switch into src/Makefile.global
The Makefile is far too late --- this has to be adjusted
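Tom's objection is that the ldap link test in configure is where the extra libraries must already be present, or AC_CHECK_LIB fails before any Makefile is consulted. One possible shape, purely a hypothetical sketch (variable name and host_os pattern are assumptions, not the committed fix):

```shell
# Hypothetical configure.in fragment: supply the extra UnixWare
# libraries to the ldap link test so it succeeds and they are
# carried into LIBS.
case $host_os in
  sysv5*)   # UnixWare
    EXTRA_LDAP_LIBS="-llber -lresolv" ;;
esac
AC_CHECK_LIB(ldap, ldap_bind, [],
             [AC_MSG_ERROR([library 'ldap' is required for LDAP])],
             [$EXTRA_LDAP_LIBS])
```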
Gregory Stark [EMAIL PROTECTED] writes:
I suspect this is unnecessary, that the only reason cursors can't hold locks
is because we don't support the kind of read-write operations that clients may
expect to be able to issue against read-write cursors?
I think the rationale is that the SQL spec
Short version: is it optimal for vacuum to always populate reltuples
with live rows + dead rows?
I came across a problem in which I noticed that a vacuum did not change
the reltuples value as I expected. A vacuum analyze indicated a correct
Greg Sabino Mullane [EMAIL PROTECTED] writes:
Short version: is it optimal for vacuum to always populate reltuples
with live rows + dead rows?
If we didn't do that, it would tend to encourage the use of seqscans on
tables with lots of dead rows, which is probably a bad thing.
Is there any way
On Mon, 2006-12-11 at 11:00 +, Gregory Stark wrote:
Tom Lane [EMAIL PROTECTED] writes:
We might be able to finesse the protocol problem by teaching EA to
respond to query cancel by emitting the data-so-far as a NOTICE (like it
used to do many moons ago), rather than a standard query
Greg Sabino Mullane [EMAIL PROTECTED] writes:
Bleh. Isn't that what a plain analyze would encourage then? Should analyze
be considering the dead rows somehow as well?
Very possibly, at least for counting purposes (it mustn't try to analyze
the content of such rows, since they could be
Simon Riggs [EMAIL PROTECTED] writes:
On Mon, 2006-12-11 at 11:00 +, Gregory Stark wrote:
What I suggested was introducing a new FE/BE message type for analyze query
plans.
I like the idea, but it's more work than I really wanted to get into
right now.
Yeah ... a protocol change is
Tom Lane wrote:
Yeah ... a protocol change is *painful*, especially if you really want
clients to behave in a significantly new way.
A backward-incompatible protocol change is painful, sure, but ISTM we
could implement what Greg describes as a straightforward extension to
the V3 protocol.
Simon Riggs wrote:
I like the idea, but it's more work than I really wanted to get into
right now.
Well, from another point of view: do we need this feature so urgently
that there is not enough time to do it properly? IMHO, no.
-Neil
Neil Conway [EMAIL PROTECTED] writes:
Tom Lane wrote:
Yeah ... a protocol change is *painful*, especially if you really want
clients to behave in a significantly new way.
A backward-incompatible protocol change is painful, sure, but ISTM we
could implement what Greg describes as a
ITAGAKI Takahiro wrote:
Kevin Grittner [EMAIL PROTECTED] wrote:
...the file system cache will collapse repeated writes to the
same location until things settle ...
If we just push dirty pages out to the OS as soon as possible,
and let the file system do its job, I think we're in better
Jim C. Nasby wrote:
On usage, ISTM it would be better to turn on GIT only for a clustered
index and not the PK? I'm guessing your automatic case is intended for
SERIAL PKs, but maybe it would be better to just make that explicit.
Not necessarily; since often (in my tables at least) the data
On Mon, Dec 11, 2006 at 3:31 PM, in message
[EMAIL PROTECTED], Ron Mayer
[EMAIL PROTECTED] wrote:
One thing I do worry about is if both postgresql and the OS
are both delaying write()s in the hopes of collapsing them
at the same time. If so, this would cause both to experience
bigger