Hi,
Alvaro Herrera wrote:
Markus Wanner wrote:
(Although, I'm still less than thrilled about the internal storage
format of these tuple collections. That can certainly be improved and
simplified.)
Care to expand more on what it is?
Well, what I really dislike is the overhead in code to
In 8.2, we started to use getrlimit(RLIMIT_STACK, ...) to set
max_stack_depth automatically, to a max of 2MB:
http://archives.postgresql.org/pgsql-hackers/2006-10/msg00389.php
However, it's not behaving as I expected when the stack limit is set to
unlimited. I would expect max_stack_depth to
Hi,
Tom Lane wrote:
Adjust things so that the query_string of a cached plan and the sourceText of
a portal are never NULL, but reliably provide the source text of the query.
It turns out that there was only one place that was really taking a short-cut,
which was the 'EXECUTE' utility statement.
Hey guys,
I am asking here, because I know there is bunch of people here that know
the topic very well.
I need to use a few OVERLAPS on timestamps in my query, and I guess it is
quite inefficient there. So my question would be: why isn't PostgreSQL
using indexes for OVERLAPS, and why the optimizer
Grzegorz Jaśkiewicz [EMAIL PROTECTED] writes:
So my question would be, why isn't postgresql using indexes for OVERLAPS,
and why the optimizer doesn't substitute it with something like:
(c <= a AND d > a) OR (c >= a AND c < b)
How would you use an index for that?
--
Gregory Stark
EnterpriseDB
Hi,
Here are the latest WITH RECURSIVE patches against 2007/07/17 CVS (CVS
HEAD won't compile for me).
This version includes regression tests and is almost ready for commit
IMO.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
recursive_query.patch.gz
Description: Binary data
--
Sent via pgsql-hackers
Hi,
Tatsuo Ishii wrote:
CVS HEAD won't compile for me
Did you try 'make clean' and rebuild? (Or even distclean). There were
some changes to pg_proc. At least, that problem has bitten me this
morning, but after that, I could compile HEAD just fine.
Regards
Markus
Hi,
Tatsuo Ishii wrote:
CVS HEAD won't compile for me
Did you try 'make clean' and rebuild? (Or even distclean).
Actually what I did was do cvs update on 2007/07/17 CVS HEAD (I kept
freshly gotten CVS work files as a tar ball).
There were
some changes to pg_proc. At least, that
Gregory Stark writes:
Grzegorz Jaśkiewicz [EMAIL PROTECTED] writes:
So my question would be, why isn't postgresql using indexes for OVERLAPS,
and why the optimizer doesn't substitute it with something like:
(c <= a AND d > a) OR (c >= a AND c < b)
How would you use an index for that?
check
Markus Wanner [EMAIL PROTECTED] writes:
This commit added a variable 'query_string' to the function
ExecuteQuery() in src/backend/commands/prepare.c, but that function
already takes an argument named 'queryString'. What's the difference?
Which is which? Do we need both?
The query_string
2008/7/21 Tatsuo Ishii [EMAIL PROTECTED]:
Hi,
Tatsuo Ishii wrote:
CVS HEAD won't compile for me
Did you try 'make clean' and rebuild? (Or even distclean).
Actually what I did was do cvs update on 2007/07/17 CVS HEAD (I kept
freshly gotten CVS work files as a tar ball).
There were
some
Gregory Stark [EMAIL PROTECTED] writes:
Grzegorz Jaśkiewicz [EMAIL PROTECTED] writes:
So my question would be, why isn't postgresql using indexes for OVERLAPS,
and why the optimizer doesn't substitute it with something like:
(c <= a AND d > a) OR (c >= a AND c < b)
How would you use an index for
Hi,
Tom Lane wrote:
This seems like a bad idea, because it makes the code gratuitously
different from the names used for this purpose everywhere else.
I find that a pretty dubious reason for having 'query_string' and
'queryString' in the same function. In fact, having it in the same code
Heikki Linnakangas [EMAIL PROTECTED] writes:
In 8.2, we started to use getrlimit(RLIMIT_STACK, ...) to set
max_stack_depth automatically, to a max of 2MB:
http://archives.postgresql.org/pgsql-hackers/2006-10/msg00389.php
However, it's not behaving as I expected when the stack limit is set
Tom Lane writes:
The reason we don't automatically translate OVERLAPS is that the spec's
definition of OVERLAPS is too weird for that to work; in particular
it demands a true result for some cases in which one of the four
endpoints is NULL, which'd be pretty hard to do with an interval-style
Tom Lane wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
In 8.2, we started to use getrlimit(RLIMIT_STACK, ...) to set
max_stack_depth automatically, to a max of 2MB:
http://archives.postgresql.org/pgsql-hackers/2006-10/msg00389.php
However, it's not behaving as I expected when the
Markus Wanner [EMAIL PROTECTED] writes:
However, what about at least adding a comment, so fellow hackers have a
chance of understanding the subtle difference there?
Sure, done.
regards, tom lane
On Monday 21 July 2008, Heikki Linnakangas wrote:
In 8.2, we started to use getrlimit(RLIMIT_STACK, ...) to set
max_stack_depth automatically, to a max of 2MB:
http://archives.postgresql.org/pgsql-hackers/2006-10/msg00389.php
However, it's not behaving as I expected when the stack limit
Cédric Villemain wrote:
On Monday 21 July 2008, Heikki Linnakangas wrote:
I think we should differentiate between infinite and unknown in the
return value of get_stack_depth_limit(), and use max_stack_depth of 2MB
in case of infinite, and fall back to the 100kB only in the unknown case.
On Monday 21 July 2008, Heikki Linnakangas wrote:
Cédric Villemain wrote:
On Monday 21 July 2008, Heikki Linnakangas wrote:
I think we should differentiate between infinite and unknown in the
return value of get_stack_depth_limit(), and use max_stack_depth of 2MB
in case of
On Jul 18, 2008, at 09:53, David E. Wheeler wrote:
However, if someone with a lot more C and Pg core knowledge wanted
to sit down with me for a couple hours next week and help me bang
out these functions, that would be great. I'd love to have the
implementation be that much more complete.
The current commitfest queue has two entries that propose to migrate
existing pgfoundry projects (or improved versions thereof) into our
core distribution. The more I think about this the less happy I am
with it. From a maintenance point of view there seems little need
for either project to get
Tom,
Comments?
Well, in the *general* case, I think if we're going to have first
class pgfoundry projects, then having a unified official Kitchen Sink
Package with all of these add-ins becomes an imperative priority for
8.4. EDB's recent open sourcing of their installer might help with
On Sat, Jul 19, 2008 at 07:09:46AM +0530, Gurjeet Singh wrote:
Will try this option, at least in the next schema upgrade or when setting up
Slony.
As I've already suggested, however, if you try to set up slony on a
loaded database, you're going to see all manner of problems. Slony
takes some
Apologies for the delayed response - vacation, travel, etc. got in the way!
Zdenek Kotala wrote:
I performed a review of the merged patch from Robert Treat. As a first
point, the patch does not work (SunOS 5.11 snv_86 sun4u sparc
SUNW,Sun-Fire-V240)
The attached patch fixed the regression test
Marko Kreen [EMAIL PROTECTED] writes:
[ plproxy ]
I looked through this a bit, and my principal reaction was what are
the security implications? It seems like it'd be very easy to create
functions that allow untrusted users to execute arbitrary SQL on
other databases in the plproxy cluster. As
On Jul 21, 2008, at 12:43, Tom Lane wrote:
From a maintenance point of view there seems little need
for either project to get integrated: they don't appear to have much
of any code that is tightly tied to backend innards.
Well, citext against CVS HEAD is quite different from the other
On Mon, 2008-07-21 at 17:03 -0300, Tom Lane wrote:
[...] I think it
would be a good idea to be open to reviewing pgfoundry code with the
same standards we'd use if we were going to integrate it. Perhaps
commitfest is not the right venue for that, though, if only because
of the possibility
On Jul 21, 2008, at 12:53, Josh Berkus wrote:
In the specific cases of pl/proxy and citext, they are very much in
line with what we already package with the core code, including
things like dblink, ISN, and CIDR. citext in particular would
eliminate a long-time newbie complaint about
Josh Berkus [EMAIL PROTECTED] writes:
So I would argue maybe on pl/proxy, but that citext does belong in core.
Well, at least citext is pretty tiny ...
regards, tom lane
Tom Lane wrote:
The current commitfest queue has two entries that propose to migrate
existing pgfoundry projects (or improved versions thereof) into our
core distribution. The more I think about this the less happy I am
with it. From a maintenance point of view there seems little need
for
On 7/21/08, Tom Lane [EMAIL PROTECTED] wrote:
Marko Kreen [EMAIL PROTECTED] writes:
[ plproxy ]
I looked through this a bit, and my principal reaction was what are
the security implications? It seems like it'd be very easy to create
functions that allow untrusted users to execute
On Mon, Jul 21, 2008 at 01:17:39PM -0700, David E. Wheeler wrote:
pgFoundry ain't the CPAN, alas.
Maybe that's the problem that really needs solving?
One of the big Postgres features is its extensibility. I agree that
the extensions can sometimes be hard to find, but surely the answer to
that
On Jul 21, 2008, at 13:19, Andrew Dunstan wrote:
I was going to write some stuff about citext anyway. Quite apart
from the above considerations I'm still a bit concerned about its
performance characteristics. And I'm not sure we really want all the
baggage that David is proposing to bring
On Jul 21, 2008, at 13:28, Andrew Sullivan wrote:
Maybe that's the problem that really needs solving?
One of the big Postgres features is its extensibility. I agree that
the extensions can sometimes be hard to find, but surely the answer to
that is not an infinitely large source tarball?
On 21-Jul-08, at 4:28 PM, Andrew Sullivan wrote:
On Mon, Jul 21, 2008 at 01:17:39PM -0700, David E. Wheeler wrote:
pgFoundry ain't the CPAN, alas.
Maybe that's the problem that really needs solving?
One of the big Postgres features is its extensibility. I agree that
the extensions can
On Fri, Jul 18, 2008 at 02:23:43AM -0400, Andrew Sullivan wrote:
On Fri, Jul 18, 2008 at 10:41:36AM +0530, Gurjeet Singh wrote:
Just started INIT cluster Slonik command and that spiked too.. for more than
10 minutes now!!
Are you attempting to do Slony changes (such as install Slony) on
On Tue, Jul 22, 2008 at 1:29 AM, Andrew Sullivan [EMAIL PROTECTED]
wrote:
On Sat, Jul 19, 2008 at 07:09:46AM +0530, Gurjeet Singh wrote:
Will try this option, at least in the next schema upgrade or when setting
up
Slony.
As I've already suggested, however, if you try to set up slony on a
Bruce Momjian wrote:
Andrew Dunstan wrote:
Alvaro Herrera wrote:
Andrew Dunstan wrote:
OK, I have spent some time generating and filtering typedefs via objdump
on various platforms. I filtered them and Bruce's list to eliminate
items not actually found in the sources
Currently, one cannot perform a concurrent VACUUM and ANALYZE. This
is a significant problem for tables which are not only large and have
designated cost-delays, but which are also heavily inserted into and
deleted from. After performing a cursory investigation on this,
it doesn't appear
On Tue, Jul 22, 2008 at 02:41:55AM +0530, Gurjeet Singh wrote:
I am aware of the heavy locking involved with Slony, which should mean that
it blocks the application connections; that'd be completely acceptable,
given all the warnings in the Slony docs. But what I am concerned about and
trying
Jonah H. Harris wrote:
Currently, one cannot perform a concurrent VACUUM and ANALYZE. This
is a significant problem for tables which are not only large and have
designated cost-delays, but which are also heavily inserted into and
deleted from. After performing a cursory
Andrew Sullivan [EMAIL PROTECTED] writes:
On Tue, Jul 22, 2008 at 02:41:55AM +0530, Gurjeet Singh wrote:
I am aware of the heavy locking involved with Slony, which should mean that
it blocks the application connections; that'd be completely acceptable,
given all the warnings in the Slony docs.
Jonah H. Harris [EMAIL PROTECTED] writes:
Because we wouldn't want multiple ANALYZEs running on the same table,
changing the lock back to an AccessShareLock doesn't sound like a
solution.
It flat will not work. We used to do it that way, and it didn't
(search for "tuple concurrently updated" in
On Mon, Jul 21, 2008 at 6:15 PM, Alvaro Herrera
[EMAIL PROTECTED] wrote:
This means that VACUUM ANALYZE must grab both locks ... is there a
gotcha here?
Agreed.
The main problem I see with this idea is that the dead and total tuple
count computed by ANALYZE would be immediately out of date,
Marko Kreen [EMAIL PROTECTED] writes:
On 7/21/08, Tom Lane [EMAIL PROTECTED] wrote:
I looked through this a bit, and my principal reaction was what are
the security implications?
There are 2 aspects to it:
1. Function can be created only by superuser.
What I'm concerned about is who they
On Mon, Jul 21, 2008 at 7:59 PM, Tom Lane [EMAIL PROTECTED] wrote:
Jonah H. Harris [EMAIL PROTECTED] writes:
Because we wouldn't want multiple ANALYZEs running on the same table,
changing the lock back to an AccessShareLock doesn't sound like a
solution.
It flat will not work. We used to do
Jonah H. Harris [EMAIL PROTECTED] writes:
The case I'm looking at is a large table which requires a lazy vacuum,
and a zero vacuum cost delay would cause too much I/O. Yet, this
table has enough insert/delete activity during a vacuum, that it
requires a fairly frequent analysis to maintain
On Mon, Jul 21, 2008 at 10:19 PM, Tom Lane [EMAIL PROTECTED] wrote:
Jonah H. Harris [EMAIL PROTECTED] writes:
The case I'm looking at is a large table which requires a lazy vacuum,
and a zero vacuum cost delay would cause too much I/O. Yet, this
table has enough insert/delete activity during
At 8:34 AM +0100 7/11/08, Simon Riggs wrote:
On Mon, 2008-07-07 at 15:46 +0200, Bernd Helmle wrote:
There's a behavior in pg_dump that annoyed me a little bit, the last few
times i had to deal with it:
Consider you have to dump a specific namespace only, you are going to use
pg_dump -n
Jonah H. Harris wrote:
On Mon, Jul 21, 2008 at 10:19 PM, Tom Lane [EMAIL PROTECTED] wrote:
I don't find this a compelling argument, at least not without proof that
the various vacuum-improvement projects already on the radar screen
(DSM-driven vacuum, etc) aren't going to fix your problem.