Andrew Chernow wrote:
libpqtypes already implemented this. It is a different approach but
provides the same functionality, including the ability to
handle every data type. libpqtypes uses the PGresult API for
composites and arrays, rather than adding a new set of functions.
Andrew Dunstan [EMAIL PROTECTED] writes:
Lastly, the idea is to provide extra facilities to libpq clients
without requiring any extra library.
Or more to the point, without requiring boatloads of new code that
only some libpq users would have any use for.
To my mind, the point of the present
Andrew Dunstan wrote:
Tom Lane wrote:
I'm tempted to suggest letting the script invoke autoconf, too,
but that would require standardizing where to find the correct
version of autoconf for each branch; so it might not be such a
great idea.
Unfortunately that's true. Maybe we could agree
That makes it quite useless for my intended purpose.
I found no more use cases for text results after libpqtypes started to
take shape, even though libpqtypes supports all data types in text and
binary, excluding arrays and composites. Because of this, adding a text
parser for arrays and
This topic has been discussed on this list and many users expect
PostgreSQL to implement it.
I'd like to work on this feature and hope that we can include it in 8.4.
Former discussions are here:
http://archives.postgresql.org/pgsql-hackers/2004-11/msg01093.php
Alvaro Herrera wrote:
Actually, now that I try it, it seems that the MediaWiki markup is not
completely helpful here -- right now, on some items we have a one-line
header and then possibly a longer description, and it seems the only
way to do that in MediaWiki is like this:
* Set proper
On Jun 7, 2008, at 2:00 PM, Andrew Dunstan wrote:
If we go down this road then I would far rather we tried to devise
some safe (or semi-safe) way of doing it instead of simply
providing expert (a.k.a. footgun) mode.
For instance, I'm wondering if we could do something with checksums
of
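The checksum idea is only gestured at before the message cuts off; one hedged reading (purely illustrative Python, not anything actually proposed on-list in this form) is to require the operator to echo back a digest of the exact statement before a footgun-mode command runs, proving the command was confirmed deliberately rather than pasted blindly:

```python
import hashlib

def confirm_token(statement: str) -> str:
    # Short digest the operator must echo back before a dangerous command runs.
    return hashlib.sha256(statement.encode("utf-8")).hexdigest()[:8]

def run_dangerous(statement: str, token: str) -> str:
    # Refuse unless the supplied token matches the digest of this exact statement.
    if token != confirm_token(statement):
        raise PermissionError("confirmation token does not match statement")
    return "executed: " + statement

# Hypothetical dangerous command, invented for illustration.
stmt = "UPDATE pg_catalog.pg_class SET relhasindex = false"
token = confirm_token(stmt)
```

Editing the statement by even one character invalidates the token, so a stale confirmation cannot authorize a different command.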
On Jun 9, 2008, at 7:32 AM, H.Harada wrote:
This topic has been discussed on this list and many users expect
PostgreSQL to implement it.
I'd like to work on this feature and hope that we can include it in
8.4.
I can't really comment on the technical aspects of your proposal, but
yes,
Hi there,
I'm currently looking at a bug report in PostGIS where we are getting
extremely long index scan times querying an index in one case, but the
same scan can take much less time if the input geometry is calculated as
the result of another function.
First of all, I include the EXPLAIN
Decibel! wrote:
Yes, but that provides no help at all outside of pg_dump. Being able
to add a FK with NO CHECK would be tremendously useful outside of
pg_dump. Actually, in the interest of stating the problem and not the
solution, what we need is a way to add FKs that doesn't lock
On Mon, 09.06.2008 at 14:18:50 +0100, Mark Cave-Ayland wrote the following:
Hi there,
I'm currently looking at a bug report in PostGIS where we are getting
extremely long index scan times querying an index in one case, but the
same scan can take much less time if the input geometry is
On Mon, 2008-06-09 at 14:18 +0100, Mark Cave-Ayland wrote:
Unfortunately I can't seem to work out why the extra time is
disappearing into the index scan when my extra mcatest() function is
not present,
Hmmm, perhaps implicit casting?
Try this to see if it works better also
select count(*)
Simon Riggs wrote:
Hmmm, perhaps implicit casting?
Try this to see if it works better also
select count(*) from geography where centroid &&
(select the_geom::geometry from geography where id=69495);
Hi Simon,
Unfortunately that seems to take the slow runtime path too. I did
initially
Decibel! [EMAIL PROTECTED] writes:
Actually, in the interest of stating the problem and not the
solution, what we need is a way to add FKs that doesn't lock
everything up to perform the key checks.
Ah, finally a useful comment. I think it might be possible to do an
add FK concurrently
It says here that CVS HEAD has a deadlock risk:
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=chinchilla&dt=2008-06-09%2008:16:01
The problem manifested while attempting to drop a GIN index (though
I doubt GIN is specially at fault):
[484ceb40.47da:169] ERROR: deadlock detected
On Mon, 2008-06-09 at 10:57 -0400, Tom Lane wrote:
Decibel! [EMAIL PROTECTED] writes:
Actually, in the interest of stating the problem and not the
solution, what we need is a way to add FKs that doesn't lock
everything up to perform the key checks.
Ah, finally a useful comment. I
On Sun, 2008-06-08 at 19:03 -0400, Tom Lane wrote:
Your argument seems to consider only columns having a normal
distribution. How badly does it fall apart for non-normal
distributions? (For instance, Zipfian distributions seem to be pretty
common in database work, from what I've seen.)
If
Simon Riggs wrote:
On Mon, 2008-06-09 at 10:57 -0400, Tom Lane wrote:
Decibel! [EMAIL PROTECTED] writes:
Actually, in the interest of stating the problem and not the
solution, what we need is a way to add FKs that doesn't lock
everything up to perform the key checks.
Ah,
Tom Lane wrote:
If that's the right diagnosis, we probably ought to try to fix DROP
INDEX to not do that, since it could deadlock against any number of
things not just autovac. But there's another question here: can we
make autovac lose the deadlock war, instead of the user process?
Hmm, I
Simon Riggs [EMAIL PROTECTED] writes:
On Mon, 2008-06-09 at 10:57 -0400, Tom Lane wrote:
Ah, finally a useful comment. I think it might be possible to do an
add FK concurrently type of command that would take exclusive lock
That's good, but it doesn't solve the original user complaint about
On Mon, 2008-06-09 at 11:23 -0400, Andrew Dunstan wrote:
Simon Riggs wrote:
On Mon, 2008-06-09 at 10:57 -0400, Tom Lane wrote:
Decibel! [EMAIL PROTECTED] writes:
Actually, in the interest of stating the problem and not the
solution, what we need is a way to add FKs that
On Mon, 2008-06-09 at 11:33 -0400, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
On Mon, 2008-06-09 at 10:57 -0400, Tom Lane wrote:
Ah, finally a useful comment. I think it might be possible to do an
add FK concurrently type of command that would take exclusive lock
That's
Mark Cave-Ayland [EMAIL PROTECTED] writes:
So by adding an extra function around the subselect result, we have
sped up the index lookup by several orders of magnitude, and the
speedup appears to be coming from somewhere within the index scan?!
Is the value you are fetching from the
Tom Lane wrote:
Is the value you are fetching from the geography table large enough to
be toasted? I'm thinking you might be looking at the cost of repeated
de-toastings.
Yeah, it's a fairly large geometry field so it will definitely be
getting toasted. So is it a case of with the mcatest
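Tom's suggestion of repeated de-toasting cost can be pictured with a toy sketch (illustrative Python only; `detoast` here is just zlib decompression standing in for PostgreSQL's TOAST machinery, and the sizes are made up). If the comparison value is re-detoasted once per index entry examined, the cost multiplies; forcing a single evaluation, as wrapping the subselect in a function does, pays it once:

```python
import time
import zlib

# Stand-in for a large toasted geometry: a big, compressible payload.
payload = b"polygon-vertex-data " * 50_000
toasted = zlib.compress(payload)

def detoast(blob: bytes) -> bytes:
    # Toy de-toast: just decompress.
    return zlib.decompress(blob)

n_entries = 200  # index entries examined during the scan

# Repeated de-toasting: once per entry.
start = time.perf_counter()
for _ in range(n_entries):
    detoast(toasted)
repeated = time.perf_counter() - start

# De-toast once, then reuse the value for every comparison.
start = time.perf_counter()
value = detoast(toasted)
for _ in range(n_entries):
    pass  # comparisons reuse the already-detoasted value
once = time.perf_counter() - start
```

The one-time path is cheaper by roughly the number of entries examined, which is consistent with a slowdown of "several orders of magnitude" on a large scan.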
Simon Riggs [EMAIL PROTECTED] writes:
On Mon, 2008-06-09 at 11:33 -0400, Tom Lane wrote:
No, we are running a large query to which the user *thinks* he knows the
answer. There are any number of reasons why he might be wrong.
Of course. I should have said to which we already know the answer
Mark Cave-Ayland [EMAIL PROTECTED] writes:
Tom Lane wrote:
Is the value you are fetching from the geography table large enough to
be toasted? I'm thinking you might be looking at the cost of repeated
de-toastings.
Yeah, it's a fairly large geometry field so it will definitely be
getting
2008/6/9 Simon Riggs [EMAIL PROTECTED]:
On Mon, 2008-06-09 at 10:57 -0400, Tom Lane wrote:
Decibel! [EMAIL PROTECTED] writes:
Actually, in the interest of stating the problem and not the
solution, what we need is a way to add FKs that doesn't lock
everything up to perform the key
Simon Riggs wrote:
But we don't know it for dead sure, we only think we do. What if the
data for one or other of the tables is corrupted? We'll end up with data
we believe is consistent but in fact is not, ISTM. If you can somehow
guarantee the integrity of data in both tables then we might
I would like to consider adding constraints to GiST indexes. I think it
is possible to add constraints that are more sophisticated than just
UNIQUE. My use case is a non-overlapping constraint, but I think it's
possible for this to be more general.
The idea is to make an array in shared memory
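Jeff's shared-memory-array idea can be sketched as a toy model (illustrative Python; all names invented, and real concurrency, locking, and index access are elided): track in-flight intervals alongside committed ones, and reject an insertion that overlaps either, which is what a non-overlapping constraint must guarantee across concurrent backends:

```python
def overlaps(a, b):
    # Closed-interval overlap test.
    return a[0] <= b[1] and b[0] <= a[1]

class NonOverlapConstraint:
    """Toy model: committed intervals plus in-flight ones
    (the in-flight list stands in for the shared-memory array)."""

    def __init__(self):
        self.committed = []
        self.in_flight = []

    def try_insert(self, interval):
        # Reject if the new interval overlaps any existing or pending one.
        for other in self.committed + self.in_flight:
            if overlaps(interval, other):
                return False
        self.in_flight.append(interval)
        return True

    def commit(self, interval):
        # Move a pending interval into the committed set.
        self.in_flight.remove(interval)
        self.committed.append(interval)

c = NonOverlapConstraint()
```

Checking pending insertions, not just committed rows, is what prevents two concurrent backends from inserting overlapping values that each look fine against the table alone.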
Hello all,
session variables have been a missing feature for a long time. Currently
nobody is working on a package or SQL/PSM module implementation, so there
is a gap. Currently the best-known solutions are session variables based
on PL/Perl hash tables or Joe Conway's C implementation.
I propose including Joe's code into
On Mon, 2008-06-09 at 11:49 -0400, Tom Lane wrote:
Mark Cave-Ayland [EMAIL PROTECTED] writes:
So by adding an extra function around the subselect result, we have
sped up the index lookup by several orders of magnitude, and the
speedup appears to be coming from somewhere within the
On Monday 09 June 2008 11:59:27 Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
On Mon, 2008-06-09 at 11:33 -0400, Tom Lane wrote:
No, we are running a large query to which the user *thinks* he knows the
answer. There are any number of reasons why he might be wrong.
Of course. I
Pavel Stehule wrote:
Hello all,
session variables have been a missing feature for a long time. Currently
nobody is working on a package or SQL/PSM module implementation, so there
is a gap. Currently the best-known solutions are session variables based
on PL/Perl hash tables or Joe Conway's C implementation.
I propose
Simon Riggs [EMAIL PROTECTED] writes:
On Mon, 2008-06-09 at 11:49 -0400, Tom Lane wrote:
Is the value you are fetching from the geography table large enough to
be toasted? I'm thinking you might be looking at the cost of repeated
de-toastings.
So you are saying it is de-toasted 32880 times,
Jeff Davis [EMAIL PROTECTED] writes:
I would like to consider adding constraints to GiST indexes. I think it
is possible to add constraints that are more sophisticated than just
UNIQUE. My use case is a non-overlapping constraint, but I think it's
possible for this to be more general.
I would
2008/6/9 Andrew Dunstan [EMAIL PROTECTED]:
Pavel Stehule wrote:
Hello all,
session variables have been a missing feature for a long time. Currently
nobody is working on a package or SQL/PSM module implementation, so there
is a gap. Currently the best-known solutions are session variables based
on PL/Perl
hash
Your argument seems to consider only columns having a normal
distribution.
My example was based upon normally distributed data because people
usually know what they are and they are reasonably common.
How badly does it fall apart for non-normal distributions?
This should work to the extent
On Mon, 2008-06-09 at 12:37 -0400, Robert Treat wrote:
On Monday 09 June 2008 11:59:27 Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
On Mon, 2008-06-09 at 11:33 -0400, Tom Lane wrote:
No, we are running a large query to which the user *thinks* he knows the
answer. There are
On Mon, 2008-06-09 at 13:28 -0400, Tom Lane wrote:
Jeff Davis [EMAIL PROTECTED] writes:
I would like to consider adding constraints to GiST indexes. I think it
is possible to add constraints that are more sophisticated than just
UNIQUE. My use case is a non-overlapping constraint, but I
Pavel Stehule wrote:
The URL fails, so I can't comment on Joe's work. But I seriously doubt that
any contrib module is really the right way to go about doing session
variables. If we're going to have them then they need to be properly
builtin, and available to all PLs.
probably Joe's server is
Gregory Stark wrote:
Mark Kirkwood [EMAIL PROTECTED] writes:
AFAIK (dimly recalling numerical analysis courses at university) SUM and ROUND
can *never* be commuted. In general the recommended approach is to round as
late as possible and as few times as possible - so your 1st query is the
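The non-commutativity is easy to demonstrate with a tiny example (illustrative Python using Decimal; the values are made up):

```python
from decimal import Decimal

# Three small amounts that each round to zero at two decimal places.
values = [Decimal("0.004"), Decimal("0.004"), Decimal("0.004")]

# Rounding early: each term becomes 0.00 before summing.
round_then_sum = sum(v.quantize(Decimal("0.01")) for v in values)

# Rounding late: the true total 0.012 rounds to 0.01.
sum_then_round = sum(values).quantize(Decimal("0.01"))
```

The two orderings give different answers, which is exactly why the standard advice is to round as late, and as few times, as possible.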
Simon Riggs wrote:
If we break down the action into two parts.
ALTER TABLE ... ADD CONSTRAINT foo FOREIGN KEY ... NOVALIDATE;
which holds exclusive lock, but only momentarily
After this runs any new data is validated at moment of data change, but
the older data has yet to be validated.
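Simon's two-step breakdown can be sketched as a toy model (illustrative Python; nothing like the real executor or catalog code, and all names are invented): once the NOVALIDATE constraint exists, new writes are checked at write time, while the pre-existing rows are validated in a later pass that need not block writers:

```python
class TwoPhaseForeignKey:
    """Toy model of ADD CONSTRAINT ... NOVALIDATE followed by later validation."""

    def __init__(self, referenced_keys, existing_rows):
        self.referenced = set(referenced_keys)
        self.rows = list(existing_rows)  # older data, not yet validated
        self.validated = False

    def insert(self, fk_value):
        # Phase 1: once the constraint exists, *new* data is checked immediately.
        if fk_value not in self.referenced:
            raise ValueError("insert violates FK: %r" % (fk_value,))
        self.rows.append(fk_value)

    def validate(self):
        # Phase 2: scan the pre-existing rows, without blocking new inserts.
        bad = [v for v in self.rows if v not in self.referenced]
        if bad:
            raise ValueError("existing rows violate FK: %r" % (bad,))
        self.validated = True

fk = TwoPhaseForeignKey(referenced_keys={1, 2, 3}, existing_rows=[1, 2])
```

The point of the split is that only the first, cheap step needs the exclusive lock; the expensive full-table check can run later, concurrently.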
Tom,
Actually, the reason it's still 10 is that the effort expended to get it
changed has been *ZERO*. I keep asking for someone to make some
measurements, do some benchmarking, anything to make a plausible case
for a specific higher value as being a reasonable place to set it.
The silence has
Alvaro Herrera wrote:
Hi,
We've been making noises about dealing with TOAST tables as separate
entities in autovacuum for some time now. So here's a proposal:
Maybe a dumb idea - what about making a queue of toast pointers ready for vacuum,
and removing these toast items directly from the toast
Josh Berkus [EMAIL PROTECTED] writes:
Where analyze does systematically fall down is with databases over 500GB in
size, but that's not a function of d_s_t but rather of our tiny sample size.
Speak to the statisticians. Our sample size is calculated using the same
theory behind polls which
Alvaro Herrera [EMAIL PROTECTED] writes:
The problem I see with this approach in general (two-phase FK creation)
is that you have to keep the same transaction for the first and second
command, but you really want concurrent backends to see the tuple for
the not-yet-validated constraint row.
On 6/9/08, Gregory Stark [EMAIL PROTECTED] wrote:
Josh Berkus [EMAIL PROTECTED] writes:
Where analyze does systematically fall down is with databases over 500GB
in
size, but that's not a function of d_s_t but rather of our tiny sample
size.
n_distinct. For that Josh is right, we
Hakan Kocaman [EMAIL PROTECTED] writes:
On 6/9/08, Gregory Stark [EMAIL PROTECTED] wrote:
n_distinct. For that Josh is right, we *would* need a sample size
proportional to the whole data set which would practically require us to
scan the whole table (and have a technique for summarizing the
Greg,
Speak to the statisticians. Our sample size is calculated using the same
theory behind polls which sample 600 people to learn what 250 million
people are going to do on election day. You do NOT need (significantly)
larger samples for larger populations.
Your analogy is bad. For
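Both positions are defensible because they concern different statistics, and a small simulation makes the split concrete (illustrative Python; the Zipf-like column and all sizes are made up). A poll-sized sample estimates a mean well regardless of population size, but the distinct-value count from the same sample badly undercounts:

```python
import random

random.seed(1)

N = 100_000
ranks = list(range(1, 5001))
weights = [1.0 / r for r in ranks]            # Zipf-ish frequency distribution
column = random.choices(ranks, weights=weights, k=N)

true_ndistinct = len(set(column))
true_mean = sum(column) / N

sample = random.sample(column, 600)            # "poll-sized" sample

# The sample mean is a decent estimate of the column mean...
sample_mean = sum(sample) / len(sample)

# ...but the sample can contain at most 600 distinct values,
# so n_distinct estimated from it is a severe undercount.
sample_ndistinct = len(set(sample))
```

This is the crux of the disagreement: proportions and means obey the polling math, while n_distinct is a property of the whole table that a fixed-size sample cannot pin down.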
On Mon, 2008-06-09 at 14:07 -0400, Alvaro Herrera wrote:
Simon Riggs wrote:
If we break down the action into two parts.
ALTER TABLE ... ADD CONSTRAINT foo FOREIGN KEY ... NOVALIDATE;
which holds exclusive lock, but only momentarily
After this runs any new data is validated at moment
Simon Riggs wrote:
Maybe we say that you can defer the check, but after a while autovacuum
runs it for you if you haven't done so. It would certainly be useful to
run the VALIDATE part as a background task with vacuum wait enabled.
It would be useful if there was anywhere to report the error
Jeff Davis [EMAIL PROTECTED] writes:
On Mon, 2008-06-09 at 13:28 -0400, Tom Lane wrote:
I would like to see something that replaces the current btree-only kluge
for UNIQUE, if we're going to try to do something general. IOW, don't
think of this as GiST-specific.
I'm not sure exactly what
Gurjeet Singh wrote:
On Fri, May 30, 2008 at 10:40 AM, Tom Lane [EMAIL PROTECTED] wrote:
But since you mention it: one of the plausible answers for fixing the
vacuum problem for read-only slaves is to have the slaves push an xmin
back upstream to the master to prevent premature vacuuming.
Andreas 'ads' Scherbaum wrote:
On Fri, 30 May 2008 16:22:41 -0400 (EDT) Greg Smith wrote:
On Fri, 30 May 2008, Andreas 'ads' Scherbaum wrote:
Then you ship 16 MB of binary stuff every 30 seconds or every minute, but
you only have some kbytes of real data in the logfile.
Not if you use
Bruce Momjian wrote:
Agreed. I realize why we are not zeroing those bytes (for performance),
but can't we have the archiver zero those bytes before calling the
'archive_command'?
Perhaps make the zeroing user-settable.
--
Alvaro Herrera
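The payoff of zeroing the unused tail shows up directly in compression ratios, which is the motivation behind tools like pg_clearxlogtail. A toy sketch (illustrative Python with made-up sizes; a real WAL segment is 16 MB, and random bytes stand in for stale leftover WAL):

```python
import os
import zlib

SEGMENT = 1 << 20            # 1 MB stand-in for a 16 MB WAL segment
used = 64 * 1024             # only this much is real WAL data

data = os.urandom(used)                      # the live WAL records
stale_tail = os.urandom(SEGMENT - used)      # leftover bytes from segment recycling

with_stale = data + stale_tail                    # what archive_command sees today
with_zeros = data + b"\x00" * (SEGMENT - used)    # after zeroing the tail

compressed_stale = zlib.compress(with_stale)
compressed_zeroed = zlib.compress(with_zeros)
```

The stale tail is effectively incompressible, so the archived segment stays near full size; the zeroed tail compresses away almost entirely, leaving only the live data.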
Bruce Momjian [EMAIL PROTECTED] writes:
Gurjeet Singh wrote:
There could be multiple slaves following a master, some serving
For the slave to not interfere with the master at all, we would need to
delay application of WAL files on each slave until visibility on that
slave allows the WAL to
Bruce Momjian [EMAIL PROTECTED] writes:
Agreed. I realize why we are not zeroing those bytes (for performance),
but can't we have the archiver zero those bytes before calling the
'archive_command'?
The archiver doesn't know any more about where the end-of-data is than
the archive_command
On Mon, 9 Jun 2008, Tom Lane wrote:
It should also be pointed out that the whole thing becomes uninteresting
if we get real-time log shipping implemented. So I see absolutely no
point in spending time integrating pg_clearxlogtail now.
There are remote replication scenarios over a WAN (mainly
Hi, everyone:
In function exec_simple_query(), why do we run a simple query
wrapped in a portal?
For instance:
version 8.3.0, postgres.c, line 908
/*
* Create unnamed portal to run the query or queries in. If
there
Just for information.
In terms of archive compression, I have an archive log compression tool,
which can be found at http://pgfoundry.org/projects/pglesslog/
This feature is also included in NTT's synchronized log shipping
replication presented at the last PGCon.
2008/6/10 Greg Smith [EMAIL PROTECTED]:
On Mon, 2008-06-09 at 21:00 -0400, Tom Lane wrote:
1. It's btree-specific and can't be shared by other index AMs that might
wish to implement constraints.
This can be solved by my proposal, but I just don't know how it would
apply to something like GIN, for instance. It could replace the
=?GB2312?Q?=CD=F4=E7=F9?= [EMAIL PROTECTED] writes:
In other words, what's the benefit of using a portal to run a
simple query?
er, because it doesn't work otherwise?
regards, tom lane