I think that the ONLY keyword was wrong from day one :-(
Well, sure, but until we have an implementation that actually
*works* across multiple tables, it has to be there so that we
can at least consistently support the current single-table
semantics. Until we have some form of cross-table
This bothers me a bit, because in
fact the effects, if any, of the tested query would have been
rolled back. Not sure we have any choice, though. If we
expose the error then we'll have problems with clients not
showing the EXPLAIN results.
I think we should leave it in top level, throw
On a separate note. The one major remaining piece here is in
constraints. I'm thinking what I have to check is that every
constraint present on the parent table is present on the
child tables. And I'm thinking I should do that by looking at
the constraint's textual definition (consrc).
Or maybe I should insist that a matching constraint name be present
*and* that the source text match? That's more of a pain to code
though.
That could also break some partitioning schemes; I don't
think it's a given that parents and children have matching
constraints, and afaik a
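The parent/child constraint check sketched above can be expressed compactly. This is a minimal sketch, assuming the pg_constraint rows have already been fetched into plain dicts keyed like the catalog columns (the fetching itself, and the function name, are assumptions for illustration):

```python
def missing_child_constraints(parent_cons, child_cons, match_name=False):
    """Return parent constraints with no matching child constraint.

    parent_cons / child_cons: lists of dicts with 'conname' and
    'consrc' keys (hypothetical pre-fetched pg_constraint rows).
    match_name=True also insists the constraint names agree -- the
    stricter, more painful variant discussed above.
    """
    def key(c):
        return (c["conname"], c["consrc"]) if match_name else c["consrc"]

    child_keys = {key(c) for c in child_cons}
    return [c for c in parent_cons if key(c) not in child_keys]


parent = [{"conname": "a_chk", "consrc": "(a > 0)"}]
child = [{"conname": "a_check", "consrc": "(a > 0)"}]

# text-only matching accepts this pair; name+text matching rejects it
assert missing_child_constraints(parent, child) == []
assert missing_child_constraints(parent, child, match_name=True) == parent
```

This also illustrates why name matching can break partitioning schemes: the child constraint above is semantically identical but was simply named differently.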
I can take other if that helps.
Larry, could you help me in the setup?
Regards,
On Thu, 8 Jun 2006, Andrew Dunstan wrote:
Date: Thu, 08 Jun 2006 10:54:09 -0400
From: Andrew Dunstan [EMAIL PROTECTED]
Newsgroups: pgsql.hackers
Subject: Re: Going for 'all green' buildfarm results
Larry
On Thu, 2006-06-08 at 17:23 -0400, Greg Stark wrote:
Simon Riggs [EMAIL PROTECTED] writes:
On Thu, 2006-06-08 at 16:47 -0400, Tom Lane wrote:
Greg Stark [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] writes:
So? They'll get re-merged with the parent column during CREATE
On Thu, 2006-06-08 at 17:21 -0400, Tom Lane wrote:
Alvaro Herrera [EMAIL PROTECTED] writes:
Tried on two machines. The first (Turion64 laptop) gives 44-45 ms for
the SELECT, and 50-51 ms for the EXPLAIN ANALYZE.
The second machine, desktop Celeron 533, gives 197-200 ms for the SELECT
On Fri, 9 Jun 2006 ohp@pyrenet.fr wrote:
Date: Fri, 9 Jun 2006 11:12:07 +0200
From: ohp@pyrenet.fr
To: Andrew Dunstan [EMAIL PROTECTED], Larry Rosenman ler@lerctr.org
Newsgroups: pgsql.hackers
Subject: Re: Going for 'all green' buildfarm results
I can take other if that helps.
Ooops...
Alvaro Herrera wrote:
The second machine, desktop Celeron 533, gives 197-200 ms for the SELECT
and 788-790 for the EXPLAIN ANALYZE. I guess this is the reproduction
you were looking for.
Hmm, I take that back. I updated to the current CVS tip on this machine
and now I don't see the problem.
Alvaro Herrera [EMAIL PROTECTED] writes:
Hmm, I take that back. I updated to the current CVS tip on this machine
and now I don't see the problem. Without EXPLAIN ANALYZE, the times are
around 115-117 ms. With EXPLAIN ANALYZE, 149-152ms.
At the moment, CVS HEAD still has the sampling patch
Tom Lane wrote:
Alvaro Herrera [EMAIL PROTECTED] writes:
Hmm, I take that back. I updated to the current CVS tip on this
machine and now I don't see the problem. Without EXPLAIN ANALYZE,
the times are around 115-117 ms. With EXPLAIN ANALYZE, 149-152ms.
At the moment, CVS HEAD still has
Larry Rosenman ler@lerctr.org writes:
Does my post from yesterday (39ms without explain analyze, 280+ with explain
analyze) on FreeBSD/amd64, dual Xeons in HTT mode, help?
Well, it confirms that FreeBSD is subject to the same problem ;-).
I think the bottom line here is that there are some
Hi Guys, thank you for the info.
I asked the question because I thought there was something like a
(pre-beta, not officially released) version of PG
8.2 ;) .
I will try with CVS.
Thanks. Milen
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Jim C.
I'll place contrib module which will make all Snowball stemmers. Right
now I'm working on supporting OpenOffice's dictionaries in tsearch2, so
it will be simple to add it to packaging system.
done, http://archives.postgresql.org/pgsql-committers/2006-06/msg00112.php
--
Teodor Sigaev
On Fri, 2006-06-09 at 10:00 -0400, Tom Lane wrote:
I think the bottom line here is that there are some machines out there
where gettimeofday() is fast enough for our purposes, and some where
it is nowhere near fast enough. And that kernel-level fixes may be
possible for some of the latter,
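A rough way to gauge which camp a given machine falls into is to time the clock calls themselves. This sketch uses Python's `time.perf_counter` as a stand-in for gettimeofday(); the call count is arbitrary:

```python
import time

def timer_overhead_ns(calls=100_000):
    """Estimate the per-call cost of reading the clock, in nanoseconds."""
    start = time.perf_counter()
    for _ in range(calls):
        time.perf_counter()       # the call whose cost we want to measure
    elapsed = time.perf_counter() - start
    return elapsed * 1e9 / calls

print(f"~{timer_overhead_ns():.0f} ns per clock read")
```

On machines where this number is tens of nanoseconds, per-node timing is nearly free; where it is microseconds, EXPLAIN ANALYZE distortion of the kind reported in this thread is expected.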
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: 29 May 2006 18:05
To: Mark Cave-Ayland
Cc: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] Proposal for debugging of server-side stored
procedures
(cut)
As far as the debug protocol details go, it'd be worth
Simon Riggs [EMAIL PROTECTED] writes:
I propose we revert the sampling patch (sorry Martijn)
yah ...
and go with the
patch to have an explain_analyze_timing parameter (default=on).
This I'm unexcited about. EXPLAIN output isn't all that transparent
anyway, and losing the extra cue of seeing
On Fri, Jun 09, 2006 at 10:00:20AM -0400, Tom Lane wrote:
To tell you the truth, this information makes me even less pleased with
the sampling-gettimeofday patch than I was before. If gettimeofday() in
itself increases the runtime of a node by a factor of 10, then just
trying to subtract off
It would be nice to keep the gettimeofday()s wherever they are most useful
on hardware/software where they are cheap. Perhaps a compile-time option?
On Fri, June 9, 2006 11:18 am, Martijn van Oosterhout wrote:
On Fri, Jun 09, 2006 at 10:00:20AM -0400, Tom Lane wrote:
To tell you the truth,
Some thoughts from another Tom...
Mark Cave-Ayland wrote:
... debugging in Perl is initiated with perl -d somefile.pl which then
becomes a special interactive interpreter session. The same is also true
with Python which is launched in a similar way, python -m pdb somefile.py.
All PL's are
Martijn van Oosterhout kleptog@svana.org writes:
On Fri, Jun 09, 2006 at 10:00:20AM -0400, Tom Lane wrote:
I had thought we were applying an order-of-ten-percent correction by
subtracting SampleOverhead, not an order-of-10x correction :-(
Eh? The whole point is to call gettimeofday() much
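The interaction being debated can be simulated deterministically with a fake clock, where per-row work and each clock read cost fixed amounts (all constants here are made up for illustration). In this idealized model, timing only every Nth row and subtracting the per-sample timer cost recovers the true total exactly; the real dispute is that on actual hardware the overhead is nowhere near this stable:

```python
TIMER_COST = 50   # hypothetical cost of one clock read, "ns"
ROW_COST = 100    # hypothetical real work per row, "ns"

clock_ns = 0

def read_clock():
    """Reading the clock itself advances it, like a slow gettimeofday()."""
    global clock_ns
    clock_ns += TIMER_COST
    return clock_ns

def estimate_total(rows, sample_every):
    """Time every sample_every-th row, subtract the timer overhead
    captured inside each sample, and extrapolate to all rows."""
    global clock_ns
    sampled_ns = 0
    samples = 0
    for i in range(rows):
        timed = (i % sample_every == 0)
        if timed:
            start = read_clock()
        clock_ns += ROW_COST          # the row's actual work
        if timed:
            sampled_ns += read_clock() - start
            samples += 1
    # each sampled interval includes exactly one extra clock read
    per_row = (sampled_ns - samples * TIMER_COST) / samples
    return per_row * rows

print(estimate_total(1000, sample_every=10))   # 100000.0 = rows * ROW_COST
```

If TIMER_COST varied by an order of magnitude between calls, as the measurements in this thread suggest it can, the subtracted correction would be wrong by roughly the same factor.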
Mark Cave-Ayland wrote:
[snip]
Perl seems a little more messy in that I can't find a documented C API to
hook into the interpreter, but it looks as if it may be possible to cook
something up by writing a new DB package [2] which uses XS to call a C
callback. The other issue is that unlike
On Fri, 2006-06-09 at 10:56 -0400, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
I propose we revert the sampling patch (sorry Martijn)
yah ...
and go with the
patch to have an explain_analyze_timing parameter (default=on).
This I'm unexcited about. EXPLAIN output isn't all
Zeugswetter Andreas DCP SD [EMAIL PROTECTED] writes:
This bothers me a bit, because in
fact the effects, if any, of the tested query would have been
rolled back. Not sure we have any choice, though. If we
expose the error then we'll have problems with clients not
showing the EXPLAIN
Sven Geisler [EMAIL PROTECTED] writes:
I created a patch for PostgreSQL and x86 architecture.
This patch addresses an Opteron-specific
behavior regarding some assembler statements.
AFAICT this patch essentially proposes that we should allow the single
case of an Opteron running in 32-bit mode to
Hi all,
I created a patch for PostgreSQL and x86 architecture.
This patch addresses an Opteron-specific
behavior regarding some assembler statements.
The patch is based on a patch to PostgreSQL 8.0.3 which was worked out by
RedHat.
Tom changed src/include/storage/s_lock.h for PostgreSQL 8.1.x.
On Fri, Jun 09, 2006 at 11:22:11AM -0400, A.M. wrote:
It would be nice to keep the gettimeofday()s wherever they are most useful
on hardware/software where they are cheap. Perhaps a compile-time option?
I think making it optional at compile time is worse, because then at
support time you might
Tom Lane [EMAIL PROTECTED] writes:
I fear you'll have to actually reverse-compile the conbin strings and see if
you get a match.
There seems to be something I'm missing wrt the conbin string. I have a table
here with a simple check constraint:
Table public.a
Column | Type
Tom Lane [EMAIL PROTECTED] writes:
I was intending to push forward with the idea of being able to get
numbers out of a canceled EXPLAIN. That will allow you to get some
information even when the underlying query runs longer than you're
willing to tolerate. I still say that the number of
Greg Stark [EMAIL PROTECTED] writes:
There seems to be something I'm missing wrt the conbin string.
You can't access fields that lie to the right of a variable-width field
using C struct field names. You need to use heap_getattr() or one of
its relatives to obtain a pointer to such a field.
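The reason fixed struct offsets fail can be shown with a toy row layout: a length-prefixed variable-width field (standing in for conbin) followed by a fixed-width field. The trailing field's offset must be computed per row, which is the job heap_getattr does in the backend. The layout below is invented for illustration and is not the real on-disk tuple format:

```python
import struct

def make_row(conbin: bytes, flag: int) -> bytes:
    """Toy row: 4-byte length word, variable-width payload, then an int32."""
    return struct.pack("<I", len(conbin)) + conbin + struct.pack("<i", flag)

def trailing_int(row: bytes) -> int:
    """Fields to the right of a variable-width field have no fixed
    offset; the length word must be read first to locate them."""
    (vlen,) = struct.unpack_from("<I", row, 0)
    (val,) = struct.unpack_from("<i", row, 4 + vlen)
    return val

# the trailing field moves when the variable-width field grows
assert trailing_int(make_row(b"(a > 0)", 7)) == 7
assert trailing_int(make_row(b"(a > 0 AND b IS NOT NULL)", 7)) == 7
```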
Greg Stark [EMAIL PROTECTED] writes:
Are we still going to subtract out the gettimeofday overhead?
I was always more excited about that than the sampling aspect. I've run into
queries where EXPLAIN ANALYZE results were deceptive due to the gettimeofday
overhead but I've never run into a query
Tom Lane [EMAIL PROTECTED] writes:
Greg Stark [EMAIL PROTECTED] writes:
There seems to be something I'm missing wrt the conbin string.
You can't access fields that lie to the right of a variable-width field
using C struct field names. You need to use heap_getattr() or one of
its
Tom Lane [EMAIL PROTECTED] writes:
Greg Stark [EMAIL PROTECTED] writes:
Are we still going to subtract out the gettimeofday overhead?
I was always more excited about that than the sampling aspect. I've run into
queries where EXPLAIN ANALYZE results were deceptive due to the gettimeofday
Greg Stark [EMAIL PROTECTED] writes:
To avoid user confusion it would be reasonable to print out a line at the bottom:
Explain analyze profiling overhead removed: xxx ms
That also gives the user feedback on how precise their explain analyze results
are. If they see that the overhead being
Currently, the only way to get a listing of tables in a schema via psql
is to modify your search_path, which is both non-intuitive and a PITA.
You can do \d schemaname., but that's the equivalent of
set search_path = schemaname
\d *
I'd like to propose that the behavior of \d schemaname. be
Tom Lane [EMAIL PROTECTED] writes:
Greg Stark [EMAIL PROTECTED] writes:
To avoid user confusion it would be reasonable to print out a line at the
bottom:
Explain analyze profiling overhead removed: xxx ms
That also gives the user feedback on how precise their explain analyze
On Fri, Jun 09, 2006 at 02:33:57PM -0500, Jim C. Nasby wrote:
Currently, the only way to get a listing of tables in a schema via psql
is to modify your search_path, which is both non-intuitive and a PITA.
You can do \d schemaname., but that's the equivalent of
Isn't this \dt schemaname.*?
The
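The pattern semantics under discussion can be sketched as a tiny matcher. This is a deliberate simplification using shell-style wildcards, not psql's actual pattern-to-regex translation; the function name and the rule that a trailing empty name part means "everything in that schema" are assumptions for illustration:

```python
import fnmatch

def matches(pattern: str, schema: str, name: str) -> bool:
    """Match a '\\d'-style pattern against a schema-qualified relation."""
    if "." in pattern:
        spat, npat = pattern.split(".", 1)
        # '\d schemaname.' lists everything in that schema,
        # i.e. it behaves like 'schemaname.*'
        return (fnmatch.fnmatch(schema, spat)
                and fnmatch.fnmatch(name, npat or "*"))
    # unqualified patterns match on the relation name only
    return fnmatch.fnmatch(name, pattern)

assert matches("public.", "public", "a")
assert matches("information_schema.tab*", "information_schema", "tables")
assert not matches("public.", "pg_catalog", "pg_class")
```

Under these rules, `\d schemaname.` already restricts the listing to one schema; the open question in the thread is whether it should show the brief per-schema table list (like bare `\d`) rather than full per-table detail (like `\d *`).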
On Fri, 2006-06-09 at 14:33 -0500, Jim C. Nasby wrote:
Currently, the only way to get a listing of tables in a schema via psql
is to modify your search_path, which is both non-intuitive and a PITA.
I've griped about psql's limited support for schemas in the past:
Jim C. Nasby [EMAIL PROTECTED] writes:
I'd like to propose that the behavior of \d schemaname. be changed to
match
set search_path = schemaname
\d
I'm not sure what your reasoning is here, but AFAICS this would move the
behavior away from what you say you want. What exactly have you got
in
On Fri, Jun 09, 2006 at 04:20:16PM -0400, Tom Lane wrote:
Jim C. Nasby [EMAIL PROTECTED] writes:
I'd like to propose that the behavior of \d schemaname. be changed to
match
set search_path = schemaname
\d
I'm not sure what your reasoning is here, but AFAICS this would move the
It's not uncommon for index creation to take a substantial amount of
time when loading data, even when using the 'trick' of loading the data
before building the indexes. On fast RAID arrays, it's also possible for
this to be a CPU-bound operation, so I've been wondering if there was
some reasonable
Jim C. Nasby [EMAIL PROTECTED] writes:
What I'm looking for is what \d provides you, only limited to a
specific schema. \d information_schema. (for example) doesn't provide
that; it provides the details for every table/view in
information_schema.
What you're looking for is \dt, or perhaps
On Fri, Jun 09, 2006 at 04:55:07PM -0400, Tom Lane wrote:
Jim C. Nasby [EMAIL PROTECTED] writes:
What I'm looking for is what \d provides you, only limited to a
specific schema. \d information_schema. (for example) doesn't provide
that; it provides the details for every table/view in
Jim C. Nasby [EMAIL PROTECTED] writes:
On Fri, Jun 09, 2006 at 04:55:07PM -0400, Tom Lane wrote:
I'd be the first to agree that the behavior of \d isn't particularly
orthogonal, but it's not the pattern language that's the problem, it's
the command itself.
Perhaps \d without an argument
On Fri, Jun 09, 2006 at 03:55:28PM -0400, Tom Lane wrote:
Greg Stark [EMAIL PROTECTED] writes:
There seem to be two types of overhead going on. There's the amount of time
spent in gettimeofday itself which is pretty consistent.
That is a fact not in evidence. The impression I had from
On Tuesday 2006-06-06 20:11, Mark Woodward wrote:
Mark Woodward wrote:
OK, here's my problem, I have a nature study where we have about 10
video cameras taking 15 frames per second.
For each frame we make a few transactions on a PostgreSQL database.
Maybe if you grouped multiple
On Fri, Jun 09, 2006 at 06:20:21PM -0700, Trent Shipley wrote:
Ideally, the transaction management system would be proportional to the
marginal change in size of the database rather than the gross size of the
database. That is, VACUUM being O(N) should be replaced (or there should be
an
Hello,
I can guess some of these:
pg_get_tabledef(): Would take a table name and return the columns and
associated types
pg_get_acldef(): Would take an object name and return the associated
roles and permissions for the object
pg_get_typedefault(): This one I am unsure about