Hi,
On Tue, Jul 28, 2009 at 12:09 AM, Tom Lane t...@sss.pgh.pa.us wrote:
I think you're making things more complicated when they should be
getting simpler.
It strikes me that the current API of "pass the BackendId if known or
InvalidBackendId if not" still works for processes without a
I revised the SE-PostgreSQL Specifications:
http://wiki.postgresql.org/wiki/SEPostgreSQL_Draft
- Put several external links to introduce material too detailed
for the PostgreSQL documentation.
- Paid attention not to use undefined terminology, such as
"security context", "security policy" and
On Sunday 26 July 2009 14:35:41 Sam Mason wrote:
I'm coming to the conclusion that you really need to link to external
material here; there must be good (and canonical) definitions of these
things outside and because SE-PG isn't self contained I really think you
need to link to them.
This is
Thanks for the updates.
I might suggest a couple of small changes:
a) a section that explains comments like "This is not supported in the initial
version" -- do you
mean in the first Beta release of SE-PostgreSQL, or not in the initial
release(s) for commitfests ?
If it is not supported why
On Thu, Jul 23, 2009 at 4:45 PM, Kevin
Grittner kevin.gritt...@wicourts.gov wrote:
Laurent Laborde kerdez...@gmail.com wrote:
(iostat show a 5~25MB/s bandwidth at 100%util instead of 2~5MB/s at
100%util).
Any numbers for overall benefit at the application level?
So... now I'm not sure
On Mon, Jul 27, 2009 at 01:53:07PM -0400, Chris Browne wrote:
s...@samason.me.uk (Sam Mason) writes:
On Sun, Jul 26, 2009 at 01:42:32PM +0900, KaiGai Kohei wrote:
Robert Haas wrote:
In some cases, the clearance of information may be changed. We often
have some more complex requirements
[wretched top-posting -- begs forgiveness!]
KaiGai --
I have edited the first three sections (up to but not including
Architecture), mostly cleaning up language but I did run into a few places
where I am not sure if I got the proper meaning -- I flagged those in square
brackets (e.g. [...])
On Monday 27 July 2009 14:50:30 Alvaro Herrera wrote:
We've developed some code to implement fixed-length datatypes for well
known digest function output (MD5, SHA1 and the various SHA2 types).
These types have minimal overhead and are quite complete, including
btree and hash opclasses.
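To make the idea concrete, here is a hypothetical usage sketch of such digest types; the type name (sha1hash) and table are assumptions for illustration, not names from the actual patch:

```sql
-- Hypothetical usage of the fixed-length digest types described above.
-- "sha1hash" is an assumed type name, not taken from the patch.
CREATE TABLE files (
    path   text PRIMARY KEY,
    digest sha1hash NOT NULL   -- fixed 20-byte value, minimal overhead
);
-- With both btree and hash opclasses provided, either index kind works:
CREATE INDEX files_digest_btree ON files (digest);
CREATE INDEX files_digest_hash  ON files USING hash (digest);
```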
On Friday 24 July 2009 18:15:00 Tom Lane wrote:
Another question is that this proposal effectively redefines the
current_query column as not the current query, but something that
might better be described as latest_query. Should we change the
name? We'd probably break some client code if
On Wed, Jul 22, 2009 at 9:23 PM, Robert Haas robertmh...@gmail.com wrote:
On Sun, Jul 19, 2009 at 4:00 AM, Peter Eisentraut pete...@gmx.net wrote:
Please submit an updated patch.
If you would like to have this change committed during this
CommitFest, please submit an updated patch ASAP.
Peter Eisentraut wrote:
On Sunday 26 July 2009 14:35:41 Sam Mason wrote:
I'm coming to the conclusion that you really need to link to external
material here; there must be good (and canonical) definitions of these
things outside and because SE-PG isn't self contained I really think you
need to
I'm currently rewriting the whole toaster stuff to simply define:
- a compression threshold (size limit to compress, in Nths of a page)
- an external threshold (size limit to externalize compressed data, in
Nths of a page)
I keep the TOAST_INDEX_TARGET and EXTERN_TUPLES_PER_PAGE.
I expect a lot of
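For context, the per-column knobs that already exist in this area (they select the TOAST strategy rather than the thresholds Laurent wants to make configurable) look like this; the table/column names are illustrative:

```sql
-- Existing per-column TOAST strategy control, for comparison with the
-- proposed configurable thresholds:
ALTER TABLE articles ALTER COLUMN body SET STORAGE EXTERNAL; -- out of line, uncompressed
ALTER TABLE articles ALTER COLUMN body SET STORAGE MAIN;     -- compress, prefer inline
```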
On Mon, Jul 27, 2009 at 16:14, Tom Lane t...@sss.pgh.pa.us wrote:
Magnus Hagander mag...@hagander.net writes:
To fix that we'd just have to turn those functions all into returning
boolean and log with LOG instead. AFAIK, we've had zero reports of
this actually happening, so I'm not sure it's
Greg Williamson wrote:
[wretched top-posting -- begs forgiveness!]
KaiGai --
I have edited the first three sections (up to but not including Architecture),
mostly cleaning up language but I did run into a few places where I am not
sure if I got the proper meaning -- I flagged those in
On Tuesday 28 July 2009 15:36:29 KaiGai Kohei wrote:
Peter Eisentraut wrote:
On Sunday 26 July 2009 14:35:41 Sam Mason wrote:
I'm coming to the conclusion that you really need to link to external
material here; there must be good (and canonical) definitions of these
things outside and
Magnus Hagander mag...@hagander.net writes:
On Mon, Jul 27, 2009 at 16:14, Tom Lane t...@sss.pgh.pa.us wrote:
I'm not really insisting on a redesign. I'm talking about the places
where the code author appears not to have understood that ERROR means
FATAL, because the code keeps plugging on
Hi,
It seems postgres caches the plan under CacheMemoryContext during
plpgsql execution.
If there is a function with lots of variables and every one of them has a
default value,
postgres will allocate lots of memory for caching the default value plans (we
have to run
the function at least
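A minimal sketch of the kind of function being described: each DECLARE default expression is planned on the function's first call and the plan is saved in long-lived memory:

```sql
-- Illustration of the situation described above: every default expression
-- below gets its own cached plan under CacheMemoryContext on first call.
CREATE OR REPLACE FUNCTION many_defaults() RETURNS int AS $$
DECLARE
    a int := 1;        -- one cached default-value plan
    b int := 2;        -- another
    c int := a + b;    -- defaults may reference earlier variables
BEGIN
    RETURN c;
END;
$$ LANGUAGE plpgsql;

SELECT many_defaults();  -- first call compiles the function and caches the plans
```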
I wrote:
So far, all tests have shown no difference in performance based on
the patch;
My testing to that point had been on a big machine with 16 CPUs and
128 GB RAM and dozens of spindles. Last night I tried with a dual
core machine with 4 GB RAM and 5 spindles in RAID 5. Still no
On Tue, Jul 28, 2009 at 7:15 AM, Peter Eisentraut pete...@gmx.net wrote:
On Monday 27 July 2009 14:50:30 Alvaro Herrera wrote:
We've developed some code to implement fixed-length datatypes for well
known digest function output (MD5, SHA1 and the various SHA2 types).
These types have minimal
Tao Ma feng_e...@163.com writes:
Once we DROP the function, the memory consumed
by the plan will be leaked.
I'm pretty unconcerned about DROP FUNCTION. The case that seems worth
worrying about is CREATE OR REPLACE FUNCTION, and in that case we'll
reclaim the storage on the next call of the
I'm curious about the pg_regress change ... is it really necessary?
To test the unaccent dictionary we need to input accented characters, and not all
encodings allow that. UTF8 does, but it isn't compatible with a lot of
locales. So, --no-locale should be propagated to CREATE DATABASE
On Tue, Jul 28, 2009 at 2:36 PM, Laurent Laborde kerdez...@gmail.com wrote:
I'm currently rewriting the whole toaster stuff to simply define:
- a compression threshold (size limit to compress, in Nths of a page)
- an external threshold (size limit to externalize compressed data, in
Nths of a page)
[ thinks a bit ... ] At least for GIST, it is possible that whether
data can be regurgitated will vary depending on the selected opclass.
Some opclasses use the STORAGE modifier and some don't. I am not sure
how hard we want to work to support flexibility there. Would it be
sufficient to
On Tue, 28 Jul 2009, Josh Williams wrote:
Maybe pgbench itself is less of a bottleneck in this environment,
relatively speaking?
On UNIXish systems, you know you've reached the conditions under which the
threaded pgbench would be helpful if the pgbench client program itself is
taking up a
... btw, where in the SQL spec do you read that PRIMARY KEY constraints
can't be deferred? I don't see that.
regards, tom lane
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
Laurent Laborde kerdez...@gmail.com wrote:
If it works, and if you're interested, I may try to write a patch
for 8.5.
Cool! I can help with it if you wish.
If you haven't already done so, be sure to read this carefully:
http://wiki.postgresql.org/wiki/Developer_FAQ
Also, be sure you
I know that delete_function() will reclaim the memory context
allocated for the function. But I did not find any code for removing
the plan (the SPI plan memory context) saved by calling _SPI_save_plan.
Is the plan memory context freed when someone issued CREATE OR
REPLACE FUNCTION?
Thanks.
Tom
On Sat, Jul 25, 2009 at 2:08 AM, Brendan Jurd dire...@gmail.com wrote:
2009/7/24 Euler Taveira de Oliveira eu...@timbira.com:
Here is my review. The patch applied without problems. The docs and
regression
tests are included. Both of them worked as expected. Also, you included a fix
in RN
You sent this message to the list. What you want to do is go and
subscribe yourself here:
http://mail.postgresql.org/mj/mj_wwwusr/domain=postgresql.org?func=lists-long-fullextra=pgsql-hackers
...Robert
Hello,
Version 8.4 has brought a very useful function: pg_terminate_backend()
There have been many reports over the years about idle processes remaining on the
server while clients are no longer connected. While this may be due to poor
application code not closing connections correctly, it does
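A minimal sketch of using the new function to clean up long-idle backends, assuming the 8.4-era pg_stat_activity column names (procpid, current_query) and an arbitrary one-hour cutoff:

```sql
-- Terminate backends that have been idle for more than an hour.
-- Column names are the pre-9.2 ones (procpid, current_query).
SELECT pg_terminate_backend(procpid)
FROM pg_stat_activity
WHERE current_query = '<IDLE>'
  AND now() - query_start > interval '1 hour';
```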
On Tue, Jul 28, 2009 at 12:44 PM, Kevin
Grittner kevin.gritt...@wicourts.gov wrote:
Finally, you should probably consider volunteering to review a patch
or two for the next commitfest. :-) To ensure timely review of
submitted patches, while still allowing the reviewers some guarantee
of
On Tue, Jul 28, 2009 at 1:19 PM, Saleem EDAH-TALLY nm...@netcourrier.com wrote:
Hello,
Version 8.4 has brought a very useful function: pg_terminate_backend()
There have been many reports over the years about idle processes remaining on the
server while clients are no longer connected. While
... speaking of adding catalog columns, I just discovered that the patch
adds searches of pg_depend and pg_constraint to BuildIndexInfo. This
seems utterly unacceptable on two grounds:
* It's sheer luck that it gets through bootstrap without crashing.
Those aren't part of the core set of
Building 8.4 on a client's system, I get a regression failure apparently
due to some difference between the system's timezone DB and what our
regression tests expect, as shown below.
I'm wondering if we should not disable the timestamptz regression test
when we configure with the system
On Tue, 2009-07-28 at 13:41 -0400, Tom Lane wrote:
* It's sheer luck that it gets through bootstrap without crashing.
Those aren't part of the core set of catalogs that we expect to be
accessed by primitive catalog searches. I wouldn't be too surprised
if it gets the wrong answer, and the
Andrew Dunstan and...@dunslane.net writes:
Building 8.4 on a client's system, I get a regression failure apparently
due to some difference between the system's timezone DB and what our
regression tests expect, as shown below.
Those regression tests were *intentionally* set up to fail if
Jeff Davis pg...@j-davis.com writes:
On Tue, 2009-07-28 at 13:41 -0400, Tom Lane wrote:
I think we had better add the deferrability state to pg_index
to avoid this.
This might make it difficult to allow multiple constraints to use the
same index.
Huh? That hardly seems possible anyway, if
On Jul 27, 2009, at 5:19 PM, David E. Wheeler wrote:
Yep, that's just what I needed, thanks. I think I'll send a patch
for the Cursors section of the PL/pgSQL documentation that
mentions this. Would have saved me a bunch of hassle.
So would have reading two more sentences of the docs,
Tom Lane wrote:
Andrew Dunstan and...@dunslane.net writes:
Building 8.4 on a client's system, I get a regression failure apparently
due to some difference between the system's timezone DB and what our
regression tests expect, as shown below.
Those regression tests were
Sorry to bring this up, I know you've been fighting about XML for a while.
Currently, I am using XML2 functionality and have tried to get the newer
XPath function to work similarly, but can't quite seem to do it.
I think the current xpath function is too limited. (The docs said to post
problems
On Tue, 2009-07-28 at 15:15 -0400, Tom Lane wrote:
This might make it difficult to allow multiple constraints to use the
same index.
Huh? That hardly seems possible anyway, if some of them want deferred
checks and others do not.
I don't see why it's completely impossible. You could have:
Jeff Davis pg...@j-davis.com writes:
On Tue, 2009-07-28 at 15:15 -0400, Tom Lane wrote:
Sure it does. Whether the check is immediate must be considered a
property of the index itself. Any checking you do later could be
per-constraint, but the index is either going to fail at insert or not.
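To make the behavior under discussion concrete, here is a sketch of what a deferrable uniqueness check would look like; this syntax is the goal of the patch being reviewed and is not accepted by 8.4:

```sql
-- Sketch of deferred uniqueness checking (the feature under discussion;
-- not valid in 8.4):
CREATE TABLE t (
    id int,
    CONSTRAINT t_pk PRIMARY KEY (id) DEFERRABLE INITIALLY IMMEDIATE
);
BEGIN;
SET CONSTRAINTS t_pk DEFERRED;
UPDATE t SET id = id + 1;   -- transient duplicates tolerated mid-transaction
COMMIT;                     -- uniqueness finally verified here
```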
On Tue, Jul 28, 2009 at 3:21 PM, pg...@mohawksoft.com wrote:
Sorry to bring this up, I know you've been fighting about XML for a while.
Currently, I am using XML2 functionality and have tried to get the newer
XPath function to work similarly, but can't quite seem to do it.
I think the
pg...@mohawksoft.com wrote:
Sorry to bring this up, I know you've been fighting about XML for a while.
Currently, I am using XML2 functionality and have tried to get the newer
XPath function to work similarly, but can't quite seem to do it.
I think the current xpath function is too limited.
Andrew Dunstan and...@dunslane.net wrote:
This is really a usage question, which doesn't belong on -hackers.
Perhaps this sentence in the 8.4.0 docs should be amended or removed?:
If you find that some of the functionality of this module is not
available in an adequate form with the newer
[sigh, forgot to cc hackers the first time ]
Foreign key behavior is only sane if the referenced column(s) are
unique. With the proposed patch, it is possible that the uniqueness
check on the referenced columns is deferred, which means it might not
occur till after an FK check does. Discuss.
Kevin Grittner wrote:
Andrew Dunstan and...@dunslane.net wrote:
This is really a usage question, which doesn't belong on -hackers.
Perhaps this sentence in the 8.4.0 docs should be amended or removed?:
If you find that some of the functionality of this module is not
available
Andrew Dunstan and...@dunslane.net wrote:
in fact the desired functionality is present [...] You just need to
use the text() function to get the contents of the node, and an
array subscript to pull it out of the result array.
I just took a quick look, and that didn't jump out at me from the
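Andrew's suggestion can be spelled out with a small example: select the text node in the XPath expression itself, then subscript the xml[] result to get a single value:

```sql
-- Use text() in the XPath expression and an array subscript on the result:
SELECT (xpath('/row/name/text()',
              '<row><name>Alice</name></row>'::xml))[1];
-- yields "Alice"
```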
Hello
I would like to solve some points from the ToDo list. I began with %TYPE [] support.
I think this should be relatively simple, but there is one issue.
There is syntax for declaring an array from a scalar type -
create or replace function x(a int)
returns ... as $$
declare f a%type[] --
begin ...
but there
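A fuller sketch of the syntax Pavel is proposing; this is NOT valid PL/pgSQL in 8.4 and only illustrates the desired feature:

```sql
-- Proposed (hypothetical) syntax: declare an array of a parameter's type.
CREATE OR REPLACE FUNCTION x(a int) RETURNS int[] AS $$
DECLARE
    f a%TYPE[];          -- "array of whatever type parameter a has"
BEGIN
    f := ARRAY[a, a + 1];
    RETURN f;
END;
$$ LANGUAGE plpgsql;
```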
Brendan Jurd wrote:
Please find attached version 4 of the patch, and incremental diff from
version 3. It fixes the bug ( is now accepted as a valid
form of ), and lifts the restriction on only having one digit
before the decimal point.
Looks better but I did some tests and
2009/7/28 Tom Lane t...@sss.pgh.pa.us:
[sigh, forgot to cc hackers the first time ]
Foreign key behavior is only sane if the referenced column(s) are
unique. With the proposed patch, it is possible that the uniqueness
check on the referenced columns is deferred, which means it might not
On Tue, Jul 28, 2009 at 10:53:08PM +0200, Pavel Stehule wrote:
Hello
I would like to solve some points from the ToDo list. I began with %TYPE [] support.
I think this should be relatively simple, but there is one issue.
snip
My first idea is using word element:
create or replace function x(a int[])
On Tue, 2009-07-28 at 22:10 +0100, Dean Rasheed wrote:
Hmm, yes, looking in the SQL spec, I've just noticed this under 11.8,
referential constraint definition:
The table constraint descriptor describing the unique constraint
definition whose unique column list identifies the referenced
Jeff Davis pg...@j-davis.com writes:
Is it a problem to allow unique constraints to be deferrable until the
end of the command though?
Yes. If you do have a case where this matters, the command updating the
referenced table is most likely different from the one updating the
referencing table,
Another thought on the index AM API issues: after poking through the
code I realized that there is *nobody* paying any attention to the
existing bool result of aminsert() (ie, did we insert anything or not).
So I think that instead of adding a bool* parameter, we should repurpose
the function
On Tuesday, July 28, 2009, pg...@mohawksoft.com wrote:
Andrew Dunstan and...@dunslane.net wrote:
in fact the desired functionality is present [...] You just need to
use the text() function to get the contents of the node, and an
array subscript to pull it out of the result array.
I just
Greg Williamson wrote:
Thanks for the updates.
I might suggest a couple of small changes:
a) a section that explains comments like "This is not supported in the
initial version" -- do you
mean in the first Beta release of SE-PostgreSQL, or not in the initial
release(s) for commitfests ?
On Tue, Jul 28, 2009 at 10:28 AM, Kevin
Grittner kevin.gritt...@wicourts.gov wrote:
I wrote:
So far, all tests have shown no difference in performance based on
the patch;
My testing to that point had been on a big machine with 16 CPUs and
128 GB RAM and dozens of spindles. Last night I
Robert Haas robertmh...@gmail.com writes:
The other possibility here is that this just doesn't work. :-)
That's why we wanted to test it ;-).
I don't have time to look right now, but ISTM the original discussion
that led to making that patch had ideas about scenarios where it would
be faster.
pg...@mohawksoft.com wrote:
Another thing that is troubling is that more exotic types do not seem to
be supported at all. For instance, in my example I used uuid, and if one
substitutes uuid() for text() that doesn't work.
text() is an XPath function, with well defined semantics that
On Tue, 2009-07-28 at 12:10 -0400, Greg Smith wrote:
If your test system
is still setup, it might be interesting to try the 64 and 128 client cases
with Task Manager open, to see what percentage of the CPU the pgbench
driver program is using. If the pgbench client isn't already pegged at a
Pavel Stehule pavel.steh...@gmail.com writes:
I would like to solve some points from the ToDo list. I began with %TYPE [] support.
plpgsql's %type support is a crock that's going to have to be rewritten
from the ground up as soon as we consolidate the lexer with the core.
I wouldn't suggest spending any time
2009/7/29 Tom Lane t...@sss.pgh.pa.us:
Pavel Stehule pavel.steh...@gmail.com writes:
I would like to solve some points from the ToDo list. I began with %TYPE [] support.
plpgsql's %type support is a crock that's going to have to be rewritten
from the ground up as soon as we consolidate the lexer with the