On Tue, Apr 3, 2012 at 11:12 PM, Greg Stark st...@mit.edu wrote:
On Wed, Apr 4, 2012 at 1:19 AM, Dave Page dp...@pgadmin.org wrote:
then, we're talking about making parts of the filesystem
world-writeable so it doesn't even matter if the user is running as an
admin for a trojan or some other
On Tue, Apr 3, 2012 at 9:26 AM, Andrew Dunstan and...@dunslane.net wrote:
First, either the creation of the destination directory needs to be delayed
until all the sanity checks have passed and we're sure we're actually going
to write something there, or it needs to be removed if we error exit
Using a cursor argument name equal to another plpgsql variable results
in the error:
cursor .. has no argument named
The attached patch fixes that.
Instead of solving the issue as is done in the patch, another way
would be to expose internal_yylex() so that it could be used instead of
On Tue, Apr 3, 2012 at 11:04 AM, Robert Haas robertmh...@gmail.com wrote:
OK, but it seems like a pretty fragile assumption that none of the
workers will ever manage to emit any other error messages. We don't
rely on this kind of assumption in the backend, which is a lot
better-structured and
On Mon, Apr 2, 2012 at 12:33 PM, Robert Haas robertmh...@gmail.com wrote:
This particular example shows the above chunk of code taking 13s to
execute. Within 3s, every other backend piles up behind that, leading
to the database getting no work at all done for a good ten seconds.
My guess is
On 04/04/2012 05:03 AM, Joachim Wieland wrote:
Second, all the PrintStatus traces are annoying and need to be removed, or
perhaps better only output in debugging mode (using ahlog() instead of just
printf())
Sure, PrintStatus is just there for now to see what's going on. My
plan was to remove
On 4.4.2012 05:35, Greg Smith wrote:
On 03/05/2012 05:20 PM, Tomas Vondra wrote:
What is the current state of this effort? Is there someone else working
on that? If not, I propose this (for starters):
* add a new page Performance results to the menu, with a list of
members that
Hi,
While looking into SSL code in secure_read() of be-secure.c and
pqsecure_read() of fe-secure.c, I noticed subtle difference between
them.
In secure_read:
--
case SSL_ERROR_WANT_READ:
case
On Wed, Apr 4, 2012 at 6:26 AM, Josh Berkus j...@agliodbs.com wrote:
While I was doing this I always thought this would have been a better
approach for my previous project, an accounting application. If I could
just have stored entities like invoice customer as a single document
that
Shigeru HANADA wrote:
During a foreign scan, type input functions are used to convert
the text representation of values. If a foreign table is
misconfigured,
you can get error messages from these functions, like:
ERROR: invalid input syntax for type double precision: etwas
or
ERROR:
Marko Kreen mark...@gmail.com writes:
On Tue, Apr 03, 2012 at 05:32:25PM -0400, Tom Lane wrote:
Well, there are really four levels to the API design:
* Plain old PQexec.
* Break down PQexec into PQsendQuery and PQgetResult.
* Avoid waiting in PQgetResult by testing PQisBusy.
* Avoid waiting
umi.tan...@gmail.com writes:
http://www.postgresql.org/docs/9.1/static/spi-spi-execute.html
===
SPI_execute("INSERT INTO foo SELECT * FROM bar", false, 5);
will allow at most 5 rows to be inserted into the table.
===
This seems not true unless I'm missing something.
Hmm ... that did work as
Yeb Havinga yebhavi...@gmail.com writes:
Using a cursor argument name equal to another plpgsql variable results
in the error:
cursor .. has no argument named
The attached patch fixes that.
Instead of solving the issue as is done in the patch, another way
would be to expose
Tatsuo Ishii is...@postgresql.org writes:
Those code fragments judge the return value from
SSL_read(). secure_read() retries on both SSL_ERROR_WANT_READ *and*
SSL_ERROR_WANT_WRITE. However, pqsecure_read() does not retry
on SSL_ERROR_WANT_READ. They do not seem to be consistent.
On Wed, Apr 4, 2012 at 17:57, Tom Lane t...@sss.pgh.pa.us wrote:
Tatsuo Ishii is...@postgresql.org writes:
Those code fragments judge the return value from
SSL_read(). secure_read() retries on both SSL_ERROR_WANT_READ *and*
SSL_ERROR_WANT_WRITE. However, pqsecure_read() does not
Scott Mead sco...@openscg.com writes:
Personally, I feel that if unix will let you be stupid:
$ export PATH=/usr/bin:/this/invalid/crazy/path
$ echo $PATH
/usr/bin:/this/invalid/crazy/path
PG should trust that I'll get where I'm going eventually :)
Well, that's an interesting
On Wed, Apr 4, 2012 at 12:02 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Scott Mead sco...@openscg.com writes:
Personally, I feel that if unix will let you be stupid:
$ export PATH=/usr/bin:/this/invalid/crazy/path
$ echo $PATH
/usr/bin:/this/invalid/crazy/path
PG should trust
Andrew Dunstan and...@dunslane.net writes:
On 04/02/2012 01:03 PM, Tom Lane wrote:
When I said list, I meant a List *. No fixed size.
Ok, like this?
I think this could use a bit of editorialization (I don't think the
stripe terminology is still applicable, in particular), but the
general
Magnus Hagander mag...@hagander.net writes:
On Wed, Apr 4, 2012 at 17:57, Tom Lane t...@sss.pgh.pa.us wrote:
I rather wonder whether the #ifdef WIN32 bit in the backend isn't dead
code though. If the port isn't in nonblock mode, we shouldn't really
get here at all, should we?
Not in a
Scott Mead sco...@openscg.com writes:
On Wed, Apr 4, 2012 at 12:02 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Well, that's an interesting analogy. Are you arguing that we should
always accept any syntactically-valid search_path setting, no matter
whether the mentioned schemas exist? It wouldn't
On Wed, Apr 4, 2012 at 8:00 AM, Robert Haas robertmh...@gmail.com wrote:
There's some apparent regression on the single-client test, but I'm
inclined to think that's a testing artifact of some kind and also
probably not very important. It would be worth paying a small price
in throughput to
On Wed, Apr 4, 2012 at 12:22 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Scott Mead sco...@openscg.com writes:
On Wed, Apr 4, 2012 at 12:02 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Well, that's an interesting analogy. Are you arguing that we should
always accept any syntactically-valid search_path
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
I don't think I'm getting my point across by explaining, so here's a
modified version of the patch that does what I was trying to say.
Minor side point: some of the diff noise in this patch comes from
On Wed, Apr 4, 2012 at 9:53 AM, Dobes Vandermeer dob...@gmail.com wrote:
I think there is something to be gained by having a first-class concept of a
document in the database. It might save some trouble managing
parent/child relations, versioning, things like that.
Methinks this needs a *lot*
On 04.04.2012 19:32, Tom Lane wrote:
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
I don't think I'm getting my point across by explaining, so here's a
modified version of the patch that does what I was trying to say.
Minor side point: some of the diff noise in this patch
2012/4/4 Heikki Linnakangas heikki.linnakan...@enterprisedb.com:
On 04.04.2012 19:32, Tom Lane wrote:
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
I don't think I'm getting my point across by explaining, so here's a
modified version of the patch that does what I was trying
Robert Haas robertmh...@gmail.com writes:
On Wed, Apr 4, 2012 at 12:22 PM, Tom Lane t...@sss.pgh.pa.us wrote:
You're getting squishy on me ...
My feeling on this is that it's OK to warn if the search_path is set
to something that's not valid, and it might also be OK to not warn.
Right now we
On 04/04/2012 12:13 PM, Tom Lane wrote:
Andrew Dunstan and...@dunslane.net writes:
On 04/02/2012 01:03 PM, Tom Lane wrote:
When I said list, I meant a List *. No fixed size.
Ok, like this?
I think this could use a bit of editorialization (I don't think the
stripe terminology is still
On Wed, Apr 4, 2012 at 1:00 PM, Robert Haas robertmh...@gmail.com wrote:
, everybody's next few CLOG requests hit some other
buffer but eventually the long-I/O-in-progress buffer again becomes
least recently used and the next CLOG eviction causes a second backend
to begin waiting for that
Andrew Dunstan and...@dunslane.net writes:
On 04/04/2012 12:13 PM, Tom Lane wrote:
Does anyone feel that it's a bad idea that list entries are never
reclaimed? In the worst case a transient load peak could result in
a long list that permanently adds search overhead. Not sure if it's
worth
On Wed, Apr 4, 2012 at 1:00 PM, Robert Haas robertmh...@gmail.com wrote:
3. I noticed that the blocking described by slru.c:311 blocked by
slru.c:405 seemed to be clumpy - I would get a bunch of messages
about that all at once. This makes me wonder if the SLRU machinery is
occasionally making
Excerpts from Greg Stark's message of Wed Apr 04 14:11:29 -0300 2012:
On Wed, Apr 4, 2012 at 1:00 PM, Robert Haas robertmh...@gmail.com wrote:
, everybody's next few CLOG requests hit some other
buffer but eventually the long-I/O-in-progress buffer again becomes
least recently used and the
Hello, this is the new version of the dblink patch.
- Calling dblink_is_busy prevents row processor from being used.
- some PGresult leak fixed.
- Rebased to current head.
A hack on top of that hack would be to collect the data into a
tuplestore that contains all text columns, and then convert
... would be really nice to have. Especially pgbench and pg_upgrade for
me, but it would be useful to have man pages for everything.
Unfortunately, we can't just replace the sect1's in Appendix F [0]
with refentry's, because the content model of DocBook doesn't allow
that. (You can't have a
I wrote:
The idea I had in mind was to compensate for adding list-removal logic
by getting rid of the concept of an unused entry. If the removal is
conditional then you can't do that and you end up with the complications
of both methods. Anyway I've not tried to code it yet.
I concluded
Kyotaro HORIGUCHI horiguchi.kyot...@oss.ntt.co.jp writes:
What I'm currently thinking we should do is just use the old method
for async queries, and only optimize the synchronous case.
Ok, I agree with you except for the performance issue. I give up using the
row processor for async queries with
On 2012-04-04 17:12, Boszormenyi Zoltan wrote:
On 2012-04-04 16:22, Boszormenyi Zoltan wrote:
On 2012-04-04 15:17, Boszormenyi Zoltan wrote:
Hi,
On 2012-04-04 12:30, Boszormenyi Zoltan wrote:
Hi,
attached is a patch to implement a framework to simplify and
Excerpts from Peter Eisentraut's message of Wed Apr 04 15:53:20 -0300 2012:
... would be really nice to have. Especially pgbench and pg_upgrade for
me, but it would be useful to have man pages for everything.
Unfortunately, we can't just replace the sect1's in Appendix F [0]
with
I think this patch is doing two things: first touching infrastructure
stuff and then adding lock_timeout on top of that. Would it work to
split the patch in two pieces?
--
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication,
Excerpts from Joachim Wieland's message of Wed Apr 04 15:43:53 -0300 2012:
On Wed, Apr 4, 2012 at 8:27 AM, Andrew Dunstan and...@dunslane.net wrote:
Sure, PrintStatus is just there for now to see what's going on. My
plan was to remove it entirely in the final patch.
We need that final
2012/4/4 Heikki Linnakangas heikki.linnakan...@enterprisedb.com:
On 30.03.2012 12:36, Pavel Stehule wrote:
2012/3/28 Heikki Linnakangas heikki.linnakan...@enterprisedb.com:
In prepare_expr(), you use a subtransaction to catch any ERRORs that
happen
during parsing the expression. That's a
On Wed, 2012-04-04 at 16:29 -0300, Alvaro Herrera wrote:
Unfortunately, we can't just replace the sect1's in Appendix F [0]
with refentry's, because the content model of DocBook doesn't allow
that. (You can't have a mixed sequence of sect1 and refentry, only one
or the other.)
Hm,
On Wed, Apr 4, 2012 at 1:11 PM, Greg Stark st...@mit.edu wrote:
On Wed, Apr 4, 2012 at 1:00 PM, Robert Haas robertmh...@gmail.com wrote:
, everybody's next few CLOG requests hit some other
buffer but eventually the long-I/O-in-progress buffer again becomes
least recently used and the next CLOG
Every so often I find myself trying to write
postgres -D ... --ssl
or
postgres -D ... --debug-print-plan
which fails, because you need to write --ssl=on or
--debug-print-plan=true etc.
Have others had the same experience? Would it be worth supporting the
case without value to default to on/true?
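For illustration, this is what getopt_long's optional_argument provides: a long option whose missing value falls back to a default. A hypothetical sketch (the option table and parse_ssl() helper are illustrative, not the actual postgres option parser):

```c
#include <getopt.h>
#include <stddef.h>

/* Sketch: "--ssl" alone defaults to "on", "--ssl=off" overrides it.
 * Not the real postgres code; just the optional_argument mechanism. */
const char *parse_ssl(int argc, char **argv)
{
    static const struct option longopts[] = {
        {"ssl", optional_argument, NULL, 's'},
        {NULL, 0, NULL, 0}
    };
    const char *val = NULL;
    int c;

    optind = 1;                 /* restart scanning for repeated calls */
    while ((c = getopt_long(argc, argv, "", longopts, NULL)) != -1) {
        if (c == 's')
            val = optarg ? optarg : "on";   /* no value given => "on" */
    }
    return val;
}
```

Note that with optional_argument the value must be attached as `--ssl=off`; a separate `--ssl off` would not bind, which is consistent with how such defaults usually behave.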
On 4 April 2012 19:53, Peter Eisentraut pete...@gmx.net wrote:
... would be really nice to have. Especially pgbench and pg_upgrade for
me, but it would be useful to have man pages for everything.
Unfortunately, we can't just replace the sect1's in Appendix F [0]
with refentry's, because
Dave Page wrote:
Exactly - which is why I was objecting to recommending a distribution
of PostgreSQL that came in a packaging system that we were told
changed /usr/local to be world writeable to avoid the use/annoyance of
the standard security measures on the platform.
Well... that's not
On Fri, 2011-12-23 at 19:51 +0200, Peter Eisentraut wrote:
On Wed, 2011-12-21 at 11:04 +0100, Pavel Stehule wrote:
this patch adds a bytea_agg aggregation.
It allows fast bytea concatenation.
Why not call it string_agg? All the function names are the same between
text and bytea (e.g.,
On Wed, Apr 4, 2012 at 1:00 PM, Robert Haas robertmh...@gmail.com wrote:
I'll do some testing to try to confirm whether this theory is correct
and whether the above fix helps.
Very interesting work.
Having performed this investigation, I've discovered a couple of
interesting things.
On Wed, Apr 4, 2012 at 6:25 PM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
Excerpts from Greg Stark's message of Wed Apr 04 14:11:29 -0300 2012:
On Wed, Apr 4, 2012 at 1:00 PM, Robert Haas robertmh...@gmail.com wrote:
, everybody's next few CLOG requests hit some other
buffer but
On Wed, Apr 4, 2012 at 9:05 PM, Robert Haas robertmh...@gmail.com wrote:
Yes, the SLRU is thrashing heavily. In this configuration, there are
32 CLOG buffers. I just added an elog() every time we replace a
buffer. Here's a sample of how often that's firing, by second, on
this test (pgbench
On 04-04-2012 17:07, Peter Eisentraut wrote:
postgres -D ... --debug-print-plan
which fails, because you need to write --ssl=on or
--debug-print-plan=true etc.
Have others had the same experience? Would it be worth supporting the
case without value to default to on/true?
Please, don't
On 04/04/2012 03:09 PM, Tom Lane wrote:
I wrote:
The idea I had in mind was to compensate for adding list-removal logic
by getting rid of the concept of an unused entry. If the removal is
conditional then you can't do that and you end up with the complications
of both methods. Anyway I've
On Wed, Apr 4, 2012 at 10:17 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Given the lack of consensus around the suspension API, maybe the best
way to get the underlying libpq patch to a committable state is to take
it out --- that is, remove the return zero option for row processors.
Since we don't
I'm afraid the latest dblink patch does not re-initialize
materialize_needed for the next query.
I will confirm that and send another one if needed in a few hours.
# I need to catch the train I usually get on..
Hello, this is the new version of the dblink patch.
regards,
--
Kyotaro Horiguchi
On Wed, Apr 4, 2012 at 9:34 PM, Simon Riggs si...@2ndquadrant.com wrote:
Why is this pgbench run accessing so much unhinted data that is 1
million transactions old? Do you believe those numbers? Looks weird.
I think this is in the nature of the workload pgbench does. Because
the updates are
On Wed, Apr 4, 2012 at 9:05 PM, Robert Haas robertmh...@gmail.com wrote:
Here's a sample of how often that's firing, by second, on
this test (pgbench with 32 clients):
4191 19:54:21
4540 19:54:22
Hm, so if that's evenly spread out that's 1/4ms between slru flushes
and if each flush takes
Marko Kreen mark...@gmail.com writes:
On Wed, Apr 4, 2012 at 10:17 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Given the lack of consensus around the suspension API, maybe the best
way to get the underlying libpq patch to a committable state is to take
it out --- that is, remove the return zero
Kyotaro HORIGUCHI horiguchi.kyot...@oss.ntt.co.jp writes:
I'm afraid the latest dblink patch does not re-initialize
materialize_needed for the next query.
I will confirm that and send another one if needed in a few hours.
I've committed a revised version of the previous patch. I'm not
Peter Eisentraut pete...@gmx.net writes:
On Fri, 2011-12-23 at 19:51 +0200, Peter Eisentraut wrote:
On Wed, 2011-12-21 at 11:04 +0100, Pavel Stehule wrote:
this patch adds a bytea_agg aggregation.
Why not call it string_agg?
Here is a patch to do the renaming. As it stands, it fails the
Greg Stark st...@mit.edu writes:
On Wed, Apr 4, 2012 at 9:34 PM, Simon Riggs si...@2ndquadrant.com wrote:
Why is this pgbench run accessing so much unhinted data that is 1
million transactions old? Do you believe those numbers? Looks weird.
I think this is in the nature of the workload
On 4/4/12 4:02 PM, Tom Lane wrote:
Greg Stark st...@mit.edu writes:
On Wed, Apr 4, 2012 at 9:34 PM, Simon Riggs si...@2ndquadrant.com wrote:
Why is this pgbench run accessing so much unhinted data that is 1
million transactions old? Do you believe those numbers? Looks weird.
I think this
On Tue, Apr 3, 2012 at 7:29 AM, Huchev hugochevr...@gmail.com wrote:
For a C implementation, it could be interesting to consider the LZ4 algorithm, since
it is written natively in this language. In contrast, Snappy has been ported
to C by Andy from the original C++ Google code, which also translates
On Wed, Apr 4, 2012 at 11:59 PM, Tom Lane t...@sss.pgh.pa.us wrote:
The renaming you propose would only be acceptable to those who have
forgotten that history. I haven't.
I had. I looked it up
http://archives.postgresql.org/pgsql-bugs/2010-08/msg00044.php
That was quite a thread.
--
greg
On Wed, Apr 4, 2012 at 4:23 PM, Simon Riggs si...@2ndquadrant.com wrote:
Measurement?
Sounds believable, I just want to make sure we have measured things.
Yes, I measured things. I didn't post the results because they're
almost identical to the previous set of results which I already
posted.
On Wed, Apr 4, 2012 at 4:34 PM, Simon Riggs si...@2ndquadrant.com wrote:
Interesting. You've spoken at length how this hardly ever happens and
so this can't have any performance effect. That was the reason for
kicking out my patch addressing clog history, wasn't it?
Uh, no, the reason for
On Wed, Apr 4, 2012 at 7:02 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Greg Stark st...@mit.edu writes:
On Wed, Apr 4, 2012 at 9:34 PM, Simon Riggs si...@2ndquadrant.com wrote:
Why is this pgbench run accessing so much unhinted data that is 1
million transactions old? Do you believe those
I'm afraid the latest dblink patch does not re-initialize
materialize_needed for the next query.
I've found no need to worry about the re-initializing issue.
I've committed a revised version of the previous patch.
Thank you for that.
I'm not sure that the case of dblink_is_busy not
On Wed, Apr 4, 2012 at 12:47 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Now, Scott's comment seems to me to offer a principled way out of this:
if we define the intended semantics of search_path as being similar
to the traditional understanding of Unix PATH, then it's not an error
or even
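Tom's PATH analogy implies lookup semantics where entries naming nonexistent schemas are silently skipped, the way a shell skips bogus $PATH directories. A toy sketch of that rule (the known-schema list, function names, and helpers are all illustrative; this is not PostgreSQL's actual search_path resolution code):

```c
#include <string.h>
#include <stddef.h>

/* Stand-in for a catalog lookup: does this schema exist?
 * The list here is purely illustrative. */
int schema_exists(const char *name)
{
    static const char *known[] = {"pg_catalog", "public", NULL};
    for (int i = 0; known[i] != NULL; i++)
        if (strcmp(known[i], name) == 0)
            return 1;
    return 0;
}

/* Return the first existing schema named in a comma-separated
 * search_path, copied into buf, or NULL if none exists. Missing
 * entries are skipped rather than raising an error, PATH-style. */
const char *first_valid_schema(const char *path, char *buf, size_t buflen)
{
    while (*path) {
        const char *sep = strchr(path, ',');
        size_t len = sep ? (size_t)(sep - path) : strlen(path);
        if (len > 0 && len < buflen) {
            memcpy(buf, path, len);
            buf[len] = '\0';
            if (schema_exists(buf))
                return buf;     /* first hit wins, as with $PATH */
        }
        if (sep == NULL)
            break;
        path = sep + 1;         /* skip the missing entry, keep scanning */
    }
    return NULL;
}
```

Under these semantics a setting like `nosuch,public` is not an error; lookups simply resolve in `public`, which is the behavior the PATH analogy argues for.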
Hi all,
I've written a pg_upgrade wrapper for upgrading our users (heroku) to
postgres 9.1. In the process I encountered a specific issue that could
easily be improved. We've had this process work consistently for many users
both internal and external, with the exception of just a few for whom
On Wed, Apr 4, 2012 at 4:10 PM, Thom Brown t...@linux.com wrote:
+1 to anything that separates these out. Cramming them into one list
like we currently have is confusing.
+1 as well.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
--
Sent via
Harold,
* Harold Giménez (harold.gime...@gmail.com) wrote:
Possible workarounds on the current version:
This has actually been discussed before and unfortunately there aren't
any trivial solutions.
* Rewrite pg_hba.conf temporarily while the pg_upgrade script runs to
disallow any other
Stephen Frost sfr...@snowman.net writes:
The single-user option *sounds* viable, but, iirc, it actually isn't due
to the limitations on what can be done in that mode.
Yeah. IMO the right long-term fix is to be able to run pg_dump and psql
talking to a standalone backend, but nobody's gotten