hi again,
On 20 Dec 2010, at 18:48, Tom Lane wrote:
So, now I'm using PQisBusy to check whether the server is still busy, so I can
safely call PQgetResult without blocking, or just wait *some time*
before calling PQisBusy again.
Your proposed code is still a busy-wait loop. What you
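The standard alternative documented for libpq's async API is to block in select() on the connection's socket instead of polling. A sketch (assumes an already-connected PGconn with a query sent via PQsendQuery; error handling kept minimal):

```c
#include <sys/select.h>
#include <libpq-fe.h>

/* Wait for the next result without busy-looping:
 * sleep in select() until the server sends data, then let
 * libpq consume it and re-check whether a result is ready. */
PGresult *wait_for_result(PGconn *conn)
{
    int sock = PQsocket(conn);

    while (PQisBusy(conn)) {
        fd_set readable;
        FD_ZERO(&readable);
        FD_SET(sock, &readable);
        /* blocks here instead of spinning */
        if (select(sock + 1, &readable, NULL, NULL, NULL) < 0)
            return NULL;              /* select() failed */
        if (!PQconsumeInput(conn))    /* read whatever arrived */
            return NULL;              /* connection trouble */
    }
    return PQgetResult(conn);         /* guaranteed not to block now */
}
```

This way the process sleeps until the kernel reports data on the socket, so there is no polling interval to tune.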
Hello,
This question is just for my curiosity...
When an index is available for a query, the planner decides whether to
use it or not depending on whether it would make the query perform
better, right? However, if an index that does not exist would make
the query run better, the planner
Hi Alban
Many thanks for your answers.
You answered:
1. Filter out all SQL commands which are *not* read-only (no DROP
Most people do this using permissions.
Oh, yes: forgot to mention that; that's obvious. What I also looked
for was the PL/pgSQL's EXECUTE command-string.
2. Get the
Ciao Dario,
On Tue, 21 Dec 2010 09:14:36 +, Dario Beraldi
dario.bera...@ed.ac.uk wrote:
the query run better the planner is not able (allowed?) to create
such
index, use it, and drop it once the query is done. Why is it so?
Because it is not its responsibility. This is the simplest and
Hello Dario,
When an index is available for a query, the planner decides whether to use
it or not depending on whether it would make the query perform better,
right? However, if an index that does not exist would make the query run
better, the planner is not able (allowed?) to create such
Hi Harald,
On Tue, 21 Dec 2010 11:42:40 +0100, Massa, Harald Armin
c...@ghum.de wrote:
a) There is a proposal (and, for the time being, also some code on
pgfoundry) for creating hypothetical indexes
On 21 Dec 2010, at 10:57, Stefan Keller wrote:
You answered:
1. Filter out all SQL commands which are *not* read-only (no DROP
Most people do this using permissions.
Oh, yes: forgot to mention that; that's obvious. What I also looked
for was the PL/pgSQL's EXECUTE command-string.
I'm not
Dear all,
I am not able to find any useful document regarding configuring and
running PgBouncer with Postgres 8.4.2.
How does it help, and can it boost performance?
Or if there is another useful tool available for Connection Pooling.
Please guide me for this.
Thanks & Regards
hello.
I think that left/right joins with a limit may be optimized.
When there are no WHERE conditions, this may be executed as below:
Limit N
  Merge Left Join
    Sort Top N
      Bitmap Heap Scan
        ...
    Sort
      Bitmap Heap Scan
        ...
pasman
--
Sent via pgsql-general mailing
Ok, thanks a lot to all of you for your answers! (Always impressed by
the prompt feedback you get on this list!)
Quoting Gabriele Bartolini gabriele.bartol...@2ndquadrant.it:
Ciao Dario,
On Tue, 21 Dec 2010 09:14:36 +, Dario Beraldi
dario.bera...@ed.ac.uk wrote:
the query run better
On 2010-12-21 10:42, Massa, Harald Armin wrote:
b) creating an index requires reading the data-to-be-indexed. So, to have an
index pointing at the interesting rows for your query, the table has to be
read ... which would be the perfect time to already select the interesting
rows. And after
Hi there,
Have been trying to uninstall old instances of Postgres from my Snow
Leopard install, preparing to install 9.0
Not sure how old these instances are (they probably date back to 7). I can
see them in the active process list, but I'm not sure how to
permanently stop them. Any old timers have
On 2010-12-21 10:42, Massa, Harald Armin wrote:
b) creating an index requires reading the data-to-be-indexed. So, to have an
index pointing at the interesting rows for your query, the table has to be
read ... which would be the perfect time to already select the interesting
rows. And
On Mon, Dec 20, 2010 at 8:53 PM, Craig Ringer
cr...@postnewspapers.com.au wrote:
Do you have a trusted boot path from BIOS to bootloader to kernel to init
core userspace, where everything is digitally signed (by you or someone
else) and verified before execution? Do you disable kernel module
I don't think the planner should do things like creating an index. But it
might hint at doing it in the logs.
There was a discussion around that sort of feature on -hackers not so
long ago. I don't remember what the conclusion was, but probably
that it just isn't worth wasting planner cycles
I don't think the planner should do things like creating an index. But it
might hint at doing it in the logs.
There was a discussion around that sort of feature on -hackers not so
long ago. I don't remember what the conclusion was, but probably
that it just isn't worth wasting the planner's
On 2010-12-21 14:26, t...@fuzzy.cz wrote:
Why not auto-create indices for some limited period after database load
(copy? any large number of inserts from a single connection?), track those
that actually get re-used and remove the rest? Would this not provide
a better out-of-the-box experience
On Tue, Dec 21, 2010 at 7:34 AM, Jeremy Harris j...@wizmail.org wrote:
On 2010-12-21 14:26, t...@fuzzy.cz wrote:
Why not auto-create indices for some limited period after database load
(copy? any large number of inserts from a single connection?), track those
that actually get re-used and
2010/12/21 Adarsh Sharma adarsh.sha...@orkash.com:
Dear all,
I am not able to find any useful document regarding configuring and
running PgBouncer with Postgres 8.4.2.
that's strange, there are several good pages on the web; there is also
my mini-howto:
2010/12/21 Filip Rembiałkowski filip.rembialkow...@gmail.com:
Or if there is another useful tool available for Connection Pooling. Please
guide me for this.
yes there are some; see
http://wiki.postgresql.org/wiki/Replication,_Clustering,_and_Connection_Pooling
it depends on what you need.
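To get started, a minimal pgbouncer.ini along these lines is usually enough (the host, paths, and pool sizes here are illustrative, not recommendations):

```ini
[databases]
; clients connecting to "mydb" on port 6432 get pooled onto this server
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = session
max_client_conn = 200
default_pool_size = 20
```

The performance win comes from reusing server connections instead of paying backend startup cost per client connection; whether that helps depends on how connection-churn-heavy the application is.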
Richard Broersma richard.broer...@gmail.com wrote:
2010/12/21 Filip Rembiałkowski filip.rembialkow...@gmail.com:
Or if there is another useful tool available for Connection Pooling. Please
guide me for this.
yes there are some; see
PostgreSQLers,
I'm hoping for some help creating a constraint/key on a table such that there
are no overlapping ranges of dates for any id.
Specifically: Using PostgreSQL 9.0.1, I'm creating a name-value pair table
as such:
CREATE TABLE tbl (id INTEGER, start_date DATE, stop_date DATE,
On Tue, Dec 21, 2010 at 7:49 AM, McGehee, Robert
robert.mcge...@geodecapital.com wrote:
PostgreSQLers,
I'm hoping for some help creating a constraint/key on a table such that there
are no overlapping ranges of dates for any id.
There is something you can try, but it is not exactly what you
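For reference, this is exactly what exclusion constraints are for. The sketch below is written against later releases (9.2+, where the daterange type and CREATE EXTENSION exist); on 9.0 the same idea needs the contrib period type instead. Column names follow the OP's table:

```sql
CREATE EXTENSION btree_gist;  -- needed for the "id WITH =" part

CREATE TABLE tbl (
    id         INTEGER,
    start_date DATE,
    stop_date  DATE,
    -- reject two rows with the same id whose [start, stop] ranges overlap
    EXCLUDE USING gist (
        id WITH =,
        daterange(start_date, stop_date, '[]') WITH &&
    )
);
```

An INSERT whose date range overlaps an existing row with the same id then fails with a constraint violation, with no trigger code required.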
Hello, I'm sending a group of queries to the database with PQsendQuery
and using PQgetResult to return results, similar to this:
PQsendQuery( conn, "select current_timestamp; select pg_sleep(1); select
current_timestamp" );
while( result = PQgetResult( conn ) )
doSomethingWith( result );
I'm finding that
On 21.12.2010 16:34, Jeremy Harris wrote:
There really is no automatic way to solve this puzzle using a single
query. Indexing strategy is a very tough design discipline, and it
requires complex knowledge of the workload. One slow query does not
mean
the index should be created - what
On 2010-12-21 18:50, Tomas Vondra wrote:
Then the index you just built gets automatically dropped, as I said above.
I'm a bit confused. Should the indexes be dropped automatically (as you
state here) or kept for the future? Because if they should be dropped,
then it does not make sense to do
You can't concurrently execute queries from within a single
connection. Perhaps you should use multiple connections, while
understanding the implications of having each operate within a
separate snapshot.
Don't forget to free memory with PQclear() . I guess you omitted that
because it's just
On 21.12.2010 20:03, Jeremy Harris wrote:
On 2010-12-21 18:50, Tomas Vondra wrote:
Then the index you just built gets automatically dropped, as I said
above.
I'm a bit confused. Should the indexes be dropped automatically (as you
state here) or kept for the future? Because if they
On Tue, Dec 21, 2010 at 2:21 PM, Peter Geoghegan
peter.geoghega...@gmail.com wrote:
You can't concurrently execute queries from within a single
connection. Perhaps you should use multiple connections, while
understanding the implications of having each operate within a
separate snapshot.
OP
Yes, I omitted the PQclear for simplicity.
I'm not concurrently executing queries, I'm sending multiple queries
to be executed serially by the backend. I'm expecting the server to
send me the PQresult objects as each query completes rather than
sending them all *after* all of the queries have
This should do it:

#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

#define CONNINFO "your info here"
#define COMMANDS "select current_timestamp; select pg_sleep(5); select current_timestamp"

void fatal( const char *msg ) { fprintf( stderr, "%s\n", msg ); exit(1); }

int
main()
{
    PGresult *res;
    PGconn *conn = PQconnectdb( CONNINFO );

    if ( PQstatus( conn ) != CONNECTION_OK )
        fatal( PQerrorMessage( conn ) );
    if ( !PQsendQuery( conn, COMMANDS ) )
        fatal( PQerrorMessage( conn ) );
    while ( (res = PQgetResult( conn )) != NULL )
    {
        printf( "%s\n", PQgetvalue( res, 0, 0 ) );
        PQclear( res );
    }
    PQfinish( conn );
    return 0;
}
Kelly Burkhart wrote:
#define COMMANDS select current_timestamp; select pg_sleep(5); select
current_timestamp
You should use current_clock() instead of current_timestamp, because
current_timestamp returns a fixed value throughout a transaction.
Best regards,
--
Daniel
On Tue, Dec 21, 2010 at 3:07 PM, Daniel Verite dan...@manitou-mail.org wrote:
Kelly Burkhart wrote:
#define COMMANDS "select current_timestamp; select pg_sleep(5); select
current_timestamp"
You should use clock_timestamp() instead of current_timestamp, because
current_timestamp returns a
On Tue, Dec 21, 2010 at 3:14 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Tue, Dec 21, 2010 at 3:07 PM, Daniel Verite dan...@manitou-mail.org
wrote:
Kelly Burkhart wrote:
#define COMMANDS "select current_timestamp; select pg_sleep(5); select
current_timestamp"
You should use
On Tue, Dec 21, 2010 at 3:37 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Tue, Dec 21, 2010 at 3:14 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Tue, Dec 21, 2010 at 3:07 PM, Daniel Verite dan...@manitou-mail.org
wrote:
Kelly Burkhart wrote:
#define COMMANDS select
On Tue, Dec 21, 2010 at 3:40 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Tue, Dec 21, 2010 at 3:37 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Tue, Dec 21, 2010 at 3:14 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Tue, Dec 21, 2010 at 3:07 PM, Daniel Verite dan...@manitou-mail.org
2010/12/21 Andreas Kretschmer akretsch...@spamfence.net:
I'm looking for a solution to split read and write access to different
servers (streaming replication, you know ...). Can I do that with
pgpool (setting backend_weightX=0 or 1)? I have read the doc, but I'm
not sure if pgpool is the right
On Tue, Dec 21, 2010 at 1:32 AM, Neil D'Souza
neil.xavier.dso...@gmail.com wrote:
You can have a look at my project on sourceforge:
http://sourceforge.net/projects/proghelp builds applications with PG as a
backend automatically. It uses a modified CREATE TABLE SQL grammar as
input.
1. It
A PostgreSQL-based game that you can play from psql! Written by Abstrct (Josh)
http://www.schemaverse.com/
merlin
On Dec 21, 2010, at 5:06 PM, Merlin Moncure wrote:
A PostgreSQL-based game that you can play from psql! Written by Abstrct
(Josh)
http://www.schemaverse.com/
Finally, a game which makes it look like I am doing work!
FYI, not looking for a detailed how-to here.. I have read the manual twice
and just can't figure out which sections are relevant. The manual seems to be
trying to cover all uses simultaneously, which is always going to get
confusing :) For example, do I need WAL archiving or not?
On Tue, Dec 21,
In postgresql-9.0.1 I have to modify my plpython functions that return arrays.
It seems one-dimensional arrays are handled properly, but not
2-dimensional arrays.
create or replace function atest() returns integer[] as $eopy$
a = list()
a.append(1)
a.append(2)
a.append(3)
#return a works fine
On 21 December 2010 22:48, TJ O'Donnell t...@acm.org wrote:
In postgresql-9.0.1 I have to modify my plpython functions that return arrays.
It seems one-dimensional arrays are handled properly, but not
2-dimensional arrays.
create or replace function atest() returns integer[] as $eopy$
a =
On 21 December 2010 23:17, Thom Brown t...@linux.com wrote:
Are you sure that a returns okay in that scenario? You're using a
list. Shouldn't you be using an array? Like: a = []
a = [] actually declares an empty list in Python. You can return a list
or a tuple from a pl/python function in 9.0
Hi Ben,
On 2010/12/22 7:46, Ben Carbery wrote:
FYI, not looking for a detailed how to here.. I have read the manual twice and
just can't figure which sections are relevant. The manual seems to be trying to
cover all uses simultaneously which is always going to get confusing :) For
example do
On Tuesday 21 December 2010 2:48:16 pm TJ O'Donnell wrote:
In postgresql-9.0.1 I have to modify my plpython functions that return
arrays. It seems one-dimensional arrays are handled properly, but not
2-dimensional arrays.
create or replace function atest() returns integer[] as $eopy$
a =
Merlin Moncure mmonc...@gmail.com writes:
hm, a pq_flush() after command completion putmessage in
backend/tcop/dest.c seems to fix the problem. I'll send up a patch to
-hackers. They might backpatch it, unless there is a good reason not
to do this (I can't think of any).
If you just
On Tuesday 21 December 2010 3:25:48 pm Peter Geoghegan wrote:
On 21 December 2010 23:17, Thom Brown t...@linux.com wrote:
Are you sure that a returns okay in that scenario? You're using a
list. Shouldn't you be using an array? Like: a = []
a = [] actually declares an empty list in Python.
On Tue, Dec 21, 2010 at 6:49 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Merlin Moncure mmonc...@gmail.com writes:
hm, a pq_flush() after command completion putmessage in
backend/tcop/dest.c seems to fix the problem. I'll send up a patch to
-hackers. They might backpatch it, unless there is a
Merlin Moncure mmonc...@gmail.com writes:
On Tue, Dec 21, 2010 at 6:49 PM, Tom Lane t...@sss.pgh.pa.us wrote:
If you just unconditionally flush there, it will result in an extra
network message in the normal case where there's not another query
to do. The current code is designed not to flush
On 12/22/2010 02:05 AM, Kenneth Buckler wrote:
I find it very comforting that I am not the only one who finds this
requirement a bit out there.
Unfortunately, these requirements are set in stone and, no matter how
hard I try, cannot be altered.
We live in a world where compliance is king.
We live in a world where compliance is king. Nevermind if compliance
doesn't actually make the system more secure.
Er .. re my previous post, I don't mean lie to RH and claim to want to
buy RHEL to get free support. I mean that you should consider going to
management and getting approval for
Hi Ben,
load balancing is not possible with the tools that are in the postgres
installation. There is no automatic switch-over to a slave if the master
fails. The trigger file needs to be created to promote a slave to master. This
is not done automatically by postgres, but should be done by a
I attempted to unsubscribe from this list (for the holidays) without
success.
Could anyone please help me. I am continuing to get messages from the
list.
I broke open the message header and did as it said for unsubscribing.
See below for what the majordomo sent back.
-Will
unsub
On Wed, Dec 22, 2010 at 8:31 AM, Satoshi Nagayasu
satoshi.nagay...@gmail.com wrote:
My blog entry would be a good entry point for you. :)
5 steps to implement a PostgreSQL replication system
http://pgsnaga.blogspot.com/2010/05/5-steps-to-implement-postgresql.html
Or
In previous versions (8.x), for a plpython fn returning integer[]
I created (had to create) a string in the proper SQL format '{
{1,2,3}, {4,5,6} }'
and returned that. It worked fine.
I LIKE the ability to not have to do that in 9.0,
but I CAN'T return a string like '{ {1,2,3}, {4,5,6} }' for a fn
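For what it's worth, the 8.x-style literal the OP describes can be built from a nested Python list with a tiny helper (plain Python, shown here only to illustrate the string format, not as a PL/Python API):

```python
def pg_array_literal(value):
    """Render a (possibly nested) Python list/tuple as a PostgreSQL
    array literal, e.g. [[1, 2, 3], [4, 5, 6]] -> '{{1,2,3},{4,5,6}}'."""
    if isinstance(value, (list, tuple)):
        # recurse so each nesting level becomes one brace level
        return '{' + ','.join(pg_array_literal(v) for v in value) + '}'
    return str(value)

print(pg_array_literal([[1, 2, 3], [4, 5, 6]]))  # -> {{1,2,3},{4,5,6}}
```

This only produces the text form; whether the function can hand that string back for an integer[] return type is exactly the 9.0 behavior change being discussed in this thread.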
Hello,
We are looking to distribute postgres databases to our customers along with our
application. We are currently evaluating postgres version 8.4.4. The database
can be of size 25 GB (the compressed files fit on a few DVDs; the product is
distributed on DVDs). The pg_restore of this database
On Tuesday 21 December 2010 4:16:00 pm William Gordon Rutherdale (rutherw)
wrote:
I attempted to unsubscribe from this list (for the holidays) without
success.
Could anyone please help me. I am continuing to get messages from the
list.
I broke open the message header and did as it said
Hi,
One of the caveats described in the documentation for table inheritance is
that foreign key constraints cannot cover the case where you want to check
that a value is found somewhere in a table or in that table's
descendants. It says there is no good workaround for this.
What about
On Tue, Dec 21, 2010 at 9:32 PM, Andy Chambers achamb...@mcna.net wrote:
create table guidebooks (
  city check (city in (select name
                       from cities)),
  isbn text,
  author text,
  publisher text);
This is a nice idea. The only problem is that PostgreSQL doesn't