[GENERAL] pg_get_procdef ?

2010-10-27 Thread Marc Mamin
Hello, I want to export a list of procedure definitions, which seems to be a hard nut to crack :-( A solution could be to use a combination of pg_dump and pg_restore, but this also requires some time investment. It would be fine if pg_dump could be more selective about the objects to export...
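A common answer to this question (a sketch, assuming PostgreSQL 8.4 or later, where `pg_get_functiondef()` is available; the schema name is just an example) is to pull the definitions straight out of the system catalogs:

```sql
-- Emit a CREATE OR REPLACE FUNCTION statement for every plain function
-- in a given schema ('public' here is only an example).
SELECT pg_get_functiondef(p.oid)
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE n.nspname = 'public'
  AND NOT p.proisagg;  -- pg_get_functiondef does not handle aggregates
```

Run through `psql -At`, the output can be redirected into a script file, which avoids the pg_dump/pg_restore round trip entirely.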

Re: [GENERAL] pg_get_procdef ?

2010-10-27 Thread hubert depesz lubaczewski
On Wed, Oct 27, 2010 at 11:21:43AM +0200, Marc Mamin wrote: Hello, I want to export a list of procedure definitions, which seems to be a hard nut to crack :-( A solution could be to use a combination of pg_dump and pg_restore, but this also requires some time investment. It would be fine,

Re: [GENERAL] Advice needed on application/database authentication/authorization/auditing model

2010-10-27 Thread Dmitriy Igrishin
Hey Craig, 2010/10/27 Craig Ringer cr...@postnewspapers.com.au On 27/10/10 04:49, Dmitriy Igrishin wrote: Hey Tony, 2010/10/27 Tony Cebzanov tony...@andrew.cmu.edu On 10/23/10 11:01 AM, Craig Ringer wrote: Yep. As for not explicitly

Re: [GENERAL] Why Select Count(*) from table - took over 20 minutes?

2010-10-27 Thread Merlin Moncure
On Tue, Oct 26, 2010 at 4:30 PM, Diego Schulz dsch...@gmail.com wrote: On Tue, Oct 26, 2010 at 2:18 PM, Ozz Nixon ozzni...@gmail.com wrote: I am the only user on this system right now, and one table select count(*) took over 20 minutes: wikitags exists and has 58,988,656 records. Structure
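When an exact count is not required, a frequent workaround (a sketch; `wikitags` is the table named above, and the figure is only as fresh as the last VACUUM or ANALYZE) is to read the planner's estimate instead of scanning all 58 million rows:

```sql
-- Approximate row count from planner statistics, maintained by
-- VACUUM/ANALYZE rather than by each transaction.
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'wikitags';
```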

[GENERAL] How to merge data from two separate databases into one (maybe using xlogs)?

2010-10-27 Thread Daniel . Crespo
Hi, The quick question is: how (if possible) can I get data (maybe through xlogs) from two separate databases and merge them into one? For those who want to know my situation, here it is: I have two PostgreSQL 9.0 servers working as master/slave using streaming replication. At

Re: [GENERAL] DB become enormous with continuos insert and update

2010-10-27 Thread Vick Khera
On Tue, Oct 26, 2010 at 5:55 PM, John R Pierce pie...@hogranch.com wrote: never do VACUUM FULL. Rather, use CLUSTER to rebuild heavily used tables in order of the most frequently used key (typically the PK); however, this requires a global table lock for the duration, so it should only be used
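The CLUSTER-instead-of-VACUUM-FULL approach suggested above might look like this (a sketch; the table and index names are hypothetical):

```sql
-- Rewrite the table in primary-key order, reclaiming dead-tuple bloat.
-- Takes an ACCESS EXCLUSIVE lock for the duration, so schedule a window.
CLUSTER accounts USING accounts_pkey;

-- Later runs can reuse the index remembered from the first CLUSTER:
CLUSTER accounts;
```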

Re: [GENERAL] How to merge data from two separate databases into one (maybe using xlogs)?

2010-10-27 Thread Vick Khera
On Wed, Oct 27, 2010 at 9:58 AM, daniel.cre...@l-3com.com wrote: So, the question would be: how can I merge data from DB0 and DB1 and make it available in the new master, whichever is chosen? Any ideas? Perhaps investigate Bucardo for replication, as it is supposed to be able to help in

Re: [GENERAL] How to merge data from two separate databases into one (maybe using xlogs)?

2010-10-27 Thread Lennin Caro
--- On Wed, 10/27/10, Vick Khera vi...@khera.org wrote: From: Vick Khera vi...@khera.org Subject: Re: [GENERAL] How to merge data from two separate databases into one (maybe using xlogs)? To: pgsql-general pgsql-general@postgresql.org Date: Wednesday, October 27, 2010, 8:26 PM On Wed, Oct 27,

Re: [GENERAL] How to merge data from two separate databases into one (maybe using xlogs)?

2010-10-27 Thread Vick Khera
On Wed, Oct 27, 2010 at 4:37 PM, Lennin Caro lennin.c...@yahoo.com wrote: IMHO pgpool is the solution. How does that solve the problem of having two disconnected networks, each thinking their DB is the master?

[GENERAL] Should PQconsumeInput/PQisBusy be expensive to use?

2010-10-27 Thread Michael Clark
Hello everyone. I have been investigating the PG async calls and trying to determine whether I should go down the road of using them. In doing some experiments I found that using PQsendQueryParams/PQconsumeInput/PQisBusy/PQgetResult produces slower results than simply calling PQexecParams. Upon

Re: [GENERAL] How to merge data from two separate databases into one (maybe using xlogs)?

2010-10-27 Thread Daniel . Crespo
On Wed, Oct 27, 2010 at 4:37 PM, Lennin Caro lennin.c...@yahoo.com wrote: IMHO pgpool is the solution. How does that solve the problem of having two disconnected networks, each thinking their DB is the master? The original question is how I can merge the data of the two master

Re: [GENERAL] Should PQconsumeInput/PQisBusy be expensive to use?

2010-10-27 Thread Alex Hunsaker
On Wed, Oct 27, 2010 at 15:02, Michael Clark codingni...@gmail.com wrote: Hello everyone. Upon some investigation I found that not calling PQconsumeInput/PQisBusy produces results in line with PQexecParams (which PQexecParams seems to be doing under the hood). (please keep in mind this is

Re: [GENERAL] Should PQconsumeInput/PQisBusy be expensive to use?

2010-10-27 Thread David Wilson
On Wed, Oct 27, 2010 at 5:02 PM, Michael Clark codingni...@gmail.com wrote: while ( ((consume_result = PQconsumeInput(self.db)) == 1) && ((is_busy_result = PQisBusy(self.db)) == 1) ) ; The problem with this code is that it's effectively useless as a test. You're just spinning in a

Re: [GENERAL] Should PQconsumeInput/PQisBusy be expensive to use?

2010-10-27 Thread Tom Lane
Michael Clark codingni...@gmail.com writes: In doing some experiments I found that using PQsendQueryParams/PQconsumeInput/PQisBusy/PQgetResult produces slower results than simply calling PQexecParams. Well, PQconsumeInput involves at least one extra kernel call (to see whether data is

Re: [GENERAL] How to merge data from two separate databases into one (maybe using xlogs)?

2010-10-27 Thread Vick Khera
On Wed, Oct 27, 2010 at 5:19 PM, daniel.cre...@l-3com.com wrote: thinking about the possibility of shipping all xlogs of both databases and putting them into the final master (one of them), and replay them to have all data. Later, I would take care of the conflicts. Again, I recommend you

[GENERAL] exceptionally large UPDATE

2010-10-27 Thread Ivan Sergio Borgonovo
I have to make a large UPDATE to a DB. The largest UPDATE involves a table that has triggers and a gin index on a computed tsvector. The table is 1.5M records with about 15 fields of different types. I have roughly 2.5-3GB of RAM dedicated to postgres. UPDATE queries are simple; a few of them use joins and
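One widely used way to keep such an UPDATE manageable (a sketch under assumptions: the table `docs`, key column `id`, and columns `body`/`body_tsv` are hypothetical names) is to batch it by key range, so each transaction, its triggers, and the gin-index maintenance stay bounded:

```sql
-- Update one slice at a time; repeat with the next id range.
BEGIN;
UPDATE docs
   SET body_tsv = to_tsvector('english', body)
 WHERE id BETWEEN 1 AND 100000;
COMMIT;

-- A plain VACUUM between batches lets later batches reuse freed space.
VACUUM docs;
```

Dropping the gin index before the full pass and recreating it afterwards is often faster still, at the cost of a window without the index.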

Re: [GENERAL] exceptionally large UPDATE

2010-10-27 Thread Rob Sargent
Ivan Sergio Borgonovo wrote: I have to make a large UPDATE to a DB. The largest UPDATE involves a table that has triggers and a gin index on a computed tsvector. The table is 1.5M records with about 15 fields of different types. I have roughly 2.5-3GB of RAM dedicated to postgres. UPDATE queries are

Re: [GENERAL] DB become enormous with continuos insert and update

2010-10-27 Thread Gabriele Bartolini
Hi, On 27/10/10 00:22, Hfe80 wrote: The problem is that updates need more space because data is not overwritten in the same space... As I said earlier, we need to know which PostgreSQL version you are using. PostgreSQL 8.3 introduced Heap-Only Tuple (HOT) updates. Is it at least an
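HOT only helps when the new tuple version fits on the same heap page and no indexed column changes, so it is often paired with a lower fillfactor (a sketch; the table and index names are hypothetical):

```sql
-- Leave 30% of each heap page free so updates can remain heap-only.
ALTER TABLE hits SET (fillfactor = 70);

-- Only newly written pages honor the setting, so rewrite the table once
-- (CLUSTER rather than VACUUM FULL, per the advice elsewhere in this digest).
CLUSTER hits USING hits_pkey;
```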

Re: [GENERAL] What is better method to backup postgresql DB.

2010-10-27 Thread Uwe Schroeder
In response to Steeles stee...@gmail.com: new to postgresql. need to backup postgresql DB, which way is better to backup DB. from training, I learned that we can backup the whole PGdata and other directories to achieve the backup goal, originally I was planning to schedule jobs to use

Re: [GENERAL] Is there a plugin/script for nagios that monitors pitr being up to date?

2010-10-27 Thread Devrim GÜNDÜZ
On Fri, 2010-10-22 at 10:25 -0400, Chris Barnes wrote: It's escaping me where nagios is in the listing? I'm probably way off, but more specifically if you could please? See the cmd_archiver.ini file, mainly the notify_ok, notify_warning and notify_critical parameters. Using nsca+nagios, you can send

Re: [GENERAL] What is better method to backup postgresql DB.

2010-10-27 Thread Simon Riggs
On Tue, 2010-10-26 at 14:27 -0400, Steeles wrote: new to postgresql. need to backup postgresql DB, which way is better to backup DB. from training, I learned that we can backup the whole PGdata and other directories to achieve the backup goal, originally I was planning to schedule jobs to use
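The filesystem-level approach discussed in both replies is usually bracketed with the low-level backup functions (a sketch for the 8.x/9.0 API; the label is arbitrary, and WAL archiving must already be configured):

```sql
-- Tell the server a base backup is starting.
SELECT pg_start_backup('nightly');

-- ... copy the data directory with an external tool (rsync, tar, ...) ...

-- Finish the backup, forcing a WAL segment switch.
SELECT pg_stop_backup();
```

For logical backups of a single database, a plain `pg_dump dbname > dump.sql` remains the simpler alternative.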