Hello,
I want to export a list of procedure definitions, which seems to be a
hard nut to crack :-(
A solution could be to use a combination of pg_dump and pg_restore, but
this also requires some time investment.
It would be fine if pg_dump could be more selective about the objects to
select...
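One workable route with stock tools is a two-step dump: `pg_dump -Fc` to a custom-format archive, `pg_restore -l` to list its table of contents, filter that list down to the FUNCTION entries, and feed it back with `pg_restore -L`. A minimal sketch of the filtering step, in Python; the TOC text below is a made-up sample, not real archive output:

```python
# Hypothetical sketch: filter a pg_restore -l table-of-contents listing
# so that pg_restore -L restores only function definitions.
# The sample TOC lines below are invented for illustration.
sample_toc = """\
; Archive created at 2010-10-27
123; 1255 16384 FUNCTION public add_tag(integer) owner
124; 1259 16385 TABLE public wikitags owner
125; 1255 16386 FUNCTION public drop_tag(integer) owner
"""

def keep_functions(toc: str) -> str:
    # Keep TOC comment lines (";") and FUNCTION entries; drop the rest.
    kept = [ln for ln in toc.splitlines()
            if ln.startswith(";") or " FUNCTION " in ln]
    return "\n".join(kept)

filtered = keep_functions(sample_toc)
print(filtered)
```

The filtered listing would then be passed as `pg_restore -L filtered.list archive.dump` to emit just the selected objects.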
On Wed, Oct 27, 2010 at 11:21:43AM +0200, Marc Mamin wrote:
Hello,
I want to export a list of procedure definitions, which seems to be a
hard nut to crack :-(
A solution could be to use a combination of pg_dump and pg_restore, but
this also requires some time investment.
It would be fine,
Hey Craig,
2010/10/27 Craig Ringer cr...@postnewspapers.com.au
On 27/10/10 04:49, Dmitriy Igrishin wrote:
Hey Tony,
2010/10/27 Tony Cebzanov tony...@andrew.cmu.edu
mailto:tony...@andrew.cmu.edu
On 10/23/10 11:01 AM, Craig Ringer wrote:
Yep. As for not explicitly
On Tue, Oct 26, 2010 at 4:30 PM, Diego Schulz dsch...@gmail.com wrote:
On Tue, Oct 26, 2010 at 2:18 PM, Ozz Nixon ozzni...@gmail.com wrote:
I am the only user on this system right now, and a select count(*) on one
table took over 20 minutes:
wikitags exists and has 58,988,656 records.
Structure
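When an exact count over ~59M rows is too slow, a common workaround is to read the planner's row estimate from the `pg_class` catalog instead, which is essentially free but only approximate (it is maintained by VACUUM/ANALYZE). A small sketch that just builds the query text; the table name is taken from the message above:

```python
# Sketch: build the cheap, approximate row-count query from pg_class.
# reltuples is a planner estimate, refreshed by VACUUM/ANALYZE,
# not an exact count.
def approx_count_sql(table: str) -> str:
    return ("SELECT reltuples::bigint AS approx_rows "
            f"FROM pg_class WHERE relname = '{table}';")

sql = approx_count_sql("wikitags")
print(sql)
```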
Hi,
The quick question is:
How (if possible) can I get data (maybe through xlogs) from two
separate databases and merge them into one?
For those that want to know my situation, here it is:
I have two postgresql 9.0 that are working as master/slave using
streaming replication. At
On Tue, Oct 26, 2010 at 5:55 PM, John R Pierce pie...@hogranch.com wrote:
never do VACUUM FULL. Rather, use CLUSTER to rebuild heavily used tables
in order of the most frequently used key (typically the PK); however, this
requires an exclusive lock on the table for the duration, so it should only be used
On Wed, Oct 27, 2010 at 9:58 AM, daniel.cre...@l-3com.com wrote:
So, the question would be: how can I merge data from DB0 and DB1 and
make it available in the new master, whichever is chosen? Any ideas?
Perhaps investigate bucardo for replication, as it is supposed to be
able to help in
--- On Wed, 10/27/10, Vick Khera vi...@khera.org wrote:
From: Vick Khera vi...@khera.org
Subject: Re: [GENERAL] How to merge data from two separate databases into one
(maybe using xlogs)?
To: pgsql-general pgsql-general@postgresql.org
Date: Wednesday, October 27, 2010, 8:26 PM
On Wed, Oct 27,
On Wed, Oct 27, 2010 at 4:37 PM, Lennin Caro lennin.c...@yahoo.com wrote:
IMHO pgpool is the solution
How does that solve the problem of having two disconnected networks, each
thinking their DB is the master?
Hello everyone.
I have been investigating the PG async calls and trying to determine whether
I should go down the road of using them.
In doing some experiments I found that using
PQsendQueryParams/PQconsumeInput/PQisBusy/PQgetResult produces slower
results than simply calling PQexecParams.
Upon
On Wed, Oct 27, 2010 at 4:37 PM, Lennin Caro lennin.c...@yahoo.com
wrote:
IMHO pgpool is the solution
How does that solve the problem of having two disconnected networks,
each thinking their DB is the master?
The original question is: how can I merge the data of the two
master
On Wed, Oct 27, 2010 at 15:02, Michael Clark codingni...@gmail.com wrote:
Hello everyone.
Upon some investigation I found that not calling PQconsumeInput/PQisBusy
produces results in line with PQexecParams (which is what PQexecParams
seems to be doing under the hood).
(please keep in mind this is
On Wed, Oct 27, 2010 at 5:02 PM, Michael Clark codingni...@gmail.comwrote:
while ( ((consume_result = PQconsumeInput(self.db)) == 1)
        && ((is_busy_result = PQisBusy(self.db)) == 1) )
    ;
The problem with this code is that it's effectively useless as a test.
You're just spinning in a
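The usual alternative to a spin loop like that is to block on the connection's socket (obtained via PQsocket() in libpq) with select() until data is readable, and only then call PQconsumeInput and re-check PQisBusy. A generic Python sketch of that wait-then-consume pattern; a socketpair stands in for the libpq socket, which is an assumption made so the sketch is self-contained:

```python
# Sketch of wait-then-consume instead of busy-waiting.
# A socketpair stands in for the libpq connection socket (PQsocket()):
# no live database connection is used here.
import select
import socket

backend, client = socket.socketpair()
backend.sendall(b"result row")  # pretend the server sent a result

# Block until the socket is readable (with a 1s timeout) -- this is
# where a libpq client would call PQconsumeInput() and re-check
# PQisBusy(), instead of spinning on them in a tight loop.
readable, _, _ = select.select([client], [], [], 1.0)
data = client.recv(1024) if readable else b""
print(data.decode())

backend.close()
client.close()
```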
Michael Clark codingni...@gmail.com writes:
In doing some experiments I found that using
PQsendQueryParams/PQconsumeInput/PQisBusy/PQgetResult produces slower
results than simply calling PQexecParams.
Well, PQconsumeInput involves at least one extra kernel call (to see
whether data is
On Wed, Oct 27, 2010 at 5:19 PM, daniel.cre...@l-3com.com wrote:
thinking about the possibility of shipping all xlogs of both databases
and putting them into the final master (one of them), and replay them to
have all data. Later, I would take care of the conflicts.
Again, I recommend you
I have to make a large UPDATE to a DB.
The largest UPDATE involves a table that has triggers and a GIN
index on a computed tsvector.
The table is 1.5M records with about 15 fields of different types.
I have roughly 2.5-3GB of RAM dedicated to postgres.
UPDATE queries are simple, few of them use join and
Ivan Sergio Borgonovo wrote:
I have to make a large UPDATE to a DB.
The largest UPDATE involves a table that has triggers and a GIN
index on a computed tsvector.
The table is 1.5M records with about 15 fields of different types.
I have roughly 2.5-3GB of RAM dedicated to postgres.
UPDATE queries are
UPDATE queries are
Hi,
On 27/10/10 00:22, Hfe80 wrote:
The problem is that updates need more space because data is not overwritten
in place...
As I said earlier, we need to know which PostgreSQL version you are
using. PostgreSQL 8.3 introduced Heap Only Tuples (HOT) updates. Is it
at least an
In response to Steeles stee...@gmail.com:
New to PostgreSQL; need to back up a PostgreSQL DB. Which way is better to
back up the DB?
From training, I learned that we can back up the whole PGDATA and other
directories to achieve the backup goal; originally I planned to schedule
jobs to use
On Fri, 2010-10-22 at 10:25 -0400, Chris Barnes wrote:
It's escaping me where nagios is in the listing. I'm probably way off,
but could you be more specific, please?
See cmd_archiver.ini file, mainly notify_ok, notify_warning and
notify_critical parameters. Using nsca+nagios, you can send
On Tue, 2010-10-26 at 14:27 -0400, Steeles wrote:
New to PostgreSQL; need to back up a PostgreSQL DB. Which way is better to
back up the DB?
From training, I learned that we can back up the whole PGDATA and other
directories to achieve the backup goal; originally I planned to schedule
jobs to use