Duncan McDonald wrote:
Hi All,
I was wondering whether there was a way to back up partial sets of data
as INSERT statements? Pg_dump seems only to handle whole databases or
tables.
I have two identical databases (primary and backup) and I need to
transfer a small portion of missing data from one to the other.
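One common approach, sketched below with assumed names (table "orders", databases "primary" and "backup" are placeholders, not from the thread): pg_dump can only filter by whole table (-t), but it can emit that table as INSERT statements; for a row-level subset, piping COPY output between the two servers is a workaround on releases that accept a query inside COPY.

```shell
# Whole-table dump as INSERT statements (names are assumptions):
DUMP_CMD="pg_dump --data-only --inserts --table=orders primary"
echo "$DUMP_CMD"
# Row-level subset via COPY piped between servers (requires a release
# that supports COPY (SELECT ...)):
#   psql -d primary -c "COPY (SELECT * FROM orders WHERE id > 1000) TO STDOUT" \
#     | psql -d backup -c "COPY orders FROM STDIN"
```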
On 10/25/06, Tom Lane <[EMAIL PROTECTED]> wrote:
CLUSTER will eat maintenance_work_mem during index rebuilds --- more or
less. You shouldn't expect these numbers to be dead on, particularly
not in older releases. It looks like your 2Gb spec has turned into
3.6Gb actually eaten, which is a bit sloppy…
Luís Sousa <[EMAIL PROTECTED]> writes:
> Steps to reproduce:
> 1. pg_dump with -Fc option from database
> 2. A column name on table C is changed
> 3. pg_restore using options -S and --disable-triggers fails with an
> error identifying that a column on table C was changed
> 4. drop table B
"Joshua Marsh" <[EMAIL PROTECTED]> writes:
> The CLUSTER function seems to be using more memory than I expect.
CLUSTER will eat maintenance_work_mem during index rebuilds --- more or
less. You shouldn't expect these numbers to be dead on, particularly
not in older releases. It looks like your 2Gb spec has turned into…
Mike Goldner <[EMAIL PROTECTED]> writes:
> First of all, my max_fsm_pages is obviously way off. However, every
> time I increase my max_fsm_pages the next vacuum says that it requires
> more. Will there ever be a plateau in the requested pages?
We realized recently that this can happen if you…
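For reference, the free-space-map knobs live in postgresql.conf; the values below are placeholders, not recommendations — the real targets come from the totals printed at the end of a database-wide VACUUM VERBOSE:

```ini
max_fsm_pages = 200000      # at least the "pages needed" VACUUM VERBOSE reports
max_fsm_relations = 1000    # one slot per table/index carrying free space
```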
On Wed, 2006-10-25 at 15:54 -0400, Mike Goldner wrote:
> I have a nightly vacuum scheduled as follows:
>
> su - postgres -c "/usr/bin/vacuumdb --analyze --dbname=mydb"
>
> Last night, it appears that the vacuum blocked db access from my
> application server (JBoss). Here is the logfile snippet:
I have a nightly vacuum scheduled as follows:
su - postgres -c "/usr/bin/vacuumdb --analyze --dbname=mydb"
Last night, it appears that the vacuum blocked db access from my
application server (JBoss). Here is the logfile snippet:
[3693-jbossdb-postgres-2006-10-25 06:52:29.488 EDT] NOTICE: number …
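When a vacuum appears to block application connections, the pg_locks view shows who is waiting on what. A minimal sketch, assuming a database named "mydb" (column details vary a little across releases, so treat the query as a starting point):

```shell
# Query ungranted locks; pg_locks exists in 7.4-era releases.
LOCK_SQL="SELECT pid, relation, mode, granted FROM pg_locks WHERE NOT granted;"
echo "$LOCK_SQL"
# psql -d mydb -c "$LOCK_SQL"
```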
Hi again,
Version: 7.4.7-6sarge3
Structure of database: table A (id_a primary key) <-> table B (id_a,id_c
- foreign keys from table A and table C) <-> table C (id_c primary key).
Tables A, B and C each have one record, for test purposes.
Problem: after the error on pg_restore, the table can't be dropped.
The CLUSTER function seems to be using more memory than I expect. Here
is what I get from top and from my config file:
from top:
27589 postgres 25 0 3943m 3.6g 11m R 99.9 61.6 639:19.41 postgres: postgres data 127.0.0.1(42126) CLUSTER
from postgresql.conf:
shared_buffers = 1000 #26214 #…
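The setting CLUSTER's index-rebuild sorts answer to is maintenance_work_mem, also in postgresql.conf. A sketch with a placeholder value (in kB on releases of this era), not a recommendation:

```ini
maintenance_work_mem = 262144   # 256 MB; cap on per-operation sort memory
```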
Hi,
Is it possible to do a pg_restore inside a block transaction? That is,
if something goes wrong on restore all data can be rolled back.
Thanks
Luís Sousa
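A hedged sketch of two routes to an all-or-nothing restore; the file names are made up, and the flag in (a) only exists on newer pg_restore releases:

```shell
# (a) newer pg_restore releases accept a --single-transaction flag:
#   pg_restore --single-transaction -d mydb mydb.dump
# (b) otherwise, emit plain SQL and wrap it yourself; restore.sql here
# is a stand-in for real pg_dump/pg_restore plain-text output:
printf 'INSERT INTO t VALUES (1);\n' > restore.sql
{ echo "BEGIN;"; cat restore.sql; echo "COMMIT;"; } > restore_tx.sql
# psql -d mydb -f restore_tx.sql
```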
Chris Hoover wrote:
Has anyone ever done a conversion from SQL Anywhere to PostgreSQL? I
have a task to investigate what this would take.
Also, the SQL Anywhere database stores image files in the database as
long binary types. What would be the best format to store these in
PostgreSQL? That…
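For long binary columns, bytea is the usual PostgreSQL target type (large objects are the alternative when streaming access matters). A minimal sketch with made-up table and column names:

```sql
-- Hypothetical target table for the migrated image data:
CREATE TABLE images (
    id      serial PRIMARY KEY,
    name    text,
    content bytea
);
```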
On Tue, 24 Oct 2006 18:39:48 +0200
Emmanuel Courcelle <[EMAIL PROTECTED]> wrote:
> when I try to use psql like this:
> sudo -u postgres psql
> I get the message:
> psql: FATAL: database "postgres" does not exist
Are you sure that postgres data…
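One likely cause, sketched below: clusters initialized before 8.1 ship no database named "postgres", and psql falls back to a database named after the connecting user. Naming an existing database explicitly avoids the FATAL (template1 always exists):

```shell
PSQL_CMD="psql -d template1"
echo "$PSQL_CMD"
# e.g.:  sudo -u postgres psql -d template1
```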