Re: [GENERAL] Problems restarting after database crashed (signal 11).

2004-07-01 Thread Tom Lane
Christopher Cashell [EMAIL PROTECTED] writes:
   [2004-06-30 08:44:53 AM] Failed connection to database template1
   with error: FATAL:  the database system is starting up.
   [2004-06-30 08:44:53 AM] Failed connection to database template1
   with error: FATAL:  the database system is starting up.
   [2004-06-30 08:44:53 AM] Error: Cannot connect to template1,
   exiting.
  
   So, I shut down Postgres, then restarted Postgres, then restarted
   the autovacuum utility, and everything worked just peachy.  I'm
   guessing that perhaps the autovacuum tool was trying to connect to
   Postgres while it was replaying the transaction log from not
   having been shut down cleanly, and that's why it choked, but I
   don't know that for sure.

Yeah, that's what it looks like to me --- autovacuum was just a bit too
quick to give up.  You could've just restarted autovacuum once the
database was up.

In 7.5 I think autovacuum will be integrated, and the postmaster won't
bother to start it till the startup sequence is done, so this won't
be an issue.

regards, tom lane



[GENERAL] VACUUM technical question

2004-07-01 Thread Ilia Kantor
Hello,

I have a technical question about VACUUM.

I assume all of a table's data is kept in a single file,
and that each index likewise has its own single file.

I also assume that VACUUM cleans the file of 'dead pages' that consist
of dead rows only.

How does it manage to reduce the file size without any locks?
Or which locks are applied?




-- 
Best regards,
 Ilia  mailto:[EMAIL PROTECTED]




[GENERAL] max_fsm_pages

2004-07-01 Thread wespvp
If max_fsm_pages is too small and I have space not reclaimed by vacuum, and I
increase max_fsm_pages and restart the postmaster, will the next VACUUM ANALYZE
reclaim all overlooked pages, or must I do a VACUUM FULL?

Wes




[GENERAL] postgresql web site gone missing!!!

2004-07-01 Thread A.N.Other
Does anybody know what has happened to www.postgresql.org? I am getting 'page
not found', although I can ping it.

Where can I find the latest version of PostgreSQL and its utilities/tools?

Any news on the Windows version? I am probably going to use a Linux version
in production, but having a Windows version might be useful for dev purposes,
as the client app will be Windows-based.

TIA
ANO





[GENERAL] Error loading trigger in C

2004-07-01 Thread Juan Jose Costello Levien
Hello,
I am trying to use a trigger function I wrote in C. Basically, what I
want to do is audit a table: when a row is inserted into one table,
the row is copied to another table. It compiles OK and I created a
shared library, trigger.so. But when I load it into pgAdmin it tells me
'An error had occurred'.

To help diagnose this, here is the source code so you can check it out
(note: the row has an oid (blob) field, and that could be a problem
for the query syntax; maybe the error is there):

#include "postgres.h"
#include "executor/spi.h"       /* SPI */
#include "commands/trigger.h"   /* triggers */
#include <stdlib.h>
#include <string.h>

extern Datum trigf(PG_FUNCTION_ARGS);
PG_FUNCTION_INFO_V1(trigf);

Datum
trigf(PG_FUNCTION_ARGS)
{
    /* assumes the assembled query fits in 500 bytes */
    char *query = (char *) malloc(500);
    TriggerData *trigdata = (TriggerData *) fcinfo->context;
    TupleDesc tupdesc;
    HeapTuple rettuple;
    char *when;
    bool checknull = false;
    bool isnull;
    int ret;

    /* make sure the function is called by the trigger manager */
    if (!CALLED_AS_TRIGGER(fcinfo))
        elog(ERROR, "trigf: not called by trigger manager");

    /* tuple to hand back to the executor */
    if (TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event))
        rettuple = trigdata->tg_newtuple;
    else
        rettuple = trigdata->tg_trigtuple;

    /* check for null values */
    if (!TRIGGER_FIRED_BY_DELETE(trigdata->tg_event)
        && TRIGGER_FIRED_BEFORE(trigdata->tg_event))
        checknull = true;

    if (TRIGGER_FIRED_BEFORE(trigdata->tg_event))
        when = "before";
    else
        when = "after";

    tupdesc = trigdata->tg_relation->rd_att;

    /* connect to the SPI (Server Programming Interface) */
    if ((ret = SPI_connect()) < 0)
        elog(INFO, "trigf (fired %s): SPI_connect returned %d", when, ret);

    /* build and run the INSERT into the audit table */
    strcpy(query, "INSERT INTO visita_log VALUES ('");
    strcat(query, trigdata->tg_trigger->tgargs[0]);
    strcat(query, "','");
    strcat(query, trigdata->tg_trigger->tgargs[1]);
    strcat(query, "','");
    strcat(query, trigdata->tg_trigger->tgargs[2]);
    strcat(query, "','");
    strcat(query, trigdata->tg_trigger->tgargs[3]);
    strcat(query, "',");
    strcat(query, trigdata->tg_trigger->tgargs[4]);
    strcat(query, ",'");
    strcat(query, trigdata->tg_trigger->tgargs[5]);
    strcat(query, "');");
    SPI_exec(query, 0);

    /* an INSERT produces no result tuples, so read SPI_processed instead;
       reading SPI_tuptable here (as the docs' count(*) example does)
       would dereference a NULL pointer */
    elog(INFO, "trigf (fired %s): %d rows inserted into the audit table",
         when, (int) SPI_processed);

    free(query);
    SPI_finish();

    if (checknull)
    {
        SPI_getbinval(rettuple, tupdesc, 1, &isnull);
        if (isnull)
            rettuple = NULL;
    }

    return PointerGetDatum(rettuple);
}
// End of source code
I use PostgreSQL 7.4.1 and Mandrake 10.
The compile sequence I used is:
gcc -I ./ -fpic -c trigger.c
gcc -I ./ -shared -o trigger.so trigger.o
and when I include it in Trigger functions, pgAdmin complains with 'An
error had occurred' (and says nothing more). What am I doing wrong?

Thanks in advance.
Juan


Re: [GENERAL] max_fsm_pages

2004-07-01 Thread Matthew T. O'Connor
If you increase max_fsm_pages to a large enough number, then the next 
vacuum analyze will make all the pages with free space available to be 
reused.  A normal VACUUM does not actually reclaim space (except at 
the end of the table, I think); it only marks the space as reusable. 
VACUUM FULL will reclaim space immediately.
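
For example (a minimal sketch; the table name is made up):

VACUUM ANALYZE mytable;  -- marks dead space reusable; table stays available
VACUUM FULL mytable;     -- compacts the file and gives the space back,
                         -- but takes an exclusive lock on the table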

Matthew
[EMAIL PROTECTED] wrote:
If max_fsm_pages is too small and I have space not reclaimed by vacuum, and I
increase max_fsm_pages and restart the postmaster, will the next VACUUM ANALYZE
reclaim all overlooked pages or must I do a VACUUM FULL?
Wes


Re: [GENERAL] Error loading trigger in C

2004-07-01 Thread Doug McNaught
Juan Jose Costello Levien [EMAIL PROTECTED] writes:

 Hello,
 I am trying to use a trigger function I wrote in C. Basically what I
 want to do is to audit a table when a row is inserted into another
 table by copying the row to the new table. It compiles OK and I
 created a shared library trigger.so. But when I load it into pgAdmin
 it tells me 'An error had occurred'.

Does it work if you take pgAdmin out of the loop?  E.g. in psql:

CREATE FUNCTION trigf ... 
CREATE TRIGGER ...
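
For instance, something like this (a sketch only; the library path, table
name, and trigger arguments are guesses based on your code):

CREATE FUNCTION trigf() RETURNS trigger
    AS '/usr/local/pgsql/lib/trigger.so', 'trigf'
    LANGUAGE C;

CREATE TRIGGER visita_audit
    AFTER INSERT ON visita
    FOR EACH ROW
    EXECUTE PROCEDURE trigf('a1', 'a2', 'a3', 'a4', 'a5', 'a6');

psql's error output should be more informative than pgAdmin's.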

-Doug



[GENERAL] inner data format

2004-07-01 Thread Ilia Kantor
Hello,


How does PostgreSQL keep its data?

When I enter the data dir I notice a bunch of files, although I have a
single database with only one table.

Is there a reference on the inner format and structure of
PostgreSQL 7.3?
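
For what it's worth, the numeric file names under the per-database
directory correspond to pg_class.relfilenode, so you can map files to
tables yourself (a hedged aside, valid at least on 7.3/7.4):

SELECT relname, relfilenode, relkind FROM pg_class ORDER BY relname;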

  

-- 
Best regards,
 Ilia  mailto:[EMAIL PROTECTED]




Re: [GENERAL] Slow dump with pg_dump/pg_restore ? How to improve ?

2004-07-01 Thread Soeren Gerlach

 Soeren Gerlach [EMAIL PROTECTED] writes:
   * pg_dump takes 2/3 of the [single] CPU, postmaster the other 1/3 for
  both dumps

 Really!?  Hmm, that seems fairly backwards ... thinks ...

 In the -Fc case this makes some amount of sense because pg_dump runs
 gzip-style compression on the data (which is why the output file is
 so much smaller).  But for plain text dump, pg_dump should be just
 pushing the data straight through to stdout; it really ought not take
 much CPU as far as I can see.  There may be some simple performance
 glitch involved there.  Are you interested in recompiling with -pg
 and getting a gprof profile of pg_dump?

Yes I am, but I'm a little short on time ;-)) In fact I'm glad to download a 
ready-to-run archive for Debian Woody. In two weeks I'll have some time to 
check this issue with my own compiled versions; until then I'm just 
interested ;-))
Today I upgraded from 7.4.1 to 7.4.3, but this did not really change anything. 
Do you have numbers with respect to speed (rows per second) available for 
comparison, i.e. a dump on a single-CPU machine with quite fast drives?


Regards,
Soeren




[GENERAL] row level grants

2004-07-01 Thread Alexander Cohen
Is there any way to grant user access at the row level? Or maybe some 
way to simulate it?
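
A common way to simulate it (a sketch only, not real row-level security;
the table and column names are invented) is to hide the table behind a
view that filters on current_user, since a view runs with its owner's
rights:

CREATE TABLE documents (
    owner   name NOT NULL DEFAULT current_user,
    payload text
);
REVOKE ALL ON documents FROM PUBLIC;

CREATE VIEW my_documents AS
    SELECT payload FROM documents WHERE owner = current_user;
GRANT SELECT ON my_documents TO PUBLIC;

Writes through the view would need rules on top of this.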

Alex


Re: [GENERAL] backups

2004-07-01 Thread Christopher Petrilli
On Wed, 30 Jun 2004 18:23:08 -0500, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:

 What do other sites with mondo databases do?


Let me offer some ideas from what I've used in high-end environments
before. First, we've always used a mirror configuration in most
situations, simply for reliability and performance concerns (we can
get into the arguments about RAID-5, but that's neither here nor there).
So it ends up being RAID 1+0 (a stripe of mirrored sets).

What you can do is split the mirror and back up one side of the duplex
set. This leaves the database running on the other side, and when you
join them back together, the logs will catch up. This does have a
potential performance problem associated with it, of course, but
backups always do. It really depends on write-percentages.

If you still need high availability while doing backups, I've also used
triplex setups (3 mirrors), so that you still have one mirror left. The
reality is, drive space is cheap, and the ability to pull drives off and
do backups that way is very helpful. On a SAN you can in fact
reattach them to another server for backups.

As someone else pointed out, you do have the issue of sanity of the
files when you do a backup, so given PostgreSQL's current lack of
PITR, I'd likely stop the database, split the mirrors, and restart the
database. I don't know of any way to quiesce the database
for one second to do the split.

Chris
-- 
| Christopher Petrilli
| [EMAIL PROTECTED]



Re: [GENERAL] max_fsm_pages

2004-07-01 Thread Scott Marlowe
On Thu, 2004-07-01 at 11:45, [EMAIL PROTECTED] wrote:
 If max_fsm_pages is too small and I have space not reclaimed by vacuum, and I
 increase max_fsm_pages and restart the postmaster, will the next VACUUM ANALYZE
 reclaim all overlooked pages or must I do a VACUUM FULL?

Let's say you have a table with 1,000 live rows, but you've deleted 1,000,000
over the past year, and most of that space is unreclaimed.  A regular vacuum
will put the pages those dead rows occupied into the FSM, and the database
can reuse them.  However, scans on this table will still be mostly reading
empty space.

So, to collapse the table back down to something reasonable, you'll need
to do a vacuum full; then regular vacuums should keep things tight from
then on.
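
A quick way to spot that kind of bloat (a sketch; 'mytable' is a
placeholder) is to compare the page count with the live row count after a
plain vacuum:

VACUUM ANALYZE mytable;
SELECT relpages, reltuples FROM pg_class WHERE relname = 'mytable';
-- many pages but few tuples means mostly empty space, so:
VACUUM FULL mytable;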




Re: [GENERAL] PostgreSQL 7.4.3 on a windows XP Pro environment Installation

2004-07-01 Thread Chuck Bernardes
Thanks, everyone, for providing some needed tips and information. Being a
newbie to this environment, I was jumping between the READMEs from a previous
version and the current READMEs for 7.4.3, and was confused about why one set
of instructions used ipc-daemon2 while the current version uses cygserver. I
also didn't notice the different path information being used, i.e. the
previous version referenced /usr/share/postgresql/data while the current one
uses /var/postgresql/data. I still don't know why the paths changed. Does
anyone know?  Overall, it was just a big mess on my part. I also found out
that I hadn't set my path and environment variables correctly. After
exhaustive research based on problems other users had posted, I was able to
knock the problems off one by one.
Sorry for being long-winded, but I just wanted to let everyone know that I
finally got it running on XP Pro. Now on to configuring the database and its
tables for my company's website.

Thanks,
Chuck

-Original Message-
From: Raymond O'Donnell [mailto:[EMAIL PROTECTED] 
Sent: Thursday, July 01, 2004 3:25 AM
To: Chuck Bernardes
Subject: Re: [GENERAL] PostgreSQL 7.4.3 on a windows XP Pro environment
Installation

On 30 Jun 2004 at 16:17, Chuck Bernardes wrote:

 Anyone successful in installing PostgreSQL 7.4.3 on a Windows XP Pro
 environment?  I've read various READMEs, but they seem daunting and
 confusing. 

I have 7.4.1 working on my XP Pro laptop, via Cygwin. I ran into 
all sorts of problems trying to compile from source, so in the end I 
just downloaded the pre-built binary from the Cygwin site and it 
worked first time. I use it for development, so I start/stop the 
postmaster by hand as necessary.

The only thing that threw me initially was that you have to have ipc-
daemon running - I think that's in a FAQ or a README somewhere.

--Ray.


-
Raymond O'Donnell http://www.galwaycathedral.org/recitals
[EMAIL PROTECTED]  Galway Cathedral Recitals
-







[GENERAL] postgresql +AMD64 +big address spaces - does it work?

2004-07-01 Thread Andy B
If I bought one of these box/OS combos as a PostgreSQL database server,
would PostgreSQL be able to make the best use of it with a huge (e.g. 40GB)
database?

Box: HP ProLiant DL585, with 4 AMD64 CPUs and 64GB of RAM (other
vendor options also exist)
OS:  SUSE Enterprise 8 Linux for AMD64

(links to product info at bottom)

E.g. could PostgreSQL make use of most of this address space to map a huge
database (say 40GB), all paged into RAM at once?


According to SUSE, a single process on such a setup can address 0.5
terabytes of address space, and with the hardware above, up to 64GB (minus
a bit!) of that address space can be paged into physical memory at any one
time.



According to the docs I could find, I just need to do the following (worked
numbers below):

1. Set shared_buffers in postgresql.conf to a value like (40GB / 8KB).

2. Check that /proc/sys/kernel/shmmax is big enough, or increase it to 40GB.
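
In concrete numbers (my arithmetic, assuming the default 8KB block size):

shared_buffers = 40GB / 8KB = 5242880 buffers
shmmax >= 40 * 1024^3 = 42949672960 bytes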

From experience, that's a long way from knowing it will work. Does anyone
have any experience at all with such big memory configurations?


Many thanks,
Andy

__
My references:
Suse: www.suse.com/amd64 and
www.suse.com/en/business/products/server/sles/sles8_amd64.pdf
HP:   http://h18004.www1.hp.com/products/servers/proliantdl585/index.html









Re: [GENERAL] backups

2004-07-01 Thread vinny
On Wed, 30 Jun 2004 22:32:26 -0500, [EMAIL PROTECTED] (Bruno Wolff III)
wrote:

On Wed, Jun 30, 2004 at 18:23:08 -0500,
  [EMAIL PROTECTED] wrote:
 
 What do other sites with mondo databases do?

There have been comments from people using storage systems that they
can freeze the storage system and get a consistent snapshot of the
file system. This can be used to do a restore. It will look just like
postgres crashed when coming back up.
If you find one of the posts about this in the archives, the poster may
have more details on their storage systems.


I've been playing around with something like that.
On my test server I have put the postgresql directory (including the
config files) onto a software RAID-1 array. This array starts off as
just one disk, but when the time comes to create a backup, you can add
a secondary disk to the array on-the-fly, so the database does not
have to stop for this. The recovery-synchronising of the disk consumes
a few % of the CPU, but nothing too bad (it's disk-to-disk copying).

When syncing is complete I shut down the database, remove the secondary
disk from the array and start the database up again. Of course this is
a test environment, so this operation takes a few seconds; I have
yet to test what this will do under a normal production load.

Now the secondary disk is an exact copy of the data files as they were
when the database was offline, and because it is software RAID, the
secondary disk can now be mounted and backed up. And because the files
were in an offline state at backup, they can be restored without the
database server having to recover at startup.

It seems to work OK in the test, but of course this has to be tested on
a much, much larger scale.



Re: [GENERAL] Bug in function to_char() !!

2004-07-01 Thread simonw


Hi

Try

SELECT to_char('2005-03-27'::date,'YYYY/MM/DD');

I think your date format is wrong.

Simon

 Original Message 
From:   Najib Abi Fadel
Date:   Thu 7/1/04 8:18
To: generalpost
Subject:[GENERAL] Bug in function to_char() !!

Hi, I am using PostgreSQL 7.3.2

there's a bug for the date '2005-03-27' !!!

SELECT to_char('2005-03-27'::date,'DD/MM/YYYY');
  to_char
------------
 26/03/2005
(1 row)

I get the date 26/03/2005 instead of 27/03/2005 !!!

For other dates the function works well !!







Re: [GENERAL] Bug in function to_char() !!

2004-07-01 Thread Maciej Bliziski
Najib Abi Fadel wrote:
Hi, I am using PostgreSQL 7.3.2
there's a bug for the date '2005-03-27' !!!
For other dates the function works well !!
I have 7.4.2 and it works well:
[EMAIL PROTECTED]:~$ psql template1
Welcome to psql 7.4.2, the PostgreSQL interactive terminal.
Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help on internal slash commands
       \g or terminate with semicolon to execute query
       \q to quit
template1=# SELECT to_char('2005-03-27'::date,'DD/MM/YYYY');
  to_char
------------
 27/03/2005
(1 row)


[GENERAL] Drop Column with Foreign Key Problem

2004-07-01 Thread Thomas Chille (spoon)
Hello!

I have a problem.

A table with a foreign key reference exists. The name of the field with this
FK constraint is 'id_employee', because it references the field 'id_employee'
in 'public.sd_employee'. I drop this column of the table with CASCADE in a
script:

ALTER TABLE public.sd_messaging_participant DROP COLUMN id_employee CASCADE;

In a later script I try to make an update on the referenced table:

UPDATE sd_employee SET leave = 1.5;

But it doesn't work. I always get this error:

ERROR:  constraint participant_employee: table sd_messaging_participant does
not have an attribute id_employee

The constraint 'participant_employee' should have been dropped too, due to
the use of CASCADE, but it seems it is still alive.

Explicitly dropping the constraint 'participant_employee' before dropping the
field does not solve the problem either:

ALTER TABLE public.sd_messaging_participant DROP CONSTRAINT
participant_employee CASCADE;

If I try to drop the constraint after dropping the field, postgres says the
constraint does not exist anymore. But if I try to do the update, it still
produces this error.

If I dump the DB I cannot find the constraint 'participant_employee' anymore,
but 3 triggers belonging to this constraint are still in the DB:

...
CREATE CONSTRAINT TRIGGER participant_employee
AFTER INSERT OR UPDATE ON sd_messaging_participant
FROM sd_employee
NOT DEFERRABLE INITIALLY IMMEDIATE
FOR EACH ROW
EXECUTE PROCEDURE RI_FKey_check_ins ('participant_employee',
'sd_messaging_participant', 'sd_employee', 'UNSPECIFIED', 'id_employee',
'id_employee');
...

That one is for INSERT, and there is one for UPDATE and one for DELETE too. I
have absolutely no idea how this can happen, and I think I could solve the
problem by dropping these 3 triggers. But I don't know the syntax for
dropping such triggers (see the sketch below).

And does anybody out there know how this could happen?
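
For reference, the syntax is DROP TRIGGER ... ON table. A hedged sketch
(the trigger name is taken from the dump above; the UPDATE and DELETE
triggers presumably sit on sd_employee, so check the exact names first):

-- find the leftover RI triggers and the tables they sit on
SELECT tgname, relname
FROM pg_trigger t, pg_class c
WHERE t.tgrelid = c.oid AND tgname = 'participant_employee';

-- then drop each one on its table
DROP TRIGGER "participant_employee" ON sd_messaging_participant;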

Thanks for your help,
Thomas!




[GENERAL] cygpopt-0.dll problem, ipc-daemon2

2004-07-01 Thread Chuck Bernardes
Hi all,

I am running Windows XP and went through the setup according to the
instructions.

When I attempt to run the command:

ipc-daemon2 &

I get the following message:

"This application has failed to start because cygpopt-0.dll was not
found. Re-installing the application may fix this problem."

I downloaded the following file and extracted the contents:

cygipc-2.03-2.tar.bz2

I am a newbie to this, so any help would be appreciated.

Thanks in advance.

-Chuck

[GENERAL] Full Text Index Info

2004-07-01 Thread Sheeraz Fazal
Hi,

I am looking for some examples/text for implementing a full text index
in PostgreSQL. Can anyone help me in this regard?
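
For 7.4 the usual route is contrib/tsearch2. A minimal sketch, assuming
tsearch2.sql has been loaded into the database (the table and column
names here are made up):

ALTER TABLE docs ADD COLUMN idxfti tsvector;
UPDATE docs SET idxfti = to_tsvector('default', body);
CREATE INDEX docs_fti ON docs USING gist (idxfti);

SELECT * FROM docs WHERE idxfti @@ to_tsquery('default', 'full & text');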

Thanks.



[GENERAL] Enough RAM for entire Database.. cost aside, is this going to be fastest?

2004-07-01 Thread Andy B
Hello,

Sorry for this newbish question.

Briefly, my problem:
--
I expect the database I'm working on to reach something in the order of
12-16 gigabytes, and I am interested in understanding as much as I can about
how I can make this go as fast as possible on a Linux system. I haven't run
such a large database before. The nature of the database is such that
successive queries are very likely to lead to poor cache performance.

I have lots of experience with disks, operating systems, caching, virtual
memory, RAM etc. - just none running gargantuan databases very quickly!
---

I've read all the performance tuning and configuration stuff I can, but
there is one basic question I can't get an answer to:

My question:

If I can afford to have the entire database residing in RAM (within the
postgresql shared buffer cache, swapped into real memory) without impacting
other services on the machine, is there any reason why I shouldn't do it,
other than cost? (!)



Basically, I'm finding it almost impossible to predict 'how much RAM is
right'. I know I don't need the *entire* database to sit in RAM, and a lot
of this answer depends on many things - the speed of IO, the nature of
queries, etc. But when you get to a certain amount of RAM (specifically, the
amount where nothing needs to be swapped out), then surely things get a bit
more certain... or do they?

So, could I, for example, set up postgresql with a 16GB shared buffer cache
and expect the postgresql backend processes to fly like the wind (CPU, RAM
and disk write speed permitting)?
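
(For scale, my arithmetic assuming 8KB pages: a 16GB shared buffer cache
means shared_buffers = 16GB / 8KB = 2097152.)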

I understand that writes can delay the progression of updates if things are
set up a certain way, and that's OK - I'm really just wondering if there are
some *other* boundaries that will get in the way. I've read that I should be
able to configure a Linux box (with the right processors) to address up to
64GB of RAM, and I'm aware of more esoteric boxes like the SGI Altix 3000
which can go far higher, but maybe that's overkill..

If there are any resources out there that point to the experiences of
others trying to coerce a huge database to run largely from RAM, I'd be
grateful for the links.

Many thanks
Andy








[GENERAL] pg_dump and pg_restore problems

2004-07-01 Thread Otto Blomqvist
Hello!

I was just wondering if anyone knows if this is a bug or what's up.

I'm using psql 7.2.2.

I can do full restores and dumps all day long; however, when I do the
following I run into problems.

I do a full dump with: pg_dump -Fc -O Database1 > dump.tar

I restore one table to Database1 using:

pg_restore -t table_25 -c -O -d database1 dump.tar

I dump database1 again:

pg_dump -Fc -O Database1 > dump2.tar

Then I create a new database and attempt a full pg_restore into the
new DB, which does not work. It does not restore the database but
throws a 'relation table_25 does not exist' message instead. But
table_25 is present in database1; it just seems like the dump does
not see it after a partial restore.

Any ideas?

Thanks

/Otto Blomqvist
