Re: [ADMIN] Training

2004-06-09 Thread Peter Eisentraut
Richard Dockery wrote:
> I am interested in Postgres training but there doesn't appear to be
> much of it out there for resources.  Can anyone point me in a good
> direction?

Googling for postgresql+training should give you enough choices.  
Alternatively, go to

http://techdocs.postgresql.org/companies.php

grep for "training" and pick one.




[ADMIN] RHDB just sits and does nothing?

2004-06-09 Thread Mauri Sahlberg
Hi,

What on earth am I doing wrong?  I am trying to delete about 6500 rows, I am
the only user connected to the database, and it just sits there doing nothing.

rahaks=# select count(*) from tilikartta where yritys=18231413;
 count
-------
  6347
(1 row)

rahaks=# explain delete from tilikartta where yritys=18231413;
                          QUERY PLAN
---------------------------------------------------------------
 Seq Scan on tilikartta  (cost=0.00..500.23 rows=249 width=6)
   Filter: (yritys = 18231413)
(2 rows)

rahaks=# begin foo
rahaks-# ;
ERROR:  parser: parse error at or near "foo" at character 7
rahaks=# begin;
BEGIN
rahaks=# set constraints all deferred;
SET CONSTRAINTS
rahaks=# delete from tilikartta where yritys=18231413;

Logs are fed to syslog and there I can see the following:
Jun  9 10:02:11 pihsi postgres[31547]: [19] LOG:  query: select count(*)
from tilikartta where yritys=18231413;
Jun  9 10:02:11 pihsi postgres[31547]: [20] LOG:  duration: 0.041401 sec
Jun  9 10:02:33 pihsi postgres[31547]: [21] LOG:  query: explain delete
from tilikartta where yritys=18231413;
Jun  9 10:02:33 pihsi postgres[31547]: [22] LOG:  duration: 0.000781 sec
Jun  9 10:02:41 pihsi postgres[31547]: [23] LOG:  query: begin foo
Jun  9 10:02:41 pihsi postgres[31547]: [24] ERROR:  parser: parse error at
or near "foo" at character 7
Jun  9 10:02:44 pihsi postgres[31547]: [25] LOG:  query: begin;
Jun  9 10:02:44 pihsi postgres[31547]: [26] LOG:  duration: 0.000181 sec
Jun  9 10:02:51 pihsi postgres[31547]: [27] LOG:  query: set constraints
all deferred;
Jun  9 10:02:51 pihsi postgres[31547]: [28] LOG:  duration: 0.000205 sec
Jun  9 10:03:04 pihsi postgres[31547]: [29] LOG:  query: delete from
tilikartta where yritys=18231413;

It is now 10:09 local time and it is still sitting there doing nothing.

   Table "public.tilikartta"
   Column   |  Type  | Modifiers
++---
 numero | integer| not null
 tililaji   | smallint   | not null
 kielikoodi | smallint   | not null
 yritys | integer| not null
 selite | character varying(255) | not null
 tilimalli  | character varying(255) |
Indexes: pk_tilikartta primary key btree (numero, tililaji, kielikoodi,
yritys)
Foreign Key constraints: fk_tilikartta_yritys FOREIGN KEY (yritys)
REFERENCES yritys(numero) ON UPDATE NO ACTION ON DELETE NO ACTION,
 fk_tilikartta_tilikarttamallit FOREIGN KEY
(tilimalli) REFERENCES tilikarttamallit(nimi) ON
UPDATE NO ACTION ON DELETE NO ACTIO

psql (PostgreSQL) 7.3.6-RH
contains support for command-line editing

rh-postgresql-server-7.3.6-1




[Fwd: Re: [ADMIN] RHDB just sits and does nothing?]

2004-06-09 Thread Mauri Sahlberg
Hi,

What on earth am I doing wrong?  I am trying to delete about 6500 rows, I am
the only user connected to the database, and it just sits there doing nothing.

Jun  9 10:02:44 pihsi postgres[31547]: [25] LOG:  query: begin;
Jun  9 10:02:44 pihsi postgres[31547]: [26] LOG:  duration: 0.000181 sec
Jun  9 10:02:51 pihsi postgres[31547]: [27] LOG:  query: set constraints
all deferred;
Jun  9 10:02:51 pihsi postgres[31547]: [28] LOG:  duration: 0.000205 sec
Jun  9 10:03:04 pihsi postgres[31547]: [29] LOG:  query: delete from
tilikartta
where yritys=18231413;

and after a long wait...
Jun  9 10:46:16 pihsi postgres[31547]: [30] LOG:  duration: 2592.534609 sec
Jun  9 11:00:28 pihsi postgres[31547]: [31] LOG:  query: commit;
Jun  9 11:00:28 pihsi postgres[31547]: [32] LOG:  duration: 0.002915 sec

DELETE 6347
rahaks=# commit;
COMMIT

It just took about three quarters of an hour to delete 6500 rows!
Something is definitely broken, and I am probably the one who broke it.  Any
guesses as to what I have broken and how I could fix it?

Full log as an attachment.






[ADMIN] Is the server running locally and accepting connections on Unix domain socket /tmp/.s.PGSQL.5432 ?

2004-06-09 Thread lise chhay
On Mandrake, I launched the creer-base script as the postgres user and got an
error message:

you need to use createlang to load the language into the database.

When I ran this command as the postgres user:

createlang -d auth pltcl

I got another error message:

createlang: could not connect to auth database: could not connect to server:
No such file or directory
Is the server running locally and accepting connections on Unix domain socket
/tmp/.s.PGSQL.5432?

Can you reply to me, please?


[ADMIN] Refined LC_COLLATE or multiple database clusters?

2004-06-09 Thread Grega Bremec
Hello, List,

I recently stumbled across a problem that I can't really get around.

We have a database cluster (PG 7.4.2) that was initialized as follows:

$ pg_controldata /data/dir
pg_control version number:            72
Catalog version number:               200310211
Database cluster state:               in production
pg_control last modified:             sre 09 jun 2004 03:00:26 CEST
Current log file ID:                  20
Next log file segment:                63
...                                   ...
Database block size:                  8192
Blocks per segment of large relation: 131072
Maximum length of identifiers:        64
Maximum number of function arguments: 32
Date/time type storage:               64-bit integers
Maximum length of locale name:        128
LC_COLLATE:                           C
LC_CTYPE:                             C

Inside that cluster, there are several databases using different encodings:

template1=# \l
        List of databases
    Name     |  Owner   | Encoding
-------------+----------+----------
 db1         | ownera   | UNICODE
 db2         | ownera   | UNICODE
 db3         | ownerb   | LATIN2
 db4         | ownerc   | LATIN2
 db5         | ownera   | LATIN2
 db6         | ownera   | LATIN2
 template0   | postgres | UNICODE
 template1   | postgres | UNICODE

The collate order for those databases, however, needs to be different.
Obviously, db3, db4, db5 and db6 will want to use some collate ordering scheme
based on ISO-8859-2, whereas the other two could possibly have table- or even
column-based collation requirements, as they contain Unicode data in UTF-8
encoding, which makes no guarantees about the contents of those databases.

Producing a list of requirements and/or imposing certain conventions on the
format of the data stored in those tables is outside the scope of my
authority; the only reasonable assumption I can make is that these databases
could be limited to one collating order per database (or per group of
databases, as it is) without much hassle.

Also, running several postmasters on this same machine is not an option, as
it only has 1.5GB RAM, of which only 1GB is available for cache (~260MB is
swapped after roughly a month's uptime, but that doesn't change much after
settling down in a week or two).

My question to the list would be the following:

Is it possible to do any of the following things, each of which could solve
this problem adequately:

- somehow manage to make one postmaster run on top of two separate
  database clusters that would each have a different collate ordering
  scheme

- use some other method of initializing one database from a different
  template and taking with it LC_COLLATE setting (I suppose not, as
  the "${PGDATA}/global/" directory is global to the cluster)

- use a patch that would add such functionality, or upgrade to a version
  of PostgreSQL that supports it (even a release candidate; beta is not
  really an option, I gather)

- in absence of any other viable solution, change the global setting of
  the database cluster without having to dump/reinitdb/restore it

I thank you in advance for your valuable input,
-- 
Grega Bremec
Senior Administrator
Noviforum Ltd., Software & Media
http://www.noviforum.si/




Re: [Fwd: Re: [ADMIN] RHDB just sits and does nothing?]

2004-06-09 Thread Scott Marlowe
On Wed, 2004-06-09 at 02:16, Mauri Sahlberg wrote:
> Hi,
> 
> What on earth am I doing wrong?  I am trying to delete about 6500 rows, I am
> the only user connected to the database, and it just sits there doing
> nothing.
> 
> Jun  9 10:02:44 pihsi postgres[31547]: [25] LOG:  query: begin;
> Jun  9 10:02:44 pihsi postgres[31547]: [26] LOG:  duration: 0.000181 sec
> Jun  9 10:02:51 pihsi postgres[31547]: [27] LOG:  query: set constraints
> all deferred;
> Jun  9 10:02:51 pihsi postgres[31547]: [28] LOG:  duration: 0.000205 sec
> Jun  9 10:03:04 pihsi postgres[31547]: [29] LOG:  query: delete from
> tilikartta
> where yritys=18231413;
> 
> and after a long wait...
> Jun  9 10:46:16 pihsi postgres[31547]: [30] LOG:  duration: 2592.534609 sec
> Jun  9 11:00:28 pihsi postgres[31547]: [31] LOG:  query: commit;
> Jun  9 11:00:28 pihsi postgres[31547]: [32] LOG:  duration: 0.002915 sec
> 
> DELETE 6347
> rahaks=# commit;
> COMMIT
> 
> It just took about three quarters of an hour to delete 6500 rows!
> Something is definitely broken, and I am probably the one who broke it.  Any
> guesses as to what I have broken and how I could fix it?

Do you have any foreign key references that are of mismatched types?




Re: [Fwd: Re: [ADMIN] RHDB just sits and does nothing?]

2004-06-09 Thread Tom Lane
"Scott Marlowe" <[EMAIL PROTECTED]> writes:
> On Wed, 2004-06-09 at 02:16, Mauri Sahlberg wrote:
>> It just took about three quarters of an hour to delete 6500 rows!

> Do you have any foreign key references that are of mismatched types
> involved?

Specifically, what other tables reference this one?  Are their
referencing columns of identical types, and *do they have indexes*?

I'm betting on lack of index myself; that's the only FK mistake that
hurts you only at primary-key-delete time.
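For illustration, a minimal sketch of the usual fix (the referencing table
and index names below are made up; the real target is whichever table
references tilikartta, using columns of exactly the same types):

-- hypothetical child table; index the FK columns so each DELETE on
-- tilikartta does not force a sequential scan of the referencing table
CREATE INDEX tilitapahtuma_tilikartta_idx
    ON tilitapahtuma (numero, tililaji, kielikoodi, yritys);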

regards, tom lane



Re: [ADMIN] Corrupted or deleted pg_tables

2004-06-09 Thread Curtis Klumas

We have a database that we manage through Webmin, and today the entries for
the databases disappeared from Webmin. Going in through psql we could see
our data in the tables, still intact, and we have a copy of the directory
structure. Is there any way to extract the data and reconstruct a new
database? We tried pg_dump but got a host of errors.

---

Curtis Klumas
Randr, Inc.
909-205-9138 (cell)
909-669-3427 (office)
[EMAIL PROTECTED]

---






[ADMIN] PostgreSQL 7.4.2 on SunOS 4.1.4

2004-06-09 Thread Cook, Tom
Hi,

Back in December 2002, Warren Spencer asked this question:

> I'm attempting to install PostgreSQL on a SunOS 4.1.4 box.  So far,
> I've tried various combinations of gcc 2.95.1,.2,.3, PostgreSQL 7.1.3,
> 7.0.3, 7.3.  In some cases, gcc won't build. In others, PostgreSQL
> won't build.  But in the gcc 2.95.2/pg 7.1.3 case, I've got exactly
> the same INITDB problem as Jeff Stevens (post 2002-09-25 09:36:36
> PST):  ERROR:  syntax error at line 2658: unexpected token parse
> error.

I'm now trying to get PostgreSQL running on a SunOS 4.1.4 system
(to replace Another Database which costs us a packet each year).  There
were only a few minor changes required to get it to compile (I can send
a diff if you like, but probably only once I can run the regression tests).
But while trying to run the regression tests, when initdb is running, I run
into this error:

...
DEBUG:  start transaction
DEBUG:  close relation (null)
DEBUG:  commit transaction
ERROR:  syntax error at line 3467: unexpected token "syntax error"
DEBUG:  proc_exit(1)
DEBUG:  shmem_exit(1)
DEBUG:  exit(1)
...

This line number seems to be from postgres.bki, and it appears to be one
short of where the error actually arises - inserting a blank line at line 3468
moves the error to line 3468.

A response to Warren's question suggested that maybe the awk version he
was using was to blame, and was generating incorrect bki files.  But I can't
see anything wrong with the bki file generated - the pg_aggregate data does
not suffer from the same problems as noted in the response to Warren's
question, and the area that causes the problem looks like this:

...
)
open pg_depend
close pg_depend
declare unique index pg_aggregate_fnoid_index on pg_aggregate using btree(aggfnoid oid_ops)
declare unique index pg_am_name_index on pg_am using btree(amname name_ops)
...

The first 'declare unique index' line is the one that is causing the problem -
commenting it out just moves the same error message to the next line.

I have not regenerated the bison-generated parser source file - it is the one
that comes with PostgreSQL 7.4.2.

Does anyone have any ideas what could cause this error?

Thanks,
Tom Cook
--
Duct tape is like the Force, it has a light side, a dark side, and holds the universe together
    - gcc bug database




Re: [ADMIN] out of memory error

2004-06-09 Thread Naomi Walker
Does 7.3* support this?  Can you tell me a bit more about it, please?
Hash aggregate..?

Jie Liang wrote:
> I had a similar problem after upgrading to 7.4.2.
> Try:
> SET enable_hashagg = false;
> before you execute that SELECT stmt,
> if you don't want to disable it in postgresql.conf.
>
> Jie Liang
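As a sketch of the session-level form of that advice (the GROUP BY query
below is only a stand-in for whatever aggregate was running out of memory):

-- disable hash aggregation for this session only (7.4.x setting)
SET enable_hashagg = false;
SELECT customer_id, count(*) FROM orders GROUP BY customer_id;  -- hypothetical query
RESET enable_hashagg;  -- go back to the server default afterwards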
 



Re: [ADMIN] RHDB just sits and does nothing?

2004-06-09 Thread Stephan Szabo

On Wed, 9 Jun 2004, Mauri Sahlberg wrote:

> rahaks=# begin foo
> rahaks-# ;
> ERROR:  parser: parse error at or near "foo" at character 7
> rahaks=# begin;
> BEGIN
> rahaks=# set constraints all deferred;
> SET CONSTRAINTS

In addition to the other messages, the above only defers deferrable
constraints.  If you didn't specify initially deferred or deferrable when
creating the fk constraint, the above will not affect it.
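A sketch of what that would look like for the fk_tilikartta_yritys constraint
shown earlier (recreating it as deferrable; adjust names to your schema):

ALTER TABLE tilikartta DROP CONSTRAINT fk_tilikartta_yritys;
ALTER TABLE tilikartta
    ADD CONSTRAINT fk_tilikartta_yritys
    FOREIGN KEY (yritys) REFERENCES yritys(numero)
    DEFERRABLE INITIALLY IMMEDIATE;
-- only now does SET CONSTRAINTS ALL DEFERRED actually defer this check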



Re: [ADMIN] Corrupted or deleted pg_tables

2004-06-09 Thread Tom Lane
"Curtis Klumas" <[EMAIL PROTECTED]> writes:
> We have a database that we manage through Webmin, and today the entries for
> the databases disappeared from Webmin. Going in through psql we could see
> our data in the tables, still intact, and we have a copy of the directory
> structure. Is there any way to extract the data and reconstruct a new
> database? We tried pg_dump but got a host of errors.

Like what?  The only specific information you've provided here is that
things still work from psql ...

regards, tom lane



[ADMIN] PostgreSQL Performance

2004-06-09 Thread Milosz Miecznik
Hi!

I have a very important question about the performance of a PostgreSQL
database.  How will it work with:
- about 300 insert operations per day,
- about 100 selects per day,
- about 100 permanently connected users?
What hardware platform should I use for such a big database (RAM, number of
CPUs, hard disk capacity, ...)?
What operating system should I use?  (I am thinking about RedHat 8.x.)

Can you help and reply to me, please?
Milosz Miecznik




Re: [ADMIN] Corrupted or deleted pg_tables

2004-06-09 Thread Andrew Janian
Can you post your errors?

Andrew

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Curtis Klumas
Sent: Tuesday, June 08, 2004 8:09 PM
To: [EMAIL PROTECTED]
Cc: Jonathan Yee
Subject: Re: [ADMIN] Corrupted or deleted pg_tables



We have a database that we manage through Webmin, and today the entries for
the databases disappeared from Webmin. Going in through psql we could see
our data in the tables, still intact, and we have a copy of the directory
structure. Is there any way to extract the data and reconstruct a new
database? We tried pg_dump but got a host of errors.

---

Curtis Klumas
Randr, Inc.
909-205-9138 (cell)
909-669-3427 (office)
[EMAIL PROTECTED]

---






[ADMIN] Dump only part of a DB

2004-06-09 Thread David F. Skoll
Hi,

pg_dump can be used to dump an entire database, or just a single
table.  Is there a way to make a consistent dump of more than one
table, but less than all of the tables in the database?  Doing a bunch
of single-table pg_dumps isn't really an option, because some tables
may change during the dumps, leading to constraint violations if we
try to restore them.

Rationale:  We have an application that makes a nightly dump of its
database.  There's one particular table that tends to be large, but happily
it's not critical if it's lost -- no real need to back it up.  So we'd
like to back up everything except that one large, non-critical table.

Regards,

David.



Re: [ADMIN] Refined LC_COLLATE or multiple database clusters?

2004-06-09 Thread Tom Lane
Grega Bremec <[EMAIL PROTECTED]> writes:
> Collate order for those databases, however, needs to be different.

If you need multiple LC_COLLATE settings then you have to run multiple
postmasters.  There is no other solution today, nor likely to be one in
the near future.

> Also, running several postmasters on this same machine is not an option,

Sure it is.  Just don't go overboard with the shared_buffers settings.
Let the kernel do the bulk of the buffering and you'll be fine.
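A rough sketch of that setup (paths, locale name, port and buffer setting are
only illustrative):

# second, small cluster with its own LC_COLLATE, on its own port;
# keep shared_buffers modest and let the kernel cache do the heavy lifting
initdb -D /data/cluster_latin2 --locale=sl_SI.ISO8859-2
pg_ctl -D /data/cluster_latin2 -o "-p 5433 -c shared_buffers=1024" start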

regards, tom lane



Re: [ADMIN] [HACKERS] Refined LC_COLLATE or multiple database clusters?

2004-06-09 Thread Honza Pazdziora
On Wed, Jun 09, 2004 at 12:33:03PM +0200, Grega Bremec wrote:
> 
> Collate order for those databases, however, needs to be different. Obviously,

[...]

> Is it possible to do either of these things that could solve this problem
> adequately:
> 
> - somehow manage to make one postmaster run on top of two separate
>   database clusters that would each have a different collate ordering
>   scheme
> 
> - use some other method of initializing one database from a different
>   template and taking with it LC_COLLATE setting (I suppose not, as
>   the "${PGDATA}/global/" directory is global to the cluster)
> 
> - use a patch that would add such functionality or upgrade to a version
>   (even if release-candidate, beta is not really an option, i gather)
>   of PostgreSQL that supported it
> 
> - in absence of any other viable solution, change the global setting of
>   the database cluster without having to dump/reinitdb/restore it

If you do not need the collating sequence to affect index operations,
you can use the nls_string function to sort using

order by nls_string(column, 'sl_SI.utf-8')

where any locale can be specified at runtime.  The nls_string result
can also be used to compare strings in a collating manner; however,
indexes will not be used in that case, which may or may not be a
problem for your usage.

The source of nls_string with installation instructions can be found
at

http://www.fi.muni.cz/~adelton/l10n/
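For example (a sketch; the table and column names are hypothetical, and the
function must already be installed as described at that URL):

-- sort by Slovenian collation in a C-locale cluster
SELECT name
  FROM customers
 ORDER BY nls_string(name, 'sl_SI.utf-8');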

-- 

 Honza Pazdziora | [EMAIL PROTECTED] | http://www.fi.muni.cz/~adelton/
 .project: Perl, mod_perl, DBI, Oracle, large Web systems, XML/XSL, ...
Only self-confident people can be simple.



Re: [ADMIN] PostgreSQL 7.4.2 on SunOS 4.1.4

2004-06-09 Thread Tom Lane
"Cook, Tom" <[EMAIL PROTECTED]> writes:
> ERROR:  syntax error at line 3467: unexpected token "syntax error"

> This line number seems to be from postgres.bki, and it appears to be
> one short of where the error actually arises - inserting a blank line
> at line 3468 moves the error to line 3468.

Hmm, both the weird phrasing of the error message and the off-by-one
line count are pretty obvious bugs now that I look at bootscanner.l.
I guess nobody noticed because so few people have ever seen an actual
syntax error here.

> The first 'declare unique index' line is the one that is causing the
> problem - commenting it out just moves the same error message to the
> next line.

It looks fine to me too.  I think that either the bootstrap lexer
or parser must be broken on your machine.

> I have not regenerated the bison-generated parser source file - it is
> the one that comes with PostgreSQL 7.4.2.

Are you speaking specifically of

src/backend/bootstrap/bootparse.c
src/backend/bootstrap/bootscanner.c
src/backend/bootstrap/bootstrap_tokens.h

Make doubly sure that these didn't get regenerated.  If they didn't, the
only other theory that comes to mind is a compiler bug.  Which compiler
are you using exactly?
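One quick way to check is to look at the files' timestamps, which should still
match the unpacked 7.4.2 tarball rather than your build time (a sketch):

# timestamps newer than the rest of the unpacked tree suggest they were rebuilt
ls -l src/backend/bootstrap/bootparse.c \
      src/backend/bootstrap/bootscanner.c \
      src/backend/bootstrap/bootstrap_tokens.h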

regards, tom lane



Re: [ADMIN] PostgreSQL Performance

2004-06-09 Thread Scott Marlowe
On Wed, 2004-06-09 at 08:34, Milosz Miecznik wrote:
> Hi!
> 
> I have very important question about performance of PostgreSQL Database. How
> it will work with:
> - about 300 insert operation per day,
> - about 100 selects per day,
> - about 100 still connected users?
> What hardware platform shall I use for such big database (RAM, No. of CPUs,
> Hard discs capasity...?)
> What operating system  shall I use? (I think about RedHat 8.x)

In acceptance testing on a dual 2.8 GHz P4 with 2 gigs of RAM and a pair of
UW SCSI drives running under an LSI MegaRAID 320 controller, with
pg_autovacuum running in the background, I was able to handle 10,000,000
transactions in 12 hours.  That was with 250 simultaneous connections running
40,000 transactions each.
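A load of roughly that shape can be approximated with contrib/pgbench (a
sketch, not the actual test harness used; flags and scale factor are
illustrative):

pgbench -i -s 100 testdb         # populate a test database
pgbench -c 250 -t 40000 testdb   # 250 clients x 40,000 transactions each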

Now, how big are your inserts?  If they're reasonably small, like
financial info, then you can easily handle 3M inserts a day.  If they're
really complex and have lots of constraints or FKs to deal with, then
maybe not.  But a dual P4 with 2 gigs of RAM and a pair of mirrored hard
drives is kind of an entry-level db server nowadays.  A machine with more
disk bandwidth could probably handle quite a bit more.




Re: [ADMIN] Dump only part of a DB

2004-06-09 Thread Scott Marlowe
On Wed, 2004-06-09 at 08:09, David F. Skoll wrote:
> Hi,
> 
> pg_dump can be used to dump an entire database, or just a single
> table.  Is there a way to make a consistent dump of more than one
> table, but less than all of the tables in the database?  Doing a bunch
> of single-table pg_dumps isn't really an option, because some tables
> may change during the dumps, leading to constraint violations if we
> try to restore them.
> 
> Rationale:  We have an application that makes a nightly dump of its
> database.  There's one particular table that tends to be large, but happily
> it's not critical if it's lost -- no real need to back it up.  So we'd
> like to back up everything except that one large, non-critical table.

If you put the application's data into a specific schema, then you can
dump just that schema with the -n switch...
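For example (schema, database and file names are made up):

# dump only the objects in the "app_schema" schema (pg_dump 7.4)
pg_dump -n app_schema -f nightly_dump.sql mydb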




Re: [ADMIN] Dump only part of a DB

2004-06-09 Thread David F. Skoll
On Wed, 9 Jun 2004, Scott Marlowe wrote:

> If you put the application's data into a specific schema, then you can
> dump just that schema with the -n switch...

Thanks.  That's a solution for 7.4, but some of our installed base
(especially the older ones with large DB's that cause the problem) are
still on 7.2 or 7.3.

I'll keep it in mind, though.

Regards,

David.



Re: [ADMIN] Dump only part of a DB

2004-06-09 Thread Scott Marlowe
On Wed, 2004-06-09 at 10:43, David F. Skoll wrote:
> On Wed, 9 Jun 2004, Scott Marlowe wrote:
> 
> > If you put the application's data into a specific schema, then you can
> > dump just that schema with the -n switch...
> 
> Thanks.  That's a solution for 7.4, but some of our installed base
> (especially the older ones with large DB's that cause the problem) are
> still on 7.2 or 7.3.

7.3 supported schemas, but I'm not sure its pg_dump supported dumping
individual ones.




Re: [ADMIN] [GENERAL] Help in finding the error

2004-06-09 Thread Duane Lee - EGOVX
I'm on 7.4.2.  I did some more testing and ended up dropping and recreating
the table with all the constraints, etc. in the definition, then tried my
inserts (INSERT ... SELECT ...), and they worked just fine.  I probably had
something else set that I didn't want set, and that was causing my problems.
But I would still like to know where in the catalogs I might go to find the
info about "relation with OID ".

For the most part I'll chalk my problem up to my lack of experience with
Postgres.  All in all I'm impressed with what it can do, having come from a
DB2-on-the-mainframe background.
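For what it's worth, a sketch of where to look (42939 is the OID from the
quoted error below; the table name in the second query is a placeholder):

-- what, if anything, does that OID refer to?
SELECT relname, relkind FROM pg_class WHERE oid = 42939;
-- rules attached to the target table (Tom's guess below involves a stale rule)
SELECT rulename, definition FROM pg_rules WHERE tablename = 'your_table';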

Thanks,
Duane


-----Original Message-----
From: Tom Lane [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, June 08, 2004 10:32 PM
To: Duane Lee - EGOVX
Cc: PG General (E-mail); PGADMIN (E-mail)
Subject: Re: [GENERAL] Help in finding the error

Duane Lee - EGOVX <[EMAIL PROTECTED]> writes:
> I'm trying to insert data into an empty table but I keep getting the error:
> ERROR: could not open relation with OID 42939

More detail please?  For starters, what PG version is this?

> I have a couple of constraints on the table but what I'm entering doesn't
> violate those.

My crudest guesses would involve rules not constraints (specifically,
a rule that references a no-longer-extant table).

            regards, tom lane





Re: [ADMIN] Dump only part of a DB

2004-06-09 Thread Bill Montgomery
Scott Marlowe wrote:
> On Wed, 2004-06-09 at 10:43, David F. Skoll wrote:
>> On Wed, 9 Jun 2004, Scott Marlowe wrote:
>>> If you put the application's data into a specific schema, then you can
>>> dump just that schema with the -n switch...
>>
>> Thanks.  That's a solution for 7.4, but some of our installed base
>> (especially the older ones with large DB's that cause the problem) are
>> still on 7.2 or 7.3.
>
> 7.3 supported schemas, but I'm not sure its pg_dump supported dumping
> individual ones.

I'm running 7.3.4 and use pg_dump with individual schemas.

-Bill Montgomery


Re: [ADMIN] Dump only part of a DB

2004-06-09 Thread Tom Lane
"David F. Skoll" <[EMAIL PROTECTED]> writes:
> pg_dump can be used to dump an entire database, or just a single
> table.  Is there a way to make a consistent dump of more than one
> table, but less than all of the tables in the database?

This has been discussed before, and I think we had agreed that the
Right Thing is to make pg_dump accept more than one -t switch (also
more than one -n switch, at the schema level), and dump anything
that matches any -t or -n switch.
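Under that proposal the invocation would look something like this (a sketch
of the intended interface, not something current pg_dump accepts; the table,
schema and database names are placeholders):

pg_dump -t orders -t customers -n billing mydb > partial_dump.sql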

No one's got round to making this happen, but it seems like it
should not be a big job.  Want to send in a patch?

regards, tom lane



Re: [ADMIN] Dump only part of a DB

2004-06-09 Thread Bruce Momjian
Tom Lane wrote:
> "David F. Skoll" <[EMAIL PROTECTED]> writes:
> > pg_dump can be used to dump an entire database, or just a single
> > table.  Is there a way to make a consistent dump of more than one
> > table, but less than all of the tables in the database?
> 
> This has been discussed before, and I think we had agreed that the
> Right Thing is to make pg_dump accept more than one -t switch (also
> more than one -n switch, at the schema level), and dump anything
> that matches any -t or -n switch.
> 
> No one's got round to making this happen, but it seems like it
> should not be a big job.  Want to send in a patch?

Added to TODO:

* Allow pg_dump to use multiple -t and -n switches

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073
