Re: [SQL] pg_dump problem

2000-06-27 Thread drfrog

Ed Loehr wrote:
> 
> Graham Vickrage wrote:
> >
> > I am trying to back up a production database running on v6.5 and restore it
> > on our test machine running v6.5.
> >
> > The largest table has about 750,000 rows; the other 5 tables are very small,
> > approx 100 rows.
> >
> > When I try to restore the database using  "psql -e database < db.out" I get
> > the error message
> > "query buffer max length of 16384 exceeded" after each row.
> >
> > Would somebody please tell me how to increase this buffer (assuming this is
> > what's required to solve the problem), as I have looked through the
> > documentation and I am still struggling :-(
> 
> You probably don't need to increase the buffer.  More likely, there is an
> unterminated string constant somewhere in db.out.  Yes, that would
> probably be a bug in pg_dump, but I do vaguely recall problems in 6.5*
> with pg_dump.  You might search deja for your error message.
> 
> Regards,
> Ed Loehr


I seem to remember something about this.

If I remember right, I broke out the data and inserted it back in using a
Perl script with a count loop.



Re: [SQL] Re: unique sequences

2000-06-27 Thread DalTech - Continuing Technical Education

I imagine it should if you create a unique index on the field.

Cheers,
Christopher Currie


- Original Message -
From: K Parker <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, June 27, 2000 1:52 AM
Subject: [SQL] Re: unique sequences


> > Rather, each time you wish to do an insert,
> > query the existing data to see what the
> > maximum value is existing in the database.
> > Add 1 to that and use this as your new value.
> > This is guaranteed to not have any holes in
> > the sequence.
>
> True, but alas it _doesn't_ guarantee uniqueness, since there's nothing to
> prohibit 2 or more back ends getting the same max() at the
> same time...
>
> Thinking off the top of my head, I'm not sure you can apply FOR UPDATE to
> an aggregate function, so there's probably no solution there, either.
>
>
>
> Join 18 million Eudora users by signing up for a free Eudora Web-Mail
> account at http://www.eudoramail.com
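
[Editorial sketch, not part of the original thread.  Table and column names
below are made up, and the syntax was checked only against later PostgreSQL
releases, not 6.5/7.0.  A sequence gives guaranteed-unique values but can
leave holes; the max()+1 approach can be made safe against the race described
above by serializing writers with a table lock (not mentioned in the thread),
with a unique index as a final guard against duplicates.]

-- Option 1: unique, but not gapless (holes appear if a transaction that
-- called nextval() rolls back).
CREATE SEQUENCE invoice_seq;
INSERT INTO invoice (invoice_no, amount)
VALUES (nextval('invoice_seq'), 10);

-- Option 2: gapless max()+1, serialized by a table lock.
CREATE UNIQUE INDEX invoice_no_key ON invoice (invoice_no);
BEGIN;
LOCK TABLE invoice IN SHARE ROW EXCLUSIVE MODE;
INSERT INTO invoice (invoice_no, amount)
SELECT coalesce(max(invoice_no), 0) + 1, 10 FROM invoice;
COMMIT;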




[SQL] Backup?

2000-06-27 Thread Emils Klotins

Hello,

I had a look for 'backup' on the mailing list archives and strangely 
enough didn't find a message containing the word in admin, general 
and sql mailing lists... I am sure this must have been asked before, 
but still:

Would it be possible to back up a db by just copying the dir struct
under PG_DATA, or must I absolutely use pg_dump? I am
using Postgres 7.0. The purpose of the backup is of course to have
a copy handy if e.g. the hard drive breaks. As I understand it, pg_dump
could be comparatively slower, plus it produces a text-format
dumpfile, which could get rather large for my db in the future.

Thanks in advance,

Emils



Re: [SQL] Backup?

2000-06-27 Thread Tom Lane

"Emils Klotins" <[EMAIL PROTECTED]> writes:
> Would it be possible to back up a db by just copying the dir struct
> under PG_DATA, or must I absolutely use pg_dump?

If you ensure nothing at all is happening (as in "shut down the
postmaster" --- I would not trust any half measures) then a dump of
PGDATA would work.  If anything is going on then a dump done that way
will be inconsistent.

> As I understand it, pg_dump could be comparatively slower, plus it
> produces a text-format dumpfile, which could get rather large for my db
> in the future.

pg_dump is slower, but since it can run in parallel with other
activities, I'm not sure that the extra time is a material
disadvantage.  I wouldn't assume that the text dumpfile will be
larger than a tar of the database directory either --- very likely
it'd be the other way round.  Finally, the text dump is amenable to
editing, selective restoration, etc, which you *cannot* do with
a directory dump --- the only thing a directory dump is good for is
restoring the entire DB to exactly where it was.

The main thing that makes a directory dump less useful than it might
appear to be is that the logical state of each table is split between
the table file proper (containing tuples marked with transaction
insert/delete stamps) and pg_log (which tells you which transactions
committed and which did not).  Thus you cannot restore just one
table file; you have to restore pg_log as well, and that makes it
an all-or-nothing proposition.

regards, tom lane



[SQL] Creating tables from within functions

2000-06-27 Thread Saltsgaver, Scott

Is it possible to issue a CREATE TABLE statement from inside of a PostgreSQL
function?  If not, then how about from within a PL/pgSQL function?  I have
tried a lot of different permutations but can't seem to get it to compile
and/or run cleanly.  For example, consider:

CREATE FUNCTION _testcreate () RETURNS text AS '
BEGIN
CREATE TABLE switch_table2 (switch_id varchar(32),
selection_name varchar(100));
end;
' language 'plpgsql';

What is wrong with this?  What type should be returned?  Or perhaps it is
simply not possible to create a table from within a function?

Thanks!

Scott Saltsgaver
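
[Editorial sketch, not an answer from the archive.  In PL/pgSQL releases that
accept dynamic statements (7.1 and later, if memory serves), the usual
workaround is to run the DDL through EXECUTE and to return a value
explicitly, since the function is declared RETURNS text.  Unverified against
6.5/7.0:]

CREATE FUNCTION _testcreate () RETURNS text AS '
BEGIN
    -- utility statements generally have to go through EXECUTE in PL/pgSQL
    EXECUTE ''CREATE TABLE switch_table2 (switch_id varchar(32),
                                          selection_name varchar(100))'';
    RETURN ''switch_table2 created'';
END;
' LANGUAGE 'plpgsql';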



Re: [SQL] Backup?

2000-06-27 Thread Bryan White

> Hello,
>
> I had a look for 'backup' on the mailing list archives and strangely
> enough didn't find a message containing the word in admin, general
> and sql mailing lists... I am sure this must have been asked before,
> but still:
>
> Would it be possible to back up a db by just copying the dir struct
> under PG_DATA, or must I absolutely use pg_dump? I am
> using Postgres 7.0. The purpose of the backup is of course to have
> a copy handy if e.g. the hard drive breaks. As I understand it, pg_dump
> could be comparatively slower, plus it produces a text-format
> dumpfile, which could get rather large for my db in the future.

The pg_dump output will in general be smaller.  There is a fair amount of
overhead in fitting rows into page boundaries, and the indexes are not
included in the dump at all.  Here are some data points from my system: I
have a data directory that is 2.4GB, and a pg_dump from it is 800MB (about
400MB compressed).  My nightly dump currently takes about 10 minutes, during
which the database stays up.  A reload takes 100 minutes including a vacuum
analyze.  I also sleep better knowing the data is in a readable text format
that I could work with if I had to.




[SQL] Problem with pg_dumpall

2000-06-27 Thread Ryan Bradetich

Hello all,

I am having a new problem with pg_dumpall that I have not seen before.
I've been browsing the documentation and could not find anything related
to this problem.  Any ideas or pointers would be greatly appreciated.

boi260 sanity $ /opt/pgsql/bin/pg_dumpall -v -o | /usr/contrib/bin/gzip
-c > /opt/pgsql/backup/db.pgdump.gz

[ Extra verbiage snipped ]

--  dumping out the contents of Table 'medusa'
FATAL 1:  Memory exhausted in AllocSetAlloc()
PQendcopy: resetting connection
SQL query to dump the contents of Table 'medusa' did not execute
correctly.  After we read all the table contents from the backend,
PQendcopy() failed.  Explanation from backend: 'FATAL 1:  Memory
exhausted in AllocSetAlloc()
'.
The query was: 'COPY "medusa" WITH OIDS TO stdout;
'.
pg_dump failed on procman, exiting






procman=# select version();
version
--------------------------------------------------------------------
 PostgreSQL 7.0.0 on hppa2.0w-hp-hpux11.00, compiled by gcc 2.95.2
(1 row)


Thanks,

- Ryan





Re: [SQL] Problem with pg_dumpall

2000-06-27 Thread Tom Lane

Ryan Bradetich <[EMAIL PROTECTED]> writes:
> --  dumping out the contents of Table 'medusa'
> FATAL 1:  Memory exhausted in AllocSetAlloc()
> PQendcopy: resetting connection
> SQL query to dump the contents of Table 'medusa' did not execute
> correctly.  After we read all the table contents from the backend,
> PQendcopy() failed.  Explanation from backend: 'FATAL 1:  Memory
> exhausted in AllocSetAlloc()
> '.
> The query was: 'COPY "medusa" WITH OIDS TO stdout;

Hmm.  What is the full definition of that table?  (pg_dump -s -t medusa
would do.)

regards, tom lane



Re: [SQL] Problem with pg_dumpall

2000-06-27 Thread Ryan Bradetich

Tom Lane wrote:

> Ryan Bradetich <[EMAIL PROTECTED]> writes:
> > --  dumping out the contents of Table 'medusa'
> > FATAL 1:  Memory exhausted in AllocSetAlloc()
> > PQendcopy: resetting connection
> > SQL query to dump the contents of Table 'medusa' did not execute
> > correctly.  After we read all the table contents from the backend,
> > PQendcopy() failed.  Explanation from backend: 'FATAL 1:  Memory
> > exhausted in AllocSetAlloc()
> > '.
> > The query was: 'COPY "medusa" WITH OIDS TO stdout;
>
> Hmm.  What is the full definition of that table?  (pg_dump -s -t medusa
> would do.)
>
> regards, tom lane

Tom,


boi260 /data08 $ pg_dump -s -t medusa procman
\connect - postgres
CREATE TABLE "medusa" (
"host_id" int4,
"timestamp" timestamp,
"current" int4,
"catagory" text,
"cat_desc" text,
"anomaly" text
);
CREATE  INDEX "medusa_catagory_key" on "medusa" using btree ( "catagory"
"text_ops" );
CREATE  INDEX "medusa_host_id_key" on "medusa" using btree ( "host_id"
"int4_ops" );
CREATE  INDEX "medusa_current_key" on "medusa" using btree ( "current"
"int4_ops" );

Ryan





Re: [SQL] Problem with pg_dumpall

2000-06-27 Thread Tom Lane

Ryan Bradetich <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Ryan Bradetich <[EMAIL PROTECTED]> writes:
>>> --  dumping out the contents of Table 'medusa'
>>> FATAL 1:  Memory exhausted in AllocSetAlloc()
>>> PQendcopy: resetting connection
>>> SQL query to dump the contents of Table 'medusa' did not execute
>>> correctly.  After we read all the table contents from the backend,
>>> PQendcopy() failed.  Explanation from backend: 'FATAL 1:  Memory
>>> exhausted in AllocSetAlloc()
>>> '.
>>> The query was: 'COPY "medusa" WITH OIDS TO stdout;

Now that I look at it, it appears that COPY WITH OIDS leaks the memory
used for the string representation of the OIDs.  That'd probably cost
you 32 bytes or so of backend memory per row --- which you'd get back
at the end of the COPY, but small comfort if you ran out before that.

Is the table large enough to make that a plausible explanation?

regards, tom lane



Re: [SQL] Problem with pg_dumpall

2000-06-27 Thread Ryan Bradetich

Tom Lane wrote:

> Ryan Bradetich <[EMAIL PROTECTED]> writes:
> > Tom Lane wrote:
> >> Ryan Bradetich <[EMAIL PROTECTED]> writes:
> >>> --  dumping out the contents of Table 'medusa'
> >>> FATAL 1:  Memory exhausted in AllocSetAlloc()
> >>> PQendcopy: resetting connection
> >>> SQL query to dump the contents of Table 'medusa' did not execute
> >>> correctly.  After we read all the table contents from the backend,
> >>> PQendcopy() failed.  Explanation from backend: 'FATAL 1:  Memory
> >>> exhausted in AllocSetAlloc()
> >>> '.
> >>> The query was: 'COPY "medusa" WITH OIDS TO stdout;
>
> Now that I look at it, it appears that COPY WITH OIDS leaks the memory
> used for the string representation of the OIDs.  That'd probably cost
> you 32 bytes or so of backend memory per row --- which you'd get back
> at the end of the COPY, but small comfort if you ran out before that.
>
> Is the table large enough to make that a plausible explanation?
>
> regards, tom lane

Tom,

This table is very large so that could be the problem.

Here are the startup parameters I am using (in case it matters):
-B 1024
-S
-o -F
-o -o /home/postgres/nohup.out
-i
-p 5432
-D/data08


nohup su - postgres -c "/opt/pgsql/bin/postmaster -B 1024 -S -o \"-F\" -o
\"-o /home/postgres/nohup.out\" -i -p 5432 -D/data08"
procman=# select count(*) from medusa;
  count
---------
 6986499
(1 row)
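
[Editorial note, not from the original thread: at roughly 32 bytes leaked per
row, 6,986,499 rows works out to about 6986499 * 32 = 223,567,968 bytes, i.e.
roughly 220 MB of backend memory consumed before the COPY completes, which
fits the AllocSetAlloc() failure above.]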



FYI:

That was the problem.  Good job spotting that, Tom.  I just successfully
completed a backup without using the -o option to pg_dumpall.

Thanks again for the help!

- Ryan


--
Ryan Bradetich
AIT Operations
Unix Platform Team






[SQL] Problem with pg_dumpall in reverse

2000-06-27 Thread Kenn Thompson

Ok- this may be a simple answer, but-

Once the database has been dumped, how do you restore while keeping the original OIDs? 
I've used the OID as a unique key that ties records together, and somehow they don't 
go back together nicely...

doing a dump with "pg_dumpall -o -c > db.out"
restoring with "cat db.out | psql"

Thanks,
Kenn 

>>> Ryan Bradetich <[EMAIL PROTECTED]> 06/27/00 11:27AM >>>
Hello all,

I am having a new problem with pg_dumpall that I have not seen before.
I've been browsing the documentation and could not find anything related
to this problem.  Any ideas or pointers would be greatly appreciated.

boi260 sanity $ /opt/pgsql/bin/pg_dumpall -v -o | /usr/contrib/bin/gzip
-c > /opt/pgsql/backup/db.pgdump.gz

[ Extra verbiage snipped ]

--  dumping out the contents of Table 'medusa'
FATAL 1:  Memory exhausted in AllocSetAlloc()
PQendcopy: resetting connection
SQL query to dump the contents of Table 'medusa' did not execute
correctly.  After we read all the table contents from the backend,
PQendcopy() failed.  Explanation from backend: 'FATAL 1:  Memory
exhausted in AllocSetAlloc()
'.
The query was: 'COPY "medusa" WITH OIDS TO stdout;
'.
pg_dump failed on procman, exiting






procman=# select version();
version
--------------------------------------------------------------------
 PostgreSQL 7.0.0 on hppa2.0w-hp-hpux11.00, compiled by gcc 2.95.2
(1 row)


Thanks,

- Ryan






[SQL] HI! Problems with cursors into triggers

2000-06-27 Thread Andrés Mauricio Marín Restrepo

HELLO!

I'm writing a trigger (plpgsql), and in the function I use a CURSOR to
fetch an element and put it into a variable, but it does not work for me.
Why?

The script is:

DROP TRIGGER miTrig on otro;
DROP FUNCTION miFunc();
DROP TABLE otro;
DROP TABLE cual;

CREATE TABLE otro (
cuenta integer;
);

CREATE TABLE  cual (
cuenta1 integer;
cuenta2 integer;
);

CREATE FUNCTION miFunc () RETURNS OPAQUE AS '
   DECLARE
  Cuenta_t INT4;
   BEGIN
  begin work;
  declare c_Cuenta CURSOR FOR SELECT * FROM otro;
  FETCH forward 1 in c_Cuenta INTO :Cuenta_t;
  CLOSE c_Cuenta;
  commit;
  INSERT INTO cual VALUES (Cuenta_t,NEW.cuenta);
  RETURN NEW;
   END; '

LANGUAGE 'plpgsql';

CREATE TRIGGER miTrig AFTER INSERT ON otro FOR EACH ROW EXECUTE
PROCEDURE miFunc();


Help me please!!!

Thanks

Mauricio Marín
Colombia
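
[Editorial sketch, not part of the original thread.  PL/pgSQL of that era
does not accept SQL-level BEGIN WORK / COMMIT or DECLARE ... CURSOR /
FETCH ... INTO :var inside a function body (explicit cursor support came
later, if memory serves); the usual way to pull a single value into a
variable is SELECT INTO.  A rough rewrite of miFunc along those lines,
unverified against 7.0:]

CREATE FUNCTION miFunc () RETURNS OPAQUE AS '
   DECLARE
      Cuenta_t INT4;
   BEGIN
      -- fetch one value from "otro" directly into the variable
      SELECT cuenta INTO Cuenta_t FROM otro LIMIT 1;
      INSERT INTO cual VALUES (Cuenta_t, NEW.cuenta);
      RETURN NEW;
   END;
' LANGUAGE 'plpgsql';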





[SQL] Hash Join not using hashed index?

2000-06-27 Thread Ang Chin Han

I'm using PostgreSQL 7.0.2.

==
# explain select city.name, country.name from country, city
where city.country_id = country.country_id;
NOTICE:  QUERY PLAN:
 
Hash Join  (cost=8.85..16.76 rows=75 width=18)
  ->  Seq Scan on city  (cost=0.00..1.75 rows=75 width=16)
  ->  Hash  (cost=5.53..5.53 rows=253 width=2)
->  Seq Scan on country  (cost=0.00..5.53 rows=253 width=2) 

EXPLAIN 
# create index country_id_idx on country using hash (country_id);
CREATE
# vacuum analyze;
VACUUM
# explain select city.name, country.name from country, city
where city.country_id = country.country_id;
NOTICE:  QUERY PLAN:
 
Hash Join  (cost=8.85..16.76 rows=75 width=18)
  ->  Seq Scan on city  (cost=0.00..1.75 rows=75 width=16)
  ->  Hash  (cost=5.53..5.53 rows=253 width=2)
->  Seq Scan on country  (cost=0.00..5.53 rows=253 width=2)
 
EXPLAIN
==

Why doesn't PostgreSQL use country_id_idx, instead of rehashing
country_id?
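
[Editorial note, not from the thread: a hash join builds its own in-memory
hash table on the fly, so an on-disk hash index on country_id does not help
it, and with only ~253 rows in country a sequential scan is cheaper anyway.
The hash index would only be considered for an equality lookup, e.g. (value
made up):]

EXPLAIN SELECT name FROM country WHERE country_id = 42;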



Re: [SQL] Problem with pg_dumpall

2000-06-27 Thread Tom Lane

Ryan Bradetich <[EMAIL PROTECTED]> writes:
> That was the problem.  Good job at spotting that Tom.  I just successfully
> completed a backup without using the -o
> option to pg_dumpall.

OK, if you need it with -o, try the following patch against 7.0.2.

regards, tom lane


*** src/backend/commands/copy.c.origWed Jun 28 02:07:58 2000
--- src/backend/commands/copy.c Wed Jun 28 02:13:01 2000
***************
*** 484,491 ****
  
if (oids && !binary)
{
!   CopySendString(oidout(tuple->t_data->t_oid), fp);
CopySendChar(delim[0], fp);
}
  
for (i = 0; i < attr_count; i++)
--- 484,493 ----
  
if (oids && !binary)
{
!   string = oidout(tuple->t_data->t_oid);
!   CopySendString(string, fp);
CopySendChar(delim[0], fp);
+   pfree(string);
}
  
for (i = 0; i < attr_count; i++)