Re: [ADMIN] How to update from 8.1 to 8.2 ?

2007-10-15 Thread Alexander Kuprijanov
Hello

Is there another way to upgrade (without dump/restore)?

I ask because I have a very big DB (or a very slow server): the dump takes 
~40 min and the restore more than 2 h.

I can't stop my DB for that long...



Thanks


In a message dated Thursday 20 September 2007 19:58:24, Milen A. Radev wrote:
> Josef Springer wrote:
> > Where is the documentation of the steps to follow for upgrading, e.g. from 8.1
> > to 8.2? I have never done this and could not find any docs for it.
>
> [...]
>
> http://www.postgresql.org/docs/8.2/static/install-upgrading.html



-- 
Se okulo ne atentas, dorso eksentas

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[ADMIN] Is there a way to kill a connection from the pg_stat_activity list?

2007-10-15 Thread Jessica Richard
When you see a hanging Postgres connection (or a job that has been running so 
long you don't want it to continue) in the output of

select * from pg_stat_activity

and you want to disconnect it,

how do you do it?

thanks,
Jessica

   

Re: [ADMIN] How to update from 8.1 to 8.2 ?

2007-10-15 Thread Kenneth Marshall
Use Slony to replicate from the old version to the new version.
Once they are synced up, you switch the new version to be the
master and re-point your apps to the new DB.

Ken

On Mon, Oct 15, 2007 at 04:52:43PM +0400, Alexander Kuprijanov wrote:
> Hello
> 
> Is there another way to upgrade (without dump/restore)?
> 
> I ask because I have a very big DB (or a very slow server): the dump takes 
> ~40 min and the restore more than 2 h.
> 
> I can't stop my DB for that long...
> 
> 
> 
> Thanks
> 
> 
> In a message dated Thursday 20 September 2007 19:58:24, Milen A. Radev wrote:
> > Josef Springer wrote:
> > > Where is the documentation of the steps to follow for upgrading, e.g. from 8.1
> > > to 8.2? I have never done this and could not find any docs for it.
> >
> > [...]
> >
> > http://www.postgresql.org/docs/8.2/static/install-upgrading.html
> 
> 
> 
> -- 
> Se okulo ne atentas, dorso eksentas
> 
> 



Re: [ADMIN] Is there a way to kill a connection from the pg_stat_activity list?

2007-10-15 Thread Jonah H. Harris
On 10/15/07, Jessica Richard <[EMAIL PROTECTED]> wrote:
> When you see a hanging Postgres connection (or a job running so long and you
> don't want to continue any more) from
>
> select * from pg_stat_activity
>
> and you want to disconnect it,
>
> how do you do it?

See pg_cancel_backend().

You should also look at using statement_timeout if this is a regular occurrence.
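For illustration, a minimal sketch (the PID 12345 is hypothetical, taken from the procpid column that pg_stat_activity exposes in the 8.x releases):

```sql
-- Identify the offending backend and what it is running
SELECT procpid, usename, current_query
FROM pg_stat_activity;

-- Cancel that backend's current query; the session stays connected
SELECT pg_cancel_backend(12345);

-- Abort any future query in this session that runs longer than 5 minutes
SET statement_timeout = 300000;  -- value is in milliseconds
```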


-- 
Jonah H. Harris, Sr. Software Architect | phone: 732.331.1324
EnterpriseDB Corporation| fax: 732.331.1301
499 Thornall Street, 2nd Floor  | [EMAIL PROTECTED]
Edison, NJ 08837| http://www.enterprisedb.com/



Re: [ADMIN] Is there a way to kill a connection from the pg_stat_activity list?

2007-10-15 Thread Tommy Gildseth

Jessica Richard wrote:
> When you see a hanging Postgres connection (or a job running so long
> and you don't want to continue any more) from
>
> select * from pg_stat_activity
>
> and you want to disconnect it,
>
> how do you do it?

pg_cancel_backend()

http://www.postgresql.org/docs/8.1/interactive/functions-admin.html

--
Tommy Gildseth
DBA, Gruppe for databasedrift
Universitetet i Oslo, USIT
m: +47 45 86 38 50
t: +47 22 85 29 39




[ADMIN] Distributed DataBases

2007-10-15 Thread Martial Elisée Wendbé KIBA
Hi all,

I have some questions about PostgreSQL, concerning distributed databases.
I know Oracle can do this very well: with Oracle, I can configure my databases 
to use data stored in another data source (Microsoft SQL Server, Access, ...) 
using Oracle Transparent Gateway.
What I want to know is whether there is a way to configure PostgreSQL to do the 
same, or whether there is a contrib module that I can configure to work like 
Oracle Transparent Gateway.

It is important for me because in my company there are many databases stored in 
different RDBMSs, and users keep asking for statistical data that is spread 
across them. It would be of great benefit to me to have all those data 
logically stored in one place.

Thanks in advance

 
Martial Elisée Wendbé KIBA
Ingénieur de Conception en Informatique
 876, Avenue du Burkina secteur N°04
01 BP 1132 Ouaga 01
BURKINA FASO
Email: [EMAIL PROTECTED]
Bureau:   (+226) 50 32 62 06
Cel: (+226) 70 15 44 93





  

Re: [ADMIN] Is my database now too big?

2007-10-15 Thread Darren Reed

So having upgraded to 8.1.10, I thought I'd try out a few things...


# /usr/pkg/bin/pg_dumpall
--
-- PostgreSQL database cluster dump
--

\connect postgres

--
-- Roles
--

CREATE ROLE postgres;
ALTER ROLE postgres WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN;
CREATE ROLE root;
ALTER ROLE root WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN;






--
-- Database creation
--

REVOKE ALL ON DATABASE template1 FROM PUBLIC;
REVOKE ALL ON DATABASE template1 FROM postgres;
GRANT ALL ON DATABASE template1 TO postgres;


\connect postgres

pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  out of memory
DETAIL:  Failed on request of size 20.
pg_dump: The command was: LOCK TABLE public.foo IN ACCESS SHARE MODE
pg_dumpall: pg_dump failed on database "postgres", exiting
# SELECT * FROM foo LIMIT 1;
ERROR:  out of memory
DETAIL:  Failed on request of size 20.

I don't know if this is of any significance:
# \l
        List of databases
   Name    |  Owner   | Encoding
-----------+----------+-----------
 postgres  | postgres | SQL_ASCII
 template0 | postgres | SQL_ASCII
 template1 | postgres | SQL_ASCII
(3 rows)

# /usr/pkg/bin/psql -U postgres template1
psql: FATAL:  out of memory
DETAIL:  Failed on request of size 20.

What puzzles me is why the transaction log hasn't
resulted in postgresql being able to restore itself
to a known clean state.

Darren




Re: [ADMIN] Is my database now too big?

2007-10-15 Thread Darren Reed

Darren Reed wrote:

Scott Marlowe wrote:

On 10/7/07, Darren Reed <[EMAIL PROTECTED]> wrote:
> Scott Marlowe wrote:
> > On 10/7/07, Darren Reed <[EMAIL PROTECTED]> wrote:
> > > Scott Marlowe wrote:

> A few days ago I did:
> pg_dumpall > foo
> What I was doing yesterday was:
> rm -rf /data/db/*
> initdb -D /data/db
> start
> psql < foo
> run for some period
> stop
> reboot
> start
> ...tables have gone but disk space is still in use.
> I dont know if it was during the period of running that the
> database got corrupted (interrupted insert/update/query?)
> or what happened.

Are you sure postgresql was starting up in the /data/db directory
after reboot and not somewhere else like /var/lib/pgsql/data?

If you're definitely hitting the right directory, then is the database
shutting down cleanly on reboot?  It might be that it's getting killed
during a write and you've got some kind of problem with fsync on your
machine, so the db is getting corrupted.

> > Can you be more specific on what exact query causes the problem to show up?
> >
>
> It turned out that _any_ query on that table caused the problem to show up.
>
> I couldn't even do "DROP TABLE ifl;" without postgres growing until it
> ran out of memory.

definitely sounds like some kind of issue other than just the size of the
table, like some kind of corruption.

...
And I don't see anything else in your postgresql.conf that looks
suspicious.  I'm leaning towards possible pilot error in shutting down
or starting up the db.
  


Ok, I've had another recurrence of this problem.

The sequence of events was something like this:
CREATE TABLESPACE foo LOCATION '/data/index/ext';



Of course postgresql didn't shut down cleanly, because it was
naughty earlier and ate all my RAM, causing the box to hang.
Now I'm back to the prior problem: entire tables are missing
when postgresql starts back up again.  Obviously there is some
sort of corruption (caused by postgresql) and it isn't able to
recover properly.


I suppose the obvious question here is how do I restore the
missing tables?  The data is all there, is there some sort of
transaction log that can be unwound to restore visibility of
the missing tables?  Where do I start for trying to do some
forensics?  Or am I SOL?

Darren




Re: [ADMIN] Distributed DataBases

2007-10-15 Thread Jonah H. Harris
On 10/14/07, Martial Elisée Wendbé KIBA <[EMAIL PROTECTED]> wrote:
> I have some questions about PostgreSQL, concerning distributed databases.
> I know Oracle can do this very well: with Oracle, I can configure my
> databases to use data stored in another data source (Microsoft SQL Server,
> Access, ...) using Oracle Transparent Gateway.
> What I want to know is whether there is a way to configure PostgreSQL to do
> the same, or whether there is a contrib module that I can configure to work
> like Oracle Transparent Gateway.

See the projects:

dblink (Postgres-to-Postgres in contrib)
dblink_odbc (Postgres-to-ODBC on PgFoundry)
dblink_tds (Postgres-to-Sybase/SQL Server on PgFoundry [has a memory leak])
dbi-link (Postgres-to-DBD in Perl)
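For example, with contrib/dblink installed (the connection string, remote table, and column list below are made up for illustration):

```sql
-- Query a table on a remote PostgreSQL server through dblink.
-- The AS clause must declare the names and types of the result columns.
SELECT *
FROM dblink('host=remotehost dbname=sales user=report',
            'SELECT id, total FROM orders')
     AS t(id integer, total numeric);
```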

-- 
Jonah H. Harris, Sr. Software Architect | phone: 732.331.1324
EnterpriseDB Corporation| fax: 732.331.1301
499 Thornall Street, 2nd Floor  | [EMAIL PROTECTED]
Edison, NJ 08837| http://www.enterprisedb.com/



Re: [ADMIN] Is my database now too big?

2007-10-15 Thread Tom Lane
Darren Reed <[EMAIL PROTECTED]> writes:
> # /usr/pkg/bin/psql -U postgres template1
> psql: FATAL:  out of memory
> DETAIL:  Failed on request of size 20.

I'm starting to think there is something very broken about your machine :-(.
Have you run any hardware diagnostics on it lately?  The level of
flakiness you're seeing starts to suggest bad RAM to me.

Anyway, the above error should have also produced a map of per-context
memory usage in the postmaster log (ie, postmaster stderr).  If you
could show us that, it might be revealing.

regards, tom lane



Re: [ADMIN] Is there a way to kill a connection from the pg_stat_activity list?

2007-10-15 Thread Jessica Richard
Thanks a lot!

"select pg_cancel_backend(procpid)" can end the current query for that user, 
but then the connection becomes IDLE, still connected.

Is there a command for me to totally disconnect a user by procpid? Sometimes 
I need to kick out a particular Postgres user completely.

thanks


Tommy Gildseth <[EMAIL PROTECTED]> wrote:
> Jessica Richard wrote:
> > When you see a hanging Postgres connection (or a job running so long
> > and you don't want to continue any more) from
> >
> > select * from pg_stat_activity
> >
> > and you want to disconnect it,
> >
> > how do you do it?
>
> pg_cancel_backend()
>
> http://www.postgresql.org/docs/8.1/interactive/functions-admin.html

-- 
Tommy Gildseth
DBA, Gruppe for databasedrift
Universitetet i Oslo, USIT
m: +47 45 86 38 50
t: +47 22 85 29 39




   

Re: [ADMIN] Is there a way to kill a connection from the pg_stat_activity list?

2007-10-15 Thread Scott Marlowe
On 10/15/07, Jessica Richard <[EMAIL PROTECTED]> wrote:
> Thanks a lot!
>
> "select pg_cancel_backend(procpid)" can end the current query for that
> user, but then the connection becomes IDLE, still connected.
>
> Is there a command for me to totally disconnect a user by procpid?
> Sometimes I need to kick out a particular Postgres user completely.

From the command line on the server you can issue a kill <pid> to do
that.  From within psql you'd need to write a function in an
untrusted language to pull it off.
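A sketch of that untrusted-language route, assuming the untrusted PL/Perl language (plperlu) is installed; the function name here is invented:

```sql
-- Send SIGTERM (signal 15) to a backend process from SQL.  Requires
-- plperlu (CREATE LANGUAGE plperlu) and superuser privileges.
CREATE OR REPLACE FUNCTION terminate_backend(integer)
RETURNS integer AS $$
    # Perl's kill() returns the number of processes signalled
    return kill(15, $_[0]);
$$ LANGUAGE plperlu;

-- Usage (procpid comes from pg_stat_activity):
-- SELECT terminate_backend(12345);
```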



Re: [ADMIN] Is there a way to kill a connection from the pg_stat_activity list?

2007-10-15 Thread Jonah H. Harris
On 10/15/07, Jessica Richard <[EMAIL PROTECTED]> wrote:
> Thanks a lot!
>
> "select pg_cancel_backend(procpid)" can end the current query for that
> user, but then the connection becomes IDLE, still connected.
>
> Is there a command for me to totally disconnect a user by procpid?
> Sometimes I need to kick out a particular Postgres user completely.

There used to be a pg_terminate_backend, but it was #ifdef'd out due
to corruption concerns.  Basically, all it did was:

kill -TERM pid

I'm not sure whether anyone has completed the research required to
know if anything remains corrupted, but it is used occasionally.  Best
to do pg_cancel_backend and then kill -TERM.


-- 
Jonah H. Harris, Sr. Software Architect | phone: 732.331.1324
EnterpriseDB Corporation| fax: 732.331.1301
499 Thornall Street, 2nd Floor  | [EMAIL PROTECTED]
Edison, NJ 08837| http://www.enterprisedb.com/



Re: [ADMIN] Is there a way to kill a connection from the pg_stat_activity list?

2007-10-15 Thread Kevin Grittner
>>> On Mon, Oct 15, 2007 at 12:34 PM, in message <[EMAIL PROTECTED]>,
"Jonah H. Harris" <[EMAIL PROTECTED]> wrote:
> 
> There used to be a pg_terminate_backend, but it was #ifdef'd out due
> to corruption concerns.  Basically, all it did was:
> 
> kill -TERM pid
> 
> I'm not sure whether anyone has completed the research required to
> know if anything remains corrupted, but it is used occasionally.  Best
> to do pg_cancel_backend and then kill -TERM.
 
Where does pg_ctl kill fit in?
 
Is TERM the normal signal to use there, too?
 
Should the pg_ctl docs give some guidelines on the signals?
 
-Kevin
 





Re: [ADMIN] Is my database now too big?

2007-10-15 Thread Darren Reed

Tom Lane wrote:

Darren Reed <[EMAIL PROTECTED]> writes:
> # /usr/pkg/bin/psql -U postgres template1
> psql: FATAL:  out of memory
> DETAIL:  Failed on request of size 20.

I'm starting to think there is something very broken about your 
machine :-(.

Have you run any hardware diagnostics on it lately?  The level of
flakiness you're seeing starts to suggest bad RAM to me.
  


No, I haven't run any diagnostics.

But I'm not convinced the hardware is a problem, because the flakiness has
only really been a problem since I started doing more than just inserts and
updates.  The table that has shown the most problems (ifl) is a table of work
to do: I'm inserting records, selecting random items out of it, and also
deleting records (once the work is complete).  Multiple processes can be
doing all of these simultaneously, which should not be anything out of the
ordinary.  Or maybe this workload just makes the database stress the hardware
more?


Anyway, the above error should have also produced a map of per-context
memory usage in the postmaster log (ie, postmaster stderr).  If you
could show us that, it might be revealing.
  


I think what you're referring to is this:

TopMemoryContext: 32768 total in 3 blocks; 7776 free (8 chunks); 24992 used
Operator class cache: 8192 total in 1 blocks; 4936 free (0 chunks); 3256 used
TopTransactionContext: 8192 total in 1 blocks; 7856 free (0 chunks); 336 used
MessageContext: 267378688 total in 43 blocks; 1768 free (45 chunks); 267376920 used
smgr relation table: 8192 total in 1 blocks; 3904 free (0 chunks); 4288 used
Portal hash: 8192 total in 1 blocks; 3912 free (0 chunks); 4280 used
PortalMemory: 8192 total in 1 blocks; 8176 free (1 chunks); 16 used
Relcache by OID: 8192 total in 1 blocks; 3896 free (0 chunks); 4296 used
CacheMemoryContext: 253952 total in 5 blocks; 11880 free (1 chunks); 242072 used
pg_attrdef_adrelid_adnum_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_type_typname_nsp_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_type_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_statistic_relid_att_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_auth_members_member_role_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_auth_members_role_member_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_proc_proname_args_nsp_index: 1024 total in 1 blocks; 256 free (0 chunks); 768 used
pg_proc_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_operator_oprname_l_r_n_index: 1024 total in 1 blocks; 192 free (0 chunks); 832 used
pg_operator_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_opclass_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_opclass_am_name_nsp_index: 1024 total in 1 blocks; 256 free (0 chunks); 768 used
pg_namespace_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_namespace_nspname_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_language_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_language_name_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_inherits_relid_seqno_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_index_indexrelid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_authid_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_authid_rolname_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_database_datname_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_conversion_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_conversion_name_nsp_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_conversion_default_index: 1024 total in 1 blocks; 192 free (0 chunks); 832 used
pg_class_relname_nsp_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_class_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_cast_source_target_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_attribute_relid_attnam_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_amproc_opc_proc_index: 1024 total in 1 blocks; 256 free (0 chunks); 768 used
pg_amop_opr_opc_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_amop_opc_strat_index: 1024 total in 1 blocks; 256 free (0 chunks); 768 used
pg_amop_opc_strat_index: 1024 total in 1 blocks; 256 free (0 chunks); 768 used
pg_aggregate_fnoid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
MdSmgr: 8192 total in 1 blocks; 8152 free (0 chunks); 40 used
LockTable (locallock hash): 8192 total in 1 blocks; 3912 free (0 chu

Re: [ADMIN] Is my database now too big?

2007-10-15 Thread Scott Marlowe
On 10/15/07, Darren Reed <[EMAIL PROTECTED]> wrote:
> Tom Lane wrote:
> > Darren Reed <[EMAIL PROTECTED]> writes:
> > > # /usr/pkg/bin/psql -U postgres template1
> > > psql: FATAL:  out of memory
> > > DETAIL:  Failed on request of size 20.
> >
> > I'm starting to think there is something very broken about your machine :-(.
> > Have you run any hardware diagnostics on it lately?  The level of
> > flakiness you're seeing starts to suggest bad RAM to me.
> >
>
> No, I haven't run any diagnostics.
>
> But I'm not convinced the hardware is a problem because the flakiness
> has only really
> been a problem when I started doing more than just inserts and updates.
> The table that
> has shown the most problems (ifl) is a table of work to do, so I'm
> inserting records,
> doing a select of random items out of it and also deleting records (once
> the work is
> complete.)  Multiple processes can be trying to simultaneously be doing
> all of these,
> which should not be anything out of the ordinary.  Or maybe this
> workload is just making
> the database stress the hardware more?

So, I'm guessing you're doing something like:

select * from ifl order by random()

in several different threads?  that means that the table has to be
materialized twice in memory, and then most of the result thrown away.

Generally, the preferred way to do a random select of a large table is
to assign a random number to each row and then select a range based on
that number.  So, you have 100,000 rows, you assign the numbers 1
through 100,000 to each row at random, then you select them using
something like a sequence to make sure that each process isn't bumping
into each other.

So, if you're going to process 10 records at a time, you create a
sequence with an increment of 10 and use a select from it to get your
"random" rows to operate on.

Again, I'm kinda shooting in the dark here as you reveal more and more
what you are doing a little at a time.  A test case that can invoke
this failure would be most useful.
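A rough sketch of that scheme against the ifl table from this thread (the pick_order column and ifl_claim sequence are invented names):

```sql
-- Assign each row a random pick order once, and index it
ALTER TABLE ifl ADD COLUMN pick_order integer;
UPDATE ifl SET pick_order = floor(random() * 100000);
CREATE INDEX ifl_pick_order_idx ON ifl (pick_order);

-- A shared sequence hands out non-overlapping windows of 10
CREATE SEQUENCE ifl_claim MINVALUE 0 START 0 INCREMENT BY 10;

-- Each worker claims the next window...
SELECT nextval('ifl_claim');   -- returns 0, 10, 20, ... one per caller
-- ...and processes only the rows in its window, e.g. for a start of 0:
SELECT * FROM ifl WHERE pick_order >= 0 AND pick_order < 10;
```

Because nextval() is atomic, no two workers get the same window, so they never contend for the same "random" rows.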




Re: [ADMIN] Is my database now too big?

2007-10-15 Thread Tom Lane
Darren Reed <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Anyway, the above error should have also produced a map of per-context
>> memory usage in the postmaster log (ie, postmaster stderr).  If you
>> could show us that, it might be revealing.

> MessageContext: 267378688 total in 43 blocks; 1768 free (45 chunks); 267376920 used

> The standout problem is the "MessageContext" count.

Indeed.  And there shouldn't even be anything in MessageContext until
the first client command has been received.  Maybe you have something
in ~/.psqlrc that you haven't told us about?

regards, tom lane



Re: [ADMIN] Is my database now too big?

2007-10-15 Thread Darren Reed

Tom Lane wrote:

Darren Reed <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Anyway, the above error should have also produced a map of per-context
>> memory usage in the postmaster log (ie, postmaster stderr).  If you
>> could show us that, it might be revealing.

> MessageContext: 267378688 total in 43 blocks; 1768 free (45 chunks); 267376920 used


> The standout problem is the "MessageContext" count.

Indeed.  And there shouldn't even be anything in MessageContext until
the first client command has been received.  Maybe you have something
in ~/.psqlrc that you haven't told us about?
  


That's easy - I don't even have one of these files!

Darren




Re: [ADMIN] Is my database now too big?

2007-10-15 Thread Tom Lane
Darren Reed <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Indeed.  And there shouldn't even be anything in MessageContext until
>> the first client command has been received.  Maybe you have something
>> in ~/.psqlrc that you haven't told us about?

> That's easy - I don't even have one of these files!

Then the behavior you showed is impossible ;-)

There is *something* that is completely broken about your machine,
and the rest of us really don't have enough context to tell what.
You haven't told us anything about the hardware or operating system,
or how you built or obtained the Postgres executables.

I don't think you should dismiss the possibility of a hardware problem,
especially since the failures aren't 100% reproducible (AFAICT from your
previous remarks).  We've seen more than one case where Postgres
stressed a system more than anything else that was being run, and
thereby exposed hardware problems that didn't manifest otherwise.
For instance, a bit of bad RAM up near the end of physical memory might
not get used at all until Postgres starts eating up memory.

Another line of thought is that you built Postgres with a buggy compiler
and thereby got buggy executables.  Have you tried running the PG
regression tests?

regards, tom lane



Re: [ADMIN] Is my database now too big?

2007-10-15 Thread Darren Reed

Scott Marlowe wrote:

...

Again, I'm kinda shooting in the dark here as you reveal more and more
what you are doing a little at a time.  A test case that can invoke
this failure would be most useful.
  

After seeing this today:
ERROR:  duplicate key violates unique constraint "ers_pkey"
ERROR:  duplicate key violates unique constraint "foo_pkey"
ERROR:  duplicate key violates unique constraint "foo_pkey"
ERROR:  duplicate key violates unique constraint "foo_pkey"
ERROR:  duplicate key violates unique constraint "foo_pkey"
ERROR:  duplicate key violates unique constraint "foo_pkey"
ERROR:  could not open segment 1 of relation 1663/10793/2659 (target 
block 858862642): No such file or directory
ERROR:  could not open segment 1 of relation 1663/10793/2659 (target 
block 858862642): No such file or directory
ERROR:  could not open segment 1 of relation 1663/10793/2659 (target 
block 858862642): No such file or directory
ERROR:  could not open segment 1 of relation 1663/10793/2659 (target 
block 858862642): No such file or directory

...

...there was little or no activity during this time, apart from
some inserts, maybe some selects, etc.  Nothing that should
have caused this kind of upset.

There is a file that matches this:
-rw---  1 postgres  wheel  57344 Oct 14 22:57 /data/db/second/base/10793/2659

but it isn't in the directory where I moved most of the indexes to:
ls /data/index/ext/10793/
16390  16397  16399  16406  16407  16410  16414  16425  16434  16435

I don't know if the file numbers have any meaning?

But in addition, the list of tables (\dp) is now fubar'd.

I'm starting to wonder if it is a combination of:
- the operating system (NetBSD 4.99.20)
- the hardware (small HP box, not meant for hard work like this but 
shouldn't be impossible for it)

- the way pkgsrc compiles postgresql for NetBSD

I'm shying away from the hardware (or at least RAM/CPU) because I'd
expect some other kinds of faults to show up, ultimately leading to a
panic due to random corruption of some kernel data structure.  As it
is, everything else seems to be functioning OK.


Darren




Re: [ADMIN] Is my database now too big?

2007-10-15 Thread Scott Marlowe
On 10/15/07, Darren Reed <[EMAIL PROTECTED]> wrote:
> Scott Marlowe wrote:
> > ...
> >
> > Again, I'm kinda shooting in the dark here as you reveal more and more
> > what you are doing a little at a time.  A test case that can invoke
> > this failure would be most useful.
> >
> After seeing this today:
> ERROR:  duplicate key violates unique constraint "ers_pkey"
> ERROR:  duplicate key violates unique constraint "foo_pkey"
> ERROR:  duplicate key violates unique constraint "foo_pkey"
> ERROR:  duplicate key violates unique constraint "foo_pkey"
> ERROR:  duplicate key violates unique constraint "foo_pkey"
> ERROR:  duplicate key violates unique constraint "foo_pkey"
> ERROR:  could not open segment 1 of relation 1663/10793/2659 (target
> block 858862642): No such file or directory
> ERROR:  could not open segment 1 of relation 1663/10793/2659 (target
> block 858862642): No such file or directory
> ERROR:  could not open segment 1 of relation 1663/10793/2659 (target
> block 858862642): No such file or directory
> ERROR:  could not open segment 1 of relation 1663/10793/2659 (target
> block 858862642): No such file or directory
> ...
>
> ...there was little or no activity during this time, apart from
> some inserts, maybe some selects, etc.  Nothing that should
> have caused this kind of upset.
>
> There is a file that matches this:
> -rw---  1 postgres  wheel  57344 Oct 14 22:57
> /data/db/second/base/10793/2659
> but it isn't in the directory where I moved most of the indexes to:
> ls /data/index/ext/10793/
> 16390  16397  16399  16406  16407  16410  16414  16425  16434  16435

How, exactly, did you move those indexes?



Re: [ADMIN] Is my database now too big?

2007-10-15 Thread Tom Lane
Darren Reed <[EMAIL PROTECTED]> writes:
> I'm starting to wonder if it is a combination of:
> - the operating system (NetBSD 4.99.20)

Um ... what was your motivation for choosing that?  According to
http://www.netbsd.org/releases/release-map.html
a .99 release number signifies "an alpha quality distribution. It isn't
even guaranteed to compile."  It looks like NetBSD 4 is currently
up to an RC2 release, which is probably not what you've got there ...
but even if you were running the RC2 code I'd question the sanity
of insisting on a back-rev Postgres release on top of bleeding edge
operating system.

regards, tom lane



Re: [ADMIN] Is my database now too big?

2007-10-15 Thread Darren Reed

Scott Marlowe wrote:

On 10/15/07, Darren Reed <[EMAIL PROTECTED]> wrote:
> Scott Marlowe wrote:
> > ...
> >
> > Again, I'm kinda shooting in the dark here as you reveal more and more
> > what you are doing a little at a time.  A test case that can invoke
> > this failure would be most useful.
> >
> After seeing this today:
> ERROR:  duplicate key violates unique constraint "ers_pkey"
> ERROR:  duplicate key violates unique constraint "foo_pkey"
> ERROR:  duplicate key violates unique constraint "foo_pkey"
> ERROR:  duplicate key violates unique constraint "foo_pkey"
> ERROR:  duplicate key violates unique constraint "foo_pkey"
> ERROR:  duplicate key violates unique constraint "foo_pkey"
> ERROR:  could not open segment 1 of relation 1663/10793/2659 (target
> block 858862642): No such file or directory
> ERROR:  could not open segment 1 of relation 1663/10793/2659 (target
> block 858862642): No such file or directory
> ERROR:  could not open segment 1 of relation 1663/10793/2659 (target
> block 858862642): No such file or directory
> ERROR:  could not open segment 1 of relation 1663/10793/2659 (target
> block 858862642): No such file or directory
> ...
>
> ...there was little or no activity during this time, apart from
> some inserts, maybe some selects, etc.  Nothing that should
> have caused this kind of upset.
>
> There is a file that matches this:
> -rw---  1 postgres  wheel  57344 Oct 14 22:57
> /data/db/second/base/10793/2659
> but it isn't in the directory where I moved most of the indexes to:
> ls /data/index/ext/10793/
> 16390  16397  16399  16406  16407  16410  16414  16425  16434  16435

How, exactly, did you move those indexes?

With "ALTER TABLE".
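(A minimal sketch of that form, index name hypothetical, assuming the SET TABLESPACE variant was the one used; ALTER INDEX is the more usual spelling for moving an index:)

```sql
-- Sketch, names hypothetical: move an index into the 'ext'
-- tablespace created at the top of the schema script.
ALTER INDEX t_a_pkey SET TABLESPACE ext;
-- ALTER TABLE with the index name behaves the same way here.
```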

Since then I recreated the database and after merging a chunk of data, I 
see this:

(various errors about duplicate keys and values too wide for fields...)
ERROR:  duplicate key violates unique constraint "t_a_pkey"
LOG:  unexpected EOF on client connection
ERROR:  value too long for type character(12)
LOG:  unexpected EOF on client connection
LOG:  unexpected EOF on client connection
LOG:  could not send data to client: Broken pipe
LOG:  unexpected EOF on client connection
ERROR:  relation "t_a" does not exist

I've attached the commands I've used to create the schema.
It's called sanitised because I've culled all of the fields that
aren't used.  This script was used to create the database that
I then imported records into before seeing the above.

Darren

CREATE TABLESPACE ext LOCATION '/data/index/ext';

CREATE ROLE root;

CREATE LANGUAGE 'plpgsql' HANDLER plpgsql_call_handler LANCOMPILER 'PL/pgSQL';

CREATE TABLE t_a (
int1 INTEGER NOT NULL PRIMARY KEY,
str1 CHARACTER(20),
bool1 boolean
);

CREATE VIEW a_v1 AS
SELECT DISTINCT(str1),bool1 FROM t_a WHERE bool1 IS TRUE ORDER BY str1;

CREATE VIEW a_v2 AS SELECT distinct(str1),count(*) FROM t_a GROUP BY t_a.str1;

CREATE TABLE t_b (
int1 INTEGER NOT NULL,
str2 CHARACTER VARYING(20) NOT NULL,
bool1 BOOLEAN
);

CREATE TABLE t_c (
str1 CHAR(20) NOT NULL,
str2 VARCHAR(20) NOT NULL PRIMARY KEY
);

CREATE TABLE ifl (
recno SERIAL PRIMARY KEY,
int1 INTEGER NOT NULL
);

DROP FUNCTION add_str1tot_a();
CREATE OR REPLACE FUNCTION add_str1tot_a()
RETURNS TRIGGER AS $ptot_a$
DECLARE
temp VARCHAR(20);
BEGIN
SELECT p.str1 INTO temp FROM t_c p WHERE p.str2=NEW.str2;
UPDATE t_a SET str1=temp WHERE str1 IS NULL AND int1=NEW.int1;
RETURN NULL;
END;
$ptot_a$ LANGUAGE plpgsql;

DROP FUNCTION sett_astr1bool1();
CREATE OR REPLACE FUNCTION sett_astr1bool1()
RETURNS TRIGGER as $sepi$
DECLARE
ig BOOLEAN;
BEGIN
IF NEW.str1 IS NOT NULL AND NEW.bool1 IS NULL THEN
SELECT b.bool1 INTO ig FROM a_v1 b WHERE b.str1=NEW.str1;
IF ig IS NOT NULL THEN
UPDATE t_a SET bool1=ig WHERE int1=NEW.int1;
END IF;
END IF;
RETURN NULL;
END;
$sepi$ LANGUAGE plpgsql;

DROP FUNCTION sett_abool1();
CREATE OR REPLACE FUNCTION sett_abool1()
RETURNS TRIGGER as $sei$
DECLARE
temp BOOLEAN;
temp2 CHAR(20);
BEGIN
SELECT b.bool1 INTO temp FROM badt_b b WHERE (b.str2=NEW.str2) AND
(bool1 IS NOT NULL);
IF temp IS NOT NULL THEN
UPDATE t_b SET bool1=temp WHERE str2=NEW.str2;
END IF;
SELECT t.str1 INTO temp2 FROM t_a t WHERE t.int1=NEW.int1;
IF temp2 IS NULL THEN
SELECT u.str1 INTO temp2 FROM t_c u WHERE u.str2=NEW.str2;
IF temp2 IS NOT NULL THEN
IF temp IS NOT NULL THEN
UPDATE t_a SET str1=temp2,bool1=temp
WHERE int1=NEW.int1;
ELSE
UPDATE t_a SET str1=temp2 WHERE int1=NEW.int1;
END IF;
ELSE
IF temp IS NOT NULL THEN