[GENERAL] pg_dump: ERROR: could not open relation with OID ...

2007-06-04 Thread Thomas F. O'Connell
During a routine backup procedure (that does not run nightly) for an  
8.2.3 postgres cluster, pg_dump failed:


pg_dump: Error message from server: ERROR:  could not open relation  
with OID ...


In doing some log forensics, I discovered that this error has been
showing up in the logs intermittently, unconnected to pg_dump, for the
past six days. It's not occurring at an alarming rate, but the fact
that it's occurring at all is mildly alarming, and the fact that it's
preventing backups is even more alarming.


In reviewing the logs, one OID in particular shows up in the vast
majority of the errors, and it doesn't correspond to any entry I can
find in pg_class. A handful of other OIDs show up, and a sampling of
them likewise reveals no entries in pg_class.
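
For reference, this is the sort of check I've been running, with the
OID taken straight from the logs; it comes back empty for the OIDs in
question:

SELECT relname, relkind FROM pg_class WHERE oid = <OID from the log>;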


The other curious item is that a number of the errors occur in a  
pattern where they precede other error statements and don't seem to  
be directly tied to connections, except during the failed pg_dumps.


The typical pattern in the logs is:

[timestamp] [pid] [remote host/port]:ERROR:  [error message]
[timestamp] [pid] [remote host/port]:STATEMENT: [statement]

With this error, though, the format doesn't include the remote host/ 
port, which makes me wonder if it's occurring as a result of  
autovacuum or other local/internal activity.
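
For context, the log_line_prefix in effect here is essentially the
following (from postgresql.conf, reconstructed from memory):

log_line_prefix = '%t %p %r:'

My understanding is that %r (the remote host and port) comes out empty
for internal processes such as autovacuum, which is part of why I
suspect local activity.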


Now my thoughts return nervously to a previous thread:

http://archives.postgresql.org/pgsql-general/2007-05/msg01208.php

I didn't think much of it at the time, but now I wonder if it was  
indicative of trouble on the way?


--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005



[GENERAL] Large Database \d: ERROR: cache lookup failed for relation ...

2007-05-24 Thread Thomas F. O'Connell
I'm dealing with a database where there are ~150,000 rows in  
information_schema.tables. I just tried to do a \d, and it came back  
with this:


ERROR:  cache lookup failed for relation [oid]

Is this indicative of corruption, or is it possibly a resource issue?

I don't see a lot of evidence of this error in the archives.

--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005



[GENERAL] Continuous Archiving for Multiple Warm Standby Servers

2007-05-07 Thread Thomas F. O'Connell
I'm attempting to design a postgres system whereby an authoritative  
primary server simultaneously feeds continuous archives to a number  
of warm standby servers that live both on the local network and on  
remote networks.


The sticking point in my current thinking about such a system is what  
to do in the event that any of an array of possible nodes becomes  
unreachable. I would expect a custom archive_command to have the  
intelligence about network reachability and to report a nonzero  
status if it was unable to submit an archive to any particular node.


The way I understand it, postgres would then resubmit the file that
caused the nonzero status, which, if connectivity has been restored,
is no problem for the node that caused the nonzero status in the
first place. But then the issue becomes what to do with the nodes
that were fine when the nonzero status was returned.


From the docs, http://www.postgresql.org/docs/8.2/static/continuous-archiving.html#BACKUP-ARCHIVING-WAL:


It is advisable to test your proposed archive command to ensure that  
it indeed does not overwrite an existing file, and that it returns  
nonzero status in this case. We have found that cp -i does this  
correctly on some platforms but not others. If the chosen command  
does not itself handle this case correctly, you should add a command  
to test for pre-existence of the archive file.


What is the advised remedy for this scenario in general? And, more
specifically, what is the remedy when archive_command returns nonzero
because the file already exists on the nodes that stayed up, following
an earlier nonzero return caused by one or more nodes becoming
unreachable?


A follow-on question is: Does it become the responsibility of
archive_command in a scenario like this to track which files have
been archived on which nodes? Is there any introspective way for a
standby server to know that a file has been archived by the primary?
If not, is it safe to rely on using sequential numbering of WAL files
for implicit introspection? I don't see any functions that provide
introspection of this nature. I ask because it seems like network-to-
network failures are a common enough occurrence that some mechanism
for archive verification is a must-have. I'm just trying to determine
how much of that functionality I'll have to build myself...
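
For the sake of discussion, here's a rough sketch of the kind of
archive_command wrapper I have in mind (the hostnames, paths, and
state-tracking scheme are all hypothetical):

#!/bin/sh
# hypothetical wrapper, invoked as: archive_command = 'archive_wal.sh %p %f'
# Ships the segment to every standby and remembers per-node successes so
# a retry doesn't resend to (or overwrite on) nodes that already have it.
# Exits nonzero if any node is still missing the file, so postgres retries.
WAL_PATH="$1"
WAL_FILE="$2"
STATE_DIR=/var/lib/pgsql/archive_state
NODES="standby1 standby2 remote-standby1"

status=0
for node in $NODES; do
    # skip nodes that have already acknowledged this segment
    if [ -f "$STATE_DIR/$node/$WAL_FILE" ]; then
        continue
    fi
    if rsync -a "$WAL_PATH" "$node:/archive/$WAL_FILE"; then
        mkdir -p "$STATE_DIR/$node"
        touch "$STATE_DIR/$node/$WAL_FILE"
    else
        status=1
    fi
done
exit $status

That at least sidesteps the overwrite question on the nodes that
stayed up, but it still leaves the verification questions above.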


--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005



Re: [GENERAL] NFS vs. PostgreSQL on Solaris

2007-04-30 Thread Thomas F. O'Connell


On Apr 26, 2007, at 6:51 PM, Tom Lane wrote:


Thomas F. O'Connell [EMAIL PROTECTED] writes:

1. What aspect of postgres' memory usage would create an out of
memory condition?


I'm guessing you ran the box out of swap space --- look into what other
processes got started as a result of adding the NFS mount, and how much
memory they wanted to eat.


3. What would cause postgres to die from a signal 11?
I've also got a core file if that's necessary for further forensics.


Send gdb backtrace, please.

regards, tom lane


Unfortunately, the production build in question is lacking
--enable-debug. :(
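
If we do end up rebuilding with --enable-debug, this is roughly how I'd
pull the trace (binary and core paths are illustrative):

gdb /usr/local/pgsql/bin/postgres /path/to/core
(gdb) bt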


--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005




[GENERAL] NFS vs. PostgreSQL on Solaris

2007-04-26 Thread Thomas F. O'Connell
 on the box  
behaving badly, we saw this:


2007-04-26 14:53:05 CDT 15175 :LOG:  server process (PID 18543) was  
terminated by signal 11
2007-04-26 14:53:05 CDT 15175 :LOG:  terminating any other active  
server processes


So my questions are:

1. What aspect of postgres' memory usage would create an out of  
memory condition?
2. What is the difference between an ERROR and FATAL out of memory  
message?

3. What would cause postgres to die from a signal 11?

I've also got a core file if that's necessary for further forensics.

--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005

Re: [GENERAL] [DOCS] Incrementally Updated Backups: Docs Clarification

2007-04-25 Thread Thomas F. O'Connell


On Apr 25, 2007, at 9:42 AM, Simon Riggs wrote:


On Thu, 2007-04-19 at 15:48 -0500, Thomas F. O'Connell wrote:


If we take a backup of the standby server's files while it is
following logs shipped from the primary, we will be able to reload
that data and restart the standby's recovery process from the last
restart point. We no longer need to keep WAL files from before the
restart point. If we need to recover, it will be faster to recover
from the incrementally updated backup than from the original base
backup.

I'm specifically confused about the meaning of the following phrases:

backup of the standby server's files - Which files?


the files that make up the database server:
- data directory
- all tablespace directories


reload that data - What does this mean in postgres terms?


copy back from wherever you put them in the first place

that data referring to the files that make up the db server


last restart point - What is this? Wouldn't it be able to restart
from the last recovered file, which would presumably occur later than
the last restart point?


No, we don't restart file-by-file.

http://developer.postgresql.org/pgdocs/postgres/continuous-archiving.html#BACKUP-PITR-RECOVERY


"If recovery finds a corruption in the WAL..." onwards explains the
restart mechanism. It's much like checkpointing, so we don't restart
from the last log file; we restart from a point possibly many log files
in the past.


Does this mean make a filesystem backup of the standby server's data
directory while it's stopped, and then start it again with that data
and the restricted set of WAL files needed to continue recovery?


No need to stop server. Where do you read you need to do that?


I'd like to see the language here converted to words that have more
meaning in the context of postgres. I'd be happy to attempt a revision
of this section once I'm able to complete an incrementally updated
backup successfully.


Feel free to provide updates that make it clearer.


Here's how I envision it playing out in practice:

1. stop standby postgres server
2. [optional] preserve data directory, remove unnecessary WAL files
3. restart standby server


step 2 only.

Clearly not an optional step, since it's a one-stage process. :-)

--
  Simon Riggs
  EnterpriseDB   http://www.enterprisedb.com


Well, this conversation made things a lot clearer, but I'm not sure
(yet) how to patch the docs. It seems like the original version is
written in general terms, whereas what our Q&A produces here is very
postgres-specific. I'll see if I can produce a version that would
add clarity (for me).
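
In the meantime, here's roughly how I now picture step 2 playing out,
as a sketch only (paths are illustrative, and this assumes the standby
keeps restoring WAL the whole time):

# on the standby, while it continues to restore WAL shipped from the primary
tar czf /backups/standby-$(date +%F).tar.gz -C /var/pgsql data
# WAL segments older than the standby's last restart point can then be
# pruned from the archive it restores from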


--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005



[GENERAL] Incrementally Updated Backups: Docs Clarification

2007-04-19 Thread Thomas F. O'Connell
I'm about to begin playing with incrementally updated backups for a  
warm standby scenario, but I need some help understanding this  
paragraph in postgres terms. From 23.4.5 in the 8.2.3 docs:


If we take a backup of the standby server's files while it is  
following logs shipped from the primary, we will be able to reload  
that data and restart the standby's recovery process from the last  
restart point. We no longer need to keep WAL files from before the  
restart point. If we need to recover, it will be faster to recover  
from the incrementally updated backup than from the original base  
backup.


I'm specifically confused about the meaning of the following phrases:

backup of the standby server's files - Which files?

reload that data - What does this mean in postgres terms?

last restart point - What is this? Wouldn't it be able to restart  
from the last recovered file, which would presumably occur later than  
the last restart point?


Does this mean make a filesystem backup of the standby server's data  
directory while it's stopped, and then start it again with that data  
and the restricted set of WAL files needed to continue recovery? I'd  
like to see the language here converted to words that have more  
meaning in the context of postgres. I'd be happy to attempt a  
revision of this section once I'm able to complete an incrementally  
updated backup successfully.


Here's how I envision it playing out in practice:

1. stop standby postgres server
2. [optional] preserve data directory, remove unnecessary WAL files
3. restart standby server

Is that all there is to it?

--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005



Re: [GENERAL] Status of Postgres 8.2.4 and pg_standby

2007-04-17 Thread Thomas F. O'Connell

On Apr 17, 6:56 pm, [EMAIL PROTECTED] (Bruce Momjian) wrote:
 CAJ CAJ wrote:
  Hello,

  What is the ETA of postgresql 8.2.4? Also, will pg_standby make it to
  the 8.2.x branch?

 pg_standby will not be in 8.2.X.  It is a new feature.

As I understand it, though, pg_standby as distributed in contrib for  
8.3 is designed to be backward compatible with 8.2.x.


http://archives.postgresql.org/pgsql-admin/2007-04/msg00069.php

I'm currently having good success in testing.

--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005



[GENERAL] pg_standby + test_warm_standby: A Community Report

2007-04-16 Thread Thomas F. O'Connell
After being asked to implement a warm standby server, I was pleased  
to happen upon Simon's pg_standby contribution. I was even more  
pleased to discover that it was designed to be compatible with the  
8.2.x releases despite not being officially released until 8.3. I was  
then even more pleased to discover that Simon had helpfully provided  
a test harness for pg_standby, which is still available at the link  
referenced in this thread [note that pg_standby itself should  
probably be obtained from CVS at this point rather than from the  
thread referenced here]:


http://archives.postgresql.org/pgsql-patches/2006-12/msg00085.php

I had to tweak a few things to get the test harness functioning  
smoothly under Solaris 10 for x86, but once I did, pg_standby seemed  
fully up to the task of operating a warm standby server in a  
controlled environment.


Specifically, I had to:

* alter the hardcoded locale settings (e.g., the lc_ parameters in the
supplied .conf files) from en_GB.UTF-8.
* avoid the use of the $PWD/.. syntax in archive_command and
restore_command. (I'm not sure whether this was just a shell
environment issue or what.)
* change the deprecated -m option (for moving archived files) in
restore_command in the supplied recovery.conf to -c. (An example of the
resulting restore_command line appears below.)
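
For anyone following along, the resulting line in the supplied
recovery.conf ended up looking something like this (the flags and paths
here are illustrative; check the usage output of your pg_standby build):

restore_command = 'pg_standby -c -d -k 255 -t /tmp/pgsql.trigger /path/to/archive %f %p 2>> standby.log'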


I've since tested pg_standby in a two-server primary/standby simple  
failover/recovery scenario, and everything worked smoothly.


All testing was conducted using postgres 8.2.3 on Solaris 10 for x86.

I still need to test restartable recovery and incrementable backups,  
but I'd like to say thanks to Simon and the whole PostgreSQL Global  
Development Group for a fine product and a robust community.


--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005

Re: [GENERAL] pg_standby

2007-04-12 Thread Thomas F. O'Connell

On Mar 29, 2:17 pm, [EMAIL PROTECTED] (Thomas F. O'Connell) wrote:
 I see that Simon has pushed pg_standby into contrib for 8.3. Is there
 anything that would make the current version in CVS unsuitable for use
 in 8.2.x? I've done a cursory inspection of the code, but I'll admit
 that I haven't looked up the interfaces used from postgres_fe.h and
 pg_config_manual.h to see whether anything would be unusable or behave
 unexpectedly in 8.2.

 I'm assuming the correct way to install it would be to take the
 pg_standby directory from CVS, add it to an 8.2.x source contrib tree,
 and build as if it were a part of contrib in 8.2?

A post from Simon on a thread in pgsql-admin answers the primary  
question:


http://archives.postgresql.org/pgsql-admin/2007-04/msg00069.php

And my own testing answered the part about how to build it. The answer
seems to be yes.


--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005



[GENERAL] pg_standby: Unremovable Trigger File

2007-04-11 Thread Thomas F. O'Connell
I've been testing pg_standby as a helper application for a warm  
standby setup. So far, so good. When the environment is controlled  
and everything happens as expected, I'm able to operate a basic  
primary/standby setup. (This is all using 8.2.3 on Solaris x86, btw.)


One thing I noticed in early testing, though, was the scenario where  
the trigger file can't be removed by pg_standby. I touched a trigger  
file as root, which made it unremovable by postgres. So this tripped  
the relevant error condition in pg_standby.


I had a little difficulty understanding in what state this left the  
recovery process, and I'm not helping myself much by reading the  
code. Doesn't the non-zero exit from CheckForExternalTrigger mean  
that pg_standby will be signaling to the standby server a file-not- 
found scenario?


--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005



[GENERAL] pg_standby

2007-03-29 Thread Thomas F. O'Connell
I see that Simon has pushed pg_standby into contrib for 8.3. Is there
anything that would make the current version in CVS unsuitable for use
in 8.2.x? I've done a cursory inspection of the code, but I'll admit
that I haven't looked up the interfaces used from postgres_fe.h and
pg_config_manual.h to see whether anything would be unusable or behave
unexpectedly in 8.2.

I'm assuming the correct way to install it would be to take the
pg_standby directory from CVS, add it to an 8.2.x source contrib tree,
and build as if it were a part of contrib in 8.2?
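
Concretely, something like this is what I have in mind (paths are
illustrative, and it assumes the 8.2.x tree is configured the same way
as the running build):

cd postgresql-8.2.3
./configure            # with the same options as the production build
cp -r /path/to/pgsql-cvs/contrib/pg_standby contrib/
cd contrib/pg_standby
gmake && gmake install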

--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005



Re: [GENERAL] New US DST Rules PostgreSQL

2007-03-13 Thread Thomas F. O'Connell

On Jan 23, 1:18 am, [EMAIL PROTECTED] (Tom Lane) wrote:
 Adam Gordon [EMAIL PROTECTED] writes:
  Anyone know where to find info about whether or not the new US DST rules
  impact certain versions of Postgres and what needs to be done to ensure
  observance of the new rules?  Thanks.

 http://developer.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/no...

 The short version:

 Any 7.x PG: uses your OS's timezone info, hence OK if you've updated the OS

 8.0.x PG: need 8.0.4 or later

 8.1.x PG: all releases know about 2007 US DST updates

 8.2.x PG: all releases know about 2007 US DST updates

 regards, tom lane

I'm curious. For 7.4.x, does the database require a restart for the
change to take effect? I'm aware of a few production installations
where the OS was updated but the postgres time functions didn't pick up
the change.


--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005




[GENERAL] PostgreSQL on Solaris: Changing Compilers During Point Upgrade

2007-02-06 Thread Thomas F. O'Connell
I'm working on a system where postgres 8.2.1 was built from source on
Solaris 10 using gcc. Based on a number of recommendations, we decided
to rebuild postgres with Sun Studio cc. Without changing platforms, I
wouldn't have expected the compiler to make a difference, but we just
built 8.2.2 from source using cc, and now we're seeing this type of
error in the logs:


ERROR:  attribute 3 has wrong type
DETAIL:  Table has type character varying, but query expects  
character varying.


Is changing compilers under postgres on the same platform without a  
dump/reload a Bad Idea?


More important: Has this risked any catastrophic data corruption? If
we just switch to a gcc-built 8.2.2, will we be fine?


--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005



Re: [GENERAL] PostgreSQL on Solaris: Changing Compilers During Point Upgrade

2007-02-06 Thread Thomas F. O'Connell

On Tue, Feb 06, 2007 at 09:43:01AM -0600, Thomas F. O'Connell wrote:
 DETAIL:  Table has type character varying, but query expects
 character varying.

In another thread, someone else is reporting this too.  I'm
wondering whether something went wrong in the 8.2.2 release.


Is this the other thread?

http://archives.postgresql.org/pgsql-general/2007-02/msg00235.php

This looks like it's affecting 8.1...

--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005

Re: [GENERAL] PostgreSQL on Solaris: Changing Compilers During Point Upgrade

2007-02-06 Thread Thomas F. O'Connell

On Feb 6, 10:33 am, [EMAIL PROTECTED] (Tom Lane) wrote:
 Thomas F. O'Connell [EMAIL PROTECTED] writes:

  but we just built 8.2.2 from source using cc, and now we're seeing
  this type of error in the logs:
  ERROR:  attribute 3 has wrong type
  DETAIL:  Table has type character varying, but query expects
  character varying.

 This has nothing to do with your compiler :-(

I just wanted to eliminate that variable. It took me a while to track
down where the error originates in the source.


I saw this nearby thread on -hackers:

http://archives.postgresql.org/pgsql-hackers/2007-02/msg00276.php

The tables in question generating these errors in this database do  
indeed have functional indexes. Is there a known fix, or does this  
qualify as a bug?		


--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005



[GENERAL] 8.2.2 Announcement?

2007-02-06 Thread Thomas F. O'Connell
Shouldn't there be an announcement about the bug in the 8.2.2 release
announced yesterday, preceding the availability of new binaries? Or is
the bug not considered severe enough to invalidate the 8.2.2 sources
that are currently in distribution?


--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005



Re: [GENERAL] More activity in pg_stat_activity

2007-01-08 Thread Thomas F. O'Connell


On Jan 8, 2007, at 10:32 AM, Erik Jones wrote:


Erik Jones [EMAIL PROTECTED] writes:

One question regarding my previous thread about the 8.2 client tools.
We have yet to have time (personal as well as usage pattern
constraints) to dump our schema to analyze it for any possible
discrepancies and clock schema queries.  Is there any reason we
couldn't use the 8.1 pg_dump facility until such a time as we can
figure out our issues with the 8.2 pg_dump client (and psql, etc...)?


If I recall correctly, older pg_dump clients won't work at all with
newer postmasters. In fact, I think they will error out.


--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005



Re: [GENERAL] Database Corruption - last chance recovery options?

2007-01-06 Thread Thomas F. O'Connell


On Jan 5, 2007, at 10:01 PM, Tom Lane wrote:


Michael Best [EMAIL PROTECTED] writes:
 Set your memory requirement too high in postgresql.conf, reload instead
 of restarting the database, it silently fails sometime later?


Yeah, wouldn't surprise me, since the reload is going to ignore any
changes related to resizing shared memory.  I think that 8.2 might warn
you that it was ignoring the un-applyable changes, but the warnings
would only go to the postmaster log, where they're easily missed :-(


Wait, now I'm curious. If a change in postgresql.conf that requires a  
restart doesn't take effect on reload, then how could a related  
failure manifest at all, regardless of when?


--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005



Re: [GENERAL] Interrupted pg_dump / pg_restore Upgrade

2007-01-05 Thread Thomas F. O'Connell


On Jan 4, 2007, at 7:03 PM, Tom Lane wrote:


Thomas F. O'Connell [EMAIL PROTECTED] writes:

My big question is: Is there anything that happens late in the game
in a pg_dumpall that affects system catalogs or other non-data
internals in any critical ways that would make an interrupted
pg_dumpall | psql sequence unstable?


There's quite a lot of stuff that happens after the data load, yes.
One thought that comes to mind is that permissions aren't
granted/revoked until somewhere near the end.  But why don't you
look at the output of pg_dumpall -s and find out for yourself
what got lost?


Yeah, now that I think about it, though, everything that pg_dumpall  
produces is SQL or DDL, so unless it does anything involving  
preservation of system catalogs that is critical, I'm somewhat less  
concerned about this particular issue.


I still intend to review the schema diff, but I think there are some  
other issues that need investigation as well.


Thanks for the tip.

--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005





Re: [GENERAL] Database Corruption - last chance recovery options?

2007-01-05 Thread Thomas F. O'Connell


On Jan 4, 2007, at 11:24 PM, Michael Best wrote:

When I finally got the error report in the morning the database was  
in this state:


$ psql dbname

dbname=# \dt
ERROR:  cache lookup failed for relation 20884


Do you have your error logs, and were there any relevant errors in  
them preceding your inability to get a table listing via psql?


--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005



Re: [GENERAL] upgrading and pg_restore versions

2007-01-05 Thread Thomas F. O'Connell


On Jan 4, 2007, at 2:42 PM, Angva wrote:


Dear Postgres gurus,

I come seeking advice on upgrading my 8.1.2 databases to 8.2. My
environment in a nutshell: I have a master database on one server that
is used to process large amounts of data. This data is replicated daily
to several destination databases, each on a separate server. There are
so many destinations that I fear I won't be able to upgrade them all in
one evening.

Does anyone know if it would be safe to upgrade a subset of the
databases per night? I am concerned with version mismatches between dmp
files, pg_restore, and destination database versions. Should the
pg_restore still work ok? And if that is ok, should the master database
be upgraded in any particular order (before or after the destinations)?


I would greatly appreciate any advice that can be offered.

Thanks!
Mark


A few questions:

1. How is the data being replicated?
2. Is each slave getting all the data?

I would expect it to be easier to upgrade each slave as you have time  
to do so, depending on how you're moving your data around. Later  
versions of pg_dump usually play nicely with older versions of  
postgres, whereas older pg_dump clients don't usually play nicely  
with newer versions of postgres.


The nice thing about pg_dump is that you can output in SQL/DDL  
format, which should usually be able to be restored into a different  
version of postgres (possibly requiring edits to the dump file to  
work around any incompatibilities). You don't have to rely on  
pg_restore.
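
For example (host and database names here are made up), moving a single
database forward might look like:

# run the newer (8.2) pg_dump against the older (8.1.2) master
pg_dump -h master81 -p 5432 -Fp mydb > mydb.sql
# load the plain SQL dump into the upgraded destination with psql
psql -h dest82 -p 5432 -d mydb -f mydb.sql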


--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005



[GENERAL] Interrupted pg_dump / pg_restore Upgrade

2007-01-04 Thread Thomas F. O'Connell
I just became involved in a scenario wherein a migration between
releases (8.1.x to 8.2) using pg_dumpall piped to psql (per section
23.5 of the 8.2 docs) was interrupted because of how long the procedure
was taking. The interruption was green-lit because it was determined
that the data had been migrated and that indexes and constraints were
still to come (index creation was actually midway). A decision was made
to go ahead and move forward with the 8.2 database, with the intention
of rebuilding indexes and other constraints manually.


My big question is: Is there anything that happens late in the game  
in a pg_dumpall that affects system catalogs or other non-data  
internals in any critical ways that would make an interrupted  
pg_dumpall | psql sequence unstable?


There are a number of irregularities turning up with the upgraded  
database, and I'm trying to rule out as many root causes as possible.


The new database is 8.2 (as were all the client utilities used in the  
migration), built from source, running on Solaris:


SunOS x41-xl-01.int 5.10 Generic_118855-19 i86pc i386 i86pc

--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005



Re: [GENERAL] Changing the data directory Ubuntu

2006-08-15 Thread Thomas F. O'Connell


On Aug 15, 2006, at 1:19 PM, [EMAIL PROTECTED] wrote:


Hello,

I have done a good deal of investigation and cannot seem to find a
straight answer.  Is there a way to change the default data directory?
I am using Ubuntu Dapper LTS.  I have a separate hard drive (and
partition) that I want to keep my data on in case of a problem with the
OS.

Any help would be appreciated.

Shaun


Unless Ubuntu is doing anything funny, you should be able to set  
data_directory in postgresql.conf:


http://www.postgresql.org/docs/8.1/static/runtime-config-file-locations.html
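
E.g., something along these lines (the path is purely illustrative, and
the Ubuntu packages may keep postgresql.conf somewhere like
/etc/postgresql rather than in the data directory):

# in postgresql.conf
data_directory = '/mnt/bigdisk/postgresql/data'

A restart is needed for the new setting to take effect.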


--
Thomas F. O'Connell
Sitening, LLC

http://www.sitening.com/
3004B Poston Avenue
Nashville, TN 37203-1314
615-469-5150 x802
615-469-5151 (fax)



Re: [GENERAL] PostgreSQL on RAM Disk / tmpfs

2006-08-08 Thread Thomas F. O'Connell


On Aug 3, 2006, at 1:26 PM, Merlin Moncure wrote:


On 8/2/06, Thomas F. O'Connell [EMAIL PROTECTED] wrote:
I'm working on a postgres instance (8.1.2 running on Solaris 10) where the
data directory (including WAL) is being mounted on tmpfs. Based on this, and
with knowledge that fsync is disabled, I'm operating under the assumption
that recoverability is not a precondition for optimized performance. With
that assumption, I have several questions, some performance-related, others
internals-related:


to be honest, I think the best approach is to simply write to the
traditional filesystem and leave fsync off.  writing to a ramdisk
might be a bit faster, but you deprive the server memory from doing
other things like caching and sorting.  this might be more true for
some o/s than others though.  i'm just curious, what led you to do
ramdisk implementation  (most people who ask questions about ramdisk
have no idea what they are talking about, although you seem to).


That was how I found it. :)

I think, though, that it was the result of benchmarking a variety of  
on-disk RAID configurations with an eye toward ever increasing write  
throughput.


4. Considering that recoverability is not a precondition, is there an easy
patch that could be applied to the 8.1.x series from 8.1.4 on that would
allow disabling full_page_writes? For a database in RAM with high write
volume, is this setting even likely to make a difference?


I would suggest pulling 8.2dev (shortly beta) if you want this and
experiment. it is perfectly stable.  looking at the todo list, 8.2
also gets the multiple insert syntax, which is nice.

if you have super high write volumes, consider writing your insert call in
C. prepare your statement, and use the parameterized version,
ExecPrepared(...).


Can you point to a good example of this anywhere in the docs? I don't  
see ExecPrepared anywhere in the core documentation.


--
Thomas F. O'Connell
Sitening, LLC

http://www.sitening.com/
3004B Poston Avenue
Nashville, TN 37203-1314
615-469-5150 x802
615-469-5151 (fax)



Re: [GENERAL] PostgreSQL on RAM Disk / tmpfs

2006-08-08 Thread Thomas F. O'Connell


On Aug 8, 2006, at 1:10 PM, Merlin Moncure wrote:


On 8/8/06, Thomas F. O'Connell [EMAIL PROTECTED] wrote:

On Aug 3, 2006, at 1:26 PM, Merlin Moncure wrote:
 if you have super high write volumes, consider writing your insert call in
 C. prepare your statement, and use the parameterized version,
 ExecPrepared(...).

Can you point to a good example of this anywhere in the docs? I don't
see ExecPrepared anywhere in the core documentation.


well, it's actually PQexecPrepared()
http://www.postgresql.org/docs/8.1/interactive/libpq-exec.html

do some tests and you should see a nice improvement over PQexec().


Thanks!

I remain curious, though: in the event that a RAM-disk-based  
architecture remains in place, do all traditional disk-based  
considerations go out the window? For instance, does trying to  
cluster same-table statements together in a transaction in an effort  
to reduce disk activity make any difference?


And is the overall strategy of attempting to keep distance between  
checkpoints somewhat high (especially since the need for  
checkpointing overall is reduced) still a good basis?


--
Thomas F. O'Connell
Sitening, LLC

http://www.sitening.com/
3004B Poston Avenue
Nashville, TN 37203-1314
615-469-5150 x802
615-469-5151 (fax)




Re: [GENERAL] How to read log files

2006-08-03 Thread Thomas F. O'Connell


On Aug 3, 2006, at 8:33 AM, Andy Dale wrote:

I am currently trying to debug an issue we are experiencing. I have
turned the server logging up to debug level 5 and am filtering out the
LOG messages, but I still need some advice on how to interpret them.
An extract from the log is:


[snip]

The main difficulty I am having at the moment is knowing which commit
belongs to which statement (especially when you have 2 or more
statements). Does it work like a stack structure where the last
statement is committed first, or more like a queue where the statements
are committed in the order in which they were issued? Also, as you can
see from the above log extract, it has a lot of $1 and $2; is there any
way to print these out in the log as well?


Have you tried running with log_statement enabled? I find that  
incredibly useful for tracking application behavior from the postgres  
logs.


http://www.postgresql.org/docs/8.1/static/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT
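
E.g., in postgresql.conf (a reload is enough to pick it up):

log_statement = 'all'     # 8.1 accepts: none, ddl, mod, all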


--
Thomas F. O'Connell
Sitening, LLC

http://www.sitening.com/
3004B Poston Avenue
Nashville, TN 37203-1314
615-469-5150 x802
615-469-5151 (fax)



[GENERAL] SAVEPOINT and FOR UPDATE

2006-08-03 Thread Thomas F. O'Connell
I'm curious to know more about the postgres implementation of
subtransactions via SAVEPOINT.

If I wanted to set up a multi-statement transaction in which I needed
multiple SELECT ... FOR UPDATE + UPDATE blocks, it would seem
advantageous to be able to combine the SELECT ... FOR UPDATE clauses
with the corresponding UPDATE clauses in a subtransaction in order to
avoid locking rows for the duration of the entire outer transaction. In
my experimentation, I'm not seeing this behavior, so I'm wondering if
I'm misreading or overlooking something in the docs about how to use
SAVEPOINT to create subtransactions.

Here's what I set up as a basic test case in psql:

postgres=# CREATE TABLE updateable1 ( id int primary key );
postgres=# INSERT INTO updateable1 VALUES ( 1 );
postgres=# START TRANSACTION;
postgres=# SAVEPOINT u1;
postgres=# SELECT id FROM updateable1 WHERE id = 1 FOR UPDATE;

Then, in a separate session, I do this:

postgres=# UPDATE updateable1 SET id = 0 WHERE id = 1;

This, appropriately, waits.

In the original session, I now do this:

postgres=# UPDATE updateable1 SET id = 0 WHERE id = 1;
postgres=# RELEASE u1;

Unfortunately, the second session is still waiting and continues to do
so until I commit the transaction started in the first session. I sort
of expected the release of the savepoint to be tantamount to a commit
of the subtransaction, but it doesn't appear to have been.

I'd like a method for doing the following:

START TRANSACTION;
// do work
// start subtransaction
SELECT ... FOR UPDATE;
UPDATE ...;
// commit subtransaction
// do more work
COMMIT;

Is there any way to achieve the behavior I'm after?

--
Thomas F. O'Connell
Sitening, LLC

http://www.sitening.com/
3004B Poston Avenue
Nashville, TN 37203-1314
615-469-5150 x802
615-469-5151 (fax)

[GENERAL] PostgreSQL on RAM Disk / tmpfs

2006-08-02 Thread Thomas F. O'Connell
I'm working on a postgres instance (8.1.2 running on Solaris 10) where
the data directory (including WAL) is being mounted on tmpfs. Based on
this, and with knowledge that fsync is disabled, I'm operating under
the assumption that recoverability is not a precondition for optimized
performance. With that assumption, I have several questions, some
performance-related, others internals-related:

1. Should there be any perceptible difference between using a RAM disk
and tmpfs? Would the fact that the RAM disk were fixed-size make it at
all faster?

2. Would there be any benefit to having WAL on a separate RAM disk?
I.e., would RAM access be contentious in anything resembling the way
disk access is contentious? One possible setup would be a fixed-size
RAM disk for data and WAL on tmpfs, for instance.

3. In this scenario, is the strategy of keeping checkpoints far apart
still a valid performance consideration? I would expect that there
would be an optimal spot on the curve somewhere on the axes of
checkpoint distance and available memory. During testing on the RAM
disk, one problem encountered was that increasing checkpoint_segments
caused the RAM disk to fill up, which suggests one potential benefit of
pushing WAL off to tmpfs but possibly leaving data on the RAM disk.
There would be no reason to push for aggressive checkpointing, would
there?

4. Considering that recoverability is not a precondition, is there an
easy patch that could be applied to the 8.1.x series from 8.1.4 on that
would allow disabling full_page_writes? For a database in RAM with high
write volume, is this setting even likely to make a difference?

5. Does wal_debug provide insight into whether LogInsert is being
forced to write WAL buffers? This would seem to be a useful indicator
as to whether wal_buffers was set high enough. (A side note: I couldn't
find reference to LogInsert or LogFlush in the source; would it make
sense to have the documentation refer to the actual functions?)
Unfortunately, I don't have access to a system that can be easily
recompiled for me to test this. A corollary question: if data and
pg_xlog are both in RAM, does LogInsert still spill to the kernel
cache, or would it spill to RAM?

--
Thomas F. O'Connell
Sitening, LLC

http://www.sitening.com/
3004B Poston Avenue
Nashville, TN 37203-1314
615-469-5150 x802
615-469-5151 (fax)

Re: [GENERAL] Tape backup, 2 versions, same database name, which is pg_dump backing up

2006-08-02 Thread Thomas F. O'Connell
You'll need to specify the non-default port explicitly in your pg_dump
command in order to back up the postmaster running on 55432.

E.g., pg_dump -p 55432 -U postgres -C -D -f /tmp/$(date +%F)owl.sql owl

By default, the postgres command-line utilities attempt to connect to
5432 (or $PGPORT or whatever is configured as the default port).

In the meantime, you're still backing up the 7.3.4 postmaster with that
script.

--
Thomas F. O'Connell
Sitening, LLC

http://www.sitening.com/
3004B Poston Avenue
Nashville, TN 37203-1314
615-469-5150 x802
615-469-5151 (fax)

On Aug 2, 2006, at 3:47 PM, [EMAIL PROTECTED] wrote:

Yesterday a consultant upgraded me from 7.3.4 to 8.1.4 on a RedHat 9
server. To make sure everything was okay we left the server so that
both versions of Postgresql load. 7.3.4 loads on the default port of
5432 and 8.1.4 loads on port 55432. My database was moved into the new
version so both the old and new databases have the same name. I have a
little .sh file that runs a tape backup using pg_dump and I am
wondering, if both versions are running, how do I know which version of
pg_dump is running and which version of the database is being backed
up? The backup command I use is

pg_dump -U postgres -C -D -f /tmp/$(date +%F)owl.sql owl

then I use tar to copy the file to tape. Please answer to me as well as
the list, I am on digest.

Thanks,

*** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
Margaret Gillon, IS Dept., Chromalloy Los Angeles, ext. 297

This e-mail message and any attachment(s) are for the sole use of the
intended recipient(s) and may contain proprietary and/or confidential
information which may be privileged or otherwise protected from
disclosure. Any unauthorized review, use, disclosure or distribution is
prohibited. If you are not the intended recipient(s), please contact
the sender by reply email and destroy the original message and any
copies of the message as well as any attachment(s) to the original
message.

Re: [GENERAL] Tape backup, 2 versions, same database name, which is pg_dump backing up

2006-08-02 Thread Thomas F. O'Connell
Your confirmation is correct with one slight technical exception:
specifying the port doesn't technically cause the system to know which
version of postgres to use.

In my previous post, I forgot to mention a related note: the version of
pg_dump that you're using matters slightly. pg_dump can typically be
used with older postgres installations, but not newer ones. So you
should prefer using the 8.1.4 version of pg_dump to dump at least your
8.1.4 postmaster, but probably also your 7.3.4 postmaster.

A 7.3.4 pg_dump client will probably not be able to run against an
8.1.4 postmaster.

Otherwise, pg_dump doesn't really care about the version as much as it
cares about being able to connect to a postmaster. That's what
specifying the port helps it do. When you don't specify the port, it
just tries port 5432. If it finds a postmaster listening there, great,
it will try to dump it; otherwise, it will fail with a complaint that
it couldn't connect to anything on 5432.

--
Thomas F. O'Connell
Sitening, LLC

http://www.sitening.com/
3004B Poston Avenue
Nashville, TN 37203-1314
615-469-5150 x802
615-469-5151 (fax)

On Aug 2, 2006, at 4:10 PM, [EMAIL PROTECTED] wrote:

"Thomas F. O'Connell" [EMAIL PROTECTED] wrote on 08/02/2006 02:04:35 PM:

 You'll need to specify the non-default port explicitly in your
 pg_dump command in order to back up the postmaster running on 55432.

 E.g., pg_dump -p 55432 -U postgres -C -D -f /tmp/$(date +%F)owl.sql owl

 By default, the postgres command-line utilities attempt to connect
 to 5432 (or $PGPORT or whatever is configured as the default port).

 In the meantime, you're still backing up the 7.3.4 postmaster with
 that script.

 --
 Thomas F. O'Connell
 Sitening, LLC

To confirm: what you're saying is that by specifying the port in my
command the system knows which database to back up and which version of
Postgresql to use.

Thank you for the assistance.

Margaret Gillon

Re: [GENERAL] Tape backup, 2 versions, same database name, which is pg_dump backing up

2006-08-02 Thread Thomas F. O'Connell
On Aug 2, 2006, at 4:27 PM, [EMAIL PROTECTED] wrote:

You're correct, I cannot use the pg_dump. I get an error message that
the pg_dump is aborting because of a version mismatch, then it says to
use the -i option. How do I call the pg_dump from the 8.1.4 version?


You'll probably need to specify an absolute path. Are both the client
and server applications running on the same server? If so, you might
need to get your consultant to tell you where the client applications
for the 8.1.4 installation were actually installed. By default,
postgres installs both client and server applications into
/usr/local/pgsql, but at least one of your installations must be in a
different location; otherwise, you probably couldn't have two versions
of the postmaster running concurrently.

--
Thomas F. O'Connell
Sitening, LLC

http://www.sitening.com/
3004B Poston Avenue
Nashville, TN 37203-1314
615-469-5150 x802
615-469-5151 (fax)

[GENERAL] ERROR: could not access status of transaction

2006-04-17 Thread Thomas F. O'Connell

I recently saw this in some postgres logs:

ERROR:  could not access status of transaction ...
DETAIL:  could not open file pg_clog/...: No such file or directory
STATEMENT:  COPY ...

The copy command was issued amongst a number of other queries from an  
application layer at the same time as a pg_dumpall was running.


Based on this:

http://archives.postgresql.org/pgsql-committers/2006-01/msg00287.php

I would not have thought this cluster was vulnerable. It's running  
8.1.3. It makes me a little nervous that the nightly backup was  
interrupted, but it makes me a little more nervous that it seems  
indicative of an abstruse error condition. Any thoughts from the  
hackers?


--
Thomas F. O'Connell
Database Architecture and Programming
Sitening, LLC

http://www.sitening.com/
3004 B Poston Avenue
Nashville, TN 37203-1314
615-260-0005 (cell)
615-469-5150 (office)
615-469-5151 (fax)




Re: [GENERAL] How to find the latest (partial) WAL file

2006-04-10 Thread Thomas F. O'Connell


On Apr 10, 2006, at 6:24 PM, Tom Lane wrote:


Just Someone [EMAIL PROTECTED] writes:

Is there a way to discover what is the real current WAL file?


If you sort first by mtime and second by file name you should find the
right one in all cases, ie, take the latest mtime among the
properly-named files, breaking ties by taking the higher filename.

It'd probably be better if we had a function to report this, but
you can get along without one.


Seems like a natural candidate for a TODO item. I'm not familiar with  
the relevant internals, but on the surface it seems like it might  
even be a % item?


--
Thomas F. O'Connell
Database Architecture and Programming
Co-Founder
Sitening, LLC

http://www.sitening.com/
3004 B Poston Avenue
Nashville, TN 37203-1314
615-260-0005 (cell)
615-469-5150 (office)
615-469-5151 (fax)




Re: [GENERAL] CREATE TABLE (with INHERITS) and ACCESS EXCLUSIVE locks

2006-04-04 Thread Thomas F. O'Connell


On Apr 4, 2006, at 12:53 AM, Tom Lane wrote:


Thomas F. O'Connell [EMAIL PROTECTED] writes:

I'm dealing with an application that can potentially do ad hoc DDL.
It uses a PG/pgSQL function, and the only DDL statements in the
function are CREATE TABLE and CREATE INDEX statements. But I'm
noticing that during the backup process (with pg_dump or pg_dumpall),
the function is acquiring ACCESS EXCLUSIVE locks and bringing the
application to its knees.


Please provide a test case.  AFAIR neither of those should take any
AccessExclusive locks --- except on the new table, which shouldn't
matter because pg_dump won't see it.


Below is a sketch of the function where the only difference with  
reality is identifier names. I'm pretty sure I obfuscated it  
consistently.


As for how this plays out in the real world, a pg_dumpall will start  
and run for a few hours. Sometime during that, this function might  
get called. When it does, an ACCESS EXCLUSIVE lock is held against  
the table identified as t13, here directly referenced only as a  
FOREIGN KEY.


This function is only DDL statements and calls no other functions.

CREATE OR REPLACE FUNCTION takes_access_exclusive_lock(character  
varying) RETURNS character varying

AS '
DECLARE
-- alias
id ALIAS FOR $1;

-- sql variables
create_child1 VARCHAR;
create_child2 VARCHAR;
create_child3 VARCHAR;
create_child4 VARCHAR;
create_child5 VARCHAR;
create_child6 VARCHAR;
create_child7 VARCHAR;
create_child8 VARCHAR;
create_child9 VARCHAR;
create_child10 VARCHAR;
create_child11 VARCHAR;
create_child12 VARCHAR;
create_indexes VARCHAR;

-- helpers
table_prefix VARCHAR;
BEGIN
table_prefix := ''child_'' || id;

create_child1 :=  ''
CREATE TABLE '' || table_prefix || ''_t1 (
  CONSTRAINT '' || table_prefix || ''_t1_pkey PRIMARY KEY (id)
) INHERITS (t1) WITHOUT OIDS '';

create_child2 :=  ''
CREATE TABLE '' || table_prefix || ''_t2 (
  CONSTRAINT '' || table_prefix || ''_t2_pkey PRIMARY KEY (id)
) INHERITS (t2) WITHOUT OIDS '';

create_child3 := ''
CREATE TABLE '' || table_prefix || ''_t3 (
  CONSTRAINT '' || table_prefix || ''_t3_pkey PRIMARY KEY (id1, id2),
  CONSTRAINT '' || table_prefix || ''_t3_fkey2 FOREIGN KEY (id2)
REFERENCES public.t13 (id) ON UPDATE RESTRICT ON DELETE RESTRICT,
  CONSTRAINT '' || table_prefix || ''_t3_fkey1 FOREIGN KEY (id1)
REFERENCES public.'' || table_prefix || ''_t1 (id) ON UPDATE  
RESTRICT ON DELETE RESTRICT

) INHERITS (t3)  WITHOUT OIDS '';

create_child4 := ''
CREATE TABLE '' || table_prefix || ''_t4 (
  CONSTRAINT '' || table_prefix || ''_t4_pkey PRIMARY KEY (id)
) INHERITS (t4)  WITHOUT OIDS '';

create_child5 := ''
CREATE TABLE '' || table_prefix || ''_t5 (
  CONSTRAINT '' || table_prefix || ''_t5_pkey PRIMARY KEY (id, ts),
  CONSTRAINT '' || table_prefix || ''_t5_fkey FOREIGN KEY (id)
REFERENCES public.'' || table_prefix || ''_t4 (id) ON UPDATE  
RESTRICT ON DELETE RESTRICT

) INHERITS (t5)  WITHOUT OIDS '';

create_child6 := ''
CREATE TABLE '' || table_prefix || ''_t6 (
) INHERITS (t6)  WITHOUT OIDS '';

create_child7 := ''
CREATE TABLE '' || table_prefix || ''_t7 (
  CONSTRAINT '' || table_prefix || ''_t7_pkey PRIMARY KEY (id),
  CONSTRAINT '' || table_prefix || ''_t7_fkey FOREIGN KEY (id)
REFERENCES public.'' || table_prefix || ''_t4 (id) ON UPDATE  
RESTRICT ON DELETE RESTRICT

) INHERITS (t7)  WITHOUT OIDS '';

create_child8 := ''
CREATE TABLE '' || table_prefix || ''_t8 (
  CONSTRAINT '' || table_prefix || ''_t8_pkey PRIMARY KEY (id),
  CONSTRAINT '' || table_prefix || ''_t8_fkey FOREIGN KEY (id)
REFERENCES public.'' || table_prefix || ''_t4 (id) ON UPDATE  
RESTRICT ON DELETE RESTRICT

) INHERITS (t8)  WITHOUT OIDS '';

create_child9 := ''
CREATE TABLE '' || table_prefix || ''_t9 (
  CONSTRAINT '' || table_prefix || ''_t9_pkey PRIMARY KEY (id),
  CONSTRAINT '' || table_prefix || ''_id2_id3_unique_idx UNIQUE  
(id2, id3)

) INHERITS (t9)  WITHOUT OIDS '';

create_child10 := ''
CREATE TABLE '' || table_prefix || ''_t10 (
  CONSTRAINT '' || table_prefix || ''_t10_pkey PRIMARY KEY (id)
) INHERITS (t10) WITHOUT OIDS '';

create_child11 := ''
CREATE TABLE '' || table_prefix || ''_t11 (
  CONSTRAINT '' || table_prefix || ''_t11_pkey PRIMARY KEY (id1,  
id2, col1, col2),

  CONSTRAINT '' || table_prefix || ''_t11_fkey1 FOREIGN KEY (id1)
REFERENCES t14 (id) ON UPDATE RESTRICT ON DELETE RESTRICT,
  CONSTRAINT '' || table_prefix || ''_t11_fkey2 FOREIGN KEY (id2)
REFERENCES '' || table_prefix || ''_t10 (id) ON UPDATE RESTRICT  
ON DELETE RESTRICT

) INHERITS (t11) WITHOUT OIDS '';

create_child12 := ''
CREATE TABLE '' || table_prefix || ''_t12 (
  CONSTRAINT '' || table_prefix || ''_t12_pkey PRIMARY KEY (id)
) INHERITS (t12) WITHOUT OIDS

Re: [GENERAL] CREATE TABLE (with INHERITS) and ACCESS EXCLUSIVE locks

2006-04-04 Thread Thomas F. O'Connell


On Apr 4, 2006, at 4:53 PM, Tom Lane wrote:


Thomas F. O'Connell [EMAIL PROTECTED] writes:

As for how this plays out in the real world, a pg_dumpall will start
and run for a few hours. Sometime during that, this function might
get called. When it does, an ACCESS EXCLUSIVE lock is held against
the table identified as t13, here directly referenced only as a
FOREIGN KEY.


It's the addition of a foreign key constraint that's biting you.  That
requires installing triggers on the pre-existing table (t13, also t14
in your example), and that requires an exclusive lock.

Since we don't currently allow any ON SELECT triggers, it's possible
that adding a trigger could be downgraded to just ExclusiveLock (which
wouldn't conflict with pg_dump's AccessShareLock), but I can't say that
I'm enthusiastic about that idea.

regards, tom lane


Thanks! At least we can create a workaround for the moment...

I've brought this up to an extent in the past, but is there an easy  
way to extend section 12.3.1 (or create some form of appendix) such  
that it reveals all possible locking paths for SQL commands in  
postgres? I've had a number of application design (actually, more  
often debugging) scenarios where it would be helpful to have a full  
reference that showed which locks were acquired by given commands or  
constructs and in which order.


From this specific instance, it seems like it wouldn't be too tough to
patch the docs to add something like [ FOREIGN KEY ] REFERENCES, when
used with CREATE TABLE to the ACCESS EXCLUSIVE section of 12.3.1.


But I'd be as interested to have the detail visually available for  
all SQL commands. E.g., that when foreign key constraints are created  
that they install triggers, and that that process requires ACCESS  
EXCLUSIVE locking. I knew (from familiarity with postgres) that  
referential integrity was trigger-based, but I didn't know (and don't  
see any way of knowing from the docs) that it required ACCESS  
EXCLUSIVE locking.


I'd be happy to contribute to a chart or diagram of something like  
this if developers could give me some reasonable starting points and  
don't think this idea is so unwieldy as to be ultimately unworkable.


--
Thomas F. O'Connell
Database Architecture and Programming
Co-Founder
Sitening, LLC

http://www.sitening.com/
3004 B Poston Avenue
Nashville, TN 37203-1314
615-260-0005 (cell)
615-469-5150 (office)
615-469-5151 (fax)





Re: [GENERAL] pgpool ABORT + no transaction warning

2006-04-03 Thread Thomas F. O'Connell


On Apr 3, 2006, at 3:05 AM, Tatsuo Ishii wrote:


The problem is not reproduced here. Do you have any idea how to
reproduce it?


If I did, I'd resolve it. :P

I suppose I can try restarting the connection pools on this server  
just to see what happens. If I am able to do that, I will post the  
results.


--
Thomas F. O'Connell
Database Architecture and Programming
Co-Founder
Sitening, LLC

http://www.sitening.com/
3004 B Poston Avenue
Nashville, TN 37203-1314
615-260-0005 (cell)
615-469-5150 (office)
615-469-5151 (fax)



On Apr 1, 2006, at 8:01 AM, Tatsuo Ishii wrote:

A while back, I posted about seeing a number of warnings from pgpool:


http://archives.postgresql.org/pgsql-admin/2005-03/msg00305.php

A typical pair of statements in my postgres logs looks like this:

WARNING:  there is no transaction in progress
STATEMENT:  ABORT

Tatsuo Ishii declared that these were harmless, and I took (and still
take) his word for it.

At some point in the past with my old configuration (postgres 8.1.3 /
pgpool 2.5.1) I did something that prevented the warnings from
showing up. On a system running postgres 8.1.3, I recently upgraded
pgpool from 2.5.1 to 3.0.1. When I upgraded pgpool, I began seeing
the warnings again after a long window of not having seen them. My
configuration files for pgpool and postgres didn't change during the
upgrade, so I'm wondering what I might've been doing to avoid
polluting my logs with them previously and why they are showing up
again.


Could you tell me how to reproduce the problem?


Here's what happened: I performed an on-line backup and recovery to
move postgres from one server to another. On the new server, I
installed pgpool 3.0.1 and copied over my old configuration files.

I have two instances of pgpool running on the server and am only
using pgpool for connection pooling, not load balancing or
replication. The only three settings that are different between the
two configurations are port, logdir, and num_init_children.

Below is one pgpool.conf (with a few paths changed back to the
defaults just for obfuscation), which doesn't include
child_max_connections, insert_lock, or ignore_leading_white_space
because I just used my 2.5.1 configuration file. I assume it would
just  use defaults for these values if they weren't specified in the
file?

listen_addresses = '*'
port = 
socket_dir = '/tmp'
backend_host_name = ''
backend_port = 5432
backend_socket_dir = '/tmp'
secondary_backend_host_name = ''
secondary_backend_port = 0
num_init_children = 64
max_pool = 4
child_life_time = 300
connection_life_time = 10
logdir = '/tmp/pgpool1'
replication_mode = false
replication_strict = true
replication_timeout = 5000
load_balance_mode = false
weight_master = 0.5
weight_secondary = 0.5
replication_stop_on_mismatch = false
reset_query_list = 'ABORT; RESET ALL; SET SESSION AUTHORIZATION  
DEFAULT'

print_timestamp = true
master_slave_mode = false
connection_cache = true
health_check_timeout = 20
health_check_period = 0
health_check_user = 'nobody'


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] CREATE TABLE (with INHERITS) and ACCESS EXCLUSIVE locks

2006-04-03 Thread Thomas F. O'Connell
I'm guessing that CREATE TABLE in itself doesn't take an ACCESS  
EXCLUSIVE lock because there's nothing yet to lock. But can CREATE  
TABLE ... INHERITS ... take an ACCESS EXCLUSIVE lock? Is it  
functioning as an ALTER TABLE?


I'm dealing with an application that can potentially do ad hoc DDL.  
It uses a PL/pgSQL function, and the only DDL statements in the  
function are CREATE TABLE and CREATE INDEX statements. But I'm  
noticing that during the backup process (with pg_dump or pg_dumpall),  
the function is acquiring ACCESS EXCLUSIVE locks and bringing the  
application to its knees. This seems to be a result of connections  
backing up waiting for the DDL to finish, and the DDL can't finish  
until the backup process finishes because of the function's ACCESS  
EXCLUSIVE lock conflicting with the database-wide ACCESS SHARE locks  
acquired by the backup process.
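
For reference, the function's DDL is roughly shaped like the following sketch (the table, column, and index names here are invented; the real ones are generated by the application):

CREATE OR REPLACE FUNCTION make_child(suffix text) RETURNS void AS $$
BEGIN
    -- ad hoc DDL: create a new child of an existing parent table
    EXECUTE 'CREATE TABLE parent_' || suffix || ' () INHERITS (parent)';
    -- and index it the same way the other children are indexed
    EXECUTE 'CREATE INDEX parent_' || suffix || '_id_idx ON parent_' || suffix || ' (id)';
END;
$$ LANGUAGE plpgsql;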


--
Thomas F. O'Connell
Database Architecture and Programming
Co-Founder
Sitening, LLC

http://www.sitening.com/
3004 B Poston Avenue
Nashville, TN 37203-1314
615-260-0005 (cell)
615-469-5150 (office)
615-469-5151 (fax)


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] [Slightly OT] data model books/resources?

2006-03-31 Thread Thomas F. O'Connell
On Mar 30, 2006, at 2:03 AM, Aaron Glenn wrote:

Anyone care to share the great books, articles, manifestos, notes,
leaflets, etc on data modelling they've come across? Ideally I'd like
to find a great college level book on data models, but I haven't come
across one that even slightly holds "definitive resource"-type status.
Feel free to reply off list to keep the clutter down - I'd be happy to
summarize responses for the list.

Thanks,
aaron.glenn

I've found Database Modeling Essentials by Simsion and Witt (ISBN:
0-12-644551-6) to be a good resource.

--
Thomas F. O'Connell
Database Architecture and Programming
Co-Founder
Sitening, LLC

http://www.sitening.com/
3004 B Poston Avenue
Nashville, TN 37203-1314
615-260-0005 (cell)
615-469-5150 (office)
615-469-5151 (fax)

[GENERAL] pgpool ABORT + no transaction warning

2006-03-31 Thread Thomas F. O'Connell

A while back, I posted about seeing a number of warnings from pgpool:

http://archives.postgresql.org/pgsql-admin/2005-03/msg00305.php

A typical pair of statements in my postgres logs looks like this:

WARNING:  there is no transaction in progress
STATEMENT:  ABORT

Tatsuo Ishii declared that these were harmless, and I took (and still  
take) his word for it.
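
For what it's worth, the warning itself is trivial to reproduce by hand in psql; an ABORT issued outside of any transaction (which is presumably what pgpool's reset query is doing against pooled connections) is enough:

ABORT;
-- the server responds with:
-- WARNING:  there is no transaction in progress
-- ROLLBACK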


At some point in the past with my old configuration (postgres 8.1.3 /  
pgpool 2.5.1) I did something that prevented the warnings from  
showing up. On a system running postgres 8.1.3, I recently upgraded  
pgpool from 2.5.1 to 3.0.1. When I upgraded pgpool, I began seeing  
the warnings again after a long window of not having seen them. My  
configuration files for pgpool and postgres didn't change during the  
upgrade, so I'm wondering what I might've been doing to avoid  
polluting my logs with them previously and why they are showing up  
again.


pgpool itself seems to be working fine after the upgrade, so as long  
as the warnings are harmless, it's not a big deal, but I'd like a  
clean method of preventing log noise if one exists.


--
Thomas F. O'Connell
Database Architecture and Programming
Co-Founder
Sitening, LLC

http://www.sitening.com/
3004 B Poston Avenue
Nashville, TN 37203-1314
615-260-0005 (cell)
615-469-5150 (office)
615-469-5151 (fax)


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Autovacuum Daemon Disrupting dropdb?

2006-03-22 Thread Thomas F. O'Connell
In an interesting epilogue to this thread, I just encountered  
something unusual.


Since having disabled autovacuum in the cluster in which it was  
preventing dropdb from working (because the autovacuum daemon was  
behaving as a user accessing the database), dropdb has been working  
fine.


Yesterday, though, I created a new cluster to house an on-line backup  
from a production server. The new cluster includes a postgresql.conf  
from the production server in which autovacuum was enabled. From the  
logs of the new cluster, autovacuum began processing as soon as the  
on-line backup recovery process was complete.


Then, last night, my dropdb failed for the first time since having  
disabled it in my original postgres cluster on this server with a  
warning that users were accessing the database.


Is it possible for an autovacuum daemon in a given cluster to have an  
impact on all postgres clusters running on a server?


--
Thomas F. O'Connell
Database Architecture and Programming
Co-Founder
Sitening, LLC

http://www.sitening.com/
3004 B Poston Avenue
Nashville, TN 37203-1314
615-260-0005 (cell)
615-469-5150 (office)
615-469-5151 (fax)


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] Troubling On-line Backup LOG messages

2006-03-21 Thread Thomas F. O'Connell
After a successful same-server test of the postgres on-line backup  
process, I tried a test from a remote server to simulate a process  
I'll need to do in a production environment soon.


After a step-by-step on-line backup, I saw the following in my logs:

2006-03-21 10:38:53 CST 1412 :LOG:  archive recovery complete
2006-03-21 10:38:53 CST 1412 :LOG:  could not truncate directory  
pg_multixact/offsets: apparent wraparound
2006-03-21 10:38:53 CST 1412 :LOG:  could not truncate directory  
pg_multixact/members: apparent wraparound
2006-03-21 10:39:22 CST 1412 :LOG:  database system is ready
2006-03-21 10:39:22 CST 1412 :LOG:  transaction ID wrap limit is  
2147484146, limited by database postgres


followed by dozens of these:

2006-03-21 10:39:22 CST 2499 :LOG:  invalid server process ID -1

None of these are errors, but they don't seem like healthy messages  
nonetheless. I don't see much discussion of them in the archives. Are  
they cause for concern?


--
Thomas F. O'Connell
Database Architecture and Programming
Co-Founder
Sitening, LLC

http://www.sitening.com/
3004 B Poston Avenue
Nashville, TN 37203-1314
615-260-0005 (cell)
615-469-5150 (office)
615-469-5151 (fax)


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Troubling On-line Backup LOG messages

2006-03-21 Thread Thomas F. O'Connell


On Mar 21, 2006, at 11:54 AM, Tom Lane wrote:


Thomas F. O'Connell [EMAIL PROTECTED] writes:

2006-03-21 10:38:53 CST 1412 :LOG:  archive recovery complete
2006-03-21 10:38:53 CST 1412 :LOG:  could not truncate directory
pg_multixact/offsets: apparent wraparound
2006-03-21 10:38:53 CST 1412 :LOG:  could not truncate directory
pg_multixact/members: apparent wraparound


What file name(s) appear in those directories?  Do you continue to get
the same log messages during subsequent checkpoints?


pg_multixact/offsets/0001
pg_multixact/members/0002

Thus far, there haven't been any subsequent checkpoints. These  
occurred immediately after recovery of an on-line backup, and I  
haven't really done much with the new cluster because it just exists  
for proof of concept.


--
Thomas F. O'Connell
Database Architecture and Programming
Co-Founder
Sitening, LLC

http://www.sitening.com/
3004 B Poston Avenue
Nashville, TN 37203-1314
615-260-0005 (cell)
615-469-5150 (office)
615-469-5151 (fax)

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Autovacuum Daemon Disrupting dropdb?

2006-03-12 Thread Thomas F. O'Connell


On Mar 11, 2006, at 4:13 PM, Tom Lane wrote:


For a real solution, perhaps DROP DATABASE could somehow look to
determine if there's an autovac daemon active in the target database,
and if so send it a SIGINT and wait for it to go away.


In general, it also seems like a --force option or something similar  
would be reasonable for dropdb because the state of the database in  
terms of user activity wouldn't seem to matter a whole lot if the  
intent is to drop it.


--
Thomas F. O'Connell
Database Architecture and Programming
Co-Founder
Sitening, LLC

http://www.sitening.com/
3004 B Poston Avenue
Nashville, TN 37203-1314
615-260-0005 (cell)
615-469-5150 (office)
615-469-5151 (fax)

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Autovacuum Daemon Disrupting dropdb?

2006-03-11 Thread Thomas F. O'Connell
I administer a network where a postgres database on one machine is  
nightly dumped to another machine where it is restored (for  
verification purposes) once the dump completes. The process is roughly:


pg_dump remotedb
dropdb localdb
pg_restore remotedb.pgd

We recently upgraded the system to 8.1.x and enabled autovacuum and  
the dropdb command has recently begun failing periodically. Is this  
because the autovacuum daemon runs it technically runs as a user and  
can thus prevent dropping a database? There is no public application  
that accesses the database. I note that the autovacuum daemon  
requires a superuser_reserved_connections slot.


--
Thomas F. O'Connell
Database Architecture and Programming
Co-Founder
Sitening, LLC

http://www.sitening.com/
3004 B Poston Avenue
Nashville, TN 37203-1314
615-260-0005 (cell)
615-469-5150 (office)
615-469-5151 (fax)


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Autovacuum Daemon Disrupting dropdb?

2006-03-11 Thread Thomas F. O'Connell


On Mar 11, 2006, at 2:44 PM, Matthew T. O'Connor wrote:


Thomas F. O'Connell wrote:
I administer a network where a postgres database on one machine is  
nightly dumped to another machine where it is restored (for  
verification purposes) once the dump completes. The process is  
roughly:

pg_dump remotedb
dropdb localdb
pg_restore remotedb.pgd
We recently upgraded the system to 8.1.x and enabled autovacuum  
and the dropdb command has recently begun failing periodically. Is  
this because the autovacuum daemon technically runs as a  
user and can thus prevent dropping a database? There is no public  
application that accesses the database. I note that the autovacuum  
daemon requires a superuser_reserved_connections slot.


First off, are you sure it's autovacuum that is causing the failure?

The autovacuum connects to each database to look around and decide  
if any work should be done, so it's certainly possible that every  
once in a while, autovacuum just happens to be connected to the  
database you want to drop when you want to drop it.  With the  
integration of autovacuum in 8.1, you can now tell autovacuum to  
ignore tables, but I don't think there is a way to tell it to avoid  
a particular database, though that might be a reasonable feature addition.


I suppose you could instead:

connect to local postmaster
disable autovacuum
pg_dump remotedb
dropdb localdb
pg_restore remotedb.pgd
enable autovacuum

This isn't totally bulletproof, but assuming that autovacuum never  
really spends much time in the database to be dropped it should be  
reasonably safe.


I'm not positive, but there aren't many other suspects. Is there an  
easy way to disable autovacuum automatically? I'm sure I could edit  
postgresql.conf in place and reload or something.


For the short term, I'm just disabling it altogether on the server  
that holds the dump and does the restoration because performance is  
not really an issue.


--
Thomas F. O'Connell
Database Architecture and Programming
Co-Founder
Sitening, LLC

http://www.sitening.com/
3004 B Poston Avenue
Nashville, TN 37203-1314
615-260-0005 (cell)
615-469-5150 (office)
615-469-5151 (fax)

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] 'AS' column-alias beign ignored on outer select

2005-12-28 Thread Thomas F. O'Connell


On Dec 2, 2005, at 6:00 AM, Markus Wollny wrote:


Hi!

Is this the expected behaviour?

 select ID
  , USK_RATING AS USK
  from (
   select ID
, USK_RATING
   from MAIN_SONY_PS2_GAME
   where ID = 101400
   limit 1
   )
   as PRODUCT_LIST
 limit 1;

   id   | usk_rating
--------+------------
 101400 |
(1 row)

Note the column-header being labeled 'usk_rating', not 'usk'.  
Obviously

the 'AS' column alias of the outer select is being ignored in the
resultset.

 select ID
  , USK
  from (
   select ID
, USK_RATING AS USK
   from MAIN_SONY_PS2_GAME
   where ID = 101400
   limit 1
   )
   as PRODUCT_LIST
 limit 1;

   id   | usk
--------+-----
 101400 |
(1 row)

If the column alias is being declared in the subselect, the column  
alias

is working.

 select version();
   version
----------------------------------------------------------------------
 PostgreSQL 8.1.0 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2

Is this working as expected or is this a bug?



I have to say, this looks like a regression. I can't reproduce it in  
an 8.0.3 database. The interesting thing is that the bug seems to be  
related to the limit clause of the inner query. Hackers?


--
Thomas F. O'Connell
Database Architecture and Programming
Co-Founder
Sitening, LLC

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005 (cell)
615-469-5150 (office)
615-469-5151 (fax)

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] 'AS' column-alias beign ignored on outer select

2005-12-28 Thread Thomas F. O'Connell


On Dec 28, 2005, at 10:03 PM, Thomas F. O'Connell wrote:

[snip]

If the column alias is being declared in the subselect, the column  
alias

is working.

 select version();
   version
----------------------------------------------------------------------

 PostgreSQL 8.1.0 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2

Is this working as expected or is this a bug?


I have to say, this looks like a regression. I can't reproduce it  
in an 8.0.3 database. The interesting thing is that the bug seems  
to be related to the limit clause of the inner query. Hackers?


Whoops. Just saw Tom's reply. Ignore noise. Threaded mail must've  
missed the reply.


--
Thomas F. O'Connell
Database Architecture and Programming
Co-Founder
Sitening, LLC

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005 (cell)
615-469-5150 (office)
615-469-5151 (fax)

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] SELECT Generating Row Exclusive Locks?

2005-11-30 Thread Thomas F. O'Connell

I'm monitoring locks using this query:

SELECT pgsa.procpid, pgsa.current_query, pgsa.query_start,  
pgc.relname, pgl.mode, pgl.granted

FROM pg_catalog.pg_class pgc, pg_locks AS pgl, pg_stat_activity AS pgsa
WHERE pgl.pid = pgsa.procpid
AND current_query <> '<IDLE>'
AND pgl.relation = pgc.oid
ORDER BY pgsa.query_start DESC;

which was built as an extension of this information:

http://archives.postgresql.org/pgsql-novice/2004-08/msg00291.php

Interestingly, I'm seeing evidence that SELECTs are occasionally  
taking Row Exclusive locks. Should this surprise me?


PostgreSQL 8.0.3 on i686-pc-linux-gnu, compiled by GCC 2.95.4

--
Thomas F. O'Connell
Database Architecture and Programming
Co-Founder
Sitening, LLC

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005 (cell)
615-469-5150 (office)
615-469-5151 (fax)


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] SELECT Generating Row Exclusive Locks?

2005-11-30 Thread Thomas F. O'Connell


On Nov 30, 2005, at 9:22 PM, Thomas F. O'Connell wrote:


I'm monitoring locks using this query:

SELECT pgsa.procpid, pgsa.current_query, pgsa.query_start,  
pgc.relname, pgl.mode, pgl.granted
FROM pg_catalog.pg_class pgc, pg_locks AS pgl, pg_stat_activity AS  
pgsa

WHERE pgl.pid = pgsa.procpid
AND current_query <> '<IDLE>'
AND pgl.relation = pgc.oid
ORDER BY pgsa.query_start DESC;

which was built as an extension of this information:

http://archives.postgresql.org/pgsql-novice/2004-08/msg00291.php

Interestingly, I'm seeing evidence that SELECTs are occasionally  
taking Row Exclusive locks. Should this surprise me?


PostgreSQL 8.0.3 on i686-pc-linux-gnu, compiled by GCC 2.95.4


Actually, let me clarify/rephrase my question. This query as written  
doesn't necessarily seem to help me distinguish clearly between  
queries that are waiting for a lock and those that are holding a lock.


What I would expect to see during contentious periods in a given  
database would be a core of rows in pg_locks with granted = true and  
then a stable of additional rows with granted = false.


For instance, if a long SELECT were running against table_foo and an  
UPDATE arrived wanting to update table_foo, I would expect to see in  
pg_locks an entry corresponding to the SELECT with granted = true and  
an entry corresponding to the UPDATE with granted = false.


In reality, I often see hundreds of rows in pg_locks and am lucky  
ever to see granted = false among them. And in the rows that I do  
see, I occasionally see a SELECT corresponding to  
pg_stat_activity.current_query with the same pid as a granted Row  
Exclusive lock in pg_locks.


I feel like I must be missing something in my interpretation, but I'm  
not sure what.
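
For completeness, the flip side of the query above, restricted to locks that have not been granted, is a rough sketch like this:

SELECT pgsa.procpid, pgsa.current_query, pgsa.query_start,
       pgc.relname, pgl.mode
FROM pg_catalog.pg_class pgc, pg_locks AS pgl, pg_stat_activity AS pgsa
WHERE pgl.pid = pgsa.procpid
AND pgl.relation = pgc.oid
AND NOT pgl.granted
ORDER BY pgsa.query_start;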


--
Thomas F. O'Connell
Database Architecture and Programming
Co-Founder
Sitening, LLC

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005 (cell)
615-469-5150 (office)
615-469-5151 (fax)

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] SELECT Generating Row Exclusive Locks?

2005-11-30 Thread Thomas F. O'Connell


On Nov 30, 2005, at 10:52 PM, Tom Lane wrote:


Thomas F. O'Connell [EMAIL PROTECTED] writes:

For instance, if a long SELECT were running against table_foo and an
UPDATE arrived wanting to update table_foo, I would expect to see in
pg_locks an entry corresponding to the SELECT with granted = true and
an entry corresponding to the UPDATE with granted = false.


Why would you expect to see that exactly?  SELECTs don't block  
UPDATEs.


Mm. I must've been projecting my notion of a problem onto one that  
wasn't there, reading (and not thinking) Row Exclusive instead of  
Access Exclusive for conflicts. Duh.


I guess I'm still somewhat puzzled by the original statement of the  
question, then. Why does that particular view of locks occasionally  
tie a SELECT to a granted Row Exclusive lock? I recognize that the  
pid in pg_locks can be the pid of the server process holding or  
awaiting the lock, but I'm seeing granted = true on these, which  
implies that the server process corresponding to the SELECT is  
holding a Row Exclusive, doesn't it?


--
Thomas F. O'Connell
Database Architecture and Programming
Co-Founder
Sitening, LLC

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005 (cell)
615-469-5150 (office)
615-469-5151 (fax)

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] SELECT Generating Row Exclusive Locks?

2005-11-30 Thread Thomas F. O'Connell


On Nov 30, 2005, at 11:24 PM, Tom Lane wrote:


Thomas F. O'Connell [EMAIL PROTECTED] writes:

I guess I'm still somewhat puzzled by the original statement of the
question, then. Why does that particular view of locks occasionally
tie a SELECT to a granted Row Exclusive lock?


You sure it's not left over from an update command earlier in the
same transaction?


Pretty sure, unless the query I posted earlier can cause the display  
of leftover commands...


--
Thomas F. O'Connell
Database Architecture and Programming
Co-Founder
Sitening, LLC

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005 (cell)
615-469-5150 (office)
615-469-5151 (fax)

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Index Administration: pg_index vs. pg_get_indexdef()

2005-11-22 Thread Thomas F. O'Connell
In an old thread (http://archives.postgresql.org/pgsql-admin/2004-01/msg00271.php), Tom Lane suggested that it would be unreasonable to  
use pg_index to reconstruct (expressional) indexes (in 7.4). The  
suggested alternative was to use pg_get_indexdef().


I administer a postgres 8.0.x database with thousands of inherited  
tables, each with what is supposed to be a consistently maintained  
set of indexes. As the application programmers change (and have  
changed) the DDL specification over time as it affects the creation  
of new tables, however, it is the case that some indexes have gotten  
out of sync or duplicated (in terms of the specified columns).


For the purposes of developing an index administration toolkit whose  
intent is to discover and remove duplicate indexes and to add missing  
indexes to pre-existing tables, it seems like it's easier to write an  
index verification process based on the contents of pg_index (e.g.,  
in order to compare the columns referenced by indexes on a table to  
determine whether any of them seem to be duplicative) than to have to  
parse the output of pg_get_indexdef(). Am I off base in this thinking?
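
For what it's worth, the comparison I have in mind is a rough sketch like this (it ignores indexprs and indpred entirely, so it only catches duplicates on plain column lists):

SELECT a.indrelid::regclass  AS table_name,
       a.indexrelid::regclass AS index_a,
       b.indexrelid::regclass AS index_b
FROM pg_index a
JOIN pg_index b
  ON a.indrelid = b.indrelid       -- same table
 AND a.indkey = b.indkey           -- same column list
 AND a.indexrelid < b.indexrelid;  -- report each pair once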


P.S. Regardless of the wisdom of using pg_index for such purposes,  
the OP in the old thread raised what I think is a good question: why  
are techniques for accessing int2vector nowhere documented if the  
type itself makes its way into very user-visible documentation and  
catalogs/views?


--
Thomas F. O'Connell
Database Architecture and Programming
Co-Founder
Sitening, LLC

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-469-5150
615-469-5151 (fax)


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Index Administration: pg_index vs. pg_get_indexdef()

2005-11-22 Thread Thomas F. O'Connell


On Nov 22, 2005, at 10:56 PM, Tom Lane wrote:


Thomas F. O'Connell [EMAIL PROTECTED] writes:

In an old thread (http://archives.postgresql.org/pgsql-admin/2004-01/msg00271.php), Tom Lane suggested that it would be unreasonable to
use pg_index to reconstruct (expressional) indexes (in 7.4). The
suggested alternative was to use pg_get_indexdef().


IIRC, the point I was trying to make was that making client code  
try to

interpret the contents of pg_index.indexprs or pg_index.indpred is a
losing proposition.  If you feel that you'd rather read the other  
fields

of pg_index for yourself, I won't argue with you.


Yeah, I took a look at pg_index.indexprs and have already Perled up a  
parser for the pg_get_indexdef() output... :)


Out of curiosity (without much knowledge of how pg_get_indexdef()  
generates its output), would it be difficult to allow the view to  
have a more useful format? What is the intention of providing an  
expression tree? How could that be used?


--
Thomas F. O'Connell
Database Architecture and Programming
Co-Founder
Sitening, LLC

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-469-5150
615-469-5151 (fax)

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Performance of a view

2005-11-17 Thread Thomas F. O'Connell


On Nov 14, 2005, at 7:40 PM, John McCawley wrote:


I have a view which is defined as follows:

//-
SELECT tbl_claim.claim_id, count(tbl_invoice.invoice_id) AS count,  
min(tbl_invoice.invoicedate) AS invoicedate

  FROM tbl_claim
  LEFT JOIN tbl_invoice ON tbl_claim.claim_id =  
tbl_invoice.claim_id AND tbl_invoice.active = 1

 GROUP BY tbl_claim.claim_id;
//-


snip

I roughly understand what is happening...in the first query, the  
dataset is being knocked down to one row, then somehow the view is  
being constructed using only that subset of the claim table.  In  
the second query, the view is being constructed from the entire  
dataset which is hundreds of thousands of rows, and thus is much  
slower.


My question is how would I go about obtaining the behavior from the  
faster query in the slower query?  I have switched the order of the  
tables, and tried many different permutations of the query, but no  
matter what I do, it seems that unless I specifically hard-code a  
claim_id filter on the claim_id, I am forced to run through every  
record.


Thoughts?


I'd be curious to see what would happen if you added claimnum as a  
field in your view. I don't have a complete understanding of the  
postgres internals in terms of how it is able to push outer clauses  
down into its views, but I think it might be able to optimize in  
that fashion if it is able to add a WHERE clause internally to the  
view, which it can't do in the case of claimnum since it doesn't  
exist in the view.
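
Concretely, something like the following (the view name here is made up, and I'm assuming claimnum lives on tbl_claim):

CREATE OR REPLACE VIEW vw_claim_invoice AS
SELECT tbl_claim.claim_id, tbl_claim.claimnum,
       count(tbl_invoice.invoice_id) AS count,
       min(tbl_invoice.invoicedate) AS invoicedate
  FROM tbl_claim
  LEFT JOIN tbl_invoice ON tbl_claim.claim_id = tbl_invoice.claim_id
                       AND tbl_invoice.active = 1
 GROUP BY tbl_claim.claim_id, tbl_claim.claimnum;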


--
Thomas F. O'Connell
Database Architecture and Programming
Co-Founder
Sitening, LLC

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-469-5150
615-469-5151 (fax)



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Subqueries

2005-11-06 Thread Thomas F. O'Connell


On Nov 6, 2005, at 3:19 PM, Bob Pawley wrote:

Last week I received help from the list in inserting a serial row  
of one table (process) into a row in a table called specification.


I was able to expand that to include inserting the same information  
into a table called pipe.

---
create or replace function base() returns trigger as $$
begin
insert into specification (fluid_id) values  
(new.fluid_id);

insert into pipe (fluid_id) values (new.fluid_id);
return null;
end;
$$ language plpgsql;

create trigger trig1 after insert on process
for each row execute procedure base();

This works well.

Now I want to modify the program so that only those rows from  
process that have ip in the column ip_op_reactor are inserted into  
pipe.


Following is my best result after studying every piece of  
documentation available to me.


--
create or replace function base() returns trigger as $$
begin
insert into specification (fluid_id) values  
(new.fluid_id);


select fluid_id as fi from process pr;
select ip_op_reactor as iop from pr ;
select fluid_id as fi from pipe pi
(select fi
from pr where iop = 'ip',

insert into pi (fi) values (new.fi));

return null;
end;
$$ language plpgsql;

create trigger trig1 after insert on process
for each row execute procedure base();
-
This is the error I get.
--
ERROR:  SELECT query has no destination for result data
HINT:  If you want to discard the results, use PERFORM instead.
CONTEXT:  PL/pgSQL function base line 4 at SQL statement

I would very much appreciate any help as to where I am going wrong.

Thanks


Might be helpful to re-read the chapter on Basic Statements in PL/pgSQL:

http://www.postgresql.org/docs/8.0/static/plpgsql-statements.html

For SELECTs in PL/pgSQL, you either need a target (via INTO) or you  
need to use PERFORM instead.


That's what the HINT is about. It thinks because you're not  
specifying a target for your SELECTs that you might want to discard  
the results.


If (as I assume) you don't, you'll probably also want to declare  
variables to serve as targets for the results of your SELECTs.
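
That said, I don't think you need those intermediate SELECTs at all here. A minimal sketch of what I think you're after (assuming ip_op_reactor is a column of process, so it's available inside the trigger as new.ip_op_reactor):

create or replace function base() returns trigger as $$
begin
    insert into specification (fluid_id) values (new.fluid_id);
    -- only rows whose ip_op_reactor is 'ip' get copied into pipe
    if new.ip_op_reactor = 'ip' then
        insert into pipe (fluid_id) values (new.fluid_id);
    end if;
    return null;
end;
$$ language plpgsql;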


--
Thomas F. O'Connell
Database Architecture and Programming
Co-Founder
Sitening, LLC

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-469-5150
615-469-5151 (fax)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


[GENERAL] Lock Modes (Documentation)

2005-11-02 Thread Thomas F. O'Connell
I thought about posting to pgsql-docs, but since this might require  
comment from developers, I thought -general might be a better  
starting point.


Anyway, I've occasionally run into monitoring situations where it  
would be immediately helpful to know the built-in SQL statements that  
generate given table-lock modes.


For instance, doesn't UPDATE implicitly mean that a ROW SHARE  
lock will be taken on the referenced table if there are foreign keys  
involved (at least in versions prior to 8.1)? Are there any other scenarios where a given  
SQL command might take a lock of one of these forms as a result of  
what it does under the hood? Maybe UPDATE is the only one since it's  
implicitly a SELECT, DELETE, and INSERT all rolled into one.


I'd love to see 12.3 (http://www.postgresql.org/docs/8.0/static/explicit-locking.html) document this more thoroughly, but I don't  
know enough about the underlying locking requirements of each step of  
each SQL command to know when locks might implicitly be acquired.  
Even if UPDATE is the only special case, it seems like it'd be worth  
mentioning.


--
Thomas F. O'Connell
Database Architecture and Programming
Co-Founder
Sitening, LLC

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-469-5150
615-469-5151 (fax)


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Why is this function wrong

2005-10-24 Thread Thomas F. O'Connell


On Oct 24, 2005, at 11:27 AM, Johan Wehtje wrote:

This is probably obvious but I am not seeing what is going wrong  
with this function definition:


CREATE OR REPLACE FUNCTION public.cproc_hli (bgtemp NUMERIC,  
humidity NUMERIC, wspeed NUMERIC) RETURNS NUMERIC AS

IF ($1 < 25)
THEN
SELECT (10.66 * (0.28 * $2)+(1.3 * $1) - $3)
ELSE
SELECT (8.62 * (0.38 * $2) + (1.55 * $1) - (0.5 * $3) + (exp(- $3 +  
2.4)))

END IF
LANGUAGE 'plpgsql' VOLATILE RETURNS NULL ON NULL INPUT SECURITY  
INVOKER;


ERROR:  syntax error at or near IF at character 119
LINE 2: IF ($1 < 25)

I have tried with dollar quoting around the function body, changing  
the language to sql and using CASE ... WHEN instead of IF, tried  
brackets and no brackets around the Boolean expression, and a  
number of other variations, as well as rereading the Documentation.


I have also tried not having aliases/names for the arguments.

In every case I always get the syntax error directly after AS.

Can somebody point out what is probably obvious?

Cheers
Johan Wehtje


This version will compile, but it won't do anything useful:

CREATE OR REPLACE FUNCTION public.cproc_hli (bgtemp NUMERIC,  
humidity NUMERIC, wspeed NUMERIC) RETURNS NUMERIC AS $$

BEGIN
IF (bgtemp < 25) THEN
PERFORM (10.66 * (0.28 * humidity)+(1.3 * bgtemp) - wspeed);
ELSE
PERFORM (8.62 * (0.38 * humidity) + (1.55 * bgtemp) - (0.5 * wspeed)  
+ (exp(- wspeed + 2.4)));

END IF;
END;
$$ LANGUAGE 'plpgsql' VOLATILE RETURNS NULL ON NULL INPUT SECURITY  
INVOKER;


You were missing some basic syntactic requirements, including a BEGIN/ 
END block, as well as several semicolons. You will also need to  
SELECT into a variable if you want anything useful to happen with the  
results. I've replaced your SELECTs with PERFORMs to recreate your  
function as originally written.
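
If the point of the function is to hand the computed value back to the caller, a sketch that RETURNs the expressions directly (keeping your arithmetic exactly as written, and assuming the intended comparison is bgtemp < 25) looks more like this:

CREATE OR REPLACE FUNCTION public.cproc_hli (bgtemp NUMERIC,  
humidity NUMERIC, wspeed NUMERIC) RETURNS NUMERIC AS $$
BEGIN
    IF (bgtemp < 25) THEN
        RETURN 10.66 * (0.28 * humidity) + (1.3 * bgtemp) - wspeed;
    ELSE
        RETURN 8.62 * (0.38 * humidity) + (1.55 * bgtemp) - (0.5 * wspeed) + exp(- wspeed + 2.4);
    END IF;
END;
$$ LANGUAGE 'plpgsql' VOLATILE RETURNS NULL ON NULL INPUT SECURITY INVOKER;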


I recommend a closer reading of the chapter on PL/pgSQL:

http://www.postgresql.org/docs/8.0/static/plpgsql.html

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Open Source Solutions. Optimized Web Development.

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-469-5150
615-469-5151 (fax)

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Oracle buys Innobase

2005-10-08 Thread Thomas F. O'Connell


On Oct 8, 2005, at 6:40 PM, Mitch Pirtle wrote:


On 10/8/05, Mitch Pirtle [EMAIL PROTECTED] wrote:


This basically means that InnoDB table support must come out of the
commercial MySQL.


For that matter, I'm not sure they can release MySQL under a
commercial license while incorporating 3rd party GPL works, without
the express permission of the copyright holders for those included
works.

Whatever deal they used to have just got changed, that's for sure.

-- Mitch


All of which seems to beg the question: why didn't MySQL buy  
Innobase themselves? As far as I've read, the terms of the  
transaction were not disclosed. I guess it's possible that MySQL  
didn't have the financial reach to pull off the deal.


--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-469-5150
615-469-5151 (fax)
---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] How to uninstall Postgres

2005-10-06 Thread Thomas F. O'Connell


On Oct 6, 2005, at 1:14 AM, suresh ramasamy wrote:


On 10/6/05, Ly Lam Ngoc Bich [EMAIL PROTECTED] wrote:


I have installed PostgresSQL  with postgresql-8.0.3.tar.gz . My
computer's OS is Linux Fedora 3 . Please show me to the way to  
uninstall

PostgresSQL.


hi,

have you customized your installation as instructed in the postgresql
manual page?
http://www.postgresql.org/docs/8.0/interactive/install-procedure.html
any way you have goto your source directory where you have started
your installation and give the command
# make clean
# make distclean
and remove the directory.
It should work in most of the cases.

regards
suresh


There's actually a make uninstall rule, too.

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-469-5150
615-469-5151 (fax)



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] pg_dump output mode

2005-10-03 Thread Thomas F. O'Connell


On Oct 3, 2005, at 1:05 AM, Jeff Davis wrote:


What pg_dump backup modes do people usually use for a backup of an
entire database cluster? I usually use plaintext and pipe it through
gzip, however, that doesn't allow you to restore single objects from a
complete backup.

Is there a way to back up the entire cluster using the t or c  
output

mode for pg_dump? What is the difference between t and c?

Regards,
Jeff Davis


Come to think of it, I don't know a way to dump an entire cluster in  
the custom format, but that might be because pg_restore is designed  
to restore a database rather than a cluster.


I think the main practical difference is that t produces a tar  
archive while c uses pg_dump's custom format, which is compressed  
by default.


--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-469-5150
615-469-5151 (fax)
---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Replication

2005-09-19 Thread Thomas F. O'Connell

On Sep 19, 2005, at 7:10 PM, Bruce Momjian wrote:


Added to TODO:

* Allow WAL traffic to be steamed to another server for stand-by
  replication


steamed or streamed?

:)

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-469-5150
615-469-5151 (fax)
---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] pg_autovacuum not doing anything

2005-09-11 Thread Thomas F. O'Connell

How do you know that it is not vacuuming or analyzing anything?

And which version of postgres did you build/acquire pg_autovacuum from?

It seems that in post-8.0 versions, there is almost no output by  
default. You could try increasing the debug output using -d 1 or -d 2  
on the command line to verify that any activity is occurring.


--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-469-5150
615-469-5151 (fax)

On Sep 11, 2005, at 4:26 AM, Sim Zacks wrote:

I have pg_autovacuum running on my gentoo server, the same server  
with the

postgresql database.
ps shows that it is running and I have it start automatically with the
system. It is not vacuuming or analyzing the tables though.
Am I doing something wrong? Is there a better way to do this?

Thank You
Sim



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Insert Ignore or something similar...

2005-09-05 Thread Thomas F. O'Connell
I don't think any such behavior exists in PostgreSQL, and based on a reading of the behavior in MySQL, I can't imagine it ever existing considering the preference of PostgreSQL developers for correct (and sane) behavior. INSERT IGNORE seems like a foot-cannon...

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-469-5150
615-469-5151 (fax)

On Sep 5, 2005, at 5:20 PM, Cristian Prieto wrote:

Hello everybody,

I've just wondered if there is any way to implement an INSERT IGNORE in PostgreSQL. I know I could catch an exception in PL/pgSQL and just handle it in the right way, but I would need to write a SP for the task. I've been looking into the documentation and I found no similar INSERT IGNORE.

Thanks a lot!

Re: [GENERAL] Insert Ignore or something similar...

2005-09-05 Thread Thomas F. O'Connell


On Sep 5, 2005, at 10:51 PM, Alvaro Herrera wrote:


On Mon, Sep 05, 2005 at 10:35:49PM -0500, Thomas F. O'Connell wrote:


I don't think any such behavior exists in PostgreSQL, and based on a
reading of the behavior in MySQL, I can't imagine it ever existing
considering the preference of PostgreSQL developers for correct (and
sane) behavior. INSERT IGNORE seems like a foot-cannon...



What is it supposed to do?  If ignoring errors is the behavior  
Cristian

wants, it's possible to do with savepoints (or PL/pgSQL exceptions).


Yeah, I think savepoints might be his best bet. It sounded like he  
was explicitly trying to avoid PL/pgSQL.


Cristian, here's a pointer to the SAVEPOINT docs:

http://www.postgresql.org/docs/8.0/static/sql-savepoint.html

I was mostly remarking that I couldn't envision a flipped switch for  
incorrect/non-standard INSERT behavior.
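
For what it's worth, the savepoint flavor of "ignore" looks roughly like this (a sketch with a hypothetical table t that has a unique constraint; the ROLLBACK TO is issued by the client after it sees the duplicate-key error):

BEGIN;
SAVEPOINT before_insert;
INSERT INTO t (id) VALUES (1);
-- if the INSERT fails, roll back just the failed statement...
ROLLBACK TO SAVEPOINT before_insert;
-- ...and the rest of the transaction can proceed normally
COMMIT;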


--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-469-5150
615-469-5151 (fax)
---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] ORDER BY time consuming

2005-08-23 Thread Thomas F. O'Connell
You're also free to set sort_mem (7.4.x) or work_mem (8.0.x) on a per  
session basis, so you could try experimenting with raising the value  
of those settings during sessions in which your query is running.
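
For example (the values are arbitrary; in these releases the setting is expressed in kilobytes):

SET work_mem = 65536;  -- 8.0.x
SET sort_mem = 65536;  -- 7.4.x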


--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-469-5150
615-469-5151 (fax)

On Aug 21, 2005, at 12:01 PM, Jim C. Nasby wrote:


On Sun, Aug 21, 2005 at 12:04:01PM +0200, Ben-Nes Yonatan wrote:


Hi All,

I got a table with about 4.5 millions rows in it which is  
connected to
another table with about 60 millions rows which are used as  
keywords for

searching.

I succeeded in creating fast queries on the first table that find a  
row at
the first table which is connected to up to 4 diffrent keywords at  
the
second table and LIMIT the result to 12 (I want to allow the  
surfers of
the site to press back and next to see more products so I'll make  
it with

OFFSET).

I want to be able to order my result by a specific column but when I
insert ORDER BY into the query (and any other query that I tried) it
becomes extremly slow, what can I do to solve this problem?



Your question is too generic to answer specifically, but I suspect  
that

if you use your un-ordered query as a subquery in the FROM clause and
then order that it will work well. IE:

SELECT *
FROM (SELECT ...) a
ORDER BY f1, f2, f3
--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Softwarehttp://pervasive.com512-569-9461


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] How to implement table caching

2005-08-15 Thread Thomas F. O'Connell

Andrus,

You might consider something like materialized views:

http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html

Whether table caching is a good idea depends completely on the  
demands of your application.


--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-469-5150
615-469-5151 (fax)

On Aug 14, 2005, at 1:12 PM, Andrus Moor wrote:


To increase performance, I'm thinking about storing copies of less
frequently changed tables in a client computer.
At startup client application compares last change times and  
downloads newer

tables from server.

CREATE TABLE lastchange (
  tablename CHAR(8) PRIMARY KEY,
  lastchange timestamp without time zone );

INSERT INTO lastchange (tablename) values ('mytable1');

INSERT INTO lastchange (tablename) values ('mytablen');

CREATE OR REPLACE FUNCTION setlastchange() RETURNS trigger
AS $$BEGIN
UPDATE lastchange SET lastchange=now() WHERE tablename=TG_RELNAME;
RETURN NULL;
END$$  LANGUAGE plpgsql STRICT;

CREATE TRIGGER mytable1_trig BEFORE INSERT OR UPDATE OR DELETE ON  
mytable1

   EXECUTE PROCEDURE setlastchange();

CREATE TRIGGER mytablen_trig BEFORE INSERT OR UPDATE OR DELETE ON  
mytablen

   EXECUTE PROCEDURE setlastchange();

Is table caching good idea?
Is this best way to implement table caching ?

Andrus.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] initskript after db start; locks on transactions

2005-08-05 Thread Thomas F. O'Connell


On Aug 3, 2005, at 4:33 PM, Thomas Chille wrote:


Hi!

I have have two questions:

1.
What is the best approach to trigger a service script which will clean
up something in the db after every db (re-)start? Has PG its own
mechanism for such things or have i to use my /etc/init.d/postgresql
script?


I don't think there's anything built in that will trigger a service  
script. I'm pretty sure you'll want to add to or supplement the  
existing init script.



2.
Some time ago I read something about locks on transactions. Is this only an
internal thing or can I set them on my own, and if yes, for what? With
the LOCK command I can only lock tables, or?


You can use any lock mode specified:

http://www.postgresql.org/docs/8.0/static/sql-lock.html

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-469-5150
615-469-5151 (fax)
---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Upgrading from 7.1

2005-07-27 Thread Thomas F. O'Connell


On Jul 27, 2005, at 3:43 PM, Jonathan Villa wrote:


My approach will be/has been as follows:

I've used pg_dump of 7.4 to do

pgsql-7.4 $pg_dump --schema-only dbName  schema.sql

Aside from some tweaking, the import seemed to work fine.

Now, I'm attempting the following

pgsql-7.4 $ pg_dump --data-only --inserts dbName  data.sql

and when I attempt an import, I get

ERROR:  insert or update on table doc_data violates foreign key  
constraint

docdata_languageid_fk
DETAIL:  Key (language)=(1) is not present in table  
supported_languages.


Regarding the violations of the foreign key contraints, I've been  
able to

export/import from 7.1 to 7.1 ok.

When I was doing the schema.sql import, I did receive a lot of  
messages regarding
implicit indexes being created?  Is this something I should be  
worried about?


Reason I'm not moving to 8.0 is because the application I'm trying  
to get upgraded

does not give it its seal of approval.


-Jonathan


Jonathan,

The implicit indexes are no big deal; they're just a sign of indexes  
getting created by PRIMARY KEYs on your tables.


I'm not sure why you're getting errors. Is there a reason you did the  
schema dump separately from the data dump rather than a monolithic  
dump/restore?


Once you get your data import working, you might want to check out  
contrib/adddepend, though, since you're coming from a pre-7.3 database.


--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005
---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] ROW SHARE vs. ACCESS EXCLUSIVE; UPDATE vs. SELECT ... FOR UPDATE

2005-07-21 Thread Thomas F. O'Connell
I encountered an issue today in a system I'm monitoring where a DDL statement that occurred during a data import caused a pileup. I just want to see if there's anything to do other than increase available resources and wait it out or break referential integrity.

Here's the basic setup:

CREATE TABLE referred (
    id int PRIMARY KEY
)

CREATE TABLE parent (
    id int PRIMARY KEY,
    referred_id int NOT NULL REFERENCES referred( id )
)

CREATE TABLE child1 (
    foo text
) INHERITS (
    parent
)

CREATE TABLE child2 (
    bar text
) INHERITS (
    parent
)

So then what happens is that during an import a series of statements like UPDATE child1 SET foo = 'foo' WHERE ...

This UPDATE takes a while.

In the meantime, an ALTER TABLE is run on child2 like ALTER TABLE child2 ADD CONSTRAINT child2_referred_fkey FOREIGN KEY ( referred_id ) REFERENCES referred ( id ), which happens because the children don't inherit constraints, and new children are occasionally created during production.

So despite the fact that the UPDATE doesn't actually include referred_id, it still takes a ROW SHARE lock, which blocks the ACCESS EXCLUSIVE taken by the DDL.

I wouldn't expect the SELECT ... FOR UPDATE of referential integrity to be necessary since the UPDATE isn't affecting the column with the foreign key. Is there any value to be gained by making these foreign keys DEFERRABLE? Or would the ACCESS EXCLUSIVE still get blocked in the queueing process?

Right now, it's easiest just to break the referential integrity between the children and referred because there's not much risk of data loss or lack of integrity in this particular application at that point.

I know that shared row locks are coming in 8.1, and I suspect they will alleviate this particular problem, but I'm still curious to know more about why/whether ROW SHARE is being acquired when a foreign key shouldn't have to be checked.

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

Re: [GENERAL] ERROR: could not open relation

2005-07-15 Thread Thomas F. O'Connell
One final final question: my suspicion is no, but I just want to ask:  
this would not affect all inherited tables with bgwriter, would it,  
in scenarios where a persistent inherited table gets dropped while a  
parent table is being queried? Could this result in a similar  
scheduling conflict for bgwriter? I'm under the impression that this  
is mostly an issue with the implementation of temp tables and the  
planner, but I'd like confirmation from folks who can read the code  
more easily...


--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

On Jul 14, 2005, at 4:44 PM, Thomas F. O'Connell wrote:


From this thread, these two bits about PostgreSQL stand out:

I have an old note to myself that persistent write errors could  
clog

the bgwriter, because I was worried that after an error it would
stupidly try to write the same buffer again instead of trying to make
progress elsewhere.  (CVS tip might be better about this, I'm not  
sure.)

A dirty buffer for a file that doesn't exist anymore would certainly
qualify as a persistent failure.

and

Hmm ... a SELECT from one of the actual tables would then scan the
temp tables too, no?

Thinking about this, I seem to recall that we had agreed to make the
planner ignore temp tables of other backends when expanding an
inheritance list --- but I don't see anything in the code implementing
that, so it evidently didn't get done yet.

I don't immediately see TODO items corresponding to these. Should  
there be some? Or do these qualify as bugs and should they be  
submitted to that queue?


Thanks again to all developers and community folk who lent insight  
into this error -- diagnosis and recovery (which was, thankfully,  
virtually non-existent).


--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005


---(end of  
broadcast)---

TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that  
your

  message can get through to the mailing list cleanly




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] ERROR: could not open relation

2005-07-14 Thread Thomas F. O'Connell
The oid in question does not correspond to a relfilenode, and  
oid2name -o 94144936 doesn't return anything when run against the  
database in question.
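
For the record, the pg_class check was essentially this (give or take):

SELECT relname, relfilenode
FROM pg_class
WHERE oid = 94144936 OR relfilenode = 94144936;

and it comes back empty.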


Could this be related to temp tables? We use a lot of them in data  
imports, and this was a point of discussion on IRC.


Having a limited understanding of postgres internals, I apologize if  
any of the following sound like nonsensically wild hairs:


Does VACUUM (under the auspices of pg_autovacuum) attempt to vacuum  
temp tables, which are otherwise not visible outside of a given session?


Does bgwriter operate on temp tables, and could there exist an edge  
condition in which bgwriter might have scheduled a write to disk for  
a file corresponding to a temp table that was removed by sudden  
termination of the session in which the temp table existed such that  
the file was removed?


One of the puzzling things to me, for instance, is that this error  
persisted, so we're wondering if maybe bgwriter refused to do any  
more writing because the thing it was scheduled to write ceased to  
exist without telling it.


In the aftermath, a pg_dump (combined with pg_restore for  
verification) completed successfully. Do I still have cause for concern?


--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

On Jul 14, 2005, at 7:57 AM, Tom Lane wrote:


Thomas F. O'Connell [EMAIL PROTECTED] writes:


Anyway, if I do a lookup by oid for 94144936 in pg_class, I don't see
it. And, clearly, it's not in $PGDATA/base/32019395.



You should be looking at relfilenode.  See
http://www.postgresql.org/docs/8.0/static/storage.html
and/or use oid2name to figure out what table is being complained of.

regards, tom lane



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] ERROR: could not open relation

2005-07-14 Thread Thomas F. O'Connell
Sorry, I didn't have the evidence about the bgwriter before. It was  
based on conjecture on IRC last night and newly gathered evidence  
from this morning.


Here's a list of current postgres processes on the box.

postgres  1186  2.8  5.0 437812 417624 ? SJul13  22:37  
postgres: writer process
postgres  1187  0.3  0.0  5940 2688 ?SJul13   2:54  
postgres: stats buffer process
postgres  1188  3.1  0.1 13456 8856 ?SJul13  25:16  
postgres: stats collector process


My assumption is that it's typically the case that these three  
processes generally get allocated sequential pids when postgres starts.


In the postgres log, we see these two types of errors, which were the  
only ones that did not report an IP address:


2005-07-12 01:53:31 CDT 13390 :LOG:  statistics buffer is full
2005-07-13 17:44:51 CDT 13389 :ERROR:  could not open relation  
1663/32019395/94144936: No such file or directory


So if we assume that pid 13390 referred to the stats collector from  
yesterday, then presumably 13389 was the bgwriter.


Unfortunately, this is a system where the interloper is superuser  
(and, yes, changing this has been a TODO). But even so, I need help  
understanding how one backend could access the temp table of another.  
Which is what brings me to vacuum or some other system process as a  
culprit.


Recognizing that the application code will execute as superuser in  
postgres, here is what is happening in a session:


Several temporary tables (some of which inherit from actual tables)  
are constructed.

Data is loaded in.
If the data includes updates, in the same session, a VACUUM is  
performed, else an ANALYZE is performed.


So we know these things:

1. This import process was running.
2. It had started the vacuum, which occurs in the same session as  
temp tables that inherit from the table being vacuumed.
3. bgwriter reported an error about a missing relation file (I guess  
this is a strong suspicion more than knowledge, but we strongly  
suspect).


So could this be somehow related to the fact that VACUUM, as a result  
of the inheritance relationship in the temp tables, is explicitly  
attempting to access them?


--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

On Jul 14, 2005, at 11:07 AM, Tom Lane wrote:


Do you have some evidence that the bgwriter was what was reporting the
error?  You didn't say that before.

The bgwriter only works on dirty shared buffers, so the only way this
could be happening is if a page of a temp table had gotten loaded into
a shared buffer, which isn't supposed to happen really.  Is it  
possible

that you had some backend deliberately trying to read a temp table
created by another backend?  (You don't have to assume that the
interloper tried to modify the table; a mere SELECT could have created
the dirty-buffer condition due to hint-bit update.  You do have to
assume that the interloper was superuser, though, else permissions
would have stopped him from accessing someone else's temp table.)

regards, tom lane



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] ERROR: could not open relation

2005-07-14 Thread Thomas F. O'Connell


On Jul 14, 2005, at 12:51 PM, Tom Lane wrote:


Thomas F. O'Connell [EMAIL PROTECTED] writes:


Unfortunately, this is a system where the interloper is superuser
(and, yes, changing this has been a TODO). But even so, I need help
understanding how one backend could access the temp table of another.


You'd have to do it pretty explicitly:

select * from pg_temp_NNN.foo ...

but it's certainly possible.  I wouldn't expect any application to try
this sort of thing, but if someone was manually poking around on the
box, they might have been tempted to do it.


I can almost guarantee this is not the cause of the problem.


Several temporary tables (some of which inherit from actual tables)
are constructed.


Hmm ... a SELECT from one of the actual tables would then scan the
temp tables too, no?

Thinking about this, I seem to recall that we had agreed to make the
planner ignore temp tables of other backends when expanding an
inheritance list --- but I don't see anything in the code implementing
that, so it evidently didn't get done yet.


In which case, my guess is that we either need to disconnect the  
temporary tables and not use inheritance or revert to a version of  
the application that does not use temporary tables at all. Otherwise,  
there's a risk of any query on a parent of the temp tables not  
restricted by ONLY causing this to occur again, no?
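
(For illustration, a minimal sketch of the hazard, using hypothetical
names rather than our actual schema:

  CREATE TABLE parent (id integer);
  CREATE TEMP TABLE kid () INHERITS (parent);
  INSERT INTO kid VALUES (1);

  SELECT count(*) FROM parent;       -- expands the inheritance list, so it also scans kid
  SELECT count(*) FROM ONLY parent;  -- touches only the parent itself

Any session issuing the first form against the parent can end up reading
another backend's temp child.)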


I guess we've been dodging bullets this whole time and were affected  
by two issues in postgres simultaneously: 1) bgwriter clogging and 2)  
inheritance ignorance of other backend temp tables.


Nice perfect storm, eh? An unrestricted (e.g., non-ONLY) query run  
against a _parent_ of a temporary table. Too bad it puts postgres in  
an unusable state...


Thanks to everyone for assistance in the sleuthing process.

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005
---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] ERROR: could not open relation

2005-07-14 Thread Thomas F. O'Connell
So my first instinct was to avoid use of temp tables in this scenario  
altogether, but now I'm thinking all I might need to do is unhook the  
temp tables from inheritance.


But I just want to raise a basic reliability issue, raised in the  
nearby Autovacuum loose ends thread, before I conclude that  
this approach is safe enough to prevent any more bgwriter errors:  
does pg_autovacuum as currently written in contrib vacuum temp  
tables, and, in 8.0, is this then able (however unlikely) to cause  
the sort of error I encountered yesterday? Or was that thread only  
talking about the new integrated version of the code as far as access  
to temp tables are concerned?
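
(As a point of reference, a rough sketch of how to enumerate temporary
relations cluster-wide from the catalogs -- just an illustration of what
"temp tables" means here, not anything pg_autovacuum itself runs:

  SELECT n.nspname, c.relname
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
  WHERE n.nspname LIKE 'pg_temp%'
  AND c.relkind = 'r';

Each backend's temp tables live in their own pg_temp_N schema.)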


If contrib/pg_autovacuum, temp tables, and bgwriter don't mix well,  
I'll need to rethink our vacuum strategy.


Thanks!

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

On Jul 14, 2005, at 2:53 PM, Thomas F. O'Connell wrote:


Several temporary tables (some of which inherit from actual tables)
are constructed.



Hmm ... a SELECT from one of the actual tables would then scan the
temp tables too, no?

Thinking about this, I seem to recall that we had agreed to make the
planner ignore temp tables of other backends when expanding an
inheritance list --- but I don't see anything in the code  
implementing

that, so it evidently didn't get done yet.



In which case, my guess is that we either need to disconnect the  
temporary tables and not use inheritance or revert to a version of  
the application that does not use temporary tables at all.  
Otherwise, there's a risk of any query on a parent of the temp  
tables not restricted by ONLY causing this to occur again, no?


I guess we've been dodging bullets this whole time and were  
affected by two issues in postgres simultaneously: 1) bgwriter  
clogging and 2) inheritance ignorance of other backend temp tables.


Nice perfect storm, eh? An unrestricted (e.g., non-ONLY) query run  
against a _parent_ of a temporary table. Too bad it puts postgres  
in an unusable state...


Thanks to everyone for assistance in the sleuthing process.



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] ERROR: could not open relation

2005-07-14 Thread Thomas F. O'Connell

From this thread, these two bits about PostgreSQL stand out:

I have an old note to myself that persistent write errors could clog
the bgwriter, because I was worried that after an error it would
stupidly try to write the same buffer again instead of trying to make
progress elsewhere.  (CVS tip might be better about this, I'm not sure.)
A dirty buffer for a file that doesn't exist anymore would certainly
qualify as a persistent failure.

and

Hmm ... a SELECT from one of the actual tables would then scan the
temp tables too, no?

Thinking about this, I seem to recall that we had agreed to make the
planner ignore temp tables of other backends when expanding an
inheritance list --- but I don't see anything in the code implementing
that, so it evidently didn't get done yet.

I don't immediately see TODO items corresponding to these. Should  
there be some? Or do these qualify as bugs and should they be  
submitted to that queue?


Thanks again to all developers and community folk who lent insight  
into this error -- diagnosis and recovery (which was, thankfully,  
virtually non-existent).


--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] ERROR: could not open relation

2005-07-14 Thread Thomas F. O'Connell

One other detail: pg_autovacuum is running on this system.

I just noticed this from Tom's Autovacuum loose ends post from  
earlier today:


The code does not make a provision to ignore temporary tables.
Although vacuum.c and analyze.c will disregard the request to touch
such tables, it'd probably be better to recognize the situation further
upstream.  In particular it seems that autovacuum will continually throw
ANALYZE requests for a temp table due to lack of stats.

Is this an issue with the current pg_autovacuum implementation? Is it  
somehow involved?


Though I feel like we're a little closer to solving this mystery, I  
want to target the correct vacuuming process with a fix. I have a  
feeling that explicitly dropping the temp tables in the process that  
also calls VACUUM prior to the VACUUM is a good short-term fix, but I  
also want to know whether continuing to run pg_autovacuum with this  
architecture is a bad idea. If so, we can revert to not using temp  
tables at all.


Further, why have we only noticed it once when this version of code  
(and PostgreSQL) has been running for weeks?


--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

On Jul 14, 2005, at 11:42 AM, Thomas F. O'Connell wrote:

Sorry, I didn't have the evidence about the bgwriter before. It was  
based on conjecture on IRC last night and newly gathered evidence  
from this morning.


Here's a list of current postgres processes on the box.

postgres  1186  2.8  5.0 437812 417624 ? SJul13  22:37  
postgres: writer process
postgres  1187  0.3  0.0  5940 2688 ?SJul13   2:54  
postgres: stats buffer process
postgres  1188  3.1  0.1 13456 8856 ?SJul13  25:16  
postgres: stats collector process


My assumption is that it's typically the case that these three  
processes generally get allocated sequential pids when postgres  
starts.


In the postgres log, we see these two types of errors, which were  
the only ones that did not report an IP address:


2005-07-12 01:53:31 CDT 13390 :LOG:  statistics buffer is full
2005-07-13 17:44:51 CDT 13389 :ERROR:  could not open relation  
1663/32019395/94144936: No such file or directory


So if we assume that pid 13390 referred to the stats collector from  
yesterday, then presumably 13389 was the bgwriter.


Unfortunately, this is a system where the interloper is superuser  
(and, yes, changing this has been a TODO). But even so, I need help  
understanding how one backend could access the temp table of  
another. Which is what brings me to vacuum or some other system  
process as a culprit.


Recognizing that the application code will execute as superuser in  
postgres, here is what is happening in a session:


Several temporary tables (some of which inherit from actual tables)  
are constructed.

Data is loaded in.
If the data includes updates, in the same session, a VACUUM is  
performed, else an ANALYZE is performed.


So we know these things:

1. This import process was running.
2. It had started the vacuum, which occurs in the same session as  
temp tables that inherit from the table being vacuumed.
3. bgwriter reported an error about a missing relation file (I  
guess this is a strong suspicion more than knowledge, but we  
strongly suspect).


So could this be somehow related to the fact that VACUUM, as a  
result of the inheritance relationship in the temp tables, is  
explicitly attempting to access them?


--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

On Jul 14, 2005, at 11:07 AM, Tom Lane wrote:


Do you have some evidence that the bgwriter was what was reporting  
the

error?  You didn't say that before.

The bgwriter only works on dirty shared buffers, so the only way this
could be happening is if a page of a temp table had gotten loaded  
into
a shared buffer, which isn't supposed to happen really.  Is it  
possible

that you had some backend deliberately trying to read a temp table
created by another backend?  (You don't have to assume that the
interloper tried to modify the table; a mere SELECT could have  
created

the dirty-buffer condition due to hint-bit update.  You do have to
assume that the interloper was superuser, though, else permissions
would have stopped him from accessing someone else's temp table.)

regards, tom lane




---(end of  
broadcast)---

TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match




---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] ERROR: could not open relation

2005-07-13 Thread Thomas F. O'Connell
I have a production database where we just encountered the following
error:

ERROR:  could not open relation 1663/32019395/94144936: No such file or directory

Here's the output of SELECT version():

PostgreSQL 8.0.3 on i686-pc-linux-gnu, compiled by GCC 2.95.4

Here's uname -a:

Linux hostname 2.6.11.8 #8 SMP Tue Jun 21 11:18:03 CDT 2005 i686 unknown

JFS is the filesystem.

Interestingly, this isn't a FATAL error, but after it occurred, not a
single query was working, and, in fact, all queries seemed to generate
the error. I wasn't present when the error occurred, and by the time I
became available, the box had been rebooted, and pg_autovacuum, which
runs by default, had been started. Otherwise, everything seems to have
come up as expected. I've since killed pg_autovacuum.

Is there any way to get more information about why this error occurred
and what else I might need to do to recover from it?

I saw this post by Tom Lane in a thread from earlier this year:

http://archives.postgresql.org/pgsql-admin/2005-04/msg00227.php

This makes me ask a possibly unrelated question: what is the 1663
prefix in the relation string? When I examine $PGDATA/base, the
directories within seem to be those that start after the 1663. As in,
I see $PGDATA/base/32019395, not $PGDATA/base/1663/32019395.

Anyway, if I do a lookup by oid for 94144936 in pg_class, I don't see
it. And, clearly, it's not in $PGDATA/base/32019395.

Are the recommendations the same as in the other thread? REINDEX
DATABASE? (What is a "standalone backend"? A single-user version?)
Avoid VACUUMing? pg_dump and reload?

The database is currently running. Should I stop it to prevent further
damage?

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005
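
(For anyone trying to map the components of that path back to catalog
entries, a rough sketch -- standard catalog lookups, using the OIDs
taken from the error message:

  SELECT spcname FROM pg_tablespace WHERE oid = 1663;            -- the tablespace
  SELECT datname FROM pg_database WHERE oid = 32019395;          -- the database
  SELECT relname FROM pg_class WHERE relfilenode = 94144936;     -- the relation, if it still exists

The last query has to be run while connected to the database in
question, since pg_class is per-database.)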

Re: [GENERAL] ERROR: could not open relation

2005-07-13 Thread Thomas F. O'Connell
I'm developing a habit of being the most frequent replier to my own
posts, but anyway: I discovered the meaning of 1663, which is the
default tablespace oid.

But I still need help with diagnosis and treatment...

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

On Jul 13, 2005, at 8:06 PM, Thomas F. O'Connell wrote:

I have a production database where we just encountered the following
error:

ERROR:  could not open relation 1663/32019395/94144936: No such file or directory

Here's the output of SELECT version():

PostgreSQL 8.0.3 on i686-pc-linux-gnu, compiled by GCC 2.95.4

Here's uname -a:

Linux hostname 2.6.11.8 #8 SMP Tue Jun 21 11:18:03 CDT 2005 i686 unknown

JFS is the filesystem.

Interestingly, this isn't a FATAL error, but after it occurred, not a
single query was working, and, in fact, all queries seemed to generate
the error. I wasn't present when the error occurred, and by the time I
became available, the box had been rebooted, and pg_autovacuum, which
runs by default, had been started. Otherwise, everything seems to have
come up as expected. I've since killed pg_autovacuum.

Is there any way to get more information about why this error occurred
and what else I might need to do to recover from it?

I saw this post by Tom Lane in a thread from earlier this year:

http://archives.postgresql.org/pgsql-admin/2005-04/msg00227.php

This makes me ask a possibly unrelated question: what is the 1663
prefix in the relation string? When I examine $PGDATA/base, the
directories within seem to be those that start after the 1663. As in,
I see $PGDATA/base/32019395, not $PGDATA/base/1663/32019395.

Anyway, if I do a lookup by oid for 94144936 in pg_class, I don't see
it. And, clearly, it's not in $PGDATA/base/32019395.

Are the recommendations the same as in the other thread? REINDEX
DATABASE? (What is a "standalone backend"? A single-user version?)
Avoid VACUUMing? pg_dump and reload?

The database is currently running. Should I stop it to prevent further
damage?

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

Re: [GENERAL] PostgreSQL Hosting

2005-07-11 Thread Thomas F. O'Connell
Okay, after setting up a hosting environment based on my original
post, we immediately discovered a few caveats. One is that, as
written, pg_user creates issues with pg_dump because a given user
needs access to various system catalogs and postgres must exist in
pg_user, so we updated the view. Secondly, though, we actually had to
modify system_views.sql because pg_user, as a system catalog, behaves
differently from a standard view and caused more issues with pg_dump.

So here's what we added to system_views.sql:

CREATE VIEW pg_user AS
SELECT usename, usesysid, usecreatedb, usesuper, usecatupd, ''::text as passwd, valuntil, useconfig
FROM pg_shadow
WHERE usename IN (
    (SELECT current_user),
    (
        SELECT ps.usename
        FROM pg_database pd, pg_shadow ps
        WHERE pd.datdba=ps.usesysid
        AND datname=current_database()
    ),
    'postgres'
)

Then we ran into the problem of allowing users to dump their own
databases, which requires access to pg_database that we're trying to
prevent at the user level. So we're now setting our schema search path
for all user-created databases to public, pg_catalog, $user. Then
we're creating a view called pg_database as:

CREATE OR REPLACE VIEW pg_db AS
SELECT oid, *
FROM pg_database
WHERE datname=(select current_user)

Now pg_dump should be able to use our restricted version of
pg_database with little trouble, although I don't know (yet) whether
putting a customized replacement view in front of an actual system
catalog in the search path is going to cause any other issues in other
parts of the system.

So, to summarize: we're shooting for a user-isolated PostgreSQL
hosting environment.

In order to accomplish this, we have to hack the following:

1. system_views.sql
2. phpPgAdmin to use the custom pg_grp view (we could probably perform
similar surgery to overshadow pg_group as we did pg_database with a
view in public)

Then we use the built in PostgreSQL privilege system as outlined in my
original post plus modify the schema search path per user database.

At the end of this, it seems like the only hobble (other than having
to hack stuff to achieve user isolation) is that we can no longer get
a list of users as super-user from pg_user.

A couple of years ago, Tom Lane said this with regard to isolating
users for a PostgreSQL-based hosting environment:

http://archives.postgresql.org/pgsql-novice/2003-10/msg00124.php

But to me, that seems like a fairly draconian approach to creating a
hosting environment.

In MySQL, the hack is a privilege called SHOW DATABASES, which can be
set for all databases. Their user setup seems to be wholly different
because they don't seem to provide a cluster-wide mechanism for
viewing users.

Again, I'm wondering whether anyone else in the community has
developed any best practices when it comes to PostgreSQL hosting.

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

On Jul 1, 2005, at 3:45 PM, Thomas F. O'Connell wrote:

After years of administering single-site PostgreSQL, I'm finally
getting started on a PostgreSQL hosting project. I just wanted to
check to see how our approach jives with what other folks in the
community have done.

Much of this is derived from the "hardening" techniques derived from
phpPgAdmin.patch:

http://www.linuxpages.org/phpPgAdmin.patch_en.php

Here is an excerpt from our current pg_hba.conf:

pg_hba.conf
# allow users to connect to database of same name, from network, with password
host    sameuser    all         192.168.1.0/24  md5
# postgres connect from network with password
host    all     postgres        192.168.1.0/24  md5

As far as I can tell, this solves 80% of the problem. Now users are
restricted to databases corresponding to their usernames. The only
difficulty seems to be unrestricted access to cluster-wide system
catalogs.

So, in order to restrict access to the system catalogs, we do the
following (where all connections are performed as user postgres and
dbuser is the example name of a user/database):

# all connections as user postgres
template1=# create database dbuser;
template1=# revoke all privileges on database dbuser from public;
dbuser=# create or replace view pg_db as select oid, * from pg_database where datname=(select current_user);
dbuser=# grant select on pg_db to public;
dbuser=# revoke select on pg_database from public;
dbuser=# create or replace view pg_grp as select * from pg_group where groname=(select current_user);
dbuser=# grant select on pg_grp to public;
dbuser=# revoke select on pg_group from public;
dbuser=# create or replace view pg_user as select usename, usesysid, usecreatedb, usesuper, usecatupd, ''::text as passwd, valuntil, useconfig FROM pg_shadow where usename=(select current_user);
dbuser=# grant select on pg_user to public;
dbuser=# grant select on public.pg_user to public;
dbuser=# revoke all privileges on schema public from public;
template1=# c

Re: [GENERAL] PostgreSQL Hosting

2005-07-11 Thread Thomas F. O'Connell
In my haste, I neglected to update the name of this view in my post to
pg_database from its original definition as pg_db, which was the
original name of the hacked view.

The point is that we want pg_catalog.pg_database to be superseded by
public.pg_database from the point of view of both the user and
pg_dump.

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

On Jul 11, 2005, at 11:39 AM, Thomas F. O'Connell wrote:

Then we ran into the problem of allowing users to dump their own
databases, which requires access to pg_database that we're trying to
prevent at the user level. So we're now setting our schema search path
for all user-created databases to public, pg_catalog, $user. Then
we're creating a view called pg_database as:

CREATE OR REPLACE VIEW pg_db AS
SELECT oid, *
FROM pg_database
WHERE datname=(select current_user)

Re: [GENERAL] PostgreSQL Hosting

2005-07-11 Thread Thomas F. O'Connell

Joshua,

Is there any difference between a catalog and a cluster? As in, are  
you saying a separate postmaster per user, as Tom Lane suggested in  
the post I referenced earlier in this thread?


Off-hand, do you (or anyone else) see any showstoppers with the  
implementation I laid out involving a bit of mucking with system  
catalogs and the schema search path?


--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

On Jul 11, 2005, at 12:01 PM, Joshua D. Drake wrote:

Although it is resource intensive, Command Prompt creates a new  
catalog

owned by the user for each account. So on a given machine we will have
25 postgresql catalogs running on separate ports.

This has worked very well for us for the last couple of years.

Sincerely,

Joshua D. Drake


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] PostgreSQL Hosting

2005-07-11 Thread Thomas F. O'Connell

I don't really think it's that bad (if it proves to work):

1. A simple modification to system_views.sql
2. Modified schema search path: public, pg_catalog, $user
3. New public views: pg_database, pg_group

If the final two pieces work, we won't even need to modify phpPgAdmin.

Anyway, thanks for your insights. I don't think we're really in a  
position to support postmaster-per-client hosting, though, at the  
moment.


--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

On Jul 11, 2005, at 1:30 PM, Joshua D. Drake wrote:


I honestly didn't read through the whole thing. It looked like a whole
bunch of administrative trouble to me ;)

Sincerely,

Joshua D. Drake


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Quoting $user as Parameter to SET

2005-07-11 Thread Thomas F. O'Connell
The default search_path is $user, public.

Say I wanted to reverse this in a database called d:

First of all:

d=# show search_path;
 search_path
--------------
 $user,public
(1 row)

Then:

d=# alter database d set search_path to public, '$user';

Then, in a new session:

d=# show search_path;
   search_path
-----------------
 public, "$user"
(1 row)

This is an important distinction because testing reveals that the
quoted $user after the reversal is no longer actually a dynamic
variable that results in a search_path that resolves to the current
user.

I'm having trouble locating an example of how to include a variable in
the parameter list to SET.

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

Re: [GENERAL] Quoting $user as Parameter to SET

2005-07-11 Thread Thomas F. O'Connell

Uh...

Just kidding, I guess. Wish I had a screen capture of what I had done  
before because I swear I was unable to create a table in the user  
namespace after having created it. But now that I look more closely  
(including when running current_schemas(true)), everything looks fine.


Sorry for the noise...

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

On Jul 11, 2005, at 6:04 PM, Tom Lane wrote:


Thomas F. O'Connell [EMAIL PROTECTED] writes:


This is an important distinction because testing reveals that the
quoted $user after the reversal is no longer actually a dynamic
variable that results in a search_path that resolves to the current
user.



Really?  It works fine for me:

regression=# create schema postgres;
CREATE SCHEMA
regression=# show search_path;
 search_path
--
 $user,public
(1 row)

regression=# select current_schemas(true);
   current_schemas
--
 {pg_catalog,postgres,public}
(1 row)

regression=# alter database regression set search_path = public,  
'$user';

ALTER DATABASE
regression=# \c -
You are now connected to database regression.
regression=# show search_path;
   search_path
-
 public, $user
(1 row)

regression=# select current_schemas(true);
   current_schemas
--
 {pg_catalog,public,postgres}
(1 row)

regression=#

regards, tom lane




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] PostgreSQL Hosting

2005-07-01 Thread Thomas F. O'Connell
After years of administering single-site PostgreSQL, I'm finally
getting started on a PostgreSQL hosting project. I just wanted to
check to see how our approach jives with what other folks in the
community have done.

Much of this is derived from the "hardening" techniques derived from
phpPgAdmin.patch:

http://www.linuxpages.org/phpPgAdmin.patch_en.php

Here is an excerpt from our current pg_hba.conf:

pg_hba.conf
# allow users to connect to database of same name, from network, with password
host    sameuser    all         192.168.1.0/24  md5
# postgres connect from network with password
host    all     postgres        192.168.1.0/24  md5

As far as I can tell, this solves 80% of the problem. Now users are
restricted to databases corresponding to their usernames. The only
difficulty seems to be unrestricted access to cluster-wide system
catalogs.

So, in order to restrict access to the system catalogs, we do the
following (where all connections are performed as user postgres and
dbuser is the example name of a user/database):

# all connections as user postgres
template1=# create database dbuser;
template1=# revoke all privileges on database dbuser from public;
dbuser=# create or replace view pg_db as select oid, * from pg_database where datname=(select current_user);
dbuser=# grant select on pg_db to public;
dbuser=# revoke select on pg_database from public;
dbuser=# create or replace view pg_grp as select * from pg_group where groname=(select current_user);
dbuser=# grant select on pg_grp to public;
dbuser=# revoke select on pg_group from public;
dbuser=# create or replace view pg_user as select usename, usesysid, usecreatedb, usesuper, usecatupd, ''::text as passwd, valuntil, useconfig FROM pg_shadow where usename=(select current_user);
dbuser=# grant select on pg_user to public;
dbuser=# grant select on public.pg_user to public;
dbuser=# revoke all privileges on schema public from public;
template1=# create user dbuser with 'changeme';
template1=# grant all privileges on database dbuser to dbuser;
template1=# alter database dbuser owner to dbuser;
dbuser=# grant all on schema public to dbuser;

If we ever needed to remove a user/database, it should be as easy as:

dropdb dbuser
dropuser dbuser

As far as I can tell, this pretty well locks down anyone accessing the
database through allowed interfaces into the databases they own and
prevents them from accessing or modifying any other databases.

Otherwise, we'll be letting users use phpPgAdmin to administer their
databases. In phpPgAdmin, setting $conf['owned_only'] = true allows
one to restrict the display of databases to those owned by the user
who is logged in, but this setting does nothing to prevent arbitrary
SQL being run to access and modify databases and tables not owned by
the current user. Even though the above settings serve to fulfill this
function, we still decided to set this.

Is this idiomatic? Is this in the realm of best practices? Am I
missing anything? Are there any less intrusive ways of doing things?
Am I hurting my prospects for upgrade paths? There were a few minor
patches made to phpPgAdmin, for instance, to get it to use the new
pg_db view.

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

Re: [GENERAL] CPU-intensive autovacuuming

2005-06-10 Thread Thomas F. O'Connell
I was usleeping in tiny increments in each iteration of the loop. I  
didn't try break it into iterative groups like this.


Honestly, I'd prefer to see pg_autovacuum improved to do O(n) rather  
than O(n^2) table activity. At this point, though, I'm probably not  
too likely to have much time to hack pg_autovacuum before 8.1 is  
released, although if it doesn't become integrated by beta feature  
freeze, I might give it a shot.


But I hope if anyone completes the linear improvement, they'll post  
to the lists.
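
(If anyone does pick this up, the catalog side of the comparison is
easy to get in sorted order -- a sketch of the kind of ordered query
that would let a sorted in-memory table list be merged in a single
pass; this is just an illustration, not pg_autovacuum's actual query:

  SELECT oid, relname
  FROM pg_class
  WHERE relkind = 'r'
  ORDER BY oid;

With both sides ordered by oid, the comparison becomes a linear merge
instead of a nested loop.)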


--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

On Jun 10, 2005, at 9:12 AM, Shelby Cain wrote:


--- Thomas F. O'Connell [EMAIL PROTECTED] wrote:


Were you sleeping every time through the loop?  How about something
like:

if (j%500 == 1) usleep(10)

Regards,

Shelby Cain


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] CPU-intensive autovacuuming

2005-06-09 Thread Thomas F. O'Connell

Phil,

If you complete this patch, I'm very interested to see it.

I think I'm the person Matthew is talking about who inserted a sleep  
value. Because of the sheer number of tables involved, even small  
values of sleep caused pg_autovacuum to iterate too slowly over its  
table lists to be of use in a production environment (where I still  
find its behavior to be preferable to a complicated list of manual  
vacuums performed in cron).


--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

On Jun 7, 2005, at 6:16 AM, Phil Endecott wrote:


Matthew T. O'Connor wrote:


Phil Endecott wrote:

 Could it be that there is some code in autovacuum that is O(n^2) in
 the number of tables?

Browsing the code using webcvs, I have found this:

for (j = 0; j < PQntuples(res); j++)
{
    tbl_elem = DLGetHead(dbs->table_list);
    while (tbl_elem != NULL)
    {

Have I correctly understood what is going on here?




Indeed you have.  I have heard a few similar reports but perhaps  
none as bad as yours.  One person put a small sleep value so that  
it doesn't spin so tight.  You could also just up the sleep delay  
so that it doesn't do this work quite so often.  No other quick  
suggestions.




I do wonder why autovacuum is keeping its table list in memory  
rather than in the database.


But given that it is keeping it in memory, I think the real fix is  
to sort that list (or keep it ordered when building or updating  
it).  It is trivial to also get the query results ordered, and they  
can then be compared in O(n) time.


I notice various other places where there seem to be nested loops,  
e.g. in the update_table_list function.  I'm not sure if they can  
be fixed by similar means.


--Phil.


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] pg_dump in a production environment

2005-05-23 Thread Thomas F. O'Connell
I have a web application backed by a PostgreSQL 7.4.6 database. It's
an application with a fairly standard login process verified against
the database.

I'd like to use pg_dump to grab a live backup and, based on the
documentation, this would seem to be a realistic possibility. When I
try, though, during business hours, when people are frequently logging
in and otherwise using the application, the application becomes almost
unusable (to the point where logins take on the order of minutes).

According to the documentation, pg_dump shouldn't block other
operations on the database other than operations that operate with
exclusive locks.

Ordinarily, I run pg_autovacuum on the box, so I tried again after
killing that, thinking that perhaps any substantial vacuum activity
might affect pg_dump. I tried again to no avail.

Excepting the rest of the application, the login process should be
completely read-only and shouldn't require any exclusive locks.

Connections don't really pile up excessively, and load on the machine
does not get in the red zone. Is there anything else I should be
noticing?

-tfo

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

Re: [GENERAL] pg_dump in a production environment

2005-05-23 Thread Thomas F. O'Connell
Okay, I collated the three replies I got below for ease in replying.

I vacuum full analyze and reindexdb approximately once a month, but I
use pg_autovacuum as a matter of ongoing maintenance, and it seems to
hit equilibrium pretty well and seems to prevent bloat. The last time
I checked a vacuum analyze verbose, I had plenty of FSM to spare. The
data grows, but it doesn't seem to grow so quickly that I'd already be
out of FSM space.

I actually run pg_dump from a remote machine, so I/O contention on the
partition with $PGDATA shouldn't be an issue.

And here is the actual command:

pg_dump -h host -F c database > dumpfile

Pretty basic, although it is compressing.

As far as I can tell, the postmaster handling the dump request takes
up quite a bit of CPU, but not itself to the point where the database
should be unusable under ordinary circumstances. E.g., when a
query/backend eats up that much CPU, it doesn't prevent further
access.

I'm suspicious more of something involving locks than of CPU.

Oh, and one other small(ish) detail: the dumping client is using a
7.4.8 installation, whereas the server itself is 7.4.6.

-tfo

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

From: Scott Marlowe [EMAIL PROTECTED]
Date: May 23, 2005 3:18:33 PM CDT
To: "Thomas F. O'Connell" [EMAIL PROTECTED]
Cc: PgSQL General pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg_dump in a production environment

Basically, it sounds like postgresql is doing a lot of very long
sequential scans to do this backup. HAve you done a vacuum full
lately? It could be that you've got a lot of table bloat that's making
the seq scans take so long.

You could be I/O saturated already, and the backup is just pushing you
over the edge of the performance knee.

I do a 'vacuum analyze verbose' and see if you need more fsm setup for
your regular vacuums to keep up.

From: "Matthew T. O'Connor" matthew@zeut.net
Date: May 23, 2005 3:18:18 PM CDT
To: "Thomas F. O'Connell" [EMAIL PROTECTED]
Cc: PgSQL General pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg_dump in a production environment

Could this be an I/O saturation issue like the one the vacuum delay
settings are supposed to help with? Perhaps we could either extend the
vacuum delay settings to effect pg_dump, or make new option to pg_dump
that would have it slow down the dump.

BTW, have you tried running pg_dump from a separate machine? Or even
just making sure that the dump file is being written to a different
disk drive than PostgreSQL is running on. All that disk write activity
is bound to slow the system down.

Matthew

From: Martijn van Oosterhout kleptog@svana.org
Date: May 23, 2005 3:25:23 PM CDT
To: "Thomas F. O'Connell" [EMAIL PROTECTED]
Cc: PgSQL General pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg_dump in a production environment
Reply-To: Martijn van Oosterhout kleptog@svana.org

What's you pg_dump command? Some options may take a lot of memory.

If you list the processes while this is going on, do you see one
chewing all your memory? i.e what's really causing the problem...

Hope this helps,

Re: [GENERAL] pg_dump in a production environment

2005-05-23 Thread Thomas F. O'Connell
A note about database design, though: there are thousands of tables  
in this database, most of them inherited. I haven't looked at the  
internals of pg_dump, but generally, how do the sequential scans  
work? Why would these prevent the tables from being accessed by  
queries that don't require exclusive locks?


-tfo

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

On May 23, 2005, at 3:18 PM, Scott Marlowe wrote:


Basically, it sounds like postgresql is doing a lot of very long
sequential scans to do this backup.  HAve you done a vacuum full
lately?  It could be that you've got a lot of table bloat that's  
making

the seq scans take so long.


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] pg_dump in a production environment

2005-05-23 Thread Thomas F. O'Connell
Actually, I would find this to be an interesting project, but we're  
on the verge of moving to 8.0 via Slony and will have a replicated  
cluster, reducing the need for live dumps on the primary read/write  
database.


It's too bad round tuits are so expensive!

I was trying to think of a way today in which pg_dump might be able  
to use statistics in almost the opposite way of pg_autovacuum, such  
that it steered clear of objects in heavy use, but I'm not familiar  
enough with the source to know how this might work.


-tfo

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

On May 23, 2005, at 11:12 PM, Tom Lane wrote:


Thomas F. O'Connell [EMAIL PROTECTED] writes:


I'd like to use pg_dump to grab a live backup and, based on the
documentation, this would seem to be a realistic possibility. When I
try, though, during business hours, when people are frequently
logging in and otherwise using the application, the application
becomes almost unusable (to the point where logins take on the order
of minutes).



The pg_dump sources contain some comments about throttling the rate
at which data is pulled from the server, with a statement that this
idea was discussed during July 2000 and eventually dropped.  Perhaps
you can think of a better implementation.

regards, tom lane




---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] Increasing connection limit in postgres

2005-05-16 Thread Thomas F. O'Connell
There should be no need to recompile anything. See the entry for  
max_connections:

http://www.postgresql.org/docs/8.0/static/runtime-config.html#RUNTIME- 
CONFIG-CONNECTION
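
(Roughly, on a stock installation: check the current value with

  SHOW max_connections;

then raise max_connections in postgresql.conf and restart the
postmaster. For values in the thousands, you will likely also need to
raise the kernel's shared memory limits, per the kernel resources page
in the docs.)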

-tfo
--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC
Strategic Open Source: Open Your i
http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005
On May 16, 2005, at 1:44 PM, Akash Garg wrote:
How do you recompile postgres to allow more than 1024 connections?
Thanks,
Akash
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


Re: [GENERAL] DDL from psql console?

2005-04-25 Thread Thomas F . O'Connell
Any reason not to use pg_dump -s?
-tfo
--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC
Strategic Open Source: Open Your i
http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005
On Apr 25, 2005, at 10:29 AM, John Browne wrote:
Hello,
I was curious if there was a way to get the DDL for a particular table
from the psql client console?  I have two postgres boxes (development
and production) and would like to copy & paste the DDL CREATE TABLE
statements from the development console to the production console when
I'm moving a particular table definition over.  I tried \dt+ but it
didn't appear to show it.
Any thoughts?
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Table modifications with dependent views - best practices?

2005-04-23 Thread Thomas F . O'Connell
Why would DDL statements in a transaction cause deadlocks? I understand 
the prevention of concurrent access, but I'm curious to know more about 
how deadlocks arise in this situation, as this is something I've seen 
in a production environment during transactional DDL traffic. Why would 
DDL statements be more likely to cause lock acquisition at cross 
purposes?

A simple example would help me understand this.
Thanks!
-tfo
--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC
Strategic Open Source: Open Your i
http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005
On Apr 22, 2005, at 6:11 AM, Michael Fuhr wrote:
On Fri, Apr 22, 2005 at 11:34:29AM +0100, David Roussel wrote:

I usually put DDL statements in a transaction, for a couple of
reasons: so that a mistake doesn't leave me with half-done work
(any error will cause the entire transaction to roll back), and to
make the changes atomic for the benefit of other transactions.
Can you do that in postgres?  Will it really make the DDL atomic?
Yes, although locking will probably prevent concurrent access and
can cause deadlock.  DDL statements like DROP, CREATE, and ALTER
acquire an AccessExclusiveLock on the objects they're modifying,
so the transaction doing the DDL will block until no other transactions
hold locks on those objects, and other transactions' attempts to
use those objects will block until the DDL transaction commits or
rolls back.  If the DDL transaction rolls back, then nobody else
will ever have seen the changes; if it commits then the changes all
become visible at the same time.
Try it and see what happens.  You might see blocking and you might
be able to cause deadlock, but you shouldn't ever see some changes
but not others.
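
(A minimal sketch of the classic case, with two hypothetical tables a
and b and two concurrent sessions:

  -- session 1:
  BEGIN;
  ALTER TABLE a ADD COLUMN c1 integer;   -- takes AccessExclusiveLock on a

  -- session 2, meanwhile:
  BEGIN;
  ALTER TABLE b ADD COLUMN c2 integer;   -- takes AccessExclusiveLock on b

  -- session 1:
  ALTER TABLE b ADD COLUMN c3 integer;   -- blocks, waiting for session 2's lock on b

  -- session 2:
  ALTER TABLE a ADD COLUMN c4 integer;   -- each now waits on the other;
                                         -- the deadlock detector aborts one transaction

The same pattern arises whenever two transactions acquire locks on the
same objects in opposite order, which DDL makes easy to hit because its
locks are so strong.)
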
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] generating a parent/child relationship in a trigger

2005-04-15 Thread Thomas F . O'Connell
As long as your input is well-defined, you could certainly do something 
like this in a trigger.

Instinctively, I would think that plperl might be a little more wieldy 
than plpgsql for the required parsing, but I'm pretty sure a plpgsql 
solution could be created. Don't have one handy, though...

-tfo
--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC
Strategic Open Source: Open Your i
http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005
On Apr 12, 2005, at 12:51 PM, Mark Harrison wrote:
Suppose I'm adding row to a table, and one of the columns is
a pathname.
I would like to generate in a separate table parent/child
relationships based on the path.
For example, adding /foo/bar/baz and /foo/bar/bot would
generate the following relationships
parent  child
--  -
/foo/foo/bar
/foo/bar/foo/bar/baz
/foo/bar/foo/bar/bot
Is this a reasonable thing to do in a trigger?  Does anybody
have a pl/sql snippet to do this?
Many TIA,
Mark
--
Mark Harrison
Pixar Animation Studios
---(end of 
broadcast)---
TIP 7: don't forget to increase your free space map settings

---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] error with vacuumdb

2005-04-11 Thread Thomas F . O'Connell
You'll need to post the actual error to the list to have any hope of 
receiving good help.

-tfo
--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC
Strategic Open Source: Open Your i
http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005
On Apr 10, 2005, at 5:58 PM, Michelle Konzack wrote:
Hello *,
I run a Database of curently 120 GByte on a 3Ware S-ATA Raid-5 with
3x 147 GByte.  Now it was a little bit bigger (around 190 GByte )
and I have tried to use vacuumdb...
Oops!!!  I can not cacuumize my Database...
How many diskspace does vacuumdb need ?
OK, if I pgdumpall my Database, I NEED diskspace but with vacuumdb ?
Good night
Michelle
--
Linux-User #280138 with the Linux Counter, http://counter.li.org/
Michelle Konzack   Apt. 917  ICQ #328449886
   50, rue de Soultz MSM LinuxMichi
0033/3/8845235667100 Strasbourg/France   IRC #Debian (irc.icq.com)

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [GENERAL] Recovering real disk space

2005-04-04 Thread Thomas F . O'Connell
Isn't this also a symptom of inappropriate FSM settings?
Try running a VACUUM VERBOSE and check the FSM settings at the end.
-tfo
--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC
Strategic Open Source  Open Your i
http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005
On Apr 3, 2005, at 8:21 AM, Bruno Wolff III wrote:
On Wed, Mar 30, 2005 at 13:09:33 -0500,
  Adam Siegel [EMAIL PROTECTED] wrote:
We perform a vacuum full after each mass delete.  This cycle can 
happen
many times during over a couple of weeks.  We are in a test lab
environment and are generating a lot of data.

One of the problems we have is that the vacuum on the table can take 
up
to 10 hours.  We also expect to see the physical disk space go down, 
but
this does not happen.  If we accidently fill up the disk, then all 
bets
are off and we are unable to recover.  A vacuum never seems to finish
(several days).
This may mean that there are open transactions pinning the records you
have deleted so that they aren't being removed by the vacuum.
Also, under some circumstances a CLUSTER can be faster than a VACUUM 
FULL.
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] Referential integrity using constant in foreign key

2005-03-28 Thread Thomas F . O'Connell
Andrus, it's still not clear to me that you're understanding the role 
of referential integrity in database design. It exists to guarantee 
that the values in a column in a given table correspond exactly to the 
values in a column in another table on a per-row basis. It does not 
exist to guarantee that all values in a given column will have a 
specific value.

Referential integrity never dictates the need for dummy columns. If 
you have a column that you need to refer to a column in another table 
so strongly that you want the values always to be in sync, you create a 
foreign key, establishing referential integrity between a column (or 
columns) in the table with the foreign key and a column in another 
table (usually a primary key).
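
(For example, a minimal sketch with hypothetical tables:

  CREATE TABLE color (
      name text PRIMARY KEY
  );

  CREATE TABLE item (
      id integer PRIMARY KEY,
      color_name text REFERENCES color (name)
  );

Every non-null item.color_name must then match an existing color.name.)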

I don't understand what you're trying to accomplish well enough to be 
able to make a specific recommendation based on your examples that 
suits your needs.

-tfo
--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC
Strategic Open Source  Open Your i
http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005
On Mar 25, 2005, at 1:39 PM, Andrus Moor wrote:
Thomas,
thank you for reply. There was a typo in my code. Second table should 
be

 CREATE TABLE info (
 code1 CHAR(10),
 code2 CHAR(10),
 FOREIGN KEY ('1', code1) REFERENCES classifier,
 FOREIGN KEY ('2', code2) REFERENCES classifier
 );
I try to explain my problem more precicely.
I can implement the referential integrity which I need in the 
following way:

CREATE TABLE classifier (
category CHAR(1),
code CHAR(10),
PRIMARY KEY (category,code)  );
CREATE TABLE info (
code1 CHAR(10),
code2 CHAR(10),
constant1  CHAR default '1',
constant2  CHAR default '2',
FOREIGN KEY (constant1, code1) REFERENCES classifier,
FOREIGN KEY (constant2, code2) REFERENCES classifier
);
This implementation requires 2 additional columns (constant1 and 
constant2)
which have always same values, '1' and '2' respectively, in all info 
table
rows.

I created those dummy columns since Postgres does not allow to write
REFERENCES clause like
CREATE TABLE info (
code1 CHAR(10),
code2 CHAR(10),
FOREIGN KEY ('1', code1) REFERENCES classifier,
FOREIGN KEY ('2', code2) REFERENCES classifier
);
Is it possible to implement referential integrity without adding 
additional
dummy columns to info table ?

It's somewhat unclear what you're attempting to do, here, but I'll 
give a
shot at interpreting. Referential integrity lets you guarantee that 
values
in a column or columns exist in a column or columns in another table.

With classifier as you've defined it, if you want referential 
integrity in
the info table, you could do this:

CREATE TABLE info (
code1 CHAR(10),
code2 CHAR(10),
FOREIGN KEY code1 REFERENCES classifier (category),
FOREIGN KEY code2 REFERENCES classifier (category)
);
But I'm not sure what you mean by references to category 1. There is
only a single category column in classifier, and referential 
integrity is
not for ensuring that a column in one table contains only values of a
single row.

Regardless, your syntax doesn't seem to reflect reality. Read the 
CREATE
TABLE reference thoroughly.

http://www.postgresql.org/docs/8.0/static/sql-createtable.html
-tfo
--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC
Strategic Open Source  Open Your i
http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005
On Mar 25, 2005, at 10:22 AM, Andrus Moor wrote:
I need to create referential integrity constraints:
CREATE TABLE classifier (
category CHAR(1),
code CHAR(10),
PRIMARY KEY (category,code)  );
-- code1 references to category 1,
-- code2 references to category 2 from classifier table.
CREATE TABLE info (
code1 CHAR(10),
code2 CHAR(10),
FOREIGN KEY ('1', category1) REFERENCES classifier,
FOREIGN KEY ('2', category2) REFERENCES classifier
);
Unfortunately, second CREATE TABLE causes error
ERROR:  syntax error at or near '1' at character 171
Any idea how to implement referential integrity for info table ?
It seems that this is not possible in Postgres.
Andrus.
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Server load planning

2005-03-28 Thread Thomas F . O'Connell
Dan,
You can get a sense of how much memory you will need by the shorthand  
presented in table 16-2 for calculating the value of SHMMAX:

http://www.postgresql.org/docs/8.0/static/kernel- 
resources.html#SYSVIPC-PARAMETERS
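
(As a rough worked example, using the approximate 8.0 figures from that
table -- on the order of 250 kB, plus roughly 8.2 kB per shared buffer,
plus roughly 14.2 kB per allowed connection; check the table itself for
the exact values -- a server set to shared_buffers = 10000 and
max_connections = 9000 would need something like

  250 kB + 8.2 kB * 10000 + 14.2 kB * 9000, or roughly 210 MB

of shared memory. Per-backend memory such as work_mem is allocated
outside the shared segment and comes on top of that.)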

Otherwise, you'll need to include some estimate of work_mem and  
maintenance_work_mem based on your knowledge of your queries:

http://www.postgresql.org/docs/8.0/static/runtime-config.html#RUNTIME- 
CONFIG-RESOURCE

As far as disk I/O and contention at that level, I'm not sure how that  
will be affected by sheer number of connections. There's a simple  
utility in contrib called pgbench that you could use to do some  
testing.

-tfo
--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC
Strategic Open Source  Open Your i
http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005
On Mar 27, 2005, at 11:38 AM, Dan Sugalski wrote:
I'm trying to get a handle on how an app I'm looking to roll out's  
going to impact the server I'm connecting to, and what sort of  
capacity planning's going to be needed to make it all work relatively  
well.

I'm looking at around 250-300 simultaneous users, nearly all of them  
doing interactive work. (Curses-based screen stuff for the most part)  
I'd not too worried about the server we've got for them being able to  
handle that except... for reasons that are fairly annoying, I'm  
looking at somewhere in excess of 9K simultaneous connections to the  
database server, and I'm not in a position to cut that down any. (The  
app suite's written in an old 4GL that assumes an ISAM database. We're  
porting to a modern database and runtime, but we have to preserve the  
DB semantics of the original database. Nasty, but there you go)

I know each of the back-end processes is going to suck down some  
resources on the server, but am I going to hit coordination or  
inter-process sync delays with that many different back ends going at  
once? (And is there a good way, short of just running some load tests,  
to estimate the costs involved?)
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])

