[GENERAL] error codes in log file?

2005-03-10 Thread David Parker



Is there a postgresql.conf directive (7.4.5) which will result in error
codes being emitted in the log file? I am thinking of the error codes in:
http://www.postgresql.org/docs/7.4/interactive/errcodes-appendix.html.
Or are these just client-interface errors?
 
- DAP
David Parker    Tazz Networks    (401) 709-5130
 


[GENERAL] create or replace trigger?

2005-03-26 Thread David Parker



Is there a technical reason that the "or replace" syntax isn't supported
for CREATE TRIGGER?
- DAP
David Parker    Tazz Networks    (401) 709-5130
 


Re: [GENERAL] create or replace trigger?

2005-03-27 Thread David Parker
I only wondered because I had a situation recently where I had to create
a trigger based on an event in an application (don't want to initiate
the trigger processing until other stuff has happened in the
environment), and if I'd had the CREATE OR REPLACE I could have avoided
the step of checking if the trigger existed already (I had CREATE OR
REPLACE on the trigger function). But that's certainly not a hardship,
and probably more "correct", anyway!
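
(For anyone curious, the existence check is just a catalog lookup,
roughly like the sketch below - the table and trigger names are made up:)

    SELECT 1
      FROM pg_trigger t
      JOIN pg_class c ON c.oid = t.tgrelid
     WHERE c.relname = 'mytable'
       AND t.tgname = 'my_trigger';
    -- if this returns no row, go ahead and issue the CREATE TRIGGER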

Thanks.

- DAP

>-Original Message-
>From: Tom Lane [mailto:[EMAIL PROTECTED] 
>Sent: Sunday, March 27, 2005 1:50 AM
>To: David Parker
>Cc: postgres general
>Subject: Re: [GENERAL] create or replace trigger? 
>
>"David Parker" <[EMAIL PROTECTED]> writes:
>> Is there a technical reason that the "or replace" syntax isn't 
>> supported for CREATE TRIGGER?
>
>Lack of round tuits.
>
>Does it seem like a high priority to you?  I'd think that 
>changes in the trigger function (which you *can* do with 
>CREATE OR REPLACE) would be much more common than alterations 
>in the trigger parameters per se.
>
>   regards, tom lane
>

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] limited disk space

2005-03-30 Thread David Parker



We need to run a server (7.4.5, Solaris 9/Intel) in an environment with
a defined limit on disk size. We know basically the set of data we will
be working with, and can size the disk space accordingly, but there will
be a fair amount of update churn in the data.

We are running autovacuum, but we still seem to be running out of disk
space in our long running tests. The testers claim that the disk usage
is not going down with a VACUUM FULL, but I have not verified that
independently.

Given that our "real" dataset is fairly fixed, and the growth in the
database size is due to updates, I'm wondering if there is a way that I
can allocate enough disk space at the outset to allow the database to
have a large enough "working set" of free pages so that once it reaches
a certain threshold it doesn't have to grow the database files anymore.

I'm also wondering how WAL settings may affect the disk usage. It's not
an option to place the logs on a separate device in this case, so I
imagine I want to limit the size there, too.

Is anybody running postgres in a similar constrained environment, or are
there any general tips on controlling disk usage that somebody could
point me to?
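
(For reference, the knobs I'm currently looking at - just a sketch, and
the numbers are guesses rather than a recommendation:)

    # postgresql.conf (7.4) - make the free space map big enough to
    # track the update churn so vacuumed space actually gets reused
    max_fsm_pages     = 200000
    max_fsm_relations = 1000
    # keep the WAL footprint small: on-disk WAL stays at roughly
    # (2 * checkpoint_segments + 1) segments of 16 MB each
    checkpoint_segments = 3
    checkpoint_timeout  = 300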
 
Thanks.
- DAP
David Parker    Tazz Networks    (401) 709-5130
 


[GENERAL] monitoring database activity on solaris

2005-04-06 Thread David Parker



According to the 7.4 doc section on monitoring database activity, one
should be able to see the current activity happening in a given postgres
process. It mentions that on Solaris (which we are running on) you need
to use /usr/ucb/ps, and it also says

"your original invocation of the postmaster command must have a shorter
ps status display than that provided by each server process"

All I seem to get in my ps is the full postmaster command with all its
invocation parameters, and I don't see the postgres process itself. I
can't seem to get that full path + parameters out of the postmaster
display, if that is even the problem.

I realize that this might be more a Solaris question than a postgres
one, but we have a recurring situation where we have a lot of processes
chewing up cpu long after all clients have gone away, and that process
display would help a great deal! I'd be grateful to anybody running
Solaris for any advice.
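
(In the meantime, the statistics views give at least part of the same
picture from inside the database - a sketch, and it assumes
stats_command_string = true is set in postgresql.conf:)

    SELECT procpid, usename, current_query, query_start
      FROM pg_stat_activity;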
 
Thanks.
- DAP
David Parker    Tazz Networks    (401) 709-5130
 


Re: [GENERAL] monitoring database activity on solaris

2005-04-06 Thread David Parker
OK, thanks. We're using pg_ctl to start it at the moment, but we can
obviously change that.
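
(Roughly what the start script will become, following the advice below -
a sketch; the paths and the PGDATA location are made up:)

    # keep the postmaster command line as short as possible; everything
    # else comes from the environment and postgresql.conf
    PGDATA=/export/pgdata
    PATH=/usr/local/pgsql/bin:$PATH
    export PGDATA PATH
    postmaster < /dev/null >> $PGDATA/server.log 2>&1 &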

- DAP 

>-Original Message-
>From: Tom Lane [mailto:[EMAIL PROTECTED] 
>Sent: Wednesday, April 06, 2005 6:18 PM
>To: David Parker
>Cc: postgres general
>Subject: Re: [GENERAL] monitoring database activity on solaris 
>
>"David Parker" <[EMAIL PROTECTED]> writes:
>> According to the 7.4 doc section on monitoring database 
>activity, one 
>> should be able to see the current activity happening in a given 
>> postgres process. It mentions that on Solaris (which we are running 
>> on) you need to use /usr/ucb/ps, and it also says
>> 
>> " your original invocation of the postmaster command must have a 
>> shorter ps status display than that provided by each server process "
>
>Yeah.  This is a Solaris peculiarity that we only figured out 
>by chance.
>The best bet is to make sure that your postmaster start script 
>invokes the postmaster as
>   postmaster
>no more.  No path (set PATH beforehand instead).  No 
>command-line switches (whatever you might want there can be 
>put into postgresql.conf instead).  Redirection is OK, but 
>keep the command line itself to a minimum.  "postmaster" and 
>nothing else will be shorter than any of the ps display 
>settings the postmaster's children use ... but if you have a 
>bunch of command-line arguments, it'll be longer, and Solaris' 
>ps will choose to display the wrong thing.
>
>   regards, tom lane
>

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] WAL on a RAM disk

2005-04-07 Thread David Parker



As part of our application we are running a postgres server on a RAM
disk. All of the data stored in this database is obviously disposable,
and we need to optimize access as much as possible. This is on Solaris
9/intel, postgres 7.4.5. Some things I'm wondering about:

1) is it possible to turn off WAL entirely? If not, what configuration
of WAL/checkpoint settings would result in minimal WAL activity?

2) Given that the whole database is in memory, does it make sense to set
random_page_cost to a high value... or, um... a low value? (Does it show
that I don't understand this parameter?)

3) Any other settings I should be looking at? (See the sketch below for
where I'm starting from.)
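
(A sketch only - I'm not at all sure these are the right values:)

    # postgresql.conf sketch for a disposable, all-in-RAM database (7.4)
    fsync = false              # data is throwaway, so skip the WAL flushes
    checkpoint_segments = 1    # keep the WAL footprint as small as possible
    checkpoint_timeout  = 3600
    random_page_cost    = 1.5  # random reads cost about the same as
                               # sequential ones when everything is in RAM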
 
Thanks. I don't know if anybody else has done this, but I'd be
interested to hear about it, if so.
- DAP
David Parker    Tazz Networks    (401) 709-5130
 


[GENERAL] client interfaces

2005-04-12 Thread David Parker



Is there anything like a client library for postgres that does not use
tcp/ip as the protocol? As part of the performance tuning of our
application, the question was posed to me whether there was a more
"direct" sort of API that would not require going through the tcp/ip
stack. I assume the answer to this is "no", and I see no hint of
anything in the source code, but I thought I would ask the question,
just to be sure.
 
Thanks.
- DAP
David Parker    Tazz Networks    (401) 709-5130
 


[GENERAL] debug levels in server

2005-04-21 Thread David Parker



I'm trying to track down a hang in a slony "lock set" command, which
creates triggers on a set of tables. I assume that it's getting blocked
by a lock on a table somewhere, and I'd like to print out all the
"create trigger" statements in the postgres log.

I have log_min_messages = DEBUG5, and I'm seeing a lot of logging, but I
don't see the create trigger statements showing up in the log. What do I
need to set to get this output?
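
(What I have now, plus the statement-logging setting I suspect I'm
missing - a sketch, not yet verified:)

    log_min_messages = debug5
    log_statement    = true    # boolean in 7.4; on 8.0 the equivalent
                               # setting would be 'all'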
Thanks.
- DAP
David Parker    Tazz Networks    (401) 709-5130
 


[GENERAL] rollback vs. commit for closing read-only transaction

2005-04-25 Thread David Parker



If an application transaction is known to be read-only, is there any
reason to prefer COMMIT or ROLLBACK for closing that transaction? Would
there be any performance difference between the two commands?
- DAP
David Parker    Tazz Networks    (401) 709-5130
 


[GENERAL] explicit cursor vs. for loop in pl/pgsql

2005-05-17 Thread David Parker



I need to process a large table a few "chunks" at a time, committing in
between chunks so that another process can pick up and start processing
the data.

I am using a pl/pgsql procedure with a "FOR rec IN SELECT * FROM tab
ORDER BY ..." statement. The chunksize is passed in to the procedure,
and in the FOR loop I iterate until I reach chunksize. The procedure
then returns and the calling code issues the commit, etc.

I know from the documentation that the FOR implicitly opens a cursor,
but I'm wondering if there would be any performance advantages to
explicitly declaring a cursor and moving through it with FETCH commands?

I have to use the ORDER BY, so I imagine I'm taking the hit of
processing all the records in the table anyway, regardless of how many I
ultimately fetch. The nature of the data is that chunksize doesn't
necessarily match up one-for-one with rows, so I can't use it as a LIMIT
value.

The table in question gets inserted into pretty heavily, and my
procedure processes rows then deletes those it has processed. My main
concern is to keep the processing fairly smooth, i.e., not have it choke
on a select when the table gets huge.
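
(Roughly what the procedure looks like now - a simplified sketch; the
table and column names here are made-up stand-ins:)

    CREATE OR REPLACE FUNCTION process_chunk(integer) RETURNS integer AS '
    DECLARE
        chunksize ALIAS FOR $1;
        rec       RECORD;
        done      integer := 0;
    BEGIN
        FOR rec IN SELECT * FROM work_queue ORDER BY txn_id LOOP
            -- process rec here, then remove it so the next pass skips it
            DELETE FROM work_queue WHERE id = rec.id;
            done := done + 1;
            EXIT WHEN done >= chunksize;
        END LOOP;
        RETURN done;  -- caller commits and decides whether to call again
    END;
    ' LANGUAGE plpgsql;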
 
Any suggestions appreciated!
- DAP
David Parker    Tazz Networks    (401) 709-5130
 


Re: [GENERAL] explicit cursor vs. for loop in pl/pgsql

2005-05-17 Thread David Parker
Thanks for the info. I've got an index, so I guess it's as good as it
gets!

The data is actually copied over from the slony transaction log table,
and there's no way to know how many statements (=rows) there might be
for any given transaction, so assigning an arbitrary limit seems too
risky, and I don't want to take the hit of a select count. 

Thanks again.

- DAP

>> The nature of the data is that chunksize doesn't necessarily 
>match up 
>> one-for-one with rows, so I can't use it as a LIMIT value.
>
>Can you set an upper bound on how many rows you need?  If you 
>can put a LIMIT into the select, it'll encourage the planner 
>to use an indexscan, even if you break out of the loop before 
>the limit is reached.
>
>   regards, tom lane
>

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] ceci n'est pas un btree

2005-05-18 Thread David Parker



As part of a nightly build process we are building an 8.0.1 database,
creating a schema, etc. Intermittently we are getting the error

    ERROR:  index "xyz" is not a btree

In the archives I found a post from Tom
(http://archives.postgresql.org/pgsql-admin/2005-05/msg00103.php) that
suggested this was a bad pg build or a flaky hard drive. I will look
into the hardware issue, but I was wondering in what way I could have
screwed up the build of postgres to get this kind of error? I see where
the error is being emitted in the _bt_getroot and _bt_gettrueroot
methods, but I don't see what would be configure/build sensitive in
there.

Incidentally, when I try to search the archives on "is not a btree", I
get all hits for btree, presumably because all of those other "little"
words in the phrase are stop-listed? Is there an escape sequence I can
use to get the search to work for only that whole phrase? (I realize
this is probably basic google stuff, but I was born before the web.)
 
Thanks.
- DAP
David Parker    Tazz Networks    (401) 709-5130
 


[GENERAL] pg_listener records

2005-05-26 Thread David Parker



In failover testing we have been doing recently (postgres 7.4.5 w/
slony 1.0.2) we have seen several times that when the database comes
back up after a power failure it still has old pg_listener records
hanging around from its previous life. This causes some problems with
slony, but of course it is easy enough to implement a procedure to clean
those records out, which we have done.
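
(For the record, the cleanup is nothing fancy - roughly the following,
run as a superuser before the slon daemons reconnect; sketch only:)

    -- stale LISTEN registrations left over from before the crash
    SELECT relname, listenerpid FROM pg_listener;
    DELETE FROM pg_listener;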
 
But I'm wondering - shouldn't that be part of normal server startup,
cleaning out the pg_listener table? Or has this been addressed in 8.X?
Or is there a reason this isn't a good idea?
- DAP
David Parker    Tazz Networks    (401) 709-5130
 


Re: [GENERAL] pg_listener records

2005-05-26 Thread David Parker
Thanks. Yeah, I know slony 1.0.5 cleans up after itself, and is better
in general, and I want to get there, but upgrading is not an option at
the moment, unfortunately. Same for postgres 8. 

But it still seems like this is something the server itself should be
taking care of, not a client process.

- DAP

>-Original Message-
>From: Scott Marlowe [mailto:[EMAIL PROTECTED] 
>Sent: Thursday, May 26, 2005 2:51 PM
>To: David Parker
>Cc: postgres general
>Subject: Re: [GENERAL] pg_listener records
>
>On Thu, 2005-05-26 at 13:41, David Parker wrote:
>> In failover testing we have been doing recently (postgres 7.4.5 w/ 
>> slony 1.0.2) we have seen several times when the database comes back 
>> up after a power failure it still has old pg_listener 
>records hanging 
>> around from its previous life. This causes some problems with slony, 
>> but of course it is easy enough to implement a procedure to clean 
>> those records out, which we have done.
>>  
>> But I'm wondering - shouldn't that be part of normal server startup, 
>> cleaning out the pg_listener table? Or has this been addressed in 
>> 8.X.? Or is there a reason this isn't a good idea?
>
>You should really be running the latest version of slony, 
>1.0.5.  There were plenty of little niggling bugs in the 
>earlier version that have been fixed.  I'd upgrade postgresql 
>while I was at it too, but slony DEFINITELY needs to be the 
>latest version.
>
>I'm pretty sure the problem you speak of was in fact fixed in 
>later versions, btw.
>

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] another failover testing question

2005-05-26 Thread David Parker



Something that we end up doing sometimes in our failover testing is
removing slony replication from an "active" (data provider) server.
Because this involves removing triggers from tables, we end up with
currently connected clients getting a bunch of "OID 123 not found"
errors, where the OID is that of the recently removed trigger.

Is there any way short of cycling all client connections to have the
server processes clean that information out of their cache when an
object disappears like this from the database?

(I'm posting here rather than the slony list because it seems like a
general question.)
 
Thanks.
- DAP
David Parker    Tazz Networks    (401) 709-5130
 


Re: [GENERAL] another failover testing question

2005-05-26 Thread David Parker
Sorry, neglected the version yet again: 7.4.5. What happens is that we
have active connections accessing tables that are being replicated by
slony. Then somebody does an uninstall of slony, which removes the slony
trigger from those tables. Then we start getting the OID error. 

If this should indeed not be an issue in 7.4.5, I will try to come up
with a test case independent of a slony install.

Thanks.

- DAP 

>-Original Message-
>From: Tom Lane [mailto:[EMAIL PROTECTED] 
>Sent: Thursday, May 26, 2005 4:30 PM
>To: David Parker
>Cc: postgres general
>Subject: Re: [GENERAL] another failover testing question 
>
>"David Parker" <[EMAIL PROTECTED]> writes:
>> Something that we end up doing sometimes in our failover testing is 
>> removing slony replication from an "active" (data provider) server.
>> Because this involves removing triggers from tables, we end up with 
>> currently connected clients getting a bunch of "OID 123 not found"
>> errors, where the OID is that of the recently removed trigger.
>
>> Is there any way short of cycling all client connections to have the 
>> server processes clean that information out of their cache when an 
>> object disappears like this from the database?
>
>AFAICS, there already *is* adequate interlocking for this.  
>What PG version are you testing, and can you provide a 
>self-contained test case?
>
>   regards, tom lane
>

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] another failover testing question

2005-05-27 Thread David Parker
>It should not be ... at least, assuming that Slony is using 
>the standard DROP TRIGGER operation, rather than playing 
>directly with the system catalogs ...

AFAICS, the slony uninstall command is not doing anything exotic, though
it DOES do a little bit of fiddling with pg_catalog to RESTORE
previously disabled triggers. Otherwise it is using plain vanilla drop
trigger.

I found a slony list thread from a few months ago that discussed this
issue: http://archives.postgresql.org/pgsql-general/2005-02/msg00813.php

The discussion there centered around cached plans causing the "no
relation with OID" problem. The area of our code that experiences these
problems is calling libpq - we have a wrapper for it that plugs into our
Tcl environment - but it is not using prepared statements, and the
commands it is executing are not calls to stored procedures, etc.

I cannot repro this problem simply using psql, so it must have something
to do with the way we are using libpq, but I have no idea what object(s)
we are holding onto that reference slony OIDs.

- DAP

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] another failover testing question

2005-05-27 Thread David Parker
I know better what is happening now. I had the scenario slightly wrong.

Slony creates a trigger on all replicated tables that calls into a
shared library. The _Slony_I_logTrigger method in this library
establishes a saved plan for inserts into its transaction log table
sl_log_1. I can create the missing OID error with:

1) configure replication
2) establish a client connection, perform operations on replicated
tables
3) remove replication (drops sl_log_1 table)
4) operations on replicated tables on client connection are still fine
5) re-configure replication (re-creates sl_log_1 table)
6) now the OID error appears in the client connection. The OID refers
to the previous version of the sl_log_1 table

I was pawing through our code to figure out where we might be saving a
prepared statement, and was forgetting that the slony1_funcs library
does this. This saved plan is executed with SPI_execp, and the
documentation states:

"If one of the objects (a table, function, etc.) referenced by the
prepared plan is dropped during the session then the results of
SPI_execp for this plan will be unpredictable."

I'm pretty sure I understand the problem now (corrections appreciated),
but I'm left with the operational question of how I get around this
issue. Is there any way short of PQreset to get a postgres process to
refresh its saved plans? I can generally avoid the
drop-replication/re-configure replication thing happening in our
procedures, but I can't prevent it completely.

- DAP

>> Sorry, neglected the version yet again: 7.4.5. What happens 
>is that we 
>> have active connections accessing tables that are being 
>replicated by 
>> slony. Then somebody does an uninstall of slony, which removes the 
>> slony trigger from those tables. Then we start getting the OID error.
>> If this should indeed not be an issue in 7.4.5, I will try 
>to come up 
>> with a test case independent of a slony install.
>
>It should not be ... at least, assuming that Slony is using 
>the standard DROP TRIGGER operation, rather than playing 
>directly with the system catalogs ...
>
>   regards, tom lane
>

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] SQL call to get pid of current connection

2005-06-02 Thread David Parker



Is there a function call that will return the pid of the postgres
process associated with the current client connection?
- DAP
David Parker    Tazz Networks    (401) 709-5130
 


Re: [GENERAL] SQL call to get pid of current connection

2005-06-02 Thread David Parker
Yeah, simple enough. I just wanted to make sure I wasn't duplicating
something that was already there.

Thanks.

- DAP 

>-Original Message-
>From: Bruno Wolff III [mailto:[EMAIL PROTECTED] 
>Sent: Thursday, June 02, 2005 6:02 PM
>To: David Parker
>Cc: postgres general
>Subject: Re: SQL call to get pid of current connection
>
>On Thu, Jun 02, 2005 at 17:04:22 -0400,
>  David Parker <[EMAIL PROTECTED]> wrote:
>> Is there a function call that will return the pid of the postgres 
>> process associated with the current client connection?
>
>I thought I remembered seeing one, but I looked through the 
>development docs and didn't see a function or a GUC variable 
>with that information.
>It wouldn't be too hard to write a C or Perl function to get 
>that information.
>

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] SQL call to get pid of current connection

2005-06-02 Thread David Parker
That certainly looks like it! Thanks!
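
(For the archives, the quick check is just:

    SELECT pg_backend_pid();

which should return the same pid that ps shows for this connection's
backend.)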

- DAP 

>-Original Message-
>From: Michael Fuhr [mailto:[EMAIL PROTECTED] 
>Sent: Thursday, June 02, 2005 8:47 PM
>To: Bruno Wolff III; David Parker; postgres general
>Subject: Re: [GENERAL] SQL call to get pid of current connection
>
>On Thu, Jun 02, 2005 at 05:02:14PM -0500, Bruno Wolff III wrote:
>> On Thu, Jun 02, 2005 at 17:04:22 -0400,
>>   David Parker <[EMAIL PROTECTED]> wrote:
>> > Is there a function call that will return the pid of the postgres 
>> > process associated with the current client connection?
>> 
>> I thought I remembered seeing one, but I looked through the 
>> development docs and didn't see a function or a GUC variable 
>with that information.
>
>Are you looking for pg_backend_pid()?  It's documented in the 
>"Statistics Collector" section of the "Monitoring Database Activity"
>chapter; it's been around since 7.3:
>
>http://www.postgresql.org/docs/7.3/interactive/release-7-3.html
>http://www.postgresql.org/docs/7.3/interactive/monitoring-stats.html
>http://www.postgresql.org/docs/7.4/interactive/monitoring-stats.html
>http://www.postgresql.org/docs/8.0/interactive/monitoring-stats.html
>
>--
>Michael Fuhr
>http://www.fuhr.org/~mfuhr/
>

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] index xyz is not a btree

2005-06-10 Thread David Parker



We are getting the "index xyz is not a btree" error pretty frequently
in our build system (in which we build several databases). This is with
8.0.1 on Solaris 9/intel. During a database build we create the
database, import metadata, import some application data, then finally
run a vacuum on it.

The error only happens intermittently, but every occurrence is
consistent:

    1) always the same index - "attribute_pkey" on our "attirbute"
       table - one created implicitly by a primary key constraint
    2) the error happens during a vacuum command

From what I have found in the archives the consensus is that this error
comes from a bad disk or a bad build of postgres, but I don't know what
exactly I would have to screw up in configure to get a "bad build" of
postgres (everything seems to be working fine with it, otherwise).

When this occurs we can generally just re-run the build, and it finishes
fine, so maybe it is some kind of disk weirdness, but I have not seen
this with any of our 7.4.5 builds (which may or may not mean anything).

Any suggestions about where to look in the source code for clues or
specific debugging I can turn on would be appreciated. Thanks!
- DAP
David Parker    Tazz Networks    (401) 709-5130
 


Re: [GENERAL] index xyz is not a btree

2005-06-10 Thread David Parker
No, I haven't tried that - I wasn't aware of pg_filedump. What's tricky
is that when the build detects a failure, it deletes the output
directory (so we don't get left with a half-built db), so I'll have to
fiddle with it.

I just downloaded pg_filedump from the redhat site, so I'll play around
with it, and report back if/when I find anything.
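
(Roughly what I plan to look at once I can keep the files around - a
sketch; the database name and the oid placeholders are made up:)

    # find the index's file on disk
    psql -t -c "SELECT relfilenode FROM pg_class \
                WHERE relname = 'attribute_pkey'" tazz
    # then look at the first block of $PGDATA/base/$DBOID/$RELFILENODE
    # with pg_filedump, or just eyeball the metapage with od:
    od -A d -x $PGDATA/base/$DBOID/$RELFILENODE | head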

Thanks.

- DAP


>> 1) always the same index - "attribute_pkey" on our "attirbute" 
>> table
>> - one created implicitly by a primary key constraint
>> 2) the error happens during a vacuum command
>
>Hmm ... that seems a bit too consistent for a hardware issue 
>doesn't it?
>
>Have you looked at the index file with pg_filedump, or even just od?
>
>   regards, tom lane
>

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[GENERAL] startup time

2005-06-21 Thread David Parker



We have a script process that needs to start a database, execute some
SQL statements, then stop the database. This is done as part of an
application reset, in which we are clearing out database data as part of
the process. This is with 7.4.5 on Solaris 9/intel.

The problem we are having is that in a customer installation, the
startup on the database is taking significantly longer than we have ever
seen it take before. The script needs to be fixed because it has a
hard-coded wait interval value (looping over a "pg_ctl status"), and
it's not stopping properly, so we end up trying to make a connection
before the database is up, getting the "FATAL: database is starting up"
error.

But what I'm curious about is what set of things have to happen between
startup and the server being ready to accept requests. This happens on a
fresh install, so I don't *think* it should be recovery processing. It's
difficult to diagnose because I don't have access to the installation.

If somebody could point me to the area of code that handles startup, a
link to documentation, and/or a bullet list of things that get done in
this startup window, it would give me a starting point for asking
questions of our field people about the installation environment.
 
Thanks.
- DAP
David Parker    Tazz Networks    (401) 709-5130
 


Re: [GENERAL] startup time

2005-06-21 Thread David Parker
>> The problem we are having is that in a customer installation, the 
>> startup on the database is taking significantly longer than we have 
>> ever seen it take before.
>
>Are we talking seconds, minutes, hours, days?

It's in the seconds range, I think, probably not more than a minute, but
I don't have access to specific times from the installation. The root
problem is that our script is not flexible enough, so that needs to
change - I just wanted to get an idea of what "normal" startup times I
should expect.

>> But what I'm curious about is what set of things have to happen 
>> between startup and the server being ready to accept requests. This 
>> happens on a fresh install, so I don't *think* it should be 
>recovery processing.
>
>If it's not doing recovery then the Postgres time proper 
>should be no more than a second or so, in my experience.  Look 
>for outside-the-database factors.  One possibility is a broken 
>DNS configuration leading to long delays in trying to resolve 
>socket addresses and such.  I've never seen such a problem 
>causing a delay over a minute though 

Thanks, I'll look into the DNS angle - that certainly seems like a
possibility.

Thanks, again.

- DAP

---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] dump/restore bytea fields

2005-06-22 Thread David Parker



In 7.4.5 I'm trying to dump and selectively load a database that
contains bytea fields. I dumped it with the command:

    pg_dump -p 24000 -Fc -f /home/dparker/temp/tazz.dmp tazz

When I try to restore it, after having generated/modified a list-file, I
get the following error:

    % pg_restore -p 24600 -a -d tazz -L ./tazz.toc /home/dparker/temp/tazz.dmp
    pg_restore: ERROR:  out of memory
    DETAIL:  Failed on request of size 8388608.
    CONTEXT:  COPY library, line 1: "137   DD93266ED40DC09A6B502DB31A254168    1.0.0   agents.so   library  t   \\177ELF\\001\\001\\..."
    pg_restore: [archiver (db)] error returned by PQendcopy

What am I missing?
 
Thanks!
- DAP
David Parker    Tazz Networks    (401) 709-5130
 


Re: [GENERAL] dump/restore bytea fields

2005-06-22 Thread David Parker



Never mind. Appears to have been local to a particular machine. I was
able to pg_restore the same dump file on another box (which I should
have tried before posting!). Sorry for the noise.
 
- DAP

  
  
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
  David Parker
  Sent: Wednesday, June 22, 2005 4:05 PM
  To: postgres general
  Subject: [GENERAL] dump/restore bytea fields

  In 7.4.5 I'm trying to dump and selectively load a database that
  contains bytea fields. I dumped it with the command:

      pg_dump -p 24000 -Fc -f /home/dparker/temp/tazz.dmp tazz

  When I try to restore it, after having generated/modified a
  list-file, I get the following error:

      % pg_restore -p 24600 -a -d tazz -L ./tazz.toc /home/dparker/temp/tazz.dmp
      pg_restore: ERROR:  out of memory
      DETAIL:  Failed on request of size 8388608.
      CONTEXT:  COPY library, line 1: "137   DD93266ED40DC09A6B502DB31A254168    1.0.0   agents.so   library  t   \\177ELF\\001\\001\\..."
      pg_restore: [archiver (db)] error returned by PQendcopy

  What am I missing?

  Thanks!
  - DAP
  David Parker    Tazz Networks    (401) 709-5130


Re: [GENERAL] startup time

2005-06-24 Thread David Parker
Does pg_ctl status return true even if the database is not ready yet to
receive requests? We are using pg_ctl status to tell us if the database
is up, but I'm wondering if it could return true, but a client could
connect and still get the "FATAL: database is starting up" error?
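
(What we will probably switch to is waiting on an actual connection
attempt rather than on pg_ctl - a sketch, with a made-up database name
and the default port/user assumed:)

    # wait until the server really accepts connections
    until psql -q -c 'SELECT 1' template1 >/dev/null 2>&1; do
        sleep 1
    done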

- DAP


>"David Parker" <[EMAIL PROTECTED]> writes:
>> The problem we are having is that in a customer installation, the 
>> startup on the database is taking significantly longer than we have 
>> ever seen it take before.
>
>Are we talking seconds, minutes, hours, days?
>
>> But what I'm curious about is what set of things have to happen 
>> between startup and the server being ready to accept requests. This 
>> happens on a fresh install, so I don't *think* it should be 
>recovery processing.
>
>If it's not doing recovery then the Postgres time proper 
>should be no more than a second or so, in my experience.  Look 
>for outside-the-database factors.  One possibility is a broken 
>DNS configuration leading to long delays in trying to resolve 
>socket addresses and such.  I've never seen such a problem 
>causing a delay over a minute though 
>
>   regards, tom lane
>

---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] pg_dump and write locks

2005-07-19 Thread David Parker



We have an issue with running pg_dump while a database is under
reasonably heavy update load. This is 7.4.5 on Solaris 9/intel.

The observed behavior was that a pg_dump running with nothing else going
on takes a couple of minutes, but when we are running some system tests
that do heavy updates to a selection of application tables, it appears
that pg_dump blocks until the update run is done. This didn't make sense
because everything I can find about pg_dump indicates that it should
only be taking read locks, so I don't see why it should be blocked.

We looked at pg_locks, and saw that the pg_dump process was acquiring
locks like:

    14764 | ExclusiveLock |   124576072 | COPY public.stats (id, description, lastsavedate, lastsaveuser) TO stdout;

(we are using pg_dump with -Fc) If COPY is taking a ROW EXCLUSIVE lock,
then that would explain why we are seeing contention, but I don't
understand why COPY is locking at that level. Is there a better way to
run pg_dump against a database with a lot of update activity?
 
Thanks.
 
- DAP
======
David Parker    Tazz Networks
 


Re: [GENERAL] pg_dump and write locks

2005-07-19 Thread David Parker
What happened was that I realized that here it was already noon on
Tuesday, and I hadn't made a fool of myself in public yet this week.

I blindly took the output from somebody else's query and didn't look at
the pg_locks table myself. Once I was able to look at it with a live
test running, everything is in order, just slow.
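
(For anyone who finds this thread later, the check for genuinely blocked
lock requests is roughly:)

    SELECT pid, relation, transaction, mode, granted
      FROM pg_locks
     WHERE NOT granted;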

Thanks for your response, and apologies for the noise.

- DAP
 

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, July 19, 2005 10:45 AM
To: David Parker
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg_dump and write locks 

"David Parker" <[EMAIL PROTECTED]> writes:
> The observed behavior was that a pg_dump running with nothing else 
> going on takes a couple of minutes, but when we are running some 
> system tests that do heavy updates to a selection of application 
> tables, it appears that pg_dump blocks until the update run is done.

Are you sure the other processes aren't taking any exclusive locks?
Are you sure your system isn't saturated to the point where pg_dump just
can't make progress very fast?

> We looked at pg_locks, and saw that the pg_dump process was acquiring 
> locks like:
> 
> 14764 | ExclusiveLock |   124576072 | COPY public.stats (id,
> description, lastsavedate, lastsaveuser) TO stdout;

It's impossible to tell what you are actually looking at here --- that's
not the raw output of pg_locks, and you've conveniently omitted any
column headers --- but I wonder whether that isn't just the
transaction's standard lock on its own XID.

If pg_dump is actually getting blocked, that will show as a row with
granted = false and pg_dump's PID.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] external function libraries on non-build machine

2004-10-09 Thread David Parker
I have built slony 1.0.2 with postgres 7.4.5. This needs to be deployed
on a system other than the one on which is was built. Somebody on our
team had earlier gotten around the problem of not being able to move a
postgres install by manually updating the pg_catalog.pg_proc table. But
it seems like this worked *after* the procs calling external libraries
were installed. With the slony stuff I can't install the procs until the
database has been deployed on its target machine, so that hack doesn't
seem to help me.

Now I'm trying to run a slonik script on the new system, and I am
getting the following error, which I assume is because $libdir no longer
references the build file system:

:3: Initializing the cluster
:5: PGRES_FATAL_ERROR load '$libdir/xxid';  - ERROR:  could not
access file "$libdir/xxid": No such file or directory
:5: Error: the extension for the xxid data type cannot be loaded
in database 'dbname=test1 host=engtst port=24005 user=slony
password=slony'
:7: Could not initialize the cluster!

I'm a little bit desperate, hence the cross-post. Is there anything I
can do on the target system to fix up the slony sql files to get them
installable in the new environment? I've tried setting the "AS
$libdir/xxid' clauses to various strings with full paths, but I don't
seem to be getting it.

Thank you for any help. Apologies for the cross-post.

-  DAP
== 
David Parker    Tazz Networks    (401) 709-5130


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[GENERAL] process hang during drop schema

2004-10-25 Thread David Parker
467 | AccessExclusiveLock | t
20503 |17142 | | 19467 | AccessExclusiveLock | t
20504 |17142 | | 19467 | AccessExclusiveLock | t
20506 |17142 | | 19467 | AccessExclusiveLock | t
20508 |17142 | | 19467 | AccessExclusiveLock | t
20510 |17142 | | 19467 | AccessExclusiveLock | t
20512 |17142 | | 19467 | AccessExclusiveLock | t
20514 |17142 | | 19467 | AccessExclusiveLock | t
  |  |1301 | 19467 | ExclusiveLock   | t
16759 |17142 | | 19472 | AccessShareLock | t
  |  |1304 | 19472 | ExclusiveLock   | t
(59 rows)

To my untrained eye, it doesn't look as though there is any lock
contention here, but I haven't dealt with postgres locking before, so I
may well be misreading it.

Is there another table I should be looking at, or another debug switch I
should be setting?
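
(One thing I plan to try next is cross-referencing any ungranted lock
requests with what those backends are doing - a sketch, and it assumes
stats_command_string is turned on:)

    SELECT l.pid, l.relation, l.mode, l.granted, a.current_query
      FROM pg_locks l
      JOIN pg_stat_activity a ON a.procpid = l.pid
     WHERE NOT l.granted;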

TIA for any suggestions.

- DAP
----------
David Parker    Tazz Networks    (401) 709-5130
 

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] disabling constraints

2004-10-28 Thread David Parker
I would like to be able to truncate all of the tables in a schema without worrying 
about FK constraints. I tried issuing a "SET CONSTRAINTS ALL DEFERRED" before 
truncating, but I still get constraint errors. Is there a way to do something like:

1) disable all constraints
2) truncate all tables
3) re-enable all constraints

?

In the slony project there is a procedure "altertableforreplication" that appears to 
do 1), but since it is updating pg_catalog tables directly, I don't want to cut/paste 
before I understand what it's doing!

Is there any "standard" way of doing this?

- DAP
----------
David Parker    Tazz Networks    (401) 709-5130
 

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] disabling constraints

2004-11-04 Thread David Parker
Hi. Thanks for responding.

I don't *think* there are circular references, but the nature of the
problem is that the code doesn't know anything about the semantics of
the schema - it just gets the list of tables, and there are multiple
schemas it needs to handle, all of which may change over time. That's
why I was hoping for some kind of global "disable constraints" command.

I guess it should be possible to generate the proper table order based
on loading up all of the constraints from the catalog. It seems like
more trouble than I want to go to for this problem, but maybe it
wouldn't be too difficult...
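
(If I do go that route, the FK edges are easy enough to pull out of the
catalog - a sketch:)

    -- each row reads: "this table references that table"
    SELECT c.conrelid::regclass  AS referencing_table,
           c.confrelid::regclass AS referenced_table
      FROM pg_constraint c
     WHERE c.contype = 'f';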

- DAP

>-Original Message-
>From: [EMAIL PROTECTED] 
>[mailto:[EMAIL PROTECTED] On Behalf Of Vivek Khera
>Sent: Thursday, November 04, 2004 3:05 PM
>To: [EMAIL PROTECTED]
>Subject: Re: [GENERAL] disabling constraints
>
>>>>>> "DP" == David Parker <[EMAIL PROTECTED]> writes:
>
>DP> I would like to be able to truncate all of the tables in a schema 
>DP> without worrying about FK constraints. I tried issuing a "SET 
>DP> CONSTRAINTS ALL DEFERRED" before truncating, but I still get 
>DP> constraint errors. Is there a way to do something like:
>
>Why don't you truncate your tables in an order that won't 
>violate FK's?  Or do you have circular references?
>
>--
>=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
>Vivek Khera, Ph.D.Khera Communications, Inc.
>Internet: [EMAIL PROTECTED]   Rockville, MD  +1-301-869-4449 x806
>AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/
>
>---(end of 
>broadcast)---
>TIP 8: explain analyze is your friend
>

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] checkpoints

2004-11-11 Thread David Parker
We are starting to ramp-up testing of our application, and we started getting 
these messages in our log file:

LOG:  checkpoints are occurring too frequently (15 seconds apart)
HINT:  Consider increasing the configuration parameter "checkpoint_segments".
LOG:  recycled transaction log file "0045"
LOG:  recycled transaction log file "0046"
LOG:  recycled transaction log file "0044"

So I gather that we should bump up the number for checkpoint_segments, but I'm 
wondering:

1) what is reasonable rule-of-thumb for what this value should be? (we are at 
the default right now)
2) should checkpoint_timeout be changed at the same time?
3) should I increase wal_buffers in conjunction with checkpoint_segments?
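
(For context, what I was planning to try, pending advice - a sketch, and
the numbers are guesses:)

    checkpoint_segments = 10    # up from the default of 3
    checkpoint_timeout  = 300   # the default; unsure whether to touch it
    wal_buffers         = 16    # up from the default of 8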

Thanks!

- DAP
------
David Parker    Tazz Networks    (401) 709-5130
 

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] logging

2004-11-11 Thread David Parker
What is the relationship between the "-d" parameter to postmaster and the 
various "log_*" directives in the postgresql.conf? Specifically, does a certain 
-d level have to be specified before certain log_* directives take effect?

- DAP
----------
David Parker    Tazz Networks    (401) 709-5130
 

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] file descriptors

2005-02-11 Thread David Parker



We have started getting the error

    FATAL:  terminating connection due to administrator command

in some of our processes. Searching in the archives, I gather that this
is caused by a SIGTERM, and might be coming from a ulimit problem.

We are running Solaris 9/Intel, and the ulimit for nofiles for the
database owner process is 256. I suspect this needs to be set to
"unlimited", which I don't think should cause a problem on Solaris (?).

Is there any rule of thumb for determining how many file descriptors a
given postmaster would use, e.g., a way to find out how many data files
exist on the server, and/or an estimate for how many of them would have
to be open at any given time? Do ALL of the data files get kept open, or
do they get opened and closed?
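
(A related knob I ran across, for anyone searching later - postgres can
be told to stay well under the per-process limit; a sketch, not
something we have tried yet:)

    max_files_per_process = 200   # keep each backend under the 256
                                  # nofiles ulimit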
Thanks.
- DAP
David Parker    Tazz Networks    (401) 709-5130
 


Re: [GENERAL] file descriptors

2005-02-11 Thread David Parker
OK, well that's good to know. You mentioned ulimit in

http://archives.postgresql.org/pgsql-bugs/2003-12/msg00080.php 

which ulimit parameters were you thinking of? That post is what set me
barking up this tree ;-) The only other thing not set to "unlimited" is
stack, which is set to 8480 for the database owner on this system (not
sure where that number came from).

Thanks.

- DAP

>-Original Message-
>From: Tom Lane [mailto:[EMAIL PROTECTED] 
>Sent: Friday, February 11, 2005 6:17 PM
>To: David Parker
>Cc: pgsql-general@postgresql.org
>Subject: Re: [GENERAL] file descriptors 
>
>"David Parker" <[EMAIL PROTECTED]> writes:
>> We have started getting the error
>>FATAL:  terminating connection due to administrator 
>command in some 
>> of our processes. Searching in the archives, I gather that this is 
>> caused by a SIGTERM, and might be coming from a ulimit problem.
>
>It is coming from a SIGTERM, but I'm not aware of any 
>platforms that respond to exceeding the ulimit open-files 
>limit by SIGTERM'ing the process.  I think you're barking up 
>the wrong tree.
>
>> We are running Solaris 9/Intel, and the ulimit for nofiles for the 
>> database owner process is 256. I suspect this needs to be set to 
>> "unlimited", which I don't think should cause a problem on 
>Solaris (?).
>
>I think it *would* cause a problem, unless Solaris can support 
>unlimited numbers of open files --- we have certainly seen PG 
>eat all available file table slots on other kernels.  I don't 
>recommend raising nofiles.
>The backends are perfectly capable of working within the 
>nofiles limit you set, and 256 seems high enough to avoid thrashing.
>
>   regards, tom lane
>

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] file descriptors

2005-02-11 Thread David Parker
It's just an individual backend. Unfortunately I don't have the postgres
log file yet, which would obviously help - I only have the application
client's log, but the people reporting the problem know they need to get
me the database log file next time it happens. I haven't been able to
repro this on my own yet.

Thanks. 

- DAP 

>-Original Message-
>From: Tom Lane [mailto:[EMAIL PROTECTED] 
>Sent: Friday, February 11, 2005 6:40 PM
>To: David Parker
>Cc: pgsql-general@postgresql.org
>Subject: Re: [GENERAL] file descriptors 
>
>"David Parker" <[EMAIL PROTECTED]> writes:
>> OK, well that's good to know. You mentioned ulimit in 
>> http://archives.postgresql.org/pgsql-bugs/2003-12/msg00080.php
>
>Well, that was a different scenario --- or at least I thought it was.
>Are you seeing unwanted shutdown of the entire database 
>cluster, or just an individual backend?  What shows up in the 
>postmaster log when this happens?
>
>   regards, tom lane
>

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [Slony1-general] Re: [GENERAL] Slony uninstall info/warning

2005-02-15 Thread David Parker
We recently ran into this as well, because in testing we had people
leaving an application running against the database while they
uninstalled slony.

I'm curious, what OIDs would be missing exactly, since the application
does not refer directly to any slony objects? Does the cached plan
"know" about the slony trigger on a given table? I don't know the extent
of information stored in plans.

Thanks, because I was just beginning to be puzzled by this!

- DAP

>-Original Message-
>From: [EMAIL PROTECTED] 
>[mailto:[EMAIL PROTECTED] On Behalf 
>Of John Sidney-Woollett
>Sent: Tuesday, February 15, 2005 7:03 AM
>To: Richard Huxton
>Cc: postgres general; [EMAIL PROTECTED]
>Subject: [Slony1-general] Re: [GENERAL] Slony uninstall info/warning
>
>Thanks for the info, Richard.
>
>I didn't think that it was a slony issue per se, but that a 
>note should be added to the slony docs warning to recycle 
>connections after making substantive changes to the schema.
>
>You're right, we use both (java) prepared statements and 
>pl/pgsql functions.
>
>The data loss aspect is not so clear cut (for us). We 
>definitely got records that failed to insert (missing sequence 
>numbers in tables) while others did insert correctly into the 
>same tables (using the same pl/pgsql functions). So we can't 
>figure out what the pattern is.
>
>Although we do have our connection pool recycle idle 
>connections - it could be that things worked when a new 
>recycled connection was used by the web app, and failed when 
>one of the "old/original" connections was used. This is 
>probably what masked the errors for us...
>
>It would be great if postgres could "recompile" pl/pgsql 
>functions whenever it found a missing object referenced within 
>the function - chances are that it would compile cleanly (in 
>this case) and then could be executed without error.
>
>something along the lines of
>
>execute function
>OK -> return result
>ERROR - OID -> recompile function, and re-execute
>   OK -> return result
>   ERROR - OID -> report error
>
>This would help get rid of the temp table in functions work 
>around having to use an execute statement.
>
>Thanks for your help and feedback.
>
>John Sidney-Woollett
>
>Richard Huxton wrote:
>
>> John Sidney-Woollett wrote:
>> 
>>> Hopefully this will prevent data loss or problems for others using 
>>> slony 1.0.5 and pg 7.4.6...
>>>
>>> We just got bitten by something we didn't foresee when completely 
>>> uninstalling a slony replication cluster from the master 
>and slave...
>>>
>>> MAKE SURE YOU STOP YOUR APPLICATION RUNNING AGAINST YOUR MASTER 
>>> DATABASE WHEN REMOVING THE WHOLE SLONY CLUSTER, or at least 
>re-cycle 
>>> all your open connections after the event!
>>>
>>> The connections appear to "remember" or refer to objects which are 
>>> removed by the uninstall node script. And you get lots of 
>errors as a 
>>> result (and possible data loss??)...
>>>
>>> Question: Why do our database objects still refer to removed slony 
>>> objects after they are removed?
>> 
>> 
>> Well, there are two areas I know cache plans/OIDs:
>>  1. Prepared statements
>>  2. plpgsql functions
>> 
>> At a guess, since you mention Java the first almost 
>certainly applies 
>> to you.
>> 
>> This isn't a slony issue so much as a cached plan issue. I'm 
>guessing 
>> the same problems would occur if you were manually changing the 
>> database schema.
>> 
>> Don't think you could get data loss (unless the application ignores 
>> errors). You will however get to see a wide range of OID 
>related errors.
>> 
>> -- 
>>   Richard Huxton
>>   Archonet Ltd
>___
>Slony1-general mailing list
>[EMAIL PROTECTED]
>http://gborg.postgresql.org/mailman/listinfo/slony1-general
>

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] building 7.4.3 on Solaris 9/Intel

2004-06-16 Thread David Parker
Thanks! 

-Original Message-
From: Bruce Momjian [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, June 15, 2004 11:49 PM
To: David Parker
Cc: [EMAIL PROTECTED]
Subject: Re: [GENERAL] building 7.4.3 on Solaris 9/Intel


OK, I have fixed this so the configure test happens with the proper
thread flags.  Patch attached, and it will be in 7.4.4.


---

David Parker wrote:
> Seems to be, which is why I originally asked about configure switches,
> but it seems like just --enable-thread-safety ought to be enough.
> 
> -Original Message-
> From: Jim Seymour [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, June 15, 2004 5:41 PM
> To: [EMAIL PROTECTED]
> Subject: Re: [GENERAL] building 7.4.3 on Solaris 9/Intel
> 
> "David Parker" <[EMAIL PROTECTED]> wrote:
> > 
> > Thanks for the response. Yup, I'm sure it's 7.4.3. 
> > 
> > From the config.log, it looks like the check for getpwuid_r test 
> > program uses -D_POSIX_PTHREAD_SEMANTICS, which turns on the POSIX 
> > version of the call, but the check for "fifth argument" test compile
> > doesn't include this define, so it fails, and the GETPWUID_R_5ARG
> > doesn't get set:
> [snip]
> 
> So it's actually a configure problem.
> 
> Jim
> 
> ---(end of 
> broadcast)---
> TIP 8: explain analyze is your friend
> 
> 
> ---(end of 
> broadcast)---
> TIP 5: Have you checked our extensive FAQ?
> 
>http://www.postgresql.org/docs/faqs/FAQ.html
> 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania
19073


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] moving an installation

2004-07-15 Thread David Parker
I need to be able to build postgres (7.4.3) with a given --prefix, but
then pack up the installation directory and deploy it in another
directory (as part of a product install, for instance).

I configured with --disable-rpath, but I'm still getting the 

   creating conversions... ERROR: could not access file
"$libdir/ascii_and_mic": No such file or directory

error when I try to run initdb from the new location. This is on Solaris
9, and I've verified that my LD_LIBRARY_PATH includes the postgres/lib
dir in the new location.

Is there a way I can get around this at runtime? I was under the
impression that --disable-rpath prevented library paths from being
compiled into the code, but maybe this "$libdir" is a different thing?

Thanks in advance for any help.

-  DAP
====== 
David Parker    Tazz Networks    (401) 709-5130
 


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] SELECT based on function result

2004-07-15 Thread David Parker
Did you try creating a view with that select statement, then just
selecting from the view? I don't know if there is a restriction on using
a function like that in a view definition or not.
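
(Something along these lines, using the names from your query below -
"leadwipe_results" is just a made-up name for the view; a sketch:)

    CREATE VIEW leadwipe_results AS
    SELECT sample_id,
           is_wipe_positive(sample_id) AS positive
      FROM tblleadwipe
     WHERE hud_building_id IS NOT NULL;

    SELECT * FROM leadwipe_results WHERE positive;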

-DAP 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Robert
Fitzpatrick
Sent: Thursday, July 15, 2004 4:31 PM
To: PostgreSQL
Subject: [GENERAL] SELECT based on function result

I have a function that tells me if a record is positive and negative
based on several field values. I use it in select statements:

ohc=> SELECT sample_id, is_wipe_positive(tblleadwipe.sample_id) AS
positive FROM tblleadwipe WHERE hud_building_id IS NOT NULL;  sample_id
| positive
---+--
73 | f
81 | t
(2 rows)

I see that I cannot change my WHERE statement to WHERE positive = 't'
because the column positive does not exist. Now I am looking for the
best way to return all records that are found positive or negative using
a query. Can anyone offer any guidance to how I can return all the
positives (or negatvies)? Or do I need to write another function that
does that?

--
Robert


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if
your
  joining column's datatypes do not match


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] moving an installation

2004-07-15 Thread David Parker
OK, I see the problem is in the share/conversion_create.sql script,
which references $libdir as part of the library path. So modifying the
script and replacing $libdir with an actual path fixes the problem,
allowing initdb to run.

But is there any way around this without actually modifying the script?
e.g., an environment variable or something? (setting the "libdir" env
var doesn't seem to do it).

Thanks.

- DAP

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of David Parker
Sent: Thursday, July 15, 2004 4:05 PM
To: [EMAIL PROTECTED]
Subject: [GENERAL] moving an installation

I need to be able to build postgres (7.4.3) with a given --prefix, but
then pack up the installation directory and deploy it in another
directory (as part of a product install, for instance).

I configured with --disable-rpath, but I'm still getting the 

   creating conversions... ERROR: could not access file
"$libdir/ascii_and_mic": No such file or directory

error when I try to run initdb from the new location. This is on Solaris
9, and I've verified that my LD_LIBRARY_PATH includes the postgres/lib
dir in the new location.

Is there a way I can get around this at runtime? I was under the
impression that --disable-rpath prevented library paths from being
compiled into the code, but maybe this "$libdir" is a different thing?

Thanks in advance for any help.

-  DAP
========== 
David Parker    Tazz Networks    (401) 709-5130
 


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] moving an installation

2004-07-15 Thread David Parker
Either

a) modifying share/conversion_create.sql to specify a full path, or
b) modifying share/conversion_create.sql to simply remove "$libdir/"
   everywhere and then setting dynamic_library_path in
   share/postgresql.conf.sample

allows initdb and createdb to run, which was my initial stumbling block.
I haven't done anything in particular with the resulting database yet.
Do you expect the installation to break in other places?

It would be nice if that $libdir just weren't in the
conversion_create.sql at all: it only represents the compiled-in
default, which the program knows anyway, and having it there prevents
the dynamic_library_path setting from having any effect - at least
that's what seems to be happening. I don't know what other effects that
variable might be having, of course.
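
(For the record, option b) amounts to something like this - a sketch,
with a made-up install prefix:)

    # strip the hard-coded prefix from the conversion functions...
    sed 's,\$libdir/,,g' share/conversion_create.sql > conv.tmp \
        && mv conv.tmp share/conversion_create.sql
    # ...and let the server find the libraries via the search path
    echo "dynamic_library_path = '/opt/ourproduct/pgsql/lib'" \
        >> share/postgresql.conf.sample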

Thanks.

- DAP


-Original Message-
From: Peter Eisentraut [mailto:[EMAIL PROTECTED] 
Sent: Thursday, July 15, 2004 5:33 PM
To: David Parker; [EMAIL PROTECTED]
Subject: Re: [GENERAL] moving an installation

David Parker wrote:
> I need to be able to build postgres (7.4.3) with a given --prefix, but

> then pack up the installation directory and deploy it in another 
> directory (as part of a product install, for instance).
>
> I configured with --disable-rpath, but I'm still getting the

This doesn't work.  There are a lot more paths compiled in than just the
rpath.  If you want to change the installation paths, you need to
rebuild from scratch.

--
Peter Eisentraut
http://developer.postgresql.org/~petere/



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[GENERAL] estimating table size

2004-07-26 Thread David Parker
Given a table, foo, created in a database but not populated, is there a
procedure that will return an estimate of the size of a given tuple in
that table? It looks like pgstattuple reports on actual table pages; I'm
looking for something that reads the lengths of each row, and knows what
the storage overhead would be (extra 4 bytes for varchar, for instance).

Alternatively, is there documentation somewhere that can give me a good
rule of thumb for this? I can add up the lengths of the columns and make
my own judgements about averages for varchar columns. The "hidden"
storage overhead is what I'm wondering about, I guess.

Thanks.

-  DAP
====== 
David Parker    Tazz Networks    (401) 709-5130
 


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] installation problem...

2004-07-29 Thread David Parker
There should be a config.log file in the directory where you ran
configure: output in that file around the error is the first place to
look.

- DAP 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jonathan Wynett
Sent: Friday, July 23, 2004 5:37 PM
To: '[EMAIL PROTECTED]'
Subject: [GENERAL] installation problem...

Hi:

I'm installing postgresql 7.4.3 on Solaris and during ./configure I get
the following error:

checking types of arguments for accept()... configure: error: could not
determine argument types

Any ideas?

Jon Wynett
Senior Java Developer
Research Systems, Inc.
303-413-3985
[EMAIL PROTECTED]


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if
your
  joining column's datatypes do not match


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[GENERAL] trash talk

2004-08-03 Thread David Parker
In an article about IBM's releasing Cloudscape to Apache, 

http://www.eweek.com/article2/0,1759,1630856,00.asp

eWeek says:

"The developer community for Cloudscape now consists of about 80 IBM
developers, Rivot said. IBM of course anticipates that population will
explode when the open-source community gets its hand on the code, but
just because a product goes open source doesn't mean it will succeed, as
can be witnessed by the failure of the PostgreSQL database to thrive
under this model."

It's amazing how much list activity there is for this "dead" database!
;-)

-  DAP
====== 
David Parker    Tazz Networks    (401) 709-5130
 


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[GENERAL] To OID or not to OID

2004-09-28 Thread David Parker
I've been looking through the archives for guidance on whether or not we
should create our tables WITHOUT OIDS. I've seen some discussions about
making WITHOUT the default for user tables, and seen some caveats about
the oid int value wrapping, etc., but I haven't been able to find a
discussion of the relative merits (I'm sure there's been one).

Is there any concrete reason NOT to create tables WITHOUT OIDS? We are
going to have some very large tables in our app, so saving the space and
not worrying about int-wrap seems like a good idea, but I'm worried that
I'm missing something.
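
(In other words, we would declare everything roughly like this - a
trivial sketch with a made-up table:)

    CREATE TABLE event_log (
        id      bigserial PRIMARY KEY,
        payload text
    ) WITHOUT OIDS;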

-  DAP
====== 
David Parker    Tazz Networks    (401) 709-5130
 

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[GENERAL] mirroring/replication

2004-10-04 Thread David Parker
Is anybody using contrib/dbmirror in a production environment?

-  DAP
== 
David Parker    Tazz Networks    (401) 709-5130
 

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match