Re: [ADMIN] creating user

2006-04-10 Thread kevin
On Monday 10 April 2006 07:18, sandhya wrote:
> Hi...
>  I want to know more about ALTER USER / creating users. I am facing an
> error if I give a user the LOGIN/NOLOGIN option. Why is that? It works
> fine for CREATEDB/NOCREATEDB, etc.
>
> sample=# ALTER user sandhya with LOGIN;
> ERROR:  syntax error at or near "LOGIN" at character 25
> LINE 1: ALTER user sandhya with LOGIN;
>
> Please explain this to me, or suggest a manual where I can find more
> about it. I have gone through the postgres site.
>
> -Sandhya

My experience creating users has always been from the command line. Not sure
if this is what you're asking about, but just in case...

From the command line, as the postgres user (or another user with privileges
to create users):

Usage:
  createuser [OPTION]... [USERNAME]

Options:
  -a, --adduser             user can add new users
  -A, --no-adduser          user cannot add new users
  -d, --createdb            user can create new databases
  -D, --no-createdb         user cannot create databases
  -P, --pwprompt            assign a password to new user
  -E, --encrypted           encrypt stored password
  -N, --unencrypted         do not encrypt stored password
  -i, --sysid=SYSID         select sysid for new user
  -e, --echo                show the commands being sent to the server
  -q, --quiet               don't write any messages
  --help                    show this help, then exit
  --version                 output version information, then exit

Connection options:
  -h, --host=HOSTNAME       database server host or socket directory
  -p, --port=PORT           database server port
  -U, --username=USERNAME   user name to connect as (not the one to create)
  -W, --password            prompt for password to connect

If one of -a, -A, -d, -D, and USERNAME is not specified, you will
be prompted interactively.
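
As for the ALTER USER error itself: the LOGIN/NOLOGIN attributes only exist
from 8.1 onward, when users and groups became roles, so a pre-8.1 server
rejects them with exactly the syntax error shown above. A minimal sketch,
assuming an 8.1 or later server:

    -- LOGIN/NOLOGIN are role attributes introduced in 8.1; ALTER USER is
    -- an alias for ALTER ROLE there, so either spelling works.
    ALTER ROLE sandhya LOGIN;    -- allow the role to log in
    ALTER ROLE sandhya NOLOGIN;  -- revoke the login attribute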



[ADMIN] oralink documentation

2006-06-06 Thread kevin
I pulled down the oralink contrib module from pgFoundry, but it has no docs.
Does anyone know of any docs/help/etc. for this contrib module?




[ADMIN] moving databases

1998-06-22 Thread Kevin Heflin

What would be the easiest way to move databases from one PostgreSQL
server to another?

Kevin
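
For reference, the usual answer is pg_dump piped straight into the new
server; a minimal sketch, with hypothetical host and database names:

    # dump one database and load it into the other server
    pg_dump -h oldhost mydb | psql -h newhost mydb

    # or move everything, users included, with pg_dumpall
    pg_dumpall -h oldhost | psql -h newhost template1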




[ADMIN] Re: [GENERAL] date & time

1999-03-06 Thread Kevin Heflin

On Sun, 7 Mar 1999, hoelc wrote:

> Hello,
> Can someone please tell me how to set the date & time in the PostgreSQL
> system?
> I try to use date('now') and time('now') to keep track of the date and
> time when the data is inserted or updated.  When I use the SQL "insert into
> table (data1, date_chg, time_chg) values ('abc', date('now'), time('now'))"
> to insert the date and time data, the data is successfully inserted, but
> when I retrieve the data, it shows that the date and time are always
> "01-01-2000" and "08:00:00" -- it is of course not year 2000 now, and the
> time is also not 8 o'clock.  Why?  How should I correct this?
> I am using PostgreSQL in Linux system, and the date & time for Linux
> system are correct.


I'm not really sure, but I would try CURRENT_DATE or CURRENT_TIMESTAMP?

Kevin
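
A minimal sketch of the usual approach (table and column names follow the
question; hedged, since the old 6.x releases differ in detail):

    -- evaluate the date and time at insert time
    insert into mytable (data1, date_chg, time_chg)
    values ('abc', CURRENT_DATE, CURRENT_TIME);

    -- or let the table fill them in automatically
    create table mytable (
        data1    text,
        date_chg date default CURRENT_DATE,
        time_chg time default CURRENT_TIME
    );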





Kevin Heflin  | ShreveNet, Inc.  | Ph:318.222.2638 x103
VP/Mac Tech   | 333 Texas St #619| FAX:318.221.6612
[EMAIL PROTECTED]| Shreveport, LA 71101 | http://www.shreve.net





[ADMIN] Problems with RedHat 5.2 kernel 2.0.36 and Postgres 6.4.2

1999-05-28 Thread Kevin Meldorf

It seems that every couple of days Postgres crashes
and it cannot be restarted even after doing an IPC clean.
Here is the error message:

[root@nowplay init.d]# ./postgresql start
Starting postgresql service: bash: /root/.bashrc: Permission denied
IpcMemoryCreate: shmget failed (Identifier removed) key=5432010,
size=24588, permission=700
IpcMemoryIdGet: shmget failed (Identifier removed) key=5432010,
size=24588, permission=0
IpcMemoryAttach: shmat failed (Invalid argument) id=-2
FATAL 1:  AttachSLockMemory: could not attach segment
postmaster []


The box is an HP with a 450MHz PII and 256MB of memory.

Any ideas?
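
For reference, a hedged sketch of clearing stale SysV IPC segments by hand
when ipcclean doesn't manage it (the key 5432010 comes from the error above;
the id argument comes from your own ipcs output):

    # list shared memory segments and semaphores left behind
    ipcs -m
    ipcs -s
    # remove a stale segment by its id, then restart the postmaster
    ipcrm shm <id>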



-
Sincerely,

Kevin Meldorf
Magnitude Network
1525 West Homer #202
Chicago IL 60622
phn 773.645.3210 x286
fax 773.645.3211




Re: [ADMIN] postgresql on win32

2000-01-03 Thread Kevin Lo

Bohdan KRAVCHUK wrote:

> Hello:

Hi, there,

> Is there anyone who runs postgresql on the win32 platform? If so, I would
> very much like to get in touch with that person.
>
> The reason: I need help with installing postgres. I tried compiling both
> with VC++ and cygwin - both times unsuccessfully.
>
> Somebody more successful than me, please turn up!

You can run PostgreSQL with cygwin on Windows NT. The binary
is available at my ftp site:

ftp://203.79.167.135/pub/postgres-nt-binaries.tar.gz

> Bohdan
>
> 
> Bohdan KRAVCHUK
> Wolfson College
> Barton Road, Cambridge CB3 9BB, ENGLAND
> Email: [EMAIL PROTECTED]
> ++++

- Kevin







Re: [ADMIN] postgresql on Win 2k??????

2000-11-14 Thread Kevin Lo

Bryan Bateman wrote:

> Is it possible???

Yes, why not? :-)

Please see http://people.freebsd.org/~kevlo/postgres/portNT.html

- Kevin




[ADMIN] Granting Permissions to User To Access Database

2001-01-06 Thread Kevin Schachter

I recently installed Postgres on my server to get rid of the horrible
limitations of MySQL. However, I am having some trouble setting up
permissions as they are in MySQL.

In MySQL you can grant a user select permissions on all tables in a database
with the command "GRANT select on db.* to .". Is there a similar way to
accomplish this in Postgres? Reading up on GRANT, I see that it only works
for individual tables and not a full database.

As well, how can I restrict a user to only one database? In pg_hba.conf
there is nothing that specifies the user name. I can grant access to the
database to all users on an IP, but I can't grant it to only one user.

Thanks in advance,

Kevin
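
For reference, two hedged sketches. GRANT in Postgres of that era is indeed
per-table, so the usual workaround is a shell loop over the catalog (the
database and user names here are placeholders):

    # grant SELECT on every user table in db to someuser
    for t in $(psql -t -A -d db -c \
        "select tablename from pg_tables where tablename not like 'pg_%'")
    do
        psql -d db -c "GRANT SELECT ON \"$t\" TO someuser"
    done

Restricting a user to a single database in pg_hba.conf only became possible
once a user column was added to that file (in 7.3, if memory serves); before
that, the database column is the only per-line filter.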





[ADMIN] vacuumdb -v output

2005-04-21 Thread Kevin Copley



Hi,
I've just put a system into production in which 
some tables are updated frequently - several times per second.
I'm doing a nightly vacuumdb -v, but am not sure if 
it's achieving anything. Here's the output for one table:
 

INFO:  vacuuming "public.fip_track_circuit"
INFO:  index "fip_track_circuit_pk" now contains 1557427 row versions in 4538 pages
DETAIL:  10 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.22s/0.14u sec elapsed 6.51 sec.
INFO:  "fip_track_circuit": removed 10 row versions in 9 pages
DETAIL:  CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "fip_track_circuit": found 10 removable, 1557427 nonremovable row versions in 14305 pages
DETAIL:  1555321 dead row versions cannot be removed yet.
There were 1 unused item pointers.
0 pages are entirely empty.
CPU 0.42s/0.24u sec elapsed 6.82 sec.
INFO:  vacuuming "public.fip_xl_switch"

 
I'm concerned about "1555321 dead row versions cannot be removed yet".
 
I'd be extremely grateful if anyone could cast some light on this.
Thanks, Kevin
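
For what it's worth, dead rows that "cannot be removed yet" are normally
held back by an old transaction that is still open somewhere. A hedged way
to look for one (assuming stats_command_string is enabled, and 7.4/8.0-era
column names):

    SELECT procpid, usename, current_query, query_start
    FROM pg_stat_activity
    WHERE current_query = '<IDLE> in transaction'
    ORDER BY query_start;

If a connection has sat "idle in transaction" since before the vacuum ran,
committing or closing it should let the next vacuum reclaim those rows.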
 


[ADMIN] Question regarding blocking locks

2005-08-25 Thread Kevin Keith
I have a question regarding blocking locks in the pg database. I ran into a
process which terminated abnormally, and to fully clear the locks it left
behind I had to reboot the system (probably restarting the postmaster would
have had the same effect). This was a personal development system, so
rebooting it was no big deal. I would like to know what other options I
have, so that if this were to occur in a production environment in the
future I would have a less drastic measure to take to resolve the issue.


I saw the locks in the pg_locks view (the mode was ExclusiveLock).

Can someone point me in the right direction to addressing such a problem 
should it occur in the future?


Thanks,

Kevin
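
For reference, a hedged sketch for mapping a stuck lock back to a backend
process (pg_locks columns as of 7.4; the regclass cast needs 7.3 or later):

    -- who is waiting, and on what relation
    SELECT pid, relation::regclass, mode, granted
    FROM pg_locks
    WHERE NOT granted;

The pid can then be matched against ps output to identify the backend that
is still serving the dead client.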






Re: [ADMIN] Question regarding blocking locks

2005-08-26 Thread Kevin Keith

The version of Postgres is 7.4.

When I said the process had terminated abnormally - what I meant was a UNIX 
process (C program) had opened a database connection, run some updates 
without a COMMIT and then exited without closing the connection to the 
database or committing the transactions.


From what I see below, could I assume that the best method is to kill the
offending process in UNIX, and then the postmaster should recognize there
was a problem and restart the database. Correct?


Thanks,

Kevin
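
For reference, a hedged sketch of the signal-based options (<pid> is the
backend's process id, e.g. from pg_locks or ps; avoid kill -9, especially
on the postmaster itself):

    # cancel the backend's current query, leaving the connection up
    kill -INT <pid>
    # or terminate that one backend; the postmaster cleans up after it
    kill -TERM <pid>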


From: Tom Lane <[EMAIL PROTECTED]>
Subject: Re: [ADMIN] Question regarding blocking locks Date: Thu, 25 Aug 
2005 12:08:11 -0400


"Kevin Keith" <[EMAIL PROTECTED]> writes:
> I have a question regarding blocking locks in the pg database. I ran 
into a
> process which terminated abnormally, and to fully clear the locks it 
left
> behind I had to reboot the system (probably restarting postmaster would 
have

> had the same effect).

Define "terminated abnormally".  You really aren't going to get helpful
answers without giving full details of what happened.  (If the
postmaster thought the backend had crashed, it would have forced a
database restart which would have wiped shared memory.  So it's not
clear from your comment what did happen.)

Also, exactly which PG version is this?

regards, tom lane








[ADMIN] Disabling WAL for bulk data loads

2005-09-09 Thread Kevin Keith
I am coming from an Oracle background, where for bulk data loads there were
several options to disable writing to the redo log to speed up the load
(i.e. direct load, set the user session to no archive logging, set the
affected tables to have no logging).


I know the COPY command is one option - however, it appears the data would
need to be in a formatted file in order to use it, correct? I want to avoid
writing a new file out for the COPY command and loading that.


What other options does Postgres 7.4 provide which would allow data loads to 
bypass writing to the WAL? I don't need to have this enabled - because in 
the event of a database crash, I would simply reload the data from the 
source files as recovery.


Thanks,

Kevin
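
For what it's worth, 7.4 has no supported way to bypass the WAL, but COPY
does not require an intermediate file: it can read from the client stream.
A hedged sketch, with hypothetical names:

    # pipe data straight into COPY without writing it to disk first
    some_extract_program | psql -d mydb -c "COPY mytable FROM STDIN"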






[ADMIN] ERROR: canceling query due to user request

2005-09-12 Thread Kevin Grittner
I started with this issue on the pgsql-jdbc list, but was assured that this 
could not be a bug in the JDBC driver -- it is either a bug in the 
configuration or runtime environment or it is a server-side bug.  This list 
seemed like the logical next step.
 
We have seen this behavior in 8.0.3 with the 311 JDBC driver.  Today I tested 
with the latest snapshot of the 8.1 development code and the tip of the JDBC 
driver code from the root.  It still happens.
 
The environment for the latest test is:
SuSE Linux 9.3 (i586)
2.6.11.4-21.8-smp #1 SMP Tue Jul 19 12:42:37 UTC 2005
Dual Xeon
Disk on 7 spindle RAID 5 via SAN
All access through JDBC connections.
A single thread accessing the database.
A connection pool in use -- the thread may grab a different connection each 
time.
No other process concurrently executing against the database.
Happens with or without autovacuum running.
 
Failures have also occurred on Windows 2000 and Windows XP servers with local
hard drives.  Xeons in all cases.
 
The software involved "optimistically" tries to do a series of inserts, 
updates, and/or deletes in a single database transaction.  If an exception 
occurs (for example an insert of a row which already exists) or an update 
affects zero rows, the transaction is rolled back and the operations are 
attempted in "cautious" mode -- a commit after each insert update or delete of 
a single row.
 
The client is running full out, with a steady supply of ready-to-go requests.  
Client CPU seems to be the bottleneck, rather than anything on the database 
server hardware.  We are only seeing this problem in "cautious" mode -- a 
database transaction has been rolled back and we're committing each statement 
as we go.
 
If we run the same set of data multiple times, the error occurs on different 
requests each time, indicating it is not data dependent.  If we turn on logging 
at the JDBC driver level, it never happens, indicating that it is timing 
sensitive.  We've never seen it while stepping through the debugger, but since 
it seems to happen randomly once every few thousand requests, that would be
unlikely anyway.
 
The error is manifest by this message and stack trace:
 
org.postgresql.util.PSQLException: ERROR: canceling query due to user request
at 
org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1499)
at 
org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1284)
at 
org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:175)
at 
org.postgresql.jdbc2.AbstractJdbc2Connection.executeTransactionCommand(AbstractJdbc2Connection.java:617)
at 
org.postgresql.jdbc2.AbstractJdbc2Connection.commit(AbstractJdbc2Connection.java:637)
at (our application code)
 
There is no method in the JDBC interface to cancel a commit from the client 
side, and there is nothing in the client code which is trying to do so.  There 
are no processes running on the server except services from the SuSE install 
and the processes started by pg_ctl start.
 
We tried setting the ulimit of everything to unlimited, where allowed.  We set 
the open files limit to 4096.  These ulimit changes had no effect on the
problem.
 
Does anyone have any ideas on a possible cause, or any diagnostic steps we 
should take?
 
Thanks,
 
-Kevin
 





Re: [ADMIN] ERROR: canceling query due to user request

2005-09-13 Thread Kevin Grittner
Thanks, Tom.
 
The statement_timeout setting was also raised on the JDBC list, and has been 
checked -- there is nothing setting statement_timeout.  The connection shows 
this value at zero.  None of our code contains anything PostgreSQL specific, so 
there is nothing in our framework or applications that would be aware of the 
"set statement_timeout" command to be able to issue it.  I created the database 
instance, the user and the database, and have not set this as a default 
anywhere.
 
Regarding the possibility that a ulimit setting is sending the signal, I am 
using "su" to set user id to root, issuing the ulimit statements to set 
everything unlimited or very high, then using "su" to set user id to postgres.  
At that point the settings from root still show.  Then I'm starting postgres 
using pg_ctl.  Is there any reason the ulimit settings would not carry through 
with this approach?  Is there a better way to do it?
 
Other than that, what external causes might be at fault when I have both 
Windows machines and Linux machines which have nothing installed but the 
operating system and PostgreSQL?  The only ulimit settings I couldn't set to 
"unlimited" were pipe size and open files.  Is there any chance that the 
error/rollback path in the code is leaking open files on the server?  Is there 
anything I should run during the test to watch for potential resource 
exhaustion?
 
-Kevin
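
For reference, a hedged sketch of raising the limits and starting the server
from a single root shell, so nothing can be lost between the two su steps
(the pg_ctl path and data directory are taken from the ps output later in
this thread; note that a login shell via "su -" may reset limits through
PAM, so plain su is used here):

    # as root: raise the limit, then start the server; child processes
    # inherit the limits across su
    ulimit -n 4096
    su postgres -c '/usr/local/pgsql/bin/pg_ctl -D /var/pgsql/data start'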
 
 
>>> Tom Lane <[EMAIL PROTECTED]> 09/12/05 5:39 PM >>>
"Kevin Grittner" <[EMAIL PROTECTED]> writes:
> org.postgresql.util.PSQLException: ERROR: canceling query due to user request

The only possible trigger of that message is a SIGINT sent to the backend.
Now the backend will SIGINT itself if a statement timeout expires, so one
possibility is that you have statement_timeout set and it's getting
exceeded.  Otherwise you need to be looking for external causes.

regards, tom lane




Re: [ADMIN] ERROR: canceling query due to user request

2005-09-13 Thread Kevin Grittner
One more thought -- I keep coming back to the fact that when we turn on logging 
in the JDBC driver on the client side, the problem does not occur.  The only 
possible reason I can see for this having any effect on the problem is the
small delay introduced by the synchronous logging.  Since this is only showing 
up on commit of a database transaction which follows close on the heels of a 
rollback on the same connection, is there any chance that there is some very 
small "settling time" needed for a rollback, and we're sometimes getting in 
ahead of this?
 
-Kevin
 
 
>>> Tom Lane <[EMAIL PROTECTED]> 09/12/05 5:39 PM >>>
"Kevin Grittner" <[EMAIL PROTECTED]> writes:
> org.postgresql.util.PSQLException: ERROR: canceling query due to user request

The only possible trigger of that message is a SIGINT sent to the backend.
Now the backend will SIGINT itself if a statement timeout expires, so one
possibility is that you have statement_timeout set and it's getting
exceeded.  Otherwise you need to be looking for external causes.

regards, tom lane




Re: [ADMIN] ERROR: canceling query due to user request

2005-09-13 Thread Kevin Grittner
I'm having a really hard time coming up with theories about the cause
or things to check.
 
We ran the test again with logging to disk, and it didn't happen in an
hour of testing.  The logging boosted the average run time of the
series of database modifications we attempt as a single transaction
from 44 ms to 58 ms.  We logged to a dummy PrintWriter, which just
returned to driver code without doing anything, and the time went to
50 ms.  We got our first error after 9 minutes with that configuration.
 
The only thing running on the server is the postgres back end.  It would
be hard to imagine something outside of the postgres software itself
which would be able to send the signal only when a rollback occurred.
Can you think of anything which could be coming through the protocol
stream which would cause this signal during the commit after a rollback?
 
About the only other thing I can think to do is to try to come up with
a RAM-based PrintWriter to keep a rolling buffer of JDBC logging
which it would dump when we get the error.  Since a PrintWriter which
did absolutely nothing was right on the edge of blocking the problem,
I'm skeptical that adding even that much will allow the problem to show.
 
I welcome all suggestions on what to try or what to monitor.
 
-Kevin
 
 
>>> Tom Lane <[EMAIL PROTECTED]> 09/13/05 11:31 AM >>>
"Kevin Grittner" <[EMAIL PROTECTED]> writes:
> One more thought -- I keep coming back to the fact that when we turn
> on logging in the JDBC driver on the client side, the problem does not
> occur.  The only possible reason I can see for this having any effect
> on the problem is the small delay introduced by the synchronous
> logging.  Since this is only showing up on commit of a database
> transaction which follows close on the heels of a rollback on the same
> connection, is there any chance that there is some very small
> "settling time" needed for a rollback, and we're sometimes getting in
> ahead of this?

(1) No.

(2) Even if we posit the existence of such a serious bug as that, it
wouldn't explain how control gets to the SIGINT response code.

regards, tom lane




Re: [ADMIN] ERROR: canceling query due to user request

2005-09-13 Thread Kevin Grittner
Thanks, Tom.  An excellent suggestion.  (This 50 year old dog has
today learned a new trick.)
 
There is good news and bad news.  The good news is that I found
the cause, and we can keep this from happening with a change on
our end.  The bad news is that I think it also points to a backend bug,
although not as serious as the potential (hypothetical) one I was
asking about earlier.
 
For the record (and the benefit of anyone with similar problems who
may search the archives), I got the connection pool set up, and found
the pids for the connection processes like this:
 
# ps aux|grep postgres
postgres 18307  0.0  0.0   3052  1100 pts/0  S    Sep09   0:00 su postgres
postgres 18308  0.0  0.0   3408  1828 pts/0  S+   Sep09   0:00 bash
postgres 15463  0.0  0.1  89508  3852 pts/0  S    10:09   0:05 /usr/local/pgsql/bin/postmaster -D /var/pgsql/data
postgres 15466  0.0  4.0  89652 83004 pts/0  S    10:09   0:00 postgres: writer process
postgres 15467  0.0  0.1   7052  2796 pts/0  S    10:09   0:02 postgres: stats buffer process
postgres 15468  0.0  0.0   6388  1996 pts/0  S    10:09   0:03 postgres: stats collector process
postgres   926  0.0  0.3  90484  7576 pts/0  S    14:34   0:00 postgres: dtr dtr 165.219.88.77(4436) idle
postgres   927  0.0  0.2  89956  4684 pts/0  S    14:34   0:00 postgres: dtr dtr 165.219.88.77(4437) idle
postgres   928  3.0  1.1  90404 23764 pts/0  S    14:34   0:07 postgres: dtr dtr 165.219.88.77(4438) idle
postgres   929  2.7  1.1  90468 23260 pts/0  S    14:34   0:07 postgres: dtr dtr 165.219.88.77(4439) idle
root       935  0.0  0.0   1796   648 pts/2  S+   14:38   0:00 grep postgres
 
I then established an strace session for each connection like this:
 
strace -tt -o strace.926 -p 926
 
Other flags may have been useful, but strace is new to me, so I took
the route I was sure I understood.  I ran until we got an error, which
involved running through 7,278 transactions.  I used Ctrl+C to stop
each strace and searched the results for SIGINT.  There were 1,799
of them.  They always came in a set of lines like this:
 
13:59:19.625498 recv(7, "P\0\0\0Y\0SELECT lcmtr.\"relationName"..., 8192, 0) = 
125
13:59:19.625976 _llseek(32, 0, [0], SEEK_END) = 0
13:59:19.626057 _llseek(33, 0, [8192], SEEK_END) = 0
13:59:19.626159 _llseek(32, 0, [0], SEEK_END) = 0
13:59:19.626257 send(7, 
"1\0\0\0\0042\0\0\0\4T\0\0\0D\0\2relationName\0\0\0"..., 97, 0) = 97
13:59:19.626352 recv(7, 0x82b1000, 8192, 0) = ? ERESTARTSYS (To be restarted)
13:59:19.628477 --- SIGINT (Interrupt) @ 0 (0) ---
13:59:19.628559 sigreturn() = ? (mask now [])
 
The SELECT statement was easy to find, and it became clear that a
programmer had code which was incorrectly canceling a JDBC
Statement after reaching the end of the (empty) ResultSet.  One
time out of 1,799 this was causing the error we were seeing on
the subsequent commit, which strikes me as a bug.
 
We've already changed the offending code to avoid the invocation
of the Statement.cancel method.  Not that it merits high priority, but
it might make sense for PostgreSQL to behave more consistently on
a commit when a statement within the database transaction has been
canceled.  There currently is a race condition where if the commit
comes fast enough after the Statement.cancel, it receives the error
which is the subject of this thread.
 
-Kevin
 
 
>>> Tom Lane <[EMAIL PROTECTED]> 09/13/05 1:18 PM >>>
"Kevin Grittner" <[EMAIL PROTECTED]> writes:
> I'm having a really hard time coming up with theories about the cause
> or things to check.

Have you tried strace'ing the backend process to see if you can see a
signal being delivered to it?

regards, tom lane




Re: [JDBC] [ADMIN] ERROR: canceling query due to user request

2005-09-15 Thread Kevin Grittner
One more thought on the topic -- in the particular case where we hit
this, the Statement did not have any requests active on its connection.
If the driver can determine this, it could return from the cancel method
without doing anything.
 
-Kevin
 
 
>>> Oliver Jowett <[EMAIL PROTECTED]> 09/13/05 6:00 PM >>>
Tom Lane wrote:

> This has been discussed before (try the pgsql-jdbc list archives).
> I believe we concluded that an appropriate fix was to not consider the
> cancel request "done" until the client sees the separate connection
> dropped by the postmaster.  libpq's cancel functions wait for that to
> happen, and I thought that JDBC had been fixed as well --- maybe you are
> using an old driver version?

I thought this got done too, but looking at the current JDBC driver code
 I don't see it..

-O




[ADMIN] Copy command not writing complete data to text file

2005-09-22 Thread Kevin Keith
I am having some problems with the COPY... to FILE command - and I am 
wondering if anyone has experienced similar problems in the past, and what 
you may have done to resolve the problem?


The platform is FreeBSD, Postgres version 7.4.5, and the program triggering
the COPY command is a CGI script.


I am copying out to a text file using PQexec - there are no more than 500
records in the tables being copied. What is happening is that one instance of
the COPY command is always stopping after dumping 16,384 bytes (16K) to the
text file - the last record is always truncated (i.e. if the table has
20 columns, the last record may only have 7 columns in the flat file). There
is also no error returned in the program - to the program calling the copy
command, everything gives the appearance of working.


What makes this problem difficult to resolve is that if I run the exact same 
COPY command from the PSQL prompt, everything works with no problems.


Disk space is not an issue - there is sufficient space to store the text
files many times over.


Has anyone else experienced similar issues, or can you point me to something 
that may be causing this behavior to occur?


Thanks,

Kevin






Re: [ADMIN] Copy command not writing complete data to text file

2005-09-22 Thread Kevin Keith
It is a CGI (shell) script that is calling a C program. I found the issue,
which was external to Postgres.


I still have one remaining question - if the COPY to FILE command fails, 
does it return an error? If the program uses:

int i;
i = PQexec("copy from <> to <>");

and the table has (for example) 500 records, but only 350 were copied to the 
file, what is returned in i? Or is the only method to check that all the 
data was exported correctly to wc -l the file and select count(*) to check 
against what was supposed to be exported?


Thanks,

Kevin
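
For reference: PQexec actually returns a PGresult pointer rather than an
int, and for a server-side "COPY ... TO 'file'" the thing to test is that
result's status. A hedged C sketch (connection string and names are
placeholders):

    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn   *conn;
        PGresult *res;

        conn = PQconnectdb("dbname=mydb");
        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
            return 1;
        }

        /* server-side COPY; status is PGRES_COMMAND_OK only on success */
        res = PQexec(conn, "COPY mytable TO '/tmp/mytable.txt'");
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));

        PQclear(res);
        PQfinish(conn);
        return 0;
    }

If the COPY dies partway through, the error should surface there; comparing
wc -l against count(*) remains a reasonable belt-and-braces check.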


From: Michael Fuhr <[EMAIL PROTECTED]>
To: Kevin Keith <[EMAIL PROTECTED]>
CC: [email protected]
Subject: Re: [ADMIN] Copy command not writing complete data to text file
Date: Thu, 22 Sep 2005 08:52:36 -0600

On Thu, Sep 22, 2005 at 08:27:00AM -0500, Kevin Keith wrote:
> The platform is FreeBSD, Postgres version 7.4.5, and the program triggering
> the COPY command is a CGI script.

What language and API are you using?  In the next paragraph you
mention PQexec(), which implies C or C++ and libpq (some people
argue that a C/C++ program isn't a "script," so that word could be
misleading).

> I am copying out to a text file using PQexec - there are no more than 500
> records in the tables being copied. What is happening is that one instance
> of the COPY command is always stopping after dumping 16,384 bytes (16K) to
> the text file - the last record is always truncated (i.e. if the table has
> 20 columns, the last record may only have 7 columns in the flat file).
> There is also no error returned in the program - to the program calling
> the copy command, everything gives the appearance of working.

Could you post the code that does the COPY and error checking?
Better yet, a short but complete program that exhibits the problem.

> What makes this problem difficult to resolve is that if I run the exact
> same COPY command from the PSQL prompt, everything works with no problems.


Does the CGI program behave differently if you run it from the
command line instead of through the web server?

--
Michael Fuhr








[ADMIN] Need help with corrupt pg_statistic

2005-10-02 Thread Kevin Seghetti
I am using postgres 7.4.5, and have a corrupt pg_statistic.
Many commands cause the following error:
ERROR:  invalid page header in block 10 of relation "pg_statistic"

I want to preserve my data if at all possible, so I figured I would just
dump the database and reload it. I tried pg_dump, but it gives me:
 pg_dump: query to obtain list of data types failed: ERROR:  invalid page 
header in block 11 of relation "pg_statistic"
even if I just try to dump a single table. 

I tried postgres in single-user mode:
 delete from pg_statistic
gives me the same error
 drop table pg_statistic
gives me
 ERROR:  permission denied: "pg_statistic" is a system catalog

Is there any way to nuke pg_statistic (since its contents can be
recalculated anyway?)
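
For reference, one recovery approach that is sometimes suggested for exactly
this case -- hedged, since it irreversibly zeroes the damaged blocks -- uses
the zero_damaged_pages setting (present in 7.4), relying on pg_statistic
being purely derived data:

    -- as a superuser
    SET zero_damaged_pages = true;
    VACUUM pg_statistic;
    SET zero_damaged_pages = false;
    ANALYZE;   -- repopulate the statistics from scratch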

Thanks, 

-- 
Kevin Seghetti: E-Mail: [EMAIL PROTECTED], HTTP: www.tenetti.org
GPG public key: http://tenetti.org/cgi-bin/twiki/view.cgi/KevinSeghetti/GPGKey
Check out www.worldfoundry.org for my GPL'ed 3D video game engine
Copyright is a temporary loan from the public domain, not property




Re: [ADMIN] Vacuum Full Analyze Stalled

2005-10-03 Thread Kevin Grittner
 72 rows in sample, 72 estimated total rows
VACUUM
dtr=# reindex table "DbTranImageStatus";
REINDEX
dtr=# vacuum analyze verbose "DbTranImageStatus";
INFO:  vacuuming "public.DbTranImageStatus"
INFO:  index "DbTranImageStatusPK" now contains 72 row versions in 2 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "DbTranImageStatus": found 0 removable, 72 nonremovable row versions in 
1 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 48 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  vacuuming "pg_toast.pg_toast_272793"
INFO:  index "pg_toast_272793_index" now contains 0 row versions in 1 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "pg_toast_272793": found 0 removable, 0 nonremovable row versions in 0 
pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  analyzing "public.DbTranImageStatus"
INFO:  "DbTranImageStatus": scanned 1 of 1 pages, containing 72 live rows and 0 
dead rows; 72 rows in sample, 72 estimated total rows
VACUUM
 
These all ran sub-second.  We updated postgresql.conf for more
aggressive autovacuum, and restarted postgres, then restarted the
application.
 
This failure seems rather different from the one I previously posted,
since it was blocking on the application table, rather than
pg_constraint_contypid_index, and it did not wake up when all
other processes were stopped.
 
As before, both boxes are running 8.1beta2.  Windows is not showing
these problems with the autovacuum blocking; just Linux.
 
-Kevin
 
 
>>> Tom Lane <[EMAIL PROTECTED]> 10/02/05 8:53 PM >>>
"Jeff Kirby" <[EMAIL PROTECTED]> writes:
> the Linux box however is still chugging away this morning... and
> appears to be stuck on vacuuming "pg_constraint_contypid_index".  How
> do I know... well I don't really... I'm inferring based on the order
> of the log output on the Windows box.

Looking in pg_locks would give you a more reliable indicator of what the
VACUUM is currently working on.

Is the Linux box otherwise idle?  There was another report recently of a
vacuum hung up for a long time on pg_constraint_contypid_index, but it
seemed to be due to extremely heavy usage of domain types ...

regards, tom lane



Re: [ADMIN] Vacuum Full Analyze Stalled

2005-10-03 Thread Kevin Grittner
In my previous post I failed to mention that after we stopped the
applications and found that autovacuum remained idle for several
minutes, we restarted postgres before cleaning up the bloat on
the problem table.
 
Also, the log shows that autovacuum stopped kicking in after
4:43 p.m. on Friday.  It was at about this point that I was
dropping and creating indexes through psql.  I did not use
BEGIN TRANSACTION, but I went home with an index being
built on a large table, so that connection remained open until
Sunday at 9:00 p.m.  I don't know if that matters, but I'm trying
to be thorough.
 
-Kevin
 





Re: [ADMIN] Vacuum Full Analyze Stalled

2005-10-03 Thread Kevin Grittner
We will use gdb and strace the next time we see this.
 
I've tried to be specific about which vacuum is running in all cases.  If
the posts have been confusing on that issue, I apologize.  I'll try to be
clear on this in future posts.
 
To summarize past events, the case involving the constraint index
was indeed a "vacuum full" of the entire database under heavy load.
Autovacuum failed to keep the small, high-update table clean in that
scenario, but I am not sure whether that caused the failure of the
vacuum full, or was the result of it.  This weekend, it seemed like the
first thing which failed (and the last) were autovacuum attempts.
Vacuum full was run through psql during attempts to recover
performance after the failure of autovacuum caused performance
to slow noticeably.  We didn't capture info which would tell us whether
the explicit vacuum was blocked by an autovacuum process.
 
There were a few very small single-source tests under 8.0.3, but all
tests involving any significant load were under 8.1beta1 or 8.1beta2.
We did not see this in any of those small tests under 8.0.3.
 
-Kevin
 
 
>>> Tom Lane <[EMAIL PROTECTED]> 10/03/05 3:48 PM >>>
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> However, I'm looking at the autovacuum code to see why it's sitting
> holding locks on the small table and not vacuuming it.  I see on the
> pg_locks output that process 3158 (autovacuum) has got locks on the
> table and index, but it apparently isn't vacuuming the table.  If this
> is correct, it's a bug.  However I can't seem to find out why this
> happens.

We can see clearly from the pg_locks output that VACUUM isn't waiting
for an lmgr lock, so the problem must be at a lower level.  The
hypothesis I'm thinking about is that VACUUM is trying to do
LockBufferForCleanup() and for some reason it never finishes.  There are
a number of possible scenarios that could explain this: leaked buffer
pin, dropped signal, etc.

> Kevin, Jeff, next time this happens please attach gdb to the autovacuum
> process and get a stack trace ("bt" to gdb), if at all possible, and/or
> strace it to see what it's doing.

Please!

Also, we need to keep a little clarity about what we are dealing with.
This thread has mentioned hangups in both plain vacuum (autovacuum) and
VACUUM FULL.  It seems very likely to me that there are different
mechanisms involved --- since VACUUM FULL takes an exclusive lock on the
whole table, that eliminates an entire class of possible explanations
for the plain-VACUUM case, while introducing a whole new set of
explanations having to do with the VACUUM being queued up behind
ordinary table locks.  Please be perfectly clear about which scenario
each report is about.

Finally, I'm wondering whether this bug is new in 8.1 or is
pre-existing.  Has this same application been running successfully
in 8.0?

regards, tom lane




Re: [ADMIN] Vacuum Full Analyze Stalled

2005-10-03 Thread Kevin Grittner
My goal is to avoid vacuum full in production.  My understanding is
that it is never necessary if vacuums are done aggressively enough,
but I felt that while we were in beta test mode, it was worthwhile for
me to have it run periodically, with the verbose option, to provide
information about where we might need to adjust our vacuum
schedule or fsm settings.
 
Since the long-term blocking on the constraint index occurred, I have
asked that we run these during non-peak loads, and I'm getting to
the point where I think I can be satisfied with the verbose results of
a normal vacuum for these purposes, even though it provides less
detail.  Are there any tools which provide the level of detail you get
from vacuum full verbose without the problems?
 
When our small table has bloated, we have tried vacuum full in an
attempt to eliminate the bloat, but we have had to resort to reindex
table to clean things up adequately.  We have tried cluster, which
also worked -- but there doesn't seem to me to be any real
advantage over vacuum followed by reindex for our small, frequently
updated table, since it is rarely accessed sequentially.  Am I missing
something there?
 
-Kevin
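
On the tooling question: contrib/pgstattuple, where installed, reports
dead-tuple counts and free space per table without taking the exclusive
locks that VACUUM FULL VERBOSE does. A hedged sketch, using a table name
from earlier in this thread:

    SELECT * FROM pgstattuple('"DbTranImageStatus"');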
 
 
>>> "Jim C. Nasby" <[EMAIL PROTECTED]> 10/03/05 4:48 PM >>>
On Mon, Oct 03, 2005 at 04:37:17PM -0500, Kevin Grittner wrote:
> We will use gdb and strace the next time we see this.
>  
> I've tried to be specific about which vacuum is running in all cases.  If
> the posts have been confusing on that issue, I apologize.  I'll try to be
> clear on this in future posts.
>  
> To summarize past events, the case involving the constraint index
> was indeed a "vacuum full" of the entire database under heavy load.
> Autovacuum failed to keep the small, high-update table clean in that
> scenario, but I am not sure whether that caused the failure of the
> vacuum full, or was the result of it.  This weekend, it seemed like the
> first thing which failed (and the last) were autovacuum attempts.
> Vacuum full was run through psql during attempts to recover
> performance after the failure of autovacuum caused performance
> to slow noticeably.  We didn't capture info which would tell us whether
> the explicit vacuum was blocked by an autovacuum process.

Keep in mind that vacuum full is *very* aggressive for use in a
production environment. It acquires exclusive locks on tables, which
means everything else will grind to a complete halt while it's running.
Unless you have a very specific reason to use vacuum full, you should
just use plain vacuum (not related to autovacuum). If you are going to
vacuum full, you should consider using the cluster command which has
some added benefits.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com      work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461




Re: [ADMIN] Vacuum Full Analyze Stalled

2005-10-03 Thread Kevin Grittner
I hate to clutter the list with another post on this, but I just did
exactly what Tom asked me not to do, which is to confuse what
type of vacuum was run.  The vacuum involved in the constraint
index problem was NOT a vacuum full, but a normal vacuum of
the database.  Sorry for mis-stating the issue a few minutes ago.
 
-Kevin
 
 
>>> "Kevin Grittner" <[EMAIL PROTECTED]> 10/03/05 5:27 PM >>>
My goal is to avoid vacuum full in production.  My understanding is
that it is never necessary if vacuums are done aggressively enough,
but I felt that while we were in beta test mode, it was worthwhile for
me to have it run periodically, with the verbose option, to provide
information about where we might need to adjust our vacuum
schedule or fsm settings.
 
Since the long-term blocking on the constraint index occurred, I have
asked that we run these during non-peak loads, and I'm getting to
the point where I think I can be satisfied with the verbose results of
a normal vacuum for these purposes, even though it provides less
detail.  Are there any tools which provide the level of detail you get
from vacuum full verbose without the problems?
 
When our small table has bloated, we have tried vacuum full in an
attempt to eliminate the bloat, but we have had to resort to reindex
table to clean things up adequately.  We have tried cluster, which
also worked -- but there doesn't seem to me to be any real
advantage over vacuum followed by reindex for our small, frequently
updated table, since it is rarely accessed sequentially.  Am I missing
something there?
 
-Kevin
 
 
>>> "Jim C. Nasby" <[EMAIL PROTECTED]> 10/03/05 4:48 PM >>>
On Mon, Oct 03, 2005 at 04:37:17PM -0500, Kevin Grittner wrote:
> We will use gdb and strace the next time we see this.
>  
> I've tried to be specific about which vacuum is running in all cases.  If
> the posts have been confusing on that issue, I apologize.  I'll try to be
> clear on this in future posts.
>  
> To summarize past events, the case involving the constraint index
> was indeed a "vacuum full" of the entire database under heavy load.
> Autovacuum failed to keep the small, high-update table clean in that
> scenario, but I am not sure whether that caused the failure of the
> vacuum full, or was the result of it.  This weekend, it seemed like the
> first thing which failed (and the last) were autovacuum attempts.
> Vacuum full was run through psql during attempts to recover
> performance after the failure of autovacuum caused performance
> to slow noticeably.  We didn't capture info which would tell us whether
> the explicit vacuum was blocked by an autovacuum process.

Keep in mind that vacuum full is *very* aggressive for use in a
production environment. It acquires exclusive locks on tables, which
means everything else will grind to a complete halt while it's running.
Unless you have a very specific reason to use vacuum full, you should
just use plain vacuum (not related to autovacuum). If you are going to
vacuum full, you should consider using the cluster command which has
some added benefits.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com      work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461




Re: [ADMIN] PostgreSQL running in the background....

2005-10-18 Thread Kevin Grittner
Hello Faeiz,

When setting up our SuSE server, I found this page, and figured that I
should probably comply with the recommendations I found there, even
though we were setting things up "by hand" rather than through an RPM:

http://www.novell.com/coolsolutions/feature/11256.html

We started with the linux script in the contrib/start-scripts directory
of the distribution.  We added this near the front:

### BEGIN INIT INFO
# Provides: postgresql
# Required-Start: $local_fs $network $syslog
# Should-Start: ntpd
# Required-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: PostgreSQL RDBMS
# Description: PostgreSQL RDBMS service on port 5432
### END INIT INFO

We found it necessary to change the script to use pg_ctl to start the
service rather than using postmaster directly.

We copied the script to /etc/init.d/ as postgresql, set the executable
flag, and ran the following:

/usr/lib/lsb/install_initd /etc/init.d/postgresql

After that, you can use chkconfig as needed.
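
For instance (a hedged sketch; the run levels come from the INIT INFO block
above):

    # enable the service at boot and verify the run levels
    chkconfig postgresql on
    chkconfig --list postgresql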

This has worked for us, although if there's a flaw in it, I'd welcome
any suggestions.  I'm reluctant to post my entire script publicly
without some review by someone who knows Linux better than I do, for
fear that my ignorance could cause people more harm than good.

As usual, YMMV, proceed with caution, etc.

The usual Linux advice regarding SHMMAX, scheduling (elevator=deadline),
and file system configuration (/etc/fstab noatime, etc.) applies.  If
you need help with these on SuSE, let me know.

I hope this helps.

-Kevin


>>> faeiz <[EMAIL PROTECTED]>  >>>
Dear Admins,

I am a new user, and I am running Suse Linux 9.3...
I have successfully installed PostgreSQL 8.0.3.
The problem I am having is that my PostgreSQL does not run in the
background. I have looked at the readme; they do have a command
to get postgres running in the background, but after I restart my
machine, it stops running. I was wondering if you people could provide
me with a step-by-step walkthrough for getting postgres running permanently
in the background.

Thanks.



--
faeiz




Re: [ADMIN] how do you automate database backups?

2005-11-02 Thread Kevin Grittner
Since you mentioned multiple backups per day, I thought you
might want to look at the Point In Time Recovery option, as
well as the pg_dump approach.

http://www.postgresql.org/docs/8.0/interactive/backup.html
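
For the pg_dump side, a hedged sketch of a cron-driven nightly dump (paths
and database name are placeholders; on Windows, the same pg_dump command
can be scheduled with Task Scheduler):

    # /etc/crontab entry: dump mydb at 02:30 every night as user postgres
    30 2 * * * postgres pg_dump -Fc mydb > /backups/mydb-$(date +\%F).dump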


>>> Ferindo Middleton Jr <[EMAIL PROTECTED]>  >>>
Are there tools available to automate backing up databases? I would like
to set up my PostgreSQL install to automatically back up databases and
save the file to specific location at specific times of the day with 
little to no user intervention... I'd like to know how to do this in 
both Windows and Linux. I imagine I could write some kind of script or 
something on my Linux server but I have absolutely no idea how to 
implement this on a computer running Windows.

I would imagine this could be implemented through the OS but are there 
built-in features of the database itself that can implement automated 
backups?

Ferindo




[ADMIN] ERROR: could not read block

2005-11-14 Thread Kevin Grittner
A programmer ran a query to fix some data against two "identical"
databases -- one on Linux and one on Windows.  They are both 8.1.0,
running on dual hyperthreaded Xeons, with data on RAID5.  The Linux
update went fine.  The Windows attempt gives this:
 
dtr=> UPDATE
dtr-> "DbTranRepository"
dtr-> SET "userId" = UPPER("userId")
dtr-> WHERE (
dtr(> ("userId" <> UPPER("userId")) AND
dtr(> ("timestampValue" BETWEEN '2005-10-28' AND '2005-11-15'));
ERROR:  could not read block 649847 of relation 1663/16385/16483:
Invalid argument

Table "public.DbTranRepository"
  Column  |  Type  | Modifiers
--++---
 countyNo | "CountyNoT"| not null
 tranImageSeqNo   | "TranImageSeqNoT"  | not null
 timestampValue   | "TimestampT"   | not null
 transactionImage | "ImageT"   |
 status   | character(1)   | not null
 queryName| "QueryNameT"   |
 runDuration  | numeric(15,0)  |
 userId   | "UserIdT"  |
 functionalArea   | character varying(50)  |
 sourceRef| character varying(255) |
 url  | "URLT" |
 tranImageSize| numeric(15,0)  |
Indexes:
"DbTranRepositoryPK" PRIMARY KEY, btree ("countyNo",
"tranImageSeqNo") CLUSTER
"DbTranRepository_timestamp" btree ("countyNo", "timestampValue")
"DbTranRepository_userId" btree ("countyNo", "userId",
"timestampValue")

 
fsync is on.  We have not had any crashes that I know of (one key player
here is currently unavailable, so I may get a surprise when I can talk
to him).  The table being updated has about 23.3 million rows, each of
which has a bytea column which should normally be toasted.  The database
was in use during this fix query.  This table is normally insert-only. 
The fix query was running for about an hour, during which time about
45,000 rows were inserted.
 
This looks to me like database corruption.  We can recover from the
Linux machine, but we're interested in the cause.  (Management is
divided on using Linux versus Windows as our db server platform.)  Any
suggestions on causes or what to check?

-Kevin


 




Re: [ADMIN] ERROR: could not read block

2005-11-14 Thread Kevin Grittner
Both machines are IBM xSeries 346 model 884042U with 6 drives
in a RAID 5 array through an IBM battery-backed controller.  We
had a couple of these lying around after replacing them with better
ones, but they have been pretty stable workhorses for us.

I'm checking on whether the RAM is ECC -- the techs available at
the moment aren't sure.  The current machines are "transitional",
and it may not be too late to set the permanent servers up with ECC
memory.  Is it something I should fight for?

For specs on the base machines, before we dressed them up:

http://www-132.ibm.com/webapp/wcs/stores/servlet/ProductDisplay?productId=8741193&langId=-1

-Kevin


>>> Scott Marlowe <[EMAIL PROTECTED]>  >>>
On Mon, 2005-11-14 at 17:20, Kevin Grittner wrote:

> running on dual hyperthreaded Xeons, with data on RAID5.

> ERROR:  could not read block 649847 of relation 1663/16385/16483:
> Invalid argument

If you were running on top of a RAID 1+0 or RAID 5 array, such an error
would likely never have happened, since it would have been detected by
the controller, and either the bad block would be mapped out or the
drive would be kicked out of the array and you'd get a system alert
telling you you had a bad drive in your array.

Are you running on quality hardware (ECC memory, Server class SCSI
drives, battery backed cache hardware RAID array, etc...) or just
whatever was lying about unused?



Re: [ADMIN] ERROR: could not read block

2005-11-15 Thread Kevin Grittner
It appears that the log file is not being written -- I'll start a
separate thread on that issue.

I reran the query.  Same error, same relation, different block.

dtr=> UPDATE
dtr-> "DbTranRepository"
dtr-> SET "userId" = UPPER("userId")
dtr-> WHERE (
dtr(> ("userId" <> UPPER("userId")) AND
dtr(> ("timestampValue" BETWEEN '2005-10-28' AND '2005-11-15'));
ERROR:  could not read block 1118215 of relation 1663/16385/16483:
Invalid argument

-Kevin


>>> Joshua Marsh <[EMAIL PROTECTED]>  >>>

Does the log file tell you any more information?

Have you tried to rerun the query? If so, did you get similar results?

-Josh




Re: [ADMIN] ERROR: could not read block

2005-11-15 Thread Kevin Grittner
Could my issue be the same problem as this thread?:

http://archives.postgresql.org/pgsql-bugs/2005-11/msg00114.php

The references to "Invalid Argument" caught my eye.  That thread
did start from a very different point, though.

-Kevin


>>> "Kevin Grittner" <[EMAIL PROTECTED]>  >>>
It appears that the log file is not being written -- I'll start a
separate thread on that issue.

I reran the query.  Same error, same relation, different block.

dtr=> UPDATE
dtr-> "DbTranRepository"
dtr-> SET "userId" = UPPER("userId")
dtr-> WHERE (
dtr(> ("userId" <> UPPER("userId")) AND
dtr(> ("timestampValue" BETWEEN '2005-10-28' AND '2005-11-15'));
ERROR:  could not read block 1118215 of relation 1663/16385/16483:
Invalid argument

-Kevin




[ADMIN] Best practices for logging from a Windows service

2005-11-15 Thread Kevin Grittner
I am looking for advice on how best to configure logging from PostgreSQL
when it is run as a Windows service.  Under Linux we allow it to log to
'stderr' and we use the pg_ctl -l switch to direct that to a file.  This
doesn't seem to be supported under Windows, so I'm looking for "best
practices" advice from those experienced in this area.

-Kevin
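
For reference, a hedged sketch of the postgresql.conf settings that cover
this on 8.0/8.1, where the parameter is named redirect_stderr (values here
are illustrative):

    redirect_stderr = on
    log_directory = 'pg_log'                 # relative to the data directory
    log_filename = 'postgresql-%Y-%m-%d.log'
    log_rotation_age = 1440                  # minutes in these releases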





Re: [ADMIN] ERROR: could not read block

2005-11-15 Thread Kevin Grittner
The table has about 23.3 million rows, of which about 200,000 will
be affected by this update.  Run time is about an hour.  During the
first run, the table was the target of about 45,000 inserts.  This rerun
was done as the only task.  A third run (also by itself) gave this:

ERROR:  could not read block 1482762 of relation 1663/16385/16483:
Invalid argument

So the block number is increasing each time.  I'm inclined to think
that this is the result of the scan passing over rows added by itself.

-Kevin


>>> Qingqing Zhou <[EMAIL PROTECTED]>  >>>

> I reran the query.  Same error, same relation, different block.
>
> dtr=> UPDATE
> dtr-> "DbTranRepository"
> dtr-> SET "userId" = UPPER("userId")
> dtr-> WHERE (
> dtr(> ("userId" <> UPPER("userId")) AND
> dtr(> ("timestampValue" BETWEEN '2005-10-28' AND '2005-11-15'));
> ERROR:  could not read block 1118215 of relation 1663/16385/16483:
> Invalid argument
>

So this is a big query? Somebody reported this error when IO is
intensive. To summarize the reports we have received:

(1) If IO is intensive (whether or not you have anti-virus software),
you may encounter the "invalid argument" error;

(2) In some cases, anti-virus software could cause the "invalid argument"
error;

(3) You may encounter the "permission denied" problem (whether or not you
have anti-virus software) in some unknown condition;

Unfortunately we haven't found the cause of this problem ... I am
thinking an urgent need is to let PG print the GetLastError() for the
Windows version as well.

Regards,
Qingqing




Re: [ADMIN] ERROR: could not read block

2005-11-15 Thread Kevin Grittner
Correction:

dtr=> select count(*) from "DbTranRepository"
dtr-> WHERE (
dtr(> ("userId" <> UPPER("userId")) AND
dtr(> ("timestampValue" BETWEEN '2005-10-28' AND '2005-11-15'));
 count
--------
 611255
(1 row)

I'm becoming more convinced that this happens as the UPDATE
runs into rows inserted by itself.

To respond to a concern expressed earlier -- all of our database
servers, including these two, use ECC memory.

-Kevin


>>> "Kevin Grittner" <[EMAIL PROTECTED]>  >>>
The table has about 23.3 million rows, of which about 200,000 will
be affected by this update.  Run time is about an hour.  During the
first run, the table was the target of about 45,000 inserts.  This rerun
was done as the only task.  A third run (also by itself) gave this:

ERROR:  could not read block 1482762 of relation 1663/16385/16483:
Invalid argument

So the block number is increasing each time.  I'm inclined to think
that this is the result of the scan passing over rows added by itself.

-Kevin


>>> Qingqing Zhou <[EMAIL PROTECTED]>  >>>

> I reran the query.  Same error, same relation, different block.
>
> dtr=> UPDATE
> dtr-> "DbTranRepository"
> dtr-> SET "userId" = UPPER("userId")
> dtr-> WHERE (
> dtr(> ("userId" <> UPPER("userId")) AND
> dtr(> ("timestampValue" BETWEEN '2005-10-28' AND '2005-11-15'));
> ERROR:  could not read block 1118215 of relation 1663/16385/16483:
> Invalid argument
>

So this is a big query? Somebody reported this error when IO is
intensive. To summarize the reports we have received:

(1) If IO is intensive (whether or not you have anti-virus software),
you may encounter the "invalid argument" error;

(2) In some cases, anti-virus software could cause the "invalid argument"
error;

(3) You may encounter the "permission denied" problem (whether or not you
have anti-virus software) in some unknown condition;

Unfortunately we haven't found the cause of this problem ... I am
thinking an urgent need is to let PG print the GetLastError() for the
Windows version as well.

Regards,
Qingqing


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [ADMIN] ERROR: could not read block

2005-11-15 Thread Kevin Grittner
I got the error log working on Windows (with redirect_stderr).  I had
to stop and restart postgres to do so.  I ran the query (for the fourth
time), and it completed successfully.
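
For anyone else hitting this, the relevant postgresql.conf entries look
something like this (the directory and filename values here are just
illustrative; only redirect_stderr itself needs to be changed):

redirect_stderr = on
log_directory = 'pg_log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'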
 
I'm not inclined to believe that changing the redirect_stderr setting
would change this behavior, so I guess that either it is a
coincidence or the restart cleared some bad state within postgres.
 
Is there anything that anyone wants me to do at this point, to try
to pin down a cause, or do I drop it here?
 
-Kevin
 


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [ADMIN] ERROR: could not read block

2005-11-15 Thread Kevin Keith
If I have followed the chain correctly, I saw that you were trying to 
run an update statement on a large number of records in a large table, 
right? I have changed my strategy in the past for this type of problem. 
I don't know if it would have fixed this problem or not, but I have seen 
with Postgres and Oracle that updates like this on a large table may 
take a very long time and cause contention with other processes.


Anyway, the method I normally use is to write a script that selects the 
records into a cursor, updates one at a time by the primary key, and 
commits the transaction every so often (i.e. every 500 - 1000 records). 
This way there will not be the overhead associated with an extremely 
large transaction, and the script will complete much faster (I have 
seen hours vs. minutes).
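
A rough sketch of the batching idea in plain SQL (the "tranId" key column
here is hypothetical -- use the table's real primary key -- and each
statement is run in its own transaction, repeating until it updates zero
rows):

UPDATE "DbTranRepository"
   SET "userId" = UPPER("userId")
 WHERE "tranId" IN (
       SELECT "tranId"
         FROM "DbTranRepository"
        WHERE "userId" <> UPPER("userId")
        LIMIT 1000);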


Hope this helps.

Kevin

Kevin Grittner wrote:


I got the error log working on Windows (with redirect_stderr).  I had
to stop and restart postgres to do so.  I ran the query (for the fourth
time), and it completed successfully.

I'm not inclined to believe that changing the redirect_stderr setting
would change this behavior, so I guess that either it is a
coincidence or the restart cleared some bad state within postgres.

Is there anything that anyone wants me to do at this point, to try
to pin down a cause, or do I drop it here?

-Kevin



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
 




---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [ADMIN] ERROR: could not read block

2005-11-16 Thread Kevin Grittner
I will patch, build, and run similar updates to try to hit the problem.
Hopefully I can have something to post later today.

-Kevin


>>> Qingqing Zhou <[EMAIL PROTECTED]>  >>>

On Tue, 15 Nov 2005, Kevin Grittner wrote:

>
> Is there anything that anyone wants me to do at this point, to try
> to pin down a cause, or do I drop it here?
>

Since you have a good chance of reproducing the error, I think this is a
good opportunity to pin down the cause now. It would be useful to know
the GetLastError() value instead of a translated errno. Can you patch
the code like this (patch smgrread()):

http://archives.postgresql.org/pgsql-bugs/2005-10/msg00050.php


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [ADMIN] ERROR: could not read block

2005-11-16 Thread Kevin Grittner
Is there anything you would like me to include in my build for my
test runs, or any steps you would like me to take during the tests?

-Kevin


>>> Tom Lane <[EMAIL PROTECTED]>  >>>

As I said before, we
really really need to find out what the Windows-level error code is
--- "Invalid argument" isn't telling us anything useful here.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [ADMIN] ERROR: could not read block

2005-11-16 Thread Kevin Grittner
This code generates warnings on both Linux and Windows.  My C
is too rusty to feel confident of what to do.

On Linux:

md.c:445: warning: implicit declaration of function `GetLastError'

On Windows:

md.c:445: warning: int format, DWORD arg (arg 6)
md.c:457: warning: int format, DWORD arg (arg 7)

How should I proceed?

-Kevin


>>> Tom Lane <[EMAIL PROTECTED]>  >>>
"Kevin Grittner" <[EMAIL PROTECTED]> writes:
> Is there anything you would like me to include in my build for my
> test runs, or any steps you would like me to take during the tests?

You might want to insert some debugging elog's into mdread() in md.c,
rather than in its caller smgrread.  I'm concerned that by the time
control comes back to smgrread, the Windows last-error might have
been changed; in any case, we'd not know exactly which of the steps
in mdread failed.  I'd suggest something like

    if (FileSeek(v->mdfd_vfd, seekpos, SEEK_SET) != seekpos)
+   {
+       elog(LOG, "seek failed on relation %u/%u/%u: %d",
+            reln->smgr_rnode.spcNode,
+            reln->smgr_rnode.dbNode,
+            reln->smgr_rnode.relNode,
+            GetLastError());
        return false;
+   }

    status = true;
    if ((nbytes = FileRead(v->mdfd_vfd, buffer, BLCKSZ)) != BLCKSZ)
    {
+       elog(LOG, "read failed on relation %u/%u/%u: %d bytes, %d",
+            reln->smgr_rnode.spcNode,
+            reln->smgr_rnode.dbNode,
+            reln->smgr_rnode.relNode,
+            nbytes,
+            GetLastError());
        /*
         * If we are at or past EOF, return zeroes without complaining. Also
         * substitute zeroes if we found a partial block at EOF.

(untested, but something like this should do it)

regards, tom lane


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [ADMIN] ERROR: could not read block

2005-11-16 Thread Kevin Grittner
Ran with this change.  Didn't take long to hit it.

Let me know if there's anything else I can do.

[2005-11-16 11:59:29.015 ] 4904  LOG:  read failed on relation 1663/16385/1494810: -1 bytes, 1450
[2005-11-16 11:59:29.015 ] 4904  ERROR:  could not read block 25447 of relation 1663/16385/1494810: Invalid argument

>>> Tom Lane <[EMAIL PROTECTED]>  >>>

I think this is just cosmetic, but try %ld instead of %d in the elog
format strings.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [ADMIN] ERROR: could not read block

2005-11-16 Thread Kevin Grittner
Our DBAs reviewed the Microsoft documentation you referenced,
modified the registry, and rebooted the OS.  We've been beating
up on the database without seeing the error so far.  We'll keep at
it for a while.

-Kevin


>>> Qingqing Zhou <[EMAIL PROTECTED]>  >>>


On Wed, 16 Nov 2005, Kevin Grittner wrote:

> Ran with this change.  Didn't take long to hit it.
>
> [2005-11-16 11:59:29.015 ] 4904  LOG:
> read failed on relation 1663/16385/1494810: -1 bytes, 1450
> [2005-11-16 11:59:29.015 ] 4904  ERROR:
> could not read block 25447 of relation 1663/16385/1494810: Invalid
> argument
>

1450 ERROR_NO_SYSTEM_RESOURCES
Insufficient system resources exist to complete the requested service

SQL Server 7.0 experienced the same problem before:
http://support.microsoft.com/default.aspx?scid=kb;en-us;274310

Some registration value tweak to solve it (for backup service):
http://support.microsoft.com/default.aspx?scid=kb;en-us;304101

I have to go out for a while ... Kevin, can you take a look at the 2nd
thread to see if it is possible to change some registry values to
temporarily solve the problem?  A more robust solution would be a retry
loop along the lines of the first thread, I guess.

Cheers,
Qingqing


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] [ADMIN] ERROR: could not read block

2005-11-17 Thread Kevin Grittner
1) We run a couple Java applications on the same box to provide
middle tier access.  When the box is heavily loaded, I think I've
seen about 80% PostgreSQL, 20% Java load.

2) I checked that no antivirus software was running, and had the
techs pare down the services running on that box to the absolute
minimum after the second failure, so that we could eliminate such
issues as possible causes.

3) The aforementioned Java apps hold open 21 database
connections.  (One for a software publisher to query a list of jar
files for access to the database, and 20 for a connection pool in
the middle tier.)  The way the pool is configured, six of those are
used for queries of normal priority, so we rarely have more than
six connections doing anything at any one moment.  During the
initial failure, the middle tier was under normal load, so 45,000
inserts were made to the table in question during the update.
After we hit the problem, we removed that middle tier from the
list of targets, so it was running, but totally idle during the
remaining tests.

None of this seems material, however.  It's pretty clear that the
problem was exhaustion of the Windows page pool.  Our Windows
experts have reconfigured the machine (which had been tuned
for Sybase ASE).  Their changes have boosted the page pool
from 20,000 entries to 180,000 entries.  We're continuing to test
to ensure that the problem is not showing up with this
configuration; but, so far, it looks good.

If we don't want to tell Windows users to make highly technical
changes to the Windows registry in order to use PostgreSQL,
it does seem wise to use retries, as has already been discussed
on this thread.

-Kevin


>>> "Magnus Hagander" <[EMAIL PROTECTED]>  >>>
[copying this one over to hackers]

> Our DBAs reviewed the Microsoft documentation you referenced, 
> modified the registry, and rebooted the OS.  We've been 
> beating up on the database without seeing the error so far.  
> We'll keep at it for a while.

Very interesting. As this seems to be a resource error, a couple of
questions. Sorry if you've already answered some of them, couldn't find
it in the archives.
 
1) Is this a dedicated pg server, or does it have something else on it?

2) We have to ask this - do you run any antivirus on it, that might not
be releasing resources the right way? Anything else that might stick in
a kernel driver?

3) Are you hitting the database with many connections, or is this a
single/few connection scenario? Are the other connections typically
active when this shows up?


Seems like we could just retry when we get this failure. The question is
whether we need to sleep a small amount before we do. Also, we can't just
retry forever; there has to be some kind of end to it...
(If you read the SQL Server KB article, it can be read as saying that
retrying is the correct thing, because the bug in SQL Server was that it
didn't retry.)

//Magnus

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] [ADMIN] ERROR: could not read block

2005-11-17 Thread Kevin Grittner
I'm not an expert on that, but it seems reasonable to me that the
page pool would free space as the I/O system caught up with
the load.  Also, I'm going on what was said by Qingqing and
in one of the pages he referenced:

http://support.microsoft.com/default.aspx?scid=kb;en-us;274310

-Kevin


>>> Tom Lane <[EMAIL PROTECTED]>  >>>
"Kevin Grittner" <[EMAIL PROTECTED]> writes:
> None of this seems material, however.  It's pretty clear that the
> problem was exhaustion of the Windows page pool.
> ...
> If we don't want to tell Windows users to make highly technical
> changes to the Windows registry in order to use PostgreSQL,
> it does seem wise to use retries, as has already been discussed
> on this thread.

Would a simple retry loop actually help?  It's not clear to me how
persistent such a failure would be.

regards, tom lane


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] [ADMIN] ERROR: could not read block

2005-11-17 Thread Kevin Grittner
There weren't a large number of connections -- it seemed to be
that the one big update query, by itself, would do this.  It seemed
to get through a lot of rows before failing.  This table is normally
"insert only" -- so it would likely be getting most or all of the space
for inserting the updated rows from extending the table.  Also, the
only reasonable plan for this update would be a table scan, so
it is possible that the failure occurred some time after the scan got
to rows added by the update statement.

It appears that the techs cleared the eventlog when they
reconfigured the machine, so I can no longer check for events
from the failures.   :-(

-Kevin


>>> "Magnus Hagander" <[EMAIL PROTECTED]>  >>>
> None of this seems material, however.  It's pretty clear that 
> the problem was exhaustion of the Windows page pool.  Our 
> Windows experts have reconfigured the machine (which had been 
> tuned for Sybase ASE).  Their changes have boosted the page 
> pool from 20,000 entries to 180,000 entries.  We're 
> continuing to test to ensure that the problem is not showing 
> up with this configuration; but, so far, it looks good.

Nope, with these numbers it doesn't. I was looking for a reason as to
why it would exhaust the pool - such as a huge number of connections.
Which doesn't appear to be so :-(

Another thing that will affect this is if you have a lot of network
sockets open. Anything like that?


BTW; do you get any eventid 2020 in your eventlog?


> If we don't want to tell Windows users to make highly 
> technical changes to the Windows registry in order to use 
> PostgreSQL, it does seem wise to use retries, as has already 
> been discussed on this thread.

Yeah, I think it's at least worth a try at that.

//Magnus


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] [ADMIN] ERROR: could not read block

2005-11-17 Thread Kevin Grittner
A couple clarifications:

There were only a few network sockets open.

I'm told that the eventlog was reviewed for any events which
might be related to the failures before it was cleared.  They
found none, so that makes it fairly certain there was no 2020
event.

-Kevin


>>> "Kevin Grittner" <[EMAIL PROTECTED]>  >>>
There weren't a large number of connections -- it seemed to be
that the one big update query, by itself, would do this.  It seemed
to get through a lot of rows before failing.  This table is normally
"insert only" -- so it would likely be getting most or all of the space
for inserting the updated rows from extending the table.  Also, the
only reasonable plan for this update would be a table scan, so
it is possible that the failure occurred some time after the scan got
to rows added by the update statement.

It appears that the techs cleared the eventlog when they
reconfigured the machine, so I can no longer check for events
from the failures.   :-(

-Kevin


>>> "Magnus Hagander" <[EMAIL PROTECTED]>  >>>
> None of this seems material, however.  It's pretty clear that 
> the problem was exhaustion of the Windows page pool.  Our 
> Windows experts have reconfigured the machine (which had been 
> tuned for Sybase ASE).  Their changes have boosted the page 
> pool from 20,000 entries to 180,000 entries.  We're 
> continuing to test to ensure that the problem is not showing 
> up with this configuration; but, so far, it looks good.

Nope, with these numbers it doesn't. I was looking for a reason as to
why it would exhaust the pool - such as a huge number of connections.
Which doesn't appear to be so :-(

Another thing that will affect this is if you have a lot of network
sockets open. Anything like that?


BTW; do you get any eventid 2020 in your eventlog?


> If we don't want to tell Windows users to make highly 
> technical changes to the Windows registry in order to use 
> PostgreSQL, it does seem wise to use retries, as has already 
> been discussed on this thread.

Yeah, I think it's at least worth a try at that.

//Magnus


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[ADMIN] Copy Command out of memory

2005-12-14 Thread Kevin Keith

I was trying to run a bulk data load using the COPY command on PGSQL 8.1.0.

After loading about 3,500,000 records it ran out of memory - I am 
assuming because it ran out of space to store such a large transaction. 
Does the COPY command offer a similar feature to Oracle's SQL*Loader 
where you can specify the number of records to load between commit 
statements, or will I have to break the file I am loading into smaller 
files?


Or can a transaction be bypassed altogether with the COPY command since 
any failure (the load is going to an empty table) could easily be solved 
with a reload of the data anyway.
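
For what it's worth, the workaround I have in mind is to split the input
file and load the chunks in separate transactions; a minimal sketch
(table and file names hypothetical), since each COPY run outside an
explicit BEGIN commits on its own:

COPY big_table FROM '/tmp/load/part_aa';
COPY big_table FROM '/tmp/load/part_ab';
-- ... one COPY per chunk produced by split(1)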


Thanks,

Kevin

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[ADMIN] Wisconsin Circuit Court Access (WCCA) on PostgreSQL

2006-03-13 Thread Kevin Grittner
The Consolidated Court Automation Programs (CCAP) of the Wisconsin Court
System has migrated to PostgreSQL for all of its Circuit Court web
operations.  Eight production databases have been converted, six of them
around 180 GB each, holding statewide information replicated real-time
from 72 county databases.  The central copies support audit functions,
statewide statistics and report generation, and this web site:

http://wcca.wicourts.gov/

Given the success of this effort, we expect to be converting the other
court databases to PostgreSQL.

We've been very happy with both the performance of the product and the
support we have received from the mailing lists.  Overall, PostgreSQL
has been faster than the commercial product from which we converted. 
Even more important is the fast response we have had when posting
problems to the lists.  We have normally had a fix within 24 hours. 
Frankly, the support has been amazing.

We wonder how widespread the use of PostgreSQL is within government
agencies; I would love to hear from anyone with experience with this. 
With tight budgets, it seems likely that there may be others moving in
this direction.



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [ADMIN] [pgsql-advocacy] Wisconsin Circuit Court Access (WCCA) on PostgreSQL

2006-03-14 Thread Kevin Grittner
>>> On Tue, Mar 14, 2006 at  2:08 am, in message
<[EMAIL PROTECTED]>, Simon Riggs
<[EMAIL PROTECTED]> wrote: 
> On Mon, 2006-03-13 at 13:27 -0600, Kevin Grittner wrote:
>> Even more important is the fast response we have had when posting
>> problems to the lists.  We have normally had a fix within 24 hours.

>> Frankly, the support has been amazing.
> 
> Kevin, well done. We've all watched your progress with interest.

Thanks to all who have offered congratulations.

> The reason you've got excellent support is because of the detailed
> postings you've made, together with responses to all replies. Doing all
> your homework before posting is essential; unfortunately many people
> don't do this and then leave disappointed.

Here I think you underestimate how well the community helps people in
these lists.  I have witnessed remarkable patience here when people post
vague messages asking for help.  You (as a community) generally succeed
in drawing out sufficient detail to provide good advice, and / or
identify areas for product improvement.  I do try to give as much
information as I can, including reproducible test cases where
practicable; but, I have done so with commercial vendors to whom my
clients have paid big money for support, and been very disappointed.

With one commercial vendor we've routinely been told by first line
support staff that the product was functioning as intended.  After days
of effort, sometimes involving calls from top management, we've gotten
through to someone who can actually understand the problem and
acknowledge the bug; only to have it take months (sometimes over a year)
to get a fix.

With another open source vendor, from whom no support is available
without a paid license and a paid support contract, we (after paying for
a commercial license and a support contract) have been told that such
things as using an OR predicate within the ON clause of a JOIN was an
"unimplemented feature" (even though it worked in simple cases).  They
said they might "add the feature" in the next major release, but that
wouldn't be for at least a year, and no guarantees.

It was unexpected and quite refreshing to provide the same level of
detail in a post to a PostgreSQL list, and get a patch file fast enough
to be running a fixed version within 24 hours of posting the problem. 
When we have been able to provide sufficient detail and / or a test
case, this has usually been the result.  When we participated in the
beta test phase, people were quite helpful in leading me through the use
of unfamiliar tools to capture the information they needed to identify
and fix problems before the official release.

After decades of working as an independent consultant, I've recently
(eight days ago) accepted employment with the Wisconsin Court System as
a DBA, and I'm told that as a court employee I'm not allowed to endorse
one product over another; but, I can speak of my experiences with
products so long as I don't violate any constraints of the license
agreements.  I have worked with quite a few database products in my
career and can say unequivocally that the support I've seen provided for
PostgreSQL is superior to that which I've seen provided for any other
database product.

I don't want to name any names, because I would undoubtedly forget
several very helpful people here, but I have to admit that my personal
favorite was when I posted information about a bug in the JDBC driver
shortly before I left for the day, and while I was sleeping a user in
Germany created a program to cause the race condition, tracked down the
cause, and posted a patch with a suggested fix.  By the time I'd
finished my coffee the next morning, the patch had been reviewed, scaled
back to the minimum change required to effect a fix, applied to CVS, and
a new jar file deployed for download.  Wow.

I can't really accept congratulations for this successful deployment
without offering it right back to the community for all the help you've
provided, as well as the product itself.  Absolutely fantastic, all
around!

-Kevin


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [ADMIN] reg:FilePath error

2006-03-18 Thread Kevin Grittner
If you can't easily use the PostgreSQL extension to SQL, but need to
have string literal behavior conform to the ANSI and ISO standards, I
have submitted a patch for the 8.2 TODO item to support this.  Since we
needed it in production mode now, the patch I submitted was relative to
the 8.1 stable branch.  As far as I know, it has not been reviewed, so
there may be issues I missed, but I can say that we have been using it in
production for several weeks, with maybe 20 million database
transactions per day across 8 servers running on Windows and Linux, and
have not seen any problems with it.

If you search the lists for it, be sure to take the last patch from the
patches list; there were several iterations of this, and you don't want
the earlier ones.
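
For the immediate problem, doubling each backslash in the literal also
works with the stock escape behavior, no patch required; a minimal
illustration in psql (paths shortened):

SELECT 'C:\ProgramFiles\back.doc';    -- \b is taken as an escape here
SELECT 'C:\\ProgramFiles\\back.doc';  -- doubled backslashes survive intact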

-Kevin


>>> "sandhya" <[EMAIL PROTECTED]>  >>>
Hi

When I am trying to insert string data into my table which has a few
escape sequence characters... the entire string is not getting inserted.
For example:
If my String data is 

C:\ProgramFiles\SampleFiles\General\back.doc

C:\ProgramFiles\SampleFiles\General\rback.gif

C:\ProgramFiles\SampleFiles\General\nback.rar etc like this.

But in my table the string is getting stored as,

C:ProgramFilesSampleFilesGeneraack.doc 

Whereas the other strings without these characters are getting inserted
fine.  How is this happening?  How are the strings getting stored into
the tables?

Because of these \b, \n and \r characters, I am unable to store my data
into the database and retrieve it.

Is there any solution for this?  Or do we need to take care of these
situations/cases ourselves?

Please give me your suggestions to resolve this issue.
Thank you very much..
Regards,
Sandhya R


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [ADMIN] pg_stat_activity showing non-existent processes

2006-04-03 Thread Kevin Grittner
>>> On Sat, Mar 25, 2006 at  8:40 pm, in message
<[EMAIL PROTECTED]>,
Tom Lane <[EMAIL PROTECTED]> wrote: 
> Jerry Sievers <[EMAIL PROTECTED]> writes:
>> At any rate; I'm wondering what possible causes might be responsible
>> for pg_stat_activity's underlying functions to lose track of the valid
>> process list?
> 
> It sounds like the stats collector missed a few "backend quit"
> messages.  This isn't real surprising: the stats messaging mechanism is
> intentionally designed to drop messages under severe load, rather than
> slow down backends.

Is there any way to tweak this in favor of more accurate information,
even if it has a performance cost?  We're finding that during normal
operations we're not seeing most connections added to the
pg_stat_activity table.  We would like to be able to count on accurate
information there.  We've been considering adding functions to get at
the underlying structures to be able to retrieve it, but it would make a
lot of sense (for us, anyway) to make this table accurate instead.  What
would be involved in that?  Would it improve the accuracy of the other
statistics, as well?  Would anyone else be interested in something like
this (probably controlled by a configuration option), or are we unique
in this regard?

-Kevin


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [ADMIN] pg_stat_activity showing non-existent processes

2006-04-03 Thread Kevin Grittner
>>> On Mon, Apr 3, 2006 at 11:52 am, in message
<[EMAIL PROTECTED]>,
Tom Lane <[EMAIL PROTECTED]> wrote: 
> "Kevin Grittner" <[EMAIL PROTECTED]> writes:
>> Is there any way to tweak this in favor of more accurate information,
>> even if it has a performance cost?  We're finding that during normal
>> operations we're not seeing most connections added to the
>> pg_stat_activity table.  We would like to be able to count on accurate
>> information there.
> 
> That's basically a non-starter because of the delay in reporting from
> the stats collector process (ie, even if the information was "completely
> accurate" it'd still be stale by the time that your code gets its hands
> on it).  I think you'd be talking about a complete redesign of the stats
> subsystem to be able to use it that way.

We want this for our monitoring software, to raise an alert when the
connection pool diverges from its nominal configuration beyond
prescribed limits or in excess of a prescribed duration.  What we're
looking for is not necessarily a table which is accurate immediately,
but one which won't entirely miss a connection.  Even then, if it only
misbehaves under extreme load, that would be OK; such extreme usage
might be worthy of note in and of itself.
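
The check itself is nothing fancy; the sort of query the monitor would
run is roughly this (21 is just our nominal pool size here):

SELECT count(*) AS connections FROM pg_stat_activity;
-- alert when this diverges from the nominal 21 for too long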

Since we have converted to PostgreSQL we have not had this monitoring,
and folks are nervous that we will not detect a struggling middle tier
before it fails.  (Not something that happens often, but we really hate
having users tell us that something is broken, versus spotting the
impending failure and correcting it before it fails.)

> Having said that, though, I'd be pretty surprised if the stats subsystem
> was dropping more than a small fraction of messages --- I would think
> that could only occur under very heavy load, and if that's your normal
> operating state then it's time to upgrade your hardware ;-).

We have a pair of database servers for our transaction repository. 
Each has four Xeon processors.  One of these is Windows, one is Linux. 
On the Windows machine, I see 10% CPU utilization.  On the Linux machine
I see a load average of 0.30.  The Linux machine seems to be very
reliable about showing the connections.  The Windows machine, when I
refresh a 20-connection pool, I either get no connections showing, or
only a few.

> Maybe you
> should investigate a bit more closely to find out why it's dropping so
> much.

It is probably related to something we've been seeing in the PostgreSQL
logs on the Windows servers:

[2006-04-03 08:28:25.990 ] 2072 FATAL:  could not read from statistics
collector pipe: No error
[2006-04-03 08:28:26.068 ] 2012 LOG:  statistics collector process (PID
3268) was terminated by signal 1

We're going to patch to try to capture more info from WinSock.

In src/port/pipe.c we plan to add before return ret in piperead():

if (ret == SOCKET_ERROR)
{
    ereport(LOG, (errmsg_internal("SOCKET ERROR: %ui",
                                  WSAGetLastError())));
}

I hope to post more info, and possibly a patch, tomorrow.

-Kevin


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[ADMIN] Error Handling with COPY command

2006-04-05 Thread Kevin Keith
I was wondering if anything has been implemented, or is in the works for 
a future version, whereby -- given a setting / flag for the maximum 
number of errors -- the COPY command would not fail on an error and 
would continue loading data. It would then put the data that didn't load 
due to errors in another location (i.e. an error data file, or an error 
table). When performing bulk data loads, it would be useful to have this 
type of feature, and it would avoid the need to vacuum the table after a 
large data load quits due to one error.


I have worked with Oracle in the past, and it has a similar feature in 
SQL*Loader. A threshold is set to x, and it will only stop after x 
errors have been encountered.


I have seen a few threads on this subject - I was wondering if this is / 
will be included in a future version, or whether the concept is dead 
because of its potential risks (i.e. the COPY command would be told to 
ignore errors and continue).


Thanks,
Kevin

---(end of broadcast)---
TIP 6: explain analyze is your friend


[ADMIN] Alter table set statistics with partitioned tables

2006-07-24 Thread Kevin Keith
I have had to bump the stats on a partitioned table in order to get the 
planner to use an index over a seqscan. This has worked well in making 
the system perform where it needs to, as it reduced one query's execution 
time from > 45 seconds to < 1 second.


The one problem I have run into is that when I create a new child table 
/ table partition, the stats value (attstattarget in pg_attribute) does 
not get carried over from the parent - instead the system default (-1) 
is set. Is it supposed to behave this way - meaning that I need to 
explicitly define the custom value for each child table? Or is there a 
way for this to be implicitly copied when the child table is created?
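
What I do today after creating each partition is roughly this (table and
column names hypothetical):

ALTER TABLE measurements_2006_07
  ALTER COLUMN reading_time SET STATISTICS 1000;
ANALYZE measurements_2006_07;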


I prefer not to change the system default on all the tables in the 
database because the value needs to be increased for one case.


Thanks,

Kevin

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[ADMIN] Disk space consumed by pk not returned after vacuum or reindex

2006-09-13 Thread Kevin Johnson
Hello,

I am running into a problem on a RHEL3 systems, running PostgreSQL 7.4

We have a database, which consistently consumes more and more of the
disk space in its lvol until it reaches 100%.  So far, we have tried
to run a full vacuum on the database, with limited success. 
Eventually, we had to drop and reload the database with the same data
inside.  It brought the disk usage down to 73%.  It then began to creep
once more toward 100%.  After some research, I was able to use the
pg_class catalog to find that the items which are expanding the
quickest are primary key (btree) indexes.  I attempted to run a REINDEX
on one of the tables with the pk taking up the largest amount of
space.  The usage according to pg_class dropped dramatically, however
the disk space was not returned to the system.  So I attempted another
full vacuum afterwards, and still nothing was returned to the system. 
These tables are updated extremely frequently (although their total
number of rows is close to constant), which is my guess as to why the
pk indexes increase so rapidly in terms of their disk usage. 
Unfortunately, PostgreSQL knowledge is limited, and I was wondering if
anyone had experienced something similar / knows what else we can do to
return this disk space back to the system? 

Thank you in advance for any/all help!

Kevin





---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [ADMIN] Disk space consumed by pk not returned after vacuum or reindex

2006-10-05 Thread Kevin Johnson
Thank you for the suggestion, Bruno.  The clustering did the trick in
reducing the current disk usage; however, eventually the disk space gets
consumed once more.  I fear that we may just need to update the version
of Postgres to help alleviate index bloat!
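
(For reference, the pre-8.3 CLUSTER form we ran looks roughly like this,
with hypothetical names:

CLUSTER my_table_pkey ON my_table;

It rewrites the heap in index order and rebuilds all of the table's
indexes, which is what released the space.)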


Bruno Wolff III wrote:

On Wed, Sep 13, 2006 at 09:53:16 -0400,
Kevin Johnson <[EMAIL PROTECTED]> wrote:
We have a database, which consistently consumes more and more of the 
disk space in its lvol until it reaches 100%.  So far, we have tried to 
run a full vacuum on the database, with limited success.  Eventually, we 
had to drop and reload the database with the same data inside.  It 
brought the disk usage down to 73%.  It then began to creep once more 
toward 100%.  After some research, I was able to use the pg_class 
catalog to find that the items which are expanding the quickest are 
primary key (btree) indexes.  I attempted to run a REINDEX on one of the 
tables with the pk taking up the largest amount of space.  The usage 
according to pg_class dropped dramatically, however the disk space was 
not returned to the system.  So I attempted another full vacuum 
afterwards, and still nothing was returned to the system.  These tables 
are updated extremely frequently (although their total number of rows is 
close to constant), which is my guess as to why the pk indexes increase 
so rapidly in terms of their disk usage.  Unfortunately, PostgreSQL 
knowledge is limited, and I was wondering if anyone had experienced 
something similar / knows what else we can do to return this disk space 
back to the system?

  
  
This is possibly index bloat due to new keys always being larger than existing
keys. This was fixed in later releases. There is still some potential for
bloat due to fragmentation, but I believe that has a constant bound.
You might try using the cluster command. I think that will both clean up
the indexes and remove dead rows and do it faster than using a vacuum full
and reindexing. The downside is that the table will be unavailable during
the cluster which might be a deal breaker for you.
  


-- 
Kevin Johnson
Raytheon
AWIPS Sr. Systems Engineer
NWS Network Control Facility
p: 301.713.9362x325
f: 301.713.1905




Re: [ADMIN] Disk space consumed by pk not returned after vacuum or reindex

2006-10-06 Thread Kevin Johnson
You are correct, it is 7.4 we are running on a RHEL3 system.
The database itself is vacuumed via cron 6 times a day.

Jim Nasby wrote:
I didn't see you mention what version you're running;
index bloat shouldn't be a big issue in 7.4 and above. You also didn't
mention how often you're vacuuming the table. If you don't vacuum the
table frequently enough, you're going to get bloat, plain and simple.
  
  
On Oct 5, 2006, at 11:24 AM, Kevin Johnson wrote:
  
  Thank you for the suggestion, Bruno.  The
clustering did the trick in reducing the current disk usage, however
eventually the disk space get consumed once more.  I fear that we may
just need to update the version of Postgres to help alleviate index
bloat!


Bruno Wolff III wrote:

On Wed, Sep 13, 2006 at 09:53:16 -0400,
Kevin Johnson <[EMAIL PROTECTED]> wrote:
  
  We have a database, which consistently
consumes more and more of the disk space in its lvol until it reaches
100%. So far, we have tried to run a full vacuum on the database, with
limited success. Eventually, we had to drop and reload the database
with the same data inside. It brought the disk usage down to 73%. It
then began to creep once more toward 100%. After some research, I was
able to use the pg_class catalog to find that the items which are
expanding the quickest are  primary key (btree) indexes. I attempted to
run a REINDEX on one of the tables with the pk taking up the largest
amount of space. The usage according to pg_class dropped dramatically,
however the disk space was not returned to the system. So I attempted
another full vacuum afterwards, and still nothing was returned to the
system. These tables are updated extremely frequently (although their
total number of rows is close to constant), which is my guess as to why
the pk indexes increase so rapidly in terms of their disk usage.
Unfortunately, PostgreSQL knowledge is limited, and I was wondering if
anyone had experienced something similar / knows what else we can do to
return this disk space back to the system?

  
This is possibly index bloat due to new keys always being larger than
existing keys. This was fixed in later releases. There is still some
potential for bloat due to fragmentation, but I believe that has a
constant bound. You might try using the cluster command. I think that
will both clean up the indexes and remove dead rows and do it faster
than using a vacuum full and reindexing. The downside is that the table
will be unavailable during the cluster which might be a deal breaker
for you.
  

  
--
Jim Nasby    [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)


-- 
Kevin Johnson
Raytheon
AWIPS Sr. Systems Engineer
NWS Network Control Facility
p: 301.713.9362x325
f: 301.713.1905




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[ADMIN] Strange error on start up on a RH9 system - Help Please

2004-01-26 Thread Kevin Verma
Hello Every one,

I have a local server here and this is what's happening:

[EMAIL PROTECTED] root]# service postgresql start
Starting postgresql service: su: incorrect password
[FAILED]

I am able to perform all the actions manually on the command line, but
not with the postgresql init script.

Please help me with the troubleshooting procedure.

Regards,
Kevin


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[ADMIN] Mac OS 10.3 Panther make questions

2004-02-16 Thread Kevin Barnard
I am trying to get PostgreSQL 7.4.1 to build with Mac OS 10.3.  I run 
configure, which complains about readline.  I'm not sure how Apple 
compiled bash, which is the new default shell, but I can't find the 
readline library or headers either.  I have installed Xcode and found a 
few notes online about postgres 7.3.4 and OS 10.2 which talk about 
building readline and using the /sw directory, which does not exist on 
my system.  Does anybody know what I am missing?

Thank you in advance for any help
Kevin Barnard
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [ADMIN] Mac OS 10.3 Panther make questions

2004-02-18 Thread Kevin Barnard
On Feb 16, 2004, at 8:27 PM, Tom Lane wrote:

Jeremy Buchmann <[EMAIL PROTECTED]> writes:
...  Readline is available, and this may be the way to go if you
can't get readline to compile on its own.
There isn't anything difficult about installing readline from source
on Panther.  Untar, configure, make, sudo make install.
Alternatively you can just configure postgres --without-readline.
psql is a lot less pleasant to use without it, but if you don't
use psql much you may not care.
			regards, tom lane

One would think this to be the case.  I keep getting the following link 
error

warning -dylib_install_name /usr/local/lib/libreadline.4.3.dylib not 
found in segment address table LD_SEG_ADDR_TABLE 
/sw/var/lib/fink/prebound/seg_addr_table

This is really a readline issue, not a postgres issue.  I will be using 
psql quite often, so readline is very key to my sanity. :-)
I was able to compile the fink version, so for now I'll just use that, 
and of course postgres configure is happy with it.

It just doesn't make sense that Apple would take the time to build bash 
and not bother to include the readline library.  I wonder if they've 
modified readline a little and therefore use a static version.  I guess 
I could go look at the source in Darwin and see.  Even with all of this 
trouble I still think OS X beats a cygwined Windows for a client 
machine.

Thanks again everyone for your help.

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


[ADMIN] Transaction log file screwed up

2004-04-22 Thread Kevin Schroeder
Hello,
A client database crashed last night and I think I found the solution to
the problem at http://www.varlena.com/varlena/GeneralBits/45.php "Database
Recovery Procedures".  The fix described there was to append the file with
enough zeros to get the file size correct, by running

dd bs=8k count=1 < /dev/zero >> $PGDATA/pg_clog/0005

which added the zeros to the end.  However, my error values are a little
different from the ones in the example, and the person who posted this
noted that it should NOT be tried haphazardly.  Since I'd rather not crash
a client's database to the point of no recovery, I wouldn't mind having
someone with more knowledge of PostgreSQL post the command with the
correct arguments to fix the problem, or tell me I'm off my rocker for
trying to fix it this way.

Following is pg_ctl's output when trying to start the database.  Following
that is the directory listing for the pg_clog directory.

LOG:  checkpoint record is at 1/383BDFC0
LOG:  redo record is at 1/383BDFC0; undo record is at 0/0; shutdown TRUE
LOG:  next transaction id: 15038948; next oid: 3293693
LOG:  database system was not properly shut down; automatic recovery in
progress
LOG:  ReadRecord: unexpected pageaddr 1/363BE000 in log file 1, segment 56,
offset 3923968
LOG:  redo is not required
PANIC:  XLogWrite: write request 1/383BE000 is past end of log 1/383BE000
LOG:  startup process (pid 1555) was terminated by signal 6
LOG:  aborting startup due to startup process failure

pgsql/data/pg_clog/*

-rw-------  1 postgres  postgres  262144 Mar 14 13:10 0000
-rw-------  1 postgres  postgres  262144 Mar 24 15:35 0001
-rw-------  1 postgres  postgres  262144 Mar 27 00:04 0002
-rw-------  1 postgres  postgres  262144 Mar 29 18:06 0003
-rw-------  1 postgres  postgres  262144 Apr  1 00:39 0004
-rw-------  1 postgres  postgres  262144 Apr  3 09:54 0005
-rw-------  1 postgres  postgres  262144 Apr  5 21:39 0006
-rw-------  1 postgres  postgres  262144 Apr  8 03:26 0007
-rw-------  1 postgres  postgres  262144 Apr 10 10:15 0008
-rw-------  1 postgres  postgres  262144 Apr 12 21:05 0009
-rw-------  1 postgres  postgres  262144 Apr 15 00:32 000A
-rw-------  1 postgres  postgres  262144 Apr 17 03:38 000B
-rw-------  1 postgres  postgres  262144 Apr 19 13:07 000C
-rw-------  1 postgres  postgres  262144 Apr 21 13:38 000D
-rw-------  1 postgres  postgres   90112 Apr 22 07:01 000E

Thanks

Kevin


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[ADMIN] initdb error

2004-05-04 Thread Kevin Schroeder
Hello,
I'm trying to install PostgreSQL on Solaris 9.  Everything compiled
properly, but when I run initdb I get the following error:

Sun Microsystems Inc.   SunOS 5.9   Generic May 2002
The files belonging to this database system will be owned by user "pgsql".
This user must also own the server process.

The database cluster will be initialized with locale C.

creating directory /usr/local/pgsql/data... ok
creating directory /usr/local/pgsql/data/base... ok
creating directory /usr/local/pgsql/data/global... ok
creating directory /usr/local/pgsql/data/pg_xlog... ok
creating directory /usr/local/pgsql/data/pg_clog... ok
selecting default max_connections... 40
selecting default shared_buffers... 800
creating configuration files... ok
creating template1 database in /usr/local/pgsql/data/base/1... ok
initializing pg_shadow... ok
enabling unlimited row size for system tables... ok
initializing pg_depend... ok
creating system views... ok
loading pg_description... ok
creating conversions... ERROR:  could not load library
"/usr/local/pgsql/lib/ascii_and_mic.so": ld.so.1:
/usr/local/pgsql/bin/postgres: fatal: libgcc_s.so.1: open failed: No such
file or directory

I found libgcc_s.so.1 in /usr/local/lib and that directory is in
$LD_LIBRARY_PATH.  Is there anything that I'm missing here or that I need to
look for in my configure settings?

Kevin


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [ADMIN] initdb error

2004-05-05 Thread Kevin Schroeder
That would seem to be the problem.

Thanks.

Kevin

- Original Message - 
From: "Jim Seymour" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, May 04, 2004 6:26 PM
Subject: Re: [ADMIN] initdb error


> "Kevin Schroeder" <[EMAIL PROTECTED]> wrote:
> >
> > Hello,
> > I'm trying to install PostgreSQL on Solaris 9.  Everything compiled
> > properly, but when I run initdb I get the following error:
> >
> [snip]
> > creating conversions... ERROR:  could not load library
> > "/usr/local/pgsql/lib/ascii_and_mic.so": ld.so.1:
> > /usr/local/pgsql/bin/postgres: fatal: libgcc_s.so.1: open failed: No such
> > file or directory
> >
> > I found libgcc_s.so.1 in /usr/local/lib and that directory is in
> > $LD_LIBRARY_PATH.  Is there anything that I'm missing here or that I need
> > to look for in my configure settings?
>
> Is that directory in LD_LIBRARY_PATH when you're su'd to the postgreSQL
> user?
>
> I added
>
> LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib; export LD_LIBRARY_PATH
>
> to /etc/init.d/postgresql and created a .profile in its home directory
> with that in it.
>
> Solved all those problems :).
>
> You can also solve such problems by using the -R switch during build
> (man ld) and with Solaris 8/9's new "crle" utility.
>
> -- 
> Jim Seymour  | PGP Public Key available at:
> [EMAIL PROTECTED] |
http://www.uk.pgp.net/pgpnet/pks-commands.html
> http://jimsun.LinxNet.com|
>
> ---(end of broadcast)---
> TIP 8: explain analyze is your friend


---(end of broadcast)---
TIP 8: explain analyze is your friend


[ADMIN] Shared memory error using initdb on Solaris 8

2004-05-21 Thread Kevin Schroeder
Hello,
I'm trying to install PostgreSQL 7.4.2 on a brand new SunFire 120 with
2GB of RAM but when I run initdb -D /usr/local/pgsql/data I get the
following error:

creating directory /usr/local/pgsql/data... ok
creating directory /usr/local/pgsql/data/base... ok
creating directory /usr/local/pgsql/data/global... ok
creating directory /usr/local/pgsql/data/pg_xlog... ok
creating directory /usr/local/pgsql/data/pg_clog... ok
selecting default max_connections... 10
selecting default shared_buffers... 50
creating configuration files... ok
creating template1 database in /usr/local/pgsql/data/base/1... FATAL:  could
not create shared memory segment: Invalid argument
DETAIL:  Failed system call was shmget(key=1, size=1081344, 03600).
HINT:  This error usually means that PostgreSQL's request for a shared
memory segment exceeded your kernel's SHMMAX parameter.  You can either
reduce the request size or reconfigure the kernel with larger SHMMAX.  To
reduce the request size (currently 1081344 bytes), reduce PostgreSQL's
shared_buffers parameter (currently 50) and/or its max_connections parameter
(currently 10).
If the request size is already small, it's possible that it is less
than your kernel's SHMMIN parameter, in which case raising the request size
or reconfiguring SHMMIN is called for.
The PostgreSQL documentation contains more information about shared
memory configuration.

initdb: failed
initdb: removing data directory "/usr/local/pgsql/data"

When I run ulimit -a I get

time(seconds)unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes)8192
coredump(blocks) 0
nofiles(descriptors) 256
vmemory(kbytes)  unlimited

There does not seem to be an option in initdb to reduce the shared buffers
size.  Plus, with 2GB of RAM I don't know that I'd want to go below the
"lowest common denominator" that Postgres defaults to.

Kevin


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [ADMIN] Shared memory error using initdb on Solaris 8

2004-05-21 Thread Kevin Schroeder
There we go.  For reference, what page did you get that info from, and
what did you search for to find it?

Kevin

- Original Message - 
From: "Jim Seymour" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, May 21, 2004 10:57 AM
Subject: Re: [ADMIN] Shared memory error using initdb on Solaris 8


> >
> > Hello,
> > I'm trying to install PostgreSQL 7.4.2 on a brand new SunFire 120 with
> > 2GB of RAM but when I run initdb -D /usr/local/pgsql/data I get the
> > following error:
> >
> [snip]
> > creating template1 database in /usr/local/pgsql/data/base/1... FATAL:  could
> > not create shared memory segment: Invalid argument
> > DETAIL:  Failed system call was shmget(key=1, size=1081344, 03600).
> > HINT:  This error usually means that PostgreSQL's request for a shared
> > memory segment exceeded your kernel's SHMMAX parameter.
> [snip]
> >
> > When I run ulimit -a I get
> >
> [snip]
>
> You're looking at the wrong thing.  You need to do:
>
> sysdef |egrep -i 'shm|sem'
>
> >
> > There does not seem to be an option in initdb to reduce the shared buffers
> > size.
>
> It tries to reduce things as far as it "sanely" can to fit within
> what's available.  You can hand-tweak initdb to over-ride its
> limits, but you'd end-up with a sub-optimal installation.
>
> > Plus, with 2GB of RAM I don't know that I'd want to go below the
> > "lowest common denominator" that Postgres defaults to.
>
> Nope.  You need to adjust certain values by placing settings in
> /etc/system and rebooting.  I use:
>
> /etc/system
> set shmsys:shminfo_shmmax=0x2000000  (33554432 decimal)
> set shmsys:shminfo_shmmin=1
> set shmsys:shminfo_shmmni=256
> set shmsys:shminfo_shmseg=256
>
> set semsys:seminfo_semmap=256
> set semsys:seminfo_semmni=512
> set semsys:seminfo_semmns=512
> set semsys:seminfo_semmsl=32
>
> I arrived at the above values from Google'ing.
>
> Jim
>
> ---(end of broadcast)---
> TIP 5: Have you checked our extensive FAQ?
>
>http://www.postgresql.org/docs/faqs/FAQ.html


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] Shared memory error using initdb on Solaris 8

2004-05-21 Thread Kevin Schroeder
Searching Google brought up this page.

http://www.postgresql.org/docs/current/interactive/kernel-resources.html



- Original Message - 
From: "Jim Seymour" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, May 21, 2004 1:14 PM
Subject: Re: [ADMIN] Shared memory error using initdb on Solaris 8


> >
> > There we go.  For reference, what page did you get that info from, and
> > what did you search for to find it?
>
> Like I said: I derived it all from Google'ing on various search
> terms, IIRC.  Probably things like "postgresql+Solaris+shmmax" and
> the like.  I didn't save any URL references.
>
> Jim
>
> ---(end of broadcast)---
> TIP 8: explain analyze is your friend


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [ADMIN] Stupid mistake: Forgot to dump when upgrading. (7.3 -> 7.4.2)

2004-06-23 Thread Kevin Schroeder
Try re-installing the old PostgreSQL version, dumping the data, then
installing the new version.

Kevin

- Original Message - 
From: "Jón Ragnarsson" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, June 23, 2004 12:29 PM
Subject: [ADMIN] Stupid mistake: Forgot to dump when upgrading. (7.3 ->
7.4.2)


> I upgraded Postgres on my computer and forgot to dump the data. (Yes,
> very stupid) And now 7.4.2 refuses to read the datafiles. Is there
> anything that I can do?
> Maybe I should ask people over at redhat/fedora if I could downgrade
> Postgres?
> J.
>
> ---(end of broadcast)---
> TIP 6: Have you searched our list archives?
>
>http://archives.postgresql.org
>
>


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[ADMIN] psql won't stayed connected

2004-08-05 Thread Kevin Izzet


Hi All,

Just installed postgresql-7.4.3 for the first time and all appears to have gone fine, apart from this:

When I try to connect to a test database from a remote client, psql appears to connect fine,
but it then drops straight back to the command prompt without any errors or messages.

I have enabled debugging but all I get on the client side is

bash-2.03$ psql -h ukl** -d test -U kevin
bash-2.03$ DEBUG:  InitPostgres
bash-2.03$

And on the serverside I get

Aug  5 17:01:38 uklnx04 postgres[27887]: [222-1] DEBUG:  0: forked new backend, pid=28522 socket=10
Aug  5 17:01:38 uklnx04 postgres[27887]: [222-2] LOCATION:  BackendStartup, postmaster.c:2235
Aug  5 17:01:38 uklnx04 postgres[28522]: [222-1] LOG:  0: connection received: host=madge.nsc.com port=44996
Aug  5 17:01:38 uklnx04 postgres[28522]: [222-2] LOCATION:  BackendFork, postmaster.c:2395
Aug  5 17:01:38 uklnx04 postgres[28522]: [223-1] LOG:  0: connection authorized: user=kevini database=test
Aug  5 17:01:38 uklnx04 postgres[28522]: [223-2] LOCATION:  BackendFork, postmaster.c:2457
Aug  5 17:01:38 uklnx04 postgres[28522]: [224-1] DEBUG:  0: /usr/local/pgsql/bin/postmaster child[28522]: starting with (
Aug  5 17:01:38 uklnx04 postgres[28522]: [224-2] LOCATION:  BackendFork, postmaster.c:2554
Aug  5 17:01:38 uklnx04 postgres[28522]: [225-1] DEBUG:  0:         postgres
Aug  5 17:01:38 uklnx04 postgres[28522]: [225-2] LOCATION:  BackendFork, postmaster.c:2557
Aug  5 17:01:38 uklnx04 postgres[28522]: [226-1] DEBUG:  0:         -v196608
Aug  5 17:01:38 uklnx04 postgres[28522]: [226-2] LOCATION:  BackendFork, postmaster.c:2557
Aug  5 17:01:38 uklnx04 postgres[28522]: [227-1] DEBUG:  0:         -p
Aug  5 17:01:38 uklnx04 postgres[28522]: [227-2] LOCATION:  BackendFork, postmaster.c:2557
Aug  5 17:01:38 uklnx04 postgres[28522]: [228-1] DEBUG:  0:         test
Aug  5 17:01:38 uklnx04 postgres[28522]: [228-2] LOCATION:  BackendFork, postmaster.c:2557
Aug  5 17:01:38 uklnx04 postgres[28522]: [229-1] DEBUG:  0: )
Aug  5 17:01:38 uklnx04 postgres[28522]: [229-2] LOCATION:  BackendFork, postmaster.c:2559
Aug  5 17:01:38 uklnx04 postgres[28522]: [230-1] DEBUG:  0: InitPostgres
Aug  5 17:01:38 uklnx04 postgres[28522]: [230-2] LOCATION:  PostgresMain, postgres.c:2639
Aug  5 17:01:38 uklnx04 postgres[28522]: [231-1] DEBUG:  0: proc_exit(0)
Aug  5 17:01:38 uklnx04 postgres[28522]: [231-2] LOCATION:  proc_exit, ipc.c:95
Aug  5 17:01:38 uklnx04 postgres[28522]: [232-1] DEBUG:  0: shmem_exit(0)
Aug  5 17:01:38 uklnx04 postgres[28522]: [232-2] LOCATION:  shmem_exit, ipc.c:126
Aug  5 17:01:38 uklnx04 postgres[28522]: [233-1] DEBUG:  0: exit(0)
Aug  5 17:01:38 uklnx04 postgres[28522]: [233-2] LOCATION:  proc_exit, ipc.c:113
Aug  5 17:01:38 uklnx04 postgres[27887]: [223-1] DEBUG:  0: reaping dead processes
Aug  5 17:01:38 uklnx04 postgres[27887]: [223-2] LOCATION:  reaper, postmaster.c:1819
Aug  5 17:01:38 uklnx04 postgres[27887]: [224-1] DEBUG:  0: child process (PID 28522) exited with exit code 0
Aug  5 17:01:38 uklnx04 postgres[27887]: [224-2] LOCATION:  LogChildExit, postmaster.c:2078

The server is running on a Linux HA pair (SuSE 9.0 + DRBD + Heartbeat) and the client is a Solaris 8 box;
I have also tried connecting from a Linux client on a remote box.
Connecting locally works fine.

Any help would be much appreciated, I'm trying to get a new project off the ground :-)

Thanks in advance

Kevin

Database / Unix  Administrator
Tel:     (Code)+44(0)1475 655606
Fax:    (Code)+44(0)1475 637755
Email:  [EMAIL PROTECTED]



Re: [ADMIN] psql won't stayed connected

2004-08-06 Thread Kevin Izzet


Hi Tom,

Thanks for the reply, but forgive my ignorance: how do I set up the debugger breakpoint?




Regards

Kevin Izzet

Database / Unix  Administrator
Tel:     (Code)+44(0)1475 655606
Fax:    (Code)+44(0)1475 637755
Email:  [EMAIL PROTECTED]






"Tom Lane" <[EMAIL PROTECTED]>
05/08/2004 20:13

        
        To:        "Kevin Izzet" <[EMAIL PROTECTED]>
        cc:        [EMAIL PROTECTED]
        Subject:        Re: [ADMIN] psql won't stayed connected


"Kevin Izzet" <[EMAIL PROTECTED]> writes:
> Aug  5 17:01:38 uklnx04 postgres[28522]: [230-1] DEBUG:  0: 
> InitPostgres
> Aug  5 17:01:38 uklnx04 postgres[28522]: [230-2] LOCATION:  PostgresMain, 
> postgres.c:2639
> Aug  5 17:01:38 uklnx04 postgres[28522]: [231-1] DEBUG:  0: 
> proc_exit(0)
> Aug  5 17:01:38 uklnx04 postgres[28522]: [231-2] LOCATION:  proc_exit, 
> ipc.c:95

That's pretty bizarre ... I didn't think there were any code paths that
would get to proc_exit without emitting an error message.

Can you set a debugger breakpoint at proc_exit and see what the call
stack looks like?  (You can slow things down enough to attach to the
backend with gdb by using the -W switch:
                 PGOPTIONS="-W 30" psql ...

                                                   regards, tom lane







Re: [ADMIN] psql won't stayed connected

2004-08-06 Thread Kevin Izzet

Hi Tom,

Managed to get the debugger to work and here are the results, don't understand the results mind :-)

#0  0x08155d55 in proc_exit ()
#1  0x08161c17 in PostgresMain ()
#2  0x0813eee0 in BackendFork ()
#3  0x0813e8d6 in BackendStartup ()
#4  0x0813d1c6 in ServerLoop ()
#5  0x0813cda3 in PostmasterMain ()
#6  0x08110f56 in main ()

Thanks for your help sofar...

p.s. Installed phpPgAdmin and that appears to work fine, don't know if that makes any difference




Regards

Kevin Izzet

Database / Unix  Administrator
Tel:     (Code)+44(0)1475 655606
Fax:    (Code)+44(0)1475 637755
Email:  [EMAIL PROTECTED]






"Tom Lane" <[EMAIL PROTECTED]>
06/08/2004 15:41

        
        To:        "Kevin Izzet" <[EMAIL PROTECTED]>
        cc:        [EMAIL PROTECTED]
        Subject:        Re: [ADMIN] psql won't stayed connected


"Kevin Izzet" <[EMAIL PROTECTED]> writes:
> Thanks for the reply, but forgive my ignorance: how do I set up the 
> debugger breakpoint?

Something like this:

                 PGOPTIONS="-W 30" psql ...
                 Use ps to determine PID of backend connected to psql
                 gdb /path/to/postgres-executable PID-of-backend
                 gdb> break proc_exit
                 gdb> continue
                 (wait for rest of timeout to expire)
                 gdb will report reaching proc_exit breakpoint
                 gdb> bt
                 ... interesting result is here ...
                 gdb> quit

If you've never done this before it will probably take you more than 30
seconds to get into gdb --- adjust the W option accordingly.

                                                   regards, tom lane







Re: [ADMIN] psql won't stayed connected

2004-08-09 Thread Kevin Izzet

Hi Tom,

Below is an extract from a truss of the psql login; it looks fine to me. I used the same source to build the Solaris client as I used for building the
Linux server, and I have also installed the client on a separate Linux server with the same results.

I've tried using ident, md5 and trust for login. When using md5 the command scrolls off the window repeatedly asking for a password; the only way to stop it
is to kill the pid of the psql command or stop/restart the server..

We don't have ssl setup here and I unfortunately don't have the time to work on that one

Your help is much appreciated..

937:    stat64("/home/kevini/.pgpass", 0xFFBEEEC8)      Err#2 ENOENT
937:    open("/etc/netconfig", O_RDONLY|O_LARGEFILE)    = 4
937:    brk(0x00048C48)                                 = 0
937:    brk(0x0004AC48)                                 = 0
937:    fcntl(4, F_DUPFD, 0x0100)                   Err#22 EINVAL
937:    read(4, " # p r a g m a   i d e n".., 1024)     = 1024
937:    read(4, " t s           t p i _ c".., 1024)     = 215
937:    read(4, 0x00048C00, 1024)                       = 0
937:    lseek(4, 0, SEEK_SET)                           = 0
937:    read(4, " # p r a g m a   i d e n".., 1024)     = 1024
937:    read(4, " t s           t p i _ c".., 1024)     = 215
937:    read(4, 0x00048C00, 1024)                       = 0
937:    close(4)                                        = 0
937:    open("/dev/udp", O_RDONLY)                      = 4
937:    ioctl(4, 0xC00C6982, 0xFFBEEB7C)                = 0
937:    close(4)                                        = 0
937:    door_info(3, 0xFFBECB00)                        = 0
937:    door_call(3, 0xFFBECAE8)                        = 0
937:    door_info(3, 0xFFBECA80)                        = 0
937:    door_call(3, 0xFFBECA68)                        = 0
937:    brk(0x0004AC48)                                 = 0
937:    brk(0x0004CC48)                                 = 0
937:    so_socket(2, 2, 0, "", 1)                       = 4
937:    setsockopt(4, 6, 1, 0xFFBEEC2C, 4, 1)           = 0
937:    fstat64(4, 0xFFBEEAC8)                          = 0
937:    getsockopt(4, 65535, 8192, 0xFFBEEBC8, 0xFFBEEBC4, 0) = 0
937:    setsockopt(4, 65535, 8192, 0xFFBEEBC8, 4, 0)    = 0
937:    fcntl(4, F_SETFL, 0x0080)                   = 0
937:    connect(4, 0x0003F300, 16, 1)                   Err#150 EINPROGRESS
937:    poll(0xFFBEED78, 1, -1)                         = 1
937:    getsockopt(4, 65535, 4103, 0xFFBEEE5C, 0xFFBEEE58, 1) = 0
937:    getsockname(4, 0x0003F978, 0x0003FA78, 1)       = 0
937:    poll(0xFFBEED78, 1, -1)                         = 1
937:    sigaction(SIGPIPE, 0xFFBEEAC8, 0xFFBEEB48)      = 0
937:    send(4, "\0\0\0 #\003\0\0 u s e r".., 35, 0)    = 35
937:    sigaction(SIGPIPE, 0xFFBEEAC8, 0xFFBEEB48)      = 0
937:    poll(0xFFBEED78, 1, -1)                         = 1
937:    recv(4, " R\0\0\0\b\0\0\0\0 N\0\0".., 16384, 0) = 75
937:    write(2, " D E B U G :     I n i t".., 21)      = 21
937:    poll(0xFFBEED78, 1, -1)                         = 1
937:    recv(4, " S\0\0\01E c l i e n t _".., 16384, 0) = 155
937:    access("/home/kevini/.psqlrc-7.4.3", 4)         Err#2 ENOENT
937:    access("/home/kevini/.psqlrc", 4)               Err#2 ENOENT
937:    getcontext(0xFFBEEDE0)
937:    sigaction(SIGINT, 0xFFBEEED8, 0xFFBEEF58)       = 0
937:    ioctl(0, TCGETA, 0xFFBEE9BC)                    Err#6 ENXIO
937:    fstat64(0, 0xFFBEEA30)                          = 0
937:    brk(0x0004CC48)                                 = 0
937:    brk(0x0004EC48)                                 = 0
937:    read(0, 0x0004ADB4, 8192)                       = 0
937:    sigaction(SIGINT, 0xFFBEEED8, 0xFFBEEF58)       = 0
937:    sigaction(SIGPIPE, 0xFFBEECC0, 0xFFBEED40)      = 0
937:    send(4, " X\0\0\004", 5, 0)                     = 5
937:    sigaction(SIGPIPE, 0xFFBEECC0, 0xFFBEED40)      = 0
937:    close(4)                                        = 0
937:    sigaction(SIGPIPE, 0xFFBEEF10, 0xFFBEEF90)      = 0
937:    llseek(0, 0, SEEK_CUR)                          = 0
937:    _exit(0)



Regards

Kevin Izzet

Database / Unix  Administrator
Tel:     (Code)+44(0)1475 655606
Fax:    (Code)+44(0)1475 637755
Email:  [EMAIL PROTECTED]






"Tom Lane" <[EMAIL PROTECTED]>
06/08/2004 17:40

        
        To:        "Kevin Izzet" <[EMAIL PROTECTED]>
        cc:        [EMAIL PROTECTED]
        Subject:        Re: [ADMIN] psql won't stayed connected


"Kevin Izzet" <[EMAIL PROTECTED]> writes:
> Managed to get the debugger to work and here are the results, don't 
> understand the results mind :-)

> #0  0x08155d55 in proc_exit ()
> #1  0x08161c17 in PostgresMain ()
> #2  0x0813eee0 in BackendFork ()
> #3  0x081

Re: [ADMIN] psql won't stayed connected

2004-08-09 Thread Kevin Izzet

Hi Tom,

Nope nothing silly, just trying to get a command line connection..

Am I maybe missing some kind of default logicals ?
Am I correct in thinking that apart from compiling the client from source I don't need to modify any of the conf files ?

The fact that I get the same result from a Linux Client as a Solaris client may point to something I've configured wrongly..
:-(


Regards

Kevin Izzet

Database / Unix  Administrator
Tel:     (Code)+44(0)1475 655606
Fax:    (Code)+44(0)1475 637755
Email:  [EMAIL PROTECTED]






"Tom Lane" <[EMAIL PROTECTED]>
09/08/2004 15:45

        
        To:        "Kevin Izzet" <[EMAIL PROTECTED]>
        cc:        [EMAIL PROTECTED]
        Subject:        Re: [ADMIN] psql won't stayed connected


"Kevin Izzet" <[EMAIL PROTECTED]> writes:
> Below is an extract from a truss of the psql login, looks fine to me,

Not really.  Here we have the successful connection to the server:

> 937:    send(4, "\0\0\0 #\003\0\0 u s e r".., 35, 0)    = 35
> 937:    sigaction(SIGPIPE, 0xFFBEEAC8, 0xFFBEEB48)      = 0
> 937:    poll(0xFFBEED78, 1, -1)                         = 1
> 937:    recv(4, " R\0\0\0\b\0\0\0\0 N\0\0".., 16384, 0) = 75
> 937:    write(2, " D E B U G :     I n i t".., 21)      = 21
> 937:    poll(0xFFBEED78, 1, -1)                         = 1
> 937:    recv(4, " S\0\0\01E c l i e n t _".., 16384, 0) = 155

and here is psql doing its normal initialization:

> 937:    access("/home/kevini/.psqlrc-7.4.3", 4)         Err#2 ENOENT
> 937:    access("/home/kevini/.psqlrc", 4)               Err#2 ENOENT
> 937:    getcontext(0xFFBEEDE0)
> 937:    sigaction(SIGINT, 0xFFBEEED8, 0xFFBEEF58)       = 0
> 937:    ioctl(0, TCGETA, 0xFFBEE9BC)                    Err#6 ENXIO
> 937:    fstat64(0, 0xFFBEEA30)                          = 0
> 937:    brk(0x0004CC48)                                 = 0
> 937:    brk(0x0004EC48)                                 = 0

and here it's trying to read the first command from stdin,
and getting EOF back:

> 937:    read(0, 0x0004ADB4, 8192)                       = 0

whereupon it quite correctly decides to close up shop:

> 937:    sigaction(SIGINT, 0xFFBEEED8, 0xFFBEEF58)       = 0
> 937:    sigaction(SIGPIPE, 0xFFBEECC0, 0xFFBEED40)      = 0
> 937:    send(4, " X\0\0\004", 5, 0)                     = 5
> 937:    sigaction(SIGPIPE, 0xFFBEECC0, 0xFFBEED40)      = 0
> 937:    close(4)                                        = 0
> 937:    sigaction(SIGPIPE, 0xFFBEEF10, 0xFFBEEF90)      = 0
> 937:    llseek(0, 0, SEEK_CUR)                          = 0
> 937:    _exit(0)

So why the heck is it getting EOF from stdin?  You're not doing
anything as silly as "psql 

                                                   regards, tom lane







Re: [ADMIN] psql won't stayed connected

2004-08-09 Thread Kevin Izzet


Ok, I get the Donkey Ears for this week..   :-(

We run all our apps from a central Linux HA service, when we add a new app we add a new link via a wrapper which allows the
app to choose the correct OS and path for running that app

The default for the apps is to have an & at the end of the command line so that the command is spawned in the background.

So psql was doing exactly what I asked it to: running and backgrounding. DUH...

Apologies for wasting your time Tom, will have to drink more black coffee and try not to multi task so much..
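
For anyone who trips over the same symptom, a minimal sketch of the failure mode (host and database names here are made up): when a non-interactive wrapper script backgrounds a command with &, the shell hands it /dev/null as stdin, so psql reads EOF immediately and exits cleanly -- exactly the proc_exit(0) trace shown earlier in the thread.

    psql -h dbhost -d test &    # backgrounded by the wrapper: stdin is /dev/null, psql exits at once
    psql -h dbhost -d test      # run in the foreground, the interactive session behaves as intended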


Regards

Kevin Izzet

Database / Unix  Administrator
Tel:     (Code)+44(0)1475 655606
Fax:    (Code)+44(0)1475 637755
Email:  [EMAIL PROTECTED]






"Tom Lane" <[EMAIL PROTECTED]>
09/08/2004 16:45

        
        To:        "Kevin Izzet" <[EMAIL PROTECTED]>
        cc:        [EMAIL PROTECTED]
        Subject:        Re: [ADMIN] psql won't stayed connected


"Kevin Izzet" <[EMAIL PROTECTED]> writes:
> Am I maybe missing some kind of default logicals ?

Darn if I know.

> The fact that I get the same result from a Linux Client as a Solaris 
> client may point to something I've configured wrongly..

I could believe that on Solaris but on Linux it's very unlikely that the
default configuration wouldn't work.  The common factor is more likely
pilot error ;-).  How *exactly* are you invoking psql, anyway?  Could
we see a cut-and-paste from your terminal session?

                                                   regards, tom lane







Re: [ADMIN] Looking for tool for Graphic Database Design

2004-08-23 Thread Kevin Izzet

Not sure if this will fit the bill

http://www.thekompany.com/products/rekall/





Regards

Kevin Izzet

Database / Unix  Administrator
Tel:     (Code)+44(0)1475 655606
Fax:    (Code)+44(0)1475 637755
Email:  [EMAIL PROTECTED]



Re: [ADMIN] Running sql files

2004-12-07 Thread Kevin Izzet

Hi,

You could try running the command like this 

$psql dbname < sqlfile > /dev/null

I believe this should work fine.
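
If redirection alone feels too blunt, a couple of flag-based variants should behave the same way (these psql switches long predate 7.4; the file and database names are placeholders):

    psql -q -o /dev/null -f sqlfile dbname   # -q quiets informational messages, -o diverts query output
    psql -q dbname < sqlfile > /dev/null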





Regards

Kevin Izzet

Database / Unix / Linux  Administrator
Tel:     (Code)+44(0)1475 655606
Fax:    (Code)+44(0)1475 637755
Email:  [EMAIL PROTECTED]






"Pradeepkumar, Pyatalo (IE10)" <[EMAIL PROTECTED]>
Sent by: [EMAIL PROTECTED]
07/12/2004 11:55

        
        To:        [EMAIL PROTECTED]
        cc:        
        Subject:        [ADMIN] Running sql files



Hi, 
I just wanted to know how to disable the results being displayed on the screen. 
When I try the following command - 
$psql dbname < sqlfile 
The results of the command are displayed on the screen. In one file I am trying to populate a table with values (there are more than 5000 tuples to be inserted).
When I run this command, it displays the results on the screen, which I don't want. 
Could anyone help me in this. 
Thanks in advance. 
Regards, 
Pradeep 



[ADMIN] STDERR vs. SYSLOG logging

2007-05-01 Thread Kevin Kempter
Hi List;

Anyone have any thoughts on which logging method (SYSLOG vs STDERR) is the 
better approach?

Thanks in advance...
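
For comparison, a sketch of the two setups in postgresql.conf, using the 8.x-era parameter names (redirect_stderr was renamed logging_collector in 8.3):

    log_destination = 'stderr'
    redirect_stderr = on                  # built-in collector captures stderr into rotated files
    log_directory = 'pg_log'
    log_filename = 'postgresql-%a.log'    # e.g. one file per weekday

    # ... or hand the lines to syslog instead:
    # log_destination = 'syslog'
    # syslog_facility = 'LOCAL0'
    # syslog_ident = 'postgres'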

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[ADMIN] Explained by known hardware failures, or keep looking?

2007-06-18 Thread Kevin Grittner
...messages indicating corruption at startup -- I can post more 
detail if someone thinks these would help diagnose the issue, but they include:
 
The most recent start included these lines:
 
[2007-06-18 13:24:37.490 CDT] 11981 LOG:  redo starts at 1D7/D7DC8D8C
[2007-06-18 13:24:37.526 CDT] 11981 LOG:  unexpected pageaddr 1D7/C1FE8000 in 
log file 471, segment 215, offset 16678912
[2007-06-18 13:24:37.526 CDT] 11981 LOG:  redo done at 1D7/D7FE6730
[2007-06-18 13:24:37.895 CDT] 11981 LOG:  database system is ready
[2007-06-18 13:24:57.380 CDT] 12034  ERROR:  
could not access status of transaction 10
[2007-06-18 13:24:57.380 CDT] 12034  DETAIL:  
Could not open file "pg_clog/": No such file or directory.
 
Could all of this be reasonably explained by the controller failure and/or the 
subsequent abrupt power loss, or should I be looking for another cause?  
Personally, as I look at this, I'm suspicious that either the controller didn't 
persist dirty pages in the June 14th failure or there is some ongoing hardware 
problem.
 
Thoughts or suggestions?
 
-Kevin
 


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [ADMIN] Explained by known hardware failures, or keep looking?

2007-06-19 Thread Kevin Grittner
>>> On Tue, Jun 19, 2007 at  8:07 AM, in message <[EMAIL PROTECTED]>,
Chander Ganesan <[EMAIL PROTECTED]> wrote: 
> Kevin Grittner wrote:
>>  
>> weekly maintenance process which builds a new version of a table based on 
>> records retention rules.  It is built under a temporary name; then the 
>> previous version of the table is dropped and the new table is renamed.  This 
>> leaves a fraction of a second during which queries may fail on the missing 
>> table, but it seems to be better than the alternatives.  (If a query doesn't 
>> complete within 20 seconds, it is an error for the users of these tables, 
>> since our web app times out.  The alternatives which blocked rather than 
>> giving outright errors blocked for more than 20.25 seconds, so this 
>> alternative generates the fewest errors from a user perspective.)
> 
> With PostgreSQL 8.2 you have the ability to dynamically add and remove 
> child tables, so you could create a "blank" parent table and make the 
> data table its child, build the new data table, and then simply change 
> the inheritance (remove the old child and add the new child).  
> Applications would not need to change (since PostgreSQL's inheritance 
> rules would have the same "parent" table name, and only the child would 
> change).
> 
> That would give you a much, much, much smaller window of unavailability 
 
I'm curious what the "much, much, much smaller" amount of time would be.
Our current technique seems to result in between 80 ms and 250 ms of
"down time" around our weekly maintenance.  Even though our site gets
about two million hits a day, we usually don't see any queries trying to
touch this table during the replacement.  On a bad day we might see five
errors, which would result in the users getting a "try again" sort of message
in their browsers.

It would be totally unacceptable, by the way, for there to be any window of
time during which the table appeared empty -- an error would be much
preferred.  This means that we would either need to bracket the inheritance
changes within a transaction or add the new table as a child (with mostly
duplicate rows) before dropping the old one.  Either would be OK -- does
one sound more promising than the other?  Is there any chance that using
this technique would have a negative impact on performance?  (Many of
these queries join a large number of tables and also use several correlated
subqueries.)
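
For reference, a minimal sketch of the rename-based swap being weighed here (table names are stand-ins; inside a transaction, concurrent queries queue on the exclusive lock until the commit rather than ever seeing an empty table):

    BEGIN;
    ALTER TABLE caseload RENAME TO caseload_old;
    ALTER TABLE caseload_new RENAME TO caseload;
    COMMIT;
    DROP TABLE caseload_old;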
 
-Kevin
 


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [ADMIN] archive_command does not execute

2007-06-19 Thread Kevin Grittner
>>> On Tue, Jun 19, 2007 at 11:16 AM, in message <[EMAIL PROTECTED]>,
Tom Arthurs <[EMAIL PROTECTED]> wrote: 
> 
> Looks like you are expecting the archive command to run when you shut 
> down the data base. It won't.  It only runs when the xlog gets full and 
> the system needs to recycle to a new logfile.
 
If you need to force a WAL file to test your backup process, try running
something like:
 
select pg_switch_xlog();
 
http://www.postgresql.org/docs/8.2/interactive/functions-admin.html#FUNCTIONS-ADMIN-BACKUP-TABLE
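
From a shell or cron job, something like this exercises the whole archive pipeline (pg_switch_xlog() is documented on the 8.2 page linked above; the database name is arbitrary):

    psql -d postgres -Atc "select pg_switch_xlog();"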
 
-Kevin
 



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [ADMIN] RESOLVED: Explained by known hardware failures, or keep looking?

2007-06-20 Thread Kevin Grittner
Thanks, all.  Just an FYI to wrap up the thread.

>>> On Mon, Jun 18, 2007 at  3:25 PM, in message <[EMAIL PROTECTED]>,
Tom Lane <[EMAIL PROTECTED]> wrote: 
> "Kevin Grittner" <[EMAIL PROTECTED]> writes:
>> I'm suspicious that either the controller
>> didn't persist dirty pages in the June 14th failure
> 
> That's what it looks like to me --- it's hard to tell if the hardware or
> the filesystem is at fault, but one way or another some pages that were
> supposedly securely down on disk were wiped to zeroes.  You should
> probably review whether the hardware is correctly reporting write-complete.
 
The hardware tech found many problems with this box.  I may just give it
a heavy update load and pull both plugs to see if it comes up clean now.
 
The following was done:
 
Replaced 2 failed drives
Controller firmware updated
SCSI micro code updated
Performed Yast Online updates
Connected second power supply
 
Our newer boxes have monitoring software which alerts us before a box
gets into this bad a state.
 
-Kevin
 



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [ADMIN] aborting startup due to startup process failure

2007-06-28 Thread Kevin Grittner
>>> On Thu, Jun 28, 2007 at  2:55 PM, in message
<[EMAIL PROTECTED]>, "George Wilk"
<[EMAIL PROTECTED]> wrote: 
> 
> FATAL:  could not restore file "0001003A" from archive:
> return code 34048
> 
> LOG:  startup process (PID 13994) exited with exit code 1
> 
> LOG:  aborting startup due to startup process failure

You're not terminating the warm standby while the database is still logging
"the database system is starting up", are you?
 
If so, I would wait for a minute until it is settled in.
 
-Kevin
 



---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [ADMIN] missing history file

2007-06-29 Thread Kevin Grittner
>>> On Fri, Jun 29, 2007 at  9:52 AM, in message <[EMAIL PROTECTED]>,
Tom Lane <[EMAIL PROTECTED]> wrote: 
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
>> Just ignore 0001. Recovery will work fine even if absent. Don't
>> ignore all history files though, just that one. Hmmm, come to think of
>> it, why is it requesting it at all? We should just skip that request.
> 
> No, because then people would misdesign their recovery scripts to not
> be able to deal with not finding a history file.  As things are, they
> will certainly be exposed to that case in any testing they do.  If we
> optimize this call away, then they won't see the case until they're in
> very deep doo-doo.
 
We certainly were exposed to the case.  We weren't able to turn up any
documentation on it, so we added these lines to our recovery script:
 
if [[ $1 == *.history ]] ; then
  exit 1
fi
 
Our warm standbys have apparently been working fine since.
 
Is there documentation of this that we missed?
 
Are our warm standby databases useful at this point, or have we wandered
into very deep doo-doo already?
 
Based on Simon's email, I went and modified one line of our script.
I'll paste the current form below my "signature".  Please let me know
if we're off base.
 
-Kevin

 
#! /bin/bash

# Pick out county name from the back of the path.
# The value of $PWD will be: /var/pgsql/data/county//data
countyName=`dirname $PWD`
countyName=`basename $countyName`

while [ ! -f /var/pgsql/data/county/$countyName/wal-files/"$1".gz \
        -a ! -f /var/pgsql/data/county/$countyName/DONE \
        -o -f /var/pgsql/data/county/$countyName/wal-files/rsync-in-progress ]
do
    # per the advice above: never wait on a history file that isn't there yet
    if [ "$1" == 0001.history ] ; then
        exit 1
    fi
    sleep 10   # wait ~10 seconds before polling again
done

gunzip < /var/pgsql/data/county/$countyName/wal-files/"$1".gz > "$2"


---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [ADMIN] missing history file

2007-06-29 Thread Kevin Grittner
>>> On Fri, Jun 29, 2007 at 11:47 AM, in message <[EMAIL PROTECTED]>,
Tom Lane <[EMAIL PROTECTED]> wrote: 
> 
> History files are only created when you do a PITR recovery that stops
> short of the end of WAL (ie, you gave it an explicit stopping point
> criterion).  So basically they never appear except by manual
> intervention on the primary server.  A standby script should probably
> handle requests for them by looking to see if they're available, and
> returning 'em if so, but not waiting if they are not.
> 
> Offhand I would recommend the same strategy for any requested filename
> that's not a plain WAL segment file (ie, all hex digits).
 
I suspect that it's worth waiting for something like this, too?:
 
0001000A00CF.E744.backup
 



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [ADMIN] missing history file

2007-06-29 Thread Kevin Grittner
>>> On Fri, Jun 29, 2007 at 12:29 PM, in message <[EMAIL PROTECTED]>,
Tom Lane <[EMAIL PROTECTED]> wrote: 
> "Kevin Grittner" <[EMAIL PROTECTED]> writes:
>> Tom Lane <[EMAIL PROTECTED]> wrote:
>>> Offhand I would recommend the same strategy for any requested filename
>>> that's not a plain WAL segment file (ie, all hex digits).
>> 
>> I suspect that it's worth waiting for something like this, too?:
>> 0001000A00CF.E744.backup
> 
> No, I don't think so.  AFAICS the slave server would only ask for one
> of those during its initial cold start from a base backup, and it'll be
> looking for the one that should have been generated at completion of
> that base backup.  If it ain't there, it's unlikely to appear later.

Fair enough.  It would have saved us some time if this was mentioned
in the warm standby documentation.  I'll try to put a doc patch together.
 
-Kevin
 



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [ADMIN] replicating postgresql database to ms-sql database

2007-07-06 Thread Kevin Grittner
>>> On Fri, Jul 6, 2007 at 12:48 PM, in message
<[EMAIL PROTECTED]>, Mary Anderson
<[EMAIL PROTECTED]> wrote: 
> 
> I have been asked if it is possible to asynchronously replicate a 
> postgresql database to ms-sql.  My answer is yes, provided postgresql 
> ORM features aren't used in the design of the database.  One does an 
> ascii dump of the tables and then bulk loads them into MSSQL.  The 
> database in question will be about 10G.  Is there a sane way to do this?
 
Look at pg_dump and the switches to cause it to generate INSERT
statements instead of COPY statements.  That has worked for me.
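
A sketch, with a made-up database name: --inserts (-d) makes pg_dump emit plain INSERT statements; --column-inserts (-D) also spells out the column names, which is the slowest to load but the most portable input for a non-PostgreSQL target.

    pg_dump --column-inserts -f for_mssql.sql mydb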
 
-Kevin
 



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [ADMIN] replicating postgresql database to ms-sql database

2007-07-09 Thread Kevin Kempter
On Sunday 08 July 2007 16:21:42 Ben Kim wrote:
>   On Fri, Jul 06, 2007 at 10:48:48AM -0700, Mary Anderson wrote:
> >I have been asked if it is possible to asynchronously replicate a
> > postgresql database to ms-sql.  My answer is yes, provided postgresql
> > ORM features aren't used in the design of the database.  One does an
> > ascii dump of the tables and then bulk loads them into MSSQL.  The
> > database in question will be about 10G.  Is there a sane way to do this?
>
> As a pull option, it's also possible to use MS SQL's DTS tasks. You can
> define complex import tasks and (I believe) run the task regularly using
> the windows scheduler.
>
> Massaging can be done by custom conversion scripts, in perl or vb, within
> the task definition.
>
>
> My 2 cents.
>
>
>
> Regards,
>
> Ben K.
> Developer
> http://benix.tamu.edu
>
> ---(end of broadcast)---
> TIP 1: if posting/reading through Usenet, please send an appropriate
>subscribe-nomail command to [EMAIL PROTECTED] so that your
>message can get through to the mailing list cleanly
 


Have a look at dbi-link on the pg foundry

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [ADMIN] Checking database version

2007-07-27 Thread Kevin Grittner
>>> On Fri, Jul 27, 2007 at  9:49 AM, in message
<[EMAIL PROTECTED]>, "George Wilk"
<[EMAIL PROTECTED]> wrote: 
> Is there a reliable way of verifying version of the database by looking at
> the PGDATA directory?
 
Could you use the first couple lines of output from pg_controldata?
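
For instance (assuming PGDATA points at the directory in question; the version and catalog numbers appear at the top of the output):

    pg_controldata "$PGDATA" | head -4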
 
-Kevin
 



---(end of broadcast)---
TIP 6: explain analyze is your friend


[ADMIN] PITR backup to Novell Netware file server

2007-08-07 Thread Kevin Grittner
We have a database in each Wisconsin county which is part of the official
court record for the circuit courts in that county.  We are required to keep
a backup in each county, such that we could recover a lost or corrupted
database with what is in the county (as well as keeping at least four
separate sources, each regularly confirmed as usable, at our central site).
Currently, the only two boxes available besides the database server (in most
counties) are a "utility server" (a Windows desktop class machine used to
push workstation images and other support tasks) and a Novell Netware file
server.
 
We had to get something going quickly when we started rolling PostgreSQL out
to the counties as a replacement of a commercial database product; the quick
and (way too) dirty approach was to create a samba share on the utility
server for the target of the archive_command and our base backups.  We rsync
from there back to the central site using the samba share, too.  We
frequently lose the mount points or they become unresponsive.  We also
frequently have corrupted files when they rsync across the WAN, so we want
to move away from all use of samba.
 
The Netware server supports ssh, scp, and an rsync daemon.  I don't see how
the ssh implementation is helpful, though, since it just gets you to the
Netware console -- you can't cat to a disk file through it, for example.
(At least not as far as we have been able to see.)  It appears that the scp
and rsync techniques both require the initial copy of the file to be saved
on the database server itself, which we were hoping to avoid for performance
reasons.
 
We could create ncp mounts on the database servers; but, frankly, we haven't
had much better luck in keeping those connected than we have with samba.
Has anyone else been through this and found a robust and reliable way to
do PITR backups from a PostgreSQL database on one machine to a Netware file
server on another?
 
Apparently we will be moving to a Linux-based implementation of Netware at
some unspecified future date, at which point we will apparently be able to
deal directly with the Linux layer.  At that point, there are obvious, clean
solutions; but we've got to have something reasonable until such date, which
is most likely a few years off.
 
-Kevin
 



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] PITR backup to Novell Netware file server

2007-08-07 Thread Kevin Grittner
> >>> Decibel! <[EMAIL PROTECTED]> 08/07/07 1:28 PM >>> 
> On Tue, Aug 07, 2007 at 06:29:35AM -0500, Kevin Grittner wrote:
>> We have a database in each Wisconsin county which is part of the official
>> court record for the circuit courts in that county.  We are required to keep
>> a backup in each county, such that we could recover a lost or corrupted
>> database with what is in the county (as well as keeping at least four
>> separate sources, each regularly confirmed as usable, at our central site).
>> Currently, the only two boxes available besides the database server (in most
>> counties) are a "utility server" (a Windows desktop class machine used to
>> push workstation images and other support tasks) and a Novell Netware file
>> server.
>>  
>> We had to get something going quickly when we started rolling PostgreSQL out
>> to the counties as a replacement of a commercial database product; the quick
>> and (way too) dirty approach was to create a samba share on the utility
>> server for the target of the archive_command and our base backups.  We rsync
>> from there back to the central site using the samba share, too.  We
>> frequently lose the mount points or they become unresponsive.  We also
>> frequently have corrupted files when they rsync across the WAN, so we want
>> to move away from all use of samba.
>>  
>> The Netware server supports ssh, scp, and an rsync daemon.  I don't see how
>> the ssh implementation is helpful, though, since it just gets you to the
>> Netware console -- you can't cat to a disk file through it, for example.
>> (At least not as far as we have been able to see.)  It appears that the scp
>> and rsync techniques both require the initial copy of the file to be saved
>> on the database server itself, which we were hoping to avoid for performance
>> reasons.
>>  
>> We could create ncp mounts on the database servers; but, frankly, we haven't
>> had much better luck in keeping those connected than we have with samba.
>> Has anyone else been through this and found a robust and reliable way to
>> do PITR backups from a PostgreSQL database on one machine to a Netware file
>> server on another?
>>  
>> Apparently we will be moving to a Linux-based implementation of Netware at
>> some unspecified future date, at which point we will apparently be able to
>> deal directly with the Linux layer.  At that point, there are obvious, clean
>> solutions; but we've got to have something reasonable until such date, which
>> is most likely a few years off.
> 
> Given your situation, I certainly wouldn't trust WAL files from the
> Samba server going back to your central site.
 
We don't.  We feed them all into warm standbys as they arrive, which fail over
to production when a gzip is corrupted.  Then we gunzip all the files for that
county to /dev/null to identify any other corrupted files (the corruptions seem
to come in clusters) and delete all the bad ones.  We restart the warm standby,
let rsync try again, and are back in business.  Due to our monitoring software,
we get both a jabber popup and an email within a minute of when the failover
occurs, so it's not down for long.
 
Keep in mind that when we go to the file server instead of the utility server,
we will eliminate the samba shares, and I don't think that we'll see this sort
of problem with either an rsync daemon or scp -- at least not at this order of
magnitude.
 
> I think you've got 2 options here:
> 
> 1) Write a script that copies to the local backup as well as the remote
> backup and call that script from archive_command. Make certain that the
> script returns a non-zero exit code if *either* copy operation fails.
 
Absolutely not an option.  The management mandate is clear that a WAN failure
which blocks the transfer of files back to the central site must not block the
movement of files off of the database server to another box on its LAN.
 
> 2) Have archive_command copy to someplace on the database server, and
> have another process copy from there to both the local backup as well as
> the central backup.
 
A possible option; although if the rsync daemon on the file server proves
reliable, I don't see the benefit over having the archive command make a copy
on the database server which flows to the file server and from the file server
back to the central site.  I'd rather add the load to the file server than the
database server.
 
> As for performance, just how hard are you pushing these machines?
 
That varies tremendously with the county and the activity.  Milwaukee County
keeps a pretty steady load on the database during the business day, and even
a moderately si

Re: [ADMIN] PITR backup to Novell Netware file server

2007-08-07 Thread Kevin Grittner
> >>> Decibel! <[EMAIL PROTECTED]> 08/07/07 4:51 PM >>> 
> On Tue, Aug 07, 2007 at 02:12:29PM -0500, Kevin Grittner wrote:
> > > Copying a 16MB file that's already in memory isn't exactly an intensive
> > > operation...
> >  
> > That's true for the WAL files.  The base backups are another story.  We will
> > normally have a database vacuum analyze between the base backup and the 
> > users
> > being in there to care about performance, but that's not always the case --
> > sometimes jury trials go late into the night and could overlap with this a
> > base backup.  And some judges put in a lot of late hours; although they 
> > don't
> > tend to bang on the database very heavily, they hate to be made to wait.
> 
> Ahh... well, that's something where rsync could actually help you since
> it allows you to put a bandwidth cap on it. Another option is that some
> OSes (FreeBSD for one) will respect process priority when it comes to
> scheduling IO as well, so if you nice the backup process it hopefully
> wouldn't impact the database as much.

Thanks for the suggestions.  A new OS is not in the cards any time soon, but
I think the --bwlimit option makes sense -- there's not a lot of point moving
it from the database server to the file server faster than the WAN can take it,
anyway.  I suppose I could "nice" the rsync requester on the database side, too.
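
Something along these lines, with made-up module and path names (--bwlimit is in KB/s, and nice only lowers the local rsync's CPU priority):

    nice -n 10 rsync -a --bwlimit=512 /var/pgsql/wal-archive/ fileserver::pgbackup/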
 
-Kevin
 



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] PITR backup to Novell Netware file server

2007-08-08 Thread Kevin Grittner
>>> On Tue, Aug 7, 2007 at 10:31 PM, in message
<[EMAIL PROTECTED]>, Andrew Kroeger
<[EMAIL PROTECTED]> wrote: 
> Kevin Grittner wrote:
>> The Netware server supports ssh, scp, and an rsync daemon.  I don't see how
>> the ssh implementation is helpful, though, since it just gets you to the
>> Netware console -- you can't cat to a disk file through it, for example.
> 
> scp & rsync can't really deal well with stdin.  However, you can
> accomplish something with ssh like the following (on Linux):
> 
>   cat source_file | ssh remote_host "cat >/path/to/file"
> 
 
Right, that's what I was saying I couldn't see how to do with Netware,
because the ssh just gets you to the Netware console.  We have been
using that technique a lot with Linux.  I trust that error checking is
covered by the ssh layer and the TCP layer on which it rides.
 
-Kevin
 



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [ADMIN] entries in serverlog file

2007-08-09 Thread Kevin Grittner
>>> On Thu, Aug 9, 2007 at  4:37 PM, in message
<[EMAIL PROTECTED]>, "Tena Sakai"
<[EMAIL PROTECTED]> wrote: 
> I have, by default, on a linux machine, a file called
> serverlog in the data directory, whose entries often
> prove to be very useful.  What would be even more useful
> would be a timestamp along with each entry.
> 
> Is there any way I can add such timestamp at this point?
> Perhaps via postgresql.conf file (also in the data directory)?
 
By all means, do read the documentation referred to by Joshua.
 
For a quick starting point, we have found it useful to use
these settings:
 
redirect_stderr = on
log_line_prefix = '[%m] %p %q<%u %d %r> '
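
With that prefix, each line should begin with something roughly like this (values invented for illustration; %m is a millisecond timestamp, %p the backend PID, and %q suppresses the user/database/host part in non-session processes):

    [2007-08-09 16:42:01.123 CDT] 12345 <appuser mydb 10.0.0.5(51234)> LOG:  ...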
 



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [ADMIN] Audit Trail

2007-08-13 Thread Kevin Grittner
>>> On Mon, Aug 13, 2007 at 12:24 AM, in message
<[EMAIL PROTECTED]>,
"Khan, Mahmood Ahram" <[EMAIL PROTECTED]> wrote: 
> To enable the auditing I need to create this function, which I am unable
> to; the error is below.
 
Add a line like this after you include the PostgreSQL .h files:
 
PG_MODULE_MAGIC;
 
I was unable to quote the actual error in my reply.
Try a different way to show it next time, please.
 
-Kevin
 



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [ADMIN] except command

2007-08-13 Thread Kevin Grittner
>>> On Mon, Aug 13, 2007 at 12:50 PM, in message <[EMAIL PROTECTED]>,
olivier boissard <[EMAIL PROTECTED]> wrote: 
I tried to use the EXCEPT command in PostgreSQL 8.1.
I don't understand how it works.
When I read the doc, I understood it to be like a difference between two 
queries.

That isn't how I read it:
 
"The EXCEPT operator returns the rows that are in the first result set
but not in the second."
 
( http://www.postgresql.org/docs/8.1/interactive/sql-select.html )
 
Are you seeing something different?
 
This gets me all the eye color codes that don't start with the letter 'B'.
(There are of course easier ways to get that...)
 
bigbird=> select * from "EyeColorCode" except select * from "EyeColorCode" 
where "eyeColorCode" like 'B%';
 eyeColorCode |    descr     | isActive | dotEyeColorCode
--------------+--------------+----------+-----------------
 GRN          | Green        | t        | GRN
 GRY          | Gray         | t        | GRY
 HAZ          | Hazel        | t        | HAZ
 MAR          | Maroon       | t        | MAR
 MUL          | Multicolored | t        | DIC
 PNK          | Pink         | t        | PNK
 XXX          | Unknown      | t        | UNK
(7 rows)
 
-Kevin
 



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] except command

2007-08-13 Thread Kevin Grittner
>>> On Mon, Aug 13, 2007 at  1:13 PM, in message
<[EMAIL PROTECTED]>, Steve Holdoway
<[EMAIL PROTECTED]> wrote: 
> On Mon, 13 Aug 2007 13:07:43 -0500
> "Kevin Grittner" <[EMAIL PROTECTED]> wrote:
>> This gets me all the eye color codes that don't start with the letter 'B'.
>> (There are of course easier ways to get that...)
> so is this faster than using 'where not exists' ??

I was trying to show a valid use of EXCEPT, not an alternative.
 
-Kevin
 



---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [ADMIN] except command

2007-08-13 Thread Kevin Grittner
>>> On Mon, Aug 13, 2007 at  4:30 PM, in message <[EMAIL PROTECTED]>,
"[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote: 
> So it's like a filter on the first query

Exactly; I think that sums it up better than anything I said.
 
By the way, it does strike me as an odd omission that there is no set
operator in the ANSI standard to get you directly to the set of disjoint
elements.  With two datasets, a and b, you could always get there with:
 
(a EXCEPT b) UNION ALL (b EXCEPT a)
 
or with:
 
(a UNION ALL b) EXCEPT (a INTERSECT b)
 
Of course, you could store the sets in temporary tables to get there without
generating from scratch each time, if that is expensive.
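
As a throwaway illustration of the identity (the VALUES-in-FROM syntax needs 8.2; on 8.1, small temp tables would do the same job):

    (SELECT x FROM (VALUES (1),(2),(3)) AS a(x)
      EXCEPT
     SELECT x FROM (VALUES (2),(3),(4)) AS b(x))
    UNION ALL
    (SELECT x FROM (VALUES (2),(3),(4)) AS b(x)
      EXCEPT
     SELECT x FROM (VALUES (1),(2),(3)) AS a(x));

This returns 1 and 4 -- the elements that appear in exactly one of the two sets.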
 
-Kevin
 



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [ADMIN] Downloading PostgreSQL source code version 7.1 through CVS

2007-08-14 Thread Kevin Grittner
>>> On Tue, Aug 14, 2007 at  6:29 AM, in message
<[EMAIL PROTECTED]>,
"Kuriakose, Cinu Cheriyamoozhiyil" <[EMAIL PROTECTED]> wrote: 
> Can anyone please tell me how to download the PostgreSQL-7.1 source code
> through CVS? I use the following set of commands to get the source code
> of PostgreSQL.

Did you try adding -r REL7_1 to the checkout command?
 
I sure hope this is for some academic purpose and not for production use.
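
For what it's worth, a sketch of a full anonymous checkout pinned to that tag (the pserver path is the historical anoncvs location; adjust if your mirror differs):

    cvs -d :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot login
    cvs -d :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot checkout -r REL7_1 pgsql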
 
-Kevin
 



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] merge two rows where value are null

2007-08-14 Thread Kevin Grittner
>>> On Tue, Aug 14, 2007 at  8:48 AM, in message
<[EMAIL PROTECTED]>, <[EMAIL PROTECTED]>
wrote: 
> I was studying a solution which foresees the 'case when' construct...
> now I can use it in addition to  array_to_string . In fact my solution
> failed too if , but if there are some more
> values I get them all as well.
 
Is there a timestamp, date, or sequence number that you can use to break
ties?  Without something like that, I don't think you can do what you want
with set-based logic -- you would need to do something procedural without
any guarantee that you'd get the same results in two different runs.
 
-Kevin
 



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [ADMIN] adding a user with the permission to create trimmed users

2007-08-17 Thread Kevin Grittner
>>> On Fri, Aug 17, 2007 at  4:50 AM, in message
<[EMAIL PROTECTED]>, Peter Elmers <[EMAIL PROTECTED]>
wrote: 
> I want to know whether there exists the possibility to create a user
> who has the permission to create users with preset limited permissions.
> 
> In detail:
> I do not want the admin user (a separate, limited one for the
> customer) to be able to create full admin users or equivalent ones.
 
Without more detail it is hard to give a specific answer, but it is possible
to grant a user the right to grant specific permissions.  See:
 
http://www.postgresql.org/docs/8.2/interactive/sql-grant.html
 
http://www.postgresql.org/docs/8.2/interactive/sql-revoke.html
 
Pay particular attention to "WITH GRANT OPTION" and "WITH ADMIN OPTION", as
well as the discussion of the public group.  (You will probably need to
revoke some of the default rights of the public group.)
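
One more knob worth a look alongside GRANT/REVOKE -- the CREATEROLE attribute (the role name below is made up). A login that has CREATEROLE but not SUPERUSER can create, alter, and drop ordinary roles, yet cannot create superusers:

    CREATE ROLE customer_admin LOGIN CREATEROLE PASSWORD 'change.me';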
 
If you can't get it to work as desired, please post again with  more
specifics.
 
-Kevin
 



---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate

