Re: [ADMIN] RAID vs. Single Big SCSI Disk

2000-12-12 Thread bob

"G. Anthony Reina" wrote:

> We have three databases for our scientific research and are getting
> close to filling our 12 Gig partition. My boss thinks that just getting
> a really big (i.e. > 30 Gig) SCSI drive will be cheaper and should do
> nicely. Currently, we only have 4 people accessing the database and
> usually only have 1-2 jobs (e.g. selects, updates, etc.) going at any
> one time (probably a high estimate). The db sits on a Pentium II/400 MHz
> with RedHat 6.0.
>
> Other than mirroring, are there any other advantages (e.g. speed, cost)
> of just getting a RAID controller over, say, a 73 Gig Ultra SCSI Cheetah
> drive (which cost in the neighborhood of $1300).

It sounds like you would be much better off with an Ultra ATA/66
software or hardware RAID solution.  Maxtor 40 GB ATA/100 disks
can be had for $100 each.  Alone they operate near 20 MB/s,
and in a two-disk striped RAID they can do 30-40 MB/s, probably
faster than your Cheetah configuration for a fraction of the cost.
3ware makes a hardware RAID controller that would get you to
40 MB/s with two, or 70 MB/s with four of these disks in RAID 0.
With four disks in RAID 0+1 you can mirror and still get near 40 MB/s.
The 3ware solution also relieves your CPU of the usual ATA overhead.

>
>
> Also, can Postgres handle being spread over several disks? I'd think
> that the RAID must control disk spanning, but just want to make sure
> that Postgres would be compatible.

That is transparent to Postgres: the RAID controller presents the array
as a single volume, so nothing special is needed on the database side.





[ADMIN] undefined reference to `crypt'

1999-06-08 Thread Bob Williams

What am I missing?  Is this a bug in the rpm?:

I just upgraded using the Red Hat 6.4.2 rpm (from 6.3).
Now I get a compile time error:

cc -g mail2sql.c -o mail2sql -lpq \
-I/usr/include/pgsql
/usr/lib/libpq.so: undefined reference to `crypt'
collect2: ld returned 1 exit status
__

I've tried other things...


cc -g mail2sql.c -o mail2sql /usr/lib/libpq.a \
-I/usr/include/pgsql
/usr/lib/libpq.a(fe-auth.o): In function `pg_password_sendauth':
fe-auth.o(.text+0x27): undefined reference to `crypt'
collect2: ld returned 1 exit status
___

Header of source:
__

#include 
#include 
#include 
#include "defines_m2s.h"
#include "defines_db.h" /* Created by Makefile */
#include "pgsql/libpq-fe.h"

void
exit_nicely(PGconn* conn)
{
  PQfinish(conn);
  exit(1);
}


int main()
{
 ..
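
For what it's worth, the usual fix for this linker error in that era: crypt()
lives in libcrypt on Linux rather than in libc, so it has to be linked
explicitly after libpq, along these lines:

cc -g mail2sql.c -o mail2sql -I/usr/include/pgsql -lpq -lcrypt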






[ADMIN] pgsql: "reseting connection" msg. (fwd)

1999-06-10 Thread Bob Parkinson

Dear All,

I'm getting this msg. out of postgres 6.4.2 when trying to do a copy as a
normal user. The cmd. is run from a perl script thusly:

`psql -d omni_core -c \"copy template from stdin using delimiters '|'\" <
$tname`;


What does this msg. imply? (Been through some of the archive and not seen
this text).

Any suggestions how can I go about debugging this please?

TIA,

Bob

Bob Parkinson
[EMAIL PROTECTED]
--
Technical Officer:  OMNI projecthttp://omni.ac.uk/

Greenfield Medical Library,
Queens Medical Centre,
Nottingham.

--
We are stardust






[ADMIN] Preserving Users/Passwords

1999-06-14 Thread Bob Kruger


I am prepping to upgrade from version 6.4.2 to 6.5.  

Could someone point me in the right direction for a method to preserve user
names and passwords during the upgrade?

Thanks in advance for any assistance.

Regards - Bob Kruger




[GENERAL] Readline library

1999-06-23 Thread Bob Kruger



Last week I asked a question concerning the ability to use the up arrow key
to scroll back over previous commands when using psql.  I got a number of
good leads, and would like to thank everyone for them.

I found the readline library on sunsite, downloaded, compiled, and
installed.  I ensured that the shared libraries were installed as well.
Shared libraries are installed in /usr/lib (libreadline.so.4.0), as well
as the various links.  The same structure exists for libhistory.so.4.0,
e.g. same links and location.  The system in question is running Slackware
4.0, kernel 2.2.10.  I also ran ldconfig after installing the libraries.
Just to be safe, I even added /usr/lib to the /etc/ld.so.conf file.

I am configuring using the following command line:

./configure --with-odbc --with-perl --with-libs=/usr/lib

Capturing the output still shows that configure is not picking up the
readline or history libraries.

Has anyone running Slackware gotten this feature to work properly?

Thanks in advance for any assistance.

Regards - Bob Kruger
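
One thing worth trying, assuming this configure accepts the --with-includes
and --with-libraries options: point it at the headers as well as the
libraries, then read config.log to see exactly how the readline test failed.

./configure --with-odbc --with-perl \
            --with-includes=/usr/include --with-libraries=/usr/lib
grep -i readline config.log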




[GENERAL] Datetime <> ODBC <> Access

1999-06-23 Thread Bob Kruger



In looking at a linked table in MS Access via ODBC, I noticed that the
datetime field shows up in Access with only two digits for the year.
Example: today's date of 23 June 1999 shows up as 6/23/99.

Has anyone else experienced this?  If so, any hints on making the Access
side POSIX/Y2K compliant?

Thanks in advance for any assistance.

Regards - Bob Kruger




[ADMIN] mysql to pgsql

2000-02-17 Thread Bob Zoller

I'm trying to make the switch from mysql to pgsql, but I'm having trouble
figuring out how to transfer my databases over!  I tried doing a dump in
mysql and then importing that into pgsql, but I got *tons* of errors.. I
searched the list archives and docs, but I haven't found anything that
talks about this..

Does anyone know how I can move my databases over?

Thanks,
--Bob
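
A hand-rolled transfer often beats loading a mysqldump file directly:
recreate the schema by hand on the Postgres side, then move the data one
table at a time as tab-separated text.  A rough, untested sketch (table and
database names are made up):

mysql -B -N -e "SELECT * FROM mytable" mysqldb > mytable.tsv
psql pgdb -c "\copy mytable from 'mytable.tsv'"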






Re: [ADMIN] 'user' sql reserved word ?

2000-03-01 Thread Bob Zoller

I ran into the same thing.. it must be a reserved word..
--Bob

-
You got two choices jack: start talkin' or start hurtin'
   -- Mr. T

On Wed, 1 Mar 2000, Andre Antonio Parmeggiani wrote:

> 
> Hi All,
> 
> I tried to create a silly table :
> 
> => create table user (user text);
> ERROR:  parser: parse error at or near "user"
> 
> On postgres 6.5.3 it was ok .. and  
> create table foo(foo text);
> works fine :-)
> 
> Any idea ?
> 
> Thanks,
> Andre
> 
> 
> 
> 
> 
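
For the record, USER is reserved by the SQL standard (it evaluates to the
current user name), which is why the parser rejects it as a table name.
Quoting the identifier works if you really need the name:

psql -c 'create table "user" ("user" text);'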






[ADMIN] pg_hba.conf is inscrutable to me

2000-04-29 Thread Bob Miller

Hello all.  I hope this is the right place for this question.  On a
machine called falstaff.trgrdc.mc.xerox.com, IP address 13.137.84.27,
I've got the following pg_hba.conf:

local   all                                      trust
host    all   13.137.84.27    255.255.255.255    trust
host    all   127.0.0.1       255.255.255.255    trust
host    all   0.0.0.0         0.0.0.0            trust

While logged on to this machine as UID postgres, I do the following:

psql -h falstaff.trgrdc.mc.xerox.com

... and get the following message:

Connection to database 'postgres' failed.
No pg_hba.conf entry for host 13.137.84.27, user postgres, database
postgres

... tried lots of variations on the pg_hba.conf, none of which worked.
I can't see what's wrong with this one.  Any
suggestions?

--
Robert Miller
The Rochester Group
600 Park Avenue
Rochester, NY 14607

I will say one thing.  It is amazing how many drivers, even at the
Formula One level, think that the brakes are for slowing the car down.
Mario Andretti





[ADMIN] Error message on loading

2000-06-02 Thread Bob Cregan

When loading a dumped database I get the ERROR message

psql:/tmp/savedump:574: ERROR:  btree: index item size 3128 exceeds
maximum 2717

The database was dumped from PostgreSQL 6.5 using that version's
pg_dumpall and was loaded into PostgreSQL 7.0.

Any ideas?

Bob

--
Bob Cregan  [EMAIL PROTECTED]
Unix Systems Administrator
Institute for Learning and Research Technology (ILRT)
University of Bristol, UK.
www.ilrt.bristol.ac.uk






[ADMIN] Win32 2003 Front end

2006-06-21 Thread Farrell,Bob

Trying to connect from an ASP.NET front end to a PostgreSQL 8.1.4 backend.

ODBC is connecting, but I get this error on some pages:

ADODB.Field error '80020009'

Either BOF or EOF is True, or the current record has been deleted.
Requested operation requires a current record.

/index.asp, line 

 








[ADMIN] Referential constraints in version 8

2005-04-15 Thread Bob Smith
We have just finished upgrading Postgres from 7.2 to 8, and I must live 
right or something because there was only one glitch.  When the dump 
from 7.2 was restored into 8, some foreign key references which should 
have been initially deferred had become non-deferrable.  I had to 
re-define those references by dropping the corresponding triggers and 
using ALTER TABLE to put them back as foreign key constraints, which 
seems to have fixed the problem.  However, those references which I 
re-defined now show up explicitly in the table descriptions as foreign 
key constraints, and the corresponding triggers are not listed.  This 
is great since it makes the table descriptions much more intelligible, 
but my concern is that all the other references which I didn't 
re-define still show as triggers and not as foreign key constraints.  
Is this just a cosmetic issue with psql's table description, or is 
there actually a functional difference?  Should I re-define all the 
other foreign key constraints to be safe?

Thanks!
Bob Smith
Hammett & Edison, Inc.
[EMAIL PROTECTED]


[ADMIN] PITR wrm-standby startup fails

2008-08-24 Thread Bob Lunney
I'm trying to vet the PITR/warm-standby process, so I set up a primary and
secondary server on two different Windows machines.  (Yeah, I know, but I don't
have a choice in the matter.)

The primary works fine and copies its WAL files to the archive directory.  As
long as the secondary is in recovery mode, life is good.  When I introduce the
trigger file, however, to kick the secondary out of recovery mode, it bombs,
complaining that it's looking for the next WAL file (which is never going to
arrive, especially not in the local pg_xlog directory), then gives me an
"invalid parameter" error.  The relevant part of the secondary's log file is
here:

2008-08-24 23:02:56 CDT LOG:  restored log file "000100040088" from 
archive
2008-08-24 23:03:01 CDT LOG:  restored log file "000100040089" from 
archive
2008-08-24 23:03:06 CDT LOG:  restored log file "00010004008A" from 
archive
2008-08-24 23:07:02 CDT LOG:  could not open file 
"pg_xlog/00010004008B" (log file 4, segment 139): No such file or 
directory
2008-08-24 23:07:05 CDT LOG:  startup process (PID 1468) was terminated by 
exception 0xC000000D
2008-08-24 23:07:05 CDT HINT:  See C include file "ntstatus.h" for a 
description of the hexadecimal value.
2008-08-24 23:07:05 CDT LOG:  aborting startup due to startup process failure

My recovery command is:

restore_command = '..\bin\pg_standby -d -s 5 -t .\pgsql.trigger.5442 
..\data\archive %f %p %r 2>>pg_log\standby.log'


I found this on the forums.  Could it be the source of the problem I'm 
experiencing?

Thanks in advance for the wizardly advice!

Bob Lunney



  



Re: [ADMIN] PostgreSQL Database freezes during backup then generates drwtsn.exe process

2008-12-02 Thread Bob Lunney
Ward,

I've experienced the exact problem you describe.  The two machines were
identical in every way: make, model, disk layout, OS, etc., and this scenario
happened regardless of which machine was the primary and which was the
warm-standby.  Note I was not running pgAgent.

I was using pg_standby to implement copying of WAL files between machines.  It
would copy the WAL file to a network shared directory, where the warm-standby
would pick up the file and use it, until the fatal error you describe happened.

I had discovered that during a copy operation Windows will allocate the entire
file size on the target prior to completing the file copy.  This differs from
Unix, and may have something to do with the errors we are seeing.  I'm
speculating here, but I believe when the recovery code "sees" a 16 MB file it
thinks the entire file contents are available, which is not necessarily the
case with Windows.

I know some folks recommend rsync, but that requires installing cygwin and my
client isn't happy with that idea.  Possibly copying the WAL file to a
temporary location, then moving it to the target location, may mitigate the
problem, since move operations (on the same disk drive, anyway) in Windows
simply rejigger the file descriptor and don't reallocate any disk space.  I
haven't tried it yet, but I'm moving in that direction.

Regards,

Bob Lunney

--- On Tue, 12/2/08, Ward Eaton <[EMAIL PROTECTED]> wrote:

From: Ward Eaton <[EMAIL PROTECTED]>
Subject: [ADMIN] PostgreSQL Database freezes during backup then generates drwtsn.exe process
To: [email protected]
Date: Tuesday, December 2, 2008, 9:01 AM

I'm running PostgreSQL 8.3 on a Windows 2003 machine (pgAgent service is also
installed and running).  The system is running as a redundant database server,
backing up from one machine to another and transferring over WAL logs.  In the
PostgreSQL log files there are several error messages that read:

'could not rename file, no such file or directory'

Eventually PostgreSQL 'freezes', I'm unable to access the database, and there
is no logging.  I find a drwtsn.exe process running under the postgres account
in Task Manager.  If I kill the drwtsn process, the PostgreSQL service will
stop.  I can then restart it and the database will run as expected for a few
days.

Has anyone encountered this problem before?  If so, were you able to find a
solution?

Thanks for any help in advance.

Ward Eaton
Project Engineer Automation
RAD-CON Inc.
TECHNOLOGY: Innovative & Proven
Office: +1.216.706.8927
Fax: +1.216.221.1135
Website: www.RAD-CON.com
E-mail: [EMAIL PROTECTED]
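
Bob's copy-then-rename idea is the standard dodge here: the standby must never
see a WAL segment until it is complete.  A rough sketch of the shape of it for
the Unix side (the script name and paths are made up; %p and %f are the
standard archive_command placeholders):

#!/bin/sh
# archive_command = 'archive_wal.sh %p %f'
set -e
cp "$1" "/archive/$2.tmp"
mv "/archive/$2.tmp" "/archive/$2"   # rename is atomic within one filesystem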

   



 





  

[ADMIN] Large Number of Files in pg_xlog

2009-11-30 Thread Bob Lunney
Is it possible to reduce the number of files in pg_xlog?  Currently,
checkpoint_segments is set to 512, which if I understand correctly means there
could be up to 1025 files in pg_xlog, or 16 GB of WAL.  Right now there are 833
files.  I've tried setting checkpoint_segments to a lower number and restarting
the database, but to no avail.  Is there another method that doesn't involve
running a dump, initdb, and restore?

Thanks in advance,

Bob Lunney
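
For what it's worth, the file count only tracks checkpoint_segments loosely:
after the setting is lowered, surplus segments are removed or recycled a few
at a time as later checkpoints complete, so pg_xlog shrinks gradually rather
than at restart.  A sketch, assuming an 8.x server:

# in postgresql.conf: checkpoint_segments = 64  (whatever you can afford)
pg_ctl reload -D "$PGDATA"
psql -c 'CHECKPOINT;'    # each checkpoint lets the server trim some old WAL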


  



Re: [ADMIN] AIX - Out of Memory

2010-02-15 Thread Bob Lunney


--- On Mon, 2/15/10, Kenneth Marshall  wrote:

> From: Kenneth Marshall 
> Subject: Re: [ADMIN] AIX - Out of Memory
> To: "Tom Lane" 
> Cc: "Thorne, Francis" , [email protected]
> Date: Monday, February 15, 2010, 11:18 AM
> On Mon, Feb 15, 2010 at 10:57:06AM
> -0500, Tom Lane wrote:
> > "Thorne, Francis" 
> writes:
> > > Looking for some help with regards to an 'Out of
> Memory' issue I have
> > > with our Postgresql install on AIX.  When
> running large updates or
> > > select queries we get an out of memory error
> returned and details
> > > entered in the log file like below.  This is
> a 64-bit install and I have
> > > set the ulimit for the postgres user to
> unlimited.  
> > 
> > The bloat seems to be here:
> > 
> > >     AfterTriggerEvents:
> 131063808 total in 26 blocks; 576 free (7
> > > chunks); 131063232 used
> > 
> > but it's hard to believe you'd be getting "out of
> memory" after only
> > 130MB in a 64-bit build.  Are you *sure* the
> postgres executable is
> > 64-bit?  Are you *sure* the postmaster has been
> launched with
> > nonrestrictive ulimit?  On lots of setups that
> takes modifying the
> > PG startup script, not just fooling with some user's
> .profile.
> > 
> > > This is a 64-bit install (8.3) on AIX 5.3
> > 
> > 8.3.what?
> > 
> >        
>     regards, tom lane
> 
> I no longer have an AIX box, but I had similar problems
> with other
> applications that needed large amounts of memory. Some OS
> specific
> steps needed to be taken to allow normal users to allocate
> large
> blocks of memory. The information needed was in their
> on-line docs
> as I recall, but I do not remember the details. The
> executables may
> need to be built with specific options/flags to work.
> 
> Regards,
> Ken
> 

Ken,

I recently saw a similar issue.  It is two-fold:

1.  I used "su -" to become the postgres user, and inherited the previous 
account's memory limits,
2.  AfterTriggerEvents queues are caused by foreign key constraints, one per 
row.  If you're loading data, dropping or disabling that constraint makes a 
world of difference.  Just be sure to check afterwards if the RI has been 
violated prior to recreating the FK constraint.
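
For point 2, the shape of it is below (table and constraint names are made
up).  Re-adding the constraint revalidates every row, so any RI violations
introduced by the load surface right there:

psql mydb <<'SQL'
alter table child drop constraint child_parent_fk;
-- ... run the bulk load here ...
alter table child add constraint child_parent_fk
    foreign key (parent_id) references parent (id);
SQL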

Bob






Re: [ADMIN] Querying the same column and table across schemas

2010-03-05 Thread Bob Lunney


--- On Fri, 3/5/10, John A. Sullivan III  wrote:

> From: John A. Sullivan III 
> Subject: [ADMIN] Querying the same column and table across schemas
> To: [email protected]
> Date: Friday, March 5, 2010, 2:44 PM
> Hello, all.  I'm working on a
> project using the X2Go terminal server
> project (www.x2go.org).  They record session data in a
> postgresql
> database.  Our environment is a little more secure
> than typical and we
> do not want it possible for one user to see another's
> session data.  We
> thus have divided the session database into schemas each
> with an
> identical set of tables.  Each user only writes and
> reads from their
> schema.
> 
> However, we need to query all schemas as if they were
> one.  Is there a
> way to do that?
> 
> In other words, if we were a single schema database, we
> could do
> 
> select session_id from sessions;
> 
> to list all sessions.  How can we accomplish the same
> thing to list all
> the sessions across all the schemas in a single query?
> 
> I'm trying to avoid making a thousand call like
> 
> select user1.session_id from user1.sessions;
> 
> when I could do it in a single query especially since the
> database is
> remote and secured with SSL.
> 
> Thanks - John
> 

John,

How about creating a central admin schema and putting a trigger on all the 
sessions tables to write changes to the central admin schema's session table?  
The function could belong to the admin role and run with definer's security.
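
Roughly like this, untested and with made-up names (one trigger per user
schema):

psql x2go <<'SQL'
create schema admin;
create table admin.sessions as select * from user1.sessions where false;

create function admin.log_session() returns trigger
language plpgsql security definer as $$
begin
  insert into admin.sessions values (new.*);
  return new;
end $$;

create trigger t_log_session after insert on user1.sessions
  for each row execute procedure admin.log_session();
SQL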

Bob






Re: [ADMIN] Handling of images via Postgressql

2010-04-07 Thread Bob Lunney
Suresh,

The real question is: does manipulation of the images have to be
transactional?  If so, store them in the database.  If not, store the images in
the file system and put the file name in the database, thereby avoiding
unnecessary WAL overhead.
Bob Lunney
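
The second option is just a pointer column; a minimal sketch with made-up
names:

psql mydb <<'SQL'
create table images (
    id        serial primary key,
    file_path text not null,                     -- image stays on disk
    added_at  timestamptz not null default now()
);
SQL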

--- On Wed, 4/7/10, Suresh Borse  wrote:

From: Suresh Borse 
Subject: [ADMIN] Handling of images via Postgressql
To: [email protected]
Date: Wednesday, April 7, 2010, 8:50 AM




  
  

 




How does PostgreSQL perform in case we have to store and use a huge number of
images, e.g. 4000 – 1 images, each approx 2 MB in size?

The questions that we have are:

How do we handle such a huge number of images so that the application does not
slow down?

How does PostgreSQL use caching?  In the case of images, does it cache?

Do you suggest having the images in a different database by itself?

Regards,

Suresh




  

Re: [ADMIN] Runaway Locks

2010-05-04 Thread Bob Lunney
If you're using Tomcat or some other Java container that does connection 
management restart it and the lock should go away.

Bob Lunney
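
If the locks survive a container restart, note the -1/2091555 virtual
transaction below: that pattern usually means a prepared transaction, which
holds its locks with no client attached.  Something along these lines shows
and clears it (the gid comes from the first query):

psql arc_dev <<'SQL'
select gid, prepared, owner from pg_prepared_xacts;
-- rollback prepared 'gid-from-the-query-above';
SQL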

--- On Fri, 4/30/10, Kamcheung Sham  wrote:

> From: Kamcheung Sham 
> Subject: [ADMIN] Runaway Locks
> To: [email protected]
> Date: Friday, April 30, 2010, 9:03 AM
> 
> I was connecting to Postgres 8.3 through JDBC. During my
> unit tests, something went wrong and I am now left with the
> following locks in the db:
> 
> arc_dev=# select locktype, mode, relname, virtualtransaction, pid
>           from pg_locks l join pg_class c on l.relation = c.oid;
>
>  locktype |       mode       |          relname           | virtualtransaction | pid
> ----------+------------------+----------------------------+--------------------+------
>  relation | AccessShareLock  | pg_locks                   | 1/38               | 1816
>  relation | RowShareLock     | hibernate_sequences        | -1/2091555         |
>  relation | RowExclusiveLock | hibernate_sequences        | -1/2091555         |
>  relation | AccessShareLock  | pg_class_oid_index         | 1/38               | 1816
>  relation | AccessShareLock  | pg_class_relname_nsp_index | 1/38               | 1816
>  relation | AccessShareLock  | pg_class                   | 1/38               | 1816
> (6 rows)
>
> arc_dev=#
> 
> The locks on 'hibernate_sequences' is causing any update to
> the locked row to hang. There is currently no running
> database client process anymore (as I've restarted by server
> a few times).
> 
> My question is how do i kill the virtual transaction and
> have the locks released?
> 
> Thanks,
> kam
> 






Re: [ADMIN] alter column resize triggers question

2010-06-21 Thread Bob Lunney
Mike,

Doesn't look like it, at least on 8.4.  Give the script below a try for
yourself.

Another approach would be to create a new table with the schema you need,
insert the rows from your existing table into it, rename the two tables
appropriately, then recreate the indexes and trigger on the new table.  That
way you won't have to worry about the trigger firing at all.

Bob Lunney
==
create table blah (blah int, ts timestamptz);

create function update_timestamp() returns trigger as $$
begin
  new.ts = now();
  return new;
end;
$$ language plpgsql;

create trigger blah_t before insert or update on blah
  for each row execute procedure update_timestamp();

insert into blah values (1);
insert into blah values (2);
insert into blah values (3);

select * from blah;
 blah |              ts
------+-------------------------------
    1 | 2010-06-21 14:33:32.14576-04
    2 | 2010-06-21 14:33:34.545739-04
    3 | 2010-06-21 14:33:36.097878-04
(3 rows)

alter table blah alter column blah type bigint;

select * from blah;
 blah |              ts
------+-------------------------------
    1 | 2010-06-21 14:33:32.14576-04
    2 | 2010-06-21 14:33:34.545739-04
    3 | 2010-06-21 14:33:36.097878-04
(3 rows)
===
--- On Mon, 6/21/10, Mike Broers  wrote:
From: Mike Broers 
Subject: [ADMIN] alter column resize triggers question
To: [email protected]
Date: Monday, June 21, 2010, 2:18 PM

Pg v8.3.8

I have a table whose column size needs to be increased:

\d dim_product
                              Table "report.dim_product"
      Column       |           Type           |                Modifiers
-------------------+--------------------------+------------------------------------------
 product_id        | integer                  | not null default
                                                nextval('dim_product_id_seq'::regclass)
 application_id    | integer                  | not null
 source_product_cd | integer                  | not null
 product_type      | character varying(20)    | not null
 product_name      | character varying(100)   | not null
 vendor_offer_cd   | character varying(30)    |
 service_name      | character varying(20)    |
 category          | character varying(40)    |
 svc_line_cd       | character varying(40)    |
 established       | timestamp with time zone | not null
 modified          | timestamp with time zone | not null
Indexes:
    "dim_product_pkey" PRIMARY KEY, btree (product_id)
    "idx_dim_product_modified" btree (modified)
    "idx_dim_product_source_product_cd" btree (source_product_cd)
Triggers:
    t_dim_product_timestamp_b_iu BEFORE INSERT OR UPDATE ON dim_product FOR
    EACH ROW EXECUTE PROCEDURE public.update_timestamps()


I need to change service_name column to varchar(55), my plan was to backup the 
table with pg_dump, then run the below alter statement:


alter table dim_product alter column service_name type varchar(55);

But I am worried about the triggers, because I believe the alter table
statement will rewrite the table and I don't want those triggers firing.  Does
anyone know if I need to disable these triggers prior to the alter table
statement, or if there are any other dependencies or precautions I should
review before attempting this action?  I have also seen there is a workaround
involving updates to the pg_attribute table, but frankly that makes me a
little nervous.


Thanks in advance,
Mike







  

[ADMIN] Slony DDL/DML Change and "attempted to lock invisible tuple" PG Error

2010-06-25 Thread Bob Lunney
I'm not sure if this is really a bug, so I'm posting here instead of pgsql-bugs.

Last night I attempted to make a DDL and DML change through Slony, using the 
execute script command.  The changes (adding a column and updating some rows) 
worked on the master but failed on the slave, with the PG database on the slave 
throwing an "attempted to lock invisible tuple" error.  Neither the DDL nor the
DML change stuck on the slave, though both did on the master.

Slony, of course, restarted its worker thread and tried again (and again and 
again...).  I stopped the retries by updating the sl_event entry on the master 
node, changing it to a SYNC record, dropping the table in question from the 
replication set, applying the DDL/DML manually on the slave, then re-adding the 
table to the replication set.  (All of which worked fine, BTW.)

So, the question is: Is this a Slony or a PostgreSQL problem, and what 
should/can I do about it?

I'm running SLES 10 (Linux slave1 2.6.16.60-0.33-smp #1 SMP Fri Oct 31 14:24:07 
UTC 2008 x86_64 x86_64 x86_64 GNU/Linux), PostgreSQL 8.4.1 (PostgreSQL 8.4.1 on 
x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.0 (SUSE Linux), 
64-bit), and Slony 2.0.3-rc2.  This setup has been running fine for 5 months 
under very heavy daily load.  (BTW, Slony 2.0.3-rc2 has been working great 
replicating data and servicing DDL requests just fine, with no problems up to 
now, but I'm still going to test 2.0.4 and upgrade if the test results pan out).


The Slony log contains:

2010-06-24 18:06:09 EDT CONFIG remoteWorkerThread_1: DDL request with 7 
statements
2010-06-24 18:06:09 EDT CONFIG remoteWorkerThread_1: DDL Statement 0: [

-- -*- SQL -*-

set session authorization main_usr;]
2010-06-24 18:06:09 EDT CONFIG DDL success - PGRES_COMMAND_OK
2010-06-24 18:06:09 EDT CONFIG remoteWorkerThread_1: DDL Statement 1: [
set search_path to public;]
2010-06-24 18:06:09 EDT CONFIG DDL success - PGRES_COMMAND_OK
2010-06-24 18:06:09 EDT CONFIG remoteWorkerThread_1: DDL Statement 2: [


alter table public.rte add column dols boolean default false not null;]
2010-06-24 18:06:09 EDT CONFIG DDL success - PGRES_COMMAND_OK
2010-06-24 18:06:09 EDT CONFIG remoteWorkerThread_1: DDL Statement 3: [
update public.rte set dols = true where mctr in ('AA', 'YY');]
2010-06-24 18:06:09 EDT ERROR  DDL Statement failed - PGRES_FATAL_ERROR
2010-06-24 18:06:09 EDT INFO   slon: retry requested
2010-06-24 18:06:09 EDT INFO   slon: notify worker process to shutdown


The relevant PG log entries are:

10.192.2.1(41547):2010-06-24 18:06:09.913 EDT:LOG:  statement:


alter table public.rte add column dols boolean default false not null;
10.192.2.1(41547):2010-06-24 18:06:09.935 EDT:ERROR:  attempted to lock 
invisible tuple
10.192.2.1(41547):2010-06-24 18:06:09.935 EDT:STATEMENT:
update public.rte set dols = true where mctr in ('AA', 'YY');
10.192.2.1(41547):2010-06-24 18:06:09.958 EDT:LOG:  unexpected EOF on client 
connection
10.192.2.1(41547):2010-06-24 18:06:09.958 EDT:LOG:  disconnection: session 
time: 0:00:00.071 user=rep_usr database=main_db host=10.192.2.1 port=41547

Thanks in advance for your help!

Regards,

Bob Lunney



  



Re: [ADMIN] out of memory error

2010-08-05 Thread Bob Lunney
Silvio,

I had a similar problem when starting the database from an account that didn't 
have the appropriate ulimits set.  Check the ulimit values using ulimit -a.

HTH,

Bob Lunney
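
For example, checked as the account that actually starts the postmaster:

su - postgres -c 'ulimit -a'
# "max memory size" and "virtual memory" should be unlimited or generous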

--- On Thu, 8/5/10, Silvio Brandani  wrote:

> From: Silvio Brandani 
> Subject: [ADMIN] out of memory error
> To: [email protected]
> Date: Thursday, August 5, 2010, 9:01 AM
> Hi,
> 
> a query on our production database give following errror:
> 
> 
> 2010-08-05 10:52:40 CEST [12106]: [278-1] ERROR:  out
> of memory
> 2010-08-05 10:52:40 CEST [12106]: [279-1] DETAIL: 
> Failed on request of size 48.
> 
> 
> 
> 
> any suggestion ?
> 
> -- Silvio Brandani
> Infrastructure Administrator
> SDB Information Technology
> Phone: +39.055.3811222
> Fax:   +39.055.5201119
> 
> ---
> 
> 
> 
> 
> 
> 
> 






Re: [ADMIN] password administration

2010-08-05 Thread Bob Lunney
Mark,

Look into kerberos.  You will have to recompile your server to use it.

Bob Lunney
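
The rolvaliduntil machinery itself is simple enough; a sketch of both halves,
with a made-up role name and dates:

psql <<'SQL'
alter role mark valid until '2010-09-05';          -- enforced at password login
select rolname, rolvaliduntil
  from pg_roles
 where rolvaliduntil < now() + interval '7 days';  -- feed this to your mailer
SQL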

--- On Thu, 8/5/10, Mark Steben  wrote:

> From: Mark Steben 
> Subject: [ADMIN] password administration
> To: [email protected]
> Date: Thursday, August 5, 2010, 3:58 PM
> 
> 
> Hi postgres gurus:
> 
> I would like to set up a facility that enforces password
> changes for roles
> After a predefined period (30 days for instance) when
> logging into psql
> Or, at the very least, send an email out to notify that
> your current 
> Password period is about to expire.  Preferably, I'd
> like to use
> The 'rolvaliduntil' column in pg_roles.
> 
> I'm wondering if there is an app inside or outside of
> postgres
> that I can use or do I have to design from scratch.
> 
> Thanks for your time,
> 
> 
> Mark Steben | Database Administrator 
> @utoRevenue® - "Keeping Customers Close" 
> 95D Ashley Ave, West Springfield, MA 01089 
> 413.243.4800 x1512 (Phone) |413.732-1824 (Fax) 
> @utoRevenue is a registered trademark and a division of
> Dominion
> Enterprises 
>  
> 
> 
> 
> 
> 






[ADMIN] GSS for both Windows and Linux

2010-10-01 Thread Bob Lunney
Is it possible to have clients authenticate via Kerberos to a PG 8.4 server 
running on linux from both linux and windows hosts?  I have authentication 
working (using GSSAPI) between the linux clients and the server.  I'd like to 
use the Windows binary install (which already has GSSAPI compiled in) against 
the linux-based database server.  

o  Do I need to have separate AD principals for the Windows clients or is it 
possible for Windows clients to get a ticket granting ticket from the 
linux-based KDC?

o  How do I combine the linux and Windows keytab files the server needs to 
authenticate if separate principals are required?

o  Since Windows (mis)uses "POSTGRES" and linux uses "postgres" for the service 
name, do I need to force either one to use the other's service name, since the 
database server can only use one version? 

Any pointers will be appreciated.  I've scoured Google  but can't find the 
missing piece of information I need to get Windows clients working.  Thanks in 
advance!

Bob Lunney
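
On the keytab question: MIT's ktutil can merge two keytabs into one, roughly
like this (paths are made up, and this assumes the MIT tools):

ktutil <<'EOF'
read_kt /etc/krb5-linux.keytab
read_kt /etc/krb5-windows.keytab
write_kt /etc/krb5-combined.keytab
quit
EOF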


  



Re: [ADMIN] pg_dump/restore problems

2010-10-26 Thread Bob Lunney
Glen,

Did you drop the indexes prior to the restore?  If not, try doing so and
recreating the indexes afterwards.  That will also speed up the data load.

Bob Lunney
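
With a custom-format archive, one way is to edit the table of contents so the
index entries are skipped, restore, then build the indexes yourself (file
names are made up):

pg_restore -l bigtable.dump > toc.list
grep -v ' INDEX ' toc.list > toc.noindex
pg_restore -L toc.noindex -d targetdb bigtable.dump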

--- On Mon, 2/15/10, Glen Brown  wrote:

From: Glen Brown 
Subject: [ADMIN] pg_dump/restore problems
To: [email protected]
Date: Monday, February 15, 2010, 1:25 PM

I am not sure where I should post this, but I am running into problems trying
to restore a large table.  I am running 8.4.1 on all servers.  The table is
about 25 GB in size and most of that is toasted.  It has about 2.5M records.
When I dump this table using pg_dump -Fc it creates a 15 GB file.  I am trying
to restore it into a database that has 100 GB of free disk space, and it
consumes it all and fails to finish the restore.  The table is not partitioned
and has a few indexes on it.  What can I do?

 
thanks
-glen

Glen Brown




  

Re: [ADMIN] ERROR: invalid memory alloc request size 4294967293

2011-01-04 Thread Bob Lunney
Run ulimit -a and verify the max memory size allowed for the postgres
account.  (I assume you are running the postmaster under the postgres account,
right?)  The allowed size should be large enough for the postmaster plus shared
buffers and several other GUCs that require memory.

Bob Lunney


--- On Tue, 1/4/11, Victor Hugo dos Santos  wrote:

> From: Victor Hugo dos Santos 
> Subject: [ADMIN] ERROR: invalid memory alloc request size 4294967293
> To: [email protected]
> Date: Tuesday, January 4, 2011, 8:48 AM
> Hello,
> 
> Actually I use postgresql version  8.4.6-0ubuntu10.04
> in bacula server
> to save information about backups from bacula.
> But, 2 days ago, the postgresql make a error when I run the
> command
> pg_dump. This is the error:
> 
> 
> 02-Jan 06:32 bacula-dir JobId 31005: BeforeJob: pg_dump:
> SQL command failed
> 02-Jan 06:32 bacula-dir JobId 31005: BeforeJob: pg_dump:
> Error message
> from server: ERROR:  invalid memory alloc request size
> 4294967293
> 02-Jan 06:32 bacula-dir JobId 31005:%2








Re: [ADMIN] Postgres Backup Utility

2011-01-19 Thread Bob Lunney
Brad,
Google for "SQL Power Architect", download it, and try a schema comparison.  
That might get you a ways down to road to what you want.
Bob Lunney
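
Failing that, a low-tech comparison gets you most of the way: dump both
schemas and diff them, then hand-translate the hunks into ALTER statements.

pg_dump --schema-only dev_db  > dev.sql
pg_dump --schema-only test_db > test.sql
diff -u test.sql dev.sql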

--- On Wed, 1/19/11, French, Martin  wrote:

From: French, Martin 
Subject: Re: [ADMIN] Postgres Backup Utility
To: "Bradley Holbrook" 
Cc: [email protected]
Date: Wednesday, January 19, 2011, 2:12 AM

Ok, you say that you cannot drop and recreate, so you need to do this via
alter statements only?  That's obviously going to complicate matters, as a
straight dump, drop, recreate, restore would be the fastest and by far
simplest method.

So, ideally, you'll need to do a table def comparison over the two databases,
and generate the necessary sql to amend the tables in test accordingly?

Querying the pg_catalog/information_schema over the two db's should give you
the table ddl from which you can diff, and then generate the alter statements
from the results.

Cheers

Martin

From: Bradley Holbrook [mailto:[email protected]]
Sent: 18 January 2011 16:57
To: French, Martin
Cc: [email protected]
Subject: RE: [ADMIN] Postgres Backup Utility

Well, I can't just go dropping and recreating tables… it needs to create the
correct alter statements if existing tables and or functions already exist.

Secondly, when I'm finished changing the structure, I need to be able to
select the list of tables that will have content updates.

Using a script might be more work maintaining than it's worth.  I have a
backup utility that can do the job, but 3 tedious steps per schema, that only
work about 10% of the time (and no batching options so that I can create a
list of actions and run the list).

From: French, Martin [mailto:[email protected]]
Sent: January-18-11 5:47 AM
To: Bradley Holbrook; [email protected]
Subject: RE: [ADMIN] Postgres Backup Utility

I'm assuming that this needs to be tightly controlled and as such a
replication tool is out of the question?

In that case, the first thing to pop into my head here would be to use either
shell scripting, or to use the pg API and write a C program to handle it.

I remember doing something very similar with Oracle a few years back.

Cheers

Martin

From: [email protected] [mailto:[email protected]]
On Behalf Of Bradley Holbrook
Sent: 18 January 2011 00:08
To: [email protected]
Subject: [ADMIN] Postgres Backup Utility

Hello!

First day on the new mailing list as I have need of some expert's advice.

I need to be able to quickly apply the structure updates from a development
database to a testing database, and do selective data updates (like on lookup
tables, but not content tables).

Any help would be appreciated!

Brad




  

[ADMIN] PG Server Crash

2011-03-07 Thread Bob Lunney
OK, I got a nasty surprise today.  The server threw everybody out of the pool 
and logged this:

10.224.12.4(52763) idle: pthread_mutex_lock.c:80: __pthread_mutex_lock: 
Assertion `mutex->__data.__owner == 0' failed.
:4d45a006.78ec:2011-03-07 11:57:45.316 EST:LOG:  server process (PID 21298) was 
terminated by signal 6: Aborted
:4d45a006.78ec:2011-03-07 11:57:45.316 EST:LOG:  terminating any other active 
server processes

It then stopped all the backends, restarted, recovered and worked fine for the 
rest of the day.

The particulars:

SLES 10
PG 8.4.1
8-way Intel Xeon E5345 @ 2.33GHz
32 GB RAM

max_connections = 1500
shared_buffers = 1536MB
work_mem = 32MB 
maintenance_work_mem = 256MB
checkpoint_segments = 2048
effective_cache_size = 20GB
effective_io_concurrency = 6
random_page_cost = 2.0


This server has been rock solid running 8.4.1 for over a year.  It has Slony 
replicating to another mirror image server that has not had any problems 
whatsoever.

Any ideas?

Thanks in advance for all your help, and especially for the amazing database 
software!

Regards,

Bob Lunney


  



Re: [ADMIN] PG Server Crash

2011-03-07 Thread Bob Lunney
Sorry, I should have included the fact that PG was compiled from source on the 
same machine where it runs using gcc 4.1.0, and config.log has:

./configure --with-python --with-gssapi --enable-thread-safety

Bob Lunney

--- On Mon, 3/7/11, Bob Lunney  wrote:

> From: Bob Lunney 
> Subject: [ADMIN] PG Server Crash
> To: [email protected]
> Date: Monday, March 7, 2011, 11:24 PM
> OK, I got a nasty surprise
> today.  The server threw everybody out of the pool and
> logged this:
> 
> 10.224.12.4(52763) idle: pthread_mutex_lock.c:80:
> __pthread_mutex_lock: Assertion `mutex->__data.__owner ==
> 0' failed.
> :4d45a006.78ec:2011-03-07 11:57:45.316 EST:LOG: 
> server process (PID 21298) was terminated by signal 6:
> Aborted
> :4d45a006.78ec:2011-03-07 11:57:45.316 EST:LOG: 
> terminating any other active server processes
> 
> It then stopped all the backends, restarted, recovered and
> worked fine for the rest of the day.
> 
> The particulars:
> 
> SLES 10
> PG 8.4.1
> 8-way Intel Xeon E5345 @ 2.33GHz
> 32 GB RAM
> 
> max_connections = 1500
> shared_buffers = 1536MB
> work_mem = 32MB           
>              
> maintenance_work_mem = 256MB       
>     
> checkpoint_segments = 2048
> effective_cache_size = 20GB
> effective_io_concurrency = 6
> random_page_cost = 2.0
> 
> 
> This server has been rock solid running 8.4.1 for over a
> year.  It has Slony replicating to another mirror image
> server that has not had any problems whatsoever.
> 
> Any ideas?
> 
> Thanks in advance for all your help, and especially for the
> amazing database software!
> 
> Regards,
> 
> Bob Lunney
> 
> 
>       
> 
> 






Re: [ADMIN] full vacuum of a very large table

2011-03-29 Thread Bob Lunney

--- On Tue, 3/29/11, Nic Chidu  wrote:

> From: Nic Chidu 
> Subject: [ADMIN] full vacuum of a very large table
> To: [email protected]
> Date: Tuesday, March 29, 2011, 11:56 AM
> Got a situation where a 130 mil rows
> (137GB) table needs to be brought down in size to  10
> mil records (most recent)
> with the least amount of downtime. 
> 
> Doing a full vacuum would be faster on:
>  - 120 mil rows deleted and 10 mil active (delete most of
> them then full vacuum)
>  - 10 mil deleted and 120 mil active. (delete small batches
> and full vacuum after each delete). 
> 
> Any other suggestions?
> 
> Thanks,
> 
> Nic
> 
> -- 

Nic,

Since you know the where clause to delete the 120 mil rows why not use the 
converse of that to select the 10 mil rows to retain into another table, then 
drop the original table?  No vacuum required!

Be sure to use the "create table as select..." syntax to avoid WAL during 
creation of the new table, and use a transaction to drop the original table and 
rename the new one.  That way users will be querying the original table right 
up until the switch over, when they will start using the new table.  

Foreign keys and other constraints may complicate things a bit, so check those 
out first.  Also, don't forget to index and analyze the new table before the 
switch over.

Good luck!

Bob Lunney
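
In outline, with made-up table and column names:

psql mydb <<'SQL'
create table big_new as
    select * from big where created_at > now() - interval '90 days';
create index big_new_created_idx on big_new (created_at);
analyze big_new;

begin;                        -- readers stay on the old table until here
drop table big;
alter table big_new rename to big;
commit;
SQL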






Re: [ADMIN] grant select script

2011-03-30 Thread Bob Lunney
Marc,

Try pg_stat_user_tables - it will eliminate the tables in pg_catalog,
information_schema, and the TOAST tables.

Bob Lunney
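
With that change Marc's loop collapses to something like this (untested):

psql cswe2 -At -c "select relname from pg_stat_user_tables;" |
while read table; do
    psql cswe2 -c "grant select on table \"$table\" to tom;"
done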

--- On Wed, 3/30/11, Marc Fromm  wrote:

From: Marc Fromm 
Subject: [ADMIN] grant select script
To: "[email protected]" 
Date: Wednesday, March 30, 2011, 1:54 PM



 
 


 

I am working with a script to automate granting select on all tables in a
database to a user.

#!/bin/bash
for table in `echo "SELECT relname FROM pg_stat_all_tables;" | psql cswe2 | grep -v "pg_" | grep "^ "`;
do
echo "GRANT SELECT ON TABLE $table to tom;"
echo "GRANT SELECT ON TABLE $table to tom;" | psql cswe2
done

The script works, meaning it grants the select to the user, but it generates
errors on tables that do not exist, like the following.  The database cswe2
does not contain the table sql_languages, unless it is hidden.  Is there a way
to tell the script to ignore them?

GRANT SELECT ON TABLE sql_languages to tom;
ERROR:  relation "sql_languages" does not exist
   
Thanks 
   
Marc
 
   
   
Marc Fromm

Information Technology Specialist II

Financial Aid Department

Western Washington University

Phone: 360-650-3351

Fax:   360-788-0251 

 




  

Re: [ADMIN] PKs without indexes

2011-04-19 Thread Bob Lunney

--- On Tue, 4/19/11, Jerry Sievers  wrote:

> From: Jerry Sievers 
> Subject: Re: [ADMIN] PKs without indexes
> To: [email protected]
> Cc: [email protected]
> Date: Tuesday, April 19, 2011, 11:19 AM
> John P Weatherman 
> writes:
> 
> > Hi all,
> >
> > I am attempting to set up slony-i and have run into a
> minor
> > glitch...apparently whoever designed the database I
> have inherited
> > didn't consistently build indexes to go along with
> Primary Keys, or at
> > least that's the error message I have been
> getting.  I am far from
> > confident in my own sqlfu in the catalog tables. 
> Does anyone have a
> > script for identifying tables without indexes that
> correspond to their
> > PKs?  I'm just trying to avoid re-inventing the
> wheel if I can help it.
> 
> Here's an example for you...
> 
> begin;
> 
> create schema foo;
> set search_path to foo;
> 
> create table haspk (a int primary key);
> create table missingpk (a int);
> 
> select relname
> from pg_class c
> join pg_namespace n on c.relnamespace = n.oid
> where nspname = 'foo'
> and relkind = 'r'
> and c.oid not in (
>     select conrelid
>     from pg_constraint
>     where contype = 'p'
> );
> 
> abort;
> 
> HTH

Slony will use any unique index on a table for replication purposes, so the 
list of tables should come from:

select relname
  from pg_class c
  join pg_namespace n on c.relnamespace = n.oid
 where nspname = current_schema()
   and relkind = 'r'
   and c.oid not in (
       select indrelid
         from pg_index
        where indisprimary or indisunique
   )
 order by 1;

Bob Lunney





Re: [ADMIN] best practice for moving millions of rows to child table when setting up partitioning?

2011-04-27 Thread Bob Lunney
Mark,

Comments inline below for items 2 and 3.  What you are planning should work 
like a charm.

Bob Lunney

--- On Wed, 4/27/11, Mark Stosberg  wrote:

> From: Mark Stosberg 
> Subject: [ADMIN] best practice for moving millions of rows to child table 
> when setting up partitioning?
> To: [email protected]
> Date: Wednesday, April 27, 2011, 10:48 AM
> 
> Hello,
> 
> I'm working on moving a table with over 30 million to rows
> to be
> partitioned. The table seeing several inserts per second.
> It's
> essentially an activity log that only sees insert activity
> and is
> lightly used for reporting, such that queries against it
> can safely be
> disabled during a transition.
> 
> I'm looking for recommendations for a way to do this that
> will be least
> disruptive to the flow of inserts statements that will
> continue to
> stream in.
> 
> Here's the plan which seems best to me at the moment. Is it
> is
> reasonable?
> 
> 1. Handling creating the empty/future partitions is easy. I
> have the
> code for this done already, and will make several
> partitions in advance
> of needing them.
> 
> 2. To create the partitions that should have data moved
> from the parent,
> I'm thinking of creating them, and then before they are
> "live",
> using INSERT ... SELECT to fill them with data from the
> parent table.
> I'll run the INSERT first, and then add their indexes.
> 

Use "create table as select ..." (CTAS) instead of creating the table, then 
inserting.  Since the table is created and populated atomically there is no 
need to log the inserts in WAL, and the operation is much faster.
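
In outline, with made-up names, building one partition and attaching it:

psql mydb <<'SQL'
create table activity_2011_04 as
    select * from activity_log
     where logged_at >= '2011-04-01' and logged_at < '2011-05-01';
alter table activity_2011_04
    add check (logged_at >= '2011-04-01' and logged_at < '2011-05-01');
alter table activity_2011_04 inherit activity_log;
create index activity_2011_04_logged_at_idx on activity_2011_04 (logged_at);
SQL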

> 3. I will then install the trigger to redirect the inserts
> to the child
> table.
> 

If possible, its better to have the code simply do inserts directly into the 
child table - after all, if the partitioning is based on date, both the code 
and database know the date, so the code knows to which child table it should 
write at any given moment.

> 4. There will still be a relatively small number of new
> rows from the
> parent table to be deal with that came in after the INSERT
> from #2 was
> started, so a final INSERT .. SELECT statement will be made
> to copy the
> remaining rows.
> 
> 5. Finally, I'll drop the indexes on the parent table and
> truncate it.
> 
> Thanks for advice here. If there's a tutorial out there
> about this that
> I've missed, I'm happy to review it instead having it
> rehashed here.
> 
> Thanks for the help!
> 
>     Mark
> 
> 
>



Re: [ADMIN] visualizing database schema - png/jpeg?

2011-05-19 Thread Bob Lunney
Schema Spy works for me.

Bob Lunney
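
It renders the schema as browsable HTML with PNG diagrams.  The invocation is
roughly like this; the flags are from memory, so check the SchemaSpy docs:

java -jar schemaSpy.jar -t pgsql -host localhost -db mydb \
     -u report_user -p secret -o schema-report/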

--- On Wed, 5/18/11, neubyr  wrote:

> From: neubyr 
> Subject: [ADMIN] visualizing database schema - png/jpeg?
> To: [email protected]
> Date: Wednesday, May 18, 2011, 3:09 PM
> Anyone knows of tools that can
> generate PNG/JPEG image from pgsql
> database schema?
> 
> --
> neuby.r
> 
> 



[ADMIN] Parallel pg_dump on a single database

2011-06-24 Thread Bob Lunney
Is it possible (or smart!) to run multiple pg_dumps simulataneously on a single 
database, dumping different parts of the database to different files by using 
table and schema exclusion?  I'm attempting this and sometimes it works and 
sometimes when I check the dump files with 


  pg_restore -Fc  > /dev/null

I get 

  pg_restore: [custom archiver] found unexpected block ID (4) when reading data 
-- expected 4238

I suspect that locks are colliding sometimes and not others, but I'm not sure.  
Little help?  

Thanks in advance,

Bob Lunney



Re: [ADMIN] Parallel pg_dump on a single database

2011-07-01 Thread Bob Lunney
Tom,

Thanks for the response, but I figured out the error is mine, not pg_dump's.  
In short (to minimize my embarrassment!) don't write to the same file from 
three different pg_dumps. 

The good news is running multiple pg_dumps simultaneously on a single database 
with exclusive coverage of different table sets works great, and my overall 
dump times have been reduced to one-fifth the time it takes to run a single 
pg_dump.  

BTW, I'm using PG 8.4.1, going to 8.4.8 soon, and its working great.  Thanks to 
all for the excellent database software.


Regards,

Bob Lunney
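
For anyone trying the same thing, the working arrangement is simply
non-overlapping dump jobs, each writing to its own file:

pg_dump -Fc -n schema1 -f schema1.dump mydb &
pg_dump -Fc -n schema2 -f schema2.dump mydb &
wait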


From: Tom Lane 
To: Bob Lunney 
Cc: "[email protected]" 
Sent: Friday, July 1, 2011 2:09 PM
Subject: Re: [ADMIN] Parallel pg_dump on a single database 

Bob Lunney  writes:
> Is it possible (or smart!) to run multiple pg_dumps simulataneously on a 
> single database, dumping different parts of the database to different files 
> by using table and schema exclusion?  I'm attempting this and sometimes it 
> works and sometimes when I check the dump files with 
>   pg_restore -Fc  > /dev/null

> I get 

>   pg_restore: [custom archiver] found unexpected block ID (4) when reading 
>data -- expected 4238

That sure sounds like a bug.  What PG version are you using exactly?
Can you provide a more specific description of what you're doing,
so somebody else could reproduce this?

            regards, tom lane



Re: [ADMIN] Problem retrieving large records (bytea) data from a table

2011-07-20 Thread Bob Lunney
PostgreSQL has to accumulate all the rows of a query before returning the 
result set to the client.  It is probably spooling those several 400-450 Mb 
docs, plus all the other attributes, to a temporary file prior to sending the 
results back.  If you have just three document stored in the database you're 
looking at > 1 Gb for the spool file alone.

Remember, select * is a convenience.  You will probably get the same response 
time as before is you name the columns, except doc_data, in the select clause 
of your query.  See the 'extended' attribute of doc_data?  That means the bytea 
data is stored out-of-line from the other columns like id, create_date and by.  
 See http://www.postgresql.org/docs/9.0/interactive/storage-toast.html for the 
particulars of TOAST.

If you need to remove the doc data from the table a quick way to do that would 
be to either update the table and set doc_data to NULL, or use the "create 
table as select" (CTAS) syntax and specify NULL as the value for doc_date, then 
drop the original table and rename the new one to doc_table.  Note if you use 
the CTAS method you will have to alter the table afterwards to re-establish the 
not null and default attributes of each column.  Don't forget to recreate the 
primary key, too.
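
A minimal sketch of both approaches (untested, using the column names from 
your output):

  -- option 1: clear the bytea column in place
  UPDATE doc_table SET doc_data = NULL;

  -- option 2: rebuild the table without the document data
  CREATE TABLE doc_table_new AS
    SELECT id, file_n, create_date, "desc", doc_cc, "by",
           NULL::bytea AS doc_data, mime_type_id, doc_src, doc_stat
      FROM doc_table;
  DROP TABLE doc_table;
  ALTER TABLE doc_table_new RENAME TO doc_table;
  -- then re-add the NOT NULL / DEFAULT attributes and the primary key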

Finally, talk with the developers to see if the document data really needs to 
be in the database, or could just be in a file outside of the database.  If you 
need transactional semantics (ACID properties) to manage the documents you may 
be stuck.  If not, replace doc_data with doc_filename (or perhaps the existing 
file_n column already serves that purpose) and move on from there.

Good luck,

Bob Lunney



On Wednesday 20 July 2011 17:31:45, pasman pasmański wrote:
> You may do a backup of this table. Then with ultraedit search your
> documents and remove them.
> 
> 2011/7/5, [email protected] :
> > I am having a hang condition every time I try to retrieve a large
> > records (bytea) data from  a table
> > The OS is a 5.11 snv_134 i86pc i386 i86pc Solaris with 4GB memory
> > running Postgresql 8.4.3 with a standard postgresql.conf file (nothing
> > has been changed)
> > I have the following table called doc_table
> >     Column     |              Type              |                      Modifiers                       | Storage  | Description
> > ----------------+--------------------------------+------------------------------------------------------+----------+-------------
> >  id             | numeric                        | not null                                             | main     |
> >  file_n         | character varying(4000)        |                                                      | extended |
> >  create_date    | timestamp(6) without time zone | not null default (clock_timestamp())::timestamp(0) without time zone | plain |
> >  desc           | character varying(4000)        |                                                      | extended |
> >  doc_cc         | character varying(120)         | not null                                             | extended |
> >  by             | numeric                        | not null                                             | main     |
> >  doc_data       | bytea                          |                                                      | extended |
> >  mime_type_id   | character varying(16)          | not null                                             | extended |
> >  doc_src        | text                           |                                                      | extended |
> >  doc_stat       | character varying(512)         | not null default 'ACTIVE'::character varying         | extended |
> > Indexes:
> >    "documents_pk" PRIMARY KEY, btree (document_id)
> >
> >
> > A while ago some developers inserted several records with a
> > document (stored in doc_data) that was around 400 - 450 MB each. Now
> > when you do a select * (all) from this table you get a hang and the
> > system becomes unresponsive.  Prior to these inserts, a select * (all,
> > no where clause) worked.  I'm also told a select * from doc_table
> > where id = xxx still works.  I haven't seen any error message in the
> > postgresql log files.
> > So I'm not sure how to find these bad records and why I am getting a
> > hang.  Since this postgresql is running with the default config files
> > could I be running out of a resource?  If so I'm not sure how to or
> > how much to add to these resources to fix this problem since I have
> > very little memory on this system.  Does anyone have any ideas why I
> > am getting a hang.  Thanks
> >
> >
> 
> 
> -- 
> 
> pasman
> 



-- 
Achilleas Mantzios





Re: [ADMIN] vacuumdb question/problem

2011-07-21 Thread Bob Lunney
Dave,

You're on the right track now, however, unloading the table to a flat file 
using pg_dump may create a very large flat file.  Make sure you use compression 
("-Fc" or pipe the dump through gzip, which will use two CPU's, one for pg_dump 
and one for gzip) on the dump file to minimize its size.  If the dump is 
successful you don't have to drop the table, you can just truncate it and that 
will recover the used space and hand it back to the file system.  Truncate is 
very fast, but then again so is drop table.  Be careful.  You can then use 
pg_restore to put the data  back into the original table and that will reclaim 
the space.
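
A rough sketch of that sequence (the table name and paths are placeholders):

  pg_dump -Fc -t big_table mydb > big_table.pgdump    # compressed custom format
  psql -d mydb -c "TRUNCATE TABLE big_table;"         # hands the space back
  pg_restore -d mydb --data-only -t big_table big_table.pgdump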

Take very, very seriously what Kevin said earlier about autovacuum and 
possibly scheduling vacuum analyze verbose jobs to make dead space reusable.  
Upgrading to 8.4 will remove any need to manually manage the free space map in 
8.2 and is worth it, particularly to get a nicer version of autovacuum, 
although there are differences in automatic casting of data types between 8.2 
and 8.4 (and 9.x) that you should test before making a wholesale commitment to 
upgrading.  The fixes to your code aren't hard, but need to be done for you to 
get consistent results pre- and post-upgrade.

Finally, if there are natural partitions to the data in that table consider 
using PostgreSQL's partition feature.  What those partitions are depends 
entirely on your use case(s).  If the technique fits, you would be able to drop 
and create new partitions to clear out and populate data quite quickly.  Check 
out http://www.postgresql.org/docs/8.4/interactive/ddl-partitioning.html for 
details of partitioning in PG 8.4.
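
In 8.4 partitioning is done with inheritance plus CHECK constraints; a 
bare-bones sketch (table and column names here are hypothetical):

  CREATE TABLE measurements (id bigint, taken date, value numeric);
  CREATE TABLE measurements_2011_07 (
      CHECK (taken >= DATE '2011-07-01' AND taken < DATE '2011-08-01')
  ) INHERITS (measurements);
  -- queries against measurements see all partitions;
  -- dropping measurements_2011_07 discards that month's data instantly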

Good luck!

Bob Lunney


- Original Message -
From: David Ondrejik 
To: pgsql-admin 
Cc: 
Sent: Thursday, July 21, 2011 2:12 PM
Subject: Re: [ADMIN] vacuumdb question/problem

I think I see a (my) fatal flaw that will cause the cluster to fail.


>>  From the info I received from previous posts, I am going to change
>> my game plan. If anyone has thoughts as to different process or
>> can confirm that I am on the right track, I would appreciate your
>> input.
>> 
>> 1. I am going to run a CLUSTER on the table instead of a VACUUM
>> FULL.
Kevin Grittner stated:
> If you have room for a second copy of your data, that is almost
> always much faster, and less prone to problems.

I looked at the sizes for the tables in the database and the table I am trying 
to run the cluster on is 275G and I only have 57G free.  I don't know how much 
of that 275G has data in it and how much is empty to allow for a second copy of 
the data. I am guessing the cluster would fail due to lack of space.

Are there any other options??

If I unload the table to a flat file; then drop the table from the database; 
then recreate the table; and finally reload the data - will that reclaim the 
space?

Kevin - thanks for the book recommendation.  Will order it tomorrow.

Thanks again for all the technical help!

Dave






Re: [ADMIN] revoked permissions on table still allows users to see table's structure

2011-07-22 Thread Bob Lunney
Juan,

That is what schemas, permissions and search paths are for.  You create 
multiple schemas, put the tables in the appropriate ones, grant usage 
permissions to those users that need access to the schemas and set the search 
path to search the schemas for objects.  Below is the test case.  It helps if 
you reset the psql prompt to display the current user:

\set PROMPT1 '%m:%>:%n:%/:%R%x%# '

As the database owner:

create schema seethat;
create schema seewhat;
create user al_low;
create user dee_ny;
grant usage on schema seethat to al_low, dee_ny;
grant usage on schema seewhat to al_low;
set search_path to seethat, seewhat, public;

create table seethat.open(open_id int);
create table seewhat.closed(closed_id int);


set session authorization al_low;

\d
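-- expected: al_low sees both seethat.open and seewhat.closed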


set session authorization dee_ny;

\d
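-- expected: dee_ny sees only seethat.open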


Hope that helps!

Bob Lunney

- Original Message -
From: Juan Cuervo (Quality Telecom) 
To: Scott Marlowe 
Cc: [email protected]
Sent: Friday, July 22, 2011 8:24 AM
Subject: Re: [ADMIN] revoked permissions on table still allows users to see 
table's structure

Hi Scott

Thanks for your answer.

There should be a way to prevent this for normal users who only need 
access to a set of tables, a view or even a stored procedure.  (Maybe a 
VIEW_SCHEMA privilege for roles?)  Viewing a table's structure should only 
be allowed for users who have at least one privilege on the table.

It doesn't make much sense to me that every user with access to the 
database would be able to see the whole database design.

Do you know if this is common in other RDBMSs?

Regards,

Juan R. Cuervo Soto
Quality Telecom Ltd
www.quality-telecom.net
PBX : (575) 3693300
CEL : (57)  301-4174865


On 21/07/2011 08:48 p.m., Scott Marlowe wrote:
> On Thu, Jul 21, 2011 at 6:08 PM, Juan Cuervo (Quality Telecom)
>   wrote:
>> Hi All
>>
>> I'm new to the list, but have a few years as postgres user. I want to share
>> what I consider a rare behavior of postgresql regarding database object's
>> premissions:
>>
>> I have noticed that there is no way (at least no one I know) to prevent a
>> user from seeing the table's structures in a database.
>>
>> Is this a normal behavior of the product ?
> Yep.  Completely normal.
>
>> Is there a way to prevent a user from seeing my table's, procedure's and
>> function's code ?
> Don't let them connect to the db?  That's all I can think of.
>





Re: [ADMIN] Schema diagramming tool?

2012-09-04 Thread Bob Lunney
SchemaSpy is kind of handy. 

Bob

Sent from my iPhone

On Sep 4, 2012, at 11:35 AM, Craig James  wrote:

> Can anyone recommend a good tool for producing a good drawing of an existing 
> database schema?  I don't need a design tool, but rather one that can take an 
> existing schema and produce a nice diagram that can be further edited and 
> "beautified."  I want something I can print and hang on the wall as a 
> reference document for developers writing code and SQL.
> 
> Thanks,
> Craig
> 




[ADMIN] log files

2001-05-28 Thread Bob Himes

I've discovered this huge file, >16 MB, in the data/pg_xlog 
directory, but information about it has eluded me so far. 

What is it, and how can I manage it?  Can it be deleted, or is 
some tool necessary to deal with it?


-- 
Bob Himes
Pure Matrix, Inc.
303-245-1045ext. 105
http://www.purematrix.com




Re: [ADMIN] tuning SQL

2002-03-13 Thread Bob Hairgrove

On Tue, 29 Jan 2002 17:45:34 +0000 (UTC), [EMAIL PROTECTED] ("Zhang,
Anna") wrote:

>Thanks Peter Darley, Ross J. Reedstrom and Tom Lane!!
>How silly of me! Your messages reminded me. Actually I want to insert the
>rows of the contact_discard table which do not exist in the contact table
>into the contact table (there are some duplicates in the two tables), so
>first I ran
>
>insert into contact
>select * from contact_discard a
>where not exists ( select 1 from contact b where b.contacthandle =
>a.contacthandle);
>
>It seemed to take forever; I killed it after hours (note: the contact table
>has 4 indexes). Then I tried to figure out how many rows are not duplicated.
>Now my problem turns to insert performance; in Oracle it takes only a few
>minutes.
>
>Thanks!
>
>Anna Zhang
>

With millions of rows, you also might want to create a temporary
(real) table with just the select statement, then drop ALL the indexes
on contact, do the insert, then recreate the indexes.
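
Something along these lines (the index names and definitions here are 
hypothetical; use the actual four index definitions on contact):

CREATE TABLE contact_new AS
  SELECT * FROM contact_discard a
   WHERE NOT EXISTS (SELECT 1 FROM contact b
                      WHERE b.contacthandle = a.contacthandle);
DROP INDEX contact_idx1;           -- repeat for all four indexes on contact
INSERT INTO contact SELECT * FROM contact_new;
CREATE INDEX contact_idx1 ON contact (contacthandle);  -- recreate afterwards
DROP TABLE contact_new;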


Bob Hairgrove
[EMAIL PROTECTED]




[ADMIN] Errors on VACUUM

2002-03-15 Thread Bob Smith


I received the following error from a VACUUM ANALYZE:

NOTICE:  FlushRelationBuffers(place, 454): block 231 is referenced 
(private 0, global 4)
FATAL 1:  VACUUM (vc_repair_frag): FlushRelationBuffers returned -2

and psql lost the connection right after that.  This was repeatable: 
after the first occurrence I re-started the postmaster and tried again, 
same result.  The particular table in question is static data, that is, 
it was originally filled in once by COPY and there were no subsequent 
inserts or deletes.  But I had just added a column to the table and 
updated it from another table to eliminate a frequently-done join, that 
was the reason for doing the VACUUM.

Since I had the text file with the complete table contents from the 
original COPY, I decided to re-build the table, so next I did DROP 
TABLE, and I got this:

NOTICE:  Buffer Leak: [004] (freeNext=-3, freePrev=-3, 
relname=place_pid, blockNum=1, flags=0xc, refcount=2 -1)
NOTICE:  Buffer Leak: [005] (freeNext=-3, freePrev=-3, relname=place, 
blockNum=231, flags=0xc, refcount=4 -1)
NOTICE:  Buffer Leak: [008] (freeNext=-3, freePrev=-3, relname=place, 
blockNum=85, flags=0xc, refcount=3 -1)
NOTICE:  Buffer Leak: [011] (freeNext=-3, freePrev=-3, relname=place, 
blockNum=0, flags=0xc, refcount=2 -1)
.
.

repeating about 20 more times with blockNum varying, and finally ending 
with the table being dropped.  I then re-built the table from scratch 
and everything seems to be fine now, VACUUM no longer gives errors.

Here is my system configuration:

PostgreSQL 7.0.2
RedHat Linux 7.0
P-III 800, 768MB RAM, 80GB disk

So I'm wondering what happened here, what might have caused the original 
error, if something else is potentially still corrupted, and if I should 
maybe re-build the whole database to be safe?

Advice much appreciated!

Bob Smith
Hammett & Edison, Inc.
[EMAIL PROTECTED]





Re: [ADMIN] Errors on VACUUM

2002-03-15 Thread Bob Smith


On Friday, March 15, 2002, at 09:53 , Tom Lane wrote:

> Bob Smith <[EMAIL PROTECTED]> writes:
>> PostgreSQL 7.0.2
>
> Time to update ...

I know, I know...  But I'm about two weeks from bringing a whole new 
server on-line with the latest version of Postgres, so I don't want to 
hassle with upgrading the old one.  I just need to keep it limping along 
for a little while longer.

> I'm rather surprised that restarting the postmaster didn't make the
> error go away, but it's unlikely that anyone will care to investigate
> --- unless you can reproduce the problem in 7.1.3 or later.

So this isn't an error that would be caused by some sort of file 
problem?  The server got new hard drives a few months ago and files went 
through a backup/restore with tar, I didn't use pg_dump/pg_restore, so 
it occurred to me that something might have been scrambled.  But until 
this VACUUM failure, the only problem I've _ever_ had, before or after 
the drive upgrade, was one occurrence of the "Buffer Leak" error which I 
couldn't reproduce.

Well, I guess I just keep my fingers crossed for a couple of weeks until 
I have my new server up.

Thanks!

Bob Smith
Hammett & Edison, Inc.
[EMAIL PROTECTED]





Re: [ADMIN] Errors on VACUUM

2002-03-15 Thread Bob Smith


On Friday, March 15, 2002, at 03:07 , Tom Lane wrote:

> It's not clear to me why a restart (which would naturally clear shared
> memory) didn't fix the problem.

Er, um, (humble look), I just studied my syslog, and in fact the 
postmaster _didn't_ restart.  I tried to restart it using the Red Hat 
control script from /etc/rc.d/init.d,  but apparently that script lied 
to me when it said the restart succeeded.  The syslog clearly shows the 
postmaster didn't actually restart until about 10 minutes later, _after_ 
the second error.  So I was talking to the same postmaster every time.  
Arrggh.

I'm reassured now that this is just a passing memory management issue 
and not a symptom of a more serious problem, I'm sure the error wouldn't 
have happened the second time if the postmaster had in fact restarted.  
Everything looks fine now so I'm going to stop worrying about it.

Thanks much for your help!

Bob Smith
Hammett & Edison, Inc.
[EMAIL PROTECTED]





[ADMIN] news groups

2002-04-07 Thread Bob Hartung

Hi again,
   How can I log into news.postgresql.org to follow the messages in the 
newsgroups?  I am using Mozilla 0.9.8 and don't seem to be able to get it 
to see any of the lists.

Thanks,

Bob
-- 

Bob Hartung, Bettendorf, IA





[ADMIN] Meaning of message in logfile

2002-07-09 Thread Bob Smith


I'm running Postgres 7.2.1 on Mac OS X Server 10.1.5, and the logfile is 
getting quite large because it logs this:

bootstrap_look_up() failed (ipc/send) invalid destination port

about once a minute.  I have not changed the configuration file at all 
from the defaults.  The server seems to be working just fine.  Anyone 
know what this message means, or how to make it stop?

Bob Smith
Hammett & Edison, Inc.
[EMAIL PROTECTED]









Re: [ADMIN] Meaning of message in logfile

2002-07-09 Thread Bob Smith


The postmaster is being started "postmaster ... >> 
/usr/local/pgsql/logfile 2>&1", and the messages are appearing in 
/usr/local/pgsql/logfile.  They do not appear in the syslog.  I just 
assumed it was Postgres doing this, but it must be a message the OS is 
writing from the postmaster's process.  Sounds like I need the 
assistance of an OS X expert, I'll try some OS X related discussion 
groups.

Thanks for your help!

Bob Smith
Hammett & Edison, Inc.
[EMAIL PROTECTED]

On Tuesday, July 9, 2002, at 08:03 , Tom Lane wrote:

> Bob Smith <[EMAIL PROTECTED]> writes:
>> I'm running Postgres 7.2.1 on Mac OS X Server 10.1.5, and the logfile 
>> is
>> getting quite large because it logs this:
>
>> bootstrap_look_up() failed (ipc/send) invalid destination port
>
> Are you certain that message is coming from Postgres?  I've never seen
> it before, and there's no instance of the string "bootstrap_look_up"
> (nor obvious variants) anywhere in the Postgres sources.
>
>   regards, tom lane
>





[ADMIN] Transaction isolation and UNION queries

2003-02-27 Thread Bob Smith
I have a question about transaction isolation that I can't find an 
answer to in the docs.  I'm working with a database that has some data 
split over two tables.  One table is the ultimate destination for all 
the data, the other is a "pending" table which holds rows during data 
entry.  Rows from the pending table are moved to the permanent table 
once data entry is complete.  For some purposes I want to see rows from 
both tables, so I do a UNION.  My question is, with only read committed 
isolation, could a commit by another transaction make changes appear 
between the separate parts of the UNION query?  In other words, could a 
row appear to be missing or duplicated because a transaction that was 
moving the row from pending to permanent committed while the UNION was 
running?
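
The query in question looks roughly like this (table and column names 
simplified):

  SELECT key, data FROM permanent
  UNION
  SELECT key, data FROM pending;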

Thanks!

Bob Smith
Hammett & Edison, Inc.
[EMAIL PROTECTED]


[ADMIN] Performance question

2003-03-05 Thread Bob Smith
When I execute a query on a new connection, the performance is many 
times slower than if the query is repeated.  In other words, if I start 
psql, execute the query, then repeat it immediately, the second time it 
takes only about 20% as long to execute.  Now here's the confusing 
part: if I exit psql then start it up again, the same thing will occur 
on the new connection as well, the first execution takes 5x as long 
again.  I don't understand this, it would make sense to me that the 
second execution being faster is due to disk caching on the server, but 
then why is it slower again on every new connection?  Disk caching 
should benefit all current and new connections until the cache is 
flushed, which on this server shouldn't happen for a long time, the 
load is light and it has lots of RAM.  Is Postgres doing some kind of 
caching itself that lasts only for the life of one backend process?  If 
so, is there any way to make this caching persistent across backends?

Server particulars:

Postgres 7.2.1, Mac OS X Server 10.1.5, dual 1GHz CPUs, 1.5GB memory

Thanks!

Bob Smith
Hammett & Edison, Inc.
[EMAIL PROTECTED]


Followup Re: [ADMIN] Performance question

2003-03-05 Thread Bob Smith
OK, I seem to have fixed my own problem here again, sorry.  I restarted 
the postmaster, now _all_ queries are about 10x faster, and the first 
execution on a new connection is no longer significantly slower than 
the second.  The server (and the original postmaster) had been up for:

11:18AM  up 162 days, 17:48, 3 users, load averages: 0.57, 0.51, 0.51

Maybe I should have a cron script restart postmaster every now and 
then, like once a week?
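
If I do go that route, the crontab entry might look something like this (a 
rough sketch; the pg_ctl path and data directory are guesses for my setup):

  # restart the postmaster every Sunday at 03:00
  0 3 * * 0 /usr/local/pgsql/bin/pg_ctl restart -D /usr/local/pgsql/data -m fast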

Bob

On Wednesday, Mar 5, 2003, at 10:11 US/Pacific, Bob Smith wrote:

When I execute a query on a new connection, the performance is many 
times slower than if the query is repeated.  In other words, if I 
start psql, execute the query, then repeat it immediately, the second 
time it takes only about 20% as long to execute.  Now here's the 
confusing part, if I exit psql then start it up again, the same thing 
will occur on the new connection as well, the first execution takes 5x 
as long again.  I don't understand this, it would make sense to me 
that the second execution being faster is due to disk caching on the 
server, but then why is it slower again on every new connection?  Disk 
caching should benefit all current and new connections until the cache 
is flushed, which on this server shouldn't happen for a long time, the 
load is light and it has lots of RAM.  Is Postgres doing some kind of 
caching itself that lasts only for the life of one backend process?  
If so, is there any way to make this caching persistent across 
backends?

Server particulars:

Postgres 7.2.1, Mac OS X Server 10.1.5, dual 1GHz CPUs, 1.5GB memory

Thanks!

Bob Smith
Hammett & Edison, Inc.
[EMAIL PROTECTED]





[ADMIN] Restrict the number of emails to certain keywords with a filter?

2003-04-04 Thread Bob Wheldon
Hi all,

Can I use a filter to restrict the emails I receive to certain keywords?

I am getting too many irrelevant emails (but I do not want to stop all the 
emails from [ADMIN]!)

Regards,

Bob Wheldon
[EMAIL PROTECTED]
Tel.  +49 8444/7309




[ADMIN] which file of the RH9 jdbc provides Java2 functionality?

2003-08-01 Thread Bob Hartung
Hi all,
  I am a certified newbie to Java and JDBC, and have very little experience 
with PostgreSQL to date.  I am trying to install a DICOM server as a test 
for a medical office.  I have everything set except for the required JDBC 
driver for PostgreSQL.

  I have installed the RH9 rpm postgresql-jdbc-7.3 that contains three 
files:
 pg73bljdbc1.jar
 pg73bljdbc2.jar
 pg73bljdbc3.jar

  I have installed the IBM Java2 SDK.  Which of the jar files in the rpm 
postgresql package provides the Java2 level functionality?  Or are all 
three required?  The installation docs of the package, dcm3che, suggest 
only one file needs to be copied to one of its folders.

I have searched the IBM and Postgres sites and have been overwhelmed with 
the volume of information, but I can't find the specific piece of 
information that I need.

  Thanks for your help.

Bob Hartung





[ADMIN] Initdb problems on 6.4.2 install

1999-03-10 Thread Judy/Bob Dilworth

All:

I'm attempting to install postgresql 6.4.2 on my RedHat 5.1 system.  The
build and install all go very smoothly and generate no errors in the log
files.  When I attempt the initial initdb, however, I get the following
errors:

initdb: using /usr/local/pgsql/lib/local1_template1.bki.source as input 
to create the template database.
initdb: using /usr/local/pgsql/lib/global1.bki.source as input to create 
the global classes.
initdb: using /usr/local/pgsql/lib/pg_hba.conf.sample as the host-based 
authentication control file.
 
We are initializing the database system with username postgres 
(uid=100).
This user will own all the files and must also own the server process.
 
Creating Postgres database system directory /home/postgres/data
 
Creating Postgres database system directory /home/postgres/data/base
 
initdb: creating template database in /home/postgres/data/base/template1
Running: postgres -boot -C -F -D/home/postgres/data -Q template1
syntax error 2305 : parse error
Creating global classes in /base
Running: postgres -boot -C -F -D/home/postgres/data -Q template1
 
Adding template1 database to pg_database...
Running: postgres -boot -C -F -D/home/postgres/data -Q template1 <
/tmp/create.15280
ERROR:  pg_atoi: error in "template1": can't parse "template1"
ERROR:  pg_atoi: error in "template1": can't parse "template1"
initdb: could not log template database
initdb: cleaning up.

I looked through some of the postings archived on this problem but
didn't see any solutions.  Does anyone have any idea what might be going
on here?

Thanks in advance

Bob Dilworth
Toledo Ohio
[EMAIL PROTECTED] (home)
[EMAIL PROTECTED] (work)



[ADMIN] Insert NULL value with to_numeric()

2006-07-19 Thread Ghislain Bob Hachey
Hello all,

I'm a newbie.  PostgreSQL 8.1.4. Fedora Core 5.

I'm writing a small java application that will import CSV txt files into
my DB.  The SQL statement I'm sending to pgsql looks like this:

"INSERT into table (col1,col2,col3) values (to_number(?,
'999'),to_timestamp(?, 'MM/DD/ HH24:MI:SS'),to_number(?, '');

Values are taken from a String array.

Sometimes in col3 (a smallint in the pgsql DB) I need to insert a NULL value,
but when I do I get the following error message:

invalid input syntax for type numeric: " "

I've searched the archives and the Internet and found similar situations, but I
guess I'm not experienced enough to fix this myself with the info that I've
found.

Any help would be greatly appreciated.

Regards,

Ghislain Hachey

 




Re: [ADMIN] Insert NULL value with to_numeric()

2006-07-19 Thread Ghislain Bob Hachey

> 
> Why use to_number or to_timestamp?  I assume you are using setString
> in your prepared statement.  In your Java code you can use setNull if
> the value is null and setInt or setLong or setTimestamp if it is not.
> Then you don't need the to_number or to_timestamp. 

You're absolutely right.  I was trying to make it work with setString
first because I'm new to Java and it looked easier at first glance.  But I will
take your advice.

Thanks a lot for your time

Ghislain Hachey
> 




[ADMIN] Odd behavior with NULL value

2001-12-20 Thread Bob Smith, Hammett & Edison, Inc.

See the example below of using NULL values with type DATE.  It behaves
strangely in expressions: "(x <> null)" gives an entirely different
result than "not(x = null)".  Is this intended behavior, and if so, why?
If not, is this a bug?

On a related note, does anyone know if 'current' works with DATE?  With
TIMESTAMP it always evaluates to the time as of the retrieval of the
value, not as of the insertion, but for DATE it looks like it gets
evaluated at insertion (I can't tell for sure from my test db for another
8 hours or so, I could just roll the date forward on the server, but I'd
like to keep my job...)

Thanks to anyone who can shed some light on this!

rsj=> \d test
  Table "test"
 Attribute |  Type   | Modifier 
---+-+--
 key   | integer | 
 date  | date| 

rsj=> select * from test;
 key |date
-+
   1 | 2001-12-20
   2 | 2001-12-20
   3 | 
(3 rows)

rsj=> select * from test where date = null;
 key | date 
-+--
   3 | 
(1 row)

rsj=> select * from test where date <> null;
 key | date 
-+--
(0 rows)

rsj=> select * from test where not(date = null);
 key |date
-+
   1 | 2001-12-20
   2 | 2001-12-20
(2 rows)

rsj=> 


   |\  _,,,---,,_Bob Smith
   /,`.-'`'-.  ;-;;,_Hammett & Edison, Inc.
  |,4-  ) )-,_. ,\ (  `'-'   [EMAIL PROTECTED]
 '---''(_/--'  `-'\_)   




Re: [ADMIN] Odd behavior with NULL value

2001-12-20 Thread Bob Smith, Hammett & Edison, Inc.


On Thu, Dec 20, 2001, 18:55:18 Tom Lane wrote:

>[EMAIL PROTECTED] (Bob Smith, Hammett & Edison, Inc.) writes:
>> On a related note, does anyone know if 'current' works with DATE?
>
>DATE does not have an internal representation of 'current', so the DATE
>input parser treats it the same as 'now'.  AFAIR only timestamp (nee
>datetime) has that concept.
>
>FYI, the concept of 'current' has been deemed broken and removed
>altogether for 7.2.  See discussions in the pgsql-hackers archives
>if you want to know why.
>
>   regards, tom lane

Here is the problem I'm trying to solve, perhaps someone can help.  For an
invoicing system database, I have a table that defines employees.  Each has a
begin and end date defining the employment period.  For current employees, the
end date is "open".  How do I express that to keep queries as simple as
possible?  The three choices I came up with are:

  (1)  Use end_date = 'current'

  (2)  Store end_date as TEXT and cast it to DATE in expressions, so it can
   contain the text constant 'today' for current employees

  (3)  Use end_date = NULL and have an extra expression in the queries

Because 'current' doesn't work for DATE types, (1) is a bust.  (2) and (3)
both work, but I'm not sure which is better from a performance point of view.
 For example, if I want all the employees that are current as of a particular
date, for (2) it would be:

  SELECT * FROM employee WHERE (foo >= employee.start_date) AND
   (foo <= employee.end_date::DATE)

and for (3) it would be:

  SELECT... WHERE (foo >= employee.start_date) AND
  ((foo <= employee.end_date) OR (employee.end_date IS NULL))

(Thanks to all who posted with explanations of why (x IS NULL) should be used
instead of (x = NULL)).  The cast has a performance penalty, but then so does
using OR, especially in a join.  Which would be worse?

I just noticed that (3) does have one advantage over (2); if the system allows
end_date to be set into the future, (3) works for dates in the future, but (2)
does not.  But that isn't one of my requirements so it isn't a deciding
factor.

Any opinions on which approach is better, or does anyone see a fourth
alternative?

Thanks!


   |\  _,,,---,,_Bob Smith
   /,`.-'`'-.  ;-;;,_Hammett & Edison, Inc.
  |,4-  ) )-,_. ,\ (  `'-'   [EMAIL PROTECTED]
 '---''(_/--'  `-'\_)   




[ADMIN] VACUUM question, OID range

2001-12-31 Thread Bob Smith, Hammett & Edison, Inc.


I have a db which has many tables that are mirrors of data from an
outside source.  I download text export files and do weekly updates
of the entire tables using TRUNCATE and COPY.  Is there any benefit
to a VACUUM ANALYZE immediately afterwards?  There are a number of
indices on these tables.  Do I ever benefit from VACUUM at all,
considering that the server never inserts/deletes rows from these
tables except during the update, which replaces every row?
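
For reference, each weekly update is essentially just the following (the 
table name and file path are placeholders):

  TRUNCATE TABLE mirror_table;
  COPY mirror_table FROM '/path/to/export.txt';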

Also, what happens if the OID value "rolls over"?  Each of these
updates consumes about 3,000,000 OIDs, which is not a problem for
weekly updates, but I've been asked to consider doing daily updates.
Assuming the OID range is 2^31, with daily updates the OID "rolls
over" in less than 2 years (maybe sooner, as there are several other
databases on the same server as well).  Nothing to panic about,
true, but I'd still like to know if I might eventually have to re-
build the whole site due to running out of OIDs.

The server is a P3-800, 768MB RAM, 80GB disk, running Red Hat 7.0
and PostgreSQL 7.0.2.

Thanks!

Bob Smith
Hammett & Edison, Inc.
[EMAIL PROTECTED]
