Re: [ADMIN] large database: problems with pg_dump and pg_restore

2010-10-26 Thread mark
A long time ago (8.1.11, IIRC) we got much better speed by skipping pg_dump's 
compression flag and instead piping the output to gzip (or better yet something 
like pbzip2 or pigz, though I haven't used those).


I think there was a thread about this that had a test case and numbers. 


IIRC it's because the compression flag will further bottleneck a single core. 
With a pipe, the compression can be done on another core (or several cores with 
pbzip2 or pigz) and throughput will be faster. 
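
For example (a sketch; pigz's -p flag sets the number of compression threads):

  pg_dump dbname | pigz -p4 > dbname.sql.gz

The same idea works for the custom format if you turn off its internal
compression (pg_dump -Fc -Z0) and compress the stream externally.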




On the restore side, hopefully people are now able to use parallel restore to 
speed things up when reloading. 
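
For reference, parallel restore (pg_restore 8.4 and later) looks roughly like
this; it needs a custom-format archive in a seekable file, not a pipe:

  pg_restore -j 4 -d dbname dbname.dump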


Just my thoughts, 



~mark


-Original Message-
From: pgsql-admin-ow...@postgresql.org 
[mailto:pgsql-admin-ow...@postgresql.org] On Behalf Of Jehan-Guillaume (ioguix) 
de Rorthais
Sent: Tuesday, October 26, 2010 4:22 PM
To: Martin Povolny
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] large database: problems with pg_dump and pg_restore


Or even compress AND split it!
  pg_dump -Fc dbname | split -b 1G - dump_dbname

and restore:
  cat dump_dbname* | pg_restore -d dbname

or
  cat dump_dbname* | pg_restore | psql dbname

On 26/10/2010 23:51, Samuel Stearns wrote:
> You can also try piping the dump through gzip and then restoring using cat:
> 
> pg_dumpall | gzip > db.out-`date +\%Y\%m\%d\%H`.gz
> 
> cat db.out-`date +\%Y\%m\%d\%H`.gz | gunzip | psql template1
> 
> Sam
> 
> From: pgsql-admin-ow...@postgresql.org
> [mailto:pgsql-admin-ow...@postgresql.org] On Behalf Of Martin Povolny
> Sent: Tuesday, 26 October 2010 10:12 PM
> To: pgsql-admin@postgresql.org
> Subject: [ADMIN] large database: problems with pg_dump and pg_restore
> 
>  
> 
> Hello, 
> 
>  
> 
> I have some quite serious problems with dumping and restoring large
> databases (>4GB of dump).
> 
> I had 5 databases; 4 dumped OK, but the 5th, the largest, failed: I
> was unable to make a dump in the 'tar' format. I got this message:
> 
>  
> 
> pg_dump: [tar archiver] archive member too large for tar format
> 
>  
> 
> I got over this issue by using the 'custom' format.
> 
>  
> 
> Unfortunately, later on I was only able to restore 3 of the 5 databases
> -- either of the 2 dumps that would exceed 4GB in the 'tar' format would
> fail.
> 
>  
> 
> /var/tmp# ls -l dumps/
> 
> total 16294020
> 
> -rw-r--r-- 1 root root  742611968 2010-10-16 20:36 archiv1.dump
> 
> -rw-r--r-- 1 root root  317352448 2010-10-16 20:37 archiv2.dump
> 
> -rw-r--r-- 1 root root 1137477632 2010-10-16 20:41 archiv3.dump
> 
> -rw-r--r-- 1 root root 3712833536 2010-10-16 20:57 archiv4.dump
> 
> -rw-r--r-- 1 root root 8735429632 2010-10-16 21:35 archiv5.dump
> 
> -rw-r--r-- 1 root root 1253325453 2010-10-16 22:59 bb.dump
> 
>  
> 
> archiv1-4 went OK, archiv5 -- the largest in 'tar' format -- failed and
> the 'bb.dump' which is in the 'custom' format, failed too.
> 
>  
> 
> I got these messages:
> 
>  
> 
> for the archiv5 in the 'tar' format: 
> 
>  
> 
> pg_restore: [tar archivář] nalezena poškozená tar hlavička v STEX
> (předpokládáno 100, vypočteno 34044) pozice souboru 7750193152
> 
>  
> 
> sorry, it's in my native locale, but it says "found a corrupted tar
> header in STEX (expected 100, calculated 34044) file position 7750193152"
> 
>  
> 
> for the bb.dump in the 'custom' format:
> 
>  
> 
> pg_restore: [vlastní archivář] unexpected end of file
> 
>  
> 
> 'vlastní archivář' is again in my locale; in English it would be "custom
> archiver"
> 
>  
> 
> Later I tried to use the -I and -i switches of pg_restore to restore the
> data that sit in the archive behind the table that failed, but I got the
> same error message.
> 
>  
> 
> The dump was created on postgresql-8.3 8.3.3-1~bpo40+1 from debian
> backports. I was trying to restore on this version, later on
> postgresql-8.4 8.4.5-1~bpo50+1 from debian backports, and finally on the
> 64-bit version of postgresql-8.4 8.4.5-0ubuntu10.10. No change, still the
> same error messages.
> 
>  
> 
> I welcome any help and/or hints on this issue as I need to dump and
> restore several large databases.
> 
>  
> 
> Regards,
> 
> 
> --
> Mgr. Martin Povolný, soLNet, s.r.o.,
> +42014458, martin.povo...@solnet.cz
> 




[ADMIN] locales and encodings Oh MY!

2010-11-11 Thread mark
Hi all,

So today, while investigating some slow queries, I found out that some time
ago some of our databases (innocently, I suspect, due to upgrades or
hardware migrations) stopped using C as lc_ctype -- and why some 'things'
got a lot slower...

To correct this problem it seems like I am going to have to do a dump,
initdb, then restore (or add some new indexes and eat the extra overhead) --
unless someone knows something I don't about changing ctype and collation
with some sleight of hand (if you do, I am very interested).


I have listed what I think I will be doing with regards to initdb. If anyone
sees problems with the following mixture during my dump -> init -> restore, I
would be most keen to hear about it.



initdb /path/to/data/dir --lc_ctype=C --lc_collation=C
--lc_message=en_US.UTF8 --lc_monetary=en_US.UTF8 --lc_numeric=en_US.UTF8
--lc_time=en_US.UTF8 -E UTF8




Thank you,

Mark




Re: [ADMIN] locales and encodings Oh MY!

2010-11-12 Thread mark
On Fri, Nov 12, 2010 at 12:45 AM, Gabriele Bartolini
 wrote:
> Hi Mark,
>
> On 12/11/10 03:31, mark wrote:
>>
>> I have listed what I think I will be doing with regards to initdb. if
>> anyone
>> sees problems with the following mixture during my dump ->  init->
>>  restore I
>> would be most keen in hearing about it.
>
> FYI, PostgreSQL 8.4 introduced database level collation. Before that, yes,
> you need to re-init your data dir.

Guess I missed that. I have 8.3.X boxes in production and 9.0.1 boxes
in dev, so I guess only some of them will require a re-init.

>>
>> initdb /path/to/data/dir --lc_ctype=C --lc_collation=C
>> --lc_message=en_US.UTF8 --lc_monetary=en_US.UTF8 --lc_numeric=en_US.UTF8
>> --lc_time=en_US.UTF8 -E UTF8
>
> Maybe you meant --lc_collate ?
>

Yes I did mean lc_collate - thanks
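
For the archives, the corrected invocation would look roughly like this
(note that initdb spells these options with hyphens):

  initdb -D /path/to/data/dir -E UTF8 --lc-collate=C --lc-ctype=C \
    --lc-messages=en_US.UTF8 --lc-monetary=en_US.UTF8 \
    --lc-numeric=en_US.UTF8 --lc-time=en_US.UTF8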

> Cheers,
> Gabriele
>

Given how similar straight C and en_US.UTF8 are, it was suggested to me,
by persons in my office who are far more C-knowledgeable than I am, that
this is something the PG community could "fix". A "fix" being that
"col LIKE 'foo%' " could use btree indexes in locales like en_US.UTF8
(and probably some others).


@hackers ->
Is the request unreasonable? Anyone got any idea of the price tag to
make that happen?



> --
>  Gabriele Bartolini - 2ndQuadrant Italia
>  PostgreSQL Training, Services and Support
>  gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it
>
>



Re: [HACKERS] [ADMIN] locales and encodings Oh MY!

2010-11-12 Thread mark
On Fri, Nov 12, 2010 at 12:03 PM, Tom Lane  wrote:
> Robert Haas  writes:
>>> With how similar straight C and en_US.UTF8 are it was suggested to me,
>>> by persons who are far more C knowledgeable then I in my office, that
>>> this is something the PG community could "fix" . A "fix" being so that
>>> "col LIKE 'foo%' " could use btree indexes in locales like en_US.UTF8
>>> (and probably some others).
>
>>> is the request unreasonable ? anyone got any idea of the price tag to
>>> make that happen ?
>
>> I thought it already did that.
>
> No, and the odds of it ever happening are insignificant.  The sort order
> associated with en_US (and other "dictionary order" locales) is just too
> randomly different from what you need to optimize a LIKE search.
> (Whoever told you en_US sorts similarly to C is nuts.)
>
> The solution if you want the database's prevailing sort order to be en_US
> is to put an extra text_pattern_ops index on the column you want to do
> LIKE searches on.  We might eventually have the ability to spell that
> "put a C-locale index on the column", but text_pattern_ops is the way to
> do it today.
>
>                        regards, tom lane
>
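
For reference, the text_pattern_ops index Tom describes would be created
like this (table and column names are illustrative):

  CREATE INDEX my_table_col_like_idx ON my_table (col text_pattern_ops);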

Ok, I hear you loud and clear. I am going to eat the overhead until
I get to 9.0.1; currently on 8.3.X in some places.

I will either take an outage and do a dump / re-init / restore, or an
in-place upgrade and then do some locking, copy, drop old, rename new
db path.

thanks all.


..: Mark



[ADMIN] FW: PG 9 OpenLDAP 2.4 dependency ? (in a jam)

2011-02-11 Thread mark

Is this true? I am having trouble finding the documentation on this.

Thanks in advance to anyone with better google-fu than me who can find it --
and maybe explain why?

I have an LDAP master running 2.3.(something) on a box and it is looking
like getting that to 2.4 is a no-go. 

Postgres does not auth with that LDAP instance. 


Thanks 

Mark







Re: [ADMIN] how to tell if a replication server has stopped replicating

2011-08-26 Thread mark


> -Original Message-
> From: pgsql-admin-ow...@postgresql.org [mailto:pgsql-admin-
> ow...@postgresql.org] On Behalf Of Bill MacArthur
> Sent: Friday, August 26, 2011 10:21 AM
> To: pgsql-admin@postgresql.org
> Subject: [ADMIN] how to tell if a replication server has stopped
> replicating
> 
> Hello,
> 
> We recently discovered, quite by accident, that our streaming
> replication server was no longer replicating. We noticed this in our
> master server log file:
> 2011-08-26 00:00:05 PDT postgres 192.168.17.4 [unknown]LOG:
> replication connection authorized: user=postgres host=192.168.17.4
> port=53542
> 2011-08-26 00:00:05 PDT postgres 192.168.17.4 [unknown]FATAL:
> requested WAL segment 000101D1006B has already been removed
> 
> As it turned out, this has been going on for at least a week, as
> every day's log files were crammed with these messages. Whatever caused
> the replication server to end up needing the WAL file is a mystery for
> another day. What I would like to do is setup a simple method of
> alerting us if replication stops. We could do a simple grep of log
> files on the replication side, but I am guessing that there is some SQL
> command that could be run against the postgres internals that would be
> cleaner. Is there such an animal?
> 
> Thank you,
> Bill MacArthur
> 


* http://archives.postgresql.org/pgsql-hackers/2010-11/msg00198.php

* http://archives.postgresql.org/pgsql-hackers/2010-11/msg00252.php 


Those two posts should cover the basics. There are other ways some people use 
to do it, but this seems to be the generally accepted way. 
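
The short version of what those posts describe (9.0 function names; a
sketch) is to compare WAL locations on both machines:

  -- on the master
  SELECT pg_current_xlog_location();

  -- on the standby
  SELECT pg_last_xlog_receive_location(), pg_last_xlog_replay_location();

If the standby's locations stop advancing toward the master's, replication
has stalled.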

I think 9.1 has some stuff in the works that should make it far easier to 
monitor. 

-Mark

 




[ADMIN] timestamp in 7.2

2002-03-27 Thread mark

Hello all.

I've migrated my data to 7.2.
Now I'm testing applications, but it looks like the timestamp function in 
postgres is broken:
klienti=# select date(now());
date

 2002-03-27
(1 row)

klienti=# select timestamp(date(now()));
ERROR:  parser: parse error at or near "date"

I need to make queries with timestamp(date, time),
but timestamp doesn't work even with the sample from the manual:

klienti=# select timestamp(date '1998-02-24',time '23:07');
ERROR:  parser: parse error at or near "date"

Mark





[ADMIN] oids

2003-09-02 Thread Mark
Hello, All!

Are there other meanings of oids besides unique row identifier?

Can I make two identical tables in the same database (even with oids)?
Something like this
CREATE TABLE table2 AS (SELECT oid,* FROM table1);

Or oids must be unique in whole database?

Mark



[ADMIN] Upgrade

2003-09-04 Thread Mark
Hello, All!

I'm planning to upgrade from 7.2.1 to 7.3.4.
Will pg_dumpall and pg_restore be sufficient for proper data migration?

Mark




[ADMIN] Installation failure Solaris2.6/Sparc

1999-02-04 Thread Mark Linvill

Greetings,

Looks like you're using the lex that came with Solaris.  You need to
get flex.  IIRC, you can get it from ftp.gnu.org.  

I just installed on Solaris/Sparc for the first time ~2 weeks ago, so
it's still reasonably fresh in my mind.  I had to diddle several
options in the makefile to get it to work right.

You can email me if you have other problems.

    -Mark


>
>Date: Wed, 3 Feb 1999 11:55:09 -0700 (MST)
>From: "Marc Wrubleski (403)220-7598" <[EMAIL PROTECTED]>
>Subject: Installation failure Solaris2.6/Sparc
>
>I know this has been posted before, but I cannot find a solution in the 
>archives. Please post if you know the answer to this.
>
>At the stage gmake all I get the following prior to failure:
>
>gmake[3]: Entering directory 
>`/home/postgres/postgresql-6.4.2/src/interfaces/ecpg/preproc'
>/usr/ccs/bin/yacc -d preproc.y
>"preproc.y", line 576: warning: redeclaration of precedence of UNLISTEN.
>gmake[3]: *** Warning: File `y.tab.h' has modification time in the future 
>(918065629 > 918065615)
>gcc -I../../../include -I../../../backend -Wall -Wmissing-prototypes 
>- -I../include -DMAJOR_VERSION=2 -DMINOR_VERSION=4 -DPATCHLEVEL=4 
>- -DINCLUDE_PATH=\"/opt/postgres/include\"-c y.tab.c -o y.tab.o
>/usr/ccs/bin/yaccpar: In function `yyparse':
>/usr/ccs/bin/yaccpar:274: warning: implicit declaration of function `yylex'
>/usr/ccs/bin/yaccpar:374: warning: label `yyerrlab' defined but not used
>/usr/ccs/bin/yaccpar:164: warning: label `yynewstate' defined but not used
>lex pgc.l
>"pgc.l":line 53: Error: missing translation value
>gmake[3]: *** [pgc.c] Error 1
>gmake[3]: Leaving directory 
>`/home/postgres/postgresql-6.4.2/src/interfaces/ecpg/preproc'
>gmake[2]: *** [all] Error 2
>gmake[2]: Leaving directory 
>`/home/postgres/postgresql-6.4.2/src/interfaces/ecpg'
>gmake[1]: *** [all] Error 2
>gmake[1]: Leaving directory `/home/postgres/postgresql-6.4.2/src/interfaces'
>gmake: *** [all] Error 2
>
>This is with gcc-2.8.1 and also with SUN C Compiler.
>
>Marc Wrubleski 
>Math & Stats   
>University of Calgary  
>




[ADMIN] Re: HELP

1999-03-04 Thread Mark Linvill

Greetings,

Well, you should take the time to find the lists on www.postgresql.org
as I don't represent the PostgreSQL project in any way and can't offer
the same level of support as the lists...

Having said that, 

Try adding /usr/local/pgsql/lib to the runtime linker's library path
environment variable -- IIRC it's called LD_LIBRARY_PATH. Just append that
dir to it...
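
Something like this, assuming a Bourne-style shell:

  LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/pgsql/lib
  export LD_LIBRARY_PATH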

-Mark

Mark Linvill
Programmer 
Winstar Broadband Services


On Thu, 4 Mar 1999, Daniel Herssein wrote:

>Hi,
>I cannot find a user newsgroup on pgsql, but I saw your name on a reply
>for help...
>I hope you don't mind my taking the liberty of asking directly.
>
>I am trying to install from a binary distribution, and it all goes well
>until the
>last step...(per README.Solaris instructions)
>I did the earlier steps in order indicated. Then:
>
>$ csh
>namsb% set path = ( $path /usr/local/pgsql/bin )
>namsb% initdb --pglib=/usr/local/pgsql/lib
>--pgdata=/usr/local/pgsql/data -d
>Running with debug mode on.
>initdb: using /usr/local/pgsql/lib/local1_template1.bki.source as input
>to create the template database.
>initdb: using /usr/local/pgsql/lib/global1.bki.source as input to create
>the global classes.
>initdb: using /usr/local/pgsql/lib/pg_hba.conf.sample as the host-based
>authentication control file.
>
>ld.so.1: pg_id: fatal: libgen.so.1: open failed: No such file or
>directory
>Unable to determine a valid username.  If you are running
>initdb without an explicit username specified, then there
>may be a problem with finding the Postgres shared library
>and/or the pg_id utility.
>
>FYI...I did an "env" and here is what is shows. What am I missing?
>namsb% env
>HOME=/usr/local/pgsql
>HZ=100
>LOGNAME=pgsql
>PATH=/usr/bin:.:/usr/local/pgsql/bin
>SHELL=/bin/sh
>TERM=vt100
>TZ=US/Eastern
>PWD=/usr/local/pgsql
>USER=pgsql
>namsb% 
>
>Thanks for your help!
>-- 
>Life is complicated. But the simpler alternatives are not very
>desirable. (R' A. Kahn)
>




[ADMIN] RE: [HELP] Install Problem 6.4.2 on Solaris 2.6

1999-03-09 Thread Mark Linvill


Are you using gnu flex?  From your output it kinda looks like you're
using the lex that comes with solaris.  It won't work with Postgres.
See the FAQ for the correct version of flex...

    -Mark


>--
>
>Date: Tue, 09 Mar 1999 00:14:24 -0500
>From: Daniel Herssein <[EMAIL PROTECTED]>
>Subject: [HELP] Install Problem 6.4.2 on Solaris 2.6
>
>Using a vanilla "./configure --without-CXX "
>All is well...then a 
>gmake all 
>Produces this error...
>cc -I../../../include -I../../../backend -I../include
>- -DMAJOR_VERSION=2 -DMINOR_VERSION=4 -DPATCHLEVEL=4
>- -DINCLUDE_PATH=\"/usr/local/pgsql/include\"-c y.tab.c -o y.tab.o
>lex pgc.l
>"pgc.l":line 53: Error: missing translation value
>
>Running on SunOS  5.6 Generic_105181-04 sun4u sparc SUNW,Ultra-4
>Any ideas?
>
>- -- 
>Life is complicated. But the simpler alternatives are not very
>desirable. (R' A. Kahn)
>




Re: [ADMIN] illegal page faults. . .

1999-05-28 Thread Mark Hayden

This is just a guess, but could the problem have to do with those huge
varchar fields in your tables? I had problems using varchar over 255
characters from Windoze ODBC clients quite a long time ago (I was using
PostgreSQL 6.0, and later 6.2, with quite an early version of PostODBC that
still had a lot of bugs). The tables would only cause problems with the
Windoze ODBC clients--the Web-based application and Linux shell operations
were fine and the database integrity was not affected.

I ended up solving the problem some time later. First I upgraded the ODBC
driver (it was changing almost daily at the time, it seemed), then I changed
the large varchar fields to the 'text' type. When I linked the tables, the
'text' type in PostgreSQL translated to the MS Access 'memo' type. That
seemed to remedy my problem. Although I don't believe the 'text' data type
is standard SQL, it seemed to work better for large fields and I had better
luck using it with PostgreSQL's built-in string functions.

I hope this helps...

Mark.

JT Kirkpatrick wrote:

> i got no response on the interfaces list -- maybe someone here has
> experienced this too??
>
> (msaccess97 / win98, 6.40.0005 odbc driver, postgres 6.4.2, linux mandrake
> 5.3) i wrote earlier about getting illegal page faults every time i CLOSE a
> certain form (it lets me use it just fine!) that is used to ADD RECORDS
>

[...]

> of access!  i figure that is because of the size of the table.  i have
> cache set to 100, maxvarchar = 550, maxlongvarchar=15000.  in the table i
> have one field varchar(300) and another varchar(550).  a few "checks"
> (state char(2) not null check (state<>' ')). -- ok that's all i know that
> would be suspicious.  but the edit forms work fine.  i also have several
> other add-new-record forms set up in the same way that use other tables --
> they work fine.  any ideas???

[...]

>
>
> jt




[ADMIN] Re: vacum, locks, indicies and whatnot

1999-06-29 Thread Mark Dalphin

On Monday,  Jun 28, Todd R. Eisenschink wrote:

... a program called AutoVac which loops over all the databases controlled by
Postgresql, drops each index, runs VACUUM ANALYZE on each table and then
re-creates the indexes. An interesting mixture of shell, Perl and psql (no
complaint, mind you; it is more than I have done) about which he asked for
comments.

I am very pleased to see the program written and think it may be useful for
others. I wonder if it might be written "more elegantly", somehow accessing the
Postgresql meta-data directly, but that is by way of musing rather than
criticism, for I don't know how to do it.

My comment on the code is:

As I read the Perl regular expressions, I don't see provision made for:
"CREATE UNIQUE INDEX"

Todd uses:
if( /CREATE\s+INDEX ...) to match the index lines

Shouldn't it be:
if(/CREATE.+INDEX ...) ?

Thanks for the code and ideas, Todd; I have kept a copy for my files.

Mark

--
Mark Dalphin  email: [EMAIL PROTECTED]
Mail Stop: 29-2-A phone: +1-805-447-4951 (work)
One Amgen Center Drive   +1-805-375-0680 (home)
Thousand Oaks, CA 91320 fax: +1-805-499-9955 (work)






Re: Re: [ADMIN] How to set up users etc for Web access?

1999-08-04 Thread Mark Linvill

On Tue, 3 Aug 1999, pgsql-admin-digest wrote:

>
>Date: Tue, 03 Aug 1999 19:08:43 -0400
>From: Lamar Owen <[EMAIL PROTECTED]>
>Subject: Re: [ADMIN] How to set up users etc for Web access?
>
>> Gary Robertson wrote:
>[snip]
>
>> We are setting up a Web site using pgsql. I am unsure of which users I
>> should set up, & who should own the html & php3 files in our directory
>> etc.
>
>[snip]
>
>While I use AOLserver and not apache+php3, the concepts are similar:
>
>1.)The httpd run user (which should not be root, but some unprivileged
>account, such as 'webserver' (in my case, it's actually 'aolserver'))
>should own all html and php files necessary, with perms of 600 or 700.
>

No way I would ever(!) do this.  If your http daemon can modify
the files it should only be serving, and the daemon is "php aware"...
*shudder*

Here's an example situation that I consider reasonable:

The webserver runs as its own uid, say "nobody".

Create some other unix account to admin the interface files, say "www"
or "webmaster".

"www" owns, and is the only user able to edit, the files in the
interface. The files are at least readable by user "nobody".

Same thing with the database: you create the schema with one user, say
"pgsql", then you grant the least privileges necessary to the user
that the webserver is running as.
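
A sketch of that last step, run as the schema owner (the table name is
illustrative):

  GRANT SELECT, INSERT ON orders TO nobody;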

Of course this assumes you have strict control over who is putting
databases and php interfaces on your box; a way around that would be
to run a separate webserver under a different uid for your database
application.

My $.02.

-Mark


>2.)You should run a 'createuser ${HTTPD_RUN_USER}', making sure NOT to
>give that user an special privileges (such as create user, create table,
>or superuser). Of course, substitute the actual httpd run user up
>there...
>
>3.)You may or may not want the HTTPD_RUN_USER to actually own the
>tables in question -- however, you will have to GRANT the perms desired.
>
>In the case of AOLserver, the connections from the aolserver process
>(nsd) are pooled and are made with the userid of 'aolserver'.  For
>AOLserver, all files the nsd process (singular, since AOLserver is
>multithreaded) accesses are owned by 'aolserver' and chmod'd 600. 
>AOLserver uses a tcl API -- those files (*.tcl and *.adp) are also
>chmod'd 600, as AOLserver does its own interpreting -- php may need
>execute permission; I don't know.
>
>HTH
>
>Lamar Owen
>WGCR Internet Radio
>





Re: [ADMIN] RE: Security Question

1999-12-22 Thread Mark Jewiss

Hello,

On Tue, 21 Dec 1999, Tim Perdue wrote:

> There's got to be a way to create multiple
> databases and have each one protected by a
> different username/password. Right now, any
> authorized user can connect to any database. Grant
> only works on tables, which is not useful in this
> case (grant against a DB would be useful).

One way to achieve this that I'm playing with at the moment is to edit the
pg_hba.conf file with entries like this:

host  dbname  IP_ADDRESS  NETMASK  password  password.dbname

Each password file contains the username and encrypted password of the
user(s) that you want to be able to connect to the database - I'd include
an admin account of your own to be on the safe side.

Presumably you want users to be able to grant other users access to their
database if they want to. Give them a web interface that allows them to
add and remove users from the password file (but not the admin user of
yours). Look at HTTPD-User-Manage to start with.

This allows you to store the password files in the users home directory,
with UNIX permissions so that they can only read or write to the file
through the web interface.

Of course this means editing the pg_hba.conf file every time a new
database is created, but you can get around that with a couple of scripts
and a trigger.

Hope this helps, any queries about the theory of what I've said, just ask.

Regards,

Mark.
-- 
Mark Jewiss
Knowledge Matters Limited
http://www.knowledge.com






RE: [ADMIN] RE: Security Question

1999-12-22 Thread Mark Jewiss

Hello,

On Wed, 22 Dec 1999, Nicolas Huillard wrote:

> The problem of the pg_passwd command (used to feed the password.dbname
> files) is that it only stores 8 characters from the username provided,
> even if you typed more, and even if the Postgres username has more
> characters. I don't know if one can add the missing characters or
> create the file with another tool (htpasswd from Apache). The problem
> is that you can't log in using the 8 chars username, nor the full
> username stored in the pg_shadow table.

Yeah, the tools provided to do what I've suggested are not ideal, but
there are ways around it. To start with, you could simply restrict
usernames to 8 characters.

You can make up password files using other tools - the encrypt command in
OpenBSD works for sure.

Regards,

Mark.
-- 
Mark Jewiss
Knowledge Matters Limited
http://www.knowledge.com






[ADMIN] Re: Perl Script and Postgres DB running via Apache Web Server

2000-02-29 Thread Mark Dalphin

Hi Carla and Chris,

> I have the following Perl Script (with Postgres code):
>
> [ stuff deleted]

> This Script will run from the command line, and inserts values into the
> existing DB and table. When it is run via the web browser, nothing happens
> (browser hangs). The script is in the /usr/local/httpd/cgi-bin directory,
> owned by a user (not root), and has 755 permissions. We have exhaustively
> searched perl, cgi, postgres, and apache documentation both printed and
> web-based, and cannot determine how to correct this problem. We suspect that
> it may be a permissions / execution rights for the httpd, but we are stumped.
> Any help will be greatly appreciated. Thanks, -Carla and Chris :)

I suspect the problem is not "who owns the script", but rather "who runs the
script".  Usually your httpd runs as user "nobody", who also happens to have
very few permissions for anything on your system.  Meanwhile, your Postgres
database table was created by you and probably doesn't include permissions for
user "nobody" to even "SELECT" the table, much less modify it.

Guessing here, but you probably don't have a Postgres user named "nobody" (see
"createuser") nor have you granted permissions on any tables to user "nobody"
(see the SQL command "grant").  As user "nobody" usually has no login account
for security reasons, you may wish to experiment with permissions by using two
standard login accounts to see who can read what and when.
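
A sketch of those two steps (the database and table names, "mydb" and
"mytable", are illustrative):

  $ createuser nobody
  $ psql mydb -c "GRANT SELECT, INSERT ON mytable TO nobody;"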

For what it is worth, I often find with this kind of problem (CGI scripts that
mis-behave especially when interacting with a database) that changing my httpd
logging "up" from a level of "warn" to "debug" and then reading both the httpd
log and the postgresql log file helps greatly in tracking down the problem.

Hope this helps,
Mark

--
Mark Dalphin  email: [EMAIL PROTECTED]
Mail Stop: 29-2-A phone: +1-805-447-4951 (work)
One Amgen Center Drive   +1-805-375-0680 (home)
Thousand Oaks, CA 91320 fax: +1-805-499-9955 (work)








[ADMIN] Feedback on auto-pruning approach

2006-02-14 Thread Mark Liberman
Hi,

We have a system built on PG 8.1 that is constantly running and adding data to a bunch of tables, and can grow by over a gig a day.  We deliver this as a packaged solution on a server that runs unattended, so we need to make sure we delete some of the data periodically (we have auto-vacuum running to minimize the amount of obsolete tuples).  What I am working on is a solution that deletes the appropriate amount of historical data at any time to keep the free disk space above a certain threshold.  This will let us maximize the length of historical data we can keep without running out of disk space.

So, I have built a function that returns an approximation of the space taken up by the non-obsolete tuples, I call this used_size, and the physical disk space taken up by the data files - I call this disk_size.

The following sql query is what I use to return these values:

SELECT sum((bytes_per_row + row_overhead_bytes) * tuples) as used_size,
       sum(pages)::float * 8192                           as disk_size,
       sum(pages)                                         as total_pages
  from (
        select c.relname,
               sum(case when a.attlen = -1 then _var_col_bytes
                        else a.attlen
                   end) as bytes_per_row,
               max(case when c.relkind = 'i' then 4 else 32 end) as row_overhead_bytes,
               max(c.reltuples) as tuples,
               max(c.relpages)  as pages
          from pg_class c,
               pg_attribute a,
               pg_namespace n
         where c.oid = a.attrelid
           and c.relnamespace = n.oid
           and c.relkind in ('i','r')
           and a.atttypid not in (26,27,28,29)
         group by c.relname
       ) by_table;

A few notes:

1) I have used 32 bytes for the row tuple header overhead and 4 bytes for index tuple overhead
2) The attribute types 26,27,28,29 are oid,tid,xid,cid - which, I believe are already counted in the row overhead
3) _var_col_bytes is an input parameter to the function which measures the assumed size of any variable length columns

I then use the function to actively manage the used_size (after statistics are computed via ANALYZE, of course).  Through a process I track the daily growth of used_size and this tells me how many days of data I need to remove to stay within my limits.  The disk_size is not actively managed, but just represents the "high-water-mark" of the used_size.

Questions:

1) I have found the 32-byte overhead mentioned in a few places, but have not seen any specific reference to the byte overhead of an index tuple header.  Does anyone know the best number to assume here?

2) Are there any other holes in my logic/query?

3) Has anyone solved this entire problem in another fashion (e.g. auto-pruning - only delete what's necessary to stay within limits)?

Any feedback is greatly appreciated,

Mark










Re: [ADMIN] Feedback on auto-pruning approach

2006-03-27 Thread Mark Liberman
So, I have finally completed this auto-pruning solution.  It has proven effective in keeping the size of the db under whichever threshold I set, in an unattended fashion.

I have one final question.  If my goal is to maximize the amount of historical data that we can keep - e.g. set the db size limit to be as large as possible - how much disk space should I reserve for standard Postgres operations - e.g. sort space, WAL, etc.?  I'm sure this depends a bit on our configuration, but if someone can point me in the direction as to what factors I should consider, I'd greatly appreciate it.

Thanks,

Mark






Re: [ADMIN] Feedback on auto-pruning approach

2006-03-28 Thread Mark Liberman
> Probably your biggest issue will be temporary files created by temporary tables, sorts that spill to disk, etc.

Is there any way to monitor this so I can estimate?

> What I'm confused by is the concern about disk space in the first place.

We provide a device to customers that must run in an unattended fashion for as long as the hardware holds up.  So, regardless of the disk size, they will run out at some time.  Some of our solutions grow by 3.5 gigs per day - and 6 months of history is not an unreasonable expectation.  We've just decided we want to keep as much history as possible given space limitations.


-Original Message-
From: Jim C. Nasby [mailto:[EMAIL PROTECTED]]
Sent: Tue 3/28/2006 10:19 AM
To: Mark Liberman
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Feedback on auto-pruning approach

On Mon, Mar 27, 2006 at 06:32:42PM -0800, Mark Liberman wrote:
> So, I have finally complete this auto-pruning solution.  It has proven effective in keeping the size of the db under whichever threshold I set in an unattended fashion.
>
> I have one final question.  If my goal is to maximize the amount of historical data that we can keep - e.g. set the db size limit to be as large as possible - how much disk space should I reserve for standard Postgres operations - e.g. sort space, WAL, etc..  I'm sure this depends a bit on our configuration, etc.. but if someone can point me in the direction as to what factors I should consider, I'd greatly appreciate it.

Probably your biggest issue will be temporary files created by temporary
tables, sorts that spill to disk, etc.

What I'm confused by is the concern about disk space in the first place.
Drives are very cheap, people are normally much more concerned about IO
bandwidth.
--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461









[ADMIN] Release plans for improvements to partitioning

2006-03-29 Thread Mark Liberman
I was wondering if anyone has any insight into if/when future improvements to the 8.1 partitioning capabilities are planned.

The current implementation of partitioning in postgres 8.1 appears to be just a first step.  While it would provide some very nice benefits (such as the absence of vacuum processes after dropping partitions), the additional burden it places on administrative DDL whenever you add a partition might be a little too daunting to make it a viable solution for us.

Currently, a single partition, of which we would like to create one per table per day, involves the following definitions:

1) Create table definition. This is rather straight-forward ... e.g. CREATE TABLE programs_20060101 INHERITS (programs_1min)

2) Check condition. This tells postgres which range of data resides in this partition ... e.g. CHECK ( end_time >= '2006-01-01' AND end_time < '2006-01-02' ).

Note: there are no checks to guarantee that these are non-overlapping.

3) Insert rule. This tells postgres which partition to insert into for inserts into the master ... e.g.

CREATE RULE pgm_1min_insert_20060101 AS ON INSERT TO programs_1min WHERE ( end_time >= '2006-01-01' AND end_time < '2006-01-02' ) DO INSTEAD INSERT INTO programs_1min_20060101 VALUES ( NEW.programs_id, NEW.begin_time, NEW.end_time);

Again, there are no guarantees that these are non-overlapping.

4) All Indexes. Indexes must be created separately on each partition. The existence of an index on the master table does not imply such an index on the underlying partitions.

This is the major area that would involve too much effort.  Whenever a new index is added we would need to write a script that dynamically added that new index to all partitions of a given table.  While this is certainly achievable with scripting, it simply adds too much margin for error if we are trying to build an automated solution for all of our customers.

From my understanding, there are other limitations as well, such as:

1) No delete rules for deleting across partitions
2) No update rules for updating across partitions

In an ideal solution the addition of a new partition would be a single (ALTER TABLE ADD PARTITION) statement (much like Oracle).  This should take care of the table definition, the acceptable ranges (which cannot be overlapping) and any insert rules.  Additionally, the index definition that applies to the table should apply to all underlying partitions so that any create index statement on the master table would be sufficient to index each underlying partition.  Once created, the partitions should then be "invisible" to the sql writer, such that all inserts, updates, and deletes appropriately hit only the required partitions.

Hopefully, some/much of this is underway, but any insight would be appreciated because we need to determine whether we want to go down the path of implementing a solution based on the current partitioning, or waiting for the next generation of PG partitioning.

Thanks,

Mark







Re: [ADMIN] [JDBC] [SQL] Thoughts on a Isolation/Security problem.

2006-04-18 Thread Mark Lewis
On Tue, 2006-04-18 at 14:32 +0300, Achilleus Mantzios wrote:
> Thanx for your thoughts, but this would require touching
> 173 tables +
> 2,594 SQL statements in a sum of 324 programs (which sum into 125,085 
> lines of code)


We did a very similar conversion to the one proposed here a couple of
years back, and switched an old Java app from a dedicated schema-per-
client model to a shared schema with company ID model.  I figured I'd
share our results, to at least give you a data point on what you could
expect if you went that route.

At the time, the application consisted of roughly 90 tables and 90,000
lines of code.  Never counted the SQL statements; there were probably
very roughly 1000.

We had the advantage of having the 90,000 LOC spread across only two
larger applications instead of a bunch of small programs.

It actually ended up being a much easier process than we had expected,
because the application logic didn't really need to change all that much.
The major steps were:

1. Add an extra company FK to the top-level objects in our data model.
For example, we added companyId to the userdetail table, but not to the
address table which depends on the userdetail table.  In our case, this
meant that we needed to add a column to about 30 (one third) of our
tables.  Not very difficult.

2. Inserts into those 30 tables were modified to include an extra
parameter.  The actual insert change is trivial, but depending on your
plumbing it might take some refactoring to get the companyId to the code
that does the inserts.  This involved a moderate code refactoring, but
the changes were very straightforward and not too prone to errors.

3. Because we didn't support moving from one company to another, the
update and most of the select logic didn't change at all.

4. Needed to update all of the search queries to only search within the
current company.  Fortunately, this type of query is relatively rare.
In our case, there were only about 20 that needed updates out of our
1000 queries.  The change is also simple: just add one AND to the WHERE
clause (see the sketch after this list).

5. Security checks.  This is the only step that took a good chunk of
time, because our application is web-based and so a mischievous user
could muck with id's in the HTTP POSTS on pretty much any page in our
application.  This meant that we needed to be careful to always do
company checks before processing any user input.  Fortunately for us, at
the same time we were enhancing our security model anyway, so doing the
extra company check really added no extra time on top of the work that
needed to be done anyway.
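
A sketch of the step-4 change (the table and column names are illustrative;
"?" is a bind-parameter placeholder):

  -- before
  SELECT * FROM userdetail WHERE last_name LIKE 'Smi%';
  -- after: scope the search to the caller's company
  SELECT * FROM userdetail WHERE last_name LIKE 'Smi%' AND company_id = ?;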

Steps 1-4 took a single developer 3 days to implement.  Step 5 was
harder to measure because as I mentioned we bundled it together with a
larger security overhaul, but to perform it separately (and do an audit
to make sure it was correct) would have taken perhaps a week or two.

Anyway, overall in our case we had a pretty successful conversion.  It
ended up taking significantly less time than anticipated, and the decreased
maintenance of only managing one database and the ease of building new
global administrative and reporting tools definitely made the project
worth while.

-- Mark Lewis



Re: [ADMIN] Tale partitioning

2006-05-04 Thread Mark Liberman
>BTW, I should have mentioned that partitioning is a very new feature and
>that folks probably would like to know about shortcommings you find
>while using it.

We just implemented partitioning, and have found it very useful for dropping old data, as opposed to deleting and enduring the subsequent vacuum.

One unforeseen issue we ran into had to do with postgres basing its execution plans on the master table (which is generally empty) and therefore choosing inefficient merge joins, in our case.  The work-around, however, was for us to delete the rows in pg_statistic for the master tables, as this thread indicates.

http://archives.postgresql.org/pgsql-performance/2006-05/msg00024.php

Once we employed the work-around things seem good.

- Mark







Re: [ADMIN] Autovacuum Question

2006-05-16 Thread Mark Liberman
>Does this mean that each run of autovacuum will vacuum/analyze all tables
>that need work in the database that autovacuum is looking at?

Yes, it will do all tables that satisfy the criteria.

>Also, with over 200 databases, does this mean that a database will only get
>checked once about every 3.5 hours, or does autovacuum run on the database
>that needs the vacuuming the most?

If you have it set to 60 seconds, then yes.  It hits one db each minute.  Actually, to be more clear, it sleeps 1 minute between each check.  So, if, on average, it takes 60 seconds to perform the necessary vacuums/analyzes, then there will be, on average, 2 minutes between each db, or 400 minutes to get through all 200 dbs.






[ADMIN] PGSQL Database Recovery in Portland Oregon Area needed ASAP

2006-05-17 Thread Mark Holm
I have a client that is running an older version of Lyris List Manager against Postgres 7.1.2, which has crashed their database beyond my ability to recover it. The error that I am getting when we try to restart the database is:

    /etc/init.d/postgresql start
    Starting postgresql service: postmaster successfully started
    DEBUG: database system was interrupted being in recovery at 2006-05-15 14:01:54 PDT
    This probably means that some data blocks are corrupted
    and you will have to use last backup for recovery.
    DEBUG: CheckPoint record at (230, 4199276628)
    DEBUG: Redo record at (230, 4199276628); Undo record at (0, 0); Shutdown TRUE
    DEBUG: NextTransactionId: 159854941; NextOid: 34688714
    DEBUG: database system was not properly shut down; automatic recovery in progress...
    DEBUG: redo starts at (230, 4199276692)
    [ OK ] 

    FATAL 2: btree_split_redo: uninitialized next right page
    /usr/bin/postmaster: Startup proc 30311 exited with status 512 - abort

I have gone through the stuff on-line concerning this error and attempted to reinitialize the database and restore the night before's backup, but apparently the backups were not quite set up correctly, as I cannot get a valid restore either. I am out of my depth on this one and am willing to contract somebody to help get this database back on-line again ASAP, as the client is getting frantic. If you have proven experience doing this sort of recovery, please contact me at the number or email address below. I am not a member of the lists, so please contact me directly. Rates are negotiable, but I will have to clear them with the client before we proceed.

    markh




Mark A. Holm President
InfoArch, Inc.                  Cell:    (503) 750-9741
7456 SW Baseline, PMB#123. Office: (503) 943-3202
Hillsboro, OR 97123 Fax:   (503) 591-8584
http://www.infoarch.com  <mailto:[EMAIL PROTECTED]> 






[ADMIN] DBA tasks

2006-07-23 Thread Mark Matthews

Hello,

Can anyone recommend a good reference text, web or other, for general
PostgreSQL DBA tasks?


Stuff like:
* Things to do to clean up periodically (CLUSTER, VACUUM FULL, etc)
* How to analyse queries to optimise data retrieval (Use of EXPLAIN,  
etc)

* etc ...

We are running Debian stable (sarge) and therefore 7.4. Moving to 8.1  
soon (hopefully)


I am a developer who has adopted the role of DBA.

Thanks

-
Mark Matthews
Object Craft Pty Ltd
http://www.object-craft.com.au




[ADMIN] Corrupted DB - Help

2006-07-27 Thread Mark Liberman
Running postgres 8.1.2 on gentoo linux 2.6.14

The server crashed (not sure if due to a pg operation).  Upon rebooting the server and trying to start postgres, I get the following log entries:

<4983:2006-07-27 18:04:24 PDT>LOG:  database system was interrupted at 2006-07-27 17:58:51 PDT
<4983:2006-07-27 18:04:24 PDT>LOG:  checkpoint record is at 2A4/2A23F328
<4983:2006-07-27 18:04:24 PDT>LOG:  redo record is at 2A4/2A23F328; undo record is at 0/0; shutdown FALSE
<4983:2006-07-27 18:04:24 PDT>LOG:  next transaction ID: 70649528; next OID: 345957
<4983:2006-07-27 18:04:24 PDT>LOG:  next MultiXactId: 1742; next MultiXactOffset: 3498
<4983:2006-07-27 18:04:24 PDT>LOG:  database system was not properly shut down; automatic recovery in progress
<4983:2006-07-27 18:04:24 PDT>LOG:  redo starts at 2A4/2A23F36C
<4983:2006-07-27 18:04:24 PDT>WARNING:  could not truncate relation 1663/128873/129263 to 398 blocks: Success
<4984:2006-07-27 18:04:24 PDT>LOG:  connection received: host=[local]
<4984:2006-07-27 18:04:24 PDT>FATAL:  the database system is starting up
<4981:2006-07-27 18:04:24 PDT>DEBUG:  forked new backend, pid=4984 socket=7
<4981:2006-07-27 18:04:24 PDT>DEBUG:  server process (PID 4984) exited with exit code 0
<4985:2006-07-27 18:04:25 PDT>LOG:  connection received: host=[local]
<4985:2006-07-27 18:04:25 PDT>FATAL:  the database system is starting up
<4981:2006-07-27 18:04:25 PDT>DEBUG:  forked new backend, pid=4985 socket=7
<4981:2006-07-27 18:04:25 PDT>DEBUG:  server process (PID 4985) exited with exit code 0
<4983:2006-07-27 18:04:26 PDT>PANIC:  could not open relation 1663/128873/129201: No such file or directory
<4981:2006-07-27 18:04:26 PDT>LOG:  startup process (PID 4983) was terminated by signal 6
<4981:2006-07-27 18:04:26 PDT>LOG:  aborting startup due to startup process failure
<4982:2006-07-27 18:04:26 PDT>LOG:  logger shutting down


The last command that I did right before the crash was a vacuum full on a very small table.  After it completed, I exited psql, tried a non-pg command and it hung up.  Here are the last log entries before the crash:

<1054:2006-07-27 18:00:18 PDT>LOG:  statement: vacuum full verbose port_etr_state_offline;
<1054:2006-07-27 18:00:18 PDT>INFO:  vacuuming "public.port_etr_state_offline"
<1054:2006-07-27 18:00:18 PDT>INFO:  "port_etr_state_offline": found 0 removable, 213 nonremovable row versions in 16 pages
<1054:2006-07-27 18:00:18 PDT>DETAIL:  0 dead row versions cannot be removed yet.
    Nonremovable row versions range from 45 to 45 bytes long.
    There were 2445 unused item pointers.
    Total free space (including removable row versions) is 109896 bytes.
    12 pages are or will become empty, including 0 at the end of the table.
    16 pages containing 109896 free bytes are potential move destinations.
    CPU 0.00s/0.00u sec elapsed 0.00 sec.
<1054:2006-07-27 18:00:18 PDT>INFO:  "port_etr_state_offline": moved 213 row versions, truncated 16 to 2 pages
<1054:2006-07-27 18:00:18 PDT>DETAIL:  CPU 0.00s/0.00u sec elapsed 0.00 sec.
<1054:2006-07-27 18:00:18 PDT>LOG:  duration: 16.078 ms

Any way to salvage this db?

Thanks,

Mark





Re: [ADMIN] CMS - portal server Question

2006-08-25 Thread Mark Matthews


On 25/08/2006, at 7:07 PM, Achilleas Mantzios wrote:


Hi,
I am thinking of deploying a CMS system for our corporate web server.

I have seen/installed/tested:

JBoss Portal : seems good and standards-complying, but too unstable
Apache Lenya : very unstable - could not get it to do any useful work
PHP-Nuke : (despite our favor towards Java, this seems stable but not what
we want)
OpenCMS : very stable but not so open source - someone has to pay to get
anything more than the basics

Practically, from just browsing and using google one could conclude that
there are tens of open source tools that do content management. It is
impractical to install even 10% of them, and web
articles/reviews/comparisons rarely give anything but biased views.

So, since I think our type of need falls in the same big family as the ones
working in the unix/PostgreSQL/java world, I would like to ask you about
your experience in this field.

Since we don't consider interfering with the DB backend among our immediate
future intentions, MySQL will be just as good for us. Of course I prefer
postgresql for our applications, but if, let's say, opencms started
supporting postgresql just a month ago, then postgresql will not be as
strong a candidate in that case.

In the same sense Java is preferred over PHP, since we don't intend to
interfere with CMS code, but if some killer PHP app does the job, then it
will be the one selected.

Thanx

Achilleas Mantzios.


Try Plone.

http://plone.org/ Fantastic CMS.

I realise you have a preference for Java/PHP but this is well worth a  
look. Python is a great language to work with - very powerful.


"By default, Plone stores its contents in Zope's built in  
transactional object database, the ZODB. There are products and  
techniques, however, to share information with other sources, such as  
relational databases, LDAP, filesystem files, etc." from
http://plone.org/about/plone


Good luck

-----
Mark Matthews
General Manager
Object Craft Pty Ltd
Phone: +61 3 9654 9099
http://www.object-craft.com.au




[ADMIN] New to PostgreSQL

2006-11-01 Thread Mark Steben

Good afternoon,

 

I am a database administrator who has worked with mainframe DB2 databases
for 16 years.  I am brand new to PostgreSQL.  I just started a new job where
I am the only DBA in the company and am expected to ramp up as soon as
possible.  Are there any courses available to me - either self-taught or
classroom - that could provide me with a broad-brush introduction?  Right
now my main resource is PostgreSQL - The Comprehensive Guide, by Korry and
Susan Douglas - a very good reference that I am using as I tackle specific
tasks.  I think a good overview would help me as I basically self-educate.

 

Thank you, and I look forward to working with you through your mailing list.

 

Mark Steben

AutoRevenue

Lee, Massachusetts

[EMAIL PROTECTED] 

 








[ADMIN] Question

2006-11-07 Thread Mark Steben

I am very new to PostgreSQL.  Is it appropriate to pose questions to this
email list?

I am trying to come up with a query that will list the names of the
database indexes that have been chosen as clustering indexes.  I know I can
get the INDEXRELID from pg_index, but have yet to figure out how to get the
index name from there.  Any help would be appreciated.

And, if this is not an appropriate forum to ask questions, please tell me.

 

Thank you,

Mark Steben

 

AutoRevenue








Re: [ADMIN] Question

2006-11-07 Thread Mark Steben
Thank you, Jeff.  That worked like a champ.  My initial problem
had to do with not understanding the concept of OID datatypes.
My next learning task is to read up on OIDs.

Thanks again, Mark

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 07, 2006 3:29 PM
To: Mark Steben
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Question

On Tue, 7 Nov 2006, Mark Steben wrote:

> I am very new to PostgreSQL.  Is it appropriate to pose questions to this
> email list?

It might be better on pgsql-sql, but I'm not sure.

> I am trying to come up with a query that will list the names of the
> database indexes that
>
> Have been chosen as clustering indexes.  I know I can get the INDEXRELID
> from PG.INDEX
>
> But have yet to figure out how to get the index name from there.  Any help
> would be appreciated.
>
> And, if this is not an appropriate forum to ask questions please tell me.

You want to join on pg_class.oid.  You can see the info on pg_index here in 
the docs: http://www.postgresql.org/docs/8.1/static/catalog-pg-index.html

select i.indexrelid, c.relname from pg_index i, pg_class c
where i.indexrelid = c.oid;

will likely give you what you're after.
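
To list only the indexes actually chosen for clustering, a variant of the
same join can filter on pg_index.indisclustered - a sketch:

select c.relname as index_name, t.relname as table_name
from pg_index i, pg_class c, pg_class t
where i.indexrelid = c.oid
  and i.indrelid = t.oid     -- the table the index belongs to
  and i.indisclustered;      -- true for the index last used by CLUSTER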

-- 
Jeff Frost, Owner   <[EMAIL PROTECTED]>
Frost Consulting, LLC   http://www.frostconsultingllc.com/
Phone: 650-780-7908 FAX: 650-649-1954


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[ADMIN] Question on partitioning

2006-11-10 Thread Mark Steben








Thanks to all who responded to my question
on querying on cluster index names.

I have another question.  We are considering partitioning 2 or 3 of our
larger tables.  Each has 12 million or more rows.  We want to partition on
customer_id and build primary key indexes on each of the children
partitions, which inherit column characteristics from the master table.
This would resolve any issues about queries on customer_id.

My question is this:  How would I handle a query that queries on another,
non-partitioned column, say, ENTRY_DATE?  Could I design an index on the
master table that would reference ENTRY_DATE across partitions, or will I
need to build indexes and queries on each partition?

 

Thank you for your time,

Mark Steben
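
With inheritance-based partitioning, indexes are not inherited from the
master table, so the usual answer is one ENTRY_DATE index per child; the
planner then appends the per-partition scans.  A minimal sketch, with
hypothetical table names:

CREATE INDEX orders_2006q1_entry_date_idx ON orders_2006q1 (entry_date);
CREATE INDEX orders_2006q2_entry_date_idx ON orders_2006q2 (entry_date);
-- ...one per child; an index created on the master table covers only
-- rows stored directly in the master itself.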










[ADMIN] Question on partitioning

2006-11-15 Thread Mark Steben








Hi, the new guy is back with another question.

Regarding partitioning - I set up a testing table - a parent table and 4
partitioning children's tables inheriting the attributes of the parent.  I
inserted 1800 rows in all - 450 equitably in each partition.  The CHECK
constraints work great.  The PART_ID column, defined as SMALLINT, is the
partitioning column.

When I query for PART_IDs I know to be in the first partition only, the
EXPLAIN output says that the query plan does a sequential table scan on the
first partition, which is fine because I do SELECT *.  However it also does
index scans on partitions 2, 3, and 4, which I did not expect since I
limited the range in my WHERE clause to rows in partition 1.

Do I need to load more data?  Any help would be appreciated.  I did set my
CONSTRAINT_EXCLUSION to ON.

 

Thank you,

Mark Steben

 










Re: [ADMIN] Question on partitioning

2006-11-16 Thread Mark Steben
Simon, you're right on the money.  The guys on the IRC chatroom suggested
that I cast the partitioning column as SMALLINT, as the optimizer assumed
integer, and it worked.  Thx for getting back to me.
Mark Steben
AutoRevenue
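
For anyone who hits the same thing in the archives: the fix amounts to
keeping the comparison in the partition column's type so constraint
exclusion can prove the CHECK constraints.  A sketch with hypothetical
names (the exact cast used isn't quoted above):

-- part_id is SMALLINT; a bare literal is assumed integer, so match
-- the types explicitly:
SELECT * FROM part_parent WHERE part_id = 1::smallint;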

-Original Message-
From: Simon Riggs [mailto:[EMAIL PROTECTED] 
Sent: Thursday, November 16, 2006 12:39 PM
To: Mark Steben
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Question on partitioning

On Wed, 2006-11-15 at 15:13 -0500, Mark Steben wrote:

> Regarding partitioning - I set up a testing table - parent table and 4
> partitioning children's tables inheriting the
> 
> Attributes of the parent.  I inserted 1800 rows in all - 450 equitably
> in each partition. The CHECK constraints work great.
> 
>   The PART_ID column, defined as SMALLINT, Is the partitioning
> column.  
> 

You'll probably want to look at the caveats here
http://www.postgresql.org/docs/8.2/static/ddl-partitioning.html#DDL-PARTITIO
NING-CAVEATS

especially the ones about cross-datatype comparisons in CHECK
constraints.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [ADMIN] Problem with pg_dump / pg_restore

2003-11-28 Thread Mark Carew

Hi Armel,

I have searched high and low on the net to try to get answers regarding
pg_dump and pg_restore, specifically to allow them to run with crontab on
the red hat servers that I look after as programmer and general IT dogsbody.
It would appear from the general brevity of the replies that nobody has the
answers, or if they do it's a well guarded trade secret. I hope I don't get
a knock on the door later tonight from the knowledge police, but maybe my
experience can be of assistance to you.

The only way that I have been able to successfully back up and restore is to
adhere to the following strategy.

 As a programmer, I have developed a data dictionary that sits on the client
side in windows and is able to contain within its classes sufficient
information to be able to write out the complete sql statements for the
creation of all the components of my postgresql databases. This includes
tables, views, indices, foreign keys, sequences etc. I issue a "dbfmain PSQL
", at the windows command prompt,  and out pops a ???xxxPG.txt file specific
to the system I'm working on.

This sql-containing text file has drop and create statements and is able to
create an empty database with sequences preset to starting numbers and all
tables and indices with no data. {It gets starting sequence values by
looking up existing tables (if they exist) and obtaining the highest primary
key numeric value.}

Thus I have an empty-database generation script that I can point at any
database with "psql -e ???xxx < ???xxxPG.txt", run as user postgres.

Now I'll skip a few steps  and get to the nitty gritty. The following file
lives in the / directory of each file server, away from samba shared folders
and runs each night
automatically - courtesy of an entry in /etc/crontab. It backs up data but
not schema. This allows me (with my simplistic understanding of the whole
process) to restore the backed up data to other databases than the one from
which the data was extracted. I may even, heaven forbid, restore the data to
 the original database from which it came. This way the views that you
created remain intact, just the data is restored.

I have had a few glitches with sequences being out of whack, but as they are
always used by me as distinct primary key values, the postgres data manager
soon straightens out any anomalies and the sequences quickly grow beyond
issuing any contentious values.

--->>>>>>>>>>>>>>>>>>>>>

#!/bin/bash
#
# script to include in crontab for 0 1 * * * root /postgresback
# for backup and vacuum of the following two databases
#
# written by Mark Carew 28-11-2003
#
# Dump only the data
# pg_dump -a  etc
#
# The data base structure is recreated by issuing
# as postgres user when in /mnt/samba
# psql -e hcc??? < HCC???PG.txt
# e.g. hcc??? == hcchel, hcckal, hcctest
# e.g. HCC???.txt == HCCHELPG.txt, HCCKALPG.txt
#
# to restore a database first recreate the structure as above
# then use pg_restore as follows
# pg_restore -a -Ft -d hcc??? HCC???.SQL
# this command only restores the data
# so that the database should be an empty structure
# as above (psql -e etc) makes it this way
#
/bin/su - postgres -c '/usr/bin/pg_dump -a -f  /mnt/samba/HCCHEL.SQL
 hcchel -Ft'
/bin/su - postgres -c '/usr/bin/pg_dump -a -f  /mnt/samba/HCCKAL.SQL
 hcckal -Ft'
#
#  reindex and pack
#
/bin/su - postgres -c '/usr/bin/vacuumdb hcchel'
/bin/su - postgres -c '/usr/bin/vacuumdb hcckal'
#
exit 0

--->>>>>>>>>>>>>>>>>>>>>>>>>>>>>

off topic

I suppose from the reading that I have done regarding 7.2 to 7.4 conversion
that I will have to write my 7.2 data back out to .dbf files so that I can
then use my data dictionary windows program to convert this temporary .dbf
data back to 7.4 sql?

Does anybody know if the -F{t,c,p} options all work in 7.4? I only seem to
be able to use -Ft successfully in 7.2 within pg_dump and pg_restore.

Regards
Mark Carew
Brisbane Australia
[EMAIL PROTECTED]



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [ADMIN] Problem with pg_dump / pg_restore

2003-12-01 Thread Mark Carew

Hi Armel,

Why not do a structure-only dump to plain text and see if you can use
that as your starting .sql text file for 'psql -e .. etc'? That eliminates
the necessity to have an active client-side data dictionary. BTW, your
English is fine.

Regards
markcarew@
magicwanddept.com.au



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[ADMIN] Shutting down pgsql

2003-12-11 Thread Mark Warner
Greetings,

   Is it possible, advisable, or terribly bad to shut down a pgsql 
database with the do.maintenance script running? I have a 12gb db with 
about 780 tables that has been running maintenance now for approximately 
67 hours, and it is beginning to seriously degrade the performance of 
the application that it drives. I am running on debian linux testing, with 
pgsql 7.3.4.

   So I am wondering if it is possible and advisable to either kill the 
task, or shut down the database in order to do some work on it.

Thanks,
Mark Warner.
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[ADMIN] log file: showing disconnects as well as connects?

2004-01-28 Thread Mark Harrison
Here's a quickie script to watch a particular session:

tail -f $LOG | awk '
/connection received: host='$1'/ { pid = $3 }
$3 == pid { print }'
Is there something in the log file which will indicate
a disconnect as well?
Many TIA,
Mark
---(end of broadcast)---
TIP 8: explain analyze is your friend


[ADMIN] hung postmaster when client machine dies?

2004-01-29 Thread Mark Harrison
We recently had an incident where a linux box went down with
a kernel error.  A process on the box had an open connection
to a postgres session, and was in a transaction.
This situation was noticed when other processes connected to
postgres would not respond.
We observed that there was postmaster connected to the ip address
of the downed machine with an "in transaction" status.  Killing
that processes unblocked the other processes.
Is this expected behavior?  Was postgres simply waiting for
a failure from the TCP/IP layer?
We're now running a watchdog process that pings machines for
which a postmaster is running and killing that process if
the machine is not contactable for a certain period of time.
Thanks to whoever made the status information show up in ps
output!
Unfortunately, we didn't capture the process data... if this
would help we can attempt to reproduce the situation.
Many TIA,
Mark
--
Mark Harrison
Pixar Animation Studios
Emeryville, CA
---(end of broadcast)---
TIP 8: explain analyze is your friend


[ADMIN] pg_dump privileges

2004-02-03 Thread Mark Lubratt
Hello,

I'll get right to the point.

I'd like to create a user for my pg_dump backup process.  I'd like that 
user to have only the privileges required to run pg_dump.  I have a 
language, a function and sequences, views and tables in my schema.  
What privileges are needed for pg_dump to run in addition to USAGE (for 
the language), EXECUTE (for the function) and SELECT (tables, views and 
sequences)?

Thanks!
Mark
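
For the archives, a sketch of the per-object grants such a role would need
(hypothetical schema, object, and role names; pg_dump reads object
definitions from the catalogs, so SELECT on tables, views, and sequences is
the core requirement):

CREATE USER dump_only WITH PASSWORD 'secret';
GRANT USAGE ON SCHEMA app TO dump_only;
GRANT SELECT ON app.orders TO dump_only;        -- repeat for each table and view
GRANT SELECT ON app.orders_id_seq TO dump_only; -- and for each sequence
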
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


[ADMIN] pg-watchdog

2004-02-05 Thread Mark Harrison
Here is the final version of my script that will watch for disconnected
client machines and kill their associated server side processes.
If I'm doing anything really stupid I would appreciate hearing about it.
If it's useful to anyone, feel free to pass it along.
Thanks,
Mark
#!/bin/sh
# the next line restarts using wish \
exec tclsh8.3 "$0" "$@"
# pg-watchdog -- make sure postgres clients are reachable.
#
# This program is a watchdog for postgresql.  It monitors all
# postgresql client processes and kills them if the host upon
# which they are running becomes inaccessible over the network.
#
# This is necessary because if a machine goes down "hard" or
# if a network cable becomes disconnected, it can be several
# hours before postgres will be informed of this (for details,
# see chapter 23 of W. Richard Steven's TCP/IP Illustrated,
# volume 1).  If a postgres client on that machine has a
# transaction lock, all other clients will be blocked waiting
# for that lock to be released.
#
# This program uses the Scotty package available at:
# http://wwwhome.cs.utwente.nl/~schoenw/scotty
# If you do not wish to install Scotty you can change
# the icmp command to instead use whatever flavor of
# ping is available on your system.
#
# Please send feedback, improvements, etc, to
# Mark Harrison, [EMAIL PROTECTED]
package require Tnm

proc doit {} {
    after 60000 doit    ;# reschedule; 60000 ms = 60 s (pick an interval to suit)
    array set pids {}   ;# collect server pids per client ip address
    set lines [exec ps auxww | awk {$11=="postgres:" {print $2 "/" $14}}]
    foreach s [split $lines \n] {
        set x [split $s /]
        set pid [lindex $x 0]
        set ip  [lindex $x 1]
        if {[regexp {[0-9]+[.][0-9]+[.][0-9]+[.][0-9]+} $ip]} {
            lappend pids($ip) $pid
        }
    }
    if {[array size pids] > 0} {
        set times [icmp -timeout 20 echo [lsort [array names pids]]]
        foreach pair $times {
            set ip   [lindex $pair 0]
            set time [lindex $pair 1]
            if {$time == -1} {
                foreach pid $pids($ip) {
                    puts "killing $pid from $ip"
                    exec kill $pid
                }
            }
        }
    }
}
after 0 doit
vwait forever
--
Mark Harrison
Pixar Animation Studios
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [ADMIN] talking to port 5432

2004-02-06 Thread Mark Lubratt

On Fri, 6 Feb 2004, Ernst Roebbers wrote:

Dear Sirs,

I would like to communicate with the postgres backend
using port 5432 from a small remote machine where only
a socket is available - no application layer.
Do You have any hints where I can find some information
what kind of input the postmaster needs?


There is a whole chapter on this at http://www.postgresql.org/docs/ .

Go there, select the version you're using and look for Frontend/Backend 
Protocol in the Developer's Guide.

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[ADMIN] IDE and write cache

2004-02-10 Thread Mark Lubratt
Interesting discussions on IDE drives and their write caches.

I have a question...

You mentioned that you'd see the problem during a large number of 
concurrent transactions.  My question is, is this a necessary condition 
for the database crashing when the plug was pulled, or did you need to use 
a large number of concurrent transactions to "guarantee" that when you 
pulled the plug, it would be at an inopportune time?  In other 
words, is an IDE drive still "more" susceptible to a power outage 
problem even under light load?

Thanks!
Mark
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


[ADMIN] 7.3.2 - 7.4.1 on RedHat 9.0

2004-02-15 Thread Mark Lubratt
I've finally decided to upgrade my postgresql installation.  I'm 
currently running 7.3.2 on RedHat 9.0.

Here's my dilemma:  Since RedHat is no longer supporting their "RedHat" 
distributions, I can't really get a RPM for the upgrade.  I'm concerned 
about orphaning files if I do a straight compile from source (not 
getting the configured directories to be the same as the original 
installation).  So the question is, do I uninstall the original RPM and 
do a compile from source?  Do I go to a SRPM?  Do I uninstall and then 
use a SRPM?

TIA, Mark

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


[ADMIN] Schema comparisons

2004-02-27 Thread Mark Lubratt
Hello!

I have two databases running on separate machines.  One is a production 
server and the other is development.

I've been trying to be careful, but I've gotten out of synch with 
whether or not I've applied the changes I've made to the development 
system to the production system.  Is there a utility that will compare 
the tables, functions, triggers, views, etc. between two systems and 
flag the schema elements that aren't in synch between the two?

If not, and I need to write one, would such a system be of interest to 
anyone else?

Thanks!
Mark
---(end of broadcast)---
TIP 8: explain analyze is your friend


[ADMIN] Any Gentoo users interested in a slotted PostgreSQL ebuild?

2004-02-26 Thread Mark Gibson
Hello folks.

Are there any Gentoo users here?

Do you wish the PostgreSQL ebuild made use of SLOTS?
- to allow installing of major versions together (eg. 7.3 and 7.4 on the 
same host)
- to ease migration of databases to new major versions (where a 
dump/reload is required)

I've started a thread at:
http://bugs.gentoo.org/show_bug.cgi?id=42894
With an initial plan for relocation of files to support slotting.
Comments, suggestions, corrections, or messages to say it's a silly idea 
(and why)
would be very much appreciated before I attempt this next week.

Cheers

--
Mark Gibson 
Web Developer & Database Admin
Cromwell Tools Ltd.
Leicester, England.
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


[ADMIN] Multiple inserts without COPY

2004-03-03 Thread Mark Lubratt
Hello!

I have a routine in my application where I have the potential to 
generate hundreds of inserts or deletes at one time.  Right now, I 
issue each insert and delete separately.  I'm hoping there might be a 
way of generating a single SQL statement to send to the backend for 
each.  The deletes look something like

delete from CL where CL_id = i

where i could be a list of several hundred integers.  Again, right now 
I iterate through the list.

The inserts might look like

insert into CL (CO_id, PE_ID) values (j, k)

where j and k are also integers and I could have a list of several 
hundred pairs of j and k.

MySQL has a multiple insert feature where you simply append a bunch of 
(j, k)'s separated by a comma.  Does PostgreSQL have anything like 
this?  I was hoping I might be able to use COPY, but I see that's 
really only for psql.

Are there any options?  Or, do I simply send a bunch of queries?

Thanks!
Mark
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [ADMIN] Multiple inserts without COPY

2004-03-03 Thread Mark Lubratt
On Mar 3, 2004, at 10:22 PM, Tom Lane wrote:

Mark Lubratt <[EMAIL PROTECTED]> writes:
The deletes look something like
delete from CL where CL_id = i
where i could be a list of several hundred integers.  Again, right now
I iterate through the list.
Consider
	delete from CL where CL_id in (i,j,k,...);
If you have hundreds of target values, it might be better to put them 
in
a temp table and go
	delete from CL where CL_id in (select id from temp_table);
The latter should be reasonably quick in 7.4, but be warned that it'll
suck in prior releases.

Yeah, that's what I was looking for!  I thought I might be able to do 
that.  Cool.

I was hoping I might be able to use COPY, but I see that's
really only for psql.
Huh?  You can use COPY FROM STDIN in most of our client libraries,
certainly so with libpq.  What are you using?
Actually, I'm using REALbasic.  All the communication is happening 
through a TCP connection.  I tried emulating what the command might 
look like in pgAdmin.  But, of course, after the semi-colon, the parser 
got confused when it hit the actual data.  I tried:

COPY MyTable (id, val) FROM STDIN;
2   Hello There!
\.
It choked at the 2.  I was just trying to see if the backend suspended 
parsing and would just start copying like psql does.  But, I guess not. 
 How does psql shovel a COPY at the backend?

Oooh.  I just remembered.  There is a new method in the REALbasic 
classes that provide the PostgreSQL functionality.  I'll have to check 
it out...

I was hoping that there might be a syntax trick  with INs or something 
like the delete command above.  Something that might expand in the 
parser to do what I want to do.

Thanks!

Mark
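
For the archives, Tom's two suggestions sketched in SQL with hypothetical
data (multi-row VALUES lists only arrived in a later release, so these were
the options at the time):

-- an IN list handles modest batches of deletes in one statement:
DELETE FROM cl WHERE cl_id IN (101, 102, 103);

-- for hundreds of values, load a temp table (COPY streams the rows in
-- one round trip) and join against it:
CREATE TEMP TABLE pending (co_id integer, pe_id integer);
COPY pending FROM STDIN;
INSERT INTO cl (co_id, pe_id) SELECT co_id, pe_id FROM pending;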

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [ADMIN] Multiple inserts without COPY

2004-03-03 Thread Mark Lubratt
On Mar 3, 2004, at 11:20 PM, Tom Lane wrote:

Mark Lubratt <[EMAIL PROTECTED]> writes:
Huh?  You can use COPY FROM STDIN in most of our client libraries,
certainly so with libpq.  What are you using?

Actually, I'm using REALbasic.
Um.  I have no idea what sort of support they have, but you do need a
client interface library that knows about the COPY data protocol.
Typically there will be separate API calls for pushing COPY data 
through
after you issue the COPY command.


I figured it out.  They do provide a string to push COPY FROM STDIN 
through.

Thanks anyway!
Mark
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [ADMIN] [HACKERS] Any Gentoo users interested in a slotted PostgreSQL

2004-03-05 Thread Mark Gibson
Mark Gibson wrote:

Are there any Gentoo users here?
Do you wish the PostgreSQL ebuild made use of SLOTS?
I've submitted a slotted ebuild for testing at:
http://bugs.gentoo.org/show_bug.cgi?id=42894
It's the first slotted ebuild I've attempted, so it may be total rubbish!
But I have managed to get 7.3.5 and 7.4.1 installed and running in parallel
(see attachment notes on bugs.gentoo.org).
--
Mark Gibson 
Web Developer & Database Admin
Cromwell Tools Ltd.
Leicester, England.
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[ADMIN] Explain Analyze help

2004-03-09 Thread Mark Lubratt
I've just completed my application and my customer is starting to 
populate the database.  I've put a table in where queries get logged so 
that I can start pinpointing my optimizations to the most frequently 
used queries.  I've read the manual on the explain analyze statement 
and I have one (which will probably turn into many) question.

Obviously, I have a lot of sequential scans going on.  At about what 
cost does it start to make sense to make an index?  Or, should I just 
put in a bunch of indexes and let the optimizer decide whether or not 
to use them?

OK, so it was 2 questions.

Anyone?  Bueller?

Mark
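
One hedged way to answer both questions empirically, with hypothetical
table and column names: compare plans and timings before and after adding
the candidate index, and let the optimizer show whether it would use it.

EXPLAIN ANALYZE SELECT * FROM query_log WHERE user_id = 42;
CREATE INDEX query_log_user_id_idx ON query_log (user_id);
ANALYZE query_log;  -- refresh planner statistics
EXPLAIN ANALYZE SELECT * FROM query_log WHERE user_id = 42;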

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [ADMIN] syslog slowing the database?

2004-03-10 Thread Mark Harrison
Tom Lane wrote:
Greg Spiegelberg <[EMAIL PROTECTED]> writes:

If the log and database were on the same disk I'd be okay with the
current workaround.  If the ``-'' gave me near the same performance as
turning syslog off I'd be okay with that too.  However, neither of these
are the case so there has to be something else blocking between the two
processes.


You could also consider not using syslog at all: let the postmaster
output to its stderr, and pipe that into a log-rotation program.
I believe some people use Apache's log rotator for this with good
results.
I do this... here's the relevant lines from my startup script:

ROTATE="/inst/apache/bin/rotatelogs $PGLOGS/postgresql 86400"
$PGBIN/pg_ctl start -s -D $PGDATA | $ROTATE &
Following is a patch to rotatelogs that does two things:

- makes a symbolic link 'foo.current' that points to the
  current output file.
- gzips the rotated logfile

If you have gnu tools installed, you can
tail --retry --follow=name foo.current
and it will automatically track the most recent
log file.
HTH,
Mark
--
Mark Harrison
Pixar Animation Studios
*** rotatelogs.c-orig   2004-03-10 10:24:02.0 -0800
--- rotatelogs.c2004-03-10 11:01:55.0 -0800
***
*** 25,30 
--- 25,32 
  int main (int argc, char **argv)
  {
  char buf[BUFSIZE], buf2[MAX_PATH], errbuf[ERRMSGSZ];
+ char linkbuf[MAX_PATH];
+ char oldbuf2[MAX_PATH];
  time_t tLogEnd = 0, tRotation;
  int nLogFD = -1, nLogFDprev = -1, nMessCount = 0, nRead, nWrite;
  int utc_offset = 0;
***
*** 75,80 
--- 77,84 
  setmode(0, O_BINARY);
  #endif
+ sprintf(linkbuf, "%s.current", szLogRoot);
+ sprintf(oldbuf2, "");
  use_strftime = (strstr(szLogRoot, "%") != NULL);
  for (;;) {
  nRead = read(0, buf, sizeof buf);
***
*** 99,104 
--- 103,111 
  sprintf(buf2, "%s.%010d", szLogRoot, (int) tLogStart);
  }
  tLogEnd = tLogStart + tRotation;
+ printf("oldbuf2=%s\n",oldbuf2);
+ printf("buf2=%s\n",buf2);
+ printf("linkbuf=%s\n",linkbuf);
  nLogFD = open(buf2, O_WRONLY | O_CREAT | O_APPEND, 0666);
  if (nLogFD < 0) {
  /* Uh-oh. Failed to open the new log file. Try to clear
***
*** 125,130 
--- 132,146 
  }
  else {
  close(nLogFDprev);
+ /* use: tail --follow=name foo.current */
+ unlink(linkbuf);
+ symlink(buf2,linkbuf);
+ if (strlen(oldbuf2) > 0) {
+ char cmd[MAX_PATH+100];
+ sprintf(cmd, "gzip %s &", oldbuf2);
+ system(cmd);
+ }
+ strcpy(oldbuf2, buf2);
  }
  nMessCount = 0;
  }
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [ADMIN] [PERFORM] Benchmarking postgres on Solaris/Linux

2004-03-25 Thread Mark Kirkwood


Josh Berkus wrote:

Mark,

 

It might be worth considering Apple if you want a 64-bit chip that has a
clock speed comparable to Intel's - the Xserv is similarly priced to Sun
V210 (both dual cpu 1U's).
   

Personally I'd stay *far* away from the XServs until Apple learns to build 
some real server hardware.  The current XServs have internal parts more 
appropriate to a Dell desktop (promise controller, low-speed commodity IDE 
drives) than a server.

If Apple has priced these 1U desktop machines similar to Sun, then I sense 
doom ahead for the Apple Server Division.

 

(thinks...) Point taken - the Xserv is pretty "entry level"...

However, having recently benchmarked a 280R vs a PIII Dell using a 
Promise ide raid controller - and finding the Dell comparable (with 
write cache *disabled*) - I suspect that the Xserv has a pretty good 
chance of outperforming a V210 (it would certainly be interesting to try 
out).

What I think has happened is that over the last few years the "cheap / 
slow" ide stuff has gotten pretty fast - even when you make "write mean 
write".

cheers

Mark

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [ADMIN] [PERFORM] Benchmarking postgres on Solaris/Linux

2004-03-25 Thread Mark Kirkwood
The hardware platform to deploy onto may well influence your choice :

Intel is usually the most cost effective , which means using Linux makes 
sense in that case (anybody measured Pg performance on Solaris/Intel?).

If however, you are going to run a very "big in some sense" database, 
then 64 bit hardware is desirable and you can look at the Sun offerings. 
In this case you can run either Linux or Solaris (some informal 
benchmarks suggest that for small numbers of cpus, Linux is probably 
faster).

It might be worth considering Apple if you want a 64-bit chip that has a 
clock speed comparable to Intel's - the Xserv is similarly priced to Sun 
V210 (both dual cpu 1U's).

Are you free to choose any hardware?

best wishes

Mark

Subbiah, Stalin wrote:

(snipped) what is the preferred OS
for postgres deployment if given an option between linux and solaris.
 



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[ADMIN] code editing

2004-04-05 Thread Mark Bross
What are your recommendations for code editing in PostgreSQL?
I'm a student at Regis University in Denver, CO.
Thanks


Mark Bross
13565 Detroit St.
Thornton, Co. 80241
Email: [EMAIL PROTECTED]
Ph:  303-252-9255
Fax: 303-252-9556


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [ADMIN] [PERFORM] Quad processor options - summary

2004-05-14 Thread Mark Kirkwood
I would recommend trying out several stripe sizes, and making your own 
measurements.

A while ago I was involved in building a data warehouse system (Oracle, 
DB2) and after several file and db benchmark exercises we used 256K 
stripes, as these gave the best overall performance results for both 
systems.

I am not saying "1M is wrong", but I am saying "1M may not be right" :-)
regards
Mark
Bjoern Metzdorf wrote:
1. Get many drives and stripe them into a RAID0 with a stripe width of 
1MB. I am not quite sure if this stripe width is to be controlled at 
the application level (does postgres support this?) or if e.g. the 
"chunk size" of the linux software driver is meant. Normally a chunk 
size of 4KB is recommended, so 1MB sounds fairly large.


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


[ADMIN] no ftp

2004-06-02 Thread Mark Espinoza
To whom it may concern,
I really had no idea who to send my email to, but here it goes.
I can't get to any of your ftp sites... NONE!
I talked to my ISP, patched my modem and router, nada, zilch, nothing.
Seems there's a router problem, except I can't for the life of me figure 
out why I can't get to any of your ftp sites, not even across the globe. 
HELP! Are all the ftp requests handled by www.postgresql.org?  Hmmm,
interesting.

Thank you very much.
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[ADMIN] out of memory error

2004-06-10 Thread Mark Striebeck
Hi,
we are using Postgres with a J2EE application (JBoss) and get 
intermittent "out of memory" errors on the Postgres database. We are 
running on a fairly large Linux server (Dual 3GHz, 2GB Ram) with the 
following parameters:

shared_buffers = 8192
sort_mem = 8192
effective_cache_size = 234881024
random_page_cost = 2
(everything else is default)
The error message in the log is:
Jun 10 17:20:04 cruisecontrol-rhea postgres[6856]: [6-1] ERROR:  53200: 
out of memory
Jun 10 17:20:04 cruisecontrol-rhea postgres[6856]: [6-2] DETAIL:  Failed 
on request of size 208.
Jun 10 17:20:04 cruisecontrol-rhea postgres[6856]: [6-3] LOCATION:  
AllocSetAlloc, aset.c:700

All failures are with the following query (again, it only fails every 
now and then). The query returns only a few results:
STATEMENT:  SELECT
   Tracker_Artifact0.id AS id,
   Tracker_Artifact0.priority AS priority,
   Tracker_Artifact_extension_f1.path AS projectPathString,
   Tracker_Artifact_extension_f1.title AS projectTitle,
   Tracker_Artifact_extension_f0.project_id AS projectId,
   Tracker_Artifact_extension_f0.path AS folderPathString,
   Tracker_Artifact_extension_f0.title AS folderTitle,
   Tracker_Artifact_extension0.folder_id AS folderId,
   Tracker_Artifact_extension0.title AS title,
   Tracker_Artifact_extension0.name AS name,
   Tracker_Artifact0.description AS description,
   Tracker_Artifact_group0.value AS artifactGroup,
   Tracker_Artifact_status0.value AS status,
   Tracker_Artifact_status0.value_class AS statusClass,
   Tracker_Artifact_category0.value AS category,
   Tracker_Artifact_customer0.value AS customer,
   Tracker_Artifact_extension_c0.username AS submittedByUsername,
   Tracker_Artifact_extension_c0.full_name AS submittedByFullname,
   Tracker_Artifact_extension0.date_created AS submittedDate,
   Tracker_Artifact0.close_date AS closeDate,
   Tracker_Artifact_ArtifactAss0.username AS assignedToUsername,
   Tracker_Artifact_ArtifactAss0.full_name AS assignedToFullname,
   Tracker_Artifact_extension0.date_last_modified AS lastModifiedDate,
   Tracker_Artifact0.estimated_hours AS estimatedHours,
   Tracker_Artifact0.actual_hours AS actualHours,
   Tracker_Artifact_extension0.version AS version
FROM
   field_value Tracker_Artifact_group0,
   item Tracker_Artifact_extension0,
   relationship Tracker_Artifact_relation_Ar0,
   sfuser Tracker_Artifact_ArtifactAss0,
   field_value Tracker_Artifact_status0,
   field_value Tracker_Artifact_category0,
   field_value Tracker_Artifact_customer0,
   folder Tracker_Artifact_extension_f0,
   artifact Tracker_Artifact0,
   project Tracker_Artifact_extension_f1,
   sfuser Tracker_Artifact_extension_c0
WHERE
   Tracker_Artifact0.id=Tracker_Artifact_extension0.id
   AND 
Tracker_Artifact_extension0.folder_id=Tracker_Artifact_extension_f0.id
   AND 
Tracker_Artifact_extension_f0.project_id=Tracker_Artifact_extension_f1.id
   AND Tracker_Artifact0.group_fv=Tracker_Artifact_group0.id
   AND Tracker_Artifact0.status_fv=Tracker_Artifact_status0.id
   AND Tracker_Artifact0.category_fv=Tracker_Artifact_category0.id
   AND Tracker_Artifact0.customer_fv=Tracker_Artifact_customer0.id
   AND 
Tracker_Artifact_extension0.created_by_id=Tracker_Artifact_extension_c0.id
   AND Tracker_Artifact_relation_Ar0.is_deleted=false
   AND 
Tracker_Artifact_relation_Ar0.relationship_type_name='ArtifactAssignment'
   AND 
Tracker_Artifact_relation_Ar0.origin_id=Tracker_Artifact_ArtifactAss0.id
   AND Tracker_Artifact0.id=Tracker_Artifact_relation_Ar0.target_id
   AND (Tracker_Artifact_extension_f1.path='projects' OR 
Tracker_Artifact_extension_f1.path LIKE 'projects.%')
   AND Tracker_Artifact_extension0.is_deleted=false
   AND Tracker_Artifact_status0.value_class='Open'
   AND Tracker_Artifact_ArtifactAss0.username='foundr12622313'
ORDER BY
   priority ASC,
   lastModifiedDate DESC

Can anyone see anything dangerous about this query? What's the best way 
to analyze this further?

Thanks
   MarkS
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [ADMIN] [HACKERS] Point in Time Recovery

2004-07-14 Thread Mark Kirkwood
I noticed that compiling with 5_1 patch applied fails due to 
XLOG_archive_dir being removed from xlog.c , but 
src/backend/commands/tablecmds.c still uses it.

I did the following to tablecmds.c :
5408c5408
<   extern char XLOG_archive_dir[];
---
>   extern char *XLogArchiveDest;
5410c5410
<   use_wal = XLOG_archive_dir[0] && !rel->rd_istemp;
---
>   use_wal = XLogArchiveDest[0] && !rel->rd_istemp;
Now I have to see if I have broken it with this change :-)
regards
Mark
Simon Riggs wrote:
On Wed, 2004-07-14 at 16:55, [EMAIL PROTECTED] wrote:
 

On 14 Jul, Simon Riggs wrote:
   

PITR Patch v5_1 just posted has Point in Time Recovery working
Still some rough edgesbut we really need some testers now to give
this a try and let me know what you think.
Klaus Naumann and Mark Wong are the only [non-committers] to have tried
to run the code (and let me know about it), so please have a look at
[PATCHES] and try it out.
 

 

I just tried applying the v5_1 patch against the cvs tip today and got a
couple of rejections.  I'll copy the patch output here.  Let me know if
you want to see the reject files or anything else:
   

I'm on it. Sorry 'bout that all - midnight fingers.
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html
 

---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [ADMIN] [HACKERS] Point in Time Recovery

2004-07-21 Thread Mark Kirkwood
Here is one for the 'idiot proof' category:
1) initdb and set archive_command
2) shutdown
3) do a backup
4) startup and run some transactions
5) shutdown and remove PGDATA
6) restore backup
7) startup
Obviously this does not work as the backup is performed with the 
database shutdown.

This got me wondering for 2 reasons:
1) Some alternative database servers *require* a procedure like this to 
enable their version of PITR - so the potential foot-gun thing is there.

2) Is it possible to make the recovery kick in even though pg_control 
says the database state is shutdown?

Simon Riggs wrote:
I was hoping some fiendish plans would be presented to me...
But please start with "this feels like typical usage" and we'll go from
there...the important thing is to try the first one.
I've not done power off tests, yet. They need to be done just to
check...actually you don't need to do this to test PITR...
We need exhaustive tests of...
- power off
- scp and cross network copies
- all the permuted recovery options
- archive_mode = off (i.e. current behaviour)
- deliberately incorrectly set options (idiot-proof testing)
 

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [ADMIN] [HACKERS] Point in Time Recovery

2004-07-21 Thread Mark Kirkwood
Well that is interesting :-)
Here is what I am doing on the removal front (I am keeping pg_xlog *now*):
$ cd $PGDATA
$ pg_ctl stop
$ ls|grep -v pg_xlog|xargs rm -rf
The contents of the archive directory just before recovery starts:
$ ls -l $PGDATA/../7.5-archive
total 49212
-rw---1 postgres postgres 16777216 Jul 22 14:59 
0001
-rw---1 postgres postgres 16777216 Jul 22 14:59 
00010001
-rw---1 postgres postgres 16777216 Jul 22 14:59 
00010002

But here is recovery startup log:
LOG:  database system was shut down at 2004-07-22 14:58:57 NZST
LOG:  starting archive recovery
LOG:  restore_command = "cp /data1/pgdata/7.5-archive/%f %p"
cp: cannot stat `/data1/pgdata/7.5-archive/0001.history': No such 
file or directory
LOG:  restored log file "0001" from archive
LOG:  checkpoint record is at 0/A4D3E8
LOG:  redo record is at 0/A4D3E8; undo record is at 0/0; shutdown TRUE
LOG:  next transaction ID: 496; next OID: 17229
LOG:  archive recovery complete
LOG:  database system is ready

regards
Mark
Tom Lane wrote:
Huh?  It works fine.
The bit you may be missing is that if you blow away $PGDATA including
pg_xlog/, you won't be able to recover past whatever you have in your WAL
archive area.  The archive is certainly not going to include the current
partially-filled WAL segment, and it might be missing a few earlier
segments if the archival process isn't speedy.  So you need to keep
those recent segments in pg_xlog/ if you want to recover to current time
or near-current time.
 

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [ADMIN] [HACKERS] Point in Time Recovery

2004-07-22 Thread Mark Kirkwood
Excellent - Just updated and it is all good!
This change makes the whole "how do I do my backup" business nice and 
basic - which is the right way IMHO.

regards
Mark
Tom Lane wrote:
Mark Kirkwood <[EMAIL PROTECTED]> writes:
 

2) Is it possible to make the recovery kick in even though pg_control 
says the database state is shutdown?
   

Yeah, I think you are right: presence of recovery.conf should force a
WAL scan even if pg_control claims it's shut down.  Fix committed.
			regards, tom lane
 

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [ADMIN] [HACKERS] Point in Time Recovery

2004-07-22 Thread Mark Kirkwood
I have tested the "cold" backup - and retested my previous scenarios 
using "hot" backup (just to be sure).  They all work AFAICS!

cheers
Mark 

Simon Riggs wrote:
On Thu, 2004-07-22 at 21:19, Tom Lane wrote:
 

Mark Kirkwood <[EMAIL PROTECTED]> writes:
   

2) Is it possible to make the recovery kick in even though pg_control 
says the database state is shutdown?
 

Yeah, I think you are right: presence of recovery.conf should force a
WAL scan even if pg_control claims it's shut down.  Fix committed.
   

This *should* be possible but I haven't tested it.
There is a code path on secondary checkpoints that indicates that crash
recovery can occur even when the database was shutdown, since the code
forces recovery whether it was or not. On that basis, this may work, but
is yet untested. I didn't mention this because it might interfere with
getting hot backup to work...
Best Regards, Simon Riggs
 

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [ADMIN] Connecting Postgres using other network

2004-08-11 Thread Taber, Mark








Dumb question, but are you sure you want
port 6432 instead of 5432?  Or was this a typo?

 

Cheers,

 



Mark Taber
State of California
Department of Finance
Infrastructure & Architecture Unit
916.323.3104 x2945











From: Mohammad Tanvir Huda [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, August 11, 2004 12:40 PM
To: [EMAIL PROTECTED]
Subject: [ADMIN] Connecting Postgres using other network



 



Hello Everyone,

I have installed a Postgres server in a network and I am trying to use the
database from Java code on another network.

I have set tcp_ip=true in the postgresql.conf file.

I have also added the following line to pg_hba.conf:

host    all    all    127.0.0.0    255.255.255.255    trust

Then I start the server on port 6432.

After that I want to access the application using the following Java code.
import java.sql.*;   // All we need for JDBC
import java.text.*;
import java.io.*;

public class HelloPost
{
  Connection db;          // A connection to the database
  Statement sql;          // Our statement to run queries with
  DatabaseMetaData dbmd;  // This is basically info the driver delivers
                          // about the DB it just connected to. I use
                          // it to get the DB version to confirm the
                          // connection in this example.

  public HelloPost()
    throws ClassNotFoundException, SQLException
  {
    String database = "test";
    String username = "abc";
    String password = "xyz";

    try {
      Class.forName("org.postgresql.Driver");  // load the driver
      // the database name follows a slash: jdbc:postgresql://host:port/database
      db = DriverManager.getConnection("jdbc:postgresql://abc.csse.uts.edu:6432/test",
                                       username, password);  // connect to the db
      dbmd = db.getMetaData();  // get MetaData to confirm connection
      System.out.println("Connection to " + dbmd.getDatabaseProductName() + " " +
                         dbmd.getDatabaseProductVersion() + " successful.\n");
      sql = db.createStatement();  // create a statement that we can use later
    } catch (ClassNotFoundException e) {
      System.err.println("Driver not found: " + e + "\n" + e.getMessage());
    }

    if (db != null) {
      db.close();  // guard against the driver-not-found path
    }
  }

  public static void main (String args[])
  {
    try
    {
      HelloPost demo = new HelloPost();
    }
    catch (Exception ex)
    {
      System.out.println("***Exception:\n" + ex);
      ex.printStackTrace();
    }
  }
}





But this gives me the following error:

Exception: org.postgresql.util.PSQLException: Connection refused.
Check that the hostname and port are correct and that the postmaster is
accepting TCP/IP connections.

Can anyone please tell me where the error is. What should I do?

regards

Shayer





 

















[ADMIN] Kerberos, Apache2, mod_auth_kerb, PHP, and PostgreSQL in harmony! How?

2004-08-17 Thread Mark Gibson
Hi,
   I've been trying to Kerberize our Apache and PostgreSQL servers for
our company's web applications.
Goal: To connect from a PHP web app to a PostgreSQL database
using the users credentials, so all authorization is managed via
privileges within the database.
Our IT dept has recently installed Windows 2003 Server to provide
authentication & directories via Kerberos and LDAP.
I've managed to configure Apache (2.0.49) to authenticate users using
mod_auth_kerb (5.0-rc6), and also PostgreSQL (7.4.3) to use Kerberos.
(Linux hosts use MIT KerberosV5 1.3.3 client libs, KDC is Windows 2003)
mod_auth_kerb is configured with:
KrbSaveCredentials on
So in PHP (4.3.8) we end up with the variables:
$_SERVER['REMOTE_USER'](eg: '[EMAIL PROTECTED]')
$_SERVER['KRB5CCNAME'] (eg: 'FILE:/tmp/krb5cc_apache_tVFJCd')
Even HTTP Negotiate works with Firefox/Linux (but not IE/XP yet!) :)
But this is where I get stuck.
How do I use the supplied credentials file to connect to PostgreSQL?
In the PostgreSQL docs it says:
(http://www.postgresql.org/docs/7.4/interactive/auth-methods.html#KERBEROS-AUTH)
> If you use mod_auth_kerb from http://modauthkerb.sf.net and mod_perl
> on your Apache web server, you can use AuthType
> KerberosV5SaveCredentials with a mod_perl script. This gives secure
> database access over the web, no extra passwords required.
I'm assuming this is out of date, or has changed with mod_auth_kerb 5.0,
and that the KrbSaveCredentials directive does this job instead.
Are there any examples of this mod_perl script?
Can the alleged mod_perl method be adapted to PHP?
Has anyone got this to work?
What are the alternatives to my goal stated above?
Cheers
--
Mark Gibson 
Web Developer & Database Admin
Cromwell Tools Ltd.
Leicester, England.
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [ADMIN] [PHP] Kerberos, Apache2, mod_auth_kerb, PHP, and PostgreSQL in

2004-08-17 Thread Mark Gibson
Mark Gibson wrote:
Hi,
   I've been trying to Kerberize our Apache and PostgreSQL servers for
our company's web applications.
Goal: To connect from a PHP web app to a PostgreSQL database
using the users credentials, so all authorization is managed via
privileges within the database.
Our IT dept has recently installed Windows 2003 Server to provide
authentication & directories via Kerberos and LDAP.
I've managed to configure Apache (2.0.49) to authenticate users using
mod_auth_kerb (5.0-rc6), and also PostgreSQL (7.4.3) to use Kerberos.
(Linux hosts use MIT KerberosV5 1.3.3 client libs, KDC is Windows 2003)
mod_auth_kerb is configured with:
KrbSaveCredentials on
So in PHP (4.3.8) we end up with the variables:
$_SERVER['REMOTE_USER'](eg: '[EMAIL PROTECTED]')
$_SERVER['KRB5CCNAME'] (eg: 'FILE:/tmp/krb5cc_apache_tVFJCd')
Even HTTP Negotiate works with Firefox/Linux (but not IE/XP yet!) :)
But this is where I get stuck.
How do I use the supplied credentials file to connect to PostgreSQL?
In the PostgreSQL docs it says:
(http://www.postgresql.org/docs/7.4/interactive/auth-methods.html#KERBEROS-AUTH) 

 > If you use mod_auth_kerb from http://modauthkerb.sf.net and mod_perl
 > on your Apache web server, you can use AuthType
 > KerberosV5SaveCredentials with a mod_perl script. This gives secure
 > database access over the web, no extra passwords required.
I'm assuming this is out of date, or has changed with mod_auth_kerb 5.0,
and that the KrbSaveCredentials directive does this job instead.
I'VE DONE IT! THE HOLY GRAIL OF WEB/DB APPS! :)
All it takes is this line in your PHP script:
 putenv("KRB5CCNAME={$_SERVER['KRB5CCNAME']}");
Then pg_connect works :)
--
Mark Gibson 
Web Developer & Database Admin
Cromwell Tools Ltd.
Leicester, England.
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [ADMIN] [PHP] Kerberos, Apache2, mod_auth_kerb, PHP, and PostgreSQL in

2004-08-17 Thread Mark Gibson
Mark Gibson wrote:
Mark Gibson wrote:
Hi,
   I've been trying to Kerberize our Apache and PostgreSQL servers for
our company's web applications.
[snip]
I'VE DONE IT! THE HOLY GRAIL OF WEB/DB APPS! :)
All it takes is this line in your PHP script:
 putenv("KRB5CCNAME={$_SERVER['KRB5CCNAME']}");
Then pg_connect works :)
Or, not, as the case may be.
Well, sometimes it connects and sometimes it doesn't,
PostgreSQL logs the following error:
Kerberos recvauth returned error 103
Any ideas???
--
Mark Gibson 
Web Developer & Database Admin
Cromwell Tools Ltd.
Leicester, England.
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[ADMIN] securing pg_catalog

2004-09-16 Thread Mark . Miller
I want to create a user that has read only access to the tables in a
schema.  I was able to configure that fine, but that user can see all of
the information in the pg_catalog schema.  I do not want this user to be
able to see anything other than the one schema, I tried revoking rights to
schema pg_catalog but the user could still see data from the pg_catalog.
Is there a way to prevent access to pg_catalog schema for a specified user?

TIA,
Mark
   
Mark Miller
Tavve Software Co.
One Copley Parkway, Suite 480
Morrisville, NC  27560
www.tavve.com
[EMAIL PROTECTED]
+1 919-654-1220
fax +1 919-380-7147





---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[ADMIN] postmaster -i option...

2004-10-18 Thread Taber, Mark
I restarted our dev postgres instance today with the -i option, and now,
using 'ps', I am not able to see connections to the database any longer.
The production instance (running only with the -D option) shows the
connections.  Am I going crazy, missing something obvious?  I'm using the
command 'ps -ax | grep postgres'

Thanks,

Mark Taber

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[ADMIN] What is this phantom user 128?

2004-11-17 Thread Taber, Mark





While using pg_dump and pg_restore (or attempting to restore, more
precisely), the process failed because of a missing user, user '128'.  I
never created such a user, and when going to Privileges, such a user does
not exist in the list.  However, while going out and checking some of the
tables, this user shows up as a user with privileges on them (however, not
on all tables).  How do I get rid of this user?  And how do I ensure that
it doesn't get created again?

Thanks in advance.


Mark Taber
State of California
Department of Finance
Infrastructure & Architecture


916.323.3104 x 2945
[EMAIL PROTECTED]
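
For the archives: the usual cause is a user dropped while tables still
carried ACL entries for its numeric sysid.  A hedged sketch of one cleanup,
using the 7.x/8.0-era SYSID syntax and a hypothetical table name:

CREATE USER phantom_fix WITH SYSID 128;     -- re-create the orphaned id
REVOKE ALL ON some_table FROM phantom_fix; -- repeat per affected table
DROP USER phantom_fix;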





[ADMIN] PITR WAL backups

2005-02-02 Thread Mark Reidenbach
The PITR documentation recommends that a user's archive command should 
not overwrite an existing wal archive file, but it appears you need at 
least the wal archive file that matches the .backup marker for the 
recovery process to succeed.  This makes sense because the recovery 
process needs to replay what happened during the time it took to make 
the backup.  The problem I'm having is that you need to back up this wal 
file before it's been archived to have a backup which is consistent, but 
the documentation also states you need to make sure not to overwrite 
existing files in your archive command.

Is the solution to copy this wal file to a directory other than the 
archive directory and make sure it's part of the base backup?  If you 
run a cron job to save the current wal file every minute or so, should 
this file also be copied to a different directory than the archive 
directory to prevent the check for existing files from failing?  Is this 
how other people have their systems set up?

If this is correct, maybe the documentation should mention that a separate 
directory should be used for the wal file that matches the .backup file 
and for the files copied by cron jobs, so that checks for existing files 
won't find the partial wal files and fail to overwrite them.  It might 
also state that you need to copy the wal file matching the .backup marker 
to the archive directory if it wasn't archived through the system's archive 
command, and that the file copied by the cron job also needs to be copied 
to your archive directory before running recovery.

Is any of this correct or am I messing up the procedure entirely?
Thanks,
Mark Reidenbach
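
For reference, the shape of a non-overwriting archive_command (a sketch in
the same style as the restore_command shown elsewhere in these threads; the
archive path is hypothetical):

archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
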
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[ADMIN] Cannot get postgres started on Fedora core 3

2005-03-16 Thread Mark Travis
System: 1.5 GB memory; 160 GB hard drive with 140 GB free (plenty of space
in the database directories); 1.5 GHz AMD Athlon processor. 
- Fedora Core 3 with all fixes applied
- Postgres 7.4 as supplied by up2date (RedHat network)

I've been trying to get postgres to start on my machine for a few days (on
and off) and have not been successful in understanding why it will not
start.

It had been running fine for several weeks as the database backend to the
OpenGroupware server. One day, it would not start (service postgres start
[FAILED] ) I do not know the cause of the failure, it could have been a
power failure that caused an unclean shutdown, it could have been too big of
a file going into the team server (OpenGroupware), or ??

I've tried to shutdown and restart the Linux server several times. Postgres
fails to start in the startup. [FAILED]

I've tried every way possible to start postmaster and postgres to no avail,
including using pg_ctl.

I can't find any logs to tell me why it is failing, and the only book I
have, "Practical PostgreSQL", is of no help.

The conf files are standard with only minor modifications to allow the tcp
port, etc.

Please point me in the right direction! Thanks!

Mark 




---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [ADMIN] Cannot get postgres started on Fedora core 3

2005-03-17 Thread Mark Travis
I've placed several "echo" statements into /etc/rc.d/init.d/postgresql to
see what branches the scripts are executing and what the variables are.

I've narrowed it down to the final call
$SU -l postgres -c "$PGENGINE/postmaster -p 
(snipped the rest of the line from this post because it's the standard line
in the file)

All of the env variables are correctly set in this statement, but no pid is
generated.

If I manually enter the command on the command line 

postgres -c /usr/bin/postmaster -p 5432 -D /var/lib/pgsql 

I get nothing, not even a warning that I shouldn't start postgres as
root.

If I just type "postgres" on the command line.... nothing. No warnings, no
nothing.

Mark 
> I've been trying to get postgres to start on my machine for a few days (on
> and off) and have not been successful in understanding why it will not
> start.

>The pre-8.0 RPMs have a bad habit of sending the postmaster log to
>/dev/null :-(.  You could edit /etc/rc.d/init.d/postgresql to point
>the log (i.e. postmaster's stderr) someplace more useful, or change the
>postgresql.conf file to log to syslog instead of stderr.  Once you have
>hold of the postmaster's dying words, let us know ...

>   regards, tom lane



---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [ADMIN] Cannot get postgres started on Fedora core 3

2005-03-17 Thread Mark Travis
Tom, I owe you bigtime. That was exactly the problem. I would remove selinux
from my machine if I wasn't worried that it wasn't actually protecting me
from the outside world. I had problems installing OpenGroupware as well with
selinux, but I thought I had them resolved. I bet it got overwritten on an
update cycle.

I guess I better get educated on selinux so I can turn it on and off at will
without it driving the rest of my system insane.

Is there a config file I can change to keep that from happening in the
future?

Thanks again! 

Mark 

> If I manually enter the command on the command line 
> postgres -c /usr/bin/postmaster -p 5432 -D /var/lib/pgsql 
> I get nothing not even a warning that I shouldn't start postgres as
> root.
> If I just type "postgres" on the command line nothing. No warnings, no
> nothing.

I'll bet you have SELinux enforcement on.  One of the less fun
properties of that thing (at least in Red Hat's current "targeted"
policy) is that it forbids daemons from writing on /dev/tty.  And
the postgres executable is classified as a daemon even when you
invoke it from the command line.  Net result: no error messages.

Does it work any better after "sudo /usr/sbin/setenforce 0" ?

regards, tom lane





[ADMIN] REINDEX question

2007-06-22 Thread Mark Steben
Good morning,

 

We are at Postgres 7.4.5.  I have encountered inaccuracies after reindexing,
related to the statistics posted back to the PG_CLASS table.  For instance:

 1.  We have a 'queues' table with 6 indexes.  We are at 110,000 rows in
     this table.  When I reindex, the RELPAGES count decreases as expected,
     but the RELTUPLES count increases to about 132,000 in all indexes.  I
     check and there are still only 110,000 rows in the table.

 2.  We have a table that is quite volatile in its numbers, ranging from 100
     to 6 million rows.  Last week we were at 50,000 rows, but
     PG_CLASS.RELTUPLES showed 6 million for the indexes.  I reindexed and,
     again, the RELPAGES count went down, but the RELTUPLES number went down
     to just 3 million.  A count after the reindex gave the actual row count
     at 52,000.

 

Does anybody have any insight into why this is happening and what I can do
in the short term?  I know we are at a back-leveled version, so perhaps the
long-term answer is to upgrade.
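
(A quick way to compare the stored statistics with the real row count while
diagnosing; 'queues' is the table named above, and dbname is a placeholder.
ANALYZE recomputes reltuples/relpages from a fresh sample.)

  psql -d dbname -c "SELECT relname, relpages, reltuples FROM pg_class
                     WHERE relname = 'queues'"
  psql -d dbname -c "SELECT count(*) FROM queues"
  psql -d dbname -c "ANALYZE queues"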

 

Thank you,

Mark Steben 

Senior DBA - AutoRevenue

 


 



[ADMIN] Index on a COALESCE function

2007-07-16 Thread Mark Steben
Hi, I'm trying to index on a COALESCE function.  The index definition is:

  CREATE INDEX emailrcpts_cid_coalesce_dtofpurch ON emailrcpts
      USING btree (cid, (COALESCE(date_of_purchase, '1900-01-01'::date)));

 

A portion of the query I am trying to optimize follows:

 

 select 45198705, false, id, next_avail_senddate(id, date(send_date)) from
   (select id,
      timeclause_combine(
        date(estmileage_timeclause(service_date, mileage, last_service_date,
             last_mileage, date_of_purchase, year_purchased, 10750, cid,
             serindex_date, id)),
        date(date_larger(date(coalesce(date_of_purchase, '1900-01-01')
             + timespan('45 days')),
        date(service_date + timespan('44 days',
        date(null), date(create_date), 9, false) as send_date, last_send


I can't seem to get this index to work on the function, just on the cid
column.  Any ideas would be appreciated.
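
(One thing worth checking: the planner only considers an expression index
when a query's qualification repeats the indexed expression exactly, cast
included; an expression buried inside other function calls, as above, won't
match it.  A sketch -- dbname and the WHERE values are invented:)

  psql -d dbname -c "EXPLAIN
      SELECT id FROM emailrcpts
      WHERE cid = 12345
        AND COALESCE(date_of_purchase, '1900-01-01'::date) >= '2007-01-01'"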

 

Thank you,

 

Mark Steben

Senior Database Administrator
@utoRevenue™, A Dominion Enterprises Company

 



[ADMIN] row level locking

2007-07-20 Thread Mark Steben
Hi listers,

I have a table that is accessed by only one transaction.  It holds up to
1,500 rows each time the transaction processes it.  As soon as the
transaction ends, it waits 10 seconds and starts again with another batch of
at most 1,500 rows.

 

My question is this: I believe a RowExclusive-level lock is acquired 1,500
times during each transaction.  Is this true?  Is there a way to disable
RowExclusive locks and reduce overhead, since only this transaction accesses
the table?  I know there are ways in other DBMSs to specify locking levels,
down to the page or table level.

We are currently at version 7.4.5.
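
(For what it's worth, the table-level lock actually taken can be watched
from a second session while the transaction runs; a sketch, with 'mytable'
standing in for the real table name:)

  psql -d dbname -c "SELECT relation::regclass AS rel, mode, granted
                     FROM pg_locks
                     WHERE relation = 'mytable'::regclass"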

 

Thank you for your time,

 

 

Mark Steben

Senior Database Administrator
@utoRevenue™, A Dominion Enterprises Company

 



[ADMIN] update within trigger function

2007-07-23 Thread Mark Steben
Hello listers,

 

We have 'table a', which has an 'after insert or update' trigger function
defined on it that inserts into 'table b' for every update or insert on
table a.  The challenge I have now is to update a new date column on
'table a' with a value coming out of another in-house function called within
the trigger function.  But I cannot use an UPDATE statement within the
update trigger function, for fear of an infinite loop.

All I need to do is populate the date field of the row that is currently
being processed by the trigger function.

Is there a command within plpgsql that can help me with this?
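
(For the archives: the usual pattern is a BEFORE trigger that assigns to
NEW, which changes the row in flight without any recursive UPDATE.  A
minimal sketch with invented table/column/function names; 7.4 still needs
the quoted function body:)

  psql -d dbname <<'SQL'
  CREATE OR REPLACE FUNCTION set_date_col() RETURNS trigger AS '
  BEGIN
      -- my_inhouse_fn stands in for the real in-house function
      NEW.new_date := my_inhouse_fn(NEW.id);
      RETURN NEW;   -- the modified row is what actually gets stored
  END;
  ' LANGUAGE plpgsql;

  CREATE TRIGGER table_a_set_date
      BEFORE INSERT OR UPDATE ON table_a
      FOR EACH ROW EXECUTE PROCEDURE set_date_col();
  SQL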

 

This seems like an elementary question, but I am a very elementary user of

Postgres functions.  Thank you for your time,

 

Mark Steben

Senior Database Administrator
@utoRevenue™, A Dominion Enterprises Company

 



[ADMIN]

2007-08-09 Thread Mark Steben
Good afternoon,

 

We have a situation where we call our central queue processor from a remote
machine using a Perl script.

We can process up to 1,500 'type 1' records followed by 5 'type 2' records.
We process any number successfully before we get the following error:

  ERROR:  could not send data to client: Broken pipe

 

There is no pattern or record that we see causing this error.

 

When we run this manually on the local machine, we do not get this error.
The call invokes a Postgres function.

 

I would appreciate any insight into this matter.

 

Thank  you,

 

 

Mark Steben

Senior Database Administrator
@utoRevenue™, A Dominion Enterprises Company

 



[ADMIN] Mass updates on a large table

2007-08-10 Thread Mark Steben
Good afternoon,

I am attempting an update on two new date field columns on a 17 million row
table.  Every row gets updated.

The update statement is a simple one:

  UPDATE EMAILRCPTS SET ID = ID

 And the update of the new date fields themselves occurs as the result of a
before trigger.

 

The update took 3 days, 10 hours to complete on the testing box.  I have
already adjusted the CHECKPOINT_SEGMENTS parameter up by 3X to minimize the
impact of checkpoints.  The SHARED_BUFFERS parameter has been bumped up to
14 on a 20-gig RAM box.

There are about 9 indexes on this table.  None of them reference the new
date fields, so since there are no inserts I don't think they would have an
impact on the update (I've been wrong before, though).

 

Would an update statement referencing the date fields work faster than the
trigger?  Do you have any other suggestions to speed this up?

We simply cannot afford this table to be down for 3+ days during a
production update.  The production box is a 32-gig RAM box.

 

We are at Postgres 7.4.5. 
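
(A single UPDATE that computes the new columns directly avoids firing the
trigger's machinery 17 million times; a sketch with invented column and
function names, assuming the before trigger is dropped or disabled first so
it doesn't fire anyway:)

  psql -d dbname -c "UPDATE emailrcpts
                     SET date_col1 = compute_date1(id),
                         date_col2 = compute_date2(id)"
  # on 7.4 every updated row leaves a dead tuple behind, so plan a VACUUM
  # afterwards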

 

Thanks for your time and brainpower

 

Mark Steben

Senior Database Administrator
@utoRevenue™, A Dominion Enterprises Company

 



[ADMIN] postgres 8.2.4 new installation - trying to get autovacuum to work

2007-10-11 Thread Mark Steben
Hi, I'm new to 8.2 and trying to get our testing server to autovacuum.

I have a table that has roughly 116,000 rows, and I have applied 40,000
updates to it.

 

The applicable postgresql.conf parameters are as follows:

 

 autovacuum                      | on
 autovacuum_analyze_scale_factor | 0.1
 autovacuum_analyze_threshold    | 250
 autovacuum_freeze_max_age       | 2
 autovacuum_naptime              | 60
 autovacuum_vacuum_cost_delay    | -1
 autovacuum_vacuum_cost_limit    | -1
 autovacuum_vacuum_scale_factor  | 0.2
 autovacuum_vacuum_threshold     | 500

 stats_row_level                 | on
 stats_start_collector           | on

 

Given the scale factor and the threshold, I expected autovacuum to kick in
at about the 23,000th update.
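
(The 8.2 trigger condition, with the numbers above plugged in, plus a query
to see whether autovacuum ever touched the table; 'mytable' and dbname are
placeholders:)

  # a table is vacuumed once its dead tuples exceed
  #   autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples
  # here: 500 + 0.2 * 116000 = 23700 dead tuples
  psql -d dbname -c "SELECT relname, last_vacuum, last_autovacuum
                     FROM pg_stat_user_tables
                     WHERE relname = 'mytable'"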

 

I am looking at the pg_log files.  Should I be looking elsewhere?
log_min_duration_statement is currently set to 2000 milliseconds; the
updates show in the logs, but the vacuum does not.

 

Any help would be appreciated.  Thank  you,

 

Mark Steben

Senior Database Administrator
@utoRevenue™, A Dominion Enterprises Company

 



[ADMIN] BGWRITER in Postgres 8.2

2007-10-18 Thread Mark Steben
I would like to test the new components of 8.2 as we upgrade from 7.4 next
month.

Is there any way to verify that the background writer is operational?
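
(One low-tech check, sketched from memory of 8.2-era builds -- the exact
process title can vary by platform, so confirm on your own system: the
background writer runs as a separate child of the postmaster and shows up
in the process list.)

  ps auxww | grep '[p]ostgres'
  # among the postmaster's children you should see a line like:
  #   postgres: writer process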

Thank you,

 

Mark Steben

Senior Database Administrator
@utoRevenue™, A Dominion Enterprises Company

 



[ADMIN] bgwriter update question

2007-10-22 Thread Mark Steben
I'm running on 7.4.5 (pre bgwriter) and moving to 8.2 in 3 weeks. I turn
fsync off on my testing server. When I update a table I expect the file in
the base directory to remain the same size until the next syncpoint when it
would then update. But it increases immediately. I thought updates remain in
memory until syncpoint in pre-8.0. Am I missing something?

 

Thanks for your time

 

Mark Steben

Senior Database Administrator
@utoRevenue™, A Dominion Enterprises Company

 



Re: [ADMIN] bgwriter update question

2007-10-22 Thread Mark Steben
Thanks Tom, every piece of new information adds to my education.
I'm going to put to bed this effort of trying to verify BGWRITER, 
knowing that it works in 8.2 and move on.

Regards,
Mark Steben

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Monday, October 22, 2007 2:18 PM
To: Mark Steben
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] bgwriter update question 

"Mark Steben" <[EMAIL PROTECTED]> writes:
> I'm running on 7.4.5 (pre bgwriter) and moving to 8.2 in 3 weeks. I turn
> fsync off on my testing server. When I update a table I expect the file in
> the base directory to remain the same size until the next syncpoint when it
> would then update. But it increases immediately. I thought updates remain in
> memory until syncpoint in pre-8.0. Am I missing something?

The updates don't go to disk immediately (they didn't in 7.4 either).
But we always allocate disk space immediately when needed, to make sure
it's available.  Otherwise we could face the embarrassing situation of
having noplace to put rows inserted by an already-reported-committed
transaction.

regards, tom lane




[ADMIN] Maintenance_work_mem question

2007-11-07 Thread Mark Steben
We are upgrading from Postgres 7.4.5 to 8.2.4 in a couple weeks.  I am
debating what I should set my maintenance_work_mem and shared_buffers
parameters to.

Here are some of our current resources:

 Total memory on machine: 32GB

 Total disks - 5 Raid 10 (so five primary,  five mirrored)

 Total database size: 60GB

 Total number of tables:   300

 Total number of indexes: 500

 Max connections: 200

 Average connections at one time: 10 - 15

 

We currently set our 7.4.5 vacuum_mem to 1 GB.  I understand we don't want
that any higher, as we risk overflowing our 32-bit configuration.  But since
we will be running autovacuum for the first time, I assume it will be
pulling memory according to our maintenance_work_mem value perpetually
during the day, so perhaps I should set it smaller than 1 gig?

 

Also, our 7.4.5 shared_buffers value is currently at 150,000 buffers.  I
know this is very large for 7.4.5, but it may be more appropriate for 8.2.4.
Any thoughts?
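
(A bit of arithmetic on the numbers above, for scale; dbname is a
placeholder:)

  # 150000 buffers * 8 kB per buffer = 1,200,000 kB, roughly 1.2 GB --
  # under 4% of the 32 GB box
  psql -d dbname -c "SHOW shared_buffers"
  psql -d dbname -c "SHOW maintenance_work_mem"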

 

Thank you,

 

Mark Steben

Senior Database Administrator
@utoRevenue™, A Dominion Enterprises Company

 



[ADMIN] where can I find doc for pg_buffercache?

2007-11-13 Thread Mark Steben
Quick question from the newbie.  I read that pg_buffercache can be used to
determine how Postgres is using memory, and that it is a 'contrib' module.
Where can I find this 'contrib' -- is it a directory?  We are running Red
Hat Linux, and I'm testing Postgres 8.2.4 (soon to be 8.2.5, per Scott M.).
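
(In the 8.2 era, contrib modules ship as SQL scripts you run against a
database.  On a Red Hat-style install they typically come from the
postgresql-contrib package; the path below is an assumption, so check your
own layout:)

  psql -d dbname -f /usr/share/pgsql/contrib/pg_buffercache.sql
  psql -d dbname -c "SELECT count(*) AS buffers_in_use
                     FROM pg_buffercache
                     WHERE relfilenode IS NOT NULL"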

 

Thanks, 

 

Mark Steben

Senior Database Administrator
@utoRevenue™, A Dominion Enterprises Company

 



[ADMIN] Looking for an E/R tool for Postgres

2007-12-17 Thread Mark Steben
Does anybody have suggestions for an entity-relationship (E/R) diagram tool
that works with Postgres?

 

Thank you,

 

Mark Steben

Senior Database Administrator
@utoRevenue™, A Dominion Enterprises Company

 



[ADMIN] reconfiguring diskspace while upgrading to 8.2.5

2007-12-28 Thread Mark Steben
Admin team,

 

We are upgrading from Postgres 7.4.5 to 8.2.5 in mid-January, and we are
adding new disks as we go.  I am investigating using the new tablespace
facility to move our biggest and most heavily accessed table to the new
disks.  Here are the statistics.  We are running RAID 10.

  Current database size:      63 GB
  Heavily accessed table to move:
    Table:                    7.4 GB
    2 indexes:                3 GB apiece
    2 other indexes:          2.5 GB apiece

 

Current database disk configuration:
    TOTAL space:     404 GB
    TOTAL spindles:  3
    TOTAL mirrors:   3

New additional disk configuration:
    TOTAL space:     290 GB
    TOTAL spindles:  2
    TOTAL mirrors:   2

 

The choices we see are:

  1. Add the two new spindles to the other three, making one huge logical
     partition, with all 350+ tables and 400+ indexes continuing to reside
     there.

  2. Keep the two new spindles separate and dedicate the heavy-access table
     and its 4 indexes to them.

  3. Keep the two new spindles separate and dedicate the heavy-access table
     to them, but keep the 4 indexes on the old partition.

 

I know that maintaining almost 700 GB of total disk space for a 63 GB
database looks like disk-space overkill, but we do expect massive growth
over the next 2-3 years.

 

Any thoughts / comments would be appreciated.
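
(For concreteness, the 8.x mechanics behind options 2 and 3; the names and
mount point are invented, and the location must be an empty directory owned
by the postgres user:)

  psql -d dbname -c "CREATE TABLESPACE newdisks LOCATION '/mnt/newraid/pgdata'"
  psql -d dbname -c "ALTER TABLE big_table SET TABLESPACE newdisks"
  psql -d dbname -c "ALTER INDEX big_table_idx1 SET TABLESPACE newdisks"
  # each ALTER physically copies the relation and holds an exclusive lock
  # on it for the duration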

 

Also, are there tools out there that monitor disk I/O and disk speed?

 

Thanks for your time,  

 

Mark Steben

Senior Database Administrator
@utoRevenue™, A Dominion Enterprises Company

 



Re: [ADMIN] reconfiguring diskspace while upgrading to 8.2.5

2008-01-02 Thread Mark Steben
Postgres admin team,

 

I posed this question last week -- probably a bad week to ask, as many
people were off for the holidays, so response was light.  Thanks to Pete
Eisentraut for his feedback.

 

I'd like to pose the question again, as our upgrade timetable is getting
close.  Any and all feedback would be appreciated.

 

Thank you,

Mark Steben 

 


 



[ADMIN] no entry on pg_database for a live database

2008-01-22 Thread Mark Steben
I have a situation I thought was impossible.

 

I have a live Postgres database with 4 application tables.  I can access it
through psql, but there is no record of this database in the PG_DATABASE
catalog table, and I therefore cannot perform a pg_dump backup because it
errors with, you guessed it, 'No entry on pg_database'.  We haven't done
backups, as these are temporary work tables, but I've primed one of these
tables with permanent data for performance.

I'm running version 7.4.5.  Do I have to do a drop-and-recreate, or is there
some other way around this?

 

Any ideas would be appreciated

 

Mark Steben

Senior Database Administrator
@utoRevenue™, A Dominion Enterprises Company

 



Re: [ADMIN] no entry on pg_database for a live database

2008-01-23 Thread Mark Steben
Thanks Tom - that did the trick
We will be upgrading to 8.2 within the next couple weeks.

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, January 22, 2008 12:20 PM
To: Mark Steben
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] no entry on pg_database for a live database 

"Mark Steben" <[EMAIL PROTECTED]> writes:
> I have a live postgres database with 4 application tables.  I can access it
> through PSQL.
> But there is no record of this database on the PG_DATABASE catalog table.

Try vacuuming pg_database.

If that works, it means your routine vacuuming procedures need some
re-thinking, because they are missing (at least) pg_database.

> I'm running version 7.4.5.

You're doing yourself no favors by failing to update.  The current
release in that branch is 7.4.19.

regards, tom lane
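
(A sketch of the suggested check, run from any database you can still
connect to:)

  psql -d template1 -c "VACUUM pg_database"
  psql -d template1 -c "SELECT datname FROM pg_database"  # the row should be back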




[ADMIN] production error

2008-01-28 Thread Mark Steben
Hi folks, getting this error in production:

 

unexpected chunk number 0 (expected 1) for toast value 34761

 

Any ideas on how to approach this?

 

Two of the tables in the query have toast tables, one does not.

 

Thanks,

 

Mark Steben

Senior Database Administrator
@utoRevenue™, A Dominion Enterprises Company

 



[ADMIN] Error on CLUSTER command

2008-02-08 Thread Mark Steben
Hello listers,

 

I am getting the following error when I attempt to CLUSTER on a table:

 

 mavmail=# cluster mailer_queue;

ERROR:  could not create relation "pg_temp_752992924": File exists

 

The 752992924 is the OID of one of the indexes on this table (not the
clustering index).  This apparently is the result of a previous cluster
operation that did not clean up properly.

Where can I find this file?  There are several tables that have this
problem.  Do I need to do a database REINDEX or some other type of system
maintenance?

 

Again we are running 7.4.5.  8.2.5 scheduled for implementation later this
month or early March.

 

Mark Steben

Senior Database Administrator
@utoRevenue™, A Dominion Enterprises Company

 



Re: [ADMIN] Error on CLUSTER command

2008-02-08 Thread Mark Steben
The doc says to do an initdb and reload the data whenever you run
pg_resetxlog.  Do you concur with that even if we are only resetting the OID
counter?

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Friday, February 08, 2008 11:36 AM
To: Mark Steben
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Error on CLUSTER command 

"Mark Steben" <[EMAIL PROTECTED]> writes:
> The number may change pointing to another index or stay the same.
> The error persists if I repeat 3 or 4 times.  Usually after the
> 4th or 5th time the cluster is successful.

Hm.  I'd guess that the installation is old enough that the OID counter
has wrapped around, and is currently in a region that is pretty densely
populated with existing tables.  (You could check that out by looking
into the database directory.)  7.4.x doesn't have the logic that was
added later to skip over filenames that are already in use; instead
you just get an error if there's a collision.

I forget whether 7.4's pg_resetxlog has an option to set the OID
counter, but if it does then a temporary fix is to move the OID counter
to some less densely populated range.

> What is the risk of performing an upgrade to 7.4.19?

Low, but it won't make this particular problem go away.

regards, tom lane
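
(A sketch of the "look into the database directory" check; the data
directory path is an assumption, and <dboid> comes from the first query:)

  psql -d dbname -c "SELECT oid FROM pg_database WHERE datname = 'dbname'"
  ls /var/lib/pgsql/data/base/<dboid> | sort -n | tail   # filenames are OIDs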




Re: [ADMIN] Error on CLUSTER command

2008-02-08 Thread Mark Steben
The number may change pointing to another index or stay the same.
The error persists if I repeat 3 or 4 times.  Usually after the
4th or 5th time the cluster is successful.

What is the risk of performing an upgrade to 7.4.19?

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Friday, February 08, 2008 11:19 AM
To: Mark Steben
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Error on CLUSTER command 

"Mark Steben" <[EMAIL PROTECTED]> writes:
> I am getting the following error when I attempt to CLUSTER on a table:
>  mavmail=# cluster mailer_queue;
> ERROR:  could not create relation "pg_temp_752992924": File exists

If you repeat the command, does the error persist?  Does the number
stay the same?

> Again we are running 7.4.5.

Tsk tsk.  Why not 7.4.19, or at least some reasonably recent 7.4.x
update?

regards, tom lane




[ADMIN] Newly converted to 8.2.5 and getting this error

2008-03-10 Thread Mark Steben
Hi folks,

 

We have an application that cuts and pastes from Word, and it is getting
this error now that we have converted to 8.2.5.  The characters that are
erroring are embedded in the document; we would prefer not having to remove
them.

  invalid byte sequence for encoding "UTF8": 0x85

  HINT:  This error can also happen if the byte sequence does not match the
  encoding expected by the server, which is controlled by "client_encoding".

Our client_encoding is obviously set to UTF8.  Is there a client_encoding
value that is less restrictive?
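
(For reference: 0x85 is the Windows-1252 ellipsis character, which suggests
the pasted text is really Windows-1252 rather than UTF-8.  Declaring that to
the server lets it convert instead of rejecting the bytes -- a sketch;
verify the data's actual encoding first:)

  # per session, from the shell:
  PGCLIENTENCODING=WIN1252 psql -d dbname
  # or inside an existing session:
  #   SET client_encoding = 'WIN1252';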

 

Any help would be appreciated.  Thank you,

 

Mark Steben

Senior Database Administrator
@utoRevenue™, A Dominion Enterprises Company

 



[ADMIN] auto_vacuum question

2008-03-17 Thread Mark Steben
We upgraded from Postgres 7.4.5 to 8.2.5 a couple weekends ago, and I just
turned autovacuum on today.  I manually inserted 4 entries into
PG_AUTOVACUUM with enabled = 'f' to exclude the larger tables that we vacuum
manually every night.  But the first tables that autovacuum chooses to
vacuum are these 4 tables.  Am I missing something?
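
(For reference, the shape of an opt-out row in the 8.2 pg_autovacuum
catalog, written from memory -- confirm the column list with \d
pg_autovacuum; 'bigtable' and dbname are placeholders.  Note that even with
enabled = false, 8.2's autovacuum will still process a table that is
approaching transaction-ID wraparound, which can look like the opt-out
being ignored.)

  # pg_autovacuum keys on the table's pg_class OID; -1 means "use the default"
  psql -d dbname -c "INSERT INTO pg_autovacuum
      VALUES ('bigtable'::regclass, false, -1, -1, -1, -1, -1, -1, -1, -1)"
  psql -d dbname -c "SELECT vacrelid::regclass AS tbl, enabled
                     FROM pg_autovacuum"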

 

Another, possibly related question: the 4 tables mentioned above are
vacuumed via a scheduled daemon that invokes the VACUUMDB shell.  But the
LAST_VACUUM column in PG_STAT_USER_TABLES is not updated to reflect this
vacuum.  Other tables vacuumed manually during the day do have the
LAST_VACUUM column updated, and autovacuum does populate the LAST_AUTOVACUUM
column for these tables.

 

Any insights?  Thanks

 

Mark Steben

Senior Database Administrator
@utoRevenue™, A Dominion Enterprises Company

 


