Re: [GENERAL] DBI module for postgres 1.4.3

2006-12-04 Thread Albe Laurenz
 Trying to connect to it through Perl code.
 Just wondering if DBI would be the best tool to accomplish this task.
 Which version of DBI should I be using?
 If any one of you could give me exact pointers, it would be highly appreciated.

Yes, perl(DBI) is the canonical way to connect to a database
from Perl. You will need the DBD::Pg driver too.
Both modules can be obtained from CPAN (e.g. http://www.cpan.org);
there may even be binary packages available for your operating
system.
I'd use the latest stable version.
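For completeness, a minimal connection sketch (a hedged example, not from the thread; the database name, host, user, and password are placeholders, and it needs a running server):

```perl
#!/usr/bin/perl
# Minimal DBI + DBD::Pg connection sketch; placeholders throughout.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect(
    "dbi:Pg:dbname=mydb;host=localhost",   # DSN: driver, database, host
    "myuser", "secret",                    # credentials (placeholders)
    { RaiseError => 1, AutoCommit => 1 },  # die on errors instead of silent failure
);

# Run a trivial query to prove the connection works.
my ($version) = $dbh->selectrow_array("SELECT version()");
print "$version\n";

$dbh->disconnect;
```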

Yours,
Laurenz Albe

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Unsuccessful SIGINT

2006-12-04 Thread Albe Laurenz
 I have a connection that I am unable to kill with a sigint.
 
 ps auxww for the process in question:
 postgres  3578  0.3  3.6 6526396 1213344 ?  S  Dec01  0:32
 postgres: postgres ssprod 192.168.0.52(49333) SELECT
 
 and gdb shows:
 (gdb) bt
 #0  0x2ba62c18f085 in send () from /lib64/libc.so.6
 #1  0x00504765 in internal_flush ()
 #2  0x00504896 in internal_putbytes ()
 #3  0x005048fc in pq_putmessage ()
 #4  0x00505ea4 in pq_endmessage ()
 #5  0x0043e37a in printtup ()
 #6  0x004e9349 in ExecutorRun ()
 #7  0x00567931 in PortalRunSelect ()
 #8  0x005685f0 in PortalRun ()
 #9  0x00565ea8 in PostgresMain ()
 #10 0x00540624 in ServerLoop ()
 #11 0x0054131a in PostmasterMain ()
 #12 0x0050676e in main ()
 
 lsof on the client machine (192.168.0.52) shows no connections on
 port 49333, so it doesn't appear to be a simple matter of killing the
 client connection. If I have to, I can reboot the client machine, but
 this seems like overkill and I'm not certain this will fix the
 problem. Anything else I can try on the server or the client short of
 restarting the database or rebooting the client?

Do I get it right that there is no process on the client machine
using port 49333?
Maybe you can reboot the client machine to make sure.

I'd wait for some time, because the send() might be stuck in kernel
space, and I guess it should timeout at some point. Then the process
will go away.

If the server process is still there after a couple of hours, hmm,
I don't know. Maybe resort to a kill -9. If that does not get rid
of the server process, it is stuck in kernel space for good and
probably nothing except a reboot will get rid of it.
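As a hedged aside not raised in the thread: on PostgreSQL 8.1 and later, server-side TCP keepalive settings can let the kernel notice a vanished client during idle periods, before a write ever blocks (they do not help once send() is already wedged on unacknowledged data). A possible postgresql.conf fragment, values illustrative only:

```
tcp_keepalives_idle = 60       # seconds of idleness before probing
tcp_keepalives_interval = 10   # seconds between probes
tcp_keepalives_count = 5       # probes lost before the connection is dropped
```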

Yours,
Laurenz Albe

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] logs not ending up at Syslog

2006-12-04 Thread nicky

Hello All,

I'm trying to configure PostgreSQL 8.1.x (on FreeBSD 6-STABLE) to log
through syslog, yet somehow my logging doesn't end up in the specified
log file.
I'm at a loss as to where I went wrong. I'm hoping someone can
help me fix it.


Below are uncommented lines in my postgresql.conf file, related to logging:

log_destination = 'syslog'

# These are relevant when logging to syslog:
syslog_facility = 'LOCAL0'
syslog_ident = 'postgres'

log_min_messages = debug5          # Values, in order of decreasing detail
log_min_duration_statement = 0     # -1 is disabled, 0 logs all statements
                                   # and their durations, in milliseconds.

silent_mode = on

log_connections = on
log_disconnections = on
log_duration = on

log_statement = 'all'   # none, mod, ddl, all


And I've appended the line below to my /etc/syslog.conf

local0.*	/var/log/pgsql


Restarted both PostgreSQL and syslogd, yet nothing gets logged to
/var/log/pgsql.



Can someone give me a clue?

Thanks in advance.

Nick

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] sudden drop in delete performance

2006-12-04 Thread Alban Hertroys
surabhi.ahuja wrote:
 after my update
 I had my entire data directory PGDATA removed,
 I had done initdb again,
 and did a lot of inserts (the inserts have given similar performance)

So you _didn't_ vacuum analyze. You need one right at this point, or the
database is optimizing your queries using statistical data that is no
longer accurate (If I understand correctly, the statistics describe an
empty database - I need to read up on this topic sometime, I use it a
lot...).

 I then do a delete from the db, which is taking time.
 When I had Postgres 8.0.0 I did not turn fsync off.

 At that time I had moved the pg_xlog directory to a different partition and
 created a link from PGDATA to its new location.
 (I did this because I had heard that it boosts performance)

I'm rather certain creating a tablespace on that partition would be
faster than a symlink, although I suppose the filesystem cache will help
a bit.

 but I am doing the same here also (I mean with Postgres 8.1.5)

-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //



Re: [GENERAL] trouble inserting into new partitions of partitioned

2006-12-04 Thread Richard Huxton

[EMAIL PROTECTED] wrote:

=
6. Thinking it might be the prepared stmt causing the problem, I
tried a direct call to the stored proc, to no avail:

pg> SELECT silly_insert('','va',999);
ERROR:  inserts only allowed into silly partition tables (state was va)

=
7. A direct insert does work, however:
pg> INSERT INTO silly(bcid,state,some_value) VALUES('asdf','ny',) ;
INSERT 0 0

8. If the process from (2) disconnects and reconnects, everything
works as expected (i.e. it can insert Virginia rows).

What you're missing is the fact that queries within a function have 
their query-plan cached. That means silly_insert()'s INSERT INTO 
statement gets re-written on the first call and the plan saved.


Workarounds:
1. Reconnect (as you discovered) thus re-planning the function's query
2. Re-create the function (CREATE OR REPLACE FUNCTION ...)
3. Use the EXECUTE statement to dynamically construct your query
4. Use a different language that doesn't cache query-plans

We probably need a 'de-cache function' command, but no-one's implemented 
such a thing yet.
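A hedged sketch of workaround 3 (the table and column names are taken from the thread; the function name and argument list are invented for illustration). Inside PL/pgSQL, EXECUTE builds the statement as a string, so it is re-planned on every call and sees partitions added after the first invocation:

```sql
CREATE OR REPLACE FUNCTION silly_insert_dyn(p_bcid text, p_state text, p_val integer)
RETURNS void AS $$
BEGIN
    -- EXECUTE re-plans the statement on each call, so a child table
    -- created after the first invocation is still picked up.
    EXECUTE 'INSERT INTO silly (bcid, state, some_value) VALUES ('
            || quote_literal(p_bcid) || ', '
            || quote_literal(p_state) || ', '
            || p_val || ')';
END;
$$ LANGUAGE plpgsql;
```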


HTH
--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Locking for function creation

2006-12-04 Thread Richard Huxton

Mark Morgan Lloyd wrote:

If there's a risk that multiple clients will try to execute a 'create or replace
function' simultaneously, what's the recommended practice for putting it in a
transaction and/or locking it? If a lock's involved, what should this be applied
to: the table that the function is most likely to be involved with, an arbitrary
table, or a dummy table specifically reserved for this purpose?


What problem are you trying to prevent here? Do you want a particular 
version of the function to be available for a certain amount of time?


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Storing files in postgres db

2006-12-04 Thread Richard Huxton

tam wei wrote:

Dear all,

I am intending to store the files inside the postgres DB using the
type text (all the files will be pre-encoded into base64; the reason
for not using the type bytea is that I encountered some undesired
formatting (the original file alignment couldn't be preserved) while
extracting the content and displaying it using PHP).


Hmm - not sure what you mean here. You should just get back whatever you 
store.



Will it be a bad idea to store the files in the DB? The file size is
about 20-40 KB, and about 3,000 files need to be stored per day. Is there
any impact on DB performance?


This shouldn't be any different from storing a similar number of text 
records.


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] DBI module for postgres 1.4.3

2006-12-04 Thread Tshimanga Minkoka
You can also give DBD::PgPP a try!

TSHIMANGA Minkoka
VAS Administrator
Mobile: +243 814443113
Office: +243 813131347
Fax:+243 813010373
Email:  [EMAIL PROTECTED]
VODACOM CONGO (DRC) s.p.r.l.


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Albe Laurenz
Sent: Monday, 04 December 2006 09:27
To: Jasbinder Singh Bali *EXTERN*; pgsql-general@postgresql.org
Subject: Re: [GENERAL] DBI module for postgres 1.4.3

 Trying to connect to it through Perl code.
 Just wondering if DBI would be the best tool to accomplish this task.
 Which version of DBI should I be using?
 If any one of you could give me exact pointers, it would be highly appreciated.

Yes, perl(DBI) is the canonical way to connect to a database
from Perl. You will need the DBD::Pg driver too.
Both modules can be obtained from CPAN (e.g. http://www.cpan.org);
there may even be binary packages available for your operating
system.
I'd use the latest stable version.

Yours,
Laurenz Albe





Re: [GENERAL] Locking for function creation

2006-12-04 Thread Mark Morgan Lloyd
Richard Huxton wrote:
 
 Mark Morgan Lloyd wrote:
  If there's a risk that multiple clients will try to execute a 'create or
  replace function' simultaneously, what's the recommended practice for
  putting it in a transaction and/or locking it? If a lock's involved, what
  should this be applied to: the table that the function is most likely to
  be involved with, an arbitrary table, or a dummy table specifically
  reserved for this purpose?
 
 What problem are you trying to prevent here? Do you want a particular
 version of the function to be available for a certain amount of time?

I don't anticipate that the function will change, but it's (re)defined by a
script triggered periodically on a client system. I'm pretty sure that I've seen
a problem whilst I was doing maintenance when two clients tried to redefine it
simultaneously (i.e. on one of them the redefinition failed rather than waiting).
In the interim I've set up a transaction with a lock on the table that is most
likely to be involved, noting that by default the lock type is the most
restrictive.

-- 
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Storing files in postgres db

2006-12-04 Thread Michal Taborsky - Internet Mall

Hello Tam Wei,

tam wei wrote:

I am intending to store the files inside the postgres DB using the
type text (all the files will be pre-encoded into base64; the reason
for not using the type bytea is that I encountered some undesired
formatting (the original file alignment couldn't be preserved) while
extracting the content and displaying it using PHP).


You must be doing something wrong on PHP side. Make sure you use 
pg_escape_bytea/pg_unescape_bytea when storing/loading. We use it and it 
works with binary files, no problem. You are wasting space and slowing 
things down with base64.
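To quantify that waste: base64 encodes every 3 raw bytes as 4 ASCII characters, roughly 33% overhead before the text even reaches the database. A quick sketch of the arithmetic (Python used only for illustration; not part of the thread):

```python
import base64
import os

# Base64 maps every 3 raw bytes to 4 ASCII characters (~33% overhead),
# which is why storing pre-encoded files in a text column wastes space
# compared with bytea.
raw = os.urandom(30_000)          # roughly the 20-40 KB files mentioned
encoded = base64.b64encode(raw)

assert len(encoded) == 40_000     # 30,000 * 4/3
assert base64.b64decode(encoded) == raw   # round-trips losslessly
```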



Will it be a bad idea to store the files in the DB? The file size is
about 20-40 KB, and about 3,000 files need to be stored per day. Is there
any impact on DB performance?


We use it to store files of 50MB without any difficulty. It all depends, 
of course, on your hardware and load.


--
Michal Táborský
chief systems architect
Internet Mall, a.s.
http://www.MALL.cz



Fix for 8.2 release. Was: [GENERAL] Problems to create the portuguese dictionary

2006-12-04 Thread Oleg Bartunov

Hello,

we have just released a fix for the 8.2 release, which updates the Snowball API.
The patch is available from
http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/tsearch_snowball_82.gz

It was too late to get the fix into the 8.2 release itself, sorry.

Oleg


On Fri, 1 Dec 2006, Luiz Claudio da Silva Le?o wrote:


Hi,

I am trying to create the Portuguese dictionary in order to use tsearch2.

The steps I followed are described below:

1) Login system as user1
2) cd
3) tar -xvzf postgresql-8.1.5.tar.gz
4) cd postgresql-8.1.5
5) ./configure
6) gmake
7) su
8) gmake install
9) adduser postgres
10) mkdir /usr/local/pgsql/data
11) chown postgres /usr/local/pgsql/data
12) su - postgres
13) /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data --locale=pt_BR.utf8
14) /usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data >logfile 2>&1 &
15) /usr/local/pgsql/bin/createdb test
16) /usr/local/pgsql/bin/psql test

Postgresql working.

17) \q (inside psql)
18) exit
19) cd /home/user1/postgresql-8.1.5/contrib/tsearch2
20) gmake
21) gmake install

Tsearch OK.

22) cd ./gendict
23) wget http://snowball.tartarus.org/algorithms/portuguese/stem.c
24) wget http://snowball.tartarus.org/algorithms/portuguese/stem.h
25) ./config -n pt -s -p portuguese_ISO_8859_1 -v -C 'Snowball stemmer for Portuguese'

26) cd ../../dict_pt
27) make


Now I receive a lot of errors...


I think there are differences in the interfaces between Snowball and tsearch2.

Does anybody know how to create a new dictionary or have old versions of 
stem.c and stem.h for portuguese?


Thanks,

Luiz Claudio Leao






Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83



Re: [GENERAL] Locking for function creation

2006-12-04 Thread Richard Huxton

Mark Morgan Lloyd wrote:

Richard Huxton wrote:

Mark Morgan Lloyd wrote:

If there's a risk that multiple clients will try to execute a 'create or
replace function' simultaneously, what's the recommended practice for
putting it in a transaction and/or locking it? If a lock's involved, what
should this be applied to: the table that the function is most likely to
be involved with, an arbitrary table, or a dummy table specifically
reserved for this purpose?

What problem are you trying to prevent here? Do you want a particular
version of the function to be available for a certain amount of time?


I don't anticipate that the function will change, but it's (re)defined by a
script triggered periodically on a client system. I'm pretty sure that I've seen
a problem whilst I was doing maintenance when two clients tried to redefine it
simultaneously (i.e. on one of them the redefinition failed rather than waiting),


Was it "tuple concurrently updated"? You can reproduce this fairly 
simply by issuing BEGIN ... CREATE OR REPLACE FUNCTION f ... in two 
different psql sessions and delaying COMMIT appropriately. AFAIK it's 
harmless, but it does abort your transaction.



in the interim I've set up a transaction with a lock on the table that is most
likely to be involved noting that by default the lock type is the most
restrictive.


You probably want a userlock (see contrib/), or as low-impact a lock as 
you can get away with. Perhaps lock your dummy table (row contains 
function schema/name?). You'll still want to code your application in 
such a way that it copes with errors though - the lock attempt can 
always time out (in theory anyway).
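A minimal sketch of the dummy-table variant (the table and function names here are invented for illustration; on 8.2+ an advisory lock would serve the same purpose):

```sql
BEGIN;
-- Serialize concurrent redefinitions: only one session at a time gets
-- past this lock; the others queue here until the holder commits.
LOCK TABLE function_redefine_lock IN ACCESS EXCLUSIVE MODE;

CREATE OR REPLACE FUNCTION my_func() RETURNS integer AS $$
    SELECT 1;
$$ LANGUAGE sql;

COMMIT;
```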


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Storing files in postgres db

2006-12-04 Thread MaXX
On Mon, 4 Dec 2006 00:28:52 +0800
[EMAIL PROTECTED] (tam wei) wrote:

 Dear all,
 
 I am intending to store the files inside the postgres DB using the
 type text (all the files will be pre-encode into base64. The reason
 for not using the type bytea as I encountered some undesired
 format(the original file alignment can't be preserved) while
 extracting the content and display iit using php).
Hi,

Make sure you don't have any unwanted characters before or after the PHP tags 
(<? ?> or <?php ?>) in your main script and includes. PHP will output them, 
thus breaking your file.
A more robust workaround is using output buffering to clear any unwanted output 
before sending the file... 

I have no problem storing bytea objects and retrieving them. Using output 
buffering also allows you to use ob_gzhandler to reduce network bandwidth if 
needed... 

HTH,
-- 
MaXX 



[GENERAL] GiN for 8.1 patch updated

2006-12-04 Thread Oleg Bartunov

Hello,

we have just updated the patch for the 8.1 release, which introduces
GiN (Generalized Inverted Index) with tsearch2 support and
full multibyte support (UTF-8 as well). It contains a VACUUM fix
in the GiN code. The patch is available from the Tsearch2 page
http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/


Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83



Re: [GENERAL] Unsuccessful SIGINT

2006-12-04 Thread Brian Wipf

On 4-Dec-06, at 1:43 AM, Albe Laurenz wrote:

lsof on the client machine (192.168.0.52) shows no connections on
port 49333, so it doesn't appear to be a simple matter of killing the
client connection. If I have to, I can reboot the client machine, but
this seems like overkill and I'm not certain this will fix the
problem. Anything else I can try on the server or the client short of
restarting the database or rebooting the client?


Do I get it right that there is no process on the client machine
using port 49333?
Maybe you can reboot the client machine to make sure.

I'd wait for some time, because the send() might be stuck in kernel
space, and I guess it should timeout at some point. Then the process
will go away.
The Java process on the client machine that held the connection was
killed off and lsof no longer showed a process with a connection on
port 49333. I waited about 7 hours and the database server still
showed the hung connection from port 49333 of the client. I finally
rebooted the client computer, which fixed the problem. I suppose
something lower level than the application process was hanging on to
the connection somehow and lsof couldn't even detect it. The client
is a Mac OS X 10.4.8 box. It would have been nice if I could have
killed the process from the server side as well, but I'm sure there's
a good reason why you can't when it's in this state:

send () from /lib64/libc.so.6
in internal_flush ()
in internal_putbytes ()
in pq_putmessage ()
in pq_endmessage ()
in printtup ()
in ExecutorRun ()
in PortalRunSelect ()


If the server process is still there after a couple of hours, hmm,
I don't know. Maybe resort to a kill -9. If that does not get rid
of the server process, it is stuck in kernel space for good and
probably nothing except a reboot will get rid of it.
The last time I tried a kill -9 on a server process, the database
instantly restarted itself and had to perform some kind of crash
recovery. Is a kill -9 okay in some cases? I suppose a restart of the
database would have worked as well, but that was my last resort.





Re: [GENERAL] Locking for function creation

2006-12-04 Thread Mark Morgan Lloyd
Richard Huxton wrote:

 Was it tuple concurrently updated? You can reproduce this fairly
 simply by issuing BEGIN...CREATE OR REPLACE FUNCTION f... in two
 different psql sessions and delaying COMMIT appropriately. AFAIK it's
 harmless, but does abort your transaction.

I /think/ so, but it was buried deep in custom scripting and (usual story) I was
under pressure to get something else done at the time :-)

  in the interim I've set up a transaction with a lock on the table that is
  most likely to be involved noting that by default the lock type is the most
  restrictive.
 
 You probably want a userlock (see contrib/), or as low-impact a lock as
 you can get away with. Perhaps lock your dummy table (row contains
 function schema/name?). You'll still want to code your application in
 such a way that it copes with errors though - the lock attempt can
 always time out (in theory anyway).

Thanks, noted. I'm not expecting this to be a regular occurrence since, in general,
the only time multiple sessions will be running will be during maintenance.

If I might ask a related question: assuming that a client has grabbed a
restrictive lock during a transaction that is, e.g., creating/replacing functions,
what happens to other sessions that attempt to run a select or update? Will they
fail (i.e. an implicit NOWAIT) or will they wait until the lock is released?
Does this vary depending on whether the select/update is within an explicit
transaction?

-- 
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]



Re: [GENERAL] Locking for function creation

2006-12-04 Thread Richard Huxton

Mark Morgan Lloyd wrote:


If I might ask a related question: assuming that a client has grabbed a
restrictive lock during a transaction that is, e.g., creating/replacing functions,
what happens to other sessions that attempt to run a select or update? Will they
fail (i.e. an implicit NOWAIT) or will they wait until the lock is released?
Does this vary depending on whether the select/update is within an explicit
transaction?


They will wait without an explicit NOWAIT. This applies whether the 
transaction is explicit or implicit.


--
  Richard Huxton
  Archonet Ltd



[GENERAL] pgsql bug found?

2006-12-04 Thread Ronin
Hi, when I run the following function it fills in 2 dates per day from 1970
to 2050, except that some months (typically 2 months per year) have 4
dates for one day. This is totally freaky... I wonder if PostgreSQL is
tripping over itself, making a double entry every now and again.

for instance I constantly get the following entries

2006-10-01 00:00:00
2006-10-01 23:59:59.999
2006-10-01 00:00:00
2006-10-01 23:59:59.999

Any ideas?

Here is the function:

DECLARE
    yearcnt integer;
    monthcnt integer;
    daycnt integer;
BEGIN
    FOR yearcnt IN 1970..2050 LOOP
        FOR monthcnt IN 1..12 LOOP
            FOR daycnt IN 1..31 LOOP
                INSERT INTO datepool VALUES
                    (to_timestamp(yearcnt || '-' || to_char(monthcnt,'FM00') || '-' || to_char(daycnt,'FM09')
                     || ' 00:00:00.000', 'YYYY-MM-DD HH24:MI:SS.MS'));

                INSERT INTO datepool VALUES
                    (to_timestamp(yearcnt || '-' || to_char(monthcnt,'FM00') || '-' || to_char(daycnt,'FM09')
                     || ' 23:59:59.999', 'YYYY-MM-DD HH24:MI:SS.MS'));
            END LOOP;
        END LOOP;
    END LOOP;

    RETURN;
END;




Re: [GENERAL] Restore database from files (not dump files)?

2006-12-04 Thread wheel
In article [EMAIL PROTECTED], [EMAIL PROTECTED] 
says...
 wheel wrote:
  In article [EMAIL PROTECTED], [EMAIL PROTECTED] 
  says...
  
 wheel wrote:
 
 Can a pgsql 8.1 database be restored from the raw file? For one database 
 I have only the files found in  the base folder, example C:\PostgreSQL
 \data\base\16404, there are many files called 1247, 1248, etc (no 
 extension). I think these are the files that make up the database? For 
 this db I don't have a dump file, just what is on disk. I do know the 
 users and passwords etc. I've hunted around quite a bit in various 
 places and most of the restore discussions center around files created 
 with pg_dump.
 
 Uh, do you have the entire /data directory tree?  If so, just restore
 the directory start it up a binary.
 
 
  
  Yes I have the entire dir/file set. But what does "If so, just restore
  the directory start it up a binary" mean? Restore the dir, you mean copy 
  it to its location under \base? What does "start it up a(s?) binary" 
  mean? I'm new to postgres.
  
  I have copied the folders back to the base dir (like C:\PostgreSQL\data
  \base\16404) if that's step one but what after that?
  
 
 Just start Postgres. If the data dir is ok, it should run fine.
 
 I'm assuming the binary is 8.1 but are the data files from the same version?
 
 b

I see... I'm using pgAdmin III with 8.1.5, and the dbs don't reappear in 
the pgAdmin list of databases automatically. I'll see if I can figure 
out how to get pgAdmin to 'see' the dbs that have been moved here. At 
least it seems from what you're saying that it's just a matter of the 
restored dbs being invisible to pgAdmin.



Re: [GENERAL] Locking for function creation

2006-12-04 Thread Mark Morgan Lloyd
Richard Huxton wrote:
 
 Mark Morgan Lloyd wrote:
 
  If I might ask a related question- assuming that a client has grabbed a
  restrictive lock during a transaction that e.g. is create/replacing 
  functions,
  what happens to other sessions that attempt to run a select or update- will 
  they
  fail (i.e. an implicit NOWAIT) or will they wait until the lock is released?
  Does this vary depending on whether a select/update is within an explicit
  transaction?
 
 They will wait without an explicit NOWAIT. This applies whether the
 transaction is explicit or implicit.

Just what I was hoping. Many thanks for your help :-)

-- 
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]



Re: [GENERAL] pgsql bug found?

2006-12-04 Thread Martijn van Oosterhout
On Mon, Dec 04, 2006 at 06:52:19AM -0800, Ronin wrote:
 Hi, when I run the following function it fills in 2 dates per day from 1970
 to 2050, except that some months (typically 2 months per year) have 4
 dates for one day. This is totally freaky... I wonder if PostgreSQL is
 tripping over itself, making a double entry every now and again.
 
 for instance I constantly get the following entries

It's either a weird daylight-savings thing, or something to do with the
fact that not all months have 31 days.
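The 31-day loop is indeed the likely culprit: a lenient date parser rolls an out-of-range day such as '2006-09-31' over into October 1, which then collides with October's own first entry. A small sketch of the effect (Python stands in for the lenient parser here; this is an illustration, not the original function):

```python
from datetime import date, timedelta

def lenient_date(year, month, day):
    """Normalize an out-of-range day by rolling into the next month,
    the way a lenient parser accepts inputs like '2006-09-31'."""
    return date(year, month, 1) + timedelta(days=day - 1)

# 'September 31' silently becomes October 1 ...
assert lenient_date(2006, 9, 31) == date(2006, 10, 1)

# ... so looping day 1..31 over September *and* October produces
# October 1 twice, matching the duplicated rows in the report.
days = [lenient_date(2006, m, d) for m in (9, 10) for d in range(1, 32)]
assert days.count(date(2006, 10, 1)) == 2
```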

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




Re: [GENERAL] Restore database from files (not dump files)?

2006-12-04 Thread John D. Burger
I have copied the folders back to the base dir (like
C:\PostgreSQL\data\base\16404) if that's step one, but what after that?

Just start Postgres. If the data dir is ok, it should run fine.


It's unclear from your description whether the raw DB files were
moved from another installation - note that the above is only
guaranteed to work if architecture/compiler/etc. are all the same.
If the files were created by exactly the same PG instance, then you
should be okay.


- John D. Burger
  MITRE





Re: [GENERAL] pgsql bug found?

2006-12-04 Thread Csaba Nagy
   FOR daycnt IN 1..31 LOOP
How about months with fewer than 31 days? What do you get for those if
the day is 31?

Cheers,
Csaba.





Re: [GENERAL] Speed of postgres compared to ms sql, is this

2006-12-04 Thread novnov

I'm the OP; I ran into this today and thought I'd give it some exposure:

http://spyced.blogspot.com/2006/12/benchmark-postgresql-beats-stuffing.html

From the article:

Benchmark: PostgreSQL beats the stuffing out of MySQL

This is interesting, because the conventional wisdom of idiots on slashdot
continues to be "use postgresql if you need advanced features, but use mysql
if all you care about is speed", despite all the head-to-head benchmarks
I've seen by third parties showing PostgreSQL to be faster under load. 



Josh Rovero wrote:
 
 _  
 
 From: novnov [mailto:[EMAIL PROTECTED]
 To: pgsql-general@postgresql.org
 Sent: Sat, 11 Nov 2006 20:15:08 -0500
 Subject: Re: [GENERAL] Speed of postgres compared to ms sql, is this
 
 
   I agree with what you say. And I'd still be interested in hearing of
   first-hand experience with the speed of the two databases from someone
   who is 'good' at both. The article commenter was obviously not a pgsql
   expert. I've heard recently that pgsql is as fast as mysql, so it seems
   odd that ms sql would be faster than pgsql. The actual test, what was
   used to benchmark, would make a difference of course.

   I'm hoping someone with a good handle on both databases and direct
   experience can chime in here.

 We use both native PostgreSQL and SQL Server Desktop Edition (MSDE),
 but not generally for exactly the same things.

 We use PostgreSQL on a multi-platform (HP-UX, Solaris, Linux, Windows)
 commercial application suite. Works great; it handles tens of gigabytes
 per day in data insertions, updates, and retirements with almost
 zero administration. Servers running the database stay up forever.
 We chose PostgreSQL as the best value after analyzing competing commercial 
 and open source RDBMS systems in a formal DAR (Decision Analysis
 Resolution)
 process.
 
 We use MSDE on a government project where both the specific database
 version and the committee-designed database schema from hell were mandated.
 There are plenty of instances where SQL Server is less than optimal and
 maddening. The MSDE version also limits connections, throttles performance
 for simultaneous queries, etc. And there's no way you'd ever pick it for
 multiplatform use or portability.
 

-- 
View this message in context: 
http://www.nabble.com/Speed-of-postgres-compared-to-ms-sql%2C-is-this-article-comment-off--tf2614575.html#a7682549
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] [HACKERS] getting index columns from information_schema?

2006-12-04 Thread Jeff Davis
On Mon, 2006-12-04 at 07:18 -0800, Timasmith wrote:
 I can't seem to find the right query to retrieve the discrete columns,
 and their positions, for a specified index.
 
 This is towards the purpose of identifying the schema differences
 between two databases and creating the changes needed.
 

Note: this question is appropriate on pgsql-general, not on pgsql-hackers.

The columns of an index can be found in pg_attribute with an indexrelid
equal to the oid of the index's entry in pg_class.
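A hedged sketch of that lookup ('my_index' is a placeholder name): for an index, pg_attribute rows whose attrelid is the index's own oid list the index columns in order.

```sql
-- Columns of an index, in index-column order.
SELECT a.attname, a.attnum
FROM pg_class c
JOIN pg_attribute a ON a.attrelid = c.oid
WHERE c.relname = 'my_index'
  AND c.relkind = 'i'          -- 'i' = index entries in pg_class
ORDER BY a.attnum;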

Regards,
Jeff Davis


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] PG Admin

2006-12-04 Thread Bob Pawley
I just installed PostgreSQL 8.1 and PG Admin 1.6.1 .

These versions have a sequential column, that is not part of the table, 
identifying the rows.

Is there any method of accessing those numbers and identifying them with 
elements within the table??

Bob Pawley

Re: [GENERAL] PG Admin

2006-12-04 Thread Richard Huxton

Bob Pawley wrote:

I just installed PostgreSQL 8.1 and PG Admin 1.6.1 .

These versions have a sequential column, that is not part of the
table, identifying the rows.

Is there any method of accessing those numbers and identifying them
with elements within the table??


Are you sure it's not just numbering the rows as it displays them?

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] PG Admin

2006-12-04 Thread Bob Pawley

That's what they are doing.

That is also what I am looking for, if it is accessible. If so, I can use 
that information to add a sequential numerical element to my information 
that doesn't have the restrictions of a serial column.


Bob
- Original Message - 
From: Richard Huxton dev@archonet.com

To: Bob Pawley [EMAIL PROTECTED]
Cc: Postgresql pgsql-general@postgresql.org
Sent: Monday, December 04, 2006 9:39 AM
Subject: Re: [GENERAL] PG Admin



Bob Pawley wrote:

I just installed PostgreSQL 8.1 and PG Admin 1.6.1 .

These versions have a sequential column, that is not part of the
table, identifying the rows.

Is there any method of accessing those numbers and identifying them
with elements within the table??


Are you sure it's not just numbering the rows as it displays them?

--
  Richard Huxton
  Archonet Ltd




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] logs not ending up at Syslog

2006-12-04 Thread brian

nicky wrote:

Hello All,

I'm trying to configure my PostgreSQL 8.1.x (on FreeBSD 6-STABLE) to log 
through syslog, yet somehow my logging doesn't end up in the specified 
log files.
I'm kind of at a loss as to where I messed up. I'm hoping someone can 
help me fix it.


Below are uncommented lines in my postgresql.conf file, related to logging:

log_destination = 'syslog'

# These are relevant when logging to syslog:
syslog_facility = 'LOCAL0'
syslog_ident = 'postgres'

log_min_messages = debug5   # Values, in order of decreasing 
detail:
log_min_duration_statement = 0  # -1 is disabled, 0 logs all 
statements
   # and their durations, in 
milliseconds.

silent_mode = on

log_connections = on
log_disconnections = on
log_duration = on

log_statement = 'all'   # none, mod, ddl, all


And I've appended the line below to my /etc/syslog.conf

local0.*    /var/log/pgsql


Restarted both PostgreSQL and syslog, yet, nothing gets displayed in 
/var/log/pgsql.





What's in /usr/local/etc/rc.d/postgresql? Does it specify anything  for 
logging? If so, it's likely to have something like 
PGLOG=$PGDATA/postgres.log.


brian

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Speed of postgres compared to ms sql, is this

2006-12-04 Thread Ian Harding

On 11/13/06, Scott Marlowe [EMAIL PROTECTED] wrote:

On Mon, 2006-11-13 at 15:36, novnov wrote:
 OK, thanks everyone, I gather from the responses that postgres performance
 won't be an issue for me then. If MS SQL Server and Postgres are in the same
 ballpark performance-wise, which seems to be the upshot of your comments, no
 problem. I'd only have worried if there was something like the major
 difference between the two with more complicated queries. I am puzzled by
 the commentor's post to the article, it could be FUD of course but didn't
 particularly sound like the commentor was anti pgsql.

I will say this. Most other databases are more forgiving of bad
queries.  Make a bad query and postgresql is more likely to punish you
for it.


Amen.  When I migrated from MS SQL to PostgreSQL (4 years ago), I found
out exactly how seriously MS SQL coddles you when it comes to its Oh,
I know what you really meant query planning.  I committed some sins
MS SQL covered up nicely, and PostgreSQL flat out crawled when
presented with them.

However, I suspect that if I tried those bad queries with a current
version of PostgreSQL they would run much better, given all the work
that has been put  in over the last few years.

- Ian

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] Instal/Uninstall Problem

2006-12-04 Thread Andrew Raia
Sorry if this is a double post; I realized I wasn't subscribed when I sent
the first email.
 
Hoping someone can give me a hand.
 
I had a developer who integrated the postgres msi into an installshield 10
installer. Somehow the uninstaller seemed to have half uninstalled it. All
the files are off the harddrive. When I go to add/remove programs it says no
installer package can be found. When I try to run the msi manually it says
the administrator has policies in effect to prevent this installation.
 
I'm running XP Pro as administrator.
 
Thanks in advance.

-- 
No virus found in this outgoing message.
Checked by AVG Free Edition.
Version: 7.1.409 / Virus Database: 268.15.6/567 - Release Date: 12/4/2006
 


Re: [GENERAL] PG Admin

2006-12-04 Thread Tony Caduto

Bob Pawley wrote:

That's what they are doing.

That is also what I am looking for, if it is accessable. If so, I can 
use that information to add a sequential numerical element to my 
information that doesn't have the restrictions of a serial column.


Bob


Hi Bob,

Well, if you create your tables WITH OIDs then each row has an OID 
associated with it.


What exactly are the restrictions of a serial column?  It's just a 
standard integer value with a default value that calls the nextval function.


Later,

--
Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql
Your best bet for Postgresql Administration 



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] Windows Binary for pgDesigner

2006-12-04 Thread John McCawley
A bunch of people have asked me about it, and I finally have a Windows 
machine sitting around with QT, so I built a Windows binary for 
pgDesigner that you can get here:


http://www.hardgeus.com/projects/pgdesigner/pgdesigner_win_0.8.zip

It's a pretty useful little tool for visualizing existing databases, and 
also for the initial design of a datamodel.  It supports importing of 
tables from an existing PostgreSQL database, and will generate SQL to 
build a new one.


Once QT4 stabilizes on the different Linux distros I'll actually support 
the app, and create a sane build environment, but in the meantime I'm 
just releasing this binary in the wild in the hope that it's useful.


John

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] n00b RAID + wal hot standby question

2006-12-04 Thread Anton Melser

Hi,
I am just starting at a company and we are inheriting a previously
built solution. It looks pretty good but my previous experience with
pg is seriously small-time compared with this...
I am very new at the job, and don't know what hd config we have but it
will be RAID-something I imagine (hey I was working with desktop
servers before this!). If that is very important I can find out. We
seem to be saving our WAL to the same partition as PGDATA, and I
notice that we are maxing out a reasonable looking server. The db is
not very big (~4gig, 400meg pgdump), and though I can't see any vacuum
strategy (autovacuum on a 8.1.4 is disabled), we didn't have as much
consistent CPU usage at my old job (with a 6 gig db and MUCH less CPU
and RAM, and probably as many connections), and my vacuum strategy was
also pitiful!  Sure, completely different environments, but I am
thinking that WAL replication could be a factor.
So my question... being in complete ignorance of how RAID works (the
performance details)... would it be better to try and separate the WAL
destination from PGDATA? How much of a difference could it make?
Should we wait till the customer starts complaining (no explosion in
traffic/db size realistic for the foreseeable future...)?
Any abuse welcome.
Cheers
Antoine

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] n00b RAID + wal hot standby question

2006-12-04 Thread Brandon Aiken
Unless you can separate PGDATA and the WAL destination to be on wholly
independent physical disks and not just different partitions of the same
hardware array, the physical limitations will still be present.

I believe the recommended method is to use RAID 5 or RAID 10 data
partitions and then use RAID 1 for transaction logs.  Additionally,
you're supposed to match the stripe size of the arrays to the block
size of your database, but I can never remember the math involved to do
it.

Database guides like this are still a bit beyond what I can understand:
http://www.lc.leidenuniv.nl/awcourse/oracle/server.920/a96520/hardware.htm

This one is a bit easier:
http://www.dbazine.com/oracle/or-articles/ault1

This is the best RAID primer I've seen:
http://www.pcguide.com/ref/hdd/perf/raid/index.htm



I'm not convinced I/O is your problem, though.  High CPU and memory
usage is indicative of many different problems, and poor disk I/O is
usually not one of them.  In a modern system, I'd expect to see poor
disk I/O causing *low* CPU usage combined with poor SELECT and awful
INSERT/UPDATE/DELETE performance.  Maybe it's caching the database state
in memory while it's waiting for writing, though.

It seems more likely that the database is either pushing more
transactions per minute, pushing more complex transactions, dealing with
larger queries and result sets, maintaining more indexes, or running
more complex pl/SQL procedures, triggers, and constraints.

Additionally, if my understanding is right then running with autovacuum
disabled and no batch process vacuum strategy on a database with lots of
INSERTs and DELETEs is essentially like running without indexes.


--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Anton Melser
Sent: Monday, December 04, 2006 4:11 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] n00b RAID + wal hot standby question

Hi,
I am just starting at a company and we are inheriting a previously
built solution. It looks pretty good but my previous experience with
pg is seriously small-time compared with this...
I am very new at the job, and don't know what hd config we have but it
will be RAID-something I imagine (hey I was working with desktop
servers before this!). If that is very important I can find out. We
seem to be saving our WAL to the same partition as PGDATA, and I
notice that we are maxing out a reasonable looking server. The db is
not very big (~4gig, 400meg pgdump), and though I can't see any vacuum
strategy (autovacuum on a 8.1.4 is disabled), we didn't have as much
consistent CPU usage at my old job (with a 6 gig db and MUCH less CPU
and RAM, and probably as many connections), and my vacuum strategy was
also pitiful!  Sure, completely different environments, but I am
thinking that WAL replication could be a factor.
So my question... being in complete ignorance of how RAID works (the
performance details)... would it be better to try and separate the WAL
destination from PGDATA? How much of a difference could it make?
Should we wait till the customer starts complaining (no explosion in
traffic/db size realistic for the foreseeable future...)?
Any abuse welcome.
Cheers
Antoine


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] Problem working with dates and times.

2006-12-04 Thread Alejandro Michelin Salomon \( Adinet \)
Hi :
 
I have some problems working with date and times, in my web page.
 
I have table with tasks.
This table has :
 
Inicial date
Inicial hour
Duration
Final Date
Final time 
 
Final Date and Final time are calculated based on Inicial date, Inicial hour,
and Duration.
 
In my test:
Inicial date  2006-12-04
Inicial hour  20:00
Duration  5 (Hours)
 
Final time = time 'Inicial Hour' + interval 'Duration hours'
Ex: Final Time is -- time '20:00' + interval '5 hours' === 01:00  ( OK )
 
The problem comes when I try to add the duration to the Inicial date.
 
If I execute this select:
select '2006-12-04 20:00'::timestamp + interval '5 hours'
the result is ok ( 2006-12-05 01:00:00 )
 
But I only need 2006-12-05 from the resulting timestamp.
 
How to cut only the date from this timestamp?
 
Thanks in advance
 
Alejandro Michelin Salomon


Re: [GENERAL] Problem working with dates and times.

2006-12-04 Thread Raymond O'Donnell
On 4 Dec 2006 at 20:13, Alejandro Michelin Salomon ( Adinet ) wrote:

 But i only need 2006-12-05 from the resulting timestamp.
 How to cut only the date from this timestamp?

Use date_trunc() just to lop off the time part of the timestamp:

http://www.postgresql.org/docs/8.2/static/functions-datetime.html#FUNCTIONS-DATETIME-TRUNC

Alternatively, use to_char() to format the output exactly as you'd 
like it:

http://www.postgresql.org/docs/8.2/static/functions-formatting.html
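
For instance, either of these should work, using the timestamp from the
original post:

```sql
-- Truncate to the start of the day (result is still a timestamp):
SELECT date_trunc('day', timestamp '2006-12-04 20:00' + interval '5 hours');
-- 2006-12-05 00:00:00

-- Or format just the date part as text:
SELECT to_char(timestamp '2006-12-04 20:00' + interval '5 hours', 'YYYY-MM-DD');
-- 2006-12-05
```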


--Ray.


--

Raymond O'Donnell
Director of Music, Galway Cathedral, Galway, Ireland
[EMAIL PROTECTED]
--



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Problem working with dates and times.

2006-12-04 Thread Michael Glaesemann


On Dec 5, 2006, at 8:13 , Alejandro Michelin Salomon (( Adinet )) wrote:


This table has :

Inicial date
Inicial hour
Duration
Final Date
Final time

Final Date and Final time are calculate based in Inicial date,  
Inicial hour, Duration.


snip /


But i only need 2006-12-05 from the resulting timestamp.

How to cut only the date from this timestamp?


# select cast('2006-12-04 20:00'::timestamp + interval '5 hours' as  
date);

date

2006-12-05
(1 row)

That should do what you want.

I'd suggest changing your schema a little bit to remove the derived  
columns.


inicial_timestamp timestamp with time zone
duration interval

I'd also recommend using timestamp with time zone, as it uniquely  
identifies a global time.


Inicial date, inicial hour, final date, and final hour can be derived  
from these two columns, e.g..


select cast('2006-12-04 20:00'::timestamptz + interval '5 hours' as  
time);

   time
--
01:00:00
(1 row)

This also saves you the need to check that final hour and final date  
columns are correct compared to the inicial date, inicial hour, and  
duration columns.
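
With that two-column schema, all four derived values fall out of a single
select (a sketch; the table name tasks is an assumption):

```sql
-- Derive the original five columns from inicial_timestamp + duration;
-- nothing redundant is stored, so nothing can get out of sync.
SELECT inicial_timestamp::date              AS inicial_date,
       inicial_timestamp::time              AS inicial_hour,
       duration,
       (inicial_timestamp + duration)::date AS final_date,
       (inicial_timestamp + duration)::time AS final_time
FROM tasks;
```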


Hope this helps.

Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] n00b RAID + wal hot standby question

2006-12-04 Thread Scott Marlowe
On Mon, 2006-12-04 at 15:11, Anton Melser wrote:
 Hi,
 I am just starting at a company and we are inheriting a previously
 built solution. It looks pretty good but my previous experience with
 pg is seriously small-time compared with this...

OK, how you set up RAID depends largely on how you'll be using your
database.  

First things first though...

Since vacuuming looks to have been turned off for a while, I'd
recommend doing a VACUUM FULL and a reindexdb and seeing if that helps,
then either schedule regular vacuums or let autovacuum handle it from now
on.
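
In SQL terms, the one-time cleanup looks something like this (a sketch;
mydb is a placeholder for your database name, and you must be connected
to that database as a superuser or the owner):

```sql
-- Reclaim bloat left behind by the lapsed vacuuming, and refresh stats:
VACUUM FULL ANALYZE;

-- Rebuild indexes that have bloated along with the tables:
REINDEX DATABASE mydb;
```

Expect both to take exclusive locks, so run them during a maintenance
window.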

After that, you can worry about setting up your RAID.  Generally RAID 10
is better for transactional dbs (airline reservations, e commerce
etc...) while RAID5 or 50 is often better for large reporting
databases.  This depends on your controller of course.  Some can layer
RAID levels and some can't, and some that can probably shouldn't try
(i.e. their performance is really bad when layering RAID levels.)

Putting your xlog on physically separate disks helps IF you are writing
enough to justify it.  If you run bulk updates at 2 in the morning and
the rest of the day 0.1% of your db activity is insert / update then
you would be better spending your time elsewhere.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] n00b RAID + wal hot standby question

2006-12-04 Thread Casey Duncan

On Dec 4, 2006, at 1:11 PM, Anton Melser wrote:


Hi,
I am just starting at a company and we are inheriting a previously
built solution. It looks pretty good but my previous experience with
pg is seriously small-time compared with this...
I am very new at the job, and don't know what hd config we have but it
will be RAID-something I imagine (hey I was working with desktop
servers before this!). If that is very important I can find out. We
seem to be saving our WAL to the same partition as PGDATA, and I
notice that we are maxing out a reasonable looking server. The db is
not very big (~4gig, 400meg pgdump), and though I can't see any vacuum
strategy (autovacuum on a 8.1.4 is disabled), we didn't have as much
consistent CPU usage at my old job (with a 6 gig db and MUCH less CPU
and RAM, and probably as many connections), and my vacuum strategy was
also pitiful!  Sure, completely different environments, but I am
thinking that WAL replication could be a factor.
So my question... being in complete ignorance of how RAID works (the
performance details)... would it be better to try and separate the WAL
destination from PGDATA? How much of a difference could it make?
Should we wait till the customer starts complaining (no explosion in
traffic/db size realistic for the foreseeable future...)?
Any abuse welcome.


When you say maxing out, what do you mean? Posting some vmstat output  
under load would be very helpful (assuming *nix, otherwise can't help  
you ;^).


My very general RAID recommendation would be:

Put the wal on a 2 disk RAID-1 array either by itself or together  
with the system if you expect little disk activity from the rest of  
the system.


Put the data on a RAID-10 array (at least 4 disks, but more would be  
much better).


As for the hardware itself, we've had good luck with 10k rpm WD  
raptor SATA drives + 3ware 95xx raid controllers (the 9550SX being  
preferred due to their smarter caching and higher bandwidth).


The most important thing, though, is to keep as much of the database  
in RAM as possible. That means putting as much RAM in the database  
box as you can afford and not sharing it with other processes (i.e.,  
move other RAM-hungry things to another box). And configure postgres  
to use the available RAM by tuning shared_buffers, work_mem, etc.


Tuning checkpoint_segments and wal_buffers can help with write  
performance.


And running autovacuum (or regular full database vacuums) is very  
important to the performance of the database over time. Otherwise you  
will definitely notice significant performance degradation as the  
garbage in the tables mounts (unless of course the db is read-only).  
If it's been off for a long time (or never run), you'll probably need  
to spend some quality time doing a VACUUM FULL and possibly a  
REINDEX DATABASE first.


-Casey

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] pgsql bug found?

2006-12-04 Thread Michael Glaesemann


On Dec 4, 2006, at 23:52 , Ronin wrote:


Hi, when I run the following function it fills 2 dates per day from 1970
to 2050, except that some months (typically 2 per year) have 4
dates for one day. This is totally freaky... I wonder if PostgreSQL is
tripping over itself, making a double entry every now and again.

for instance I constantly get the following entries

2006-10-01 00:00:00
2006-10-01 23:59:59.999
2006-10-01 00:00:00
2006-10-01 23:59:59.999

Any ideas?

Here the function

DECLARE
yearcnt integer;
monthcnt integer;
daycnt integer;

BEGIN

    FOR yearcnt IN 1970..2050 LOOP
        monthcnt = 1;
        FOR monthcnt IN 1..12 LOOP
            daycnt = 1;
            FOR daycnt IN 1..31 LOOP
                insert into datepool values
                    (to_timestamp(yearcnt||'-'||to_char(monthcnt,'FM00')||'-'||to_char(daycnt,'FM09')||' 00:00:00.000','YYYY MM DD HH24:MI:SS.MS'));

                insert into datepool values
                    (to_timestamp(yearcnt||'-'||to_char(monthcnt,'FM00')||'-'||to_char(daycnt,'FM09')||' 23:59:59.999','YYYY MM DD HH24:MI:SS.MS'));
            END LOOP;
        END LOOP;
    END LOOP;

return;

END;



I think both Martijn and Csaba have the right idea: the inner loop runs
through day 31 for every month, and to_timestamp rolls invalid dates such
as September 31 over into the next month, which produces the duplicate
entries you're seeing. Here's an alternative that should work around
those issues:


create table datepool(pool_ts timestamp primary key);

create function fill_date_range(start_date date, end_date date)
returns void
language plpgsql as $func$
declare
this_date date;
begin
this_date := start_date;
loop
insert into datepool(pool_ts) values (this_date);
insert into datepool(pool_ts) values ((this_date +  
1)::timestamp - interval '.001 second');

exit when this_date = end_date;
this_date := this_date + 1;
end loop;
return;
end;
$func$;

select fill_date_range('1970-01-01','2050-12-31');

# select * from datepool where pool_ts >= '2006-10-01' limit 10;
 pool_ts
-
2006-10-01 00:00:00
2006-10-01 23:59:59.999
2006-10-02 00:00:00
2006-10-02 23:59:59.999
2006-10-03 00:00:00
2006-10-03 23:59:59.999
2006-10-04 00:00:00
2006-10-04 23:59:59.999
2006-10-05 00:00:00
2006-10-05 23:59:59.999
(10 rows)

Hope that helps.

Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] PG Admin

2006-12-04 Thread Bob Pawley
When a row is deleted the serial number and oid are also deleted. The 
sequence then has gaps which are inadmissible.


Bob
- Original Message - 
From: Tony Caduto [EMAIL PROTECTED]

To: Bob Pawley [EMAIL PROTECTED]; pgsql-general@postgresql.org
Sent: Monday, December 04, 2006 10:43 AM
Subject: Re: [GENERAL] PG Admin



Bob Pawley wrote:

That's what they are doing.

That is also what I am looking for, if it is accessable. If so, I can use 
that information to add a sequential numerical element to my information 
that doesn't have the restrictions of a serial column.


Bob


Hi Bob,

Well, if you create your tables WITH OIDs then each row has a OID 
associated with it.


What exactly are the restrictions of a serial column?  It's just standard 
integer value with a default value that calls the nextval function.


Later,

--
Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql
Your best bet for Postgresql Administration





---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] PG Admin

2006-12-04 Thread Raymond O'Donnell
On 4 Dec 2006 at 15:40, Bob Pawley wrote:

 When a row is deleted the serial number and oid are also deleted. The
 sequence then has gaps which are inadmissible.

This is an issue which has come up at various times on this list in 
the past - it may be worth having a look through the archives. Here's 
one recent thread:

http://archives.postgresql.org/pgsql-general/2006-08/msg00535.php


--Ray.


--

Raymond O'Donnell
Director of Music, Galway Cathedral, Galway, Ireland
[EMAIL PROTECTED]
--



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Install/Uninstall Issues

2006-12-04 Thread Andrew Raia
Hoping someone can give me a hand.
 
I had a developer who integrated the postgres msi into an installshield 10
installer. Somehow the uninstaller seemed to have half uninstalled it. All
the files are off the harddrive. When I go to add/remove programs it says no
installer package can be found. When I try to run the msi manually it says
the administrator has policies in effect to prevent this installation.
 
I'm running XP Pro as administrator.
 
Thanks in advance.

-- 
No virus found in this outgoing message.
Checked by AVG Free Edition.
Version: 7.1.409 / Virus Database: 268.15.6/567 - Release Date: 12/4/2006
 


Re: [GENERAL] Install/Uninstall Issues

2006-12-04 Thread Andrew Raia
In case anyone else has this problem: I solved it by searching the registry
for PostgreSQL and deleting everything, then going to a command prompt and
using the SC command to remove the service manually.

   _  

From: Andrew Raia [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 04, 2006 1:18 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Install/Uninstall Issues


Hoping Someone can give me a hand.
 
I had a developer who integrated the postgres msi into an installshield 10
installer. Somehow the uninstaller seemed to have half uninstalled it. All
the files are off the harddrive. When I go to add/remove programs it says no
installer package can be found. When I try to run the msi manually it says
the administrator has policies in effect to prevent this installation.
 
Im running XP Pro as administrator
 
Thanks in advance.





-- 
No virus found in this outgoing message.
Checked by AVG Free Edition.
Version: 7.1.409 / Virus Database: 268.15.6/567 - Release Date: 12/4/2006
 


Re: [GENERAL] PG Admin

2006-12-04 Thread Scott Marlowe
On Mon, 2006-12-04 at 17:53, Raymond O'Donnell wrote:
 On 4 Dec 2006 at 15:40, Bob Pawley wrote:
 
  When a row is deleted the serial number and oid are also deleted. The
  sequence then has gaps which are inadmissible.
 
 This is an issue which has come up at various times on this list in 
 the past - it may be worth having a look through the archives. Here's 
 one recent thread:
 
 http://archives.postgresql.org/pgsql-general/2006-08/msg00535.php

Yeah.  The requirement for no gaps is a real non-starter for any kind of
large data set.

Imagine modelling all the people in the US.  350 million entries.  No
big deal.  I can create that in minutes.

You give each one a number.  Again, no big deal.  A couple of minutes.

For everyone that dies, you remove the name.  No big deal.  A couple of
seconds to run a delete.

For every one that is born, you add it to the list, giving it the next
number.  Again, no big deal.  My workstation could probably handle the
load.

Now, turn that on its head.  Every time you delete someone, you have to
renumber the data set, and for everyone added you have to make sure
there are no gaps.

Suddenly, you've got a problem that could bring even big iron to its
knees.  All because some bureaucrat hadn't the imagination to think
of non-sequential numbering systems.

While there are occasional systems where it is reasonable to actually
have no sequential gaps, most of the time the only justification is I
don't like them.  If that is the case, you should rethink your design. 
If you're stuck with them because of some idiotic rule from on high,
then at least abstract the numbers to some degree to improve performance
and keep you from having to update about half of an entire table several
times a minute.
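
One way to abstract it: store an ordinary surrogate key and derive the
gapless numbering at read time instead of renumbering rows on every
delete. A sketch (the devices table is an assumption; on 8.1, which lacks
window functions, a correlated count stands in for row numbering):

```sql
-- Derive a dense, gapless sequence from the sort order at query time.
-- Deleting a device never requires updating any stored numbers.
SELECT d.id,
       (SELECT count(*) FROM devices d2 WHERE d2.id <= d.id) AS seq_no
FROM devices d
ORDER BY d.id;
```

This is O(n^2) on large tables, but for device lists of modest size it is
far cheaper than rewriting half the table on every delete.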

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] PG Admin

2006-12-04 Thread Bob Pawley
I'm doing something similar - using a control table, dropping and creating 
the serial column and updating in a manner that does the job.


It works - barely. I am seeking a more elegant and stable method. Having a 
simple update recognizing the row numbers (in version 1.6.1) would be 
better - perhaps.


Bob


- Original Message - 
From: Raymond O'Donnell [EMAIL PROTECTED]

To: pgsql-general@postgresql.org
Sent: Monday, December 04, 2006 3:53 PM
Subject: Re: [GENERAL] PG Admin



On 4 Dec 2006 at 15:40, Bob Pawley wrote:


When a row is deleted the serial number and oid are also deleted. The
sequence then has gaps which are inadmissible.


This is an issue which has come up at various times on this list in
the past - it may be worth having a look through the archives. Here's
one recent thread:

http://archives.postgresql.org/pgsql-general/2006-08/msg00535.php


--Ray.


--

Raymond O'Donnell
Director of Music, Galway Cathedral, Galway, Ireland
[EMAIL PROTECTED]
--







---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PG Admin

2006-12-04 Thread Bob Pawley

You're missing the point.

I am creating a design system for industrial control.

The control devices need to be numbered. The numbers need to be sequential. 
If the user deletes a device the numbers need to regenerate to again become 
sequential and gapless.


Bob
- Original Message - 
From: Scott Marlowe [EMAIL PROTECTED]

To: Raymond O'Donnell [EMAIL PROTECTED]
Cc: pgsql general pgsql-general@postgresql.org
Sent: Monday, December 04, 2006 4:09 PM
Subject: Re: [GENERAL] PG Admin



On Mon, 2006-12-04 at 17:53, Raymond O'Donnell wrote:

On 4 Dec 2006 at 15:40, Bob Pawley wrote:

 When a row is deleted the serial number and oid are also deleted. The
 sequence then has gaps which are inadmissible.

This is an issue which has come up at various times on this list in
the past - it may be worth having a look through the archives. Here's
one recent thread:

http://archives.postgresql.org/pgsql-general/2006-08/msg00535.php


Yeah.  The requirement for no gaps is a real no starter for any kind of
large data set.

Imagine modelling all the people in the US.  350 million entries.  No
big deal.  I can create that in minutes.

You each one give a number.  Again, no big deal.  A couple of minutes.

For everyone that dies, you remove the name.  No big deal.  a couple
seconds to run a delete.

For every one that is born, you add it to the list, giving it the next
number.  again, no big deal.  My workstation could probably handle the
load.

Now, turn that on its head.  Every time you delete someone, you have to
renumber the data set, and for everyone added you have to make sure
there are no gaps.

Suddenly, you've got a problem that could bring even big iron to its
knees.  All because some buearocrat (sp) hadn't the imagination to think
of non-sequential numbering systems.

While there are occasional systems where it is reasonable to actually
have no sequential gaps, most of the time the only justification is I
don't like them.  If that is the case, you should rethink your design.
If you're stuck with them because of some idiotic rule from on high,
then at least abstract the numbers to some degree to improve performance
and keep you from having to update about half of an entire table several
times a minute.





---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] PG Admin

2006-12-04 Thread Steve Atkins


On Dec 4, 2006, at 3:53 PM, Raymond O'Donnell wrote:


On 4 Dec 2006 at 15:40, Bob Pawley wrote:


When a row is deleted the serial number and oid are also deleted. The
sequence then has gaps which are inadmissible.


This is an issue which has come up at various times on this list in
the past - it may be worth having a look through the archives. Here's
one recent thread:

http://archives.postgresql.org/pgsql-general/2006-08/msg00535.php


Or, better, http://archives.postgresql.org/pgsql-general/2006-09/msg00948.php


I'm sure that PG Admin just generates the numbers in the GUI as it
displays them (as they're meaningless as persistent data).

Cheers,
  Steve


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] PG Admin

2006-12-04 Thread Bob Pawley

Perhaps - but they aren't necessarily meaningless as pure information.

Bob


- Original Message - 
From: Steve Atkins [EMAIL PROTECTED]

To: PgSQL General pgsql-general@postgresql.org
Sent: Monday, December 04, 2006 4:33 PM
Subject: Re: [GENERAL] PG Admin




On Dec 4, 2006, at 3:53 PM, Raymond O'Donnell wrote:


On 4 Dec 2006 at 15:40, Bob Pawley wrote:


When a row is deleted the serial number and oid are also deleted. The
sequence then has gaps which are inadmissible.


This is an issue which has come up at various times on this list in
the past - it may be worth having a look through the archives. Here's
one recent thread:

http://archives.postgresql.org/pgsql-general/2006-08/msg00535.php


Or, better, http://archives.postgresql.org/pgsql-general/2006-09/msg00948.php


I'm sure that PG Admin just generates the numbers in the GUI as it
displays them (as they're meaningless as persistent data).

Cheers,
  Steve


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] PG Admin

2006-12-04 Thread Richard Broersma Jr
 You're missing the point.
 I am creating a design system for industrial control.
 The control devices need to be numbered. The numbers need to be sequential. 
 If the user deletes a device the numbers need to regenerate to again become 
 sequential and gapless.

Is it a bill of material line number, an ISA instrument number, or a JIC 
component tag number?

Regards,

Richard Broersma Jr.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] unaccent as stored procedure?

2006-12-04 Thread Stephen Woodbridge

Hi all,

I was wondering if anyone has unac.c, which is the library used in 
Text::Unaccent, built and wrapped as a PL/pgSQL stored procedure not using 
PL/Perl. Or maybe there is another general solution that I am not aware of.


Thanks,
  -Steve
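[One partial workaround, without unac.c: PostgreSQL's built-in to_ascii()
strips accents for a few single-byte server encodings (LATIN1, LATIN2,
LATIN9, WIN1250), so in a database using one of those encodings a plain
query may be enough. This is a limited substitute, not a general solution:]

```sql
-- Only works when the server encoding is LATIN1/LATIN2/LATIN9/WIN1250;
-- a partial stand-in for Text::Unaccent, not a full replacement:
SELECT to_ascii('Café');
```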

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PG Admin

2006-12-04 Thread Berend Tober

Bob Pawley wrote:

You're missing the point.

I am creating a design system for industrial control.

The control devices need to be numbered. The numbers need to be 
sequential. If the user deletes a device the numbers need to 
regenerate to again become sequential and gapless.
Could you explain what it is about industrial control that requires the 
reassignment of numbers? It seems to me to make for confusion, because over 
time you then have a particular instrument referred to by different 
identifiers. So if you had other data, such as written logs, shop floor 
design diagrams, or other data not included in the database, for 
example, you'd have the problem of keeping track of which instruments 
were really being talked about, because the names (identifying numbers, 
that is) keep changing.



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] PG Admin

2006-12-04 Thread Adrian Klaver
On Monday 04 December 2006 04:17 pm, Bob Pawley wrote:
 You're missing the point.

 I am creating a design system for industrial control.

 The control devices need to be numbered. The numbers need to be sequential.
 If the user deletes a device the numbers need to regenerate to again become
 sequential and gapless.

 Bob
I am trying to figure out how you keep track of the physical devices. Do they get 
renumbered also?

-- 
Adrian Klaver   
[EMAIL PROTECTED]

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] PG Admin

2006-12-04 Thread Bob Pawley

I am talking about designing the control system.

No one makes a perfect design at first go. Devices are deleted and others 
added. Until the end of the design stage the numbers need to be sequential 
with no gaps. After the design the numbers of each device are static and new 
devices are added to the sequence or fill in for abandoned devices - but 
that is another, separate problem.


But that is beside the point. What I am looking for is a gapless sequence 
generator which has the ability to adjust for deletions as well as 
additions.


What I am looking for is a very simple adaptation of the serial function. 
All that I need it to do is adjust for design changes and not care 
if it is reassigned to a different device. The fact that a particular device 
may, by happenstance, change its assigned number - once, twice, or multiple 
times during the design stage - is of no consequence, as long as the 
totality of numbers assigned is sequential and gapless.


Bob


- Original Message - 
From: Berend Tober [EMAIL PROTECTED]

To: Bob Pawley [EMAIL PROTECTED]
Cc: pgsql general pgsql-general@postgresql.org
Sent: Monday, December 04, 2006 7:15 PM
Subject: Re: [GENERAL] PG Admin



Bob Pawley wrote:

You're missing the point.

I am creating a design system for industrial control.

The control devices need to be numbered. The numbers need to be 
sequential. If the user deletes a device the numbers need to regenerate 
to again become sequential and gapless.
Could you explain what it is about industrial control that requires the 
reassignment of numbers? Seems to me to make for confusion because over 
time, you then have a particular instrument referred to by different 
identifiers. So if you had other data, such as written logs, shop floor 
design diagrams, or other data not included in the data base, for example, 
you'd have the problem of keeping track of which instruments were really 
being talked about because the names (identifying number, that is) keep 
changing.






---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] PG Admin

2006-12-04 Thread Adrian Klaver

 - Original Message -
 From: Berend Tober [EMAIL PROTECTED]
 To: Bob Pawley [EMAIL PROTECTED]
 Cc: pgsql general pgsql-general@postgresql.org
 Sent: Monday, December 04, 2006 7:15 PM
 Subject: Re: [GENERAL] PG Admin

  Bob Pawley wrote:
  You're missing the point.
 
  I am creating a design system for industrial control.
 
  The control devices need to be numbered. The numbers need to be
  sequential. If the user deletes a device the numbers need to regenerate
  to again become sequential and gapless.
 
  Could you explain what it is about industrial control that requires the
  reassignment of numbers? Seems to me to make for confusion because over
  time, you then have a particular instrument referred to by different
  identifiers. So if you had other data, such as written logs, shop floor
  design diagrams, or other data not included in the data base, for
  example, you'd have the problem of keeping track of which instruments
  were really being talked about because the names (identifying number,
  that is) keep changing.

 ---(end of broadcast)---
 TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match
On Monday 04 December 2006 07:39 pm, Bob Pawley wrote:
 I am talking about designing the control system.

 No one makes a perfect design at first go. Devices are deleted and others
 added. Until the end of the design stage the numbers need to be sequential
 with no gaps. After the design the numbers of each device are static and
 new devices are added to the sequence or fill in for abandoned devices -
 but that is another, separate problem.

 But that is beside the point. What I am looking for is a gapless sequence
 generator which has the ability to adjust for deletions as well as
 additions.

 What I am looking for is a very simple adaptation of the serial function.
 All that I need it to do is adjust for design changes and not care
 if it is reassigned to a different device. The fact that a particular
 device may, by happenstance, change its assigned number - once, twice, or
 multiple times during the design stage - is of no consequence, as long as
 the totality of numbers assigned is sequential and gapless.

 Bob

I see now. My thought would be to hold the device numbers in a regular integer 
column. Have an AFTER INSERT/UPDATE/DELETE trigger that does a count on the 
table and renumbers all the rows in the id column. This is the brute-force 
method. The alternative would be to search for the gaps and renumber from the 
first gap up.
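A minimal sketch of the brute-force method just described (table and column
names are hypothetical: a "devices" table with serial key device_id and
integer column device_no; the trigger fires only on INSERT and DELETE, since
firing it on UPDATE would recurse on its own renumbering UPDATE):

```sql
CREATE OR REPLACE FUNCTION renumber_devices() RETURNS trigger AS $$
BEGIN
    -- Reassign 1..n in device_id order after every change.
    UPDATE devices d
       SET device_no = (SELECT count(*)
                          FROM devices x
                         WHERE x.device_id <= d.device_id);
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER devices_renumber
    AFTER INSERT OR DELETE ON devices
    FOR EACH STATEMENT EXECUTE PROCEDURE renumber_devices();
```

As the post says, this is brute force: every insert or delete rewrites the
whole device_no column, which is fine for dozens of rows but not millions.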


-- 
Adrian Klaver   
[EMAIL PROTECTED]

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] PG Admin

2006-12-04 Thread Michael Glaesemann


On Dec 5, 2006, at 13:08 , Bob Pawley wrote:

The physical devices don't get numbered until the design is  
established and stable. This is known as the construction stage.


I guess I would set up a couple of tables to track this ordering  
independently of the devices themselves. Rough schema:


create table devices
(
device_id serial primary key
, device_name text not null unique
);

create table plans
(
plan_id serial primary key
, plan_name text not null unique
);

create table plan_devices
(
plan_id integer not null
references plans
, device_id integer not null
references devices
, device_order serial not null
, unique (plan_id, device_id)
, unique (plan_id, device_order)
);

This is based on the idea that every time you make a change  
to the plan, it's in essence a new plan. You insert a new plan into  
plans, reset the plan_devices_device_order_seq (created by the  
device_order serial column), and insert the devices for the new plan  
into plan_devices in the order they should appear. Of course, sequences  
aren't transaction-safe, but unless others are pulling from the  
sequence while the new devices are being assigned to the plan, it  
should be safe. You can also check the integrity of the device_order  
column after the insert to make sure it's gapless.
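A sketch of what creating a new plan revision might look like with the
schema above (the sequence name follows PostgreSQL's default serial naming;
the plan name and device ids are illustrative):

```sql
BEGIN;
INSERT INTO plans (plan_name) VALUES ('rev 2');

-- Restart the order counter so the new plan numbers from 1.
SELECT setval('plan_devices_device_order_seq', 1, false);

-- Insert devices in the desired order; device_order fills in 1, 2, ...
INSERT INTO plan_devices (plan_id, device_id)
    VALUES (currval('plans_plan_id_seq'), 1);
INSERT INTO plan_devices (plan_id, device_id)
    VALUES (currval('plans_plan_id_seq'), 2);
COMMIT;
```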


Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[Fwd: Re: [GENERAL] PG Admin]

2006-12-04 Thread Guy Rouillier

Bob Pawley wrote:

Your missing the point.

I am creating a design system for industrial control.

The control devices need to be numbered. The numbers need to be 
sequential. If the user deletes a device the numbers need to regenerate 
to again become sequential and gapless.


How many control devices are there, and how often do deletes happen?  If
there are only 30 devices, and deletes only happen on the order of one
per week, then you could simply run a delete trigger to renumber them.

--
Guy Rouillier

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] logs not ending up at Syslog

2006-12-04 Thread nicky



brian wrote:

nicky wrote:

Hello All,

I'm trying to configure my PostgreSQL 8.1.x (on FreeBSD 6-STABLE) to 
log through syslog, yet somehow my logging doesn't end up in the 
specified log files.
I'm kind of at a loss as to where I messed up. I'm hoping someone can 
help me fix it.


Below are uncommented lines in my postgresql.conf file, related to 
logging:


log_destination = 'syslog'

# These are relevant when logging to syslog:
syslog_facility = 'LOCAL0'
syslog_ident = 'postgres'

log_min_messages = debug5          # values, in order of decreasing detail
log_min_duration_statement = 0     # -1 is disabled, 0 logs all statements
                                   # and their durations, in milliseconds

silent_mode = on

log_connections = on
log_disconnections = on
log_duration = on

log_statement = 'all'   # none, mod, ddl, all


And i've appended the line below to my /etc/syslog.conf

local0.*    /var/log/pgsql


Restarted both PostgreSQL and syslog, yet, nothing gets displayed in 
/var/log/pgsql.





What's in /usr/local/etc/rc.d/postgresql? Does it specify anything  
for logging? If so, it's likely to have something like 
PGLOG="$PGDATA/postgres.log".


brian


I don't see any reference to logging in my start script (appended at the 
end). I'm using the default flags. The only flag i see that might be 
interesting is the -s. However, removing it doesn't change a thing.



Nick

---

# $FreeBSD: ports/databases/postgresql81-server/files/pgsql.sh.tmpl,v 
1.22 2006/05/23 21:18:58 girgen Exp $

#
# PROVIDE: postgresql
# REQUIRE: LOGIN
# KEYWORD: shutdown
#
# Add the following line to /etc/rc.conf to enable PostgreSQL:
#
#  postgresql_enable="YES"
#  # optional
#  postgresql_data="/usr/local/pgsql/data"
#  postgresql_flags="-w -s -m fast"
#
# This script takes one of the following commands:
#
#   start stop restart reload status initdb
#
# For postmaster startup options, edit ${postgresql_data}/postgresql.conf

prefix=/usr/local
command=${prefix}/bin/pg_ctl

. /etc/rc.subr

load_rc_config postgresql

# set defaults
postgresql_enable=${postgresql_enable:-"NO"}
postgresql_flags=${postgresql_flags:-"-w -s -m fast"}
postgresql_user=pgsql
eval postgresql_data=${postgresql_data:-"~${postgresql_user}/data"}
postgresql_class=${postgresql_class:-"default"}

name=postgresql
rcvar=`set_rcvar`
command_args="-D ${postgresql_data} ${postgresql_flags}"
extra_commands="reload initdb"

start_cmd="postgresql_command start"
stop_cmd="postgresql_command stop"
restart_cmd="postgresql_command restart"
reload_cmd="postgresql_command reload"
status_cmd="postgresql_command status"

initdb_cmd=postgresql_initdb

postgresql_command()
{
   su -l ${postgresql_user} -c "exec ${command} ${command_args} ${rc_arg}"
}

postgresql_initdb()
{
   su -l -c ${postgresql_class} ${postgresql_user} -c "exec ${prefix}/bin/initdb -D ${postgresql_data}"
}

run_rc_command $1


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster