[GENERAL] Acting on dropped/timed-out connections

2005-09-08 Thread Milen A. Radev
Hi list!

We have a daemon programme that acts as a pgsql client. It writes in a
DB the status of its own clients. And we have a different daemon that
needs to read that status information and decide upon it. The problem
is that the first daemon is a little fragile and from time to time it
crashes. I need to be able to monitor its connection to the PgSQL and
if it drops (times-out) to mark the status of all of its clients as
unavailable or unknown.

At first I thought about attaching a trigger to pg_stat_activity
table, so on delete I could take action. The only problem was it is
not a table.

Any ideas?


-- 
Milen A. Radev

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Cost based SELECT/UPDATE

2005-09-08 Thread Richard Huxton

Leonid Safronie wrote:

Hi, ppl

Is there any way to do SELECTs with different priorities?



The issue is that response time for
these 50 processes is very important unlike for report generation, and
time spent by these processes while report running is unacceptable for
my production environment (response time grows from 1-3 seconds up to
1-2 minutes).


The most important question is why the response time drops so much. Does it 
look like disk I/O is the problem?


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] Access management for DB project.

2005-09-08 Thread Bohdan Linda

Hi,

I have started thinking about some security access management. Basically,
imagine this scenario with three kinds of users:

1) Writer only inserts into a "black hole" table.

2) Reader only runs reports on the inserted data and cannot modify or add
anything.

3) Maintainer can run a task on the data, but cannot read or add anything
directly. The task itself has to have read/write access to the tables.

The first two types are easily solvable, but I have a problem with the third.
I created the task in plpgsql and granted a user permission to execute it,
but revoked all of his rights on the tables. Naturally, the task failed.

The task sits in a different schema than the tables it operates on.

How would you solve this?

Regards,
Bohdan 

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] EMS PostgreSQL Manager vs. TheKompany DataArchitect

2005-09-08 Thread Wolfgang Keller
Hello,

I've searched for and tried nearly every database administration/design
tool available, and it seems my choice has narrowed down to these two.
Now a second opinion would be nice to have. Has anyone used one, or
better both, of these? What do you think about them?

TIA,

Sincerely,

Wolfgang Keller

-- 
P.S.: My From-address is correct



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Access management for DB project.

2005-09-08 Thread Martijn van Oosterhout
On Thu, Sep 08, 2005 at 12:08:25PM +0200, Bohdan Linda wrote:
 The first 2 types are easily solvable, but with the third type I have
 problem. I have created task in plpgsql, I granted permissions to an user
 to execute the task, but revoked on him all rights to tables. Logically
 task failed.

You're looking for the SECURITY DEFINER attribute. It causes the
procedure to run with the permissions of the user who created it,
rather than the user who runs it.
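
A minimal sketch of such a function, purely for illustration (the schema,
table, and role names below are made up):

CREATE FUNCTION run_maintenance() RETURNS void AS '
BEGIN
    -- Runs with the privileges of the function owner, so the caller
    -- needs no direct rights on the underlying tables.
    UPDATE blackhole.events SET processed = true WHERE NOT processed;
    RETURN;
END;
' LANGUAGE plpgsql SECURITY DEFINER;

REVOKE ALL ON FUNCTION run_maintenance() FROM PUBLIC;
GRANT EXECUTE ON FUNCTION run_maintenance() TO maintainer;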

Hope this helps,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [GENERAL] Access management for DB project.

2005-09-08 Thread Adam Witney
On 8/9/05 11:08 am, Bohdan Linda [EMAIL PROTECTED] wrote:

 
 Hi,
 
 I started thinking of some security access management. Basically imagine
 this scenario according users:
 
 1) Writer does only inserts to black hole.
 
 2) Reader does only reports on inserted data, cannot modify or add
 anything 
 
 3) Maintainer can run a task on the data, but cannot read or add anything.
 The task has to have read/write access to the tables.
 
 The first 2 types are easily solvable, but with the third type I have
 problem. I have created task in plpgsql, I granted permissions to an user
 to execute the task, but revoked on him all rights to tables. Logically
 task failed.

You could create the function with SECURITY DEFINER, that way the function
will have the permissions of the user that creates it as opposed to the user
that runs it

CREATE FUNCTION my_func(int) RETURNS int SECURITY DEFINER AS '...' LANGUAGE plpgsql;



-- 


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PostgreSQL and XML support

2005-09-08 Thread Andrey Fomichev
Thank you Doug, Oleg for your responses.
I hoped there would be much more answers,
but... thanks anyway

Friendly,
Andrey

 -Original Message-
 From: Oleg Bartunov [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, September 06, 2005 11:54 PM
 To: Doug Bloebaum
 Cc: Andrey Fomichev; pgsql-general@postgresql.org
 Subject: Re: [GENERAL] PostgreSQL and XML support


 I have XML support in PostgreSQL in my Todo
 http://www.sai.msu.su/~megera/oddmuse/index.cgi/todo
 Hopefully, we'll have something for 8.2

   Oleg
 On Tue, 6 Sep 2005, Doug Bloebaum wrote:

  On 9/6/05, Andrey Fomichev [EMAIL PROTECTED] wrote:
 
  - Are there any of you who need to store and query XML data?
 
  - Do you already use PostgreSQL to store XML data or you are
  just thinking about it?
 
 
  The project I'm currently working on uses XML for both data
 extraction from
  Postgres and as a means to transmit data to Postgres for storage.
 
  - If you store XML data in PostgreSQL, how do you do it? What tool
  do you use?
 
 
  We're using Oracle's XSQL servlet (
 
 http://download-west.oracle.com/docs/cd/B14117_01/appdev.101/b1079
4/adx09xsq.htm)
 with Apache tomcat (http://jakarta.apache.org/tomcat/) as its servlet
 container. The only change we've made to XSQL is the addition of a custom
 tag (xsql:pg-func) in order to make use of Postgres functions returning
 REF CURSOR. Aside from that, we're using XSQL out-of-the-box.

 Granted, we're not really manipulating XML within the database, rather
we're
 using XML as a sort of database-neutral interface.

 - Do you need some advanced query capabilities? Like XQuery, XPath 2.0


 No.

 - Do you need some advanced update capabilities? Like node-level updates


 No.


Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Partial commit within the transaction

2005-09-08 Thread Bohdan Linda


Hello,


I have read that the 7.x versions had an all-or-nothing model for
transactions, so I upgraded to version 8 and would like to do the following:

Some plpgsql code does time-intensive data manipulation plus database
vacuuming. This data manipulation is split logically into several steps.
After each step I would like to write a message to a status table describing
what the procedure is currently doing.

Currently I pass the information to the table via INSERT, but that is also
the limitation: I would like to let another user see the progress of the
running plpgsql procedure, but no insert is committed until the procedure
ends.

How can this be solved?

Regards,
Bohdan

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Acting on dropped/timed-out connections

2005-09-08 Thread Roman Neuhauser
# [EMAIL PROTECTED] / 2005-09-08 11:31:25 +0300:
 We have a daemon programme that acts as a pgsql client. It writes in a
 DB the status of its own clients. And we have a different daemon that
 needs to read that status information and decide upon it. The problem
 is that the first daemon is a little fragile and from time to time it
 crashes. I need to be able to monitor its connection to the PgSQL and
 if it drops (times-out) to mark the status of all of its clients as
 unavailable or unknown.

Do you prefer knowing that the daemon failed, or having it
automatically restarted the very same moment it crashes? If the
latter, take a look at djb's daemontools, or one of its open source
lookalikes (http://smarden.org/runit/,
http://offog.org/code/freedt.html).

-- 
How many Vietnam vets does it take to screw in a light bulb?
You don't know, man.  You don't KNOW.
Cause you weren't THERE. http://bash.org/?255991

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Acting on dropped/timed-out connections

2005-09-08 Thread Milen A. Radev
On 08/09/05, Roman Neuhauser [EMAIL PROTECTED] wrote:
 # [EMAIL PROTECTED] / 2005-09-08 11:31:25 +0300:
  We have a daemon programme that acts as a pgsql client. It writes in a
  DB the status of its own clients. And we have a different daemon that
  needs to read that status information and decide upon it. The problem
  is that the first daemon is a little fragile and from time to time it
  crashes. I need to be able to monitor its connection to the PgSQL and
  if it drops (times-out) to mark the status of all of its clients as
  unavailable or unknown.
 
 Do you prefer knowing that the daemon failed, or having it
 automaticly restarted the very same moment it crashes? If the
 latter, take a look at djb's daemontools, or one of its open source
 lookalikes (http://smarden.org/runit/,
 http://offog.org/code/freedt.html).
 

It's the former - in this case I'm not responsible for (read: I don't
care about) the daemon. I care about the (reasonable) veracity of the
data in the DB.
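
Lacking a trigger on pg_stat_activity, the reading daemon (or a cron job)
can simply poll. A rough sketch, assuming the writer daemon connects as a
dedicated role (hypothetically 'status_writer') and the statuses live in a
hypothetical client_status table:

-- is the writer's backend still there?
SELECT count(*) FROM pg_stat_activity WHERE usename = 'status_writer';

-- if that returns 0, mark all of its clients as stale:
UPDATE client_status SET status = 'unknown' WHERE status <> 'unknown';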


-- 
Milen A. Radev

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] How to write jobs in postgresql

2005-09-08 Thread Karsten Hilbert
  3.  An integrated way of logging what ran when (rather than either
  stuffing logging code into each cron job or rummaging thru
  cron logs)
Cron can log to syslog.

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] EMS PostgreSQL Manager vs. TheKompany DataArchitect

2005-09-08 Thread Axel Straschil
Hello!

 have.  Has  anyone  used  one, better both of these? What do you think
 about them?

Both have trials - try it ;-)
Im using EMS, like it, but for all features you have to use windows.

Lg,
AXEL.
-- 
Gentoo? Debian? RedHat? SuSE? *BSD? Stop the distri-war, make little user!


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Partial commit within the transaction

2005-09-08 Thread Michael Fuhr
On Thu, Sep 08, 2005 at 01:23:56PM +0200, Bohdan Linda wrote:
 plpgsq code does time intensive data manipulation + database vacuuming.
 This data manipulation is splitted logically into several steps. After
 each step I would like to give a message to the status table, what the
 procedure is performing.
 
 Currently I pass the information to the table via insert, but this is also
 the limitation. I would like to allow another user see the progress of the
 current pgplsql procedure, but no insert is commited, till procedure ends.
 
 How this can be solved? 

One way would be to use contrib/dblink to open another connection
to the database so the status messages could be inserted in a
separate transaction.
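
A rough sketch of that pattern from inside a plpgsql function, assuming
contrib/dblink is installed (the connection string and status table below
are placeholders):

PERFORM dblink_exec('dbname=mydb user=status_writer',
    'INSERT INTO job_status(step, logged_at) VALUES (''step 1 done'', now())');
-- the INSERT runs on its own connection and commits immediately,
-- independently of the surrounding transaction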

-- 
Michael Fuhr

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] change column data type from smallint to integer

2005-09-08 Thread [EMAIL PROTECTED]

Alvaro Herrera wrote:

On Thu, Sep 08, 2005 at 04:22:07AM +0200, [EMAIL PROTECTED] wrote:

Hi,

we ran out of space in one of our columns, which is smallint, and we
need to make it integer.

I did some research and found out that the only way is to create a new
column with the integer data type, then SET new = old, then drop the old
column and rename the new one like the old [1].

Could somebody confirm whether this is really the best way and whether it
is safe with respect to indexes, RI, triggers, stored procedures, etc.


In 8.0, you can alter the type directly in the table.  In releases
before 8.0, you have found the right workaround.
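
In concrete terms, with a hypothetical table and column, that is:

-- 8.0 and later:
ALTER TABLE measurements ALTER COLUMN reading TYPE integer;

-- the pre-8.0 workaround described above:
ALTER TABLE measurements ADD COLUMN reading_new integer;
UPDATE measurements SET reading_new = reading;
ALTER TABLE measurements DROP COLUMN reading;
ALTER TABLE measurements RENAME COLUMN reading_new TO reading;
-- (indexes, constraints and triggers on the old column must be recreated)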


Thank you, Alvaro, for the confirmation!


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[EMAIL PROTECTED]: Re: [GENERAL] Access management for DB project.]

2005-09-08 Thread Bohdan Linda

Thanks guys,
That was exactly what I was looking for.

B.
---BeginMessage---
On 8/9/05 11:08 am, Bohdan Linda [EMAIL PROTECTED] wrote:

 
 Hi,
 
 I started thinking of some security access management. Basically imagine
 this scenario according users:
 
 1) Writer does only inserts to black hole.
 
 2) Reader does only reports on inserted data, cannot modify or add
 anything 
 
 3) Maintainer can run a task on the data, but cannot read or add anything.
 The task has to have read/write access to the tables.
 
 The first 2 types are easily solvable, but with the third type I have
 problem. I have created task in plpgsql, I granted permissions to an user
 to execute the task, but revoked on him all rights to tables. Logically
 task failed.

You could create the function with SECURITY DEFINER, that way the function
will have the permissions of the user that creates it as opposed to the user
that runs it

CREATE FUNCTION my_func(int) RETURNS int SECURITY DEFINER AS '...' LANGUAGE plpgsql;



-- 
---End Message---

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Bash script to update sequences

2005-09-08 Thread Matt
Hi all,

I've recently been using some older versions of postgres on linux
systems, and one of the most annoying problems I've come across has to
do with sequence values not being updated when a database backup is
restored.  This is because the insert and copy routines used to restore
the data into tables do not trigger the nextval function on insert, and
so you get a lot of annoying errors about duplicate keys if your ID
fields are based on sequences.

Anyway, if you know what I'm talking about then I'm sure you understand
how annoying it is.  I'm pretty sure the problem is fixed now in
postgres 8, however older systems such as 7.4 and 7.3.10 do not
correctly update sequence values when restoring from backups.

So in short (or not so short), I've written a simple bash script that
will connect to a given database, get a list of all the sequences for
that database, and then update the current value of each sequence to be
one greater than the max value of the corresponding ID field for that
table.

The script should be run as your postgres user on a linux-ish system
with bash.
Code follows. Watch out for wrapped lines when copying (!).

-
#!/bin/bash

usage="Description:\nA script to update every sequence in a postgres database to the current\nvalue of the count of the number of rows in a corresponding table.\nNB: this script assumes that the sequence includes the name of the table\nat the start of the sequence name, and parses the sequence name as such.\n"

if [ $# -lt 1 ];
then
        echo -e "$0: Error: Please enter the name of the database to connect to. \n"
        echo -e "Usage: $0 database\n"
        echo -e "$usage"
        exit 1;
fi


database=$1
# list all sequences in the database
sequences=`echo "\ds" | psql -t ${database} | cut -d "|" -f2`

for i in $sequences
do
        # the table name is assumed to be the part of the sequence name before the first "_"
        table=`echo $i | cut -d "_" -f1`
        idfield=`echo "\d \"${table}\"" | psql -t ${database} | grep $i | cut -d "|" -f1 | sed -e 's/ //g'`

        countsql="select max(\"$idfield\") from \"$table\""
        tablecount=`echo $countsql | psql -t ${database} | sed -e 's/ //g'`

        # set the sequence to one greater than the current maximum of the ID column
        updateseq="select setval('\"$i\"', (select max(\"$idfield\") from \"$table\")+1);"

        newval=`echo $updateseq | psql -t ${database} | sed -e 's/ //g'`
        echo "Table $table: $idfield has sequence: $i, with max value: $tablecount, updated to $newval"
done
-

Hopefully this may solve someone some grief.

Cheers,
Matt


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Partial commit within the transaction

2005-09-08 Thread Bohdan Linda
On Thu, Sep 08, 2005 at 02:53:47PM +0200, Michael Fuhr wrote:
 One way would be to use contrib/dblink to open another connection
 to the database so the status messages could be inserted in a
 separate transaction.

This could do the trick for logging, and even writing a package that does
all that should not be hard. But what if you want to flush something already
processed to the db? Consider that you are doing massive updates/deletes,
again in logical blocks. As a programmer you may decide: OK, so far I am
done, and even if I crash I want to preserve these changes. It has happened
to me that the db aborted such huge updates with an out-of-memory message.
Would calling a stored procedure from a stored procedure solve this? Or if
the parent procedure is not committed, will the called procedure not commit
either?

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] back references using regex

2005-09-08 Thread Michael Fuhr
[Please copy the mailing list on replies so others can participate
in and learn from the discussion.]

On Wed, Sep 07, 2005 at 10:40:22PM -0700, Matthew Peter wrote:
 I did read the docs ;)  I always do. The question I
 really wanted answered is how to reference the back
 references in my regular expressions parentheses. Like
 the 2nd position or 4th from a group. Like \2 or $2.
 Can I do this in postgres in the query?

Are you looking for something like this?

SELECT substring('abc.foo.foo.xyz' FROM '(([[:alpha:]]+)\\.\\2)');
 substring 
---
 foo.foo
(1 row)

That is, one or more alphabetic characters followed by a dot followed
by the same set of characters (this is a simplistic example: it would
also match 'foo.oog' and return 'oo.oo').

Note that the back reference is \2 because it refers to the inner
set of parentheses (i.e., the subexpression with the second opening
parenthesis); the outer set is used here for capturing.  And again,
note the escaped backslashes because we're using ordinary quotes.
With dollar quotes the above query would be:

SELECT substring('abc.foo.foo.xyz' FROM $$(([[:alpha:]]+)\.\2)$$);

-- 
Michael Fuhr

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] pg_restore - authentication failed?

2005-09-08 Thread Zlatko Matić

Hello.
I have a strange problem when trying to use pg_restore to restore data from 
a backup, on WIN XP, PostgreSQL 8.0.3.


If I use PgAdmin it works, but if I copy the command to a .bat file,
authentication fails although I enter the correct password.

The .bat script is the following:

cd D:\Program Files\PostgreSQL\8.0\bin
pg_restore.exe -i -h localhost -p 5432 -U postgres -d 
MYDATABASE -a --disable-triggers -t mytablename -v 
C:\BACKUP\MYDATABASE_DATA_BCP_Fc.fcbackup


and the error is:
pg_restore: [archiver (db)] connection to database MYDATABASE failed: 
FATAL: password authentication failed for user postgres


I also tried to pass the password via the pgpass.conf file, but without
result. If I use PgAdmin, it works.


What could be the reason ? Is this a bug ?

Zlatko 



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Bash script to update sequences

2005-09-08 Thread Tino Wildenhain

Matt schrieb:

Hi all,

I've recently been using some older version of postgres on linux
systems, and one of the most annoying problems i've come across is to
do with sequence values not being updated when a database backup is
restored.  This is because the insert and copy routines used to restore
the data into tables do not trigger the nextval function on insert, and
so you get a lot of annoying errors about duplicate keys if your ID
fields are based on sequences.

Anyway, if you know what im talking about then im sure you understand
how annoying it is.  I'm pretty sure the problem is fixed now in
postgres 8, however older systems such as 7.4 and 7.3.10 do not
correctly update sequence values when restoring from backups.

So in short (or not so short), i've written a simple bash script that
will connect to a given database, get a list of all the sequences for
that database, and then update the current value of the sequence to be
one greater than the max value of the corresponding ID field for that
table.

The script should be run as your postgres user on a linux-ish system
with bash.
Code follows. Watch out for wrapped sentences when copying (!).


Actually sequences have always been updated by pg_dump/restore.

How do you do your restore to avoid this?

Regards
Tino

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] EMS PostgreSQL Manager vs. TheKompany DataArchitect

2005-09-08 Thread Wolfgang Keller
Hello,

 Both have trials - try it ;-)

That's what I've done, as I mentioned. But sometimes others know/see
things that I don't see/know, at least not within a restricted trial
period.

Sincerely,

Wolfgang Keller

-- 
P.S.: My From-address is correct



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Partial commit within the transaction

2005-09-08 Thread Michael Fuhr
On Thu, Sep 08, 2005 at 03:39:50PM +0200, Bohdan Linda wrote:
 On Thu, Sep 08, 2005 at 02:53:47PM +0200, Michael Fuhr wrote:
  One way would be to use contrib/dblink to open another connection
  to the database so the status messages could be inserted in a
  separate transaction.
 
 This could do the trick for logging, even writting of a package that would
 do all the stuff should not be hard. But what  if you want to flush
 something processed to db. Consider you are doing massive updates/deletes.
 Again in logical blocks. You as a programmer may decide: ok so far I am
 done and even if I crash I want to preserve these changes. It happened
 me, that db aborted processing such huge updates with out of memory
 message.  Would calling stored procedure from stored procedure solved
 this? Or if parent procedure is not commited then even called procedure
 will not commit?

Functions are executed in the context of an outer transaction, so
if that outer transaction fails then the function's changes will
be rolled back.  Another implication of this is that functions can't
start or commit/rollback transactions because they're already inside
a transaction, although in 8.x they can use exception handlers to
do partial rollbacks (i.e., functions can use savepoints, albeit
not directly with a SAVEPOINT statement).

If you want changes to survive a database or system crash then
you'll have to commit them.  Since server-side functions can't start
or commit transactions, you'll have to do those commits with client
code, either from a client application or by using dblink or something
similar from a server-side function, effectively making the server-side
function a client application.

This might not be what you're after, but 8.1 will have two-phase
commit, which is a way to tell the database "get ready to commit,
but don't actually do it until I tell you."  Those prepared commits
will survive a crash, so after you recover you can say, "Remember
that transaction that you prepared before the crash?  Go ahead and
commit it now."  You have to do some extra bookkeeping and you can't
commit several prepared transactions atomically (as far as I know),
but that's one way you could make changes durable without actually
committing them until later.
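
For illustration, the 8.1 syntax looks roughly like this (the table and
the transaction identifier are made up):

BEGIN;
UPDATE big_table SET processed = true WHERE batch = 1;
PREPARE TRANSACTION 'batch_1';    -- durable; survives a crash
-- ... later, possibly after a restart or from another session:
COMMIT PREPARED 'batch_1';        -- or ROLLBACK PREPARED 'batch_1'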

-- 
Michael Fuhr

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] 8.1beta timezone question

2005-09-08 Thread Bricklen Anderson
I may have missed it in the docs, but were certain timestamp abbreviations
phased out between 8.0.3 and 8.1 beta1?

eg.
(8.0.3)
#SELECT TIMESTAMP '2001-02-16 20:38:40' AT TIME ZONE 'PST';
timezone
-
 16/02/2001 20:38:40 PST


(8.1beta1)
#SELECT TIMESTAMP '2001-02-16 20:38:40' AT TIME ZONE 'PST';
ERROR:  time zone PST not recognised

The tzname still works fine, though:
SELECT TIMESTAMP '2001-02-16 20:38:40' AT TIME ZONE 'Canada/Pacific';
timezone
-
 16/02/2001 20:38:40 PST



Or maybe there is a setting that I neglected to adjust to make this work on
8.1beta1? The same error occurs with several other timezone abbreviations that I
tried.


Cheers,

Bricklen
-- 

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Postgresql Hosting

2005-09-08 Thread Jan Wieck

On 8/4/2005 10:28 PM, Richard Sydney-Smith wrote:
I have asked my internet host to include postgresql as part of their 
service, but it seems that there are issues in getting it to work with 
cpanel, which is their support service for their clients. Is there a 
reason why Postgresql is harder to host than mysql? Are there any docs I 
can point the server admin to that would help him?


If not what service providers are people recommending?


http://cwihosting.com/

Apache with Frontpage extensions (if you want them), PHP, PostgreSQL and 
ssh access including crontab support. Having pl/pgsql added to template1 
was done in no time. I only had to put a binary cvs executable there so 
that I can develop somewhere else and deploy the changes via cvs update.



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] pg_restore - authentication failed?

2005-09-08 Thread Richard Huxton

Zlatko Matić wrote:
If I use PgAdmin it works, but if I copy the command to .bat file 
authentication fails although I enter correct password.

The .bat script is the following:



and the error is:
pg_restore: [archiver (db)] connection to database MYDATABASE failed: 
FATAL: password authentication failed for user postgres


I also tried to pass password by pgpass.conf file, but without result. 
If I use PgAdmin, it works


What could be the reason ? Is this a bug ?


Unlikely. Is pgadmin connecting to localhost on port 5432? I'm thinking 
there might be something in your pg_hba.conf
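
If the pgpass.conf route gets another try, the file takes one line per
server in the form host:port:database:username:password; on Windows libpq
looks for it under %APPDATA%\postgresql\pgpass.conf. For example (the
password is a placeholder):

localhost:5432:MYDATABASE:postgres:yourpasswordhere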


--
  Richard Huxton
  Archonet Ltd


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] EMS PostgreSQL Manager vs. TheKompany DataArchitect

2005-09-08 Thread Joshua D. Drake

Wolfgang Keller wrote:


Hello,

 


Both have trials - try it ;-)
   



That's  what  I've done, as I mentioned. But sometimes others know/see
things  that  I don't see/know, at least not within a restricted trial
period.
 


It seems to me that EMS PostgreSQL Manager is more actively developed,
which is probably a good thing.

Other than that, I agree with the "both have trials" statement.

Sincerely,

Joshua D. Drake




Sincerely,

Wolfgang Keller

 




--
Your PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240
PostgreSQL Replication, Consulting, Custom Programming, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Partial commit within the transaction

2005-09-08 Thread Bohdan Linda
On Thu, Sep 08, 2005 at 04:35:51PM +0200, Michael Fuhr wrote:
 On Thu, Sep 08, 2005 at 03:39:50PM +0200, Bohdan Linda wrote:
 commit it now.  You have to do some extra bookkeeping and you can't
 commit several prepared transactions atomically (as far as I know),
 but that's one way you could make changes durable without actually
 committing them until later.

In the case of durable (prepared) transactions, would they be released from
memory? Could the transaction then be easier on the hardware when processing
too much data?

And what about nested transactions? Are they planned? This point is
connected to my previous question about secured access to stored
procedures: if I move part of the database logic to the client, I will have
to introduce parameters to the procedures, and this may be potentially
abusable.

If I try to use dblink from a server to itself (both ends are the same
server), is there some performance penalty? How big?

Regards,
Bohdan 

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] EMS PostgreSQL Manager vs. TheKompany DataArchitect

2005-09-08 Thread Tony Caduto
Just an FYI, you should also look at PG Lightning Admin (PGLA); it
compares well with EMS and has some nifty features they don't have,
e.g. EMS cannot cancel a long-running query, doesn't receive RAISE
notices (last time I checked), and can't view unicode data.

Check it out at:
http://www.amsoftwaredesign.com

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] EMS PostgreSQL Manager vs. TheKompany DataArchitect

2005-09-08 Thread Alvaro Herrera
On Thu, Sep 08, 2005 at 07:52:57AM -0700, Joshua D. Drake wrote:
 Wolfgang Keller wrote:
 
 Hello,
 
 Both have trials - try it ;-)
 
 That's  what  I've done, as I mentioned. But sometimes others know/see
 things  that  I don't see/know, at least not within a restricted trial
 period.

 It seems to me that EMS PostgreSQL Manager is more actively developed
 which probably is a good thing.

Note that theKompany departed from the developers of the Rekall product,
which they now distribute separately.  See http://www.totalrekall.co.uk/
and in particular

http://www.totalrekall.co.uk/modules.php?name=FAQ&myfaq=yes&id_cat=6&categories=Rekall+%3A+TheKompany

So if you are trying a theKompany product, know that it may be obsolete
and no longer in development.

-- 
Alvaro Herrera -- Valdivia, Chile Architect, www.EnterpriseDB.com
Aprender sin pensar es inútil; pensar sin aprender, peligroso (Confucio)

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Postgresql Hosting

2005-09-08 Thread Joshua D. Drake



http://cwihosting.com/

Apache with Frontpage extensions (if you want them), PHP, PostgreSQL and 
ssh access including crontab support. Having pl/pgsql added to template1 
was done in no time. I only had to put a binary cvs executable there so 
that I can develop somewhere else and deploy the changes via cvs update.


If you need business class hosting I suggest Command Prompt (I work for 
them). Our hosting is PostgreSQL only so...


http://www.commandprompt.com/

There is also Hub.org if you want FreeBSD/PostgreSQL hosting.

Sincerely,

Joshua D. Drake






Jan




--
Your PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240
PostgreSQL Replication, Consulting, Custom Programming, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Formatting TimeStamp

2005-09-08 Thread Alex

Hi,

I am using timestamps in various tables but don't like to see the
microseconds.
Is there a way to format the output of timestamp fields globally (in
postgresql.conf)?

Preferably I would only like to see YYYY-MM-DD HH:MM:SS.

Or can a format be passed to a SELECT * ... hmmm, probably not.

Thanks for any advise

A



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] pgsql

2005-09-08 Thread CHARABOUSKA Christel
suscribe
end

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Postgresql Hosting

2005-09-08 Thread Marc G. Fournier

On Thu, 8 Sep 2005, Joshua D. Drake wrote:




http://cwihosting.com/

Apache with Frontpage extensions (if you want them), PHP, PostgreSQL and 
ssh access including crontab support. Having pl/pgsql added to template1 
was done in no time. I only had to put a binary cvs executable there so 
that I can develop somewhere else and deploy the changes via cvs update.


If you need business class hosting I suggest Command Prompt (I work for 
them). Our hosting is PostgreSQL only so...


http://www.commandprompt.com/

There is also Hub.org if you want FreeBSD/PostgreSQL hosting.


http://www.hub.org/standard_features.php

We host *everything* that nobody else wants to ... and give you full root 
access to add more if you need ...



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Postgresql Hosting

2005-09-08 Thread Randall Perry
We offer Postgresql hosting with phpPgAdmin on Mac Dual G5 Xserve servers:
http://www.systame.com/html/macwebhosting/

-- 
Randall Perry
sysTame

Xserve Web Hosting/Co-location/Leasing
QuickTime Streaming
Mac Consulting/Sales

http://www.systame.com/




---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] EMS PostgreSQL Manager vs. TheKompany DataArchitect

2005-09-08 Thread Gavin M. Roy
I have a license of DataArchitect and it was a waste of money.  I've  
used much better applications.  I currently use ChiliSoft Database  
Design Studio.  It has direct PgSQL support.


Regards,

Gavin

On Sep 8, 2005, at 2:58 AM, Wolfgang Keller wrote:


Hello,

I've searched for and tried nearly every database administration/design
tool available, and it seems my choice has narrowed down to these two.
Now a second opinion would be nice to have. Has anyone used one, or
better both, of these? What do you think about them?

TIA,

Sincerely,

Wolfgang Keller

--
P.S.: My From-address is correct



---(end of  
broadcast)---

TIP 6: explain analyze is your friend



Gavin M. Roy
800 Pound Gorilla
[EMAIL PROTECTED]



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Formatting TimeStamp

2005-09-08 Thread Joshua D. Drake

Alex wrote:

Hi,

I am using the timestamp various tables but dont like to see the 
microseconds.
Is there way to format the output of timestamp fields globally (in the 
postgres.conf)?

Preferably i only would like to see the MMDD HH:MM:SS.

Or can a formatting be passed to a SELECT *  hmmm probably not.


http://www.postgresql.org/docs/8.0/static/functions-datetime.html

And yes you can do it in a SELECT.
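
For example, something along these lines (the table and column names are
made up); alternatively the column can be declared as timestamp(0) to drop
fractional seconds entirely:

SELECT to_char(created_at, 'YYYY-MM-DD HH24:MI:SS') AS created_at
FROM mytable;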

Sincerely,

Joshua D. Drake





Thanks for any advise

A



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly



--
Your PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240
PostgreSQL Replication, Consulting, Custom Programming, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] EMS PostgreSQL Manager vs. TheKompany DataArchitect

2005-09-08 Thread Joshua D. Drake


Note that theKompany departed from the developers of the Rekall product,
which they now distribute separately.  See http://www.totalrekall.co.uk/
and in particular

http://www.totalrekall.co.uk/modules.php?name=FAQ&myfaq=yes&id_cat=6&categories=Rekall+%3A+TheKompany

So if you are trying a theKompany product, know that it may be obsolete
and no longer in development.


I believe the person was asking about DataArchitect not Rekall.

Sincerely,

Joshua D. Drake








--
Your PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240
PostgreSQL Replication, Consulting, Custom Programming, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] How to write jobs in postgresql

2005-09-08 Thread Chris Travers

Karsten Hilbert wrote:


3.  An integrated way of logging what ran when (rather than either
stuffing logging code into each cron job or rummaging thru
cron logs)
   


Cron can log to syslog.

Karsten
 

And your cron scripts could log to your database log tables if that is 
what you were getting at...


Chris Travers
Metatron Technology Consulting

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] back references using regex

2005-09-08 Thread Matthew Peter
That doesn't seem to work with digits 

SELECT substring('12.00.00.34' FROM $$((\d+)\.\2)$$);
 or 
SELECT substring('12.00.00.34' FROM $$((\w+)\.\2)$$);

but works with strings

SELECT substring('abc.foo.foo.xyz' FROM
$$((\w+)\.\2)$$);

What I basically want to do is have a slice function
like Python, where I can slice out items from a \s, \.
or \n\n separated list. 

Where I could start it at a certain point and end it
at another. Like slicing out paragraph 3-6 (delimiter
\n\n) or the 2nd-6th sentence in a article (delimiter
\.). That is what I am trying to do. I know if I can
figure how to get that working I can figure out how to
deal with extending it/handling misc. characters/etc
in pgsql. I only need a working vanilla example.
Thanks for all your help.


--- Michael Fuhr [EMAIL PROTECTED] wrote:
 SELECT substring('abc.foo.foo.xyz' FROM
 '(([[:alpha:]]+)\\.\\2)');
  substring 
 ---
  foo.foo
 (1 row)
 
 That is, one or more alphabetic characters followed
 by a dot followed
 by the same set of characters (this is a simplistic
 example: it would
 also match 'foo.oog' and return 'oo.oo').
 
 Note that the back reference is \2 because it refers
 to the inner
 set of parentheses (i.e., the subexpression with the
 second opening
 parenthesis); the outer set is used here for
 capturing.  And again,
 note the escaped backslashes because we're using
 ordinary quotes.
 With dollar quotes the above query would be:
 
 SELECT substring('abc.foo.foo.xyz' FROM
 $$(([[:alpha:]]+)\.\2)$$);
 
 -- 
 Michael Fuhr
 



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Email Verification Regular Expression

2005-09-08 Thread Cristian Prieto
Well, I guess this could be an expensive way to do it, but I've written this
little stored function. It doesn't use a regular expression (you could run
your email address through one first to check it, I guess).


#include "postgres.h"
#include "fmgr.h"
#include <string.h>
#include <netinet/in.h>
#include <arpa/nameser.h>
#include <resolv.h>

PG_FUNCTION_INFO_V1(digmx);

Datum
digmx(PG_FUNCTION_ARGS)
{
    int res;
    char *name;
    unsigned char answer[1024];
    text *arg;

    arg = PG_GETARG_TEXT_P(0);

    res = res_init();
    if (res != 0) {
        /* report an error here */
    }

    /* copy the text datum into a null-terminated C string */
    name = (char *) palloc(VARSIZE(arg) - VARHDRSZ + 1);
    memcpy(name, VARDATA(arg), VARSIZE(arg) - VARHDRSZ);
    name[VARSIZE(arg) - VARHDRSZ] = '\0';

    /* look up an MX record for the domain */
    res = res_query(name, C_IN, T_MX, answer, sizeof(answer));

    if (res == -1) {
        PG_RETURN_BOOL(false);
    } else {
        /* here we would print whatever it should return */
        PG_RETURN_BOOL(true);
    }
}
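
To call it from SQL, the compiled shared object still has to be registered,
roughly like this (the library path is only a guess at a typical location):

CREATE FUNCTION digmx(text) RETURNS boolean
    AS '/usr/local/pgsql/lib/digmx', 'digmx'
    LANGUAGE C STRICT;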

You can pass a domain to that function and it will check, using the resolver,
whether the domain has an MX entry in the nameserver. I guess it is a little
slow (I wasn't thinking about speed, but I accept suggestions!), but I think
it is easy enough and it could be useful for somebody.


mydb# SELECT digmx('hotmail.com');
digmx
--
t
(1 row)

mydb# SELECT digmx('hotmail.co');
digmx
--
f
(1 row)

I know it could be a very dumb way to check the domain, but I consider
myself a total newbie as a database/unix programmer.


Thanks a lot!

PS: Please, I accept suggestions to improve this function.



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] back references using regex

2005-09-08 Thread Peter Fein
Matthew Peter wrote:
 That doesn't seem to work with digits 
 
 SELECT substring('12.00.00.34' FROM $$((\d+)\.\2)$$);
  or 
 SELECT substring('12.00.00.34' FROM $$((\w+)\.\2)$$);
 
 but works with strings
 
 SELECT substring('abc.foo.foo.xyz' FROM
 $$((\w+)\.\2)$$);
 
 What I basically want to do is have a slice function
 like Python, where I can slice out items from a \s, \.
 or \n\n separated list. 

You could always just write it in pl/python...

-- 
Peter Fein [EMAIL PROTECTED] 773-575-0694

Basically, if you're not a utopianist, you're a schmuck. -J. Feldman

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] EMS PostgreSQL Manager vs. TheKompany DataArchitect

2005-09-08 Thread Michael Schmidt




I like both EMS PostgreSQL Manager and pgAdmin III. Both have their 
strengths and weaknesses, and I go back and forth between them depending on what 
I'm doing. I had a problem with the trial version of PG Lightning Admin - 
it seemed to "hang" when I backed up my database. No experience with 
DataArchitect. 

Michael Schmidt


Re: [GENERAL] Email Verification Regular Expression

2005-09-08 Thread Randal L. Schwartz
 Cristian == Cristian Prieto [EMAIL PROTECTED] writes:

Cristian  res = res_query(name, C_IN, T_MX, answer, sizeof(answer));

This incorrectly fails if an address has an A record but no MX
record.  According to RFC 2821 Section 5:

   The lookup first attempts to locate an MX record associated with
   the name.  If a CNAME record is found instead, the resulting name
   is processed as if it were the initial name.  If no MX records are
   found, but an A RR is found, the A RR is treated as if it was
   associated with an implicit MX RR, with a preference of 0, pointing
   to that host.

So, your function will say no good if the domain has an A record but
no MX record, even though the RFC says that's OK and deliverable.

Man, is there a lot of bogus knowledge and cargo culting around this
subject!

-- 
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
merlyn@stonehenge.com URL:http://www.stonehenge.com/merlyn/
Perl/Unix/security consulting, Technical writing, Comedy, etc. etc.
See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] per user logging

2005-09-08 Thread snacktime
I'm pretty sure the answer to this is no, but just in case I've missed
something. Is there a way to configure the server so it only logs
for specific users? 

Chris


Re: [GENERAL] back references using regex

2005-09-08 Thread Matthew Peter
Ya, but I'd have to recompile to get python in. Plus,
I don't want to use Python. I want to use and learn
more pgsql, and keep things clean and lean if possible...
I just got a postgres book yesterday for additional
reading, which only had 2 pages on regexes in the
index :(

--- Peter Fein [EMAIL PROTECTED] wrote:

 Matthew Peter wrote:
  That doesn't seem to work with digits 
  
  SELECT substring('12.00.00.34' FROM
 $$((\d+)\.\2)$$);
   or 
  SELECT substring('12.00.00.34' FROM
 $$((\w+)\.\2)$$);
  
  but works with strings
  
  SELECT substring('abc.foo.foo.xyz' FROM
  $$((\w+)\.\2)$$);
  
  What I basically want to do is have a slice
 function
  like Python, where I can slice out items from a
 \s, \.
  or \n\n separated list. 
 
 You could always just write it in pl/python...
 
 -- 
 Peter Fein [EMAIL PROTECTED]  
   773-575-0694
 
 Basically, if you're not a utopianist, you're a
 schmuck. -J. Feldman
 






---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] per user logging

2005-09-08 Thread Peter Eisentraut
snacktime wrote:
 I'm pretty sure the answer to this is no, but just in case I've
 missed something. Is there a way to configure the server so it only
 logs for specific users?

No, but you can configure PostgreSQL to insert the user name into each 
log message, and then you could configure your syslog daemon to 
redirect (or delete) messages according to that user name.
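
Roughly, in postgresql.conf (8.0 settings; adjust the prefix to taste):

log_destination = 'syslog'     # send the server log to the syslog daemon
log_line_prefix = '%u: '       # prefix each log line with the user name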

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] back references using regex

2005-09-08 Thread Michael Fuhr
On Thu, Sep 08, 2005 at 12:45:40PM -0700, Matthew Peter wrote:
 Ya, but I'd have to recompile to get python in.

Recompiling to add support for another procedural language is a
one-time operation and it's easy to do, so that's not a good argument.

 Plus, I don't want to use Python. I want to use and learn
 more pgsql.  Keep things clean and lean if possible...

Clean and lean suggests using the right tool for the job.  Languages
like Perl and Python are better at string manipulation than PL/pgSQL,
and one of PostgreSQL's strengths is that it allows you to write
server-side functions in those languages.  Exploit such strengths
when it makes sense.

 I just got a postgres book yesterday for additional
 reading which it only had 2 pages on regex's in the
 index :(

Regular expressions aren't specific to PostgreSQL; there's ample
material covering them elsewhere.  See for example _Mastering
Regular Expressions_ by Jeffrey Friedl.  I'm sure a search engine
would yield many free tutorials on the subject.

-- 
Michael Fuhr

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] EMS PostgreSQL Manager vs. TheKompany DataArchitect

2005-09-08 Thread Tony Caduto
Did it seem to hang, or did it actually hang?  Have you tried a newer
version?  I routinely back up 100 MB databases and it never hangs.
I think one of the earlier versions did not change the cursor during the
backup; is that what you mean by "it appeared to hang", i.e. not being
able to tell if it was doing anything?  Also, I don't think you can even
do backups with EMS.


Thanks,

Tony

Michael Schmidt wrote:

I like both EMS PostgreSQL Manager and pgAdmin III.  Both have their 
strengths and weaknesses, and I go back and forth between them 
depending on what I'm doing.  I had a problem with the trial version 
of PG Lightning Admin - it seemed to hang when I backed up my 
database.  No experience with DataArchitect. 
 
Michael Schmidt




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] per user logging

2005-09-08 Thread Richard Huxton

snacktime wrote:
I'm pretty sure the answer to this is no, but just in case I've missed 
something. Is there a way to configure the server so it only logs for 
specific users? 


Look into ALTER USER - that lets you set (some) configuration options 
on a per-user level. Don't know if the logging is one of the things you 
can change though. Be interested in finding out though (hint, hint).


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] data Transfer rate priority on Postgres ?

2005-09-08 Thread [EMAIL PROTECTED]
I just installed Postgres 8.0 (Windows native) on an HP server with Windows
2000 Server Service Pack 4. This database is accessed by several Windows PCs
via ODBC (ver. 7.03.02.00) over a 100 Mbps intranet.

I have noticed that when I run a program that requires about 2 MB of
information from the server, it takes about 2-3 seconds to load the
information on some PCs and 20 to 30 seconds on other PCs (at the same
time). This creates a situation where the program runs "faster" on some PCs
than on others. But this situation is not hardware related, because one day
a PC might have a "fast" connection and the next day the same PC will have a
"slow" connection. Also, if I swap a PC that has a "fast" connection onto
the network point of another with a "slow" connection, both PCs keep their
rate of data transfer. It seems to me that something is assigning the rate
of data transfer and giving it a priority within the network.

Does Postgres have any kind of configuration that determines the speed of
data transfer to the clients? Is there any kind of connection priority? Or
is this a Windows configuration problem?

We didn't have this problem with version 7.4 (Cygwin); all the PCs had a
"fast" connection.



Re: [GENERAL] per user logging

2005-09-08 Thread Tom Lane
Peter Eisentraut [EMAIL PROTECTED] writes:
 snacktime wrote:
 I'm pretty sure the answer to this is no, but just in case I've
 missed something. Is there a way to configure the server so it only
 logs for specific users?

 No, but you can configure PostgreSQL to insert the user name into each 
 log message, and then you could configure your syslog daemon to 
 redirect (or delete) messages according to that user name.

I believe it is possible for a superuser to do something like
ALTER USER victim SET log_min_messages = whatever, so that the
log verbosity is different for different users.
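
Spelled out, that would be something like the following (the role name is
made up, and doing this requires superuser privileges):

ALTER USER fragile_app SET log_min_messages = debug1;
-- and to go back to the server-wide default for that user:
ALTER USER fragile_app RESET log_min_messages;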

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] back references using regex

2005-09-08 Thread Matthew Peter
I knew I should never have said Python. I know regular
expressions, just not how postgresql handles them. The
fact of the matter is I don't want to use Python, it
was an example of the functionality I'm interested in
accomplishing with pgsql. Plus, I would like to use
other regex's once I figure out how they are used. 

I only need a regular expression in the substring of a
where clause, not entire language support for a single
function.

It's not a complex regex, as I have written one that does
what I want, just not at the database level. The docs
didn't help clarify anything. I'm still not clear on
how it determines where the back reference comes from
in the previous example you gave, or why digits
wouldn't work.

I would like a basic example that accomplishes what
I'm trying to do if at all possible?


My original message/problem...

What I basically want to do is have a slice function
like Python, where I can slice out items from a \s, \.
or \n\n separated list. Where I'll just change the
delimiter for the query that it applies. 

Where I could start it at a certain point and end it
at another. Like slicing out paragraph 3-6 (delimiter
\n\n) or the 2nd-6th sentence in a article (delimiter
\.). That is what I am trying to do. I know if I can
figure how to get that working I can figure out how to
deal with extending it/handling misc. characters/etc
in pgsql. I only need a working vanilla example.
Thanks for all your help.

--- Michael Fuhr [EMAIL PROTECTED] wrote:

 On Thu, Sep 08, 2005 at 12:45:40PM -0700, Matthew
 Peter wrote:
  Ya, but I'd have to recompile to get python in.
 
 Recompiling to add support for another procedural
 language is a
 one-time operation and it's easy to do, so that's
 not a good argument.
 
  Plus, I don't want to use Python. I want to use
 and learn
  more pgsql.  Keep things clean and lean if
 possible...
 
 Clean and lean suggests using the right tool for
 the job.  Languages
 like Perl and Python are better at string
 manipulation than PL/pgSQL,
 and one of PostgreSQL's strengths is that it allows
 you to write
 server-side functions in those languages.  Exploit
 such strengths
 when it makes sense.
 
  I just got a postgres book yesterday for
 additional
  reading which it only had 2 pages on regex's in
 the
  index :(
 
 Regular expressions aren't specific to PostgreSQL;
 there's ample
 material covering them elsewhere.  See for example
 _Mastering
 Regular Expressions_ by Jeffrey Friedl.  I'm sure a
 search engine
 would yield many free tutorials on the subject.
 
 -- 
 Michael Fuhr
 



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] EMS PostgreSQL Manager vs. TheKompany DataArchitect

2005-09-08 Thread Michael Schmidt




Mr. Caduto,
I downloaded the version from the website recently, so I think it was the 
newest version. My database backs up in about 10 seconds with pgAdmin 
III. When I tried to back it up with PG Lightning Admin, I got a sql wait 
cursor for a couple of minutes, at which point I closed the program. I 
don't mean to criticize your product (which has some very nice features), just 
stating what happened. 

I agree - EMS PostgreSQL manager does not offer backups and restores via 
pg_dump and pg_restore.

Michael Schmidt


Re: [GENERAL] back references using regex

2005-09-08 Thread Alvaro Herrera
On Thu, Sep 08, 2005 at 01:52:35PM -0700, Matthew Peter wrote:

 What I basically want to do is have a slice function
 like Python, where I can slice out items from a \s, \.
 or \n\n separated list. Where I'll just change the
 delimiter for the query that it applies. 

There is a function for some sort of text slicing, though I'm not sure
if it does what you want.  It's called split_part().  Have a look at the
documentation.
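
For reference, split_part() takes the string, the delimiter, and a 1-based
field number:

SELECT split_part('one.two.three.four', '.', 3);
-- returns 'three'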

-- 
Alvaro Herrera -- Valdivia, Chile Architect, www.EnterpriseDB.com
Aprender sin pensar es inútil; pensar sin aprender, peligroso (Confucio)

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] EMS PostgreSQL Manager vs. TheKompany DataArchitect

2005-09-08 Thread Tony Caduto

I understand.
There was an issue with the pg_dump compression level defaulting to 0,
which may have had some effect, or it may have been some other unrelated
network issue.  PGLA uses pg_dump.exe and pg_restore.exe in the exact
same way PG Admin III does (via the CreateProcess API call), so speed
between the two should be exactly the same.  If it takes 10 seconds in
PG Admin, it's going to take 10 seconds in PGLA.


You should download it again and give it another go :-)  I believe that
PGLA gives a much better user experience on the Windows platform than
PGAdmin does. (I am biased, of course :-)


Thanks,

Tony






---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] back references using regex

2005-09-08 Thread Michael Fuhr
On Thu, Sep 08, 2005 at 01:52:35PM -0700, Matthew Peter wrote:
 It's not a complex regex as I have written one that does
 what I want, yet not at the database level. The docs
 didn't help clarify anything. I'm still not clear on
 how it determines where the back reference comes from
 in the previous example you gave. And why digits
 wouldn't work. 

Back references work as they usually do in regular expressions:
they refer to the matched value of a previous parenthesized
subexpression.  If you have multiple open parentheses then you
need to refer to the correct subexpression to get what you want.
Example:

CREATE TABLE foo (t text);
INSERT INTO foo VALUES ('abc.foo.foo.xyz');
INSERT INTO foo VALUES ('12.00.00.34');
INSERT INTO foo VALUES ('abc.def.ghi');

SELECT t FROM foo WHERE t ~ $$(\w+)\.\1$$;
        t
-----------------
 abc.foo.foo.xyz
 12.00.00.34
(2 rows)

In the above query the regular expression has only one set of
parentheses, so the back reference refers to \1.  The result set
contains the two rows that have one or more word characters followed
by a dot followed by the same set of characters.

In the following query, note the difference between the regular
expression in the select list and the one in the where clause:

SELECT substring(t FROM $$((\w+)\.\2)$$)
FROM foo
WHERE t ~ $$(\w+)\.\1$$;
 substring 
---
 foo.foo
 00.00
(2 rows)

In the regular expression in the select list, we use the outermost
set of parentheses for grouping, so the back reference needs to
refer to the subexpression that begins with the second open
parenthesis (i.e., we must use \2).  In the regular expression
in the where clause, we have only one set of parentheses so the
back reference is \1.

Regarding digits, you didn't post any output in your example, so
we don't know if it really doesn't work or if it just doesn't do
what you were expecting.  Here's what I get from your examples:

SELECT substring('12.00.00.34' FROM $$((\d+)\.\2)$$);
 substring 
---
 00.00
(1 row)

SELECT substring('12.00.00.34' FROM $$((\w+)\.\2)$$);
 substring 
---
 00.00
(1 row)

SELECT substring('abc.foo.foo.xyz' FROM $$((\w+)\.\2)$$);
 substring 
---
 foo.foo
(1 row)

Do you get different results, or do these results surprise you?
They all appear to be correct.

 What I basically want to do is have a slice function
 like Python, where I can slice out items from a \s, \.
 or \n\n separated list. Where I'll just change the
 delimiter for the query that it applies. 
 
 Where I could start it at a certain point and end it
 at another. Like slicing out paragraph 3-6 (delimiter
 \n\n) or the 2nd-6th sentence in a article (delimiter
 \.). That is what I am trying to do.

You can use split_part() to get a single item or string_to_array()
to build an array from which you can extract multiple items.

CREATE TABLE foo (t text);
INSERT INTO foo VALUES ('one.two.three.four.five.six.');

SELECT (string_to_array(t, '.'))[3:5] FROM foo;
  string_to_array  
---
 {three,four,five}
(1 row)

SELECT array_to_string((string_to_array(t, '.'))[3:5], '.') FROM foo;
 array_to_string 
-
 three.four.five
(1 row)

Is that what you're looking for?

-- 
Michael Fuhr

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Support for Limit in Update, Insert...

2005-09-08 Thread Cristian Prieto



Would there be any future support for LIMIT in update/insert queries?
So you could do something like

update table1 set col1=value1 limit 1000;

which would update just the first 1000 rows in the table.  I've been
playing a little with the SPI, and I see that SPI already has support
for limiting the number of changes or elements in a query (the
SPI_execute function).

Any comments?

Thanks a lot


Re: [GENERAL] back references using regex

2005-09-08 Thread Matthew Peter
Thank you for your patience and such a complete
answer. I'm not on the pgbox right now but those
examples did help clarify how to reference the back
references, which was my problem. 

I wasn't aware the 1st parenthesis must be counted as part of the
regex; I assumed it was a wrapper. Thanks for helping me out and
putting up with me. :)
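
To spell out the numbering in that pattern: group 1 is the outer
((\w+)\.\2) and group 2 is the inner (\w+), so the back reference has
to be \2.  For example:

SELECT substring('foo.foo' FROM $$((\w+)\.\2)$$);
 substring
-----------
 foo.foo
(1 row)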

matt


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Support for Limit in Update, Insert...

2005-09-08 Thread Tom Lane
Cristian Prieto [EMAIL PROTECTED] writes:
 Would there be any future support for LIMIT in update/insert queries?
 So you could do something like
 update table1 set col1=value1 limit 1000;
 which would update just the first 1000 rows in the table.

That seems like a spectacularly bad idea, considering that you could
have no guarantees about *which* 1000 rows get updated.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Support for Limit in Update, Insert...

2005-09-08 Thread Cristian Prieto

Maybe the first 1000 rows, based on the primary index.

- Original Message - 
From: Tom Lane [EMAIL PROTECTED]

To: Cristian Prieto [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Sent: Thursday, September 08, 2005 6:05 PM
Subject: Re: [GENERAL] Support for Limit in Update, Insert... 




Cristian Prieto [EMAIL PROTECTED] writes:

Would there be any future support for LIMIT in update/insert queries?
So you could do something like
update table1 set col1=value1 limit 1000;
which would update just the first 1000 rows in the table.


That seems like a spectacularly bad idea, considering that you could
have no guarantees about *which* 1000 rows get updated.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Formatting TimeStamp

2005-09-08 Thread Alex

Joshua,
I know that I can format it explicitly, but then instead of using a
SELECT * FROM ..  I have to name the columns, which is not very
convenient. Right now I have created views for each table just to
re-format the output of the one timestamp field.


MySQL, for example, has in addition to TIMESTAMP also a DATETIME type
that returns 'YYYY-MM-DD HH:MM:SS'. So I was wondering: is there
anything similar in Postgres, or can I set the precision or format of
the timestamp in postgresql.conf, or can I define a new type?
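
One approach is to do the formatting once in the select list or in a
view, e.g. (a minimal sketch; the table and column names here are
hypothetical):

SELECT to_char(ts, 'YYYY-MM-DD HH24:MI:SS') AS ts FROM mytable;

-- or drop the fractional seconds by casting:
SELECT ts::timestamp(0) AS ts FROM mytable;

-- or hide the formatting behind a view:
CREATE VIEW mytable_fmt AS
    SELECT id, to_char(ts, 'YYYY-MM-DD HH24:MI:SS') AS ts FROM mytable;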


Alex

Joshua D. Drake wrote:


Alex wrote:


Hi,

I am using the timestamp in various tables but don't like to see the
microseconds.
Is there a way to format the output of timestamp fields globally (in
postgresql.conf)?

Preferably I would only like to see YYYYMMDD HH:MM:SS.

Or can a formatting be passed to a SELECT *?  Hmmm, probably not.



http://www.postgresql.org/docs/8.0/static/functions-datetime.html

And yes you can do it in a SELECT.

Sincerely,

Joshua D. Drake





Thanks for any advice

A



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly








---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Support for Limit in Update, Insert...

2005-09-08 Thread Alvaro Herrera
On Thu, Sep 08, 2005 at 07:19:34PM -0600, Cristian Prieto wrote:
 Maybe the first 1000 rows based in the primary index

No, this is not a satisfactory answer, because 1. it's possible that
there's no primary key at all, or 2. said index may not get used for the
execution of the update.  Maybe something else I'm forgetting right now.

This has been discussed before, and rejected.  Please see the archives.

-- 
Alvaro Herrera -- Valdivia, Chile Architect, www.EnterpriseDB.com
Java is clearly an example of a money oriented programming  (A. Stepanov)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Cost based SELECT/UPDATE

2005-09-08 Thread Leonid Safronie
 Is there any way to do SELECTs with different priorities?
 
 The issue is that response time for
 these 50 processes is very important unlike for report generation, and
 time spent by these processes while report running is unacceptable for
 my production environment (response time grows from 1-3 seconds up to
 1-2 minutes).
 
 The most important question is why response time drops so much? Does it
 look like it's disk I/O that's the problem?
 
 
  Yes, I/O grows as much as 300 - 700 tps (100% load) according to systat 
  -vmstat.
  I'm having 2 x 160Gb HDDs, data on one of them, pg_xlog on another

 Hmm - with your pg_xlog on a separate disk, updates should be relatively
 unaffected by a large SELECT going through. With these 50 other
 processes, are most going through fairly quickly (less than 10 seconds),
 with some taking longer and a few taking 2 minutes, or do they all take
 1-2 minutes?
Results differ, but are in the range of 30-120 secs...
Now looking at whether some kind of RAID can improve my situation...
(The workaround I'm currently using is to COPY to another server, then
SELECT from it, but this does not work if the report period includes,
e.g., the current day.)

--
Leonid Safronie
DIAS-RIPE

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Support for Limit in Update, Insert...

2005-09-08 Thread Tom Lane
Alvaro Herrera [EMAIL PROTECTED] writes:
 This has been discussed before, and rejected.  Please see the archives.

For SELECT, both LIMIT and OFFSET are only well-defined in the presence
of an ORDER BY clause.  (One could argue that we should reject them when
no ORDER BY, but given that the database isn't getting changed as a side
effect, that's probably too anal-retentive.  When the database *is*
going to be changed, however, I for one like well-defined results.)

If this proposal included adding an ORDER BY to UPDATE/DELETE, then it
would at least be logically consistent.  I have not seen the use-case
for it though.  In any case you can usually get the equivalent result
with something like

UPDATE foo SET ...
WHERE pkey IN (SELECT pkey FROM foo ORDER BY ... LIMIT ...);
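
For instance, a filled-in version of that pattern (the table and column
names here are hypothetical):

UPDATE accounts SET flagged = true
WHERE id IN (SELECT id FROM accounts ORDER BY last_login LIMIT 1000);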

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] SQL - planet redundant data

2005-09-08 Thread Poul Jensen
I have ~500,000 data files each containing ~1,000 records that I want to 
put into a database for easy access.
A fictitious example for illustration: a file with meteorological data
from a given station.


stat_id |  yr  | d_o_y | hr | mn | tmp  | wind
--------|------|-------|----|----|------|------
   78   | 2001 |  211  | 14 |  0 | 15.3 |  4.7
   78   | 2001 |  211  | 14 |  1 | 15.4 |  5.4
   78   | 2001 |  211  | 14 |  2 | 15.4 |  5.0
   ..   |  |  ...  | .. |  . |  |  ...
   78   | 2001 |  211  | 14 | 24 | 15.8 |  2.7

Some parameters are constant within any given file, e.g. station id.
Others are constant for a varying percentage of the files (hr ~58%,
d_o_y ~98%, yr ~99.995%, due to the oddly chosen 25 min. per file)


It seems like a table with file summaries (1 row for each file) would be 
very useful. Some of the columns/rows could be:


     file     | st_id |  yr  | d_o_y | avg_tmp | std_dev_tmp
--------------|-------|------|-------|---------|-------------
 78_f6548.dat |   78  | 2001 |  211  |   15.5  |    0.24
 57_f4296.dat |   57  | 2000 |   -1  |    8.3  |    0.11
     ...      |  ...  |  ... |  ...  |   ...   |     ...
(-1 for day of year indicates an odd file spanning across midnight)

To store the detailed records, the SQL novice would construct one table
per file and exclude any constant columns, since these are redundant
(given in the summary table). These detailed tables would then have
different column schemas - (mn,tmp,wind) for ~58%, (hr,mn,tmp,wind) for
~40%, (d_o_y,hr,mn,tmp,wind) for ~2% and (yr,d_o_y,hr,mn,tmp,wind) for
~0.005%.


SQL will discourage this novice. To enable simple queries, data tables
must be combined. Say I want to find all summer data with high 
temperature standard deviation and low wind. If I made the ultimate 
table for easy queries, containing all columns for both detailed data 
and file summary, I could find the data like this:


SELECT file, std_dev_tmp, tmp, wind FROM huge_table WHERE
std_dev_tmp > 0.2 AND d_o_y > 150 AND d_o_y < 240 AND wind < 4.0


The ability to make this easy query comes with a ridiculous amount of
redundant data in huge_table. In order to remove all the stuff that is
already given by the summary table, I must split the huge table into a
number of tables, one for each of the above-mentioned column schemas (in
my case many more than in the example above). Difficult to build, and no
easy queries anyway - I'd need to write a C program to launch the many
queries that would in effect return the same data as the single query
above. That defeats the purpose of easy, efficient access.


This novice must be missing a sneaky way to avoid massive redundancy and
still maintain easy access. It has been suggested that I look at
inheritance and foreign keys. I don't see how to use foreign keys, but I
could make an inheritance hierarchy of the split-up tables to make the
parent-table columns (mn, tmp, wind in the above example) available
across tables. But accessing the rest of the columns still requires
child-table-specific queries, and my many child tables will not be as
nicely nested as in the above example, so I still don't see how it can
get me there. I could really use some input.


One thought: PostgreSQL adds to every table a system column tableoid 
containing a constant value. Is that value really stored 1,000 times for 
a 1,000-row table? Or...?


(Apologies for the length of this post)

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] SQL - planet redundant data

2005-09-08 Thread Bruno Wolff III
On Thu, Sep 08, 2005 at 19:58:55 -0800,
  Poul Jensen [EMAIL PROTECTED] wrote:
 
 This novice must be missing a sneaky way to avoid massive redundancy and
 still maintain easy access. It has been suggested that I look at
 inheritance and foreign keys. I don't see how to use foreign keys, but I
 could make an inheritance hierarchy of the split-up tables to make the
 parent-table columns (mn, tmp, wind in the above example) available
 across tables. But accessing the rest of the columns still requires
 child-table-specific queries, and my many child tables will not be as
 nicely nested as in the above example, so I still don't see how it can
 get me there. I could really use some input.

You can also use views. Updatable views can be created using the rule system
if you need them.
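
For instance, a minimal sketch along those lines (table and column names
are hypothetical, loosely following the example in the original post):
keep one summary row per file, point the detail rows at it with a
foreign key, and hide the join behind a view.

CREATE TABLE file_summary (
    file   text PRIMARY KEY,   -- e.g. '78_f6548.dat'
    st_id  integer,
    yr     integer,
    d_o_y  integer
);

CREATE TABLE detail (
    file   text REFERENCES file_summary(file),
    hr     integer,
    mn     integer,
    tmp    real,
    wind   real
);

CREATE VIEW detail_full AS
    SELECT d.file, s.st_id, s.yr, s.d_o_y, d.hr, d.mn, d.tmp, d.wind
    FROM detail d JOIN file_summary s USING (file);

Queries like the huge_table example can then run against detail_full
without storing the constant columns in every detail row.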

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] SQL - planet redundant data

2005-09-08 Thread Tom Lane
Poul Jensen [EMAIL PROTECTED] writes:
 One thought: PostgreSQL adds to every table a system column tableoid 
 containing a constant value. Is that value really stored 1,000 times for 
 a 1,000-row table? Or...?

No, tableoid is sort of a virtual column ... it doesn't exist on disk.
When you query it you get a value fetched from the internal data
structure representing the table.
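
A quick illustration (the table name here is hypothetical):

SELECT tableoid::regclass AS source_table, * FROM some_table LIMIT 1;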

There is plenty of *real* overhead per row, mind you, but tableoid isn't
part of it.  You can read some details here:
http://developer.postgresql.org/docs/postgres/storage-page-layout.html

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly