Re: [GENERAL] slony error --need help

2007-12-18 Thread Richard Huxton

SHARMILA JOTHIRAJAH wrote:

I get this error

stdin:21: PGRES_FATAL_ERROR load '$libdir/xxid';  - ERROR:  could not load library 
/export/home/josh/postgres7.4/lib/xxid.so: ld.so.1: postgres: fatal: relocation 
error: file /export/home/josh/postgres7.4/lib/xxid.so: symbol GetTopTransactionId: referenced 
symbol not found

stdin:21: Error: the extension for the xxid data type cannot be loaded in 
database 'dbname=testdb1 host=172.31.0.67 user=josh'
stdin:21: ERROR: no admin conninfo for node 134701624


Does this file exist on the host in question?
Does the postgres user have permission to access it?
Is there anything else (e.g. selinux) that could interfere with access 
to it?
What is different between your installations of 7.4, 8.1, 8.2 on host 
172.31.0.67? Were they all compiled from source?


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Restoring only some data from a backup file

2007-12-18 Thread mljv
Hi,

I run a webapp where lots of accounts are managing something. I do a nightly 
backup of my database. Sometimes some users want to have their account restored 
from a backup taken days, weeks or months ago.

At the moment I use (multi-column) natural keys. So each and every table has 
at least a column account_name (sometimes part of a multi-column primary 
key). If I want to restore I use pg_restore and grep to filter out the lines 
I need. It is not very sophisticated but it works.

Our new database design removes natural keys in favor of surrogate keys.

Now I still want to easily restore an account from a given dump file.

I can't use pg_restore and grep anymore, as the hierarchical structure of the 
tables can't be easily grepped since I don't have the account_name in every table 
anymore.

I came across the following ideas:

1. Scripting pg_restore output
- use a Perl script which greps for the base table (accounts) 
- grep the member table for foreign keys to account_id
- remember all member_ids
- grep the entries table for all memorized member_ids
- and so on for each hierarchical level.

2. PITR
I could use PITR with a backup database, replaying it to the timestamp where I 
want to restore an account.
Then I can select all entries with regular SQL and replay them in the original 
database. 
Nice side effect: more backups are made.

3. Install all backup databases
I could install my dumps on a backup server and name the databases according 
to their backup date. 
Advantage: I could connect my app directly to the backup database to get a 
historical view.
Disadvantage: This needs too much disk space.

Are there other solutions?
What do you do if you want to restore only some specific and hierarchical 
data from a backup?
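
For illustration, a minimal sketch of option 1, assuming the old data has
somehow been made visible next to the live data in a scratch schema called
backup_20071218 (the schema, table and column names here are only examples,
not my real schema):

    -- copy one account's rows back, walking the surrogate-key hierarchy
    INSERT INTO public.accounts
    SELECT a.* FROM backup_20071218.accounts a
    WHERE a.account_name = 'some_account';

    INSERT INTO public.members
    SELECT m.* FROM backup_20071218.members m
    JOIN backup_20071218.accounts a ON a.account_id = m.account_id
    WHERE a.account_name = 'some_account';

    INSERT INTO public.entries
    SELECT e.* FROM backup_20071218.entries e
    JOIN backup_20071218.members m ON m.member_id = e.member_id
    JOIN backup_20071218.accounts a ON a.account_id = m.account_id
    WHERE a.account_name = 'some_account';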

kind regards,
Janning

PS: I recently used Oracle for the first time in a project. PostgreSQL is so 
much better! I do not want to start a discussion about this, just wanted to 
cheer on all those PG developers. Great job! I love it even more after using 
Oracle.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] postgres8.3beta encodding problem?

2007-12-18 Thread Martijn van Oosterhout
On Mon, Dec 17, 2007 at 10:13:54AM -0800, Jeff Davis wrote:
 http://www.postgresql.org/docs/8.3/static/release-8-3.html
 
 Ensure that chr() cannot create invalidly-encoded values (Andrew) 

Ok, but that doesn't apply in this case, his database appears to be
LATIN1 and this character is valid for that encoding...

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Those who make peaceful revolution impossible will make violent revolution 
 inevitable.
  -- John F Kennedy


signature.asc
Description: Digital signature


[GENERAL] dblink does not connect when activated in a remote server

2007-12-18 Thread Abraham, Danny
dblink problem:

I am connecting to a remote server and running a function that calls
another one using dblink.
It fails with error 08001 - could not establish connection.

When the function is located on the local server, it runs fine.

The connection string is: 

'hostaddr=127.0.0.1 .'

The following were tried but did not help:
'host=localhost' and even providing the real target server name
('host=tlvl0390').

This happens on Windows as well as Unix.

Can you help? 

Thanks

Danny


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] dblink does not connect when activated in a remote server

2007-12-18 Thread Terry Fielder

Does the remote server have the relevant port opened up in its firewall?

Is postgres set to allow TCP/IP connections in the postgresql.conf?
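
A quick way to check whether the connect string dblink is being given actually
works from the server where the function runs is to try it by hand there (the
connection parameters below are only placeholders):

    SELECT *
    FROM dblink('hostaddr=127.0.0.1 port=5432 dbname=mydb user=myuser password=secret',
                'SELECT 1') AS t(x int);

If that fails with the same 08001 error, the problem is the listen_addresses /
pg_hba.conf setup on that machine rather than the function itself.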

Terry

Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085



Abraham, Danny wrote:

dblink problem:

I am connecting to a remote server and running a function that calls
another one using dblink.
It fails with error 08001 - could not establish connection.

When the function is located on the local server, it runs fine.

The connection string is: 


'hostaddr=127.0.0.1 .'

The following were tried but did not help:
'host=localhost' and even providing the real target server name
('host=tlvl0390').

This happens on Windows as well as Unix.

Can you help? 


Thanks

Danny


---(end of broadcast)---
TIP 6: explain analyze is your friend

  


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] logging arguments to prepared statements?

2007-12-18 Thread rihad
Dec 18 15:49:41 myhost postgres[29832]: [35-1] ERROR:  23505: duplicate 
key value violates unique constraint foo_key

Dec 18 15:49:41 myhost postgres[29832]: [35-4] INSERT INTO foo
Dec 18 15:49:41 myhost postgres[29832]: [35-5](a,b,c)
Dec 18 15:49:41 myhost postgres[29832]: [35-7] VALUES ($1,$2,$3)
Dec 18 15:49:41 myhost postgres[29832]: [35-8]

And that's it, leaving me wondering which value triggered the error. Any 
way to tweak postgres to include the values too, without setting 
log_statement = 'all'?


changed log settings:
log_destination = 'syslog'
log_error_verbosity = verbose
log_min_error_statement = notice
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_statement = 'none'
log_temp_files = 0
log_autovacuum_min_duration = 250

other log_* settings kept as default (commented).

PostgreSQL 8.3-beta2 (FreeBSD port is lagging behind a bit).

Thanks.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] slony error --need help

2007-12-18 Thread Josh Harrison
On Dec 17, 2007 3:14 PM, Shane Ambler [EMAIL PROTECTED] wrote:

 Not sure I can help much, I only read out of interest, but thought
 clarifying a few things may help.

 SHARMILA JOTHIRAJAH wrote:
  Hi I had posted this in the slony mailing list but no luck in getting
  any answers...Pls help me as I'm stuck with this error for the last 4
  days
 
 
 
  Im trying to replicate between postgres version 7.4.18 and version
  8.1.10.

 7.4 is the master?

yes



  I configured postgres-7.4 with enable-thread-safety option
 
  I configured slony1 with this command
 
  ./configure --prefix=/export/home/josh/slony7.4
  --enable-thread-safety
  --with-pgconfigdir=/export/home/josh/postgres7.4/bin
  --with-pgsourcetree=/export/home/josh/postgresql-7.4.18
 
 This is the 7.4 config? is the 8.1 config the same?
 (your not building slony for 8.1 against a 7.4 source tree?)

 Are both versions running on the same machine?

Yes



 What version of Slony? same for both servers?

Yes slony version 1.2.12



 
  When i try to execute the this script

 On the 7.4 machine?

yes



  #!/bin/sh
 
  slonik _EOF_ cluster name = slony_example;
 
  node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST
  user=$REPLICATIONUSER'; node 2 admin conninfo = 'dbname=$SLAVEDBNAME
  host=$SLAVEHOST user=$REPLICATIONUSER';
 
  init cluster ( id=1, comment = 'Master Node');
 
  create set (id=1, origin=1, comment='All pgbench tables'); set add
  table (set id=1, origin=1, id=1, fully qualified name =
  'public.sample1', comment='accounts table'); store node (id=2,
  comment = 'Slave node');
 
  store path (server = 1, client = 2, conninfo='dbname=$MASTERDBNAME
  host=$MASTERHOST user=$REPLICATIONUSER'); store path (server = 2,
  client = 1, conninfo='dbname=$SLAVEDBNAME host=$SLAVEHOST
  user=$REPLICATIONUSER');
 
  _EOF_
 
 
 
  I get this error
 
  stdin:21: PGRES_FATAL_ERROR load '$libdir/xxid';  - ERROR:  could
  not load library /export/home/josh/postgres7.4/lib/xxid.so:
  ld.so.1: postgres: fatal: relocation error: file
  /export/home/josh/postgres7.4/lib/xxid.so: symbol
  GetTopTransactionId: referenced symbol not found
 
  stdin:21: Error: the extension for the xxid data type cannot be
  loaded in database 'dbname=testdb1 host=172.31.0.67 user=josh'
  stdin:21: ERROR: no admin conninfo for node 134701624
 
 
 
 
  The same works fine between postgresql versions 8.1.10 and 8.2.5 .
 
  Why do I get this error when replicating between versions 7.4 and8.1.
   Does slony1 replicate between these 2 versions? If so is there any
  other settings that needs to be done?
 

 I sorted out the problem. I think I had too many postgres installations in
my system. I removed them and compiled both postgresql and slony fro scratch
and it just worked...Thanks again


Re: [GENERAL] slony error --need help

2007-12-18 Thread Tom Lane
Josh Harrison [EMAIL PROTECTED] writes:
 What version of Slony? same for both servers?

 Yes slony version 1.2.12

Judging from the error message, you can't do that --- you need at least
different versions of the .so file for 7.4 and 8.1.
(GetTopTransactionId did not exist in 7.4.)  But this is the wrong place
to be asking about that; the slony lists would have people who know
more.
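
One rough way to confirm that the module in the 7.4 lib directory was built
against newer headers is to look for the missing symbol directly (path taken
from the error message):

    nm /export/home/josh/postgres7.4/lib/xxid.so | grep GetTopTransactionId

An undefined reference to that symbol there would mean xxid.so was compiled
against an 8.x source tree and cannot be loaded by the 7.4 backend.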

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] postgres8.3beta encodding problem?

2007-12-18 Thread Tom Lane
Martijn van Oosterhout [EMAIL PROTECTED] writes:
 On Mon, Dec 17, 2007 at 10:13:54AM -0800, Jeff Davis wrote:
 http://www.postgresql.org/docs/8.3/static/release-8-3.html

 Ok, but that doesn't apply in this case, his database appears to be
 LATIN1 and this character is valid for that encoding...

You know what, I think the test in the code is backwards.

is_mb = pg_encoding_max_length(encoding) > 1;

if ((is_mb && (cvalue > 255)) || (!is_mb && (cvalue > 127)))
    ereport(ERROR,
            (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
             errmsg("requested character too large for encoding: %d",
                    cvalue)));

Shouldn't we be allowing up-to-255 for single-byte encodings, not
multibyte?

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] postgres8.3beta encodding problem?

2007-12-18 Thread Martijn van Oosterhout
On Tue, Dec 18, 2007 at 10:35:39AM -0500, Tom Lane wrote:
 Martijn van Oosterhout [EMAIL PROTECTED] writes:
  Ok, but that doesn't apply in this case, his database appears to be
  LATIN1 and this character is valid for that encoding...
 
 You know what, I think the test in the code is backwards.
 
 is_mb = pg_encoding_max_length(encoding) > 1;
 
 if ((is_mb && (cvalue > 255)) || (!is_mb && (cvalue > 127)))

It does seem to be a bit weird. For single-byte encodings anything
up to 255 is OK, well, sort of. It depends on what you want chr() to do
(oh no, not this discussion again). If you subscribe to the idea that
it should use unicode code points then the test is completely bogus,
since whether or not the character is valid has nothing to do with whether
the encoding is multibyte or not.

If you want the output of chr() to (logically) depend on the encoding
then the test makes more sense, but then it's inverted. Single-byte
encodings are by definition limited to 255 characters. And multibyte
encodings (other than UTF-8 I suppose) can only see the ASCII subset.

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Those who make peaceful revolution impossible will make violent revolution 
 inevitable.
  -- John F Kennedy


signature.asc
Description: Digital signature


[GENERAL] Problem with index not being chosen inside PL/PgSQL function...

2007-12-18 Thread Weber, Geoffrey M.
Hi all,

Thanks to those on this list who contribute their knowledge for our 
consumption!  I have another question:

The problem I'm having is in one particular spot where I'm trying to run a 
parameterized query inside a PL/PgSQL function.  When I run the query directly, 
the planner correctly chooses to use an index.  When I run the query with 
hard-coded values inside the function, the planner uses the index again.  
However, when I try to pass in one of the values as a parameter, the query 
suddenly takes 45+ seconds to run (and is obviously no longer using the index). 
 Inside the function, I've tried using bound cursors, unbound cursors, and a 
direct query as shown in the test_unlock function below.


Here are the details:

EXPLAIN ANALYZE SELECT ah.* FROM alert ah WHERE ( (ah.replaced_by_id = '0') AND 
(ah.not_displayed_id = '7714598') );

Index Scan using idx_acurr on alert ah  (cost=0.00..4.44 rows=1 width=768) 
(actual time=61.100..61.100 rows=0 loops=1)
  Index Cond: ((replaced_by_id = 0) AND (not_displayed_id = 7714598))
Total runtime: 61.459 ms


CREATE OR REPLACE FUNCTION test_unlock ( id_locked alert.id%TYPE ) RETURNS 
alert.id%TYPE AS $test_unlock$
DECLARE
last_alert  alert%ROWTYPE;
BEGIN
RAISE NOTICE 'Fetching data...';
SELECT ah.* INTO last_alert FROM alert ah where ( (ah.replaced_by_id = 
'0') AND (ah.not_displayed_id = id_locked ) );
RAISE NOTICE 'Data fetched...';

IF NOT FOUND THEN
RAISE NOTICE 'No locked out alert was found!';
ELSE
RAISE NOTICE 'Alert id % was found!', last_alert.id;
END IF;

RETURN last_alert.id;
END;
$test_unlock$ LANGUAGE plpgsql;

SELECT * FROM test_unlock( '7714598');


Using the function, I get a 45+ second delay between the fetching notice and 
the fetched notice.  Is there some way I can help the planner out further?  I 
even tried adding an 'ORDER BY replaced_by_id, not_displayed_id' to help it 
find the index, but no luck there...

Thanks in advance for any help!

NOTICE: This electronic mail transmission may contain confidential information 
and is intended only for the person(s) named.  Any use, copying, or disclosure 
by any other person is strictly prohibited. If you have received this 
transmission in error, please notify the sender via e-mail.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] postgres8.3beta encodding problem?

2007-12-18 Thread Andrew Dunstan



Martijn van Oosterhout wrote:

On Tue, Dec 18, 2007 at 10:35:39AM -0500, Tom Lane wrote:
  

Martijn van Oosterhout [EMAIL PROTECTED] writes:


Ok, but that doesn't apply in this case, his database appears to be
LATIN1 and this character is valid for that encoding...
  

You know what, I think the test in the code is backwards.

is_mb = pg_encoding_max_length(encoding) > 1;

if ((is_mb && (cvalue > 255)) || (!is_mb && (cvalue > 127)))




Yes.



It does seem to be a bit weird. For single-byte encodings anything
up to 255 is OK, well, sort of. It depends on what you want chr() to do
(oh no, not this discussion again). If you subscribe to the idea that
it should use unicode code points then the test is completely bogus,
since whether or not the character is valid has nothing to do with whether
the encoding is multibyte or not.
  


We are certainly not going to revisit that discussion at this stage. It 
was thrashed out months ago.

If you want the output of chr() to (logically) depend on the encoding
then the test makes more sense, but then it's inverted. Single-byte
encodings are by definition limited to 255 characters. And multibyte
encodings (other than UTF-8 I suppose) can only see the ASCII subset.
  


Right. There is a simple thinko on my part in the line of code Tom 
pointed to, which needs to be fixed.


cheers

andrew



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] logging arguments to prepared statements?

2007-12-18 Thread Ted Byers
--- rihad [EMAIL PROTECTED] wrote:
 Dec 18 15:49:41 myhost postgres[29832]: [35-1]
 ERROR:  23505: duplicate 
 key value violates unique constraint foo_key
 Dec 18 15:49:41 myhost postgres[29832]: [35-4]
 INSERT INTO foo
 Dec 18 15:49:41 myhost postgres[29832]: [35-5]   
 (a,b,c)
 Dec 18 15:49:41 myhost postgres[29832]: [35-7]
 VALUES ($1,$2,$3)
 Dec 18 15:49:41 myhost postgres[29832]: [35-8]
 
 And that's it, leaving me wondering which value
 triggered the error. Any 
Why?  It seems simple enough.  You have a table called
foo, with at least three columns: a, b, and c.  And
you have a violation of your unique constraint.  If it
isn't that simple, you have left out useful
information.  You did not say, for example, which of
your columns, if any, are involved in your unique
constraint.  If the answer to that is none, then you
need to show how the constraint is defined.

Which of the three columns are involved in a unique
constraint?  If none of the columns you use are
involved in a unique constraint, there must be other
columns that are, and that would imply that there is
either a problem with your prepared statement,
ignoring certain columns that can't be ignored, or a
problem with how you set up the default values for
another column that is involved in a unique
constraint; or the table has grown so big that it is
impossible to add a new record without violating the
existing unique constraint (unlikely as that is in
most cases, especially during development).

I could see creating a before insert trigger that
stores the values to be inserted in a log table with a
timestamp, but I don't see the profit in that. Doesn't
such an error generate a SQL exception to your client?
 If so, the client code will know immediately what
insert attempt failed, and therefore what values are
involved in the problem.  Using JDBC, for example, all
of the JDBC functions that execute a prepared
statement (or any other SQL) will throw a
java.sql.SQLException.  One therefore knows
immediately when there is a problem of the sort you
describe, and so you can determine quickly what
values resulted in your error.  If need be,
that could be stored in your application's log.  If
one needed full audit functionality, one could create
the tables to store the details of every SQL
statement, including who is responsible for the
statement and a timestamp.  But if you don't need to
support that kind of detailed audit, why bother when
there are easier ways to address your issue?
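
If you did want the values to show up in the server log, a rough sketch of
that trigger idea is below, using RAISE LOG instead of a log table so the
values are emitted even though the failed INSERT is rolled back (the column
names a, b, c are taken from the log excerpt; everything else is assumed):

    CREATE OR REPLACE FUNCTION log_foo_insert() RETURNS trigger AS $$
    BEGIN
        -- written to the server log immediately, so it survives the rollback
        RAISE LOG 'foo insert attempt: a=%, b=%, c=%', NEW.a, NEW.b, NEW.c;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER foo_log_insert
        BEFORE INSERT ON foo
        FOR EACH ROW EXECUTE PROCEDURE log_foo_insert();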

HTH

Ted

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Problem with index not being chosen inside PL/PgSQL function...

2007-12-18 Thread Tom Lane
Weber, Geoffrey M. [EMAIL PROTECTED] writes:
 The problem I'm having is in one particular spot where I'm trying to
 run a parametized query inside a PL/PgSQL function.

I wonder whether the parameter is actually of the same datatype as the
indexed column.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Problem with index not being chosen inside PL/PgSQL function...

2007-12-18 Thread Weber, Geoffrey M.
Hmm - good question!  However, it is - both the id and not_displayed_id are 
INTEGERs.  Changing the function header to:

CREATE OR REPLACE FUNCTION test_unlock ( id_locked alert.not_displayed_id%TYPE 
) RETURNS alert.id%TYPE AS $test_unlock$

sadly doesn't affect the performance at all.  I should have been a little more 
careful with the datatypes there, but this was a temporary function used to 
help me debug the problem and also help show it to the world.  The original 
function has a bit more to it and is called by a higher-level function, but 
I've tracked the slowness down to this issue :)...

Just for grins, I also changed the query to:

SELECT ah.* INTO last_alert FROM alert ah where ( (ah.replaced_by_id = 
'0') AND (not_displayed_id = id_locked::INTEGER ) ) ORDER BY replaced_by_id, 
not_displayed_id;

Still no improvement :(.

Thanks for the suggestion though!



From: Tom Lane [EMAIL PROTECTED]
Sent: Tuesday, December 18, 2007 10:11 AM
To: Weber, Geoffrey M.
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Problem with index not being chosen inside PL/PgSQL  
 function...

Weber, Geoffrey M. [EMAIL PROTECTED] writes:
 The problem I'm having is in one particular spot where I'm trying to
 run a parametized query inside a PL/PgSQL function.

I wonder whether the parameter is actually of the same datatype as the
indexed column.

regards, tom lane

__
This email has been scanned by the MessageLabs Email Security System.
For more information please visit http://www.messagelabs.com/email
__



NOTICE: This electronic mail transmission may contain confidential information 
and is intended only for the person(s) named.  Any use, copying, or disclosure 
by any other person is strictly prohibited. If you have received this 
transmission in error, please notify the sender via e-mail.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Problem with index not being chosen inside PL/PgSQL function...

2007-12-18 Thread Tom Lane
Weber, Geoffrey M. [EMAIL PROTECTED] writes:
 Hmm - good question!  However, it is - both the id and
 not_displayed_id are INTEGERs.

Well, in that case it must be a statistics issue --- does the indexed
column have a badly skewed distribution?

You could investigate how many rows the planner thinks will be fetched
via

PREPARE foo(int) AS
SELECT ah.* FROM alert ah where ( (ah.replaced_by_id = '0') AND 
(not_displayed_id = $1 ) ) ORDER BY replaced_by_id, not_displayed_id;

EXPLAIN EXECUTE foo(42);

which will set up exactly the same planning situation as occurs in the
plpgsql function: no knowledge of the exact value being compared to.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Problem with index not being chosen inside PL/PgSQL function...

2007-12-18 Thread Weber, Geoffrey M.
Tom,

Yes, the distribution must be what's doing it.  I guess I knew that 
subconsciously, but was looking for something like hints to force the planner to 
do what I wanted.  Instead it looks like I'll have to do a bit of tweaking with 
my indexes.  Probably a partial index on the 'not_displayed_id' column.  It'll 
be very small and shouldn't cause much overhead.  I was trying to keep my index 
count down, and have had a dual-column index on (replaced_by_id, 
not_displayed_id) to this point.

Thanks once again for your help!



From: Tom Lane [EMAIL PROTECTED]
Sent: Tuesday, December 18, 2007 10:36 AM
To: Weber, Geoffrey M.
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Problem with index not being chosen inside PL/PgSQL  
 function...

Weber, Geoffrey M. [EMAIL PROTECTED] writes:
 Hmm - good question!  However, it is - both the id and
 not_displayed_id are INTEGERs.

Well, in that case it must be a statistics issue --- does the indexed
column have a badly skewed distribution?

You could investigate how many rows the planner thinks will be fetched
via

PREPARE foo(int) AS
SELECT ah.* FROM alert ah where ( (ah.replaced_by_id = '0') AND 
(not_displayed_id = $1 ) ) ORDER BY replaced_by_id, not_displayed_id;

EXPLAIN EXECUTE foo(42);

which will set up exactly the same planning situation as occurs in the
plpgsql function: no knowledge of the exact value being compared to.

regards, tom lane

__
This email has been scanned by the MessageLabs Email Security System.
For more information please visit http://www.messagelabs.com/email
__



NOTICE: This electronic mail transmission may contain confidential information 
and is intended only for the person(s) named.  Any use, copying, or disclosure 
by any other person is strictly prohibited. If you have received this 
transmission in error, please notify the sender via e-mail.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Partitioned tables Slony

2007-12-18 Thread Chris Browne
Goboxe [EMAIL PROTECTED] writes:
 What need to be taken care of when replicating data from partitioned
 tables?

 I have several master tables that are inherited by date either daily,
 weekly and monthly.

 How to automate addition of newly created child tables into Slony
 cluster?

There's an outline of how to do it, in the partitioning test...

http://main.slony.info/viewcvs/viewvc.cgi/slony1-engine/tests/testpartition/
-- 
cbbrowne,@,linuxdatabases.info
http://linuxfinances.info/info/slony.html
If we believe in data structures, we must believe in independent
(hence simultaneous) processing.  For why else would we collect items
within a structure?  Why do we tolerate languages that give us the one
without the other? -- Alan J. Perlis

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Efficiency vs. code bloat for SELECT wrappers

2007-12-18 Thread Colin Wetherbee

Colin Wetherbee wrote:
My guess, having written this, is that your approach might be more 
useful for applications that rely heavily on interaction with a 
database.  I'd appreciate any more comments you have on this, though.


Tom, Sam, and Ted (a lovely assortment of three-letter names), thank you 
very much for your input on my situation.  You've given me quite a lot 
to consider.


For now, I will stick with Tom's and Ted's recommendations for a while, 
but I'll also keep Sam's ideas in mind if this application becomes more 
heavily reliant on the database.


Colin

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] logging arguments to prepared statements?

2007-12-18 Thread rihad

Ted Byers wrote:

--- rihad [EMAIL PROTECTED] wrote:

Dec 18 15:49:41 myhost postgres[29832]: [35-1]
ERROR:  23505: duplicate 
key value violates unique constraint foo_key

Dec 18 15:49:41 myhost postgres[29832]: [35-4]
INSERT INTO foo
Dec 18 15:49:41 myhost postgres[29832]: [35-5]   
(a,b,c)

Dec 18 15:49:41 myhost postgres[29832]: [35-7]
VALUES ($1,$2,$3)
Dec 18 15:49:41 myhost postgres[29832]: [35-8]

And that's it, leaving me wondering which value
triggered the error. Any 

Why?  It seems simple enough.  You have a table called
foo, with at least three columns: a, b, and c.  And
you have a violation of your unique constraint.  If it


I was wondering if there was a way to see the _values_ themselves in 
case of errors, as is possible with log_statement = 'all', without turning 
it on. Apparently there isn't. Thanks anyway.



isn't that simple, you have left out useful
information.  You did not say, for example, which of
your columns, if any, are involved in your unique
constraint.  If the answer to that is none, then you
need to show how the constraint is defined.

Which of the three columns are involved in a unique
constraint?  If none of the columns you use are
involved in a unique constraint, there must be other
columns that are, and that would imply that there is
either a problem with your prepared statement,
ignoring certain columns that can't be ignored, or a
problem with how you set up the default values for
another column that is involved in a unique
constraint; or the table has grown so big that it is
impossible to add a new record without violating the
existing unique constraint (unlikely as that is in
most cases, especially during development).

I could see creating a before insert trigger that
stores the values to be inserted in a log table with a
timestamp, but I don't see the profit in that. Doesn't
such an error generate a SQL exception to your client?
 If so, the client code will know immediately what
insert attempt failed, and therefore what values are
involved in the problem.  Using JDBC, for example, all
of the JDBC functions that execute a prepared
statement (or any other SQL) will throw a
java.sql.SQLException.  One therefore knows
immediately when there is a problem of the sort you
describe, and so you can determine quickly what the
values were that resulting in your error.  If need be,
that could be stored in your application's log.  If
one needed full audit functionality, one could create
the tables to store the details of every SQL
statement, including who is responsible for the
statement and a timestamp.  But if you don't need to
support that kind of detailed audit, why bother when
there are easier ways to address your issue?

HTH

Ted





---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Specify client encoding for backups...

2007-12-18 Thread Nathan Wilhelmi
Hello - We are using the pgAdmin backup facility to create the schema DDL scripts 
for our application. Is there any way to get pgAdmin to not include the 
encoding value, or to configure PostgreSQL to set the client encoding to 
UNICODE? We are having problems with the JDBC driver and the UTF8 encoding when 
we try to execute the scripts as part of our build process.


Thanks!
-Nate


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Problem with index not being chosen inside PL/PgSQL function...

2007-12-18 Thread Scott Marlowe
On Dec 18, 2007 10:54 AM, Weber, Geoffrey M.
[EMAIL PROTECTED] wrote:
 Tom,

 Yes, the distribution must be what's doing it.  I guess I knew that 
 subconciously, but was looking for something like hints to force the planner 
 to do what I wanted.  Instead it looks like I'll have to do a bit of tweaking 
 with my indexes.  Probably a partial index on the 'not_displayed_id' column.  
 It'll be very small and shouldn't cause much overhead.  I was trying to keep 
 my index count down, and have had a dual-column index on (replaced_by_id, 
 not_displayed_id) to this point.

Fix not with a hammer that which you can fix with a screwdriver.  Fix
not with a screwdriver that which you can fix with a knob

Have you tried increasing the stats target of the guilty column and
reanalyzing to see if that helps?
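
For example (a rough sketch; the column is the one from the thread and 500 is
just an illustrative statistics target):

    ALTER TABLE alert ALTER COLUMN not_displayed_id SET STATISTICS 500;
    ANALYZE alert;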

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] free ETL tool using postgreSQL, new major release

2007-12-18 Thread Benoît Carpentier
Dear all,

I've released a simple graphical ETL tool using PostgreSQL under Windows.
It works with .txt or .csv files.
It is called Benetl and you can find (and freely download) it at:
www.benetl.net

The second release (version 1.1) is now out; your comments are welcome.
There is a forum you can use to report troubles or needs, or you can
reply to my current e-mail address.

The third release will be focused on database use; in particular, it will
provide a way to create groups in order to regroup entities.
This will be very helpful if you have to calculate indicators or other
measures on the grouped data.

Thanks for your attention and your interest,

-- 
Benoît Carpentier
www.benetl.net
Founder of Benetl  Java Developer



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Problem with index not being chosen inside PL/PgSQL function...

2007-12-18 Thread Weber, Geoffrey M.
OK - in that same vein, I thought of something even better - using dynamic SQL 
instead.  It sped things up right away!

Thanks for putting me on the right track!
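
For reference, a minimal sketch of what the dynamic-SQL variant could look
like inside the function (assumed and simplified; quote_literal guards the
interpolated value, and EXECUTE ... INTO needs a reasonably recent 8.x release):

    EXECUTE 'SELECT ah.* FROM alert ah '
         || 'WHERE ah.replaced_by_id = 0 '
         || 'AND ah.not_displayed_id = ' || quote_literal(id_locked::text)
    INTO last_alert;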



From: Scott Marlowe [EMAIL PROTECTED]
Sent: Tuesday, December 18, 2007 12:02 PM
To: Weber, Geoffrey M.
Cc: Tom Lane; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Problem with index not being chosen inside PL/PgSQL  
 function...

On Dec 18, 2007 10:54 AM, Weber, Geoffrey M.
[EMAIL PROTECTED] wrote:
 Tom,

 Yes, the distribution must be what's doing it.  I guess I knew that 
 subconciously, but was looking for something like hints to force the planner 
 to do what I wanted.  Instead it looks like I'll have to do a bit of tweaking 
 with my indexes.  Probably a partial index on the 'not_displayed_id' column.  
 It'll be very small and shouldn't cause much overhead.  I was trying to keep 
 my index count down, and have had a dual-column index on (replaced_by_id, 
 not_displayed_id) to this point.

Fix not with a hammer that which you can fix with a screwdriver.  Fix
not with a screwdriver that which you can fix with a knob

Have you tried increasing the stats target of the guilty column and
reanalyzing to see if that helps?

__
This email has been scanned by the MessageLabs Email Security System.
For more information please visit http://www.messagelabs.com/email
__



NOTICE: This electronic mail transmission may contain confidential information 
and is intended only for the person(s) named.  Any use, copying, or disclosure 
by any other person is strictly prohibited. If you have received this 
transmission in error, please notify the sender via e-mail.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] combining semi-duplicate rows

2007-12-18 Thread hjenkins
Hello,

I have a table of rows which partially duplicate one another and need to
be merged and moved into a table with a primary key.

As an additional complication, some of the duplicates contain different
information, ex.:
schema1.datatable: key1  None  None  3     4
schema2.datatable: key1  1     2     7     None

desired result:
schema1.datatable: key1  1     2     3     4

I looked for a specific function that would do this sort of merge and
failed. So I tried:

UPDATE schema1.datatable SET schema1.datatable.field1 =
schema2.datatable.field1 FROM schema2.datatable
WHERE schema2.datatable.keyfield = schema1.datatable.keyfield
AND schema1.datatable.field1 = None;

Which is suboptimal because I'd need a command for each field, but it
would be a start. However, the schema names are not recognised. I get
error messages to the effect that cross-database references are not
implemented or relation schema1/2 does not exist. Even the much simpler

SELECT DISTINCT schema2.datatable INTO schema1.datatable;

...gives me these messages. Qualifying right up to the database level
produces "improper qualified name (too many dotted names)".

I'm pretty sure that this isn't a capitalization/quoting problem as
described in the FAQ. Is it not possible to use these functions between
schemas? Or am I misusing the functions in a more basic way?

The problem is somewhat similar to this one:
http://archives.postgresql.org/pgsql-sql/2007-02/msg00055.php

Namely,

Regards,
H.Jenkins


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] multiple version installation in the same machine ????

2007-12-18 Thread Josh Harrison
Hi
I have a postgres version 7.4 and version 8.3 installed in my system.
 7.4 uses port 5432 and 8.3 uses port 5433.
I started 7.4 and the database is running fine. Now I started the database
server in version 8.3
and it started fine.

pg_ctl -D /export/home/josh/postgres8.3/pgsql/data start -l log8.3.log
server starting
-sh-3.00$ pg_ctl status
pg_ctl: server is running (PID: 4408)
/usr4/postgres8.3/bin/postgres  -D  /export/home/josh/postgres8.3/pgsql/data

But when I type psql -l   I get this error
-sh-3.00$ psql -l
psql: FATAL:  database postgres does not exist

Why? Is it not possible to have multiple version installations in the same
machine (on different ports)?

thanks
josh


Re: [GENERAL] multiple version installation in the same machine ????

2007-12-18 Thread Richard Huxton

Josh Harrison wrote:

Hi
I have a postgres version 7.4 and version 8.3 installed in my system.
 7.4 uses port 5432 and 8.3 uses port 5433.
I started 7.4 and the database is running fine. Now i started the database
server in version 8.3
and it started fine.

pg_ctl -D /export/home/josh/postgres8.3/pgsql/data start -l log8.3.log
server starting
-sh-3.00$ pg_ctl status
pg_ctl: server is running (PID: 4408)
/usr4/postgres8.3/bin/postgres  -D  /export/home/josh/postgres8.3/pgsql/data

But when I type psql -l   I get this error
-sh-3.00$ psql -l
psql: FATAL:  database postgres does not exist


Yep.


why? Is it not possible to have multiple version installations i the same
machine(in different ports)?


It is, and lots of people do.

Answer the following questions and you'll find out why you get your 
error (or just guess, and you'll probably figure it out).


Q1. Do you have two versions of the psql command installed?
Q2. Which one are you running above?
Q3. What is the default port, user, database that it is connecting as to 
run -l?
Q4. Does the version of the PostgreSQL on that port have that user and 
database?


Personally I wrap my commands in aliases:
  alias psql82='/usr/local/pgsql82/bin/psql -p5433'
  alias psql83='/usr/local/pgsql83/bin/psql -p5483'

Others prefer a small wrapper script, or even a set default 
installation command.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] multiple version installation in the same machine ????

2007-12-18 Thread Josh Harrison
Thanks...It works...
josh

On Dec 18, 2007 3:06 PM, Richard Huxton [EMAIL PROTECTED] wrote:

 Josh Harrison wrote:
  Hi
  I have a postgres version 7.4 and version 8.3 installed in my system.
   7.4 uses port 5432 and 8.3 uses port 5433.
  I started 7.4 and the database is running fine. Now i started the
 database
  server in version 8.3
  and it started fine.
 
  pg_ctl -D /export/home/josh/postgres8.3/pgsql/data start -l log8.3.log
  server starting
  -sh-3.00$ pg_ctl status
  pg_ctl: server is running (PID: 4408)
  /usr4/postgres8.3/bin/postgres  -D
  /export/home/josh/postgres8.3/pgsql/data
 
  But when I type psql -l   I get this error
  -sh-3.00$ psql -l
  psql: FATAL:  database postgres does not exist

 Yep.

  why? Is it not possible to have multiple version installations i the
 same
  machine(in different ports)?

 It is, and lots of people do.

 Answer the following questions and you'll find out why you get your
 error (or just guess, and you'll probably figure it out).

 Q1. Do you have two versions of the psql command installed?
 Q2. Which one are you running above?
 Q3. What is the default port, user, database that it is connecting as to
 run -l?
 Q4. Does the version of the PostgreSQL on that port have that user and
 database?

 Personally I wrap my commands in aliases:
   alias psql82='/usr/local/pgsql82/bin/psql -p5433'
   alias psql83='/usr/local/pgsql83/bin/psql -p5483'

 Others prefer a small wrapper script, or even a set default
 installation command.

 --
   Richard Huxton
   Archonet Ltd



Re: [GENERAL] multiple version installation in the same machine ????

2007-12-18 Thread Bill Moran
In response to Josh Harrison [EMAIL PROTECTED]:

 Hi
 I have a postgres version 7.4 and version 8.3 installed in my system.
  7.4 uses port 5432 and 8.3 uses port 5433.
 I started 7.4 and the database is running fine. Now i started the database
 server in version 8.3
 and it started fine.
 
 pg_ctl -D /export/home/josh/postgres8.3/pgsql/data start -l log8.3.log
 server starting
 -sh-3.00$ pg_ctl status
 pg_ctl: server is running (PID: 4408)
 /usr4/postgres8.3/bin/postgres  -D  /export/home/josh/postgres8.3/pgsql/data
 
 But when I type psql -l   I get this error
 -sh-3.00$ psql -l
 psql: FATAL:  database postgres does not exist
 
 why? Is it not possible to have multiple version installations i the same
 machine(in different ports)?

PostgreSQL 7.4 doesn't install a postgres database by default.  Try
explicitly connecting to template1.
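
For example (ports as given earlier in the thread):

    psql -p 5432 -d template1 -c '\l'   # the 7.4 server
    psql -p 5433 -l                     # the 8.3 server, which does have a postgres database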

-- 
Bill Moran
http://www.potentialtech.com

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] logging arguments to prepared statements?

2007-12-18 Thread Merlin Moncure
On Dec 18, 2007 12:14 PM, rihad [EMAIL PROTECTED] wrote:
 Ted Byers wrote:
  --- rihad [EMAIL PROTECTED] wrote:
  Dec 18 15:49:41 myhost postgres[29832]: [35-1]
  ERROR:  23505: duplicate
  key value violates unique constraint foo_key
  Dec 18 15:49:41 myhost postgres[29832]: [35-4]
  INSERT INTO foo
  Dec 18 15:49:41 myhost postgres[29832]: [35-5]
  (a,b,c)
  Dec 18 15:49:41 myhost postgres[29832]: [35-7]
  VALUES ($1,$2,$3)
  Dec 18 15:49:41 myhost postgres[29832]: [35-8]
 
  And that's it, leaving me wondering which value
  triggered the error. Any
  Why?  It seems simple enough.  You have a table called
  foo, with at least three columns: a, b, and c.  And
  you have a violation of your unique constraint.  If it

 I was wondering if there was a way to see the _values_ themselves in
 case of errors, as is possible with log_statements=all, without turning
 it on. Apparently there isn't. Thanks anyway.

which client api are you using? you can wrap the execution on the
client and log there.

merlin

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Way to avoid expensive Recheck Cond in index lookup?

2007-12-18 Thread Matt Magoffin
Hello, I'm trying to find a way to use a text[] index lookup using an
xpath() function in 8.3, but I suspect this situation is not specific to
8.3 or this exact query style. The query plan looks like

 Bitmap Heap Scan on lead  (cost=37.39..7365.22 rows=2206 width=8)
   Recheck Cond:
((xpath('/als:auto-lead-service/als:[EMAIL 
PROTECTED]com.autoleadservice.TypeFlag]/text()'::text,
xml, '{{als,http://autoleadservice.com/xml/als}}'::text[]))::text[] 
'{foo,bar}'::text[])
   -  Bitmap Index Scan on lead_type_flag_gin_idx  (cost=0.00..36.83
rows=2206 width=0)
 Index Cond:
((xpath('/als:auto-lead-service/als:[EMAIL 
PROTECTED]com.autoleadservice.TypeFlag]/text()'::text,
xml,
'{{als,http://autoleadservice.com/xml/als}}'::text[]))::text[] 
'{foo,bar}'::text[])

The problem for me is, the Recheck Cond is then on the xpath() function
used by the function-based index. My understanding is that then the
database must actually call the xpath() function again on all matches from
the index lookup. Are there ways to re-write the query so the recheck
condition is not necessary? Or a way to define the index differently so
that I might be able to still compare text[] values from the index without
needing the recheck?

-- m@

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] combining semi-duplicate rows

2007-12-18 Thread Tom Lane
hjenkins [EMAIL PROTECTED] writes:
 So I tried:

 UPDATE schema1.datatable SET schema1.datatable.field1 =
 schema2.datatable.field1 FROM schema2.datatable
 WHERE schema2.datatable.keyfield = schema1.datatable.keyfield
 AND schema1.datatable.field1 = None;

 Which is suboptimal because I'd need a command for each field, but it
 would be a start. However, the schema names are not recognised. I get
 error messages to the effect that cross-database references are not
 implemented or relation schema1/2 does not exist.

The target column of a SET clause can't be qualified with the relation
name; it would introduce ambiguity in the case of composite-type fields,
and it's useless anyway since the target relation was already given.
Your example works for me (syntactically at least) as

regression=# UPDATE schema1.datatable SET field1 =
schema2.datatable.field1 FROM schema2.datatable
WHERE schema2.datatable.keyfield = schema1.datatable.keyfield
AND schema1.datatable.field1 = 'None';
UPDATE 0

Personally, though, I'd use some aliases to improve readability and
forestall the onset of carpal tunnel syndrome:

regression=# UPDATE schema1.datatable t SET field1 =
s.field1 FROM schema2.datatable s
WHERE s.keyfield = t.keyfield
AND t.field1 = 'None';
UPDATE 0
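
If the one-command-per-field repetition is the real objection, a
COALESCE/NULLIF variant (assuming the missing-value marker really is the
string 'None'; field2 stands in for the other columns) can fold several
fields into a single statement:

    UPDATE schema1.datatable t
       SET field1 = COALESCE(NULLIF(t.field1, 'None'), s.field1),
           field2 = COALESCE(NULLIF(t.field2, 'None'), s.field2)
      FROM schema2.datatable s
     WHERE s.keyfield = t.keyfield;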

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Way to avoid expensive Recheck Cond in index lookup?

2007-12-18 Thread Tom Lane
Matt Magoffin [EMAIL PROTECTED] writes:
 The problem for me is, the Recheck Cond is then on the xpath() function
 used by the function-based index. My understanding is that then the
 database must actually call the xpath() function again on all matches from
 the index lookup.

This is mistaken.  It only happens if there are so many hits that the 
bitmap becomes lossy (which you can control to some extent anyway by
adjusting work_mem).
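
For example, a hedged sketch (the value is only illustrative):

    SET work_mem = '64MB';  -- larger work_mem lets the bitmap stay exact (non-lossy)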

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Way to avoid expensive Recheck Cond in index lookup?

2007-12-18 Thread Matt Magoffin
 The problem for me is, the Recheck Cond is then on the xpath() function
 used by the function-based index. My understanding is that then the
 database must actually call the xpath() function again on all matches
 from
 the index lookup.

 This is mistaken.  It only happens if there are so many hits that the
 bitmap becomes lossy (which you can control to some extent anyway by
 adjusting work_mem).

Ah, great. Thanks for clarifying.

-- m@

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Way to avoid expensive Recheck Cond in index lookup?

2007-12-18 Thread Gregory Stark
Tom Lane [EMAIL PROTECTED] writes:

 Matt Magoffin [EMAIL PROTECTED] writes:
 The problem for me is, the Recheck Cond is then on the xpath() function
 used by the function-based index. My understanding is that then the
 database must actually call the xpath() function again on all matches from
 the index lookup.

 This is mistaken.  It only happens if there are so many hits that the 
 bitmap becomes lossy (which you can control to some extent anyway by
 adjusting work_mem).

But it's true that it's possible for a slow expression to make the recheck
very expensive. The planner doesn't have a very good understanding of how to
tell whether the expression is likely to be slow.

The case I ran into is things like WHERE x = ANY ($1::integer[]), which become
very slow for very large arrays. So I'm sure xpath() could possibly trigger
the same case.

But the number of matching pages would have to be quite large. And in that
case the alternative (regular index scans) is going to suck too.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's Slony Replication support!

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] thank you

2007-12-18 Thread Kevin Hunter
Hullo List,

This is aimed at everyone in this community who contributes to the
Postgres project, but especially at the core folks who continually make
this community great through energy, time, money, responses, and
what-have-you.

I see lots of Thank yous go by for this problem or that conundrum
solved, but I don't think I've seen a general thank you for all that all
y'all do.  (If I've missed them in the blur of emails I get, I do
apologize.)  Thank you for all that you do.

Thank you for the countless times you've saved my butt from my own
stupidity.  Thank you for the absolute awesome help you've been while
tracking down bugs.  Thank you for signing the NDA without fuss and
searching my DB for a bug that I may have caused.  Thank you for
answering my frustrated emails at some ungodly hour of the morning (do
you sleep?!).  Thank you for engaging others in discussion, sometimes
beyond the point of annoyance ... your enthusiasm and love for the
project is duly noted. Thank you for constantly preaching the right
way.  Thank you . . . well you get the drift.  I'll stop before this
turns into some chain-letter type ordeal.

The point is that I hope you realize just how much you all mean to the
community.

Kevin


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] thank you

2007-12-18 Thread Kevin Kempter
On Tuesday 18 December 2007 19:26:40 Kevin Hunter wrote:
 Hullo List,

 This is aimed at everyone in this community who contributes to the
 Postgres project, but especially at the core folks who continually make
 this community great through energy, time, money, responses, and
 what-have-you.

 I see lots of Thank yous go by for this problem or that conundrum
 solved, but I don't think I've seen a general thank you for all that all
 y'all do.  (If I've missed them in the blur of emails I get, I do
 apologize.)  Thank you for all that you do.

 Thank you for the countless times you've saved my butt from my own
 stupidity.  Thank you for the absolute awesome help you've been while
 tracking down bugs.  Thank you for signing the NDA without fuss and
 searching my DB for a bug that I may have caused.  Thank you for
 answering my frustrated emails at some ungodly hour of the morning (do
 you sleep?!).  Thank you for engaging others in discussion, sometimes
 beyond the point of annoyance ... your enthusiasm and love for the
 project is duly noted. Thank you for constantly preaching the right
 way.  Thank you . . . well you get the drift.  I'll stop before this
 turns into some chain-letter type ordeal.

 The point is that I hope you realize just how much you all mean to the
 community.

 Kevin


I'll second that. Plus thank you for a database system that's a joy to work 
with.

/Kevin (Although not the same Kevin as above)

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] thank you

2007-12-18 Thread Joshua D. Drake
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Tue, 18 Dec 2007 19:31:57 -0700
Kevin Kempter [EMAIL PROTECTED] wrote:

  The point is that I hope you realize just how much you all mean to
  the community.
 
  Kevin
 
 
 I'll second that. Plus thank you for a database system that's a joy
 to work with.
 
 /Kevin (Although not the same Kevin as above)

/me takes note of all the Kevins that are handing out hugs...

:)

Sincerely,

Joshua D. Drake

- -- 
The PostgreSQL Company: Since 1997, http://www.commandprompt.com/ 
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
SELECT 'Training', 'Consulting' FROM vendor WHERE name = 'CMD'


-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFHaIUHATb/zqfZUUQRAi2lAJ9Jp8AWTfjs2HBVbGhtyRyDS4Zx4gCePbEs
N0KcDkAWmNj/HYdEJkGtCiQ=
=nW2G
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] thank you

2007-12-18 Thread Martin Gainty
2 questions

NDA...what NDA??

I would suggest concentrating on implementing your contact page
(in other words publish the page only when it works correctly..)

M--
- Original Message -
From: Kevin Kempter [EMAIL PROTECTED]
To: pgsql-general@postgresql.org
Sent: Tuesday, December 18, 2007 9:31 PM
Subject: Re: [GENERAL] thank you


 On Tuesday 18 December 2007 19:26:40 Kevin Hunter wrote:
  Hullo List,
 
  This is aimed at everyone in this community who contributes to the
  Postgres project, but especially at the core folks who continually make
  this community great through energy, time, money, responses, and
  what-have-you.
 
  I see lots of Thank yous go by for this problem or that conundrum
  solved, but I don't think I've seen a general thank you for all that all
  y'all do.  (If I've missed them in the blur of emails I get, I do
  apologize.)  Thank you for all that you do.
 
  Thank you for the countless times you've saved my butt from my own
  stupidity.  Thank you for the absolute awesome help you've been while
  tracking down bugs.  Thank you for signing the NDA without fuss and
  searching my DB for a bug that I may have caused.  Thank you for
  answering my frustrated emails at some ungodly hour of the morning (do
  you sleep?!).  Thank you for engaging others in discussion, sometimes
  beyond the point of annoyance ... your enthusiasm and love for the
  project is duly noted. Thank you for constantly preaching the right
  way.  Thank you . . . well you get the drift.  I'll stop before this
  turns into some chain-letter type ordeal.
 
  The point is that I hope you realize just how much you all mean to the
  community.
 
  Kevin


 I'll second that. Plus thank you for a database system that's a joy to
work
 with.

 /Kevin (Although not the same Kevin as above)

 ---(end of broadcast)---
 TIP 2: Don't 'kill -9' the postmaster



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] thank you

2007-12-18 Thread Gregory Williamson
Kevin H. wrote on Tue 12/18/2007 7:26 PM

 Hullo List,
 
 This is aimed at everyone in this community who contributes to the
 Postgres project, but especially at the core folks who continually make
 this community great through energy, time, money, responses, and
 what-have-you.

...snipped...

 The point is that I hope you realize just how much you all mean to the
 community.
 

+1

Greg Williamson


Re: [GENERAL] thank you

2007-12-18 Thread Paul Lambert

Gregory Williamson wrote:

Kevin H. wrote on Tue 12/18/2007 7:26 PM

  Hullo List,
 
  This is aimed at everyone in this community who contributes to the
  Postgres project, but especially at the core folks who continually make
  this community great through energy, time, money, responses, and
  what-have-you.

...snipped...

  The point is that I hope you realize just how much you all mean to the
  community.
 

+1

Greg Williamson



+2

I'm just disappointed that I finish up work with my current employer on 
Friday and where I am going I won't get to work with PG anymore and thus 
won't have as much opportunity to interact with the PG community.


That is until I can convince my new employer to realise the dark side of 
 Microsoft SQL Server. :)


--
Paul Lambert
Database Administrator
AutoLedgers - A Reynolds  Reynolds Company


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] thank you

2007-12-18 Thread Ow Mun Heng

On Tue, 2007-12-18 at 20:12 -0700, Gregory Williamson wrote:
 Kevin H. wrote on Tue 12/18/2007 7:26 PM
  This is aimed at everyone in this community who contributes to the
  Postgres project, but especially at the core folks who continually
 make this community great through energy, time, money, responses, and
  what-have-you.
 
 ...snipped...
 
  The point is that I hope you realize just how much you all mean to
 the community.
 
 
 +1
+1

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] thank you

2007-12-18 Thread Erik Jones


On Dec 18, 2007, at 8:26 PM, Kevin Hunter wrote:


Hullo List,

This is aimed at everyone in this community who contributes to the
Postgres project, but especially at the core folks who continually  
make

this community great through energy, time, money, responses, and
what-have-you.

I see lots of Thank yous go by for this problem or that conundrum
solved, but I don't think I've seen a general thank you for all  
that all

y'all do.  (If I've missed them in the blur of emails I get, I do
apologize.)  Thank you for all that you do.

Thank you for the countless times you've saved my butt from my own
stupidity.  Thank you for the absolute awesome help you've been while
tracking down bugs.  Thank you for signing the NDA without fuss and
searching my DB for a bug that I may have caused.  Thank you for
answering my frustrated emails at some ungodly hour of the morning (do
you sleep?!).  Thank you for engaging others in discussion, sometimes
beyond the point of annoyance ... your enthusiasm and love for the
project is duly noted. Thank you for constantly preaching the right
way.  Thank you . . . well you get the drift.  I'll stop before this
turns into some chain-letter type ordeal.

The point is that I hope you realize just how much you all mean to the
community.


I'll second and third and ... everything he just said!  Over the past  
couple of years I've come to realize that PostgreSQL's greatest  
strength is that it is more than software, or even open source  
software -- the software is really just the gold nugget at the center  
of a vibrant community of individuals who help, explain things, explain  
things again, explain things again :), crack jokes, get to know each  
other and so much more.  To the core team:  thanks for continuing to  
make this possible, to everyone:  thanks for making it all happen!


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] thank you

2007-12-18 Thread Joshua D. Drake
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Wed, 19 Dec 2007 12:20:32 +0900
Paul Lambert [EMAIL PROTECTED] wrote:

 I'm just disappointed that I finish up work with my current employer
 on Friday and where I am going I won't get to work with PG anymore
 and thus won't have as much opportunity to interact with the PG
 community.
 
 That is until I can convince my new employer to realise the dark side
 of Microsoft SQL Server. :)

503-667-4564 extension 101... I am available anytime after 9:00am PST

Sincerely,

Joshua D. Drake

- -- 
The PostgreSQL Company: Since 1997, http://www.commandprompt.com/ 
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
SELECT 'Training', 'Consulting' FROM vendor WHERE name = 'CMD'


-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFHaJCmATb/zqfZUUQRAsPqAJ46VFZBcuEyg2XpYdeAlZkUYRqARQCfS9ly
A6fL44YcDr+wSI7LCbRNmrk=
=spdE
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] thank you

2007-12-18 Thread brian

Joshua D. Drake wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Wed, 19 Dec 2007 12:20:32 +0900
Paul Lambert [EMAIL PROTECTED] wrote:


That is until I can convince my new employer to realise the dark side
of Microsoft SQL Server. :)



503-667-4564 extension 101... I am available anytime after 9:00am PST

Sincerely,

Joshua D. Drake


Now THAT is an upstanding gesture. Well done, sir!

(AND he arrives to work early!)

all the best,
brian

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] thank you

2007-12-18 Thread Colin Wetherbee

Gregory Williamson wrote:

Kevin H. wrote on Tue 12/18/2007 7:26 PM

  Hullo List,
 
  This is aimed at everyone in this community who contributes to the
  Postgres project, but especially at the core folks who continually make
  this community great through energy, time, money, responses, and
  what-have-you.

...snipped...

  The point is that I hope you realize just how much you all mean to the
  community.
 

+1


+1 from me, too.  I haven't been here long, but I enjoy reading the 
threads, and I've gained quite a lot of insight from the few threads in 
which I've been involved.  I don't even use databases professionally -- 
I'm a stock trader -- but, I have been interested in PostgreSQL for 
personal projects for a long time.  I'm glad I'm finally involved in a 
community where I can learn from other people's experience and interact 
with those people on a sort of informal but still professional level. 
Thanks go out to Tom, Joshua, Merlin, and the rest of the big names on 
the list.


Colin


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] foreign key constraint, planner ignore index.

2007-12-18 Thread Andrew Nesheret

Greetings, List.

Environment: Linux, PostgreSQL 8.3beta4 or PostgreSQL 8.2.4; same results on both.


Billing database with two tables.

1. Small table with nodes (23 rows)
inms= \d nodes
                                 Table "public.nodes"
   Column    |          Type          |                     Modifiers
-------------+------------------------+---------------------------------------------------
 id          | integer                | not null default nextval('nodesidseq'::regclass)
 description | character varying(256) |
 identifier  | character varying(256) | not null
Indexes:
    "nodes_pkey" PRIMARY KEY, btree (id)
    "NodeIdentifierIndex" UNIQUE, btree (identifier)

inms= analyze verbose nodes;
INFO:  analyzing "public.nodes"
INFO:  "nodes": scanned 1 of 1 pages, containing 23 live rows and 4 dead rows; 23 rows in sample, 23 estimated total rows


2. Large table with collected traffic ( 15795383 rows )
inms= \d sf_ipv4traffic
                Table "public.sf_ipv4traffic"
       Column        |           Type           | Modifiers
---------------------+--------------------------+-----------
 timeframe           | integer                  | not null
 timemark            | timestamp with time zone | not null
 node                | integer                  | not null
 source_address      | bytea                    | not null
 source_port         | integer                  | not null
 destination_address | bytea                    | not null
 destination_port    | integer                  | not null
 protocol_type       | integer                  | not null
 octets_counter      | bigint                   |
 packets_counter     | integer                  |
Indexes:
    "sf_ipv4traffic_pkey" PRIMARY KEY, btree (timeframe, timemark, node, source_address, source_port, destination_address, destination_port, protocol_type)
    "fki_nodes" btree (node)
    "sf_ipv4traffic_idx" btree (source_port, timeframe, source_address)
    "sf_ipv4traffic_idx1" btree (timeframe, node, timemark)
    "sf_ipv4traffic_idx3" btree (destination_address, destination_port, timeframe)
    "sf_ipv4traffic_idx4" btree (protocol_type, timeframe)
Foreign-key constraints:
    "nodes" FOREIGN KEY (node) REFERENCES nodes(id) ON UPDATE RESTRICT ON DELETE RESTRICT
    "sf_ipv4traffic_timeframe_fkey" FOREIGN KEY (timeframe) REFERENCES sf_timeframes(id) ON UPDATE CASCADE ON DELETE RESTRICT

inms= ANALYZE verbose sf_ipv4traffic;
INFO:  analyzing "public.sf_ipv4traffic"
INFO:  "sf_ipv4traffic": scanned 3000 of 162839 pages, containing 291000 live rows and 0 dead rows; 3000 rows in sample, 15795383 estimated total rows


The problem: when I delete a node from the nodes table, the foreign-key check does not use the index on sf_ipv4traffic(node).

Test script:
begin;
--set enable_seqscan to off;
delete from decimalnodeattributes where node=2003;
delete from stringnodeattributes where node=2003;
delete from datenodeattributes where node=2003;
delete from topology where fromnode=2003 or tonode=2003;
explain analyze delete from nodes where id=2003;
rollback;

                                           QUERY PLAN
------------------------------------------------------------------------------------------------
 Seq Scan on nodes  (cost=0.00..1.29 rows=1 width=6) (actual time=0.046..0.047 rows=1 loops=1)
   Filter: (id = 2003)
 Trigger for constraint booleannodeattributes_node_fkey: time=1.315 calls=1
 Trigger for constraint datenodeattributes_node_fkey: time=0.361 calls=1
 Trigger for constraint decimalnodeattributes_node_fkey: time=0.288 calls=1
 Trigger for constraint node: time=28.109 calls=1
 Trigger for constraint nodes: time=71011.395 calls=1
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 Trigger for constraint snmp_nodes_access_nodeid_fkey: time=372.504 calls=1
 Trigger for constraint stringnodeattributes_node_fkey: time=7.008 calls=1
 Trigger for constraint topology_fromnode_fkey: time=0.368 calls=1
 Trigger for constraint topology_tonode_fkey: time=0.274 calls=1
 Total runtime: 71430.159 ms
(12 rows)

The problem is this trigger -- the check for the "nodes" FK constraint on sf_ipv4traffic:
Trigger for constraint nodes: time=71011.395 calls=1


But if I turn off seqscan and run the same test:

begin;
set enable_seqscan to off; -- !!!
delete from 
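
One way to see what the RI trigger for the "nodes" constraint is likely doing is to prepare an equivalent parameterized check by hand and EXPLAIN it. This is only a sketch: it assumes the database is named inms and that the trigger's check is roughly the query below (the real RI query may differ in detail); the table and column names are taken from the \d output above.

psql inms <<'SQL'
-- hypothetical reconstruction of the FK check run when a row in nodes is deleted
PREPARE fk_check(integer) AS
    SELECT 1 FROM ONLY sf_ipv4traffic x WHERE node = $1 FOR SHARE OF x;
-- a prepared statement is planned without knowing the parameter value,
-- much like the trigger's cached plan
EXPLAIN EXECUTE fk_check(2003);
DEALLOCATE fk_check;
SQL

If the prepared (generic) plan shows a seq scan on sf_ipv4traffic while a plain EXPLAIN with a literal node = 2003 uses fki_nodes, that points at the cached parameterized plan used by the trigger rather than at the index itself.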

[GENERAL] Password as a command line argument to createuser

2007-12-18 Thread Jane Ren
Hi,

I need to write a script that creates a new user with a password
automatically.

Is there a way I can specify the password as a command line argument to
createuser?

It looks like createuser does not read the password from stdin, but from /dev/tty.

Thanks

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Password as a command line argument to createuser

2007-12-18 Thread Tom Lane
Jane Ren [EMAIL PROTECTED] writes:
 Is there a way I can specify the password as a command line argument to
 createuser?

No, and it would be a really bad idea if you could, as the password
would be exposed to everyone else on the machine (via ps) while
createuser runs.

There are various ways to do this securely, but putting the password
on a program's command line isn't one of them.  I'd suggest looking
at how psql's \password command does it.

regards, tom lane
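
For illustration, a minimal sketch of that idea -- my reading of it, not necessarily the exact approach being recommended: hash the password on the client the way \password does ("md5" followed by md5 of the password concatenated with the user name), so the cleartext never has to appear on a command line or travel to the server. Assumes a Bourne-style shell with builtin printf and GNU md5sum; the user name and password here are placeholders.

#!/bin/sh
user=foo        # placeholder role name
pass=secret     # placeholder password -- in a real script, read it without echoing
# compute the md5-format password hash client-side
hash="md5$(printf '%s%s' "$pass" "$user" | md5sum | awk '{print $1}')"
# printf is a shell builtin, so nothing sensitive shows up in ps output;
# the server recognizes the md5 prefix and stores the value as-is
printf "CREATE USER \"%s\" ENCRYPTED PASSWORD '%s';\n" "$user" "$hash" | psql -d postgres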

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Password as a command line argument to createuser

2007-12-18 Thread Joshua D. Drake

Jane Ren wrote:

Hi,

I need to write a script that creates a new user with a password
automatically.

Is there a way I can specify the password as a command line argument to
createuser?



Since you have access to the shell, use psql -U user -c "create role ..."

Joshua D. Drake


It looks like postgres does not read from stdin, but from /dev/tty.

Thanks

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Password as a command line argument to createuser

2007-12-18 Thread A. Kretschmer
On Tue, 18.12.2007, at 22:04:13 -0800, Jane Ren wrote the following:
 Hi,
 
 I need to write a script that creates a new user with a password
 automatically.
 
 Is there a way I can specify the password as a command line argument to
 createuser?

From a unix shell? You can call psql with -c "your command".

Try this:

psql -U ... database -c "create user foo password 'secret';"



Regards, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Password as a command line argument to createuser

2007-12-18 Thread Greg Smith

On Wed, 19 Dec 2007, A. Kretschmer wrote:


psql -U ... database -c "create user foo password 'secret';"


This seems like a reasonable example, but it will also show the password 
you're assigning on the command line to anybody who happens to run ps, 
which is the reason why this isn't allowed by createuser in the first 
place.


In your typical shell nowadays the echo command is a built-in one--it 
executes directly rather than calling a separate echo binary, so it won't 
leak what you tell it onto a command line.  That means this line in a 
script would be simplest way to do this that's not completely insecure:


echo "create user foo password 'secret'" | psql ...

This is not recommended on the command line (I think other people can 
still see the whole thing), but in a script I believe others just see the 
psql executing against standard input.


Of course you need the surrounding script to not do the wrong thing 
either, where the wrong thing includes any approach where you put the 
password on the command line.  Last time I had to do a batch creation of a 
bunch of accounts I put them into a file with the format 
username:password, read that directly from the shell (a good sample to 
borrow from for that part is 
http://www.askdavetaylor.com/how_do_i_read_lines_of_data_in_a_shell_script.html 
) and used echo | psql as above to create them.  This is not an approach 
I'd want to use as a long-term tool, but for hacking something together 
it's not an awful way to do it.
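
For concreteness, a rough sketch of that batch approach -- with assumptions of my own: a users.txt containing one username:password per line, passwords free of colons and single quotes, and a connection that doesn't prompt for a password (e.g. via ~/.pgpass):

#!/bin/sh
# read username:password pairs and feed CREATE USER statements to psql's stdin
while IFS=: read -r user pass; do
    # echo is a shell builtin, so the password stays off the ps-visible command line
    echo "CREATE USER \"$user\" PASSWORD '$pass';" | psql -d postgres
done < users.txt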


Like all questions with security implications, I highly recommend you 
believe nothing I said above and confirm each suggestion through your own 
research and testing.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings