Re: [GENERAL] Function returning SETOF

2005-12-02 Thread Terry Lee Tucker
Simpler is better ;o)

Thanks for the input...

On Thursday 01 December 2005 10:31 pm, David Fetter saith:
 On Thu, Dec 01, 2005 at 12:32:02PM -0500, Terry Lee Tucker wrote:
  List,
 
  I have a simple function:

 I have a simpler one :)

 CREATE OR REPLACE FUNCTION split_to_rows(TEXT, TEXT) /* Descriptive name */
 RETURNS SETOF TEXT
 STRICT
 LANGUAGE sql
 AS $$
 SELECT (string_to_array($1, $2))[s.i]
 FROM generate_series(
 1,
 array_upper(string_to_array($1, $2), 1)
 ) AS s(i);
 $$;
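
For reference, a quick usage sketch (the input values are made up for
illustration):

SELECT * FROM split_to_rows('a,b,c', ',');
-- returns three rows: a, b, c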

 Cheers,
 D


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] PostgresSQL Halting System Boot

2005-12-02 Thread Richard Huxton

Matias Silva wrote:

Hi everyone, I'm totally new here and I got a little issue.
I just got done installing PostgreSQL 8.1.  I have configured PostgreSQL
to start automatically upon system start via an init script.  Problem
is that when CentOS boots it halts on "Starting PostgreSQL".  I have
to press the Enter key to resume the boot.  It doesn't time out either.

How do I fix this?

I checked the PostgreSQL logs and the only thing that stands out is
"Do you want to choose a different one? [n] Your default context is
user_u:system_r:unconfined_t.". However, when I search Google, I find
nothing.


This sounds like something to do with SELinux, which Red Hat builds into 
their more recent offerings. I'm afraid I don't know enough about it to 
give you specific advice, but you should be able to find out more by 
checking the archives and googling.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] postmaster / resolv.conf / dns problem

2005-12-02 Thread Richard Huxton

Cott Lang wrote:

Within 5 minutes, one server would not accept new remote connections. I
could log in fine w/ psql locally. 

This is pretty bizarre ... offhand I would not have thought that the
postmaster depended on DNS service at all.  Were you maybe using DNS
names instead of IP addresses in pg_hba.conf?  What exactly does
"would not accept" mean --- what was the exact error message,
and was there anything in the postmaster log?



I'm using only IP addresses in pg_hba.conf.

There was nothing in the postmaster log indicating a problem.

The only thing I saw strange was multiple postmasters spawning and
disappearing.

The errors I got in the JDBC drivers were the connection pool timing out
trying to get a connection, so it's possible they were working, just
taking horribly long to connect.  The timeout for Nagios monitoring PG was
10 seconds; pools were 20 seconds. In three years, I've probably seen 3
time outs. :)


Could it be name-lookups for logging purposes? I've been caught out by 
that elsewhere.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] user_write_lock_oid questions

2005-12-02 Thread Marian Naghen



I'm new to PostgreSQL and I'm a little confused.

I have 2 questions about using the user_write_lock functions:

1. If a user sets a write lock and then, for some reason (bad connection,
app crashes, etc.), disconnects from the server without releasing it, what
happens to the lock? Will it still be held?

2. Is there a way to set/obtain information about the user who set a lock?


thanks.


[GENERAL] deadlock on the same relation

2005-12-02 Thread Francesco Formenti - TVBLOB S.r.l.

Hi all,
I have a problem about deadlock. I have several stored procedures; only 
one of them uses an ACCESS EXCLUSIVE LOCK on a table; however, all the 
stored procedures can access that table, using SELECT, INSERT or UPDATE.
The stored procedures are called by different processes of an external 
application.


In a non-predictable way, I obtain error messages like this one:

2005-11-29 18:23:06 [12771] ERROR:  deadlock detected
DETAIL:  Process 12771 waits for AccessExclusiveLock on relation 26052 
of database 17142; blocked by process 12773.
   Process 12773 waits for AccessExclusiveLock on relation 26052 of 
database 17142; blocked by process 12771.

CONTEXT:  PL/pgSQL function set_session_box_status line 7 at SQL statement

I don't understand what's happening; is it possible that a deadlock is 
detected on the same relation (that is, the stored procedure using the 
ACCESS EXCLUSIVE LOCK mentioned before) ?


I'm using Postgres 7.4.7 on Debian Sarge.

Thanks!
Francesco


--

Francesco Formenti - TVBLOB S.r.l.
Software Engineer

Via G. Paisiello, 9 20131 Milano, Italia
-
Phone +39 02 36562440
Fax +39 02 20408347
Web Site http://www.tvblob.com
E-mail [EMAIL PROTECTED]



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] encoding problem

2005-12-02 Thread Volkan YAZICI
On 12/1/05, marcelo Cortez [EMAIL PROTECTED] wrote:
  i have problems with encodings

PostgreSQL's case conversion functions are a little bit buggy.
(Especially for Latin-N and Unicode encodings.) I've prepared a patch
[1] to fix a similar problem for the Latin5 encoding. It hasn't been tested
much, but it works for your problem too:

template1=# CREATE DATABASE testLatin ENCODING = 'LATIN9';
CREATE DATABASE
template1=# \c testLatin
You are now connected to database testLatin.
testLatin=# CREATE TABLE test
testLatin-# ( nombre varchar(20));
CREATE TABLE
testLatin=# COPY test FROM stdin;
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
 maricón
 ñañoso pícaro
 \.
testLatin=# select  * from test where upper(nombre) like 'ÑA%';
nombre
---
 ñañoso pícaro
(1 row)

[1] You can find related patch (and discussion) @
http://archives.postgresql.org/pgsql-patches/2005-11/msg00173.php
address. It fixes case conversion problems for ILIKE, upper() and
lower() functions.


Regards.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] 'AS' column-alias beign ignored on outer select

2005-12-02 Thread Markus Wollny
Hi!

Is this the expected behaviour?

 select ID
  , USK_RATING AS USK
  from (
   select ID
, USK_RATING
   from MAIN_SONY_PS2_GAME
   where ID = 101400
   limit 1
   )
   as PRODUCT_LIST
 limit 1;

   id   | usk_rating
--------+------------
 101400 |
(1 row)

Note the column-header being labeled 'usk_rating', not 'usk'. Obviously
the 'AS' column alias of the outer select is being ignored in the
resultset.

 select ID
  , USK
  from (
   select ID
, USK_RATING AS USK
   from MAIN_SONY_PS2_GAME
   where ID = 101400
   limit 1
   )
   as PRODUCT_LIST
 limit 1;

   id   | usk
--------+-----
 101400 |
(1 row)

If the column alias is declared in the subselect, the alias works as
expected.

 select version();
                               version
----------------------------------------------------------------------
 PostgreSQL 8.1.0 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2

Is this working as expected or is this a bug?

Kind regards

   Markus

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PostgresSQL Halting System Boot

2005-12-02 Thread Jerry Sievers
Matias Silva [EMAIL PROTECTED] writes:

 How do I fix this?
 
 I check the PostgreSQL logs and the only thing that stands out is
 Do you want to choose a different one? [n]Your default context is
 user_u:system_r:unconfined_t.. However when I search Google, I find
 nothing.

Dunno the actual cause.  Someone else will.

Ok, so if you start the server again after booting you're going to get
prompted the same way.  Try starting it with logging NOT directed to a
file.

$ su - postgres
$ pg_ctl start

Should prompt you.  Now see if redirecting the null device to stdin lets
the server start without waiting.

$ pg_ctl start </dev/null

Or...

$ echo n | pg_ctl start

If any of these workarounds suffice, edit them into your
/etc/init.d/postgresql script.

HTH

-- 
---
Jerry Sievers   305 854-3001 (home)   WWW ECommerce Consultant
305 321-1144 (mobile)   http://www.JerrySievers.com/

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Table design

2005-12-02 Thread Sean Davis
This might be a bit off-topic, but I'm curious what folks would do with this
situation:

I have about 6 different tab-delimited file types, all of which store
similar information (microarray gene expression).  However, the files come
from different manufacturers, so each has slightly different fields with
different meanings.  However, there are a few columns that are shared.  I
may need to add table formats in the future (as we get more manufacturers).
I can think of at least three ways to go about storing these data:

1) Create a single table that has as many columns as needed for ALL formats
and make manufacturer-specific views, naming columns in the view as
appropriate.  Then put rules on the view for inserts, updates, etc.  This is
my first choice, I think, but adding a new manufacturer's format means
creating a new view and possibly adding columns; some columns may be NULL for
large portions of the table (a rough sketch of this approach follows below).

2) Use postgres inheritance, but even shared columns in our data may have
different names depending on the manufacturer, so there may be views
involved anyway.

3) Use a fully-normalized strategy that stacks each column into one very
long table--this would be my last choice.
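
For option 1, a rough sketch of the single-table-plus-views idea (the table,
column, view and rule names here are invented for illustration):

CREATE TABLE expression_data (
    probe           text,
    signal          numeric,     -- shared by all manufacturers
    vendor_a_flag   text,        -- used only by manufacturer A's format
    vendor_b_score  numeric      -- used only by manufacturer B's format
);

CREATE VIEW vendor_a AS
    SELECT probe, signal, vendor_a_flag AS flag
    FROM expression_data;

CREATE RULE vendor_a_insert AS ON INSERT TO vendor_a
    DO INSTEAD
    INSERT INTO expression_data (probe, signal, vendor_a_flag)
    VALUES (NEW.probe, NEW.signal, NEW.flag);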

Thanks for any insight.

(For replies, please try to reply to me directly as well as the list as I
just get digests right now).

Thanks,
Sean


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Table design

2005-12-02 Thread Gevik
Perhaps this is not a postgresql solution, but you could:

1. first design a common data structure in postgresql.
2. then convert each type of tab-delimited file to a basic xml
structure.
3. map the structures to the common data structure using xslt.

I hope this helps,


 This might be a bit off-topic, but I'm curious what folks would do with
 this
 situation:

 I have about 6 different tab-delimited file types, all of which store
 similar information (microarray gene expression).  However, the files come
 from different manufacturers, so each has slightly different fields with
 different meanings.  However, there are a few columns that are shared.  I
 may need to add table formats in the future (as we get more
 manufacturers).
 I can think of at least three ways to go about storing these data:

 1) Create a single table that has as many columns as needed for ALL
 formats
 and make manufacturer-specific views, naming columns in the view as
 appropriate.  Then put rules on the view for inserts, updates, etc.  This
 is
 my first choice, I think, but adding a new manufacturer's format means
 creating a new view and possibly adding columns; some columns may NULL for
 large portions of the table.

 2) Use postgres inheritance, but even shared columns in our data may have
 different names depending on the manufacturer, so there may be views
 involved anyway.

 3) Use a fully-normalized strategy that stacks each column into one very
 long table--this would be my last choice.

 Thanks for any insight.

 (For replies, please try to reply to me directly as well as the list as I
 just get digests right now).

 Thanks,
 Sean


 ---(end of broadcast)---
 TIP 4: Have you searched our list archives?

http://archives.postgresql.org




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Table design

2005-12-02 Thread Adam Witney

Hi Sean,

We use something similar to approach 1) to store our microarray data.

We have a data table that has a few specific columns (signal median, bkg
median etc.) as these exist in all the file formats, plus some generic
columns for the rest of the data fields.

Then we have a definitions table that maps the column header from the file
format to the column name in the database.
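
A minimal sketch of that layout (the table and column names are invented for
illustration, not the actual definitions Adam offers to send):

CREATE TABLE array_data (
    id            serial PRIMARY KEY,
    manufacturer  text NOT NULL,
    signal_median numeric,    -- columns present in every file format
    bkg_median    numeric,
    extra1        text,       -- generic columns for format-specific fields
    extra2        text
);

CREATE TABLE column_definitions (
    manufacturer  text NOT NULL,
    file_column   text NOT NULL,    -- header as it appears in the file
    db_column     text NOT NULL,    -- column it maps to in array_data
    PRIMARY KEY (manufacturer, file_column)
);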

It seems to work well for us. I can send you the table definitions if they
are any use to you?

Cheers

Adam

 This might be a bit off-topic, but I'm curious what folks would do with this
 situation:
 
 I have about 6 different tab-delimited file types, all of which store
 similar information (microarray gene expression).  However, the files come
 from different manufacturers, so each has slightly different fields with
 different meanings.  However, there are a few columns that are shared.  I
 may need to add table formats in the future (as we get more manufacturers).
 I can think of at least three ways to go about storing these data:
 
 1) Create a single table that has as many columns as needed for ALL formats
 and make manufacturer-specific views, naming columns in the view as
 appropriate.  Then put rules on the view for inserts, updates, etc.  This is
 my first choice, I think, but adding a new manufacturer's format means
 creating a new view and possibly adding columns; some columns may NULL for
 large portions of the table.
 
 2) Use postgres inheritance, but even shared columns in our data may have
 different names depending on the manufacturer, so there may be views
 involved anyway.
 
 3) Use a fully-normalized strategy that stacks each column into one very
 long table--this would be my last choice.
 
 Thanks for any insight.
 
 (For replies, please try to reply to me directly as well as the list as I
 just get digests right now).
 
 Thanks,
 Sean
 
 
 ---(end of broadcast)---
 TIP 4: Have you searched our list archives?
 
  http://archives.postgresql.org


-- 
This message has been scanned for viruses and
dangerous content by MailScanner, and is
believed to be clean.


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Unicode Corruption and upgrading to 8.0.4. to 8.1

2005-12-02 Thread Howard Cole
Hi everyone, I have a problem with corrupt UTF-8 sequences in my 8.0.4 
dump which is preventing me from upgrading to 8.1 - which spots the 
errors and refuses to import the data. Is there some SQL command that I 
can use to fix or cauterise the sequences in the 8.0.4 database before 
dumping to 8.1?


I think the problem arose using invalid client encodings - which were 
not rejected prior to 8.1.


Regards,

Howard Cole
www.selestial.com

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] postmaster / resolv.conf / dns problem

2005-12-02 Thread Tom Lane
Cott Lang [EMAIL PROTECTED] writes:
 What exactly does
 "would not accept" mean --- what was the exact error message,
 and was there anything in the postmaster log?

 There was nothing in the postmaster log indicating a problem.

 The only thing I saw strange was multiple postmasters spawning and
 disappearing.

 The errors I got in the JDBC drivers was the connection pool timing out
 trying to get a connection, so it's possible they were working, just
 taking horribly long to connect.  Timeouts for Nagios monitoring PG was
 10 seconds; pools were 20 seconds.

In that case, the multiple postmasters were probably backends spawned
in response to JDBC connection attempts, which went away when they
noticed the client had disconnected.  These symptoms seem consistent
with the idea that backend startup was taking a real long time, which
given the context has to be blamed on a DNS lookup timing out.  Do you
have log_hostname enabled?  If so, the backends would be trying to do
reverse lookups on the IP address of their connected client, and we
could explain all the facts with the assumption that that lookup was
encountering a 30-second-or-so timeout.
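
A quick way to check this (a sketch; assumes a superuser psql session):

SHOW log_hostname;
-- if this reports 'on', setting log_hostname = off in postgresql.conf and
-- reloading the postmaster avoids the reverse lookup on each connection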

Why this should be happening after you change resolv.conf isn't real
clear to me, but in any case if you have a gripe about it you should
gripe to your libc or libbind supplier, not us.  Whatever the problem
is, it's down inside the getnameinfo() library routine.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] deadlock on the same relation

2005-12-02 Thread Tom Lane
Francesco Formenti - TVBLOB S.r.l. [EMAIL PROTECTED] writes:
 I have a problem about deadlock. I have several stored procedures; only 
 one of them uses ACCESS EXCLUSIVE LOCK on a table; however, all the 
 stored procedures can access to that table, using SELECT, INSERT or UPDATE.
 The stored procedures are called by different processes of an external 
 application.

 In a non-predictable way, I obtain error messages like this one:

 2005-11-29 18:23:06 [12771] ERROR:  deadlock detected
 DETAIL:  Process 12771 waits for AccessExclusiveLock on relation 26052 
 of database 17142; blocked by process 12773.
 Process 12773 waits for AccessExclusiveLock on relation 26052 of 
 database 17142; blocked by process 12771.
 CONTEXT:  PL/pgSQL function set_session_box_status line 7 at SQL statement

Probably you have been careless about avoiding lock upgrade
situations.  If you are going to take an exclusive lock on a relation,
it is dangerous to already hold a non-exclusive lock on the same
relation, because that prevents anyone else from getting an exclusive
lock; thus if another process is doing the exact same thing you are in
a deadlock situation.

Since SELECT/INSERT/UPDATE take non-exclusive locks, you can't do one of
those and later ask for exclusive lock within the same transaction.
The general rule is get the strongest lock you will need first.
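
A minimal sketch of that ordering (the table and column names are hypothetical):

BEGIN;
-- take the strongest lock needed up front, before any weaker lock
LOCK TABLE box_status IN ACCESS EXCLUSIVE MODE;
-- the statements that follow acquire weaker locks, so no lock upgrade occurs
SELECT status FROM box_status WHERE box_id = 1;
UPDATE box_status SET status = 'active' WHERE box_id = 1;
COMMIT;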

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] 'AS' column-alias beign ignored on outer select

2005-12-02 Thread Tom Lane
Markus Wollny [EMAIL PROTECTED] writes:
 Is this the expected behaviour?

No, it's a bug introduced in 8.1.  It's fixed in CVS tip.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Table design

2005-12-02 Thread Guy Rouillier
Sean Davis wrote:
 This might be a bit off-topic, but I'm curious what folks would do
 with this 
 situation:
 
 I have about 6 different tab-delimited file types, all of which store
 similar information (microarray gene expression).  However, the files
 come from different manufacturers, so each has slightly different
 fields with different meanings.  However, there are a few columns
 that are shared.  I may need to add table formats in the future (as
 we get more manufacturers). I can think of at least three ways to go
 about storing these data:  
 
 1) Create a single table that has as many columns as needed for ALL
 formats and make manufacturer-specific views, naming columns in the
 view as appropriate.  Then put rules on the view for inserts,
 updates, etc.  This is my first choice, I think, but adding a new
 manufacturer's format means creating a new view and possibly adding
 columns; some columns may NULL for large portions of the table. 
 
 2) Use postgres inheritance, but even shared columns in our data may
 have different names depending on the manufacturer, so there may be
 views involved anyway.  

I'm unclear if this is just a naming issue, or if the fields in the
files have different meaning.  If it's just a case that supplier A names
a field "foo" while supplier B names a field with the same meaning
"bar", I would think you'd want to coalesce all these incoming files
into a single table containing columns that have meaning to your
organization.  The effort then just becomes one of mapping incoming
fields into the proper columns, but the end result would be something
much more consistent and meaningful to your organization.

If on the other hand all these incoming fields have different meaning
and you need to keep them stored separately, I would look into option
(4): just keep a separate table for each supplier, since you said that
even shared fields may have different meaning; then use a view over all
the tables to answer any queries across suppliers.
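
A rough sketch of option (4) (supplier and column names are made up):

CREATE TABLE supplier_a_data (probe text, signal numeric, a_specific text);
CREATE TABLE supplier_b_data (probe text, signal numeric, b_specific text);

-- one view exposes the shared columns across all suppliers
CREATE VIEW all_supplier_data AS
    SELECT 'A' AS supplier, probe, signal FROM supplier_a_data
    UNION ALL
    SELECT 'B' AS supplier, probe, signal FROM supplier_b_data;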

But I definitely wouldn't let the way your suppliers name their fields
in the files they send you drive how you design your database.  That's
just a data mapping issue which is easily solved during data import.

 
 3) Use a fully-normalized strategy that stacks each column into one
 very long table--this would be my last choice. 
 
 Thanks for any insight.
 
 (For replies, please try to reply to me directly as well as the list
 as I just get digests right now). 
 
 Thanks,
 Sean
 
 
 ---(end of
 broadcast)--- 
 TIP 4: Have you searched our list archives?
 
http://archives.postgresql.org



-- 
Guy Rouillier


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Table design

2005-12-02 Thread Sean Davis
On 12/2/05 10:21 AM, Guy Rouillier [EMAIL PROTECTED] wrote:

 Sean Davis wrote:
 This might be a bit off-topic, but I'm curious what folks would do
 with this 
 situation:
 
 I have about 6 different tab-delimited file types, all of which store
 similar information (microarray gene expression).  However, the files
 come from different manufacturers, so each has slightly different
 fields with different meanings.  However, there are a few columns
 that are shared.  I may need to add table formats in the future (as
 we get more manufacturers). I can think of at least three ways to go
 about storing these data:
 
 1) Create a single table that has as many columns as needed for ALL
 formats and make manufacturer-specific views, naming columns in the
 view as appropriate.  Then put rules on the view for inserts,
 updates, etc.  This is my first choice, I think, but adding a new
 manufacturer's format means creating a new view and possibly adding
 columns; some columns may NULL for large portions of the table.
 
 2) Use postgres inheritance, but even shared columns in our data may
 have different names depending on the manufacturer, so there may be
 views involved anyway.
 
 I'm unclear if this is just a naming issue, or if the fields in the
 files have different meaning.  If it's just a case that supplier A names
 a field foo while supplier B names a field with the same meaning
 bar, I would think you'd want to coalesce all these incoming files
 into a single table containing columns that have meaning to your
 organization.  The effort then just becomes one of mapping incoming
 fields into the proper columns, but the end result would be something
 much more consistent and meaningful to your organization.
 
 If on the other hand all these incoming fields have different meaning
 and you need to keep them stored separately, I would look into option
 (4): just keep a separate table for each supplier, since you said that
 even shared fields may have different meaning; then use a view over all
 the tables to answer any queries across suppliers.

The columns are a mixture of both cases, hence the thought about
inheritance.  However, the column names do have meaning for users of the
manufacturer's products, so they need to be present, also.  I'm not the
first to tackle this problem, and most use a common table structure for all
the data, column naming aside.

Sean


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Slow COUNT

2005-12-02 Thread Andrew Schmidt
And InnoDB is actually quite a bit worse than PostgreSQL (ignoring the 
slight difference in row numbers)


InnoDB:

olp_live> select count(*) from team_players;
+--+
| count(*) |
+--+
|   465004 |
+--+
1 row in set (1.54 sec)


PostgreSQL:

olp_live=# select count(*) from team_players;
++
| count  |
++
| 464747 |
++
(1 row)

Time: 934.935 ms


Rodrigo Gonzalez wrote:


Poul,

2 things... first, why do you think it will have an impact on inserts?

And the second one... use InnoDB tables in MySQL, and you will have 
the same as with PostgreSQL; it's because of MVCC


Best regards

Rodrigo

Poul Møller Hansen wrote:

I can see it has been discussed before, why COUNT(*) tends to be slow 
on PostgreSQL compared with e.g. MySQL.
As far as I understood it has something to do with missing numbering 
on the rows in the indexes, and that there are plenty of reasons 
not to implement that in PostgreSQL, though I haven't found an explanation.

However, I can imagine it will have an impact on inserts.

My question is, which statements can I use to count the rows faster?
32 secs compared to 10 ms!


Thanks,
 Poul


db=# explain analyze select count(*) from my.table;
                                                      QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=60008.28..60008.28 rows=1 width=0) (actual time=32028.469..32028.474 rows=1 loops=1)
   ->  Seq Scan on table  (cost=0.00..54962.62 rows=2018262 width=0) (actual time=14.492..19592.014 rows=2018252 loops=1)
 Total runtime: 32028.750 ms
(3 rows)

db=# explain analyze select count(*) from my.table where node = '1234567890';
                                                                QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=50023.14..50023.14 rows=1 width=0) (actual time=1790.967..1790.971 rows=1 loops=1)
   ->  Index Scan using idx_node_date_id on table  (cost=0.00..49968.76 rows=21753 width=0) (actual time=80.218..1570.747 rows=34648 loops=1)
         Index Cond: ((node)::text = '1234567890'::text)
 Total runtime: 1792.084 ms
(4 rows)

mysql> select count(*) from table;
+--+
| count(*) |
+--+
|  2018160 |
+--+
1 row in set (0.01 sec)

mysql> select count(*) from table where node = '1234567890';
+--+
| count(*) |
+--+
|34648 |
+--+
1 row in set (0.23 sec)



---(end of broadcast)---
TIP 6: explain analyze is your friend



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings

.



--
_
G l o b a l   D i a g n o s t i c s   I n c.
Andrew Schmidt          t. 416-304-0049 x206
[EMAIL PROTECTED]       f. 866-697-8726
_ 



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Slow COUNT

2005-12-02 Thread Rodrigo Gonzalez

I answer here so each one can help you.

Cannot understand what you mean... index is numbered? Are you talking 
about autoincrement as it's called in mysql? Use sequences, please.


Best regards

Rodrigo

Poul Møller Hansen wrote:


Rodrigo Gonzalez wrote:


Poul,

2 things... first, why do you think it will have an impact on inserts?

And the second one... use InnoDB tables in MySQL, and you will have 
the same as with PostgreSQL; it's because of MVCC


Best regards

Rodrigo



If the index is numbered, I assume, or rather guess, that the indexes 
must be renumbered at inserts.


I was not trying to favour MySQL over PostgreSQL, I prefer PostgreSQL 
to MySQL at any time; I was just wondering why it was much faster in 
MySQL and how I can achieve the same result in PostgreSQL?



Regards, Poul




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] pg_restore error

2005-12-02 Thread Scott Frankel


Using custom output format instead of tar output (-Fc instead of -Ft)  
appears to work without error.  My initial tests with the backup db  
seem to match the original db.


The compressed output files are also a lot smaller ;)

Note that the problematic tar files were never transferred between  
platforms.  They are written to a local disk and are accessed  
directly from that location.


Thanks for the info!
Scott


On Dec 1, 2005, at 8:02 PM, Tom Lane wrote:


Scott Frankel [EMAIL PROTECTED] writes:

Yes, the tar file contains a file called 1765.dat.  A `cat` of that
file shows nothing more than an empty line (i.e.: a carriage return).



-rw---   1 frankel  prod1 Nov 29 11:20 1765.dat



Extracting the archive, tar reported a lone zero block.  I don't
know what this refers to.


Hmm, how big is the tarfile, and would you be willing to send it to  
me?



I'm happy to either help debug -Ft or switch to -Fc in my scripts.  I
was under the impression, though, that -Ft was required to back up db's
with blobs.  I am storing some thumbnail jpg images in my db.


Either -Fc or -Ft can handle blobs ... and actually, in 8.1 the issue
is gone entirely because plain text pg_dump can too.


I'd also be interested to know if the pg_restore error is due to my
upgrade to postgres 8.1 or macosx 10.4.3.


Your guess is as good as mine at the moment.  One thought that  
comes to

mind --- did you move the tarfile across machines at any point, and if
so could it have been munged by a Unix/DOS newline conversion?

regards, tom lane

---(end of  
broadcast)---

TIP 4: Have you searched our list archives?

   http://archives.postgresql.org



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] How: single db, multiple users

2005-12-02 Thread P G
Greetings,

I am new to PostgreSQL and come from a DB2
background.  I currently have v8.0.3 up and running. 
So far, it looks great!

One area I am confused about is how to set up
databases and users.  I can currently create
databases, but I am having difficulty in assigning
multiple users to those databases.  For example, I have
set up one database for a large project.  There are
several users that need to have access to that
database: create tables, insert, delete, select,
update, etc.  The documentation talks about single
user-database associations but is not clear on how to
have multiple users on a single database.

Are there any best practices that I could follow in
settings these up for various projects and databases? 
Or if this is a simple thing to do, any pointers on
how to do it?










__ 
Yahoo! DSL – Something to write home about. 
Just $16.99/mo. or less. 
dsl.yahoo.com 


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] How: single db, multiple users

2005-12-02 Thread Eric E
Postgres' users are not specific to a database, they are shared across 
all databases on a particular server.


At the database level, you can assign creation and other permissions to 
users, but (AFAIK) you cannot deny users access to a database. However, you can 
effectively control users' access to a particular database by granting 
or denying them access to a schema within that database.
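
For example (a sketch with invented role and schema names):

-- let a group of users work inside one schema of the project database
CREATE SCHEMA projectx;
GRANT USAGE, CREATE ON SCHEMA projectx TO alice, bob;
-- and keep everyone else out of it
REVOKE ALL ON SCHEMA projectx FROM public;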


Hope that helps,

Eric

P G wrote:


Greetings,

I am new to PostgreSQL and come from an DB2
background.  I currently have v8.0.3 up and running. 
So far, it looks great!


One area I am confused about is how to set up
databases and users.  I can currently create
databases, but I am having difficulty in assigning
mutiple users to those databases.  For example, I have
set up one database for a large project.  There are
several users that need to have access to that
database: create tables, insert, delete, select,
update, etc.  The documentation talks about a single
user-database associations but is not clear on how to
have multiple users on a single database.

Are there any best practices that I could follow in
settings these up for various projects and databases? 
Or if this is a simple thing to do, any pointers on

how to do it?










__ 
Yahoo! DSL – Something to write home about. 
Just $16.99/mo. or less. 
dsl.yahoo.com 



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org

 




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] 8.1, OID's and plpgsql

2005-12-02 Thread Jaime Casanova
On 12/1/05, Uwe C. Schroeder [EMAIL PROTECTED] wrote:
 On Thursday 01 December 2005 10:24, Jaime Casanova wrote:
  On 12/1/05, Uwe C. Schroeder [EMAIL PROTECTED] wrote:
   Hi everyone,
  
   in 8.1 by default tables have no OID's anymore. Since OID's are 4 byte
   it's probably a good idea to discourage the use of them (they produced a
   lot of trouble in the past anyways, particularly with backup/restores
   etc)
  
   Now there's the issue with stored procs. A usual construct would be to
   ...
   ...
   INSERT xx;
   GET DIAGNOSTICS lastoid=RESULT_OID;
   SELECT  oid=lastoid;
   
   
  
   Is there anything one could sanely replace this construct with?
   I personally don't think that using the full primary key is really a good
   option. Say you have a 3 column primary key - one being a serial, the
   others for example being timestamps, one of them generated with default
   options. In order to retrieve the record I just inserted (where I don't
   know the serial value or the timestamp) I'd have to
  
   1) store the nextval of the sequence into a variable
   2) generate the timestamp and store it to a variable
   3) generate the full insert statement and retain the other values of the
   primary key
   4) issue a select to get the record.
  
   Personally I think this adds unneccessary overhead. IMHO this diminishes
   the use of defaults and sequences unless there is some easier way to
   retrieve the last record. I must be missing something here - am I ?
  
  UC
 
  If you are using a SERIAL in your PK, why do you need the other two
  fields? The serial will undoubtedly identify a record?
 
  you just retrieve the current value you inserted with currval
 

 No it doesn't. the serial identifies the record, the timestamp identifies the
 version/time-validity of the record.

you don't need valid_from to be part of the PK, just the serial...

 If a primary key needs to be something as simple as a serial then we could
 just keep the OID's as well and pump them up to 32 bytes.


No, because tables with OIDs are not the default anymore and it is not
recommended to use OIDs as a PK.

 currval() doesn't do it, since that will only identify a group of records since
 my PK is not just a simple int4.


currval() identifies the last value you inserted... that's one of the
reasons to prefer SERIAL over OIDs... an API to manage them...
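
A minimal sketch of that pattern (assuming the serial column of table xxx is
named id, so its implicit sequence is xxx_id_seq):

INSERT INTO xxx (field, valid_from) VALUES ('foo', now());
-- fetch the id generated by this session's INSERT
SELECT currval('xxx_id_seq');
-- or, inside a plpgsql function:
--     last_id := currval('xxx_id_seq');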

 sample:

 create table xxx (
  id serial,
  field varchar,
  ...
  ...
  valid_from timestamptz
 )

 PK is id,valid_from
 There may be several records with the same id but different valid_from dates.
 I'm storing a full timestamp, but the application only uses the date part -
 the timestamp is just to correct for timezones.


obviously you are using the serial datatype the wrong way... if you always let
the serial column insert its default, then there won't be several
records with the same id

 From the application logic a record is considered valid until a record with a
 newer valid_from is found. From that point on the records are referenced
 depending on several legal factors (this is commercial insurance, lots of
 lawyers and state/fed regulations)


and? you still don't need valid_from as part of the PK if id is a serial...

i think what you really want is to make id an integer and then let
valid_from as part of PK...

and make a select to retrieve the valid one

SELECT * FROM xxx WHERE id = ??? ORDER BY valid_from DESC LIMIT 1


 I guess I either stick to the OID's which work fine, or I just have to store
 the whole PK in variables and forget about defaults.

 Why not have something like the rowid in oracle?


UC





--
regards,
Jaime Casanova
(DBA: DataBase Aniquilator ;)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Reduce NUMERIC size by 2 bytes, reduce max length to 508 digits

2005-12-02 Thread Bruce Momjian

There is a patch under consideration for 8.2 that would reduce the
storage requirement for numeric values by two bytes, but also reduce the
range of allowed numeric values to 508 digits.  The current specified
maximum NUMERIC length is 1000 (NUMERIC(1000,0)), and the maximum
computational length is 4096 digits.  (Computations over 4096 digits are
silently truncated. Throwing an error instead is a TODO item I hope will
be worked on as part of this change.)

Is that an acceptable tradeoff (reduced size, reduced range) for our
users?

---

Simon Riggs wrote:
 
 Now we're into 8.2devel mode, its time to submit the previously
 discussed patch that:
 
 - reduces Numeric storage format by 2 bytes
 - limits scale to +/- 508 decimal places
 
 This is sufficient to allow Numeric to continue to be used as the
 default numeric representation for all numbers in the parser.
 
 Passes: make check on cvstip, as well as some tests not in there.
 
 Code comments explain the new format and consequences.
 
 As previously agreed, reviewing this is a 2 stage process:
 1. review/possibly agree OK to commit
 2. check with everybody on GENERAL that the restriction to 508 is
 acceptable
 
 Figure there's no point doing (2) until we agree the proposal/code is
 workable.
 
 As Atsushi-san points out, there is also some CPU optimization to be done
 on Numeric comparison, and also on other areas such as aggregation. I've
 not done this yet.
 
 Best Regards, Simon Riggs

[ Attachment, skipping... ]

 
 ---(end of broadcast)---
 TIP 2: Don't 'kill -9' the postmaster

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] accessing text of the query in a rule

2005-12-02 Thread Toni Casueps
I am making rules for a table and I want to know what values or variables 
I can use apart from the NEW... and OLD... fields; for example, I would 
like to read the SQL query that the user wrote that caused this rule to 
execute. Can this be done?




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Reduce NUMERIC size by 2 bytes, reduce max length to 508 digits

2005-12-02 Thread Roger Hand
Bruce Momjian
Sent: Friday, December 02, 2005 9:39 AM
 
 There is a patch under consideration for 8.2 that would reduce the
 storage requirement for numeric values by two bytes, but also reduce the
 range of allowed numeric values to 508 digits.  
...
 Is that an acceptable tradeoff (reduced size, reduced range) for our
 users?

I would be in favor of this change. What's the plan for anyone who
may be currently using more than 508 digits (if there is anyone!)?

-Roger

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[Fwd: Re: [GENERAL] deadlock on the same relation]

2005-12-02 Thread Francesco Formenti - TVBLOB S.r.l.

Tom Lane wrote:


Francesco Formenti - TVBLOB S.r.l. [EMAIL PROTECTED] writes:
 

I have a problem about deadlock. I have several stored procedures; only 
one of them uses ACCESS EXCLUSIVE LOCK on a table; however, all the 
stored procedures can access to that table, using SELECT, INSERT or UPDATE.
The stored procedures are called by different processes of an external 
application.

In a non-predictable way, I obtain error messages like this one:

2005-11-29 18:23:06 [12771] ERROR:  deadlock detected
DETAIL:  Process 12771 waits for AccessExclusiveLock on relation 26052 
of database 17142; blocked by process 12773.
   Process 12773 waits for AccessExclusiveLock on relation 26052 of 
database 17142; blocked by process 12771.

CONTEXT:  PL/pgSQL function set_session_box_status line 7 at SQL statement

Probably you have been careless about avoiding lock upgrade
situations.  If you are going to take an exclusive lock on a relation,
it is dangerous to already hold a non-exclusive lock on the same
relation, because that prevents anyone else from getting an exclusive
lock; thus if another process is doing the exact same thing you are in
a deadlock situation.

Since SELECT/INSERT/UPDATE take non-exclusive locks, you can't do one of
those and later ask for exclusive lock within the same transaction.
The general rule is get the strongest lock you will need first.

regards, tom lane

Unfortunately, the first operation I do after the BEGIN declaration is 
the LOCK TABLE in access exclusive mode, and it is the only explicit lock I 
perform in all the stored procedures. I'm wondering: if other functions 
access the same table via SELECT or UPDATE, without specifying an 
explicit lock, could this generate a deadlock? What I don't 
understand is the common resource on which the two processes are 
deadlocked.
I can imagine a flow like this:


Transaction 1: ---lock table A (for an UPDATE, for instance)
Transaction 2: ---lock access exclusive on table B (at the beginning of 
the stored procedure)

Transaction 1: ---try to lock table B (for an UPDATE, for instance)
Transaction 2: ---try to lock table A (for an UPDATE, for instance)

But I think it doesn't generate a deadlock error message on the same 
resource (in this case, table B), like the one I've got.


Thanks
Regards,
Francesco





--

Francesco Formenti - TVBLOB S.r.l.
Software Engineer

Via G. Paisiello, 9 20131 Milano, Italia
-
Phone +39 02 36562440
Fax +39 02 20408347
Web Site http://www.tvblob.com
E-mail [EMAIL PROTECTED]



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Slow COUNT

2005-12-02 Thread Poul Møller Hansen

Rodrigo Gonzalez wrote:

I answer here so each one can help you.

Cannot understand what you mean... index is numbered? Are you talking 
about autoincrement as it's called in mysql? Use sequences, please.




Sorry, forgot that your email address was in the reply-to field.

I was seeking a solution on how to make a faster count on a lot of rows,
and I was wondering about the difference between PostgreSQL's and MySQL's 
(MyISAM's) way of handling counts.


I understand the advantages of MVCC compared to row/table locking.
And as far as I have understood, PostgreSQL counts the rows by looping
through all rows, and that's why it takes that long when there are many
rows.

But how is MySQL (MyISAM) doing it, and why doesn't that way work in
the MVCC model?


Thanks,
 Poul


---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Adding order bys to the view definitions (7.4)

2005-12-02 Thread elein

Is there any case when PostgreSQL adds an order by
to a view *definition* in 7.4?  I have evidence
that 
   create view ...
results in 
   create view ... ORDER BY ...
as shown by \d viewname.
The view does a five-way ordinary join.

Any definitive answer would be very helpful.

Thanks,

Elein
[EMAIL PROTECTED]
--
[EMAIL PROTECTED]    Varlena, LLC    www.varlena.com
(510)655-2584(o) (510)543-6079(c)

  PostgreSQL Consulting, Support & Training   

PostgreSQL General Bits   http://www.varlena.com/GeneralBits/
--
I have always depended on the [QA] of strangers.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Slow COUNT

2005-12-02 Thread Rodrigo Gonzalez

No problem at all.

MyISAM stores the row count in its header (.frm file).

You can do something like this using a table that has for example 
tablename, rowcount.


On the other hand, MySQL with MyISAM does not have row locking, so this 
problem is not considered.


Maybe if you tell us what you want and why you want to know the exact row count, 
someone can help you.





Poul Møller Hansen wrote:


Rodrigo Gonzalez wrote:


I answer here so each one can help you.

Cannot understand what you mean... index is numbered? Are you talking 
about autoincrement as it's called in mysql? Use sequences, please.




Sorry, forgot that your email address was in the reply-to field.

I was seeking a solution on how to make a faster count on a lot of rows,
and I was wondering on the difference between PostgreSQL's and MySQL's 
(MyISAM) of handling counts.


I understand the advantages of MVCC compared to row/table locking.
And as far as I have understood PostgreSQL count the rows looping
through all rows, and that's why it takes that long when there are many
rows.

But how is MySQL (MyISAM) doing it, and why doesn't that way work in
the MVCC model.


Thanks,
 Poul


---(end of broadcast)---
TIP 6: explain analyze is your friend



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] createuser ignores stdin in 8.1.0?

2005-12-02 Thread Gary Horton
We're migrating from 7.3.4 to 8.1.0, and I'm seeing different behavior 
with the createuser command. What we are wanting is to streamline our 
product installs, to require minimal user interaction, so we basically 
embed the password like this in our bourne-shell wrapper script:


$POSTGRESQL_HOME/bin/createuser -d -P $PGUSER <<END
$PGPASSWORD
$PGPASSWORD
END

...and in 7.3.4, this works just fine, with the shell consuming the 
stdin and redirecting it to the createuser command, which subsequently 
doesn't bother the user with the prompt (i.e. the prompts are responded 
to automatically). However, this doesn't work at all in 8.1.0 -- 
regardless of the stdin redirection, the createuser program appears to 
ignore the stdin information and insists on prompting for the input 
instead.


Is this a known expected behavior? Do we have a workaround to get this 
to behave automatically like it did for 7.3.4?


Thanks so much for anyone's insights. And, if you would be so kind, 
please cc me at my email since I'm not signed up to receive 
interest-list messages...


-Gary Horton

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] accessing text of the query in a rule

2005-12-02 Thread Richard Huxton

Toni Casueps wrote:
I am making rules for a table and I want to know what values or 
variables can I use apart from the new... and old... fields, for 
example I would like to read the SQL query that the user wrote that 
caused this rule to execute. Can this be done?


No. And there isn't necessarily an original query that's relevant. 
What about a rule activated due to actions taken by a trigger? Are you 
still interested in the user's query, or the query the trigger issued?


If there's a particular problem you're trying to solve, the details 
might well give someone enough info to help.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] createuser ignores stdin in 8.1.0?

2005-12-02 Thread Richard Huxton

Gary Horton wrote:
...and in 7.3.4, this works just fine, with the shell consuming the 
stdin and redirecting it to the createuser command, which subsequently 
doesn't bother the user with the prompt (i.e. the prompts are responded 
to automatically). However, this doesn't work at all in 8.1.0 -- 
regardless of the stdin redirection, the createuser program appears to 
ignore the stdin information and insists on prompting for the input 
instead.


Don't know what's changed, but it could be something that happened when 
roles were introduced. A way to work around it would be to issue the SQL 
CREATE USER... PASSWORD 'foo' via psql.
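
Something along these lines should work non-interactively (a sketch; the
user name and password below are placeholders, not the original script's):

-- run from the wrapper script, e.g.:
--   psql -U postgres -d template1 -c "CREATE USER appuser CREATEDB PASSWORD 'secret'"
CREATE USER appuser CREATEDB PASSWORD 'secret';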


Oh - anything else you come across in the upgrade would be of interest 
to many here, myself included ;-)


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Slow COUNT

2005-12-02 Thread Jaime Casanova
On 12/2/05, Poul Møller Hansen [EMAIL PROTECTED] wrote:
 Rodrigo Gonzalez wrote:
  I answer here so each one can help you.
 
  Cannot understand what you mean... index is numbered? Are you talking
  about autoincrement as it's called in mysql? Use sequences, please.
 

 Sorry, forgot that your email address was in the reply-to field.

 I was seeking a solution on how to make a faster count on a lot of rows,
 and I was wondering on the difference between PostgreSQL's and MySQL's
 (MyISAM) of handling counts.

 I understand the advantages of MVCC compared to row/table locking.
 And as far as I have understood PostgreSQL count the rows looping
 through all rows, and that's why it takes that long when there are many
 rows.

 But how is MySQL (MyISAM) doing it, and why doesn't that way work in
 the MVCC model.


 Thanks,
  Poul


That's because MyISAM isn't concerned about transactions and
visibility stuff... it simply stores and increments...

in postgres you have to know if the row is visible to the transaction
that is counting, if the row was deleted by a concurrent
transaction... etc, etc... it's not as easy as insert, increment...

so the way to do it is to create a trigger that records in a table the
number of rows...
in postgres there isn't such a mechanism built in for all tables because
it would be a penalty for both inserts and deletes in all tables, and
the case is that there are few tables where you want to know exact counts,
if any

--
regards,
Jaime Casanova
(DBA: DataBase Aniquilator ;)

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] How: single db, multiple users

2005-12-02 Thread P G
--- Eric E [EMAIL PROTECTED] wrote:
 At the database level, you can assign creation and
 other permissions to 
 users, but (AFAIK) you cannot deny users to a
 database. However, you can 
 effectively control users' access to a particular
 database by granting 
 or denying them access to a schema within that
 database.

Eric, 

Thanks for the clarification.  It is quite helpful for
me early in my introduction to PostgreSQL.

So that I hear you correctly, it is OK then for only
one user to be the owner of a database.  Then I will
give other users privileges on a one by one basis. 
And I can do this for a particular schema in a
database or all schemas in it, without enumerating all
the tables in those schemas?  Am I correct?  

What kind of privileges do I need to give other users
so that they can create tables, delete them, update
them, etc. in that database?   Can I use ALL
privileges here?






__ 
Start your day with Yahoo! - Make it your home page! 
http://www.yahoo.com/r/hs

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Slow COUNT

2005-12-02 Thread Jan Wieck

On 12/2/2005 2:02 PM, Jaime Casanova wrote:

so the way to do it is create a trigger that record in a table the
number of rows...


Neither, because now you have to update one single row in that new 
table, which causes a row exclusive lock. That is worse than an 
exclusive lock on the original table because it has the same 
serialization of writers but the additional work to update the count 
table as well as vacuum it.


What you need is a separate table where your trigger will insert delta
rows with +1 or -1 for insert and delete. A view will sum() over that 
and tell you the true number of rows. Periodically you condense the 
table by replacing all current rows with one that represents the sum().
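
A rough sketch of that scheme (the table, function and trigger names are
invented; this is not a drop-in implementation):

CREATE TABLE row_count_deltas (tablename text, delta int);

CREATE FUNCTION count_delta() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''INSERT'' THEN
        INSERT INTO row_count_deltas VALUES (TG_RELNAME, +1);
    ELSE
        INSERT INTO row_count_deltas VALUES (TG_RELNAME, -1);
    END IF;
    RETURN NULL;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER my_table_count AFTER INSERT OR DELETE ON my_table
    FOR EACH ROW EXECUTE PROCEDURE count_delta();

-- the view sums the deltas to give the true row count
CREATE VIEW my_table_rowcount AS
    SELECT sum(delta) AS n_rows FROM row_count_deltas
    WHERE tablename = 'my_table';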



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] deadlock detected - when multiple threads try to update one table

2005-12-02 Thread Harakiri
Hello,

im using postgresql 8 under SLES 9 and RH 3.x - under
both OS i encountered the following problem :

Lets say i have a table, which has no reference to any
other table - and i create one entry for each day of
the year within this table (PK). During a day, i
update the values within that specific entry - this
entry will be updated from multiple threads.

Under higher load, i get a message from postgresql,
"deadlock detected" - im using JDBC to update/query
postgresql.

However, i do not understand why i would get a
deadlock detected error message because i only
INCREASE the values of the fields - i do not overwrite
them - my statement simplified looks like this :

update mytable set field1 = field1 + 1, field2 =
field2 + 4200 where pkDate = someDate

Now, multiple threads with a connection pool are
connecting to postgres - and i can reproduce the
deadlock problem when i use a test class which starts
1000 threads to update this table.

Increasing the value of deadlock_timeout helps a bit -
but does not solve the problem - because i think there
should not be a problem at all - the transaction
management of postgres should be able to handle two or
more threads which only want to increase the value of
specific fields - like in programming i++;
Related Questions

a) Is there another way to increase field values
within postgres other than field = field + MY_VALUE?
b) Is it a problem with postgresql or is it my code ?

Thanks



__ 
Yahoo! DSL – Something to write home about. 
Just $16.99/mo. or less. 
dsl.yahoo.com 


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] deadlock detected - when multiple threads try to update

2005-12-02 Thread Stephan Szabo
On Fri, 2 Dec 2005, Harakiri wrote:

 im using postgresql 8 under SLES 9 and RH 3.x - under
 both OS i encountered the following problem :

 Lets say i have a table, which has no reference to any
 other table - and i create one entry for each day of
 the year within this table (PK). During a day, i
 update the values within that specific entry - this
 entry will be updated from multiple threads.

 Under a higher load, i get a message from postgresql
 deadlock detected - im using JDBC to update/query
 postgresql.

 However, i do not understand why i would get a
 deadlock detected error message because i only
 INCREASE the values of the fields - i do not overwrite
 them - my statement simplified looks like this :

 update mytable set field1 = field1 + 1, field2 =
 field2 + 4200 where pkDate = someDate

 Now, multiple threads with a connection pool are
 connecting to postgres - and i can reproduce the
 deadlock problem when i use a test class which starts
 1000 threads to update this table.

 Increasing the value of deadlock_timeout helps a bit -
 but solves not the problem - because i think there
 should not be a problem at all - the transaction
 management of postgres should be able to handle two or
 more threads which only want to increase the value o f
 specific fields - like in programming i++;

 Releated Questions

 a) Is there another way to increase field values
 within postgres other then field = field + MY_VALUE
 b) Is it a problem with postgresql or is it my code ?

It's hard to say with just the above. Are you doing other things in the
transactions besides a single update of that table and/or is the order of
events consistent between the transactions? Does the table have any
triggers, rules or foreign keys?

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] createuser ignores stdin in 8.1.0?

2005-12-02 Thread Gary Horton



Richard Huxton wrote:


Gary Horton wrote:

...and in 7.3.4, this works just fine, with the shell consuming the 
stdin and redirecting it to the createuser command, which 
subsequently doesn't bother the user with the prompt (i.e. the 
prompts are responded to automatically). However, this doesn't work 
at all in 8.1.0 -- regardless of the stdin redirection, the 
createuser program appears to ignore the stdin information and 
insists on prompting for the input instead.



Don't know what's changed, but it could be something that happened 
when roles were introduced. A way to work around it would be to issue 
the SQL CREATE USER... PASSWORD 'foo' via psql.


That's a great idea, Richard - thanks, I'll give this a shot --
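
For illustration, a minimal sketch of that workaround (the user name and
password are placeholders), fed non-interactively from the wrapper script
via psql -c:

CREATE USER appuser WITH CREATEDB PASSWORD 'secret';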



Oh - anything else you come across in the upgrade would be of interest 
to many here, myself included ;-)


So far, so good in general -- I was unable to execute the regression 
tests (installcheck) - the C compilation fails. But I'm successfully 
executing all of our own app-level tests, so I'm optimistic...other than 
that, I did send out a list of questions around configure options, but 
haven't gotten any responses; but I will post to the list about anything 
else I come across!


-gh

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] deadlock detected - when multiple threads try to update one table

2005-12-02 Thread Harakiri
Hi, thanks for the response ,

 
 It's hard to say with just the above. Are you doing
 other things in the
 transactions besides a single update of that table
 and/or is the order of

in each transaction i do basically the same stuff :

insert 1 row into table A, B, C (B and C have a
foreign key in A) without any sub queries - i just
plainly insert data..

then - update my table D row for today with some
numbers (as described)

 events consistent between the transactions? Does the
 table have any
 triggers, rules or foreign keys?

The table D in question does not have any triggers,
rules or foreign keys - it has only one PK and a few
numeric fields.

I don't understand why, when multiple connections do:

T1
update myTable set field1 = field1 + 1, field2 =
field2 + 5000 where myID = 1;

T2
update myTable set field1 = field1 + 1, field2 =
field2 + 2500 where myID = 1;


there should be no deadlock here - T1 just blocks T2
till the update is done, then T2 should be able to
update the same row..
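
(For illustration, the classic way this error arises is two sessions
updating the same two rows in opposite order - a minimal sketch with a
hypothetical table d:

  session 1: BEGIN; UPDATE d SET n = n + 1 WHERE id = 1;
  session 2: BEGIN; UPDATE d SET n = n + 1 WHERE id = 2;
  session 1: UPDATE d SET n = n + 1 WHERE id = 2;  -- waits for session 2
  session 2: UPDATE d SET n = n + 1 WHERE id = 1;  -- waits for session 1
                                                   -- "deadlock detected"

so it is worth checking whether each transaction really touches only the
one row of D, or whether the inserts into A, B and C can also end up
waiting on rows the other transaction has already touched.)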

Thanks
--- Stephan Szabo [EMAIL PROTECTED]
wrote:

 On Fri, 2 Dec 2005, Harakiri wrote:
 
  im using postgresql 8 under SLES 9 and RH 3.x -
 under
  both OS i encountered the following problem :
 
  Lets say i have a table, which has no reference to
 any
  other table - and i create one entry for each day
 of
  the year within this table (PK). During a day, i
  update the values within that specific entry -
 this
  entry will be updated from multiple threads.
 
  Under a higher load, i get a message from
 postgresql
  deadlock detected - im using JDBC to
 update/query
  postgresql.
 
  However, i do not understand why i would get a
  deadlock detected error message because i only
  INCREASE the values of the fields - i do not
 overwrite
  them - my statement simplified looks like this :
 
  update mytable set field1 = field1 + 1, field2 =
  field2 + 4200 where pkDate = someDate
 
  Now, multiple threads with a connection pool are
  connecting to postgres - and i can reproduce the
  deadlock problem when i use a test class which
 starts
  1000 threads to update this table.
 
  Increasing the value of deadlock_timeout helps a
 bit -
  but does not solve the problem - because i think there
  should not be a problem at all - the transaction
  management of postgres should be able to handle
 two or
  more threads which only want to increase the value
 of
  specific fields - like in programming i++;
 
  Related Questions
 
  a) Is there another way to increase field values
  within postgres other than field = field +
 MY_VALUE
  b) Is it a problem with postgresql or is it my
 code ?





__ 
Start your day with Yahoo! - Make it your home page! 
http://www.yahoo.com/r/hs

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Reduce NUMERIC size by 2 bytes, reduce max length to 508 digits

2005-12-02 Thread Tom Lane
Bruce Momjian pgman@candle.pha.pa.us writes:
 There is a patch under consideration for 8.2 that would reduce the
 storage requirement for numeric values by two bytes, but also reduce the
 range of allowed numeric values to 508 digits.  The current specified
 maximum NUMERIC length is 1000 (NUMERIC(1000,0)), and the maximum
 computational length is 4096 digits.  (Computations over 4096 digits are
  silently truncated.)

Bruce is entirely incorrect on the last two points (although he may have
found a bug in his copy of psql, see thread in -patches if you care).

The actual effective limit on NUMERIC is presently 10^128K, which is
probably enough to count the femtoseconds remaining until the heat death
of the universe, and then multiply that by the number of elementary
particles therein ;-).  And it is checked --- compare what you get from
select pow(10::numeric, 131071);
select pow(10::numeric, 131072);

Restricting NUMERIC to 10^508 would therefore be a significant reduction
in dynamic range.  Whether anyone seriously cares is another question
--- if you do want unlimited-precision arithmetic, you should probably
be doing it in some other software anyway.  (The NUMERIC routines get
painfully slow with tens of thousands of digits :-(.)

The current 1000-digit limit on declared NUMERIC columns is basically an
artificial limit, with pretty much the same reasoning as the artificial
limit on declared VARCHAR length: if you think you need more than 1000
digits then you probably ought not be declaring a specific upper limit
at all.

BTW, the limit is on dynamic range, not number of digits: the first
significant digit has to be within 128K places of the decimal point
(or, if this patch is applied, within 508 places of the decimal point),
but you can have as many digits as you like after that one.  It would be
reasonable to describe the patched system as allowing 500 places before
and 500 places after the decimal point, or 1000 digits overall.
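
(As a concrete illustration: a value like 1.23456789 * 10^-500 would still
be representable under the patched format, because its first significant
digit is only 500 places after the decimal point, whereas 10^-509 would
not be, even though it has just one significant digit.)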

So the question is, is anyone doing anything with Postgres that would be
affected by a 500-place limit, or even come close to being affected?

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Reduce NUMERIC size by 2 bytes, reduce max length to 508 digits

2005-12-02 Thread Tom Lane
Roger Hand [EMAIL PROTECTED] writes:
 I would be in favor of this change. What's the plan for anyone who
  may be currently using more than 508 digits (if there is anyone!)?

You could store your data as text columns and do whatever arithmetic
you had in mind on the client side.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [Fwd: Re: [GENERAL] deadlock on the same relation]

2005-12-02 Thread Tom Lane
Francesco Formenti - TVBLOB S.r.l. [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 Probably you have been careless about avoiding lock upgrade
 situations.

 Unfortunately, the first operation I do after the BEGIN declaration is 
 the LOCK TABLE in access exclusive mode, and is the only explicit lock I 
 perform in all the stored procedures.

If you mean that you placed a LOCK TABLE inside the stored procedure,
that's far from being the same thing as the start of the transaction.
For example, if your application does

BEGIN;
SELECT * FROM mytab;
SELECT myprocedure();
COMMIT;

then by the time control arrives inside myprocedure your transaction
already holds a nonexclusive lock on mytab.  If you do LOCK TABLE mytab
inside the function then you're risking deadlock.
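
For illustration, a minimal sketch of the safer ordering, using the same
example: take the strongest lock you will need before the transaction
touches the table in any other way, e.g.

BEGIN;
LOCK TABLE mytab IN ACCESS EXCLUSIVE MODE;
SELECT * FROM mytab;
SELECT myprocedure();
COMMIT;

so the lock is never upgraded later inside myprocedure.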

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] accessing text of the query in a rule

2005-12-02 Thread Jaime Casanova
On 12/2/05, Toni Casueps [EMAIL PROTECTED] wrote:
 I am making rules for a table and I want to know what values or variables
 can I use apart from the new... and old... fields, for example I would
 like to read the SQL query that the user wrote that caused this rule to
 execute. Can this be done?


You can't see the original query, but you do have a lot of other relevant
information available (see the sketch after the links below)

C triggers: http://www.postgresql.org/docs/current/static/trigger-interface.html
In plpgsql: http://www.postgresql.org/docs/current/static/plpgsql-trigger.html
In pltcl: http://www.postgresql.org/docs/current/static/pltcl-trigger.html
In plperl: http://www.postgresql.org/docs/current/static/plperl-triggers.html
In plpython: http://www.postgresql.org/docs/current/static/plpython-trigger.html
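
For illustration, a minimal plpgsql sketch (the table and its id column
are hypothetical) of the kind of information a trigger body can see:

CREATE OR REPLACE FUNCTION show_trigger_info() RETURNS trigger AS $$
BEGIN
    -- TG_OP, TG_RELNAME etc. are supplied automatically by plpgsql
    RAISE NOTICE 'op % on table %, new id = %', TG_OP, TG_RELNAME, NEW.id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER show_info BEFORE INSERT OR UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE show_trigger_info();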

--
regards,
Jaime Casanova
(DBA: DataBase Aniquilator ;)

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] createuser ignores stdin in 8.1.0?

2005-12-02 Thread Bruce Momjian
Gary Horton wrote:
 
 
 Richard Huxton wrote:
 
  Gary Horton wrote:
 
  ...and in 7.3.4, this works just fine, with the shell consuming the 
  stdin and redirecting it to the createuser command, which 
  subsequently doesn't bother the user with the prompt (i.e. the 
  prompts are responded to automatically). However, this doesn't work 
  at all in 8.1.0 -- regardless of the stdin redirection, the 
  createuser program appears to ignore the stdin information and 
  insists on prompting for the input instead.
 
 
  Don't know what's changed, but it could be something that happened 
  when roles were introduced. A way to work around it would be to issue 
  the SQL CREATE USER... PASSWORD 'foo' via psql.
 
 That's a great idea, Richard - thanks, I'll give this a shot --

createuser was converted from a shell script to a C program in 7.4, so I
guess something changed at that point.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Adding order bys to the view definitions (7.4)

2005-12-02 Thread Tom Lane
elein [EMAIL PROTECTED] writes:
 Is there any case when PostgreSQL adds an order by
 to a view *definition* in 7.4?

I believe SELECT DISTINCT (or DISTINCT ON) will do that.
I have a vague recollection that GROUP BY might have done
so at one time as well, but it seems not to anymore.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] createuser ignores stdin in 8.1.0?

2005-12-02 Thread Tom Lane
Gary Horton [EMAIL PROTECTED] writes:
 We're migrating from 7.3.4 to 8.1.0, and I'm seeing different behavior 
 with the createuser command. What we are wanting is to streamline our 
 product installs, to require minimal user interaction, so we basically 
 embed the password like this in our bourne-shell wrapper script:

 $POSTGRESQL_HOME/bin/createuser  -d -P $PGUSER <<END
 $PGPASSWORD

We changed all the clients some time ago to insist on getting passwords
from /dev/tty not stdin.  There was a security argument for it, I think,
but I don't recall the reasoning exactly.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] New.* and old.* as function arguments within rules

2005-12-02 Thread Karl O. Pinc

Hi,

I'm trying to make sure I understand what I'm doing.

Where is new.* and old.* documented, as regards
using them as arguments to functions called from
rules?  If it's not documented then can I rely
on this syntax continuing to work?

(It's tough searching on these strings.  :-)

TIA


Karl [EMAIL PROTECTED]
Free Software:  You don't pay back, you pay forward.
 -- Robert A. Heinlein


---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] pg_dump from Java or SQL?

2005-12-02 Thread Jason Long

Is it possible to access backup and restore functionality
from SQL or Java? I am attempting to allow some of this functionality from a
webapp.



The best I have so far is the following Java code.



Runtime runtime = Runtime.getRuntime();
String[] cmd = { "cmd",
                 "/c",
                 "C:/PostgreSQL/8.0/pgAdmin_III/pg_dump.exe -i -h localhost -p 5432 -U postgres -F c -b -v -f C:/"
                     + (new Date()).getTime()
                     + ".backup mydatabase" };
Process p = runtime.exec(cmd);



Thank you for your time,



Jason Long

CEO and Chief Software Engineer

BS Physics, MS Chemical Engineering

http://www.supernovasoftware.com 


[GENERAL] was a initdb required from 8.1beta3 - beta4?

2005-12-02 Thread Tony Caduto

Hi,
I messed up and updated my beta3 to 8.1 and didn't realize an initdb was
required.
Is there any way I can get at the data?  It's on Gentoo and the
/var/lib/postgresql/data dir is still intact.


I tried to install beta4 but got this error:

FATAL:  database files are incompatible with server
DETAIL:  The database cluster was initialized with CATALOG_VERSION_NO 
200510051, but the server was compiled with CATALOG_VERSION_NO 200510211.

HINT:  It looks like you need to initdb.

Thanks,

Tony

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] was a initdb required from 8.1beta3 - beta4?

2005-12-02 Thread Joshua D. Drake
On Fri, 2005-12-02 at 16:03 -0600, Tony Caduto wrote:
 Hi,
 I messed up and updated my beta3 to 8.1 and didn't realize an initdb was
 required.
 Is there any way I can get at the data?  It's on Gentoo and the
 /var/lib/postgresql/data dir is still intact.

Install beta3, back up, install 8.1, restore.

 
 I tried to install beta4 but got this error:
 
 FATAL:  database files are incompatible with server
 DETAIL:  The database cluster was initialized with CATALOG_VERSION_NO 
 200510051, but the server was compiled with CATALOG_VERSION_NO 200510211.
 HINT:  It looks like you need to initdb.
 
 Thanks,
 
 Tony
 
 ---(end of broadcast)---
 TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match
-- 
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: PLphp, PLperl, ODBCng - http://www.commandprompt.com/



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] default_index_tablespace?

2005-12-02 Thread Jim C. Nasby
TODO?

%Add default_index_tablespace GUC and database parameter.

On Thu, Dec 01, 2005 at 11:48:28AM -0600, Kelly Burkhart wrote:
 On 12/1/05, Alexander M. Pravking [EMAIL PROTECTED] wrote:
 
  PostgreSQL 8.0 brought a great tablespaces feature. However, it's still
  a real pain for one who wants to separate tables and indices to different
  tablespaces: he has to do it manually, explicitely specifying tablespace
  for each index.
 
  Thus, I think it would be handy to have default_index_tablespace server
  variable. Any thoughts?
 
 
 We would use such a thing.  Our databases use three partitions: log, data,
 index.  We currently create our databases like:
 
 CREATE DATABASE dbname OWNER owner TABLESPACE tsname;
 
 We then specify another index tablespace on index creation (which really
 isn't that difficult).
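 (For illustration, something like: CREATE INDEX orders_idx ON orders (order_id)
 TABLESPACE index_space; where the table, index and tablespace names are just
 placeholders.)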
 
 
 Adding an INDEX_TABLESPACE option to CREATE DATABASE might be worthwhile if
 what we have done is typical.  Or if pg maintainers want to suggest this
 usage as sensible.
 
 -K

-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] default_index_tablespace?

2005-12-02 Thread Tom Lane
Jim C. Nasby [EMAIL PROTECTED] writes:
 TODO?
 %Add default_index_tablespace GUC and database parameter.

That was part of the original tablespace proposal and was rejected for
(what seemed at the time) good reasons.  It'd be nice if this thread
betrayed any awareness whatsoever of the previous discussions ...

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] alter table schema on 8.0.X

2005-12-02 Thread Jim C. Nasby
On Thu, Dec 01, 2005 at 10:25:48AM -0200, William Leite Araújo wrote:
  Hi, I'm trying to alter a table schema on 'PostgreSQL 8.0.2 on
 i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.5 (Debian 1:3.3.5-5)'.
  On 8.1, per the documentation, there is the command ALTER TABLE table SET
 SCHEMA new_schema. Any solution for this in 8.0?
 
 
  Ps.: solutions other than CREATE TABLE new_schema.table AS SELECT *
  FROM schema.table, please.
 
 --
 William Leite Araújo

Theoretically, you can do this manually, but there's a number of things
you need to look at. Your best bet is to read through
AlterTableNamespace in
http://developer.postgresql.org/cvsweb.cgi/pgsql/src/backend/commands/tablecmds.c?rev=1.176
and see what it's doing.

It's always a bit dangerous mucking about with the system catalogs, so I
wouldn't recommend this for a production system.
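
For what it's worth, a very rough sketch of the kind of catalog surgery
involved (run as superuser; the schema and table names are placeholders,
and this deliberately ignores dependent objects):

BEGIN;
UPDATE pg_class
   SET relnamespace = (SELECT oid FROM pg_namespace WHERE nspname = 'new_schema')
 WHERE relname = 'mytable'
   AND relnamespace = (SELECT oid FROM pg_namespace WHERE nspname = 'old_schema');
-- the table's indexes and its pg_type row need similar updates,
-- which is exactly why reading AlterTableNamespace first matters
COMMIT;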

If you do code up something that takes all those tables into account
it'd be good to share it with the community. If nothing else, one of the
core coders might tell you how risky it actually is.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] default_index_tablespace?

2005-12-02 Thread Jim C. Nasby
On Fri, Dec 02, 2005 at 06:06:25PM -0500, Tom Lane wrote:
 Jim C. Nasby [EMAIL PROTECTED] writes:
  TODO?
  %Add default_index_tablespace GUC and database parameter.
 
 That was part of the original tablespace proposal and was rejected for
 (what seemed at the time) good reasons.  It'd be nice if this thread
 betrayed any awareness whatsoever of the previous discussions ...

Any idea on what to search for? 'default index tablespace' isn't turning
up anything that looks like that discussion...
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] memory leak under heavy load?

2005-12-02 Thread Will Glynn

hi
i think i've encountered a bug in postgresql 8.1.
yet - i'm not really into submitting it to -bugs, as i have no way to 
successfully redo it again.


basically
i have a server with dual opteron, 4g of memory, 2gb of swap. 
everything working under centos 4.2.

...
what i saw is that the postmaster user started to eat memory.
it allocated *all* memory (both ram and swap), and then died.
load on the machine jumped to something around 20.


I noticed a similar occurrence. We have a high-load PostgreSQL database 
-- not a ridiculous amount of inserts or updates, but a huge variety of 
diverse queries on some 200 tables.


We had noticed load averages of 3-4 on our database for the past couple 
days. Then, this morning, Postgres got killed twice by the Linux 
out-of-memory process killer. (Also on a dual Opteron, 4GB of memory.) 
We were showing 3.5 GB of memory allocated to *something*, but stopping 
Postgres completely for a few seconds didn't lower the number. It wasn't 
taken by any process, which leads me to believe that it's a kernel bug. 
One reboot later, everything is rosy -- load hovers around 1.2, there's 
enough free memory to have a 2.5 GB buffer cache, and swap is untouched.


PostgreSQL 7.4 had run on this box flawlessly for six months -- bad RAM 
forced us to take it down -- then again for another month until we 
upgraded to 8.1 last week. Like the original poster, we're set up for 
~500 MB of shared memory; certainly not enough to make the kernel kill 
-9 postmaster. Kernel is 2.6.11-gentoo-r6, same as before the upgrade.


Also, this didn't happen in our test environment, which uses a similar 
but x86 server. Perhaps this is AMD64 related?


--Will Glynn
Freedom Healthcare

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] 8.1, OID's and plpgsql

2005-12-02 Thread Jim C. Nasby
On Thu, Dec 01, 2005 at 07:18:10PM -0800, Uwe C. Schroeder wrote:
 Why not have something like the rowid in oracle?

http://www.postgresql.org/docs/8.1/interactive/datatype-oid.html, search
on ctid. And
http://www.postgresql.org/docs/8.1/interactive/ddl-system-columns.html.

From the 2nd URL:
ctid

The physical location of the row version within its table. Note that
although the ctid can be used to locate the row version very
quickly, a row's ctid will change each time it is updated or moved
by VACUUM FULL. Therefore ctid is useless as a long-term row
identifier. The OID, or even better a user-defined serial number,
should be used to identify logical rows.

Though I think that a lazy vacuum can change (well, technically remove)
a ctid. AFAIK, it's not safe to use a ctid outside of the transaction
you got it in. Though come to think about it, I don't think there's any
way to get the ctid of a row you just inserted anyway...
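
(For what it's worth, a minimal indirect sketch - assuming a hypothetical
table with a serial primary key - would be to re-select the row right
after inserting it:

    INSERT INTO mytab (val) VALUES ('x');
    SELECT ctid FROM mytab WHERE id = currval('mytab_id_seq');

which is a lookup by key rather than the INSERT itself reporting the ctid.)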

Maybe the docs should be changed to just say that you should never reuse
a ctid outside of the transaction you obtained the ctid in?
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] createuser ignores stdin in 8.1.0?

2005-12-02 Thread Gary Horton




Thanks, Tom, that explains
it. I'm using Richard Huxton's suggested workaround and it's
working just fine -
-gh

Tom Lane wrote:

  Gary Horton [EMAIL PROTECTED] writes:
  
  
We're migrating from 7.3.4 to 8.1.0, and I'm seeing different behavior 
with the createuser command. What we are wanting is to streamline our 
product installs, to require minimal user interaction, so we basically 
embed the password like this in our bourne-shell wrapper script:

  
  
  
  
$POSTGRESQL_HOME/bin/createuser  -d -P $PGUSER <<END
$PGPASSWORD

  
  
We changed all the clients some time ago to insist on getting passwords
from /dev/tty not stdin.  There was a security argument for it, I think,
but I don't recall the reasoning exactly.

			regards, tom lane
  





Re: [GENERAL] 8.1, OID's and plpgsql

2005-12-02 Thread Tom Lane
Jim C. Nasby [EMAIL PROTECTED] writes:
 Maybe the docs should be changed to just say that you should never reuse
 a ctid outside of the transaction you obtained the ctid in?

That's not a sufficient rule either: someone else could still delete or
update the row while your transaction runs.  You'd really have to SELECT
FOR UPDATE or FOR SHARE to be sure the ctid remains stable.  (Of course,
this isn't an issue for the case of a row you just inserted yourself,
since no one else can see it yet to change it.)

The paragraph defining ctid is not the place for a discussion of how it
could be used ... I'm not quite sure where is, though.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Reduce NUMERIC size by 2 bytes, reduce max length to 508 digits

2005-12-02 Thread Tom Lane
I wrote:
 The actual effective limit on NUMERIC is presently 10^128K, which is
 probably enough to count the femtoseconds remaining until the heat death
 of the universe, and then multiply that by the number of elementary
 particles therein ;-).

Should have done my research first.  A little googling says that

* The total number of particles in the universe has been
  variously estimated at numbers from 10^72 up to 10^87.

* The time to the heat death of the universe has been estimated at
  10^200 years (and if there's one significant digit in that exponent
  I'd be surprised...)

So the product I fancifully mentioned would weigh in somewhere around
10^300, and thus be *well* within the capability of even the proposed
restricted numeric format.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] 8.1, OID's and plpgsql

2005-12-02 Thread Jim C. Nasby
On Fri, Dec 02, 2005 at 06:58:39PM -0500, Tom Lane wrote:
 Jim C. Nasby [EMAIL PROTECTED] writes:
  Maybe the docs should be changed to just say that you should never reuse
  a ctid outside of the transaction you obtained the ctid in?
 
 That's not a sufficient rule either: someone else could still delete or
 update the row while your transaction runs.  You'd really have to SELECT
 FOR UPDATE or FOR SHARE to be sure the ctid remains stable.  (Of course,
 this isn't an issue for the case of a row you just inserted yourself,
 since no one else can see it yet to change it.)
 
 The paragraph defining ctid is not the place for a discussion of how it
 could be used ... I'm not quite sure where is, though.

Maybe the MVCC paragraph?

Related to the original question though, is there actually any way to
get the ctid of a row that was just inserted?
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] 8.1, OID's and plpgsql

2005-12-02 Thread Jim C. Nasby
On Fri, Dec 02, 2005 at 06:58:39PM -0500, Tom Lane wrote:
 Jim C. Nasby [EMAIL PROTECTED] writes:
  Maybe the docs should be changed to just say that you should never reuse
  a ctid outside of the transaction you obtained the ctid in?
 
 That's not a sufficient rule either: someone else could still delete or
 update the row while your transaction runs.  You'd really have to SELECT
 FOR UPDATE or FOR SHARE to be sure the ctid remains stable.  (Of course,

Erm, even if they do delete or update the row, wouldn't its ctid still
remain valid since nothing could vacuum it yet? Of course, now it'd
probably see the old version of the row, but that behavior could be
changed so that the database would follow t_ctid in that case.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] was a initdb required from 8.1beta3 - beta4?

2005-12-02 Thread Tony Caduto

Joshua D. Drake wrote




Install beta3, backup, install 8.1 restore.

  


That's what I wanted to do but I can't seem to find a beta3 tar.gz 
anywhere, the downloads only go to beta4 :-(




Tony

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] was a initdb required from 8.1beta3 - beta4?

2005-12-02 Thread Tom Lane
Tony Caduto [EMAIL PROTECTED] writes:
 Install beta3, backup, install 8.1 restore.

 That's what I wanted to do but I can't seem to find a beta3 tar.gz 
 anywhere, the downloads only go to beta4 :-(

If you don't find a tarball, you could pull the beta3 tag from CVS.
See http://developer.postgresql.org/docs/postgres/cvs.html
The tag you want is REL8_1_0BETA3 ... too lazy to check the cvs
man page, but I think you would add -r REL8_1_0BETA3 to the
normal cvs checkout command.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] was a initdb required from 8.1beta3 - beta4?

2005-12-02 Thread Tony Caduto

Tom Lane wrote:

If you don't find a tarball, you could pull the beta3 tag from CVS.
See http://developer.postgresql.org/docs/postgres/cvs.html
The tag you want is REL8_1_0BETA3 ... too lazy to check the cvs
man page, but I think you would add -r REL8_1_0BETA3 to the
normal cvs checkout command.

regards, tom lane

  

Thanks Tom,
I will give that a go.

Tony

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] user_write_lock_oid questions

2005-12-02 Thread Qingqing Zhou

Marian Naghen [EMAIL PROTECTED] wrote

 1. if a user sets a write lock and then, for some reason (bad
 connections, app crashes, etc), disconnects from the server without releasing it,
 what happens to the lock ? Will it still be held ?

When the connection is lost, the server will release all its resources,
including locks.

 2. is there a way to obtain information about the user who sets a
 lock ?

Check out pg_stat_get_backend_userid(integer backendid) function.
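
For illustration, a rough sketch of seeing which backend (and therefore
which user) holds a given lock, by joining pg_locks to pg_stat_activity
(8.1 column names; user locks show up there with locktype 'userlock'):

SELECT l.locktype, l.relation::regclass, l.mode, a.usename, a.procpid
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.procpid = l.pid
 WHERE l.granted;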

Regards,
Qingqing 



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] New.* and old.* as function arguments within rules

2005-12-02 Thread Andreas Kretschmer
Karl O. Pinc [EMAIL PROTECTED] wrote:

 Hi,
 
 I'm trying to make sure I understand what I'm doing.
 
 Where is new.* and old.* documented, as regards
 using them as arguments to functions called from
 rules?  If it's not documented then can I rely

http://www.postgresql.org/docs/8.1/interactive/triggers.html
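
For rules specifically, a minimal sketch (the tables, columns and the
upper() call are just illustrative) of NEW fields being used as function
arguments in a rule action:

CREATE RULE items_audit AS ON INSERT TO items
    DO ALSO INSERT INTO items_log VALUES (new.id, upper(new.name), now());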


HTH Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect. (Linus Torvalds)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org