Re: [GENERAL] Change Windows path to Unix path...

2005-04-27 Thread Patrick . FICHE
Thanks,

That's exactly what I was looking for.



--- 
Patrick Fiche 
email : [EMAIL PROTECTED] 
tel : 01 69 29 36 18 

--- 




-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Shelby Cain
Sent: mardi 26 avril 2005 17:52
To: [EMAIL PROTECTED]; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Change Windows path to Unix path...



--- [EMAIL PROTECTED] wrote:
 Hi,
  
 I'm trying to execute COPY command from some pgsql
 function.
 The filename is given as an argument of the
 function.
 But I get the filename like 'F:\tmp\file.txt' and I
 need to change this to
 'F:/tmp/file.txt' before applying the COPY command.
  
 I didn't succeed in replacing '\' with '/' in the
 filename. 

Did you escape the backslash?  PostgreSQL interprets an unescaped
backslash in a string literal as starting a C-style escape sequence.

Try something like:

create function win32tounix(varchar) returns varchar
as $$
  select replace($1, '\\', '/');
$$ language sql;
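
For example (not part of the original reply; the path is made up), the
doubled backslashes below stand for single backslashes once the string
literal is parsed:

  SELECT win32tounix('F:\\tmp\\file.txt');  -- returns 'F:/tmp/file.txt'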



__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] oid wraparound

2005-04-27 Thread Hubert Fröhlich
Thanks, Neil.
Hubert Fröhlich wrote:
In those days, we had PostgreSQL 7.1 and 7.2, and we had to be careful 
about OIDs approaching 2^32 (about 4.3 billion)

Now, we have 8.0. What does the situation look like?

With the default settings, there is exactly the same risk of OID 
wraparound as in earlier releases. However, you can set the 
default_with_oids configuration parameter to false to significantly 
reduce OID consumption, to the point that you probably won't need to 
worry about it. It will mean that tables will not have OIDs by default, 
so you will need to specify WITH OIDS when creating any table that does 
need them (although think twice before doing so, as there are only a 
few good reasons to use OIDs in user tables).
Which good reasons to use OIDs in user tables are still left?
 * Speeding up some special types of queries?
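
As a quick sketch of the setting described above (not from the original
message; the table names are made up):

  SET default_with_oids = false;            -- or set it in postgresql.conf
  CREATE TABLE events (id int);             -- created without OIDs by default
  CREATE TABLE legacy (id int) WITH OIDS;   -- explicit opt-in where OIDs are needed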
--
Mit freundlichen Grüßen / With kind regards
Hubert Fröhlich
---
Dr.-Ing. Hubert Fröhlich
Bezirksfinanzdirektion München  
Alexandrastr. 3, D-80538 München, GERMANY
Tel. :+49 (0)89 / 2190 - 2980
Fax  :+49 (0)89 / 2190 - 2997
hubert dot froehlich at bvv dot bayern dot de
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] PRIMARY KEY on a *group* of columns imply that each column is NOT

2005-04-27 Thread Stephane Bortzmeyer
On Tue, Apr 26, 2005 at 03:48:44PM -0500,
 Scott Marlowe [EMAIL PROTECTED] wrote 
 a message of 26 lines which said:

 Here's a quote from the SQL1992 spec that's VERY clear:

Yes, PostgreSQL is right and implements the standard. Now, what's the
rationale for the standard? I understand it for a single column but,
for several columns, it should still be possible to have different
tuples, such as (3, NULL) and (5, NULL) for instance.

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Why sequential scan for currval?

2005-04-27 Thread John Barham
 test=# explain select s from tt where id = currval('tt_id_key');
   QUERY PLAN
 --
  Seq Scan on tt  (cost=0.00..1734.42 rows=1 width=32)
Filter: (id = currval('tt_id_key'::text))
 (2 rows)

should be:

test=# explain select s from tt where id = currval('tt_id_seq'); --
tt_id_seq vs. tt_id_key
  QUERY PLAN
--
 Seq Scan on tt  (cost=0.00..1734.42 rows=1 width=32)
   Filter: (id = currval('tt_id_seq'::text))
(2 rows)

but the question still holds.

  John

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] Why sequential scan for currval?

2005-04-27 Thread John Barham
test=# create table tt (id serial unique, s varchar);
[populate tt w/ 10 rows]
test=# insert into tt (s) values ('foo');
test=# select currval('tt_id_seq');
 currval
-
  12
(1 row)
test=# explain select s from tt where id = 12;
 QUERY PLAN
-
 Index Scan using tt_id_key on tt  (cost=0.00..6.01 rows=1 width=32)
   Index Cond: (id = 12)
(2 rows)
test=# explain select s from tt where id = currval('tt_id_key');
  QUERY PLAN
--
 Seq Scan on tt  (cost=0.00..1734.42 rows=1 width=32)
   Filter: (id = currval('tt_id_key'::text))
(2 rows)

Why is a sequential scan used when comparing id to currval() value vs.
index scan when compared to a constant?

TIA,

  John

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] www.thekompany.com rekall

2005-04-27 Thread Chris Green
On Tue, Apr 26, 2005 at 09:30:49PM -0400, [EMAIL PROTECTED] wrote:
 I just downloaded the windows demo for Rekall, which is an MSAccess
 like product (loosely speaking) with native drivers for postgresql and
 some other engines (plus odbc for yet others).  I was a bit confused
 on certain things so I emailed my questions, and the president of the
 company replied.

 It wasnt clear what product I should purchase for windows, and he said
 that the basic $60 rekall gives you both windows and linux versions.

You know there is a free version of Rekall as well as the paid-for one,
don't you?  Take a look at http://www.rekallrevealed.org

 I was also unclear about how long the demo remains active (30 days, 10
 days?) but he explained that the demo never expires, remains active
 indefinitely, but logs off every 20 minutes.  It unzipped and
 installed effortlessly on my part.  And it was very simple to tell it
 to look at a postgresql database.  It uses python as a scripting
 language.

 I havent done a lot with it yet, but I think I am going to like it a
 lot.


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Why sequential scan for currval?

2005-04-27 Thread Klint Gore
On Wed, 27 Apr 2005 00:28:18 -0700, John Barham [EMAIL PROTECTED] wrote:
 test=# create table tt (id serial unique, s varchar);
 [populate tt w/ 10 rows]
 test=# insert into tt (s) values ('foo');
 test=# select currval('tt_id_seq');
  currval
 -
   12
 (1 row)
 test=# explain select s from tt where id = 12;
  QUERY PLAN
 -
  Index Scan using tt_id_key on tt  (cost=0.00..6.01 rows=1 width=32)
Index Cond: (id = 12)
 (2 rows)
 test=# explain select s from tt where id = currval('tt_id_key');
   QUERY PLAN
 --
  Seq Scan on tt  (cost=0.00..1734.42 rows=1 width=32)
Filter: (id = currval('tt_id_key'::text))
 (2 rows)
 
 Why is a sequential scan used when comparing id to currval() value vs.
 index scan when compared to a constant?

currval() is volatile, which means its value can change from one row in a
statement to the next.  Because the planner can't treat a volatile function
as a constant, it has to evaluate it for each row, so you get a sequential
scan instead of an index scan.
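
One common workaround (not from the original reply) is to force a single
evaluation by wrapping currval() in a sub-select, which lets the planner
compare the indexed column against one value:

  SELECT s FROM tt WHERE id = (SELECT currval('tt_id_seq'));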

klint.

+---+-+
: Klint Gore: Non rhyming:
: EMail   : [EMAIL PROTECTED]   :  slang - the:
: Snail   : A.B.R.I.:  possibilities  :
: Mail  University of New England   :  are useless   :
:   Armidale NSW 2351 Australia : L.J.J.  :
: Fax : +61 2 6772 5376 : :
+---+-+

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Query Designer

2005-04-27 Thread Hannes Dorbath
The SQL generated by tools like that is usually neither performant nor 
particularly readable.

Better to learn basic SQL and use an editor with keyboard templates. That 
way you'll be writing your statements even faster than clicking around 
in some stupid GUI ;)

And yes, PGExplorer is the only FREE query builder I know of as well. 
There are several commercial ones..

a lot of WHERE... =...AND...=...AND...
Maybe it creates implicit joins?
On 26.04.2005 21:35, Carlos Gustavo Fischer wrote:
Hello, people.
I´m looking for a FREE tool where you can VISUALLY link tables and
mark fields and the tool
generates the query automatically.
I´ve tried PGExplorer. It´s nice, but it doens´t create JOINS, just
a lot of WHERE... =...AND...=...AND...
Do you people have any tips ??
Thanks in advance,
Carlos G. Fischer
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] PRIMARY KEY on a *group* of columns imply that each

2005-04-27 Thread Richard Huxton
Stephane Bortzmeyer wrote:
On Tue, Apr 26, 2005 at 03:48:44PM -0500,
 Scott Marlowe [EMAIL PROTECTED] wrote 
 a message of 26 lines which said:


Here's a quote from the SQL1992 spec that's VERY clear:

Yes, PostgreSQL is right and implement the standard. Now, what's the
rationale for the standard? I understand it for a single column but,
for several columns, it should be still possible to have different
tuples, such as (3, NULL) and (5, NULL) for instance.
The value of (3,NULL) isn't well-defined. In particular, you can't say 
that (3,NULL) = (3,NULL) since NULL means not-known. The fact that part 
of the value is not known means the value as a whole is not known.
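
A quick illustration (not in the original message) -- both of these return
NULL, i.e. unknown, rather than true:

  SELECT NULL = NULL;
  SELECT ROW(3, NULL) = ROW(3, NULL);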

--
  Richard Huxton
  Archonet Ltd
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


Re: [GENERAL] Performance difference between ANY and IN, also array

2005-04-27 Thread Alban Hertroys
Bart Grantham wrote:
Hello, all.  I work for a Manhattan ISP and have developed an internal 
systems management/housekeeping app on php/postgres 7.4.  I am trying to 
speed up some bits with stored procedures and have had great success, 
except I've now run into a bit of trouble.  It comes down to this:

# SELECT * FROM connections  WHERE connectee_node_id IN ( 28543,28542 );
-snip-
Time: 1.410 ms
If you can divide your data set into ranges of adjacent values, it is 
even faster to use series of BETWEEN clauses. Kind of depends on the 
amount of processing you need to do to get those ranges. The values 
being ordered beforehand helps a great deal.

For example, you could do WHERE value IN (1,2,3,4,6,7,8,9,10) or you 
could use WHERE value BETWEEN 1 AND 4 OR value BETWEEN 6 AND 10.

You'll also want to avoid queries like WHERE value BETWEEN 2 
AND 3; there, IN (2,3) is probably the better alternative.
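
For instance (not from the original reply; the id values are made up and
assumed to be contiguous), these two filters are equivalent:

  SELECT * FROM connections WHERE connectee_node_id IN (28540, 28541, 28542, 28543);
  SELECT * FROM connections WHERE connectee_node_id BETWEEN 28540 AND 28543;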

--
Alban Hertroys
MAG Productions
T: +31(0)53 4346874
F: +31(0)53 4346876
E: [EMAIL PROTECTED]
W: http://www.magproductions.nl
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [GENERAL] Query Designer

2005-04-27 Thread Sean Davis
Particularly if you have a set of queries that are used often, you 
could certainly build a framework for doing this.  See:

http://genome.ucsc.edu/cgi-bin/hgTables
as an example.  There are many others, I would imagine.  Perl, Ruby, 
and probably Java and Python offer database abstraction tools that 
are aware of the foreign key structures linking tables, so building 
an app that capitalizes on this could give you such a query 
builder, but this problem is a hard one to solve generally, I think.

Sean
On Apr 27, 2005, at 3:59 AM, Hannes Dorbath wrote:
The SQL generated by tools like that is mostly neither performant nor 
readable in particular.

Better learn basic SQL and use an editor with keyboard templates. That 
way you'll be writing your statements even faster than cicking arround 
in some stupid GUI ;)

And yes, PGExplorer is the only FREE query builder I know off as well. 
There are several commercial..

a lot of WHERE... =...AND...=...AND...
Maybe it creates implicit joins?
On 26.04.2005 21:35, Carlos Gustavo Fischer wrote:
Hello, people.
I´m looking for a FREE tool where you can VISUALLY link tables and
mark fields and the tool
generates the query automatically.
I´ve tried PGExplorer. It´s nice, but it doens´t create JOINS, just
a lot of WHERE... =...AND...=...AND...
Do you people have any tips ??
Thanks in advance,
Carlos G. Fischer

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[GENERAL] free WINDOWS rekall?

2005-04-27 Thread Typing80wpm



Chris - Thanks so much for your suggestion to visit www.rekallrevealed.org.  I just 
read over the entire site, and it does not appear that there is a free Windows 
automatic install package for Rekall.  I am really not in a position to take 
some kind of source code and try to generate a Windows package myself, 
especially if it would all be done for me for a reasonable $60 (I 
forget the exact price).  Or am I mistaken regarding a free Windows version of 
Rekall?



 On Tue, Apr 26, 2005 at 09:30:49PM -0400, [EMAIL PROTECTED] wrote:
   I just downloaded the windows demo for Rekall, which is an MSAccess
   like product (loosely speaking) with native drivers for postgresql and
   some other engines (plus odbc for yet others).  I was a bit confused
   on certain things so I emailed my questions, and the president of the
   company replied.

   It wasnt clear what product I should purchase for windows, and he said
   that the basic $60 rekall gives you both windows and linux versions.

  You know there is a free version of Rekall as well as the paid-for one,
  don't you?  Take a look at http://www.rekallrevealed.org

   I was also unclear about how long the demo remains active (30 days, 10
   days?) but he explained that the demo never expires, remains active
   indefinitely, but logs off every 20 minutes.  It unzipped and
   installed effortlessly on my part.  And it was very simple to tell it
   to look at a postgresql database.  It uses python as a scripting
   language.

   I havent done a lot with it yet, but I think I am going to like it a
   lot.


Re: [GENERAL] free WINDOWS rekall?

2005-04-27 Thread Chris Green
On Wed, Apr 27, 2005 at 07:06:24AM -0400, [EMAIL PROTECTED] wrote:
Chris -  Thanks so much for your suggestion to visit
[1]www.rekallrevealed.org.  I just read over the entire site, and it
does not appear that there is a free Windows automatic install package
for Rekall, and I am really not in a position to attempt to take some
kind of source code and attempt to generate a windows package,
especially if it would all be done for me for a reasonable $60 (I
forget exact price).  Or, am I mistaken regarding a free windows
version of Rekall?
 
No, sorry, you're probably right.  I dabbled in Rekall a few months
ago when I was trying to move an Access application to my Linux system,
so I didn't look at the Windows aspects of Rekall.

However, there is yet another site:
http://www.totalrekall.co.uk

At that site a Windows version is downloadable and only costs $25 if I
have read it correctly.

I think there was some sort of (rather acrimonious) split between the
people who now run both www.totalrekall.co.uk and www.rekallrevealed.org
and the people who run www.theKompany.com.

-- 
Chris Green ([EMAIL PROTECTED])

Never ascribe to malice that which can be explained by incompetence.

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] www.thekompany.com rekall

2005-04-27 Thread Christopher Browne
Quoth [EMAIL PROTECTED]:
 I just downloaded the windows demo for Rekall, which is an MSAccess
 like product (loosely speaking) with native drivers for postgresql
 and some other engines (plus odbc for yet others).  I was a bit
 confused on certain things so I emailed my questions, and the
 president of the company replied.

 It wasnt clear what product I should purchase for windows, and he
 said that the basic $60 rekall gives you both windows and linux
 versions.

 I was also unclear about how long the demo remains active (30 days,
 10 days?) but he explained that the demo never expires,  remains
 active indefinitely, but logs off ever 20 minutes.   It unzipped and
 installed effortlessly on my part.  And it was very simple to tell
 it to look at a postgresql database.  It uses python as a scripting
 language.

 I havent done a lot with it yet, but I think I am going to like it a
 lot.

It is worth noting that the authors of Rekall are quite separate from
theKompany.

Before you consider paying TheKompany for licenses, you might want to
take a peek at the authors' web site...

http://totalrekall.co.uk/

And pay particular attention to the FAQ, especially the part where
theKompany is mentioned...

http://totalrekall.co.uk/modules.php?name=FAQ
-- 
wm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','ntlug.org').
http://linuxdatabases.info/info/slony.html
Rules of the Evil Overlord #42.  When I capture the hero, I will make
sure I also get his dog, monkey, ferret, or whatever sickeningly cute
little animal capable of untying ropes and filching keys happens to
follow him around. http://www.eviloverlord.com/

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] PRIMARY KEY on a *group* of columns imply that each column is NOT

2005-04-27 Thread Patrick TJ McPhee
In article [EMAIL PROTECTED],
Stephane Bortzmeyer [EMAIL PROTECTED] wrote:

% But it does not apply to primary keys containing a group of
% columns. In that case (my case), columns do not have to be UNIQUE. But
% they have to be NOT NULL, which puzzles me.

It does apply to primary keys containing groups of columns.

You can get the table definition you want by using a unique constraint,
but you should know that in SQL, unique constraints don't apply to
rows containing null values in the constrained columns. If you
do this:

 create table x (
   name TEXT NOT NULL,
   address INET,
   CONSTRAINT na UNIQUE (name, address)
 );

your table definition will be as you want it, but the constraint you
want won't be there.

$ INSERT INTO x VALUES ('alpha');
INSERT 194224 1
$ INSERT INTO x VALUES ('alpha');
INSERT 194225 1
$ INSERT INTO x VALUES ('alpha');
INSERT 194226 1
$ INSERT INTO x VALUES ('alpha');
INSERT 194227 1
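
The constraint does kick in once both columns are non-NULL (an illustration,
not part of the original message; the address is made up):

  INSERT INTO x VALUES ('alpha', '10.0.0.1');  -- succeeds
  INSERT INTO x VALUES ('alpha', '10.0.0.1');  -- fails: duplicate key violates unique constraint "na"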

-- 

Patrick TJ McPhee
North York  Canada
[EMAIL PROTECTED]

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[GENERAL] Query Designer

2005-04-27 Thread Carlos Gustavo fischer
Hello, people.
I'm looking for a FREE tool where you can VISUALLY link tables and mark 
fields and the tool
generates the query automatically.

I've tried PGExplorer.  It's nice, but it doesn't create JOINs, just a 
lot of WHERE... =...AND...=...AND...

Do you people have any tips ??
Thanks in advance,
Carlos G. Fischer
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


Re: [GENERAL] PRIMARY KEY on a *group* of columns imply that each column is

2005-04-27 Thread Scott Marlowe
On Wed, 2005-04-27 at 02:12, Stephane Bortzmeyer wrote:
 On Tue, Apr 26, 2005 at 03:48:44PM -0500,
  Scott Marlowe [EMAIL PROTECTED] wrote 
  a message of 26 lines which said:
 
  Here's a quote from the SQL1992 spec that's VERY clear:
 
 Yes, PostgreSQL is right and implement the standard. Now, what's the
 rationale for the standard? I understand it for a single column but,
 for several columns, it should be still possible to have different
 tuples, such as (3, NULL) and (5, NULL) for instance.

Since NULL <> NULL, that means you could then have 

(5,NULL) and (5,NULL), since the two NULLs aren't equal.



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] PRIMARY KEY on a *group* of columns imply that each

2005-04-27 Thread Stephan Szabo

On Wed, 27 Apr 2005, Stephane Bortzmeyer wrote:

 On Tue, Apr 26, 2005 at 03:48:44PM -0500,
  Scott Marlowe [EMAIL PROTECTED] wrote
  a message of 26 lines which said:

  Here's a quote from the SQL1992 spec that's VERY clear:

 Yes, PostgreSQL is right and implement the standard. Now, what's the
 rationale for the standard? I understand it for a single column but,
 for several columns, it should be still possible to have different
 tuples, such as (3, NULL) and (5, NULL) for instance.

The case that they're trying to prevent is two tuples like (3, NULL) and
(3,NULL) since uniqueness alone doesn't prevent them both from being
inserted.

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] PRIMARY KEY on a *group* of columns imply that each column is NOT

2005-04-27 Thread Stephane Bortzmeyer
On Wed, Apr 27, 2005 at 05:19:32AM +,
 Patrick TJ McPhee [EMAIL PROTECTED] wrote 
 a message of 37 lines which said:

 but you should know that in SQL, unique constraints don't apply to
 rows containing null values

Maybe I should have, but I didn't.

 your table definition will be as you want it, but the constraint you
 want won't be there.

OK, I will try to write a custom trigger, then.

 

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] PRIMARY KEY on a *group* of columns imply that each

2005-04-27 Thread Scott Marlowe
On Wed, 2005-04-27 at 09:06, Stephane Bortzmeyer wrote:
 On Wed, Apr 27, 2005 at 05:19:32AM +,
  Patrick TJ McPhee [EMAIL PROTECTED] wrote 
  a message of 37 lines which said:
 
  but you should know that in SQL, unique constraints don't apply to
  rows containing null values
 
 May be I should but I didn't.
 
  your table definition will be as you want it, but the constraint you
  want won't be there.
 
 OK, I will try to write a custom trigger, then.

Often the best bet here, btw, is to declare it NOT NULL and then use
something other than NULL to represent the unknown value, like the text
characters 'NA' or something.
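
A minimal sketch of that approach (not from the original message; the
sentinel value is made up, and as the follow-ups note, picking one for an
INET column is awkward):

  CREATE TABLE x (
    name    TEXT NOT NULL,
    address INET NOT NULL DEFAULT '0.0.0.0',  -- '0.0.0.0' stands in for unknown
    PRIMARY KEY (name, address)
  );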

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] PRIMARY KEY on a *group* of columns imply that each column is NOT

2005-04-27 Thread Stephane Bortzmeyer
On Wed, Apr 27, 2005 at 10:26:30AM -0400,
 Tom Lane [EMAIL PROTECTED] wrote 
 a message of 9 lines which said:

 If that's what you want, declare it as UNIQUE not PRIMARY KEY.

As shown by Patrick TJ McPhee, it does not work:

tests=> create table x (
tests(>   name TEXT NOT NULL,
tests(>   address INET,
tests(>   CONSTRAINT na UNIQUE (name, address)
tests(> );
NOTICE:  CREATE TABLE / UNIQUE will create implicit index "na" for table "x"
CREATE TABLE
tests=> INSERT INTO x (name) values ('foobar');
INSERT 45380 1
tests=> INSERT INTO x (name) values ('foobar');
INSERT 45381 1
tests=> INSERT INTO x (name) values ('foobar');
INSERT 45382 1
tests=> INSERT INTO x (name) values ('foobar');
INSERT 45383 1
tests=> select * from x;
  name  | address 
--------+---------
 foobar | 
 foobar | 
 foobar | 
 foobar | 
(4 rows)

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] PRIMARY KEY on a *group* of columns imply that each

2005-04-27 Thread Scott Marlowe
On Wed, 2005-04-27 at 09:06, Stephane Bortzmeyer wrote:
 On Wed, Apr 27, 2005 at 05:19:32AM +,
  Patrick TJ McPhee [EMAIL PROTECTED] wrote 
  a message of 37 lines which said:
 
  but you should know that in SQL, unique constraints don't apply to
  rows containing null values
 
 May be I should but I didn't.

Actually, considering that many databases (at least in the past) have
ignored this and treated nulls as unique things, it's quite
understandable.

MSSQL, for instance, used to definitely allow only one null in a unique
field.  So, for that database, not null wasn't really necessary for a
primary key column.

I believe this problem exist(s)(ed) in several other supposedly
enterprise class databases as well.

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] PRIMARY KEY on a *group* of columns imply that each column is NOT

2005-04-27 Thread Tom Lane
Stephane Bortzmeyer [EMAIL PROTECTED] writes:
 Yes, PostgreSQL is right and implement the standard. Now, what's the
 rationale for the standard? I understand it for a single column but,
 for several columns, it should be still possible to have different
 tuples, such as (3, NULL) and (5, NULL) for instance.

If that's what you want, declare it as UNIQUE not PRIMARY KEY.

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] PRIMARY KEY on a *group* of columns imply that each

2005-04-27 Thread Sebastian Böck
Stephane Bortzmeyer wrote:
On Wed, Apr 27, 2005 at 10:26:30AM -0400,
 Tom Lane [EMAIL PROTECTED] wrote 
 a message of 9 lines which said:


If that's what you want, declare it as UNIQUE not PRIMARY KEY.

As shown by Patrick TJ McPhee, it does not work:
tests=> create table x (
tests(>   name TEXT NOT NULL,
tests(>   address INET,
tests(>   CONSTRAINT na UNIQUE (name, address)
tests(> );
NOTICE:  CREATE TABLE / UNIQUE will create implicit index "na" for table "x"
CREATE TABLE
tests=> INSERT INTO x (name) values ('foobar');
INSERT 45380 1
tests=> INSERT INTO x (name) values ('foobar');
INSERT 45381 1
tests=> INSERT INTO x (name) values ('foobar');
INSERT 45382 1
tests=> INSERT INTO x (name) values ('foobar');
INSERT 45383 1
tests=> select * from x;
  name  | address 
--------+---------
 foobar | 
 foobar | 
 foobar | 
 foobar | 
(4 rows)

If I understand correctly, you want something like:
create table x (
  name TEXT NOT NULL PRIMARY KEY,
  address INET
);
CREATE UNIQUE INDEX na ON x (name, address) WHERE address IS NULL;
HTH
Sebastian
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[GENERAL] After insert trigger question

2005-04-27 Thread mmiranda
Hi ppl,
I have a specific question about insert triggers.  In the docs I found that
you can change the value of an inserted column using the following syntax:

NEW.column_name := value

and then, if you return NEW, the new value is stored instead of the original.
This is true if it is a BEFORE INSERT trigger.
The manual also says that the return value of an AFTER INSERT trigger is
ignored; does that mean that you cannot update the value of a column in the
same way with an AFTER INSERT trigger?
I am concerned about how reliable a BEFORE INSERT trigger is.  I do some
computation in my trigger and I want that, no matter what happens inside the
trigger (exceptions, errors, divide by zero, etc.), the row is still inserted;
I mean, if the trigger fails, I always have the row in my table.
Because of that, I think an AFTER INSERT trigger is the best option, because it
is fired after the data is in the table.  Am I wrong?
thanks

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] PRIMARY KEY on a *group* of columns imply that each column is NOT

2005-04-27 Thread Stephane Bortzmeyer
On Wed, Apr 27, 2005 at 04:50:23PM +0200,
 Sebastian Böck [EMAIL PROTECTED] wrote 
 a message of 48 lines which said:

 CREATE UNIQUE INDEX na ON x (name, address) WHERE address IS NULL;

No, because it prevents two tuples with the same value of name.

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] PRIMARY KEY on a *group* of columns imply that each

2005-04-27 Thread Sebastian Böck
Stephane Bortzmeyer wrote:
On Wed, Apr 27, 2005 at 04:50:23PM +0200,
 Sebastian Böck [EMAIL PROTECTED] wrote 
 a message of 48 lines which said:


CREATE UNIQUE INDEX na ON x (name, address) WHERE address IS NULL;

No, because it prevents two tuples with the same value of name.
Ahh, sorry!  Meant something more like:
CREATE TABLE x (
  name TEXT NOT NULL,
  address INET
);
CREATE UNIQUE INDEX na ON x (name, address);
CREATE UNIQUE INDEX n ON x (name) WHERE address IS NULL;
HTH
Sebastian
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] PRIMARY KEY on a *group* of columns imply that each column is NOT

2005-04-27 Thread Stephane Bortzmeyer
On Wed, Apr 27, 2005 at 09:36:57AM -0500,
 Scott Marlowe [EMAIL PROTECTED] wrote 
 a message of 18 lines which said:

 Often the best bet here, btw, is to declare it not null then use
 something other than null to represent null, like the text
 characters NA or something.

Yes, but it defeats the purpose of NULL. And what should I use as a
pseudo-NULL value for INET? 127.0.0.1? 0.0.0.0? Special values are
well-known for the problems they raise. That's why many languages have
NULL-like solutions (None in Python, undef in Perl, Maybe types in
Haskell, etc).


---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] Reduce size of $PGDATA for demo cdrom?

2005-04-27 Thread Philip Hallstrom
Hi all -
	We've got a product built on FreeBSD and PostgreSQL 7.4.2 that 
I've had to fit onto an installable CDROM for tradeshows and customer demos. 
This is the only way I've found to ensure an easy re-install option 
for the non-technical folks at the tradeshows should they corrupt the box 
itself.  This part all works fine.

However, I've only got 700mb to work with and the bootable CDROM takes a 
chunk of that as does the rest of our app.

Doing a pg_dumpall of the database results in a 369Kb file.
However, $PGDATA is around 60mb.
[EMAIL PROTECTED]:/local/pgsql/data% du -hcs base/* pg_xlog/*
4.4M    base/1
4.4M    base/17141
4.4M    base/17144
4.4M    base/17145
6.6M    base/17146
5.4M    base/17147
 16M    pg_xlog/0006
 16M    pg_xlog/0007
 62M    total
My question is what's the best way to trim that down?  I realize I could 
remove it completely and have my install script do an initdb, etc, but if 
there's any way to keep the installation intact to begin with I'd prefer 
that.

Is there any other way to trim the size of those files?  Maybe dump 
everything out, clean up, then restore it.  Then immediately burn things 
to CD?  Or if there are postgresql.conf options I can set to not keep WAL 
logs, etc around that would work too since this is just for tradeshows...

Thanks all!
-philip
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Reduce size of $PGDATA for demo cdrom?

2005-04-27 Thread Scott Marlowe
On Wed, 2005-04-27 at 10:10, Philip Hallstrom wrote:
 Hi all -
   We've got a product built on FreeBSD and PostgreSQL 7.4.2 that 
 I've had to fit onto an installable CDROM tradeshows and customer demos. 
 This is the only way I've found to ensure an easy to re-install option 
 for the non-technical folks at the tradeshows should they corrupt the box 
 itself.  This part all works fine.
 
 However, I've only got 700mb to work with and the bootable CDROM takes a 
 chunk of that as does the rest of our app.
 
 Doing a pg_dumpall of the database results in a 369Kb file.
 
 However, $PGDATA is around 60mb.
 
 [EMAIL PROTECTED]:/local/pgsql/data% du -hcs base/* pg_xlog/*
 4.4M    base/1
 4.4M    base/17141
 4.4M    base/17144
 4.4M    base/17145
 6.6M    base/17146
 5.4M    base/17147
  16M    pg_xlog/0006
  16M    pg_xlog/0007
  62M    total
 
 
 My question is what's the best way to trim that down?  I realize I could 
 remove it completely and have my install script do an initdb, etc, but if 
 there's anyway to keep the installation intact to begin with I'd prefer 
 that.


My first recommendation would be to put everything into one database. 
It looks like you've got 6 databases.  If you've still got the template0
database, you can probably get rid of that one as well.  If you're not
going to need to create any new databases you can maybe drop template1
too.

psql somedbotherthanthetemplateones
update pg_database set datistemplate=false;
drop database template0;
drop database template1;

Set checkpoint_segments = 1 in the postgresql.conf file.  Not sure
how much that will help, but it should keep the number of 16 MB WAL
segment files in pg_xlog to a minimum.

Lastly, put those other four or so databases into separate schemas in one
database.


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] After insert trigger question

2005-04-27 Thread Stephane Bortzmeyer
On Wed, Apr 27, 2005 at 08:45:44AM -0600,
 [EMAIL PROTECTED] [EMAIL PROTECTED] wrote 
 a message of 21 lines which said:

 I am concerned about how reliable is an before insert trigger, i
 made some computation in my trigger and i want that no matter what
 happens inside the trigger (exceptions, erros, divide by zero, etc)
 , the row must be inserted,

I do not think that pl/pgsql has exception handlers
(http://www.postgresql.org/docs/7.4/interactive/plpgsql-errors-and-messages.html).
 You
can raise exceptions but not catch them. Could you rewrite your
trigger function with another programming language? In Python, it
would be something like (not tested):

try:
    ... your computations
finally:
    # Insert anyway
    return "OK"

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] PRIMARY KEY on a *group* of columns imply that each column is NOT

2005-04-27 Thread Stephane Bortzmeyer
On Wed, Apr 27, 2005 at 05:04:07PM +0200,
 Sebastian Böck [EMAIL PROTECTED] wrote 
 a message of 24 lines which said:

   One is enough :-)
   v
 CREATE TABLE table x (
   name TEXT NOT NULL,
   address INET
 );
 
 CREATE UNIQUE INDEX na ON x (name, address);
 CREATE UNIQUE INDEX n ON x (name) WHERE address IS NULL;

Great! It works fine. Many thanks.

tests=> select * from x;
 name | address 
------+---------
 foo  | 
 foo  | 1.2.3.4
 foo  | ::1
 bar  | ::1
 bar  | 
(5 rows)
tests=> insert into x (name) values ('bar');
ERROR:  duplicate key violates unique constraint "n"
tests=> insert into x (name, address) values ('bar', '::1');
ERROR:  duplicate key violates unique constraint "na"
tests=> insert into x (name) values ('new');
INSERT 21128 1

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] After insert trigger question

2005-04-27 Thread Michael Fuhr
On Wed, Apr 27, 2005 at 05:24:16PM +0200, Stephane Bortzmeyer wrote:
 On Wed, Apr 27, 2005 at 08:45:44AM -0600,
  [EMAIL PROTECTED] [EMAIL PROTECTED] wrote 
  
  I am concerned about how reliable is an before insert trigger, i
  made some computation in my trigger and i want that no matter what
  happens inside the trigger (exceptions, erros, divide by zero, etc)
  , the row must be inserted,
 
 I do not think that pl/pgsql has exception handlers
 (http://www.postgresql.org/docs/7.4/interactive/plpgsql-errors-and-messages.html).

PostgreSQL 8.0 introduced PL/pgSQL exception handlers.

http://www.postgresql.org/docs/8.0/interactive/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING

Regardless of whether the trigger is BEFORE or AFTER, an untrapped
error will abort the insert.

CREATE FUNCTION trigfunc() RETURNS trigger AS '
DECLARE
i  integer;
BEGIN
i := NEW.x / 0;
RETURN NULL;
END;
' LANGUAGE plpgsql;

CREATE TABLE foo (x integer);

CREATE TRIGGER footrig_after AFTER INSERT ON foo
  FOR EACH ROW EXECUTE PROCEDURE trigfunc();

INSERT INTO foo VALUES (123);
ERROR:  division by zero
CONTEXT:  PL/pgSQL function trigfunc line 4 at assignment

SELECT * FROM foo;
 x 
---
(0 rows)

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] After insert trigger question

2005-04-27 Thread mmiranda
PostgreSQL 8.0 introduced PL/pgSQL exception handlers.

http://www.postgresql.org/docs/8.0/interactive/plpgsql-control-structures.h
tml#PLPGSQL-ERROR-TRAPPING

Regardless of whether the trigger is BEFORE or AFTER, an untrapped
error will abort the insert.

CREATE FUNCTION trigfunc() RETURNS trigger AS '
DECLARE
i  integer;
BEGIN
i := NEW.x / 0;
RETURN NULL;
END;
' LANGUAGE plpgsql;

CREATE TABLE foo (x integer);

CREATE TRIGGER footrig_after AFTER INSERT ON foo
  FOR EACH ROW EXECUTE PROCEDURE trigfunc();

INSERT INTO foo VALUES (123);
ERROR:  division by zero
CONTEXT:  PL/pgSQL function trigfunc line 4 at assignment

SELECT * FROM foo;
 x 
---
(0 rows)

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/


So, the answer is: double-check every operation and use exception handlers.
What about performance: if it's a matter of choice between an AFTER or a
BEFORE insert trigger, which performs better?
thanks


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] populating a table via the COPY command using C code.

2005-04-27 Thread Mak, Jason
 hi,
 
 I'm writing an application in C that basically converts binary data into 
 something meaningful.  My first attempt was to parse the binary data and insert 
 it directly into the database in one step.  But this proved to be very slow.  So I 
 decided to go with a two-step process.  The first step is to parse the data 
 and create a flat file with tab-delimited fields.  The second step is to load 
 this data using the COPY command.  I don't quite understand how this is done 
 within C.  Can someone provide me with some examples?  I've already done some 
 searches on the internet; the examples that I found don't match what I'm 
 trying to do.  Please help!
 
 thanks,
 jason.
 

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] After insert trigger question

2005-04-27 Thread Michael Fuhr
On Wed, Apr 27, 2005 at 10:38:48AM -0600, [EMAIL PROTECTED] wrote:

 What about performance, if its a matter of choice between after or before
 insert, what perform better?

According to the Triggers chapter in the documentation, If you have
no specific reason to make a trigger before or after, the before case
is more efficient, since the information about the operation doesn't
have to be saved until end of statement.
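
For reference, a minimal BEFORE variant of the earlier example (a sketch,
not from the original reply; the function and trigger names are made up).
Note that a BEFORE trigger function must return NEW, not NULL, for the row
to be inserted:

  CREATE FUNCTION trigfunc_before() RETURNS trigger AS '
  BEGIN
      -- adjust the row here if needed, e.g. NEW.x := coalesce(NEW.x, 0);
      RETURN NEW;
  END;
  ' LANGUAGE plpgsql;

  CREATE TRIGGER footrig_before BEFORE INSERT ON foo
    FOR EACH ROW EXECUTE PROCEDURE trigfunc_before();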

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] restarting after power outage

2005-04-27 Thread Jon Lapham
Hello,
The following has happened to me maybe 3 or 4 times over the past few 
years (and again today), so I thought I might send in an email to the 
list to see if others experience this.

After a power outage (and bad UPS combo, or plug pull, or bad RAM, etc) 
sometimes (I would guess 10% of the time) postgresql fails to restart 
automatically after booting the computer.  Invariably, it is because the 
postmaster.pid file exists, but maybe this is just a symptom of 
something else.  The solution I have been performing is to simply delete 
this file, and then restart postgres (service postgresql start).

Is this the correct procedure?  Should I be doing something else?  Do 
others see this, or am I the only one?

Finally, I would make the suggestion that the init script should check 
to see if the PID file exists BEFORE starting the server.  If so, issue 
some sort of message on how to proceed.

Thanks, Jon
PS: vital stats:
[EMAIL PROTECTED] init.d]# head -1 /etc/issue
Fedora Core release 3 (Heidelberg)
[EMAIL PROTECTED] init.d]# uname -a
Linux bilbo 2.6.11-1.14_FC3 #1 Thu Apr 7 19:23:49 EDT 2005 i686 athlon 
i386 GNU/Linux
[EMAIL PROTECTED] init.d]# rpm -q postgresql
postgresql-7.4.7-3.FC3.1

--
-**-*-*---*-*---*-*---*-*-*-*---*-*---*-*-*-*-*---
 Jon Lapham  [EMAIL PROTECTED]Rio de Janeiro, Brasil
 Personal: http://www.jandr.org/
***-*--**---***---
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Had a problem with pg_clog

2005-04-27 Thread Peter Wiersig
On Sun, Apr 24, 2005 at 11:52:11AM -0400, Tom Lane wrote:
 Peter Wiersig [EMAIL PROTECTED] writes:
  On Sat, Apr 23, 2005 at 08:19:31PM -0400, Tom Lane wrote:
  Peter Wiersig [EMAIL PROTECTED] writes:
  vacuumdb -z miwabar
  PANIC:  open of /var/lib/pgsql/data/pg_clog/ failed:
  Keine Berechtigung
  server closed the connection unexpectedly
  
  What PG version is this exactly?
 
  It's 7.3.9 from SUSE rpms postgresql-7.3.9-3
 
 Hmm.  7.3.9 has all the known patches for hint-bit updates and
 premature clog truncation,

Good.

 Can you repeat this failure from a standing start --- that is,
 initdb, load your data dump, vacuumdb, PANIC? 


No, I couldn't.

But I can with the old cluster.

And only if I create the new database with the sequence "psql
template1 < dump_with_create.sql; vacuumdb test".

Other combinations like "createdb test; psql test <
dump_with_create.sql; vacuumdb test" work.

I never tried to restore from a dump created by pg_dump -C
before, but this time I was in a hurry to free up more disk space.

-- 
Peter

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] populating a table via the COPY command using C code.

2005-04-27 Thread Michael Fuhr
On Wed, Apr 27, 2005 at 01:12:42PM -0400, Mak, Jason wrote:
 
 The second step is to load this data using the COPY command.
 I don't quite understand how this is done within C.

Are you writing a client application that uses libpq?  If so, have
you seen Functions Associated with the COPY Command in the libpq
chapter of the documentation?

http://www.postgresql.org/docs/8.0/interactive/libpq-copy.html

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] restarting after power outage

2005-04-27 Thread Tom Lane
Jon Lapham [EMAIL PROTECTED] writes:
 After a power outage (and bad UPS combo, or plug pull, or bad RAM, etc) 
 sometimes (I would guess 10% of the time) postgresql fails to restart 
 automatically after booting the computer.  Invariably, it is because the 
 postmaster.pid file exists, but maybe this is just a symptom of 
 something else.  The solution I have been performing is to simply delete 
 this file, and then restart postgres (service postgresql start).

 Is this the correct procedure?

It is.  We have been fooling with the postmaster startup logic to try to
eliminate this gotcha, but it's only very recently (8.0.2) that I think
we got it right.

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] dump and restore a single table containing large objects

2005-04-27 Thread John Liu
Is there a clean way to dump/restore a single table which contains large
objects instead of the entire DB? 
In Pg version=7.4 -
pg_dump: Large object output is not supported for a single table.
pg_dump: Use all tables or a full dump instead.

Does version 8.0's pg_dump support a single table dump containing Lo?

Thanks.
John

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] restarting after power outage

2005-04-27 Thread Jon Lapham
Tom Lane wrote:
Jon Lapham [EMAIL PROTECTED] writes:
After a power outage (and bad UPS combo, or plug pull, or bad RAM, etc) 
sometimes (I would guess 10% of the time) postgresql fails to restart 
automatically after booting the computer.  Invariably, it is because the 
postmaster.pid file exists, but maybe this is just a symptom of 
something else.  The solution I have been performing is to simply delete 
this file, and then restart postgres (service postgresql start).

Is this the correct procedure?
It is.  We have been fooling with the postmaster startup logic to try to
eliminate this gotcha, but it's only very recently (8.0.2) that I think
we got it right.
So, then it would be correct to change my init scripts to do the 
following:  (if so, this patch can be applied to the 7.4 branch)

--- postgresql  2005-02-21 16:33:37.0 -0300
+++ postgresql_pidkiller 2005-04-27 15:38:03.0 -0300
@@ -178,6 +178,13 @@
fi
echo -n $PSQL_START
+
+   # If there is a stray postmaster.pid file laying around, remove it
+   if [ -f ${PGDATA}/postmaster.pid ]
+   then
+   rm ${PGDATA}/postmaster.pid
+   fi
+
$SU -l postgres -c "$PGENGINE/postmaster -p ${PGPORT} -D 
'${PGDATA}' ${PGOPTS} &" >> $PGLOG 2>&1 < /dev/null
sleep 2
pid=`pidof -s $PGENGINE/postmaster`

--
-**-*-*---*-*---*-*---*-*-*-*---*-*---*-*-*-*-*---
 Jon Lapham  [EMAIL PROTECTED]Rio de Janeiro, Brasil
 Personal: http://www.jandr.org/
***-*--**---***---
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] restarting after power outage

2005-04-27 Thread Tom Lane
Jon Lapham [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 It is.  We have been fooling with the postmaster startup logic to try to
 eliminate this gotcha, but it's only very recently (8.0.2) that I think
 we got it right.

 So, then it would be correct to change my init scripts to do the 
 following:  (if so, this patch can be applied to the 7.4 branch)

I would recommend strongly AGAINST that, because what you just did was
remove the defense against starting two postmasters concurrently in the
same data directory (which would be a disaster of the first magnitude).
This is not a problem for bootup of course, but if you ever use this
script to start the postmaster by hand, then you are playing with fire.

We would have put something like that in the standard init scripts
years ago if it were safe.

If you want a solution in the 7.4 branch, I have back-patched the
8.0.2 fix into the latest Fedora Core 3 RPMs (7.4.7-5.FC3.1).

regards, tom lane

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] restarting after power outage

2005-04-27 Thread Jon Lapham
Tom Lane wrote:
Jon Lapham [EMAIL PROTECTED] writes:
So, then it would be correct to change my init scripts to do the 
following:  (if so, this patch can be applied to the 7.4 branch)
I would recommend strongly AGAINST that, because what you just did was
remove the defense against starting two postmasters concurrently in the
same data directory (which would be a disaster of the first magnitude).
This is not a problem for bootup of course, but if you ever use this
script to start the postmaster by hand, then you are playing with fire.
I figured there must be more to it...
If you want a solution in the 7.4 branch, I have back-patched the
8.0.2 fix into the latest Fedora Core 3 RPMs (7.4.7-5.FC3.1).
Nice, thanks.
--
-**-*-*---*-*---*-*---*-*-*-*---*-*---*-*-*-*-*---
 Jon Lapham  [EMAIL PROTECTED]Rio de Janeiro, Brasil
 Personal: http://www.jandr.org/
***-*--**---***---
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] restarting after power outage

2005-04-27 Thread Doug McNaught
Tom Lane [EMAIL PROTECTED] writes:

 Jon Lapham [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 It is.  We have been fooling with the postmaster startup logic to try to
 eliminate this gotcha, but it's only very recently (8.0.2) that I think
 we got it right.

 So, then it would be correct to change my init scripts to do the 
 following:  (if so, this patch can be applied to the 7.4 branch)

 I would recommend strongly AGAINST that, because what you just did was
 remove the defense against starting two postmasters concurrently in the
 same data directory (which would be a disaster of the first magnitude).
 This is not a problem for bootup of course, but if you ever use this
 script to start the postmaster by hand, then you are playing with fire.

What I have done is to create a separate init.d script that removes
the PID file, and arrange for it to run before the PG startup script.
That way you can use the regular script to stop and start without
danger, but on a bootup after an unclean shutdown the PID file will
get removed before PG gets started.  If you're dumb enough to run the
removal script by hand while PG is running, you deserve what you get.  :)

-Doug

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] restarting after power outage

2005-04-27 Thread Peter Wiersig
On Wed, Apr 27, 2005 at 03:03:50PM -0400, Doug McNaught wrote:
 
 What I have done is to create a separate init.d script that
 removes the PID file, and arrange for it to run before the PG
 startup script.

An even better place (if you really want to do all this) would be
something that happens only at boot time.

On a Debian system a script linked to /etc/rcS.d/ would be in
order; on SUSE I would rm it in /etc/init.d/boot.local.


But I advise against it. Do things like that manually after you've
checked that your pgsql-partition is mounted and filled with
correct data.

Some of the distributed init.d scripts for starting postgres also
initdb the data location if they think they should.  That can lead
to the total loss of your cluster.

-- 
Peter

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] populating a table via the COPY command using C code.

2005-04-27 Thread Michael Fuhr
[Please copy the mailing list on replies so others can contribute
to and learn from the discussion.]

On Wed, Apr 27, 2005 at 02:34:26PM -0400, Mak, Jason wrote:

 Yes, my application is a client application that uses libpq api, ie.
 PQexec, etc...  I have looked at the Functions Associated with the COPY
 Command.  But I still don't understand.  what I really need is an
 example of how those api's(PQputCopyData) are used, other than the
 simple example that's provided.

What example are you looking at and what don't you understand about it?

 This dataload should be relatively simple.  I already have a flat
 file created.  I should be able to use some api and say here is the
 pointer to my db connection and here is a pointer to the flat file.
 now do your thing.  Perhaps you can explain this to me.

libpq provides the primitives that you could use to implement such
an API: it would be a trivial matter to write a function that opens
the indicated file, reads its contents, and sends them to the
database.  As the documentation indicates, you'd use PQexec() or
its ilk to send a COPY FROM STDIN command (see the COPY documentation
for the exact syntax), then PQputCopyData() or PQputline() to send
the data (probably in a loop), then PQputCopyEnd() or PQendcopy()
to indicate that you're finished.  Add the necessary file I/O
statements and there's your function.
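
For instance, the command text passed to PQexec() before streaming the rows
might look like this (a sketch; the table and column names are placeholders,
and tab-delimited text is COPY's default format):

  COPY mytable (col1, col2, col3) FROM STDIN;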

Do you have a reason for using an intermediate file?  Instead of
writing data to the file and then reading it back, you could use
PQputCopyData() or PQputline() to send the data directly to the
database.

Another possibility: if the file resides somewhere the backend can
read, and if you can connect to the database as a superuser, then
you could use COPY tablename FROM 'filename'.

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] restarting after power outage

2005-04-27 Thread Philip Hallstrom
It is.  We have been fooling with the postmaster startup logic to try to
eliminate this gotcha, but it's only very recently (8.0.2) that I think
we got it right.

So, then it would be correct to change my init scripts to do the
following:  (if so, this patch can be applied to the 7.4 branch)
I would recommend strongly AGAINST that, because what you just did was
remove the defense against starting two postmasters concurrently in the
same data directory (which would be a disaster of the first magnitude).
This is not a problem for bootup of course, but if you ever use this
script to start the postmaster by hand, then you are playing with fire.
What I have done is to create a separate init.d script that removes
the PID file, and arrange for it to run before the PG startup script.
That way you can use the regular script to stop and start without
danger, but on a bootup after an unclean shutdown the PID file will
get removed before PG gets started.  If you're dumb enough to run the
removal script by hand while PG is running, you deserve what you get.  :)
Or, if your cron supports it, add the following to root's crontab:
@reboot /bin/rm -f /path/to/postgres/pid
Although I like having a separate startup script that runs first to go 
around removing this and other things as well...

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[GENERAL] on insert rule primary key

2005-04-27 Thread Scott Frankel
My original post got eaten.  Apologies in advance if you receive this 
message twice.

I am trying to construct a rule that performs an UPDATE if specific 
conditions are met in an INSERT statement.  Limiting UPDATE's SET 
action to just the new row by testing for the new primary key is 
failing for some reason.  Yet if I eliminate the test, all rows in the 
table are updated.

The actual rule I'm building must handle several OR clauses in its 
conditional test, so I've included that in the following sample.  The 
output I would've expected would have both the Carlos and Miranda 
inserts yielding their favorite color, azul.

Any suggestions on how I can construct the rule to automatically and 
correctly fill the fav_color field?

Thanks in advance!
Scott

CREATE TABLE colors (
    clrs_pkey   SERIAL  PRIMARY KEY,
    first_name  text    UNIQUE DEFAULT NULL,
    fav_color   text    DEFAULT NULL
);
CREATE RULE color_rule AS ON INSERT
TO ONLY colors
WHERE
first_name = 'carlos' OR
first_name = 'miranda'
DO UPDATE ONLY colors SET fav_color = 'azul'
WHERE clrs_pkey = NEW.clrs_pkey;
INSERT INTO colors (first_name, fav_color) VALUES ('carmen', 'verde');
INSERT INTO colors (first_name) VALUES ('carlos');
INSERT INTO colors (first_name, fav_color) VALUES ('rocio', 'rojo');
INSERT INTO colors (first_name, fav_color) VALUES ('miranda', 'negro');
test=> SELECT * FROM ONLY colors;
 clrs_pkey | first_name | fav_color
-----------+------------+-----------
         1 | carmen     | verde
         2 | carlos     |
         5 | rocio      | rojo
         6 | miranda    | negro
(4 rows)
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


[GENERAL] Serial / auto increment data type

2005-04-27 Thread Typing80wpm



I am just beginning to teach myself Postgresql and Access.  I notice 
that PG has the "serial" type of field which increments 
automatically.  I notice that when I do external links in Access 
through ODBC, Access wants a field or fields which will be unique.

Here is my question.  Would I be wise to define each and every table 
with a serial id, so that I am always guaranteed something unique to satisfy 
Access?  It seems to me like no harm can be done, since if you don't 
need the serial id you don't need it, but it is always there if you do need it, 
and it is harder to add such a field down the line, after the fact.
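
A minimal sketch of that pattern (not from the original message; the table
and column names are made up):

  CREATE TABLE customer (
      customer_id SERIAL PRIMARY KEY,  -- surrogate key Access/ODBC can use as the unique row identifier
      name        TEXT NOT NULL
  );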

Thanks!


Re: [GENERAL] restarting after power outage

2005-04-27 Thread Tom Lane
Philip Hallstrom [EMAIL PROTECTED] writes:
 Although I like having a separate startup script that runs first to go 
 around removing this and other things as well...

I think most Unix variants have a specific bootup script that's charged
with doing exactly that; if you can find it, that's a good place to add
a line for postmaster.pid.

regards, tom lane

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] populating a table via the COPY command using C code.

2005-04-27 Thread Mak, Jason
 What example are you looking at and what don't you understand about it?

Some of the examples that I looked over are either from the internet or from 
the Postgres Manual.  The API I'm referring to is PQputCopyData.  However, with 
the explanation given below, I'm starting to understand.

 libpq provides the primitives that you could use to implement such
 an API: it would be a trivial matter to write a function that opens
 the indicated file, reads its contents, and sends them to the
 database.  As the documentation indicates, you'd use PQexec() or
 its ilk to send a COPY FROM STDIN command (see the COPY documentation
 for the exact syntax), then PQputCopyData() or PQputline() to send
 the data (probably in a loop), then PQputCopyEnd() or PQendcopy()
 to indicate that you're finished.  Add the necessary file I/O
 statements and there's your function.

So basically in C, I would open the file using fopen and, in a loop, do 
something like read a line into a buffer with some byte count and send that 
to the database using PQputCopyData.  Is this correct?

 Do you have a reason for using an intermediate file?  Instead of
 writing data to the file and then reading it back, you could use
 PQputCopyData() or PQputline() to send the data directly to the
 database.

For the project I'm working on, we basically set up a postgres data warehouse.  
We have a large set of binary data that needs to be parsed and translated into 
something meaningful.  We intend to load this processed data into 3 tables 
using the quickest means possible.  I've already tried parsing and doing 
inserts, but this proved to be very slow.  So I figured a 2-step automated 
process: the first step would be to parse the data and create 3 separate 
files, then load each file into the warehouse.  I never considered using 
PQputCopyData in realtime; I'm not sure how this would work given 3 different 
tables that hold different data, or how fast it's going to be.  But I have tried 
the last approach, and it works fairly well.  The only problem is the lack of 
insight into where it is during the load processing.

What are your thoughts?  Which approach would be the fastest?
1) 2 step process.
2) realtime PQputCopyData - not sure how this would work with 3 different 
tables.
3) COPY tablename FROM 'filename'

thanks,
jason.



-Original Message-
From: Michael Fuhr [mailto:[EMAIL PROTECTED]
Sent: Wednesday, April 27, 2005 3:46 PM
To: Mak, Jason
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] populating a table via the COPY command using C
code.


[Please copy the mailing list on replies so others can contribute
to and learn from the discussion.]

On Wed, Apr 27, 2005 at 02:34:26PM -0400, Mak, Jason wrote:

 Yes, my application is a client application that uses libpq api, ie.
 PQexec, etc...  I have looked at the Functions Associated with the COPY
 Command.  But I still don't understand.  what I really need is an
 example of how those api's(PQputCopyData) are used, other than the
 simple example that's provided.

What example are you looking at and what don't you understand about it?

 This dataload should be relatively simple.  I already have a flat
 file created.  I should be able to use some api and say here is the
 pointer to my db connection and here is a pointer to the flat file.
 now do your thing.  Perhaps you can explain this to me.

libpq provides the primitives that you could use to implement such
an API: it would be a trivial matter to write a function that opens
the indicated file, reads its contents, and sends them to the
database.  As the documentation indicates, you'd use PQexec() or
its ilk to send a COPY FROM STDIN command (see the COPY documentation
for the exact syntax), then PQputCopyData() or PQputline() to send
the data (probably in a loop), then PQputCopyEnd() or PQendcopy()
to indicate that you're finished.  Add the necessary file I/O
statements and there's your function.

Do you have a reason for using an intermediate file?  Instead of
writing data to the file and then reading it back, you could use
PQputCopyData() or PQputline() to send the data directly to the
database.

Another possibility: if the file resides somewhere the backend can
read, and if you can connect to the database as a superuser, then
you could use COPY tablename FROM 'filename'.

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Serial / auto increment data type

2005-04-27 Thread Scott Marlowe
On Wed, 2005-04-27 at 15:43, [EMAIL PROTECTED] wrote:
 I am just beginning to teach myself Postgresql and Access.  I notice
 that PG has the serial type of field which increments
 automatically.   I notice that when I do external links in Access
 through ODBC, that Access wants a field or fields which will be
 unique.
  
 Here is my question.  Would I be wise to define each and ever table
 with a serial id, so that I may always be guaranteed something uniqe,
 to satisfy Access.   It seems to me like no harm can be done, since if
 you dont need the serial id, you dont need it, but it is always there
 if you do need it, and it is harder to add such a field down the line,
 after the fact.

Since 7.4 or 7.3 (I forget which), serial has NOT guaranteed uniqueness.  
You have to add unique or primary key to the declaration, like so:

create table foo (bar serial unique, otherfield int);

Or replace unique with primary key...
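
If the table already exists, roughly the same effect can be added after the fact.  A sketch only, with made-up names (the sequence, column, and constraint names below are just for illustration):

CREATE SEQUENCE foo_id_seq;
ALTER TABLE foo ADD COLUMN id integer;
ALTER TABLE foo ALTER COLUMN id SET DEFAULT nextval('foo_id_seq');
-- backfill the rows that are already there
UPDATE foo SET id = nextval('foo_id_seq');
ALTER TABLE foo ALTER COLUMN id SET NOT NULL;
ALTER TABLE foo ADD CONSTRAINT foo_id_key UNIQUE (id);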

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Serial / auto increment data type

2005-04-27 Thread Tony Caduto
Yes, that is the way to do it.
I migrated several large Access DBs to Postgres, and most of the tables that 
were done by the non-developers didn't have a primary key defined, so the 
easiest way was just to add a new field to every table and make it a serial.
I believe you would have to do this even if you were using MS SQL Server as the 
backend.
Here is my question.  Would I be wise to define each and ever table with 
a serial id, so that I may always be guaranteed something uniqe, to 
satisfy Access.   It seems to me like no harm can be done, since if you 
dont need the serial id, you dont need it, but it is always there if you 
do need it, and it is harder to add such a field down the line, after 
the fact.
 
Thanks!
--
Tony Caduto
AM Software Design
Home of PG Lightning Admin for Postgresql 8.x
http://www.amsoftwaredesign.com
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] populating a table via the COPY command using C code.

2005-04-27 Thread Sean Davis
On Apr 27, 2005, at 4:48 PM, Mak, Jason wrote:
What's your thoughts??  which approach would be the fastest?
1) 2 step process.
2) realtime PQputCopyData - not sure how this would work with 3 
different tables.
3) COPY tablename FROM 'filename'

thanks,
jason.
COPY tablename FROM 'filename'
is VERY fast.  Generally, people load the data into postgres using COPY 
(perhaps into a loader table that isn't in the same format that the final 
tables will be in) and then do the data manipulation and cleaning within the 
database using database tools.  This paradigm may or may not work for you, 
but it seems to be pretty general.
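
A rough sketch of that loader-table workflow, with invented table, column, and file names (COPY ... FROM 'filename' reads a file on the server and requires superuser privileges):

CREATE TABLE loader_raw (
    rec_type   text,
    payload    text,
    created_at timestamp
);

COPY loader_raw FROM '/tmp/parsed_output.txt';

INSERT INTO table_a (payload, created_at)
    SELECT payload, created_at FROM loader_raw WHERE rec_type = 'A';
INSERT INTO table_b (payload, created_at)
    SELECT payload, created_at FROM loader_raw WHERE rec_type = 'B';

TRUNCATE loader_raw;

If the file lives on the client instead, psql's \copy or PQputCopyData can feed the same loader table through COPY ... FROM STDIN.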

Sean
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


[GENERAL] Thanks for Rekall link!

2005-04-27 Thread Typing80wpm



Thanks for the link and the info. I went there and, sure enough, 
there is a free windows version of Rekall as well as a 25 Pound (British 
Sterling?) version for Windows. The message board forum looks very 
useful. They have a news item about Novell certification/approval 
 

I must say one interesting thing. When I downloaded the trial version from 
TheKompany and asked it to browse a test file in PGSql which I loaded with 
250,000 rows, it started to read them, and read for a long long time (as 
MSAccess does), but then seemed to get stuck, whereas MSAccess is able to browse 
the entire file. I must experiment more with the demo version from 
theKompany, and also with this free version from the site you gave me.
===

Quoth [EMAIL PROTECTED]:
 I just downloaded the windows demo for Rekall, which is an MSAccess like 
 product (loosely speaking) with native drivers for postgresql and some other 
 engines (plus odbc for yet others). I was a bit confused on certain things so 
 I emailed my questions, and the president of the company replied. It wasn't 
 clear what product I should purchase for windows, and he said that the basic 
 $60 rekall gives you both windows and linux versions. I was also unclear 
 about how long the demo remains active (30 days, 10 days?) but he explained 
 that the demo never expires, remains active indefinitely, but logs off every 
 20 minutes. It unzipped and installed effortlessly on my part. And it was 
 very simple to tell it to look at a postgresql database. It uses python as a 
 scripting language. I haven't done a lot with it yet, but I think I am going 
 to like it a lot.

It is worth noting that the authors of Rekall are quite separate from 
"theKompany."

Before you consider paying TheKompany for licenses, you might want to take a 
peek at the authors' web site...
http://totalrekall.co.uk/

And pay particular attention to the FAQ, especially the part where theKompany 
is mentioned...
http://totalrekall.co.uk/modules.php?name=FAQ
-- 
wm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','ntlug.org').
http://linuxdatabases.info/info/slony.html
Rules of the Evil Overlord #42. "When I capture the hero, I will make sure I 
also get his dog, monkey, ferret, or whatever sickeningly cute little animal 
capable of untying ropes and filching keys happens to follow him around." 
http://www.eviloverlord.com/

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
   joining column's datatypes do not match


[GENERAL] missing chunk number 0 for toast value

2005-04-27 Thread Ed L.

This is for PostgreSQL 7.4.6 on hppa2.0w-hp-hpux11.00, compiled 
by GCC 3.2.2.  I'm seeing the following 2 errors:

ERROR:  missing chunk number 0 for toast value 19319495
ERROR:  unexpected chunk number 4 (expected 0) for toast value 
19319495

It is always the same toast value.  The table has 1.8M rows.  
I've identified the column and the row:

$ psql -c "\d audit"
            Table "public.audit"
 Column |            Type             | Modifiers 
--------+-----------------------------+-----------
 value  | text                        | 
 cdate  | timestamp without time zone | 
 key    | text                        | 
Indexes:
    "idx_audit_cdate" btree (cdate)
    "idx_audit_key" btree (key)

$ psql -c "select key from audit where key = '791015.o9'"
    key    
-----------
 791015.o9
(1 row)

$ psql -c "select cdate from audit where key = '791015.o9'"
        cdate        
---------------------
 2005-04-06 00:00:00
(1 row)

$ psql -c "select value from audit where key = '791015.o9'"
ERROR:  missing chunk number 0 for toast value 19319495

It causes all dumps and any access (via SELECT, UPDATE, and 
COPY/pg_dump) to audit.value to fail.  I've tried to look at the 
underlying toast table per some old posts, but no joy.

As usual, this is on a system with a downtime-sensitive customer.  

Any ideas?

Ed

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] restarting after power outage

2005-04-27 Thread Uwe C. Schroeder

Is it just me, or did anyone actually think about adding a UPS to the machine 
and monitoring it with NUT?
That way the machine would shut down properly, making the whole stale pid-file 
issue irrelevant.

UC


On Wednesday 27 April 2005 13:41, Tom Lane wrote:
 Philip Hallstrom [EMAIL PROTECTED] writes:
  Although I like having a separate startup script that runs first to go
  around removing this and other things as well...

 I think most Unix variants have a specific bootup script that's charged
 with doing exactly that; if you can find it, that's a good place to add
 a line for postmaster.pid.

   regards, tom lane

 ---(end of broadcast)---
 TIP 8: explain analyze is your friend

--
Open Source Solutions 4U, LLC   2570 Fleetwood Drive
Phone:  +1 650 872 2425 San Bruno, CA 94066
Cell:   +1 650 302 2405 United States
Fax:+1 650 872 2417

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] missing chunk number 0 for toast value

2005-04-27 Thread Tom Lane
Ed L. [EMAIL PROTECTED] writes:
 This is for PostgreSQL 7.4.6 on hppa2.0w-hp-hpux11.00, compiled 
 by GCC 3.2.2.  I'm seeing the following 2 errors:

 ERROR:  missing chunk number 0 for toast value 19319495
 ERROR:  unexpected chunk number 4 (expected 0) for toast value 
 19319495

Have you tried REINDEXing the toast table in question?

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] restarting after power outage

2005-04-27 Thread Doug McNaught
Uwe C. Schroeder [EMAIL PROTECTED] writes:

 Is this just me or did anyone actually think about adding a UPS to
 the machine and monitor it with NUT ?  That way the machine would
 shut down properly, making the whole stale pid-file issue
 irrelevant.

UPSs fail.  People kick out power cords.  It's good to be able to deal
with it.

-Doug

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] missing chunk number 0 for toast value

2005-04-27 Thread Ed L.
On Wednesday April 27 2005 3:55 pm, Tom Lane wrote:
  ERROR: missing chunk number 0 for toast value 19319495
  ERROR: unexpected chunk number 4 (expected 0) for toast
  value 19319495

 Have you tried REINDEXing the toast table in question?

Not yet.  Any way to repair it without blocking concurrent 
access?

Ed

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] restarting after power outage

2005-04-27 Thread Uwe C. Schroeder
On Wednesday 27 April 2005 15:17, Doug McNaught wrote:
 Uwe C. Schroeder [EMAIL PROTECTED] writes:
  Is this just me or did anyone actually think about adding a UPS to
  the machine and monitor it with NUT ?  That way the machine would
  shut down properly, making the whole stale pid-file issue
  irrelevant.

 UPSs fail.  People kick out power cords.  It's good to be able to deal
 with it.

 -Doug

You're right about that. The question is how often this happens, to justify 
some automated procedure. In the case of a hard shutdown there is a whole bunch 
of things that could potentially go wrong on startup (like fsck failing, 
etc.), so checking up on the machine might be a good idea anyway.
For my part, I locked the server room - works every time the cleaning crew 
comes into the office looking for an outlet to plug the vacuum in. All they 
take out now is the fax machine :-)

UC

--
Open Source Solutions 4U, LLC   2570 Fleetwood Drive
Phone:  +1 650 872 2425 San Bruno, CA 94066
Cell:   +1 650 302 2405 United States
Fax:+1 650 872 2417

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] missing chunk number 0 for toast value

2005-04-27 Thread Tom Lane
Ed L. [EMAIL PROTECTED] writes:
 On Wednesday April 27 2005 3:55 pm, Tom Lane wrote:
 Have you tried REINDEXing the toast table in question?

 Not yet.  Any way to repair it without blocking concurrent 
 access?

Unlikely.  But the lock will only affect operations that need to touch
toasted field values.
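
For reference, a sketch of the suggested repair, using the toast table Ed identifies further down the thread (run as a superuser; anything touching toasted values in "audit" will block until it finishes):

REINDEX TABLE pg_toast.pg_toast_6221538;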

regards, tom lane

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] missing chunk number 0 for toast value

2005-04-27 Thread Ed L.
On Wednesday April 27 2005 4:40 pm, Tom Lane wrote:
 Ed L. [EMAIL PROTECTED] writes:
  On Wednesday April 27 2005 3:55 pm, Tom Lane wrote:
  Have you tried REINDEXing the toast table in question?
 
  Not yet.  Any way to repair it without blocking concurrent
  access?

 Unlikely.  But the lock will only affect operations that need
 to touch toasted field values.

Does this shed any light?  pg_toast_6221538 is the relevant toast 
table...

$ psql -c "set enable_indexscan=off; select chunk_seq, 
length(chunk_data) from pg_toast.pg_toast_6221538 where chunk_id 
= 19319495 order by chunk_seq;"
 chunk_seq | length 
-----------+--------
(0 rows)

$ psql -c "select chunk_seq, length(chunk_data) from 
pg_toast.pg_toast_6221538 where chunk_id = 19319495 order by 
chunk_seq;"
 chunk_seq | length 
-----------+--------
(0 rows)

Ed

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] missing chunk number 0 for toast value

2005-04-27 Thread Tom Lane
Ed L. [EMAIL PROTECTED] writes:
 Does this shed any light?  pg_toast_6221538 is the relevant toast 
 table...

 $ psql -c set enable_indexscan=off; select chunk_seq, 
 length(chunk_data) from pg_toast.pg_toast_6221538 where chunk_id 
 = 19319495 order by chunk_seq;
  chunk_seq | length 
 ---+
 (0 rows)

Doesn't look very promising :-(

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[GENERAL] SPI cursors pinned buffer and moving to the first tuple.

2005-04-27 Thread Tzahi Fadida
Hi,
I need to verify something.
I use SPI cursors in a C function (which returns a SETOF).
I am using the SPI cursors to run the following query:
SELECT DISTINCT * FROM mytable.
Each time the system gets back to the function, 
the SPI cursor can give me the next tuple, which I can return.
Q1: My question is, does an SPI cursor pin its last buffer?
I.e. if I take 1 tuple from SPI each time and
a page can contain 10 tuples, would there be 10 I/Os
or just 1 (since the last page is always pinned in memory)?

I tried to reason out the answer from the code, but it's not as clear as
the heap_fetch, etc. functions.

Q2: If I move back to the first tuple in the cursor, does it re-read everything
on its way back? I don't want the database to start another SELECT
DISTINCT;
it should use what is already there.

Thanks.

Regards,
tzahi.

WARNING TO SPAMMERS:  see at
http://members.lycos.co.uk/my2nis/spamwarning.html



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] missing chunk number 0 for toast value

2005-04-27 Thread Ed L.
On Wednesday April 27 2005 6:11 pm, Tom Lane wrote:
 Ed L. [EMAIL PROTECTED] writes:
  Does this shed any light?  pg_toast_6221538 is the relevant
  toast table...
 
  $ psql -c set enable_indexscan=off; select chunk_seq,
  length(chunk_data) from pg_toast.pg_toast_6221538 where
  chunk_id = 19319495 order by chunk_seq;
   chunk_seq | length
  ---+
  (0 rows)

 Doesn't look very promising :-(

Sorry...what does that mean?  Not promising that reindexing the 
toast table would help because there is no data there? 

Ed


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] I must be daft, but I can't figure out what is wrong with this

2005-04-27 Thread Dann Corbit
I am toying around with a SQL92 INFORMATION_SCHEMA model.  I have
created the SQL to try to explore the idea, but I can't get off the
ground.

What is my utter and obvious idiocy?

E.g.:

schema_test= CREATE SCHEMA INFORMATION_SCHEMA AUTHORIZATION
INFORMATION_SCHEMA ;
ERROR:  parser: parse error at or near authorization
schema_test=


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] Visual Query Builder

2005-04-27 Thread Hrishikesh Deshmukh
Hi All,

I have a DB in PostgreSQL; is there a visual query builder? 

Thanks in advance.
Hrishi

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] BUG #1633: about transactions and row level locking

2005-04-27 Thread Bruno Wolff III
On Wed, Apr 27, 2005 at 13:56:02 +0100,
  deepak [EMAIL PROTECTED] wrote:
 
 The following bug has been logged online:

This isn't a bug, it's a question. I am moving the discussion over to
the general list.

 Bug reference:  1633
 Logged by:  deepak
 Email address:  [EMAIL PROTECTED]
 PostgreSQL version: 8.0.1
 Operating system:   MS windows 2000 Server
 Description:about transactions and row level locking
 Details: 
 
 How we can know a record is locked before updating the record. The purpose
 is to avoid deadlock or queuing of update statements while using select *
 from table for update.

You might be able to use a short timeout on the statement that tries the
update.
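
A minimal sketch of that idea (table, column, and values are invented for illustration); if another transaction already holds the row lock, the SELECT ... FOR UPDATE errors out after the timeout instead of queuing indefinitely, and the transaction can be retried:

BEGIN;
SET LOCAL statement_timeout = 2000;  -- milliseconds; a longer lock wait aborts this transaction
SELECT * FROM mytable WHERE id = 42 FOR UPDATE;
UPDATE mytable SET status = 'done' WHERE id = 42;
COMMIT;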

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] I must be daft, but I can't figure out what is wrong with this

2005-04-27 Thread Tom Lane
Dann Corbit [EMAIL PROTECTED] writes:
 schema_test= CREATE SCHEMA INFORMATION_SCHEMA AUTHORIZATION
 INFORMATION_SCHEMA ;
 ERROR:  parser: parse error at or near authorization
 schema_test=

Judging by the spelling of the error message, you're trying to do this
in a pre-7.3 PG.  We didn't have schemas back then.
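
For reference, on 7.3 or later the same statement form parses fine; a trivial sketch with an invented schema and user name (the user must already exist):

CREATE SCHEMA my_info_schema AUTHORIZATION dann;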

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Reduce size of $PGDATA for demo cdrom?

2005-04-27 Thread Bruno Wolff III
On Wed, Apr 27, 2005 at 10:23:19 -0500,
  Scott Marlowe [EMAIL PROTECTED] wrote:
 
 My first recommendation would be to put everything into one database. 
 it looks like you've got 6 databases.  If you've still got the template0
 database, you can probably get rid of that one as well.  If you're not
 going to need to create any new databases you can maybe drop template1
 too.

Isn't template0 needed for dumps, since they are relative to template0?

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] Postgresql, Windows, Peer-2-Peer network

2005-04-27 Thread Typing80wpm



At work, we have five workstations running Windows XP (Home edition), 
connected peer to peer. I do not know that much about internet or 
intranet. But each machine has its own IP address, and one printer is 
accessible from the five machines, and the printer has its own IP address. 


IS IT POSSIBLE to install Postgresql for windows as a service on one of the 
machines (as I did on my home machine, specifying "localhost") but then have the 
other machines access the PG server through the internet address of that machine 
(the dotted-quad address)?

- Thanks in advance for your help!


Re: [GENERAL] Postgresql, Windows, Peer-2-Peer network

2005-04-27 Thread Aly Dharshi
Hello,
	I don't see why not. You may have to turn on the PG Server to listen 
for connections coming in via TCP so that the other machines can connect 
to it, as well as install the client related stuff on the other client 
machines accessing the machine running the PG server.

Cheers,
Aly.
[EMAIL PROTECTED] wrote:
At work, we have five workstations running Windows XP (Home edition), 
connected peer to peer.  I do not know that much about interner or 
intranet.  But, each machine has its own IP address, and one printer is 
accessible from the five machines, and the printer has its own IP address. 
 
IS IT POSSIBLE to install Postgresql for windows as a service on one of 
the machines (as I did on my home machine, specifing localhost) but 
then have the other machines access the PG server through the internet 
address of that machine (the quad address)?
 
- Thanks in advance for your help!
--
Aly Dharshi
[EMAIL PROTECTED]
 A good speech is like a good dress
  that's short enough to be interesting
  and long enough to cover the subject
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


[GENERAL] Free rekall for Postgresql

2005-04-27 Thread Typing80wpm



I downloaded the free windows rekall from totalrekall.co.uk and it seems to 
run just fine. I created some forms for some text tables, and they wouldn't 
add or save. It was only when I defined a table with one primary key and 
several fields, and then created a form with every field EXCEPT the primary 
key, that it added records just fine. The error I was getting 
involved putting the primary key on the form. It seems like when it goes to add 
or save a record, it assumes that the primary key is NOT on the form, and 
if it IS, then you get an SQL error about the same field being mentioned twice. When 
you leave the primary key off of the form, it is automatically included in 
the insert SQL statement, and seems to be a serial-type autoincrement 
field.

I must say, I am quite confused by the two opposing companies. But, I guess 
I shall see how much I may accomplish with the FREE windows rekall, and 
meanwhile wait for the dust to settle regarding the various rekall companies 
before I invest in a non-free version.

Apparently Novell is going great guns to support Rekall as a substitute for 
M$ Access.


Re: [GENERAL] Postgresql, Windows, Peer-2-Peer network

2005-04-27 Thread Tony Caduto
OF COURSE, as long as the PC is NT or greater (it won't work on Windows 95/98).
Just make sure you modify the pg_hba.conf file to allow the other PCs access, 
and change the postgresql.conf file to listen on all interfaces, i.e. '*'.
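
A sketch of the two changes for 8.0 (the subnet is only an example; adjust it to the office network and reload/restart the server afterwards):

In postgresql.conf:
listen_addresses = '*'          # accept TCP/IP connections on all interfaces

In pg_hba.conf:
host    all    all    192.168.1.0/24    md5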

You could even use PG Lightning Admin with its built-in SSH tunnel 
support, or putty.exe, and have all the other machines connect via 
encrypted SSH; then you could simply leave your server to allow 
connections only from localhost.

--
Tony Caduto
AM Software Design
Home of PG Lightning Admin for Postgresql 8.x
http://www.amsoftwaredesign.com
 
IS IT POSSIBLE to install Postgresql for windows as a service on one 
of the machines (as I did on my home machine, specifing localhost) 
but then have the other machines access the PG server through the 
internet address of that machine (the quad address)?
 
- Thanks in advance for your help!


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Visual Query Builder

2005-04-27 Thread Tony Caduto
PG Lightning Admin has one.  It's not free, but very, very inexpensive, 
especially if you purchase with Euros.
http://www.amsoftwaredesign.com

Hrishikesh Deshmukh wrote:
Hi All,
I have DB in Postgresql; is there a visual query builder? 

Thanks in advance.
Hrishi
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
 


---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] Thanks for Rekall link!

2005-04-27 Thread Christopher Browne
A long time ago, in a galaxy far, far away, [EMAIL PROTECTED] wrote:
 I must say one intereting thing. When I downloaded the trial version
 from TheKompany, and asked it to browse a test file in PGSql which I
 loaded with 250,000 rows, it started to read them, and read for a
 long long time (as MSAccess does), but the seemed to get stuck,
 whereas MSAccess is able to browse the entire file.  I must
 experiment more with the demo version from theKompany, and also with
 this free version from the site you gave me.

This sort of problem is characteristic of the use of array objects
in graphical toolkits.

Suppose you're populating something with 250K rows, perhaps with a
dozen fields per row.  In such a case, the toolkit is slinging around
3-4 million objects, and having to evaluate which of them are visible
on screen at any given time.

_Any_ kind of inefficiency in the library, or in the use of the
library, can easily lead to rendering turning out really, really
badly.

The X Window system has gotten heavily criticized for speed problems,
commonly with respect to how Mozilla used to work when rendering large
web pages.  Reality was that Mozilla was implemented (this is no
longer true, by the way) atop a platform-independent library called
Rogue Wave which then had a mapping to Motif (which is noted as Not
Everyone's Favorite Graphics Library ;-)) which then rendered things
using X.  The True Problem lay somewhere in that set of layers and,
since several of the layers were pretty inscrutable, it was
essentially impractical to address the performance problem.

Much the same thing took place with the Tcl/Tk application, cbb
(Check Book Balancer); the Tk 'array' object got to behave
increasingly badly with increasing thousands of rows.  And changing
one transaction near the top of an account would lead to cascading
balance updates, therefore walking (linear fashion, more than likely
leading to superlinear resource consumption :-() through the rest of
the transactions to update every single balance...

Gigahertz, Gigabytes, and upgrades may overcome that, to some degree,
but it wouldn't be overly surprising if you were hitting some such
unfortunate case.  It might represent something fixed in a later
release of Rekall; it could represent something thorny to resolve.

I would really hate the notion of depending on a GUI to manage
millions of objects in this manner; it is just so easy for it to go
badly.
-- 
cbbrowne,@,gmail.com
http://linuxdatabases.info/info/nonrdbms.html
Rules of the Evil Overlord #10.  I will not interrogate my enemies in
the inner sanctum  -- a small hotel well outside  my borders will work
just as well. http://www.eviloverlord.com/

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq