[HACKERS] findoidjoins

2002-09-03 Thread Christopher Kings-Lynne

findoidjoins doesn't seem to compile:

gmake[1]: Entering directory `/home/chriskl/pgsql-head/contrib/findoidjoins'
gcc -pipe -O -g -Wall -Wmissing-prototypes -Wmissing-declarations -I../../src/interfaces/libpgeasy -I../../src/interfaces/libpq -I. -I../../src/include -c -o findoidjoins.o findoidjoins.c -MMD
findoidjoins.c:8: halt.h: No such file or directory
findoidjoins.c:9: libpgeasy.h: No such file or directory
findoidjoins.c: In function `main':
findoidjoins.c:26: warning: implicit declaration of function `halt'
findoidjoins.c:29: warning: implicit declaration of function `connectdb'
findoidjoins.c:31: warning: implicit declaration of function
`on_error_continue'
findoidjoins.c:32: warning: implicit declaration of function `on_error_stop'
findoidjoins.c:34: warning: implicit declaration of function `doquery'
findoidjoins.c:50: warning: implicit declaration of function `get_result'
findoidjoins.c:50: warning: assignment makes pointer from integer without a
cast
findoidjoins.c:60: warning: assignment makes pointer from integer without a
cast
findoidjoins.c:62: warning: implicit declaration of function `set_result'
findoidjoins.c:63: warning: implicit declaration of function `fetch'
findoidjoins.c:63: `END_OF_TUPLES' undeclared (first use in this function)
findoidjoins.c:63: (Each undeclared identifier is reported only once
findoidjoins.c:63: for each function it appears in.)
findoidjoins.c:66: warning: implicit declaration of function `reset_fetch'
findoidjoins.c:69: warning: implicit declaration of function `unset_result'
findoidjoins.c:83: warning: passing arg 2 of `sprintf' makes pointer from
integer without a cast
findoidjoins.c:107: warning: implicit declaration of function `disconnectdb'
gmake[1]: *** [findoidjoins.o] Error 1
gmake[1]: Leaving directory `/home/chriskl/pgsql-head/contrib/findoidjoins'
gmake: *** [install] Error 2





Re: [HACKERS] findoidjoins

2002-09-03 Thread Alvaro Herrera

Christopher Kings-Lynne wrote: 

 findoidjoins doesn't seem to compile:
 
 gmake[1]: Entering directory `/home/chriskl/pgsql-head/contrib/findoidjoins'
 gcc -pipe -O -g -Wall -Wmissing-prototypes -Wmissing-declarations -I../../sr
 c/interfaces/libpgeasy -I../../src/interfaces/libpq -I. -I../../src/include 
   -c -o findoidjoins.o findoidjoins.c -MMD
 findoidjoins.c:8: halt.h: No such file or directory
 findoidjoins.c:9: libpgeasy.h: No such file or directory

Seems related to the ripping of libpgeasy out of the main
distribution...

-- 
Alvaro Herrera (alvherre[a]atentus.com)
One can defend oneself against attacks; against praise one is defenseless





[HACKERS] possible vacuum improvement?

2002-09-03 Thread Mario Weilguni

I know everyone is busy with the 7.3 beta, but maybe this is something to think about
before releasing it. Currently VACUUM will vacuum every table, but sometimes it's
desirable to leave tables untouched because they're mostly static, or are log/protocol
tables. In my case this applies to pg_largeobject, which holds around 4 GB of data,
while the other tables are ~40 MB. Vacuuming the data is important, but the large
object table rarely changes. The same goes for a protocol table which is around 1 GB
and is never changed apart from INSERTs, so it just grows and never needs a vacuum.
VACUUM on the 4 GB table takes a very long time and brings no improvement; it just
hurts performance and fills the OS buffers.

If pg_class had a field for storing miscellaneous flags (e.g. a bitfield), it would be
possible to set a flag like NO_AUTO_VACUUM and modify the vacuum code to leave such
tables untouched unless they are named explicitly. Maybe there are other uses for such
a bitfield too, and it would help avoid requiring an initdb for simple improvements.
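Purely to illustrate the idea (this is not an existing feature; the column and flag
names below are made up), vacuum could then pick its candidate tables with a query
along these lines:

    -- hypothetical sketch only: pg_class has no "relflags" column today
    SELECT relname
    FROM   pg_class
    WHERE  relkind = 'r'            -- ordinary tables
    AND    (relflags & 1) = 0;      -- bit 1 = the proposed NO_AUTO_VACUUM flag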

Any comments?

Best regards,
Mario Weilguni







Re: [HACKERS] pg_dump compatibility between 7.3 and 7.2?

2002-09-03 Thread Mario Weilguni

On Tuesday, 3 September 2002 at 00:26, Peter Eisentraut wrote:
 Mario Weilguni writes:
  Timestamps in 7.2.x are this format: 2002-07-04 15:19:11.363562+02
  7.3 expects a timestamp per default in this format: 2002-09-02
  08:51:27,223455+02

 If you're not running the really latest 7.3 tip, update and try again.
 Something related to this was fixed recently.

It seems to work now, thanks. My last update was yesterday morning, so I thought it
was fresh enough.

Thanks!




Re: [HACKERS] possible vacuum improvement?

2002-09-03 Thread Shridhar Daithankar

On 3 Sep 2002 at 8:55, Mario Weilguni wrote:

 I know everyone is busy with the 7.3beta, but maybe this is something to think of 
before releasing the beta. Currently VACUUM will vacuum every table, but sometimes
 it's desireable to leave tables untouched because the're mostly static or protocol 
tables. In my case this would be the pg_largeobject which is around 4GB of data, 
while the
 other tables are ~40MB. Vacuuming the data is important, the large object table 
however rarely changes. The same goes for  a protocol table which is around 1GB and 
never is
 changed beside INSERTS, so it's just growing, but never needs vacuum. VACUUM on the 
4GB table needs a long long time and no improvements, it just hurts performance and
 fills OS buffers.
 
 If pg_class would have a field for storing misc flags (e.g. a bitfield). This would 
allow to set a flag like NO_AUTO_VACUUM and modify the vacuum code to leave that 
tables untouched
 if not specified by hand. Maybe there are other uses for such a bitfield too, and 
will help  prevent an initdb for simple improvements.
 
 Any comments?

I suggest vacuuming only the tables that change. Further, I believe updates/deletes 
should be watched for performance, as they create dead tuples. Of course inserts 
affect the statistics and should be monitored too, but something like a log table 
does not need vacuuming that often.

Knowing the application load can help a lot in tuning the DB, in short.

I was running a banking simulation for benchmarking. I know that the accounts table 
gets updated on each transaction, while the log table only gets inserts. So rather 
than vacuuming the entire DB, just doing 'vacuum analyze accounts' gave me almost 
the same results. 

Performance was far better with the per-table vacuum. Without any vacuum I got 
something like 50 tps over 80K transactions. With 'vacuum analyze accounts' every 
5K transactions I got 200 tps.
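For concreteness, the per-table maintenance described above is simply the
table-qualified form of the command, run every few thousand transactions instead of
a database-wide pass:

    -- vacuum and re-analyze only the heavily updated table
    VACUUM ANALYZE accounts;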

Personally, I would prefer to have a trigger on a metadata table that could trigger 
vacuuming of a particular table every n transactions (it would be great if that 
vacuum ran in the background without blocking the metadata table... just a wishlist). 
Can anybody tell me which table I could write such a trigger on? I went through the 
pg_* catalogs for some time but didn't find what I was looking for.

Bye
 Shridhar

--
Reisner's Rule of Conceptual Inertia:   If you think big enough, you'll never 
have to do it.





Re: [HACKERS] possible vacuum improvement?

2002-09-03 Thread Christopher Kings-Lynne

 Personally I would prefer to have a trigger on a metadata table
 where I could
 trigger vacuuming a particular table each n number of
 transactions(Oh it would
 be great if that vacuum runs in background not blocking meta data
 table.. just
 a wishlist...). Can anybody tell me which table I could write
 such a trigger? I
 went thr. pg_* for some time but didn't find what I was looking for..

Actually, if you wrote it in C and kept some static data on each table, you
could probably write a vacuum trigger pretty easily.  You could even keep
the info in a table.

Chris





Re: [HACKERS] possible vacuum improvement?

2002-09-03 Thread Shridhar Daithankar

On 3 Sep 2002 at 15:14, Christopher Kings-Lynne wrote:

  Personally I would prefer to have a trigger on a metadata table
  where I could
  trigger vacuuming a particular table each n number of
  transactions(Oh it would
  be great if that vacuum runs in background not blocking meta data
  table.. just
  a wishlist...). Can anybody tell me which table I could write
  such a trigger? I
  went thr. pg_* for some time but didn't find what I was looking for..
 
 Actually, if you wrote it in C and kept some static data on each table, you
 could probably write a vacuum trigger pretty easily.  You could even keep
 the info in a table.

Actually, that's what I did: update a global transaction counter, then trigger the 
vacuum from a spare thread.

But having it in the DB has the advantage of centralisation. It's just a good-to-have 
kind of thing.

Bye
 Shridhar

--
I don't know why, but first C programs tend to look a lot worse than first 
programs in any other language (maybe except for fortran, but then I suspect all 
fortran programs look like `firsts')   (By Olaf Kirch)





Re: [HACKERS] possible vacuum improvement?

2002-09-03 Thread Mario Weilguni

 gets updated for each transaction but log table is just an insert. So
rather
 than vacumming entire db, just doing 'vacuum analyze accounts' give me
almost
 same results.


That is not really practicable; one database has 107 tables, and a cron job with
107 vacuum calls is completely out of the question and very error-prone anyway.

Regards,
Mario Weilguni






Re: [HACKERS] possible vacuum improvement?

2002-09-03 Thread Shridhar Daithankar

On 3 Sep 2002 at 9:36, Mario Weilguni wrote:
 That is not really practicable, one datebase has 107 tables, and making a
 cron job
 with 107 vacuum calls is completly out of question and very error prone
 anyway.

That's correct. What are the possible alternatives? Either the backend has to 
support something, or the DBA has to script something.

1) If the number of tables that need vacuuming far exceeds the number that don't, 
then simply vacuuming everything would do. But again, the sizes of the individual 
tables affect that judgement as well.

2) As the OP suggested, vacuum could pick up only those tables marked by flags, 
say via an additional option like 'vacuum analyse frequent_ones'. This would need 
a backend change.

3) I guess scripting a cron job for vacuum is a one-time job. If it's desperately 
needed, say 60 tables out of 107 require vacuuming, personally I would spend some 
time writing that script. It depends on the requirements, really.

On a side note, does anybody have some statistics, from benchmarks maybe, on what 
a good rule of thumb for vacuuming is? I found that a vacuum every 5K-10K 
transactions increases the tps enormously, but below 1K transactions it's not as 
effective. Maybe one should consider this factor as well.

Bye
 Shridhar

--
Pascal: A programming language named after a man who would turn over in his 
grave if he knew about it.  -- Datamation, January 15, 1984





Re: [HACKERS] possible vacuum improvement?

2002-09-03 Thread Shridhar Daithankar

On 3 Sep 2002 at 15:39, Christopher Kings-Lynne wrote:

  Actually that's what I did. Update global  transaction counter
  than trigger the
  vacuum from a spare thread.
 
  but having it in DB has advantages of centralisation. It's just a
  good to have
  kind of thing..
 
 Care to submit it as a BSD licensed contrib module then?  Or at least create
 a project for it on http://gborg.postgresql.org/ ?

Sounds like a nice idea. I will do that by this weekend, once I finalise the 
details.

Give me a couple of days to finish it; I will come back soon with it.

Bye
 Shridhar

--
Reporter, n.:   A writer who guesses his way to the truth and dispels it with a 
tempest of words.   -- Ambrose Bierce, The Devil's Dictionary





Re: [HACKERS] possible vacuum improvement?

2002-09-03 Thread Christopher Kings-Lynne

 Actually that's what I did. Update global  transaction counter
 than trigger the
 vacuum from a spare thread.

 but having it in DB has advantages of centralisation. It's just a
 good to have
 kind of thing..

Care to submit it as a BSD licensed contrib module then?  Or at least create
a project for it on http://gborg.postgresql.org/ ?

Chris





[HACKERS] 7.3 gotchas for applications and client libraries

2002-09-03 Thread Lee Kindness

Tom, do you think there is mileage in adding functions (at least to
contrib) to PostgreSQL to cover some of the common tasks applications
look into pg_* for?

For example I recently audited our code here for pg_* access, and
managed to create two plpgsql functions to replace all
occurrences. They were relatively simple queries to check if a table
existed and to check if a column existed, functions for 7.2.x:

 \echo creating function: column_exists
 CREATE OR REPLACE FUNCTION column_exists(NAME, NAME) RETURNS BOOLEAN AS '
DECLARE
tab ALIAS FOR $1;
col ALIAS FOR $2;
rec RECORD;
BEGIN
SELECT INTO rec *
FROM pg_class c, pg_attribute a
WHERE c.relname = tab
AND   c.oid = a.attrelid
AND   a.attnum > 0
AND   a.attname = col;
IF NOT FOUND THEN
RETURN false;
ELSE
RETURN true;
END IF;
END;
 ' LANGUAGE 'plpgsql';

 \echo creating function: table_exists
 CREATE OR REPLACE FUNCTION table_exists(NAME) RETURNS BOOLEAN AS '
DECLARE
tab ALIAS FOR $1;
rec RECORD;
BEGIN
SELECT INTO rec *
FROM  pg_class c
WHERE c.relname = tab;
IF NOT FOUND THEN
RETURN false;
ELSE
RETURN true;
END IF;
END;
 ' LANGUAGE 'plpgsql';

Obviously these need attention when our application targets 7.3 (and
thanks for the heads-up), but all the changes are localised. Surely these
must be fairly common tests, and maybe they would be better added to the
database server so applications are less dependent on the internal catalogues?

Any desire for me to polish these two functions up for contrib in 7.3?
Actually, the Cookbook at http://www.brasileiro.net/postgres/ has
similar functions which will need attention for 7.3 too; is the
eventual plan for those to be folded into the core release?

Thanks, Lee.

Tom Lane writes:
  Bruce suggested that we need a porting guide to help people look for
  application and client-library code that will be broken by the changes
  in PG 7.3.  Here is a first cut at documenting the issues.
  Comments welcome --- in particular, what have I missed?
  [snip ]




[HACKERS] Memory management question

2002-09-03 Thread Nigel J. Andrews



It's probably a pretty basic question explained in some document I haven't seen
but...if I do something like a CreateTupleDescCopy() how do I know my memory
context owns everything allocated without following the code all the way
through until it returns to me?


-- 
Nigel J. Andrews





Re: [HACKERS] Memory management question

2002-09-03 Thread Gavin Sherry

On Tue, 3 Sep 2002, Nigel J. Andrews wrote:

 
 
 It's probably a pretty basic question explained in some document I haven't seen
 but...if I do something like a CreateTupleDescCopy() how do I know my memory
 context owns everything allocated without following the code all the way
 through until it returns to me?

Umm... how else could you *really* know unless you read the
source? Basically, all convenience routines of this nature allocate memory
in the current memory context.

As for CreateTupleDescCopy() you don't have to look too far to see what it
does:

--

TupleDesc
CreateTupleDescCopy(TupleDesc tupdesc)
{
TupleDesc   desc;
int i,
size;

desc = (TupleDesc) palloc(sizeof(struct tupleDesc));

--

Gavin





Re: [HACKERS] Memory management question

2002-09-03 Thread Karel Zak

On Tue, Sep 03, 2002 at 12:28:37PM +0100, Nigel J. Andrews wrote:
 
 
 It's probably a pretty basic question explained in some document I haven't seen
 but...if I do something like a CreateTupleDescCopy() how do I know my memory
 context owns everything allocated without following the code all the way
 through until it returns to me?

 If code doesn't call MemoryContextSwitchTo(), everything is allocated in
the current memory context. You can check whether CurrentMemoryContext is the
same before and after the call that matters to you, but that check proves
nothing, because the code could switch to another context and switch back to
your context after using it. IMHO there is no common way to check this.
(OK, maybe compare the sizes of all contexts before/after the call...)

 Suggestion: add to the memory management code a counter that tracks the
 number of MemoryContextSwitchTo() calls. IMHO it could be compiled in
 only when MEMORY_CONTEXT_CHECKING is defined.

 But I think there are not too many places which switch between
contexts, and they are all well commented (I hope, I wish :-)

Karel

-- 
 Karel Zak  [EMAIL PROTECTED]
 http://home.zf.jcu.cz/~zakkr/
 
 C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz




Re: [HACKERS] possible vacuum improvement?

2002-09-03 Thread Rod Taylor

On Tue, 2002-09-03 at 03:36, Mario Weilguni wrote:
  gets updated for each transaction but log table is just an insert. So
 rather
  than vacumming entire db, just doing 'vacuum analyze accounts' give me
 almost
  same results.
 
 
 That is not really practicable, one datebase has 107 tables, and making a
 cron job
 with 107 vacuum calls is completly out of question and very error prone
 anyway.

So...  Write a script which does something like:

# tables to leave alone
skiptables="'skipme', 'andme'"

# list every other table name (-A -t gives unadorned rows)
tables=`psql -A -t -c "SELECT relname FROM pg_class WHERE relname NOT IN (${skiptables})" template1`

for tab in ${tables} ; do
  vacuumdb -t ${tab} template1
done

Fill in the holes and you're done -- restrict to the right pg_class relkind,
handle schemas appropriately, etc.





Re: [HACKERS] HISTORY file

2002-09-03 Thread Marc G. Fournier


S'alright, I can put the package together tomorrow morning to let you wrap
up the loose ends :)

On Tue, 3 Sep 2002, Bruce Momjian wrote:

 I am still working on the 7.3 HISTORY file.  I have extracted the items,
 but I have to worksmith them and write an introduction.

 It is midnight here now.  I don't think I can finish before ~3am and at
 that point, I am not sure I will know what I am writing.

 Basically, one day of feature freeze wasn't enough time for me to get
 this together.  I could have worked on it earlier, but I didn't have time
 then either --- only the feature freeze has given me time to work on
 this.

 Should we ship beta1 without a HISTORY or delay one day?  People should
 probably review this HISTORY file too, and there isn't time for that
 either unless we delay.

 If all our beta testers are on Hackers, I can post the list when I am
 done so you can package.

 I don't know of any other items holding up the packaging.

 --
   Bruce Momjian|  http://candle.pha.pa.us
   [EMAIL PROTECTED]   |  (610) 359-1001
   +  If your life is a hard drive, |  13 Roberts Road
   +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073







Re: [HACKERS] HISTORY file

2002-09-03 Thread Marc G. Fournier

On Tue, 3 Sep 2002, Tom Lane wrote:

 Bruce Momjian [EMAIL PROTECTED] writes:
  Yes, I haven't gotten to the release checklist yet.  Let's delay a day.

 Or at least late in the day tomorrow.  I have some loose ends to clean
 up yet as well, but I'm beat and am going to bed.

 But I assume we are now officially in feature freeze, right?

Yes, definitely ... and unless any of you have any showstoppers, I'll do
the beta1 packaging first thing Wednesday morning instead of today ;)





Re: [HACKERS] Memory management question

2002-09-03 Thread Nigel J. Andrews

On Tue, 3 Sep 2002, Karel Zak wrote:

 On Tue, Sep 03, 2002 at 12:28:37PM +0100, Nigel J. Andrews wrote:
  
  
  It's probably a pretty basic question explained in some document I haven't seen
  but...if I do something like a CreateTupleDescCopy() how do I know my memory
  context owns everything allocated without following the code all the way
  through until it returns to me?
 
  If some code doesn't call MemoryContextSwitchTo() all is allocated in
 current memory context. You can check if CurrentMemoryContext is same
 before and after call that is important for you - but this check say
 nothing, bacuse some code can switch to other context and after usage
 switch back to your context. IMHO is not common way how check it.
 (Ok, maybe check all contexts size before/after call...)
 
  Suggestion: add to memory managment counter that handle number
  of MemoryContextSwitchTo() calls. IMHO it can be compile
  only if MEMORY_CONTEXT_CHECKING is define.


I quite like that idea. The only thing is that it doesn't fully address the issue of
identifying whether my context owns memory allocated by other functions I've
used. For example:

A called procedure could be doing (pseudo code, obviously):

SwitchContext()
mem = palloc(anumber)
/* use mem */
pfree(mem)
SwitchContextBack()
retmem = palloc(anothersize)

There, the net effect is that I do own retmem, but the test on context-switch
counters would indicate that I may not.

I think the problem is that I don't fully understand why [and when] a context
switch is or should be done. 

  But I think there is not to much places which switching between
 contexts and all are good commented (I hope, I wish :-)

As someone pointed out my example wasn't very complex so checking the source
wasn't onerous. Checking something like heap_modifytuple() is more time
consuming.

I was hoping there was some sort of 'rule of thumb'. In general I can't see how
it could be sensibly known without such a rule and without tracing through the
source.


-- 
Nigel J. Andrews





Re: [HACKERS] 7.3 gotchas for applications and client libraries

2002-09-03 Thread Tom Lane

Lee Kindness [EMAIL PROTECTED] writes:
  CREATE OR REPLACE FUNCTION column_exists(NAME, NAME) RETURNS BOOLEAN AS '

  CREATE OR REPLACE FUNCTION table_exists(NAME) RETURNS BOOLEAN AS '

 Obviously these need attention when our application targets 7.3 (and
 thanks for the heads-up), but all changes are localised.

They are?  What will your policy be about schema names --- won't you
have to touch every caller to add a schema name parameter?

I'm not averse to trying to push logic over to the backend, but I think
the space of application requirements is wide enough that designing
general-purpose functions will be quite difficult.
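As a rough sketch of that point (this is not from the thread; the three-argument
signature and the use of pg_namespace are assumptions about how one might target
7.3), a schema-aware variant of the earlier column_exists could look like:

     CREATE OR REPLACE FUNCTION column_exists(NAME, NAME, NAME) RETURNS BOOLEAN AS '
    DECLARE
        sch ALIAS FOR $1;    -- schema name: the new parameter every caller must supply
        tab ALIAS FOR $2;
        col ALIAS FOR $3;
        rec RECORD;
    BEGIN
        SELECT INTO rec *
        FROM  pg_catalog.pg_class c,
              pg_catalog.pg_namespace n,
              pg_catalog.pg_attribute a
        WHERE n.nspname = sch
        AND   c.relnamespace = n.oid
        AND   c.relname = tab
        AND   c.oid = a.attrelid
        AND   a.attnum > 0
        AND   NOT a.attisdropped    -- 7.3: ignore dropped columns
        AND   a.attname = col;
        RETURN FOUND;
    END;
     ' LANGUAGE 'plpgsql';

Every call site then has to decide which schema it means, which is exactly the sort
of policy question raised above.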

regards, tom lane




[HACKERS] Wanted: pgdiff ($$$)

2002-09-03 Thread ad wolf

We're offering a small reward for a PG hacker that can code up a
mysqldiff-like utility for PG.   For those unfamiliar with mysqldiff:
http://adamspiers.org/computing/mysqldiff/

Creating something similar for PG is slightly more involved (because
of the ref. integrity issues, among others...), but it would certainly
be useful.

To get more details, stop by:
http://www.wolfbioscience.com/pgdiff/

Thanks for the help!




[HACKERS] new index access methods

2002-09-03 Thread Håkon Hagen Clausen

Hi,

I'm planning to experiment with a new index access method. More
specifically, I want to plug an external index program into Postgres as an
index.

Does anybody have some hints on where I can find information on where to
begin? I have looked a little at the GiST project and at the programmer's
guide, but they don't go into many details about the implementation. I would
like to get an overview of how much, and what, I need to implement.

Regards,

Håkon




Re: [HACKERS] [INTERFACES] pgaccess 0.98.8 beta 1 - the show starts

2002-09-03 Thread Teo

- Original Message -
From: Iavor Raytchev [EMAIL PROTECTED]
To: pgaccess - developers [EMAIL PROTECTED]; pgaccess - users
[EMAIL PROTECTED]; pgsql-interfaces [EMAIL PROTECTED];
pgsql-hackers [EMAIL PROTECTED]
Sent: Saturday, August 31, 2002 8:55 PM
Subject: [INTERFACES] pgaccess 0.98.8 beta 1 - the show starts


 In the eve of the big success I would like to bring once again one name
 in focus - Constantin Teodorescu, who built the first (original)
 pgaccess and gave us the tool to find great enjoinment in.

 Teo, we want to thank you, your place on the pgaccess table will be
 always kept!

Thanks a lot, Iavor, and thanks to all of you who are pushing the development
of PgAccess further.
I am sad that I don't have any free time to lend a hand to your efforts, but
I hope that in a couple of months I will be able to make something new for
the PgAccess 1.0 release.

Thanks again,
Teo






Re: [HACKERS] Memory management question

2002-09-03 Thread Tom Lane

Nigel J. Andrews [EMAIL PROTECTED] writes:
 It's probably a pretty basic question explained in some document I
 haven't seen but...if I do something like a CreateTupleDescCopy() how
 do I know my memory context owns everything allocated without
 following the code all the way through until it returns to me?

If it doesn't, then it's broken.  A general rule of the system is that
structures being allocated for return to a routine's caller must be
allocated in the caller's CurrentMemoryContext.  The only exceptions are
for cases where the routine in question is taking responsibility for the
long-term management of the object (for example, a syscache) --- in
which case, it isn't your problem.

regards, tom lane




Re: [HACKERS] possible vacuum improvement?

2002-09-03 Thread Tom Lane

Mario Weilguni [EMAIL PROTECTED] writes:
 I know everyone is busy with the 7.3beta, but maybe this is something
 to think of before releasing the beta.

We are already in feature freeze.

In terms of what might happen for 7.4 or beyond, what I'd personally
like to see is some auto vacuum facility that would launch background
vacuums automatically every so often.  This could (eventually) be made
self-tuning so that it would vacuum heavily-updated tables more often
than seldom-updated ones --- while not forgetting the
every-billion-transactions rule...

regards, tom lane




Re: [HACKERS] serial type as foreign key referential integrity violation

2002-09-03 Thread Shridhar Daithankar

On 31 Aug 2002 at 5:28, Zhicong Leo Liang wrote:

 Hi all,
   Just briefly describe my problem.
   I have two tables.
 create table A(
a1 serial primary key,
a2 varchars(10)

that should be varchar..

 );
 create table B(
 b1 integer primary key,
 b2 Integer,
 foreign key(b2) references a(a1)
 )
 insert into A values('123'); 
 select a1 from A where a2='123'
 --
 a1 
 --
 1
 --
 insert into B values (1,1);
 ERROR!! referential integrity violation - key referenced from B not found in A.

This works... I guess it's a matter of writing slightly cleaner SQL, if nothing else.
I am using postgresql-7.2-12mdk with Mandrake 8.2.

I don't know which approach is better or correct, yours or mine, but this
solves your problem at least:

test=# select * from a;
 a1  | a2
-----+----
 123 |
(1 row)

test=# insert into A(a2) values('123');
INSERT 4863345 1
test=# select * from a;
 a1  | a2
-----+-----
 123 |
   1 | 123
(2 rows)

test=# insert into b(b1,b2) values(1,1);
INSERT 4863346 1
test=# select * from a;
 a1  | a2
-----+-----
 123 |
   1 | 123
(2 rows)

test=# select * from b;
 b1 | b2
----+----
  1 |  1
(1 row)

test=#

Bye
 Shridhar

--
Concept, n.:Any idea for which an outside consultant billed you more than 
$25,000.





Re: [HACKERS] contrib/ intarray, ltree, intagg broken(?) by array

2002-09-03 Thread mlw

This built and worked on my system.
Famous last words, huh?


Bruce Momjian wrote:
 
 Can someone address the intagg issue here, or is the code OK?
 
 ---
 
 Tom Lane wrote:
  Joe Conway and I have just committed some changes in the internal
  representation of Postgres arrays: an element-type-OID field is added to
  the array header, and alignment calculations are now done the same way
  as in ordinary tuple storage, instead of taking shortcuts.  I believe
  that these changes need to be reflected into the intarray, ltree, and
  intagg contrib modules.
 
  intarray and ltree both seem to be mapping their own declarations onto
  arrays using largely-similar code.  But while intarray fails its
  regression test, I find ltree still passes.  So I'm confused about what
  that code is really doing and don't want to touch it.
 
  I tried to fix intagg, but since there is no regression test for it
  I'm unsure whether it's okay.
 
  Could you folks take a look at CVS tip and see what changes are needed,
  if any?
 
  In the longer run, it might be possible to improve these routines to be
  array-type-polymorphic using the new features.  But with the 7.3 beta
  date nearly upon us, I'd counsel first making the existing functionality
  work again...
 
regards, tom lane
 
 
 
 --
   Bruce Momjian|  http://candle.pha.pa.us
   [EMAIL PROTECTED]   |  (610) 359-1001
   +  If your life is a hard drive, |  13 Roberts Road
   +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

? int_aggregate.sql
? intagg.patch
? intagg_test.sql
Index: int_aggregate.c
===
RCS file: /projects/cvsroot/pgsql-server/contrib/intagg/int_aggregate.c,v
retrieving revision 1.4
diff -u -r1.4 int_aggregate.c
--- int_aggregate.c 2002/08/30 00:28:40 1.4
+++ int_aggregate.c 2002/08/30 15:22:03
@@ -11,7 +11,8 @@
  * This file is the property of the Digital Music Network (DMN).
  * It is being made available to users of the PostgreSQL system
  * under the BSD license.
- *
+ * 
+ * NOTE: This module requires sizeof(void *) to be the same as sizeof(int)
  */
 #include "postgres.h"
 
@@ -37,6 +38,9 @@
 #include "utils/lsyscache.h"
 
 
+/* Uncomment this define if you are compiling for postgres 7.2.x */
+/* #define PG_7_2 */
+
 /* This is actually a postgres version of a one dimensional array */
 
 typedef struct
@@ -96,7 +100,9 @@
p->a.size = cb;
p->a.ndim = 0;
p->a.flags = 0;
+#ifndef PG_7_2
p->a.elemtype = INT4OID;
+#endif
p->items = 0;
p->lower = START_NUM;
}
@@ -149,7 +155,9 @@
pnew->a.size = cb;
pnew->a.ndim = 1;
pnew->a.flags = 0;
+#ifndef PG_7_2
pnew->a.elemtype = INT4OID;
+#endif
pnew->lower = 0;
}
else
Index: int_aggregate.sql.in
===
RCS file: /projects/cvsroot/pgsql-server/contrib/intagg/int_aggregate.sql.in,v
retrieving revision 1.1
diff -u -r1.1 int_aggregate.sql.in
--- int_aggregate.sql.in2002/02/25 03:45:27 1.1
+++ int_aggregate.sql.in2002/08/30 15:22:03
@@ -1,7 +1,7 @@
 -- Drop functions
+drop aggregate int_array_aggregate(int4);
 drop function int_agg_state (int4, int4);
 drop function int_agg_final_array (int4);
-drop aggregate int_array_aggregate(int4);
 drop function int_array_enum (int4[]);
 
 
@@ -9,14 +9,14 @@
 -- Is called for each item in an aggregation
 create function int_agg_state (int4, int4)
returns int4
-   as 'MODULE_FILENAME','int_agg_state'
+   as 'MODULE_PATHNAME','int_agg_state'
language 'c';
 
 -- Internal function for the aggregate
 -- Is called at the end of the aggregation, and returns an array.
 create function int_agg_final_array (int4)
returns int4[]
-   as 'MODULE_FILENAME','int_agg_final_array'
+   as 'MODULE_PATHNAME','int_agg_final_array'
language 'c';
 
 -- The aggration funcion.
@@ -35,6 +35,6 @@
 -- as a row.
 create function int_array_enum(int4[])
returns setof integer
-   as 'MODULE_FILENAME','int_enum'
+   as 'MODULE_PATHNAME','int_enum'
language 'c';
 






Re: [HACKERS] findoidjoins

2002-09-03 Thread Tom Lane

Alvaro Herrera [EMAIL PROTECTED] writes:
 Christopher Kings-Lynne wrote: 
 findoidjoins doesn't seem to compile:

 Seems related to the ripping of libpgeasy out of the main
 distribution...

I believe it's been broken for some time (disremember just why, maybe a
schema issue?).  I had a TODO item to resurrect it so that we could
update the oidjoins regression test, which is sadly out of date for
the current system catalogs.  If anyone wants to work on that ...

regards, tom lane




[HACKERS] Just testing tighter UCE controls ...

2002-09-03 Thread Marc G. Fournier


ignore if you see this ...





[HACKERS] BYTEA with ecpg

2002-09-03 Thread Mike Sacauskis

How do I do an insert and retrieval for a bytea field using embedded SQL? 
Does anybody have a code example?  I'm trying to port from Informix to 
PostgreSQL.

Thanks

Mike




Re: [HACKERS] Inheritance

2002-09-03 Thread Peter Gulutzan

The August draft of the SQL:200n standard (9075-2 Foundation) says in
Section 4.17.2: "Every table constraint specified for base table T is
implicitly a constraint on every subtable of T, by virtue of the fact
that every row in a subtable is considered to have a corresponding
superrow in every one of its supertables."

Peter Gulutzan
Co-Author, SQL-99 Complete, Really
Co-Author, SQL Performance Tuning




[HACKERS] Sprintf() auditing and a patch

2002-09-03 Thread Jukka Holappa


Hi,

I'm very new to this project, and inspired by the recent security release, I
started to audit the PostgreSQL source for common mistakes with sprintf().

I mostly found problems with sprintf() used on statically allocated
buffers, or on dynamically allocated buffers with arbitrary constant sizes.

I used the lib/stringinfo.h functions when I was sure palloc() allocation
was the right thing to do and the code needed to construct a complete
string no matter how complex.

There were places where I just changed sprintf() to snprintf(), like in
some *BSD dl loading functions etc.

There were also places where I could identify a possible bug but
didn't know 'the' right way to fix it. As I said, I don't know the
codebase very well, so I really didn't know what auxiliary functions
there are to use. These parts are marked as FIXME and should be easily
identified by looking at the patch (link below - it is a big one).

There were also simple mistakes like in src/backend/tioga/tgRecipe.c
-   sprintf(qbuf, Q_LOOKUP_EDGES_IN_RECIPE, name);
-   pqres = PQexec(qbuf);
+   snprintf(qbuf, MAX_QBUF_LENGTH, Q_LOOKUP_EDGES_IN_RECIPE, name);
    pqres = PQexec(qbuf);
    if (*pqres == 'R' || *pqres == 'E')

Notice how previous PQexec() is removed. There were two of them.

Some of my fixes make the code a bit slower because of dynamically
allocated memory, but they also fix a lot of ptr+strlen(ptr)-style
performance problems. I didn't particularly try to fix those, but some of
them are corrected simply by using lib/stringinfo.h.

Please take a look at this patch, but since I have worked three long nights
on it, there are probably bugs. I tried compiling it with
configure --with-tcl --with-perl --with-python, and at least it
compiled for me :) But that's about all I can promise.

diffstat postgresql-7.2.2-sprintf.patch
~ contrib/cube/cube.c |   26 --
~ contrib/cube/cubeparse.y|   11
~ contrib/intarray/_int.c |   29 +-
~ contrib/rserv/rserv.c   |   30 +-
~ contrib/seg/segparse.y  |   18 -
~ contrib/spi/refint.c|   39 +--
~ contrib/spi/timetravel.c|   12
~ doc/src/sgml/spi.sgml   |2
~ src/backend/parser/analyze.c|2
~ src/backend/port/dynloader/freebsd.c|   10
~ src/backend/port/dynloader/netbsd.c |   11
~ src/backend/port/dynloader/nextstep.c   |2
~ src/backend/port/dynloader/openbsd.c|   10
~ src/backend/postmaster/postmaster.c |2
~ src/backend/storage/file/fd.c   |1
~ src/backend/storage/ipc/shmqueue.c  |1
~ src/backend/tioga/tgRecipe.c|   11
~ src/backend/utils/adt/ri_triggers.c |  312

~ src/bin/pg_dump/pg_dump.c   |   14 -
~ src/bin/pg_passwd/pg_passwd.c   |2
~ src/bin/psql/command.c  |2
~ src/bin/psql/describe.c |3
~ src/interfaces/ecpg/preproc/pgc.l   |8
~ src/interfaces/ecpg/preproc/preproc.y   |   24 -
~ src/interfaces/ecpg/preproc/type.c  |   16 -
~ src/interfaces/ecpg/preproc/variable.c  |   12
~ src/interfaces/libpgeasy/examples/pgwordcount.c |6
~ src/interfaces/libpgtcl/pgtclCmds.c |4
~ src/interfaces/libpq/fe-auth.c  |2
~ src/interfaces/odbc/connection.c|2
~ src/interfaces/odbc/dlg_specific.c  |5
~ src/interfaces/odbc/info.c  |   38 +-
~ src/interfaces/odbc/qresult.c   |4
~ src/interfaces/odbc/results.c   |8
~ src/interfaces/odbc/statement.c |6
~ 35 files changed, 365 insertions, 320 deletions

Patch is about 70k and downloadable from
http://suihkari.baana.suomi.net/postgresql/patches/postgresql-7.2.2-sprintf.patch

At least I didn't just bitch and moan about the bugs. ;)

- Jukka





Re: [HACKERS] possible vacuum improvement?

2002-09-03 Thread Shridhar Daithankar

On 3 Sep 2002 at 9:49, Tom Lane wrote:

 In terms of what might happen for 7.4 or beyond, what I'd personally
 like to see is some auto vacuum facility that would launch background
 vacuums automatically every so often.  This could (eventually) be made
 self-tuning so that it would vacuum heavily-updated tables more often
 than seldom-updated ones --- while not forgetting the
 every-billion-transactions rule...

OK, I plan to work on this. Here is my brief idea.

1) Create a table (vacuum_info) that stores table names and auto-vacuum
defaults. Since I am planning this for contrib, I would not touch pg_class.

The table will store:
- table names
- the number of transactions after which to trigger a VACUUM ANALYZE (default 1K)
- the number of transactions after which to trigger a full VACUUM (default 10K)

A trigger on pg_class, i.e. on table creation, should add a row to this table
as well.
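A minimal sketch of what such a table might look like (column names and defaults
here are illustrative only, not a final design):

    CREATE TABLE vacuum_info (
        relname            NAME PRIMARY KEY,      -- table being tracked
        analyze_threshold  INTEGER DEFAULT 1000,  -- transactions between VACUUM ANALYZE runs
        vacuum_threshold   INTEGER DEFAULT 10000, -- transactions between full VACUUM runs
        changes_seen       INTEGER DEFAULT 0      -- rough activity counter, reset after each run
    );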

2) Write a trigger on tables that updates statistics on table activity. I see:

- pg_stat_all_tables
- pg_stat_sys_tables
- pg_stat_user_tables

The relevant columns are:

- n_tup_ins
- n_tup_upd
- n_tup_del

Of course it will ignore its own updates and inserts, to avoid infinite loops.
This will update the pseudo-statistics in the vacuum_info table.

Another trigger on vacuum_info will trigger a vacuum if required. Ideally I
would write it in an external multithreaded library so the vacuum runs in the
background without blocking operations on the vacuum_info table.

I need to know the following:

1) Does this sound like a workable solution?

2) Is it as simple as I have put it here, or am I missing some vital components?

3) Is there some kind of rework involved?

4) Does the use of threads sound portable enough? I just need to start a thread
in the background and return. No locking or anything else is required. Will
there be any problem with Postgres invoking such an external trigger?

5) When I create a function in a .so, is it possible to invoke init/startup
routines? I could create and destroy the thread in those routines to avoid
thread-creation overhead. If Postgres is using dlopen, I can use _init and _fini.

6) Such a 'daemon' would be per-backend, if I am guessing correctly. Would
locking things in transactions on vacuum_info be sufficient?

I hope I am making a sensible proposal/design (my first attempt to contribute
to Postgres). Please let me know your comments.


Bye
 Shridhar

--
Blast medicine anyway!  We've learned to tie into every organ in the human body 
but one.  The brain!  The brain is what life is all about.  -- McCoy, The 
Menagerie, stardate 3012.4





Re: [HACKERS] Just testing tighter UCE controls ...

2002-09-03 Thread Vince Vielhaber

On Tue, 3 Sep 2002, Marc G. Fournier wrote:


 ignore if you see this ...

What if we don't see it?

Vince.
-- 
==
Vince Vielhaber -- KA8CSH    email: [EMAIL PROTECTED]    http://www.pop4.net
 56K Nationwide Dialup from $16.00/mo at Pop4 Networking
  http://www.camping-usa.com  http://www.cloudninegifts.com
   http://www.meanstreamradio.com   http://www.unknown-artists.com
==







Re: [HACKERS] possible vacuum improvement?

2002-09-03 Thread Tom Lane

Shridhar Daithankar [EMAIL PROTECTED] writes:
 1)Is this sounds like a workable solution?

Adding a trigger to every tuple update won't do at all.  Storing the
counts in a table won't do either, as the updates on that table will
generate a huge amount of wasted space themselves (not to mention
enough contention to destroy concurrent performance).

 4)Is use of threads sounds portable enough?

Threads are completely out of the question, at least if you have any
hope of seeing this code get accepted into the core distro.


For vacuum's purposes all that we really care to know about is the
number of obsoleted tuples in each table: committed deletes and updates,
and aborted inserts and updates all count.  Furthermore, we do not need
or want a 100% reliable solution; approximate counts would be plenty
good enough.

What I had in the back of my mind was: each backend counts attempted
insertions and deletions in its relcache entries (an update adds to both
counts).  At transaction commit or abort, we know which of these two
counts represents the number of dead tuples added to each relation, so
while we scan the relcache for post-xact cleanup (which we will be doing
anyway) we can transfer the correct count into the shared FSM entry for
the relation.  This gives us a reasonably accurate count in shared
memory of all the tuple obsoletions since bootup, at least for
heavily-used tables.  (The FSM might choose to forget about lightly-used
tables.)  The auto vacuumer could look at the FSM numbers to decide
which tables are highest priority to vacuum.

This scheme would lose the count info on a database restart, but that
doesn't bother me.  In typical scenarios the same tables will soon get
enough new counts to be highly ranked for vacuuming.  In any case the
auto vacuumer must be designed so that it vacuums every table every so
often anyhow, so the possibility of forgetting that there were some dead
tuples in a given table isn't catastrophic.

I do not think we need or want a control table for this; certainly I see
no need for per-table manual control over this process.  There should
probably be a few knobs in the form of GUC parameters so that the admin
can control how much overall work the auto-vacuumer does.  For instance
you'd probably like to turn it off when under peak interactive load.

regards, tom lane




[HACKERS] Inquiry From Form [pgsql]

2002-09-03 Thread control

Does anybody know who I can talk to about a virus/bug scanning engine for PostgreSQL 
tables/web portal stuff? I would like to start a proof-of-concept thread for this. 
Sounds strange, I know, but chill out; details to come.






Re: [HACKERS] possible vacuum improvement?

2002-09-03 Thread Rod Taylor

On Tue, 2002-09-03 at 11:01, Tom Lane wrote:
 Shridhar Daithankar [EMAIL PROTECTED] writes:
  1)Is this sounds like a workable solution?
 
 Adding a trigger to every tuple update won't do at all.  Storing the
 counts in a table won't do either, as the updates on that table will
 generate a huge amount of wasted space themselves (not to mention
 enough contention to destroy concurrent performance).
 
  4)Is use of threads sounds portable enough?
 
 Threads are completely out of the question, at least if you have any
 hope of seeing this code get accepted into the core distro.
 
 
 For vacuum's purposes all that we really care to know about is the
 number of obsoleted tuples in each table: committed deletes and updates,
 and aborted inserts and updates all count.  Furthermore, we do not need
 or want a 100% reliable solution; approximate counts would be plenty
 good enough.

It would be nice if it could track successful inserts, and fire off an
analyze run when they change more than 20% from what the stats say.





[HACKERS] contrib/cube update

2002-09-03 Thread Bruno Wolff III

What I changed is covered in the CHANGES file.
Note that this includes a bug fix I already submitted.
These changes are against 7.3 CVS and may not be backwards compatible with 7.2.
They do not include a bug fix for a problem I reported with cube_yyerror.
A context diff is attached.


diff -c -r -N --exclude=CVS cube/CHANGES newcube/CHANGES
*** cube/CHANGESWed Dec 31 18:00:00 1969
--- newcube/CHANGES Wed Aug 28 09:47:20 2002
***
*** 0 
--- 1,95 
+ Changes that were made in August 2002.
+ 
+ Note that this was based on a 7.3 development version and changes may not
+ directly work with earlier versions.
+ 
+ I fixed a bug in cubescan.pl that prevented signed numbers with no digits
+ before a decimal point from being accepted. This was submitted as a separate
+ patch and may already be applied.
+ 
+ I reported but did not fix a potential buffer overrun problem in cube_yyerror
+ in cubeparse.y.
+ 
+ cube_inter should really return NULL if the two cubes don't overlap. However
+ this requires changing to the new calling sequence and I don't know enough
+ about how to do it to make the change.
+ 
+ I changed all floats to doubles except for g_cube_penalty which I don't
+ think can be changed to return double. This might cause the penalty to
+ overflow sooner than one might expect, but overflow could have happened
+ even with floats.
+ 
+ I changed the output format (in cube_out) to use %.16g instead of %g, since the
+ default is only 6 digits of precision.
+ 
+ I changed all of the functions declared with (isstrict) to use the current
+ method of declaring this.
+ 
+ I changed all of the externally visible functions to be immutable which
+ they are. I don't think this matters for the gist functions and didn't
+ try to declare them immutable in case there was something tricky about them
+ that I don't understand.
+ 
+ I changed the regression tests to use some larger exponents to test output
+ in exponential form. 1e7 was too small for this.
+ 
+ I added some regression tests to check for 16 digits of precision. This
+ may or may not be a good idea.
+ 
+ I got rid of the swap_corners function. It created scratch boxes that
+ were iterated through and deleted. This is slower than just getting the
+ larger or smaller coordinate as needed, since swap_corners was doing the
+ same thing with the overhead of a function call and memory allocation.
+ 
+ I added memset calls to zero out newly allocated NDBOXes as the documentation
+ on functions indicates should be done.
+ 
+ I got rid of a call to cube_same in cube_lt and cube_gt since the test
+ was redundant with other checks being made. The call to cube_same would
+ only be faster if most of the time you were comparing equivalent cubes.
+ 
+ In cube_lt and cube_gt, the second (UR) for loop for comparing
+ extra coordinates to 0 had the wrong range.
+ 
+ Note that the cube_distance function wasn't mentioned in the README.cube file.
+ 
+ I added regression tests for the cube_distance function.
+ 
+ I added the following new functions:
+ cube
+ cube_dim
+ cube_ll_coord
+ cube_ur_coord
+ cube_is_point
+ cube_enlarge
+ 
+ cube takes text input and returns a cube. This is useful for making cubes
+ from computed strings.
+ 
+ cube_dim returns the number of dimensions stored in the data structure
+ for a cube. This is useful for constraints on the dimensions of a cube.
+ 
+ cube_ll_coord returns the nth coordinate value for the lower left corner
+ of a cube. This is useful for doing coordinate transformations.
+ 
+ cube_ur_coord returns the nth coordinate value for the upper right corner
+ of a cube. This is useful for doing coordinate transformations.
+ 
+ cube_is_point returns true if a cube is also a point. This is true when the
+ two defining corners are the same.
+ 
+ cube_enlarge increases the size of a cube by a specified radius in at least
+ n dimensions. If the radius is negative the box is shrunk instead. This
+ is useful for creating bounding boxes around a point for searching for
+ nearby points. All defined dimensions are changed by the radius. If n
+ is greater than the number of defined dimensions and the cube is being
+ increased (r = 0) then 0 is used as the base for the extra coordinates.
+ LL coordinates are decreased by r and UR coordinates are increased by r. If a
+ LL coordinate is increased to larger than the corresponding UR coordinate
+ (this can only happen when r  0) than both coordinates are set to their
+ average.
+ 
+ I added regression tests for the new functions.
+ 
+ I added documentation for cube_distance and the new functions to README.cube
+ as well as making a few other minor changes.
diff -c -r -N --exclude=CVS cube/README.cube newcube/README.cube
*** cube/README.cubeMon Dec 11 14:39:14 2000
--- newcube/README.cube Wed Aug 28 08:55:47 2002
***
*** 99,105 
  
  n [0-9]+
  integer   [+-]?{n}
! real  [+-]?({n}\.{n}?)|(\.{n})
  FLOAT 

Re: [HACKERS] possible vacuum improvement?

2002-09-03 Thread Tom Lane

Rod Taylor [EMAIL PROTECTED] writes:
 On Tue, 2002-09-03 at 11:01, Tom Lane wrote:
 For vacuum's purposes all that we really care to know about is the
 number of obsoleted tuples in each table: committed deletes and updates,
 and aborted inserts and updates all count.  Furthermore, we do not need
 or want a 100% reliable solution; approximate counts would be plenty
 good enough.

 It would be nice if it could track successful inserts, and fire off an
 analyze run when it changes more than 20% from what stats says.

That's a thought too.  I was only thinking of space reclamation, but
it'd be easy to extend the scheme to keep track of the number of tuples
successfully inserted, changed, or deleted (all three events would
affect stats) as well as the number of dead tuples.  Then you could fire
auto-analyze every so often, along with auto-vacuum.

Auto-analyze might need more tuning controls than auto-vacuum, though.
Vacuum doesn't have any question about when it needs to run: a dead
tuple is a dead tuple.  But for analyze you might have plenty of update
traffic and yet no meaningful change in the interesting stats for a
table.  An admin who knows the behavior of his tables would like to be
able to configure the frequency of analyze runs, rather than trust to
a necessarily-not-too-bright auto-analyze routine.  (Not sure whether
this is important enough to warrant the complications of making it
configurable though.  You can always do it the old-fashioned way with
cron scripts if you want that kind of control, I suppose.)

regards, tom lane




Re: [HACKERS] possible vacuum improvement?

2002-09-03 Thread Mario Weilguni

 I do not think we need or want a control table for this; certainly I see
 no need for per-table manual control over this process.  There should
 probably be a few knobs in the form of GUC parameters so that the admin
 can control how much overall work the auto-vacuumer does.  For instance
 you'd probably like to turn it off when under peak interactive load.

If (auto)vacuum is clever enough to check that some tables do not need vacuuming,
there's really no need for that. That brings me to another point: couldn't the
statistics collector be used for this?
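For reference, the collector already exposes per-table activity counters that such
a check could read (assuming row-level statistics are enabled), e.g.:

    -- per-table write activity as recorded by the statistics collector
    SELECT relname, n_tup_ins, n_tup_upd, n_tup_del
    FROM   pg_stat_user_tables
    ORDER  BY n_tup_upd + n_tup_del DESC;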

For my database I wrote a statistics display program for web access, and all
the info autovacuum would need is there:
http://mw.sime.com/pgsql.htm

Which brings me to yet another point: is there interest in this web statistics
frontend, maybe for /contrib? I found it extremely useful because it showed up
the weak points in my applications.

Best regards,
Mario Weilguni




Re: [HACKERS] BYTEA with ecpg

2002-09-03 Thread Michael Meskes

On Tue, Aug 27, 2002 at 05:59:36AM +, Mike Sacauskis wrote:
 How do I do an insert and retreival for bytea field using embedded SQL? 
 Anybody have a code example?  I'm trying to port from informix to 
 postgresQL.

Actually I never tried. Doesn't it work with using an array of char as C
variable?

Michael
-- 
Michael Meskes
[EMAIL PROTECTED]
Go SF 49ers! Go Rhein Fire!
Use Debian GNU/Linux! Use PostgreSQL!




Re: [HACKERS] serial type as foreign key referential integrity

2002-09-03 Thread Stephan Szabo

On 31 Aug 2002, Zhicong Leo Liang wrote:

 Hi all,
   Just briefly describe my problem.
   I have two tables.
 create table A(
a1 serial primary key,
a2 varchars(10)
 );
 create table B(
 b1 integer primary key,
 b2 Integer,
 foreign key(b2) references a(a1)
 )
 insert into A values('123');
 select a1 from A where a2='123'
 --
 a1
 --
 1
 --

Did you actually do that sequence and get that result?
Because you shouldn't.  That should have put a 123 in a1 and
a NULL in a2.
Perhaps you meant insert into a(a2) values('123');

 insert into B values (1,1);
 ERROR!! referential integrity violation - key referenced from B not found in A.

In any case, doing the above (with the correction) plus the insert
works fine for me.  We'll need more info.






Re: [HACKERS] Memory management question

2002-09-03 Thread Serguei A. Mokhov


Maybe when this thread is over, some parts of it can be
added to the dev. FAQ?

-s

On Tue, 3 Sep 2002, Karel Zak wrote:

 Date: Tue, 3 Sep 2002 13:52:09 +0200
 From: Karel Zak [EMAIL PROTECTED]
 To: Nigel J. Andrews [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Subject: Re: [HACKERS] Memory management question

 On Tue, Sep 03, 2002 at 12:28:37PM +0100, Nigel J. Andrews wrote:
 
 
  It's probably a pretty basic question explained in some document I haven't seen
  but...if I do something like a CreateTupleDescCopy() how do I know my memory
  context owns everything allocated without following the code all the way
  through until it returns to me?

  If the code doesn't call MemoryContextSwitchTo(), everything is allocated in
 the current memory context. You can check whether CurrentMemoryContext is the
 same before and after the call you care about -- but that check proves
 nothing, because the code could switch to another context and switch back to
 yours before returning. IMHO that is not a reliable way to check it.
 (Ok, maybe compare the sizes of all contexts before/after the call...)

  Suggestion: add a counter to the memory management code that tracks the
  number of MemoryContextSwitchTo() calls. IMHO it could be compiled in
  only if MEMORY_CONTEXT_CHECKING is defined.

  But I think there are not too many places that switch between
 contexts, and all of them are well commented (I hope, I wish :-)

 Karel


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Just testing tighgter UCE controls ...

2002-09-03 Thread Marc G. Fournier

On Tue, 3 Sep 2002, Vince Vielhaber wrote:

 On Tue, 3 Sep 2002, Marc G. Fournier wrote:

 
  ignore if you see this ...

 What if we don't see it?

let me know? :)



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] Wanted: pgdiff ($$$)

2002-09-03 Thread Greg Stark

[EMAIL PROTECTED] (ad wolf) writes:

 We're offering a small reward for a PG hacker that can code up a
 mysqldiff-like utility for PG.   For those unfamiliar with mysqldiff:
 http://adamspiers.org/computing/mysqldiff/

You might want to check out the perl model Alzabo, I think it's capable of
doing this.

-- 
greg


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Wanted: pgdiff ($$$)

2002-09-03 Thread Kaare Rasmussen

 You might want to check out the perl model Alzabo, I think it's capable of
 doing this.

It does not (yet) support foreign keys, alas.

But if anybody likes to code this, maybe they could work together with the 
Alzabo developer.

-- 
Kaare Rasmussen--Linux, spil,--Tlf:3816 2582
Kaki Datatshirts, merchandize  Fax:3816 2501
Howitzvej 75   Åben 12.00-18.00Email: [EMAIL PROTECTED]
2000 FrederiksbergLørdag 11.00-17.00   Web:  www.suse.dk

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] I am done

2002-09-03 Thread Kaare Rasmussen

 Are you guys competing for the modesty award?  ;-)
 I heard Stallman is trying to win it this year.  :-)

Hah, that's a good one.

For doing what - telling you not to call it GNU/Linux, only Linux/GNU ?
:-)

-- 
Kaare Rasmussen--Linux, spil,--Tlf:3816 2582
Kaki Datatshirts, merchandize  Fax:3816 2501
Howitzvej 75   Åben 12.00-18.00Email: [EMAIL PROTECTED]
2000 FrederiksbergLørdag 11.00-17.00   Web:  www.suse.dk

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] Wanted: pgdiff ($$$)

2002-09-03 Thread Justin Clift

Hi all,

Just a link to this from the front page of the techdocs.postgresql.org
site.

Hope it helps.

:-)

Regards and best wishes,

Justin Clift


Greg Stark wrote:
 
 [EMAIL PROTECTED] (ad wolf) writes:
 
  We're offering a small reward for a PG hacker that can code up a
  mysqldiff-like utility for PG.   For those unfamiliar with mysqldiff:
  http://adamspiers.org/computing/mysqldiff/
 
 You might want to check out the perl model Alzabo, I think it's capable of
 doing this.
 
 --
 greg
 
 ---(end of broadcast)---
 TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly

-- 
My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there.
   - Indira Gandhi

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] I am done

2002-09-03 Thread Alexandre Dulaunoy

On Tue, 3 Sep 2002, Kaare Rasmussen wrote:

  Are you guys competing for the modesty award?  ;-)
  I heard Stallman is trying to win it this year.  :-)
 
 Hah, that's a good one.
 
 For doing what - telling you not to call it GNU/Linux, only Linux/GNU ?
 :-)

  SELECT FreeProject FROM History ORDER BY FreeProject

  Seems quite logical : GNU/Linux ;-)

-- 
  Alexandre Dulaunoy -- http://www.foo.be/
  3B12 DCC2 82FA 2931 2F5B 709A 09E2 CD49 44E6 CBCD  ---   AD993-6BONE
People who fight may lose.People who do not fight have already lost.
Bertolt Brecht






---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Inheritance

2002-09-03 Thread Bruce Momjian

Peter Gulutzan wrote:
 The August draft of the SQL:200n standard (9075-2 Foundation) says in
 Section 4.17.2: Every table constraint specified for base table T is
 implicitly a constraint on every subtable of T, by virtue of the fact
 that every row in a subtable is considered to have a corresponding
 superrow in every one of its supertables.

Yep, this is where we are stuck;  having an index span multiple tables
in some way.
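
A minimal sketch of where it bites: the parent's primary-key index knows
nothing about rows stored in a child table, so the constraint the draft calls
implicit is not actually enforced across the hierarchy:

CREATE TABLE parent (id integer PRIMARY KEY);
CREATE TABLE child (note text) INHERITS (parent);

INSERT INTO parent VALUES (1);
INSERT INTO child VALUES (1, 'duplicate');  -- accepted: no index spans both tables

SELECT id FROM parent;   -- returns two rows with id = 1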

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] Docs for v7.3 ...

2002-09-03 Thread Peter Eisentraut

Marc G. Fournier writes:

 Just a quick one before I package up the wrong thing ... where should I be
 pulling docs from? :)

cd doc/src
gmake postgres.tar.gz

You can take the man pages from an old release until we figure them out.

(Any news on repackaging 7.2.2?)

-- 
Peter Eisentraut   [EMAIL PROTECTED]


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[HACKERS] 7.2.2 bug?

2002-09-03 Thread Rod Taylor

Seems it wants to run a redo entry that doesn't exist.

Not a big deal, as it's a test environment only.  It was recently
upgraded from 7.2.1 to 7.2.2.  I'm wondering whether the person who did
the upgrade shut down the daemon before installing.



FATAL 1:  The database system is starting up
FATAL 1:  The database system is starting up
DEBUG:  database system is ready
DEBUG:  server process (pid 9084) was terminated by signal 10
DEBUG:  terminating any other active server processes
DEBUG:  all server processes terminated; reinitializing shared memory
and semaphores
DEBUG:  database system was interrupted at 2002-09-03 13:54:33 EDT
DEBUG:  checkpoint record is at 0/1E1F1D90
DEBUG:  redo record is at 0/1E1F1D90; undo record is at 0/0; shutdown
TRUE
DEBUG:  next transaction id: 320415; next oid: 488052
DEBUG:  database system was not properly shut down; automatic recovery
in progress
DEBUG:  ReadRecord: record with zero length at 0/1E1F1DD0
DEBUG:  redo is not required
FATAL 1:  The database system is starting up
FATAL 1:  The database system is starting up
DEBUG:  database system is ready
DEBUG:  server process (pid 9097) was terminated by signal 10
DEBUG:  terminating any other active server processes
DEBUG:  all server processes terminated; reinitializing shared memory
and semaphores
DEBUG:  database system was interrupted at 2002-09-03 13:54:37 EDT
DEBUG:  checkpoint record is at 0/1E1F1DD0
DEBUG:  redo record is at 0/1E1F1DD0; undo record is at 0/0; shutdown
TRUE
DEBUG:  next transaction id: 320415; next oid: 488052
DEBUG:  database system was not properly shut down; automatic recovery
in progress
DEBUG:  ReadRecord: record with zero length at 0/1E1F1E10
DEBUG:  redo is not required
FATAL 1:  The database system is starting up





---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] possible vacuum improvement?

2002-09-03 Thread Barry Lind

Wouldn't it make sense to keep the autovacuum information in a structure
like the FSM, a Dirty Space Map (DSM)?  As blocks are dirtied by
transactions they can be added to the DSM, and vacuum can then give
priority to processing those blocks only.  The reason I suggest this is
that in many usage scenarios it will be more efficient to vacuum only
part of a table rather than the entire table.  Given a large table that
grows over time, the older data tends to become more static as it ages
(a lot of financial data is like this: when a row is first created it may
see many updates early in its life, and may even be deleted, but once the
data gets older -- say a year old -- it is unlikely to change).  This
implies that over time the first blocks of a table will change less and
most activity will occur towards the end of the table.  If you have a
multi-gigabyte table where most of the activity occurs near the end, a
lot of CPU cycles can be wasted going over the mostly static beginning
of the table.

thanks,
--Barry

Tom Lane wrote:

Shridhar Daithankar [EMAIL PROTECTED] writes:
  

1) Does this sound like a workable solution?



Adding a trigger to every tuple update won't do at all.  Storing the
counts in a table won't do either, as the updates on that table will
generate a huge amount of wasted space themselves (not to mention
enough contention to destroy concurrent performance).

  

4) Is the use of threads portable enough?



Threads are completely out of the question, at least if you have any
hope of seeing this code get accepted into the core distro.


For vacuum's purposes all that we really care to know about is the
number of obsoleted tuples in each table: committed deletes and updates,
and aborted inserts and updates all count.  Furthermore, we do not need
or want a 100% reliable solution; approximate counts would be plenty
good enough.

What I had in the back of my mind was: each backend counts attempted
insertions and deletions in its relcache entries (an update adds to both
counts).  At transaction commit or abort, we know which of these two
counts represents the number of dead tuples added to each relation, so
while we scan the relcache for post-xact cleanup (which we will be doing
anyway) we can transfer the correct count into the shared FSM entry for
the relation.  This gives us a reasonably accurate count in shared
memory of all the tuple obsoletions since bootup, at least for
heavily-used tables.  (The FSM might choose to forget about lightly-used
tables.)  The auto vacuumer could look at the FSM numbers to decide
which tables are highest priority to vacuum.

This scheme would lose the count info on a database restart, but that
doesn't bother me.  In typical scenarios the same tables will soon get
enough new counts to be highly ranked for vacuuming.  In any case the
auto vacuumer must be designed so that it vacuums every table every so
often anyhow, so the possibility of forgetting that there were some dead
tuples in a given table isn't catastrophic.

I do not think we need or want a control table for this; certainly I see
no need for per-table manual control over this process.  There should
probably be a few knobs in the form of GUC parameters so that the admin
can control how much overall work the auto-vacuumer does.  For instance
you'd probably like to turn it off when under peak interactive load.

   regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]

  




---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] CREATE CAST requires immutable cast function?

2002-09-03 Thread Peter Eisentraut

Tom Lane writes:

 Perhaps there's a case for prohibiting volatile casts (as opposed to
 stable ones), but I don't really see it.  I'd prefer to just remove
 this restriction.  Comments?

I'm not wedded to it, I just modelled it after the SQL standard, but
evidently the volatility levels are different in detail.  I would disallow
volatile casts in any case.  There ought to be a minimal behavioral
contract between creators and users of types.

-- 
Peter Eisentraut   [EMAIL PROTECTED]


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Docs for v7.3 ...

2002-09-03 Thread Bruce Momjian

Peter Eisentraut wrote:
 Marc G. Fournier writes:
 
  Just a quick one before I package up the wrong thing ... where should I be
  pulling docs from? :)
 
 cd doc/src
 gmake postgres.tar.gz
 
 You can take the man pages from an old release until we figure them out.

Whoa.  Better to ship no manual pages than to ship the ones from
7.2.2.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] CREATE CAST requires immutable cast function?

2002-09-03 Thread Tom Lane

Peter Eisentraut [EMAIL PROTECTED] writes:
 I'm not wedded to it, I just modelled it after the SQL standard, but
 evidently the volatility levels are different in detail.  I would disallow
 volatile casts in any case.  There ought to be a minimal behavioral
 contract between creators and users of types.

Shrug ... ISTM the behavior of a type is whatever the type creator says
it should be.  Whether a volatile cast is a good idea is dubious
(I can't think of any good examples of one offhand) but I don't see the
argument for having the system restrict the type creator's choices.

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] possible vacuum improvement?

2002-09-03 Thread Tom Lane

Mario Weilguni [EMAIL PROTECTED] writes:
 That brings me to another point: can't the
 statistics collector be used for that?

Hmm, that would be a different way of attacking the problem.  Not sure
offhand which is better, but it'd surely be worth considering both.

Note that collecting of dead-tuple counts requires input from aborted
transactions as well as successful ones.  I don't recall whether the
stats collector currently collects anything from aborted xacts; that
might or might not be a sticky point.

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] 7.2.2 bug?

2002-09-03 Thread Tom Lane

Rod Taylor [EMAIL PROTECTED] writes:
 DEBUG:  server process (pid 9097) was terminated by signal 10

Could we have a backtrace from that core dump?

AFAICT it's getting through the WAL redo just fine, so the problem
is (probably) not what you think.

regards, tom lane

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] 7.2.2 bug?

2002-09-03 Thread Rod Taylor

On Tue, 2002-09-03 at 16:42, Tom Lane wrote:
 Rod Taylor [EMAIL PROTECTED] writes:
  DEBUG:  server process (pid 9097) was terminated by signal 10
 
 Could we have a backtrace from that core dump?
 
 AFAICT it's getting through the WAL redo just fine, so the problem
 is (probably) not what you think.

Took me a while, but I eventually figured out that they changed the
NAMEDATALEN in the old version, and didn't match it in the new one.

So the error is exactly what is expected -- memory allocation errors.



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] 7.2.2 bug?

2002-09-03 Thread Tom Lane

Rod Taylor [EMAIL PROTECTED] writes:
 Took me a while, but I eventually figured out that they changed the
 NAMEDATALEN in the old version, and didn't match it in the new one.

Grumble.  It occurred to us to store NAMEDATALEN in pg_control in 7.3,
but 7.2 doesn't have that defense.  Sorry about that...

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] source code indexer

2002-09-03 Thread Laurette Cisneros

Thanks to everyone who made suggestions.
 
I have found Source Navigator to be very close to what I was looking for,
and very useful!
 
Thanks again,
 
L.
On Fri, 30 Aug 2002, Manfred Koizar wrote:

 On Fri, 30 Aug 2002 11:57:17 -0700 (PDT), Laurette Cisneros
 [EMAIL PROTECTED] wrote:
 I was wondering if you could recomend a good source code db/indexer that
 could be used to search through the postgresql code?
 
 I use Source Navigator v5.1 http://sourceforge.net/projects/sourcenav/
 
 Servus
  Manfred
 

-- 
Laurette Cisneros
The Database Group
(510) 420-3137
NextBus Information Systems, Inc.
www.nextbus.com
--
A wiki we will go...


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] possible vacuum improvement?

2002-09-03 Thread Christopher Kings-Lynne

 That brings me to another point: is there interest in this
 web statistics frontend, maybe for /contrib? I found it extremely useful
 because it showed me the weak points in my applications.

Why not create a project here for it:  http://gborg.postgresql.org/

Chris


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] I am done

2002-09-03 Thread Alexandre Dulaunoy

On Tue, 3 Sep 2002, Kaare Rasmussen wrote:

  Are you guys competing for the modesty award?  ;-)
  I heard Stallman is trying to win it this year.  :-)
 
 Hah, that's a good one.
 
 For doing what - telling you not to call it GNU/Linux, only Linux/GNU ?
 :-)

  SELECT FreeProject FROM History ORDER BY FreeProject

  Seems quite logical : GNU/Linux ;-)

-- 
  Alexandre Dulaunoy -- http://www.foo.be/
  3B12 DCC2 82FA 2931 2F5B 709A 09E2 CD49 44E6 CBCD  ---   AD993-6BONE
People who fight may lose.People who do not fight have already lost.
Bertolt Brecht





---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



[HACKERS] webdav interface to pgsql

2002-09-03 Thread joel w. reed

I've hacked out a WebDAV (Apache 2) interface to pgsql. I'd love
to replicate the browsing/managing interfaces of the big-name
RDBMSs using WebDAV for pgsql, but I'm not sure how far I'll
get. Right now the 0.1.15 release supports a fair number of
browsing options (tables, columns, functions, sequences, etc.).

It's only ready for hackers, I think, so I'm posting here - hope
that's ok.

you can see some screenshots at:
http://home.attbi.com/~joelwreed/

and download the source from the above URL or this one:
http://sourceforge.net/project/showfiles.php?group_id=60618

jr

-- 

Joel W. Reed                              412-257-3881
All the simple programs have been written.--






[HACKERS] FW: [GWAVA:fku1fb18] Source block message notification

2002-09-03 Thread Christopher Kings-Lynne

Does anyone else get this rubbish when they post to -php ?

Our domain isn't on any blacklists AFAIK...

Chris

 -Original Message-
 From: GWAVA [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, 4 September 2002 11:24 AM
 To: [EMAIL PROTECTED]
 Subject: [GWAVA:fku1fb18] Source block message notification


 Den postmeddelelse du prøvede at sende til [No To Addresses] blev
 ikke afleveret.
 Meddelelsen kom fra en adresse som ikke tillades i postsystemet
 akf.dk, og blev derfor afvist.

 Kontakt venligst din systemadministrator for at få flere
 oplysninger om problemet.

 Information om den afviste postmeddelelse:

 FRA: [EMAIL PROTECTED]
 TIL: [No To Addresses]
 Emne: Re: [PHP] fastest way to retrieve data

 Vedhæftet fil:

 The message you tried to send to [No To Addresses] was not delivered.
 The message was sent from an address which is not permitted in
 the akf.dk mail system and was rejected.

 Please contact your system administrator for more information.

 Information about the problem message:

 FROM: [EMAIL PROTECTED]
 TO: [No To Addresses]
 Subject: Re: [PHP] fastest way to retrieve data

 Attachment Name:



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] FW: [GWAVA:fku1fb18] Source block message notification

2002-09-03 Thread Bruce Momjian


Yes, we have told Marc to remove it several times.  He may be having
trouble figuring out which email address is generating it.

---

Christopher Kings-Lynne wrote:
 Does anyone else get this rubbish when they post to -php ?
 
 Our domain isn't on any blacklists AFAIK...
 
 Chris
 
  -Original Message-
  From: GWAVA [mailto:[EMAIL PROTECTED]]
  Sent: Wednesday, 4 September 2002 11:24 AM
  To: [EMAIL PROTECTED]
  Subject: [GWAVA:fku1fb18] Source block message notification
 
 
  Den postmeddelelse du prøvede at sende til [No To Addresses] blev
  ikke afleveret.
  Meddelelsen kom fra en adresse som ikke tillades i postsystemet
  akf.dk, og blev derfor afvist.
 
  Kontakt venligst din systemadministrator for at få flere
  oplysninger om problemet.
 
  Information om den afviste postmeddelelse:
 
  FRA: [EMAIL PROTECTED]
  TIL: [No To Addresses]
  Emne: Re: [PHP] fastest way to retrieve data
 
  Vedhæftet fil:
 
  The message you tried to send to [No To Addresses] was not delivered.
  The message was sent from an address which is not permitted in
  the akf.dk mail system and was rejected.
 
  Please contact your system administrator for more information.
 
  Information about the problem message:
 
  FROM: [EMAIL PROTECTED]
  TO: [No To Addresses]
  Subject: Re: [PHP] fastest way to retrieve data
 
  Attachment Name:
 
 
 
 ---(end of broadcast)---
 TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] possible vacuum improvement?

2002-09-03 Thread Matthew T. OConnor

On Tuesday 03 September 2002 16:24, Tom Lane wrote:
 Mario Weilguni [EMAIL PROTECTED] writes:
  That brings me to another point: can't the
  statistics collector be used for that?

 Hmm, that would be a different way of attacking the problem.  Not sure
 offhand which is better, but it'd surely be worth considering both.

 Note that collecting of dead-tuple counts requires input from aborted
 transactions as well as successful ones.  I don't recall whether the
 stats collector currently collects anything from aborted xacts; that
 might or might not be a sticky point.

I have been doing some poking around with this item; I was planning on
using the stats collector to do intelligent auto-vacuuming, adding some new
columns that account for activity that has taken place since the last
vacuum.  The current stats collector shows n_tup_ins, n_tup_upd and
n_tup_del for any given rel, but those numbers have nothing to do with what
has happened since the last vacuum, hence nothing to do with the current
status or the need for a vacuum.

I hope to have something worth showing soon (a week or two).  I know that is
a bit slow, but I am new to pg internals, and since we are in beta I know
this is a 7.4 item.

FYI, the current stats collector does keep track of inserts, updates and 
deletes that are part of a rolled back transaction, as shown in the example 
below:

matthew=# create TABLE foo (id serial, name text);
NOTICE:  CREATE TABLE will create implicit sequence 'foo_id_seq' for SERIAL 
column 'foo.id'
CREATE TABLE
matthew=# select relname,n_tup_ins, n_tup_upd, n_tup_del from 
pg_stat_all_tables where relname = 'foo';
 relname | n_tup_ins | n_tup_upd | n_tup_del
---------+-----------+-----------+-----------
 foo     |         0 |         0 |         0
(1 row)

matthew=# INSERT INTO foo (name) VALUES ('asdf');
INSERT 17075 1
matthew=# UPDATE foo SET name='qwert';
UPDATE 1
matthew=# DELETE FROM foo;
DELETE 1
matthew=# select relname,n_tup_ins, n_tup_upd, n_tup_del from 
pg_stat_all_tables where relname = 'foo';
 relname | n_tup_ins | n_tup_upd | n_tup_del
---------+-----------+-----------+-----------
 foo     |         1 |         1 |         1
(1 row)

matthew=# begin;
BEGIN
matthew=# INSERT INTO foo (name) VALUES ('asdf');
INSERT 17076 1
matthew=# UPDATE foo SET name='qwert';
UPDATE 1
matthew=# DELETE FROM foo;
DELETE 1
matthew=# rollback;
ROLLBACK
matthew=# select relname,n_tup_ins, n_tup_upd, n_tup_del from 
pg_stat_all_tables where relname = 'foo';
 relname | n_tup_ins | n_tup_upd | n_tup_del
---------+-----------+-----------+-----------
 foo     |         2 |         2 |         2
(1 row)



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] possible vacuum improvement?

2002-09-03 Thread Christopher Kings-Lynne

 I have been doing some poking around with this item, and I was planning on
 using the stats collector to do intelligent auto-vacuuming.  I was planning
 on adding some new columns that account for activity that has taken place
 since the last vacuum.  The current stats collector shows n_tup_ins,
 n_tup_upd and n_tup_del for any given rel, but those numbers have nothing to
 do with what has happened since the last vacuum, hence nothing to do with
 current status or need for vacuum.

Postgres 7.3-beta has a new function 'pg_stat_reset()' that you can call to reset the 
stats collector after a vacuum...
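
In other words, something along these lines (sketch only):

VACUUM ANALYZE;
SELECT pg_stat_reset();   -- counters now measure activity since this vacuum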

Chris


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] possible vacuum improvement?

2002-09-03 Thread Matthew T. OConnor

On Tuesday 03 September 2002 23:47, Christopher Kings-Lynne wrote:
  I have been doing some poking around with this item, and I was
  planning on
  using the stats collector to do intelligent auto-vacuuming.  I
  was planning
  on adding some new columns that account for activity that has taken place
  since the last vacuum.  The current stats collector shows n_tup_ins,
  n_tup_upd and n_tup_del for any given rel, but those numbers have
  nothing to
  do with what has happened since the last vacuum, hence nothing to do with
  current status or need for vacuum.

 Postgres 7.3-beta has a new function 'pg_stat_reset()' that you can call to
 reset the stats collector after a vacuum...

Just my opinion here, but I don't think having autovac constantly resetting
the stats is a good idea: it means that you lose the current stats
functionality when using autovacuum, and it also implies that the stats mean
different things depending on whether autovac is turned on or off.

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] possible vacuum improvement?

2002-09-03 Thread Mario Weilguni

On Wednesday, 4 September 2002 at 05:44, Matthew T. OConnor wrote:
 I have been doing some poking around with this item, and I was planning on
 using the stats collector to do intelligent auto-vacuuming.  I was
 planning on adding some new columns that account for activity that has
 taken place since the last vacuum.  The current stats collector shows
 n_tup_ins, n_tup_upd and n_tup_del for any given rel, but those numbers
 have nothing to do with what has happened since the last vacuum, hence
 nothing to do with current status or need for vacuum.

This should be no real problem; extending the pg_stat_all_tables view with
three fields (av_n_tup_ins, av_n_tup_upd, av_n_tup_del) should do it IMO.
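
With those (still hypothetical) columns in place, the auto-vacuumer's
decision could be little more than a query like this sketch:

-- av_* columns do not exist yet; they are the proposed "activity since
-- last vacuum" counters, and the threshold would be a GUC setting
SELECT relname
  FROM pg_stat_all_tables
 WHERE av_n_tup_upd + av_n_tup_del > 1000;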

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] webdav interface to pgsql

2002-09-03 Thread Christopher Kings-Lynne

Cool.  Is it worth putting it on greatbridge?  gborg.postgresql.org

With the new tightening of the postgres source tree, it's unlikely this
would make it into our CVS, methinks; however, people are working on setting
up greatbridge as a one-stop shop for postgres add-ons...

Chris

 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED]]On Behalf Of joel w. reed
 Sent: Wednesday, 4 September 2002 11:12 AM
 To: [EMAIL PROTECTED]
 Subject: [HACKERS] webdav interface to pgsql


 i've hacked out a webdav (apache2) interface to pgsql. i'd love
 to replicate the browsing/managing interfaces of the bigname
 RDBMS's using webdav for pgsql but i'm not sure how far i'll
 get. right now the 0.1.15 release supports a fair number of
 browsing options (tables, columns, functions, sequences, etc)

 its only ready for hackers i think so i'm posting here - hope
 that's ok.

 you can see some screenshots at:
   http://home.attbi.com/~joelwreed/

 and download the source from the above URL or this one:
   http://sourceforge.net/project/showfiles.php?group_id=60618

 jr

 --
 
 Joel W. Reed                              412-257-3881
 All the simple programs have been written.--




---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] findoidjoins

2002-09-03 Thread Joe Conway

Tom Lane wrote:
 Alvaro Herrera [EMAIL PROTECTED] writes:
Christopher Kings-Lynne dijo: 
 findoidjoins doesn't seem to compile:
Seems related to the ripping of libpgeasy out of the main
distribution...
 
 I believe it's been broken for some time (disremember just why, maybe a
 schema issue?).  I had a TODO item to resurrect it so that we could
 update the oidjoins regression test, which is sadly out of date for
 the current system catalogs.  If anyone wants to work on that ...

I'm not sure I interpreted the intent of findoidjoins just right, but 
here it is updated for schemas, new reg* types, using SPI instead of 
libpgeasy, and returning the results as a table function. Any 
corrections/comments? If there is any interest, I'll polish this up a 
bit more and submit to patches. Just let me know.

(Should qualify as a fix, right?)

Thanks,

Joe




findoidjoins-fix.1.patch.gz
Description: GNU Zip compressed data


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] findoidjoins

2002-09-03 Thread Bruce Momjian


Your patch has been added to the PostgreSQL unapplied patches list at:

http://207.106.42.251/cgi-bin/pgpatches

I will try to apply it within the next 48 hours.

---


Joe Conway wrote:
 Tom Lane wrote:
  Alvaro Herrera [EMAIL PROTECTED] writes:
 Christopher Kings-Lynne dijo: 
  findoidjoins doesn't seem to compile:
 Seems related to the ripping of libpgeasy out of the main
 distribution...
  
  I believe it's been broken for some time (disremember just why, maybe a
  schema issue?).  I had a TODO item to resurrect it so that we could
  update the oidjoins regression test, which is sadly out of date for
  the current system catalogs.  If anyone wants to work on that ...
 
 I'm not sure I interpreted the intent of findoidjoins just right, but 
 here it is updated for schemas, new reg* types, using SPI instead of 
 libpgeasy, and returning the results as a table function. Any 
 corrections/comments? If there is any interest, I'll polish this up a 
 bit more and submit to patches. Just let me know.
 
 (Should qualify as a fix, right?)
 
 Thanks,
 
 Joe
 

[ application/x-gzip is not supported, skipping... ]

 
 ---(end of broadcast)---
 TIP 4: Don't 'kill -9' the postmaster

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org