Re: [GENERAL] 8.1 on gentoo

2006-05-25 Thread Richard Broersma Jr
I wouldn't expect a version marked stable to be out any time soon.
http://packages.gentoo.org/search/?sstring=postgresql

I haven't checked whether this version is available as a masked ebuild.

Regards,

Richard Broersma Jr.

--- Sim Zacks [EMAIL PROTECTED] wrote:

 Does anybody know when postgresql 8.1 will be considered stable on 
 gentoo for x86?
 Another question: does anybody know of any instability in 
 postgresql 8.1 (unstable) for gentoo x86?
 Does anybody have it working in a production environment?
 
 ---(end of broadcast)---
 TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match
 


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] background triggers?

2006-05-25 Thread Thomas Hallgren

Sim Zacks wrote:
If the database supported background triggers, it might be implemented 
by a daemon or by the Listen/Notify framework, but I really couldn't 
care less. It is part of the database.


Assuming the backend handled this, what would the transaction semantics look like? You 
can't wait for the background work to complete before you commit your transaction, so what 
should happen when the background trigger fails? Or what if a number of such triggers 
were fired and then rolled back?


The only thing the database *can* do is what it does today. It provides the hooks needed 
for specialized code that can react to the *outcome* of transactions (and then perform its 
task asynchronously, using transactions that are autonomous from the first one).


What you're trying to do doesn't belong in triggers and the concept of background triggers 
doesn't make any sense. Triggers execute (and complete) within a transaction and the work 
they perform should be rolled back if the transaction is rolled back.


Kind regards,
Thomas Hallgren



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] background triggers?

2006-05-25 Thread Rafal Pietrak
I'd like to propose a 'syntax/semantics' of such trigger:

Triggers normally execute inside of a transaction. 

A COMMIT within a trigger could mean: do a fork: fork-1) return to the
main and schedule COMMIT there, fork-2) continue in background.

From the perspective of my earlier applications, it would be desirable to
have an argument for such a COMMIT - a label, which is a *global* database
object (maybe just a semaphore), and is:
1) Assuming a semaphore implementation - the semaphore should be tested and
fail if already asserted, NOT tested and block. The fork should be initiated
only if the semaphore test succeeds.
2) the execution of the procedure within fork-2 (meaning, after semaphore
assertion succeeds) should be postponed until the caller actually COMMITs.
3) On EXIT, fork-2 deasserts the semaphore.
4) in the simplest case, the semaphore could be defined on the trigger
function name itself, and consequently the 'label' for the COMMIT
wouldn't be necessary?


-R

On Thu, 2006-05-25 at 08:50 +0200, Sim Zacks wrote:
 The question is not how it is implemented, the question is what does the 
 end user have to do to accomplish his requirements. If I have to write 
 my own daemon, then I have to debug it and maintain it, write a script 
 so it automatically turns on when the machine is rebooted. If I have 
 multiple environments, for development, testing and production, I will 
 need to modify it to handle this. If I move the database to a new server 
 this script has to be in the action items to be moved along with its 
 accompanying start-up scripts. If I decide to recover a backup to test 
 something (something I do on an occasional basis), I will need to 
 consider my daemon as well. It is basically an outside process that has 
 a lot of overhead (not necessarily computing overhead).
 
 If the database supported background triggers, it might be implemented 
 by a daemon or by the Listen/Notify framework, but I really couldn't 
 care less. It is part of the database. I only need to write the function 
   and the trigger code with an option to run this in the background. No 
 matter what I do now, or how many environments I have, the database 
 handles it all. If I backup my database and install it on another 
 server, my process is still intact.
 
 The reason why this should be part of the database engine and not 
 another client application is because what you want is a trigger. All 
 you want is when a table is updated that another function should run, 
 you don't need any outside processing. The Listen/Notify framework is 
 needed for a case where you would like non-database actions to take 
 place. Your client application can then go and do what it needs to and 
 then comes back and can tell the database that it is done.
 
 Chris Browne wrote:
 
  Well, you *clearly* need to have some sort of daemon running in
  order to do this.
  
  I expect it will in effect be a LISTEN process that waits for clients
  to submit NOTIFY requests.
  
  Even if you don't actually choose to use NOTIFY/LISTEN, per se, you'll
  doubtless wind up creating an ad hoc, informally-specified
  implementation of part of it...
 
 ---(end of broadcast)---
 TIP 4: Have you searched our list archives?
 
http://archives.postgresql.org

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] postgreslog - panic message

2006-05-25 Thread surabhi.ahuja
hi,
i am getting the following error messages:

2006-05-24 15:28:08 CDT%LOG: could not bind IPv4 socket: Address already in use
2006-05-24 15:28:08 CDT%HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
2006-05-24 15:28:08 CDT%WARNING: could not create listen socket for "localhost"
2006-05-24 15:28:08 CDT%LOG: database system was shut down at 2006-05-24 15:28:08 CDT

2006-05-24 16:19:39 CDT%PANIC: could not locate a valid checkpoint record
2006-05-24 16:19:39 CDT%LOG: startup process (PID 6548) was terminated by signal 6
2006-05-24 16:19:39 CDT%LOG: aborting startup due to startup process failure
2006-05-24 16:21:28 CDT%LOG: database system was shut down at 2006-05-24 16:01:00 CDT
2006-05-24 16:21:28 CDT%LOG: record with zero length at 0/AC92BC
2006-05-24 16:21:28 CDT%LOG: invalid primary checkpoint record
2006-05-24 16:21:28 CDT%LOG: record with zero length at 0/AC9280
2006-05-24 16:21:28 CDT%LOG: invalid secondary checkpoint record
2006-05-24 16:21:28 CDT%PANIC: could not locate a valid checkpoint record
2006-05-24 16:21:28 CDT%LOG: startup process (PID 8428) was terminated by signal 6
i am using postgres 8.0.0

please tell me what might be going wrong.
thanks,
regards
Surabhi

Re: [GENERAL] background triggers?

2006-05-25 Thread Dawid Kuroczko

On 5/25/06, Rafal Pietrak [EMAIL PROTECTED] wrote:

I'd like to propose a 'syntax/semantics' of such trigger:

Triggers normally execute inside of a transaction.

A COMMIT within a trigger could mean: do a fork: fork-1) return to the
main and schedule COMMIT there, fork-2) continue in background.


I don't think fork(2)ing a running backend is a good idea; it would probably
end up in major data corruption.  You want to call fork(2) in your
application. Something like: if (fork()==0) { reestablish connection, issue
trigger-code on the database } else { gracefully return };


From the perspective of my earlier applications, it would be desirable to
have an argument for such a COMMIT - a label, which is a *global* database
object (maybe just a semaphore), and is:
1) Assuming a semaphore implementation - the semaphore should be tested and
fail if already asserted, NOT tested and block. The fork should be initiated
only if the semaphore test succeeds.
2) the execution of the procedure within fork-2 (meaning, after semaphore
assertion succeeds) should be postponed until the caller actually COMMITs.
3) On EXIT, fork-2 deasserts the semaphore.
4) in the simplest case, the semaphore could be defined on the trigger
function name itself, and consequently the 'label' for the COMMIT
wouldn't be necessary?


Hmm, I've got a feeling it's something like "I don't feel like coding it in
the application, so it would be better if the community changed the backend
to do it". :) However, what you propose in points 1-4 is somewhat
similar to the already existing 2PC (two-phase commit), which PostgreSQL
implements. Probably not what you want, but valuable to
know, I guess.

And as for COMMIT; within a function...  Not possible, not without
breaking an awful lot of things.  Think of a transaction as:
BEGIN;
 -- commands, like INSERTs, ended with COMMIT;
 -- user-defined triggers fired after the user issued COMMIT;
 -- Referential Integrity triggers and what not
COMMIT; -- the actual commit performed by the DB.

If your trigger called COMMIT, the referential integrity triggers would
have nothing to do -- the commit would already be done.  No referential
integrity; you might as well use MyISAM then. ;)

So... let's assume the commit would not actually commit, but rather
start another backend and do the work [1].  The problem is that the newly
started backend would not see the work until the old backend actually
COMMITs;

The idea of a commit within a procedure might be interesting from
the perspective of very long-running queries which update a whole lot
of rows, but that's another issue.

  Regards,
  Dawid

[1]: If you really insist on doing it this way, of course you may!  Here is
a fishing rod:
write a trigger in PL/perlU, which will fork();  The newly started child will
use DBI to connect to database, and issue your query, and then call
exit(0) to be sure you don't return to backend.  You might want to call
exec() with a pre-prepared script doing above work.


From the perspective of the main backend, the trigger will call fork(),

and finish.  And your application will commit.

That's everything you need to do it the way you want it.  Have fun!
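A sketch of that fishing rod's fork-and-detach shape, in Python rather than PL/PerlU (the `housekeeping` function and the temp file are stand-ins for what would really be a fresh DBI connection and a slow query; `fire_and_forget` is a made-up name for illustration):

```python
import os
import tempfile

def fire_and_forget(task, *args):
    """Fork; the parent returns at once, the child runs task and exits.

    In the real PL/PerlU trigger the child would open its own database
    connection and run the slow housekeeping there, then call exit() so
    it never returns into the backend's stack.
    """
    pid = os.fork()
    if pid == 0:                      # child: do the slow housekeeping
        try:
            task(*args)
        finally:
            os._exit(0)               # never fall back into parent code
    return pid                        # parent: carry on; the app commits

def housekeeping(path):               # stand-in for the real DBI work
    with open(path, "w") as f:
        f.write("housekeeping done")

out = tempfile.NamedTemporaryFile(delete=False).name
child = fire_and_forget(housekeeping, out)
os.waitpid(child, 0)                  # demo only; a trigger would return immediately
with open(out) as f:
    print(f.read())                   # -> housekeeping done
```

The caveat from earlier in the thread still applies: the child starts before the parent's transaction commits, so in practice it must wait (or reconnect and poll) until the changes are actually visible.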

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] background triggers?

2006-05-25 Thread Thomas Hallgren

Rafal Pietrak wrote:

I'd like to propose a 'syntax/semantics' of such trigger:

Triggers normally execute inside of a transaction. 


A COMMIT within a trigger could mean: do a fork: fork-1) return to the
main and schedule COMMIT there, fork-2) continue in background.

And what if fork-1) returns to the main, attempts the COMMIT, but it instead rolls back due 
to a violated constraint? Where does that leave fork-2?


Regards,
Thomas Hallgren


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] background triggers?

2006-05-25 Thread Rafal Pietrak
On Thu, 2006-05-25 at 10:33 +0200, Thomas Hallgren wrote:
 Rafal Pietrak wrote:
  I'd like to propose a 'syntax/semantics' of such trigger:
  
  Triggers normally execute inside of a transaction. 
  
  A COMMIT within a trigger could mean: do a fork: fork-1) return to the
  main and schedule COMMIT there, fork-2) continue in background.
  
 And what if fork-1) returns to the main, attempts the COMMIT, but it instead
 rolls back due to a violated constraint? Where does that leave fork-2?
 
 Regards,
 Thomas Hallgren

No problem at all (at least in the particular case of the application I have
in mind :). The procedure that remains within fork-2 just does time-
consuming housekeeping. Like a cleanup - it always succeeds, even if
sometimes it is not really necessary (like in the case of the main rolling back). 

And that's exactly why I think it should be 'released to run' by the
RDBMS *after* the main transaction COMMITs (or ROLLS BACK). It should be run on
COMMITTED (visible to the world) changes, not on session transients.

-R


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] 8.1 on gentoo

2006-05-25 Thread Holger Hoffstaette
On Thu, 25 May 2006 08:55:51 +0200, Sim Zacks wrote:

 Does anybody know when postgresql 8.1 will be considered stable on gentoo
 for x86?

No, maybe ask in gentoo-users or -dev? Anyway just because it's not marked
stable does not mean it isn't. It's very unfortunate that distributions
have adopted these terms because more often than not they have absolutely
nothing to do with the *runtime* stability of the software.

 Another question would be does anybody know of any instability of
 postgresql 8.1 unstable for gentoo x86?

~x86 works just fine. How could it not? It's just the same as compiling
yourself - which is considered to be acceptable, if not good, practice on
all other platforms and distributions too. If a particular build would be
totally broken/untested/known to eat your soul it would be hardmasked.

-h



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] background triggers?

2006-05-25 Thread Rafal Pietrak
On Thu, 2006-05-25 at 10:21 +0200, Dawid Kuroczko wrote:
 On 5/25/06, Rafal Pietrak [EMAIL PROTECTED] wrote:
  I'd like to propose a 'syntax/semantics' of such trigger:
 
  Triggers normally execute inside of a transaction.
 
  A COMMIT within a trigger could mean: do a fork: fork-1) return to the
  main and schedule COMMIT there, fork-2) continue in background.
 
 I don't think fork(2)ing a running backed is a good idea, probably it would
 end up with major data corruption.  You want to call fork(2) in your
 application. Something like: if (fork()==0) { reestablish connection, issue
 trigger-code on the database } else  { gracefully return };

I'm not that fluent in postgresql backend programming. But further on,
you write a suggestion of a trigger skeleton, which should be feasible -
so if such a trigger (user-level function) is feasible, maybe it could be
implemented by the backend engine too, as if it were a syntax shortcut
(using the COMMIT keyword within the trigger) into the trigger implementation
you suggest at the end of your response below?

 Hmm, I've got a feeling its something like I don't feel like coding it in
 application, so it would be better if community changed the backend
 to do it. :) However what you propose i 1,2,3,4 points is somewhat

:) YES!!!

 similar to already existing 2PC (2-phase commit), which PostgreSQL
 implements. Probably not what you want, but should be valuable to
 know, I guess.

Maybe. Currently I wouldn't know - I have never used 2PC. When I learned
about 2PC, I thought 2PC was for occasions when the application can figure out
how to re-run a transaction in some other way when the 'primary' way fails
(and rolls back). The alternative way might re-use partial work achieved
by the original path, up to the checkpoint.

I never thought this could be used for a *disconnected*, forked additional
instance of a backend process.

I might have been wrong - I must get back to books.

But the goal here is not to re-run a transaction some other way, but to
cut the transaction short, and do tasks which don't need transaction
braces outside of a transaction.

To run part of the trigger outside of a transaction, and thus make the
transaction commit sooner.

I don't think 2PC gives any help here.

 If your trigger would call COMMIT, the referential integrity triggers would

I didn't really mean the 'semantics of COMMIT'. I just meant using the COMMIT
keyword - as it normally *may never* be used within a trigger - to
initiate/implement semantics which 'detach' the remaining processing
encoded within that trigger function from the original
transaction's execution.

 So... let's assume the commit whould not actually commit, but rather
 start another backend and do the work [1].  The problem is that newly
 started backed would not see the work until the old backend actually
 COMMIT;

Yes. And in fact, it should be kept blocked until that moment.

 [1]: If you really insist on doing it this way, of course you may!  Here is
 a fishing rod:
 write a trigger in PL/perlU, which will fork();  The newly started child will
 use DBI to connect to database, and issue your query, and then call
 exit(0) to be sure you don't return to backend.  You might want to call
 exec() with a pre-prepared script doing above work.
 
 From the perspective of the main backend, the trigger will call fork(),
 and finish.  And your application will commit.
 
 That's everything you need to do it the way you want it.  Have fun!

If that works, maybe it could be implemented within the database
backend? And made accessible to client programming by means of the COMMIT
keyword (or, to be more generic, a COMMIT macro, provided also for other
language bindings) within the trigger function body?

-R

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] New DBs from existing schemas

2006-05-25 Thread Karsten Hilbert
On Wed, May 24, 2006 at 04:14:46PM -0700, Nishad Prakash wrote:

 I want to create a new database with the exact schema of an existing one,
 but a different name.  After some reading, it seems
 
 pg_dump -s old_db > old_schema
 createdb -T old_db new_db
Now new_db is a 1:1 copy of the old database - triggers, data and
all. You may need to set up login permissions in pg_hba.conf for
users to be able to connect to new_db.

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] 8.1 on gentoo

2006-05-25 Thread Tim Allen

Holger Hoffstaette wrote:

On Thu, 25 May 2006 08:55:51 +0200, Sim Zacks wrote:


Does anybody know when postgresql 8.1 will be considered stable on gentoo
for x86?


No, maybe ask in gentoo-users or -dev? Anyway just because it's not marked
stable does not mean it isn't. It's very unfortunate that distributions
have adopted these terms because more often than not they have absolutely
nothing to do with the *runtime* stability of the software.


Presumably the stable status is more to do with all the packages that 
depend on postgres, rather than postgres itself. I guess the reason it 
hasn't been marked stable yet is just lack of round tuits for someone to 
test all the other packages.



Another question would be does anybody know of any instability of
postgresql 8.1 unstable for gentoo x86?


~x86 works just fine. How could it not? It's just the same as compiling
yourself - which is considered to be acceptable, if not good, practice on
all other platforms and distributions too. If a particular build would be
totally broken/untested/known to eat your soul it would be hardmasked.

-h


Tim

--
---
Tim Allen  [EMAIL PROTECTED]
Proximity Pty Ltd  http://www.proximity.com.au/

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] background triggers?

2006-05-25 Thread Thomas Hallgren

Rafal Pietrak wrote:

On Thu, 2006-05-25 at 10:33 +0200, Thomas Hallgren wrote:

Rafal Pietrak wrote:

I'd like to propose a 'syntax/semantics' of such trigger:

Triggers normally execute inside of a transaction. 


A COMMIT within a trigger could mean: do a fork: fork-1) return to the
main and schedule COMMIT there, fork-2) continue in background.

And what if fork-1) returns to the main, attempts the COMMIT, but it instead rolls back due 
to a violated constraint? Where does that leave fork-2?


Regards,
Thomas Hallgren


No problem at all (at least in the particular case of the application I have
in mind :). The procedure that remains within fork-2 just does time-
consuming housekeeping. Like a cleanup - it always succeeds, even if
sometimes it is not really necessary (like in the case of the main rolling back). 


A somewhat limited use-case to form generic database functionality on, wouldn't 
you say?


And that's exactly why I think it should be 'released to run' by the
RDBMS *after* the main transaction COMMITs (or ROLLS BACK). It should be run on
COMMITTED (visible to the world) changes, not on session transients.

Right, so it's not a trigger. It's another session (another transaction) that reacts to a 
notification that is sent only if the first transaction succeeds. This is exactly what 
notify/listen is for.


Regards,
Thomas Hallgren


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] postgreslog - panic message

2006-05-25 Thread Richard Huxton

surabhi.ahuja wrote:

hi,
i am getting the following error messages:
 
2006-05-24 15:28:08 CDT%LOG:  could not bind IPv4 socket: Address already in use

2006-05-24 15:28:08 CDT%HINT:  Is another postmaster already running on port 
5432? If not, wait a few seconds and retry.
2006-05-24 15:28:08 CDT%WARNING:  could not create listen socket for 
localhost
2006-05-24 15:28:08 CDT%LOG:  database system was shut down at 2006-05-24 
15:28:08 CDT


This is what it says. You've got one of:
1. another instance of PostgreSQL running on port 5432
2. something else listening on port 5432
3. the operating system hasn't recycled the port after a shutdown (it 
can take a few seconds sometimes).
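For what it's worth, the condition behind that "could not bind IPv4 socket" line is easy to reproduce with two plain sockets (a Python sketch, nothing Postgres-specific; the port number is chosen by the OS):

```python
import errno
import socket

# Bind a listening socket, then try to bind a second socket to the same
# address: the OS refuses with EADDRINUSE -- the same condition behind
# the postmaster's "Address already in use" log line.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
first.listen(1)
port = first.getsockname()[1]

second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))  # same port while 'first' still holds it
    in_use = False
except OSError as e:
    in_use = (e.errno == errno.EADDRINUSE)
    print("Address already in use")
finally:
    second.close()
    first.close()
```

Case 3 above is the same error seen from the other side: the old process (or the kernel's TIME_WAIT state) still holds the port when the new postmaster tries to bind it.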


I assume it started up after this?


2006-05-24 16:19:39 CDT%PANIC:  could not locate a valid checkpoint record
2006-05-24 16:19:39 CDT%LOG:  startup process (PID 6548) was terminated by 
signal 6
2006-05-24 16:19:39 CDT%LOG:  aborting startup due to startup process failure
2006-05-24 16:21:28 CDT%LOG:  database system was shut down at 2006-05-24 
16:01:00 CDT
2006-05-24 16:21:28 CDT%LOG:  record with zero length at 0/AC92BC
2006-05-24 16:21:28 CDT%LOG:  invalid primary checkpoint record
2006-05-24 16:21:28 CDT%LOG:  record with zero length at 0/AC9280
2006-05-24 16:21:28 CDT%LOG:  invalid secondary checkpoint record
2006-05-24 16:21:28 CDT%PANIC:  could not locate a valid checkpoint record
2006-05-24 16:21:28 CDT%LOG:  startup process (PID 8428) was terminated by 
signal 6

i am using postgres 8.0.0


You should be running the latest release, 8.0.8 - there are 8 sets of 
bug fixes between your version and the latest.



please tell as to what might be going wrong.


Either the WAL has been corrupted (have you had a crash? Are your disks 
reliable?), PG is reading the wrong file (unlikely) or you've hit a bug 
in 8.0.0. Read the release notes for 8.0 and see if any of the changes 
could affect you.


http://www.postgresql.org/docs/8.0/static/release.html

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] background triggers?

2006-05-25 Thread Rafal Pietrak
On Thu, 2006-05-25 at 11:29 +0200, Thomas Hallgren wrote:
 Rafal Pietrak wrote:
  consuming housekeeping. Like a cleanup - it always succeeds, even if
  sometimes it is not really necessary (like in the case of the main rolling back). 
  
 A somewhat limited use-case to form generic database functionality on, 
 wouldn't you say?

OK. I admit.

...but maybe... :) OK, just a final comment and I'm done.

  And that's exactly why I think it should be 'released to run' by the
  RDBMS *after* the main transaction COMMITs (or ROLLS BACK). It should be run on
  COMMITTED (visible to the world) changes, not on session transients.
  
 Right, so it's not a trigger. It's another session (another transaction) that 
 reacts on a 
 notification that is sent only if the first transaction succeeds. This is 
 exactly what 
 notify/listen is for.

Yes.

And no.

It is a trigger. But 'the other kind of' trigger.

political-section-possible-source-of-fame-and-war-sorry

The thing is that I've seen *very* inefficient database applications,
mainly because that was 'the easiest way to go'.

One of a programmer's main virtues is laziness. Of which I myself am proud
:)

Thinking along the lines of "OK, we have these aggregate tables, but we
don't need them 100% accurate, so let's not trigger syncing them with the
transaction log on every INSERT - that takes effort", more often than not
the implementation goes along as: "We have those aggregate tables - we
must keep those in sync with the main log, so we trigger an UPDATE on every
INSERT... forking a housekeeper process to receive NOTIFY? Naaah,
I don't think so, maybe next release." But once the application is in
production, we don't redesign when the database load comes to the point
where performance suffers. The redesign is too risky.

My point is that having a tool like COMMIT within a trigger function
may result in a better application, created more easily right from the
beginning.

We don't see wide use of LISTEN/NOTIFY among programmers, as I believe it is
'just that little bit over the acceptable complexity border'. That's why the
request may sound like one for rare cases/limited use.

The programming environment should 'gravitate' us toward creating good
software. The gravity is those little things we find handy - triggers are
handy, but regrettably lead to inefficiency.

/political-section

I mean, workarounds exist, but they are just workarounds nonetheless.

Then again, I just wanted to back up the request, which I've found valid,
and help explain it. Obviously I wouldn't like to endlessly discuss
its pros and cons. I think the idea is well stated now, and if someone
is in the mood to implement it (now, or in some unforeseen future) -
he/she has all the (end-user) explanations in the archive.

-R

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Attn: Richard Huxton

2006-05-25 Thread Dave Page
Your mailserver seems to be borked, Richard:

dev@archonet.com on 25/05/2006 11:49
The message cannot be delivered due to a configuration error on the
server. Please contact your Administrator.
 s1.uklinux.net #5.3.0 X-Unix; 73

Regards, Dave

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] psql \echo strangeness with :variables

2006-05-25 Thread Patrick TJ McPhee
In article [EMAIL PROTECTED],
Jerry Sievers [EMAIL PROTECTED] wrote:

% [EMAIL PROTECTED]
% = \set date `date '+%Y%m%d'`
% 
% [EMAIL PROTECTED]
% = \echo :date
% 20060524
% 
% [EMAIL PROTECTED]
% = \echo foo_:date
% foo_:date   --  Was expecting this to expand... see below

Variables need to be space-delimited in backslash-command contexts. Workaround:

 \set tn 'foo_' :date
 \echo :tn
 create table :tn();

-- 

Patrick TJ McPhee
North York  Canada
[EMAIL PROTECTED]

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] column order

2006-05-25 Thread nuno
hi, all. thanks for the replies.

yes, i meant 'columns' not 'rows'. sorry if i made you a bit confused.
my explanation should've been more descriptive than that i suppose.

anyway, it's good to know that postgresql guarantees the column order.
i was just trying to be double-safe before i mess up with the data.

thanks.


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Insert into partition table hangs

2006-05-25 Thread Nik
Ok, that's a good point. I overlooked the possibility of this
considering that I didn't think the timestamp resolution was that high.
I changed my rules and constraints on tables to take care of this case.
Thanks.

However, how does this affect the problem I am having, considering that the
inserts that were getting stuck were not necessarily close to the critical
time (midnight)?

Also, today a developer reported a similar issue when running a select
statement against this partitioned table. He said that the query would just
run forever, and if he stopped it and ran it again, it would return data
within seconds. It was behaving in this manner randomly (i.e. it would
work a few times in a row, then it would get stuck).

As before, I am not seeing any locks or unusual behavior in the logs or in
pg_stat_activity.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] PROBLEMAS CON POSTGRES EN MAQUINAS MULTIPROCESADORES

2006-05-25 Thread Jimmy Salazar

Greetings,

It turns out that the execution time obtained on an HP DL380 machine with
2 Xeon 3.6 GHz processors and 4 GB of memory, with the following command
cat filename.gz | gunzip | psql dbname -f -, is 4 hours...

This same process on a DL580 machine with 4 processors at 3.0 GHz
and 8 GB of memory has an execution time of 5 and a half hours.

The above tests were run using POSTGRES 8.0.3 on a 64-bit
REDHAT 4.2 operating system.

The question is whether POSTGRES has shown problems on 64-bit machines
with multiple processors... we say this based on the execution times
given above.

Another test was run on the 64-bit machine: the same process, but
without using SMP, and the response time was 3 and a half hours
(it would seem postgres has problems with multiple processors).

Is there any solution? Or any possible tuning? Any recommendation?
  
-- 
__
Juan Jimmy Salazar Ramirez
Computer Engineer
Analyst Programmer
Solati Ltda.
Medellin -  Colombia
Tel. 268-75-51

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Postgresql with Hibernate issues

2006-05-25 Thread xuemei

Hello:

I have a Java application that starts several processes to retrieve
data and store it in database tables.

When I run this on WinXP with Hibernate/MySQL, everything looks OK.
Now I need to use PostgreSQL, so I changed the Hibernate configuration
file. When I run the program, the tables are created and records are
inserted into several tables, but the IDs for each table are all wrong.

A closer look indicates that Hibernate is issuing select nextval for
each insert command, which results in the ID for each table being
treated as a global one:

Tables ID

table1 1, 5
table2 2, 3, 4
table3 6
table4 7, 8
table1 9, 10

Why is this happening? Does this mean Hibernate cannot work with
PostgreSQL, whereas the same application with Hibernate and MySQL
works perfectly fine?

Thanks a lot for your advice.

Shemy
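[Editor's note: the behaviour described above is, if I recall correctly, Hibernate's default on PostgreSQL: with `<generator class="sequence"/>` and no `sequence` parameter (or with the `native` strategy), every id is drawn from the single shared `hibernate_sequence`. A sketch of a per-table mapping follows; the class name and sequence name are illustrative assumptions, not taken from this thread.]

```xml
<!-- One dedicated sequence per table; "table1_id_seq" is an assumed name. -->
<class name="com.example.Table1" table="table1">
  <id name="id" type="long" column="id">
    <generator class="sequence">
      <param name="sequence">table1_id_seq</param>
    </generator>
  </id>
  <property name="name" column="name" type="string"/>
</class>
```

Repeating this with a distinct sequence for each mapped class gives each table its own independent id space.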








Re: [GENERAL] 8.1 on gentoo

2006-05-25 Thread Kenneth Downs

Tim Allen wrote:


Holger Hoffstaette wrote:


On Thu, 25 May 2006 08:55:51 +0200, Sim Zacks wrote:

Does anybody know when postgresql 8.1 will be considered stable on 
gentoo

for x86?



No, maybe ask in gentoo-users or -dev? Anyway just because it's not 
marked
stable does not mean it isn't. It's very unfortunate that 
distributions
have adopted these terms because more often than not they have 
absolutely

nothing to do with the *runtime* stability of the software.



We've been running 8.1 on gentoo almost since it came out into the ~x86 
tree, and have had no problems.


Not very scientific, but that's our experience, for what it's worth.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Postgresql with Hibernate issues

2006-05-25 Thread Bjørn T Johansen
How does your hibernate mapping look like? Especially your generator tag...

BTJ

On Thu, 25 May 2006 16:09:22 -
xuemei [EMAIL PROTECTED] wrote:

 Hello:
 
  
 
 I have a java application that starts several processes to retrieve data and
 store them into the database tables;
 
 When I run this on winXP and with Hibernate/Mysql, everything looks ok.
 
 Now I need to use postgresql, so I change the hibernate configuration file.
 
 When I run the program, the tables are created and records inserted into
 several tables.
 
 But the ID for each table are all wrong. 
 
  
 
 A closer look indicates that the hibernate is using the select nextval for
 each insert command, which results in the
 
 ID for each table is considered as a global one, 
 
  
 
 TablesID
 
 table1   1 , 5
 
 table2   2,3,4
 
 table3   6
 
 table4   7,8
 
 table1   9 ,10
 
  
 
 Why this is happening, does this mean Hibernate with postgresql can not
 work?, whileas same application+ Hibernate with mysql 
 
 Works perfectly fine?
 
  
 
 Thanks a lot for your advices.
 
  
 
 Shemy
 


-- 
---
Bjørn T Johansen

[EMAIL PROTECTED]
---
Someone wrote:
I understand that if you play a Windows CD backwards you hear strange Satanic 
messages
To which someone replied:
It's even worse than that; play it forwards and it installs Windows
---

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Postmaster crashes after upgrade to 8.1.4!

2006-05-25 Thread CG
Upgrading from 8.1.3 to 8.1.4, I compiled with the same configure flags,
installed to a separate directory, shut down 8.1.3, copied the data directory
over to the new 8.1.4 directory (cp -Rp), set my symlinks so that
/usr/local/pgsql points to the new 8.1.4 directory, and fired it up.  I ran
some queries, inserts, updates, deletes, etc. Everything looked good! I removed
the old data directory since by leaving it there we'd be strapped for space,
and I went to bed. Everything ran fine until 4AM when the database dump runs.
After that we've been extremely unstable... We run, we crash, we run, we crash
... Here's a snippet of what I've been seeing...

2006-05-25 08:29:26.665 EDT   LOG:  all server processes terminated;
reinitializing
2006-05-25 08:29:26.677 EDT   LOG:  database system was interrupted at
2006-05-25 08:28:55 EDT
2006-05-25 08:29:26.678 EDT   LOG:  checkpoint record is at 28/3C0A35D8
2006-05-25 08:29:26.678 EDT   LOG:  redo record is at 28/3C0A35D8; undo record
is at 0/0; shutdown TRUE
2006-05-25 08:29:26.678 EDT   LOG:  next transaction ID: 204190433; next OID:
186871674
2006-05-25 08:29:26.678 EDT   LOG:  next MultiXactId: 1; next MultiXactOffset:
0
2006-05-25 08:29:26.678 EDT   LOG:  database system was not properly shut down;
automatic recovery in progress
2006-05-25 08:29:26.688 EDT   LOG:  redo starts at 28/3C0A3628
2006-05-25 08:29:26.702 EDT   LOG:  unexpected pageaddr 28/28102000 in log file
40, segment 60, offset 1056768
2006-05-25 08:29:26.702 EDT   LOG:  redo done at 28/3C0FFDF8
2006-05-25 08:29:27.097 EDT myuser mydata 192.168.167.4(54695)FATAL:  the
database system is starting up
2006-05-25 08:29:27.303 EDT   LOG:  database system is ready
2006-05-25 08:29:27.303 EDT   LOG:  transaction ID wrap limit is 1073799886,
limited by database postgres
2006-05-25 08:30:34.139 EDT   LOG:  autovacuum: processing database mydata
2006-05-25 08:30:50.076 EDT   LOG:  server process (PID 32140) was terminated
by signal 11
2006-05-25 08:30:50.076 EDT   LOG:  terminating any other active server
processes
2006-05-25 08:30:50.076 EDT myuser mydata 10.0.1.1(4135)WARNING:  terminating
connection because of crash of another server process
2006-05-25 08:30:50.076 EDT myuser mydata 10.0.1.1(4135)DETAIL:  The postmaster
has commanded this server process to roll back the current transaction and
exit, because another server process exited abnormally and possibly corrupted
shared memory.
2006-05-25 08:30:50.076 EDT myuser mydata 10.0.1.1(4135)HINT:  In a moment you
should be able to reconnect to the database and repeat your command.
2006-05-25 08:30:50.077 EDT myuser mydata 10.0.0.12(2990)WARNING:  terminating
connection because of crash of another server process
2006-05-25 08:30:50.077 EDT myuser mydata 10.0.0.12(2990)DETAIL:  The
postmaster has commanded this server process to roll back the current
transaction and exit, because another server process exited abnormally and
possibly corrupted shared memory.
2006-05-25 08:30:50.077 EDT myuser mydata 10.0.0.12(2990)HINT:  In a moment you
should be able to reconnect to the database and repeat your command.
2006-05-25 08:30:50.078 EDT myuser mydata 192.168.167.4(54696)WARNING: 
terminating connection because of crash of another server process
2006-05-25 08:30:50.078 EDT myuser mydata 192.168.167.4(54696)DETAIL:  The
postmaster has commanded this server process to roll back the current
transaction and exit, because another server process exited abnormally and
possibly corrupted shared memory.
2006-05-25 08:30:50.078 EDT myuser mydata 192.168.167.4(54696)HINT:  In a
moment you should be able to reconnect to the database and repeat your command.
2006-05-25 08:30:50.080 EDT myuser mydata 10.0.2.1(4474)WARNING:  terminating
connection because of crash of another server process
2006-05-25 08:30:50.080 EDT myuser mydata 10.0.2.1(4474)DETAIL:  The postmaster
has commanded this server process to roll back the current transaction and
exit, because another server process exited abnormally and possibly corrupted
shared memory.
2006-05-25 08:30:50.080 EDT myuser mydata 10.0.2.1(4474)HINT:  In a moment you
should be able to reconnect to the database and repeat your command.
2006-05-25 08:30:50.081 EDT myuser mydata 10.0.2.1(4473)WARNING:  terminating
connection because of crash of another server process
2006-05-25 08:30:50.081 EDT myuser mydata 10.0.2.1(4473)DETAIL:  The postmaster
has commanded this server process to roll back the current transaction and
exit, because another server process exited abnormally and possibly corrupted
shared memory.
2006-05-25 08:30:50.081 EDT myuser mydata 10.0.2.1(4473)HINT:  In a moment you
should be able to reconnect to the database and repeat your command.
2006-05-25 08:30:50.081 EDT myuser mydata 10.0.2.1(4459)WARNING:  terminating
connection because of crash of another server process
2006-05-25 08:30:50.081 EDT myuser mydata 10.0.2.1(4459)DETAIL:  The postmaster
has commanded this server process to roll back the current transaction and

Re: [GENERAL] postgreslog - panic message

2006-05-25 Thread Volkan YAZICI
On May 25 11:04, Richard Huxton wrote:
 surabhi.ahuja wrote:
 hi,
 i am getting the following error messages:
  
 2006-05-24 15:28:08 CDT%LOG:  could not bind IPv4 socket: Address 
 already in use
 2006-05-24 15:28:08 CDT%HINT:  Is another postmaster already running on 
 port 5432? If not, wait a few seconds and retry.
 2006-05-24 15:28:08 CDT%WARNING:  could not create listen socket for 
 localhost
 2006-05-24 15:28:08 CDT%LOG:  database system was shut down at 
 2006-05-24 15:28:08 CDT
 
 This is what it says. You've either got:
 1. another version of PostgreSQL running on port 5432
 2. Something else on port 5432
 3. The operating-system hasn't recycled the port after a shutdown (it 
 can take a few seconds sometimes).

Just for curiosity, shouldn't SO_REUSEADDR (in
backend/libpq/pqcomm.c:342) handle the 3rd possibility in the above
list? Or isn't this behaviour guaranteed by SO_REUSEADDR on every
platform?


Regards.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] postgreslog - panic message

2006-05-25 Thread Tom Lane
surabhi.ahuja [EMAIL PROTECTED] writes:
 2006-05-24 16:21:28 CDT%LOG:  database system was shut down at =
 2006-05-24 16:01:00 CDT
 2006-05-24 16:21:28 CDT%LOG:  record with zero length at 0/AC92BC
 2006-05-24 16:21:28 CDT%LOG:  invalid primary checkpoint record
 2006-05-24 16:21:28 CDT%LOG:  record with zero length at 0/AC9280
 2006-05-24 16:21:28 CDT%LOG:  invalid secondary checkpoint record
 2006-05-24 16:21:28 CDT%PANIC:  could not locate a valid checkpoint =
 record
 2006-05-24 16:21:28 CDT%LOG:  startup process (PID 8428) was =
 terminated by signal 6

Hm, those are remarkably small values of the checkpoint record pointers;
roughly what you'd see immediately after initdb.  What's the history of
this installation?  If it's not freshly initdb'd, the only thing I can
think is that somebody has restored an old version of pg_control into
the directory tree.  Which would be a pretty bad move, but maybe you can
recover with pg_resetxlog.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Postmaster crashes after upgrade to 8.1.4!

2006-05-25 Thread Tom Lane
CG [EMAIL PROTECTED] writes:
 2006-05-25 08:30:50.076 EDT   LOG:  server process (PID 32140) was terminated
 by signal 11

That should be leaving a core dump file (if not, restart the postmaster
under ulimit -c unlimited).  Get a stack trace with gdb to get some
more info about what's going on.
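
[Editor's note: the procedure Tom suggests looks roughly like the transcript below; the binary and core-file paths are assumptions, so adjust them for your install. On Linux the core file normally lands in the data directory the postmaster was started from.]

```
$ ulimit -c unlimited              # in the shell that starts the postmaster
$ pg_ctl -D /usr/local/pgsql/data start
  ... reproduce the crash ...
$ gdb /usr/local/pgsql/bin/postgres /usr/local/pgsql/data/core.32140
(gdb) bt                           # print the stack trace
```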

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Postmaster crashes after upgrade to 8.1.4!

2006-05-25 Thread CG
I didn't find a core dump. 

Perhaps I'm looking in the wrong spot or for the wrong file. The file should be
called core.32140, correct? ... I did a find / -name core* ... that found
nothing useful.



--- Tom Lane [EMAIL PROTECTED] wrote:

 CG [EMAIL PROTECTED] writes:
  2006-05-25 08:30:50.076 EDT   LOG:  server process (PID 32140) was
 terminated
  by signal 11
 
 That should be leaving a core dump file (if not, restart the postmaster
 under ulimit -c unlimited).  Get a stack trace with gdb to get some
 more info about what's going on.
 
   regards, tom lane
 


__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Postmaster crashes after upgrade to 8.1.4!

2006-05-25 Thread Bill Moran
CG [EMAIL PROTECTED] wrote:

 I didn't find a core dump. 
 
 Perhaps I'm looking in the wrong spot or for the wrong file. The file should 
 be
 called core.32140, correct? ... I did a find / -name core* ... that found
 nothing useful.

find / -name '*core*' would be more reliable.  FreeBSD, for example, makes
coredumps in the format {processname}.core.

 --- Tom Lane [EMAIL PROTECTED] wrote:
 
  CG [EMAIL PROTECTED] writes:
   2006-05-25 08:30:50.076 EDT   LOG:  server process (PID 32140) was
  terminated
   by signal 11
  
  That should be leaving a core dump file (if not, restart the postmaster
  under ulimit -c unlimited).  Get a stack trace with gdb to get some
  more info about what's going on.

-- 
Bill Moran

Also, I can kill you with my brain.

River Tam


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] background triggers?

2006-05-25 Thread Florian G. Pflug

Rafal Pietrak wrote:

On Wed, 2006-05-24 at 13:36 +0200, Sim Zacks wrote:
My understanding of Listen/Notify is that it is a completely 
disconnected process running on the database server.


Yes. But in my particular case (and, I presume, the intention of
'background triggers') a programmer (like myself) is not really
interested in the outcome of that trigger process. So there is no
'technical' need for him/her to create the server-side process,
*provided* he/she can set up a job *within* the database server itself,
and just go away.

Well, exactly that not being interested in the outcome is IMHO the
reason why your demands clash with the architecture of an RDBMS. Most
RDBMSs go a long way to ensure the consistency and safety of your data
once they have promised (usually by not raising an error, at least on
commit) that they stored it. This doesn't match very well with
asynchronous processes that nobody cares about (in the sense that there
is nobody to report errors to).



That's the idea of 'background triggers'.

Surely, there are workarounds. Like the LISTEN/NOTIFY server (not a
database server, but a system server) daemon that acts on database
server notifications. Or even a system server daemon that simply uses
synchronous database communication (like I did in my case). The problem
is that I have this 'itching desire' to have such technical issues
supported 'within the framework of the RDBMS architecture'.

I wouldn't call this a workaround. It's a sane solution to your problem.
The only real downside I can see is that it requires you to implement 
that daemon - but that can be solved once and for all - see below.



That's why I keep thinking that the solution I used is actually
'beating against the fences', while gurus do it some other, *better*
way. But if not, I think that 'background triggers' could help here.

I may not have understood exactly what you are trying to do, but from 
what I understood, this will solve your problem.


I think you did. I just feel that 'background triggers' is a 'real
life' engineering issue, so it should get some backing from the RDBMS.
I don't think the RDBMS can do much more than provide a notification 
framework - which postgres does (LISTEN / NOTIFY).


But what would probably ease your 'itching desire' is a general-purpose
daemon that could be told to execute a given statement either at
predefined times, or when a notification arrives, or both.

Newer pgAdmin III versions include pgAgent, which is a kind of
postgres-aware cron daemon, as far as I know. I don't think that
pgAgent currently supports LISTEN/NOTIFY, but you could talk to the
pgAdmin maintainers about adding support for that.

Then you wouldn't need to write the daemon yourself, and it would even
be part of the framework as long as you consider pgadmin to be part
of the framework.
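
[Editor's note: a daemon of the kind described here can be sketched in a few lines of Python with psycopg2. This is only an illustration under assumed names: the channel "rows_inserted", the totals/detail tables, and the DSN are invented for the example, not taken from the thread.]

```python
# Sketch of a LISTEN/NOTIFY "background trigger" daemon.
# Assumed names: channel "rows_inserted", tables "totals"/"detail".
import select

# Map each NOTIFY channel to the statement the daemon runs for it.
ACTIONS = {
    "rows_inserted": "UPDATE totals SET n = (SELECT count(*) FROM detail)",
}

def statement_for(channel):
    """Return the SQL to run for a notification, or None if unknown."""
    return ACTIONS.get(channel)

def run_daemon(dsn):
    # psycopg2 is imported here so the dispatch logic above stays
    # importable without a database driver installed.
    import psycopg2
    conn = psycopg2.connect(dsn)
    conn.set_isolation_level(0)  # autocommit, so LISTEN takes effect at once
    cur = conn.cursor()
    for channel in ACTIONS:
        cur.execute("LISTEN %s" % channel)
    while True:
        select.select([conn], [], [])   # block until something arrives
        conn.poll()
        while conn.notifies:
            note = conn.notifies.pop()
            sql = statement_for(note.channel)
            if sql is not None:
                cur.execute(sql)        # autocommit: each action is its own txn
```

Triggers in the database then only need to execute NOTIFY rows_inserted; the expensive aggregate work happens in this separate session, outside the inserting transactions, which is exactly the decoupling LISTEN/NOTIFY provides.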

greetings, Florian Pflug





just my 2c.

-R



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] gentoo and 8.x

2006-05-25 Thread Michael Dean
Does anybody know when postgresql 8.1 will be considered stable on 
gentoo for x86?
Gentoo, as a community-based distro with a unique perspective, usually 
seems to lag behind even important upgrades such as 8.1.4.  On the x86 
platform it should be OK, although I would bring in code from postgresql 
and just work it into your system.  We have had no problems with any 
earlier version on our gentoo servers, but we also lag behind in our 
production environments because of minor dependencies in user 
applications higher up the stack.


Another question would be does anybody know of any instability of 
postgresql 8.1 unstable for gentoo x86? 
Poor terminology use by gentoo, no doubt designed to keep everything in 
more or less sync for those who rely on emerge.


Does anybody have it working in a production environment?

we remain at an earlier version by design.
Michael

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Postmaster crashes after upgrade to 8.1.4!

2006-05-25 Thread CG
Okay, there was no core dump to be found.

I had to revert back to 8.1.3 which seems to be running fine. I am /extremely/
thankful that there was no data corruption.

I took a 24 hour old dumpfile of the database it was crashing on and I restored
it to a similar AMD64 box (SunFire x2100 instead of SunFire x4100) running
8.1.4 and tried to crash it as the other was crashing. No joy. It seems to run.
I'll leave it running and try to put a decent load on the box to get it to
crash. 

Since I would have to down the production database to get a working copy, I
won't be able to copy the offending data directory over to the test
installation until my next maint window rolls around in a few weeks. That, or
we have another outage of some type which would give me the ability to down the
database and copy the tree over.

I wish I could've done more analysis while the server was crippled. I'll keep
trying.

CG

--- Tom Lane [EMAIL PROTECTED] wrote:

 CG [EMAIL PROTECTED] writes:
  2006-05-25 08:30:50.076 EDT   LOG:  server process (PID 32140) was
 terminated
  by signal 11
 
 That should be leaving a core dump file (if not, restart the postmaster
 under ulimit -c unlimited).  Get a stack trace with gdb to get some
 more info about what's going on.
 
   regards, tom lane
 
 ---(end of broadcast)---
 TIP 4: Have you searched our list archives?
 
http://archives.postgresql.org
 


__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] background triggers?

2006-05-25 Thread Rafal Pietrak
On Thu, 2006-05-25 at 18:49 +0200, Florian G. Pflug wrote:
 Rafal Pietrak wrote:
  'technical' need for him/her to create the server side proces *provided*
  hi/she can setup a job *within* the database server itself, and just go
  away.
 Well, exactly not being interested in the outcome is IMHO the reason 
 why your demands clash with archtecture of a RDBMS. Most RDBMS go a long 
 way to ensure consistency and safety of your data, once they promised 
 (usually by not raising an error at least on commit) that they stored 
 them. This doesn't match very well with asynchronous processes for which 
 nobody cares (In the sense that there is nobody to reports errors to).

No, no, no. This is a completely different story.

It would really be very unfortunate if we couldn't rely on the RDBMS's
'confirmed storage'.

Here I'm just not interested in that procedure's outcome: whether it
eventually COMMITTED or ROLLED BACK. But obviously I am interested in
the consistency of the database when the detached procedure COMMITS.

I mean, it looks like this is *really* a novelty for RDBMS design - I
feel that the real programmers here (you guys :) are so hard to
persuade of its necessity because it's so far from the 'synchronous
nature' of classical RDBMS design, and of triggers in particular.

But I'd like to express my belief that having such a tool within the
server can help build better database applications.

BTW, it's tempting to bring in an example from another world: OS kernel
design (Linux in particular) does have a 'sort of triggers' - the
interrupt service subsystem. To keep the overall system efficient,
interrupt service routines are split into 'front ends' and 'back ends'.
Front ends execute 'inside' the interrupt context - interrupts disabled.
Back ends execute 'outside' the interrupt context - interrupts enabled.

The goal here would be to allow a similar optimisation to be handed
over to database programmers. Some part of a trigger must be executed
within the context of an open transaction; but another part may be
executed without the constraint of blocking the caller.

Lots of similarities.

Having an easy-to-use database backend framework that supports
splitting trigger execution would allow 'user space' programmers to
optimise trigger implementations.

 But what would probably ease your 'etching desire' is if there was a 
 general-purpose daemon that could be told the execute a given statement
 either at predefined times, or when a notification arrives, or both.

Yesss. A pg_crontab system table - one that keeps stored procedures for
scheduled execution within the backend, by the backend (like VACUUM) -
would be nice. Yes, yes, yes. But that's not exactly the functionality
of 'background triggers'.
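
[Editor's note: until something like that exists in the backend, the usual approximation is the system cron plus psql. The schedule, database name, and function name below are hypothetical, purely to show the shape of the entry.]

```
# /etc/crontab sketch: run a maintenance procedure every five minutes
*/5 * * * *   postgres   psql mydb -c "SELECT process_pending_batches()"
```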

 Then you wouldn't need to write the daemon yourself, and it would even
 be part of the framework as long as you consider pgadmin to be part
 of the framework.

I wouldn't. 

I would only call it a framework if I could say COMMIT within the
trigger body. Or, alternatively, if I could define a trigger, so to
say, FOR EACH COMMIT (please note that this is a different point in
time than FOR EACH STATEMENT) which I could also define as DETACHED -
launched by the forked backend.

sory :)

regards,

-R

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] background triggers?

2006-05-25 Thread Dawid Kuroczko

On 5/25/06, Rafal Pietrak [EMAIL PROTECTED] wrote:

 Well, exactly not being interested in the outcome is IMHO the reason
 why your demands clash with archtecture of a RDBMS. Most RDBMS go a long
 way to ensure consistency and safety of your data, once they promised
 (usually by not raising an error at least on commit) that they stored
 them. This doesn't match very well with asynchronous processes for which
 nobody cares (In the sense that there is nobody to reports errors to).

No, no no. This is completly different story.

That would really be very missfortunate if we couldn't relay on RDBMS
'confirmed storage'.


Oh, I think Florian meant that it is strange that your application is
not interested in the trigger's outcome.  Of course one might want to
add a notify-about-a-trigger-failure-by-email feature to circumvent
that, but I won't go that far here.

What is here, is that with your approach, you fire a trigger and forget
about it.  It either commits some time later, or does not, and you
don't know it.  You don't know it, because your application went on,
did other things, and has no way of knowing what's with the commit.

Well, you can speculate that you will notice that no work is being
done.  But why?  Maybe the trigger is inefficient and isss soo slw,
iittt tke aggs tooo cooompleeete.  Or maybe
it ROLLBACKed, effectively removing all evidence of the work done.
With this approach, you don't know it -- and this is what probably
struck Florian's strange for RDBMS feeling.


Here I'm just not interested in that procedure outcome: if it eventually
COMMITED or ROLLED-BACK. But obviously I am interested in consistancy of
database, when the detached procedure COMMITS.


Hmm.  How are you going to get it?  No, an on ROLLBACK trigger is not
a good idea! :-)


I mean. It looks like this is *really* a novelty for RDBMS design - I
feel, that real programmers here (you guys :) are so hard to persuade
its necesary, because it's so far from the 'synchronous nature' of
clasical RDBMS design and triggers in particular.


Don't get me wrong, but the word bizarre is more suitable than
novelty.  This kind of background processing has been there for a
very long time -- why do you think LISTEN/NOTIFY was implemented? :)


But I'd like to express my believe, that having such tool within the
server can help build better database applications.


write faster & write better.  As I wrote some time earlier, you can
code a trigger in PL/PerlU doing exactly what you want.  The more
usual approach of using LISTEN/NOTIFY or a cron job is easier to
manage (you have much better control over how many times a
given function is called).  Imagine a query with thousands of INSERTs
grouped inside a transaction.  Your background trigger would mean
that postgresql spawns an awful lot of new connections
for nothing, as they won't see the new rows from a different transaction.

You said that your scheme would implement exclusive locking.
Well, if I were writing such an application, I would want such
code to be fired no more frequently than once a minute.
ON COMMIT DETACHED WITH MINIMUM DELAY '1 minute'::interval; ? :-)

For locking you can simply use existing infrastructure, like:

CREATE TABLE my_locking_table (lasttime timestamptz);
INSERT INTO my_locking_table VALUES (now());

-- then, inside a PL/pgSQL function:
BEGIN
  SELECT lasttime FROM my_locking_table
    WHERE lasttime < now() - '1 minute'::interval
    FOR UPDATE NOWAIT;
  IF FOUND THEN
    -- do some work
    UPDATE my_locking_table SET lasttime = now();
    RETURN;
  END IF;
EXCEPTION WHEN lock_not_available THEN
  NULL;  -- someone else holds the lock; skip this round
END;

And if you want to check for 'ps auxw | grep backup.sh', you can do
that too, without needing to extend these things.


I would only call it a framework if I can say COMMIT within the trigger
body. Or alternatively, if I can define a trigger, so to say: FOR EACH
COMMIT (pls note, that it's a different point in time, then FOR EACH
STATEMENT) which I could also define as DETACHED - launched by the
forked backend.


Actually, I like the idea of ON COMMIT trigger (though without the
DETACHED part), but this is another story...

  Regards,
  Dawid

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] move from 1 database to another

2006-05-25 Thread P.M
Hi,

I would like to know how I can move from one database to another.
Basically, I want to run a *.sql script in which I connect under a
particular database (for example: template1 or postgres); my *.sql
script should create a new database (for example: test) and then
execute several SQL commands under the test database.

Thanks a lot,
Maileen
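[Editor's note: a sketch of such a script, run as psql -f script.sql template1. The database and table names are illustrative; \connect is the psql meta-command that switches databases mid-script.]

```
CREATE DATABASE test;
\connect test
CREATE TABLE t1 (id serial PRIMARY KEY, name text);
INSERT INTO t1 (name) VALUES ('hello');
```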

Re: [GENERAL] background triggers?

2006-05-25 Thread Florian G. Pflug

Dawid Kuroczko wrote:

On 5/25/06, Rafal Pietrak [EMAIL PROTECTED] wrote:

 Well, exactly not being interested in the outcome is IMHO the reason
 why your demands clash with archtecture of a RDBMS. Most RDBMS go a 
long

 way to ensure consistency and safety of your data, once they promised
 (usually by not raising an error at least on commit) that they stored
 them. This doesn't match very well with asynchronous processes for 
which
 nobody cares (In the sense that there is nobody to reports errors 
to)..


No, no no. This is completly different story.

That would really be very missfortunate if we couldn't relay on RDBMS
'confirmed storage'.


Oh, I think Florian meant that it is strange that your application is not
interested in the trigger's output.  Of course one might want to add
a notify-about-a-trigger-failure-by-email feature to circumvent that,
but I won't be going so far off.

Ouch. ;-)


What is here, is that with your approach, you fire a trigger and forget
about it.  It either commits some time later, or does not, and you
don't know it.  You don't know it, because your application went on,
did other things, and has no way of knowing what's with the commit.

Well, you can speculate, that you will notice that no work is being
done.  But why?  Maybe the trigger is inefficient and isss soo slw,
iittt tke aggs tooo cooompleeete.  Or maybe
it ROLLBACKed, effectively removing all evidence of the work done.
With this approach, you don't know it -- and this is what probably
struck Florian's strange for RDBMS feeling.

This is exactly what I meant. Since there is always a possibility of
failure (out of space, machine crash, ...), all the guarantees an RDBMS
gives you are of the form If I responded with OK to command ... _then_
you can assume ...

Doing things asynchronously defeats this for two reasons. First, nobody
will be able to check whether it responded OK. Second, your proposed
background trigger would _need_ to fire only after the transaction
committed (which snapshot would it see otherwise? And when would its
own changes become visible?). But what if the trigger fails then? It
can't cause the transaction that triggered its run to roll back,
because the database has already committed that transaction.



I mean. It looks like this is *really* a novelty for RDBMS design - I
feel, that real programmers here (you guys :) are so hard to persuade
its necesary, because it's so far from the 'synchronous nature' of
clasical RDBMS design and triggers in particular.


Don't get me wrong, but a word bizzarre is more suitable than
novelty.  The background processing is there since very long
time -- why do you think LISTEN/NOTIFY was implemented? :)


But I'd like to express my belief that having such a tool within the
server can help build better database applications.

IMHO it won't. The _very_ reason people use database systems is that
they provide strong guarantees about the isolation and durability of
transactions. _Additionally_, they provide convenient searching,
indexing, and the like. But, at least from my point of view, the part
that is really hard to get right is the correct behaviour of
transactions.

I know that people who grew up with MySQL, especially MySQL before at
least version 4, think differently. For them, a database is merely a
convenient replacement for flat files, providing a nice language (SQL)
in which to specify complicated algorithms that would be tedious to
hand-code. If that is your interpretation of what a database does, then
I can see why this background-trigger feature sounds so intriguing.

But if you look at things from the transactional point of view, the
feature doesn't sound intriguing any more, because it just doesn't work
well together with transactions, and would have very strange semantics
compared to other features of the database.



I would only call it a framework if I can say COMMIT within the trigger
body. Or alternatively, if I can define a trigger, so to say: FOR EACH
COMMIT (pls note, that it's a different point in time, then FOR EACH
STATEMENT) which I could also define as DETACHED - launched by the
forked backend.

I'd suggest that you try to find a way to do what you want _outside_ of
the backend. I'd actually love to see a more generic pgAgent (see my
other mail) distributed in contrib, or even installed by default.

You could use such a postgres-aware cron to schedule vacuum runs,
periodic cleanup jobs, or to start some procedure on some notification.
_But_ it wouldn't be part of the backend. It would be a separate
process, connecting like any other client.



Actually, I like the idea of ON COMMIT trigger (though without the
DETACHED part), but this is another story...

Me too, although 2PC transactions would actually need to call it on
prepare, not on commit, so on commit would be a bad name.


greetings, Florian Pflug

---(end of broadcast)---
TIP 3: Have you checked 

Re: [GENERAL] background triggers?

2006-05-25 Thread Rafal Pietrak
On Thu, 2006-05-25 at 20:27 +0200, Dawid Kuroczko wrote:
 On 5/25/06, Rafal Pietrak [EMAIL PROTECTED] wrote:
  Here I'm just not interested in that procedure outcome: if it eventually
  COMMITED or ROLLED-BACK. But obviously I am interested in consistancy of
  database, when the detached procedure COMMITS.
 
 Hmm.  How are you going to get it?  No, an on ROLLBACK trigger is not
 a good idea! :-)

Good point. But no - I was just too quick typing. I meant 'basically
interested'; that is, I'd like to rely on COMMIT's guarantee of a
consistent database, but I don't have to see that guarantee on every
COMMIT. I'd prefer to trust the database on that.

A trigger on ROLLBACK... now, that's bizarre.

  I mean. It looks like this is *really* a novelty for RDBMS design - I
  feel that real programmers here (you guys :) are so hard to persuade
  it's necessary, because it's so far from the 'synchronous nature' of
  classical RDBMS design and triggers in particular.
 
 Don't get me wrong, but the word bizarre is more suitable than

OK. agree :)

 You said that your scheme would implement exclusive locking.

No. never mentioned locking. Not me.

 Well, if I were writing such an application, I would rather want such
 code to be fired not more frequently than 1 minute.
 ON COMMIT DETACHED WITH MINIMUM DELAY '1 minute'::interval; ? :-)

sweet. But there are issues.

In cases when the main traffic (keep in mind numerous concurrent
batches of INSERTs) does not rest for a single second, the above would
probably delay the aggregate UPDATEs indefinitely. Which may not
necessarily be appropriate. So there should be an indication of whether to
wait for 1 minute idle-time between INSERTs (foreground/main), or 1
minute idle-time between UPDATEs (background/trigger).

Still, very, very desirable.

 Actually, I like the idea of ON COMMIT trigger (though without the
 DETACHED part), but this is another story...

By now, I like it even more than the initial 1) 2) 3) 4) scenario :) ...
but the DETACHED is vital.

regards,

-R

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] reindexdb program error under PG 8.1.3

2006-05-25 Thread David Wall
When I create a brand new database and then run the reindexdb program 
under PG 8.1.3, I get the following error:


reindexdb: reindexing of database  failed: ERROR:  could not open 
relation with OID 41675


I'm not sure what this error is since we have not even added any data 
yet.  Is this something that's misconfigured in the template1 database 
(we have not customized it), something we're doing wrong, or is this 
normal?


Note that prior to this error, we get many NOTICE messages about each 
table being reindexed just as we'd like.


Thanks,
David


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] reindexdb program error under PG 8.1.3

2006-05-25 Thread Tom Lane
David Wall [EMAIL PROTECTED] writes:
 When I create a brand new database and then run the reindexdb program 
 under PG 8.1.3, I get the following error:

 reindexdb: reindexing of database  failed: ERROR:  could not open 
 relation with OID 41675

 I'm not sure what this error is since we have not even added any data 
 yet.  Is this something that's misconfigured in the template1 database 
 (we have not customized it), something we're doing wrong, or is this 
 normal?

I think you've done something to template1, even though you say you
haven't.  Out-of-the-box, an 8.1.3 database won't have any OIDs assigned
above about 11000, so OID 41675 would have to belong to a user-created
object.  Look in template1 and see if you see a table with that OID.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] reindexdb program error under PG 8.1.3

2006-05-25 Thread David Wall





(repost to newsgroup since the other went directly to Mr. Lane's email
address)

 I think you've done something to template1, even though you say you
 haven't.  Out-of-the-box, an 8.1.3 database won't have any OIDs assigned
 above about 11000, so OID 41675 would have to belong to a user-created
 object.  Look in template1 and see if you see a table with that OID.

Thanks, but I clearly lack that expertise. How do I find "a table with
that OID"? \dl lists large objects, but there's no oid with that
number in my regular db, and template1 has no oids at all using that
command. I also checked the database area with all the files that have
such numbers, and there's no entry with the matching number either,
though there are some that are close.

Is it possible that the 'reindexdb' program (not the command within
psql, but the standalone program) creates some temp tables that might
reflect this oid?


Thanks,

David







Re: [GENERAL] reindexdb program error under PG 8.1.3

2006-05-25 Thread Joachim Wieland
David,

On Thu, May 25, 2006 at 02:07:27PM -0700, David Wall wrote:
 object.  Look in template1 and see if you see a table with that OID.

 Thanks, but I clearly lack that expertise.  How do I find a table with 
 that OID?

Do a normal select against pg_class. Something like:

select oid, relname from pg_class where oid = 41675;


Joachim


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] What to expect when mixing inherited tables and different schemas while dealing with functions and views?

2006-05-25 Thread Jim Nasby

On May 23, 2006, at 9:20 PM, Jorge Godoy wrote:
a) We are with some restricted search_path set (e.g. after SET search_path TO
   schema_1) and we make a SELECT * FROM base.view.  What we're seeing is
   that views are tied to the schema where they were created, no matter if
   they are or not fully qualified in their definition.  Is this correct?  I'd
   expect views to respect the search_path if they aren't fully qualified
   (i.e. if I created them as SELECT something FROM table instead of SELECT
   something FROM schema.table).


Yes. Views essentially end up with schemas hard-coded into them. If  
that doesn't work you should be able to create views on set returning  
functions, though that's obviously more work.
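
A minimal illustration of that hard-coding (schema and table names are
hypothetical):

```sql
SET search_path TO schema_1;
CREATE VIEW v AS SELECT * FROM accounts;  -- 'accounts' is resolved to schema_1.accounts right here

SET search_path TO schema_2;
SELECT * FROM v;  -- still reads schema_1.accounts, ignoring the new search_path
```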


I don't know how hard it would be to allow views to become  
search_path aware on execution, or if such a change would be accepted.


Ultimately though, why is this a problem? Aren't you defining all the  
views in their appropriate schema?


b) We are seeing a really weird behaviour when we use functions.  It appears
   that it disregards the search_path from the second run and on.  If I SELECT
   from a function with the search_path set to, e.g., schema_1, then when I
   set it to schema_2 then I'll still see data from schema_1.  Note, here,
   that even the function being created on the base schema results were
   correctly retrieved at first execution.  (You can repeat that using the
   above dump by connecting, setting the search path to any of three schemas,
   selecting from the function, changing to another schema and then selecting
   again from the same function -- you'll see the same result --; then, if you
   reconnect and do a first select in another schema and change your
   search_path you'll see a different result from the previous connection but
   it will be the same result for both search_paths.)


This is due to query plan caching. If you grab a new connection every  
time you switch companies it won't be an issue. There's also been  
talk of adding the ability to 'reset' a connection, but I don't  
remember the status of that or if it would reset the query plan cache.
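
One workaround sketch (untested, table name made up): in plpgsql, strings run
through EXECUTE are planned at call time rather than cached, so they resolve
unqualified names against the current search_path:

```sql
CREATE FUNCTION current_widgets() RETURNS SETOF widgets AS $$
DECLARE
    r widgets%ROWTYPE;
BEGIN
    -- dynamic SQL is re-planned on every call, so the unqualified
    -- table name follows whatever search_path is in effect now
    FOR r IN EXECUTE 'SELECT * FROM widgets' LOOP
        RETURN NEXT r;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;
```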

--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Best practice to grant all privileges on all bjects in database?

2006-05-25 Thread Jim Nasby

On May 24, 2006, at 10:50 AM, Bruno Wolff III wrote:

On Mon, May 22, 2006 at 12:59:06 +0300,
  Joe Kramer [EMAIL PROTECTED] wrote:

On a related note, which objects need to be GRANTed specifically?
It is said that the following objects can have permissions GRANTed:

1. TABLE
2. DATABASE
3. FUNCTION
4. LANGUAGE
5. SCHEMA
6. TABLESPACE

What about SEQUENCE, TRIGGER? The PostgreSQL manual makes no mention of this.


Did you look at:
http://developer.postgresql.org/docs/postgres/sql-grant.html


Only helps if the OP is willing to run on HEAD; grant on sequence is  
not in 8.1 (at least not according to the docs).
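
(If I remember correctly, in 8.1 a sequence can still be granted on using the
table form of GRANT; nextval() needs UPDATE and currval() needs SELECT. A
sketch, with a made-up sequence and role name:

```sql
GRANT SELECT, UPDATE ON widgets_id_seq TO some_user;
```
)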


As for triggers, I don't really see how that would make any sense.
--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] move from 1 database to another

2006-05-25 Thread Jim Nasby

On May 25, 2006, at 1:52 PM, P.M wrote:

I would like to know how can i move from 1 database to another one ?

basically i want to run a *.sql script in which i connect under a  
particular database (for example : template1 or postgres) and my  
*.sql script should create a new database (for example : test) and  
after execute several SQL command under test database.


CREATE DATABASE new_database TEMPLATE = database_to_copy;
-- Insert additional SQL here...

--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] What to expect when mixing inherited tables and different schemas while dealing with functions and views?

2006-05-25 Thread Jorge Godoy
Em Quinta 25 Maio 2006 19:27, Jim Nasby escreveu:

 Yes. Views essentially end up with schemas hard-coded into them. If
 that doesn't work you should be able to create views on set returning
 functions, though that's obviously more work.

 I don't know how hard it would be to allow views to become
 search_path aware on execution, or if such a change would be accepted.

 Ultimately though, why is this a problem? Aren't you defining all the
 views in their appropriate schema?

Yes, this is a problem because I have to create the views on each schema.  If 
I could just use search_path, then I would end up with only one instance of 
each view and function and they'd do the right thing using data from that 
particular schema.

As they don't respect the search_path, I have to create n+1 copies of 
each view/function, one for each schema and one for the base schema.  This 
looks inefficient, because if I need to change the view, I'll have to 
change n+1 views instead of just one.  The same applies to functions :-(

I confess that I expected it to respect the search_path.

 This is due to query plan caching. If you grab a new connection every
 time you switch companies it won't be an issue. There's also been
 talk of adding the ability to 'reset' a connection, but I don't
 remember the status of that or if it would reset the query plan cache.

Making them respect the search_path would also be nice.  I thought that 
VOLATILE functions had no cache, even for the query plan.

One last try...  Languages other than plpgsql don't reuse / save the query 
plan by default, right?  So if I switch to, e.g., plpythonu I wouldn't, 
theoretically, have this problem when running functions, right?


It turns out that this won't work in an easy way in a standard installation of 
PostgreSQL... :-(  Unfortunately.  It would save a lot of code, a lot of 
redundant definitions and would make life a lot easier to manage the 
database.

If we can solve the problem with functions by using a language other than 
plpgsql (and of course sql), then we'd need to profile and try using another 
language to write our functions and replace views with set returning 
functions...  This isn't all that pretty, but instead of changing 1000 
instances of each view and each function that needs some modification I 
prefer changing one function that doesn't save the query plan (if possible at 
all, of course).


Thanks again,
-- 
Jorge Godoy   [EMAIL PROTECTED]

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Best practice to grant all privileges on all bjects in database?

2006-05-25 Thread Jorge Godoy
Em Quinta 25 Maio 2006 19:33, Jim Nasby escreveu:

 Only helps if the OP is willing to run on HEAD; grant on sequence is
 not in 8.1 (at least not according to the docs).

 As for triggers, I don't really see how that would make any sense.

A trigger could start some modification in a table where the user has no 
direct access, e.g. a logging table.  By granting access to the trigger 
and making the trigger able to access that table, the operation could be 
completed and data would be safe from users.  I dunno, though, if the 
permissions set on the function would allow that...

-- 
Jorge Godoy   [EMAIL PROTECTED]

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] move from 1 database to another

2006-05-25 Thread Michael Fuhr
On Thu, May 25, 2006 at 05:41:34PM -0500, Jim Nasby wrote:
 On May 25, 2006, at 1:52 PM, P.M wrote:
 I would like to know how can i move from 1 database to another one ?
 
 basically i want to run a *.sql script in which i connect under a  
 particular database (for example : template1 or postgres) and my  
 *.sql script should create a new database (for example : test) and  
 after execute several SQL command under test database.
 
 CREATE DATABASE new_database TEMPLATE = database_to_copy;
 -- Insert additional SQL here...

I wonder if the OP is looking for \c new_database.

-- 
Michael Fuhr

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] move from 1 database to another

2006-05-25 Thread Ragnar
On fim, 2006-05-25 at 11:52 -0700, P.M wrote:

 I would like to know how can i move from 1 database to another one ?
 
 basically i want to run a *.sql script in which i connect under a
 particular database (for example : template1 or postgres) and my *.sql
 script should create a new database (for example : test) and after
 execute several SQL command under test database.

I guess you are asking how to connect to a new database
in the middle of a SQL script.

Assuming the script will be run by psql, you can use
\connect test
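
For example, a script along these lines (names are just placeholders):

```sql
-- run with: psql -f setup.sql template1
CREATE DATABASE test;
\connect test
CREATE TABLE demo (id serial PRIMARY KEY);
```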

gnari



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Status of gist locking in 8.1.3?

2006-05-25 Thread Francisco Reyes
The release notes for 8.1, http://www.postgresql.org/docs/whatsnew, states 
about GIST


indexing mechanism has improved to support the high-speed concurrency, 
recoverability and update performance


As I write this, I am creating an index with GiST, and a select I tried to do 
on the table froze. Using PostgreSQL 8.1.3.


The documents for GIST seem to imply that by 8.1 the lock issue would be 
resolved, 
http://www.sai.msu.su/~megera/oddmuse/index.cgi/GiST_Concurrency_Recovery.


Is the locking an issue mostly at index creation time?



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] reindexdb program error under PG 8.1.3

2006-05-25 Thread David Wall




Rats! I found nothing. But just for grins, I ran the reindexdb
program a second time, and this time it reported a different OID,
higher than the previous. I know we didn't change the template1
database other than using it when creating new databases, creating
users, tablespaces, etc.

My guess is this is somehow related to the reindexdb program. Do you
know if there's any advantage to using the reindexdb program versus
running the psql commands REINDEX DATABASE and REINDEX SYSTEM? 

When reading the docs, I'm beginning to wonder if it's even necessary
to do the reindexing. I think it may be a holdover from earlier
thinking that doesn't apply anymore. I believe the idea is that as
rows are inserted, updated and deleted, the index can get rather
scrambled, and that reindexing often sorts the values and rewrites that
data so that the indexes take less space and work faster. Is that not
the case here?

Thanks,
David



Tom Lane wrote:
 David Wall [EMAIL PROTECTED] writes:
 Thanks, but I clearly lack that expertise.  How do I find "a table with
 that OID"?

 Try "select * from pg_class where oid = 41675" and
 "select * from pg_attribute where attrelid = 41675"
 to see if you find any rows.

 regards, tom lane





Re: [GENERAL] Status of gist locking in 8.1.3?

2006-05-25 Thread Tom Lane
Francisco Reyes [EMAIL PROTECTED] writes:
 As I write this, I am creating an index with GiST, and a select I tried to do 
 on the table froze. Using PostgreSQL 8.1.3

CREATE INDEX shouldn't block any concurrent SELECT, regardless of which
index AM is involved.  Can you provide a reproducible test case?

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] 8.1.4: Who says PHP deprecated addslashes since 4.0?

2006-05-25 Thread ljb
[EMAIL PROTECTED] wrote:
 ljb [EMAIL PROTECTED] writes:
 |  addslashes() or magic_quotes. We note that these tools have been 
 deprecated
 |  by the PHP group since version 4.0.

 Can anyone provide a source for the statement?

 I'm not going to put words in Josh's mouth about where he got that from,
 but anyone who reads all of the comments at
 http://us3.php.net/manual/en/function.addslashes.php
 ought to come away suitably unimpressed with the security of that
 function.

Yes, sorry, I did see those comments, although I don't think they are from
the PHP group themselves.  But I missed the statement on the pg_escape_string
manual page saying use of this function is recommended instead of
addslashes().  I still think "since version 4.0" is wrong.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] reindexdb program error under PG 8.1.3

2006-05-25 Thread Robert Treat
On Thursday 25 May 2006 19:34, David Wall wrote:
 Rats!  I found nothing.  But just for grins, I ran the reindexdb
 program a second time, and this time it reported a different OID, higher
 than the previous.  I know we didn't change the template1 database other
 than using it when creating new databases, creating users, tablespaces,
 etc.

 My guess is this is somehow related to the reindexdb program.  Do you
 know if there's any advantage to using the reindexdb program versus
 running the psql commands REINDEX DATABASE and REINDEX SYSTEM?

 When reading the docs, I'm beginning to wonder if it's even necessary to
 do the reindexing.  I think it may be a holdover from earlier thinking
 that doesn't apply anymore.  I believe the idea is that as rows are
 inserted, updated and deleted, the index can get rather scrambled, and
 that reindexing often sorts the values and rewrites that data so that
 the indexes take less space and work faster.  Is that not the case here?


It really isn't necessary. That said, there is *something* going on with your 
db, so it might be prudent to figure out what it is. 

-- 
Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] reindexdb program error under PG 8.1.3

2006-05-25 Thread David Wall


 It really isn't necessary. That said, there is *something* going on with your
 db, so it might be prudent to figure out what it is.

Thanks, Robert.  Actually, I noted that each time I ran my backup 
script, the OID mentioned in the error was bigger than the previous number.


Before I run the reindexdb program, I am running the vacuumlo (from 
contrib).  Just by putting a sleep 5 between the two commands, I cannot 
reproduce the error.  If I remove the sleep, then the error happens 
again.  Clearly, there's something left over from vacuumlo that 
reindexdb is finding.


Any thoughts on that observation?

Thanks,
David

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] Toronto PostgreSQL Gathering - June 1st

2006-05-25 Thread Christopher Browne
I had intended to try to organize some form of Toronto PostgreSQL
'user group'; some challenges have gotten in the way of having that be
at all large scale, but it certainly makes sense to try to get local
people interested in PostgreSQL together every so often.

A number of groups gather at different times at Toronto's Linux Caffe;
the first Thursday of the month seems an opportune time for this, and
the next incidence of that is next Thursday, June 1st.

Linux Caffe is located at Grace and Harbord, just South of Christie
station.  It has free wifi, so feel free to bring a laptop.  David
Patrick, the proprietor, has found himself reasonably keen about
PostgreSQL; he's using it, along with SQL Ledger, for some of his
accounting efforts.  He does some quite excellent sandwiches and
panini, too, and it has been too long since I have had one...
-- 
output = (cbbrowne @ acm.org)
http://linuxdatabases.info/info/lisp.html
Rules of the Evil Overlord #49. If I learn the whereabouts of the one
artifact  which can destroy me, I  will not send all  my troops out to
seize it.  Instead I  will send them  out to seize something  else and
quietly put a Want-Ad in the local paper.
http://www.eviloverlord.com/

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] reindexdb program error under PG 8.1.3

2006-05-25 Thread Tom Lane
David Wall [EMAIL PROTECTED] writes:
 Thanks, Robert.  Actually, I noted that each time I ran my backup 
 script, the OID mentioned in the error was bigger than previous number.

That's fairly interesting, but can you provide a self-contained test
case?  The reindexdb script really doesn't do anything except invoke
REINDEX, so I see no way to blame it.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Toronto PostgreSQL Gathering - June 1st

2006-05-25 Thread Christopher Browne
Martha Stewart called it a Good Thing when Christopher Browne [EMAIL 
PROTECTED] wrote:
 A number of groups gather at different times at Toronto's Linux Caffe;
 the first Thursday of the month seems an opportune time for this, and
 the next incidence of that is next Thursday, June 1st.

And oops, 7pm is the appointed time...
-- 
let name=cbbrowne and tld=gmail.com in String.concat @ [name;tld];;
http://cbbrowne.com/info/spreadsheets.html
A cynic is a man who knows the price  of everything, and the value of
nothing.  -- Oscar Wilde

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Restricting access to rows?

2006-05-25 Thread Benjamin Smith
We have a growing ASP-hosted application built on PHP/Postgres 8.1, and are 
getting requests from clients to manipulate the databases more directly. 
However, the structure of our databases prevents this from happening readily.

Assume I have two tables configured thusly: 

create table customers (
id serial unique not null, 
name varchar not null
); 

create table widgets ( 
customers_id integer not null references customers(id), 
name varchar not null, 
value real not null default 0
);

insert into customers (name) values ('Bob'); 
insert into customers (name) values ('Jane'); 
insert into widgets (customers_id, name, value) VALUES (1, 'Foo', 100); 
insert into widgets (customers_id, name, value) VALUES (1, 'Bar', 50); 
insert into widgets (customers_id, name, value) VALUES (2, 'Bleeb', 500); 

This leaves us with two customers, Bob who has two widgets worth $150, and 
Jane with one widget worth $500. 

How can I set up a user so that Bob can update his records, without letting 
Bob update Jane's records? Is it possible, say with a view or some other 
intermediate data type? 

Thanks, 

-Ben 
-- 
The best way to predict the future is to invent it.
- XEROX PARC slogan, circa 1978

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Restricting access to rows?

2006-05-25 Thread Andreas Kretschmer
Benjamin Smith [EMAIL PROTECTED] schrieb:
 How can I set up a user so that Bob can update his records, without letting 
 Bob update Jane's records? Is it possible, say with a view or some other 
 intermediate data type? 

You can use a VIEW to select all rows for CURRENT_USER, and then create
RULES for this view to do INSERT, UPDATE and DELETE.
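
Using the tables from the original post, a rough, untested sketch (it assumes
one database role per customer, named exactly like customers.name):

```sql
CREATE VIEW my_widgets AS
    SELECT w.customers_id, w.name, w.value
    FROM widgets w
    JOIN customers c ON c.id = w.customers_id
    WHERE c.name = CURRENT_USER;

CREATE RULE my_widgets_update AS ON UPDATE TO my_widgets
    DO INSTEAD
    UPDATE widgets SET value = NEW.value
    WHERE name = OLD.name
      AND customers_id IN (SELECT id FROM customers
                           WHERE name = CURRENT_USER);

-- users only ever touch the view, never the base table
REVOKE ALL ON widgets FROM PUBLIC;
GRANT SELECT, UPDATE ON my_widgets TO PUBLIC;
```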

A nice framework for row-level access-control is 'veil':
http://pgfoundry.org/projects/veil


HTH, Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
If I was god, I would recompile penguin with --enable-fly. (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings