Re: [HACKERS] Survey results on Oracle/M$NT4 to PG72/RH72 migration

2002-03-15 Thread Jean-Paul ARGUDO

  AFAIK some SQL/C type precompilers and other frontend tools for other
  databases do generate stored procedures for PREPAREd CURSORs.
 
 You mean ECPG should/could replace a PREPARE statement with a CREATE
 FUNCTION and then the usage of the cursor with the usage of that
 function?
 
 Should be possible, but needs some work.

Wow Michael, this would be much much much appreciated. :-)
 
  I'm afraid ECPG does not :(
 
 That's correct of course.
 Michael

Thanks. Then we know our conclusions on the survey are right.

We hope functionality for prepared cursors, bind variables, etc. will
come soon in PG :-)

We are actually thinking about ways to patch PostgreSQL and contribute
this way, adding a feature we need for business.
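
As a concrete illustration of the PREPARE-to-function rewrite discussed
above (a sketch only: the table, column, and function names are invented,
and this is not something ECPG emits today), the idea is that ECPG could
turn a prepared cursor's query into a server-side SQL function once, and
then open the cursor against that function:

    -- hypothetical rewrite target; emp and emp_by_dept are made-up names
    CREATE FUNCTION emp_by_dept(text) RETURNS SETOF text AS '
        SELECT name FROM emp WHERE dept = $1;
    ' LANGUAGE 'sql';

    BEGIN;
    DECLARE cur CURSOR FOR SELECT emp_by_dept('sales');
    FETCH 10 FROM cur;
    CLOSE cur;
    COMMIT;

Here the function parameter $1 plays the role of the statement's bind
variable.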

Thanks.

-- 
Jean-Paul ARGUDO                 IDEALX S.A.S
Database consultant              15-17, av. de Ségur
http://www.idealx.com            F-75007 PARIS

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Survey results on Oracle/M$NT4 to PG72/RH72 migration

2002-03-15 Thread Michael Meskes

On Fri, Mar 15, 2002 at 10:25:09AM +0100, Jean-Paul ARGUDO wrote:
  You mean ECPG should/could replace a PREPARE statement with a CREATE
  FUNCTION and then the usage of the cursor with the usage of that
  function?
  
  Should be possible, but needs some work.
 
 Wow Michael, this would be much much much appreciated. :-)

Problem is I have no idea when I will find time to care about such an
addition.  It certainly won't be possible prior to May or so.  Sorry.

Michael
-- 
Michael Meskes
[EMAIL PROTECTED]
Go SF 49ers! Go Rhein Fire!
Use Debian GNU/Linux! Use PostgreSQL!

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] [BUGS] Bug #613: Sequence values fall back to previously chec

2002-03-15 Thread Tom Lane

'Ben Grimm' [EMAIL PROTECTED] writes:
 When these bugs are fixed there is still the issue of bug #3 that I 
 came across.  The one that I work around by resetting log_cnt to 0 when a 
 backend initializes a sequence.  It's this third bug that made the other 
 two so apparent.  Fixing them does not obviate the need to fix this one.

What's bug #3?  I don't recall a third issue.

regards, tom lane

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] [BUGS] Bug #613: Sequence values fall back to previously chec

2002-03-15 Thread Tom Lane

Attached is a patch against current CVS that fixes both of the known
problems with sequences: failure to flush XLOG after a transaction
that only does SELECT nextval(), and failure to force a new WAL
record to be written on the first nextval after a checkpoint.
(The latter uses Vadim's idea of looking at the sequence page LSN.)
I haven't tested it really extensively, but it seems to cure the
reported problems.

Some notes:

1. I found what I believe is another bug in the sequence logic:
fetch = log = fetch - log + SEQ_LOG_VALS;
should be
fetch = log = fetch + SEQ_LOG_VALS;
I can't see any reason to reduce the number of values prefetched
by the number formerly prefetched.  Also, if the sequence's cache
setting is large (more than SEQ_LOG_VALS), the original code could
easily fail to fetch as many values as it was supposed to cache,
let alone additional ones to be prefetched and logged.
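
(For context, SEQ_LOG_VALS is the constant number of values -- 32 in
sequence.c -- that nextval() logs ahead of what it hands out, and the
per-sequence cache it interacts with is the one requested at creation
time, e.g.:

    CREATE SEQUENCE big_cache_seq CACHE 50;

With a cache like that, larger than SEQ_LOG_VALS, the old arithmetic
could log fewer values than the backend was about to cache.)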

2. I renamed XLogCtl->RedoRecPtr to SavedRedoRecPtr, and renamed
the associated routines to SetSavedRedoRecPtr/GetSavedRedoRecPtr,
in hopes of reducing confusion.

3. I believe it'd now be possible to remove SavedRedoRecPtr and
SetSavedRedoRecPtr/GetSavedRedoRecPtr entirely, in favor of letting
the postmaster fetch the updated pointer with GetRedoRecPtr just
like a backend would.  This would be cleaner and less code ... but
someone might object that it introduces a risk of postmaster hangup,
if some backend crashes whilst holding info_lck.  I consider that
risk minuscule given the short intervals in which info_lck is held,
but it can't be denied that the risk is not zero.  Thoughts?

Comments?  Unless I hear objections I will patch this in current
and the 7.2 branch.  (If we agree to remove SavedRedoRecPtr,
though, I don't think we should back-patch that change.)

regards, tom lane



*** src/backend/access/transam/xact.c.orig  Tue Mar 12 07:56:31 2002
--- src/backend/access/transam/xact.c   Thu Mar 14 20:00:50 2002
***
*** 546,577 
xid = GetCurrentTransactionId();
  
/*
!* We needn't write anything in xlog or clog if the transaction was
!* read-only, which we check by testing if it made any xlog entries.
 */
!   if (MyLastRecPtr.xrecoff != 0)
{
-   XLogRecData rdata;
-   xl_xact_commit xlrec;
XLogRecPtr  recptr;
  
BufmgrCommit();
  
-   xlrec.xtime = time(NULL);
-   rdata.buffer = InvalidBuffer;
-   rdata.data = (char *) &(xlrec);
-   rdata.len = SizeOfXactCommit;
-   rdata.next = NULL;
- 
START_CRIT_SECTION();
  
!   /*
!* SHOULD SAVE ARRAY OF RELFILENODE-s TO DROP
!*/
!   recptr = XLogInsert(RM_XACT_ID, XLOG_XACT_COMMIT, &rdata);
  
/*
!* Sleep before commit! So we can flush more than one commit
 * records per single fsync.  (The idea is some other backend may
 * do the XLogFlush while we're sleeping.  This needs work still,
 * because on most Unixen, the minimum select() delay is 10msec or
--- 546,593 
xid = GetCurrentTransactionId();
  
/*
!* We only need to log the commit in xlog and clog if the transaction made
!* any transaction-controlled XLOG entries.  (Otherwise, its XID appears
!* nowhere in permanent storage, so no one will ever care if it
!* committed.)  However, we must flush XLOG to disk if we made any XLOG
!* entries, whether in or out of transaction control.  For example, if we
!* reported a nextval() result to the client, this ensures that any XLOG
!* record generated by nextval will hit the disk before we report the
!* transaction committed.
 */
!   if (MyXactMadeXLogEntry)
{
XLogRecPtr  recptr;
  
BufmgrCommit();
  
START_CRIT_SECTION();
  
!   if (MyLastRecPtr.xrecoff != 0)
!   {
!   /* Need to emit a commit record */
!   XLogRecData rdata;
!   xl_xact_commit xlrec;
! 
!   xlrec.xtime = time(NULL);
!   rdata.buffer = InvalidBuffer;
!   rdata.data = (char *) &(xlrec);
!   rdata.len = SizeOfXactCommit;
!   rdata.next = NULL;
! 
!   /*
!* XXX SHOULD SAVE ARRAY OF RELFILENODE-s TO DROP
!*/
!   recptr = XLogInsert(RM_XACT_ID, XLOG_XACT_COMMIT, &rdata);
!   }
!   else
!   {
!   /* Just flush through last record written by me */
!   recptr = ProcLastRecEnd;
!   }
  
/*
!* Sleep 

Re: [HACKERS] User Level Lock question

2002-03-15 Thread Tom Lane

Lance Ellinghaus [EMAIL PROTECTED] writes:
 Is there an easy way to test the lock on a user level lock without actually
 issuing the lock?

Why would you ever want to do such a thing?  If you test the lock but
don't actually acquire it, someone else might acquire the lock half a
microsecond after you look at it --- and then what does your test result
mean?  It's certainly unsafe to take any action based on assuming that
the lock is free.

I suspect what you really want is a conditional acquire, which you can
get (in recent versions) using the dontWait parameter to LockAcquire.

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] [BUGS] Bug #613: Sequence values fall back to previously

2002-03-15 Thread Bruce Momjian

Tom Lane wrote:
 Attached is a patch against current CVS that fixes both of the known
 problems with sequences: failure to flush XLOG after a transaction
 that only does SELECT nextval(), and failure to force a new WAL
 record to be written on the first nextval after a checkpoint.
 (The latter uses Vadim's idea of looking at the sequence page LSN.)
 I haven't tested it really extensively, but it seems to cure the
 reported problems.

I can confirm that the patch fixes the problem shown in my simple test:

test=> create table test (x serial, y varchar(255));
NOTICE:  CREATE TABLE will create implicit sequence 'test_x_seq' for SERIAL column 'test.x'
NOTICE:  CREATE TABLE / UNIQUE will create implicit index 'test_x_key' for table 'test'
CREATE
test=> insert into test (y) values ('lkjasdflkja sdfl;kj asdfl;kjasdf');
INSERT 16561 1
test=> insert into test (y) values ('lkjasdflkja sdfl;kj asdfl;kjasdf');
INSERT 16562 1
test=> insert into test (y) values ('lkjasdflkja sdfl;kj asdfl;kjasdf');
INSERT 16563 1
...

test=> select nextval('test_x_seq');
 nextval 
---------
      22
(1 row)

test=> checkpoint;
CHECKPOINT
test=> insert into test (y) values ('lkjasdflkja sdfl;kj asdfl;kjasdf');
INSERT 16582 1
test=> insert into test (y) values ('lkjasdflkja sdfl;kj asdfl;kjasdf');
INSERT 16583 1
test=> insert into test (y) values ('lkjasdflkja sdfl;kj asdfl;kjasdf');
INSERT 16584 1

[ kill -9 to backend ]

$ psql test
Welcome to psql, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help on internal slash commands
       \g or terminate with semicolon to execute query
       \q to quit

test=> select nextval('test_x_seq');
 nextval 
---------
      56
(1 row)


-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] [BUGS] Bug #613: Sequence values fall back to previously

2002-03-15 Thread Bruce Momjian

Tom Lane wrote:
 2. I renamed XLogCtl->RedoRecPtr to SavedRedoRecPtr, and renamed
 the associated routines to SetSavedRedoRecPtr/GetSavedRedoRecPtr,
 in hopes of reducing confusion.

Good.

 3. I believe it'd now be possible to remove SavedRedoRecPtr and
 SetSavedRedoRecPtr/GetSavedRedoRecPtr entirely, in favor of letting
 the postmaster fetch the updated pointer with GetRedoRecPtr just
 like a backend would.  This would be cleaner and less code ... but
 someone might object that it introduces a risk of postmaster hangup,
 if some backend crashes whilst holding info_lck.  I consider that
 risk minuscule given the short intervals in which info_lck is held,
 but it can't be denied that the risk is not zero.  Thoughts?

The change sounds good to me.

 Comments?  Unless I hear objections I will patch this in current
 and the 7.2 branch.  (If we agree to remove SavedRedoRecPtr,
 though, I don't think we should back-patch that change.)

Totally agree.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] [BUGS] Bug #613: Sequence values fall back to previously chec

2002-03-15 Thread Tom Lane

'Ben Grimm' [EMAIL PROTECTED] writes:
 On Fri, 15 Mar 2002, Tom Lane wrote:
 What's bug #3?  I don't recall a third issue.

 The problem I was seeing before is that when the postmaster was shut down 
 properly, log_cnt in the sequence record was saved with whatever value it 
 had at the time.

Right, it's supposed to do that.

 So when it loaded from disk it would have a value greater 
 than zero, resulting in no XLogInsert until you'd exceeded log_cnt calls to
 nextval.  

This is the same as the post-checkpoint issue: we fix it by forcing an
XLogInsert on the first nextval after a checkpoint (or system startup).

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] timestamp_part() bug?

2002-03-15 Thread Thomas Lockhart

 There is a problem with epoch as well that was not in 7.1.3:
 7.1.3# select extract(epoch from '00:00:34'::time), now();
 7.1.3# 34  2002-03-05 22:13:16 +01
 Is that a bug or I didn't understand the new date/time types ?

Looks like a bug (or at least it looks like it behaves differently than
I would expect). Thanks for the report; I'll look at it asap.

   - Thomas

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] psql and output from \?

2002-03-15 Thread Bruce Momjian


Your patch has been added to the PostgreSQL unapplied patches list at:

http://candle.pha.pa.us/cgi-bin/pgpatches

I will try to apply it within the next 48 hours.

---


Ian Barwick wrote:
 On Thursday 14 March 2002 22:40, Bruce Momjian wrote:
I guess some of these weren't introduced by you, but if someone is
going to fix this, he might as well take care of these.
  
   Will submit another patch in the morning (it's late here).
 
  Ian, do you have another version of this patch ready?
 
 Patch attached (diff against CVS, replacing previous patch).
 
 Ian Barwick

[ Attachment, skipping... ]

 
 ---(end of broadcast)---
 TIP 6: Have you searched our list archives?
 
 http://archives.postgresql.org

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Time for 7.2.1?

2002-03-15 Thread Thomas F. O'Connell

is there any further word on 7.2.1 at this point? haven't seen mention 
of it on the list in a while. is it still waiting on something big?

-tfo

Bruce Momjian wrote:
 Applied to current and 7.2.X.  Thanks.
 
 (No delay for /contrib commits from maintainers.)


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Time for 7.2.1?

2002-03-15 Thread Tom Lane

Thomas F. O'Connell [EMAIL PROTECTED] writes:
 is there any further word on 7.2.1 at this point? haven't seen mention 
 of it on the list in a while. is it still waiting on something big?

Well, we were gonna release it last weekend, but now it's waiting on
sequence fixes (currently being tested).  And Lockhart may also wish to
hold it up while he looks at the recently reported timestamp_part
problem.  (Thomas, are you considering backpatching that?)  One way
or another I'd expect it next week sometime.

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] [BUGS] Bug #613: Sequence values fall back to previously chec

2002-03-15 Thread Mikheev, Vadim

 Attached is a patch against current CVS that fixes both of the known
 problems with sequences: failure to flush XLOG after a transaction

Great! Thanks... and sorry for missing these cases a year ago -:)

Vadim

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] Client/Server compression?

2002-03-15 Thread Greg Copeland

On Thu, 2002-03-14 at 14:03, Arguile wrote:

[snip]

 I'm sceptical of the benefit such compression would provide in this setting
 though. We're dealing with sets that would have to be compressed every time
 (no caching), which might be a bit expensive on a database server. Having it
 as a default-off option for psql might be nice, but I wonder if it's worth
 the time, effort, and cpu cycles.
 

I dunno.  That's a good question.  For now, I'm making what tends to be
a safe assumption (oops...that word) that most database servers will be
I/O bound rather than CPU bound.  *IF* that assumption holds true, it
sounds like it may make even more sense to implement this.  I do know
that in the past, I've seen 90+% compression ratios on many databases
and 50% - 90% compression ratios on result sets using tunneled
compression schemes (which were compressing things other than datasets,
which probably hurt overall compression ratios).  Depending on the work
load and the available resources on a database system, it's possible
that latency could actually be reduced, depending on where you measure
it.  That is, do you measure latency to the first packet back to the
remote, or to the last packet?  If you use the last packet, compression
may actually win.

My current thoughts are to allow for enabled/disabled compression and
variable compression settings (1-9) within a database configuration. 
Worse case, it may be fun to implement and I'm thinking there may
actually be some surprises as an end result if it's done properly.

In looking at the communication code, it looks like only an 8k buffer is
used.  I'm currently looking to bump this up to 32k as most OS's tend to
have a sweet throughput spot with buffer sizes between 32k and 64k. 
Others, depending on the devices in use, like even bigger buffers. 
Since this may be a minor optimization, especially on
a heavily loaded server, we may want to consider making this a
configurable parameter.

Greg








Re: [HACKERS] Client/Server compression?

2002-03-15 Thread Greg Copeland

On Thu, 2002-03-14 at 19:43, Bruce Momjian wrote:
 Kyle wrote:
  On the subject of client/server compression, does the server
  decompress toast data before sending it to the client?  If so, why
  (other than requiring modifications to the protocol)?
  
  On the flip side, does/could the client toast insert/update data
  before sending it to the server?
 
 It has to decompress it so the server functions can process it too.  Hard
 to avoid that.  Of course, in some cases, it doesn't need to be
 processed on the server, just passed, so it would have to be done
 conditionally.
 

Along those lines, it occurred to me that if the compressor somehow knew the
cardinality of the data rows involved with the result set being
returned, a compressor data dictionary (...think of it as a heads-up on
patterns to look for) could be created using the unique
cardinality values, which, I'm thinking, could dramatically improve the
level of compression for the data being transmitted.

Just some food for thought.  After all, these two seem to be somewhat
related as you wouldn't want the communication layer attempting to
recompress data which was natively compressed and needed to be
transparently transmitted.

Greg






Re: [HACKERS] insert statements

2002-03-15 Thread Thomas Lockhart

Sorry for the previous sarcastic response.

But I *really* don't see the benefit of that table(table.col)
syntax. Especially when it cannot (?? we need a counterexample) lead to
any additional interesting beneficial behavior.

   - Thomas

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] Time for 7.2.1?

2002-03-15 Thread Tom Lane

Thomas Lockhart [EMAIL PROTECTED] writes:
 It is somewhat complicated by the fact that my code tree is pretty
 massively changed in this area as I implement an int64-based date/time
 storage alternative to the float64 scheme we use now. The alternative
 would be enabled with something like #ifdef HAVE_INT64_TIMESTAMP.
 Benefits would include having a predictable precision behavior for all
 allowed dates and times.

Interesting.  But if this is just an #ifdef, I can see some serious
problems coming up the first time someone runs a backend compiled with
one set of timestamp code in a database created with the other.  May
I suggest that the timestamp representation be identified in a field
added to pg_control?  That's how we deal with other options that
affect database contents ...

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] User Level Lock question

2002-03-15 Thread Lance Ellinghaus

I know it does not sound like something that would need to be done, but here
is why I am looking at doing this...

I am trying to replace a low level ISAM database with PostgreSQL. The low
level ISAM db allows locking a record during a read to allow Exclusive
access to the record for that process. If someone tries to do a READ
operation on that record, it is skipped. I have to duplicate this
functionality. The application also allows locking multiple records and then
unlocking individual records or unlocking all of them at once. This cannot
be done easily with PostgreSQL unless I add a status field to the records
and manage them. This can be done, but User Level Locks seem like a much
better solution as they provide faster locking, no writes to the database,
when the backend quits all locks are released automatically, and I could
lock multiple records and then clear them as needed. They also exist outside
of transactions!

So my idea was to use User Level Locks on records and then include a test on
the lock status in my SELECT statements to filter out any records that have
a User Level Lock on it. I don't need to set it during the query, just test
if there is a lock to remove them from the query. When I need to do a true
lock during the SELECT, I can do it with the supplied routines.

Does this make any more sense now or have I made it that much more
confusing?

Lance

- Original Message -
From: Tom Lane [EMAIL PROTECTED]
To: Lance Ellinghaus [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Friday, March 15, 2002 9:11 AM
Subject: Re: [HACKERS] User Level Lock question


 Lance Ellinghaus [EMAIL PROTECTED] writes:
  Is there an easy way to test the lock on a user level lock without
actually
  issuing the lock?

 Why would you ever want to do such a thing?  If you test the lock but
 don't actually acquire it, someone else might acquire the lock half a
 microsecond after you look at it --- and then what does your test result
 mean?  It's certainly unsafe to take any action based on assuming that
 the lock is free.

 I suspect what you really want is a conditional acquire, which you can
 get (in recent versions) using the dontWait parameter to LockAcquire.

 regards, tom lane


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] insert statements

2002-03-15 Thread Tom Lane

Vince Vielhaber [EMAIL PROTECTED] writes:
 On Fri, 15 Mar 2002, Thomas Lockhart wrote:
 But I *really* don't see the benefit of that table(table.col)
 syntax. Especially when it cannot (?? we need a counterexample) lead to
 any additional interesting beneficial behavior.

 The only benefit I can come up with is existing stuff written under
 the impression that it's acceptable.

That's the only benefit I can see either --- but it's not negligible.
Especially not if the majority of other DBMSes will take this syntax.

I was originally against adding any such thing, but I'm starting to
lean in the other direction.

I'd want it to error out on INSERT foo (bar.col), though ;-)

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] User Level Lock question

2002-03-15 Thread Greg Copeland

Are you trying to do a select for update?

Greg


On Fri, 2002-03-15 at 13:54, Lance Ellinghaus wrote:
 I know it does not sound like something that would need to be done, but here
 is why I am looking at doing this...
 
 I am trying to replace a low level ISAM database with PostgreSQL. The low
 level ISAM db allows locking a record during a read to allow Exclusive
 access to the record for that process. If someone tries to do a READ
 operation on that record, it is skipped. I have to duplicate this
 functionality. The application also allows locking multiple records and then
 unlocking individual records or unlocking all of them at once. This cannot
 be done easily with PostgreSQL unless I add a status field to the records
 and manage them. This can be done, but User Level Locks seem like a much
 better solution as they provide faster locking, no writes to the database,
 when the backend quits all locks are released automatically, and I could
 lock multiple records and then clear them as needed. They also exist outside
 of transactions!
 
 So my idea was to use User Level Locks on records and then include a test on
 the lock status in my SELECT statements to filter out any records that have
 a User Level Lock on it. I don't need to set it during the query, just test
 if there is a lock to remove them from the query. When I need to do a true
 lock during the SELECT, I can do it with the supplied routines.
 
 Does this make any more sense now or have I made it that much more
 confusing?
 
 Lance
 
 - Original Message -
 From: Tom Lane [EMAIL PROTECTED]
 To: Lance Ellinghaus [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Sent: Friday, March 15, 2002 9:11 AM
 Subject: Re: [HACKERS] User Level Lock question
 
 
  Lance Ellinghaus [EMAIL PROTECTED] writes:
   Is there an easy way to test the lock on a user level lock without
 actually
   issuing the lock?
 
  Why would you ever want to do such a thing?  If you test the lock but
  don't actually acquire it, someone else might acquire the lock half a
  microsecond after you look at it --- and then what does your test result
  mean?  It's certainly unsafe to take any action based on assuming that
  the lock is free.
 
  I suspect what you really want is a conditional acquire, which you can
  get (in recent versions) using the dontWait parameter to LockAcquire.
 
  regards, tom lane
 
 
 ---(end of broadcast)---
 TIP 5: Have you checked our extensive FAQ?
 
 http://www.postgresql.org/users-lounge/docs/faq.html






Re: [HACKERS] User Level Lock question

2002-03-15 Thread Neil Conway

On Fri, 2002-03-15 at 14:54, Lance Ellinghaus wrote:
 I know it does not sound like something that would need to be done, but here
 is why I am looking at doing this...
 
 I am trying to replace a low level ISAM database with PostgreSQL. The low
 level ISAM db allows locking a record during a read to allow Exclusive
 access to the record for that process. If someone tries to do a READ
 operation on that record, it is skipped.

If the locked record is skipped, how can the application be sure it is
getting a consistent view of the data?

Cheers,

Neil

-- 
Neil Conway [EMAIL PROTECTED]
PGP Key ID: DB3C29FC


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] User Level Lock question

2002-03-15 Thread Greg Copeland

On Fri, 2002-03-15 at 16:24, Neil Conway wrote:
 On Fri, 2002-03-15 at 14:54, Lance Ellinghaus wrote:
  I know it does not sound like something that would need to be done, but here
  is why I am looking at doing this...
  
  I am trying to replace a low level ISAM database with PostgreSQL. The low
  level ISAM db allows locking a record during a read to allow Exclusive
  access to the record for that process. If someone tries to do a READ
  operation on that record, it is skipped.
 
 If the locked record is skipped, how can the application be sure it is
 getting a consistent view of the data?
 
 Cheers,
 
 Neil
 


Ya, that's what I'm trying to figure out.

It sounds like either he's doing what equates to a select for update or
more or less needs a visibility attribute for the row in question. 
Either way, perhaps he should share more information on what the end
goal is so we can better address any changes in idiom that better
reflect a relational database.

Greg






[HACKERS] pg_hba.conf and secondary password file

2002-03-15 Thread Bruce Momjian

Right now, we support a secondary password file reference in
pg_hba.conf.

If the file contains only usernames, we assume that it is the list of
valid usernames for the connection.  If it contains usernames and
passwords, like /etc/passwd, we assume these are the passwords to be
used for the connection.  Such connections must pass the unencrypted
passwords over the wire so they can be matched against the file, i.e.,
the 'password' authentication method in pg_hba.conf.

Is it worth keeping this password capability in 7.3?  It requires
'password' in pg_hba.conf, which is not secure, and I am not sure how
many OS's still use crypt in /etc/passwd anyway.  Removing the feature
would clear up pg_hba.conf options a little.

The ability to specify usernames in pg_hba.conf or in a secondary file
is being added to pg_hba.conf anyway, so it is really only the password
part that we have to decide to keep or remove.
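
(For reference, a sketch of the pg_hba.conf entry under discussion; the
secondary file name "passwords" is an invented example, and the file is
looked up in $PGDATA:

    host    all    192.168.1.0    255.255.255.0    password    passwords

With usernames only in that file, it acts as a list of who may connect;
with username:password lines, it supplies the per-connection passwords.)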

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[HACKERS] question on index access

2002-03-15 Thread Neil Conway

I had an idea on a possible way to increase performance under query
loads with a lot of short-term locking. I haven't looked at the
implementation of this at all, so if someone could tell me why this
wouldn't work, that would save me some time ;-)

AFAIK, current Postgres behavior when processing SELECT queries is like
this:

    (1) for each tuple in the result set, try to get an
        AccessShareLock on it

    (2) if it can't acquire the lock, wait until it can

    (3) read the data on the previously locked row and continue
        onward

i.e. when it encounters a locked row, it waits for the lock to be
released and then continues the scan.

Instead, why not modify the behavior in (2) so that instead of waiting
for the lock to be released, Postgres would instead continue the scan,
keeping a note that it has skipped over the locked tuple. When it has
finished the scan (and so it has the entire result set, except for the
locked tuples), it should return to each of the previously locked
tuples. Since most locks are relatively short-term (AFAIK), there's a
good chance that during the time it took to scan the rest of the table,
the lock on the tuple has been released -- so it can read the value and
add it into the result set at the appropriate place without sitting
idle waiting for the lock to be released.

This is probably stupid for some reason: can someone let me know what
that reason is? ;-)

Cheers,

Neil

-- 
Neil Conway [EMAIL PROTECTED]
PGP Key ID: DB3C29FC


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] pg_hba.conf and secondary password file

2002-03-15 Thread Tom Lane

Bruce Momjian [EMAIL PROTECTED] writes:
 Right now, we support a secondary password file reference in
 pg_hba.conf.
 Is it worth keeping this password capability in 7.3?

I'd not cry if it went away.  We could get rid of pg_passwd, which
is an ugly mess...

regards, tom lane

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] timestamp_part() bug?

2002-03-15 Thread Thomas Lockhart

 There is a problem with epoch as well that was not in the 7.1.3

Hmm. 7.1.x did not implement any date_part() functions for time types.
So the results were obtained from a conversion to interval before
calling date_part()!

7.2 implements date_part() for time with time zone, and converts time
without time zone to time with time zone when executing your query. The
behavior is likely to be somewhat different. But...


I think that your problem report now has two parts:

1) extract(epoch from time with time zone '00:00:34') should return
something reasonable. I'll claim that it does that currently, since
(if you were trying that query) you are one hour away from GMT and get
3600+34 seconds back, which is consistent with the same instant in GMT. If
the epoch is relative to GMT, then this may be The Right Thing To Do.

2) extract(epoch from time '00:00:34') should return something which
does not involve a time zone of any kind if it were following the
conventions used for timestamp without time zone. So we should have an
explicit function to do that, rather than relying on converting to time
with time zone before extracting the epoch.

Unfortunately, I can't put a new function into 7.2.x due to the
long-standing rule of not modifying system tables in minor upgrades. So
solving (2) completely needs to wait for 7.3.

You can work around this mis-feature for now by patching 7.2.x,
replacing one of the definitions for date_part in
src/include/catalog/pg_proc.h, oid = 1385 with the following:

select date_part($1, cast((cast($2 as text) || ''+00'') as time with
time zone));

Or, it seems that you can actually drop and replace this built-in
function (I vaguely recall that there used to be problems with doing
this, but it sure looks like it works!):

thomas=# drop function date_part(text,time);
DROP
thomas=# create function date_part(text,time) returns double precision
as '
thomas'# select date_part($1, cast((cast($2 as text) || ''+00'') as time
with time zone));
thomas'# ' language 'sql';
CREATE
thomas=# select extract(epoch from time '00:00:34');
 date_part 
-----------
        34
(1 row)


In looking at this issue I did uncover a bug in moving time with time
zones to other time zones:

thomas=# select timetz(interval '01:00', time with time zone
'08:09:10-08');
     timetz     
----------------
 00:00:00.00+01
(1 row)

after repairing the offending code in timetz_izone() it seems to do the
right thing:

thomas=# select timetz(interval '01:00', time with time zone
'08:09:10-08');
   timetz    
-------------
 17:09:10+01
(1 row)

This last issue will be fixed in 7.2.1. And the function will be renamed
to timezone() in 7.3 to be consistent with similar functions for other
data types.

- Thomas

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] question on index access

2002-03-15 Thread Tom Lane

Neil Conway [EMAIL PROTECTED] writes:
 AFAIK, current Postgres behavior when processing SELECT queries is like
 this:
   (1) for each tuple in the result set, try to get an
 AccessShareLock on it

Uh, no.  There are no per-tuple locks, other than SELECT FOR UPDATE
which doesn't affect SELECT at all.  AccessShareLock is taken on the
entire table, mainly as a means of ensuring the table doesn't disappear
from under us.

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] question on index access

2002-03-15 Thread Neil Conway

On Fri, 2002-03-15 at 18:23, Tom Lane wrote:
 Neil Conway [EMAIL PROTECTED] writes:
  AFAIK, current Postgres behavior when processing SELECT queries is like
  this:
  (1) for each tuple in the result set, try to get an
  AccessShareLock on it
 
 Uh, no.  There are no per-tuple locks, other than SELECT FOR UPDATE
 which doesn't affect SELECT at all.  AccessShareLock is taken on the
 entire table, mainly as a means of ensuring the table doesn't disappear
 from under us.

Ah, that makes sense. My mistake -- thanks for the info.

Cheers,

Neil

-- 
Neil Conway [EMAIL PROTECTED]
PGP Key ID: DB3C29FC


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] pg_hba.conf and secondary password file

2002-03-15 Thread Peter Eisentraut

Bruce Momjian writes:

 Is it worth keeping this password capability in 7.3?  It requires
 'password' in pg_hba.conf, which is not secure, and I am not sure how
 many OS's still use crypt in /etc/passwd anyway.  Removing the feature
 would clear up pg_hba.conf options a little.

Personally, I don't care.  But I'm concerned that some people might use
this to support different passwords for different databases.  Not sure why
you'd want that.  Maybe send an advisory to -general to see.

-- 
Peter Eisentraut   [EMAIL PROTECTED]


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] pg_hba.conf and secondary password file

2002-03-15 Thread Bruce Momjian

Peter Eisentraut wrote:
 Bruce Momjian writes:
 
  Is it worth keeping this password capability in 7.3?  It requires
  'password' in pg_hba.conf, which is not secure, and I am not sure how
  many OS's still use crypt in /etc/passwd anyway.  Removing the feature
  would clear up pg_hba.conf options a little.
 
 Personally, I don't care.  But I'm concerned that some people might use
 this to support different passwords for different databases.  Not sure why
 you'd want that.  Maybe send an advisory to -general to see.

Yes, I will send to general.  I wanted to get feedback from hackers
first --- I will send now.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



[HACKERS] Anyone have a SQL code for cumulative F distribution function?

2002-03-15 Thread Tony Reina

I know it's probably a long shot, but has anyone coded statistical
distributions as functions in PostgreSQL? Specifically, I'm looking
for a function to calculate the cumulative F distribution.

By the way, I know that I can do \df at the psql command line to list
the available functions. Is there a help function or a better
description for a given function? Specifically, I'd like to know what
array_in and array_out do.

Thanks
-Tony

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] Client/Server compression?

2002-03-15 Thread Kyle

Greg Copeland wrote:
 [cut]
 My current thoughts are to allow for enabled/disabled compression and
 variable compression settings (1-9) within a database configuration. 
 Worse case, it may be fun to implement and I'm thinking there may
 actually be some surprises as an end result if it's done properly.
 
 [cut]

 Greg


Wouldn't Tom's suggestion of riding on top of ssh give similar
results?  Anyway, it'd probably be a good proof of concept of whether
or not it's worth the effort.  And that brings up the question: how
would you measure the benefit?  I'd assume you'd get a good cut in
network traffic, but you'll take a hit in cpu time.  What's an
acceptable tradeoff?

That's one reason I was thinking about the toast stuff.  If the
backend could serve toast, you'd get an improvement in server-to-client
network traffic without the server spending cpu time on
compression, since the data has previously been compressed.

Let me know if this is feasible (or slap me if this is how things
already are): when the backend detoasts data, keep both copies in
memory.  When it comes time to put data on the wire, instead of
putting the whole enchilada down, give the client the compressed toast
instead.  And yeah, I guess this would require a protocol change to
flag the compressed data.  But it seems like a way to leverage work
already done.

-kf


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] [BUGS] Bug #613: Sequence values fall back to previously chec

2002-03-15 Thread Clark C. Evans

(userland comment)

On Fri, Mar 15, 2002 at 01:05:33AM -0800, Vadim Mikheev wrote:
|  But sequences should not be under transaction control.  Can you
|  safely rollback a sequence?  No!  The only way to ensure that would
| ...
|  Placing a restriction on an application that says it must treat the values
|  returned from a sequence as if they might not be committed is absurd.
| 
| Why? The fact that you are not able to rollback sequences does not
| necessary mean that you are not required to perform commit to ensure
| permanent storage of changes made to database.

I use sequences to generate message identifiers for a simple
external-to-database message passing system.   I also use
them for file upload identifiers.  In both cases, if the
external action (message or file upload) succeeds, I commit;
otherwise I roll back.  I assume that the database won't give
me a duplicate sequence value... otherwise I'd have to find some
other way to get sequence values, or I'd have duplicate messages
or non-unique file identifiers.

With these changes, is this assumption no longer valid?  If
so, this change will break a lot of user programs.

| And why? Just for convenience of <1% applications which need
| to use sequences in their own, non-database, external objects?

I think you may be underestimating the amount of external resources
which may be associated with a database object.  Regardless, many of the
database features in PostgreSQL are there for 1% or less of the
user base...

Best,

Clark

-- 
Clark C. Evans                   Axista, Inc.
http://www.axista.com            800.926.5525
XCOLLA Collaborative Project Management Software

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] [BUGS] Bug #613: Sequence values fall back to previously chec

2002-03-15 Thread Rod Taylor

I do basically the same thing for files.  Except I md5 a 4-character
random string and the sequence ID, just in case I get the same one
twice -- as it's never been written in stone that I wouldn't -- not to
mention the high number of requests for returning a sequence ID back
to the pool on a rollback.

Anyway, you might try using the OID rather than a sequence ID, but if
you roll back the database commit due to the failure of an external
action, shouldn't you be cleaning up that useless external stuff
as well?
--
Rod Taylor

This message represents the official view of the voices in my head

- Original Message -
From: Clark C . Evans [EMAIL PROTECTED]
To: Vadim Mikheev [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Friday, March 15, 2002 8:54 PM
Subject: Re: [HACKERS] [BUGS] Bug #613: Sequence values fall back to
previously chec


 (userland comment)

 On Fri, Mar 15, 2002 at 01:05:33AM -0800, Vadim Mikheev wrote:
 |  But sequences should not be under transaction control.  Can you
 |  safely rollback a sequence?  No!  The only way to ensure that would
 | ...
 |  Placing a restriction on an application that says it must treat
 |  the values returned from a sequence as if they might not be
 |  committed is absurd.
 |
 | Why? The fact that you are not able to rollback sequences does not
 | necessarily mean that you are not required to perform commit to ensure
 | permanent storage of changes made to database.

 I use sequences to generate message identifiers for a simple
 external-to-database message passing system.   I also use
 them for file upload identifiers.  In both cases, if the
 external action (message or file upload) succeeds, I commit;
 otherwise I roll back.  I assume that the database won't give
 me a duplicate sequence value... otherwise I'd have to find some
 other way to get sequence values, or I'd have duplicate messages
 or non-unique file identifiers.

 With these changes, is this assumption no longer valid?  If
 so, this change will break a lot of user programs.

 | And why? Just for convenience of <1% applications which need
 | to use sequences in their own, non-database, external objects?

 I think you may be underestimating the amount of external resources
 which may be associated with a database object.  Regardless, many of
 the database features in PostgreSQL are there for 1% or less of the
 user base...

 Best,

 Clark

 --
 Clark C. Evans                   Axista, Inc.
 http://www.axista.com            800.926.5525
 XCOLLA Collaborative Project Management Software

 ---(end of broadcast)---
 TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] Anyone have a SQL code for cumulative F distribution function?

2002-03-15 Thread Joe Conway

Tony Reina wrote:
 I know it's probably a long shot, but has anyone coded statistical
 distributions as functions in PostgreSQL? Specifically, I'm looking
 for a function to calculate the cumulative F distribution.
 
 By the way, I know that I can do \df at the psql command line to list
 the available functions. Is there a help function or better
 description for a given function? Specifically, I'd like to know what
 array_in and array_out do.
 
 Thanks
 -Tony
 

Not quite what you asked for, but there *is* a library which allows you 
to query data from a PostgreSQL database from within R, called RPgSQL. 
It's available on SourceForge and from the R archive: 
http://cran.r-project.org/

Joe




---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] User Level Lock question

2002-03-15 Thread Lance Ellinghaus

The application actually does not want nor need a consistent view of the
data. It is expecting that records that are locked will not be viewed at
all. The locks are normally held for VERY short periods of time. The fact
that the application is expecting locked records not to be viewed is causing
me problems because under PostgreSQL this is not easy to do. Even if I lock
a record using SELECT ... FOR UPDATE, I can still do a SELECT and read it.
I need to effectively do a SELECT ... FOR UPDATE and make the other
reading clients skip that record completely.

I can do this with a flag column, but this requires disk access to do
the UPDATE, and if the client/backend quits/crashes with outstanding records
marked, they stay locked.

The User Level Locks look like a great way to do this as I can set a lock
very quickly without disk access and if the client/backend quits/crashes,
the locks are automatically removed.

I can set the User Level Lock on a record using the supplied routines in the
contrib directory when I do a SELECT, and can reset the lock by doing an
UPDATE or SELECT as well.
But without the ability to test for an existing lock (without ever setting
it) I cannot skip the locked records.

I would set up all the SELECTs in a thunking layer (I cannot rewrite the
application, only replace the ISAM library with a thunking library that
converts the ISAM calls to PostgreSQL calls) to look like the following:

SELECT col1, col2, col3
FROM table
WHERE
col1 = 'whatever'
  AND
col2 = 'whatever'
  AND
user_lock_test(oid) = 0;

user_lock_test() would return 0 if there is no current lock, and 1 if there
is.

Does this clear it up a little more, or make it more complicated?  The big
problem is the way that the ISAM code acts compared to a REAL RDBMS. If this
application was coded with an RDBMS in mind, things would be much easier.
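
For what it's worth, a sketch of how the existing contrib/userlock
routines could bracket such access (assuming that module is installed;
user_lock_test itself does not exist there and would have to be added):

    -- take an exclusive user-level lock on a row, keyed by its oid
    SELECT user_write_lock_oid(oid) FROM table WHERE col1 = 'whatever';
    -- ... work on the record ...
    -- release it explicitly (backend exit also releases it)
    SELECT user_write_unlock_oid(oid) FROM table WHERE col1 = 'whatever';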

Lance

- Original Message -
From: Neil Conway [EMAIL PROTECTED]
To: Lance Ellinghaus [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Friday, March 15, 2002 4:24 PM
Subject: Re: [HACKERS] User Level Lock question


 On Fri, 2002-03-15 at 14:54, Lance Ellinghaus wrote:
  I know it does not sound like something that would need to be done, but
here
  is why I am looking at doing this...
 
  I am trying to replace a low level ISAM database with PostgreSQL. The
low
  level ISAM db allows locking a record during a read to allow Exclusive
  access to the record for that process. If someone tries to do a READ
  operation on that record, it is skipped.

 If the locked record is skipped, how can the application be sure it is
 getting a consistent view of the data?

 Cheers,

 Neil

 --
 Neil Conway [EMAIL PROTECTED]
 PGP Key ID: DB3C29FC


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] Client/Server compression?

2002-03-15 Thread Greg Copeland

On Fri, 2002-03-15 at 19:44, Kyle wrote:
[snip]

 Wouldn't Tom's suggestion of riding on top of ssh give similar
 results?  Anyway, it'd probably be a good proof of concept of whether
 or not it's worth the effort.  And that brings up the question: how
 would you measure the benefit?  I'd assume you'd get a good cut in
 network traffic, but you'll take a hit in cpu time.  What's an
 acceptable tradeoff?

Good question.  I've been trying to think of meaningful testing methods,
however, I can still think of reasons all day long where it's not an
issue of a tradeoff.  Simply put, if you have a low bandwidth
connection, as long as there are extra cycles available on the server,
who really cares...except for the guy at the end of the slow connection.

As for SSH, well, that should be rather obvious.  It often is simply not
available.  While SSH is nice, I can think of many situations where this is a
win/win.  At least in business settings...where I'm assuming the goal is
to get Postgres into.  Also, along those lines, if SSH is the answer,
then surely the SSL support should be removed too...as SSH provides for
encryption too.  Simply put, removing SSL support makes about as much
sense as asserting that SSH is the final compression solution.

Also, it keeps being stated that a tangible tradeoff between CPU and
bandwidth must be realized.  This is, of course, a false assumption. 
Simply put, if you need bandwidth, you need bandwidth.  Its need is not
a function of CPU; rather, it reflects a lack of bandwidth.  Having said that,
I of course would still like to have something meaningful which reveals
the impact on CPU and bandwidth.

I'm talking about something that would be optional.  So, what's the cost
of having a little extra optional code in place?  The only issue, best I
can tell, is whether it can be implemented in a backward-compatible manner.

 
 That's one reason I was thinking about the toast stuff.  If the
 backend could serve toast, you'd get an improvement in server to
 client network traffic without the server spending cpu time on
 compression since the data has previously compressed.
 
 Let me know if this is feasible (or slap me if this is how things
 already are): when the backend detoasts data, keep both copies in
 memory.  When it comes time to put data on the wire, instead of
 putting the whole enchilada down give the client the compressed toast
 instead.  And yeah, I guess this would require a protocol change to
 flag the compressed data.  But it seems like a way to leverage work
 already done.
 

I agree with that, however, I'm guessing that implementation would
require a significantly larger effort than what I'm suggesting...then
again, probably because I'm not aware of all the code yet.  Pretty much,
the basic implementation could be in place by the end of this weekend
with only a couple hours worth of work...and then, mostly because I
still don't know lots of the code.  The changes you are talking about are
going to require not only protocol changes but changes at several layers
within the engine.

Of course, something else to keep in mind is that using the TOAST
solution requires that TOAST already be in use.  What I'm suggesting
benefits (size-wise) all types of data being sent back to a client.

Greg






Re: [HACKERS] User Level Lock question

2002-03-15 Thread Greg Copeland

On Fri, 2002-03-15 at 21:45, Lance Ellinghaus wrote:
 The application actually does not want nor need a consistent view of the
 data. It is expecting that records that are locked will not be viewed at
 all. The locks are normally held for VERY short periods of time. The fact
 that the application is expecting locked records not to be viewed is causing

You keep asserting that these records qualify as being called
locked.  It sounds like a record attribute to me.  Furthermore, it
sounds like that attribute reflects a record's visibility, not whether
it's locked.  Locks are generally used to limit accessibility rather
than visibility.  This, I think, is the primary source of the trouble
you're having with your desired implementation.

 me problems because under PostgreSQL this is not easy to do. Even if I lock
 a record using SELECT ... FOR UPDATE, I can still do a SELECT and read it.
 I need to effectively do a SELECT ... FOR UPDATE and make the other
 reading clients skip that record completely.
 
 I can do this with a flag column, but this requires disk access to do
 the UPDATE, and if the client/backend quits/crashes with outstanding records
 marked, they stay locked.

That's what transactions are for.  If you have a failure, the
transaction should be rolled back.  The visibility marker would be
restored to its original visible state.
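
A minimal sketch of that pattern, using the hypothetical "visible" flag
from this discussion (note that under MVCC other sessions won't see the
flag change until the transaction commits):

    BEGIN;
    UPDATE table SET visible = '0' WHERE col1 = 'whatever';
    -- ... process the record ...
    UPDATE table SET visible = '1' WHERE col1 = 'whatever';
    COMMIT;
    -- a crash before COMMIT rolls back the whole transaction,
    -- leaving the row marked visible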

 
 The User Level Locks look like a great way to do this as I can set a lock
 very quickly without disk access and if the client/backend quits/crashes,
 the locks are automatically removed.

But do you really need to lock it, hide it, or both?  If both, you may
want to consider doing an update inside of a transaction, or even a
select for update if it fits your needs.  Transactions are your friend. 
:)  I'm assuming you need to lock it because you need to update the row
at some point in time.  If you don't want to update it, then you really
need to hide it, not lock it.

 
 I can set the User Level Lock on a record using the supplied routines in the
 contrib directory when I do a SELECT, and can reset the lock by doing an
 UPDATE or SELECT as well.
 But without the ability to test for an existing lock (without ever setting
 it) I cannot skip the locked records.
 
 I would set up all the SELECTs in thunking layer (I cannot rewrite the
 application, only replace the ISAM library with a thunking library that
 converts the ISAM calls to PostgreSQL calls) to look like the following:
 
 SELECT col1, col2, col3
 FROM table
 WHERE
 col1 = 'whatever'
   AND
 col2 = 'whatever'
   AND
 user_lock_test(oid) = 0;
 
 user_lock_test() would return 0 if there is no current lock, and 1 if there
 is.


SELECT col1, col2, col3
FROM table
WHERE
col1 = 'whatever'
AND
col2 = 'whatever'
AND
visible = '1' ;


 
 Does this clear it up a little more, or make it more complicated?  The big
 problem is the way that the ISAM code acts compared to a REAL RDBMS. If this
 application was coded with an RDBMS in mind, things would be much easier.
 

I understand that...and that can be hard...but sometimes semantics and
idioms have to be adjusted to allow for an ISAM to RDBMS migration.


Greg






[HACKERS] [patch] fe-connect.c doesn't handle EINTR correctly

2002-03-15 Thread David Ford

Last year we had a drawn-out discussion about this and I created a patch 
for it.  I never noticed that the patch didn't go in until I installed 
7.2 the other day and realised that fe-connect.c never was fixed.

Here is the patch again.  It is against CVS 3/16/2002.  This time I only 
rewrote the connect procedure at line 912; I leave it up to the regular 
hackers to copy its functionality to the SSL procedure just below it.

In summary, if a software writer implements timer events or other events 
which generate a signal with a timing fast enough to occur while libpq 
is inside connect(), then connect() fails with EINTR.  The code following 
the connect call does not handle this and generates an error message; 
the net result is that the pg_connect() fails.  If the timer or other 
event is right on the window of the connect() completion time, 
pg_connect() may appear to work sporadically.  If the event is too slow, 
pg_connect() will appear to always work, and if the event is too fast, 
pg_connect() will always fail.

David



Index: src/interfaces/libpq/fe-connect.c
===
RCS file: /projects/cvsroot/pgsql/src/interfaces/libpq/fe-connect.c,v
retrieving revision 1.181
diff -u -r1.181 fe-connect.c
--- src/interfaces/libpq/fe-connect.c   2001/11/11 02:09:05 1.181
+++ src/interfaces/libpq/fe-connect.c   2002/03/16 05:17:47
@@ -909,29 +909,48 @@
 	 * Thus, we have to make arrangements for all eventualities.
 	 * ----------
 	 */
-	if (connect(conn->sock, &conn->raddr.sa, conn->raddr_len) < 0)
-	{
-		if (SOCK_ERRNO == EINPROGRESS || SOCK_ERRNO == EWOULDBLOCK || SOCK_ERRNO == 0)
-		{
-			/*
-			 * This is fine - we're in non-blocking mode, and the
-			 * connection is in progress.
-			 */
-			conn->status = CONNECTION_STARTED;
-		}
-		else
-		{
-			/* Something's gone wrong */
-			connectFailureMessage(conn, SOCK_ERRNO);
-			goto connect_errReturn;
-		}
-	}
-	else
-	{
-		/* We're connected already */
-		conn->status = CONNECTION_MADE;
-	}
+	do {
+		int		e;
+
+		e = connect(conn->sock, &conn->raddr.sa, conn->raddr_len);
+
+		if (e < 0)
+		{
+			switch (SOCK_ERRNO)
+			{
+				case EINTR:
+					/*
+					 * Interrupted by a signal, keep trying.  This handling
+					 * is required because the user may have turned on
+					 * signals in his program.  Previously, libpq would
+					 * erroneously fail to connect if the user's timer event
+					 * fired and interrupted this syscall.  It is important
+					 * that we don't try to sleep here, because that may
+					 * cause havoc with the user program.
+					 */
+					continue;
+				case 0:
+				case EINPROGRESS:
+				case EWOULDBLOCK:
+					/*
+					 * This is fine - we're in non-blocking mode, and the
+					 * connection is in progress.
+					 */
+					conn->status = CONNECTION_STARTED;
+					break;
+				default:
+					/* Something's gone wrong */
+					connectFailureMessage(conn, SOCK_ERRNO);
+					goto connect_errReturn;
+			}
+		}
+		else
+		{
+			/* We're connected now */
+			conn->status = CONNECTION_MADE;
+		}
+
+		if (conn->status == CONNECTION_STARTED ||
+			conn->status == CONNECTION_MADE)
+			break;
+	} while (1);
 
 #ifdef USE_SSL
 	/* Attempt to negotiate SSL usage */
 	if (conn->allow_ssl_try)



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] [BUGS] Bug #613: Sequence values fall back to previously chec

2002-03-15 Thread Vadim Mikheev

 |  Placing a restriction on an application that says it must treat the
 |  values returned from a sequence as if they might not be committed is
 |  absurd.
 |
 | Why? The fact that you are not able to rollback sequences does not
 | necessarily mean that you are not required to perform commit to ensure
 | permanent storage of changes made to database.

 I use sequences to generate message identifiers for a simple
 external-to-database message passing system.   I also use
 them for file upload identifiers.  In both cases, if the
 external action (message or file upload) succeeds, I commit;
 otherwise I roll back.  I assume that the database won't give
 me a duplicate sequence value... otherwise I'd have to find some

So can you do SELECT nextval() in a *separate* (committed)
transaction *before* the external action and the real transaction
where you store information (with the sequence number) about the
external action in the database?

BEGIN;
SELECT NEXTVAL();
COMMIT;
BEGIN;
-- Do external actions and store info in DB --
COMMIT/ROLLBACK;
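
(Filled in with an invented sequence name, the pattern looks like this:

    BEGIN;
    SELECT nextval('upload_id_seq');    -- say it returns 42
    COMMIT;                             -- 42 is now durable
    -- perform the external action using 42 ...
    BEGIN;
    INSERT INTO uploads (id) VALUES (42);
    COMMIT;                             -- or ROLLBACK; 42 is never reissued

The uploads table is likewise invented for illustration.)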

Is this totally unacceptable? Is it really *required* to call nextval()
in *the same* transaction where you store info in DB? Why?

 other way to get sequence values, or I'd have duplicate messages
 or non-unique file identifiers.

 With these changes, is this assumption no longer valid?  If

1. It's not valid to assume that sequences will not return duplicate
   numbers if there was no commit after nextval().
2. It doesn't matter when sequence numbers are stored in
   database objects only.
3. But if you're going to use sequence numbers in external objects,
   you must (pre)fetch those numbers in a separate committed
   transaction.

(Can we have this in the FAQ?)

 so, this change will break a lot of user programs.

 | And why? Just for convenience of <1% applications which need
 | to use sequences in their own, non-database, external objects?

 I think you may be underestimating the amount of external resources
 which may be associated with a database object.  Regardless, many of the
 database features in PostgreSQL are there for 1% or less of the
 user base...

Please note that I was talking about some *inconvenience*, not about
an *inability* to use sequence numbers externally (seems my words were
too short).  Above is how to do this.  And though I agree that it's not
very convenient/handy/cosy to *take care* and fetch numbers in a
separate committed transaction, it's required only in those special
cases, and I think it's better than doing an fsync() per nextval() call,
which would affect other users/applications where storing sequence
numbers outside of the database is not required.

Vadim



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html