Re: [HACKERS] Patent issues and 8.1

2005-01-26 Thread Michael Paesold
Neil Conway wrote:
IMHO, the patent issue is *not* a potential problem for a lot of people, 
it *is* a problem -- it makes people uncomfortable to be deploying 
software that they know might cause them legal headaches down the line. It 
also makes life difficult for people distributing commercial versions of 
PostgreSQL.
I live in Europe, and right now, the patent, if granted, would not have any 
effect on me. Even if Europe ends up with software patents, I doubt that 
this ARC algorithm would be patentable in Europe.

I've posted a patch to -patches that replaces ARC with LRU. The patch is 
stable -- I'll post some code cleanup for it tomorrow, but I've yet to 
find any bugs despite a fair bit of testing. The patch also reverts the 
code to being quite close to 7.4, which is another reason to have some 
confidence in its correctness.

I think the best solution is to replace ARC with LRU in 8.0.1 or 8.0.2, 
and develop a better replacement policy during the 8.1 development cycle.
I don't have much confidence in such changes in a minor release, seeing that 
they get little testing beyond the regression suite (am I wrong?) and, in 
this case, perhaps some simple benchmarking. I believe many really 
annoying or dangerous bugs have only been found in field testing.

Don't get me wrong, Neil, I trust your coding skills. But replacing ARC with 
LRU seems a rather big change, which could introduce new bugs and have 
performance issues. And the change also affects bgwriter behaviour.

Please don't rush out untested core components, and perhaps think about the 
people who are quite comfortable with ARC (e.g. us guys in Europe over 
here).

If the ARC replacement can be done in an 8.0.* release, it doesn't have to 
happen now in a rush, does it?

Best Regards,
Michael Paesold 

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] Patent issues and 8.1

2005-01-26 Thread Hannu Krosing
On one fine day (Wednesday, 26 January 2005, 15:38 +1100), Neil Conway wrote:
 Bruce Momjian wrote:
  So if we have to address it we call it 8.0.7 or something.  My point is
  that we don't need to address it until we actually find out the patent
  is being enforced against someone, and that possibility is quite unlikely.
 
 IMHO, the patent issue is *not* a potential problem for a lot of 
 people, it *is* a problem -- it makes people uncomfortable to be 
 deploying software that they know might cause them legal headaches down 
 the line. 

If people see we are scared by so little, I fear that someone will pop
up another potential patent problem just after we have reverted to
LRU. Or even better, a day or two before releasing an 8.0.x with the LRU fix.

 It also makes life difficult for people distributing 
 commercial versions of PostgreSQL.

Simple repackaging should not be the basis of a commercial version. If
they want one, they could:

a) distribute OSS version and sell support

b) test your LRU backpatch and sell a (likely worse-performing) version
with that.

c) implement a better-than-ARC cache replacement scheme, and sell that.
If they are really pissed off at PGDG they could even apply for a patent
on that scheme and gain a competitive advantage in their investors' eyes.

 I've posted a patch to -patches that replaces ARC with LRU. The patch is 
 stable -- I'll post some code cleanup for it tomorrow, but I've yet to 
 find any bugs despite a fair bit of testing. 

Have you done any performance comparisons between your LRU patch and our
8.0 ARC implementation?

IIRC the differences between 7.4 and 8.0 were best visible on really
heavy workloads, like the ones tested at OSDL.

If performance does not matter, the simplest solution would be setting
the Postgres internal cache to 0 bytes and relying just on OS buffers.
That stops infringement immediately, as one is then not *using* any
patented technology.

 I think the best solution is to replace ARC with LRU in 8.0.1 or 8.0.2, 
 and develop a better replacement policy during the 8.1 development cycle.

That could be the best solution for those worried about it (mainly
commercial distributors), but for the others, the better-tested and
stable ARC-like solution we have implemented and tested now is probably
still better.


AN UNRELATED SUGGESTION:

Could anybody take the patent application and comment on it claim by
claim, marking up the things that have prior art (like claim 1, keeping
two lists), so that when we start designing our ARC replacement we know
which points we can safely infringe? (IIRC some claims involved doing
LRU-like stuff, so if we decide to be unconditionally terrified of the
patent application we should avoid LRU as well.)

Then we could put up the commented version on our website, or perhaps
have it up at http://www.groklaw.net/ for comments from a larger
community already familiar with IP issues ;).

Slashdot would be another place to ask for comments/prior art.

My point is that, while IBM's patent as a whole could be worth patent
protection, each claim in it taken separately probably is not.

-- 
Hannu Krosing [EMAIL PROTECTED]



Re: [HACKERS] Concurrent free-lock

2005-01-26 Thread Pailloncy Jean-Gerard
This is a very important thread. Many thanks to Jean-Gerard for bringing
the community's attention to this.
Thanks Simon.
During my PhD I was working on parallel algorithms. The computer was a 
32-processor grid machine, in 1995. On this architecture we needed to 
lock the data itself, with minimum contention; we could not lock the 
code path with a mutex, because there were 32 different boards and a 
sync across the whole system was not doable. The data was a mesh graph 
representing the segmentation of satellite images.

When I see this paper with some good tests, I remember those old days 
and think that if we had some generic algorithms for types like hash, 
tree, and list with the lock-free parallel-read property, it would be a 
very good win.
It also makes me think of another paper I read on the PostgreSQL site 
about an algorithm with a global ordering of transactions, designed for 
multi-master databases. I do not remember the URL.

The third thing that comes to mind is the next generation of 
Slony/pgcluster.

Regards,
Jean-Gérard Pailloncy


Re: [HACKERS] Patent issues and 8.1

2005-01-26 Thread Pailloncy Jean-Gerard
I live in Europe, and right now, the patent, if granted, would not 
have any effect on me. Even if Europe ends up with software patents, 
I doubt that this ARC algorithm would be patentable in Europe.
Would it be possible to have an abstraction API where we can plug in 
different algorithms?
With two plugins, LRU and ARC, and ARC in a contrib module on the 
European servers only. ;-)
That way everyone gets the best option allowed under their local law. ;-/

It is a trick, a work-around, OK.
BUT a general way to have plugins for the cache, the database file 
format, etc. (like the one for new types/operators) could be very 
interesting:
a) as a patent workaround
b) as a framework to test new features

Regards,
Jean-Gérard Pailloncy


Re: [HACKERS] Patent issues and 8.1

2005-01-26 Thread Hannu Krosing
On one fine day (Tuesday, 25 January 2005, 21:10 -0400), Marc G. Fournier wrote:
 On Tue, 25 Jan 2005, Bruce Momjian wrote:

  So if we have to address it we call it 8.0.7 or something.  My point is
  that we don't need to address it until we actually find out the patent
  is being enforced against someone, and that possibility is quite unlikely.
 
 Ah, so you are advocating waiting *until* the problem exists, even *after* 
 we know a) there may be a problem and b) we know that we can fix it ... ?

It may be my English skills, as I'm not a native speaker, but your
temporal logic escapes me ...

... waiting *until* the problem exists ... there *may be* a problem ...

so *bruce* advocates waiting *until* there *is* a problem, *we* know it
*may be* (*there* ?) and we know we *can* fix the problem that *may
be* ?

-- 
Hannu Krosing [EMAIL PROTECTED]



Re: [HACKERS] RQ: Prepared statements used by multiple connections

2005-01-26 Thread Bojidar Mihajlov
It looks like it couldn't happen this way.
Did somebody find an alternative?
Would some idea based on a connection pool be reasonable?
-Bozhidar




[HACKERS] IBM patent

2005-01-26 Thread Tommi Maekitalo
Hi,

I just read about this IBM patent issue at www.heise.de. IBM grants these 
patents to all projects that follow one of the licenses approved by the 
Open Source Initiative. And the BSD license is, as far as I can see, 
approved (I found the New BSD license).

When commercial closed-source variants of PostgreSQL are released, this 
BSD license stays intact, so the use of these patents in PostgreSQL seems OK.


Tommi Mäkitalo



Re: [HACKERS] WAL: O_DIRECT and multipage-writer

2005-01-26 Thread ITAGAKI Takahiro
Tom Lane [EMAIL PROTECTED] writes:

 What does XLOG_EXTRA_BUFFERS accomplish?

It is because the buffer passed to direct I/O must be aligned on a 
filesystem page boundary, typically 4KB. Buffers allocated
with ShmemInitStruct are not necessarily aligned, so we need to allocate
extra buffer space and align it ourselves.


 Also, I'm worried that you broke something by not updating
 Write->curridx immediately in XLogWrite.  There certainly isn't going
 to be any measurable performance boost from keeping that in a local
 variable, so why take any risk?

Keeping Write->curridx in a local variable is not for performance,
but for writing multiple pages at once.
I think it is OK to update Write->curridx at the end of XLogWrite,
because XLogCtl.Write.curridx is touched by only one process
at a time. Process-shared variables are not modified until XLogWrite
has completed, so other backends can write the same contents later
even if the backend in XLogWrite has crashed. 


Sincerely,
ITAGAKI Takahiro




Re: [HACKERS] how to add a new column in pg_proc table

2005-01-26 Thread noman naeem
Hello Tom,

Now I have been able to generate a valid bki file and
have been able to avoid all the errors, thanks to you,
but I still have not been able to add that column. Now
at initdb the database fails to initialize itself, and
the error it gives is:

duplicate key violates unique constraint
pg_attribute_relid_attnum_index

I am quite sure it is due to the pg_attribute.h file in
which I have inserted the entry for the protempsrc
column.

The main thing is that I am unable to understand the
structure of this insert statement. Please guide me, I
am in a hurry.

It would be great if you could describe what it does and
means.

DATA(insert ( 1255 protempsrc 26 -1 -1 -1 0 -1 -1 f x
i f f f t 0));



--- Tom Lane [EMAIL PROTECTED] wrote:

 noman naeem [EMAIL PROTECTED] writes:
  They came at the time of fmgrtab.h file creation, they are

  fmgrtab.c:25: error: syntax error before '-' token
  fmgrtab.c:2168: error: syntax error before '}' token

  there are loads and loads of such errors.

 I suppose you forgot to update the Gen_fmgrtab.sh script
 to account for new column numbering in pg_proc.

  Could you tell me from where I can have the last patch

 See our CVS server --- a checkout and then cvs diff around
 the time point I identified for you should do the trick.

   regards, tom lane
 







Re: [ODBC] [HACKERS] RQ: Prepared statements used by multiple connections

2005-01-26 Thread Merlin Moncure
   ... a prepared version that is local to the backend that invokes the
   function, yes (i.e. it will be planned once per backend). So ISTM this
   is equivalent functionality to what you can get using PREPARE or the
   extended query protocol.
 
  Are you sure it's only per-backend?  I thought I tested it and it seemed
  to prepare it everywhere... oh well.
 
 PL/pgSQL functions, at least, are compiled by each backend.  I take
 advantage of this... I use schemas, and I don't have to keep a copy of the
 function for each dataset.  I think vanilla SQL functions might be
 different.
 
 Either way, it avoids the problem with prepared queries in that you
 cannot know in advance if your query has already been prepared or not.

Yep.  I like things the way they are, but I can feel the pain of
applications that don't (or can't) keep connections open.

Merlin





Re: [HACKERS] bug w/ cursors and savepoints

2005-01-26 Thread Alvaro Herrera
On Wed, Jan 26, 2005 at 03:33:07PM +1100, Neil Conway wrote:
 Tom Lane wrote:
 The routine's comments need a bit of work too.  Otherwise it seems OK.
 Neil or anyone else --- see an issue here?
 
 The policy will now be: cursor creation is transactional, but cursor state 
 modifications (FETCH) are non-transactional -- right? I wonder if it 
 wouldn't be more consistent to make cursor deletion (CLOSE) 
 transactional as well -- so that a CLOSE in an aborted subtransaction 
 would not actually destroy the cursor.

Hmm ... not sure how hard that is.  We left a lot of details for 8.1
though, like trying to save the state of the executor related to the
cursor so that FETCH is transactional too.

 Other than that, I think there ought to be some user-level documentation 
 for how cursors and savepoints interact,

There is some detail (as of my patch, outdated) in
http://developer.postgresql.org/docs/postgres/sql-rollback-to.html
If you have a suggestion on where else it should go I'm all ears ...

 and some regression tests for this behavior, but I'm happy to add that
 myself if no one beats me to it.

Please do.

I'll post a corrected patch ASAP, including the doc change.

-- 
Alvaro Herrera ([EMAIL PROTECTED])
The thorn, from the moment it is born, already pricks (African proverb)



Re: [HACKERS] Patent issues and 8.1

2005-01-26 Thread Tom Lane
Neil Conway [EMAIL PROTECTED] writes:
 Well, you've suggested that I should try and reduce the API churn caused 
 by the patch. As I said on -patches, I don't really see this as an issue 
 if we just apply the patch to REL8_0_STABLE.

If we do that then the patch will go out with essentially no testing
beyond your own; an idea that doesn't fill me with confidence.

 I think the biggest area of concern with the LRU patch is how it changes 
 bgwriter behavior.

The easiest way to handle that is to keep storing a full list of all the
buffers in freelist.c, instead of reverting to the pre-8.0 data structure.
(Of course, if we decide we *want* to change the bgwriter behavior, it's
a different story.)

 I think it would be better to have a few weeks of beta prior to 8.0.2 
 and resolve the problem here and now, rather than crippling the 8.1 
 cycle (as the no-initdb policy would) or waiting for the problem to 
 *really* become serious (as the no action needed now policy would).

I'm leaning in that direction too, but I think that the only way to get
any meaningful testing is to have the patch in HEAD as well as the back
branch.  So I want something that doesn't undo more than the minimum
necessary to remove the ARC algorithm.

I don't have time to deal with this today or tomorrow, but once the
security releases are out, I will look at developing an LRU patch that
fits with my ideas about how to do it.

regards, tom lane



[HACKERS] Backporting pg_dump to 7.4

2005-01-26 Thread Christopher Kings-Lynne
I think it would be great to backport 8.0's pg_dump utilities, with all 
their fixes and corrections, to 7.4.  I don't think it would take much 
to alter the output to be 7.4-compatible...

Chris


Re: [HACKERS] Performance of the temporary table creation and use.

2005-01-26 Thread Luiz Gonzaga da Mata
Tom Lane wrote:
 Luiz Gonzaga da Mata [EMAIL PROTECTED] writes:

  Although I changed sort_mem/work_mem to 1 MB, the connection did not
  use this area of available memory when creating the temporary table.

 Why would you expect it to, and why would you think there is any
 advantage?  A small, short-lived temp table will probably never actually
 be spilled to disk by the kernel (we never fsync them) so the actual
 performance improvement would be minimal.

It may be that the kernel does not write physically to disk, but it may 
also be that it does.

If ORDER BY, DISTINCT, index creation, and other temporary resources use 
more than the per-process sort_mem/work_mem, the resource may be created 
on disk instead.

If we created an internal concept of a work_mem pool, we could optimize 
the division of resources between the OS and the DBMS and make more 
rational use of them. For the operating system administrator and the 
DBMS administrator, that is very important.

work_mem_pool = (number of connections in postgresql.conf) x (work_mem 
in postgresql.conf). For example, 100 connections with work_mem = 4 MB 
would give a 400 MB pool.

If the *real memory used* by all the user processes is less than 
work_mem_pool, a process could use more memory than simply its 
individual work_mem value.

The general behavior of PostgreSQL would be much better.

regards,
Luiz Gonzaga da Mata


Re: [HACKERS] bug w/ cursors and savepoints

2005-01-26 Thread Alvaro Herrera
On Tue, Jan 25, 2005 at 02:06:24PM -0300, Alvaro Herrera wrote:

Hackers,

 At this point, gdb says that the portal is in PORTAL_READY state.  The
 code says to keep it open and reassign it to the parent subxact.  I
 don't remember what the rationale for this was ... I'll review the
 discussion about this.

Our conclusion at the time was that cursors created inside failed
subtransactions should remain open.  See the proposal and followup
discussion starting here:

http://archives.postgresql.org/pgsql-hackers/2004-07/msg00700.php

If we want to keep this behavior then we should not close all READY
cursors as per my previous patch.  We should keep them open, and only
mark FAILED those cursors that are related to state reversed by the
aborting subtransaction.

I don't see any way to do this cleanly until we have some
relcache-dependency checking on Portals (maybe other things as well?).
Not sure what we can do about it right now, with the 8.0.1 release
tomorrow.

-- 
Alvaro Herrera ([EMAIL PROTECTED])
Political science is the science of understanding why
 politicians act the way they do  (netfunny.com)



[HACKERS] Data statement format used by the .sh scripts

2005-01-26 Thread noman naeem
Hello Every one,

Can someone explain to me the DATA statement format
below, including the insert parameters? It is used
extensively in pg_attribute.h, pg_class.h, pg_proc.h,
and many other places.

DATA(insert ( 1255 prosrc 26 -1 -1 -1 0 -1 -1 f x
i f f f t 0));

Thanks,
Nauman






Re: [HACKERS] Patent issues and 8.1

2005-01-26 Thread Marc G. Fournier
On Wed, 26 Jan 2005, Hannu Krosing wrote:
On one fine day (Tuesday, 25 January 2005, 21:10 -0400), Marc G. Fournier wrote:
On Tue, 25 Jan 2005, Bruce Momjian wrote:

So if we have to address it we call it 8.0.7 or something.  My point is
that we don't need to address it until we actually find out the patent
is being enforced against someone, and that possibility is quite unlikely.
Ah, so you are advocating waiting *until* the problem exists, even *after*
we know a) there may be a problem and b) we know that we can fix it ... ?
It may be my English skills, as I'm not a native speaker, but your
temporal logic escapes me ...
... waiting *until* the problem exists ... there *may be* a problem ...
so *bruce* advocates waiting *until* there *is* a problem, *we* know it
*may be* (*there* ?) and we know we *can* fix the problem that *may
be* ?
Now you've totally confused me *shakes head*
Bruce is advocating waiting until the Patent has been Granted, instead of 
doing something about it now, when we know the patent is going through the 
system (and will likely get granted) ... a reactive vs proactive 
response to the problem.

Basically, after the patent is granted we are going to scramble to get 
rid of the ARC stuff, instead of taking the time leading up to the 
granting to get rid of it, so that when it is granted it isn't something 
we have to concern ourselves with ...


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664


[HACKERS] Deferrable Unique Constraints

2005-01-26 Thread George Essig
I noticed that implementing deferrable unique constraints is on the
TODO list.  I don't think it's been said yet, but currently you can
implement a deferrable unique constraint by using a deferrable
constraint trigger together with a procedural language like PL/pgSQL.
If you need an index on a column, you can use a regular index instead
of a unique index.

Yes, I noticed that getting rid of constraint triggers is also on the
TODO list.

Below is an example.

George Essig

--

create table t (x integer, y integer);
create index t_x_in on t (x);

-- Create a trigger function to test for duplicate values of x.
-- Table t column x unique insert update trigger function.

create or replace function t_x_un_ins_up_tr() RETURNS trigger
AS '
declare
   invalid integer;
begin

-- Not absolutely necessary, but avoids a query if the new and old
-- values of x are the same.

if TG_OP = ''UPDATE'' then
if new.x = old.x then
return new;
end if;
end if;

-- If 2 or more rows have the same value of x, set invalid to 1.

select 1 into invalid
from t
where x = new.x
offset 1 limit 1;

-- If found, raise exception.

if FOUND then
raise EXCEPTION
''Violation of unique constraint on column x in table t by new row:
x %, y %'', new.x, new.y;
end if;

return new;
end;'
LANGUAGE plpgsql;  

-- Create a deferrable constraint trigger that executes the trigger function.
-- This runs at transaction commit time for every row that was inserted or
-- updated.

create constraint trigger t_x_un_ins_up_tr after insert or update on t 
deferrable initially deferred 
for each row 
execute procedure t_x_un_ins_up_tr ();

-- Begin a transaction.
-- Insert duplicate values of x successfully.
-- Violation of constraint when transaction is committed. 

test=# begin;
BEGIN
test=# insert into t (x, y) values (1,1);
INSERT 30332079 1
test=# insert into t (x, y) values (1,2);
INSERT 30332080 1
test=# commit;
ERROR:  Violation of unique constraint on column x in table t by new row:
x 1, y 1
test=# select * from t;
 x | y
---+---
(0 rows)

-- Begin a transaction.
-- Insert duplicate values of x successfully.
-- Update one of the duplicate values to another value.
-- Commit transaction successfully.

test=# begin;
BEGIN
test=# insert into t (x, y) values (1,1);
INSERT 30332083 1
test=# insert into t (x, y) values (1,2);
INSERT 30332084 1
test=# update t set x = 2 where y = 2;
UPDATE 1
test=# commit;
COMMIT
test=# select * from t;
 x | y
---+---
 1 | 1
 2 | 2
(2 rows)



[HACKERS] cvs TIP, tsearch2 and Solaris 8 Sparc

2005-01-26 Thread Darcy Buskermolen
Hello,
It looks like Teodor's latest commits to tsearch2 have broken the build on 
SPARC Solaris 8.  See 
http://pgbuildfarm.org/cgi-bin/show_log.pl?nm=potoroo&dt=2005-01-26%2008:30:02 
for more details.


-- 
Darcy Buskermolen
Wavefire Technologies Corp.
ph: 250.717.0200
fx:  250.763.1759
http://www.wavefire.com



Re: [HACKERS] cvs TIP, tsearch2 and Solaris 8 Sparc

2005-01-26 Thread Tom Lane
Darcy Buskermolen [EMAIL PROTECTED] writes:
 It looks like Teodor's latest commits to tsearch2 have broken the build on 
 SPARC Solaris 8.

HPUX, too.  Patch committed.

regards, tom lane



Re: [HACKERS] Deferrable Unique Constraints

2005-01-26 Thread Greg Stark

George Essig [EMAIL PROTECTED] writes:

 I noticed that implementing deferrable unique constraints is on the
 TODO list.  I don't think its been said yet, but currently you can
 implement a deferrable unique constraint by using a deferrable
 constraint trigger together with a procedural language like plpgsql.

You have a race condition. Two transactions can insert conflicting
records, and if they commit at the same time, neither would see the
other's uncommitted records.

Off the top of my head it seems the way to go about doing this would be to
simply not insert the records in the index until commit time. This doesn't
actually sound so hard, is there any problem with this approach?

You could almost implement this with a deferred trigger, a boolean column, and
a partial unique index. However I don't think deferred constraint triggers can
modify the record data.

The unfortunate bit here is that even if this worked the trigger setting the
boolean flag which puts the record into the index would create a new copy of
the record. Since it's modifying a record inserted by the same transaction it
could in theory just modify it in place. I don't think any attempt is made to
do that though. In any case a real native implementation wouldn't really need
the flag so this problem wouldn't come up.

-- 
greg




Re: [HACKERS] Patent issues and 8.1

2005-01-26 Thread Serguei A. Mokhov
Hello all,

With this patent issue at hand, can't we come up with a pluggable API
and pluggable cache-replacement modules, so that the folks who need not
care about US patents can simply download and load the PgARC module, and
those who can't, just load the NeilLRU or a BetterThanARCCacheReplacement
module that doesn't violate those patents? If the modules all conform to
the same pg-cache-replacement API, they could be swapped on the fly. I
believe the API and the two modules, Jan's ARC and Neil's LRU, could be
implemented for 8.0.1 and be less invasive than having ARC yanked out and
replaced with LRU.

I believe PGDG does not need to concern itself with removing all traces
of ARC from CVS or otherwise; on the contrary, it could still ship the
ARC module as an add-on for those who wish. This is possible because the
project is NOT hosted in the US (if I am wrong, correct me).

This idea would also help the commercial vendors of PG to write whatever
modules they think best, or if they can't, just always load LRU in their
commercial deployments of PG in the US.

-- 
Serguei A. Mokhov|  /~\The ASCII
Computer Science Department  |  \ / Ribbon Campaign
Concordia University |   XAgainst HTML
Montreal, Quebec, Canada |  / \  Email!



Re: [HACKERS] Deferrable Unique Constraints

2005-01-26 Thread Tom Lane
Greg Stark [EMAIL PROTECTED] writes:
 Off the top of my head it seems the way to go about doing this would be to
 simply not insert the records in the index until commit time. This doesn't
 actually sound so hard, is there any problem with this approach?

Yeah:
begin;
insert into foo (key, ...) values (33, ...);
select * from foo where key = 33;
...

If the SELECT uses an indexscan it will fail to find the just-inserted
row.

regards, tom lane



Re: [HACKERS] Patent issues and 8.1

2005-01-26 Thread Serguei A. Mokhov
Hello all,

Having received the next pgsql-hackers digest, I see that Jean-Gerard
Pailloncy has already advocated this idea. By no means did I mean to
copy :) as I am in digest mode. However, I think it's a good path to
take anyway, as at least two people came up with it. Please do not
disregard this idea.

-s




-- 
Serguei A. Mokhov|  /~\The ASCII
Computer Science Department  |  \ / Ribbon Campaign
Concordia University |   XAgainst HTML
Montreal, Quebec, Canada |  / \  Email!



Re: [HACKERS] Deferrable Unique Constraints

2005-01-26 Thread Greg Stark

Tom Lane [EMAIL PROTECTED] writes:

 Greg Stark [EMAIL PROTECTED] writes:
  Off the top of my head it seems the way to go about doing this would be to
  simply not insert the records in the index until commit time. This doesn't
  actually sound so hard, is there any problem with this approach?
 
 Yeah:
   begin;
   insert into foo (key, ...) values (33, ...);
   select * from foo where key = 33;
   ...
 
 If the SELECT uses an indexscan it will fail to find the just-inserted
 row.

Well presumably you would need a non-unique index created for query execution
purposes. The unique index would be purely for enforcing the constraint.

-- 
greg




Re: [HACKERS] bug w/ cursors and savepoints

2005-01-26 Thread Tom Lane
Alvaro Herrera [EMAIL PROTECTED] writes:
 Our conclusion at the time was that cursors created inside failed
 subtransactions should remain open.  See the proposal and followup
 discussion starting here:

 http://archives.postgresql.org/pgsql-hackers/2004-07/msg00700.php

 If we want to keep this behavior then we should not close all READY
 cursors as per my previous patch.  We should keep them open, and only
 mark FAILED those cursors that are related to state reversed by the
 aborting subtransaction.

 I don't see any way to do this cleanly, until we have some relcache-
 dependency checking on Portals (maybe anything else?).  Not sure what we
 can do about it right now, with 8.0.1 release tomorrow.

I don't think we have a lot of choices: we have to destroy (or at least
mark FAILED) all such cursors for the time being.  The whole question of
cursor transaction semantics could stand to be revisited in 8.1, but we
can't ship the system with such a trivial crashing bug.

Note that dependency tracking would not in itself be enough to solve the
problem, since the question is not merely what objects the cursor
depends on, but whether their definitions were changed in the failed
subtransaction.  Talk about messy :-(

regards, tom lane

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] bug w/ cursors and savepoints

2005-01-26 Thread Tom Lane
Neil Conway [EMAIL PROTECTED] writes:
 On Wed, 2005-01-26 at 12:02 -0300, Alvaro Herrera wrote:
 Hmm ... not sure how hard that is.

 Would it work to record the sub XID of the deleting subtxn on CLOSE, and
 then consider whether to really do the deletion when the subtxn
 commits/aborts?

It'd take more than that.  Consider

BEGIN;
DECLARE c CURSOR ...;
SAVEPOINT x;
CLOSE c;
DECLARE c CURSOR ...;   -- draws duplicate-cursor error

You'd need to do something to hide the deleted cursor for the
remainder of its subtransaction.

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] bug w/ cursors and savepoints

2005-01-26 Thread Neil Conway
On Wed, 2005-01-26 at 12:02 -0300, Alvaro Herrera wrote:
  and some regression tests for this behavior, but I'm happy to add that
  myself if no one beats me to it.
 
 Please do.

Attached is a patch adding regression tests for this change -- I've
already applied it to HEAD.

-Neil

Index: src/test/regress/expected/transactions.out
===
RCS file: /var/lib/cvs/pgsql/src/test/regress/expected/transactions.out,v
retrieving revision 1.10
diff -c -r1.10 transactions.out
*** src/test/regress/expected/transactions.out	13 Sep 2004 20:09:51 -	1.10
--- src/test/regress/expected/transactions.out	27 Jan 2005 01:25:43 -
***
*** 470,472 
--- 470,519 
  DROP TABLE foo;
  DROP TABLE baz;
  DROP TABLE barbaz;
+ -- verify that cursors created during an aborted subtransaction are
+ -- closed, but that we do not rollback the effect of any FETCHs
+ -- performed in the aborted subtransaction
+ begin;
+ savepoint x;
+ create table abc (a int);
+ insert into abc values (5);
+ insert into abc values (10);
+ declare foo cursor for select * from abc;
+ fetch from foo;
+  a 
+ ---
+  5
+ (1 row)
+ 
+ rollback to x;
+ -- should fail
+ fetch from foo;
+ ERROR:  cursor "foo" does not exist
+ commit;
+ begin;
+ create table abc (a int);
+ insert into abc values (5);
+ insert into abc values (10);
+ insert into abc values (15);
+ declare foo cursor for select * from abc;
+ fetch from foo;
+  a 
+ ---
+  5
+ (1 row)
+ 
+ savepoint x;
+ fetch from foo;
+  a  
+ ----
+  10
+ (1 row)
+ 
+ rollback to x;
+ fetch from foo;
+  a  
+ ----
+  15
+ (1 row)
+ 
+ abort;
Index: src/test/regress/sql/transactions.sql
===
RCS file: /var/lib/cvs/pgsql/src/test/regress/sql/transactions.sql,v
retrieving revision 1.10
diff -c -r1.10 transactions.sql
*** src/test/regress/sql/transactions.sql	13 Sep 2004 20:10:13 -	1.10
--- src/test/regress/sql/transactions.sql	27 Jan 2005 01:23:37 -
***
*** 290,292 
--- 290,327 
  DROP TABLE foo;
  DROP TABLE baz;
  DROP TABLE barbaz;
+ 
+ -- verify that cursors created during an aborted subtransaction are
+ -- closed, but that we do not rollback the effect of any FETCHs
+ -- performed in the aborted subtransaction
+ begin;
+ 
+ savepoint x;
+ create table abc (a int);
+ insert into abc values (5);
+ insert into abc values (10);
+ declare foo cursor for select * from abc;
+ fetch from foo;
+ rollback to x;
+ 
+ -- should fail
+ fetch from foo;
+ commit;
+ 
+ begin;
+ 
+ create table abc (a int);
+ insert into abc values (5);
+ insert into abc values (10);
+ insert into abc values (15);
+ declare foo cursor for select * from abc;
+ 
+ fetch from foo;
+ 
+ savepoint x;
+ fetch from foo;
+ rollback to x;
+ 
+ fetch from foo;
+ 
+ abort;

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] Heads up: upcoming releases in all branches back to

2005-01-26 Thread Neil Conway
On Tue, 2005-01-25 at 13:09 -0500, Tom Lane wrote:
 Current thought is to wrap these on Thursday for release Friday.
 If you have any last-minute fixes for the back branches, now's the
 time to get them in.

Sorry for the last minute commit, but I realized that I forgot to
backpatch the cursor buffer overrun fix to 7.3 and 7.2. I've done that
now, and I've got nothing else I wanted to get into any of these
releases.

-Neil



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] Heads up: upcoming releases in all branches back to

2005-01-26 Thread Marc G. Fournier
Not really last minute, since wrap is tomorrow evening :)
On Thu, 27 Jan 2005, Neil Conway wrote:
On Tue, 2005-01-25 at 13:09 -0500, Tom Lane wrote:
Current thought is to wrap these on Thursday for release Friday.
If you have any last-minute fixes for the back branches, now's the
time to get them in.
Sorry for the last minute commit, but I realized that I forgot to
backpatch the cursor buffer overrun fix to 7.3 and 7.2. I've done that
now, and I've got nothing else I wanted to get into any of these
releases.
-Neil

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]

Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] Patent issues and 8.1

2005-01-26 Thread Tim Allen
Bruce Momjian wrote:
pgman wrote:
...
What I would like to do is to pledge that we will put out an 8.0.X to
address any patent conflict experienced by our users.  This would
include ARC or anything else.  This way we don't focus just on ARC but
have a plan for any patent issues that appear, and we don't have to
adjust our development cycle until an actual threat appears.
This pledge sounds like an open-ended commitment of an infinite number 
of development hours. I don't think you can pledge to address any 
patent conflict. There is a limit to the number of tgl-hours in a day :).

One advantage we have is that we can easily adjust our code to work
around patented code by just installing a new binary.  (Patents that
affect our storage format would be more difficult.  A fix would have to
perhaps rewrite the on-disk data.)
"easily"? Maybe, maybe not. I don't think you can assume that the fix to 
as-yet-unknown patent conflicts is necessarily going to be easy. Even 
the USPTO occasionally grants patents on things that aren't trivial.

Just my AUD0.02, which should probably be worth even less given the size 
of my contribution to postgresql to date.

Tim
--
---
Tim Allen  [EMAIL PROTECTED]
Proximity Pty Ltd  http://www.proximity.com.au/
  http://www4.tpg.com.au/users/rita_tim/
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


Re: [HACKERS] Deferrable Unique Constraints

2005-01-26 Thread Neil Conway
On Wed, 2005-01-26 at 15:48 -0500, Greg Stark wrote:
 Well presumably you would need a non-unique index created for query execution
 purposes. The unique index would be purely for enforcing the constraint.

Yuck.

You could perhaps relax the uniqueness of the index during the
transaction itself, and keep around some backend-local indication of
which index entries have been inserted. Then at transaction commit
you'd need to re-check the inserted index entries to verify that they
are unique. It would be nice to just keep a pin on the leaf page that we
inserted into, although we'd need to take care to follow subsequent page
splits (could we use the existing L & Y techniques to do this?).
Needless to say, it would be pretty ugly...

-Neil



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] Deferrable Unique Constraints

2005-01-26 Thread Tom Lane
Neil Conway [EMAIL PROTECTED] writes:
 You could perhaps relax the uniqueness of the index during the
 transaction itself, and keep around some backend-local indication of
 which index entries have been inserted. Then at transaction commit
 you'd need to re-check the inserted index entries to verify that they
 are unique.

Yeah, what I've been visualizing is a list of tentative duplicates ---
that is, you do the immediate unique check same as now, and if it passes
(which hopefully is most of the time) then you're in the clear.
Otherwise you log the apparent duplicate key value to be rechecked at
commit.

 It would be nice to just keep a pin on the leaf page that we
 inserted into, although we'd need to take care to follow subsequent page
 splits (could we use the existing L & Y techniques to do this?).

I do not believe we can do that without risking deadlocks.  It'll be
safer just to repeat the search for each key value that's of concern.

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] Deferrable Unique Constraints

2005-01-26 Thread Alvaro Herrera
On Thu, Jan 27, 2005 at 03:31:29PM +1100, Neil Conway wrote:

 You could perhaps relax the uniqueness of the index during the
 transaction itself, and keep around some backend-local indication of
 which index entries have been inserted. Then at transaction commit
 you'd need to re-check the inserted index entries to verify that they
 are unique. It would be nice to just keep a pin on the leaf page that we
 inserted into, although we'd need to take care to follow subsequent page
 splits (could we use the existing L & Y techniques to do this?).

Maybe we can do something like

1. use a boolean-returning unique insertion.  If it fails, returns
false, doesn't ereport(ERROR); if it works, inserts and returns true.

2. the caller checks the return value.  If false, records the insertion
attempt into an should-check-later list.

3. at transaction end, unique insertion is tried again with the items on
the list.  If it fails, the transaction is aborted.

It's only a SMOC, nothing difficult AFAICS.  Disk-spilling logic
included, because it'd probably be needed.

-- 
Alvaro Herrera ([EMAIL PROTECTED])
"If you don't know where you are going, you will very likely end up somewhere else."

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [HACKERS] Deferrable Unique Constraints

2005-01-26 Thread Greg Stark
Tom Lane [EMAIL PROTECTED] writes:

 Yeah, what I've been visualizing is a list of tentative duplicates ---
 that is, you do the immediate unique check same as now, and if it passes
 (which hopefully is most of the time) then you're in the clear.

I don't see how you're in the clear. If session A does an insert, sees no
duplicate, and doesn't commit, but then B does an insert, sees the duplicate
from A, marks his insert tentative, and then commits, shouldn't B's commit
succeed? Then when A commits, shouldn't his fail? So A still has to recheck
even if there was no sign of a duplicate when he inserted.

Unless there's some way for B to indicate to A that his insert has become
tentative then I think you have to resign yourself to checking all deferred
unique constraints, not just ones that seem suspect.

-- 
greg


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[HACKERS] Allow GRANT/REVOKE permissions to be applied to all schema objects with one command

2005-01-26 Thread Matthias Schmidt
Hi Tom + *,
as I learned from several posts, this TODO splits into two distinct 
TODOs:

TODO1: Allow GRANT/REVOKE permissions to be applied to all schema 
objects with one command.
TODO2: Assign permissions to schemas which are automatically inherited 
by objects created in the schema.

my questions are:
a) should we pursue both of them?
b) what should the syntax for TODO1 look like? Anchored at 'GRANT ... ON 
SCHEMA' or 'GRANT ... ON objecttype'?

greetings,
Matthias
--
Matthias Schmidt
Viehtriftstr. 49
67346 Speyer
GERMANY
Tel.: +49 6232 4867
Fax.: +49 6232 640089
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org