Re: [HACKERS] ARC patent

2005-01-20 Thread Hannu Krosing
On one fine day (Monday, 17 January 2005, 23:22+), Simon Riggs wrote:
 On Mon, 2005-01-17 at 14:02 -0800, Joshua D. Drake wrote:
  IBM can NEVER sue customers for using infringing
  code before first informing them of infringement and
  giving reasonable time to upgrade to uninfringing
  version.
...

 It seems clear that anybody on 8.0.0ARC after the patent had been
 granted could potentially be liable to pay damages. At best, the
 community would need to do a product recall to ensure patents were not
 infringed.
 
 So, it also seems clear that 8.0.x should eventually have a straight
 upgrade path to a replacement, assuming the patent is granted. 
 
 We should therefore plan to:
 1. improve/replace ARC for 8.1

An improved ARC still needs a licence from IBM if they get the patent and
our improved version infringes any claims in it.

Actually, getting patents on all useful improvements to an existing patent
has been a known winning strategy in corporate patent hardball - you
force the original patent holder to negotiate, as he is rendered unable
to improve his own design without infringing your patents. IIRC, some
early electronic consumer devices were wrangled out of single-company
control that way.

We could consider donating our improvements to some free patent
foundation, to be patented with exactly this kind of action plan in mind.

-- 
Hannu Krosing [EMAIL PROTECTED]

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] US Patents vs Non-US software ...

2005-01-20 Thread Hannu Krosing
On one fine day (Monday, 17 January 2005, 21:45-0300), Alvaro Herrera wrote:
 On Mon, Jan 17, 2005 at 07:31:48PM -0400, Marc G. Fournier wrote:
 
  Just curious here, but are patents global?  PostgreSQL is not US software, 
  but it is run within the US ... so, would this patent, if it goes through, 
  only affect those using PostgreSQL in the US, or do patents somehow 
  transcend international borders?
 
 No, they are limited to the territory they are registered in.
 
 Not sure how that applies to somebody who just uses Postgres in the US;
 of course, IANAL.

US Americans could just place their servers somewhere not under US
jurisdiction (Cuba) or, even better, in a legal vacuum (Guantanamo) and
run the client over the internet.

If something infringes, then it is surely the server, not the client.

-- 
Hannu Krosing [EMAIL PROTECTED]



Re: [HACKERS] ARC patent

2005-01-20 Thread Hannu Krosing
On one fine day (Wednesday, 19 January 2005, 00:39-0500), Tom Lane wrote:
 What this really boils down to is whether we think we have
 order-of-a-year before the patent is issued.  I'm nervous about
 assuming that.  I'd like to have a plan that will produce a tested,
 credible patch in less than six months.

Can't this thing be abstracted out, like so many other things are (types,
functions, PLs) or should be / once were (storage managers)?

Like the different scheduling algorithms in the Linux kernel?

What makes this inherently so difficult to do?

Is it just testing, or something more fundamental?

Most likely also the gathering of the information needed to decide on a
replacement policy.

If it is just testing, we could move fast to supplying two algorithms,
LRU and ARC, selectable at startup.

This has the extra benefit of easily allowing other algorithms to be
tested - I guess that for unpredictable workloads a random policy in the
80% tail of an LRU cache should not do too badly, probably better than
7.x's seqscan-pollutable LRU ;)
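A startup-time switch like the one floated here could boil down to a single configuration setting; sketched below as a hypothetical postgresql.conf fragment (no such GUC actually exists, it only illustrates the "selectable at startup" idea):

```
# Hypothetical postgresql.conf fragment - PostgreSQL has no such setting.
buffer_replacement_policy = 'arc'    # or 'lru'
```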


-- 
Hannu Krosing [EMAIL PROTECTED]



Re: [HACKERS] ARC patent

2005-01-20 Thread Hannu Krosing
On one fine day (Monday, 17 January 2005, 11:57-0800), Joshua D. Drake wrote:
 However, I don't want to be beholden to IBM indefinitely --- in five
 years their corporate strategy might change.  I think that a reasonable
 response to this is to plan to get rid of ARC, or at least modify the
 code enough to avoid the patent, in time for 8.1.  (It's entirely likely
 that that will happen before the patent issues, anyway.)
 
  regards, tom lane
   
 
 IBM makes 20% of their money from licensing patents.

OTOH, they make 80% of their goodwill in the OS community out of being
nice to open-source projects, or at least out of avoiding being seen as
downright unfair. So I expect at least some civility from them if and
when they get the patent.

I also suspect that PG possibly infringes enough already-granted
patents (some likely owned by IBM) to get it into at least as much
trouble as SCO has caused IBM.

The reason we haven't seen any IBM lawyers is that demanding royalties
from the PostgreSQL Global Development Group would be bad publicity, not
that they could not have done it if PG were a product of MomPop
Software Startup Co.

As for companies that take the PG source, rebrand it, and sell it as a
closed-source product, they have several options:
 1) just wait and hope that the public version evolves past the ARC patent
before the patent is granted.
 2) licence the patent from IBM, if and when it is granted.
 3) rewrite the part that uses ARC (and, if they're really paranoid,
the parts bordering it) in their commercial version.
 4) hire some core developers to do 3) in the public version.

-- 
Hannu Krosing [EMAIL PROTECTED]




Re: [HACKERS] ARC patent

2005-01-20 Thread Hannu Krosing
On one fine day (Monday, 17 January 2005, 14:48-0500), Tom Lane wrote:
 Bruce Momjian pgman@candle.pha.pa.us writes:
  Andrew Sullivan wrote:
  What will you do if the patent is granted, 8.0 is out there with the
  offending code, and you get a cease-and-desist letter from IBM
  demanding the removal of all offending code from the Net?
 
  We can modify the code slightly to hopefully avoid the patent.  With the
  US granting patents on even obvious ideas, I would think that most large
  software projects, including commercial ones, already have tons of
  patent violations in their code.  Does anyone think otherwise?
 
 I think there is zero probability of being sued by IBM in the near
 future.  They would instantly destroy the credibility and good
 relationships they've worked so hard to build up with the entire
 open source community.

Agreed

 However, I don't want to be beholden to IBM indefinitely --- in five
 years their corporate strategy might change.  I think that a reasonable
 response to this is to plan to get rid of ARC, or at least modify the
 code enough to avoid the patent, in time for 8.1.  (It's entirely likely
 that that will happen before the patent issues, anyway.)

I'd rather like a solution where the cache replacement policy has a
clean enough interface to support many competing
algorithms/implementations, probably even selectable at startup (or even
at runtime ;).

Firstly, I'm sure that there is no single best strategy (not even ARC)
for all kinds of workloads - think OLTP vs. OLAP.

Secondly, some people might want to use ARC even if and when IBM gets
the patent, perhaps even badly enough to license it from IBM. (We are not
obliged to design interfaces that prevent the use of patented stuff, as
this is generally impossible.)

Thirdly, having it as a well-defined component/API might encourage more
research on different algorithms - see how many schedulers Linux 2.6
has, both for processes and for disk I/O.

-- 
Hannu Krosing [EMAIL PROTECTED]



Re: [HACKERS] Two-phase commit for 8.1

2005-01-20 Thread Christopher Kings-Lynne
If the patch is ready to be committed early in the cycle, I'd say most 
definitely ... just depends on how late in the cycle it's ready ...

I *believe* that for 8.1 we're looking at a 2-month cycle before beta, so 
figure beta for ~April 1st (no April Fools jokes, eh?) ...
You guys are crazy :)  We haven't had a release cycle of less than a year 
in many, many releases :)

Chris


Re: [HACKERS] Much Ado About COUNT(*)

2005-01-20 Thread Mark Cave-Ayland

 -Original Message-
 From: Jeff Davis [mailto:[EMAIL PROTECTED] 
 Sent: 19 January 2005 21:33
 To: Alvaro Herrera
 Cc: Mark Cave-Ayland; pgsql-hackers@postgresql.org
 Subject: Re: [HACKERS] Much Ado About COUNT(*)
 
 
 
 To fill in some details I think what he's saying is this:
 
 = create table foo(...);
 = create table foo_count(num int);
 = insert into foo_count values(0);
 = create table foo_change(num int);
 
 then create a trigger after delete on foo that does insert 
 into foo_change values(-1) and a trigger after insert on 
 foo that inserts a +1 into foo_change.
 
 Periodically, do:
 = begin;
 = set transaction isolation level serializable;
 = update foo_count set num = num + coalesce((select sum(num) from foo_change), 0);
 = delete from foo_change;
 = commit;
 = VACUUM;
 
 And then any time you need the correct count(*) value, do instead:
 = select sum(num) from (select num from foo_count
   union all select num from foo_change) as s;
 
 And that should work. I haven't tested this exact example, so 
 I may have overlooked something.
 
 Hope that helps. That way, you don't have huge waste from the 
 second table, and also triggers maintain it for you and you 
 don't need to think about it.
 
 Regards,
   Jeff Davis
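Spelled out, the two triggers described above might look like the following untested sketch (the function and trigger names are invented here; 8.0-era quoted-body syntax):

```sql
-- Hypothetical sketch of the row-count bookkeeping triggers described above.
CREATE FUNCTION foo_change_ins() RETURNS trigger AS '
BEGIN
    INSERT INTO foo_change VALUES (1);   -- +1 for each inserted row
    RETURN NULL;                         -- AFTER trigger: return value ignored
END;
' LANGUAGE plpgsql;

CREATE FUNCTION foo_change_del() RETURNS trigger AS '
BEGIN
    INSERT INTO foo_change VALUES (-1);  -- -1 for each deleted row
    RETURN NULL;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER foo_count_ins AFTER INSERT ON foo
    FOR EACH ROW EXECUTE PROCEDURE foo_change_ins();
CREATE TRIGGER foo_count_del AFTER DELETE ON foo
    FOR EACH ROW EXECUTE PROCEDURE foo_change_del();
```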


Hi Jeff,

Thanks for the information. I seem to remember something similar to this
being discussed last year in a similar thread. My only real issue I can see
with this approach is that the trigger is fired for every row, and it is
likely that the database I am planning will have large inserts of several
hundred thousand records. Normally the impact of these is minimised by
inserting the entire set in one transaction. Is there any way that your
trigger can be modified to fire once per transaction with the number of
modified rows as a parameter?
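For bulk loads, one workaround under the scheme quoted above would be to drop the row-level trigger, load the batch, record the delta by hand, and recreate the trigger, all inside one transaction (DDL is transactional in PostgreSQL). This is a hypothetical, untested sketch; the trigger, function, and staging-table names are all invented:

```sql
BEGIN;
DROP TRIGGER foo_count_ins ON foo;          -- hypothetical trigger name
INSERT INTO foo SELECT * FROM staging;      -- the bulk insert
INSERT INTO foo_change
    SELECT count(*) FROM staging;           -- record the whole delta once
CREATE TRIGGER foo_count_ins AFTER INSERT ON foo
    FOR EACH ROW EXECUTE PROCEDURE foo_change_ins();  -- put the trigger back
COMMIT;
```

Note that DROP TRIGGER takes an exclusive lock on foo, so concurrent writers would be blocked for the duration of the load.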


Many thanks,

Mark.


WebBased Ltd
South West Technology Centre
Tamar Science Park
Plymouth
PL6 8BT 

T: +44 (0)1752 791021
F: +44 (0)1752 791023
W: http://www.webbased.co.uk
 





Re: [HACKERS] Much Ado About COUNT(*)

2005-01-20 Thread D'Arcy J.M. Cain
On Thu, 20 Jan 2005 10:12:17 -
Mark Cave-Ayland [EMAIL PROTECTED] wrote:
 Thanks for the information. I seem to remember something similar to
 this being discussed last year in a similar thread. My only real issue
 I can see with this approach is that the trigger is fired for every
 row, and it is likely that the database I am planning will have large
 inserts of several hundred thousand records. Normally the impact of
 these is minimised by inserting the entire set in one transaction. Is
 there any way that your trigger can be modified to fire once per
 transaction with the number of modified rows as a parameter?

I don't believe that such a facility exists but before dismissing it you
should test it out.  I think that you will find that disk buffering (the
system's as well as PostgreSQL's) will effectively handle this for you
anyway.

-- 
D'Arcy J.M. Cain darcy@druid.net |  Democracy is three wolves
http://www.druid.net/darcy/|  and a sheep voting on
+1 416 425 1212 (DoD#0082)(eNTP)   |  what's for dinner.



Re: [HACKERS] ARC patent

2005-01-20 Thread Neil Conway
Simon Riggs wrote:
However, I think the ARC replacement should *not* be a fundamental
change in behavior: the algorithm should still attempt to balance
recency and frequency, to adjust dynamically to changes in workload, to
avoid sequential flooding, and to allow constant-time page
replacement.
Agreed: Those are the requirements. It must also scale better as well.
On thinking about this more, I'm not sure these are the right goals for 
an 8.0.x replacement algorithm. For 8.1 we should definitely Do The 
Right Thing and develop a complete ARC replacement. For 8.0.x, I wonder 
if it would be better to just replace ARC with LRU. The primary 
advantage to doing this is LRU's simplicity -- if we're concerned about 
introducing regressions in stability into 8.0, this is likely the best 
way to reduce the chance of that happening. Furthermore, LRU's behavior 
with PostgreSQL is well-known and has been extensively tested. A complex 
ARC replacement would receive even less testing than ARC itself has 
received -- which isn't a whole lot, in comparison with LRU.

Of course, the downside is that we lose the benefits of ARC or an 
ARC-like algorithm in 8.0. That would be unfortunate, but I don't think 
it is a catastrophe. The other bufmgr-related changes (vacuum hints, 
bgwriter and vacuum delay) should ensure that VACUUM still has a much 
reduced impact on system performance. Sequential scans will still flood 
the cache, but I don't view that as an enormous problem. In other words, 
I think a more intelligent replacement policy would be nice to have, but 
at this point in the 8.0 development cycle we should go with the 
simplest solution that we know is likely to work -- namely, LRU.

-Neil


Re: [HACKERS] Much Ado About COUNT(*)

2005-01-20 Thread Richard Huxton
D'Arcy J.M. Cain wrote:
On Thu, 20 Jan 2005 10:12:17 -
Mark Cave-Ayland [EMAIL PROTECTED] wrote:
Thanks for the information. I seem to remember something similar to
this being discussed last year in a similar thread. My only real issue
I can see with this approach is that the trigger is fired for every
row, and it is likely that the database I am planning will have large
inserts of several hundred thousand records. Normally the impact of
these is minimised by inserting the entire set in one transaction. Is
there any way that your trigger can be modified to fire once per
transaction with the number of modified rows as a parameter?

I don't believe that such a facility exists but before dismissing it you
should test it out.  I think that you will find that disk buffering (the
system's as well as PostgreSQL's) will effectively handle this for you
anyway.
Well, it looks like ROW_COUNT isn't set in a statement-level trigger 
function (GET DIAGNOSTICS myvar=ROW_COUNT). Which is a shame, otherwise 
it would be easy to handle. It should be possible to expose this 
information though, since it gets reported at the command conclusion.

--
  Richard Huxton
  Archonet Ltd
-- stmt_trig_test.sql --
BEGIN;
CREATE TABLE trigtest (
a int4 NOT NULL,
b text,
PRIMARY KEY (a)
);
CREATE FUNCTION tt_test_fn() RETURNS TRIGGER AS '
DECLARE
nr integer;
ro integer;
nr2 integer;
BEGIN
GET DIAGNOSTICS nr = ROW_COUNT;
GET DIAGNOSTICS ro = RESULT_OID;
SELECT count(*) INTO nr2 FROM trigtest;
RAISE NOTICE ''nr = % / ro = % / nr2 = %'',nr,ro,nr2;
RETURN NULL;
END;
' LANGUAGE plpgsql;
CREATE TRIGGER tt_test AFTER INSERT OR UPDATE ON trigtest
FOR EACH STATEMENT
EXECUTE PROCEDURE tt_test_fn();
INSERT INTO trigtest VALUES (1,'a');
INSERT INTO trigtest VALUES (2,'b');
UPDATE trigtest SET b = 'x';
ROLLBACK;


Re: [HACKERS] Two-phase commit for 8.1

2005-01-20 Thread Robert Treat
On Thursday 20 January 2005 04:16, Christopher Kings-Lynne wrote:
  If the patch is ready to be committed early in the cycle, I'd say most
  definitely ... just depends on how late in the cycle it's ready ...
 
  I *believe* that for 8.1 we're looking at a 2-month cycle before beta, so
  figure beta for ~April 1st (no April Fools jokes, eh?) ...

 You guys are crazy :)  We haven't had a release cycle less than a year
 in many, many releases :)


If ARC is deemed a serious enough problem that we need to address it *now*, 
then I think we should do a 2-month cycle where the core focus will be on 
putting in an ARC replacement, allowing only changes that do not require an 
initdb.  If we stick to that, we can do a new release in probably 3-4 months, 
and I think that will be acceptable as long as dump/reload is not required 
for upgrade.  I still think it might be worth contacting IBM and asking them 
whether their intention is to enforce the ARC patent if it is granted before 
pushing forward with that plan, but it seems like a good course of action to 
follow just in case.  If others felt strongly, we could probably start an 8.2 
branch right now as well and put someone like Neil in charge of keeping 8.2 
up to date with 8.1 changes while proceeding with other new whizbang 
functionality that requires initdb (I pick Neil because IIRC he has some 
initdb-requiring changes planned for development, but it could be someone else).

-- 
Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL



Translations at pgfoundry (was Re: [HACKERS] [PATCHES] Latest Turkish translation updates)

2005-01-20 Thread Peter Eisentraut
Peter Eisentraut wrote:
  Maybe we should have a pgfoundry project where all translations
  were kept, and from which the main CVS could be updated
  semi-automatically. Then we wouldn't have Peter checking out and
  committing all the time.

 That sounds like a fine idea.  My only concern would be the
 not-maintained-here syndrome, which occurs every time some CVS tree
 contains a file that is actually maintained by an external group,
 thus blocking the maintainers of the former CVS tree from applying
 necessary fixes at times.  Nevertheless, I think this is a winner. 
 Let's consider it when we start the 8.1 cycle.

OK, is anyone opposed to this idea?  I would register a pgfoundry 
project (name suggestions? translations?), give most established 
translators commit access, and move the statistics pages there.  Also, 
some translation groups seem to have their own mailing lists or web 
pages, which could optionally also be hosted there.

We could then sync the translations either regularly (e.g., once a week) 
or only at release time.  Of course we would need to mirror all the 
branches there.

Comments?

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/



Re: [HACKERS] Much Ado About COUNT(*)

2005-01-20 Thread Mark Cave-Ayland
 

 -Original Message-
 From: Richard Huxton [mailto:[EMAIL PROTECTED] 
 Sent: 20 January 2005 12:45
 To: D'Arcy J.M. Cain
 Cc: Mark Cave-Ayland; [EMAIL PROTECTED]; 
 [EMAIL PROTECTED]; pgsql-hackers@postgresql.org
 Subject: Re: [HACKERS] Much Ado About COUNT(*)
 
 
 D'Arcy J.M. Cain wrote:
  On Thu, 20 Jan 2005 10:12:17 -
  Mark Cave-Ayland [EMAIL PROTECTED] wrote:
  
  Thanks for the information. I seem to remember something similar to
  this being discussed last year in a similar thread. My only real issue
  I can see with this approach is that the trigger is fired for every
  row, and it is likely that the database I am planning will have large
  inserts of several hundred thousand records. Normally the impact of
  these is minimised by inserting the entire set in one transaction. Is
  there any way that your trigger can be modified to fire once per
  transaction with the number of modified rows as a parameter?
  
  I don't believe that such a facility exists but before dismissing it
  you should test it out.  I think that you will find that disk
  buffering (the system's as well as PostgreSQL's) will effectively
  handle this for you anyway.
 
 Well, it looks like ROW_COUNT isn't set in a statement-level trigger
 function (GET DIAGNOSTICS myvar=ROW_COUNT). Which is a shame,
 otherwise it would be easy to handle. It should be possible to expose
 this information though, since it gets reported at the command conclusion.


Hi Richard,

This is more the sort of approach I would be looking for. However I think
even in a transaction with ROW_COUNT defined, the trigger will still be
called once per insert. I think something like this would require a new
syntax like below, and some supporting code that would keep track of the
tables touched by a transaction :( 

CREATE TRIGGER tt_test AFTER TRANSACTION ON trigtest
FOR EACH TRANSACTION
EXECUTE PROCEDURE tt_test_fn();

I am sure that Jeff's approach will work, however it just seems like writing
out one table entry per row is going to slow large bulk inserts right down.


Kind regards,

Mark.


WebBased Ltd
South West Technology Centre
Tamar Science Park
Plymouth
PL6 8BT 

T: +44 (0)1752 791021
F: +44 (0)1752 791023
W: http://www.webbased.co.uk





Re: [HACKERS] Much Ado About COUNT(*)

2005-01-20 Thread Alvaro Herrera
On Thu, Jan 20, 2005 at 01:33:10PM -, Mark Cave-Ayland wrote:

 I am sure that Jeff's approach will work, however it just seems like writing
 out one table entry per row is going to slow large bulk inserts right down.

I don't see how it is any slower than the approach of inserting one
entry per row in the narrow table the OP wanted to use.  And it will be
faster for the scans.

-- 
Alvaro Herrera ([EMAIL PROTECTED])
Officer Krupke, what are we to do?
Gee, officer Krupke, Krup you! (West Side Story, Gee, Officer Krupke)



Re: Translations at pgfoundry (was Re: [HACKERS] [PATCHES] Latest Turkish translation updates)

2005-01-20 Thread Alvaro Herrera
On Thu, Jan 20, 2005 at 02:08:20PM +0100, Peter Eisentraut wrote:
 Peter Eisentraut wrote:
   Maybe we should have a pgfoundry project where all translations
   were kept, and from which the main CVS could be updated
   semi-automatically. Then we wouldn't have Peter checking out and
   committing all the time.
 
  That sounds like a fine idea.  My only concern would be the
  not-maintained-here syndrome, which occurs every time some CVS tree
  contains a file that is actually maintained by an external group,
  thus blocking the maintainers of the former CVS tree from applying
  necessary fixes at times.  Nevertheless, I think this is a winner. 
  Let's consider it when we start the 8.1 cycle.
 
 OK, is anyone opposed to this idea?  I would register a pgfoundry 
 project (name suggestions? translations?), give most established 
 translators commit access, and move the statistics pages there.

Sounds good.  Maybe the name is too generic; what about
server translations, or something like that?

-- 
Alvaro Herrera ([EMAIL PROTECTED])
How can you trust something you pay for and cannot see,
and not trust something they give you and show you? (Germán Poo)



Re: [HACKERS] Much Ado About COUNT(*)

2005-01-20 Thread Richard Huxton
Mark Cave-Ayland wrote:
 


-Original Message-
From: Richard Huxton [mailto:[EMAIL PROTECTED] 
Sent: 20 January 2005 12:45
To: D'Arcy J.M. Cain
Cc: Mark Cave-Ayland; [EMAIL PROTECTED]; 
[EMAIL PROTECTED]; pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] Much Ado About COUNT(*)

D'Arcy J.M. Cain wrote:
On Thu, 20 Jan 2005 10:12:17 -
Mark Cave-Ayland [EMAIL PROTECTED] wrote:

Thanks for the information. I seem to remember something similar to this
being discussed last year in a similar thread. My only real issue I can
see with this approach is that the trigger is fired for every row, and
it is likely that the database I am planning will have large inserts of
several hundred thousand records. Normally the impact of these is
minimised by inserting the entire set in one transaction. Is there any
way that your trigger can be modified to fire once per transaction with
the number of modified rows as a parameter?

I don't believe that such a facility exists but before dismissing it you
should test it out.  I think that you will find that disk buffering (the
system's as well as PostgreSQL's) will effectively handle this for you
anyway.

Well, it looks like ROW_COUNT isn't set in a statement-level trigger
function (GET DIAGNOSTICS myvar=ROW_COUNT). Which is a shame, otherwise
it would be easy to handle. It should be possible to expose this
information though, since it gets reported at the command conclusion.

Hi Richard,
This is more the sort of approach I would be looking for. However I think
even in a transaction with ROW_COUNT defined, the trigger will still be
called once per insert. I think something like this would require a new
syntax like below, and some supporting code that would keep track of the
tables touched by a transaction :(
Well, a statement-level trigger would be called once per statement, 
which can be much less than per row.

--
  Richard Huxton
  Archonet Ltd


Re: [HACKERS] Some things I like to pick from the TODO list ...

2005-01-20 Thread Matthias Schmidt
OK guys - I think I'll go for #3:
Allow GRANT/REVOKE permissions to be applied to all schema objects with
one command

cheers,
Matthias
On 18.01.2005 at 20:47, Tom Lane wrote:
Matthias Schmidt [EMAIL PROTECTED] writes:
These are the things I'm interested in:

1) Allow limits on per-db/user connections
2) Allow server log information to be output as INSERT statements
3) Allow GRANT/REVOKE permissions to be applied to all schema objects
with one command
4) Allow PREPARE of cursors

What's free, and what's appropriate for a newbie like me?
I'd vote for #3 just because it'd be much the most useful --- we
get requests for that every other day, it seems like.  The others
are far down the wish-list.  It's also localized enough that I think
a newbie could handle it.
regards, tom lane

--
Matthias Schmidt
Viehtriftstr. 49
67346 Speyer
GERMANY
Tel.: +49 6232 4867
Fax.: +49 6232 640089


[HACKERS] livejournal outage post mortem

2005-01-20 Thread Michael Adler
(only tangentially on topic)

Interesting tale about the problems of MyISAM tables, disk write-caching,
and sharing space with people who can't resist pushing the big red
button.

http://www.livejournal.com/community/lj_dev/670215.html

I wonder what livejournal would look like if they used pg instead of
mysql. They had to figure out some ways of working around the
pessimistic locking of the non-MVCC approach, but they've also made
use of replication features that might not have been available
in pg until recently.

http://www.danga.com/words/2004_mysqlcon/mysql-slides.pdf

 -Mike Adler



Re: [HACKERS] [GENERAL] Ways to check the status of a long-running transaction

2005-01-20 Thread Jim C. Nasby
Moving to hackers...

On Wed, Jan 19, 2005 at 11:57:12PM -0500, Greg Stark wrote:
 Jim C. Nasby [EMAIL PROTECTED] writes:
 
  I recall this being discussed before, but I couldn't manage to find it
  in the archives.
  
  Is there any way to see how many rows a running transaction has written?
  vacuum analyze verbose only reports visible rows.
 
 Not AFAIK. In the past I've done ls -l and then divided by the average row
 size. But that required some guesswork and depended on the fact that I was
 building the table from scratch. 

Unfortunately in this case I'm not. And I wish I had some way to see
what was going on, because I let this process run for 2 days, then
canceled and restarted it and it ran in 5 minutes. It was consuming CPU
the whole time, too; I wish I knew what the hell it was doing.

 I think there's a tool to dump the raw table data which might be handy if you
 know the table didn't have a lot of dead tuples in it.
 
 It would be *really* handy to have a working dirty read isolation level that
 allowed other sessions to read uncommitted data.

I can see arguments against this. I'd be happy just having a means to
see how many new (not-yet-visible) tuples there were. Or better yet, how
many tuples had been modified by a specific transaction (since it could
both be inserting and deleting).

Can one or the other of these options be added as a TODO?
-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: Where do you want to go today?
Linux: Where do you want to go tomorrow?
FreeBSD: Are you guys coming, or what?



Re: [HACKERS] ARC patent

2005-01-20 Thread Simon Riggs
On Thu, 2005-01-20 at 23:17 +1100, Neil Conway wrote:
 Simon Riggs wrote:
 However, I think the ARC replacement should *not* be a fundamental
 change in behavior: the algorithm should still attempt to balance
 recency and frequency, to adjust dynamically to changes in workload, to
 avoid sequential flooding, and to allow constant-time page
 replacement.
  
  Agreed: Those are the requirements. It must also scale better as well.
 
 For 8.1 we should definitely Do The 
 Right Thing and develop a complete ARC replacement. 

Agreed. That would be my focus.

 For 8.0.x, I wonder 
 if it would be better to just replace ARC with LRU. The primary 
 advantage to doing this is LRU's simplicity -- if we're concerned about 
 introducing regressions in stability into 8.0, this is likely the best 
 way to reduce the chance of that happening. Furthermore, LRU's behavior 
 with PostgreSQL is well-known and has been extensively tested. A complex 
 ARC replacement would receive even less testing than ARC itself has 
 received -- which isn't a whole lot, in comparison with LRU.
 
 Of course, the downside is that we lose the benefits of ARC or an 
 ARC-like algorithm in 8.0. That would be unfortunate, but I don't think 
 it is a catastrophe. The other bufmgr-related changes (vacuum hints, 
 bgwriter and vacuum delay) should ensure that VACUUM still has a much 
 reduced impact on system performance. Sequential scans will still flood 
 the cache, but I don't view that as an enormous problem. In other words, 
 I think a more intelligent replacement policy would be nice to have, but 
 at this point in the 8.0 development cycle we should go with the 
 simplest solution that we know is likely to work -- namely, LRU.

Agree with everything apart from the idea that seq scan flooding isn't
an issue. I definitely think it is.

-- 
Best Regards, Simon Riggs




Re: [HACKERS] ARC patent

2005-01-20 Thread Mark Kirkwood
Simon Riggs wrote:
On Thu, 2005-01-20 at 23:17 +1100, Neil Conway wrote:
(snippage)
For 8.0.x, I wonder 
if it would be better to just replace ARC with LRU.

Sequential scans will still flood 
the cache, but I don't view that as an enormous problem. 
Agree with everything apart from the idea that seq scan flooding isn't
an issue. I definitely think it is.
Is it feasible to consider LRU + a free-behind or seqscan hint type of 
replacement policy?

regards
Mark


Re: [HACKERS] ARC patent

2005-01-20 Thread Neil Conway
On Fri, 2005-01-21 at 01:26 +, Simon Riggs wrote:
 Agree with everything apart from the idea that seq scan flooding isn't
 an issue. I definitely think it is.

I agree it's an issue, I just don't think it's an issue of sufficient
importance that it needs to be solved in the 8.0.x timeframe.

In any case, I'll take a look at developing a patch to replace ARC with
LRU. If it's possible to solve sequential flooding (e.g. via some kind
of hint-based approach) without too much complexity, we could add that
to the patch down the line.
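[The hint-based approach could work roughly like this: pages fetched by a
sequential scan are inserted at the eviction end of the LRU list instead of
being promoted, so a large scan recycles a single buffer slot rather than
flooding the cache. A hedged Python sketch of the idea, not an actual patch:]

```python
from collections import OrderedDict

class HintedLRU:
    """Toy LRU with a 'seqscan' hint: pages read by a sequential scan are
    placed at the eviction end so they cannot flood the cache.
    A sketch of the hint-based idea, not PostgreSQL's implementation."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffers = OrderedDict()   # oldest (next victim) first

    def access(self, page, seqscan=False):
        hit = page in self.buffers
        if hit:
            if not seqscan:
                self.buffers.move_to_end(page)   # normal access: promote
        else:
            if len(self.buffers) >= self.capacity:
                self.buffers.popitem(last=False)
            self.buffers[page] = "data"
            if seqscan:
                # seqscan pages go straight to the front of the
                # eviction queue instead of displacing hot pages
                self.buffers.move_to_end(page, last=False)
        return hit
```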

-Neil



---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [HACKERS] ARC patent

2005-01-20 Thread Dann Corbit
How about LRU + learning -- something like the optimizer?

It might be nice also to be able to pin things in memory.

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Mark Kirkwood
Sent: Thursday, January 20, 2005 6:55 PM
To: Simon Riggs
Cc: Neil Conway; Tom Lane; Joshua D. Drake; Jeff Davis; pgsql-hackers
Subject: Re: [HACKERS] ARC patent

Simon Riggs wrote:
 On Thu, 2005-01-20 at 23:17 +1100, Neil Conway wrote:
 (snippage)
For 8.0.x, I wonder 
if it would be better to just replace ARC with LRU.

 Sequential scans will still flood 
the cache, but I don't view that as an enormous problem. 
 
 Agree with everything apart from the idea that seq scan flooding isn't
 an issue. I definitely think it is.
 
Is it feasible to consider LRU + a free-behind or seqscan hint type of 
replacement policy?

regards

Mark


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: Translations at pgfoundry (was Re: [HACKERS] [PATCHES] Latest Turkish translation updates)

2005-01-20 Thread Euler Taveira de Oliveira
Hi Peter,

 Peter Eisentraut wrote:
   Maybe we should have a pgfoundry project where all translations
   were kept, and from which the main CVS could be updated
   semi-automatically. Then we wouldn't have Peter checking out and
   committing all the time.
 
  That sounds like a fine idea.  My only concern would be the
  not-maintained-here syndrome, which occurs every time some CVS
tree
  contains a file that is actually maintained by an external group,
  thus blocking the maintainers of the former CVS tree from applying
  necessary fixes at times.  Nevertheless, I think this is a winner. 
  Let's consider it when we start the 8.1 cycle.
 
 OK, is anyone opposed to this idea?  I would register a pgfoundry 
 project (name suggestions? translations?), give most established 
 translators commit access, and move the statistics pages there. 
Also, 
 some translation groups seem to have their own mailing lists or web 
 pages, which could optionally also be hosted there.
 
Great idea. Name? maybe 'pgtranslation'.

 We could then sync the translations either regularly (e.g., once a
week) 
 or only at release time.  Of course we would need to mirror all the 
 branches there.
 
Maybe the sync could be done every time someone changes the version of a
.po file, i.e., commits it (a script can handle this). That would reduce
the number of unnecessary commits.



=
Euler Taveira de Oliveira
euler[at]yahoo_com_br

__
Converse com seus amigos em tempo real com o Yahoo! Messenger 
http://br.download.yahoo.com/messenger/ 

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] Two-phase commit for 8.1

2005-01-20 Thread Kenneth Marshall
On Wed, Jan 19, 2005 at 07:42:03PM -0500, Tom Lane wrote:
 Marc G. Fournier [EMAIL PROTECTED] writes:
  If the patch is ready to be committed early in the cycle, I'd say most 
  definitely ... just depends on how late in the cycle its ready ...
 
 My recollection is that it's quite far from being complete.  I had hoped
 to spend some time during the 8.1 cycle helping Heikki finish it up,
 but if we stick to the 2-month-dev-cycle idea I'm afraid there's no way
 it'll be done in time.  I thought that some time would probably amount
 to a solid man-month or so, and there's no way I can spend half my time
 on just one feature for this cycle.
 
 If Heikki wants this in for 8.1, the right thing to do is vote against
 the short-dev-cycle idea.  But we need a plausible answer about what to
 do about ARC to make that credible...
 

I think the idea of defining a buffer-management algorithm API, with a simple
LRU implementation prepared and ready to plug in if the ARC patent is
granted, would be doable in a short development cycle. It would also let us
test, and take advantage of, other algorithms more easily.
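[In outline, the pluggable-policy API Ken suggests might look like this --
hypothetical interface names, sketched in Python for brevity; a real
implementation would be C hooks inside the buffer manager:]

```python
class ReplacementPolicy:
    """Hypothetical buffer-replacement API: the buffer manager calls
    these hooks and stays ignorant of the policy's internals."""
    def on_access(self, page): ...
    def choose_victim(self): ...

class SimpleLRU(ReplacementPolicy):
    """The simple LRU implementation kept ready as a drop-in."""
    def __init__(self):
        self.order = []              # least- to most-recently used
    def on_access(self, page):
        if page in self.order:
            self.order.remove(page)
        self.order.append(page)
    def choose_victim(self):
        return self.order.pop(0)     # evict least recently used

class BufferManager:
    """Knows only the ReplacementPolicy interface, so ARC, LRU, 2Q, etc.
    could be swapped without touching the manager itself."""
    def __init__(self, capacity, policy):
        self.capacity, self.policy = capacity, policy
        self.pages = set()
    def read(self, page):
        if page not in self.pages:
            if len(self.pages) >= self.capacity:
                self.pages.discard(self.policy.choose_victim())
            self.pages.add(page)
        self.policy.on_access(page)
```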

Ken

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] US Patents vs Non-US software ...

2005-01-20 Thread Reinoud van Leeuwen
On Tue, Jan 18, 2005 at 11:38:45AM -0800, J. Andrew Rogers wrote:
 On Tue, 18 Jan 2005 09:22:58 +0200
 Many countries do not grant software patents so it is not 
 likely
 that IBM applied through PCT since a refusal in one 
 country may
 cause the patent to be refused in all countries.
 
 
 Contrary to popular misconception, virtually all countries 
 grant software patents.  The problem is that people have 

Thanks to new European Union member Poland, the Dutch plan to put software 
patents on the agenda three days before Christmas was withdrawn. So there are 
no software patents in Europe for now (and the opposition to them seems 
to be growing!)

-- 
__
Nothing is as subjective as reality
Reinoud van Leeuwen[EMAIL PROTECTED]
http://www.xs4all.nl/~reinoud
__

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] Much Ado About COUNT(*)

2005-01-20 Thread Christopher Browne
Martha Stewart called it a Good Thing when [EMAIL PROTECTED] (Jeff Davis) wrote:
 I almost think to not supply an MVCC system would break the I in ACID,
 would it not? I can't think of any other obvious way to isolate the
 transactions, but on the other hand, wouldn't DB2 want to be ACID
 compliant?

Wrong, wrong, wrong...

MVCC allows an ACID implementation to not need to do a lot of resource
locking.

In the absence of MVCC, you have way more locks outstanding, which
makes it easier for there to be conflicts between lock requests.

In effect, with MVCC, you can do more things concurrently without the
system crumbling due to a surfeit of deadlocks.
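[A toy model makes the point concrete: with versioned rows, a reader holding
an older snapshot keeps seeing a consistent value while a writer installs a
new version, with no lock held between them. Illustration only -- real MVCC
also tracks deleting transactions, aborts, and full visibility rules:]

```python
class MVCCTable:
    """Toy MVCC: each write appends a new row version tagged with the
    writing transaction id; a reader sees the newest version that was
    committed before its snapshot, so readers never block writers."""

    def __init__(self):
        self.versions = []           # list of (txid, value), oldest first
        self.next_txid = 1

    def write(self, value):
        txid = self.next_txid
        self.next_txid += 1
        self.versions.append((txid, value))
        return txid

    def snapshot(self):
        return self.next_txid        # txids below this are visible

    def read(self, snap):
        visible = [v for txid, v in self.versions if txid < snap]
        return visible[-1] if visible else None
```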
-- 
cbbrowne,@,gmail.com
http://www3.sympatico.ca/cbbrowne/multiplexor.html
Why isn't phonetic spelled the way it sounds?

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[HACKERS] It's OSCON Submission time again!

2005-01-20 Thread Josh Berkus
PostgreSQL Hackers  Community Members

OSCON has issued the official Call For Papers for OSCON 2005.   We're looking 
to get a good strong presentation team together, as we did last year.   OSCON 
last year was terrific, see:
http://www.varlena.com/varlena/Images/oscon2004/index.php

A team of your fellow community members is dedicated to sifting through the 
proposals.  So PLEASE CONTACT US BEFORE SUBMITTING PROPOSALS to O'Reilly.  
Last year, several people were rejected because their proposals were 
unnecessarily duplicative, and one because their proposal was accidentally 
put in the wrong track.  So give us a chance to give you some feedback 
first!  You 
can reach me at [EMAIL PROTECTED]   

People proposing a tutorial should also propose a talk, as we only get 
2 or 3 tutorials.  Please send a note to me with your proposal if you will 
need help with airfare or accommodations for the conference; limited funds 
may be available.

Deadline is February 13, so please don't procrastinate!

Official notice follows:
==
The Call for Proposals has just opened for the
7th Annual O'Reilly Open Source Convention
http://conferences.oreillynet.com/os2005/

OSCON is headed back to friendly, economical Portland, Oregon during the
week of August 1-5, 2005. If you've ever wanted to join the OSCON speaker
firmament, now's your chance to submit a proposal (or two) by February 13,
2005.

Complete details are available on the OSCON web site, but we're
particularly interested in exploring how software development is moving to
another level, and how developers and businesses are adjusting to new
business models and architectures. We're looking for sessions, tutorials,
and workshops proposals that appeal to developers, systems and network
administrators, and their managers in the following areas:

- All aspects of building applications, services, and systems that use the
new capabilities of the open source platform
- Burning issues for Java, Mozilla, web apps, and beyond
- The commoditization of software: who and/or what can show us the money?
- Network-enabled collaboration
- Software customizability, including software as a service
- Law, licensing, politics, and how best to navigate other troubled
waters

Specific topics and tracks at OSCON 2005 include: Linux and other open
source operating systems, Java, PHP, Python, Perl, Databases (including
MySQL and PostgreSQL), Apache, XML, Applications, Ruby, and Security.

Attendees have a wide range of experience, so be sure to target a
particular level of experience: beginner, intermediate, advanced. Talks
and tutorials should be technical; strictly no marketing presentations.
Session presentations are 45 or 90 minutes long, and tutorials are either
a half-day (3 hours) or a full day (6 hours).

Feel free to spread the word about the Call for Proposals to your friends,
family, colleagues, and compatriots. We want everyone to submit, from
American women hacking artificial life into the Linux kernel to Belgian
men building a better mousetrap from PHP and recycled military hardware.
We mean everyone!

Even if you don't want to participate as a speaker, send us your
suggestions--topics you'd like to see covered, groups we should bring into
the OSCON fold, extra-curricular activities we should organize--to
[EMAIL PROTECTED] .

This year, we're moving to the wide open spaces of the Oregon Convention
Center. We've arranged for the nearby Doubletree Hotel to be our
headquarters hotel--it's a short, free Max light rail ride (or a lovely
walk) from the Convention Center.

Registration opens in April 2005; hotel information will be available
shortly.

Deadline to submit a proposal is Midnight (PST), February 13.

For all the conference details, go to:
http://conferences.oreillynet.com/os2005/

Press coverage, blogs, photos, and news from the 2004 O'Reilly Open Source
Convention can be found at: http://www.oreillynet.com/oscon2004/

Would your company like to make a big impression on the open source
community? If so, consider exhibiting or becoming a sponsor. Contact
Andrew Calvo at (707) 827-7176, or [EMAIL PROTECTED] for more info.

See you in Portland next summer,

The O'Reilly OSCON Team

-- 
Josh Berkus
PostgreSQL Core Team
[EMAIL PROTECTED]

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] It's OSCON Submission time again!

2005-01-20 Thread Oleg Bartunov
Hi there,
I'm willing to help anybody prepare tutorials or papers about
full-text search in PostgreSQL and our other contrib modules.
Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq