Re: [HACKERS] Piggybacking vacuum I/O

2007-01-23 Thread Pavan Deolasee

On 1/24/07, Tom Lane <[EMAIL PROTECTED]> wrote:


"Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> On a typical desktop class 2 CPU Dell machine, we have seen pgbench
> clocking more than 1500 tps.

Only if you had fsync off, or equivalently a disk drive that lies about
write-complete.  You could possibly achieve such rates in a non-broken
configuration with a battery-backed write cache, but that's not "typical
desktop" kit.



Maybe I was too vague about the machine/test. It's probably not a
"typical desktop" machine since it has better storage: a two-disk
RAID 0 configuration for data, and a dedicated disk for xlog. I remember
running with 50 clients, a scaling factor of 50, 1 GB of shared buffers,
autovacuum turned on with default parameters, and the rest at the default
configuration. I don't think I had explicitly turned fsync off.



In any case, you ignored Heikki's point that the PG shared memory pages
holding CLOG are unlikely to be the sole level of caching, if the update
rate is that high.  The kernel will have some pages too.  And even if we
thought not, wouldn't bumping the size of the clog cache be a far
simpler solution offering benefit for more things than just this?



Yes, maybe what Heikki said is true, but we don't know for sure.
Wouldn't bumping the cache size just delay the problem a bit,
especially with an even larger table and a very high-end machine/storage
that can clock very high transactions per minute?

Anyway, if we agree that there is a problem, the solution could be
as simple as increasing the cache size, as you suggested.

Thanks,
Pavan

--

EnterpriseDB http://www.enterprisedb.com


Re: [HACKERS] Free space management within heap page

2007-01-23 Thread Pavan Deolasee

On 1/23/07, Tom Lane <[EMAIL PROTECTED]> wrote:


"Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> I know it might break the ctid chain, but does that really matter ?

Yes.  You can't just decide that the tuple isn't needed anymore.
As per other followup, you could possibly shrink a known-dead tuple to
just the header.



My apologies if this has been discussed before. I went through the earlier
discussions, but it's still very fuzzy to me. I am not able to construct a
case where a tuple is DEAD (not RECENTLY_DEAD) and yet some transaction
still needs to follow the ctid pointer chain from its parent. Can
somebody help me construct this scenario?


The notion of keeping linked lists etc seems like gross overdesign to me.

Why not just compact out the free space?



That would require us to acquire a vacuum-strength lock on the page. For a
very large table where the probability of two backends looking at the same
page is very low, we might still be able to do that in most cases. But
compacting a page would cause lots of data movement, which might be
CPU-intensive. Just a thought, though.

Thanks,
Pavan

--

EnterpriseDB http://www.enterprisedb.com


Re: [HACKERS] Piggybacking vacuum I/O

2007-01-23 Thread Tom Lane
"Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> On a typical desktop class 2 CPU Dell machine, we have seen pgbench
> clocking more than 1500 tps.

Only if you had fsync off, or equivalently a disk drive that lies about
write-complete.  You could possibly achieve such rates in a non-broken
configuration with a battery-backed write cache, but that's not "typical
desktop" kit.

In any case, you ignored Heikki's point that the PG shared memory pages
holding CLOG are unlikely to be the sole level of caching, if the update
rate is that high.  The kernel will have some pages too.  And even if we
thought not, wouldn't bumping the size of the clog cache be a far
simpler solution offering benefit for more things than just this?

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] About PostgreSQL certification

2007-01-23 Thread Jim Nasby

On Jan 23, 2007, at 5:50 PM, Joshua D. Drake wrote:

To cure the shortage of experienced Postgres folks there is only one
solution - err, more experience! So the need is for good training
courses (not necessarily certification and all the IMHO nonsense that
comes with that), and a willingness on the part of employers to  
invest

in upskilling their staff.


You know its funny. Command Prompt, OTG-Inc, SRA and Big Nerd Ranch
*all* offer training.

Last I checked, OTG had to cancel classes because of lack of demand
(please verify Chander).


That may be OTG's problem then... I've personally taught a number of  
classes since I started with EnterpriseDB (PostgreSQL classes, not  
EnterpriseDB ones). Granted, this training is for existing customers,  
but I believe it speaks to the demand that's out there.


And while certification might not mean much to people knowledgeable  
enough to tell if someone has a clue, I suspect that as PostgreSQL  
grows in popularity more people will look at training (especially  
people that don't have "PostgreSQL" stamped all over their resume).


On the other hand, any time I find someone interested in pushing  
their career towards PostgreSQL I always tell them the same thing:  
get on the mailing list and start helping folks. Perhaps that's  
ultimately all the certification we'll ever need.

--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [HACKERS] Default permissisons from schemas

2007-01-23 Thread Jim Nasby

On Jan 23, 2007, at 12:07 PM, Stephen Frost wrote:
Whoa.  You are going to allow people to create objects owned by someone
else?  I don't think so ... most Unix systems have forbidden object
give-away for years, for very good reasons.


Hmm.  While I agree with the sentiment, Unix does provide for setgid
such that objects inherit a specific group on creation.  Using roles we
don't get that distinction so I don't think comparing it to Unix is a
slam-dunk.  There do need to be limitations here though, certainly.  A
couple options, in order of my preference:


Is there a use-case for per-schema default ownership? I can't really  
think of one...

--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [HACKERS] [GENERAL] Autovacuum Improvements

2007-01-23 Thread Jim Nasby

On Jan 22, 2007, at 6:53 PM, Kenneth Marshall wrote:

The default should
be approximately the OS standard read-ahead amount.


Is there anything resembling a standard across the OSes we support?  
Better yet, is there a standard call that allows you to find out what  
the read-ahead setting is?

--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [HACKERS] Piggybacking vacuum I/O

2007-01-23 Thread Pavan Deolasee

On 1/23/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:


Pavan Deolasee wrote:
> Another source of I/O is perhaps the CLOG read/writes for checking
> transaction status. If we are talking about large tables like accounts in
> pgbench or customer/stock in DBT2, the tables are vacuumed much later than
> the actual UPDATEs. I don't have any numbers to prove yet, but my sense is
> that CLOG pages holding the status of many of the transactions might have
> been already flushed out of the cache and require an I/O. Since the default
> CLOG SLRU buffers is set to 8, there could be severe CLOG SLRU thrashing
> during VACUUM as the transaction ids will be all random in a heap page.

8 log pages hold 8*8192*4=262144 transactions. If the active set of
transactions is larger than that, the OS cache will probably hold more
clog pages. I guess you could end up doing some I/O on clog on a vacuum
of a big table, if you have a high transaction rate and vacuum
infrequently...



On a typical desktop class 2 CPU Dell machine, we have seen pgbench
clocking more than 1500 tps. That implies CLOG would get filled up in less
than 262144/1500=174 seconds. VACUUM on accounts table takes much
longer to trigger.


> Would it help to set the status of the XMIN/XMAX of tuples early enough
> such that the heap page is still in the buffer cache, but late enough such
> that the XMIN/XMAX transactions are finished?


Yeah, we could do that. First I'd like to see some more evidence that
clog thrashing is a problem, though.



Here are some numbers from a 4 hour DBT2 run with 270 warehouses and 50
connections.

2007-01-23 07:40:30 PST_17428 LOG:  vacuuming "public.warehouse"
2007-01-23 07:40:30 PST_17428 LOG:  CLOG r(1), w(0)- vacuum start
2007-01-23 07:40:30 PST_17428 LOG:  CLOG r(1), w(0)- vacuum end
2007-01-23 07:40:30 PST_17428 LOG:  "warehouse": found 1214 removable,
2275 nonremovable row versions in 111 pages



2007-01-23 11:11:43 PST_30356 LOG:  vacuuming "public.stock"
2007-01-23 11:11:43 PST_30356 LOG:  CLOG r(545323), w(91)- vacuum start
2007-01-23 12:03:14 PST_30356 LOG:  CLOG r(1181851), w(133) - vacuum end
2007-01-23 12:03:14 PST_30356 LOG:  "stock": found 5645264 removable,
27003788 nonremovable row versions in 1554697 pages

I have just counted the number of read/write calls on the CLOG blocks. As
you can see, the total number of CLOG reads jumped from 545323 to 1181851,
i.e. 1181851 - 545323 = 636528 CLOG block reads for the 1554697 pages of the
stock table. This figure is only indicative, since it also includes the CLOG
block reads which would have happened as part of other backend operations
(VACUUM took almost 55 minutes to complete). Still, in the first 210 minutes
of the run the total reads were only 545323, so most of the 636528 reads in
the next 55 minutes can be attributed to VACUUM.

The writes are very small though, maybe because most of the CLOG pages are
accessed read-only. A simple patch that I used to get these numbers is
attached.

Thanks,
Pavan

EnterpriseDB http://www.enterprisedb.com


CLOG-stats.patch.gz
Description: GNU Zip compressed data



Re: [HACKERS] Updateable cursors

2007-01-23 Thread John Bartlett
Hi Simon,

Thanks for your comments.

The reason for those 5 options is to consider different means to cover the
Prepared Stmt requirement, where the different stages of processing are
actually in different transactions.

Regards,
John Bartlett
Software Development Engineer
Fujitsu Australia Software Technology
14 Rodborough Road, Frenchs Forest NSW 2086
Tel: +61 2 9452 9161
Fax: +61 2 9975 2899
Email: [EMAIL PROTECTED]
Web site: www.fastware.com



-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Simon Riggs
Sent: Tuesday, 23 January 2007 11:12 PM
To: FAST PostgreSQL
Cc: PostgreSQL-development
Subject: Re: [HACKERS] Updateable cursors

On Wed, 2007-01-24 at 02:42 +1100, FAST PostgreSQL wrote:

> In the UPDATE or DELETE statements the 'WHERE CURRENT OF ' 
> clause results in the cursor name being placed in the UpdateStmt or 
> DeleteStmt structure. During the processing of the functions - 
> transformDeleteStmt() and transformUpdateStmt() - the cursor name is used to 
> obtain a pointer to the related Portal structure 

To support prepared statements we'd need to do this name lookup just
once, so that the Update/Delete stmt can record which Portal to look at
for the current tuple.

> and the tuple affected by 
> the current UPDATE or DELETE statement is extracted from the Portal, where it 
> has been placed as the result of a previous FETCH request. At this point all 
> the information for the UPDATE or DELETE statement is available so the 
> statements can be transformed into standard UPDATE or DELETE statements and 
> sent for re-write/planning/execution as usual.


> 2.5 Changes to the Executor
> ---
> There are various options that have been considered for this part of the 
> enhancement. These are described in the sections below.


> Option 1  MVCC Via Continuous Searching of Database
> 
> The Executor is to be changed in the following ways:
> 1)  When the FETCH statement is executed the id of the resulting tuple is 
> extracted and passed back to the Portal structure to be saved to indicate the 
> cursor is currently positioned on a tuple.
> 2)  When the UPDATE or DELETE request is executed the tuple id previously 
> FETCHed is held in the QueryDesc structure so that it can be compared with 
> the tuple ids returned from the TidScan node processed prior to the actual 
> UPDATE / DELETE node in the plan. This enables a decision to be made as to 
> whether the tuple held in the cursor is visible to the UPDATE / DELETE 
> request according to the rules of concurrency. The result is that, at the 
> cost of repeatedly searching the database at each UPDATE / DELETE command, 
> the hash table is no longer required.
> This approach has the advantage that there is no hash table held in memory or 
> on disk, so it will not be memory intensive but will be processing intensive. 

Do you have a specific example that would cause problems? It's much
easier to give examples that might cause problems and discuss those.

AFAICS in the straightforward case the Fetch will only return rows it
can see, so update/delete should have no problems, iff the update/delete
is using the same or a later snapshot as the cursor. I can see potential
problems with scrollable cursors.

So I'm not sure why there's a big need for any of the 5 options, yet.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com




This is an email from Fujitsu Australia Software Technology Pty Ltd, ABN 27 003 
693 481. It is confidential to the ordinary user of the email address to which 
it was addressed and may contain copyright and/or legally privileged 
information. No one else may read, print, store, copy or forward all or any of 
it or its attachments. If you receive this email in error, please return to 
sender. Thank you.

If you do not wish to receive commercial email messages from Fujitsu Australia 
Software Technology Pty Ltd, please email [EMAIL PROTECTED]




Re: [HACKERS] tripping an assert in 8.1.6 (more info)

2007-01-23 Thread Tom Lane
I wrote:
> After further thought I've developed a modified version of Brian's case
> that crashes 8.2 and HEAD but not 8.1 --- it turns the situation around
> by having the view fall back to typmod -1.  So what I'm now thinking
> is that the real problem is that an Append path generates its Vars by
> copying the first input, and in these sorts of situations that might not
> produce the correct typmod.  Back to the drawing board ...

Nope, that's not right either.  What it seems to boil down to is what I
said the first time: removal of trivial subqueryscans can leave us with
a plan that uses an above-a-union Var to describe a value from one of
the union inputs.  Some other 8.2 changes moved the subqueryscans
around, which is why the cases that manifest are a bit different, but
in the end it's the same issue.

I've concluded that this isn't really wrong; it's certainly not worth
disabling the trivial-subqueryscan optimization for, and I think getting
the planner to avoid the problem without that would be tricky.
Accordingly, I've applied the attached patch that just relaxes the
Assert.

regards, tom lane


Index: execScan.c
===
RCS file: /cvsroot/pgsql/src/backend/executor/execScan.c,v
retrieving revision 1.39
diff -c -r1.39 execScan.c
*** execScan.c  5 Jan 2007 22:19:27 -   1.39
--- execScan.c  24 Jan 2007 01:03:03 -
***************
*** 215,222 ****
  			return false;		/* out of order */
  		if (att_tup->attisdropped)
  			return false;		/* table contains dropped columns */
  		Assert(var->vartype == att_tup->atttypid);
! 		Assert(var->vartypmod == att_tup->atttypmod);
  
  		tlist_item = lnext(tlist_item);
  	}
--- 215,232 ----
  			return false;		/* out of order */
  		if (att_tup->attisdropped)
  			return false;		/* table contains dropped columns */
+ 		/*
+ 		 * Note: usually the Var's type should match the tupdesc exactly,
+ 		 * but in situations involving unions of columns that have different
+ 		 * typmods, the Var may have come from above the union and hence have
+ 		 * typmod -1.  This is a legitimate situation since the Var still
+ 		 * describes the column, just not as exactly as the tupdesc does.
+ 		 * We could change the planner to prevent it, but it'd then insert
+ 		 * projection steps just to convert from specific typmod to typmod -1,
+ 		 * which is pretty silly.
+ 		 */
  		Assert(var->vartype == att_tup->atttypid);
! 		Assert(var->vartypmod == att_tup->atttypmod || var->vartypmod == -1);
  
  		tlist_item = lnext(tlist_item);
  	}



Re: [HACKERS] Updateable cursors

2007-01-23 Thread John Bartlett
Hi Richard,

Thanks for your comments.

I can see where you are coming from, but I am not sure if a new log entry
would be such a good idea. The result of creating such a low-level log could
be to increase the amount of logging by a rather large amount.

However, the system catalogue will contain an entry that enables a cursor to
be identified as updatable.

Regards,
John Bartlett
Software Development Engineer
Fujitsu Australia Software Technology
14 Rodborough Road, Frenchs Forest NSW 2086
Tel: +61 2 9452 9161
Fax: +61 2 9975 2899
Email: [EMAIL PROTECTED]
Web site: www.fastware.com



-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Richard Troy
Sent: Wednesday, 24 January 2007 4:37 AM
To: FAST PostgreSQL
Cc: PostgreSQL-development
Subject: Re: [HACKERS] Updateable cursors


On Wed, 24 Jan 2007, FAST PostgreSQL wrote:
>
> We are trying to develop the updateable cursors functionality into
> Postgresql. I have given below details of the design and also issues we
are
> facing.  Looking forward to the advice on how to proceed with these
issues.
>
> Rgds,
> Arul Shaji
>

Hi Arul,

...I can see people are picking apart the implementation details so you're
getting good feedback on your ambitious proposal. Looks like you've put a
lot of thought/work into it.

I've never been a fan of cursors because they encourage bad behavior;
"Think time" in a transaction sometimes becomes "lunch time" for users and
in any event long lock duration is something to be avoided for the sake of
concurrency and sometimes performance (vacuum, etc). My philosophy is "get
in and get out quick."

Ten years ago in May, our first customer insisted we implement what has
become our primary API library in Java, and somewhat later I was shocked to
learn that, for whatever reason, Java ResultSets are supposed to be
implemented as _updateable_cursors._ This created serious security issues
for handing off results to other programs through the library - ones that
don't even have the ability to connect to the target database. Confirmed
by the behavior of Informix, we went through some hoops to remove the need
to pass ResultSets around. (If I had only known Postgres didn't implement
the RS as an updateable cursor, I'd have pushed for our primary platform
to be Postgres!)

What impresses me is that Postgres has survived so well without updateable
cursors. To my mind it illustrates that they aren't widely used. I'm
wondering what troubles lurk ahead once they're available. As a
DBA/SysAdmin, I'd be quite happy if there existed some kind of log
element that indicated updateable cursors were in use, one that I could
search for easily whenever trying to diagnose some performance or
deadlocking problem, etc. - say, log file entries that indicated the
opening and later closing of such a cursor, with an id of some kind that
allowed matching up open/close pairs. I also think that the documentation
should be updated to not only indicate usage of this new feature, but
provide cautionary warnings about the potential locking issues and, for
the authors of libraries, Java in particular, the possible security issues.

Regards,
Richard

-- 
Richard Troy, Chief Scientist
Science Tools Corporation
510-924-1363 or 202-747-1263
[EMAIL PROTECTED], http://ScienceTools.com/







Re: [HACKERS] About PostgreSQL certification

2007-01-23 Thread Joshua D. Drake
Mark Kirkwood wrote:
> Joshua D. Drake wrote:
>>> To cure the shortage of experienced Postgres folks there is only one
>>> solution - err, more experience! So the need is for good training
>>> courses (not necessarily certification and all the IMHO nonsense that
>>> comes with that), and a willingness on the part of employers to invest
>>> in upskilling their staff.
>>
>> You know, it's funny. Command Prompt, OTG-Inc, SRA and Big Nerd Ranch
>> *all* offer training.
>>
>> Last I checked, OTG had to cancel classes because of lack of demand
>> (please verify Chander).
>>
> 
> Well that is interesting - so maybe there is no need for certification
> yet? Or do you think employers are wanting folks that someone *else* has
> trained (or certified)?

I believe that there is a market for training, certainly (just watch
CMD's website in the next couple of weeks). However, I believe that the
market is specific and niche.

As far as certification goes - and I guarantee you Theo's experience is
different - I believe certification is dying. Certification used to make
sense; computing was relatively new tech. Keep in mind that the common
user base for computing is only 12-15 years old.

10 years ago you literally didn't know if the guy you were hiring knew
his stuff or was lying through his teeth. You were a manager who grew up
with green ledger on wide-print dot matrix. That little paper said,
"This guy has at least read the book".

Today? It's different in most markets. Theo and I for the most part don't
share a market, which is why I think his experience is different. My
market is, say, the 90% market; that is to say, I focus on a more general
service of PostgreSQL.

The people I deal with are FOSS people, or people looking pointedly at
moving to FOSS. They don't give a winkle-dinkle about certification.
Most of my customers are business-tech savvy, meaning they know Outlook,
they know Word, they understand the web and the internet.

They are not programmers, but they know the difference between:

We are going to create a synergetic alliance of vertical technologies to
integrate your diverse infrastructure.

and

We are going to install Samba so windows, linux and apple can all talk
to a single file share. It will take a day.

Then again, I haven't had to write a resume in 10 years, what the hell
do I know.

Sincerely,

Joshua D. Drake



-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/




Re: [HACKERS] About PostgreSQL certification

2007-01-23 Thread Mark Kirkwood

Joshua D. Drake wrote:

To cure the shortage of experienced Postgres folks there is only one
solution - err, more experience! So the need is for good training
courses (not necessarily certification and all the IMHO nonsense that
comes with that), and a willingness on the part of employers to invest
in upskilling their staff.


You know, it's funny. Command Prompt, OTG-Inc, SRA and Big Nerd Ranch
*all* offer training.

Last I checked, OTG had to cancel classes because of lack of demand
(please verify Chander).



Well that is interesting - so maybe there is no need for certification 
yet? Or do you think employers are wanting folks that someone *else* has 
trained (or certified)?


Cheers

Mark




Re: [HACKERS] tripping an assert in 8.1.6 (more info)

2007-01-23 Thread Tom Lane
I wrote:
> I'm tempted to suggest that we just remove the Assert on vartypmod in
> the 8.1 branch.  The Assert on vartype is doing as much as is really
> important to check, and I don't want to disable the trivial_subqueryscan
> optimization, which seems the only other low-risk fix.

After further thought I've developed a modified version of Brian's case
that crashes 8.2 and HEAD but not 8.1 --- it turns the situation around
by having the view fall back to typmod -1.  So what I'm now thinking
is that the real problem is that an Append path generates its Vars by
copying the first input, and in these sorts of situations that might not
produce the correct typmod.  Back to the drawing board ...

regards, tom lane


CREATE TABLE foo
(
x_data varchar(32),
row_date date
);

CREATE TABLE bar
(
x_data varchar(36),
row_date date
);

CREATE OR REPLACE VIEW bazz AS
SELECT
('bar: ' || bar.row_date) :: TEXT AS action,
bar.x_data AS more_data,
row_date
FROM
bar
UNION ALL
SELECT
('foo: ' || foo.row_date) :: TEXT AS action,
foo.x_data AS more_data,
row_date
FROM
foo
;

SELECT action, more_data FROM bazz;



Re: [HACKERS] About PostgreSQL certification

2007-01-23 Thread Joshua D. Drake

> To cure the shortage of experienced Postgres folks there is only one
> solution - err, more experience! So the need is for good training
> courses (not necessarily certification and all the IMHO nonsense that
> comes with that), and a willingness on the part of employers to invest
> in upskilling their staff.

You know, it's funny. Command Prompt, OTG-Inc, SRA and Big Nerd Ranch
*all* offer training.

Last I checked, OTG had to cancel classes because of lack of demand
(please verify Chander).

Command Prompt currently only trains corps with 12+ people per class, so
we are a bit different.

Sincerely,

Joshua D. Drake


-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/




Re: [HACKERS] About PostgreSQL certification

2007-01-23 Thread Mark Kirkwood

Theo Schlossnagle wrote:


Get a CCIE and tell me that again :-)  When you are handed a complicated 
network of routers and switches running all sorts of versions of IOS and 
CatOS and you go to lunch, they break it, and you have a certain time 
allotment to fix it all.




I know all about CCIE - one session fixing up hardware is no substitute 
for experience (and is still vulnerable to the methods I mentioned - it 
is, however, a lot better than a multiple-choice exam, of course).


To cure the shortage of experienced Postgres folks there is only one 
solution - err, more experience! So the need is for good training 
courses (not necessarily certification and all the IMHO nonsense that 
comes with that), and a willingness on the part of employers to invest 
in upskilling their staff.


Cheers

Mark




Re: [HACKERS] About PostgreSQL certification

2007-01-23 Thread Joshua D. Drake

> Oracle's certification programs have helped Oracle
> considerably in gaining the number of Oracle professionals in the job
> market.  PostgreSQL certification has the opportunity to do the same and
> in doing so increase overall PostgreSQL adoption.  That's a good thing.

Well maybe it is just me, but I am perfectly happy with PostgreSQL's
growth. I find that our customers are of a much higher quality than your
average MySQL or Oracle customer.


Sincerely,

Joshua D. Drake



> 
> -- 
> Theo
> 
> // Theo Schlossnagle
> // CTO -- http://www.omniti.com/~jesus/
> // OmniTI Computer Consulting, Inc. -- http://www.omniti.com/
> 
> 


-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/




Re: [HACKERS] tripping an assert in 8.1.6 (more info)

2007-01-23 Thread Tom Lane
Brian Hurt <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Hmm, I thought that stack trace looked a bit familiar --- we seem to
>> have fixed the problem as of 8.2.  Unfortunately I can't recall what
>> the relevant change was exactly; time for some digging in the CVS logs.

> Any hope of getting the fix back-ported into the 8.1 tree?

The bug I was remembering is this one:
http://archives.postgresql.org/pgsql-hackers/2006-12/msg00313.php
http://archives.postgresql.org/pgsql-committers/2006-12/msg00055.php
which happens to fail at the same place but the cause is quite different
--- the fix was in code that didn't even exist in 8.1.

What seems to be happening here is that since 8.1 did not try to track
typmod in UNION trees, the view's column is declared as having type
varchar with typmod -1, and so the upper-level Var for the column has
that too.  But because 8.1 strips out "unnecessary" SubqueryScan nodes,
that Var ends up just one level above the outputs of the MergeJoins,
which show the correct typmod from the underlying tables.  So that
Assert gets upset.  (You don't see the bug in 8.0 because it didn't
have that optimization.)

I'm tempted to suggest that we just remove the Assert on vartypmod in
the 8.1 branch.  The Assert on vartype is doing as much as is really
important to check, and I don't want to disable the trivial_subqueryscan
optimization, which seems the only other low-risk fix.

regards, tom lane



Re: [HACKERS] About PostgreSQL certification

2007-01-23 Thread David Fetter
On Tue, Jan 23, 2007 at 05:19:45PM -0500, Theo Schlossnagle wrote:
> 
> On Jan 23, 2007, at 5:14 PM, Joshua D. Drake wrote:
> 
> >>Get a CCIE and tell me that again :-)  When you are handed a
> >>complicated network of routers and switches running all sorts of
> >>version of  IOS and CatOS and you go to lunch, they break it and
> >>you have a certain time allotment to fix it all.
> >>
> >>Most certifications are not simple multiple choice quizzes.  Just
> >>the ones you hear about -- the ones that suck.
> >>
> >>>I think seeing relevant training courses + experience on a CV
> >>>trumps certification anytime - unfortunately a lot of folks out
> >>>there are mesmerized by shiny certificates
> >>
> >>Sure. But experience is very hard to get.  And since people with
> >>PostgreSQL experience are limited, companies adopting it need a
> >>good second option -- certified people.
> >
> >They aren't limited, just all employed ;)
> 
> I can't find 500, let alone 1000, people with extensive postgresql
> experience in an enterprise environment.  Oracle has an order of
> magnitude more.  MySQL even has better numbers than postgres in this
> arena.  If you only want to hire people with extensive experience,
> you're exposing yourself to an enormous business risk by adopting
> postgres.  You'd have to hire out to a consulting company and if too
> many do that, the consulting company will have scaling issues (as
> all  do).
> 
> The upside of Oracle is that I can hire out to a consulting company
> for some things (particularly challenging scale or recovery issues)
> and get someone who knows their way around Oracle reasonably well
> (has performed _real_ disaster recovery in a hands on fashion,
> performed hands-on query tuning, database sizing exercises, etc.) by
> simply finding someone who is Oracle certified (all of those things
> are part of the Oracle certification process).  Granted, just
> because  someone is certified doesn't mean they "fit" or will excel
> at the  problems you give them -- it's just a nice lower bar.
> Granted you  can make a name for yourself as an expert without
> getting a  certification, but if you've made a name for yourself,
> you aren't  likely to be on the job market -- which is really my
> point.  Oracle's  certification programs have helped Oracle
> considerably in gaining the  number of Oracle professionals in the
> job market.  PostgreSQL  certification has the opportunity to do the
> same and in doing so  increase overall PostgreSQL adoption.  That's
> a good thing.

When you're getting this together, by all means let me know so I can
trumpet it all over the PostgreSQL Weekly News :)

Cheers,
D
-- 
David Fetter <[EMAIL PROTECTED]> http://fetter.org/
phone: +1 415 235 3778    AIM: dfetter666
  Skype: davidfetter

Remember to vote!



Re: [HACKERS] About PostgreSQL certification

2007-01-23 Thread Theo Schlossnagle


On Jan 23, 2007, at 5:14 PM, Joshua D. Drake wrote:



Get a CCIE and tell me that again :-)  When you are handed a
complicated network of routers and switches running all sorts of
version of IOS and CatOS and you go to lunch, they break it and you
have a certain time allotment to fix it all.

Most certifications are not simple multiple choice quizzes.  Just the
ones you hear about -- the ones that suck.


I think seeing relevant training courses + experience on a CV trumps
certification anytime - unfortunately a lot of folks out there are
mesmerized by shiny certificates


Sure. But experience is very hard to get.  And since people with
PostgreSQL experience are limited, companies adopting it need a good
second option -- certified people.


They aren't limited, just all employed ;)


I can't find 500, let alone 1000, people with extensive postgresql  
experience in an enterprise environment.  Oracle has an order of  
magnitude more.  MySQL even has better numbers than postgres in this  
arena.  If you only want to hire people with extensive experience,  
you're exposing yourself to an enormous business risk by adopting  
postgres.  You'd have to hire out to a consulting company and if too  
many do that, the consulting company will have scaling issues (as all  
do).


The upside of Oracle is that I can hire out to a consulting company  
for some things (particularly challenging scale or recovery issues)  
and get someone who knows their way around Oracle reasonably well  
(has performed _real_ disaster recovery in a hands on fashion,  
performed hands-on query tuning, database sizing exercises, etc.) by  
simply finding someone who is Oracle certified (all of those things  
are part of the Oracle certification process).  Granted, just because  
someone is certified doesn't mean they "fit" or will excel at the  
problems you give them -- it's just a nice lower bar.  Granted you  
can make a name for yourself as an expert without getting a  
certification, but if you've made a name for yourself, you aren't  
likely to be on the job market -- which is really my point.  Oracle's  
certification programs have helped Oracle considerably in gaining the  
number of Oracle professionals in the job market.  PostgreSQL  
certification has the opportunity to do the same and in doing so  
increase overall PostgreSQL adoption.  That's a good thing.


--
Theo

// Theo Schlossnagle
// CTO -- http://www.omniti.com/~jesus/
// OmniTI Computer Consulting, Inc. -- http://www.omniti.com/





Re: [HACKERS] About PostgreSQL certification

2007-01-23 Thread Joshua D. Drake

> Get a CCIE and tell me that again :-)  When you are handed a complicated
> network of routers and switches running all sorts of version of IOS and
> CatOS and you go to lunch, they break it and you have a certain time
> allotment to fix it all.
> 
> Most certifications are not simple multiple choice quizzes.  Just the
> ones you hear about -- the ones that suck.
> 
>> I think seeing relevant training courses + experience on a CV trumps
>> certification anytime - unfortunately a lot of folks out there are
>> mesmerized by shiny certificates
> 
> Sure. But experience is very hard to get.  And since people with
> PostgreSQL experience are limited, companies adopting it need a good
> second option -- certified people.

They aren't limited, just all employed ;)

Joshua D. Drake

> 
> // Theo Schlossnagle
> // CTO -- http://www.omniti.com/~jesus/
> // OmniTI Computer Consulting, Inc. -- http://www.omniti.com/
> 
> 
> 
> 


-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/




Re: [HACKERS] About PostgreSQL certification

2007-01-23 Thread Theo Schlossnagle


On Jan 23, 2007, at 5:04 PM, Mark Kirkwood wrote:


Theo Schlossnagle wrote:

On Jan 23, 2007, at 4:33 PM, David Fetter wrote:

On Tue, Jan 23, 2007 at 11:52:08AM -0200, Iannsp wrote:

Hello,
I did like to know what you think about the postgresql
certifications provided for

PostgreSQL CE http://www.sraoss.co.jp/postgresql-ce/news_en.html

CertFirst http://www.certfirst.com/postgreSql.htm

My question is about the validate of this certification for the
clients.  Make difference to be certified?


Clueful clients will look unfavorably on any "PostgreSQL
certification" you have.  They will instead insist on experience and
references, as clueful clients do. :)
I don't believe that's true.  Oracle certification means quite a  
bit.  Cisco certification is excellent.  Sun certification is  
decent.  If the PostgreSQL certifications don't mean much it is a  
problem with the particular vendor of the certificate and you (as  
a PostgreSQL entity) should contest their right to use PostgreSQL  
name in their advertising or marketing.  Certification programs  
can and should mean something.


Certification is ok - but is only of actual value when combined  
with real experience. The reason I say this is that certification  
programs in general can be beaten by various techniques (e.g.  
friends, online research, guessing etc). Also over time they are  
rendered (almost) useless by the (lucrative) side businesses that  
come into being (e.g. 'boot camps', mock exams etc).


Get a CCIE and tell me that again :-)  When you are handed a  
complicated network of routers and switches running all sorts of  
version of IOS and CatOS and you go to lunch, they break it and you  
have a certain time allotment to fix it all.


Most certifications are not simple multiple choice quizzes.  Just the
ones you hear about -- the ones that suck.


I think seeing relevant training courses + experience on a CV  
trumps certification anytime - unfortunately a lot of folks out  
there are mesmerized by shiny certificates


Sure. But experience is very hard to get.  And since people with  
PostgreSQL experience are limited, companies adopting it need a good  
second option -- certified people.


// Theo Schlossnagle
// CTO -- http://www.omniti.com/~jesus/
// OmniTI Computer Consulting, Inc. -- http://www.omniti.com/





Re: [HACKERS] About PostgreSQL certification

2007-01-23 Thread Joshua D. Drake
Theo Schlossnagle wrote:
> 
> On Jan 23, 2007, at 4:33 PM, David Fetter wrote:
> 
>> On Tue, Jan 23, 2007 at 11:52:08AM -0200, Iannsp wrote:
>>> Hello,
>>> I did like to know what you think about the postgresql
>>> certifications provided for
>>>
>>> PostgreSQL CE http://www.sraoss.co.jp/postgresql-ce/news_en.html
>>>
>>> CertFirst http://www.certfirst.com/postgreSql.htm
>>>
>>> My question is about the validate of this certification for the
>>> clients.  Make difference to be certified?
>>
>> Clueful clients will look unfavorably on any "PostgreSQL
>> certification" you have.  They will instead insist on experience and
>> references, as clueful clients do. :)
> 
> I don't believe that's true.  Oracle certification means quite a bit. 
> Cisco certification is excellent.  Sun certification is decent. 

I agree that there are certifications that are worth something,
specifically those you mention above.

However, PostgreSQL certifications are not currently worth anything IMO.

Sincerely,

Joshua D. Drake
-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/




Re: [HACKERS] About PostgreSQL certification

2007-01-23 Thread Mark Kirkwood

Theo Schlossnagle wrote:


On Jan 23, 2007, at 4:33 PM, David Fetter wrote:


On Tue, Jan 23, 2007 at 11:52:08AM -0200, Iannsp wrote:

Hello,
I did like to know what you think about the postgresql
certifications provided for

PostgreSQL CE http://www.sraoss.co.jp/postgresql-ce/news_en.html

CertFirst http://www.certfirst.com/postgreSql.htm

My question is about the validate of this certification for the
clients.  Make difference to be certified?


Clueful clients will look unfavorably on any "PostgreSQL
certification" you have.  They will instead insist on experience and
references, as clueful clients do. :)


I don't believe that's true.  Oracle certification means quite a bit.  
Cisco certification is excellent.  Sun certification is decent.  If the 
PostgreSQL certifications don't mean much it is a problem with the 
particular vendor of the certificate and you (as a PostgreSQL entity) 
should contest their right to use PostgreSQL name in their advertising 
or marketing.  Certification programs can and should mean something.




Certification is ok - but is only of actual value when combined with 
real experience. The reason I say this is that certification programs in 
general can be beaten by various techniques (e.g. friends, online 
research, guessing etc). Also over time they are rendered (almost) 
useless by the (lucrative) side businesses that come into being (e.g. 
'boot camps', mock exams etc).


I think seeing relevant training courses + experience on a CV trumps 
certification anytime - unfortunately a lot of folks out there are 
mesmerized by shiny certificates


Cheers

Mark



Re: [HACKERS] About PostgreSQL certification

2007-01-23 Thread David Fetter
On Tue, Jan 23, 2007 at 04:41:03PM -0500, Theo Schlossnagle wrote:
> On Jan 23, 2007, at 4:33 PM, David Fetter wrote:
> >On Tue, Jan 23, 2007 at 11:52:08AM -0200, Iannsp wrote:
> >>Hello,
> >>I did like to know what you think about the postgresql
> >>certifications provided for
> >>
> >>PostgreSQL CE http://www.sraoss.co.jp/postgresql-ce/news_en.html
> >>
> >>CertFirst http://www.certfirst.com/postgreSql.htm
> >>
> >>My question is about the validate of this certification for the
> >>clients.  Make difference to be certified?
> >
> >Clueful clients will look unfavorably on any "PostgreSQL
> >certification" you have.  They will instead insist on experience
> >and references, as clueful clients do. :)
> 
> I don't believe that's true.  Oracle certification means quite a
> bit.  Cisco certification is excellent.  Sun certification is
> decent.  If the PostgreSQL certifications don't mean much it is a
> problem with the particular vendor of the certificate and you (as a
> PostgreSQL entity) should contest their right to use PostgreSQL name
> in their advertising or marketing.

Sadly, at least in the U.S., PostgreSQL is unlikely to be a defensible
trademark.  I am not an intellectual property attorney, and if I were
one, my opinion would not be as weighty as a court case.

> Certification programs can and should mean something.

I'd love to see a good one for PostgreSQL.  What I've seen so far has
been somewhere between dismal and rotten.

> We offer training programs here and have considered offering OmniTI
> certifications in the future.  I wouldn't offer them unless I
> thought  it meant something that companies "out there" could rely
> on.

Great :)

> Many  other certifying entities have the same approach.

99% of them give the rest a bad name ;)

Cheers,
D
-- 
David Fetter <[EMAIL PROTECTED]> http://fetter.org/
phone: +1 415 235 3778    AIM: dfetter666
  Skype: davidfetter

Remember to vote!



Re: [HACKERS] About PostgreSQL certification

2007-01-23 Thread Theo Schlossnagle


On Jan 23, 2007, at 4:33 PM, David Fetter wrote:


On Tue, Jan 23, 2007 at 11:52:08AM -0200, Iannsp wrote:

Hello,
I did like to know what you think about the postgresql
certifications provided for

PostgreSQL CE http://www.sraoss.co.jp/postgresql-ce/news_en.html

CertFirst http://www.certfirst.com/postgreSql.htm

My question is about the validate of this certification for the
clients.  Make difference to be certified?


Clueful clients will look unfavorably on any "PostgreSQL
certification" you have.  They will instead insist on experience and
references, as clueful clients do. :)


I don't believe that's true.  Oracle certification means quite a  
bit.  Cisco certification is excellent.  Sun certification is  
decent.  If the PostgreSQL certifications don't mean much it is a  
problem with the particular vendor of the certificate and you (as a  
PostgreSQL entity) should contest their right to use PostgreSQL name  
in their advertising or marketing.  Certification programs can and  
should mean something.


We offer training programs here and have considered offering OmniTI  
certifications in the future.  I wouldn't offer them unless I thought  
it meant something that companies "out there" could rely on.  Many  
other certifying entities have the same approach.


// Theo Schlossnagle
// CTO -- http://www.omniti.com/~jesus/
// OmniTI Computer Consulting, Inc. -- http://www.omniti.com/





Re: [HACKERS] About PostgreSQL certification

2007-01-23 Thread David Fetter
On Tue, Jan 23, 2007 at 11:52:08AM -0200, Iannsp wrote:
> Hello,
> I would like to know what you think about the PostgreSQL
> certifications provided by
> 
> PostgreSQL CE http://www.sraoss.co.jp/postgresql-ce/news_en.html
> 
> CertFirst http://www.certfirst.com/postgreSql.htm
> 
> My question is about the validity of this certification for
> clients.  Does it make a difference to be certified?

Clueful clients will look unfavorably on any "PostgreSQL
certification" you have.  They will instead insist on experience and
references, as clueful clients do. :)

Cheers,
D
-- 
David Fetter <[EMAIL PROTECTED]> http://fetter.org/
phone: +1 415 235 3778    AIM: dfetter666
  Skype: davidfetter

Remember to vote!



[HACKERS] guc fallback to default

2007-01-23 Thread Joachim Wieland
I'm working again on the patch for making guc variables fall back to their
default value if they get removed (or commented) in the configuration file.

There is still an issue with custom variables that needs discussion.

Remember that for regular variables we have the following semantics:

BEGIN;
SET enable_seqscan TO off;
COMMIT;

The effect of the commit on the variable is that the variable is set to the
specified value from then on in that session (outside of the transaction).

This is also valid for custom variables. But those can be removed from the
configuration file while all other variables can not (all other variables
fall back to some default value).

Imagine the following example:

Configuration file:
custom_variable_classes = "foo"
foo.var = 3

In a session we do:
BEGIN;
SET foo.var TO 5;

With the transaction still being open, we remove the definition of foo.var
from the configuration file and send SIGHUP.

Then we commit the transaction:

COMMIT;

So what should happen?

Interpretation 1:
foo.var got deleted. COMMIT can not assure that the value of
foo.var gets applied, because foo.var does not exist anymore.
The transaction fails.

Interpretation 2:
The foo.var variable from the configuration file got deleted but the
SET command in the transaction defines a new variable which is
valid, because we still have custom_variable_classes = "foo". The
transaction succeeds.


The second interpretation is based on the fact that you can create a custom
variable by just assigning a value to it.  So if you have
custom_variable_classes = "foo", any variable under the "foo." prefix is valid.

Actually I think we could go either way; it seems to be a really rare corner
case.  I'm fine with either way.

Note that if we deleted the line with "custom_variable_classes = foo" from
the previous example as well, it is clear that the transaction should fail.
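To make the two interpretations concrete, a hypothetical psql session for the
scenario above might look like this (a sketch only; the post-COMMIT value is
exactly the open question):

```sql
-- assumed postgresql.conf at session start:
--   custom_variable_classes = 'foo'
--   foo.var = 3

BEGIN;
SET foo.var TO 5;   -- overrides the custom variable within the session
SHOW foo.var;       -- 5 inside the transaction

-- at this point foo.var is removed from postgresql.conf and SIGHUP is sent

COMMIT;
SHOW foo.var;       -- interpretation 1: the transaction fails at COMMIT;
                    -- interpretation 2: COMMIT succeeds and this shows 5
```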



Joachim






Re: [HACKERS] About PostgreSQL certification

2007-01-23 Thread Joshua D. Drake
Iannsp wrote:
> Hello,
> I would like to know what you think about the PostgreSQL certifications
> provided by
> 
> PostgreSQL CE
> http://www.sraoss.co.jp/postgresql-ce/news_en.html
> 
> CertFirst
> http://www.certfirst.com/postgreSql.htm
> 
> My question is about the validity of this certification for clients.
> Does it make a difference to be certified?

It doesn't make a difference to be certified.

Sincerely,

Joshua D. Drake



> 
> thanks for advanced.
> 
> Ivo Nascimento.
> 


-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/




Re: [HACKERS] "tupdesc reference is not owned by resource owner Portal"

2007-01-23 Thread Stefan Kaltenbrunner
Tom Lane wrote:
> Stefan Kaltenbrunner <[EMAIL PROTECTED]> writes:
>> The following testcase (extracted from a much much larger production code 
>> sample) results in
> 
>> WARNING:  TupleDesc reference leak: TupleDesc 0xb3573b88 (2249,1) still 
>> referenced
>> CONTEXT:  PL/pgSQL function "foo" line 4 at block variables initialization
>> ERROR:  tupdesc reference 0xb3573b88 is not owned by resource owner Portal
>> CONTEXT:  PL/pgSQL function "foo" while casting return value to 
>> function's return type
> 
> Hmm.  What's happening is that the record-function call creates a
> reference-counted TupleDesc, and tracking of the TupleDesc is
> assigned to the subtransaction resource owner because we're inside
> an EXCEPTION-block subtransaction.  But the pointer is held by the
> function's eval_context which lives throughout the function call,
> and so the free happens long after exiting the subtransaction, and
> the resource owner code quite properly complains about this.
> 
> In this particular case the worst consequence would be a short-term
> memory leak, but I think there are probably variants with worse
> problems, because anything done by a RegisterExprContextCallback()
> callback is equally at risk.
> 
> I think the proper fix is probably to establish a new eval_context
> when we enter an EXCEPTION block, and destroy it again on the way out.
> Slightly annoying, but probably small next to the other overhead of
> a subtransaction.  Comments?

We use exception blocks heavily here, so anything that makes them slower
is not nice, but if it fixes the issue at hand I'm all for it ...

> 
> BTW, both of the CONTEXT lines are misleading.  The WARNING happens
> during exit from the begin-block, not entry to it; and the ERROR
> happens after we've finished fooling with the result value.  I'm
> tempted to add a few more assignments to err_text to make this nicer.

Yeah, I wondered about that too when I tried to produce a simple testcase -
the errors didn't seem to make much sense in the context of what
triggered them. Improving that would be a very good thing to do.


Stefan



Re: [HACKERS] Default permissisons from schemas

2007-01-23 Thread Tom Lane
Stephen Frost <[EMAIL PROTECTED]> writes:
> * Tom Lane ([EMAIL PROTECTED]) wrote:
>> Before discussing "limitations" you should first justify why we need any
>> such concept at all.  It was no part of the original TODO item and I
>> cannot see any good use for it.

> There are permissions which are not grantable but exist as implicitly
> granted to the owner of object.  These include drop, truncate, alter.
> Practically, I find myself having to change the owner of objects which I
> create almost as often as I'm defining the ACL for those objects.  In
> many of our schemas all the objects should be owned by the same 'admin'
> role so that those who are in that role can perform the actions which
> are only available to object owners, much the same as those objects
> having a certain set of minimum ACLs.

I don't see any argument here for not creating the object as owned by
the creator -- as you note, SET ROLE is the way to cause something to be
owned by a role you have permission to become.  The important difference
is that SET ROLE actually checks that you have that permission, whereas
a magical catalog entry saying "create objects as somebody else instead"
wouldn't.  Maybe you could make it do so, but that would just be a
surprising behavior IMHO; and surprising security-related behaviors are
seldom a good idea.
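A minimal sketch of the permission-checked path described above (the role and
table names here are made up for illustration):

```sql
-- assumes a role "admin" exists and the current user has been GRANTed it
SET ROLE admin;          -- errors out unless that grant really exists
CREATE TABLE audit_log (id serial PRIMARY KEY, msg text);
RESET ROLE;
-- audit_log is now owned by "admin"; ownership was permission-checked up front
```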

BTW, I believe a schema owner can DROP any contained object whether he
owns it or not; without that the owner's ability to DROP the schema
would of course be worthless...

> I had thought it was going to be possible to set up roles/permissions
> such that a newly created object would be owned by the role through
> which the CREATE permission is given but that doesn't seem to be the
> case (or perhaps I'm doing something wrong with it).

Hm, I have some vague recollection that we considered that and rejected
it.  Probably because it's ill-defined: what if there is more than one
path through which you've been granted CREATE permission?

regards, tom lane



Re: [HACKERS] tripping an assert in 8.1.6 (more info)

2007-01-23 Thread Brian Hurt

Tom Lane wrote:

> Brian Hurt <[EMAIL PROTECTED]> writes:
>> Execute, on a fresh database, the following sql, to recreate the bug:
>
> Hmm, I thought that stack trace looked a bit familiar --- we seem to
> have fixed the problem as of 8.2.  Unfortunately I can't recall what
> the relevant change was exactly; time for some digging in the CVS logs.

Any hope of getting the fix back-ported into the 8.1 tree?

Brian



Re: [HACKERS] tripping an assert in 8.1.6 (more info)

2007-01-23 Thread Tom Lane
Brian Hurt <[EMAIL PROTECTED]> writes:
> Execute, on a fresh database, the following sql, to recreate the bug:

Hmm, I thought that stack trace looked a bit familiar --- we seem to
have fixed the problem as of 8.2.  Unfortunately I can't recall what
the relevant change was exactly; time for some digging in the CVS logs.

regards, tom lane



Re: [HACKERS] msvc failure in largeobject regression test

2007-01-23 Thread Jeremy Drake
On Tue, 23 Jan 2007, Magnus Hagander wrote:

> On Tue, Jan 23, 2007 at 09:31:40AM -0500, Andrew Dunstan wrote:
> > Magnus Hagander wrote:
> > >Hi!
> > >
> > >I get failures for the largeobject regression tests on my vc++ build. I
> > >don't think this has ever worked, given that those tests are fairly new.
> > >Any quick ideas on what's wrong before I dig deeper?
> > >
> > >
> > [snip]
> >
> > I wonder if this is a line-end issue? Assuming you are working from CVS,
> > does your client turn \n into \r\n ? I see that other windows boxes are
> > happily passing this test on the buildfarm, and of course the mingw cvs
> > doesn't adjust line endings.
>
> Bingo!
>
> That's it. I copied the file in binary mode from a linux box and now it
> passes.

I thought about that when I wrote it, and thus tried it under mingw and
cygwin without issue ;)  I don't think the regression tests were in a
position of running on the msvc build at the time...  My thought for what
to do if this did run into a problem would be an alternate output file
that is also acceptable (I don't know what they're called but other tests
have them IIRC).



-- 
Fifth Law of Procrastination:
Procrastination avoids boredom; one never has the feeling that
there is nothing important to do.



Re: [HACKERS] Default permissisons from schemas

2007-01-23 Thread Stephen Frost
* Tom Lane ([EMAIL PROTECTED]) wrote:
> Stephen Frost <[EMAIL PROTECTED]> writes:
> > * Tom Lane ([EMAIL PROTECTED]) wrote:
> >> Whoa.  You are going to allow people to create objects owned by someone
> >> else?  I don't think so ... most Unix systems have forbidden object
> >> give-away for years, for very good reasons.
> 
> > Hmm.  While I agree with the sentiment, Unix does provide for setgid
> > such that objects inherit a specific group on creation.  Using roles we
> > don't get that distinction so I don't think comparing it to Unix is a
> > slam-dunk.  There do need to be limitations here though, certainly.
> 
> Before discussing "limitations" you should first justify why we need any
> such concept at all.  It was no part of the original TODO item and I
> cannot see any good use for it.

There are permissions which are not grantable but exist as implicitly
granted to the owner of object.  These include drop, truncate, alter.
Practically, I find myself having to change the owner of objects which I
create almost as often as I'm defining the ACL for those objects.  In
many of our schemas all the objects should be owned by the same 'admin'
role so that those who are in that role can perform the actions which
are only available to object owners, much the same as those objects
having a certain set of minimum ACLs.

This is, of course, only for object creation.  It is possible to use
'set role' to set initial ownership on an object but for as much as it's
possible I find that it doesn't happen very often.  I had thought it was
going to be possible to set up roles/permissions such that a newly
created object would be owned by the role through which the CREATE
permission is given but that doesn't seem to be the case (or perhaps I'm
doing something wrong with it).

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] 10 weeks to feature freeze (Pending Work)

2007-01-23 Thread Merlin Moncure

On 1/22/07, Joshua D. Drake <[EMAIL PROTECTED]> wrote:

Or so... :)

Thought I would do a poll of what is happening in the world for 8.3. I have:

Alvaro Herrera: Autovacuum improvements (maintenance window etc..)
Gavin Sherry: Bitmap Indexes (on disk), possible basic Window functions
Jonah Harris: WITH/Recursive Queries?
Andrei Kovalesvki: Some Win32 work with Magnus
Magnus Hagander: VC++ support (thank goodness)
Heikki Linnakangas: Working on Vacuum for Bitmap Indexes?
Oleg Bartunov: Tsearch2 in core
Neil Conway: Patch Review (including enums), pg_fcache


Has there been any progress on the 'hot' tuple update mechanism?

merlin



Re: [HACKERS] tripping an assert in 8.1.6 (more info)

2007-01-23 Thread Brian Hurt
I forgot to mention: core dumps available upon request (obviously I 
don't want to post them to the list).


Brian




Re: [HACKERS] tripping an assert in 8.1.6 (more info)

2007-01-23 Thread Brian Hurt

More info on that assert I've hit.

Compile 8.1.6 with configuration options:

./configure --with-perl --enable-debug --enable-cassert

(not sure if --with-perl is relevant or not; I think not).

This is on Fedora Core 5 on x86-32.

Execute, on a fresh database, the following sql, to recreate the bug:

CREATE TABLE foo
(
some_data VARCHAR(32)
,row_date DATE
);

CREATE TABLE bar
(
some_data VARCHAR(32)
,row_date DATE
);

CREATE TABLE quux
(
some_data VARCHAR(32)
,more_data VARCHAR(32)
);


CREATE OR REPLACE VIEW bazz AS
SELECT
   ('bar: ' || bar.row_date) :: TEXT AS action,
   quux.more_data AS more_data,
   bar.row_date AS row_date
FROM
   bar
   JOIN quux
   ON bar.some_data = quux.some_data
UNION ALL
SELECT
   ('foo: ' || foo.row_date) :: TEXT AS action,
   quux.more_data AS more_data,
   foo.row_date AS row_date
FROM
   foo
   JOIN quux
   ON foo.some_data = quux.some_data
;

SELECT action, more_data FROM bazz;





Re: [HACKERS] Default permissisons from schemas

2007-01-23 Thread Tom Lane
Stephen Frost <[EMAIL PROTECTED]> writes:
> * Tom Lane ([EMAIL PROTECTED]) wrote:
>> Whoa.  You are going to allow people to create objects owned by someone
>> else?  I don't think so ... most Unix systems have forbidden object
>> give-away for years, for very good reasons.

> Hmm.  While I agree with the sentiment, Unix does provide for setgid
> such that objects inherit a specific group on creation.  Using roles we
> don't get that distinction so I don't think comparing it to Unix is a
> slam-dunk.  There do need to be limitations here though, certainly.

Before discussing "limitations" you should first justify why we need any
such concept at all.  It was no part of the original TODO item and I
cannot see any good use for it.

regards, tom lane



Re: [HACKERS] "tupdesc reference is not owned by resource owner Portal" issue in 8.2 and -HEAD

2007-01-23 Thread Tom Lane
Stefan Kaltenbrunner <[EMAIL PROTECTED]> writes:
> The following testcase (extracted from a much much larger production code
> sample) results in

> WARNING:  TupleDesc reference leak: TupleDesc 0xb3573b88 (2249,1) still 
> referenced
> CONTEXT:  PL/pgSQL function "foo" line 4 at block variables initialization
> ERROR:  tupdesc reference 0xb3573b88 is not owned by resource owner Portal
> CONTEXT:  PL/pgSQL function "foo" while casting return value to 
> function's return type

Hmm.  What's happening is that the record-function call creates a
reference-counted TupleDesc, and tracking of the TupleDesc is
assigned to the subtransaction resource owner because we're inside
an EXCEPTION-block subtransaction.  But the pointer is held by the
function's eval_context which lives throughout the function call,
and so the free happens long after exiting the subtransaction, and
the resource owner code quite properly complains about this.

In this particular case the worst consequence would be a short-term
memory leak, but I think there are probably variants with worse
problems, because anything done by a RegisterExprContextCallback()
callback is equally at risk.

I think the proper fix is probably to establish a new eval_context
when we enter an EXCEPTION block, and destroy it again on the way out.
Slightly annoying, but probably small next to the other overhead of
a subtransaction.  Comments?

BTW, both of the CONTEXT lines are misleading.  The WARNING happens
during exit from the begin-block, not entry to it; and the ERROR
happens after we've finished fooling with the result value.  I'm
tempted to add a few more assignments to err_text to make this nicer.

regards, tom lane



Re: [HACKERS] Updateable cursors

2007-01-23 Thread Richard Troy

On Wed, 24 Jan 2007, FAST PostgreSQL wrote:
>
> We are trying to develop the updateable cursors functionality into
> Postgresql. I have given below details of the design and also issues we are
> facing.  Looking forward to the advice on how to proceed with these issues.
>
> Rgds,
> Arul Shaji
>

Hi Arul,

...I can see people are picking apart the implementation details so you're
getting good feedback on your ambitious proposal. Looks like you've put a
lot of thought/work into it.

I've never been a fan of cursors because they encourage bad behavior;
"Think time" in a transaction sometimes becomes "lunch time" for users and
in any event long lock duration is something to be avoided for the sake of
concurrency and sometimes performance (vacuum, etc). My philosophy is "get
in and get out quick."

Ten years ago May, our first customer insisted we implement what has
become our primary API library in Java and somewhat later I was shocked to
learn that for whatever reason Java ResultSets are supposed to be
implemented as _updateable_cursors._ This created serious security issues
for handing off results to other programs through the library - ones that
don't even have the ability to connect to the target database. Confirmed
in the behavior of Informix, we went through some hoops to remove the need
to pass ResultSets around. (If I had only known Postgres didn't implement
the RS as an updateable cursor, I'd have pushed for our primary platform
to be Postgres!)

What impresses me is that Postgres has survived so well without updateable
cursors. To my mind it illustrates that they aren't widely used. I'm
wondering what troubles lurk ahead once they're available. As a
DBA/SysAdmin, I'd be quite happy if there existed some kind of log
element that indicated updateable cursors were in use, one I could search
for easily whenever trying to diagnose some performance or deadlocking
problem, etc., say log file entries that indicated the opening and later
closing of such a cursor, with an id of some kind that allowed matching up
open/close pairs. I also think that the documentation should be
updated to not only indicate usage of this new feature, but provide
cautionary warnings about the potential locking issues and, for the
authors of libraries, Java in particular, the possible security issues.

Regards,
Richard

-- 
Richard Troy, Chief Scientist
Science Tools Corporation
510-924-1363 or 202-747-1263
[EMAIL PROTECTED], http://ScienceTools.com/




Re: [HACKERS] Default permissisons from schemas

2007-01-23 Thread Stephen Frost
* Tom Lane ([EMAIL PROTECTED]) wrote:
> Stephen Frost <[EMAIL PROTECTED]> writes:
> >   Following up on my reply to Joshua, what I'd like to propose is, for
> >   comments and suggestions:
> 
> >   ALTER SCHEMA name [ [ WITH ] [ DEFAULT ] option [ ... ] ]
> 
> >   where option can be:
> 
> > { GRANT { { SELECT | INSERT | UPDATE | DELETE | RULE | REFERENCES | TRIGGER | EXECUTE }
> >   [,...] | ALL [ PRIVILEGES ] }
> >   TO { role | PUBLIC } [, ...] [ WITH GRANT OPTION ]
> > } [, ...]
> 
> > OWNER role
> 
> This seems to ignore the problem that different types of objects have
> different privileges.  E.g., if I want to grant USAGE on all sequences
> that doesn't necessarily mean I want to grant USAGE on all languages.

Hm, I agree with that.  So the construct should be more along the lines
of:
{ GRANT { { SELECT | INSERT | UPDATE | DELETE | RULE | REFERENCES |
TRIGGER | EXECUTE } [,...] | ALL [ PRIVILEGES ] } ON 
{ TABLE | FUNCTION | LANGUAGE } TO { role | PUBLIC } [, ...] [ WITH
GRANT OPTION ] } [, ...]

That list is pulled from the GRANT syntax where we don't currently
distinguish sequences from tables.  I can understand wanting to make
that distinction here but I'm not sure what is best to use.  Perhaps,
from the 'create' syntax we could use this list instead:

AGGREGATE | CAST | CONVERSION | DOMAIN | SEQUENCE | TABLE | VIEW |
FUNCTION | LANGUAGE | OPERATOR CLASS | OPERATOR | TYPE

I've left out TRIGGER, RULE and INDEX as objects which don't have 
their own ACLs (I don't think?) and DATABASE, GROUP, ROLE, TABLESPACE,
and USER as objects which don't exist inside of schemas.
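
To make the discussion concrete, a hypothetical invocation of the syntax being
sketched might look as follows (illustrative only: this grammar was a proposal
and was never committed in this form, and the schema and role names are made up):

```sql
-- hypothetical: per-object-type default grants attached to a schema
ALTER SCHEMA app WITH DEFAULT
    GRANT SELECT, INSERT, UPDATE ON TABLE TO app_writers,
    GRANT EXECUTE ON FUNCTION TO app_writers;
```

PostgreSQL eventually addressed this use case with the separate ALTER DEFAULT
PRIVILEGES command in 9.0.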

> >   When not-null the 'nspdefowner' would be the owner of all
> >   objects created in the schema.
> 
> Whoa.  You are going to allow people to create objects owned by someone
> else?  I don't think so ... most Unix systems have forbidden object
> give-away for years, for very good reasons.

Hmm.  While I agree with the sentiment, Unix does provide for setgid
such that objects inherit a specific group on creation.  Using roles we
don't get that distinction so I don't think comparing it to Unix is a
slam-dunk.  There do need to be limitations here though, certainly.  A
couple options, in order of my preference:

The user issuing the ALTER SCHEMA command must be a member of the role
being set as the nspdefowner.  Other users who can create tables in that
schema need not be a member of the role the object ends up being owned
by.  The idea here being that theoretically the schema owner could
change the ownership to what they want it to be afterwards anyway.

The user creating the object must have all rights normally required to create
it in the schema with the owner/ACL asked for.  This would probably
also work for most people.  If those rights are not available then the
appropriate action, imv, would be to fall back to the process for
determining the owner currently used today.  As for insufficient rights
for the ACL, the ACL for the object would go back to NULL.  I'm a little
concerned this would end up being confusing for users though I suppose
we could issue a notice if this happens.  An alternative would be to
deny the creation, but that doesn't seem quite right if the user has
create rights on the schema.

Comments?

Thanks!

Stephen




Re: [HACKERS] Updateable cursors

2007-01-23 Thread Simon Riggs
On Tue, 2007-01-23 at 10:39 -0500, Tom Lane wrote:
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
> > On Tue, 2007-01-23 at 09:55 -0500, Tom Lane wrote:
> >> This really isn't gonna work, because it assumes that the tuple that is
> >> "current" at the instant of parsing is still going to be "current" at
> >> execution time.
> 
> > Of course that's true, but you've misread my comment.
> 
> > The portal with the cursor in will not change, no matter how many times
> > we execute WHERE CURRENT OF in another portal.
> 
> Really?  The cursor portal will cease to exist as soon as the
> transaction ends, but the prepared plan won't.  

Yes, understood.

I just want it to work well with prepared queries also. That seems both
a reasonable goal and also achievable by caching in the way requested.

> A reasonable person
> would expect that WHERE CURRENT OF will parse into a plan that just
> stores the cursor name, and looks up the cursor at execution time.

We just store the Xid for which the cache is relevant, then refresh the
cache if it is stale.

If you don't like the idea, say so. There's no need for anything more.

But those are minor points if you have stronger reservations about the
main proposal, which it sounds like you do.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com





[HACKERS] "tupdesc reference is not owned by resource owner Portal" issue in 8.2 and -HEAD

2007-01-23 Thread Stefan Kaltenbrunner
The following testcase (extracted from a much much larger production code
sample) results in


WARNING:  TupleDesc reference leak: TupleDesc 0xb3573b88 (2249,1) still 
referenced

CONTEXT:  PL/pgSQL function "foo" line 4 at block variables initialization
ERROR:  tupdesc reference 0xb3573b88 is not owned by resource owner Portal
CONTEXT:  PL/pgSQL function "foo" while casting return value to 
function's return type


on 8.2 and -HEAD.

8.1 seems to work fine.

Stefan


CREATE OR REPLACE FUNCTION public.foo() RETURNS INTEGER AS $$
DECLARE
v_var INTEGER;
BEGIN
BEGIN
v_var := (bar()).error_code;
EXCEPTION WHEN others THEN
RETURN 0;
END;
RETURN 0;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION public.bar(OUT error_code INTEGER, OUT new_id 
INTEGER) RETURNS RECORD AS $$

BEGIN
error_code := 1;
new_id := 1;
RETURN;
END;
$$ LANGUAGE plpgsql;

SELECT * FROM public.foo();



Re: [HACKERS] Piggybacking vacuum I/O

2007-01-23 Thread Pavan Deolasee

On 1/23/07, Tom Lane <[EMAIL PROTECTED]> wrote:


"Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> Would it help to set the status of the XMIN/XMAX of tuples early enough
such
> that the heap page is still in the buffer cache, but late enough such
that
> the XMIN/XMAX transactions are finished ? How about doing it when the
> bgwriter is about to write the page to disk ?

No.  The bgwriter would then become subject to deadlocks because it
would be needing to read in clog pages before it could flush out
dirty pages.  In any case, if the table is in active use then some
passing backend has probably updated the bits already ...



Well, let me collect some evidence. If we figure out that there is indeed
CLOG buffer thrashing at VACUUM time, I am sure we would be able to solve
the problem one way or the other.

IMHO this case would be more applicable to very large tables where the
UPDATEd rows are not accessed again for a long time, and hence the hint bits
might not have been updated.

Thanks,
Pavan




--

EnterpriseDB http://www.enterprisedb.com


Re: [HACKERS] 10 weeks to feature freeze (Pending Work)

2007-01-23 Thread Joshua D. Drake
Pavan Deolasee wrote:
> On 1/23/07, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
>>
>> Or so... :)
>>
>> I am sure there are more, the ones with question marks are unknowns but
>> heard of in the ether somewhere. Any additions or confirmations?
>>
>>
> I have the first phase of the Frequent Update Optimization (HOT) patch ready.
> But I held it back because of concerns that it's too complex. It has
> shown decent performance gains on pgbench and DBT2 tests though.
> 
> I am splitting the patch into smaller pieces for ease of review and would
> submit those soon for comments.

*soon* is the operative word :).

Sincerely,

Joshua D. Drake

> 
> Thanks,
> Pavan
> 
> EnterpriseDB http://www.enterprisedb.com
> 


-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/




Re: [HACKERS] Updateable cursors

2007-01-23 Thread Tom Lane
"Simon Riggs" <[EMAIL PROTECTED]> writes:
> On Tue, 2007-01-23 at 09:55 -0500, Tom Lane wrote:
>> This really isn't gonna work, because it assumes that the tuple that is
>> "current" at the instant of parsing is still going to be "current" at
>> execution time.

> Of course that's true, but you've misread my comment.

> The portal with the cursor in will not change, no matter how many times
> we execute WHERE CURRENT OF in another portal.

Really?  The cursor portal will cease to exist as soon as the
transaction ends, but the prepared plan won't.  A reasonable person
would expect that WHERE CURRENT OF will parse into a plan that just
stores the cursor name, and looks up the cursor at execution time.

> The OP suggested putting
> the current tuple pointer onto the portal data, so this will work.

No, as I read his message he was suggesting pulling data out of the
cursor portal at plan time so that no downstream (executor) changes
would be needed.  That is certainly never going to be workable.

regards, tom lane



Re: [HACKERS] STOP all user access except for admin for a few minutes?

2007-01-23 Thread Tino Wildenhain

[EMAIL PROTECTED] schrieb:

Thx Russel,
I want to control it from software, changing network access via pg_hba 
with software doesn't feel right.


 possible case
Say I have a Group called Normal_Rights and one called Zero_Rights.

So dB runs as... Normal_Rights(User A, User B, User C, User D)
Then via sql, superuser REVOKEs those user rights and GRANTs them
Zero_Rights(User A, User B, User C, User D)... i.e. make users members of
the ZERO rights group.


Then hopefully Postgres kicks them out gracefully?

Then software makes changes and switches them back to their Normal_Rights 
group.


 or  more general case
RECORD all the SQL for all user rights...
REVOKE everything except needed software superusers (postgres, and 
program superuser).

make changes via software.
PLAY BACK all the rights SQL script.

What do you think: will PG kill connections, let them go gracefully, or
stop after the current transaction?


 maybe I'm in the wrong tree


Yes I'm thinking that too:

Is it possible to make quick structural changes to postgres, with user 
activity?


of course.

Maybe start a transaction that changes structure... wonder if that will 
stop or hold user activity???


Usually not - all your DDL is done in a transaction just like any other
access users would make. So it only fails (but as a whole) if you want
to modify locked tables and such. But you would not end up with a partly
changed database in any case. Just make sure you do everything in
a transaction. No need to suspend user accounts for that.


Regards
Tino



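
Tino's point about transactional DDL can be sketched as follows (the table,
column, and index names are hypothetical). If any statement fails, the whole
block rolls back, so concurrent users never observe a partly changed schema:

```sql
BEGIN;
ALTER TABLE accounts ADD COLUMN note text;          -- structural change
CREATE INDEX accounts_note_idx ON accounts (note);  -- more DDL in the same txn
UPDATE accounts SET note = '';                      -- data migration too
COMMIT;                                             -- all or nothing
```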


Re: [HACKERS] Updateable cursors

2007-01-23 Thread Simon Riggs
On Tue, 2007-01-23 at 09:55 -0500, Tom Lane wrote:
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
> > On Wed, 2007-01-24 at 02:42 +1100, FAST PostgreSQL wrote:
> >> In the UPDATE or DELETE statements the 'WHERE CURRENT OF' 
> >> clause results in the cursor name being placed in the UpdateStmt or 
> >> DeleteStmt structure. During the processing of the functions - 
> >> transformDeleteStmt() and transformUpdateStmt() - the cursor name is used 
> >> to 
> >> obtain a pointer to the related Portal structure 
> 
> > To support prepared statements we'd need to do this name lookup just
> > once, so that the Update/Delete stmt can record which Portal to look at
> > for the current tuple.
> 
> This really isn't gonna work, because it assumes that the tuple that is
> "current" at the instant of parsing is still going to be "current" at
> execution time.

Of course that's true, but you've misread my comment.

The portal with the cursor in will not change, no matter how many times
we execute WHERE CURRENT OF in another portal. The OP suggested putting
the current tuple pointer onto the portal data, so this will work.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com





Re: [HACKERS] regular expressions stranges

2007-01-23 Thread Tom Lane
Teodor Sigaev <[EMAIL PROTECTED]> writes:
>> The regex code is working with pg_wchar strings, which aren't
>> necessarily the same representation that the OS' wide-char functions
>> expect.  If we could guarantee compatibility then the above plan
>> would make sense ...

> it seems to me that this is possible for UTF8 encoding.

Why?  The one thing that a wchar certainly is not is UTF8.
It might be that the functions are expecting UTF16 or UTF32,
but we don't know which, and really we can hardly even be sure they're
expecting Unicode at all.

regards, tom lane



Re: [HACKERS] regular expressions stranges

2007-01-23 Thread Teodor Sigaev

The regex code is working with pg_wchar strings, which aren't
necessarily the same representation that the OS' wide-char functions
expect.  If we could guarantee compatibility then the above plan
would make sense ...


It seems to me that this is possible for UTF8 encoding, so the isalpha()
function may be defined as:


static int
pg_wc_isalpha(pg_wchar c)
{
    if (c >= 0 && c <= UCHAR_MAX)
        return isalpha((unsigned char) c);
#ifdef HAVE_WCSTOMBS
    else if (GetDatabaseEncoding() == PG_UTF8)
        return iswalpha((wint_t) c);
#endif
    return 0;
}



--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/



Re: [HACKERS] STOP all user access except for admin for a few minutes?

2007-01-23 Thread org

Thx Russel,
I want to control it from software, changing network access via pg_hba with 
software doesn't feel right.


 possible case
Say I have a Group called Normal_Rights and one called Zero_Rights.

So dB runs as... Normal_Rights(User A, User B, User C, User D)
Then via sql, superuser REVOKEs those user rights and GRANTs them
Zero_Rights(User A, User B, User C, User D)... i.e. make users members of the 
ZERO rights group.


Then hopefully Postgres kicks them out gracefully?

Then software makes changes and switches them back to their Normal_Rights 
group.


 or  more general case
RECORD all the SQL for all user rights...
REVOKE everything except needed software superusers (postgres, and program 
superuser).

make changes via software.
PLAY BACK all the rights SQL script.

What do you think: will PG kill connections, let them go gracefully, or stop
after the current transaction?


 maybe I'm in the wrong tree
Is it possible to make quick structural changes to postgres, with user 
activity?
Maybe start a transaction that changes structure... wonder if that will stop 
or hold user activity???


Thx
.


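
The group-swap idea above might be written as below (role names hypothetical).
One caveat relevant to the question: revoking membership does not terminate
already-open sessions; it only restricts what those sessions may do from then
on, so long-running transactions still need to be handled separately:

```sql
-- suspend: move the users out of the working group
REVOKE normal_rights FROM user_a, user_b, user_c, user_d;
GRANT zero_rights TO user_a, user_b, user_c, user_d;

-- ... superuser performs the maintenance here ...

-- restore the original memberships
REVOKE zero_rights FROM user_a, user_b, user_c, user_d;
GRANT normal_rights TO user_a, user_b, user_c, user_d;
```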


[HACKERS] tripping an assert in 8.1.6

2007-01-23 Thread Brian Hurt
Hello all.  It seems I'm tripping an assert in 8.1.6: the assert on line 
219 of  src/backend/executor/execScan.c (found by running gdb on a core 
dump).  This is on x86 and Redhat Linux (forget which version).  Note 
that if I recompile 8.1.6 with asserts turned off the query completes 
just fine.  I'm trying to put together an example which reproduces the 
problem without requiring half our company's data- that should follow soon.


The gdb backtrace is:


#0  0xe410 in __kernel_vsyscall ()
(gdb) bt
#0  0xe410 in __kernel_vsyscall ()
#1  0xb7d2dee9 in raise () from /lib/libc.so.6
#2  0xb7d2f4f1 in abort () from /lib/libc.so.6
#3  0x0824f931 in ExceptionalCondition (conditionName=Variable 
"conditionName" is not available.

) at assert.c:51
#4  0x081537ac in ExecAssignScanProjectionInfo (node=0x8426bec)
at execScan.c:219
#5  0x08161339 in ExecInitSubqueryScan (node=0x8412de4, estate=0x8426ad4)
at nodeSubqueryscan.c:212
#6  0x0814e0e4 in ExecInitNode (node=0x8412de4, estate=0x8426ad4)
at execProcnode.c:179
#7  0x0814c554 in ExecutorStart (queryDesc=0x842554c, explainOnly=1 
'\001')

at execMain.c:618
#8  0x081193f5 in ExplainOnePlan (queryDesc=0x842554c, stmt=0x839afe4,
tstate=0x83cbdac) at explain.c:243
#9  0x081198ac in ExplainOneQuery (query=0x83b88e4, stmt=0x839afe4,
tstate=0x83cbdac) at explain.c:214
#10 0x08119a92 in ExplainQuery (stmt=0x839afe4, dest=0x83b8a54)
at explain.c:121
#11 0x081da391 in PortalRunUtility (portal=0x83b67b4, query=0x839b07c,
dest=0x83b8a54, completionTag=0x0) at pquery.c:987
#12 0x081db6dc in PortalRun (portal=0x83b67b4, count=2147483647,
dest=0x839b030, altdest=0x839b030, completionTag=0xbf9efee8 "")
at pquery.c:637
#13 0x081d713c in exec_simple_query (
query_string=0x839a26c "explain SELECT action, bloomberg_code, 
composite_bloomberg_code, reuters_code, cusip_code, sedol_code, 
isin_code FROM vw_ca_generic_actions WHERE (action_date >= 
'20070122'::date) AND (action_date <= "...)

at postgres.c:1004
#14 0x081d8bd3 in PostgresMain (argc=4, argv=0x83593f0,
username=0x83593b8 "postgres") at postgres.c:3232
#15 0x081aca37 in ServerLoop () at postmaster.c:2865
#16 0x081ad936 in PostmasterMain (argc=3, argv=0x8358560) at 
postmaster.c:941
#17 0x0816c1c9 in main (argc=3, argv=Cannot access memory at address 
0x1515

) at main.c:265



This is mainly a "heads up- bug incomming" message.  Thanks.

Brian




Re: [HACKERS] Piggybacking vacuum I/O

2007-01-23 Thread Tom Lane
"Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> Would it help to set the status of the XMIN/XMAX of tuples early enough such
> that the heap page is still in the buffer cache, but late enough such that
> the XMIN/XMAX transactions are finished ? How about doing it when the
> bgwriter is about to write the page to disk ?

No.  The bgwriter would then become subject to deadlocks because it
would be needing to read in clog pages before it could flush out
dirty pages.  In any case, if the table is in active use then some
passing backend has probably updated the bits already ...

regards, tom lane



Re: [HACKERS] regular expressions stranges

2007-01-23 Thread Tom Lane
Teodor Sigaev <[EMAIL PROTECTED]> writes:
> As I can see, that is because of using isalpha (and other is*), tolower & 
> toupper instead of isw* and tow* functions. Is any reason to use them? If 
> not, I 
> can modify regc_locale.c similarly to tsearch2 locale part.

The regex code is working with pg_wchar strings, which aren't
necessarily the same representation that the OS' wide-char functions
expect.  If we could guarantee compatibility then the above plan
would make sense ...

regards, tom lane



Re: [HACKERS] Updateable cursors

2007-01-23 Thread Tom Lane
"Simon Riggs" <[EMAIL PROTECTED]> writes:
> On Wed, 2007-01-24 at 02:42 +1100, FAST PostgreSQL wrote:
>> In the UPDATE or DELETE statements the 'WHERE CURRENT OF' 
>> clause results in the cursor name being placed in the UpdateStmt or 
>> DeleteStmt structure. During the processing of the functions - 
>> transformDeleteStmt() and transformUpdateStmt() - the cursor name is used to 
>> obtain a pointer to the related Portal structure 

> To support prepared statements we'd need to do this name lookup just
> once, so that the Update/Delete stmt can record which Portal to look at
> for the current tuple.

This really isn't gonna work, because it assumes that the tuple that is
"current" at the instant of parsing is still going to be "current" at
execution time.

regards, tom lane

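
For readers following this thread: the positioned-update syntax under
discussion is the SQL-standard form sketched below (not yet implemented in
PostgreSQL at the time of this exchange; the table and cursor names are made
up). The disagreement above is about when the cursor name is resolved to a
row, at parse/plan time or at execution time:

```sql
BEGIN;
DECLARE c CURSOR FOR
    SELECT id, balance FROM accounts FOR UPDATE;
FETCH NEXT FROM c;                     -- the fetched row becomes "current"
UPDATE accounts SET balance = balance + 10
    WHERE CURRENT OF c;                -- positioned update through the cursor
CLOSE c;
COMMIT;
```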


Re: [HACKERS] msvc failure in largeobject regression test

2007-01-23 Thread Magnus Hagander
On Tue, Jan 23, 2007 at 09:31:40AM -0500, Andrew Dunstan wrote:
> Magnus Hagander wrote:
> >Hi!
> >
> >I get failures for the largeobject regression tests on my vc++ build. I
> >don't think this has ever worked, given that those tests are fairly new.
> >Any quick ideas on what's wrong before I dig deeper?
> >
> >  
> [snip]
> 
> I wonder if this is a line-end issue? Assuming you are working from CVS, 
> does your client turn \n into \r\n ? I see that other windows boxes are 
> happily passing this test on the buildfarm, and of course the mingw cvs 
> doesn't adjust line endings.

Bingo!

That's it. I copied the file in binary mode from a Linux box and now it
passes.

//Magnus



Re: [HACKERS] Free space management within heap page

2007-01-23 Thread Tom Lane
"Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> I know it might break the ctid chain, but does that really matter ?

Yes.  You can't just decide that the tuple isn't needed anymore.
As per other followup, you could possibly shrink a known-dead tuple to
just the header.

The notion of keeping linked lists etc seems like gross overdesign to me.
Why not just compact out the free space?

regards, tom lane



Re: [HACKERS] msvc failure in largeobject regression test

2007-01-23 Thread Andrew Dunstan

Magnus Hagander wrote:

Hi!

I get failures for the largeobject regression tests on my vc++ build. I
don't think this has ever worked, given that those tests are fairly new.
Any quick ideas on what's wrong before I dig deeper?

  

[snip]

I wonder if this is a line-end issue? Assuming you are working from CVS, 
does your client turn \n into \r\n ? I see that other windows boxes are 
happily passing this test on the buildfarm, and of course the mingw cvs 
doesn't adjust line endings.


cheers

andrew



Re: [HACKERS] msvc failure in largeobject regression test

2007-01-23 Thread Stefan Kaltenbrunner

Magnus Hagander wrote:

Hi!

I get failures for the largeobject regression tests on my vc++ build. I
don't think this has ever worked, given that those tests are fairly new.
Any quick ideas on what's wrong before I dig deeper?


FWIW: emu managed to trigger a largeobject-related failure too (though 
it looks different from yours):


http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=emu&dt=2007-01-22%2023:35:03

Stefan



[HACKERS] msvc failure in largeobject regression test

2007-01-23 Thread Magnus Hagander
Hi!

I get failures for the largeobject regression tests on my vc++ build. I
don't think this has ever worked, given that those tests are fairly new.
Any quick ideas on what's wrong before I dig deeper?

//Magnus

*** ./expected/largeobject.out  Tue Jan 23 14:55:25 2007
--- ./results/largeobject.out   Tue Jan 23 14:56:17 2007
***
*** 140,147 
  -- large object
  SELECT loread(fd, 36) FROM lotest_stash_values;
   loread  
! -
!  AAA\011FB\011xx\0122513\01132\0111\0111\0113\01113\0111
  (1 row)
  
  SELECT lo_tell(fd) FROM lotest_stash_values;
--- 140,147 
  -- large object
  SELECT loread(fd, 36) FROM lotest_stash_values;
  loread
! --
!  44\011144\044\0114144\0119144\01188\01189\011SN\011F
  (1 row)
  
  SELECT lo_tell(fd) FROM lotest_stash_values;
***
*** 170,177 
  
  SELECT loread(fd, 36) FROM lotest_stash_values;
 loread
! -
!  AAA\011FBabcdefghijklmnop1\0111\0113\01113\0111
  (1 row)
  
  SELECT lo_close(fd) FROM lotest_stash_values;
--- 170,177 
  
  SELECT loread(fd, 36) FROM lotest_stash_values;
loread  
! --
!  44\011144\04abcdefghijklmnop9\011SN\011F
  (1 row)
  
  SELECT lo_close(fd) FROM lotest_stash_values;



Re: [HACKERS] 10 weeks to feature freeze (Pending Work)

2007-01-23 Thread Teodor Sigaev
I would like to suggest patches for OR-clause optimization and for using an
index when searching for NULLs.



--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/



Re: [HACKERS] Piggybacking vacuum I/O

2007-01-23 Thread Heikki Linnakangas

Pavan Deolasee wrote:

Another source of I/O is perhaps the CLOG read/writes for checking
transaction status. If we are talking about large tables like accounts in
pgbench or customer/stock in DBT2, the tables are vacuumed much later than
the actual UPDATEs. I don't have any numbers to prove yet, but my sense is
that CLOG pages holding the status of many of the transactions might already
have been flushed out of the cache and require an I/O. Since the default
number of CLOG SLRU buffers is 8, there could be severe CLOG SLRU thrashing
during VACUUM as the transaction ids will be all random in a heap page.


8 log pages hold 8*8192*4=262144 transactions. If the active set of 
transactions is larger than that, the OS cache will probably hold more 
clog pages. I guess you could end up doing some I/O on clog on a vacuum 
of a big table, if you have a high transaction rate and vacuum 
infrequently...
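That figure follows from the CLOG layout (two status bits per transaction, hence four transactions per byte); a quick sanity check:

```python
# Sanity check of the figure above, assuming the standard CLOG layout:
# 2 status bits per transaction, hence 4 transactions per byte.
BLCKSZ = 8192          # default page size in bytes
XACTS_PER_BYTE = 4
NUM_CLOG_BUFFERS = 8   # default SLRU buffer count under discussion

xacts_covered = NUM_CLOG_BUFFERS * BLCKSZ * XACTS_PER_BYTE
print(xacts_covered)   # 262144
```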


Would it help to set the status of the XMIN/XMAX of tuples early enough such
that the heap page is still in the buffer cache, but late enough such that
the XMIN/XMAX transactions are finished? How about doing it when the
bgwriter is about to write the page to disk? Assuming a few seconds of life
of a heap page in the buffer cache, hopefully most of the XMIN/XMAX
transactions will have completed and the bgwriter can set XMIN(XMAX)_COMMITTED
or XMIN(XMAX)_INVALID for most of the tuples in the page. This would save us
CLOG I/Os later, either during subsequent access to the tuple and/or
vacuum.


Yeah, we could do that. First I'd like to see some more evidence that 
clog trashing is a problem, though.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[HACKERS] About PostgreSQL certification

2007-01-23 Thread Iannsp

Hello,
I would like to know what you think about the PostgreSQL certifications
provided by:


PostgreSQL CE
http://www.sraoss.co.jp/postgresql-ce/news_en.html

CertFirst
http://www.certfirst.com/postgreSql.htm

My question is about the value of these certifications for clients. Does
being certified make a difference?

Thanks in advance.

Ivo Nascimento.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Free space management within heap page

2007-01-23 Thread Pavan Deolasee

On 1/23/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:


ITAGAKI Takahiro wrote:
> BLCKSZ is typically 8192 bytes and sizeof(ItemPointerData) is 4 bytes.
> 1/4 comes from 8192 / 4 = 2048. If we allow zero-size tuples, the line
> pointers area can bloat up to that ratio. We have no tuples smaller than
> 32 bytes, so the area is restricted to 256 bytes now.

sizeof(ItemPointerData) == 6 bytes



I guess ITAGAKI meant sizeof(ItemIdData), which is 4 bytes. That's the data
type for the line pointer.

Thanks,
Pavan

EnterpriseDB http://www.enterprisedb.com


Re: [HACKERS] Piggybacking vacuum I/O

2007-01-23 Thread Pavan Deolasee

On 1/22/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:


I've been looking at the way we do vacuums.

The fundamental performance issue is that a vacuum generates
nheapblocks+nindexblocks+ndirtyblocks I/Os. Vacuum cost delay helps to
spread the cost like part payment, but the total is the same. In an I/O
bound system, the extra I/O directly leads to less throughput.



Another source of I/O is perhaps the CLOG reads/writes for checking
transaction status. If we are talking about large tables like accounts in
pgbench or customer/stock in DBT2, the tables are vacuumed much later than
the actual UPDATEs. I don't have any numbers to prove it yet, but my sense is
that the CLOG pages holding the status of many of the transactions might have
already been flushed out of the cache and require an I/O. Since the default
number of CLOG SLRU buffers is 8, there could be severe CLOG SLRU thrashing
during VACUUM, as the transaction ids in a heap page will be essentially random.

Would it help to set the status of the XMIN/XMAX of tuples early enough such
that the heap page is still in the buffer cache, but late enough such that
the XMIN/XMAX transactions are finished? How about doing it when the
bgwriter is about to write the page to disk? Assuming a few seconds of life
of a heap page in the buffer cache, hopefully most of the XMIN/XMAX
transactions should have completed and the bgwriter can set XMIN(XMAX)_COMMITTED
or XMIN(XMAX)_INVALID for most of the tuples in the page. This would save us
CLOG I/Os later, either during subsequent access to the tuple and/or
vacuum.
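As a rough illustration of the idea (hypothetical names and data structures, not PostgreSQL code): before writing a page out, the bgwriter would resolve each tuple's unhinted xmin against CLOG once and record the outcome, so later readers and VACUUM need no CLOG lookup for those tuples.

```python
# Illustrative sketch only. clog maps xid -> 'committed' | 'aborted';
# in-progress transactions are simply absent from the map.
XMIN_COMMITTED, XMIN_INVALID = "XMIN_COMMITTED", "XMIN_INVALID"

def set_hint_bits_before_write(page, clog):
    """Resolve commit status of each tuple's xmin while the page is
    still in the buffer cache, just before it goes to disk."""
    for tup in page:
        if tup.get("hint") is None and tup["xmin"] in clog:
            tup["hint"] = (XMIN_COMMITTED if clog[tup["xmin"]] == "committed"
                           else XMIN_INVALID)
    return page

page = [{"xmin": 100, "hint": None}, {"xmin": 101, "hint": None}]
clog = {100: "committed"}          # xid 101 still in progress
set_hint_bits_before_write(page, clog)
print([t["hint"] for t in page])   # ['XMIN_COMMITTED', None]
```

Tuples whose transactions are still in progress at write time (xid 101 here) simply keep their unresolved status, as in the proposal.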

Any thoughts ?

Thanks,
Pavan

EnterpriseDB http://www.enterprisedb.com


Re: [HACKERS] Updateable cursors

2007-01-23 Thread Lukas Kahwe Smith

Joshua D. Drake wrote:

Lukas Kahwe Smith wrote:

Joshua D. Drake wrote:


Great! I will put it on my "Remember to bug Arul" list :)

Hey Joshua,

could you put this stuff here:
http://developer.postgresql.org/index.php/Todo:WishlistFor83


Sure if you bother to unlock the page for me ;)


Hmm... I am not aware of the page having a lock. I don't know MediaWiki all
that well, but clicking around I could not find anything. IIRC someone else
also had issues editing pages on the wiki.


regards,
Lukas

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] Free space management within heap page

2007-01-23 Thread Heikki Linnakangas

ITAGAKI Takahiro wrote:

BLCKSZ is typically 8192 bytes and sizeof(ItemPointerData) is 4 bytes.
1/4 comes from 8192 / 4 = 2048. If we allow zero-size tuples, the line
pointers area can bloat up to that ratio. We have no tuples smaller than
32 bytes, so the area is restricted to 256 bytes now.


sizeof(ItemPointerData) == 6 bytes


We can recycle unused line pointers, but we cannot shrink the area unless
the line pointers at the tail end are removed; i.e., unusable free space will
remain in the middle of the LP area.


Yeah, agreed. It'd still be a good idea to do it when possible.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[HACKERS] regular expressions stranges

2007-01-23 Thread Teodor Sigaev
Regexp behaves differently with non-ASCII characters depending on the server
encoding (bug.sql contains non-ASCII chars):


% initdb -E KOI8-R --locale ru_RU.KOI8-R
% psql postgres < bug.sql
 true
--
 t
(1 row)

 true | true
--+--
 t| t
(1 row)
% initdb -E UTF8 --locale ru_RU.UTF-8
% psql postgres < bug.sql
 true
--
 f
(1 row)

 true | true
--+--
 f| t
(1 row)

As far as I can see, that is because the code uses isalpha() (and the other
is* functions), tolower() and toupper() instead of the isw* and tow* functions.
Is there any reason to use them? If not, I can modify regc_locale.c similarly
to the tsearch2 locale code.
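A rough analogy of the isalpha()-vs-iswalpha() distinction, sketched in Python rather than the C locale API: classifying the raw bytes of a multibyte character one at a time fails, while a character-aware test succeeds. This mirrors why byte-oriented is* functions misbehave under a multibyte encoding like UTF-8.

```python
# Analogy only: byte-at-a-time classification (like isalpha() applied to a
# multibyte encoding) vs character-aware classification (like iswalpha()).
ch = "д"                          # Cyrillic letter; 2 bytes in UTF-8
raw = ch.encode("utf-8")          # b'\xd0\xb4'

bytewise_alpha = all(bytes([b]).isalpha() for b in raw)  # per-byte check
charwise_alpha = ch.isalpha()                            # wide-char check

print(bytewise_alpha, charwise_alpha)   # False True
```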




--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/
set client_encoding='KOI8';

SELECT 'д' ~* '[[:alpha:]]' as "true";
SELECT
'Дорога' ~* 'дорога' as "true",
'дорога' ~* 'дорога' as "true";

---(end of broadcast)---
TIP 6: explain analyze is your friend


[HACKERS] 10 weeks to feature freeze (Pending Work)

2007-01-23 Thread Pavel Stehule

Hello


Pavel Stehule: PLpsm


I expect that plpgpsm will remain an external project for some time (+/- one
year). For 8.3 I would like to submit two patches: scrollable cursors and
trappable warnings (maybe not the latter). I have a patch adding scrollable
cursors to plpgsql too. Nobody here has experience with SQL/PSM, and plpgpsm
can be a good way to learn it. I am sure that when 8.3 becomes downloadable,
plpgpsm will be downloadable too.


My ToDo:
* statement RESIGNAL and enhanced diagnostic statement
* more documentation in english
* lot of work on clean and refactoring code

Currently I am working on plpgpsm alone and cannot test it well.

Regards
Pavel Stehule

_
Invite your Hotmail contacts to join your friends list with Windows Live 
Spaces 
http://clk.atdmt.com/MSN/go/msnnkwsp007001msn/direct/01/?href=http://spaces.live.com/spacesapi.aspx?wx_action=create&wx_url=/friends.aspx&mkt=en-us



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Updateable cursors

2007-01-23 Thread Simon Riggs
On Wed, 2007-01-24 at 02:42 +1100, FAST PostgreSQL wrote:

> In the UPDATE or DELETE statements the ‘WHERE CURRENT OF ’ 
> clause results in the cursor name being placed in the UpdateStmt or 
> DeleteStmt structure. During the processing of the functions - 
> transformDeleteStmt() and transformUpdateStmt() - the cursor name is used to 
> obtain a pointer to the related Portal structure 

To support prepared statements we'd need to do this name lookup just
once, so that the Update/Delete stmt can record which Portal to look at
for the current tuple.

> and the tuple affected by 
> the current UPDATE or DELETE statement is extracted from the Portal, where it 
> has been placed as the result of a previous FETCH request. At this point all 
> the information for the UPDATE or DELETE statement is available so the 
> statements can be transformed into standard UPDATE or DELETE statements and 
> sent for re-write/planning/execution as usual.


> 2.5 Changes to the Executor
> ---
> There are various options that have been considered for this part of the 
> enhancement. These are described in the sections below.


> Option 1  MVCC Via Continuous Searching of Database
> 
> The Executor is to be changed in the following ways:
> 1)When the FETCH statement is executed the id of the resulting tuple is 
> extracted and passed back to the Portal structure to be saved to indicate the 
> cursor is currently positioned on a tuple.
> 2)When the UPDATE or DELETE request is executed the tuple id previously 
> FETCHed is held in the QueryDesc structure so that it can be compared with 
> the tuple ids returned from the TidScan node processed prior to the actual 
> UPDATE / DELETE node in the plan. This enables a decision to be made as to 
> whether the tuple held in the cursor is visible to the UPDATE / DELETE 
> request according to the rules of concurrency. The result is that, at the 
> cost of repeatedly searching the database at each UPDATE / DELETE command, 
> the hash table is no longer required.
> This approach has the advantage that there is no hash table held in memory or 
> on disk so it will not be memory intensive but will be processing intensive. 

Do you have a specific example that would cause problems? It's much
easier to give examples that might cause problems and discuss those.

AFAICS in the straightforward case the Fetch will only return rows it
can see, so update/delete should have no problems, iff the update/delete
is using the same or a later snapshot as the cursor. I can see potential
problems with scrollable cursors.

So I'm not sure why there's a big need for any of the 5 options, yet.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] 10 weeks to feature freeze (Pending Work)

2007-01-23 Thread Magnus Hagander
On Mon, Jan 22, 2007 at 09:14:01PM -0500, Stephen Frost wrote:
> * Joshua D. Drake ([EMAIL PROTECTED]) wrote:
> > Thought I would do a poll of what is happening in the world for 8.3. I have:
> 
> Another thing which was mentioned previously which I'd really like to
> see happen (and was discussed on the list...) is replacing the Kerberos
> support with GSSAPI support and adding support for SSPI.  Don't recall
> who had said they were looking into working on it though..

That's Henry B. Hotz. He's done some work on it, and I have some stuff
to comment on sitting in my mailbox that I haven't had time to look at
yet. But I'm going to try to do that soon so he can continue.

//Magnus

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] 10 weeks to feature freeze (Pending Work)

2007-01-23 Thread Pavan Deolasee

On 1/23/07, Joshua D. Drake <[EMAIL PROTECTED]> wrote:


Or so... :)

I am sure there are more, the ones with question marks are unknowns but
heard of in the ether somewhere. Any additions or confirmations?



I have the first phase of the Frequent Update Optimizations (HOT) patch ready,
but I held it back because of concerns that it is too complex. It has
shown decent performance gains on pgbench and DBT2 tests though.

I am splitting the patch into smaller pieces for ease of review and would
submit those soon for comments.

Thanks,
Pavan

EnterpriseDB http://www.enterprisedb.com


Re: [HACKERS] Planning aggregates which require sorted or distinct

2007-01-23 Thread Simon Riggs
On Sat, 2007-01-20 at 14:20 +, Simon Riggs wrote:
> On Sat, 2007-01-20 at 23:54 +1100, Gavin Sherry wrote:

> > Windows are slightly more complex though. As you
> > probably know, there are two ways of specifying the window frame: by an
> > absolute number of rows (ROWS N PRECEDING, for example); or, by a 'range'
> > (RANGE N PRECEDING), where the range, in the case of 'preceding', is
> > determined by subtracting the range parameter from the value of the current
> > field -- i.e., the window attribute.
> 
> Sure. The MJ situation is to have the MergeJoin node call a Materialize
> node which calls the tuplestore functions. The MJ node does all the
> clever stuff, which includes a variable size buffer according to the
> values arriving. Seems OK to have a Window node that does its own brand
> of clever stuff, while the Materialize node just does whats its told in
> managing the tuplestore.
> 
> Rows type calls can do a read next/mark at same time, so they always
> maintain a fixed lead/lag as they go.
> 
> Range type calls can do a read, then some processing to determine if it
> should mark yet, just as MJ does. Or at least we can support an
> additional call to make RangeWindowStart hunt for the appropriate row to
> set as the end of the window and then read forward until the end of the
> range is found.

Gavin,

Following earlier thoughts, I thought I'd make some notes about how the
new tuplestore changes could be used for preparing Windows for use with
Windowed aggregate functionality.

The new tuplestore commands are
tuplestore_moveBufferStartForwards()  == mark
tuplestore_rewindToBufferStart()  == restore

(happy to change the names...)

They'd be used something like this, in simplified pseudo code that would
be executed by a Window? node.

With ROWS, for each new tuple:
-
tuplestore_puttupleslot() // add one new tuple
tuplestore_moveBufferStartForwards(++markpos) // move start forwards one

// position Window for processing
tuplestore_rewindToBufferStart()

With RANGE, for each new row:

// locate new range start
tuplestore_rewindToBufferStart()
...step forward until start of new range found...
tuplestore_moveBufferStartForwards() at that point

// locate new range end
while (!end-of-new-range)
{
if (!eof)
tuplestore_gettuple() // move forwards
else
tuplestore_puttupleslot() // add new tuple
}

// position Window for processing
tuplestore_rewindToBufferStart()

(The above is simplified to remove boundary conditions.)
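A toy model of the ROWS case, with a deque standing in for the circular-buffer tuplestore (illustrative only, not the proposed API): each new tuple is appended, the buffer start moves forward automatically once the frame is full, and the window is then rescanned from its start.

```python
# Toy model of "ROWS <n> PRECEDING": a fixed-size window kept in a small
# buffer whose start advances as each new tuple arrives (analogous to
# tuplestore_moveBufferStartForwards / tuplestore_rewindToBufferStart).
from collections import deque

def rows_window_sums(tuples, preceding):
    """Running SUM over a frame of the current row plus `preceding` rows."""
    buf = deque(maxlen=preceding + 1)   # circular buffer: the window frame
    out = []
    for t in tuples:
        buf.append(t)                   # puttupleslot; start moves if full
        out.append(sum(buf))            # rewind to buffer start and scan
    return out

print(rows_window_sums([1, 2, 3, 4, 5], preceding=2))  # [1, 3, 6, 9, 12]
```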

So AFAICS, there's not actually any additional work to do, over and
above the changes I'm working on to support a circular buffer in
tuplestore for merge joins.

The above makes the assumption that for RANGE windows, the range start
of tuple i+1 is always greater than or equal to the range start of tuple
i - it could be very interesting if that's not true.

Anyway, some options for you to consider, at least.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Free space management within heap page

2007-01-23 Thread Heikki Linnakangas

Pavan Deolasee wrote:

I thought that we cannot reclaim the line pointers unless we remove the
corresponding index entries as well. Isn't that the case? If so, how would
we reclaim the line pointers after the last used one?


There might be index pointers to dead line pointers in the proposed 
truncation scheme, so those can't be reclaimed, but after the index 
pointers are removed and the line pointers are unused like they are 
today, they could be reclaimed.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] Free space management within heap page

2007-01-23 Thread ITAGAKI Takahiro

Heikki Linnakangas <[EMAIL PROTECTED]> wrote:

> > * Consider shrinking expired tuples to just their headers.
> 
> Yeah, same idea. You suggested in that thread that we should keep the 
> headers because of line pointer bloat, but I don't see how that's 
> better. You're still going to get some line pointer bloat, but not able 
> to reclaim as much free space.

That is not a complete solution by itself, as you are aware. I think it will be
better to combine tuple shrinking with other restrictions on the LP area.

> > Keeping only line pointers itself is not a problem, but it might lead to
> > bloating of line pointers. If a particular tuple in a page is replaced
> > repeatedly, the line pointers area bloats up to 1/4 of the page.
> 
> Where does the 1/4 figure come from?

BLCKSZ is typically 8192 bytes and sizeof(ItemPointerData) is 4 bytes.
1/4 comes from 8192 / 4 = 2048. If we allow zero-size tuples, the line
pointers area can bloat up to that ratio. We have no tuples smaller than
32 bytes, so the area is restricted to 256 bytes now.


> The problem is that if a tuple is updated say hundreds of times before 
> vacuum, but then it's not updated anymore, you'll have a page full of 
> useless line pointers that are not reclaimed. Clearly we should start 
> reclaiming line pointers, but we can only do that for unused line 
> pointers after the last used one.

We can recycle unused line pointers, but we cannot shrink the area unless
the line pointers at the tail end are removed; i.e., unusable free space will
remain in the middle of the LP area.

[used lp][***unusable free space***][used lp] [free space] [heap tuples]

> Would it be enough to cap the number of dead line pointers with a simple
> rule like "max 20% of line pointers can be dead"? I'd be happy with that.

Yeah, I think that is enough, too. It might serve as a signal to vacuum.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center



---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [HACKERS] Free space management within heap page

2007-01-23 Thread Pavan Deolasee

On 1/23/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:


Pavan Deolasee wrote:

> So during a sequential or index scan, if a tuple is found to be dead, the
> corresponding line pointer is marked "unused" and the space is returned to a
> free list. This free list is maintained within the page. A linked list can
> be used for this purpose, and the special area of the heap page can be used
> to track the fragment list. We can maintain some additional information
> about the fragmented space, such as total_free_space, max_fragment_size,
> num_of_fragments etc., in the special area.

Maintaining a list like that seems like a lot of hassle to me. Instead,
you could just scan the line pointers looking for a dead tuple of the
right size. We already have to scan the line pointers when inserting to
find a free line pointer.



That's a good suggestion. Just to avoid useless scans when there is no
fragment that can accommodate the new tuple, we may keep some bookkeeping
information in the special area though.

Thanks,
Pavan

EnterpriseDB http://www.enterprisedb.com


Re: [HACKERS] Free space management within heap page

2007-01-23 Thread Pavan Deolasee

On 1/23/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:


ITAGAKI Takahiro wrote:

> Keeping only line pointers itself is not a problem, but it might lead to
> bloating of line pointers. If a particular tuple in a page is replaced
> repeatedly, the line pointers area bloats up to 1/4 of the page.

Where does the 1/4 figure come from?

> We need to work around the problem.

If a row is updated many times before vacuum comes along, what currently
happens is that we end up with a bunch of pages full of dead tuples.
With the truncation scheme, we could fit far more dead tuples on each
page, reducing the need to vacuum. If a row is for example 40 bytes
long, including the header (a quite narrow one), you could fit 10 line
pointers in the space of one row, which means that you could ideally
multiply your vacuum interval by a factor of 10. That's a huge benefit,
though indexes would still bloat unless selects marking index pointers
as dead keep the bloat in control.

The problem is that if a tuple is updated say hundreds of times before
vacuum, but then it's not updated anymore, you'll have a page full of
useless line pointers that are not reclaimed. Clearly we should start
reclaiming line pointers, but we can only do that for unused line
pointers after the last used one.



I thought that we cannot reclaim the line pointers unless we remove the
corresponding index entries as well. Isn't that the case? If so, how would
we reclaim the line pointers after the last used one?

Thanks,
Pavan

EnterpriseDB http://www.enterprisedb.com


Re: [HACKERS] 10 weeks to feature freeze (Pending Work)

2007-01-23 Thread Heikki Linnakangas

Joshua D. Drake wrote:

Or so... :)

Thought I would do a poll of what is happening in the world for 8.3. I have:

Alvaro Herrera: Autovacuum improvements (maintenance window etc..)
Gavin Sherry: Bitmap Indexes (on disk), possible basic Window functions


Gavin: how's it going with the bitmap indexes? I could work on it as 
well, but I don't want to step on your toes.



Heikki Linnakangas: Working on Vacuum for Bitmap Indexes?


Yeah, that's the plan.

Also:
* Grouped Index Tuples (http://community.enterprisedb.com/git/). I don't
know how to proceed with this, but it's a feature I'd like to get into
8.3. Suggestions, anyone? I haven't received many comments on the design
or code...


* vacuum enhancements, not sure what exactly..

* Plan invalidation, possibly. Tom had plans on this as well.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] Free space management within heap page

2007-01-23 Thread Heikki Linnakangas

ITAGAKI Takahiro wrote:

"Pavan Deolasee" <[EMAIL PROTECTED]> wrote:


The overwhelming majority of tuples are going to be in one or more
indexes. Which means nearly all tuples are going to fall into this
category. So where's the benefit?

The line pointers cannot be reused, but the space consumed by the tuple can be.
So the benefit is in utilizing that space for newer tuples and thus reducing
the bloat.


I think your idea is the same as the following TODO item, which I suggested before.

* Consider shrinking expired tuples to just their headers.
http://archives.postgresql.org/pgsql-patches/2006-03/msg00142.php
http://archives.postgresql.org/pgsql-patches/2006-03/msg00166.php


Yeah, same idea. You suggested in that thread that we should keep the 
headers because of line pointer bloat, but I don't see how that's 
better. You're still going to get some line pointer bloat, but not able 
to reclaim as much free space.


In that thread, Tom mentioned that we may need to keep the header 
because the dead tuple might be part of an update chain. Reading back 
the discussion on the vacuum bug, I can't see how removing the header 
would be a problem, but maybe I'm missing something.



One assumption I am making here is that it's sufficient to mark the line pointer
"unused" (reset the LP_USED flag) even though there is an index entry pointing to
the tuple. During an index scan, we anyway check ItemIdIsUsed() before
proceeding further. I know it might break the ctid chain, but does that really
matter? I don't see any reason why somebody would need to follow a ctid chain
past a dead tuple.


Keeping only line pointers itself is not a problem, but it might lead to
bloating of line pointers. If a particular tuple in a page is replaced
repeatedly, the line pointers area bloats up to 1/4 of the page.


Where does the 1/4 figure come from?


We need to work around the problem.


If a row is updated many times before vacuum comes along, what currently
happens is that we end up with a bunch of pages full of dead tuples.
With the truncation scheme, we could fit far more dead tuples on each
page, reducing the need to vacuum. If a row is for example 40 bytes
long, including the header (a quite narrow one), you could fit 10 line
pointers in the space of one row, which means that you could ideally
multiply your vacuum interval by a factor of 10. That's a huge benefit,
though indexes would still bloat unless selects marking index pointers
as dead keep the bloat in control.
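Spelling out the arithmetic behind that factor of 10 (assuming the 4-byte line pointer, i.e. sizeof(ItemIdData), and the 40-byte example row from above):

```python
# Heikki's arithmetic, spelled out: if a dead tuple is truncated to just
# its line pointer, each 40-byte row's worth of space can hold ten dead
# "stubs" instead of one full dead tuple.
ROW_BYTES = 40        # example row size from the discussion, header included
LP_BYTES = 4          # sizeof(ItemIdData)

stubs_per_row = ROW_BYTES // LP_BYTES
print(stubs_per_row)  # 10: roughly a 10x longer interval between vacuums
```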


The problem is that if a tuple is updated say hundreds of times before 
vacuum, but then it's not updated anymore, you'll have a page full of 
useless line pointers that are not reclaimed. Clearly we should start 
reclaiming line pointers, but we can only do that for unused line 
pointers after the last used one.


Would it be enough to cap the number of dead line pointers with a simple
rule like "max 20% of line pointers can be dead"? I'd be happy with that.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Free space management within heap page

2007-01-23 Thread Heikki Linnakangas

Pavan Deolasee wrote:

I am thinking that maintaining fragmented free space within a heap page
might be a good idea. It would help us to reuse the free space ASAP without
waiting for a vacuum run on the page. This in turn will lead to less heap
bloat and also increase the probability of placing an updated tuple in the
same heap page as the original one.


Agreed.


So during a sequential or index scan, if a tuple is found to be dead, the
corresponding line pointer is marked "unused" and the space is returned to a
free list. This free list is maintained within the page. A linked list can
be used for this purpose, and the special area of the heap page can be used
to track the fragment list. We can maintain some additional information
about the fragmented space, such as total_free_space, max_fragment_size,
num_of_fragments etc., in the special area.


Maintaining a list like that seems like a lot of hassle to me. Instead, 
you could just scan the line pointers looking for a dead tuple of the 
right size. We already have to scan the line pointers when inserting to 
find a free line pointer.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] Free space management within heap page

2007-01-23 Thread Heikki Linnakangas

Pavan Deolasee wrote:

One assumption I am making here is that it's sufficient to mark the line
pointer "unused" (reset the LP_USED flag) even though there is an index entry
pointing to the tuple. During an index scan, we anyway check ItemIdIsUsed()
before proceeding further. I know it might break the ctid chain, but does that
really matter? I don't see any reason why somebody would need to follow a ctid
chain past a dead tuple.


You can't clear the LP_USED flag, but you could use the LP_DELETE flag 
that's currently not used in heap pages.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Free space management within heap page

2007-01-23 Thread ITAGAKI Takahiro

"Pavan Deolasee" <[EMAIL PROTECTED]> wrote:

> > The overwhelming majority of tuples are going to be in one or more
> > indexes. Which means nearly all tuples are going to fall into this
> > category. So where's the benefit?
> 
> The line pointers cannot be reused, but the space consumed by the tuple can be.
> So the benefit is in utilizing that space for newer tuples and thus reducing
> the bloat.

I think your idea is the same as the following TODO item, which I suggested before.

* Consider shrinking expired tuples to just their headers.
http://archives.postgresql.org/pgsql-patches/2006-03/msg00142.php
http://archives.postgresql.org/pgsql-patches/2006-03/msg00166.php


> One assumption I am making here is that it's sufficient to mark the line
> pointer "unused" (reset the LP_USED flag) even though there is an index entry
> pointing to the tuple. During an index scan, we anyway check ItemIdIsUsed()
> before proceeding further. I know it might break the ctid chain, but does
> that really matter? I don't see any reason why somebody would need to follow
> a ctid chain past a dead tuple.

Keeping only line pointers itself is not a problem, but it might lead to
bloating of line pointers. If a particular tuple in a page is replaced
repeatedly, the line pointers area bloats up to 1/4 of the page.
We need to work around that problem.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Free space management within heap page

2007-01-23 Thread Pavan Deolasee

On 1/23/07, Martijn van Oosterhout  wrote:


On Tue, Jan 23, 2007 at 01:48:08PM +0530, Pavan Deolasee wrote:

> We might not be able to reuse the line pointers because indexes may have
> references to it. All such line pointers will be freed when the page is
> vacuumed during the regular vacuum.

The overwhelming majority of tuples are going to be in one or more
indexes. Which means nearly all tuples are going to fall into this
category. So where's the benefit?



The line pointers cannot be reused, but the space consumed by the tuple can be.
So the benefit is in utilizing that space for newer tuples and thus reducing
the bloat.

One assumption I am making here is that it's sufficient to mark the line
pointer "unused" (reset the LP_USED flag) even though there is an index entry
pointing to the tuple. During an index scan, we anyway check ItemIdIsUsed()
before proceeding further. I know it might break the ctid chain, but does that
really matter? I don't see any reason why somebody would need to follow a ctid
chain past a dead tuple.

Thanks,
Pavan

EnterpriseDB http://www.enterprisedb.com


Re: [HACKERS] Free space management within heap page

2007-01-23 Thread Martijn van Oosterhout
On Tue, Jan 23, 2007 at 01:48:08PM +0530, Pavan Deolasee wrote:
> I am thinking that maintaining fragmented free space within a heap page
> might be a good idea. It would help us to reuse the free space ASAP without
> waiting for a vacuum run on the page. This in turn will lead to lesser heap
> bloats and also increase the probability of placing updated tuple in the
> same heap page as the original one.



Nice idea but:

> We might not be able to reuse the line pointers because indexes may have
> references to it. All such line pointers will be freed when the page is
> vacuumed during the regular vacuum.

The overwhelming majority of tuples are going to be in one or more
indexes. Which means nearly all tuples are going to fall into this
category. So where's the benefit?

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.


signature.asc
Description: Digital signature


[HACKERS] Free space management within heap page

2007-01-23 Thread Pavan Deolasee

I am thinking that maintaining fragmented free space within a heap page
might be a good idea. It would help us to reuse the free space ASAP without
waiting for a vacuum run on the page. This in turn will lead to less heap
bloat and also increase the probability of placing an updated tuple in the
same heap page as the original one.

So during a sequential or index scan, if a tuple is found to be dead, the
corresponding line pointer is marked "unused" and the space is returned to a
free list. This free list is maintained within the page. A linked list can
be used for this purpose, and the special area of the heap page can be used
to track the fragment list. We can maintain some additional information
about the fragmented space, such as total_free_space, max_fragment_size,
num_of_fragments etc., in the special area.

During UPDATEs, if we find that there is no free space in the block, the
fragment list is searched (either first-fit or best-fit), the required space
is consumed and the remaining space is returned to the free list.
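A toy best-fit allocator over such a fragment list might look like this (hypothetical structure, not PostgreSQL code): pick the tightest fragment that fits, and return the leftover space to the list.

```python
# Toy sketch of the in-page fragment list proposed above.
def best_fit_alloc(fragments, need):
    """fragments: list of (offset, size) pairs; returns an offset or None."""
    candidates = [f for f in fragments if f[1] >= need]
    if not candidates:
        return None                       # caller falls back to another page
    off, size = min(candidates, key=lambda f: f[1])  # tightest fit
    fragments.remove((off, size))
    if size > need:                       # return the remainder to the list
        fragments.append((off + need, size - need))
    return off

frags = [(100, 64), (300, 40)]
print(best_fit_alloc(frags, 36))          # 300 (the tightest fragment)
print(sorted(frags))                      # [(100, 64), (336, 4)]
```

A first-fit variant would simply take the first candidate instead of the minimum; the trade-off between the two is the same as for any in-page allocator.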

We might not be able to reuse the line pointers because indexes may have
references to it. All such line pointers will be freed when the page is
vacuumed during the regular vacuum.

Thanks,
Pavan

EnterpriseDB http://www.enterprisedb.com