Re: [HACKERS] contrib/citext versus collations

2011-06-07 Thread David E. Wheeler
On Jun 6, 2011, at 4:35 PM, Tom Lane wrote:

 That sounds like a good idea.
 
 BTW, it struck me shortly after sending this that we'd already discussed
 the idea of a flag in pg_proc showing whether a function pays attention
 to collation.  We could of course use that for this purpose.

Seems like a no-brainer.

Best,

David


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] gdb with postgres

2011-06-07 Thread HuangQi
On 6 June 2011 21:57, Kevin Grittner kevin.gritt...@wicourts.gov wrote:

 HuangQi huangq...@gmail.com wrote:

  (gdb) b qp_add_paths_to_joinrel
  Breakpoint 1 at 0x1a6744: file joinpath.c, line 67.
  (gdb) attach 23903

  If I enter "c", gdb will run this process to completion and the
  current query will finish.

 Are you absolutely sure that running your query will result in a
 call to this function?

 -Kevin



Thanks guys for your ideas. I found the solution: after making some changes
in postgres and rebuilding, I didn't stop the server first. Now that I stop
it, reinstall, and start it again, using gdb is just fine. Thanks for your
ideas.

-- 
Best Regards
Huang Qi Victor


Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-07 Thread Dave Page
On Tue, Jun 7, 2011 at 12:29 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Dave Page dp...@pgadmin.org writes:
 On Mon, Jun 6, 2011 at 8:44 PM, Stephen Frost sfr...@snowman.net wrote:
 If we're going to start putting in changes like this, I'd suggest that
 we try and target something like September for 9.1 to actually be
 released.  Playing with the lock management isn't something we want to
 be doing lightly and I think we definitely need to have serious testing
 of this, similar to what has been done for the SSI changes, before we're
 going to be able to release it.

 Completely aside from the issue at hand, aren't we looking at a
 September release by now anyway (assuming we have to avoid late
 July/August as we usually do)?

 Very possibly.  So if we add this in, we're talking November or December
 instead of September.  You can't argue that July/August will be lost
 time for one development path but not another.

That would depend on 2 things - a) whether testing and review of this
single patch would really add 2 - 3 months to the schedule (I'm no
expert on our locking, but I suspect it would not), and b) whether
there are people around over the summer who could test/review. The
reason we usually skip the summer isn't actually a wholesale lack of
people - it's because it's not so good from a publicity perspective,
and it's hard to get all the packagers around at the same time.


-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WALInsertLock tuning

2011-06-07 Thread Simon Riggs
On Mon, Jun 6, 2011 at 11:25 PM, Robert Haas robertmh...@gmail.com wrote:

 As to the question of whether it's safe, I think I'd agree that the
 chances of this backfiring are pretty remote.  I think that with the
 zeroing they are exactly zero, because (now that we start XLOG
 positions at 0/1) there is no way that xl_prev = {0, 0} can ever be
 valid.  Without the zeroing, well, it's remotely possible that xl_prev
 could happen to appear valid and that xl_crc could happen to match...
 but the chances are presumably quite remote.  Just the chances of the
 CRC matching should be around 2^-32.  The chances of an xl_prev match
 are harder to predict, because the matching values for CRCs should be
 pretty much uniformly distributed, while xl_prev isn't random.  But
 even given that, the chance of a match should be very small, so in
 practice there is likely no harm.

And if such a thing did actually happen we would also need to have an
accidentally correct value of all of the rest of the header values.
And even if we did, we would apply at most one junk WAL record. Then we
are onto the next WAL record, where we would need to have the same luck
all over again.

The probability of these occurrences is well below the acceptable
threshold for other problems occurring.

 It strikes me, though, that we
 could probably get nearly all of the benefit of this patch by being
 willing to zero the first sizeof(XLogRecord) bytes following a record,
 but not the rest of the buffer.  That would pretty much wipe out any
 chance of an xl_prev match, I think, and would likely still get nearly
 all of the performance benefit.

Which adds something onto the path of every XlogInsert(), rather than
once per page, so I'm a little hesitant to agree.

If we did that, we would only need to write out an additional 12 bytes
per WAL record, not the full sizeof(XLogRecord).

But even so, I think it's wasted effort.

Measuring the benefit of a performance patch is normal, but I'm not
proposing this as a risk trade-off. It's just a straight removal of
multiple cycles from a hot code path. The exact benefit will depend
upon whether the WALInsertLock is the hot lock, which it likely will
be when other patches are applied.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WALInsertLock tuning

2011-06-07 Thread Heikki Linnakangas

On 07.06.2011 10:21, Simon Riggs wrote:

On Mon, Jun 6, 2011 at 11:25 PM, Robert Haas robertmh...@gmail.com wrote:

It strikes me, though, that we
could probably get nearly all of the benefit of this patch by being
willing to zero the first sizeof(XLogRecord) bytes following a record,
but not the rest of the buffer.  That would pretty much wipe out any
chance of an xl_prev match, I think, and would likely still get nearly
all of the performance benefit.


Which adds something onto the path of every XlogInsert(), rather than
once per page, so I'm a little hesitant to agree.


You would only need to do it just before you write out the WAL. I guess 
you'd need to grab WALInsertLock in XLogWrite() to prevent more WAL 
records from being inserted on the page until you're done zeroing it, 
though.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Invalid byte sequence for encoding UTF8, caused due to non wide-char-aware downcase_truncate_identifier() function on WINDOWS

2011-06-07 Thread Jeevan Chalke
Hi Tom,

Issue is on Windows:

If you look at the attached failure.out file (after running failure.sql), we are
getting an ERROR:  invalid byte sequence for encoding "UTF8": 0xe59aff. Please
note that the byte sequence we got from the database is e5 9a ff, whereas the
actual byte sequence for the wide character '功' is e5 8a 9f.


'功'  == Unicode character
e5 8a 9f  == original byte sequence for the given character
e5 9a ff  == downcase_truncate_identifier() result, which is an invalid UTF8
representation, stored in the pg_catalog table

While displaying on the client, we receive this invalid byte sequence, which
throws an error. Note that UTF8 characters have predefined ranges for each
byte, which are checked in the pg_utf8_islegal() function. Here is the
code snippet:

==
a = source[2];
if (a < 0x80 || a > 0xBF)
    return false;
==
Note that source[2] = 0xff, which does not fall into the valid range and thus
results in an illegal UTF8 character sequence. The original byte, 0x9f, does
fall within the range.

Since we smash the identifier to lower case using
downcase_truncate_identifier(), the solution is to make this function
wide-char aware, like the LOWER() function.
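
For illustration only, here is a minimal standalone sketch -- not the actual
downcase_truncate_identifier() code -- of how per-byte case folding can mangle
this sequence, assuming a single-byte locale in which isupper()/tolower()
treat 0x8A and 0x9F as 'Š' and 'Ÿ' (as Windows-1252 does):

#include <ctype.h>
#include <locale.h>
#include <stdio.h>

int
main(void)
{
    /* UTF8 byte sequence for the character '功' */
    unsigned char ident[] = {0xE5, 0x8A, 0x9F, 0};
    int     i;

    /* assume a Windows-1252-like single-byte locale is in effect */
    setlocale(LC_CTYPE, "");

    for (i = 0; ident[i] != 0; i++)
    {
        unsigned char ch = ident[i];

        if (ch >= 'A' && ch <= 'Z')
            ch += 'a' - 'A';
        else if (ch > 0x7F && isupper(ch))
            ch = tolower(ch);   /* 0x8A -> 0x9A, 0x9F -> 0xFF here */
        printf("%02x ", ch);
    }
    printf("\n");
    return 0;
}

On an affected system this prints e5 9a ff, which is exactly the corrupted
sequence we find stored in the catalog.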

I see some discussion related to downcase_truncate_identifier() and a
wide-char-aware function, but it seems the thread was lost somewhere.
(http://archives.postgresql.org/pgsql-hackers/2010-11/msg01385.php)
This invalid byte sequence seems like a rather serious issue, because it
might lead to e.g. pg_dump failures.

I have tested this on PG 9.0 beta4 (one-click installers); BTW, we have
observed the same with earlier versions as well.

Attached is the .sql and its output (run on PG9.0 beta4).

Any thoughts???

Thanks

-- 
Jeevan B Chalke
Senior Software Engineer, R&D
EnterpriseDB Corporation
The Enterprise PostgreSQL Company

Phone: +91 20 30589500

Website: www.enterprisedb.com
EnterpriseDB Blog: http://blogs.enterprisedb.com/
Follow us on Twitter: http://www.twitter.com/enterprisedb

This e-mail message (and any attachment) is intended for the use of the
individual or entity to whom it is addressed. This message contains
information from EnterpriseDB Corporation that may be privileged,
confidential, or exempt from disclosure under applicable law. If you are not
the intended recipient or authorized to receive this for the intended
recipient, any use, dissemination, distribution, retention, archiving, or
copying of this communication is strictly prohibited. If you have received
this e-mail in error, please notify the sender immediately by reply e-mail
and delete this message.
SELECT version();
set client_encoding to EUC_CN;
SELECT name,setting FROM pg_settings WHERE name like 'lc%' OR name like '%encoding';
 create table  加入 (  用户名 text, 新功能 varchar);
 insert into  加入 values('- 隐私政策 ',' 使用条款');
 insert into  加入 values('计划政策',' 登录到');
 select 新功能 from  加入;
 select * from  加入;
 drop table  加入 ;


failure.out
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SIREAD lock versus ACCESS EXCLUSIVE lock

2011-06-07 Thread Heikki Linnakangas

On 06.06.2011 05:13, Kevin Grittner wrote:

Kevin Grittner wrote:


Maybe I should submit a patch without added complexity of the
scheduled cleanup and we can discuss that as a separate patch?


Here's a patch which adds the missing support for DDL.


It makes me a bit uncomfortable to do catalog cache lookups while 
holding all the lwlocks. We've also already removed the reserved entry 
for scratch space while we do that - if a cache lookup errors out, we'll 
leave behind quite a mess. I guess it shouldn't fail, but it seems a bit 
fragile.


When TransferPredicateLocksToHeapRelation() is called for a heap, do we 
really need to transfer all the locks on its indexes too? When a heap is 
dropped or rewritten or truncated, surely all its indexes are also 
dropped or reindexed or truncated, so you'll get separate Transfer calls 
for each index anyway. I think the logic is actually wrong at the 
moment: When you reindex a single index, 
DropAllPredicateLocksFromTableImpl() will transfer all locks belonging 
to any index on the same table, and any finer-granularity locks on the 
heap. It would be enough to transfer only locks on the index being 
reindexed, so you risk getting some unnecessary false positives. As a 
bonus, if you dumb down DropAllPredicateLocksFromTableImpl() to only 
transfer locks on the given heap or index, and not any other indexes on 
the same table, you won't need IfIndexGetRelation() anymore, making the 
issue of catalog cache lookups moot.


Seems weird to call SkipSplitTracking() for heaps. I guess it's doing 
the right thing, but all the comments and the name of that refer to indexes.



Cleanup of
predicate locks at commit time for transactions which ran DROP TABLE
or TRUNCATE TABLE can be added as a separate patch. I consider those
to be optimizations which are of dubious benefit, especially compared
to the complexity of the extra code required.


Ok.


In making sure that the new code for this patch was in pgindent
format, I noticed that the ASCII art and bullet lists recently added
to OnConflict_CheckForSerializationFailure() were mangled badly by
pgindent, so I added the dashes to protect those and included a
pgindent form of that function.  That should save someone some
trouble sorting things out after the next global pgindent run.


Thanks, committed that part.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WALInsertLock tuning

2011-06-07 Thread Simon Riggs
On Tue, Jun 7, 2011 at 8:27 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 On 07.06.2011 10:21, Simon Riggs wrote:

 On Mon, Jun 6, 2011 at 11:25 PM, Robert Haas robertmh...@gmail.com
  wrote:

 It strikes me, though, that we
 could probably get nearly all of the benefit of this patch by being
 willing to zero the first sizeof(XLogRecord) bytes following a record,
 but not the rest of the buffer.  That would pretty much wipe out any
 chance of an xl_prev match, I think, and would likely still get nearly
 all of the performance benefit.

 Which adds something onto the path of every XlogInsert(), rather than
 once per page, so I'm a little hesitant to agree.

 You would only need to do it just before you write out the WAL. I guess
 you'd need to grab WALInsertLock in XLogWrite() to prevent more WAL records
 from being inserted on the page until you're done zeroing it, though.

How would that help?

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WALInsertLock tuning

2011-06-07 Thread Robert Haas
On Tue, Jun 7, 2011 at 3:21 AM, Simon Riggs si...@2ndquadrant.com wrote:
 It strikes me, though, that we
 could probably get nearly all of the benefit of this patch by being
 willing to zero the first sizeof(XLogRecord) bytes following a record,
 but not the rest of the buffer.  That would pretty much wipe out any
 chance of an xl_prev match, I think, and would likely still get nearly
 all of the performance benefit.

 Which adds something onto the path of every XlogInsert(), rather than
 once per page, so I'm a little hesitant to agree.

Urk.  Well, we don't want that, for sure.   The previous discussion
was talking about moving the zeroing around somehow, rather than
getting rid of it, so maybe there's some way to make it work...

One other thought is that I think that this patch might cause a
user-visible behavior change.  Right now, when you hit the end of
recovery, you most typically get a message saying - record with zero
length.  Not always, but often.  If we adopt this approach, you'll get
a wider variety of error messages there, depending on exactly how the
new record fails validation.  I dunno if that's important to be worth
caring about, or not.

 If we did that, we would only need to write out an additional 12 bytes
 per WAL record, not the full sizeof(XLogRecord).

 But even so, I think it's wasted effort.

 Measuring the benefit of a performance patch is normal, but I'm not
 proposing this as a risk trade-off. It's just a straight removal of
 multiple cycles from a hot code path. The exact benefit will depend
 upon whether the WALInsertLock is the hot lock, which it likely will
 be when other patches are applied.

I don't think it's too hard to construct a test case where it is, even
now.  pgbench on a medium-sized machine ought to do it, with
synchronous_commit turned off.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SIREAD lock versus ACCESS EXCLUSIVE lock

2011-06-07 Thread Kevin Grittner
Heikki Linnakangas  wrote:
 
 It makes me a bit uncomfortable to do catalog cache lookups while
 holding all the lwlocks. We've also already removed the reserved
 entry for scratch space while we do that - if a cache lookup errors
 out, we'll leave behind quite a mess. I guess it shouldn't fail,
 but it seems a bit fragile.
 
 When TransferPredicateLocksToHeapRelation() is called for a heap,
 do we really need to transfer all the locks on its indexes too?
 When a heap is dropped or rewritten or truncated, surely all its
 indexes are also dropped or reindexed or truncated, so you'll get
 separate Transfer calls for each index anyway.
 
Probably.  Will confirm and simplify based on that.
 
 I think the logic is actually wrong at the moment: When you reindex
 a single index, DropAllPredicateLocksFromTableImpl() will transfer
 all locks belonging to any index on the same table, and any
 finer-granularity locks on the heap. It would be enough to transfer
 only locks on the index being reindexed, so you risk getting some
 unnecessary false positives.
 
It seemed like a good idea at the time -- a relation lock on the heap
makes any other locks on the heap or any of its indexes redundant. 
So it was an attempt at cleaning house. Since we don't do anything
for an index request unless there is a lock on that index, it
couldn't cause false positives.  But this probably fits into the
category of premature optimizations, since the locks can't cause any
difference in when you get a serialization failure -- it's only a
matter of taking up space.  I could revert that.
 
 As a bonus, if you dumb down DropAllPredicateLocksFromTableImpl()
 to only transfer locks on the given heap or index, and not any
 other indexes on the same table, you won't need
 IfIndexGetRelation() anymore, making the issue of catalog cache
 lookups moot.
 
Which really makes it look like simplifying here to avoid the attempt
to clean house is a good idea.  If there's a benefit to be had from
it, it should be demonstrated before attempting (in some later
release), the same as any other optimization.
 
 Seems weird to call SkipSplitTracking() for heaps. I guess it's
 doing the right thing, but all the comments and the name of that
 refer to indexes.
 
Yeah, I think it's the right thing, but the macro name should
probably be changed.  It was originally created to do the right thing
during index split operations and became useful in other cases where
a transaction was doing things to predicate locks for all
transactions.
 
Most of this is simplifying, plus one search-and-replace in a single
file.  I'll try to post a new patch this evening.
 
Thanks for the review.
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WALInsertLock tuning

2011-06-07 Thread Simon Riggs
On Tue, Jun 7, 2011 at 1:24 PM, Robert Haas robertmh...@gmail.com wrote:

 One other thought is that I think that this patch might cause a
 user-visible behavior change.  Right now, when you hit the end of
 recovery, you most typically get a message saying - record with zero
 length.  Not always, but often.  If we adopt this approach, you'll get
 a wider variety of error messages there, depending on exactly how the
 new record fails validation.  I dunno if that's important to be worth
 caring about, or not.

Not.

We've never said what the message would be, only that it would fail.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SIREAD lock versus ACCESS EXCLUSIVE lock

2011-06-07 Thread Kevin Grittner
Kevin Grittner  wrote:
 Heikki Linnakangas wrote:
 
 I think the logic is actually wrong at the moment: When you
 reindex a single index, DropAllPredicateLocksFromTableImpl() will
 transfer all locks belonging to any index on the same table, and
 any finer-granularity locks on the heap. It would be enough to
 transfer only locks on the index being reindexed, so you risk
 getting some unnecessary false positives.
 
 It seemed like a good idea at the time -- a relation lock on the
 heap makes any other locks on the heap or any of its indexes
 redundant.  So it was an attempt at cleaning house. Since we
 don't do anything for an index request unless there is a lock on
 that index, it couldn't cause false positives. But this probably
 fits into the category of premature optimizations, since the locks
 can't cause any difference in when you get a serialization failure
 -- it's only a matter of taking up space. I could revert that.
 
On reflection, Heikki was dead-on right here; I had some fuzzy
thinking going.  Just because one transaction has a lock in the index
doesn't mean that all transactions need lock promotion.  That still
leaves an opportunity for cleanup, but it's much narrower -- only
locks from transactions which held locks on the reorganized index can
be replaced by the heap relation lock.  That one is narrow enough to
be very unlikely to be worthwhile.
 
As usual, Heikki was right on target.  :-)
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-07 Thread Stephen Frost
* Alvaro Herrera (alvhe...@commandprompt.com) wrote:
 I note that if 2nd Quadrant is interested in having a game-changing
 platform without having to wait a full year for 9.2, they can obviously
 distribute a modified version of Postgres that integrates Robert's
 patch.

Having thought about this, I've got to agree with Alvaro on this one.
The people who need this patch are likely to pull it down and patch it
in and use it, regardless of if it's in a release or not.  My money is
that Treat's already got it running on some massive prod system that he
supports ( ;) ).

If we get it into the first CF of 9.2 then people are going to be even
more likely to pull it down and back-patch it into 9.1.  As soon as we
wrap up CF1 and put out our first alpha, the performance testers will
have something to point at and say "look!  PG scales *even better* now!",
and they're not going to particularly care that it's an alpha, and the
blog-o-sphere isn't going to either, especially if we can say "and it'll
be in the next release, which is scheduled for May".

So, all-in-all, -1 from me on trying to get this into 9.1.  Let's get
9.1 done and out the door already, hopefully before summer saps away
*too* many resources..

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] SIREAD lock versus ACCESS EXCLUSIVE lock

2011-06-07 Thread Tom Lane
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
 It makes me a bit uncomfortable to do catalog cache lookups while 
 holding all the lwlocks. We've also already removed the reserved entry 
 for scratch space while we do that - if a cache lookup errors out, we'll 
 leave behind quite a mess. I guess it shouldn't fail, but it seems a bit 
 fragile.

The above scares the heck out of me.  If you don't believe that a
catcache lookup will ever fail, I will contract to break the patch.
You need to rearrange the code so that this is less fragile.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SIREAD lock versus ACCESS EXCLUSIVE lock

2011-06-07 Thread Kevin Grittner
Tom Lane t...@sss.pgh.pa.us wrote:
 
 If you don't believe that a catcache lookup will ever fail, I will
 contract to break the patch.
 
As you probably know by now by reaching the end of the thread, this
code is going away based on Heikki's arguments; but for my
understanding, so that I don't make a bad assumption in this area
again, what could cause the following function to throw an exception
if the current process is holding an exclusive lock on the relation
passed in to it?  (It could be a heap or an index relation.)  It
seemed safe to me, and I can't spot the risk on a scan of the called
functions.  What am I missing?
 
static Oid
IfIndexGetRelation(Oid indexId)
{
    HeapTuple   tuple;
    Form_pg_index index;
    Oid         result;

    tuple = SearchSysCache1(INDEXRELID, ObjectIdGetDatum(indexId));
    if (!HeapTupleIsValid(tuple))
        return InvalidOid;

    index = (Form_pg_index) GETSTRUCT(tuple);
    Assert(index->indexrelid == indexId);

    result = index->indrelid;
    ReleaseSysCache(tuple);
    return result;
}
 
Thanks for any clues,
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Range Types and extensions

2011-06-07 Thread Merlin Moncure
On Mon, Jun 6, 2011 at 6:23 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Merlin Moncure mmonc...@gmail.com writes:
 I vote for at minimum the type itself and ANYRANGE to be in core.
 From there you could make it like arrays where the range type is
 automatically generated for each POD type.  I would consider that for
 sure on basis of simplicity in user-land unless all the extra types
 and operators are a performance hit.

 Auto-generation of range types isn't going to happen, simply because the
 range type needs more information than is provided by the base type
 declaration.  (First, you need a btree opclass, and second, you need a
 next function if it's a discrete type.)

 By my count there are only about 20 datatypes in core for which it looks
 sensible to provide a range type (ie, it's a non-deprecated,
 non-composite type with a standard default btree opclass).  For that
 many, we might as well just build 'em in.

right. hm -- can you have multiple range type definitions for a
particular type?  I was thinking about a type reduction for casting
like we have for arrays: select '[1,3)'::int{}. but maybe that isn't
specific enough?

merlin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SIREAD lock versus ACCESS EXCLUSIVE lock

2011-06-07 Thread Tom Lane
Kevin Grittner kevin.gritt...@wicourts.gov writes:
 Tom Lane t...@sss.pgh.pa.us wrote:
 If you don't believe that a catcache lookup will ever fail, I will
 contract to break the patch.
 
 As you probably know by now by reaching the end of the thread, this
 code is going away based on Heikki's arguments; but for my
 understanding, so that I don't make a bad assumption in this area
 again, what could cause the following function to throw an exception
 if the current process is holding an exclusive lock on the relation
 passed in to it?  (It could be a heap or an index relation.)  It
 seemed safe to me, and I can't spot the risk on a scan of the called
 functions.  What am I missing?

Out-of-memory.  Query cancel.  The attempted catalog access failing
because it results in a detected deadlock.  I could probably think of
several more if I spent ten minutes on it; and that's not even
considering genuine problem conditions such as a corrupted catalog
index, which robustness demands that we not fall over completely for.

You should never, ever assume that an operation as complicated as a
catalog lookup can't fail.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Range Types and extensions

2011-06-07 Thread Tom Lane
Merlin Moncure mmonc...@gmail.com writes:
 On Mon, Jun 6, 2011 at 6:23 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 By my count there are only about 20 datatypes in core for which it looks
 sensible to provide a range type (ie, it's a non-deprecated,
 non-composite type with a standard default btree opclass).  For that
 many, we might as well just build 'em in.

 right. hm -- can you have multiple range type definitions for a
 particular type?

In principle, sure, if the type has multiple useful sort orderings.
I don't immediately see any core types for which we'd bother.  (In
particular I don't see a use case for range types corresponding to
the *_pattern_ops btree opclasses, especially now that COLLATE C
has rendered them sorta obsolete.)

BTW, Jeff, have you worked out the implications of collations for
textual range types?  I confess to not having paid much attention
to range types lately.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SIREAD lock versus ACCESS EXCLUSIVE lock

2011-06-07 Thread Kevin Grittner
Tom Lane t...@sss.pgh.pa.us wrote:
 Kevin Grittner kevin.gritt...@wicourts.gov writes:
 
 What am I missing?
 
 Out-of-memory.  Query cancel.  The attempted catalog access
 failing because it results in a detected deadlock.  I could
 probably think of several more if I spent ten minutes on it; and
 that's not even considering genuine problem conditions such as a
 corrupted catalog index, which robustness demands that we not fall
 over completely for.
 
 You should never, ever assume that an operation as complicated as
 a catalog lookup can't fail.
 
Got it.  Thanks.
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WALInsertLock tuning

2011-06-07 Thread Heikki Linnakangas

On 07.06.2011 10:55, Simon Riggs wrote:

On Tue, Jun 7, 2011 at 8:27 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com  wrote:

You would only need to do it just before you write out the WAL. I guess
you'd need to grab WALInsertLock in XLogWrite() to prevent more WAL records
from being inserted on the page until you're done zeroing it, though.


How would that help?


It doesn't matter whether the pages are zeroed while they sit in memory. 
And if you write a full page of WAL data, any wasted bytes at the end of 
the page don't matter, because they're ignored at replay anyway. The 
possibility of mistaking random garbage for valid WAL only occurs when 
we write a partial WAL page to disk. So, it is enough to zero the 
remainder of the partial WAL page (or just the next few words) when we 
write it out.


That's a lot cheaper than fully zeroing every page. (except for the fact 
that you'd need to hold WALInsertLock while you do it)


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-07 Thread Joshua D. Drake

On 06/06/2011 04:43 PM, Robert Haas wrote:

On Mon, Jun 6, 2011 at 6:53 PM, Alvaro Herrera
alvhe...@commandprompt.com  wrote:

Excerpts from Robert Haas's message of vie jun 03 09:17:08 -0400 2011:

I've now spent enough time working on this issue now to be convinced
that the approach has merit, if we can work out the kinks.  I'll start
with some performance numbers.


I hereby recommend that people with patches such as this one while on
the last weeks till release should refrain from posting them until the
release has actually taken place.


%@#!

Next time I'll be sure to only post my patches during beta if they suck.



I think Alvaro's point isn't directed at you Robert but at the idea that 
this should be applied to 9.1.


Sincerely,

Joshua D. Drake

--
Command Prompt, Inc. - http://www.commandprompt.com/
PostgreSQL Support, Training, Professional Services and Development
The PostgreSQL Conference - http://www.postgresqlconference.org/
@cmdpromptinc - @postgresconf - 509-416-6579

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WALInsertLock tuning

2011-06-07 Thread Tom Lane
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
 On 07.06.2011 10:55, Simon Riggs wrote:
 How would that help?

 It doesn't matter whether the pages are zeroed while they sit in memory. 
 And if you write a full page of WAL data, any wasted bytes at the end of 
 the page don't matter, because they're ignored at replay anyway. The 
 possibility of mistaking random garbage for valid WAL only occurs when 
 we write a partial WAL page to disk. So, it is enough to zero the 
 remainder of the partial WAL page (or just the next few words) when we 
 write it out.

 That's a lot cheaper than fully zeroing every page. (except for the fact 
 that you'd need to hold WALInsertLock while you do it)

I think avoiding the need to hold both locks at once is probably exactly
why the zeroing was done where it is.

An interesting alternative is to have XLogInsert itself just plop down a
few more zeroes immediately after the record it's inserted, before it
releases WALInsertLock.  This will be redundant work once the next
record gets added, but it's cheap enough to not matter IMO.  As was
mentioned upthread, zeroing out the bytes that will eventually hold the
next record's xl_prev field ought to be enough to maintain a guarantee
that we won't believe the next record is valid.
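
In code form, a minimal sketch of that idea might look like the following
(hypothetical helper and constant names, not the actual xlog.c code; it
assumes the 12 bytes discussed upthread are enough to cover the next
record's xl_crc and xl_prev):

#include <stddef.h>
#include <string.h>

#define NEXT_HDR_ZERO_BYTES 12      /* xl_crc (4) + xl_prev (8) per the thread */

/*
 * Called with WALInsertLock still held, right after a record has been
 * copied into the WAL buffer: clear the bytes that would hold the next
 * record's header prefix, so stale buffer contents can never look like a
 * valid back-pointer at replay.
 */
static void
zero_next_record_header(char *next_insert_pos, char *page_end)
{
    size_t      n = NEXT_HDR_ZERO_BYTES;

    if ((size_t) (page_end - next_insert_pos) < n)
        n = page_end - next_insert_pos;     /* don't run past the page */
    memset(next_insert_pos, 0, n);
}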

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WALInsertLock tuning

2011-06-07 Thread Simon Riggs
On Tue, Jun 7, 2011 at 4:57 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
 On 07.06.2011 10:55, Simon Riggs wrote:
 How would that help?

 It doesn't matter whether the pages are zeroed while they sit in memory.
 And if you write a full page of WAL data, any wasted bytes at the end of
 the page don't matter, because they're ignored at replay anyway. The
 possibility of mistaking random garbage for valid WAL only occurs when
 we write a partial WAL page to disk. So, it is enough to zero the
 remainder of the partial WAL page (or just the next few words) when we
 write it out.

 That's a lot cheaper than fully zeroing every page. (except for the fact
 that you'd need to hold WALInsertLock while you do it)

 I think avoiding the need to hold both locks at once is probably exactly
 why the zeroing was done where it is.

 An interesting alternative is to have XLogInsert itself just plop down a
 few more zeroes immediately after the record it's inserted, before it
 releases WALInsertLock.  This will be redundant work once the next
 record gets added, but it's cheap enough to not matter IMO.  As was
 mentioned upthread, zeroing out the bytes that will eventually hold the
 next record's xl_prev field ought to be enough to maintain a guarantee
 that we won't believe the next record is valid.

Let's see what the overhead is with a continuous stream of short WAL
records, say xl_heap_delete records.

xl header is 32 bytes, xl_heap_delete is 24 bytes.

So there would be ~145 records per page. A 12-byte zeroing overhead per
record gives 1740 total zeroed bytes written per page.

The overhead is at worst less than 25% of the current overhead, plus
it's spread out across multiple records.
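
Spelling out the arithmetic (assuming an 8 kB WAL page):

\[
\frac{8192}{32 + 24} \approx 146 \;\Rightarrow\; \sim 145 \text{ records per page},
\qquad 145 \times 12 = 1740 \text{ zeroed bytes},
\qquad \frac{1740}{8192} \approx 21\% < 25\%.
\]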

When we get lots of full pages into WAL just after checkpoint we don't
get as much overhead - nearly every full page forces a page switch. So
we're removing overhead from where it hurts the most and amortising
across other records.

Maths work for me.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Vacuum, visibility maps and SKIP_PAGES_THRESHOLD

2011-06-07 Thread Greg Stark
On Jun 3, 2011 8:38 PM, Bruce Momjian br...@momjian.us wrote:

 I realize we just read the pages from the kernel to maintain sequential
 I/O, but do we actually read the contents of the page if we know it
 doesn't need vacuuming?  If so, do we need to?

I don't follow. What's your question?

Tom's final version does basically the optimal combination of the above I
think.


Re: [HACKERS] [Pgbuildfarm-members] CREATE FUNCTION hang on test machine polecat on HEAD

2011-06-07 Thread Robert Creager
On Jun 6, 2011, at 7:29 PM, Andrew Dunstan and...@dunslane.net wrote:

 
 
 On 06/06/2011 07:30 PM, Robert Creager wrote:
 [4de65a8f.607a:1] LOG:  connection received: host=[local]
 [4de65a8f.607a:2] LOG:  connection authorized: user=Robert 
 database=pl_regression
 [4de65a8f.607a:3] LOG:  statement: CREATE OR REPLACE FUNCTION bar() RETURNS 
 integer AS $$
#die 'BANG!'; # causes server process to exit(2)
# alternative - causes server process to exit(255)
spi_exec_query("invalid sql statement");
$$ language plperl;
 
 I'll leave it running tonight (going home), so I can poke tomorrow if anyone 
 wants me to.
 
 
 
 That's weird. Why it should hang there I have no idea. Did it hang at the 
 same spot both times? Can you get a backtrace?

I think so, but I didn't pay much attention :-(

GNU gdb 6.3.50-20050815 (Apple version gdb-1518) (Sat Feb 12 02:52:12 UTC 2011)
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type show copying to see the conditions.
There is absolutely no warranty for GDB.  Type show warranty for details.
This GDB was configured as x86_64-apple-darwin...Reading symbols for shared 
libraries .. done

Attaching to program: `/Volumes/High 
Usage/usr/local/src/build-farm-4.4/builds/HEAD/inst/bin/postgres', process 
24698.
Reading symbols for shared libraries .+. done
0x000100a505e4 in Perl_get_hash_seed ()
(gdb) bt
#0  0x000100a505e4 in Perl_get_hash_seed ()
#1  0x000100a69b94 in perl_parse ()
#2  0x0001007bb680 in plperl_init_interp () at plperl.c:781
#3  0x0001007bc17a in _PG_init () at plperl.c:443
#4  0x000100301da6 in internal_load_library (libname=0x10100d540 
/Volumes/High 
Usage/usr/local/src/build-farm-4.4/builds/HEAD/inst/lib/postgresql/plperl.so) 
at dfmgr.c:284
#5  0x0001003026f5 in load_external_function (filename=value temporarily 
unavailable, due to optimizations, funcname=0x10100d508 plperl_validator, 
signalNotFound=1 '\001', filehandle=0x7fff5fbfd3b8) at dfmgr.c:113
#6  0x000100304c10 in fmgr_info_C_lang [inlined] () at /Volumes/High 
Usage/usr/local/src/build-farm-4.4/builds/HEAD/pgsql.2569/src/backend/utils/fmgr/fmgr.c:349
#7  0x000100304c10 in fmgr_info_cxt_security (functionId=41321, 
finfo=0x7fff5fbfd410, mcxt=value temporarily unavailable, due to 
optimizations, ignore_security=value temporarily unavailable, due to 
optimizations) at fmgr.c:280
#8  0x000100305e00 in OidFunctionCall1Coll (functionId=value temporarily 
unavailable, due to optimizations, collation=0, arg1=41426) at fmgr.c:1585
#9  0x00010009e493 in ProcedureCreate (procedureName=0x101006550 bar, 
procNamespace=2200, replace=1 '\001', returnsSet=0 '\0', returnType=23, 
languageObjectId=41322, languageValidator=41321, prosrc=0x101006748 \n#die 
'BANG!'; # causes server process to exit(2)\n# alternative - causes server 
process to exit(255)\nspi_exec_query(\invalid sql statement\);\n, 
probin=0x0, isAgg=0 '\0', isWindowFunc=0 '\0', security_definer=0 '\0', 
isStrict=0 '\0', volatility=118 'v', parameterTypes=0x10100d7d8, 
allParameterTypes=0, parameterModes=0, parameterNames=0, parameterDefaults=0x0, 
proconfig=0, procost=100, prorows=0) at pg_proc.c:652
#10 0x0001001046be in CreateFunction (stmt=0x101006a48, 
queryString=0x101005a38 CREATE OR REPLACE FUNCTION bar() RETURNS integer AS 
$$\n#die 'BANG!'; # causes server process to exit(2)\n# alternative - 
causes server process to exit(255)\nspi_exec_query(\invalid sql state...) 
at functioncmds.c:942
#11 0x00010023633b in MemoryContextSwitchTo [inlined] () at /Volumes/High 
Usage/usr/local/src/build-farm-4.4/builds/HEAD/pgsql.2569/src/include/utils/palloc.h:1184
#12 0x00010023633b in PortalRunUtility (portal=0x101027238, 
utilityStmt=0x101006a48, isTopLevel=value temporarily unavailable, due to 
optimizations, dest=0x101006df0, completionTag=0x7fff5fbfdea0 ) at 
pquery.c:1192
#13 0x000100237af5 in PortalRunMulti (portal=0x101027238, isTopLevel=value 
temporarily unavailable, due to optimizations, dest=0x101006df0, 
altdest=0x101006df0, completionTag=0x7fff5fbfdea0 ) at pquery.c:1315
#14 0x0001002384a8 in PortalRun (portal=0x101027238, 
count=9223372036854775807, isTopLevel=value temporarily unavailable, due to 
optimizations, dest=0x101006df0, altdest=0x101006df0, 
completionTag=0x7fff5fbfdea0 ) at pquery.c:813
#15 0x00010023445d in exec_simple_query (query_string=0x101005a38 CREATE 
OR REPLACE FUNCTION bar() RETURNS integer AS $$\n#die 'BANG!'; # causes 
server process to exit(2)\n# alternative - causes server process to 
exit(255)\nspi_exec_query(\invalid sql state...) at postgres.c:1018
#16 0x000100235021 in PostgresMain (argc=2, argv=value temporarily 
unavailable, due to optimizations, username=value temporarily unavailable, 
due to optimizations) at 

[HACKERS] CREATE FUNCTION hang on test machine polecat on HEAD

2011-06-07 Thread Robert Creager
This is the second time I've had this happen in the last week or so. I have a 'regular' postgresql server running, and then the test setup (both llvm and gcc). It's chewing up 1 core on my MBP, never completing.

502  229   1  0  0:00.50 ?? 0:00.60 /Library/PostgreSQL/8.3/bin/postgres -D /Library/PostgreSQL/8.3/data
502  266  229  0  0:08.32 ?? 0:11.86 postgres: logger process
502  268  229  0  0:24.49 ?? 0:55.88 postgres: writer process
502  269  229  0  0:23.02 ?? 0:36.82 postgres: wal writer process
502  270  229  0  0:06.40 ?? 0:09.12 postgres: autovacuum launcher process
502  271  229  0  0:06.87 ?? 0:07.78 postgres: stats collector process
501 24638   1  0  0:06.74 ?? 0:07.63 /Volumes/High Usage/usr/local/src/build-farm-4.4/builds/HEAD/inst/bin/postgres -D data-C
501 24640 24638  0  0:18.91 ?? 0:43.89 postgres: writer process
501 24641 24638  0  0:17.83 ?? 0:27.99 postgres: wal writer process
501 24642 24638  0  0:09.80 ?? 0:21.99 postgres: autovacuum launcher process
501 24643 24638  0  0:48.59 ?? 0:59.91 postgres: stats collector process
501 24698 24638  0 2116:52.81 ?? 2456:38.81 postgres: Robert pl_regression [local] CREATE FUNCTION

Robert@dhcp-brm-bl5-204-2e-east-10-135-77-175:/Volumes/High Usage/usr/local/src/build-farm-4.4/builds/HEAD% ls -altr
total 24
drwxr-xr-x  9 Robert staff  306B Dec 10 22:31 ../
-rw-r--r--  1 Robert staff  11B May 31 13:05 polecat.last.success.snap
-rw-r--r--  1 Robert staff   0B Jun 1 09:18 builder.LCK
-rw-r--r--  1 Robert staff  11B Jun 1 09:19 polecat.last.status
-rw-r--r--  1 Robert staff  11B Jun 1 09:19 polecat.last.run.snap
drwxr-xr-x 16 Robert staff  544B Jun 1 09:19 pgsql/
drwxr-xr-x 18 Robert staff  612B Jun 1 09:20 pgsql.2569/
drwxr-xr-x 10 Robert staff  340B Jun 1 09:27 ./
drwxr-xr-x 10 Robert staff  340B Jun 1 09:28 inst/
drwxr-xr-x 16 Robert staff  544B Jun 1 09:28 polecat.lastrun-logs/

Robert@dhcp-brm-bl5-204-2e-east-10-135-77-175:/Volumes/High Usage/usr/local/src/build-farm-4.4/builds/HEAD/polecat.lastrun-logs% ls -altr
total 8760
-rw-r--r--  1 Robert staff  40B Jun 1 09:19 githead.log
-rw-r--r--  1 Robert staff  918B Jun 1 09:19 SCM-checkout.log
-rw-r--r--  1 Robert staff  16K Jun 1 09:20 configure.log
-rw-r--r--  1 Robert staff  324K Jun 1 09:20 config.log
-rw-r--r--  1 Robert staff  257K Jun 1 09:25 make.log
-rw-r--r--  1 Robert staff  1.8M Jun 1 09:26 check.log
-rw-r--r--  1 Robert staff  50K Jun 1 09:27 make-contrib.log
drwxr-xr-x 10 Robert staff  340B Jun 1 09:27 ../
-rw-r--r--  1 Robert staff  40K Jun 1 09:27 make-install.log
-rw-r--r--  1 Robert staff  26K Jun 1 09:27 install-contrib.log
-rw-r--r--  1 Robert staff  1.3K Jun 1 09:27 initdb-C.log
-rw-r--r--  1 Robert staff  534B Jun 1 09:27 startdb-C-1.log
-rw-r--r--  1 Robert staff  1.7M Jun 1 09:27 install-check-C.log
-rw-r--r--  1 Robert staff  299B Jun 1 09:28 stopdb-C-1.log
-rw-r--r--  1 Robert staff  534B Jun 1 09:28 startdb-C-2.log

cat startdb-C-2.log
waiting for server to start done
server started
=== db log file ==
[4de65a8c.603f:1] LOG: database system was shut down at 2011-06-01 09:28:01 MDT
[4de65a8c.6042:1] LOG: autovacuum launcher started
[4de65a8c.603e:1] LOG: database system is ready to accept connections
[4de65a8d.6044:1] LOG: connection received: host=[local]
[4de65a8d.6044:2] LOG: connection authorized: user=Robert database=postgres
[4de65a8d.6044:3] LOG: disconnection: session time: 0:00:00.009 user=Robert database=postgres host=[local]

Robert@dhcp-brm-bl5-204-2e-east-10-135-77-175:/Volumes/High Usage/usr/local/src/build-farm-4.4/builds/HEAD/inst% tail -n 100 !$
tail -n 100 logfile
		while (@arrays > 0) {
			my $el = shift @arrays;
			if (is_array_ref($el)) {
				push @arrays, @$el;
			} else {
				$result .= $el;
			}
		}
		return $result.' '.$array_arg;
	$$ LANGUAGE plperl;
[4de65a8f.6076:16] LOG: statement: select plperl_concat('{"NULL","NULL","NULL''"}');
[4de65a8f.6076:17] LOG: statement: select plperl_concat('{{NULL,NULL,NULL}}');
[4de65a8f.6076:18] LOG: statement: select plperl_concat('{"hello"," ","world!"}');
[4de65a8f.6076:19] LOG: statement: CREATE TYPE foo AS (bar INTEGER, baz TEXT);
[4de65a8f.6076:20] LOG: statement: CREATE OR REPLACE FUNCTION plperl_array_of_rows(foo[]) RETURNS TEXT AS $$
		my $array_arg = shift;
		my $result = "";
		for my $row_ref (@$array_arg) {

Re: [HACKERS] Postmaster holding unlinked files for pg_largeobject table

2011-06-07 Thread Tom Lane
Alvaro Herrera alvhe...@commandprompt.com writes:
 Excerpts from Tom Lane's message of lun jun 06 12:49:46 -0400 2011:
 Hmm, there's already a mechanism for closing temp FDs at the end of a
 query ... maybe blind writes could use temp-like FDs?

 I don't think it can be made to work exactly like that.  If I understand
 correctly, the code involved here is the FlushBuffer() call that happens
 during BufferAlloc(), and what we have at that point is a SMgrRelation;
 we're several levels removed from actually being able to set the
 FD_XACT_TEMPORARY flag which is what I think you're thinking of.

It's not *that* many levels: in fact, I think md.c is the only level
that would just have to pass it through without doing anything useful.
I think that working from there is a saner and more efficient approach
than what you're sketching.

If you want a concrete design sketch, consider this:

1. Add a flag to the SMgrRelation struct that has the semantics of "all
files opened through this SMgrRelation should be marked as transient,
causing them to be automatically closed at end of xact".

2. *Any* normal smgropen() call would reset this flag (since it suggests
that we are accessing the relation because of SQL activity).  In the
single case where FlushBuffer() is called with reln == NULL, it would
set the flag after doing its local smgropen().

3. Then, modify md.c to pass the flag down to fd.c whenever opening an
FD file.  fd.c sets a bit in the resulting VFD.

4. Extend CleanupTempFiles to close the kernel FD (but not release the
VFD) when a VFD has the bit set.

I'm fairly sure that CleanupTempFiles is never called in the bgwriter,
so we don't even need any special hack to prevent the flag from becoming
set in the bgwriter.
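
In skeleton form, the pieces would fit together roughly like this (field and
flag names below are made up for illustration, not the real smgr/fd code):

#include <stdbool.h>

typedef struct SMgrRelationData
{
    /* ... existing smgr fields ... */
    bool        smgr_transient;     /* step 1: files opened through this
                                     * SMgrRelation get closed at end of xact */
} SMgrRelationData;

typedef struct Vfd
{
    /* ... existing vfd fields ... */
    bool        fdIsTransient;      /* step 3: hint passed down from md.c */
} Vfd;

/*
 * Step 2: every normal smgropen() clears smgr_transient; only the
 * blind-write path in FlushBuffer() (reln == NULL) sets it after its
 * local smgropen().
 *
 * Step 4: CleanupTempFiles() closes the kernel FD, but keeps the VFD,
 * for any Vfd with fdIsTransient set.
 */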

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: [Pgbuildfarm-members] CREATE FUNCTION hang on test machine polecat on HEAD

2011-06-07 Thread Andrew Dunstan



On 06/07/2011 12:35 AM, Tom Lane wrote:

Andrew Dunstan and...@dunslane.net writes:

On 06/06/2011 07:30 PM, Robert Creager wrote:

[4de65a8f.607a:3] LOG:  statement: CREATE OR REPLACE FUNCTION bar() RETURNS 
integer AS $$
#die 'BANG!'; # causes server process to exit(2)
# alternative - causes server process to exit(255)
spi_exec_query("invalid sql statement");
$$ language plperl;

I'll leave it running tonight (going home), so I can poke tomorrow if anyone 
wants me to.

That's weird. Why it should hang there I have no idea. Did it hang at
the same spot both times? Can you get a backtrace?

You sure it's hung on that statement, and not the following one?
The following one would be trying to load plperlu into a backend
already using plperl, which is an area that it wouldn't exactly
be surprising to find platform-dependent issues in.




That's true, but he has log_statement = all, so the statement should be 
logged before it's executed. And the stack trace he's sent shows that's 
the statement being executed.


It seems to be hung in Perl_get_hash_seed().

cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: [Pgbuildfarm-members] CREATE FUNCTION hang on test machine polecat on HEAD

2011-06-07 Thread Tom Lane
Andrew Dunstan and...@dunslane.net writes:
 On 06/07/2011 12:35 AM, Tom Lane wrote:
 You sure it's hung on that statement, and not the following one?
 The following one would be trying to load plperlu into a backend
 already using plperl, which is an area that it wouldn't exactly
 be surprising to find platform-dependent issues in.

 That's true, but he has log_statement = all, so the statement should be 
 logged before it's executed. And the stack trace he's sent shows that's 
 the statement being executed.

Yeah, the stack trace destroyed that theory.

 It seems to be hung in Perl_get_hash_seed().

Which is not our code, of course.  Who wants to dig into perl guts?

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-07 Thread Simon Riggs
On Mon, Jun 6, 2011 at 8:50 PM, Dave Page dp...@pgadmin.org wrote:
 On Mon, Jun 6, 2011 at 8:40 PM, Stefan Kaltenbrunner
 ste...@kaltenbrunner.cc wrote:
 On 06/06/2011 09:24 PM, Dave Page wrote:
 On Mon, Jun 6, 2011 at 8:12 PM, Dimitri Fontaine dimi...@2ndquadrant.fr 
 wrote:
 So, to the question “do we want hard deadlines?” I think the answer is
 “no”, to “do we need hard deadlines?”, my answer is still “no”, and to
 the question “does this very change should be considered this late?” my
 answer is yes.

 Because it really changes the game for PostgreSQL users.

 Much as I hate to say it (I too want to keep our schedule as
 predictable and organised as possible), I have to agree. Assuming the
 patch is good, I think this is something we should push into 9.1. It
 really could be a game changer.

 I disagree - the proposed patch maybe provides a very significant
 improvment for a certain workload type(nothing less but nothing more),
 but it was posted way after -BETA and I'm not sure we yet understand all
 implications of the changes.

 We certainly need to be happy with the implications if we were to make
 such a decision.

 We also have to consider that the underlying issues are known problems
 for multiple years^releases so I don't think there is a particular rush
 to force them into a particular release (as in 9.1).

 No, there's no *technical* reason we need to do this, as there would
 be if it were a bug fix for example. I would just like to see us
 narrow the gap with our competitors sooner rather than later, *if*
 we're a) happy with the change, and b) we're talking about a minimal
 delay (which we may be - Robert says he thinks the patch is good, so
 with another review and beta testing).

Stefan/Robert's observation that we perform a
VirtualXactLockTableInsert() to no real benefit is a good one.

It leads to the following simple patch to remove one lock table hit
per transaction. It's a lot smaller impact on the LockMgr locks, but
it will still be substantial. Performance tests please?

This patch is much less invasive and has impact only on CREATE INDEX
CONCURRENTLY and Hot Standby. It's taken me about 2 hours to write and
test and there's no way it will cause any delay at all to the release
schedule. (Though I'm sure Robert can improve it).

If we combine this patch with Koichi-san's recommended changes to the
number of lock partitions, we will have considerable impact for 9.1.
Robert will still get his day in the sun, just with 9.2.

This way we get something now *and* something later, while the risk
minimisers will have succeeded in protecting the code. A compromise
for everyone.

Please consider this as a serious proposal for tuning in 9.1.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


remove_VirtualXactLockTableInsert.v1.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-07 Thread Robert Haas
On Tue, Jun 7, 2011 at 12:51 PM, Simon Riggs si...@2ndquadrant.com wrote:
 On Mon, Jun 6, 2011 at 8:50 PM, Dave Page dp...@pgadmin.org wrote:
 On Mon, Jun 6, 2011 at 8:40 PM, Stefan Kaltenbrunner
 ste...@kaltenbrunner.cc wrote:
 On 06/06/2011 09:24 PM, Dave Page wrote:
 On Mon, Jun 6, 2011 at 8:12 PM, Dimitri Fontaine dimi...@2ndquadrant.fr 
 wrote:
 So, to the question “do we want hard deadlines?” I think the answer is
 “no”, to “do we need hard deadlines?”, my answer is still “no”, and to
 the question “does this very change should be considered this late?” my
 answer is yes.

 Because it really changes the game for PostgreSQL users.

 Much as I hate to say it (I too want to keep our schedule as
 predictable and organised as possible), I have to agree. Assuming the
 patch is good, I think this is something we should push into 9.1. It
 really could be a game changer.

 I disagree - the proposed patch maybe provides a very significant
 improvment for a certain workload type(nothing less but nothing more),
 but it was posted way after -BETA and I'm not sure we yet understand all
 implications of the changes.

 We certainly need to be happy with the implications if we were to make
 such a decision.

 We also have to consider that the underlying issues are known problems
 for multiple years^releases so I don't think there is a particular rush
 to force them into a particular release (as in 9.1).

 No, there's no *technical* reason we need to do this, as there would
 be if it were a bug fix for example. I would just like to see us
 narrow the gap with our competitors sooner rather than later, *if*
 we're a) happy with the change, and b) we're talking about a minimal
 delay (which we may be - Robert says he thinks the patch is good, so
 with another review and beta testing).

 Stefan/Robert's observation that we perform a
 VirtualXactLockTableInsert() to no real benefit is a good one.

 It leads to the following simple patch to remove one lock table hit
 per transaction. It's a lot smaller impact on the LockMgr locks, but
 it will still be substantial. Performance tests please?

 This patch is much less invasive and has impact only on CREATE INDEX
 CONCURRENTLY and Hot Standby. It's taken me about 2 hours to write and
 test and there's no way it will cause any delay at all to the release
 schedule. (Though I'm sure Robert can improve it).

 If we combine this patch with Koichi-san's recommended changes to the
 number of lock partitions, we will have considerable impact for 9.1.
 Robert will still get his day in the sun, just with 9.2.

 This way we get something now *and* something later, while the risk
 minimisers will have succeeded in protecting the code. A compromise
 for everyone.

 Please consider this as a serious proposal for tuning in 9.1.

You seem to have completely ignored the reason why it works that way
in the first place, which is that there is otherwise a risk of
undetected deadlock.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-07 Thread Robert Haas
On Tue, Jun 7, 2011 at 11:56 AM, Joshua D. Drake j...@commandprompt.com wrote:
 On 06/06/2011 04:43 PM, Robert Haas wrote:

 On Mon, Jun 6, 2011 at 6:53 PM, Alvaro Herrera
 alvhe...@commandprompt.com  wrote:

 Excerpts from Robert Haas's message of vie jun 03 09:17:08 -0400 2011:

 I've now spent enough time working on this issue now to be convinced
 that the approach has merit, if we can work out the kinks.  I'll start
 with some performance numbers.

 I hereby recommend that people with patches such as this one should,
 during the last weeks before release, refrain from posting them until
 the release has actually taken place.

 %@#!

 Next time I'll be sure to only post my patches during beta if they suck.


 I think Alvaro's point isn't directed at you Robert but at the idea that
 this should be applied to 9.1.

Oh, I get that.  I'm just dismayed that we can't have a discussion
about the patch without getting sidetracked into a conversation about
whether we should throw feature freeze out the window.  If posting
patches that do interesting things during beta results in everyone
ignoring both the work that needs to be done to get from beta to final
release, and the patch itself, in favor of talking about the release
schedule, then I think at the next developer meeting we're going to
get to hear Tom argue that overlapping the end of beta with the
beginning of the next release cycle is a mistake and we should go back
to the old system where we yell at everyone to shut up unless they're
helping test or fix bugs.  Since that overlap is going to (hopefully)
allow this patch to get into the tree ~2-3 months SOONER than it would
have under the old system, I would be unhappy to see it abolished.

Everyone who is arguing for the inclusion of this patch in 9.1 should
take a minute to think about the following fact: If the PostgreSQL
development process does not work for Tom, it does not work.  Full
stop.  We all know that Tom is conservative with respect to release
management, but we also know that his output is enormous, that he
fixes virtually all of the bugs that *get* fixed, and that our
well-deserved reputation for high quality releases is in large part
attributable to him.  We will not be better off if we design a process
that leaves him cold.  The fact that Alvaro, Heikki, Andrew, Kevin,
and myself don't like the proposed process either is just icing on the
cake.  And I use the term process loosely, because what's really
being proposed is the complete absence of any process.  The idea of
having a feature freeze some time prior to release is hardly a novel
roadblock that we've invented here at the PostgreSQL Global
Development Group.  It's a basic software engineering principle that
has been universally adopted by just about every open and closed
source development project in existence, and with good reason.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SIREAD lock versus ACCESS EXCLUSIVE lock

2011-06-07 Thread Kevin Grittner
Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote:
 
 We've also already removed the reserved entry for scratch space
 
This and Tom's concerns have me wondering if we should bracket the
two sections of code where we use the reserved lock target entry
with HOLD_INTERRUPTS() and RESUME_INTERRUPTS().  In an assert-enable
build we wouldn't really recover from a transaction canceled while
it was checked out (although if that were the only problem, that
could be fixed), but besides that a cancellation while it's checked
out could cause these otherwise-safe functions to throw exceptions
due to a full heap table.
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SIREAD lock versus ACCESS EXCLUSIVE lock

2011-06-07 Thread Heikki Linnakangas

On 07.06.2011 20:03, Kevin Grittner wrote:

Heikki Linnakangasheikki.linnakan...@enterprisedb.com  wrote:


We've also already removed the reserved entry for scratch space


This and Tom's concerns have me wondering if we should bracket the
two sections of code where we use the reserved lock target entry
with HOLD_INTERRUPTS() and RESUME_INTERRUPTS().


That's not necessary. You're holding a lwlock, which implies that 
interrupts are held off already. There's a HOLD_INTERRUPTS() call in 
LWLockAcquire and RESUME_INTERRUPTS() in LWLockRelease.
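
To make that concrete, here is a minimal standalone sketch of the pattern 
(a toy counter standing in for HOLD_INTERRUPTS()/RESUME_INTERRUPTS(); this 
is not PostgreSQL source, and the names are simplified): a nesting counter 
is bumped on acquire and dropped on release, so a pending cancel is only 
serviced once the lock is released.

/*
 * Toy stand-in for the backend's interrupt holdoff machinery (NOT actual
 * PostgreSQL code).  The point is only that acquire/release bracket the
 * critical section with a nesting counter, so pending cancels are serviced
 * after release, never inside.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

static int  holdoff_count = 0;
static bool interrupt_pending = false;

static void hold_interrupts(void)   { holdoff_count++; }
static void resume_interrupts(void) { assert(holdoff_count > 0); holdoff_count--; }

static void check_for_interrupts(void)
{
    if (interrupt_pending && holdoff_count == 0)
        printf("servicing cancel/die interrupt now\n");
}

static void lwlock_acquire(void) { hold_interrupts();  /* ... take the lock ... */ }
static void lwlock_release(void) { /* ... drop the lock ... */ resume_interrupts(); }

int main(void)
{
    interrupt_pending = true;   /* pretend a cancel arrived */
    lwlock_acquire();
    check_for_interrupts();     /* no-op: holdoff count is 1 */
    lwlock_release();
    check_for_interrupts();     /* interrupt is serviced here */
    return 0;
}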


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-07 Thread Tom Lane
Simon Riggs si...@2ndquadrant.com writes:
 Please consider this as a serious proposal for tuning in 9.1.

Look: it is at least four months too late for anything of the sort in 9.1.
We should be fixing bugs, and nothing else, if we ever want to get 9.1
out the door.  Performance improvements don't qualify, especially not
ones that tinker with fundamental parts of the system and seem highly
likely to introduce new bugs.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [Pgbuildfarm-members] CREATE FUNCTION hang on test machine polecat on HEAD

2011-06-07 Thread Alex Hunsaker
On Mon, Jun 6, 2011 at 21:16, Robert Creager robert.crea...@oracle.com wrote:

 That's weird. Why it should hang there I have no idea. Did it hang at the
 same spot both times? Can you get a backtrace?

 I think so, but I didn't pay much attention :-(
 GNU gdb 6.3.50-20050815 (Apple version gdb-1518) (Sat Feb 12 02:52:12 UTC
 2011)
 Copyright 2004 Free Software Foundation, Inc.
 GDB is free software, covered by the GNU General Public License, and you are
 welcome to change it and/or distribute copies of it under certain
 conditions.
 Type show copying to see the conditions.
 There is absolutely no warranty for GDB.  Type show warranty for details.
 This GDB was configured as x86_64-apple-darwin...Reading symbols for
 shared libraries .. done

 Attaching to program: `/Volumes/High
 Usage/usr/local/src/build-farm-4.4/builds/HEAD/inst/bin/postgres', process
 24698.
 Reading symbols for shared libraries .+. done
 0x000100a505e4 in Perl_get_hash_seed ()
 (gdb) bt
 #0  0x000100a505e4 in Perl_get_hash_seed ()
 #1  0x000100a69b94 in perl_parse ()

Perl_get_hash_seed is basically:

Perl_get_hash_seed {
    char *s = getenv("PERL_HASH_SEED");
    unsigned long myseed = 0;
    if (s) {
        ...
        myseed = atoul(s);
    }
    srand(Perl_seed());
    myseed = rand() * UV_MAX;
    return myseed;
}

U32 Perl_seed()
{
    U32 u;
    struct timeval when;
    ...
    open(fd, "/dev/urandom"...)
    read(fd, &u, sizeof(u));
    gettimeofday(&when, NULL);
    u = when[0] + SEED_C2 * when[1];
    u += getpid();
    u += PTR2UV(PL_stack_sp);
    return u;
}

I don't suppose /dev/urandom blocks on OS X?  Granted, I may have
missed something in translation with the macro fest that is perl...
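
For what it's worth, one way to test that assumption directly is a tiny
throwaway program that times a 4-byte read from /dev/urandom, roughly the
way Perl_seed() does -- just a test harness, not perl or PostgreSQL code:

/* Throwaway check: does a small read from /dev/urandom ever block or take
 * noticeable time on this box?  Prints the value read and the elapsed usec. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
    struct timeval t0, t1;
    unsigned int u = 0;
    int fd = open("/dev/urandom", O_RDONLY);

    if (fd < 0)
    {
        perror("open");
        return 1;
    }
    gettimeofday(&t0, NULL);
    if (read(fd, &u, sizeof(u)) != (ssize_t) sizeof(u))
        perror("read");
    gettimeofday(&t1, NULL);
    close(fd);

    printf("read 0x%08x in %ld usec\n", u,
           (long) ((t1.tv_sec - t0.tv_sec) * 1000000L +
                   (t1.tv_usec - t0.tv_usec)));
    return 0;
}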

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Range Types and extensions

2011-06-07 Thread Jeff Davis
On Tue, 2011-06-07 at 11:15 -0400, Tom Lane wrote:
 Merlin Moncure mmonc...@gmail.com writes:
  right. hm -- can you have multiple range type definitions for a
  particular type?
 
 In principle, sure, if the type has multiple useful sort orderings.

Right. Additionally, you might want to use different canonical
functions for the same subtype.

 I don't immediately see any core types for which we'd bother.

Agreed.

 BTW, Jeff, have you worked out the implications of collations for
 textual range types?

Well, "it seems to work" is about as far as I've gotten.

As far as the implications, I'll need to do a little more research and
thinking. But I don't immediately see anything too worrisome.

Regards,
Jeff Davis



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-07 Thread Joshua Berkus
 ... The
 reason we usually skip the summer isn't actually a wholesale lack of
 people - it's because it's not so good from a publicity perspective,
 and it's hard to get all the packagers around at the same time.

Actually, the summer is *excellent* from a publicity perspective ... at least, 
June and July are.  Both of those months are full of US conferences whose PR we 
can piggyback on to make a splash.

August is really the only bad month from a PR perspective, because we lose a 
lot of our European RCs, and there's no bandwagons to jump on.  But even August 
has the advantage of having no major US or Christian holidays to interfere with 
release dates.

However, we're more likely to have an issue with *packager* availability in 
August.  Besides, isn't this a little premature?  Last I looked, we still have 
some big nasty open items.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
San Francisco

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-07 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 ... I think at the next developer meeting we're going to
 get to hear Tom argue that overlapping the end of beta with the
 beginning of the next release cycle is a mistake and we should go back
 to the old system where we yell at everyone to shut up unless they're
 helping test or fix bugs.

I think we have already got quite enough evidence to conclude that this
approach is broken.  Not only does it appear that hardly anybody but me
is actively working on stabilizing 9.1, but I'm wasting quite a bit of
my time trying to keep Simon from destabilizing it; to say nothing of
reacting to design proposals for 9.2 work (or else feeling guilty
because I'm ignoring them, which is in fact what I've mostly been
doing).

As a measure of how completely this is not working: I've had read the
SSI code as a number one priority item for about two months now, and
still haven't found time to read one line of it.

 Everyone who is arguing for the inclusion of this patch in 9.1 should
 take a minute to think about the following fact: If the PostgreSQL
 development process does not work for Tom, it does not work.

I'd like to think that I'm not the sole driver of this process.
However, if everybody else is going to start playing in their 9.2
sandbox and ignore getting a release out, then yeah it comes down
to how much bandwidth I've got.  And that's finite.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-07 Thread Joshua Berkus
Robert,

 Oh, I get that. I'm just dismayed that we can't have a discussion
 about the patch without getting sidetracked into a conversation about
 whether we should throw feature freeze out the window. 

That's not something you can change.  Whatever the patch is, even if it's a 
psql improvement, *someone* will argue that it's super-critical to shoehorn it 
into the release at the last minute.  It's a truism of human nature to 
rationalize exceptions where your own interest is concerned.

As long as we have solidarity of the committers that this is not allowed, 
however, this is not a real problem.  And it appears that we do.  In the 
future, it shouldn't even be necessary to discuss it.

For my part, I'm excited that we seem to be getting some big hairy important 
patches in to CF1, which means that those patches will be well-tested by the 
time 9.2 reaches beta.  Especially getting Robert's patch and Simon's 
WALInsertLock work into CF1 means that we'll have 7 months to find serious bugs 
before beta starts.  So I'd really like to carry on with the current 
development schedule.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
San Francisco

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Range Types and extensions

2011-06-07 Thread Jeff Davis
On Mon, 2011-06-06 at 14:42 -0700, Darren Duncan wrote:
 On this note, here's a *big* thing that needs discussion ...

[ referring to the concept of discrete versus continuous ranges ]

Yes, there has been much discussion on this topic already.

The solution right now is that they both behave like continuous ranges
for most operations. But each time a value is produced, a discrete range
has a canonicalize function that aligns it to the proper boundaries
and chooses a convention from [], [), (], (). For discrete ranges that's
only a convention, because multiple representations are equal in value,
but that's not so for continuous ranges.

Another approach would be to offer next and prev functions instead
of canonical, or a plus(thetype, integer) and minus(thetype,
integer).
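
Just to make the idea concrete, here is a toy, self-contained sketch
(made-up names, not the proposed catalog API) of what the canonical step
buys you for a discrete integer range: every equivalent spelling collapses
onto the [) convention, so [1,3] and [1,4) end up as the same value.

#include <stdbool.h>
#include <stdio.h>

typedef struct
{
    int  lo;
    int  hi;
    bool lo_inc;    /* '[' vs '(' */
    bool hi_inc;    /* ']' vs ')' */
} int_range;

/* Normalize to [lo, hi): inclusive lower bound, exclusive upper bound. */
static int_range canonicalize(int_range r)
{
    if (!r.lo_inc)
    {
        r.lo++;             /* (n, ...  becomes  [n+1, ... */
        r.lo_inc = true;
    }
    if (r.hi_inc)
    {
        r.hi++;             /* ..., n]  becomes  ..., n+1) */
        r.hi_inc = false;
    }
    return r;
}

int main(void)
{
    int_range a = canonicalize((int_range){1, 3, true, true});   /* [1,3] */
    int_range b = canonicalize((int_range){1, 4, true, false});  /* [1,4) */

    /* After canonicalization both spellings compare equal. */
    printf("a = [%d,%d)  b = [%d,%d)  equal = %s\n",
           a.lo, a.hi, b.lo, b.hi,
           (a.lo == b.lo && a.hi == b.hi) ? "yes" : "no");
    return 0;
}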


 Can Pg be changed to support . in operator names as long as they don't just 
 appear by themselves?  What would this break to do so?

Someone else would have to comment on that. My feeling is that it might
create problems with qualified names, and also with PG's arg.function
call syntax.

 foo in 1..10

 I believe it is quite reasonable to treat ranges like sets, in an abstract 
 sense, and so using set membership syntax like "in" is valid.

OK, I think I agree with this now. I'll think about it some more.

 I also see these as considerably less important and useful in practice than 
 the 
 continuous intervals.

[ multiranges ]

Agreed. I've left those alone for now, because it's a separate concept.

Regards,
Jeff Davis


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


9.1 release scheduling (was Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch)

2011-06-07 Thread Tom Lane
Joshua Berkus j...@agliodbs.com writes:
 Actually, the summer is *excellent* from a publicity perspective ... at 
 least, June and July are.  Both of those months are full of US conferences 
 whose PR we can piggyback on to make a splash.

 August is really the only bad month from a PR perspective, because we lose 
 a lot of our European RCs, and there's no bandwagons to jump on.  But even 
 August has the advantage of having no major US or Christian holidays to 
 interfere with release dates.

 However, we're more likely to have an issue with *packager* availability in 
 August.  Besides, isn't this a little premature?  Last I looked, we still 
 have some big nasty open items.

Well, we're trying to fix them --- I'm still hoping that the known beta
blockers will be cleared by Thursday so we can ship beta2.  However,
what happens after that is uncertain.  I'm concerned that once the CF
starts, the number of developer cycles devoted to 9.1 testing will go to
zero, meaning that four weeks or so from now when the CF is over, we'll
have made no real progress beyond beta2.  It's hard to see how we have a
release before August if that's how things stand in early July.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-07 Thread Robert Haas
On Tue, Jun 7, 2011 at 1:27 PM, Joshua Berkus j...@agliodbs.com wrote:
 As long as we have solidarity of the committers that this is not allowed, 
 however, this is not a real problem.  And it appears that we do.  In the 
 future, it shouldn't even be necessary to discuss it.

Solidarity?

Simon - who was a committer last time I checked - seems to think that
the current process is entirely bunko.  And that is resulting in the
waste of a lot of time that could be better spent.  Our ability to
sustain this development process rests on the idea that we have some
kind of shared idea of what is and is not acceptable in general and at
particular points in the release cycle.  It *shouldn't* be necessary
to discuss it, but it apparently is.  Over and over and over again, in
fact.  It is critically important for the future success of this
project that we learn to walk and chew gum at the same time.  We are
failing outright.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [Pgbuildfarm-members] CREATE FUNCTION hang on test machine polecat on HEAD

2011-06-07 Thread Andrew Dunstan



On 06/07/2011 01:18 PM, Alex Hunsaker wrote:


I don't suppose /dev/urandom blocks on OS X?  Granted, I may have
missed something in translation with the macro fest that is perl...



I wondered if we were possibly exhausting some entropy pool. It seems 
like this would be just such a bad bug that it would be amazing if we 
were the first to trip up on it. But I guess you never know.


cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SIREAD lock versus ACCESS EXCLUSIVE lock

2011-06-07 Thread Kevin Grittner
Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote:
 
 It makes me a bit uncomfortable to do catalog cache lookups while 
 holding all the lwlocks.
 
I think I've caught up with the rest of the class on why this isn't
sane in DropAllPredicateLocksFromTableImpl, but I wonder about
CheckTableForSerializableConflictIn.  We *do* expect to be throwing
errors in here, and we need some way to tell whether an index is
associated with a particular heap relation.  Is the catalog cache
the right way to check that here, or is something else more
appropriate?
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: 9.1 release scheduling (was Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch)

2011-06-07 Thread Thom Brown
On 7 June 2011 19:32, Tom Lane t...@sss.pgh.pa.us wrote:
 Joshua Berkus j...@agliodbs.com writes:
 Actually, the summer is *excellent* from a publicity perspective ... at 
 least, June and July are.  Both of those months are full of US conferences 
 whose PR we can piggyback on to make a splash.

 August is really the only bad month from a PR perspective, because we lose 
 a lot of our European RCs, and there's no bandwagons to jump on.  But even 
 August has the advantage of having no major US or Christian holidays to 
 interfere with release dates.

 However, we're more likely to have an issue with *packager* availability in 
 August.  Besides, isn't this a little premature?  Last I looked, we still 
 have some big nasty open items.

 Well, we're trying to fix them --- I'm still hoping that the known beta
 blockers will be cleared by Thursday so we can ship beta2.  However,
 what happens after that is uncertain.  I'm concerned that once the CF
 starts, the number of developer cycles devoted to 9.1 testing will go to
 zero, meaning that four weeks or so from now when the CF is over, we'll
 have made no real progress beyond beta2.  It's hard to see how we have a
 release before August if that's how things stand in early July.

Speaking of which, is it now safe to remove the NOT VALID constraints
don't dump properly issue from the blocker list since the fix has
been committed?

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [Pgbuildfarm-members] CREATE FUNCTION hang on test machine polecat on HEAD

2011-06-07 Thread Andres Freund
On Tuesday, June 07, 2011 19:40:21 Andrew Dunstan wrote:
 On 06/07/2011 01:18 PM, Alex Hunsaker wrote:
  I don't suppose /dev/urandom blocks on OS X?  Granted, I may have
  missed something in translation with the macro fest that is perl...
 
 I wondered if we were possibly exhausting some entropy pool. It seems
 like this would be just such a bad bug that it would be amazing if we
 were the first to trip up on it. But I guess you never know.
Shouldn't the backtrace show a syscall in that case?

I guess one would need a debug perl build + single stepping for a more 
convincing answer...

Andres

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [Pgbuildfarm-members] CREATE FUNCTION hang on test machine polecat on HEAD

2011-06-07 Thread Tom Lane
Alex Hunsaker bada...@gmail.com writes:
 On Mon, Jun 6, 2011 at 21:16, Robert Creager robert.crea...@oracle.com 
 wrote:
 (gdb) bt
 #0  0x000100a505e4 in Perl_get_hash_seed ()
 #1  0x000100a69b94 in perl_parse ()

 I don't suppose /dev/urandom blocks on OS X?

The man page for it avers not, and besides it's hard to believe that
there wouldn't be a libc routine or two on the stack if we were blocked
in a kernel call, and also Robert showed that the process was consuming
CPU time, so it's not blocked.  Tis puzzling if there's no loop in the
function.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [Pgbuildfarm-members] CREATE FUNCTION hang on test machine polecat on HEAD

2011-06-07 Thread Christopher Browne
On Tue, Jun 7, 2011 at 5:40 PM, Andrew Dunstan and...@dunslane.net wrote:


 On 06/07/2011 01:18 PM, Alex Hunsaker wrote:

 I don't suppose /dev/urandom blocks on OS X?  Granted, I may have
 missed something in translation with the macro fest that is perl...


 I wondered if we were possibly exhausting some entropy pool. It seems like
 this would be just such a bad bug that it would be amazing if we were the
 first to trip up on it. But I guess you never know.

/dev/urandom is the one that's supposed to be unblocking (that's
what the "u" is for).

Supposedly, /dev/random and /dev/urandom behave identically on OS-X,
using Yarrow for RNG.  It shouldn't be blocking.

http://developer.apple.com/library/mac/#documentation/Darwin/Reference/ManPages/man4/urandom.4.html
-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-07 Thread Robert Haas
On Tue, Jun 7, 2011 at 1:21 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 ... I think at the next developer meeting we're going to
 get to hear Tom argue that overlapping the end of beta with the
 beginning of the next release cycle is a mistake and we should go back
 to the old system where we yell at everyone to shut up unless they're
 helping test or fix bugs.

 I think we have already got quite enough evidence to conclude that this
 approach is broken.  Not only does it appear that hardly anybody but me
 is actively working on stabilizing 9.1, but I'm wasting quite a bit of
 my time trying to keep Simon from destabilizing it; to say nothing of
 reacting to design proposals for 9.2 work (or else feeling guilty
 because I'm ignoring them, which is in fact what I've mostly been
 doing).

 As a measure of how completely this is not working: I've had read the
 SSI code as a number one priority item for about two months now, and
 still haven't found time to read one line of it.

 Everyone who is arguing for the inclusion of this patch in 9.1 should
 take a minute to think about the following fact: If the PostgreSQL
 development process does not work for Tom, it does not work.

 I'd like to think that I'm not the sole driver of this process.
 However, if everybody else is going to start playing in their 9.2
 sandbox and ignore getting a release out, then yeah it comes down
 to how much bandwidth I've got.  And that's finite.

I plead guilty to taking my eye off the ball post-beta1.  I busted my
ass for two months stabilizing other people's code after CF4 was over,
and then I moved on to other things.  I will try to get my eye back on
the ball - but actually I'm not sure there's all that much to do.   A
quick review of the open items list suggests that we have fixed a
total of six issues since beta1, as opposed to 47 prior to beta1.  And
all of those are being handled (two by you).  I also don't see much in
the way of unanswered 9.1 bug reports on pgsql-bugs, either.  There
may well be other open items, and I'm not unwilling to work on them,
but I don't read minds.  What needs doing?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: 9.1 release scheduling (was Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch)

2011-06-07 Thread Robert Haas
On Tue, Jun 7, 2011 at 1:45 PM, Thom Brown t...@linux.com wrote:
 Speaking of which, is it now safe to remove the NOT VALID constraints
 don't dump properly issue from the blocker list since the fix has
 been committed?

I hope so, because I just did that (before noticing this email from you).

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-07 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Tue, Jun 7, 2011 at 1:27 PM, Joshua Berkus j...@agliodbs.com wrote:
 As long as we have solidarity of the committers that this is not allowed, 
 however, this is not a real problem.  And it appears that we do.  In the 
 future, it shouldn't even be necessary to discuss it.

 Solidarity?

 Simon - who was a committer last time I checked - seems to think that
 the current process is entirely bunko.  And that is resulting in the
 waste of a lot of time that could be better spent.

Yes.  If it were anybody but Simon, we wouldn't be spending a lot of
time on it; we'd just say sorry, this has to wait for 9.2 and that
would be the end of it.  As things stand, we have to convince him not to
commit these things ... or else be prepared to fight a war over whether
to revert them, which will be even more time-consuming and
trust-destroying.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Range Types and extensions

2011-06-07 Thread Tom Lane
Jeff Davis pg...@j-davis.com writes:
 On Mon, 2011-06-06 at 14:42 -0700, Darren Duncan wrote:
 Can Pg be changed to support . in operator names as long as they don't 
 just 
 appear by themselves?  What would this break to do so?

 Someone else would have to comment on that.

DOT_DOT is already a token in plpgsql; trying to make it be also an
operator name would break a lot of existing plpgsql code.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SIREAD lock versus ACCESS EXCLUSIVE lock

2011-06-07 Thread Heikki Linnakangas

On 07.06.2011 20:42, Kevin Grittner wrote:

Heikki Linnakangasheikki.linnakan...@enterprisedb.com  wrote:


It makes me a bit uncomfortable to do catalog cache lookups while
holding all the lwlocks.


I think I've caught up with the rest of the class on why this isn't
sane in DropAllPredicateLocksFromTableImpl, but I wonder about
CheckTableForSerializableConflictIn.  We *do* expect to be throwing
errors in here, and we need some way to tell whether an index is
associated with a particular heap relation.  Is the catalog cache
the right way to check that here, or is something else more
appropriate?


Hmm, it's not as dangerous there, as you're not in the middle of 
modifying stuff, but it doesn't feel right there either.


Predicate locks on indexes are only needed to lock key ranges, to notice 
later insertions into the range, right? For locks on tuples that do 
exist, we have locks on the heap. If we're just about to delete every 
tuple in the heap, that doesn't need to conflict with any locks on 
indexes, because we're deleting, not inserting. So I don't think we need 
to care about index locks here at all, only locks on the heap. Am I 
missing something?


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-07 Thread Simon Riggs
On Tue, Jun 7, 2011 at 6:33 PM, Robert Haas robertmh...@gmail.com wrote:
 On Tue, Jun 7, 2011 at 1:27 PM, Joshua Berkus j...@agliodbs.com wrote:
 As long as we have solidarity of the committers that this is not allowed, 
 however, this is not a real problem.  And it appears that we do.  In the 
 future, it shouldn't even be necessary to discuss it.

 Solidarity?

 Simon - who was a committer last time I checked - seems to think that
 the current process is entirely bunko.

I'm not sure why anyone that disagrees with you should be accused of
wanting to junk the whole process. I've not said that and I don't
think this.

Before you arrived, it was quite normal to suggest tuning patches
after feature freeze.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SIREAD lock versus ACCESS EXCLUSIVE lock

2011-06-07 Thread Tom Lane
Kevin Grittner kevin.gritt...@wicourts.gov writes:
 I think I've caught up with the rest of the class on why this isn't
 sane in DropAllPredicateLocksFromTableImpl, but I wonder about
 CheckTableForSerializableConflictIn.  We *do* expect to be throwing
 errors in here, and we need some way to tell whether an index is
 associated with a particular heap relation.  Is the catalog cache
 the right way to check that here, or is something else more
 appropriate?

Just to answer the question (independently of Heikki's concern about
whether this is needed at all): it depends on the information you have.
If all you have is the index OID, then yeah a catcache lookup in
pg_index is probably the best thing.  If you have an open Relation for
the index, you could instead look into its cached copy of its pg_index
row.  If you have an open Relation for the table, I'd think that looking
for a match in RelationGetIndexList() would be the cheapest, since more
than likely that information is already cached.
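
For orientation only, the RelationGetIndexList() variant boils down to
something like the following (backend-side code, so not standalone-runnable,
and the wrapper name here is made up for illustration):

#include "postgres.h"
#include "nodes/pg_list.h"
#include "utils/rel.h"
#include "utils/relcache.h"

/* Hypothetical helper: does indexOid belong to the already-open heap rel? */
static bool
index_belongs_to_heap(Relation heapRel, Oid indexOid)
{
    List   *indexoidlist = RelationGetIndexList(heapRel);
    bool    result = list_member_oid(indexoidlist, indexOid);

    list_free(indexoidlist);
    return result;
}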

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SIREAD lock versus ACCESS EXCLUSIVE lock

2011-06-07 Thread Kevin Grittner
Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote:
 
 Predicate locks on indexes are only needed to lock key ranges, to
 notice later insertions into the range, right? For locks on tuples
 that do exist, we have locks on the heap. If we're just about to
 delete every tuple in the heap, that doesn't need to conflict with
 any locks on indexes, because we're deleting, not inserting. So I
 don't think we need to care about index locks here at all, only
 locks on the heap. Am I missing something?
 
You're right again.  My brain must be turning to mush.  This
function can also become simpler, and there is now no reason at all
to add catalog cache lookups to predicate.c.  I think that leaves me
with all the answers I need to get a new patch out this evening
(U.S. Central Time).
 
Thanks,
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [Pgbuildfarm-members] CREATE FUNCTION hang on test machine polecat on HEAD

2011-06-07 Thread Alex Hunsaker
On Tue, Jun 7, 2011 at 11:48, Tom Lane t...@sss.pgh.pa.us wrote:
 Alex Hunsaker bada...@gmail.com writes:
 On Mon, Jun 6, 2011 at 21:16, Robert Creager robert.crea...@oracle.com 
 wrote:
 (gdb) bt
 #0  0x000100a505e4 in Perl_get_hash_seed ()
 #1  0x000100a69b94 in perl_parse ()

 I don't suppose /dev/urandom blocks on OS X?

 The man page for it avers not, and besides it's hard to believe that
 there wouldn't be a libc routine or two on the stack if we were blocked
 in a kernel call,

Yeah.

 and also Robert showed that the process was consuming
 CPU time, so it's not blocked.  Tis puzzling if there's no loop in the
 function.

Well there is one, I snipped it out for brevity (I don't see how it
could be at fault):

const char *s = PerlEnv_getenv("PERL_HASH_SEED");
if (s)
    while (isSPACE(*s))
        s++;
if (s && isDIGIT(*s))
    myseed = (UV)Atoul(s);
else
{
    srand(Perl_seed());
    myseed = rand() * UV_MAX;
    ...
}

Im looking at the raw perl 5.10.0 source... I wonder if apple is
shipping a modified version?

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SIREAD lock versus ACCESS EXCLUSIVE lock

2011-06-07 Thread Kevin Grittner
Tom Lane t...@sss.pgh.pa.us wrote:
 
 Just to answer the question (independently of Heikki's concern
 about whether this is needed at all): it depends on the
 information you have. If all you have is the index OID, then yeah
 a catcache lookup in pg_index is probably the best thing.  If you
 have an open Relation for the index, you could instead look into
 its cached copy of its pg_index row.  If you have an open Relation
 for the table, I'd think that looking for a match in
 RelationGetIndexList() would be the cheapest, since more than
 likely that information is already cached.
 
Thanks, I wasn't aware of RelationGetIndexList().  That's a good one
to remember.
 
The issue here was: given a particular heap relation, going through
a list of locks looking for matches, where the lock targets use
OIDs.  So if I really *did* need to check whether an index was
related to that heap relation, it sounds like the catcache approach
would have been right.  But as Heikki points out, the predicate
locks on the indexes only need to guard scanned *gaps* against
*insert* -- so a DROP TABLE or TRUNCATE TABLE can just skip looking
at any locks which aren't against the heap relation.
 
I think maybe I need a vacation -- you know, one where I'm not using
my vacation time to attend a PostgreSQL conference.
 
Thanks for the overview of what to use when; it takes a while, but I
think I'm gradually coming up to speed on the million lines of code
which comprise PostgreSQL.  Tips like that do help.
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [Pgbuildfarm-members] CREATE FUNCTION hang on test machine polecat on HEAD

2011-06-07 Thread Tom Lane
Alex Hunsaker bada...@gmail.com writes:
 Im looking at the raw perl 5.10.0 source... I wonder if apple is
 shipping a modified version?

You could find out by digging around at
http://www.opensource.apple.com/
polecat appears to be running OSX 10.6.7, so this is what you want:
http://www.opensource.apple.com/tarballs/perl/perl-63.tar.gz

Another question worth asking here is whether PG is picking up perl
5.10.0 or 5.8.9, both of which are shipped in this OSX release.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-07 Thread Stephen Frost
* Simon Riggs (si...@2ndquadrant.com) wrote:
 Before you arrived, it was quite normal to suggest tuning patches
 after feature freeze.

I haven't been around as long as some, but I think I've been around
longer than Robert, and I can say that I don't recall serious
performance patches, particularly ones around lock management and which
change a fair bit of code, generally being white-listed from feature
freeze or being pushed in after beta1.

Perhaps I've missed them or perhaps there's been a few exceptions that
I'm not remembering that make it look routine rather than an exception
basis.  We might have tweaked a config variable or changed a #define
somewhere close to the end of a cycle, but I really don't put those into
the same category as this change.

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] Range Types and extensions

2011-06-07 Thread Darren Duncan

Jeff Davis wrote:

On Tue, 2011-06-07 at 11:15 -0400, Tom Lane wrote:

Merlin Moncure mmonc...@gmail.com writes:

right. hm -- can you have multiple range type definitions for a
particular type?

In principle, sure, if the type has multiple useful sort orderings.


Right. Additionally, you might want to use different canonical
functions for the same subtype.


I don't immediately see any core types for which we'd bother.


Agreed.


BTW, Jeff, have you worked out the implications of collations for
textual range types?


Well, "it seems to work" is about as far as I've gotten.

As far as the implications, I'll need to do a little more research and
thinking. But I don't immediately see anything too worrisome.


I would expect ranges to have exactly the same semantics as ORDER BY or < etc. 
with respect to collations for textual range types.


If collation is an attribute of a textual type, meaning that the textual type or 
its values have a sense of their collation built-in, then ranges for those 
textual types should just work without any extra range-specific syntax, same 
as you could say ORDER BY without any further qualifiers.


If collation is not an attribute of a textual type, meaning that you normally 
have to qualify the desired collation for each order-sensitive operation using 
it (even if that can be defined by a session/etc setting which still just 
ultimately works at the operator rather than type level), or if a textual type 
can have it built in but it is overridable per operator, then either ranges 
should have an extra attribute saying what collation (or other type-specific 
order-determining function) to use, or all range operators take the optional 
collation parameter like with ORDER BY.


Personally, I think it is a more elegant programming language design for an 
ordered type to have its own sense of a one true canonical ordering of its 
values, and where one could conceptually have multiple orderings, there would be 
a separate data type for each one.  That is, while you probably only need a 
single type with respect to ordering for any real numeric type, for textual 
types you could have a separate textual type for each collation.


In particular, I say "separate type" because a collation can sometimes affect 
which text values compare as the same, as far as I know.


On a tangent, I believe that various insensitive comparisons or sortings are 
very reasonably expressed as collations rather than some other mechanism, eg if 
you wanted sortings that compare different letter case as same or not, or with 
or without accents as same or not.


So under this elegant system, there is no need to ever specify collation at 
the operator level (which could become quite verbose and unweildy), but instead 
you can cast data types if you want to change their sense of canonical ordering.


Now if the various text-specific operators are polymorphic across these text 
type variants, users don't generally have to know the difference except when it 
matters.


On a tangent, I believe that the best definition of equal or same in a type 
system is global substitutability.  Ignoring implementation details, if a 
program ever finds that 2 operands to the generic = (equality test) operator 
result in TRUE, then the program should feel free to replace all occurrences of 
one operand in the program with occurrences of the other, for optimization, 
because generic = returning TRUE means one is just as good as the other.  This 
assumes generally that we're dealing with immutable value types.


-- Darren Duncan


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SIREAD lock versus ACCESS EXCLUSIVE lock

2011-06-07 Thread Heikki Linnakangas

On 07.06.2011 21:10, Kevin Grittner wrote:

I think that leaves me
with all the answers I need to get a new patch out this evening
(U.S. Central Time).


Great, I'll review it in my morning (in about 12h)

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Range Types and extensions

2011-06-07 Thread Darren Duncan

Jeff Davis wrote:

On Mon, 2011-06-06 at 14:42 -0700, Darren Duncan wrote:
Can Pg be changed to support . in operator names as long as they don't just 
appear by themselves?  What would this break to do so?


Someone else would have to comment on that. My feeling is that it might
create problems with qualified names, and also with PG's arg.function
call syntax.


With respect to qualified names or arg.function, then unless the function 
can be symbolic, I considered your examples to be the "appear by themselves" 
case, hence "." by itself wouldn't be a new operator, and I generally assumed 
here that any multi-character operators containing "." would be symbolic.


In any event, I also saw Tom's reply about DOT_DOT being a token already.

-- Darren Duncan

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-07 Thread Kevin Grittner
Simon Riggs si...@2ndquadrant.com wrote:
 
 Before you arrived, it was quite normal to suggest tuning patches
 after feature freeze.
 
I've worn a lot of hats in the practical end of this industry, and
regardless of which perspective I look at this from, I can't think
of anything so destructive to productivity, developer morale,
meeting deadlines or release quality as slipping in just one more
item after feature freeze.  It's *always* something that someone
feels is so important that it's worth the delay and/or risk, and it
never works out well.
 
There are a lot of aspects of the development and release processes
on which I can see valid trade-offs and a lot of room for
negotiations and compromise, but having a feature freeze which is
treated seriously isn't one of them.  If nobody else was making an
issue of this, I still would be.
 
There's absolutely nothing personal or political in this -- I just
know what I've seen work and what I've seen cause problems.
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [Pgbuildfarm-members] CREATE FUNCTION hang on test machine polecat on HEAD

2011-06-07 Thread Alex Hunsaker
On Tue, Jun 7, 2011 at 12:22, Tom Lane t...@sss.pgh.pa.us wrote:
 Alex Hunsaker bada...@gmail.com writes:
 Im looking at the raw perl 5.10.0 source... I wonder if apple is
 shipping a modified version?

 You could find out by digging around at
 http://www.opensource.apple.com/
 polecat appears to be running OSX 10.6.7, so this is what you want:
 http://www.opensource.apple.com/tarballs/perl/perl-63.tar.gz

Thanks!

 Another question worth asking here is whether PG is picking up perl
 5.10.0 or 5.8.9, both of which are shipped in this OSX release.

I was looking at
http://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=polecatdt=2011-06-07%2015%3A23%3A34stg=config
which seems to point at 5.10.0.

Robert: perl -V might be useful

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [Pgbuildfarm-members] CREATE FUNCTION hang on test machine polecat on HEAD

2011-06-07 Thread Andrew Dunstan



On 06/07/2011 02:22 PM, Tom Lane wrote:

Alex Hunsakerbada...@gmail.com  writes:

Im looking at the raw perl 5.10.0 source... I wonder if apple is
shipping a modified version?

You could find out by digging around at
http://www.opensource.apple.com/
polecat appears to be running OSX 10.6.7, so this is what you want:
http://www.opensource.apple.com/tarballs/perl/perl-63.tar.gz

Another question worth asking here is whether PG is picking up perl
5.10.0 or 5.8.9, both of which are shipped in this OSX release.


configure: using perl 5.10.0

cheers

andrew




--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-07 Thread Robert Haas
On Tue, Jun 7, 2011 at 2:06 PM, Simon Riggs si...@2ndquadrant.com wrote:
 On Tue, Jun 7, 2011 at 6:33 PM, Robert Haas robertmh...@gmail.com wrote:
 On Tue, Jun 7, 2011 at 1:27 PM, Joshua Berkus j...@agliodbs.com wrote:
 As long as we have solidarity of the committers that this is not allowed, 
 however, this is not a real problem.  And it appears that we do.  In the 
 future, it shouldn't even be necessary to discuss it.

 Solidarity?

 Simon - who was a committer last time I checked - seems to think that
 the current process is entirely bunko.

 I'm not sure why anyone that disagrees with you should be accused of
 wanting to junk the whole process. I've not said that and I don't
 think this.

 Before you arrived, it was quite normal to suggest tuning patches
 after feature freeze.

I, of course, am not in a position to comment on what happened before
I arrived.  But of the six committers who have weighed in on this
thread, you're the only one who thinks this can plausibly be called a
tuning patch.  Nor would the outcome of this discussion have been any
different if I hadn't participated in it, which is why I steered clear
of the whole topic of how the patch should be handled procedurally for
the first three days.  By the time I weighed in with my opinion, Tom
and Heikki had already expressed theirs.

Now it's possible that my influence is so widespread and pernicious
that I've managed to convince to change Tom and Heikki's opinions on
the topic of feature freeze.  Perhaps, three years ago, they would
have been willing to accept the patch at the last minute, but now,
because of my advocacy for a disciplined feature freeze, they are not.
 To accept this argument, you would have to believe that I have the
power to make Tom Lane more conservative.  I don't believe I have
either the power or the inclination to do any such thing.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-07 Thread Tom Lane
Simon Riggs si...@2ndquadrant.com writes:
 Before you arrived, it was quite normal to suggest tuning patches
 after feature freeze.

*Low risk* tuning patches make sense at this stage, yes.  Fooling with
the lock mechanisms doesn't qualify as low risk in my book.  The
probability of undetected subtle problems is just too great.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] heap vacuum cleanup locks

2011-06-07 Thread Greg Stark
On Mon, Jun 6, 2011 at 11:30 PM, Simon Riggs si...@2ndquadrant.com wrote:
 But I think you've hit the important point here. The problem is not
 whether VACUUM waits for the pin, its that the pins can be held for
 extended periods.

Yes

 It makes more sense to try to limit pin hold times than it does to
 come up with pin avoidance techniques.

Well it's super-exclusive-vacuum-lock avoidance techniques. Why
shouldn't it make more sense to try to reduce the frequency and impact
of the single-purpose outlier in a non-critical-path instead of
burdening every other data reader with extra overhead?

I think Robert's plan is exactly right though I would phrase it
differently. We should get the exclusive lock, freeze/kill any xids
and line pointers, then if the pin-count is 1 do the compaction.

I'm really wishing we had more bits in the vm. It looks like we could use:
 - contains not-all-visible tuples
 - contains not-frozen xids
 - in need of compaction

I'm sure we could find a use for one more page-level vm bit too.



-- 
greg

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] patch for new feature: Buffer Cache Hibernation

2011-06-07 Thread Greg Smith

On 06/05/2011 08:50 AM, Mitsuru IWASAKI wrote:

It seems that I don't have enough time to complete this work.
You don't need to keep cc'ing me, and I'm very happy if postgres becomes
the first DBMS to support the buffer cache hibernation feature.
   


Thanks for submitting the patch, and we'll see what happens from here.  
I've switched to bcc'ing you here and we should get you off everyone 
else's cc: list here soon.  If this feature ends up getting committed, 
I'll try to remember to drop you a note about it so you can see what 
happened.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-07 Thread Jignesh Shah
On Mon, Jun 6, 2011 at 11:20 PM, Jignesh Shah jks...@gmail.com wrote:


 Okay I tried it out with sysbench read scaling test..
 Note I had tried that earlier on 9.0
 http://jkshah.blogspot.com/2010/11/postgresql-90-simple-select-scaling.html

 And on that test I found that doing that test on anything bigger than
 4 cores led to decreased performance ..
 Redoing the same test with 100 users on a 4 vCPU Virtual Machine with
 8GB RAM and 1M rows, I get
   transactions:                        17870082 (59566.46 per sec.)
 which is inline with the best number on 9.0.
 This test hardly had any idle CPUs.

 However where it made a huge impact was doing the same test on my 8
 vCPU VM with 8GB RAM I get
    transactions:                        33274594 (110914.85 per sec.)

 which is a whopping 1.8x improvement for a 2x increase (from 4 to 8 vCPU)..
 My idle CPU was less than 7%, which, considering that the useful work
 is in line with my expectations, is really impressive..
 (And the last time I did MySQL they were around 95K or so for the
 same test).


 Next step DBT-2..



I tried with a warehouse size of 50, all cached in memory, and my
initial tests with DBT-2 using 8 vCPU do not show any major changes
for a quick 10 minute run. I did eliminate write bottlenecks for this
test so as to stress the locks (using full_page_writes=off,
synchronous_commit=off, etc). I also have a large enough bufferpool to
fit the whole 50-warehouse DB in memory.

Without patch score:  29088 NOTPM
With patch score:     30161 NOTPM

It could be that I have other problems in the setup.. One of the things
I noticed is that there are too many idle in transaction connections
being reported, which tells me something else is becoming a bottleneck
here :-) I also tested with multiple clients but saw similar results..
postgresql shows multiple connections idle in transaction and fetches in
waiting, while the clients show waiting in pqSocketCheck, as shown below
for example.

#0  0x7fc4e83a43c6 in poll () from /lib64/libc.so.6
#1  0x7fc4e8abd61a in pqSocketCheck ()
#2  0x7fc4e8abd730 in pqWaitTimed ()
#3  0x7fc4e8abc215 in PQgetResult ()
#4  0x7fc4e8abc398 in PQexecFinish ()
#5  0x004050e1 in execute_new_order ()
#6  0x0040374f in process_transaction ()
#7  0x00403519 in db_worker ()


So yes for DBT2 I think this is inconclusive since there still could
be other bottlenecks in play..  (Networking included)
But overall yes I like the sysbench read scaling numbers quite a bit..


Regards,
Jignesh

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] heap vacuum cleanup locks

2011-06-07 Thread Simon Riggs
On Tue, Jun 7, 2011 at 8:24 PM, Greg Stark gsst...@mit.edu wrote:
 On Mon, Jun 6, 2011 at 11:30 PM, Simon Riggs si...@2ndquadrant.com wrote:
 But I think you've hit the important point here. The problem is not
 whether VACUUM waits for the pin, its that the pins can be held for
 extended periods.

 Yes

 It makes more sense to try to limit pin hold times than it does to
 come up with pin avoidance techniques.

 Well it's super-exclusive-vacuum-lock avoidance techniques. Why
 shouldn't it make more sense to try to reduce the frequency and impact
 of the single-purpose outlier in a non-critical-path instead of
 burdening every other data reader with extra overhead?

 I think Robert's plan is exactly right though I would phrase it
 differently. We should get the exclusive lock, freeze/kill any xids
 and line pointers, then if the pin-count is 1 do the compaction.

Would that also be possible during recovery?

A similar problem exists with Hot Standby, so I'm worried fixing just
VACUUMs would be a kluge.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-07 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 I plead guilty to taking my eye off the ball post-beta1.  I busted my
 ass for two months stabilizing other people's code after CF4 was over,
 and then I moved on to other things.  I will try to get my eye back on
 the ball - but actually I'm not sure there's all that much to do.   A
 quick review of the open items list suggests that we have fixed a
 total of six issues since beta1, as opposed to 47 prior to beta1.  And
 all of those are being handled (two by you).  I also don't see much in
 the way of unanswered 9.1 bug reports on pgsql-bugs, either.  There
 may well be other open items, and I'm not unwilling to work on them,
 but I don't read minds.  What needs doing?

Well, right at the moment there's not that much (if there were, I'd not
have proposed wrapping beta2 in two days).  You could look at some of
the not blocker items on the open-items list --- we really ought to
either do those things, or punt them off to TODO or the next CF as
appropriate, sometime before 9.1 final.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] contrib/citext versus collations

2011-06-07 Thread Greg Stark
On Mon, Jun 6, 2011 at 9:14 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 The most workable alternative that I can see is to lobotomize citext so
 that it always does lower-casing according to the database's default
 collation, which would allow us to pretend that its notion of equality
 is not collation-sensitive after all.  We could hope to improve this in
 future release cycles, but not till we've done the infrastructure work
 outlined above.  One bit of infrastructure that might be a good idea is
 a flag to indicate whether an equality operator's behavior is
 potentially collation-dependent, so that we could avoid taking
 performance hits in the normal case.

 Comments, other ideas?

That would also mean that 9.1's citext will be no worse than 9.0, it
just won't have the 9.1 collation goodness.

Random thought -- the collation used for citext is not really the same
as the default collation for ordering in sql. Perhaps it could be
stored in the typmod? So you could declare different columns to be
case insensitive according to specific collations. And it would be
free to cast between them but would have to be explicit. I'm not sure
that's actually a good idea; it was just a first thought.
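
For illustration only, a purely hypothetical sketch of what that might
look like -- none of this syntax exists today, and the locale name in the
typmod position is just an assumption of the sketch:

-- hypothetical: pin the column's case-folding rules via the typmod,
-- independently of the collation used for ORDER BY
CREATE TABLE people (
    name  citext('en_US')
);
-- mixing columns declared with different folding locales would then
-- require an explicit cast, per the idea above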

-- 
greg

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-07 Thread Simon Riggs
On Tue, Jun 7, 2011 at 7:55 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Simon Riggs si...@2ndquadrant.com writes:
 Before you arrived, it was quite normal to suggest tuning patches
 after feature freeze.

 *Low risk* tuning patches make sense at this stage, yes.  Fooling with
 the lock mechanisms doesn't qualify as low risk in my book.  The
 probability of undetected subtle problems is just too great.

Good, then we do agree. Some things are allowed, with suitable
justification. That has not been a point accepted by everybody here
though.

Upthread, I proposed that we leave Robert's patch until 9.2. That was
*after* I had reviewed it for impact and risk. I agree it's high risk,
and so it must be put off until normal development opens, because of the
sensitivity and criticality of getting the locking interactions right.

Moving on from that, I have proposed other solutions. Koichi, Jignesh
and then Robert have shown measurements of the huge contention in
this area of our software. Robert's patch addresses the problems, as
do Koichi's and my latest patch.  I would like to see us do
*something* about these problems for 9.1. Not all of them are risky or
time consuming. I'm clearly not alone in this thought; Dave, Dimitri
and Koichi-san have also spoken in favour of action for this release.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-07 Thread Tom Lane
Simon Riggs si...@2ndquadrant.com writes:
 Moving on from that, I have proposed other solutions. Koichi, Jignesh
 and then Robert have shown measurements of the huge contention in
 this area of our software. Robert's patch addresses the problems, as
 do Koichi's and my latest patch.  I would like to see us do
 *something* about these problems for 9.1. Not all of them are risky or
 time consuming.

In the first place, all of these issues predate 9.1 by years.  They are
not regressions or new bugs, and they have not suddenly gotten more
urgent.  In the second place, I haven't seen any proposals in the area
that appear low risk.  I seriously doubt that I would consider *any*
meaningful change in the locking area to be low risk.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-07 Thread Robert Haas
On Tue, Jun 7, 2011 at 3:44 PM, Jignesh Shah jks...@gmail.com wrote:
 On Mon, Jun 6, 2011 at 11:20 PM, Jignesh Shah jks...@gmail.com wrote:
 Okay I tried it out with sysbench read scaling test..
 Note I had tried that earlier on 9.0
 http://jkshah.blogspot.com/2010/11/postgresql-90-simple-select-scaling.html

 And on that test I found that running it on anything bigger than
 4 cores led to decreased performance.
 Redoing the same test with 100 users on 4 vCPU Virtual Machine with
 8GB with 1M rows I get
   transactions:                        17870082 (59566.46 per sec.)
 which is inline with the best number on 9.0.
 This test hardly had any idle CPUs.

 However where it made a huge impact was doing the same test on my 8
 vCPU VM with 8GB RAM I get
    transactions:                        33274594 (110914.85 per sec.)

 which is a whopping 1.8x scaling for a 2x increase in vCPUs (from 4 to 8)..
 My idle CPU was less than 7%, which, considering that the useful work is
 in line with my expectations, is really impressive..
 (Plus, the last time I tested MySQL it was around 95K or so for the
 same test).


 Next step DBT-2..



 I tried with a warehouse size of 50 all cached in memory and my
 initial tests with DBT-2 using 8 vCPU does not show any major changes
 for a quick 10 minute run. I did eliminate write bottlenecks for this
 test so as to stress on locks (using full_page_writes=off,
 synchronous_commit=off, etc). I also have a large enough bufferpool to
 fit the entire 50-warehouse DB in memory.

 Without patch score:  29088 NOTPM
 With patch score:     30161 NOTPM

 It could be that I have other problems in the setup. One of the things
 I noticed is that there are too many idle connections being reported,
 which tells me something else is becoming a bottleneck here :-) I also
 tested with multiple clients but got similar results: postgresql shows
 multiple sessions idle in transaction or waiting on a fetch, while the
 clients show themselves waiting in SocketCheck, as in the example below.

 #0  0x7fc4e83a43c6 in poll () from /lib64/libc.so.6
 #1  0x7fc4e8abd61a in pqSocketCheck ()
 #2  0x7fc4e8abd730 in pqWaitTimed ()
 #3  0x7fc4e8abc215 in PQgetResult ()
 #4  0x7fc4e8abc398 in PQexecFinish ()
 #5  0x004050e1 in execute_new_order ()
 #6  0x0040374f in process_transaction ()
 #7  0x00403519 in db_worker ()


 So yes, for DBT-2 I think this is inconclusive, since there could still
 be other bottlenecks in play (networking included).
 But overall, yes, I like the sysbench read scaling numbers quite a bit..

I think you will find that for write workloads WALInsertLock is so
badly contended that nothing else matters.  We really need to spend
some time working on that during the 9.2 cycle, but I don't have
anything that resembles a plan at this point.  If you have the cycles,
try compiling with LWLOCK_STATS defined and looking at the blk
numbers just to confirm that's where the bottleneck is.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-07 Thread Simon Riggs
On Tue, Jun 7, 2011 at 9:00 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Simon Riggs si...@2ndquadrant.com writes:
 Moving on from that, I have proposed other solutions. Koichi, Jignesh
 and then Robert have shown measurements of the huge contention in
 this area of our software. Robert's patch addresses the problems, as
 do Koichi's and my latest patch.  I would like to see us do
 *something* about these problems for 9.1. Not all of them are risky or
 time consuming.

 In the first place, all of these issues predate 9.1 by years.  They are
 not regressions or new bugs, and they have not suddenly gotten more
 urgent.  In the second place, I haven't seen any proposals in the area
 that appear low risk.  I seriously doubt that I would consider *any*
 meaningful change in the locking area to be low risk.

That's a shame. We'll fix it in 9.2 then.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] contrib/citext versus collations

2011-06-07 Thread Tom Lane
Greg Stark gsst...@mit.edu writes:
 On Mon, Jun 6, 2011 at 9:14 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 The most workable alternative that I can see is to lobotomize citext so
 that it always does lower-casing according to the database's default
 collation, which would allow us to pretend that its notion of equality
 is not collation-sensitive after all.

 That would also mean that 9.1's citext will be no worse than 9.0, it
 just won't have the 9.1 collation goodness.

On further reflection, I'm wondering exactly how much goodness to chop
off there.  What I'd originally been thinking was to just lobotomize the
case-folding step, and allow citext's comparison operators to still
respond to input collation when comparing the folded strings.  However,
I can imagine that some combinations of languages might produce pretty
weird results if we do that.  Should we lobotomize the comparisons too?
Or is the ability to affect the sort order valuable enough to put up
with whatever corner-case funnies there might be?
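
As a concrete illustration of the sort of corner case in question (a hedged
sketch only, assuming a UTF-8 database and a Turkish locale installed under
the name "tr_TR.utf8"):

SELECT 'I'::citext = 'i'::citext;                        -- folds per the database default
SELECT 'I'::citext COLLATE "tr_TR.utf8" = 'i'::citext;   -- may fold 'I' to dotless 'ı' and return false

With the lobotomized case-folding step both queries would fold the same
way; the open question above is whether the underlying comparison should
still follow the input collation.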

 Random thought -- the collation used for citext is not really the same
 as the default collation for ordering in sql. Perhaps it could be
 stored in the typmod?

Again, I'm wondering whether that's really a good idea.  I think the
currently implemented behavior of citext (fold and compare both act
according to input collation) is really the right thing ... we just
can't do it all yet.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-07 Thread Robert Haas
On Tue, Jun 7, 2011 at 12:51 PM, Simon Riggs si...@2ndquadrant.com wrote:
 Stefan/Robert's observation that we perform a
 VirtualXactLockTableInsert() to no real benefit is a good one.

 It leads to the following simple patch to remove one lock table hit
 per transaction. It's a lot smaller impact on the LockMgr locks, but
 it will still be substantial. Performance tests please?

 This patch is much less invasive and has impact only on CREATE INDEX
 CONCURRENTLY and Hot Standby. It's taken me about 2 hours to write and
 test and there's no way it will cause any delay at all to the release
 schedule. (Though I'm sure Robert can improve it).

Incidentally, I spent the morning (before we got off on this tangent)
writing a patch to make VXID locks spring into existence on demand
instead of creating them for every transaction.  This applies on top
of my fastlock patch and fits in quite nicely with the existing
infrastructure that patch creates, and it helps modestly.  Well,
according to one metric, at least, it helps dramatically: traffic on
each lock manager partition drops from hundreds of thousands of
lock requests in a five-minute period to just a few hundred.  But the
actual user-visible performance benefit is fairly modest - it goes
from ~36K TPS unpatched to ~129K TPS with the fast relation locks
alone to ~138K TPS with the fast relation locks plus a similar hack
for fast VXID locks (all results with pgbench -c 36 -j 36 -n -S -T 300
on a Nate-Boley-provided 24-core box).  Now, I'm not going to knock a
7% performance improvement and the benefit may be larger on Stefan's
80-core box and I think it's definitely worth going to the trouble to
implement that optimization for 9.2, but it appears at least based on
the testing that I've done so far that the fast relation locks are the
big win and after that it gets much harder to make an improvement.  If
we were to fix ONLY the vxid issue in 9.1 as you were advocating, the
benefit would probably be much less, because at least in my tests, the
fast relation lock patch increases overall system throughput
sufficiently to cause a 12x increase in contention due to vxid
traffic.

With both the fast-relation locks and the fast-vxid locks in place, as
I mentioned, the lock manager partition lock contention is completely
gone; in fact the lock manager partition traffic is pretty much gone.
The remaining contention comes mostly from the free list locks (blk
~13%) and the buffer mapping locks (which were roughly: 800k shacq,
12000 exacq, 850 blk).  Interestingly, I saw that one buffer mapping
lock got about 5x hotter than the others, which is odd, but possibly
harmless, since the absolute amount of blocking is really rather small
(~0.1%).  At least for read performance, we may need to start looking
less at reducing lock contention and more at making the actual
underlying operations faster.

In the process of doing all of this, I discovered that I had neglected
to update GetLockConflicts() and, consequently, fastlock-v2 is broken
insofar as CREATE INDEX CONCURRENTLY and Hot Standby are concerned.  I
will fix that and post an updated version; and I'll also post the
follow-on patch to accelerate the VXID locks at that time.  In the
meantime, I would appreciate any review or testing of the remainder of
the patch.

 If we combine this patch with Koichi-san's recommended changes to the
 number of lock partitions, we will have considerable impact for 9.1.
 Robert will still get his day in the sun, just with 9.2.

At this point I am of the view that there is little point in
raising the number of lock partitions.  If you are doing very simple
SELECT statements across a large number of tables, then increasing the
number of lock partitions will help.  On read-write workloads, there's
really no benefit, because WALInsertLock contention is the bottleneck.
 And on read-only workloads that only touch one or a handful of
tables, the individual lock manager partitions where the locks fall
get very hot regardless of how many partitions you have.  Now that
does still leave some space for improvement - specifically, lots of
tables, read-only or read-mostly - but the fast-relation-lock and
fast-vxid-lock stuff will address those bottlenecks far more
thoroughly.  And increasing the number of lock partitions also has a
downside: it will slow down end-of-transaction cleanup, which is
already an area where we know we have problems.

There might be some point in raising the number of buffer mapping
partitions, but I don't know how to create a test case where it's
actually material, especially without the fastlock stuff.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [Pgbuildfarm-members] CREATE FUNCTION hang on test machine polecat on HEAD

2011-06-07 Thread Tom Lane
Robert Creager robert.crea...@oracle.com writes:
 Another question worth asking here is whether PG is picking up perl
 5.10.0 or 5.8.9, both of which are shipped in this OSX release.

 Hmm...  This might be a problem:

 which perl
 /opt/local/bin/perl

 type -a perl
 /opt/local/bin/perl
 /usr/bin/perl

 /opt/local/bin/perl -V
 This is perl, v5.8.9 built for darwin-2level

The configure log mentioned upthread says it's finding /usr/bin/perl,
so apparently the buildfarm is running with a different PATH than you're
using here.  But that log also shows

configure:7158: checking for flags to link embedded Perl
configure:7174: result:  -L/usr/local/lib  
-L/System/Library/Perl/5.10.0/darwin-thread-multi-2level/CORE -lperl -ldl -lm 
-lutil -lc

If there's anything perl-related in /usr/local/lib (not /opt/local/lib),
that could be confusing matters.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-07 Thread Simon Riggs
On Tue, Jun 7, 2011 at 9:52 PM, Robert Haas robertmh...@gmail.com wrote:

 If we were to fix ONLY the vxid issue in 9.1 as you were advocating

Sensible debate is impossible when you don't read what I've written.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [Pgbuildfarm-members] CREATE FUNCTION hang on test machine polecat on HEAD

2011-06-07 Thread Alvaro Herrera
Excerpts from Tom Lane's message of Tue Jun 07 14:22:13 -0400 2011:
 Alex Hunsaker bada...@gmail.com writes:
  I'm looking at the raw perl 5.10.0 source... I wonder if Apple is
  shipping a modified version?
 
 You could find out by digging around at
 http://www.opensource.apple.com/
 polecat appears to be running OSX 10.6.7, so this is what you want:
 http://www.opensource.apple.com/tarballs/perl/perl-63.tar.gz
 
 Another question worth asking here is whether PG is picking up perl
 5.10.0 or 5.8.9, both of which are shipped in this OSX release.

Another question is whether this environment variable is set at all.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-07 Thread Robert Haas
On Tue, Jun 7, 2011 at 5:43 PM, Simon Riggs si...@2ndquadrant.com wrote:
 On Tue, Jun 7, 2011 at 9:52 PM, Robert Haas robertmh...@gmail.com wrote:
 If we were to fix ONLY the vxid issue in 9.1 as you were advocating

 Sensible debate is impossible when you don't read what I've written.

I've read every word you've written on this thread.  Much of it,
multiple times.  I am unclear what we are arguing about.  I don't want
to have a debate.  I want to figure out what works, and do it.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-07 Thread Josh Berkus
On 6/7/11 1:11 PM, Simon Riggs wrote:
 that appear low risk.  I seriously doubt that I would consider *any*
  meaningful change in the locking area to be low risk.
 That's a shame. We'll fix it in 9.2 then.

I will point out that we bounced Alvaro's FK patch, which *was*
submitted in time for CF4, because of unknown locking impact.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] heap vacuum cleanup locks

2011-06-07 Thread Robert Haas
On Tue, Jun 7, 2011 at 3:43 PM, Simon Riggs si...@2ndquadrant.com wrote:
 On Tue, Jun 7, 2011 at 8:24 PM, Greg Stark gsst...@mit.edu wrote:
 On Mon, Jun 6, 2011 at 11:30 PM, Simon Riggs si...@2ndquadrant.com wrote:
 But I think you've hit the important point here. The problem is not
 whether VACUUM waits for the pin, its that the pins can be held for
 extended periods.

 Yes

 It makes more sense to try to limit pin hold times than it does to
 come up with pin avoidance techniques.

 Well it's super-exclusive-vacuum-lock avoidance techniques. Why
 shouldn't it make more sense to try to reduce the frequency and impact
 of the single-purpose outlier in a non-critical-path instead of
 burdening every other data reader with extra overhead?

 I think Robert's plan is exactly right though I would phrase it
 differently. We should get the exclusive lock, freeze/kill any xids
 and line pointers, then if the pin-count is 1 do the compaction.

 Would that also be possible during recovery?

 A similar problem exists with Hot Standby, so I'm worried fixing just
 VACUUMs would be a kluge.

We have to do the same operation on both the master and standby, so if
the master decides to skip the compaction then the slave will skip it
as well (and need not worry about waiting for pin-count 1).  But if
the master does the compaction then the slave will have to get a
matching cleanup lock, just as now.

Your idea of somehow adjusting things so that we don't hold the buffer
pin for a long period of time would be better in that regard, but I'm
not sure how to do it.  Presumably we could rejigger things to copy
the tuples instead of holding a pin, but that would carry a
performance penalty for the (very common) case where there is no
conflict with VACUUM.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Domains versus polymorphic functions, redux

2011-06-07 Thread Tom Lane
I wrote:
 Anyway, I think we're out of time to do anything about the issue for
 9.1.  I think what we'd better do is force a downcast in the context
 of matching to an ANYARRAY parameter, and leave the other cases to
 revisit later.

Attached is a draft patch to do the above.  It's only lightly tested,
and could use some regression test additions, but it seems to fix
Regina's complaint.
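
For reference, a minimal sketch of the kind of case the forced downcast is
meant to fix (the domain name here is made up, not taken from Regina's
report):

CREATE DOMAIN int_array AS int4[];
-- with the patch, the domain value is implicitly downcast to int4[]
-- so it matches the anyarray parameter again
SELECT array_upper('{1,2,3}'::int_array, 1);   -- expected: 3
SELECT unnest('{1,2,3}'::int_array);           -- expected: 1, 2, 3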

Note that I changed coerce_type's behavior for both ANYARRAY and ANYENUM
targets, but the latter behavioral change is unreachable since the other
routines in parse_coerce.c will not match a domain-over-enum to ANYENUM.
I am half tempted to extend the patch so they will, which would allow
cases like this to work:

regression=#  create type color as enum('red','green','blue');
CREATE TYPE
regression=# select enum_first('green'::color);
 enum_first 
------------
 red
(1 row)

regression=# create domain dcolor as color;
CREATE DOMAIN
regression=# select enum_first('green'::dcolor);
ERROR:  function enum_first(dcolor) does not exist
LINE 1: select enum_first('green'::dcolor);
   ^
HINT:  No function matches the given name and argument types. You might need to 
add explicit type casts.

I'm unsure though if there's any support for this further adventure,
since it wouldn't be fixing a 9.1 regression.

Comments?

regards, tom lane

diff --git a/src/backend/parser/parse_coerce.c b/src/backend/parser/parse_coerce.c
index 0418972517eee4df52dbdc8f7807aa8fa528a674..e0727f12285d73b3ce48b53ba91cd5d8d9fc87e4 100644
*** a/src/backend/parser/parse_coerce.c
--- b/src/backend/parser/parse_coerce.c
*** coerce_type(ParseState *pstate, Node *no
*** 143,151 
  	}
  	if (targetTypeId == ANYOID ||
  		targetTypeId == ANYELEMENTOID ||
! 		targetTypeId == ANYNONARRAYOID ||
! 		(targetTypeId == ANYARRAYOID && inputTypeId != UNKNOWNOID) ||
! 		(targetTypeId == ANYENUMOID && inputTypeId != UNKNOWNOID))
  	{
  		/*
  		 * Assume can_coerce_type verified that implicit coercion is okay.
--- 143,149 
  	}
  	if (targetTypeId == ANYOID ||
  		targetTypeId == ANYELEMENTOID ||
! 		targetTypeId == ANYNONARRAYOID)
  	{
  		/*
  		 * Assume can_coerce_type verified that implicit coercion is okay.
*** coerce_type(ParseState *pstate, Node *no
*** 154,168 
  		 * it's OK to treat an UNKNOWN constant as a valid input for a
  		 * function accepting ANY, ANYELEMENT, or ANYNONARRAY.	This should be
  		 * all right, since an UNKNOWN value is still a perfectly valid Datum.
- 		 * However an UNKNOWN value is definitely *not* an array, and so we
- 		 * mustn't accept it for ANYARRAY.  (Instead, we will call anyarray_in
- 		 * below, which will produce an error.)  Likewise, UNKNOWN input is no
- 		 * good for ANYENUM.
  		 *
! 		 * NB: we do NOT want a RelabelType here.
  		 */
  		return node;
  	}
  	if (inputTypeId == UNKNOWNOID && IsA(node, Const))
  	{
  		/*
--- 152,195 
  		 * it's OK to treat an UNKNOWN constant as a valid input for a
  		 * function accepting ANY, ANYELEMENT, or ANYNONARRAY.	This should be
  		 * all right, since an UNKNOWN value is still a perfectly valid Datum.
  		 *
! 		 * NB: we do NOT want a RelabelType here: the exposed type of the
! 		 * function argument must be its actual type, not the polymorphic
! 		 * pseudotype.
  		 */
  		return node;
  	}
+ 	if (targetTypeId == ANYARRAYOID ||
+ 		targetTypeId == ANYENUMOID)
+ 	{
+ 		/*
+ 		 * Assume can_coerce_type verified that implicit coercion is okay.
+ 		 *
+ 		 * These cases are unlike the ones above because the exposed type of
+ 		 * the argument must be an actual array or enum type.  In particular
+ 		 * the argument must *not* be an UNKNOWN constant.  If it is, we just
+ 		 * fall through; below, we'll call anyarray_in or anyenum_in, which
+ 		 * will produce an error.  Also, if what we have is a domain over
+ 		 * array or enum, we have to relabel it to its base type.
+ 		 */
+ 		if (inputTypeId != UNKNOWNOID)
+ 		{
+ 			Oid			baseTypeId = getBaseType(inputTypeId);
+ 
+ 			if (baseTypeId != inputTypeId)
+ 			{
+ RelabelType *r = makeRelabelType((Expr *) node,
+  baseTypeId, -1,
+  InvalidOid,
+  cformat);
+ 
+ r->location = location;
+ return (Node *) r;
+ 			}
+ 			/* Not a domain type, so return it as-is */
+ 			return node;
+ 		}
+ 	}
  	if (inputTypeId == UNKNOWNOID && IsA(node, Const))
  	{
  		/*
*** coerce_to_common_type(ParseState *pstate
*** 1257,1262 
--- 1284,1294 
   *	  (This is a no-op if used in combination with ANYARRAY or ANYENUM, but
   *	  is an extra restriction if not.)
   *
+  * Domains over arrays match ANYARRAY, and are immediately flattened to their
+  * base type.  (Thus, for example, we will consider it a match if one ANYARRAY
+  * argument is a domain over int4[] while another one is just int4[].)  Also
+  * notice that such a domain does *not* match ANYNONARRAY.
+  *
   * If we have UNKNOWN input (ie, an untyped literal) for 

Re: [HACKERS] smallserial / serial2

2011-06-07 Thread Brar Piening

On Wed, 20 Apr 2011 21:27:27 -0400, Mike Pultz m...@mikepultz.com wrote:


Can this be added?



Probably not - since it's not a complete patch ;-)

I tried to test this one but was unable to find a complete version of 
the patch in my local mail archives and in the official archives 
(http://archives.postgresql.org/message-id/023001cbffc3$46f77840$d4e668c0$@mikepultz.com)


Could you please repost it for testing?

Regards,

Brar


Re: [HACKERS] SIREAD lock versus ACCESS EXCLUSIVE lock

2011-06-07 Thread Kevin Grittner
Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote:
 On 07.06.2011 21:10, Kevin Grittner wrote:
 
 I think that leaves me with all the answers I need to get a new
 patch out this evening (U.S. Central Time).
 
 Great, I'll review it in my morning (in about 12h)
 
Attached.  Passes all the usual regression tests I run, plus light
ad hoc testing.  All is working fine as far as this patch itself
goes, although more testing is needed to really call it sound.
 
If anyone is interested in the differential from version 3 of the
patch, it is the result of these two commits to my local repo:
 
http://git.postgresql.org/gitweb?p=users/kgrittn/postgres.git;a=commitdiff;h=018b0fcbeba05317ba7066e552efe9a04e6890d9
http://git.postgresql.org/gitweb?p=users/kgrittn/postgres.git;a=commitdiff;h=fc651e2721a601ea806cf6e5d53d0676dfd26dca
 
During testing I found two annoying things not caused by this patch
which should probably be addressed in 9.1 if feasible, although I
don't think they rise to the level of blockers.  More on those in
separate threads.
 
-Kevin

*** a/src/backend/catalog/heap.c
--- b/src/backend/catalog/heap.c
***
*** 63,68 
--- 63,69 
  #include parser/parse_relation.h
  #include storage/bufmgr.h
  #include storage/freespace.h
+ #include storage/predicate.h
  #include storage/smgr.h
  #include utils/acl.h
  #include utils/builtins.h
***
*** 1658,1663  heap_drop_with_catalog(Oid relid)
--- 1659,1672 
CheckTableNotInUse(rel, "DROP TABLE");
  
/*
+* This effectively deletes all rows in the table, and may be done in a
+* serializable transaction.  In that case we must record a rw-conflict in
+* to this transaction from each transaction holding a predicate lock on
+* the table.
+*/
+   CheckTableForSerializableConflictIn(rel);
+ 
+   /*
 * Delete pg_foreign_table tuple first.
 */
if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE)
*** a/src/backend/catalog/index.c
--- b/src/backend/catalog/index.c
***
*** 54,59 
--- 54,60 
  #include parser/parser.h
  #include storage/bufmgr.h
  #include storage/lmgr.h
+ #include storage/predicate.h
  #include storage/procarray.h
  #include storage/smgr.h
  #include utils/builtins.h
***
*** 1312,1317  index_drop(Oid indexId)
--- 1313,1324 
CheckTableNotInUse(userIndexRelation, "DROP INDEX");
  
/*
+* All predicate locks on the index are about to be made invalid.
+* Promote them to relation locks on the heap.
+*/
+   TransferPredicateLocksToHeapRelation(userIndexRelation);
+ 
+   /*
 * Schedule physical removal of the files
 */
RelationDropStorage(userIndexRelation);
***
*** 2799,2804  reindex_index(Oid indexId, bool skip_constraint_checks)
--- 2806,2817 
 */
CheckTableNotInUse(iRel, "REINDEX INDEX");
  
+   /*
+* All predicate locks on the index are about to be made invalid.
+* Promote them to relation locks on the heap.
+*/
+   TransferPredicateLocksToHeapRelation(iRel);
+ 
PG_TRY();
{
/* Suppress use of the target index while rebuilding it */
*** a/src/backend/commands/cluster.c
--- b/src/backend/commands/cluster.c
***
*** 39,44 
--- 39,45 
  #include optimizer/planner.h
  #include storage/bufmgr.h
  #include storage/lmgr.h
+ #include storage/predicate.h
  #include storage/procarray.h
  #include storage/smgr.h
  #include utils/acl.h
***
*** 385,390  cluster_rel(Oid tableOid, Oid indexOid, bool recheck, bool 
verbose,
--- 386,397 
if (OidIsValid(indexOid))
check_index_is_clusterable(OldHeap, indexOid, recheck, 
AccessExclusiveLock);
  
+   /*
+* All predicate locks on the table and its indexes are about to be made
+* invalid.  Promote them to relation locks on the heap.
+*/
+   TransferPredicateLocksToHeapRelation(OldHeap);
+ 
/* rebuild_relation does all the dirty work */
rebuild_relation(OldHeap, indexOid, freeze_min_age, freeze_table_age,
 verbose);
*** a/src/backend/commands/tablecmds.c
--- b/src/backend/commands/tablecmds.c
***
*** 70,75 
--- 70,76 
  #include storage/bufmgr.h
  #include storage/lmgr.h
  #include storage/lock.h
+ #include storage/predicate.h
  #include storage/smgr.h
  #include utils/acl.h
  #include utils/builtins.h
***
*** 1040,1045  ExecuteTruncate(TruncateStmt *stmt)
--- 1041,1054 
Oid toast_relid;
  
/*
+* This effectively deletes all rows in the table, and may be done
+* in a serializable transaction.  In that case we must record a
+* rw-conflict in to this transaction from each 

[HACKERS] could not truncate directory pg_serial: apparent wraparound

2011-06-07 Thread Kevin Grittner
We had a report of the subject message during testing a while back
and attempted to address the issue.  It can result in a LOG level
message and the accumulation of files in the pg_serial SLRU
subdirectory.  We hadn't seen a recurrence until I hit it during
testing of the just-posted patch for SSI DDL.  I re-read the code
and believe that the attached is the correct fix.
 
-Kevin

*** a/src/backend/storage/lmgr/predicate.c
--- b/src/backend/storage/lmgr/predicate.c
***
*** 926,943  CheckPointPredicate(void)
else
{
/*
!* The SLRU is no longer needed. Truncate everything.  If we try to
!* leave the head page around to avoid re-zeroing it, we might not use
!* the SLRU again until we're past the wrap-around point, which makes
!* SLRU unhappy.
!*
!* While the API asks you to specify truncation by page, it silently
!* ignores the request unless the specified page is in a segment past
!* some allocated portion of the SLRU.  We don't care which page in a
!* later segment we hit, so just add the number of pages per segment
!* to the head page to land us *somewhere* in the next segment.
 */
!   tailPage = oldSerXidControl->headPage + SLRU_PAGES_PER_SEGMENT;
oldSerXidControl->headPage = -1;
}
  
--- 926,935 
else
{
/*
!* The SLRU is no longer needed. Truncate to head before we set head
!* invalid.
 */
!   tailPage = oldSerXidControl->headPage;
oldSerXidControl->headPage = -1;
}
  

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] smallserial / serial2

2011-06-07 Thread Mike Pultz
Sorry, I forgot the documentation -- I guess that stuff doesn't magically
happen!

 

New patch attached.

 

Mike

 

From: Brar Piening [mailto:b...@gmx.de] 
Sent: Tuesday, June 07, 2011 6:58 PM
To: Mike Pultz
Cc: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] smallserial / serial2

 

On Wed, 20 Apr 2011 21:27:27 -0400, Mike Pultz m...@mikepultz.com wrote:

 

Can this be added?

 

Probably not - since it's not a complete patch ;-)

I tried to test this one but was unable to find a complete version of the
patch in my local mail archives and in the official archives
(http://archives.postgresql.org/message-id/023001cbffc3$46f77840$d4e668c0$@mikepultz.com)

Could you please repost it for testing?

Regards,

Brar



20110607_serial2_v2.diff
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] reindex creates predicate lock on index root

2011-06-07 Thread Kevin Grittner
During testing of the SSI DDL changes I noticed that a REINDEX INDEX
created a predicate lock on page 0 of the index.  This is pretty
harmless, but mildly annoying.  There are a few other places where
it would be good to suppress predicate locks; these are listed on
the R&D section of the Wiki.  I hope to clean some of these up in
9.2. Unless a very clean and safe fix for the subject issue pops out
on further review, I'll add this to that list.
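
For anyone who wants to see it, a rough way to reproduce (object names are
placeholders; run the REINDEX inside a serializable transaction and inspect
pg_locks from a second session):

-- session 1
BEGIN ISOLATION LEVEL SERIALIZABLE;
REINDEX INDEX some_index;

-- session 2: the stray entry shows up as an SIReadLock on the index
SELECT locktype, relation::regclass, page, mode
  FROM pg_locks
 WHERE mode = 'SIReadLock';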
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Domains versus polymorphic functions, redux

2011-06-07 Thread Robert Haas
On Tue, Jun 7, 2011 at 6:28 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 I wrote:
 Anyway, I think we're out of time to do anything about the issue for
 9.1.  I think what we'd better do is force a downcast in the context
 of matching to an ANYARRAY parameter, and leave the other cases to
 revisit later.

 Attached is a draft patch to do the above.  It's only lightly tested,
 and could use some regression test additions, but it seems to fix
 Regina's complaint.

 Note that I changed coerce_type's behavior for both ANYARRAY and ANYENUM
 targets, but the latter behavioral change is unreachable since the other
 routines in parse_coerce.c will not match a domain-over-enum to ANYENUM.
 I am half tempted to extend the patch so they will, which would allow
 cases like this to work:

 regression=#  create type color as enum('red','green','blue');
 CREATE TYPE
 regression=# select enum_first('green'::color);
  enum_first
 ------------
  red
 (1 row)

 regression=# create domain dcolor as color;
 CREATE DOMAIN
 regression=# select enum_first('green'::dcolor);
 ERROR:  function enum_first(dcolor) does not exist
 LINE 1: select enum_first('green'::dcolor);
               ^
 HINT:  No function matches the given name and argument types. You might need 
 to add explicit type casts.

 I'm unsure though if there's any support for this further adventure,
 since it wouldn't be fixing a 9.1 regression.

 Comments?

Well, on the one hand, if we're doing it for arrays, it's hard to
imagine that the same behavior for enums can be an outright disaster.
On the flip side, people get really cranky about changes that
break application code, so it would not be nice if we had to pull this
one back.  How likely is that?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Invalid byte sequence for encoding UTF8, caused due to non wide-char-aware downcase_truncate_identifier() function on WINDOWS

2011-06-07 Thread Robert Haas
2011/6/7 Jeevan Chalke jeevan.cha...@enterprisedb.com:
 since we smash the identifier to lower case using the
 downcase_truncate_identifier() function, the solution is to make this
 function wide-char aware, like the LOWER() function.

 I see some earlier discussion related to downcase_truncate_identifier() and
 a wide-char-aware function, but it seems like it got lost somewhere.
 (http://archives.postgresql.org/pgsql-hackers/2010-11/msg01385.php)
 This invalid byte sequence issue seems more serious, because it
 might lead, e.g., to pg_dump failures.

It's a problem, but without an efficient algorithm for Unicode case
folding, any fix we attempt to implement seems like it'll just be
moving the problem around.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: 9.1 release scheduling (was Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch)

2011-06-07 Thread Alvaro Herrera
Excerpts from Robert Haas's message of Tue Jun 07 13:53:23 -0400 2011:
 On Tue, Jun 7, 2011 at 1:45 PM, Thom Brown t...@linux.com wrote:
  Speaking of which, is it now safe to remove the "NOT VALID constraints
  don't dump properly" issue from the blocker list since the fix has
  been committed?
 
 I hope so, because I just did that (before noticing this email from you).

Yeah, pg_dump works in HEAD ... the bug now is that psql prints NOT
VALID twice.  Will fix.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Domains versus polymorphic functions, redux

2011-06-07 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Tue, Jun 7, 2011 at 6:28 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Note that I changed coerce_type's behavior for both ANYARRAY and ANYENUM
 targets, but the latter behavioral change is unreachable since the other
 routines in parse_coerce.c will not match a domain-over-enum to ANYENUM.
 I am half tempted to extend the patch so they will, which would allow
 cases like this to work:

 regression=# select enum_first('green'::dcolor);
 ERROR:  function enum_first(dcolor) does not exist

 Well, on the one hand, if we're doing it for arrays, it's hard to
 imagine that the same behavior for enums can be an outright disaster.
 On the flip side, people get really cranky about changes that
 break application code, so it would not be nice if we had to pull this
 one back.  How likely is that?

It's hard to see how allowing this match where there was no match before
would break existing code.  A more plausible objection is that we'd be
foreclosing any possibility of handling the match-domain-to-ANYENUM case
differently, since once 9.1 had been out in the field doing this for a
year, you can be sure there *would* be some apps depending on it.
So I think the real question is whether we have totally destroyed the
argument for letting domains pass through polymorphic functions without
getting smashed to their base types.  Personally I think that idea is
pretty much dead in the water, but I sense that Noah hasn't given up on
it yet ;-)  If we aren't yet willing to treat ANYELEMENT that way, maybe
it's premature to adopt the stance for ANYENUM.
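
For anyone following along, a small sketch of the behavior in question
(the domain and function names are invented here):

CREATE DOMAIN posint AS int CHECK (VALUE > 0);
CREATE FUNCTION poly_identity(anyelement) RETURNS anyelement
    AS 'SELECT $1' LANGUAGE sql;
-- the open question: does the domain survive the trip through the
-- polymorphic function, or is it smashed to its base type (integer)?
SELECT pg_typeof(poly_identity(42::posint));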

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Domains versus polymorphic functions, redux

2011-06-07 Thread Robert Haas
On Tue, Jun 7, 2011 at 9:39 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 On Tue, Jun 7, 2011 at 6:28 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Note that I changed coerce_type's behavior for both ANYARRAY and ANYENUM
 targets, but the latter behavioral change is unreachable since the other
 routines in parse_coerce.c will not match a domain-over-enum to ANYENUM.
 I am half tempted to extend the patch so they will, which would allow
 cases like this to work:

 regression=# select enum_first('green'::dcolor);
 ERROR:  function enum_first(dcolor) does not exist

 Well, on the one hand, if we're doing it for arrays, it's hard to
 imagine that the same behavior for enums can be an outright disaster.
 On the flip side, people get really cranky about changes that
 break application code, so it would not be nice if we had to pull this
 one back.  How likely is that?

 It's hard to see how allowing this match where there was no match before
 would break existing code.  A more plausible objection is that we'd be
 foreclosing any possibility of handling the match-domain-to-ANYENUM case
 differently, since once 9.1 had been out in the field doing this for a
 year, you can be sure there *would* be some apps depending on it.

Yes, that's the point I was trying to get at.

 So I think the real question is whether we have totally destroyed the
 argument for letting domains pass through polymorphic functions without
 getting smashed to their base types.  Personally I think that idea is
 pretty much dead in the water, but I sense that Noah hasn't given up on
 it yet ;-)  If we aren't yet willing to treat ANYELEMENT that way, maybe
 it's premature to adopt the stance for ANYENUM.

Given that we have no field demand for this behavior, maybe it's
better not to add it, so that we have the option later to change our
mind about how it should work.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reindex creates predicate lock on index root

2011-06-07 Thread Tom Lane
Kevin Grittner kevin.gritt...@wicourts.gov writes:
 During testing of the SSI DDL changes I noticed that a REINDEX INDEX
 created a predicate lock on page 0 of the index.  This is pretty
 harmless, but mildly annoying.  There are a few other places where
 it would be good to suppress predicate locks; these are listed on
 the RD section of the Wiki.  I hope to clean some of these up in
 9.2. Unless a very clean and safe fix for the subject issue pops out
 on further review, I'll add this to that list.

Do you mean page zero, as in the metapage (for most index types), or do
you mean the root page?  If the former, how is that not an outright bug,
since it corresponds to no data?  If the latter, how is that not a
serious performance problem, since it corresponds to locking the entire
index?  Any way you slice it, it sounds like a pretty bad bug.

It's not apparent to me why an index build (regular or reindex) should
create any predicate locks at all, ever.  Surely it's a basically
nontransactional operation that SSI should keep its fingers out of.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WALInsertLock tuning

2011-06-07 Thread Fujii Masao
On Tue, Jun 7, 2011 at 9:54 PM, Simon Riggs si...@2ndquadrant.com wrote:
 On Tue, Jun 7, 2011 at 1:24 PM, Robert Haas robertmh...@gmail.com wrote:

 One other thought is that I think that this patch might cause a
 user-visible behavior change.  Right now, when you hit the end of
 recovery, you most typically get a message saying - record with zero
 length.  Not always, but often.  If we adopt this approach, you'll get
 a wider variety of error messages there, depending on exactly how the
 new record fails validation.  I dunno if that's important to be worth
 caring about, or not.

 Not.

 We've never said what the message would be, only that it would fail.

BTW, walreceiver doesn't zero the page before writing the WAL. So,
if zeroing the page is *really* required for safe recovery, we might need
to change walreceiver.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

