Re: [PATCHES] page macros cleanup (ver 04)

2008-07-21 Thread Zdenek Kotala

Tom Lane napsal(a):

Heikki Linnakangas [EMAIL PROTECTED] writes:
...  That macro is actually doing the 
same thing as PageGetContents, so I switched to using that. As that 
moves the data slightly on those bitmap pages, I guess we'll need a 
catversion bump.


I'm amazed that Zdenek didn't scream bloody murder about that.  You're
creating a work item for in-place-upgrade that would not otherwise
exist, in exchange for a completely trivial bit of code beautification.
(The same can be said of his proposed change to hash meta pages.)


:-) Yeah, these changes break in-place upgrade on hash indexes and will require 
reindexing. I had several reasons why I didn't complain about it:


1) IIRC, the hash function for double has been changed
2) there is an ongoing project to improve hash index performance, with completely 
redesigned page content
3) hash indexes are not widely used (in my opinion), so it affects only a small 
group of users



I'm planning to go over this patch today and apply it sans the parts
that would require catversion bump.  We can argue later about whether
those are really worth doing, but I'm leaning to not --- unless Zdenek
says that he has no intention of making in-place-upgrade handle hash
indexes any time soon.


Thanks for applying the patch. I think that hash index upgradability is currently 
broken, or it will be with the new hash index improvements. But when I think about it, 
it does not make sense to break compatibility with this patch first. I will prepare a 
patch which will be upgrade friendly. And if we reimplement the hash index soon, 
then we can clean up the code.



BTW, after further thought about the PageGetContents() situation:
right now we can change it to guarantee maxalignment for free,
since SizeOfPageHeaderData happens to be maxaligned on all platforms
(this wasn't the case as recently as 8.2).  So I'm thinking we should
do that.  There's at least one place that thinks that PageGetContents
is the same as page + SizeOfPageHeaderData, but that's easily fixed.
On balance it seems like hidden assumptions about alignment are a bigger
risk than assumptions about that offset --- anyone want to argue the
contrary?


I think it is OK, and I see that you have already applied the patch.

Thanks Zdenek



--
Zdenek Kotala  Sun Microsystems
Prague, Czech Republic http://sun.com/postgresql


--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches


Re: [PATCHES] page macros cleanup (ver 04)

2008-07-13 Thread Tom Lane
Zdenek Kotala [EMAIL PROTECTED] writes:
 Good catch. If we have to bump the catalog version then I have a small patch
 which introduces the following macro and removes the PageHeaderData structure from 
 HashMetaPageData:

Seems like a bad idea --- PageGetContents is not guaranteed to deliver
a maxaligned pointer, so at least in principle this could result in
alignment violations.  It would accidentally fail to fail as of CVS
HEAD, but any future rearrangement of PageHeaderData or HashMetaPageData
could create a problem.

(Possibly PageGetContents *should* guarantee a maxaligned result,
but the current coding does not.)

regards, tom lane



Re: [PATCHES] page macros cleanup (ver 04)

2008-07-13 Thread Tom Lane
Heikki Linnakangas [EMAIL PROTECTED] writes:
 ...  That macro is actually doing the 
 same thing as PageGetContents, so I switched to using that. As that 
 moves the data slightly on those bitmap pages, I guess we'll need a 
 catversion bump.

I'm amazed that Zdenek didn't scream bloody murder about that.  You're
creating a work item for in-place-upgrade that would not otherwise
exist, in exchange for a completely trivial bit of code beautification.
(The same can be said of his proposed change to hash meta pages.)

I'm planning to go over this patch today and apply it sans the parts
that would require catversion bump.  We can argue later about whether
those are really worth doing, but I'm leaning to not --- unless Zdenek
says that he has no intention of making in-place-upgrade handle hash
indexes any time soon.

BTW, after further thought about the PageGetContents() situation:
right now we can change it to guarantee maxalignment for free,
since SizeOfPageHeaderData happens to be maxaligned on all platforms
(this wasn't the case as recently as 8.2).  So I'm thinking we should
do that.  There's at least one place that thinks that PageGetContents
is the same as page + SizeOfPageHeaderData, but that's easily fixed.
On balance it seems like hidden assumptions about alignment are a bigger
risk than assumptions about that offset --- anyone want to argue the
contrary?

regards, tom lane



Re: [PATCHES] page macros cleanup (ver 04)

2008-07-09 Thread Heikki Linnakangas

Zdenek Kotala wrote:

Pavan Deolasee napsal(a):

On Fri, Jul 4, 2008 at 4:25 PM, Heikki Linnakangas
[EMAIL PROTECTED] wrote:


No, there's an itemsz = MAXALIGN(itemsz) call before the check against
HashMaxItemSize.



Ah, right. Still, should we not just MAXALIGN_DOWN the max size to 
reflect the practical limit on itemsz? It's more academic 
though, so not a big deal.


Finally I used the following formula:

#define HashMaxItemSize(page) \
 MAXALIGN_DOWN(PageGetPageSize(page) - \
   ( SizeOfPageHeaderData + sizeof(ItemIdData) ) - \
MAXALIGN(sizeof(HashPageOpaqueData)) )


I did not replace PageGetPageSize(page), because the other *MaxItemSize macros have 
the same interface.


Ok, fair enough.

There's another academic discrepancy between these two hunks:


*** src/backend/access/hash/hashpage.c  12 May 2008 00:00:44 -  1.75
--- src/backend/access/hash/hashpage.c  9 Jul 2008 11:30:09 -
***
*** 407,413 
  	for (i = _hash_log2(metap->hashm_bsize); i > 0; --i)
  	{
  		if ((1 << i) <= (metap->hashm_bsize -
! 						 (MAXALIGN(sizeof(PageHeaderData)) +
  						  MAXALIGN(sizeof(HashPageOpaqueData)))))
  			break;
  	}
--- 407,413 
  	for (i = _hash_log2(metap->hashm_bsize); i > 0; --i)
  	{
  		if ((1 << i) <= (metap->hashm_bsize -
! 						 (MAXALIGN(SizeOfPageHeaderData) +
  						  MAXALIGN(sizeof(HashPageOpaqueData)))))
  			break;
  	}


and


*** src/include/access/hash.h   19 Jun 2008 00:46:05 -  1.88
--- src/include/access/hash.h   9 Jul 2008 11:30:10 -
***
*** 192,198 
  #define BMPG_SHIFT(metap)	((metap)->hashm_bmshift)
  #define BMPG_MASK(metap)	(BMPGSZ_BIT(metap) - 1)
  #define HashPageGetBitmap(pg) \
! 	((uint32 *) (((char *) (pg)) + MAXALIGN(sizeof(PageHeaderData))))
  
  /*
   * The number of bits in an ovflpage bitmap word.
--- 191,197 
  #define BMPG_SHIFT(metap)	((metap)->hashm_bmshift)
  #define BMPG_MASK(metap)	(BMPGSZ_BIT(metap) - 1)
  #define HashPageGetBitmap(pg) \
! 	((uint32 *) (((char *) (pg)) + MAXALIGN(SizeOfPageHeaderData + sizeof(ItemIdData))))
  
  /*
   * The number of bits in an ovflpage bitmap word.


I admit I don't understand what that bitmap is, it looks like we're 
assuming it can take up all the space between page header and the 
special portion, without any line pointers, but in HashPageGetBitmap, 
we're reserving space for one pointer. It looks like the actual size of 
the bitmap is only the largest power of 2 below the maximum size, so 
that won't be an issue in practice. That macro is actually doing the 
same thing as PageGetContents, so I switched to using that. As that 
moves the data slightly on those bitmap pages, I guess we'll need a 
catversion bump.


Attached is an updated patch. I also fixed some whitespace and comments 
that were no longer valid. How does it look to you now?


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

Re: [PATCHES] page macros cleanup (ver 04)

2008-07-09 Thread Zdenek Kotala

Heikki Linnakangas napsal(a):

Zdenek Kotala wrote:

Pavan Deolasee napsal(a):



There's another academic discrepancy between these two hunks:


snip

I admit I don't understand what that bitmap is, it looks like we're 
assuming it can take up all the space between page header and the 
special portion, without any line pointers, but in HashPageGetBitmap, 
we're reserving space for one pointer. It looks like the actual size of 
the bitmap is only the largest power of 2 below the maximum size, so 
that won't be an issue in practice. That macro is actually doing the 
same thing as PageGetContents, so I switched to using that. As that 
moves the data slightly on those bitmap pages, I guess we'll need a 
catversion bump.


Good catch. If we have to bump the catalog version then I have a small patch
which introduces the following macro and removes the PageHeaderData structure from 
HashMetaPageData:


#define HashPageGetMeta(page) ((HashMetaPage) (PageGetContents(page)))

It also needs a catalog version bump, and if we do it now I think it is better to
combine both patches and bump only once.

Attached is an updated patch. I also fixed some whitespace and comments 
that were no longer valid. How does it look to you now?


Perfect. Thanks

Zdenek





diff -rc pgsql_hash.851ed84fb6aa/src/backend/access/hash/hash.c pgsql_hash/src/backend/access/hash/hash.c
*** pgsql_hash.851ed84fb6aa/src/backend/access/hash/hash.c	út črc  8 21:31:38 2008
--- pgsql_hash/src/backend/access/hash/hash.c	út črc  8 21:31:38 2008
***
*** 527,533 
  	 * each bucket.
  	 */
  	metabuf = _hash_getbuf(rel, HASH_METAPAGE, HASH_READ, LH_META_PAGE);
! 	metap = (HashMetaPage) BufferGetPage(metabuf);
  	orig_maxbucket = metap->hashm_maxbucket;
  	orig_ntuples = metap->hashm_ntuples;
  	memcpy(&local_metapage, metap, sizeof(local_metapage));
--- 527,533 
  	 * each bucket.
  	 */
  	metabuf = _hash_getbuf(rel, HASH_METAPAGE, HASH_READ, LH_META_PAGE);
! 	metap =  HashPageGetMeta(BufferGetPage(metabuf));
  	orig_maxbucket = metap->hashm_maxbucket;
  	orig_ntuples = metap->hashm_ntuples;
  	memcpy(&local_metapage, metap, sizeof(local_metapage));
***
*** 629,635 
  
  	/* Write-lock metapage and check for split since we started */
  	metabuf = _hash_getbuf(rel, HASH_METAPAGE, HASH_WRITE, LH_META_PAGE);
! 	metap = (HashMetaPage) BufferGetPage(metabuf);
  
  	if (cur_maxbucket != metap->hashm_maxbucket)
  	{
--- 629,635 
  
  	/* Write-lock metapage and check for split since we started */
  	metabuf = _hash_getbuf(rel, HASH_METAPAGE, HASH_WRITE, LH_META_PAGE);
! 	metap = HashPageGetMeta(BufferGetPage(metabuf));
  
  	if (cur_maxbucket != metap->hashm_maxbucket)
  	{
diff -rc pgsql_hash.851ed84fb6aa/src/backend/access/hash/hashinsert.c pgsql_hash/src/backend/access/hash/hashinsert.c
*** pgsql_hash.851ed84fb6aa/src/backend/access/hash/hashinsert.c	út črc  8 21:31:38 2008
--- pgsql_hash/src/backend/access/hash/hashinsert.c	út črc  8 21:31:38 2008
***
*** 69,75 
  
  	/* Read the metapage */
  	metabuf = _hash_getbuf(rel, HASH_METAPAGE, HASH_READ, LH_META_PAGE);
! 	metap = (HashMetaPage) BufferGetPage(metabuf);
  
  	/*
  	 * Check whether the item can fit on a hash page at all. (Eventually, we
--- 69,75 
  
  	/* Read the metapage */
  	metabuf = _hash_getbuf(rel, HASH_METAPAGE, HASH_READ, LH_META_PAGE);
! 	metap = HashPageGetMeta(BufferGetPage(metabuf));
  
  	/*
  	 * Check whether the item can fit on a hash page at all. (Eventually, we
diff -rc pgsql_hash.851ed84fb6aa/src/backend/access/hash/hashovfl.c pgsql_hash/src/backend/access/hash/hashovfl.c
*** pgsql_hash.851ed84fb6aa/src/backend/access/hash/hashovfl.c	út črc  8 21:31:38 2008
--- pgsql_hash/src/backend/access/hash/hashovfl.c	út črc  8 21:31:38 2008
***
*** 187,193 
  	_hash_chgbufaccess(rel, metabuf, HASH_NOLOCK, HASH_WRITE);
  
  	_hash_checkpage(rel, metabuf, LH_META_PAGE);
! 	metap = (HashMetaPage) BufferGetPage(metabuf);
  
  	/* start search at hashm_firstfree */
  	orig_firstfree = metap->hashm_firstfree;
--- 187,193 
  	_hash_chgbufaccess(rel, metabuf, HASH_NOLOCK, HASH_WRITE);
  
  	_hash_checkpage(rel, metabuf, LH_META_PAGE);
! 	metap = HashPageGetMeta(BufferGetPage(metabuf));
  
  	/* start search at hashm_firstfree */
  	orig_firstfree = metap->hashm_firstfree;
***
*** 450,456 
  
  	/* Read the metapage so we can determine which bitmap page to use */
  	metabuf = _hash_getbuf(rel, HASH_METAPAGE, HASH_READ, LH_META_PAGE);
! 	metap = (HashMetaPage) BufferGetPage(metabuf);
  
  	/* Identify which bit to set */
  	ovflbitno = blkno_to_bitno(metap, ovflblkno);
--- 450,456 
  
  	/* Read the metapage so we can determine which bitmap page to use */
  	metabuf = _hash_getbuf(rel, HASH_METAPAGE, HASH_READ, LH_META_PAGE);
! 	metap = HashPageGetMeta(BufferGetPage(metabuf));
  
  	/* Identify which bit to set */
  	ovflbitno = blkno_to_bitno(metap, ovflblkno);
diff -rc 

Re: [PATCHES] page macros cleanup

2008-07-04 Thread Zdenek Kotala

Pavan Deolasee napsal(a):

On Fri, Jun 13, 2008 at 9:38 PM, Zdenek Kotala [EMAIL PROTECTED] wrote:

I attached a code cleanup which is related to in-place upgrade. I replaced
direct access to the PageHeader structure with already existing macros, and I
also removed unnecessary casts.


A quick review comment:


Thank you for your review,



One thing I noticed is that the modified definition of HashMaxItemSize
now does not account for the size of ItemId which may not be the right
thing. Please recheck that.


Good catch. I got lost in basic arithmetic. What I see now is that the original 
definition counts sizeof(ItemIdData) twice and, on the other hand, does not take 
care of MAXALIGN correctly. I think the correct formula is:


#define HashMaxItemSize(page) \
(PageGetPageSize(page) - \
  ( MAXALIGN(SizeOfPageHeaderData + sizeof(ItemIdData))+ \
MAXALIGN(sizeof(HashPageOpaqueData)) \
  )\
 )

What do you think?

Thanks for your comments Zdenek


--
Zdenek Kotala  Sun Microsystems
Prague, Czech Republic http://sun.com/postgresql




Re: [PATCHES] page macros cleanup

2008-07-04 Thread Zdenek Kotala

Heikki Linnakangas napsal(a):

Just one quick note:


Should probably use PageGetSpecialSize here. Much simpler, and doesn't 
assume that the special area is always at the end of page (not that I 
see us changing that anytime soon).




Thanks for the review, good catch. I found similar examples also in nbtree and hash. 
I attached an updated patch version which also contains a fix for Pavan's 
observation. The patch modifies HashMaxItemSize. It should require a reindex on all 
hash indexes. Should we bump the catalog version?


thanks Zdenek


--
Zdenek Kotala  Sun Microsystems
Prague, Czech Republic http://sun.com/postgresql

diff -cr pgsql.orig.da8c485e0e2a/src/backend/access/gist/gistutil.c pgsql.orig/src/backend/access/gist/gistutil.c
*** pgsql.orig.da8c485e0e2a/src/backend/access/gist/gistutil.c	pá črc  4 10:23:41 2008
--- pgsql.orig/src/backend/access/gist/gistutil.c	pá črc  4 10:23:41 2008
***
*** 592,599 
  	/*
  	 * Additionally check that the special area looks sane.
  	 */
! 	if (((PageHeader) (page))->pd_special !=
! 		(BLCKSZ - MAXALIGN(sizeof(GISTPageOpaqueData))))
  		ereport(ERROR,
  (errcode(ERRCODE_INDEX_CORRUPTED),
   errmsg("index \"%s\" contains corrupted page at block %u",
--- 592,598 
  	/*
  	 * Additionally check that the special area looks sane.
  	 */
! 	if ( PageGetSpecialSize(page) != MAXALIGN(sizeof(GISTPageOpaqueData)))
  		ereport(ERROR,
  (errcode(ERRCODE_INDEX_CORRUPTED),
   errmsg("index \"%s\" contains corrupted page at block %u",
diff -cr pgsql.orig.da8c485e0e2a/src/backend/access/hash/hashpage.c pgsql.orig/src/backend/access/hash/hashpage.c
*** pgsql.orig.da8c485e0e2a/src/backend/access/hash/hashpage.c	pá črc  4 10:23:41 2008
--- pgsql.orig/src/backend/access/hash/hashpage.c	pá črc  4 10:23:41 2008
***
*** 407,413 
  	for (i = _hash_log2(metap->hashm_bsize); i > 0; --i)
  	{
  		if ((1 << i) <= (metap->hashm_bsize -
! 						 (MAXALIGN(sizeof(PageHeaderData)) +
  						  MAXALIGN(sizeof(HashPageOpaqueData)))))
  			break;
  	}
--- 407,413 
  	for (i = _hash_log2(metap->hashm_bsize); i > 0; --i)
  	{
  		if ((1 << i) <= (metap->hashm_bsize -
! 						 (MAXALIGN(SizeOfPageHeaderData) +
  						  MAXALIGN(sizeof(HashPageOpaqueData)))))
  			break;
  	}
diff -cr pgsql.orig.da8c485e0e2a/src/backend/access/hash/hashutil.c pgsql.orig/src/backend/access/hash/hashutil.c
*** pgsql.orig.da8c485e0e2a/src/backend/access/hash/hashutil.c	pá črc  4 10:23:41 2008
--- pgsql.orig/src/backend/access/hash/hashutil.c	pá črc  4 10:23:41 2008
***
*** 164,171 
  	/*
  	 * Additionally check that the special area looks sane.
  	 */
! 	if (((PageHeader) (page))->pd_special !=
! 		(BLCKSZ - MAXALIGN(sizeof(HashPageOpaqueData))))
  		ereport(ERROR,
  (errcode(ERRCODE_INDEX_CORRUPTED),
   errmsg("index \"%s\" contains corrupted page at block %u",
--- 164,170 
  	/*
  	 * Additionally check that the special area looks sane.
  	 */
! 	if ( PageGetSpecialSize(page) != MAXALIGN(sizeof(HashPageOpaqueData)))
  		ereport(ERROR,
  (errcode(ERRCODE_INDEX_CORRUPTED),
   errmsg("index \"%s\" contains corrupted page at block %u",
diff -cr pgsql.orig.da8c485e0e2a/src/backend/access/heap/heapam.c pgsql.orig/src/backend/access/heap/heapam.c
*** pgsql.orig.da8c485e0e2a/src/backend/access/heap/heapam.c	pá črc  4 10:23:41 2008
--- pgsql.orig/src/backend/access/heap/heapam.c	pá črc  4 10:23:41 2008
***
*** 1342,1348 
  	ItemPointer tid = &(tuple->t_self);
  	ItemId		lp;
  	Buffer		buffer;
! 	PageHeader	dp;
  	OffsetNumber offnum;
  	bool		valid;
  
--- 1342,1348 
  	ItemPointer tid = &(tuple->t_self);
  	ItemId		lp;
  	Buffer		buffer;
! 	Page		page;
  	OffsetNumber offnum;
  	bool		valid;
  
***
*** 1355,1361 
  	 * Need share lock on buffer to examine tuple commit status.
  	 */
  	LockBuffer(buffer, BUFFER_LOCK_SHARE);
! 	dp = (PageHeader) BufferGetPage(buffer);
  
  	/*
  	 * We'd better check for out-of-range offnum in case of VACUUM since the
--- 1355,1361 
  	 * Need share lock on buffer to examine tuple commit status.
  	 */
  	LockBuffer(buffer, BUFFER_LOCK_SHARE);
! 	page = BufferGetPage(buffer);
  
  	/*
  	 * We'd better check for out-of-range offnum in case of VACUUM since the
***
*** 1362,1368 
  	 * TID was obtained.
  	 */
  	offnum = ItemPointerGetOffsetNumber(tid);
! 	if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
  	{
  		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
  		if (keep_buf)
--- 1362,1368 
  	 * TID was obtained.
  	 */
  	offnum = ItemPointerGetOffsetNumber(tid);
! 	if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(page))
  	{
  		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
  		if (keep_buf)
***
*** 1379,1385 
  	/*
  	 * get the item line pointer corresponding to the requested tid
  	 */
! 	lp = PageGetItemId(dp, offnum);
  
  	/*
  	 * Must check for deleted tuple.
--- 1379,1385 
  	/*
  	 * get 

Re: [PATCHES] page macros cleanup

2008-07-04 Thread Pavan Deolasee
On Fri, Jul 4, 2008 at 1:01 PM, Zdenek Kotala [EMAIL PROTECTED] wrote:


 Good catch. I got lost in basic arithmetic. What I see now is that the original
 definition counts sizeof(ItemIdData) twice and, on the other hand, does not take
 care of MAXALIGN correctly. I think the correct formula is:

 #define HashMaxItemSize(page) \
(PageGetPageSize(page) - \
  ( MAXALIGN(SizeOfPageHeaderData + sizeof(ItemIdData))+ \
MAXALIGN(sizeof(HashPageOpaqueData)) \
  )\
 )

 What do you think?



Yes. I think that's the correct way.


Thanks,
Pavan

-- 
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com



Re: [PATCHES] page macros cleanup

2008-07-04 Thread Pavan Deolasee
On Fri, Jul 4, 2008 at 2:08 PM, Zdenek Kotala [EMAIL PROTECTED] wrote:
  Patch
 modifies HashMaxItemSize. It should require reindex on all hash indexes.
 Should we bump catalog version?


Do we really need that? Even if the new HashMaxItemSize comes out to
be lower than the current limit (because of MAXALIGN), I don't see how
existing hash indexes can have a larger item than the new limit.

Thanks,
Pavan

-- 
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com



Re: [PATCHES] page macros cleanup (ver 03)

2008-07-04 Thread Zdenek Kotala

Pavan Deolasee napsal(a):

On Fri, Jul 4, 2008 at 2:08 PM, Zdenek Kotala [EMAIL PROTECTED] wrote:

 Patch
modifies HashMaxItemSize. It should require reindex on all hash indexes.
Should we bump catalog version?



Do we really need that? Even if the new HashMaxItemSize comes out to
be lower than the current limit (because of MAXALIGN), I don't see how
existing hash indexes can have a larger item than the new limit.


Yeah, I performed the calculation and the situation is as follows (for MAXALIGN 4 and 8):

       MAXALIGN 4   MAXALIGN 8
Orig         8144         8144
New          8148         8144

It should be OK for all cases.

I also fixed one typo (ItemId -> ItemIdData) and the new revision is attached.

Zdenek


--
Zdenek Kotala  Sun Microsystems
Prague, Czech Republic http://sun.com/postgresql

diff -cr pgsql.orig.da8c485e0e2a/src/backend/access/gist/gistutil.c pgsql.orig/src/backend/access/gist/gistutil.c
*** pgsql.orig.da8c485e0e2a/src/backend/access/gist/gistutil.c	pá črc  4 10:23:41 2008
--- pgsql.orig/src/backend/access/gist/gistutil.c	pá črc  4 10:23:41 2008
***
*** 592,599 
  	/*
  	 * Additionally check that the special area looks sane.
  	 */
! 	if (((PageHeader) (page))->pd_special !=
! 		(BLCKSZ - MAXALIGN(sizeof(GISTPageOpaqueData))))
  		ereport(ERROR,
  (errcode(ERRCODE_INDEX_CORRUPTED),
   errmsg("index \"%s\" contains corrupted page at block %u",
--- 592,598 
  	/*
  	 * Additionally check that the special area looks sane.
  	 */
! 	if ( PageGetSpecialSize(page) != MAXALIGN(sizeof(GISTPageOpaqueData)))
  		ereport(ERROR,
  (errcode(ERRCODE_INDEX_CORRUPTED),
   errmsg("index \"%s\" contains corrupted page at block %u",
diff -cr pgsql.orig.da8c485e0e2a/src/backend/access/hash/hashpage.c pgsql.orig/src/backend/access/hash/hashpage.c
*** pgsql.orig.da8c485e0e2a/src/backend/access/hash/hashpage.c	pá črc  4 10:23:41 2008
--- pgsql.orig/src/backend/access/hash/hashpage.c	pá črc  4 10:23:41 2008
***
*** 407,413 
  	for (i = _hash_log2(metap->hashm_bsize); i > 0; --i)
  	{
  		if ((1 << i) <= (metap->hashm_bsize -
! 						 (MAXALIGN(sizeof(PageHeaderData)) +
  						  MAXALIGN(sizeof(HashPageOpaqueData)))))
  			break;
  	}
--- 407,413 
  	for (i = _hash_log2(metap->hashm_bsize); i > 0; --i)
  	{
  		if ((1 << i) <= (metap->hashm_bsize -
! 						 (MAXALIGN(SizeOfPageHeaderData) +
  						  MAXALIGN(sizeof(HashPageOpaqueData)))))
  			break;
  	}
diff -cr pgsql.orig.da8c485e0e2a/src/backend/access/hash/hashutil.c pgsql.orig/src/backend/access/hash/hashutil.c
*** pgsql.orig.da8c485e0e2a/src/backend/access/hash/hashutil.c	pá črc  4 10:23:41 2008
--- pgsql.orig/src/backend/access/hash/hashutil.c	pá črc  4 10:23:41 2008
***
*** 164,171 
  	/*
  	 * Additionally check that the special area looks sane.
  	 */
! 	if (((PageHeader) (page))->pd_special !=
! 		(BLCKSZ - MAXALIGN(sizeof(HashPageOpaqueData))))
  		ereport(ERROR,
  (errcode(ERRCODE_INDEX_CORRUPTED),
   errmsg("index \"%s\" contains corrupted page at block %u",
--- 164,170 
  	/*
  	 * Additionally check that the special area looks sane.
  	 */
! 	if ( PageGetSpecialSize(page) != MAXALIGN(sizeof(HashPageOpaqueData)))
  		ereport(ERROR,
  (errcode(ERRCODE_INDEX_CORRUPTED),
   errmsg("index \"%s\" contains corrupted page at block %u",
diff -cr pgsql.orig.da8c485e0e2a/src/backend/access/heap/heapam.c pgsql.orig/src/backend/access/heap/heapam.c
*** pgsql.orig.da8c485e0e2a/src/backend/access/heap/heapam.c	pá črc  4 10:23:41 2008
--- pgsql.orig/src/backend/access/heap/heapam.c	pá črc  4 10:23:41 2008
***
*** 1342,1348 
  	ItemPointer tid = &(tuple->t_self);
  	ItemId		lp;
  	Buffer		buffer;
! 	PageHeader	dp;
  	OffsetNumber offnum;
  	bool		valid;
  
--- 1342,1348 
  	ItemPointer tid = &(tuple->t_self);
  	ItemId		lp;
  	Buffer		buffer;
! 	Page		page;
  	OffsetNumber offnum;
  	bool		valid;
  
***
*** 1355,1361 
  	 * Need share lock on buffer to examine tuple commit status.
  	 */
  	LockBuffer(buffer, BUFFER_LOCK_SHARE);
! 	dp = (PageHeader) BufferGetPage(buffer);
  
  	/*
  	 * We'd better check for out-of-range offnum in case of VACUUM since the
--- 1355,1361 
  	 * Need share lock on buffer to examine tuple commit status.
  	 */
  	LockBuffer(buffer, BUFFER_LOCK_SHARE);
! 	page = BufferGetPage(buffer);
  
  	/*
  	 * We'd better check for out-of-range offnum in case of VACUUM since the
***
*** 1362,1368 
  	 * TID was obtained.
  	 */
  	offnum = ItemPointerGetOffsetNumber(tid);
! 	if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
  	{
  		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
  		if (keep_buf)
--- 1362,1368 
  	 * TID was obtained.
  	 */
  	offnum = ItemPointerGetOffsetNumber(tid);
! 	if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(page))
  	{
  		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
  		if (keep_buf)
***
*** 1379,1385 
  	/*
  	 * 

Re: [PATCHES] page macros cleanup

2008-07-04 Thread Heikki Linnakangas

Pavan Deolasee wrote:

On Fri, Jul 4, 2008 at 1:01 PM, Zdenek Kotala [EMAIL PROTECTED] wrote:


Good catch. I got lost in basic arithmetic. What I see now is that the original
definition counts sizeof(ItemIdData) twice and, on the other hand, does not take
care of MAXALIGN correctly. I think the correct formula is:

#define HashMaxItemSize(page) \
   (PageGetPageSize(page) - \
 ( MAXALIGN(SizeOfPageHeaderData + sizeof(ItemIdData))+ \
   MAXALIGN(sizeof(HashPageOpaqueData)) \
 )\
)

What do you think?


Yes. I think that's the correct way.


Doesn't look right to me. There's no padding after the first line 
pointer, hence the first MAXALIGN shouldn't be there.


BTW, looking at hashinsert.c where it's used, we're actually passing a 
pointer to the meta page to HashMaxItemSize(). So the PageGetPageSize() 
call on that is quite bogus, since it's not the meta page that the tuple 
is going to be inserted into. It's academic, because all pages are the 
same size anyway, but doesn't look right. I think I'd go with BLCKSZ 
instead.


I think this is the way it should be:

#define HashMaxItemSize \
(BLCKSZ - \
 SizeOfPageHeaderData - \
 MAXALIGN(sizeof(HashPageOpaqueData)) - \
 sizeof(ItemIdData))

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [PATCHES] page macros cleanup

2008-07-04 Thread Pavan Deolasee
On Fri, Jul 4, 2008 at 3:37 PM, Heikki Linnakangas
[EMAIL PROTECTED] wrote:


 I think this is the way it should be:

 #define HashMaxItemSize \
(BLCKSZ - \
 SizeOfPageHeaderData - \
 MAXALIGN(sizeof(HashPageOpaqueData)) - \
 sizeof(ItemIdData))


I am wondering if this would fail in a corner case if HashMaxItemSize
happened to be unaligned. For example, if (itemsz < HashMaxItemSize <
MAXALIGN(itemsz)), PageAddItem() would later fail with a not-so-obvious
error. Should we just MAXALIGN_DOWN the HashMaxItemSize?

Thanks,
Pavan


-- 
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com



Re: [PATCHES] page macros cleanup

2008-07-04 Thread Zdenek Kotala

Heikki Linnakangas napsal(a):

Pavan Deolasee wrote:
On Fri, Jul 4, 2008 at 1:01 PM, Zdenek Kotala [EMAIL PROTECTED] 
wrote:


Good catch. I got lost in basic arithmetic. What I see now is that the original
definition counts sizeof(ItemIdData) twice and, on the other hand, does not take
care of MAXALIGN correctly. I think the correct formula is:

#define HashMaxItemSize(page) \
   (PageGetPageSize(page) - \
 ( MAXALIGN(SizeOfPageHeaderData + sizeof(ItemIdData))+ \
   MAXALIGN(sizeof(HashPageOpaqueData)) \
 )\
)

What do you think?


Yes. I think that's the correct way.


Doesn't look right to me. There's no padding after the first line 
pointer, hence the first MAXALIGN shouldn't be there.


Are you sure? I expected that the tuple header must be aligned to MAXALIGN. If that 
is not required, then you are right.


Look at PageAddItem:

	alignedSize = MAXALIGN(size);

	upper = (int) phdr->pd_upper - (int) alignedSize;

In my opinion, the first place where a tuple could be placed is:

MAXALIGN(SizeOfPageHeaderData + sizeof(ItemIdData))


BTW, looking at hashinsert.c where it's used, we're actually passing a 
pointer to the meta page to HashMaxItemSize(). So the PageGetPageSize() 
call on that is quite bogus, since it's not the meta page that the tuple 
is going to be inserted into. It's academic, because all pages are the 
same size anyway, but doesn't look right. I think I'd go with BLCKSZ 
instead.




Yeah, BLCKSZ looks good. Anyway, I plan to reorganize all the *MaxItemSize stuff to 
be compatible with in-place upgrade.


thanks Zdenek


--
Zdenek Kotala  Sun Microsystems
Prague, Czech Republic http://sun.com/postgresql




Re: [PATCHES] page macros cleanup

2008-07-04 Thread Heikki Linnakangas

Pavan Deolasee wrote:

On Fri, Jul 4, 2008 at 3:37 PM, Heikki Linnakangas
[EMAIL PROTECTED] wrote:


I think this is the way it should be:

#define HashMaxItemSize \
   (BLCKSZ - \
SizeOfPageHeaderData - \
MAXALIGN(sizeof(HashPageOpaqueData)) - \
sizeof(ItemIdData))



I am wondering if this would fail in a corner case if HashMaxItemSize
happened to be unaligned. For example, if (itemsz < HashMaxItemSize <
MAXALIGN(itemsz)), PageAddItem() would later fail with a not-so-obvious
error. Should we just MAXALIGN_DOWN the HashMaxItemSize?


No, there's an itemsz = MAXALIGN(itemsz) call before the check against 
HashMaxItemSize.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [PATCHES] page macros cleanup

2008-07-04 Thread Pavan Deolasee
On Fri, Jul 4, 2008 at 4:25 PM, Heikki Linnakangas
[EMAIL PROTECTED] wrote:


 No, there's a itemsz = MAXALIGN(itemsz) call before the check against
 HashMaxItemSize.


Ah, right. Still, should we not just MAXALIGN_DOWN the max size to
reflect the practical limit on itemsz? It's more academic
though, so not a big deal.

Thanks,
Pavan

-- 
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com



Re: [PATCHES] page macros cleanup

2008-07-04 Thread Pavan Deolasee
On Fri, Jul 4, 2008 at 4:20 PM, Zdenek Kotala [EMAIL PROTECTED] wrote:



 In my opinion, the first place where a tuple can be placed is:

 MAXALIGN(SizeOfPageHeaderData + sizeof(ItemIdData))


The tuple actually starts from the other end of the block.

Thanks,
Pavan

-- 
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com



Re: [PATCHES] page macros cleanup

2008-07-04 Thread Zdenek Kotala

Pavan Deolasee napsal(a):

On Fri, Jul 4, 2008 at 4:20 PM, Zdenek Kotala [EMAIL PROTECTED] wrote:



In my opinion, the first place where a tuple can be placed is:

MAXALIGN(SizeOfPageHeaderData + sizeof(ItemIdData))



The tuple actually starts from the other end of the block.


Yes, but I meant first from the offset point of view, which is the opposite of the 
insertion order.

Zdenek


--
Zdenek Kotala  Sun Microsystems
Prague, Czech Republic http://sun.com/postgresql




Re: [PATCHES] page macros cleanup

2008-07-04 Thread Heikki Linnakangas

Zdenek Kotala wrote:

In my opinion, the first place where a tuple can be placed is:

MAXALIGN(SizeOfPageHeaderData + sizeof(ItemIdData))


Yeah, but we don't enforce that directly. We enforce it by MAXALIGNing the 
size in PageAddItem, and with the rule that the special area size is MAXALIGNed.


To put it another way, it's possible that there's an unaligned amount of 
free space on a page, as returned by PageGetExactFreeSpace. But since 
item size is always MAXALIGNed, it's impossible to use it all.



Come to think of it, why do we require MAXALIGN alignment of tuples? I 
must be missing something, but AFAICS the widest fields in 
HeapTupleHeaderData are 4 bytes wide, so it should be possible to get 
away with 4-byte alignment.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [PATCHES] page macros cleanup (ver 04)

2008-07-04 Thread Zdenek Kotala

Pavan Deolasee napsal(a):

On Fri, Jul 4, 2008 at 4:25 PM, Heikki Linnakangas
[EMAIL PROTECTED] wrote:


No, there's a itemsz = MAXALIGN(itemsz) call before the check against
HashMaxItemSize.



Ah, right. Still, should we not just MAXALIGN_DOWN the max size to
reflect the practical limit on itemsz? It's more academic
though, so not a big deal.


In the end I used the following formula:

#define HashMaxItemSize(page) \
MAXALIGN_DOWN(PageGetPageSize(page) - \
  ( SizeOfPageHeaderData + sizeof(ItemIdData) ) - \
   MAXALIGN(sizeof(HashPageOpaqueData)) )


I did not replace PageGetPageSize(page), because the other *MaxItemSize macros have 
the same interface.


Revised patch is attached.

Zdenek

--
Zdenek Kotala  Sun Microsystems
Prague, Czech Republic http://sun.com/postgresql

diff -cr pgsql.orig.da8c485e0e2a/src/backend/access/gist/gistutil.c pgsql.orig/src/backend/access/gist/gistutil.c
*** pgsql.orig.da8c485e0e2a/src/backend/access/gist/gistutil.c	pá črc  4 14:04:20 2008
--- pgsql.orig/src/backend/access/gist/gistutil.c	pá črc  4 14:04:20 2008
***
*** 592,599 
  	/*
  	 * Additionally check that the special area looks sane.
  	 */
! 	if (((PageHeader) (page))->pd_special !=
! 		(BLCKSZ - MAXALIGN(sizeof(GISTPageOpaqueData))))
  		ereport(ERROR,
  (errcode(ERRCODE_INDEX_CORRUPTED),
   errmsg("index \"%s\" contains corrupted page at block %u",
--- 592,598 
  	/*
  	 * Additionally check that the special area looks sane.
  	 */
! 	if ( PageGetSpecialSize(page) != MAXALIGN(sizeof(GISTPageOpaqueData)))
  		ereport(ERROR,
  (errcode(ERRCODE_INDEX_CORRUPTED),
   errmsg("index \"%s\" contains corrupted page at block %u",
diff -cr pgsql.orig.da8c485e0e2a/src/backend/access/hash/hashpage.c pgsql.orig/src/backend/access/hash/hashpage.c
*** pgsql.orig.da8c485e0e2a/src/backend/access/hash/hashpage.c	pá črc  4 14:04:20 2008
--- pgsql.orig/src/backend/access/hash/hashpage.c	pá črc  4 14:04:20 2008
***
*** 407,413 
  	for (i = _hash_log2(metap->hashm_bsize); i > 0; --i)
  	{
  		if ((1 << i) <= (metap->hashm_bsize -
! 		 (MAXALIGN(sizeof(PageHeaderData)) +
  		  MAXALIGN(sizeof(HashPageOpaqueData)))))
  			break;
  	}
--- 407,413 
  	for (i = _hash_log2(metap->hashm_bsize); i > 0; --i)
  	{
  		if ((1 << i) <= (metap->hashm_bsize -
! 		 (MAXALIGN(SizeOfPageHeaderData) +
  		  MAXALIGN(sizeof(HashPageOpaqueData)))))
  			break;
  	}
diff -cr pgsql.orig.da8c485e0e2a/src/backend/access/hash/hashutil.c pgsql.orig/src/backend/access/hash/hashutil.c
*** pgsql.orig.da8c485e0e2a/src/backend/access/hash/hashutil.c	pá črc  4 14:04:20 2008
--- pgsql.orig/src/backend/access/hash/hashutil.c	pá črc  4 14:04:20 2008
***
*** 164,171 
  	/*
  	 * Additionally check that the special area looks sane.
  	 */
! 	if (((PageHeader) (page))->pd_special !=
! 		(BLCKSZ - MAXALIGN(sizeof(HashPageOpaqueData))))
  		ereport(ERROR,
  (errcode(ERRCODE_INDEX_CORRUPTED),
   errmsg("index \"%s\" contains corrupted page at block %u",
--- 164,170 
  	/*
  	 * Additionally check that the special area looks sane.
  	 */
! 	if ( PageGetSpecialSize(page) != MAXALIGN(sizeof(HashPageOpaqueData)))
  		ereport(ERROR,
  (errcode(ERRCODE_INDEX_CORRUPTED),
   errmsg("index \"%s\" contains corrupted page at block %u",
diff -cr pgsql.orig.da8c485e0e2a/src/backend/access/heap/heapam.c pgsql.orig/src/backend/access/heap/heapam.c
*** pgsql.orig.da8c485e0e2a/src/backend/access/heap/heapam.c	pá črc  4 14:04:20 2008
--- pgsql.orig/src/backend/access/heap/heapam.c	pá črc  4 14:04:20 2008
***
*** 1342,1348 
  	ItemPointer tid = &(tuple->t_self);
  	ItemId		lp;
  	Buffer		buffer;
! 	PageHeader	dp;
  	OffsetNumber offnum;
  	bool		valid;
  
--- 1342,1348 
  	ItemPointer tid = &(tuple->t_self);
  	ItemId		lp;
  	Buffer		buffer;
! 	Page		page;
  	OffsetNumber offnum;
  	bool		valid;
  
***
*** 1355,1361 
  	 * Need share lock on buffer to examine tuple commit status.
  	 */
  	LockBuffer(buffer, BUFFER_LOCK_SHARE);
! 	dp = (PageHeader) BufferGetPage(buffer);
  
  	/*
  	 * We'd better check for out-of-range offnum in case of VACUUM since the
--- 1355,1361 
  	 * Need share lock on buffer to examine tuple commit status.
  	 */
  	LockBuffer(buffer, BUFFER_LOCK_SHARE);
! 	page = BufferGetPage(buffer);
  
  	/*
  	 * We'd better check for out-of-range offnum in case of VACUUM since the
***
*** 1362,1368 
  	 * TID was obtained.
  	 */
  	offnum = ItemPointerGetOffsetNumber(tid);
! 	if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
  	{
  		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
  		if (keep_buf)
--- 1362,1368 
  	 * TID was obtained.
  	 */
  	offnum = ItemPointerGetOffsetNumber(tid);
! 	if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(page))
  	{
  		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
  		if (keep_buf)
***
*** 1379,1385 
  	/*
  	 * get the 

Re: [PATCHES] page macros cleanup

2008-07-03 Thread Pavan Deolasee
On Fri, Jun 13, 2008 at 9:38 PM, Zdenek Kotala [EMAIL PROTECTED] wrote:
 I attached a code cleanup which is related to in-place upgrade. I replaced
 direct access to the PageHeader structure with already-existing macros and
 also removed unnecessary retyping.

A quick review comment:

One thing I noticed is that the modified definition of HashMaxItemSize
no longer accounts for the size of ItemId, which may not be the right
thing. Please recheck that.

Apart from that the patch looks good to me. As you said, we should
probably replace the other direct usage of PageHeader with appropriate
macros.

Thanks,
Pavan


-- 
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com



Re: [PATCHES] page macros cleanup

2008-07-03 Thread Heikki Linnakangas

Just one quick note:

Zdenek Kotala wrote:

*** pgsql.orig.da8c485e0e2a/src/backend/access/gist/gistutil.c  pá črn 13 
18:00:35 2008
--- pgsql.orig/src/backend/access/gist/gistutil.c   pá črn 13 18:00:35 
2008
***
*** 592,598 
/*
 * Additionally check that the special area looks sane.
 */
!   if (((PageHeader) (page))->pd_special !=
(BLCKSZ - MAXALIGN(sizeof(GISTPageOpaqueData))))
ereport(ERROR,
(errcode(ERRCODE_INDEX_CORRUPTED),
--- 592,598 
/*
 * Additionally check that the special area looks sane.
 */
!   if (PageGetSpecialPointer(page) - page !=
(BLCKSZ - MAXALIGN(sizeof(GISTPageOpaqueData))))
ereport(ERROR,
(errcode(ERRCODE_INDEX_CORRUPTED),


Should probably use PageGetSpecialSize here. Much simpler, and it doesn't 
assume that the special area is always at the end of the page (not that I 
see us changing that anytime soon).


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
