#define HashMaxItemSize(page) \
	(PageGetPageSize(page) - \
	 (MAXALIGN(SizeOfPageHeaderData + sizeof(ItemIdData)) + \
	  MAXALIGN(sizeof(HashPageOpaqueData))))
What do you think?
Yes. I think that's the correct way.
Thanks,
Pavan
--
Pavan Deolasee
of MAXALIGN), I don't see how
existing hash indexes can have a larger item than the new limit.
Thanks,
Pavan
--
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com
--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches
if this would fail for a corner case if HashMaxItemSize
happened to be unaligned. For example, if (itemsz <= HashMaxItemSize)
but (MAXALIGN(itemsz) > HashMaxItemSize), PageAddItem() would later
fail with a not-so-obvious error. Should we just MAXALIGN_DOWN the
HashMaxItemSize?
Thanks,
Pavan
academic, though, so not a big deal.
Thanks,
Pavan
On Fri, Jul 4, 2008 at 4:20 PM, Zdenek Kotala [EMAIL PROTECTED] wrote:
In my opinion, the first place where a tuple should be placed is:
MAXALIGN(SizeOfPageHeaderData + sizeof(ItemIdData))
The tuple actually starts from the other end of the block.
Thanks,
Pavan
setting page_prune_xid). Maybe we should trigger pruning
if we see line pointer bloat in a page too.
Please let me know your comments/suggestions and any other improvements.
Thanks,
Pavan
--
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com
VACUUM_second_scan-v5.patch.gz
Description: GNU Zip
normal scenario where it won't work well ?
Thanks,
Pavan
are searching for a number closer to 1000, we can break the
array into large/small parts instead of equal parts and then
search.
Well, maybe I am making simple things complicated ;-)
Thanks,
Pavan
of thousands of subtransactions to
begin with..
True. But that's the case we are trying to solve here :-)
Thanks,
Pavan
a limit.
Thanks,
Pavan
if the transformation is correct. If xvac_committed is true, why would
one even get into the else part ?
Thanks,
Pavan
On 9/18/07, Jaime Casanova [EMAIL PROTECTED] wrote:
this sql scripts make current cvs + patch to crash with this message
in the logs:
Can you please check if the attached patch fixes the issue for you ?
It sets t_tableOid before returning a HOT tuple to the caller.
Thanks,
Pavan
before checking
for HeapTupleIsHotUpdated, so we are fine. Or should we just check
for XMIN_INVALID explicitly at those places ?
Thanks,
Pavan
anyway ...
I agree. I just wanted to leave a hint there that such a possibility exists
if someone really wants to optimize, now or later.
Thanks,
Pavan
your judgment.
I also liked the way you reverted the API changes to various index build
methods.
I will test the patch in detail tomorrow.
Thanks,
Pavan
of the other items.
Thanks,
Pavan
in a different mechanism
to handle that. So we should be able to get rid of HEAPTUPLE_DEAD_CHAIN.
Thanks,
Pavan
prune
it.
Thanks,
Pavan
On 9/12/07, Pavan Deolasee [EMAIL PROTECTED] wrote:
One change that is worth mentioning
and discussing is that we don't follow HOT chains while fetching tuples
during autoanalyze, and autoanalyze would consider all such tuples DEAD.
In the worst case when all the tuples in the table
,
Pavan
On 9/13/07, Tom Lane [EMAIL PROTECTED] wrote:
Never mind ... though my
suspicions would probably not have been aroused if anyone had bothered
to fix the comments.
Yeah, my fault. I should have fixed that. Sorry about that.
Thanks,
Pavan
this well; maybe I should post an example.
Need to run now.
Thanks,
Pavan
that tuple as well.
Thanks,
Pavan
of the
chain, analyze would use that. Otherwise the tuple is considered as
DEAD.
Thanks,
Pavan
from it.
Any other ideas ?
Thanks,
Pavan
the changes related to freezing. I also think that
should let us remove the DEAD_CHAIN concept, but let me check.
Thanks,
Pavan
On 9/11/07, Tom Lane [EMAIL PROTECTED] wrote:
Pavan Deolasee [EMAIL PROTECTED] writes:
Pruning removes intermediate dead tuples by marking their line pointers
~LP_USED and redirecting the root line pointer to the first
live/recently_dead tuple in the chain.
It seems utterly unsafe to do
On 9/11/07, Pavan Deolasee [EMAIL PROTECTED] wrote:
- Track the minimum xmin in the page header to avoid repeated
(wasted) attempts to prune a Prunable page in the presence of long running
transactions.
I would actually think twice before even doing this because this would lead
to
complete
On 9/11/07, Tom Lane [EMAIL PROTECTED] wrote:
Pavan Deolasee [EMAIL PROTECTED] writes:
I would actually think twice before even doing this because this would
lead to complete change in heap page structure and stop people from
upgrading to 8.3 without a complete dump/restore. I don't
be unsafe, but we
don't care if you occasionally skip the maintenance work (or do it a
little early)
Thanks,
Pavan
this.
Thanks,
Pavan
assume we
are not worried about these maintenance activities.
Thanks,
Pavan
tuples and reduced HOT chain
to a single tuple. Hence the total time for subsequent SELECTs improved
tremendously.
Thanks,
Pavan
of a long
running transaction are high enough to justify adding 4 bytes
to page header.
Thanks,
Pavan
do better than HOT in the above case. Net-net,
there will be an equal number of index keys after the inserts.
Thanks,
Pavan
are moved to the end of the page to create a larger
contiguous free space in the page.
Thanks,
Pavan
about
making such changes right now unless we are sure about the benefits.
We can always tune and tweak in 8.4
Thanks,
Pavan
header to avoid repeated
(wasted) attempts to prune a Prunable page in the presence of long running
transactions.
We can save rest of the techniques for beta testing period or 8.4.
Thanks,
Pavan
is limited.
Thanks,
Pavan
, but do I need to worry
about its interaction with HOT ?
Thanks,
Pavan
On 8/31/07, Pavan Deolasee [EMAIL PROTECTED] wrote:
In fact, now that I think about it there is no other
fundamental reason to not support HOT on system tables. So we
can very well do what you are suggesting.
On second thought, I wonder if there is really much to gain by
supporting HOT
am on vacation on Thursday/Friday
and for remaining days, I may not be able to spend extra cycles,
apart from regular working hours.
On 8/30/07, Tom Lane [EMAIL PROTECTED] wrote:
Pavan Deolasee [EMAIL PROTECTED] writes:
Please see the version 14 of HOT patch attached.
I expected to find either a large new README, or some pretty substantial
additions to existing README files, to document how this all works.
The comments
On 8/30/07, Tom Lane [EMAIL PROTECTED] wrote:
Pavan Deolasee [EMAIL PROTECTED] writes:
You are right - a new index might mean that an existing HOT chain
is broken as far as the new index is concerned. The way we address
that is by indexing the root tuple of the chain, but the index key
On 8/31/07, Tom Lane [EMAIL PROTECTED] wrote:
Pavan Deolasee [EMAIL PROTECTED] writes:
Not if someone else releases lock before committing. Which I remind you
is a programming technique we use quite a lot with respect to the system
catalogs. I'm not prepared to guarantee
. Further, we allow creating indexes on system attributes. So we
must support those.
Thanks,
Pavan
On 8/2/07, Pavan Deolasee [EMAIL PROTECTED] wrote:
. It would also be better if we didn't emit a
separate WAL record for defraging a page, if we also prune it at the
same time. I'm not that worried about WAL usage in general, but that
seems simple enough to fix.
Ah I see. I shall
On 8/2/07, Pavan Deolasee [EMAIL PROTECTED] wrote:
On 8/2/07, Heikki Linnakangas [EMAIL PROTECTED] wrote:
Maybe a nicer
solution would be to have another version of ConditionalLockBuffer with
three different return values: didn't get lock, got exclusive lock, or
got cleanup lock
On 8/2/07, Heikki Linnakangas [EMAIL PROTECTED] wrote:
Pavan Deolasee wrote:
Please see the attached version 11 of HOT patch
Thanks!
One wrinkle in the patch is how the ResultRelInfo-struct is passed to
heap_update, and on to heap_check_idxupdate, to check any indexed
columns have
On 8/1/07, Simon Riggs [EMAIL PROTECTED] wrote:
On Wed, 2007-08-01 at 14:36 +0530, Pavan Deolasee wrote:
BufferIsLockedForCleanup() should be named BufferIsAvailableForCleanup().
There is no cleanup mode, what we mean is that there is only one pin;
the comments say If we are lucky enough
#define InvalidOffsetNumber ((OffsetNumber) 0)
So I think we should be OK to use that to indicate redirect-dead
pointers.
Thanks,
Pavan
On 6/2/07, Bruce Momjian [EMAIL PROTECTED] wrote:
OK, removed from 8.4 queue.
I am OK with this, though I personally never felt that it complicated
the code :-)
Thanks,
Pavan
something like:
Tom has already expressed his unwillingness to add complexity
without any proven benefits. Your suggestion though good would
make the code more unreadable without much benefit since the
function is not called very often.
Thanks,
Pavan
On 5/19/07, Heikki Linnakangas [EMAIL PROTECTED] wrote:
Ah, sorry about that. For some reason my source tree was checked out
from the 8.2 branch, instead of CVS HEAD.
I looked at the patch. Not that I am very comfortable with this part
of the code, but nevertheless here are my comments:
I
On 5/21/07, Pavan Deolasee [EMAIL PROTECTED] wrote:
On 5/19/07, Heikki Linnakangas [EMAIL PROTECTED] wrote:
Ah, sorry about that. For some reason my source tree was checked out
from the 8.2 branch, instead of CVS HEAD.
I looked at the patch. Not that I am very comfortable
around operators like '+', '=', etc. really makes the code more readable.
Other examples include using parentheses in the right manner to improve
code readability:
flag = (pointer == NULL); is more readable than
flag = pointer == NULL;
Thanks,
Pavan
and a failure after the first step will leave an invalid index behind. In
this particular case, CIC fails because of duplicate keys.
I deliberately did not fix the regression output, to highlight this change.
Thanks,
Pavan
On 4/19/07, Heikki Linnakangas [EMAIL PROTECTED] wrote:
What's the purpose of the HeapScanHintPagePrune mechanism in index
builds? I lost track of the discussion on create index; is it
necessary for correctness?
It's not strictly required for correctness, but it helps us prune the
them all as normal cold
updates.
Thanks Heikki. We might need to tweak it a bit because I think I had
made an assumption that heap_hot_fetch() should be called only on
the root tuple. Anyway, I will look at it.
Thanks
=0x9ecdc50) at main.c:188
On 4/1/07, Tom Lane [EMAIL PROTECTED] wrote:
Good point. I'm envisioning a procarray.c function along the
lines of
bool TransactionHasSnapshot(xid)
which returns true if the xid is currently listed in PGPROC
and has a nonzero xmin. CIC's cleanup wait loop would check
this and ignore
On 4/11/07, Tom Lane [EMAIL PROTECTED] wrote:
[ itch... ] The problem is with time-extended execution of
GetSnapshotData; what happens if the other guy lost the CPU for a good
long time while in the middle of GetSnapshotData? He might set his
xmin based on info you saw as long gone.
You
On 4/6/07, Tatsuo Ishii [EMAIL PROTECTED] wrote:
BTW, is anybody working on enabling the fill factor to the tables used
by pgbench? 8.3 will introduce HOT, and I think adding the feature
will make it easier to test HOT.
Please see if the attached patch looks good. It adds a new -F option
On 4/3/07, Bruce Momjian [EMAIL PROTECTED] wrote:
Your patch has been added to the PostgreSQL unapplied patches list at:
Thanks Bruce. I would like to submit at least one more revision
which would include a couple of TODOs mentioned in my last mail.
I would also like to do some cleanup and
The version 5.0 of HOT WIP patch is attached. This fixes the
VACUUM FULL issue with HOT. In all the earlier versions, I'd
disabled VACUUM FULL.
When we move the HOT-chain, we move the chains but don't carry
the HOT_UPDATED or HEAP_ONLY flags and insert as many index
entries as there are tuples
Please see the attached version 4.4 of HOT WIP patch. I have
fixed a couple of bugs in the earlier version posted. Other than
that there are not any significant changes in the patch.
The row-level fragmentation had a bug where we were
unintentionally sorting the line pointers array more than
Please see the attached HOT WIP patch, version 4.1. There are
not any significant changes since the version 4.0 patch that
I posted a week back.
This patch includes some optimizations for efficiently looking
up LP_DELETEd tuples. I have used the recent changes made by
Tom/Heikki which give us
Simon Riggs wrote:
I'll happily code it as functions or system cols or any other way, as
long as we can see everything there is to see.
With HOT, other useful information is about the line pointers. It would be
cool to be able to print the redirection info, details about LP_DELETEd
line
Heikki Linnakangas wrote:
Attached is a fix for that. It adds a flag to each heap page that
indicates that there aren't any free line pointers on this page, so
don't bother trying. Heap pages haven't had any heap-specific
per-page data before, so this patch adds a HeapPageOpaqueData-struct
Resending once again; all my previous attempts seem to have failed.
Is there a limit on patch size on -patches as well ? Attached is
a gzipped version.
Hi All,
Please see the version 4.0 of HOT WIP patch attached with the mail.
ISTM that this version has one of the most radical changes
On 2/27/07, Heikki Linnakangas [EMAIL PROTECTED] wrote:
Pavan Deolasee wrote:
- What do we do with the LP_DELETEd tuples at the VACUUM time ?
In this patch, we are collecting them and vacuuming like
any other dead tuples. But is that the best thing to do ?
Since they don't need index
Please see the attached WIP HOT patch - version 3.2. It now
implements the logic for reusing heap-only dead tuples. When a
HOT-update chain is pruned, the heap-only tuples are marked
LP_DELETE. The lp_offset and lp_len fields in the line pointer are
maintained.
When a backend runs out of free
On 2/20/07, Hannu Krosing [EMAIL PROTECTED] wrote:
On Tue, 2007-02-20 at 12:08, Pavan Deolasee wrote:
What do you do if there are no live tuples on the page? Will this
un-HOTify the root and free all other tuples in the HOT chain?
Yes. The HOT-updated status of the root
On 2/20/07, Bruce Momjian [EMAIL PROTECTED] wrote:
Pavan Deolasee wrote:
When following a HOT-update chain from the index fetch, if we notice
that
the root tuple is dead and it is HOT-updated, we try to prune the chain
to
the smallest possible length. To do that, the share lock is upgraded
On 2/20/07, Tom Lane [EMAIL PROTECTED] wrote:
Pavan Deolasee [EMAIL PROTECTED] writes:
... Yes. The HOT-updated status of the root and all intermediate
tuples is cleared and their respective ctid pointers are made
to point to themselves.
Doesn't that destroy the knowledge that they form
On 2/20/07, Bruce Momjian [EMAIL PROTECTED] wrote:
Tom Lane wrote:
Recently dead means still live to somebody, so those tids better not
change either. But I don't think that's what he meant. I'm more
worried about the deadlock possibilities inherent in trying to upgrade a
buffer lock.
Reposting - looks like the message did not get through in the first
attempt. My apologies if multiple copies are received.
This is the next version of the HOT WIP patch. Since the last patch that
I sent out, I have implemented the HOT-update chain pruning mechanism.
When following a HOT-update
I had this in a different form, but reworked so that it matches the doc
patch that Teodor submitted earlier. I think it would be good to have this
information in the lock.h file as well.
Thanks,
Pavan
lock-compatibility.patch
On 1/30/07, Peter Eisentraut [EMAIL PROTECTED] wrote:
Pavan Deolasee wrote:
I had this in a different form, but reworked so that it matches the
doc patch that Teodor submitted earlier. I think it would be good to
have this information in the lock.h file as well.
Why would we want to have
On 1/28/07, Tom Lane [EMAIL PROTECTED] wrote:
OTOH it might be
cleaner to refactor things that way, if we were going to apply this.
Here is a revised patch which includes refactoring of
heap_get_latest_tid(), as per Tom's suggestion.
Thanks,
Pavan
On 1/27/07, Tom Lane [EMAIL PROTECTED] wrote:
It looks to me that you have introduced a buffer leak into
heap_get_latest_tid ...
I can't spot that. A previously pinned buffer is released at the start
of the loop if we are moving to a different block. Otherwise, the buffer
is released at all
Attached is a patch which should marginally improve the ctid chain following
code path when the current and the next tuple in the chain are in the same
block.
In the current code, we unconditionally drop the pin on the current block
without checking whether the next tuple is in the same block or not. ISTM