ncellations and unlike
VACUUM there is no way the user can control the frequency of prune
operations.
Thanks,
Pavan
--
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www
dated tables which receive very few queries on the
standby.
Thanks,
Pavan
ce the query was started, it cancels
itself automatically,
Happy X'mas to all of you!
Thanks,
Pavan
elpful.
>
Should we dump your list to a separate Wiki page so that people can
directly edit/comment/remove the items which they are sure about ?
Thanks,
Pavan
On Thu, Jan 15, 2009 at 7:10 AM, Bruce Momjian wrote:
>
> Is this something for 8.4 CVS?
>
I worked out the patch as per Heikki's suggestion. So I think he needs
to review it and decide its fate.
Thanks,
Pavan
n identify what
can be restored in parallel, and pg_restore can use that information
during restore.
Thanks,
Pavan
---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will i
't know if this is a real problem for anybody, but I could think
of its use case, at least in theory.
Is it worth doing ?
Thanks,
Pavan
d-only table.
Thanks,
Pavan
freezed table and even if the table is
subsequently updated,
hopefully DSM (or something of that sort) will help us reduce the vacuum freeze
time whenever it's required.
Thanks,
Pavan
.
>
Understood. But if we consider a special case of creation and loading
of a table in a single transaction, we can possibly save the information
that the table was loaded with pre-frozen tuples, with xmin equal to the
transaction creating the table.
Thanks,
Pavan
Pavan
On Fri, Feb 29, 2008 at 8:19 PM, Florian G. Pflug <[EMAIL PROTECTED]> wrote:
> Pavan Deolasee wrote:
> > What I am thinking is if we can read ahead these blocks in the shared
> > buffers and then apply redo changes to them, it can potentially
> > improve things a
e amount of redo work to be done at the recovery
time. If we can significantly improve the recovery logic, we can then think of
reducing the work done at the checkpoint time (either through lazy checkpoints
or less frequent hard checkpoints) which would benefit the normal database
operation.
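One way the read-ahead might be sketched, assuming posix_fadvise is available and assuming the (file, block) pairs have already been extracted from upcoming WAL records; this is only an illustration, not the actual recovery code:

```c
#define _XOPEN_SOURCE 600
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

#define BLCKSZ 8192  /* PostgreSQL's default block size */

/* Ask the OS to pull one relation block into its page cache before the
 * redo pass touches it. Illustrative only: real recovery code would
 * walk ahead in the WAL to find which blocks it is about to modify. */
static int prefetch_block(int fd, unsigned blkno)
{
#ifdef POSIX_FADV_WILLNEED
    return posix_fadvise(fd, (off_t) blkno * BLCKSZ, BLCKSZ,
                         POSIX_FADV_WILLNEED);
#else
    (void) fd;
    (void) blkno;
    return 0;   /* no-op where the advisory call is unavailable */
#endif
}
```

The hint is purely advisory, so issuing it for a block that later turns out not to be needed costs little.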
Thanks
e lock on the respective catalog relation. Then each tries to
initialize its own catalog cache. But to do that they need AccessShareLock
on each other's table, leading to a deadlock.
Why not just unconditionally finish phase 2 as part of
InitPostgres ? I understand
that we may end up initializing caches t
On Wed, Mar 5, 2008 at 3:41 PM, Pavan Deolasee <[EMAIL PROTECTED]> wrote:
>
>
>
> Two backends try to vacuum full two different catalog tables. Each acquires
> an
> exclusive lock on the respective catalog relation. Then each try to
> initialize its
> own cata
t to run concurrent INSERTs / UPDATEs / VACUUMs /
VACUUM FULL and CREATE/DROP INDEXes, and VACUUM FULL used to
complain once in a while about tuple mismatch.
Thanks,
Pavan
when we
collapse the redirected line pointers and that we can do at the end, on the
original page.
The last step which we run inside a critical section would then be just like
invoking heap_xlog_clean with the information collected in the first pass.
Thanks,
Pavan
metic caught my eye though.
! nunused = (end - nowunused);
Shouldn't we typecast them to (char *) first ?
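For reference, a standalone illustration (not the PostgreSQL source) of what the cast does and doesn't change: C pointer subtraction between two pointers of the same element type yields an element count, while casting both to (char *) first yields a byte count, so whether the cast is needed depends on which of the two the caller wants.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Element count between two T* pointers: no cast needed, the result
 * is already scaled by sizeof(T). */
static ptrdiff_t element_count(const uint16_t *start, const uint16_t *end)
{
    return end - start;
}

/* Byte count: cast to (char *) first so the subtraction is unscaled. */
static ptrdiff_t byte_count(const void *start, const void *end)
{
    return (const char *) end - (const char *) start;
}
```

With a uint16_t array (a stand-in for a two-byte OffsetNumber), element_count(&a[0], &a[5]) is 5 while byte_count(&a[0], &a[5]) is 10.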
Thanks,
Pavan
n CTID and fail when the cached tuple is accessed.
Thanks,
Pavan
e "counter" table tuple.
This still doesn't solve the serializable transaction problem
though. But I am sure we can figure out some solution for that case
as well if we agree on the general approach.
I am sure this must have been discussed before. So what are the
objections ?
Thanks
On Wed, Mar 12, 2008 at 9:01 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> > I am sure this must have been discussed before.
>
> Indeed. Apparently you didn't find the threads in which the idea of
>
that the counter table may be
completely cached in memory and won't bloat much.
Also, we can always have a GUC (like pgstats) to control the overhead.
Thanks,
Pavan
about now, nobody would have
complained :-)
Anyways, your point is taken and it would be great if we can make it
configurable, if not at table level then at least globally.
Thanks,
Pavan
;
True :-) And I would personally prefer any hack to playing with left-over
redirected line pointers in VACUUM FULL.
Thanks,
Pavan
me.
Thanks,
Pavan
pTransactionContext) and restore it back after vacuum()
returns. But vacuum() might have started a new transaction invalidating the
saved context. Do we see any problem here ?
Thanks,
Pavan
ombine
> rows into a BLOb in an audit table.
>
Another use case of a BEFORE COMMIT trigger is to update the row counts
for fast select count(*) operations (of course with some additional
infrastructure).
Thanks,
Pavan
and merge them with the summary rows
and delete those temporary rows
I guess we can write the generic triggers as a contrib module. What needs
to be done is to let the user specify the tables and the conditions on which
they want to track count(*) and then apply those conditions in the generic
ally attached to the lifespan of the object
itself. You may need to choose one of them if you know that what you
are allocating can not or should not outlive that object.
Thanks,
Pavan
so update the
FSM information of a page when it's pruned/defragged so that the page
can also be used for subsequent INSERTs or non-HOT UPDATEs in
other pages. This might be easier said than done.
Thanks,
Pavan
h anyway --- it always examines all line pointers
> on each selected page, so we might as well rewrite it to use a simple
> loop more like vacuum uses.
>
I agree. I would write a patch on these lines, unless you are already on to it.
Thanks,
Pavan
can make heap_page_prune() only return the
number of HOT tuples pruned and then explicitly count the DEAD
line pointers in tups_vacuumed.
Thanks,
Pavan
Analyze-fix.patch.gz
Description: GNU Zip compressed data
ated with the heap scan. So it might be tricky to
separate out the scan and the index building activity.
Thanks,
Pavan
t_analyze()
> can handle the latter part by groveling through the backend's pending
> statistics data.
>
Seems like the right approach to me. I assume we shall do the same for
DELETE_IN_PROGRESS tuples.
Thanks,
Pavan
On Thu, Apr 3, 2008 at 10:02 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
>
> I've applied a modified/extended form of this patch for 8.3.2.
>
Thanks. I had another concern about VACUUM not reporting DEAD line
pointers (please see up thread). Any comments on that ?
Thanks,
Pavan
On Thu, Apr 3, 2008 at 10:39 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Pavan Deolasee" <[EMAIL PROTECTED]> writes:
>
> > Thanks. I had another concern about VACUUM not reporting DEAD line
> > pointers (please see up thread). Any comments on that ?
>
&
On Fri, Apr 4, 2008 at 11:10 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
> The
> policy of this project is that we only put nontrivial bug fixes into
> back branches, and I don't think this item qualifies ...
>
Got it. I will submit a patch for HEAD.
Thanks,
Pav
hat
level we would run out of maximum supported file size anyways.
Well, this could be completely orthogonal to suggestions you are seeking,
but nevertheless I had this thought for quite some time. So just wanted
to speak about it :-)
Thanks,
Pavan
go wrong
often. And periodically, VACUUM would correct any mistakes in FSM info.
Thanks,
Pavan
On Wed, Apr 9, 2008 at 10:48 PM, Claudio Rossi <[EMAIL PROTECTED]> wrote:
> nulls = (bool *)palloc(natts*sizeof(bool *));
>
May not be related to the segfault you are seeing, but this looks completely
wrong. You want an array of bool, so the element size should be sizeof(bool),
not sizeof(bool *).
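A minimal sketch of the mix-up, using plain malloc as a stand-in for PostgreSQL's palloc in the quoted code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* The quoted line asks for natts * sizeof(bool *) bytes: on a 64-bit
 * machine that is 8 bytes per flag instead of 1. In this direction the
 * slip merely over-allocates, but the same confusion with the sizes
 * reversed (an array of pointers sized with sizeof of the pointee)
 * under-allocates and corrupts memory. */
static bool *alloc_nulls_wrong(int natts)
{
    return (bool *) malloc(natts * sizeof(bool *));  /* wrong element size */
}

static bool *alloc_nulls_right(int natts)
{
    return (bool *) malloc(natts * sizeof(bool));    /* one byte per flag */
}
```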
Thanks,
Pavan
hat we are lucky or our regression
suite doesn't have long enough running tests to give autovacuum a chance
to recycle some of the dead tuples.
Thanks,
Pavan
lock 1 .. 10
Thanks,
Pavan
On Mon, Apr 21, 2008 at 10:54 PM, Pavan Deolasee
<[EMAIL PROTECTED]> wrote:
> Case 1.
>
> Insert 100 records --- goes into block 1 .. 10
> Delete 100 records
> Insert 100 more records --- goes into 11 .. 20
>
>
> Case 2.
>
> Insert 100 records --- g
w include/portability directory added
yesterday.
Thanks,
Pavan
r similar deadlocks waiting to happen. Also I am not sure if the
issue is big enough to demand the change.
Thanks,
Pavan
l" :(
>
Yeah. I think we'd better fix this, especially given the above-mentioned scenario.
Thanks,
Pavan
ed successfully, but that won't have any
correctness implications; it would only delay reclaiming
DEAD_RECLAIMED line pointers.
Comments ?
Thanks,
Pavan
t; will need only a single scan. Smaller tables that require almost
> continual VACUUMing will probably do two scans, but who cares?
>
Yeah, I think we need to target the large table case. The second pass
is obviously much more costly for large tables. I think the timed-wait
answers your c
. But still the second pass would be required and it would
re-dirty all the pages again,
Thanks,
Pavan
stoppers, or the benefits of avoiding a second scan on a large
table are not worth it. I personally have a strong feeling that it's worth
the effort.
Thanks,
Pavan
the initial blocks to remove the DEAD line pointers.
Thanks,
Pavan
ed, either at the code level or design level. I
expect that feedback during this commit fest.
Thanks,
Pavan
ing a clean build at your end once again.
Thanks,
Pavan
intermediate tuples in the HOT chain may get removed
(because we handle aborted heap-only tuples separately) and break the
HOT chain.
I am also looking at the pruning logic to see if I can spot something unusual.
Thanks,
Pavan
On Tue, Jul 21, 2009 at 10:38 AM, Robert Haas wrote:
>
> Pavan, are you planning to respond to Alex's comments and/or update this
> patch?
>
Yes, I will. Hopefully by end of this week.
Thanks,
Pavan
On 11/10/06, Josh Berkus wrote:
Tom,
> Actually, you omitted to mention the locking aspects of moving tuples
> around --- exactly how are you going to make that work without breaking
> concurrent scans?
I believe that's the "unsolved technical issue" in the prototype, unless
Pavan h
On 11/10/06, Tom Lane <[EMAIL PROTECTED]> wrote:
"Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> On 11/10/06, Josh Berkus <josh@agliodbs.com> wrote:
>> I believe that's the "unsolved technical issue" in the prototype, unless
>> Pavan has sol
On 11/10/06, Tom Lane <[EMAIL PROTECTED]> wrote:
"Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> Yes. The last bit in the t_infomask is used up to mark presence of overflow
> tuple header. But I believe there are few more bits that can be reused.
> There are thr
On 11/10/06, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
Tom Lane wrote:
> (Actually, the assumption that you can throw an additional back-pointer
> into overflow tuple headers is the worst feature of this proposal in
> that regard --- it's really not that easy to support multiple header
> formats.)
On 11/10/06, Simon Riggs <[EMAIL PROTECTED]> wrote:
On Fri, 2006-11-10 at 12:32 +0100, Zeugswetter Andreas ADI SD wrote:
> e.g. a different header seems no easier in overflow than in heap
True. The idea there is that we can turn frequent update on/off fairly
easily for normal tables since there are n
On 11/10/06, Tom Lane <[EMAIL PROTECTED]> wrote:
"Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> On 11/10/06, Tom Lane <[EMAIL PROTECTED]> wrote:
> (2) Isn't this full of race conditions?
> I agree, there could be race conditions. But IMO we can handl
Simon Riggs wrote:
> On Fri, 2006-12-29 at 20:25 -0300, Alvaro Herrera wrote:
>> Christopher Browne wrote:
>>> Seems to me that you could get ~80% of the way by having the simplest
>>> "2 queue" implementation, where tables with size < some threshold get
>>> thrown at the "little table" queue, and tables above
Simon Riggs wrote:
> On Fri, 2006-12-29 at 20:25 -0300, Alvaro Herrera wrote:
>> Christopher Browne wrote:
>>
>>> Seems to me that you could get ~80% of the way by having the simplest
>>> "2 queue" implementation, where tables with size < some threshold get
>>> thrown at the "little table" queue,
I am thinking that maintaining fragmented free space within a heap page
might be a good idea. It would help us to reuse the free space ASAP without
waiting for a vacuum run on the page. This in turn will lead to less heap
bloat and also increase the probability of placing the updated tuple in the
s
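A toy sketch of the idea, assuming a simple (offset, length) hole table per page; none of these names correspond to PostgreSQL structures:

```c
#include <assert.h>
#include <stddef.h>

/* Track fragmented free space inside one fixed-size page as a small
 * table of (offset, length) holes that an insert can claim without
 * waiting for a vacuum pass. Illustrative only. */
#define MAX_HOLES 16

typedef struct { size_t off, len; } Hole;

typedef struct
{
    Hole holes[MAX_HOLES];
    int  nholes;
} PageFreeMap;

/* Record a freed chunk; drop it if the table is full (a later vacuum
 * would reclaim it anyway). */
static void free_chunk(PageFreeMap *m, size_t off, size_t len)
{
    if (m->nholes < MAX_HOLES)
    {
        m->holes[m->nholes].off = off;
        m->holes[m->nholes].len = len;
        m->nholes++;
    }
}

/* First-fit: claim a hole big enough for `need` bytes, returning its
 * offset, or (size_t) -1 if nothing fits. */
static size_t claim_chunk(PageFreeMap *m, size_t need)
{
    for (int i = 0; i < m->nholes; i++)
    {
        if (m->holes[i].len >= need)
        {
            size_t off = m->holes[i].off;
            m->holes[i] = m->holes[--m->nholes];  /* remove the entry */
            return off;
        }
    }
    return (size_t) -1;
}
```

A real implementation would also merge adjacent holes and cap the per-page bookkeeping overhead, which is where the "easier said than done" part comes in.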
On 1/23/07, Martijn van Oosterhout wrote:
On Tue, Jan 23, 2007 at 01:48:08PM +0530, Pavan Deolasee wrote:
> We might not be able to reuse the line pointers because indexes may have
> references to it. All such line pointers will be freed when the page is
> vacuumed during the regul
On 1/23/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
ITAGAKI Takahiro wrote:
> Keeping only line pointers itself is not a problem, but it might lead
> bloating of line pointers. If a particular tuple in a page is replaced
> repeatedly, the line pointers area bloats up to 1/4 of the page.
On 1/23/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
Pavan Deolasee wrote:
> So during a sequential or index scan, if a tuple is found to be dead, the
> corresponding line pointer is marked "unused" and the space is returned to a
> free list. This free list i
On 1/23/07, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
Or so... :)
I am sure there are more, the ones with question marks are unknowns but
heard of in the ether somewhere. Any additions or confirmations?
I have the first phase of the Frequent Update Optimizations (HOT) patch ready.
But I held it
On 1/22/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
I've been looking at the way we do vacuums.
The fundamental performance issue is that a vacuum generates
nheapblocks+nindexblocks+ndirtyblocks I/Os. Vacuum cost delay helps to
spread the cost like part payment, but the total is the same.
On 1/23/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
ITAGAKI Takahiro wrote:
> BLCKSZ is typically 8192 bytes and sizeof(ItemPointerData) is 4 bytes.
> 1/4 comes from 8192 / 4 = 2048. If we allow zero-size tuples, the line
> pointers area can bloat up to the ratio. We have tuples no less th
On 1/23/07, Tom Lane <[EMAIL PROTECTED]> wrote:
"Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> Would it help to set the status of the XMIN/XMAX of tuples early enough such
> that the heap page is still in the buffer cache, but late enough such that
> the XMIN
On 1/23/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
Pavan Deolasee wrote:
> Another source of I/O is perhaps the CLOG read/writes for checking
> transaction status. If we are talking about large tables like accounts in
> pgbench or customer/stock in DBT2, the tables ar
On 1/23/07, Tom Lane <[EMAIL PROTECTED]> wrote:
"Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> I know it might break the ctid chain, but does that really matter ?
Yes. You can't just decide that the tuple isn't needed anymore.
As per other followup, you
On 1/24/07, Tom Lane <[EMAIL PROTECTED]> wrote:
"Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> On a typical desktop class 2 CPU Dell machine, we have seen pgbench
> clocking more than 1500 tps.
Only if you had fsync off, or equivalently a disk drive that lies abou
On 1/24/07, Martijn van Oosterhout wrote:
On Wed, Jan 24, 2007 at 12:45:53PM +0530, Pavan Deolasee wrote:
> My apologies if this has been discussed before. I went through the earlier
> discussions, but its still very fuzzy to me. I am not able to construct a
> case where a tuple
On 1/24/07, Gregory Stark <[EMAIL PROTECTED]> wrote:
"Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> On 1/24/07, Martijn van Oosterhout wrote:
>>
>> I thought the classical example was a transaction that updated the same
>> tuple multiple times befor
On 1/24/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
Pavan Deolasee wrote:
> I have just counted the number of read/write calls on the CLOG blocks. As
> you can see the total number of CLOG reads jumped from 545323 to 1181851 i.e.
> 1181851 - 545323 = 636528 CLOG block
On 1/25/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
Pavan Deolasee wrote:
>
> Also is it worth optimizing on the total read() system calls which might not
> cause physical I/O, but still consume CPU ?
I don't think it's worth it, but now that we'r
On 1/26/07, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
Heikki Linnakangas wrote:
> I'd like to see still more evidence that it's a problem before we start
> changing that piece of code. It has served us well for years.
So the TODO could be "investigate whether caching pg_clog and/or
pg_subtrans
On 1/26/07, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
Maybe have the bgwriter update hint bits as it evicts pages out of the
cache? It could result in pg_clog read traffic for each page that needs
eviction; not such a hot idea.
I thought once we enhance clog so that there are no clog reads,
On 1/26/07, Tom Lane <[EMAIL PROTECTED]> wrote:
I think what he's suggesting is deliberately not updating the hint bits
during a SELECT ...
No, I was suggesting doing it in bgwriter so that we may not need to do that
during a SELECT. Of course, we need to investigate more and have numbers to pr
Not sure whether it's worth optimizing, but I had spotted this while browsing
the code a while back, so I thought I would post it anyway.
The stack usage for toast_insert_or_update() may run into several KBs since
the MaxHeapAttributeNumber is set to a very large value of 1600. The usage
could anywhere
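Rough arithmetic behind the "several KBs" claim; the exact set of local arrays inside toast_insert_or_update() is an assumption here, but a single Datum/bool array pair alone illustrates the scale:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MaxHeapAttributeNumber 1600   /* PostgreSQL's compile-time limit */

typedef uintptr_t Datum;              /* pointer-sized, as in PostgreSQL */

/* Stack bytes consumed by one pair of per-attribute local arrays, e.g.
 * Datum values[MaxHeapAttributeNumber] plus the matching
 * bool nulls[MaxHeapAttributeNumber]. */
static size_t per_attribute_pair_bytes(void)
{
    return MaxHeapAttributeNumber * (sizeof(Datum) + sizeof(bool));
}
```

On a 64-bit machine one such pair is 1600 * (8 + 1) = 14400 bytes, about 14 KB; a function keeping several such pairs on the stack quickly reaches the multi-KB range described above.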
On 1/30/07, Tom Lane <[EMAIL PROTECTED]> wrote:
"Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> The stack usage for toast_insert_or_update() may run into several KBs since
> the MaxHeapAttributeNumber is set to a very large value of 1600. The usage
> could anywhe
On 1/31/07, Tom Lane <[EMAIL PROTECTED]> wrote:
"Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> Btw, I noticed that the toast_insert_or_update() is re-entrant.
> toast_save_datum() calls simple_heap_insert() which somewhere down the
> line calls toast_insert_or_
On 1/31/07, Pavan Deolasee <[EMAIL PROTECTED]> wrote:
Attached is a patch which would print the recursion depth for
toast_insert_or_update() before PANICing the server to help us
examine the core.
Here is the attachment.
Thanks,
Pavan
On 1/31/07, Tom Lane <[EMAIL PROTECTED]> wrote:
We can't change TOAST_MAX_CHUNK_SIZE without forcing an initdb, but I
think that it would be safe to remove the MAXALIGN'ing of the tuple
size in the tests in heapam.c, that is
That would mean that the tuple size in the heap may exceed
TOAST_TU
On 2/9/07, Simon Riggs <[EMAIL PROTECTED]> wrote:
On Wed, 2007-02-07 at 14:17 -0500, Tom Lane wrote:
> ISTM we could fix that by extending the index VACUUM interface to
> include two concepts: aside from "remove these TIDs when you find them",
> there could be "replace these TIDs with those TID
On 2/11/07, Hannu Krosing <[EMAIL PROTECTED]> wrote:
On Sun, 2007-02-11 at 12:35, Tom Lane wrote:
> Hannu Krosing <[EMAIL PROTECTED]> writes:
> > What if we would just reuse the root tuple directly instead of turning
> > it into a stub ?
> > This would create a cycle of ctid p
On 2/12/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
Hannu Krosing wrote:
> On Sun, 2007-02-11 at 12:35, Tom Lane wrote:
>> Hannu Krosing <[EMAIL PROTECTED]> writes:
>>> What if we would just reuse the root tuple directly instead of turning
>>> it into a stub ?
>>> This w
On 2/13/07, Tom Lane <[EMAIL PROTECTED]> wrote:
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Hannu Krosing wrote:
>> Are we actually doing that ? I.E are null bitmaps really allocated in 1
>> byte steps nowadays ?
> Yes.
Not really; we still have to MAXALIGN at the end of the bitmap. The
This is a WIP patch based on the recent posting by Simon and the discussions
thereafter. We are trying to do one piece at a time and the intention is to
post the work ASAP so that we could get early and continuous feedback from
the community. We could then incorporate those suggestions in the next
WIP pat
On 2/14/07, Hannu Krosing <[EMAIL PROTECTED]> wrote:
OTOH, for same page HOT tuples, we have the command and trx ids stored
twice first as cmax,xmax of the old tuple and as cmin,xmin of the
updated tuple. One of these could probably be used for in-page HOT tuple
pointer.
I think we recently
On 2/14/07, Tom Lane <[EMAIL PROTECTED]> wrote:
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> What's the verdict on relaxing the "live tuple's ctid doesn't change
> rule"?
I think that's unacceptable; it is known that that will break the ODBC
and JDBC drivers, as well as any other programs t
On 2/15/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
Do we actually ever want to remove dead tuples from the middle of the
chain? If a tuple in the middle of the chain is dead, surely every tuple
before it in the chain is dead as well, and we want to remove them as
well. I'm thinking, remo
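That observation, that deadness accumulates at the front of the chain, is what makes pruning a simple forward walk. A toy model of it (the names and structures here are illustrative, not PostgreSQL's): pruning just advances the chain start past the dead leading versions.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of a HOT-update chain: each version points to the newer
 * version via a "ctid"-like index. Since every tuple before a dead
 * tuple is dead as well, pruning only needs to skip the dead prefix. */
#define CHAIN_END (-1)

typedef struct
{
    int  next;   /* index of the newer version, or CHAIN_END */
    bool dead;   /* no longer visible to any snapshot */
} ToyTuple;

/* Return the index of the first live tuple reachable from `root`,
 * conceptually redirecting the root line pointer past dead versions. */
static int prune_chain(const ToyTuple *tuples, int root)
{
    while (root != CHAIN_END && tuples[root].dead)
        root = tuples[root].next;
    return root;
}
```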
On 2/16/07, Hannu Krosing <[EMAIL PROTECTED]> wrote:
On Wed, 2007-02-14 at 10:41, Tom Lane wrote:
> Hannu Krosing <[EMAIL PROTECTED]> writes:
> > OTOH, for same page HOT tuples, we have the command and trx ids stored
> > twice first as cmax,xmax of the old tuple and as cmin,xm
On 2/16/07, Zeugswetter Andreas ADI SD <[EMAIL PROTECTED]> wrote:
> > As described, you've made
> > that problem worse because you're trying to say we don't know which of
> > the chain entries is pointed at.
>
> There should be a flag, say HOT_CHAIN_ENTRY for the tuple the
it's called HEAP_UPD
On 2/16/07, Zeugswetter Andreas ADI SD <[EMAIL PROTECTED]> wrote:
Oh sorry. Thanks for the clarification. Imho HEAP_UPDATE_ROOT should be
renamed for this meaning then (or what does ROOT mean here ?).
Maybe HEAP_UPDATE_CHAIN ?
Yes, you are right. There is some disconnect between what Simon h
On 2/17/07, Lukas Kahwe Smith <[EMAIL PROTECTED]> wrote:
I have emailed Gregory, Pavan and Simon only 2 days ago, so I am not
surprised to not have gotten feedback yet.
Oops, I haven't received the email you mentioned. Can you resend me the
same ?
Thanks,
Pavan
On 2/19/07, Tom Lane <[EMAIL PROTECTED]> wrote:
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> On Monday, 19 February 2007 at 13:12, Alvaro Herrera wrote:
>> I don't understand -- what problem you got with "NO OPERATION"? It
>> seemed a sound idea to me.
> It seems nonorthogonal. What if only s
Reposting - looks like the message did not get through in the first
attempt. My apologies if multiple copies are received.
This is the next version of the HOT WIP patch. Since the last patch that
I sent out, I have implemented the HOT-update chain pruning mechanism.
When following a HOT-update