oubt that WARM meets
this standard, unfortunately, because it doesn't do anything for cases
that suffer only due to a long running xact.
I don't accept that there is a rigid dichotomy between Postgres style
MVCC, and using UNDO for MVCC, and I most certainly don't accept that
g
t is
necessarily the most recent version, which could introduce ambiguity
(what happens when it is changed, then changed back?). That's actually
rather similar to what you could do with HOT + the existing heapam,
except that there is a clearer demarcation of "current"
Robert Haas wrote:
(Boy, our implementation of DROP COLUMN is painful! If we really got
rid of columns when they were dropped we could've avoided this whole
mess.)
I tend to agree. I can recall several cases where it led to bugs that
went undetected for quite a while.
--
Peter Geoghegan
Amit Kapila wrote:
Isn't it possible to confirm if the problem is due to commit
2ed5b87f9? Basically, if we have unlogged tables, then it won't
release the pin. So if the commit in question is the culprit, then
the same workload should not lead to bloat.
That's a great idea.
Peter Geoghegan wrote:
In Alik's workload, there are two queries: One UPDATE, one SELECT. Even
though the bloated index was a unique index, and so still gets
_bt_check_unique() item killing, the regression is still going to block
LP_DEAD cleanup by the SELECTs, which seems like it mig
just a few
leaf pages, pages that become very bloated. It's not fair to blame the
bloat we saw there on this regression, but I have to wonder how much it
may have contributed.
--
Peter Geoghegan
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
he goal of WARM is, roughly speaking, to make
updates that would not be HOT-safe today do a "partial HOT update". My
concern with that idea is that it doesn't do much for the worst case.
--
Peter Geoghegan
eventually have a good understanding of the system as a whole.
--
Peter Geoghegan
On Tue, Jul 25, 2017 at 8:02 PM, Peter Geoghegan wrote:
> Setup:
>
> Initialize pgbench (any scale factor).
> create index on pgbench_accounts (aid);
That "create index" was meant to be on "abalance", to make the UPDATE
queries always HOT-unsafe. (You'll want
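Putting the correction together, a sketch of the intended setup (a hypothetical reconstruction; the exact scale factor doesn't matter):

```sql
-- pgbench must already be initialized, e.g.: pgbench -i -s 50
-- Index the column that every built-in pgbench UPDATE modifies,
-- so that no UPDATE on pgbench_accounts can ever be HOT-safe:
CREATE INDEX ON pgbench_accounts (abalance);
```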
On Tue, Jul 25, 2017 at 3:02 PM, Peter Geoghegan wrote:
> I've been thinking about this a lot, because this really does look
> like a pathological case to me. I think that this workload is very
> sensitive to how effective kill_prior_tuples/LP_DEAD hinting is. Or at
> least, I
On Fri, Jul 14, 2017 at 5:06 PM, Peter Geoghegan wrote:
> I think that what this probably comes down to, more than anything
> else, is that you have leftmost hot/bloated leaf pages like this:
>
>
> idx | level | l_item | blkno | btpo_prev |
> btpo_next |
think that this workload suffers from
index bloat in a way that isn't so easily explained. It does seem to
be an issue with VACUUM controlling bloat in the index in particular.
--
Peter Geoghegan
possible, I don't see
much value in it. Unless you have normalized keys, you can only really
truncate whole attributes. And, I think it's a bad idea to truncate
anything other than the new high key for leaf pages, with or without
normalized keys. Changing the keys once they're in
should be cheaper to maintain without that requirement.
I agree, but unless you're using normalized keys, then I don't see
that you get much useful leeway from using fake or truncated TID
values. Presumably the comparison logic will be based on comparing an
ItemPointerData field, whic
t6RZAqNnQx-YLcw=q...@mail.gmail.com
--
Peter Geoghegan
's Zipfian
distribution test.
The way that the keyspace is broken up is supposed to be balanced, and
to have long term utility. Working against that to absorb a short term
bloat problem is penny wise, pound foolish.
--
Peter Geoghegan
> 2816 | 2 |
> {HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID,HEAP_XMIN_FROZEN}
> (1 row)
Seems like a good idea to me.
--
Peter Geoghegan
On Wed, Jul 19, 2017 at 7:57 PM, Tom Lane wrote:
> Peter Geoghegan writes:
>> My argument for the importance of index bloat to the more general
>> bloat problem is simple: any bloat that accumulates, that cannot be
>> cleaned up, will probably accumulate until it impacts
w it ends up being used (whether it's by
VACUUM, something closer to a synchronous deletion, or whatever).
[1] https://brandur.org/postgres-queues
[2]
https://wiki.postgresql.org/wiki/Key_normalization#Avoiding_unnecessary_unique_index_enforcement
--
Peter Geoghegan
ill never actually be reclaimed [2].
[1]
postgr.es/m/CAH2-Wzmf6intNY1ggiNzOziiO5Eq=DsXfeptODGxO=2j-i1...@mail.gmail.com
[2]
https://wiki.postgresql.org/wiki/Key_normalization#VACUUM_and_nbtree_page_deletion
--
Peter Geoghegan
ack that adds
randomness to the search for free space among duplicates, and may let
us follow the Lehman & Yao algorithm more closely.
[1]
https://wiki.postgresql.org/wiki/Key_normalization#Suffix_truncation_of_normalized_keys
--
Peter Geoghegan
about that we will never
find all of. *Plenty* of TIDs today do not point to the heap at all.
For example, internal pages in nbtree use TIDs that point to the
level below.
You would break some code within indextuple.c, but that doesn't seem
so bad. IndexInfoFindDataOffset() already has to de
ing UPDATE index tuple insertions for indexes on unchanged
attributes, often just because pruning can fail to happen in time,
which WARM will not fix.
--
Peter Geoghegan
ponent (item
offset). Nothing I can think of prevents us from creating an
alternative, entirely logical identifier that fits in the same 6
bytes. It can map to a versioning indirection layer, for unique
indexes, or to a primary key value, for secondary indirect indexes.
[1]
postgr.es/m/CAH2-Wzmf6intNY1
e auxiliary posting list
structure is removed very quickly. I wouldn't expect btree_gin to be
faster for this workload today, because it doesn't have
kill_prior_tuple/LP_DEAD support, and because it doesn't support
unique indexes, and so cannot exploit the special situation that
exists with u
. There is no need to do this for
Postgres 10. I don't feel very strongly about it. It just doesn't make
sense to continue to support replacement selection.
--
Peter Geoghegan
On Thu, Jul 13, 2017 at 12:49 PM, Peter Geoghegan wrote:
> To reiterate what I say above:
>
> The number of leaf pages with dead items is 20 with this most recent
> run (128 clients, patched + unpatched). The leftmost internal page one
> level up from the leaf level contains 289
On Thu, Jul 13, 2017 at 10:02 AM, Peter Geoghegan wrote:
> The number of leaf pages at the left hand side of the leaf level seems
> to be ~50 less than the unpatched 128 client case was the first time
> around, which seems like a significant difference. I wonder why. Maybe
> autovacuu
ase is slightly more bloated when unique index enforcement is
removed.
--
Peter Geoghegan
On Wed, Jul 12, 2017 at 2:17 PM, Peter Geoghegan wrote:
> I'd be interested in seeing the difference it makes if Postgres is
> built with the call to _bt_check_unique() commented out within
> nbtinsert.c.
Actually, I mean that I wonder how much of a difference it would make
if thi
(49.8% of total, tps = 46317.766703)
- latency average = 2.630 ms
- latency stddev = 14.092 ms
This is from the 128 client case -- the slow case.
Notice that the standard deviation is very high for
ycsb_update_zipf.sql. I wonder if this degrades because of some kind
of feedback loop, that reaches a
On Wed, Jul 12, 2017 at 12:30 PM, Peter Geoghegan wrote:
> On Wed, Jul 12, 2017 at 4:28 AM, Alik Khilazhev
> wrote:
>> I am attaching results of query that you sent. It shows that there is
>> nothing have changed after executing tests.
>
> But something did change! In th
to investigate, though. Even if it wasn't
buffer lock contention, we're still talking about the difference
between the hot part of the B-Tree being about 353 pages, versus 285.
Buffer lock contention could naturally limit the growth in size to
"only" 353, by slowing everyth
d text, which I
think you really need to make the implementation effort worth it).
This is something that is discussed in a section on the normalized
keys wiki page I created recently [1].
[1]
https://wiki.postgresql.org/wiki/Key_normalization#ICU.2C_text_equality_semantics.2C_and_hashing
--
Peter Geoghegan
ase you didn't end up
with a valid separator due to the vagaries of the collation rules.
That's the kind of complexity that scales poorly, because the
complexity cannot be isolated.
--
Peter Geoghegan
hat on the Wiki page. I also describe ways that these
techniques have some non-obvious benefits for VACUUM B-Tree page
deletion, which you should get "for free" once you do the 3 things I
mentioned. A lot of the benefits for VACUUM are seen in the worst
case, which is when they're rea
l list" which is also
ordered by (key, TID). Much less random I/O, and pretty good
guarantees about the worst case.
--
Peter Geoghegan
based on feedback. It is still very
much a work in progress.
--
Peter Geoghegan
few select datatypes.
--
Peter Geoghegan
type,
live_items,
dead_items,
avg_item_size,
page_size,
free_size,
-- Only non-rightmost pages have high key.
case when btpo_next != 0 then (select data from bt_page_items(idx,
blkno) where itemoffset = 1) end as highkey
from
ordered_internal_items o
join internal_items i on o.blk
more curious about why we're performing badly than I am about a
> general-purpose random_zipfian function. :-)
I'm interested in both. I think that a random_zipfian function would
be quite helpful for modeling certain kinds of performance problems,
like CPU cache misses incurred at
nhancement to the collator algorithm could break
things that wouldn't break sort support.
ICU has facilities for this, and the ICU documentation talks about storing strxfrm()
blobs on disk for a long time, but the details would need to be worked
out.
[1]
postgr.es/m/cah2-wzmw9ljtfzp7uy4chsf3nh0ym-_pow3lx
e much more important than anything
else. Even with collated text, the difference is not so large IIRC.
Though, it was noticeably larger.
--
Peter Geoghegan
On Tue, Jun 27, 2017 at 11:04 AM, Andres Freund wrote:
>
> On 2017-06-27 10:57:15 -0700, Peter Geoghegan wrote:
>> I looked at this again recently. I wrote a patch to prove to myself
>> that we can fairly easily reclaim 15 bits from every nbtree internal
>> page ItemId, an
On Thu, May 19, 2016 at 7:28 PM, Peter Geoghegan wrote:
> Abbreviated keys in indexes are supposed to help with this. Basically,
> the ItemId array is made to be interlaced with small abbreviated keys
> (say one or two bytes), only in the typically less than 1% of pages
> that are in
> I talked to appeared to prefer the shared collations approach.
I strongly prefer the second approach. The only downside that occurs
to me is that that approach requires more code. Is there something
that I've missed?
--
Peter Geoghegan
On Thu, Jun 22, 2017 at 7:10 PM, Tom Lane wrote:
> Is there some way I'm missing, or is this just a not-done-yet feature?
It's a not-done-yet feature.
--
Peter Geoghegan
h didn't try to do anything special about
concurrency. At the time, this was controversial. However, we now
understand that SQL MERGE really isn't obligated to handle that at all
[1]. Besides, we have ON CONFLICT for those use-cases.
[1] https://wiki.postgresql.org/wiki/UPSERT#MERGE_dis
mcmp() in a type/tuple descriptor agnostic fashion, leaving
compression, truncation, and abbreviation as relatively trivial tasks.
This is all very difficult, of course, which is why it wasn't
seriously pursued.
--
Peter Geoghegan
e pass-by-value rather than a varlena might speed things up quite a
> bit in some cases.
What cases do you have in mind?
--
Peter Geoghegan
> Technically, an error doesn't need to be produced for the above "ON CONFLICT
> DO SELECT" statement - I think it is still perfectly well-defined to return
> duplicate rows as in option 1.
Returning rows with duplicate values seems rather unorthodox.
--
Peter Geoghegan
ows you
project in your new syntax, even when there is a separate unique
constraint on that column? I suppose that there is some risk of things
like that today, but this would make the "sleight of hand" used by ON
CONFLICT DO UPDATE more likely to cause problems.
--
Peter Geoghegan
HING
feature).
I haven't thought about this very carefully, but I guess you could do
something like passing a flag to ExecConstraints() that indicates
"don't throw an error; instead, just return false so I know not to
proceed". Plus maybe one or two other cases, like using specula
Q20 causing
> choice of inefficient plan), it's a great paper to read. I thought I've
> already posted a link to the this paper sometime in the past, but I don't
> see it in the archives.
Thanks for the tip!
The practical focus of this paper really appeals to me.
--
Peter Geoghegan
On Sun, Jun 11, 2017 at 10:27 AM, Peter Geoghegan wrote:
> Note that I introduced a new, redundant exists() in the agg_lineitem
> fact table subquery. It now takes 23 seconds for me on Tomas' 10GB
> TPC-H dataset, whereas the original query took over 90 minutes.
> Clearly we'
ed this thread.
Clearly Q20 is designed to reward systems that do better with moving
predicates into subqueries, as opposed to systems with better
selectivity estimation.
--
Peter Geoghegan
ently selectivity estimation isn't particularly challenging with
the TPC-H queries. I think that the big challenge for us is
limitations like this; there are similar issues with a number of other
TPC-H queries. It would be great if someone looked into implementing
bitmap semi-join.
--
Peter Geoghegan
On Thu, Jun 8, 2017 at 3:13 PM, Robert Haas wrote:
> On Tue, Jun 6, 2017 at 8:19 PM, Peter Geoghegan wrote:
>> On Tue, Jun 6, 2017 at 5:01 PM, Peter Geoghegan wrote:
>>> Also, ISTM that the code within ENRMetadataGetTupDesc() probably
>>> requires more explanatio
On Fri, Jun 9, 2017 at 10:45 AM, Robert Haas wrote:
>> But they are getting the sort order they need. They just don't get the
>> equality semantics they expect.
>
> You're right.
If we happened to ever guarantee the user a stable sort, then I'd be
wrong. We don
me users might even find it worth
> giving up hashing in order to get the exact sort order they need.
But they are getting the sort order they need. They just don't get the
equality semantics they expect.
--
Peter Geoghegan
CONFLICT DO NOTHING/UPDATE to COPY seems
> to be a large separated task and is out of the current project scope, but
> maybe there is
> a relatively simple way to somehow perform internally tuples insert with
> ON CONFLICT DO NOTHING? I have added Peter Geoghegan to cc, as
> I unders
studying
> this thread and the patches and determining whether or not I'm willing
> to take responsibility for this patch.
Thank you.
--
Peter Geoghegan
to discourage revert on the
grounds that it's a slippery slope. Admitting fault doesn't need to be
made any harder.
--
Peter Geoghegan
at he'll be able
to put in *sufficient* time, and in light of that concedes that it
might be best to revert and revisit for Postgres 11. He is being
cautious, and does not want to *risk* unduly holding up the release.
That was my understanding, at least.
--
Peter Geoghegan
On Wed, Jun 7, 2017 at 3:00 PM, Peter Geoghegan wrote:
> My assumption would be that since you have as many as two
> statement-level triggers firing that could reference transition tables
> when ON CONFLICT DO UPDATE is used (one AFTER UPDATE statement level
> trigger, and another
n't provide much in the way of guidance.
My assumption about how transition tables ought to behave here is
based on the simple fact that we already fire both AFTER
statement-level triggers, plus my sense of aesthetics, or bias. I
admit that I might be missing the point, but if I am it would
On Tue, Jun 6, 2017 at 5:01 PM, Peter Geoghegan wrote:
> Also, ISTM that the code within ENRMetadataGetTupDesc() probably
> requires more explanation, resource management wise.
Also, it's not clear why it should be okay that the new type of
ephemeral RTEs introduced don't have pe
On Tue, Jun 6, 2017 at 3:47 PM, Peter Geoghegan wrote:
> I suppose you'll need two tuplestores for the ON CONFLICT DO UPDATE
> case -- one for updated tuples, and the other for inserted tuples.
Also, ISTM that the code within ENRMetadataGetTupDesc() probably
requires more explanatio
Table pass them to the trigger code explicitly.
I suppose you'll need two tuplestores for the ON CONFLICT DO UPDATE
case -- one for updated tuples, and the other for inserted tuples.
--
Peter Geoghegan
gular collations wouldn't need to change their
behavior/implementation (to use ucol_equal() within texteq(), and so
on).
[1] http://unicode.org/reports/tr10/#Forcing_Deterministic_Comparisons
--
Peter Geoghegan
If we didn't do a binary comparison as a
> tie-breaker, wouldn't the result be logically incompatible with the =
> operator, which does a binary comparison?
I agree with that assessment.
--
Peter Geoghegan
is
provided for by the application's client encoding. That's a great
ideal to have, and one that is very close to completely workable.
--
Peter Geoghegan
VMware vCenter Server
https://www.vmware.com/
whatever non-technical reasons remain are actually
technical debt in disguise.
Where this leaves hash partitioning, I cannot say.
--
Peter Geoghegan
VMware vCenter Server
https://www.vmware.com/
On Mon, May 1, 2017 at 6:39 PM, Peter Geoghegan wrote:
> On Mon, May 1, 2017 at 6:20 PM, Tom Lane wrote:
>> Maybe you can fix this by assuming that your own session's advertised xmin
>> is a safe upper bound on everybody else's RecentGlobalXmin. But I'm not
>
On Thu, May 11, 2017 at 2:51 PM, Andres Freund wrote:
> Now that that's done, here's an updated version of that patch. Note the
> new logic to trigger xl_running_xact's to be logged at the right spot.
> Works well in my testing.
You forgot the patch. :-)
--
Peter Geoghegan
oo much effort into
modelling concurrency ahead of optimizing serial performance. The
machine's *aggregate* memory bandwidth should be used as efficiently
as possible, and parallelism is just one (very important) tool for
making that happen.
--
Peter Geoghegan
VMware vCenter Server
https://www.vmware.com/
etely wrong when you said that. Lesson
learned, I suppose.
--
Peter Geoghegan
VMware vCenter Server
https://www.vmware.com/
with the MVCC snapshot's xmin in the first place --
I really don't have an opinion either way just yet.
--
Peter Geoghegan
VMware vCenter Server
https://www.vmware.com/
On Mon, May 1, 2017 at 4:28 PM, Peter Geoghegan wrote:
> Anyone have an opinion on any of this? Offhand, I think that calling
> GetOldestXmin() once per index when its "amcheck whole index scan"
> finishes would be safe, and yet provide appreciably better test
> covera
On Mon, May 1, 2017 at 2:10 PM, Peter Geoghegan wrote:
> Actually, I guess amcheck would need to use its own scan's snapshot
> xmin instead. This is true because it cares about visibility in a way
> that's "backwards" relative to existing code that tests something
On Fri, Apr 28, 2017 at 6:02 PM, Peter Geoghegan wrote:
> - Is committed, and committed before RecentGlobalXmin.
Actually, I guess amcheck would need to use its own scan's snapshot
xmin instead. This is true because it cares about visibility in a way
that's "backwards" re
to
receive an even share of memory).
As I said, even if I was totally willing to duplicate the effort that
went into respecting work_mem as a budget within places like
tuplesort.c, having as little infrastructure code as possible is a
specific goal for amcheck.
[1] https://www.eecs.harvard.edu/
od
> idea, because it'll push down the likelihood of the issue below where
> people will see it, but it'll still be likely enough for it to create
> problems.
I was concerned about that too. I have a hard time defending changes
like this to myself, but it doesn't hurt to
a duplicate violation? I imagine that that's the much more
common case.
--
Peter Geoghegan
VMware vCenter Server
https://www.vmware.com/
. Is someone going to get around
to fixing the problem for CREATE INDEX CONCURRENTLY (e.g., having
extra steps to drop the useless index during recovery)? IIRC, this was
always the plan.
--
Peter Geoghegan
VMware vCenter Server
https://www.vmware.com/
look like. The non-deterministic false
negatives may need to be considered by the user visible interface,
which is the main reason I mention it.
[1] postgr.es/m/20161017014605.ga1220...@tornado.leadboat.com
--
Peter Geoghegan
VMware vCenter Server
https://www.vmware.com/
o as to not give too much credit to the "high risk"
presort check optimization.
The switch to insertion sort that we left in (not the bad one removed
by a3f0b3d -- the insertion sort that actually comes from the B&M
paper) does "legitimately" make sorting faster with pr
the presorted input.
I think that it isn't fair to credit our qsort with doing so well on a
100% presorted case, because it doesn't do the necessary bookkeeping
to not throw that work away completely in certain important cases.
--
Peter Geoghegan
VMware vCenter Server
https://www.vmware.com/
riously suggesting that we should prefer multiple passes in
the vast majority of real world cases, nor am I suggesting that we
should go out of our way to help cases that need to do that. I just
find all this interesting.
--
Peter Geoghegan
VMware vCenter Server
https://www.vmware.com/
On Thu, Apr 13, 2017 at 10:19 PM, Peter Geoghegan wrote:
> I actually think Heikki's work here would particularly help on
> spinning rust, especially when less memory is available. He
> specifically justified it on the basis of it resulting in a more
> sequential read pattern,
ectly sequential, then 7 tapes would probably be noticeably
*faster*, due to CPU caching effects.
Knuth was completely correct to say that it basically made no
difference once more than 7 tapes are used to merge, because he didn't
have logtape.c fragmentation to worry about.
--
Peter Geoghegan
g has to happen with an exclusive buffer
lock held on a leaf page, which could hold up rather a lot of scans
that need to visit the same value even if it's on some other,
relatively removed leaf page.
This is just a theory.
--
Peter Geoghegan
VMware vCenter Server
https://www.vmware.com/
On Fri, Apr 7, 2017 at 12:28 PM, Alvaro Herrera
wrote:
> Peter Geoghegan wrote:
>> On Fri, Apr 7, 2017 at 11:37 AM, Andres Freund wrote:
>> > Write Amplification Reduction Method (WARM)
>> > - fair number of people don't think it's ready for v10.
>
>
't strike me as a useful restriction.
I agree that that CF app restriction makes little sense.
> Indexes with Included Columns (was Covering + unique indexes)
> - Don't think concerns about #columns on truncated tuples have been
> addressed. Should imo be returned-with-feedback
On Thu, Apr 6, 2017 at 2:50 PM, Andres Freund wrote:
> Pushed with very minor wording changes.
This had a typo:
+ * If copy is true, the slot receives a copied tuple that'll that will stay
--
Peter Geoghegan
VMware vCenter Server
https://www.vmware.com/
On Thu, Apr 6, 2017 at 2:50 PM, Andres Freund wrote:
> Pushed with very minor wording changes.
Thanks Andres.
--
Peter Geoghegan
VMware vCenter Server
https://www.vmware.com/
e contexts tend to be associated with
expression contexts.
In any case, I'm not sure where you'd centrally document the
conventions. Although, it seems clear that it wouldn't be anywhere
this patch currently touches. The executor README, perhaps?
--
Peter Geoghegan
VMware vCenter Server
RM on or off reminds me a little bit of the way
it was at one time suggested that HOT not be used against catalog
tables, a position that Tom pushed against. I'm not saying that it's
necessarily a bad idea, but we should exhaust alternatives, and have a
clear rationale for it.
--
Peter Geoghegan
On Wed, Apr 5, 2017 at 10:21 AM, Fujii Masao wrote:
> Both launcher and worker don't handle SIGHUP signal and cannot
> reload the configuration. I think that this is a bug.
+1
--
Peter Geoghegan
s pessimistic about memory lifetime unless
otherwise noted.
> Other than these minimal adjustments, this looks good to go to me.
Cool.
I'll try to get out a revision soon, maybe later today, including an
updated 0002-* (Valgrind suppression), which I have not forgotten
about.
--
Peter Geoghegan