On Thu, Sep 29, 2016 at 11:38 AM, Peter Geoghegan <p...@heroku.com> wrote:
> On Thu, Sep 29, 2016 at 2:59 PM, Robert Haas <robertmh...@gmail.com> wrote:
>>> Maybe that was the wrong choice of words. What I mean is that it seems
>>> somewhat unprincipled to give over an equal share of memory to each
>>> active-at-least-once tape, ...
>>
>> I don't get it.  If the memory is being used for prereading, then the
>> point is just to avoid doing many small I/Os instead of one big I/O,
>> and that goal will be accomplished whether the memory is equally
>> distributed or not; indeed, it's likely to be accomplished BETTER if
>> the memory is equally distributed than if it isn't.
>
> I think it could hurt performance if preloading reads in runs from a
> tape that won't be needed until some subsequent merge pass, rather
> than distributing that memory proportionately within *each* merge
> pass, giving each tape memory proportionate to the size of the run to
> be merged from it.  For tapes with a dummy run, the appropriate
> amount of memory for their next merge pass is zero.

OK, true.  But I still suspect that unless the amount of data you need
to read from a tape is zero or very small, the exact size of the
buffer doesn't matter much: once each buffer is big enough to turn
many small I/Os into a few large ones, making it bigger buys little.
For example, if you have a 1GB tape and a 10GB tape, I doubt there's
any benefit in making the buffer for the 10GB tape 10x larger.  They
can probably be the same size.
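
To make the two policies concrete, here's a rough sketch of what I
mean (this is not the actual tuplesort.c code; "avail", "run_bytes",
and both helper names are made up for illustration):

#include <stddef.h>

/* Equal share: every input tape gets the same buffer. */
static void
assign_equal(size_t avail, int ntapes, size_t *bufsize)
{
	for (int i = 0; i < ntapes; i++)
		bufsize[i] = avail / ntapes;
}

/*
 * Proportional share: each tape's buffer is scaled to the size of the
 * run it feeds into this merge pass; a tape whose run is a dummy
 * (run_bytes[i] == 0) gets no memory at all.  Computed in double to
 * sidestep size_t overflow for multi-GB runs.
 */
static void
assign_proportional(size_t avail, int ntapes,
					const size_t *run_bytes, size_t *bufsize)
{
	size_t		total = 0;

	for (int i = 0; i < ntapes; i++)
		total += run_bytes[i];

	for (int i = 0; i < ntapes; i++)
		bufsize[i] = (total > 0) ?
			(size_t) ((double) avail * run_bytes[i] / total) : 0;
}

With avail = 1GB and the 1GB/10GB example above, the proportional
policy hands out roughly 93MB and 930MB, while the equal policy hands
out 512MB each; my suspicion is that past some modest buffer size the
difference is noise.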

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

