On Thu, Nov 19, 2015 at 12:35 PM, Greg Stark <st...@mit.edu> wrote:
> So I think you're kind of right and kind of wrong. The vast majority
> of use cases are either sub 1TB or are in work environments designed
> specifically for data warehouse queries where a user can obtain much
> more memory for their queries. However I think it's within the
> intended use cases that Postgres should be able to handle a few
> terabytes of data on a moderately sized machine in a shared
> environment too.
Maybe I've made this more complicated than it needs to be. The fact is
that my recent 16MB example is still faster than the master branch
when a multi-pass merge is performed (e.g. when work_mem is 15MB,
or even 12MB). More on that later.
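For context, the relationship between work_mem and the number of merge
passes can be sketched with a back-of-envelope model (my own
illustration, not PostgreSQL's actual code; the function name and the
fixed fanout are made up):

```python
def merge_passes(total_size, work_mem, fanout):
    """Toy external-sort model: quicksort-style run formation produces
    roughly one sorted run per work_mem of input, and an F-way merge
    cuts the run count by a factor of F per pass.  Uses integer
    ceiling division (-(-a // b)) to avoid float rounding."""
    nruns = -(-total_size // work_mem)  # ceil(total / work_mem)
    passes = 0
    while nruns > 1:
        nruns = -(-nruns // fanout)     # one F-way merge pass
        passes += 1
    return passes

# e.g. 100MB input, 10MB work_mem, 10-way merge: 10 runs, one pass;
# 1000MB input at the same settings: 100 runs, two passes.
```

The point is that pass count grows only logarithmically in the run
count, which is why shrinking work_mem somewhat degrades things
gradually rather than catastrophically.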
> Our current defaults are particularly bad for this though. If you
> initdb a new Postgres database today and create a table even a few
> gigabytes and try to build an index on it it takes forever. The last
> time I did a test I canceled it after it had run for hours, raised
> maintenance_work_mem and built the index in a few minutes. The problem
> is that if we just raise those limits then people will use more
> resources when they don't need it.
I think that the bigger problems are:
* There is a harsh discontinuity in the cost function -- performance
suddenly falls off a cliff when a sort must be performed externally.
* Replacement selection is obsolete. Its heap-driven run formation has
poor cache locality, which makes it very slow on hardware built in the
last 20 years.
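For anyone unfamiliar with the algorithm, replacement selection forms
runs with a small heap instead of sorting whole memory-sized batches. A
minimal Python sketch (my own illustration of the classic technique;
the names and sizes are made up, and this is not the PostgreSQL
implementation):

```python
import heapq
from itertools import islice

def replacement_selection_runs(items, heap_size):
    """Form sorted runs by replacement selection.  The heap holds
    (run_number, value) pairs so that values deferred to the next run
    sort after everything still belonging to the current run."""
    it = iter(items)
    heap = [(0, v) for v in islice(it, heap_size)]
    heapq.heapify(heap)
    runs = [[]]
    current_run = 0
    while heap:
        run, v = heapq.heappop(heap)
        if run != current_run:       # current run exhausted; start next
            current_run = run
            runs.append([])
        runs[current_run].append(v)
        try:
            nxt = next(it)
        except StopIteration:
            continue
        if nxt >= v:                 # can still extend the current run
            heapq.heappush(heap, (current_run, nxt))
        else:                        # too small; defer to the next run
            heapq.heappush(heap, (current_run + 1, nxt))
    return runs
```

On random input the runs average about twice the heap size, the classic
tape-era advantage; the catch is that every element is pushed through
the heap one comparison at a time, which is exactly what hurts on
modern cache hierarchies.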
> If it were safer to have those
> limits be much higher then we could make the defaults reflect what
> people want when they do bigger jobs rather than just what they want
> for normal queries or indexes.
Or better yet, make it so that it doesn't really matter that much,
even while you're still using the same amount of memory as before.
If you're saying that the whole work_mem model isn't a very good one,
then I happen to agree. It would be very nice to have some fancy
admission control feature, but I'd still appreciate a cost model that
dynamically sets work_mem. Such a model would avoid an excessively high
setting when, say, only about half the memory needed for a 10GB sort is
available. You should probably have 5 runs sized 2GB rather than 2 runs
sized 5GB, even if you can afford the memory for the latter. Very high
work_mem settings would still make sense when you can dynamically set
work_mem high enough that the sort completes internally.
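The 10GB example can be checked with simple run/pass arithmetic (again
an illustration of my own, not planner code; the fanout is an assumed
value):

```python
def runs_and_passes(total_gb, work_mem_gb, fanout=100):
    """Toy model: one quicksort-formed run per work_mem of input, then
    merge passes that each cut the run count by the fanout.  Uses
    integer ceiling division (-(-a // b))."""
    nruns = -(-total_gb // work_mem_gb)
    n, passes = nruns, 0
    while n > 1:
        n = -(-n // fanout)
        passes += 1
    return nruns, passes

# For a 10GB sort: 2GB of work_mem yields 5 runs, 5GB yields 2 runs,
# yet both merge in a single pass -- the extra 3GB saves no passes.
```

Since either setting merges in one pass, the larger allocation buys no
fewer passes, which is the sense in which 5 runs of 2GB can beat 2 runs
of 5GB.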
>> I think that if the DBA ever sees the multipass_warning message, he or she
>> does not have an OLTP workload.
> Hm, that's pretty convincing. I guess this isn't the usual sort of
> warning due to the time it would take to trigger.
I would like more opinions on the multipass_warning message. I can
write a patch that creates a new system view detailing how sorts were
completed, if there is demand.
Sent via pgsql-hackers mailing list (email@example.com)