On Mon, Nov 30, 2015 at 5:12 PM, Greg Stark <st...@mit.edu> wrote:
> I think the take-away is that this is outside the domain where any 
> interesting break points occur.

I think these tests are representative of what people want to do with
external sorts. We have already had Jeff look for a regression, and he
found one only with less than 4MB of work_mem (the default), sorting
over 100 million tuples.

What exactly are we looking for?

> And can you calculate an estimate where the domain would be where multiple 
> passes would be needed for this table at these work_mem sizes? Is it feasible 
> to test around there?

Well, you said that 1GB of work_mem was enough to avoid multiple
passes up to about 4TB - 8TB of data. So I believe the answer is "no";
the largest filesystem on this machine is only about 1TB:

[pg@hydra ~]$ df -h
Filesystem                 Size  Used Avail Use% Mounted on
rootfs                      20G   19G  519M  98% /
devtmpfs                    31G  128K   31G   1% /dev
tmpfs                       31G  384K   31G   1% /dev/shm
/dev/mapper/vg_hydra-root   20G   19G  519M  98% /
tmpfs                       31G  127M   31G   1% /run
tmpfs                       31G     0   31G   0% /sys/fs/cgroup
tmpfs                       31G     0   31G   0% /media
/dev/md0                   497M  145M  328M  31% /boot
/dev/mapper/vg_hydra-data 1023G  322G  651G  34% /data
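For what it's worth, the back-of-the-envelope estimate can be sketched
like this. The assumptions are mine, not from the thread: each sorted
run is roughly work_mem in size, and each input tape needs about a
256KB buffer during the merge, so the merge order is work_mem divided
by the per-tape buffer:

```python
# Rough estimate of the largest input an external sort can handle in a
# single merge pass. Assumptions (mine, for illustration): each sorted
# run is about work_mem bytes, and each input tape needs roughly a
# 256KB buffer while merging.

def single_pass_limit(work_mem_bytes, tape_buffer_bytes=256 * 1024):
    merge_order = work_mem_bytes // tape_buffer_bytes  # runs merged at once
    run_size = work_mem_bytes                          # bytes per sorted run
    return merge_order * run_size                      # max single-pass input

GB = 1024 ** 3
TB = 1024 ** 4
print(single_pass_limit(1 * GB) / TB)  # -> 4.0, i.e. ~4TB with 1GB work_mem
```

With replacement selection producing runs of about twice work_mem, the
same arithmetic gives ~8TB, which matches the 4TB - 8TB range above.
Either way it is far beyond the ~1TB of disk available here.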

Peter Geoghegan

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)