Hi Dan,

2017-07-31 18:29 GMT+02:00 Dan Lyke <[email protected]>:
> So over this weekend I exported the entire database out and imported it
> into a new instance. Now that one table size is down to 34GB (from
> 70GB before), which seems more in line with the earlier database version
> (I guess something happened with extraneous data on import that got
> deleted).

Well, this means that the 70GB were due to consistent bloat in the data
blocks: with the rows spread across a larger number of pages, the index
was pointing at many more pages, and an index scan was needed to check
page visibility. So the planner was choosing a reasonable plan for the
bloated table.

> But the problem persists.
>
> This main table is 34GB; we've got 3 other tables we use intensively: one
> is about 3.4GB, the other two are 200MB and 34MB.
>
> We're running this on Amazon db.m3.2xlarge instances, so 8 vCPUs with
> 30GB of RAM.

Have you been able to work out why the previous data import grew to 70GB?
The key to understanding why the plan changed is probably to understand
what changed in the meantime, including the operations executed on the
dataset. I suspect that bloat is playing a key role here.

Giuseppe.
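To watch for the bloat coming back, a couple of standard PostgreSQL queries can help; this is a sketch using the `pgstattuple` contrib extension and the statistics collector view (the table name `my_big_table` is a placeholder for your 34GB table):

```sql
-- pgstattuple ships with PostgreSQL contrib; the table name is hypothetical.
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- dead_tuple_percent and free_percent show how much of the table is bloat.
SELECT table_len, dead_tuple_percent, free_percent
FROM pgstattuple('my_big_table');

-- Dead-tuple counts and (auto)vacuum history for the busiest tables.
SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```

If `n_dead_tup` grows quickly and `last_autovacuum` stays old on the big table, autovacuum may not be keeping up with the update/delete load, which would reproduce the bloat you just cleaned out with the dump/restore.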
_______________________________________________
postgis-users mailing list
[email protected]
https://lists.osgeo.org/mailman/listinfo/postgis-users
