I'm wondering if there is anything I can tune in my PG 10.1 database to avoid these errors:
    $ psql -f failing_query.sql
    psql:failing_query.sql:46: ERROR:  dsa_allocate could not find 7 free pages
    CONTEXT:  parallel worker

I tried throttling back the number of parallel workers to just 2; that didn't help. The query joins two views, each of which has 50 or so underlying queries unioned together. Unfortunately, due to an invalid index, it is doing sequential scans on some of the tables. I can't fix the indexes until a few CREATE MATERIALIZED VIEW commands that are currently running (and have been running for 6 days) finish, or until I kill them, because they hold a lock that blocks any attempt to reindex.

So that leaves me looking for some tunable (hopefully one that doesn't require a restart) which will fix this by giving the system enough resources for dsa_allocate() to find enough (contiguous?) pages. My system seems to have plenty of spare capacity.

There was a thread on pgsql-hackers in December where someone else was seeing a similar error but couldn't reproduce it consistently. I've run the above query hundreds of times over the last 24 hours, but it only fails when I select just the right parameters - and it fails every time I run it with those parameters. In that thread, someone speculated the error had to do with running many parallel bitmap heap scans in one query; I count 98 in my query plan.

I'm hoping there is a "magic X tunable" which I just need to bump up a little to let queries like this run without the fatal failure.
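For reference, the knob I've been using to throttle parallelism is the standard session-level GUC (so no restart needed). If nobody knows a better tunable, my fallback understanding is that setting it to 0 should at least sidestep the error, since the failure only happens in a parallel worker:

```sql
-- What I tried: cap each Gather node at 2 workers for this session only
SET max_parallel_workers_per_gather = 2;

-- Fallback (untried): 0 disables parallel query for the session entirely,
-- trading the dsa_allocate failure for a slower, non-parallel plan
SET max_parallel_workers_per_gather = 0;
```

That fallback gives up the parallel bitmap heap scans altogether, though, which is why I'd rather find a resource tunable to bump instead.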