Hi all, I'm having some trouble improving the timing of a set of queries against a partitioned table. Basically, I'm trying to find an index that would be used instead of a bitmap heap scan when the data is read from disk, or, failing that, anything else that would make retrieving the data from disk faster.
I've installed PostgreSQL by compiling the source:

    PostgreSQL 9.2.20 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-17), 64-bit

And these are the current changes to the configuration file:

                name            |  current_setting   |        source
    ----------------------------+--------------------+----------------------
     application_name           | psql               | client
     client_encoding            | UTF8               | client
     DateStyle                  | ISO, MDY           | configuration file
     default_text_search_config | pg_catalog.english | configuration file
     lc_messages                | en_US.UTF-8        | configuration file
     lc_monetary                | en_US.UTF-8        | configuration file
     lc_numeric                 | en_US.UTF-8        | configuration file
     lc_time                    | en_US.UTF-8        | configuration file
     log_destination            | stderr             | configuration file
     log_directory              | pg_log             | configuration file
     log_filename               | postgresql-%a.log  | configuration file
     log_rotation_age           | 1d                 | configuration file
     log_rotation_size          | 0                  | configuration file
     log_timezone               | UTC                | configuration file
     log_truncate_on_rotation   | on                 | configuration file
     logging_collector          | on                 | configuration file
     max_connections            | 100                | configuration file
     max_stack_depth            | 2MB                | environment variable
     shared_buffers             | 6GB                | configuration file
     TimeZone                   | UTC                | configuration file
     work_mem                   | 50MB               | configuration file

I'm running on CentOS 6.8, and all the tests are being done through psql.
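In case anyone wants to reproduce the listing of non-default settings above on their own instance, a query along these lines against pg_settings should produce it (this is my assumption about how the table was generated, not part of the original setup):

```sql
-- List every setting that does not come from the built-in defaults,
-- together with where it was set (configuration file, client, etc.).
SELECT name, current_setting(name) AS current_setting, source
FROM pg_settings
WHERE source NOT IN ('default', 'override')
ORDER BY name;
```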
Now, this is the table in question:

    lportal=# \d+ data_jsons_partition
                                     Table "data_jsons_partition"
       Column    |            Type             | Modifiers | Storage  | Stats target | Description
    -------------+-----------------------------+-----------+----------+--------------+-------------
     id          | integer                     |           | plain    |              |
     site_id     | integer                     |           | plain    |              |
     site_name   | character varying(255)     |           | extended |              |
     measured_on | date                        |           | plain    |              |
     protocol    | text                        |           | extended |              |
     data        | json                        |           | extended |              |
     created_at  | timestamp without time zone |           | plain    |              |
     updated_at  | timestamp without time zone |           | plain    |              |
     org_name    | character varying           |           | extended |              |
     org_id      | integer                     |           | plain    |              |
     lat         | double precision            |           | plain    |              |
     long        | double precision            |           | plain    |              |
     elev        | double precision            |           | plain    |              |
    Triggers:
        insert_measurement_trigger BEFORE INSERT ON data_jsons_partition
            FOR EACH ROW EXECUTE PROCEDURE data_insert_trigger()
    Child tables: partitions.partition_a_data_jsons_part,
                  partitions.partition_b_data_jsons_part,
                  ...
                  partitions.partition_aa_data_jsons_part,
                  partitions.partition_ab_data_jsons_part

The child tables exist based on the protocol column.
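Since the routing into child tables happens in data_insert_trigger(), here is a minimal sketch of what such a function typically looks like for protocol-based partitioning on 9.2. The actual function isn't shown in this message, so the branch names below are assumptions:

```sql
-- Sketch only: the real data_insert_trigger() is not reproduced here,
-- and the protocol values/child names are placeholders.
CREATE OR REPLACE FUNCTION data_insert_trigger()
RETURNS trigger AS $$
BEGIN
    -- Route each row to the child table matching its protocol.
    IF NEW.protocol = 'partition_a' THEN
        INSERT INTO partitions.partition_a_data_jsons_part VALUES (NEW.*);
    ELSIF NEW.protocol = 'partition_b' THEN
        INSERT INTO partitions.partition_b_data_jsons_part VALUES (NEW.*);
    -- ... one branch per protocol / child table ...
    ELSE
        RAISE EXCEPTION 'no partition for protocol %', NEW.protocol;
    END IF;
    RETURN NULL;  -- prevent the row from also landing in the parent table
END;
$$ LANGUAGE plpgsql;
```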
Now, each partition looks like this:

    lportal=# \d+ partitions.partition_ab_data_jsons_part
                         Table "partitions.partition_ab_data_jsons_part"
         Column      |            Type             | Modifiers | Storage  | Stats target | Description
    -----------------+-----------------------------+-----------+----------+--------------+-------------
     id              | integer                     | not null  | plain    |              |
     site_id         | integer                     |           | plain    |              |
     site_name       | character varying(255)      |           | extended |              |
     measured_on     | date                        |           | plain    |              |
     protocol        | text                        |           | extended |              |
     data            | json                        |           | extended |              |
     created_at      | timestamp without time zone |           | plain    |              |
     updated_at      | timestamp without time zone |           | plain    |              |
     org_name        | character varying           |           | extended |              |
     organization_id | integer                     |           | plain    |              |
     latitude        | double precision            |           | plain    |              |
     longitude       | double precision            |           | plain    |              |
     elevation       | double precision            |           | plain    |              |
    Indexes:
        "partition_ab_data_jsons_part_pkey" PRIMARY KEY, btree (id)
        "partition_ab_data_jsons_part_spm_key" UNIQUE CONSTRAINT, btree (site_id, protocol, measured_on)
        "partition_ab_data_jsons_part_mo" btree (measured_on)
        "partition_ab_data_jsons_part_org" btree (org_name)
        "partition_ab_data_jsons_part_org_id" btree (organization_id)
        "partition_ab_data_jsons_part_sid" btree (site_id) CLUSTER
        "partition_ab_data_jsons_part_sm" btree (site_id, measured_on)
    Check constraints:
        "partition_ab_data_jsons_part_protocol_check" CHECK (protocol = 'partition_ab'::text)
    Inherits: data_jsons_partition

Now, I have this query that I've executed with a clean cache:

    lportal=# explain analyze
    SELECT org_name, site_name, latitude, longitude, elevation, measured_on, data
    FROM data_jsons_partition
    WHERE protocol in ('aerosols','precipitations')
      AND site_id in (... around 1000 site_id-s ...)
      AND (measured_on >= '2013-09-24' and measured_on <= '2016-10-10')
    ORDER BY org_name, site_name, measured_on
    LIMIT 1000000;

And I get the following:

     Limit  (cost=149414.00..149518.52 rows=41806 width=110) (actual time=25827.893..26012.065 rows=126543 loops=1)
       ->  Sort  (cost=149414.00..149518.52 rows=41806 width=110) (actual time=25827.889..25970.671 rows=126543 loops=1)
             Sort Key: data_jsons_partition.org_name, data_jsons_partition.site_name, data_jsons_partition.measured_on
             Sort Method: external merge  Disk: 70616kB
             ->  Result  (cost=0.00..146205.09 rows=41806 width=110) (actual time=38.533..20810.204 rows=126543 loops=1)
                   ->  Append  (cost=0.00..146205.09 rows=41806 width=110) (actual time=38.530..20739.245 rows=126543 loops=1)
                         ->  Seq Scan on data_jsons_partition  (cost=0.00..0.00 rows=1 width=608) (actual time=0.002..0.002 rows=0 loops=1)
                               Filter: ((protocol = ANY ('{partition_a,partition_b}'::text[])) AND (measured_on >= '2013-09-24'::date) AND (measured_on <= '2016-10-10'::date) AND (site_id = ANY ('{... 1000 site_id-s ...}'::integer[])))
                         ->  Bitmap Heap Scan on partition_a_data_jsons_part data_jsons_partition  (cost=70.92..5209.38 rows=2132 width=114) (actual time=38.526..812.397 rows=3017 loops=1)
                               Recheck Cond: ((measured_on >= '2013-09-24'::date) AND (measured_on <= '2016-10-10'::date))
                               Filter: ((protocol = ANY ('{partition_a,partition_b}'::text[])) AND (site_id = ANY ('{ ... }'::integer[])))
                               ->  Bitmap Index Scan on partition_a_data_jsons_part_mo  (cost=0.00..70.39 rows=3014 width=0) (actual time=2.974..2.974 rows=3017 loops=1)
                                     Index Cond: ((measured_on >= '2013-09-24'::date) AND (measured_on <= '2016-10-10'::date))
                         ->  Bitmap Heap Scan on partition_b_data_jsons_part data_jsons_partition  (cost=4582.19..140995.72 rows=39673 width=110) (actual time=738.486..19871.141 rows=123526 loops=1)
                               Recheck Cond: ((site_id = ANY ('{...
     ...}'::integer[])))
                               Filter: (protocol = ANY ('{partition_a,partition_b}'::text[]))
                               ->  Bitmap Index Scan on partition_b_data_jsons_part_sm  (cost=0.00..4572.27 rows=39673 width=0) (actual time=715.684..715.684 rows=123526 loops=1)
                                     Index Cond: ((site_id = ANY ('{... ...}'::integer[])))
     Total runtime: 26049.062 ms

From this, I've increased effective_io_concurrency to 150 (since most of the time was spent fetching data from partition_b_data_jsons_part in the second bitmap heap scan) and work_mem to 1.5GB (for the sort that was being spilled to disk), improving the timing to 7 seconds (of which 5-6 seconds come from the sort). Now, this is a relatively fast query. Some others don't specify the protocol, and therefore go over all the child tables. Those queries take around 5 minutes without the changes mentioned above, and around 1.5 minutes with them. Doing an EXPLAIN ANALYZE on those queries, I see that some of the tables use index scans (much slower than bitmap scans, since there's nothing in cache) and others use bitmap scans. Is there a way to make this faster? Thank you in advance.
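For reference, the two changes described above can be tried per session before committing them to postgresql.conf:

```sql
-- Session-level equivalents of the tuning described above.
SET effective_io_concurrency = 150;  -- more concurrent prefetch requests during bitmap heap scans
SET work_mem = '1536MB';             -- lets the ~70MB external merge sort happen in memory
```

One caveat worth keeping in mind: work_mem is allocated per sort/hash node per connection, so a value this large is risky as a global setting with max_connections = 100; setting it per session (or per role) for the heavy reporting queries is safer.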