On Wed, Dec 28, 2016 at 11:47 AM, Amit Kapila <[email protected]> wrote:
>
> Currently, queries that have references to SubPlans or
> AlternativeSubPlans are considered parallel-restricted. I think we
> can lift this restriction in many cases especially when SubPlans are
> parallel-safe. To make this work, we need to propagate the
> parallel-safety information from path node to plan node and the same
> could be easily done while creating a plan. Another option could be
> that instead of propagating parallel-safety information from path to
> plan, we can find out from the plan if it is parallel-safe (doesn't
> contain any parallel-aware node) by traversing whole plan tree, but I
> think it is a waste of cycles. Once we have parallel-safety
> information in the plan, we can use that for detection of
> parallel-safe expressions in max_parallel_hazard_walker(). Finally,
> we can pass all the subplans to workers during plan serialization in
> ExecSerializePlan(). This will enable workers to execute subplans
> that are referred in parallel part of the plan. Now, we might be able
> to optimize it such that we pass only subplans that are referred in
> parallel portion of plan, but I am not sure if it is worth the trouble
> because it is one-time cost and much lesser than other things we do
> (like creating DSM, launching workers).
>
> Attached patch implements the above idea. This will enable
> parallelism for queries containing un-correlated subplans, an example
> of which is as follows:
>
> set parallel_tuple_cost=0;
> set parallel_setup_cost=0;
> set min_parallel_relation_size=50;
>
> create table t1 (i int, j int, k int);
> create table t2 (i int, j int, k int);
>
> insert into t1 values (generate_series(1,10)*random(),
> generate_series(5,50)*random(),
> generate_series(8,80)*random());
> insert into t2 values (generate_series(4,10)*random(),
> generate_series(5,90)*random(),
> generate_series(2,10)*random());
>
>
> Plan without Patch
> -----------------------------
> postgres=# explain select * from t1 where t1.i not in (select t2.i
> from t2 where t2.i in (1,2,3));
> QUERY PLAN
> ---------------------------------------------------------------
> Seq Scan on t1 (cost=110.84..411.72 rows=8395 width=12)
> Filter: (NOT (hashed SubPlan 1))
> SubPlan 1
> -> Seq Scan on t2 (cost=0.00..104.50 rows=2537 width=4)
> Filter: (i = ANY ('{1,2,3}'::integer[]))
> (5 rows)
>
> Plan with Patch
> ------------------------
> postgres=# explain select * from t1 where t1.i not in (select t2.i
> from t2 where t2.i in (1,2,3));
> QUERY PLAN
> -------------------------------------------------------------------------
> Gather (cost=110.84..325.30 rows=8395 width=12)
> Workers Planned: 1
> -> Parallel Seq Scan on t1 (cost=110.84..325.30 rows=4938 width=12)
> Filter: (NOT (hashed SubPlan 1))
> SubPlan 1
> -> Seq Scan on t2 (cost=0.00..104.50 rows=2537 width=4)
> Filter: (i = ANY ('{1,2,3}'::integer[]))
> (7 rows)
>
> We have observed that Q-16 in TPC-H has been improved with the patch,
> and the analysis of the same will be shared by my colleague Rafia.
>
To study the effect of uncorrelated sub-plan pushdown on the TPC-H and
TPC-DS benchmark queries, we performed some experiments; the execution
times (in seconds) are summarised below:

 Query  | HEAD | Patches | Scale factor
--------+------+---------+--------------
 DS-Q45 |   35 |      19 |          100
 H-Q16  |  812 |     645 |          300
 H-Q16  |   49 |      37 |           20

A detailed analysis of these experiments is given below.
Additional patches applied for this analysis:
  Parallel index scan [1]
  Parallel index-only scan [2]
  Parallel merge-join [3]
The system setup used for this experiment:

Server parameter settings (equivalent SET commands are shown below for
reference):
  work_mem = 500MB
  max_parallel_workers_per_gather = 4
  random_page_cost = seq_page_cost = parallel_tuple_cost = 0.1
  shared_buffers = 1GB

Machine used: IBM Power, 4-socket, 512 GB RAM
TPC-DS scale factor: 100 (approximate database size is 150 GB)
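For reference, and assuming a stock PostgreSQL installation, the
session-level settings above amount to roughly the following
(shared_buffers has to be set in postgresql.conf and needs a restart):

set work_mem = '500MB';
set max_parallel_workers_per_gather = 4;
set random_page_cost = 0.1;
set seq_page_cost = 0.1;
set parallel_tuple_cost = 0.1;
-- shared_buffers = 1GB was set in postgresql.conf (restart required)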
Query 45, which takes around 35 seconds on head, completes in 19
seconds with these patches. The point to note here is that without the
patch for pushing down uncorrelated subplans, the hash join that uses a
subplan in its join filter could not be pushed to workers, so the query
could not make enough use of parallelism; with the patch, parallelism
is available up to the topmost join node. The EXPLAIN ANALYZE output of
this query, both on head and with the patches, is given in the attached
file ds_q45.txt.
On further evaluating these patches on TPC-H queries at different scale
factors, we came across query 16. For TPC-H scale factor 20, with the
aforementioned parameter settings except for
max_parallel_workers_per_gather = 2, it took 37 seconds with the
patches versus 49 seconds on head. Though the improvement in overall
query performance does not appear to be particularly high, the point to
note here is that the time taken by the join was around 26 seconds on
head and was reduced to 14 seconds with the patches. The overall
benefit is limited because the sort node dominates the execution time.
The plan information for this query is given in the attached file
h_q16_20_2.txt.
On increasing the scale factor to 300, setting work_mem to 1GB,
increasing max_parallel_workers_per_gather to 6, and disabling the
parallel sequential scan on the supplier table with 'alter table
supplier set (parallel_workers = 0)', Q16 completes in 645 seconds with
the patches, versus 812 seconds on head. We need to disable
parallel_workers for the supplier table because at higher worker counts
it was choosing a parallel seq scan, and hence the scan node that uses
the subplan could not be parallelised. This query needs both the
pushdown of subplans and the parallel merge-join; without either of
them the benefits of parallelism are not fully leveraged. The EXPLAIN
ANALYZE output for this query is given in the attached file
h_q16_300.txt.
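For reference, the settings that differ from the earlier runs for this
scale-factor-300 experiment amount to roughly the following:

set work_mem = '1GB';
set max_parallel_workers_per_gather = 6;
-- keep the planner from choosing a parallel seq scan on supplier,
-- since that prevents the scan node using the subplan from being parallelised
alter table supplier set (parallel_workers = 0);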
Overall, the pushdown of uncorrelated sub-plans to workers enables
parallelism in joins where it was previously restricted, and hence a
good improvement in query performance can be seen.
> Now, we can further extend this to parallelize queries containing
> correlated subplans like below:
>
> explain select * from t1 where t1.i in (select t2.i from t2 where t2.i=t1.i);
> QUERY PLAN
> -------------------------------------------------------------
> Seq Scan on t1 (cost=0.00..831049.09 rows=8395 width=12)
> Filter: (SubPlan 1)
> SubPlan 1
> -> Seq Scan on t2 (cost=0.00..97.73 rows=493 width=4)
> Filter: (i = t1.i)
> (5 rows)
>
As per my analysis, this extension to correlated subplans is likely to
improve the performance of the following queries: Q2 in TPC-H and Q6 in
TPC-DS.
> Yet, another useful enhancement in this area could be to consider both
> parallel and non-parallel paths for subplans. As of now, we consider
> the cheapest/best path and form subplan from it, but it is quite
> possible that instead of choosing parallel path (in case it is
> cheapest) at subplan level, the non-parallel path at subplan level
> could be beneficial when upper plan can use parallelism. I think this
> will be a separate project in itself if we want to do this and based
> on my study of TPC-H and TPC-DS queries, I am confident that this will
> be helpful in certain queries at higher scale factors.
>
I agree, as then we would not need to disable parallelism for
particular relations, as we currently do for the supplier relation in
Q16 of TPC-H.
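For reference, if the planner could make that choice itself, the
per-table workaround used above could simply be reverted, along the
lines of:

-- hypothetical cleanup once both parallel and non-parallel sub-plan
-- paths are costed by the planner
alter table supplier reset (parallel_workers);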
[1] https://www.postgresql.org/message-id/CAA4eK1KthrAvNjmB2cWuUHz%2Bp3ZTTtbD7o2KUw49PC8HK4r1Wg%40mail.gmail.com
[2] https://www.postgresql.org/message-id/CAOGQiiOv9NA1VrAo8PmENfGi-y%3DCj_DiTF4vyjp%2BfmuEzovwEA%40mail.gmail.com
[3] https://www.postgresql.org/message-id/CAFiTN-tdYpcsk7Lpv0HapcmvSnMN_TgKjC7RkmvVLZAF%2BXfmPg%40mail.gmail.com
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/

ds_q45.txt (TPC-DS Q45, scale factor 100)

On head:
                                                             QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=539939.00..539941.25 rows=100 width=46) (actual time=35194.017..35194.358 rows=100 loops=1)
   ->  GroupAggregate  (cost=539939.00..541004.96 rows=47376 width=46) (actual time=35194.015..35194.345 rows=100 loops=1)
         Group Key: customer_address.ca_zip, customer_address.ca_state
         ->  Sort  (cost=539939.00..540057.44 rows=47376 width=20) (actual time=35193.994..35194.019 rows=101 loops=1)
               Sort Key: customer_address.ca_zip, customer_address.ca_state
               Sort Method: quicksort  Memory: 48kB
               ->  Hash Join  (cost=32899.51..536259.81 rows=47376 width=20) (actual time=1735.259..35192.430 rows=303 loops=1)
                     Hash Cond: (web_sales.ws_item_sk = item.i_item_sk)
                     Join Filter: ((substr((customer_address.ca_zip)::text, 1, 5) = ANY ('{85669,86197,88274,83405,86475,85392,85460,80348,81792}'::text[])) OR (hashed SubPlan 1))
                     Rows Removed by Join Filter: 2070923
                     ->  Hash Join  (cost=26911.21..527324.70 rows=90671 width=24) (actual time=1406.355..25817.821 rows=2071226 loops=1)
                           Hash Cond: (customer.c_current_addr_sk = customer_address.ca_address_sk)
                           ->  Nested Loop  (cost=2237.81..501404.58 rows=90671 width=14) (actual time=48.696..22331.297 rows=2071226 loops=1)
                                 ->  Gather  (cost=2237.39..446823.34 rows=90671 width=14) (actual time=48.657..1736.963 rows=2071511 loops=1)
                                       Workers Planned: 4
                                       Workers Launched: 4
                                       ->  Hash Join  (cost=1237.38..436756.24 rows=90671 width=14) (actual time=52.428..10749.067 rows=414302 loops=5)
                                             Hash Cond: (web_sales.ws_sold_date_sk = date_dim.d_date_sk)
                                             ->  Parallel Seq Scan on web_sales  (cost=0.00..367798.12 rows=17998432 width=18) (actual time=0.035..5528.430 rows=14400247 loops=5)
                                             ->  Hash  (cost=1236.23..1236.23 rows=92 width=4) (actual time=51.864..51.864 rows=91 loops=5)
                                                   Buckets: 1024  Batches: 1  Memory Usage: 12kB
                                                   ->  Seq Scan on date_dim  (cost=0.00..1236.23 rows=92 width=4) (actual time=25.196..51.808 rows=91 loops=5)
                                                         Filter: ((d_qoy = 2) AND (d_year = 1999))
                                                         Rows Removed by Filter: 72958
                                 ->  Index Scan using customer_pkey on customer  (cost=0.43..0.59 rows=1 width=8) (actual time=0.008..0.009 rows=1 loops=2071511)
                                       Index Cond: (c_customer_sk = web_sales.ws_bill_customer_sk)
                           ->  Hash  (cost=12173.40..12173.40 rows=1000000 width=18) (actual time=1356.992..1356.992 rows=1000000 loops=1)
                                 Buckets: 1048576  Batches: 1  Memory Usage: 58500kB
                                 ->  Seq Scan on customer_address  (cost=0.00..12173.40 rows=1000000 width=18) (actual time=0.021..688.872 rows=1000000 loops=1)
                     ->  Hash  (cost=3432.80..3432.80 rows=204000 width=21) (actual time=263.624..263.624 rows=204000 loops=1)
                           Buckets: 262144  Batches: 1  Memory Usage: 12607kB
                           ->  Seq Scan on item  (cost=0.00..3432.80 rows=204000 width=21) (actual time=0.008..146.867 rows=204000 loops=1)
                     SubPlan 1
                       ->  Index Scan using item_pkey on item item_1  (cost=0.42..5.48 rows=10 width=17) (actual time=0.026..0.092 rows=10 loops=1)
                             Index Cond: (i_item_sk = ANY ('{2,3,5,7,11,13,17,19,23,29}'::integer[]))
 Planning time: 3.126 ms
 Execution time: 35195.336 ms
(37 rows)

With patches:
                                                             QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=533081.38..533083.63 rows=100 width=46) (actual time=19506.763..19507.347 rows=100 loops=1)
   ->  GroupAggregate  (cost=533081.38..534147.34 rows=47376 width=46) (actual time=19506.760..19507.322 rows=100 loops=1)
         Group Key: customer_address.ca_zip, customer_address.ca_state
         ->  Sort  (cost=533081.38..533199.82 rows=47376 width=20) (actual time=19506.710..19506.752 rows=101 loops=1)
               Sort Key: customer_address.ca_zip, customer_address.ca_state
               Sort Method: quicksort  Memory: 48kB
               ->  Gather  (cost=83592.41..529402.19 rows=47376 width=20) (actual time=5199.329..19505.115 rows=303 loops=1)
                     Workers Planned: 4
                     Workers Launched: 4
                     ->  Hash Join  (cost=82592.41..523664.59 rows=47376 width=20) (actual time=5236.517..19493.461 rows=61 loops=5)
                           Hash Cond: (web_sales.ws_item_sk = item.i_item_sk)
                           Join Filter: ((substr((customer_address.ca_zip)::text, 1, 5) = ANY ('{85669,86197,88274,83405,86475,85392,85460,80348,81792}'::text[])) OR (hashed SubPlan 1))
                           Rows Removed by Join Filter: 414185
                           ->  Hash Join  (cost=76604.11..514729.49 rows=90671 width=24) (actual time=4710.986..17176.452 rows=414245 loops=5)
                                 Hash Cond: (customer.c_current_addr_sk = customer_address.ca_address_sk)
                                 ->  Hash Join  (cost=51930.71..488809.36 rows=90671 width=14) (actual time=3070.350..15047.712 rows=414245 loops=5)
                                       Hash Cond: (web_sales.ws_bill_customer_sk = customer.c_customer_sk)
                                       ->  Hash Join  (cost=1237.38..436756.24 rows=90671 width=14) (actual time=56.386..11554.924 rows=414302 loops=5)
                                             Hash Cond: (web_sales.ws_sold_date_sk = date_dim.d_date_sk)
                                             ->  Parallel Seq Scan on web_sales  (cost=0.00..367798.12 rows=17998432 width=18) (actual time=0.099..6058.670 rows=14400247 loops=5)
                                             ->  Hash  (cost=1236.23..1236.23 rows=92 width=4) (actual time=56.100..56.100 rows=91 loops=5)
                                                   Buckets: 1024  Batches: 1  Memory Usage: 12kB
                                                   ->  Seq Scan on date_dim  (cost=0.00..1236.23 rows=92 width=4) (actual time=29.093..56.032 rows=91 loops=5)
                                                         Filter: ((d_qoy = 2) AND (d_year = 1999))
                                                         Rows Removed by Filter: 72958
                                       ->  Hash  (cost=25694.42..25694.42 rows=1999912 width=8) (actual time=3011.842..3011.842 rows=2000000 loops=5)
                                             Buckets: 2097152  Batches: 1  Memory Usage: 94509kB
                                             ->  Seq Scan on customer  (cost=0.00..25694.42 rows=1999912 width=8) (actual time=0.065..1584.443 rows=2000000 loops=5)
                                 ->  Hash  (cost=12173.40..12173.40 rows=1000000 width=18) (actual time=1639.416..1639.416 rows=1000000 loops=5)
                                       Buckets: 1048576  Batches: 1  Memory Usage: 58500kB
                                       ->  Seq Scan on customer_address  (cost=0.00..12173.40 rows=1000000 width=18) (actual time=0.046..855.394 rows=1000000 loops=5)
                           ->  Hash  (cost=3432.80..3432.80 rows=204000 width=21) (actual time=320.818..320.818 rows=204000 loops=5)
                                 Buckets: 262144  Batches: 1  Memory Usage: 12607kB
                                 ->  Seq Scan on item  (cost=0.00..3432.80 rows=204000 width=21) (actual time=0.016..186.555 rows=204000 loops=5)
                           SubPlan 1
                             ->  Index Scan using item_pkey on item item_1  (cost=0.42..5.48 rows=10 width=17) (actual time=0.026..0.098 rows=10 loops=5)
                                   Index Cond: (i_item_sk = ANY ('{2,3,5,7,11,13,17,19,23,29}'::integer[]))
 Planning time: 3.940 ms
 Execution time: 19519.561 ms
(39 rows)

h_q16_20_2.txt (TPC-H Q16, scale factor 20, max_parallel_workers_per_gather = 2)

On head:
                                                             QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=532028.06..532028.06 rows=1 width=44) (actual time=49213.313..49213.314 rows=1 loops=1)
   ->  Sort  (cost=532028.06..532480.72 rows=181066 width=44) (actual time=49213.311..49213.311 rows=1 loops=1)
         Sort Key: (count(DISTINCT partsupp.ps_suppkey)) DESC, part.p_brand, part.p_type, part.p_size
         Sort Method: top-N heapsort  Memory: 25kB
         ->  GroupAggregate  (cost=514691.25..531122.73 rows=181066 width=44) (actual time=45469.589..49191.025 rows=27840 loops=1)
               Group Key: part.p_brand, part.p_type, part.p_size
               ->  Sort  (cost=514691.25..517615.42 rows=1169665 width=44) (actual time=45469.337..46220.575 rows=2375516 loops=1)
                     Sort Key: part.p_brand, part.p_type, part.p_size
                     Sort Method: quicksort  Memory: 285943kB
                     ->  Hash Join  (cost=118466.06..396802.68 rows=1169665 width=44) (actual time=6431.135..26289.132 rows=2375516 loops=1)
                           Hash Cond: (partsupp.ps_partkey = part.p_partkey)
                           ->  Seq Scan on partsupp  (cost=2959.85..239600.71 rows=7999762 width=16) (actual time=247.486..13048.239 rows=15991520 loops=1)
                                 Filter: (NOT (hashed SubPlan 1))
                                 Rows Removed by Filter: 8480
                                 SubPlan 1
                                   ->  Seq Scan on supplier  (cost=0.00..2959.80 rows=20 width=4) (actual time=0.479..247.334 rows=106 loops=1)
                                         Filter: ((s_comment)::text ~~ '%Customer%Complaints%'::text)
                                         Rows Removed by Filter: 199894
                           ->  Hash  (cost=108195.63..108195.63 rows=584847 width=40) (actual time=6182.670..6182.670 rows=594194 loops=1)
                                 Buckets: 1048576  Batches: 1  Memory Usage: 51237kB
                                 ->  Seq Scan on part  (cost=0.00..108195.63 rows=584847 width=40) (actual time=0.058..5649.112 rows=594194 loops=1)
                                       Filter: ((p_brand <> 'Brand#31'::bpchar) AND ((p_type)::text !~~ 'PROMO BRUSHED%'::text) AND (p_size = ANY ('{41,4,15,12,42,46,32,30}'::integer[])))
                                       Rows Removed by Filter: 3405806
 Planning time: 1.165 ms
 Execution time: 49221.879 ms
(25 rows)

With patches:
                                                             QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=509008.82..509008.82 rows=1 width=44) (actual time=37272.962..37272.963 rows=1 loops=1)
   ->  Sort  (cost=509008.82..509461.49 rows=181066 width=44) (actual time=37272.960..37272.960 rows=1 loops=1)
         Sort Key: (count(DISTINCT partsupp.ps_suppkey)) DESC, part.p_brand, part.p_type, part.p_size
         Sort Method: top-N heapsort  Memory: 25kB
         ->  GroupAggregate  (cost=491672.02..508103.49 rows=181066 width=44) (actual time=33548.001..37250.009 rows=27840 loops=1)
               Group Key: part.p_brand, part.p_type, part.p_size
               ->  Sort  (cost=491672.02..494596.18 rows=1169665 width=44) (actual time=33547.593..34301.735 rows=2375516 loops=1)
                     Sort Key: part.p_brand, part.p_type, part.p_size
                     Sort Method: quicksort  Memory: 285943kB
                     ->  Gather  (cost=119466.06..373783.45 rows=1169665 width=44) (actual time=6563.283..14142.004 rows=2375516 loops=1)
                           Workers Planned: 2
                           Workers Launched: 2
                           ->  Hash Join  (cost=118466.06..255816.95 rows=1169665 width=44) (actual time=6846.209..13804.511 rows=791839 loops=3)
                                 Hash Cond: (partsupp.ps_partkey = part.p_partkey)
                                 ->  Parallel Seq Scan on partsupp  (cost=2959.85..122937.51 rows=3333234 width=16) (actual time=245.699..4741.889 rows=5330507 loops=3)
                                       Filter: (NOT (hashed SubPlan 1))
                                       Rows Removed by Filter: 2827
                                       SubPlan 1
                                         ->  Seq Scan on supplier  (cost=0.00..2959.80 rows=20 width=4) (actual time=0.474..245.544 rows=106 loops=3)
                                               Filter: ((s_comment)::text ~~ '%Customer%Complaints%'::text)
                                               Rows Removed by Filter: 199894
                                 ->  Hash  (cost=108195.63..108195.63 rows=584847 width=40) (actual time=6599.562..6599.562 rows=594194 loops=3)
                                       Buckets: 1048576  Batches: 1  Memory Usage: 51237kB
                                       ->  Seq Scan on part  (cost=0.00..108195.63 rows=584847 width=40) (actual time=0.062..6067.231 rows=594194 loops=3)
                                             Filter: ((p_brand <> 'Brand#31'::bpchar) AND ((p_type)::text !~~ 'PROMO BRUSHED%'::text) AND (p_size = ANY ('{41,4,15,12,42,46,32,30}'::integer[])))
                                             Rows Removed by Filter: 3405806
 Planning time: 1.100 ms
 Execution time: 37281.209 ms
(28 rows)

h_q16_300.txt (TPC-H Q16, scale factor 300)

On head:
                                                             QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=9077419.45..9077419.45 rows=1 width=44) (actual time=812826.044..812826.044 rows=1 loops=1)
   ->  Sort  (cost=9077419.45..9077888.20 rows=187500 width=44) (actual time=812826.042..812826.042 rows=1 loops=1)
         Sort Key: (count(DISTINCT partsupp.ps_suppkey)) DESC, part.p_brand, part.p_type, part.p_size
         Sort Method: top-N heapsort  Memory: 25kB
         ->  GroupAggregate  (cost=8857722.29..9076481.95 rows=187500 width=44) (actual time=714206.734..812801.585 rows=27840 loops=1)
               Group Key: part.p_brand, part.p_type, part.p_size
               ->  Sort  (cost=8857722.29..8901099.22 rows=17350773 width=40) (actual time=714203.151..765512.643 rows=35593611 loops=1)
                     Sort Key: part.p_brand, part.p_type, part.p_size
                     Sort Method: external merge  Disk: 1815480kB
                     ->  Merge Join  (cost=44177.40..6744311.66 rows=17350773 width=40) (actual time=3613.368..411214.196 rows=35593611 loops=1)
                           Merge Cond: (part.p_partkey = partsupp.ps_partkey)
                           ->  Index Scan using part_pkey on part  (cost=0.56..1939335.17 rows=8675385 width=40) (actual time=0.038..113094.996 rows=8902829 loops=1)
                                 Filter: ((p_brand <> 'Brand#31'::bpchar) AND ((p_type)::text !~~ 'PROMO BRUSHED%'::text) AND (p_size = ANY ('{41,4,15,12,42,46,32,30}'::integer[])))
                                 Rows Removed by Filter: 51097171
                           ->  Index Only Scan using partsupp_pkey on partsupp  (cost=44149.93..4309962.17 rows=120000184 width=8) (actual time=3613.301..226287.744 rows=239882717 loops=1)
                                 Filter: (NOT (hashed SubPlan 1))
                                 Rows Removed by Filter: 117280
                                 Heap Fetches: 0
                                 SubPlan 1
                                   ->  Seq Scan on supplier  (cost=0.00..44148.61 rows=300 width=4) (actual time=0.453..3611.403 rows=1466 loops=1)
                                         Filter: ((s_comment)::text ~~ '%Customer%Complaints%'::text)
                                         Rows Removed by Filter: 2998534
 Planning time: 1.271 ms
 Execution time: 812994.671 ms
(24 rows)

With patches:
                                                             QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=7919012.55..7919012.56 rows=1 width=44) (actual time=645732.255..645732.255 rows=1 loops=1)
   ->  Sort  (cost=7919012.55..7919481.30 rows=187500 width=44) (actual time=645732.253..645732.253 rows=1 loops=1)
         Sort Key: (count(DISTINCT partsupp.ps_suppkey)) DESC, part.p_brand, part.p_type, part.p_size
         Sort Method: top-N heapsort  Memory: 25kB
         ->  GroupAggregate  (cost=7699315.39..7918075.05 rows=187500 width=44) (actual time=547855.352..645708.990 rows=27840 loops=1)
               Group Key: part.p_brand, part.p_type, part.p_size
               ->  Sort  (cost=7699315.39..7742692.32 rows=17350773 width=40) (actual time=547851.770..598542.606 rows=35593611 loops=1)
                     Sort Key: part.p_brand, part.p_type, part.p_size
                     Sort Method: external merge  Disk: 1815480kB
                     ->  Gather  (cost=45177.40..5585904.76 rows=17350773 width=40) (actual time=3811.864..228451.017 rows=35593611 loops=1)
                           Workers Planned: 6
                           Workers Launched: 6
                           ->  Merge Join  (cost=44177.40..3849827.46 rows=17350773 width=40) (actual time=4275.252..237087.584 rows=5084802 loops=7)
                                 Merge Cond: (partsupp.ps_partkey = part.p_partkey)
                                 ->  Parallel Index Only Scan using partsupp_pkey on partsupp  (cost=44149.93..1809958.34 rows=20000031 width=8) (actual time=4274.474..39958.862 rows=34268960 loops=7)
                                       Filter: (NOT (hashed SubPlan 1))
                                       Rows Removed by Filter: 16754
                                       Heap Fetches: 0
                                       SubPlan 1
                                         ->  Seq Scan on supplier  (cost=0.00..44148.61 rows=300 width=4) (actual time=5.186..4272.254 rows=1466 loops=7)
                                               Filter: ((s_comment)::text ~~ '%Customer%Complaints%'::text)
                                               Rows Removed by Filter: 2998534
                                 ->  Index Scan using part_pkey on part  (cost=0.56..1939335.17 rows=8675385 width=40) (actual time=0.144..181434.037 rows=12708841 loops=7)
                                       Filter: ((p_brand <> 'Brand#31'::bpchar) AND ((p_type)::text !~~ 'PROMO BRUSHED%'::text) AND (p_size = ANY ('{41,4,15,12,42,46,32,30}'::integer[])))
                                       Rows Removed by Filter: 72941508
 Planning time: 1.350 ms
 Execution time: 645897.192 ms
(27 rows)