Re: Query became very slow after 9.6 -> 10 upgrade

2017-11-25 Thread Michael Paquier
On Sat, Nov 25, 2017 at 8:54 PM, Dmitry Shalashov  wrote:
> Is it completely safe to use manually patched version in production?

Patching upstream PostgreSQL to fix a critical bug is of course something
that can be done. To reach a state where you are confident it is safe to
use in production, first be sure to test it thoroughly on a staging
instance. The author has also been working on Postgres for 20 years,
which gives some assurance.
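For instance, while testing on the staging instance it is worth
double-checking that the server you are querying really is the rebuilt
binary; a trivial sketch (not from the original mail):

-- Sanity check: confirm which server build the staging session is
-- actually connected to before and during testing of the patch.
SELECT version();
SHOW server_version;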
-- 
Michael



Re: Query became very slow after 9.6 -> 10 upgrade

2017-11-25 Thread Dmitry Shalashov
> Excellent, please follow up if you learn anything new.

Sure. But my testing is over, and anything new would only turn up
incidentally now. Testing hasn't revealed anything interesting.

> That will probably be in
> early February, per our release policy:

ok, thanks. That makes me kinda hope for some security problem :)

Is it completely safe to use manually patched version in production?


Dmitry Shalashov, relap.io & surfingbird.ru

2017-11-24 19:39 GMT+03:00 Tom Lane :

> Dmitry Shalashov  writes:
> > It looks like the patch helps us. Tom, thank you!
> > I'm still testing it though, just in case.
>
> Excellent, please follow up if you learn anything new.
>
> > What is PostgreSQL's schedule for releasing fixes like this? Can I expect
> > that it will be in 10.2, and when can I expect 10.2, approximately of
> > course?
>
> I haven't pushed it to the git repo yet, but I will shortly, and then
> it will be in the next minor release.  That will probably be in
> early February, per our release policy:
> https://www.postgresql.org/developer/roadmap/
>
> regards, tom lane
>


Re: Query became very slow after 9.6 -> 10 upgrade

2017-11-24 Thread Tom Lane
Dmitry Shalashov  writes:
> It looks like the patch helps us. Tom, thank you!
> I'm still testing it though, just in case.

Excellent, please follow up if you learn anything new.

> What is PostgreSQL's schedule for releasing fixes like this? Can I expect
> that it will be in 10.2, and when can I expect 10.2, approximately of course?

I haven't pushed it to the git repo yet, but I will shortly, and then
it will be in the next minor release.  That will probably be in
early February, per our release policy:
https://www.postgresql.org/developer/roadmap/

regards, tom lane



Re: Query became very slow after 9.6 -> 10 upgrade

2017-11-24 Thread Dmitry Shalashov
> The line number offsets are expected when applying to v10, but it looks
> like you failed to transfer the attachment cleanly ...

Yes, it was some mistake on our side.

It looks like the patch helps us. Tom, thank you!
I'm still testing it though, just in case.

What is PostgreSQL's schedule for releasing fixes like this? Can I expect
that it will be in 10.2, and when can I expect 10.2, approximately of course?


Dmitry Shalashov, relap.io & surfingbird.ru

2017-11-23 20:00 GMT+03:00 Tom Lane :

> Dmitry Shalashov  writes:
> > We tried to apply the patch to the 10.1 source, but something seems wrong:
> > patch -p1 < ../1.patch
> > (Stripping trailing CRs from patch; use --binary to disable.)
> > patching file src/backend/optimizer/plan/analyzejoins.c
> > (Stripping trailing CRs from patch; use --binary to disable.)
> > patching file src/backend/utils/adt/selfuncs.c
> > Hunk #1 succeeded at 3270 (offset -91 lines).
> > Hunk #2 succeeded at 3304 (offset -91 lines).
> > Hunk #3 succeeded at 3313 (offset -91 lines).
> > Hunk #4 succeeded at 3393 (offset -91 lines).
> > patch unexpectedly ends in middle of line
> > Hunk #5 succeeded at 3570 with fuzz 1 (offset -91 lines).
>
> The line number offsets are expected when applying to v10, but it looks
> like you failed to transfer the attachment cleanly ... there were
> certainly not CRs in it when I mailed it.  The output on v10
> should just look like
>
> patching file src/backend/optimizer/plan/analyzejoins.c
> patching file src/backend/utils/adt/selfuncs.c
> Hunk #1 succeeded at 3270 (offset -91 lines).
> Hunk #2 succeeded at 3304 (offset -91 lines).
> Hunk #3 succeeded at 3313 (offset -91 lines).
> Hunk #4 succeeded at 3393 (offset -91 lines).
> Hunk #5 succeeded at 3570 (offset -91 lines).
>
> regards, tom lane
>


Re: Query became very slow after 9.6 -> 10 upgrade

2017-11-23 Thread Tom Lane
Dmitry Shalashov  writes:
> We tried to apply the patch to the 10.1 source, but something seems wrong:
> patch -p1 < ../1.patch
> (Stripping trailing CRs from patch; use --binary to disable.)
> patching file src/backend/optimizer/plan/analyzejoins.c
> (Stripping trailing CRs from patch; use --binary to disable.)
> patching file src/backend/utils/adt/selfuncs.c
> Hunk #1 succeeded at 3270 (offset -91 lines).
> Hunk #2 succeeded at 3304 (offset -91 lines).
> Hunk #3 succeeded at 3313 (offset -91 lines).
> Hunk #4 succeeded at 3393 (offset -91 lines).
> patch unexpectedly ends in middle of line
> Hunk #5 succeeded at 3570 with fuzz 1 (offset -91 lines).

The line number offsets are expected when applying to v10, but it looks
like you failed to transfer the attachment cleanly ... there were
certainly not CRs in it when I mailed it.  The output on v10
should just look like

patching file src/backend/optimizer/plan/analyzejoins.c
patching file src/backend/utils/adt/selfuncs.c
Hunk #1 succeeded at 3270 (offset -91 lines).
Hunk #2 succeeded at 3304 (offset -91 lines).
Hunk #3 succeeded at 3313 (offset -91 lines).
Hunk #4 succeeded at 3393 (offset -91 lines).
Hunk #5 succeeded at 3570 (offset -91 lines).

regards, tom lane



Re: Query became very slow after 9.6 -> 10 upgrade

2017-11-23 Thread Dmitry Shalashov
We tried to apply the patch to the 10.1 source, but something seems wrong:

patch -p1 < ../1.patch
(Stripping trailing CRs from patch; use --binary to disable.)
patching file src/backend/optimizer/plan/analyzejoins.c
(Stripping trailing CRs from patch; use --binary to disable.)
patching file src/backend/utils/adt/selfuncs.c
Hunk #1 succeeded at 3270 (offset -91 lines).
Hunk #2 succeeded at 3304 (offset -91 lines).
Hunk #3 succeeded at 3313 (offset -91 lines).
Hunk #4 succeeded at 3393 (offset -91 lines).
patch unexpectedly ends in middle of line
Hunk #5 succeeded at 3570 with fuzz 1 (offset -91 lines).


Dmitry Shalashov, relap.io & surfingbird.ru

2017-11-23 2:07 GMT+03:00 Tom Lane :

> Dmitry Shalashov  writes:
> > Turns out we had not 9.6 but 9.5.
>
> I'd managed to reproduce the weird planner behavior locally in the
> regression database:
>
> regression=# create table foo (f1 int[], f2 int);
> CREATE TABLE
> regression=# explain select * from tenk1 where unique2 in (select distinct
> unnest(f1) from foo where f2=1);
>                                    QUERY PLAN
> --------------------------------------------------------------------------------
>  Nested Loop  (cost=30.85..80.50 rows=6 width=244)
>->  HashAggregate  (cost=30.57..30.63 rows=6 width=4)
>  Group Key: (unnest(foo.f1))
>  ->  HashAggregate  (cost=30.42..30.49 rows=6 width=4)
>Group Key: unnest(foo.f1)
>->  ProjectSet  (cost=0.00..28.92 rows=600 width=4)
>  ->  Seq Scan on foo  (cost=0.00..25.88 rows=6
> width=32)
>Filter: (f2 = 1)
>->  Index Scan using tenk1_unique2 on tenk1  (cost=0.29..8.30 rows=1
> width=244)
>  Index Cond: (unique2 = (unnest(foo.f1)))
> (10 rows)
>
> Digging into it, the reason for the duplicate HashAggregate step was that
> query_supports_distinctness() punted on SRFs-in-the-targetlist, basically
> on the argument that it wasn't worth extra work to handle that case.
> Thinking a bit harder, it seems to me that the correct analysis is:
> 1. If we are proving distinctness on the grounds of a DISTINCT clause,
> then it doesn't matter whether there are any SRFs, because DISTINCT
> removes duplicates after tlist SRF expansion.
> 2. But tlist SRFs break the ability to prove distinctness on the grounds
> of GROUP BY, unless all of them are within grouping columns.
> It still seems like detecting the second case is harder than it's worth,
> but we can trivially handle the first case, with little more than some
> code rearrangement.
>
> The other problem is that the output rowcount of the sub-select (ie, of
> the HashAggregate) is being estimated as though the SRF weren't there.
> This turns out to be because estimate_num_groups() doesn't consider the
> possibility of SRFs in the grouping columns.  It never has, but in 9.6 and
> before the problem was masked by the fact that grouping_planner scaled up
> the result rowcount by tlist_returns_set_rows() *after* performing
> grouping.  Now we're effectively doing that in the other order, which is
> more correct, but that means estimate_num_groups() has to apply some sort
> of adjustment.  I suggest that it just multiply its old estimate by the
> maximum of the SRF expansion counts.  That's likely to be an overestimate,
> but it's really hard to do better without specific knowledge of the
> individual SRF's behavior.
>
> In short, I propose the attached fixes.  I've checked this and it seems
> to fix Dmitry's original problem according to the test case he sent
> off-list.
>
> regards, tom lane
>
>


Re: Query became very slow after 9.6 -> 10 upgrade

2017-11-22 Thread Tom Lane
Dmitry Shalashov  writes:
> Turns out we had not 9.6 but 9.5.

I'd managed to reproduce the weird planner behavior locally in the
regression database:

regression=# create table foo (f1 int[], f2 int);
CREATE TABLE
regression=# explain select * from tenk1 where unique2 in (select distinct 
unnest(f1) from foo where f2=1);
                                   QUERY PLAN
--------------------------------------------------------------------------------
 Nested Loop  (cost=30.85..80.50 rows=6 width=244)
   ->  HashAggregate  (cost=30.57..30.63 rows=6 width=4)
 Group Key: (unnest(foo.f1))
 ->  HashAggregate  (cost=30.42..30.49 rows=6 width=4)
   Group Key: unnest(foo.f1)
   ->  ProjectSet  (cost=0.00..28.92 rows=600 width=4)
 ->  Seq Scan on foo  (cost=0.00..25.88 rows=6 width=32)
   Filter: (f2 = 1)
   ->  Index Scan using tenk1_unique2 on tenk1  (cost=0.29..8.30 rows=1 
width=244)
 Index Cond: (unique2 = (unnest(foo.f1)))
(10 rows)

Digging into it, the reason for the duplicate HashAggregate step was that
query_supports_distinctness() punted on SRFs-in-the-targetlist, basically
on the argument that it wasn't worth extra work to handle that case.
Thinking a bit harder, it seems to me that the correct analysis is:
1. If we are proving distinctness on the grounds of a DISTINCT clause,
then it doesn't matter whether there are any SRFs, because DISTINCT
removes duplicates after tlist SRF expansion.
2. But tlist SRFs break the ability to prove distinctness on the grounds
of GROUP BY, unless all of them are within grouping columns.
It still seems like detecting the second case is harder than it's worth,
but we can trivially handle the first case, with little more than some
code rearrangement.
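
To make the two cases concrete, here is a hedged illustration using the
same foo table as the reproduction above (these queries are not part of
the proposed patch or its test case):

-- Case 1: DISTINCT de-duplicates after the tlist SRF has been expanded,
-- so the subquery's output is provably unique on the selected column.
SELECT DISTINCT unnest(f1) FROM foo WHERE f2 = 1;

-- Case 2: GROUP BY groups before the tlist SRF is expanded, so unnest()
-- can reintroduce duplicates across groups; distinctness cannot be proven
-- unless the SRF itself is one of the grouping columns.
SELECT unnest(f1) FROM foo GROUP BY f1;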

The other problem is that the output rowcount of the sub-select (ie, of
the HashAggregate) is being estimated as though the SRF weren't there.
This turns out to be because estimate_num_groups() doesn't consider the
possibility of SRFs in the grouping columns.  It never has, but in 9.6 and
before the problem was masked by the fact that grouping_planner scaled up
the result rowcount by tlist_returns_set_rows() *after* performing
grouping.  Now we're effectively doing that in the other order, which is
more correct, but that means estimate_num_groups() has to apply some sort
of adjustment.  I suggest that it just multiply its old estimate by the
maximum of the SRF expansion counts.  That's likely to be an overestimate,
but it's really hard to do better without specific knowledge of the
individual SRF's behavior.
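
A hedged way to see the estimation problem in practice, again with the foo
table from above (the INSERT is purely illustrative data, not from the
thread): compare the planner's group-count estimate on the aggregate node
with the actual number of groups.

-- Illustrative data only; any array contents would do.
INSERT INTO foo
SELECT ARRAY[i, i + 1, i + 2], i % 10
FROM generate_series(1, 1000) AS i;
ANALYZE foo;

-- The rows= estimate on the HashAggregate node is what
-- estimate_num_groups() produces; before the fix it does not account for
-- unnest() multiplying the input rows, so compare it with the actual count.
EXPLAIN ANALYZE
SELECT DISTINCT unnest(f1) FROM foo WHERE f2 = 1;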

In short, I propose the attached fixes.  I've checked this and it seems
to fix Dmitry's original problem according to the test case he sent
off-list.

regards, tom lane

diff --git a/src/backend/optimizer/plan/analyzejoins.c b/src/backend/optimizer/plan/analyzejoins.c
index 5b0da14..5783f90 100644
*** a/src/backend/optimizer/plan/analyzejoins.c
--- b/src/backend/optimizer/plan/analyzejoins.c
*** rel_is_distinct_for(PlannerInfo *root, R
*** 744,751 
  bool
  query_supports_distinctness(Query *query)
  {
! 	/* we don't cope with SRFs, see comment below */
! 	if (query->hasTargetSRFs)
  		return false;
  
  	/* check for features we can prove distinctness with */
--- 744,751 
  bool
  query_supports_distinctness(Query *query)
  {
! 	/* SRFs break distinctness except with DISTINCT, see below */
! 	if (query->hasTargetSRFs && query->distinctClause == NIL)
  		return false;
  
  	/* check for features we can prove distinctness with */
*** query_is_distinct_for(Query *query, List
*** 787,806 
  	Assert(list_length(colnos) == list_length(opids));
  
  	/*
- 	 * A set-returning function in the query's targetlist can result in
- 	 * returning duplicate rows, if the SRF is evaluated after the
- 	 * de-duplication step; so we play it safe and say "no" if there are any
- 	 * SRFs.  (We could be certain that it's okay if SRFs appear only in the
- 	 * specified columns, since those must be evaluated before de-duplication;
- 	 * but it doesn't presently seem worth the complication to check that.)
- 	 */
- 	if (query->hasTargetSRFs)
- 		return false;
- 
- 	/*
  	 * DISTINCT (including DISTINCT ON) guarantees uniqueness if all the
  	 * columns in the DISTINCT clause appear in colnos and operator semantics
! 	 * match.
  	 */
  	if (query->distinctClause)
  	{
--- 787,796 
  	Assert(list_length(colnos) == list_length(opids));
  
  	/*
  	 * DISTINCT (including DISTINCT ON) guarantees uniqueness if all the
  	 * columns in the DISTINCT clause appear in colnos and operator semantics
! 	 * match.  This is true even if there are SRFs in the DISTINCT columns or
! 	 * elsewhere in the tlist.
  	 */
  	if (query->distinctClause)
  	{
*** query_is_distinct_for(Query *query, List
*** 820,825 
--- 810,825 
  	}
  
  	

Re: Query became very slow after 9.6 -> 10 upgrade

2017-11-22 Thread Dmitry Shalashov
I believe that with SSD disks random_page_cost should be very low, but
here you go (I decided to settle on EXPLAIN without ANALYZE this time; is
this good enough?):

 Sort  (cost=18410.26..18410.27 rows=1 width=63)
   Sort Key: (sum(st.shows)) DESC
   CTE a
 ->  Index Scan using adroom_active_idx on adroom  (cost=0.28..301.85
rows=1 width=233)
   Index Cond: ((CURRENT_TIMESTAMP >= start_ts) AND
(CURRENT_TIMESTAMP <= stop_ts))
   Filter: (((groups -> 0) ->> 'provider'::text) ~
'^target_mail_ru'::text)
   CTE b
 ->  HashAggregate  (cost=1.28..1.29 rows=1 width=40)
   Group Key: a.provider, a.id, unnest(a.domain_ids)
   ->  ProjectSet  (cost=0.00..0.53 rows=100 width=40)
 ->  CTE Scan on a  (cost=0.00..0.02 rows=1 width=68)
   ->  GroupAggregate  (cost=18107.09..18107.11 rows=1 width=63)
 Group Key: b.provider, d.domain
 ->  Sort  (cost=18107.09..18107.09 rows=1 width=55)
   Sort Key: b.provider, d.domain
   ->  Nested Loop  (cost=1.00..18107.08 rows=1 width=55)
 Join Filter: ((b.id = st.adroom_id) AND (b.domain_id =
st.domain_id))
 ->  Nested Loop  (cost=0.42..8.46 rows=1 width=59)
   ->  CTE Scan on b  (cost=0.00..0.02 rows=1
width=40)
   ->  Index Scan using domains_pkey on domains d
(cost=0.42..8.44 rows=1 width=19)
 Index Cond: (id = b.domain_id)
                 ->  Index Scan using adroom_stat_day_adroom_id_domain_id_url_id_is_wlabp_idx on adroom_stat st  (cost=0.58..18091.26 rows=491 width=16)
   Index Cond: ((day >= date_trunc('day'::text,
(CURRENT_TIMESTAMP - '7 days'::interval))) AND (day <=
date_trunc('day'::text, CURRENT_TIMESTAMP)) AND (domain_id = d.id))


Dmitry Shalashov, relap.io & surfingbird.ru

2017-11-22 17:44 GMT+03:00 Alex Ignatov <a.igna...@postgrespro.ru>:

> Here is my select right after initdb:
>
>
>
> postgres=# select name,setting from pg_settings where name like '%_cost';
>
>          name         | setting
> ----------------------+---------
>  cpu_index_tuple_cost | 0.005
>  cpu_operator_cost    | 0.0025
>  cpu_tuple_cost       | 0.01
>  parallel_setup_cost  | 1000
>  parallel_tuple_cost  | 0.1
>  random_page_cost     | 4
>  seq_page_cost        | 1
>
>
>
>
>
> Can you generate plan with random_page_cost = 4?
>
>
>
>
>
> --
>
> Alex Ignatov
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company
>
>
>
> *From:* Dmitry Shalashov [mailto:skau...@gmail.com]
> *Sent:* Wednesday, November 22, 2017 5:29 PM
> *To:* Alex Ignatov <a.igna...@postgrespro.ru>
> *Cc:* pgsql-performa...@postgresql.org
> *Subject:* Re: Query became very slow after 9.6 -> 10 upgrade
>
>
>
> Sure, here it goes:
>
>
>
>          name         | setting
> ----------------------+---------
>  cpu_index_tuple_cost | 0.005
>  cpu_operator_cost    | 0.0025
>  cpu_tuple_cost       | 0.01
>  parallel_setup_cost  | 1000
>  parallel_tuple_cost  | 0.1
>  random_page_cost     | 1
>  seq_page_cost        | 1
>
>
>
>
> Dmitry Shalashov, relap.io & surfingbird.ru
>
>
>
> 2017-11-22 17:24 GMT+03:00 Alex Ignatov <a.igna...@postgrespro.ru>:
>
> Hello!
>
> What about :
>
> select name,setting from pg_settings where name like '%_cost';
>
>
>
> --
>
> Alex Ignatov
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company
>
>
>
>
>
> *From:* Dmitry Shalashov [mailto:skau...@gmail.com]
> *Sent:* Wednesday, November 22, 2017 5:14 PM
> *To:* pgsql-performa...@postgresql.org
> *Subject:* Query became very slow after 9.6 -> 10 upgrade
>
>
>
> Hi!
>
>
>
> I've seen a few letters like this on the mailing list and for some reason
> thought that it probably wouldn't happen to us, but here I am lol.
>
>
>
> It's a "nestloop hits again" situation.
>
>
>
> I'll try to provide the plan from 9.6 later, but right now I have only the
> plan from 10.1.
>
>
>
> Query: https://pastebin.com/9b953tT7
>
> It used to run in under 3 seconds (that's our default timeout), and now it
> runs for 12 minutes.
>
>
>
> \d adroom: https://pastebin.com/vBrPGtxT (3800 rows)
>
> \d adroom_stat: https://pastebin.com/CkBArCC9 (47mln rows, 1.5mln satisfy
> condition on day column)
>
> \d domains: https://pastebin.com/65hk7YCm (73000 rows)
>
>
>
> All three tables are analyzed.
>
>
>
> EXPLAIN ANALYZE: https://pastebin.com/PenHEgf0
>
> EXPLAIN ANALYZE with nestloop off: https://pastebin.com/zX35CPCV (0.8s)

RE: Query became very slow after 9.6 -> 10 upgrade

2017-11-22 Thread Alex Ignatov
Here is my select right after initdb:

 

postgres=# select name,setting from pg_settings where name like '%_cost';

         name         | setting
----------------------+---------
 cpu_index_tuple_cost | 0.005
 cpu_operator_cost    | 0.0025
 cpu_tuple_cost       | 0.01
 parallel_setup_cost  | 1000
 parallel_tuple_cost  | 0.1
 random_page_cost     | 4
 seq_page_cost        | 1

 

 

Can you generate plan with random_page_cost = 4?
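
A minimal sketch (not from this mail) of one way to produce such a plan
without touching postgresql.conf: the override is session-local and the
ROLLBACK discards it. The abbreviated query over adroom_stat is only a
stand-in for the full query from the original message.

BEGIN;
SET LOCAL random_page_cost = 4;
-- Re-run the EXPLAIN with the temporarily raised cost.
EXPLAIN (ANALYZE, BUFFERS)
SELECT adroom_id, domain_id, shows, clicks          -- stand-in query
FROM adroom_stat
WHERE day BETWEEN date_trunc('day', current_timestamp - interval '1 week')
              AND date_trunc('day', current_timestamp);
ROLLBACK;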

 

 

--

Alex Ignatov 
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

 

From: Dmitry Shalashov [mailto:skau...@gmail.com] 
Sent: Wednesday, November 22, 2017 5:29 PM
To: Alex Ignatov <a.igna...@postgrespro.ru>
Cc: pgsql-performa...@postgresql.org
Subject: Re: Query became very slow after 9.6 -> 10 upgrade

 

Sure, here it goes:

 

         name         | setting
----------------------+---------
 cpu_index_tuple_cost | 0.005
 cpu_operator_cost    | 0.0025
 cpu_tuple_cost       | 0.01
 parallel_setup_cost  | 1000
 parallel_tuple_cost  | 0.1
 random_page_cost     | 1
 seq_page_cost        | 1




 

Dmitry Shalashov, relap.io & surfingbird.ru

 

2017-11-22 17:24 GMT+03:00 Alex Ignatov <a.igna...@postgrespro.ru>:

Hello!

What about :

select name,setting from pg_settings where name like '%_cost';

 

--

Alex Ignatov 
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

 

 

From: Dmitry Shalashov [mailto:skau...@gmail.com]
Sent: Wednesday, November 22, 2017 5:14 PM
To: pgsql-performa...@postgresql.org
Subject: Query became very slow after 9.6 -> 10 upgrade

 

Hi!

 

I've seen a few letters like this on the mailing list and for some reason
thought that it probably wouldn't happen to us, but here I am lol.

 

It's a "nestloop hits again" situation.

 

I'll try to provide the plan from 9.6 later, but right now I have only the
plan from 10.1.

 

Query: https://pastebin.com/9b953tT7

It used to run in under 3 seconds (that's our default timeout), and now it
runs for 12 minutes.

 

\d adroom: https://pastebin.com/vBrPGtxT (3800 rows)

\d adroom_stat: https://pastebin.com/CkBArCC9 (47mln rows, 1.5mln satisfy 
condition on day column)

\d domains: https://pastebin.com/65hk7YCm (73000 rows)

 

All three tables are analyzed.

 

EXPLAIN ANALYZE: https://pastebin.com/PenHEgf0

EXPLAIN ANALYZE with nestloop off: https://pastebin.com/zX35CPCV (0.8s)
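
Presumably the nestloop-off plan was obtained with something along these
lines; a hedged sketch, session-local only, with a simplified stand-in join
in place of the full pastebin query:

SET enable_nestloop = off;   -- discourage nested-loop joins for this session
EXPLAIN ANALYZE
SELECT st.adroom_id, st.domain_id, st.shows, st.clicks   -- stand-in query
FROM adroom_stat st
JOIN domains d ON d.id = st.domain_id;
RESET enable_nestloop;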

 

Regarding server parameters: it's a mighty beast with 2x E5-2630 v3, 192 GB of
RAM, and two very, very fast NVMe server-class SSDs in RAID 1.

 

What can I do with it?

 

 

Also maybe this will be useful:

 

1st query, runs under 1ms

select title, id, groups->0->>'provider' provider, domain_ids from adroom where 
groups->0->>'provider' ~ '^target_mail_ru' and not is_paused and 
current_timestamp between start_ts and stop_ts

 

2nd query that uses 1st one, runs under 3 ms

select distinct unnest(domain_ids) FROM (select title, id, 
groups->0->>'provider' provider, domain_ids from adroom where 
groups->0->>'provider' ~ '^target_mail_ru' and not is_paused and 
current_timestamp between start_ts and stop_ts) t1

 

3rd query which returns 1.5mln rows, runs in about 0.6s

SELECT adroom_id, domain_id, shows, clicks FROM adroom_stat WHERE day between 
date_trunc('day', current_timestamp - interval '1 week') and date_trunc('day', 
current_timestamp)

 

BUT if I add one additional condition to the 3rd query, which is basically the
2nd query, it runs for the same 12 minutes:

SELECT adroom_id, domain_id, shows, clicks FROM adroom_stat WHERE day between 
date_trunc('day', current_timestamp - interval '1 week') and date_trunc('day', 
current_timestamp) AND domain_id IN (select distinct unnest(domain_ids) FROM 
(select title, id, groups->0->>'provider' provider, domain_ids from adroom 
where groups->0->>'provider' ~ '^target_mail_ru' and not is_paused and 
current_timestamp between start_ts and stop_ts) t1)

 

Plan of last query:

 Nested Loop  (cost=88.63..25617.31 rows=491 width=16) (actual 
time=3.512..733248.271 rows=1442797 loops=1)

   ->  HashAggregate  (cost=88.06..88.07 rows=1 width=4) (actual 
time=3.380..13.561 rows=3043 loops=1)

 Group Key: (unnest(adroom.domain_ids))

 ->  HashAggregate  (cost=88.03..88.04 rows=1 width=4) (actual 
time=2.199..2.607 rows=3043 loops=1)

   Group Key: unnest(adroom.domain_ids)

   ->  ProjectSet  (cost=0.28..87.78 rows=100 width=4) (actual 
time=0.701..1.339 rows=3173 loops=1)

 ->  Index Scan using adroom_active_idx on adroom  
(cost=0.28..87.27 rows=1 width=167) (actual time=0.688..1.040 rows=4 loops=1)