Re: [PERFORM] Big number of connections

2016-03-31 Thread Chris Cogdon
Although somewhat academic, since pgbouncer doesn’t support it (and might never), have a look at this ticket, which, if it were ever implemented, would give you what you need:

https://github.com/pgbouncer/pgbouncer/issues/75


> On Mar 31, 2016, at 15:47, Alvaro Herrera <alvhe...@2ndquadrant.com> wrote:
> 
>> If they are going to keep the client connections open, they would need to
>> run pgbouncer in statement or transaction mode.
> 
> As I understand, in pgbouncer you cannot have connections that serve
> different users.  If each individual requires its own database-level
> user, pgbouncer would not help at all.
> 
> I would look seriously into getting rid of the always-open requirement
> for connections.
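
For what it’s worth, here is a minimal sketch of the transaction-pooling setup under discussion (pgbouncer.ini; the database entry and pool sizes are made-up values, not a recommendation):

[databases]
; hypothetical alias; point it at the real server
projectdb = host=127.0.0.1 port=5432 dbname=projectdb

[pgbouncer]
; transaction pooling returns the server connection to the pool at
; COMMIT/ROLLBACK, so many idle clients can share a few server connections
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20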

— Chris Cogdon

[PERFORM] Adding a ROLLUP switches to GroupAggregate unexpectedly

2016-03-31 Thread Chris Cogdon
Hi folks! I’ve a query where adding a rollup to the GROUP BY unexpectedly switches the plan from HashAggregate to GroupAggregate. Since the rollup should add only one additional bucket, the switch to sorting (and thus an on-disk temporary file) is very puzzling. This reads like a query-optimizer bug to me. This is the first time I’ve posted to the list; please forgive me if I’ve omitted any “before bugging the list” homework.


Description: Adding a summary row by changing “GROUP BY x” into “GROUP BY 
ROLLUP (x)” should not cause a switch from HashAggregate to GroupAggregate
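
For context, a rough sketch of the table’s shape, inferred from the Output lines in the plans below (this DDL is hypothetical; the column types are guesses, e.g. error_code has width 2, which suggests smallint):

-- hypothetical DDL, for illustration only
CREATE TABLE api_activities (
    id          bigserial PRIMARY KEY,
    client_id   integer,
    date_added  timestamptz,
    kind        text,
    activity    text,
    error_code  smallint
);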


Here’s the “explain” from the simple GROUP BY:

projectdb=> explain analyze verbose SELECT error_code, count ( * ) FROM api_activities GROUP BY error_code;

                                                               QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=3456930.11..3456930.16 rows=5 width=2) (actual time=26016.222..26016.223 rows=5 loops=1)
   Output: error_code, count(*)
   Group Key: api_activities.error_code
   ->  Seq Scan on public.api_activities  (cost=0.00..3317425.74 rows=27900874 width=2) (actual time=0.018..16232.608 rows=36224844 loops=1)
         Output: id, client_id, date_added, kind, activity, error_code
 Planning time: 0.098 ms
 Execution time: 26016.337 ms
(7 rows)

Changing this to a GROUP BY ROLLUP switches the plan to GroupAggregate (with a corresponding on-disk temporary file for the sort):

projectdb=> explain analyze verbose SELECT error_code, count ( * ) FROM api_activities GROUP BY rollup (error_code);

                                                               QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=7149357.90..7358614.52 rows=6 width=2) (actual time=54271.725..82354.144 rows=6 loops=1)
   Output: error_code, count(*)
   Group Key: api_activities.error_code
   Group Key: ()
   ->  Sort  (cost=7149357.90..7219110.09 rows=27900874 width=2) (actual time=54270.636..76651.121 rows=36222428 loops=1)
         Output: error_code
         Sort Key: api_activities.error_code
         Sort Method: external merge  Disk: 424864kB
         ->  Seq Scan on public.api_activities  (cost=0.00..3317425.74 rows=27900874 width=2) (actual time=0.053..34282.239 rows=36222428 loops=1)
               Output: error_code
 Planning time: 2.611 ms
 Execution time: 82437.416 ms
(12 rows)


I’ve given the output of “EXPLAIN ANALYZE VERBOSE” rather than plain EXPLAIN, but the plan was the same either way.

Running VACUUM FULL ANALYZE on this table makes no difference. Switching to count(error_code) makes no difference. Using GROUP BY GROUPING SETS ((), error_code) makes no difference.

I understand that a HashAggregate is possible only if the planner expects all the aggregate states to fit into work_mem. There are only 5 distinct error codes, and the statistics (from pg_stats) show that PG knows this. Adding just one more bucket for the “()” grand-total row should not cause a fallback to GroupAggregate.
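
For completeness, the planner’s estimate can be checked directly against pg_stats (a sketch; the exact values depend on the last ANALYZE):

SELECT n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE schemaname = 'public'
  AND tablename  = 'api_activities'
  AND attname    = 'error_code';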


PostgreSQL version: 9.5.2 (just upgraded today, Thank you! <3 )

(Was exhibiting same problem under 9.5.0)


How installed: apt-get package from apt.postgresql.org


Settings differences:

 application_name: psql
 client_encoding: UTF8
 DateStyle: ISO, MDY
 default_text_search_config: pg_catalog.english
 dynamic_shared_memory_type: posix
 lc_messages: en_US.UTF-8
 lc_monetary: en_US.UTF-8
 lc_numeric: en_US.UTF-8
 lc_time: en_US.UTF-8
 listen_addresses: *
 log_line_prefix: %t [%p-%c-%l][%a][%i][%e][%s][%x-%v] %q%u@%d 
 log_timezone: UTC
 logging_collector: on
 max_connections: 100
 max_stack_depth: 2MB
 port: 5432
 shared_buffers: 1GB
 ssl: on
 ssl_cert_file: /etc/ssl/certs/ssl-cert-snakeoil.pem
 ssl_key_file: /etc/ssl/private/ssl-cert-snakeoil.key
 TimeZone: UTC
 work_mem: 128MB


OS and Version: Ubuntu Trusty: Linux 3.13.0-66-generic #108-Ubuntu SMP Wed Oct 
7 15:20:27 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux


Program used to connect: psql


Nothing unusual in the logs, apart from the entry for this query showing that it took a while to run.


I know there are several workarounds for this simple case, such as using a CTE and then doing a rollup on that (see the sketch below), but I’m simply reporting what I think is a bug in the query optimizer.
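
For anyone hitting the same thing, this is the CTE workaround I mean (a sketch: the inner GROUP BY still gets a HashAggregate, and the rollup then runs over only a handful of rows):

WITH per_code AS (
    SELECT error_code, count(*) AS n
    FROM api_activities
    GROUP BY error_code            -- planner still picks HashAggregate here
)
SELECT error_code, sum(n) AS count
FROM per_code
GROUP BY ROLLUP (error_code);      -- sort now touches ~6 rows, not 36 million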


Thank you for your attention! Please let me know if there’s any additional 
information you need, or additional tests you’d like to run.


— Chris Cogdon <ch...@cogdon.org>
— Using PostgreSQL since 6.2!