Re: [PERFORM] Slow query with big tables

2016-08-26 Thread Pavel Stehule
2016-08-26 22:26 GMT+02:00 Mike Sofen :

> *From:* pgsql-performance-ow...@postgresql.org [mailto:pgsql-performance-
> ow...@postgresql.org] *On Behalf Of *Tommi K
> *Sent:* Friday, August 26, 2016 7:25 AM
> *To:* Craig James
> *Cc:* andreas kretschmer; pgsql-performance@postgresql.org
> *Subject:* Re: [PERFORM] Slow query with big tables
>
> Ok, sorry that I did not add the original message. I thought that it would
> be automatically added to the message thread.
>
> Here is the question again:
>
> Is there a way to keep query time constant as the database size grows?
> Should I use partitioning or partial indexes?

Try disabling nested loops (the enable_nestloop setting); the row estimates here are bad.

This query cannot be expected to be fast in any case: the two ILIKE filters have a negative impact on the planner's estimates.
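
A minimal way to try this in a psql session; the SELECT below is only an
illustrative stand-in built from the schema posted later in the thread, not
the original query:

  SET enable_nestloop = off;   -- discourages (does not forbid) nested-loop joins; session-local
  EXPLAIN (ANALYZE, BUFFERS)
  SELECT count(*)
  FROM "Feature" f
  JOIN "Point" p ON p.feature_id = f.id;
  RESET enable_nestloop;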

Regards

Pavel


> Thanks,
> Tommi Kaksonen

Re: [PERFORM] Slow query with big tables

2016-08-26 Thread Jim Nasby

On 8/26/16 3:26 PM, Mike Sofen wrote:

> Is there a way to keep query time constant as the database size grows?

No. More data == more time. Unless you find a way to break the laws of
physics.

> Should I use partitioning or partial indexes?

Neither technique is a magic bullet. I doubt either would help here.
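
For readers following along, a purely illustrative partial index on the schema
posted elsewhere in the thread. The predicate is an assumption, and such an
index is only usable by queries whose WHERE clause matches it, which the
posted aggregate query does not:

  CREATE INDEX point_out_of_tolerance_index
    ON "Point" (feature_id)
    WHERE status_x = 2 OR status_y = 2 OR status_z = 2;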
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Slow query with big tables

2016-08-26 Thread Mike Sofen
 

From: pgsql-performance-ow...@postgresql.org 
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Tommi K
Sent: Friday, August 26, 2016 7:25 AM
To: Craig James 
Cc: andreas kretschmer ; 
pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Slow query with big tables

 

Ok, sorry that I did not add the original message. I thought that it would be
automatically added to the message thread.

Here is the question again:

Is there a way to keep query time constant as the database size grows? Should I
use partitioning or partial indexes?

 

Thanks,
Tommi Kaksonen

Re: [PERFORM] Slow query with big tables

2016-08-26 Thread Tommi K
Ok, sorry that I did not add the original message. I thought that it would
be automatically added to the message thread.

Here is the question again:

Is there a way to keep query time constant as the database size grows? Should
I use partitioning or partial indexes?

Thanks,
Tommi Kaksonen



> Hello,
>
> I have the following tables and query. I would like to get some help to
> find out why it is slow and how its performance could be improved.
>
> Thanks,
> Tommi K.
>
>
> --Table definitions---
> CREATE TABLE "Measurement"
> (
>   id bigserial NOT NULL,
>   product_id bigserial NOT NULL,
>   nominal_data_id bigserial NOT NULL,
>   description text,
>   serial text,
>   measurement_time timestamp without time zone,
>   status smallint,
>   system_description text,
>   CONSTRAINT "Measurement_pkey" PRIMARY KEY (id),
>   CONSTRAINT "Measurement_nominal_data_id_fkey" FOREIGN KEY
(nominal_data_id)
>   REFERENCES "Nominal_data" (id) MATCH SIMPLE
>   ON UPDATE NO ACTION ON DELETE NO ACTION,
>   CONSTRAINT "Measurement_product_id_fkey" FOREIGN KEY (product_id)
>   REFERENCES "Product" (id) MATCH SIMPLE
>   ON UPDATE NO ACTION ON DELETE NO ACTION
> )
> WITH (
>   OIDS=FALSE
> );
>
> CREATE INDEX measurement_time_index
>   ON "Measurement"
>   USING btree
>   (measurement_time);
> ALTER TABLE "Measurement" CLUSTER ON measurement_time_index;
>
> CREATE TABLE "Product"
> (
>   id bigserial NOT NULL,
>   name text,
>   description text,
>   info text,
>   system_name text,
>   CONSTRAINT "Product_pkey" PRIMARY KEY (id)
> )
> WITH (
>   OIDS=FALSE
> );
>
>
> CREATE TABLE "Extra_info"
> (
>   id bigserial NOT NULL,
>   measurement_id bigserial NOT NULL,
>   name text,
>   description text,
>   info text,
>   type text,
>   value_string text,
>   value_double double precision,
>   value_integer bigint,
>   value_bool boolean,
>   CONSTRAINT "Extra_info_pkey" PRIMARY KEY (id),
>   CONSTRAINT "Extra_info_measurement_id_fkey" FOREIGN KEY (measurement_id)
>   REFERENCES "Measurement" (id) MATCH SIMPLE
>   ON UPDATE NO ACTION ON DELETE NO ACTION
> )
> WITH (
>   OIDS=FALSE
> );
>
> CREATE INDEX extra_info_measurement_id_index
>   ON "Extra_info"
>   USING btree
>   (measurement_id);
>
> CREATE TABLE "Feature"
> (
>   id bigserial NOT NULL,
>   measurement_id bigserial NOT NULL,
>   name text,
>   description text,
>   info text,
>   CONSTRAINT "Feature_pkey" PRIMARY KEY (id),
>   CONSTRAINT "Feature_measurement_id_fkey" FOREIGN KEY (measurement_id)
>   REFERENCES "Measurement" (id) MATCH SIMPLE
>   ON UPDATE NO ACTION ON DELETE NO ACTION
> )
> WITH (
>   OIDS=FALSE
> );
>
> CREATE INDEX feature_measurement_id_and_name_index
>   ON "Feature"
>   USING btree
>   (measurement_id, name COLLATE pg_catalog."default");
>
> CREATE INDEX feature_measurement_id_index
>   ON "Feature"
>   USING hash
>   (measurement_id);
>
>
> CREATE TABLE "Point"
> (
>   id bigserial NOT NULL,
>   feature_id bigserial NOT NULL,
>   x double precision,
>   y double precision,
>   z double precision,
>   status_x smallint,
>   status_y smallint,
>   status_z smallint,
>   difference_x double precision,
>   difference_y double precision,
>   difference_z double precision,
>   CONSTRAINT "Point_pkey" PRIMARY KEY (id),
>   CONSTRAINT "Point_feature_id_fkey" FOREIGN KEY (feature_id)
>   REFERENCES "Feature" (id) MATCH SIMPLE
>   ON UPDATE NO ACTION ON DELETE NO ACTION
> )
> WITH (
>   OIDS=FALSE
> );
>
> CREATE INDEX point_feature_id_index
>   ON "Point"
>   USING btree
>   (feature_id);
>
> CREATE TABLE "Warning"
> (
>   id bigserial NOT NULL,
>   feature_id bigserial NOT NULL,
>   "number" smallint,
>   info text,
>   CONSTRAINT "Warning_pkey" PRIMARY KEY (id),
>   CONSTRAINT "Warning_feature_id_fkey" FOREIGN KEY (feature_id)
>   REFERENCES "Feature" (id) MATCH SIMPLE
>   ON UPDATE NO ACTION ON DELETE NO ACTION
> )
> WITH (
>   OIDS=FALSE
> );
>
> CREATE INDEX warning_feature_id_index
>   ON "Warning"
>   USING btree
>   (feature_id);
>
>
> ---Query---
> SELECT
>   f.name,
>   f.description,
>   SUM(CASE WHEN p.status_x = 0 AND p.status_y = 0 AND p.status_z = 0
>            AND warning.id IS NULL THEN 1 ELSE 0 END) AS green_count,
>   SUM(CASE WHEN p.status_x = 0 AND p.status_y = 0 AND p.status_z = 0
>            AND warning.id IS NOT NULL THEN 1 ELSE 0 END) AS green_warned_count,
>   SUM(CASE WHEN NOT (p.status_x = 0 AND p.status_y = 0 AND p.status_z = 0)
>            AND NOT (p.status_x = 2 OR p.status_y = 2 OR p.status_z = 2)
>            AND warning.id IS NULL THEN 1 ELSE 0 END) AS yellow_count,
>   SUM(CASE WHEN NOT (p.status_x = 0 AND p.status_y = 0 AND p.status_z = 0)
>            AND NOT (p.status_x = 2 OR p.status_y = 2 OR p.status_z = 2)
>            AND warning.id IS NOT NULL THEN 1 ELSE 0 END) AS yellow_warned_count,
>   SUM(CASE WHEN (p.status_x = 2 OR p.status_y = 2 OR p.status_z = 2)
>            AND warning.id IS NULL THEN 1 ELSE 0 END) AS red_count,
>   SUM(CASE WHEN (p.status_x = 2 OR p.status_y = 2 OR p.status_z = 2)
>            AND warning.id IS NOT NULL THEN 1 ELSE 0 END) AS red_warned_count,
>   SUM(CASE WHEN 
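
A minimal, illustrative way to capture plan and timing information for a query
over these tables; the joins and the date filter below are placeholders and
not the full query above:

  EXPLAIN (ANALYZE, BUFFERS)
  SELECT f.name, count(*)
  FROM "Measurement" m
  JOIN "Feature" f ON f.measurement_id = m.id
  JOIN "Point" p ON p.feature_id = f.id
  LEFT JOIN "Warning" w ON w.feature_id = f.id
  WHERE m.measurement_time >= '2016-08-01'
  GROUP BY f.name;

The ANALYZE and BUFFERS options report actual row counts and I/O per plan
node, which is usually where a mismatch with the planner's estimates shows up.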

Re: [PERFORM] Slow query with big tables

2016-08-26 Thread Craig James
On Fri, Aug 26, 2016 at 6:17 AM, Tommi K  wrote:

> Hello,
> thanks for the response. I did not receive the response to my email even
> though I am subscribed to the pgsql-performance mailing list. Let's hope that
> I get the next one :)
>

Please include the email you are replying to when you respond. It saves
everyone time if they don't have to dig up your old emails, and many of us
discard old emails anyway and have no idea what you wrote before.

Craig


> Increasing work_mem did not have a great impact on performance, but I
> will try updating the PostgreSQL version to see if it speeds things up.
>
> However, is there a way to keep query time constant as the database size
> grows? Should I use partitioning or partial indexes?
>
> Best Regards,
> Tommi Kaksonen
>



-- 
-
Craig A. James
Chief Technology Officer
eMolecules, Inc.
-


Re: [PERFORM] Slow query with big tables

2016-08-26 Thread Tommi K
Hello,
thanks for the response. I did not receive the response to my email even though
I am subscribed to the pgsql-performance mailing list. Let's hope that I get
the next one :)

Increasing work_mem did not have a great impact on performance, but I will try
updating the PostgreSQL version to see if it speeds things up.
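
For reference, a session-level way to experiment with that setting; the value
is only an example:

  SET work_mem = '256MB';   -- example value, affects only the current session
  SHOW work_mem;            -- confirm the setting took effect
  -- re-run the slow aggregate query with EXPLAIN (ANALYZE, BUFFERS) here
  RESET work_mem;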

However, is there a way to keep query time constant as the database size
grows? Should I use partitioning or partial indexes?
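
For what it is worth, on the 9.x releases current at the time, partitioning
means table inheritance plus CHECK constraints. A purely hypothetical sketch
for the "Measurement" table, assuming a monthly range, might look like the
following; inserts still have to be routed by a trigger or by the application,
indexes are not inherited, and constraint_exclusion must be enabled for the
planner to skip child tables:

  CREATE TABLE "Measurement_2016_08"
    (CHECK (measurement_time >= '2016-08-01' AND measurement_time < '2016-09-01'))
    INHERITS ("Measurement");

  CREATE INDEX measurement_2016_08_time_index
    ON "Measurement_2016_08" (measurement_time);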

Best Regards,
Tommi Kaksonen