[PERFORM] bad performance

2016-12-17 Thread Gabliver Faluker
Hey All,

I am not a PG expert. I like PG, but I am puzzled as to what I should do.

I have a 4-core, 5 GB VM running a 500 MB database (it should fit in RAM
easily), and I am facing slow queries.

Here is the view that I have:
 SELECT o.id,
        cc.name AS "from",
        o.phone,
        c.name AS "to",
        parcel_info.value::character varying(64) AS email,
        o.barcode AS barcode_global,
        o.barcode_alt AS barcode,
        uu.name AS destination,
        cur.name AS source,
        o.code,
        tr.context AS status,
        tr.gener AS last_update,
        rc.value::character varying(254) AS refcode,
        o.type,
        slot_inf.title AS size
   FROM data.orders o
   LEFT JOIN data.clients c  ON c.id = o.client_id
   LEFT JOIN data.users u    ON u.id = o.user_id
   LEFT JOIN data.clients cc ON cc.id = u.client_id
   LEFT JOIN data.users uu   ON o.destin = uu.id
   LEFT JOIN ( SELECT DISTINCT ON (ccsend.order_id) ccsend.order_id,
                      cu.name
                 FROM data.ccsend
                 LEFT JOIN data.users cu ON cu.id = ccsend.source_id) cur
          ON cur.order_id = o.id
   LEFT JOIN ( SELECT DISTINCT ON (track.order_id) track.order_id,
                      co.context,
                      track.gener
                 FROM data.track
                 LEFT JOIN data.contexts co ON co.id = track.context
                ORDER BY track.order_id, track.id DESC) tr
          ON tr.order_id = o.id
   LEFT JOIN ( SELECT oi.order_id,
                      oi.key,
                      oi.value
                 FROM data.orders_info oi
                WHERE oi.key::text = 'email'::text) parcel_info
          ON parcel_info.order_id = o.id
   LEFT JOIN ( SELECT orders_info.order_id,
                      orders_info.value
                 FROM data.orders_info
                WHERE orders_info.key::text = 'refcode'::text) rc
          ON rc.order_id = o.id
   LEFT JOIN data.slot_inf ON o.size = slot_inf.id;

And the explain: https://explain.depesz.com/s/0LTn

It runs for ~5 seconds.

Can anyone suggest anything on this?

tx,
Gabliver


Re: [PERFORM] bad performance

2016-12-17 Thread Tom Lane
Gabliver Faluker  writes:
> It runs for ~5 seconds.

I'm a little skeptical that a 12-way join producing 340K rows
and executing in 5 seconds should be considered "bad performance".

It looks like it'd help some if you increased work_mem enough to let
both sorts happen in-memory rather than externally.  But really, this
is going to take a while no matter what.  Do you really need all 340K
rows of the result?  Can you improve your data representation so that
you don't need to join quite so many tables to get the answer, and
(probably even more importantly) so that you don't need to use
SELECT DISTINCT?  The sort/unique steps needed to do DISTINCT are
eating a large part of the runtime, and they also form an optimization
fence IIRC.
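
A minimal sketch of both ideas, assuming the view is named parcel_view (the
thread never gives its name) and that data.track.id increases over time; the
work_mem value is only a guess and should be sized from the "external merge
Disk: ..." figures in EXPLAIN ANALYZE:

  -- Raise work_mem for this session only, then re-check the plan to see
  -- whether the sorts now report "Sort Method: quicksort  Memory: ...".
  SET work_mem = '64MB';                                 -- assumed value
  EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM parcel_view;  -- hypothetical view name

  -- One way to drop the DISTINCT ON for "latest track row per order":
  -- an index on (order_id, id DESC) plus a LATERAL subquery that fetches
  -- only the newest row.  This fragment would replace the "tr" subquery
  -- in the view definition.
  CREATE INDEX IF NOT EXISTS track_order_latest_idx
      ON data.track (order_id, id DESC);

  LEFT JOIN LATERAL (
      SELECT co.context, t.gener
        FROM data.track t
        LEFT JOIN data.contexts co ON co.id = t.context
       WHERE t.order_id = o.id
       ORDER BY t.id DESC
       LIMIT 1
  ) tr ON true

Whether the LATERAL form actually wins for the full 340K-row result depends
on the data; it mostly pays off once the query is filtered down to a handful
of orders.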

regards, tom lane

