Re: [PERFORM] Major performance problem after upgrade from 8.3 to 8.4

2010-09-14 Thread Gerhard Wiesinger

Hello Merlin,

Seems to be a feasible approach. One problem might be that when 
multiple rows are returned, they are not ordered correctly in each 
subselect. Any idea how to solve that?


e.g.
Raumsolltemperatur | Raumisttemperatur
Value from time 1  | Value from time 2
Value from time 2  | Value from time 1

but should be
Raumsolltemperatur | Raumisttemperatur
Value from time 1  | Value from time 1
Value from time 2  | Value from time 2

But that might be solvable by first selecting the keys from the log_details 
table and then joining again.
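
Maybe something like this could keep the values aligned (a rough
sketch only, assuming one value per key and log entry: resolve the
keys in one joined pass and pivot with conditional aggregates, so both
columns come from the same log row):

SELECT
  l.id,
  l.datetime,
  MAX(CASE WHEN kd.description = 'Raumsolltemperatur' THEN d.value END) AS Raumsolltemperatur,
  MAX(CASE WHEN kd.description = 'Raumisttemperatur'  THEN d.value END) AS Raumisttemperatur
FROM log l
JOIN log_details d ON d.fk_id = l.id
JOIN key_description kd ON kd.keyid = d.fk_keyid
GROUP BY l.id, l.datetime
ORDER BY l.datetime;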


I will try it in the evening; I have to think about it in detail.

But thank you for the new approach and for opening my mind :-)

Ciao,
Gerhard

--
http://www.wiesinger.com/


On Mon, 13 Sep 2010, Merlin Moncure wrote:


On Mon, Sep 13, 2010 at 2:39 AM, Gerhard Wiesinger <li...@wiesinger.com> wrote:

Hello,

Any news or ideas regarding this issue?


hm.  is retooling the query an option?  specifically, can you try converting

CREATE OR REPLACE VIEW log_entries AS
SELECT
 l.id AS id,
 l.datetime AS datetime,
 l.tdate AS tdate,
 l.ttime AS ttime,
 d1.value  AS Raumsolltemperatur,
 [...]
FROM
 log l
LEFT JOIN log_details d1 ON l.id = d1.fk_id AND
 d1.fk_keyid = (SELECT keyid FROM key_description WHERE description =
'Raumsolltemperatur')
 [...]

to

CREATE OR REPLACE VIEW log_entries AS
SELECT
 l.id AS id,
 l.datetime AS datetime,
 l.tdate AS tdate,
 l.ttime AS ttime,
 (select value from log_details ld join key_description kd on
ld.fk_keyid = kd.keyid where ld.fk_id = l.id and  description =
'Raumsolltemperatur') AS Raumsolltemperatur,
[...]

(I am not 100% sure I have my head around your query, but I think I do.)
This should get you a guaranteed (although not necessarily 'the best')
plan, with each returned view column being treated independently of
the others (is that what you want?).  Also, if schema changes are under
consideration, you can play with log_details/key_description, using a
natural key, and cut out one of the joins.  I can't speak to some of the
more complex planner issues at play, but your query absolutely screams
optimization at the SQL level.
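
for example, a sketch only (guessing at your types):

CREATE TABLE log_details
(
  fk_id       bigint NOT NULL REFERENCES log (id),
  description text   NOT NULL,  -- natural key: no key_description join
  value       double precision,
  PRIMARY KEY (fk_id, description)
);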

What I am 100% sure of is that you can get better performance if you
do a little out-of-the-box thinking here...

merlin





Re: [PERFORM] Useless sort by

2010-09-14 Thread Heikki Linnakangas

On 13/09/10 19:48, Tom Lane wrote:

Gaetano Mendola <mend...@gmail.com> writes:

Of course I'm not suggesting to take away the sort by and give the user
an unsorted result; I'm asking why the optimizer, in cases like:

select unique(a) from v_table_with_order_by;

doesn't take away the order by inside the view and put it back, rewriting
the query like this:

select unique(a) from v_table_without_order_by
order by a;

That changes the order in which the rows are fed to unique(a).  The
principal real-world use for a non-top-level ORDER BY is exactly to
determine the order in which rows are fed to a function, so we will
have a revolt on our hands if we break that.
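
For example, a minimal case of the pattern (just a sketch): the inner
ORDER BY controls the order in which rows reach the aggregate, so
removing it would change the result:

SELECT array_agg(a) FROM (SELECT a FROM t_table ORDER BY a DESC) s;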


You could check for volatile functions. I think this could be done 
safely. However, it doesn't seem worthwhile: it would be a fair amount 
of code, and it's not usually a good idea to put an ORDER BY in a view 
or subquery anyway, unless you also have volatile functions in there or 
you want to coerce the optimizer to choose a certain plan.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [PERFORM] Major performance problem after upgrade from 8.3 to 8.4

2010-09-14 Thread Merlin Moncure
On Tue, Sep 14, 2010 at 2:07 AM, Gerhard Wiesinger <li...@wiesinger.com> wrote:
 Hello Merlin,

 Seems to be a feasible approach. One problem might be that when
 multiple rows are returned, they are not ordered correctly in each
 subselect. Any idea how to solve that?

 e.g.
 Raumsolltemperatur | Raumisttemperatur
 Value from time 1  | Value from time 2
 Value from time 2  | Value from time 1

 but should be
 Raumsolltemperatur | Raumisttemperatur
 Value from time 1  | Value from time 1
 Value from time 2  | Value from time 2

 But that might be solvable by first selecting the keys from the log_details
 table and then joining again.

 I will try it in the evening; I have to think about it in detail.

 But thank you for the new approach and for opening my mind :-)

Using a subquery in that style, select (subquery), ..., is limited to
results that return 1 row, 1 column.  I assumed that was the case...if
it isn't in your view, you can always attempt arrays:

CREATE OR REPLACE VIEW log_entries AS
SELECT
 l.id AS id,
 l.datetime AS datetime,
 l.tdate AS tdate,
 l.ttime AS ttime,
 array(select value from log_details ld join key_description kd on
ld.fk_keyid = kd.keyid where ld.fk_id = l.id and  description =
'Raumsolltemperatur' order by XYZ) AS Raumsolltemperatur,
 [...]

arrays might raise the bar somewhat in terms of dealing with the
returned data, or they might work great.  some experimentation is in
order.

XYZ being the ordering condition you want.  If that isn't available
inside the join then we need to think about this some more.  We could
probably help more if you could describe the schema in a little more
detail.  This is solvable.
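
as an aside, if the array output proves awkward to consume, something
like unnest() (8.4 and up) can expand it back into rows:

SELECT id, datetime, unnest(Raumsolltemperatur) AS raumsolltemperatur
FROM log_entries;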

merlin



Re: [PERFORM] Useless sort by

2010-09-14 Thread Maciek Sakrejda
 You could check for volatile functions. I think this could be done safely.

I don't think that's enough. A UDA like last() could have an immutable
sfunc, but still be sensitive to the sort order. I think you'd need
something like a special order-sensitive aggregate definition flag.
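
For reference, a typical definition of such an aggregate (a sketch;
the sfunc is immutable, yet the result depends entirely on input
order):

-- keeps only the most recent input value it has seen
CREATE FUNCTION last_sfunc(anyelement, anyelement)
RETURNS anyelement LANGUAGE sql IMMUTABLE AS 'SELECT $2';

CREATE AGGREGATE last (anyelement) (
  SFUNC = last_sfunc,
  STYPE = anyelement
);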

---
Maciek Sakrejda | System Architect | Truviso

1065 E. Hillsdale Blvd., Suite 215
Foster City, CA 94404
(650) 242-3500 Main
www.truviso.com



[PERFORM] Held idle connections vs use of a Pooler

2010-09-14 Thread mark
Hello,

I am relatively new to postgres (just a few months), so apologies in
advance, and thanks for bearing with me.

I am trying to get a rough idea of the amount of bang for the buck I
might see if I put a connection pooling service into the environment
vs our current methodology of using persistent open connections.

We have a number of in-house applications that connect to a central
Postgres instance (8.3.7). The box is admittedly underpowered, with
only 8 cores, 8GB of RAM, and not great disk IO out of an MSA-70.
The database is about 35GB on disk and does mainly (~95%) OLTP-type
queries. I am currently begging for more RAM.

Most of the connections from the various apps hold idle connections
until they need to execute a query; once done, they go back to holding
an open idle connection.  (There are ~600 open connections at any given
time, and most of the time most are idle.)
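
For what it's worth, I count the idle ones with something like this
(on 8.3 idle sessions show up as '<IDLE>' in pg_stat_activity):

SELECT current_query = '<IDLE>' AS idle, count(*)
FROM pg_stat_activity
GROUP BY 1;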

This is typically fine while the number of active queries is low, but
some other application (that doesn't use connection pooling or hold
open connections when not in use) is hitting the db from time to time
with 50-100 small queries (2ms queries from my testing) nearly all at
once. When this happens, the whole response time goes out the door.


I think, from reading this list for a few weeks, the answer is to move
to using a connection pooling package elsewhere to better manage
incoming connections, with a lower number going to the db.

I am told this will require some re-working of some app code, as I
understand pg-pool was tried a while back in our QA environment and
several parts of various in-house apps/scripts/etc. started to
experience show-stopping problems.

To help make my case to the devs and various managers, I was wondering
if someone could expand on what extra work has to be done while queries
run and there is a high (500-600) number of open yet idle connections
to the db. Lots of the queries executed use sub-transactions, if that
makes a difference.


Basically: what am I paying extra for with that many persistent
connections that I might save if I go to the effort of getting the
in-house stuff to make use of a connection pooler?


thank you for your time.

..: mark



Re: [PERFORM] Useless sort by

2010-09-14 Thread Dave Crooke
I presume there is more usage of this view than just those 3 queries
(otherwise, for a start, there would be no need for d, e, f in the view
definition).

Why not just rewrite these 3 queries to go directly off the main table? Or,
create a different view without the sort_by in its definition?
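
e.g. something along these lines (a sketch using your names below; the
unsorted view name is made up):

CREATE OR REPLACE VIEW v_table_unsorted AS
  SELECT a, b, c, d, e, f FROM t_table;   -- no ORDER BY here

SELECT DISTINCT a FROM v_table_unsorted;  -- no useless sort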

Or, if these are used very frequently and performance is critical,
consider (a) caching these results in the application layer, with logic
to understand when they need to be updated, or (b) maintaining extra
tables that just contain (a), (a,b) and (a,b,c).

Objectively, it's always better to optimize the SQL and application level
for the specific needs of the situation before concluding that the
underlying database engine should do these optimizations automatically, and
it seems like there are a number of options you could explore here.

Cheers
Dave

On Mon, Sep 13, 2010 at 4:47 AM, Gaetano Mendola <mend...@gmail.com> wrote:

 Hi all,
 I have a view v_table defined as following:

 select a,b,c,d,e,f
 from t_table
 order by a,b,c;

 the usage pattern of this view is the following:

 select distinct(a) from v_table;
 select distinct(b) from v_table where a = XXX;
 select distinct(c) from v_table where a = XXX and b = ;

 because of that sort in the view definition the first query above
 takes not less than 3 seconds. I have solved this performance issue by
 removing the sort from the view definition and putting it in the
 select, reducing the time from >3 seconds to <150ms.

 Can the optimizer not get rid of that useless sort on those kinds
 of queries?


 Regards
 Gaetano Mendola







Re: [PERFORM] Held idle connections vs use of a Pooler

2010-09-14 Thread Joshua D. Drake
On Tue, 2010-09-14 at 10:10 -0600, mark wrote:
 Hello,
 
 I am relatively new to postgres (just a few months), so apologies in
 advance, and thanks for bearing with me.
 
 I am trying to get a rough idea of the amount of bang for the buck I
 might see if I put a connection pooling service into the environment
 vs our current methodology of using persistent open connections.

Well, what a pooler does is provide persistent open connections that
can be reused. What tech are you using for these persistent open
connections?


 Most of the connections from the various apps hold idle connections
 until they need to execute a query; once done, they go back to holding
 an open idle connection.  (There are ~600 open connections at any given
 time, and most of the time most are idle.)

Sounds like each app is holding its own pool?


 I think, from reading this list for a few weeks, the answer is to move
 to using a connection pooling package elsewhere to better manage
 incoming connections, with a lower number going to the db.

Correct, because each connection is overhead. If you have 600
connections, of which really only 20 are currently executing, that is
highly inefficient. 

A pooler would have, say, 40 connections open, with 20 currently
executing, and a max pool of 600.

 
 I am told this will require some re-working of some app code, as I
 understand pg-pool was tried a while back in our QA environment and
 several parts of various in-house apps/scripts/etc. started to
 experience show-stopping problems.

Use pgbouncer. It is what Skype uses.
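
A minimal pgbouncer.ini along those lines (numbers illustrative, paths
and database name yours to fill in):

[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 40
max_client_conn = 600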

Sincerely,

Joshua D. Drake


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt




Re: [PERFORM] Held idle connections vs use of a Pooler

2010-09-14 Thread Merlin Moncure
On Tue, Sep 14, 2010 at 12:10 PM, mark <dvlh...@gmail.com> wrote:
 Hello,

 I am relatively new to postgres (just a few months), so apologies in
 advance, and thanks for bearing with me.

 I am trying to get a rough idea of the amount of bang for the buck I
 might see if I put a connection pooling service into the environment
 vs our current methodology of using persistent open connections.

 We have a number of in-house applications that connect to a central
 Postgres instance (8.3.7). The box is admittedly underpowered, with
 only 8 cores, 8GB of RAM, and not great disk IO out of an MSA-70.
 The database is about 35GB on disk and does mainly (~95%) OLTP-type
 queries. I am currently begging for more RAM.

 Most of the connections from the various apps hold idle connections
 until they need to execute a query; once done, they go back to holding
 an open idle connection.  (There are ~600 open connections at any given
 time, and most of the time most are idle.)

This is IMO a strong justification for a connection pooler.  Certain
classes of problems will go away and you will have a more responsive
server under high load conditions.

 This is typically fine while the number of active queries is low, but
 some other application (that doesn't use connection pooling or hold
 open connections when not in use) is hitting the db from time to time
 with 50-100 small queries (2ms queries from my testing) nearly all at
 once. When this happens, the whole response time goes out the door.


 I think, from reading this list for a few weeks, the answer is to move
 to using a connection pooling package elsewhere to better manage
 incoming connections, with a lower number going to the db.

 I am told this will require some re-working of some app code, as I
 understand pg-pool was tried a while back in our QA environment and
 several parts of various in-house apps/scripts/etc. started to
 experience show-stopping problems.

What types of problems did you have?  Performance related or bugs
stemming from changes in the way your pooler runs the queries?  What
kind of session level objects (like prepared statements) do you rely
on? The answer to this question will affect the feasibility of using a
pooler, or which one you use.  pgbouncer in transaction mode is a
great choice if you can live under the restrictions -- it's almost
completely transparent.  pgpool I'm not nearly as familiar with.
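
for example, session-level state like this (hypothetical statement)
breaks in transaction mode, because consecutive transactions can land
on different server connections:

PREPARE get_row (int) AS SELECT * FROM some_table WHERE id = $1;
EXECUTE get_row(42);  -- may hit a connection that never saw the PREPARE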

 To help make my case to the devs and various managers, I was wondering
 if someone could expand on what extra work has to be done while queries
 run and there is a high (500-600) number of open yet idle connections
 to the db. Lots of the queries executed use sub-transactions, if that
 makes a difference.

General note: queries with subtransactions (savepoints or pl/pgsql
exception handlers) are much more expensive than those without.  I
would maybe try to batch work during your load spike somehow, or
arrange it so that retries are done in the app vs the database.
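
to be clear on where subtransactions come from, a sketch (hypothetical
table): every entry into a pl/pgsql block with an EXCEPTION clause
starts one, as does every SAVEPOINT issued from the app:

CREATE OR REPLACE FUNCTION upsert_kv(k int, v text) RETURNS void AS $$
BEGIN
  BEGIN  -- block has an EXCEPTION clause => runs in a subtransaction
    INSERT INTO kv (key, val) VALUES (k, v);
  EXCEPTION WHEN unique_violation THEN
    UPDATE kv SET val = v WHERE key = k;
  END;
END;
$$ LANGUAGE plpgsql;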

merlin



Re: [PERFORM] Useless sort by

2010-09-14 Thread Gaetano Mendola
On Tue, Sep 14, 2010 at 6:15 PM, Dave Crooke <dcro...@gmail.com> wrote:
 I presume there is more usage of this view than just those 3 queries
 (otherwise, for a start, there would be no need for d, e, f in the view
 definition).

 Why not just rewrite these 3 queries to go directly off the main table? Or,
 create a different view without the sort_by in its definition?

 Or, if these are used very frequently and performance is critical,
 consider (a) caching these results in the application layer, with logic
 to understand when they need to be updated, or (b) maintaining extra
 tables that just contain (a), (a,b) and (a,b,c).

 Objectively, it's always better to optimize the SQL and application level
 for the specific needs of the situation before concluding that the
 underlying database engine should do these optimizations automatically, and
 it seems like there are a number of options you could explore here.

The question here is not how to do it right, but how to make the
optimizer smarter than it is now, getting rid of work that is not
needed.

Regards
Gaetano Mendola

-- 
cpp-today.blogspot.com



Re: [PERFORM] Held idle connections vs use of a Pooler

2010-09-14 Thread Craig James

On 9/14/10 9:10 AM, mark wrote:

Hello,

I am relatively new to postgres (just a few months), so apologies in
advance, and thanks for bearing with me.

I am trying to get a rough idea of the amount of bang for the buck I
might see if I put a connection pooling service into the environment
vs our current methodology of using persistent open connections.

We have a number of in-house applications that connect to a central
Postgres instance (8.3.7). The box is admittedly underpowered, with
only 8 cores, 8GB of RAM, and not great disk IO out of an MSA-70.
The database is about 35GB on disk and does mainly (~95%) OLTP-type
queries. I am currently begging for more RAM.

Most of the connections from the various apps hold idle connections
until they need to execute a query; once done, they go back to holding
an open idle connection.  (There are ~600 open connections at any given
time, and most of the time most are idle.)

This is typically fine while the number of active queries is low, but
some other application (that doesn't use connection pooling or hold
open connections when not in use) is hitting the db from time to time
with 50-100 small queries (2ms queries from my testing) nearly all at
once. When this happens, the whole response time goes out the door.


While connection pooling may be a good answer for you, there also appears to be a 
problem/bug in 8.3.x that may be biting you.  My installation is very similar to yours 
(hundreds of idle lightweight connections, occasional heavy use by certain 
apps).  Look at this thread:

  http://archives.postgresql.org/pgsql-performance/2010-04/msg00071.php

On the server that's been upgraded to 8.4.4, we're not seeing this problem.  
But it's not in full production yet, so I can't say for sure that the CPU 
spikes are gone.

(Unfortunately, the archives.postgresql.org HTML formatting is horrible -- why 
on Earth can't it wrap lines?)

Craig




I think, from reading this list for a few weeks, the answer is to move
to using a connection pooling package elsewhere to better manage
incoming connections, with a lower number going to the db.

I am told this will require some re-working of some app code, as I
understand pg-pool was tried a while back in our QA environment and
several parts of various in-house apps/scripts/etc. started to
experience show-stopping problems.

To help make my case to the devs and various managers, I was wondering
if someone could expand on what extra work has to be done while queries
run and there is a high (500-600) number of open yet idle connections
to the db. Lots of the queries executed use sub-transactions, if that
makes a difference.


Basically: what am I paying extra for with that many persistent
connections that I might save if I go to the effort of getting the
in-house stuff to make use of a connection pooler?


thank you for your time.

..: mark






Re: [PERFORM] Major performance problem after upgrade from 8.3 to 8.4

2010-09-14 Thread Gerhard Wiesinger

On Tue, 14 Sep 2010, Merlin Moncure wrote:


On Tue, Sep 14, 2010 at 2:07 AM, Gerhard Wiesinger <li...@wiesinger.com> wrote:

Hello Merlin,

Seems to be a feasible approach. One problem might be that when
multiple rows are returned, they are not ordered correctly in each
subselect. Any idea how to solve that?

e.g.
Raumsolltemperatur | Raumisttemperatur
Value from time 1  | Value from time 2
Value from time 2  | Value from time 1

but should be
Raumsolltemperatur | Raumisttemperatur
Value from time 1  | Value from time 1
Value from time 2  | Value from time 2

But that might be solvable by first selecting the keys from the log_details
table and then joining again.

I will try it in the evening; I have to think about it in detail.

But thank you for the new approach and for opening my mind :-)


Using a subquery in that style, select (subquery), ..., is limited to
results that return 1 row, 1 column.  I assumed that was the case...if
it isn't in your view, you can always attempt arrays:

CREATE OR REPLACE VIEW log_entries AS
SELECT
l.id AS id,
l.datetime AS datetime,
l.tdate AS tdate,
l.ttime AS ttime,
array(select value from log_details ld join key_description kd on
ld.fk_keyid = kd.keyid where ld.fk_id = l.id and  description =
'Raumsolltemperatur' order by XYZ) AS Raumsolltemperatur,
[...]

arrays might raise the bar somewhat in terms of dealing with the
returned data, or they might work great.  some experimentation is in
order.

XYZ being the ordering condition you want.  If that isn't available
inside the join then we need to think about this some more.  We could
probably help more if you could describe the schema in a little more
detail.  This is solvable.


Of course, a subquery is limited to a result set returning 1 row and 1 
column. Also, order is of course preserved because of the join.


Further, I think I found a perfect query plan for the EAV pattern.

First I tried your suggestion, but there were some limitations with O(n^2) 
effort (e.g. nested loops=12586 and also index scans with loops=12586):


CREATE OR REPLACE VIEW log_entries_test AS
SELECT
  l.id AS id,
  l.datetime AS datetime,
  l.tdate AS tdate,
  l.ttime AS ttime,
  (SELECT value FROM log_details d JOIN key_description kd ON d.fk_keyid = 
kd.keyid WHERE l.id = d.fk_id AND kd.description = 'Raumsolltemperatur') AS 
Raumsolltemperatur,
  (SELECT value FROM log_details d JOIN key_description kd ON d.fk_keyid = 
kd.keyid WHERE l.id = d.fk_id AND kd.description = 'Raumtemperatur') AS 
Raumtemperatur,
  (SELECT value FROM log_details d JOIN key_description kd ON d.fk_keyid = 
kd.keyid WHERE l.id = d.fk_id AND kd.description = 'Kesselsolltemperatur') AS 
Kesselsolltemperatur,
  (SELECT value FROM log_details d JOIN key_description kd ON d.fk_keyid = 
kd.keyid WHERE l.id = d.fk_id AND kd.description = 'Kesseltemperatur') AS 
Kesseltemperatur,
  [...]
FROM
  log l
;

--
EXPLAIN ANALYZE SELECT * FROM log_entries_test WHERE datetime > now() - 
INTERVAL '10 days' ORDER BY datetime DESC;
--
Index Scan Backward using i_log_unique on log l  (cost=0.00..140820.12 rows=69 width=32) (actual time=2.848..22812.331 rows=12586 loops=1)
  Index Cond: (datetime > (now() - '10 days'::interval))
  SubPlan 1
    ->  Nested Loop  (cost=0.00..19.99 rows=1 width=8) (actual time=0.007..0.018 rows=1 loops=12586)
          ->  Seq Scan on key_description kd  (cost=0.00..2.38 rows=1 width=8) (actual time=0.003..0.013 rows=1 loops=12586)
                Filter: ((description)::text = 'Raumsolltemperatur'::text)
          ->  Index Scan using unique_key_and_id on log_details d  (cost=0.00..17.60 rows=1 width=16) (actual time=0.004..0.004 rows=1 loops=12586)
                Index Cond: (($0 = d.fk_id) AND (d.fk_keyid = kd.keyid))
  SubPlan 2
    ->  Nested Loop  (cost=0.00..19.99 rows=1 width=8) (actual time=0.006..0.017 rows=1 loops=12586)
          ->  Seq Scan on key_description kd  (cost=0.00..2.38 rows=1 width=8) (actual time=0.003..0.013 rows=1 loops=12586)
                Filter: ((description)::text = 'Raumtemperatur'::text)
          ->  Index Scan using unique_key_and_id on log_details d  (cost=0.00..17.60 rows=1 width=16) (actual time=0.002..0.003 rows=1 loops=12586)
                Index Cond: (($0 = d.fk_id) AND (d.fk_keyid = kd.keyid))
  SubPlan 3
    ->  Nested Loop  (cost=0.00..19.99 rows=1 width=8) (actual time=0.005..0.017 rows=1 loops=12586)
          ->  Seq Scan on key_description kd  (cost=0.00..2.38 rows=1 width=8) (actual time=0.002..0.013 rows=1 loops=12586)
                Filter: ((description)::text = 'Kesselsolltemperatur'::text)
          ->  

Re: [PERFORM] Major performance problem after upgrade from 8.3 to 8.4

2010-09-14 Thread Merlin Moncure
On Tue, Sep 14, 2010 at 3:59 PM, Gerhard Wiesinger <li...@wiesinger.com> wrote:
 On Tue, 14 Sep 2010, Merlin Moncure wrote:

 On Tue, Sep 14, 2010 at 2:07 AM, Gerhard Wiesinger <li...@wiesinger.com>
 wrote:

 Hello Merlin,

 Seems to be a feasible approach. One problem might be that when
 multiple rows are returned, they are not ordered correctly in each
 subselect. Any idea how to solve that?

 e.g.
 Raumsolltemperatur | Raumisttemperatur
 Value from time 1  | Value from time 2
 Value from time 2  | Value from time 1

 but should be
 Raumsolltemperatur | Raumisttemperatur
 Value from time 1  | Value from time 1
 Value from time 2  | Value from time 2

 But that might be solvable by first selecting the keys from the log_details
 table and then joining again.

 I will try it in the evening; I have to think about it in detail.

 But thank you for the new approach and for opening my mind :-)

 Using a subquery in that style, select (subquery), ..., is limited to
 results that return 1 row, 1 column.  I assumed that was the case...if
 it isn't in your view, you can always attempt arrays:

 CREATE OR REPLACE VIEW log_entries AS
 SELECT
 l.id AS id,
 l.datetime AS datetime,
 l.tdate AS tdate,
 l.ttime AS ttime,
 array(select value from log_details ld join key_description kd on
 ld.fk_keyid = kd.keyid where ld.fk_id = l.id and  description =
 'Raumsolltemperatur' order by XYZ) AS Raumsolltemperatur,
 [...]

 arrays might raise the bar somewhat in terms of dealing with the
 returned data, or they might work great.  some experimentation is in
 order.

 XYZ being the ordering condition you want.  If that isn't available
 inside the join then we need to think about this some more.  We could
 probably help more if you could describe the schema in a little more
 detail.  This is solvable.

 Of course, a subquery is limited to a result set returning 1 row and 1
 column. Also, order is of course preserved because of the join.

 Further, I think I found a perfect query plan for the EAV pattern.

 First I tried your suggestion, but there were some limitations with O(n^2)
 effort (e.g. nested loops=12586 and also index scans with loops=12586):

 CREATE OR REPLACE VIEW log_entries_test AS
 SELECT
  l.id AS id,
  l.datetime AS datetime,
  l.tdate AS tdate,
  l.ttime AS ttime,
  (SELECT value FROM log_details d JOIN key_description kd ON d.fk_keyid =
 kd.keyid WHERE l.id = d.fk_id AND kd.description = 'Raumsolltemperatur') AS
 Raumsolltemperatur,
  (SELECT value FROM log_details d JOIN key_description kd ON d.fk_keyid =
 kd.keyid WHERE l.id = d.fk_id AND kd.description = 'Raumtemperatur') AS
 Raumtemperatur,
  (SELECT value FROM log_details d JOIN key_description kd ON d.fk_keyid =
 kd.keyid WHERE l.id = d.fk_id AND kd.description = 'Kesselsolltemperatur')
 AS Kesselsolltemperatur,
  (SELECT value FROM log_details d JOIN key_description kd ON d.fk_keyid =
 kd.keyid WHERE l.id = d.fk_id AND kd.description = 'Kesseltemperatur') AS
 Kesseltemperatur,
  [...]
 FROM
  log l
 ;

 --
 EXPLAIN ANALYZE SELECT * FROM log_entries_test WHERE datetime > now() -
 INTERVAL '10 days' ORDER BY datetime DESC;
 --
 Index Scan Backward using i_log_unique on log l  (cost=0.00..140820.12 rows=69 width=32) (actual time=2.848..22812.331 rows=12586 loops=1)
   Index Cond: (datetime > (now() - '10 days'::interval))
   SubPlan 1
     ->  Nested Loop  (cost=0.00..19.99 rows=1 width=8) (actual time=0.007..0.018 rows=1 loops=12586)
           ->  Seq Scan on key_description kd  (cost=0.00..2.38 rows=1 width=8) (actual time=0.003..0.013 rows=1 loops=12586)
                 Filter: ((description)::text = 'Raumsolltemperatur'::text)
           ->  Index Scan using unique_key_and_id on log_details d  (cost=0.00..17.60 rows=1 width=16) (actual time=0.004..0.004 rows=1 loops=12586)
                 Index Cond: (($0 = d.fk_id) AND (d.fk_keyid = kd.keyid))
   SubPlan 2
     ->  Nested Loop  (cost=0.00..19.99 rows=1 width=8) (actual time=0.006..0.017 rows=1 loops=12586)
           ->  Seq Scan on key_description kd  (cost=0.00..2.38 rows=1 width=8) (actual time=0.003..0.013 rows=1 loops=12586)
                 Filter: ((description)::text = 'Raumtemperatur'::text)
           ->  Index Scan using unique_key_and_id on log_details d  (cost=0.00..17.60 rows=1 width=16) (actual time=0.002..0.003 rows=1 loops=12586)
                 Index Cond: (($0 = d.fk_id) AND (d.fk_keyid = kd.keyid))
   SubPlan 3
     ->  Nested Loop  (cost=0.00..19.99 rows=1 width=8) (actual time=0.005..0.017 rows=1 loops=12586)
           ->  Seq Scan on key_description kd  (cost=0.00..2.38 rows=1 width=8) (actual