Looking at the execution plan makes me wonder what your work_mem is
set to. Try cranking it up to test and lowering random_page_cost:
set work_mem='500MB';
set random_page_cost=1.2;
explain analyze select ...
and see what you get.
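To keep those changes from leaking into the rest of the session, you can scope them to a single transaction with SET LOCAL; they revert automatically at COMMIT or ROLLBACK:

```sql
BEGIN;
SET LOCAL work_mem = '500MB';
SET LOCAL random_page_cost = 1.2;
EXPLAIN ANALYZE SELECT ...;   -- the query under test
ROLLBACK;  -- SET LOCAL changes vanish here; nothing persists
```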
--
Sent via pgsql-performance mailing list (pgsql-performance@
On Thursday, May 23, 2013 10:51 PM fburgess wrote:
> serverdb=# set enable_hashjoin=off;
> SET
> serverdb=# explain select count(*) as y0_ from SARS_ACTS this_ inner join
> SARS_ACTS_RUN tr1_ on this_.SARS_RUN_ID=tr1_.ID where tr1_.ALGORITHM='SMAT';
>
Sorry for the messy query; I'm very new to writing these complex queries.
I'll try to make it easier to read by using WITH clauses. Just to clarify,
though: WITH clauses only improve readability, not performance in any way,
right?
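For example, I was planning to rewrite it along these lines, using the table and column names from the query earlier in this thread:

```sql
WITH smat_runs AS (
    -- I've read that pre-12 PostgreSQL materializes CTEs, so this may
    -- not plan identically to the plain join
    SELECT id FROM SARS_ACTS_RUN WHERE algorithm = 'SMAT'
)
SELECT count(*) AS y0_
FROM SARS_ACTS this_
JOIN smat_runs tr1_ ON this_.sars_run_id = tr1_.id;
```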
On Thu, May 23, 2013 at 4:22 PM, james wrote:
On Thu, May 23, 2013 at 12:21 PM, wrote:
>
> But what negative impact does disabling hash joins have?
>
Doing it just for a single query could be a tool for solving particular
problems. Setting it in postgresql.conf, and therefore affecting all
queries, is like using a hammer to change the TV channel.
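If you do reach for it for one query, SET LOCAL keeps the change scoped to a single transaction:

```sql
BEGIN;
SET LOCAL enable_hashjoin = off;   -- applies inside this transaction only
SELECT count(*) AS y0_ FROM ...;   -- the problem query
COMMIT;                            -- enable_hashjoin reverts automatically
```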
On 23/05/2013 22:57, Jonathan Morra wrote:
I'm not sure I understand your proposed solution. There is also the case
to consider where the same patient can be assigned the same device multiple
times. In this case, the value may be reset at each assignment (hence the
line value - issued_value AS value from the original query).
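A sketch of what I mean, picking the last read per patient/device pair with DISTINCT ON (column names are guesses; the real schema isn't shown in this thread):

```sql
-- Assumed columns: patient_id, device_id, read_datetime, value, issued_value
SELECT DISTINCT ON (patient_id, device_id)
       patient_id,
       device_id,
       value - issued_value AS value
FROM reads
ORDER BY patient_id, device_id, read_datetime DESC;
```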
On 05/23/2013 10:57 AM, Jonathan Morra wrote:
1. The reads table is constantly inserted into; it should never be updated
or deleted.
2. I suppose I can, but that would make my insertion logic very
complicated. I cannot guarantee the order of any of this data, so I might
get reads at any time and also get assignments at any time (historical as
well).
>>This leads to the WHERE clause, WHERE read_datetime = max_read, and hence
I'm only summing the last read for each device for each patient.
Is the "reads" table insert-only? Do you have updates/deletes of the
"historical" rows?
>>3. Can I modify my tables to make this query (which is the crux of my [...]
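The "last read" pattern quoted above can be written as a self-join against a per-(patient, device) maximum, again with assumed column names since the schema isn't shown in this thread:

```sql
-- The thread's query computes value - issued_value per read.
SELECT SUM(r.value - r.issued_value) AS total
FROM reads r
JOIN (SELECT patient_id, device_id, MAX(read_datetime) AS max_read
      FROM reads
      GROUP BY patient_id, device_id) m
  ON r.patient_id    = m.patient_id
 AND r.device_id     = m.device_id
 AND r.read_datetime = m.max_read;  -- only the last read survives the join
```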
Hi Andres,
> Well. For one you haven't proven that the changed setting actually
> improves performance. So the comparison isn't really valid. We will [...]
I agree that I haven't proven this yet, but that doesn't make it
invalid. Just unproven.
I agree that performance testing is necessary ... and the [...]
Ultimately I'm going to deploy this to Heroku on a Linux machine (my tests
have so far indicated that Heroku is MUCH slower than my machine), but I
wanted to get it fast on my local machine first. I agree with your role
partitioning, however, this is only a dev machine.
For the sum vs. last, the [...]
On 05/23/2013 10:19 AM, Jonathan Morra wrote:
I am fairly new to squeezing performance out of Postgres, but I hope this
mailing list can help me. I have read the instructions found at
http://wiki.postgresql.org/wiki/Slow_Query_Questions and have tried to
abide by them the best that I can. I am running "PostgreSQL 9.1.7,
compiled by Visual C++ [...]
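Following that wiki page, here is how I've been capturing plans to post (9.1 supports the BUFFERS option):

```sql
SELECT version();                        -- include this when posting
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;   -- BUFFERS shows buffer/I-O counts alongside timings
```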
On 05/23/2013 03:47 PM, Merlin Moncure wrote:
[cut]
[..] There are two components to the Swingbench test we're running here:
the database itself, and the redo log. The redo log stores all changes that
are made to the database, which allows the database to be reconstructed in
the event of a failure.
On Thu, May 23, 2013 at 1:56 AM, Andrea Suisani wrote:
> On 05/22/2013 03:30 PM, Merlin Moncure wrote:
>>
>> On Tue, May 21, 2013 at 7:19 PM, Greg Smith wrote:
>>>
>>> On 5/20/13 6:32 PM, Merlin Moncure wrote:
>
>
> [cut]
>
>
>>> The only really huge gain to be had using SSD is commit rate at a [...]
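Related to commit rate: if a workload can tolerate losing the last few commits on a crash (there is no corruption risk, only a small durability window), asynchronous commit sidesteps per-commit fsync latency entirely:

```sql
-- Per-session; a crash can lose recently committed transactions,
-- but the database itself stays consistent.
SET synchronous_commit = off;
```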
Hello,
I am testing the performance of a PostgreSQL application using pgbench.
I am getting spikes in the results (graphs), as shown in the attached
graph, due to a throughput drop at that time.
pgbench itself performs a checkpoint on the server (where the queries are
running) before and after the test starts.
pgbench is running on [...]
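In case it matters, these are the checkpoint-related settings I'm considering, since throughput dips often coincide with checkpoints (illustrative values for a 9.x server, not tuned recommendations):

```
# postgresql.conf, 9.x era -- illustrative values only
checkpoint_segments = 32              # more WAL between checkpoints
checkpoint_completion_target = 0.9    # spread checkpoint writes over the interval
log_checkpoints = on                  # verify the dips line up with checkpoints
```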