Good catch, Jeff.
As for which version: we always recommend the latest, currently 42.1.4.
Dave Cramer
da...@postgresintl.com
www.postgresintl.com
On 29 September 2017 at 06:44, Subramaniam C wrote:
Yes, you are right: the timestamp the application was providing was in
seconds, whereas the query that used the index had the timestamp in
milliseconds. That is why the query was slow when run from the application.
On Fri, Sep 29, 2017 at 12:19 PM, Jeff Janes wrote:
On Thu, Sep 28, 2017 at 2:59 AM, Subramaniam C wrote:
> The first output shows the query executed from the sql command line; the
> second shows it executed from the application. As per the output it is
> clear that when the query is executed through JDBC it is not using the
> index…
>>>> Index Scan using object_table_pkey on object_table
>>>>     (cost=0.42..56727.00 rows=50 width=64)
>>>>   Index Cond: (("timestamp" >= '0'::bigint) AND ("timestamp" <= '15059…
> … (cost=367431.07..373153.32 rows=55526 width=16)
>   ->  Unique  (cost=367431.07..372459.24 rows=55526 width=24)
>         ->  Sort  (cost=367431.07..369945.16 rows=1005634 width=24)
>               Sort Key: health_timeseries_table.mobid DESC,
>                         health_timeseries_table."timestamp"
>               ->  Seq Scan on health_timeseries_table
>                       (cost=0.00..267171.00 rows=1005634 width=24)
>                     Filter: (("timestamp" >= '150598950'::bigint) AND
>                              ("timestamp" <= '150599040'::bigint))
On Thu, Sep 28, 2017 at 2:56 PM, Pavy Philippe wrote:
> https://www.postgresql.org/docs/current/static/auto-explain.html
On Thu, Sep 28, 2017 at 10:58 AM, Subramaniam C wrote:
> I configured cursor_tuple_fraction to 1 but still I am facing the same
> issue.
Can you show explain (analyze, buffers) of the query when run from psql
and when run from the application? (You can use auto_explain for that if
needed; see https://www.postgresql.org/docs/current/static/auto-explain.html)
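For the psql side, a minimal sketch of enabling auto_explain in the current
session (requires superuser; for a JDBC application the module can instead be
preloaded via session_preload_libraries in postgresql.conf):

    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = 0;  -- log the plan of every statement
    SET auto_explain.log_analyze = true;    -- include actual times and row counts
    SET auto_explain.log_buffers = true;    -- include buffer usage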
I configured cursor_tuple_fraction to 1, but I am still facing the same issue.
Please help.
On Thu, Sep 28, 2017 at 10:19 AM, Subramaniam C wrote:
> Hi
>
> When I try to execute the query from the sql command line it takes only
> around 1 sec, but when I execute the same query using JDBC (Java) with a
> preparedStatement it takes around 10 secs.
>
> Can you ple…
On Tue, Dec 27, 2016 at 3:50 PM, Flávio Henrique wrote:
> Hi there, fellow experts!
>
> I need advice on a query that became slower after the 9.3 to 9.6 migration.
>
> First of all, I'm from the dev team.
>
> Before the migration, we (programmers) made some modifications to the
> query, bringing its average t…
On Thu, Jan 5, 2017 at 9:51 AM, Daniel Blanch Bataller wrote:
> If just recreating the index makes it be used again, it might mean that the
> index was bloated, that is, it grew so big that a seq scan was cheaper.
>
> I've seen another case recently where postgres 9.6 wasn't using the right
> index in…
> …different values in FIELD.FIELD_NAME
>
> Gerardo
Dave -
I re-ran ANALYZE and VACUUM on all the tables and that did not fix the issue
(the query still took a long time; I did not let it finish to produce a full
EXPLAIN plan). However, after creating an index on FIELD(FIELD_NAME, UNIT_ID),
the query now runs very fast.
> Hi Eskil -
>
> I believe the id-field you're referring to is UNIT.UNIT_ID. I could change
> it to a varchar, but that column is not used in the query in question, so
> that wouldn't have any effect on the query's performance.
Sorry, I did not notice that the column "unit_id…
Hi Dave -
Thank you very much for all this advice! I will try each of these and post
back results (some of this, like creating the index, which is happening now,
takes a very long time).
Thanks again for all these pointers.
Cheers,
Alessandro
On Wed, Apr 26, 2017 at 12:12 AM, David Rowley wrote:
Hi Eskil -
I believe the id-field you're referring to is UNIT.UNIT_ID. I could change it
to a varchar, but that column is not used in the query in question, so that
wouldn't have any effect on the query's performance.
Just out of curiosity, I have changed the ANSWER.ANS datatype to a…
On Tue, 2017-04-25 at 23:19 -0400, Alessandro Ferrucci wrote:
> After about 40 minutes the slow query finally finished and the result
> of the EXPLAIN plan can be found here:
>
> https://explain.depesz.com/s/BX22
>
> Thanks,
> Alessandro Ferrucci
I'm not so familiar with the index implement…
On 26 April 2017 at 15:19, Alessandro Ferrucci wrote:
> After about 40 minutes the slow query finally finished and the result of
> the EXPLAIN plan can be found here:
>
> https://explain.depesz.com/s/BX22
> Index Scan using field_unit_id_idx on field (cost=0.00..8746678.52
> rows=850149 width=8)
After about 40 minutes the slow query finally finished and the result of the
EXPLAIN plan can be found here:
https://explain.depesz.com/s/BX22
Thanks,
Alessandro Ferrucci
On Tue, Apr 25, 2017 at 11:10 PM, Alessandro Ferrucci
<alessandroferru...@gmail.com> wrote:
> Hello - I am migrating a curre…
Thank you for the reply. I had been trying to find that option for a while now.
On Fri, Jan 6, 2017 at 12:51 PM, Michael Paquier wrote:
On Fri, Jan 6, 2017 at 6:14 AM, Filipe Oliveira wrote:
> Can you remove me from your mailing list?
There is an unsubscribe action here:
https://www.postgresql.org/community/lists/subscribe/
--
Michael
Can you remove me from your mailing list?
Thanks.
On Thu, Jan 5, 2017 at 10:51 AM, Flávio Henrique wrote:
> @Merlin Moncure
>> Big gains (if any) are likely due to indexing strategy.
>> I do see some suspicious casting, for example:
>> Join Filter: ((four_charlie.delta_tango)::integer =
>> (six_quebec.golf_bravo)::integer)
>> Are you casting i…
Hi,
If just recreating the index makes it be used again, it might mean the index
was bloated, that is, it grew so big that a seq scan was cheaper.
I've seen another case recently where postgres 9.6 wasn't using the right
index in a query; I was able to reproduce the issue by crafting a bigger
index,…
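A sketch of rebuilding a bloated index without blocking writes (index, table
and column names here are hypothetical):

    CREATE INDEX CONCURRENTLY my_idx_new ON my_table (my_column);
    DROP INDEX CONCURRENTLY my_idx;
    ALTER INDEX my_idx_new RENAME TO my_idx;

On PostgreSQL 12 or later this is a single command:
REINDEX INDEX CONCURRENTLY my_idx;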
On Thu, Jan 5, 2017 at 10:51 AM, Flávio Henrique wrote:
> Replying to your comment, I think they tuned the server:
> effective_cache_size = 196GB
> shared_buffers = 24GB (shouldn't this be higher?)
Probably not, although it may be a good idea to try settings either side of
that (say, 16GB and 32GB…
Hi all!
Sorry for the delay (holidays).
Well, the most expensive sequential scan was solved.
I asked the db team to drop the index and recreate it and guess what: now
postgresql is using it and the time dropped.
(Thank you, @Gerardo Herzig!)
I think there's still room for improvement, but the problem…
On Tue, Dec 27, 2016 at 5:50 PM, Flávio Henrique wrote:
> I can see some buffers written, which tells me that something is wrong.
Try running VACUUM FREEZE ANALYZE on all tables involved in the query (or
just run it as a superuser on the whole database). Do *not* use the FULL
option. Among oth…
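As suggested above, a minimal sketch (the table name is hypothetical, reusing
one of the thread's obfuscated names):

    VACUUM FREEZE ANALYZE four_charlie;   -- one table involved in the query
    -- or, connected as a superuser, database-wide:
    VACUUM FREEZE ANALYZE;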
The biggest impact on performance you can achieve is by using a materialized
view. If it's as heavily used as you said, even 2-3 seconds in a multiuser
OLTP environment is still unacceptable from my point of view. I don't know if
this is the case, but if you have 1000 users connecting at 8 am all a…
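A minimal sketch of the materialized-view approach (view name and query are
hypothetical; REFRESH ... CONCURRENTLY needs PostgreSQL 9.4+ and a unique
index):

    CREATE MATERIALIZED VIEW report_mv AS
        SELECT delta_tango, count(*) AS total
        FROM four_charlie
        GROUP BY delta_tango;

    CREATE UNIQUE INDEX ON report_mv (delta_tango);

    -- Re-run periodically; CONCURRENTLY avoids blocking readers:
    REFRESH MATERIALIZED VIEW CONCURRENTLY report_mv;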
> Hi there, fellow experts!
> [...]
> Before migration, we (programmers) made some modifications to the query,
> bringing its average time from 8s to 2-3s.
>
> As this que…
Tom,
Thank you for a thorough answer. We'll try the 2-column index.
Regards,
Andrey Povazhnyi
> On Dec 6, 2016, at 6:33 PM, Tom Lane wrote:
> [...]
Andrey Povazhnyi writes:
> We've got strange planner behavior on a query against one of our bigger
> tables after we upgraded to postgres 9.6.1 recently.
The basic problem with this query is that there are no good alternatives.
The planner believes there are about 53K rows matching the WHERE condit…
On Sat, Aug 27, 2016 at 18:33 GMT+03:00, Jeff Janes wrote:
> Partitioning the Feature and Point tables on measurement_time (or
> measurement_start_time; you are not consistent about what it is called)
> might be helpful. However, measurement_time does not exist in those tables,
> so you would first…
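A sketch of the partitioning idea using declarative partitioning (PostgreSQL
10+; on the 9.x versions current in this thread the same layout is built with
inheritance and CHECK constraints; table and column names are assumed):

    CREATE TABLE point (
        id               bigint NOT NULL,
        measurement_time timestamptz NOT NULL
    ) PARTITION BY RANGE (measurement_time);

    CREATE TABLE point_2016_08 PARTITION OF point
        FOR VALUES FROM ('2016-08-01') TO ('2016-09-01');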
On Fri, Aug 26, 2016 at 6:17 AM, Tommi K wrote:
> Hello,
> thanks for the response. I did not get the response to my email even
> though I am subscribed to the pgsql-performance mail list. Let's hope that
> I get the next one :)
>
> Increasing work_mem did not have a great impact on the performance…
On Sat, Aug 27, 2016 at 7:13 AM, Craig James wrote:
> On Fri, Aug 26, 2016 at 9:11 PM, Jim Nasby wrote:
>> On 8/26/16 3:26 PM, Mike Sofen wrote:
>>> Is there a way to keep query time constant as the database size grows?
>> No. More data == more time. Unless you find a way to break th…
Craig James writes:
> Straight hash-table indexes (which Postgres doesn't use) have O(1) access
> time. The amount of data has no effect on the access time.
This is wishful thinking --- once you have enough data, O(1) goes out the
window. For example, a hash index is certainly not going to conti
On Fri, Aug 26, 2016 at 9:11 PM, Jim Nasby wrote:
> On 8/26/16 3:26 PM, Mike Sofen wrote:
>> Is there a way to keep query time constant as the database size grows?
> No. More data == more time. Unless you find a way to break the laws of
> physics.
Straight hash-table indexes (which Postgr…
On 8/26/16 3:26 PM, Mike Sofen wrote:
> Is there a way to keep query time constant as the database size grows?
No. More data == more time. Unless you find a way to break the laws of
physics.
> Should I use partitioning or partial indexes?
Neither technique is a magic bullet. I doubt either woul…
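For completeness, a sketch of the partial-index option mentioned above (table,
column and predicate are hypothetical; it only helps queries that repeat the
same predicate):

    CREATE INDEX point_recent_idx ON point (measurement_time)
        WHERE measurement_time >= '2016-01-01';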
Ok, sorry that I did not add the original message. I thought that it would
be automatically added to the message thread.
Here is the question again:
Is there a way to keep query time constant as the database size grows? Should
I use partitioning or partial indexes?
Thanks,
Tommi Kaksonen
On Fri, Aug 26, 2016 at 6:17 AM, Tommi K wrote:
> Hello,
> thanks for the response. [...]
Please include the email you are replying to when you respond. It…
Hello,
thanks for the response. I did not get the response to my email even though
I am subscribed to the pgsql-performance mail list. Let's hope that I get
the next one :)
Increasing work_mem did not have a great impact on the performance. But I
will try to update the PostgreSQL version to see if i…
Tommi Kaksonen wrote:
> ---Version---
> PostgreSQL 9.2.1, compiled by Visual C++ build 1600, 64-bit
The current point release for 9.2 is 9.2.18; you are some years behind.
The plan seems okay to me, apart from the on-disk sort: increase work_mem
to avoid that.
If I were you I would switch to PG…
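A per-session sketch of the work_mem advice above: raise it until EXPLAIN
ANALYZE reports an in-memory sort rather than an on-disk one.

    SET work_mem = '64MB';  -- value is illustrative; size it to the sort
    -- EXPLAIN ANALYZE should now show "Sort Method: quicksort"
    -- instead of "Sort Method: external merge  Disk: ..."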
> I ask your help to solve a slow query which is taking more than 14 seconds
> to be executed. Maybe I am asking too much both from you and especially
> from postgresql, as it is really huge, involving 16 tables.
>
> Explain:
> http://explain.depesz.com/s/XII9
>
> Schema:
> http://adj.com.br/erp/data_…
These are the queries I used to get the execution planner to use the index
scan instead of the sequential scan:
IF NOT EXISTS (SELECT 1 FROM custom_data WHERE key = old.key) => sequential scan
IF NOT EXISTS (SELECT 1 FROM custom_data WHERE key = old.key LIMIT 1) => sequential scan
IF NOT EXISTS (SEL…
Guido Niewerth writes:
> And this is the execution plan. It looks like it does a slow sequential
> scan where it's able to do an index scan:
> 2015-11-02 17:42:10 CET LOG: duration: 5195.673 ms plan:
> Query Text: SELECT NOT EXISTS( SELECT 1 FROM custom_data
> WHERE key = old.…
I needed to set up the trigger function again, so here it is:
CREATE OR REPLACE FUNCTION public.fn_trigger_test ()
RETURNS trigger AS
$body$
DECLARE
  start TIMESTAMP;
BEGIN
  start := timeofday();
  IF TG_OP = 'UPDATE' THEN
    IF NOT EXISTS( SELECT key FROM custom_data WHERE ke…
Guido Niewerth writes:
> As you can see there's a huge runtime difference between the select query
> used in the trigger function and the one run from the SQL editor.
contrib/auto_explain might be useful in seeing what's going on; in
particular it would tell us whether or not a different plan is…
OK.
If you benchmarked that correctly, then it seems that adding some redundant
search conditions can make the query faster in some special cases. Without
further info, I cannot see any reason.
2015-08-14 14:35 GMT+09:00 Robert Campbell:
> Hi,
>
> My mistake, didn't apply the subquery properly the firs…
Is the "Vacancy"."ID" a primary key?
Or is unique in Vacancy table?
Hi,
Doing this returns 0 records
On Thu, Aug 13, 2015 at 7:22 PM, 林士博 [via PostgreSQL] <
ml-node+s1045698n5862008...@n5.nabble.com> wrote:
> In the 'not exists' clause, you do not have to search table "Vacancy as
> v" again.
> I think it would be faster to use the outer Vacancy table, as below.
In the 'not exists' clause, you do not have to search table "Vacancy as v"
again.
I think it would be faster to use the outer Vacancy table, as below.
In your case, that does the same work.
NOT EXISTS (
    SELECT 1
    FROM "CategoryOption_TableRow" "ct126"
    WHERE "Vacancy"."Template…
Hi Vik,
Thanks for your feedback, very helpful.
I modified your query slightly; this returns all vacancy templates and all
level-1 vacancies which aren't templates, and does so in about 800-900ms
less, a great improvement on the original query.
SELECT "Vacancy"."ID",
"Vacancy"."JobTi…
On 08/12/2015 04:34 AM, robbyc wrote:
> Hi,
>
> I am new to optimizing queries and I'm getting a slow running time
> (~1.5 secs) with the following SQL:
Before mucking about with work_mem and indexes, the first thing to do is
rewrite this query correctly. Here are just some of the things wrong wi…
On Wed, Aug 12, 2015 at 3:29 PM, robbyc wrote:
> Hi Venkata,
>
> work_mem was set to 72MB, increased to 144MB, no change.
Increasing work_mem depends on various other factors like table size (the
amount of data being sorted), available memory, etc.
> Added an index of type varchar_pattern_ops to…
Hi Venkata,
work_mem was set to 72MB, increased to 144MB, no change.
Added an index of type varchar_pattern_ops on Vacancy.JobTitle; this did not
help either.
On Wed, Aug 12, 2015 at 2:09 PM, Venkata Balaji N [via PostgreSQL]
<ml-node+s1045698n5861839...@n5.nabble.com> wrote:
> On Wed, Aug 12,…
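For reference, a sketch of the varchar_pattern_ops index mentioned above (the
index name is hypothetical; this opclass helps LIKE 'abc%' searches, not
plain equality):

    CREATE INDEX vacancy_jobtitle_pattern_idx
        ON "Vacancy" ("JobTitle" varchar_pattern_ops);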
On Wed, Aug 12, 2015 at 12:34 PM, robbyc wrote:
> Hi,
>
> I am new to optimizing queries and I'm getting a slow running time
> (~1.5 secs) with the following SQL:
>
> SELECT distinct("Vacancy"."ID"), "Vacancy"."JobTitle",
> "Vacancy"."DateCreated", "Vacancy"."CustomAccess",
> "Department"."Name"…
On 24/06/15 09:05, Jim Nasby wrote:
> On 6/19/15 9:57 AM, Ian Pushee wrote:
>> On 6/19/2015 10:47 AM, Andreas Kretschmer wrote:
Explain Analyze outputs (links as requested):
Default plan: http://explain.depesz.com/s/ib3k
Forced index (random_page_cost=1.0): http://explain.depes…
On 6/19/2015 10:47 AM, Andreas Kretschmer wrote:
Explain Analyze outputs (links as requested):
Default plan: http://explain.depesz.com/s/ib3k
Forced index (random_page_cost=1.0): http://explain.depesz.com/s/lYaP
Software/Hardware: PGSql 9.2.1, Windows 8.1, 8GB RAM
All pgsql settings are at their defaults.
On 6/19/2015 10:46 AM, Igor Neyman wrote:
Probably the events_confidnce index is not very selective, which is why the
optimizer prefers a seq scan.
I'd try to create an index on (name, eventspy_id, camera_id, type, status).
Also, the most recent 9.2 release is 9.2.13; you should upgrade.
Regards,
Igor Neyman
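A sketch of the composite index suggested above (the table name "events" is
an assumption):

    CREATE INDEX events_composite_idx
        ON events (name, eventspy_id, camera_id, type, status);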
> Explain Analyze outputs (links as requested):
> Default plan: http://explain.depesz.com/s/ib3k
> Forced index (random_page_cost=1.0): http://explain.depesz.com/s/lYaP
>
> Software/Hardware: PGSql 9.2.1, Windows 8.1, 8GB RAM
> All pgsql settings are at their defaults.
Increase work_mem per session…
On Thu, Jun 11, 2015 at 7:18 PM, Sasa Vilic wrote:
> Hi,
>
> I have a query that takes ridiculously long to complete (over 500ms), but
> if I disable nested loops it is really fast (24.5ms).
>
> Here are links for
> * first request (everything enabled): http://explain.depesz.com/s/Q1M
> * second…
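A per-session sketch of the experiment described above (the query itself is a
stand-in; substitute the real one):

    SET enable_nestloop = off;            -- experiment only, never globally
    EXPLAIN (ANALYZE, BUFFERS) SELECT 1;  -- run the actual slow query here
    RESET enable_nestloop;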
On 10 June 2015 at 16:50, Tomas Vondra wrote:
>
> The problematic piece of the explain plan is this:
>
> ->  Merge Join  (cost=4384310.92..21202716.78 rows=6664163593 width=390)
>       Output: a.ut, c.gt, b.go, b.gn, d.au
>       Merge Cond: ((c.ut)::text = (d.rart_id)…
On 06/10/15 15:42, Johann Spies wrote:
On 10 June 2015 at 15:02, Claudio Freire <klaussfre...@gmail.com> wrote:
The joins are different in both versions, and the most likely culprit is
the join against D. It's probably wrong, and the first query is building a
cartesian prod…
On 10 June 2015 at 15:02, Claudio Freire wrote:
>
> The joins are different in both versions, and the most likely culprit
> is the join against D. It's probably wrong, and the first query is
> building a cartesian product.
>
> Without more information about the schema it's difficult to be sure th…
On Wed, Jun 10, 2015 at 9:39 AM, Johann Spies wrote:
> COPY
>   (SELECT A.ut,
>           B.go AS funding_org,
>           B.gn AS grant_no,
>           C.gt AS thanks,
>           D.au
>      FROM isi.funding_text C,
>           isi.rauthor D,
>           isi.africa_uts A
>      LEFT JOIN isi.funding_org…
Sathish Nelson writes:
> I am connecting three tables in a query. One table has 73000 records,
> the other two tables have 138000 records,
> but it takes 12 sec to show 12402 rows from those tables.
Increasing work_mem would make those sort steps faster ...
regards, tom lane
On 11/02/15 07:41, Sathish Nelson wrote:
> I am connecting three tables in a query. One table has 73000 records,
> the other two tables have 138000 records,
> but it takes 12 sec to show 12402 rows from those tables.
You need more work_mem. The query plan shows multiple external sort nodes.
Otherwise, th…
Ross Elliott-2 wrote
> Maybe someone can explain this. The following SQL will reproduce our
> issue:
> DROP TABLE IF EXISTS t1 CASCADE;
> CREATE TABLE t1 (name text,
>                 state text);
> CREATE INDEX t1_name ON t1(name);
> CREATE INDEX t1_state ON t1(state);
> CREATE INDEX t1_name_stat…
>
> No, it doesn't. Read it again ... or read up on row comparisons,
> if you're unfamiliar with that notation. The above queries are
> exactly equivalent per spec.
>
Wow, this is great. Thanks.
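For reference, a sketch of the row-comparison form being discussed, using the
names from the query quoted below; per spec it is equivalent to the OR form
and lets the planner use a single index scan on (effective_on, id):

    SELECT *
    FROM register_uz_accounting_entities
    WHERE (effective_on, id) > (DATE '2014-07-11', 1459)
    ORDER BY effective_on, id
    LIMIT 100;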
johno writes:
> On Tue, Jul 22, 2014 at 4:53 AM, Tom Lane wrote:
>> johno writes:
>>> The obvious query is
>>> SELECT * FROM register_uz_accounting_entities
>>> WHERE effective_on > '2014-07-11'
>>>    OR (effective_on = '2014-07-11' AND id > 1459)
>>> ORDER BY effective_on, id
>>> LIMIT 100
>> A…
On Tue, Jul 22, 2014 at 4:53 AM, Tom Lane wrote:
> johno writes:
>> I am trying to optimize a simple query that returns the first 100 rows
>> that have been updated since a given timestamp (ordered by timestamp and
>> id desc). If there are several rows with the same timestamp I need a
>> se…
johno writes:
> I am trying to optimize a simple query that returns the first 100 rows
> that have been updated since a given timestamp (ordered by timestamp and
> id desc). If there are several rows with the same timestamp I need a
> second condition, stating that I want to return rows having…
Bikram Kesari Naik wrote
> Hi,
>
> I have a view which joins multiple tables to give me a result. It takes
> more than a minute to give me the result at the psql prompt when I select
> all the data from that view.
> The single CPU used to run this query is utilized 100%. Even if I fire a c…
RHEL 5.10 kernel 2.6.18
On Mon, Jan 27, 2014, at 11:43, Gavin Flower wrote:
> On 28/01/14 08:10, bobJobS wrote:
>> My developers have had the same issue.
>>
>> Postgres 9.2.3 on Linux 5.6.
> The latest Linux kernel is 3.13 (https://www.kernel.org), so I assume
> 5.6 is a distribution version.
>
> So which distr…
On 28/01/14 08:10, bobJobS wrote:
My developers have had the same issue.
Postgres 9.2.3 on Linux 5.6.
The latest Linux kernel is 3.13 (https://www.kernel.org), so I assume 5.6 is
a distribution version.
So which distribution of Linux are you using?
Cheers,
Gavin
My developers have had the same issue.
Postgres 9.2.3 on Linux 5.6.
The query planner estimates (for a 27-table join) that using the nestloop is
faster, when in fact it is not. A hashjoin returns results faster. We've set
enable_nestloop = false and have gotten good results. The problem is, nes…
Hello Stelian,
Have you tried the func_table module? I think it will help you to eliminate
all the joins.
Regards
On Monday, January 27, 2014 5:54 PM, Stelian Iancu wrote:
Hello,
I have Postgres 9.3 running on a Linux machine with 32GB RAM. I have a
fairly large database (some table…
On Mon, Jan 27, 2014, at 7:06, Tom Lane wrote:
> Stelian Iancu writes:
>> I have Postgres 9.3 running on a Linux machine with 32GB RAM. I have a
>> fairly large database (some tables with approx. 1 mil. records) and I
>> have the following query:
>> [ 13-way join joined to a 3-way join ]
On Mon, Jan 27, 2014, at 9:20, salah jubeh wrote:
> Hello Stelian,
Hello,
> Have you tried the func_table module? I think it will help you to
> eliminate all the joins.
No, I haven't. I can have a look later, thanks.
> Regards
Stelian Iancu writes:
> I have Postgres 9.3 running on a Linux machine with 32GB RAM. I have a
> fairly large database (some tables with approx. 1 mil. records) and I
> have the following query:
> [ 13-way join joined to a 3-way join ]
Think you'll need to raise join_collapse_limit and from_collapse_limit…
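A sketch of that advice for a 16-relation query (values are illustrative;
higher settings buy better plans at the cost of longer planning time):

    SET join_collapse_limit = 16;
    SET from_collapse_limit = 16;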
> What are your thoughts on the right way to use SSDs in a RAID to
> enhance postgres I/O performance? In an earlier reply, you
> indicated one of a "RAID1+0 consisting of several spindles,
> NVRAM-based solution (SSD or PCIe card), or a SAN"
Well, it's a tiered approach. If you can identify yo
On 21/12/13 05:11, Shaun Thomas wrote:
[...]
> Of course, don't forget to buy modules in multiples of four, otherwise
> you're not taking advantage of all the CPU's memory channels. :)
Note some processors have 3 (three) memory channels! And I know of some with
4 memory channels. So it is im…
On 12/20/2013 09:57 AM, Sev Zaslavsky wrote:
There is a separate RAID-1 for WAL, another for tablespace and another
for operating system.
I tend to stick to DB-size / 10 as a minimum, but I also have an OLTP
system. For a more OLAP-type, the ratio is negotiable.
The easiest way to tell is t
On 12/19/2013 03:24 PM, Sergey Konoplev wrote:
> 2. You are limited by IO.
> I would also suggest upgrading your storage in this case.
I think this is the case. If I recall correctly, his setup includes a single
RAID-1 for everything, and he only has 32GB of RAM. In fact, the WAL
traffic f…
On Thu, Dec 19, 2013 at 12:54 PM, Sev Zaslavsky wrote:
> On 12/19/2013 3:34 PM, Sergey Konoplev wrote:
>> On Thu, Dec 12, 2013 at 9:30 AM, Sev Zaslavsky wrote:
>>> Table rt_h_nbbo contains several hundred million rows. All rows for a
>>> given entry_date are appended to this table in an overnight…
On Thu, Dec 12, 2013 at 9:30 AM, Sev Zaslavsky wrote:
[...]
> Table rt_h_nbbo contains several hundred million rows. All rows for a given
> entry_date are appended to this table in an overnight process every night -
> on the order of several million rows per day.
[...]
> I perceive an ineffici…