> It has now been decided to try upgrading to 9.4 as that is the minimum to
> support Django 1.11 (which we are trying to upgrade a backend service to).
> The hope is that whatever feature we have not configured properly in 9.6
> is not present in 9.4.
It's entirely possible whatever is causing your
On Fri, Nov 3, 2017 at 5:28 AM, Thomas Kellerer wrote:
> I do like Oracle's approach with SQL profiles, where you can force the
> optimizer to try harder to find a good execution plan. I _think_ it even
> runs the statement with multiple plans and compares the expected outcome
> Yes, there is some code to catch exceptions like unique constraint violation
> and no data found. Do you suggest we try commenting that part out?
That is likely it. Comment that out and test.
If you still need to handle a unique violation, see if you can instead
use the ON CONFLICT clause
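A minimal sketch (the table and column names are mine, not from the
thread): instead of catching unique_violation in an EXCEPTION block,
which sets up a subtransaction for every row, let the INSERT absorb
the conflict:

-- requires PostgreSQL 9.5 or later
CREATE TABLE price (product_id int PRIMARY KEY, amount numeric);

-- silently skip duplicates
INSERT INTO price VALUES (1, 9.99)
ON CONFLICT (product_id) DO NOTHING;

-- or overwrite the existing row instead
INSERT INTO price VALUES (1, 10.49)
ON CONFLICT (product_id) DO UPDATE SET amount = EXCLUDED.amount;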
Is there any error handling in there? I remember seeing performance
issues if you put in any code to catch exceptions.
There is also the option of pg_stat_statements:
https://www.postgresql.org/docs/current/static/pgstatstatements.html and
auto_explain:
https://www.postgresql.org/docs/current/static/auto-explain.html
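As a rough sketch, wiring both up could look like this (the one-second
threshold is only illustrative):

# postgresql.conf -- loading these via shared_preload_libraries
# requires a server restart
shared_preload_libraries = 'pg_stat_statements,auto_explain'
auto_explain.log_min_duration = '1s'   # log plans of statements slower than 1s

-- then, once per database, to expose the statistics view:
CREATE EXTENSION pg_stat_statements;

-- the heaviest statements since stats were last reset
-- (total_time is the pre-v13 column name):
SELECT query, calls, total_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;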
These should help you identify what is slowing things down. There is no
reason I could think of
On Fri, Jun 23, 2017 at 12:50 AM, Tom Lane wrote:
>
> It's possible that pgAdmin4 has improved matters in this area.
>
Sadly, not in my experience. It's actually considerably worse than
pgAdminIII when selecting a lot of rows, especially when they are
very wide (20+
>
> Does the "multiple steps" part involve UPDATEs on pre-existing rows?
> Do the updates change the column(s) used in the gin index?
>
Yes they do; however, the updates happen prior to the index creation.
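In other words, the ordering is roughly (all names here are hypothetical):

-- do all of the churn first...
CREATE TABLE document (id int, tags text[]);
INSERT INTO document SELECT g, ARRAY['a'] FROM generate_series(1, 1000) g;
UPDATE document SET tags = tags || '{b}';
-- ...and only then build the GIN index, so it never sees the updates
CREATE INDEX document_tags_gin ON document USING gin (tags);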
I just tried, and that looks like the solution. I really appreciate your
help on this.
Hey all, first off, I'm running: PostgreSQL 9.6.3 on x86_64-pc-linux-gnu,
compiled by gcc (Debian 4.9.2-10) 4.9.2, 64-bit
At the high level, I am having an issue with a query not using an index,
and in a very hard-to-reproduce way.
I have a function which builds two temp tables, fills each with
>
> Did you pay attention to the estimated number of groups (ie, estimated
> output rowcount for the aggregation plan node) while fooling around with
> the statistics? How does it compare to reality, and to 9.5's estimate?
>
I'm re-doing the tests and paying attention to that now.
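For anyone following along, the figure in question is on the aggregate
node of the EXPLAIN ANALYZE output; with a made-up table:

CREATE TABLE sale (product_id int, amount numeric);

EXPLAIN ANALYZE
SELECT product_id, sum(amount)
FROM sale
GROUP BY product_id;

-- the aggregate line reads roughly:
--   HashAggregate (cost=... rows=<estimated groups> ...)
--                 (actual time=... rows=<real groups> ...)
-- a large gap between the two rows= values is what to look for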
With
>
> If the problem is "new server won't use hashagg", I'd wonder whether
> the work_mem setting is the same, or whether maybe you need to bump
> it up some (the planner's estimate of how big the hashtable would be
> might have changed a bit).
>
I actually was speaking with Stephen Frost in the
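(For what it's worth, testing the work_mem theory only needs a
session-level change; the value below is purely illustrative.)

SET work_mem = '64MB';   -- affects only the current session
-- re-run the query under EXPLAIN ANALYZE and check whether the
-- aggregate node flips back to HashAggregate
RESET work_mem;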
As suggested in the Postgres slack channel by lukasfittl, I disabled
hashagg on my old server, and ran the query again. That changed one piece
to a groupagg (like was used on the new server) and the performance was
similar to the 9.6.1 box.
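For reference, that toggle is session-local; a minimal sketch:

SET enable_hashagg = off;   -- planner now strongly avoids hash aggregation
-- EXPLAIN ANALYZE the query here; its aggregate node should switch
-- from HashAggregate to GroupAggregate
RESET enable_hashagg;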
9.5.5 w/ hashagg disabled:
Hello all, I have a query that was running quickly enough on 9.5.5 and has
slowed to a halt after upgrading to 9.6.1.
The server hardware is the same: a 2-core, 4 GB RAM DigitalOcean virtual
server running Debian 8.6.
The query is a pretty heavy reporting query to aggregate the dollar value
of
I finally managed to get it compiled, patched, and working. It gave the
same plan with the same estimates as when I turned fkey_estimates off.
I was wondering if I did things properly though, as I don't see the
enable_fkey_estimates GUC any more. Was it removed?
It'd be really hard to get a test dataset together, I think, so I suppose
I'll learn how to compile Postgres. Will let you know how that goes.
Alright with that off I get:
'Nested Loop Anti Join (cost=25.76..21210.81 rows=16684 width=106) (actual
time=0.688..249.585 rows=26994 loops=1)'
' -> Hash Join (cost=25.34..7716.95 rows=21906 width=106) (actual
time=0.671..124.663 rows=28467 loops=1)'
'Hash Cond: (cp.claim_id =
gger
BEFORE INSERT
ON claim_product_reason_code
FOR EACH ROW
EXECUTE PROCEDURE gosimple.update_claim_product_reason_code_active_range();
On Thu, Jun 16, 2016 at 10:09 PM, Adam Brusselback <adambrusselb...@gmail.com> wrote:
> I analyzed all tables involved after loading, and also while trying to
> diagnose this issue.
I analyzed all tables involved after loading, and also while trying to
diagnose this issue.
I have the same statistics target settings on both servers.
Here are the schemas for the tables:
On Thu, Jun 16, 2016 at 10:04 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Adam Brusselback <adambrusselb...@gmail.com> writes:
Hey all, testing out 9.6 beta 1 right now on Debian 8.5.
I have a query that is much slower on 9.6 than 9.5.3.
As a side note, when I EXPLAIN ANALYZE the query instead of just executing
it, it takes more than 2x as long to run. I have tried looking for info on
that online but have not found any.
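One plausible contributor (my guess, not something established in this
thread) is the per-node timing instrumentation itself; the TIMING
option is a quick way to test that:

-- keeps the actual row counts but skips the per-node clock reads,
-- which can dominate runtime on plans that touch many rows
EXPLAIN (ANALYZE, TIMING OFF) SELECT 1;   -- substitute the real query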
At that point would it be better to just use a boolean array?
Here is an example I just wrote up that does pretty damn fast searches.
SET work_mem = '256 MB';
CREATE TABLE test_bool AS
SELECT id, array_agg(random() < 0.85) as boolean_column
FROM generate_series(1, 100)
CROSS JOIN
scan is a ton faster even.
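The listing above is cut short; a self-contained version of the same
experiment (the second series length, the GROUP BY, and the sample
search are my guesses) might be:

SET work_mem = '256MB';

CREATE TABLE test_bool AS
SELECT id, array_agg(random() < 0.85) AS boolean_column
FROM generate_series(1, 100) AS id
CROSS JOIN generate_series(1, 1000)
GROUP BY id;

-- example search: every row whose array holds at least one false
SELECT id FROM test_bool WHERE boolean_column @> ARRAY[false];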
On Wed, Apr 13, 2016 at 2:54 PM, Evgeniy Shishkin <itparan...@gmail.com> wrote:
>
> > On 13 Apr 2016, at 20:14, Adam Brusselback <adambrusselb...@gmail.com> wrote:
> >
> > Sorry, brain stopped working and I forgot to include the normal info.
Sorry, brain stopped working and I forgot to include the normal info.
Postgres version: 9.5.1
Hardware: 2-core, 4 GB DigitalOcean virtual server
OS: Debian
EXPLAIN ANALYZE for an example update:
'Update on price_generated (cost=32.45..644.83 rows=1 width=157) (actual
time=29329.614..29329.614
Hey all, been running into some performance issues with one of my tables,
and it seems to be centered around index maintenance.
I have a table to store aggregated prices that are derived from sale data
over a configurable period, and a function that runs periodically that
inserts new prices if
Hey all, I've run into a performance problem with one of my queries, and I
am really not sure what is causing it.
Setup info:
Postgres version 9.4.4 on Debian 7. The server is virtual, with a single
core, 512 MB of RAM, and SSD storage.
Changes to postgresql.conf:
maintenance_work_mem = 30MB
Hey all, first off, Postgres version 9.4.4 (also tested on 9.5 beta).
I have been having a pretty hard time getting a view of mine to play nice
with any other queries I need it for.
I have a few tables you'd need to know about to understand why I'm doing
what I'm doing.
First thing is we have a
hoping to create a view to make working with the final result of the rules
specified above easy when you want to know what pricing is valid for a
specific product on a contract.
So that is the "why" at least.
On Thu, Dec 17, 2015 at 12:00 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
claim_product clp
ON cp.contract_id = clp.contract_id
WHERE clp.claim_id = 'whatever';
On Thu, Dec 17, 2015 at 1:08 PM, Adam Brusselback <adambrusselb...@gmail.com> wrote:
> No ORM, just me.
> Was somewhat similar to something I had seen done at an old job, but they
>
Hey all,
I have an attachment table in my database which stores a file in a bytea
column, the file name, and the size of the file.
Schema:
CREATE TABLE attachment
(
    attachment_id uuid NOT NULL DEFAULT gen_random_uuid(),
    attachment_name character varying NOT NULL,
    attachment_bytes_size
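The schema is cut off above; filling in the remaining columns from the
description (the names and types below are my assumptions):

CREATE EXTENSION IF NOT EXISTS pgcrypto;  -- provides gen_random_uuid() on 9.x

CREATE TABLE attachment
(
    attachment_id uuid NOT NULL DEFAULT gen_random_uuid(),
    attachment_name character varying NOT NULL,
    attachment_bytes_size integer NOT NULL,  -- "the size of the file"
    attachment_bytes bytea NOT NULL,         -- the bytea column holding the file
    CONSTRAINT attachment_pkey PRIMARY KEY (attachment_id)
);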
at 7:55 PM, David G. Johnston <david.g.johns...@gmail.com> wrote:
> On Wednesday, November 25, 2015, Adam Brusselback <adambrusselb...@gmail.com> wrote:
>
>> Hey all,
>>
>> I have an attachment table in my database which stores a file in a bytea
, 2015 at 8:01 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Adam Brusselback <adambrusselb...@gmail.com> writes:
> > CREATE TABLE attachment
> > (
> > attachment_id uuid NOT NULL DEFAULT gen_random_uuid(),
> > attachment_name character varying NOT NULL,
>