On 11/6/17, 9:21 AM, "Justin Pryzby" wrote:
> see if statistics improve:
> SELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV,
> tablename, attname, n_distinct, array_length(most_common_vals,1) n_mcv
> FROM pg_stats WHERE
> It has now been decided to try upgrading to 9.4 as that is the minimum to
> support Django 1.11 (which we are trying to upgrade a backend service to).
> The hope is that whatever feature we have not configured properly in 9.6 is
> not present in 9.4.
It's entirely possible whatever is causing your
Justin,
Thanks for the reply.
I changed the statistics target on av.customer_id as suggested, and the
n_distinct estimate reported in pg_stats went from 202,333 to 904,097. There
are 11.2 million distinct customer_ids on the 14.8 million vehicle records.
Rerunning the query showed no significant change in time.
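For reference, the per-column statistics change discussed here is done with
ALTER TABLE ... SET STATISTICS followed by ANALYZE; a minimal sketch with the
names from this thread (assuming av is the actual table name, and with a
purely illustrative target of 1000):
ALTER TABLE av ALTER COLUMN customer_id SET STATISTICS 1000;
ANALYZE av;
SELECT n_distinct FROM pg_stats
WHERE tablename = 'av' AND attname = 'customer_id';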
On Mon, Nov 06, 2017 at 01:18:00PM +0000, Adam Torres wrote:
> Good morning all,
>
> We have a problem with performance after upgrading from 9.3 to 9.6 where
> certain queries take 9 times longer to run. On our initial attempt to
> upgrade, we noticed the system as a whole was taking longer to
On Tue, Oct 24, 2017 at 04:15:59PM +0200, johannes graën wrote:
> Hi Pavel, *,
>
> you were right with ANALYZing the DB first. However, even after doing
> so, I frequently see Seq Scans where an index was used before. This
> usually co-occurs with parallelization and looked different before
>
johannes graën wrote:
> Hi Pavel, *,
>
> you were right with ANALYZing the DB first. However, even after doing
> so, I frequently see Seq Scans where an index was used before. This
> usually co-occurs with parallelization and looked different before
> upgrading to 10. I can provide an example for
On 2017-10-24 17:18, Justin Pryzby wrote:
> You could (re)install PG96 alongside PG10 and run a copy of the DB (even from
> your homedir, or on a different server) and pg_dump | pg_restore the relevant
> tables (just be sure to specify the alternate host/port/user/etc as needed for
> the restore
On Tue, Oct 24, 2017 at 04:15:59PM +0200, johannes graën wrote:
> upgrading to 10. I can provide an example for 10 [1], but I cannot
> generate a query plan for 9.6 anymore.
You could (re)install PG96 alongside PG10 and run a copy of the DB (even from
your homedir, or on a different server) and
Hi Pavel, *,
you were right with ANALYZing the DB first. However, even after doing
so, I frequently see Seq Scans where an index was used before. This
usually co-occurs with parallelization and looked different before
upgrading to 10. I can provide an example for 10 [1], but I cannot
generate a
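One quick way to check whether the new parallel plans are responsible, sketched
here for reference rather than taken from the thread: disable parallelism for
the session and re-run the query (the setting exists in 9.6 and 10):
SET max_parallel_workers_per_gather = 0;
EXPLAIN ANALYZE SELECT ...;  -- placeholder: re-run the affected query here
RESET max_parallel_workers_per_gather;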
On Wed, Oct 11, 2017 at 1:11 PM, Pavel Stehule wrote:
> Do you have fresh statistics? After an upgrade it is necessary to run the ANALYZE command
Yes, that was indeed missing. I ran ANALYZE, but apparently on all
databases but this one. I could have guessed that
2017-10-11 13:06 GMT+02:00 johannes graën :
> Hi,
>
> I wrote a query that joins several tables usually returning less than
> 1000 rows, groups them and generates a JSON object of the result. In
> 9.6 it was a question of milliseconds for that query to return the
> requested
I have the newest version:
select oracle_diag();
                          oracle_diag
--------------------------------------------------------------
oracle_fdw 1.5.0, PostgreSQL 9.6.3, Oracle client 11.2.0.4.0,
On Sun, Aug 27, 2017 at 1:34 PM, Mariel Cherkassky
wrote:
> Hi, yes indeed I'm using Laurenz's oracle_fdw extension. I tried to run it
> but I'm getting an error
>
> dbch=# ALTER FOREIGN TABLE tc_sub_rate_ver_prod OPTIONS ( SET prefetch 10240
> );
> ERROR: syntax error
On Sun, Aug 27, 2017 at 1:34 PM, Mariel Cherkassky
wrote:
> Hi, yes indeed I'm using Laurenz's oracle_fdw extension. I tried to run it
> but I'm getting an error
>
> dbch=# ALTER FOREIGN TABLE tc_sub_rate_ver_prod OPTIONS ( SET prefetch 10240
> );
> ERROR: syntax error
Hi, yes indeed I'm using Laurenz's oracle_fdw extension. I tried to run it
but I'm getting an error
dbch=# ALTER FOREIGN TABLE tc_sub_rate_ver_prod OPTIONS ( SET prefetch
10240 );
ERROR: syntax error at or near "10240"
LINE 1: ...N TABLE tc_sub_rate_ver_prod OPTIONS ( SET prefetch 10240 );
dbch=#
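For reference, the likely cause of this error is that option values in an
OPTIONS clause must be written as quoted string literals; assuming prefetch is
a valid oracle_fdw option in this version, the statement would be:
ALTER FOREIGN TABLE tc_sub_rate_ver_prod OPTIONS ( SET prefetch '10240' );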
On Thu, Aug 24, 2017 at 4:51 AM, Mariel Cherkassky
wrote:
> Hi Claudio, how can I do that? Can you explain to me what this option is?
>
> 2017-08-24 2:15 GMT+03:00 Claudio Freire :
>>
>> On Mon, Aug 21, 2017 at 5:00 AM, Mariel Cherkassky
>>
Hi Claudio, how can I do that? Can you explain to me what this option is?
2017-08-24 2:15 GMT+03:00 Claudio Freire :
> On Mon, Aug 21, 2017 at 5:00 AM, Mariel Cherkassky
> wrote:
> > To summarize, I still have performance problems. My current
On Mon, Aug 21, 2017 at 5:00 AM, Mariel Cherkassky
wrote:
> To summarize, I still have performance problems. My current situation:
>
> I'm trying to copy the data of many tables in the oracle database into my
> postgresql tables. I'm doing so by running insert into
On Sat, Aug 19, 2017 at 10:37 AM, anand086 wrote:
Your email is very hard to read; the formatting and line wrapping are
heavily mangled. You might want to attach the plans as file attachments
instead of, or in addition to, putting them in the body.
> -> Index Only Scan
I think your query is a bit confusing and has many subqueries, so I tried
to simplify it.
If you can't have more than one import_num = 0 for the same login, try this:
SELECT count(*)
FROM test_tab tab1
LEFT JOIN test_tab tab2
ON tab1.login = tab2.login AND tab2.import_num = '0'
WHERE
tab2.login IS NULL
Do you have an index on the login column?
If not, try creating an index and taking off those DISTINCTs; see the sketch
below.
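A minimal sketch of that suggestion, using the table from the query above (the
index name is illustrative):
CREATE INDEX test_tab_login_idx ON test_tab (login);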
On Mon, Aug 21, 2017 at 15:33, Justin Pryzby
wrote:
> On Sat, Aug 19, 2017 at 10:37:56AM -0700, anand086 wrote:
> >
>
On Sat, Aug 19, 2017 at 10:37:56AM -0700, anand086 wrote:
> [mangled plan output]
Any thoughts on this?
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Mariel Cherkassky
> Sent: Monday, August 21, 2017 10:20 AM
> To: MichaelDBA <michael...@sqlexec.com>
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] performance problem on big tables
>
> I had a system that c
> *To:* MichaelDBA <michael...@sqlexec.com>
> *Cc:* pgsql-performance@postgresql.org
> *Subject:* Re: [PERFORM] performance problem on big tables
>
>
>
> I had a system that consisted of many objects (procedures, functions, ...) on
> an Oracle database. We decided to migrate that system to postgr
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Mariel Cherkassky
Sent: Monday, August 21, 2017 10:20 AM
To: MichaelDBA <michael...@sqlexec.com>
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] performance problem on big
I had a system that consisted of many objects (procedures, functions, ...) on an
Oracle database. We decided to migrate that system to PostgreSQL. That
system copied a lot of big tables from a different read-only Oracle
database and performed a lot of queries on them to produce reports. The part
of
Maybe I missed it in this continuous thread activity, but have you tried
"ora2pg"? You can export from Oracle and import to Postgres in
parallel jobs. The import commands use the efficient COPY command by
default (unless you override it in the ora2pg configuration file). You
can do the
I'm searching for a way to improve the current performance. I'm not
interested in using a different tool or writing something new, because I'm
trying to migrate a system on an Oracle database to a PostgreSQL database.
2017-08-21 14:53 GMT+03:00 Daniel Blanch Bataller <
> On 21 Aug 2017, at 13:27, Mariel Cherkassky
> wrote:
>
> All of this runs as part of one big transaction. How can I create a dump on
> the Oracle server and copy it to the PostgreSQL server from within a
> PostgreSQL transaction?
I guess you
All of this runs as part of one big transaction. How can I create a dump on
the Oracle server and copy it to the PostgreSQL server from within a
PostgreSQL transaction? Chopping the table into chunks is an option when I use
COPY, but when I use COPY from the remote Oracle table it takes longer to create
> On 21 Aug 2017, at 10:00, Mariel Cherkassky
> wrote:
>
> To summarize, I still have performance problems. My current situation:
> I'm trying to copy the data of many tables in the oracle database into my
> postgresql tables. I'm doing so by running
To summarize, I still have performance problems. My current situation:
I'm trying to copy the data of many tables in the oracle database into my
postgresql tables. I'm doing so by running insert into
local_postgresql_temp select * from remote_oracle_table. The performance of
this operation is
When I run COPY from a local table, the write speed is 22 MB/s. When I
use COPY from the remote Oracle table it writes 3 MB/s. SCP between the
servers copies very fast. How should I continue?
2017-08-20 14:00 GMT+03:00 Mariel Cherkassky :
> I realized something
I realized something weird. When I'm performing the COPY utility of
PostgreSQL in order to create a dump from a local table in my PostgreSQL db,
it takes 20 minutes for a 32G table. When I try to use COPY for a foreign
table (on the Oracle database) it takes more than 2 hours. During the copy
operation
This server is dedicated to be a PostgreSQL production database, therefore
PostgreSQL is the only thing that runs on the server. The fs that I'm using
is xfs. I'll add two different disks, one for the WALs and one for the
temp tablespace. Regarding the disks, what size should they be considering
On Thu, Aug 17, 2017 at 6:00 AM, Mariel Cherkassky
wrote:
> I checked with the storage team in the company and they saw that I have a lot
> of I/O on the server. How should I reduce the I/O that PostgreSQL uses?
Do you have concurrent activity on that server?
What
I would just check how long it takes to copy 3GB using a standard copy command.
On my computer it took 10 secs.
> On 17 Aug 2017, at 11:00, Mariel Cherkassky
> wrote:
>
> I checked with the storage team in the company and they saw that I have a lot
> of io
I checked with the storage team in the company and they saw that I have
a lot of I/O on the server. How should I reduce the I/O that PostgreSQL
uses?
2017-08-17 9:25 GMT+03:00 Mariel Cherkassky :
> Hi Daniel,
> I already tried to set the destination table to
Hi Daniel,
I already tried to set the destination table to unlogged; it improved the
performance slightly. Is there a way to make sure that I/O is the problem?
2017-08-17 0:46 GMT+03:00 Daniel Blanch Bataller <
daniel.blanch.batal...@gmail.com>:
> Seems your disks are too slow. On my laptop
Seems your disks are too slow. On my laptop (nothing special, just one disk)
using COPY I can dump 3G in ~ 20 secs, loading takes 120 secs, bare copying 3G
takes 10 secs.
Similar proportions to yours, but much faster. That should
confirm I/O is your bottleneck; do tell us how you solved your problem
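For anyone reproducing these measurements, psql can time each step; the file
and table names below are placeholders:
\timing on
\copy (SELECT * FROM big_table) TO '/tmp/big_table.bin' WITH BINARY
\copy big_table_copy FROM '/tmp/big_table.bin' WITH BINARY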
My server is virtual and has a virtual HD from a VNX storage machine. The
logs and the data are on the same disk.
2017-08-16 17:04 GMT+03:00 Daniel Blanch Bataller <
daniel.blanch.batal...@gmail.com>:
> Considering it has to write logs and data at checkpoints I don’t see it
> particularly slow
Considering it has to write logs and data at checkpoints I don’t see it
particularly slow compared to the extract phase. What kind of disks do you
have, SSD or regular disks? Different disks for transaction logs and data?
> On 16 Aug 2017, at 15:54, Mariel Cherkassky
I ran the copy command via psql to create a local dump of a 3G table and it
took 134059.732 ms, about 2 minutes. After that I imported the data via COPY
and it took 458648.677 ms, about 7 minutes. So the copy command works, but
pretty slowly.
2017-08-16 16:08 GMT+03:00 Daniel Blanch Bataller <
See if the copy command is actually working; copy should be very fast from your
local disk.
> On 16 Aug 2017, at 14:26, Mariel Cherkassky
> wrote:
>
>
> After all the changes of the memory parameters, the same operation (without the
> copy utility) didn't
After all the changes of the memory parameters, the same operation (without
the copy utility) didn't run much faster; it took one minute less. I made
a test with the copy command (without the 'with binary') and it took 1.5
hours to create the dump file in my local PostgreSQL server. Then I tried to
2017-08-15 18:13 GMT+02:00 Jeff Janes :
> On Tue, Aug 15, 2017 at 3:06 AM, Mariel Cherkassky <
> mariel.cherkas...@gmail.com> wrote:
>
>> Hi,
>> So I ran the checks that Jeff mentioned:
>> \copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1
>> hour and 35
On Tue, Aug 15, 2017 at 4:06 AM, Mariel Cherkassky
wrote:
> Hi,
> So I ran the checks that Jeff mentioned:
> \copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour
> and 35 minutes
So 26G takes 95 minutes, or about 280 MB/minute (~4.7 MB/second)? Sound
On Tue, Aug 15, 2017 at 3:06 AM, Mariel Cherkassky <
mariel.cherkas...@gmail.com> wrote:
> Hi,
> So I ran the checks that Jeff mentioned:
> \copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour
> and 35 minutes
> \copy local_postgresql_table from /tmp/tmp with binary - Didn't
Hi,
So I ran the checks that Jeff mentioned:
\copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour
and 35 minutes
\copy local_postgresql_table from /tmp/tmp with binary - Didn't run because
the remote oracle database is currently under maintenance work.
So I decided to follow
On Mon, Aug 14, 2017 at 6:24 AM, Mariel Cherkassky <
mariel.cherkas...@gmail.com> wrote:
> I have performance issues with two big tables. Those tables are located on
> an oracle remote database. I'm running the query: insert into
> local_postgresql_table select * from oracle_remote_table.
>
>
Moving that many gigs of data across your network could also take a long
time simply depending on your network configuration. Before spending a
huge amount of energy tuning postgresql, I'd probably look at how long it
takes to simply copy 20 or 30 G of data between the two machines.
> On 14
Hi.
In general using COPY is *much faster* than anything else. You can even split
the data load and run it in parallel; start with as many jobs as you have
processors. Same with indexes: build them in parallel. By parallel I mean
several psql sessions running at the same time, as sketched below.
Tuning postgres will
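A minimal sketch of that parallel-load idea, with placeholder table and file
names; each \copy runs in its own concurrent psql session:
-- psql session 1:
\copy big_table FROM '/tmp/big_table.part1.csv' WITH (FORMAT csv)
-- psql session 2:
\copy big_table FROM '/tmp/big_table.part2.csv' WITH (FORMAT csv)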
Total RAM on your host is 5GB, really? Before touching anything else,
increase your RAM. That will be your big performance boost right
there. Then, you can "up" your effective_cache_size and
maintenance_work_mem.
Regards,
Michael Vitale
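Purely as an illustration of those two settings, assuming a hypothetical 32GB
host (the values are starting points, not recommendations):
# postgresql.conf
effective_cache_size = 24GB    # planner's estimate of memory available for caching
maintenance_work_mem = 1GB     # memory for index builds and VACUUM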
2017-06-28 10:43 GMT+02:00 Pritam Baral :
>
>
> On Wednesday 28 June 2017 02:00 PM, Ulf Lohbrügge wrote:
> > Nope, I didn't try that yet. But I don't have the impression that
> reindexing the indexes in information_schema will help. The table
> information_schema.tables
Nope, I didn't try that yet. But I don't have the impression that
reindexing the indexes in information_schema will help. The table
information_schema.tables uses the following indexes:
"pg_class_oid_index" UNIQUE, btree (oid)
"pg_class_relname_nsp_index" UNIQUE, btree (relname,
On Wednesday 28 June 2017 05:27 AM, Ulf Lohbrügge wrote:
> Hi all,
>
> we use schemata to separate our customers in a multi-tenant setup (9.5.7,
> Debian stable). Each tenant is managed in his own schema with all the tables
> that only he can access. All tables in all schemata are the same in
On Mon, Mar 06, 2017 at 12:17:22PM +, Dinesh Chandra 12108 wrote:
> Below is the output of Query SELECT * FROM pg_stats WHERE tablename='point'
> AND attname='domain_class_id' ;
>
>
> schemaname | tablename | attname | inherited | null_frac | avg_width
> | n_distinct |
On Mon, Mar 6, 2017 at 7:20 AM, Piotr Gasidło wrote:
> We are having some performance issues after we upgraded to newest
> version of PostgreSQL, before it everything was fast and smooth.
>
> Upgrade was done by pg_upgrade from 9.4 directly to 9.6.1. Now we
> upgraded to
-----Original Message-----
From: Justin Pryzby [mailto:pry...@telsasoft.com]
Sent: 06 March, 2017 10:54 AM
To: Dinesh Chandra 12108 <dinesh.chan...@cyient.com>
Cc: Nur Agus <nuragus.li...@gmail.com>; Jeff Janes <jeff.ja...@gmail.com>;
pgsql-performanc
On Sun, Mar 05, 2017 at 08:23:08PM -0800, Jeff Janes wrote:
> On Fri, Mar 3, 2017 at 4:44 AM, Dinesh Chandra 12108
> wrote:
> > The below is the output for psql=> EXPLAIN ANALYZE SELECT DISTINCT
> > feature_id FROM evidence.point p INNER JOIN
On Fri, Mar 3, 2017 at 4:44 AM, Dinesh Chandra 12108 <
dinesh.chan...@cyient.com> wrote:
> Dear Nur,
>
>
>
> The below is the output for psql=> EXPLAIN ANALYZE SELECT DISTINCT
> feature_id FROM evidence.point p INNER JOIN evidence.observation_evidence
> oe ON p.feature_id = oe.evd_feature_id
Dinesh Chandra 12108 writes:
> The below is the output for psql=> EXPLAIN ANALYZE SELECT DISTINCT feature_id
> FROM evidence.point p INNER JOIN evidence.observation_evidence oe ON
> p.feature_id = oe.evd_feature_id WHERE p.domain_class_id IN (11) AND
>
From: Nur Agus [mailto:nuragus.li...@gmail.com]
Sent: 03 March, 2017 5:54 PM
To: Dinesh Chandra 12108 <dinesh.chan...@cyient.com>
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance issue in PostgreSQL server...
Hello Dinesh,
You can try the EXPLAIN tool
psql=> EX
Hello Dinesh,
You can try the EXPLAIN tool
psql=> EXPLAIN ANALYZE SELECT DISTINCT feature_id FROM evidence.point p
INNER JOIN evidence.observation_evidence oe ON p.feature_id =
oe.evd_feature_id WHERE p.domain_class_id IN (11) AND (p.modification_time
> '2015-05-10 00:06:56.056 IST' OR
On 23 January 2017 at 17:55, Gabriel Dodan wrote:
>
> BUT if I run a trivial select on both servers, on a similar table, the
> select performs much, much better on the second server!
You're comparing two very different systems, it seems, so you might be
looking at
Thanks all for the explanations!
On Mon, Jan 2, 2017 at 9:36 PM, Tom Lane wrote:
> Андрей Хозов writes:
> > create table t1 (id serial, str char(32));
>
> > create function f1(line text) returns void as $$
> > begin
> > perform *
Андрей Хозов writes:
> create table t1 (id serial, str char(32));
> create function f1(line text) returns void as $$
> begin
> perform * from t1 where str = line;
> end;
> $$ language plpgsql;
This query is specifying a text comparison (text
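The explanation is cut off above; the gist, as far as it can be reconstructed:
the parameter is text while the column is char(32), so the comparison resolves
as a text comparison and a plain index on the char(32) column cannot be used.
A minimal sketch of the usual fix, declaring the parameter with the column's
type (the DROP is included because otherwise f1(text) and f1(char) would both
remain as separate overloads):
drop function if exists f1(text);
create function f1(line char(32)) returns void as $$
begin
  perform * from t1 where str = line;  -- char(32) = char(32), can use an index on str
end;
$$ language plpgsql;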
Hi
2017-01-02 15:34 GMT+01:00 Андрей Хозов :
> Hello there!
>
> I have a performance issue with functions and argument types.
>
> Table and data:
> create table t1 (id serial, str char(32));
> insert into t1 (str) select md5(s::text) from generate_series(1, 100)
> as s;
>
>
On Wed, Nov 30, 2016 at 6:26 AM, Marc-Olaf Jaschke
wrote:
> Hi,
>
> i have a performance issue with bitmap index scans on huge amounts of big
> jsonb documents.
>
>
> = Background =
>
> - table with big jsonb documents
> - gin index on these documents
> -
Thanks for the explanation!
Best Regards,
Marc-Olaf
Marc-Olaf Jaschke · Softwareentwickler
shopping24 GmbH
Werner-Otto-Straße 1-7 · 22179 Hamburg
Telefon: +49 (0) 40 6461 5830 · Fax: +49 (0) 40 64 61 7879
marc-olaf.jasc...@s24.com · www.s24.com
AG Hamburg HRB 63371
represented by Dr. Björn
> big_jsonb @> '[{"x": 1, "filler": "cfcd208495d565ef66e7dff9f98764da"}]';
> I wonder why bitmap heap scan adds such a big amount of time on top of the
> plain bitmap index scan.
> It seems to me that the recheck is active although all blocks are exact
> [1], and that pg is loading the jsonb for
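For context, a GIN index over such documents, as described in the truncated
message above, might be defined like this; the table and column names are
illustrative, and jsonb_path_ops is one common operator class for @>
containment queries:
CREATE INDEX docs_big_jsonb_idx ON docs USING gin (big_jsonb jsonb_path_ops);
SELECT count(*) FROM docs
WHERE big_jsonb @> '[{"x": 1, "filler": "cfcd208495d565ef66e7dff9f98764da"}]';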
From: Tom Lane <t...@sss.pgh.pa.us>
Sent: Tuesday, November 15, 2016 19:35:03
To: Gabriela Serventi
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance decrease after upgrade to 9.6.1
Gabriela Serventi <gabrielaserve...@hotmail.com> writes:
> $ pgbench -l -c 10
Gabriela Serventi writes:
> $ pgbench -l -c 100 -T 30 pgbench
> starting vacuum...end.
> transaction type:
> scaling factor: 1
> query mode: simple
> number of clients: 100
> number of threads: 1
> duration: 30 s
> number of transactions actually processed: 27428
>
Hi Matheus:
Thanks for your prompt answer. It's for a web application. This
part of the application allows exporting the answers to a CSV
file. So pagination isn't possible here. The user can choose among
several filters. The group of the courses is one of them.
On Wed, Oct 19, 2016 at 8:54 AM, negora wrote:
> Nested Loop (cost=245.92..383723.28 rows=7109606 width=38) (actual
> time=1.091..2616.553 rows=8906075 loops=1)
>
I wonder about the use-case for this query, because it returns more than 8M
rows, so 2.6 seconds sounds
Oscar Camuendo writes:
> I'm working on Postgresql 9.5.3 and executed a query which takes 5 or 7
> seconds and it should not take more than 0.30 milliseconds, the query is:
Have you ANALYZEd your tables lately? Some of these estimated row counts
seem awfully far off
On Fri, 2016-07-22 at 19:08 +1200, Mark Kirkwood wrote:
> On 22/07/16 13:07, Johan Fredriksson wrote:
> > And by the way, I have also tried to upgrade to PostgreSQL 9.4.8 (the
> > latest version in postgresql.org's own repository) without improvement.
> >
>
> Not sure what repo you are using, but
On 22/07/16 13:07, Johan Fredriksson wrote:
And by the way, I have also tried to upgrade to PostgreSQL 9.4.8 (the latest
version in postgresql.org's own repository) without improvement.
Not sure what repo you are using, but 9.5.3 and 9.6 Beta are the
*actual* latest versions. Now I'm not sure
And by the way, I have also tried to upgrade to PostgreSQL 9.4.8 (the latest
version in postgresql.org's own repository) without improvement.
/ Eskil
I can add that setting enable_nestloop = 0 cuts the runtime for this query down
to about 4 seconds.
Disabling nested loops globally does, however, impact the performance of a lot
of other queries.
/ Eskil
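One way to limit that impact, sketched here for reference rather than taken
from the thread: scope the setting to a single transaction with SET LOCAL, so
only the problematic query is affected:
BEGIN;
SET LOCAL enable_nestloop = off;
-- run the slow query here
COMMIT;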
On Thu, Jul 21, 2016 at 3:29 PM, David G. Johnston
wrote:
> On Thu, Jul 21, 2016 at 2:24 PM, Claudio Freire
> wrote:
>>
>> That cross join doesn't look right. It has no join condition.
>
>
> That is the definition of a "CROSS JOIN"...
>
>
On Thu, Jul 21, 2016 at 2:24 PM, Claudio Freire
wrote:
> That cross join doesn't look right. It has no join condition.
That is the definition of a "CROSS JOIN"...
David J.
On Thu, Jul 21, 2016 at 11:48 AM, Johan Fredriksson wrote:
> EXPLAIN ANALYZE VERBOSE SELECT DISTINCT main.* FROM Users main CROSS
> JOIN ACL ACL_3 JOIN Principals Principals_1 ON ( Principals_1.id =
> main.id ) JOIN CachedGroupMembers CachedGroupMembers_2 ON
> (
> > > The rowcount estimates from 9.2 seem greatly different from the 8.4 plan.
> > > Did you remember to ANALYZE all the tables after migrating? Maybe there
> > > were some table-specific statistics targets that you forgot to transfer
> > > over? In any case, the 9.2 plan looks like
On Tue, Jun 7, 2016 at 9:57 PM, Ed Felstein wrote:
> Hello,
> First time poster here. Bear with me.
> Using PostgreSQL 9.5
> I have a situation where I have a LIKE and a NOT LIKE in the same query to
> identify strings in a varchar field. Since I am using wildcards, I have
On Wednesday, June 8, 2016, Ed Felstein wrote:
> Hello,
> First time poster here. Bear with me.
> Using PostgreSQL 9.5
> I have a situation where I have a LIKE and a NOT LIKE in the same query to
> identify strings in a varchar field. Since I am using wildcards, I have
>
> > The rowcount estimates from 9.2 seem greatly different from the 8.4 plan.
> > Did you remember to ANALYZE all the tables after migrating? Maybe there
> > were some table-specific statistics targets that you forgot to transfer
> > over? In any case, the 9.2 plan looks like
> > I am just about to upgrade from PostgreSQL 8.4.20 to 9.2.15, but I've run
> > into some huge performance issues.
>
> The rowcount estimates from 9.2 seem greatly different from the 8.4 plan.
> Did you remember to ANALYZE all the tables after migrating? Maybe there
> were some table-specific
>
> ...(BTW, I wonder why you are moving only to 9.2 and not something more
>> recent.)
>>
>
> Well, 9.2.15 is what comes bundled with RHEL 7, so I decided to go with
> that to avoid dependency issues. But I could install a fresher version
> from scratch if that would solve my problem.
>
The rowcount estimates from 9.2 seem greatly different from the 8.4 plan.
Did you remember to ANALYZE all the tables after migrating? Maybe there
were some table-specific statistics targets that you forgot to transfer
over?
No, I did not. Honestly, I thought everything would be transferred with
Johan Fredriksson writes:
> I am just about to upgrade from PostgreSQL 8.4.20 to 9.2.15, but I've run
> into some huge performance issues.
The rowcount estimates from 9.2 seem greatly different from the 8.4 plan.
Did you remember to ANALYZE all the tables after migrating? Maybe
To: 'Merlin Moncure'
Cc: 'postgres performance list'
Subject: Re: [PERFORM] Performance problems with postgres and null Values?
Sven Kerkling wrote:
> This one is the burden, running at least 100 seconds:
>
> SELECT b.id, b.status
> FROM export b, masterNew mb
Sven Kerkling wrote:
> This one is the burden, running at least 100 seconds:
>
> SELECT b.id, b.status
> FROM export b, masterNew mb
> WHERE mb.sperre IS NULL
> AND mb.status IS NULL
> AND b.id = mb.id
> LIMIT 100;
>
> http://explain.depesz.com/s/eAqG
I think the
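The reply above is cut off; for reference, one common approach to this kind of
IS NULL filter (not necessarily what was about to be suggested) is a partial
index matching the WHERE conditions:
CREATE INDEX masternew_null_idx ON masterNew (id)
WHERE sperre IS NULL AND status IS NULL;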
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Merlin Moncure
Sent: Saturday, April 23, 2016 00:11
To: Sven Kerkling
Cc: postgres performance list
Subject: Re: [PERFORM] Performance problems with postgres and null Values?
On Thu, Apr 21, 2016 at 4:49 AM
On Thu, Apr 21, 2016 at 4:49 AM, Sven Kerkling wrote:
> Can somebody help me with this performance problem?
>
> What can I try to solve this?
can you explain what the problem actually is? Which query is running
slow and how fast do you think it should run?
merlin
Hi Jim,
Thanks for your response. Yes the tables have been analysed and I have also
re-indexed and vacuumed the slave database.
Regards
Matthew
Sent from my iPad
> On 14 Dec 2015, at 17:49, Jim Nasby wrote:
>
>> On 12/14/15 11:16 AM, Matthew Lunnon wrote:
>>
On 12/14/15 11:16 AM, Matthew Lunnon wrote:
Inspecting the execution plan shows that there are some differences, for
example, the slave is using a HashAggregate when the master is simply
grouping. There also seems to be a difference with the ordering of the
sub plans.
Have you tried analyzing
Thank you for your hints.
I found lot of information in this thread
http://postgresql.nabble.com/how-to-investigate-GIN-fast-updates-and-cleanup-cycles-td5863756.html
Currently, we are monitoring pending_pages (pgstatginindex works on 9.4.4)
and running a vacuum every night. We hope it will solve the
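For reference, that pending-list check uses pgstatginindex from the
pgstattuple extension; the index name below is a placeholder:
CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT pending_pages, pending_tuples FROM pgstatginindex('my_gin_index');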
On Tue, Sep 29, 2015 at 8:45 AM, Bertrand Paquet <
bertrand.paq...@doctolib.fr> wrote:
> Hi,
>
> We have got a big slowdown on our production platform (PG 9.4.4).
>
What is it slow compared to? Did your version change, or your
workload/usage change?
>
> After analyzing wals with pg_xlogdump,