On Sat, Aug 19, 2017 at 10:37 AM, anand086 wrote:
Your email is very hard to read; the formatting and line wrapping are
heavily mangled. You might want to attach the plans as file attachments
instead of, or in addition to, putting them in the body.
> -> Index Only Scan
I think your query is a bit confusing and has many subqueries, so I tried
to simplify it.
If you can't have more than one row with import_num = 0 for the same login, try this:
SELECT count(*)
FROM test_tab tab1
LEFT JOIN test_tab tab2
ON tab1.login = tab2.login AND tab2.import_num = '0'
WHERE
tab2.login IS NULL
Do you have an index on the login column?
If not, try creating an index and taking off those DISTINCTs.
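If an index is missing, a minimal sketch of both suggestions (using the test_tab/login names from the query above; the index name is invented) would be:

```sql
-- Hypothetical sketch based on the thread's table names.
-- A plain btree index on login lets the anti-join probe tab2 cheaply:
CREATE INDEX test_tab_login_idx ON test_tab (login);

-- An equivalent formulation of the anti-join using NOT EXISTS,
-- which the planner can often turn into the same plan:
SELECT count(*)
FROM test_tab tab1
WHERE NOT EXISTS (
    SELECT 1
    FROM test_tab tab2
    WHERE tab2.login = tab1.login
      AND tab2.import_num = '0'
);
```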
On Mon, Aug 21, 2017 at 15:33, Justin Pryzby
wrote:
> On Sat, Aug 19, 2017 at 10:37:56AM -0700, anand086 wrote:
Any thoughts on this?
--
View this message in context:
http://www.postgresql-archive.org/Performance-Issue-Materialize-tp5979128p5979481.html
Sent from the PostgreSQL - performance mailing list archive at Nabble.com.
--
Sent via pgsql-performance mailing list
On Mon, Mar 06, 2017 at 12:17:22PM +0000, Dinesh Chandra 12108 wrote:
> Below is the output of Query SELECT * FROM pg_stats WHERE tablename='point'
> AND attname='domain_class_id' ;
> schemaname | tablename | attname | inherited | null_frac | avg_width | n_distinct |
On Mon, Mar 6, 2017 at 7:20 AM, Piotr Gasidło wrote:
> We are having some performance issues after we upgraded to newest
> version of PostgreSQL, before it everything was fast and smooth.
>
> Upgrade was done by pg_upgrade from 9.4 directly to 9.6.1. Now we
> upgraded to
-Original Message-
From: Justin Pryzby [mailto:pry...@telsasoft.com]
Sent: 06 March, 2017 10:54 AM
To: Dinesh Chandra 12108 <dinesh.chan...@cyient.com>
Cc: Nur Agus <nuragus.li...@gmail.com>; Jeff Janes <jeff.ja...@gmail.com>;
pgsql-performanc
On Sun, Mar 05, 2017 at 08:23:08PM -0800, Jeff Janes wrote:
> On Fri, Mar 3, 2017 at 4:44 AM, Dinesh Chandra 12108
> wrote:
> > The below is the output for psql=> EXPLAIN ANALYZE SELECT DISTINCT
> > feature_id FROM evidence.point p INNER JOIN
On Fri, Mar 3, 2017 at 4:44 AM, Dinesh Chandra 12108 <
dinesh.chan...@cyient.com> wrote:
> Dear Nur,
>
>
>
> The below is the output for psql=> EXPLAIN ANALYZE SELECT DISTINCT
> feature_id FROM evidence.point p INNER JOIN evidence.observation_evidence
> oe ON p.feature_id = oe.evd_feature_id
Dinesh Chandra 12108 writes:
> The below is the output for psql=> EXPLAIN ANALYZE SELECT DISTINCT feature_id
> FROM evidence.point p INNER JOIN evidence.observation_evidence oe ON
> p.feature_id = oe.evd_feature_id WHERE p.domain_class_id IN (11) AND
>
From: Nur Agus [mailto:nuragus.li...@gmail.com]
Sent: 03 March, 2017 5:54 PM
To: Dinesh Chandra 12108 <dinesh.chan...@cyient.com>
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance issue in PostgreSQL server...
Hello Dinesh,
You can try the EXPLAIN tool
psql=> EXPLAIN ANALYZE SELECT DISTINCT feature_id FROM evidence.point p
INNER JOIN evidence.observation_evidence oe ON p.feature_id =
oe.evd_feature_id WHERE p.domain_class_id IN (11) AND (p.modification_time
> '2015-05-10 00:06:56.056 IST' OR
Thanks all for the explanations!
On Mon, Jan 2, 2017 at 9:36 PM, Tom Lane wrote:
> Андрей Хозов writes:
> > create table t1 (id serial, str char(32));
>
> > create function f1(line text) returns void as $$
> > begin
> > perform *
Андрей Хозов writes:
> create table t1 (id serial, str char(32));
> create function f1(line text) returns void as $$
> begin
> perform * from t1 where str = line;
> end;
> $$ language plpgsql;
This query is specifying a text comparison (text
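Tom Lane's reply is truncated above, but the point is that comparing a char(32) column to a text parameter forces a text comparison, which can defeat an index on the char column. A hedged sketch of the usual fix (keeping the t1/str/f1 names from the thread):

```sql
-- Sketch: declaring the parameter with the column's own type makes the
-- comparison use char(n) equality, so an index on str can match directly.
create function f1(line char(32)) returns void as $$
begin
  perform * from t1 where str = line;
end;
$$ language plpgsql;

-- Alternatively, avoid char(n) entirely and declare str as text,
-- which is the usual recommendation in PostgreSQL.
```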
Hi
2017-01-02 15:34 GMT+01:00 Андрей Хозов :
> Hello there!
>
> I have a performance issue with functions and argument types.
>
> Table and data:
> create table t1 (id serial, str char(32));
> insert into t1 (str) select md5(s::text) from generate_series(1, 100)
> as s;
>
>
On Wed, Nov 30, 2016 at 6:26 AM, Marc-Olaf Jaschke
wrote:
> Hi,
>
> I have a performance issue with bitmap index scans on huge amounts of big
> jsonb documents.
>
>
> = Background =
>
> - table with big jsonb documents
> - gin index on these documents
> -
Thanks for the explanation!
Best Regards,
Marc-Olaf
> big_jsonb @> '[{"x": 1, "filler": "cfcd208495d565ef66e7dff9f98764da"}]';
> I wonder why bitmap heap scan adds such a big amount of time on top of the
> plain bitmap index scan.
> It seems to me, that the recheck is active although all blocks are exact
> [1] and that pg is loading the jsonb for
On Tue, Aug 4, 2015 at 8:40 PM, Ram N yrami...@gmail.com wrote:
Thanks much for responding guys. I have tried both, building multi column
indexes and GIST, with no improvement. I have reduced the window from 180
days to 30 days and below are the numbers
Composite index - takes 30 secs
With Btree indexing - takes 9 secs
With GIST - takes 30 secs with kind
On Thu, Jul 30, 2015 at 4:51 AM, Ram N yrami...@gmail.com wrote:
select sum(a), count(id), a.ts, st from table1 a, table2 b where a.ts >
b.start_date and a.ts < b.end_date and a.ts > '2015-01-01 20:50:44.00
+00:00:00' and a.ts < '2015-07-01 19:50:44.00 +00:00:00' group by a.ts,
st order
On Fri, Jul 31, 2015 at 10:55 AM, Ram N yrami...@gmail.com wrote:
Thanks Qingqing for responding. That didn't help. It in fact increased the
scan time. Looks like a lot of time is being spent on the NestedLoop Join
than index lookups though I am not sure how to optimize the join.
Good news
On Fri, Jul 31, 2015 at 3:06 PM, Matheus de Oliveira
matioli.math...@gmail.com wrote:
CREATE INDEX ON table2 USING gin (tstzrange(start_date, end_date,
'()'));
The index should be USING GIST, not GIN. Sorry.
--
Matheus de Oliveira
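Matheus's correction can be sketched end to end (table and column names as quoted in the thread; the rewritten WHERE clause is an assumption about how the range index would be matched):

```sql
-- GiST (not GIN) supports the range containment operators:
CREATE INDEX ON table2 USING gist (tstzrange(start_date, end_date, '()'));

-- The join condition "a.ts > b.start_date AND a.ts < b.end_date"
-- can then be rewritten to match the index expression exactly:
SELECT count(*)
FROM table1 a
JOIN table2 b
  ON tstzrange(b.start_date, b.end_date, '()') @> a.ts
WHERE a.ts > '2015-01-01 20:50:44+00'
  AND a.ts < '2015-07-01 19:50:44+00';
```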
Thanks Qingqing for responding. That didn't help. It in fact increased the
scan time. Looks like a lot of time is being spent on the NestedLoop Join
than index lookups though I am not sure how to optimize the join. I am
assuming its in memory join, so I am not sure why it should take such a lot
of
On Thu, Jul 30, 2015 at 12:51 AM, Ram N yrami...@gmail.com wrote:
-> Index Scan using end_date_idx on public.table2 b
(cost=0.43..23181.37 rows=345833 width=52) (actual time=0.063..622.274
rows=403936 loops=181)
Output: b.serial_no, b.name, b.st, b.end_date,
Thanks for the feedback, everybody.
I spent a couple of days trying to optimise this;
As mentioned, the increased memory is not an option for me, as this query
is part of a report that can be run by any user on an ad hoc basis.
Allocating the required memory to any session on demand is not
2014-08-28 11:50 GMT+03:00 gmb gmbou...@gmail.com:
It seems as if the planner is not using the PRIMARY KEY as index which was
my assumption.
Can you send `EXPLAIN (analyze, buffers)` for your query instead?
It'll show exactly what's going on.
--
Victor Y. Yegorov
Can you send `EXPLAIN (analyze, buffers)` for your query instead?
It'll show exactly what's going on.
GroupAggregate (cost=303425.31..339014.43 rows=136882 width=48) (actual
time=4708.181..6688.699 rows=287268 loops=1)
Buffers: shared read=23899, temp read=30974 written=30974
-> Sort
2014-08-28 12:08 GMT+03:00 gmb gmbou...@gmail.com:
GroupAggregate (cost=303425.31..339014.43 rows=136882 width=48) (actual
time=4708.181..6688.699 rows=287268 loops=1)
Buffers: shared read=23899, temp read=30974 written=30974
-> Sort (cost=303425.31..306847.34 rows=1368812 width=48)
On Thu, Aug 28, 2014 at 11:50 AM, gmb gmbou...@gmail.com wrote:
Can somebody please confirm whether aggregate functions such as GROUP BY
should use indexes ?
Yes, if the planner deems it faster than other approaches. It can make
wrong choices for many reasons, but usually when your planner
Thanks for these suggestions.
Unfortunately, I don't have a lot of memory available (65 connections,
work_mem = 64MB in pg conf).
I think an index will be of no help here, as (1) you're reading the whole table
anyway and (2) `amount` is not part of your index.
I did not think that the field
2014-08-28 14:29 GMT+03:00 gmb gmbou...@gmail.com:
Unfortunately , I don't have a lot of memory available ( 65 connections ,
work_mem = 64MB in pg conf ).
You don't have to change cluster-wide settings here.
You can issue `SET` command from your client right before running your
query, only
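Victor's suggestion can be sketched as a minimal session-level setting (the 256MB value is illustrative, not from the thread):

```sql
-- Raise work_mem for this session only; this does not change the
-- cluster-wide setting, so other connections are unaffected.
SET work_mem = '256MB';
-- ... run the report query here ...
RESET work_mem;
```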
On 08/28/2014 01:50 AM, gmb wrote:
Can somebody please confirm whether aggregate functions such as GROUP BY
should use indexes ?
Sometimes. In your case, the index has one more column than the GROUP
BY, which makes it less likely that Postgres will use it (since
depending on the cardinality
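A hedged illustration of that point (schema invented, not the poster's): an index whose leading columns exactly match the GROUP BY lets the planner produce a sorted GroupAggregate without a separate Sort node.

```sql
-- Hypothetical schema for illustration only.
CREATE TABLE sales (dept int, region int, amount numeric, note text);
CREATE INDEX sales_dept_region_idx ON sales (dept, region);

-- A GROUP BY matching the index's column prefix can be satisfied by
-- an ordered index scan, if the planner deems it cheaper than a sort:
SELECT dept, region, sum(amount)
FROM sales
GROUP BY dept, region;

-- An index with an extra leading column, e.g. (note, dept, region),
-- would NOT help this GROUP BY, which is the situation described above.
```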
Jenish jenishv...@gmail.com wrote:
I am using postgres 8.4.x
With x being what? On what OS and hardware?
The table has 3 before-insert triggers and one after-insert
trigger.
With all triggers enabled it is inserting only 4-5 records per
second.
But if I disable the after-insert trigger
Hi,
DB: PostgreSQL 8.4.8
OS: Debian
HD: SAS 10k rpm
shared_buffers is 4096 (25% of RAM), effective_cache_size is 8GB (75% of RAM)
The after-insert trigger in turn calls 2 more triggers and inserts records into
another table depending on a condition.
With all triggers enabled there are 8 inserts and 32
Hi,
I have already checked all the statements present in the trigger; none is
taking more than 20 ms.
I am using an 8-processor, quad-core server; CPU utilization is more than 90-95%
for all cores (htop result).
The DB has 960 concurrent users.
io: writing 3-4 MB per second or less (iotop result).
On Mon, Jun 27, 2011 at 9:22 AM, Jenish jenishv...@gmail.com wrote:
Hi All,
I am facing some performance issue with insert into some table.
I am using postgres 8.4.x
The table has 3 before-insert triggers and one after-insert trigger.
With all triggers enabled it is inserting only 4-5
On 27.6.2011 17:58, Jenish wrote:
Hi,
I have already checked all the statements present in the trigger; none
is taking more than 20 ms.
I am using an 8-processor, quad-core server; CPU utilization is more than
90-95% for all cores (htop result).
So all cores are 95% utilized? That means
On 27.6.2011 17:01, Jenish wrote:
Hi,
DB: PostgreSQL 8.4.8
OS: Debian
HD: SAS 10k rpm
shared_buffers is 4096 (25% of RAM), effective_cache_size is 8GB (75% of RAM)
How much data are we talking about? Does that fit into the shared
buffers or is it significantly larger? Do the triggers
Hi ,
This server is the dedicated database server.
And I am testing the limit for concurrent active users. When I am
running my test for 400 concurrent users, i.e. active connections, I am getting
good performance, but when I am running the same test for 950
concurrent users I am
On 27.6.2011 22:14, Jenish wrote:
And I am testing the limit for concurrent active users. When I am
running my test for 400 concurrent users, i.e. active connections, I am
getting good performance, but when I am running the same test
for 950 concurrent users I am getting very bad
Jenish jenishv...@gmail.com wrote:
This server is the dedicated database server.
And I am testing the limit for concurrent active users. When
I am running my test for 400 concurrent users, i.e. active connections,
I am getting good performance, but when I am running the same
test
I have created two tables. In the first table I am using many fields to
store 3 addresses,
while in table b I am using an array data type to store the 3 addresses.
Is there any performance issue I would face with either approach, and which
one would cause a performance problem?
The array is
On Mon, Feb 14, 2011 at 5:36 AM, Pavel Stehule pavel.steh...@gmail.com wrote:
Hello
2011/2/14 dba dba...@gmail.com:
create table a( address1 int,address2 int,address3 int)
create table b(address int[3])
I have created two tables. In the first table i am using many fields to
store 3
Hello
2011/2/14 dba dba...@gmail.com:
create table a( address1 int,address2 int,address3 int)
create table b(address int[3])
I have created two tables. In the first table I am using many fields to
store 3 addresses,
while in table b I am using an array data type to store the 3 addresses. Is
2009/6/17 Mark Steben mste...@autorevenue.com:
A few details – I can always provide more
Could you send:
1. Exact text of query.
2. EXPLAIN ANALYZE output on each machine.
3. VACUUM VERBOSE output on each machine, or at least the last 10 lines.
...Robert
We have two machines. Both running Linux Red Hat, both running Postgres
8.2.5.
Both have nearly identical 125 GB databases. In fact we use PITR recovery
to replicate from one to the other.
I have to ask the obvious question. Do you regularly analyze the machine
you replicate to?
Dave
-Original Message-
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Dave Dutcher
Sent: Wednesday, June 17, 2009 1:39 PM
To: 'Mark Steben'; pgsql-performance@postgresql.org
Cc: 'Rich Garabedian'
Subject: Re: [PERFORM] Performance issue - 2 linux
On 2008-02-05 Viviane Lestic wrote:
QUERY PLAN
-
Sort (cost=2345.54..2345.58 rows=16 width=308) (actual
time=270638.774..270643.142
On Feb 5, 2008 12:47 PM, Viviane Lestic [EMAIL PROTECTED] wrote:
Could someone help me analyze this problem?
I don't manage to see if the problem comes from bad tsearch tuning,
postgresql configuration, or something else...
Can you try to replace zoneindex_test @@ q with zoneindex_test @@
2008/2/5, Ansgar -59cobalt- Wiechers wrote:
Your planner estimates are way off. Try increasing the statistics target
for the columns used in this query and re-analyze the tables after doing
so.
I first set STATISTICS to 1000 for column zoneindex_test and saw no
significant improvement (with a
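The statistics-target adjustment Ansgar suggests is usually done per column, followed by a re-analyze; a sketch using the column name from the thread (the table name `zones` is an assumption, and 1000 is the value the poster says they tried):

```sql
-- Raise the per-column statistics target, then re-analyze so the
-- planner's row estimates are rebuilt from the larger sample.
ALTER TABLE zones ALTER COLUMN zoneindex_test SET STATISTICS 1000;
ANALYZE zones;
```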
Hi Guys,
Following Tom Lane's advice I upgraded to 8.2, and that solved all my
problems. :D
Thank you so much for your input, I really appreciate it.
Kind regards
Willo van der Merwe
In response to Willo van der Merwe [EMAIL PROTECTED]:
Hi Guys,
I have something odd. I have Gallery2 running on PostgreSQL 8.1, and
recently I upgraded to 8.1.9-1.el4s1.1 (64bit). The issue here really is
how do I get PostgreSQL to work with their horrible code. The queries
they generate
Willo van der Merwe [EMAIL PROTECTED] writes:
I have something odd. I have Gallery2 running on PostgreSQL 8.1, and
recently I upgraded to 8.1.9-1.el4s1.1 (64bit). The issue here really is
how do I get PostgreSQL to work with their horrible code. The queries
they generate look something like :
Karl Denninger wrote:
I've got an interesting issue here that I'm running into with 8.2.3
This is an application that has run quite well for a long time, and has
been operating without significant changes (other than recompilation)
since back in the early 7.x Postgres days. But now we're
Karl Denninger [EMAIL PROTECTED] writes:
Not sure where to start here. It appears that I'm CPU limited and the problem
may be that this is a web-served application that must connect to the Postgres
backend for each transaction, perform its queries, and then close the
connection down - in
Looks like that was the problem - got a day under the belt now with the
8.2.4 rev and all is back to normal.
Karl Denninger ([EMAIL PROTECTED])
http://www.denninger.net
Karl Denninger wrote:
Aha!
BIG difference. I won't know for sure until the biz day tomorrow but
the first blush look
Yeah, the problem doesn't appear to be there. As I said, if I look at
the PS of the system when its bogging, there aren't a whole bunch of
processes stuck doing these, so while this does take a second or two to
come back, that's not that bad.
It's GENERAL performance that just bites - the
On 7/25/07, Karl Denninger [EMAIL PROTECTED] wrote:
Yeah, the problem doesn't appear to be there. As I said, if I look at the
PS of the system when its bogging, there aren't a whole bunch of processes
stuck doing these, so while this does take a second or two to come back,
that's not that
Hm. now that's interesting. Stats collector IS accumulating
quite a bit of runtime. me thinks its time to go grab 8.2.4.
Karl Denninger ([EMAIL PROTECTED])
http://www.denninger.net
Merlin Moncure wrote:
On 7/25/07, Karl Denninger [EMAIL PROTECTED] wrote:
Yeah, the problem
Karl Denninger [EMAIL PROTECTED] writes:
Hm. now that's interesting. Stats collector IS accumulating
quite a bit of runtime. me thinks its time to go grab 8.2.4.
I think Merlin might have nailed it --- the stats collector bug is
that it tries to write out the stats file way more
Aha!
BIG difference. I won't know for sure until the biz day tomorrow but
the first blush look is that it makes a HUGE difference in system
load, and I no longer have the stats collector process on the top of the
top list..
Karl Denninger ([EMAIL PROTECTED])
http://www.denninger.net
On Thu, 2006-07-27 at 09:23, Eliott wrote:
Hi!
I hope I'm sending my question to the right list, please don't flame
if it's the wrong one.
I have noticed that while a query runs in about 1.5 seconds on an 8.xx
version PostgreSQL server, on our 7.4.13 it takes around 15-20 minutes.
Since we
On 7/27/06, Eliott [EMAIL PROTECTED] wrote:
Hi!
I hope I'm sending my question to the right list, please don't flame if it's
the wrong one.
I have noticed that while a query runs in about 1.5 seconds on an 8.xx version
PostgreSQL server, on our 7.4.13 it takes around 15-20 minutes. Since we are
try turning off genetic query optimization. regarding the rhel4
issue...does rhel not come with a c compiler? :)
Enterprises are not going to compile. They are going to accept the
latest support by vendor release.
Redhat has a tendency to be incredibly stupid about this particular
area
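The genetic-query-optimization suggestion above can be tried without touching the config file; a minimal sketch:

```sql
-- Disable the genetic query optimizer for this session only, so the
-- standard exhaustive join planner is used even for many-join queries:
SET geqo = off;
```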
Joshua D. Drake wrote:
try turning off genetic query optimization. regarding the rhel4
issue...does rhel not come with a c compiler? :)
Enterprises are not going to compile. They are going to accept the
latest support by vendor release.
Redhat has a tendency to be incredibly stupid
Alvaro Herrera [EMAIL PROTECTED] writes:
Joshua D. Drake wrote:
Enterprises are not going to compile. They are going to accept the
latest support by vendor release.
Redhat has a tendency to be incredibly stupid about this particular
area of their packaging.
Stupid how?
Red Hat feels
Hi !
My company is evaluating making our system (developed in
C++) compatible with PostgreSQL.
Our programmer ran a lot of tests and informed me that the
performance using ODBC is very similar to using libpq, even with a big
number of simultaneous connections/queries. Of course that
Subject: Re: [PERFORM] PERFORMANCE ISSUE ODBC x LIBPQ C++ Application
Peter,
One possibility is to drop all the indexes, do the insert and re-add the
indexes.
The more indexes that exist and the more rows that exist, the more costly
the insert.
Regards,
Joseph
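Joseph's suggestion can be sketched as follows (index and table names hypothetical; building an index once at the end is cheaper than maintaining it row by row):

```sql
-- Drop the index, bulk-insert, then rebuild it once at the end.
DROP INDEX IF EXISTS rooms_item_idx;
-- ... perform the bulk INSERTs here ...
CREATE INDEX rooms_item_idx ON rooms (item);
```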
At 05:48 PM 9/24/2003 +1200, peter wrote:
Hello,
I have been trying to get my Postgres database to
get rid of any unnecessary indexes?
I've found that dropping indexes and re-creating them isn't usually worth the
effort.
Mount the disk with the noatime option, which saves you the time involved in
updating the last access time on files.
Make sure you're doing all the inserts in one transaction.
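The single-transaction advice can be sketched like this (table name hypothetical; the point is that one COMMIT amortizes the per-transaction flush across all rows):

```sql
-- Hypothetical bulk load: one transaction instead of one per row.
BEGIN;
INSERT INTO rooms (item) VALUES ('a');
INSERT INTO rooms (item) VALUES ('b');
-- ... many more rows ...
COMMIT;

-- For large loads, COPY is usually faster still:
-- COPY rooms (item) FROM '/path/to/data.csv' WITH (FORMAT csv);
```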
My statistics (Athlon 1.8GHz):
20,000 items takes on average 0.078 seconds/room
385,000 items takes on average 0.11 seconds/room
690,000 items takes on average 0.270 seconds/room
1,028,000 items takes on average 0.475 seconds/room
[snip]
I am