On 22.3.2015 22:50, Vivekanand Joshi wrote:
Any documentation regarding how to configure postgresql.conf file as per
individual user?
That can't be done in postgresql.conf, but by ALTER ROLE commands.
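For the archives, a sketch of what the ALTER ROLE approach looks like (the role name and values here are made up for illustration):

```sql
-- Per-user overrides are stored in the catalog (pg_db_role_setting),
-- not in postgresql.conf; they apply at the next session start.
ALTER ROLE report_user SET work_mem = '256MB';
ALTER ROLE report_user SET statement_timeout = '5min';

-- Inspect the stored overrides:
SELECT * FROM pg_db_role_setting;

-- Revert a single setting:
ALTER ROLE report_user RESET work_mem;
```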
--
Tomas Vondra    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7
On 21 Mar 2015 13:10, Josh Krupka jkru...@gmail.com wrote:
The other approaches of fixing the estimates, cost params, etc. are the
right way of fixing it. *However*, if you needed a quick fix for just this
report and can't find a way of setting it in Jaspersoft for just the report
(I don't think it will let you run multiple SQL statements by default, maybe
-Original Message-
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Tomas Vondra
Sent: Tuesday, March 17, 2015 9:00 PM
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance issues
On 17.3.2015 16:24, Thomas Kellerer wrote:
Tomas Vondra schrieb
...@comcast.net]
Sent: Thursday, March 19, 2015 12:06 AM
To: vjo...@zetainteractive.com
Cc: Tomas Vondra; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance issues
Vivekanand Joshi vjo...@zetainteractive.com writes:
So, here is the first taste of success, which gives me the
confidence that if properly worked out with good hardware and proper
tuning, PostgreSQL could be a good replacement.
Hi,
On 18.3.2015 18:31, Vivekanand Joshi wrote:
So, here is the first taste of success, which gives me the
confidence that if properly worked out with good hardware and
proper tuning, PostgreSQL could be a good replacement.
Out of the 9 reports which need to be migrated to PostgreSQL,
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Tomas Vondra
Sent: Tuesday, March 17, 2015 8:13 PM
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance issues
Tomas Vondra schrieb am 17.03.2015 um 15:43:
On 17.3.2015 15:19, Thomas Kellerer wrote:
Tomas Vondra schrieb am 17.03.2015 um 14:55:
(2) using window functions, e.g. like this:
SELECT * FROM (
SELECT *,
ROW_NUMBER() OVER (PARTITION BY touchpoint_execution_id
Regards,
Vivek
-Original Message-
From: Vivekanand Joshi [mailto:vjo...@zetainteractive.com]
Sent: Tuesday, March 17, 2015 8:40 PM
To: 'Tomas Vondra'; 'pgsql-performance@postgresql.org'
Subject: RE: [PERFORM] Performance issues
On 17.3.2015 16:10, Vivekanand Joshi wrote:
The confusion for me here is that:
I am getting results from the view in around 3 seconds
(S_V_D_CAMPAIGN_HIERARCHY) and 25 seconds (S_V_F_PROMOTION_HISTORY_EMAIL)
But when I am using these two views in the query as the joining
tables, it
To: Tomas Vondra; vjo...@zetainteractive.com; Scott Marlowe; Varadharajan
Mukundan
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance issues
On 3/16/15 3:59 PM, Tomas Vondra wrote:
On 16.3.2015 20:43, Jim Nasby wrote:
On 3/13/15 7:12 PM, Tomas Vondra wrote:
(4) I suspect
Subject: RE: [PERFORM] Performance issues
Hi Guys,
Next level of query is following:
If this works, I guess 90% of the problem will be solved.
SELECT
COUNT(DISTINCT TARGET_ID)
FROM
S_V_F_PROMOTION_HISTORY_EMAIL PH
INNER
Hi,
On 17.3.2015 08:41, Vivekanand Joshi wrote:
Hi Guys,
Next level of query is following:
If this works, I guess 90% of the problem will be solved.
SELECT
COUNT(DISTINCT TARGET_ID)
FROM
S_V_F_PROMOTION_HISTORY_EMAIL PH
From: Tomas Vondra [mailto:tomas.von...@2ndquadrant.com]
Sent: Tuesday, March 17, 2015 5:15 PM
To: vjo...@zetainteractive.com; Jim Nasby; Scott Marlowe; Varadharajan
Mukundan
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance issues
On 17.3.2015 12:07, Vivekanand Joshi wrote:
EXPLAIN ANALYZE didn't give a result even after three hours.
In that case the only thing you can do is 'slice' the query into smaller
parts (representing subtrees of the plan), and analyze those first. Look
for misestimates (significant differences
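A sketch of that slicing approach, using the two views named elsewhere in this thread (the join condition is an assumption based on the window-function discussion):

```sql
-- Time each view on its own first.
EXPLAIN ANALYZE SELECT * FROM S_V_D_CAMPAIGN_HIERARCHY;
EXPLAIN ANALYZE SELECT * FROM S_V_F_PROMOTION_HISTORY_EMAIL;

-- Then the join of just the two views, comparing the planner's
-- estimated row counts against the actual rows on each plan node.
EXPLAIN ANALYZE
SELECT *
FROM S_V_D_CAMPAIGN_HIERARCHY ch
JOIN S_V_F_PROMOTION_HISTORY_EMAIL ph
  ON ph.touchpoint_execution_id = ch.touchpoint_execution_id;
```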
Hi Tomas,
This is what I am getting,
EXPLAIN ANALYZE SELECT * FROM s_f_promotion_history WHERE (send_dt =
'2014-03-13 00:00:00');
QUERY PLAN
Just as I feared, the attached explain analyze results show significant
misestimates, like this for example:
Nested Loop (cost=32782.19..34504.16 rows=1 width=16)
(actual time=337.484..884.438 rows=46454 loops=1)
Nested Loop (cost=18484.94..20366.29 rows=1 width=776)
(actual
Tomas Vondra schrieb am 17.03.2015 um 14:55:
(2) using window functions, e.g. like this:
SELECT * FROM (
SELECT *,
ROW_NUMBER() OVER (PARTITION BY touchpoint_execution_id
ORDER BY max_creation_dt) AS rn
FROM
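Filled out, the window-function rewrite would take roughly this shape (the table name and the DESC ordering are assumptions; only the partition and ordering columns appear in the thread):

```sql
-- Keep the single newest row per touchpoint_execution_id.
SELECT *
FROM (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY touchpoint_execution_id
                              ORDER BY max_creation_dt DESC) AS rn
    FROM s_f_touchpoint_execution_status_history  -- name assumed
) t
WHERE rn = 1;
```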
On 16.3.2015 20:43, Jim Nasby wrote:
On 3/13/15 7:12 PM, Tomas Vondra wrote:
(4) I suspect many of the relations referenced in the views are not
actually needed in the query, i.e. the join is performed but
then it's just discarded because those columns are not used.
Try to
On 15/03/15 10:23, Varadharajan Mukundan wrote:
Hi Gavin,
Vivekanand in his first mail itself mentioned the below configuration
of postgresql.conf. It looks good enough to me.
Total Memory : 8 GB
shared_buffers = 2GB
work_mem = 64MB
Hey guys, thanks a lot.
This is really helping. I am learning a lot. BTW, I changed the CTE into a
subquery and it improved the performance by miles. I am getting the result
in less than 3 seconds, though I am using a 24 GB RAM server. It is still a
great turnaround time as compared to previous
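The CTE-to-subquery speedup is consistent with WITH being an optimization fence in PostgreSQL of that era: a CTE was always materialized in full, so filters could not be pushed into it. A sketch of the difference, using table and column names from earlier in the thread:

```sql
-- Before: the whole CTE is materialized, then filtered.
WITH ph AS (
    SELECT * FROM s_f_promotion_history
)
SELECT COUNT(DISTINCT target_id)
FROM ph
WHERE send_dt >= '2014-03-13';

-- After: written directly against the table (or as a plain subquery),
-- the planner can push the send_dt predicate down and use an index.
SELECT COUNT(DISTINCT target_id)
FROM s_f_promotion_history
WHERE send_dt >= '2014-03-13';
```

(Since PostgreSQL 12, a non-recursive CTE referenced once is inlined automatically, so the rewrite matters mainly on older versions like the ones in this thread.)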
On 16.3.2015 18:49, Marc Mamin wrote:
Hi Team,
This is the EXPLAIN ANALYZE for one of the view : S_V_D_CAMPAIGN_HIERARCHY:
FWIW, this is a somewhat more readable version of the plan:
http://explain.depesz.com/s/nbB
In the future, please do two things:
(1) Attach the plan as a text
On 3/13/15 7:12 PM, Tomas Vondra wrote:
(4) I suspect many of the relations referenced in the views are not
actually needed in the query, i.e. the join is performed but
then it's just discarded because those columns are not used.
Try to simplify the views as much as possible -
On 14/03/15 13:12, Tomas Vondra wrote:
On 14.3.2015 00:28, Vivekanand Joshi wrote:
Hi Guys,
So here is the full information attached as well as in the link
provided below:
http://pgsql.privatepaste.com/41207bea45
I can provide new information as well.
Thanks.
We still don't have EXPLAIN
Hi Gavin,
Vivekanand in his first mail itself mentioned the below configuration
of postgresql.conf. It looks good enough to me.
Total Memory : 8 GB
shared_buffers = 2GB
work_mem = 64MB
maintenance_work_mem = 700MB
effective_cache_size = 4GB
On Sat, Mar 14, 2015 at 10:06 PM, Gavin Flower
We might even consider taking experts' advice on how to tune queries and
the server, but if postgres is going to behave like this, I am not sure we would
be able to continue with it.
Having said that, I would say again that I am completely new to this
territory, so I might miss lots and lots of
I am really worried about the performance of PostgreSQL as we have almost
1.5 billion records in the promotion history table. Do you guys really think
PostgreSQL can handle this much load? We have fact tables which are more
than 15 GB in size and we have to make joins with those tables in almost
every
Since I was doing it only for the testing purposes and on a development
server which has only 8 GB of RAM, I used only 10m rows. But the original
table has 1.5 billion rows. We will obviously be using a server with very
high capacity, but I am not satisfied with the performance at all. This
might
On 13.3.2015 21:46, Vivekanand Joshi wrote:
Since I was doing it only for the testing purposes and on a
development server which has only 8 GB of RAM, I used only 10m rows.
But the original table has 1.5 billion rows. We will obviously be
using a server with very high capacity, but I am not
Hi,
On 13.3.2015 20:59, Vivekanand Joshi wrote:
I am really worried about the performance of PostgreSQL as we have
almost 1.5 billion records in promotion history table. Do you guys
In the previous message you claimed the post table has 10M rows ...
really think PostgreSQL can handle this
, March 14, 2015 3:56 AM
To: Varadharajan Mukundan
Cc: vjo...@zetainteractive.com; Tomas Vondra;
pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance issues
On Fri, Mar 13, 2015 at 4:03 PM, Varadharajan Mukundan
srinath...@gmail.com wrote:
We might even consider taking experts' advice on how to tune queries and
the server, but if postgres is going to behave like this, I am not sure we would
be able to continue with it.
Having said that, I would say
On 14.3.2015 00:28, Vivekanand Joshi wrote:
Hi Guys,
So here is the full information attached as well as in the link
provided below:
http://pgsql.privatepaste.com/41207bea45
I can provide new information as well.
Thanks.
We still don't have EXPLAIN ANALYZE - how long was the query
If the s_f_promotion_history table will have explosive growth, then it's
worth considering partitioning by date and using constraint exclusion to
speed up the queries. Else, it makes sense to get started with multiple
partial indexes (like, have an index for each week or something like that). You
may
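A sketch of both options (the bounds and predicates are illustrative; the declarative partitioning syntax needs PostgreSQL 10+, while on the 9.x versions in this thread it would be inheritance tables plus CHECK constraints):

```sql
-- Option 1: range-partition the fact table by date.
CREATE TABLE s_f_promotion_history_p (
    promo_hist_id bigint,
    target_id     bigint,
    send_dt       timestamp
) PARTITION BY RANGE (send_dt);

CREATE TABLE s_f_promotion_history_2014_03
    PARTITION OF s_f_promotion_history_p
    FOR VALUES FROM ('2014-03-01') TO ('2014-04-01');

-- Option 2: partial indexes on the flat table, one per hot range.
CREATE INDEX promo_hist_2014w11_idx
    ON s_f_promotion_history (send_dt)
    WHERE send_dt >= '2014-03-10' AND send_dt < '2014-03-17';
```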
Hi Team,
I am a novice to this territory. We are trying to migrate a few Jasper
reports from Netezza to PostgreSQL.
I have one report ready with me, but queries are taking too much time. To be
honest, it is not giving any result most of the time.
The same query in Netezza is running in less
Hi Vivekanand,
From the query plan, we can see that good amount of time is spent in this
line
- Seq Scan on public.s_f_promotion_history base (cost=0.00..283334.00
rows=1296 width=74)
Output: base.promo_hist_id, base.target_id,
base.audience_member_id,
10 million records in s_f_promotion_history table.
*From:* Varadharajan Mukundan [mailto:srinath...@gmail.com]
*Sent:* Friday, March 13, 2015 6:29 PM
*To:* vjo...@zetainteractive.com
*Cc:* pgsql-performance@postgresql.org
*Subject:* Re: [PERFORM] Performance issues
Hi. Thanks for the comments. My data is right, and the result is exactly
what I want, but as you say I think what causes the query to be slow is the
ST_Intersection which creates the intersection between the vector grid
(fishnet) and the country polygons.
I will check with the postgis user list if
I have seen really complex geometries cause problems. If you have
thousands of points, when 10 would do, try ST_Simplify and see if it
doesn't speed things up.
-Andy
On 3/8/2011 2:42 AM, Andreas Forø Tollefsen wrote:
Hi. Thanks for the comments. My data is right, and the result is exactly
Andy, thanks. That is a great tip. I tried it but I get the error:
NOTICE: ptarray_simplify returned a 2 pts array.
Query:
SELECT ST_Intersection(priogrid_land.cell,
ST_Simplify(cshapeswdate.geom,0.1)) AS geom,
priogrid_land.gid AS divider, gwcode, gwsyear, gweyear, startdate, enddate,
capname,
Forgot to mention that the query terminates the connection because of a
crash of the server process.
2011/3/8 Andreas Forø Tollefsen andrea...@gmail.com
Thanks, Ken.
It seems like the tip to turn off synchronous_commit did the trick:
/usr/lib/postgresql/8.4/bin/pgbench -T 60 test1
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 1
duration: 60 s
number of transactions actually
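For reference, synchronous_commit doesn't have to be turned off globally; it can be scoped to a session or a single transaction, trading a small window of lost (but not corrupting) transactions for commit latency:

```sql
-- Session scope: commits return before the WAL flush completes.
SET synchronous_commit = off;

-- Or limited to one transaction:
BEGIN;
SET LOCAL synchronous_commit = off;
-- ... bulk load ...
COMMIT;
```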
Ok. Cheers. I will do some more testing on my heavy PostGIS queries which
often take hours to complete.
Thanks.
Andreas
2011/3/7 Kenneth Marshall k...@rice.edu
On Mon, 7 Mar 2011, Andreas Forø Tollefsen wrote:
Ok. Cheers. I will do some more testing on my heavy PostGIS queries which
often take hours to complete.
I'd like to see hours-long queries :) EXPLAIN ANALYZE
Turning synchronous_commit off increased the TPS, but not the speed of the
query below.
Oleg:
This is a query i am working on now. It creates an intersection of two
geometries. One is a grid of 0.5 x 0.5 decimal degree sized cells, while the
other is the country geometries of all countries in the
Andreas Forø Tollefsen andrea...@gmail.com writes:
This is a query i am working on now. It creates an intersection of two
geometries. One is a grid of 0.5 x 0.5 decimal degree sized cells, while the
other is the country geometries of all countries in the world for a certain
On Fri, Jul 2, 2010 at 1:40 AM, Sachin Kumar sachin...@globallogic.com wrote:
At times we have observed that postgres stops responding for several
minutes, even couldn’t fetch the number of entries in a particular table.
One such instance happens when we execute the following steps:
Sounds
Hi,
We are using postgresql-8.4.0 on a 64-bit Linux machine (openSUSE 11.x).
It's a master/slave deployment; slony-2.0.4.rc2 is used for DB
replication on the slave box.
At times we have observed that postgres stops responding for several
minutes, even couldn't fetch the number of entries
On 29/06/10 15:01, Sachin Kumar wrote:
At times we have observed that postgres stops responding for several
minutes, even couldn't fetch the number of entries in a particular
table.
Quick guess: checkpoints. Enable checkpoint logging, follow the logs,
see if there's any correspondence.
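The suggested logging is a postgresql.conf change (the completion-target value is illustrative, not from this thread):

```
# postgresql.conf
log_checkpoints = on                 # log start/end of each checkpoint
checkpoint_completion_target = 0.9   # spread checkpoint I/O out over time
```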
Hi,
We are using postgresql-8.4.0 on a 64-bit Linux machine (openSUSE 11.x).
It's a master/slave deployment; slony-2.0.4.rc2 is used for DB
replication (from master to slave).
At times we have observed that postgres stops responding for several
minutes, even couldn't fetch the number of
On Fri, Jul 2, 2010 at 1:40 AM, Sachin Kumar sachin...@globallogic.com wrote:
Hi,
We are using postgresql-8.4.0 on 64-bit Linux machine (open-SUSE 11.x). It’s
a master/slave deployment slony-2.0.4.rc2 is used for DB replication (from
master to slave).
You should really be running 8.4.4, not
On Wed, May 12, 2010 at 1:45 AM, venu madhav venutaurus...@gmail.com wrote:
[Venu] Yes, autovacuum is running every hour. I could see in the log
messages. All the configurations for autovacuum are disabled except that it
should run for every hour. This application runs on an embedded box, so
On Wed, May 12, 2010 at 3:17 AM, Jorge Montero
jorge_mont...@homedecorators.com wrote:
First, are you sure you are getting autovacuum to run hourly? Autovacuum
will only vacuum when certain configuration thresholds are reached. You can
set it to only check for those thresholds every so
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Performance issues when the number of records are around
10 Million
Hi all,
In my database application, I've a table whose records can reach 10M
and insertions can happen at a faster rate like 100 insertions per second
On Wed, May 12, 2010 at 5:25 PM, Kevin Grittner kevin.gritt...@wicourts.gov
wrote:
venu madhav wrote:
AND e.timestamp >= '1270449180'
AND e.timestamp < '1273473180'
ORDER BY
e.cid DESC,
e.cid DESC
limit 21
offset 10539780
The second column acts as a secondary key for sorting
On Wed, May 12, 2010 at 7:26 PM, Kevin Grittner kevin.gritt...@wicourts.gov
wrote:
venu madhav venutaurus...@gmail.com wrote:
If the records are more in the interval,
How do you know that before you run your query?
I calculate the count first.
This and other comments suggest that
venu madhav venutaurus...@gmail.com wrote:
Kevin Grittner kevin.gritt...@wicourts.gov wrote:
I calculate the count first.
This and other comments suggest that the data is totally static
while this application is running. Is that correct?
No, the data gets added when the application is
venu madhav wrote:
AND e.timestamp >= '1270449180'
AND e.timestamp < '1273473180'
ORDER BY
e.cid DESC,
e.cid DESC
limit 21
offset 10539780
The second column acts as a secondary key for sorting if the
primary sorting key is a different column. For this query both of
them are same.
On 5/12/10 4:55 AM, Kevin Grittner wrote:
venu madhav wrote:
we display in sets of 20/30 etc. The user also has the option to
browse through any of those records hence the limit and offset.
Have you considered alternative techniques for paging? You might
use values at the edges of the page
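Kevin's suggestion is what is now called keyset (seek) pagination: instead of a huge OFFSET, remember the sort-key value at the edge of the last page and seek past it. A sketch against the query quoted in this thread (the literal cid is illustrative):

```sql
-- OFFSET 10539780 scans and throws away ~10.5M rows on every page.
-- Seeking from the previous page's last cid touches only 21 rows:
SELECT e.cid, e.timestamp
FROM event e
WHERE e.timestamp >= '1270449180'
  AND e.timestamp <  '1273473180'
  AND e.cid < 123456           -- last cid shown on the previous page
ORDER BY e.cid DESC
LIMIT 21;
-- An index on (cid) makes the seek cheap.
```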
Hi all,
In my database application, I've a table whose records can reach 10M
and insertions can happen at a faster rate like 100 insertions per second in
the peak times. I configured postgres to do auto vacuum on hourly basis. I
have frontend GUI application in CGI which displays the data
venu madhav venutaurus...@gmail.com wrote:
When I try to get the last twenty records from the database, it
takes around 10-15 mins to complete the operation.
Making this a little easier to read (for me, at least) I get this:
select e.cid, timestamp, s.sig_class, s.sig_priority,
* select e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name,
e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,
e.wifi_addr_2, e.view_status, bssid FROM event e, signature s WHERE
s.sig_id = e.signature AND e.timestamp >= '1270449180' AND e.timestamp
< '1273473180' ORDER BY
Hi Tom, Greg,
Thanks for your helpful suggestions - switching the BIGINT to FLOAT
and fixing the postgresql.conf to better match my server configuration
gave me about 30% speedup on the queries.
Because of the fact that my data insert order was almost never the
data retrieval order, I also got a
On Wed, 26 Aug 2009, Hrishikesh wrote:
key = {device_id (uint64), identifier (uint32), sub_identifier (uint32),
unix_time} (these four taken together are unique)
You should probably tag these fields as NOT NULL to eliminate needing to
consider that possibility during
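Greg's NOT NULL suggestion, sketched against the key fields listed above (the table name is made up):

```sql
ALTER TABLE stats_300s
    ALTER COLUMN device_id      SET NOT NULL,
    ALTER COLUMN identifier     SET NOT NULL,
    ALTER COLUMN sub_identifier SET NOT NULL,
    ALTER COLUMN unix_time      SET NOT NULL;

-- Since the four columns are unique together, a primary key both
-- documents that and gives the planner a unique index to work with.
ALTER TABLE stats_300s
    ADD PRIMARY KEY (device_id, identifier, sub_identifier, unix_time);
```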
Hi All,
We are improving our network appliance monitoring system, and are evaluating
using PostgreSQL as the back-end traffic statistics database (we're
currently running a home-grown Berkeley-DB based statistics database).
We log data from various network elements (it's mainly in/out bytes and
Hrishikesh hashincl...@gmail.com writes:
In my timing tests, the performance of PG is quite a lot worse than the
equivalent BerkeleyDB implementation.
Are you actually comparing apples to apples?
Hi Tom,
Thanks for your quick response.
2009/8/26 Tom Lane t...@sss.pgh.pa.us
hashincl...@gmail.com writes:
In my timing tests, the performance of PG is quite a lot worse than the
equivalent BerkeleyDB implementation.
Are you actually comparing apples to apples? I don't recall that BDB
Hrishikesh hashincl...@gmail.com writes:
2009/8/26 Tom Lane t...@sss.pgh.pa.us
Do the data columns have to be bigint, or would int be enough to hold
the expected range?
For the 300-sec tables I
2009/8/26 Tom Lane t...@sss.pgh.pa.us:
How does a float (REAL) compare in terms of SUM()s ?
Casting to float or float8 is certainly a useful alternative if you
don't mind the potential for roundoff error. On any non-ancient
platform those will be considerably faster than numeric. BTW, I
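In query form, Tom's suggestion looks like this (table and column names are hypothetical; the thread only says the data is in/out byte counters keyed by device):

```sql
-- SUM() over numeric pays arbitrary-precision cost on every row;
-- casting to float8 accepts roundoff in exchange for speed.
SELECT device_id,
       SUM(bytes_in::float8)  AS bytes_in,
       SUM(bytes_out::float8) AS bytes_out
FROM traffic_stats_300s
GROUP BY device_id;
```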
Hi
I am investigating migrating from postgres 743 to postgres 826 but
although the performance in postgres 826 seems to be generally better
there are some instances where it seems to be markedly worse, a factor
of up to 10. The problem seems to occur when I join to more than 4
tables. Has
Matthew Lunnon [EMAIL PROTECTED] writes:
In this case the query takes 6.037 ms to run on 862 and 2.332 to run on 743.
The difference between 2ms and 6ms is pretty negligible. A single context
switch or disk cache miss could throw the results off by that margin in either
direction.
But what
Scott Marlowe wrote:
Whatever email agent you're using seems to be quoting in a way that
doesn't get along well with gmail, so I'm just gonna chop most of it
rather than have it quoted confusingly... Heck, I woulda chopped a
lot anyway to keep it small. :)
Thanks again for your time. I'm
Hi Scott,
Thanks for your time
Regards
Matthew
Scott Marlowe wrote:
On Jan 28, 2008 5:41 AM, Matthew Lunnon [EMAIL PROTECTED] wrote:
Hi
I am investigating migrating from postgres 743 to postgres 826 but
although the performance in postgres 826 seems to be generally better
there are some
On Jan 28, 2008 5:41 AM, Matthew Lunnon [EMAIL PROTECTED] wrote:
Hi
I am investigating migrating from postgres 743 to postgres 826 but
although the performance in postgres 826 seems to be generally better
there are some instances where it seems to be markedly worse, a factor
of up to 10.
Whatever email agent you're using seems to be quoting in a way that
doesn't get along well with gmail, so I'm just gonna chop most of it
rather than have it quoted confusingly... Heck, I woulda chopped a
lot anyway to keep it small. :)
On Jan 28, 2008 9:27 AM, Matthew Lunnon [EMAIL PROTECTED]
Hi Gregory/All,
Thanks for your time.
Yes, the difference is pretty small but does seem to be consistent. The
problem that I have is that this is just part of the query; I have tried
to break things down so that I can see where the time is being spent. I
set the default_statistics_target to
/ session / query?
Gregory
-Original Message-
From: Jim C. Nasby [mailto:[EMAIL PROTECTED]
Sent: Thursday, May 04, 2006 12:47 PM
To: Gregory Stewart
Cc: Mark Kirkwood; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance Issues on Opteron Dual Core
All the machines I've
FWIW, I've found problems running PostgreSQL on Windows in a
multi-CPU environment on w2k3. It runs fine for some period, and
then CPU and throughput drop to zero. So far I've been unable to
track down any more information than that, other than the fact that
I
:[EMAIL PROTECTED]
Sent: Tuesday, May 02, 2006 3:29 PM
To: Mark Kirkwood
Cc: Gregory Stewart; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance Issues on Opteron Dual Core
On Sun, Apr 30, 2006 at 10:59:56PM +1200, Mark Kirkwood wrote:
Pgadmin can give misleading times
On Wed, May 03, 2006 at 09:29:15AM +0200, Magnus Hagander wrote:
FWIW, I've found problems running PostgreSQL on Windows in a
multi-CPU environment on w2k3. It runs fine for some period, and
then CPU and throughput drop to zero. So far I've been unable to
track down any more
But I don't really know a whole lot about it.
-Original Message-
From: Mark Kirkwood [mailto:[EMAIL PROTECTED]
Sent: Sunday, April 30, 2006 7:04 PM
To: Gregory Stewart
Cc: Theodore Loscalzo
Subject: Re: [PERFORM] Performance Issues on Opteron Dual Core
Gregory Stewart wrote:
Theodore
; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance Issues on Opteron Dual Core
On Sun, Apr 30, 2006 at 10:59:56PM +1200, Mark Kirkwood wrote:
Pgadmin can give misleading times for queries that return large result
sets over a network, due to:
1/ It takes time to format the (large
FWIW, I've found problems running PostgreSQL on Windows in a
multi-CPU environment on w2k3. It runs fine for some period, and
then CPU and throughput drop to zero. So far I've been unable to
track down any more information than that, other than the fact that
I haven't been
On Wednesday 03 May 2006 03:29, Magnus Hagander wrote:
FWIW, I've found problems running PostgreSQL on Windows in a
multi-CPU environment on w2k3. It runs fine for some period, and
then CPU and throughput drop to zero. So far I've been unable to
track down any more information than
On Sun, Apr 30, 2006 at 10:59:56PM +1200, Mark Kirkwood wrote:
Pgadmin can give misleading times for queries that return large result
sets over a network, due to:
1/ It takes time to format the (large) result set for display.
2/ It has to count the time spent waiting for the (large) result
On Tuesday 02 May 2006 16:28, Jim C. Nasby wrote:
On Sun, Apr 30, 2006 at 10:59:56PM +1200, Mark Kirkwood wrote:
Pgadmin can give misleading times for queries that return large result
sets over a network, due to:
1/ It takes time to format the (large) result set for display.
2/ It has
On Tue, May 02, 2006 at 06:49:48PM -0400, Jan de Visser wrote:
On Tuesday 02 May 2006 16:28, Jim C. Nasby wrote:
On Sun, Apr 30, 2006 at 10:59:56PM +1200, Mark Kirkwood wrote:
Pgadmin can give misleading times for queries that return large result
sets over a network, due to:
1/ It