Hi all,
Brett Wooldridge, the creator of HikariCP [1] - a high performance
Java connection pool - is contemplating changing the way pooling is
done in HikariCP to a fixed-size pool of connections that is always open.
No maxPoolSize, no minIdle, no minPoolSize, just a poolSize parameter
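For illustration, the fixed-size idea can be sketched with a blocking queue. This is only a sketch of the concept, not HikariCP's actual implementation; the `connect` callable stands in for a real driver call:

```python
import queue

class FixedSizePool:
    """Minimal sketch of a fixed-size pool: every connection is opened
    up front and stays open, so there is no maxPoolSize/minIdle
    distinction, just one size."""

    def __init__(self, size, connect):
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(connect())  # open all connections eagerly

    def acquire(self, timeout=None):
        # Block until a connection is free instead of opening a new one;
        # raises queue.Empty if the timeout expires.
        return self._idle.get(timeout=timeout)

    def release(self, conn):
        self._idle.put(conn)

# Usage with a dummy "connection" factory:
pool = FixedSizePool(3, connect=object)
conn = pool.acquire()
pool.release(conn)
```

Note how exhausting the pool makes callers wait rather than opening new connections, which is the whole point of the fixed-size design.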
On Fri, Mar 21, 2014 at 4:49 PM, David Johnston pol...@yahoo.com wrote:
Consider this train of thought: no matter how large the pool size, if you
are constantly keeping, say, 90% of the connections actively working, then
having, on average, 10% of the connections sitting idle is probably not
Hi Tom,
On Fri, Mar 21, 2014 at 5:17 PM, Tom Lane t...@sss.pgh.pa.us wrote:
It will cost you, in ProcArray scans for example. But lots-of-idle-
connections is exactly what a pooler is supposed to prevent. If you have
a server that can handle say 10 active queries, you should have a pool
Hi Sethu,
On Fri, Mar 21, 2014 at 6:51 PM, Sethu Prasad sethuprasad...@gmail.com wrote:
So here connection creation is the costliest of all the
management tasks, so keeping the connections ready is the best option.
That's why you often have a minIdle parameter which allows
(cough cough, missed the Reply to all button)
Hi Jeff,
On Sat, Dec 8, 2012 at 3:32 AM, Jeff Janes jeff.ja...@gmail.com wrote:
If those estimates are better, it probably means that your filter
condition is picking a part of the el JOIN l that has much different
selectivity to r than the full
Hi,
I've been struggling with a query for some time and the major problem of
the query is that the statistics are way off for a particular operation:
- Nested Loop (cost=3177.72..19172.84 rows=*2* width=112) (actual
time=139.221..603.929 rows=*355331* loops=1)
Join Filter: (l.location_id =
Hi Nicolas,
On Wed, Jul 7, 2010 at 10:47 AM, JOUANIN Nicolas (44)
nicolas.joua...@dgfip.finances.gouv.fr wrote:
There were no modification made on the database except a restart yesterday
evening and a vacuumdb --analyse ran at night.
It's not really surprising considering you probably kept
2009/8/27 Gaël Le Mignot g...@pilotsystems.net:
The weird thing was that with the default of 100 for statistics
target, it was worse than when we moved back to 10. So I didn't try
with 1000, but I should have.
When you have so much data and a statistics target so low, you can't
On Wed, Aug 26, 2009 at 6:29 PM, Tom Lane t...@sss.pgh.pa.us wrote:
g...@pilotsystems.net (Gaël Le Mignot) writes:
So it seems it was quite wrong about the estimated matching rows (192
predicted, 10222 real).
Yup. What's even more interesting is that it seems the real win
Hi Gaël,
On Fri, Aug 21, 2009 at 3:37 PM, Gaël Le Mignot g...@pilotsystems.net wrote:
With 8.3 ::
Limit (cost=752.67..752.67 rows=1 width=24)
(11 rows)
With 8.4 ::
(8 rows)
Could you provide us the EXPLAIN *ANALYZE* output of both plans?
From what I can see, one of the differences is
On Thu, Mar 12, 2009 at 2:05 AM, Andrew Dunstan and...@dunslane.net wrote:
It is buffered at the individual log message level, so that we make sure we
don't multiplex messages. No more than that.
OK. So if the OP can afford multiplexed queries by using a log
analyzer supporting them, it might
On Wed, Mar 11, 2009 at 8:27 PM, Frank Joerdens fr...@joerdens.de wrote:
This works much better but once we are at about 80% of peak load -
which is around 8000 transactions per second currently - the server goes
into a tailspin in the manner described above and we have to switch off full
On Mon, Mar 9, 2009 at 5:51 PM, Guillaume Cottenceau g...@mnc.ch wrote:
Until it's possible to specifically tell the JDBC driver (and/or
PG?) to not plan once for all runs (or is there something better
to think of?), or the whole thing would be more clever (off the
top of my head, PG could try
On Tue, Feb 24, 2009 at 12:27 AM, Scott Marlowe scott.marl...@gmail.com wrote:
If it's not C then string compares are going to probably need special
indexes to work the way you expect them. (varchar pattern ops). Look
here for more information:
On Wed, Dec 31, 2008 at 5:01 PM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
I think it has been shown enough times that the performance drop caused
by a worse plan can be orders of magnitudes worse than what's gained by
producing the plan only once. It does not seem a bad idea to provide
On Thu, Jan 1, 2009 at 9:24 PM, da...@lang.hm wrote:
forgive my ignorance here, but if it's unnamed how can you reference it
later to take advantage of the parsing?
You can't. That's what unnamed prepared statements are for.
It's not obvious to me that the parsing phase is worth any caching.
On Tue, Dec 30, 2008 at 7:59 PM, bricklen brick...@gmail.com wrote:
I would like to continue to use bind variables to prevent sql
injection, but I'd like to force a plan re-parse for every single
query (if necessary?)
As far as I understand your problem, you don't care about using
prepared
On Mon, Jul 21, 2008 at 1:25 PM, Andreas Hartmann [EMAIL PROTECTED] wrote:
SELECT pg_database.datname,
pg_size_pretty(pg_database_size(pg_database.datname)) AS size
FROM pg_database where pg_database.datname = 'vvz_live_1';
datname| size
---+-
vvz_live_1|
On Thu, Jun 26, 2008 at 9:02 AM, Frank Joerdens [EMAIL PROTECTED] wrote:
Which version are you using? 8.3 removes a lot of implicit casts (all?
not sure), so this may already be your fix.
8.3 only removed implicit casts from non-text types to text (date ->
text, int -> text, interval -> text...)
Hi -performance,
I ran into a performance problem this morning when we imported a
dump into an 8.1 database.
The table is 5 million rows large and when the dump creates an index
on a specific text column called clazz, it takes 27 minutes while on
the other columns, it only takes a couple of
On Thu, May 22, 2008 at 3:14 PM, Tom Lane [EMAIL PROTECTED] wrote:
Do you have maintenance_work_mem set large enough that the index
creation sort is done in-memory? 8.1 depends on the platform's qsort
and a lot of them are kinda pessimal for input like this.
FWIW, it's a 32-bit CentOS 4.6
On Thu, May 22, 2008 at 9:18 PM, Tom Lane [EMAIL PROTECTED] wrote:
Ermm .. this is in fact mostly broken in 8.3.0 and 8.3.1. If you don't
want to wait for 8.3.2, you need this patch:
http://archives.postgresql.org/pgsql-committers/2008-03/msg00566.php
That's what I had in mind. We have to
Antoine,
On Tue, May 6, 2008 at 5:03 PM, Antoine Baudoux [EMAIL PROTECTED] wrote:
Limit (cost=23981.18..23981.18 rows=1 width=977)
- Sort (cost=23981.18..23981.18 rows=1 width=977)
Sort Key: this_.c_date
Can you please provide the EXPLAIN ANALYZE output instead of EXPLAIN?
Josh,
On Tue, May 6, 2008 at 11:10 PM, Josh Cole [EMAIL PROTECTED] wrote:
We are using pgfouine to try and optimize our database at this time. Is
there a way to have pgfouine show examples or breakout commits?
I hesitated before deciding not to implement this idea. The problem is
that you often don't
Glyn,
On Thu, Mar 13, 2008 at 1:33 PM, Glyn Astill [EMAIL PROTECTED] wrote:
I'm looking at switching out the perc5i (lsi megaraid) cards from our
Dell 2950s for something else as they're crap at raid 10.
Do you have numbers? Perc 5/i cards perform quite well for us (we have
an 8-disk RAID 10
On Feb 5, 2008 12:47 PM, Viviane Lestic [EMAIL PROTECTED] wrote:
Could someone help me analyze this problem?
I don't manage to see if the problem comes from bad tsearch tuning,
postgresql configuration, or something else...
Can you try to replace zoneindex_test @@ q with zoneindex_test @@
On Jan 25, 2008 5:50 AM, Tom Lane [EMAIL PROTECTED] wrote:
Hmm. I think what that really means is you haven't got to the part of
the query where the leak is :-(. In my attempt to reproduce this
I found that 8.3 has introduced a memory leak into the RI trigger
support, such that even if an
On Jan 23, 2008 3:02 AM, Guillaume Smet [EMAIL PROTECTED] wrote:
I'll post my results tomorrow morning.
It works perfectly well:
cityvox_prod=# CREATE OR REPLACE FUNCTION
getTypesLieuFromTheme(codeTheme text) returns text[] AS
$f$
SELECT ARRAY(SELECT codetylieu::text FROM rubtylieu WHERE codeth
Hi Tom,
On May 9, 2007 6:40 PM, Tom Lane [EMAIL PROTECTED] wrote:
To return to your original comment: if you're trying to model a
situation with a fully cached database, I think it's sensible
to set random_page_cost = seq_page_cost = 0.1 or so.
Is it still valid for 8.3 or is there any reason
On Jan 22, 2008 9:32 AM, Florian Weimer [EMAIL PROTECTED] wrote:
Maybe it's just my test box.. single SATA-II drive, XFS on top of LVM.
Ours was ext3, no LVM or RAID.
Also with SATA? If your SATA disk is lying about effectively SYNCing
the data, I'm not that surprised you don't see any
Hi -performance,
While testing 8.3, I found this query which is equally slow on 8.1 and
8.3 and seems to be really slow for a not so complex query. The stats
are as good as possible and the behaviour of PostgreSQL seems to be
logical considering the stats but I'm looking for a workaround to
speed
On Jan 23, 2008 2:43 AM, Tom Lane [EMAIL PROTECTED] wrote:
regression=# create or replace function getu2(int) returns int[] as $$
select array(select unique2 from tenk1 where thousand = $1);
$$ language sql immutable;
CREATE FUNCTION
regression=# explain select * from tenk1 where unique1 =
Hi Joshua,
On Jan 9, 2008 9:27 AM, Joshua D. Drake [EMAIL PROTECTED] wrote:
wal_sync_method = open_sync
Do you recommend it in every situation or just because the data are on a
SAN? Do you have any numbers/real cases explaining this choice?
Thanks.
--
Guillaume
On Dec 27, 2007 7:54 PM, Tom Lane [EMAIL PROTECTED] wrote:
I concur with Greg Stark's earlier comment that this is all
overreaction. Let's just fix the misleading comment in the
documentation and leave it at that.
IMHO, we should also have a special tag for all the binaries
distributed with
On Dec 27, 2007 7:10 AM, Tom Lane [EMAIL PROTECTED] wrote:
Enables assertion checks in the server, which test for
many "cannot happen" conditions. This is invaluable for
code development purposes, but the tests slow things down a little.
Maybe we
On Dec 26, 2007 12:21 PM, Simon Riggs [EMAIL PROTECTED] wrote:
Can you try with
bgwriter_lru_maxpages = 0
So we can see if the bgwriter has any hand in this?
I will. I'm currently running tests with fewer concurrent clients (16)
with exactly the same results:
64M 4213.314902
256M 4012.782820
On Dec 26, 2007 12:06 PM, Cédric Villemain [EMAIL PROTECTED] wrote:
Which kernel do you have?
The distro's kernel, so a RH-flavoured 2.6.18.
--
Guillaume
On Dec 26, 2007 12:21 PM, Simon Riggs [EMAIL PROTECTED] wrote:
bgwriter_lru_maxpages = 0
So we can see if the bgwriter has any hand in this?
It doesn't change the behaviour I see.
It's not checkpointing either: using pgbench-tools, I can see that
tps and latency are quite stable during the
On Dec 26, 2007 4:41 PM, Guillaume Smet [EMAIL PROTECTED] wrote:
Then I decided to perform read-only tests using -S option (pgbench -S
-s 100 -c 16 -t 3 -U postgres bench). And still the same
behaviour:
shared_buffers=64MB : 20k tps
shared_buffers=1024MB : 8k tps
Some more information
On Dec 26, 2007 7:23 PM, Greg Smith [EMAIL PROTECTED] wrote:
Ah, now this is really interesting, as it rules out all the write
components and should be easy to replicate even on a smaller server. As
you've already dumped a bunch of time into this the only other thing I
would suggest checking
Hi all,
I'm currently benchmarking the new PostgreSQL server of one of our
customers with PostgreSQL 8.3 beta4. I have more or less the same
configuration Stefan tested in his blog [1]:
- Dell 2900 with two brand new X5365 processors (quad core 3.0 GHz),
16 GB of memory
- a RAID1 array for
On Dec 2, 2007 11:26 AM, Mindaugas [EMAIL PROTECTED] wrote:
I execute a simple query: select * from bigtable where From='something'.
The query returns around 1000 rows and takes 5++ seconds to complete. As far
as I understand, the query is slow because:
Can you post an EXPLAIN ANALYZE? Which version
On Nov 9, 2007 5:33 PM, Tom Lane [EMAIL PROTECTED] wrote:
he's got no MCVs, presumably because the field
is unique.
It is. The ancestors field contains the current folder itself so the
id of the folder (which is the primary key) is in it.
--
Guillaume
Tom,
Just to confirm that your last commit fixed the problem:
lbo=# explain analyze select * from cms_items where ancestors LIKE '1062/%';
QUERY PLAN
Tom,
On Nov 8, 2007 12:14 AM, Tom Lane [EMAIL PROTECTED] wrote:
I've applied a patch that might help you:
http://archives.postgresql.org/pgsql-committers/2007-11/msg00104.php
AFAICS, it doesn't seem to fix the problem. I just compiled
REL8_1_STABLE branch and I still have the following
On Nov 8, 2007 4:01 PM, Tom Lane [EMAIL PROTECTED] wrote:
Hmm, can we see the pg_stats row for the ancestors column?
Sure:
public | cms_items | ancestors | 0 |32 |
-1 | | |
On Nov 9, 2007 3:08 AM, Tom Lane [EMAIL PROTECTED] wrote:
This rule works for all the locales I have installed ... but I don't
have any Far Eastern locales installed. Also, my test cases are only
covering ASCII characters, and I believe many locales have some non-ASCII
letters that sort after
On 11/7/07, Tom Lane [EMAIL PROTECTED] wrote:
Hmmm ... what locale are you working in? I notice that the range
estimator for this pattern would be ancestors >= '1062/' AND
ancestors < '10620', which will do the right thing in C locale
but maybe not so much elsewhere.
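The range transform Tom describes can be sketched in Python. This is a simplified illustration of the idea only; it assumes incrementing the last character gives the next string, which holds in C locale, and it ignores edge cases the real planner code handles:

```python
def like_prefix_range(prefix):
    # col LIKE 'abc%' is estimated as col >= 'abc' AND col < upper,
    # where upper is the prefix with its last character "incremented".
    # This gives the right answer in C locale; other collations may
    # sort differently, which is exactly the caveat discussed above.
    upper = prefix[:-1] + chr(ord(prefix[-1]) + 1)
    return prefix, upper

lo, hi = like_prefix_range('1062/')
# '/' increments to '0', so the bounds are '1062/' and '10620'
```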
Sorry for not having
Hi all,
While studying a query taking forever after an ANALYZE on a
never-analyzed database (a bad estimate causes a nested loop on a lot
of tuples), I found the following problem:
- without any stats (I removed the line from pg_statistic):
ccm_prod_20071106=# explain analyze select * from
Alexander,
On 11/7/07, Alexander Staubo [EMAIL PROTECTED] wrote:
That's a difference of less than *three milliseconds* -- a difference
probably way within the expected overhead of running explain
analyze. Furthermore, all three queries use the same basic plan: a
sequential scan with a filter.
On 11/7/07, Tom Lane [EMAIL PROTECTED] wrote:
I wanted the locale (lc_collate), not the encoding.
fr_FR.UTF-8
That would optimize this particular query and probably pessimize
a lot of others.
Sure but there aren't a lot of queries based on the ancestors field
and if they are a bit slower,
On 11/8/07, Tom Lane [EMAIL PROTECTED] wrote:
I've applied a patch that might help you:
http://archives.postgresql.org/pgsql-committers/2007-11/msg00104.php
Thanks. I'll build a RPM package tomorrow with this patch and let you
know if it fixes the problem.
--
Guillaume
On 6/18/07, Sabin Coanda [EMAIL PROTECTED] wrote:
Hi Guillaume,
I tried the pgFouine.php app on a sample log file but it reports some errors.
Could you give me some startup support, please ?
I attach the log here to find what's wrong.
Sorry for the delay. I replied to your private email this
On 6/18/07, Y Sidhu [EMAIL PROTECTED] wrote:
I am following this discussion with great interest. I have PG running on
FreeBSD and am forced to run pgFouine on a separate Linux box. I am hoping I
can create a log file, and then copy that over and have pgFouine analyze it
on the Linux box.
a. I
Sabin,
On 6/14/07, Sabin Coanda [EMAIL PROTECTED] wrote:
I'd like to understand completely the report generated by VACUUM VERBOSE.
Please tell me where it is documented?
You can take a look at what I did for pgFouine:
http://pgfouine.projects.postgresql.org/vacuum.html
--
Guillaume
On 6/14/07, Y Sidhu [EMAIL PROTECTED] wrote:
Can anyone share what value they have set log_min_duration_statement to?
It's OT but we use different values for different databases and needs.
On a very loaded database with a lot of complex queries (lots of join
on big tables, proximity queries,
On 5/22/07, cedric [EMAIL PROTECTED] wrote:
I made something very similar; using PL/pgsql is very slow, while using
perl is very quick.
Another solution is to use tsearch2 for that:
CREATE TABLE word_counts AS SELECT * FROM stat('SELECT
to_tsvector(''simple'', lower(coalesce(field
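For illustration only, the word-counting idea in the quoted snippet (lowercase, tokenize, count) looks roughly like this outside the database. This is a hypothetical Python analogue, not the tsearch2 stat() function itself:

```python
import re
from collections import Counter

def word_counts(texts):
    # Rough analogue of to_tsvector('simple', lower(field)) + stat():
    # lowercase each text, split it into word tokens, count occurrences.
    counts = Counter()
    for text in texts:
        counts.update(re.findall(r"[a-z0-9]+", text.lower()))
    return counts

counts = word_counts(["Perl is quick", "PL/pgSQL is slow, Perl is quick"])
# counts["perl"] == 2, counts["is"] == 3
```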
On 4/10/07, Michael Dengler [EMAIL PROTECTED] wrote:
I'm using RHEL4 and wondering if I need to upgrade the php and php-pgsql
packages when upgrading from Postgres 7.4.1 to 8.2.3.
No you don't. Devrim Gunduz has provided compat RPMs for a long time now.
See
On 2/24/07, Steinar H. Gunderson [EMAIL PROTECTED] wrote:
Thanks for your time.
GiN version, short:
- Bitmap Heap Scan on tags (cost=8.64..151.79 rows=41 width=0) (actual
time=5.555..30.157 rows=7 loops=1)
Filter: (title % 'foo'::text)
- Bitmap Index Scan on trgm_idx
On 2/26/07, Oleg Bartunov oleg@sai.msu.su wrote:
Did you rewrite query manually or use rewrite feature of tsearch2 ?
Currently, it's manual. I perform a pg_trgm query for each word of the
search words (a few stop words excluded) and I generate the ts_query
with the similar words instead of
On 2/23/07, Joshua D. Drake [EMAIL PROTECTED] wrote:
Also isn't it pretty much *not* a problem with current versions of
PostgreSQL?
We had a really *big* scalability problem with a quad Xeon MP 2.2 and
PostgreSQL 7.4. The problem is mostly gone since we upgraded to 8.1 a
year ago.
Woodcrest
On 2/23/07, Geoffrey [EMAIL PROTECTED] wrote:
As I've heard. We're headed for 8 as soon as possible, but until we get
our code ready, we're on 7.4.16.
You should move to at least 8.1 and possibly 8.2. It's not a good idea
to upgrade only to 8.0 IMHO.
--
Guillaume
Florian, Steinar,
Could you try to see if the GIN implementation of pg_trgm is faster in
your cases?
Florian, instead of using WHERE similarity(...) > 0.4, you should use
set_limit (SELECT set_limit(0.4);).
I posted it on -patches and it is available here:
Hi Steinar,
On 2/24/07, Steinar H. Gunderson [EMAIL PROTECTED] wrote:
I'm sorry, I can no longer remember where I needed pg_trgm. Simple testing of
your patch seems to indicate that the GiN version is about 65% _slower_ (18ms
vs. 30ms) for a test data set I found lying around, but I remember
On 2/15/07, Guillaume Smet [EMAIL PROTECTED] wrote:
The use of PostGIS is slower than the previous cube/earthdistance
approach (on a similar query and plan).
For the record, here are new information about my proximity query work.
Thanks to Tom Lane, I found the reason for the performance drop
Tom,
On 2/13/07, Tom Lane [EMAIL PROTECTED] wrote:
It gets the right answer, yes. I'm not sure if we could safely put the
condition into the recheck instead of the filter. The particular code
I showed you has to go the direction it does, because a condition in the
filter has to be checked
On 2/14/07, Paul Ramsey [EMAIL PROTECTED] wrote:
You'll find that PostGIS does a pretty good job of selectivity
estimation.
So I finally have a working PostGIS and I fixed the query to use PostGIS.
The use of PostGIS is slower than the previous cube/earthdistance
approach (on a similar query
On 2/15/07, Tom Lane [EMAIL PROTECTED] wrote:
I think that the
answer is probably because the index is lossy for this operator,
so it has to be checked even if the bitmap didn't become lossy.
You'd have to check the GIST opclass definition to be sure.
Any idea on what I have to look for (if
On 2/15/07, Guillaume Smet [EMAIL PROTECTED] wrote:
On 2/15/07, Tom Lane [EMAIL PROTECTED] wrote:
I think that the
answer is probably because the index is lossy for this operator,
so it has to be checked even if the bitmap didn't become lossy.
You'd have to check the GIST opclass definition
Paul,
On 2/14/07, Paul Ramsey [EMAIL PROTECTED] wrote:
You'll find that PostGIS does a pretty good job of selectivity
estimation.
PostGIS is probably what I'm going to experiment with in the future. The
only problem is that it's really big for a very basic need.
With my current method, I don't
Hi all,
I'm currently working on optimizing a couple of queries. While
studying the EXPLAIN ANALYZE output of a query, I found this Bitmap
Heap Scan node:
- Bitmap Heap Scan on lieu l (cost=12.46..63.98 rows=53 width=94)
(actual time=35.569..97.166 rows=78 loops=1)
Recheck Cond:
Hi all,
Following the work of Mark Stosberg on this list (thanks Mark!), I
optimized our slow proximity queries by using cube, earthdistance
(shipped with contrib) and a gist index. The result is globally very
interesting apart for a specific query and we'd like to be able to fix
it too to be
On 2/13/07, Tom Lane [EMAIL PROTECTED] wrote:
bitmapqualorig = list_difference_ptr(bitmapqualorig, qpqual);
What's not immediately clear is why the condition was in both lists to
start with. Perhaps idx_lieu_parking is a partial index with this as
its WHERE condition?
Yes, it is:
Dan,
On 2/13/07, Dan Harris [EMAIL PROTECTED] wrote:
Having upgraded to 2.6.18 fairly recently, I am *very* interested in
what caused the throughput to drop in 2.6.18? I haven't done any
benchmarking on my system to know if it affected my usage pattern
negatively, but I am curious if anyone
Erik,
Could you set log_min_duration_statement=0 on your server and enable
logging (tutorial here if you don't know how to do that:
http://pgfouine.projects.postgresql.org/tutorial.html).
You should see which queries are executed in both cases and find the
slow one easily.
Regards,
--
On 9/18/06, Jérôme BENOIS [EMAIL PROTECTED] wrote:
Tomorrow morning I plan to add 2 GB of RAM in order to test the
difference with my actual config.
I don't think more RAM will change anything if you don't swap at all.
You can try to set shared_buffers lower (try 32768 and 16384) but I
don't
On 9/14/06, Jérôme BENOIS [EMAIL PROTECTED] wrote:
I migrated the Postgres server from 7.4.6 to 8.1.4, but my server is
completely overloaded, with the load average reaching 40 at times
All queries analyzed by EXPLAIN, all indexes are used .. IO is good ...
What is the bottleneck? Are you CPU bound? Do you
On 9/14/06, Jérôme BENOIS [EMAIL PROTECTED] wrote:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
15667 postgres 25 0 536m 222m 532m R 98.8 11.0 1:39.29 postmaster
19533 postgres 25 0 535m 169m 532m R 92.9 8.3 0:38.68 postmaster
16278 postgres 25 0 537m
Jérôme,
Perhaps it's a stupid question, but are your queries slower than
before? You didn't say.
IMHO, it's not a problem to have a high load if you have a lot of
users and your queries are fast (and with 8.1, they should be far
faster than before).
To take a real example, we had a problem
On 9/14/06, Jérôme BENOIS [EMAIL PROTECTED] wrote:
Yes i have a lot of users ;-)
So your work_mem is probably far too high (that's what I told you in
my first message) and you probably swap when you have too many users.
Remember that work_mem can be used several times per query (and it's
Cosimo,
On 8/31/06, Cosimo Streppone [EMAIL PROTECTED] wrote:
The problem is that under peak load, when the number of concurrent
transactions rises, there is a noticeable performance degradation.
Could you give us more information about the performance degradation?
Especially cpu load/iostat/vmstat
Guillaume,
On 28 Aug 2006 11:43:16 +0200, Guillaume Cottenceau [EMAIL PROTECTED] wrote:
max_fsm_pages is 2
max_fsm_relations is 1000
Do they look low?
Yes they are probably too low if you don't run VACUUM on a regular
basis and you have a lot of UPDATE/DELETE activity. FSM doesn't take
Tom,
On 7/21/06, Tom Lane [EMAIL PROTECTED] wrote:
It's really not possible for a full-table indexscan to be faster than a
seqscan, and not very credible for it even to be approximately as fast.
I suspect your second query here is the beneficiary of the first query
having fetched all the pages
the output of an explain analyze on the
list with all the relevant information (structure of the concerned
tables, indexes, size...).
If not, it's probably more an ODBC problem.
Regards,
--
Guillaume Smet
Open Wide
On 5/23/06, Dan Gorman [EMAIL PROTECTED] wrote:
What I am looking for is that our DB is doing X selects a min.
If you're using 7.4, you can use log_duration to log only the duration. It
won't log the full query text, only one short line per query. Then you
can use pgFouine to analyze this and
Guoping,
On 4/27/06, Guoping Zhang [EMAIL PROTECTED] wrote:
We are looking at setting the fsync OFF option for performance reasons,
Did you try the other wal sync methods (fdatasync in particular)? I
saw a few posts lately explaining how changing the sync method can affect
performance in specific
On 4/25/06, Arnau [EMAIL PROTECTED] wrote:
espsm_moviltelevision=# EXPLAIN ANALYZE SELECT agenda_user_group_id FROM
agenda_users_groups
espsm_moviltelevision-# WHERE group_id = '9';
QUERY PLAN
On 3/16/06, Tom Lane [EMAIL PROTECTED] wrote:
Can you try strace'ing some of the backend processes while the system is
behaving like this? I suspect what you'll find is a whole lot of
delaying select() calls due to high contention for spinlocks ...
As announced, we have migrated our
Guillaume,
On 17 Mar 2006 11:09:50 +0100, Guillaume Cottenceau
wrote:
Reading the documentation and postgresql list archives, I have
run ANALYZE right before my tests, I have increased the
statistics target to 50 for the considered table; my problem is
that the index scan cost reported by
Hello,
We have been experiencing performance problems with a quad Xeon MP and
PostgreSQL 7.4 for a year now. Our context switch rate is not so high
but the load of the server is stuck at 4 even under very high load and
we have 60% cpu idle even in this case. Our database fits in RAM and
we don't have
Richard,
You should be seeing context-switching jump dramatically if it's the
classic multi-Xeon problem. There's a point at which it seems to just
escalate without a corresponding jump in activity.
No we don't have this problem of very high context switching in our
case even when the
Sven,
On 3/16/06, Sven Geisler [EMAIL PROTECTED] wrote:
What version of XEON MP does your server have?
The server is a Dell 6650 from end of 2004 with 4 Xeon MP 2.2 and 2MB
cache per proc.
Here are the information from Dell:
4x PROCESSOR, 80532, 2.2GHZ, 2MB cache, 400Mhz, SOCKET F
8x DUAL
On 3/16/06, Sven Geisler [EMAIL PROTECTED] wrote:
Hi Guillaume,
I had a similar issue last summer. Could you please provide details
about your XEON MP server and some statistics (context-switches/load/CPU
usage)?
I forgot the statistics:
CPU load usually from 1 to 4.
CPU usage 40% for each
On 3/16/06, Sven Geisler [EMAIL PROTECTED] wrote:
Did you compare 7.4 on a 4-way with 8.1 on a 2-way?
I know there are too many parameters changing between the two servers
but I can't really change anything before Tuesday. On Tuesday, we will
be able to compare both servers with the same
On 3/16/06, Tom Lane [EMAIL PROTECTED] wrote:
Can you try strace'ing some of the backend processes while the system is
behaving like this? I suspect what you'll find is a whole lot of
delaying select() calls due to high contention for spinlocks ...
Tom,
I think we can try to do it.
You mean
On 3/16/06, Tom Lane [EMAIL PROTECTED] wrote:
What we want to find out is if there's a lot of select()s and/or
semop()s shown in the result. Ideally there wouldn't be any, but
I fear that's not what you'll find.
OK, I'll try to do it on Monday before our upgrade, then see what
happens with
Hi,
I'm also sending the EXPLAIN outputs.
Please provide EXPLAIN ANALYZE outputs instead of EXPLAIN. They
contain more information.
Indexes on your tables are obviously missing. You should try to add:
CREATE INDEX idx_node_filter ON node(name, type, usage);
CREATE INDEX
Andrew,
I would be very suspicious of that much memory for sort. Please see
the docs for what that does. That is the amount that _each sort_ can
allocate before spilling to disk.
If some set of your users are
causing complicated queries with, say, four sorts apiece, then each
user is
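The multiplication behind that warning is easy to make concrete. The figures below are made up for illustration (the per-sort memory setting corresponds to work_mem in later PostgreSQL releases):

```python
# Worst case: the per-sort limit applies to *each* sort node of *each*
# concurrent query, so the totals multiply. Hypothetical figures:
sort_mem_mb = 128        # per sort node
sorts_per_query = 4      # e.g. several ORDER BY / merge-join sorts
concurrent_users = 50

worst_case_mb = sort_mem_mb * sorts_per_query * concurrent_users
print(worst_case_mb)  # 25600 MB, i.e. ~25 GB of potential sort memory
```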
Qingqing Zhou wrote:
Someone is doing a massive *write* at this time, which makes your query
*read* quite slow. Can you find out which process is doing write?
Indexes should be in memory so I don't expect a massive write to slow
down the select queries. sdb is the RAID10 array dedicated to
Ron wrote:
If I understand your HW config correctly, all of the pg stuff is on the
same RAID 10 set?
No, the system and the WAL are on a RAID 1 array and the data is on its
own RAID 10 array.
As I said earlier, there are only a few writes in the database so I'm not
really sure the WAL can be a