Hi,
We have a performance problem with a particular query.
We reproduced the problem on both a 7.4.5 and a 7.4.7 server.
* we load the dump in a new database
* query: it's fast (< 1ms)
* VACUUM FULL ANALYZE;
* query: it's really slow (130ms) and it uses a different plan
* set enable_seqscan=off;
* query:
Tom,
So, the usual questions: have these two tables been ANALYZEd lately?
Yes, of course.
As I wrote in my previous mail, here is how I reproduce the problem:
- we load the dump in a new database (to make sure there is no problem with
an index or anything like that)
- query: it's fast (< 1ms)
-
Well, those stats certainly appear to justify the planner's belief that
the indexscan needn't run very far: the one value of
parent_application_id is 1031 and this is below the smallest value of
object_id seen by analyze.
Yes, it seems rather logical but why does it cost so much if it should
Josh, Tom,
Thanks for your explanations.
In the meantime it seems like the quickest answer for Guillaume might
be to try to avoid keeping any NULLs in parent_application_id.
I can't do that, as the majority of the applications don't have a
parent. Moreover, we use a third party application
Hi all,
I'm currently testing PostgreSQL 8.1 beta3 and I have a problem with an
EXPLAIN ANALYZE output. You can find it attached.
I don't understand why I have the Nested Loop at line 19 with an actual
time of 254.292..257.328, because I can't find the line accounting for
these 254 ms anywhere. Is it
Steinar,
which seems to make sense; you have one run of about 257ms, plus 514 runs
taking about 0.035ms each (i.e. about 18ms), which should add up to
about 275ms (close enough to the observed 281ms).
Yep. The line that disturbed me was the bitmap index scan with a cost of
Hi all,
We have been operating a 1.5GB postgresql database for a year, and we have
had problems for nearly a month. Usually everything is OK with the database:
queries execute fast even when they are complicated, but sometimes, and
for half an hour, we have a general slowdown.
The server is a
Andrew,
Andrew Sullivan wrote:
Is it exactly half an hour? What changes at the time that happens
(i.e. what else happens on the machine?). Is this a time, for
example, when logrotate is killing your I/O with file moves?
No, it's not exactly half an hour. It's just that it slows down for
Andrew,
I would be very suspicious of that much memory for sort. Please see
the docs for what that does. That is the amount that _each sort_ can
allocate before spilling to disk.
If some set of your users is
causing complicated queries with, say, four sorts apiece, then each
user is
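To make the quoted warning concrete, here is a hedged sketch (the sort_mem value and user count are hypothetical, not numbers from this thread):

```
# postgresql.conf (7.4-era parameter name; illustrative values)
sort_mem = 32768      # 32 MB that *each sort* may allocate before
                      # spilling to disk -- not a per-connection cap
# Worst case with 100 busy backends running 4 sorts apiece:
#   100 x 4 x 32 MB = 12.8 GB of sort memory,
# while sort_mem = 4096 (4 MB) caps the same load at 1.6 GB.
```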
Qingqing Zhou wrote:
Someone is doing massive *writes* at this time, which makes your query's
*reads* quite slow. Can you find out which process is doing the writes?
Indexes should be in memory so I don't expect a massive write to slow
down the select queries. sdb is the RAID10 array dedicated to
Ron wrote:
If I understand your HW config correctly, all of the pg stuff is on the
same RAID 10 set?
No, the system and the WAL are on a RAID 1 array and the data on their
own RAID 10 array.
As I said earlier, there are only a few writes in the database so I'm not
really sure the WAL can be a
Claus and Andrew,
Claus Guttesen wrote:
Isn't sort_mem quite high? Remember that sort_mem size is allocated
for each sort, not for each connection. Mine is 4096 (4 MB). My
effective_cache_size is set to 27462.
I tested sort_mem from 4096 to 32768 (4096, 8192, 16384, 32768) this
afternoon and
Ron,
First of all, thanks for your time.
As has been noted many times around here, put the WAL on its own
dedicated HD's. You don't want any head movement on those HD's.
Yep, I know that. It's just that we assumed it was not so important for a
nearly read-only database, which is wrong
Hi,
I'm also sending the EXPLAIN outputs.
Please provide EXPLAIN ANALYZE output instead of EXPLAIN. It gives
much more information.
Indexes on your tables are obviously missing. You should try to add:
CREATE INDEX idx_node_filter ON node(name, type, usage);
CREATE INDEX
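As a hedged illustration of the request above (the filter values are made up; only the table and column names come from the suggested index):

```sql
-- EXPLAIN only shows the planner's estimates; EXPLAIN ANALYZE actually
-- runs the query and reports real row counts and timings per node.
EXPLAIN ANALYZE
SELECT * FROM node
WHERE name = 'foo' AND type = 'bar' AND usage = 'baz';
-- A big gap between the estimated rows and the actual rows on a node
-- is the usual sign of missing indexes or stale statistics.
```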
Hello,
We have been experiencing performance problems with a quad Xeon MP and
PostgreSQL 7.4 for a year now. Our context switch rate is not that high,
but the load of the server is stuck at 4 even under very heavy load, and
we have 60% CPU idle even in this case. Our database fits in RAM and
we don't have
Richard,
You should be seeing context-switching jump dramatically if it's the
classic multi-Xeon problem. There's a point at which it seems to just
escalate without a corresponding jump in activity.
No, we don't have this problem of very high context switching in our
case, even when the
Sven,
On 3/16/06, Sven Geisler [EMAIL PROTECTED] wrote:
What version of XEON MP does your server have?
The server is a Dell 6650 from the end of 2004, with 4 Xeon MP 2.2 GHz and 2MB
cache per processor.
Here is the information from Dell:
4x PROCESSOR, 80532, 2.2GHZ, 2MB cache, 400Mhz, SOCKET F
8x DUAL
On 3/16/06, Sven Geisler [EMAIL PROTECTED] wrote:
Hi Guillaume,
I had a similar issue last summer. Could you please provide details
about your XEON MP server and some statistics (context-switches/load/CPU
usage)?
I forgot the statistics:
CPU load usually from 1 to 4.
CPU usage 40% for each
On 3/16/06, Sven Geisler [EMAIL PROTECTED] wrote:
Did you compare 7.4 on a 4-way with 8.1 on a 2-way?
I know there are too many parameters changing between the two servers
but I can't really change anything before tuesday. On tuesday, we will
be able to compare both servers with the same
On 3/16/06, Tom Lane [EMAIL PROTECTED] wrote:
Can you try strace'ing some of the backend processes while the system is
behaving like this? I suspect what you'll find is a whole lot of
delaying select() calls due to high contention for spinlocks ...
Tom,
I think we can try to do it.
You mean
On 3/16/06, Tom Lane [EMAIL PROTECTED] wrote:
What we want to find out is if there's a lot of select()s and/or
semop()s shown in the result. Ideally there wouldn't be any, but
I fear that's not what you'll find.
OK, I'll try to do it on monday before our upgrade then see what
happens with
Guillaume,
On 17 Mar 2006 11:09:50 +0100, Guillaume Cottenceau
wrote:
Reading the documentation and postgresql list archives, I have
run ANALYZE right before my tests, I have increased the
statistics target to 50 for the considered table; my problem is
that the index scan cost reported by
On 3/16/06, Tom Lane [EMAIL PROTECTED] wrote:
Can you try strace'ing some of the backend processes while the system is
behaving like this? I suspect what you'll find is a whole lot of
delaying select() calls due to high contention for spinlocks ...
As announced, we have migrated our
On 4/25/06, Arnau [EMAIL PROTECTED] wrote:
espsm_moviltelevision=# EXPLAIN ANALYZE SELECT agenda_user_group_id FROM
agenda_users_groups
espsm_moviltelevision-# WHERE group_id = '9';
QUERY PLAN
Guoping,
On 4/27/06, Guoping Zhang [EMAIL PROTECTED] wrote:
We have to look at setting the fsync OFF option for performance reasons,
Did you try the other wal sync methods (fdatasync in particular)? I
saw a few posts lately explaining how changing the sync method can affect
performance in specific
On 5/23/06, Dan Gorman [EMAIL PROTECTED] wrote:
What I am looking for is that our DB is doing X selects a min.
If you're using 7.4, you can use log_duration to log only the duration. It
won't log the full query text, only one short line per query. Then you
can use pgFouine to analyze this and
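A minimal sketch of the 7.4 logging setup being suggested (parameter values are illustrative):

```
# postgresql.conf (PostgreSQL 7.4)
log_duration = true     # one short "duration: N ms" line per query
log_statement = false   # keep the full query text out of the log
# pgFouine can then aggregate the duration lines to report queries/min
# and total query time without the log growing out of control.
```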
the output of an explain analyze on the
list with all the relevant information (structure of the concerned
tables, indexes, size...).
If not, it's probably more an ODBC problem.
Regards,
--
Guillaume Smet
Open Wide
---(end of broadcast)---
TIP 9
Tom,
On 7/21/06, Tom Lane [EMAIL PROTECTED] wrote:
It's really not possible for a full-table indexscan to be faster than a
seqscan, and not very credible for it even to be approximately as fast.
I suspect your second query here is the beneficiary of the first query
having fetched all the pages
Guillaume,
On 28 Aug 2006 11:43:16 +0200, Guillaume Cottenceau [EMAIL PROTECTED] wrote:
max_fsm_pages is 2
max_fsm_relations is 1000
Do they look low?
Yes, they are probably too low if you don't run VACUUM on a regular
basis and you have a lot of UPDATE/DELETE activity. The FSM doesn't take
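For context, these are postgresql.conf settings (pre-8.4, where the free space map became self-tuning); the values below are illustrative, not a recommendation for this server:

```
# postgresql.conf
max_fsm_pages = 200000      # pages whose free space is remembered
max_fsm_relations = 1000    # tables and indexes tracked by the FSM
# The last lines of a database-wide VACUUM VERBOSE report how much of
# the FSM is in use, which shows whether these limits are too low.
```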
Cosimo,
On 8/31/06, Cosimo Streppone [EMAIL PROTECTED] wrote:
The problem is that under peak load, when the number of concurrent
transactions rises, there is a noticeable performance degradation.
Could you give us more information about the performance degradation?
Especially cpu load/iostat/vmstat
On 9/14/06, Jérôme BENOIS [EMAIL PROTECTED] wrote:
I migrated my Postgres server from 7.4.6 to 8.1.4, but my server is
completely overloaded, at times with a load average of 40
All queries analyzed by EXPLAIN, all indexes are used .. IO is good ...
What is the bottleneck? Are you CPU bound? Do you
On 9/14/06, Jérôme BENOIS [EMAIL PROTECTED] wrote:
PID USER PR NI VIRT RES SHR S %CPU %MEMTIME+ COMMAND
15667 postgres 25 0 536m 222m 532m R 98.8 11.0 1:39.29 postmaster
19533 postgres 25 0 535m 169m 532m R 92.9 8.3 0:38.68 postmaster
16278 postgres 25 0 537m
Jérôme,
Perhaps it's a stupid question, but are your queries slower than
before? You didn't say.
IMHO, it's not a problem to have a high load if you have a lot of
users and your queries are fast (and with 8.1, they should be far
faster than before).
To take a real example, we had a problem
On 9/14/06, Jérôme BENOIS [EMAIL PROTECTED] wrote:
Yes i have a lot of users ;-)
So your work_mem is probably far too high (that's what I told you in
my first message) and you probably swap when you have too many users.
Remember that work_mem can be used several times per query (and it's
On 9/18/06, Jérôme BENOIS [EMAIL PROTECTED] wrote:
Tomorrow morning I plan to add 2 GB of RAM in order to test the difference
with my current config.
I don't think more RAM will change anything if you don't swap at all.
You can try to set shared_buffers lower (try 32768 and 16384) but I
don't
Erik,
Could you set log_min_duration_statement=0 on your server and enable
logging (there is a tutorial here if you don't know how to do that:
http://pgfouine.projects.postgresql.org/tutorial.html)?
You should see which queries are executed in both cases and find the
slow one easily.
Regards,
--
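A sketch of the suggested debugging setting (0 logs every statement with its duration, so it should only stay enabled for a limited time):

```
# postgresql.conf
log_min_duration_statement = 0    # log every statement and its duration
# Once the slow query is identified, raise the threshold (e.g. 100 for
# 100 ms) so that only genuinely slow statements keep being logged.
```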
Hi all,
I'm currently working on optimizing a couple of queries. While
studying the EXPLAIN ANALYZE output of a query, I found this Bitmap
Heap Scan node:
- Bitmap Heap Scan on lieu l (cost=12.46..63.98 rows=53 width=94)
(actual time=35.569..97.166 rows=78 loops=1)
Recheck Cond:
Hi all,
Following the work of Mark Stosberg on this list (thanks Mark!), I
optimized our slow proximity queries by using cube, earthdistance
(shipped in contrib) and a GiST index. The result is globally very
interesting, apart from one specific query, and we'd like to be able to fix
it too to be
On 2/13/07, Tom Lane [EMAIL PROTECTED] wrote:
bitmapqualorig = list_difference_ptr(bitmapqualorig, qpqual);
What's not immediately clear is why the condition was in both lists to
start with. Perhaps idx_lieu_parking is a partial index with this as
its WHERE condition?
Yes, it is:
Dan,
On 2/13/07, Dan Harris [EMAIL PROTECTED] wrote:
Having upgraded to 2.6.18 fairly recently, I am *very* interested in
what caused the throughput to drop in 2.6.18? I haven't done any
benchmarking on my system to know if it affected my usage pattern
negatively, but I am curious if anyone
Paul,
On 2/14/07, Paul Ramsey [EMAIL PROTECTED] wrote:
You'll find that PostGIS does a pretty good job of selectivity
estimation.
PostGIS is probably what I'm going to experiment with in the future. The
only problem is that it's really big for a very basic need.
With my current method, I don't
Tom,
On 2/13/07, Tom Lane [EMAIL PROTECTED] wrote:
It gets the right answer, yes. I'm not sure if we could safely put the
condition into the recheck instead of the filter. The particular code
I showed you has to go the direction it does, because a condition in the
filter has to be checked
On 2/14/07, Paul Ramsey [EMAIL PROTECTED] wrote:
You'll find that PostGIS does a pretty good job of selectivity
estimation.
So I finally have a working PostGIS and I fixed the query to use PostGIS.
The use of PostGIS is slower than the previous cube/earthdistance
approach (on a similar query
On 2/15/07, Tom Lane [EMAIL PROTECTED] wrote:
I think that the
answer is probably because the index is lossy for this operator,
so it has to be checked even if the bitmap didn't become lossy.
You'd have to check the GIST opclass definition to be sure.
Any idea on what I have to look for (if
On 2/15/07, Guillaume Smet [EMAIL PROTECTED] wrote:
On 2/15/07, Tom Lane [EMAIL PROTECTED] wrote:
I think that the
answer is probably because the index is lossy for this operator,
so it has to be checked even if the bitmap didn't become lossy.
You'd have to check the GIST opclass definition
On 2/15/07, Guillaume Smet [EMAIL PROTECTED] wrote:
The use of PostGIS is slower than the previous cube/earthdistance
approach (on a similar query and plan).
For the record, here is some new information about my proximity query work.
Thanks to Tom Lane, I found the reason for the performance drop
On 2/23/07, Joshua D. Drake [EMAIL PROTECTED] wrote:
Also isn't it pretty much *not* a problem with current versions of
PostgreSQL?
We had a really *big* scalability problem with a quad Xeon MP 2.2 and
PostgreSQL 7.4. The problem is mostly gone since we upgraded to 8.1 a
year ago.
Woodcrest
On 2/23/07, Geoffrey [EMAIL PROTECTED] wrote:
As I've heard. We're headed for 8 as soon as possible, but until we get
our code ready, we're on 7.4.16.
You should move to at least 8.1, and possibly 8.2. It's not a good idea
to upgrade only to 8.0 IMHO.
--
Guillaume
Florian, Steinar,
Could you try to see if the GIN implementation of pg_trgm is faster in
your cases?
Florian, instead of using WHERE similarity(...) > 0.4, you should use
set_limit (SELECT set_limit(0.4);).
I posted it on -patches and it is available here:
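A hedged sketch of the suggested rewrite, reusing the tags/title names that appear later in the thread (pg_trgm and a trigram index on title are assumed):

```sql
-- Per-row similarity() calls cannot use a trigram index:
--   SELECT * FROM tags WHERE similarity(title, 'foo') > 0.4;
-- Setting the global threshold and using the % operator can:
SELECT set_limit(0.4);
SELECT * FROM tags WHERE title % 'foo';
-- % compares against the set_limit() threshold and is indexable by a
-- GiST or GIN trigram index, which is what the patch under test adds.
```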
Hi Steinar,
On 2/24/07, Steinar H. Gunderson [EMAIL PROTECTED] wrote:
I'm sorry, I can no longer remember where I needed pg_trgm. Simple testing of
your patch seems to indicate that the GiN version is about 65% _slower_ (18ms
vs. 30ms) for a test data set I found lying around, but I remember
On 2/24/07, Steinar H. Gunderson [EMAIL PROTECTED] wrote:
Thanks for your time.
GiN version, short:
- Bitmap Heap Scan on tags (cost=8.64..151.79 rows=41 width=0) (actual
time=5.555..30.157 rows=7 loops=1)
Filter: (title % 'foo'::text)
- Bitmap Index Scan on trgm_idx
On 2/26/07, Oleg Bartunov oleg@sai.msu.su wrote:
Did you rewrite the query manually or use the rewrite feature of tsearch2?
Currently, it's manual. I perform a pg_trgm query for each word of the
search terms (a few stop words excluded) and I generate the ts_query
with the similar words instead of
On 4/10/07, Michael Dengler [EMAIL PROTECTED] wrote:
I'm using RHEL4 and wondering if I need to upgrade the php and php-pgsql
packages when upgrading from Postgres 7.4.1 to 8.2.3.
No, you don't. Devrim Gunduz has provided compat RPMs for a long time now.
See
On 5/22/07, cedric [EMAIL PROTECTED] wrote:
I made something very similar, and using PL/pgsql is very slow, when using
perl is very quick.
Another solution is to use tsearch2 for that:
CREATE TABLE word_counts AS SELECT * FROM stat('SELECT
to_tsvector(''simple'', lower(coalesce(field
Sabin,
On 6/14/07, Sabin Coanda [EMAIL PROTECTED] wrote:
I'd like to completely understand the report generated by VACUUM VERBOSE.
Please tell me where it is documented?
You can take a look at what I did for pgFouine:
http://pgfouine.projects.postgresql.org/vacuum.html
--
Guillaume
On 6/14/07, Y Sidhu [EMAIL PROTECTED] wrote:
Can anyone share what value they have set log_min_duration_statement to?
It's OT but we use different values for different databases and needs.
On a very loaded database with a lot of complex queries (lots of join
on big tables, proximity queries,
On 6/18/07, Sabin Coanda [EMAIL PROTECTED] wrote:
Hi Guillaume,
I tried pgFouine.php app on a sample log file but it reports me some errors.
Could you give me some startup support, please ?
I attach the log here to find what's wrong.
Sorry for the delay. I replied to your private email this
On 6/18/07, Y Sidhu [EMAIL PROTECTED] wrote:
I am following this discussion with great interest. I have PG running on
FreeBSD and am forced to run pgFouine on a separate Linux box. I am hoping I
can create a log file, then copy that over and have pgFouine analyze it
on the Linux box.
a. I
On 11/7/07, Tom Lane [EMAIL PROTECTED] wrote:
Hmmm ... what locale are you working in? I notice that the range
estimator for this pattern would be ancestors >= '1062/' AND
ancestors < '10620', which will do the right thing in C locale
but maybe not so much elsewhere.
Sorry for not having
Hi all,
While studying a query taking forever after an ANALYZE on a never
analyzed database (a bad estimate causes a nested loop on a lot of
tuples), I found the following problem:
- without any stats (I removed the line from pg_statistic):
ccm_prod_20071106=# explain analyze select * from
Alexander,
On 11/7/07, Alexander Staubo [EMAIL PROTECTED] wrote:
That's a difference of less than *three milliseconds* -- a difference
probably way within the expected overhead of running explain
analyze. Furthermore, all three queries use the same basic plan: a
sequential scan with a filter.
On 11/7/07, Tom Lane [EMAIL PROTECTED] wrote:
I wanted the locale (lc_collate), not the encoding.
fr_FR.UTF-8
That would optimize this particular query and probably pessimize
a lot of others.
Sure but there aren't a lot of queries based on the ancestors field
and if they are a bit slower,
On 11/8/07, Tom Lane [EMAIL PROTECTED] wrote:
I've applied a patch that might help you:
http://archives.postgresql.org/pgsql-committers/2007-11/msg00104.php
Thanks. I'll build a RPM package tomorrow with this patch and let you
know if it fixes the problem.
--
Guillaume
Tom,
On Nov 8, 2007 12:14 AM, Tom Lane [EMAIL PROTECTED] wrote:
I've applied a patch that might help you:
http://archives.postgresql.org/pgsql-committers/2007-11/msg00104.php
AFAICS, it doesn't seem to fix the problem. I just compiled the
REL8_1_STABLE branch and I still have the following
On Nov 8, 2007 4:01 PM, Tom Lane [EMAIL PROTECTED] wrote:
Hmm, can we see the pg_stats row for the ancestors column?
Sure:
public | cms_items | ancestors | 0 |32 |
-1 | | |
On Nov 9, 2007 3:08 AM, Tom Lane [EMAIL PROTECTED] wrote:
This rule works for all the locales I have installed ... but I don't
have any Far Eastern locales installed. Also, my test cases are only
covering ASCII characters, and I believe many locales have some non-ASCII
letters that sort after
On Nov 9, 2007 5:33 PM, Tom Lane [EMAIL PROTECTED] wrote:
he's got no MCVs, presumably because the field
is unique.
It is. The ancestors field contains the current folder itself so the
id of the folder (which is the primary key) is in it.
--
Guillaume
Tom,
Just to confirm you that your last commit fixed the problem:
lbo=# explain analyze select * from cms_items where ancestors LIKE '1062/%';
QUERY PLAN
On Dec 2, 2007 11:26 AM, Mindaugas [EMAIL PROTECTED] wrote:
I execute simple query select * from bigtable where From='something'.
Query returns like 1000 rows and takes 5++ seconds to complete. As far as I
understand the query is slow because:
Can you post an EXPLAIN ANALYZE? Which version
Hi all,
I'm currently benchmarking the new PostgreSQL server of one of our
customers with PostgreSQL 8.3 beta4. I have more or less the same
configuration Stefan tested in his blog [1]:
- Dell 2900 with two brand new X5365 processors (quad core 3.0 GHz),
16 GB of memory
- a RAID1 array for
On Dec 26, 2007 12:21 PM, Simon Riggs [EMAIL PROTECTED] wrote:
Can you try with
bgwriter_lru_maxpages = 0
So we can see if the bgwriter has any hand in this?
I will. I'm currently running tests with less concurrent clients (16)
with exactly the same results:
64M 4213.314902
256M 4012.782820
On Dec 26, 2007 12:06 PM, Cédric Villemain [EMAIL PROTECTED] wrote:
Which kernel do you have ?
Kernel of the distro. So a RH flavoured 2.6.18.
--
Guillaume
On Dec 26, 2007 12:21 PM, Simon Riggs [EMAIL PROTECTED] wrote:
bgwriter_lru_maxpages = 0
So we can see if the bgwriter has any hand in this?
It doesn't change the behaviour I have.
It's not checkpointing either as using pgbench-tools, I can see that
tps and latency are quite stable during the
On Dec 26, 2007 4:41 PM, Guillaume Smet [EMAIL PROTECTED] wrote:
Then I decided to perform read-only tests using -S option (pgbench -S
-s 100 -c 16 -t 3 -U postgres bench). And still the same
behaviour:
shared_buffers=64MB : 20k tps
shared_buffers=1024MB : 8k tps
Some more information
On Dec 26, 2007 7:23 PM, Greg Smith [EMAIL PROTECTED] wrote:
Ah, now this is really interesting, as it rules out all the write
components and should be easy to replicate even on a smaller server. As
you've already dumped a bunch of time into this the only other thing I
would suggest checking
On Dec 27, 2007 7:10 AM, Tom Lane [EMAIL PROTECTED] wrote:
Enables assertion checks in the server, which test for
many "cannot happen" conditions. This is invaluable for
code development purposes, but the tests slow things down a little.
Maybe we
On Dec 27, 2007 7:54 PM, Tom Lane [EMAIL PROTECTED] wrote:
I concur with Greg Stark's earlier comment that this is all
overreaction. Let's just fix the misleading comment in the
documentation and leave it at that.
IMHO, we should also have a special tag for all the binaries
distributed with
Hi Joshua,
On Jan 9, 2008 9:27 AM, Joshua D. Drake [EMAIL PROTECTED] wrote:
wal_sync_method = open_sync
Do you recommend it in every situation, or just because the data is on a
SAN? Do you have any numbers/real cases explaining this choice?
Thanks.
--
Guillaume
On Jan 22, 2008 9:32 AM, Florian Weimer [EMAIL PROTECTED] wrote:
Maybe it's just my test box.. single SATA-II drive, XFS on top of LVM.
Ours was ext3, no LVM or RAID.
Also with SATA? If your SATA disk is lying about effectively SYNCing
the data, I'm not that surprised you don't see any
Hi -performance,
While testing 8.3, I found this query which is equally slow on 8.1 and
8.3 and seems to be really slow for a not so complex query. The stats
are as good as possible and the behaviour of PostgreSQL seems to be
logical considering the stats but I'm looking for a workaround to
speed
On Jan 23, 2008 2:43 AM, Tom Lane [EMAIL PROTECTED] wrote:
regression=# create or replace function getu2(int) returns int[] as $$
select array(select unique2 from tenk1 where thousand = $1);
$$ language sql immutable;
CREATE FUNCTION
regression=# explain select * from tenk1 where unique1 =
On Jan 23, 2008 3:02 AM, Guillaume Smet [EMAIL PROTECTED] wrote:
I'll post my results tomorrow morning.
It works perfectly well:
cityvox_prod=# CREATE OR REPLACE FUNCTION
getTypesLieuFromTheme(codeTheme text) returns text[] AS
$f$
SELECT ARRAY(SELECT codetylieu::text FROM rubtylieu WHERE codeth
Hi Tom,
On May 9, 2007 6:40 PM, Tom Lane [EMAIL PROTECTED] wrote:
To return to your original comment: if you're trying to model a
situation with a fully cached database, I think it's sensible
to set random_page_cost = seq_page_cost = 0.1 or so.
Is it still valid for 8.3 or is there any reason
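The advice quoted above translates to the following (8.2+ parameter names; whether 0.1 suits a given workload still has to be benchmarked):

```
# postgresql.conf -- model a fully cached database
seq_page_cost = 0.1       # default 1.0
random_page_cost = 0.1    # default 4.0; making the two costs equal
                          # tells the planner that "disk" pages are
                          # really coming from RAM
```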
On Jan 25, 2008 5:50 AM, Tom Lane [EMAIL PROTECTED] wrote:
Hmm. I think what that really means is you haven't got to the part of
the query where the leak is :-(. In my attempt to reproduce this
I found that 8.3 has introduced a memory leak into the RI trigger
support, such that even if an
On Feb 5, 2008 12:47 PM, Viviane Lestic [EMAIL PROTECTED] wrote:
Could someone help me analyze this problem?
I don't manage to see if the problem comes from bad tsearch tuning,
postgresql configuration, or something else...
Can you try to replace zoneindex_test @@ q with zoneindex_test @@
Glyn,
On Thu, Mar 13, 2008 at 1:33 PM, Glyn Astill [EMAIL PROTECTED] wrote:
I'm looking at switching out the perc5i (lsi megaraid) cards from our
Dell 2950s for something else as they're crap at raid 10.
Do you have numbers? Perc 5/i cards perform quite well for us (we have
an 8-disk RAID 10
Antoine,
On Tue, May 6, 2008 at 5:03 PM, Antoine Baudoux [EMAIL PROTECTED] wrote:
Limit (cost=23981.18..23981.18 rows=1 width=977)
- Sort (cost=23981.18..23981.18 rows=1 width=977)
Sort Key: this_.c_date
Can you please provide the EXPLAIN ANALYZE output instead of EXPLAIN?
Josh,
On Tue, May 6, 2008 at 11:10 PM, Josh Cole [EMAIL PROTECTED] wrote:
We are using pgfouine to try and optimize our database at this time. Is
there a way to have pgfouine show examples or breakout commits?
I hesitated before deciding not to implement this idea. The problem is that you
often don't
Hi -performance,
This morning I ran into a performance problem when we imported a
dump into an 8.1 database.
The table is 5 million rows large, and when the dump creates an index
on a specific text column called clazz it takes 27 minutes, while on
the other columns it only takes a couple of
On Thu, May 22, 2008 at 3:14 PM, Tom Lane [EMAIL PROTECTED] wrote:
Do you have maintenance_work_mem set large enough that the index
creation sort is done in-memory? 8.1 depends on the platform's qsort
and a lot of them are kinda pessimal for input like this.
FWIW, it's a 32 bits CentOS 4.6
On Thu, May 22, 2008 at 9:18 PM, Tom Lane [EMAIL PROTECTED] wrote:
Ermm .. this is in fact mostly broken in 8.3.0 and 8.3.1. If you don't
want to wait for 8.3.2, you need this patch:
http://archives.postgresql.org/pgsql-committers/2008-03/msg00566.php
That's what I had in mind. We have to
On Thu, Jun 26, 2008 at 9:02 AM, Frank Joerdens [EMAIL PROTECTED] wrote:
Which version are you using? 8.3 removes a lot of implicit casts (all?
not sure), so this may already be your fix.
8.3 only removed implicit casts from non-text types to text (date ->
text, int -> text, interval -> text...)
On Mon, Jul 21, 2008 at 1:25 PM, Andreas Hartmann [EMAIL PROTECTED] wrote:
SELECT pg_database.datname,
pg_size_pretty(pg_database_size(pg_database.datname)) AS size
FROM pg_database where pg_database.datname = 'vvz_live_1';
datname| size
---+-
vvz_live_1|
On Tue, Dec 30, 2008 at 7:59 PM, bricklen brick...@gmail.com wrote:
I would like to continue to use bind variables to prevent sql
injection, but I'd like to force a plan re-parse for every single
query (if necessary?)
As far as I understand your problem, you don't care about using
prepared
On Wed, Dec 31, 2008 at 5:01 PM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
I think it has been shown enough times that the performance drop caused
by a worse plan can be orders of magnitudes worse than what's gained by
producing the plan only once. It does not seem a bad idea to provide
On Thu, Jan 1, 2009 at 9:24 PM, da...@lang.hm wrote:
forgive my ignorance here, but if it's unnamed how can you reference it
later to take advantage of the parsing?
You can't. That's what unnamed prepared statements are for.
It's not obvious to me that the parsing phase is worth any caching.
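The distinction can be sketched with the bigtable example from earlier in the thread (the statement name is hypothetical):

```sql
-- A *named* prepared statement is planned once, without seeing the
-- parameter values, and the generic plan is then reused:
PREPARE by_sender(text) AS SELECT * FROM bigtable WHERE "From" = $1;
EXECUTE by_sender('something');
DEALLOCATE by_sender;
-- The *unnamed* statement exists only at the wire-protocol level (it is
-- what a driver can use below its statement-caching threshold): it is
-- still parameterized, so it still blocks SQL injection, but it is
-- planned at Bind time with the actual parameter values and cannot be
-- referenced again later -- hence no caching benefit to speak of.
```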
On Tue, Feb 24, 2009 at 12:27 AM, Scott Marlowe scott.marl...@gmail.com wrote:
If it's not C then string compares are going to probably need special
indexes to work the way you expect them. (varchar pattern ops). Look
here for more information:
On Mon, Mar 9, 2009 at 5:51 PM, Guillaume Cottenceau g...@mnc.ch wrote:
Until it's possible to specifically tell the JDBC driver (and/or
PG?) to not plan once for all runs (or is there something better
to think of?), or the whole thing would be more clever (off the
top of my head, PG could try
On Wed, Mar 11, 2009 at 8:27 PM, Frank Joerdens fr...@joerdens.de wrote:
This works much better but once we are at about 80% of peak load -
which is around 8000 transactions per second currently - the server goes
into a tailspin in the manner described above and we have to switch off full
On Thu, Mar 12, 2009 at 2:05 AM, Andrew Dunstan and...@dunslane.net wrote:
It is buffered at the individual log message level, so that we make sure we
don't multiplex messages. No more than that.
OK. So if the OP can afford multiplexed queries by using a log
analyzer supporting them, it might