On 3/4/08, Tom Lane [EMAIL PROTECTED] wrote:
Kevin Grittner [EMAIL PROTECTED] writes:
On Tue, Mar 4, 2008 at 8:42 AM, in message
[EMAIL PROTECTED], Chris Kratz
[EMAIL PROTECTED] wrote:
So, I've now been asked to ping the list as to whether turning off
nested loops system wide is a bad
[EMAIL PROTECTED] (Douglas J Hunley) writes:
Subject about says it all. Should I be more concerned about checkpoints
happening 'frequently' or lasting 'longer'? In other words, is it OK to
checkpoint, say, every 5 minutes if each one lasts only a second or three, or better
to have checkpoints every
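For what it's worth, frequent-but-short checkpoints are generally the gentler option; the knobs that control this (8.x-era names; the values here are illustrative, not a recommendation from the thread) look like:

```
# postgresql.conf -- illustrative values only
checkpoint_timeout = 300     # force a checkpoint every 300 s (5 min)
checkpoint_segments = 16     # or every 16 WAL segments, whichever comes first (pre-9.5)
checkpoint_warning = 30      # log a warning if checkpoints come closer than 30 s
```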
On 2/11/08, Tom Lane [EMAIL PROTECTED] wrote:
Chris Kratz [EMAIL PROTECTED] writes:
The first frustration is that I can't get the transaction details scan
to be any more accurate. It thinks it will find 1,407 records;
instead it finds 20,153. Then for whatever reason it thinks
Nested Loop (cost=0.00..31157.91 rows=3054 width=14) (actual
time=0.252..149.557 rows=2769 loops=1)
  ->  Index Scan using
messungen_v_dat_2007_11_12_messpunkt_minute_tag_idx on
messungen_v_dat_2007_11_12 m (cost=0.00..5134.28 rows=3054 width=4)
(actual time=0.085..11.562 rows=2769 loops=1)
On 2/11/08, Tom Lane [EMAIL PROTECTED] wrote:
Chris Kratz [EMAIL PROTECTED] writes:
->  Nested Loop (cost=42.74..161.76 rows=1 width=38) (actual
time=2.932..27.772 rows=20153 loops=1)
  ->  Hash Join (cost=10.89..22.58 rows=1 width=24) (actual
time=0.065..0.134 rows=1 loops=1
joins would be more appropriate.
Is there anything I can do to improve this short of the set
enable_nestloop=off?
PG 8.2.4 on Linux kernel 2.6.9 x64
-Chris
--- Full explain analyze -
Group (cost=336.76..336.82 rows=1 width=328) (actual
time=36620.831..36621.176 rows=248 loops=1)
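Rather than disabling nested loops system wide, the setting can be scoped to the one problem query; a sketch (the SELECT is a placeholder, not a query from the thread):

```sql
BEGIN;
SET LOCAL enable_nestloop = off;   -- reverts automatically at COMMIT/ROLLBACK
SELECT ...;                        -- the misplanned query goes here
COMMIT;
```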
[EMAIL PROTECTED] (Scott Marlowe) writes:
On Jan 23, 2008 1:57 PM, Guy Rouillier [EMAIL PROTECTED] wrote:
Scott Marlowe wrote:
I assume you're talking about solid state drives? They have their
uses, but for most use cases, having plenty of RAM in your server will
be a better way to spend
[EMAIL PROTECTED] (Florian Weimer) writes:
So, that web site seems to list products starting at about 32GB in a
separate rack-mounted box with redundant everything. I'd be more
interested in just putting the WAL on an SSD device, so 500MB or 1GB
would be quite sufficient. Can anyone point me
for any hands on experience in running PostgreSQL in some
sort of HA cluster.
Thanks,
Chris
[EMAIL PROTECTED] (Roberts, Jon) writes:
I think it is foolish to not make PostgreSQL as feature rich when it
comes to security as the competition just because you are idealistic
about the concept of source code. PostgreSQL is better in
many ways than MS SQL Server and equal to many
[EMAIL PROTECTED] (Rafael Martinez) writes:
Heikki Linnakangas wrote:
On a small table like that you could run VACUUM every few minutes
without much impact on performance. That should keep the table size in
check.
Ok, we run VACUUM ANALYZE only once a day, every night. But we would
in idx_blks_hit/read?
Please help me understand this apparent contradiction.
Thanks,
Chris
PG 8.1.3
---(end of broadcast)---
TIP 4: Have you searched our list archives?
http://archives.postgresql.org
a little better than loading three.
Thanks for any advice,
Chris
PG 8.1
RH 4.0
[EMAIL PROTECTED] (Jean-David Beyer) writes:
But what is the limitation on such a thing? In this case, I am just
populating the database and there are no other users at such a time. I am
willing to lose the whole insert of a file if something goes wrong -- I
would fix whatever went wrong and
[EMAIL PROTECTED] (Jean-David Beyer) writes:
Chris Browne wrote:
[EMAIL PROTECTED] (Jean-David Beyer) writes:
But what is the limitation on such a thing? In this case, I am just
populating the database and there are no other users at such a time. I am
willing to lose the whole insert
[EMAIL PROTECTED] (Yinan Li) writes:
I am trying to improve the performance of creating an index.
I've set shared_buffers = 1024MB
effective_cache_size = 1024MB
work_mem = 1GB
maintenance_work_mem = 512MB
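One caution on those settings: CREATE INDEX sorts in maintenance_work_mem, not work_mem, and work_mem = 1GB applies per sort per backend, so it is risky as a global value. A hedged sketch of the session-local alternative (table and column names are hypothetical):

```sql
SET maintenance_work_mem = '512MB';           -- affects this session only
CREATE INDEX idx_big_col ON big_table (col);  -- hypothetical table/column
RESET maintenance_work_mem;
```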
On Wednesday 17 October 2007 20:23, Tom Lane wrote:
Chris Kratz [EMAIL PROTECTED] writes:
On Wednesday 17 October 2007 14:49, Tom Lane wrote:
Evidently it's not realizing that every row of par will have a join
partner, but why not? I suppose a.activityid is unique, and in most
cases
Hello Everyone,
I'm struggling to get postgres to run a particular query quickly. It seems
that very early on the planner misestimates the number of rows
returned by a join, which causes it to assume that there is only 1 row as it
goes up the tree. It then picks a nested loop join
On Wednesday 17 October 2007 14:49, Tom Lane wrote:
Chris Kratz [EMAIL PROTECTED] writes:
I'm struggling to get postgres to run a particular query quickly.
The key problem seems to be the join size misestimate here:
- Hash Join (cost=45.92..1251.07 rows=21 width=8
Tore Lukashaugen wrote:
Hi,
I am new to postgres having worked with Oracle in the past. I am interested
in understanding Postgres's table partition functionality better.
Specifically, I have a third party application running against my postgres
database, but the database is becoming rather
[EMAIL PROTECTED] (Kevin Kempter) writes:
any suggestions for improving LIKE '%text%' queries?
If you know that the 'text' portion of that query won't change, then
you might create a partial index on the boolean condition.
That is,
create index index_foo_text on my_table (tfield) where
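The quoted CREATE INDEX is cut off above; a hedged sketch of the idea it describes (the predicate text is a placeholder):

```sql
CREATE INDEX index_foo_text ON my_table (tfield)
    WHERE tfield LIKE '%text%';
-- only queries repeating that exact (or implied) predicate can use it:
SELECT * FROM my_table WHERE tfield LIKE '%text%';
```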
[EMAIL PROTECTED] (Patrice Castet) writes:
I wonder if clustering a table improves perfs somehow ?
Any example/ideas about that ?
ref : http://www.postgresql.org/docs/8.2/interactive/sql-cluster.html
Sometimes.
1. It compacts the table, which may be of value, particularly if the
table is not
[EMAIL PROTECTED] (Pallav Kalva) writes:
Tom Lane wrote:
Pallav Kalva [EMAIL PROTECTED] writes:
We turned on autovacuums on 8.2 and we have a database which is
read only , it is basically a USPS database used only for address
lookups (only SELECTS, no updates/deletes/inserts).
[EMAIL PROTECTED] (Pallav Kalva) writes:
Mark Lewis wrote:
On Fri, 2007-08-31 at 12:25 -0400, Pallav Kalva wrote:
Can you please correct me if I am wrong, I want to understand how
this works.
Based on what you said, it will run autovacuum again when it passes
200M transactions, as SELECTS
version of PG and the OS.
If performance is getting worse, there usually is some bloat
involved. Not vacuuming aggressively enough might be the most
common cause. Do you autovacuum or vacuum manually?
Tell us more...
Bye,
Chris.
on sun gear
- set explicitly wal_sync_method = fsync
- some other settings (see thread)
Bye,
Chris.
[EMAIL PROTECTED] (Mark Makarowsky) writes:
I have a table with 4,889,820 records in it. The
table also has 47 fields. I'm having problems with
update performance. Just as a test, I issued the
following update:
update valley set test='this is a test'
This took 905641 ms. Isn't that
[EMAIL PROTECTED] (Jeff Davis) writes:
On Thu, 2007-07-26 at 01:44 -0700, angga erwina wrote:
Hi all,
what are the benefits of replication using Slony in
PostgreSQL?
My office is spread across several different places... there are
hundreds of branch offices in different
places... so anyone can
Jonathan Gray wrote:
We're experiencing a query performance problem related to the planner
and its ability to perform a specific type of merge.
We have created a test case (as attached, or here:
http://www3.streamy.com/postgres/indextest.sql) which involves a
hypothetical customer
Chris wrote:
Jonathan Gray wrote:
We're experiencing a query performance problem related to the planner
and its ability to perform a specific type of merge.
We have created a test case (as attached, or here:
http://www3.streamy.com/postgres/indextest.sql) which involves a
hypothetical
Jonathan Gray wrote:
Chris,
Creating indexes on the customerclass table does speed up the queries but
still does not create the plan we are looking for (using the double index
with a backward index scan on the orders table).
Stupid question - why is that particular plan your goal plan
[EMAIL PROTECTED] (Dave Cramer) writes:
On 11-Jul-07, at 10:05 AM, Gregory Stark wrote:
Dave Cramer [EMAIL PROTECTED] writes:
Assuming we have 24 73G drives is it better to make one big
metalun and carve
it up and let the SAN manage where everything is, or is it
better to
specify
= people.id)
That will help a lot.
HTH,
Chris
Tom Lane wrote:
Chris [EMAIL PROTECTED] writes:
I'm trying to do an update of a reasonably large table and it's taking
way too long so I'm trying to work out why and if I need to tweak any
settings to speed it up.
Any foreign keys leading to or from that table?
Nope :(
3.5 million row
A. Kretschmer wrote:
On Thu, 28.06.2007 at 15:03:32 +1000, Chris wrote:
Hi all,
I'm trying to do an update of a reasonably large table and it's taking
way too long so I'm trying to work out why and if I need to tweak any
settings to speed it up.
The table is around 3.5
A. Kretschmer wrote:
On Thu, 28.06.2007 at 16:16:50 +1000, Chris wrote:
Is there a better way to write the update? I thought about something
like this (but couldn't get it working - guess I don't have the right
syntax):
update t1 set domainname=(select id, SUBSTRING
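For the record, the scalar-subquery form fails because it selects two columns; the working query that appears later in the thread reduces to a plain expression (a sketch using the column names from the thread):

```sql
UPDATE t1
SET domainname = substring(emailaddress FROM position('@' IN emailaddress));
```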
Richard Huxton wrote:
Chris wrote:
Tom Lane wrote:
Any foreign keys leading to or from that table?
Nope :(
3.5 million row updates are not exactly gonna be instantaneous anyway,
but only FK checks or really slow user-written triggers would make it
take upwards of an hour ...
No triggers
Richard Huxton wrote:
Chris wrote:
db=# UPDATE email_upd_test SET domainname=substring(email from
position('@' in email));
UPDATE 100
Time: 43796.030 ms
I think I'm I/O bound from my very limited understanding of vmstat.
Well, 43 seconds to update 1 million rows suggests your real query
Hi all,
I'm trying to do an update of a reasonably large table and it's taking
way too long so I'm trying to work out why and if I need to tweak any
settings to speed it up.
The table is around 3.5 million records.
The query is
update table set domainname=substring(emailaddress from
[EMAIL PROTECTED] (Tom Lane) writes:
PS: for the record, there is a hard limit at 1GB of query text, owing
to restrictions built into palloc. But I think you'd hit other
memory limits or performance bottlenecks before that one.
It would be much funnier to set a hard limit of 640K of query
, that is almost 1/2 of the total database (it
is a 200GB+ db) requested in just 5 minutes!
Thanks for any clarification on this.
Chris
12756187
12756187
On 6/8/07, Bill Moran [EMAIL PROTECTED] wrote:
In response to Chris Hoover [EMAIL PROTECTED]:
I need some help. I have started taking snapshots of performance of my
databases with concerns to io. I created a view on each cluster defined
as:
SELECT pg_database.datname AS database_name
On 6/8/07, Bill Moran [EMAIL PROTECTED] wrote:
In response to Chris Hoover [EMAIL PROTECTED]:
On 6/8/07, Bill Moran [EMAIL PROTECTED] wrote:
In response to Chris Hoover [EMAIL PROTECTED]:
I need some help. I have started taking snapshots of performance of
my
databases
Question,
Does (pg_stat_get_db_blocks_fetched(oid)-pg_stat_get_db_blocks_hit(oid)*8) =
number of KB read from disk for the listed database since the last server
startup?
Thanks,
Chris
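One note on that formula: as written, the * 8 binds only to the hit count. If the intent is kilobytes read from disk (assuming the default 8 kB block size), the subtraction needs parentheses; a sketch:

```sql
SELECT datname,
       (pg_stat_get_db_blocks_fetched(oid)
        - pg_stat_get_db_blocks_hit(oid)) * 8 AS kb_read_since_startup
FROM pg_database;
```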
I am re-running it with other tuning, notably cost-based vacuum
delay and less frequent checkpoints, and it is a *lot* smoother.
These take a full night to run, so I'll post more results when I
have useful stuff to show.
This has proven to be a very interesting trip to
Hi,
with that setup you should vacuum aggressively.
I'd send a vacuum statement in a third thread every 15 minutes or so.
The table renaming trick doesn't sound very handy or even
necessary...
Bye,
Chris.
Date: Tue, 22 May 2007 14:38:40 -0400
From: Orhan Aglagul [EMAIL PROTECTED
for any comments/clarifications,
chris
PG 8.1.3
RH 4 AS
# -----------------------------
# PostgreSQL configuration file
# -----------------------------
listen_addresses = '*'        # what IP address(es) to listen on;
port = 50001
max_connections = 1024
superuser_reserved_connections = 10
[EMAIL PROTECTED] (Michael Stone) writes:
On Wed, May 16, 2007 at 12:09:26PM -0400, Alvaro Herrera wrote:
Maybe, but we should also mention that CLUSTER is a likely faster
workaround.
Unless, of course, you don't particularly care about the order of
the items in your table; you might end up
David Levy wrote:
Hi,
I am about to order a new server for my Postgres cluster. I will
probably get a Dual Xeon Quad Core instead of my current Dual Xeon.
Which OS would you recommend to optimize Postgres behaviour (i/o
access, multithreading, etc) ?
I am hesitating between Fedora Core 6,
[EMAIL PROTECTED] wrote:
On Tue, 8 May 2007, Chris wrote:
David Levy wrote:
Hi,
I am about to order a new server for my Postgres cluster. I will
probably get a Dual Xeon Quad Core instead of my current Dual Xeon.
Which OS would you recommend to optimize Postgres behaviour (i/o
access
[EMAIL PROTECTED] (Paweł Gruszczyński) writes:
To test I use pgbench with the default database schema, run for 25, 50, 75
users at a time. Every test I run 5 times to take an average.
Unfortunately my results show that ext2 is fastest; ext3 and jfs are
very similar. I can understand that ext2 without
Madison Kelly wrote:
Hi all,
I am asking in this list because, at the end of the day, this is a
performance question.
I am looking at writing a search engine of sorts for my database. I
have only ever written very simple search engines before, which amounted
to not much more than the
;)
Bye, Chris.
[EMAIL PROTECTED] (Steve) writes:
I'm wondering what we can do to make
this better if anything; would it be better to leave the indexes on?
It doesn't seem to be.
Definitely NOT. Generating an index via a bulk sort is a LOT faster
than loading data into an index one tuple at a time.
We saw
[EMAIL PROTECTED] (Daniel van Ham Colchete) writes:
You are right Christopher.
Okay. Let's solve this matter.
What PostgreSQL benchmark software should I use???
pgbench is one option.
There's a TPC-W at pgFoundry
(http://pgfoundry.org/projects/tpc-w-php/).
There's the Open Source
Joshua D. Drake wrote:
On Mon, 2006-12-11 at 14:33 +1100, Chris wrote:
Marcos Borges wrote:
07/12/2006 04:31
*SQL_CALC_FOUND_ROWS in POSTGRESQL*
In MySQL I'm using the command SQL_CALC_FOUND_ROWS in the following syntax.
SELECT SQL_CALC_FOUND_ROWS name, email, tel FROM mytable WHERE name
'' LIMIT
Mark Kirkwood wrote:
Chris wrote:
It's the same as doing a select count(*) type query using the same
clauses, but all in one query instead of two.
It doesn't return any extra rows on top of the limit query so it's
better than using pg_numrows which runs the whole query and returns
[EMAIL PROTECTED] writes:
If anyone knows what may cause this problem, or has any other ideas, I
would be grateful.
Submit the command VACUUM ANALYZE VERBOSE locations; on both
servers, and post the output of that. That might help us tell for
sure whether the table is bloated (and needs VACUUM
Tobias Brox wrote:
[EMAIL PROTECTED] - Thu at 06:37:12PM -0600]
As my dataset has gotten larger I have had to throw more metal at the
problem, but I have also had to rethink my table and query design. Just
because your data set grows linearly does NOT mean that the performance of
your query is
[EMAIL PROTECTED] (Merlin Moncure) writes:
On 10/17/06, Mario Weilguni [EMAIL PROTECTED] wrote:
Am Dienstag, 17. Oktober 2006 11:52 schrieb Alexander Staubo:
Lastly, note that in PostgreSQL these length declarations are not
necessary:
contacto varchar(255),
fuente varchar(512),
[EMAIL PROTECTED] (Alexander Staubo) writes:
On Oct 17, 2006, at 17:29 , Mario Weilguni wrote:
Am Dienstag, 17. Oktober 2006 11:52 schrieb Alexander Staubo:
Lastly, note that in PostgreSQL these length declarations are not
necessary:
contacto varchar(255),
fuente varchar(512),
[EMAIL PROTECTED] (Craig A. James) writes:
Mark Kirkwood wrote:
The result? I can't use my function in any WHERE clause that
involves any other conditions or joins. Only by itself. PG will
occasionally decide to use my function as a filter instead of doing
the join or the other WHERE
[EMAIL PROTECTED] (Tom Lane) writes:
Another thing we've been beat up about in the past is that loading a
pg_dump script doesn't ANALYZE the data afterward...
Do I misrecall, or were there not plans (circa 7.4...) for pg_dump
to have an option to do an ANALYZE at the end?
I seem to remember
[EMAIL PROTECTED] (Graham Davis) writes:
Adding DESC to both columns in the SORT BY did not make the query use
the multikey index. So both
SELECT DISTINCT ON (assetid) assetid, ts
FROM asset_positions ORDER BY assetid, ts DESC;
and
SELECT DISTINCT ON (assetid) assetid, ts
FROM
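For reference, on releases that support per-column sort order in indexes (8.3 and later; this thread predates that) the plan being chased needs an index whose direction matches the ORDER BY; a sketch with the names from the snippet:

```sql
CREATE INDEX asset_positions_assetid_ts_idx
    ON asset_positions (assetid, ts DESC);

SELECT DISTINCT ON (assetid) assetid, ts
FROM asset_positions
ORDER BY assetid, ts DESC;
```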
[EMAIL PROTECTED] (Graham Davis) writes:
40 seconds is much too slow for this query to run and I'm assuming
that the use of an index will make it much faster (as seen when I
removed the GROUP BY clause). Any tips?
Assumptions are dangerous things.
An aggregate like this has *got to* scan the
://www.powerpostgresql.com/PerfList/
Bye, Chris.
Vacuum.
http://www.postgresql.org/docs/8.1/static/maintenance.html#AUTOVACUUM
And also please suggest other steps that i need to
improve the performance .
http://www.powerpostgresql.com/PerfList
Bye,
Chris.
--
Chris Mair
http://www.1006.org
using. Diagnosing your
problem might depend on which OS you use...
Finally, explain what you mean by "the age of template0 is
increasing very rapidly": do you mean the size is increasing?
Bye,
Chris.
--
Chris Mair
http://www.1006.org
Is there a reason you are not upgrading to PostgreSQL 8.1? It will run natively on Windoze, and will give you much better performance. 7.1 is way out of date, and has a lot of bad issues in it. Upgrading will most likely fix this issue.
Chris
On 8/22/06, Ravindran G - TLS, Chennai. [EMAIL PROTECTED]
Subbiah, Stalin wrote:
Hello All,
This query runs forever and ever. Nature of this table being lots of
inserts/deletes/query, I vacuum it every half hour to keep the holes
reusable and nightly once vacuum analyze to update the optimizer. We've
got index on eventtime only. Running it for current
Subbiah, Stalin wrote:
Actually these servers will be upgraded to 8.1.4 in a couple of months.
even so, you could get some bad data in there.
http://www.postgresql.org/docs/8.0/static/release.html . Go through the
old release notes and you'll find various race conditions, crashes etc.
Here
Dan Langille wrote:
I'm using PostgreSQL 8.1.4 and I'm trying to force the planner to use
an index. With the index, I get executions times of 0.5 seconds.
Without, it's closer to 2.5 seconds.
Compare these two sets of results (also provided at
http://rafb.net/paste/results/ywcOZP66.html
Spiegelberg, Greg wrote:
Sort of on topic, how many foreign keys in a single table is good v.
bad? I realize it's relative to the tables the FK's reference so here's
an example:
Table A: 300 rows
Table B: 15,000,000 rows
Table C: 100,000 rows
Table E: 38 rows
Table F: 9 rows
Table G: is
Sue Fitt wrote:
Hi all,
This is my first post to the performance list, I hope someone can help me.
I'm setting up a table with 2 columns, both of which reference a column
in another table:
CREATE TABLE headwords_core_lexemes (
core_id int REFERENCES headwords_core(core_id),
lexeme_id int
Merlin Moncure wrote:
On 8/10/06, Chris [EMAIL PROTECTED] wrote:
Sue Fitt wrote:
Thanks Chris and Chris, you've solved it.
I had a gui open that connects to the database. It was doing nothing
(and not preventing me adding to or altering headwords_core via psql),
but having closed
I've got 226 customer databases in one cluster. Works like a champ with 8.1.3. I have 3 additional PostgreSQL servers with our largest customers on them. They have between 10 and 30 databases. The smallest of my servers has 261GB's worth of db's in the cluster, and the largest is 400GB's.
BTW, our
Just curious, is this a production server? Also, how large is the total cluster on disk?
On 8/3/06, Ian Westmacott [EMAIL PROTECTED] wrote:
is that all? psql -l | grep 'rows)'
(2016 rows)
On Thu, 2006-08-03 at 21:15 +0200, Steinar H. Gunderson wrote:
On Thu, Aug 03, 2006 at 01:33:35PM -0500, Jim
[EMAIL PROTECTED] (Denis Lussier) writes:
I have no personal experience with XFS, but, I've seen numerous
internal edb-postgres test results that show that of all file
systems... OCFS 2.0 seems to be quite good for PG update intensive
apps (especially on 64 bit machines).
I have been curious
[EMAIL PROTECTED] (Milen Kulev) writes:
I am pretty excited to see whether XFS will clearly outperform EXT3 (no
default setups for either are planned!). I am not sure whether it is
worth including JFS in the comparison too ...
I did some benchmarking about 2 years ago, and found that JFS was a
few
Just turn on autovacuuming on your 8.1 database. You can tune the vacuum and autovacuum parameters to minimize the impact to your system. This is the optimal route to take since PG will maintain the tables for you as needed.
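A hedged sketch of the 8.1-era settings involved (values are illustrative; tune to the workload):

```
# postgresql.conf (8.1)
stats_start_collector = on            # both required for autovacuum in 8.1
stats_row_level = on
autovacuum = on
autovacuum_naptime = 60               # seconds between checks
autovacuum_vacuum_scale_factor = 0.2
autovacuum_vacuum_cost_delay = 20     # ms; throttles autovacuum's I/O impact
```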
HTH,
Chris
On 7/15/06, Gabriele Turchi [EMAIL PROTECTED] wrote:
Il giorno
, reindexdb, vacuum analyze every weekend (we were on 7.3.4). This gave me pristine indexes and tables for Monday's start of the week. If you can, look hard at upgrading to
8.1.x as it will fix a lot of the issues you are having with autovacuum (along with a ton of other improvements).
HTH,
Chris
Ioana Danes wrote:
I have a problem with a query that in postgres 7.4 and 8.1.2 has an
acceptable response time, but in postgres 8.1.4 is very slow.
This is the table I use:
create table TEST (
    TESTID   INT8 not null,
    TESTTYPE INT4 null,
    constraint PK_TESTID primary key
Eugeny N Dzhurinsky wrote:
On Wed, Jul 05, 2006 at 09:39:31AM -0400, Tom Lane wrote:
Eugeny N Dzhurinsky [EMAIL PROTECTED] writes:
but it seems pg_autovacuum does not do vacuuming on system tables.
There was a bug awhile back whereby autovac failed to notice temp table
cleanup at connection
Gene wrote:
Thanks for the suggestion. Actually I went ahead and created a reverse
function using plpgsql, created an index using reverse column and now
my queries use where reverse(column) like reverse('%2345') and it's
using the index like i hoped it would! Now if I could figure out how
to
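A sketch of the trick being described (the function body is illustrative; current PostgreSQL ships a built-in reverse(), and in non-C locales the index may need text_pattern_ops to serve LIKE prefixes):

```sql
CREATE OR REPLACE FUNCTION reverse(text) RETURNS text AS $$
DECLARE
    result text := '';
BEGIN
    FOR i IN REVERSE length($1)..1 LOOP
        result := result || substr($1, i, 1);
    END LOOP;
    RETURN result;
END;
$$ LANGUAGE plpgsql IMMUTABLE STRICT;

CREATE INDEX idx_t_col_rev ON t (reverse(col));  -- hypothetical table/column

-- a suffix match becomes a prefix match the index can serve:
-- WHERE reverse(col) LIKE reverse('%2345')      -- i.e. LIKE '5432%'
```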
performance?
I am running a similar environment. Each of our customers has a separate database with several hundred tables per database. One of our servers is running over 200 customer databases with absolutely no problems.
HTH,
Chris
[EMAIL PROTECTED] writes:
Is it possible to start two instances of postgresql with different ports and
directories which run simultaneously?
Certainly. We have one HACMP cluster which hosts 14 PostgreSQL
instances across two physical boxes. (If one went down, they'd all
migrate to the
[EMAIL PROTECTED] (Steve Poe) writes:
I have a client who is running Postgresql 7.4.x series database
(required to use 7.4.x). They are planning an upgrade to a new server.
They are insistent on Dell.
Then they're being insistent on poor performance.
If you search for dell postgresql
be greatly appreciated.
Thanks,
Chris Beecroft
QUERY PLAN
.
Thanks once again,
Chris Beecroft
On Mon, 2006-06-05 at 13:07, Tom Lane wrote:
Chris Beecroft [EMAIL PROTECTED] writes:
Our problem is that about a week and a half ago we started to get some
queries that would (seemingly) never return (e.g., normally run in a
couple minutes, but after 2.5
the nested loop shouldn't be a never returns case, should
it? For 1800 rows?
Well, it's a big query. If it ought to take a second or two, and
instead is taking an hour or two (1800 times the expected runtime), that
might be close enough to never to exhaust Chris' patience. Besides,
we don't know
On 5/29/06, Anton Maksimenkov [EMAIL PROTECTED] wrote:
Hi.
I have 2 tables - one with calls numbers and another with calls codes.
The structure almost like this:
...
How long does this query take?
SELECT code FROM a_voip_codes c, a_voip v where v.called_station_id
like c.code ||
'%' order by
if and how this might be possible?
Regards :)
Chris.
archives for priority inversion to
find out why not.
regards, tom lane
Ok,
I've learned something new (*).
I'll drop that idea :)
Bye,
Chris.
(*)
http://en.wikipedia.org/wiki/Priority_inversion
bundled or are there
single insert transactions? Are the statements prepared?
I use prepared statements for all requests. Each transaction is about
5-45 requests.
sounds ok,
could be even more bundled together if the application is compatible
with that.
Bye, Chris
. I've already read the Postgres 7.4 (yes, I'm stuck on 7.4) manual; I was hoping for this mailing list's wisdom to supply me with some tips that can only be learnt through painful experience. :-)
Thanks.
- Chris
-Original Message-
From: Jim C. Nasby [mailto:[EMAIL PROTECTED]]
Sent
are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
THANKS!!!
- Chris
Christian Paul Cosinas wrote:
Hi!
How can I speed up my server's performance when I use offset and limit
clause.
For example I have a query:
SELECT * FROM table ORDER BY id, name OFFSET 10 LIMIT 1
This query takes a long time about more than 2 minutes.
If my query is:
SELECT * FROM
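Large OFFSETs are slow because the skipped rows are still fetched and discarded; a keyset-pagination sketch (assumes (id, name) is unique and indexed, and the corrected row-wise comparison semantics of 8.2 onward):

```sql
-- remember the last (id, name) returned, then:
SELECT * FROM table
WHERE (id, name) > (10, 'last_name_seen')
ORDER BY id, name
LIMIT 1;
```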
This is a deadly bug, because our web site goes dead when this happens,
and it requires an administrator to log in and kill the stuck postgres
process then restart Postgres. We've installed failover system so that
the web site is diverted to a backup server, but since this has happened
Shoaib Burq wrote:
UNSUBSCRIBE
To unsubscribe:
List-Unsubscribe:
mailto:[EMAIL PROTECTED]
Email admins - Could we add this above or below the random tips that get
appended to every email ?
--
Postgresql php tutorials
http://www.designmagick.com/