It sounds like what you're doing is comparing the planner's cost
estimates from running EXPLAIN on a few different queries. The planner's
cost estimate was never intended for that; it's not an absolute scale of
cost, it's just a tool that the planner uses to rank candidate plans for
a single query relative to one another.
c_1=val then if c_1 is
highly selective the index would still help.
See here:
http://www.postgresql.org/docs/8.3/interactive/indexes-multicolumn.html
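To sketch the leading-column point (table and column names hypothetical):

```sql
CREATE TABLE t (c_1 int, c_2 int);
CREATE INDEX t_c1_c2_idx ON t (c_1, c_2);

-- Can use the index: the constraint is on the leading column.
SELECT * FROM t WHERE c_1 = 42;

-- Can use the index even more effectively: a leading prefix of columns.
SELECT * FROM t WHERE c_1 = 42 AND c_2 = 7;

-- Far less useful: c_2 alone is not a leading prefix, so the whole
-- index would have to be scanned.
SELECT * FROM t WHERE c_2 = 7;
```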
-- Mark Lewis
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http
On Tue, 2008-08-26 at 18:44 +0200, henk de wit wrote:
Hi,
We're currently having a problem with queries on a medium sized table. This
table is 22GB in size (via select
pg_size_pretty(pg_relation_size('table'));). It has 7 indexes, which bring
the total size of the table to 35 GB
On Thu, 2008-08-21 at 12:33 +0530, Kranti K K Parisa™ wrote:
On Wed, Aug 20, 2008 at 8:54 PM, Matthew Wakeling
[EMAIL PROTECTED] wrote:
On Wed, 20 Aug 2008, Kranti K K Parisa™ wrote:
will creating multiple indexes on the same column affect
performance?
On Wed, 2008-08-20 at 17:55 +0530, Kranti K K Parisa™ wrote:
Hi,
Can anyone suggest performance tips for PostgreSQL when using
Hibernate?
One of the queries:
- PostgreSQL has an INDEX concept and Hibernate also has column indexes.
Which is better among them? Or is creating either of them
. and the possibilities are like the above.
If we create such indexes, will it affect performance?
And what is the best way to go in this case?
On Wed, Aug 20, 2008 at 8:10 PM, Mark Lewis [EMAIL PROTECTED]
wrote:
On Wed, 2008-08-20 at 17:55 +0530, Kranti K K Parisa™ wrote
On Wed, 2008-08-20 at 20:32 +0530, Kranti K K Parisa™ wrote:
Hi Mark,
Thank you very much for the information. I will analyse the DB
structure and create indexes on PG directly.
Are you using any connection pooling, like DBCP or pgpool?
Regards, KP
On Wed, Aug 20, 2008 at 8:05 PM, Mark
or not you're getting a battery-backed write
cache for that ServeRAID-8K.
-- Mark Lewis
On Wed, 2008-03-12 at 19:58 +0100, Pascal Cohen wrote:
Hello, we plan to buy a dedicated server to host our database.
Here is the proposal I was given (with a second identical server for
backup using log shipping
wouldn't need to be updated very frequently and you wouldn't need to
store a marker for each page, maybe only 100 markers spread evenly
across the result set would be sufficient.
-- Mark Lewis
On Thu, 2008-02-14 at 19:49 +, Michael Lorenz wrote:
Fair enough, and I did think of this as well
for reading past the end marker is the same as the algorithm
for skipping past the beginning marker then all is well.
-- Mark Lewis
---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at
http
to have open at any time you
may want to consider increasing the max user processes and open files
settings as well.
-- Mark Lewis
in the index.
This page has details and instructions for how to get it to work:
http://developer.postgresql.org/pgdocs/postgres/indexes-ordering.html
-- Mark Lewis
improvement on the new server but no significant change
on the old server then you've found your culprit.
-- Mark Lewis
On Sat, 2007-09-15 at 01:51 +0530, Harsh Azad wrote:
Great, creating new tablespace for indexes worked! Now the question is
whether existing tables/index can be moved to the new tablespace using
an alter command or the only way possible is to drop and recreate
them?
You can alter an existing
.
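The truncated answer is presumably pointing at the SET TABLESPACE form of
ALTER; a sketch, with hypothetical table, index, and tablespace names:

```sql
-- Move an existing table and an existing index to another tablespace.
-- The underlying files are physically copied, and the objects are
-- locked for the duration of the move.
ALTER TABLE orders SET TABLESPACE fast_disks;
ALTER INDEX orders_pkey SET TABLESPACE fast_disks;
```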
-- Mark Lewis
a SAN for a database would be something like Oracle RAC,
but I'm not aware of any PG equivalent to that.
-- Mark Lewis
fast enough, then you can
always try B later to see if it works any better.
-- Mark Lewis
On Fri, 2007-08-31 at 12:25 -0400, Pallav Kalva wrote:
Can you please correct me if I am wrong, I want to understand how this
works.
Based on what you said, it will run autovacuum again when it passes 200M
transactions, as SELECTS are transactions too and are going on these
tables.
But
On Fri, 2007-08-31 at 19:39 -0400, Tom Lane wrote:
I wrote:
Mark Lewis [EMAIL PROTECTED] writes:
We've been holding back from upgrading to 8.2 because this one is a
show-stopper for us.
Well, you could always make your own version with this patch reverted:
http
regular VACUUMs at all but are instead exclusively
running VACUUM FULL, then I don't think you would see warnings about
running out of fsm entries, which would explain why you did not notice
the bloat. I haven't confirmed that though, so I might be wrong.
-- Mark Lewis
On Thu, 2007-08-30 at 11:50
looking for a
literal underscore and not for any matching character by escaping the
underscore, that will allow it to do a much quicker index scan.
Something like:
cat.name like 'reporting|_group.Tier2%' ESCAPE '|'
-- Mark Lewis
been holding back from upgrading to 8.2 because this one is a
show-stopper for us.
-- Mark Lewis
On Tue, 2007-08-28 at 11:24 -0500, Evan Carroll wrote:
-- Forwarded message --
From: Evan Carroll [EMAIL PROTECTED]
Date: Aug 28, 2007 11:23 AM
Subject: Re: [PERFORM] 8.2 Query 10
On Wed, 2007-08-15 at 16:36 -0300, Sebastián Baioni wrote:
Hello,
We are running PostgreSQL 8.2.0 on amd64-portbld-freebsd6.2, compiled
by GCC cc (GCC) 3.4.6 [FreeBSD] 20060305.
The query only uses the index if we have a limit n:
Without Limit n
explain
select esapcuit, esapcuil
from
itself, some
information about the tables/indexes/foreign keys involved, and an
EXPLAIN ANALYZE for one of the problematic queries?
Also, what kind of vacuuming regimen are you using? Just a daily cron
maybe? Are you regularly analyzing the tables?
-- Mark Lewis
.
Do you have the ability to try that?
-- Mark Lewis
On Thu, 2007-07-26 at 09:18 -0700, Brandon Shalton wrote:
Hello all,
My hard disk is filling up in the /base directory to where it has consumed
all 200gig of that drive.
All the posts that I see keep saying move to a bigger drive, but at some
point a bigger drive would just get
could also look at the pgsql-jdbc archives for the JDBC
driver patches which allow you to use COPY-style bulk loading, which
should get you to the performance level of COPY, which should be
reasonably close to the performance of select into.
-- Mark Lewis
On Tue, 2007-07-17 at 22:50 +0200, Thomas
to analyze :)
-- Mark Lewis
On Wed, 2007-06-20 at 11:21 -0400, Greg Smith wrote:
...
One of the things that was surprising to me when I started looking at the
organization of the PostgreSQL buffer cache is how little gross
information about its contents is available. I kept expecting to find a
summary section where
On Tue, 2007-06-19 at 09:37 -0400, Karl Wright wrote:
Alvaro Herrera wrote:
Karl Wright wrote:
This particular run lasted four days before a VACUUM became essential.
The symptom that indicates that VACUUM is needed seems to be that the
CPU usage of any given postgresql query
-client reporting, however.
What version of PG is this? What is your vacuuming strategy? Have you
tried a REINDEX to see if that helps?
-- Mark Lewis
On Thu, 2007-05-24 at 21:54 +0100, James Mansion wrote:
If Sybase is still like SQL Server (or the other way around), it *may*
end up scanning the index *IFF* the index is a clustered index. If it's
a normal index, it will do a sequential scan on the table.
Are you sure its not
connections are opened.
-- Mark Lewis
On Fri, 2007-05-18 at 10:45 +1200, Ralph Mason wrote:
We have a database running on a 4 processor machine. As time goes by
the IO gets worse and worse peeking at about 200% as the machine loads
up.
The weird thing is that if we restart postgres it’s fine
1. If you go the route of using nice, you might want to run the 3D
front-end at a higher priority instead of running PG at a lower
priority. That way apache, php and the other parts all run at the same
priority as PG and just the one task that you want to run smoothly is
elevated.
2. You may not
Maybe he's looking for a switch for initdb that would make it
interactive and quiz you about your expected usage-- sort of a magic
auto-configurator wizard doohicky? I could see that sort of thing being
nice for the casual user or newbie who otherwise would have a horribly
mis-tuned database.
Not to hijack this thread, but has anybody here tested the behavior of
PG on a file system with OS-level caching disabled via forcedirectio or
by using an inherently non-caching file system such as ocfs2?
I've been thinking about trying this setup to avoid double-caching now
that the 8.x series
understanding from reading through it was that
you never fully tracked down the cause of the factor of 10 write volume
mismatch, so I pretty much wrote it off as a data point for
forcedirectio because of the unknowns. Did you ever figure out the
cause of that?
-- Mark Lewis
is not possible :) (this haunts deletes, not inserts).
Sure it's possible:
CREATE TABLE parent (col1 int4 UNIQUE); -- FK target needs a unique constraint
-- insert many millions of rows into parent
CREATE TABLE child (col1 int4 REFERENCES parent(col1));
-- insert many millions of rows into child, very very slowly.
- Mark Lewis
On Tue, 2007-02-06 at 14:06 -0500, Merlin Moncure wrote:
On 2/6/07, Mark Lewis [EMAIL PROTECTED] wrote:
actually, I get the stupid award also because RI check to unindexed
column is not possible :) (this haunts deletes, not inserts).
Sure it's possible:
CREATE TABLE parent (col1
wondering if it's possible to shed some light on the
remaining dark shadows of PG performance troubleshooting.
-- Mark Lewis
On Thu, 2006-12-14 at 11:19 -0500, Tom Lane wrote:
Arnaud Lesauvage [EMAIL PROTECTED] writes:
Tom Lane a crit :
It seems the time must be going into this trigger function
a RAID controller with a battery-backed write cache can enable its
own write cache, but can't safely enable the write-caches on the disk
drives it manages.
-- Mark Lewis
guess that would just make it so that instead of HIGH tasks
being effectively reduced to LOW, then LOW tasks could be promoted to
HIGH.
-- Mark Lewis
other types of cases you're aware of where some form of cross-
column statistics would be useful? In the unlikely event that I
actually come up with a brilliant and simple solution, I'd at least like
to make sure that I'm solving the right problem :)
Thanks,
Mark Lewis
On Tue, 2006-10-10 at 22
of utility vs. annoyance. One of the tips that shows up
in the footers today is just a link to the archives anyway.
-- Mark Lewis
On Wed, 2006-10-04 at 11:28 -0500, Bruno Wolff III wrote:
On Wed, Oct 04, 2006 at 08:30:03 -0700,
Joshua D. Drake [EMAIL PROTECTED] wrote:
Although I 100% agree
if there were far fewer distinct
assetid values than rows in the main table, and would get slow if you
commonly delete rows from the main table or decrease the value for ts in
the row with the highest ts for a given assetid.
-- Mark Lewis
On Tue, 2006-10-03 at 13:52 -0700, Graham Davis wrote:
Thanks Tom
Hmmm. How many distinct assetids are there?
-- Mark Lewis
On Tue, 2006-10-03 at 14:23 -0700, Graham Davis wrote:
The summary table approach maintained by triggers is something we are
considering, but it becomes a bit more complicated to implement.
Currently we have groups of new positions
not sure if it does.
-- Mark Lewis
On Tue, 2006-10-03 at 14:35 -0700, Graham Davis wrote:
Not many. It fluctuates, but there are usually only ever a few hundred
at most. Each assetid has multi-millions of positions though.
Mark Lewis wrote:
Hmmm. How many distinct assetids
as you could possibly achieve by reducing BLKSIZE.
PG 8.1 is smart enough to use a bitmap index scan to combine the two
indexes at query time; if that gives you adequate performance then it
would be simpler than reducing BLKSIZE.
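A sketch of what that looks like (table, column, and index names
hypothetical):

```sql
-- Two independent single-column indexes:
CREATE INDEX t_a_idx ON t (a);
CREATE INDEX t_b_idx ON t (b);

-- On 8.1 and later, the planner can AND these together at query time;
-- EXPLAIN shows a BitmapAnd over two Bitmap Index Scans.
EXPLAIN SELECT * FROM t WHERE a = 1 AND b = 2;
```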
-- Mark Lewis
On Mon, 2006-09-25 at 17:54 -0400, Marc Morin wrote
? Have you confirmed that you aren't suffering from
table bloat?
3. What are the actual results you got from the PG run in question?
4. What is the size of the data set referenced in the test run?
-- Mark Lewis
On Thu, 2006-09-21 at 07:52 -0700, yoav x wrote:
Hi
After upgrading DBI and DBD::Pg
So this might be a dumb question, but the above statements apply to the
cluster (e.g. postmaster) as a whole, not per postgres
process/transaction correct? So each transaction is blocked waiting for
the main postmaster to retrieve the data in the order it was requested
(i.e. not multiple
figure out which indexes are missing.
-- Mark Lewis
On Tue, 2006-09-19 at 15:22 +0200, Ivan Voras wrote:
I've just fired off a DELETE FROM table command (i.e. unfiltered
DELETE) on a trivially small table but with many foreign key references
(on similar-sized tables), and I'm waiting
The last I checked (years ago), sql-bench was very synthetic (i.e.
reflecting no realistic use case). It's the sort of test suite that's
useful for database developers when testing the effects of a particular
code change or optimization, but not so applicable to real-world uses.
Historically
Monitoring the processes using top reveals that the total amount of
memory used slowly increases during the test. When reaching insert
number 4, or somewhere around that, memory is exhausted, and the
system begins to swap. Each of the postmaster processes seems to use a
constant
it's worse than that. if you need to read something that is not in
the o/s cache, all the disks except for one need to be sent to a
physical location in order to get the data. That's the basic rule with
striping: it optimizes for sequential i/o at the expense of random i/o.
There are some
Well, at least on my test machines running gnome-terminal, my pgbench
runs tend to get throttled by gnome-terminal's lousy performance to no
more than 300 tps or so. Running with 2>/dev/null to throw away all the
detailed logging gives me 2-3x improvement in scores. Caveat: in my
case the db is
is going to max out somewhere around 333 tps. (2*1/60).
-- Mark Lewis
On Mon, 2006-08-21 at 16:45 -0400, Marty Jia wrote:
I've exhausted all the performance tuning ideas I can think of, like the
following parameters:
shared_buffers
fsync
max_fsm_pages
max_connections
shared_buffers
work_mem
.
To get the text-based one, execute SET SHOWPLAN_ALL ON which toggles
diagnostic mode on, and each query that you run will return the explain
plan instead of actually running until you execute SET SHOWPLAN_ALL
OFF.
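A minimal illustration of that toggle (T-SQL; the table name is
hypothetical):

```sql
SET SHOWPLAN_ALL ON;
GO
-- Returns the estimated plan rows instead of executing the query.
SELECT * FROM some_table WHERE id = 1;
GO
SET SHOWPLAN_ALL OFF;
GO
```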
-- Mark Lewis
On Thu, 2006-08-17 at 09:11 -0400, Tom Lane wrote:
Peter Hardman
Can you provide an EXPLAIN ANALYZE of the query in PG? Have you
analyzed the PG database? How many rows is this query expected to
return? Which version of PG are you running? What indexes have you
defined?
-- Mark
On Tue, 2006-08-15 at 14:38 +, Sebastián Baioni wrote:
Hello, I'm
reads would be
about equal or else maybe give a slight edge to RAID-10.
-- Mark Lewis
On Fri, 2006-07-28 at 13:31 -0400, Jeff Trout wrote:
I too have a DL385 with a single DC Opteron 270.
It claims to have a smart array 6i controller and over the last
couple of days I've been runnign some
A volatile function may return a different result for each row;
think of the random() or nextval() functions for example. You wouldn't
want them to return the same value for each row returned.
-- Mark Lewis
On Fri, 2006-07-28 at 13:59 -0700, Ben wrote:
It's volatile, but it will always
for other folks too.
Thanks in advance for any feedback :)
-- Mark Lewis
The IDE drive is almost certainly lying about flushing data to the disk.
Lower-end consumer drives often do.
What this means is that commits will be a whole lot faster, but the
database loses its ACID guarantees, because a power failure at the wrong
moment could corrupt the whole database.
If
, it is possible to (mis)configure
the system to create an unbounded number of cached prepared statements
on any particular connection. Older versions of DBCP were also known to
have bugs which aggravated this issue when prepared statement caching
was enabled, IIRC.
-- Mark Lewis
the
sequential scan is much faster because it avoids traversing the index
and performing random read operations.
-- Mark Lewis
On Thu, 2006-06-15 at 14:46 -0400, John Vincent wrote:
One question that we came up with is how does this affect other
aggregate functions like MAX,MIN,SUM and whatnot? Being that this is
our data warehouse, we use these all the time. As I've said
previously, I didn't know a human could
=off, at least for some cheap controllers.
-- Mark Lewis
No. You need fsync on in order to force the data to get TO the NetApp
at the right time. With fsync off, the data gets cached in the
operating system.
-- Mark Lewis
On Wed, 2006-06-14 at 14:48 -0700, Dan Gorman wrote:
All,
So I thought I'd pose this question:
If I have a pg database
On Fri, 2006-06-02 at 15:25 -0300, [EMAIL PROTECTED] wrote:
Hi,
I would like to know if my supposition is right.
Considering an environment with only one hard disk attached to a server, an
initial loading of the database probably is much faster using an IDE/ATA
interface with write-back
On Fri, 2006-06-02 at 17:37 -0300, [EMAIL PROTECTED] wrote:
Many thanks Mark,
I will consider fsync=off only to do an initial load, not for a database
normal operation.
This approach works well. You just need to remember to shut down the
database and start it back up again with fsync
that the index is cached (or
index + relevant chunks of data file if using InnoDB?) then that would
explain how MySQL can use an index to get fast results.
-- Mark Lewis
(where N is the
number of processor cores available) might be a win over a single-
threaded import.
-- Mark Lewis
), and discovered that for most databases,
the optimal batch size was 1. For PostgreSQL I think it was 2.
The moral of the story is that you're probably better off running a
bunch of small selects than in trying to optimize things with one
gargantuan select.
-- Mark Lewis
On Mon, 2006-05-08 at 13
They are not equivalent. As I understand it, RAID 0+1 performs about
the same as RAID 10 when everything is working, but degrades much less
nicely in the presence of a single failed drive, and is more likely to
suffer catastrophic data loss if multiple drives fail.
-- Mark
On Tue, 2006-05-02 at
vulnerable to OS crashes, failures in
non-RAID hardware, UPS failures, or anything else that would necessitate
a hard reboot.
So a UPS is a decent replacement for a BBU only if you trust your app
server/OS more than you value your data.
-- Mark Lewis
On Mon, 2006-05-01 at 10:58 -0700, Erik Myllymaki
to the deep guru who knew the proper undocumented
incantations.
-- Mark Lewis
On Thu, 2006-04-20 at 20:00 +0200, Mikael Carneholm wrote:
We're going to get one for evaluation next week (equipped with dual
2Gbit HBA:s and 2x14 disks, iirc). Anyone with experience from them,
performance wise
Can you post an explain analyze for the delete query? That will at
least tell you if it is the delete itself which is slow, or a trigger /
referential integrity constraint check. Which version of PG is this?
-- Mark Lewis
On Wed, 2006-03-29 at 12:58 -0500, Eric Lauzon wrote:
Greetings
, just
active/passive fault tolerance.
-- Mark Lewis
which would degenerate to the exact same sort
behavior in use today.
-- Mark Lewis
) where n=4 in SQL_ASCII though.
In SQL_ASCII, just take the first 4 characters (or 8, if using a 64-bit
sortKey as elsewhere suggested). The sorting key doesn't need to be a
one-to-one mapping.
-- Mark Lewis
Machine 1: $2000
Machine 2: $2000
Machine 3: $2000
Knowing how to rig them together and maintain them in a fully fault-
tolerant way: priceless.
(Sorry for the off-topic post, I couldn't resist).
-- Mark Lewis
On Wed, 2006-02-15 at 09:19 -0800, Craig A. James wrote:
Jeremy Haile wrote:
We
want to upgrade as soon as possible, and refer to the
on-line docs about what to do with your FSM settings.
-- Mark Lewis
On Mon, 2006-01-30 at 23:57 +0100, Emmanuel Lacour wrote:
Hi everybody,
I have the following problem, on a test server, if I do a fresh import
of production data
means that you would need to either
implement a lot of complex locking for little material gain, or just
hold the cursors in moderately long-running transactions, which leads
back to the solution suggested earlier.
-- Mark Lewis
thousands of open cursors. I don't know if this introduces an
unacceptable performance penalty or other bottleneck in the server?
-- Mark Lewis
If this is a query that will be executed more than once, you can also
avoid incurring the planning overhead multiple times by using PREPARE.
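A sketch of that pattern (statement and table names hypothetical):

```sql
-- Plan once, execute many times with different parameters.
PREPARE get_user (int) AS
    SELECT * FROM users WHERE id = $1;

EXECUTE get_user(1);
EXECUTE get_user(2);

-- Release the prepared statement when done (it otherwise lives
-- for the rest of the session).
DEALLOCATE get_user;
```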
-- Mark Lewis
On Wed, 2006-01-11 at 18:50 -0500, Jean-Philippe Côté wrote:
Thanks a lot for this info, I was indeed exceeding the genetic
optimizer's
was to their current RHEL4 kernel.
Thanks,
Mark Lewis
Do you have an index on the date column? Can you post an EXPLAIN
ANALYZE for the slow query?
-- Mark Lewis
On Wed, 2005-10-26 at 13:41 -0700, aurora wrote:
I am running PostgreSQL 7.4 on FreeBSD. The main table has 2 million
records (we would like to do at least 10 mil or more). It is mainly
Which version of PG are you using? One of the new features for 8.0 was
an improved caching algorithm that was smart enough to avoid letting a
single big query sweep everything else out of cache.
-- Mark Lewis
On Tue, 2005-10-04 at 10:45 -0400, Lane Van Ingen wrote:
Yes, Stefan, the kind
operations != passes. If you were clever, you could probably write a
modified bubble-sort algorithm that only made 2 passes. A pass is a
disk scan, operations are then performed (hopefully in memory) on what
you read from the disk. So there's no theoretical log N lower-bound on
the number of
I had a similar problem, so I downloaded 8.1 from CVS, ran it on a
relatively gnarly dev workstation, imported a dump of my 8.0 database,
and ran my troublesome queries with the new EXPLAIN ANALYZE.
This process took about an hour and worked great, provided that you've
actually named your foreign
performance with
this particular hardware and workload?
Ah well. Thought myself in circles and have no real conclusions to show
for it. Posting anyway, maybe this will give somebody some ideas to
work with.
-- Mark Lewis
On Fri, 2005-08-12 at 08:47 +, Steve Poe wrote:
Paul,
Before I say
Easier and faster than doing the custom trigger is to simply define a
unique index and let the DB enforce the constraint with an index lookup,
something like:
create unique index happy_index ON happy_table(col1, col2, col3);
That should run faster than the custom trigger, but not as fast as the
On Fri, 2005-07-08 at 16:43 +0200, Enrico Weigelt wrote:
* PFC [EMAIL PROTECTED] wrote:
snip
For Python it's the reverse : the MySQL driver is slow and dumb,
and the postgres driver (psycopg 2) is super fast, handles all
quoting,
and knows about type conversions, it will
' package for RH7.2 anyway.
-- Mark Lewis
On Thu, 2005-06-16 at 07:46 -0700, Todd Landfried wrote:
Yes, it is 7.2. Why? because an older version of our software runs on
RH7.3 and that was the latest supported release of Postgresql for
RH7.3 (that we can find). We're currently ported to 8, but we
If the original paper was published in 1984, then it's been more than 20
years. Any potential patents would already have expired, no?
-- Mark Lewis
On Tue, 2005-05-10 at 14:35, Mischa Sandberg wrote:
Quoting Jim C. Nasby [EMAIL PROTECTED]:
Well, in a hash-join right now you normally end up
to
a broker, which could then implement connection pooling.
-- Mark Lewis
On Tue, 2005-04-12 at 22:09, Slavisa Garic wrote:
This is a serious problem for me as there are multiple users using our
software on our server and I would want to avoid having connections
open for a long time. In the scenario
in Tom's fan club today. I imported my test dataset
and was almost immediately able to track down the cause of my
performance problem.
Thanks!
Mark Lewis
table?
-- Mark Lewis
On Thu, 2005-03-24 at 16:52, Tom Lane wrote:
Karim Nassar [EMAIL PROTECTED] writes:
Here is the statement:
orfs=# explain analyze DELETE FROM int_sensor_meas_type WHERE
id_meas_type IN (SELECT * FROM meas_type_ids