Looking for replication solutions, I find:
Slony-I
Seems good, but single-master only; the master is a single point of failure, with
no good failover system for electing a new master or having a failed
master rejoin the cluster. Slave databases are mostly for safety or
for parallelizing queries for perfo
Greg Smith wrote:
If you're going to the trouble of building a tool for offering
configuration advice, it can be wildly more effective if you look inside
the database after it's got data in it, and preferably after it's been
running under load for a while, and make your recommendations based on
Bill Moran wrote:
In response to Heikki Linnakangas <[EMAIL PROTECTED]>:
Can anyone else confirm this? I don't know if this is a windows-only
issue, but I don't know of a way to check fragmentation in unix.
I can confirm that it's only a Windows problem. No UNIX filesystem
that I'm aware of s
Merlin Moncure wrote:
In the context of this debate, I see this argument all the time, with
the implied suffix: 'If only we used integer keys we would not have
had this problem...'. Either the customer identifies parts with a
part number or they don't...and if they do identify parts with a
numbe
Merlin Moncure wrote:
Since the part number is NOT the primary key, duplicate part numbers
are not a problem. If you had instead used the part number as the
primary key, you'd be dead in the water.
You are redefining the primary key to be (part_number,
obsoletion_date). Now, if you had not a
Merlin Moncure wrote:
In the context of this debate, I see this argument all the time, with
the implied suffix: 'If only we used integer keys we would not have
had this problem...'. Either the customer identifies parts with a
part number or they don't...and if they do identify parts with a
numbe
Merlin Moncure wrote:
Using surrogate keys is dangerous and can lead to very bad design
habits that are unfortunately so prevalent in the software industry
they are virtually taught in schools. ... While there is
nothing wrong with them in principle (you are exchanging one key for
another as a
Spiegelberg, Greg wrote:
We have a query which generates a small set of rows (~1,000) which are
to be used in a DELETE on the same table. The problem we have is that
we need to join on 5 different columns and it takes far too long.
You may have encountered the same problem I did: You *must*
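(For reference, one common shape for this kind of delete-by-join in PostgreSQL is DELETE ... USING; the table and column names below are made up, since the original advice is cut off above. An index on the five columns of the big table, and a fresh ANALYZE on the small driver table, usually matter more than the exact syntax.)
DELETE FROM big_table
 USING rows_to_delete r
 WHERE big_table.c1 = r.c1 AND big_table.c2 = r.c2 AND big_table.c3 = r.c3
   AND big_table.c4 = r.c4 AND big_table.c5 = r.c5;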
I had a 'scratch' database for testing, which I deleted, and then the disk went
out. No problem, no precious data. But now I can't drop the tablespace, or
the user who had that as the default tablespace.
I thought about removing the tablespace from pg_tablespace, but it seems wrong
to be monkey
Kevin Grittner wrote:
Management has simply given a mandate that the software be independent
of OS and database vendor, and to use Java to help with the OS independence.
... we write all of our queries in ANSI SQL in our own query tool, parse it,
and generate Java classes to run it.
A better so
Tom Lane wrote:
"Craig A. James" <[EMAIL PROTECTED]> writes:
Steve Atkins wrote:
As long as you're ordering by some row in the table then you can do that in
straight SQL.
select a, b, ts from foo where (stuff) and foo > X order by foo limit 10
Then, record the last va
Steve Atkins wrote:
As long as you're ordering by some row in the table then you can do that in
straight SQL.
select a, b, ts from foo where (stuff) and foo > X order by foo limit 10
Then, record the last value of foo you read, and plug it in as X the next
time around.
We've been over this be
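(Spelled out, the keyset-paging pattern being described looks roughly like this; foo_key is a made-up name for the ordering column, which needs an index.)
-- first page
SELECT a, b, ts FROM foo WHERE (stuff) ORDER BY foo_key LIMIT 10;
-- each later page: X is the last foo_key value returned by the previous page
SELECT a, b, ts FROM foo WHERE (stuff) AND foo_key > X ORDER BY foo_key LIMIT 10;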
Tino Wildenhain wrote:
You guys can correct me if I'm wrong, but the key feature that's
missing from Postgres's flexible indexing is the ability to maintain
state across queries. Something like this:
select a, b, my_index_state() from foo where ...
offset 100 limit 10 using my_index(prev_
Tino Wildenhain wrote:
Craig A. James schrieb:
...
In our case (for a variety of reasons, but this one is critical), we
actually can't use Postgres indexing at all -- we wrote an entirely
separate indexing system for our data...
...There is no need to store or
maintain this information
Brian Hurt wrote:
One of our biggest single problems is this very thing. It's not a
Postgres problem specifically, but more embedded in the idea of a
relational database: There are no "job status" or "rough estimate of
results" or "give me part of the answer" features that are critical to
man
Michael Stone wrote:
On Thu, Mar 22, 2007 at 01:30:35PM +0200, [EMAIL PROTECTED] wrote:
approximated count?
why? who would need it? where you can use it?
Do a google query. Look at the top of the page, where it says "results N
to M of about O". For user interfaces (which is where a lot o
Dan Harris wrote:
I've found that it would be helpful to be able to tell how busy my
dedicated PG server is ...
I have seen some other nice back-end things exposed through PG functions
( e.g. database size on disk ) and wondered if there was anything
applicable to this.
I'd write a simple p
Carlos,
Now, yet another thing that you (Craig) seem to be missing: you're
simply putting the expense of all this time under the expenses column
in exchange for solving the particular problem...
More like I was trying to keep my response short ;-). I think we're all in
agreement on pretty mu
Rodrigo Madera wrote:
I would just like to note here that this is an example of an inefficient
strategy.
We could all agree (up to a certain economical point) that Alex saved
the most expensive one thousand dollars of his life.
I don't know the financial status nor the size of your organization
Bill Moran wrote:
I'm curious, what problem does the disclaimer cause?
I wrote the following TOS for my personal system:
https://www.potentialtech.com/cms/node/9
Excerpt of the relevant part:
"If you send me email, you are granting me the unrestricted right to use
the contents of that email howe
Stephan Szabo wrote:
I tried posting to the bugs, and they said this is a better question for here.
I have two queries. One runs in about 2 seconds. The other takes upwards
of 2 minutes. I have a temp table that is created with 2 columns. This
table is joined with the larger database of call detai
Ray,
I'd like to have a toolbox prepared for when performance goes south.
I'm clueless. Would someone mind providing some detail about how to
measure these four items Craig listed:
I hope I didn't give the impression that these were the only things to look at
... those four items just popped
Andreas Tille wrote:
My web application was running fine for years without any problem
and the performance was satisfying. Some months ago I added a
table containing 450 data rows ...
About two weeks ago the application became *drastically* slower
and I urgently have to bring back the old
Craig A. James wrote:
The "idiom" to replace count() was
"select col from tbl order by col desc limit 1". It worked miracles for
my app.
Sorry, I meant to write, "the idiom to replace MAX()", not count()... MAX()
was the function that was killing me,
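(Side by side, with tbl/col as in the original; on releases of that era MAX() could not use an index, which is what made the rewrite worthwhile.)
-- slow on old releases: scans the whole table
SELECT max(col) FROM tbl;
-- fast: walks the index on col backwards and stops after one row
SELECT col FROM tbl ORDER BY col DESC LIMIT 1;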
Guy,
The application is fairly straightforward, but as you say, what is
working okay with BigDBMS isn't working as well under PG. I'm going to
try other configuration suggestions made by others before I attempt
logic changes. The core logic is unchangeable; millions of rows of data
in a sin
Guy Rouillier wrote:
I've got back access to my test system. I ran another test run with the
same input data set. This time I put pg_xlog on a different RAID volume
(the unused one that I suspect is a software RAID), and I turned
fsync=off in postgresql.conf. I left the rest of the configura
Tom Lane wrote:
"Craig A. James" <[EMAIL PROTECTED]> writes:
CREATE OR REPLACE FUNCTION cansmiles(text) RETURNS text
AS '/usr/local/pgsql/lib/libchem.so', 'cansmiles'
LANGUAGE 'C' STRICT IMMUTABLE;
Umm ... this is a single-argument func
Adam Rich wrote:
Craig,
What version of postgres are you using? I just tested this on PG 8.1.2
and was unable to reproduce these results. I wrote a simple function
that returns the same text passed to it, after sleeping for 1 second.
I use it in a where clause, like your example below, and rega
Well, once again I'm hosed because there's no way to tell the optimizer the
cost for a user-defined function. I know this issue has already been raised
(by me!) several times, but I have to remind everyone about this. I frequently
must rewrite my SQL to work around this problem.
Here is the
Tom Lane wrote:
"Craig A. James" <[EMAIL PROTECTED]> writes:
I'm using pg_dump/pg_restore to quickly copy databases between servers. But my
server keeps crashing when I run pg_restore:
glibc detected *** double free or corruption (!prev): 0x0a00b1a0
What can I do
I'm using pg_dump/pg_restore to quickly copy databases between servers. But my
server keeps crashing when I run pg_restore:
glibc detected *** double free or corruption (!prev): 0x0a00b1a0
Postgres: 8.1.4
Linux: 2.6.12-1.1381_FC3
glibc: 2.3.6-0.fc3.1
Server: Dell
CPU: Xeon 2.80GHz
Me
Ron wrote:
We are not going to get valuable contributions nor help people become
more valuable to the community by "flaming them into submission".
Let's support getting definitive evidence. No matter who brings it to
the table ;-)
Thanks, Ron, for a voice of respect and reason. Since I fir
Alex Turner wrote:
The problem I see with software raid is the issue of a battery backed
unit: If the computer loses power, then the 'cache' which is held in
system memory, goes away, and fubars your RAID.
I'm not sure I see the difference. If data are cached, they're not written
whether it
Rod Taylor wrote:
Just a minor clarification here: INSERT does not create dead rows, only
UPDATE and DELETE do. Thus, if you only insert rows, you do not need to
vacuum (although you probably need to analyze).
Is there no real-time garbage collection at all in Postgres? And if so, is
this bec
You realize that it had to be turned on explicitly on IRIX, right? But
don't let facts get in the way of a good rant...
On the contrary, with IRIX 4 and earlier it was the default, but it caused so many problems that
SGI switched the default to OFF in IRIX 5. But because it had been available
Michael Stone wrote:
At one point someone complained about the ability to configure, e.g.,
IRIX to allow memory overcommit. I worked on some large IRIX
installations where full memory accounting would have required on the
order of 100s of gigabytes of swap, due to large shared memory
allocatio
Richard Troy wrote:
I did that - spent about two f-ing hours looking for what I wanted. (Guess
I entered poor choices for my searches. -frown- ) There are a LOT of
articles that TALK ABOUT OOM, but precious few actually tell you what you
can do about it.
Trying to save you some time:
On linux
By the way, in spite of my questions and concerns, I was *very* impressed by
the recovery process. I know it might seem like old hat to you guys to watch
the WAL in action, and I know on a theoretical level it's supposed to work, but
watching it recover 150 separate databases, and find and fix
Russell Smith wrote:
For the third time today, our server has crashed...
I would guess it's the linux OOM if you are running linux. You need to
turn off killing of processes when you run out of memory. Are you
getting close to running out of memory?
Good suggestion, it was a memory leak in
For the third time today, our server has crashed, or frozen, actually something
in between. Normally there are about 30-50 connections because of mod_perl
processes that keep connections open. After the crash, there are three
processes remaining:
# ps -ef | grep postgres
postgres 23832 1
A few months ago a couple of guys got "bragging rights" for having the most
separate databases. Some claimed several hundred databases, and one said he had
several thousand. The consensus was that Postgres has no problem handling many
separate databases.
I took that to heart a
Spiegelberg, Greg wrote:
The data are on two different computers, and I do processing
of the data as it passes through the application. Otherwise,
the INSERT INTO ... SELECT is my first choice.
Would dblink() help in any way?
It might if perl wasn't so damned good at this. ;-)
Craig
Jim C. Nasby wrote:
Wait... so you're using perl to copy data between two tables? And using
a cursor to boot? I can't think of any way that could be more
inefficient...
What's wrong with a plain old INSERT INTO ... SELECT? Or if you really
need to break it into multiple transaction blocks, at le
Jim C. Nasby wrote:
Well, given that perl is using an entire CPU, it sounds like you should
start looking either at ways to remove some of the overhead from perl,
or to split that perl into multiple processes.
I use Perl for big database copies (usually with some processing/transformation
alon
Jim C. Nasby wrote:
http://stats.distributed.net used to use a perl script to do some
transformations before loading data into the database. IIRC, when we
switched to using C we saw 100x improvement in speed, so I suspect that
if you want performance perl isn't the way to go. I think you can
comp
Alexander Staubo wrote:
On Oct 17, 2006, at 17:10 , Craig A. James wrote:
These tables are particularly egregious examples of ignorant
database design. You need to understand the relational model
This email is a *particularly* egregious example of rudeness. You owe
Mr. Staubo, and the
These tables are particularly egregious examples of ignorant database
design. You need to understand the relational model
This email is a *particularly* egregious example of rudeness. You owe Mr.
Staubo, and the Postgres community, an apology.
There is absolutely no reason to insult peop
So let's cut to the bone: If someone thinks a proposal is a bad idea, and
they're volunteering their time on an open-source project, why would they
implement the proposal?
In all the heat and smoke, I believe there are two basic conclusions we all
agree on.
1. Optimizer:
a) A perfect optimi
Josh Berkus wrote:
I actually think the way to attack this issue is to discuss the kinds of
errors the planner makes, and what tweaks we could do to correct them.
Here's the ones I'm aware of:
-- Incorrect selectivity of WHERE clause
-- Incorrect selectivity of JOIN
-- Wrong estimate of rows r
Brian Herlihy wrote:
PG does support hints actually..
The only thing is, the hints are expressed in an obscure, ad-hoc and
implementation-dependent language.
For example, the "Don't use index X" hint (the one I used) can be accessed by
replacing your index with an index on values derived from t
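(A query-side variant of the same idea, shown only as an illustration and not necessarily what Brian meant: wrapping the column in a no-op expression hides it from the plain index, which amounts to a "don't use index X" hint.)
SELECT * FROM t WHERE t.x + 0 = 42;   -- a btree index on t.x cannot match the expression t.x + 0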
Mark Kirkwood wrote:
The result? I can't use my function in any WHERE clause that involves
any other conditions or joins. Only by itself. PG will occasionally
decide to use my function as a filter instead of doing the join or the
other WHERE conditions first, and I'm dead.
this is an argum
Bruce Momjian wrote:
I can do 100! on my computer, but can't do it in my head.
A poor example. 100! is a simple repetitive calculation, something computers
are very good at. Optimizing an SQL query is very difficult, and a completely
different class of problem.
The fact is the PG team has
... and add 100 other problems. Hints are used because the DBA thinks that
they are smarter than the optimizer; 99% of the time, they are wrong.
Just try manually optimizing a complex query, you'll see -- with three
join types, several scan types, aggregates, bitmaps, internal and external
so
I have two tables, SAMPLE and HITLIST, that when joined generate a monstrous
sort.
HITLIST_ROWS has about 48,000 rows
SAMPLE has about 16 million rows
The joined column is indexed in SAMPLE
HITLIST_ROWS is a scratch table which is used a few times then discarded.
HITLIST_ROWS has no index
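(One workaround often suggested for this shape of problem: analyze the scratch table, since a freshly built scratch table has no statistics and the planner may badly misestimate it, and consider more sort memory for the session. A hedged sketch:)
ANALYZE hitlist_rows;      -- scratch tables typically have no statistics at all
SET work_mem = 131072;     -- value is in kB on releases of that era; older setups use SET sort_mem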
Alan Hodgson wrote:
On Thursday 14 September 2006 11:05, "Craig A. James"
<[EMAIL PROTECTED]> wrote:
I'm experimenting with RAID, looking for an inexpensive way to boost
performance. I bought 4 Seagate 7200.9 120 GB SATA drives and two SIIG
dual-port SATA cards. (NB: I don
I'm experimenting with RAID, looking for an inexpensive way to boost performance.
I bought 4 Seagate 7200.9 120 GB SATA drives and two SIIG dual-port SATA cards.
(NB: I don't plan to run RAID 0 in production, probably RAID 10, so no need to
comment on the failure rate of RAID 0.)
I used this ra
 tin tout    KB/t  tps  MB/s    KB/t  tps  MB/s   us ni sy in id
   1   14  128.00    1  0.10  128.00    1  0.10    5  0 94  1  0
   0   12  123.98  104 12.56  123.74  104 12.56    8  0 90  2  0
   0   12  125.66  128 15.75  125.26  128 15.68   10  0 85  6  0
   0   12  124.66  129 15.67  124.39  129 15.64   12  0
Suppose, hypothetically of course, someone lacked foresight, and put a tablespace somewhere with a
dumb name, like "/disk2", instead of using a symbolic link with a more descriptive name.
And then /disk2 needs to be renamed, say to "/postgres_data", and this (hypothetical)
DBA realizes he has
Michael Stone wrote:
On Sat, Jul 22, 2006 at 10:26:53AM -0700, Craig A. James wrote:
This causes massive file-system activity and flushes all files that
the kernel has cached. If you run this between each Postgres test
(let it run for a couple minutes), it gives you an apples-to-apples
The real issue here is caching across successive queries, an effect that
Postgres doesn't deal with very well at the moment. If you run these
queries from a standing start (freshly booted machine) you'll likely
find that the indexscan plan is indeed slower than the seqscan/hash
plan, just like th
Hassan,
1. I have a function that, given two parameters, produces an arbitrary id and
text. However arbitrary the id and text are, they are in a certain order; i.e., it
is imperative that whatever processing I do, the order is preserved.
What type of function is this? Did you write it in C? An SQ
Tom Lane wrote:
"Craig A. James" <[EMAIL PROTECTED]> writes:
Bottom line is that I was expecting "instant death" with SIGTERM, but
instead got an agonizing, drawn out -- but safe -- death of the query.
What was the query exactly?
Our expectation is that all or at
Steinar H. Gunderson wrote:
On Wed, Jul 12, 2006 at 08:43:18AM -0700, Craig A. James wrote:
Then you killed the wrong backend...
No queries run in postmaster. They all run in postgres backends. The
postmaster does very little actual work, other than keeping track of
everybody else.
It turns out
Thanks for your reply, Mark:
I'm back to my original question: How do you kill a runaway query
without bringing down the whole database? Is there really no answer
to this?
... if you actually want to disconnect a backend
process then there is nothing to let you do this remotely. I recently
I wrote:
I can't find an address to complain about the mailing list itself, so
apologies but I'm posting directly to this list. Every time I post to
this group, I get returned mails about OTHER subscribers' invalid
accounts, like the one below.
Michael Glaesemann replied:
Is this when you're
I can't find an address to complain about the mailing list itself, so apologies
but I'm posting directly to this list. Every time I post to this group, I get
returned mails about OTHER subscribers' invalid accounts, like the one below.
What's up? This seems to be a new phenomenon. Should th
Magnus Hagander wrote:
This raises the question: Why doesn't Postgres have a "kill
session" command that works? Oracle has it, and it's
invaluable; there is no substitute. Various writers to these
PG lists have raised the question repeatedly. Is it just a
matter that nobody has had the time
There have been dozens, perhaps hundreds, of entries in the pg-admin,
pg-general, and pg-performance lists regarding killing a session, but as far as
I can tell, there is no Postgres solution. Did I miss something?
This raises the question: Why doesn't Postgres have a "kill session" command th
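(For completeness, the closest built-in answer in releases of that era; it cancels the backend's current query rather than disconnecting the session, and pg_terminate_backend only arrived in later releases. The pid below is made up, and the pg_stat_activity column names vary by version.)
SELECT procpid, current_query FROM pg_stat_activity;
SELECT pg_cancel_backend(12345);   -- sends SIGINT to that backend, cancelling whatever it is running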
If I insert a bunch of rows, then truncate, then insert a bunch more rows, do I
need to vacuum? I've been assuming that TRUNCATE TABLE is a brute-force
technique that more-or-less tosses the old table and starts fresh so that no
vacuum is necessary.
Second question: Same scenario as above, bu
Richard Broersma Jr wrote:
Each table within the database is assigned an OID and is located inside the DB
directory. So if
there is a file-system limitation on the number of files within a given
directory it would also
be a limit to the number of tables that could be created for each databas
Here is a subtle question about SQL. I have a one-to-many pair of tables (call them "P"
and "C" for parent and child). For each row of P, there are many rows in C with data,
and I want to sort P on the min(c.data). The basic query is simple:
select p_id, min(data) as m from c group by p_id
Here is a question about SQL. I have a one-to-many pair of tables (call them "P" and
"C" for parent and child). For each row of P, there are many rows in C with data, and I
want to sort P on the min(c.data). The basic query is simple:
select p_id, min(data) as m from c group by p_id order b
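(Spelled out, the usual way to finish that query is to join the aggregate back to P; p.name is a made-up extra column, just to show the idea.)
SELECT p.p_id, p.name, cm.m
  FROM p
  JOIN (SELECT p_id, min(data) AS m FROM c GROUP BY p_id) cm USING (p_id)
 ORDER BY cm.m;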
I have a query that needs to run faster, with the obvious solution being to add an index.
But to confirm this, I ran explain analyze. When I run the actual query, it
consistently takes 6-7 seconds by the wall clock. My application with a
"verbose" mode enabled reports 6.6 seconds consistentl
Arjen van der Meijden wrote:
First of all, this graph has no origin. It's a bit difficult to test with
less than one cpu.
Sure it does. I ran all the tests. They all took infinite time, and I got
zero results. And my results are 100% accurate and reliable. It's perfectly
valid data. :-)
C
Greg Stark wrote:
However that's not enough to explain what you've shown. How about you show the
actual query and actual plan you're working with? The plan you've shown can't
result from the query you sent.
Mea culpa, sort of. But ... in fact, the plan I sent *was* from the query I sent, with the
My application has a function, call it "foo()", that requires initialization from a table
of about 800 values. Rather than build these values into the C code, it seemed like a good idea to
put them on a PG table and create a second function, call it "foo_init()", which is
called for each value
Tom Lane wrote:
The idea I just had was: why do we need EXPLAIN ANALYZE to run to
completion? In severe cases like this thread, we might be able to
discover the root cause by a *partial* execution of the plan, as long as
it was properly instrumented. That way, the OP might have been able to
disco
Simon Riggs wrote:
Well, it's a big query. If it ought to take a second or two, and
instead is taking an hour or two (1800 times the expected runtime), that
might be close enough to "never" to exhaust Chris' patience. Besides,
we don't know whether the 1800 might itself be an underestimate (too
Meetesh Karia wrote:
Hi all,
We've recently started having a problem where a query that normally
executes in ~15ms starts to take upwards of 20s to complete. When the
connection that ran the query is returned to the connection pool, it appears
as though a transaction is still in progress so the
Tom Lane wrote:
There is not anything in there that considers whether the table's
physical order is so nonrandom that the search will take much longer
than it would given uniform distribution. It might be possible to do
something with the correlation statistic in simple cases ...
In this case,
Here's a "corner case" that might interest someone. It tripped up one of our
programmers.
We have a table with > 10 million rows. The ID column is indexed, the table
has been vacuum/analyzed. Compare these two queries:
select * from tbl where id >= 1000 limit 1;
select * from tbl wh
Christian Paul Cosinas wrote:
I am creating an application that gets the values of a large table and writes
them to a file.
Why I want to use offset and limit is for me to create a threaded
application so that they will not get the same results.
For example:
Thread 1 : gets offset 0 limit 5000
Thr
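(Spelled out, the partitioning scheme being described is roughly this, with a made-up table and key; note that without an ORDER BY the slices are not guaranteed to be disjoint, and large OFFSETs get progressively slower.)
SELECT * FROM big_table ORDER BY id OFFSET 0    LIMIT 5000;   -- thread 1
SELECT * FROM big_table ORDER BY id OFFSET 5000 LIMIT 5000;   -- thread 2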
Tom Lane wrote:
>My suspicion is that it's an incompatibility between malloc()
>libraries.
On Linux there's only supposed to be one malloc, ie, glibc's version.
On other platforms I'd be worried about threaded vs non-threaded libc
(because the backend is not threaded), but not Linux.
I guess I
Chris wrote:
This is a deadly bug, because our web site goes dead when this
happens, ...
Sounds like a deadlock issue.
...
stats_command_string = true
and restart postgresql.
then you'll be able to:
select * from pg_stat_activity;
to see what queries postgres is running and that might give yo
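(A slightly more targeted version of that query, using the 8.1-era column names; newer releases call them pid and query.)
SELECT procpid, usename, query_start, current_query
  FROM pg_stat_activity
 WHERE current_query <> '<IDLE>';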
I'm having a rare but deadly problem. On our web servers, a process occasionally gets stuck, and
can't be unstuck. Once it's stuck, all Postgres activities cease. "kill -9" is required
to kill it -- signals 2 and 15 don't work, and "/etc/init.d/postgresql stop" fails.
Here's what the process
mcelroy, tim wrote:
Sorry, been up all night and maybe provided too much information or not
the right information and only confused folks, tired I guess. When I
say 'in use' I am referring to the 'used' column. Thanks all who have
responded to this inquiry, I appreciate it.
Here's free from
Jim C. Nasby wrote:
No, I don't agree with this. Too many people waste time designing for
"what if..." scenarios that never happen. You don't want to be dumb and
design something that locks out a foreseeable and likely future need, but
referential integrity doesn't meet this criterion. There
Jim C. Nasby wrote:
1. You have only one application that modifies the data. (Otherwise, you
have to duplicate the rules across many applications, leading to a
code-maintenance nightmare).
You forgot something:
1a: You know that there will never, ever, ever, ever, be any other
application th
I think this is an old question, but I want to know if it is really
worthwhile to not create foreign keys and instead deal with
referential integrity at the application level?
Trust me: do it in the application and you'll enter a world of
hurt. I'm doing it with some MySQL apps, and
This is off-topic for this group so I'll just give a brief reply; I'm happy to
carry on more just between the two of us...
Gorshkov wrote:
That being said ... what *is* the difference between coding a website -
major or otherwise - in an "old-fashioned" compiled language and a
non-compiled l
Gorshkov wrote:
/flame on
if you were *that* worried about performance, you wouldn't be using PHP or
*any* interpreted language
/flame off
sorry - couldn't resist it :-)
I hope this was just a joke. You should be sure to clarify - there might be
some newbie out there who thinks you are ser
Tom Lane wrote:
"Craig A. James" <[EMAIL PROTECTED]> writes:
It looks to me like the problem is the use of nested loops when a hash
join should be used, but I'm no expert at query planning.
Given the sizes of the tables involved, you'd likely have to boost up
work_me
I'm reposting this -- I sent this out a month ago but never got a response, and
hope someone can shed some light on this.
Thanks,
Craig
--
This is a straightforward query that should be fairly quick, but takes about 30
minutes. It's a query across three tables, call t
I've seen it said here several times that "update == delete + insert". On the other
hand, I've noticed that "alter table [add|drop] column ..." is remarkably fast, even for
very large tables, which leads me to wonder whether each column's contents are in a file
specifically for that column.
M
Mikael Carneholm wrote:
I am responsible for an exciting project of evaluating migration of a
medium/large application for a well-known Swedish car & truck manufacturer
... The goal right now is to find the set of parameters that gives as
short bulk insert time as possible, minimizing downtime whi
Alvaro Herrera wrote:
If I only insert data into a table, never update or delete, then I should
never have to vacuum it. Is that correct?
You still need to vacuum eventually, to avoid transaction Id wraparound
issues. But not as often.
Thanks. Any suggestions for what "not as often" means?
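(A rough way to see how close each database is to needing that wraparound vacuum; age() counts transactions since the last database-wide freeze, and anything approaching a billion deserves attention, well short of the 2^31 limit.)
SELECT datname, age(datfrozenxid) AS xid_age
  FROM pg_database
 ORDER BY xid_age DESC;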
If I only insert data into a table, never update or delete, then I should never
have to vacuum it. Is that correct?
Thanks,
Craig
Joshua D. Drake wrote:
I find it strains credibility that this major manufacturer of PCs
would do something deceptive that hurts performance, when it would be
easily detected and widely reported. Can anyone cite a specific
instance where this has happened? Such as, "I bought Dell model X
Bruce Momjian wrote:
Dell often says part X is included, but part X is not the exact same as
part X sold by the original manufacturer. To hit a specific price
point, Dell is willing to strip things out of commodity hardware, and
often does so even when performance suffers. For many people, this
"Virag Saksena" <[EMAIL PROTECTED]> writes:
The individual queries run in 50-300 ms. However the optimizer is
choosing a nested loop to join them rather than a Hash join...
I have what appears to be the identical problem.
This is a straightforward query that should be fairly quick, but takes a