I don't know what this will change wrt how often you need to run VACUUM
(I'm a SQL Server guy), but instead of an update and insert, try a
delete and insert. You'll only have to find the duplicate rows
once and your insert doesn't need a where clause.
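As a sketch of that delete-then-insert pattern (the target table name and the
transaction wrapper here are stand-ins, not from the original thread; only
my_table_import and its columns appear in the quoted query below):

```sql
-- Remove rows that would collide with the import, then insert everything
-- without a WHERE clause on the insert itself.
begin;

delete from my_table
where (i_field1, i_field2) in
      (select i_field1, i_field2 from my_table_import);

insert into my_table (i_field1, i_field2, ts_field1)
select i_field1, i_field2, max(ts_field1)
from my_table_import
group by i_field1, i_field2;

commit;
```

Doing both statements in one transaction keeps the duplicates from reappearing
between the delete and the insert.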
Meetesh

On 6/23/05, Bricklen Anderson <[EMAIL PROTECTED]> wrote:
…import;

select i_field1, i_field2, max(ts_field1) as ts_field1, count(*) as dup_count
from my_table_import
where i_field1 between 0 and …
group by i_field1, i_field2;
Thanks for the help,
Meetesh Karia
Hi all,
We're using 8.0.3 and we're seeing a problem where the planner is
choosing a seq scan and hash join over an index scan. If I set
enable_hashjoin to off, then I get the plan I'm expecting and the query
runs a lot faster. I've also tried lowering the random page cost
(even to 1), but the pl…
alter table candidates617004
alter column targetId set statistics 1000;
analyze candidates617004;
alter table lte_user
alter column user_id set statistics 1000;
analyze lte_user;
Thanks for your suggestion,
Meetesh
On 8/2/05, Tobias Brox <[EMAIL PROTECTED]> wrote:
[Meetesh Karia - Tue at 12:19:27AM +0200]
> We're using 8.0.3 an…
Thanks John. I've answered your questions below:
> Has lte_user and candidates617004 been recently ANALYZEd? All estimates,
> except for the expected number of rows from lte_user, seem to be okay.
I ANALYZEd both tables just before putting together my first
email. And, unfortunately, modifying the stat…
Index Cond: ("outer".targetid = t.user_id)
Total runtime: 188.000 ms
Thanks,
Meetesh

On 8/2/05, Tom Lane <[EMAIL PROTECTED]> wrote:
Meetesh Karia <[EMAIL PROTECTED]> writes:
> Sure. The lte_user table is just a collection of users. user_id is assigned
> uniquely…
…adjust that cost
estimate? Are there any side effects of doing that? Or is
my best solution to simply set enable_hashjoin to off for this query?
Thanks,
Meetesh

On 8/2/05, Meetesh Karia <[EMAIL PROTECTED]> wrote:
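If disabling hash joins ends up being the workaround, it can at least be scoped
to the one statement instead of changed server-wide. A sketch, using the table
and column names from the plan output above (the SELECT itself is a placeholder
for the real query):

```sql
-- SET LOCAL reverts automatically at commit/rollback, so the planner
-- setting never leaks to other queries on the connection.
begin;
set local enable_hashjoin = off;

select u.user_id
from candidates617004 t
join lte_user u on u.user_id = t.targetid;

commit;
```

That avoids penalizing every other query on the connection for one bad
estimate.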
Thanks Tom,
That modifies the query plan slightly, but the planner still deci…
PG is creating the union of January, February and March tables first
and that doesn't have an index on it. If you're going to do many
queries using the union of those three tables, you might want to place
their contents into one table and create an index on it.
Otherwise, try something like this:
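The example itself was cut off in the archive; a plausible sketch of the kind
of rewrite being suggested, assuming monthly tables named january, february,
and march with an indexable id column (all names here are stand-ins):

```sql
-- Materialize the union once, then index it, so later queries can use
-- an index scan instead of re-scanning the three-table union each time.
create table q1_data as
select * from january
union all
select * from february
union all
select * from march;

create index q1_data_id_idx on q1_data (id);
analyze q1_data;
```

UNION ALL is used instead of UNION since the monthly tables shouldn't overlap,
and it skips the duplicate-elimination sort.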
Hi all,

We've recently started having a problem where a query that normally executes
in ~15ms starts to take upwards of 20s to complete. When the connection that
ran the query is returned to the connection pool, it appears as though a
transaction is still in progress, so the connection pool tries to can… …o the
other. Unfortunately, the problem only reappears after 15 mins once it occurs
the first time. If it occurs again today, I'll attach gdb to it and see
whether it's stuck on a mutex.
Meetesh

On 5/22/06, Craig A. James <[EMAIL PROTECTED]> wrote:
Meetesh Karia wrote:
> Hi all, …
…lem has been to reboot the machine with a 'noht' kernel param. The machine has been up for 1 day, 13:18 since then, and we haven't seen the problem yet.
Has anyone been able to solve this problem?

Thanks,
Meetesh

On 5/22/06, Meetesh Karia <[EMAIL PROTECTED]> wrote:
Hi Craig,

Thank…