On 11/14/05, Steve Wampler <[EMAIL PROTECTED]> wrote:
However, even that seems to have a much higher cost than I'd expect:

lab.devel.configdb=# explain delete from "tmp_table2"
    where id in (select id from tmp_table2
                 where name='obsid' and value='oid080505');
NOTICE: QUERY PLAN: Seq
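For context on why that plan is so expensive: PostgreSQL releases before 7.4 planned `IN (subquery)` poorly. A common workaround of the era was to rewrite the predicate as `EXISTS` — a hedged sketch against the same table, not necessarily what was done in this thread:

```sql
-- Sketch: rewrite IN (subquery) as a correlated EXISTS, which old
-- planners handled much better. Table/column names are from the thread.
DELETE FROM tmp_table2
WHERE EXISTS (
    SELECT 1
    FROM tmp_table2 t
    WHERE t.id    = tmp_table2.id
      AND t.name  = 'obsid'
      AND t.value = 'oid080505'
);
```

From 7.4 onward the planner turns `IN (subquery)` into a hashed subplan itself, so the rewrite mostly matters on older servers.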
On 11/17/05, William Yu <[EMAIL PROTECTED]> wrote:
> No argument there. But it's pointless if you are IO bound.

Why would you just accept "we're IO bound, nothing we can do"? I'd do
everything in my power to make my app go from IO bound to CPU bound --
whether by optimizing my code or buying more ha
On 12/4/06, Mark Lonsdale <[EMAIL PROTECTED]> wrote:
maintenance_work_mem = 1048576 – Figured I'd allocate 1GB for this.
Do you know how often and when you will be creating indexes or clustering?
We set ours to 2GB because of the performance gains. We've also thought
about testing it at 4GB.
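For readers comparing notes: in 8.x-era configs, maintenance_work_mem is specified in kilobytes unless a unit suffix is given, so the values discussed above look like this (a sketch of a postgresql.conf fragment, not the posters' actual files):

```sql
-- postgresql.conf sketch; maintenance_work_mem is used by CREATE INDEX,
-- VACUUM, and CLUSTER, so it pays off mainly during those operations.
-- maintenance_work_mem = 1048576   -- 1GB, the value in the post above
-- maintenance_work_mem = 2097152   -- 2GB, the value reported faster
```

Unlike work_mem, only one maintenance operation per session uses it at a time, which is why it can safely be set much higher than work_mem.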
I am having problems performing a join on two large tables. It seems to only
want to use a sequential scan on the join, but that method seems to be slower
than an index scan. I've never actually had it complete the sequential scan
because I stop it after 24+ hours. I've run joins against large
On 9/13/06, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
On Tue, Sep 12, 2006 at 04:17:34PM -0600, Joshua Marsh wrote:
> data=# EXPLAIN SELECT v.phonedirect, v.editdrop, EXTRACT (EPOCH FROM
> v.custdate), EXTRACT (YEAR FROM s.custdate) || '-' || EXTRACT (MONTH
> FROM s.custda
On 9/13/06, Jeff Davis <[EMAIL PROTECTED]> wrote:
On Wed, 2006-09-13 at 08:49 -0600, Joshua Marsh wrote:
> That seems to have done it. Are there any side effects to this
> change? I read about random_page_cost in the documentation and it
> seems like this is strictly for pla
On 9/13/06, Bucky Jordan <[EMAIL PROTECTED]> wrote:
Setting to 0.1 finally gave me the result I was looking for. I know
that the index scan is faster though. The seq scan never finished (i
killed it after 24+ hours) and I'm running the query now with indexes
and it's progressing nicely (will pr
Hmm... that sounds bad. I'm sure your system will always choose indexes
with that value.
Is it overestimating the cost of using indexes or underestimating the
cost of a seq scan, or both? Maybe explain with the 0.1 setting will
help?
Regards,
Jeff Davis
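A session-local experiment along the lines Jeff suggests might look like this. The sketch below is illustrative — the example query is invented, and 4.0 was the default random_page_cost on releases of that era:

```sql
-- Sketch: change the cost parameter for this session only and compare
-- the plans EXPLAIN produces at each setting.
SET random_page_cost = 0.1;   -- the extreme value discussed above
EXPLAIN SELECT * FROM pg_class ORDER BY relname;  -- any indexed query

SET random_page_cost = 4.0;   -- restore the old default
EXPLAIN SELECT * FROM pg_class ORDER BY relname;
```

Because `SET` without `LOCAL` persists for the session, restoring the default (or reconnecting) avoids skewing later plans.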
data=# explain SELECT v.phonedire
On 9/13/06, Terje Elde <[EMAIL PROTECTED]> wrote:
Jeff Davis wrote:
> Is it overestimating the cost of using indexes or underestimating the
> cost of a seq scan, or both? Maybe explain with the 0.1 setting will
> help?
>
If enable_seqscan is off, and cost is still set to 1, it could
be t
Are the tables perhaps nearly in order by the dsiacctno fields?
If that were the case, and the planner were missing it for some reason,
these results would be plausible.
BTW, what are you using for work_mem, and how does that compare to your
available RAM?
regards, tom lan
On 9/13/06, Tom Lane <[EMAIL PROTECTED]> wrote:
"Joshua Marsh" <[EMAIL PROTECTED]> writes:
>> Are the tables perhaps nearly in order by the dsiacctno fields?
> My assumption would be they are in exact order. The text file I used
> in the COPY statement had t
Wow, that correlation value is *way* away from order. If they were
really in exact order by dsiacctno then I'd expect to see 1.0 in
that column. Can you take another look at the tables and confirm
the ordering? Does the correlation change if you do an ANALYZE on the
tables? (Some small change is t
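The correlation Tom refers to can be read directly from the pg_stats view — a minimal sketch (the column name is from the thread; the relevant table names would need to be filled in):

```sql
-- Sketch: the planner's estimate of physical ordering for a column.
-- 1.0 means rows are stored in exactly ascending order, -1.0 exactly
-- descending, and values near 0 mean essentially random placement.
SELECT tablename, attname, correlation
FROM pg_stats
WHERE attname = 'dsiacctno';
```

Low correlation makes the planner price index scans high, since each index probe is assumed to hit a different heap page.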
On 10/12/06, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
Posting here instead of hackers since this is where the thread got
started...

The argument has been made that producing a hints system will be as hard
as actually fixing the optimizer. There's also been clamoring for an
actual proposal, so here's on
On 10/18/06, Ioana Danes <[EMAIL PROTECTED]> wrote:
# explain select max(transid) from public.transaction;
                      QUERY PLAN
--
 Result (cost=0.04..0.05 rows=1 wi
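The Result node in that plan is the min/max index optimization. On releases that lack it (before 8.1), the usual hand-written equivalent was the following — a sketch assuming a btree index on transid:

```sql
-- Sketch: equivalent to max(transid), but written so that even an old
-- planner can satisfy it by reading one entry from the end of a btree
-- index on transid instead of scanning the whole table.
SELECT transid
FROM public.transaction
ORDER BY transid DESC
LIMIT 1;
```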
On 11/18/06, rakesh kumar <[EMAIL PROTECTED]> wrote:
Hi,
          I wanted to know how the startup cost is computed in
PostgreSQL. Can you give me an example to illustrate the estimation of
startup cost?
          Thank you,
          raa
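For readers landing on this question: in EXPLAIN output, `cost=A..B` shows A as the startup cost (work done before the first row can be emitted) and B as the total cost. A sketch with made-up, illustrative numbers:

```sql
-- Sketch (numbers are illustrative, not real planner output).
-- A Sort must consume and sort all of its input before returning its
-- first row, so its startup cost is close to its total cost; a Seq Scan
-- can emit its first row immediately, so its startup cost is 0.00.
EXPLAIN SELECT * FROM transaction ORDER BY transid;
-- Sort  (cost=1000.00..1025.00 rows=10000 width=8)
--   ->  Seq Scan on transaction  (cost=0.00..150.00 rows=10000 width=8)
```

This is why `LIMIT` queries favor plans with low startup cost even when their total cost is higher.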
It would
do so wel
with singular processes. We will have parallel processes running, but
it's more important that each process be fast than that several run
in parallel at once.
Any help would be greatly appreciated!
Thanks,
Joshua Marsh
P.S. Off-topic, I have a few invitations to gmail.
Any time you run subqueries, it's going to slow down the update
process a lot. Each record that is updated in source_song_title runs
two additional queries. When I do large updates like this, I usually
run a transaction that will select all the new data into a new table
on a join. For example:

SE
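The pattern described above (the example itself is truncated at "SE" in the archive) might look roughly like this — a hypothetical sketch where everything beyond the source_song_title name is invented:

```sql
-- Hedged sketch of "select all the new data into a new table on a join"
-- instead of firing per-row subqueries during an UPDATE. The joined
-- table and column names here are placeholders.
SELECT s.song_id, s.title, d.corrected_title
INTO new_song_title
FROM source_song_title s
JOIN title_fixes d ON d.song_id = s.song_id;
```

Building the result set in one set-based join, then swapping or merging the new table in, avoids re-planning and re-executing a subquery for every updated row.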
ame servers really are.
> If you are seriously considering memory sizes over 4G you may want to
> look at an opteron.
>
> Dave
>
>
>
> Joshua Marsh wrote:
>
> >Hello everyone,
> >
> >I am currently working on a data project that uses PostgreSQL
>