Hi All,
Is it usual for the following query to take
22 secs on the machine I have?
Or is there some other reason?
Hope I have provided all the details
needed.
Thanks,
Waruna
Tables:
/*
Table structure for table "tvDiary"
-
Josh Berkus <[EMAIL PROTECTED]> writes:
> Back in the 7.0 days,
> WHERE EXISTS (SELECT * FROM a WHERE condition)
> was significantly slower on broad tables than
> WHERE EXISTS (SELECT small_col FROM a WHERE condition)
> Is this still true, or something that's been fixed in the last 3 versions?
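For illustration, the two forms written out as complete queries; the table and
column names here (orders, a, order_id, small_col) are hypothetical:

  -- the "broad" form under discussion
  SELECT o.id FROM orders o
   WHERE EXISTS (SELECT * FROM a WHERE a.order_id = o.id);

  -- the narrow form that was reportedly faster on 7.0-era servers
  SELECT o.id FROM orders o
   WHERE EXISTS (SELECT a.small_col FROM a WHERE a.order_id = o.id);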
Tom,
Back in the 7.0 days,
WHERE EXISTS (SELECT * FROM a WHERE condition)
was significantly slower on broad tables than
WHERE EXISTS (SELECT small_col FROM a WHERE condition)
Is this still true, or something that's been fixed in the last 3 versions?
Joe Celko is making fun of me because Ora
"Matt Clark" <[EMAIL PROTECTED]> writes:
> Know what we (OK, I) need? An explicitly non-aggregate max() and min(),
> implemented differently, so they can be optimised.
Not per se. The way I've been visualizing this is that we add to
pg_aggregate a column named, say, aggsortop, with the definitio
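A rough sketch of the idea, using a hypothetical table big_table with an index
on id; the catalog entries mentioned are made up to illustrate the proposal,
not actual pg_aggregate contents:

  -- each aggregate row would record the sort operator it corresponds to,
  -- e.g. max(int4) -> ">", min(int4) -> "<"; given that link and an index
  -- on big_table(id), the planner could treat
  SELECT max(id) FROM big_table;
  -- as equivalent to the already index-friendly form
  SELECT id FROM big_table
   WHERE id IS NOT NULL
   ORDER BY id DESC LIMIT 1;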
> "Matt Clark" <[EMAIL PROTECTED]> writes:
> > Actually, referring down to later parts of this thread, why can't this
> > optimisation be performed internally for built-in types? I understand the
> > issue with aggregates over user-defined types, but surely optimising max()
> > for int4, text,
"Matt Clark" <[EMAIL PROTECTED]> writes:
> Actually, referring down to later parts of this thread, why can't this
> optimisation be performed internally for built-in types? I understand the
> issue with aggregates over user-defined types, but surely optimising max()
> for int4, text, etc is safe a
> Actually, referring down to later parts of this thread, why can't this
> optimisation be performed internally for built-in types? I understand the
> issue with aggregates over user-defined types, but surely optimising max()
> for int4, text, etc is safe and easy?
Sorry, missed the bit about use
>This is a Frequently asked question about something that isn't likely to
>change any time soon.
You're right, it is in the FAQ, but pretty well buried. It is entirely
non-obvious to most people that min() and max() don't/can't use indices.
Something so counterintuitive should be explicitly and
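The usual hand-written workaround, sketched here for min() on a hypothetical
table big_table with an index on id (the IS NOT NULL guard mirrors min()'s
behaviour of ignoring NULLs):

  -- instead of: SELECT min(id) FROM big_table;
  SELECT id FROM big_table
   WHERE id IS NOT NULL
   ORDER BY id ASC LIMIT 1;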
Scott,
> Not exactly. While max(id) is easily optimized by query replacement,
> more complex aggregates will still have performance issues that would not
> be present in a row locking database. i.e. max((field1/field2)*field3) is
> still going to cost more to process, isn't it?
Sorry, no.
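For what it's worth, a hedged sketch of how the same rewrite extends to an
expression, assuming the server can index that expression; the table t and its
fields are hypothetical:

  -- hypothetical expression index
  CREATE INDEX t_expr_idx ON t (((field1 / field2) * field3));

  -- instead of: SELECT max((field1/field2)*field3) FROM t;
  SELECT (field1 / field2) * field3
    FROM t
   WHERE (field1 / field2) * field3 IS NOT NULL
   ORDER BY (field1 / field2) * field3 DESC LIMIT 1;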
After takin a swig o' Arrakan spice grog, [EMAIL PROTECTED] ("scott.marlowe") belched
out...:
> On Mon, 8 Sep 2003, Neil Conway wrote:
>> On Mon, 2003-09-08 at 11:56, scott.marlowe wrote:
>> > Basically, Postgresql uses an MVCC locking system that makes massively
>> > parallel operation possible,
"scott.marlowe" <[EMAIL PROTECTED]> writes:
> Basically, Postgresql uses an MVCC locking system that makes massively
As discussed, uh, a few days ago, this particular problem is not caused by
MVCC but by postgres having a general purpose aggregate system and not having
special code for handling
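To illustrate the "general purpose aggregate system" point, a minimal
user-defined aggregate in the 7.x syntax; my_max is a made-up name and
int4larger is the built-in "larger of two int4" function. The executor only
sees a state-transition function, so it has no way of knowing the result could
come straight from an index:

  CREATE AGGREGATE my_max (
      basetype = int4,
      sfunc    = int4larger,
      stype    = int4
  );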
On Mon, 08-sep-2003 at 16:29, Rhaoni Chiu Pereira wrote:
> Could anyone point me to documentation that explains the "explain" result
> and how to analyze it?
>
http://archives.postgresql.org/pgsql-performance/2003-09/msg0.php
Regards,
--
Alberto Caso Palomino
Adaptia Soluciones Integ
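For a quick hands-on look, with a hypothetical table and column:

  -- estimated plan only
  EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

  -- plan plus actual timings and row counts (this one really runs the query)
  EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;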
Hi List,
Could anyone point me to documentation that explains the "explain" result
and how to analyze it?
Best regards,
Rhaoni Chiu Pereira
Sistêmica Computadores
Visit us on the Web: http://sistemica.info
Phone/Fax: +55 51 3328 1122
"scott.marlowe" <[EMAIL PROTECTED]> writes:
> On Mon, 8 Sep 2003, Neil Conway wrote:
>> As was pointed out in a thread a couple days ago, MIN/MAX() optimization
>> has absolutely nothing to do with MVCC. It does, however, make
>> optimizing COUNT() more difficult.
> Not exactly. While max(id) is
On Mon, 8 Sep 2003, Neil Conway wrote:
> On Mon, 2003-09-08 at 11:56, scott.marlowe wrote:
> > Basically, Postgresql uses an MVCC locking system that makes massively
> > parallel operation possible, but costs in certain areas, and one of those
> > areas is aggregate performance over large sets.
[EMAIL PROTECTED] ("scott.marlowe") writes:
> On Sun, 7 Sep 2003, Pailloncy Jean-Gérard wrote:
>
> Asking a question about why max(id) is so much slower than select id order
> by id desc limit 1, Pailloncy said:
>
>> I ask for the same thing. That's better !
>
> This is a Frequently asked questio
On Mon, 2003-09-08 at 11:56, scott.marlowe wrote:
> Basically, Postgresql uses an MVCC locking system that makes massively
> parallel operation possible, but costs in certain areas, and one of those
> areas is aggregate performance over large sets. MVCC makes it very hard
> to optimize all but
On Sun, 7 Sep 2003, Pailloncy Jean-Gérard wrote:
Asking a question about why max(id) is so much slower than select id order
by id desc limit 1, Pailloncy said:
> I ask for the same thing.
> That's better !
This is a Frequently asked question about something that isn't likely to
change any time
On 8 Sep 2003 at 17:31, Andri Saar wrote:
> Note to future generations: default postgres configuration settings are very
> conservative and don't be afraid to VACUUM very often.
You should have looked at the earlier postgresql defaults; they were arcane by that
standard.
7.4 at least attempts to dete
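For context, a quick way to see which of the conservative defaults are in
effect; the parameter names are the 7.x-era ones, and the SET values below are
illustrative only, not recommendations:

  SHOW shared_buffers;
  SHOW effective_cache_size;
  SHOW sort_mem;

  -- these two can typically be raised per session for experimentation
  SET effective_cache_size = 65536;   -- in 8 kB pages
  SET sort_mem = 8192;                -- in kB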
On Monday 08 September 2003 17:04, Tom Lane wrote:
>
> It looks to me like throwing a plain VACUUM into your poller cycle
> (or possibly VACUUM ANALYZE depending on how fast the table's stats
> change) would solve your problems nicely.
>
I compiled the pg_autovacuum daemon from 7.4beta sources as S
Andri Saar <[EMAIL PROTECTED]> writes:
> If this is the best you can get with postgres right now, then I'll just have
> to increase the frequency of VACUUMing, but that feels like a hackish
> solution :(
Not at all. The overhead represented by VACUUM would have to be paid
somewhere, somehow, in
On Mon, 8 Sep 2003 13:50:23 +0300, Andri Saar <[EMAIL PROTECTED]>
wrote:
>Basically I do this:
>1) select about ~700 ID's I have to poll
>2) poll them
>3) update those 700 rows in that "table" I used (~2700 rows total).
>
>And I do this cycle once per minute, so yes, I've got a zillion updates. 700
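A minimal sketch of the suggestion upthread to vacuum as part of the poller
cycle itself; poll_targets is a made-up name for the ~2700-row table:

  -- run after each update pass; on a table this small it finishes quickly
  -- and keeps dead rows from piling up between cycles
  VACUUM ANALYZE poll_targets;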
On 8 Sep 2003 at 13:50, Andri Saar wrote:
> If this is the best you can get with postgres right now, then I'll just have
> to increase the frequency of VACUUMing, but that feels like a hackish
> solution :(
Use an autovacuum daemon. There is one in the postgresql contrib module. It was
introduced du
Hope you don't find it too distracting; I decided to answer the emails in
one go.
On Saturday 06 September 2003 03:05, Tom Lane wrote:
> indexscans. If you've also got sequential-scan queries, and you're
> doing many zillion updates between vacuums, the answer is to vacuum
> more ofte