Hi,
I have the following query which has been running very slowly and after a
lot of testing/trial and error I found an execution plan that ran the query
in a fraction of the time (and then lost the statistics that produced it).
What I wish to know is how to force the query to use the faster execution plan.
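For what it's worth, PostgreSQL has no plan hints, so the usual routes are to refresh the statistics in the hope that the good plan comes back, or to rule out the unwanted plan shape for a single transaction with the enable_* settings. A minimal sketch, with the query itself left out:

  -- Refresh the statistics first; the faster plan may simply come back.
  ANALYZE;

  -- Failing that, plan-shape settings can be flipped for one transaction with
  -- SET LOCAL, so nothing leaks out to other sessions.
  BEGIN;
  SET LOCAL enable_seqscan = off;   -- pick whichever node type the bad plan uses
  -- run EXPLAIN ANALYZE on the slow query here, then:
  COMMIT;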
On Sep 25, 2006, at 10:58 AM, yoav x wrote:
I am not comparing Postgres to MyISAM (obviously it is not a very
fair comparison) and we do need
ACID, so all comparisons are made against InnoDB (which now supports
MVCC as well). I will try
again with the suggestions posted here.
Make sure that
On Sep 23, 2006, at 8:19 AM, Markus Schaber wrote:
Btw, would it be feasible to enhance normal index scans by checking all rows
in the current table block to see whether they meet the query criteria,
fetching all of them that do, and blacklisting the block so it is not
revisited during the same index scan?
I think
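What is being described is essentially what the bitmap index scan / bitmap heap scan combination (available since 8.1) already does: the index is read first, and each matching heap page is then visited exactly once, in physical order. A minimal sketch for comparing the two access paths on a hypothetical table:

  -- The planner's own choice for the predicate.
  EXPLAIN ANALYZE
  SELECT * FROM events WHERE event_type = 42;

  -- Take the plain index scan out of consideration so the bitmap path can win,
  -- then compare the two timings.
  SET enable_indexscan = off;
  EXPLAIN ANALYZE
  SELECT * FROM events WHERE event_type = 42;
  RESET enable_indexscan;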
On Sep 26, 2006, at 5:36 PM, Marc Morin wrote:
1- Partitions loaded without indexes on them, and the index built "when the
partition is full". Slow to drill down into incomplete partitions.
2- Partitions with the index in place as they are loaded. Slow on insert
(the problem mentioned),
but good to drill down.
How big a
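A minimal sketch of scenario 1 under inheritance-based partitioning; log_parent, logged_at and the partition names are all hypothetical:

  -- A new partition starts life without indexes, so bulk loading stays cheap.
  CREATE TABLE log_2006_09 (
      CHECK (logged_at >= '2006-09-01' AND logged_at < '2006-10-01')
  ) INHERITS (log_parent);

  -- ... bulk INSERTs / COPY into log_2006_09 happen here ...

  -- Once the partition stops receiving rows, build the index so drill-down
  -- queries become fast, and refresh its statistics.
  CREATE INDEX log_2006_09_logged_at_idx ON log_2006_09 (logged_at);
  ANALYZE log_2006_09;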
On Sep 24, 2006, at 2:12 PM, Ben wrote:
Ah, so I do. Thanks, that helps an awful lot.
But the plan is still twice as expensive as when I put in the
static values. Is it just unreasonable to expect the planner to see
that there aren't many rows in the subselect, and therefore use the bitmap
scans a
Have you ever done any testing to see if just setting
default_statistics_target to 500 has a negative impact on the system?
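A minimal sketch of both variants; orders and customer_id are illustrative names:

  -- Session-level test before touching postgresql.conf.
  SET default_statistics_target = 500;
  ANALYZE;          -- re-sample every table in the database with the new target

  -- Or raise the target only for the column whose selectivity keeps being
  -- misjudged, which keeps the extra overhead contained.
  ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 500;
  ANALYZE orders;

The usual downside of a high global target is larger statistics entries, slower ANALYZE runs and slightly more planning time, which is what the question about a negative impact comes down to.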
On Sep 22, 2006, at 4:48 PM, Cosimo Streppone wrote:
Christian Storm wrote:
At the moment, my rule of thumb is to check out the ANALYZE VERBOSE
messages to see if all
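A minimal sketch of that kind of spot check; the table name is illustrative, and the point is to compare what ANALYZE VERBOSE reports against the page and row counts the planner has stored:

  -- Per table, so the output stays readable.
  ANALYZE VERBOSE orders;

  -- Cross-check the reported estimates against what is stored for the planner.
  SELECT relname, relpages, reltuples
  FROM pg_class
  WHERE relname = 'orders';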
Hi all,
I am still encountering this issue.
I am doing further troubleshooting.
Here is what I found:
When I do: dtrace -s /usr/demo/dtrace/whoio.d
I found that there is one process doing the majority of the I/O, but that process is not listed in pg_stat_activity.
I am also seeing more of this type
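pg_stat_activity only lists client backends, so an auxiliary process (background writer, stats collector, archiver and the like) can dominate the I/O without ever appearing there. A minimal sketch for cross-checking the pid that whoio.d reports; the column names below are the 8.x ones:

  -- Backends Postgres itself knows about; on 8.x the pid column is procpid.
  SELECT procpid, usename, current_query, query_start
  FROM pg_stat_activity;

  -- If the heavy pid from whoio.d is not in this list, check it at the OS
  -- level (e.g. ps -fp <pid>) to see which postgres auxiliary process it is.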
Hi, Marc,
Marc Morin wrote:
>> I wonder whether there is a way to use table partitioning to
>> make the insert pattern more localized? We'd need to know a
>> lot more about your insertion patterns to guess how, though.
>
> We're doing partitioning as well.
And is constraint exclusion set?
>
> So, I'd like my cake and eat it too... :-)
>
> I'd like to have my indexes built as rows are inserted into the
> partition so help with the drill down...
>
So you want to drill down at such a fine-grained level that summary tables don't do
much good? Keep in mind, even if you roll up only two records, th
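constraint_exclusion is the setting that lets the planner skip partitions whose CHECK constraints rule them out; a minimal sketch of the check being asked about, reusing the hypothetical log_parent/logged_at names from the partition sketch above:

  SHOW constraint_exclusion;     -- needs to be on for partitions to be pruned
  SET constraint_exclusion = on;

  -- With it on, a drill-down that filters on the partition key should only
  -- touch the child tables whose CHECK constraints can match.
  EXPLAIN
  SELECT count(*)
  FROM log_parent
  WHERE logged_at >= '2006-09-01' AND logged_at < '2006-09-02';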
Tobias Brox <[EMAIL PROTECTED]> writes:
> What causes the nested loops to be estimated so costly - or is it the
> merge joins that are estimated too cheaply? Should I raise all the
> planner cost constants, or only one of them?
If your tables are small enough to fit (mostly) in memory, then the
p
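If the tables really do sit in memory, the constants that usually matter are effective_cache_size and random_page_cost. A minimal per-session experiment; the values and the query are illustrative, not recommendations:

  -- effective_cache_size is in 8 kB pages here; 1500000 is roughly 12 GB.
  SET effective_cache_size = 1500000;
  SET random_page_cost = 2.0;    -- closer to sequential cost when reads hit cache

  EXPLAIN ANALYZE
  SELECT a.id, e.payload
  FROM account a JOIN account_event e ON e.account_id = a.id
  WHERE a.created > now() - interval '1 day';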
Yes, that is our application. We have implemented both scenarios...
1- Partitions loaded without indexes on them, and the index built "when the
partition is full". Slow to drill down into incomplete partitions.
2- Partitions with the index in place as they are loaded. Slow on insert (the problem mentioned),
but good to drill down.
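Scenario 2 is the mirror image: the index exists from the moment the partition is created, so drill-down works on the partition that is still filling, at the price of an index update on every insert. A minimal sketch with the same hypothetical names as above:

  CREATE TABLE log_2006_10 (
      CHECK (logged_at >= '2006-10-01' AND logged_at < '2006-11-01')
  ) INHERITS (log_parent);

  -- Index built up front: the current, half-full partition is queryable at
  -- full speed, but every insert also pays for the index maintenance.
  CREATE INDEX log_2006_10_logged_at_idx ON log_2006_10 (logged_at);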
> > The bottom line here is likely to be "you need more RAM" :-(
>
> Yup. Just trying to get a handle on what I can do if I need more than
> 16G of RAM... That's as much as I can put on the installed base of
> servers, 100s of them.
>
> >
> > I wonder whether there is a way to use table partitioning
I have some odd cases here joining two tables - the planner insists on
Merge Join, but Nested Loop is really faster - and that makes sense,
since I'm selecting just a small partition of the data available. All
planner constants seem to be set at the default values; the only way to
get a shift tow
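One way to confirm that the nested loop really is cheaper, before touching any global cost constants, is to take the merge join away for a single session and compare timings; the tables and columns below are illustrative:

  -- What the planner picks on its own.
  EXPLAIN ANALYZE
  SELECT t.id, d.payload
  FROM ticket t JOIN ticket_detail d ON d.ticket_id = t.id
  WHERE t.created > now() - interval '1 hour';

  -- Same query with the merge join taken out of consideration.
  SET enable_mergejoin = off;
  EXPLAIN ANALYZE
  SELECT t.id, d.payload
  FROM ticket t JOIN ticket_detail d ON d.ticket_id = t.id
  WHERE t.created > now() - interval '1 hour';

  RESET enable_mergejoin;

If the nested loop wins by a wide margin, adjusting the cost constants (random_page_cost, effective_cache_size) is usually a better long-term fix than leaving enable_mergejoin switched off.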