On Thu, Sep 4, 2008 at 2:21 PM, Kiran Mukhyala <[EMAIL PROTECTED]> wrote:
>Can someone please tell me how to improve the query planner
>estimate? I did try vacuum analyze. Here are some details:
Have you tried increasing the statistics target for that table (or in general)?
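The statistics target Scott refers to can be raised per column, followed by a fresh ANALYZE; the table and column names below are only illustrative:

```sql
-- Raise the sample size the planner keeps for one column
-- (10 was the 8.x default; larger targets cost more ANALYZE time):
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 1000;
ANALYZE orders;

-- Or raise it for everything via default_statistics_target in postgresql.conf.
```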
Thanks for your reply.
Greg wrote:
> On Tue, 2 Sep 2008, Duan Ligong wrote:
> > - Does Vacuum delete the old clog files?
>
> Yes, if those transactions are all done. One possibility here is that
> you've got some really long-running transaction floating around that is
> keeping normal clog cleanup from happening.
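A long-running transaction like the one Greg describes can usually be spotted in pg_stat_activity; this sketch uses the 8.x column names (procpid, current_query):

```sql
-- Sessions whose transaction has been open for over an hour;
-- such transactions can hold back clog truncation and vacuum.
SELECT procpid, usename, xact_start, current_query
FROM pg_stat_activity
WHERE xact_start < now() - interval '1 hour'
ORDER BY xact_start;
```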
On Thu, Sep 4, 2008 at 2:01 PM, Ulrich <[EMAIL PROTECTED]> wrote:
> Scott Marlowe wrote:
>>
>> On Thu, Sep 4, 2008 at 1:39 PM, Ulrich <[EMAIL PROTECTED]> wrote:
>>
>> I wouldn't set shared_buffers that high
>> just because things like vacuum and sorts need memory too
>
> Okay, I understand that vacuum uses memory, but I thought sorts are done
> in work_mem?
Scott Marlowe wrote:
> On Thu, Sep 4, 2008 at 1:39 PM, Ulrich <[EMAIL PROTECTED]> wrote:
> I wouldn't set shared_buffers that high
> just because things like vacuum and sorts need memory too
Okay, I understand that vacuum uses memory, but I thought sorts are done in
work_mem? I am only sorting the result of one
Tom Lane <[EMAIL PROTECTED]> writes:
> Guillaume Cottenceau <[EMAIL PROTECTED]> writes:
>> It seems to me that if the correlation is 0.99[1], and you're
>> looking for less than 1% of rows, the expected rows may be at the
>> beginning or at the end of the heap?
>
> Right, but if you know the value being searched for, you could then estimate
On Thu, Sep 4, 2008 at 1:39 PM, Ulrich <[EMAIL PROTECTED]> wrote:
> Scott Marlowe wrote:
>>
>> Stop using a virtual server?
>
> That is not possible...
Sorry shoulda had a smiley face at the end of that. :) <-- there
>> I wouldn't set shared_buffers that high
>> just because things like vacuum and sorts need memory too
Scott Marlowe wrote:
> Stop using a virtual server?
That is not possible...
> I wouldn't set shared_buffers that high
> just because things like vacuum and sorts need memory too
Okay, I understand that vacuum uses memory, but I thought sorts are done
in work_mem? I am only sorting the result of one
On Thu, 4 Sep 2008, Thomas Finneid wrote:
> I am unsure if I need a journal in the fs or if the db covers that problem.
There are some theoretical cases where the guarantees of ext3 seem a
little weak unless you've turned the full journal on even in a database
context (we just had a long thread
On Thu, Sep 4, 2008 at 1:24 PM, Ulrich <[EMAIL PROTECTED]> wrote:
> Hi,
> I have a virtual server with 256 MB of RAM. I am using it as a webserver,
> mailserver and for postgres. So there is something like 150MB left for
> postgres.
>
> Here are my configs (I haven't benchmarked...)
> max_connections = 12 (I think, I will not have more parallel
Hi,
I have a virtual server with 256 MB of RAM. I am using it as a
webserver, mailserver and for postgres. So there is something like 150MB
left for postgres.
Here are my configs (I haven't benchmarked...)
max_connections = 12 (I think, I will not have more parallel
connections, because I only
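For a machine with roughly 150MB to spare, a starting point along the lines being discussed might look like this; the values are illustrative guesses, not benchmarked recommendations:

```
# postgresql.conf sketch for ~150MB headroom (illustrative, not benchmarked)
max_connections = 12
shared_buffers = 24MB         # modest: leave room for OS cache and other services
work_mem = 4MB                # per sort/hash operation, per connection
maintenance_work_mem = 16MB   # used by VACUUM and CREATE INDEX
effective_cache_size = 64MB   # planner hint only, not an allocation
```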
"Tom Lane" <[EMAIL PROTECTED]> writes:
> I'm not sure offhand whether the existing correlation stats would be of use
> for
> it, or whether we'd have to get ANALYZE to gather additional data.
Please forgive the tangent, but would it be practical to add support for
gathering statistics on an arbitrary expression?
Hello,
I am running a select on a large table with two where
conditions.
Explain analyze shows that the estimated number of rows returned
(190760) is much more than the actual rows returned (58221),
which is probably the underlying cause for the poor
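The mismatch Kiran describes shows up directly in EXPLAIN ANALYZE output; a minimal sketch with illustrative names:

```sql
EXPLAIN ANALYZE
SELECT * FROM big_table
WHERE col_a = 42 AND col_b = 'x';
-- In each plan node, "rows=" is the planner's estimate and
-- "actual ... rows=" is what the executor really returned.
```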
On Thu, Sep 4, 2008 at 10:14 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
>
> Actually, an even easier hack (which would have the nice property of not
> needing to know the exact value being searched for), would simply use
> the existing cost estimates if the WHERE variables have low correlation
> (mea
Guillaume Cottenceau <[EMAIL PROTECTED]> writes:
> It seems to me that if the correlation is 0.99[1], and you're
> looking for less than 1% of rows, the expected rows may be at the
> beginning or at the end of the heap?
Right, but if you know the value being searched for, you could then
estimate w
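The correlation statistic being discussed is visible in pg_stats; the table name below is illustrative:

```sql
-- correlation near +1 or -1 means the column's values are stored
-- almost in sorted order on disk; near 0 means randomly scattered.
SELECT tablename, attname, correlation
FROM pg_stats
WHERE tablename = 'my_table';
```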
On Thu, 4 Sep 2008, Guillaume Cottenceau wrote:
> It seems to me that if the correlation is 0.99, and you're
> looking for less than 1% of rows, the expected rows may be at the
> beginning or at the end of the heap?
Not necessarily. Imagine for example that you have a table with 1M rows,
and one of
Matthew Wakeling writes:
> On Thu, 4 Sep 2008, Tom Lane wrote:
>> Ultimately the only way that we could get the right answer would be if
>> the planner realized that the required rows are concentrated at the end
>> of the table instead of being randomly scattered. This isn't something
>> that is considered at all right now in seqscan
[EMAIL PROTECTED] wrote:
> Is there some kind of limit in postgresql about the number of partitions? Do
> you know some tuning in the conf files to improve postgresql management of so
> many tables? I have already used different tablespaces, one for each main table
> and its 288 partitions.
PostgreSQL
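For context, 8.x-era partitioning uses table inheritance plus CHECK constraints, and relies on constraint_exclusion so the planner can skip irrelevant partitions; the schema below is only a sketch:

```sql
CREATE TABLE measurements (ts timestamptz NOT NULL, val integer);

-- One child per time slice; the CHECK constraint is what lets the
-- planner exclude this partition from queries outside its range.
CREATE TABLE measurements_2008_09 (
    CHECK (ts >= '2008-09-01' AND ts < '2008-10-01')
) INHERITS (measurements);

SET constraint_exclusion = on;
```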
On Thu, 4 Sep 2008, Tom Lane wrote:
> Ultimately the only way that we could get the right answer would be if
> the planner realized that the required rows are concentrated at the end
> of the table instead of being randomly scattered. This isn't something
> that is considered at all right now in seqscan
"Matt Smiley" <[EMAIL PROTECTED]> writes:
> "Tom Lane" <[EMAIL PROTECTED]> writes:
>> default cost settings will cause it to prefer bitmap scan for retrieving
>> up to about a third of the table, in my experience). I too am confused
>> about why it doesn't prefer that choice in the OP's example.
What about filesystem properties?
on linux I am using:
ext3(with journal) and auto,rw,async,noatime,nodiratime
on disks for data and journal
I am unsure if I need a journal in the fs or if the db covers that
problem. With regards to that, do I then need to set some linux setting
to force
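A typical ext3 setup for a data volume, close to the options Thomas lists, might look like the fstab line below (device and mount point are illustrative); note that PostgreSQL's durability still depends on fsync reaching the platter, whatever the journal mode:

```
# /etc/fstab sketch -- data=ordered is ext3's default journal mode
/dev/sdb1  /var/lib/pgsql  ext3  rw,noatime,nodiratime,data=ordered  0  2
```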