"Kevin Grittner" writes:
> Tom Lane wrote:
>> Any sane text search application is going to try to filter out
>> common words as stopwords; it's only the failure to do that that's
>> making this run slow.
> I'd rather have the index used for the selective test, and apply the
> remaining tests to the rows retrieved from the heap.
Tom Lane wrote:
> Any sane text search application is going to try to filter out
> common words as stopwords; it's only the failure to do that that's
> making this run slow.
Imagine a large table with a GIN index on a tsvector. The user wants
a particular document, and is sure four words are in it.
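For context, the built-in 'english' text search configuration already
strips stopwords at both index and query time, while 'simple' keeps
everything, so common words end up in the GIN index. A quick sketch
(table and column names below are made up):

SELECT to_tsvector('english', 'the cat sat on the mat');
-- 'cat':2 'mat':6 'sat':3
SELECT to_tsvector('simple', 'the cat sat on the mat');
-- 'cat':2 'mat':6 'on':4 'sat':3 'the':1,5

-- the four-word search described above, against a GIN index:
CREATE INDEX docs_tsv_idx ON docs USING gin (tsv);
SELECT id FROM docs
 WHERE tsv @@ to_tsquery('english', 'rare1 & rare2 & common1 & common2');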
On Mon, Nov 2, 2009 at 7:50 AM, Peter Meszaros wrote:
> Increasing max_fsm_pages can also be helpful, but I've read that
> 'vacuum verbose ...' will issue warnings if max_fsm_pages is too small.
> I've never seen such a message; this command either runs and finishes
> or goes into an endless loop.
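For anyone checking this on a pre-8.4 release (the free space map
became self-managing in 8.4), the setting and the warning mentioned
above can be inspected from psql:

-- pre-8.4 only
SHOW max_fsm_pages;
VACUUM VERBOSE;  -- the summary at the end of a database-wide run reports
                 -- how many page slots are needed vs. max_fsm_pages
-- if the needed count exceeds the setting, raise max_fsm_pages in
-- postgresql.conf and restart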
I would recommend (if at all possible) partitioning the table and
dropping the old partitions when they are no longer needed. This
guarantees the space is freed without any VACUUM overhead. Deletes
will kill you at some point, and you don't want too much VACUUM I/O
overhead impacting your performance. A sketch follows below.
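A minimal sketch of the inheritance-style partitioning available on
releases of that era (the log table, its ts column, and the month
boundaries are all hypothetical):

CREATE TABLE log (ts timestamptz, msg text);             -- parent
CREATE TABLE log_2009_10 (
    CHECK (ts >= DATE '2009-10-01' AND ts < DATE '2009-11-01')
) INHERITS (log);                                        -- one month
-- inserts are routed to the child by a trigger or by the application;
-- when the month ages out, the space comes back immediately:
DROP TABLE log_2009_10;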
Grant Masan wrote:
Hi all,
I have now read many forums and tried many different solutions, and I
am still not getting good performance from the database. My server is
Debian Linux with 4 GB of RAM; there is also a Java application to
which I am giving 512 MB (JAVA_OPTS). The database now has about
4 million rows.
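Without knowing the workload, the usual starting points for a 4 GB box
look something like the postgresql.conf excerpt below. These are rough
rules of thumb, not measured values, and the 512 MB Java heap also
needs its share of RAM:

shared_buffers = 1GB           # ~25% of RAM is a common starting point
effective_cache_size = 2GB     # estimate of what the OS will cache
work_mem = 16MB                # per sort/hash node, so keep it modest
maintenance_work_mem = 256MB   # helps VACUUM and index builds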
Thank you all for the fast responses!
I changed the delete job's schedule from daily to hourly and will let
you know the result. This seems to be the most promising step.
The next one is tuning 'max_fsm_pages'.
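For anyone following along, an hourly purge can be as simple as a cron
entry running a bounded DELETE; smaller, more frequent batches keep the
dead-tuple buildup (and each VACUUM pass) proportionally smaller. The
database, table, and column names are made up:

# crontab: run the purge once an hour instead of once a day
0 * * * * psql -d mydb -c "DELETE FROM events WHERE ts < now() - interval '7 days'"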
At 05:24 02/11/2009, you wrote:
> The only reason I can think of for wanting to compress very small
> datums is if you have a gajillion of them, they're highly
> compressible, and you have extra CPU time coming out of your ears. In
> that case - yeah, you might want to think about pre-compressing the
> data.
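Related point: compression of individual values can also be steered per
column through the TOAST storage mode instead of pre-compressing in the
application (table and column names are hypothetical):

-- EXTERNAL stores the column out of line but never compresses it;
-- MAIN allows compression and prefers to keep the value in the heap page
ALTER TABLE docs ALTER COLUMN payload SET STORAGE EXTERNAL;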