Tom Lane wrote:
> But having said that, this particular test case is far from compelling.
> Any sane text search application is going to try to filter out
> common words as stopwords; it's only the failure to do that that's
> making this run slow.
Below are test runs not with a "commonterm" but an
Jesper Krogh writes:
> I've now got a test-set that can reproduce the problem where the two
> fully equivalent queries (
> body_fts @@ to_tsquery('commonterm & nonexistingterm')
> and
> body_fts @@ to_tsquery('commonterm') AND body_fts @@
> to_tsquery('nonexistingterm')
> give a difference of x30
On Fri, Oct 30, 2009 at 1:18 PM, Jeremy Harris wrote:
> So, on the becoming more intelligent front: PostgreSQL already does
> some operations as background maintenance (autovacuum). Extending
> this to de-bloat indices does not seem conceptually impossible
It could be done, but it's not easy bec
On 10/30/2009 08:01 PM, Greg Stark wrote:
On Fri, Oct 30, 2009 at 12:53 PM, Anj Adu wrote:
Any relational database worth its salt has partitioning for a reason.
1. Maintenance. You will need to delete data at some
point (cleanup)... Partitions are the only way to do it effectively.
This is t
Databases are designed to handle very large tables, but effectiveness
is always in question. A full table scan on a partitioned table is
always preferable to a FTS on a super large table. The nature of the
query will of course dictate performance, but you run into definite
limitations with very larg
On Fri, Oct 30, 2009 at 12:53 PM, Anj Adu wrote:
> Any relational database worth its salt has partitioning for a reason.
>
> 1. Maintenance. You will need to delete data at some
> point (cleanup)... Partitions are the only way to do it effectively.
This is true and it's unavoidably a manual proce
Any relational database worth its salt has partitioning for a reason.
1. Maintenance. You will need to delete data at some
point (cleanup)... Partitions are the only way to do it effectively.
2. Performance. Partitioning offers a way to query smaller slices of
data automatically (i.e. the query opt
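The maintenance point above can be sketched concretely. This is a minimal, hypothetical example in the inheritance style used by the 8.x releases under discussion (the `events` table and its columns are invented for illustration; current PostgreSQL offers declarative `PARTITION BY` instead):

```sql
-- Hypothetical daily-partitioned table, 8.x inheritance style.
CREATE TABLE events (
    id        bigint,
    logged_at timestamp NOT NULL,
    payload   text
);

-- One child per day; the CHECK constraint lets the planner skip
-- irrelevant children when constraint_exclusion is enabled.
CREATE TABLE events_20091030 (
    CHECK (logged_at >= '2009-10-30' AND logged_at < '2009-10-31')
) INHERITS (events);

-- Cleanup becomes a cheap catalog operation instead of a huge DELETE:
DROP TABLE events_20091001;
```

Queries against `events` with a `logged_at` range predicate then touch only the matching children, which is what makes both the maintenance and the performance arguments work.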
Hi.
I've now got a test-set that can reproduce the problem where the two
fully equivalent queries (
body_fts @@ to_tsquery('commonterm & nonexistingterm')
and
body_fts @@ to_tsquery('commonterm') AND body_fts @@
to_tsquery('nonexistingterm')
give a difference of x300 in execution time. (grows wi
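For reference, the two query shapes being compared look roughly like this (a sketch: the `docs` table name is invented; `body_fts` is assumed to be a GIN-indexed tsvector column, as in the thread):

```sql
-- Single combined tsquery:
SELECT count(*) FROM docs
WHERE body_fts @@ to_tsquery('commonterm & nonexistingterm');

-- Logically equivalent pair of predicates:
SELECT count(*) FROM docs
WHERE body_fts @@ to_tsquery('commonterm')
  AND body_fts @@ to_tsquery('nonexistingterm');
```

Both demand exactly the same rows, so a large runtime gap points at how the index lookups are combined, not at the query semantics.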
On 10/30/2009 12:43 PM, Merlin Moncure wrote:
On Thu, Oct 29, 2009 at 11:40 AM, Steve Crawford
wrote:
Use a parent table and 20 child tables. Create a new child every day and
drop the 20-day-old table. Table drops are far faster and lower-impact than
delete-from a 120-million row table. Index-
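The rotation scheme described above could look like this sketch (a hypothetical `queue` parent table, 8.x inheritance style):

```sql
-- Each day, add a fresh child table...
CREATE TABLE queue_20091030 () INHERITS (queue);

-- ...and retire the 20-day-old slice. Dropping a child is a quick
-- catalog change; a DELETE of the same rows would scan the table
-- and leave bloat behind in the indexes.
DROP TABLE queue_20091010;
```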
2009/10/30 Grzegorz Jaśkiewicz :
> For explain plans, use http://explain.depesz.com/
> Besides, why are you using LEFT JOIN?
> The equivalent of IN () is just JOIN, not LEFT JOIN.
> And please, format your query so it is readable without twisting eyeballs
> before sending.
I prefer to have things posted to th
On Thu, Oct 29, 2009 at 11:40 AM, Steve Crawford
wrote:
> Peter Meszaros wrote:
>>
>> Hi All,
>>
>> I use PostgreSQL 8.3.7 as a huge queue. There is a very simple table
>> with six columns and two indices, and about 6 million records are
>> written into it every day, continuously committed every 1
For explain plans, use http://explain.depesz.com/
Besides, why are you using LEFT JOIN?
The equivalent of IN () is just JOIN, not LEFT JOIN.
And please, format your query so it is readable without twisting eyeballs
before sending.
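The advice can be illustrated with a sketch (hypothetical `orders`/`customers` tables; the equivalence assumes `customers.id` is unique, so the inner join cannot duplicate order rows):

```sql
-- LEFT JOIN misused as an existence filter:
SELECT o.* FROM orders o
LEFT JOIN customers c ON c.id = o.customer_id
WHERE c.id IS NOT NULL;

-- Equivalent, clearer forms:
SELECT o.* FROM orders o
JOIN customers c ON c.id = o.customer_id;

SELECT * FROM orders
WHERE customer_id IN (SELECT id FROM customers);
```

The `WHERE c.id IS NOT NULL` filter discards exactly the rows the LEFT JOIN preserved, which is why a plain JOIN (or IN) says the same thing more directly.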
2009/10/29 Andreas Hartmann
> Hi everyone,
>
> I want to model the following scenario for an online marketing application:
>
> Users can create mailings. The list of recipients can be uploaded as
> spreadsheets with arbitrary columns (each row is a recipient). I expect the
> following maximum qua