On 2015-05-15 18:31, Scott Marlowe wrote:
> On Fri, May 15, 2015 at 9:18 AM, Job wrote:
>> I have a table of about 10 million records, with an index on a string
>> field.
>> Currently it is alphabetical; since queries run at about 100-200 per
>> second, I was looking for a better way to improve performance and reduce
>> workload.
Hi,
My approach would be to improve the uniqueness of each record/row; otherwise
you'll have to traverse the entire table for every query. At 100-200 queries
per second you are asking for trouble on several fronts, including wearing
out your hard disk faster than necessary.
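One way to put a rough number on that uniqueness (just a sketch; mytable and
myfield are placeholder names for the real table and indexed column):

  -- fraction of distinct values in the indexed column:
  -- close to 1.0 means nearly unique, close to 0 means the index
  -- is not very selective and each lookup touches many rows
  SELECT count(DISTINCT myfield)::float / count(*) AS selectivity
  FROM mytable;

  -- or use the planner's estimate instead of scanning the table:
  SELECT n_distinct FROM pg_stats
  WHERE tablename = 'mytable' AND attname = 'myfield';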
Hope this helps. Good luck.
On Fri, May 15, 2015 at 9:18 AM, Job wrote:
> Hello,
>
> I have a table of about 10 million records, with an index on a string
> field.
> Currently it is alphabetical; since queries run at about 100-200 per
> second, I was looking for a better way to improve performance and reduce
> workload.
You should probably experiment with a btree-gin index on those.
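Something along these lines, as a sketch (mytable and myfield are assumed
names, and the btree_gin contrib extension has to be available):

  CREATE EXTENSION IF NOT EXISTS btree_gin;
  -- GIN index over a plain text column, via the btree_gin operator classes
  CREATE INDEX mytable_myfield_gin_idx ON mytable USING gin (myfield);

Whether it beats the existing btree depends on the queries: GIN stores each
distinct key only once, so it tends to help most when the column has many
duplicate values or when you want to combine it with other GIN-indexable
columns in one multicolumn index.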
On 15/05/2015 12:22, "Job" wrote:
> Hello,
>
> I have a table of about 10 million records, with an index on a string
> field.
> Currently it is alphabetical; since queries run at about 100-200 per
> second, I was looking for a better way to improve performance and reduce
> workload.