Re: [PERFORM] Query RE: Optimising UUID Lookups

2015-03-24 Thread Maxim Boguk
(1) SELECT uuid FROM lookup WHERE state = 200 LIMIT 4000;

OUTPUT FROM EXPLAIN (ANALYZE, BUFFERS):

 Limit  (cost=0.00..4661.02 rows=4000 width=16) (actual time=0.009..1.036 rows=4000 loops=1)
   Buffers: shared hit=42
   ->  Seq Scan on lookup
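The plan above satisfies the LIMIT cheaply (42 shared buffer hits) because enough matching rows happen to sit near the front of the heap; a sequential scan is not robust if that changes. A partial index is one common way to make a state-filtered lookup like this fast regardless of row placement. A minimal sketch, assuming the table and column names from the query (the index name is illustrative):

```sql
-- Hypothetical partial index covering only rows in the queried state.
-- Because the indexed column is uuid, "SELECT uuid ... WHERE state = 200"
-- can be answered by an index-only scan once the visibility map is current.
CREATE INDEX CONCURRENTLY lookup_state_200_uuid_idx
    ON lookup (uuid)
    WHERE state = 200;

-- Re-check the plan for the original query:
EXPLAIN (ANALYZE, BUFFERS)
SELECT uuid FROM lookup WHERE state = 200 LIMIT 4000;
```

The WHERE clause keeps the index small: it contains only the state = 200 rows, so it stays cheap to maintain even if the table is large.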

Re: [PERFORM] Query RE: Optimising UUID Lookups

2015-03-24 Thread Roland Dunn
Thanks for replies. More detail and data below:

Table: lookup
  uuid:               type uuid. not null. plain storage.
  datetime_stamp:     type bigint. not null. plain storage.
  harvest_date_stamp: type bigint. not null. plain storage.
  state:              type smallint. not null. plain storage.
Indexes:
  lookup_pkey PRIMARY
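For reference, the column definitions quoted above correspond to DDL along these lines. This is a reconstruction sketch: the listing is truncated at "lookup_pkey PRIMARY", so which column(s) the primary key covers is not shown and is left as an assumption in a comment.

```sql
-- Reconstructed from the description above. "Plain" storage is the
-- default for these fixed-width types, so no SET STORAGE is needed.
CREATE TABLE lookup (
    uuid               uuid     NOT NULL,
    datetime_stamp     bigint   NOT NULL,
    harvest_date_stamp bigint   NOT NULL,
    state              smallint NOT NULL
);

-- lookup_pkey exists per the truncated listing; the key column is an
-- assumption here (uuid is the natural candidate):
-- ALTER TABLE lookup ADD CONSTRAINT lookup_pkey PRIMARY KEY (uuid);
```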

Re: [PERFORM] Query RE: Optimising UUID Lookups

2015-03-24 Thread David Rowley
On 21 March 2015 at 23:34, Roland Dunn roland.d...@gmail.com wrote:
> If we did add more RAM, would it be the effective_cache_size setting
> that we would alter? Is there a way to force PG to load a particular
> table into RAM? If so, is it actually a good idea?

Have you had a look at EXPLAIN
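On the questions quoted above: effective_cache_size is a planner hint only; it allocates no memory and merely tells the planner how much caching (shared_buffers plus OS page cache) to assume when costing index scans. PostgreSQL's own cache is sized by shared_buffers, and since 9.4 the contrib extension pg_prewarm can pull a relation's blocks into cache explicitly. A sketch, assuming the lookup table from this thread (the setting value is illustrative):

```sql
-- Planner hint only: set to roughly the RAM available for caching.
-- Raising it makes index scans look cheaper; it reserves nothing.
SET effective_cache_size = '8GB';  -- illustrative value

-- pg_prewarm (contrib, PostgreSQL 9.4+) reads a relation's blocks into
-- shared_buffers; it returns the number of blocks loaded.
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('lookup');
```

Note the prewarmed blocks are still subject to normal cache eviction, so this helps most right after a restart rather than as a way to pin a table in RAM permanently.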