> (1) SELECT uuid FROM lookup WHERE state = 200 LIMIT 4000;
>
> OUTPUT FROM EXPLAIN (ANALYZE, BUFFERS):
>
> Limit (cost=0.00..4661.02 rows=4000 width=16) (actual
> time=0.009..1.036 rows=4000 loops=1)
>   Buffers: shared hit=42
>   ->  Seq Scan on lookup ...
Thanks for the replies. More detail and data below:
Table: "lookup"
uuid: type uuid. not null. plain storage.
datetime_stamp: type bigint. not null. plain storage.
harvest_date_stamp: type bigint. not null. plain storage.
state: type smallint. not null. plain storage.
Indexes:
"lookup_pkey" PRIM
On 21 March 2015 at 23:34, Roland Dunn wrote:
>
> If we did add more RAM, would it be the effective_cache_size setting
> that we would alter? Is there a way to force PG to load a particular
> table into RAM? If so, is it actually a good idea?
>
Have you had a look at EXPLAIN (ANALYZE, BUFFERS) for the query?
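On the RAM questions: effective_cache_size is only a planner estimate of how
much cache (shared_buffers plus the OS page cache) is available; it allocates
nothing itself, so raising it alone won't pin anything in memory.
shared_buffers is the setting that actually reserves memory. If you want to
pull a specific table into cache, the pg_prewarm contrib module (PostgreSQL
9.4+) can do it on demand. A rough sketch, with a setting value that is an
assumption to tune for your machine:

-- Planner hint only: total cache (PG + OS) the optimizer may assume exists.
SET effective_cache_size = '12GB';  -- example value, not a recommendation

-- Load the table's heap pages into shared_buffers.
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('lookup');        -- returns the number of blocks loaded

Whether prewarming is actually a good idea depends on whether the working set
fits and stays warm; if normal traffic doesn't keep those pages hot, they'll
simply be evicted again.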
Hi Roland,
> Do you speculate that the 10,000 rows would be randomly spread because
> of the uuid-type that we chose, namely the uuid-4 type? i.e. the
> completely random one? If we'd chosen the uuid-1 type (mac
> address+timestamp), rows would have been more regularly placed and so
> faster to retrieve?
Hi Maxim,
Thanks for the reply, very interesting.
Do you speculate that the 10,000 rows would be randomly spread because
of the uuid-type that we chose, namely the uuid-4 type? i.e. the
completely random one? If we'd chosen the uuid-1 type (mac
address+timestamp), rows would have been more regularly placed and so
faster to retrieve?
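One way to see how scattered those rows actually are on disk (a sketch, not
something anyone in the thread has run): after ANALYZE, pg_stats records a
correlation figure per column; values near +1 or -1 mean the column's sort
order tracks the physical row order, while values near 0 mean the rows are
randomly spread, which is exactly what fully random v4 UUIDs produce.

-- correlation near 0 => index order and heap order are unrelated,
-- so fetching N rows by uuid can touch up to N distinct heap pages.
SELECT attname, correlation
FROM pg_stats
WHERE tablename = 'lookup'
  AND attname = 'uuid';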
On Sat, Mar 21, 2015 at 6:01 AM, Roland Dunn wrote:
> Hi,
> Wonder if anyone can help.
>
> Have a lookup table where the primary key is a native uuid type
> (filled with uuids of type 4), around 50m rows in size.
>
> Have a separate table, table A, similar size (around 50m rows).
> Primary key i