On Fri, Dec 15, 2017 at 02:54:06PM -0600, Justin Pryzby wrote:
> SSD: good question.
> Here's a Rackspace VM with PG9.6.6, 2GB shared_buffers, 8GB RAM (~4GB of
> which is being used as OS page cache), and 32GB SSD (with
> random_page_cost=1).  The server is in use by our application.
> I believe you could scale up the size of the table to see this behavior
> with any cache size.  0.0001 controls the "jitter", with smaller values
> being more jittery.
> postgres=# CREATE TABLE t(i int, j int) TABLESPACE tmp;
> postgres=# CREATE INDEX ON t(i);
> postgres=# INSERT INTO t SELECT (0.0001*a+9*(random()-0.5))::int FROM generate_series(1,99999999) a;
> postgres=# VACUUM ANALYZE t;
>  public | t    | table | pryzbyj | 3458 MB |
> relpages | 442478

I realized I made a mistake here; the table is on the SSD, but its index is not.
So all this cost is apparently coming from the index, not the heap.

   ->  Bitmap Heap Scan on t  (cost=855041.91..1901994.06 rows=40298277 width=4) (actual time=14202.624..27754.982 rows=40009853 loops=1)
         ->  Bitmap Index Scan on t_i_idx1  (cost=0.00..844967.34 rows=40298277 width=0) (actual time=14145.877..14145.877 rows=40009853 loops=1)
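For the record, the index could be moved onto the same SSD tablespace before re-running the test; a sketch, assuming "tmp" is the SSD tablespace used for the table above and t_i_idx1 is the index from the plan:

```sql
-- Move the existing index onto the SSD tablespace:
ALTER INDEX t_i_idx1 SET TABLESPACE tmp;

-- Or create it there in the first place:
-- CREATE INDEX ON t(i) TABLESPACE tmp;
```

(ALTER ... SET TABLESPACE rewrites the index files, so it takes a lock and some I/O on a 40M-row index.)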

Let me get back to you about that.

