Re: Why could different data in a table be processed with different performance?

2018-09-28 Thread Fabio Pardi



On 28/09/18 11:56, Vladimir Ryabtsev wrote:
>
> > It could affect space storage, for the smaller blocks.
> But at which extent? As I understand it is not something about "alignment" to 
> block size for rows? Is it only low-level IO thing with datafiles?
>

Maybe 'for the smaller blocks' was not very meaningful.
What I meant was: 'in terms of wasted disk space'.

An example:

create table test_space (i int);

empty table:

select pg_total_relation_size('test_space');
 pg_total_relation_size
-------------------------
                       0
(1 row)

insert a single record:

insert into test_space values (1);


select pg_total_relation_size('test_space');
 pg_total_relation_size
-------------------------
                    8192
(1 row)

select pg_relation_filepath('test_space');
 pg_relation_filepath
-----------------------
 base/16384/179329
(1 row)

ls -alh base/16384/179329
-rw------- 1 postgres postgres 8.0K Sep 28 16:09 base/16384/179329

That means that if your block size were bigger, more space would be
allocated for that single record.
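
As a small follow-up sketch on the same test_space table (assuming the
default 8 kB block size; the row count is only a rough estimate of what
fits in one page):

show block_size;   -- 8192 on a default build

-- roughly 200 of these narrow int rows still fit in that first page,
-- so pg_total_relation_size() stays at one 8 kB block until the page fills:
insert into test_space select generate_series(2, 200);
select pg_total_relation_size('test_space');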

regards,

Fabio Pardi



Re: To keep indexes in memory, is large enough effective_cache_size enough?

2018-09-28 Thread David Rowley
On 28 September 2018 at 16:45, Sam R.  wrote:
> That was what I was suspecting a little. Double buffering may not matter in
> our case, because the whole server is meant for PostgreSQL only.
>
> In our case, we can e.g. reserve almost "all memory" for PostgreSQL (shared
> buffers etc.).
>
> Please correct me if I am wrong.

You mentioned above:

> RAM: 64 GB
> Data: 500 GB - 1.5 TB, for example.

If most of that data just sits on disk and is never read then you
might be right, but if the working set of the data is larger than RAM
then you might find you get better performance from smaller shared
buffers.
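
One rough way to see whether the working set currently fits in shared
buffers is the block hit ratio in pg_stat_database (only an indicator; it
says nothing about the OS page cache, and the ~99% threshold mentioned in
the comment is illustrative, not from this thread):

select datname,
       blks_hit,
       blks_read,
       round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 2) as hit_pct
from pg_stat_database
where datname = current_database();
-- a hit_pct well below ~99% under steady load suggests the working set
-- does not fit in shared_buffers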

I think the best thing you can do is to go and test this. Write some
code that mocks up a realistic production workload and see where you
get the best performance.
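
As a rough illustration of such a test (a sketch only: the table name, row
count and query are invented, and pgbench with a custom script is the more
usual tool for driving the workload):

-- build a mock table roughly the size of the production working set
create table mock_data as
select g as id, md5(g::text) as payload
from generate_series(1, 10000000) g;   -- adjust row count to the target size
create index on mock_data (id);
analyze mock_data;

-- then run representative queries (here a random single-row lookup)
-- under different shared_buffers settings and compare timings
select * from mock_data where id = (random() * 10000000)::int;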

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: Why could different data in a table be processed with different performance?

2018-09-28 Thread Vladimir Ryabtsev
> Does your LVM have readahead
> ramped up ?  Try lvchange -r 65536 data/postgres (or similar).

Changed this from 256 to 65536.
If it is supposed to take effect immediately (with no server reboot or other
changes needed), then I see no change in performance. None at all.

Vlad