+1
A larger record size can increase the compression ratio and thus reduce I/O.

Did you set atime off for zfs?
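
A minimal sketch, reusing the dataset name from the quoted post (note that
a recordsize change only affects files written afterwards):

zfs set atime=off zroot/ara/sqldb/pgsql
zfs set recordsize=128k zroot/ara/sqldb/pgsql
zfs get compressratio zroot/ara/sqldb/pgsql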

On Sep 28, 2016 at 6:16 AM, "Karl Denninger" <k...@denninger.net> wrote:

> On 9/27/2016 16:38, Tomas Vondra wrote:
>
> On 09/27/2016 06:00 PM, Torsten Zuehlsdorff wrote:
>
>
>
> On 29.07.2016 08:30, Tomas Vondra wrote:
>
> On 07/29/2016 08:04 AM, trafdev wrote:
>
> Hi.
>
> I have an OLAP-oriented DB (light occasional bulk writes and heavy
> aggregated selects over large periods of data) based on Postgres 9.5.3.
>
> Server is a FreeBSD 10.3 with 64GB of RAM and 2x500GB SSD (root on ZFS,
> mirror).
>
> The largest table is 13GB (with a 4GB index on it), other tables are 4,
> 2 and less than 1GB.
>
> After reading a lot of articles and "howto-s", I've collected the
> following set of tweaks and hints:
>
>
> ZFS pool creation:
> zfs create zroot/ara/sqldb
> zfs create -o recordsize=8k -o primarycache=all zroot/ara/sqldb/pgsql
>
>
> zfs get primarycache,recordsize,logbias,compression
> zroot/ara/sqldb/pgsql
> NAME                   PROPERTY      VALUE         SOURCE
> zroot/ara/sqldb/pgsql  primarycache  all           local
> zroot/ara/sqldb/pgsql  recordsize    8K            local
> zroot/ara/sqldb/pgsql  logbias       latency       local
> zroot/ara/sqldb/pgsql  compression   lz4           inherited from zroot
>
> L2ARC is disabled
> VDEV cache is disabled
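>
> (One way to verify, assuming the pool layout from the post: "zpool
> status zroot" should show no cache vdev, and on FreeBSD 10 the vdev
> cache size is controlled by the vfs.zfs.vdev.cache.size sysctl, where
> 0 means disabled.)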
>
>
> su pgsql -c "mkdir /ara/sqldb/pgsql/data_ix"
> su pgsql -c "initdb --locale=en_US.UTF-8 -E UTF-8 -D /ara/sqldb/pgsql/data"
>
>
> /etc/sysctl.conf
> vfs.zfs.metaslab.lba_weighting_enabled=0
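> # assumed rationale: SSDs have no faster outer tracks, so LBA-based
> # metaslab weighting buys nothing there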
>
>
> postgresql.conf:
> listen_addresses = '*'
> max_connections = 100
> shared_buffers = 16GB
> effective_cache_size = 48GB
>
>
> It may not be a problem for your workload, but this effective_cache_size
> value is far too high.
>
>
> May I ask why? By default, ZFS caches up to the size of RAM minus 1 GB.
> Subtracting the shared buffers from the 64 GB of RAM, I would assume
> 47 GB would be a better value. But that would not be far too high. So
> could you please explain this?
>
>
> Because it's not a global value, but an estimate of how much RAM is
> available as a cache for a single query. So if you're running 10 queries at
> the same time, they'll have to share the memory.
>
> It's a bit trickier as there's often a fair amount of cross-backend
> sharing (backends accessing the same data, so it's likely one backend loads
> data into cache, and then other backends access it too).
>
> It also ignores that memory may get allocated for other reasons - some
> queries may allocate quite a bit of memory for sorts/aggregations. So
>
>    effective_cache_size = RAM - shared_buffers
>
> is excessive not only because it ignores the per-query nature of the
> estimate, but also because it neglects these other allocations.
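>
> As a rough illustration (the concurrency figure is made up for the
> example, not measured):
>
>    (64 GB RAM - 16 GB shared_buffers) / ~10 concurrent queries
>        = roughly 4-5 GB of cache per query
>
> which is a far more defensible starting point than 47 GB.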
>
> regards
>
> You may well find that with lz4 compression, a 128KB record size on that
> filesystem is materially faster -- it is here for most workloads under
> Postgres.
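>
> A low-risk way to test, sketched with a scratch dataset (the pgsql128
> name is made up for the test; recordsize only applies to files written
> after the change, so the data must be copied or reloaded):
>
> zfs create -o recordsize=128k zroot/ara/sqldb/pgsql128
> (copy or reload the data directory, then compare)
> zfs get compressratio zroot/ara/sqldb/pgsql zroot/ara/sqldb/pgsql128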
>
> --
> Karl Denninger
> k...@denninger.net
> *The Market Ticker*
> *[S/MIME encrypted email preferred]*
>
