Re: [PERFORM] Very poor read performance, query independent

2017-07-10 Thread Jeff Janes
On Mon, Jul 10, 2017 at 7:03 AM, Charles Nadeau wrote:

>
> The problem I have is very poor read performance. When I benchmark my array with fio I
> get random reads of about 200 MB/s and 1100 IOPS, and sequential reads of
> about 286 MB/s and 21000 IOPS.
>


That doesn't seem right.  Sequential is only 43% faster?  What job file are you
giving to fio?
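
For reference, a minimal pair of runs against the data volume might look like this (a sketch only; the test file path, size, and iodepth are illustrative, not what you used):

fio --name=seqread --filename=/mnt/data/fio.test --rw=read --bs=1M --size=8G \
    --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based --group_reporting
fio --name=randread --filename=/mnt/data/fio.test --rw=randread --bs=8k --size=8G \
    --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based --group_reporting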

What do you get if you do something simpler, like:

time cat $PGDATA/base/16402/* | wc -c

replacing 16402 with whatever your biggest database is.
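
If you need to look up that OID, something along these lines should work (a sketch, run as a user that can read pg_database):

psql -c "SELECT oid, datname, pg_size_pretty(pg_database_size(datname)) FROM pg_database ORDER BY pg_database_size(datname) DESC;"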

Cheers,

Jeff


Re: [PERFORM] Very poor read performance, query independent

2017-07-10 Thread Igor Neyman

From: pgsql-performance-ow...@postgresql.org 
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Charles Nadeau
Sent: Monday, July 10, 2017 11:48 AM
To: Andreas Kretschmer 
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Very poor read performance, query independent

Andreas,

Because the ratio between the Sequential IOPS and Random IOPS is about 29. 
Taking into account that part of the data is in RAM, I obtained an "effective" 
ratio of about 22.
Thanks!

Charles

On Mon, Jul 10, 2017 at 5:35 PM, Andreas Kretschmer wrote:


Am 10.07.2017 um 16:03 schrieb Charles Nadeau:
random_page_cost | 22


why such a high value for random_page_cost?

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com


--
Charles Nadeau Ph.D.
http://charlesnadeau.blogspot.com/


Considering your 72 GB of RAM and your database size of ~225 GB, and given that
Postgres is the only app running on the server, probably a third of your database
resides in memory. A random_page_cost of 22 therefore looks extremely high; it
probably precludes index usage in your queries altogether.

You should try this setting at least at its default value (random_page_cost = 4),
and probably go even lower.
Also, effective_cache_size should be at least as big as your shared_buffers. With
72 GB of RAM, effective_cache_size should be set to around 64 GB (again, assuming
that Postgres is the only app running on the server).
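
For example, a minimal sketch using ALTER SYSTEM (assuming superuser access; the values are starting points, not tuned numbers):

psql -c "ALTER SYSTEM SET random_page_cost = 4;"
psql -c "ALTER SYSTEM SET effective_cache_size = '64GB';"
psql -c "SELECT pg_reload_conf();"    # both settings take effect on reload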

Regards,
Igor Neyman






[PERFORM] vacuum analyze affecting query performance

2017-07-10 Thread rverghese
We are on Postgres 9.5 and have been running a daily vacuum analyze on the entire
database since 8.2.
The data has grown exponentially since then, and we are seeing that queries are
now significantly affected while the vacuum analyze runs. The query database is a
Slony slave.
So the question is: is this typical behavior, and should we still be running a
daily vacuum analyze on the database?
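
For reference, a sketch of how to check what autovacuum is already doing per table (run on the Slony slave):

psql -c "SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze FROM pg_stat_user_tables ORDER BY relname;"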

Thanks!
RV







Re: [PERFORM] Very poor read performance, query independent

2017-07-10 Thread Charles Nadeau
Andreas,

Because the ratio between the Sequential IOPS and Random IOPS is about 29.
Taking into account that part of the data is in RAM, I obtained an
"effective" ratio of about 22.
Thanks!

Charles

On Mon, Jul 10, 2017 at 5:35 PM, Andreas Kretschmer  wrote:

>
>
> Am 10.07.2017 um 16:03 schrieb Charles Nadeau:
>
>> random_page_cost | 22
>>
>
>
> why such a high value for random_page_cost?
>
> Regards, Andreas
>
> --
> 2ndQuadrant - The PostgreSQL Support Company.
> www.2ndQuadrant.com
>
>
>
>



-- 
Charles Nadeau Ph.D.
http://charlesnadeau.blogspot.com/


Re: [PERFORM] Very poor read performance, query independent

2017-07-10 Thread Andreas Kretschmer



Am 10.07.2017 um 16:03 schrieb Charles Nadeau:
random_page_cost | 22 



why such a high value for random_page_cost?

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: [PERFORM] Very poor read performance, query independent

2017-07-10 Thread Rick Otten
Although probably not the root cause, at the least I would set up huge pages
(https://www.postgresql.org/docs/9.6/static/kernel-resources.html#LINUX-HUGE-PAGES),
and bump effective_io_concurrency up quite a bit as well (256?).
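
A rough sketch of what that could look like (the huge-page count assumes 2 MB pages and the 34GB shared_buffers shown below, so treat the numbers as illustrative):

# reserve huge pages for shared_buffers: 34 GB / 2 MB = 17408 pages, plus some headroom
sudo sysctl -w vm.nr_hugepages=18000
# then in postgresql.conf (huge_pages requires a restart):
#   huge_pages = try
#   effective_io_concurrency = 256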


[PERFORM] Very poor read performance, query independent

2017-07-10 Thread Charles Nadeau
I’m running PostgreSQL 9.6.3 on Ubuntu 16.10 (kernel 4.4.0-85-generic).
Hardware is:

*2x Intel Xeon E5550

*72GB RAM

*Hardware RAID10 (4 x 146GB SAS 10k) P410i controller with 1GB FBWC (80%
read/20% write) for Postgresql data only:

Logical Drive: 3

Size: 273.4 GB

Fault Tolerance: 1+0

Heads: 255

Sectors Per Track: 32

Cylinders: 65535

Strip Size: 128 KB

Full Stripe Size: 256 KB

Status: OK

Caching: Enabled

Unique Identifier: 600508B1001037383941424344450A00

Disk Name: /dev/sdc

Mount Points: /mnt/data 273.4 GB

OS Status: LOCKED

Logical Drive Label: A00A194750123456789ABCDE516F

Mirror Group 0:

physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SAS, 146 GB, OK)

physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SAS, 146 GB, OK)

Mirror Group 1:

physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SAS, 146 GB, OK)

physicaldrive 2I:1:8 (port 2I:box 1:bay 8, SAS, 146 GB, OK)

Drive Type: Data

Formatted with ext4 with: sudo mkfs.ext4 -E stride=32,stripe_width=64 -v
/dev/sdc1.

Mounted in /etc/fstab with this line:
"UUID=99fef4ae-51dc-4365-9210-0b153b1cbbd0 /mnt/data ext4
rw,nodiratime,user_xattr,noatime,nobarrier,errors=remount-ro 0 1"
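
For what it's worth, the -E values above are consistent with the array geometry, assuming the ext4 default 4 KB block size:

# stride       = RAID strip size / fs block size = 128 KB / 4 KB = 32
# stripe_width = stride * data disks (RAID10, 4 drives -> 2 data disks) = 32 * 2 = 64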

Postgresql is the only application running on this server.


Postgresql is used as a mini data warehouse to generate reports and do
statistical analysis. It is used by at most 2 users and fresh data is added
every 10 days. The database has 16 tables: one is 224GB big and the rest
are between 16kB and 470MB big.


My configuration is:


name | current_setting | source
---------------------------------+--------------------------------------------------+---------------------
application_name | psql | client
autovacuum_vacuum_scale_factor | 0 | configuration file
autovacuum_vacuum_threshold | 2000 | configuration file
checkpoint_completion_target | 0.9 | configuration file
checkpoint_timeout | 30min | configuration file
client_encoding | UTF8 | client
client_min_messages | log | configuration file
cluster_name | 9.6/main | configuration file
cpu_index_tuple_cost | 0.001 | configuration file
cpu_operator_cost | 0.0005 | configuration file
cpu_tuple_cost | 0.003 | configuration file
DateStyle | ISO, YMD | configuration file
default_statistics_target | 100 | configuration file
default_text_search_config | pg_catalog.english | configuration file
dynamic_shared_memory_type | posix | configuration file
effective_cache_size | 22GB | configuration file
effective_io_concurrency | 4 | configuration file
external_pid_file | /var/run/postgresql/9.6-main.pid | configuration file
lc_messages | C | configuration file
lc_monetary | en_CA.UTF-8 | configuration file
lc_numeric | en_CA.UTF-8 | configuration file
lc_time | en_CA.UTF-8 | configuration file
listen_addresses | * | configuration file
lock_timeout | 100s | configuration file
log_autovacuum_min_duration | 0 | configuration file
log_checkpoints | on | configuration file
log_connections | on | configuration file
log_destination | csvlog | configuration file
log_directory | /mnt/bigzilla/data/toburn/hp/postgresql/pg_log | configuration file
log_disconnections | on | configuration file
log_error_verbosity | default | configuration file
log_file_mode | 0600 | configuration file
log_filename | postgresql-%Y-%m-%d_%H%M%S.log | configuration file
log_line_prefix | user=%u,db=%d,app=%aclient=%h | configuration file
log_lock_waits | on | configuration file
log_min_duration_statement | 0 | configuration file
log_min_error_statement | debug1 | configuration file
log_min_messages | debug1 | configuration file
log_rotation_size | 1GB | configuration file
log_temp_files | 0 | configuration file
log_timezone | localtime | configuration file
logging_collector | on | configuration file
maintenance_work_mem | 3GB | configuration file
max_connections | 10 | configuration file
max_locks_per_transaction | 256 | configuration file
max_parallel_workers_per_gather | 14 | configuration file
max_stack_depth | 2MB | environment variable
max_wal_size | 4GB | configuration file
max_worker_processes | 14 | configuration file
min_wal_size | 2GB | configuration file
parallel_setup_cost | 1000 | configuration file
parallel_tuple_cost | 0.012 | configuration file
port | 5432 | configuration file
random_page_cost | 22 | configuration file
seq_page_cost | 1 | configuration file
shared_buffers | 34GB | configuration file
shared_preload_libraries | pg_stat_statements | configuration file
ssl | on | configuration file
ssl_cert_file | /etc/ssl/certs/ssl-cert-snakeoil.pem | configuration file
ssl_key_file | /etc/ssl/private/ssl-cert-snakeoil.key | configuration file
statement_timeout | 100s | configuration file
stats_temp_directory | /var/run/postgresql/9.6-main.pg_stat_tmp | configuration file
superuser_reserved_connections | 1 | configuration file
syslog_facility | local1 | configuration file
syslog_ident | postgres | configuration file
syslog_sequence_numbers | on | configuration file
temp_file_limit | 80GB |