On Mon, Jan 28, 2019 at 12:03 AM Saurabh Nanda wrote:
> All this benchmarking has led me to a philosophical question, why does PG
> need shared_buffers in the first place?
>
PostgreSQL cannot let the OS get its hands on a dirty shared buffer until
the WAL record "protecting" that buffer has been flushed to disk.
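For context, the knobs governing that buffer/WAL interaction live in postgresql.conf; the values below are only an illustrative sketch, not tuning advice:

```ini
# postgresql.conf -- illustrative values only
shared_buffers = 8GB          # PostgreSQL's own buffer pool
wal_buffers = 16MB            # in-memory staging area for WAL records
checkpoint_timeout = 15min    # upper bound on how long dirty buffers wait
max_wal_size = 4GB            # WAL volume that forces an earlier checkpoint
```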
On 29/01/2019 at 07:15, Saurabh Nanda wrote:
Yet another update:
a) I've tried everything with my EX41-SSD server on Hetzner and nothing is
increasing the performance over & above the default configuration.
b) I tried commissioning a new EX41-SSD server and was able to replicate
the same pathetic performance numbers.
c) I tried another cloud hosting provider (E2E Networks) and just the raw
performance numbers (with default configuration) are blowing Hetzner out
of the water. I noticed that on E2E, the root filesystem is mounted with
the following
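For anyone who wants to compare mount options across providers, the kernel exposes them directly; a small sketch (Linux-specific, reads /proc/mounts):

```shell
#!/bin/sh
# Print the mount options for the root filesystem.
# awk: match the line whose second field (the mount point) is "/",
# then print the fourth field (the comma-separated option list).
awk '$2 == "/" { print $4 }' /proc/mounts
```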
>
> Do you know which of the settings is causing lower TPS?
>
> I suggest checking shared_buffers.
>
> If you haven't done it, disabling THP and KSM can resolve performance
> issues,
> esp. with large RAM like shared_buffers, at least with older kernels.
>
>
I've disabled transparent hugepages and enabled huge_pages as given below.
Let's see what happens. (I feel like a monkey pressing random buttons
trying to turn a light bulb on... and I'm sure the monkey would've had it
easier!)
AnonHugePages:     0 kB
ShmemHugePages:    0 kB
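For reference, this is one way to check the current THP state before and after such a change (a sketch; the paths are the standard Linux sysfs locations, which may be absent inside containers):

```shell
#!/bin/sh
# Report transparent hugepage status. The sysfs paths are standard on
# Linux, but may not exist in a container, hence the fallback message.
for f in /sys/kernel/mm/transparent_hugepage/enabled \
         /sys/kernel/mm/transparent_hugepage/defrag; do
    printf '%s: %s\n' "$f" "$(cat "$f" 2>/dev/null || echo 'not available')"
done
```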
>
> You should probably include the detailed hardware you are working on -
> especially for the SSD, the model can have a big impact, as well as its
> wear.
>
What's the best tool to get meaningful information for SSD drives?
-- Saurabh.
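smartmontools (`smartctl -a /dev/sda`) is the usual answer for model and wear indicators; as a lighter-weight sketch, the kernel already exposes the device model and whether it is rotational via sysfs (fields missing in a VM or container are shown as "?"):

```shell
#!/bin/sh
# List block devices with their model string and rotational flag
# (0 = SSD/NVMe, 1 = spinning disk). Missing sysfs entries print "?".
for dev in /sys/block/*; do
    name=$(basename "$dev")
    model=$(cat "$dev/device/model" 2>/dev/null || echo '?')
    rot=$(cat "$dev/queue/rotational" 2>/dev/null || echo '?')
    printf '%-10s model=%s rotational=%s\n' "$name" "$model" "$rot"
done
```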
On 28/01/2019 at 15:03, Saurabh Nanda wrote:
An update. It seems (to my untrained eye) that something is wrong with
the second SSD in the RAID configuration. Here's my question on
serverfault related to what I saw with iostat -
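If the iostat numbers themselves are in doubt, the raw counters it reads come straight from /proc/diskstats; a quick sketch that prints completed reads/writes per device, which should be roughly symmetric across the two members of a RAID-1 pair:

```shell
#!/bin/sh
# /proc/diskstats fields: 3 = device name, 4 = reads completed,
# 8 = writes completed. Healthy RAID-1 members should show similar
# write counts.
awk '{ printf "%-10s reads=%-12s writes=%s\n", $3, $4, $8 }' /proc/diskstats
```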
All this benchmarking has led me to a philosophical question, why does PG
need shared_buffers in the first place? What's wrong with letting the OS do
the caching/buffering? Isn't it optimised for this kind of stuff?
>
>
> You could also try pg_test_fsync to get low-level information, to
> supplement the high level you get from pgbench.
>
> Thanks for pointing me to this tool. Never knew pg_test_fsync existed!
> I've run `pg_test_fsync -s 60` two times and this is the output -
>
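Comparing two long pg_test_fsync runs by eye is tedious, so a small filter helps; the here-doc below is made-up sample data for illustration, only the "ops/sec" line format matches what the tool actually prints:

```shell
#!/bin/sh
# Extract sync method names and ops/sec figures from pg_test_fsync
# output. The here-doc is illustrative sample data, not measurements.
extract() {
    awk '/ops\/sec/ { print $1, $(NF-1) }'
}
extract <<'EOF'
        open_datasync                       244.147 ops/sec
        fdatasync                           241.981 ops/sec
        fsync                               231.558 ops/sec
EOF
```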
> Do you know which of the settings is causing lower TPS?
> I suggest checking shared_buffers.
>
I'm trying to find this, but it's taking a lot of time to re-run the
benchmarks, changing one config setting at a time. Thanks for the tip
related to shared_buffers.
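One way to make the one-setting-at-a-time sweep less manual is a driver loop; the sketch below only echoes the commands it would run, and the settings list and -T duration are placeholders (note that only session-settable parameters like these work via PGOPTIONS):

```shell
#!/bin/sh
# Dry-run driver: print one pgbench invocation per config override.
# Replace "echo" with the real command once the list looks right.
for opt in \
    "work_mem=64MB" \
    "random_page_cost=1.1" \
    "synchronous_commit=off"; do
    echo PGOPTIONS=\"-c $opt\" pgbench -T 600 -P 10
done
```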
>
> If you haven't done
On Sun, Jan 27, 2019 at 01:09:16PM +0530, Saurabh Nanda wrote:
> It seems that PGOPTIONS="-c synchronous_commit=off" has a significant
> impact. However, I still cannot understand why the TPS for the optimised
> case is LOWER than the default for higher concurrency levels!
Do you know which of the settings is causing lower TPS?
>
>
> PGOPTIONS="-c synchronous_commit=off" pgbench -T 3600 -P 10
>
>
> I am currently running all my benchmarks with synchronous_commit=off and
> will get back with my findings.
>
It seems that PGOPTIONS="-c synchronous_commit=off" has a significant
impact. However, I still cannot understand why the TPS for the optimised
case is LOWER than the default for higher concurrency levels!
Is there any material on how to benchmark Postgres meaningfully? I'm
getting very frustrated with the numbers that `pgbench` is reporting:
-- allocating more resources to Postgres seems to be randomly dropping
performance
-- there seems to be no repeatability in the benchmarking numbers [1]
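One way to quantify the (lack of) repeatability is to run the identical pgbench invocation several times and look at the spread of the reported TPS; the figures below are placeholders fed through an awk mean/stddev filter:

```shell
#!/bin/sh
# Compute mean and (population) standard deviation of TPS figures,
# one per line. The numbers here are illustrative placeholders, not
# real pgbench results.
printf '%s\n' 4123 3980 4410 3555 4201 |
awk '{ s += $1; ss += $1 * $1; n++ }
     END { m = s / n; printf "mean=%.1f stddev=%.1f\n", m, sqrt(ss / n - m * m) }'
```

A stddev that is a sizeable fraction of the mean, as in runs like these, is a sign the benchmark itself is noisy before any config comparison can be trusted.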