On Mon, Oct 10, 2016 at 9:44 AM, Bruce Momjian wrote:
> On Thu, Oct 6, 2016 at 02:38:56PM -0400, Robert Haas wrote:
>> I decided to do some testing on hydra (IBM-provided community
>> resource, POWER, 16 cores/64 threads, kernel 3.2.6-3.fc16.ppc64) using
>> the newly-enhanced wait event stuff to
On Thu, Oct 6, 2016 at 02:38:56PM -0400, Robert Haas wrote:
> Hi,
>
> I decided to do some testing on hydra (IBM-provided community
> resource, POWER, 16 cores/64 threads, kernel 3.2.6-3.fc16.ppc64) using
> the newly-enhanced wait event stuff to try to get an idea of what
> we're waiting for duri
On Fri, Oct 7, 2016 at 8:51 AM, Jeff Janes wrote:
>
> I think we need to come up with some benchmarking queries which get more
> work done per round-trip to the database. And build them into the binary,
> because otherwise people won't use them as much as they should if they have
> to pass "-f" f
On Fri, Oct 7, 2016 at 11:14 PM, Amit Kapila wrote:
>
> > Another strategy that may work is actually intentionally
> > waiting/buffering a few ms between flushes/fsyncs,
>
> We do that before attempting to write if the user has set the
> "commit_delay" and "commit_siblings" GUC parameters.
>
If you
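For context on the GUCs Amit mentions: commit_delay tells a committing backend to pause briefly before flushing WAL when at least commit_siblings other transactions are active, so that concurrent commit records can share a single fsync. A minimal sketch of that decision, with the function name and bookkeeping made up for illustration (this is not PostgreSQL's actual code path):

```python
import time

def flush_wal(pending_commits, commit_delay_us=0, commit_siblings=5,
              active_transactions=0):
    """Sketch of the commit_delay idea: before fsyncing the WAL,
    optionally sleep so that other concurrent commits can piggyback
    on this flush. Parameters mirror the GUCs; everything else is
    simplified for illustration."""
    if commit_delay_us > 0 and active_transactions >= commit_siblings:
        # Wait briefly; other backends may append their commit records
        # meanwhile, letting one fsync cover several transactions.
        time.sleep(commit_delay_us / 1_000_000)
    # A single fsync now covers every commit gathered so far.
    return len(pending_commits)
```

The trade-off is latency for the delayed commit against fewer total fsyncs under concurrency, which is why the delay only kicks in when enough siblings are active.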
On Fri, Oct 7, 2016 at 1:28 PM, Robert Haas wrote:
> On Fri, Oct 7, 2016 at 11:51 AM, Jeff Janes wrote:
> > What happens if you turn fsync off? Once an xlog file is fully written,
> > it is immediately fsynced, even if the backend is holding WALWriteLock or
> > wal_insert (or both) at the time
On Fri, Oct 7, 2016 at 11:34 PM, Alfred Perlstein wrote:
>
> Are the batched writes all done before fsync is called?
>
In most cases, but if the WAL segment (16MB in size) fills up while
writing, we fsync it, then perform the remaining writes and fsync
again at the end.
> Are you sure that A
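The write path described above can be sketched as follows. The constant and function names are hypothetical, and real WAL writes involve far more bookkeeping; this only models when fsync is issued relative to segment boundaries:

```python
WAL_SEGMENT_SIZE = 16 * 1024 * 1024  # 16MB, the default WAL segment size

def write_batch(segment_offset, batch_len):
    """Sketch: if a batched write crosses the end of the current 16MB
    WAL segment, the filled segment is fsynced first, then the
    remaining bytes go to the next segment and are fsynced at the end.
    Returns the number of fsync calls issued."""
    fsyncs = 0
    offset = segment_offset
    remaining = batch_len
    ended_at_boundary = False
    while remaining > 0:
        chunk = min(WAL_SEGMENT_SIZE - offset, remaining)
        remaining -= chunk
        offset += chunk
        if offset == WAL_SEGMENT_SIZE:
            fsyncs += 1            # segment filled mid-batch: fsync now
            offset = 0
            ended_at_boundary = True
        else:
            ended_at_boundary = False
    if not ended_at_boundary:
        fsyncs += 1                # final fsync after the last write
    return fsyncs
```

So a batch that straddles a segment boundary pays two fsyncs, which is the behavior being described.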
On Fri, Oct 7, 2016 at 11:51 AM, Jeff Janes wrote:
> What happens if you turn fsync off? Once an xlog file is fully written, it
> is immediately fsynced, even if the backend is holding WALWriteLock or
> wal_insert (or both) at the time, and even if synchronous_commit is off.
> Assuming this machi
On Fri, Oct 7, 2016 at 1:39 PM, Andres Freund wrote:
> On 2016-10-06 14:38:56 -0400, Robert Haas wrote:
>> Obviously, there's a vast increase in TPS, and the backends seem to
>> spend most of their time actually doing work. ClientRead is now the
>> overwhelmingly dominant wait event, although wal
On 10/7/16 10:42 AM, Andres Freund wrote:
> Hi,
> On 2016-10-06 20:52:22 -0700, Alfred Perlstein wrote:
> > This contention on WAL reminds me of another scenario I've heard about that
> > was similar.
> > To fix things, the first person to block would be responsible for wri
Hi,
On 2016-10-06 20:52:22 -0700, Alfred Perlstein wrote:
> This contention on WAL reminds me of another scenario I've heard about that
> was similar.
>
> To fix things, the first person to block would be responsible for
> writing out all buffers for anyone block
Hi,
On 2016-10-06 14:38:56 -0400, Robert Haas wrote:
> Obviously, there's a vast increase in TPS, and the backends seem to
> spend most of their time actually doing work. ClientRead is now the
> overwhelmingly dominant wait event, although wal_insert and
> WALWriteLock contention is clearly still
Hi,
On 2016-10-06 18:15:58 -0400, Robert Haas wrote:
> That's pretty tight, especially since I now notice Andres also left a
> postmaster running on this machine back in April, with
> shared_buffers=8GB.
Oops, sorry for that.
- Andres
On Thu, Oct 6, 2016 at 11:38 AM, Robert Haas wrote:
>
> Next, I tried lowering the scale factor to something that fits in
> shared buffers. Here are the results at scale factor 300:
>
>      14  Lock          | tuple
>      22  LWLockTranche | lock_manager
>      39  LWLockNamed   | WALBuf
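Counts like the listing above come from repeatedly sampling wait events and tallying the (type, event) pairs. An aggregation along these lines could produce that layout; the sample rows and formatting here are made up for illustration, and in practice the samples would come from polling pg_stat_activity:

```python
from collections import Counter

def tally_wait_events(samples):
    """Aggregate sampled (wait_event_type, wait_event) pairs into a
    count-per-event listing, most frequent first. `samples` is a list
    of tuples standing in for repeated pg_stat_activity polls."""
    lines = []
    for (etype, event), n in Counter(samples).most_common():
        lines.append(f"{n:7d} {etype:<14}| {event}")
    return lines

# Hypothetical sample data mimicking the shape of the listing above:
rows = ([("Lock", "tuple")] * 14
        + [("LWLockTranche", "lock_manager")] * 22)
for line in tally_wait_events(rows):
    print(line)
```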
Robert,
This contention on WAL reminds me of another scenario I've heard about
that was similar.
To fix things, the first person to block would be responsible for
writing out all buffers for anyone blocked behind "him".
Then, for example, if you have many thr
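The scheme Alfred describes resembles leader-based group commit: the first thread to reach the flush writes out everything queued behind it, so followers that arrive meanwhile get their buffers written without issuing a flush of their own. A minimal threaded sketch, with all names hypothetical:

```python
import threading

class GroupFlusher:
    """Sketch of the scheme above: whoever acquires the flush lock
    first becomes the leader and writes out every buffer queued so
    far, on behalf of the threads blocked behind it."""
    def __init__(self):
        self.mutex = threading.Lock()       # protects the queue
        self.flush_lock = threading.Lock()  # only one flusher at a time
        self.pending = []                   # buffers awaiting flush
        self.flushed = []                   # stands in for durable storage
        self.flush_count = 0                # actual flushes performed

    def enqueue(self, buf):
        with self.mutex:
            self.pending.append(buf)

    def commit(self, buf):
        self.enqueue(buf)
        # The first thread to get the flush lock drains the whole
        # queue, so one flush covers its own buffer plus everything
        # queued by threads blocked behind it.
        with self.flush_lock:
            with self.mutex:
                batch, self.pending = self.pending, []
            if batch:
                self.flushed.extend(batch)
                self.flush_count += 1
```

Under contention, followers arrive while the leader holds the flush lock, find the queue already drained when their turn comes, and skip the write entirely.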
On Fri, Oct 7, 2016 at 3:38 AM, Robert Haas wrote:
> I decided to do some testing on hydra (IBM-provided community
> resource, POWER, 16 cores/64 threads, kernel 3.2.6-3.fc16.ppc64) using
> the newly-enhanced wait event stuff to try to get an idea of what
> we're waiting for during pgbench. I did
On Thu, Oct 6, 2016 at 4:40 PM, Jeff Janes wrote:
> Scale factor 3000 obviously doesn't fit in shared_buffers. But does it fit
> in RAM? That is, are the backends doing real IO, or are they just doing fake IO
> to the kernel's fs cache?
That's a good question.
[rhaas@hydra ~]$ free -g
On Thu, Oct 6, 2016 at 11:38 AM, Robert Haas wrote:
> Hi,
>
> I decided to do some testing on hydra (IBM-provided community
> resource, POWER, 16 cores/64 threads, kernel 3.2.6-3.fc16.ppc64) using
> the newly-enhanced wait event stuff to try to get an idea of what
> we're waiting for during pgben
Hi,
I decided to do some testing on hydra (IBM-provided community
resource, POWER, 16 cores/64 threads, kernel 3.2.6-3.fc16.ppc64) using
the newly-enhanced wait event stuff to try to get an idea of what
we're waiting for during pgbench. I did 30-minute pgbench runs with
various configurations, bu