On 2017-10-20 11:54, Sokolov Yura wrote:
Hello,
On 2017-10-19 19:46, Andres Freund wrote:
On 2017-10-19 14:36:56 +0300, Sokolov Yura wrote:
> > + init_local_spin_delay();
>
> The way you moved this around has the disadvantage that we now do this -
> a number of writes - e
On 2017-10-26 22:01, Sokolov Yura wrote:
On 2017-09-27 14:46, Stas Kelvich wrote:
On 7 Sep 2017, at 18:58, Nikhil Sontakke <nikh...@2ndquadrant.com>
wrote:
Hi,
FYI all, wanted to mention that I am working on an updated version of
the latest patch that I plan to submit to a later CF.
BRIN index at all just "recommends the executor to scan that heap page".
Thus a BRIN index as a whole is just an optimisation (regardless of
whether it is `minmax` or `bloom`).
So that is OK.
--
Sokolov Yura
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
--
Sent via pgsql-hackers
    /* compute next bit h */
    h += d++;
    if (h >= filter->nbits) h -= filter->nbits;
    if (d == filter->nbits) d = 0;
}
Modulo of one 64-bit result by the two coprime numbers (`nbits` and
`nbits-1`) gives two good-quality functions usa
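The quoted loop is the tail of a double-hashing scheme: one 64-bit hash yields a start position and a step via the two coprime moduli (consecutive integers are always coprime). A minimal self-contained sketch of the idea follows; the names `bloom_filter` and `bloom_positions` are illustrative, not the patch's actual identifiers.

```c
#include <stdint.h>

typedef struct
{
    uint32_t nbits;             /* number of bits in the filter */
} bloom_filter;

/* Fill positions[0..k-1] with k bit positions derived from one 64-bit
 * hash: start h = hash % nbits, step d = hash % (nbits-1), advancing
 * exactly as in the quoted loop. */
static void
bloom_positions(const bloom_filter *filter, uint64_t hash,
                int k, uint32_t *positions)
{
    uint32_t h = (uint32_t) (hash % filter->nbits);
    uint32_t d = (uint32_t) (hash % (filter->nbits - 1));

    for (int i = 0; i < k; i++)
    {
        positions[i] = h;
        /* compute next bit h */
        h += d++;
        if (h >= filter->nbits) h -= filter->nbits;
        if (d == filter->nbits) d = 0;
    }
}
```

Because `nbits` and `nbits-1` are coprime, the step `d` never collapses the probe sequence onto a short cycle for any hash value.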
: "false" if called
`with-catalog-changes`
(ie need to call DecodePrepare), and "true" otherwise. With this
change, a catalog-changing two-phase transaction is decoded as a simple
one-phase transaction if `pg_logical_slot_get_changes` is called
without `with-catalog-changes`.
--
With r
Hello,
On 2017-10-19 19:46, Andres Freund wrote:
On 2017-10-19 14:36:56 +0300, Sokolov Yura wrote:
> > + init_local_spin_delay();
>
> The way you moved this around has the disadvantage that we now do this -
> a number of writes - even in the very common case where
On 2017-10-19 02:28, Andres Freund wrote:
On 2017-06-05 16:22:58 +0300, Sokolov Yura wrote:
Algorithm for LWLockWaitForVar is also refactored. New version is:
1. If the lock is not held by anyone, it exits immediately.
2. Otherwise it checks whether it can take the WaitList lock, because the
variable
Hi,
On 2017-10-19 03:03, Andres Freund wrote:
Hi,
On 2017-09-08 22:35:39 +0300, Sokolov Yura wrote:
 /*
  * Internal function that tries to atomically acquire the lwlock in the passed
- * in mode.
+ * in mode. If it could not grab the lock, it doesn't put the proc into the
+ * wait queue
On 2017-10-03 17:30, Sokolov Yura wrote:
Good day, hackers.
During hard workload sometimes a process reaches the deadlock timeout
even if no real deadlock occurred. It is easily reproducible with
pg_advisory_xact_lock on the same value + some time-consuming
operation (update) and many clients.
When
s, 38387.9 tps, lat 20.886 ms stddev 104.482
(autovacuum led to triggering the deadlock timeout,
but postgresql did not get stuck)
I believe the patch will positively affect other heavy workloads
as well, although I have not collected benchmarks.
`make check-world` passes with configured `--enable-tap-tests
--enab
3540 postmaster [.] ResourceArrayAdd
> +1,68%  1,66%  3282 postmaster [.] hash_uint32
> +1,65%  1,63%  3214 postmaster [.] hash_search_with_hash_value
> +1,64%  1,62%  3208 postmaster [.] CatalogCacheComputeHashValue
> +1,28%  1,26%  2498 postmaster [.] MemoryContextAllocZeroAligned
> +1,25%  1,24%  2446 postmaster [.] palloc0
>
> Any ideas why SearchCatCache is called too often?
>
>> Regards
>>
>> Pavel
>>
Looks like you've already collected a profile with call-graph, so you can
tell us where it was called from.
With regards,
--
Sokolov Yura aka funny_falcon
Good day, Kyotaro
On 2017-09-29 11:26, Kyotaro HORIGUCHI wrote:
Hello,
At Wed, 27 Sep 2017 14:28:37 +0300, Sokolov Yura
<funny.fal...@postgrespro.ru> wrote in
<90bb67da7131e6186b50897c4b0f0...@postgrespro.ru>
On 2017-09-12 11:28, Kyotaro HORIGUCHI wrote:
> Hello,
> At Wed,
were called, and also sleeps at least 1 ms.
I'm not sure about the correctness of my patch. Given that the test exists,
you may suggest better solutions, or improve this one.
I'll set the commitfest topic status to 'Needs review' assuming
my patch could be reviewed.
--
Sokolov Yura aka funny_falcon
Postgres
Hello, Claudio.
Thank you for the review and for confirming the improvement.
On 2017-09-23 01:12, Claudio Freire wrote:
On Tue, Sep 12, 2017 at 12:49 PM, Sokolov Yura
<funny.fal...@postgrespro.ru> wrote:
On 2017-07-21 13:49, Sokolov Yura wrote:
On 2017-05-17 17:46, Sokolov Yura wrote:
Alvaro H
On 2017-09-22 16:22, Sokolov Yura wrote:
On 2017-09-22 11:21, Masahiko Sawada wrote:
On Fri, Sep 22, 2017 at 4:16 PM, Kyotaro HORIGUCHI
<horiguchi.kyot...@lab.ntt.co.jp> wrote:
At Fri, 22 Sep 2017 15:00:20 +0900, Masahiko Sawada
<sawada.m...@gmail.c
Open question: what to do with index statistics? For simplicity this
patch skips updating stats (just returns NULL from btvacuumcleanup).
Probably it should detect the proportion of table changes, and not
skip scans if the table grows too much.
--
Sokolov Yura
Postgres Professional: https://postgrespr
On 2017-07-21 13:49, Sokolov Yura wrote:
On 2017-05-17 17:46, Sokolov Yura wrote:
Alvaro Herrera wrote 2017-05-15 20:13:
As I understand, these patches are logically separate, so putting
them
together in a single file isn't such a great idea. If you don't edit
the patches further
Hi, Jesper
Thank you for reviewing.
On 2017-09-08 18:33, Jesper Pedersen wrote:
Hi,
On 07/18/2017 01:20 PM, Sokolov Yura wrote:
I'm sending a rebased version with a couple of one-line tweaks.
(less skip_wait_list on shared lock, and don't update spin-stat on
acquiring)
I have been running
On 2017-09-06 16:36, Tom Lane wrote:
Sokolov Yura <funny.fal...@postgrespro.ru> writes:
On 2017-09-06 15:56, Tom Lane wrote:
The point I'm trying to make is that if tweaking generic.h improves
performance then it's an indicator of missed cases in the
less-generic
atomics code, and the
ggest to fix
generic atomic functions. Of course, gcc intrinsic gives more gain.
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
On 2017-09-06 14:54, Sokolov Yura wrote:
On 2017-09-06 07:23, Tom Lane wrote:
Jeff Janes <jeff.ja...@gmail.com> writes:
What scale factor and client count? How many cores per socket? It
looks
like Sokolov was just starting to see gains at 200 clients on 72
cores,
using -N trans
r generic atomic functions to make
all functions uniform.
It is bad style not to change worse (but correct) code into
better (and also correct) code just because you can't measure the
difference (in my opinion; perhaps I am mistaken).
(And yes: gcc intrinsics give more improvement.)
--
With regards,
e to determine whether to send a keepalive message. I also added a
> CHECK_FOR_INTERRUPTS call to the fast code path because otherwise walsender
> might ignore them for too long on large transactions.
>
> Thoughts?
>
On 2017-08-10 14:20 Sokolov Yura wrote:
> On 2017-08-09 16:23, Petr Jelinek w
://www.adjust.com/
And we had a production issue with pg_rewind, which copied huge textual
logs from pg_log (20GB each, because statements were logged for
statistics). It would be convenient to tell pg_rewind not to copy logs
too.
--
Sokolov Yura
Postgres Professional: https://postgrespro.ru
The Russian
esql.org/14/1166/
It positively affects performance on any platform (I've tested it
on Power as well).
And I have a prototype of further adaptation of that patch for the POWER
platform (ie using inline assembly) (not published yet).
--
Sokolov Yura
Postgres Professional: https://postgrespro.ru
The Russian
.
--
With regards,
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
From 5851f01de6b793be5e682c67db8db339667be92d Mon Sep 17 00:00:00 2001
From: Sokolov Yura <funny.fal...@postgrespro.ru>
Date: Mon, 10 Jul 2017 12:34:48 +
Subject: [PATCH] Mak
p-array itself),
- its insert algorithm looks to be much more expensive (at least, it is
more complex in instruction count).
I think using simplehash here would lead to a much more invasive patch
than this ad-hoc table. And I believe it would be slower (and this place
is very performance-critical), thoug
On Thu, 10 Aug 2017 18:12:34 +0300
Alexander Korotkov <a.korot...@postgrespro.ru> writes:
> On Thu, Aug 10, 2017 at 3:30 PM, Sokolov Yura
> <funny.fal...@postgrespro.ru> wrote:
>
> > On 2017-07-31 18:56, Sokolov Yura wrote:
> >
> >> Good day,
On 2017-07-18 20:20, Sokolov Yura wrote:
On 2017-06-05 16:22, Sokolov Yura wrote:
Good day, everyone.
This patch improves performance of contended LWLock.
Patch makes lock acquiring in single CAS loop:
1. LWLock->state is read, and ability for lock acquiring is detec
On 2017-07-31 18:56, Sokolov Yura wrote:
Good day, every one.
In an attempt to improve performance of YCSB on zipfian distribution,
it was found that significant time is spent in XidInMVCCSnapshot
scanning the snapshot->xip array. While overall CPU time is not too
noticeable, it has measura
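To illustrate why scanning the xip array can be sped up: if the xid array is kept sorted, a membership test becomes a binary search instead of a linear scan. This is only a generic sketch of the idea under that assumption, not necessarily what the patch in this thread actually implements.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/* Binary search for xid in a sorted array of n transaction ids.
 * O(log n) versus the O(n) linear scan of an unsorted array. */
static bool
xid_in_sorted_array(TransactionId xid, const TransactionId *xip, size_t n)
{
    size_t lo = 0, hi = n;

    while (lo < hi)
    {
        size_t mid = lo + (hi - lo) / 2;

        if (xip[mid] < xid)
            lo = mid + 1;
        else if (xip[mid] > xid)
            hi = mid;
        else
            return true;
    }
    return false;
}
```

The trade-off is the cost of keeping the array sorted (or sorting it lazily on first lookup), which only pays off when the same snapshot is probed many times.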
ill, I could be mistaken; it is just a suggestion).
Is it hard to add a test for the case this patch fixes?
With regards,
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
On 2017-07-27 11:53, Sokolov Yura wrote:
On 2017-07-26 20:28, Sokolov Yura wrote:
On 2017-07-26 19:46, Claudio Freire wrote:
On Wed, Jul 26, 2017 at 1:39 PM, Sokolov Yura
<funny.fal...@postgrespro.ru> wrote:
On 2017-07-24 12:41, Sokolov Yura wrote:
test_master_1/pretty.log
...
On 2017-07-26 20:28, Sokolov Yura wrote:
On 2017-07-26 19:46, Claudio Freire wrote:
On Wed, Jul 26, 2017 at 1:39 PM, Sokolov Yura
<funny.fal...@postgrespro.ru> wrote:
On 2017-07-24 12:41, Sokolov Yura wrote:
test_master_1/pretty.log
...
time activity tps latency stddev
On 2017-07-27 11:30, Masahiko Sawada wrote:
On Tue, Jul 25, 2017 at 2:27 AM, Claudio Freire
<klaussfre...@gmail.com> wrote:
On Mon, Jul 24, 2017 at 2:20 PM, Claudio Freire
<klaussfre...@gmail.com> wrote:
On Mon, Jul 24, 2017 at 2:10 PM, Sokolov Yura
<funny.fal...@postgr
On 2017-07-26 19:46, Claudio Freire wrote:
On Wed, Jul 26, 2017 at 1:39 PM, Sokolov Yura
<funny.fal...@postgrespro.ru> wrote:
On 2017-07-24 12:41, Sokolov Yura wrote:
test_master_1/pretty.log
...
time activity tps latency stddev min max
11130 av+ch 198
On 2017-07-24 19:11, Claudio Freire wrote:
On Mon, Jul 24, 2017 at 6:37 AM, Sokolov Yura
<funny.fal...@postgrespro.ru> wrote:
Good day, Claudio
On 2017-07-22 00:27, Claudio Freire wrote:
On Fri, Jul 21, 2017 at 2:41 PM, Sokolov Yura
<funny.fal...@postgrespro.ru> wrote:
My fr
On 2017-07-21 20:41, Sokolov Yura wrote:
On 2017-07-21 19:32, Robert Haas wrote:
On Fri, Jul 21, 2017 at 4:19 AM, Sokolov Yura
<funny.fal...@postgrespro.ru> wrote:
Probably with increased ring buffer there is no need in raising
vacuum_cost_limit. Will you admit it?
No, I definitely
Good day, Claudio
On 2017-07-22 00:27, Claudio Freire wrote:
On Fri, Jul 21, 2017 at 2:41 PM, Sokolov Yura
<funny.fal...@postgrespro.ru> wrote:
My friend noticed that I didn't say why I bother with autovacuum.
Our customers suffer from table bloat. I've made a synthetic
bloatin
On 2017-07-21 19:32, Robert Haas wrote:
On Fri, Jul 21, 2017 at 4:19 AM, Sokolov Yura
<funny.fal...@postgrespro.ru> wrote:
Probably with increased ring buffer there is no need in raising
vacuum_cost_limit. Will you admit it?
No, I definitely won't admit that. With default se
could debate the exact constants.
regards, tom lane
Attached version is with min(shared_buffers/8/autovacuum_workers, 16MB).
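The sizing rule above can be sketched as a small helper. The function and constant names here are illustrative only (the actual patch presumably changes the buffer access strategy sizing in place); the 8kB block size matches PostgreSQL's default BLCKSZ.

```c
#include <stdint.h>

#define BLCKSZ          8192                    /* default page size */
#define MAX_RING_BYTES  (16 * 1024 * 1024)      /* 16MB cap */

/* Compute the autovacuum ring buffer size, in 8kB pages, as
 * min(shared_buffers / 8 / autovacuum_workers, 16MB). */
static uint32_t
autovacuum_ring_buffers(uint64_t shared_buffers_bytes, int autovacuum_workers)
{
    uint64_t ring = shared_buffers_bytes / 8 / autovacuum_workers;

    if (ring > MAX_RING_BYTES)
        ring = MAX_RING_BYTES;
    return (uint32_t) (ring / BLCKSZ);
}
```

For example, with 1GB of shared_buffers and 3 workers the uncapped share is about 43MB, so the 16MB cap (2048 pages) applies; with 64MB and 4 workers each worker gets 2MB (256 pages).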
With regards
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Compa
On 2017-05-17 17:46, Sokolov Yura wrote:
Alvaro Herrera wrote 2017-05-15 20:13:
As I understand, these patches are logically separate, so putting them
together in a single file isn't such a great idea. If you don't edit
the patches further, then you're all set because we already have
Probably with increased ring buffer there is no need in raising
vacuum_cost_limit. Will you admit it?
With regards,
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
ulting and development.
Advocate: @amplifypostgres || Learn: https://pgconf.us
* Unless otherwise stated, opinions are my own. *
Have you measured increased vacuum ring buffer?
This will require recompilation, though.
With regards,
--
Sokolov Yura
Postgres Professional: https://postgres
tgresql, will increase their
shared_buffers, and autovacuum will have its 16MB ring buffer.
With regards,
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
On 2017-07-20 17:59, Robert Haas wrote:
On Tue, Jul 18, 2017 at 6:09 AM, Sokolov Yura
<funny.fal...@postgrespro.ru> wrote:
I investigated autovacuum performance, and found that it suffers a lot
from the small ring buffer. It suffers in the same way the bulk writer
suffered before Tom Lane's
etting
'autovacuum_cost_delay=2ms').
This is a single-line change, and it improves things a lot.
With regards,
--
Sokolov Yura
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
On 2017-06-05 16:22, Sokolov Yura wrote:
Good day, everyone.
This patch improves performance of contended LWLock.
Patch makes lock acquiring in single CAS loop:
1. LWLock->state is read, and ability for lock acquiring is detected.
If there is possibility to take a lock, CAS tried.
If
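The single-CAS-loop acquire described in the two steps above can be sketched with C11 atomics. The flag layout and names here are invented for illustration; PostgreSQL's real LWLock state word and its pg_atomic API differ.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define LOCK_EXCLUSIVE ((uint32_t) 1 << 31)     /* illustrative flag */

/* Try to take the lock in a single CAS loop:
 * 1. the state is read and the ability to acquire is detected;
 * 2. if acquisition looks possible, a CAS is tried.  On CAS failure,
 *    compare_exchange itself refreshes 'old' with the current state,
 *    so no separate atomic re-read is needed. */
static bool
try_acquire_exclusive(_Atomic uint32_t *state)
{
    uint32_t old = atomic_load(state);

    for (;;)
    {
        if (old & LOCK_EXCLUSIVE)
            return false;       /* held: caller would take the wait path */

        if (atomic_compare_exchange_weak(state, &old,
                                         old | LOCK_EXCLUSIVE))
            return true;        /* we own the lock */
    }
}
```

The weak compare-exchange may fail spuriously, which is harmless here because the loop simply retries with the refreshed `old` value.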
nt.
Initially I wanted to make the BAS_BULKWRITE and BAS_VACUUM ring sizes
configurable, but after testing I don't see much gain from increasing the
ring buffer above 16MB. So I propose just a one-line change.
With regards,
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The
org/14/1166/
But they are both about LWLock itself, and not its usage.
--
Sokolov Yura
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
h?
Yes I do -- tomorrow or Thursday -- needs a little cleaning up ...
> Sokolov Yura has a patch which, to me, looks good for pgbench rw
> performance. Does not do so well with hammerdb (about the same as base)
> on single socket and two sockets.
Any idea why? I think we will have to und
that and together with that?

On 5 June 2017 at 11:11 PM, Sokolov Yura <y.soko...@postgrespro.ru> wrote:

Hi, Jim.
How do you ensure transaction order?
Example:
- You lock shard A and gather info. You find transaction T1 in-progress.
- Then you unlock shard A.
- T1 completes. T2, that depends on T1, also completes. But T2 was on shard B.
- You lock shard B, and gather info from it.
- You didn't see T2 as in
set it to the more conservative `spins_per_delay / 4`, but I have
not enough evidence in favor of either decision.
Though it is certainly better to have it larger than the
`spins_per_delay / 8` that is set for EXCLUSIVE lock.
With regards,
--
Sokolov Yura aka funny_falcon
Postgres Profess
356 | 78.1k | 90.1k
395 | 70.2k | 79.0k
434 | 61.6k | 70.7k
Also a graph with more points is attached.
On 2017-05-25 18:12, Sokolov Yura wrote:
Hello, Tom.
I agree that lonely semicolon looks bad.
Applied your suggestion for empty loop body (/* skip */).
Patch in first letter
in a loop body and it is the condition for loop exit, so it is clearly
just a loop.
The optimization is valid because compare_exchange always stores the old
value in the `old` variable in the same atomic manner as an atomic read.
Tom Lane wrote 2017-05-25 17:39:
Sokolov Yura <funny.fal...@postgrespro.ru> writes:
@@ -
, but the difference between
83000 tps and 86000 tps is not so obvious on a NUMA system).
With regards,
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
A bit cleaner version of a patch.
Sokolov Yura wrote 2017-05-25 15:22:
Good day, everyone.
I've been playing with pgbench on a huge machine.
(72 cores, 56 for postgresql, enough memory to fit the base
both into shared_buffers and the file cache)
(pgbench scale 500, unlogged tables, fsync=off,
synchronous
) it is compressed better by chunks.
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company

From b1bf73d8111b0620558542c34098e0fee5d19b58 Mon Sep 17 00:00:00 2001
From: Sokolov Yura aka funny_falcon <funny.fal...@gmail.com>
Date: Mon, 22 May 2017 11
with a minor improvement:
- I added detection of the case when all buckets are trivial
(ie 0 or 1 element). In this case there is no need to sort buckets at all.
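The trivial-buckets fast path can be sketched as follows. This is a generic illustration, not the patch's actual code; `NBUCKETS` and the value-to-bucket mapping are assumptions for the example.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define NBUCKETS 16

/* Distribute values into buckets by a monotonic mapping and report
 * whether every bucket ended up with 0 or 1 element.  If so, simply
 * concatenating the buckets in order already yields sorted output,
 * so no per-bucket sort is needed. */
static bool
buckets_all_trivial(const uint32_t *vals, size_t n, uint32_t max_val)
{
    uint8_t count[NBUCKETS];

    memset(count, 0, sizeof(count));
    for (size_t i = 0; i < n; i++)
    {
        /* monotonic map of value to a bucket index in [0, NBUCKETS) */
        size_t b = (size_t) (((uint64_t) vals[i] * NBUCKETS) /
                             ((uint64_t) max_val + 1));

        if (++count[b] > 1)
            return false;       /* some bucket holds >= 2 elements */
    }
    return true;
}
```

The check costs one pass and a tiny counter array; in the common case where items are already well spread, it skips the sort phase entirely.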
--
Sokolov Yura aka funny.falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company

From
Sokolov Yura wrote 2017-05-15 18:23:
Alvaro Herrera wrote 2017-05-15 18:04:
Please add these two patches to the upcoming commitfest,
https://commitfest.postgresql.org/
Thank you for suggestion.
I've created https://commitfest.postgresql.org/14/1138/
As I could understand, I should attach
correctly in the commitfest topic, so I do it with this email.
Please correct me if I do something wrong.
With regards.
--
Sokolov Yura aka funny.falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company

From 2cde4cb6b0c4c5868d99e13789b0ac33364d7315 Mon Sep 17 00:00:00 2001
Sokolov Yura wrote 2017-05-15 15:08:
Heikki Linnakangas wrote 2017-05-15 12:06:
On 05/14/2017 09:47 PM, Sokolov Yura wrote:
Good day, everyone.
I've been playing a bit with unlogged tables - just random updates on a
simple key-value table. I've noticed the amount of cpu spent
Heikki Linnakangas wrote 2017-05-15 12:06:
On 05/14/2017 09:47 PM, Sokolov Yura wrote:
Good day, everyone.
I've been playing a bit with unlogged tables - just random updates on a
simple key-value table. I've noticed the amount of cpu spent in
compactify_tuples
(called by PageRepairFragmentation
degenerate test case.
But probably this improvement still makes sense.
With regards,
--
Sokolov Yura aka funny.falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company

From f8ed235dbb60a79b4f59dc8a4af014b2ca698772 Mon Sep 17 00:00:00 2001
From: Sokolov Yura aka funny_falcon
Tom Lane wrote:
Joe Conway [EMAIL PROTECTED] writes:
Did you want me to work on this? I could probably put some time into it
this coming weekend.
I'll try to get to it before that --- if no serious bugs come up this
week, core is thinking of wrapping 8.3.0 at the end of the week, so
Sorry for previous message having no comments.
Just a remark:
These aggregates were created successfully in both 8.2 and 8.3beta4:

CREATE AGGREGATE array_concat(anyarray) (
    SFUNC = array_cat,
    STYPE = anyarray
);

CREATE AGGREGATE array_build(anyelement) (
    SFUNC = array_append,
    STYPE = anyarray
);

But
On Tue, 28 Feb 2006 01:04:00 -0500, Tom Lane wrote
Jim C. Nasby jnasby ( at ) pervasive ( dot ) com writes:
On Mon, Feb 27, 2006 at 03:05:41PM -0500, Tom Lane wrote:
Moreover, you haven't pointed to any strong reason to adopt this
methodology. It'd only be a win when vacuuming pretty small
equivalent tables we have a 66% shorter time.
What will it be in the case of three, four ...?
--
Sokolov Yura mailto:[EMAIL PROTECTED]