On 2017-10-20 11:54, Sokolov Yura wrote:
Hello,
On 2017-10-19 19:46, Andres Freund wrote:
On 2017-10-19 14:36:56 +0300, Sokolov Yura wrote:
> > + init_local_spin_delay(&delayStatus);
>
> The way you moved this around has the disadvantage that we now do this -
> a numb
On 2017-10-26 22:01, Sokolov Yura wrote:
On 2017-09-27 14:46, Stas Kelvich wrote:
On 7 Sep 2017, at 18:58, Nikhil Sontakke
wrote:
Hi,
FYI all, wanted to mention that I am working on an updated version of
the latest patch that I plan to submit to a later CF.
Cool!
So what kind of
then scans
heap-pages to find necessary tuples.
A BRIN index just "recommends that the executor scan that heap page".
Thus a BRIN index on the whole is just an optimisation (regardless of
whether it is `minmax` or `bloom`).
So that is ok.
--
Sokolov Yura
Postgres Professional: https://postgrespro.ru
The Ru
for (i = 0; i < filter->nhashes; i++)
{
    /* set or check bit h */

    /* compute next bit h */
    h += d++;
    if (h >= filter->nbits)
        h -= filter->nbits;
    if (d == filter->nbits)
        d = 0;
}
Modulo of one 64bit re
with-catalog-changes`
(i.e. it needs to call DecodePrepare), and "true" otherwise. With this
change, a catalog-changing two-phase transaction is decoded as a simple
one-phase transaction if `pg_logical_slot_get_changes` is called
without `with-catalog-changes`.
--
With regards,
Sokolov Yura
Postgr
Hello,
On 2017-10-19 19:46, Andres Freund wrote:
On 2017-10-19 14:36:56 +0300, Sokolov Yura wrote:
> > + init_local_spin_delay(&delayStatus);
>
> The way you moved this around has the disadvantage that we now do this -
> a number of writes - even in the very common ca
On 2017-10-19 02:28, Andres Freund wrote:
On 2017-06-05 16:22:58 +0300, Sokolov Yura wrote:
The algorithm for LWLockWaitForVar is also refactored. The new version is:
1. If the lock is not held by anyone, it exits immediately.
2. Otherwise it checks whether it can take the WaitList lock, because
the variable
Hi,
On 2017-10-19 03:03, Andres Freund wrote:
Hi,
On 2017-09-08 22:35:39 +0300, Sokolov Yura wrote:
/*
* Internal function that tries to atomically acquire the lwlock in
the passed
- * in mode.
+ * in mode. If it could not grab the lock, it doesn't put the proc into
wait
+ *
On 2017-10-03 17:30, Sokolov Yura wrote:
Good day, hackers.
Under heavy workload a process sometimes reaches the deadlock timeout
even though no real deadlock occurred. It is easily reproducible with
pg_xact_advisory_lock on the same value plus some time-consuming
operation (an update) and many clients.
When
70.0 s, 38387.9 tps, lat 20.886 ms stddev 104.482
(autovacuum triggered the deadlock timeout,
but postgresql did not get stuck)
I believe the patch will positively affect other heavy workloads
as well, although I have not collected benchmarks.
`make check-world` passes with configured `--enable-tap-tests
-
1,66% 3282 postmaster [.] hash_uint32
>+1,65% 1,63% 3214 postmaster [.] hash_search_with_hash_value
>+1,64% 1,62% 3208 postmaster [.] CatalogCacheComputeHashValue
>+1,28% 1,26% 2498 postmaster [.] MemoryContextAllocZeroAligned
>+1,25% 1,24% 2446 postmaster [.] palloc0
>
>Any ideas why SearchCatCache is called so often?
>
>
>
>> Regards
>>
>> Pavel
>>
Looks like you've already collected a profile with a call-graph, so you can tell us
where it was called from.
With regards,
--
Sokolov Yura aka funny_falcon
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Good day, Kyotaro
On 2017-09-29 11:26, Kyotaro HORIGUCHI wrote:
Hello,
At Wed, 27 Sep 2017 14:28:37 +0300, Sokolov Yura
wrote in
<90bb67da7131e6186b50897c4b0f0...@postgrespro.ru>
On 2017-09-12 11:28, Kyotaro HORIGUCHI wrote:
> Hello,
> At Wed, 06 Sep 2017 13:46:16 +,
first iteration after
send_data was called, and it also sleeps at least 1 ms.
I'm not sure about the correctness of my patch. Given that the test exists,
you may suggest better solutions, or improve this one.
I'll set the commitfest topic status to 'Needs review' assuming
my patch
Hello, Claudio.
Thank you for the review and for confirming the improvement.
On 2017-09-23 01:12, Claudio Freire wrote:
On Tue, Sep 12, 2017 at 12:49 PM, Sokolov Yura
wrote:
On 2017-07-21 13:49, Sokolov Yura wrote:
On 2017-05-17 17:46, Sokolov Yura wrote:
Alvaro Herrera wrote on 2017-05-15 20:13:
As
On 2017-09-22 16:22, Sokolov Yura wrote:
On 2017-09-22 11:21, Masahiko Sawada wrote:
On Fri, Sep 22, 2017 at 4:16 PM, Kyotaro HORIGUCHI
wrote:
At Fri, 22 Sep 2017 15:00:20 +0900, Masahiko Sawada
wrote in
On Tue, Sep 19, 2017 at 3:31 PM, Kyotaro HORIGUCHI
wrote:
> I was just looking
s deleted, lazy scans start to
be skipped.
Open question: what to do with index statistics? For simplicity this
patch skips updating stats (it just returns NULL from btvacuumcleanup).
It should probably detect the proportion of table changes, and not
skip scans if the table grows too much.
--
Sokolov Yura
On 2017-07-21 13:49, Sokolov Yura wrote:
On 2017-05-17 17:46, Sokolov Yura wrote:
Alvaro Herrera wrote on 2017-05-15 20:13:
As I understand, these patches are logically separate, so putting
them
together in a single file isn't such a great idea. If you don't edit
the patches fur
Hi, Jesper
Thank you for reviewing.
On 2017-09-08 18:33, Jesper Pedersen wrote:
Hi,
On 07/18/2017 01:20 PM, Sokolov Yura wrote:
I'm sending a rebased version with a couple of one-line tweaks
(less skip_wait_list on shared lock, and no spin-stat update on
acquiring).
I have be
On 2017-09-06 16:36, Tom Lane wrote:
Sokolov Yura writes:
On 2017-09-06 15:56, Tom Lane wrote:
The point I'm trying to make is that if tweaking generic.h improves
performance then it's an indicator of missed cases in the
less-generic
atomics code, and the latter is where our attent
functions. Of course, the gcc intrinsic gives more gain.
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
On 2017-09-06 14:54, Sokolov Yura wrote:
On 2017-09-06 07:23, Tom Lane wrote:
Jeff Janes writes:
What scale factor and client count? How many cores per socket? It
looks
like Sokolov was just starting to see gains at 200 clients on 72
cores,
using -N transaction.
This means that Sokolov
mic functions to make
all functions uniform.
It is bad style not to change worse (but correct) code into
better (and also correct) code just because you can't measure the
difference (in my opinion; perhaps I am mistaken).
(And yes: the gcc intrinsic gives more improvement.)
--
With regards,
Sokolov
> that we use to determine whether to send a keepalive message. I also added
> a CHECK_FOR_INTERRUPTS call to the fast code path because otherwise walsender
> might ignore interrupts for too long on large transactions.
>
> Thoughts?
>
On 2017-08-10 14:20 Sokolov Yura wrote:
> On 2017-08-09 16:23,
://www.adjust.com/
And we had a production issue with pg_rewind, which copied huge text
logs from pg_log (20GB each, because statements were logged for
statistics). It would be convenient to be able to tell pg_rewind not to
copy logs too.
--
Sokolov Yura
Postgres Professional: https://postgrespro.ru
The Russian
t positively affects performance on any platform (I've tested it
on Power as well).
And I have a prototype of a further adaptation of that patch for the POWER
platform (i.e. using inline assembly), not yet published.
--
Sokolov Yura
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
.
--
With regards,
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
From 5851f01de6b793be5e682c67db8db339667be92d Mon Sep 17 00:00:00 2001
From: Sokolov Yura
Date: Mon, 10 Jul 2017 12:34:48 +
Subject: [PATCH] Make hash table for xip in
rt algorithm looks to be much more expensive (at least, it is
more complex in instruction count).
I think using simplehash here will lead to a much more invasive patch
than this ad-hoc table. And I believe it will be slower (and this place
is very performance-critical), though I will not bet
On Thu, 10 Aug 2017 18:12:34 +0300,
Alexander Korotkov writes:
> On Thu, Aug 10, 2017 at 3:30 PM, Sokolov Yura
> wrote:
>
> > On 2017-07-31 18:56, Sokolov Yura wrote:
> >
> >> Good day, every one.
> >>
> >> In attempt to improve performance of
On 2017-07-18 20:20, Sokolov Yura wrote:
On 2017-06-05 16:22, Sokolov Yura wrote:
Good day, everyone.
This patch improves the performance of contended LWLock.
The patch makes lock acquisition a single CAS loop:
1. LWLock->state is read, and the ability to acquire the lock is detected.
If there
On 2017-07-31 18:56, Sokolov Yura wrote:
Good day, every one.
In an attempt to improve the performance of YCSB on a zipfian distribution,
it was found that significant time is spent in XidInMVCCSnapshot
scanning the snapshot->xip array. While the overall CPU time is not too
noticeable, it has measura
return;
+ }
(Still, I could be mistaken; it is just a suggestion.)
Is it hard to add a test for the case this patch fixes?
With regards,
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
On 2017-07-27 11:53, Sokolov Yura wrote:
On 2017-07-26 20:28, Sokolov Yura wrote:
On 2017-07-26 19:46, Claudio Freire wrote:
On Wed, Jul 26, 2017 at 1:39 PM, Sokolov Yura
wrote:
On 2017-07-24 12:41, Sokolov Yura wrote:
test_master_1/pretty.log
...
time activity tps latency stddev
On 2017-07-26 20:28, Sokolov Yura wrote:
On 2017-07-26 19:46, Claudio Freire wrote:
On Wed, Jul 26, 2017 at 1:39 PM, Sokolov Yura
wrote:
On 2017-07-24 12:41, Sokolov Yura wrote:
test_master_1/pretty.log
...
time activity tps latency stddev min max
11130 av+ch
On 2017-07-27 11:30, Masahiko Sawada wrote:
On Tue, Jul 25, 2017 at 2:27 AM, Claudio Freire
wrote:
On Mon, Jul 24, 2017 at 2:20 PM, Claudio Freire
wrote:
On Mon, Jul 24, 2017 at 2:10 PM, Sokolov Yura
wrote:
On 2017-07-24 19:11, Claudio Freire wrote:
I was mostly thinking about something
On 2017-07-26 19:46, Claudio Freire wrote:
On Wed, Jul 26, 2017 at 1:39 PM, Sokolov Yura
wrote:
On 2017-07-24 12:41, Sokolov Yura wrote:
test_master_1/pretty.log
...
time activity tps latency stddev min max
11130  av+ch  198  198ms  374ms  7ms  1956ms
On 2017-07-24 19:11, Claudio Freire wrote:
On Mon, Jul 24, 2017 at 6:37 AM, Sokolov Yura
wrote:
Good day, Claudio
On 2017-07-22 00:27, Claudio Freire wrote:
On Fri, Jul 21, 2017 at 2:41 PM, Sokolov Yura
wrote:
My friend noticed that I didn't say why I bother with autovacuum
On 2017-07-21 20:41, Sokolov Yura wrote:
On 2017-07-21 19:32, Robert Haas wrote:
On Fri, Jul 21, 2017 at 4:19 AM, Sokolov Yura
wrote:
Probably with an increased ring buffer there is no need to raise
vacuum_cost_limit. Will you admit it?
No, I definitely won't admit that. With de
Good day, Claudio
On 2017-07-22 00:27, Claudio Freire wrote:
On Fri, Jul 21, 2017 at 2:41 PM, Sokolov Yura
wrote:
My friend noticed that I didn't say why I bother with autovacuum.
Our customers suffer from table bloat. I've made a synthetic
bloat test, and started experi
On 2017-07-21 19:32, Robert Haas wrote:
On Fri, Jul 21, 2017 at 4:19 AM, Sokolov Yura
wrote:
Probably with an increased ring buffer there is no need to raise
vacuum_cost_limit. Will you admit it?
No, I definitely won't admit that. With default settings autovacuum
won't write
we could debate the exact constants.
regards, tom lane
Attached version is with min(shared_buffers/8/autovacuum_workers, 16MB).
With regards
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
From 8ebd5e7eb498fdc75fc7b724ac
On 2017-05-17 17:46, Sokolov Yura wrote:
Alvaro Herrera wrote on 2017-05-15 20:13:
As I understand, these patches are logically separate, so putting them
together in a single file isn't such a great idea. If you don't edit
the patches further, then you're all set because we a
aising vacuum_cost_limit fixes that problem just
fine.
Probably with an increased ring buffer there is no need to raise
vacuum_cost_limit. Will you admit it?
With regards,
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
support, consulting and development.
Advocate: @amplifypostgres || Learn: https://pgconf.us
* Unless otherwise stated, opinions are my own. *
Have you measured an increased vacuum ring buffer?
That will require recompilation, though.
With regards,
--
Sokolov Yura
Postgres Professional:
o care about tuning its postgresql, will increase their
shared_buffers, and autovacuum will have its 16MB ring buffer.
With regards,
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
On 2017-07-20 17:59, Robert Haas wrote:
On Tue, Jul 18, 2017 at 6:09 AM, Sokolov Yura
wrote:
I investigated autovacuum performance, and found that it suffers a lot
from the small ring buffer. It suffers in the same way the bulk writer
suffered before Tom Lane's commit 6382448cf96:
Tom Lane 20
etting
'autovacuum_cost_delay=2ms').
This is a single-line change, and it improves things a lot.
With regards,
--
Sokolov Yura
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
On 2017-06-05 16:22, Sokolov Yura wrote:
Good day, everyone.
This patch improves the performance of contended LWLock.
The patch makes lock acquisition a single CAS loop:
1. LWLock->state is read, and the ability to acquire the lock is detected.
If there is a possibility to take the lock, a CAS is tried.
If
it is very important.
Initially I wanted to make the BAS_BULKWRITE and BAS_VACUUM ring sizes
configurable, but after testing I don't see much gain from increasing
the ring buffer above 16MB. So I propose just a one-line change.
With regards,
--
Sokolov Yura aka funny_falcon
Postgres Professional:
y are both about LWLock itself, and not its usage.
--
Sokolov Yura
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
ursday -- needs a little cleaning up ...
> Sokolov Yura has a patch which, to me, looks good for pgbench rw
> performance. Does not do so well with hammerdb (about the same as
base) on
> single socket and two socket.
Any idea why? I think we will have to understand *why* certain
thing
and together with that?
On 5 June 2017, 11:11 PM, Sokolov Yura wrote:
Hi, Jim.
How do you ensure transaction order? Example:
- You lock shard A and gather info. You find transaction T1 in progress.
- Then you unlock shard A.
- T1 completes. T2, which depends on T1, also completes. But T2 was
Hi, Jim.
How do you ensure transaction order? Example:
- You lock shard A and gather info. You find transaction T1 in progress.
- Then you unlock shard A.
- T1 completes. T2, which depends on T1, also completes. But T2 was on shard B.
- You lock shard B, and gather info from it.
- You didn't see T2 as in p
ably it is
better to set it to a more conservative `spins_per_delay / 4`, but I don't
have enough evidence in favor of either decision.
Though it is certainly better to have it larger than the
`spins_per_delay / 8` that is set for the EXCLUSIVE lock.
With regards,
--
Sokolov Yura aka funny_fal
15.9k
356 | 78.1k | 90.1k
395 | 70.2k | 79.0k
434 | 61.6k | 70.7k
Also graph with more points attached.
On 2017-05-25 18:12, Sokolov Yura wrote:
Hello, Tom.
I agree that a lonely semicolon looks bad.
I applied your suggestion for the empty loop body (/* skip */).
Patch in first l
loop body and it is the
condition for loop exit, so it is clearly just a loop.
The optimization is valid because compare_exchange always stores the old
value in the `old` variable in the same atomic manner as an atomic read.
Tom Lane wrote 2017-05-25 17:39:
Sokolov Yura writes:
@@ -382,12 +358,8 @@ static inline
round 86000tps, but the difference between
83000tps and 86000tps is not so obvious on a NUMA system).
With regards,
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
A slightly cleaner version of the patch.
Sokolov Yura wrote on 2017-05-25 15:22:
Good day, everyone.
I've been playing with pgbench on a huge machine
(72 cores, 56 for postgresql, enough memory to fit the database
both into shared_buffers and the file cache)
(pgbench scale 500, unlogged tables, fsync=off,
synchr
compressed better by chunks.
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
From b1bf73d8111b0620558542c34098e0fee5d19b58 Mon Sep 17 00:00:00 2001
From: Sokolov Yura aka funny_falcon
Date: Mon, 22 May 2017 11:25:51 +0300
Subject: [PATCH
ing a new version of the first patch with a minor improvement:
- I added detection of the case when all buckets are trivial
(i.e. 0 or 1 elements). In this case there is no need to sort buckets at all.
--
Sokolov Yura aka funny.falcon
Postgres Professional: https://postgrespro.ru
Th
Sokolov Yura wrote on 2017-05-15 18:23:
Alvaro Herrera wrote on 2017-05-15 18:04:
Please add these two patches to the upcoming commitfest,
https://commitfest.postgresql.org/
Thank you for the suggestion.
I've created https://commitfest.postgresql.org/14/1138/
As far as I understand, I should a
show correctly in the commitfest topic, so I do it with this email.
Please correct me if I do something wrong.
With regards.
--
Sokolov Yura aka funny.falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
From 2cde4cb6b0c4c5868d99e13789b0ac33364d7315 Mon Sep 17 00:00:00
Sokolov Yura wrote on 2017-05-15 15:08:
Heikki Linnakangas wrote on 2017-05-15 12:06:
On 05/14/2017 09:47 PM, Sokolov Yura wrote:
Good day, everyone.
I've been playing a bit with unlogged tables - just random updates on a
simple key-value table. I've noticed the amount of cpu
Heikki Linnakangas wrote on 2017-05-15 12:06:
On 05/14/2017 09:47 PM, Sokolov Yura wrote:
Good day, everyone.
I've been playing a bit with unlogged tables - just random updates on a
simple key-value table. I've noticed the amount of cpu spent in
compactify_tuples
(called by PageRepareFr
.165
I understand that it is quite a degenerate test case.
But this improvement probably still makes sense.
With regards,
--
Sokolov Yura aka funny.falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
From f8ed235dbb60a79b4f59dc8a4af014b2ca698772 Mon Sep 17 00:00:00 2001
From:
Sorry for the previous message having no comments.
Just a remark:
These aggregates were created successfully in both 8.2 and 8.3beta4:
CREATE AGGREGATE array_concat(anyarray) (
SFUNC=array_cat,
STYPE=anyarray
);
CREATE AGGREGATE array_build(anyelement) (
SFUNC=array_append,
STYPE=anyarray
);
But aggregate
Tom Lane wrote:
Joe Conway <[EMAIL PROTECTED]> writes:
Did you want me to work on this? I could probably put some time into it
this coming weekend.
I'll try to get to it before that --- if no serious bugs come up this
week, core is thinking of wrapping 8.3.0 at the end of the week, so
On Tue, 28 Feb 2006 01:04:00 -0500, Tom Lane wrote
"Jim C. Nasby" writes:
On Mon, Feb 27, 2006 at 03:05:41PM -0500, Tom Lane wrote:
Moreover, you haven't pointed to any strong reason to adopt this
methodology. It'd only be a win when vacuuming pretty small numbers
of tuples, which is not the d
4178.468..4195.689 rows=2621 loops=1)
-> Seq Scan on t (cost=0.00..4718.80 rows=262119 width=8)
(actual time=0.096..1944.151 rows=262099 loops=1)
Filter: (grp < 2621)
Total runtime: 4286.724 ms
(actual time ~812ms)
So, in the case of a union of two equival