would be just a ShareLock on the
transactionid. On standby it would wait for the commit or rollback
record for that transaction to be replayed.
Robert made a good point that people will still rely on the token
being an LSN, but perhaps they will be slightly less angry when we
explicitly tel
On Mon, Oct 23, 2017 at 12:29 PM, Ivan Kartyshov
wrote:
> Ants Aasma wrote on 2017-09-26 13:00:
>>
>> Exposing this interface as WAITLSN will encode that visibility order
>> matches LSN order. This removes any chance of fixing for example
>> visibility order of async
ccasionally used as a
planner "hint" to correct for row count overestimates. Not a great
solution, but PostgreSQL doesn't really have a better way to guide the
planner. Those queries will now have to do something else, like col =
col + 0, which still works.
Regards,
Ants Aasma
--
n tell these are not in mainline LLVM. Is there a branch
or patchset of LLVM available somewhere that I need to use this?
Regards,
Ants Aasma
ing it so
the token is an opaque commit visibility token that just happens to be
a LSN would still allow for progress in transaction management. For
example, making PostgreSQL distributed will likely want timestamp
and/or vector clock based visibility rules.
Regards,
Ants Aasma
0 minutes - making
the ratio of work between generating WAL and parsing it about 750:1.
Regards,
Ants Aasma
On Wed, Jan 18, 2017 at 4:33 PM, Merlin Moncure wrote:
> On Wed, Jan 18, 2017 at 4:11 AM, Ants Aasma wrote:
>> On Wed, Jan 4, 2017 at 5:36 PM, Merlin Moncure wrote:
>>> Still getting checksum failures. Over the last 30 days, I see the
>>> following. Since enabling
We use a
regular fast shutdown, but that can take a long time due to the
shutdown checkpoint. The leader lease may run out during this time so
we would have to escalate to immediate shutdown or have a watchdog
fence the node. If we knew that no user backends were left we could let
the shutdown check
.
Me neither, but it currently is, and it looks like that's broken in a
"silently corrupts your data" way in face of torn writes. Using OFB
mode (xor plaintext with pseudorandom stream for cipher) looks like it
might help here, if other approaches fail.
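For illustration, a minimal C sketch of the OFB structure (untested; block_encrypt() is only a placeholder standing in for the real cipher such as AES and is not itself secure):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 16

/* Placeholder keystream primitive: stands in for E_K(block), e.g. AES.
 * Not cryptographically secure; it only shows the data flow. */
static void
block_encrypt(const uint8_t key[BLOCK_SIZE], uint8_t block[BLOCK_SIZE])
{
    for (int i = 0; i < BLOCK_SIZE; i++)
        block[i] = (uint8_t) (block[i] * 131 + key[i] + i);
}

/* OFB: the keystream depends only on the key and IV, never on the data,
 * so encryption and decryption are the same XOR, and a torn write only
 * garbles the bytes that were actually torn. */
static void
ofb_xor(const uint8_t key[BLOCK_SIZE], const uint8_t iv[BLOCK_SIZE],
        uint8_t *buf, size_t len)
{
    uint8_t stream[BLOCK_SIZE];

    memcpy(stream, iv, BLOCK_SIZE);
    for (size_t off = 0; off < len; off += BLOCK_SIZE)
    {
        block_encrypt(key, stream);     /* next keystream block */
        size_t n = (len - off) < BLOCK_SIZE ? (len - off) : BLOCK_SIZE;
        for (size_t i = 0; i < n; i++)
            buf[off + i] ^= stream[i];
    }
}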
Regards,
Ants Aasma
ould. I don't know how many places we've got assumptions
> like this baked into the system, but I'm guessing there are a bunch.
I think we need to require wal_log_hints=on when encryption is
enabled. Currently I have not considered tearing on CLOG bits. Other
SLRUs probably have sim
On Mon, Jun 12, 2017 at 10:38 PM, Robert Haas wrote:
> On Mon, Jun 13, 2016 at 11:07 AM, Peter Eisentraut
> wrote:
>> On 6/7/16 9:56 AM, Ants Aasma wrote:
>>>
>>> Similar things can be achieved with filesystem level encryption.
>>> However this is not alw
terpreter is about 15% (5% speedup on a workload that spends 1/3 in
ExecInterpExpr). My idea of prefetching op->resnull/resvalue to local
vars before the indirect jump is somewhere between a tiny benefit and
no effect, certainly not worth introducing extra complexity. Clang 3.8
does the correct thi
On Fri, Feb 24, 2017 at 10:30 PM, Bruce Momjian wrote:
> On Fri, Feb 24, 2017 at 10:09:50PM +0200, Ants Aasma wrote:
>> On Fri, Feb 24, 2017 at 9:37 PM, Bruce Momjian wrote:
>> > Oh, that's why we will hopefully eventually change the page checksum
>> > al
On Fri, Feb 24, 2017 at 9:49 PM, Jim Nasby wrote:
> On 2/24/17 12:30 PM, Tomas Vondra wrote:
>>
>> In any case, we can't just build x86-64 packages with compile-time
>> SSE4.1 checks.
>
>
> Dumb question... since we're already discussing llvm for the executor, would
> that potentially be an option
apping out the current algorithm. And I don't really see
a reason to; it would introduce a load of headaches for no real gain.
Regards,
Ants Aasma
oth LLVM and GCC are capable of compiling the code that we have to a
vectorized loop using SSE4.1 or AVX2 instructions given the proper
compilation flags. This is exactly what was giving the speedup in the
test I showed in my e-mail.
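For illustration, a simplified sketch of the kind of column-parallel checksum loop meant here (the constant and the lack of per-column seeding are illustrative, not the shipped algorithm); compiled with gcc -O3 -msse4.1 or -mavx2 the inner loop auto-vectorizes because the 32 partial sums are independent:

#include <stdint.h>

#define N_SUMS 32
#define PRIME  0x49d29b4dU   /* illustrative multiplier, not the shipped constant */

/* Assumes nwords is a multiple of N_SUMS, which holds for a full page of
 * 4-byte words. */
uint32_t
page_checksum(const uint32_t *page, int nwords)
{
    uint32_t sums[N_SUMS] = {0};
    uint32_t result = 0;

    for (int i = 0; i < nwords; i += N_SUMS)
        for (int j = 0; j < N_SUMS; j++)
        {
            uint32_t tmp = sums[j] ^ page[i + j];

            sums[j] = tmp * PRIME ^ (tmp >> 17);
        }

    for (int j = 0; j < N_SUMS; j++)
        result ^= sums[j];
    return result;
}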
Regards,
Ants Aasma
On Fri, Feb 24, 2017 at 7:47 PM, Bruce Momjian wrote:
> On Sat, Jan 21, 2017 at 09:02:25PM +0200, Ants Aasma wrote:
>> > It might be worth looking into using the CRC CPU instruction to reduce this
>> > overhead, like we do for the WAL checksums. Since that is a different
>
ravasundaram/bairavasundaram.pdf
Regards,
Ants Aasma
t for minimizing data
loss.
Regards,
Ants Aasma
On Sat, Jan 21, 2017 at 10:16 PM, Michael Banck
wrote:
> On Sat, Jan 21, 2017 at 09:02:25PM +0200, Ants Aasma wrote:
>> On Sat, Jan 21, 2017 at 6:41 PM, Andreas Karlsson wrote:
>> > It might be worth looking into using the CRC CPU instruction to reduce this
>> > overh
tool for turning on checksums while
the database is offline. FWIW, based on customers and fellow
conference goers I have talked to most would gladly take the
performance hit, but not the downtime to turn it on on an existing
database.
Regards,
Ants Aasma
.
That said the actual checksum calculation was not a big issue for
performance. The only way to make it really matter was with a workload
larger than shared buffers but smaller than RAM, combined with tiny
per-page execution overhead. My test case was SELECT COUNT(*) on wide rows with
a small fill factor. Havin
On Thu, Jan 19, 2017 at 2:22 PM, Thomas Munro
wrote:
> On Thu, Jan 19, 2017 at 8:11 PM, Ants Aasma wrote:
>> On Tue, Jan 3, 2017 at 3:43 AM, Thomas Munro
>> wrote:
>>> Long term, I think it would be pretty cool if we could develop a set
>>> of features tha
r client state or a global agreement on
what snapshots are safe to provide, both of which you tried to avoid
for this feature.
Regards,
Ants Aasma
in data files surrounding the failed page. If the requested block
number contains something else entirely, but the page that follows
contains the expected checksum value, that would support this
theory.
Regards,
Ants Aasma
a test to go with it.
>
> You could probably just add to src/test/recover/t/001 which now
> contains my additions for hot standby.
Do you feel the test in the attached patch is enough or would you like
to see anything else covered?
Regards,
Ants Aasma
diff --git a/src/backend/replica
'd personally be
happier if it had a test to go with it.
You could probably just add to src/test/recover/t/001 which now
contains my additions for hot standby.
I'm travelling right now, but I should be able to give it a shot next week.
Regards,
Ants Aasma
On Wed, Dec 21, 2016 at 2:09 PM, Craig Ringer wrote:
> On 21 December 2016 at 15:40, Ants Aasma wrote:
>
>>> So -1 on this part of the patch, unless there's something I've
>>> misunderstood.
>>
>> Currently there was no feedback sent if hot stand
case.
However I did not consider cascading replica slots wanting to hold
back xmin, where resetting the parent's xmin is indeed wrong. Do you
know if GetOldestXmin() is safe at this point and we can just remove
the HotStandbyActive() check? Otherwise I think the correct approach
is to move the
.
A shell script to reproduce the problem is also attached, adjust the
PGPATH variable to your postgres install and run in an empty
directory.
Regards,
Ants Aasma
diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c
index cc3cf7d..31333ec 100644
--- a/src
ock numbers were used for writing out and reading in the
page. Either the blocknum gets corrupted between calculating the
checksum and writing the page out (unlikely given the proximity), or
the pages are somehow getting transposed in the storage.
Regards,
Ants Aasma
at about going even further than [1] in converting the executor to
being opcode based and merging projection and qual evaluation to a
single pass? Optimizer would then have some leeway about how to order
column extraction and qual evaluation. Might even be worth it to
special case some functions as
e visible. Simplest
solution is to not require CSN == LSN and just assign a CSN value
immediately before becoming visible.
Regards,
Ants Aasma
not unnecessarily conflate commit record
durability and transaction visibility ordering. Not having them tied
together allows for an external source to provide CSN values, allowing
for interesting distributed transaction implementations. E.g. using a
timestamp as the CSN à la Google Spanne
t; [1]. While effective it seems to be quite
heavy-weight, so would probably need support for tiered optimization.
[1]
https://courses.cs.washington.edu/courses/cse544/11wi/papers/markl-vldb-2005.pdf
Regards,
Ants Aasma
uffer for recent xids could be accessed lock
free, would take care most of the traffic in most of the cases. Looks
like it would be a good trade-off for complexity/performance.
* To keep workloads with wildly varying transaction lengths in a bounded
amount of memory, a significantly more c
On Mon, Jun 13, 2016 at 5:17 AM, Michael Paquier
wrote:
> On Sun, Jun 12, 2016 at 4:13 PM, Ants Aasma wrote:
>>> I feel separate file is better to include the key data instead of pg_control
>>> file.
>>
>> I guess that would be more flexible. However I th
.
I guess that would be more flexible. However I think at least the fact
that the database is encrypted should remain in the control file to
provide useful error messages for faulty backup procedures.
Thanks for your input.
Regards,
Ants Aasma
rameters, like key length
or different cipher primitive.
Regards,
Ants Aasma
diff --git a/contrib/pgcrypto/Makefile b/contrib/pgcrypto/Makefile
index 18bad1a..04ce887 100644
--- a/contrib/pgcrypto/Makefile
+++ b/contrib/pgcrypto/Makefile
@@ -20,7 +20,7 @@ SRCS = pgcrypto.c px.c px-hmac.c px-crypt.c
tead of a hypothetical
realvector or realmatrix did not prove to be a huge overhead, so
overall I'm on the fence about the usefulness of a special type. Maybe a
helper function or two to validate the additional restrictions in a
check constraint would be enough.
Regards,
Ants Aasma
ould test if it's
still possible to trigger starvation with the new code.
Regards,
Ants Aasma
tion scenarios the exclusive
lockers will get a chance to run once per starvation grace period.
That might still not be ideal or "fair", but it is a lot better than
the status quo of indefinitely blocking.
PS: if/when you are picking up the CSN work, ping me to write up some
of the insigh
On Wed, May 11, 2016 at 3:52 AM, Andres Freund wrote:
> On 2016-05-11 03:20:12 +0300, Ants Aasma wrote:
>> On Tue, May 10, 2016 at 7:56 PM, Robert Haas wrote:
>> > On Mon, May 9, 2016 at 8:34 PM, David Rowley
>> > wrote:
>> > I don't have any at the mo
roborated
by what I have seen from other VM implementations. Once you get
the data into a uniform format where vectorized execution can be
used, the CPU execution resources are no longer the bottleneck. Memory
bandwidth gets in the way, unless each input value is used in multiple
calculatio
stily slap on such a
feature, but I just wanted the thought to be out there for
consideration.
Regards,
Ants Aasma
On 5 May 2016 at 6:14 AM, "Andres Freund" wrote:
>
> On 2016-05-05 06:08:39 +0300, Ants Aasma wrote:
> > On 5 May 2016 1:28 a.m., "Andres Freund" wrote:
> > > On 2016-05-04 18:22:27 -0400, Robert Haas wrote:
> > > > How would the
e xids are
> assigned. That seems perfectly alright, but it'll change behaviour.
FWIW moving the maintenance to a clock tick process will not change user
visible semantics in any significant way. The change could easily be made
in the next release.
Regards,
Ants Aasma
SnapshotThresholdTimestamp
+ 10.47% TransactionIdLimitedForOldSnapshots
+ 0.71% TestForOldSnapshot_impl
+ 0.57% GetSnapshotCurrentTimestamp
Now this is kind of an extreme example, but I'm willing to bet that on
multi socket hosts similar issues can crop up with common real wor
er this task doesn't seem too bad and the consequence of falling
behind is just delayed timing out of old snapshots.
As far as I can see this approach would get rid of any scalability
issues, but it is a pretty significant change and requires 64bit
atomic reads to get rid of contention on x
On Thu, Apr 21, 2016 at 5:16 PM, Kevin Grittner wrote:
> On Wed, Apr 20, 2016 at 8:08 PM, Ants Aasma wrote:
>
>> However, while checking out if my proof of concept patch actually
>> works I hit another issue. I couldn't get my test for the feature to
>> actually wo
e. Based on documentation I
would expect the following:
* The interfering query gets cancelled
* The long running query gets to run
* Old rows will start to be cleaned up after the threshold expires.
However, testing on commit 9c75e1a36b6b2f3ad9f76ae661f42586c92c6f7c,
I'm seeing that the old rows do not get cl
whatever other serialisation format du jour. It will still
have the same backwards compatibility issues as adding the raw output,
but the payoff is greater.
Regards,
Ants Aasma
and use that to drive a simple loop. The code size
would be pretty similar to insertion sort and the loop overhead should
mostly be hidden by the CPU OoO machinery. Probably won't help much,
but would be interesting and simple enough to try out. Can you share
your code for the benchmark so I ca
k
a global lock would be good enough for a proof of concept that only
evaluates cache hit ratios.
Regards,
Ants Aasma
eSQL that would be capable of distinguishing interesting
variations from irrelevant ones doesn't seem like a feasible plan. In my
view the best we could do is to aim to have entries roughly correspond
to application query invocation points and leave the more complex
statistical analysis use cases t
ith a newer timestamp constant, and the rebuild
would be a lot faster if it could use the existing index to perform an
index only scan of 10% of data instead of scanning and sorting the
full table.
Ants Aasma
n the token, and then there are "dumb" clients that want to
use write side waits.
Also, it should be possible to configure which standbys are considered
for waiting on. Otherwise a reporting slave will occasionally catch up
enough to be considered "available" and then cause a latenc
referenced relation and remove the
xmin from procarray. Vacuum would access this map by relation, determine
the minimum and use that if it's earlier than the global xmin. I'm being
deliberately vague here about the datastructure in shared memory as I don't
have a great idea what to use there. It's somewhat similar to the lock
table in that in theory the size is unbounded, but in practice it's
expected to be relatively tiny.
Regards,
Ants Aasma
lations that are in the working set? Like a SET TRANSACTION WORKING
SET command. This way the error is deterministic, vacuum on the high
churn tables doesn't have to wait for the old transaction delay to
expire and we avoid a hard to tune GUC (what do I need to set
old_snapshot_threshold to,
ldn't be a good idea?
Are there any similar options for other platforms? Alternatively, does
anyone know of linker flags that would give a similar effect?
Regards,
Ants Aasma
On Fri, Nov 21, 2014 at 12:11 PM, Abhijit Menon-Sen
wrote:
> If anyone has other suggestions, I'm all ears.
Do you have a WIP patch I could take a look at and tweak? Maybe
there's something about the compiler's code generation that could be
improved.
Regards,
Ants Aasma
nd of xlog on old master at x2, x1 < x2, master will request
streaming at tli1.x2, wal sender does tliSwitchPoint(tli1) to lookup
x1, finds that x1 < x2 and gives the error "requested starting point
%X/%X on timeline %u is not in this server's history". The alignmen
re sequential integers, not GUID's, but at least it's significantly
harder.
Regards,
Ants Aasma
On Oct 15, 2014 7:32 PM, "Ants Aasma" wrote:
> I'm imagining a bucketized cuckoo hash with 5 item buckets (5-way
> associativity). This allows us to fit the bucket onto 2 regular sized
> cache lines and have 8 bytes left over. Buckets would be protected by
> seqlocks st
sing asymmetrically sized tables.
[1] https://www.cs.princeton.edu/~mfreed/docs/cuckoo-eurosys14.pdf
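For illustration, a C sketch of the bucket layout that arithmetic implies, assuming buffer-mapping entries of a 20-byte tag plus a 4-byte buffer id (field names are illustrative):

#include <stdint.h>

typedef struct BufTag
{
    uint32_t spcOid;            /* tablespace */
    uint32_t dbOid;             /* database */
    uint32_t relNumber;         /* relation */
    uint32_t forkNum;           /* fork */
    uint32_t blockNum;          /* block */
} BufTag;                       /* 20 bytes */

typedef struct BufEntry
{
    BufTag  tag;
    int32_t buf_id;
} BufEntry;                     /* 24 bytes */

typedef struct CuckooBucket
{
    uint64_t seqlock;           /* even = stable, odd = writer active */
    BufEntry entries[5];        /* 5-way associativity */
} CuckooBucket;                 /* 8 + 5 * 24 = 128 bytes = two cache lines */

_Static_assert(sizeof(CuckooBucket) == 128, "bucket should span two cache lines");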
Regards,
Ants Aasma
predicate (e.g. soft delete). AFAICS this is
currently not possible to implement correctly without a retry loop.
The hypothetical ON CONFLICT REPLACE and ON CONFLICT
UPDATE-AND-THEN-INSERT modes would also make sense in the unique index
case.
Not saying that I view this as necessary for the first c
blocknum first and possibly have better branch
prediction.
Do you have a workload where I could test if this helps alleviate the
comparison overhead?
Regards,
Ants Aasma
diff --g
On Tue, Sep 23, 2014 at 8:15 PM, Florian Weimer wrote:
> * Ants Aasma:
>
>> CRC has exactly one hardware implementation in general purpose CPU's
>
> I'm pretty sure that's not true. Many general purpose CPUs have CRC
> circuity, and there must be some which al
shing lookup tables.
If we choose to stay with CRC we must accept that we can only solve
the performance issues for Intel CPUs and provide slight alleviation
for others.
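For reference, using the instruction through the SSE4.2 intrinsic looks roughly like this (a sketch for x86-64 only, assuming a buffer whose length is a multiple of 8 bytes; it computes CRC-32C, the Castagnoli polynomial the instruction implements):

#include <stddef.h>
#include <stdint.h>
#include <nmmintrin.h>          /* SSE4.2 intrinsics; compile with -msse4.2 */

/* CRC-32C over a buffer whose length is a multiple of 8 bytes. */
uint32_t
crc32c_sse42(const void *data, size_t len)
{
    const uint64_t *p = data;
    uint64_t crc = 0xFFFFFFFF;

    for (size_t i = 0; i < len / 8; i++)
        crc = _mm_crc32_u64(crc, p[i]);

    return (uint32_t) crc ^ 0xFFFFFFFF;
}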
Regards,
Ants Aasma
CRC calculations for testing this
patch to see if the performance improvement is due to less data being
checksummed.
Regards,
Ants Aasma
usive state introducing
coherency traffic. Not locking the buffer only saves transferring the
cacheline back to the pinning backend, not a huge amount of savings.
Regards,
Ants Aasma
sort with the full sized array and then
compress it to a list of buffer IDs that need to be written out. This
way most of the time you only need a small array and the large array
is only needed for a fraction of a second.
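A rough C sketch of the sort-then-compress shape described above (types and field names are hypothetical, just to show the idea):

#include <stdint.h>
#include <stdlib.h>

/* Hypothetical per-buffer sort entry: file/block sort key plus buffer id. */
typedef struct SortEntry
{
    uint64_t file_key;          /* tablespace + relation, collapsed for the sketch */
    uint32_t block_num;
    int32_t  buf_id;            /* -1 for buffers not being written */
} SortEntry;

static int
cmp_entry(const void *a, const void *b)
{
    const SortEntry *x = a;
    const SortEntry *y = b;

    if (x->file_key != y->file_key)
        return x->file_key < y->file_key ? -1 : 1;
    if (x->block_num != y->block_num)
        return x->block_num < y->block_num ? -1 : 1;
    return 0;
}

/* Sort the full-sized scratch array, then compact it into a small array of
 * buffer ids; the big array is only needed for the duration of this call. */
static int
plan_checkpoint_writes(SortEntry *scratch, int nbuffers, int32_t *ids)
{
    int n = 0;

    qsort(scratch, (size_t) nbuffers, sizeof(SortEntry), cmp_entry);
    for (int i = 0; i < nbuffers; i++)
        if (scratch[i].buf_id >= 0)
            ids[n++] = scratch[i].buf_id;
    return n;
}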
Regards,
Ants Aasma
enough to
just interleave writes of each tablespace, weighted by the amount of
writes per tablespace.
Regards,
Ants Aasma
arisons,
> especially of strings, ISTM, and that could still be available - this would
> just be a bit of extra arithmetic.
I don't think binary search is the main problem here. Objects are
usually reasonably sized, while arrays are more likely to be huge. To
make matters worse, js
r for a while after commit.
Regards,
Ants Aasma
an implementation for
Apache Derby. You may find some interesting ideas in there.
[1]
http://code.google.com/p/derby-nb/source/browse/trunk/derby-nb/ICDE10_conf_full_409.pdf
Regards,
Ants Aasma
/x86/vdso/vclock_gettime.c#L223
Regards,
Ants Aasma
sactions to decide if those records are visible or not unless
> they're very recent transactions which started in that short window
> while the committing transaction was in the process of committing.
I don't believe this is worth the complexity. The contention window is
extremely sh
eadPage()
does SlruReportIOError(), which in turn does ereport(ERROR), while
inside a critical section initiated in RecordTransactionCommit().
Regards,
Ants Aasma
e prudent, as the simpler approach
needs mostly the same ground work and if it turns out to work well
enough, simpler is always better.
Regards,
Ants Aasma
erall win in terms of total number of I/Os performed. Maybe we need
to invent Generalized CLOCK-Pro with a larger number of levels,
ranging from cold, hot and scalding to infernal. :)
Regards,
Ants Aasma
_clockpro.c?v=NETBSD
[6] http://lwn.net/Articles/147879/
[7]
http://derby-nb.googlecode.com/svn-history/r41/trunk/derby-nb/ICDE10_conf_full_409.pdf
[8]
http://www.postgresql.org/message-id/ca+tgmozypeyhwauejvyy9a5andoulcf33wtnprfr9sycw30...@mail.gmail.com
Regards,
Ants Aasma
It seems to me that when flushing logical mappings to disk, each
mapping file leaks the buffer used to pass the mappings to XLogInsert.
Also, it seems consistent to allocate that buffer in the RewriteState
memory context. Patch attached.
Regards,
Ants Aasma
er) provides that guarantee in
this specific case.
Regards,
Ants Aasma
but if anyone has any better ideas please let
> > them be known.
>
> I'd be a bit inclined to build the terminology around "reverse" instead of
> "negative" --- the latter seems a bit too arithmetic-centric. But that's
> just MHO.
To contribute to the bike shedding, inverse is often used in similar
contexts.
--
Ants Aasma
ssimization. Currently it's more of a
catch all "here be dragons" declaration for data structures.
Regards,
Ants Aasma
nchronized by a lock.
I guess the best approach for deciding would be to try to convert a
couple of the existing unlocked accesses to the API and see what the
patch looks like.
Regards,
Ants Aasma
in RAM and/or stuff is on SSD's.
Selecting a single row takes about 20us on my computer, so I picked 100us
as a reasonable limit below where the exact speed doesn't matter
anymore.
Regards,
Ants Aasma
the
fraction of queries running at each order of magnitude from less than
1ms to more than 1000s. Or with 31 bins you can cover factor of 2
increments from 100us to over 27h. And the code is almost trivial,
just take a log of the duration and calculate the bin number from that
and increment the val
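For illustration, the bin calculation could look something like this in C (names are illustrative; 31 bins in factor-of-two steps starting at 100us, so the top bin catches everything from roughly 30 hours up):

#include <math.h>
#include <stdint.h>

#define NBINS     31
#define BASE_USEC 100.0         /* first bin boundary: 100us */

static uint64_t latency_bins[NBINS];

/* Bin k covers [100us * 2^k, 100us * 2^(k+1)); under- and overflows clamp. */
static void
record_query_duration(double duration_usec)
{
    int bin;

    if (duration_usec <= BASE_USEC)
        bin = 0;
    else
        bin = (int) floor(log2(duration_usec / BASE_USEC));
    if (bin >= NBINS)
        bin = NBINS - 1;        /* roughly 30 hours and up */
    latency_bins[bin]++;
}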
On Fri, Oct 18, 2013 at 8:04 PM, Peter Geoghegan wrote:
> On Fri, Oct 18, 2013 at 9:55 AM, Ants Aasma wrote:
>> FWIW, I think that if we approach coding lock free algorithms
>> correctly - i.e. "which memory barriers can we avoid while being
>> safe", instead of &q
reate a new architecture with a similarly loose
memory model. The experience with Alpha and other microprocessors
shows that the extra hardware needed for fast and strong memory
ordering guarantees more than pays for itself in performance.
Regards,
Ants Aasma
On Wed, Oct 2, 2013 at 10:37 PM, Merlin Moncure wrote:
> On Wed, Oct 2, 2013 at 9:45 AM, Ants Aasma wrote:
>> I haven't reviewed the code in as much detail to say if there is an
>> actual race here, I tend to think there's probably not, but the
>> specific pattern
On Wed, Oct 2, 2013 at 4:39 PM, Merlin Moncure wrote:
> On Mon, Sep 30, 2013 at 7:51 PM, Ants Aasma wrote:
>> So we need a read barrier somewhere *after* reading the flag in
>> RecoveryInProgress() and reading the shared memory structures, and in
>> theory a full barrier
ds and even
stores out from the conditional block losing the control dependency.
:( It's quite unlikely to do so as it would be a very large code
motion and it probably has no reason to do it, but I don't see
anything that would disallow it. I wonder if we should just emit a
full fence
s good enough for me. I would consider using a naming scheme that
accounts for possible future uint64 atomics.
Regards,
Ants Aasma
ompiler fences. That would
> eliminate the need for scads of volatile references all over the
> place.
+1. The commits that you showed fixing issues in this area show quite
well why this is a good idea.
Regards,
Ants Aasma
s that would be
insane.
[1]
http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-software-developer-vol-3a-part-1-manual.pdf
Regards,
Ants Aasma
art, at least it would make the intention more clear
than the current approach of sprinkling around volatile pointers.
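For illustration, a minimal C11 sketch of what the explicit version looks like (variable names are made up; the pattern mirrors the flag-plus-shared-state case discussed upthread):

#include <stdatomic.h>
#include <stdbool.h>

/* Names are illustrative; the point is that the ordering is spelled out. */
static _Atomic bool recovery_in_progress = true;
static int shared_setup;        /* published before the flag flips */

void
finish_recovery(void)
{
    shared_setup = 42;                          /* initialize shared state */
    atomic_store_explicit(&recovery_in_progress, false,
                          memory_order_release); /* then publish the flag */
}

int
read_shared_setup(void)
{
    if (!atomic_load_explicit(&recovery_in_progress, memory_order_acquire))
        return shared_setup;                    /* guaranteed to see 42 */
    return -1;
}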
Regards,
Ants Aasma
[1] http://en.cppreference.com/w/c/atomic
[2] (long video about atomics)
http://channel9.msdn.com/Shows/Going+Deep/Cpp-and-Beyond-2012-Herb-Sutter-atomic-W
o imagine it having any measurable effect. A single core can
checksum several gigabytes per second of I/O without vectorization,
and about 30GB/s with vectorization.
Regards,
Ants Aasma
nderstand the appeal of staying
with what we have, but this would cap the speedup at 4x and has large
caveats with the extra lookup tables. A 28x speedup might be worth the
extra effort.
Regards,
Ants Aasma