Seriously, people are trading on ASICs?
The amount of effort going into this is astounding. I can't help
thinking that it won't end well.
On 04/09/2018 10:55 AM, Greg Young wrote:
To be fair, many of the FPGA-based things have also moved to ASICs. You
know you are in for fun when an FPGA is too slow.
On Mon, Apr 9, 2018 at 12:58 AM, Martin Thompson <[email protected]> wrote:
5+ years ago it was pretty common for folks to modify the Linux
kernel, or run cut-down OS implementations, when pushing the edge of
HFT. These days the really fast stuff is all in FPGAs in the
switches. However, there is still work done on isolating threads to
their own exclusive cores. This is often done by exchanges, or by
those who want good, predictable performance but do not necessarily
need to be the fastest.
A simple way I look at it: you are either predator or
prey. If predator, then you are most likely on FPGAs and doing
some pretty advanced stuff. If prey, then you don't want to be at
the back of the herd where you get picked off. For the avoidance
of doubt: if you are not sure whether you are prey or predator, then
you are prey. ;-)
On Sunday, 8 April 2018 13:51:52 UTC+1, John Hening wrote:
Hello,
I've read about thread affinity and I see that it is popular
in high-performance libraries (for example
https://github.com/OpenHFT/Java-Thread-Affinity). Juggling a
thread between cores generally hurts performance, so it is
reasonable to bind a specific thread to a specific core.
*Intro*:
It seems the ideal would be for a process to own a core [let's
call it X] in a multi-core CPU. I mean that the main thread of
the process would be the one and only thread executed on core X.
Then there would be no context switching and no cache flushing
[except for system calls].
I know that this requires a special scheduler implementation in
the kernel, so it requires modifying the [Linux] kernel. I know
that it is not so easy, and so on.
*Question*:
But we know that there are systems that need high performance,
so a solution that eliminates context switching once and for all
seems attractive. So, why is there no such solution? My
suspicions are:
* it is pointless; the bottleneck is elsewhere [although thread
affinity is still worthwhile]
* it is too hard, and too risky to get wrong
* there is no need
* forking your own Linux kernel doesn't sound like a good idea.
--
You received this message because you are subscribed to the Google
Groups "mechanical-sympathy" group.
To unsubscribe from this group and stop receiving emails from it,
send an email to [email protected].
For more options, visit https://groups.google.com/d/optout.
--
Studying for the Turing test