I read Avi's response as simply concurring with mine - i.e., that under *normal* low-latency operation, siblings do not hinder each other during kernel mode switching. In other words, it refutes the assertion made in the book.
But I'll let him respond directly to you.

On Sun, Jul 3, 2022 at 10:24 AM Wojciech Kudla <[email protected]> wrote:

> Yes, this absolutely does not apply when mitigations are disabled or when
> running with no SMT, so proper low-latency shops never face this issue.
>
> However, I'm confused about Avi's comment, as he seems to contradict what
> Intel are saying here:
>
> https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/technical-documentation/intel-analysis-microarchitectural-data-sampling.html
>
> in the section about ring 0 transitions and IPIs.
> I always trust Avi's enormous experience, so I hope he finds a little time
> to chime in and explain where my understanding is wrong.
>
> W.
>
> On Sun, 3 Jul 2022, 15:51 Mark Dawson, <[email protected]> wrote:
>
>> Wojciech,
>>
>> Just to be clear, you're illuminating a particular set of circumstances
>> that only occurs within CPU security vulnerability mitigation routines,
>> correct?
>>
>> In other words, in the instance that an engineer has configured his/her
>> system for low noise/jitter/latency, this serialization at kernel mode
>> switch time among sibling threads should not occur, correct?
>>
>> On Sun, Jul 3, 2022, 1:57 AM Wojciech Kudla <[email protected]> wrote:
>>
>>> @Peter
>>>
>>> > Does the CPU only need to serialize the transition, or does it need
>>> > to serialize the interrupt/system call while it is in ring 0?
>>>
>>> Sadly, yes: the kernel does need to temporarily "idle" the sibling
>>> thread while the other is in ring 0. This is described as transitioning
>>> from state 6a/6b to state 4 in the Intel doc I cited. In practice it's a
>>> bit more complicated, because what happens depends on whether the thread
>>> in kernel mode is accessing "secret data" or not.
>>> Regardless, the sibling thread is stuck in the IPI routine until the
>>> other one exits the kernel. When that happens, the CPU's
>>> microarchitectural data buffers are cleared with the VERW instruction
>>> and both siblings are allowed to resume execution.
>>> Hope this helps.
>>>
>>> It seems like this thread branched off into a general discussion about
>>> the quality of the cited book with respect to the culture of secrecy in
>>> our field. When I was entering this industry many years ago I was put
>>> off by this, and sometimes found it much harder to make progress in my
>>> work, because you start dealing with problems you won't find blog posts
>>> about, or an easy solution on SO. With time I learned to appreciate it,
>>> since it often costs us a lot of frustration-filled effort and grey hair
>>> to work out the secret sauce on our own, and that is a big part of one's
>>> value as a professional.
>>> I think it's totally fair not to want to give something like that away
>>> for free (you'll see the same problem in the world of magicians). Not
>>> even mentioning how reliant the HFT industry is on staying ahead of the
>>> pack in terms of technology and innovation.
>>> Just my two cents...
>>>
>>> On Sat, Jul 2, 2022 at 4:58 PM Mark Dawson <[email protected]> wrote:
>>>
>>>> Andrew,
>>>>
>>>> Yes, you are correct about the secrecy of the HFT industry in general.
>>>> As a matter of fact, when I worked at Jump Trading we used to have to
>>>> deliver presentations at Linux Kernel/Tracing Summits under a
>>>> completely different company name (that changed in 2013 or so, when
>>>> Jump joined the Linux Foundation).
>>>>
>>>> However, aside from secret trading strategies and esoteric exchange
>>>> protocol handling techniques, OS configuration guidelines are pretty
>>>> standard across the board. In fact, Red Hat publishes a new Low Latency
>>>> Guidelines document with every release, primarily targeted at our
>>>> domain. Our representation in the kernel community ushered in a lot of
>>>> the advancements that we have today in the Adaptive Ticks (i.e.
>>>> nohz_full) infrastructure.
>>>> Also, general techniques around pre-faulting memory, pinning threads,
>>>> cache warming, and avoiding runtime overhead via template
>>>> metaprogramming/constexpr, etc. - these are all things that you'll see
>>>> members of HFT firms speak about freely and openly at tech conferences.
>>>> We *all* do these things on the portions of our trading that still run
>>>> in software.
>>>>
>>>> But no one is gonna talk about their secret sauce, of course. That
>>>> secret trigger they discovered which works perfectly for their
>>>> FPGA-based strategies. Things of that sort. But I'm willing to bet a
>>>> nice chunk of change that no one in the industry who cares about
>>>> latency is leaving the CPU security vulnerability mitigations enabled
>>>> (which are also addressed in Red Hat's Low Latency Tuning guides).
>>>>
>>>> On Sat, Jul 2, 2022, 10:34 AM Andrew Hunter <[email protected]> wrote:
>>>>
>>>>> On Sat, Jul 2, 2022 at 9:42 AM Mark Dawson <[email protected]> wrote:
>>>>>
>>>>>> I'd hope these authors aren't referring to behavior under mitigation
>>>>>> circumstances, since *all* HFT firms disable CPU mitigation schemes
>>>>>> from the kernel boot parameter list as a standard procedure.
>>>>>
>>>>> Without expressing any opinion on this particular claim, it's
>>>>> important to realize that serious HFT practitioners are *extremely*
>>>>> secretive. They don't talk about anything they consider important
>>>>> publicly, and go to great lengths both to keep their own secrets and
>>>>> to infer as much as they can about their competitors from public
>>>>> statements.
>>>>>
>>>>> You should be extremely skeptical of any claims like "all HFTs do X".
>>>>> I haven't read the book OP mentions, but I'm doubtful you should infer
>>>>> much from it.
>>>>>
>>>>> --
>>>>> You received this message because you are subscribed to the Google
>>>>> Groups "mechanical-sympathy" group.
>>>>> To unsubscribe from this group and stop receiving emails from it,
>>>>> send an email to [email protected].
>>>>> To view this discussion on the web, visit
>>>>> https://groups.google.com/d/msgid/mechanical-sympathy/CANf_6Th9Dwg6Cv4ccbNiiVfBa1UbyRV2fbSpCeahpcEdUc-i8A%40mail.gmail.com
