> On Jan 10, 2019, at 10:18 AM, Josh Poimboeuf <jpoim...@redhat.com> wrote:
>
> On Thu, Jan 10, 2019 at 05:32:08PM +0000, Nadav Amit wrote:
>>> On Jan 10, 2019, at 8:44 AM, Josh Poimboeuf <jpoim...@redhat.com> wrote:
>>>
>>> On Thu, Jan 10, 2019 at 01:21:00AM +0000, Nadav Amit wrote:
>>>>> On Jan 9, 2019, at 2:59 PM, Josh Poimboeuf <jpoim...@redhat.com> wrote:
>>>>>
>>>>> With this version, I stopped trying to use text_poke_bp(), and instead
>>>>> went with a different approach: if the call site destination doesn't
>>>>> cross a cacheline boundary, just do an atomic write. Otherwise, keep
>>>>> using the trampoline indefinitely.
>>>>>
>>>>> NOTE: At least experimentally, the call destination writes seem to be
>>>>> atomic with respect to instruction fetching. On Nehalem I can easily
>>>>> trigger crashes when writing a call destination across cachelines while
>>>>> reading the instruction on other CPU; but I get no such crashes when
>>>>> respecting cacheline boundaries.
>>>>>
>>>>> BUT, the SDM doesn't document this approach, so it would be great if any
>>>>> CPU people can confirm that it's safe!
>>>>
>>>> I (still) think that having a compiler plugin can make things much cleaner
>>>> (as done in [1]). The callers would not need to be changed, and the key can
>>>> be provided through an attribute.
>>>>
>>>> Using a plugin should also allow to use Steven’s proposal for doing
>>>> text_poke() safely: by changing 'func()' into 'asm (“call func”)', as done
>>>> by the plugin, you can be guaranteed that registers are clobbered. Then,
>>>> you can store in the assembly block the return address in one of these
>>>> registers.
>>>
>>> I'm no GCC expert (why do I find myself saying this a lot lately?), but
>>> this sounds to me like it could be tricky to get right.
>>>
>>> I suppose you'd have to do it in an early pass, to allow GCC to clobber
>>> the registers in a later pass. So it would necessarily have side
>>> effects, but I don't know what the risks are.
>>
>> I’m not GCC expert either and writing this code was not making me full of
>> joy, etc.. I’ll be happy that my code would be reviewed, but it does work. I
>> don’t think an early pass is needed, as long as hardware registers were not
>> allocated.
>>
>>> Would it work with more than 5 arguments, where args get passed on the
>>> stack?
>>
>> It does.
>>
>>> At the very least, it would (at least partially) defeat the point of the
>>> callee-saved paravirt ops.
>>
>> Actually, I think you can even deal with callee-saved functions and remove
>> all the (terrible) macros. You would need to tell the extension not to
>> clobber the registers through a new attribute.
>
> Ok, it does sound interesting then. I assume you'll be sharing the
> code?
Of course. If this is what it takes to convince you, I’ll make a small
version for PV callee-saved first.

>>> What if we just used a plugin in a simpler fashion -- to do call site
>>> alignment, if necessary, to ensure the instruction doesn't cross
>>> cacheline boundaries. This could be done in a later pass, with no side
>>> effects other than code layout. And it would allow us to avoid
>>> breakpoints altogether -- again, assuming somebody can verify that
>>> intra-cacheline call destination writes are atomic with respect to
>>> instruction decoder reads.
>>
>> The plugin should not be able to do so. Layout of the bytecode is done by
>> the assembler, so I don’t think a plugin would help you with this one.
>
> Actually I think we could use .bundle_align_mode for this purpose:
>
>   https://sourceware.org/binutils/docs-2.31/as/Bundle-directives.html

Hm… I don’t understand what you have in mind (i.e., when these assembly
directives would be emitted).
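For reference, a hypothetical GAS fragment based on the binutils documentation
linked above (untested; per the docs, within .bundle_align_mode N the
assembler pads so that no instruction, or bundle-locked group, crosses a
2^N-byte boundary):

```
	.bundle_align_mode 6	# 2^6 = 64 bytes, one cacheline
	.bundle_lock
	call	some_func	# padded so the call cannot straddle a 64-byte boundary
	.bundle_unlock
	.bundle_align_mode 0	# back to normal layout
```

The open question in the reply above is who would emit these directives,
since the compiler (and any plugin) runs before the assembler lays out the
bytes.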