On Thu, 27 Apr 2023 at 05:27, Josh Gargus <jj...@google.com> wrote:
>
> Hi, I'm from the Fuchsia team at Google.  We would like to provide Lavapipe 
> as an ICD within Fuchsia.  However, our default security policy is to deny 
> client apps the capability to map memory as writable/executable; we don't 
> want to relax this for every client app which uses Vulkan.  Therefore, we are 
> investigating the feasibility of splitting "Lavapipe" into two parts, one of 
> which runs in a separate process.
>
> "Lavapipe" is in quotes because we don't know quite where the split should be 
> (that's what I'm here to ask you); perhaps it wouldn't be within Lavapipe per 
> se, but instead e.g. somewhere within llvmpipe.
>
> Another important goal is to make these changes in a way that is upstreamable 
> to Mesa.
>
> We considered a few different options, deeply enough to convince ourselves 
> that none of them seems desirable.  These ranged from proxying at the Vulkan 
> API level (so that almost everything runs in a separate process) to doing 
> only compilation in the separate process (into shared memory that is only 
> executable, not writable, in the client process).
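
(For concreteness, the producer side of that last option looks roughly
like the sketch below. It's only an illustration: Linux memfd and file
seals stand in for whatever Fuchsia's VMO equivalent would be, and
code_bytes/code_size stand in for whatever the compiler actually
emitted.)

  /* Sketch of the compiler-process side of the "executable-only shared
   * memory" option.  Linux memfd is used purely for illustration;
   * Fuchsia would use a VMO.  code_bytes/code_size stand in for the
   * output of the JIT, however it was produced.
   */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  /* Returns an fd the client can mmap with PROT_READ | PROT_EXEC only;
   * sealing guarantees the contents can never be made writable again. */
  static int
  publish_jited_code(const void *code_bytes, size_t code_size)
  {
     int fd = memfd_create("jit-code", MFD_CLOEXEC | MFD_ALLOW_SEALING);
     if (fd < 0 || ftruncate(fd, code_size) < 0)
        goto fail;

     void *dst = mmap(NULL, code_size, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
     if (dst == MAP_FAILED)
        goto fail;
     memcpy(dst, code_bytes, code_size);
     munmap(dst, code_size);  /* no writable mappings may remain before sealing */

     if (fcntl(fd, F_ADD_SEALS,
               F_SEAL_WRITE | F_SEAL_SHRINK | F_SEAL_GROW) < 0)
        goto fail;

     return fd;  /* pass to the client over a socket (SCM_RIGHTS) or FIDL */

  fail:
     if (fd >= 0)
        close(fd);
     return -1;
  }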

Have you considered using Venus over a socket/pipe to do it at the
Vulkan layer? (just asking in case you hadn't).

>
> This exploration was limited by our unfamiliarity with the corner cases of 
> implementing a Vulkan driver.  For example, we're not quite clear on how much 
> code is generated outside of vkCreateGraphics/ComputePipelines().  For 
> instance, is there any code generated lazily, perhaps at draw time, to optimize 
> texture sampling?  That's just one question we don't know the answer to, and 
> there are surely many other questions we haven't thought to ask.
>
> Rather than delve into such minutiae, I'll simply ask how you recommend 
> approaching this problem.  Again, the two main constraints are:
> - no JITing code in the client process
> - clean enough solution to be upstreamable

So code is generated in a lot of places, particularly at shader bind
and at draw time, depending on the bound textures/samplers etc. I
think your best bet would be to introduce a client/server split at the
gallivm layer. The span from gallivm_compile_module() to
gallivm_jit_function() is where the LLVM compilation happens, so you'd
have to construct enough of a standalone gallivm/LLVM environment in
the server process to take an LLVM module, compile it, and pass the
JITed code back in shared memory like you said. I'm not sure how
position-independent the binaries LLVM produces are, or whether they'd
have to be mapped at the same address in both processes. There are
also a bunch of global linkages for various things that have to be
hooked up (the debug printf, coroutine malloc hooks, and the clock
hook), so those would need some thought as well.
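
To make the shape of that split a bit more concrete, here's a rough
sketch of what the client side could look like. It is only an
illustration: the transport helpers compile_server_send() and
compile_server_recv_fd() and the wire format are made up, Fuchsia
would presumably use VMOs/FIDL rather than fds and sockets, and the
only real APIs used are the LLVM-C bitcode writer and mmap(). It also
simply assumes the returned code is position-independent or was
relocated for the client-side address, which is exactly the open
question above.

  /* Hypothetical client-side replacement for the in-process
   * gallivm_jit_function() path.  The client never needs a
   * writable+executable mapping: it ships LLVM bitcode to a compile
   * server and maps the returned machine code read+execute only.
   */
  #include <llvm-c/BitWriter.h>   /* LLVMWriteBitcodeToMemoryBuffer() */
  #include <llvm-c/Core.h>
  #include <stdint.h>
  #include <sys/mman.h>
  #include <unistd.h>

  /* Assumed transport helpers (socket/pipe/FIDL) -- not real Mesa API. */
  int compile_server_send(int sock, const void *buf, size_t len);
  int compile_server_recv_fd(int sock, uint64_t *code_size,
                             uint64_t *func_offset);

  typedef void (*jit_func_ptr)(void);

  /* Serialize the module, have the server compile it, and map the
   * resulting code into this process without PROT_WRITE. */
  static jit_func_ptr
  remote_gallivm_jit(int server_sock, LLVMModuleRef module)
  {
     LLVMMemoryBufferRef bc = LLVMWriteBitcodeToMemoryBuffer(module);
     int ret = compile_server_send(server_sock,
                                   LLVMGetBufferStart(bc),
                                   LLVMGetBufferSize(bc));
     LLVMDisposeMemoryBuffer(bc);
     if (ret < 0)
        return NULL;

     uint64_t code_size = 0, func_offset = 0;
     int code_fd = compile_server_recv_fd(server_sock, &code_size,
                                          &func_offset);
     if (code_fd < 0)
        return NULL;

     /* Read+execute only, so the client's W^X policy is never violated.
      * Assumes the code is position-independent or was relocated by the
      * server for the address it ends up at here. */
     void *code = mmap(NULL, code_size, PROT_READ | PROT_EXEC,
                       MAP_SHARED, code_fd, 0);
     close(code_fd);
     if (code == MAP_FAILED)
        return NULL;

     return (jit_func_ptr)((uintptr_t)code + func_offset);
  }

The server side would be roughly the inverse: parse the bitcode, run
the same gallivm/LLVM compilation path, and publish the result much
like the earlier memfd sketch.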

Dave.
