So I have the following idea: I run lguest, with a guest userspace that mirrors the host and its userspace. I then make a copy of the host's sys_call_table and repoint the kernel at the copy. The copy contains stubs that vector selected syscalls into the ring1 (guest) kernel.
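To make that concrete, here is a rough sketch of the table-copying step. It assumes the address of sys_call_table is already known (e.g. from System.map or kallsyms); shadow_table, build_shadow_table, and ring1_read_stub are placeholder names of mine, and actually repointing the syscall entry path at the copy is arch-specific and omitted:

    /*
     * Sketch: build a private copy of sys_call_table with selected
     * entries replaced by stubs that vector into the ring1 kernel.
     */
    #include <linux/kernel.h>
    #include <linux/string.h>
    #include <linux/unistd.h>

    typedef long (*sys_call_ptr_t)(unsigned long, unsigned long,
                                   unsigned long, unsigned long,
                                   unsigned long, unsigned long);

    static sys_call_ptr_t *host_table;               /* the real sys_call_table */
    static sys_call_ptr_t shadow_table[NR_syscalls]; /* our private copy */

    /* Hypothetical stub; a sketch of its body follows further down. */
    long ring1_read_stub(unsigned long fd, unsigned long buf,
                         unsigned long count, unsigned long a4,
                         unsigned long a5, unsigned long a6);

    static void build_shadow_table(sys_call_ptr_t *table)
    {
            host_table = table;
            memcpy(shadow_table, table, sizeof(shadow_table));

            /* Redirect only the calls of interest, e.g. file I/O. */
            shadow_table[__NR_read] = ring1_read_stub;
            /* ... __NR_write, __NR_open, and so on ... */

            /*
             * Pointing the kernel's dispatch at shadow_table means
             * patching the syscall entry path (the int 0x80 /
             * sysenter handler); that step is arch-specific and
             * not shown here.
             */
    }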
This way, if a process of interest (running on the host) is context switched onto the processor (note: a scheduling switch), some subset of its syscalls (say, file I/O) is redirected to the lguest kernel.

Why do this instead of simply running the process on lguest itself? I want to avoid porting issues such as graphics/rendering-related syscalls, rather than writing drivers that pass through to the host. Moreover, I am trying to build a sandbox that protects against certain kinds of native code: I trust some portion of a process's native code, but not all of it, so the syscalls that the untrusted code makes should go into the ring1 kernel.

Example: my process allows the execution of downloaded, arbitrary native code. This could be a privilege escalation. Under my model, since this code is downloaded, I don't trust it. Therefore, the syscalls it makes (those for which redirection makes sense; redirecting fork()/exec() doesn't) are redirected to a "honeypot" kernel.
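Here is a sketch of what one such stub might look like. Both task_is_untrusted() and hcall_redirect_syscall() are hypothetical, the latter standing in for a forwarding path built on an lguest hypercall:

    /*
     * Sketch of one stub: forward to the ring1 kernel only when the
     * current task is untrusted; otherwise behave exactly as the host.
     */
    #include <linux/sched.h>
    #include <linux/types.h>
    #include <linux/unistd.h>

    typedef long (*sys_call_ptr_t)(unsigned long, unsigned long,
                                   unsigned long, unsigned long,
                                   unsigned long, unsigned long);

    extern sys_call_ptr_t *host_table;  /* saved in the previous sketch */

    /* Hypothetical policy check: is this task running untrusted code? */
    bool task_is_untrusted(struct task_struct *tsk);

    /* Hypothetical forwarding path into the guest ("honeypot") kernel. */
    long hcall_redirect_syscall(int nr, unsigned long a1,
                                unsigned long a2, unsigned long a3);

    long ring1_read_stub(unsigned long fd, unsigned long buf,
                         unsigned long count, unsigned long a4,
                         unsigned long a5, unsigned long a6)
    {
            /* Only syscalls from untrusted code go to the honeypot kernel. */
            if (task_is_untrusted(current))
                    return hcall_redirect_syscall(__NR_read, fd, buf, count);

            /* Trusted code falls through to the native implementation. */
            return host_table[__NR_read](fd, buf, count, a4, a5, a6);
    }

Thoughts welcome.

-Earlence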