On 01/08/13 10:54, Jean-Philippe Ouellet wrote:
Hello misc@,
I'm researching locking things down, and I'm wondering what the current
best practice is for isolating risky programs. It seems this community
has traditionally shunned virtualization as a solution, and has also
dismissed chrooting alone as "insufficient". Okay, sure.
But what is better, then?
Say, for example, I'm running firefox, and I don't trust it. Installed
straight from pkg_add, it doesn't run as its own user:
$ ps -o user,command | grep firefox
jpouellet firefox
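The check above can be wrapped into a small helper that reports which
account owns a given PID (a sketch using portable ps(1) options; nothing
OpenBSD-specific is assumed):

```shell
# proc_user: print the account that owns the process with the given PID.
# "ps -o user= -p PID" asks for just the user column, with no header.
proc_user() {
  ps -o user= -p "$1" | tr -d ' '
}

# Example: report the owner of the current shell.
proc_user $$
```

Run against firefox's PID, this would print the invoking user's name,
confirming the problem described above.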
As I understand it, the next time a remote code execution vulnerability
comes along, it could, among many other things, read my ~/.ssh/id_rsa
and then it's game over.
A chroot, or even just a separate user, would seem to fix that problem,
assuming the process couldn't easily break out (probably not a safe
assumption). But that still leaves many other issues: for example, it
could still send network traffic originating from my machine, which
would be extremely valuable to an attacker.
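On the network side, one hedged sketch: OpenBSD's pf can match rules on
the local socket owner via the `user` criterion, so if firefox ran as a
dedicated account (the `_firefox` name below is hypothetical), its
traffic could be confined in pf.conf to, say, web and DNS only:

```
# Default-deny outbound traffic owned by the hypothetical _firefox user.
block return out proto { tcp udp } user _firefox

# Then allow only web and DNS for that account.
pass out quick proto tcp user _firefox to any port { 80 443 }
pass out quick proto udp user _firefox to any port 53
```

This doesn't stop exfiltration over the allowed ports, but it at least
narrows what a compromised browser process could reach.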
The historical solution (as of 2005) [1] seems to have been systrace,
but then vulnerabilities in that approach were found (in 2007) [2]. So,
unless I'm missing something, it seems that virtualization remains the
most complete solution, but if that's broken too, then we're back at
square one!
So what do you guys recommend? Should I just chroot a VM whose network
traffic all goes through a local filter, and hope for the best? I'm
really at a loss for what to do here.
Many thanks,
Jean-Philippe
[1] http://marc.info/?l=openbsd-misc&m=113459984810732&w=2
[2] http://www.watson.org/~robert/2007woot/2007usenixwoot-exploitingconcurrency.pdf
I mentioned some of my Firefox setup in an older post:
http://www.mail-archive.com/[email protected]/msg117422.html
My rough notes are here:
https://www.secondfloor.ca/spring/firefox.txt
I'm working on the sort of thing you're talking about. Email me
privately if you want to collaborate.