On Fri, Oct 28, 2005 at 04:40:16AM +0200, Yoshinori K. Okuji wrote:
> The recent debate in this list makes me very afraid of a possibility of
> extreme cost which never finishes in a reasonable amount of time or
> human resources.
This is indeed very important. I agree with Marcus that this is exactly
what we are making sure of at this moment.

> No matter whatever design mistakes the Hurd has, the reason why it is
> nearly unusable even after 15 years is that most people have merely
> dreamed and never implemented their dreams.

I can't speak for the others, but this isn't the reason I haven't done
much. I was simply interested in other things as well. The thing is, the
Hurd looks like it will bring some nice features to my system. But "some
nice features" isn't something I go spending all my time on. As others
have said, we need something truly great. If we know we're making
something that'll really change things, that'll really add something
good to the experience of using computers, now that's a motivation to
implement it.

I believe that "paranoid security", combined with user control (so it
doesn't come at the cost of the usability of the system), is such a
great feature. From the recent discussions I think we can conclude that
this is theoretically possible. Now we have to make sure we choose our
goals such that it is feasible to implement, without losing the
greatness of it.

> Those who have been hanging around for years, such as Marcus and
> Alfred, should know this fact. When I see people talking about an
> ideal future again and again, I must imagine that the same destiny
> would arrive.

I understand your fear, but in fact I feel exactly the opposite. I've
been hanging around here for some years as well now (though not as long
as the people you mention), and I've never seen as much productivity as
in the last month. This productivity isn't counted in CVS commits, but I
definitely have the feeling that something is happening now. I think
this is mostly because Shapiro joined in (and started) several
discussions, for which I thank him.

> What makes me scared the most is security paranoia (I'm sorry for my
> wording).
As I said above, "security paranoia" is what changes the system from
"nice" to "great" for me. Knowing what I do now about security, I
wouldn't want to build a system which doesn't have such paranoia.

> Some people tend to desire 100% security, but it is simply impossible
> or not feasible in reality. It is the same question as "How can I
> avoid death with a 100% guarantee?" The only possible answer is "Die.
> Once you are dead, you never die."

Indeed, unplugging your machine is the best security there is. Nobody
proposes to make that our security policy, though. ;-)

100% security is only possible with 100% verified code (and certainty
that the verification is correct, but that's a different story), at
least for the parts which handle untrusted things, such as the internet
or downloaded plugins. And even then we haven't secured the system
against social engineering attacks. So no, we cannot reach that goal.
But compared to today's computers (where an attacker can essentially
take over your whole machine if he finds just a few bugs), we can
improve by orders of magnitude. It is the difference between going to a
meeting with a mafia boss and going to the toilet. Sure, you can die
when you go to the toilet. But it's a lot less likely, and it doesn't
feel like a risky operation. That's what it's all about, IMO: make using
computers feel like something safe.

> Security is a matter of degree. The question is not if it is secure
> or not, but how much it is secure. With a good capability system, you
> may prevent malicious code from destroying all of your data, but you
> may not stop it doing all malicious behaviors, anyway. For example, if
> a plugin for a browser is to assist setting up a preference (in fact,
> there exist several plugins of this kind for Mozilla), you must allow
> the plugin to rewrite your configuration. Then, if the plugin is
> malicious enough, it can insert a proxy setting secretly.

If the system is well designed, then there is no problem.
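To make the "well designed" case concrete, here is a toy sketch of
capability attenuation in Python. Nothing here is an actual Hurd,
Mozilla, or capability-system API; all the names are made up. The idea
is only that the browser holds full authority over its configuration
and hands the plugin a weaker capability restricted to the keys the
plugin legitimately needs, so the proxy setting is simply out of reach:

```python
# Toy sketch of capability attenuation (all names hypothetical;
# not an actual Hurd or Mozilla interface).

class ConfigCap:
    """Full-authority capability over a configuration store."""
    def __init__(self, store):
        self._store = store

    def read(self, key):
        return self._store.get(key)

    def write(self, key, value):
        self._store[key] = value

    def attenuate(self, allowed_keys):
        """Derive a weaker capability that only covers allowed_keys."""
        return AttenuatedConfigCap(self, frozenset(allowed_keys))


class AttenuatedConfigCap:
    """Restricted facet: same interface as ConfigCap, less authority."""
    def __init__(self, parent, allowed_keys):
        self._parent = parent
        self._allowed = allowed_keys

    def read(self, key):
        if key not in self._allowed:
            raise PermissionError("no capability to read %r" % key)
        return self._parent.read(key)

    def write(self, key, value):
        if key not in self._allowed:
            raise PermissionError("no capability to write %r" % key)
        self._parent.write(key, value)


# The browser gives the plugin only what it needs:
full = ConfigCap({"homepage": "about:blank", "proxy": None})
plugin_cap = full.attenuate({"homepage"})

plugin_cap.write("homepage", "http://example.org")  # allowed
try:
    plugin_cap.write("proxy", "evil-proxy:8080")    # denied
except PermissionError as e:
    print(e)
```

The point of the sketch: the plugin never has to be trusted, because the
capability it holds cannot name the proxy setting at all.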
First of all, it doesn't sound like a good idea to need a plugin just to
set your preferences. But even if it is, you don't need to give it
permission to write to your *entire* configuration. If Mozilla is well
designed (where well-designed means "using the capability system
effectively", which of course it doesn't), it can allow the plugin to
write some configuration once, but not allow it to install a proxy.

Assuming bug-free trusted code, I think most security problems, if not
all, can be grouped into "covert channels" or "social engineering".
Social engineering is probably the biggest security problem in the
modern world, and unfortunately it doesn't go away. But the idiocy of
"don't click on attachments, because you don't know if they're opened
with a trusted viewer or simply executed directly" will at least be
history. And that probably closes a lot of social engineering holes as
well. (Not that you need a capability system for that, but anyway.)

> I'm not planning to take part in design decisions, simply because I
> don't have time to read recent papers. But I wish Marcus and others
> would take it into account that the project must be feasible and
> realistic.

Thank you for this advice. While it may seem otherwise, I think this is
precisely what we are doing.

Thanks,
Bas

--
I encourage people to send encrypted e-mail (see http://www.gnupg.org).
If you have problems reading my e-mail, use a better reader.
Please send the central message of e-mails as plain text in the message
body, not as HTML and definitely not as MS Word.
Please do not use the MS Word format for attachments either.
For more information, see http://129.125.47.90/e-mail.html
_______________________________________________
L4-hurd mailing list
[email protected]
http://lists.gnu.org/mailman/listinfo/l4-hurd
