okay... i was trying to avoid entering this discussion, but here i go. lemme preface my comments with a bit of my background... blah blah blah... bell canada... blah blah blah... rsadsi (back when it was a real engineering company)... certicom... cigital... meson group... managed borland's secure product development group... etc.
so... while only an idiot would claim to be "an expert" in all areas of internet security, i've as much right as the next guy to say i know something about these issues.

the doctrine of "fair disclosure" was developed to provide some protection for the organization developing the vulnerable software, while still encouraging the security research community to report vulnerabilities and their related exploits. the reasoning went something like this: back in the go-go 90's there was a cottage industry of private security research. i'm not talking just hackers in basements trying to figure out how systems work for fun, but groups of consulting engineers trying to prove to the world that they had the ability to delve into the inner workings of software, find some problems, and figure out how to exploit (and later repair) them. real money. real engineering practice. embarrassing the "big guys" and getting the press to look at you as you described problems in popular software tended to have a positive effect on these companies' bottom lines.

all software has vulnerabilities. sorry. it's the truth. the vulnerabilities sometimes come from "sloppy" coding, but increasingly problems emerge from integrating multiple components with insufficient abstractions to model security behavior. so it might be advantageous to cast the discussion in this light: given that the LL viewer includes X lines of code, that there's a non-zero per-line chance of introducing a vulnerability, and that we use several libraries maintained by 3rd parties (with Y lines of code), we must have Z undiscovered vulnerabilities in the LL viewer. some of those vulnerabilities are no doubt of "low impact", but i've found a good working assumption is, "there's at least one more serious vulnerability in your code." (it seems to be true more often than not.) the fact that a vulnerability is yet to be discovered does not limit its eventual impact.
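that back-of-the-envelope estimate can be sketched numerically. every number below is invented purely for illustration (the text deliberately leaves X, Y, and Z open), and the defect-density figure is an assumption, not a measured property of the LL viewer:

```python
# Hypothetical sketch of the "lines of code x per-line defect chance" argument.
# All values are made up for illustration; X, Y, Z are unspecified in the text.
viewer_loc = 500_000          # X: assumed lines of code in the LL viewer
third_party_loc = 1_000_000   # Y: assumed lines in bundled 3rd-party libraries
defects_per_kloc = 0.05       # assumed residual security defects per 1000 lines

# Z: expected number of undiscovered vulnerabilities under these assumptions
expected_vulns = (viewer_loc + third_party_loc) / 1000 * defects_per_kloc
print(f"expected undiscovered vulnerabilities (Z): {expected_vulns:.0f}")
```

the point of the exercise is only that any plausible choice of densities yields a Z well above zero, which is all the argument needs.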
so, for the sake of discussion, let's assume the following statement is true: "there is a serious exploit in the current LL viewer code which will lead to disclosure of sensitive user information, compromise of systems running the client, illegal asset or funds transfer, and global thermonuclear war."

if a security researcher out in the trenches discovers a vulnerability, disclosing it widely before a fix is available is clearly bad not only for Linden, but for the user community. nuclear war is generally bad for everybody. let's turn it around: say your wallet has an undisclosed vulnerability that allows 10%-50% of its contents to be magically transported to Joey Soprano's bank account. who do _you_ want to know about this vulnerability? "fair disclosure" provides direction to security researchers, asking them to report the problem to people who can create a fix in a reasonable time frame.

i've painted some pretty dire ramifications of vulnerabilities: nuclear war and extensive funds loss. there is no reason to believe either of these events would result from a bug in the LL viewer code, but it is sometimes nice to "plan for the absurdly worst." and, btw, microsoft DOES NOT always announce security vulnerabilities as it learns about them.

the fact that the viewer is open is orthogonal to the discussion of fair disclosure. we're talking about what you do when you notice a vulnerability, not who has the ability to look at the source code and search for one. under the doctrine of fair disclosure, the answer to the former is: tell "anyone who can reasonably be expected to assist in the remediation of the vulnerability." i would argue that as an organization, Linden would be disproportionately affected by multiple exploits in the viewer code, which would encourage Linden to move heaven and earth to get these things fixed in a reasonable time frame.
this is just a long-winded discussion to support the following statement: "telling everybody about a security vulnerability before remediation is available is bad."

> So who decides who is "good" or "bad" to receive or not to receive security
> bulletins? I think it's the wrong way to follow an Apple's approach to keep
> security issues secret until they possibly are fixed. I like the approach to
> openly disclose security gaps, make users aware of imminent risk, and try to
> fix issues ASAP with help of the users. That's how Microsoft handles it.
>
> Surely there are defenders of both camps. But SL is now opensource. I think
> the only way to properly handle security issues detected is to make
> everybody aware of them. Not to select a few deemed "white hats" to be
> informed but all people who work with the code. Be ensured the "black hats"
> do the same.
>
> One additional point @ Henri. You are registered with your RL details @ LL.
> As such I don't see a point with anonymity here. There is none.
>
> Merry Xmas!
>
> Boy
>
> http://my.opera.com/boylane

_______________________________________________
Policies and (un)subscribe information available here:
http://wiki.secondlife.com/wiki/SLDev
Please read the policies before posting to keep unmoderated posting privileges
