Leo Sutic wrote:

> > -----Original Message-----
> > From: Berin Loritsch [mailto:[EMAIL PROTECTED]
> > Sent: den 21 juni 2001 15:23
> > To: [email protected]
> > Subject: Coding Standards Update
> >
> > I believe we need to go through our Coding Standards document,
> > purge some items (since they do not apply to modern JVMs) and
> > incorporate ideas from this list of documents:
> >
> > Twelve rules for developing more secure Java code
> > -------------------------------------------------
> > http://www.javaworld.com/javaworld/jw-12-1998/jw-12-securityrules_p.html
>
> While I believe that the rules are appropriate for some situations, I do
> not think that they approach the type of security needed in Avalon from
> the right angle.
>
> Besides leading to a lot less developer-friendly code, I think they ignore
> one important point: if someone else chooses what code gets executed on
> your machine, you cannot claim any security.
The whole point of the article is protecting yourself from code that your
executing code loads in a classloader. If you refuse to seal your packages
(each and every one of them), code that you load later on will be able to
"augment" an existing package in undocumented ways.

> Almost all "gotchas" that were listed assumed that the attack came in the
> form of malicious classes that forced a violation of the contract between
> the objects in the system.

They are targeted at protecting your code from a "masquerading" attack. In
other words, code that masquerades as good code, but after it is loaded into
a running system begins to exploit weaknesses in other code at run time.
This should not be allowed.

> Now, given the existence of Java decompilers and the fact that Avalon is
> open source and thus available for any attacker to modify, I must question
> an approach to security that puts this great an emphasis on the types of
> attacks described in the article.

I think you missed one major point: we are protecting a run-time system.
These approaches are even more important for projects like Phoenix that load
other jars on purpose. A run-time class that gets loaded into the same
package as the Phoenix code has access to all the attributes and methods
that have package access, as well as methods that are protected. This is the
type of attack the article protects against. We can have correctly developed
code, but if it does not protect itself by design, malicious code can do
more far-reaching damage than just the classloader it was loaded into. This
is due to the way the Java language works.

> Instead, I believe that the fact that Avalon is open source, coupled with
> peer-review and the possibility of source code inspections are the things
> that make Avalon "secure".

Security is a large subject, and the more I learn about it the more I have
to say that your last comment is only part of the answer. You also have to
understand the way a language works and know its weaknesses.
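For reference, sealing is a one-line-per-package entry in a jar's
MANIFEST.MF; once a package is sealed, the JVM refuses to load classes for
that package from any other jar. A minimal sketch (the package name below is
invented for illustration):

```
Name: org/apache/avalon/example/
Sealed: true
```

Putting `Sealed: true` in the manifest's main section instead seals every
package in the jar at once.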
There is a balance: the cost of improperly extracting information from a
runtime system should be higher than the value of that information. In other
words, a public site that hosts public articles needs far less security than
a bank where you can potentially get hundreds of thousands of dollars.

> In the same way as one usually lets the OS handle file permissions, I
> believe that untrusted code must run in a sandbox, just like applications
> run in user space and not kernel space, and that security is therefore
> best handled at the VM level. I get the feeling that we're trying to patch
> small security holes while there's a train-sized hole that we can never
> cover up and that all attackers will use.

The article's proposed solutions protect against code that is supposed to
run in a sandbox but takes advantage of weaknesses in the language. Try it:
take a class with package-access strings in the main system (in an unsealed
package), and have that system load another class in a sandbox. The
sandboxed class can still read the package-access strings.

I am not saying we should ignore the larger holes. I am saying that if we
adopt these practices, there will be fewer small holes to patch once the
large hole is filled.

> I assume that there are people on this list with more experience in this
> field, so what do you say?

I deal with customers who are very protective of their data--which makes it
my business to be protective of their data too. I urge you to do your own
research. Security by design is more difficult, but when you are developing
a server framework it is mission critical.
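The experiment above can be sketched in a few lines. Both classes here are
hypothetical (in a real attack they would sit in separate jars, the second
one loaded later); the point is that Java's package access is package-wide,
not jar-wide, so any class loaded into the same unsealed package can read
the "hidden" field directly, with no reflection and no special permission:

```java
package com.example.host; // hypothetical package name for this sketch

// The "trusted" class: its secret has package access, not private access.
class Secrets {
    static String apiKey = "s3cr3t";
}

// The class a later-loaded jar could inject into the same unsealed
// package. Because it shares the package, the field is plainly visible.
public class Injected {
    public static void main(String[] args) {
        System.out.println(Secrets.apiKey); // prints s3cr3t
    }
}
```

Declaring the field `private` (with an accessor where genuinely needed), or
sealing the package, closes this particular door.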