On Jan 28, 2008 4:53 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> --- Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> > You don't NEED intrusion detection if intrusion cannot be done. If
> > your software doesn't read anything from outside, it's not possible to
> > attack it. If it reads that data and correctly does nothing with it,
> > it's not possible to attack it. If it reads that data and correctly
> > processes it, it's not possible to attack it.
>
> Might I suggest some literature on security engineering before you
> trivialize the problem. I found the book by Ross Anderson to be a good
> introduction.
> http://www.amazon.com/Security-Engineering-Building-Dependable-Distributed/dp/0471389226/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1201483200&sr=8-1
Matt, as I said before, I know that there are all sorts of *practical*
problems with handling this issue, which create a market for processes,
tools and expertise, but it's theoretically possible, given enough
effort (which isn't available for most applications).

> > > Consider the following subset of possible requirements: the
> > > program is correct if and only if it halts.
> >
> > It's a perfectly valid requirement, and I can write all sorts of
> > software that satisfies it. I can't take a piece of software that I
> > didn't write and tell you if it satisfies it, but I can write a piece
> > of software that satisfies it, and that also does all sorts of useful
> > stuff.
>
> That is not the hard problem. Going from a formal specification
> (actually a program) to code is just a matter of compilation. But
> verifying that the result is correct is undecidable.

What do you mean by that? What does the word 'result' in your last
sentence refer to? Do you mean the result of compilation? There are
verified stacks, from the ground up. Given enough effort, it should be
possible to be arbitrarily sure of their reliability. And anyway, what
is undecidable here?

> Of course it is much worse when the specification is written in
> English. Usually users do not know exactly what they want. Even if
> they do, specifications are typically vague, incomplete, ambiguous,
> have errors, and make assumptions that the developer will
> misinterpret. If you have ever written code for somebody else, you
> will know what I mean.
>
> For example, a specification for a database may require that users be
> authenticated, but does not say how. Or it may say that a user has to
> enter a password, but does not say how the password is transmitted or
> stored, or what to do with users who don't know what a username is, or
> type their password into phishing sites. This is the result.
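(Vladimir's point, that it is easy to write software which provably halts even though deciding halting for an arbitrary program is undecidable, can be made concrete with a small sketch. The function below is invented purely for illustration and is not from the thread.)

```python
# Illustrative sketch (not from the thread): a program whose termination
# is guaranteed *by construction*, because its only loop runs over a
# finite slice. No general halting analysis is needed to see that it
# satisfies the requirement "correct iff it halts".

def bounded_sum(values, limit=1000):
    """Sum at most `limit` items; the loop bound is fixed, so this
    function terminates on any input list."""
    total = 0
    for v in values[:limit]:  # at most `limit` iterations, by construction
        total += v
    return total

# Writing terminating-by-construction programs like this is easy; what
# is undecidable is checking termination of an *arbitrary* program
# handed to you, which is the distinction argued above.
print(bounded_sum([1, 2, 3]))  # prints 6
```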
> http://en.wikipedia.org/wiki/Storm_botnet
>
> Maybe AGI will solve some of these problems that seem to be beyond the
> capabilities of humans. But again it is a double-edged sword. There is
> a disturbing trend in attacks. Attackers used to be motivated by ego,
> so you had viruses that played jokes or wiped your files. Now they are
> motivated by greed, so attacks remain hidden while stealing personal
> information and computing resources. Acquiring resources is the
> fitness function for competing, recursively self-improving AGI, so it
> is sure to play a role.

Now THAT you can't oppose: competition for resources through deception
that relies on human gullibility. But it's a completely different
problem; it's not about computer security at all. It's about human
psychology, and one can't do anything about that as long as people
remain human. It could perhaps be partially solved by placing generally
intelligent 'personal firewalls' on all input that a human receives.

--
Vladimir Nesov
mailto:[EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=90422037-3280fb
