On Fri, Feb 16, 2007 at 09:59:08AM -0800, [EMAIL PROTECTED] wrote:
> I do not know when web-based, cross platform scripting
> vulnerabilities actually started. My first run in with this problem
> was in 1995 with the perl based formmail exploit. This exploit was
> documented in CVE-1999-0172. Although this was not exactly remote
> code execution, it did allow a perpetrator to hijack the server to
> relay spam.
IIRC, this was yet another security hole caused by mixing control
and data signals, notably in the arguments to Perl's open function
(apart from specifying a file, it can specify a mode, or a pipe
that actually executes a program). If I remember incorrectly, then
I have seen the same hole in another, similar Perl CGI recently.
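For anyone who hasn't run into it, here's roughly the shape of it, as
a sketch with made-up variable names and the usual sendmail path (this
is not code from the original formmail):

    use strict;
    use warnings;

    # Illustrative only: pretend this arrived in an untrusted form field.
    my $recipient = defined $ARGV[0] ? $ARGV[0] : 'nobody@example.com';

    # The classic two-argument open(): the "|" mode and the user-supplied
    # text share one string, and that string goes through a shell, so a
    # recipient like "nobody@example.com; rm -rf ~" becomes a second
    # command.
    #
    #   open(MAIL, "|/usr/lib/sendmail -t $recipient")
    #       or die "cannot run sendmail: $!";

    # The list form of three-argument open() keeps control (the '|-' mode,
    # the program name) separate from data and never involves a shell.
    open(my $mail, '|-', '/usr/lib/sendmail', '-t')
        or die "cannot run sendmail: $!";
    print {$mail} "To: $recipient\n\n(test)\n";
    close $mail;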
I believe the first instance of this kind of attack came in the 60's
when Woz and his merry band discovered the magic of the blue box.
Because Ma Bell (why is it called Ma, anyway?) used in-band signalling
at 2600Hz to control the routing of the trunk, it was child's play
(compared to exploits today) to control it oneself.
PHP's mail() function invites exactly the same mixing of control and
data: headers and body are separated only by newlines, and
attacker-supplied newlines in a form field pass straight through into
the header block; this leads to email header injection on feedback
forms and other such annoyances.
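Same idea, different wrapper. Here's a made-up Perl sketch of what the
injected message ends up looking like (addresses and the form field
are invented); the PHP-specific plumbing differs, but the failure is
the same:

    use strict;
    use warnings;

    # The form only asked the visitor for a subject line.
    my $subject = "hello\r\nBcc: everyone\@example.net\r\n\r\nBuy my pills";

    # Interpolating it straight into the header block lets the visitor
    # end the headers early (the blank line) and add recipients and a
    # brand new body.
    my $message = "To: webmaster\@example.com\r\n"
                . "Subject: $subject\r\n"
                . "\r\n"
                . "The body the script thought it was sending.\r\n";
    print $message;

    # Minimal defence: refuse CR or LF anywhere in a header value.
    die "newline in header value\n" if $subject =~ /[\r\n]/;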
The sad part about all of this is that everyone seems hell-bent on
reinventing the wheel before learning about the existing wheels. As
far as I can tell, PHP doesn't do anything Perl* or Python can't do as
well (except let in intruders), and this is after nearly a decade
(IIRC) of evolution and work and language fragmentation. And most of
the sites I see that use active content (like JavaScript or Flash)
don't really need it; the webmaster just got bored or lazy or both.
Most email doesn't require hypertext markup and embedded images,
either.
[*] PHP is designed for dealing with untrusted data from
unauthenticated remote parties, and yet it doesn't even have taint
tracking, which Perl has had since the early 90s.
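For those who haven't used it, roughly what taint mode buys you (the
address pattern and the sendmail invocation are just illustrative):

    #!/usr/bin/perl -T
    use strict;
    use warnings;

    # Under -T, anything from outside the program (@ARGV, %ENV, CGI
    # input, file reads) is tainted, and tainted data refuses to flow
    # into anything that touches the outside world: system(), exec(),
    # piped opens, writes to attacker-named files, and so on.
    $ENV{PATH} = '/bin:/usr/bin';             # must be untainted before spawning
    delete @ENV{qw(IFS CDPATH ENV BASH_ENV)};

    my $addr = shift @ARGV or die "usage: $0 address\n";

    # Dies with "Insecure dependency in system while running with -T
    # switch", because $addr is tainted:
    #
    #   system("/usr/lib/sendmail -t $addr");

    # Untainting is deliberate: capture only the part you have verified.
    my ($safe) = $addr =~ /\A([\w.+-]+\@[\w.-]+)\z/
        or die "suspicious address: $addr\n";
    system('/usr/lib/sendmail', '-t', $safe) == 0
        or die "sendmail failed: $?\n";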
> servers inherently want to
> be known and accessed which makes them a great static target to be
> easily analyzed in depth.
Bingo. They are named in canonical ways, and they provide service(s)
to the general public (e.g. HTTP, SMTP, DNS). Which means they cannot
be isolated with a normal layer 3 firewall without nullifying their
reason for existence. And there's lots of technology for manipulating
them (search engines, browsers), and such technology has widespread
adoption and availability. And the input they accept is too
complicated to interpret safely; think of all the vulnerabilities due
to URI encoding (do %20, %0A and %3B look familiar to anyone?).
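In case they don't: %20 is a space, %0A a newline, %3B a semicolon,
and whether you decode before or after you filter makes all the
difference. A small sketch with a made-up parameter, using the
URI::Escape module from CPAN:

    use strict;
    use warnings;
    use URI::Escape qw(uri_unescape);

    my $param = 'report.txt%3B%20rm%20-rf%20%2F';
    print uri_unescape($param), "\n";    # report.txt; rm -rf /

    # Filtering the raw string for ";" finds nothing, and filtering
    # after a single decode is defeated by double encoding:
    my $double = 'report.txt%253B%2520rm';
    print uri_unescape($double), "\n";                 # report.txt%3B%20rm
    print uri_unescape(uri_unescape($double)), "\n";   # report.txt; rm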
I don't think there's a single root cause that stands out:
1) service provided to the public
2) popular, well-known and widespread, even among non-programmers
3) commerce-enabling
4) complex (most people can't write a syntactically correct HTML page
first time, nor can they properly sanitize inputs)
5) low barrier to entry ($5/mo shared hosting)
6) idiosyncrasies that lead to vulnerabilities (mail(),
register_globals, allow_url_fopen, buffers in C)
7) Overly permissive software (Oh, you forgot to include half your
HTML tags? That's alright, I'll make an uninformed guess about what
you intended and render that. Oh, you can't send a newline? That's
alright, use %0A instead! Hey, heard of UTF-8? How about deep
Unicode support?)
8) Function creep. The Internet is an evolving, open-ended design,
and nobody can guarantee that future changes won't seriously undermine
the security of a current system, in a way analogous to "bit-rot"
which makes software stop working (that is, we are constantly changing
the things our software uses and upon which it depends).
9) The difficulty in making something that doesn't fail, as opposed to
making something that works when you run it once. This has been
called "programming Satan's computer" and is much, much more difficult
than it seems. Strictly speaking, security is a subset of
correctness; if you shoot for the latter you get the former as a
consequence. Just look at the difficulty people have getting
comprehensive code coverage in their test cases and you'll get an idea
of how difficult being correct on _every_ input and in _every_
environment is.
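To make point 9 concrete, here is a made-up "sanitizer" of the kind
that passes the one test its author ran and still fails on the input
an attacker actually sends:

    use strict;
    use warnings;

    # Strip directory traversal sequences... in a single pass.
    sub strip_dotdot {
        my ($path) = @_;
        $path =~ s{\.\./}{}g;
        return $path;
    }

    # Works when you run it once:
    print strip_dotdot("../../etc/passwd"), "\n";        # etc/passwd

    # Fails on the input that matters: each "....//" collapses back
    # into "../" after the removal pass.
    print strip_dotdot("....//....//etc/passwd"), "\n";  # ../../etc/passwd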
> However, the point that Gadi made that writing securely in PHP is
> inherently difficult, I strongly disagree with. For example, NVD
> shows that the same perl formmail that I identified above continued
> to have exploited vulnerabilities at least through the end of 2002.
The first sentence does not seem related to the second.
> Although computer scientists contend that strongly typed languages
> are better than loosely typed ones,
Hrm, I think the general feeling is that strongly typed languages
catch more errors at compile/design time, unlike loosely typed
languages, which can throw an exception at any time. As a
consequence, the latter need virtually every code path traversed in
testing to catch what a language with stronger typing would have
caught automatically. In exchange, you have to spend a lot of time
dealing with things that may never be problems (I like to joke that
the problem with strongly typed languages is that they require such a
strong typist).
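A contrived Perl illustration of the "exception at any time" half of
that trade-off (the dispatch table is invented): nothing complains
about the missing handler until the bad branch actually runs, which is
exactly why every code path has to be exercised.

    use strict;
    use warnings;

    # Dispatch table with a hole in it: 'stop' never got a handler.
    # With an enum and exhaustive matching this would be a compile-time
    # error; here it is invisible until that branch executes.
    my %handlers = (
        start  => sub { "starting" },
        status => sub { "running" },
    );

    for my $cmd (qw(start status stop)) {
        # Dies on 'stop': "Can't use an undefined value as a subroutine
        # reference".
        print "$cmd: ", $handlers{$cmd}->(), "\n";
    }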
But I don't see how typing discipline really relates to security or
correctness. It may be fine for your application to throw an
unchecked exception and die at run-time. I'd take that over a
language without strings as first-class objects (buffer overflows) or
without exceptions (unchecked return codes) any day of the week.
Dying on a failed assertion may well be the most secure thing a
program can do; the whole point of putting assertions in is to
document the states in which you don't know what to do, and taking
them out in production is like taking your seat belt off after
driver's education.
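A tiny made-up example of that point:

    use strict;
    use warnings;

    sub apply_discount {
        my ($price, $percent) = @_;

        # "This can never happen", so say so, loudly, and stop, rather
        # than carry on into a state nobody designed for.
        die "assertion failed: percent out of range ($percent)\n"
            if $percent < 0 || $percent > 100;

        return $price * (100 - $percent) / 100;
    }

    print apply_discount(200, 25), "\n";   # 150
    print apply_discount(200, 250), "\n";  # dies instead of paying the customer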
> Hosting services could easily hold their clients to strict TOS,
> perform proper patch and vulnerability management, scan their clients
> disk space for software versions that have identified vulnerabilities
> and disable hosts until the software has been updated, monitor httpd
> logs and block non local IPs in realtime that attempt to access
> awstats.pl, mambo files when mambo is not installed, and other threat
> signatures, monitor for irc traffic on webservers, etc.
Technically, yes. Ideally, yes. Economically, that's a lot of effort
and sunk costs for no ROI.
They could also prevent their customers from installing any software
except that which is on an approved list, but that's unlikely to earn
them many customers.
As is so often the case with practical computer security problems,
the roots are economic: the people with the ability don't have an
incentive; secure software looks just like exploitable software to
the average customer; Joe doesn't care that his account password was
guessable (it's "joe") because he only uses it for printing; and the
email attachment he just sent to everyone is a really cute game that
requires administrator privileges to run.
> Whether the fact
> that many ISPs and hosting services are not technically equipped to
> deal with the "server" problem or just don't care is unknown.
I care, but when it comes down to having a satisfied customer because
Mambo is running versus explaining to an angry customer that being a
good netizen involves not running the all-singing, all-dancing craplet
of the week as root, I know which my CFO would prefer.
--
Good code works. Great code can't fail. -><-
<URL:http://www.subspacefield.org/~travis/>
For a good time on my UBE blacklist, email [EMAIL PROTECTED]
