Note:
The following analysis is a compilation of several technical discussions
with Allaire that I feel should be published to allow the Allaire customer
community to draw their own conclusions as to the actual threat risk and
severity of this vulnerability. The example exploit reconnaissance
information has been altered to protect the innocent.
> We've run a check of denial of service scenarios with different ColdFusion
> Administrator settings that we believe are relevant to this issue. Below are
> what we believe an attacker would need to perpetrate a successful attack
> involving the use of <CFCACHE>.
>
> - knowledge of which ColdFusion template(s) on the target server use the
> <CFCACHE> tag.
http://www.somewebsite.com/cfcache.map
Contents:
[index.cfm]
Mapping=D:\Web65\Files\CFC155.tmp
SourceTimeStamp=04/24/1999 04:49:43 PM
The directory structure and cfcache.map file location of the target website
are irrelevant. Obviously there are still quite a number of sites that
haven't implemented the previously mentioned Security Bulletin:
(ASB00-03): Patch Available For Potential Information Exposure By The
CFCACHE Tag (URL:
http://www.allaire.com/handlers/index.cfm?ID=13978&Method=Full)
Obscuring this information won't do much good either, because it doesn't
address the core issue of the CFCACHE vulnerability.
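For illustration, the cfcache.map format shown above is trivial to parse. A
minimal Python sketch (the sample contents mirror the example in this post;
actually fetching the file from a target server is deliberately left out):

```python
# Minimal parser for the INI-style cfcache.map format shown above.
# Section headers are template filenames; key=value lines follow.

def parse_cfcache_map(text):
    """Return {template: {'Mapping': ..., 'SourceTimeStamp': ...}}."""
    entries = {}
    current = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith('[') and line.endswith(']'):
            current = line[1:-1]
            entries[current] = {}
        elif '=' in line and current is not None:
            key, _, value = line.partition('=')
            entries[current][key] = value
    return entries

# Sample contents taken from the example in this post.
sample = """\
[index.cfm]
Mapping=D:\\Web65\\Files\\CFC155.tmp
SourceTimeStamp=04/24/1999 04:49:43 PM
"""

cache_map = parse_cfcache_map(sample)
print(cache_map['index.cfm']['Mapping'])   # the cached copy's path on disk
```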
> - knowledge of server state regarding which template(s) that use the
> <CFCACHE> tag have had cached copies removed from the template's
> CACHEDDIRECTORY attribute-specified caching directory or have never been
> loaded by the server.
Using the method above, as an attacker, I can safely assume that other files
(if they exist) in the directory have a pretty good likelihood of having
caching enabled.
http://www.somewebsite.com/developer/cfcache.map
[index.cfm]
Mapping=D:\Web65\Files\developer\CFC7E.tmp
SourceTimeStamp=02/03/1999 12:34:43 PM
[gallery.cfm]
Mapping=D:\Web65\Files\developer\CFC82.tmp
SourceTimeStamp=04/01/1999 10:17:45 PM
In a matter of a few minutes, it is trivial to spider the current directory
and find out what other files in the directory use caching by comparing the
contents of my initial capture of the cfcache.map file to the cfcache.map
version after my spidering experiment completes.
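The before-and-after comparison described here amounts to a set difference on
the map's section names. A minimal sketch in Python, using hypothetical
sample captures in the format shown above:

```python
# Compare two captures of a cfcache.map file to see which templates
# gained cache entries after a spidering pass. Section names are the
# template filenames; the sample captures below are hypothetical.

def map_sections(text):
    """Return the set of [section] names in an INI-style cfcache.map."""
    return {line.strip()[1:-1]
            for line in text.splitlines()
            if line.strip().startswith('[') and line.strip().endswith(']')}

before = """\
[index.cfm]
Mapping=D:\\Web65\\Files\\developer\\CFC7E.tmp
"""

after = """\
[index.cfm]
Mapping=D:\\Web65\\Files\\developer\\CFC7E.tmp
[gallery.cfm]
Mapping=D:\\Web65\\Files\\developer\\CFC82.tmp
"""

newly_cached = map_sections(after) - map_sections(before)
print(sorted(newly_cached))   # templates cached only after the spidering pass
```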
So, for the purposes of this example, let's say I performed this against the
current directory mentioned above and discovered a file, foo.cfm, that hadn't
been cached yet but had caching enabled. What have I learned as an
attacker?
1.) knowledge of server state regarding which template(s) that use the
<CFCACHE> tag have had cached copies removed from the template's
CACHEDDIRECTORY attribute-specified caching directory or have
never been loaded by the server.
2.) knowledge of a file that is not accessed very often, is more likely to
expire in cache, and probably won't be refreshed after the cache expiration
date.
What is the attacker's next step?
Patience. Patience is the key to performing this attack with no inside
knowledge of the system. I document this information, continue gathering
information on this site (or others) and then plan my scope of attack:
1.) When do I want to attempt an attack? What period is safe to
assume the cache file will expire? In most cases, the answer is probably 24
hours.
2.) Do I want to attack one site at once or launch a multiple-site
attack, attempting to attack multiple sites on different domains
simultaneously?
> - knowledge of CFA Settings: 'Limit Simultaneous Requests' setting AND
> that 'Timeout requests after XX seconds' is OFF (or that 'Timeout requests
> after XX seconds' is set to a very high number of seconds).
This item is really trivial: if a serious attack were to occur, we should
assume that anywhere between 15 and several thousand requests will be made,
15 being just slightly higher than Allaire's recommended baseline of 10
simultaneous requests for a dual-processor system (a configuration that
easily covers 80-95% of all webservers).
Timeout settings are irrelevant once the number of connections moderately
exceeds the maximum number of allowed requests. If timeouts are enabled and
set, timeouts will occur, and queued requests will enter the active state and
also time out (extending the period during which the server is unavailable to
everyone else).
Crunching the numbers:
50 requests sent in 1 second @ 60-second timeouts / 5 simultaneous requests =
600 seconds (10 minutes)
Not too bad; probably no one will even notice... but how about steadily
applying 50 requests per second for 10 minutes?
50 requests/second * 60 seconds * 10 minutes = 30,000 total requests; with
only 50 of those clearing in that time, 29,950 requests are still in queue at
the 10-minute mark.
29,950 requests @ 60-second timeouts / 5 simultaneous requests = 359,400
seconds (5,990 minutes, or roughly 99.8 hours)
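The queue arithmetic can be sketched in Python, assuming a fixed pool of 5
request slots, each occupied for the full 60-second timeout (on these
assumptions the 10-minute backlog works out to 29,950 requests):

```python
# Back-of-the-envelope queue arithmetic for the scenario above:
# requests arrive at 50/second for 10 minutes; the server works off
# 5 at a time, each taking the full 60-second timeout to clear.

RATE = 50            # requests per second
DURATION = 600       # attack length in seconds (10 minutes)
SLOTS = 5            # 'Limit Simultaneous Requests'
TIMEOUT = 60         # seconds before an active request times out

arrived = RATE * DURATION                   # total requests during the attack
cleared = SLOTS * (DURATION // TIMEOUT)     # only 5 clear per timeout window
backlog = arrived - cleared                 # requests still queued

drain_seconds = backlog // SLOTS * TIMEOUT  # time to time out the backlog
print(backlog, drain_seconds / 3600)        # backlog count, drain time in hours
```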
Both of these scenarios assume that I'm the only one making requests on the
server. In a production environment, though, once even the most basic
example threshold of 600 seconds of server unavailability has been reached,
the problem continues to grow as casual users unwittingly compound the issue
by adding their own requests to the queue. On a moderately busy server, I
can apply a simple form of the attack and let the rest of the legitimate
visitor base inflate the attack scope. Kind of reminds you of a Distributed
Denial of Service, doesn't it? The cool part here is that I've just
recruited the rest of my target website's audience - without their
knowledge - to help me participate in the attack.
This is also assuming that the server in question can even handle this load
gracefully before hanging for performance reasons, but you can clearly see
that the possibility exists.
> 6. Use of a load testing or other load-generation or denial-of-service
> tool to actually request the template in question exactly simultaneously
> with more connections than the ColdFusion Administrator setting for 'Limit
> Simultaneous Requests'. Tests could not cause a successful attack manually
> using Internet browsers; an automated load testing tool had to be used.
>
> Using exactly the same number of full-speed load test robots as the
> ColdFusion Administrator setting for 'Limit Simultaneous Requests' creates a
> stress condition the server will recover normally from. Using a large
> number of load test robots could cause the deadlock condition if all of the
> above information is known, conditions are right and all settings are set as
> described, but our testing indicates a substantially higher number of
> automated test robots would be required than the number of Simultaneous
> Requests set in the ColdFusion Administrator. Additionally, the attack
> could not be initiated via a regular Internet browser issuing repeated
> identical requests.
My initial tests using only a browser would confirm this. However, I also
found, at least on 4.5.1, that 'Refresh'-ing browser requests would easily
cause the active thread queue to begin to grow quickly, and I can still
cause this to occur using only a browser and my swift-clicking index
finger (tm).
Given the availability of off-line caching software and freely available
load testing tools (Microsoft Web Stress Tool, for example), this attack can
be successfully carried out by both novice script kiddies and even the most
casual website visitor, probably without their even being aware that the
slow-loading webpage they are attempting to refresh is causing the ColdFusion
server a great amount of stress.
While this might seem to be a very strong point in your favor, it is really
the least of your problems concerning this vulnerability.
> Given these demanding requirements, our current thinking is that this
> particular attack would likely originate from a combination of insider site
> information and conventional denial of service attack, rather than a
> strictly conventional denial of service vulnerability.
I believe I have effectively shown this to be inaccurate with my comments
above. While it is true that a conventional denial of service would also be
effective against any site, this vulnerability can also be exploited without
much attacker knowledge, preparation or effort.
> The rest of your analysis is consistent with the design of a conventional
> denial of service attack. Since information required to make a focused
> attack is not easily available to an outsider, the methods you outline for
> stressing a server to the point of deadlocking are equivalent to a
> conventional denial of service attack. Any server will bottom out under a
> certain load. It's the responsibility of the system and network architects
> to make sure that the chance of a server or cluster of servers going down is
> minimal given traffic estimates. It's possible that this topic is
> well-suited to a knowledge-base or best practices article, and we thank you
> for bringing that to our attention.
There are really two scenarios for attack that need to be addressed here:
1.) Conventional Denial-of-Service
Defined as an insane number of requests intent on deadlocking the server.
Fact: Any server, any platform, any application is vulnerable.
Fact: Easy to detect, react to and prosecute.
2.) Minimum-Effort Denial of Service Attack
Defined as a number of requests just sufficient to hang the ColdFusion
server; a baseline of 15 simultaneous requests would likely exceed the number
of allowable requests and allow this attack to occur.
Fact: Knowledge of the CFCACHE'ed page locations is a bonus, but not
necessary - any large site will use caching; it's simply a matter of
trial and error.
Fact: Low number of requests needed to execute the attack.
Fact: Attack can be sustained by legitimate website visitors, unknowingly
contributing to the ColdFusion server application deadlock condition.
Fact: Difficult to detect without exhaustive analysis of the logs, difficult
to correct if not aware of the issue, and difficult to prosecute.
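On the detection point: even without exhaustive manual log review, a simple
script can flag the signature of this attack - a burst of requests for the
same template inside a short window. A hedged sketch; the log line format
(timestamp, then URL) and the threshold of 15 requests per minute are
illustrative assumptions only, not anything from Allaire's documentation:

```python
# Flag URLs that receive a suspicious burst of requests within one
# minute. The log format (timestamp<space>url) and the threshold are
# illustrative assumptions only.
from collections import Counter

THRESHOLD = 15   # just above Allaire's baseline of 10 simultaneous requests

def flag_bursts(log_lines, threshold=THRESHOLD):
    """Return (minute, url) pairs whose request count meets the threshold."""
    counts = Counter()
    for line in log_lines:
        timestamp, url = line.split(None, 1)
        minute = timestamp[:16]            # e.g. '1999-04-24T16:49'
        counts[(minute, url.strip())] += 1
    return sorted(k for k, n in counts.items() if n >= threshold)

# Hypothetical log excerpt: 20 hits on /foo.cfm inside one minute,
# plus one ordinary request to /index.cfm.
log = ['1999-04-24T16:49:%02d /foo.cfm' % s for s in range(20)]
log += ['1999-04-24T16:49:05 /index.cfm']

print(flag_bursts(log))   # [('1999-04-24T16:49', '/foo.cfm')]
```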
> Although it is true that web stress tools are widely available, any attempt
> to use such tools against commercial websites today would result in
> aggressive investigation and criminal prosecution. As the attacks on
> several large commercial websites proved, even the most carefully planned
> attempts to cover an attacker's tracks in a denial of service (or DDoS)
> attack will likely still result in criminal prosecution.
It is Allaire's responsibility to its customers to make its products as
stable, reliable and secure as possible. In my opinion, this matter is
still not being given the attention it deserves.
Regards,
Ryan
Ryan Hill, MCSE
Director of Systems Integration
Market Matrix, Inc. - http://www.marketmatrix.com
------------------------------------------------------------------------------
Archives: http://www.eGroups.com/list/cf-talk
To Unsubscribe visit
http://www.houseoffusion.com/index.cfm?sidebar=lists&body=lists/cf_talk or send a
message to [EMAIL PROTECTED] with 'unsubscribe' in the body.