http://www.softwarefreedom.org/resources/2010/transparent-medical-devices.html

III  Why Free Software is More Secure

“Continuous and broad peer-review, enabled by publicly available
source code, improves software reliability and security through the
identification and elimination of defects that might otherwise go
unrecognized by the core development team. Conversely, where source
code is hidden from the public, attackers can attack the software
anyway …. Hiding source code does inhibit the ability of third parties
to respond to vulnerabilities (because changing software is more
difficult without the source code), but this is obviously not a
security advantage. In general, ‘Security by Obscurity’ is widely
denigrated.” — Department of Defense (DoD) FAQ’s response to question:
“Doesn’t Hiding Source Code Automatically Make Software More
Secure?”17

The DoD indicates that free and open source software (FOSS) has been
central to its Information Technology (IT) operations since the
mid-1990s and that, according to some estimates, one-third to one-half
of the software currently used by the agency is open source.18 The
U.S. Office of Management and Budget issued a memorandum in 2004
recommending that all federal agencies use the same procurement
procedures for FOSS as they would for proprietary software.19 Other
public sector agencies, such as the
U.S. Navy, the Federal Aviation Administration, the U.S. Census Bureau
and the U.S. Patent and Trademark Office have been identified as
recognizing the security benefits of publicly auditable source code.20

To understand why free and open source software has become a common
component in the IT systems of so many businesses and organizations
that perform life-critical or mission-critical functions, one must
first accept that software bugs are a fact of life. The Software
Engineering Institute estimates that an experienced software engineer
produces approximately one defect for every 100 lines of code.21 At
that rate, a modest, one-million-line code base would start out with
roughly 10,000 defects; even if 90 percent of them are fixed over the
course of a typical program life cycle, approximately 1,000 bugs would
remain.
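This back-of-the-envelope arithmetic can be sketched directly. The
snippet below is only illustrative: the one-defect-per-100-lines rate
comes from the SEI estimate cited above, while the 90 percent fix rate
is an assumption chosen to be consistent with "most" bugs being fixed
and roughly 1,000 remaining.

```python
# Back-of-the-envelope defect arithmetic for a one-million-line code base,
# using the SEI estimate of roughly 1 defect per 100 lines of code.
lines_of_code = 1_000_000
initial_defects = lines_of_code // 100  # ~10,000 defects at 1 per 100 lines

# Assumed fix rate: the text says "most" bugs are fixed over a typical
# program life cycle; 90% is a hypothetical figure consistent with the
# ~1,000 remaining bugs it cites.
fix_rate = 0.90
remaining_defects = round(initial_defects * (1 - fix_rate))

print(initial_defects, remaining_defects)  # 10000 1000
```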

In its first “State of Software Security” report released in March
2010, the private software security analysis firm Veracode reviewed
the source code of 1,591 software applications voluntarily submitted
by commercial vendors, businesses, and government agencies.22
Regardless of program origins, Veracode found that 58 percent of all
software submitted for review did not meet the security assessment
criteria the report established.23 Based on its findings, Veracode
concluded that “most software is indeed very insecure …[and] more than
half of the software deployed in enterprises today is potentially
susceptible to an application layer attack similar to that used in the
recent …Google security breaches.”24

Though open source applications had almost as many source code
vulnerabilities upon first submission as proprietary programs,
researchers found that they contained fewer potential backdoors than
commercial or outsourced software and that open source project teams
remediated security vulnerabilities within an average of 36 days of
the first submission, compared to 48 days for internally developed
applications and 82 days for commercial applications.25 Not only were
bugs patched fastest in open source programs, but the quality of the
remediation was also higher than that of commercial programs.26

Veracode’s study corroborates the research and anecdotal evidence on
the security benefits of open source software published over the past
decade. According to the web-security analysis site SecurityPortal,
vulnerabilities in Red Hat/Linux systems took an average of 11.2 days
to be spotted (standard deviation 17.5), compared to an average of
16.1 days (standard deviation 27.7) in Microsoft programs.27

Sun Microsystems’ COO Bill Vass summed up the most common case for
FOSS in a blog post published in April 2009: “By making the code open
source, nothing can be hidden in the code,” Vass wrote. “If the Trojan
Horse was made of glass, would the Trojans have rolled it into their
city? NO.”28

Vass’ logic is backed up by numerous research papers and academic
studies that have debunked the myth of security through obscurity and
advanced the “more eyes, fewer bugs” thesis. Though it might seem
counterintuitive, making source code publicly available for users,
security analysts, and even potential adversaries does not make
systems more vulnerable to attack in the long run. On the contrary,
keeping source code under lock and key is more likely to hamstring
“defenders” by preventing them from finding and patching bugs that
potential attackers could exploit to gain entry into a given code
base, whether or not the supplier restricts access.29 “In
a world of rapid communications among attackers where exploits are
spread on the Internet, a vulnerability known to one attacker is
rapidly learned by others,” reads a 2006 article comparing open source
and proprietary software use in government systems.30 “For Open
Source, the next assumption is that disclosure of a flaw will prompt
other programmers to improve the design of defenses. In addition,
disclosure will prompt many third parties — all of those using the
software or the system — to install patches or otherwise protect
themselves against the newly announced vulnerability. In sum,
disclosure does not help attackers much but is highly valuable to the
defenders who create new code and install it.”

Academics and internet security professionals appear to have reached a
consensus that open, auditable source code gives users the ability to
independently assess the exposure of a system and the risks associated
with using it; enables bugs to be patched more easily and quickly; and
removes dependence on a single party, forcing software suppliers and
developers to spend more effort on the quality of their code, as
authors Jaap-Henk Hoepman and Bart Jacobs also conclude in their 2007
article, Increased Security Through Open Source.31

By contrast, vulnerabilities often go unnoticed, unannounced, and
unfixed in closed source programs because the vendor, rather than the
users who have a higher stake in the software’s quality, is the only
party permitted to evaluate the security of the code base.32
Some studies have argued that commercial software suppliers have less
of an incentive to fix defects after a program is initially released,
so users do not become aware of vulnerabilities until after those
vulnerabilities have caused a problem. “Once the initial version of [a
proprietary software
product] has saturated its market, the producer’s interest tends to
shift to generating upgrades …Security is difficult to market in this
process because, although features are visible, security functions
tend to be invisible during normal operations and only visible when
security trouble occurs.”33

The consequences of manufacturers’ failure to disclose malfunctions to
patients and physicians have proven fatal in the past. In 2005, a
21-year-old man died of cardiac arrest after the implantable
cardioverter-defibrillator (ICD) he wore short-circuited and failed to
deliver a life-saving shock. The fatal incident prompted Guidant, the
manufacturer of the flawed ICD, to recall four of its device models.
In total, 70,000 Guidant ICDs were recalled in one of the biggest
regulatory actions of the past 25 years.34

Guidant came under intense public scrutiny when the patient’s
physician, Dr. Robert Hauser, discovered that the company had first
observed the flaw that caused his patient’s device to malfunction in
2002, and had even gone so far as to implement manufacturing changes
to correct it, yet failed to disclose it to the public or the
health-care industry.

The body of research analyzed for this paper points to the same
conclusion: security is not achieved through obscurity and closed
source programs force users to forfeit their ability to evaluate and
improve a system’s security. Though there is lingering debate over the
degree to which end users contribute to the maintenance of FOSS
programs, and over how to ensure the quality of submitted patches,
most of the evidence supports this paper’s central assumption that
auditable, peer-reviewed software is more secure than proprietary
programs.

Open source projects differ in the standards they use to ensure the
quality of submitted patches, but even the most open, transparent
systems have established methods of quality control. Well-established
open source software, such as the kind favored by the DoD and the
other agencies mentioned above, cannot be infiltrated by “just
anyone.” To protect the code base from potential adversaries and
malicious patch submissions, large open source projects maintain a
“trusted repository” that only certain “trusted” developers can
directly modify. As an additional safeguard, the source code is
publicly released: not only are more people policing it for defects,
but more copies of each version of the software exist, making it
easier to compare new code against the canonical release and detect
tampering.
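The copy-comparison safeguard can be sketched in a few lines. The
snippet below is an illustrative toy (the file contents are invented):
anyone holding an independent copy of a release can check it against
the canonical source by comparing cryptographic digests, so any
modification, however small, is detectable.

```python
import hashlib

def sha256_digest(source: bytes) -> str:
    """Return the SHA-256 hex digest of a blob of source code."""
    return hashlib.sha256(source).hexdigest()

# Hypothetical contents: the canonical release from the trusted repository,
# a faithful mirror copy, and a copy with a smuggled-in backdoor.
canonical = b"int main(void) { return 0; }\n"
mirror    = b"int main(void) { return 0; }\n"
tampered  = b"int main(void) { backdoor(); return 0; }\n"

# Identical copies produce identical digests; any change at all
# produces a different digest.
assert sha256_digest(mirror) == sha256_digest(canonical)
assert sha256_digest(tampered) != sha256_digest(canonical)
```

With many independently held copies in circulation, a tampered
repository cannot go unnoticed for long, which is the point the
paragraph above makes about public release acting as a safeguard.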
...
