Re: [SC-L] SearchSecurity: Cyber Security and the Law

2012-08-02 Thread Greg Beeley
How would we recognize good engineering?

It seems to me that the very same problem facing the idea of software
liability law - that good engineering is hard to define for software
security - would also face an incentive program.  If "good
engineering" is fuzzy enough to give a big corporate legal dept the
upper hand against an individual, wouldn't it be similarly fuzzy enough
to undermine the fairness of a tax incentive?

Tax breaks are a big deal - I doubt the government is going to want to
issue tax breaks to a company just because the company claims to have
achieved level X in a CMM -- think about the economic cost of
demonstrating something like that to the point where it is fair and
worth something.  I also doubt that a metric based on vulnerability
counts will work -- that will just encourage companies to hide
vulnerabilities, fixing them silently and/or with great delay, instead
of disclosing them.

Not that I think that incentives inherently wouldn't work -- rather I'd
be interested in seeing some discussion here on some of the above issues.

One alternative that has worked well in many other areas of
manufacturing would be to encourage some kind of limited warranty, at
least in certain industries.  For consumer mobile devices, it might be
something as simple as, "if your device's security is ever compromised
due to a flaw in the bundled device software, we'll repair it free of
charge".
The big challenges are 1) getting customers to care about their device's
security, and 2) making a vendor's commitment to security recognizable
by the customer.  By no means ideal, but at least a talking point.

- Greg

Gary McGraw wrote, On 08/02/2012 08:40 AM:
> Hi Jeff,
> 
> I'm afraid I disagree.  The hyperbolic way to state this is, imagine YOUR
> lawyer faced down by Microsoft's army of lawyers. You lose.
> 
> Software liability is not the way to go in my opinion.  Instead, I would
> like to see the government develop incentives for good engineering.
> 
> gem
> 
> On 8/2/12 10:26 AM, "Jeffrey Walton"  wrote:
> 
>> Hi Dr. McGraw,
>>
>>> Cyber Intelligence Sharing and Protection Act (CISPA) passed by
>>> the House in April) has very little to say about building security in.
>> I'm convinced (in the US) that users/consumers need a comprehensive
>> set of software liability laws. Consider the number of mobile devices
>> that are vulnerable because OEMs stopped providing (or never provided)
>> patches for vulnerabilities. The equation [risk analysis] needs to be
>> unbalanced just a bit to get manufacturers to act (doing nothing is
>> cost-effective at the moment).
>>
>> Jeff
>>
>> On Wed, Aug 1, 2012 at 10:28 AM, Gary McGraw  wrote:
>>> hi sc-l,
>>>
>>> This month's [in]security article takes on Cyber Law as its topic.  The
>>> US Congress has been debating a cyber security bill this session and is
>>> close to passing something.  Sadly, the Cybersecurity and Internet
>>> Freedom Act currently being considered in the Senate (as an answer to
>>> the problematic  Cyber Intelligence Sharing and Protection Act (CISPA)
>>> passed by the House in April) has very little to say about building
>>> security in.
>>>
>>> Though cyber law has always lagged technical reality by several years,
>>> ignoring the notion of building security in is a fundamental flaw.
>>>
>>>
>>> http://searchsecurity.techtarget.com/opinion/Congress-should-encourage-bug-fixes-reward-secure-systems
>>>
>>> Please read this month's article and pass it on far and wide.  Send a
>>> copy to your representatives in all branches of government.  It is high
>>> time for the government to tune in to cyber security properly.
>>>
> 
> 
> ___
> Secure Coding mailing list (SC-L) SC-L@securecoding.org
> List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
> List charter available at - http://www.securecoding.org/list/charter.php
> SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
> as a free, non-commercial service to the software security community.
> Follow KRvW Associates on Twitter at: http://twitter.com/KRvW_Associates
> ___
> 


Re: [SC-L] BSIMM3 lives

2011-10-22 Thread Greg Beeley
Gary,

Could you clarify your (and/or the BSIMM) position on "secure by design"
vs "designed to be secure"?  You're encouraging the adoption of
secure-by-design building blocks, as a part of SFD2.1, but then warning
that "designed to be secure" != "secure".  I can think of examples/ways
that what you've said can be true, but am not sure what you're actually
referring to.

Of course we all know that all systems have design and implementation
defects, though solid processes can significantly reduce the number of
those.  And we all can think of plenty of examples of security add-ons
that have actually worsened the true vulnerability of the resulting
software system.

From my perspective, there are a lot of security frameworks out there
that help software engineers "do the same thing more securely", and then
there are approaches that fundamentally change the way the "thing" is
done.  One example might be giving someone a better strcpy() on the one
hand, versus entirely swapping out their imperative programming paradigm
for a more declarative one.

Thanks,

- Greg

Gary McGraw wrote, On 10/21/2011 11:14 AM:
> The particular BSIMM activity in question is SFD2.1 (one of the 109 BSIMM
> activities).  Here is its description from page 27 of the BSIMM:
> SFD2.1: **Build secure-by-design middleware frameworks/common libraries.**
> The SSG takes a proactive role in software design by building or providing
> pointers to secure-by-design middleware frameworks or common libraries.

...

> What is implied is
> a warning that even things designed to be secure often may not be
> (buyer...or cut-n-paster...beware).


Re: [SC-L] [WEB SECURITY] Re: What do you like better Web penetration testing or static code analysis?

2010-05-05 Thread Greg Beeley
Regarding the code snippet -- it does depend on the environment -- point well
taken.  But in this case (from what I can tell), unless you actually have the
file_exists() function *disabled* in php.ini, this is vulnerable to XSS.
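To make the point concrete, here is a minimal sketch of the output-encoding fix that removes the php.ini dependence entirely (in Python rather than PHP, purely for illustration; `render_file_status` and the payload are my own hypothetical names): encode the user-supplied value at the HTML boundary, unconditionally.

```python
import html

def render_file_status(filename: str) -> str:
    """Build an HTML response around a possibly hostile filename.

    Rather than reasoning about which runtime settings make a raw
    echo safe, HTML-encode the user-supplied value unconditionally
    at the output boundary.
    """
    return "<p>File not found: %s :(</p>" % html.escape(filename)

# A classic XSS probe is neutralized into inert entity text:
page = render_file_status("<script>alert(1)</script>")
print(page)
```

The same one-line change (an `htmlspecialchars()` call in the PHP case) makes the snippet safe regardless of environment.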

- Greg

Sebastian Schinzel wrote, On 04/28/2010 04:03 AM:
> On Apr 28, 2010, at 7:10 AM, SneakySimian wrote:
>> $file = $_GET['file'];
>>
>> if (file_exists($file))
>> {
>>     echo $file;
>> }
>> else
>> {
>>     echo 'File not found. :(';
>> }
>>
>> Ignoring the other blatant issues with that code snippet, is that
>> vulnerable to XSS? No? Are you sure? Yes? Can you prove it? As it
>> turns out, it depends on a configuration setting in php.ini. The only
>> real way to know if it is an issue is to run it in the environment it
>> is meant to be run in. Now, that's not to say that the developer who
>> wrote that code shouldn't be told to fix it in a source code analysis,
>> but the point is, some issues are wholly dependent on the environment
>> and may or may not get caught during code analysis. Other issues such
>> as code branches that don't execute or do execute in certain
>> environments can be problematic to spot during normal source code
>> analysis.
> 
> So you suggest actually performing n black-box tests, where n is the
> number of all possible permutations of all variables in php.ini (hint:
> n will be very large)?  This is certainly not feasible.
> 
> Your code shows a very simple data flow, which may or may not be
> exploitable.  But this is not the point.  The point of software
> security is to increase the reliability of the software when under
> attack.
> 
> Reliable software performs output encoding when user input is printed
> to HTML. This code does not perform output encoding and should therefore
> be fixed.
> 
> The discussion about whether or not this is exploitable on which platforms
> is a waste of time. In many cases, you will find yourself spending a lot of
> time in trying to get a running exploit, whereas the actual fix for the
> code
> takes a fraction of the time.
> 
> For me, penetration testing is solely a method to raise awareness and
> to gather new security requirements FOR a customer application FROM
> security researchers.  Knowledge transfer from security researchers to
> the business is key here.  It helps find actual attacks but does not
> help the customer write better code.
> 
> Code audits (whether automated or manual) are the way to go to improve
> reliability by pointing out dangerous coding patterns.
> 
> My 0.02€...
> 
> Cheers
> Sebastian


Re: [SC-L] blog post and open source vulnerabilities to blog about

2010-03-17 Thread Greg Beeley
Matt,

You can find quite a list of OSS vulnerabilities over at CVE (cve.mitre.org)
or NVD (nvd.nist.gov), but here are a few that I tend to use for
illustrative purposes when teaching.

- Apache Chunked Encoding vuln (#CVE-2002-0392), an integer overflow.  Of
particular interest because when it was first discovered it was not believed
to be exploitable to gain remote root, but due to a nuance in a memcpy() /
memmove() implementation, it was (I think I'm remembering this right).  A
good example that exploitability depends on more than just the program
itself, but also on the underlying systems (libraries, compiler, hardware,
etc).

- OpenSSH crc32 compensation attack detector vulnerability (#CVE-2001-0144).
Of interest because this was a remote-root vulnerability in a piece of code
that was used solely to try to thwart an SSH protocol 1 cryptographic attack.
A good example of more code introducing more bugs, even when the "more code"
had an important security purpose.

- One that never made it into any distributed code, as it was caught in
version control, was a Linux kernel vulnerability that was a backdoor
attempt (http://kerneltrap.org/node/1584).  Of interest because it was
apparently an intentional "typo" bug to create a backdoor.  A good example
of something that could easily have slid by, but the way version control
was set up, along with the many eyes working on the kernel, brought it to
light quickly.

- A sendmail bug publicized back in 2006 (#CVE-2006-0058) was of interest
because the vulnerability was not a "typical" buffer overflow.  It was due
to (if I remember correctly -- the discussion of this vuln was pretty
opaque at the time, so I could be wrong on this) the intermixing of static
and automatic C function variables in a fairly complex attack scenario,
where a residual static pointer was pointing to a previous incarnation of
an automatic buffer, allowing an attacker to overwrite a section of the
stack if the attack was timed "just right" (it didn't need the nanosecond
precision that was widely publicized at first).  A good example of complex
code being more difficult to secure.
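Several of the examples above turn on low-level integer semantics.  As a tiny hedged illustration of the signedness class behind the chunked-encoding bug (this shows the general pattern only, not Apache's actual code; the names and values are made up):

```python
import ctypes

# A hypothetical chunk-size parse: the attacker supplies a huge hex
# length.  Truncated into a signed 32-bit int it becomes negative,
# so it sails past a "size < limit" sanity check before a copy.
declared = int("FFFFFFF0", 16)           # attacker-controlled chunk size
as_c_int = ctypes.c_int32(declared).value
print(as_c_int)                          # negative: passes a "< limit" check
```

The later memcpy()/memmove() nuance is what turned this from a crash into remote root, but the entry point is this kind of unchecked conversion.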

- Greg Beeley
  LightSys

Matt Parsons wrote, On 03/16/2010 10:41 AM:
>  
> 
> Hello,
> 
> I am working on a software security blog and I am trying to find open
> source vulnerabilities to present and share.  Does anyone else have any
> open source vulnerabilities that they could share and talk about?   I
> think this could be the best way to learn in the open source community
> about security.   I have a few but I would like to blog about a
> different piece of code almost every day.  
> 
>  
> 
> God Bless.
> Matt
> 
>  
> 
>  
> 
> http://parsonsisconsulting.blogspot.com/
> 
>  
> 
>  
> 
> Matt Parsons, MSM, CISSP
> 
> 315-559-3588 Blackberry
> 
> 817-294-3789 Home office
> 
> "Do Good and Fear No Man" 
> 
> Fort Worth, Texas
> 
> A.K.A The Keyboard Cowboy
> 
> mailto:mparsons1...@gmail.com
> 
> http://www.parsonsisconsulting.com
> 
> http://www.o2-ounceopen.com/o2-power-users/
> 
> http://www.linkedin.com/in/parsonsconsulting
> 
> http://parsonsisconsulting.blogspot.com/
> 
> http://www.vimeo.com/8939668


Re: [SC-L] Some Interesting Topics arising from the SANS/CWE Top 25

2009-01-13 Thread Greg Beeley
Steve, I agree with you on this one.  Both input validation and output
encoding are countermeasures to the same basic problem -- that some parts
of your string of data may get treated as control structures instead of
just as data.  For the purpose of this email I'm using a definition of
"input validation" as sanitizing/restricting data at its entry to a
program, and "encoding" as the generation of any string in any format
other than straight binary-safe data.  (Obviously, in many cases you will
have a more complex architecture, with individual modules/classes also
doing their own "input validation".)

Having both countermeasures in place is a belt-and-suspenders perspective
which is healthy.

However, input validation is primarily tied to business requirements (what
characters are required in the data field), and output encoding is tied to a
technical knowledge of the output format being used (whether HTML, SQL, a
shell command, CSV data, text for an eval() call, a UTF-8 string, etc.).

The only upside to relying primarily on input validation is that it gives a
sort of "perimeter protection", a firewall of sorts for the data coming in
that tends to protect all of the code "behind the firewall".  But of
necessity it is not likely to be a very "smart" firewall.

One big problem with relying primarily on input validation is that input
validation can be very far structurally removed from the point that causes
the trouble -- the injection/encoding point.  In fact, the programmer doing
the input processing may have no knowledge of how the data may be encoded
later, and the encodings needed may change with time as well.  Proper
output encoding puts the countermeasure in the same place as the knowledge
of the output format, and puts the responsibility where the expertise is.
It also makes the code much easier to audit, as you can tell easily that
the encoding process isn't vulnerable without having to trace the route of
every single encoded data item through the code and back up to its entry
point into the program (of course, for thorough auditing you'd do that
anyhow, but for purposes other than just that one encoding point).

A second big problem - as mentioned - is that input validation relies on
business requirements -- and you can't guarantee that the business
requirements won't require you to permit "troublesome" characters in the data
field, as in the example you gave.
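As a concrete illustration of the "O'Reilly" case (a sketch in Python with sqlite3; the table and names are hypothetical): the apostrophe must be allowed through input validation, so the real countermeasure lives at the SQL boundary, which a parameterized query delegates to the database driver.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

name = "O'Reilly"  # legitimate data that any input validation must accept

# Naive concatenation treats the apostrophe as a control character:
# the statement is malformed (and, in the general case, injectable).
try:
    conn.execute("INSERT INTO users VALUES ('" + name + "')")
    broken = False
except sqlite3.OperationalError:
    broken = True

# The parameterized form keeps data as data, no matter what it contains.
conn.execute("INSERT INTO users VALUES (?)", (name,))
stored = conn.execute("SELECT name FROM users").fetchone()[0]
print(broken, stored)
```

The same value that defeats concatenation round-trips cleanly through the placeholder, with no input-side restriction needed at all.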

- Greg

Steven M. Christey wrote:
> For example, is SQL injection strictly an input validation
> vulnerability, or output sanitization/validation/encoding or whatever
> you want to call it?  In a database, the name "O'Reilly" may pass your
> input validation step, but you need to properly quote it before sending
> messages to the database.  And the actual database platform itself has
> no domain model to "validate" whether the incoming query is consistent
> with business logic.  My personal thinking, which seems reflected by
> many web application people, is that many injection issues are related
> to encoding at their core, and the role of input validation is more
> defense-in-depth (WITH RESPECT TO INJECTION ONLY).  Yet smart people
> insist that it's still input validation, even when presented with the
> example I gave.  So what's the perspective difference that's causing
> the disconnect?


Re: [SC-L] Darkreading: Secure Coding Certification

2007-05-15 Thread Greg Beeley
 > [...] White list validation is the answer to everything except the
 > difficult choices developers have to make and often get wrong.
 > [...]
 > (past,present,future) of the data is that single application? How do you
 > test the ability for developers to make the best decisions in imperfect
 > situations?

I generally teach the developer to both whitelist where the data enters
the application from the user, AND to escape/encode when building a string
that combines both data (of any origin) and control elements (such as a SQL
statement, HTML document, file and pathname, zero-terminated string, etc.).

The first is always done based on the nature of the data (requirements).
The second is done based on the control boundaries and characters with
special meanings (quotes, html special characters, nul terminators,
directory slashes, "..", etc.).  While both are important ("defense in
depth"), in my opinion the latter is as important as, if not more
important than, the first.  This is for several reasons:

1) as you said, often a character used for a control boundary or
   which has special meaning is needed in the input data in order for
   the application to work;

2) when I read code, I want to be able to tell at a glance that it has
   a good chance of being secure, so countermeasures should preferably be
   done close to where they matter;

3) internal changes to the path of data through a program are not at all
   uncommon.  Basing your input-validation (at the point where the data
   enters the application) on the issues that might be present in the
   totality of the data's path through the application is something that
   is very unmaintainable.  And what happens when someone decides after-
   the-fact "we need to allow an apostrophe in that field", and the
   developers have to remember that the field in question eventually ends
   up going into a SQL statement between a pair of quotes?; and

4) Trusted data can become untrusted in a hurry!
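A minimal sketch of the belt-and-suspenders combination described above (in Python; the name-field rule and helper names are my own hypothetical choices): whitelist at the point of entry based on the field's business rules, and encode again at the point the data meets an output format.

```python
import html
import re

# Belt: whitelist at the point the data enters the application,
# driven by the business rules for the field -- which, for a name,
# must permit the apostrophe.
NAME_RE = re.compile(r"[A-Za-z][A-Za-z .'-]{0,63}")

def accept_name(raw: str) -> str:
    if not NAME_RE.fullmatch(raw):
        raise ValueError("name fails business-rule validation")
    return raw

# Suspenders: encode where the data is combined with control elements,
# here an HTML document, using knowledge of that output format.
def greet(name: str) -> str:
    return "<p>Hello, " + html.escape(name) + "</p>"

page = greet(accept_name("O'Brien"))
print(page)
```

Note that the apostrophe passes the whitelist (the business rules demand it) and is still rendered harmless by the encoder, so a later change to what the field permits doesn't silently open the output path.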

For instance, I've seen a fair bit of PHP code, and the quality varies quite
a bit, as we all know.  If there isn't very consistent use of escaping /
encoding / etc. when building HTML, SQL, cookies, filenames, etc., I worry.
And that is regardless of how much the application tends to "validate the
input data".  I am concerned when an application only uses escaping /
encoding / etc. when the developer thinks it must be necessary, based on
the apparent origin of the data.  Sure, there is a performance tradeoff
for more generous use of escaping/encoding/etc., but in my opinion it is
very much worth it.  (There is a price to be paid for performance, and if
performance is a top requirement, the project managers need to factor in
the security implications of tradeoffs like this one.)

When I teach this stuff, I tell folks, "don't just code so that it is
secure, but instead code so that others can see relatively easily that
it is secure".

Greg.



Re: [SC-L] Darkreading: Secure Coding Certification

2007-05-14 Thread Greg Beeley
 > 1. ONLY consultants and vendors have jumped on the bandwagon. Other IT
 > professionals such as those who work in large enterprises have no
 > motivation to pursue.
 >
 > 2. The target price for the exams will be an impediment as many folks who
 > can't get reimbursed for taking them will not bother.

Agreed.  There might be some value to a software development outsourcing
company, but that will limit coverage.  I definitely know that the pricing
issue would prevent me from taking the exam, but I'm in nonprofit/charity
work; I am not representative of most of the industry.

 > 3. It needs to be more language agnostic. Folks who code in Smalltalk,
 > Ruby or scripting languages should not be treated as second class citizens

Agreed in concept to the "no second-class citizens" idea.  But I think
the test needs to have a language-specific element to it.  Every language
and environment has unique pitfalls and security considerations.  A
developer who knows to avoid memory management, buffer, and integer issues
in C may have no clue about nul-poisoning in a web scripting language's
counted (as opposed to zero-terminated) strings.
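The counted-vs-zero-terminated mismatch can be sketched in a few lines of Python (a hypothetical filename; `ctypes` stands in for any C-level consumer): the scripting layer's counted string passes an extension check, while the C view of the same bytes ends at the embedded NUL.

```python
import ctypes

# A counted string happily carries an embedded NUL...
user_input = "secret.conf\0.jpg"
assert user_input.endswith(".jpg")   # the scripting-layer check approves it

# ...but a zero-terminated consumer of the same bytes stops at the NUL,
# so the file actually acted upon is not the one that was checked.
c_view = ctypes.create_string_buffer(user_input.encode("ascii")).value
print(c_view)
```

The file that was checked and the file that would be opened are different objects, which is exactly the language-specific pitfall a purely language-agnostic exam would miss.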

 > 4. I would not measure "experience" but desire to pursue knowledge.
 > Experience over time can get static. How many of us know a COBOL
 > programmer who has had one year of experience twenty times?

To me, the "experience" qualification isn't so much "how many years of
coding", but how much has the person actually practiced "secure coding"?
An experienced secure coder is much more able to recognize, at a glance,
issues in the code and in the design, as compared to someone who has been
recently trained at a secure coding "boot camp".  But I do agree with you
that experience in terms of time is a somewhat rough metric.

Greg.



Re: [SC-L] Darkreading: Secure Coding Certification

2007-05-12 Thread Greg Beeley
 > I agree that multiple choice alone is inadequate to test the true
 > breadth and depth of someone's security knowledge. Having contributed
 > a few questions to the SANS pool, I take issue with Gary's article
 > when it implies that you can pass the GSSP test while clueless.
 >
 > There is indeed a body of knowledge that is being tested. SANS has
 > been soliciting comments on the document.

Having taught this type of material before at the university and
vocational levels, I think there are three main aspects which are
important to someone's capability to code "securely":

1 - Knowledge of pitfalls, countermeasures, and good practices;
2 - The right mindset; and
3 - Experience carrying it out

(there are also the surrounding business issues, like project management
and planning, risk assessment and how well-vetted the software must be
given the cost/risk scenario, but I'll just stick to the coder for now).

I have not reviewed the GSSP (practice exams, etc.), but I am guessing
that it goes after the "low hanging fruit" of covering (1) above, which
is testable most easily with an exam.  It's much better than nothing, and
the knowledge is very important, but this test does not necessarily mean
that a particular coder will be a better "secure coder".  There's a lot
more to this than just a body of knowledge.

For example, you could give any auto mechanic a test that they could
pass if they know what the risks are in leaving a bolt loose, or a
fuel system clamp unsecured, or not replacing an O-ring when a
connection is opened (or if they can figure out those risks during the
exam, esp. a multiple-choice one).  But that does not mean that the
mechanic will actually follow through on those things, or that, in
practice, the mechanic will be any more likely even to notice them.

So, although I think the GSSP is an important first step, I tend to
agree with Gary.  In my university-level teaching of software
security, I would never even begin to consider evaluating my students
merely via multiple choice exams.  Not with this subject matter.

Greg.



Re: [SC-L] What defines an InfoSec Professional?

2007-03-08 Thread Greg Beeley
 > [...] I do suspect that some of it is tied to the romance of
 > certifications such as CISSP whereby the exams that prove you are a
 > security professional talk all about physical security and network
 > security but really don't address software development in any meaningful
 > way. [...]

That's interesting.  While I have not taken the CISSP, I have studied
it a bit, and software & app development security is supposed to be one
of the 10 domains that the test covers.

Perhaps one of the issues here is that if you are in operations work
(network security, etc.), there are more aspects of the CISSP that are
relevant to your daily work.  In software development, there is usually
just the one - app development sec - that the developer thinks about,
unless the code has inherent security functionality, in which case
access control, architecture/models, and cryptography can be important
too.

I agree that the software developer is a key part of the security
big picture.  In fact one of the reasons that firewalls have become so
popular today is because of software bugs in host OS's and services...

But software dev is unique in several ways that make it hard for the
CISSP to cover it in a balanced manner.  Teaching an IT person about
fire and lightning protection, or about routers or firewalls, about
ACLs, or even about risk management, does not involve a steep learning
curve.  But learning the basics needed to really understand even
high-level concepts regarding software security and high-assurance
development practices is a much steeper climb, in my view, for the
typical IT person.

A few questions, then -- should all developers be/become security
professionals?  Even the most innocent "pet project" application can
end up having worldwide security implications, given the way apps
can be rapidly popularized these days.  What qualifications should a
developer meet, to be a "security professional"?  Should there be
something like the Common Criteria EAL's, but somewhat less formal,
to encourage broader use in labeling projects and code, esp. in the
open-source world?

- Greg

08-Mar-2007



Re: [SC-L] Fwd: re-writing college books - erm.. ahm...

2006-11-07 Thread Greg Beeley
Hi all,

I've been watching this discussion with interest, as I've taught an
undergrad-level course a couple of times that focuses on infosec
with a concentration in software security.  Yes, _Secure Coding_
was one of the books we used :)

A few observations from my experience so far:

   - Sure, we can teach "don't overflow the buffer" in lower division
 undergrad courses, but many students won't understand the
 reasons why this results in an exploitable condition, since those
 reasons require understanding concepts that are not normally taught
 until the upper division of undergrad CS.

   - I think we need to not only give the students the right *tools*
 to code securely, but also the right *mindset*.  It is harder
 to teach the "mindset" in the earlier courses.

   - As for a specialized course on software security, it can be
 tricky working it into the undergrad CS curriculum.  When I've
 taught this material, I could not assume (for instance) a
 certain degree of student knowledge about computer architecture
 and the way the call stack works.  I had to explain that stuff
 just to be able to explain how a buffer overflow works, for instance.

   - We can teach, "be more secure, use Java/C#/etc instead of C",
 and that is good, but remember that these students are going
 out into the real workforce and will use the language(s)
 chosen by their employers (or already in place on an existing
 product line).  I do believe that students still need to know
 how to use C/C++ responsibly.  Otherwise, they may very well
 be ill-prepared for the real world :)

   - As for vocational vs. academic, I think there's a lot of room
 for software security in each.  At the academic level, you
 spend more time explaining the underlying concepts.  For
 example, teaching why having a call stack share data and program
 flow control constructs tends to cause trouble (when no enforcement
 of the bounds of data and control is performed).  Vocational
 teaching is much more hands-on and tools oriented.  At the
 academic level, you want your students to be able to take the
 knowledge and apply it in new and creative ways, not just learn
 a tool or a technique.

   - Many universities want to teach in the academic world the kind
 of knowledge that will give their students a definite edge when
 they go into private industry.  If potential employers (or
 graduate programs, etc.) look favorably on some "software security"
 experience, we will probably see more of it taught and/or
 integrated into existing coursework.

   - I found Corewars to be an interesting tool for starting to
 exercise that "defensive coding" muscle.  It gets students used
 to assuming that their program will be abused and misused,
 among other things :)

Greg.


Greg Beeley, President & Co-Founder [EMAIL PROTECTED]
LightSys Technology Services, Inc.  http://www.LightSys.org/



Re: [SC-L] Bugs and flaws

2006-02-03 Thread Greg Beeley
Wietse Venema wrote:
> My experience is otherwise. Without detailed documentation I can
> usually see where in the life cycle the mistake was made: analysis
> (e.g., solving the wrong problem), design (e.g., using an inappropriate
> solution) or coding.

I tend to agree - for *many* design-related problems.  But I think it
is only true for design flaws that are violations of well-recognized
approaches to things (for instance, putting too much trust in a source
IP address for authentication, or blatant misuse of cryptography), or
when the problem being "solved" by the software is self-evident enough
that the auditor essentially repeats much of the software engineering
process, albeit (possibly) very informally, just by auditing the code.

Other design-related defects are hard to find if you don't have a
well-defined problem - the old "validation" vs "verification" issue.
When the problem being solved by the software is an uncommon one, or
unique to the software, it is more likely that a design flaw will go
undetected by an auditor (for instance, your average code auditor
won't catch a design flaw in how retinal scanning software authenticates
a person, without having studied how it is supposed to work in the
first place).

- Greg

03-Feb-2006
