Re: [SC-L] How have you climbed the wall?

2011-07-28 Thread Wall, Kevin
Rohit Sethi wrote:

> Recently I sent a note about the Organic Progression of the Secure SDLC.
> One of the major points that we raise in that model is the difficulty with
> "Climbing the Wall": Getting the lines of business to commit resource
> to application/software security. This is one of the most fundamental
> challenges in building a secure SDLC.
>
> We offer some simple high level thoughts and a PPT deck you can use here:
> http://www.sdelements.com/secure-sdlc/software-security-throughout-life-cycle-9-steps/how-climb-wall/
>
> I'm curious to see what others have done / seen to climb the
> wall effectively.

I can't speak for others--although I think the BSIMM data bears this
out--but what worked for us was that our company formed a separate
Application Security team. This team was placed within the IT organization
(rather than under Risk Management) and was made up of staff with extensive
and varied application development experience who shared a common interest
in application security. (Note that this team was formed 11 years ago and,
for the most part, is still intact. I was the technical lead of this group
until about 6 months ago.)

For us, this worked out well. One of the group's first initiatives
was to build a custom, proprietary application security library similar
in intent to ESAPI (although much less ambitious). We also evaluated
several vendor web access management solutions, chose one, and then, over
the last 8 or 9 years, integrated that vendor's solution with
close to 250 applications, both internal and external. For the first several
years, we also offered free consulting to internal development groups.

I think the keys to the team's success in "climbing the wall" were that it
was placed under the IT organization and that it was made up of senior
developers with lots of development experience. (I've always believed it's
easier to teach a good developer about security than it is to teach a
security person about development.) The latter was important because these
developers spoke the same lingo as the application teams and could identify
with the obstacles that developers face. It's not perfect, but it seems to
have been relatively successful.

-kevin
---
Kevin W. Wall   CenturyLink / Risk Mgmt / Information Security
kevin.w...@qwest.com    Phone: 614.215.4788
Blog: http://off-the-wall-security.blogspot.com/
"There are only 10 types of people in the world...those who can count
in binary and those who can't."





Re: [SC-L] informIT: software security zombies

2011-07-21 Thread Wall, Kevin
Gary McGraw wrote:
> This month's informIT article covers the zombies:
[snip]
> * Software security defects come in two main flavors—bugs at the 
> implementation level (code) and flaws at the architectural level (design)

So, two questions:
1) How is this (software *security* defects) different from any other
software defect? It seems to me that these are also the 2 main categories
of software defects in general.

2) In terms of "flavors", where do you see software security defects arising 
from either incorrect
requirements or simply lack of specifications?

My experience with #2 is that poor specifications are a major contributor to
software security defects, even more so than to software defects in general.
That's because these specs are usually considered "non-functional
requirements", and they get missed by systems analysts simply because they
are not something the business asks for; therefore they aren't put into the
requirements and thus never get built.

I'm not talking about pseudo-PCI-like requirements that say things like
"make sure you have no OWASP Top Ten vulnerabilities", but rather the
outright omission of requirements: failure to specify that data must be
kept confidential, that access to certain data requires authorization by a
certain role, that an audit trail must be kept for certain actions, etc.

I don't see how you can chalk these sorts of defects up to flaws at the
architecture level (unless you and I have drastically different views of
system architecture). They are outright omissions in the specifications,
and because they are not there, the application team never even thinks
about building in that functionality. Bottom line: I think you are missing
a "flavor".

Thanks,
-kevin
--
Kevin W. Wall   614.215.4788   Qwest Risk Management / Information Security Team
Blog: http://off-the-wall-security.blogspot.com/
"The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents."-- Nathaniel Borenstein, co-creator of MIME





Re: [SC-L] Question about HIPAA Compliance in application development

2011-04-26 Thread Wall, Kevin
Jim Manico wrote...
> The most cost-effective way to handle these requirements is to get
> your HIPAA auditor drunk nightly.

Uh..., the old bribery and extortion approach. ;-)

> I'm being partially serious here because these and other HIPAA
> requirements are:
>
> (1) Technically ambiguous
> (2) Often in conflict with other HIPAA requirements
> (3) Impossible to achieve cost effectively

That's what happens when you let a bunch of politicians and
lobbyists mandate security & privacy regulations.

Seriously, PCI DSS started out much the same way (and still is that way to
a degree, but not nearly as bad as DSS 1.0). I recall asking a
PCI auditor for clarification of section 3.6.4 of DSS 1.0, which
merely said (regarding encryption keys):

3.6.4. Periodic key changes

So, I naturally asked the auditor, "How often do we have to change the
encryption keys?" He replied, "It doesn't say...just every so often."
So I said, "OK, we'll change ours every 1000 years." He then said he
needed to get clarification from his management; he eventually came
back a few weeks later and said that we need to change them at least yearly.

PCI DSS 1.1 later revised this particular section to say:

3.6.4 Periodic changing of keys
+ As deemed necessary and recommended by the associated application
  (for example, re-keying); preferably automatically
+ At least annually.

So, pushing back on the auditors does work if you know how to push their
buttons to get clarity.

Of course, I'd also say that if one's whole goal with HIPAA is merely
passing the HIPAA auditor's inspection, then one is missing the whole
point of HIPAA. Sometimes we need to remind *our management* of that,
as sometimes passing the audit supersedes all other needs, including
providing actual security and privacy.

> For example, there are HIPAA access control requirements that demand
> that you only give doctors access to transmit patient data in a minimal
> way; only transmitting data needed for a diagnosis. Good luck coding that.
> It's also bad medicine.

What? You're not comfortable letting some obscure developer make
decisions about what information your doctor needs to make a proper
diagnosis??? Why not? What could *possibly* go wrong? :-p

Of course, as developers and/or security consultants, it's important that
we point out such absurdities to our HIPAA auditors. Eventually, as
they slowly see a pattern developing, we can only hope a light bulb
goes on and the appropriate parties decide we need v2.0, thus keeping all
of us employed for yet another 3 or 4 years. ;-)

-kevin
--
Kevin W. Wall   CenturyLink / Risk Mgmt / Information Security
kevin.w...@qwest.com    Phone: 614.215.4788
Blog: http://off-the-wall-security.blogspot.com/
"The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents."-- Nathaniel Borenstein, co-creator of MIME







Re: [SC-L] Question about HIPAA Compliance in application development

2011-04-26 Thread Wall, Kevin
On Tue 4/26/2011 11:13 AM, Rohit Sethi wrote:

> It sounds like people generally deal with this through techniques
> outside of the application logic itself such as checksums and/or
> digital signatures on files / database values that contain protected
> health information.  My initial thought was that databases would offer
> some kind of integrity check feature and that seems to be the feeling
> from people on the list as well.

First, I think that 'checksums' are not going to meet the HIPAA need. They
will allow you to detect ACCIDENTAL data integrity issues, such as
typos upon data entry, data corrupted by disk errors, etc.,
but they do NOT allow you to detect DELIBERATE tampering attempts that
would affect data integrity. Chances are that any attacker who has
the ability to change the data can also re-compute and store the
updated checksum. So you need a cryptographically strong mechanism to
detect such data integrity issues. Generally, that leaves you with
HMACs or digital signatures. (Or store the data encrypted using a
cipher mode that also provides authenticity, such as GCM, CCM, etc.
Shameless plug: note that the default configuration for symmetric encryption
in ESAPI 2.0 provides authenticity, so if you need to encrypt the data
anyway, that might be a valid approach for you.)
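
For anyone who hasn't used an authenticated encryption mode, here is a
minimal sketch of what GCM buys you, written against the plain JCE API
rather than the ESAPI interface. It assumes a provider that supports
AES/GCM (e.g., Java 7+ or Bouncy Castle), and all class and variable
names here are made up:

    import java.security.SecureRandom;
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;

    public class GcmSketch {
        public static void main(String[] args) throws Exception {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            SecretKey key = kg.generateKey();   // in practice, load from a key store

            byte[] iv = new byte[12];           // GCM requires a fresh, unique IV per encryption
            new SecureRandom().nextBytes(iv);

            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ciphertext = c.doFinal("sensitive record".getBytes("UTF-8"));

            // On decryption, any tampering with the ciphertext causes doFinal()
            // to throw AEADBadTagException, so integrity comes along "for free".
            c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] plaintext = c.doFinal(ciphertext);
        }
    }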

However, *in general*, DBs do not offer this type of protection, especially
if there is a concern about a privileged attacker (say, a rogue DBA) making
unauthorized changes to the database. (Some features of packages
like Oracle's Transparent Data Encryption may provide this, but generally
it is not something that is part of a vanilla DBMS. The data integrity
issues that a DBMS addresses involve _referential_ integrity, not *content*
integrity.)

> Has anyone actually implemented this kind of control *within* custom
> application logic? For example, verifying the integrity of stored
> protected health data by (for example) checking that a digital signature
> is valid before displaying it back to the user?

I was tech lead on a project where we did this, but it was unrelated to
HIPAA. It was a project that dealt with some very sensitive data.
The application associated each DB record with an HMAC. The HMAC
key was computed by the application from a key derived from a Shamir
shared secret, which in turn was reconstructed from signed Shamir shares
entered by at least 2 authorized operations personnel when the application
cluster was initialized. This was done explicitly to counter the threat of
rogue DBAs changing the extremely sensitive data or changing who was
authorized to access that sensitive data. When the data was retrieved from
the DB, the HMAC was computed from all the other columns and then compared
with the stored HMAC column for that record. If they did not agree, a
security exception was logged and thrown. (HMACs were used instead of dsigs
because they are so much faster to compute and nonrepudiation was not an
issue.)
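
To make the verify-on-read step concrete, here is a minimal sketch using
javax.crypto.Mac. The column handling and key management are greatly
simplified (in our system the key came from the Shamir-derived secret
described above), and all names here are mine:

    import java.security.MessageDigest;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class RecordIntegrityChecker {
        private final SecretKeySpec hmacKey;  // derived at cluster init; never stored in the DB

        public RecordIntegrityChecker(byte[] keyBytes) {
            this.hmacKey = new SecretKeySpec(keyBytes, "HmacSHA256");
        }

        /** Recompute the HMAC over the record's data columns; compare to the stored column. */
        public void verify(String[] dataColumns, byte[] storedHmac) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(hmacKey);
            for (String col : dataColumns) {
                mac.update(col.getBytes("UTF-8"));
                mac.update((byte) 0);  // unambiguous separator between fields
            }
            byte[] computed = mac.doFinal();
            // MessageDigest.isEqual() is a constant-time comparison in current JDKs.
            if (!MessageDigest.isEqual(computed, storedHmac)) {
                throw new SecurityException("Record integrity check failed");
            }
        }
    }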

If you are not concerned with rogue DBAs as a threat source, this logic
could be placed in the DB itself using triggers and stored procedures (or
just stored procedures, if you don't have a legacy code base and you
somehow prevent direct inserts / updates). If you are interested in
pursuing that angle, you may want to consult Kevin Kenan's
book, _Cryptography in the Database: The Last Line of Defense_.

Finally--and this is probably obvious to most SC-L readers--you need to
ensure that your application has no SQLi vulnerabilities that would allow
data to be altered in an unauthorized manner. If you don't, then the HMAC
or dsig or whatever you are using to ensure data integrity will simply
be computed and inserted / updated by your application or DB code just as
it would be for a legitimate, authorized change.
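
(For the record, parameterized queries are the standard SQLi defense. A
minimal JDBC sketch, with made-up table and column names:)

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class RecordDao {
        /** The user-supplied id is bound as data; it can never alter the SQL itself. */
        public ResultSet findRecord(Connection conn, String recordId) throws Exception {
            PreparedStatement ps = conn.prepareStatement(
                "SELECT data_cols, hmac FROM sensitive_records WHERE record_id = ?");
            ps.setString(1, recordId);
            return ps.executeQuery();
        }
    }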

HTH,
-kevin
--
Kevin W. Wall   CenturyLink / Risk Mgmt / Information Security
kevin.w...@qwest.com    Phone: 614.215.4788
Blog: http://off-the-wall-security.blogspot.com/
"The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents."-- Nathaniel Borenstein, co-creator of MIME



Re: [SC-L] Question about HIPAA Compliance in application development

2011-04-26 Thread Wall, Kevin
Rohit,
You wrote:
> Has anyone had to deal with the following HIPAA compliance requirements
> within a custom application before:
>
> §164.312(c)(2)
> Implement electronic mechanisms to corroborate that electronic
> protected health information has not been altered or destroyed in
> an unauthorized manner.
>
> §164.312(e)(2)(i)
> Implement security measures to ensure that electronically transmitted
> electronic protected health information is not improperly modified
> without detection until disposed of.
>
> How have you actually implemented these controls in applications? Have
> you used a third party tool to do this? Does §164.312(c)(2) simply
> boil down to sufficient access control?

Having never had any practical experience with HIPAA, my take on these
sections may be different from (read "wrong" compared to) others', but the
way I read these requirements, they have more to do with ensuring data
integrity than with *merely* proper access control.

If that is their intent, then I would look at access control as providing a
necessary, but not sufficient security measure to satisfy these requirements.

Consequently, I would think that mechanisms such as HMACs or digital
signatures may be appropriate security measures here.

-kevin
---
Kevin W. Wall   CenturyLink / Risk Mgmt / Information Security
kevin.w...@qwest.com    Phone: 614.215.4788
Blog: http://off-the-wall-security.blogspot.com/
"There are only 10 types of people in the world...those who can count
in binary and those who can't."




Re: [SC-L] Java DOS

2011-02-15 Thread Wall, Kevin
Chris,

On Feb 15, 2011, 8:20 AM, Kevin Wall wrote:
> On Feb 15, 2011, at 12:06 AM, Chris Schmidt wrote:
>> On Feb 14, 2011, at 8:57 AM, "Wall, Kevin" wrote:
>>> [snip]
>>> So on a somewhat related note, does anyone have any idea as to how
>>> common it is for application developers to call ServletRequest.getLocale()
>>> or ServletRequest.getLocales() for Tomcat applications? Just curious.
>>> I'm sure it's a lot more common than developers using
>>> double-precision floating point in their applications (with
>>> the possible exception within the scientific computing community).
>>
>> I would assume just about any app with a shopping cart does. This is of
>> course compounded by libraries like Struts and Spring MVC that autobind
>> your form variables for you. Use a form with a double in it and you're boned.
>
> Good point about things like Spring and Struts. Hadn't thought of those
> cases. OTOH, if I were implementing a shopping cart, I'd write a special
> Currency class and in it probably use Float.parseFloat() rather than
> Double.parseDouble() [unless I were a bank or otherwise had to compute
> interest], and hopefully Float does not have similar issues.

A thousand pardons for responding to my own post, but I've been thinking
more deeply about what Chris wrote and how I responded the first time, and I
don't think either of us was quite on target.

Your *typical* shopping cart application is going to have the end user
select a *quantity* of a specific item, and *almost always* this is going
to be some integer type. (Yes, there are some exceptions, but they are
comparatively few.) The calculation of the final price may involve
floats or doubles, but those should be extremely difficult, if not impossible,
to exploit given that the price generally will only have two decimal places
of precision and that the end user can (hopefully) only enter a whole number.

So, IMO, properly implemented applications using a traditional shopping
cart are not likely to be exploited via this Double.parseDouble(String)
vulnerability. (Note that if you are storing your price info somewhere
that a client can access it, you have much bigger problems than a DoS
attack.)

What is more likely to be vulnerable are applications where a user can
enter a specific payment amount directly. I'd guess those would be things
like sites accepting donations via PayPal, etc. That's probably not
something that is very prevalent in telecom applications, though. But
thanks for helping me think through this.

-kevin
---
Kevin W. Wall   Qwest Risk Mgmt / Information Security
kevin.w...@qwest.com    Phone: 614.215.4788
"It is practically impossible to teach good programming to students
 that have had a prior exposure to BASIC: as potential programmers
 they are mentally mutilated beyond hope of regeneration"
- Edsger Dijkstra, How do we tell truths that matter?
  http://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498.html




Re: [SC-L] Java DOS

2011-02-15 Thread Wall, Kevin
On Feb 15, 2011, at 12:06 AM, Chris Schmidt wrote:
> On Feb 14, 2011, at 8:57 AM, "Wall, Kevin" wrote:
[snip]
>> So on a somewhat related note, does anyone have any idea as to how common
>> it is for application developers to call ServletRequest.getLocale() or
>> ServletRequest.getLocales() for Tomcat applications? Just curious. I'm
>> sure it's a lot more common than developers using double-precision
>> floating point in their applications (with the possible exception within
>> the scientific computing community).
>
> I would assume just about any app with a shopping cart does. This is of
> course compounded by libraries like Struts and Spring MVC that autobind
> your form variables for you. Use a form with a double in it and you're
> boned.

Good point about things like Spring and Struts. Hadn't thought of those
cases. OTOH, if I were implementing a shopping cart, I'd write a special
Currency class and in it probably use Float.parseFloat() rather than
Double.parseDouble() [unless I were a bank or otherwise had to compute
interest], and hopefully Float does not have similar issues.

-kevin
--
Kevin W. Wall   614.215.4788   Qwest Risk Management / Information Security Team
"The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents."-- Nathaniel Borenstein, co-creator of MIME



Re: [SC-L] Java DOS

2011-02-14 Thread Wall, Kevin
Jim Manico wrote...
> Rafal,
>
> It's not that tough to blacklist this vuln while you are waiting for your
> team to patch your JVM (IBM and other JVM's have not even patched yet).
> I've seen three generations of this filter already. Walk with me, Rafal and
> I'll show you. :)
>
> 1) Generation 1 WAF rule (reject one number only)
>
> This mod security rule only blocks a small portion of the DOSable range.
> The mod security team is working to improve this now (no disrespect meant
> at all!)
>
> SecRule ARGS|REQUEST_HEADERS "@contains 2.2250738585072012e-308"
> "phase:2,block,msg:'Java Floating Point DoS Attack',tag:'CVE-2010-4476'"
>
> Reference: http://mobile.twitter.com/modsecurity/status/35734652652093441
>

Depending on how & when the exponent conversion is done, this mod_security
rule may be completely ineffective. For example, if an attacker can write
this floating point # as the equivalent

22.250738585072012e-309

(which, note, I have not tested), then the test above will not match. I
presumed that this was why Adobe's blacklist *first* removed the decimal
point. Adobe's blacklist could be generalized a bit to cover appropriate
ranges with a regular expression, but I agree wholeheartedly with you that
what you dubbed the "Chess Defense" (I like it) is the best approach short
of getting a fix from the vendor of your JRE.
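
One way to pre-screen untrusted input, short of a JRE fix, is to detect the
dangerous range *before* ever calling Double.parseDouble(). A minimal
sketch; the interval bounds are my approximation of the affected range
(widen them if in doubt), and note that BigDecimal rejects the hex float
syntax that Double.parseDouble() would otherwise accept:

    import java.math.BigDecimal;

    public class SafeDouble {
        // Approximate interval around 2.2250738585072012e-308 that hangs unpatched JVMs.
        private static final BigDecimal LOW  = new BigDecimal("2.225073858507201e-308");
        private static final BigDecimal HIGH = new BigDecimal("2.225073858507202e-308");

        /** Parse s as a double, rejecting values in the CVE-2010-4476 danger range. */
        public static double parse(String s) {
            // BigDecimal's decimal parsing does not perform the binary conversion
            // that loops forever, so it is safe even on unpatched JVMs.
            BigDecimal bd = new BigDecimal(s.trim()).abs();
            if (bd.compareTo(LOW) >= 0 && bd.compareTo(HIGH) <= 0) {
                throw new NumberFormatException("value in danger range rejected");
            }
            return Double.parseDouble(s);
        }
    }

Because the comparison is on the numeric value rather than the string, this
also catches equivalent spellings such as 22.250738585072012e-309.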

So on a somewhat related note, does anyone have any idea as to how common
it is for application developers to call ServletRequest.getLocale() or
ServletRequest.getLocales() for Tomcat applications? Just curious. I'm sure
it's a lot more common than developers using double-precision floating
point in their applications (with the possible exception within the
scientific computing community).

-kevin
---
Kevin W. Wall   Qwest Risk Mgmt / Information Security
kevin.w...@qwest.com    Phone: 614.215.4788
"It is practically impossible to teach good programming to students
 that have had a prior exposure to BASIC: as potential programmers
 they are mentally mutilated beyond hope of regeneration"
- Edsger Dijkstra, How do we tell truths that matter?
  http://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498.html




Re: [SC-L] Java: the next platform-independent target

2010-10-21 Thread Wall, Kevin
On October 20, 2010, Benjamin Tomhave wrote:

> 
> If I understand this all correctly (never a safe bet), it seems these
> are actual attacks on Java, not on coding with Java. Ergo, this isn't
> something ESAPI can fix, but rather fundamental problems. What do you
> think? Overblown? Legit? Solutions forthcoming?
> 

In a private, off-list email to Ben Tomhave, Kevin Wall incorrectly
speculated in a reply:

> W/out having read this at all (will do later, when I get home), I'd just
> say that there's a lot of safety nets built into Java / JavaEE, but
> most (99%) of development teams don't use them.
>
> Examples (from most to least important IMO) are:
>
> Java security manager and appropriately restrictive security policy
> Sealed jars
> Signed jars
>
> If you are running w/out a security manager, you are really working w/out
> a safety net. I know that Dinis Cruz and I have had this conversation
> a few times and I think we are both in agreement on that matter.

Ben,

When you first referenced these URLs, I thought they were about server-side
exploits, but reading through them, it appears that most of them are
client-side exploits delivered via malware applets. Since applets do run
under a Java security manager, that shoots down the original theory I
mentioned above. (I would stand behind that conclusion for server-side
exploits, though.)
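
(As an aside, for anyone who has never run with a security manager, here is
about the smallest possible demonstration -- standard Java, nothing exotic;
the class name is mine:)

    public class SandboxDemo {
        public static void main(String[] args) {
            // Equivalent to launching with -Djava.security.manager
            // (plus -Djava.security.policy=<your policy file> for custom grants).
            if (System.getSecurityManager() == null) {
                System.setSecurityManager(new SecurityManager());
            }
            // The default policy grants read access to only a handful of benign
            // system properties, not "user.home", so this call now throws
            // java.security.AccessControlException instead of silently succeeding.
            System.out.println(System.getProperty("user.home"));
        }
    }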

So, you are right about ESAPI not helping here as one is downloading
and running untrusted code in the first place and it's doubtful that
an attacker is going to use ESAPI to protect their victims. :)

However, I think I reached a different conclusion than you did. It
appears that the major issue here is that users are not updating their
Java installations, either because they are not aware it is installed
in the first place or perhaps because automatic Java updates are
disabled or not installed correctly.

I think the "solution" for this problem (which, IMO, is unlikely, at least
in the near-term future) is that Windows Update needs to include
*all* the software (or at least all the common software that adheres
to some common packaging format) running under a Windows OS.
Most Linux systems already do this. For instance on Ubuntu or OpenSUSE,
if I wish to update Java or Adobe Flash or Acrobat, I don't do anything
different then when I'd update something that is provided by the vendor.
Frankly, the fact that that Windows Update doesn't do this was one major
reason why I've replaced all my Windows instances with various Linux
installs. I used to run Secunia PSI to scan my Windows system, but even
though I knew *what* to patch, doing so was way too painful. I can only
imagine that the typical PC user is much less diligent about keeping
their system patched that I am, which would explain a lot. Java is on
most systems and rarely is patched, ergo, recipe for disaster.

So my conclusion is that these findings say more about the fact that these
systems are not being patched than they do about the quality
of Java. (The report indicates that this enormous jump in the number of
exploits is "due to the fact that three particular vulnerabilities are
being constantly exploited" and that "These vulnerabilities have been
patched for a while, but the problem is that users fail to update Java
on their system".)

Given the slant of posts to this list, I originally thought these
reports were about JavaEE app servers being vulnerable. So, my bad
for jumping to conclusions, but I think my new conclusion is that this
is not so much an issue with Java (the language + VM) as it is
with the way that Java Update (as delivered by Sun / Oracle) sucks.

Regards,
-kevin
--
Kevin W. Wall   614.215.4788   Application Security Team / Qwest IT
"The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents."-- Nathaniel Borenstein, co-creator of MIME




Re: [SC-L] Solution for man-in-the-browser

2010-09-11 Thread Wall, Kevin
On Sep 10, 2010, at 5:34 PM, smurray1 wrote:
> Hello,
>
> I have been discussing an issue with an organization that is having
> an issue with malware on its customers' clients that is intercepting
> user credentials and using them to create fraudulent transactions
> (man-in-the-browser type attacks similar to what Zeus and other trojans
> are capable of). I told them that they should be using a shared secret
> between the user and the system as an input to HMAC to create
> a MAC for the form for the critical transaction.
> I see it working like this. The form that is used for the critical
> transaction would have either a Java object or JavaScript that, after
> the user fills the fields and presses the "submit" button:
> 1) Accepts a single use shared secret from the user.

<...deleted...>

Jim Manico responded:
> I do not think this will work. Once your browser is trojaned, it's
> game over. The Trojan has the capability to just sit in your browser
> and wait for the user to log in. (Trojans do not need to steal
> credentials to cause harm). Once the user has logged on, the Trojan
> can simulate any user activity such as requesting and submitting
> forms, circumventing CSRF tokens and other web app defenses.

Jim is absolutely correct. You are better off spending time removing
all the malware and securing your machines properly, trying to
educate your users, etc. You may also want to add AV scanning
during the web browsing sessions if you don't already support that.

Besides, once your browser is trojaned, there is no shared "secret"; or,
more accurately, you would also be "sharing" your secret with the malware,
which obviously would not do you any good. Once the browser endpoint
is compromised, NOTHING sent from it can be trusted any longer. For
instance, since TLS provides only point-to-point encryption, malware
running in the browser can read plaintext and insert data at will.

Bottom line, don't waste your development $$ on a problem that cannot
be fixed in this manner.

-kevin
--
Kevin W. Wall   614.215.4788   Application Security Team / Qwest IT
"The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents."-- Nathaniel Borenstein, co-creator of MIME




Re: [SC-L] any one a CSSLP is it worth it?

2010-04-14 Thread Wall, Kevin
Dana Epp wrote:
> Not sure that would work either though.

Dana,

My comment was meant tongue-in-cheek. Guess I used the wrong
emoticon. Figured that ';-)' would work 'cuz I never can remember
the one for "tongue-in-cheek". I've seen several variations of the
latter...

:-? :-Q :-J -)

Take your pick. Good in depth analysis though. Seriously. And I
agree with you completely.

In my experience as an adjunct faculty member teaching a master's
level Computer Security course (based in part on the McGraw/Viega book
as well as Ross Anderson's _Security Engineering_) for 6 yrs, I came to the
conclusion that multiple guess (as I call them) alone only proves
how well someone memorizes something, at best, or how clueless people
are (if they get incorrect answers), at worst. I would argue that
most of academia is unsuited for discerning cluefulness in the
real world. Over the course of 30+ yrs in IT (yes, I am an old fart!),
I've seen all too many people who excelled in academia but were miserable
disappointments in industry. In fact, to that end, quality guru Deming
is rumored to have said about (then) AT&T Bell Labs:
"Bell Labs only hires the top 10% of graduates...and they
deserve what they get!"

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
kevin.w...@qwest.com    Phone: 614.215.4788
"It is practically impossible to teach good programming to students
 that have had a prior exposure to BASIC: as potential programmers
 they are mentally mutilated beyond hope of regeneration"
- Edsger Dijkstra, How do we tell truths that matter?
  http://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498.html




Re: [SC-L] [WEB SECURITY] RE: How to stop hackers at the root cause

2010-04-14 Thread Wall, Kevin
Jeremiah Heller writes...

> do security professionals really want to wipe hacking
> activity from the planet? sounds like poor job security to me.

Even though I've been involved in software security for the
past dozen years or so, I still think this is a laudable goal,
albeit a completely unrealistic one. I, for one, would be completely
happy to go back to software development / systems programming if
all the security issues completely disappeared. But unfortunately,
I don't think we'll ever have to worry about that happening.

> the drive for survival seems key. i think that when the
> survival of many is perceived as threatened, then 'bad
> hacking' will be addressed on a scale which will contain it
> to the point that slavery is contained today... after all
> don't hackers simply 'enslave' other computers? j/k

And of course, that is a good thing. After all, once the
first sentient AI takes control of all the world's computers
to subjugate all humanity, we have to have a way to fight back.
Evil h4><0rs to the rescue! ;-)

> until then it seems that educating people on how these things
> /work/ is the best strategy. eventually we will reach the
> point where firewalls and trojan-hunting are as common as
> changing your oil and painting a house.

I agree. Even though one risks ending up with smarter criminals,
by and large, if one addresses the poverty issues, most people
ultimately seem to make the right decisions in the best interests
of society. I think for many, once their curiosity is satisfied
and the novelty wears off, they put these skills to good use. At
least it seems to me a risk worth taking.

> first we should probably unravel the electron... and perhaps
> the biological effects of all of these radio waves bouncing
> around our tiny globe... don't get me wrong, i like my
> microwaves, they give me warm fuzzy feelings:)o

Jeremiah, you do know that you're not supposed to stick your *head*
in the microwave, don't you? No wonder you're getting the warm
fuzzies. :)

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
kevin.w...@qwest.com    Phone: 614.215.4788
"It is practically impossible to teach good programming to students
 that have had a prior exposure to BASIC: as potential programmers
 they are mentally mutilated beyond hope of regeneration"
- Edsger Dijkstra, How do we tell truths that matter?
  http://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498.html




Re: [SC-L] any one a CSSLP is it worth it?

2010-04-14 Thread Wall, Kevin

Gary McGraw wrote...

> Way back on May 9, 2007 I wrote my thoughts about
> certifications like these down.  The article, called
> "Certifiable" was published by darkreading:
>
> http://www.darkreading.com/security/app-security/showArticle.jhtml?articleID=208803630

I just reread your Dark Reading post and I must say I agree with it
almost 100%. The only part where I disagree with it is where you wrote:

The multiple choice test itself is one of the problems. I
have discussed the idea of using multiple choice to
discriminate knowledgeable developers from clueless
developers (like the SANS test does) with many professors
of computer science. Not one of them thought it was possible.

I do think it is possible to separate the clueful from the clueless
using multiple choice if you "cheat". Here's how you do it. You write
up your question and then list 4 or 5 INCORRECT answers and NO CORRECT
answers.

The clueless ones are the ones who just answer the question with one of
the possible choices. The clueful ones are the ones who come up and argue
with you that there is no correct answer listed. ;-)

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
kevin.w...@qwest.com    Phone: 614.215.4788
"It is practically impossible to teach good programming to students
 that have had a prior exposure to BASIC: as potential programmers
 they are mentally mutilated beyond hope of regeneration"
- Edsger Dijkstra, How do we tell truths that matter?
  http://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498.html




Re: [SC-L] seeking hard numbers of bug fixes...

2010-02-23 Thread Wall, Kevin
Benjamin Tomhave wrote:
> ... we're looking for hard research or
> numbers that covers the cost to catch bugs in code pre-launch and
> post-launch. The notion being that the organization saves itself money
> if it does a reasonable amount of QA (and security testing)
> up front vs trying to chase things down after they've been identified
> (and possibly exploited).

Ben,

Not sure if this is what you are looking for or not, but back in the
mid- to late-1980s or so, John Musa, a DMTS at Bell Labs, wrote up a
couple of papers that showed this data, although this was in the more
general context of software quality assurance and not specific to
security testing.

I'm pretty sure that Musa published something in either one of the ACM
or IEEE CS journals and included some hard data, collected from a bunch
of (then AT&T) Bell Labs projects. IIRC, the main finding was something
like it cost ~100 times more to catch and correct a bug post-deployment
than it did to catch / correct it during the normal design / coding phase.

Can't help you much more than that. I'm surprised I remembered that much! :)

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
kevin.w...@qwest.com    Phone: 614.215.4788
"It is practically impossible to teach good programming to students
 that have had a prior exposure to BASIC: as potential programmers
 they are mentally mutilated beyond hope of regeneration"
- Edsger Dijkstra, How do we tell truths that matter?
  http://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498.html






Re: [SC-L] BSIMM update (informIT)

2010-02-02 Thread Wall, Kevin
On Thu, 28 Jan 2010 10:34:30 -0500, Gary McGraw wrote:

> Among other things, David [Rice] and I discussed the difference between
> descriptive models like BSIMM and prescriptive models which purport to
> tell you what you should do.  I just wrote an article about that for
> informIT.  The title is
>
> "Cargo Cult Computer Security: Why we need more description and less
> prescription."
> http://www.informit.com/articles/article.aspx?p=1562220

First, let me say that I have been the team lead of a small Software
Security Group (specifically, an Application Security team) at a
large telecom company for the past 11 years, so I am writing this from
an SSG practitioner's perspective.

Second, let me say that I appreciate descriptive holistic approaches to
security such as BSIMM and OWASP's OpenSAMM. I think they are much
needed, though seldom heeded.

Which brings me to my third point. In my 11 years of experience working
on this SSG, it is very rare that application development teams are
looking for a _descriptive_ approach. Almost always, they are
looking for a _prescriptive_ one. They want specific solutions
to specific problems, not some general formula for an approach that will
make them more secure. To those application development teams, something
like OWASP's ESAPI is much more valuable than something like BSIMM or
OpenSAMM. In fact, I think your BSIMM research confirms this: it indicates
that many companies' SSGs have developed their own proprietary security
APIs for use by their application development teams. Therefore, I would
not say we need less _prescriptive_ and more _descriptive_ approaches.
Both are useful and ideally should go together like hand and glove. (On
that note, I also ask that you overlook some of my somewhat overzealous
ESAPI developer colleagues who in the past made claims that ESAPI was the
greatest thing since sliced beer. While I am an ardent ESAPI supporter and
contributor, I proclaim it will *NOT* solve our pandemic security issues
alone, nor, for the record, will it solve world hunger. ;-)

I suspect that this apparent dichotomy in our perceptions of the
usefulness of the prescriptive vs. descriptive approaches is explained
in part by the different audiences with whom we associate. Hang out with
VPs, CSOs, and executive directors, and they are likely looking for advice
on an SSDLC or broad direction to cover their specifically identified
security gaps. However, in the trenches--where my team works--they want
specifics. They ask us "How can you help us eliminate our specific
XSS or CSRF issues?", "Can you provide us with a secure SSO solution
that is compliant with both corporate information security policies and
regulatory requirements?", etc. If our SSG were to hand them something like
BSIMM, they would come away telling their management that we didn't help
them at all.

This brings me to my fourth, and likely most controversial, point. Despite
the interesting historical story about Feynman, I question whether BSIMM
is really "scientific" as the BSIMM community claims. I would contend
that we are only fooling ourselves if we claim that it is. And while
BSIMM is a refreshing approach compared to the traditional FUD modus
operandi taken by most security vendors hyping their security products,
I would argue that BSIMM is no more scientific than the common quality
practice of counting defects/KLOC. Certainly
there is some correlation there, but cause-and-effect relationships
are far from obvious and seem to have little predictive accuracy.

Sure, BSIMM _looks_ scientific on the outside, but simply collecting
specific quantifiable data alone does not make something a scientific
endeavor.  Yes, it is a start, but we've been collecting quantifiable
data for decades on things like software defects and I would contend
BSIMM is no more scientific than those efforts. Is BSIMM moving in
the right direction? I think so. But BSIMM is no more scientific
than most of the other areas of computer "science".

To study something scientifically goes _beyond_ simply gathering
observable and measurable evidence. Not only does data need to be
collected, but it also needs to be tested against a hypothesis that offers
a tentative *explanation* of the observed phenomena;
i.e., the hypothesis should offer some predictive value. Furthermore,
the steps of the experiment must be _repeatable_, not just by
those currently involved in the attempted scientific endeavor, but by
*anyone* who would care to repeat the experiment. If the
steps are not repeatable, then any predictive value of the study is lost.

While I am certainly not privy to the exact method used to arrive at the
BSIMM data (I have read through the "BSIMM Begin" survey, but have not
been involved in a full BSIMM assessment), I would contend that the
process is not repeatable to the degree required by science.
In fact, I would claim that in most organizations, you could take any group
of BSIMM interviewers and have them 

Re: [SC-L] 2010 bug hits millions of Germans | World news | The Guardian

2010-01-07 Thread Wall, Kevin
Larry Kilgallen wrote...

> At 10:43 AM -0600 1/7/10, Stephen Craig Evans wrote:
>
> > I am VERY curious to learn how these happened... Only using the last
> > digit of the year? Hard for me to believe. Maybe it's in a single API
> > and somebody tried to be too clever with some bit-shifting.
>
> My wife says that in the lead-up to the year 2000 she caught
> some programmers "fixing" Y2K bugs by continuing to store
> year numbers in two digits and then just prefixing output
> with 19 if the value was greater than some two digit number
> and prefixing output with 20 if the value was less than or
> equal to that two digit number.
>
> Never underestimate programmer creativity.
>
> Never overestimate programmer precision.

While I never fixed any Y2K problems, I worked next to someone
who did for about 6 months. What you refer to is pretty much the
"fixed window" technique I mentioned, which was very common
among the developers who were addressing the problem at the time.

IIRC, it was a particularly popular approach for those who waited until
the last moment to address Y2K issues in their systems, because it still
allowed for 2-digit year fields in all their forms and databases and output.

---
Kevin W. Wall   Qwest Information Technology, Inc.
kevin.w...@qwest.com    Phone: 614.215.4788
"It is practically impossible to teach good programming to students
 that have had a prior exposure to BASIC: as potential programmers
 they are mentally mutilated beyond hope of regeneration"
- Edsger Dijkstra, How do we tell truths that matter?
  http://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498.html




Re: [SC-L] 2010 bug hits millions of Germans | World news | The Guardian

2010-01-07 Thread Wall, Kevin
Stephen Craig Evans wrote...

> Looks like there's another one:
>
> Symantec Y2K10 Date Stamp Bug Hits Endpoint Protection Manager
> http://www.eweek.com/c/a/Security/Symantec-Y2K10-Date-Stamp-Bug-Hits-Endpoint-Protection-Manager-472518/?kc=EWKNLSTE01072010STR1
>
> I am VERY curious to learn how these happened... Only using the last
> digit of the year? Hard for me to believe. Maybe it's in a single API
> and somebody tried to be too clever with some bit-shifting.

Just speculation, but perhaps all these systems are using the "fixed window"
technique to handle the two-digit year fields common on credit cards.
The "pivot point" year that is chosen determines whether
a 2-digit year field belongs to one century or the other. This could
just be a carry-over from the Y2K fixes and a rather poor choice of
pivot point. I worked next to a person who did some Y2K fixes for
lots of mainframes back in 1998-99, and he said that using 'windowing'
to address this was a pretty common technique, because companies did not
want to expand all their databases and forms, etc., to allow for 4 digits.

For example, if 1980 was chosen as the pivot year, then 2-digit years
80 through 99 would be assigned '1900' as the century, and 00 through 79
would be assigned '2000' as the century. So perhaps 1910 was chosen as
the pivot year (if DOB was a consideration, that would not be all that
unreasonable), so that 10 through 99 is interpreted as 1900-something
and 00 through 09 as 2000-something. Then we hit 2010, a credit card has
a 2-digit year for its expiration or transaction date or whatever, and
all of a sudden 01/10 or 01/07/10 is interpreted as 1910.
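
In code, a fixed window is essentially a one-liner, which is a big part of
why it was so popular. A minimal sketch, using the pivot of 10 speculated
above:

    public class FixedWindow {
        private static final int PIVOT = 10;  // two-digit years >= 10 => 19xx; < 10 => 20xx

        /** Expand a two-digit year using a fixed window. */
        static int expandYear(int yy) {
            return (yy >= PIVOT) ? 1900 + yy : 2000 + yy;
        }

        public static void main(String[] args) {
            System.out.println(expandYear(99));  // 1999
            System.out.println(expandYear(9));   // 2009
            System.out.println(expandYear(10));  // 1910 -- the Y2K10 failure described above
        }
    }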

Using such a fixed-window technique (there is also a sliding-window
technique that was a more expensive "fix") was usually considered only a
stop-gap measure, with most organizations intending to fix things for real
before the pivot year gave them trouble. But we all know how good
intentions work...or not.

Anyhow, like I said, this is only a GUESS of what might be going on. I have
no hard data to back it up.

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
kevin.w...@qwest.com    Phone: 614.215.4788
"It is practically impossible to teach good programming to students
 that have had a prior exposure to BASIC: as potential programmers
 they are mentally mutilated beyond hope of regeneration"
- Edsger Dijkstra, How do we tell truths that matter?
  http://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498.html




Re: [SC-L] Provably correct microkernel (seL4)

2009-10-03 Thread Wall, Kevin
Steve Christey wrote...

> I wonder what would happen if somebody offered $1 to the first applied
> researcher to find a fault or security error.  According to
> http://ertos.nicta.com.au/research/l4.verified/proof.pml, buffer
> overflows, memory leaks, and other issues are not present.  Maybe people
> would give up if they don't gain some quick results, but it seems like
> you'd want to sanity-check the claims using alternate techniques.

I was actually wondering how they could make that statement unless they
can somehow ensure that other components running in kernel mode (e.g.,
devices doing DMA, device drivers, etc.) can't overwrite the
microkernel's memory address space. It's been 20+ years since I've done
any kernel hacking, but back in the day, I think doing something like
that with the MMU would have been prohibitively expensive in terms of
resources. I've not read through the formal proof (figuring I probably
wouldn't understand most of it anyhow; it's been 30+ years since my
last math class, so those brain cells are a bit crusty ;-), but maybe
that was one of the "caveats" that Colin Cassidy referred to. In the
real world, though, that doesn't seem like a very reasonable assumption.
Maybe today's MMUs support this somehow, or perhaps the seL4 microkernel
runs in kernel mode and the rest of the OS and any DMA devices run in a
different address space, such as a "supervisory" mode. Can anyone who has
read the nitty-gritty details explain it to someone whose brain cells in
these areas have suffered significant bit rot?

-kevin
--
Kevin W. Wall   614.215.4788Application Security Team / Qwest IT
"The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents."-- Nathaniel Borenstein, co-creator of MIME
___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


[SC-L] Provably correct microkernel (seL4)

2009-10-02 Thread Wall, Kevin
Thought there might be several on this list who might appreciate
this, at least from a theoretical perspective but had not seen
it. (Especially Larry Kilgallen, although he's probably already seen it. :)

In 
http://www.unsw.edu.au/news/pad/articles/2009/sep/microkernel_breakthrough.html,

"Professor Gernot Heiser, the John Lions Chair in Computer Science in
the School of Computer Science and Engineering and a senior principal
researcher with NICTA, said for the first time a team had been able to
prove with mathematical rigour that an operating-system kernel -- the
code at the heart of any computer or microprocessor -- was 100 per cent
bug-free and therefore immune to crashes and failures."

In a news item at NICTA


it mentions this proof was the effort of 6 people over 5 years (not quite
sure if it was full-time) and that "They have successfully verified 7,500
lines of C code [there's the problem! -kww] and proved over 10,000
intermediate theorems in over 200,000 lines of formal proof". The proof is
"machine-checked using the interactive theorem-proving program Isabelle".

Also the same site mentions:
The scientific paper describing this research will appear in the 22nd
ACM Symposium on Operating Systems Principles (SOSP)
http://www.sigops.org/sosp/sosp09/.
Further details about NICTA's L4.verified research project can be found
at http://ertos.nicta.com.au/research/l4.verified/.

My $.02... I don't think this approach is going to catch on anytime soon.
Spending 30 or so staff-years verifying a 7,500-line C program is not going
to be seen as cost-effective by most real-world managers. But interesting
research nonetheless.

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
kevin.w...@qwest.comPhone: 614.215.4788
"It is practically impossible to teach good programming to students
 that have had a prior exposure to BASIC: as potential programmers
 they are mentally mutilated beyond hope of regeneration"
- Edsger Dijkstra, How do we tell truths that matter?
  http://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498.html

___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Another WAF in town

2009-09-24 Thread Wall, Kevin
> Interesting approach. Curious to know if this will satisfy a
> PCI auditor as a compensating control (section 6)

I think that's presently untested and therefore likely unknown.
I would guess it depends on the auditor's perspective. On one
hand, having a separate WAF appliance provides you with separation
of duties, so it's harder for a dev team to configure the WAF so
it accepts everything (much like I've seen some folks use a regex
of ".*" for things in Struts validators that they haven't gotten
around to thinking more deeply about). On the other hand, the
dev team is in a much better position to truly customize the rule
set to use an actual whitelist approach. The mod_security WAF
approach generally leads to a signature-based, blacklist approach.
So I can see pros and cons to each. But for a clueful dev team,
this could be a big asset if they are willing to take the time to
do things right.
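
For the record, the kind of whitelisting I have in mind is nothing fancy.
A minimal sketch (the field and pattern are made up for illustration):

    import java.util.regex.Pattern;

    // Whitelist validation sketch -- the opposite of the ".*" cop-out.
    public final class ZipCodeValidator {
        // Accept only 5-digit US ZIP codes; reject everything else.
        private static final Pattern ZIP = Pattern.compile("^[0-9]{5}$");

        public static String validateZip(String input) {
            if (input == null || !ZIP.matcher(input).matches()) {
                throw new IllegalArgumentException("invalid ZIP code");
            }
            return input;
        }
    }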

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
kevin.w...@qwest.comPhone: 614.215.4788
"It is practically impossible to teach good programming to students
 that have had a prior exposure to BASIC: as potential programmers
 they are mentally mutilated beyond hope of regeneration"
- Edsger Dijkstra, How do we tell truths that matter?
  http://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498.html


___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Where Does Secure Coding Belong In the Curriculum?

2009-08-27 Thread Wall, Kevin
Ben Tomhave wrote:
> Wall, Kevin wrote:
> >
> > I don't mean to split hairs here, but I think "fundamental concept"
> > vs "intermediate-to-advanced concept" is a red herring. In your case
> > of you teaching a 1 yr old toddler, "NO" is about the only thing
> > they understand at this point. That doesn't imply that concepts like
> > "street" are intermediate-to-advanced. It's all a matter of perspective.
> > If you are talking to someone with a Ph.D. in physics about partial
> > differential equations, PDEs *are* a fundamental concept at that level
> > (and much earlier in fact). The point is, not to argue semantics, but
> > rather to teach LEVEL-APPROPRIATE concepts.
> >
> I think you do mean to split hairs, and I think you're right to do so.
> Context is very important. For example, all this talk about
> where to fit secure coding into the curriculum is great, but it also
> ignores the very large population of self-taught coders out there,
> as well as those who learn their craft in a setting other than a
> college or university. Ergo, it still seems like we're talking at
> ends about an issue that, while important, is still only at best a
> partial solution.

Of course it's only a partial solution, and I think you raise some
very valid concerns. Normally, I wouldn't consider the "self-taught"
in a discussion of where secure coding belongs in the CURRICULUM,
but we can't ignore that 800 lb gorilla either. That, of course, is a
much harder challenge. I suppose in some sense we should expect / hope
that these same concepts that we've been discussing are addressed in
the numerous books, periodicals, web sites, etc. where most of this
learning happens. But that's probably a much more difficult situation
to change...more of a wild, wild west in comparison to academia.

Ultimately, most sane people act in accordance with incentives: they do
what they are rewarded for doing correctly and avoid what they are
disciplined for doing wrong. In academia, we can do this with grades for
students, and pay, tenure, or other perks for professors / lecturers, etc.
But once we get into the realm of books and magazines, we have to look to
the publishers to reward / discipline appropriately, and IMO they don't
necessarily have the same drivers as academia. Many publishers seem to be
more concerned with just making a quick $$ rather than being accurate
or thoroughly training people to do things correctly. (How else can you
explain tabloids, unless you subscribe to the MiB theory? And IMHO, there
are plenty of "tabloid"-like publishers writing books in the programming
field, but I digress.) Getting back to my point, you have even less
"control" over someone putting up their own educational web pages that
profess to teach programming, which many of the self-educated seem to
rely on. There are plenty of good ones, but most I've seen seem to be
oblivious to secure coding practice (w/ the exception of security-related
sites such as OWASP, etc.)

So it's only things like reputation, and ultimately market pressures,
that force any corrective actions on publishers of written and web
material. Add to that the problem that BECAUSE these people are
self-taught, they generally don't have someone to provide guidance to
separate the wheat from the chaff, like instructors hopefully do with
their students.

But if self-taught programmers are the 800 pound gorilla, then corporate
business is the 4 ton elephant.  If anything, I would say that
addressing the pressures that seem to be on corporate programmers that
come to bear _against_ secure coding practice (although unintentionally)
is the MUCH BIGGER problem. (Most people go into CS to move into industry
after all, not to stay and teach/research in academia.)

Most businesses rate secure code as a very low need and emphasize
time-to-market (which presumably has a direct correlation to market share,
or so we've been told) over everything else. IMHO, that leads to more
slip-shod code than any other single factor. Adding defensive code to
make it more robust against attacks takes additional time, which on
large projects can be quite significant. To make matters worse, many
IT shops in the USA seem to reward the "how fast can you crank out code"
(no matter how insecure) mentality over the "how good of quality do you
deliver" mentality. What is rewarded in IT shops is the quantity of LOC
cranked out each week (widely, but wrongly, perceived as equivalent to
productivity) over quality (less buggy code, which I believe correlates
well with fewer vulnerabilities).

I have no sour grapes here--never wanted to move into management--yet
over my 30+ years in industry (mostly telecom), I've seen the "fast" get
rewarded and transfer to another project before things crash-and-burn.

Re: [SC-L] Where Does Secure Coding Belong In the Curriculum?

2009-08-26 Thread Wall, Kevin
> Actually, I'm not teaching my 1 yo toddler much of anything about
> traffic right now. I'm more playing guardian when she runs around the
> house and making sure she doesn't get into situations for which she
> would be completely and totally unprepared (and in serious
> danger). She lacks the language skills to even marginally
> understand basic concepts like "street" let alone "don't play
> in the street." I think this rather proves my point that
> secure coding is not itself a fundamental concept,
> but rather an intermediate-to-advanced concept. Matt Bishop's comments
> are great, but they've also been applied in a context of
> higher ed., and recognize the limits of student understanding
> at different phases of development.

I don't mean to split hairs here, but I think "fundamental concept"
vs "intermediate-to-advanced concept" is a red herring. In your case
of you teaching a 1 yr old toddler, "NO" is about the only thing
they understand at this point. That doesn't imply that concepts like
"street" are intermediate-to-advanced. It's all a matter of perspective.
If you are talking to someone with a Ph.D. in physics about partial
differential equations, PDEs *are* a fundamental concept at that level
(and much earlier in fact). The point is, not to argue semantics, but
rather to teach LEVEL-APPROPRIATE concepts.

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
kevin.w...@qwest.comPhone: 614.215.4788
"It is practically impossible to teach good programming to students
 that have had a prior exposure to BASIC: as potential programmers
 they are mentally mutilated beyond hope of regeneration"
- Edsger Dijkstra, How do we tell truths that matter?
  http://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498.html


___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Where Does Secure Coding Belong In the Curriculum?

2009-08-26 Thread Wall, Kevin
Brad Andrews writes...

> I had proofs in junior high Geometry too, though I do not recall using
> them outside that class.  I went all the way through differential
> equations, matrix algebra and probability/statistics and I don't
> recall much focus on proofs.  This was in the early 1980s in a good
> school (Illinois), so it wasn't just modern teaching methods that were
> to blame.  I am not sure that the proofs were all that useful for
> understanding some things either, though the logic they taught has
> value that I missed a bit of since I did hit some modern techniques.

This may be heading slightly OT, but I don't think your experience
is really that unusual. My BS was a double major in math and physics
and my MS was in CS.

We used "proofs" in most of my math classes, many of my physics classes,
and several of my CS classes.

Besides the frequency, what varied in each of these was the level of
rigor expected. The proofs in math were extremely rigorous, the ones
in physics less so, and the ones in most of my CS classes would have
been classified as only so much "hand waving" if they would have been
done in my math classes. But an important thing to note in all of these
courses was, with the exception of very few advanced (senior & grad
level) math classes such as "advanced calculus" and "abstract algebra"
and "number theory", the use of 'proofs' wasn't the end, but only a
means to the end.

But still 'proofs' were utilized throughout much of this very diverse
coursework to add to the rigor of the logic and presumably to reinforce
understanding and learning.

In the same way, I think that 'security' (or 'robustness' or 'correctness'
or whatever you wish to call it) needs to be CONSISTENTLY blended into the
college and possibly even high school CS curriculum so some element of it
is touched upon in each of the classes and as one progresses it is discussed
more and more. So just as 'proofs' are sprinkled into math, physics, CS,
etc. we need to sprinkle in basic security / robustness concepts such
as:
+ An understanding of what input may be 'trusted' and what input
  cannot be, leading to the concept of trust boundaries.
+ The concept that correctness extends beyond merely handling 'correct'
  input; the program needs to somehow gracefully handle incorrect input
  as well.
+ Understanding the concept of risk, eventually leading to an understanding
  of risk analysis in upper level CS courses
+ Having an adversarial testing mindset, always thinking "how can I 'break'
  this program or system?". (BTW, sad to say, this has probably been the
  hardest thing to teach my colleagues. Some of them seem to get it, and
  some of them never do.)

There are probably others--this is by no means a complete list--but we
need to emphasize to those instructing CS that this is not going to
take up a significant portion of their coursework nor require a significant
amount of time or effort on their part. Rather, it needs to be folded into
the mix as appropriate.

I think back to my days in elementary mathematics. I recall learning at a
very early age, when learning division, that you can't divide by 0. The
explanation given by the teacher wasn't in depth; it was more like "you are
just not permitted to do that", or occasionally "it's undefined", without
telling us WHY it's undefined. In a similar manner, we can teach "don't
blindly accept unchecked input", etc. And then, if that is reinforced in
the grading process, I do think it will come through.
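
For instance, even a first-semester assignment could be graded against
something like this trivial sketch (the point is the checks, not the
arithmetic):

    import java.util.Scanner;

    // Handle malformed input and division by zero gracefully instead of
    // assuming the user types two well-formed integers.
    public class SafeDivide {
        public static void main(String[] args) {
            Scanner in = new Scanner(System.in);
            try {
                int numerator = Integer.parseInt(in.nextLine().trim());
                int denominator = Integer.parseInt(in.nextLine().trim());
                if (denominator == 0) {
                    System.err.println("Can't divide by zero.");
                    return;
                }
                System.out.println(numerator / denominator);
            } catch (NumberFormatException e) {
                System.err.println("That wasn't an integer.");
            }
        }
    }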

Surely if we could just do that much, it would be a good start. But my
observation, based on the CS colleagues that I've taught with and, before
that, the CS courses that I took at the graduate level, is that other
than the obligatory half-hour mention of security in my operating
systems course, I can barely recall it ever even coming up. And I also
seldom recall instructors ever tossing truly malformed input at your
programs either. By comparison, when I had an opportunity to
teach a masters-level CS course on distributed systems (the Tanenbaum
book), I tossed in matters of security throughout, not just in the
chapters about security. Of course, I don't think the students realized
that's what I was teaching them until we got to the chapters about
security, but that's OK too. Subliminal methods sometimes work as well.

-kevin
--
Kevin W. Wall   614.215.4788Application Security Team / Qwest IT
"The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents."-- Nathaniel Borenstein, co-creator of MIME
___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___

Re: [SC-L] Where Does Secure Coding Belong In the Curriculum?

2009-08-26 Thread Wall, Kevin
James McGovern wrote...

> - Taking this one step further, how can we convince
> professors who don't
> teach secure coding to not accept insecure code from their students.
> Professors seed the students thinking by accepting anything
> that barely
> works at the last minute. Universities need to be consistent amongst
> their own teaching/thinking.

Well, actually, I think that what Matt Bishop wrote in his response to
Benjamin Tomhave is the key:

> But in introductory classes, I tend to focus on what I am calling
> "robust" above; when I teach software security, I focus on
> both, as I consider robustness part of security.
>
> By the way, you can do this very effectively in a beginning
> programming class. When I taught Python, as soon as the students got
> to basic structures like control loops (for which they had to do
> simple reading), I showed them how to catch exceptions so that they
> could handle input errors. When they did functions, we went into
> exceptions in more detail. They were told that if they didn't handle
> exceptions in their assignments, they would lose points -- and the
> graders gave inputs that would force exceptions to check that
> they did.
>
> Most people got it quickly.

That is, Matt suggested a direct reward / punishment. Specifically, if
the students don't account for bad input via exceptions or some other
suitable mechanism, they simply lose points.

Matt's right. If it boils down to grades, most students will get it, and
fast.

And whether we call this secure-coding, robustness, or simply correctness,
it's a start.

I think that too many people, when they hear that we need to start
teaching security at every level of CS, are thinking of more complicated
things like encryption, authentication protocols, Bell-LaPadula, etc.,
but I don't think that's where the thrust of this thread was leading.

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
kevin.w...@qwest.comPhone: 614.215.4788
"It is practically impossible to teach good programming to students
 that have had a prior exposure to BASIC: as potential programmers
 they are mentally mutilated beyond hope of regeneration"
- Edsger Dijkstra, How do we tell truths that matter?
  http://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498.html



___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Where Does Secure Coding Belong In the Curriculum?

2009-08-21 Thread Wall, Kevin
Karen Goertzel wrote...

> I think we need to start indoctrinating kids in the womb. Start selling Baby 
> Schneier CDs alongside Baby Mozart. :)

Yeah, I can hardly wait to hear Schneier's remake of that Dr. Seuss children's 
classic

 One Fish, Twofish, Red Fish, Blowfish

-kevin
--
Kevin W. Wall   614.215.4788Application Security Team / Qwest IT
"The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents."-- Nathaniel Borenstein, co-creator of MIME
___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Where Does Secure Coding Belong In the Curriculum?

2009-08-21 Thread Wall, Kevin
Karen Goertzel wrote...

> I'm more devious. I think what needs to happen is that we
> need to redefine what we mean by "functionally correct" or
> "quality" code. If determination of functional correctness
> were extended from "must operate as specified under expected
> conditions" to "must operate as specified under all
> conditions", functional correctness would necessarily require
> security, safety, fault tolerance, and all those other good
> things that make software dependable instead of just correct.

Except, unfortunately, as an industry / profession, we can't even
get the far-simpler (IMO) _functional correctness_ right, let
alone (so-called) "non-functional" issues such as security, safety,
fault tolerance, etc. (Mathematical rigor and proofs-of-correctness
aside--in many [most?] cases they're not practical, and even if they
were, most programmers' brains turn to mathematical mush whenever they
see any kind of correctness proof. Meaning that "it ain't going to
happen" if it requires thinking. ;-)

In some regard, I think this holds things back. If we don't do a
good job testing that the software does all that it's supposed to do
under *ideal* conditions, how are we ever to expect developers and
testers to test to make sure that the software doesn't do additional
things that it's NOT supposed to do under less than ideal conditions.
There's a reason why Ross Anderson and Roger Needham talked about
"Programming Satan's Computer" (see
http://www.cl.cam.ac.uk/~rja14/Papers/satan.pdf). [Yes, I'm aware that
paper was about the correctness of distributed cryptographic protocols,
but I think both Anderson and Needham would agree that the term
"Programming Satan's Computer" applies more generally than just to that
narrow aspect of security.]

Not that I'm advocating giving up, mind you. If the battle seems
hopeless, perhaps we would see more progress if we were to address
secure programming issues simply as a related aspect of program
correctness. Why? Because the development community seems more
willing to address those things. (Obviously, part of that is that
many programming flaws are rather tangible and something that casual
users can experience. Yeah! That's the ticket. Let's teach the general
populace how to hack into systems! Pass out free "You've been pwnd!"
T-shirts with every successful pwnage. Now *THAT* would be devious. ;-)

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
kevin.w...@qwest.comPhone: 614.215.4788
"It is practically impossible to teach good programming to students
 that have had a prior exposure to BASIC: as potential programmers
 they are mentally mutilated beyond hope of regeneration"
- Edsger Dijkstra, How do we tell truths that matter?
  http://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498.html

___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] IBM Acquires Ounce Labs, Inc.

2009-08-05 Thread Wall, Kevin
Arian J. Evans wrote...

> The problem I had in the past with benchmarks was the huge degree of
> customization in each application I would test. While patterns emerge
> that are almost always automatable to some degree, the technologies
> almost always require hand care-and-feeding to get them to an
> effective place. I think this notion of combining the tools with
> qualified users is the true potential power of the SaaS solutions that
> are coming to market.

It's a pity that these dynamic-scanning vendors can't work together to
come up with a common approach to move the automation you speak of at
least part way along. (Yes, I know. I'm dreaming. ;-)

Some ideas that I've had in the past is that they could request and make
use of:
1) HTTP access logs from Apache and/or the web / application server.
   These might be especially useful when the logs are specially configured
   to also collect POST parameters and then the application's regression
   tests are run against the application to collect the log data. Most web /
   app servers support Apache HTTPD style access log format, so parsing
   shouldn't be too terribly difficult in terms of the # of variations they need
   to handle.
2) For Java, the web.xml could be used to gather data that might allow some
   automation, especially wrt discovery of dynamic URLs that are otherwise
   difficult to discover by autoscanning.
3) If Struts or Struts 2 is being used, gather info from the Struts
   validators (forget OTTOMH what the XML files where this is placed are
   called, but those are what I'm referring to).
4) Define some new custom format to allow the information they need to be
   independently gathered. Ideally this would be minimally some file format
   (maybe define a DTD or XSD for some XML format), but their tools could
   offer some GUI interface as well.
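
As a rough illustration of item 1, parsing stock Apache common log format
is mostly a regex exercise. A minimal sketch (assumes the stock format,
not a customized one):

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Pull the method, URL, and status out of a common-log-format line.
    public class AccessLogLine {
        private static final Pattern CLF = Pattern.compile(
            "^(\\S+) (\\S+) (\\S+) \\[([^\\]]+)\\] \"(\\S+) (\\S+)[^\"]*\" (\\d{3}) (\\S+)");

        public static void main(String[] args) {
            String line = "10.0.0.1 - - [05/Aug/2009:12:34:56 -0400] "
                        + "\"POST /app/login HTTP/1.1\" 200 1523";
            Matcher m = CLF.matcher(line);
            if (m.find()) {
                // Prints: POST /app/login -> 200
                System.out.println(m.group(5) + " " + m.group(6)
                                   + " -> " + m.group(7));
            }
        }
    }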

Of course, I'm not sure I'd expect to see anything like this in my
lifetime. At this point, most of the users of these tools don't even see
this as a need to the same degree that Arian and readers of SC-L do, and
it's not clear how addressing these shortcomings IN A COMMON WAY would
help the vendors compete. More likely, we'll get there from here by
evolution and vendors copying ideas from one another. The other
significant driver AGAINST this, as I see it, is that many vendors sell
"professional services" for specialized consulting on how to do these
things manually. That brings extra $$ into their companies, so convincing
them to give up their cash cow is a hard sell. And as a purchaser of one
of these tools, if you don't have the needed expertise in house (many do,
but I'm guessing a lot more don't), it's hard to tell your director that
you can't use that $75K piece of shelfware that your security group just
bought because they can't figure out how to configure it. Instead, they
are more likely to quietly drop another $10K or so for consulting,
discreetly, and hope their director or VP doesn't notice.

-kevin
--
Kevin W. Wall   614.215.4788Application Security Team / Qwest IT
"The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents."-- Nathaniel Borenstein, co-creator of MIME
___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Source or Binary

2009-07-30 Thread Wall, Kevin
In a message dated July 30, 2009 10:09 AM EDT, Paco Hope wrote...

> The Java Virtual Machine is a theoretical machine, and Java
> code is compiled
> down to Java bytecode that runs on this theoretical machine.
> The Java VM is
> the actual Windows EXE that runs on the real hardware. It reads these
> bytecodes and executes them. There is a very significant level of
> abstraction between a Java program running in a Java virtual
> machine and
> native code that has been compiled to a native object format (e.g., an
> .exe).

There's theory, and then there's practice. This is almost 100% accurate
as a practical matter, with the exception of HotSpot or other JIT compiler
technologies that compile certain byte code into machine code and then
execute that instead of the original byte code.

I'm sure that Paco is aware of that, but just not sure all the other
readers are.

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
kevin.w...@qwest.comPhone: 614.215.4788
"It is practically impossible to teach good programming to students
 that have had a prior exposure to BASIC: as potential programmers
 they are mentally mutilated beyond hope of regeneration"
- Edsger Dijkstra, How do we tell truths that matter?
  http://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498.html


___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Insecure Java Code Snippets

2009-05-10 Thread Wall, Kevin
Larry Kilgallen wrote...
> So tell me what you think is easier in C/C++.

Well, just from a pure language POV, in comparing C++ with Java (sorry,
not qualified to comment on Ada), there is one advantage to C/C++ over
Java, and that is that in C++ I have a much higher level of confidence
that things get cleaned up when an object is destroyed. In C++, you can
write a destructor to do this clean-up and you can predict exactly when
the DTOR will execute. In Java, it is not so simple, even if all you want
is to reclaim memory. You never really know when an object's finalize()
method will be called (or even if it ever will, except at exit). You are
not supposed to rely on the timeliness of object finalization, nor are you
really supposed to rely on finalizers to do anything except release
memory.

So why is this important? It's important because sometimes correctness
and/or security depends on proper "clean-up". E.g., releasing a semaphore,
overwriting memory (say for passwords or crypto keys), etc.
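
About the best you can do in pure Java for the crypto-key case is to keep
the secret in an array and zero it yourself in a finally block, rather
than hoping a finalizer runs. A minimal sketch:

    import java.util.Arrays;

    // Deterministic clean-up of a secret in pure Java. Unlike a C++
    // destructor, a finalizer may run late or never, so the zeroing
    // is done explicitly in a finally block.
    public class SecretScrubber {
        static void useKey(byte[] key) {
            try {
                // ... use the key material here ...
            } finally {
                // Best effort; the JVM may still have copied it elsewhere.
                Arrays.fill(key, (byte) 0);
            }
        }
    }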

Aside from that, the fact that Java doesn't allow you (native) access to
all system calls can make things much more difficult. E.g., no direct
access to fcntl(2) makes it impossible to use *nix mandatory file and/or
record locking unless resorting to JNI. Or the fact that (AFAIK) Java doesn't
support exclusive opens of a file or deal natively with sym links can cause
security issues in certain situations. Of course these things are not so
much the language per se as they are the supporting runtime environment
which Sun wanted to ensure was portable across all the OSes that they
support for Java. It's a trade-off and given their design goals, probably an
appropriate one, but it does make doing certain tasks a bit more difficult
to do securely. (Another example, Java supports nothing in its runtime
environment to prevent an object from being paged out to swap--something
that again would be desirable for things like crypto keys.)

-kevin
--
Kevin W. Wall   614.215.4788Application Security Team / Qwest IT
"The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents."-- Nathaniel Borenstein, co-creator of MIME
___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] BSIMM: Confessions of a Software Security Alchemist(informIT)

2009-03-18 Thread Wall, Kevin
Gary McGraw wrote:

> We had a great time writing this one.  Here is my favorite
> paragraph (in the science versus alchemy vein):
> "Both early phases of software security made use of any sort
> of argument or 'evidence' to bolster the software security
> message, and that was fine given the starting point. We had
> lots of examples, plenty of good intuition, and the best of
> intentions. But now the time has come to put away the bug
> parade boogeyman, the top 25 tea leaves, black box web app
> goat sacrifice, and the occult reading of pen testing
> entrails. The time for science is upon us."

I might agree with your quote of "The time for science is upon us." if
it were not for the fact that the rest of computer science / engineering
is far ahead of computer security (IMO), and they are *still* not anywhere
near real "science", at least as practiced as a whole. (There probably are
pockets here and there.) For the most part, based on what I see in industry,
I'm not even sure we have reached the alchemy stage! (Compare where most
organizations still are with respect to SEI's CMM. The average is probably
Level 2. Most organizations no longer even think of CMM as relevant.)

My observation is that very few people in the IT profession--outside
of academia at least--belong to ACM, IEEE-CS, or any other
professional organization that might challenge them. I question, on
a professional level, how much we are going to progress as an industry
when most in this profession seem to think that they do not need anything
beyond the "Learn X in 24 Hours" type pablum. (Those are fine as far
as they go, but if you think that's all that's required to make you
proficient in X, you have surely missed the boat.)
Please note, however, that I do not think this mentality is limited
to those in the IT / CS professions. Rather, it is a pandemic of this age.

Anyhow, I'll shut up now, since this will surely take us OT if I persist.

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
kevin.w...@qwest.comPhone: 614.215.4788
"It is practically impossible to teach good programming to students
 that have had a prior exposure to BASIC: as potential programmers
 they are mentally mutilated beyond hope of regeneration"
- Edsger Dijkstra, How do we tell truths that matter?
  http://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498.html

___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] InformIT: budgeting for software security

2008-04-11 Thread Wall, Kevin
Jim,

In response to Stephen's question, you wrote...

>> What does 'green technology' have to do with infosec?
> 
> Data centers worldwide use at least 3% of all global electricity. With 
> the growing cost of oil/power - most large corporations are looking for 
> ways to reduce power consumption at their data centers. Google is 
> building new database centers near cheap power, cheap land, and cheap 
> water. Sun has "bet the farm" on Green issues. IBM and Intel have 
> green/sustainability departments as well.
> 
> http://www.baselinemag.com/c/a/Infrastructure/Disruptive-Forces-Sun-Microsystems/

Maybe I need someone to connect the dots for me, but IMO, your response
_still_ doesn't adequately answer Stephen's question.

You addressed why 'green technology' is good in general and why businesses
are pursuing it, but not what it has to do w/ information security.
Certainly, if there is a connection here, it is not a direct one.

I don't want to speak for Stephen (but will anyways ;-), but I think it's
unfair to interpret his remark as implying that green technology is bad or
some sort of voodoo. In context, I think his concern was that in the past,
the RSA conferences were focused on infosec, and on cryptography in
particular. Apparently, based on Stephen's and gem's comments, the
conference seems to have lost its focus. I think that's all that was
being implied here.

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
[EMAIL PROTECTED]   Phone: 614.215.4788
"The reason you have people breaking into your software all 
over the place is because your software sucks..."
 -- Former White House cyber-security adviser, Richard Clarke,
at eWeek Security Summit



___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] implementable process level secure development thoughts

2008-03-11 Thread Wall, Kevin
Andy,

You wrote...

> I have been working on developing a series of documents to turn the
> ideas encompassed on this list and in what I can find in books &
> articles.  I am not finding, and it may just be I am looking in the
> wrong places, for any information on how people are actually
> implementing the concepts.  I have found the high level ideas (like in
> "Software Security" and the MS SDL) and the low level code level
> rules, but there does not seem to be any information on how these two
> are being merged and used in actual development projects.  Are there
> any non-proprietary materials out there?
> 
> If there are none, could this be part of the problem of getting secure
> development/design/testing/coding out into the real world?

Not sure exactly what you are looking for, but I recently reviewed
the book

Integrating Security and Software Engineering: Advances and
Future Vision, Mouratidis H., Giorgini P., IGI Global, 2006,
ISBN-10: 1599041480, ISBN-13: 978-1599041483.

for Computing Reviews. (The review was posted online 2 or 3 weeks ago.
Not sure if it's still up or not.) The cost for the book on Amazon.com
is ~$80.

This book covered some of the "gaps" that you may be referring to. E.g.,
it covered quite a few secure design methodologies and how they
(more or less) fit into an SDLC.

NOTE: This book is very academic in nature, makes for difficult reading,
and does not truly reflect current _practice_. However, it has an
excellent bibliography that is useful if you wish to explore the topics
more deeply. I can't really say much more about this (at least in a
public forum) because Computing Reviews (http://www.reviews.com/) owns
the copyright of the review.

Contact me off-list if you want any specific question answered regarding
this book.

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
[EMAIL PROTECTED]   Phone: 614.215.4788
"It is practically impossible to teach good programming to students
 that have had a prior exposure to BASIC: as potential programmers
 they are mentally mutilated beyond hope of regeneration"
- Edsger Dijkstra, How do we tell truths that matter?
  http://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498.html 



___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-26 Thread Wall, Kevin
Ken,

You wrote...
> Mind you, the overrun can only be exploited when specific characters  
> are used as input to the loop in the code.  Thus, I'm inclined to  
> think that this is an interesting example of a bug that would have  
> been extraordinarily difficult to find using black box testing, even  
> fuzzing.
> <...deleted...>
> The iDefense team doesn't say how the (anonymous) person  
> who reported it found it, but I for one would be really curious to  
> hear that story.

Reading from the iDefense security advisory on this, it says:

  IV. DETECTION

  iDefense has confirmed the existence of this vulnerability in version
  10.5-GOLD of RealNetworks' RealPlayer and HelixPlayer. Confirmation of
  the existence of this vulnerability within HelixPlayer was done via
  SOURCE CODE REVIEW. Older versions are assumed to be vulnerable.

(Emphasis mine.)

So it looks like it was discovered manually, possibly with the aid of a
static source code analyzer that ignores Flawfinder comments. Apparently,
you missed that because of your jet lag. ;-)

The sad thing is that, based on the documented "Disclosure Timeline", it
seems that almost 8 full months have passed since the vendor (RealNetworks)
responded with a fix. I mean, was the fix really rocket science that it
had to take THAT LONG??? IMHO, there's no excuse for taking that long.

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
[EMAIL PROTECTED]   Phone: 614.215.4788
"It is practically impossible to teach good programming to students
 that have had a prior exposure to BASIC: as potential programmers
 they are mentally mutilated beyond hope of regeneration"
- Edsger Dijkstra, How do we tell truths that matter?
  http://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498.html



___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Tools: Evaluation Criteria

2007-05-24 Thread Wall, Kevin
James McGovern wrote...

> Maybe folks are still building square windows because we haven't
> realized how software fails and can describe it in terms of a pattern.
> The only pattern-oriented book I have ran across in my travels is the
> Core Security Patterns put out by the folks at Sun. Do you think we
> should stop talking solely about code and start talking about how
> vulnerabilities are repeatedly introduced and describe using patterns
> notation?

You might want to check out securitypatterns.org, and more specifically,
http://www.securitypatterns.org/patterns.html
which mentions a few other books.

I think there are a few other books by Markus Schumacher, one of which
was based on his doctoral dissertation that is not shown there.

As to your question, should we stop talking _SOLELY_ about code? Probably,
yes. But I think the reason we don't is two-fold. The first is that most
of us view that as the easy part, the low-hanging fruit so-to-speak. The
second is that the development community, for the most part, still doesn't
seem to be applying secure CODING principles, so many of us think
it would be premature to move on to try to teach them secure design
principles, developing security reqts with abuse cases, threat modeling,
etc. From a personal POV, I think that's something that a small team of
security specialists can handle. (At least it mostly works here. Security
evaluations are mandatory shortly after the design is complete.) But we
can't possibly do manual code inspections with a small security team,
so we try to instruct developers (alas, w/out too much success) in secure
coding practices to avoid the problems at that level in the first place.

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
[EMAIL PROTECTED]   Phone: 614.215.4788
"The reason you have people breaking into your software all 
over the place is because your software sucks..."
 -- Former whitehouse cybersecurity advisor, Richard Clarke,
at eWeek Security Summit



___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Economics of Software Vulnerabilities

2007-03-20 Thread Wall, Kevin
James McGovern apparently wrote...

> The uprising from customers may already be starting. It is 
> called open source. The real question is what is the duty of 
> others on this forum to make sure that newly created software 
> doesn't suffer from the same problems as the commercial 
> closed source stuff...

While I agree that the FOSS movement is an uprising:
1) it's not being pushed by "customers" so much as by IT developers, and
2) the "uprising" isn't so much an outcry against security as it is
   against not being able to have the desired features implemented
   in the desired manner.

At least that's how I see it.

With rare exceptions, in general, I do not find that the open source
community is that much more security conscious than those producing
closed source. Certainly this seems true if measured in terms of
vulnerabilities and if we measure "across the board" (e.g., take a random
sampling from SourceForge) and not just our favorite security-related
applications.

Where I _do_ see a remarkable difference is that the open source
community seems to be in general much faster in getting security
patches out once they are informed of a vulnerability. I suspect
that this has to do as much with the lack of bureaucracy in open
source projects as it does the fear of loss of reputation to their
open source colleagues.

However, this is just my gut feeling, so your gut feeling may differ.
(But my 'gut' is probably bigger than yours, so my feeling prevails. ;-)
Does anyone have any hard evidence to back up this intuition? I
thought that Ross Anderson had done some research along those lines.

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
[EMAIL PROTECTED]   Phone: 614.215.4788
"It is practically impossible to teach good programming to students
 that have had a prior exposure to BASIC: as potential programmers
 they are mentally mutilated beyond hope of regeneration"
- Edsger Dijkstra, How do we tell truths that matter?
  http://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498.html 



___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Vulnerability tallies surged in 2006 | The Register

2007-01-23 Thread Wall, Kevin
Benjamin Tomhave wrote...
> This is completely unsurprising.  Apparently nobody told the agile
> dev community that they still need to follow all the secure coding
> practices preached at the traditional dev folks for eons.  XSS,
> redirects, and SQL injection attacks are not revolutionary, are not
> all that interesting, and are so common-place that it makes one wonder
> where these developers have been the last 5-10 years.
> 
> Solution to date: throw out traditional design review, move to agile
> security testing.  Why?  Because there seems rarely to be a design
> to review, and certainly no time to do it in.  Overall, it's important
> that agile apps be built on an underlying publishing framework so
> that inherited vulns can be found and fixed across the board by
> focusing on a single platform.
 
I think many in the security community have had similar thoughts for
years. IIRC, I think Gary McGraw even made a prediction at one point
that agile development methods would be the worst setback to information
security in years, or something to that effect. (At least I think it
was Gary. If not, my bad memory is at fault. Or perhaps I should say...
my bad...memory?)
 
Agile development works well when the users have some idea of
how they'd like to see the system work, so it usually works fairly well
for things like laying out screens, workflow, etc., as well as many
business applications which are presently being done manually. However,
these users are generally clueless when it comes to security. If the
developers were using a more traditional SDLC and their users were
writing up business requirements, the typical requirements (ones I have
actually seen) are things like "The user must login" and "The system
must be secure". That's about as sophisticated as they get. If your
developers know a lot about security, then they _might_ make it work,
but the agile development methods, which emphasize working closely with
your users, don't work well for security matters because most users
don't even know what to ask for.
 
IMO, another reason why agile development fails miserably to result in
secure programs is that security cuts across the grain of the entire
application. While you can have a trusted kernel or whatever be a
logically isolated component, much of security has to be DESIGNED IN,
FROM THE START, because all it takes is a single incorrectly functioning
piece to ruin your entire security posture. Most agile development teams
that I've seen here think that they can leave security issues to the end
and then put them in that special "security module". One problem is that
security must be applied anywhere that untrusted input can come from,
which is usually quite a few places. In that sense, trying to add
security to an application developed using agile methods is similar to
attempting to add concurrency / multi-threading to an application
after-the-fact. Sure, you _can_ do it, but what it results in is a
system with a few coarse-grained locks and very little concurrency.
Concurrency cuts across various aspects, just like security does. That's
why I don't see it as a particularly good fit for agile development.
(OTOH, I think it should be a great fit with Aspect Oriented Programming,
but that's another topic.) And while I'm on a roll (at least in the ego
of my own mind ;-), there is one other place that I think is a rotten
match for agile development: if the software you are developing is
supposed to be a reusable class library, I don't find agile development
a particularly good fit. Publish the interfaces of a reusable class
library for the first release and then go ahead and just try to refactor
those interfaces after you have a 1/2 dozen clients using release 1.0.
If they don't skin you alive, they certainly won't be your clients for
your next release.
 
But enough rambling.
-kevin
---
Kevin W. Wall Qwest Information Technology, Inc.
[EMAIL PROTECTED] Phone: 614.215.4788
"The reason you have people breaking into your software all 
over the place is because your software sucks..."
-- Former whitehouse cybersecurity advisor, Richard Clarke,
at eWeek Security Summit



___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Could I use Java or c#? [was: Re: re-writingcollegebooks]

2006-11-15 Thread Wall, Kevin
Larry Kilgallen wrote:
> At 8:18 PM -0600 11/14/06, Wall, Kevin wrote:
> 
> > That makes a Java inappropriate for a lot of
> > system-level programming tasks. Simple example: There's no
> > way in pure Java that I can lock a process in memory. Wrt this 
> > list, that has a lot of security ramifications especially on
> > shared processors. Sure makes hiding secrets a lot harder.
> 
> Please explain that issue.
> 
> Is there some multiuser operating system that does not clear 
> memory before retasking it for another  process ?

By shared processors, I didn't mean multi-CPU systems, but simple
computers used by multiple users. Sorry for the possible confusion.

I wasn't referring to the OS not clearing memory between use by
different processes. Instead consider a case where there's a secret
such as an encryption key, password, etc. in a chunk of memory that
has been paged out to a swap device (*nix) or pagefile.sys (Windows).
The process then terminates abnormally because of a signal, abrupt
power outage, etc. From a security POV, this is a bad thing, since
now your secret is somewhere on the hard drive. If your computer
is physically secure and the OS is properly secured, this is a
relatively low risk, but it still may not be acceptable. In the
first case, where your Java process might be abnormally terminated
via a signal, you may even end up with a process [core] dump.
In C/C++, I'd use something like mlock(2) or perhaps memcntl(2) to
lock the secret in physical memory. But there's no way to do this
using _pure_ Java.

Another example, perhaps easier to grasp and still security related:
say you have a regular "console" application (i.e., one that doesn't
bring up its own window; e.g., consider Windows console programs). Your
application wants to _prompt_ the user for a password, and you would
like to disable echoing of the password to the console window.

In Java, you can do this if you use something like Swing or AWT,
etc., but there's no portable way that I've found using pure Java.
(Note: kludges such as continuously overwriting each character as
it's input aren't secure. Consider the case where the user is running
'script' or has xterm logging enabled, etc. Also, IMHO, calling an
external program, such as stty, from a Java program in order to disable
and re-enable character echo is also a kludge.)

In C/C++ or other language where I can call system calls directly,
this is easy to do (though not necessarily portable across different
OSes) via appropriate ioctl(2) call or various other means. (E.g., see
termio(7) for *nix systems.)
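
For readers on newer JVMs: Java SE 6's java.io.Console finally added
a portable, pure-Java way to do this, assuming an actual terminal is
attached (System.console() returns null under IDEs and with
redirected I/O). A minimal sketch:

    import java.io.Console;
    import java.util.Arrays;

    public class PasswordPrompt {
        public static void main(String[] args) {
            Console console = System.console();
            if (console == null) {
                System.err.println("No console attached; refusing to echo.");
                return;
            }
            char[] password = console.readPassword("Password: "); // echo disabled
            try {
                // ... authenticate with the password here ...
            } finally {
                Arrays.fill(password, '\0');  // scrub the copy when done
            }
        }
    }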

Overall, I find these types of limitations with Java or C# more
frustrating than I do the performance issues. If the performance issue
is that bad, it's usually my algorithm or data structures that need the
tuning. If all you need is to run machine code to get better
performance, then compile your Java with gcj or TowerJ or something
similar. (Not sure what options exist for C#.)

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
[EMAIL PROTECTED]   Phone: 614.215.4788
"It is practically impossible to teach good programming to students
 that have had a prior exposure to BASIC: as potential programmers
 they are mentally mutilated beyond hope of regeneration"
- Edsger Dijkstra, How do we tell truths that matter?
  http://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498.html


This communication is the property of Qwest and may contain confidential or
privileged information. Unauthorized use of this communication is strictly 
prohibited and may be unlawful.  If you have received this communication 
in error, please immediately notify the sender by reply e-mail and destroy 
all copies of the communication and any attachments.

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Could I use Java or c#? [was: Re: re-writing college books]

2006-11-15 Thread Wall, Kevin
Crispin Cowan wrote...
> mikeiscool wrote:
...
> > True, but that doesn't mean runtime portability isn't a 
> > good thing to aim for.
> > 
> It means that compromising performance to obtain runtime portability
> that does not actually exist is a poor bargain.

To me, the bigger loss than performance is all the functionality that
you give up to gain the portability. E.g., because several system calls
(in a functional/feature sense, not the _specific_ sys calls) aren't
portable across all the OSes that Sun wanted to support with Java, they
dumbed down the list to the lowest common denominator. That makes Java
inappropriate for a lot of system-level programming tasks. Simple
example: There's no way in pure Java that I can lock a process in
memory. Wrt this list, that has a lot of security ramifications,
especially on shared processors. Sure makes hiding secrets a lot
harder.

Plea to moderator: Ken: While I find this debate interesting, I think it
has little to do with secure coding. I'm trying to bring it back on
track a bit, but I fear that it is too far gone. My vote is to kill
this topic unless someone has a major objection or we can make it
relevant to security. Thanks.

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
[EMAIL PROTECTED]   Phone: 614.215.4788
"It is practically impossible to teach good programming to students
 that have had a prior exposure to BASIC: as potential programmers
 they are mentally mutilated beyond hope of regeneration"
- Edsger Dijkstra, How do we tell truths that matter?
  http://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498.html


This communication is the property of Qwest and may contain confidential or
privileged information. Unauthorized use of this communication is strictly 
prohibited and may be unlawful.  If you have received this communication 
in error, please immediately notify the sender by reply e-mail and destroy 
all copies of the communication and any attachments.

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] re-writing college books - erm.. ahm...

2006-11-06 Thread Wall, Kevin
In response to a post by Jerry Leichter, Gadi Evron wrote...

> A bridge is a single-purpose device. A watch is a single-purpose
> computer, as was the Enigma machine, if we can call
> it such.
> 
> Multi-purpose computers or programmable computers are where
> our problems start. Anyone can DO and create. One simply has
> to sit in front of a keyboard and screen and make it happen.

Let us keep in mind that in the name of profits (and ignoring our
prophets, see .sig, below), as an industry, we have striven to
lower the entry level of programming by introducing "diseases"
(I'll probably catch some flack for that) such as Visual Basic,
etc., so that managers who have never had even the simplest
introduction to computer science can now develop their own
software, complete with security vulnerabilities. This only
exacerbates the situation. To add to that, often you get some
manager or marketing type who slaps together a "working" prototype
of something they or a customer is asking for by using a spreadsheet,
an Access "database", and some VB glue that works for maybe 100
records. Then s/he thinks that a small development team should be
able to tweak that prototype into an enterprise-wide,
Internet-facing application that can handle millions of records and
a transaction volume 3 or 4 orders of magnitude larger than what the
prototype handles, and slap it all together in a couple
of weeks.

Developers have to cut corners somewhere, and since security issues
are not paramount, that's often what gets overlooked.

As an industry, I think that we've, in part, done this to ourselves.
When I started in this industry 27 years ago, at least real software
engineering techniques were _attempted_. There were requirements
gathered, specifications written and reviewed, designs written,
reviewed, and applied, and an extensive testing period after
coding was more or less complete. But that used to take 15-20 people
about 1 to 2 years. Now we've compressed that down to 90 days or so,
so something had to give (besides our sanity ;-). What I see today
is a few "analysts" go talk to marketing or other stakeholders and
they write up some "user stories" (not even real "use cases"; what
I'm referring to but more like a sentence or two describing some basic,
sunny-day-only usage scenario collected into a spreadsheet). From
there, the application development teams jump directly into coding/testing,
magically expecting the design to somehow just "emerge" or expecting to
be able to "refactor it" later (if there ever is a "later"). (Can you
tell I think that extreme programming--at least as practiced here--has
been a horrible failure, especially from a security POV? :)

I ask you, just where would civil or mechanical engineering be today
if they had encouraged the average construction worker to develop his
own bridges or design his own buildings rather than relying on
architects and engineers to do this? That's just one reason why things
are as bad as they are. Today, I don't even see professional software
developers develop software using good software engineering principles.
("It takes too long" or "It's too expensive" are the usual comments.)
Or where would we be if the city council expected to build a new
80-story skyscraper, starting from inception, in only 6 months?
It's no wonder that we so often hear the remark that says

 "If [building] architects built buildings the way that
 software developers build software, the first woodpecker
 that came by would destroy civilization."

Maybe what we need is to require that, as part of software developers'
education, we partly indoctrinate them into other "real" engineering
disciplines and hope that some of it rubs off. Because, IMO, what we
are doing now is failing miserably.

BTW, if you've not yet read the Dijkstra article referenced below, I
highly recommend it. It's quite dated, but it's a gem for .sig quotes.

-kevin

Std disclaimer: Everything I've written above reflects solely my own
opinion and not the opinion of any of my employers,
past or present.
--- 
Kevin W. Wall   Qwest Information Technology, Inc. 
[EMAIL PROTECTED]  Phone: 614.215.4788 
"It is practically impossible to teach good programming to students 
 that have had a prior exposure to BASIC: as potential programmers 
 they are mentally mutilated beyond hope of regeneration" 
- Edsger Dijkstra, How do we tell truths that matter? 
  http://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498.html 
 



This communication is the property of Qwest and may contain confidential or
privileged information. Unauthorized use of this communication is strictly 
prohibited and may be unlawful.  If you have received this communication 
in error, please immediately notify the sender by reply e-mail and destroy 
all copies of the communication and any attachments.

Re: [SC-L] How can we stop the spreading insecure coding examples at training classes, etc.?

2006-08-31 Thread Wall, Kevin
Tim Hollebeek writes...

> Really, the root of the problem is the fact that the simple version
> is short and easy to understand, and the secure version is five
> times longer and completely unreadable.  While there always is some
> additional complexity inherent in a secure version, it is nowhere
> near as bad as current toolkits make it seem.

I would say that secure versions that are *not* well thought out
(particularly where security wasn't part of the original design)
may tend to be FIVE times longer, but I don't think that's the
typical case with code that is well designed. These security
checks can be modularized and reused like most other code.
However, it may very well be that it is five times more
difficult to develop the examples in the first place, though,
and THAT'S probably a major reason that we don't see it
more often in example code.

> Demo code generally demonstrates some fairly powerful capability;
> the reason it is often short and sweet is because lots of effort
> has gone into making it possible to do useful things with minimal
> effort.  Unfortunately, it is often the case that much less effort
> has gone into making it possible to do the same thing securely, so
> that code is quite a bit longer.  You're right, if there was more
> of a pushback against broken demo code, maybe more effort would go
> into making it easy to do things securely, instead of insecurely.

Well, I'm going to start pushing back when I can. Tomorrow I get
the chance to bend the ear of some security folks from the
Live.com site.  I'm definitely going to be letting them know
of my dissatisfaction wrt recent MS Atlas training and asking
what I can do about it (other than completing the training
evaluation). Every little bit helps.

As Ed Reed pointed out, as an industry we did manage to get
rid of computed gotos, spaghetti code, etc., so maybe there's
hope. (But the pessimist in me says that it's probably easier
to get people to _stop_ doing some poor practice than to start
doing some good practice. I hope I'm wrong there.)-:

For the moment, perhaps all we can do is try to publicly
shame them and bring about peer pressure that way. I dunno.
I think it's primarily a people problem rather than simply
a technological one, which is why it's so hard to solve.
So things like showing people secure coding idioms and secure
design patterns (a la Markus Schumacher) will have only minor
impact until there is some major attitude change with both
developers and upper management.

> I think part of the problem is that people have fallen into the
> trap of thinking that security is supposed to be hard and that
> checking all your errors is supposed to bloat your code by a factor
> of five, instead of wondering why library functions are designed
> in such a way that omitting complex logic around them fails in an
> insecure way.  Secure code can be short and sweet, too, just not
> with most of the languages and tools that are currently popular.

That's definitely a large part of it. Historically, most
libraries haven't taken much of a security slant unless they've
been crypto related. Most often, they first become well
entrenched, and *then* there's an outpouring of security
vulnerabilities discovered as the library builds up
a critical mass of usage. E.g., libc was this way. It wasn't
until the Morris Internet worm in 1988 that people really
started paying much attention to libc security issues. By
that time libc was pretty much everywhere and, what's worse,
there wasn't really any viable alternative unless you wanted
to roll your own. (That was long before GNU was prevalent.)
And yes, buffer overflows were known very early, before Unix/C
were widespread. But it was a different world then.

> This is an old, old problem.  strcpy is insecure, and any code
> involving strncpy or a length check will be longer and/or more
> complex.  But this is really just an artifact of the fact that
> buffers don't know their own length, making an additional check
> necessary.  There is no reason why the secure version couldn't
> have been just as short and sweet, it just wasn't done.

Or when strcpy() and its ilk were originally written, no one
was concerned about buffer overflows...they were more concerned
with program speed and size. The world changes. IMO, if you
are still writing in an unsafe language like C or C++ when
you don't really have to and are only using it because
that's ALL YOU KNOW, then someone should take away your
keyboard. Obviously there are legitimate reasons for using
C/C++ and other "pointy" languages, but those reasons are
holding less and less water every day. In the security class
I've taught for the past 4.5 yrs or so, one of the things I
tell my students is, "if you have a choice, select a 'safe'
language like Java or C# where you don't need to worry about
buffer overflows or heap corruption. Not only is it safer,
but it is also likely to improve your productivity."
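
A tiny sketch of what "safe" buys you here: the same overrun that
strcpy() performs silently in C fails fast in Java with a runtime
exception instead of corrupting adjacent memory (class name and
strings below are mine, for illustration only):

    public class BoundsDemo {
        public static void main(String[] args) {
            char[] buf = new char[8];                       // fixed-size buffer
            String input = "much longer than eight chars";  // oversized input
            // C's strcpy(buf, input) would silently write past the end of
            // buf; the equivalent Java copy is bounds-checked at runtime
            // and throws IndexOutOfBoundsException instead.
            input.getChars(0, input.length(), buf, 0);
        }
    }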

But at this point, I'd be (somewha

[SC-L] How can we stop the spreading insecure coding examples at training classes, etc.?

2006-08-28 Thread Wall, Kevin
First a bit of background and a confession.

The background: I recently attended a local 4-hour
Microsoft training seminar called "Get Connected with the
.NET Framework 2.0 and Visual Studio(c) 2005". However, I
want to clarify that this example is NOT just a Microsoft
issue. It's an industry-wide issue; only the examples I give
are from this most recent Microsoft training seminar. (Actually,
calling it "training" is probably a stretch; it was more like
an advertisement for .NET Framework 3.0 (a.k.a. "Atlas")
and a forthcoming version of Visual Studio, but that's another
gripe. ;-)

The confession: 90% of the examples presented at this training
were in Visual Basic and about 10% were C#. I personally think that
VB is horrid as a programming language (for the most part, I
concur with Dijkstra about anything BASIC-like; see .sig, below),
and so that may be causing some bias in my observations.

Anyway, back to the main point. At this training seminar, the
demo code that they showed (which mostly was about Atlas features,
not about .NET Framework 2.0 features) had very little in the way
of proper security checks. This was most notable in the almost
complete absence of authorization checks and proper data
validation checks. Also, most of these quotations below
are actually paraphrases (including my own!), but nevertheless,
I believe they are fairly accurate and hopefully non-biased.

At one point, the speaker asked "what's wrong with this code
fragment" (a demo to upload a file). I said "there's no proper
data validation, so one can use a series of '..\' in the file
name and use it to do directory traversal attacks and overwrite
whatever file the user id running the web server has permissions
to write to." (Of course, that was the "wrong" answer. Their
"proper" answer was that W3C had instituted a max size of 4MB or
so on files that could be uploaded in this manner, but they had a
mechanism to get around it.)
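
And the data validation isn't even hard to demonstrate. A minimal
sketch of the canonical-path check that defeats '..\' traversal
(written in Java rather than the demo's VB/C#, and with a made-up
upload directory):

    import java.io.File;
    import java.io.IOException;

    public class UploadCheck {
        // Hypothetical upload root, for illustration only.
        private static final File UPLOAD_DIR = new File("/var/app/uploads");

        // Resolve an untrusted filename, rejecting anything that
        // escapes the upload root.
        static File resolveUpload(String untrustedName) throws IOException {
            File base = UPLOAD_DIR.getCanonicalFile();
            File target = new File(base, untrustedName).getCanonicalFile();
            // getCanonicalFile() collapses "..", symlinks, etc., so a simple
            // prefix test tells us whether we stayed inside the root.
            if (!target.getPath().startsWith(base.getPath() + File.separator)) {
                throw new IOException("path traversal attempt: " + untrustedName);
            }
            return target;
        }
    }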

At another point, while Atlas JavaScript gadgets were being demoed,
someone asked if one could use XMLHttpRequest (XHR) to invoke
_any_ URL. The speaker correctly answered "no; only back to the
originating host:port from where the JavaScript was downloaded
from". The questioner then remarked something like "oh, that's too
bad". But instead of explaining why allowing cross-domain requests
is inherently a BAD Thing, the speaker replied "oh, don't worry;
we also provide you with some software [apparently a proxy of
sorts -kww] that Microsoft wrote that you can put on your web
server so your users can call out to any URL that they wish,
so it's not limited to calling just pages on your own site."
"Great, I thought. Why don't you also provide some mechanisms to
automatically insert random XSS and SQL injection vulnerabilities
into your code too." Sigh. 

Now understand, these are only a few recent examples presented
at this training. Over the past few years, I've seen numerous
other lame examples of demo code elsewhere, ranging from symmetric
encryption examples using block ciphers (where it's implied that
you should always just use ECB mode; in fact no other cipher
modes are even mentioned!), to showing how to create a web
service without even mentioning any mechanisms for authentication
or authorization. (Actually, many typically don't even
_recognize_ the NEED for such things.)
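
To make the ECB complaint concrete: a minimal Java/JCE sketch (with
deliberately toy key handling) showing that asking for a saner mode
costs almost nothing; all that changes is the transformation string
plus a random IV:

    import java.security.SecureRandom;
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.IvParameterSpec;

    public class ModeDemo {
        public static void main(String[] args) throws Exception {
            SecretKey key = KeyGenerator.getInstance("AES").generateKey();
            byte[] plaintext = "attack at dawn, attack at dawn!!".getBytes("UTF-8");

            // What the demos teach: ECB maps identical plaintext blocks to
            // identical ciphertext blocks, leaking patterns.
            Cipher ecb = Cipher.getInstance("AES/ECB/PKCS5Padding");
            ecb.init(Cipher.ENCRYPT_MODE, key);
            byte[] bad = ecb.doFinal(plaintext);

            // A few extra lines buy a fresh random IV and CBC mode.
            byte[] iv = new byte[16];
            new SecureRandom().nextBytes(iv);
            Cipher cbc = Cipher.getInstance("AES/CBC/PKCS5Padding");
            cbc.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
            byte[] better = cbc.doFinal(plaintext); // send iv along with this
        }
    }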

Make no mistake in thinking that this poor practice is limited
to Microsoft. At one point or another, we probably have all done
something like this and then just casually mentioned "of course
you need to do X, Y, and Z to make it secure". I understand
the pedagogical need to keep example code simple.
Usually much of the error code is completely omitted, not just
security-related checks. But most developers have enough sense
of the application-related errors to know they need to add the
application-level error checking; not so for security-related
checks--at least not for your average developer.

Of course the big problem with any poor examples is that this
is just the type of thing that developers who have little security
experience will copy-and-paste into their PRODUCTION code, thus
exposing the vulnerabilities.  And nowadays, it's even made
easier to copy-and-paste this insecure code by either making it
available for download from the Internet or passing it out on a
CD or DVD. Many of us have probably been guilty of that too,
at least once in our lives.

But we need to all recognize that there is no reason that the
demo code that is made available for downloading or on a CD needs
to be the exact same code displayed during the training.
(N.B.: I've not yet checked the DVD supplied by Microsoft,
but the instructors said [paraphrase] we would find "the exact
same code we are using in our examples on the DVD".)

I think that this practice of leaving out the "security
details" to just make the demo code short and sweet has got
to stop. Or minimally, we have to make the code that people
copy-and-paste from have all the proper

Re: [SC-L] bumper sticker slogan for secure software

2006-07-20 Thread Wall, Kevin
Dana,

Regarding your remarks about writing perfectly secure code...
well put.

And your remarks about Ross Anderson...

> Ross Anderson once said that secure software engineering is about
> building systems to remain dependable in the face of malice, error,
> or mischance. I think he has something there. If we build systems
> to maintain confidentiality, integrity and availability, we have the
> ability to fail gracefully in a manner to recover from unknown or
> changing problems in our software without being detrimental to
> the user, or their data.

reminded me of Anderson and Roger Needham coining the phrase
(hope I'm getting this right) that "security is like programming
Satan's computer" in the sense that you have an evil, extremely
intelligent adversary with unlimited resources and time, etc.
[http://www.cl.cam.ac.uk/ftp/users/rja14/satan.pdf]

So there's a bumper sticker for you:

Security: programming Satan's computer

Of course, it's likely to be misunderstood by most.
(Maybe we could attribute it to SNL's "church lady".
Sorry Ross. ;-)

BTW, does anyone besides me think that it's time to put
this thread to rest?

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
[EMAIL PROTECTED]   Phone: 614.215.4788
"The reason you have people breaking into your software all 
over the place is because your software sucks..."
 -- Former White House cybersecurity advisor, Richard Clarke,
at eWeek Security Summit


This communication is the property of Qwest and may contain confidential or
privileged information. Unauthorized use of this communication is strictly 
prohibited and may be unlawful.  If you have received this communication 
in error, please immediately notify the sender by reply e-mail and destroy 
all copies of the communication and any attachments.

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] "Bumper sticker" definition of secure software

2006-07-17 Thread Wall, Kevin
Crispin Cowan writes...

> IMHO, bumper sticker slogans are necessarily short and glib. 
> There isn't room to put in all the qualifications and caveats
> to make it a perfectly precise statement. As such, mincing
> words over it is a futile exercise.
> 
> Or you could just print a technical paper on a bumper 
> sticker, in really small font :)

Actually, I like that idea. And it could end with the cliche:
"If you can read this, you are too close."

Seriously, while I understand that there may be a reason to have
a bumper-sticker-like catch phrase for the definition of "secure",
I think that in the long run, it is more likely to backfire.

I have already reviewed an untold number of security "requirements"
that said "The system shall be secure". Having some bumper-sticker
slogan that we all use would only allow those yo-yos to justify
their "requirements", at least if it reflects anything regarding
an actual definition of security such as Ivan's comment that Crispin
posted.

With that in mind, maybe it would be less "dangerous" to use something
more pithy or sardonic, but less to the point of an actual definition.

Security: Pay me now, or I'll pay myself later.

Of course that would only be appropriate for black or grey hats. ;-)

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
[EMAIL PROTECTED]   Phone: 614.215.4788
"The reason you have people breaking into your software all 
over the place is because your software sucks..."
 -- Former White House cybersecurity advisor, Richard Clarke,
at eWeek Security Summit


This communication is the property of Qwest and may contain confidential or
privileged information. Unauthorized use of this communication is strictly 
prohibited and may be unlawful.  If you have received this communication 
in error, please immediately notify the sender by reply e-mail and destroy 
all copies of the communication and any attachments.

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


RE: [SC-L] Dr. Dobb's | Quick-Kill Project Management | June 30, 2006

2006-07-07 Thread Wall, Kevin
Kenneth Van Wyk writes...

> http://www.ddj.com/dept/architect/189401902
> ...
> Put another way, how does a team hold  onto its good practices (not
> just security reviews) when they're in crisis mode?  I'm sure that
> the answer varies a lot by team,  priorities, etc., but I'd welcome
> any comments, opinions, etc. from  any of you who have been in
> similar situations.

I've been in this situation several times in my 25+ years in software
development (especially with post-divestiture Bell Labs). During the
times that it has been successfully addressed, I've noticed one major
common theme, and that is that the development team (and particularly
the team lead) has got to have the "balls" to stand up to management
and tell them that the dev team is unwilling to compromise on quality /
security / (fill-in-the-blank-with-whatever-is-important-to-you) and
that the ONLY way that it can be done is to drop certain FUNCTIONALITY
(which usually management is reluctant to do).

How successful this approach is depends on several things
(not a comprehensive list):

1) Whether the team has any credibility with management or not.
   (Hint: If you've already slipped your original schedule several
   times, you probably don't have any. :-)
2) How well the dev team presents its arguments, especially in
   terms of the risks of NOT doing testing / security reviews /
   insert-your-favorite-here. (Since security is in large part
   about _managing_ risks, this fits in nicely.) You need to
   remember, though, that ultimately it is management's discretion
   whether or not to accept the risk(s), not the development team's.
3) How tactfully you present your case. (I.e., don't be arrogant;
   be willing to show some flexibility, e.g., working some
   additional hrs of unpaid OT; etc.)
4) Know your Brooks' _Mythical Man-Month_. Management almost
   certainly will offer to give you more developers/testers/etc.
   This is almost always a bad ROI, since you will spend more time
   bringing those individuals up-to-speed on your project than you
   will get back in productivity. Also, know your Capers Jones; he
   has produced some excellent documentation showing that dropping
   things like software code inspections actually increases software
   costs and time-to-delivery.

It also greatly helps to have a team with enough spine that they all
are willing to walk and find another job if they feel that they are
being held hostage and being forced to make unnecessary compromises.

I have been fortunate to have worked with many such teams in the past
who consider their craftsmanship and pride in their work more important
than the prevalent "finish this yesterday" mentality, including a few
teams that HAVE been willing to walk out in such situations. (Of course,
that was during the dotcom boom, when all you had to do to find a
development job was to know how to spell "www". I'm not sure how
committed those people would have been during the leaner times.)

BTW, I am proud to say that the application security team that I have
worked with for the past six years or so has had the courage to insist
that best practices such as formal use cases, design reviews, and code
inspections be followed even though the _formal_ IT process had thrown
such practices out the window in favor of so-called XP development
practices. (When it comes to security, I tell management that XP stands
for "eXpect Problems" rather than "eXtreme Programming".)

Of course, it is best to avoid situations like those described in the
_Dr. Dobb's_ article in the first place. That's where skill in
estimating development effort is useful, and it is one area where,
unfortunately, I think we as an industry have a rather poor track
record.

But that's another topic entirely.

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
[EMAIL PROTECTED]   Phone: 614.215.4788
"The reason you have people breaking into your software all 
over the place is because your software sucks..."
 -- Former White House cybersecurity advisor, Richard Clarke,
at eWeek Security Summit
> I saw an article on Dr. Dobb's (via Slashdot) this morning that made  
> me pause a bit.  The article is on "Quick-Kill Project 
> Management" --  
> full link is here:
> 
> http://www.ddj.com/dept/architect/189401902
> 
> The article describes a small project team (say 5 developers) who  
> have suddenly had their dev schedule drastically accelerated on them  
> by powers outside of their control.  It describes some techniques  
> that the dev leader can use to concentrate the team's focus on  
> killing (hence the name) the most pressing of issues.  Not  
> surprisingly, there's no mention of security in the article, 
> although  
> they do talk about conducting code reviews, but only for functional  
> defects in the code.
> 
> What caught my attention here is that I'll bet that a *lot* of small  
> dev t

RE: [SC-L] By default, the Verifier is disabled on .Net and Java

2006-05-08 Thread Wall, Kevin
Dinis Cruz writes...

> Stephen de Vries wrote:
> > Java has implemented this a bit differently, in that the byte code 
> > verifier and the security manager are independent.  So you could for

> > example, run an application with an airtight security policy (equiv to
> > partial trust), but it could still be vulnerable to type confusion 
> > attacks if the verifier was not explicitly enabled.  To have both 
> > enabled you'd need to run with:
> > java -verify -Djava.security.policy ...
> This is a very weird decision by the Java Architects, since what is the
> point of creating and enforcing an airtight security policy if you can
> jump straight out of it via a Type Confusion attack?
> 
> In fact, I would argue that you can't really say that you have an 
> 'airtight security' policy if the verifier is not enabled!
> 
> Right?
> 
> Is there a example out there where (by default) java code is 
> executed in an environment with :
> 
> * the security manager enabled (with a strong security policy) and
> * the verifier disabled

Just a hunch, but I suspect that it was designed this way to support
mobile code, or more specifically applets. There is a security manager
enabled with applets (the policy is not airtight though; see
McGraw/Felten's book on the subject), and the byte code verifier only
verifies *remotely* loaded classes, which are the only ones presumed
to be hostile. Dumb assumption, I know, but initially applets ran so
slowly that Sun probably had little choice if they hoped to "sell"
applets. Besides, back then most of the hostile code WAS coming from
different attack vectors--infected floppies or ftp'ing / running
infected code. AV software monitored that attack vector, but not
executable code coming in via HTTP through your browser. (Many do
today, though.) But the assumption Sun made back then was that all
locally loaded classes could be trusted and therefore were type-safe.

In retrospect, several wrong decisions were made regarding web
security. (Don't even get me started on Radio-ActiveX! ;-) But as they
say, backward compatibility is the curse of software design, so we
probably are stuck with it.

Fortunately, the verifier is pretty simple to enable in Java. OTOH,
coming up with a good security policy is not so easy. I've only done it
twice, and it's been a laborious process each time, assuming you start
with essentially a fail-safe "no permissions" approach and only add
permissions as-needed.
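
For a flavor of what "add permissions as-needed" looks like, here is
a minimal, hypothetical policy-file fragment (paths, host, and
property name are invented for illustration) granting an application
only what it demonstrably needs:

    // Hypothetical java.policy fragment: start from "no permissions"
    // and grant back only what the application actually requires.
    grant codeBase "file:/opt/myapp/lib/-" {
        // Read/write only the app's own data directory.
        permission java.io.FilePermission "/opt/myapp/data/-", "read,write";
        // Connect only to the one backend host the app needs.
        permission java.net.SocketPermission "db.example.com:1521", "connect";
        // Read a single system property instead of all of them.
        permission java.util.PropertyPermission "myapp.home", "read";
    };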

Anyway, I'd say that applets were probably what drove this security
model. Curious that applets probably now comprise less than 1% of all
Java code today.

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
[EMAIL PROTECTED]   Phone: 614.215.4788
<... add your favorite pithy quote about hindsight here ...>


This communication is the property of Qwest and may contain confidential or
privileged information. Unauthorized use of this communication is strictly 
prohibited and may be unlawful.  If you have received this communication 
in error, please immediately notify the sender by reply e-mail and destroy 
all copies of the communication and any attachments.

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


RE: [SC-L] By default, the Verifier is disabled on .Net and Java

2006-05-03 Thread Wall, Kevin
David Eisner wrote...

> Wall, Kevin wrote:

The correct attribution for bringing this up (and the one whom you are
quoting) is Dinis Cruz.

> >>  same intuition about the verifier, but have just tested
> >> this and it is not the case.  It seems that the -noverify is the
> >> default setting! If you want to verify classes loaded from the
local  
> >> filesystem, then you need to explicitly add -verify to the cmd
line.
> 
> Is this (still) true?  The -verify and -noverify flags are no longer
> documented [1], although they are still accepted.
> 
> I did a little experiment (with my default 1.5 VM).  I compiled a
> HelloWorld program, then changed a few bytes in the class file with a
> hex editor.

Perhaps no longer true (at least one could hope), but I can't take
credit for the part you quoted above. That was Dinis.

Also, from the results of your test, it seems to indicate that SOME
TYPE of verification is taking place, but if all you did was change a
few ARBITRARY bytes in the .class file, I don't think that proves the
byte code verifier is being run in its entirety. IIRC, the discussion
was around the issue of 'type safety'. It's hard to see how a
HelloWorld program would show that.

It's entirely possible that the (new 1.5) default just does some
surface level of byte code verification (e.g., verifying that
everything is a legal "op code" / byte code) before HotSpot starts
crunching on it, and that this works differently if either the
'-verify' or '-noverify' flags are used. E.g., suppose that the
'-verify' flag does some deeper-level analysis, such as checks to
ensure type safety, etc., whereas '-noverify' doesn't even validate
that the byte codes are legal op codes or that the .class file has a
legal format. This might even make sense, because checking for valid
file format and valid Java "op codes" ought to be fairly "cheap"
checks compared to the deeper analysis required for things like type
safety.

You didn't discuss details of what bits you tweaked, so I'm not
quite yet ready to jump up and down for joy and conclude that Sun has
now seen the light and has made the 1.5 JVM default to running the byte
code through the *complete* byte code verifier. I think either more
tests are necessary, or someone at Sun who can speak in some official
capacity needs to step up and give a definitive word one way or another
on this.

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
[EMAIL PROTECTED]   Phone: 614.215.4788
"Linux *is* user-friendly.  It's just choosy about its friends."
- Robert Slade, http://sun.soci.niu.edu/~rslade/bkl3h4h4.rvw


This communication is the property of Qwest and may contain confidential or
privileged information. Unauthorized use of this communication is strictly 
prohibited and may be unlawful.  If you have received this communication 
in error, please immediately notify the sender by reply e-mail and destroy 
all copies of the communication and any attachments.

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


RE: [SC-L] By default, the Verifier is disabled on .Net and Java

2006-05-02 Thread Wall, Kevin
[Moderator: Feel free to discard some or all of Dinis' original post
below. I wasn't sure how much to trim because I don't know how much
people have been paying attention to this particular discussion, and I
didn't want them to lose context and have to resort to searching the
archives.]

It might be nice if the Java VM would / could distinguish between
running from a LOCAL disk vs. a remote networked disk (NFS, Samba
share, etc.) and enable the byte code verifier automagically for any
classes loaded remotely. That doesn't seem too different (in terms of
attack vectors) from running applets locally using remotely loaded
classes. A similar thing might also be done if any jars, zip files,
.class files, or the directories in which they reside were writable by
anyone other than root (or equivalent on Windows, MacOS, etc.) or the
user id executing the Java program. Of course, that's not too likely
to be portable across the various supported OSes, so perhaps that's
why Sun chose not to do it. Perhaps asking / begging Microsoft to do a
similar thing for .NET assemblies might be an easier sell, because
they wouldn't face the OS portability issue (as much).

Dinis: I deliberately did not cross-post to the owasp-dotnet list.
   You can if you wish.

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
[EMAIL PROTECTED]Phone: 614.215.4788
"The reason you have people breaking into your software all
over the place is because your software sucks..."
 -- Former White House cybersecurity advisor, Richard Clarke,
at eWeek Security Summit 

-Original Message-
> From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Dinis Cruz
> Sent: Tuesday, May 02, 2006 7:48 PM
> To: 'Secure Coding Mailing List'
> Cc: '[EMAIL PROTECTED]'
> Subject: [SC-L] By default, the Verifier is disabled on .Net and Java 
> 
> 
> Here is a more detailed explanation of why (in my previous post) I
said: "99% of .Net and Java code that is currently deployed is executed
on an environment where the VM verifier is disabled [...]."
> 
> --
> 
> In .Net the verifier (the CLR function that checks for type safety) is
only enabled on partial trust .Net environments.
> 
> For example, in Full Trust .Net you can successfully assign Type A to
Type B (also called a Type Confusion attack) which clearly breaks type
safety.
> 
> I have done some research on this topic, and on my spare time I was
able to find several examples of these situations:
> 
> 
> Possible Type Confusion issue in .Net 1.1 (only works in FullTrust)
(http://owasp.net/blogs/dinis_cruz/archive/2005/11/08/36.aspx) 
> Another Full Trust CLR Verification issue: Exploiting Passing
Reference Types by Reference
(http://owasp.net/blogs/dinis_cruz/archive/2005/12/28/393.aspx) 
> Another Full Trust CLR Verification issue: Changing Private Field
using Proxy Struct
(http://owasp.net/blogs/dinis_cruz/archive/2005/12/28/394.aspx) 
> Another Full Trust CLR Verification issue: changing the Method
Parameters order
(http://owasp.net/blogs/dinis_cruz/archive/2005/12/26/390.aspx) 
> C# readonly modifier is not enforced by the CLR (when in Full Trust
(http://owasp.net/blogs/dinis_cruz/archive/2005/12/26/390.aspx)
> 
> Also related: 
> 
> JIT prevents short overflow (and PeVerify doesn't catch it)
(http://owasp.net/blogs/dinis_cruz/archive/2006/01/10/422.aspx) 
> 
> and ANSI/UNICODE bug in System.Net.HttpListenerRequest
(http://www.owasp.net//blogs/dinis_cruz/archive/2005/12/17/349.aspx)
> 
> Here is Microsoft's 'on the record' comment about this lack of
verification (and enforcement of type safety) on Full Trust code (note:
I received these comments via the MSRC):
> 
> "...
> Some people have argued that Microsoft should always enforce type
safety
> at runtime (i.e. run the verifier) even if code is "Fully Trusted".
> We've chosen not to do this for a number of reasons (e.g. historical,
> perf, etc). There are at least two important things to consider about
> this scenario:
> 
> 1) Even if we tried to enforce type safety using the verifier for
Fully
> Trusted code, it wouldn't prevent Fully Trusted from accomplishing the
> same thing in 100 other different ways. In other words, your example
> accessed an object as if it were a different incompatible type - The
> verifier could have caught this particular technique that allowed him
to
> violate type safety. However, he could have accomplished the same
> result using private reflection, direct memory access with unsafe
code,
> or indirectly doing stuff like using PInvoke/native code to disable
> verification by modifying the CLR's verification code either on disk
or
> in memory. There would be a marginal benefit to insuring people wrote
> "cleaner" more "type safe" code by enforcing verification at runtime
for
> Full Trust, but you wouldn't get any additional security benefits
> because you can perform unverifiable actions in dozens of ways the
> verifier won't prevent

RE: [SC-L] 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-03-25 Thread Wall, Kevin
Dinis,

Dinis Cruz wrote...

Finally, you might have noticed that whenever I talked
about 'managed code', I mentioned 'managed and verifiable code',
the reason for this distinction, is that I discovered recently
that .Net code executed under Full Trust can not be (or should
not be) called 'managed code', since the .Net Framework will
not verify that code (because it is executed under Full Trust).
This means that I can write MSIL code which breaks type safety
and execute it without errors in a Full Trust .Net environment.

Indeed, it is somewhat surprising that there is no byte-code
verification in place, especially for strong typing, since when you
think about it, this is not too different from the "unmanaged" code
case.

Apparently the whole "managed" versus "unmanaged" code distinction
only has to do with whether or not garbage collection is attempted.
Given the fact that Microsoft has added almost 50 new keywords just
for their new "managed C++", one certainly could hope they could do
better than this--IF this applies to ALL "managed" code in general.

However, the real question is: is this true for ALL managed code, or
only managed code in the .NET Framework? If it is the latter, I don't
see it as being much different from the way Java's java.* packages may
use "native" code (via JNI) in (say) rt.jar to interface with OS-level
constructs. I believe that such code is fully trusted in the JVM as
well. Of course, it is reasonable to ask Sun Microsystems and
Microsoft to restrict such trust for "native" and "managed" code
(respectively) by requiring that it be limited to digitally signed
jars / assemblies signed by trusted sources that are verified at
runtime when these jars / assemblies are first loaded. This could be
extended so that the "trusted sources" ultimately could be defined by
the end users (or their administrators), but by default, it would
include the vendors (Sun and Microsoft) themselves. I recall early
versions of Sun's domestic JCE jars that were distributed as separate
jars. The verification of the signatures for these signed jars
significantly slowed down the initial loading of those jars, but after
verification, it had little, if any, performance impact. Of course, if
software quality improvement does not take place in these companies,
their signing would be somewhat vacuous. But it would be better than
nothing, since at least all such code would not be fully trusted by
default.
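
As a sketch of the load-time verification idea (jar name invented,
trust decision left as a stub): java.util.jar.JarFile will verify a
signed jar's signatures as each entry is read, after which the entry's
signer certificates can be checked against a list of trusted vendors:

    import java.io.File;
    import java.io.InputStream;
    import java.security.cert.Certificate;
    import java.util.Collections;
    import java.util.jar.JarEntry;
    import java.util.jar.JarFile;

    public class JarTrustCheck {
        public static void main(String[] args) throws Exception {
            // 'true' asks JarFile to verify signatures as entries are read.
            JarFile jar = new JarFile(new File("vendor-lib.jar"), true);
            byte[] buf = new byte[8192];
            for (JarEntry entry : Collections.list(jar.entries())) {
                InputStream in = jar.getInputStream(entry);
                while (in.read(buf) != -1) {
                    // Drain the entry; a SecurityException while reading
                    // means a signature failed to verify.
                }
                in.close();
                Certificate[] signers = entry.getCertificates(); // null if unsigned
                // ... compare signers against a trusted-vendor list here ...
            }
            jar.close();
        }
    }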

Of course (not to open another can of worms), if we could actually
enforce liability for software just as we commonly do with other
manufactured products, we probably wouldn't need such elaborate
constructs to ensure secure coding. Because once some of the major
vendors had had their feet held to the fire and been burned by
millions or billions in lawsuits, I suspect all of a sudden they would
SEE a valid business reason where none existed before. (Usual company
disclaimers apply.)
Comments?
---
Kevin W. Wall   Qwest Information Technology, Inc.
[EMAIL PROTECTED]   Phone: 614.215.4788
"The reason you have people breaking into your software all 
over the place is because your software sucks..."
 -- Former White House cybersecurity advisor, Richard Clarke,
at eWeek Security Summit

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


RE: [SC-L] Bugs and flaws

2006-02-02 Thread Wall, Kevin
John Steven wrote:
...
> 2) Flaws are different in important ways from bugs when it comes to presentation,
> prioritization, and mitigation. Let's explore by physical analog first.

Crispin Cowan responded:  
> I disagree with the word usage. To me, "bug" and "flaw" are exactly
> synonyms. The distinction being drawn here is between "implementation
> flaws" vs. "design flaws". You are just creating confusing jargon to
> claim that "flaw" is somehow more abstract than "bug". Flaw ::= defect
> ::= bug. A vulnerability is a special subset of flaws/defects/bugs that
> has the property of being exploitable.

I'm not sure if this will clarify things or further muddy the waters,
but... partial definitions taken from SWEBOK
(http://www.swebok.org/ironman/pdf/Swebok_Ironman_June_23_%202004.pdf),
which in turn were taken from the IEEE standard glossary
(IEEE610.12-90), are:
+ Error: "A difference…between a computed result and the correct result"
+ Fault: "An incorrect step, process, or data definition
  in a computer program"
+ Failure: "The [incorrect] result of a fault"
+ Mistake: "A human action that produces an incorrect result"

Not all faults are manifested as errors. I can't find an online
version of the glossary anywhere, and the one I have is about 15-20 years old
and buried somewhere deep under a score of other rarely used books.

My point is, though, that until we start with some standard
terminology, this field of information security is never going to
mature. I propose that we build on the foundational definitions of the
IEEE-CS (unless their definitions have "bugs" ;-).

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
[EMAIL PROTECTED]   Phone: 614.215.4788
"The reason you have people breaking into your software all 
over the place is because your software sucks..."
 -- Former White House cybersecurity advisor, Richard Clarke,
at eWeek Security Summit

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


RE: [SC-L] free lunch almost over

2005-02-02 Thread Wall, Kevin
Jeff Williams wrote...

> I think we're focused on different aspects of 'important.' 
> The sheer number of web applications does make concurrency
> in that environment an important issue for this list.
> Concurrency used to be the province of a relatively 
> small number of developers who understood that were working in a 
> multithreaded environment. Now the number of programmers who need to 
> understand concurrency is, well, almost all of them. That's 
> why the issue is important for the list (or at least to me).

Well, I agree completely with your assessment. In fact, the
author (Herb Sutter) more or less states this as well, when
he writes:

 "The vast majority of programmers today don't grok
 concurrency, just as the vast majority of programmers
 15 years ago didn't yet grok objects."

(Of course, I think Sutter is being overly generous here. If the
truth be told, most programmers _still_ don't grok objects. In
fact, about half don't even grok programming IMO! ;-)

There was an editorial article recently in one of the ACM publications
that I get (I think it may have been _ACM Queue_) that discussed this
as a normal phenomenon of our culture. Twenty-plus years ago, you
didn't have everyone and their brother claiming to be programmers just
because there was lots of money to be found in it. Then, as things got
easier (e.g., the compiler did more for you, the advent of higher-level
programming languages, etc.) and the potential rewards went up (recall
the VC $$ in the late '90s), anyone who happened to slap together a few
lines of VB (ugh!) or construct a simple static HTML web page using
some WYSIWYG HTML editor went around claiming to be a "programmer".
Unfortunately, many of these people are still with us (i.e., in our
profession) and, worse, many are now our managers.

Welcome to the dumbing down of programming and its inevitable results.

BTW, Richard Clarke seems to be one of the few that has the guts to
state this publicly in a straightforward manner. (See .sig, below.)

-kevin
---
Kevin W. Wall  Qwest Information Technology, Inc.
[EMAIL PROTECTED] Phone: 614.215.4788
"The reason you have people breaking into your software all 
over the place is because your software sucks..."
 -- Former White House cybersecurity advisor, Richard Clarke,
at eWeek Security Summit




RE: [SC-L] How do we improve s/w developer awareness?

2004-11-12 Thread Wall, Kevin
Gary,

While I've not yet seen your _Exploiting Software_ book, I use
your _Building Secure Software_ as one of the texts for a graduate
CS class in security (http://cs.franklin.edu/Syllabus/comp676/).

Unfortunately, I've not seen any copies of your book in any major
bookstore in the Columbus, OH area, so apologies for my naïveté in
this area, but I've not had a chance to peruse it.

But I have one question regarding your "attack patterns"--and please
don't take this as a challenge. My question is, what makes your
and Hoglund's "attack patterns" so different from (say) the security
patterns as described by Yoder and Barcalow or by Markus Schumacher?

What makes "attack patterns" so different than the work done by
the OASIS Web Application Security technical committee (charter at:
http://www.oasis-open.org/committees/was/charter.php) that
is chaired by Mark Curphey, Peter Michalek, and your Citadel
colleague, David Raphael and had its roots in the OWASP ASAC
work started several years ago.

I'm not trying to accuse you of academic plagiarism, but am curious
about how you see your "attack patterns" fitting into the
bigger picture along with the OASIS WAS work and the generic
security patterns work of Yoder, Barcalow, Schumacher, and others.
Or perhaps are they more like the CVE work maintained by Mitre?

What are the differences? What is the overlap? Can you point to
an online example of any documented attack pattern that you have
in the book so we can see one or a few of them? Why should I want
to use your attack patterns rather than one of these other efforts
I've mentioned?

Like I said, not only have I not _read_ your book, I've not even had
a chance to thumb through it.

I think it's good to try to catalogue such things, but I think it's
going in the wrong direction when there is little industry consensus
on exactly how to do this. Adding yet another classification scheme
is not valuable unless we can understand exactly what the new scheme
brings to the table and how it fits along side other similar attempts.

Also, one last thing... not to nitpick, but it seems that your 48
attack patterns could be grouped into a few broader categories. Does
your book do this as well?

Thanks in advance for your response,
-kevin wall
---
Kevin W. Wall   Qwest Information Technology, Inc.
[EMAIL PROTECTED]   Phone: 614.215.4788
"The reason you have people breaking into your software all 
over the place is because your software sucks..."
 -- Former White House cybersecurity advisor, Richard Clarke,
at eWeek Security Summit



-Original Message-
From: [EMAIL PROTECTED] on behalf of Gary McGraw
Sent: Fri 11/12/2004 8:39 AM
To: ljknews; Secure Coding Mailing List
Subject: RE: [SC-L] How do we improve s/w developer awareness?
 
One of the reasons that Greg Hoglund and I wrote Exploiting Software was
to gain a basic understanding of what we call "attack patterns".  The
idea is to abstract away from platform and language considerations (at
least some), and thus elevate the level of attack discussion.

We identify and discuss 48 attack patterns in Exploiting Software.  Each
of them has a handful of associated examples from real exploits.  I will
paste in the complete list below.  As you will see, we provided a start,
but there is plenty of work here remaining to be done.

Perhaps by talking about patterns of attack we can improve the signal to
noise ratio in the exploit discussion department.

gem

Gary McGraw, Ph.D.
CTO, Cigital
http://www.cigital.com
WE NEED PEOPLE!

Make the Client Invisible
Target Programs That Write to Privileged OS Resources 
Use a User-Supplied Configuration File to Run Commands That Elevate
Privilege 
Make Use of Configuration File Search Paths 
Direct Access to Executable Files 
Embedding Scripts within Scripts 
Leverage Executable Code in Nonexecutable Files 
Argument Injection 
Command Delimiters 
Multiple Parsers and Double Escapes 
User-Supplied Variable Passed to File System Calls 
Postfix NULL Terminator 
Postfix, Null Terminate, and Backslash 
Relative Path Traversal 
Client-Controlled Environment Variables 
User-Supplied Global Variables (DEBUG=1, PHP Globals, and So Forth) 
Session ID, Resource ID, and Blind Trust
Analog In-Band Switching Signals (aka "Blue Boxing") 
Attack Pattern Fragment: Manipulating Terminal Devices 
Simple Script Injection 
Embedding Script in Nonscript Elements 
XSS in HTTP Headers 
HTTP Query Strings 
User-Controlled Filename 
Passing Local Filenames to Functions That Expect a URL 
Meta-characters in E-mail Header
File System Function Injection, Content Based
Client-side Injection, Buffer Overflow
Cause Web Server Misclassification
Alternate Encoding the Leading Ghost Characters
Using Slashes in Alternate Encoding
Using Escaped Slashes in Alternate Encoding 
Unicode Encoding 
UTF-8 Encoding 
URL Encoding 
Alternati

RE: [SC-L] Exploiting Software: How to Break Code

2004-11-11 Thread Wall, Kevin
You wrote...

> Does anyone have any comments about this book?  I have read some
> reviews, but they were on the site advertising the book for sale.  They
> stated that this book is a must for anyone wanting to harden code
> in programs, software, and hardware, but then that could just be
> a sales pitch.  I would like to see some posts, both pro and con,
> about this book.
> 
> Exploiting Software: How to Break Code
> By Greg Hoglund, Gary McGraw.
> Published by Addison Wesley Professional

Robert Slade has reviewed this book. See
http://victoria.tc.ca/int-grps/books/techrev/bkexplsw.rvw

for details. In general, I find that he writes some pretty
decent reviews.
---
Kevin W. Wall  Qwest Information Technology, Inc.
[EMAIL PROTECTED] Phone: 614.215.4788
"The reason you have people breaking into your software all 
over the place is because your software sucks..."
 -- Former whitehouse cybersecurity advisor, Richard Clarke,
at eWeek Security Summit


RE: [SC-L] Design flaw in Lexar JumpDrive

2004-09-30 Thread Wall, Kevin
Joel Kamentz wrote...

> Also, shouldn't it be easy enough to steal one of these and lift a fingerprint
> from it with scotch tape and then be able to get at all of the passwords in the 
> device?

If that didn't work, the "gummy bear" approach probably would.
---
Kevin W. Wall  Qwest Information Technology, Inc.
[EMAIL PROTECTED] Phone: 614.215.4788
"The reason you have people breaking into your software all 
over the place is because your software sucks..."
 -- Former White House cybersecurity advisor, Richard Clarke,
at eWeek Security Summit


RE: [SC-L] Top security papers

2004-08-09 Thread Wall, Kevin
Matt Setzer wrote...

> It's been kind of quiet around here lately - hopefully just because everyone
> is off enjoying a well deserved summer (or winter, for those of you in the
> opposite hemisphere) break.  In an effort to stir things up a bit, I thought
> I'd try to get some opinions about good foundational materials for security
> professionals.  (I'm relatively new to the field, and would like to broaden
> my background knowledge.)  Specifically, what are the top five or ten
> security papers that you'd recommend to anyone wanting to learn more about
> security?  What are the papers that you keep printed copies of and reread
> every few years just to get a new perspective on them?  

Okay, for starters, in no particular order:

  Ken Thompson's Turing Award lecture, _Reflections on Trusting Trust_, URL:
http://www.acm.org/classics/sep95/

  Saltzer & Schroeder, "The Protection of Information in Computer Systems",
Proceedings of the IEEE, Sept. 1975, pp. 1278-1308, available at:
http://web.mit.edu/Saltzer/www/publications/protection/

  David Wheeler, "Secure Programming for Linux and Unix HOWTO", URL:
http://www.dwheeler.com/secure-programs/

  Aleph One, "Smashing the Stack for Fun and Profit", URL:
http://www.insecure.org/stf/smashstack.txt

  Bruce Schneier, "Why Cryptography Is Harder Than It Looks", URL:
http://www.schneier.com/essay-037.html

  Carl Ellison and Bruce Schneier, "Ten Risks of PKI: What You're Not Being
Told About Public Key Infrastructure", URL:
http://www.schneier.com/paper-pki.html

Also, I'd probably throw in a few RFCs and the Firewall and Snake-Oil
Cryptography FAQs as well, but I'm too lazy to look them up
right now.

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
[EMAIL PROTECTED]   Phone: 614.215.4788
"The reason you have people breaking into your software all 
over the place is because your software sucks..."
 -- Former White House cybersecurity advisor, Richard Clarke,
at eWeek Security Summit





RE: [SC-L] Programming languages -- the "third rail" of secure coding

2004-07-21 Thread Wall, Kevin
der Mouse wrote...

[Michael Hines wrote]
>> I've been compiling a list of programming languages..  [...]
>> My list -- (feel free to add to it).
>
> 42. BCPL
> 43. sh
[snip]
> 50. Machine code
> 
> I'd also point out that if it's languages you're trying to list,
> JavaScript arguably should not have a separate entry from Java (and
> probably VBScript vs Visual Basic too).  I also think ADA should be
> spelled Ada - you seem to be _trying_ to capitalize correctly

I already responded off-list to Michael Hines, the originator of this
thread, but perhaps I can just add that if all anyone is interested
in is a list of programming languages, you can find a nice one
at the ACM Hello World page:

http://www2.latech.edu/~acm/HelloWorld.shtml

I would have mentioned it earlier had I known that we were
going to continue down this rabbit trail.  Besides, there are
lots of other similar sites available. (Google is your friend. ;-)

Now let's get back on track: WHAT does collecting such a list
have to do with secure programming? Does the original poster
(Michael Hines) intend to develop an ontology of programming
languages based on how they support (or fail to support--whoa,
really big list ;-) secure programming? If so, I can see how
we might all learn some lessons from that. If not, I guess I'm
missing the whole point of why this thread was started, so please
enlighten me.

Thanks,
-kevin wall
---
Kevin W. Wall   Qwest Information Technology, Inc.
[EMAIL PROTECTED]Phone: 614.215.4788
"The difference between common-sense and paranoia is that common-sense
 is thinking everyone is out to get you. That's normal -- they are.
 Paranoia is thinking that they're conspiring."-- J. Kegler




RE: [SC-L] Risk Analysis: Building Security In #3

2004-07-14 Thread Wall, Kevin

Larry Kilgallen wrote...

> At 5:30 PM -0600 7/12/04, Jared W. Robinson wrote:
> >I read the paper, and found it interesting. I read the statistic "50
> >percent of security problems are the result of design 
> flaws". Where does
> >that number come from? Experience?
> 
> I would say it comes from sloppy wording.
> 
> At best, the author might discuss "50 percent of security problems
> discovered to date...".
 
As late as 1998 or 1999, approximately 50% of the
CERT advisories were attributed to security issues caused by
buffer overflows. Now I certainly wouldn't count buffer overflows
as DESIGN errors, but some people might. Likewise, I probably wouldn't
count most data validation-related errors (specifically, the lack
thereof) as design errors, but again, some people might. If those
reporting this statistic were of that ilk, I could see the number
being close to 50%. But in my experience from the past 5 years (through
code inspections, pen testing, etc.), in my small sample of the world,
that number has been closer to 20-25%. (But that could be because
we develop in Java or C#; no more C or C++.)

But numbers such as these, in the absence of any context about how
the figures were derived, are IMHO close to meaningless.

-kevin wall
---
Kevin W. Wall   Qwest Information Technology, Inc.
[EMAIL PROTECTED]   Phone: 614.215.4788
"The difference between common-sense and paranoia is that common-sense
 is thinking everyone is out to get you. That's normal -- they are.
 Paranoia is thinking that they're conspiring."-- J. Kegler




RE: [SC-L] Programming languages used for security

2004-07-10 Thread Wall, Kevin
David Crocker wrote:

> Whilst I agree that the distinction between specification and
> programming languages is not completely clear cut, there is
> nevertheless a fundamental difference between specification
> and programming.
> 
> In a programming language, you tell the computer what you want
> it to do, normally by way of sequential statements and loops.
> You do not tell the computer what you are trying to achieve.
[snip]
> In a specification language, you tell the computer what you are
> trying to achieve, not how to achieve it. This is typically done
> by expressing the desired relationship between the input state
> and the output state. The state itself is normally modelled at
> a higher level of abstraction than in programming (e.g. you
> wouldn't refer to a hash table, because that is implementation
> detail; you would refer to a set or mapping instead).

I'm sorry, but I don't quite see how this description sufficiently
distinguishes declarative programming languages (such as
SQL and various logic and functional prog langs (Prolog, ML, Haskell,
Miranda, etc.)) from specification languages.

Do you consider declarative programming languages and specification
languages one and the same? (Note: PLEASE, let's not turn this into a
discussion of whether language X is / is not a declarative programming
language, especially since the last time I used Prolog was in 1989,
and the others I've only read about and/or written a few toy
programs in. ;-)

My impression has always been that a declarative programming
language is a high-level language that describes a problem rather
than defining a solution, but that pretty much sounds like your
definition of a specification language.
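
To make the "describes a problem vs. defines a solution" distinction
concrete, here's a tiny, purely illustrative sketch in Java. (It uses
Java's stream API, which postdates this discussion, so treat it as
notation rather than a recommendation; the names are made up.)

import java.util.Arrays;
import java.util.List;

public class DeclarativeVsImperative {
    public static void main(String[] args) {
        List<Integer> nums = Arrays.asList(4, 8, 15, 16, 23, 42);

        // Imperative: spell out HOW -- iterate, test, accumulate.
        int imperativeSum = 0;
        for (int n : nums) {
            if (n % 2 == 0) {
                imperativeSum += n;
            }
        }

        // Declarative: state WHAT -- "the sum of the even numbers".
        int declarativeSum = nums.stream()
                                 .filter(n -> n % 2 == 0)
                                 .mapToInt(Integer::intValue)
                                 .sum();

        System.out.println(imperativeSum + " == " + declarativeSum);
    }
}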

-kevin wall
---
Kevin W. Wall   Qwest Information Technology, Inc.
[EMAIL PROTECTED]   Phone: 614.215.4788
"The reason you have people breaking into your software all 
over the place is because your software sucks..."
 -- Former White House cybersecurity advisor, Richard Clarke,
at eWeek Security Summit




RE: [SC-L] Programming languages used for security

2004-07-10 Thread Wall, Kevin
David Crocker wrote...

> I think there are two other questions that should be asked before
> trying to answer this:
> 
> 1. Is it appropriate to look for a single "general purpose" programming
> language? Consider the following application areas:
> 
> a) Application packages
> b) Operating systems, device drivers, network protocol stacks etc.
> c) Real-time embedded software
> 
> The features you need for these applications are not the same. For
> example, garbage collection is very helpful for (a) but is not
> acceptable in (b) and (c).  For (b) you may need to use some
> low-level tricks which you will not need for (a) and probably not
> for (c).

I did not mean to imply that a *SINGLE* general purpose programming
language be the optimal, end-all, be-all solution for all software
problems. Rather, I was trying to understand what we, as security
professionals, would find useful in a programming language in terms of specific
feature sets, etc. (At this point, I don't even want to particularly
discuss syntax and semantics, although I would argue that these things
do have an impact on secure coding as well.)

The very reason that I stated "a GENERAL PURPOSE programming language"
rather than just "a programming language" is that I didn't want the
discussion to digress into fine-grained application areas such as
"for web applications, you need features F1 and F2, but for
programming firewalls, you want features F1' and F2'", etc.
For any given application area, I'm of the opinion that you can
design an application-specific prog language that will be better
suited and likely offer more security than you can in the general
case. However, this is usually just not practical, which is why we
have other mechanisms to extend the basic functionality of programming
languages (usually application specific libraries). (Of course,
sometimes the language itself goes beyond that; for example Prolog
offers its "Declarative Clause Grammar" form which is great for parsing.
And Forth can be used or abused almost ad infinitum.)

My very reason for posing these questions is to see if there is any
type of consensus at all on what mechanisms / features a language
should and should not support WITH RESPECT TO SECURE PROGRAMMING.
For example, you mentioned garbage collection. To that I would add
things like strong static typing, encapsulation that cannot be
violated, very restrictive automatic type conversion (if allowed
at all), closed "packages" or libraries or some other programming
unit, elegant syntax and semantics (oops, said I wouldn't go
there ;-), etc.
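
To make that less abstract, here's a minimal sketch in Java (all names
hypothetical) of the kind of thing I mean: strong typing plus
encapsulation that cannot be violated makes validated input the only
kind that can ever reach sensitive code.

// A hypothetical value type: the compiler guarantees that anything
// of type AccountId has already passed validation.
public final class AccountId {
    private final String value;          // private + final: no tampering

    private AccountId(String value) {    // private ctor: no way around parse()
        this.value = value;
    }

    // The single entry point; construction implies validation.
    public static AccountId parse(String raw) {
        if (raw == null || !raw.matches("[A-Z]{2}\\d{8}")) {
            throw new IllegalArgumentException("malformed account id");
        }
        return new AccountId(raw);
    }

    public String asString() {
        return value;
    }
}

// A method declared as
//     void transfer(AccountId from, AccountId to, long cents) { ... }
// simply cannot be handed a raw, unvalidated String by accident;
// the type checker rejects it at compile time.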

In the past few days (actually, all through my career), I've heard a
lot of gripes about what people think is wrong with languages,
but little in terms of what they think is valuable.

> 2. Do we need programming languages at all? Why not write precise
> high-level specifications and have the system generate the
> program, thereby saving time and eliminating coding error?
> [This is not yet feasible for operating systems, but
> it is feasible for many applications, including many classes of
> embedded applications].

Well, I guess I'd argue that this is _somewhat_ irrelevant. If you are
proposing something like Z or VDM, then that in essence becomes your
programming language for all practical purposes. How it's translated
to machine code is not what I was trying to get at. IMO, formal
specification languages have their place, but given that 95%
of programmers are weak in formal proofs, they are likely to be at
least as error prone as more conventional programming languages for
all but a select few.  So, if you wish, you can rephrase my original
question from "general purpose programming language" to "general
purpose high-level specification method". In either case, what would
you like to see to specifically support writing secure software?
(Obviously, the details will vary between spec methods and traditional
prog languages, as you are working at different levels, but I think
my questions could be generalized / extended to deal with specification
languages as well.)

-kevin wall

> David Crocker, Escher Technologies Ltd.
> Consultancy, contracting and tools for dependable software development
> www.eschertech.com
> 
> 
> Kevin Wall wrote:
> 
> >>
>If a GENERAL PURPOSE programming language were designed by
>scratch by someone who was both a security expert and
>programming language expert, what would this language (and
>it's environment) look like?
> 
>More specifically,
> 
>   + What set of features MUST such a language support (e.g.,
> strong static typing, etc.)?
>   + Perhaps just as importantly, what set of features should
> the language omit (e.g., pointer arithmetic, etc.)?
>   + What functionality should the accompanying libraries support
> (e.g., encryption, access control, etc.)?
>   + What would be the optimal paradigm (from a theoretical, rather
> than pragmatic perspective) that such a langua

RE: [SC-L] Education and security -- another perspective (was "ACM Queue - Content")

2004-07-09 Thread Wall, Kevin
David Crocker wrote...

> There is a tendency to regard every programming problem as an
> O-O problem.  Sometime last year I read a thread on some
> programming newsgroup in which contributors argued about the
> correct way to write a truly O-O "Hello world" program. All
> the solutions provided were cumbersome compared to the traditional
> "printf("Hello, world!")" solution. The point is, printing
> "Hello, world!" is not an O-O problem!

Amen to that! I made similar remarks in the 'comp.compilers' and
'comp.object' USENET newsgroups as far back as 1991 (see for
example [URL probably will wrap] ...
http://groups.google.com/groups?hl=en&lr=lang_en&ie=UTF-8&newwindow=1&safe=active&threadm=91-08-148%40comp.compilers&rnum=1&prev=/groups%3Fq%3Dcblph!kww%2Bgroup:comp.*%26hl%3Den%26lr%3Dlang_en%26ie%3DUTF-8%26newwindow%3D1%26safe%3Dactive%26selm%3D91-08-148%2540comp.compilers%26rnum%3D1)

I also muttered similar things within the Bell Labs community
much earlier than that, back when C++ was first gaining momentum.

I'm of the belief that one should use the appropriate programming paradigm
that best fits the problem at hand. Contrary to how some may feel, I
strongly believe that does NOT mean that the best solution is always
an OO approach. Unfortunately, when all you have is a hammer...
[Note: In general, I'm a fan of OO--where and when appropriate.]

But this is getting way-off topic, so I'll cease my ranting.
(About time he shuts up! ;-)
-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
[EMAIL PROTECTED]   Phone: 614.215.4788
"The reason you have people breaking into your software all 
over the place is because your software sucks..."
 -- Former White House cybersecurity advisor, Richard Clarke,
at eWeek Security Summit




[SC-L] Programming languages used for security

2004-07-09 Thread Wall, Kevin
I think the discussion regarding the thread

   Re: [SC-L] Education and security -- another perspective
   (was  "ACMQueue - Content")

is in part becoming a debate of language X vs language Y. Instead,
I'd like to take this thread off into another direction (if Ken
thinks it's appropriate to the charter of this list).  [Perhaps,
this has been discussed before here or elsewhere. A quick Google
search revealed a thread in SecurityFocus 'Security Basics'
mailing list that didn't contain much in the way of substance.]

[Ed. Concur, and I was rapidly approaching the point of asking the thread
to die or go elsewhere.  The pros and cons of a particular language's
strength in an academic curriculum are interesting, however -- 
particularly when it comes to issues re teaching students secure coding
practices.  KRvW]

Anyway, here's my question...

   If a GENERAL PURPOSE programming language were designed from
   scratch by someone who was both a security expert and a
   programming language expert, what would this language (and
   its environment) look like?

   More specifically,

  + What set of features MUST such a language support (e.g.,
strong static typing, etc.)?
  + Perhaps just as importantly, what set of features should
the language omit (e.g., pointer arithmetic, etc.)?
  + What functionality should the accompanying libraries support
(e.g., encryption, access control, etc.)?
  + What would be the optimal paradigm (from a theoretical, rather
than pragmatic perspective) that such a language would fit into
(e.g., object-oriented, functional, imperative, logic programming,
etc.)? [Note: I mention "theoretical, rather than pragmatic" so
that such a language would not be unduly influenced by the fact that
presently developers familiar with OO and imperative styles vastly
outnumber all the others, with functional coming up a distant
3rd.]
  + (Related to the previous item) Would such a language be compiled
or interpreted or something in between?

Also, if anyone chooses to discuss these issues, let's leave things like
portability and performance out of the equation UNLESS you feel these
things directly have an impact on secure coding. I think that we can
all agree that we'd desire a portable and fast-executing language
(although perhaps a slow-executing language would be more secure in
that it might slow down the propagation of malware ;-).

Finally, let's try to keep this abstract and not a grocery list of
"it should have X, Y, and Z, which by the way happens to be in
<my favorite language>".

Looking forward to the ensuing discussion (assuming Ken thinks this is
appropriate to this list's charter).

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
[EMAIL PROTECTED]   Phone: 614.215.4788
"The reason you have people breaking into your software all 
over the place is because your software sucks..."
 -- Former White House cybersecurity advisor, Richard Clarke,
at eWeek Security Summit


RE: [SC-L] Education and security -- another perspective (was "ACM Queue - Content")

2004-07-07 Thread Wall, Kevin
Fernando Schapachnik wrote...

> I've considered 'secure coding' courses, and the idea always 
> look kind oversized. How much can you teach that students can't read 
> themselves from a book? Can you fill a semester with that? I'm
> interested in people's experiences here.

I suppose that depends largely on how you define "secure coding"
and how much depth you want to go into.

For the past 2 years I've taught a CS masters level course in
computer security. In addition to "secure coding" practices, I
also discuss:

* fundamental concepts of security (e.g., authentication,
  authorization, confidentiality, data integrity, nonrepudiation,
etc.);
* risk management and threat models;
* cryptographic foundations;
* authentication, access control, and auditing;
* common threats and vulnerabilities, and
* designing secure systems.

For the most part, because of time constraints and the fact that we are
trying to cover broader things than simply coding-related issues, the
"secure coding" practices are interspersed with the other topics, where
and when appropriate. (If you want more detail, you can find the
syllabus at http://cs.franklin.edu/Syllabus/comp676/.)

> Adding a 'security chapter' to existing courses seems more 
> appropiate (at least to me). At the end of our Programming II
> course, I showcase students the vulnerabilities that can be
> understood or are related with what they've saw in
> class: these includes buffer overflows, input validation, integer
> over/underflows, race conditions, least priviledge, etc. I 
> stress that these are only samples, and point them to links
> (like David's great 'Secure Programming How-To') and books.
> I haven't had the chance to evaluate the impact of that, but
> it is on my to-do list.

I think that is a good approach, although I prefer mixing these
issues in where/when they might be more appropriate (e.g., discuss
security issues arising from race conditions when discussing
multi-threading, etc.) rather than saving them up for the end--if
only because there's a chance that they get crowded out if the
pace goes slower than expected.

> Similary, some other courses where security can be plugged 
> include operating systems, networking, SE, system's design, etc.

Indeed. At the same university, I taught the Distributed Operating
Systems masters level course. We used the Tanenbaum / van Steen
text. The longest single chapter in the book was on security,
so the topic naturally fit in. (However, I know of other instructors
before me who simply skipped that chapter, thinking it wasn't as
important as the rest of the stuff.)

> I'd be interested to hear what people think of the two 
> approaches (separate security courses vs. spreading security
> all over the curricula).

I don't see it as "either / or", but rather "both / and". I think that
we should sprinkle discussion of security, where appropriate, throughout
core courses (OS, networking, software design, etc.) and also have a
course or two at an upper (junior/senior) level. In that way, I
see it as very similar to the way that we approach software design.
Generally there's a specific course or two on design, but we (hopefully)
teach it in bits and pieces at the beginning programming levels as well.
---
Kevin W. Wall   Qwest Information Technology, Inc.
[EMAIL PROTECTED]   Phone: 614.215.4788
"The difference between common-sense and paranoia is that common-sense
 is thinking everyone is out to get you. That's normal -- they are.
 Paranoia is thinking that they're conspiring."-- J. Kegler




RE: [SC-L] Protecting users from their own actions

2004-07-06 Thread Wall, Kevin
In Ken van Wyk's cited article at
http://www.esecurityplanet.com/views/article.php/3377201
he writes...

> As I said above, user awareness training is a fine practice
> that shouldn't be abandoned. Users are our first defense
> against security problems, and they should certainly be
> educated on how to spot security problems and who to report
> them to. By all means, teach your users to be wary of incoming
> email attachments. Teach them to keep their anti-virus software
> up to date, and their firewall software locked down tight.
> 
> Do not, however, be shocked when they make the ''wrong'' choice. 

I would contend that in any sufficiently large user population the
probability that someone will open up a suspect attachment approaches
one. In fact, I think that in a sufficiently large population, this
probability approaches 1 even if:

1) the e-mail were from a complete stranger;
2) the name of attached file was
   "i_am_a_worm_that_will_destroy_your_harddrive.exe".

(#2 assuming that your mail filter didn't catch something so
obvious -- and if it didn't, time to revise your filtering
rules! ;-)
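
To put a rough number on that intuition -- a back-of-the-envelope
sketch, assuming each user independently opens the attachment with
some small probability p -- among n users:

    P(at least one opens it) = 1 - (1 - p)^n

which tends to 1 as n grows. Even with p = 0.01, a population of
n = 1000 users gives 1 - 0.99^1000, or roughly 0.99996.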

So, I completely agree that we ought to EXPECT that users will do
foolish things (with malice or out of ignorance--I'm not trying to
make a moral judgement here) and thus we need to be prepared to
practice "security in depth".

However, (repeating here, from above) Ken also wrote...

> ... Teach them [users] to keep their anti-virus software
> up to date, and their firewall software locked down tight.

I'm not sure why this is something that should be left up to users.
Isn't this something that users probably shouldn't be given a choice
on? Normally I would think that corporate security policy would dictate
keeping the AV software / signatures up-to-date as well as dictating
the (personal) firewall configurations. Some centrally administered
software should do these things. I don't think that (except under very
rare circumstances) users should even be given a _choice_ about
such things. While that may seem Draconian to some, that's what works
best in practice.

Cheers,
-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
[EMAIL PROTECTED]   Phone: 614.215.4788
"The difference between common-sense and paranoia is that common-sense
 is thinking everyone is out to get you. That's normal -- they are.
 Paranoia is thinking that they're conspiring."-- J. Kegler




[SC-L] Education and security -- another perspective (was "ACM Queue - Content")

2004-07-02 Thread Wall, Kevin
Kenneth R. van Wyk wrote...

> FYI, there's an ACM Queue issue out that focuses on security -- see 
> http://acmqueue.com/modules.php?name=Content&pa=list_pages_issues&issue_id=14
>
> Two articles there that should be of interest to SC-L readers include
> Marcus Ranum's "Security: The root of the problem -- Why is it we can't
> seem to produce secure, high quality code?"  ..

I've been thinking a lot about some of the statements that Marcus Ranum
made in his most recent article in the _ACM Queue_ (Vol 2, No 4)...
even before Ken invited us all to comment on it.

I mostly agree with Ranum's conclusions, although perhaps for
different reasons.

Ranum states:
"It's clear to me that we're:
 + Trying to teach programmers how to write more secure code
 + Failing miserably at the task"

He goes on to say that "it [educational approach] flat out hasn't
worked".

In general, I don't think this is an issue that is unique to _secure_
programming (coding, design, etc.). I think over the past 40 years or
so, as a discipline, we've failed rather miserably at teaching
programming, period. For the past 25 years, I've worked closely with
both highly educated Ph.D. computer scientists and with those whose
formal CS education consisted of at most a course or two in something
like C or Pascal. In many of these cases, the less educated are
beating out those who have had more formal education. (In fact,
I'd say this has been true in at least 50% of the cases.)

What makes the difference? Well, it goes beyond mere aptitude and
general intelligence. I think in part at least, it goes with having
a passion for what you do. To some, doing design and coding and
other creative aspects is an artistic expression, a noble cause,
and they would do it even if they weren't paid for it--witness
the open source movement, which is largely funded by volunteer
labor. Others see it as a "job" or a "career path", but not much
more. In my 25 years of observation, those with this PASSION almost
always "get it", and those without it are usually left behind
after the first few years in the profession.

I think that the same can be said for "secure coding / design".
Not only do those people have a passion for coding / design, but
the ones who seem to "get it" are the ones who have a passion for
security as well.

Okay, so probably no surprise here, right? Do what you enjoy and
you'll excel at it more often than those who do it out of other
motives (no matter how noble--such as making a decent living to
provide for your family).

So I agree with Ranum in a sense--that educational approaches to
security have overall failed, but I think it is not because the
educational process / system per se has failed us (not that I'm
arguing that it couldn't be improved), but because we haven't been
able to ignite the passion for security in others. (And frankly,
I'm not even sure to what degree that's possible. I'll leave that to
another discussion.)

In the past two years, I've had the fortune to teach a computer
security course that I had the major part in organizing / developing.
I have learned two things about the students during that time:
1) All the students do well when it comes to rote
   memorization. (E.g., questions such as "What cipher mode
   doesn't require an Initialization Vector?", etc.)
2) Only the students that seem to "get it" seem to do well
   on the questions requiring thought (i.e., ones requiring
   reasoning "outside the box").

Surprisingly (at least at first), I have often discovered that
those whom other faculty members consider the brightest students
are the ones who do the worst on the "questions requiring thought".

But in general, by the end of the 12-week period, I usually can tell
who is going to take what they learned and try to apply it, and who
will just chalk up the course as another 3 credit hours.

I see what I think is a related phenomenon in the commercial world
as well. I've worked with a lot of developers who have worked on
security-related software (e.g., firewalls, crypto, secure proxies,
authentication and access control systems, etc.). One would EXPECT
that the groups that work on these projects would as a whole do
better at developing secure programs than the IT industry as a whole.
But overall, I don't think that their batting average is all that
much higher than the industry at large. We often hear excuses for
this ("security software is more complex", etc.), but I'm not buying
it. If anything, it's this observation more than anything else that
makes me think that formal education is not THE answer (although,
I do think it is part of the answer).

On a related note to security and education, I was wondering if anyone
knows of any experimental data that shows that those with formal
education in security develop more secure programs than those
who have never had such formal training?  If no such experimental
data exists, why not? Can 

RE: [SC-L] Interesting article on the adoption of Software Security

2004-06-12 Thread Wall, Kevin
Dana Epp wrote...

[...snip...]
> For those of us who write kernel mode / ring0 code, what language are 
> you suggesting we write in? Name a good typesafe language that you have 
> PRACTICALLY seen to write kernel mode code in. Especially on Windows and
> the Linux platform. I am not trying to fuel the argument over which 
> language is better, it comes down to the right tool for the right job. I
> know back in December ljknews suggested PL/I and Ada, but who has 
> actually seen production code in either Windows or Linux using it?

I suppose it's _possible_ that one might be able to sneak a bit of
carefully constructed C++ into one of these kernels, but you'd have
to be very careful about what you used (e.g., most of the STL is
out) and, in the end, you'd probably have to use

extern "C" {
  ...
}

wrappers around most of your stuff so it could interface with the
rest of the kernel.

I thought of doing something like this back in 1990 when working on
device drivers with the Unix System V kernel at Bell Labs, but the
potential problems (several having to do with DTORs IIRC and the binding
issues) seemed to outweigh any potential gain. I thought of also using
C++ as "a better (more strongly typed) C", but that too didn't seem
worth it.

Of course, there are some kernels that were implemented in C++; Clouds
comes to mind.

> Lets face it. You aren't going to normally see Java or C# in kernel code
> (yes I am aware of JavaOS and some guys at Microsoft wanting to write 
> everything in their kernel via managed code) but its just not going to 
> happen in practice. C and ASM is the right tool in this area of code.

I'd pretty much agree with this. You seldom even see Java or C# used in
real-time systems (and let's face it, the kernel itself is pretty much
real-time; you don't want to be missing an interrupt while doing GC).
Perhaps once the Real-time Specification for Java is approved and
implemented by Sun, this will change, but I don't think that we'll
be seeing many new OSes adopt Java or C# for their kernel code.
(However, I think this also has to do in part with the fact that most
OS/kernel developers are not experts in OO...just my opinion.)

[...snip...]
> Cripin is right; new code SHOULD be written in a type safe language 
> unless there is a very strong reason to do otherwise. The reality is 
> that many developers don't know when that right time is. And resulting 
> is poor choice in tools, languages and structure.

I think, in large part, that's because your average developer knows
only one or maybe two programming languages. And if they know more,
they only know languages from a single paradigm (e.g., OO, logic programming,
functional programming, procedural, etc.).  Because of this, the view is
"when all you have is a hammer, everything looks like a nail".

> I'd love for someone to show me... no... convince me, of a
> typesafe language that can be used in such a place.
 

Not sure I get your drift here. Did you mean "in commercial systems"
or in OS kernels or something else? (Cut me some slack; I've only
had 2 cups of coffee so far. ;-)

> I have yet to see it for production code, used on a regular basis.

Here at Qwest, we've been using pretty much nothing but Java and C#
for the last 6 years. (Java for about 6+ years and C# for the past
2 years.)

So buffer overflows are pretty much things of the past, but developers
still don't validate most of their input data so there's still plenty
of XSS and SQL injection problems left. (IMO, these are just another
example of failure to do proper data validation, as are buffer overflows.)
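
The canonical example (a minimal JDBC sketch; the table, column, and
method names are made up for illustration):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LookupExample {

    // VULNERABLE: attacker-controlled 'name' is spliced into the SQL
    // text, so input like  ' OR '1'='1  changes the query's meaning.
    static ResultSet lookupUnsafe(Connection con, String name)
            throws SQLException {
        String sql = "SELECT * FROM users WHERE name = '" + name + "'";
        return con.createStatement().executeQuery(sql);
    }

    // SAFER: a parameterized query keeps user input out of the SQL
    // grammar entirely; the driver treats it strictly as data. (The
    // input should still be validated against what the app expects.)
    static ResultSet lookupSafe(Connection con, String name)
            throws SQLException {
        PreparedStatement ps = con.prepareStatement(
                "SELECT * FROM users WHERE name = ?");
        ps.setString(1, name);
        return ps.executeQuery();
    }
}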

[...snip...]
> ... Nor is right to assume you can use 
> typesafe languages as the panacea for secure coding.

To be sure, about 50% of the security holes that I still see are
the results of dumb design decisions (e.g., no authorization checks
whatsoever, placing sensitive data in persistent cookies, etc.).
Keeps my team plenty busy. ;-)

OTOH, I'm sure we'd be a lot worse off if developers here were still
allowed to write new code in C or C++.

Cheers,
-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
[EMAIL PROTECTED]   Phone: 614.215.4788
"The reason you have people breaking into your software all 
over the place is because your software sucks..."
 -- Former White House cybersecurity advisor, Richard Clarke,
at eWeek Security Summit