Re: [SC-L] The Organic Secure SDLC

2011-07-19 Thread Paco Hope

 To clarify further, this is not meant to be prescriptive or even a set of best
 practices. It's a simple observation on how many organizations tend to evolve if
 secure SDLC is not a major priority. I can't say it's based on hard data, but we
 have compiled the steps from experiences at several clients and validated it with
 several others.

That is exactly the process we followed with the BSIMM. Some of the BSIMM
participants were well-established, highly capable, and mature. Others,
however, were just getting their security initiatives off the ground. We
didn't cherry-pick the best of the world. We went to firms that were
significant and found out what they were doing.

 If you were seeking advice on how to build security into the SDLC from the ground
 up or looking for a set of activities to perform, you'd be better served by looking
 at BSIMM.

I don't think someone starting from the ground up looks at the BSIMM. If
you do, it's a brainstorming exercise to acquaint yourself with terms and
activities. If you want something prescriptive, Cigital's Touchpoints or
Microsoft's SDL are methodologies that tell you what to do. Think of the
BSIMM like a thermometer. It can tell you the temperature of your SDLC.
What it can't tell you is whether that's the right temperature or not. If
you're making ice cream or making waffles, you have different temperature
needs. BSIMM simply tells you how you're doing right now (and, if you take
repeated measurements, how that changes over time).

 The organic secure SDLC misses things, like threat modeling, because in our
 observations they don't seem to be done consistently.

I think this organic SDLC is misnamed. It is not a software development
lifecycle. It is, if anything, a description of how security awareness
evolves at some organisations. That is, minimally aware people take the
first step of pen testing production systems. As they grow more aware, they
start looking earlier and earlier in the lifecycle. This
thing itself is not a lifecycle. It's an observation about some
organisations and how they gradually awaken to the need for security in
the SDLC.

It is entirely possible that "climbing the wall" might happen as the
result of taking a measurement using the BSIMM. Instead of a linear arrow,
I wonder if you want to have time on the X axis and level of effort on the
Y axis. There's a curve here, and "climb the wall" is a point on the curve
where the effort is high.

Anyway, this is just the order in which some firms seem to adopt activities
in their lifecycles. It is not a lifecycle.

Paco


___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
Follow KRvW Associates on Twitter at: http://twitter.com/KRvW_Associates
___


Re: [SC-L] The Organic Secure SDLC

2011-07-19 Thread Paco Hope
Jim,

You're spot on. BSIMM is not a lifecycle for any company. Heck, it's not even a 
set of recommendations. It's simply a way to measure what a firm does. It's a 
model formulated from observations about how some firms implement software 
security in their lifecycles. You'll never catch us calling the BSIMM a 
lifecycle.

As for not translating into the SMB market, I don't understand that. Unlike, 
say, prescriptive standards, which say "thou shalt do X" regardless of how big 
you are, the BSIMM measures the maturity of what a firm actually does. There is no 
reason an SMB could not measure the maturity of its effort using the BSIMM.

Maturity is not a function of size. A team of 10 developers might score higher 
on various criteria than a multi-national bank that has a whole team of people 
dedicated to app sec. Maturity is a function of the depth to which a firm takes a 
certain activity and its capability within that activity.

This isn't Pac-Man, either. The goal is not to get the highest score and an 
extra man. :) The goal is to put the right level of effort into the right 
places. A firm can't do that until they know how much effort they're spending 
on different activities. The BSIMM will illuminate the level of effort. It 
allows a firm to decide to rebalance and spread the budget and people across 
the activities that make sense. Whether that's a team of 10 developers or a 
team of 1000 developers, the principle is the same. The execution varies.

Here's another analogy. You can have a GPS and know your exact coordinates, to 
within 3 meters, but not know how to get to the airport by car. The BSIMM will 
tell you your coordinates at the present time. It does not tell you the best 
way to the airport. It can tell you the crow-fly distance to the airport, but 
it can't tell you that the airport is where you want to be.

Paco


Paco,

By your same logic I would not consider BSIMM a lifecycle either. It's
a thermometer to measure an SDLC against what some of the largest
companies are doing. As others have noted, BSIMM does not translate
well into the SMB market where most software is written. Don't get me
wrong, BSIMM is very interesting data and is useful. But a
comprehensive secure software lifecycle for every company it is not.

- Jim Manico

On Jul 19, 2011, at 9:35 AM, Paco Hope p...@cigital.com wrote:

Think of the
BSIMM like a thermometer. It




Re: [SC-L] any one a CSSLP is it worth it?

2010-04-14 Thread Paco Hope

On 14 Apr 2010, at 16:24, Wall, Kevin wrote:
 I just reread your Dark Reading post and I must say I agree with it
 almost 100%. The only part where I disagree with it is where you wrote:
 
The multiple choice test itself is one of the problems. I
have discussed the idea of using multiple choice to
discriminate knowledgeable developers from clueless
developers (like the SANS test does) with many professors
of computer science. Not one of them thought it was possible.

This is the part of the article I disagree with most, as well. Asking whether 
multiple choice exams can discriminate between clueful and clueless developers 
is a valid and important question to ask.  However, I believe few professors of 
computer science could discriminate between clueful and clueless developers if 
"developer" and "clue" have industry-relevant definitions. What passes for 
"development" in an academic sense and what is required for "clue" in an 
academic sense are usually defined on very different axes than the axes used in 
industry.

So, I think asking college professors whether standardised tests are valid in 
this respect is posing the important question to the wrong people. There are 
notorious disconnects between what academics and industry value. Perhaps if you 
asked the folks who hire, promote, and evaluate developers, they could give a 
better opinion as to whether "clue" and standardised test performance correlate. 
Even then, I'd prefer to see something somewhat objective, like months between 
promotions versus certifications held, as opposed to calling a bunch of CIOs or 
VPs of Engineering and asking how well they think tests work.

Having said this, I am a CSSLP and I have helped write a ton of questions for 
the exam. I can tell you we struggle long and hard to write meaningful 
questions that actually discriminate a practitioner who has experience from a 
random, unqualified candidate. We follow well-established psychometric 
principles when designing the questions. The whole test creation/maintenance 
process is ANSI-approved and audited. Careful statistics are kept on the 
pass/fail rates on individual questions to discard questions that do not 
discriminate well. Over time, the question bank is maintained to remove 
questions that don't test well and to write new questions that represent 
changes in the landscape. Some of you will undoubtedly dismiss this, saying 
"garbage in, garbage out," regardless of how pristine the pipes are. I believe 
that's too simplistic a view.
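
For the curious, here is a tiny, hypothetical sketch (not (ISC)2's actual process
or data) of the kind of statistic involved: the point-biserial correlation between
answering one question correctly and the candidate's total exam score. Questions
whose correlation hovers near zero, or goes negative, are the ones that do not
discriminate and get culled.

    public class ItemDiscrimination {

        // itemCorrect[i] is true if candidate i answered this question correctly;
        // totalScore[i] is that candidate's overall exam score.
        static double pointBiserial(boolean[] itemCorrect, double[] totalScore) {
            int n = totalScore.length;
            double sum = 0, sumSq = 0, sumCorrect = 0;
            int numCorrect = 0;
            for (int i = 0; i < n; i++) {
                sum += totalScore[i];
                sumSq += totalScore[i] * totalScore[i];
                if (itemCorrect[i]) { sumCorrect += totalScore[i]; numCorrect++; }
            }
            double mean = sum / n;
            double sd = Math.sqrt(sumSq / n - mean * mean);    // population standard deviation
            double p = (double) numCorrect / n;                // proportion who got the question right
            double meanRight = sumCorrect / numCorrect;
            double meanWrong = (sum - sumCorrect) / (n - numCorrect);
            return (meanRight - meanWrong) / sd * Math.sqrt(p * (1 - p));
        }

        public static void main(String[] args) {
            // Invented numbers for eight candidates, purely to illustrate the calculation.
            boolean[] item  = { true, true, false, true, false, false, true, false };
            double[]  total = { 88, 91, 62, 79, 70, 55, 84, 66 };
            System.out.printf("discrimination = %.2f%n", pointBiserial(item, total));
        }
    }

A high positive value means the strong candidates tend to get the question right
and the weak ones tend to miss it, which is what you want a question to do.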

Paco


Re: [SC-L] web apps are homogenous?

2010-02-24 Thread Paco Hope

On Feb 23, 2010, at 10:06 AM, Jon McClintock wrote:
 This provides a pretty good examination of the costs of patching 
 commercial software. Has anyone done a similar analysis for web 
 applications? I'd expect the costs to be dramatically lower, given
 that you're typically producing a single patch for a handful of
 homogenous systems.

I don't think "webness" conveys any more homogeneity than, say, "windowsness" or 
"linuxness."

What part of being a web application provides homogeneity in a way that makes 
patching cheaper?

Paco
--
Paco Hope, CISSP - CSSLP
Technical Manager, Cigital, Inc.
http://www.cigital.com/
Software Confidence. Achieved.




Re: [SC-L] Source or Binary

2009-07-30 Thread Paco Hope
On 7/29/09 8:08 PM, silky michaelsli...@gmail.com wrote:

 Of course it's a binary, it runs by itself, when there is a java vm
 to run it. Just like you need a win32 vm to run a typical .exe.

You misunderstand the notion of virtual machines if you think of Win32 as a
virtual machine. There is nothing virtual about Windows. It runs on the
real hardware (ignoring things like VMware). Your Windows EXEs (except those
running in the .NET CLR) also run on the real x86 hardware. I.e., your
variables are loaded into CPU registers and operated on, etc.

The Java Virtual Machine is a theoretical machine, and Java code is compiled
down to Java bytecode that runs on this theoretical machine. The Java VM is
the actual Windows EXE that runs on the real hardware. It reads these
bytecodes and executes them. There is a very significant level of
abstraction between a Java program running in a Java virtual machine and
native code that has been compiled to a native object format (e.g., an
.exe).
 
 Realizing that java binaries hold a lot more is a mental shift that
 probably must be actively kept in mind.
 
 Hold a lot more what? This doesn't make sense.

It makes a lot of sense. Because Java is a string-based language, a great
deal of symbolic information (e.g., class names, method names, inheritance
hierarchies) remains in the class file, literally in string format, after
you compile. If you're in the C++ world and you compile and then strip your
binaries, that symbolic information is reduced a lot. If you use a Java
decompiler (e.g., jad, jode, etc.), you can get .java files from .class
files and they are remarkably usable. While C++ decompilation is possible,
the fidelity of decompiled Java programs is significantly higher. I.e., they
match their original source, sometimes with astonishing accuracy.

Take a look at java decompilation and compare it to what you know about
native code decompilation. It is absolutely true that (ignoring
anti-reversing techniques like obfuscation) Java binaries carry a lot more
usable information to help in the dissection and understanding of their
execution than an equivalent native-code program would.
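
To make that concrete, here is a minimal, hypothetical Java sketch (the class and
member names are invented). It uses reflection to print the same symbolic
information that survives inside the .class file, which is the raw material that
javap or a decompiler works from:

    import java.lang.reflect.Field;
    import java.lang.reflect.Method;

    public class SymbolDump {

        // Invented stand-in for "your" compiled application code.
        static class AccountTransfer {
            private String sourceAccount;
            private String destAccount;
            void transfer(double amount) { /* ... */ }
        }

        public static void main(String[] args) {
            Class<?> c = AccountTransfer.class;
            System.out.println("class " + c.getName());
            for (Field f : c.getDeclaredFields()) {
                System.out.println("  field:  " + f.getType().getSimpleName() + " " + f.getName());
            }
            for (Method m : c.getDeclaredMethods()) {
                System.out.println("  method: " + m.getName());
            }
            // A stripped native binary would retain little or none of this symbolic detail.
        }
    }

Run it and the class, field, and method names come back verbatim; a decompiler goes
one step further and rebuilds readable method bodies around them.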

Paco

-- 
Paco Hope, CISSP, CSSLP
Technical Manager, Cigital, Inc
http://www.cigital.com/ * +1.703.585.7868
Software Confidence. Achieved.




Re: [SC-L] CSSLP

2009-03-23 Thread Paco Hope
On 3/21/09 6:43 PM, Jim Manico j...@manico.net wrote:

 What really bothers me is that the CSSLP looks appsec operations focused - not
 developer SDLC focused (or so I've heard). The SANS cert for software
 security seems to drill a lot more into actual activities a developer should
 take in order to write secure code and seems somewhat reasonable to me. I think a
 secure software architecture cert would round out current offerings well.

Speaking as an SME for that exam (i.e., one of the guys who makes exam questions
and such), I can say you're exactly right. It definitely is skewed towards a holistic,
operations-type feel. However, you've misidentified its target.

The target of the CSSLP is anyone involved in the software (though perhaps
we should say system) development lifecycle. It targets not just
developers, but also testers, release managers, test managers, and others
who are important to the big picture of getting software out the door. It's
not a certified secure developer (i.e., code-slinger). The person who holds
the cert should be acquainted with security in more phases of the lifecycle
than just one. It does not, however, certify them as a security ninja in any
phase.

There was another comment about the CISSP that I found poignant: "It was too
damn easy to pass and too damn hard to keep up with the CPE point entry..."

Although point entry is tedious, it keeps the cert honest. You can't spend 3
years converting oxygen into CO2 and remain certified. You actually have to
do a few things. A CISSP person who has renewed once or twice is quite
different from someone who has passed the exam after a cram session. Someone
who certified once and lets their certification lapse is indistinguishable
from the marginally-qualified candidate who crammed, passed, but ultimately
couldn't maintain their cert.

To reject certifications altogether is (to me) to endorse a continuation of
the wild, wild west attitude towards security. Hire the best gunslinger you
can get, and figure out who that is by word of mouth, rumor, and wanted
posters at the post office. Like it or not, the citizens of this wild west
are going to demand governance by a recognizable authority. Sooner or later
these badge-wearing officials will come to town, and the scofflaws will be
marginalized. The era of Wild Bill Hickok and Billy the Kid is over. It's
only a matter of time before, for better or worse, the law moves in. We need
to be on the right side, shaping those laws, not avoiding them.

(Apologies to our international audience for an intensely US-centric
metaphor)

Paco
-- 
Paco Hope, CISSP, CSSLP
Technical Manager, Cigital, Inc
http://www.cigital.com/ * +1.703.585.7868
Software Confidence. Achieved.




Re: [SC-L] Announcing LAMN: Legion Against Meaningless certificatioNs

2009-03-19 Thread Paco Hope
On 3/18/09 5:29 PM, Jeremy Epstein jeremy.j.epst...@gmail.com wrote:

 If you don't have a CISSP, CISM, MCSE, or EIEIO - and you're proud of it

...then I'd say you have an overly simplistic view of the world.

Anyone who believes that a credential automatically conveys some magical
knowledge that you didn't have before is just as overly-simplistic as
someone who disparages all credentials equally. It just isn't a black and
white world. 

Paco
-- 
Paco Hope, CISSP, CSSLP
Technical Manager, Cigital, Inc
http://www.cigital.com/ * +1.703.585.7868
Software Confidence. Achieved.




Re: [SC-L] Security in QA is more than exploits

2009-02-05 Thread Paco Hope
 that priority and maintenance
straightforward. At this point I'm not disagreeing with you, but taking your
good approach and extending it a step farther.

Cheers,
Paco
--
Paco Hope, CISSP - CSSLP
Technical Manager, Cigital, Inc.
http://www.cigital.com/ - +1.703.585.7868
Software Confidence. Achieved.


Re: [SC-L] Security in QA is more than exploits

2009-02-04 Thread Paco Hope
All,

I just read Robert's blog entry about re-aligning training expectations for 
QA. (http://bit.ly/157Pc3) It has some useful points that both developers and 
so-called security people need to hear. I disagree with some implicit biases, 
however, and I think we need to get past some stereotypes that sneak out in the 
article.

Bias #1, obviously, is the focus on the web. Despite its omnipresence, there is 
more non-web software than web software in the world, and non-web software does 
more important stuff than all the web software combined. The role of security 
in _software_ testing is vital, and the presence or absence of web technologies 
does not change that. Despite writing a recent book on Web Security Testing, I 
know my place in the universe. Quality assurance and software testing are 
disciplines far older than the web, and their mission is so much bigger than 
finding vulnerabilities.

Bias #2 is vulnerabilities über alles. By talking about weaving vulnerabilities 
into security test plans, we've overlooked the first place where security goes 
into the QA process: test strategy. Look at any of the prominent folks in QA 
(Jon Bach, Michael Bolton, Rex Black, Cem Kaner), the people I'm privileged to 
share podiums with at QA conferences like STAR West, STAR East, and Better 
Software, and you'll see that security is part of the overall risk-based 
testing strategy. Risk-based testing has been around for a really long time. 
Longer than the web.

Before anyone talks about vulnerabilities to test for, we have to figure out 
what the business cares about and why. What could go wrong? Who cares? What 
would the impact be? Answers to those questions drive our testing strategy, and 
ultimately our test plans and test cases.

Bias #3 is the idea that a bunch of web vulnerabilities are equivalent in 
impact to the business. That is, you just toss as many as you can into your 
test plan and test for as much as you can. This isn't how testing is 
prioritized.

You don't organize testing based on which top X vulnerabilities are likely to 
affect your organization (as the blog suggests). Likelihood is one part of the 
puzzle. Business impact is the part that is missing. You prioritize security 
tests by risk severity—that marriage of likelihood and impact to the business. 
If I have a whole pile of very likely attacks that are all low or negligible 
impact, and I have a few moderately likely attacks that have high impact, I 
should prioritize my testing effort around the ones with greater impact to my 
business.
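
As an illustration only (the scales, numbers, and test names below are invented),
here is a small sketch of that prioritization in code: score each candidate
security test by severity (likelihood times business impact) and sort descending
before you spend a minute of test effort.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;

    public class RiskRanking {

        static class CandidateTest {
            final String name;
            final double likelihood;  // 0..1: how likely the attack is
            final double impact;      // 0..1: how badly the business would be hurt
            CandidateTest(String name, double likelihood, double impact) {
                this.name = name; this.likelihood = likelihood; this.impact = impact;
            }
            double severity() { return likelihood * impact; }  // one simple scoring scheme; real ones vary
        }

        public static void main(String[] args) {
            List<CandidateTest> tests = new ArrayList<CandidateTest>();
            tests.add(new CandidateTest("reflected XSS on a brochure page", 0.9, 0.1));
            tests.add(new CandidateTest("SQL injection in the payments module", 0.4, 0.9));
            tests.add(new CandidateTest("verbose error messages on login", 0.8, 0.1));

            Collections.sort(tests, new Comparator<CandidateTest>() {
                public int compare(CandidateTest a, CandidateTest b) {
                    return Double.compare(b.severity(), a.severity());  // highest severity first
                }
            });
            for (CandidateTest t : tests) {
                System.out.printf("%.2f  %s%n", t.severity(), t.name);
            }
        }
    }

The moderately likely, high-impact issue floats to the top, ahead of the very
likely, low-impact ones, which is exactly the point about impact being the missing
part of the puzzle.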

Bias #4 is the treatment of testers like second class citizens. In the blog 
article, developers "are detail oriented" and "have a deep understanding of flows." 
Contrast this with QA, who merely "understand what is provided to them." They 
sound impotent, as if all they can do is what they're told. Software testing, 
despite whatever firsthand experience the author may have, is a mature 
discipline. It is older and more formalized than security as a discipline. 
Software testing is older than the Internet or the web. If software testing as 
a discipline has adopted security too slowly, given security's rise to the 
forefront in the marketplace, that might be a legitimate criticism. But I don't 
approve of slandering QA by implying that they just take what's given to them 
and execute it. QA is hard and there are some really bright minds working in 
that field.

As someone who has been training in risk-based security testing for several 
years now, I totally agree with some points, but very much disagree with 
others. I agree that the bug parade (as we call it) of top X vulnerabilities 
to find is the wrong way to teach security testing. Risk management, though, 
has been a fundamental part of mainstream QA for a very long time. Likewise, 
risk management is the same technique that good security people use to 
prioritize their results. Risk management is certainly how the business is 
going to make decisions about which issues to remediate and when. Risk 
management is what ties this all together.

If there's something that QA needs to learn that they're not already learning, 
it's the weaving of security into the risk management techniques they already 
know how to do. If testers fall short in their ability to apply risk management 
techniques, then they are falling short against the QA yardstick; there's 
nothing particularly security-related in this observation. If they are applying 
mature QA practices with modern risk management, but are not adequately 
addressing the software-induced business risks facing their stakeholders, then 
some security training might be in order. That security training should be 
built on the foundation of modern QA practice, including risk-based testing.

So, in some ways we agree: speak the lingo of QA. But in other ways we 
disagree. I think the original article fails to give credit to the decades of 
substantial research and practice in QA. In other words, it's a lot 

Re: [SC-L] Survey

2008-08-26 Thread Paco Hope
On 8/26/08 3:03 PM, ljknews [EMAIL PROTECTED] wrote:

I am not interested in dealing with people who cannot get
the simple things right.

Right. Because we all know that the HTML, XHTML, DHTML, CSS, and related 
standards are really simple. Nothing to it. Writing valid HTML in our 
applications is a snap. And when management says "so, why are we a week late 
getting the application into production?" they'll be pleased to hear that it 
was to make sure the HTML on all 300 screens validated. Never mind that the app 
was satisfying its users and business owners when it didn't validate. It's 
important to make the validation programs happy, not the users or the business.

As it is, web applications are shoved out the door with insufficient attention 
paid to their functional capabilities. Then there's the insufficient attention 
paid to their security capabilities. Standards compliance is orthogonal to all 
that. I'd rather have a functional and sufficiently secure web site that was 
non-compliant than one that was compliant but lacking in functionality or 
security.

Either way, I think Gary's point in putting the survey out on this list was to 
see if we were interested in the survey. It's a shame we've gone off on a 
tangent about the value of validating HTML.

Paco
--
Paco Hope, CISSP
Technical Manager, Cigital, Inc
http://www.cigital.com/ * +1.703.585.7868
Software Confidence. Achieved.



Re: [SC-L] Survey

2008-08-24 Thread Paco Hope
Clearly the survey's content is only of interest if the HTML validates.

On Aug 24, 2008, at 9:47 AM, ljknews [EMAIL PROTECTED] wrote:

 At 2:43 PM -0400 8/22/08, Gary McGraw wrote:

 BankInfoSecurity is running a survey on software security that some
 of you may be interested in participating in.  Try it yourself here:

 http://www.bankinfosecurity.com/surveys.php?surveyID=1

 Hmmm.  http://validator.w3.org says there are 973 errors on that page.
 --
 Larry Kilgallen



Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-26 Thread Paco Hope
On 6/26/07 4:25 PM, Wall, Kevin [EMAIL PROTECTED] wrote:

I mean, was the fix really rocket science that it had to take THAT LONG??? 
IMHO, no excuse for taking that long.

8 months seems awfully long, but it doesn't surprise me that a big organization 
takes a really long time to get things like this out the door. I have worked 
with a lot of software companies over the years, and few have their entire test 
suite (unit, integration, system, regression) fully automated. It just doesn't 
work that way. People on this list would be just as condemning of a company 
that rushed a fix out the door with inadequate testing and managed to ship a 
new buffer overflow in the fix for an old one. Furthermore, it's not like the 
source code had been sitting idle with no modifications to it. They were surely 
in the middle of a dev cycle where they were adding lots of features and 
testing and so on. They have business priorities to address, since those 
features (presumably) are what bring revenue in the door and keep competitors 
off their market turf.

So, if everyone dropped everything they were doing and focused solely on fixing 
this one issue and doing a full battery of tests until it was release-worthy, 
it would have gone out a lot faster. But a company that did that on every bug 
that was reported would get no features released and go out of business. They 
have to weigh the impact of missing product goals versus the risk of exploits 
of a buffer overflow. I'm not sure we can categorically say (none of us being 
RealNetworks people) that they made the wrong decision. We don't have the 
information.

Paco
--
Paco Hope, CISSP
Technical Manager, Cigital, Inc
http://www.cigital.com/ * +1.703.585.7868
Software Confidence. Achieved.



Re: [SC-L] Building Security In vs Auditing

2007-01-04 Thread Paco Hope
 Gary, I would love a little refinement of the benefits to badnessometers.
 Let's say I get a tool to tell me something I already suspect is wrong,
 what percentage of the population are better than they expected?

I won't speak for Gary, but working a few doors down I have seen a few of the 
same things he has.

Occasionally developers internally run free tools and surreptitiously fix 
problems that the tools find (this happens in some cultures where management is 
particularly antagonistic towards security as a first class concern). In those 
unusual instances, I could see the results of a badnessometer coming out better 
than expected. Management would perceive that such things had never been run, 
and would be pleasantly surprised to learn that the sky might not be falling. 
Other than that, few people run a tool for the first time and see results 
better than they expected. Tools codify all manner of stuff that your 
developers almost certainly do not know how to check for (and if they do, they 
probably don't have time).

 Is it better to do such a badness test by doing a POC with one of the
 tool vendors in this space or do I get additional lift by going with
 a consulting firm in this regard?

I'm a consultant, take that as implied bias. But, I think you do get lift, and 
here's my analogy. Consider yourself a handy guy around the house who is going 
to do something moderately complicated, like redo a whole bathroom. You can buy 
all the tools and read all the books on how to do it for a lot less money than 
hiring a contractor to do the whole thing.  There's some pretty specialized 
tools in plumbing, though, and they're tools you probably haven't used more 
than once or twice. Do you gain some extra insight into the use of those tools 
by hiring someone experienced to assist on the complicated parts? I think so. 
That someone experienced will come in, help you wield the unfamiliar tool, show 
you some things that he has experienced, and get you through the difficult 
parts. Then you, being the handy guy you are, are left to finish the bathroom, 
doing things you know how to do well.

I think this analogy holds with a lot of the tools in security. You learn a lot 
by getting the experience someone brings, assuming you get a good someone. We, 
for example, have run a bunch of tools on a lot of different code bases. We 
know which rules tend to be alarmist and which ones are really important if 
they fire. Tool vendors won't give you that objectivity on their own tool, and 
some of the sales engineers don't have the insight into their own tool to know 
which warnings are just noise and which warnings are a big deal. A consultant 
can help you have a bake-off between tools, whereas a tool vendor typically 
lacks that objectivity.

Paco








Re: [SC-L] auditing

2004-05-03 Thread Paco Hope
On 5/3/04 11:48 AM, ljknews [EMAIL PROTECTED] wrote:

 At 10:04 AM -0500 5/3/04, jnf wrote:
 
 Someone just suggested ctags; I've never heard of ctags or cscope -- I will
 look at them. I don't really know what I was looking for; I often find it
 quite frustrating trying to keep track of what's going on across XX global
 variables inside of XX internal functions, and so on
 
 What you are looking for is a tool, and a debugger really is not it
 (for a thorough job), since a debugger just deals with the current
 active call, not all situations in which a subprogram might be called.

One commercial tool that I have had some reasonable success with is called
SourceInsight (http://www.sourceinsight.com/).  It builds a database of all
the function calls, variable definitions, macros, etc. You can right-click
on any variable, data structure, file, etc., and click on things like "where
is this defined?" or "where is this called?" If you're editing under a
Borland or Microsoft MFC environment, it also can import the system files to
help navigate dependencies on system calls.

They intend it to be a full-fledged code editor for development, but I've
never used it that way.  It's never going to replace emacs for me, and it
doesn't run native under MacOS X, either.  So if you're auditing Windows
code using a Windows box, it's highly relevant.  If you're auditing
UNIX-oriented code, it's a little less relevant.  You can copy the UNIX code
to a Windows box and run it, and you get many of the benefits.  You can run
it under VirtualPC on MacOS X, but it's a bit slow.

When I do source code audits of very large projects and I have to grok large
sets of intertwining code, this is a decent navigation tool.

Paco
-- 
Paco Hope, CISSP
Senior Software Security Consultant
Cigital, Inc. http://www.cigital.com/
[EMAIL PROTECTED] -- +1.703.404.5769









Re: [SC-L] virtual server - IPS

2004-03-31 Thread Paco Hope
On 3/31/04 10:05 AM, Jeremy Epstein [EMAIL PROTECTED] wrote:
 You might also consider one of the IPS products (e.g., Okena/Cisco,
 Entercept/NAI, or PlatformLogic), all of which will allow you to constrain
 what happens and may be somewhat more scalable than VMware if you need
 to run a bunch of instances of the virtual environment.

This answer is decidedly beyond the scope of secure coding. IPSes don't even
run on the host with the code. IPS systems are so far removed from the
actual host that they have no context on which to base decisions about
custom code. The OS can't stop bad programmers from shooting themselves in
the foot. It can at least apply a few limits to the damage when they do.

The original question was "how can I limit one user's ability to interfere
with other users on the box?" An answer that takes the box offline when bad
stuff happens is probably not the answer he was hoping for.  It's a
host-based question, and the network is not the right place to solve it.

Paco
-- 
Paco Hope, CISSP
Senior Software Security Consultant
Cigital, Inc. http://www.cigital.com/
[EMAIL PROTECTED] -- +1.703.404.5769



