Re: [SC-L] What's the next tech problem to be solved in software security?

2007-06-06 Thread Michael Silk
you've got a few questions there ... i'll answer the first one.

i might copy the suggestion from someone [i can't remember who at the
moment] who suggested the next step in programming in general is more
parallel programs [in order to increase speed]. this is obviously
complicated and will create new security problems.

but i mean (it hardly needs to be said), we have enough trouble with
the problems we already have.


On 6/6/07, Kenneth Van Wyk [EMAIL PROTECTED] wrote:
 Hi SC-L,

 [Hmmm, this didn't make it out to the list as I'd expected, so here's
 a 2nd try. Apologies for any duplicates. KRvW]

 At the SC-L BoF sessions held to date (which admittedly is not
 exactly a huge number, but I'm doing my best to see them continue), I
 like to ask those that attend what we can be doing to make SC-L more
 useful and meaningful to the subscribers.  Of course, as with all
 mailing lists, SC-L  will always be what its members make of it.
 However, at one recent SC-L BoF session, it was suggested that I pose
 periodic questions/issues for comment and discussion.  As last week
 was particularly quiet here with my hiatus and all, this seems like a
 good opportunity to give that a go, so...

 What do you think is the _next_ technological problem for the
 software security community to solve?  PLEASE, let's NOT go down the
 rat hole of senior management buy-in, use [this language], etc.  (In
 fact, be warned that I will /dev/null any responses in this thread
 that go there.)  So, what technology could/would make life easier for
 a secure software developer?  Better source code analysis?  High(er)
 level languages to help automate design reviews?  Better security
 testing tools?  To any of these, *better* in what ways, specifically?

 Any takers?

 Cheers,

 Ken
 -
 Kenneth R. van Wyk
 SC-L Moderator
 KRvW Associates, LLC
 http://www.KRvW.com


-- 
mike
68 65 6c 6c 6f 20 74 6f 20 79 6f 75 2c
20 68 65 78 20 64 65 63 6f 64 65 72 2e
___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Perspectives on Code Scanning

2007-06-06 Thread Michael Silk
On 6/7/07, McGovern, James F (HTSC, IT) [EMAIL PROTECTED] wrote:
 I really hope that this email doesn't generate a ton of offline emails and 
 hope that folks will
 talk publicly. It has been my latest thinking that the value of tools in this 
 space is not really
 targeted at developers but should be targeted at executives who care about 
 overall quality
 and security folks who care about risk. While developers are the ones to 
 remediate, the
 accountability for secure coding resides elsewhere.

and that's the problem. the accountability for insecure coding should
reside with the developers. it's their fault [mostly].



 It would seem to me that tools that developers plug into their IDE should be
 free since the
 value proposition should reside elsewhere. Many of these tools provide 
 audit functionality
 and allow enterprises to gain a view into their portfolio that they 
 previously had zero clue
 about and this is where the value should reside.

 If there is even an iota of agreement, wouldn't it be in the best interest of 
 folks here to get
 vendors to ignore developer specific licensing and instead focus on 
 enterprise concerns?




-- 
mike
68 65 6c 6c 6f 20 74 6f 20 79 6f 75 2c
20 68 65 78 20 64 65 63 6f 64 65 72 2e
___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


[SC-L] Chinese Professor Cracks Fifth Data Security Algorithm (SHA-1)

2007-03-21 Thread Michael Silk

Awesome.

---

 http://en.epochtimes.com/tools/printer.asp?id=50336


 The Epoch Times

 Home  Science  Technology

 Chinese Professor Cracks Fifth Data Security Algorithm

 SHA-1 added to list of accomplishments

 Central News Agency

 Jan 11, 2007


 Associate professor Wang Xiaoyun of Beijing's Tsinghua University and
 Shandong University of Technology has cracked SHA-1, a widely used data
 security algorithm. (Daniel Berehulak/Getty Images)

 TAIPEI - Within four years, the U.S. government will cease to use SHA-1
 (Secure Hash Algorithm) for digital signatures, and convert to a new and
 more advanced hash algorithm, according to the article "Security
 Cracked!" from New Scientist. The reason for this change is that associate
 professor Wang Xiaoyun of Beijing's Tsinghua University and Shandong
 University of Technology, and her associates, have already cracked SHA-1.

 Wang also cracked MD5 (Message Digest 5), the hash algorithm most commonly
 used before SHA-1 became popular. Previous attacks on MD5 required over a
 million years of supercomputer time, but Wang and her research team
 obtained results using ordinary personal computers.

 In early 2005, Wang and her research team announced that they had
succeeded
 in cracking SHA-1. In addition to the U.S. government, well-known
companies
 like Microsoft, Sun, Atmel, and others have also announced that they will
 no longer be using SHA-1.

 Two years ago, Wang announced at an international data security conference
 that her team had successfully cracked four well-known hash algorithms
 (MD5, HAVAL-128, MD4, and RIPEMD) within ten years.

 A few months later, she cracked the even more robust SHA-1.

 Focus and Dedication

 According to the article, Wang's research focusses on hash algorithms.

 A hash algorithm is a mathematical procedure for deriving a 'fingerprint'
 of a block of data. The hash algorithms used in cryptography are one-way:
 it is easy to derive hash values from inputs, but very difficult to work
 backwards, finding an input message that yields a given hash value.
 Cryptographic hash algorithms are also resistant to collisions: that is,
 it is computationally infeasible to find any two messages that yield the
 same hash value.

 Hash algorithms' usefulness in data security relies on these properties,
 and much research focuses on this area.

 Recent years have seen a stream of ever-more-refined attacks on MD5 and
 SHA-1, including, notably, Wang's team's results on SHA-1, which permit
 finding collisions in SHA-1 about 2,000 times more quickly than brute-force
 guessing. Wang's technique makes attacking SHA-1 efficient enough to be
 feasible.

 MD5 and SHA-1 are the two most extensively used hash algorithms in the
 world. These two algorithms underpin many digital signature and other
 security schemes in use throughout the international community. They are
 widely used in banking, securities, and e-commerce. SHA-1 has been
 recognized as the cornerstone for modern Internet security.

 According to the article, in the early stages of Wang's research, there
 were other researchers who tried to crack these algorithms. However, none
 of them succeeded. This is why, over 15 years, hash research had come to be
 seen as a hopeless field in many scientists' minds.

 Wang's method of cracking algorithms differs from others'. Although such
 analysis usually cannot be done without the use of computers, according to
 Wang, the computer only assisted in cracking the algorithm. Most of the
 time, she calculated manually, and manually designed the methods.

 "Hackers crack passwords with bad intentions," Wang said. "I hope efforts
 to protect against password theft will benefit [from this]. Password
 analysts work to evaluate the security of data encryption and to search for
 even more secure algorithms."

 "On the day that I cracked SHA-1," she added, "I went out to eat. I was
 very excited. I knew I was the only person who knew this world-class
 secret."

 Within ten years, Wang cracked the five biggest names in cryptographic hash
 algorithms. Many people would think the life of this scientist must be
 monotonous, but "that ten years was a very relaxed time for me," she says.

 During her work, she bore a daughter and cultivated a balcony full of
 flowers. The only mathematics-related habit in her life is that she
 remembers the license plates of taxi cabs.

 With additional reporting by The Epoch Times.
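
A side note on the numbers: a brute-force birthday search for a SHA-1
collision costs about 2^80 hash operations, while Wang's attack needs
roughly 2^69, and 2^80 / 2^69 = 2^11 = 2048, which matches the article's
"about 2,000 times more quickly". For anyone who wants to see the
'fingerprint' property first-hand, here is a minimal Java sketch using the
standard java.security.MessageDigest API (the class name and inputs are my
own, purely illustrative):

import java.security.MessageDigest;

public class HashDemo {
    public static void main(String[] args) throws Exception {
        // Two inputs differing in a single character yield completely
        // different fingerprints (the avalanche effect).
        for (String input : new String[] { "hello world", "hello worle" }) {
            for (String alg : new String[] { "MD5", "SHA-1" }) {
                MessageDigest md = MessageDigest.getInstance(alg);
                byte[] digest = md.digest(input.getBytes("UTF-8"));
                StringBuilder hex = new StringBuilder();
                for (byte b : digest) {
                    hex.append(String.format("%02x", b));
                }
                System.out.println(alg + "(" + input + ") = " + hex);
            }
        }
    }
}

Finding two inputs with the *same* digest is the part that is supposed to
be computationally infeasible; Wang's results show it is merely expensive
for SHA-1, and cheap for MD5.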

--
mike
00110001 3 00110111
___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Darkreading: compliance

2007-03-13 Thread Michael Silk

On 3/14/07, Gary McGraw [EMAIL PROTECTED] wrote:


Once again i'll ask.  In which vertical is the kind of company where you're
seeing this awful behavior?




well, fwiw, i've noticed it in finance/investment, and the entertainment
industries. but i honestly don't think the industry type makes a whole lot
of difference. it's a corporate management thing.


BTW, sammy migues agrees with you in a thread we're having on the justice

league blog www.cigital.com/justiceleague (look under SOX).

gem

company www.cigital.com
podcast www.cigital.com/silverbullet
blog www.cigital.com/justiceleague
book www.swsec.com.



-Original Message-
From:   Bruce Ediger [mailto:[EMAIL PROTECTED]
Sent:   Tue Mar 13 12:10:42 2007
To:
Cc: SC-L@securecoding.org
Subject:Re: [SC-L] Darkreading: compliance

On Tue, 13 Mar 2007, somebody wrote (attribution isn't clear to me):

 no. my feeling is that it focuses management on unimportant things like
 meeting checkpoints rather than actually doing useful things.

I heartily agree. Compliance almost always becomes (in the worst sense
of the word) a mantra to chant down all disagreement.  Compliance
becomes
the *administrative* stick-and-carrot, rather like a driver's license in
the US.

That is, every US citizen has this set of nominal rights that nobody
can take away.  On the other hand, a driver's license is a privilege,
so you have to jump through some hoops to get it, and it comes with
mandatory behaviors, not all of them legal, most of them administrative.
Life in the US without a driver's license is marginal.  So, administrators
use driver's licenses to punish and guide behavior in ways nominally,
or legally, forbidden.  Wink wink, nudge nudge.

I'm most familiar with PCI, and some of the things that people put in
it are just downright stupid.  If you run your credit card processing
on Solaris, why should you put in a virus scanner?  Seriously, folks...

Since compliance becomes an administrative tool, the weapons against
actually paying for compliance become administrative, hence the focus
on meeting checklist items.  A checklist can't really contain all the
capability of a general purpose computing system, as checklists do not
have looping or decision making in them.  So, they'll always have weird
limits, and people will try to overcome those limitations by adding to
the checklists.

Compliance becomes a rallying point for the professional meeting
attenders, parasites and hangers on, hierarchy jockeys.





--
mike
00110001 3 00110111
___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Darkreading: compliance

2007-03-12 Thread Michael Silk

On 3/13/07, Gary McGraw [EMAIL PROTECTED] wrote:


hi sc-l,

this month's darkreading column is about compliance.  my own belief is
that compliance has really helped move software security forward.  in
particular, sox and pci have been a boon:

http://www.darkreading.com/document.asp?doc_id=119163

what do you think?  have compliance efforts you know about helped to
forward software security?




no. my feeling is that it focuses management on unimportant things like
meeting checkpoints rather than actually doing useful things.


gem


company www.cigital.com
podcast www.cigital.com/silverbullet
blog www.cigital.com/justiceleague
book www.swsec.com










--
mike
00110001 3 00110111
___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] What defines an InfoSec Professional?

2007-03-08 Thread Michael Silk

On 3/9/07, McGovern, James F (HTSC, IT) [EMAIL PROTECTED]
wrote:


Traditionally InfoSec folks defined themselves as being knowledgeable in
firewalls, policies, etc. Lately, many enterprises are starting to recognize
the importance of security within the software development lifecycle where
even some have acknowledged that software is a common problem space for
those things traditionally thought of as infrastructure.

The harder part is not in terms of recognizing the trend but in terms of
folks from the old world acknowledging folks from the new world (software
development) also as security professionals. I haven't seen many folks make
this transition. I do suspect that some of it is tied to the romance of
certifications such as CISSP whereby the exams that prove you are a security
professional talk all about physical security and network security but
really don't address software development in any meaningful way.

Would be intriguing for folks here that blog to discuss ways for folks to
transition / acknowledge respect not as just software developers with a
specialization in security but in being true security professionals and
treat them like peers all working on one common goal.




i hear you on this one.

australia, at least melbourne, still doesn't seem to have any idea of
software/application security professionals. almost all jobs that have
'security' in them go on to talk about all the firewalls you must know
how to configure. *sigh*. then there is the pen-testing side. there should
be a new field, 'security design', that accompanies 'application architect',
etc. then you would have professional guidance on the security issues when
building an app.



-Original Message-

From: Shea, Brian A [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 08, 2007 2:07 PM
To: Gunnar Peterson; McGovern, James F (HTSC, IT)
Cc: SC-L@securecoding.org
Subject: RE: [SC-L] What defines an InfoSec Professional?


The right answer is both IMO.  You need the thinkers, integrators, and
operators to do it right.  The term Security Professional at its basic
level simply denotes someone who works to make things secure.

You can't be secure with only application security any more than you can
be secure with only firewalls or NIDs.  The entire ecosystem and
lifecycle must be risk managed and that is accomplished by security
professionals.  Each professional may have a specialty due to the
breadth of topics covered by Security (let's not forget our Physical
Security either), but all would be expected to act as professionals.
Professionals in this definition being people who are certified and
expected to operate within specified standards of quality and behavior
much like CISSP, CPA, MD, etc.







--
mike
00110001 3 00110111
___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Dark Reading - Desktop Security - Here Comes the (Web) Fuzz - Security News Analysis

2007-02-27 Thread Michael Silk
On 2/27/07, Kenneth Van Wyk [EMAIL PROTECTED] wrote:

 Here's an interesting article from Dark Reading about web fuzzers.  Web
 fuzzing seems to be gaining some traction these days as a popular means of
 testing web apps and web services.

 http://www.darkreading.com/document.asp?doc_id=118162&f_src=darkreading_section_296

 Any good/bad experiences and opinions to be shared here on SC-L regarding
 fuzzing as a means of testing web apps/services?  I have to say I'm
 unconvinced, but agree that they should be one part--and a small one at
 that--of a robust testing regimen.

unconvinced of what? that fuzzing is useful? or that it's the best
security testing method ever? or do you remain unconvinced that fuzzing
in web apps is the same as fuzzing in os apps?

fuzzing has obvious advantages. that's all anyone should care about.
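
for what it's worth, a web fuzzer doesn't have to be fancy to surface the
obvious problems. here's a minimal sketch of the idea in Java; the target
URL, parameter name, and payload list are all made up for illustration, and
a real tool would generate far more mutations and inspect responses much
more carefully:

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class TinyWebFuzzer {
    // A handful of canned payloads; real fuzzers generate thousands.
    private static final String[] PAYLOADS = {
        "'", "\"", "<script>alert(1)</script>", "../../etc/passwd",
        repeat('A', 5000), "%00", "-1", "999999999999999999999"
    };

    public static void main(String[] args) throws IOException {
        for (String payload : PAYLOADS) {
            String query = "q=" + URLEncoder.encode(payload, "UTF-8");
            URL url = new URL("http://localhost:8080/search?" + query);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            int code = conn.getResponseCode();
            // A 5xx response usually means the input reached code that
            // wasn't expecting it; worth a closer look.
            if (code >= 500) {
                System.out.println("server error " + code + " for payload: " + payload);
            }
            conn.disconnect();
        }
    }

    private static String repeat(char c, int n) {
        StringBuilder sb = new StringBuilder(n);
        for (int i = 0; i < n; i++) {
            sb.append(c);
        }
        return sb.toString();
    }
}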


 Cheers,

 Ken

-- mike
___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Disclosure: vulnerability pimps? or super heroes?

2007-02-27 Thread Michael Silk

On 2/28/07, Gary McGraw [EMAIL PROTECTED] wrote:


Hi all,

The neverending debate over disclosure continued at RSA this year with a
panel featuring Chris Wysopal and others rehashing old ground.  There are
points on both sides, with radicals on one side (say marcus ranum)
calling the disclosure people "vulnerability pimps" and radicals on the
other saying that computer security would make no progress at all
without disclosure.

I've always sought some kind of middle ground when it comes to
disclosure.  The idea is to minimize risk to users of the broken system
while at the same time learning something about security and making
sure the system gets fixed.



I think having extremists is a good thing. It forces people to re-evaluate
their position and actually do things, instead of just having a discussion
about it. Without that there would be middle-grounders sitting around
wondering about the best approach. With the extremists these
middle-grounders have to react, or at least companies do. Which is only good.


Disclosure is the subject of my latest Darkreading column:

http://www.darkreading.com/document.asp?doc_id=118174

What do you think?  Should we talk about exploits?  Should we out
vendors?  Is there a line to be drawn anywhere?




I think if you find an exploit, do what you personally want. If I had time to
research them, I'd probably be pimping them out for as much as I could; why
not? I can decide. I found it. Same to you, with what you found.

The only line will come if some authority in some country makes it illegal
to sell them. And obviously there would be incredible difficulties in
isolating that, IMHO.


gem


company www.cigital.com
podcast www.cigital.com/silverbullet
book www.swsec.com




-- mike (s1, s2, s3) ;
___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] By default, the Verifier is disabled on .Net and Java

2006-05-13 Thread Michael Silk

Stephen,

I think the reason you get the IllegalAccessError is that the VM
thinks you are loading from a remote URL.

I don't think the use of a classloader per se forces the
verification of the class. (I wrote a class loader like yours that
just loads a file with no package in the current dir and was able to
access the private method).

You can also note that your class isn't verified if -noverify is
passed (perhaps this is obvious :)

-- Michael



On 5/13/06, Stephen de Vries [EMAIL PROTECTED] wrote:


On 12 May 2006, at 09:10, Charles Miller wrote:

 It's not reflection: you're confusing IllegalAccessException and
 IllegalAccessError.

 For any non-Java nerd still listening in: there are two fundamental
 types of Throwable exception-conditions in Java: Exceptions and
 Errors[1]. Exceptions represent application-level conditions --
 things an application is likely to be able to recover from, like
 network timeouts, trying to read beyond the end of a file, and so
 on. Errors, on the other hand, represent VM-level problems that an
 application can't really do anything about, like running out of
 memory, not finding a required native library, or encountering
 corrupted class files.

 IllegalAccessException happens when reflective code attempts to
 access some field or method it's not supposed to. Because it's a
 result of reflection, it's considered an application-level problem
 and it's assumed your code can recover gracefully.

 Amusingly enough, you can get around most IllegalAccessExceptions
 in java just by calling {field|method}.setAccessible(true). So long
 as there's no explicit SecurityManager installed, as soon as you've
 done that you're free to modify the field or call method to your
 heart's content[2].
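
 A minimal sketch of that reflective bypass (the Secret class below is
 invented purely for illustration):

import java.lang.reflect.Field;

class Secret {
    private String password = "hunter2";
}

public class ReflectionBypass {
    public static void main(String[] args) throws Exception {
        Secret s = new Secret();
        Field f = Secret.class.getDeclaredField("password");
        // Without setAccessible(true), f.get(s) throws
        // IllegalAccessException; with no SecurityManager installed,
        // this call simply switches the access check off.
        f.setAccessible(true);
        System.out.println("private field value: " + f.get(s));
    }
}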

 IllegalAccess_Error_, on the other hand, happens when some non-
 reflective code issues a bytecode instruction that attempts to
 access a field or method it shouldn't be able to see. If you look
 at its class hierarchy, the meaning of the class is pretty clear:
 IllegalAccessError is a subclass of IncompatibleClassChangeError,
 which is a subclass of LinkageError. Because this is a problem at
 the bytecode/classloading level, and literally something that could
 happen on _any_ method-call or field-access, it's flagged as an error.

 The Error generally occurs when class A has been compiled against a
 version of class B where a method is public, but that method is
 private in the version of the same class it encounters at runtime.
 This sort of thing happens quite often in Java, you're frequently
 stuck in jar file hell, in a twisty turny maze of library
 interdependencies, all with slightly different version numbers.

 More about the circumstances of IllegalAccessError here:

http://java.sun.com/docs/books/vmspec/2nd-edition/html/ConstantPool.doc.html

 Dynamic classloading isn't really at fault here. There are all
 sorts of pits you can fall into when you start rolling your own
 classloader (the Java webapp I develop supports dynamic runtime-
 deployable plugins, and the classloading issues are a HUGE
 headache), but IllegalAccessError isn't one of them.

 Charles

[1] Exceptions are further divided into checked exceptions and
 runtime exceptions, but that's beyond the scope of this email
[2] See also: http://www.javaspecialists.co.za/archive/Issue014.html

Thanks for clearing this up Charles.
I've created another example that uses a class loader to load the
classes, and this time, it throws an IllegalAccessError just like
Tomcat does:

Loading class: /Users/stephen/data/dev/classloader/myclass/somepackage/MyTest.class
Loading class: /Users/stephen/data/dev/classloader/myclass/java/lang/Runnable.class
Loading class: /Users/stephen/data/dev/classloader/myclass/java/lang/Object.class
Loading class: /Users/stephen/data/dev/classloader/myclass/somepackage/MyData.class
Loading class: /Users/stephen/data/dev/classloader/myclass/java/lang/System.class
Exception in thread "main" java.lang.IllegalAccessError: tried to
access method somepackage.MyData.getName()Ljava/lang/String; from
class somepackage.MyTest
        at somepackage.MyTest.run(MyTest.java:15)
        at classloader.Main.main(Main.java:26)
Java Result: 1

This error is thrown irrespective of the -verify flag.  So it looks
like using a classloader causes the VM to perform verification,
whether or not the verifier was enabled.  Michael Silk made a
similar statement earlier in this thread.  Would you agree?

PoC code below:

package classloader;

public class Main {

    public Main() {
    }

    public static void main(String[] args) {
        // Illegal Access Error
        try {
            CustomLoader cl = new CustomLoader(System.getProperty("user.dir") + "/myclass/");
            Class myClass = cl.loadClass("somepackage.MyTest");
            Runnable r = (Runnable) myClass.newInstance();
            r.run();

        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
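
(The CustomLoader class itself was cut off in the archive; here is a
minimal sketch of what such a loader typically looks like, my
reconstruction rather than Stephen's original:)

package classloader;

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class CustomLoader extends ClassLoader {
    private final String basePath;

    public CustomLoader(String basePath) {
        this.basePath = basePath;
    }

    protected Class findClass(String name) throws ClassNotFoundException {
        String path = basePath + name.replace('.', '/') + ".class";
        System.out.println("Loading class: " + path);
        try {
            File file = new File(path);
            byte[] bytes = new byte[(int) file.length()];
            FileInputStream in = new FileInputStream(file);
            try {
                int off = 0;
                while (off < bytes.length) {
                    int read = in.read(bytes, off, bytes.length - off);
                    if (read < 0) throw new IOException("truncated class file");
                    off += read;
                }
            } finally {
                in.close();
            }
            // defineClass hands the raw bytes to the VM; this is where
            // class-format checking (and any verification) kicks in.
            return defineClass(name, bytes, 0, bytes.length);
        } catch (IOException e) {
            throw new ClassNotFoundException(name, e);
        }
    }
}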

Re: [SC-L] By default, the Verifier is disabled on .Net and Java

2006-05-12 Thread Michael Silk

On 5/12/06, Dinis Cruz [EMAIL PROTECTED] wrote:


 Michael Silk wrote:


'What is the point of the verifier?', 'Why use it?' and 'What are the
 real security advantages of enabling the verifier if the code is
 executed in an environment with the security manager disabled?'

 Huh? You can find what it does here:

 http://java.sun.com/sfaq/verifier.html

 The security manager and the verifier are different.

 Of course they are, but it is pointless to have only ONE of them enabled.


No it isn't.



You need BOTH Security Manager and Verifier to start to have a Sandbox (and
even then the Sandbox's security will depend on the security policies)


You can have a sandbox without the verifier.



The security manager allows you to restrict runtime-knowledge type
 things. Connecting a socket, opening a file, exiting the vm, and so
 on.

 Sure, but when you have a security policy that restricts access to sockets
and files, if the code you are trying to restrict is executed with -noverify,
then there are ways around those restrictions to gain 'unauthorized' access
to those sockets or files.


How?



Given that the main attack vector (to exploit the lack of verification)
 is the same for all cases mentioned above (the attack vector being the
 injection of malicious Java code on the application being executed) then
 I would like to know the reason for the inconsistency?

 I also would like to ask if the following comments are correct:

 Option A) 99% of the Java code deployed to live systems is executed in
 an environment with the verifier disabled

 Option B) 99% of the Java code deployed to live systems is executed in
 an environment with the verifier disabled OR with the security manager
 disabled

 I'd say no. We already know BEA/Tomcat/(and, from my knowledge,
 Websphere) all run with verification ON by default. These 3 alone
 account for more than 1% of all java code.

 Ok, but what about the security manager?

 I agree that in Option A) the value is not 99% (since the
BEA/Tomcat/Websphere code will take more than 1%)

 But on Option B) ( Java code deployed to live systems is executed in an
environment with the verifier disabled OR with the security manager
disabled ), which is the one that matters, we are still at 99% since the
only code that falls outside that category (i.e. executed with the verifier
AND the security manager enabled) is the mobile code which is executed under
the default browser Java policies (we have to exclude the mobile Java code
which requests and is granted more privileges).


Are you talking about java desktop applications now instead of j2ee
websites? If so, then who cares? You'll be running inside a new vm.



 If not, what value should Option A and B be? 10%, 50%, 75%?
 Download the app servers and check the documentation? I'd guess most
 have it on by default. Not off. Just my guess though ...

 Am I the only one that finds it surprising that such a pillar of Java
Security is not properly known and information about 'who does what' doesn't
seem to be readily available?


What are you talking about? Documentation _is_ readily available and
has been provided throughout this entire discussion.



 Dinis Cruz
 Owasp .Net Project
 www.owasp.net


-- Michael

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] By default, the Verifier is disabled on .Net and Java

2006-05-11 Thread Michael Silk

The verifier is enabled via the command line. It is either on or off.

the VM does other forms of verification though.

http://java.sun.com/docs/books/vmspec/2nd-edition/html/ConstantPool.doc.html#79383

...

-- Michael

On 5/11/06, Jeff Williams [EMAIL PROTECTED] wrote:

Stephen de Vries wrote:
 With application servers such as Tomcat, WebLogic etc, I think we have a
 special case in that they don't run with the verifier enabled - yet they
 appear to be safe from type confusion attacks.  (If you check the
 startup scripts, there's no mention of running with -verify).

You're right -- I checked that too.  So I think it's just too simple to talk
about the verifier being either on or off.  It appears to me that the
verifier can be enabled for some code and not for other code.  I think
you're right that this behavior has something to do with the classloader
that is used, but I'd really like to understand exactly what the rules are.

--Jeff





___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] By default, the Verifier is disabled on .Net and Java

2006-05-08 Thread Michael Silk

On 5/9/06, Dinis Cruz [EMAIL PROTECTED] wrote:

Stephen de Vries wrote:
 Java has implemented this a bit differently, in that the byte code
 verifier and the security manager are independent.  So you could for
 example, run an application with an airtight security policy (equiv to
 partial trust), but it could still be vulnerable to type confusion
 attacks if the verifier was not explicitly enabled.  To have both
 enabled you'd need to run with:
 java -verify -Djava.security.policy ...

This is a very weird decision by the Java Architects, since what is the
point of creating and enforcing an airtight security policy if you can
jump straight out of it via a Type Confusion attack?

In fact, I would argue that you can't really say that you have an
'airtight security' policy if the verifier is not enabled!


You can't disable the security manager even with the verifier off. But
you could extend some final or private class that the security manager
gives access to.
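
for concreteness, here's a minimal sketch of what running with both the
verifier and a security manager enabled looks like in practice; the policy
file contents, file names and property choices are all made up for
illustration:

public class PolicyDemo {
    public static void main(String[] args) {
        // Run with both protections on, e.g.:
        //   java -verify -Djava.security.manager \
        //        -Djava.security.policy=my.policy PolicyDemo
        //
        // where my.policy (hypothetical contents) grants only what's needed:
        //   grant {
        //       permission java.util.PropertyPermission "user.dir", "read";
        //   };
        System.out.println(System.getProperty("user.dir"));

        // Anything the policy doesn't grant is rejected at runtime:
        try {
            System.getProperty("user.home"); // not granted above
        } catch (SecurityException e) {
            System.out.println("blocked: " + e);
        }
    }
}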



 Is there an example out there where (by default) java code is executed in
an environment with :

* the security manager enabled (with a strong security policy) and
* the verifier disabled


Yes. Your local JRE.

-- Michael

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [Owasp-dotnet] Re: [SC-L] By default, the Verifier is disabled on .Net and Java

2006-05-05 Thread Michael Silk

 Two important clarifications for Java (based on my experiments):

 1) The verifier IS enabled for the classes that come with the Java
platform, such as those in rt.jar.  So, for example, if you create a class
that tries to set System.security (the private variable that points to the
SecurityManager instance), you get a verification exception. (If this was
possible, it would allow a complete bypass of the Java sandbox).
 But with either Type Confusion attacks or with the Security Manager
disabled, you can still completely bypass the Java Sandbox, right?






 2) The verifier also seems to be enabled for classes running inside Tomcat.
I'm not sure about other J2EE containers.

 This is interesting, do you have any documentation to back this up? Ideally
there would be a document somewhere which listed which J2EE containers run
with the verifier on by default






 So I don't think it's fair to say that most Java code is running without
verification.
 If all jsp out there is verified then yes 99% is not a correct number, any
idea what percentage it could be?








 But Dinis is right. There is a real problem with verification, as
demonstrated in the message below.  This is a clear violation of the Java VM
Spec, yet my messages to the team at Sun developing the new verifier have
been ignored.  And it's a real issue, given the number of applications that
rely on libraries they didn't compile.  I don't think a real explanation of
how the Sun verifier actually works is too much to ask, given the risk.
 I agree, and Sun will probably do the same thing that Microsoft is doing at
the moment: Ignore the issue and hope that it goes away


I don't think so; they've got their 'Crack the Verifier' initiative; and
honestly, I don't think it's a big deal if verification isn't on by
default on the desktop vm; when you double-click a jar it runs in a
new vm, not with a current app you might also be running.

As long as j2ee containers (tomcat as Jeff says, websphere as I've
heard, and BEA WebLogic as someone else suggested) all run with
verification I don't think it's a big deal [but interesting to note],
is it?

FWIW I've seen this issue come up many many times in the java forums
and it's never taken this track. It's interesting to see it develop
this way ...

-- Michael

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] By default, the Verifier is disabled on .Net and Java

2006-05-04 Thread Michael Silk

On 5/4/06, Dinis Cruz [EMAIL PROTECTED] wrote:

Wall, Kevin wrote:

 Also, from the results of your test, it seems to indicate that SOME TYPE
 of verification is taking place, but if all you did was change a few
 ARBITRARY bytes in the .class file, I don't think that proves the
 byte code verifier is being run in its entirety.
I agree with this analysis and think that the error on David's test was
caused by a malformed bytecode instruction
 It's entirely possible that the (new 1.5) default just does some
 surface level of byte code verification (e.g., verify that everything
 is legal op codes / byte code)
Well, some level of verification (or bytecode check) must always occur
during the conversion from bytecode to native code (but this has nothing
to do with type safety)
 before HotSpot starts crunching
 on it and that this works differently if either the '-verify' or
 '-noverify' flags are used. E.g., suppose that '-verify' flag, does
 some deeper-level analysis, such as checks to ensure type safety, etc,
 whereas the '-noverify' doesn't even validate the byte codes are
 legal op codes or that the .class file has a legal format. This might
 even make sense because checking for valid file format and valid
 Java op codes ought to be fairly cheap checks compared to the
 deeper analysis required for things like type safety.

I am a little bit confused here, isn't Java an open platform and almost
Open Source?


The source for the j2se sdk is available (i.e. all the classes) and
the source for the runtime is available as well, but under a
different, special license.



If so, shouldn't issues like this be quite well documented?


Actually, the verification process is documented here:
http://java.sun.com/docs/books/vmspec/2nd-edition/html/ClassFile.doc.html#88597

-- Michael

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] By default, the Verifier is disabled on .Net and Java

2006-05-03 Thread Michael Silk

Verifier in 1.5 is definitely OFF by default:

to confirm this do the following:

1. Create this class:
==
public class Foo {
  public static int k = 23;

  static {
    System.out.println("initially k: " + k);
  }

  public static void m(){
    System.out.println("m() k: " + k);
  }
}
==

2. Compile it.

3. Create this class
==
public class Test {
 public static void main(String[] args){
   Foo.k = 34;
   Foo.m();
 }
}
==

4. Compile it.
5. Run it like so:
5a. java Test
5b. java -verify Test

6. Change Foo.java like so:
==
public class Foo {
  private static int k = 23;

  static {
    System.out.println("initially k: " + k);
  }

  public static void m(){
    System.out.println("m() k: " + k);
  }
}
==

(note k is now private).

7. recompile Foo.java (DO NOT recompile Test.java)
8. run Test.java like so:
8a. java Test
8b. java -verify Test


Note the differences ...

Tested with
===
java version "1.5.0_06"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_06-b05)
Java HotSpot(TM) Client VM (build 1.5.0_06-b05, mixed mode)
===


-- Michael


On 5/4/06, Wall, Kevin [EMAIL PROTECTED] wrote:

David Eisner wrote...

 Wall, Kevin wrote:

The correct attribution for bringing this up (and the one whom you are
quoting) is Dinis Cruz.

   same intuition about the verifier, but have just tested
  this and it is not the case.  It seems that the -noverify is the
  default setting! If you want to verify classes loaded from the
local
  filesystem, then you need to explicitly add -verify to the cmd
line.

 Is this (still) true?  The -verify and -noverify flags are no longer
 documented [1], although they are still accepted.

 I did a little experiment (with my default 1.5 VM).  I compiled a
 HelloWorld program, then changed a few bytes in the class file with a
 hex editor.

Perhaps no longer true (at least one could hope), but I can't take
credit
for the part you quoted above. That was Dinis.

Also, from the results of your test, it seems to indicate that SOME TYPE
of verification is taking place, but if all you did was change a few
ARBITRARY bytes in the .class file, I don't think that proves the
byte code verifier is being run in its entirety. IIRC, the
discussion was around the issue of 'type safety'. It's hard to see how
a HelloWorld program would show that.

It's entirely possible that the (new 1.5) default just does some
surface level of byte code verification (e.g., verify that everything
is legal op codes / byte code) before HotSpot starts crunching
on it and that this works differently if either the '-verify' or
'-noverify' flags are used. E.g., suppose that '-verify' flag, does
some deeper-level analysis, such as checks to ensure type safety, etc,
whereas the '-noverify' doesn't even validate the byte codes are
legal op codes or that the .class file has a legal format. This might
even make sense because checking for valid file format and valid
Java op codes ought to be fairly cheap checks compared to the
deeper analysis required for things like type safety.

You didn't discuss details of what bits you tweaked, so I'm not
quite yet ready to jump up and down for joy and conclude that Sun has
now seen the light and has made the 1.5 JVM default to run the byte
code through the *complete* byte code verifier. I think either more tests
are necessary, or someone at Sun who can speak in some official
capacity needs to step up and give a definitive word one way or another on
this.

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
[EMAIL PROTECTED]Phone: 614.215.4788
Linux *is* user-friendly.  It's just choosy about its friends.
   - Robert Slade, http://sun.soci.niu.edu/~rslade/bkl3h4h4.rvw





___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Credentials for Application use

2005-05-12 Thread Michael Silk
If you are just talking about a password to access a db, the 'typical'
approach (at least the approach I use) is just to store that password
in the code/config file. You may like to add a layer to that by
encrypting it in some config file, and requiring a 'decryption'
(initialisation) of the 'server' to take place, where the key is
entered and the db password is kept in 'application' memory until the
next reset, etc.

But, if you want to use the db resource to manage permissions for various
users AS WELL as your app logic (i.e. some redundant security system;
[which is good]) then you'll need to create sql/whatever accounts for
each user, obviously.

Depends what you want, I guess. I think the answer to your question is
that the password is stored in a config file.
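
as a concrete sketch of that 'encrypted config' layer (the file name,
property key, and algorithm choice here are all invented for illustration,
not a recommendation):

import java.io.FileInputStream;
import java.util.Properties;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class DbPasswordLoader {
    // Decrypted password lives only in application memory.
    private String dbPassword;

    // Called once at startup; the AES key is entered by an operator.
    public void init(byte[] aesKey) throws Exception {
        Properties config = new Properties();
        FileInputStream in = new FileInputStream("app.config");
        try {
            config.load(in);
        } finally {
            in.close();
        }
        // The config stores the DB password AES-encrypted and hex-encoded
        // under "db.password.enc" (a made-up property name).
        byte[] encrypted = hexDecode(config.getProperty("db.password.enc"));
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(aesKey, "AES"));
        dbPassword = new String(cipher.doFinal(encrypted), "UTF-8");
        // dbPassword is now held until the next reset, as described above.
    }

    private static byte[] hexDecode(String hex) {
        byte[] out = new byte[hex.length() / 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        }
        return out;
    }
}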

-- Michael

On 5/12/05, Mikey [EMAIL PROTECTED] wrote:
 Chris,
 
 Your situation is a little unique in that you encrypt the data with the
 password. The data backend I was referring to is simply a backend database
 like an SQL Server, Oracle 8i or DB2 data repository. All users need to do
 to get access to it is to authenticate to it and then have the right access
 controls to its tables/rows.
 
 SSO may solve my problem but the problem I have right now is that SSO is
 not here for us yet. What I like to understand is from people with
 experience in this stuff who have not implemented enterprise SSO solutions
 so that I can get that light bulb above my head to work. :-)
 
 Thanks.
 
 At 11:00 AM 11/05/2005 -0500, Gizmo wrote:
 Maybe I don't fully understand the concept of Single Sign-On.
 
 As I understand it, SSO allows a user to login to an application portal, and
 all of the applications that user accesses via that portal know who the user
 is and what rights they have within their respective application realms.  As
 such, it is a front-end technology; the back-end applications don't know
 anything about this.  Since my application is a server in a client-server
 architecture, it is a back-end app.  In any case, SSO wouldn't help the
 situation where the data are encrypted by the password, if the data are
 accessed by more than one user.  The idea behind this implementation is to
 ensure that even if a bad guy gains access to the server and the data files
 of the DB, he still can't get at the actual data without the key.
 
 Or am I missing something?
 
 Later,
 Chris




Re: [SC-L] Re: Application Insecurity --- Who is at Fault?

2005-04-14 Thread Michael Silk
I don't think that analogy quite fits :) If the 'grunts' aren't doing
their job, then yes - let's blame them. Or at least help them find
ways to do it better.

-- Michael

[Ed. Let's consider this the end of the thread, please.  Unless someone
wants to say something that is directly relevant to software security,
I'm going to let it drop.  KRvW]

On 4/13/05, Dave Paris [EMAIL PROTECTED] wrote:
 So you blame the grunts in the trenches if you lose the war?  I mean,
 that thinking worked out so well with Vietnam and all...  ;-)

 regards,
 -dsp

  I couldn't agree more! This is my whole point. Security isn't 'one
  thing', but it seems the original article [that started this
  discussion] implied that so that the blame could be spread out.
 
  If you actually look at the actual problems you can easily blame the
  programmers :)




Re: [SC-L] Re: Application Insecurity --- Who is at Fault?

2005-04-13 Thread Michael Silk
On 4/13/05, der Mouse [EMAIL PROTECTED] wrote:
  I would question you if you suggested to me that you always assume
  to _NOT_ include 'security' and only _DO_ include security if
  someone asks.
  Security is not a single thing that is included or omitted.
  Again, in my experience that is not true.  Programs that are labelled
  'Secure' vs something that isn't.

 *Labelling as* secure _is_ (or at least can be) something that is
 boolean, included or not.  The actual security behind it, if any, is
 what I was talking about.

  In this case, there is a single thing - Security - that has been
  included in one and not the other [in theory].

 Rather, I would say, there is a cluster of things that have been boxed
 up and labeled security, and included or not.  What that box includes
 may not be the same between the two cases, even, never mind whether
 there are any security aspects that aren't in the box, or non-security
 aspects that are.

  Also, anyone requesting software from a development company may say:
  Oh, is it 'Secure'?  Again, the implication is that it is a single
  thing included or omitted.

 Yes, that is the implication.  It is wrong.

I couldn't agree more! This is my whole point. Security isn't 'one
thing', but it seems the original article [that started this
discussion] implied that so that the blame could be spread out.

If you actually look at the actual problems you can easily blame the
programmers :)

-- Michael




Re: [SC-L] Re: Application Insecurity --- Who is at Fault?

2005-04-11 Thread Michael Silk
Dave,

On Apr 11, 2005 9:58 PM, Dave Paris [EMAIL PROTECTED] wrote:
 The programmer is neither the application architect nor the system
 engineer.

In some cases he is. Either way, it doesn't matter. I'm not asking the
programmer to re-design the application, I'm asking them to just
program the design 'correctly' rather than 'with bugs' (or - security
problems). Sometimes they leave 'bugs' because they don't know any
better, so sure, train them. [oops, I'm moving off the point again].
All I mean is that they don't need to be the architect or engineer to
have their decisions impact the security of the work.


 If
 security is designed into the system and the programmer fails to code to
 the specification, then the programmer is at fault.

Security can be designed into the system in many ways: maybe the manager
was vague in describing it, etc, etc. I would question you if you
suggested to me that you always assume to _NOT_ include 'security' and
only _DO_ include security if someone asks. For me, it's the other way
round - when receiving a design or whatever.


 While there are cases that the programmer is indeed at fault (as can
 builders be), it is _far_ more often the case that the security flaw (or
 lack of security) was designed into the system by the architect and/or
 engineer.

So your opinion is that most security flaws are from bad design?
That's not my experience at all...

What are you classifying under that?


 It's also much more likely that the foreman (aka
 programming manager) told the builder (programmer) to take shortcuts to
 meet time and budget -

Maybe, but the programmer should not allow 'security' to be one of
these short-cuts. It's just as crucial to the finished application as
implementing that method to calculate the Net Proceeds or something.
The manager wouldn't allow you to not do that; why allow them to
remove so-called 'Security' (in reality, just common sense like
validating inputs, etc.)?
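
(to make that "common sense validation" concrete, a minimal sketch; the
class, method, and limits are invented for illustration:)

public class AmountParser {
    // Parses a user-supplied amount in cents, rejecting anything that
    // isn't a plain non-negative decimal number within business limits.
    public static long parseCents(String input) {
        if (input == null || !input.matches("\\d{1,10}")) {
            throw new IllegalArgumentException("invalid amount");
        }
        long cents = Long.parseLong(input);
        if (cents > 100000000L) { // arbitrary upper bound for the sketch
            throw new IllegalArgumentException("amount too large");
        }
        return cents;
    }
}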

-- Michael




Re: [SC-L] Application Insecurity --- Who is at Fault?

2005-04-07 Thread Michael Silk
On Apr 7, 2005 12:43 PM, Blue Boar [EMAIL PROTECTED] wrote:
 Michael Silk wrote:
  See, you are considering 'security' as something extra again. This is
  not right.
 
 It is extra.  It's extra time and effort.  And extra testing.  And extra
 backtracking and schedule slipping when you realize you blew something.
 All before it hits beta.

All of this is part of _programming_ though.  To me it should be on
the same level as, say, using an 'Array' at an appropriate point in a
program. You won't say to management: "Oh, I didn't use an array there
because you didn't ask me to." It's ridiculous to even consider. And
so it should be with so-called 'Security' that is added to
applications.

Consider the bridge example brought up earlier. If your bridge builder
finished the job but said: "ohh, the bridge isn't secure though. If
someone tries to push it at a certain angle, it will fall," you would
be panic-stricken. Fear would overcome you, and you might either run
for the hills, or attempt to strangle the builder... either way, you
have a right to be incredibly upset by the lack of 'security' in your
bridge. You WON'T (as is the case now) sit back and go: "Oh well, fair
enough. I didn't ask you to implement that, I just said 'build a
bridge'. Next time i'll ask. Or make sure the public specifically
requests resistance to that angle of pushing."

Hopefully my point is obvious...

-- Michael

 Any solution that ends up with us having secure software will
 necessarily need to address this step as well as all others.  The
 right answer just might end up being "suck it up, and take the
 resource hit."  It might be "switch to the language that lends itself to
 you coding securely at 75% the productivity rate of sloppy coding."  I
 don't know enough about the languages involved to participate in that
 debate.
 
 Strangely enough, for the last year and a half or so, I've been sitting
 here being QA at a security product company.  Doing software right takes
 extra resources.  I are one.
 
Ryan





Re: [SC-L] Application Insecurity --- Who is at Fault?

2005-04-06 Thread Michael Silk
Quoting from the article:
''You can't really blame the developers,''

I couldn't disagree more with that ...

It's completely the developers fault (and managers). 'Security' isn't
something that should be thought of as an 'extra' or an 'added bonus'
in an application. Typically it's just about programming _correctly_!

The article says it's a 'communal' problem (i.e: consumers should
_ask_ for secure software!). This isn't exactly true, and not really
fair. Insecure software or secure software can exist without
consumers. They don't matter. It's all about the programmers. The
problem is they are allowed to get away with their crappy programming
habits - and that is the fault of management, not consumers, for
allowing 'security' to be thought of as something separate from
'programming'.

Consumers can't be punished and blamed, they are just trying to get
something done - word processing, emailing, whatever. They don't need
to - nor should, really - care about lower-level security in the
applications they buy. The programmers should just get it right, and
managers need to get a clue about what is acceptable 'programming' and
what isn't.

Just my opinion, anyway.

-- Michael


On Apr 6, 2005 5:15 AM, Kenneth R. van Wyk [EMAIL PROTECTED] wrote:
 Greetings++,
 
 Another interesting article this morning, this time from eSecurityPlanet.
 (Full disclosure: I'm one of their columnists.)  The article, by Melissa
 Bleasdale and available at
 http://www.esecurityplanet.com/trends/article.php/3495431, is on the general
 state of application security in today's market.  Not a whole lot of new
 material there for SC-L readers, but it's still nice to see the software
 security message getting out to more and more people.
 
 Cheers,
 
 Ken van Wyk
 --
 KRvW Associates, LLC
 http://www.KRvW.com





Re: [SC-L] Mobile phone OS security changing?

2005-04-06 Thread Michael Silk
On Apr 7, 2005 3:12 AM, Kenneth R. van Wyk [EMAIL PROTECTED] wrote:
 On Wednesday 06 April 2005 09:26, Michael Silk wrote:
  The last thing I want is my mobile phone updating itself. I imagine
  that sort of operation would take up battery power, and possibly cause
  other interruptions ... (can you be on a call and have it update
  itself?)
 
 I vividly remember a lot of similar arguments a few years ago when desktop PCs
 started doing automatic updates of OS and app software.  Now, though, my
 laptop gets its updates when it's connected and when I'm not busy doing other
 things.

Hmm, I wasn't around then, but I can see what you are saying... Still,
a phone seems so simple, and I can live entirely without net access on
it (I guess they said this about PCs too), so it just seems wrong, and
a little annoying, to bring those security problems to phones...

 
 My main point, though, is that the status quo is unacceptable in my opinion.
 If a nasty vulnerability is found in most of today's mobile phone software,
 the repair process -- take the phone to the provider/vendor and have them
 burn new firmware -- just won't cut it.  For that matter, a lot of PDAs are
 in the same boat.

True. But I wonder if an update strategy like that lets them be more
secure? I.e., perhaps a physical interface can allow more programming
options - options that aren't available over the HTTP interface (like
installing apps, for example).

This could increase their security.

Corporations giving phones out to employees, or developing software
for them, could buy these attachments and have policies at work.
Regular people would need to go back to the phone store, or to a
speciality 'Mobile Phone Software Installer' store, to get it done.

 
 Sure, we'd all prefer better software in those devices to begin with, but as
 long as there are bugs and flaws, the users of these devices need a better
 way of getting the problems fixed.

Fair enough..


  Personally, I would prefer a phone that doesn't connect to the
  internet at all rather than a so-called 'secure' phone.
 
 For the most part, those days are over.

I guess I'd better hold on to my 'non-internet' phone for as long as I
can, then, if I won't be able to replace it :)

-- Michael


 Cheers,
 
 Ken van Wyk
 --
 KRvW Associates, LLC
 http://www.KRvW.com




Re: [SC-L] Application Insecurity --- Who is at Fault?

2005-04-06 Thread Michael Silk
Jeff,

On Apr 7, 2005 11:00 AM, Jeff Williams [EMAIL PROTECTED] wrote:
  I would think this might work, but I - if I ran a software development
  company - would be very scared about signing that contract... Even if
  I did everything right, who's to say I might not get blamed? Anyway,
  insurance would end up being the solution.
 
 What you *should* be scared of is a contract that's silent about security.

If you're silent you can claim ignorance :D

But of course, I agree. Security should be mentioned under the part
about the application 'Working Right'.

What I meant I would be scared of, however, is a contract that didn't
fully specify what I would be taking responsibility for. E.g., I could
be blamed if some misconfiguration on the server allowed a user to run
my tool/component as admin and enter some information or do whatever.

The contract would have to be specific (technical?) so as to avoid
problems like this. But I presume you have had far more experience
with these issues than I have... can you share any experiences w.r.t.
problems like that?

Because I can imagine [if I weren't ethical] trying to blame a
security problem in My Big Financial Website on a 3rd-party tool if I
could.


 Courts will have to interpret (make stuff up) to figure out what the two
 parties intended.  I strongly suspect courts will read in terms like 'the
 software shall not have obvious security holes'.  They will probably rely on
 documents like the OWASP Top Ten to establish a baseline for trade practice.
 
 Contracts protect both sides.  Have the discussion.  Check out the OWASP
 Software Security Contract Annex for a
 template.(http://www.owasp.org/documentation/legal.html).

Yes, I've read that before, and even discussed it with you! :)

-- Michael

 
 --Jeff
 
 
  - Original Message -
  From: Michael Silk [EMAIL PROTECTED]
  To: Kenneth R. van Wyk [EMAIL PROTECTED]
  Cc: Secure Coding Mailing List SC-L@securecoding.org
  Sent: Wednesday, April 06, 2005 9:40 AM
  Subject: Re: [SC-L] Application Insecurity --- Who is at Fault?
 
   Quoting from the article:
   ''You can't really blame the developers,''
  
   I couldn't disagree more with that ...
  
   It's completely the developers' fault (and the managers'). 'Security'
   isn't something that should be thought of as an 'extra' or an 'added
   bonus' in an application. Typically it's just about programming
   _correctly_!
  
   The article says it's a 'communal' problem (i.e., consumers should
   _ask_ for secure software!). This isn't exactly true, and it's not
   really fair. Insecure software or secure software can exist without
   consumers. They don't matter. It's all about the programmers. The
   problem is that they are allowed to get away with their crappy
   programming habits - and that is the fault of management, not
   consumers, for allowing 'security' to be thought of as something
   separate from 'programming'.
  
   Consumers can't be punished and blamed; they are just trying to get
   something done - word processing, emailing, whatever. They don't need
   to - nor should they, really - care about lower-level security in the
   applications they buy. The programmers should just get it right, and
   managers need to get a clue about what is acceptable 'programming' and
   what isn't.
  
   Just my opinion, anyway.
  
   -- Michael
  
  
   On Apr 6, 2005 5:15 AM, Kenneth R. van Wyk [EMAIL PROTECTED] wrote:
   Greetings++,
  
   Another interesting article this morning, this time from
   eSecurityPlanet.
   (Full disclosure: I'm one of their columnists.)  The article, by
   Melissa
   Bleasdale and available at
   http://www.esecurityplanet.com/trends/article.php/3495431, is on the
   general
   state of application security in today's market.  Not a whole lot of
   new
   material there for SC-L readers, but it's still nice to see the
   software
   security message getting out to more and more people.
  
   Cheers,
  
   Ken van Wyk
   --
   KRvW Associates, LLC
   http://www.KRvW.com
  
  
  
 
 
 





Re: [SC-L] Application Insecurity --- Who is at Fault?

2005-04-06 Thread Michael Silk
On Apr 7, 2005 1:16 AM, Goertzel Karen [EMAIL PROTECTED] wrote:
 I think it's a matter of SHARED responsibility. Yes, the programmers and
 their managers are directly responsible. But it's consumers who create
 demand, and consumers who, out of ignorance, continue to fail to make
 the connection between bad software security and the viruses, privacy,
 and other issues about which they are becoming increasingly concerned.

Quite frankly I don't think consumers need to care at all about this.

Do you, when buying chips, ask how they were cooked? Do you go back
and inspect the kitchen? Do you ask for a report on their compliance
with local health laws? No. The most you might do is glance at a box
with some ticks on it.

Why should software be any different? Why place the burden on
consumers to evaluate the security of your products? Not only do they
not care, and not have the time - they wouldn't know where to start!


 The consumer can't be held responsible for his ignorance...

Exactly!


 Because practitioners of safe software have not done a very good
 job of getting the message out in terms that consumers, vs. other
 software practitioners and IT managers, can understand.
 
 I propose that the following is the kind of message that might make a
 consumer sit up and listen:
 
 We understand that you buy software to get your work or online
 recreation done as easily as possible. But being able to get that work
 done WITHOUT leaving yourself wide open to exploitation and compromise
 of YOUR computer and YOUR personal information is also important, isn't
 it?

Answer: Duh.

 
 A number of software products, including some of the most popular ones,
 are full of bugs and other vulnerabilities that DO leave those programs
 wide open to being exploited by hackers so they can get at YOUR personal
 information, and take over YOUR computing resources.

Answer: So? I need to use them.

 
 Why is such software allowed to be sold at all? Because no-one
 regulates the SECURITY of the software products that these companies
 put out, least of all the programmers who write that software. And,
 more importantly, because you the consumer haven't been told before
 that you can make a difference. You can vote with your feet.

Answer: But how will I pay my GST next month if I can't use my
accounting program? I don't want to waste time transferring all my
data to another product...


 Demand that the software you use not be full of holes and 'undocumented
 features' that can be exploited by hackers.

Answer: How? I buy my software at a department store.

 
 If we can start to raise consumer awareness 

It's easy to blame the consumer - it means we
programmers/management/whatever don't need to do anything until they
ask us. But they will _never_ be able to ask all the right questions.
_Never_. So to put that requirement on them is just our 'easy way out'
of the problem.

-- Michael

 
 --
 Karen Goertzel, CISSP
 Booz Allen Hamilton
 703-902-6981
 [EMAIL PROTECTED]
 
  -Original Message-
  From: [EMAIL PROTECTED]
  [mailto:[EMAIL PROTECTED] On Behalf Of Michael Silk
  Sent: Wednesday, April 06, 2005 9:40 AM
  To: Kenneth R. van Wyk
  Cc: Secure Coding Mailing List
  Subject: Re: [SC-L] Application Insecurity --- Who is at Fault?
 
  Quoting from the article:
  ''You can't really blame the developers,''
 
  I couldn't disagree more with that ...
 
  It's completely the developers' fault (and the managers'). 'Security'
  isn't something that should be thought of as an 'extra' or an 'added
  bonus' in an application. Typically it's just about programming
  _correctly_!
 
  The article says it's a 'communal' problem (i.e., consumers should
  _ask_ for secure software!). This isn't exactly true, and it's not
  really fair. Insecure software or secure software can exist without
  consumers. They don't matter. It's all about the programmers. The
  problem is that they are allowed to get away with their crappy
  programming habits - and that is the fault of management, not
  consumers, for allowing 'security' to be thought of as something
  separate from 'programming'.
 
  Consumers can't be punished and blamed; they are just trying to get
  something done - word processing, emailing, whatever. They don't need
  to - nor should they, really - care about lower-level security in the
  applications they buy. The programmers should just get it right, and
  managers need to get a clue about what is acceptable 'programming' and
  what isn't.
 
  Just my opinion, anyway.
 
  -- Michael
 
 
  On Apr 6, 2005 5:15 AM, Kenneth R. van Wyk [EMAIL PROTECTED] wrote:
   Greetings++,
  
   Another interesting article this morning, this time from
  eSecurityPlanet.
   (Full disclosure: I'm one of their columnists.)  The
  article, by Melissa
   Bleasdale and available at
   http://www.esecurityplanet.com/trends/article.php/3495431,
  is on the general
   state of application security in today's market.  Not a
  whole lot of new
   material there for SC-L readers

Re: [SC-L] Application Insecurity --- Who is at Fault?

2005-04-06 Thread Michael Silk
Inline

On Apr 7, 2005 1:06 AM, Dave Paris [EMAIL PROTECTED] wrote:
 And I couldn't disagree more with your perspective, except for your
 inclusion of managers in parentheses.
 
 Developers take direction and instruction from management, they are not
 autonomous entities.  If management doesn't make security a priority,

See, you are considering 'security' as something extra again. This is
not right.

My point is that management shouldn't be saying 'Oh, and don't forget
to add _Security_ to that!' The developers should be doing it by
default.


 then only so much secure/defensive code can be written before the
 developer is admonished for being slow/late/etc.

Then defend yourself ... ! Just as you would if the project was too
large for other reasons. Don't allow security to be 'cut off'. Don't
walk in and say, 'Oh, I was just adding security to it.' A manager
will immediately reply: 'Oh, we don't care about that.' Instead say:
'Still finishing it off...' (This _has_ worked for me in the past, by
the way...)

 
 While sloppy habits are one thing, it's entirely another to have
 management breathing down your neck, threatening to ship your job
 overseas, unless you get code out the door yesterday.

Agreed. (We can't blame consumers for this issue, however...)

 
 I'm talking
 about validation of user input, 

This is something that all programmers should be doing in _ANY_ type
of program. You need to handle input correctly for your app to
function correctly; otherwise it will fall over the first time a dopey
user types something unexpected.
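
A tiny sketch of what I mean (the field name and its format are made
up for illustration) - the same check that saves the dopey user also
shuts out the attacker:

    // Validate the raw 'account ID' field once, at the boundary,
    // before the rest of the program ever sees it.
    static int parseAccountId(String raw) {
        if (raw == null || !raw.matches("\\d{1,9}")) {
            throw new IllegalArgumentException(
                "account ID must be 1-9 digits");
        }
        // Safe to parse: nine digits cannot overflow an int.
        return Integer.parseInt(raw);
    }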


 ensuring a secure architecture to begin
 with, and the like.  

'Sensible' architecture too, though. I mean, that's the whole point of
a design - it makes sense. For example, an app may let a user update
accounts based on IDs, but never check that the user actually owns the
account they are updating. The developers assume it must be true,
because the app only _showed_ the user IDs they own.

You'd hope that your 'sensible' programmer would note that and confirm
that the user did, indeed, update the right account. Not only for
security purposes, but for consistency of the _system_. The app just
isn't doing what it was 'specified' to do if the user can update any
account. It's _wrong_ - from a specification point of view - not just
'insecure'.
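
In code, the missing check is about one line. A sketch only - the DAO
and domain classes here are invented for illustration, not from any
real product:

    // accountDao, Account, User, AccountUpdate: hypothetical types.
    // Never trust that the ID in the request is one we showed the
    // user; confirm ownership before applying the update.
    void updateAccount(User currentUser, long accountId,
                       AccountUpdate change) {
        Account account = accountDao.findById(accountId);
        if (account == null
                || account.getOwnerId() != currentUser.getId()) {
            throw new SecurityException("not your account: " + accountId);
        }
        account.apply(change);
        accountDao.save(account);
    }

Notice that it reads like a consistency check, not a 'security
feature' - which is exactly the point above.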

You would, I guess, classify this as something the managers/consumers
need to explicitly ask for. To me, it seems none of their business. As
a manager, you don't want to be micromanaging all these concepts (but
we are - CIOs...); they should be the sole responsibility of the
programmer to get right.


 The latter takes far more time to implement than is
 given in many environments.  The former requires that sufficient
 specifications be given upfront

Agreed.

-- Michael


 Michael Silk wrote:
  Quoting from the article:
  ''You can't really blame the developers,''
 
  I couldn't disagree more with that ...
 
  It's completely the developers' fault (and the managers'). 'Security'
  isn't something that should be thought of as an 'extra' or an 'added
  bonus' in an application. Typically it's just about programming
  _correctly_!
 
  The article says it's a 'communal' problem (i.e., consumers should
  _ask_ for secure software!). This isn't exactly true, and it's not
  really fair. Insecure software or secure software can exist without
  consumers. They don't matter. It's all about the programmers. The
  problem is that they are allowed to get away with their crappy
  programming habits - and that is the fault of management, not
  consumers, for allowing 'security' to be thought of as something
  separate from 'programming'.
 
  Consumers can't be punished and blamed; they are just trying to get
  something done - word processing, emailing, whatever. They don't need
  to - nor should they, really - care about lower-level security in the
  applications they buy. The programmers should just get it right, and
  managers need to get a clue about what is acceptable 'programming' and
  what isn't.
 
  Just my opinion, anyway.
 
  -- Michael
 [...]