Re: [Owasp-dotnet] RE: [SC-L] 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-04-06 Thread Dinis Cruz




Eric Swanson wrote:

  What we need now is focus, energy and commitment to create a business
  environment where it is possible (and profitable) to create, deploy
  and maintain applications executed in secure sandboxes.

  Traditionally, the quickest answer to a need like this is terrorism of
  some kind to get people to wake up to imminent threats. But, since
  we're in the business of only helping and not hurting...

true, but the issue here is that the solution to this problem is not
simple and will take a huge amount of effort and focus from all parties
involved. So the later we start the process, the more painful it will be.

We have been lucky so far that the number of attackers with intent,
technical capability, business-process understanding and opportunity
has been very small. It is also still hard today to make huge amounts
of money from digital assets (for example a data center) without using
extortion or blackmail (I call this the 'monetization of digital
assets')

So what you need to do is ask the question: "Will the current rate of
security enhancements that we are making to our systems be higher than
the rate of growth in the attackers': numbers (as in quantity), skills,
ability to monetize digital assets and opportunity?"

If those two lines (the 'security enhancements' and the 'attacker
profile') don't cross (the situation we live in today), we are OK. But
if the lines do cross over, then we will have a major crisis on our
hands.

  
  
  How do we motivate management decisions to support developing more
  secure solutions?

You make them aware of the 'reality' of the situation, and of the
consequences of the technological decisions they make every day (i.e.
make them aware that the CIA (Confidentiality, Integrity and
Availability) of their IT systems is completely dependent on the
honesty, integrity and non-malicious intent of thousands and thousands
of individuals, organizations and governments).

  
  It's the same question as motivating better problem definitions, code
  requirements gathering, documentation, refactoring, performance
  optimizations, etc. Time and budget. The answer is to have an
  affordable, flexible development process and tools that support these
  motivations.

For me (a key part of) the answer is to have an '...affordable,
flexible development process and tools that support...' the
creation of applications which can be executed in secure partial trust
environments :)

  
  In .NET, code reflection and in-line XML comments coupled with
  formatting tools like NDoc made professional code documentation an
  instant option available to every .NET developer, even those on a
  shoe-string budget.

Yes, but unfortunately it also made developing partially trusted code
very expensive

  
  
  
  The answer from OWASP might be to re-evaluate development processes
  and develop both sandboxes for clients as well as security patterns,
  components, wizards, and utilities for developers.

We could do that, but we would need many more resources than the ones
we currently have (and until Microsoft joins the party, it will be a
pointless exercise)

  
  We could re-write development processes like the hot topics Agile
  Development and Extreme Programming to include the SSDL, the Secure
  Software Development Lifecycle. Perhaps we should be making a better
  business case for the SSDL, like the 2nd Edition of Code Complete's
  "Utterly Compelling and Foolproof Argument for Doing Prerequisites
  Before Construction" (Print ISBN: 0-7356-1967-0).

Agree. I am a big fan of SSDL and believe that it is an integral part
of the environment required to create secure applications

  
  Our guides and vulnerability detection utilities just scratch the
  surface.

Yes, and they (especially the tools) also show how little interest
there is in this topic

  
  The utilities in particular do not directly address our concerns for
  motivating the community, except by opening the eyes of the developers
  who actually use them and giving them something fun to play with.

Even then, most developers and managers don't have the security
experience to understand the implications of the security issues
highlighted by these tools (and when they do, they find that there is
no market for more secure apps/hosting environments)

  
  
  Given the many options that lie ahead of the group, my opinion would
  be to work on better incorporating the SSDL into popular development
  processes and making a clear-cut business case (with statistics) for
  its inclusion. To motivate participation, we continue to develop the
  utilities, patterns, components, and wizards for developers (both
  before and after the development release cycle). Perhaps we take the
  online guides, checklists, and utilities and begin to formulate what
  the SSDL looks like through OWASP's eyes.

That's the plan :)

Very soon we (Owasp) should be making an announcement which will talk
about this

Dinis Cruz
Owasp .Net Project

Re: [OWASP-LEADERS] Re: [Owasp-dotnet] RE: [SC-L] 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-03-29 Thread Stephen de Vries


Hi Dinis,

On 29 Mar 2006, at 05:52, Dinis Cruz wrote:



  Thanks for confirming this (I wonder how many other Java developers
  are aware of this (especially the ones not focused on security)).



Most I've worked with aren't really aware of the security manager,
never mind bytecode verification. It is an issue, but the security risk
in the real world may be a bit overstated. If I were a maliciously
minded attacker who wanted users to execute my evil Java program, I
wouldn't need to mess about with the lack of verification; I could just
write evil code in perfectly verifiable format and rely on users to
execute it. Can anyone come up with attack vectors that exploit the
lack of verification on downloaded code that couldn't be exploited by
other (easier) means?




  Stephen, do you have any idea what percentage of 'real world' Java
  applications are executed:

  a) with verification
  b) in a secure sandbox



Very few. As Jeff mentioned, some Java application servers ship with a
security policy enabled, but the policy doesn't restrict anything (e.g.
JBoss); others show you how to run with a security policy, but don't
apply it by default (e.g. Tomcat). In some cases, with the more complex
app servers, a security policy would be of little security benefit,
because the server needs so much access in order to function properly
that the policy could be considered completely open.
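As a concrete sketch (the paths, class names and permissions below are hypothetical, not taken from any shipping app server), running a Java application under the security manager with a minimal policy looks like this:

```
// my.policy -- hypothetical minimal policy: grant the application's own
// code only the file and socket access it actually needs.
grant codeBase "file:/opt/myapp/lib/-" {
    permission java.io.FilePermission "/opt/myapp/data/-", "read,write";
    permission java.net.SocketPermission "localhost:1024-", "listen,accept";
};
```

started with:

```
java -Djava.security.manager -Djava.security.policy=my.policy \
     -cp /opt/myapp/lib/myapp.jar com.example.Main
```

Anything not granted (reading other files, spawning processes, etc.) then fails with a java.security.AccessControlException. By contrast, a policy containing only 'grant { permission java.security.AllPermission; };' is what turns an enabled security manager into the "completely open" case described above.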


In some ways I think we're applying double standards here. Just because
a virtual machine offers the facility for defining a security policy
and verification doesn't mean that it _has_ to use it. There are native
executable programs that I trust, so why should a program that runs in
a VM be subject to more stringent security controls just because such
controls are available? IMO whether code needs to be sandboxed and
controlled by a policy should be decided on a case-by-case basis rather
than by a blanket rule.


  Note that for example I have seen several Java-based financial
  applications which are executed on the client and either require
  local installation (via setup.exe / App.msi) or require that the user
  grants the Java application more permissions than the ones allocated
  to a normal sandboxed browser-based Java app.


This is quite common for an app, and granting more permissions is fine
as long as those permissions are tightly controlled by the Java
security policy.






  Humm, this is indeed interesting. Ironically, the 1.1 and 2.0
  versions of the CLR will throw an exception in this case (even in
  Full Trust). Since verification is not performed on that .Net
  Assembly, the CLR might pick up this information when it is resolving
  the method's relative address into the real physical addresses (i.e.
  during JIT).


  Using the same code with an Applet loaded from the filesystem throws
  an IllegalAccessError exception, as it should.



  What do you mean by 'Applet loaded from the filesystem'? Where? In a
  browser?



If you load an applet in a browser using a URL such as
file:///data/stuff/launch.html then no verification is performed.

But if you access the applet over http/s then it will be verified.

cheers,

--
Stephen de Vries
Corsaire Ltd
E-mail: [EMAIL PROTECTED]
Tel: +44 1483 226014
Fax: +44 1483 226068
Web: http://www.corsaire.com





___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [Owasp-dotnet] RE: [SC-L] 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-03-29 Thread Gunnar Peterson
  This comes back to that great concept called 'Faith-based' Security
  (see Gunnar Peterson's post
  http://1raindrop.typepad.com/1_raindrop/2005/11/net_and_java_fa.html ),
  which is when people are told so many times that something is secure
  that they believe that it MUST be secure. Some examples:

This is also neatly summarized by Brian Snow thusly: "We will be in a
truly dangerous stance: we will think we are secure (and act
accordingly) when in fact we are not secure."

-gp

1. Notes and links on the "We Need Assurance!" paper:
http://1raindrop.typepad.com/1_raindrop/2005/12/the_road_to_ass.html


Re: [Owasp-dotnet] RE: [SC-L] 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-03-28 Thread Dinis Cruz




Jeff, as you can see from Stephen de Vries's response on this thread,
you are wrong in your assumption that most Java code (since 1.2) must
go through the Verifier (this is what I was sure was happening, since I
remembered reading that most Java code executed in real-world
applications is not verified)

I think your answer shows clearly that the Java camp should also be
participating in these discussions.

In fact, I would also like to ask:
"Where are the Java guys/girls?"

I have been talking for two years now about the dangers of .Net Full
Trust code, and have not seen much discussion of the dangers of
'Security Manager disabled Java code' (since the problems are exactly
the same). Malicious Java code, executed with the Security Manager
disabled on a user's desktop or on a server, is as dangerous as Full
Trust .Net code.

This comes back to that great concept called 'Faith-based' Security
(see Gunnar Peterson's post
http://1raindrop.typepad.com/1_raindrop/2005/11/net_and_java_fa.html ),
which is when people are told so many times that something is secure
that they believe that it MUST be secure. Some examples:

 - "Java is more secure than .Net" (a meaningless discussion unless we
also talk about the Sandboxes the code is running under)

 - "IIS 6.0 is more secure than IIS 5.0" (today, is a fully patched
IIS 5 (with the URLScan ISAPI filter) more 'secure' than an IIS 6.0?
Most people will automatically say yes, but if you do a risk analysis
of both, you will see that the risk is just about the same: both ARE
able to sustain malicious 'Internet based' anonymous attacks (since
there are no reported unpatched vulnerabilities and zero-day exploits),
and both are NOT ABLE to sustain malicious Full Trust Asp.Net code
executed from within one of their worker processes)

 - "Open Source apps are more secure than Closed Source apps" (again,
not an automatic truism)

 - "Linux and Mac are more secure than Windows" (that depends on how
they are configured, deployed, maintained and, more importantly, how
they are used)

 - "If only we could get the developers to write 'secure code' there
would be no more vulnerabilities" (this is the best one: a good example
of 'Faith Based Security' combined with 'Blame the guy in the trenches
who doesn't complain (i.e. the developer)' (note that I don't think
that developers have SOLE (or even MAIN) responsibility in the process
that leads to the creation of insecure applications))

 - "I.E. is more insecure than Firefox" (apart from the unmanaged code
discussion we had earlier, I will just say this: Firefox plug-ins. The
best way to own millions of computers is to write a popular Firefox
plug-in (which, to my understanding, runs directly in Firefox's process
space and is not contained in any type of sandbox))

I hope the Java camp will also join this discussion on how to create
'real world' applications which can be executed in safe Sandboxes.

Ultimately my main frustration is that both .Net and Java have built
into their frameworks technological solutions which COULD deliver such
environments (CAS and the Security Manager). The problem is that they
were designed to handle a very specific type of code (the so-called
'Mobile code') for a specific set of applications (browser based
components and mobile devices), not the complicated, massively
interconnected, feature-rich apps that we have today.

What we need now is focus, energy and commitment to create a business
environment where it is possible (and profitable) to create, deploy and
maintain applications executed in secure sandboxes.

Dinis Cruz
Owasp .Net Project
www.owasp.net

Jeff Williams wrote:

    I am not a Java expert, but I think that the Java Verifier is NOT
    used on apps that are executed with the Security Manager disabled
    (which I believe is the default setting) or are loaded from a local
    disk (see "... applets loaded via the file system are not passed
    through the byte code verifier" in http://java.sun.com/sfaq/)

  I believe that as of Java 1.2, all Java code except the core
  libraries must go through the verifier, unless it is specifically
  disabled (java -noverify). Note that Mustang will have a new, faster
  (better?) verifier and that Sun has made the new design and
  implementation available to the community with a challenge to find
  security flaws in this important piece of their security
  architecture: https://jdk.dev.java.net/CTV/challenge.html. Kudos to
  Sun for engaging with the community this way.

--Jeff




Re: [OWASP-LEADERS] Re: [Owasp-dotnet] RE: [SC-L] 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-03-28 Thread Dinis Cruz
Hello Eric (comments inline)

Eric Swanson wrote:
 Because I believe that Microsoft will never be as cooperative with .NET and
 the developer community as Sun is with Java, is there an opportunity for
 another company to step up to the plate on Microsoft's behalf? 
There is definitely an opportunity here. At the moment I see two big
players that could move into that space: Novell and IBM.

Both have the resources to do it, and the motivation. The main questions
are:

- Do they want to buy that 'war' with Microsoft?
- Do they 'believe' that this is a worthwhile project and one that will
help their bottom line?
- Can they do it in an open and transparent way that attracts a
strong community to it? (note that this community will be critical to
the project, since I believe that no company in the world has the
resources to do it by itself)

This could also be done by a very dynamic and well funded Open Source
project (maybe by several governments or by companies/corporations which
decide that they need to be more proactive in the protection of their
critical resources and assets)
 The .NET Framework is completely public, and, although Mono continues
 to have its workload increased by each Framework release, I think there
 may be an opportunity for a company or organization to step in and take
 the reins where Microsoft left off. How possible is it to plug in to
 the CLR and make extensions to the core?
It is very doable. Note that there are already 4 different flavors of
the CLR (Microsoft's .Net Framework, Rotor, Mono and DotGnu)

See also the Postbuild commercial application
(http://www.xenocode.com/Products/Postbuild/), which claims (I have not
used it) to create native x86 executables that allow .NET applications
to run anywhere, with or without the Framework.

This is something that I have always wanted to do, since it should
(depending on how it is done) allow a dramatic reduction in the code
(and DLLs) that needs to be loaded in memory (the ultimate objective
would be to create mini-VMs that are completely isolated from the host
OS (or only have very specific interfaces / contact points)).

Also, while I was doing my 'Rooting the CLR' research I found that,
since Microsoft does provide the symbols for the core .Net Assemblies,
there is a lot that can be done at that level. That said, this work
would be greatly simplified if Microsoft released the source code of
the entire .Net Framework :)

 Perhaps a better project for OWASP.NET than security vulnerability detection
 utilities is a security plug-in to the CLR engine for byte code signature
 registration and verification?  
Agree, the problem we have is resources (and lack of funding)

Btw, at Owasp .Net we now have much more than just 'Security
Vulnerability Detection Utilities' :)

Apart from those utilities (namely ANSA and ANBS) we now also have:

* Owasp Site Generator : Dynamic website creator to test Web
Application Scanners and Web Application Firewalls (and a great tool for
developers to learn about security vulnerabilities)
* Owasp PenTest Reporter : Tool that aids in the process of
documenting,  reporting and tracking security vulnerabilities discovered
during Penetration Testing engagements
* DefApp (Proof of Concept): Web Application Firewall
 
Another project that I would love to do is a plug-in manager for
Firefox which would execute all Firefox plug-ins in a managed and
verifiable .Net sandbox (maybe built around Mono (which is where this
idea was suggested to me)).
 Would this task even be feasible?  (Managed
 code only?)  Is it even worth the effort, considering the possibility of
 further development from Microsoft?
   
I think that it would be worth the effort, the problem is 'who will fund
this'.

I don't think that this is a project that can be done on the back of
the odd spare time that its main developers would be able to allocate
to it.
 *Personally, I have never attempted to work below the top layers of .NET.
   
It's not that hard :)
 But, it seems to me that plugging into the core would be a better option
 than some kind of wrapper sandbox, especially with regard to change control
 (top layers are likely to change more often and more drastically than lower
 layers).
   
Absolutely, and remember that ideally this tool would also remove 95% of
that 'top layer' since it is not required.

I am also not convinced of the robustness of the current implementation
of CAS in .Net 1.1 and 2.0. There are too many security demands in too
many places.
 Should it be a task of the OWASP.Java team to work with Sun Mustang?
   
Maybe, but first you need to create that Owasp.Java team :)

There are a lot of Java guys at Owasp, but they all are working on
separate projects
 Could there ever be a silver bullet sandbox for all executables, regardless
 of language? 
No I don't think so.

You will need to look at each different type of executables (mobile
code, web apps, desktop apps, windows services, 'real-time apps', etc..)
and 

Re: [Owasp-dotnet] RE: [SC-L] 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-03-27 Thread Dinis Cruz




Hi Kevin

  Indeed this is somewhat surprising, that there is no byte-code
  verification in place, especially for strong typing, since when you
  think about it, this is not too different from the "unmanaged" code
  case.

Well, there is some byte code verification. For example, if you
manipulate MSIL so that you create calls to private members (something
that you can't compile with VS.NET), you will get a runtime error
saying that you tried to access a private member. So in this case there
is some verification.
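For comparison, Java enforces the same kind of member-access rule at run time, independently of bytecode verification. A minimal sketch using reflection (the class and field names here are made up for illustration, not taken from the thread):

```java
class Holder {
    private int secret = 42;   // private member another class will try to read
}

public class AccessCheckDemo {
    // Returns "denied" plus the value: the first read is blocked by the
    // VM's runtime access check, the second succeeds only after an
    // explicit opt-out via setAccessible(true).
    static String probe() {
        try {
            java.lang.reflect.Field f = Holder.class.getDeclaredField("secret");
            String first;
            try {
                first = String.valueOf(f.get(new Holder())); // access check fires
            } catch (IllegalAccessException e) {
                first = "denied";                            // the VM refuses the read
            }
            f.setAccessible(true);                           // opt out of the check
            return first + "," + f.get(new Holder());
        } catch (ReflectiveOperationException e) {
            throw new AssertionError(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(probe());   // prints "denied,42"
    }
}
```

The point of the analogy: like the CLR case above, this is a runtime check rather than something the static verifier catches.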

What I found surprising was how little verification is done by the CLR
when verification is disabled; see for example these issues:

 - Possible Type Confusion issue in .Net 1.1 (only works in Full Trust)
 - Another Full Trust CLR Verification issue: Exploiting Passing
Reference Types by Reference
 - Another Full Trust CLR Verification issue: Changing Private Field
using Proxy Struct
 - Another Full Trust CLR Verification issue: changing the Method
Parameters order
 - C# readonly modifier is not enforced by the CLR (when in Full Trust)
 - Also related: JIT prevents short overflow (and PEVerify doesn't
catch it) and ANSI/UNICODE bug in System.Net.HttpListenerRequest

Basically, Microsoft decided against performing verification on Full
Trust code (which, remember, is 99% of the .Net code out there). Their
argument (I think) is: "if it is Full Trust then it can jump to
unmanaged code anyway, so all bets are off" (I am sure I have seen this
documented somewhere in a Microsoft book, KB article or blog, but can't
seem to find it (for the Microsofties that are reading this (if any),
can you post some links please? thanks))

Apart from a basic problem, which is that "You cannot trust Full Trust
code EVEN if it doesn't make ANY direct unmanaged call or use
reflection", there is a much bigger one.

When (not if) applications start to be developed so that they run in
secure Partially Trusted environments, I think developers will find
that their code suffers an immediate performance hit due to the fact
that verification is now being done on it (again, for the Microsofties
that are reading this (if any), can you post some data on the
performance impact of the current CLR verification process? thanks)

  Apparently the whole "managed" versus "unmanaged" code distinction
  only has to do with whether or not garbage collection is attempted.

Yes, although I still think that we should fight for the words "Managed
Code" to include verification.


  However, the real question is "is this true for ALL managed code or
  only managed code in the .NET Framework"?

I am not a Java expert, but I think that the Java Verifier is NOT used
on apps that are executed with the Security Manager disabled (which I
believe is the default setting) or that are loaded from a local disk
(see "... applets loaded via the file system are not passed through the
byte code verifier" in http://java.sun.com/sfaq/)

  Of course, if software quality improvement does not take place in
  these companies, their signing would be somewhat vacuous. But it
  would be better than nothing, since at least all such code would not
  be fully trusted by default.

Yes, and note that I strongly defend that "All local code must NOT be
given Full Trust by default" (at the moment it is).

Dinis

PS: For the Microsofties that are reading this (if any), sorry for
the irony and I hope I am not offending anyone, but WHEN are you
going to join this conversation? (i.e. reply to these posts)

I can only see 4 reasons for your silence: a) you are not reading these
emails, b) you don't care about these issues, c) you don't want to talk
about them or d) you don't know what to say.

Can you please engage and publicly participate in this conversation ...

Thanks




Re: [Owasp-dotnet] RE: [SC-L] 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-03-27 Thread ljknews
At 2:34 AM +0100 3/27/06, Dinis Cruz wrote:

  PS: For the Microsofties that are reading this (if any), sorry for
  the irony and I hope I am not offending anyone, but WHEN are you
  going to join this conversation? (i.e. reply to these posts)

  I can only see 4 reasons for your silence: a) you are not reading
  these emails, b) you don't care about these issues, c) you don't want
  to talk about them or d) you don't know what to say.

e) Your employer has a company policy against such participation.
-- 
Larry Kilgallen


Re: [OWASP-LEADERS] Re: [Owasp-dotnet] RE: [SC-L] 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-03-27 Thread Stephen de Vries


On 27 Mar 2006, at 11:02, Jeff Williams wrote:



    I am not a Java expert, but I think that the Java Verifier is NOT
    used on apps that are executed with the Security Manager disabled
    (which I believe is the default setting) or are loaded from a local
    disk (see "... applets loaded via the file system are not passed
    through the byte code verifier" in http://java.sun.com/sfaq/)

  I believe that as of Java 1.2, all Java code except the core
  libraries must go through the verifier, unless it is specifically
  disabled (java -noverify).


I had the same intuition about the verifier, but have just tested this,
and it is not the case. It seems that -noverify is the default setting!
If you want to verify classes loaded from the local filesystem, then
you need to explicitly add -verify to the cmd line. I tested this by
compiling 2 classes where one accesses a public member of the other,
then recompiling the other with the member access changed to private.
Tested on:

Jdk 1.4.2 Mac OS X
Jdk 1.5.0 Mac OS X
Jdk 1.5.0 Win XP

all behave the same.

[~/data/dev/applettest/src] java -cp . FullApp
Noone can access me!!
[~/data/dev/applettest/src] java -cp . -verify FullApp
Exception in thread "main" java.lang.IllegalAccessError: tried to
access field MyData.secret from class FullApp
        at FullApp.main(FullApp.java:23)
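The source for the two classes isn't shown in the post; it was presumably along these lines (reconstructed from the transcript — only the FullApp and MyData names and the output string are known):

```java
// MyData.java -- step 1: compile with 'secret' accessible, as below.
// Step 2 of the experiment: change the field to 'private', recompile
// ONLY this class, leaving the now-illegal direct access in FullApp.class.
class MyData {
    static String secret = "Noone can access me!!";
}

// FullApp.java -- compiled once, against the step-1 version of MyData
public class FullApp {
    public static void main(String[] args) {
        // After step 2: 'java FullApp' still prints the secret (no
        // verification by default), while 'java -verify FullApp' throws
        // java.lang.IllegalAccessError at this access instead.
        System.out.println(MyData.secret);
    }
}
```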


Using the same code with an Applet loaded from the filesystem throws  
an IllegalAccessError exception as it should.



--
Stephen de Vries
Corsaire Ltd
E-mail: [EMAIL PROTECTED]
Tel: +44 1483 226014
Fax: +44 1483 226068
Web: http://www.corsaire.com




