Re: [SC-L] Insider threats and software

2007-08-16 Thread Michael S Hines
Doesn't an execution sandbox serve similar functions to a firewall, but at
the host level?  Can't even more control be added to a sandbox than can be
set on a firewall?

Second, doesn't a host-based firewall (even on desktops) provide the
security you are talking about (provided they work properly - which is
another topic)?

Am I missing the point?

Or are you thinking of something that checks message queues for proper
semantics and syntax (since some OS's are message based and work from
message queues)?

M.
-
Michael S Hines
[EMAIL PROTECTED]

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Pierre Parrend
Sent: Thursday, August 16, 2007 4:20 AM
To: silky
Cc: SC-L@securecoding.org
Subject: Re: [SC-L] Insider threats and software


Hello all,

 I do not agree with Mike's point of view. Of course the only way to cheat
a system is to understand how it works, and to abuse it. But the main
difference is that you can hardly talk about a protocol in the case of
applications: if you have a given protocol, you 'just' need to build a
firewall that checks that the protocol is behaving properly. In the case of
a software-level insider attack, you would therefore need a dedicated
firewall for every application you provide, which seems difficult both in
terms of development cost and performance cost.

The differences I see between the two cases are the following:

- attacks are now performed at the application level, and no simple
interface between the user and the application can be identified, since a
heavy client is involved (the interface is no longer a single protocol, but
a whole application).

- the matter becomes even worse if the systems are dynamic (such as with
MIDP, OSGi, or any plug-in mechanism), which does not yet occur with
online games, but soon could.

This last case makes a shift in the potential attacks quite likely: it is
sufficient to make malicious components freely available to perform attacks,
even without illegally modifying existing code. The problem of client-based
attacks is bound up with that of integrating off-the-shelf components: how
is it possible to control the execution process for every self-developed or
third-party, local or remote, piece of code? Both involve application-level
'protocols' to perform insider attacks, which are not so easy to tackle.

In other words, what Gary is describing is (in my view) not the ultimate
insider, but a step toward a worsening of the security state of systems.

regards,

Pierre P.


Quoting silky [EMAIL PROTECTED]:

 i really don't see how this is at all an 'insider' attack; given that
 it is the common attack vector for almost every single remote exploit
 strategy; look into the inner protocol of the specific app and form
 your own messages to exploit it.



 On 8/15/07, Gary McGraw [EMAIL PROTECTED] wrote:
  Hi sc-l,
 
  My darkreading column this month is devoted to insiders, but with a
twist.
 In this article, I argue that software components which run on
 untrusted clients (AJAX anyone?  WoW clients?) are an interesting new
 flavor of insider attack.
 
  Check it out:
  http://www.darkreading.com/document.asp?doc_id=131477&WT.svl=column1_1
 
  What do you think?  Is this a logical stretch or something obvious?
 
  gem
 
  company www.cigital.com
  podcast www.cigital.com/silverbullet blog
  www.cigital.com/justiceleague book www.swsec.com
 


 --



--
Pierre Parrend
Ph.D. Student, Teaching Assistant
INRIA-INSA Lyon, France
[EMAIL PROTECTED]
web : http://www.rzo.free.fr
___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___




[SC-L] The Specifications of the Thing

2007-06-12 Thread Michael S Hines
So - aren't a lot of the Internet security issues errors or omissions in the
IETF standards - leaving things unspecified which get implemented in
different ways - some of which can be exploited due to implementation flaws
(due to specification flaws)?

Mike H.
-
Michael S Hines
[EMAIL PROTECTED]


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Crispin Cowan
Sent: Monday, June 11, 2007 5:50 PM
To: Gary McGraw
Cc: SC-L@securecoding.org; Blue Boar
Subject: Re: [SC-L] Harvard vs. von Neumann

Gary McGraw wrote:
 Though I don't quite understand computer science theory in the same way
that Crispin does, I do think it is worth pointing out that there are two
major kinds of security defects in software: bugs at the implementation
level, and flaws at the design/spec level.  I think Crispin is driving at
that point.

Kind of. I'm saying that specification and implementation are relative
to each other: at one level, a spec can say "put an iterative loop here" and
the implementation is a bunch of x86 instructions. At another level, the
specification says "initialize this array" and the implementation says "for
(i=0; i<ARRAY_SIZE; i++) {". At yet another level the specification says
"get a contractor to write an air traffic control system" and the
implementation is a contract :)

So when you advocate automating the implementation and focusing on
specification, you are just moving the game up. You *do* change properties
when you move the game up, some for the better, some for the worse. Some
examples:

* If you move up to type safe languages, then the compiler can prove
  some nice safety properties about your program for you. It does
  not prove total correctness, does not prove halting, just some
  nice safety properties.
* If you move further up to purely declarative languages (PROLOG,
  strict functional languages) you get a bunch more analyzability.
  But they are still Turing-complete (thanks to Church-Rosser) so
  you still can't have total correctness.
* If you moved up to some specification form that was no longer
  Turing complete, e.g. something weaker like predicate logic, then
  you are asking the compiler to contrive algorithmic solutions to
  nominally NP-hard problems. Of course they mostly aren't NP-hard
  because humans can create algorithms to solve them, but now you
  want the computer to do it. Which raises the question of the
  correctness of a compiler so powerful it can solve general-purpose
  problems.


 If we assumed perfection at the implementation level (through better
languages, say), then we would end up solving roughly 50% of the software
security problem.

The 50% being rather squishy, but yes, this is true. It's only vaguely what
I was talking about, really, but it is true.

Crispin

--
Crispin Cowan, Ph.D.   http://crispincowan.com/~crispin/
Director of Software Engineering   http://novell.com
AppArmor Chat: irc.oftc.net/#apparmor



Re: [SC-L] Perspectives on Code Scanning

2007-06-07 Thread Michael S Hines
 and that's the problem. the accountability for insecure coding should
 reside with the developers. it's their fault [mostly].

The customers have most of the power, but the security community has
collectively failed to educate customers on how to ask for more secure
software.  There are pockets of success, but a whole lot more could be done.

--- the software should work and be secure (co-requirements).  The user
community has been educated to accept CTRL-ALT-DEL and wait as an acceptable
method of computing (and when things are really haywire - reinstall the OS
and lose all your work).   We've got a long way to go before they expect
software to also be secure, since they now accept that it doesn't work right
as SOP.

Mike Hines
[EMAIL PROTECTED]




[SC-L] FW: What's the next tech problem to be solved in softwaresecurity?

2007-06-06 Thread Michael S Hines
Product integration - why have an editor, separate source code analyzer,
separate 'lint' product, compiler, linker, object code analyzer, fuzz
testing tools, etc.?  Apart from marketing and revenue streams, the
separation doesn't help the developer any.

Who tests the products that test the code?

Mike H.
-
Michael S Hines
[EMAIL PROTECTED]





Re: [SC-L] What defines an InfoSec Professional?

2007-03-09 Thread Michael S Hines
I respectfully disagree.

The need for a firewall or IDS is due to the poor coding of the receptor of
network traffic - so you have to prevent bad things from reaching the
receptor (which is the TCP/IP stack and then the host operating system - and
then the middleware and then the application).

The reason you have to prevent bad things from reaching the receptor (OS) is
because of poor coding practices in the receptor (OS).

In terms of state diagrams - you have an undefined state in the code, which
produces unpredictable actions.  Technically speaking, it's undesirable but
predictable actions - that's how the software can be used to gain
unauthorized entry.  And once someone finds the hole, the very mechanism
used for protection (networks) is used to spread the story.  Kind of like
the farmer eating his seed corn.   :)
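The undefined-state problem above can be made concrete: if a handler's
transition table lists only the states the designer thought about, every
unlisted (state, event) pair is exactly the "undefined state in the code."
A minimal sketch in Python - the protocol, its states, and the names here
are hypothetical, invented purely for illustration:

```python
# Explicit transition table: anything NOT listed here is an undefined
# transition, and we fail closed on it instead of behaving unpredictably.
TRANSITIONS = {
    ("INIT", "login"): "AUTHENTICATED",
    ("AUTHENTICATED", "logout"): "CLOSED",
}

class ProtocolError(Exception):
    """Raised for any (state, event) pair the specification never defined."""

class Handler:
    def __init__(self):
        self.state = "INIT"

    def on_event(self, event):
        key = (self.state, event)
        if key not in TRANSITIONS:
            # The undefined state attackers probe for: reject it loudly
            # rather than letting execution continue in an unspecified way.
            raise ProtocolError("no transition defined for %r" % (key,))
        self.state = TRANSITIONS[key]
```

The design choice is the point: making every unspecified transition an
explicit error removes the "undesirable but predictable" behavior the post
describes.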

Regarding roles - there are many who do Infosec - in many different roles.
Law makers, lawyers, Boards of Directors, management, policy staff,
technical staff, network engineers, programmers, quality assurance staff,
users, ethical hackers, unethical hackers, et al.

I'm not sure we're moving the industry forward by trying to say "I am one
but you are not" - are we?

Mike Hines
-
Michael S Hines
[EMAIL PROTECTED]




Re: [SC-L] Dr. Dobb's | The Truth About Software Security | January 20, 2007

2007-01-30 Thread Michael S Hines
Examining only source code will miss any errors or problems that may be
introduced by the compiler or linker.  As Symantec says, working with the
object code is working at the level the attackers work.

Of course, one would have to verify that the object code made public is the
same object code that was analyzed/verified.  Otherwise you could get the
case where the code was advertised as 'checked' and it still has a
vulnerability.  Of course that could happen anyway, as the process probably
isn't perfect (though much better than nothing).

Not all compilers or linkers are perfect either.

There is only one way to get it right, yet so many ways to get it wrong.
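The verification step mentioned above - checking that the published binary
is the same one that was analyzed - is, in practice, a digest comparison.
A minimal sketch, assuming the analyst records a SHA-256 hash of the build
they reviewed (the file name in the comment is illustrative):

```python
import hashlib

def digest(path):
    """SHA-256 of a file, read in chunks so large binaries are handled."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# The released binary matches the analyzed build only if:
#   digest("release.bin") == recorded_digest_of_analyzed_build
```

This only establishes that the bytes are identical; it says nothing about
whether the analysis itself was complete, which is the post's other caveat.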
 
Mike Hines
 
-
Michael S Hines
[EMAIL PROTECTED] 
 

  _  

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Kenneth Van Wyk
Sent: Tuesday, January 30, 2007 5:25 AM
To: Secure Coding
Subject: [SC-L] Dr. Dobb's | The Truth About Software Security | January
20,2007


FYI, there's an interesting article on ddj.com about Symantec's new
Veracode binary code analysis service.

http://www.ddj.com/dept/security/196902326 

Among other things, the article says, Veracode clients send a compiled
version of the software they want analyzed over the Internet and within 72
hours receive a Web-based report explaining--and prioritizing--its security
flaws. 


Any SC-Lers have any first-hand experience with Veracode that they're
willing to share here? Opinions?


Cheers,


Ken

-
Kenneth R. van Wyk
SC-L Moderator
KRvW Associates, LLC
http://www.KRvW.com






Re: [SC-L] Retrying exceptions - was 'Coding with errors in mind'

2006-09-06 Thread Michael S Hines
Oh, you mean like the calling conventions on the IBM Mainframe where a dump
produces a trace back up the call chain to the calling program(s)?  Not to
mention the trace stack kept within the OS itself for problem solving
(including system calls or SVC's as we call them on the mainframe).   And
when all else fails, there is the stand alone dump program to dump the whole
system?

Mainframes have been around for years.  It's interesting to see open
systems take on mainframe characteristics after all this time

Mike Hines
Mainframe Systems Programmer

-Original Message-
From: Gunnar Peterson [mailto:[EMAIL PROTECTED]
Sent: Tuesday, September 05, 2006 5:29 PM
To: Hines, Michael S.
Cc: sc-l@securecoding.org
Subject: Re: [SC-L] Retrying exceptions - was 'Coding with errors in mind'

I can't say enough good things about this interview:

Conversation with Bruce Lindsay
Design For Failure
http://www.acmqueue.org/modules.php?name=Content&pa=showpage&pid=233

snip
BL: There are two classes of detection. One is that I looked at my own guts
and they didn't look right, and so I say this is an error situation. The
other is I called some other component that failed to perform as requested.
In either case, I'm faced with a detected error. The first thing to do is
fold your tent-that is, put the state back so that the state that you manage
is coherent. Then you report to the guy who called you, possibly making some
dumps along the way, or you can attempt alternate logic to circumvent the
exception.

In our database projects, what typically happens is it gets reported up, up,
up the chain until you get to some very high level that then says, "Oh, I
see this as one of those really bad ones. I'm going to initiate the massive
dumping now."
When you report an error, you should classify it. You should give it a name.
If you're a component that reports errors, there should be an exhaustive
list of the errors that you would report.

That's one of the real problems in today's programming language architecture
for exception handling. Each component should list the exceptions that were
raised:
typically if I call you and you say that you can raise A, B, and C, but you
can call Joe who can raise D, E, and F, and you ignore D, E, and F, then I'm
suddenly faced with D, E, and F at my level and there's nothing in your
interface that said D, E, and F errors were things you caused. That seems to
be ubiquitous in the programming and the language facilities. You are never
required to say these are all the errors that might escape from a call to
me.
And that's because you're allowed to ignore errors. I've sometimes advocated
that, no, you're not allowed to ignore any error. You can reclassify an
error and report it back up, but you've got to get it in the loop.
/snip
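Lindsay's rule in the quoted interview (a component should publish the
errors it can raise, and reclassify anything bubbling up from the
components it calls rather than let D, E, and F leak through its
interface) maps directly onto exception wrapping and chaining. A sketch in
Python; the component name, error type, and backend are invented for
illustration:

```python
class StorageError(Exception):
    """The only error type this component's interface is allowed to raise."""

class Store:
    def fetch_record(self, key):
        try:
            return self._backend_lookup(key)
        except (KeyError, OSError) as exc:
            # Reclassify: callers only ever see StorageError; the backend's
            # own error types never escape, but chaining keeps the original
            # "in the loop" for diagnosis.
            raise StorageError("fetch failed for %r" % (key,)) from exc

    def _backend_lookup(self, key):
        # Stand-in for a lower-level component that fails.
        raise KeyError(key)
```

The `from exc` chaining is what lets you "reclassify an error and report it
back up" without discarding the underlying cause.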

-gp


Quoting Michael S Hines [EMAIL PROTECTED]:

 That's a rather pragmatic view, isn't it?

 Perhaps if other language constructs are not used, they should be removed?

 OTOH - perhaps the fault is not the language but the coder of the
language?

   - lack of knowledge
   - pressure to complete lines of code
   - lack of [management] focus on security
   - or 100s of other reasons not to do the right thing...

 Sort of like life, isn't it?

 Mike Hines

 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED]
 On Behalf Of Jonathan Leffler
 Sent: Friday, September 01, 2006 3:44 PM
 To: sc-l@securecoding.org
 Subject: [SC-L] Retrying exceptions - was 'Coding with errors in mind'

 Pascal Meunier [EMAIL PROTECTED] wrote:
 Tim Hollebeek [EMAIL PROTECTED] wrote:
  (2) in many languages, you can't retry or resume the faulting code.
  Exceptions are really far less useful in this case.
 
 See above.  (Yes, Ruby supports retrying).

 Bjarne Stroustrup discusses retrying exceptions in The Design and
 Evolution of C++ (http://www.research.att.com/~bs/dne.html).  In
 particular, he
 described one system where the language supported exceptions, and
 after some number of years, a code review found that there was only
 one retryable exception left - and IIRC the code review decided they
 were better off without it.  How much are retryable exceptions really
 used, in Ruby or anywhere else that supports them?
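For comparison, the usual substitute in languages without a retry keyword
is an explicit loop around the try block, which may help explain why
retryable exceptions see so little use. A minimal sketch; the helper name,
defaults, and choice of recoverable errors are mine, not from the thread:

```python
def with_retry(action, attempts=3, recoverable=(OSError,)):
    """Re-run action() after a recoverable failure, up to `attempts` tries.

    A loop-based stand-in for Ruby's `retry` statement.
    """
    for attempt in range(attempts):
        try:
            return action()
        except recoverable:
            if attempt == attempts - 1:
                raise  # retry budget exhausted: let the caller see it
```

The loop version is short, and it makes the retry budget explicit instead
of risking the unbounded re-execution a bare `retry` can produce.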

 --
 Jonathan Leffler ([EMAIL PROTECTED]) STSM, Informix Database
 Engineering, IBM Information Management Division 4100 Bohannon Drive,
 Menlo Park, CA 94025-1013
 Tel: +1 650-926-6921 Tie-Line: 630-6921
   I don't suffer from insanity; I enjoy every minute of it!




[SC-L] Coding with errors in mind - a solution?

2006-08-30 Thread Michael S Hines



A simple structure that provides for errors would go a long way...

If - then - else - on error
Do - end - on error
Let x = y - on error
Let x = function() - on error
etc...

The problem is writing code without thinking of the possible errors that
might arise.  This forces you to think about the consequences of executing
a command...

Where 'error' is doing something intelligent when the original command
doesn't work...

Just a brainstorm... any merit to it?

Mike Hines
[EMAIL PROTECTED]
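In a language with exceptions, the per-statement "on error" form proposed
above can be approximated with a small helper; the proposal's point, that
the syntax forces you to name a fallback, becomes the explicit arguments
here. A sketch (the helper and its signature are hypothetical, not an
existing API):

```python
def attempt(thunk, default=None, on_error=None):
    """Run thunk(); on failure, optionally report it and return a fallback.

    A rough analogue of the proposed `Let x = y - on error` form.
    """
    try:
        return thunk()
    except Exception as exc:
        if on_error is not None:
            on_error(exc)  # "doing something intelligent" when it fails
        return default

# Usage: x = attempt(lambda: int(user_input), default=0)
```

Writing the call this way makes it impossible to execute the command
without at least deciding what the fallback value should be.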


From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Ed Reed (Aesec)
Sent: Wednesday, August 30, 2006 1:17 PM
To: sc-l@securecoding.org
Subject: [SC-L] Re: How can we stop the spreading insecure coding examples
at training classes, etc.?

Message: 1
Date: Tue, 29 Aug 2006 15:48:17 -0400
From: [EMAIL PROTECTED]
Subject: Re: [SC-L] How can we stop the spreading insecure coding
	examples at training classes, etc.?
To: "Wall, Kevin" [EMAIL PROTECTED]
Cc: SC-L@securecoding.org
Message-ID: [EMAIL PROTECTED]
Content-Type: text/plain; charset=ISO-8859-1

Quoting "Wall, Kevin" [EMAIL PROTECTED]:


  
  I think that this practice of leaving out the "security
details" to just make the demo code short and sweet has got
to stop. Or minimally, we have to make the code that people
copy-and-paste from have all the proper security checks even
if we don't cover them in training. If we're lucky, maybe
they won't delete them when the re-use the code.

I agree, and would like to extend it: security should be discussed *at the same
time* that a topic is.  Teaching security in a separate class, like I have been
doing, reaches only a fraction of the audience, and reinforces an attitude of
security as an afterthought, or security as an option.  Comments in the code
should explain (or refer to explanations of) why changing or deleting those
lines is a bad idea.  

However, I'm afraid that it would irritate students, and make security the
new "grammar and spelling" for which points are deducted from "perfectly
valid write-ups" (i.e., "it's my ideas that count, not how well I spell").
The same used to be said about unstructured programming examples (computed
gotos, spaghetti code, multiple entry and exit points from functions, etc.).
We got past it.  We need a similar revolution in thought with regard to
security, and someone to take the lead on providing clear, crisp examples
of coding style that is more secure by its nature.  I don't have one handy -
but that's my wish.

Ed


RE: [SC-L] Segments, eh Smithers?

2006-04-04 Thread Michael S Hines
Or consider the IBM mainframe and z/OS operating systems - protected memory
and paging together - also privileged programs vs. application programs,
and prefetched programs vs. loaded-on-demand programs.

Mike Hines
Mainframe Systems Programmer
---
Michael S Hines
[EMAIL PROTECTED] 



FW: [SC-L] 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-03-27 Thread Michael S Hines
Isn't it possible to break out of the sandbox even with managed code?  (That
is, can't managed code call out to unmanaged code, e.g. a Java call to C++?)
I was thinking this was documented for Java - perhaps for various flavors of
.NET too?
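The escape hatch is indeed documented in every major managed runtime: Java
has JNI, .NET has P/Invoke, and Python's ctypes (used here only because it
makes a self-contained one-file demo) shows the same pattern. Once
execution crosses into a native library, the runtime's verifier no longer
sees what the code does. A sketch, assuming a Unix-like system where
`find_library` can locate the C library:

```python
import ctypes
import ctypes.util

# Load the platform C library; from this point on, calls execute as
# unmanaged native code, outside anything the Python runtime can check.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

def native_strlen(data: bytes) -> int:
    # Nothing verifies what strlen does with the pointer it is handed;
    # a buggy or malicious native function could corrupt memory here.
    return libc.strlen(data)
```

strlen is harmless, but the mechanism is the point: any sandbox built
purely on managed-code verification is only as strong as its policy on
native call-outs.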

---
Michael S Hines
[EMAIL PROTECTED] 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Dinis Cruz
Sent: Saturday, March 25, 2006 6:39 AM
To: '[EMAIL PROTECTED]'; [EMAIL PROTECTED];
SC-L@securecoding.org; full-disclosure@lists.grok.org.uk
Cc: [EMAIL PROTECTED]
Subject: [SC-L] 4 Questions: Latest IE vulnerability, Firefox vs IE security, 
User vs
Admin risk profile, and browsers coded in 100% Managed Verifiable code

Another day, and another unmanaged-code remote command execution in IE.

What is relevant in the ISS alert (see end of this post) is that IE 7
beta 2 is also vulnerable, which leads me to this post's questions:

1) Will IE 7.0 be more secure than IE 6.0?  (i.e. two years after it is
released, will the number of exploits and attacks be smaller than today's,
and will it be a trustworthy browser?)

2) Given that Firefox is also built on unmanaged code, isn't Firefox as
insecure as IE, and as dangerous?

3) Since my assets as a user exist in user land, isn't the risk profile
of malicious unmanaged code (deployed via IE/Firefox) roughly the same
if I am running as a 'low privileged' user or as administrator? (at the
end of the day, in both cases the malicious code will still be able to:
access my files, access all websites that I have stored credentials in
my browser (cookies or username / passwords pairs), access my VPNs,
attack other computers on the local network, install key loggers,
establish two-way communication with an Internet-based botnet, etc. ...
(basically everything except rooting the box, disabling AVs and
installing persistent hooks (unless of course this malicious code
executes a successful escalation-of-privilege attack)))

4) Finally, isn't the solution for the creation of secure and
trustworthy Internet browsing environments the development of browsers
written in 100% managed and verifiable code, which execute in secure
and very restricted partially trusted environments (under .NET, Mono or
Java)?  This way, the risk of buffer overflows will be very limited, and
when logic or authorization vulnerabilities are discovered in this
'Partially Trusted IE', the 'secure partially trusted environment' will
limit what the malicious code (i.e. the exploit) can do.
This last question/idea is based on something that I have been defending
for quite a while now (a couple of years), which is: since it is impossible
to create bug/vulnerability-free code, our best option for creating more
secure and safer computing environments (compared to the ones we have
today) is to execute those applications in sandboxed environments.

Basically we need to be able to safely handle malicious code, executed
in our user's session, in a web server, in a database engine, etc... Our
current security model is based on the concept of preventing malicious
code from being executed (something which is becoming more and more
impossible to do) versus the model of 'malicious payload containment' 
(i.e. Sandboxing).

And in my view, creating sandboxes for unmanaged code is very hard or
even impossible (at least in the current Windows Architecture), so the
only solution that I am seeing at the moment is to create sandboxes for
managed and verifiable code.

Fortunately, both .Net and Java have architectures that allow the
creation of these 'secure' environments (CAS and Security Manager).

Unfortunately, today there is NO BUSINESS case to do this. The paying
customers are not demanding products that don't have the ability to
'own' their data center, software companies don't want to invest in the
development of such applications, nobody is liable for anything,
malicious attackers have not exploited this insecure software
development and deployment environment (they still have too much
money to harvest via spyware/spam), and the framework developers
(Microsoft, Sun, Novell, IBM, etc.) don't want to rock the boat and
explain to their clients that they should be demanding (and only
paying for) applications that can be safely executed in their corporate
environment (i.e. ones where malicious activities are easily detectable,
preventable and contained (something which I believe we only have a
chance of doing with managed and verifiable code)).

I find it ironic that Microsoft now looks at Oracle and says 'We
are so much better than them on security', when the reason why Oracle
has not cared (so far) about security is the same reason why Microsoft
doesn't make any serious effort to promote and develop partially trusted
.NET applications: there is no business case for either. Btw, if Microsoft
publicly admitted that the current application development practice of
ONLY creating Full Trust code IS A MASSIVE PROBLEM

RE: [SC-L] Intel turning to hardware for rootkit detection

2005-12-14 Thread Michael S Hines



Isn't SmashGuard the same technology (in software) added to the latest
Microsoft .NET compiler and run time?

While protecting against one method of hijacking a system (altering the
function return address), it really doesn't protect from inserting your own
code into a stream and then using an existing jump to jump to your code -
does it?  Nor does it protect from altering the system-managed data blocks?

That is to say, it only protects against one form of hijack attack.  Or am
I missing something?

Mike Hines

SmashGuard's most recent CACM publication (Nov 05) is at
https://engineering.purdue.edu/ResearchGroups/SmashGuard/cacm.pdf
if you are interested.

The SmashGuard group web site is at
https://engineering.purdue.edu/ResearchGroups/SmashGuard/BoF.html

I'm not affiliated with that group at Purdue - being on the Admin side.
---
Michael S Hines
[EMAIL PROTECTED]


  
  
From: mudge [mailto:[EMAIL PROTECTED]
Sent: Tuesday, December 13, 2005 6:01 PM
To: Hines, Michael S.
Cc: 'Secure Coding Mailing List'
Subject: Re: [SC-L] Intel turning to hardware for rootkit detection

There was a lady who went to Purdue, I believe her name was Carla Brodley.
She is a professor at Tufts currently. One of her projects, I'm not sure
whether it is ongoing or historic, was surrounding hardware-based stack
protection. There wasn't any protection against heap / pointer overflows,
and I don't know how it fares when stack trampoline activities occur (which
can be valid, but are rare outside of older Objective-C code).

www.smashguard.org and
https://engineering.purdue.edu/ResearchGroups/SmashGuard/smash.html
have more data.

I'm not sure if this is a similar solution to what Intel might be pursuing.
I believe the original "SmashGuard" work was based entirely on Alpha chips.

cheers,

.mudge
  
  
  
On Dec 13, 2005, at 15:19, Michael S Hines wrote:

Doesn't a hardware 'feature' such as this lock software into a two-state
model (user/priv)?

Who's to say that model is the best?  Will that be the model of the future?

Wouldn't a two-state software model that works be more effective?

It's easier to change (patch) software than to rewire hardware
(figuratively speaking).

Just wondering...

Mike Hines
---
Michael S Hines
[EMAIL PROTECTED]



RE: [SC-L] Intel turning to hardware for rootkit detection

2005-12-13 Thread Michael S Hines
Doesn't a hardware 'feature' such as this lock software into a two-state model
(user/priv)?

Who's to say that model is the best?  Will that be the model of the future? 

Wouldn't a two-state software model that works be more effective?  

It's easier to change (patch) software than to rewire hardware (figuratively 
speaking).

Just wondering...

Mike Hines
---
Michael S Hines
[EMAIL PROTECTED] 



RE: [SC-L] Application Insecurity --- Who is at Fault?

2005-04-06 Thread Michael S Hines
Wonder what happens if we apply that same logic to building design or
bridge design and construction?

Those who don't place blame at the source are just trying to shift blame.
Bad idea.

Mike Hines
---
Michael S Hines
[EMAIL PROTECTED] 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Michael Silk
Sent: Wednesday, April 06, 2005 8:40 AM
To: Kenneth R. van Wyk
Cc: Secure Coding Mailing List
Subject: Re: [SC-L] Application Insecurity --- Who is at Fault?

Quoting from the article:
''You can't really blame the developers,''

I couldn't disagree more with that ...

It's completely the developers' fault (and the managers'). 'Security' isn't
something that should be thought of as an 'extra' or an 'added bonus'
in an application. Typically it's just about programming _correctly_!

The article says it's a 'communal' problem (i.e: consumers should
_ask_ for secure software!). This isn't exactly true, and not really
fair. Insecure software or secure software can exist without
consumers. They don't matter. It's all about the programmers. The
problem is they are allowed to get away with their crappy programming
habits - and that is the fault of management, not consumers, for
allowing 'security' to be thought of as something seperate from
'programming'.

Consumers can't be punished and blamed; they are just trying to get
something done - word processing, emailing, whatever. They don't need
to - nor should they, really - care about lower-level security in the
applications they buy. The programmers should just get it right, and
managers need to get a clue about what is acceptable 'programming' and
what isn't.

Just my opinion, anyway.

-- Michael


On Apr 6, 2005 5:15 AM, Kenneth R. van Wyk [EMAIL PROTECTED] wrote:
 Greetings++,
 
 Another interesting article this morning, this time from eSecurityPlanet.
 (Full disclosure: I'm one of their columnists.)  The article, by Melissa
 Bleasdale and available at
 http://www.esecurityplanet.com/trends/article.php/3495431, is on the general
 state of application security in today's market.  Not a whole lot of new
 material there for SC-L readers, but it's still nice to see the software
 security message getting out to more and more people.
 
 Cheers,
 
 Ken van Wyk
 --
 KRvW Associates, LLC
 http://www.KRvW.com






RE: [SC-L] How do we improve s/w developer awareness?

2004-12-02 Thread Michael S Hines
I've been trying to get IT Auditors and the Audit community in general to apply
the same due diligence to operating systems (infrastructure or general controls)
that they apply to application systems testing.

I'm not aware of anyone in the IT Audit community doing OS audits - to verify
that the systems work as advertised and do not fail where they should not.
I became quite aware of this a few years ago when I was in a group doing
Penetration Testing of an OS and discovered many flaws.

Why don't auditors audit the OS?  I, frankly, don't know. 

But Auditors do have the ear of upper management, and they could be the ones
pointing out the weaknesses in the infrastructure that put the organization
at risk.

We wouldn't put in a new payroll system without verifying that it works
properly. Yet we're more than willing to unpackage and plug in a desktop
computer without the same due diligence. Why? It's beyond me.

Perhaps if more people were asking the right questions of the right people...?

Why we've come to accept the Ctrl+Alt+Del 'three-finger salute' as SOP is
beyond me.

Of course the issues above aren't limited to one particular OS.  There are 
plenty of
problems to go around.
(see the work done at Univ of Wisconsin - the Fuzz Testing project 
http://www.cs.wisc.edu/~bart/fuzz/fuzz.html )
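The fuzz idea referenced above is simple enough to sketch: feed a program random bytes on stdin and watch whether it dies on a signal. A minimal Python sketch (the target path is hypothetical; this is only the core loop, not the Wisconsin tool itself):

```python
import random
import subprocess

def fuzz_once(target, n_bytes=1024, seed=None):
    """Feed `n_bytes` of random input to `target`'s stdin.

    Returns True if the process crashed (was killed by a signal),
    False if it exited normally, however ungracefully.
    """
    rng = random.Random(seed)
    data = bytes(rng.randrange(256) for _ in range(n_bytes))
    proc = subprocess.run([target], input=data,
                          capture_output=True, timeout=5)
    # on POSIX, a negative return code means death by signal (e.g. SIGSEGV)
    return proc.returncode < 0

# e.g.: crashed = fuzz_once("/usr/bin/some-utility")   # hypothetical target
```

A real campaign would loop over seeds and log the inputs that crash the target for later replay.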

Mike Hines
---
Michael S Hines
[EMAIL PROTECTED] 




RE: [SC-L] ACM Queue article and security education

2004-07-01 Thread Michael S Hines
I can just see an OS go into a wait state now while the VM/.NET or whatever
does garbage collection; and the delays while the intermediate code is
turned into executable code by the loaders.   

Not!  

HLLs have given us portability (witness - *nix) but at some price in
performance. HW development has outpaced SW development - to the point
where we hardly notice the performance hit at all. After all, how fast can
one person type (grin)?

It's always a trade off...   HW/SW.  
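The GC-pause worry is easy to measure for yourself; a minimal Python sketch (one interpreter's collector, so only illustrative of the general trade-off, not of any particular VM):

```python
import gc
import time

# build enough small objects that a full collection has real work to do
junk = [[i] for i in range(500_000)]

gc.collect()                        # settle any pre-existing garbage
start = time.perf_counter()
gc.collect()                        # time one full collection pass
pause = time.perf_counter() - start
print(f"full GC pass: {pause * 1000:.2f} ms")
```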

Mike Hines 
---
Michael S Hines
[EMAIL PROTECTED] 
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Blue Boar
Sent: Thursday, July 01, 2004 11:11 AM
To: Peter Amey
Cc: [EMAIL PROTECTED]
Subject: Re: [SC-L] ACM Queue article and security education

Peter Amey wrote:
 There are languages which are more suitable for the construction of
 high-integrity systems and have been for years.  We could have
 adopted Modula-2 back in the 1980s, people could take the blinkers of
 prejudice off and look properly at Ada.  Yet we continue to use
 C-derived languages with known weaknesses.

So we trade the known problems for a set of unknown ones?  It might be 
appropriate to do so; C may be broken enough that it's better to go 
for an unknown with a design that allows for a possible correct 
implementation.  I keep thinking of Java, for example.  It's a good 
paper design for security purposes (I'll leave functionality alone for 
now.)  But there are still all the issues with the VM implementation and 
libraries to deal with.

Language X may very well be a much better starting point, I don't know. 
  I do believe that it will never be properly looked at until the whole 
world starts using it for everything, though.

BB





RE: [SC-L] ACM Queue article and security education

2004-06-30 Thread Michael S Hines
If the state of the art in automobile design had progressed as fast as the
state of the art of secure programming - we'd all still be driving Model
T's.  

Consider-
  - System Development Methods have not solved the (security) problem -
though we've certainly gone through lots of them.
  - Languages have not solved the (security) problem - though we've
certainly gone through (and continue to go through) lots of them.
  - Module/Program/System testing has not solved the (security) problem -
though there has been a plethora written about system testing (both white
box and black box).

And a question/comment/observation.
First the comment - As an IT Auditor we approach auditing in two stages -
first we look at general controls, and then application controls (won't go
into details here - there's information on this available elsewhere).  If
general controls are not in place, application controls are not relevant
(that is any application control can be circumvented due to weak general
controls). 
Then the question - Why do we not subject computer operating systems (which
are a general control) to the same level of testing that we subject
applications?   
And the observation - weaknesses in operating systems have been documented
(but not widely circulated) - yet we (as Sysadmins/users/auditors/security
experts - you pick) do not have a problem using faulty system software and
layering applications on top of it. Why is that?

And then a thought question - in message-passing operating systems (those
that respond to external stimuli, or internal message queues) - if one can
inject messages into the processing queue, can't one in essence 'capture the
flag'? Yet we see message-passing systems as middleware (and OS core
technology in some cases) to facilitate cross-platform interfaces. Aren't
we introducing inherent security flaws in the process?
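The injection risk is easy to see in miniature: a dispatcher that trusts whatever lands on its queue will run privileged handlers for any producer. A small Python sketch (handler names are made up for illustration):

```python
import queue

events = queue.Queue()

HANDLERS = {
    "log": lambda arg: f"logged {arg}",
    "shutdown": lambda arg: "system halted",   # privileged action
}

def dispatch():
    kind, arg = events.get()
    # no check of *who* enqueued the message -- any producer that can
    # reach the queue can trigger the privileged handler
    return HANDLERS[kind](arg)

events.put(("shutdown", None))    # an untrusted producer injects this
print(dispatch())                 # the privileged action runs anyway
```

Authenticating the message source (or restricting who can enqueue) is what closes the gap.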

Mike Hines
---
Michael S Hines
[EMAIL PROTECTED] 




RE: [SC-L] Interesting article on the adoption of Software Security

2004-06-11 Thread Michael S Hines
Likewise for the IBM mainframe operating systems MVS, OS/390, and z/OS - much
of which is written in (I believe) PL/M - a dialect much like PL/1.

Many of our Operating Systems seem to have evolved out of the old DEC RSTS
system.  For example, CP/M had a PIP command.  Later renamed to COPY in DOS.


UNIX had a hierarchical file structure.  DOS inherited this feature early
on.  

When you've been around for a while, you start to see the same features
converge. UNIX had quotas; we got quotas with Win XP Server (well, earlier,
when you include the third-party ISVs, as an add-on). IBM had Language
Environment (LE) before .NET came along.

It all sort of runs together over time - it seems.

Mike Hines
---
Michael S Hines
[EMAIL PROTECTED] 




[SC-L] IBM OS Source Code

2004-06-11 Thread Michael S Hines
I was a bit wrong earlier: IBM's systems programming language was PL/X (not
PL/M)...

Here's a link to an older reference manual...
http://www.bitsavers.org/pdf/ibm/360/pls/GC28-6794-0_PLSIIguideMay74.pdf

Mike H.

---
Michael S Hines
[EMAIL PROTECTED]