Re: [SC-L] Re: Application Insecurity --- Who is at Fault?

2005-04-14 Thread Dave Paris
Michael Silk wrote:
> I don't think that analogy quite fits :) If the 'grunts' aren't doing
> their job, then yes - let's blame them. Or at least help them find
> ways to do it better.
If they're not doing their job, no need to blame them - they're
critically injured, captured, or dead. ...or in the case of programmers
- fired.  If you insist on blaming them, you're redirecting blame and
that's BS.
As for "finding ways to do it better" .. they're well trained - if
they're not well trained, they're (again) critically injured, captured,
or dead.  But as happened in the most recent "event in the big sandbox",
they're not well supplied in all cases.  Wow.  Sound familiar?  What?  A
programmer not given full specifications or the tools they need?  Yeah.
That never happens in the Corporate World.
The analogy works.
Some comparisons:
You call in for close air support .. and friendlies drop munitions on
your position (your manager just told the VP "yeah, we can ship two
weeks early, no problems").
You call in for intel on your position and you're told the path to your
next objective is clear - only to get ambushed as you're halfway there
(the marketing guys sold the customer a bill of goods that can't
possibly be delivered in the time allotted - and your manager agreed to
it without asking the programmers).
You're recon and you light up a target with a laser designator and then
call in the bombers - only to find they can't drop the laser-guided
munitions because "friendlies" just blew up the nearby fuel depot and
now they can't get a lock on the designator because of the smoke (sorry,
you can't get the tools you need to do your job so make do with what
you've got - never mind that the right tool is readily available - i.e.
GPS-guided munitions in this example - it's just not supplied for this
project).
.. ok, enough with the examples, I hope I've made my point.
Mr. Silk, it's become quite clear to me from your opinions that you
appear to live/work in a very different environment (frankly, it sounds
somewhat like Nirvana) than the bulk of the programmers I know.
Grunts and programmers take orders from their respective chain of
command.  Not doing so will get a grunt injured, captured, or killed and
a programmer fired.  Grunts and programmers each come with a skillset
and a brain trained and/or geared to accomplishing the task at hand.
Experience lets them accomplish their respective jobs more effectively
and efficiently by building on that training - but neither can disregard
the chain of command without repercussions (sanctions, court martial,
injury, or death in the case of a grunt - and demotion or firing in the
case of a programmer).  If the grunt or programmer simply isn't good at
their job, and the chain of command doesn't move them to a more
appropriate position, they're either dead or fired.
Respectfully,
-dsp


Re: [SC-L] Re: Application Insecurity --- Who is at Fault?

2005-04-13 Thread Dave Paris
So you blame the grunts in the trenches if you lose the war?  I mean,
that thinking worked out so well with Vietnam and all...  ;-)
regards,
-dsp
> I couldn't agree more! This is my whole point. Security isn't 'one
> thing', but it seems the original article [that started this
> discussion] implied that so that the blame could be spread out.
> If you look at the actual problems you can easily blame the
> programmers :)



Re: [SC-L] Re: Application Insecurity --- Who is at Fault?

2005-04-11 Thread Dave Paris
Joel Kamentz wrote:
> Re: bridges and stuff.
> I'm tempted to argue (though not with certainty) that the bridge
> analogy is flawed in another way -- that of the environment.  While
> many programming languages have similarities and many things apply to
> all programming, there are many things which do not translate (or at
> least not readily).  Isn't this like trying to engineer a bridge with
> a brand new substance, or when the gravitational constant changes?
> And even the physical disciplines collide with the unexpected --
> corrosion, resonance, metal fatigue, etc.  To their credit, they
> appear far better at dispersing and applying the knowledge from past
> failures than the software world.
Corrosion, resonance, and metal fatigue all have counterparts in the
software world: glibc flaws, kernel flaws, compiler flaws.  Each of
these is an outside influence on the application - just as environmental
stressors are on a physical structure.
Knowledge of engineering failures disperses faster because of the
lawsuits that follow when a bridge fails.  I'm still waiting for a
certain firm located in Redmond to be hauled into court - and until that
happens, nobody is going to make security an absolute top priority.
> Let's use an example someone else already brought up -- cross site
> scripting.  How many people feel that, before it was ever known or had
> ever occurred the first time, good programming practices should have
> prevented any such vulnerability from ever happening?  I actually
> think that would have been possible for the extremely skilled and
> extremely paranoid.  However, we're asking people to protect against
> the unknown.
Hardly unknowns.  Not every possibility has been enumerated, but then
again, not every physical phenomenon has been experienced w/r/t
construction either.
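For what it's worth, the core defense was knowable even before the
first exploit: treat user-supplied text as data, never as markup.  A
minimal sketch in C (the helper is hypothetical, purely illustrative):

    #include <stdio.h>

    /* Escape user-supplied text before echoing it into HTML.  This is
     * the "good programming practice" that blunts most XSS: untrusted
     * bytes are never emitted as markup. */
    static void html_escape(const char *in, FILE *out)
    {
        for (; *in; in++) {
            switch (*in) {
            case '<':  fputs("&lt;", out);   break;
            case '>':  fputs("&gt;", out);   break;
            case '&':  fputs("&amp;", out);  break;
            case '"':  fputs("&quot;", out); break;
            default:   fputc(*in, out);      break;
            }
        }
    }

    int main(void)
    {
        /* Escaped, the "attack" comes out as inert text. */
        html_escape("<script>alert('xss')</script>", stdout);
        putchar('\n');
        return 0;
    }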
> I don't have experience with the formal methods, but I can see that,
> supposing this were NASA, etc., formal approaches might lead to
> perfect protection.  However, all of that paranoia, formality or
> whatever takes a lot of time and effort, and therefore has a huge
> economic impact.  I guess my personal opinion is that unit testing,
> etc. are great shortcuts (compared to perfect) which help reduce
> flaws, but with lesser expense.
Unit testing is fine, but it tests "inside the box" and doesn't view
your system through the eyes of an attacker.
> All of this places me in the camp that thinks there isn't enough yet
> to standardize.  Perhaps a new programming environment (language, VM,
> automation of various sorts, direct neural interfaces) is required
> before the art of software is able to match the reliability and
> predictability of other fields?
You're tossing tools at the problem.  The problem is inherently human
and economically driven.  A hammer doesn't cause a building to be
constructed poorly.
> Is software more subject to unintended consequences than physical
> engineering?
not "more subject", just "subject differently".
Respectfully,
-dsp



Re: [SC-L] Re: Application Insecurity --- Who is at Fault?

2005-04-11 Thread Dave Paris
Michael Silk wrote:
> Ed,
> [...]
> Back to the bridge or house example, would you allow the builder to
> leave off 'security' of the structure? Allow them to introduce some
> design flaws to get it done earlier? Hopefully not ... so why is it
> allowed for programming? Why can people cut out 'security'? It's not
> extra! It's fundamental to 'programming' (imho anyway).
> -- Michael
This paragraph contains the core dichotomy of this discussion.
The builder and the programmer are synonymous.
The builder is neither the architect, nor the engineer for the 
structure.  If the architect and engineer included "security" for the 
structure and the builder failed to build to specification, then the 
builder is at fault.

The programmer is neither the application architect nor the system 
engineer.  If the architect and engineer fail to include (or include 
faulty) security "features" (as though it were an add-on, right) then 
the programmer is simply coding to the supplied specifications.  If 
security is designed into the system and the programmer fails to code to 
the specification, then the programmer is at fault.

While there are cases where the programmer is indeed at fault (as can 
builders be), it is _far_ more often the case that the security flaw (or 
lack of security) was designed into the system by the architect and/or 
engineer.  It's also much more likely that the "foreman" (aka 
programming manager) told the builder (programmer) to take shortcuts to 
meet time and budget - rather than the programmer taking it upon 
themselves to be sloppy and not follow the specifications.

In an earlier message, it was postulated that programmers are, by and 
large, a lazy, sloppy lot who will take shortcuts at every possible turn 
and therefore are the core problem vis-a-vis lousy software.  It's been 
my experience that while these people exist, they wash out fairly 
quickly and most programmers take pride in their work and are highly 
frustrated with management cutting their legs out from under them, 
nearly _forcing_ them to appear to fit into the described mold.  Ever 
read "Dilbert"?  Why do you think so many programmers can relate?

I think the easiest summary to my position would be "don't shoot the 
messenger" - and that's all the programmer is in the bulk of the cases.

Respectfully,
-dsp



Re: [SC-L] Application Insecurity --- Who is at Fault?

2005-04-07 Thread Dave Paris
Michael Silk wrote:
> [...]
> Consider the bridge example brought up earlier. If your bridge builder
> finished the job but said: "ohh, the bridge isn't secure though. If
> someone tries to push it at a certain angle, it will fall", you would
> be panic stricken. Fear would overcome you, and you might either run
> for the hills, or attempt to strangle the builder... either way, you
> have a right to be incredibly upset by the lack of 'security' in your
> bridge. You WON'T (as is the case now) sit back and go: "Oh well, fair
> enough. I didn't ask you to implement that, I just said 'build a
> bridge'. Next time I'll ask. Or make sure the public specifically
> requests resistance to that angle of pushing".
> Hopefully my point is obvious...
> [...]
Actually, it's obvious - but I still can't agree with it.
Using the bridge example, it's fairly trivial for the ironworker to 
realize there should be a gusset plate where one wasn't called for.  It's 
a small piece, fairly quickly and easily fabricated and attached - so 
the ironworker puts it in.  No, it wasn't part of the specification - 
but the ironworker has built enough bridges and can explain away the 
extra half-day and slight cost it took to add the appropriate gussets.

On the other hand, the ironworker has worked on enough bridges to 
realize that [the design as specified] will lead to a functional, but 
flawed bridge.  It won't fall down, but it won't be as robust as a 
bridge should be.  The bridge was specified to support 10 tons.  Five 
years from now, the road traffic will have 15 ton trucks and the bridge 
may fail.

What you're proposing is that the ironworker should reengineer the 
bridge in-situ (as if he even has the authority!), causing weeks of 
delay and cost overruns, and possibly leading to his employer never 
getting a bridge contract again.

Somehow, that just doesn't hold water.
Kind Regards,
-dsp



Re: [SC-L] Application Insecurity --- Who is at Fault?

2005-04-06 Thread Dave Paris
And I couldn't disagree more with your perspective, except for your 
inclusion of managers in parentheses.

Developers take direction and instruction from management; they are not 
autonomous entities.  If management doesn't make security a priority, 
then only so much secure/defensive code can be written before the 
developer is admonished for being slow/late/etc.

While sloppy habits are one thing, it's entirely another to have 
management breathing down your neck, threatening to ship your job 
overseas, unless you get code out the door yesterday.

It's an environment that fosters insecure habits and resultant products. 
I'm not talking about habits like using strncpy vs strcpy, I'm talking 
about validation of user input, ensuring a secure architecture to begin 
with, and the like.  The latter takes far more time to implement than is 
given in many environments.  The former requires sufficient 
specifications be given upfront - otherwise you have insufficient 
information to correctly use a function like strncpy.
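To make that last point concrete: a minimal sketch, where the 32-byte
limit stands in for a number only a real specification can supply
(names and sizes here are hypothetical):

    #include <stdio.h>
    #include <string.h>

    #define USERNAME_MAX 32   /* hypothetical limit - from the spec */

    int main(void)
    {
        const char *input = "a-very-long-untrusted-username-from-the-wire";
        char user[USERNAME_MAX];

        /* strcpy(user, input) would overflow.  strncpy is only correct
         * once the spec says how big the field is allowed to be. */
        strncpy(user, input, sizeof user - 1);
        user[sizeof user - 1] = '\0';   /* strncpy doesn't always terminate */

        printf("stored: %s\n", user);
        return 0;
    }

Without that limit in the specification, there is simply no correct
third argument to pass.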

Kind Regards,
-dsp
Michael Silk wrote:
> Quoting from the article:
> ''You can't really blame the developers,''
> I couldn't disagree more with that ...
> It's completely the developers' fault (and managers'). 'Security' isn't
> something that should be thought of as an 'extra' or an 'added bonus'
> in an application. Typically it's just about programming _correctly_!
> The article says it's a 'communal' problem (i.e: consumers should
> _ask_ for secure software!). This isn't exactly true, and not really
> fair. Insecure software or secure software can exist without
> consumers. They don't matter. It's all about the programmers. The
> problem is they are allowed to get away with their crappy programming
> habits - and that is the fault of management, not consumers, for
> allowing 'security' to be thought of as something separate from
> 'programming'.
> Consumers can't be punished and blamed, they are just trying to get
> something done - word processing, emailing, whatever. They don't need
> to - nor should, really - care about lower-level security in the
> applications they buy. The programmers should just get it right, and
> managers need to get a clue about what is acceptable 'programming' and
> what isn't.
> Just my opinion, anyway.
> -- Michael
> [...]



RE: [SC-L] Origins of Security Problems

2004-06-16 Thread Dave Paris
Following the logic in the original post...

God is love.
Love is blind.
Ray Charles was blind.
Ray Charles was god.


The origins of security problems lie simply with the designers of the
systems.  Humans, on the whole, are a fallible lot.  We're not perfect and
when we design systems, it's quite conceivable that in some cases we simply
cannot account for all possibilities - aka "the imperfection shows through".
The better of us bipeds can get close to creating an actually secure system,
but the bulk of us simply do the best we can and sometimes it works out,
other times it doesn't.

Kind Regards,
-dsp

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Behalf Of Mark Rockman
> Sent: Tuesday, June 15, 2004 1:56 PM
> To: [EMAIL PROTECTED]
> Subject: [SC-L] Origins of Security Problems
>
>
> Before widespread use of the Internet, computers were isolated from
> malicious attacks.  Many of them were not networked.  CPUs were slow.
> Memory was small.  It was common practice to "trust the user" to
> minimize the size of programs to speed up processing and to make
> programs fit in memory.  Non-typesafe languages permitted playing with
> the stack.  It occurred to me repeatedly during that period that it
> would have been extremely helpful if the compiler/runtime would have
> detected buffer overflows.  Implementers always shot back that their
> prime concern was minimizing path lengths (i.e. execution time) and
> that it was the programmer's responsibility to guarantee buffer
> overflows would not occur.  With blunt instruments such as strcpy()
> and strcat() available to almost guarantee occasional buffer
> overflows, and stacks arranged so that transfer of control to
> malicious code could conveniently occur, it evidently doesn't take a
> rocket scientist to figure out how to make a program misbehave by
> providing invalid input that passes whatever passes for input
> validation.  Once code became mobile and access to vulnerable buffers
> became possible over a wire, an epidemic of security breaches
> occurred.  Moreover, Internet protocols were designed individually to
> provide a specific service.  Little consideration went into how the
> protocols could be abused.  Computers are now widespread and many of
> them today reside on the Internet with vulnerable ports wide open.
> The average computer owner doesn't know what a port is or that it
> represents a potential avenue for abuse.  Software vendors remain
> unmotivated to instruct owners as to what vulnerabilities exist and
> how to minimize them because that would work against marketing and
> convenience.  A small network desires file and printer sharing among
> the member computers.  Does this mean everybody on the Internet should
> have access to those files and printers?  Of course not.  A standalone
> computer has the sharing port wide open to the Internet because
> someday it might become a member of a network.  Things have gotten
> better with additional features (e.g. Internet Connection Firewall),
> default configurations set to restrict rather than for convenience,
> and anti-virus software.  The origin of security problems lies in
> widespread Internet usage and habitual lack of effort to ensure that
> programs don't do things that owners don't want them to do.
>
>
>
>
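A minimal sketch of the blunt-instrument overflow described above
(function and input are hypothetical):

    #include <string.h>

    /* strcpy() has no bound: anything past 15 bytes runs off the end
     * of buf and onto the saved frame pointer / return address. */
    static void handle_request(const char *input)
    {
        char buf[16];
        strcpy(buf, input);
    }

    int main(void)
    {
        /* 64 bytes into a 16-byte buffer: undefined behavior; on a
         * traditional stack layout this smashes the frame - the exact
         * recipe for the "epidemic" described above. */
        handle_request("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"
                       "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA");
        return 0;
    }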




RE: [SC-L] Off-by-one errors: a brief explanation

2004-05-06 Thread Dave Paris
Highly Recommended Reading:
Hacking:  The Art of Exploitation
Author:   Jon Erickson
ISBN: 1593270070
Bookpool.com price:  $24.50

Kind Regards,
-dsp

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Behalf Of jnf
> Sent: Wednesday, May 05, 2004 6:27 PM
> To: Steven M. Christey
> Cc: [EMAIL PROTECTED]
> Subject: Re: [SC-L] Off-by-one errors: a brief explanation
> 
> 
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> I will add that a Phrack paper -- which I'm pretty sure introduced
> the concept to the public -- called 'overwriting the frame pointer',
> or something similar to that effect, explains it in all its gruesome
> detail.
[...]
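For anyone who hasn't seen one, the bug class itself fits in a dozen
lines; a minimal sketch in the spirit of that paper (hypothetical code,
not taken from it):

    #include <stdio.h>

    /* Classic off-by-one: the loop writes buf[0]..buf[64] - 65 bytes
     * into a 64-byte buffer.  On many stack layouts that one stray
     * byte lands on the low byte of the saved frame pointer, which is
     * exactly the trick the Phrack paper describes. */
    static void copy_fixed(const char *src)
    {
        char buf[64];
        int i;

        for (i = 0; i <= 64; i++)   /* bug: should be i < 64 */
            buf[i] = src[i];

        printf("%.64s\n", buf);
    }

    int main(void)
    {
        char src[65];
        int i;

        for (i = 0; i < 64; i++)
            src[i] = 'A';
        src[64] = 'B';   /* the 65th byte that goes one past the end */

        copy_fixed(src);
        return 0;
    }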




RE: [SC-L] White paper: "Many Eyes" - No Assurance Against Many Spies

2004-04-30 Thread Dave Paris
A couple key phrases come to mind when reading this:

1) conflict of interest (he's selling "a solution")
2) inappropriate comparison (embedded OS vs. general OS)

I have no problems with someone pointing out flaws in XYZ product when compared to ABC 
product, provided:

a) they're an independent, uninvolved 3rd party
and 
b) the two products are identical in feature, function, and purpose.

So there are "a couple trusted people" who do the core work.  I wonder what their 
price is to put a flaw in the product?  If they're smart enough to know the entire 
system, they're undoubtedly smart enough to hide a subtle flaw.  Money?  Compromising 
photos?  Threats against themselves or families?  What would it take?

Frankly, I found the entire article nothing but a not-so-thinly veiled advertisement.  
Would he be so bold in comparing against VxWorks or QNX?  Those are his direct 
competitors, not the general Linux kernel.  If he wants to go head to head against 
Linux, he needs to specifically cite and compare against the embedded Linux 
distributions, be they uClinux or others.

Kind Regards,
-dsp


> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Behalf Of Kenneth R. van Wyk
> Sent: Thursday, April 29, 2004 8:25 AM
> To: [EMAIL PROTECTED]
> Subject: [SC-L] White paper: "Many Eyes" - No Assurance Against Many
> Spies
> 
> 
> FYI, there's a white paper out by Dan O'Dowd of Green Hills Software
> (see http://www.ghs.com/linux/manyeyes.html) that argues "It is
> trivial to infiltrate the loose association of Linux organizations
> which have developers all over the world, especially when these
> organizations don't even try to prevent infiltration, they accept
> code from anyone."
> 
> Although I don't agree with the positions expressed in the paper, I
> still find it interesting to hear what folks have to say.  A story re
> the paper has been picked up by Computerworld and LinuxSecurity.com
> thus far.
> 
> Cheers,
> 
> Ken van Wyk
> http://www.KRvW.com
> 
> 






RE: [SC-L] Change of position

2004-04-02 Thread Dave Paris
> [Ed. Yes, quite a few people responded to Gary's note before
[...]
> ;-).  It strikes me
> that no community on earth gets more into April Fools day jokes than the
> techies AND that no community on earth falls for them more predictably.
> Why is that?]

I don't know and I'm too busy shutting down servers for Internet Spring
Cleaning Day to be bothered figuring it out... now if you'll excuse me..

-dsp ;-)

[Ed. *grin*  And now that we're safely into 2 April territory, can we declare
this thread dead, at least for another 364 days?  KRvW]


RE: [SC-L] virtual server - security

2004-03-31 Thread Dave Paris
a few notes..

> -Original Message-
> From: jnf [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, March 31, 2004 11:23 AM
> To: Dave Paris
> Cc: Serban Gh. Ghita; [EMAIL PROTECTED]
> Subject: RE: [SC-L] virtual server - security
[...]
> > What's the point of the exercise if you're passing plaintext passwords
> > across on port 21?  At the very least, mandate SCP/SFTP on port 22.
>
> yes because having a remote exploit every month or two for
> root^H^H^HSecure shell is much better than limiting it to sniffing on the
> lan, or even better than using one of the ssl type wrappers for telnet.

Sniffing on the LAN isn't my main concern, it's the concentration points
in between A and B.  Good idea on the SSL wrapper on Telnet, although the
original poster said they don't want to offer shell access.  I'm not quite
sure the security community's consensus would agree that FTP is better than
SCP/SFTP.  I certainly don't, but I've already made that point.  So that
leaves us with flaws in implementation *and* plaintext usernames/passwords.
That doesn't give me warm fuzzies.

> > use 'chroot' jails
>
> and look into kernel patches like grsec that take some of the damn
> stupidity out of the standard chroot system call.  You perhaps may
> want to look into where you might be able to use read-only filesystems
> in your setup; while breaking out of a (good) chroot jail on a
> read-only partition is not impossible, it could make life hell for
> quite a few.

Good call.  Perhaps better would be using SELinux as a base, although the
learning curve is one heckuvalot steeper.
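For reference, the chroot "stupidity" being patched around is mostly a
matter of ordering and privilege; a minimal sketch of the sane pattern
(path and IDs are hypothetical):

    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        const char *jail = "/home/sasha";   /* hypothetical per-user box */

        /* Order matters: chroot() alone is escapable if the process is
         * still root or still holds a directory outside the jail.
         * chroot, then chdir("/"), then drop root for good.  (The
         * process must start as root for chroot() to succeed at all.) */
        if (chroot(jail) != 0) { perror("chroot"); return 1; }
        if (chdir("/") != 0)   { perror("chdir");  return 1; }

        if (setgid(1001) != 0) { perror("setgid"); return 1; }  /* hypothetical gid */
        if (setuid(1001) != 0) { perror("setuid"); return 1; }  /* hypothetical uid */

        /* ...from here, exec the user's FTP/web handler inside the jail... */
        return 0;
    }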

> > "PHP" and "run safely" in the same sentence?  Have you perused Bugtraq
> > lately?
>
> have you ever noticed that a good 80-90% of those posts are cross site
> scripting holes or sql injections that are the result of shoddy
> programming (web developers bad programmers as a whole? nooo never.),
> and less often language specific.  As to answer the poster's question,
> I'm not sure if suexec works with php - I don't think it does - but
> you might want to look into that or see if you can find something
> similar.
>
>
> > That's primarily because PHP will let you shoot yourself in the head,
> > as opposed to most languages which will only let you shoot yourself
> > in the foot, or at least no higher than the knee.  (snide
> > commentary... unless it's a microsoft product, which seems to aim
> > squarely for "the jewels")
>
> yea, I'd describe a stack or heap based overflow as shooting yourself
> in the foot.

Assuming your foot is squarely between your thighs or in front of your
nose.. ;-)  My comments were based on the nature of the poster's message,
which seemed to allow scripted/interpreted languages rather than compiled
executables, given the lack of shell access.  (that's not to say that a user
can't upload a binary, but if a non-x86 arch is chosen as a base for the
deployment, things get tougher for a user to screw up by default... save for
misconfigurations of the host, of course)

> > Yes.  Near daily bugtraq reports about why PHP is a darned good idea
> > that made a left turn into a really bad neighborhood.  The manpage
> > for SCP/SFTP/SSH.  The manpage for 'chroot'.
>
> I will agree that php could be more secure, although I must admit
> it's come a hell of a long way since its first introduction.  There
> are plenty of articles on php security on google - I'm sure your local
> bookstore will have books that will at least cover the subject to some
> degree.  Just like any language, php will let you screw yourself -
> most of what you find on bugtraq, as I said, are not language
> problems, but programmer problems.  A quick google search will show
> nearly as many exploits (if not more) for [open]ssh as for wuftp, yet
> wu is considered horribly insecure and ssh secure, go figure.  I'd
> also look into chroot as suggested.  I am unsure of whether it is
> avail. to php programs, it might be - and you might consider figuring
> a way to wrap all php scripts executed in chroot, although if it is
> anything like perl, chroot'ing it will be a major pain in the ass.
> In short, screw bugtraq - go to google or your book store, or even
> php.net - they are all bound to have tons of information about what
> you are looking for.

It's not the poster who's writing the PHP, it's the users.  Unless the users
are sufficiently clued into the existing issues, the view doesn't change.
My comments regarding PHP are centered around most of the default
configuration issues that too many "web programmers" (for very loose values
of the word "programmer") won't ...

RE: [SC-L] virtual server - security

2004-03-31 Thread Dave Paris
comments interspersed below...

Kind Regards,
-dsp

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Behalf Of Serban Gh. Ghita
> Sent: Tuesday, March 30, 2004 4:05 AM
> To: [EMAIL PROTECTED]
> Subject: [SC-L] virtual server - security
>
>
> Hello
>
> I am banging my head on the table every day, because i cannot find an
> elegant and safe solution to secure a virtual shared environment (server).
> Take the following facts:
[...]
> -no one has access to shell, cronjobs or stuff like that, only 21 and 80

What's the point of the exercise if you're passing plaintext passwords
across on port 21?  At the very least, mandate SCP/SFTP on port 22.

> -you dont want anyone to get out of his 'box' (eg /home/sasha/)

use 'chroot' jails

> -you want to allow php, perl or other web languages to run safely

"PHP" and "run safely" in the same sentence?  Have you perused Bugtraq
lately?

> and at the
> same time with _almost_ all features.
> -in php (because this is one of the most used languages for the web -
> for mostly end users), i have options like safe_mode, but if i
> activate that, many functions and features will not work. i know
> (because i tested) that the best solution is open_basedir, but i
> cannot create a restriction like that for each user, or at least i
> don't know how to do that.

That's primarily because PHP will let you shoot yourself in the head, as
opposed to most languages which will only let you shoot yourself in the
foot, or at least no higher than the knee.  (snide commentary... unless
it's a microsoft product, which seems to aim squarely for "the jewels")

> My problem is that i tested some script-kiddie local exploits (php,
> perl) and the system is vulnerable: the user can get out of his box
> and see system files (/etc/passwd, other dirs).

::feigns shock::

> What are the options here. Any paper or book written about this?

Yes.  Near daily bugtraq reports about why PHP is a darned good idea that
made a left turn into a really bad neighborhood.  The manpage for
SCP/SFTP/SSH.  The manpage for 'chroot'.
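On the open_basedir question above: the underlying idea is just a
canonical-path prefix check.  A rough sketch of the concept in C
(helper and paths hypothetical - this illustrates the idea, not PHP's
implementation):

    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int path_allowed(const char *base, const char *requested)
    {
        char resolved[PATH_MAX];

        /* Canonicalize first so "..", symlinks, etc. can't sneak out. */
        if (realpath(requested, resolved) == NULL)
            return 0;   /* unresolvable (e.g. nonexistent) - refuse */
        return strncmp(resolved, base, strlen(base)) == 0;
    }

    int main(void)
    {
        /* base must itself be canonical and end in '/' so that
         * "/home/sasha2" doesn't pass the prefix test. */
        const char *base = "/home/sasha/";   /* hypothetical per-user box */

        printf("%d\n", path_allowed(base, "/home/sasha/www/index.php"));    /* 1 if it exists */
        printf("%d\n", path_allowed(base, "/home/sasha/../../etc/passwd")); /* 0 */
        return 0;
    }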





RE: [SC-L] Any software security news from the RSA conference?

2004-02-27 Thread Dave Paris
http://www.dean.usma.edu/socs/ir/ss478/General%20Gordon%20Bio.pdf

What John Gordon is doing giving a keynote at the RSA conference is utterly
and completely beyond my ability to comprehend.  If you read his bio at the
link above, you'll find he has absolutely zero background in software or
computer systems.  He's obviously a smart cookie (ex-physicist at Air Force
Weapons Lab, a stint at Sandia, etc) but he's not in any position to
authoritatively say jack squat about software vulnerabilities - unless
there's something I'm not reading about his background.

I love his perspective though .. Sure John, it's the DEVELOPERS' fault that
MANAGEMENT makes the promises and DEMANDS product be shipped two weeks
before it's even spec'd.  God, I sure do wish I had thought of just spending
more time debugging when the CEO was screaming at me.. "either you ship *IT*
or I ship *YOU*".  This also tells me he's completely unfamiliar with the
concept of offshore outsourcing.  psst.. hey, John .. A LOT OF THE CODE'S
NOT EVEN WRITTEN HERE, BUDDY! :-)

I'm glad I didn't go .. I would have felt cheated out of my admission fee by
hearing the blathering of someone like this.

Kind Regards (and in somewhat of a cranky mood),
-dsp

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Behalf Of Mark Curphey
> Sent: Thursday, February 26, 2004 7:33 PM
> To: [EMAIL PROTECTED]
> Subject: Re: [SC-L] Any software security news from the RSA conference?
>
>
> Looks like the link I was pointing to didn't make it
>
> Here it is again
>
> http://news.zdnet.co.uk/internet/security/0,39020375,39147413,00.htm
>
> And the text below
>
> Software makers could eliminate most current security issues if
> they only tried harder, according to a Homeland Security advisor
>
>
> An advisor to the US' Homeland Security Council has lashed out at
> software developers, arguing their failure to deliver secure code
> is responsible for most security threats.
>
> Retired lieutenant general John Gordon, presidential assistant
> and advisor to the Homeland Security Council, used his keynote
> address at the RSA Security conference in San Francisco on
> Wednesday to question how much effort developers are putting into
> ensuring their code is watertight. "This is a problem for every
> company that writes software. It cannot be beyond our ability to
> learn how to write and distribute software with much higher
> standards of care and much reduced rate of errors and much
> reduced set of vulnerabilities," he said.
>
> Gordon's keynote followed a day after that of Microsoft chairman
> Bill Gates.
>
> According to Gordon, if developers could reduce the error and
> vulnerability rate by a factor of 10, it would "probably
> eliminate something like 90 percent of the current security
> threats and vulnerabilities.
>
> "Once we start writing and deploying secure code, every other
> problem in cybersecurity is fundamentally more manageable as we
> close off possible points of attack," he said.
>
> Gordon also criticised wireless network manufacturers for making
> encryption too difficult to deploy, even for "technically
> competent" users. He made the comments after explaining that he
> had spent a long weekend trying to set up a Wi-Fi network at his house.
>
> "One manufacturer got to invest an entire man-day of tech support
> and about eight hours of telephone charges. At the end of the
> day, I still had not accomplished a successful installation,"
> said Gordon, who eventually managed to get the network running by
> "taking some steps that were not in the documentation".
>
> However, he said the documentation didn't make it clear how to
> secure his network: "The industry needs to make it easy for users
> like me -- who are reasonably technically competent -- to employ
> solid security features and not make it so tempting to simply
> ignore security."
>
>
>
>  Mark Curphey <[EMAIL PROTECTED]> wrote:
> > I thought this was interesting.  I missed it but I am sure the
> > message will please many on this list (myself included)
> >
> >  Bill Cheswick <[EMAIL PROTECTED]> wrote:
> > > Bill Gates gave a keynote on their current approach to security, and
> > > the contents of SP2, due out 1H 2004.  From what I heard, Bill
> > > "gets it."  He addressed about 4 of my top 6 complaints and
> remediations.
> > > Quite a change from the rhetoric of five years ago.
> > > But it is an Augean stable, and they have a long way to go.
> > >
> > > Of course, the devil is in the details, and we will have to see.
> > >
> > > On Wed, Feb 25, 2004 at 02:38:32PM -0500, Kenneth R. van Wyk wrote:
> > > > Greetings,
> > > >
> > > > It's been a rather quiet week so far here on SC-L.  I guess
> > > > that everyone is either at the RSA conference
> > > > (http://2004.rsaconference.com/) or otherwise too busy.  I've
> > > > been watching some of the reports that have been appearing in
> > > > the trade press regarding announcements and such at the RSA
> > > > conference ...

RE: [SC-L] Code signing and Java Web Start

2004-02-26 Thread Dave Paris
Some potentially useful analogies...

a) Would you trust a random person off the street to make your _cash_ bank
deposit for you?
b) Would you be willing to warranty your neighbor's car?
c) States make you prove (in a plethora of ways) you are who you say you are
and that you know how to drive before handing you a driver's licence.
d) Would you be willing to sign off on a Sarbanes-Oxley audit without
actually *doing* the audit?
e) Would you be willing to give an alibi, in court, if you were _not_
actually with the accused at the time in question?

It's about knowledge and trust.  If you aren't 100% sure of the code and you
haven't performed a full & rigorous audit of the code, then you don't
have full knowledge of what you're signing, nor do you have trust in what
you're signing.  Yet you're telling the users of that signed 3rd party code
that you *do* know and trust the code.

On the other hand, if by signing the code all you're intending to say is
that "yes, this code did come from So-and-so", then hey .. sign away if they
handed you the code directly.  If you just downloaded the code, you have no
way of telling if the code has been trojaned or if it's even the *actual*
code you're looking for!
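To put a finer point on "no way of telling": at minimum, compare the
download against a digest the author published out-of-band before you
even consider signing.  A minimal sketch using OpenSSL's SHA-1 routines
(file name and expected digest are hypothetical; link with -lcrypto):

    #include <openssl/sha.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned char buf[4096], md[SHA_DIGEST_LENGTH];
        char hex[2 * SHA_DIGEST_LENGTH + 1];
        /* hypothetical digest, as published by the code's author */
        const char *expected = "da39a3ee5e6b4b0d3255bfef95601890afd80709";
        SHA_CTX ctx;
        FILE *f = fopen("thirdparty.jar", "rb");   /* hypothetical file */
        size_t n;
        int i;

        if (!f) { perror("fopen"); return 1; }
        SHA1_Init(&ctx);
        while ((n = fread(buf, 1, sizeof buf, f)) > 0)
            SHA1_Update(&ctx, buf, n);
        fclose(f);
        SHA1_Final(md, &ctx);

        for (i = 0; i < SHA_DIGEST_LENGTH; i++)
            sprintf(hex + 2 * i, "%02x", md[i]);

        /* Only consider signing if the digest matches what the author
         * told you directly - anything else may be a trojaned copy. */
        puts(strcmp(hex, expected) == 0 ? "digest matches" : "DO NOT SIGN");
        return 0;
    }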

Kind Regards,
-dsp

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Behalf Of Mona Wong-Barnum
> Sent: Wednesday, February 25, 2004 6:26 PM
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED]
> Subject: [SC-L] Code signing and Java Web Start
>
>
> Hi:
>
>   I am asking for opinions on the issue of code signing and Java Web
> Start.
>
>   We are about to have a meeting on this issue and I need
> some ammunition
> on why we should NOT be signing other people's code which we use
> in our Java
> applications that we serve out of Java Web Start.  I know that
> signing coding
> from unknown sources is very bad...but I think I need some
> "proof" or info that
> will help the managers understand the implication of this in term
> of reliability
> and responsibility.  It is my responsibility to educate my
> managers so that they
> can make the best possible choice; the rest is then out of my hands.
>
>   All help will be greatly appreciated!
>
> thanks,
> Mona
>
> ==
>   Mona Wong-Barnum
>   National Center for Microscopy and Imaging Research
>   University of California, San Diego
>   http://ncmir.ucsd.edu/
>
>   "If you don't have time to do it right, will you have time
>   to do it over?"
>-- unknown
> ==
>
>
>
>