Freenet fork appears likely (was Re: Gmane -- Re: Why is Freenet so sick at the moment?)

2003-10-07 Thread Steve Schear

On Sat, Oct 04, 2003 at 11:31:36PM -0700, Ian Clarke spake thusly:
 I have never ever characterized Freenet as being anything other than in
 development.  If you don't like the fact that Freenet is taking so long
 to perfect, then either help, or use Earth Station 5 - I hear it's great.
You never said anything to this effect when people started putting things
in the network that could get them sent to prison, so it was rather
implicit.
And now, after finding that fred is unable to open /dev/random on my system
due to what appears to be a bug (opening it for write instead of read), I am
worried about the security of the encryption due to a lack of entropy.
I'm glad I don't use Freenet for anything illegal/unpopular, but I'm quite
worried for those who do.
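
For illustration (fred itself is Java, and this is not its code), the entropy
read should look something like the C sketch below; an O_WRONLY open is only
meaningful for feeding data back into the pool, and fails outright on systems
where the device isn't world-writable:

    /* Minimal sketch: read seed material from /dev/random (read-only).
     * An O_WRONLY open is only useful for adding entropy and fails where
     * the device isn't world-writable, leaving the PRNG unseeded. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned char seed[32];
        int fd = open("/dev/random", O_RDONLY);     /* read, not write */
        if (fd < 0) {
            perror("open /dev/random");
            return 1;
        }
        ssize_t got = read(fd, seed, sizeof seed);  /* may block for entropy */
        close(fd);
        if (got != (ssize_t)sizeof seed) {
            fprintf(stderr, "short read: %zd bytes of entropy\n", got);
            return 1;
        }
        /* ... mix seed[] into the application's PRNG ... */
        return 0;
    }
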
On IRC a new channel, #fredisdead, has been receiving quite a bit of
interest (along with discussions on #anonymous and #freenet).  It appears
that a small group of developers, fed up with the recent spate of Freenet
problems, has decided to take a step back to release 692 and has started a
revolt.

http://mids.student.utwente.nl/~mids/freenet/fid.html

steve 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


CCIA Microsoft report--the core issues

2003-10-07 Thread R. A. Hettinga
Wherein Carroll trashes Schneier a bit...

Cheers,
RAH
---

http://zdnet.com.com/2102-1107_2-5086379.html?tag=printthis


CCIA Microsoft report--the core issues
By John Carroll
Special to ZDNet
October 6, 2003, 5:13 AM PT
URL: http://zdnet.com.com/2100-1107-5086379.html

COMMENTARY--The Computer & Communications Industry Association (CCIA) has
been a long-time Microsoft opponent.  The lobbying group filed numerous
friend-of-the-court briefs during the antitrust trial in America, and is an
active participant in the antitrust investigation being conducted by the
European Commission.  It is composed of a number of Microsoft's fiercest
competitors, among them AOL, Sun Microsystems, Oracle, Intuit and Nokia.

Since the end of the American trial, the CCIA has pretty much
fallen off the radar screen. Recently, however, they've managed to generate
a bit of noise with "CyberInsecurity: The Cost of Monopoly," which is
presented as "a wake up call that government and industry need to hear"
regarding security issues in Microsoft's near-ubiquitous operating system.
The report has garnered an unusual amount of attention, possibly because
Bruce Schneier, author of Applied Cryptography and a generally recognized
expert in the realm of cryptography, was included as one of the report's
authors.

My respect for Mr. Schneier's work, however, doesn't extend to ignoring
flaws in reports to which he contributes.  This is part one in a three-part
series which rebuts the arguments made in the CyberInsecurity report.
Today's installment deals with the core issues, namely, the risks
associated with software monoculture and complex systems.  Part two is a
collection of general criticisms relating to the report's content, and
details its uncanny ability to put a negative spin on practically
everything Microsoft does.  Part three is my treatment of the proposed
remedies, and closes with some parting thoughts.

Do note that you can read the entire report yourself by going to
http://www.ccianet.org/papers/cyberinsecurity.pdf .

The risks of a software monoculture
Protection from cascade failure is instead the province of risk
diversification--that is, using more than one kind of computer or device,
more than one brand of operating system, which in turn assures that
attacks will be limited in their effectiveness. This fundamental principle
assures that, like farmers who grow more than one crop, those of us who
depend on computers will not see them all fail when the next blight hits.
(Page 11)

In other words, by having a diverse operating system environment, you
prevent a virus that targets one platform from bringing down the entire
infrastructure.  The targeted platform might be laid low, but other
platforms will live on to propagate the species...or just continue
computing.

It's true that a monoculture has certain costs from the standpoint of
shared risks, which lead to a larger pool within which a computer virus
might thrive.  On the other hand, there are also real costs to the lack of
a standardized computing architecture, which is the flip side of the
monoculture detailed in the report.

The benefits of standardization
As I discussed in my Tunney comments, software lacks the inherent
standards found in other industries.  Software APIs can take practically
any shape imaginable, which means that the initial state of a young
software market is extreme fragmentation.

This is a tremendous inhibitor to development, as a particular software
product can only reach a small, platform-specific market.  This attracts
less developer attention, leading to higher software costs and fewer users.
As a result, the market's natural tendency has been to standardize on one
provider. That one provider might start with only a slight lead over its
competitors, but that slight lead will cause more developers to target the
favored platform, leading to greater economies of scale and lower costs,
which attracts more customers and gives rise to the virtuous cycle which
gave companies like Microsoft, and IBM before it, a dominant share of the
marketplace.

With Windows, consumers have the most hardware and software choice, lower
costs due to economies of scale, and guaranteed compatibility with
practically any product on the market.  Companies have a large pool from
which to draw technical staff, all of whom benefit from the deeper
knowledge which comes with the ability to specialize in one platform (Adam
Smith would appreciate this).  Employers also benefit from the fact that
potential employees, if they have computer skills, will have those skills
on Windows.

It's not just the Windows market, however, that realizes cost savings in
this fashion.  Increasingly, the UNIX market is organizing around an open
source operating system named Linux.  It is my opinion that this
consolidation will continue, making Linux THE standard for the UNIX
development domain.  Few would say this consolidation is a negative thing,
nor suggest that government use 

Re: Other OpenSSL-based crypto modules FIPS 140 validated?

2003-10-07 Thread Peter Gutmann
Nathan P. Bardsley [EMAIL PROTECTED] writes:

Anecdotally, I've heard that there are many, but almost all of them were done
by vendors for embedding in their proprietary products.

Ditto.  The problem is that when vendors have spent $100K+ on the
certification, they're very reluctant to give anyone else (and specifically
their competitors) the benefits of their expenditure, so you end up getting
the same thing re-certified over and over for private use, rather than a
single generally-usable version being certified.

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: anonymity +- credentials

2003-10-07 Thread Anton Stiglic

- Original Message - 
From: Ian Grigg [EMAIL PROTECTED]

 [...]
 In terms of actual practical systems, ones
 that implement to Brands' level don't exist,
 as far as I know?  

There were, however, several projects that implemented
and tested the credentials system.  There was CAFE, an
ESPRIT project.

At Zero-Knowledge there was a working implementation written
in Java, with a client that ran on a BlackBerry.

There was also the implementation at ZKS of a library in C
that implemented Brands's stuff, in which I participated.
The library implemented issuing and showing of credentials,
with a limit on the number of possible showings (if you passed
the limit, your identity was revealed, thus allowing for off-line
verification of payments, for example; if you did not pass the
limit, no information about your identity was revealed).
The underlying math was modular: you could work in a
subgroup of Z*p for prime p, use elliptic curves, or
base it on the RSA problem.  We plugged in the OpenSSL
library to test all of these cases.
Basically we implemented the protocols described in 
[1], with some of the extensions mentioned in the conclusion.
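
For illustration, the pluggable math layer looked conceptually something like
the following C interface (names invented for this sketch; this is not the
actual ZKS API):

    /* Sketch of a pluggable group-operations layer: the issuing/showing
     * protocols are written against this interface, and a backend for a
     * subgroup of Z*p, for elliptic curves, or for an RSA-based setting
     * (e.g. built on OpenSSL's bignum/EC code) is plugged in underneath. */
    typedef struct group_element group_element;   /* opaque, backend-defined */
    typedef struct group_scalar  group_scalar;    /* exponent / private value */

    typedef struct group_ops {
        group_element *(*generator)(void);                     /* g */
        group_element *(*exp)(const group_element *g,
                              const group_scalar *x);          /* g^x */
        group_element *(*mul)(const group_element *a,
                              const group_element *b);         /* a*b */
        int            (*eq)(const group_element *a,
                             const group_element *b);
        group_scalar  *(*random_scalar)(void);
        void           (*free_element)(group_element *e);
    } group_ops;

    /* Protocol code takes a 'const group_ops *' and never touches the
     * underlying bignum or curve types directly. */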

The library was presented by Ulf Moller at some coding
conference whose name I don't recall...

It was to be used in Freedom, for payment of services,
but you know what happened to that project.

 Also, the use of Brands work
 would need to consider that he holds a swag of
 patents over it all (as also applies to all of
 the Chaum concepts).

Yes, most of the stuff is patented, as is Chaum's stuff.
Somebody had suggested that to build an ecash system,
for example, you could start out by implementing David
Wagner's suggestion as described in Lucre [2], and then,
if you sell it and want extra features and flexibility, license
the patents and implement Brands's stuff.  A similar strategy
would seem to apply to digital credentials in general.

 There is an alternate approach, the E/capabilities
 world.  Capabilities probably easily support the
 development of pseudonyms and credentials, probably
 more easily than any other system.   But, it would
 seem that the E development is still a research
 project, showing lots of promise, not yet breaking
 out into the wider applications space.

 A further alternate is what could be called the
 hard-coded pseudonym approach as characterised
 by SOX.  (That's the protocol that my company
 wrote, so normal biases expected.)  This approach
 builds pseudonyms from the ground up, which results
 in a capabilities model like E, but every separate
 use of the capability must then be re-coded in hard
 lines by hardened coders.

Do you have any references on this?

Thanks.

--Anton

[1] http://crypto.cs.mcgill.ca/~stiglic/Papers/brands.pdf
[2] http://anoncvs.aldigital.co.uk/lucre/theory2.pdf

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: NCipher Takes Hardware Security To Network Level

2003-10-07 Thread Perry E. Metzger

I was asked by someone to anonymously forward the following reply to
Joshua Hill to the list. (Second time in a week, and on the same topic!)

If you reply, please don't put my name in the reply -- this isn't my
comment.

--

  The government will still buy your encryption devices (FIPS-140
  certified)

 That will greatly depend on the sophistication of the agency concerned.
 The US Forest Service (for example) may not have the level of understanding
 of the FIPS 140-2 standard that the US Navy has.

The last time we dealt with the Navy, they barely knew what FIPS
140 was, weren't aware there were multiple levels, and when
informed of this had no idea what level they should be using. All
they were interested in was checking the box that said "Must be
FIPS certified."

--

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: NCipher Takes Hardware Security To Network Level

2003-10-07 Thread Anton Stiglic

- Original Message - 
From: Peter Gutmann [EMAIL PROTECTED]
 [...]
 If you think that's scary, look at Microsoft's CryptoAPI for Windows XP FIPS
 140 certification.  As with physical security certifications like BS 7799, you
 start by defining your security perimeter, defining everything inside it to be
 SECURE, and ignoring everything outside it.  Microsoft defined their perimeter
 as the case of the PC.  Everything inside the PC is defined to be SECURE.
 Everything outside is ignored.

I believe that is typical of most software crypto modules that are FIPS 140
certified, isn't it?  It classifies the module as "multi-chip standalone."

This is why you get requirements of the type that it should run on Windows
in single-user mode, which I take to mean have only an admin account.  This
prevents privilege escalation attacks (regular user to root) that are easily
done.

I think this is reasonable, since you really are relying on the OS and the
PC for the security of the module.

More scary to me is stuff like "DSSENH does not provide persistent storage
of keys.  While it is possible to store keys in the file system, this
functionality is outside the scope of this validation."

This is where Microsoft's CSPs do the dirty work, and use what is called
the Data Protection API (DPAPI) to somehow safeguard keys somewhere
in your system.
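
For what it's worth, from an application's point of view the DPAPI usage looks
roughly like the following (an illustrative sketch of the documented
CryptProtectData / CryptUnprotectData calls, not Microsoft's CSP internals);
the protected blob is only as strong as the Windows account and machine it
lives on:

    /* Sketch: wrap a key with DPAPI before writing it to disk.
     * The blob is bound to the calling user's logon credentials. */
    #include <windows.h>
    #include <wincrypt.h>
    #pragma comment(lib, "crypt32.lib")

    BOOL protect_key(const BYTE *key, DWORD keylen, DATA_BLOB *out)
    {
        DATA_BLOB in;
        in.pbData = (BYTE *)key;
        in.cbData = keylen;
        /* CRYPTPROTECT_UI_FORBIDDEN: never pop up a UI (services, daemons).
         * On success the caller stores out->pbData (out->cbData bytes) on
         * disk and eventually LocalFree()s it. */
        return CryptProtectData(&in, L"application key", NULL, NULL, NULL,
                                CRYPTPROTECT_UI_FORBIDDEN, out);
    }

    BOOL unprotect_key(const BYTE *blob, DWORD bloblen, DATA_BLOB *out)
    {
        DATA_BLOB in;
        in.pbData = (BYTE *)blob;
        in.cbData = bloblen;
        return CryptUnprotectData(&in, NULL, NULL, NULL, NULL,
                                  CRYPTPROTECT_UI_FORBIDDEN, out);
    }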

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Protocol implementation errors

2003-10-07 Thread Markus Friedl
On Sat, Oct 04, 2003 at 05:58:49PM +1200, Peter Gutmann wrote:
 Bill Frantz [EMAIL PROTECTED] writes:
 
 This is the second significant problem I have seen in applications that use
 ASN.1 data formats.  (The first was in a widely deployed implementation of
 SNMP.)  Given that good, security-conscious programmers have difficulty
 getting ASN.1 parsing right, we should favor protocols that use
 easier-to-parse data formats.
 
 I think this leaves us with SSH.  Are there others?
 
 I would say the exact opposite: ASN.1 data, because of its TLV encoding, is
 self-describing (c.f. RPC with XDR), which means that it can be submitted to a
 static checker that will guarantee that the ASN.1 is well-formed.  In other
 words it's possible to employ a simple firewall for ASN.1 that isn't possible
 for many other formats (PGP, SSL, ssh, etc etc).  This is exactly what
 cryptlib does, I'd be extremely surprised if anything could get past that.
 Conversely, of all the PDU-parsing code I've written, the stuff that I worry
 about most is that which handles the ad-hoc (a byte here, a uint32 there, a
 string there, ...) formats of PGP, SSH, and SSL.  We've already seen half the
 SSH implementations in existence taken out by the SSH malformed-packet
 vulnerabilities,

I don't think so.  The SSH packet format is _much_ simpler than ASN.1,
and neither the original ssh-1.x nor OpenSSH had problems due to
packet parsing; both have been immune to last year's malformed-packet
tests.  I've seen more problems related to ASN.1 parsing
than to SSH packet parsing...
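
For comparison, reading one SSH-style field is about this much code (a C
sketch, not OpenSSH's actual parser): a uint32 length followed by that many
bytes, with a single bounds check to get right.

    /* Sketch of reading one SSH-style field: a uint32 length followed by
     * that many bytes.  The only thing to get right is one bounds check.
     * Assumes the caller keeps *off <= pktlen. */
    #include <stddef.h>
    #include <stdint.h>

    static int get_string(const uint8_t *pkt, size_t pktlen, size_t *off,
                          const uint8_t **str, uint32_t *len)
    {
        if (pktlen - *off < 4)
            return -1;                          /* no room for the length word */
        uint32_t n = ((uint32_t)pkt[*off]     << 24) |
                     ((uint32_t)pkt[*off + 1] << 16) |
                     ((uint32_t)pkt[*off + 2] <<  8) |
                      (uint32_t)pkt[*off + 3];
        *off += 4;
        if (n > pktlen - *off)                  /* field would overrun the packet */
            return -1;
        *str = pkt + *off;
        *len = n;
        *off += n;
        return 0;
    }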

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Open Source (was Simple SSL/TLS - Some Questions)

2003-10-07 Thread Rich Salz
  I took the initial view that closed source and trustable
 crypto are mutually incompatible

Of course this isn't true.  When is the last time you built your
own ATM or credit-card POS terminal?

 Claims such
 as "Download this app and you will be secure" should definitely need to
 be proven, and if the app is built with TLS++ that would mean
 distributing the source code.

That's not enough.  Are you validating the toolchain?  (See Ken Thompson's
Turing Award lecture on trusting trust.)  Are you going to prevent
users from storing private keys in world-readable files?  Think very
carefully before you make *any* claims about what features your software
will provide, and what is necessary to truly ensure those features.
Are you planning on taking real liability here?   That would be a first
in the software world.
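
To pick one of those claims: keeping private keys out of world-readable files
is something the software has to enforce, not merely document.  A C sketch of
the usual check (the same idea OpenSSH applies to identity files; illustrative,
not any particular library's code):

    /* Sketch: refuse to use a private-key file that group/other can touch. */
    #include <sys/stat.h>
    #include <stdio.h>

    int key_file_is_private(const char *path)
    {
        struct stat st;
        if (stat(path, &st) != 0)
            return 0;                              /* can't stat: don't trust it */
        if (st.st_mode & (S_IRWXG | S_IRWXO)) {    /* any group/other access bits */
            fprintf(stderr, "%s: permissions 0%o are too open\n",
                    path, (unsigned)(st.st_mode & 07777));
            return 0;
        }
        return 1;
    }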

 I don't want to restrict the distribution of TLS++, but I
 also don't want crippled versions of it being used to fool the public.

Do you really think that someone who wants to fool the public will
be deterred by a LICENSE.txt file in an open source distribution?

 If anyone could help me to outline a reasonable possibility?

I think that rather than spending time on deciding what to call this
library that is to-be-written, and how to license this library that is
to-be-written, that time should be spent on, well, writing it. :)
/r$
--
Rich Salz  Chief Security Architect
DataPower Technology   http://www.datapower.com
XS40 XML Security Gateway  http://www.datapower.com/products/xs40.html
XML Security Overview  http://www.datapower.com/xmldev/xmlsecurity.html

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: Simple SSL/TLS - Some Questions

2003-10-07 Thread Jerrold Leichter
| From: Jill Ramonsky [EMAIL PROTECTED]
|   From: Ian Grigg [mailto:[EMAIL PROTECTED]
|  
|   The only question I wasn't quite sure of
|   was whether, if I take your code, and modify it,
|   can I distribute a binary only version, and keep
|   the source changes proprietary?
|
| You can't distribute a binary only version of ANY crypto product,
| surely? No crypto product can EVER be trustworthy unless you can see the
| source code and verify that it has no back doors, and then compile it.
| Unless you give your users the power to inspect the source code, and
| /know/ that it is the source code (because they can actually compile it
| and run the resulting executable) then you could have put all sorts of
| back doors into it. You could have added password theft, key escrow, who
| knows what?
This sounds nice in principle, but if you follow that where it goes, you are
left with GPL - not even LGPL.  There's no way for a library to protect itself
against code it is linked with.  (Where's TOPS-20 when you need it?)  Even
if you let me completely rebuild the crypto library you've shipped with your
product, why should I trust it?

| Don't get me wrong. I agree with you that crypto has enough barriers
| already, and I would like to produce something that is as freely
| distributable as possible. For the masses crypto is, I guess, an
| unwritten design goal. But allowing people to hide the crypto source
| from crypto users would allow the bad guys (you can define your own bad
| guys) to produce Trojan Horse crypto. Closed source crypto is to all
| intents worthless. (In my opinion). Please feel free to argue that I'm
| wrong.
This is not really a soluble problem.  Going full GPL will render your project
useless for just about any commercial use.  Anything less leaves even someone
with the necessary skills, desire, and time in a position where they can't
say anything meaningful about the system in which the code is embedded.

|   Q:  Does your employer  have any say or comment
|   on this project?  Might be wise to clear up the
|   posture, and either get it in writing, or make
| the repository public from the get-go.  Many an
|   open source project has foundered when the boss
|   discovered that it works...
|
| It has absolutely nothing whatsoever to do with my employer. All my code
| will be written at home in my spare time, and uploaded to CVS or
| whatever also from home. It is true that I happen to be sending this
| email from work, but even that's in my own time. I don't see how they
| have any say. To be /really/ safe,  I'd be happy to always post to this
| list only from home, but right now I don't think it's a problem.
Check your employment contract carefully.  Many contracts these days basically
say "if you invented/developed/wrote it while you worked for us, it's ours."
(Sometimes there are provisions like "if it's related to what we sell", but
when you look more closely, they will define "what we sell" so broadly - and
then add the ability to change their minds later anyway - that it doesn't
change anything.)

Lawsuits about this sort of stuff get ugly and *very* expensive.  Just because
your current employer is reasonable doesn't mean it won't be acquired by
someone who isn't.  Doing a bit of up-front checking is only prudent.

BTW, someone remarked in another issue that you could put off your choice of
license from among a variety of possibilities until later.  Well ... yes and
no.  Once something goes out the door under less restrictive terms, you can't
add restrictions later.  Let a copy go out with the phrase "released to the
public domain" and it's no longer yours - it's everyone's.  Let it out under
a BSD license, and you can't apply GPL terms later.  (You can apply stricter
terms to new versions, but if you try that, the resulting confusion will kill
any possibility of broad acceptance:  You'll end up with a competition
between your restrictive versions and evolved versions of less-restrictive
ones that other people do because they aren't willing to accept, or can't
accept, your later restrictions.)
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: NCipher Takes Hardware Security To Network Level

2003-10-07 Thread Peter Gutmann
Anton Stiglic [EMAIL PROTECTED] writes:

This is why you get requirements of the type that it should run on Windows in
single-user mode, which I take to mean have only an admin account.  This
prevents privilege escalation attacks (regular user to root) that are easily
done.

I think this is reasonable, since you really are relying on the OS and the PC
for the security of the module.

Uhh, so you're avoiding privilege escalation attacks by having everyone run as
root, from which you couldn't escalate if you wanted to.  This doesn't strike
me as a very secure way to do things (and it would still get MSDOS certified,
because you've now turned your machine into a DOS box protection-wise).

More scary to me is stuff like "DSSENH does not provide persistent storage of
keys.  While it is possible to store keys in the file system, this
functionality is outside the scope of this validation."

That's the "define the bits that we can easily get away with to be secure and
ignore the rest" approach that I commented on.  It was actually part of a
posting to another list where I was poking fun at BS 7799:

-- Snip --

Some years ago I witnessed a BS 7799 security certification being done.  For
those of you who aren't familiar with this, it's ISO 9000 for security.  It
went something like this:

  First, we define the region from the rug in the corner to Dave's desk to the
  pot-plant on the right to be... SECURE.  Everything inside this region is by
  definition SECURE.  Everything outside the region is none of our concern.
  Access to the server room from the SECURE area is by locked door.  The keys
  are on a hook on the wall, but since the hook is outside the SECURE area, we
  don't have to worry about that.

  Now we need to produce a lot of paperwork.  I'll help you with this, it
  should only take a few weeks.

  Congratulations, you now have a BS 7799-certified SECURE facility.  Here's
  my bill.

In other words they didn't change anything at all in their insecure (except in
the eyes of BS 7799) work area.  The whole certification process was an
exercise in meeting the certification requirements purely through the
production of paperwork.

-- Snip --

The SECURE facility has since been decommissioned, so I guess it's safe to talk
about it now.  Incidentally, almost everyone knew where the key was because
the room in question had the best air-conditioning in the building (it was
packed full of servers and networking gear), so it became quite popular in the
summer with the sysadmins, who'd find various reasons to do extended amounts
of work in there.

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Protocol implementation errors

2003-10-07 Thread Peter Gutmann
Markus Friedl [EMAIL PROTECTED] writes:
On Sat, Oct 04, 2003 at 05:58:49PM +1200, Peter Gutmann wrote:
 We've already seen half the
 SSH implementations in existence taken out by the SSH malformed-packet
 vulnerabilities,

I don't think so.

According to the CERT advisory, roughly half of all known SSH implementations
are vulnerable (some of the vendor statements are a bit ambiguous), and the
number would have been higher if it weren't for the fact that several of the
non-vulnerable implementations share the OpenSSH code base (there are a number
of implementations not in the advisory, but we can take it as being a
representative sample).

The reason I appear to be defending ASN.1 here is that there seems to be an
irrational opposition to it from some quarters (I've had people who wouldn't
recognise ASN.1 if they fell over it tell me with complete conviction that
it's evil and has to be eradicated because... well just because).  I don't
really care about the religious debate one way or the other, I'm just stating
that from having used almost all of the bit-bagging formats (starting with PGP
1.0) for quite a number of years, ASN.1 is the one I feel the most comfortable
with in terms of being able to process it safely.
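
The "simple firewall" argument mentioned earlier comes down to the fact that a
TLV stream can be walked and length-checked before any real decoding happens.
A toy C sketch (nothing like cryptlib's actual checker; definite lengths and
single-byte tags only):

    /* Toy well-formedness check for a BER/DER TLV stream: verify that every
     * length fits inside its parent before any real decoding is attempted.
     * Call as tlv_check(buf, len, 0); 0 means structurally well-formed. */
    #include <stddef.h>
    #include <stdint.h>

    static int tlv_check(const uint8_t *buf, size_t len, int depth)
    {
        size_t off = 0;
        if (depth > 32)                      /* cap nesting depth */
            return -1;
        while (off < len) {
            if (len - off < 2)
                return -1;
            uint8_t tag = buf[off++];
            uint8_t lb  = buf[off++];
            size_t  l;
            if (lb < 0x80) {                 /* short-form length */
                l = lb;
            } else {                         /* long form: next (lb & 0x7f) bytes */
                unsigned nbytes = lb & 0x7f;
                if (nbytes == 0 || nbytes > sizeof(size_t) || len - off < nbytes)
                    return -1;               /* indefinite or oversized length */
                l = 0;
                for (unsigned i = 0; i < nbytes; i++)
                    l = (l << 8) | buf[off++];
            }
            if (l > len - off)
                return -1;                   /* value runs past its parent */
            if (tag & 0x20) {                /* constructed: recurse into value */
                if (tlv_check(buf + off, l, depth + 1) != 0)
                    return -1;
            }
            off += l;
        }
        return 0;
    }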

Incidentally, if anyone wants to look for holes in ASN.1 data in the future,
I'd be really interested in seeing what you can do with malformed X.509 and
S/MIME data.

Peter (who's going to look really embarrassed if the NISCC test suite finds
   problems in his ASN.1 code :-).

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: NCipher Takes Hardware Security To Network Level

2003-10-07 Thread Anton Stiglic

- Original Message - 
From: Peter Gutmann [EMAIL PROTECTED]
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Tuesday, October 07, 2003 11:07 AM
Subject: Re: NCipher Takes Hardware Security To Network Level


 Anton Stiglic [EMAIL PROTECTED] writes:

 This is why you get requirements of the type that it should run on Windows in
 single-user mode, which I take to mean have only an admin account.  This
 prevents privilege escalation attacks (regular user to root) that are easily
 done.

 I think this is reasonable, since you really are relying on the OS and the PC
 for the security of the module.

 Uhh, so you're avoiding privilege escalation attacks by having everyone run as
 root, from which you couldn't escalate if you wanted to.  This doesn't strike
 me as a very secure way to do things (and it would still get MSDOS certified,
 because you've now turned your machine into a DOS box protection-wise).

Did you read the security policy of the Netscape Security Module?  Basically,
if you want to get the configuration that is FIPS 140 certified, you need to
install the module on a PC and add tamper-resistant seals over appropriate
interfaces, junctions and fasteners of all doors and covers in the enclosure
of the PC, so that you can't open the cover without the fact being physically
noticeable.  I suggest adding some duct tape in strategic positions for
additional security :).

By reasonable I mean in the framework of having a general-purpose software
cryptographic library be certified FIPS.  I'm not saying I find this secure.
When I see a software library being certified FIPS 140, I say to myself that
it must implement the cryptographic algorithms in a decent way, have a decent
random number generator, and stuff like that.  I don't care much about the
physical boundary that they artificially determine.

If I want high security, I will go with hardware.  At the end of the line,
what you want to protect is your secret keys, and if you don't have
tamper-resistant hardware (that zeroizes your secrets when someone tries to
poke at it) to do that, it is difficult if not impossible.
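
In software the best you can do is a pale imitation of that: wipe key material
as soon as it is no longer needed, in a way the compiler can't optimize away.
A minimal C sketch (a platform-provided secure memset does the same job where
one exists):

    /* Sketch: wipe key material through a volatile pointer so the stores
     * can't be optimized away as "dead" writes. */
    #include <stddef.h>

    static void wipe(void *p, size_t n)
    {
        volatile unsigned char *v = (volatile unsigned char *)p;
        while (n--)
            *v++ = 0;
    }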

--Anton


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Simple SSL/TLS - Some Questions

2003-10-07 Thread Ralf Senderek
On Mon, 6 Oct 2003, Ian Grigg wrote: (answering Jill's questions)

 The only question I wasn't quite sure of
 was whether, if I take your code, and modify it,
 can I distribute a binary only version, and keep
 the source changes proprietary?

I'd strongly recommend thinking about some code signing, which would
best be included in the source code but could as well be distributed
as separate signature files.  Combined with a note in your licence (whatever
it turns out to be), this will not only help to spot and reject
unauthorized and dubious attempts to improve your code but
will also deter those who might call your code crap without having
seen the real thing.

Good luck.

Ralf

*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*
* Ralf Senderek  [EMAIL PROTECTED] http://senderek.de  *  What is privacy  *
* Sandstr. 60   D-41849 Wassenberg  +49 2432-3960   *  without  *
* PGP: AB 2C 85 AB DB D3 10 E7  CD A4 F8 AC 52 FC A9 ED *Pure Crypto?   *
49466008763407508762442876812634724277805553224967086648493733366295231438448

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]