Re: Did you *really* zeroize that key?

2002-11-07 Thread Rich Salz
Probably moving out of the domain of the crypto list.

   volatile char *foo;

volatile, like const, is a type qualifier.  As written, it
declares a pointer to memory that is volatile; this means, in particular,
that the compiler can't optimize away dereferences.  If you wrote

   char * volatile foo;

that would mean that foo itself is volatile, and it must be fetched from
memory whenever you want its value.
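
A quick sketch of why the first form is the one that matters for zeroizing
keys (the function name is mine, not from the post): writing through a
pointer-to-volatile obliges the compiler to emit every store, so a final
wipe of key material can't be discarded as dead code.

```c
#include <stddef.h>

/* Illustrative only: because p is a pointer to volatile char,
 * each store through it must actually be performed; the compiler
 * may not delete the loop even if buf is never read again. */
static void zero_via_volatile(char *buf, size_t len)
{
    volatile char *p = buf;
    while (len--)
        *p++ = 0;
}
```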

You might find the cdecl program useful...

; cdecl
Type `help' or `?' for help
cdecl> explain volatile void* vp
declare vp as pointer to volatile void
cdecl> explain void * volatile vp
declare vp as volatile pointer to void
cdecl> explain volatile void * volatile vp
declare vp as volatile pointer to volatile void



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: New Protection for 802.11

2002-11-07 Thread Donald Eastlake 3rd
Well, you see, some of the people working on improving 802.11 security,
in particular some members of 802.11 Task Group i, noted that IEEE
procedures have no interoperability demonstration requirements. So they
formed a little group that took a subset of the then-current 802.11i
draft and tried to implement it and interoperate. (Problems were found
and fixes fed back into the standards process.) The subset chosen,
called SSN, included the 802.1X authentication and anti-replay features
of 802.11i and the TKIP branch of 802.11i. SSN does not cover ad-hoc
(station-to-station) mode, only station to access point.

(The current 802.11i draft has three branches:

TKIP (Temporal Key Integrity Protocol), for legacy hardware via
firmware/software upgrade: uses RC4, but with a different key for
every packet, plus a specially designed (for weak legacy hardware) keyed
message integrity code with about 20 bits of strength (optional).

WRAP (Wireless Robust Authenticated Protocol), for new hardware:
uses AES in OCB mode for encryption and integrity (optional).

CCMP (CCM Protocol), for new hardware: uses AES in CCM mode,
that is, AES-CTR for encryption and AES-CBC-MAC for integrity
(mandatory).)

There being a lot of pressure for improved security soon, the WiFi
Alliance essentially adopted SSN, with some profiling, as a security
certification standard and called this WiFi Protected Access (WPA) v1.
The plan is for full 802.11i to be called WiFi Protected Access v2.

Donald

On 6 Nov 2002, Perry E. Metzger wrote:

 Date: 06 Nov 2002 15:32:30 -0500
 From: Perry E. Metzger [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Subject: New Protection for 802.11
 
 From Dave Farber's Interesting People list.
 
 Does anyone know details of the new proposed protocols?
==
 Donald E. Eastlake 3rd   [EMAIL PROTECTED]
 155 Beaver Street  +1-508-634-2066(h) +1-508-851-8280(w)
 Milford, MA 01757 USA   [EMAIL PROTECTED]







Re: New Protection for 802.11

2002-11-07 Thread thomas lakofski
David Wagner said:
 It's not clear to me if WPA products come with encryption turned on by
 default.  This is probably the #1 biggest source of vulnerabilities in
 practice, far bigger than the weaknesses of WEP.

Maybe this is the case in the USA but from my own informal surveys in
Helsinki and London I've found that 90% of private WLANs operate with WEP
enabled (FWIW).  Those with no WEP often appear to be deliberate,
indicated by 'welcoming' SSIDs.  Commercial WLAN operators also typically
choose to deploy with no WEP, controlling access via transparent proxying
or similar methods.

If WLAN systems were supplied supposedly 'secure' out of the box,
consumers might have even less interest in changing defaults.  Automated
key distribution at set-up time would likely introduce its own problems.

I'm fairly sure that J. Consumer connecting their home PC to DSL or cable
with no firewall typically expose themselves to greater risk than
deploying 802.11b with no WEP.

cheers,

-thomas

-- 
   Men of lofty genius when they are doing the
 least work are most active  -- da Vinci
gpg: pub 1024D/81FD4B43 sub 4096g/BB6D2B11=p.nu/d
2B72 53DB 8104 2041 BDB4  F053 4AE5 01DF 81FD 4B43






Re: New Protection for 802.11

2002-11-07 Thread Nelson Minar
 Reading the Wifi report, it seems their customers stampeded them and
 demanded that the security hole be fixed, fixed a damned lot sooner
 than they intended to fix it.

Which is sort of a shame, in a way. 802.11b has no pretense of media
layer security. I've been thinking of that as an opportunity for folks
to get smarter about network and application layer security - PPTP,
IPSEC, proper authentication, etc. A lot of sites are putting their
wireless access points outside the firewall and doing VPNs and the
like to build secure links.

If WiFi gets reasonable media layer security soon, that pressure will
go away and we'll go back to media-based security. I think that's a
bad thing in the long run; you end up with systems that may be
somewhat secure at the gateway/firewall but are soft inside. 

 [EMAIL PROTECTED]
.   .  . ..   .  . . http://www.media.mit.edu/~nelson/




DOS attack on WPA 802.11?

2002-11-07 Thread Arnold G. Reinhold
The new Wi-Fi Protected Access scheme (WPA), designed to replace the 
discredited WEP encryption for 802.11b wireless networks, is a  major 
and welcome improvement. However it seems to have a significant 
vulnerability to denial of service attacks. This vulnerability 
results from the proposed remedy for the self-admitted weakness of 
the Michael message integrity check (MIC) algorithm.

To be backward compatible with the millions of 802.11b units already 
in service,  any MIC algorithm must operate within a very small 
computing budget. The algorithm chosen, called Michael,  is spec'd as 
offering only 20 bits of effective security.

According to an article by Jesse Walker of Intel 
http://cedar.intel.com/media/pdf/security/80211_part2.pdf :

This level of protection is much too weak to afford much benefit by 
itself, so TKIP complements Michael with counter-measures. The design 
goal of the counter-measures is to throttle the utility of forgery 
attempts, limiting knowledge the attacker gains about the MIC key. If 
a TKIP implementation detects two failed forgeries in a second, the 
design assumes it is under active attack. In this case, the station 
deletes its keys, disassociates, waits a minute, and then 
reassociates. While this disrupts communications, it is necessary to 
thwart active attack. The countermeasures thus limits the expected 
number of undetected forgeries such an adversary might generate to 
about one per year per station.
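
The throttling rule just quoted can be sketched in a few lines of C; the
names and the one-second bookkeeping below are my own illustration, not
code from the 802.11i draft.

```c
#include <stdbool.h>

/* Illustrative sketch of the TKIP countermeasure trigger:
 * two MIC failures within one second mean "under active attack",
 * at which point the station deletes its keys, disassociates,
 * and waits a minute.  Time is in whole seconds here; a real
 * implementation would use the 802.11 TSF timer. */
static long last_mic_failure = -1000;  /* long before any real failure */

bool mic_failure_triggers_countermeasures(long now)
{
    bool attack = (now - last_mic_failure) <= 1;
    last_mic_failure = now;
    return attack;
}
```

On this model, an attacker who can land two bad-MIC packets within a
second keeps the trigger firing indefinitely, which is the denial of
service described in the next paragraph.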

Unfortunately the countermeasures cure may invite a different 
disease. It would appear easy to mount a denial of service attack by 
simply submitting two packets with bad MIC tags in quick succession. 
The access point then shuts down for a minute or more. When it comes 
back up, one repeats the attack.  All the attacker needs is a laptop 
or hand held computer with an 802.11b card and a little software. 
Physically locating the attacker is made much more difficult than for 
an ordinary RF jammer by the fact that only a couple of packets per 
minute need be transmitted. Also the equipment required has innocent 
uses, unlike a jammer, so prosecuting an apprehended suspect would be 
more difficult.

The ability to deny service might be very useful to miscreants in 
some circumstances. For example, an 802.11b network might be used to 
coordinate surveillance systems at some facility or event.  With 
802.11b exploding in popularity, it is impossible to foresee all the 
mission critical uses it might be put to.

Here are a couple of suggestions to improve things, one easier, the 
other harder.

The easier approach is to make the WPA response to detected forgeries 
more configurable.  The amount of time WPA stays down after two 
forgeries might be a parameter, for example.  It should be possible 
to turn the countermeasures off completely. Some users might find the 
consequences of forgeries less than that of lost service. For a firm 
offering for-fee public access, a successful forgery attack might 
merely allow free riding by the attacker, while denied service could 
cost much more in lost revenue and reputation.

Another way to make WPA's response more configurable would be for the 
access point to send a standard message to a configurable IP address 
on the wire side whenever it detects an attack. This could alert 
security personnel to scan the parking lot, or switch the access point 
to outside the corporate firewall. The message might also quote 
the forged packets, allowing them to be logged.  Knowing the time and 
content of forged packets could also be useful to automatic radio 
frequency direction-finding equipment. As long as some basic hooks 
are in place, other responses to forgery attack could be developed 
without changing the standard.

The harder approach is to replace Michael with a suitable but 
stronger algorithm (Michelle?).  I am willing to assume that 
Michael's designer, Niels Ferguson, did a fine job within the 
constraints he faced. But absent a proof that what he created is 
absolutely optimal, improving on it seems a juicy cryptographic 
problem. How many bits of protection can you get on a tight budget? 
What if you relaxed the budget a little, so it ran on say 80% of 
installed access points? A public contest might be in order.

Clearly, WPA is needed now and can't wait for investigation and 
vetting of a new MIC. But if a significantly improved MIC were 
available in a year or so, it could be included as an addendum or 
as part of the 802.11i specification.  Some might say that 802.11i's 
native security will be much better, so why bother? My answer is that 
802.11i will not help much unless WPA compatibility is shut off.  And 
with so many millions of 802.11 cards in circulation that are not 
.11i ready, that won't happen in most places for a long time. On 
the other hand, an upgraded MIC could be adopted by an organization 
that wished improved security with modest effort. Backward 
compatibility could be maintained, with a 

Re: Windows 2000 declared secure

2002-11-07 Thread Arnold G. Reinhold
At 6:38 AM -0500 11/4/02, Jonathan S. Shapiro wrote:


Requirements, on the other hand, is a tough problem. David Chizmadia and
I started pulling together a draft higher-assurance OS protection
profile for a class we taught at Hopkins. It was drafted in tremendous
haste, and we focused selectively on the portions of CC we would cover
in class, but it may provide some sense of how hard this is to actually
do:

	http://www.eros-os.org/assurance/PP/ASP-OS.pdf



A couple of comments:
I realize that this is a very preliminary draft. Please don't take 
this as criticism of your protection profile. It is a very useful 
start. I am not familiar with this stuff, so please accept these 
comments as coming from a naif.

I think these profiles should start with criteria (functionality, 
requirements, assumptions, etc.) that directly address non-technical 
users.  Ideally they should be quotable in a white paper describing 
the benefits of the target of evaluation or, even better, in the 
product warranty.  More technical criteria added later in the 
document should be tied to these or explicitly justified in some 
other manner. The NSA CAPP does this to some extent in sections 3 and 
4.

So, for example, requirements on IPL and power-fail behavior might 
derive from a general specification that the system not be subject 
to compromise during abnormal situations.  A list of such conditions 
would include IPL and power failures, along with hardware 
malfunction, periodic maintenance, terrorist attack and so forth. 
Conditions where security is not protected, say hardware maintenance, 
would then be made explicit.

I also think there is an opportunity to componentize protection 
profiles.  The designer of a new profile should not have to reinvent 
stuff like authentication or entropy generation. Profiles for these 
components would be included by reference, perhaps with parameters 
for components with options. This has the additional advantage of 
allowing the components to be updated independently.  (Any particular 
certification would specify the revision level  of all components 
used.) Here are some candidate components, not all of which involve 
software but are none the less important in secure systems:

o Entropy generation -- there is lots of art that can be captured 
here, e.g. batching entropy input, that might not be obvious to 
profile writers

o Login authentication -- again there are many approaches that should 
be captured: multi-use passwords, PKI credentials, multi-factor, no 
lone access.

o Cryptographic algorithms with key and salt length recommendations-- 
The widely accepted algorithms and modes of operation might simply be 
listed, with provision for Type 1 supplied by some government 
owner. Home grown algorithms should be banned. By the way your 
Quantum assumption is too narrow. None of the popular cryptographic 
algorithms have been mathematically proven secure. This risk should 
be explicitly stated.

o Secure networking--safe use of TCP/IP stacks, VPNs, services to be 
avoided, ...

o Multi-site systems --Security solutions employing several machines 
at individual locations, each backing up the other's data and cross 
checking proper operation.

o Event logging (e.g. what to log, being careful not to log passwords 
typed in the wrong box, sending logs to remote sites, dealing with 
lack of space)

o Configuration management -- validating that the software in use is 
the software that was certified; secure patch distribution and 
installation; verification that all patches are installed

o Forensic requirements -- what kind of evidence of misuse must be 
collected and how must it be handled to have legal standing in 
various jurisdictions.

o Data port protection -- preventing attacker from breaching security 
by gaining access to built in ports (RS232, USB, Firewire, SCSI, 
etc.) This could involve special drives or physical covers.

o Attack detection

o A common threat vocabulary -- levels of attacker sophistication (nosy 
user, malicious insider, script kiddies, teams of hackers, well 
funded organizations, large national security services) and attack 
geography (intercepted packets en route, attacks via publicly 
accessible ports, war dialing/driving, inside job, physical capture 
and exploitation of the hardware).

o Detected attack response (this can vary of course. A secure system 
might zeroize all keys and seeds.  A long term  archive might publish 
keys before all data is lost.)

o Secure sensor modules -- GPS receivers, cameras (still and movie), 
intrusion detectors,  sound recorders, biometrics, etc that include a 
tamper resistant capability to sign the data they produce.

o Power availability assurance (loss of power is a denial of service 
as much as any flooding attack)

o Physical security, FIPS 140, for example. There may be useful stuff 
in the DOD Industrial Security Manual and insurance industry 
guidelines. There are potential tie-ins to software. A system might 

Re: New Protection for 802.11

2002-11-07 Thread James A. Donald
--
Reading the Wifi report,
http://www.weca.net/OpenSection/pdf/Wi-
Fi_Protected_Access_Overview.pdf 
it seems their customers stampeded them and demanded that the
security hole be fixed, fixed a damned lot sooner than they
intended to fix it.

I am struck by the contrast between the seemingly strong demand 
for wifi security and the almost complete absence of 
demand for email security.

Why is it so? 

--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 IWe4JFeDeor04Pxb96ZsQ7xX+JAwxSs8HQfoAeG5
 4rQX6tgLhAvAwLjF+SXlRswSmphBhw4cOXLe9Y4r5





Re: Did you *really* zeroize that key?

2002-11-07 Thread Steven M. Bellovin
In message [EMAIL PROTECTED], Peter Gutmann writes:

[Moderator's note: FYI: no pragma is needed. This is what C's volatile
 keyword is for. --Perry]

No it isn't.  This was done to death on vuln-dev, see the list archives for
the discussion.

[Moderator's note: I'd be curious to hear a summary -- it appears to
work fine on the compilers I've tested. --Perry]

Regardless of whether one uses volatile or a pragma, the basic point 
remains:  cryptographic application writers have to be aware of what a 
clever compiler can do, so that they know to take countermeasures.

--Steve Bellovin, http://www.research.att.com/~smb (me)
http://www.wilyhacker.com (Firewalls book)






RE: New Protection for 802.11

2002-11-07 Thread Trei, Peter
 James A. Donald[SMTP:[EMAIL PROTECTED]] wrote:
 
 
 Reading the Wifi report,
 http://www.weca.net/OpenSection/pdf/Wi-
 Fi_Protected_Access_Overview.pdf 
 it seems their customers stampeded them and demanded that the
 security hole be fixed, fixed a damned lot sooner than they
 intended to fix it.
 
 I am struck the contrast between the seemingly strong demand 
 for wifi security, compared to the almost complete absence of 
 demand for email security.
 
 Why is it so? 
 
 --digsig
  James A. Donald
 
How many stories have you read in the last year about
non-LEOs stealing email?

How many stories in the last year have you read about
wardriving?

Further, tapping into 802.11b nets 

* gives the attacker access to your internal
  network. You already know what you're
  sending in email, and eavesdropping on 
  data you've already decided to send to someone
  else feels different than someone trolling through
  your file system without your knowledge.

* requires that the tapper be more or less
  nearby physically. This feels a lot
  different than worrying that a distant
  router is compromised.

Peter Trei






Re: Did you *really* zeroize that key?

2002-11-07 Thread David Honig
At 03:55 PM 11/7/02 +0100, Steven M. Bellovin wrote:
Regardless of whether one uses volatile or a pragma, the basic point 
remains:  cryptographic application writers have to be aware of what a 
clever compiler can do, so that they know to take countermeasures.

Wouldn't a crypto coder be using paranoid-programming 
skills, like *checking* that the memory is actually zeroed? 
(I.e., read it back.)  I suppose that caching could still
deceive you, though?

I've read about some Olde Time programmers
who, given flaky hardware (or maybe software), 
would do this in non-crypto but very important apps. 
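
The read-back check might look like the sketch below (names mine). The
caveat above still stands: a read-back through an ordinary pointer could
be optimized away together with the memset, so the verification pointer
is declared volatile to force real loads.

```c
#include <string.h>
#include <stddef.h>

/* Paranoid zeroize-then-verify, per the suggestion above.
 * The volatile-qualified read-back forces actual memory loads,
 * so the check can't be folded away; whether the *store* side
 * survives optimization is the question debated in this thread. */
int zeroize_and_check(void *buf, size_t len)
{
    memset(buf, 0, len);
    volatile unsigned char *p = (volatile unsigned char *)buf;
    for (size_t i = 0; i < len; i++)
        if (p[i] != 0)
            return 0;   /* wipe did not take effect */
    return 1;
}
```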












Great service from Astromerkez!..

2002-11-07 Thread Astromerkez
Title: Dünya

The World's First Astrology
and Occult Sciences Portal
www.astromerkez.com

Unheard-of service from Astromerkez: personalized daily astrology
readings, with details you won't see anywhere else... A free service
for Astromerkez's visitors. You only need to sign up (membership is
free). Here is an example for you:

Couples:

Since you know each other well, you are living the magic of harmony.
New lovers are only beginning to experience this. The stars are
giving you everything you want.

Singles:

Rather than forcing things, take shortcuts. Someone is making you an
offer. That is what matters, anyway. This is a period in which you
need to be in touch with people.

Transits:

120) Favorable: You will want to listen to music, and you will have
more appreciation than before for paintings and for everything
harmonious, elegant, and aesthetically beautiful.

204) Favorable: You will be full of energy. You will weigh the
consequences in advance and make important decisions. You will not
be hasty. You will succeed in everything you do and complete the
projects on your plate. You will think positively. Your sexual
instincts will stir.
...

-141) Unfavorable: You will feel a great need for independence. You
will feel like a prisoner. You will rebel to win your independence.
Unable to bear the chaos of daily life, you will want to change your
life. You will feel a restlessness, as if you were missing out on
something wonderful. Since great changes do not usually happen in
this period, this feeling will not pass.
...
  










Re: DOS attack on WPA 802.11?

2002-11-07 Thread Donald Eastlake 3rd
On Thu, 7 Nov 2002, Arnold G. Reinhold wrote:

 Date: Thu, 7 Nov 2002 16:17:48 -0500
 From: Arnold G. Reinhold [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Subject: DOS attack on WPA 802.11?
 
 The new Wi-Fi Protected Access scheme (WPA), designed to replace the 
 discredited WEP encryption for 802.11b wireless networks, is a  major 
 and welcome improvement. However it seems to have a significant 
 vulnerability to denial of service attacks. This vulnerability 
 results from the proposed remedy for the self-admitted weakness of 
 the Michael message integrity check (MIC) algorithm.

Needless to say, this has been discussed time and time again in the 
meetings and on the mailing list of IEEE 802.11i.

 To be backward compatible with the millions of 802.11b units already 
 in service,  any MIC algorithm must operate within a very small 
 computing budget. The algorithm chosen, called Michael,  is spec'd as 
 offering only 20 bits of effective security.

That's right; there is the TKIP branch of 802.11i to support the 
15,000,000+ legacy units out there. If you can come up with a better MIC 
that almost all of them can support with just a firmware upgrade, you 
are welcome to submit it, but to overcome the current commitment it would 
need to be substantially better and out pretty quickly.

 According to an article by Jesse Walker of Intel 
 http://cedar.intel.com/media/pdf/security/80211_part2.pdf :
 
 This level of protection is much too weak to afford much benefit by 
 itself, so TKIP complements Michael with counter-measures. The design 
 goal of the counter-measures is to throttle the utility of forgery 
 attempts, limiting knowledge the attacker gains about the MIC key. If 
 a TKIP implementation detects two failed forgeries in a second, the 
 design assumes it is under active attack. In this case, the station 
 deletes its keys, disassociates, waits a minute, and then 
 reassociates. While this disrupts communications, it is necessary to 
 thwart active attack. The countermeasures thus limits the expected 
 number of undetected forgeries such an adversary might generate to 
 about one per year per station.
 
 Unfortunately the countermeasures cure may invite a different 
 disease. It would appear easy to mount a denial of service attack by 
 simply submitting two packets with bad MIC tags in quick succession. 
 The access point then shuts down for a minute or more. When it comes 
 back up, one repeats the attack.  All the attacker needs is a laptop 
 or hand held computer with an 802.11b card and a little software. 
 Physically locating the attacker is made much more difficult than for 
 an ordinary RF jammer by the fact that only a couple of packets per 
 minute need be transmitted. Also the equipment required has innocent 
 uses, unlike a jammer, so prosecuting an apprehended suspect would be 
 more difficult.

So throw all your legacy hardware in the trash (or sell it on eBay), get
only new hardware, and don't enable TKIP, if you are so worried about
this.

 The ability to deny service might be very useful to miscreants in 
 some circumstances. For example, an 802.11b network might be used to 
 coordinate surveillance systems at some facility or event.  With 
 802.11b exploding in popularity, it is impossible to foresee all the 
 mission critical uses it might be put to.

Mission critical uses on an unlicensed band where 802.11b gets to fight
it out with Bluetooth, cordless phones, diathermy machines, and who
knows what else? (At least efforts are underway to coordinate with
Bluetooth.)

 Here are a couple of suggestions to improve things, one easier, the 
 other harder.
 
 The easier approach is to make the WPA response to detected forgeries 
 more configurable.  The amount of time WPA stays down after two 
 forgeries might be a parameter, for example.  It should be possible 
 to turn the countermeasures off completely. Some users might find the 
 consequences of forgeries less than that of lost service. For a firm 
 offering for-fee public access, a successful forgery attack might 
 merely allow free riding by the attacker, while denied service could 
 cost much more in lost revenue and reputation.

I think the feeling was there are lots of ways you can run insecure if
you want. Like just using WEP. If you want to be secure with legacy
hardware, you need countermeasures. If you don't want to be secure, you 
don't need any of TKIP or the rest of 802.11i.

 Another way to make WPA's response more configurable would be for the 
 access point to send a standard message to a configurable IP address 
 on the wire side when ever it detects an attack. This could alert 
 security personal to scan the parking lot or switch the access point 
 to be outside the corporate firewall. The message also might quote 
 the forged packets, allowing them to be logged.  Knowing the time and 
 content of forged packets could also be useful to automatic radio 
 frequency direction finding equipment. As long as some basic 

Re: Did you *really* zeroize that key?

2002-11-07 Thread Don Davis
At 3:07 PM +1300 11/7/02, Peter Gutmann wrote:
 [Moderator's note: FYI: no pragma is needed.
 This is what C's volatile keyword is for. 

 No it isn't.  This was done to death on vuln-dev,
 see the list archives for the discussion.

 [Moderator's note: I'd be curious to hear a summary --
 it appears to work fine on the compilers I've tested.
   --Perry]

i include below two parts:  a summary of the vuln-dev
thread, and a compiler jock's explanation of why peter's
#pragma is the _only_ solution that reliably will work.

- don davis, boston


vuln-dev thread:
   http://online.securityfocus.com/archive/82/298061/2002-10-28/2002-11-03/1
   (thanks to tim fredenburg for sending this URL to me.)

summary:  programmers can obstruct dead-code elimination
in various ways:
   * use the volatile attribute (but correctly);
   * introduce dynamic dependency;
   * do the memset with an external call.
punchline:  the subtler or newer the obstruction,
the less likely we are to see that _all_ compilers
treat the obstruction correctly.  the safest route
is to code with obstructions that have long been
known to obstruct dead-code elimination.  hence,
wrapping memset() in an external routine is most
likely to work with various buggy compilers.
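
a minimal sketch of that external-call route (file split and names are
mine): the wipe lives in its own translation unit, so the caller's
compiler can't see that the stores are dead and can't eliminate them
(absent link-time optimization).

```c
/* wipe.c -- compiled separately from its callers.  because the
 * optimizer can't look across translation units (without LTO),
 * it must assume the stores matter and keep the call. */
#include <string.h>
#include <stddef.h>

void external_memzero(void *buf, size_t len)
{
    memset(buf, 0, len);
}
```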

synopsis: 
 * peter posted the same message as he posted to
   the cryptography list, appealing for new support
   from the compilers;
   * syzop said, didn't happen w/ gcc 2.95.4;
   * michael wojcik suggested:
 define an external call that does memset's job,
 so as to defeat dead-code elimination 
   * dan kaminsky suggested: introduce dynamic [runtime]
 dependencies;
   * dom de vitto said, use the volatile attribute;
 * kaminsky replied:  compilers are more likely
   to reliably respect dynamic dependency, than
   to correctly support the volatile attribute;
 * pavel kankovsky replied, volatile is mandatory
   in the standard, so it's ok to trust it;
 * peter also replied to kaminsky:  the dead-code
   elimination problem seems specific to gcc 3.x .
   the underlying problem is unreliable support for
   standard features and for standards compliance.  
 * michael wojcik explains (to peter, pavel, and
   kaminsky) why volatile isn't as good as his
   external call:
 - passing a volatile object to memset
invokes undefined behavior
 - access to volatile objects may be
significantly slowed 
 - volatile seems like the sort of thing
broken implementations may get wrong
   michael also argues that more compiler support
   isn't necessary, since the standard provides
   effective features.

end of synopsis/summary



since i used to build compilers long ago, before i got
into security work, i asked an expert friend (32 yrs of
compiler development) what he thought of this
problem and of the proposed solutions.  this guy, btw,
was the lead engineer for digital/compaq's fx!32 runtime
binary translator for the alpha workstations, so he knows
a lot about optimizers.  he says that of the four
proposed solutions -

   * #pragma dont_remove_this_code_you_bastard;
   * use the volatile attribute (but correctly);
   * introduce dynamic dependency;
   * do the memset with an external call;

- only peter's pragma can be expected to work reliably:

   * the c99 standard and its predecessors don't
 at all intend volatile to mean what we naively
 think it means.  specifically, in the hands of a
 high-end compiler developer, the spec's statement:
any expression referring to [a volatile]
 object shall be evaluated strictly according
 to the rules of the abstract machine
 is really talking about what the compiler can
 infer about the program's intended semantics.
 a c99-compliant compiler _can_ legitimately
 remove a volatile access, as long as the compiler
 can deduce that the removal won't affect the
 program's result.  here, the program's result
 is defined by the compiler's sense of what the
 abstract machine is:  the abstract machine
 is mostly defined by the language features, but
 can also take into account whether a debugger
 or specialized hardware are running during
 compilation  or runtime execution.

 for example, such a savvy compiler might leave
 our volatile-memory memset() call in place when
 the debugger is running (knowing that the debug-
 ger might want to view the zeroed key). but then,
 when the debugger is turned off, the same compiler
 could decide to remove the dead memset() call,
 because this won't affect the program's results.

   * standards-compliant compilers normally distinguish
 between conformant source programs and noncon-
 formant source programs.  for example, a noncon-
 formant program might be one that uses a deprecated
   

the volatile keyword

2002-11-07 Thread Perry E. Metzger

Don Davis writes:

   * the c99 standard and its predecessors don't
 at all intend volatile to mean what we naively
 think it means.  specifically, in the hands of a
 high-end compiler developer, the spec's statement:
any expression referring to [a volatile]
 object shall be evaluated strictly according
 to the rules of the abstract machine
 is really talking about what the compiler can
 infer about the program's intended semantics.
 a c99-compliant compiler _can_ legitimately
 remove a volatile access, as long as the compiler
 can deduce that the removal won't affect the
 program's result. 

Sorry, but that is really not correct at all.

volatile exists because there are times when you absolutely need to
know that the compiler will not alter your intent. A typical example
is in touching a device register in a device driver. You may very well
need to write a certain set of values out to a particular memory
location in a particular order and not have them optimized away or
reorganized. It may be vitally important to access register 2 and then
register 1, or to write multiple values out to register 4 before
touching register 3, or what have you.

In a driver or in a situation like this you really do mean "write a
one there and then write a ten there", and never mind that you think you
can optimize away writing the one there.  volatile means that the
memory location has side effects and that you CANNOT deduce the result
of the operations, and thus are required to not touch the sequence at
all. The spec specifically states that you may NOT remove or reorder
sequence points if volatile is in use.

That is why volatile exists. It means "do NOT reorder or eliminate
access to these memory locations, on pain of death." The intent of the
spec is precisely what I've said, and I'll happily quote chapter and
verse to prove it.

There are several similar misconceptions about the volatile keyword
that have been propagated in recent messages.

Claims that volatile does not guarantee a safeguard against such
optimizations are specious. That is exactly why volatile was
introduced, and if, for example, gcc did not honor it, the machine I
am typing at right now would not work because the device drivers would
not work. Any optimizing compiler that people write device drivers in
practically *has* to support volatile or it won't work for that
purpose. (In the days before volatile you needed vile tricks to
assure your intent was followed, or you needed to not optimize driver
code, or both.)

Some have claimed volatile is not a mandatory part of C. Well, it is
certainly mandatory in the C standards I have at hand. C99 makes it
abundantly clear that you have to do it and do it correctly.

Some have claimed you can't know that the compiler writer implemented
volatile correctly so you need a #pragma. Well, that doesn't actually
help you. If they haven't implemented volatile right, why should
they implement the pragma correctly? We already have a way of
indicating "do not reorder or eliminate this code" which is in
existing standards -- if it doesn't work, that's a bug in your
compiler, and it is better to get the bug fixed than to ask for
another feature to be added that might also be buggy and which is not
part of the standard.

So in short, yes, volatile might be implemented in a buggy way by
your compiler (which you should certainly test for if it is important
to you!) but if your compiler is in fact properly implemented and
standards compliant, volatile is the way to accomplish what you are
trying to accomplish here.


-- 
Perry E. Metzger[EMAIL PROTECTED]

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Did you *really* zeroize that key?

2002-11-07 Thread Patrick Chkoreff


From: Trei, Peter [EMAIL PROTECTED]

[Moderator's note: FYI: no pragma is needed. This is what C's
volatile keyword is for. Unfortunately, not everyone writing in C
knows the language. --Perry]


Thanks for the reminder about volatile.  It is an ancient and valuable 
feature of C and I suppose it's implemented correctly under gcc and some of 
the Windoze compilers even with high optimization options like -O2.

From RISKS:
http://catless.ncl.ac.uk/Risks/22.35.html#subj6

Those of us who write code need to be reminded of this
now and then.


Everybody probably also knows about the gnupg trick, where they define a 
recursive routine called burn_stack:

static void
burn_stack (int bytes)
{
    char buf[64];

    memset (buf, 0, sizeof buf);
    bytes -= sizeof buf;
    if (bytes > 0)
        burn_stack (bytes);
}
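A self-contained sketch of how a caller might use the trick (burn_stack is repeated so the fragment compiles on its own; the other names are invented, not from gnupg):

```c
#include <string.h>

/* Recursively overwrite the region of stack just below the caller;
 * same idea as the gnupg routine. */
static void burn_stack(int bytes)
{
    char buf[64];

    memset(buf, 0, sizeof buf);
    bytes -= sizeof buf;
    if (bytes > 0)
        burn_stack(bytes);
}

/* Hypothetical cipher step that keeps sensitive material in locals. */
static void do_rounds(void)
{
    char roundkeys[128];              /* stand-in for key schedule */
    memset(roundkeys, 0xAA, sizeof roundkeys);
    /* ... cipher work would happen here ... */
}

/* The burner is called after the transform returns, so its recursive
 * frames reuse (and zero) the stack region do_rounds just vacated. */
static int encrypt_block(void)
{
    do_rounds();
    burn_stack(200);                  /* locals plus a safety margin */
    return 1;
}
```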

Then there's the vararg technique discussed in Michael Welschenbach's book 
Cryptography in C and C++:

#include <stdarg.h>

static void purgevars_l (int noofvars, ...)
{
  va_list ap;
  size_t size;
  va_start (ap, noofvars);
  for (; noofvars > 0; --noofvars)
{
  switch (size = va_arg (ap, size_t))
{
  case 1:  *va_arg (ap, char *) = 0;
   break;
  case 2:  *va_arg (ap, short *) = 0;
   break;
  case 4:  *va_arg (ap, long *) = 0;
   break;
  default:
   memset (va_arg(ap, char *), 0, size);
}
}
  va_end (ap);
}

Here's an example of how you might call the routine:

  purgevars_l(2, sizeof (la), &la,
   sizeof (lb), &lb);


But hey, if the volatile keyword works then so much the better.  I would 
recommend examining the assembly language output of your compiler to verify 
that it honours volatile.
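A minimal sketch of the volatile approach for zeroizing a key buffer (the function name is mine for illustration, not a standard routine):

```c
#include <stddef.h>

/* Zeroize a buffer through a volatile-qualified pointer.  Because
 * each store is a volatile access, a conforming compiler may not
 * optimize the loop away even though the buffer is dead afterwards,
 * which is exactly what happens to a plain memset() of a dead key. */
static void secure_memzero(void *p, size_t n)
{
    volatile unsigned char *vp = (volatile unsigned char *)p;

    while (n--)
        *vp++ = 0;
}
```

As recommended above, it is still worth checking the compiler's assembly output at your optimization level of choice.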

-- Patrick
http://fexl.com


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Did you *really* zeroize that key?

2002-11-07 Thread Patrick Chkoreff


From: Trei, Peter [EMAIL PROTECTED]

[Moderator's note: FYI: no pragma is needed. This is what C's
volatile keyword is for. Unfortunately, not everyone writing in C
knows the language. --Perry]


Thanks for the reminder about volatile.  It is an ancient and valuable 
feature of C and I suppose it's implemented correctly under gcc and some 
of the Windoze compilers even with high optimization options like -O2.

Oops, I missed your real point, which is that volatile ought to suffice 
as a compiler guide and there is no need for an additional pragma.  If a 
variable is declared volatile, the compiler must also leave untouched 
any code which refers to that variable.

Too bad that volatile is not guaranteed to work in all major ANSI-compliant 
compilers.  Oh well.  I wonder how gcc does with it?

[Moderator's note: I've quoted chapter and verse -- if it follows the
current standards, it is required to honor volatile. It isn't
compliant by definition if it does not. gcc does indeed honor
volatile, as do almost all other C compilers I have access to. --Perry]

I guess we should stick with either the recursive routine trick or the 
var-arg trick.

-- Patrick
http://fexl.com


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Did you *really* zeroize that key?

2002-11-07 Thread Peter Gutmann
David Honig [EMAIL PROTECTED] writes:

Wouldn't a crypto coder be using paranoid-programming skills, like 
*checking* that the memory is actually zeroed? (Ie, read it back..)
I suppose that caching could still deceive you though?

You can't, in general, assume the compiler won't optimise this away
(it's just been zeroised, there's no need to check for zero).  You 
could make it volatile *and* do the check, which should be safe from 
being optimised.
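A sketch of that combination, zeroize and read back through the same volatile pointer (the function name is invented for illustration):

```c
#include <stddef.h>

/* Zero a buffer, then verify it by re-reading through a volatile
 * pointer.  The volatile loads keep the compiler from reasoning
 * "it was just zeroised, so the check must pass" and folding the
 * verification loop away. */
static int zeroize_and_verify(unsigned char *buf, size_t n)
{
    volatile unsigned char *vp = buf;
    size_t i;

    for (i = 0; i < n; i++)
        vp[i] = 0;               /* volatile store: not removable   */
    for (i = 0; i < n; i++)
        if (vp[i] != 0)          /* volatile load: actually re-read */
            return 0;
    return 1;
}
```

As noted, this addresses the compiler; whether the bytes behind a cache line ever reach main memory is a separate question.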

It's worth reading the full thread on vuln-dev, which starts at
http://online.securityfocus.com/archive/82/297827/2002-10-29/2002-11-04/0.
This discusses lots of fool-the-compiler tricks, along with rebuttals
on why they could fail.

Peter.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: patent free(?) anonymous credential system pre-print - a simple attack and other problems

2002-11-07 Thread Stefan Brands
Hello Jason:

Page 193 and 210 do talk about having an identifying 
value encoded in the credentials which the holder can 
prove is or isn't the same as in other credentials. However, 
the discussion on page 193 is with respect to building 
digital pseudonyms

No, not at all. The paragraph on page 193 that I referred to is the one
starting with "In some PKIs it is desirable that certificate holders can
anonymously prove to be the originator of several showing protocol
executions." It _precedes_ the paragraph on digital pseudonyms, which
starts with "A special application of the latter technique are
credential systems in which certificate holders [...] establish digital
pseudonyms with organizations." 

I can think of ways in which this feature might be leveraged to 
create otherwise-unlinkable sets of credentials from different 
(distrusting) CAs, but it's never addressed directly that I can 
see, and would need some specifics filled in.

There are no specifics to be filled in; the paragraph on page 193
states everything there is to it. If the credential holder engages in several
showing protocols (whether in sequence or in parallel, and regardless of
whether at the same time or at different times -- the paragraph applies
to any situation), all that is needed to prove that no pooling is going
on is the abovementioned proof that the credentials all contain the same
hidden identifier. 

Note that the prover can _hide_ this identifier, thereby allowing him to
prevent linkability with other showing protocol executions for which no
link needs to be established. Of course, the technique also works if
there are many CAs. The user can even prevent the CAs from learning the
built-in identifier that is central to all (or a subset of) his/her
credentials.  (A special CA could issue restrictively blinded versions
of the user's identity, which the user then submits to different CAs,
who encode it into the certificates they issue.)

Page 211 of your book talks about discouraging lending, which doesn't
help in the case when Bob answers on Alice's behalf when she shows his
credentials. 

Discouraging lending is not the same as preventing pooling. The lending
prevention technique was not intended to address pooling, the technique
on page 193 does a much more effective job at that. However, in your
approach, what prevents me from giving my credentials to someone else
who then uses them to gain access to a service without needing to pool
in any other credentials than the one I lent to him? 

Note also that when all credential attributes are specified within the
same certificate, and the verifier requires authorization information to
be contained within a single attribute certificate, pooling is
inherently prevented. 

What do you mean by "forced to leave behind digital signatures"?  

There is no zero-knowledge variant of your protocol; the verifier ends
up with indisputable evidence (towards third parties) of the
transaction, and in particular of which attribute values have been shown
by the credential holder. Any digital signatures that are made by
certificate holders can be added to their dossiers; they form
self-signed statements that cannot be repudiated, proving to the whole
world who is the originator of a message and possibly what information
they were willing to give up in return for a service. Doing a
zero-knowledge variant of your proposal requires one to prove knowledge
in zk of various elements rather than showing them in the clear; this
requires extremely inefficient zk techniques, such as for proving
knowledge of a pre-image under a specific hash function.

I'll expand my related work section to point out that your system and
others have lots of features which my system doesn't attempt to
provide. 
My apologies if my terse treatment mischaracterized your work.

I realize that many of the features of my work are described in a very
dense manner in the book, and therefore it is easy to overlook them. For
example, on the same page 193 there is a sentence: "Using our techniques
it is also straightforward for several certificate holders to jointly
demonstrate that their showing protocol executions did not all originate
from the same certificate holder, or for one certificate holder to show
that he or she was not involved in a fraudulent transaction." The same
applies to my description of the simple hash selective disclosure
technique on page 27, which only gets two sentences, and to many other
techniques/functionalities. The only excuse I have for this is that the
book is a minor revision of my PhD thesis, and so the technical parts
had to be targeted towards an expert audience; while skilled
cryptographers will indeed find the dense statements more than
sufficient, and may even consider some of them as trivial applications
of the general techniques, I can see that this may not always be the
case for readers in general. 

Good luck with your research!
Stefan Brands