Gartner supports HK smart ID card use

2002-04-29 Thread R. A. Hettinga

http://technology.scmp.com/cgi-bin/gx.cgi/AppLogic+FTContentServer?pagename=SCMP/Printacopyaid=ZZZ2HRQ0B0D





Tuesday, April 23, 2002
Gartner supports HK smart ID card use


DOUG NAIRNE

Research firm Gartner has issued a favourable report on Hong Kong's
contentious smart identification card programme, saying the initiative will
put Hong Kong at the forefront of deploying the technology.

Dion Wiggins, research director at Gartner Group in Hong Kong, said the SAR
was continuing its history of pioneering smart card use with its decision
to issue ID cards with an embedded chip to all residents.

"Once implemented, Hong Kong will be well-positioned to deliver efficient
government services as well as provide greater security, community
benefits, access and streamlined secure e-commerce to its entire
population," Mr Wiggins wrote in a briefing paper last week.

"The implementation of the [smart card] project will take Hong Kong a long
way towards its goal of being one of the first truly digital economies."

Australia-based Gartner analyst Robin Simpson said Hong Kong would be the
first government to implement a multi-purpose, multi-application smart ID
for its population.

"One reason that the Hong Kong SAR project is so significant is that for
the first time, smart card infrastructure will be very widely deployed
across the whole geography to service the entire population," he said.

Mr Simpson said other jurisdictions had wrestled with a dilemma where
citizens would not want smart cards unless they could use them everywhere,
but enterprise would not deploy sufficient infrastructure unless a large
number of people had smart cards.

He said the project was also significant because it was the first time a
government had encouraged private enterprise to take advantage of the smart
card infrastructure by allowing certified third-party applications to be
loaded on to the cards.

The new ID card programme will be formally launched in July, and the cards
will be phased in over four years.

They will store data including a photograph and fingerprints, and can
optionally be used as a digital certificate, driving licence and library
card.

Despite government assurances that the information stored on the cards will
be secure, there have been concerns over forgery or theft.

However, Gartner concludes that the existing ID card system, which was
introduced in 1987, is outdated and no longer able to meet the growing
needs of the . . . Government.

Mr Wiggins said Hong Kong's small population and mandatory identity card
programme made the adoption of smart cards easier to execute.

He said the HK$3 billion programme cost is only 10 per cent higher than the
cost to replace existing ID cards with a non-smart ID.

Gartner concludes that Hong Kong will be one of the few places where smart
ID cards are embraced in the near future.

In the United States, where the Government is pondering a national ID card
programme in the wake of the September 11 terrorist attacks, there has been
fierce resistance to the idea.

Adding smart functions to the cards will make them even less palatable,
Gartner predicts.

The report said: "National identification cards will face a steep uphill
battle that will impede their deployment and acceptance in the US. Through
2007, US-based identification card deployers will encounter substantial
resistance to adoption that will increase with added functionality."



Published in the South China Morning Post. Copyright © 2002. All rights
reserved.

-- 
-
R. A. Hettinga mailto: [EMAIL PROTECTED]
The Internet Bearer Underwriting Corporation http://www.ibuc.com/
44 Farquhar Street, Boston, MA 02131 USA
... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience. -- Edward Gibbon, 'Decline and Fall of the Roman Empire'

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Lucky's 1024-bit post [was: RE: objectivity and factoring analysis]

2002-04-29 Thread Wei Dai

I have one other question about the panel analysis. Why did it focus only 
on the linear algebra part of the NFS algorithm? I would like to know, 
given the same assumption on the factor base size (10^9), how much would 
it cost to build a machine that can perform the relationship finding phase 
of the algorithm in the same estimated time frame (i.e. months)?

Using a factor base size of 10^9, in the relationship finding phase you
would have to check the smoothness of 2^89 numbers, each around 46 bits
long. (See Frog3's analysis posted at
http://www.mail-archive.com/cryptography%40wasabisystems.com/msg01833.html.  
Those numbers look correct to me.)  If you assume a chip that can check
one number per microsecond, you would need 10^13 chips to be able to
complete the relationship finding phase in 4 months. Even at one dollar
per chip this would cost ten trillion dollars (approximately the U.S. 
GDP).

So it would seem that it's still not possible for even a major government
to factor 1024-bit keys within an operationally relevant time frame unless
it was willing to devote most of its national income to the effort.

BTW, if we assume one watt per chip, the machine would consume 87 trillion
kWh of electricity per year. The U.S. electricity production was only 3.678
trillion kWh in 1999.
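
A rough check of this arithmetic in Python, using the same assumptions
(2^89 smoothness tests, one test per chip per microsecond, a four-month
window, $1 and 1 W per chip); the figures are order-of-magnitude only:

# Back-of-envelope check of the chip count, cost and power figures above.
SECONDS_IN_4_MONTHS = 4 * 30 * 24 * 3600          # ~1.0e7 s
TESTS_PER_CHIP_PER_SECOND = 1e6                   # one smoothness test per microsecond
SMOOTHNESS_TESTS = 2 ** 89                        # numbers to test for smoothness

chips_needed = SMOOTHNESS_TESTS / (SECONDS_IN_4_MONTHS * TESTS_PER_CHIP_PER_SECOND)
print(f"chips needed: {chips_needed:.0e}")        # ~6e13, i.e. on the order of 10^13

ROUNDED_CHIPS = 1e13                              # the rounded figure used above
print(f"cost at $1/chip: ~${ROUNDED_CHIPS:,.0f}") # ~$10 trillion, roughly 2002 US GDP
kwh_per_year = ROUNDED_CHIPS * 1 * 8760 / 1000    # 1 W per chip, 8760 h/year, W -> kW
print(f"power: ~{kwh_per_year / 1e12:.0f} trillion kWh/year")   # ~88 trillion kWh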

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



FW: NAI's seeming replacement for PGP desktop

2002-04-29 Thread R. A. Hettinga


--- begin forwarded text


Status:  U
From: Somebody
Subject: FW: NAI's seeming replacement for PGP desktop
Date: Wed, 24 Apr 2002 16:52:24 +0100
Thread-Topic: NAI's seeming replacement for PGP desktop
To: [EMAIL PROTECTED]


-Original Message-
From: Somebody Else
Sent: 24 April 2002 15:37
To: A buncha people
Subject: NAI's seeming replacement for PGP desktop

http://www.mcafeeb2b.com/products/ebusiness.asp

...named McAfee E-Business Desktop.  Just had a talk with our local
sales rep.  From our talk, and from what I can see on the URL above,
this appears to be (or at least incorporate) portions of PGP desktop,
with some restrictions:

- Purchaser must already have at least one E-Business Server license
(PGP Server in conjunction with some other crypto), at $2000 per license
- The product appears to require interaction with the server in order to
function -- no more standalone crypto?

Unknown are:

- whether there is a commandline interface, or if the app is scriptable
- whether there is compatibility with existing PGP keyrings
- key management -- is there a central repository, and how is it
managed?
- does crypto occur only on the server, or on the desktop?  If the
former, some interesting security issues

Still awaiting word on:

- whether NAI will still sell us licenses to make more copies of 6.5.8
(so far, answer is doubtful)

Q.  Why not use 6.5.8 freeware (for which there is source?)
A.  If used commercially, a license must be purchased from NAI -- same
as for commercial 6.5.8

Q.  What about gnupg?
A.  Possibility -- some key compatibility issues exist that may or may
not affect us.  Someone would need to build and own the code.

More info as it comes in...

incriminating snippage and .sigs removed...
--- end forwarded text


-- 
-
R. A. Hettinga mailto: [EMAIL PROTECTED]
The Internet Bearer Underwriting Corporation http://www.ibuc.com/
44 Farquhar Street, Boston, MA 02131 USA
... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience. -- Edward Gibbon, 'Decline and Fall of the Roman Empire'

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: objectivity and factoring analysis

2002-04-29 Thread Anonymous

Nicko van Someren writes:
 I used the number 10^9 for the factor base size (compared to about
 6*10^6 for the break of the 512 bit challenge) and 10^11 for the
 weight of the matrix (compared to about 4*10^8 for RSA512).  Again
 these were guesses and they certainly could be out by an order of
 magnitude.

In his paper Bernstein uses a relatively larger factor base than in
typical current choices of parameters.  It's likely that the factor
bases which have been used in the past are too small in the sense that
the linear algebra step is being limited by machine size rather than
runtime, because of the difficulty of parallelizing it.  For example in
http://www.loria.fr/~zimmerma/records/RSA155 we find that the sieving took
8000 mips years but the linear algebra took 224 CPU hours on a 2GB Cray.
If there were a larger machine to do the matrix solution, the whole
process could be accelerated, and that's what Bernstein's figures assume.

Specifically he uses a factor base size of L^.7904, where L for 1024 bit
keys is approximately 2^45.  This is a matrix size of about 50 billion,
50 times larger than your estimate.  So a closer order of magnitude
estimate would be 10^11 for the factor base size and 10^13 for the weight
of the matrix.
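
For reference, these figures can be reproduced with a few lines of Python,
assuming the usual L-notation base L = exp((ln N)^(1/3) * (ln ln N)^(2/3))
for a 1024-bit modulus N (an assumption here; see Bernstein's paper for the
exact definition he uses):

# Reproduce the factor base size quoted above, assuming
# L = exp((ln N)^(1/3) * (ln ln N)^(2/3)) with N a 1024-bit modulus.
import math

ln_N = 1024 * math.log(2)                           # ln N for N ~ 2^1024
ln_L = ln_N ** (1/3) * math.log(ln_N) ** (2/3)

print(f"L ~ 2^{ln_L / math.log(2):.0f}")            # ~2^45
print(f"L^0.7904 ~ {math.exp(0.7904 * ln_L):.1e}")  # ~5e10, i.e. about 50 billion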

 The matrix reduction cells are pretty simple and my guess was
 that we could build the cells plus inter-cell communication
 in about 1000 transistors.  I felt that, for a first order guess,
 we could ignore the transistors in the edge drivers since for a
 chip with N cells there are only order N^(1/2) edge drivers.
 Thus I guessed 10^14 transistors which might fit onto about 10^7
 chips which in volume (if you own the fabrication facility) cost
 about $10 each, or about $10^8 for the chips.  Based on past work
 in estimating the cost of large systems I then multiplied this
 by three or four to get a build cost.

The assumption of a larger factor base necessary for the large asymptotic
speedups would increase the cost estimate by a factor of about 50.
Instead of several hundred million dollars, it would be perhaps 10-50
billion dollars.  Of course at this level of discussion it's just as
easy to assume that the adversary spends $50 billion as $500 million;
it's all completely hypothetical.
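
The cost scaling can be sketched in the same spirit; all of the constants
below are the rough guesses from the quoted text and the factor of 50
argued above, not measured values:

# Build-cost sketch from the quoted per-cell guesses and the x50 scaling above.
TRANSISTORS_PER_CELL = 1_000       # matrix cell plus inter-cell communication
TRANSISTORS_PER_CHIP = 1e7         # implied by 10^14 transistors on ~10^7 chips
DOLLARS_PER_CHIP = 10              # in volume, owning the fabrication facility
BUILD_MULTIPLIER = 3.5             # "three or four" times chip cost for the system

cells = 1e11                       # weight of the matrix for a 10^9 factor base
chips = cells * TRANSISTORS_PER_CELL / TRANSISTORS_PER_CHIP        # ~1e7 chips
base_cost = chips * DOLLARS_PER_CHIP * BUILD_MULTIPLIER            # ~$3.5e8
print(f"build cost, 10^9 factor base:  ~${base_cost:.1e}")         # several hundred million
print(f"build cost, 10^11 factor base: ~${base_cost * 50:.1e}")    # ~$1.8e10, tens of billions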

 As far at the speed goes, this machine can compute a dot product
 in about 10^6 cycles.

Actually the sort algorithm described takes 8*sqrt(10^11) or about 2.5 *
10^6 cycles, and there are three sorts per dot product, so 10^7 cycles
would be a better estimate.

Using the larger factor base with 10^13 entries would imply a sort
time of 10^8 cycles, by this reasoning.
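
In code, the per-dot-product cycle count under these assumptions (8*sqrt(n)
cycles to sort n entries, three sorts per dot product) is:

# Cycles per dot product, assuming a sort of n entries costs ~8*sqrt(n)
# cycles and each dot product needs three sorts.
import math

def dot_product_cycles(n_entries):
    return 3 * 8 * math.sqrt(n_entries)

print(f"{dot_product_cycles(1e11):.1e}")   # ~7.6e6, i.e. roughly 10^7 cycles
print(f"{dot_product_cycles(1e13):.1e}")   # ~7.6e7, i.e. roughly 10^8 cycles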

 Initially I thought that the board to
 board communication would be slow and we might only have a 1MHz
 clock for the long haul communication, but I messed up the total
 time and got that out as a 1 second matrix reduction.  In fact to
 compute a kernel takes about 10^11 times longer.  Fortunately it
 turns out that you can drive from board to board probably at a
 few GHz or better (using GMII type interfaces from back planes
 of network switches).  If we can get this up to 10GHz (we do have
 lots to spend on R&D here) we should be able to find a kernel in
 somewhere around 10^7 seconds, which is 16 weeks or 4 months.

Taking into consideration that the sort algorithm takes about 8 times
longer than you assumed, and that a few minimal polynomials have to
be calculated to get the actual one, this adds about a factor of 20
over your estimate.  Instead of 4 months it would be more like 7 years.
This is pretty clearly impractical.

Apparently Ian Goldberg expressed concerns about the interconnections
when the machine was going to run at 1 MHz.  Now it is projected to run
10,000 times faster?  That's an aggressive design.  Obviously if this
speed cannot be achieved the run time goes up still more.  If only 1
GHz can be achieved rack to rack then the machine takes 70 years for one
factorization.  Needless to say, any bit errors anywhere will destroy the
result which may have taken years to produce, requiring error correction
to be used, adding cost and possibly slowing the effective clock rate.

Using the larger factor base from the Bernstein paper would increase
the time to something like 10^11 seconds, thousands of years, which is
out of the question.
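
A minimal sketch of the run-time scaling in the last few paragraphs, taking
the original four-month figure as the baseline; the factors of 20 and 10
are the corrections argued above:

# Run-time scaling for the matrix step, starting from the ~4-month estimate.
SECONDS_PER_YEAR = 365 * 24 * 3600

baseline = 4 * 30 * 24 * 3600        # original estimate: ~4 months, in seconds
corrected = baseline * 20            # ~8x slower sorts, plus several minimal polynomials
print(f"corrected: ~{corrected / SECONDS_PER_YEAR:.0f} years")           # ~7 years
print(f"at 1 GHz:  ~{corrected * 10 / SECONDS_PER_YEAR:.0f} years")      # ~66, roughly the 70 above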

 Lastly, I want to reiterate that these are just estimates.  I
 give them here because you ask.  I don't expect them to be used
 for the design of any real machines; much more research is
 needed before that.  I do however think that they are rather
 more accurate than my first estimates.

These estimates are very helpful.  Thanks for providing them.  It seems
that, based on the factor base size derived from Bernstein's asymptotic
estimates, the machine is not feasible and would take thousands of years
to solve a matrix.  If the 50 times smaller factor base can be used,
the machine is on the edge of feasibility but it appears that it would
still take years to factor a single value.


Re: Lucky's 1024-bit post [was: RE: objectivity and factoring analysis]

2002-04-29 Thread Anonymous

Lucky Green writes:
 Given how panels are assembled and the role they fulfill, I thought it
 would be understood that when one writes that certain results came out
 of a panel that this does not imply that each panelist performed the
 same calculations. But rather that the information gained from a
 panel (Ian: math appears to be correct, Nicko: if the math is correct,
 these are the engineering implications of the math) are based on the
 combined input from the panelists. My apologies if this process of a
 panel was not understood by all readers and some readers therefore
 interpreted my post to indicate that both Ian and Nicko performed
 parallel engineering estimates.

What he wrote originally was:

: The panel, consisting of Ian Goldberg and Nicko van Someren, put forth
: the following rough first estimates:
:
: While the interconnections required by Bernstein's proposed architecture
: add a non-trivial level of complexity, as Bruce Schneier correctly
: pointed out in his latest CRYPTOGRAM newsletter, a 1024-bit RSA
: factoring device can likely be built using only commercially available
: technology for a price range of several hundred million dollars to about
: 1 billion dollars
: Bernstein's machine, once built, ... will be able to break a 1024-bit
: RSA or DH key in seconds to minutes.

It's not a matter of assuming parallel engineering estimates, but rather
the implication here is that Ian endorsed the results.  In saying that
the panel put forth a result, and the panel is composed of named people,
it implies that the named people put forth the result.  The mere fact
that Ian found it necessary to immediately post a disclaimer makes it
clear how misleading this phrasing was.

Another problem with Lucky's comment is that somewhere between Nicko's
thinking and Lucky's posting, the fact was dropped that only the matrix
solver was being considered.  This is only 1/2 the machine; in fact in
most factoring efforts today it is the smaller part of the whole job.
Neither Nicko nor Ian nor anyone else passed judgement on the equally
crucial question of whether the other part of the machine was buildable.

 It was not until at least a week after FC that I contacted Nicko
 inquiring if he still believed that his initial estimates were correct,
 now that that he had some time to think about it. He told me that the
 estimates had not changed.

It is obvious that in fact Nicko had not spent much time going over
his figures, else he would have immediately spotted the factor of 10
million error in his run time estimate.  Saying that his estimates had
not changed is meaningless if he has not reviewed them.

Lucky failed to make clear the cursory nature of these estimates, that the
machine build cost was based on a hurried hour's work before the panel,
and that the run time was based on about 5 seconds calculation during
the panel itself.  It's not relevant whether this was in part Nicko's
fault for perhaps not making clear to Lucky that the estimate stood in
the same shape a week later.  But it was Lucky who went public with the
claim, so he must take the blame for the inaccuracy.

In fact, if Lucky had passed his incendiary commentary to Nicko and
Ian for review before publishing it, it is clear that they would have
asked for corrections.  Ian would have wanted to remove his name from
the implied endorsement of the numeric results, and Nicko would have
undoubtedly wanted to see more caveats placed on figures which were
going to be attached to his name all over the net, as well as making
clear that he was just talking about the matrix solution.  Of course
this would have removed much of the drama from Lucky's story.

The moral is if you're going to quote people, you're obligated to check
the accuracy of the quotes.  Lucky is not a journalist but in this
instance he is playing one on the net, and he deserves to be criticized
for committing such an elementary blunder, just as he would deserve
credit for bringing a genuine breakthrough to wide attention.

 For example, Bruce has been quoted in a widely-cited eWeek article that
 "I don't assume that someone with a massive budget has already built
 this machine, because I don't believe that the machine can be built."

 Bruce shortly thereafter stated in his Cryptogram newsletter that "I
 have long believed that a 1024-bit key could fall to a machine costing
 $1 billion."

 Since these quotes describe mutually exclusive view points, we have an
 example of what can happen when a debate spills over into the popular
 media.
 ...
 http://www.eweek.com/article/0,3658,s=712a=24663,00.asp

They are not mutually exclusive, and the difference is clear.  In the
first paragraph, Bruce is saying that Bernstein's design is not practical.
To get his asymptotic results of 3x key length, Bernstein must forego the
use of sieving and replace it with a parallel ECM factoring algorithm
to determine smoothness.  Asymptotically, this is a much lower cost
approach for finding relations,