Re: MUSCLE standards
Andreas Schwier wrote in reply to Dave's posting (and my responses are embedded in the text below): I fully agree with your statements, but it's not that terrible. I've been working in the industry for 14 years now, and as a former member of the German standardisation body (NI17.4) I've been involved in the work that gave us ISO 7816-3, -4 and -5. Industry standards are always a tradeoff. They usually reflect the interests of the companies that send people to these standardisation groups. It's quite expensive to do this kind of work, so it is no surprise that companies only invest that money if they see some benefit (which is surely not interoperability).

Interoperability is now a mandatory requirement for the public sector in Europe, and the eEurope Initiative aims, with its Smart Card Charter, to bring about interoperability in transactions between the citizen and business. The user-centric goals are a single contact reader slot specification or standard that takes all the cards a citizen may wish to use in transactions with government, banks and business, and single- and multi-application cards issued for this purpose that are always accepted in that slot. On top of this there is a competition requirement: the public sector must not be tied into single-supplier relationships in this area. Please look at the UK Framework for Smart Cards in Government on the www.iagchampions.gov.uk web site - there is a similar push for a common PKI infrastructure, introduced in another document on that web site. eEurope covers the same topics: cards, terminals, PKI infrastructure.

I was yesterday at a meeting to discuss the UK specifications for public transport ticketing. On the bus, the card interface has to be contactless. At other times (e.g. purchasing the ticket over the internet), the card interface can be contact.
Over the next month, some of us hope to study the drafts of ISO 14443 up to part 3 (parts 2 and 3 have just been completed, ready for the FDIS stage) in order to assess whether they are good enough to build interoperable systems for both microprocessor cards and Mifare cards.

The other problem with standards is that they only contain the results of the very long discussions that were held in these groups. Reading the standard does not tell you why things are done in a certain way. You only see, in a very compressed document, what everyone finally agreed on. And the reasons should be documented and published, because they would help designers to understand the often impenetrable text of the standard. There is another problem with standards: patents (but that is a long story, and someone else has explained part of it).

Now other people take these standards and try to implement them. And because developers don't really like reading standards, and because they don't have the background information, things get out of hand. A good example is the chaining mechanism in T=1. Soon after the first drafts of T=1 were written, people started implementing T=1, but they left out chaining and claimed that it is optional. If you look at the standard it clearly is not, and you see that T=1 without chaining doesn't make any sense.

Standards take a very long time to develop, and it would not work if companies did not try to implement proprietary systems, just to find the best solution for a standard. But because of this, standards often need to embrace even these poor first-time shots, because they suddenly became successful in the marketplace and defined the common standard (I can think of better ways to exchange documents than in Word format, but it's there and won't go away soon). There is no excuse for standards developers not doing a professional job, i.e. getting the standard right. The problem is basically finance.
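The chaining mechanism Andreas singles out can be sketched in a few lines. This is not a full T=1 implementation - just the block-splitting and more-data (M) bit logic, with the prologue layout (NAD, PCB, LEN) and LRC per ISO 7816-3; the helper names and the IFSC default are ours:

```python
# Sketch of T=1 I-block chaining: an APDU longer than the card's IFSC
# must be split across I-blocks, all but the last carrying the M-bit.
# Leaving chaining out (as early implementations did) means long
# commands simply cannot be sent.

def lrc(data: bytes) -> int:
    """Longitudinal redundancy check: XOR over prologue + INF."""
    x = 0
    for b in data:
        x ^= b
    return x

def chain_iblocks(apdu: bytes, ifsc: int = 32, nad: int = 0x00):
    """Yield the chained I-blocks that carry `apdu` to the card."""
    seq = 0  # N(S), the send-sequence bit; toggles per I-block
    chunks = [apdu[i:i + ifsc] for i in range(0, len(apdu), ifsc)] or [b""]
    for i, inf in enumerate(chunks):
        more = i < len(chunks) - 1                   # M-bit: more to come
        pcb = (seq << 6) | (0x20 if more else 0x00)  # I-block: top bit 0
        block = bytes([nad, pcb, len(inf)]) + inf
        yield block + bytes([lrc(block)])
        seq ^= 1

# A 70-byte command with IFSC=32 needs three blocks; the first two
# have the M-bit set, the last does not.
blocks = list(chain_iblocks(bytes(70)))
```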
Many companies and countries want the standard, and they want it to be correct and complete, but they will not finance the work - so the developers are frequently not allowed to do the job properly. An ISO editor tells me that ISO has moved to PDF format, and we still have problems, as was evident at the UK contactless technical panel recently when we were comparing documents that we had each printed off.

PC/SC is a good global specification to start with. I guess the combination of PC/SC with the CT-API / CT-BCS approach in M.U.S.C.L.E. will work quite well. I don't see companies writing terminal drivers for OCF, and IMHO OCF will always be used as a layer on top of PC/SC. When you look at the low levels of PC/SC (electrical, data comms, ATR processing), and provided that you have knowledge of both ISO 7816 and EMV, you quickly realise that PC/SC is a mess down there. It is a muddled mixture of ISO and EMV, and it says that if the reader can't cope with the ATR that the card sends, the reader should carry on and hope that something works. That is not a professional way to do things. I have asked PC/SC (and Microsoft) to sort this out, but nothing has been done.
Re: MUSCLE standards
On Tue, 23 May 2000, Andreas Schwier wrote: Hi Dave

1) One communication protocol should be used. Currently there are several: T=0, T=1, Synchronous, and others. My personal feeling is that all cards should communicate in the T=1 block protocol. It is much more efficient, and gives the card a way of communicating back to the reader to establish resynchronization or to communicate the need for more waiting time. T=0 does this through the ATR, so if the card needs more time it has to change the ATR to notify the host of this. I feel this is a poor way of doing this.

Well, you have to define a first protocol, then find you can enhance it, etc. That's why you have so many communication protocols... You're looking at your first smart card and maybe think it was invented 2 years ago. That's not true... Most modern cards do that today; unfortunately some French companies do not really like T=1 for historic reasons. Quite often these cards even support both protocols with PPS.

2) The ATR should be used as a means for card identification. It is ridiculous that much of the ATR can be changed except the protocol information. I think the ATR should have 6 historical bytes reserved for identification: 2 for manufacturer id, 2 for manufacturer mask, and 2 for user definition. That makes 65,536 manufacturers, 65,536 masks and 65,536 user definitions. The user can only change their 2 bytes. Thus the card can still be identified by its core OS 2 bytes manufacturer / 2 bytes mask.

And who is going to register and maintain these identifiers? And why would I need the ATR to identify the card? What I need is a mechanism to identify the application on the card, and I can do that using the AID stored in EF_DIR. PC/SC introduced the concept of identifying the card with the ATR, and as you can see, that doesn't really work well. Why do I want Microsoft Plug and Play with a smartcard? In most cases I know what I want to do with the card before I insert it into the reader.
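The EF_DIR route Andreas prefers over ATR matching is easy to illustrate. Per ISO 7816-4/5, each EF_DIR record is an application template (tag 0x61) carrying an AID (tag 0x4F) and optionally a label (tag 0x50). The sketch below handles only flat, short-form TLV, and the example record is invented:

```python
# Minimal parser for one EF_DIR record: pull the AID (and label, if
# present) out of the application template. Real records may nest
# further data objects; this sketch assumes single-byte lengths.

def parse_ef_dir_record(record: bytes) -> dict:
    assert record[0] == 0x61, "not an application template"
    body = record[2:2 + record[1]]
    entry, i = {}, 0
    while i < len(body):
        tag, length = body[i], body[i + 1]
        value = body[i + 2:i + 2 + length]
        if tag == 0x4F:                       # AID
            entry["aid"] = value.hex().upper()
        elif tag == 0x50:                     # application label
            entry["label"] = value.decode("ascii", "replace")
        i += 2 + length
    return entry

# Invented example: a template with a 7-byte AID and the label "Demo".
rec = bytes.fromhex("610F" "4F07" "A0000000030000" "5004") + b"Demo"
entry = parse_ef_dir_record(rec)
```

With this in hand, host software selects the application by AID and never needs the ATR for anything beyond interface parameters.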
Besides that, it would be difficult to allow the whole ATR to be changed, including the protocol type and the endianness. Smartcard manufacturers still have to deal with a lack of physical space, small memory footprints, and low cost... That makes integration of "bells'n'whistles" difficult to do...

I also agree with Andreas, in the sense that a well-written application should not rely on a specific ATR to identify an application. The ATR should be there only to identify the specific chip type and revision. With multi-application smartcards, it's not possible to modify the ATR to reflect all the applications loaded in the card. I would prefer to use another mechanism: just specify a function with a specific CLA/INS combination that returns a known value depending on the presence of an application... That way you could have several applications loaded in the card and test whether they're activated or not. Doing so consumes a small amount of precious EEPROM bytes, but ...

The ATR is for the pure purpose of defining the interface characteristics for communication between the card and the terminal. Everything else is application. The historical bytes shall be used to offer some proprietary information about the chip that can be used for diagnostic purposes. ISO 7816-4 contains all that's needed in sections 8 - Historical bytes - and 9 - Application independent card services. It's up to the application developer to make use of that.

3) ISO-7816 should include a command for the creation of a transparent file and a command for the listing of files.

ISO 7816 Part 9 - Additional interindustry commands and security attributes - does that. Yeah, that's already the case... Some commands regarding file operations are ISO defined...

5) There must be a standard for putting the keys on the card. If RSA is used then do pq... whatever, but in the same order on each card. Also, cards should have the same endianness. This is crazy - people haven't learned their lessons on this one yet.
That's a little bit more tricky, as the keys are usually stored in specific areas in the card. This would clearly be the job of a card service provider in PC/SC. On top of that, a PKCS#11 layer will do the rest. Storing the crypto objects in a standard way in a smartcard is the goal of PKCS#15... But this PKCS doesn't address the commands used to make use of these crypto objects... I have heard of an ISO 7816-8 that addresses this...

In fact, it depends on the level of standardization you need... Do you want to be able to use the crypto functions of any card, only by changing the card and a library? Then you should look at PKCS#11. If you want to be able to use any smartcard with a standard set of APIs to access it, then you're already looking at PC/SC (yes, there were a lot of proprietary libraries before PC/SC). If you want to be able to use the same PKCS#11 library with any smartcard, then
Re: MUSCLE standards
Here is a brief summary of work done in 1998 to compare ISO 7816-3:1997 and EMV V3.1.1. Attached file ISO_EMV_Comp.doc. Peter

- Forwarded by: "Post Master" internet Forwarded to: PM:pwt Date forwarded: Mon, 22 May 2000 18:22:55 +0100 Date sent: Mon, 22 May 2000 15:59:39 +0200 From: Olivier Jeannet [EMAIL PROTECTED] To: [EMAIL PROTECTED] Subject: Re: MUSCLE standards Send reply to: [EMAIL PROTECTED]

Peter W Tomlinson wrote: I will try to find time to give detailed responses to David's frustrations, but for now I generally support Matthias. Except I remind you all that EMV's T=1 is different from ISO's T=1 - both have errors in the error recovery, but the errors are different, and they also have different definitions of the fields for changing protocol parameters.

This is interesting. Can you give us more details about the differences between EMV's T=1 and ISO's T=1? Regards, -- Olivier Jeannet - POS Servers team. My Windoz PC didn't work well because I had not rebooted it enough...

*** Linux Smart Card Developers - M.U.S.C.L.E. (Movement for the Use of Smart Cards in a Linux Environment) http://www.linuxnet.com/smartcard/index.html ***

Iosis, 34 Strathmore Road, Bristol BS7 9QJ, UK. Phone/fax +44 (0)117 951 4755. Email [EMAIL PROTECTED] or [EMAIL PROTECTED]

File: ISO_EMV_Comp.doc Date: 22 May 2000, 22:08 Size: 27648 bytes. Type: MS-Word
Re: MUSCLE standards
Hi Dave, I fully agree with your statements, but it's not that terrible. I've been working in the industry for 14 years now, and as a former member of the German standardisation body (NI17.4) I've been involved in the work that gave us ISO 7816-3, -4 and -5. Industry standards are always a tradeoff. They usually reflect the interests of the companies that send people to these standardisation groups. It's quite expensive to do this kind of work, so it is no surprise that companies only invest that money if they see some benefit (which is surely not interoperability).

The other problem with standards is that they only contain the results of the very long discussions that were held in these groups. Reading the standard does not tell you why things are done in a certain way. You only see, in a very compressed document, what everyone finally agreed on. Now other people take these standards and try to implement them. And because developers don't really like reading standards, and because they don't have the background information, things get out of hand. A good example is the chaining mechanism in T=1. Soon after the first drafts of T=1 were written, people started implementing T=1, but they left out chaining and claimed that it is optional. If you look at the standard it clearly is not, and you see that T=1 without chaining doesn't make any sense.

Standards take a very long time to develop, and it would not work if companies did not try to implement proprietary systems, just to find the best solution for a standard. But because of this, standards often need to embrace even these poor first-time shots, because they suddenly became successful in the marketplace and defined the common standard (I can think of better ways to exchange documents than in Word format, but it's there and won't go away soon).

One of the biggest obstacles in the smartcard industry today is the lack of standardization between different cards, readers, and even the platform in which they are used.
Back in the early days of the internet, many different standards existed - token ring, DECnet, and others - making it difficult for a single infrastructure to emerge that anyone could plug into. Eventually the players began to see the light, and one standard emerged as the godfather of all internet connectivity standards: ethernet and TCP/IP.

But without the Internet as the killer application, TCP/IP would not be the de-facto standard today. Just think back a little while, when IPX looked like a strong candidate.

PC/SC is a good global specification to start with. I guess the combination of PC/SC with the CT-API / CT-BCS approach in M.U.S.C.L.E. will work quite well. I don't see companies writing terminal drivers for OCF, and IMHO OCF will always be used as a layer on top of PC/SC.

Now anyone can plug into the internet and connect with nearly everyone else in a simple and seamless manner. It is no wonder that companies such as Cisco are doing as well as they are. Do you think that these companies would be doing as well if many networking standards still existed today? The Internet would not be growing as quickly if a single standard had not emerged from the struggle, because users need a seamless way of connecting. The same must exist for smartcards. Although magnetic stripe cards are of a much simpler nature, it is still possible for me to travel to France and use an Automatic Teller Machine to gain access to money. I can even use my VISA card in almost every terminal that exists. Do you think that these cards would be as useful if every bank issued their own proprietary location of information on the magnetic stripe?

Just take your MasterCard, VISA or Amex with an EMV application on the chip and you can stick it into any EMV terminal around the world. The standards for that are here, but it's not yet a business case for the banks to invest in chip.

Smartcards must also develop such standards so that communication with them is seamless.
The following is a list of what I consider to be necessary for the smartcard industry:

1) One communication protocol should be used. Currently there are several: T=0, T=1, Synchronous, and others. My personal feeling is that all cards should communicate in the T=1 block protocol. It is much more efficient, and gives the card a way of communicating back to the reader to establish resynchronization or to communicate the need for more waiting time. T=0 does this through the ATR, so if the card needs more time it has to change the ATR to notify the host of this. I feel this is a poor way of doing this.

Most modern cards do that today; unfortunately some French companies do not really like T=1 for historic reasons. Quite often these cards even support both protocols with PPS.

2) The ATR should be used as a means for card identification. It is ridiculous that much of the ATR can be changed except the protocol information. I think the ATR
Re: MUSCLE standards
Hi, What about TCP/IP over ethernet? That's an infrastructure, just like smartcards are an infrastructure... It's the applications that a ubiquitous card/reader system would enable that are important... and I daresay required before smartcards become ubiquitous. Applications can be as diverse as the people of the world... The key issue is: if the cards and readers were ubiquitous and interoperable, a particular application could work anywhere that there was software support for it - infinite flexibility... Heck, what kind of world would it be if you had to have a particular kind of computer for every website that you might want to browse? A token ring computer for websites on token ring, a Compaq computer for websites hosted by a Compaq server? Dave Sims

On Wed, 17 May 2000, Alex Pilosov wrote: On Thu, 18 May 2000, Matthias Bruestle wrote: If you really do all this, you will have one card with one operating system. Ein Volk, ein Card, ein System. Sorry, I just couldn't resist ;) -alex
Re: MUSCLE standards
David Corcoran wrote: Hi, here is a bit that I wrote up to vent about the lack of standards in the smartcard industry. Let me know if you agree or not. (and a great deal more besides).

If Birmingham Aston University (UK) mounts it on their unrestricted web site, see the paper on smart card standards and e-purse originally written in 1997 by Ram Bannerjee, and greatly updated recently by myself and Steve Brunt (Steve is the senior ISO editor for ISO/IEC 10373, the smart card test standard). Smart card specifications were never designed from the top down, but just grew as a struggling technology (naked ICs embedded in a card, when everyone said they would be destroyed by static discharge) emerged from France. Bitter rivalry between suppliers, at a time when the technology was being used for closed schemes (cards and terminals being matched up by the systems integrators), a weak ISO organisation, and a lack of public money for standardisation - all these contributed to the current mess. ISO gave the standardisation job to (or it was taken in by) a committee that was dedicated to defining the token, and was forbidden to look seriously at the terminal - in other words, the design work never looked at the complete system of card and terminal. The same mistakes are being made now in the contactless card standards, particularly in ISO 14443.

The European Commission is trying to bring some commonality to the public use of smart cards for PKI and electronic commerce (the eEurope Initiative). RSA Labs has published a draft of PKCS#15 (the interface to a PKI token, i.e. to a card) - PKCS#11 defines an API within the host system, not the interface to a device used to store the keys and permit their use in a secure manner.

I will try to find time to give detailed responses to David's frustrations, but for now I generally support Matthias.
Except I remind you all that EMV's T=1 is different from ISO's T=1 - both have errors in the error recovery, but the errors are different, and they also have different definitions of the fields for changing protocol parameters. Designers of this type of algorithm MUST use state diagrams as the normative definition of the processes, but ISO never makes them do that. In the UK's Contactless Standards Technical Panel, we have just been reviewing the draft of ISO 14443 part 4, and we find the same problems of muddle due to there not being a state diagram.

Peter Tomlinson, Iosis, 34 Strathmore Road, Bristol BS7 9QJ, UK. Phone/fax +44 (0)117 951 4755, Mobile +44 (0)7785 261475. Email [EMAIL PROTECTED] or [EMAIL PROTECTED]
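Peter's point about state diagrams can be made concrete: once a protocol is written down as an explicit state/event table, there is exactly one defined action per (state, event) pair, and any gap shows up mechanically instead of hiding in prose. The states and events below are illustrative only - they are not taken from ISO 7816-3 or EMV:

```python
# A toy protocol description as a transition table. The point is not
# the protocol, but that completeness of such a table is checkable.

STATES = {"IDLE", "AWAIT_RESP", "RESYNC"}

# (state, event) -> (action, next_state); all names are made up.
TRANSITIONS = {
    ("IDLE",       "send_block"): ("transmit I-block", "AWAIT_RESP"),
    ("AWAIT_RESP", "ok"):         ("deliver data",     "IDLE"),
    ("AWAIT_RESP", "edc_error"):  ("send R-block",     "AWAIT_RESP"),
    ("AWAIT_RESP", "timeout"):    ("request resync",   "RESYNC"),
    ("RESYNC",     "ok"):         ("restart exchange", "IDLE"),
}

def undefined_pairs():
    """Every (state, event) pair the table forgets to cover."""
    events = {e for (_, e) in TRANSITIONS}
    return sorted((s, e) for s in STATES for e in events
                  if (s, e) not in TRANSITIONS)

# A reviewer (or a test suite) now sees at a glance which combinations
# the spec never defined - the muddle described above, made visible.
gaps = undefined_pairs()
```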
Re: MUSCLE standards
Hi, what if we built a smartcard "environmental system", a kind of "middleware", that deals on one side with the cards and on the other side with the (host) applications? On the cards' side, it would handle all the specifics of smartcards. On the application side, it would offer the smartcard services at a much higher abstraction level. An application could be used with any smartcard that offers the required services. (I use the term application for the program that runs on a host and talks to a smartcard; smartcards offer services.) The application just talks to the smartcard middleware.

In a networked world, it could be possible for the middleware to get support from somewhere on the net on how to use a specific smartcard. The manufacturers/issuers of smartcards could provide such help in a way that the middleware understands. Standardization efforts could concentrate on the middleware and its interfaces (to the smartcard side and the application side). No need to standardize every bit on the cards. These interfaces could even be service-independent, such that new services could be introduced easily, without any standardization effort. What do you think about that? Best regards, Harald

On Wed, 17 May 2000, David Corcoran wrote: Hi, here is a bit that I wrote up to vent about the lack of standards in the smartcard industry. Let me know if you agree or not. [...]
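One way to read Harald's middleware idea as code: applications are written against an abstract service interface, and per-card drivers plug in behind it. All the names here are illustrative - nothing below comes from any standard or existing library:

```python
# Sketch: the application sees a service, not a card. The middleware
# maps a card type to a driver; an unknown card type is exactly where
# a networked driver lookup (as Harald suggests) would slot in.
from abc import ABC, abstractmethod

class SignatureService(ABC):
    """The abstraction level the application programs against."""
    @abstractmethod
    def sign(self, data: bytes) -> bytes: ...

class Middleware:
    def __init__(self):
        self._drivers = {}                  # card type -> service factory
    def register(self, card_type: str, factory):
        self._drivers[card_type] = factory
    def service(self, card_type: str) -> SignatureService:
        return self._drivers[card_type]()   # could fetch drivers remotely

class DemoCardSigner(SignatureService):
    def sign(self, data: bytes) -> bytes:
        return data[::-1]                   # stand-in for the card's crypto

mw = Middleware()
mw.register("demo", DemoCardSigner)
sig = mw.service("demo").sign(b"hello")
```

The design choice is the one Harald argues for: only the two middleware interfaces need standardizing, not every bit on every card.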
Re: MUSCLE standards
Mahlzeit Harald Vogt wrote: On the application side, it would offer the smartcard services on a much higher abstraction level. Are you talking about OCF? www.opencard.org Mahlzeit endergone Zwiebeltuete
Re: MUSCLE standards
On Thu, 18 May 2000, Matthias Bruestle wrote: Mahlzeit Harald Vogt wrote: On the application side, it would offer the smartcard services on a much higher abstraction level. Are you talking about OCF? www.opencard.org

Not necessarily. OCF has a few drawbacks. It's Java-specific, local (one single VM, but I guess they're working on it), and assumes that all smartcard services and their implementations are known in advance. Very small devices would have a hard time running it. (Of course, for Java applications it's well suited.) I think the networking aspect is not taken into account as it should be. Maybe you want to take a look at http://www.inf.ethz.ch/~rohs/JiniCard/ Cheers, Harald
MUSCLE standards
Hi, here is a bit that I wrote up to vent about the lack of standards in the smartcard industry. Let me know if you agree or not.

One of the biggest obstacles in the smartcard industry today is the lack of standardization between different cards, readers, and even the platform in which they are used. Back in the early days of the internet, many different standards existed - token ring, DECnet, and others - making it difficult for a single infrastructure to emerge that anyone could plug into. Eventually the players began to see the light, and one standard emerged as the godfather of all internet connectivity standards: ethernet and TCP/IP. Now anyone can plug into the internet and connect with nearly everyone else in a simple and seamless manner. It is no wonder that companies such as Cisco are doing as well as they are. Do you think that these companies would be doing as well if many networking standards still existed today? The Internet would not be growing as quickly if a single standard had not emerged from the struggle, because users need a seamless way of connecting. The same must exist for smartcards. Although magnetic stripe cards are of a much simpler nature, it is still possible for me to travel to France and use an Automatic Teller Machine to gain access to money. I can even use my VISA card in almost every terminal that exists. Do you think that these cards would be as useful if every bank issued their own proprietary location of information on the magnetic stripe? Smartcards must also develop such standards so that communication with them is seamless.

The following is a list of what I consider to be necessary for the smartcard industry:

1) One communication protocol should be used. Currently there are several: T=0, T=1, Synchronous, and others. My personal feeling is that all cards should communicate in the T=1 block protocol.
It is much more efficient, and gives the card a way of communicating back to the reader to establish resynchronization or to communicate the need for more waiting time. T=0 does this through the ATR, so if the card needs more time it has to change the ATR to notify the host of this. I feel this is a poor way of doing this.

2) The ATR should be used as a means for card identification. It is ridiculous that much of the ATR can be changed except the protocol information. I think the ATR should have 6 historical bytes reserved for identification: 2 for manufacturer id, 2 for manufacturer mask, and 2 for user definition. That makes 65,536 manufacturers, 65,536 masks and 65,536 user definitions. The user can only change their 2 bytes. Thus the card can still be identified by its core OS 2 bytes manufacturer / 2 bytes mask.

3) ISO-7816 should include a command for the creation of a transparent file and a command for the listing of files.

4) Card manufacturers need to be ISO compliant. Class instructions should be standardized to either 00 or C0 or whatever. I should be able to list the directory of files on the card in one way on any card.

5) There must be a standard for putting the keys on the card. If RSA is used then do pq... whatever, but in the same order on each card. Also, cards should have the same endianness. This is crazy - people haven't learned their lessons on this one yet.

This is just a few for now. I'll post more as my frustrations build up. Is there a forum for these kinds of requests? Let me know if you have any suggestions. I have about 1.5 months off right now as I take one class, so I should have some free time. Hope all is well, Dave

* David Corcoran, Internet Security/Smartcards, Purdue University, Department of Computer Science. Home: 1008 Cherry Lane, West Lafayette, IN 47906. Home: (765) 463-0096 Cell: (317) 514-4797 http://www.linuxnet.com *
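Dave's point 2 can be pinned down with a few lines of code. This is a sketch of the proposed layout only - 2 bytes manufacturer ID, 2 bytes mask (OS) ID, 2 bytes user-defined - and the struct packing is our reading of it; note that two bytes give 65,536 values per field, not 65,000:

```python
# Encode/decode the six identification bytes Dave proposes for the
# ATR's historical bytes. Nothing here is standardized; it is the
# proposal made concrete.
import struct

def pack_card_id(manufacturer: int, mask: int, user: int) -> bytes:
    """Six historical bytes: big-endian 16-bit fields."""
    return struct.pack(">HHH", manufacturer, mask, user)

def unpack_card_id(hist: bytes) -> dict:
    manufacturer, mask, user = struct.unpack(">HHH", hist[:6])
    return {"manufacturer": manufacturer, "mask": mask, "user": user}

# The core OS identity (manufacturer + mask) survives even if the
# issuer rewrites the user field, which is the property Dave wants.
hist = pack_card_id(0x1234, 0x0001, 0xBEEF)
```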
Re: MUSCLE standards
Mahlzeit David Corcoran wrote: Hi, here is a bit that I wrote up to vent about the lack of standards in the smartcard industry. Fine.

[Interoperability] - Not all cards need to work everywhere. A key card need not work in an ATM. It certainly could make things cheaper, though. - Manufacturers want to differentiate. They want to have features other manufacturers do not have. That's why there are so many Unices despite the many Unix standards.

1) One communication protocol should be used. Currently there are several: T=0, T=1, Synchronous, and others. My personal feeling is that all cards should communicate in the T=1 block protocol. It is much more

I agree that today all cards should be powerful enough to implement T=1, which is nicer than T=0, but you can't implement it on memory cards. So I would favour one protocol for smart cards and one for memory cards.

T=0 does this through the ATR so if the card needs more time it has to change the ATR to notify the host of this.

It can send 0x60s to extend the work waiting time.

information. I think the ATR should have 6 historical bytes reserved for identification. 2 for manufacturer id, 2 for manufacturer mask, and 2 for user definition.

I think user definition is not necessary if there is an EF_DIR on the card. The ATR should provide enough information to know what card you are talking to and which commands it provides.

3) ISO-7816 should include a command for the creation of a transparent file and a command for the listing of files.

That's not easy. This requires e.g. a standard for file access permissions, which are handled differently by nearly every card: number of PINs, number of keys, access methods (MAC, encrypted), type of access regulated (write, update, read, execute, create, locked, ...), and so on.

I should be able to list the directory of files on the card in 1 way on any card.

At least the FIDs and basic information. If you really do all this, you will have one card with one operating system.
Mahlzeit endergone Zwiebeltuete
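Matthias's remark that a T=0 card "can send 0x60s to extend the work waiting time" refers to the procedure-byte loop of ISO 7816-3: after the 5-byte header, the card answers with procedure bytes, and 0x60 (the null byte) just buys it more time. A simplified sketch (the input stream is given as a list for illustration, and the INS-complement single-byte case is omitted):

```python
# Toy T=0 terminal-side loop: consume procedure bytes until SW1 SW2.
# 0x60 = null byte (keep waiting); a byte equal to INS acknowledges
# the command and the data follows; 0x6X/0x9X (other than 0x60) is SW1.

def t0_read_response(ins: int, incoming):
    stream = iter(incoming)
    data = bytearray()
    while True:
        pb = next(stream)
        if pb == 0x60:
            continue                        # null byte: just keep waiting
        if pb == ins:                       # ACK: card transfers the data
            data += bytes(stream)           # simplified: take the rest
            *body, sw1, sw2 = data          # real code reads Le, then SW
            return bytes(body), bytes([sw1, sw2])
        if (pb & 0xF0) in (0x60, 0x90):     # SW1: status word begins
            return bytes(data), bytes([pb, next(stream)])

# Card stalls twice with 0x60, then ACKs INS 0xB0 and returns two
# data bytes plus SW 9000:
data, sw = t0_read_response(0xB0, [0x60, 0x60, 0xB0, 0xCA, 0xFE, 0x90, 0x00])
```

This is exactly the asymmetry Dave complains about: in T=1 the waiting-time extension is a proper supervisory block, while in T=0 it is a byte wedged into the data stream.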
Re: MUSCLE standards
Hi Dave, In the beginning of the Internet, there was a guy (whose name I can't remember) who published a periodical (I wouldn't call it a magazine, but sort of) called 'ConneXions'... It featured articles (papers) by such luminaries as Marshall Rose, Vint Cerf, etc. discussing layers and protocols. Its theme was 'interoperability', and in fact it was the precursor of the 'Interop' trade show - the guy that started ConneXions also started Interop... I think that the same model could easily be applied to smartcards, and that this is a good forum for doing so... Dave Sims

On Wed, 17 May 2000, David Corcoran wrote: Hi, here is a bit that I wrote up to vent about the lack of standards in the smartcard industry. Let me know if you agree or not. One of the biggest obstacles in the smartcard industry today is the lack of standardization between different cards, readers, and even the platform in which they are used. Back in the early days of the internet, many different standards existed - token ring, DECnet, and others - making it difficult for a single infrastructure to emerge that anyone could plug into. Eventually the players began to see the light, and one standard emerged as the godfather of all internet connectivity standards: ethernet and TCP/IP. Now anyone can plug into the internet and connect with nearly everyone else in a simple and seamless manner. It is no wonder that companies such as Cisco are doing as well as they are. Do you think that these companies would be doing as well if many networking standards still existed today? The Internet would not be growing as quickly if a single standard had not emerged from the struggle, because users need a seamless way of connecting. The same must exist for smartcards. Although magnetic stripe cards are of a much simpler nature, it is still possible for me to travel to France and use an Automatic Teller Machine to gain access to money. I can even use my VISA card in almost every terminal that exists.
Do you think that these cards would be as useful if every bank issued their own proprietary location of information on the magnetic stripe? Smartcards must also develop such standards so that communication with them is seamless.

The following is a list of what I consider to be necessary for the smartcard industry:

1) One communication protocol should be used. Currently there are several: T=0, T=1, Synchronous, and others. My personal feeling is that all cards should communicate in the T=1 block protocol. It is much more efficient, and gives the card a way of communicating back to the reader to establish resynchronization or to communicate the need for more waiting time. T=0 does this through the ATR, so if the card needs more time it has to change the ATR to notify the host of this. I feel this is a poor way of doing this.

2) The ATR should be used as a means for card identification. It is ridiculous that much of the ATR can be changed except the protocol information. I think the ATR should have 6 historical bytes reserved for identification: 2 for manufacturer id, 2 for manufacturer mask, and 2 for user definition. That makes 65,536 manufacturers, 65,536 masks and 65,536 user definitions. The user can only change their 2 bytes. Thus the card can still be identified by its core OS 2 bytes manufacturer / 2 bytes mask.

3) ISO-7816 should include a command for the creation of a transparent file and a command for the listing of files.

4) Card manufacturers need to be ISO compliant. Class instructions should be standardized to either 00 or C0 or whatever. I should be able to list the directory of files on the card in one way on any card.

5) There must be a standard for putting the keys on the card. If RSA is used then do pq... whatever, but in the same order on each card. Also, cards should have the same endianness. This is crazy - people haven't learned their lessons on this one yet.

This is just a few for now. I'll post more as my frustrations build up.
Is there a forum for these kinds of requests? Let me know if you have any suggestions. I have about 1.5 months off right now as I take one class, so I should have some free time. Hope all is well, Dave

* David Corcoran, Internet Security/Smartcards, Purdue University, Department of Computer Science. Home: 1008 Cherry Lane, West Lafayette, IN 47906. Home: (765) 463-0096 Cell: (317) 514-4797 http://www.linuxnet.com *
Re: MUSCLE standards
On Thu, 18 May 2000, Matthias Bruestle wrote: If you really do all this, you will have one card with one operating system.

Ein Volk, ein Card, ein System. Sorry, I just couldn't resist ;) -alex