> >> A cert that isn't laid out to standard isn't useful.
> >
> > The dilemma comes out most strongly when a major
> > browser accepts a non-standard cert. If a product
> > has 90% of the market, and accepts a non-standard
> > approach, it's useful. No matter how much one
> > believes in standards, this happens over and over
> > again.
>
> Yes, there is a widely used product that seems to have ignored numerous
> details of various security standards. And from time to time, it is
> caught sans pants, such as when it was revealed that said product
> allowed ANY cert to act as a CA cert, and issue other certs, even when
> the cert's extensions clearly identified it as a non-CA cert.
Right. So, what does one do? Believe in this "widely used product" and thus its de facto standard? Subscribe to some standard as laid out by some organisation purporting to be an authority? Or do whatever one wants as a developer?
It's a tough question; there's no easy answer. But one thing's for sure: blindly subscribing to any of the other standards organisations out there is just an abrogation of intelligence. The logical extension of that is that we should be porting Mozilla over X.25 and avoiding this TCP/IP nonsense.
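For what it's worth, the hole mentioned above amounts to skipping one check during chain validation. A toy sketch - plain dicts stand in for parsed certs, and no real crypto is done - of the check that was skipped:

```python
# Toy model of certificate-chain checking, illustrating the
# basicConstraints flaw described above. Dicts stand in for parsed
# certs; "signed_by" stands in for real signature verification.

def chain_is_valid(chain, trusted_roots):
    """chain[0] is the end-entity cert; chain[-1] must be signed by a root.

    Every cert that issues another cert MUST carry basicConstraints
    CA=TRUE -- this is the check the flawed product skipped.
    """
    for issued, issuer in zip(chain, chain[1:]):
        if issued["signed_by"] != issuer["subject"]:
            return False
        if not issuer.get("basic_constraints_ca", False):
            return False  # a non-CA cert may not issue other certs
    return chain[-1]["signed_by"] in trusted_roots

site = {"subject": "www.example.com", "signed_by": "Some Leaf Cert"}
# An ordinary end-entity cert, NOT marked as a CA:
leaf = {"subject": "Some Leaf Cert", "signed_by": "Root CA",
        "basic_constraints_ca": False}

# With the check in place, a chain through a non-CA cert is rejected.
print(chain_is_valid([site, leaf], {"Root CA"}))  # False
```

Obviously a real validator does signature verification, path-length limits, and plenty more; this just shows where the CA flag fits in the walk up the chain.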
> If mozilla wanted to be fully "bug compatible" with that other product,
> it could (well, it could come very close. Not all details about the
> other product's behavior are publicly known, of course.)
>
> But I keep hearing from mozilla users how glad they are that mozilla
> doesn't have all the security fire-drills for which the other product is
> (in)famous. So, I don't think that turning a blind eye to bad certs
> is the right answer.
Sounds like it was a good improvement. Worth running the experiment then. One good result, however, just means it was a worthwhile experiment, and we should go on challenging the assumptions that were laid down on tablets and passed on from generation to generation.
> Recently, the producers of that other product turned out a significant
> patch that fixed some of the crypto related holes. That was motivated,
> I believe, by the release several months ago of a huge PKI test suite
> named NISCC from an arm of the UK government, and the publication of
> the results of testing with that test suite. Mozilla benefited from
> testing with that, too, last year.
>
> There's another test suite now getting attention. This one, known as
> PKITS, is from USA's NIST. Pretty soon, big customers, including the
> U.S. government, will begin to demand products that pass that test.
> I predict that even more security holes will get closed due to that
> demand.
>
> So, in the long term, many crypto products will be enforcing the rules
> that mozilla now enforces, IMO, and perhaps more besides.
If you mean, many browser / SSL / PKI products, sure, every test suite helps. That doesn't mean that all products will necessarily go down that path; there is more security in other directions IMHO. Outside SSL, for example, IMHO, the trend is away from the PKI approach.
Opportunistic crypto, and embedded local home-built protocols for custom purposes are more likely in the future. There's enough empirical evidence that says this is a better way forward, all else being equal.
It's not that easy to see - mostly because people in the SSL world will see only the SSL solutions (you saw it last week: someone popped up and said they wanted to do a roll-your-own, and was discouraged). One has to look outside the SSL world, to those who aren't influenced in that direction by various forces.
> > If I was a commercial CA, and a browser maker
> > set up such a rule to determine trouble by number
> > of complaints & bugs, I would ensure that there
> > were at least that many bugs and complaints filed
> > for the weaker products.
>
> How would you do that, short of sabotage of their facilities?
Er, download the product, check it out, find where it deviates, and file a bug. Under some nym of course. The assumption was that they weren't laying it out according to some standard, but it was otherwise functional.
> All the "weaker" CAs need do is ensure that they don't issue any
> certs that don't conform to the standards. The complaints have to
> reveal actual flaws in the CAs' certs, in case that wasn't clear.
"All" ? I don't believe it is trivial to follow *all* the rules of standards. Nor is it cheap.
> > Why doesn't the normal "rule" cover it? If
> > there aren't enough people to get around to it,
> > then the bugs will sit there.
>
> With the new policy in place, bugs that say "my homebrew cert doesn't
> work" will get marked invalid quickly. Trust me ;-)
Sure, as long as it is not a rule, that is fine - the developers simply mark the bugs as "invalid because we know they are not compliant."
If the CA that delivers poor certs can't afford to incentivise a programmer to fix it in Mozilla, then, tough luck. It cuts both ways... But, if they can afford a developer to join the group, and handle those bugs, then that sounds fine too. That's the beauty of open source, right?
> >> After all the SSL crypto computation, and the
> >> validation of the cert chain, one of two outcomes arise. Either
> >> we've proven to our own satisfaction that the party at the other end
> >> of the SSL "pipe" is who the cert says it is and we now have a "secure
> >> pipe" to that party, or we haven't and don't.
> >
> > No, you've shown that the cert is signed by a
> > root cert, and that the cert id matches the website.
>
> And that the party at the other end of the pipe has the private key
> corresponding to the public key in the cert, and (optionally) that the
> cert hasn't been revoked. These things taken together prove the
> identity of the party at the other end of the SSL pipe to the browser's
> satisfaction.
I don't agree with that last part, see below!
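As an aside, the checks listed above are exactly what a stock TLS client enforces by default. A minimal sketch using Python's stdlib ssl module, just inspecting the default policy rather than opening any connection:

```python
import ssl

# A default client context enforces the checks described above:
# chain validation against trusted roots, and hostname matching
# against the cert. (Proof of private-key possession happens inside
# the handshake itself; revocation checking is optional and off by
# default in many stacks.)
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: chain must validate
print(ctx.check_hostname)                    # True: cert must name the peer
```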
> > If the root cert is of lesser pedigree, then
> > the user might want to know that [1]. Are all
> > CAs equal?
>
> They may not be all created equal :) but they all get treated equally
> by all SSL-based https browsers. That is, each browser has its trusted
> list (perhaps as modified by the local user), and trusts the CAs in
> that list equally for the purposes for which they are trusted, e.g.
> SSL server auth.
Yup. That's one bug, from a security pov. The fact that this bug is common to all browsers, and is how they've always done it does nothing to avoid the fact that it's a security bug.
> > You might assert that, but is that
> > a reliable statement? And, does it make sense
> > to claim that is a reliable statement? Do all
> > CAs conduct anything like appropriate due
> > diligence?
>
> Answering the question "do all mozilla's trusted CAs conduct appropriate
> due diligence?" is the challenge now before the mozilla foundation, by
> virtue of the path they've chosen to take.
Right. Let's take a leaf from the book - either the answer is no, OR, there is someone who is going to pick up the pieces when the answer is no. Either way, the answer is no. This whole governance / audit / trust arrangement is well developed in the business world - we don't try and tell ourselves that everything is fine because we set it up right, instead, we try and reduce and spread the risks so that when it fouls up, nobody gets too badly hurt.
Mozilla Foundation should not make the mistake of assuming that they can make the answer yes, either by rules or by cryptography or by audits. It isn't yes, and the more CAs there are in the list, the further from yes it gets.
All they can do is satisfy themselves that they're covered when the answer is no. And that's the real challenge - as there is no money involved, there is not even a pretense at a proper allocation of risks. That's why bringing the user back into the protocol by showing her the cert issuer is so inviting as a big fix - this takes MF out of the equation of "who trusted whom?"
> > I don't think so, simply because the due diligence is the same for a
> > flower shop as it is for a bank.
>
> I agree that DD(flower shop) != DD(bank).
> Mozilla Foundation now gets to separate the banks from the flower shops.
It does? How? AFAIK, Mozilla Foundation only deals with CA root certs, and the CAs will offer in general a fairly basic package deal to both flower shops and banks on the same commercial basis.
> > Luckily, this is never at issue, as a real attacker bypasses the
> > HTTPS security altogether.
>
> I gather you're referring to the problem that many browser users have no
> idea of the name of the party with whom they wish to securely communicate.
The salient attack these days is phishing, sending an email with a link that purports to be a trusted site. The user copies it in and goes to the site, then enters their confidential details. I don't have any hard stats on it, but I've seen these numbers: 5% of people are fooled, and 200 million dollars lost in 2003 (but it is only a "related" figure, not a phishing figure). I'm researching this, in my copious spare time, so more to follow....
> They want a secure pipe to person X, but they have no idea who person X
> really is, or what his name is. E.g. www.ebay-something.com
>
> I assert there's nothing that ANY crypto-based scheme can do about that.
> Whether the scheme is PGP-like with WOT, or PKI-like with hierarchy, if a
> user has NO idea of the correct and proper name of the party with whom
> he's communicating, he cannot know if the name he is shown is correct.
( Just on that point:
You might recall that one of the original claims of the browser security system was that it would secure the connection for parties with which there is *no prior relationship*. That is, you could go shopping with a name you didn't know, and know that you wouldn't be hit with an MITM.
Why is that? Because, if you already have a prior relationship it is possible to bootstrap a secure communication. E.g., as simple as posting a fingerprint on an advert (why not? people thought putting URLs on adverts was silly....).
http://www.financialcryptography.com/mt/archives/000077.html
The problem here is that the browser has organised the security of the shopping experience for the user, and the user hasn't got anything to do. So, the user trusts the browser. When, in fact, the browser doesn't do anything of the sort. What security it does deliver is quite specialised, and quite limited in scope. Purporting to present that as a complete security model so the user doesn't have to make any security decisions is just wrong. )
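That fingerprint bootstrap is mechanically trivial, by the way. A sketch with Python's hashlib, where the cert bytes and the advertised fingerprint are made-up stand-ins:

```python
import hashlib

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a certificate's DER bytes, in the
    colon-separated form typically printed for humans to compare."""
    digest = hashlib.sha256(cert_der).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Stand-in for the DER bytes the server presented in the handshake:
presented = b"-- certificate bytes from the TLS handshake --"

# Stand-in for the fingerprint published out-of-band (an advert, a
# letterhead, a prior email) -- the "prior relationship" bootstrap:
advertised = fingerprint(b"-- certificate bytes from the TLS handshake --")

print(fingerprint(presented) == advertised)  # True: no CA consulted
```

A match proves it's the same cert that the fingerprint's publisher had in hand, with no third party in the loop; a mismatch is exactly the MITM signal.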
But, no, I was referring to phishing.
> The definition of the problem that SSL, SMIME and/or PGP solve requires
> that the parties communicating are able to name each other (at least in
> one direction). An encryptor must be able to name the decrypting
> recipient. A signature verifier must be able to name the signer.
> Take that away, and you have a different problem that none of them solve.
>
> This gets back to the issue of the end user not being apt to discern
> the real security risks. He wouldn't tell his secrets to a complete
> stranger who rang his door bell, but he will tell his secrets to a
> complete stranger who has a pretty web page.
Right. And, trying to step into that gap and tell the user who has a nice pretty face, and who has not, is one mistake that the SSL/CA architecture claims it can do. Mind you, what the SSL/CA architecture can do is state something like "Mr CA says this is a nice pretty face..."
> >> I think all security conscious folks who've participated here wish that
> >> browsers would display more info about
> >> a) what is the full name (as full as we have in the cert) of the party
> >> at the other end of this pipe, and
> >> b) who says so?
> >> I know that the various managers of browser crypto security software
> >> under which I've worked in the last 7+ years all wanted that, as did
> >> most (if not all) the crypto security developers.
> >>
> >> The fact that browsers don't do that today is not because security
> >> people are holding back, but rather is because UI people do not value
> >> security enough to be willing to devote the window real estate to it.
> >
> > That's a key point, so, we are agreed that the
> > security is unsatisfactory.
>
> I didn't say that, and don't agree to it. Perhaps we agree that the
> display of security relevant information is not as prominent as we might
> wish it to be, but it is available. The security of my bank's vault is
> not lessened by the fact that I cannot see all the door's locking bars
> at a glance.
No. But, it can be lessened if you weren't able to know who the manufacturer of the safe is, who employs the guards, and ultimately, who "is" the bank - its directors, its officers, its owners, and its regulators and insurers.
In banking, we know a lot about the chain of the various parties, and there is a notion that the parts and players have to be displayed. It's evolved over time - through experience, which is not the case with SSL.
Another of the salient differences is that your bank can go broke, and you can go to insurers and others and get your money back.
No such luck with CAs. Which is why we actually have to deliver real security. Banks don't actually have to deliver real security, they just have to follow all the rules, and make money. Someone else picks up the cost of the security failure. (This is called "moral hazard" in economics. E.g., the S&L crisis.)
> >> One thing we learned from Communicator 4.x was that if you give the
> >> user a way to override a security error, the user will always choose
> >> to do so. Any error or warning that you let the user disable, he will
> >> disable. The user views warnings that say "you're not talking to the
> >> party who you said you want to talk to" as just another annoying
> >> dialog that they have to click through on the way to what they want.
> >> IOW, the user is typically not able to aptly judge security risks,
> >> even though it is they who are most likely to lose from their choices.
> >
> > Right. This brings up some quite difficult results,
> > such as, even if the browser says, "this party is
> > using a bad cert," then the user clicks through any
> > way. [snip]
>
> Isn't that what I wrote immediately above?
Yep. It's a very important point, and bears repeating, because it has a bearing on the claim that the MITM problem is addressed. It is, but only in SSL, not in the browser. Therefore, not protected.
When we are talking about improving the security of browsing, one of the important (but subtle) things that makes it better for everyone would be if self-signed certs were treated as a positive improvement over an absence of crypto.
But, lots of security people shout "no, then the MITM will happen." To which the answer is a) there aren't enough MITMs to worry about, and b) the browser doesn't handle MITMs anyway, because people click through ...
Once we establish that MITM should not be part of the *compulsory* threat model for browser security, then it is possible to move forward and improve the security - by leaps and bounds.
http://iang.org/ssl/browser_threat_model.html
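For the record, the gap between the two policies is one line of configuration. A sketch with Python's stdlib ssl module (contexts only, no connection is made):

```python
import ssl

# Opportunistic encryption: accept any cert, including self-signed,
# rather than refusing to talk. This gives up MITM resistance but
# still defeats passive eavesdropping -- the trade-off argued above.
opportunistic = ssl.create_default_context()
opportunistic.check_hostname = False       # must be cleared first
opportunistic.verify_mode = ssl.CERT_NONE  # self-signed certs accepted

# The current browser-style policy, which refuses self-signed certs:
strict = ssl.create_default_context()

print(opportunistic.verify_mode == ssl.CERT_NONE)  # True
print(strict.verify_mode == ssl.CERT_REQUIRED)     # True
```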
> > So, we seem to be in agreement that the current
> > notion of popup dialogs that asks questions is not
> > going to help.
>
> It doesn't help as much as had been intended, WHEN we let them override.
>
> > False certs are not defeated, in
> > the practice of the browser activity, although they
> > are addressed in the protocol and the CA regime.
>
> If we let the user defeat them. As I said before, I think the answer is
> fewer user overrides, fewer user decisions, fewer user mistakes.
I think your logic is something like: the user cannot be expected to judge security risks, even though it is they who are most likely to lose from their choices. Therefore, MF should judge security risks.
Unfortunately, this doesn't hold - there is no logical need for someone or anyone to step into the gap and cover for it - and, that strategy isn't the only choice!
Another choice is: users cannot understand their security choices. MF declines to step into that, and lets them make a few bad choices. Then, users learn from each other how to protect themselves, from their losses.
What's the difference? The second choice is cheaper, and is more secure.
> Giving the user more information only increases the chance that the
> user will click past it. The user understands "No, you can't do it."
> and not much more.
No, asking the user more *questions* only increases the chance that they will answer the default and carry on. Asking more questions is not the same as presenting more information.
Information can be presented in a nice way, without asking any questions. That's what the branding is: it presents, and acclimatises the user. But, it doesn't force the user to test her security ignorance, or otherwise. She can ignore it like any other advert, if she wishes.
> > Hence, the branding idea. This is an idea that has
> > been floating around for some time, and was brought
> > up recently by Tim Dierks. CAs want it, as per the
> > below [1], and they should, otherwise their business
> > is free riding off the other CAs, and they can't set
> > their prices properly.
>
> Yes. Netscape's crypto security managers at one time wanted it too,
> to reduce potential liability. The idea was expressed this way: when
> the user asks "who says this is secure, and the party to whom I'm speaking
> is so-and-so?", we want the answer to be "This CA", not "Netscape".
OK. Is there any doco on that? Do we know if M$ went through the same discussion?
> There was a facility developed to display a CA's logo, getting the image
> from a URL in the cert. But this is where the aforementioned window
> real estate battle occurred.
So, the liability question was asked, and explained in terms that everyone within Netscape understood. That's important to know.
I'm curious as to what the managers came up to that allowed them to ease off on their liability concerns?
I'd postulate that it was early in the game, and there was little worry about liability then. I understand those forces.
I think it is about to become a different game. The structure of the "window real estate battle" will change the day someone files a class action suit against microsoft on behalf of all victims of phishing.
(Microsoft has lots and lots of pending suits, so many that they have a production line. This will be just another one of them; the only variable is how long it takes the class action attorneys to work out what the angle is.)
> > There are some good reasons why brand works when
> > dialogs don't: it enables the user to make choices
> > based on information that they've acquired via
> > other channels. It enables the CAs to express some
> > real claims. There are also some reasons why it
> > isn't perfect, but it does seem to be way way better
> > than what's there at the moment.
>
> Maybe we could use the image of a mean devil for certs from unknown CAs.
Sure. I wouldn't use a "devil" as that might confuse people into thinking they are BSD certs, which would be more secure.... ;-)
Some (e.g., me) think that a self-signed cert is much more secure than no crypto at all, and should be celebrated with a nice cute little halfling, or what passes for "better than nothing" in such culture. We can save the elves for CAs, I guess.
> >> That's one reason why mozilla has fewer security dialogs that the user
> >> can override, and more that simply say "no dice". And it hasn't been
> >> a great tragedy. More web sites that used to be sloppy have now done
> >> the right thing and fixed the problems. That's a GOOD THING.
> >>
> >> I'd say the way forward is to enforce the rules no less tightly than
> >> before, maybe more so, and give the users fewer decisions to make,
> >> fewer chances to hurt themselves. With the presence of low-cost CAs,
> >> there won't be any remaining excuse for people to continue to use
> >> improperly made certs from their own homebrew CAs.
> >
> > Security is a hard problem. It's not amenable
> > to a binary result, that of being secure, or
> > being insecure.
>
> Ultimately, it's always a binary decision to act, or not.
> Do I spend the money?
> Do I type in my confidential info?
As long as you have the information, and you make the decision, it's your lookout. But, if the security system hides some info from you, or purports some claims to you that it can't absolutely back up, then that security system hasn't done its job.
> > It's a process, and a progression. The HTTPS
> > implementation is already so imbalanced within
> > today's browsing, by means of its desire to
> > create a user-simple binary security choice,
> > that it is ignored by almost all attackers [2].
>
> E-commerce has been a HUGE success. People buy things over the web now
> in HUGE amounts. It really has impacted "brick and mortar" businesses.
> And it gets almost NO bad press. People simply don't hear that large
> numbers of people are incurring substantial losses because of unwise
> choices in E-commerce. The public is largely fearless about internet
> use, quite a switch from the way it was 8-10 years ago, when most
> people were very afraid to attempt e-commerce.
OK. So, is all that in spite of HTTPS or because of HTTPS? It is pretty clear that people were scared of the net. It is also clear that the fears were *not* founded in those days. HTTPS was a useful tool to make people comfortable. A placebo, as it were.
But, there isn't any evidence that it was needed from a security pov - that's why I keep asking for information on real live MITMs - I have one anecdotal story from a student network. Plus a couple of non-browsing things to do with email.
There isn't much more out there on eavesdropping...
So, from any security pov, we would have been as secure - more or less - with ROT13. Because, while all this stuff was protecting us, credit cards were being ripped off in their tens of thousands, by hackers and crackers. And now, phishing...
> Today, that fearless attitude is why people click past all the warnings.
> They've never heard of someone who suffered because he ignored them.
> So, the public ignores them too.
Right. Put in window dressing and call it security, and it will bounce back at some stage. I agree.
> Branding doesn't solve that.
> You can't entice people with the words "safer, more secure" when the
> public feels no lack of safety.
Oh, I'm not saying that the *words* make any difference. There are several issues here:
1. When something goes wrong, the CA is now on the
   hook, as far as the user is concerned, by default.
   (The browser was just the software, after all...)

2. CAs now have a reason to care. Their brand is
   now on display to real users, which means they
   can now generate differential buying power, and
   can start charging for better DD. Right now,
   CACert is the same as Verisign.

3. Certs now become differentiated, which means
   that there is room for self-signed certs to
   be properly treated for what they are - a
   vast stellar improvement (over nothing).

4. The rise of self-signed certs (as a positive
   force, not a negative force) creates a market
   where before there was a franchise. That means
   it can grow. That means it can respond, and
   more expensive tailored forms of DD can be
   tried. Right now, Verisign doesn't make enough
   money to spend on DD, IMHO.

All of these issues feed back into security.
> > By any measures, browsing is not well protected.
>
> Unfortunately, I think it is better protected than the public cares it to
> be.
>
> > The security that was built in is bypassed on a
> > routine basis in real attacks. One of the core
> > reasons for banking websites being so vulnerable
> > is that HTTPS is "too secure" - so theoretically
> > secure that it results in costs and inconveniences
> > that lead to easy bypasses and easy ignoring.
>
> Banking websites are so vulnerable? Can you substantiate that?
Yes, although not yet in monetary terms. That's one of the issues - documenting the losses, which are unfortunately suppressed and hidden in all terms.
> One area where people still do take some care about their web security
> is banking. I've not heard or read any reports of successful attacks
> on bank web sites. (Except, see below)
This is exactly why California passed a law, effective in 2003, that said, in essence, that breaches of sites must be notified to customers. Because you didn't hear about any of the ones going on.
Banking web sites are regularly attacked and successfully raided. It's done by a number of mechanisms, and the money is normally washed out through a chain of transfers.
> > You are totally right that this is also/really
> > a UI problem. But, the UI people won't look at
> > it until they realise that the HTTPS system in
> > browsers doesn't deliver the security that they
> > thought.
>
> And I doubt that mozilla UI people will ever realize that.
Until they are told, no. But, there is the Mozilla Foundation. They asked for recommendations. They might do the work to understand, and pass it on to the UI people. Just a thought...
But, if it is to happen internally, it has to come from the crypto people, as for better or for worse, they are the ones that other people think know something about security.
> At one time, the predecessor of the mozilla browser was being developed
> by a for-profit company whose paying customers were mostly enterprise
> businesses who had more concern for security than the average consumer.
> And even in that environment, where developers' salaries were paid by
> producing a product that met a business need, and that businesses were
> willing to buy, the developers were unwilling to devote additional
> window real estate to security info.
The first phishing started around 2002/2003. So, anything before that didn't really result in an attack on the browser security model. Now it's there, now it's real.
http://www.antiphishing.org/APWG.Phishing.Attack.Report.Jan2004.pdf
No dollar amounts, sadly.
People have known about these weaknesses for a long time. I've known about them since about 1996, but didn't bother to do anything about it then. Without being able to validate a real active threat, it is hard to make any headway, and, frankly, it didn't matter, as ecommerce was as safe without SSL as it was with.
Now, it's 2004, and congress even talks about it. Few really know what's going on - check this site out for an example of an easily subverted solution, again, IMHO:
http://www.passmarksecurity.com/solution.html
> Today, mozilla is a nearly all volunteer effort. Most of the volunteer
> time is devoted to things that are clearly visible in the UI. Very little
> is devoted to such "invisible" things as crypto security. People work
> on UI because it's visible, it's something they can literally point
> to and say "I did that". The projects that get worked on are mostly
> those that appeal to volunteers. Volunteers aren't likely to work on a
> low visibility feature out of some sense of obligation or fair play,
> even if it makes a difference to the product's security.
>
> Now, if we couldn't get UI people to devote real estate to security when
> arguably their jobs depended on it, what chance have we now that UI work
> is all volunteer?
When their jobs depended on it, their jobs hid the security focus. They just did what their jobs told them to do. Security wasn't a big focus of Netscape, IMHO. Marketing was the focus, and finding a business model.
This is why we are having this conversation, and I've not bothered to talk to Microsoft. Because, you, and I, don't have our jobs interfering with proper security models.
> Maybe we should abandon trying to get screen real estate, and instead have
> a voice that sounds like Mom, saying "Don't you even THINK of typing
> your confidential information into that screen, young man!" :)
>
> In the meantime, taking away the overrides seems reasonably sensible.
I agree that it is a good experiment, and worth trying! I think it is also good that it worked, it tells us a lot about what users can and can't do. But, it's not the end of the story. It's only the start.
> There's a web site that offers a proxy service to visitors. They offer
> the promise of "faster browsing", and the chance to win occasional cash
> and "fabulous prizes" to people who will agree to install their software
> and use their proxy. If you install their software, they will not only
> monitor your http traffic, but also your https traffic (decrypted) for
> many popular websites, such as Amazon (if I recall correctly). They're
> up-front about this. If you read their FAQ and their privacy statement,
> they spell out that they will monitor your secure encrypted banking
> traffic too. Of course, they promise to keep it all confidential.
> Their site even displays a WebTrust seal!
>
> When I checked into it a couple years ago, their proxy server identified
> itself with a string similar to "Man-In-The-Middle Proxy". Their
> software includes a trusted root CA cert. This is one of the few
> sustained MITM attacks on SSL known to me.
Nice one! This is the essence of the MITM attack that succeeds against HTTPS - ask the user and he'll say yes. It's important to understand that MITM is not really protected against by browsers.
> If you visit that web site with mozilla, you will be directed to a page
> that informs you that your browser is not supported. Should we make
> mozilla compatible with that "other browser" to get around this?
>
> I wish I knew how many users they have. Every one of their users is
> someone who traded away his privacy and security for not much value,
> IMO. (Please don't ask for the URL. I've given plenty of clues.)
They've traded their privacy for their convenience. That's their right and privilege, no? I don't think MF really should usurp that.
> At the risk of launching another tangent into a parallel universe ...
Ah, but you've hit on an important, and integral, aspect!
> I have wondered for a long time, why companies like Standard and Poor,
> who rate companies on various merits that might directly correspond to
> business trustworthiness, don't get into the CA business and put info
> about a company's ratings into the certs they issue. A cert that
> tells me I'm dealing with a fortune 500 company with a good credit rating
> vs a fly-by-night outfit might be a LOT more meaningful than the present
> certs. Similarly, a cert from the Better Business Bureau stating that
> I'm dealing with a company that satisfactorily resolves 99% of consumer
> complaints would be to my liking.
Because they've worked out that the structure isn't right, would be my guess. Lots of these players have looked at it, and some of them have gone in, but what they've realised is that ... brand means nothing because the browser hides that info. So, no matter who they are, they have to fight on a commodity basis, which means whoever was there first, wins.
The moment that Microsoft starts putting the brand onto the chrome will be the time that CAs can start to leverage their brand. Until then ... the product doesn't deliver anything worthwhile except smoke and mirrors dressed up as security (IMHO!), and no-one can see how to move on from there.
> Maybe it's up to the CAs themselves to start offering more real value
> to the relying parties in their cert contents.
IMHO, it could make them a lot wealthier, even the low cost ones!
The market for CA certs is virtually untapped, at something less than 1% penetration.
It is only the current crippled structure that is keeping the numbers down so low. Unfortunately, the franchise was created and designed by people who understood the word "franchise" but not the word "marketing" ....
They don't actually understand the dynamics of the net environment, and think that they are best served by keeping the market small and controlled (i.e., the word "franchise" is accurately applied). What they don't understand is that the untapped demand will result in a smaller slice of a larger pie, where the smaller slice is much much larger than their current large slice of the small pie.
> Disclaimer: opinions and faulty memories expressed are solely mine.
We don't disagree so much here!
iang
_______________________________________________
mozilla-crypto mailing list
[EMAIL PROTECTED]
http://mail.mozilla.org/listinfo/mozilla-crypto
