Thanks for your comments. I'm mostly interested in watching the discussion unfold, but there are a couple of things to add:
On Sun, Mar 7, 2010 at 1:23 PM, John Panzer <[email protected]> wrote:

> You can make the token short-lived if this is a concern. (Short lived
> meaning minutes.) And presumably the user is rate limited in the first
> place, so there is an upper bound on the damage that can be done.
>
> It's possible to come up with scenarios where short lived bearer tokens
> won't work for #5. I'm struggling to come up with any scenarios that
> really wouldn't want TLS as well.
>
> Note that TLS also takes care of the reverse trip -- the MITM cannot lie
> to the client about what happened, or feed it false information. None of
> the signature options under discussion sign responses IIRC.

All true, but the upper bound on the damage that can be done with a bearer
token over an insecure connection is definitely higher than with a
signature. I'm just very hesitant to take away signatures and provide only
a less secure method (or no method at all) over an insecure connection.

> I was excited to look at (1) a bit as I think actual research and numbers
> would be good. It appears that this paper was also published in "IEEE
> International Symposium on Performance Analysis of Systems and Software
> (ISPASS), Austin Texas, Mar 2005," so the measurements are about 6 years
> old at this point.
>
> I also noticed that they did not mention HTTP keep-alive connections in
> conjunction with SSL performance. If you are doing a high volume of
> server to server requests, this would be an obvious optimization in any
> case, and would amortize the public key operation overhead over a lot of
> transactions (IIUC).
>
> Below you mention the intermittent request case, where you have surprise
> requests from any of a large number of clients that hit your server (and
> so there's a cold-start penalty each time). This is a worst case scenario
> for SSL connections. I wonder what the overhead of SSL is for modern
> client hardware in these situations (and I'm thinking that in a thin
> client world, this is a very good use for silicon on the client.)
>
> They also didn't really go into the effects of specialized server
> hardware for SSL, which may not have been as available in 2004(?).

I recognize that the research I reference is dated. Unfortunately I do not
have good access to research-article search tools or to subscription
services for most journals. If someone with that access would do the
leg-work to find more recent articles, I'd be delighted :-) That said, on
non-specialized hardware, I don't think crypto operations have gotten much
faster over the last 6 years aside from the Moore's law speedups that
apply to everything, so I think the percentage estimates are probably
still OK.

> For today's realistic request traffic, I've been told that the public key
> setup is typically the limiting factor. And in fact even with keep-alives
> turned on the main thing you'd need capacity for at scale is a cold start
> of your service under load (worst case scenario). The public key
> operations would be a major issue then.

Actually, the finding that the private key operations are at least as
intensive as the public key operations for requests over 64Kb in size was
one of the things that most surprised me in this paper! I too was under
the impression that the main limiting factor was the public-key operation,
and I didn't even think about the large-request case until I started
researching.
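(To make the keep-alive point above concrete, here is a minimal sketch of
a client reusing a single TLS connection for a burst of requests, so that
only the first request pays the handshake. Python standard library; the
host, paths, and token header format are made up for illustration.)

    import http.client

    conn = http.client.HTTPSConnection("api.example.com")
    for i in range(100):
        # The TLS handshake (and its public-key operations) happens once,
        # on the first request; as long as the server honors keep-alive,
        # the remaining requests reuse the established channel and pay
        # only the symmetric crypto cost.
        conn.request("GET", f"/resource/{i}",
                     headers={"Authorization": "Bearer EXAMPLE_SHORT_LIVED_TOKEN"})
        conn.getresponse().read()  # drain the body so the connection is reused
    conn.close()

In the cold-start case mentioned above, of course, every connection is a
first request, so the handshake cost comes back in full.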
> Note that a possible workaround for these large-request situations is to
> use an SSL-secure API to retrieve a one-time bearer token for use with a
> subsequent large request (a video upload, say). Since it's one time it
> can't be used to replay anything. Since it's a large request the overhead
> of an initial small request is nearly irrelevant.

True. That sounds like a good idea. This is not part of either the OAuth
1.0a or the WRAP spec for bearer tokens; it would have to be added (and
perhaps should be).

> Thanks for writing all of this up. It would be good to get some general
> quantification of user experience latency impact (if everyone agrees
> that's the main issue) and compare it against the known interoperability
> issues with signatures. (There's a user experience impact there too.)

Agreed, that sounds like the right approach. I don't have the expertise or
the time to do that quantitative testing myself. Maybe a large provider
would be willing to contribute a sampling of their test data.

Ethan
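P.S. For concreteness, here is a rough sketch of the one-time-token flow
described above, assuming the intent is that the large transfer itself can
then skip TLS and per-request signing. The hosts, paths, response format,
and Authorization header formats are invented for illustration; none of
this comes from the OAuth 1.0a or WRAP specs.

    import http.client
    import json

    # Step 1: a small request over TLS fetches a single-use upload token.
    auth_conn = http.client.HTTPSConnection("api.example.com")
    auth_conn.request("POST", "/token/one_time",
                      headers={"Authorization": "Bearer EXAMPLE_ACCESS_TOKEN"})
    one_time_token = json.loads(auth_conn.getresponse().read())["token"]
    auth_conn.close()

    # Step 2: the large upload itself, authorized only by the single-use
    # token. Replaying the request is pointless because the server discards
    # the token the first time it is presented.
    upload_conn = http.client.HTTPConnection("upload.example.com")
    with open("video.mp4", "rb") as f:
        upload_conn.request("POST", "/videos", body=f,
                            headers={"Authorization": "OneTime " + one_time_token})
    print(upload_conn.getresponse().status)
    upload_conn.close()

Whether skipping TLS on the upload is acceptable obviously depends on
whether the content itself needs confidentiality.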
_______________________________________________
OAuth mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/oauth