The dynamic callbacks are not so much for changing the domain or path
but for adding extra parameters. OAuth Core 1.0 specifies that any
parameters specified on the callback URL must remain there and be
passed back unmodified to the callback. For example:
I pass the user auth URL a callback
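A minimal sketch of the rule described above, with hypothetical URLs and token values, showing extra callback parameters surviving the round trip:

```python
from urllib.parse import urlencode, parse_qs, urlparse

# Hypothetical consumer callback carrying extra state parameters.
# Per OAuth Core 1.0, the Service Provider must return them unmodified.
callback = "https://consumer.example.com/callback?" + urlencode(
    {"return_to": "/photos/album/7", "lang": "en"}
)

# The provider redirects back, appending oauth_token while preserving our params:
redirect = callback + "&" + urlencode({"oauth_token": "hh5s93j4hdidpola"})

params = parse_qs(urlparse(redirect).query)
assert params["return_to"] == ["/photos/album/7"]  # state survived the round trip
```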
pkeane wrote:
This seems like it addresses the hole adequately as long as an
attacker that cannot manipulate the callback url cannot succeed (I
think that's true...).
Further thought on this whole thing makes me think that a one-time
only token exchange plus a non-modifiable callback
I still don't think passing session information through OAuth for the
callback is a good thing. If you are a site, you could simply set a cookie
in the browser to hold any information you will need on the callback.
Example: storing the URL of the page the user was last on. Maybe we could add a
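The cookie suggestion can be sketched with Python's standard library; the cookie name and path here are illustrative, not from the thread:

```python
from http.cookies import SimpleCookie

# Before redirecting the user to the provider, stash the return URL in a
# cookie instead of threading it through the OAuth callback parameters.
cookie = SimpleCookie()
cookie["oauth_return_to"] = "/photos/album/7"   # hypothetical state value
cookie["oauth_return_to"]["path"] = "/callback"
cookie["oauth_return_to"]["httponly"] = True

set_cookie_header = cookie.output()  # header line to emit with the redirect

# On the callback request, read the state back out of the Cookie header:
incoming = SimpleCookie()
incoming.load('oauth_return_to="/photos/album/7"')
assert incoming["oauth_return_to"].value == "/photos/album/7"
```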
I just wrote a long post that just disappeared. Hmm. Testing...
You received this message because you are subscribed to the Google Groups
OAuth group.
To post to this group, send email to oauth@googlegroups.com
To unsubscribe from this
On Fri, Apr 24, 2009 at 9:53 PM, Tommi Laukkanen
tommi.s.e.laukka...@gmail.com wrote:
I am studying OAuth to be able to suggest and champion it to OpenSim
community.
Welcome Tommi! Would be great if you could advocate for OAuth in the OpenSim
community!
I am looking for a way to combine
I'd like to point out that anyone can organize one of these events wherever
they are — this need not happen in the Bay Area exclusively!
Moreover, I encourage folks to get together and talk through this threat,
how to exploit it and then develop solutions or fixes to what you came up
with, face to
I don't really see the need for the double trip to the service provider to
perform the login and authorization.
This can be done in one single step like I have outlined in my proposal.
User logs into provider, grants access, and returns back with the token.
The less work we do in our flow the less
On 4/25/09 1:33 PM, J. Adam Moore wrote:
I'm writing a blog post to explain why I think I have a solution, but
I believe it is as simple as moving the provider login to before the
consumer token generation which is triggered by a provider-side
redirect.
Yes. This is exactly what I've been
The idea is that the communication between the Consumer and Provider
sites consists of URLs that are composed behind user logins ON BOTH
SITES at the same time. I believe that this prevents simpler attacks
like man in the middle or DNS or url tampering and allows secure token
generation based on
We don't really need the user to be logged into the consumer to generate our
token. The service provider should not care what our login is on the
consumer.
All it cares about is authorizing a consumer access to our data. We log into
the provider and authorize the creation of an access token for
On Sat, Apr 25, 2009 at 10:46 AM, Chris Messina chris.mess...@gmail.com wrote:
I'd like to point out that anyone can organize one of these events wherever
they are — this need not happen in the Bay Area exclusively!
We had a meeting in Mountain View on Friday, our notes are here:
Wow, never type an email right after running. Sorry about the piece-meal
grammar. That should be:
Pardon me if this seems naive, but if we're considering a solution in which
the user enters a pin at both ends, perhaps a better solution would be to
use an image instead. It would be similar to the
Thanks for posting that Brian.
I'm leaning towards signed approval URLs. Seems the best way to go IMO.
Seems to solve the issues and also helps simplify the OAuth flow.
On Sat, Apr 25, 2009 at 2:09 PM, Brian Eaton bea...@google.com wrote:
On Sat, Apr 25, 2009 at 10:46 AM, Chris Messina
Logically I find that the only way to guarantee that two different
users at two different sites are really the same person is to make
them self authenticate BEFORE establishing a secure communication. By
having both the Provider and Consumer redirect to a spot behind a
login on both sites it
What I should have added was that using my solution, the consumer is
completely capable of being stupid and giving the consumer a redirect
that doesn't require a login on the consumer side, but they can also
take a gun and blow their brains out. You can't stop people from being
stupid and it's
EDIT LAST POST: By the second "consumer" I meant to say "provider".
On Apr 25, 12:55 pm, J. Adam Moore jadammo...@gmail.com wrote:
What I should have added was that using my solution, the consumer is
completely capable of being stupid and giving the consumer a redirect
that doesn't require a login
I am not suggesting changing the entire spec, just dropping the request
token part.
This is what I'm getting at --
https://oauth.pbwiki.com/Signed-Approval-URLs
On Sat, Apr 25, 2009 at 2:58 PM, J. Adam Moore jadammo...@gmail.com wrote:
EDIT LAST POST: The second consumer I meant to say
Yes, we would need a way to still allow for manually providing these devices
with the callback token.
The user can directly visit an authorization URL since there will be no
callback.
Example: http://service.example.com/authorize/testconsumer
This URL can be provided by the consumer device.
Once the
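A rough sketch of how a signed approval URL could work; the key, secret, and signing scheme here are illustrative, not the wiki proposal's actual format:

```python
import hashlib
import hmac
from urllib.parse import urlencode, quote

# Illustrative consumer secret -- in the proposal, only the consumer and
# provider know it, so an attacker cannot forge a valid signature.
CONSUMER_SECRET = "kd94hf93k423kf44"

def sign(params: dict, secret: str) -> str:
    # Toy base string: sorted, percent-encoded key=value pairs joined by "&".
    base = "&".join(f"{k}={quote(str(v), safe='')}" for k, v in sorted(params.items()))
    return hmac.new(secret.encode(), base.encode(), hashlib.sha1).hexdigest()

params = {
    "oauth_consumer_key": "testconsumer",
    "oauth_callback": "https://consumer.example.com/callback",
}
params["oauth_signature"] = sign(params, CONSUMER_SECRET)
approval_url = "https://service.example.com/authorize?" + urlencode(params)

# Provider side: recompute the signature and compare before honoring the callback.
received = dict(params)
claimed = received.pop("oauth_signature")
assert hmac.compare_digest(claimed, sign(received, CONSUMER_SECRET))
```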
On Sat, Apr 25, 2009 at 1:11 PM, J. Adam Moore jadammo...@gmail.com wrote:
The problem itself is REALLY
specific: Phishing. Like fish in a barrel phishing. The solution is to
take away their bullets, and is not to try and harden the barrels or
educate the fish to dodge bullets.
The problem
On Sat, Apr 25, 2009 at 1:04 PM, Brian Eaton bea...@google.com wrote:
On Sat, Apr 25, 2009 at 12:26 PM, Josh Roesslein jroessl...@gmail.com wrote:
Thanks for posting that Brian.
I'm leaning towards signed approval URLs. Seems the best way to go IMO.
Seems to solve the issues and also helps
I think you are putting all your faith into the security of the
information and not realizing the insecurity of the urls and
redirection. Why can't I act as a proxy for the unwitting user who
goes through your whole scenario only to find that his secure
authenticated token has been poached by my
The only place that a phishing attack would occur in the signed
authorization proposal is the authorization URL.
An attacker could lure a user to click on a link that directs the user to a
clone of the provider and steal the user's credentials
when logging in. The best way to prevent this is users
On Apr 25, 1:19 pm, Josh Roesslein jroessl...@gmail.com wrote:
Yes, we would need a way to still allow for manually providing these devices
with the callback token.
The user can directly visit an authorization URL since there will be no
callback.
As for the timing to apply this change, I think it would be worth taking
the extra time to get it right. Most providers I think have already found
quick fixes
to block this session fixation attack. So I don't think we are in immediate
danger, but I could be wrong. Just by adding callback URL
On Sat, Apr 25, 2009 at 1:38 PM, Josh Roesslein jroessl...@gmail.com wrote:
As for the timing to apply this change, I think it would be worth taking
the extra time to get it right. Most providers I think have already found
quick fixes
to block this session fixation attack.
Really? The
How can the attacker use that flow? He can't set a callback in that URL
since it can't be signed by him unless he has the consumer secret.
If there is no signature, the provider ignores any parameters. It can look up
to see if a default callback has been registered and use that. If there is
no
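The fallback rule described here might look like this (the registry and function names are made up for illustration):

```python
# If the authorization request is unsigned, ignore its parameters and fall
# back to the callback registered for the consumer, if any.
registered_callbacks = {"testconsumer": "https://consumer.example.com/callback"}

def resolve_callback(consumer_key, params, signature_valid):
    if signature_valid and "oauth_callback" in params:
        return params["oauth_callback"]
    return registered_callbacks.get(consumer_key)  # None if nothing registered

# An unsigned request cannot override the registered callback:
evil = {"oauth_callback": "https://attacker.example/cb"}
assert resolve_callback("testconsumer", evil, False) \
    == "https://consumer.example.com/callback"
```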
You can't expect the user to type a URL with a signature in it, in the
case of a device that can't run a web browser. Sorry if I confused
the case we were talking about.
On Sat, Apr 25, 2009 at 1:43 PM, Josh Roesslein jroessl...@gmail.com wrote:
How can the attacker use that flow? He can't set
Yeah, I have that at my bank and it sucks all kinds of hell. Thank god
I can just Google my mother's maiden name to reset my password when
that fails. If a system is designed to work only by relying upon
people to not be stupid it will fail. You can't outwit a fool; only
fools try. I really need
On Sat, Apr 25, 2009 at 1:35 PM, Josh Roesslein jroessl...@gmail.com wrote:
Plus we can require that you only get one try to swap the callback for an
access token. After that it is invalidated and no longer useful.
You can't actually do that in the flow you proposed. In order to
limit the
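For reference, the once-only exchange being debated might be sketched like this (a toy in-memory store, not anyone's actual proposal):

```python
# A callback token may be swapped for an access token exactly once;
# a second attempt finds the token already gone.
class TokenStore:
    def __init__(self):
        self._pending = {}  # callback_token -> access_token

    def issue(self, callback_token, access_token):
        self._pending[callback_token] = access_token

    def exchange(self, callback_token):
        # pop() makes the swap single-use: the entry is removed on first access
        return self._pending.pop(callback_token, None)

store = TokenStore()
store.issue("cb123", "at456")
assert store.exchange("cb123") == "at456"   # first try succeeds
assert store.exchange("cb123") is None      # second try fails: invalidated
```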
On Sat, Apr 25, 2009 at 1:33 PM, Jonathan Sergent serg...@gmail.com wrote:
(These names are a bit unfortunate - the callback URL gets signed in
either case, just as part of a different request.)
I agree, just couldn't think of anything better.
I guess we could still support the current 1.0 spec for this sort of
device that needs the request-token flow. When a consumer registers they can
specify if they require
1.0 support. Consumer websites should not leave this enabled to prevent
session fixation attacks.
On Sat, Apr 25, 2009 at 4:14
I completely agree. The whole point of this thread (I thought) was to
develop a solution to a very specific security hole; this has already
been done with three things: once-only exchanging, signed/pre-specified
callbacks, and the concept of a callback nonce (a.k.a.
authorization token, and a
Zack, very good points. We have probably been overthinking this a bit and
have gotten off topic.
Our focus should be:
+ Secure the callback in the authorization URL from tampering
+ Make sure the user that authorized the request token is the same user that
requested it
The first issue can be
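The second bullet could be enforced on the consumer side roughly like this (session IDs and helper names are made up for illustration):

```python
# Bind the request token to the browser session that started the flow, and
# refuse the callback if it arrives in a different session.
sessions = {}  # session_id -> request_token issued in that session

def start_flow(session_id, request_token):
    sessions[session_id] = request_token

def handle_callback(session_id, request_token):
    return sessions.get(session_id) == request_token

start_flow("sess-alice", "rt-111")
assert handle_callback("sess-alice", "rt-111")        # same browser: OK
assert not handle_callback("sess-mallory", "rt-111")  # fixated token: rejected
```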
Exactly! Although yet again, you're using a different name to the ones
I've heard on here :) So that makes 'callback nonce', 'authorization
token', 'confirmation token' ... has anyone got any more? Let's get
back to discussing the color of the bike shed already!
OK, but on a more serious note,
Well, I think some met Friday. I've seen Leah Culver on the list; not sure
who else is an author and if they are watching.
Yeah I have used many words for this token, but I like the callback
token term. Seems fitting since it is the token being returned on the
callback.
Maybe we can just call it
W/r/t attempt-limiting, you can limit the attempts based on the token
that the consumer is attempting to exchange.
Also, I've noticed that there isn't much difference between the two
methods. A request token is essentially an alias, known to both the
consumer and service provider, for the
How would attackers be able to inject a callback w/o having access to
the consumer secret?
On Apr 25, 3:46 pm, Josh Roesslein jroessl...@gmail.com wrote:
Zack, very good points. We have probably been overthinking this a bit and
have gotten off topic.
Our focus should be:
+ Secure the
Josh, currently we don't sign the authorization URL, so an attacker can do
whatever they want. Once we start signing the callbacks, this won't be possible.
On Sat, Apr 25, 2009 at 7:05 PM, Josh Fraser joshf...@gmail.com wrote:
How would attackers be able to inject a callback w/o having access to
Well, for server-based consumers they should be secret.
I'm guessing you are referring to desktop-based consumers. Yes, it is
impossible to keep a secret concealed in that situation.
In that case the consumer would not be using callbacks anyway, and they
should disable it with the proposed flag
As a rule, a server shouldn't look for OAuth parameters in the body of
a request whose content-type isn't application/x-www-form-urlencoded
(as specified by http://oauth.net/core/1.0/#consumer_req_param). In
the OpenSocial example, the client could send an XML content-type,
such as text/xml or
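The rule cited from the spec can be sketched as a guard (the function name is illustrative):

```python
from urllib.parse import parse_qs

# Only parse OAuth parameters out of the body when it is genuinely
# form-encoded, per the OAuth Core 1.0 rule cited above.
def oauth_params_from_body(content_type: str, body: str) -> dict:
    if content_type.split(";")[0].strip() != "application/x-www-form-urlencoded":
        return {}  # e.g. text/xml bodies are opaque to the OAuth layer
    return {k: v for k, v in parse_qs(body).items() if k.startswith("oauth_")}

assert oauth_params_from_body(
    "application/x-www-form-urlencoded", "oauth_token=abc&x=1"
) == {"oauth_token": ["abc"]}
assert oauth_params_from_body("text/xml", "<osapi/>") == {}
```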
Sorry:
Almost all of the proposed solutions attempt to minimize the
possibility that user at B is NOT the same as user at C.
is what it should say...
On Apr 25, 10:19 pm, pkeane pjke...@gmail.com wrote:
Here is an attempt to help spell out the OAuth security in simple
terms and thus provide a
How would a desktop client receive a callback? First it would need to be
running a web server to process the incoming HTTP request, and
the provider would also need its IP address. Would a provider be registering
each IP as a separate consumer? If this is the case, then the best option
would be to
I agree that 2. test(B==C), i.e., verify that the user at B is the
same user at C, is
not the same as 2b. min Prob(B!=C).
The former is clearly more desirable.
If someone logs in to both sites using something like OpenID,
then it is trivially achieved without much user interaction impact,