On Thu, Dec 01, 2016 at 03:26:56PM -0800, Brandon Williams wrote:

> > I started taking a look at your http redirect series (I really should
> > have taken a look at it sooner) and I see exactly what you're talking
> > about.  We can easily move this logic into a function to make it easier
> > to generate the two whitelists.
> 
> Thinking about this some more...I was told that having http redirect to
> file:// could be scary.  The way the new protocol configuration is set up
> we have file:// as a default known-safe protocol.  Do we need to worry
> about this or can we leave this be since this can be overridden by the
> user?

Hmm. I'm not sure if file:// should actually be USER_ONLY. The submodule
code allows it, and it's certainly a convenience, but I guess you could
do tricky things by probing somebody's filesystem with submodule URLs.
On the other hand, if you are recursively cloning untrusted repos and
have sensitive contents on disk, you really _should_ be setting up a
protocol whitelist.
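
For concreteness, here is a minimal sketch of such a whitelist, assuming
the protocol.<name>.allow knobs from Brandon's series (with the values
"always", "user", and "never"); it demotes file:// to user-initiated use
only:

    [protocol "file"]
        allow = user

    # or, equivalently, from the command line:
    $ git config --global protocol.file.allow user

With that in place, a recursive clone or a redirect that tries to reach
a file:// URL would be rejected, while "git clone file:///path" typed by
the user still works.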

For HTTP redirects within curl, I think it's a non-issue; curl
automatically disallows file:// for redirects, even without us telling
it to.

For redirects via http-alternates, it's a bit trickier, as we feed
the URL to curl ourselves, so it can't tell the difference between
trusted and untrusted input. The main protection provided by my series
is "don't follow http-alternates at all". But assuming you did want to
use them (by setting http.followRedirects to "true", at least for the
server in question), we could then feed file:// directly to curl. But I
think we are still OK, because the restricted CURLOPT_PROTOCOLS setting
would prevent that from working. I.e., git _never_ wants curl to handle
file://, because git handles that protocol itself, without calling into
remote-curl.c at all.
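
To illustrate the kind of restriction I mean, here is a rough C sketch
(not git's actual http.c code) of limiting a curl handle to http/https
for both the initial request and any redirects, which is what keeps a
file:// URL from ever being fetched through curl:

    #include <curl/curl.h>

    static void restrict_to_http(CURL *curl)
    {
        /* follow redirects, but only to http/https targets */
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
        curl_easy_setopt(curl, CURLOPT_PROTOCOLS,
                         CURLPROTO_HTTP | CURLPROTO_HTTPS);
        curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS,
                         CURLPROTO_HTTP | CURLPROTO_HTTPS);
    }

With CURLOPT_PROTOCOLS set that way, handing curl a file:// URL (whether
directly or via an http-alternates entry) fails with
CURLE_UNSUPPORTED_PROTOCOL rather than touching the filesystem.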

So arguably file:// should be USER_ONLY, but I'm not sure how much it
matters in practice.

-Peff
