Re: [tor-dev] Dealing with frequent suspends on Android

2018-11-26 Thread Nick Mathewson
On Wed, Nov 21, 2018 at 5:10 PM Michael Rogers  wrote:
>
> On 20/11/2018 19:28, Nick Mathewson wrote:
> > Hi!  I don't know if this will be useful or not, but I'm wondering if
> > you've seen this ticket:
> >   https://trac.torproject.org/projects/tor/ticket/28335
> >
> > The goal of this branch is to create a "dormant mode" where Tor does
> > not run any but the most delay- and rescheduling-tolerant of its
> > periodic events.  Tor enters this mode if a controller tells it to, or
> > if (as a client) it passes long enough without user activity.  When in
> > dormant mode, it doesn't disconnect from the network, and it will wake
> > up again if the controller tells it to, or it receives a new client
> > connection.
> >
> > Would this be at all helpful for any of this?
>
> This looks really useful for mobile clients, thank you!

Glad to hear it -- it's now merged into Tor's master branch.
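For anyone wanting to try it from a controller: a minimal sketch of toggling the new dormant mode over the control port, assuming the SIGNAL DORMANT / SIGNAL ACTIVE commands that the #28335 branch adds. The port and the empty-password authentication are illustrative; real code should authenticate with a cookie or hashed password.

import socket

def send_cmd(sock, line):
    """Send one control-port command and return the reply text."""
    sock.sendall((line + "\r\n").encode("ascii"))
    return sock.recv(1024).decode("ascii")

with socket.create_connection(("127.0.0.1", 9051)) as ctrl:
    # Assumes no authentication is configured; illustrative only.
    print(send_cmd(ctrl, 'AUTHENTICATE ""'))
    # Ask Tor to stop running all but its most delay-tolerant
    # periodic events.
    print(send_cmd(ctrl, "SIGNAL DORMANT"))
    # ... later, e.g. when the app returns to the foreground ...
    print(send_cmd(ctrl, "SIGNAL ACTIVE"))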

> The comments on the pull request
> (https://github.com/torproject/tor/pull/502) suggest that Tor won't
> enter dormant mode if it's running a hidden service. Are there any plans
> to support that in future?

I want to support this for hidden services.  Here's the ticket to
track that: https://trac.torproject.org/projects/tor/ticket/28424

This is going to be harder than the other cases, though, so we decided
to defer it for now and see if we have time later.

> One of the comments mentions a break-even point for consensus diffs,
> where it costs less bandwidth to fetch a fresh consensus than all the
> diffs from the last consensus you know about. Are diffs likely to remain
> available up to the break-even point, or are there times when it would
> be cheaper to use diffs, but you have to fetch a fresh consensus because
> some of the diffs have expired?

This shouldn't be a problem: directory caches will (by default) keep
diffs slightly beyond the break-even point.

(I think. I haven't measured this in a while.)
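To make the break-even idea concrete, here is a back-of-the-envelope sketch. The sizes are made-up placeholders, not measured values: assume a full compressed consensus of roughly 500 KiB and roughly 40 KiB per hourly diff.

FULL_CONSENSUS_KIB = 500   # hypothetical compressed consensus size
DIFF_KIB_PER_HOUR = 40     # hypothetical average size of one hourly diff

def cheaper_to_fetch_diffs(hours_stale: int) -> bool:
    """True if applying hours_stale hourly diffs costs less bandwidth
    than downloading a fresh consensus."""
    return hours_stale * DIFF_KIB_PER_HOUR < FULL_CONSENSUS_KIB

# With these placeholder numbers the break-even lands around 12 hours:
for h in (6, 12, 13, 24):
    print(h, cheaper_to_fetch_diffs(h))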

> Essentially I'm wondering whether we'd
> want to wake Tor from dormant mode occasionally to fetch diffs before
> they expire, so we can avoid fetching a fresh consensus later.


Re: [tor-dev] OnionShare bug that's possibly caused by an upstream v3 onion bug

2018-11-26 Thread Micah Lee
On 11/26/18 7:55 AM, David Goulet wrote:
> I've opened this and marked it for backport:
> https://trac.torproject.org/projects/tor/ticket/28619
> 
> Big thanks to everyone on that OnionShare ticket for the thorough report!
> David

Thank you!


Re: [tor-dev] OnionShare bug that's possibly caused by an upstream v3 onion bug

2018-11-26 Thread David Goulet
On 24 Nov (21:30:16), Micah Lee wrote:

[snip]

Greetings Micah!

> But with v3 onion services, as soon as the OnionShare web server finishes
> sending the full HTTP response, the torified HTTP client stops downloading.
> I made a small python script, onion-bug.py, that reproduces the issue that
> you can test [2].

This is definitely on the "tor" side. Here is what is going on:

When a DEL_ONION command is received, the v3 subsystem will close _all_
related circuits including the rendezvous circuit (where the data is being
transferred).

This is something the v2 subsystem does *not* do, so there is your difference
between the two versions. Not closing the rendezvous circuit has the side
effect that the connected client can still talk to the .onion as long as the
application behind it is still running. In the case of OnionShare, I don't
think it matters, since the web server is simply gone by then.
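For illustration, here is a hypothetical sketch (not Micah's actual onion-bug.py) of the control-port sequence that triggers the bug, using stem: publish a v3 service with ADD_ONION, let a client download from it, then issue DEL_ONION. With a v3 address the in-flight transfer is cut off; with v2 it completes.

from stem.control import Controller

with Controller.from_port(port=9051) as ctrl:
    ctrl.authenticate()
    # ED25519-V3 keys yield a v3 (56-character) onion address.
    resp = ctrl.create_ephemeral_hidden_service(
        {80: 8000}, key_type="NEW", key_content="ED25519-V3")
    print("serving at %s.onion" % resp.service_id)

    input("start the client download, wait for the server to finish "
          "sending, then press Enter...")

    # Sends DEL_ONION: the v3 subsystem then closes *all* circuits for
    # the service, including the rendezvous circuit that may still hold
    # queued data cells for the client.
    ctrl.remove_ephemeral_hidden_service(resp.service_id)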

That being said, I see that your Python script waits until everything has been
given to "tor" before sending a DEL_ONION (correct me if I'm wrong). So the
question is: how can the circuit die _before_ everything has been sent to the
client, if tor has received everything?

This is due to how tor handles cells. A DESTROY cell (which closes the circuit
down the path) can be sent even while data cells are still waiting in the
circuit queue. In other words, once a DESTROY cell is issued, it takes
priority and can therefore leave queued data cells behind. There are reasons
for this, so it isn't "weird behavior" but by design.
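A toy model of that scheduling behavior, not Tor's actual implementation: DESTROY bypasses the per-circuit cell queue, so queued data cells are dropped when the circuit closes.

from collections import deque

def send_to_network(cell):
    print("sent:", cell)

class Circuit:
    def __init__(self):
        self.cell_queue = deque()   # data cells awaiting bandwidth
        self.closed = False

    def queue_data_cell(self, cell):
        self.cell_queue.append(cell)

    def flush_one(self):
        """Called as outbound bandwidth becomes available."""
        if not self.closed and self.cell_queue:
            send_to_network(self.cell_queue.popleft())

    def mark_for_close(self):
        # DESTROY goes out immediately, ahead of anything still queued,
        # so the data cells left in the queue never reach the client.
        send_to_network("DESTROY")
        self.cell_queue.clear()
        self.closed = True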

The solution here is to make v3 act like v2: close everything except the
established rendezvous circuit(s), so that any in-flight transfer can finish
and the application on the other side can close its connections or simply
stop serving.
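In Python pseudocode, the shape of that fix might look like the sketch below (the real change lives in Tor's C circuit code under ticket #28619; the names here are hypothetical):

def close_service_circuits(circuits, service_id):
    for circ in circuits:
        if circ.service_id != service_id:
            continue
        if circ.purpose == "REND_ESTABLISHED":
            # Spare the live rendezvous circuit so in-flight data can
            # drain to the client, matching the v2 behavior.
            continue
        circ.mark_for_close()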

I've opened this and marked it for backport:
https://trac.torproject.org/projects/tor/ticket/28619

Big thanks to everyone on that OnionShare ticket for the thorough report!
David
