Re: [tor-dev] OnionShare bug that's possibly caused by an upstream v3 onion bug

2018-11-26 Thread Micah Lee
On 11/26/18 7:55 AM, David Goulet wrote:
> I've opened this and marked it for backport:
> https://trac.torproject.org/projects/tor/ticket/28619
> 
> Big thanks to everyone on that OnionShare ticket for the thorough report!
> David

Thank you!
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] OnionShare bug that's possibly caused by an upstream v3 onion bug

2018-11-26 Thread David Goulet
On 24 Nov (21:30:16), Micah Lee wrote:

[snip]

Greetings Micah!

> But with v3 onion services, as soon as the OnionShare web server finishes
> sending the full HTTP response, the torified HTTP client stops downloading.
> I made a small python script, onion-bug.py, that reproduces the issue that
> you can test [2].

This is definitely on the "tor" side. Here is what is going on:

When a DEL_ONION command is received, the v3 subsystem will close _all_
related circuits, including the rendezvous circuit (where the data is being
transferred).

This is something the v2 subsystem does *not* do, so there is your difference
between the two versions. Not closing the rendezvous circuit has the side
effect that the connected client can still talk to the .onion as long as the
application is still running behind it. In the case of OnionShare, I don't
think it matters, since the web server is simply gone by then.
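
Concretely, the command in question is just DEL_ONION on the control port.
Via stem that is roughly the following (ControlPort 9051 and the service ID
below are placeholders on my side):

from stem.control import Controller

with Controller.from_port(port=9051) as controller:
    controller.authenticate()
    # The 56-character v3 address, without the ".onion" suffix.
    service_id = "yourv3serviceidgoeshere"
    # Roughly what controller.remove_ephemeral_hidden_service() sends; on
    # the v3 side this currently tears down the rendezvous circuit as well.
    print(controller.msg("DEL_ONION %s" % service_id))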

That being said, I see that your Python script waits until everything has been
given to "tor" before sending a DEL_ONION (correct me if I'm wrong). So then
the question is: how can the circuit die _before_ everything was sent to the
client, if tor has received everything?

This is due to how tor handles cells. A DESTROY cell (which closes the circuit
down the path) can be sent even if cells still exist in the circuit queue. In
other words, once a DESTROY cell is issued, it is given high priority and can
therefore leave some data cells behind. There are reasons for that, so this
isn't "weird behavior"; it is by design.

The solution here is to make v3 act like v2 does, that is, close everything
except the existing established RP circuit(s) so any in-flight transfer can be
finalized, and let the application on the other side close its connections or
simply stop serving.
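
To make the whole sequence concrete, here is roughly what I understand the
reproduction to be doing (a sketch on my end, not your onion-bug.py; it
assumes stem, requests[socks], and a local tor with ControlPort 9051 and
SocksPort 9050):

import threading
import http.server
import requests
from stem.control import Controller

PAYLOAD = b"x" * (10 * 1024 * 1024)  # big enough that cells are still queued
server_done = threading.Event()

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", str(len(PAYLOAD)))
        self.end_headers()
        self.wfile.write(PAYLOAD)  # the web server is "done" at this point
        server_done.set()

httpd = http.server.HTTPServer(("127.0.0.1", 8080), Handler)
threading.Thread(target=httpd.serve_forever, daemon=True).start()

with Controller.from_port(port=9051) as controller:
    controller.authenticate()
    # ADD_ONION with a fresh v3 key; using the default RSA key instead gives
    # a v2 service for comparison.
    service = controller.create_ephemeral_hidden_service(
        {80: 8080}, key_type="NEW", key_content="ED25519-V3",
        await_publication=True)
    url = "http://%s.onion/" % service.service_id

    def fetch():
        proxies = {"http": "socks5h://127.0.0.1:9050"}
        try:
            r = requests.get(url, proxies=proxies, timeout=300)
            print("client got all %d bytes" % len(r.content))
        except requests.RequestException as exc:
            print("download cut short: %r" % exc)

    client = threading.Thread(target=fetch)
    client.start()

    server_done.wait()  # everything has been handed off to tor...
    controller.remove_ephemeral_hidden_service(service.service_id)  # DEL_ONION
    client.join()  # with a v3 service the download typically comes up short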

I've opened this and marked it for backport:
https://trac.torproject.org/projects/tor/ticket/28619

Big thanks to everyone on that OnionShare ticket for the thorough report!
David



Re: [tor-dev] OnionShare bug that's possibly caused by an upstream v3 onion bug

2018-11-25 Thread Ivan Markin
On 2018-11-25 05:30, Micah Lee wrote:
> I've been working on a major OnionShare release that, among other
> things, will use v3 onion services by default. But it appears that
> either something in stem or in Tor deals with v3 onions differently
> than v2 onions, and causes a critical bug in OnionShare. It took a lot
> of work to track down exactly how to reproduce this bug, and I haven't
> opened an upstream issue for either stem or tor because I feel like I
> don't understand it enough yet.

Hi Micah and all,

Thanks for the heads-up!

I'm writing here only to confirm that I can reproduce the issue without
stem (using bulb) [1]. It seems the underlying issue is in
little-t-tor and not in stem.
Or, yes, maybe it's not even a bug (though it seems weird to me).

[1] https://github.com/nogoegst/onion-abort-issue

--
Ivan Markin