Hi Leslie,

This sounds like a cool project.

I don't know how to answer #1 (maybe binding to 0.0.0.0, i.e. all
interfaces, would work?), but for #2, instead of a series of
run_until_complete() calls you should probably use a combination of
coroutines and tasks.
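On #1: that Windows error (10049, WSAEADDRNOTAVAIL) typically means you bound to an address that doesn't belong to the local machine -- e.g. the external IP the tracker reports. Binding to '0.0.0.0' listens on all local interfaces. A minimal sketch, assuming made-up names (handle_peer stands in for your handle_leecher; async/await is the Python 3.5 spelling -- with 3.4, decorate with @asyncio.coroutine and use "yield from" instead of "await"):

```python
import asyncio

# Hypothetical stand-in for a handle_leecher coroutine; it just echoes
# one message back to the connecting peer.
async def handle_peer(reader, writer):
    data = await reader.read(1024)
    writer.write(data)
    await writer.drain()
    writer.close()

async def demo():
    # A real client would bind host='0.0.0.0' and a fixed, advertised
    # port; 127.0.0.1 and port=0 (OS-chosen) keep this demo self-contained.
    server = await asyncio.start_server(handle_peer, host='127.0.0.1', port=0)
    port = server.sockets[0].getsockname()[1]

    # Pretend to be a remote peer connecting in.
    reader, writer = await asyncio.open_connection('127.0.0.1', port)
    writer.write(b'handshake')
    await writer.drain()
    reply = await reader.read(1024)
    writer.close()

    server.close()
    await server.wait_closed()
    return reply

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
reply = loop.run_until_complete(demo())
loop.close()
```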

Inside a coroutine, if you want to wait for another coroutine, use "yield
from other_coroutine()". Since asyncio.wait() is itself a coroutine, you
can use that too: "yield from asyncio.wait(coros)". If you have something
you want to start without waiting for it, wrap it in a task and let go of
it -- it will run independently as long as your event loop is running.
Start it with "loop.create_task(coro)" -- or, if you have a list of them,
use a for-loop ("for coro in coros: loop.create_task(coro)"). Don't use
"yield from" in this case.
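A tiny self-contained sketch of those three patterns (coroutine names are made up; I've used the Python 3.5 async/await spelling, which is equivalent to "yield from" with 3.4's @asyncio.coroutine):

```python
import asyncio

results = []

# Hypothetical stand-in for one of your coroutines (e.g. get_piece).
async def get_piece(i):
    await asyncio.sleep(0.01)
    results.append(i)
    return i

async def main():
    # 1. Wait for a single coroutine from inside another:
    first = await get_piece(0)

    # 2. Wait for a whole batch; asyncio.wait() is itself a coroutine:
    tasks = [asyncio.ensure_future(get_piece(i)) for i in range(1, 4)]
    done, pending = await asyncio.wait(tasks)

    # 3. Fire-and-forget: wrap it in a task and don't wait on it; it keeps
    # running as long as the loop does.
    loop = asyncio.get_event_loop()
    background = loop.create_task(get_piece(99))
    await asyncio.sleep(0.05)   # give the background task time (demo only)
    return first, len(done)

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
first, ndone = loop.run_until_complete(main())
loop.close()
```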

You'll need a main coroutine to kick everything off. If it stays active
and doesn't return until you want to exit the program, you can start it
with loop.run_until_complete(main_coro()). If you want to just kick it off
and let the other coroutines run free, you can start it with
loop.create_task(main_coro()); loop.run_forever(). Hit ^C to stop.
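Both start-up patterns, sketched with made-up names (again in 3.5 async/await spelling; loop.call_later(0.1, loop.stop) stands in for hitting ^C so the example exits on its own):

```python
import asyncio

log = []

# Hypothetical worker coroutine.
async def worker(n):
    await asyncio.sleep(0.01 * n)
    log.append(n)

async def main_coro():
    # Kick off independent tasks, then stay alive until they're done.
    loop = asyncio.get_event_loop()
    tasks = [loop.create_task(worker(n)) for n in range(3)]
    await asyncio.wait(tasks)

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)

# Pattern 1: main coroutine stays active; run until it returns.
loop.run_until_complete(main_coro())

# Pattern 2: fire off main and run forever; a delayed loop.stop()
# stands in for ^C here.
loop.create_task(main_coro())
loop.call_later(0.1, loop.stop)
loop.run_forever()
loop.close()
```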

Good luck!

--Guido

On Mon, Nov 30, 2015 at 9:07 PM, Leslie Klein <[email protected]>
wrote:

> I am writing a BitTorrent client using asyncio and the Streams API. I'm
> trying to work at the highest level possible.
> I have the download part working (client opens a connection to peers
> provided by tracker and downloads all pieces). Client can also upload
> pieces to connected peers, if requested.
>
> Big Question: how to listen for incoming connections from peers that join
> the swarm? The client running on a peer must be able to listen for incoming
> connections and upload pieces.
> I set up a server (following the documentation in 18.5.5.1).
>
> Question 1: I don't know what address to bind the server to. I get the
> error message "[Errno 10049] ... the requested address is not valid in its
> context"
>
> Question 2: I don't know how to integrate the server with the other tasks
> that run concurrently in the loop. (I assume this can be done with 1 loop).
>
> # tasks that open all connections (run until complete):
> coros = [client.connect_to_peer(peer)
>          for peer in client.active_peers.values()]
> loop.run_until_complete(asyncio.wait(coros))
>
> then...
>
> # tasks that get a single piece by getting blocks
> # from multiple peers (if possible)
> coros = [client.get_piece(index, peer) for peer in peers]
> loop.run_until_complete(asyncio.wait(coros))
>
> I can easily do the following (if I could answer Question 1), but it
> doesn't seem right, since my program should be listening for incoming
> connections even before the client becomes a seeder.
>
> After the client is a seeder, the program can start the server and the
> handle_leecher coroutine handles incoming connections and handshakes.
> Then the server runs forever and uploads pieces to the remote peers.
>
> Thanks.
> Leslie
>


-- 
--Guido van Rossum (python.org/~guido)
