On May 8, 2010, at 2:35 PM, Roman Yakovenko wrote:

> On Sat, May 8, 2010 at 5:48 PM, Christian Vollmer
> <christian.voll...@gmx.net> wrote:
>> Hallo all,
>> 
>> I'm trying to achieve the following:
>> 
>> I have two hosts A and B. A establishes a remote port forwarding tunnel on
>> B, i.e. B is the one a port is forwarded on to some where else and A is the
>> one that sets up the tunnel. I tried the script rforward.py that ships with
>> paramiko and it works very well so far. (I'm running rforward.py on A, which
>> connects to B and forwards a port of B to somewhere else)
>> 
>> However, when B is shutting down, A doesn't seem to recognize it. I'd rather
>> like A to recognize that B is down and to try to reestablish the connection
>> periodically in case B comes up again.
>> 
>> Is there a way to do this? Thanks.
> 
> There is no (easy) way to achieve this. You can search this mailing
> list; I asked almost the same question a few months ago.
> 
> I tried to implement an automatic reconnect, but failed, mainly
> because I didn't find a way to implement a "ping" method -- a check
> that the remote host is alive. Executing some command (even a simple
> "echo") did not help.
> 
> In the end, speed was not an issue, so I simply "connect" and
> "disconnect" around each command.

I've had similar requirements recently -- in my case host A needed to set up and 
keep alive a set of local _and_ remote port forwards vis-a-vis host B.  My 
environment has multiple independent network links, and network failure is the 
common case -- when one link dies, a new route may become available (but the 
world-routable IP addresses are not preserved).

It was a little tricky to do ...  The implementation I came up with was 
complicated by these factors:

- paramiko.Transport.open_channel doesn't accept a timeout parameter and so can 
block for a long time
- ssh servers (at least openssh) don't allow you to cancel a port forward that 
was requested by a different ssh client connection -- so if you, e.g.,
        - connect to the server, request port forwarding
        - lose the network
        - reconnect (either because the network comes back or a different link 
comes up)
        - request port forwarding again <-- this will fail if the old remote 
port forwarding request is still 'active'
                - this is a problem because ssh servers don't detect dead 
sessions in a very predictable way (at least not in any way predictable by me) 
and old remote forwarding requests may stay active for a long time ...
                        - I ended up using the ugly hack of restarting the sshd 
process if the remote forwarding request ever fails: exec_cmd(['pkill', '-HUP', 
'-u', 'uid', 'sshd']) (assumes a unixy server host)
- sometimes openssh stops listening on a remote port forward socket even though 
the underlying session is still alive ... 
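
For the first factor, the lack of a timeout on open_channel, the workaround is 
to make the blocking call in a worker thread and give up waiting after a 
deadline.  A rough sketch -- the helper name is made up, and I use 
Transport.open_session() here, which is the 'session'-kind convenience wrapper 
around open_channel:

```python
import threading

def open_channel_with_timeout(transport, timeout=10.0):
    """Run transport.open_session() in a worker thread so the caller can
    give up after `timeout` seconds (the call itself has no timeout and
    can block for a long time)."""
    result = {}

    def worker():
        try:
            result["chan"] = transport.open_session()
        except Exception as exc:
            result["error"] = exc

    t = threading.Thread(target=worker, daemon=True)  # a stuck worker
    t.start()                                         # dies with the process
    t.join(timeout)
    if t.is_alive() or "error" in result:
        return None          # timed out or failed -- treat session as dead
    return result["chan"]
```

Note the worker thread is simply abandoned on timeout; since it is a daemon 
thread it won't keep the process alive, but the blocked open_session call may 
linger until the transport is torn down.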

With those observations in mind what I did looks like this:

- reconnect_forever_loop()  <-- tries to reconnect; if the connection succeeds, 
sets up the port forwarding and then enters monitor_session_loop
- monitor_session_loop()  <-- polls the connection by calling 
transport.open_channel in a separate thread (to fake timeout behavior for the 
open_channel call)
        - if open_channel returns without timing out, do a somewhat heavy 
remote socket connect, chan.exec_command('nc -z ...'), to make sure the remote 
server is still forwarding as requested
        - if open_channel fails, times out, or the 'nc' socket connect fails, 
scrap the session artifacts and re-enter the reconnect loop
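
For what it's worth, the two loops above might be sketched roughly like this 
with paramiko -- the host name, port, intervals, and the nc check target are 
placeholders, error handling is deliberately crude, and real code would pass 
auth arguments to connect():

```python
import time

# Placeholders -- substitute your own hosts and ports.
SERVER_HOST = "hostB"
REMOTE_PORT = 8022           # port the server should listen on for us

def reconnect_forever_loop():
    """Try to (re)connect forever; on success, request the remote
    forward and hand the session to the monitor loop."""
    import paramiko
    while True:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        try:
            client.connect(SERVER_HOST, timeout=10)
            transport = client.get_transport()
            transport.set_keepalive(15)
            # Ask the server to listen on REMOTE_PORT on our behalf.
            transport.request_port_forward("", REMOTE_PORT)
            monitor_session_loop(transport)   # returns when session looks dead
        except Exception:
            pass                              # connect/forward failed -- retry
        finally:
            client.close()                    # scrap the session artifacts
        time.sleep(5)                         # back off before reconnecting

def monitor_session_loop(transport, poll_interval=30):
    """Poll the session; return as soon as it looks unhealthy."""
    while True:
        try:
            # NB: this can block with no timeout -- in practice make the
            # call in a separate thread, as described above.
            chan = transport.open_session()
        except Exception:
            return                            # session is dead
        # Heavier check: is the server really still listening on the
        # forwarded port?  (Run on the server, from the server's view.)
        chan.exec_command("nc -z 127.0.0.1 %d" % REMOTE_PORT)
        if chan.recv_exit_status() != 0:
            return                            # forward socket is gone
        chan.close()
        time.sleep(poll_interval)
```

The keepalive helps the transport notice a dead link on its own, but the 
explicit nc probe is still needed because (as noted above) openssh can stop 
listening on the forward socket while the session itself stays alive.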

Maybe this is useful to you ...  For my part, I'm happy to hear comments from 
anyone who's come up with a better way ...

Cheers,
Ben
_______________________________________________
paramiko mailing list
paramiko@lag.net
http://www.lag.net/cgi-bin/mailman/listinfo/paramiko