In article <[EMAIL PROTECTED]>,
Antony Stone  <[EMAIL PROTECTED]> wrote:
>Therefore IPsec and CIPE were not an option, but I've easily had the 
>PPP-over-SSH link up for days at a time (it's a fat corporate pipe of unknown 
>bandwidth at one end, and a 128kbps up/ 512kbps down cable modem link at the 
>other), and I've readily transferred 600Mb ISO images of CDs across it for 
>when I've needed to install some software...

How was the latency during those transfers?  For my PPP over SSH sessions
it would grow to 120 seconds, then either the SSH session or the TCP
connections being sent over it would timeout and die.  I had more or less
the same setup, but with more bandwidth.  

I set up dozens of PPP-over-SSH sessions between different sites before
I sat down with tcpdump and found out _why_ the performance was so bad.

The biggest variable seems to be the packet loss.  If you're lucky
enough to have a link between two sites with near-zero packet loss most
of the time, you'll get away with PPP over SSH with no ill effects except
huge latency.
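The latency blowup comes from both TCP layers running their own
retransmit timers: the outer (carrier) TCP sees the real loss and backs
off exponentially, while the inner TCP sees only a silent stall, times
out itself, and retransmits data the carrier will deliver anyway.  A
rough back-of-the-envelope sketch (not a protocol simulator; the 1 s
base RTO and loss counts are made-up illustration values):

```python
# Illustrative sketch of stacked exponential backoff when TCP rides
# inside another TCP connection on a lossy link.

def rto_after_losses(base_rto, losses):
    """Classic TCP doubles its retransmission timeout on each
    consecutive loss of the same segment."""
    return base_rto * (2 ** losses)

# The outer (carrier) TCP sees the real packet loss and backs off:
outer_stall = rto_after_losses(1.0, 5)   # 32 s stall on the carrier

# The inner TCP never sees a loss -- only a 32 s silence -- so its own
# timer fires first and it queues duplicate retransmissions behind the
# stall, which the carrier must still deliver later:
inner_timeout = rto_after_losses(1.0, 3)  # 8 s: inner gives up first

print(outer_stall, inner_timeout)
```

Each duplicate the inner TCP queues makes the outer connection's
backlog (and thus the observed latency) worse, which is why the numbers
compound instead of recovering the way a single TCP layer would.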

>If TCP-over-TCP is as bad as you say, maybe I should have set up IPsec and 
>tunneled ESP through SSH, but that idea just seemed silly..... :-)

Same problem--you send TCP over IP over ESP over SSH over TCP.

It is possible to use an HTTP proxy or similar to encapsulate IP
(or datagram packets in general) without buggering with TCP, but it
doesn't look like PPP-over-SSH, it looks like IPsec-over-HTTP-requests.
The session timeout is much shorter (5-30 seconds), and if the connection
_ever_ blocks on a write, close it and start the next one.
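That "close on the first blocked write" rule can be sketched in a few
lines (hypothetical helper; a real tunnel would also frame the
datagrams and handle the receive side):

```python
import select
import socket

def send_or_reopen(sock, payload, addr, timeout=5.0):
    """Send one encapsulated datagram over a TCP carrier.

    If the connection would block on the write (its send buffer is
    full, i.e. the path has stalled), abandon it immediately and open
    a fresh connection rather than letting queues build up behind the
    stall.  Returns the (possibly new) socket."""
    _, writable, _ = select.select([], [sock], [], timeout)
    if not writable:
        sock.close()                       # never wait out a stall
        sock = socket.create_connection(addr, timeout=timeout)
    sock.sendall(payload)
    return sock
```

Dropping a stalled connection on the floor is cheap here precisely
because datagrams are allowed to be lost; it is the opposite of what a
nested TCP does, which is to wait and retransmit into the stall.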

-- 
Zygo Blaxell (Laptop) <[EMAIL PROTECTED]>
GPG = D13D 6651 F446 9787 600B AD1E CCF3 6F93 2823 44AD
