Michael,
you are SO RIGHT!
The tunneling works now after your clue!
I completely missed that "*remote*" part.
On Monday, April 9, 2018 at 1:48:50 PM UTC, Michael Spiegle wrote:
I think this is because the SSH command always expects a hostname even if
there isn't a hostname to use. In your ssh -vvv debug output, you can see
that every single option is inside of [optional] brackets EXCEPT for the
hostname. SSH won't actually use this hostname for anything, it just wants
something in that argument slot.
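In config terms, the hostname on the command line can be nothing more than an alias that selects a Host block, with the ProxyCommand doing the real work. A minimal sketch, with all names and addresses made up for illustration:

```
# ssh.config sketch -- every name and address here is hypothetical
Host jumpbox
    HostName 192.0.2.10
    User user

# Anything matching this pattern is reached through the jumpbox;
# %h and %p expand to the target host and port.
Host *.internal
    ProxyCommand ssh -F ssh.config -W %h:%p jumpbox
```

(-W requires OpenSSH 5.4 or later; older setups use the `nc %h %p` form seen later in this thread.)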
This is the -vvv output:
[root@WW-GVXQLC2 ansible]# ssh -F ssh.config bkusman@serverbehindjumpbox -p 670 -vvv
OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017
debug1: Reading configuration data ssh.config
debug1: ssh.config line 28: Applying options for *
debug1: Executing proxy command: exec ssh
Hi,
ssh -F ssh.config -fN user@some_jumpbox --> I'm able to establish this and
send it to the background.
ssh -F ssh.config user@someserverbehindjumpbox --> does not work. The
message is: ssh_exchange_identification: Connection closed by remote host
The "hosts" I'm referring to are in the ssh.config.
By "hosts" file, do you mean /etc/hosts or the hosts in ssh.config? Also,
if you just run SSH by hand to login to a remote host, what happens?
Ex:
$ ssh -F ssh.config -fN user@some_jumpbox
$ ssh -F ssh.config user@some_server_behind_jumpbox
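If the goal of the backgrounded -fN session is a standing tunnel, the forward can equally live in ssh.config. A sketch reusing the thread's placeholder names (the address and port are made up):

```
# ssh.config sketch: keep a local forward open through the jumpbox
Host some_jumpbox
    HostName 198.51.100.5      # placeholder address
    LocalForward 8080 some_server_behind_jumpbox:80
```

Running `ssh -F ssh.config -fN some_jumpbox` then holds that forward open in the background.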
On Thursday, April 5, 2018 at 6:20:14 AM UTC-4, Benny wrote:
This is a great step.
I'm able to establish the tunneling with the jumphost.
But I was wondering, what did you put in the hosts file?
I'm still not able to reach the server.
On Monday, February 16, 2015 at 12:11:58 AM UTC, Michael Spiegle wrote:
Many thanks Michael for the detailed writeup!
I'm going to experiment with my setup (quite similar to yours - multiple
jumpboxes in multiple locations, some AWS usage as well) to see how much of
your approach I can adapt to mine.
On Sunday, February 15, 2015 at 6:14:54 PM UTC-6, Michael Spiegle wrote:
About half of my machines are in Amazon/EC2. In order to solve the
chicken/egg problem, I write out a "user_data" script which installs some
SSH keys for me to the root user of the VM upon first boot. This allows me
to run my initial bootstrap and get the machine joined to the domain, then
I
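A sketch of what such a user_data payload can look like, using cloud-init's cloud-config form rather than Michael's actual script, with a placeholder key:

```
#cloud-config
# Hypothetical first-boot bootstrap: authorize an SSH key for root.
disable_root: false
users:
  - name: root
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...placeholder ansible-bootstrap
```

Once that key works, the initial Ansible bootstrap run can connect as root and hand off to proper accounts.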
As an additional datapoint, here's a brief summary of how I deal with this.
To complicate matters, my machines are split across various labs in
different locations which each have their own bastion/jumpbox. I use ssh
keys sometimes, and hard coded passwords for some other machines:
ansible.cfg
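The ansible.cfg piece of a setup like this usually boils down to pointing SSH at the custom config. A sketch, not Michael's actual file; the path is a placeholder:

```
# ansible.cfg sketch
[ssh_connection]
ssh_args = -F /etc/ansible/ssh.config -o ControlMaster=auto -o ControlPersist=60s
```

With that in place, every Ansible-spawned ssh picks up the jumpbox ProxyCommand rules from the shared ssh.config.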
I did manage to get it to work - but it's really ugly :/
I had to change the ProxyCommand directive in the ssh config for the
wildcard section to:
ProxyCommand sshpass -p 'reallybadpassword' ssh -A root@56.66.3.10 'nc -w 14400 %h %p'
I guess Ansible has no way of overriding the ProxyCommand specified in the
ssh config.
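For later readers: Ansible 2.0+ can layer extra SSH options per host or group via ansible_ssh_common_args in the inventory, which is one way to set a ProxyCommand without touching ssh.config. A sketch with placeholder names:

```
# inventory sketch -- group, host, and address are hypothetical
[behind_jumpbox]
someserverbehindjumpbox

[behind_jumpbox:vars]
ansible_ssh_common_args=-o ProxyCommand="ssh -W %h:%p -q user@some_jumpbox"
```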
Thanks for looking.
Too many of the processes I'm migrating to Ansible currently depend on
passwords - converting to keys is underway, but it won't be complete for a
while.
There's also a second bootstrapping problem. I'm using Ansible to run
baremetal bringup scripts on xen
Not the answer you're looking for, but why don't you just use ssh
keys? It's some minor work upfront with huge security and automation
benefits.
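That upfront work amounts to roughly two commands. A hedged sketch - the key path, comment, and target host are examples, not from the thread:

```shell
# One-time setup sketch: generate a dedicated key for Ansible.
key="$HOME/.ssh/ansible_ed25519"
mkdir -p "$(dirname "$key")"
# -N '' makes it passphrase-less for unattended runs; use an agent
# plus a passphrase where policy allows.
ssh-keygen -t ed25519 -f "$key" -N '' -C 'ansible-bootstrap'
# Then push the public key to each managed host, e.g.:
#   ssh-copy-id -i "$key.pub" user@some_server
```

After distribution, Ansible can use the key via `--private-key` or the `ansible_ssh_private_key_file` inventory variable.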
On Fri, Feb 13, 2015 at 3:51 PM, Ananda Debnath
wrote:
> Does anyone know of an example to do this using password authentication?
>
> My inventory