Update: this morning it seems to be working, albeit with a 1-2 minute
window of inaccessibility while sshd is brought back online.

Now, my ssh attempts simply block until sshd is accepting connections.
Yesterday, the attempts failed quickly with a consistent error;
unfortunately, I killed that terminal window before capturing it. Oh,
well. I'll chalk it up to gremlins and/or impatience.
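
If the fast-fail behavior from yesterday comes back, I'll probably just
wrap the login in a retry loop, something like this (a rough sketch; the
host and key path are placeholders, not my actual values):

  # Poll until sshd accepts connections and auth succeeds, then log in.
  HOST=remote-machine
  KEY=~/.ssh/whirr-key
  until ssh -i "$KEY" -o ConnectTimeout=5 -o BatchMode=yes "$HOST" true; do
    echo "sshd not up yet; retrying in 5s..."
    sleep 5
  done
  ssh -i "$KEY" "$HOST"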


On Wed, Feb 8, 2012 at 1:51 AM, Andrei Savu <[email protected]> wrote:

> Bootstrap Cluster -> Login over SSH -> Reboot -> Login over SSH
>
> This works as expected for me (quite fast also).
>
>
> On Wed, Feb 8, 2012 at 8:56 AM, Andrei Savu <[email protected]> wrote:
>
>>
>> On Wed, Feb 8, 2012 at 5:53 AM, Evan Pollan <[email protected]> wrote:
>>
>>> Is this a known limitation?  Does anyone know of a customization to the
>>> installation/configuration function scripts that will ensure that the ssh
>>> config persists through a reboot?
>>
>>
>> It should persist, but ec2-user is only used for bootstrap. Whirr
>> 0.7.0 creates a new user on the remote machines whose name is taken
>> from whirr.cluster-user (the default is the name of the local user
>> running Whirr).
>>
>> Try:  ssh -i ... <whirr.cluster-user>@remote-machine, or just
>> ssh -i ... remote-machine if you don't specify a value for
>> whirr.cluster-user.
>>
>> Note: if you specify a value for whirr.cluster-user, make sure a user
>> with that name doesn't already exist on the remote machines.
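>>
>> For example (a sketch; the username, key path, and hostname below are
>> made up):
>>
>>   # in the cluster .properties file:
>>   whirr.cluster-user=hadoop-admin
>>
>>   # after the cluster is up, log in as that user:
>>   ssh -i ~/.ssh/whirr-key [email protected]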
>>
>> PS: I'm trying to replicate the behaviour you are seeing.
>>
>
>
