Hi,

I was trying to sort out a ulimit (open files) issue with a small Hadoop
cluster, and some of the remedies I came across on the web involved
tweaking the PAM configuration and rebooting.  I've been using Whirr on
EC2 for a while now, and I am currently using version 0.7.0 to stand up
a CDH-based cluster using 64-bit Ubuntu 10.04 images.
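For reference, this is the kind of change the guides describe (the file
paths are the usual Ubuntu ones, and the limit values are just
examples):

    # /etc/security/limits.conf -- raise the open-file limit for the
    # Hadoop daemon users (example values)
    hdfs    -  nofile  32768
    mapred  -  nofile  32768

    # /etc/pam.d/common-session -- make sure pam_limits is applied
    session required pam_limits.so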

When I rebooted my nodes, they came back up (i.e. AWS reports them as
running and healthy, using the basic CloudWatch monitoring), but I can
no longer SSH to them using the ec2-user credentials/keys that normally
work in a Whirr-instantiated cluster.  This applies to every node in
the cluster, and I've re-launched the cluster several times now -- it
is 100% reproducible.

Is this a known limitation?  Does anyone know of a customization to the
installation/configuration function scripts that would ensure the SSH
configuration persists through a reboot?
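For concreteness, I'm imagining something appended to the configure
function along these lines (purely a sketch -- I don't yet know what
state is actually being lost on reboot, and the user name and key
source here are assumptions):

    # Hypothetical addition to a Whirr configure function: re-assert
    # the login user's SSH access in a location that survives reboot.
    persist_ssh_access() {
      local user=ec2-user                        # assumed login user
      local home
      home=$(getent passwd "$user" | cut -d: -f6)
      mkdir -p "$home/.ssh"
      cat /tmp/authorized_keys >> "$home/.ssh/authorized_keys"  # assumed key source
      chown -R "$user:$user" "$home/.ssh"
      chmod 700 "$home/.ssh"
      chmod 600 "$home/.ssh/authorized_keys"
      update-rc.d ssh defaults || true  # make sure sshd starts at boot
    }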

thanks,
Evan
