Hi,

I'm using the new src/contrib/cloud scripts that Tom White created
recently. Great work: now we can easily deploy 0.20.x onto EC2. Thanks, Tom!

I've just fixed several things in the scripts:

1. A typo in src/py/hadoop/cloud/cli.py: for the "push" command, the script
actually invokes proxy creation (see the first sketch after this list).
2. Modified src/py/hadoop/cloud/service.py to copy the AWS_ACCESS_KEY_ID
and AWS_SECRET_ACCESS_KEY variables into the user data script's
environment, so the script can later fill them into the Hadoop configs
(second sketch below).
3. Fixed the user data script to fill in /home/hadoop/.bashrc in addition
to root's .bashrc. With the new scripts it seems better to submit jobs as
the "hadoop" user, since some HDFS permissions don't allow running jobs as
"root" (third sketch below).

Should I open a JIRA issue for this, or should I post the patch here on the
mailing list?

I'm also wondering whether it would be a good idea to create a "hardware"
provider so the cloud scripts could be used to deploy non-cloud (physical
hardware) clusters. What do you think? Is there a better way to automate
hardware cluster deployment?

-- 
Andrew Klochkov
