[ https://issues.apache.org/jira/browse/HADOOP-4582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12644854#action_12644854 ]

Chris K Wensel commented on HADOOP-4582:
----------------------------------------

You might find it easier to just put your Java binary in S3 or on some other 
public site. That way the URL never expires.

If it's in S3, you also save on the minor bandwidth costs if you find yourself 
tweaking your customized AMIs often.
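
For example, you could point the Java download in hadoop-ec2-env.sh at a copy 
you host yourself. A minimal sketch, assuming the env file exposes a 
JAVA_BINARY_URL variable (check your copy of the file) and using a made-up 
bucket name:

  # hadoop-ec2-env.sh (sketch; variable name and URL are placeholders)
  # Hosting the JDK installer in your own S3 bucket keeps the link from expiring.
  JAVA_BINARY_URL=http://my-bucket.s3.amazonaws.com/jdk-6u10-linux-i586.bin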

That said, making the scripts more robust is a worthwhile goal, thanks for the 
heads up.

> create-hadoop-image doesn't fail with expired Java binary URL
> -------------------------------------------------------------
>
>                 Key: HADOOP-4582
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4582
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: contrib/ec2
>    Affects Versions: 0.18.1
>            Reporter: Karl Anderson
>            Priority: Minor
>
> Part of creating a Hadoop EC2 image involves putting the URL for the Java 
> binary into hadoop-ec2-env.sh.  This URL is time-sensitive; a working URL will 
> eventually redirect to an HTML warning page.  create-hadoop-image-remote does 
> not notice this, and will create, bundle, and register a non-working image, 
> which launch-cluster will launch, but on which the hadoop commands will not 
> work.
> To fix, check the exit status of the "sh java.bin" command in 
> create-hadoop-image-remote, die with that status, and check for that status 
> when create-hadoop-image-remote is run (a sketch of this check follows below).
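
A rough sketch of the exit-status check described in the report, in shell. The 
script excerpts, paths, and variable names below are assumptions for 
illustration, not the actual contrib/ec2 code:

  # create-hadoop-image-remote (sketch): stop if the Java installer fails,
  # e.g. because the download URL expired and fetched an HTML page instead.
  sh java.bin
  status=$?
  if [ $status -ne 0 ]; then
    echo "Java install failed with exit status $status" >&2
    exit $status
  fi

  # create-hadoop-image (sketch): don't bundle or register the image if the
  # remote step failed; propagate its exit status instead.
  ssh $SSH_OPTS "root@$HOSTNAME" 'sh create-hadoop-image-remote' || exit $?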
