Alejandro,
All of the tarballs are missing; the "/hdp" directory itself doesn't exist in HDFS.
Ambari shows that all services are up, but I'm getting a FileNotFoundException
for mapreduce.tar.gz when trying to run MR jobs.
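
For concreteness, this is roughly how I'm confirming it from a cluster node (a
quick sketch that just shells out to the HDFS CLI; the version string is a
placeholder for our actual stack version):

import subprocess

# Placeholder -- substitute the cluster's actual HDP stack version.
hdp_version = "2.4.2.0-258"
tarball = "/hdp/apps/{0}/mapreduce/mapreduce.tar.gz".format(hdp_version)

# `hdfs dfs -ls` exits non-zero when the path does not exist.
if subprocess.call(["hdfs", "dfs", "-ls", tarball]) != 0:
    print("missing in HDFS: {0}".format(tarball))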

How can I run these upload operations manually after installation?
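
If manual is the only way, I was thinking of pushing the tarball by hand along
these lines (just a sketch, not verified; the local tarball location, the
hdfs:hadoop ownership, and the modes are my assumptions about what Ambari would
normally set up, so please correct me if they're wrong):

import subprocess

# Placeholder -- substitute the cluster's actual HDP stack version.
hdp_version = "2.4.2.0-258"
local_tarball = "/usr/hdp/{0}/hadoop/mapreduce.tar.gz".format(hdp_version)
dest_dir = "/hdp/apps/{0}/mapreduce".format(hdp_version)

def hdfs(*args):
    # Run as the hdfs superuser so we can create /hdp and set ownership.
    subprocess.check_call(["sudo", "-u", "hdfs", "hdfs", "dfs"] + list(args))

hdfs("-mkdir", "-p", dest_dir)
hdfs("-put", "-f", local_tarball, dest_dir + "/mapreduce.tar.gz")
hdfs("-chown", "-R", "hdfs:hadoop", "/hdp")
hdfs("-chmod", "-R", "555", "/hdp/apps")
hdfs("-chmod", "444", dest_dir + "/mapreduce.tar.gz")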

Can somebody check whether these upload steps are invoked when deploying via blueprints?
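
One more idea, since you mentioned the tarballs are uploaded when certain
services start: would a stop/start of MAPREDUCE2 (for the History Server)
through the REST API be enough to re-trigger the upload on an existing cluster?
Roughly this (sketch only; host, cluster name, and credentials are placeholders
for our environment):

import requests

AMBARI = "http://ambari-host:8080"   # placeholder
CLUSTER = "mycluster"                # placeholder
AUTH = ("admin", "admin")            # placeholder credentials
HEADERS = {"X-Requested-By": "ambari"}
URL = "{0}/api/v1/clusters/{1}/services/MAPREDUCE2".format(AMBARI, CLUSTER)

def set_state(state, context):
    body = {"RequestInfo": {"context": context},
            "Body": {"ServiceInfo": {"state": state}}}
    requests.put(URL, json=body, auth=AUTH, headers=HEADERS).raise_for_status()

# In practice, wait for the stop request to complete before issuing the start.
set_state("INSTALLED", "Stop MAPREDUCE2 to re-trigger tarball upload")
set_state("STARTED", "Start MAPREDUCE2")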

Thanks for your help!

On Tue, Jun 14, 2016 at 4:32 PM, Alejandro Fernandez <
[email protected]> wrote:

> Which tarballs are missing?
>
> They are uploaded to HDFS when certain services start, e.g., Hive Server,
> History Server, Tez Service Check, Spark Service Check.
> You can add logging to the method copy_to_hdfs in copy_tarball.py to
> ensure it is called.
> Even after installation, you can perform these ops to try uploading
> tarballs.
>
> For VMs deployed via blueprints, I'm not quite sure whether we skip those
> steps to improve start-up time.
>
> Thanks,
> Alejandro
>
> From: Andrey Klochkov <[email protected]>
> Reply-To: "[email protected]" <[email protected]>
> Date: Tuesday, June 14, 2016 at 2:19 PM
> To: "[email protected]" <[email protected]>
> Subject: Ambari not creating /hdp/apps/<hdp-version> when using a
> blueprint
>
> Hi,
> Should Ambari upload tarballs to /hdp/apps/<hdp-version> in HDFS when
> creating a cluster from a blueprint?
>
> Seems it doesn't do that in our case. I see that there's an
> Ambaripreupload.py script in the Ambari sources that does this, but I can't figure
> out how that script is supposed to be executed. There are no references to it in the
> source code, and nothing in the logs. How can I troubleshoot this?
>
> Thanks!
>
> --
> Andrey Klochkov
>
>


-- 
Andrey Klochkov
Grid Dynamics
Skype: aklochkov_gd
www.griddynamics.com
[email protected]
