Which tarballs are missing?

They are uploaded to HDFS when certain services start or their service checks 
run, e.g., Hive Server, History Server, Tez Service Check, Spark Service Check.
You can add logging to the copy_to_hdfs method in copy_tarball.py to verify it 
is being called.
Even after installation, you can perform these operations (restart the services 
or run the service checks) to trigger the tarball uploads again.
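As a minimal sketch of the logging suggestion above: the idea is to add a trace 
line at the top of the function so you can confirm from the agent logs whether 
it runs at all. The function body and signature here are hypothetical stand-ins, 
not the real copy_tarball.py code:

```python
import logging

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
logger = logging.getLogger("copy_tarball")

# Hypothetical stand-in for copy_to_hdfs; the real Ambari method in
# copy_tarball.py has its own signature and upload logic.
def copy_to_hdfs(name, user_group, owner):
    # Added trace line: if this never appears in the logs, the
    # method is not being invoked during cluster creation.
    logger.info("copy_to_hdfs called: name=%s group=%s owner=%s",
                name, user_group, owner)
    # ... actual HDFS upload logic would follow here ...
    return True
```

Grepping the Ambari agent logs for the added message then tells you whether the 
upload path was ever reached.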

For clusters deployed via blueprints, I'm not sure whether we skip those steps 
to improve start-up time.

Thanks,
Alejandro

From: Andrey Klochkov <[email protected]>
Reply-To: "[email protected]" <[email protected]>
Date: Tuesday, June 14, 2016 at 2:19 PM
To: "[email protected]" <[email protected]>
Subject: Ambari not creating /hdp/apps/<hdp-version> when using a blueprint

Hi,
Should Ambari upload tarballs to /hdp/apps/<hdp-version> in HDFS when creating 
a cluster from a blueprint?

It seems it doesn't do that in our case. I see there's an Ambaripreupload.py 
script in the Ambari sources that does this, but I can't figure out how that 
script is supposed to be executed: there are no references to it in the source 
code, and nothing in the logs. How can I troubleshoot this?

Thanks!

--
Andrey Klochkov