Ambaripreupload.py is not used during Ambari-based cluster installations.

I believe the sqoop tarball is uploaded as part of the Hive installation.


-Sumit

________________________________
From: Jeff Zhang <[email protected]>
Sent: Tuesday, December 29, 2015 9:42 PM
To: [email protected]; [email protected]
Subject: When does tar copying happen ?

I installed sqoop separately, but found that no sqoop tarball was uploaded to HDFS.
I found the uploading script Ambaripreupload.py, and I'm wondering when this
script is called. Is it called only during the first HDP installation? If so, some
tarballs may be missing when I install components separately.



print "Copying tarballs..."

copy_tarballs_to_hdfs(format("/usr/hdp/{hdp_version}/hadoop/mapreduce.tar.gz"),
                      hdfs_path_prefix + "/hdp/apps/{{ hdp_stack_version }}/mapreduce/",
                      'hadoop-mapreduce-historyserver', params.mapred_user,
                      params.hdfs_user, params.user_group)

copy_tarballs_to_hdfs(format("/usr/hdp/{hdp_version}/tez/lib/tez.tar.gz"),
                      hdfs_path_prefix + "/hdp/apps/{{ hdp_stack_version }}/tez/",
                      'hadoop-mapreduce-historyserver', params.mapred_user,
                      params.hdfs_user, params.user_group)

copy_tarballs_to_hdfs(format("/usr/hdp/{hdp_version}/hive/hive.tar.gz"),
                      hdfs_path_prefix + "/hdp/apps/{{ hdp_stack_version }}/hive/",
                      'hadoop-mapreduce-historyserver', params.mapred_user,
                      params.hdfs_user, params.user_group)

copy_tarballs_to_hdfs(format("/usr/hdp/{hdp_version}/pig/pig.tar.gz"),
                      hdfs_path_prefix + "/hdp/apps/{{ hdp_stack_version }}/pig/",
                      'hadoop-mapreduce-historyserver', params.mapred_user,
                      params.hdfs_user, params.user_group)

copy_tarballs_to_hdfs(format("/usr/hdp/{hdp_version}/hadoop-mapreduce/hadoop-streaming.jar"),
                      hdfs_path_prefix + "/hdp/apps/{{ hdp_stack_version }}/mapreduce/",
                      'hadoop-mapreduce-historyserver', params.mapred_user,
                      params.hdfs_user, params.user_group)

copy_tarballs_to_hdfs(format("/usr/hdp/{hdp_version}/sqoop/sqoop.tar.gz"),
                      hdfs_path_prefix + "/hdp/apps/{{ hdp_stack_version }}/sqoop/",
                      'hadoop-mapreduce-historyserver', params.mapred_user,
                      params.hdfs_user, params.user_group)
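Each call above follows the same source/destination pattern: a local tarball under
/usr/hdp/<version>/<component>/ is copied to /hdp/apps/<stack version>/<component>/
in HDFS. A minimal sketch of that path construction (the helper names
local_tarball_path and build_hdfs_dest, and the version string, are hypothetical
illustrations, not Ambari functions):

```python
# Hypothetical helpers illustrating the path pattern used by the
# copy_tarballs_to_hdfs calls above; names and signatures are illustrative only.

def local_tarball_path(hdp_version, component, tarball):
    # Mirrors format("/usr/hdp/{hdp_version}/<component>/<tarball>")
    return "/usr/hdp/%s/%s/%s" % (hdp_version, component, tarball)

def build_hdfs_dest(hdfs_path_prefix, hdp_stack_version, component):
    # Mirrors hdfs_path_prefix + "/hdp/apps/{{ hdp_stack_version }}/<component>/"
    return "%s/hdp/apps/%s/%s/" % (hdfs_path_prefix, hdp_stack_version, component)

# Example with a made-up version string:
print(local_tarball_path("2.3.0.0", "sqoop", "sqoop.tar.gz"))
print(build_hdfs_dest("", "2.3.0.0", "sqoop"))
```

So if the sqoop tarball were missing, this is the HDFS location one would check.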



--
Best Regards

Jeff Zhang
