Hive Server Start copies the sqoop tarball, but only if it exists on the same 
host where Hive Server is deployed.


I see some code in spark_service.py that copies the tez tarball. You can use 
that as a reference.
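
For illustration only, a call along the lines below, mirroring the 
copy_tarballs_to_hdfs calls from Ambaripreupload.py quoted further down this 
thread, could go into the Spark service start. The spark jar path, destination 
directory, component name, and params fields here are placeholders showing the 
pattern, not the exact names used in the stack scripts:

  # Hypothetical sketch, not actual Ambari code: assumes the same
  # copy_tarballs_to_hdfs helper, format(), hdfs_path_prefix and params
  # are in scope as in the Ambaripreupload.py snippet quoted below;
  # the jar path and destination are placeholders.
  copy_tarballs_to_hdfs(format("/usr/hdp/{hdp_version}/spark/lib/spark-assembly.jar"),
                        hdfs_path_prefix + "/hdp/apps/{{ hdp_stack_version }}/spark/",
                        'spark-historyserver',
                        params.spark_user, params.hdfs_user, params.user_group)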

________________________________
From: Jeff Zhang <[email protected]>
Sent: Tuesday, December 29, 2015 9:51 PM
To: [email protected]
Cc: [email protected]
Subject: Re: When does tar copying happen ?

>>> I believe the sqoop tarball is uploaded as part of the HIVE installation.
I don't think so, because I installed Hive but no sqoop tarball was found.
Actually, I'd like to upload the spark jar like the other tarballs when installing 
Spark. Could you guide me on how to do that?



On Wed, Dec 30, 2015 at 1:46 PM, Sumit Mohanty <[email protected]> wrote:

Ambaripreupload.py is not used during Ambari-based cluster installations.


I believe the sqoop tarball is uploaded as part of the HIVE installation.


-Sumit

________________________________
From: Jeff Zhang <[email protected]>
Sent: Tuesday, December 29, 2015 9:42 PM
To: [email protected]; [email protected]
Subject: When does tar copying happen ?

I installed sqoop separately, but found that no sqoop tarball was uploaded to HDFS.
I found the uploading script in Ambaripreupload.py, and am wondering when this 
script is called. Is it called only during the first HDP installation? Then some 
tarballs may be missing if I install them separately.



print "Copying tarballs..."

  
copy_tarballs_to_hdfs(format("/usr/hdp/{hdp_version}/hadoop/mapreduce.tar.gz"), 
hdfs_path_prefix+"/hdp/apps/{{ hdp_stack_version }}/mapreduce/", 
'hadoop-mapreduce-historyserver', params.mapred_user, params.hdfs_user, 
params.user_group)

  copy_tarballs_to_hdfs(format("/usr/hdp/{hdp_version}/tez/lib/tez.tar.gz"), 
hdfs_path_prefix+"/hdp/apps/{{ hdp_stack_version }}/tez/", 
'hadoop-mapreduce-historyserver', params.mapred_user, params.hdfs_user, 
params.user_group)

  copy_tarballs_to_hdfs(format("/usr/hdp/{hdp_version}/hive/hive.tar.gz"), 
hdfs_path_prefix+"/hdp/apps/{{ hdp_stack_version }}/hive/", 
'hadoop-mapreduce-historyserver', params.mapred_user, params.hdfs_user, 
params.user_group)

  copy_tarballs_to_hdfs(format("/usr/hdp/{hdp_version}/pig/pig.tar.gz"), 
hdfs_path_prefix+"/hdp/apps/{{ hdp_stack_version }}/pig/", 
'hadoop-mapreduce-historyserver', params.mapred_user, params.hdfs_user, 
params.user_group)

  
copy_tarballs_to_hdfs(format("/usr/hdp/{hdp_version}/hadoop-mapreduce/hadoop-streaming.jar"),
 hdfs_path_prefix+"/hdp/apps/{{ hdp_stack_version }}/mapreduce/", 
'hadoop-mapreduce-historyserver', params.mapred_user, params.hdfs_user, 
params.user_group)

  copy_tarballs_to_hdfs(format("/usr/hdp/{hdp_version}/sqoop/sqoop.tar.gz"), 
hdfs_path_prefix+"/hdp/apps/{{ hdp_stack_version }}/sqoop/", 
'hadoop-mapreduce-historyserver', params.mapred_user, params.hdfs_user, 
params.user_group)



--
Best Regards

Jeff Zhang



--
Best Regards

Jeff Zhang
