> On Feb. 11, 2015, 11:36 p.m., Alejandro Fernandez wrote:
> > ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server.py, line 57
> > <https://reviews.apache.org/r/30899/diff/1/?file=861153#file861153line57>
> >
> > When we copy a tarball to HDFS, we need to figure out the destination, e.g., /hdp/apps/2.2.1.0-2260/tez
> >
> > That version number has to come from somewhere, so we use "hdp-select status {component_name}", where component_name is the 2nd argument.
> 
> Hitesh Shah wrote:
>     In that case, why is the version of mapreduce dependent on hive-server2?
> 
> Hitesh Shah wrote:
>     Also, I am not sure why the mapreduce tarball is being uploaded as user tez, if that is what the 3rd param is used for.
It used to be dependent on the version that hive-server2 was on. That is no longer the case; now it only depends on whatever "hdp-select status hadoop-client" returns.


> On Feb. 11, 2015, 11:36 p.m., Alejandro Fernandez wrote:
> > ambari-server/src/main/resources/common-services/TEZ/0.4.0.2.1/package/scripts/tez_client.py, line 38
> > <https://reviews.apache.org/r/30899/diff/1/?file=861155#file861155line38>
> >
> > When hadoop-client is repointed, that means that tez has been updated as well. This is important during a rolling upgrade, so that we copy the tez tarball to the appropriate folder in HDFS.
> 
> Hitesh Shah wrote:
>     Why not just rely on the version of Tez itself? Why does it need to be based on hadoop-client? Please run this by the folks who are driving rolling upgrades, as I am not sure of the full context.

This is because tez does not have a component in hdp-select. Instead, it is updated along with the rest of the clients, all of which rely on "hdp-select status hadoop-client".


- Alejandro


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30899/#review72057
-----------------------------------------------------------


On Feb. 11, 2015, 10:42 p.m., Alejandro Fernandez wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30899/
> -----------------------------------------------------------
> 
> (Updated Feb. 11, 2015, 10:42 p.m.)
> 
> 
> Review request for Ambari, Jonathan Hurley, Nate Cole, Sumit Mohanty, Srimanth Gunturi, and Tom Beerbower.
> 
> 
> Bugs: AMBARI-9585
>     https://issues.apache.org/jira/browse/AMBARI-9585
> 
> 
> Repository: ambari
> 
> 
> Description
> -------
> 
> Today, the tez tarball is copied to HDFS only when HiveServer2 is installed, and when the Pig service check runs.
> Instead, this should happen whenever Tez is installed.
> For a RU, the install of new bits should also copy the tarball to HDFS.
> 
> 
> Diffs
> -----
> 
>   ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server.py abfde14 
>   ambari-server/src/main/resources/common-services/PIG/0.12.0.2.0/package/scripts/service_check.py 7137e60 
>   ambari-server/src/main/resources/common-services/TEZ/0.4.0.2.1/package/scripts/tez_client.py 00375d7 
>   ambari-server/src/test/python/stacks/2.0.6/HIVE/test_hive_server.py 1fc3fbf 
>   ambari-server/src/test/python/stacks/2.1/TEZ/test_tez_client.py 3d74113 
> 
> Diff: https://reviews.apache.org/r/30899/diff/
> 
> 
> Testing
> -------
> 
> Installed a 3-node cluster with HDFS, ZK, MR, YARN, and Tez.
> Then ensured that the tez tarball was copied to HDFS:
> 
>     su - hdfs -c 'hdfs dfs -ls /hdp/apps/2.2.0.0-2041/tez'
> 
> When I installed the Pig client on the same host that had the tez client, I was able to run a Pig job using the following example:
> http://hortonworks.com/hadoop-tutorial/faster-pig-tez/
> 
> I then performed a Rolling Upgrade, and the newer tez tarball was indeed copied to HDFS:
> 
>     su - hdfs -c 'hdfs dfs -ls /hdp/apps/2.2.1.0-2260/tez'
> 
> Unit tests are in progress.
> 
> 
> Thanks,
> 
> Alejandro Fernandez
> 
>
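For context on the discussion above: the stack version embedded in the HDFS destination (e.g. /hdp/apps/2.2.0.0-2041/tez) is derived from the output of "hdp-select status hadoop-client". A minimal sketch of that derivation follows; the helper names and the assumed output shape ("<component> - <version>") are illustrative, not the actual Ambari functions in the patch:

```python
# Sketch: derive the HDFS tarball destination from "hdp-select status <component>".
# Assumption: the command prints a line shaped like "hadoop-client - 2.2.0.0-2041".
# The helper names below are hypothetical, not the real Ambari script functions.

def parse_hdp_select_status(output):
    """Extract the stack version from one line of hdp-select status output."""
    # Expected shape: "<component> - <version>"
    component, sep, version = output.strip().partition(" - ")
    if not sep or not version:
        raise ValueError("Unexpected hdp-select output: %r" % output)
    return version

def tez_tarball_destination(hdp_select_output):
    """Build the HDFS folder the tez tarball is copied to: /hdp/apps/<version>/tez."""
    version = parse_hdp_select_status(hdp_select_output)
    return "/hdp/apps/%s/tez" % version

if __name__ == "__main__":
    print(tez_tarball_destination("hadoop-client - 2.2.0.0-2041"))
    # /hdp/apps/2.2.0.0-2041/tez
```

Because tez has no component of its own in hdp-select, keying the path off hadoop-client (which moves in lockstep with the other clients during a rolling upgrade) is what makes the tarball land in the folder for the newly activated version.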
