Mahadev, thanks again for the help. I have not looked at the Ambari code yet, but it would be nice to somehow get a report from each node listing the packages installed on it. Then, in my case, I could aggregate these per-node package reports to see what is running on my cluster. I don't know how much of an edge case this is, but it seems like a useful feature.
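For illustration, a minimal sketch of the kind of aggregation I mean (the
host list file hosts.txt is hypothetical, and this assumes passwordless SSH
to each RPM-based node):

    #!/bin/sh
    # Collect the installed-package list from every node, then aggregate.
    : > all-packages.txt
    while read host; do
        # rpm -qa lists every installed package on the node
        ssh "$host" 'rpm -qa' | sed "s/^/$host /" >> all-packages.txt
    done < hosts.txt
    # Unique package names across the cluster, with a node count for each
    awk '{print $2}' all-packages.txt | sort | uniq -c | sort -rn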
~Robin

On Mon, Apr 1, 2013 at 7:49 PM, Mahadev Konar <[email protected]> wrote:

> Hi Robin,
>  Currently the packages needed from EPEL are not documented. There have
> been a lot of use cases where folks want to download just the needed
> packages rather than all of EPEL, but that's work in progress and not
> tested as of now. The easiest way to do it is to use a VM and install the
> required services you need. By doing an rpm -qa before and after the
> install you will know which packages are getting used. Hope that helps.
>
> thanks
> mahadev
>
>
> On Mon, Apr 1, 2013 at 3:58 PM, Robin Carnow <[email protected]> wrote:
>
>> Mahadev & Yusaku,
>>
>> Thank you for your help; that is exactly what I needed for item #1.
>>
>> Does anyone know a simple way to find out what EPEL packages are
>> necessary if I know what my Hadoop deployment will consist of?
>>
>> I'm thinking of using Ambari to configure nodes off site and somehow
>> recording what packages are needed to get the cluster up, then making
>> those packages available within our data center. Is there an easy way
>> you know of to do this?
>>
>> Thanks in advance for your help,
>>
>> ~Robin
>>
>>
>> On Mon, Apr 1, 2013 at 12:53 PM, Yusaku Sako <[email protected]> wrote:
>>
>>> Also, here's the local repo setup documentation for Ambari 1.2.1:
>>> http://docs.hortonworks.com/HDPDocuments/HDP1/HDP-1.2.1/bk_reference/content/reference_chap4.html
>>>
>>> We plan to add similar documentation to the Ambari project website in
>>> the near future.
>>>
>>> Yusaku
>>>
>>>
>>> On Mon, Apr 1, 2013 at 9:47 AM, Mahadev Konar <[email protected]> wrote:
>>>
>>>> Robin,
>>>>  You should probably use 1.2.1 (which was just released). Here are
>>>> the instructions for that:
>>>>
>>>> http://incubator.apache.org/ambari/1.2.1/installing-hadoop-using-ambari/content/ambari-chap1-6.html
>>>>
>>>> thanks
>>>> mahadev
>>>>
>>>> On Mon, Apr 1, 2013 at 7:56 AM, Robin Carnow <[email protected]> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I am using Ambari to deploy Hadoop on a cluster which never has a
>>>>> connection to the Internet. This link
>>>>> <http://docs.hortonworks.com/CURRENT/index.htm#Appendix/Deploying_HDP_In_Production_Data_Centers_with_Firewalls/Deploying_HDP_In_Production_Data_Centers.htm>
>>>>> provides instructions describing how to get this working using the
>>>>> tarball images listed below. After reading the instructions for
>>>>> installing Ambari 1.2.x, I get the impression that the tarballs
>>>>> below are not the correct repo versions for Ambari 1.2.
>>>>>
>>>>> On this 1.2 documentation page
>>>>> <http://incubator.apache.org/ambari/1.2.0/installing-hadoop-using-ambari/content/ambari-chap1-6.html>,
>>>>> there are no links to a similar description of how to "*1. Set up
>>>>> the local mirror repositories as needed for HDP, HDP Utils and
>>>>> EPEL.*"
>>>>>
>>>>> So I'm trying to get help to:
>>>>>
>>>>> 1. Confirm that the tarballs listed below are the correct version
>>>>> for Ambari 1.2.x.
>>>>>
>>>>> 2. Find out how I can determine what EPEL packages are necessary if
>>>>> I know what my Hadoop deployment will consist of.
>>>>>
>>>>>
>>>>> *RHEL/CentOS 5.x*
>>>>>
>>>>> http://public-repo-1.hortonworks.com/HDP-1.1.1.16/repos/centos5/HDP-1.1.1.16-centos5.tar.gz
>>>>>
>>>>> http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.15/repos/centos5/HDP-UTILS-1.1.0.15-centos5.tar.gz
>>>>>
>>>>>
>>>>> *RHEL/CentOS 6.x*
>>>>>
>>>>> http://public-repo-1.hortonworks.com/HDP-1.1.1.16/repos/centos6/HDP-1.1.1.16-centos6.tar.gz
>>>>>
>>>>> http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.15/repos/centos6/HDP-UTILS-1.1.0.15-centos6.tar.gz
>>>>>
>>>>>
>>>>> ~Robin
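(To make Mahadev's VM suggestion above concrete: a minimal before/after
sketch on a throwaway VM might look like the following; "hadoop" here is
only an example package name.)

    # Snapshot the installed package set before the install
    rpm -qa | sort > before.txt

    # Install the service you care about (example package name)
    yum install -y hadoop

    # Snapshot again; comm -13 prints lines only in after.txt,
    # i.e. the packages the install pulled in
    rpm -qa | sort > after.txt
    comm -13 before.txt after.txt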
