Hi Aman,
Unless you plan to do your own Hadoop distro (which will be quite an endeavor, 
building your own stack plus the RPMs), I don't recommend trying this just to 
use your own Hadoop version. The RPMs are built specifically for a distro and 
stack version. So if you want to use the HDP 2.3 stack and update just 
Hadoop, for example, you will need to build your RPMs for that exact version; 
each RPM is installed into a very specific location for each version. The RPMs 
also have pre/post scripts that do extra setup required by that version of 
Ambari. There is a tight coupling between Ambari and Bigtop needed to produce 
a particular distro release such as HDP 2.3. Unless you have the Bigtop RPM 
spec files for that release, you will need to do some reverse engineering to 
get it right, and that is not trivial. Then there are potential issues with 
interoperability between the service versions in the stack if you inject your 
own version.
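
To make the version coupling concrete, here is a minimal sketch (an 
illustration only; the /usr/hdp layout below is an assumption based on how 
HDP 2.x installs components, not something taken from the RPMs themselves):

# Sketch: why RPMs are tied to an exact stack version. Each component is
# installed under a version-specific prefix, and a "current" symlink tree
# points at the active version. Paths are assumptions for illustration.
import os

STACK_ROOT = "/usr/hdp"  # assumed HDP 2.x stack root

def versioned_home(stack_version, component):
    """Where an RPM built for stack_version installs the component."""
    return os.path.join(STACK_ROOT, stack_version, component)

def current_home(component):
    """Where the active version is found via symlinks."""
    return os.path.join(STACK_ROOT, "current", component)

print(versioned_home("2.3.4.0-3485", "hadoop"))  # /usr/hdp/2.3.4.0-3485/hadoop
print(current_home("hadoop-client"))             # /usr/hdp/current/hadoop-client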

You may be better off just replacing the Hadoop JARs if your aim is only to 
test out a new Hadoop release.
 Respectfully,
Tuong 

   

 On Wednesday, October 12, 2016 11:55 PM, aman poonia 
<aman.poonia...@gmail.com> wrote:
 

Hey Alejandro,
Thank you very much for pointing me to the right source code. I will see what 
I can figure out from this. :-)
-- 
With Regards,
Aman Poonia

On Tue, Oct 11, 2016 at 12:28 AM, Alejandro Fernandez 
<afernan...@hortonworks.com> wrote:

I think that requirement is based on the fact that Ambari needs to be able to 
compare version numbers. Typically, each service's metainfo.xml file defines 
how to perform a yum install of its packages, and the package names can 
include variables like ${stack_version} that get substituted.
E.g.:

<osSpecifics>
  <osSpecific>
    <osFamily>any</osFamily>
    <packages>
      <package>
        <name>hbase</name>
      </package>
    </packages>
  </osSpecific>
</osSpecifics>

Or:

<osSpecific>
  <osFamily>redhat7,amazon2015,redhat6,suse11,suse12</osFamily>
  <packages>
    <package>
      <name>atlas-metadata_${stack_version}</name>
    </package>
    <package>
      <name>ambari-infra-solr-client</name>
      <condition>should_install_infra_solr_client</condition>
    </package>
    <package>
      <name>kafka_${stack_version}</name>
    </package>
  </packages>
</osSpecific>
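
To make the ${stack_version} substitution concrete, here is a minimal sketch 
(not Ambari's actual code; the dots-and-dash-to-underscores rule is an 
assumption inferred from HDP package names such as hadoop_2_3_4_0_3485):

# Sketch only, not Ambari's implementation: expands a metainfo.xml package
# name template into the name yum is asked to install. The underscore
# flattening is an assumption based on observed HDP package naming.
import re

def expand_package_name(template, stack_version):
    # "2.3.4.0-3485" -> "2_3_4_0_3485": RPM package names cannot carry the
    # dots and build dash, so the version appears to be flattened.
    flattened = re.sub(r"[.-]", "_", stack_version)
    return template.replace("${stack_version}", flattened)

print(expand_package_name("kafka_${stack_version}", "2.3.4.0-3485"))
# -> kafka_2_3_4_0_3485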
However, you may have to change several other Python functions if you want 
package names that don't conform to that standard, or at least look at what 
these do:

ambari-common/src/main/python/resource_management/libraries/functions/conf_select.py
ambari-common/src/main/python/resource_management/libraries/functions/stack_select.py
ambari-common/src/main/python/resource_management/libraries/functions/version.py
ambari-server/src/main/resources/custom_actions/scripts/install_packages.py
Thanks,
Alejandro
From: aman poonia <aman.poonia...@gmail.com>
Date: Saturday, October 8, 2016 at 2:28 AM
To: Alejandro Fernandez <afernan...@hortonworks.com>
Subject: Re: How to install and start apache distributed hadoop rather than 
hortonworks distribution

Hi Alejandro,
I downloaded Bigtop and created the ZooKeeper and Hadoop RPMs from the 
Apache-provided tarballs. Now I am trying to use these RPMs instead of the 
Hortonworks ones to deploy a Hadoop cluster, and I am facing difficulty, 
because Ambari searches for specific names like

"yum install hadoop_x_x_x_x-xxxx"
"yum install hadoop_x_x_x_x-xxxx-hdfs"

and so on.
How can I make it work with my own generated RPMs?

-- 
With Regards:-
Aman Poonia

On Fri, Oct 7, 2016 at 11:39 PM, Alejandro Fernandez 
<afernan...@hortonworks.com> wrote:

Hi Aman,
Making your own distribution is no easy task. You can literally spend months 
trying to do this, since it requires:

- tooling (like the equivalent of conf-select and hdp-select to change 
symlinks; see the sketch after this list)
- packaging of Hadoop into RPMs (or the equivalent for other OSes)
- finding compatible versions of each product
- providing default configs based on those versions
- your own stack advisor
- handling configs during stack upgrade (rolling/express)
- etc.
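
As a sketch of what such a select tool does (this is not the real hdp-select; 
the /usr/hdp layout is an assumption):

# Sketch of an hdp-select-style tool: flips the "current" symlink for a
# component between versioned installs. Not the real tool; layout assumed.
import os

STACK_ROOT = "/usr/hdp"  # assumed

def select_version(component, version):
    """Point /usr/hdp/current/<component> at /usr/hdp/<version>/<component>."""
    target = os.path.join(STACK_ROOT, version, component)
    if not os.path.isdir(target):
        raise ValueError("no such install: " + target)
    link = os.path.join(STACK_ROOT, "current", component)
    if os.path.islink(link):
        os.remove(link)
    os.symlink(target, link)

# Usage: select_version("hadoop-client", "2.3.4.0-3485")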
What exactly are you trying to accomplish?
Thanks,
Alejandro
From: aman poonia <aman.poonia...@gmail.com>
Date: Friday, October 7, 2016 at 4:53 AM
To: Alejandro Fernandez <afernan...@hortonworks.com>
Cc: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: Re: How to install and start apache distributed hadoop rather than 
hortonworks distribution

So essentially, if I want to use the Apache distribution, I need to define my 
own stack? Can't I just change some configuration so that it starts working 
with the Apache distribution?
What I understood from the documentation and code is that to write a stack, 
one needs to provide one's own replacement for "hdp-select" and "conf-select", 
and I could not find documentation around what is expected from these tools 
(like what functions one needs to implement), so it looks like a dark area to 
me.
I did a quick grep to see if there is something around the version number of 
the stack, and found this in ambari-common:

ambari-common/src/main/python/resource_management/libraries/functions/stack_select.py:
    match = re.match('[0-9]+.[0-9]+.[0-9]+.[0-9]+-[0-9]+', stack_version)
ambari-common/src/main/python/resource_management/libraries/functions/get_stack_version.py:
    match = re.findall('[0-9]+.[0-9]+.[0-9]+.[0-9]+-[0-9]+', home_dir_split[iSubdir])
ambari-common/src/main/python/resource_management/libraries/functions/get_stack_version.py:
    match = re.match('[0-9]+.[0-9]+.[0-9]+.[0-9]+-[0-9]+', stack_version)

Looks like there is some rule around the naming of RPM packages and stack 
naming which I am completely missing!
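
Indeed, a quick check of what those patterns accept (using the regex exactly 
as grepped above; note its dots are unescaped and match any character, but the 
intended shape is clearly four numeric parts plus a build number):

import re

# The pattern as it appears in stack_select.py above.
PATTERN = '[0-9]+.[0-9]+.[0-9]+.[0-9]+-[0-9]+'

for candidate in ['2.3.4.0-3485', '2.3.4.0', '3.0.0-SNAPSHOT']:
    print(candidate, '->', bool(re.match(PATTERN, candidate)))
# 2.3.4.0-3485   -> True   (four numeric parts plus a -build suffix)
# 2.3.4.0        -> False  (missing the -build suffix)
# 3.0.0-SNAPSHOT -> False  (build suffix is not numeric)

So a stack version like 2.3.4.0-3485 matches, while a plain Apache version 
such as 2.7.1 does not, which would explain why plainly named Bigtop RPMs are 
not picked up.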


-- 
With Regards:-
Aman Poonia

On Wed, Oct 5, 2016 at 11:11 PM, Alejandro Fernandez 
<afernan...@hortonworks.com> wrote:

Hi Aman,
Ambari is meant to work with any distribution, as long as it has a stack 
definition, which includes the list of services, RPM names, etc. For example:
https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks
Are you trying to build your own stack?
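
For reference, a stack definition is a directory tree roughly like the 
following (illustrative; see the linked repo for the real layout):

stacks/
  HDP/
    2.3/
      metainfo.xml
      repos/
        repoinfo.xml
      services/
        HDFS/
          metainfo.xml
        YARN/
          metainfo.xml
        ...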
Thanks,
Alejandro
From: aman poonia <aman.poonia...@gmail.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Wednesday, October 5, 2016 at 3:10 AM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: How to install and start apache distributed hadoop rather than 
hortonworks distribution

I am new to Ambari and have been trying to set up a cluster. Ambari looks 
interesting to use.
However, I am having a tough time understanding how to install and start 
Apache-distributed Hadoop rather than Hortonworks-distributed Hadoop using 
Ambari. Is there documentation I can refer to? There are instances when I 
don't want to use the Hortonworks distribution and instead want to use 
Apache-distributed Hadoop. I also need some help in understanding the naming 
convention of the RPM packages that Ambari expects. Have I missed something 
in the documentation?

-- 
With Regards,
Aman Poonia