> 6) Besides installing the deb on my machine and playing with it, what
other tests can be done to ensure it works as intended?

We have a testing framework that runs smoke tests in a largely automated
way; check out [1].
Be aware that deployment and testing are not yet covered
across all Bigtop components; check the support matrix [2].
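
The workflow in [1] boils down to something like the following. This is a
command sketch only, not a definitive recipe; the option names are
assumptions to verify against the wiki page and `./docker-hadoop.sh --help`:

```shell
# Command sketch: provision a small cluster with the Docker provisioner,
# run the smoke tests against it, then tear it down.
cd provisioner/docker
./docker-hadoop.sh --create 3        # 3-node cluster from the config yaml
./docker-hadoop.sh --smoke-tests     # smoke-test the deployed components
./docker-hadoop.sh --destroy         # clean up the containers
```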

[1]
https://cwiki.apache.org/confluence/display/BIGTOP/Quickstart+Guide%3A+Bigtop+Integration+Test+Framework+2.0
[2]
https://cwiki.apache.org/confluence/display/BIGTOP/Overview+of+Bigtop+1.4.0+Support+Matrix

As Olaf said, just post your question here and we'd be happy to answer.

- Evans


Olaf Flebbe <[email protected]> 於 2020年6月6日 週六 下午9:48寫道:

> Hi
>
> Thank you for your questions!
>
>
>
> Am 06.06.2020 um 10:41 schrieb Jagat Singh <[email protected]>:
>
> Hi,
>
> I am trying to use Bigtop to create a deb file for Hadoop 3.2.1. I am
> using Ubuntu 18.04.
>
> In the process of doing this, many questions came to my mind which I
> wanted to learn about.
>
> 1) I changed the bom file and ran the gradle hadoop-deb command. Is this
> the correct process to create any deb or rpm?
>
>
> As a Bigtop user: no. You are not expected to roll your own distribution.
> As a developer: yes, more or less; that is usually the first step when
> upgrading a package.
> Since you already read the instructions: I would like to stress that you
> are supposed to use the Docker containers (for git trunk,
> bigtop/slaves:trunk-ubuntu-18.04) and run the gradle command within the
> container.
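
A minimal sketch of building inside the slave container. The image tag
comes from the note above; the mount point and the exact gradle task are
assumptions, so adjust them to your checkout:

```shell
# Run the packaging task inside the bigtop/slaves image so that all
# build dependencies come from the container, not from the host.
docker pull bigtop/slaves:trunk-ubuntu-18.04
docker run --rm -it -v "$(pwd)":/ws -w /ws \
    bigtop/slaves:trunk-ubuntu-18.04 ./gradlew hadoop-deb
```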
>
>
> 2) In what situation do you use the patches present in the folder
> common/src/hadoop/? Are they only needed when the upstream project makes
> additional changes after a release?
>
>
> We try to use zero patches, but if we find defects that are not fixed in
> the version we want to package, then we add patches. Or the build system
> has to be adapted to our needs… Usually we upstream fixes to the original
> project unless they are very Bigtop-specific. Most of the time we
> cherry-pick fixes from unreleased versions of the package.
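
For illustration: these patches are plain unified diffs, applied in sorted
filename order before the component build. A self-contained mock of that
step (the file names and paths below are made up, not the real Bigtop
layout):

```shell
# Mimic applying Bigtop-style patches from a common/<component>/ folder.
# Everything runs in a throwaway directory; names are illustrative.
workdir=$(mktemp -d)
cd "$workdir"
printf 'hello\n' > source.txt
mkdir patches
cat > patches/patch0-fix-greeting.diff <<'EOF'
--- a/source.txt
+++ b/source.txt
@@ -1 +1 @@
-hello
+hello, bigtop
EOF
# Apply patches in sorted order, as a build script would:
for p in patches/patch*.diff; do
    patch -p1 < "$p"
done
cat source.txt
```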
>
> 3) Many times the build fails in between, and I had to clean up both the
> build and output folders and restart. What is the process to resume from
> where it failed, to save time?
>
>
> No way, unfortunately. Neither deb nor rpm packaging has a way to resume
> where it failed.
>
> When I developed packaging, I usually tried to do the build and the
> dependencies manually first, and then automated things in the build
> scripts. This is very time consuming, indeed.
>
> Now you are hitting the ugly part. The upstream Hadoop project decided to
> rework all supporting start scripts and introduced different processes and
> frameworks with Hadoop 3, breaking our complete Hadoop setup.
>
> I did some initial work last year for Hadoop 3.1.2 in the "bigtop-alpha"
> git branch, and then dropped out of the project because of lack of time
> and changed priorities.
>
> If you would like to invest in hadoop-3.2.1, I would recommend looking
> into that branch and merging it with current master first.
>
>
> 4) Follow-up to 3): building Hadoop and building the deb for Hadoop are
> two different things. What commands can be used to do one after the other
> manually, to save time in case one fails?
>
>
> Right. Building Hadoop should be straightforward, but packaging the new
> Hadoop 3.2 layout is hard, since the Hadoop project decided to change to a
> monolithic approach. Without rewriting nearly all scripts it is no longer
> possible to start the daemons independently.
>
> I would recommend doing a manual installation first and starting fresh
> with packaging the individual parts.
>
>
>
> > Task :hadoop-deb FAILED
> dpkg-source: warning: extracting unsigned source package
> (hadoop_3.2.1-1.dsc)
> dpkg-source: error: unpack target exists: hadoop-3.2.1
> dpkg-source: info: extracting hadoop in hadoop-3.2.1
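
The "unpack target exists" error in the log above means a previous failed
run left the extracted tree behind, and dpkg-source refuses to unpack over
it. A tiny reproduction of the situation and the remedy (in the real log
the leftover directory is output/hadoop/hadoop-3.2.1):

```shell
# Simulate the dpkg-source failure mode: an unpack target left over from
# a failed run must be removed before retrying the packaging task.
stale=$(mktemp -d)
mkdir "$stale/hadoop-3.2.1"            # the leftover extracted tree
if [ -d "$stale/hadoop-3.2.1" ]; then
    echo "unpack target exists: hadoop-3.2.1 -- removing before retry"
    rm -rf "$stale/hadoop-3.2.1"
fi
```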
>
> 4) How does Bigtop ensure compatibility between products? For example,
> Hive is compiled with Hadoop version 3.2.1. Based on my understanding,
> there is an override in the mvn -D commands, and the version information
> is taken from the bom file. Is this understanding correct? Do I still need
> to change any maven pom file for any software to ensure compatibility?
>
>
> You are right, that is the way Bigtop currently works. The crucial part is
> that it uses the local maven repository to pick up previously built
> artifacts. (A dependency does "mvn install", and "mvn package" of a
> dependent package will then pick up the previously compiled artifact from
> ~/.m2/repository.)
>
> Additionally, some crucial parts like the Hadoop or mapreduce jars are
> symlinked in packages, so they are only installed once on the target
> system.
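
What that looks like on disk, roughly. The paths below are illustrative,
not the exact Bigtop package layout:

```shell
# One physical jar, symlinked into a dependent package's lib directory,
# so every component on the box loads the same installed artifact.
root=$(mktemp -d)
mkdir -p "$root/usr/lib/hadoop" "$root/usr/lib/hive/lib"
touch "$root/usr/lib/hadoop/hadoop-common-3.2.1.jar"
ln -s ../../hadoop/hadoop-common-3.2.1.jar \
    "$root/usr/lib/hive/lib/hadoop-common-3.2.1.jar"
# The symlink resolves to the single installed copy:
readlink "$root/usr/lib/hive/lib/hadoop-common-3.2.1.jar"
```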
>
> I was proposing to stop doing this, since we might break APIs that way
> (and we have had that). My approach in bigtop-alpha was to use the
> dependencies the devs chose and to accept different versions of a jar in
> the system, depending on the *network* API rather than the *Java* API.
> That would allow us to compile packages independently.
>
>
> 5) Are there any more resources to learn about Bigtop other than
> https://cwiki.apache.org/confluence/display/BIGTOP/Bigtop+Packaging? What
> should I do next to learn more about Bigtop?
>
>
> The best resources are questions. We will do our best to answer.
> It might also help to look into the jobs behind the scenes at
> ci.bigtop.apache.org. Please create an account there and I am happy to
> give you read access to the configuration of bigtop-trunk-packages, for
> instance.
>
>
>
> 6) Besides installing the deb on my machine and playing with it, what
> other tests can be done to ensure it works as intended?
>
>
> I think Evans can answer this best. We have docker deployments.
>
> 7) For Hadoop 3.1.2:
>
> I see there is an exit 1 in install_hadoop.sh.
>
> My build fails around that; is there any way to debug this? Why is this
> exit 1 there?
>
>
> I am assuming the jar file is simply no longer there, since Hadoop major
> version 3 changed the layout a lot.
> AFAIK I worked around all these problems in the bigtop-alpha branch.
>
>
> https://github.com/apache/bigtop/blob/master/bigtop-packages/src/common/hadoop/install_hadoop.sh#L438
>
> + '[' -e
> debian/tmp//usr/lib/hadoop-mapreduce//hadoop-annotations-3.2.1.jar ']'
> + exit 1
>
> debian/rules:55: recipe for target 'override_dh_auto_install' failed
> make[1]: *** [override_dh_auto_install] Error 1
> make[1]: Leaving directory
> '/home/jj/dev/code/open/bigtop/output/hadoop/hadoop-3.2.1'
> debian/rules:27: recipe for target 'binary' failed
> make: *** [binary] Error 2
> dpkg-buildpackage: error: fakeroot debian/rules binary subprocess returned
> exit status 2
> debuild: fatal error at line 1152:
> dpkg-buildpackage -rfakeroot -us -uc -ui -b
>  failed
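
The trace above shows that sanity check at work: install_hadoop.sh tests
whether an expected jar exists and aborts the packaging run with exit 1
when it does not. A self-contained reproduction of the guard idiom (the
jar names and paths below are made up; the function returns 1 instead of
exiting so both branches can be demonstrated):

```shell
# Mirror of the `[ -e "$jar" ] || exit 1` idiom in install_hadoop.sh.
check_jar() {
    if [ -e "$1" ]; then
        echo "found: $1"
    else
        echo "missing: $1 (install_hadoop.sh would exit 1 here)" >&2
        return 1
    fi
}

jardir=$(mktemp -d)
touch "$jardir/hadoop-annotations-3.2.1.jar"
check_jar "$jardir/hadoop-annotations-3.2.1.jar"
check_jar "$jardir/hadoop-annotations-9.9.9.jar" || echo "guard tripped"
```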
>
> Thanks in advance
>
>
> The Bigtop project would be very grateful if you submitted pull requests
> for Hadoop 3.
>
> Best Olaf
>
>
>
>
