[
https://issues.apache.org/jira/browse/AMBARI-7842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14187830#comment-14187830
]
Hadoop QA commented on AMBARI-7842:
-----------------------------------
{color:green}+1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12677767/AMBARI-7842.patch
against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:green}+1 tests included{color}. The patch appears to include 2 new
or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:green}+1 core tests{color}. The patch passed unit tests in
ambari-server.
Test results:
https://builds.apache.org/job/Ambari-trunk-test-patch/394//testReport/
Console output:
https://builds.apache.org/job/Ambari-trunk-test-patch/394//console
This message is automatically generated.
> Ambari to manage tarballs on HDFS
> ---------------------------------
>
> Key: AMBARI-7842
> URL: https://issues.apache.org/jira/browse/AMBARI-7842
> Project: Ambari
> Issue Type: Bug
> Reporter: Alejandro Fernandez
> Assignee: Alejandro Fernandez
> Priority: Blocker
> Attachments: AMBARI-7842.patch, AMBARI-7842_branch-1.7.0.patch,
> ambari_170_versioned_rpms.pptx
>
>
> With HDP 2.2, Ambari needs to copy the tarballs/jars from the local file
> system to a certain location in HDFS.
> The tarballs/jars no longer have a version number (either the component version
> or the HDP stack version + build) in the name, but the destination folder in
> HDFS does contain the HDP version (e.g., 2.2.0.0-999).
> {code}
> /hdp/apps/$(hdp-stack-version)
> |---- mapreduce/mapreduce.tar.gz
> |---- mapreduce/hadoop-streaming.jar (needed by WebHCat; on the local file
> system it is a symlink to a versioned file, so the copy must follow the link)
> |---- tez/tez.tar.gz
> |---- pig/pig.tar.gz
> |---- hive/hive.tar.gz
> |---- sqoop/sqoop.tar.gz
> {code}
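>
> A minimal sketch (not Ambari's actual implementation) of how the copy might
> look, using the standard hadoop fs CLI; the local paths, the stack version
> value, and the helper name are illustrative assumptions:
> {code}
> import os
> import subprocess
>
> def copy_tarball_to_hdfs(local_path, component, hdp_stack_version):
>     """Copy a local tarball/jar to /hdp/apps/<hdp-stack-version>/<component>/."""
>     # hadoop-streaming.jar is a symlink to a versioned file on the local
>     # file system, so resolve the link before uploading.
>     real_path = os.path.realpath(local_path)
>     dest_dir = "/hdp/apps/{0}/{1}".format(hdp_stack_version, component)
>     # Keep the unversioned basename of the symlink as the destination name.
>     dest_file = "{0}/{1}".format(dest_dir, os.path.basename(local_path))
>
>     subprocess.check_call(["hadoop", "fs", "-mkdir", "-p", dest_dir])
>     subprocess.check_call(["hadoop", "fs", "-put", "-f", real_path, dest_file])
>     return dest_file
>
> if __name__ == "__main__":
>     # Hypothetical local path; the destination becomes
>     # /hdp/apps/2.2.0.0-999/mapreduce/mapreduce.tar.gz
>     copy_tarball_to_hdfs("/usr/hdp/2.2.0.0-999/hadoop/mapreduce.tar.gz",
>                          "mapreduce", "2.2.0.0-999")
> {code}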
> Furthermore, the folders created in HDFS need permissions of 0555, while
> files need 0444.
> The owner should be hdfs and the group should be hadoop.
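>
> A minimal sketch of applying those permissions and the hdfs:hadoop ownership
> with the hadoop fs CLI (illustrative only; the paths are assumptions):
> {code}
> import subprocess
>
> def set_hdfs_perms(dest_dir, dest_file):
>     # Folders get 0555, files get 0444, owned by hdfs:hadoop.
>     subprocess.check_call(["hadoop", "fs", "-chmod", "555", dest_dir])
>     subprocess.check_call(["hadoop", "fs", "-chmod", "444", dest_file])
>     subprocess.check_call(["hadoop", "fs", "-chown", "-R", "hdfs:hadoop", dest_dir])
>
> if __name__ == "__main__":
>     set_hdfs_perms("/hdp/apps/2.2.0.0-999/mapreduce",
>                    "/hdp/apps/2.2.0.0-999/mapreduce/mapreduce.tar.gz")
> {code}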
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)