[ https://issues.apache.org/jira/browse/AMBARI-9319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290599#comment-14290599 ]
Hadoop QA commented on AMBARI-9319:
-----------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12694358/AMBARI-9319.patch
against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 core tests{color}. The patch passed unit tests in .
Test results:
https://builds.apache.org/job/Ambari-trunk-test-patch/1476//testReport/
Console output:
https://builds.apache.org/job/Ambari-trunk-test-patch/1476//console
This message is automatically generated.
> HBase fails to start after adding HBase Service to a cluster that has NameNode HA already enabled
> -------------------------------------------------------------------------------------------------
>
> Key: AMBARI-9319
> URL: https://issues.apache.org/jira/browse/AMBARI-9319
> Project: Ambari
> Issue Type: Bug
> Components: ambari-web
> Affects Versions: 2.0.0
> Reporter: Antonenko Alexander
> Assignee: Antonenko Alexander
> Priority: Critical
> Fix For: 2.0.0
>
> Attachments: AMBARI-9319.patch, hbase-site.xml, hbase_error.txt, hbase_output.txt
>
>
> HBase fails to start when enabling HA in a 3-node cluster with Ambari 2.0.0 (build 339) and HDP 2.2.1.0-2165
> STR:
> Install Ambari 2.0.0 with default settings
> Install HDP 2.2.1.0 on a single node with just HDFS and ZK
> Add 2 more nodes with ZK servers on all 3 nodes
> Enable HA (service name is "ha")
> Add HBase service
> {code}
> 2014-12-30 19:29:40,270 - Error while executing command 'start':
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 142, in execute
>     method(env)
>   File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_master.py", line 48, in start
>     self.configure(env) # for security
>   File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_master.py", line 38, in configure
>     hbase(name='master')
>   File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase.py", line 150, in hbase
>     params.HdfsResource(None, action="execute")
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
>     self.env.run()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 151, in run
>     self.run_action(resource, action)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 117, in run_action
>     provider_action()
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 105, in action_execute
>     logoutput=logoutput,
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
>     self.env.run()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 151, in run
>     self.run_action(resource, action)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 117, in run_action
>     provider_action()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 265, in action_run
>     raise ex
> Fail: Execution of 'hadoop --config /etc/hadoop/conf jar /var/lib/ambari-agent/lib/fast-hdfs-resource.jar /var/lib/ambari-agent/data/hdfs_resources.json hdfs://ha' returned 1.
> Creating: Resource [source=null, target=hdfs://c6404.ambari.apache.org,c6405.ambari.apache.org:8020/apps/hbase/data, type=directory, action=create, owner=hbase, group=null, mode=null, recursiveChown=false, recursiveChmod=false]
> Exception in thread "main" java.lang.IllegalArgumentException: Wrong FS: hdfs://c6404.ambari.apache.org,c6405.ambari.apache.org:8020/apps/hbase/data, expected: hdfs://ha
>         at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:193)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:105)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1118)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
>         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
>         at org.apache.hadoop.fs.FileSystem.isFile(FileSystem.java:1426)
>         at org.apache.ambari.fast_hdfs_resource.Resource.checkResourceParameters(Resource.java:152)
>         at org.apache.ambari.fast_hdfs_resource.Runner.main(Runner.java:72)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {code}
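The "Wrong FS" error in the log comes from Hadoop's FileSystem.checkPath, which rejects any path whose scheme or authority differs from the filesystem it was opened against. A minimal Python sketch of that check (illustrative only, under my own naming; this is not the Hadoop or Ambari source) shows why the comma-joined NameNode host list fails once fs.defaultFS is the logical nameservice URI hdfs://ha:

```python
from urllib.parse import urlparse

def check_path(default_fs, target):
    """Rough analogue of Hadoop's FileSystem.checkPath: a fully qualified
    target must share the filesystem's scheme and authority, otherwise it
    is rejected as belonging to the "wrong" filesystem."""
    fs = urlparse(default_fs)
    uri = urlparse(target)
    if uri.scheme and (uri.scheme != fs.scheme or uri.netloc != fs.netloc):
        raise ValueError("Wrong FS: %s, expected: %s" % (target, default_fs))
    return uri.path

# With NameNode HA, the authority is the nameservice name ("ha"), so a
# target qualified with the nameservice passes:
print(check_path("hdfs://ha", "hdfs://ha/apps/hbase/data"))

# The failing resource from the log carries a comma-joined host list as
# its authority instead, so the check raises, as in the stack trace above:
try:
    check_path(
        "hdfs://ha",
        "hdfs://c6404.ambari.apache.org,c6405.ambari.apache.org:8020/apps/hbase/data",
    )
except ValueError as e:
    print(e)
```

In an HA cluster the target written into hdfs_resources.json must therefore be qualified with the nameservice URI (or left unqualified), not with a list of NameNode hosts.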
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)