Re: Review Request 57544: Atlas MetaData server start fails while granting permissions to HBase tables after unkerberizing the cluster

2017-03-14 Thread Laszlo Puskas

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/57544/#review168895
---


Ship it!




Ship It!

- Laszlo Puskas


On March 12, 2017, 3:37 p.m., Robert Levas wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/57544/
> ---
> 
> (Updated March 12, 2017, 3:37 p.m.)
> 
> 
> Review request for Ambari, Attila Magyar, Balázs Bence Sári, Eugene 
> Chekanskiy, Laszlo Puskas, and Sebastian Toader.
> 
> 
> Bugs: AMBARI-20408
> https://issues.apache.org/jira/browse/AMBARI-20408
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> STR
> 1. Deploy HDP-2.5.0.0 with Ambari-2.5.0.0 (secure MIT cluster installed via 
> blueprint)
> 2. Express Upgrade the cluster to 2.6.0.0
> 3. Disable Kerberos
> 4. Observed that the Atlas Metadata server start failed with the errors below:
> 
> ```
> Traceback (most recent call last):
>   File "/var/lib/ambari-agent/cache/common-services/ATLAS/0.1.0.2.3/package/scripts/metadata_server.py", line 249, in <module>
>     MetadataServer().execute()
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 282, in execute
>     method(env)
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 720, in restart
>     self.start(env, upgrade_type=upgrade_type)
>   File "/var/lib/ambari-agent/cache/common-services/ATLAS/0.1.0.2.3/package/scripts/metadata_server.py", line 102, in start
>     user=params.hbase_user
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
>     self.env.run()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
>     self.run_action(resource, action)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
>     provider_action()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 262, in action_run
>     tries=self.resource.tries, try_sleep=self.resource.try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
>     result = function(command, **kwargs)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
>     tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
>     result = _call(command, **kwargs_copy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
>     raise ExecutionFailed(err_msg, code, out, err)
> resource_management.core.exceptions.ExecutionFailed: Execution of 'cat /var/lib/ambari-agent/tmp/atlas_hbase_setup.rb | hbase shell -n' returned 1.  Hortonworks #
> This is MOTD message, added for testing in qe infra
> atlas_titan
> ATLAS_ENTITY_AUDIT_EVENTS
> atlas
> TABLE
> ATLAS_ENTITY_AUDIT_EVENTS
> atlas_titan
> 2 row(s) in 0.2000 seconds
> 
> nil
> TABLE
> ATLAS_ENTITY_AUDIT_EVENTS
> atlas_titan
> 2 row(s) in 0.0030 seconds
> 
> nil
> java exception
> ERROR Java::OrgApacheHadoopHbaseIpc::RemoteWithExtrasException: org.apache.hadoop.hbase.exceptions.UnknownProtocolException: No registered coprocessor service found for name AccessControlService in region hbase:acl,,1480905643891.19e697cf0c4be8a99c54e39aea069b29.
>   at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7692)
>   at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1897)
>   at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1879)
>   at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32299)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2141)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
> ```
> # Cause
> When disabling Kerberos, the stack advisor recommendations are not properly 
> applied due to the order of operations and various conditionals.
> 
> # Solution
> Ensure that the stack advisor recommendations are properly applied when 
> disabling Kerberos.
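> 
> For illustration only, a self-contained sketch of the intended ordering when 
> disabling Kerberos (toy code, not the actual patch; the class name and the 
> example property below are hypothetical):
> 
> ```java
> import java.util.HashMap;
> import java.util.Map;
> 
> // Toy model of the disable-Kerberos preparation step: the stack advisor
> // recommendations are applied as an explicit, unconditional step after the
> // per-host processing, instead of being skipped by conditionals.
> public class DisableKerberosOrderingSketch {
>   public static void main(String[] args) {
>     // 1. Per-host processing only collects the recommended configuration changes.
>     Map<String, String> recommendations = new HashMap<>();
>     recommendations.put("atlas.authentication.method.kerberos", "false"); // example only
> 
>     // 2. The collected recommendations are then applied explicitly, so the
>     //    cluster configuration reflects the unkerberized state before services
>     //    such as the Atlas Metadata server are restarted.
>     recommendations.forEach((key, value) -> System.out.println("apply " + key + "=" + value));
>   }
> }
> ```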
> 
> 
> Diffs
> -----
> 
>   
> ambari-server/src/main/java/org/apache/ambari/server/controller/KerberosHelper.java
>  0e27d03 
>   
> 

Re: Review Request 57544: Atlas MetaData server start fails while granting permissions to HBase tables after unkerberizing the cluster

2017-03-12 Thread Robert Levas

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/57544/#review168722
---




ambari-server/src/main/java/org/apache/ambari/server/serveraction/kerberos/AbstractPrepareKerberosServerAction.java
Lines 267 (patched)


Moved from 
`ambari-server/src/main/java/org/apache/ambari/server/serveraction/kerberos/PrepareKerberosIdentitiesServerAction.java`



ambari-server/src/main/java/org/apache/ambari/server/serveraction/kerberos/PrepareDisableKerberosServerAction.java
Lines 232-233 (original), 213-214 (patched)


Moved `applyStackAdvisorUpdates` out of `processServiceComponentHosts` to have 
better control over when in the workflow the stack advisor changes are applied.
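
For context, a minimal sketch of the shape of this refactoring (hypothetical 
stand-in classes, not the real Ambari server actions):

```java
// Toy illustration: the base action no longer applies the stack advisor
// updates from inside processServiceComponentHosts(); each concrete action
// invokes applyStackAdvisorUpdates() at the point in its workflow where the
// recommendations should take effect.
abstract class AbstractPrepareSketch {
  protected void processServiceComponentHosts() {
    System.out.println("collect per-host Kerberos configuration changes");
  }

  protected void applyStackAdvisorUpdates() {
    System.out.println("apply stack advisor recommendations");
  }
}

class PrepareDisableKerberosSketch extends AbstractPrepareSketch {
  void execute() {
    processServiceComponentHosts();
    // The concrete action now decides when this happens in its workflow.
    applyStackAdvisorUpdates();
  }

  public static void main(String[] args) {
    new PrepareDisableKerberosSketch().execute();
  }
}
```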



ambari-server/src/main/java/org/apache/ambari/server/serveraction/kerberos/PrepareEnableKerberosServerAction.java
Lines 95-96 (patched)


Moved `applyStackAdvisorUpdates` out of `processServiceComponentHosts` to have 
better control over when in the workflow the stack advisor changes are applied.



ambari-server/src/main/java/org/apache/ambari/server/serveraction/kerberos/PrepareKerberosIdentitiesServerAction.java
Lines 100-101 (patched)


Moved `applyStackAdvisorUpdates` out of `processServiceComponentHosts` to have 
better control over when in the workflow the stack advisor changes are applied.



ambari-server/src/main/java/org/apache/ambari/server/serveraction/kerberos/PrepareKerberosIdentitiesServerAction.java
Line 211 (original)


Moved to 
`ambari-server/src/main/java/org/apache/ambari/server/serveraction/kerberos/AbstractPrepareKerberosServerAction.java`


- Robert Levas


On March 12, 2017, 11:37 a.m., Robert Levas wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/57544/
> ---
> 
> (Updated March 12, 2017, 11:37 a.m.)
> 
> 
> Review request for Ambari, Attila Magyar, Balázs Bence Sári, Eugene 
> Chekanskiy, Laszlo Puskas, and Sebastian Toader.
> 
> 
> Bugs: AMBARI-20408
> https://issues.apache.org/jira/browse/AMBARI-20408
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> STR
> 1. Deploy HDP-2.5.0.0 with Ambari-2.5.0.0 (secure MIT cluster installed via 
> blueprint)
> 2. Express Upgrade the cluster to 2.6.0.0
> 3. Disable Kerberos
> 4. Observed that the Atlas Metadata server start failed with the errors below:
> 
> ```
> Traceback (most recent call last):
>   File "/var/lib/ambari-agent/cache/common-services/ATLAS/0.1.0.2.3/package/scripts/metadata_server.py", line 249, in <module>
>     MetadataServer().execute()
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 282, in execute
>     method(env)
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 720, in restart
>     self.start(env, upgrade_type=upgrade_type)
>   File "/var/lib/ambari-agent/cache/common-services/ATLAS/0.1.0.2.3/package/scripts/metadata_server.py", line 102, in start
>     user=params.hbase_user
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
>     self.env.run()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
>     self.run_action(resource, action)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
>     provider_action()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 262, in action_run
>     tries=self.resource.tries, try_sleep=self.resource.try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
>     result = function(command, **kwargs)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
>     tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
>     result = _call(command, **kwargs_copy)
>   File