Re: Review Request 49640: Identify config changes added to Ambari-2.4.0 and mark them to not get added during Ambari upgrade

2016-07-06 Thread Sumit Mohanty

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49640/#review141126
---




ambari-server/src/main/resources/stacks/HDP/2.0.6/configuration/cluster-env.xml 
(line 27)


Actually, let's leave these as true. A newer version of cluster-env will not 
affect service restart.


- Sumit Mohanty


On July 5, 2016, 4:29 p.m., Dmitro Lisnichenko wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/49640/
> ---
> 
> (Updated July 5, 2016, 4:29 p.m.)
> 
> 
> Review request for Ambari, Jonathan Hurley, Nate Cole, and Sumit Mohanty.
> 
> 
> Bugs: AMBARI-17564
> https://issues.apache.org/jira/browse/AMBARI-17564
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> Now that we have the mechanism to prevent configs from getting added as part 
> of Ambari upgrade, let's identify the configs that were added in 2.4.0 and 
> mark them as not to be added during Ambari upgrade.
> 
> 
> Diffs
> -
> 
>   
> ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-hbase-env.xml
>  b4eecec 
>   
> ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-site.xml
>  871e571 
>   
> ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/configuration/application-properties.xml
>  3578d43 
>   
> ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/configuration/atlas-env.xml
>  b8f7715 
>   
> ambari-server/src/main/resources/common-services/HAWQ/2.0.0/configuration/hawq-site.xml
>  150b2c6 
>   
> ambari-server/src/main/resources/common-services/HAWQ/2.0.0/configuration/hawq-sysctl-env.xml
>  290239e 
>   
> ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/configuration/hbase-env.xml
>  9811191 
>   
> ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-site.xml
>  61437d5 
>   
> ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/webhcat-site.xml
>  d8012dd 
>   
> ambari-server/src/main/resources/common-services/KERBEROS/1.10.3-10/configuration/kerberos-env.xml
>  29c46e9 
>   
> ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/configuration/oozie-env.xml
>  d0e51eb 
>   
> ambari-server/src/main/resources/common-services/TEZ/0.4.0.2.1/configuration/tez-site.xml
>  e7a851c 
>   
> ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/configuration-mapred/mapred-site.xml
>  6951db0 
>   
> ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/configuration/yarn-env.xml
>  152c463 
>   
> ambari-server/src/main/resources/stacks/HDP/2.0.6/configuration/cluster-env.xml
>  89e05d7 
>   
> ambari-server/src/main/resources/stacks/HDP/2.2/services/YARN/configuration/yarn-site.xml
>  e1aab8a 
>   
> ambari-server/src/main/resources/stacks/HDP/2.3/services/STORM/configuration/storm-site.xml
>  f3bbce8 
>   
> ambari-server/src/main/resources/stacks/HDPWIN/2.3/services/OOZIE/configuration/oozie-site.xml
>  f2c41c3 
> 
> Diff: https://reviews.apache.org/r/49640/diff/
> 
> 
> Testing
> ---
> 
> mvn clean test
> 
> 
> Thanks,
> 
> Dmitro Lisnichenko
> 
>
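The "mechanism" referenced in the description above is presumably the per-property
on-ambari-upgrade marker in Ambari 2.4-era stack configuration XML; a hedged
sketch of what a property excluded from Ambari upgrade might look like (the
property name and value here are illustrative, not from the actual diff):

```xml
<!-- Illustrative property; the on-ambari-upgrade marker is the point. -->
<property>
  <name>example.new.property</name>
  <value>some-default</value>
  <description>Added in 2.4.0; not re-added during Ambari upgrade.</description>
  <on-ambari-upgrade add="false"/>
</property>
```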



Re: Review Request 49727: AMBARI-17598 Permission mismatch b/w 'Cluster user' and 'read only user' from older ambari

2016-07-06 Thread Yusaku Sako

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49727/#review141110
---


Ship it!




Ship It!

- Yusaku Sako


On July 6, 2016, 9:27 p.m., Zhe (Joe) Wang wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/49727/
> ---
> 
> (Updated July 6, 2016, 9:27 p.m.)
> 
> 
> Review request for Ambari, Jaimin Jetly, Richard Zang, Vivek Ratnavel 
> Subramanian, Xi Wang, and Yusaku Sako.
> 
> 
> Bugs: AMBARI-17598
> https://issues.apache.org/jira/browse/AMBARI-17598
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> In Ambari 2.2.2 the "Show only my widgets" checkbox was not shown for the 
> 'read only' user.
> But in Ambari 2.4.0, the cluster user is shown this checkbox.
> Steps to reproduce:
> 1) Create a cluster user.
> 2) Log in as the cluster user.
> 3) Go to the widget browser.
> 4) Check that "Show only my widgets" is not shown for the cluster user.
> Since the 'read only user' in Ambari 2.2.2 corresponds to the cluster user in 
> 2.4.0, the permissions should remain the same.
> 
> 
> Diffs
> -
> 
>   ambari-web/app/templates/common/modal_popups/widget_browser_footer.hbs 
> 3d58948 
>   ambari-web/app/templates/common/modal_popups/widget_browser_popup.hbs 
> 2b94e9d 
> 
> Diff: https://reviews.apache.org/r/49727/diff/
> 
> 
> Testing
> ---
> 
> Local ambari-web test passed.
> 28944 tests complete (24 seconds)
> 154 tests pending
> Manual testing done.
> 
> 
> Thanks,
> 
> Zhe (Joe) Wang
> 
>



Re: Review Request 49735: AMBARI-17570 Lack of importing ClientComponentHasNoStatus

2016-07-06 Thread Juanjo Marron

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49735/#review141106
---


Ship it!




Ship It!

- Juanjo  Marron


On July 6, 2016, 11:45 p.m., Masahiro Tanaka wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/49735/
> ---
> 
> (Updated July 6, 2016, 11:45 p.m.)
> 
> 
> Review request for Ambari, Alejandro Fernandez, Jayush Luniya, and Juanjo  
> Marron.
> 
> 
> Bugs: AMBARI-17570
> https://issues.apache.org/jira/browse/AMBARI-17570
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> Some of the *_client.py files (e.g. 
> ambari/ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/package/scripts/falcon_client.py)
>  use ClientComponentHasNoStatus without importing it. It should be 
> imported.
> 
> 
> Diffs
> -
> 
>   
> ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/package/scripts/falcon_client.py
>  1e7ed1f3 
>   
> ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_client.py
>  f8c33dd 
>   
> ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py
>  93d244d 
>   
> ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive_client.py
>  1f85fc0 
>   
> ambari-server/src/main/resources/common-services/KERBEROS/1.10.3-10/package/scripts/kerberos_client.py
>  5a398db 
>   
> ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_client.py
>  7c5b6e0 
>   
> ambari-server/src/main/resources/common-services/PIG/0.12.0.2.0/package/scripts/pig_client.py
>  111f4d2 
>   
> ambari-server/src/main/resources/common-services/SLIDER/0.60.0.2.2/package/scripts/slider_client.py
>  99314ec 
>   
> ambari-server/src/main/resources/common-services/ZOOKEEPER/3.4.5/package/scripts/zookeeper_client.py
>  fac92e1 
> 
> Diff: https://reviews.apache.org/r/49735/diff/
> 
> 
> Testing
> ---
> 
> -1 overall. Here are the results of testing the latest attachment 
> http://issues.apache.org/jira/secure/attachment/12816319/AMBARI-17570.patch
> against trunk revision .
> +1 @author. The patch does not contain any @author tags.
> -1 tests included. The patch doesn't appear to include any new or modified 
> tests.
> Please justify why no new tests are needed for this patch.
> Also please list what manual steps were performed to verify this patch.
> +1 javac. The applied patch does not increase the total number of javac 
> compiler warnings.
> +1 release audit. The applied patch does not increase the total number of 
> release audit warnings.
> +1 core tests. The patch passed unit tests in ambari-server.
> Test results: 
> https://builds.apache.org/job/Ambari-trunk-test-patch/7710//testReport/
> Console output: 
> https://builds.apache.org/job/Ambari-trunk-test-patch/7710//console
> This message is automatically generated.
> 
> 
> Thanks,
> 
> Masahiro Tanaka
> 
>



Review Request 49735: AMBARI-17570 Lack of importing ClientComponentHasNoStatus

2016-07-06 Thread Masahiro Tanaka

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49735/
---

Review request for Ambari, Alejandro Fernandez, Jayush Luniya, and Juanjo  
Marron.


Bugs: AMBARI-17570
https://issues.apache.org/jira/browse/AMBARI-17570


Repository: ambari


Description
---

Some of the *_client.py files (e.g. 
ambari/ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/package/scripts/falcon_client.py)
 use ClientComponentHasNoStatus without importing it. It should be 
imported.
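A minimal sketch of the pattern the patch fixes. In the real scripts the
exception is provided by Ambari's resource_management library (the import the
patch adds); the stand-in class and FalconClient skeleton below are
simplifications for illustration only:

```python
# Stand-in for the exception imported from resource_management in real scripts.
class ClientComponentHasNoStatus(Exception):
    """Raised by client components, which have no daemon status to report."""

class FalconClient:
    def status(self, env=None):
        # Client components are not long-running daemons; Ambari expects this
        # exception instead of a status result. Without the import, raising it
        # fails with a NameError instead.
        raise ClientComponentHasNoStatus()

client = FalconClient()
try:
    client.status()
    raised = False
except ClientComponentHasNoStatus:
    raised = True
```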


Diffs
-

  
ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/package/scripts/falcon_client.py
 1e7ed1f3 
  
ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_client.py
 f8c33dd 
  
ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py
 93d244d 
  
ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive_client.py
 1f85fc0 
  
ambari-server/src/main/resources/common-services/KERBEROS/1.10.3-10/package/scripts/kerberos_client.py
 5a398db 
  
ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_client.py
 7c5b6e0 
  
ambari-server/src/main/resources/common-services/PIG/0.12.0.2.0/package/scripts/pig_client.py
 111f4d2 
  
ambari-server/src/main/resources/common-services/SLIDER/0.60.0.2.2/package/scripts/slider_client.py
 99314ec 
  
ambari-server/src/main/resources/common-services/ZOOKEEPER/3.4.5/package/scripts/zookeeper_client.py
 fac92e1 

Diff: https://reviews.apache.org/r/49735/diff/


Testing
---

-1 overall. Here are the results of testing the latest attachment 
http://issues.apache.org/jira/secure/attachment/12816319/AMBARI-17570.patch
against trunk revision .
+1 @author. The patch does not contain any @author tags.
-1 tests included. The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.
+1 javac. The applied patch does not increase the total number of javac 
compiler warnings.
+1 release audit. The applied patch does not increase the total number of 
release audit warnings.
+1 core tests. The patch passed unit tests in ambari-server.
Test results: 
https://builds.apache.org/job/Ambari-trunk-test-patch/7710//testReport/
Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/7710//console
This message is automatically generated.


Thanks,

Masahiro Tanaka



Re: Review Request 49734: fix spark.driver.extraLibraryPath to include native gpl library

2016-07-06 Thread Alejandro Fernandez

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49734/#review141100
---


Ship it!




Ship It!

- Alejandro Fernandez


On July 6, 2016, 11:19 p.m., Weiqing Yang wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/49734/
> ---
> 
> (Updated July 6, 2016, 11:19 p.m.)
> 
> 
> Review request for Ambari, Sumit Mohanty and Srimanth Gunturi.
> 
> 
> Bugs: AMBARI-17579
> https://issues.apache.org/jira/browse/AMBARI-17579
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> The spark-shell command prints a "Could not load native gpl library" error. 
> The directory containing libgplcompression needs to be added to 
> spark.driver.extraLibraryPath.
> 
> 
> Diffs
> -
> 
>   
> ambari-server/src/main/resources/common-services/SPARK/1.2.1/package/scripts/params.py
>  67bbac7 
>   
> ambari-server/src/main/resources/common-services/SPARK2/2.0.0/package/scripts/params.py
>  78feef3 
> 
> Diff: https://reviews.apache.org/r/49734/diff/
> 
> 
> Testing
> ---
> 
> Manual tests passed.
> 
> 
> Thanks,
> 
> Weiqing Yang
> 
>



Review Request 49734: fix spark.driver.extraLibraryPath to include native gpl library

2016-07-06 Thread Weiqing Yang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49734/
---

Review request for Ambari, Sumit Mohanty and Srimanth Gunturi.


Bugs: AMBARI-17579
https://issues.apache.org/jira/browse/AMBARI-17579


Repository: ambari


Description
---

The spark-shell command prints a "Could not load native gpl library" error. 
The directory containing libgplcompression needs to be added to 
spark.driver.extraLibraryPath.
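A hedged sketch of the kind of change described: appending the native-library
directory to the spark.driver.extraLibraryPath value. The paths and helper
name are illustrative, not the actual params.py code:

```python
def extend_extra_library_path(current, native_lib_dir):
    """Append the directory containing libgplcompression to a colon-separated
    spark.driver.extraLibraryPath value, preserving existing entries and
    avoiding duplicates."""
    parts = [p for p in current.split(":") if p]
    if native_lib_dir not in parts:
        parts.append(native_lib_dir)
    return ":".join(parts)

path = extend_extra_library_path(
    "/usr/hdp/current/spark-client/lib",
    "/usr/hdp/current/hadoop-client/lib/native")
```

Calling the helper a second time with the same directory leaves the value
unchanged, so re-running the script is safe.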


Diffs
-

  
ambari-server/src/main/resources/common-services/SPARK/1.2.1/package/scripts/params.py
 67bbac7 
  
ambari-server/src/main/resources/common-services/SPARK2/2.0.0/package/scripts/params.py
 78feef3 

Diff: https://reviews.apache.org/r/49734/diff/


Testing
---

Manual tests passed.


Thanks,

Weiqing Yang



Re: Review Request 48973: AMBARI-17324. kafka should set zookeeper.set.acl to true when kerberos enabled

2016-07-06 Thread Sriharsha Chintalapani

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/48973/
---

(Updated July 6, 2016, 9:27 p.m.)


Review request for Ambari and Alejandro Fernandez.


Bugs: AMBARI-17324
https://issues.apache.org/jira/browse/AMBARI-17324


Repository: ambari


Description
---

kafka should set zookeeper.set.acl to true when kerberos enabled
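Schematically, the change presumably sets the property through the Kafka
service's Kerberos descriptor, whose configurations section injects config
values when Kerberos is enabled. This fragment only illustrates the general
descriptor shape; it is not the exact diff:

```json
{
  "services": [
    {
      "name": "KAFKA",
      "configurations": [
        {
          "kafka-broker": {
            "zookeeper.set.acl": "true"
          }
        }
      ]
    }
  ]
}
```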


Diffs
-

  ambari-server/src/main/resources/common-services/KAFKA/0.9.0/kerberos.json 
eaa3d9d 

Diff: https://reviews.apache.org/r/48973/diff/


Testing
---


Thanks,

Sriharsha Chintalapani



Re: Review Request 48973: AMBARI-17324. kafka should set zookeeper.set.acl to true when kerberos enabled

2016-07-06 Thread Sriharsha Chintalapani

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/48973/
---

(Updated July 6, 2016, 9:27 p.m.)


Review request for Ambari and Alejandro Fernandez.


Summary (updated)
-

AMBARI-17324. kafka should set zookeeper.set.acl to true when kerberos enabled


Bugs: AMBARI-17234
https://issues.apache.org/jira/browse/AMBARI-17234


Repository: ambari


Description
---

kafka should set zookeeper.set.acl to true when kerberos enabled


Diffs
-

  ambari-server/src/main/resources/common-services/KAFKA/0.9.0/kerberos.json 
eaa3d9d 

Diff: https://reviews.apache.org/r/48973/diff/


Testing
---


Thanks,

Sriharsha Chintalapani



Review Request 49727: AMBARI-17598 Permission mismatch b/w 'Cluster user' and 'read only user' from older ambari

2016-07-06 Thread Zhe (Joe) Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49727/
---

Review request for Ambari, Jaimin Jetly, Richard Zang, Vivek Ratnavel 
Subramanian, Xi Wang, and Yusaku Sako.


Bugs: AMBARI-17598
https://issues.apache.org/jira/browse/AMBARI-17598


Repository: ambari


Description
---

In Ambari 2.2.2 the "Show only my widgets" checkbox was not shown for the 
'read only' user.
But in Ambari 2.4.0, the cluster user is shown this checkbox.
Steps to reproduce:
1) Create a cluster user.
2) Log in as the cluster user.
3) Go to the widget browser.
4) Check that "Show only my widgets" is not shown for the cluster user.
Since the 'read only user' in Ambari 2.2.2 corresponds to the cluster user in 
2.4.0, the permissions should remain the same.


Diffs
-

  ambari-web/app/templates/common/modal_popups/widget_browser_footer.hbs 
3d58948 
  ambari-web/app/templates/common/modal_popups/widget_browser_popup.hbs 2b94e9d 

Diff: https://reviews.apache.org/r/49727/diff/


Testing
---

Local ambari-web test passed.
28944 tests complete (24 seconds)
154 tests pending
Manual testing done.


Thanks,

Zhe (Joe) Wang



Re: Review Request 48973: AMBARI-17234. kafka should set zookeeper.set.acl to true when kerberos enabled

2016-07-06 Thread Sriharsha Chintalapani


> On July 5, 2016, 7:32 p.m., Robert Levas wrote:
> > Ship It!

Is this merged?


- Sriharsha


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/48973/#review140857
---


On June 20, 2016, 10:47 p.m., Sriharsha Chintalapani wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/48973/
> ---
> 
> (Updated June 20, 2016, 10:47 p.m.)
> 
> 
> Review request for Ambari and Alejandro Fernandez.
> 
> 
> Bugs: AMBARI-17234
> https://issues.apache.org/jira/browse/AMBARI-17234
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> kafka should set zookeeper.set.acl to true when kerberos enabled
> 
> 
> Diffs
> -
> 
>   ambari-server/src/main/resources/common-services/KAFKA/0.9.0/kerberos.json 
> eaa3d9d 
> 
> Diff: https://reviews.apache.org/r/48973/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Sriharsha Chintalapani
> 
>



Re: Review Request 49676: Add atlas-application config sections to all services that run Atlas hook, e.g., Hive, Falcon, Storm, Sqoop

2016-07-06 Thread Alejandro Fernandez

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49676/
---

(Updated July 6, 2016, 8:29 p.m.)


Review request for Ambari, Madhan Neethiraj, Robert Levas, Sumit Mohanty, 
Swapan Shridhar, and Suma Shivaprasad.


Changes
---

Added addendum patch


Bugs: AMBARI-17573
https://issues.apache.org/jira/browse/AMBARI-17573


Repository: ambari


Description
---

Currently, Atlas hooks that run in Hive, Falcon, Storm, and Sqoop processes 
reference the atlas-application.properties file from the Atlas server config 
location - /etc/atlas/conf/atlas-application.properties.
Not all properties in /etc/atlas/conf/atlas-application.properties are required 
by the hooks, and some of these properties are sensitive enough that they 
should not be exposed to hooks/clients.

To address this concern:
1. atlas-application.properties should be added as a config section in each of 
the host components that run an Atlas hook - Hive, Storm, Falcon, Sqoop
2. These new config sections will only include properties that are required by 
the respective hooks
3. During initial deployment, Ambari will initialize these properties with 
values in Atlas server configuration.
For each one of those services, create a config type called 
${service}-atlas-application.properties that will be saved to 
/etc/${service}/conf/application.properties

These are the default values,

Falcon
atlas.hook.falcon.synchronous=false
atlas.hook.falcon.numRetries=3
atlas.hook.falcon.minThreads=5
atlas.hook.falcon.maxThreads=5
atlas.hook.falcon.keepAliveTime=10
atlas.hook.falcon.queueSize

Storm
atlas.hook.storm.numRetries=3

Hive
atlas.hook.hive.synchronous=false
atlas.hook.hive.numRetries=3
atlas.hook.hive.minThreads=5
atlas.hook.hive.maxThreads=5
atlas.hook.hive.keepAliveTime=10
atlas.hook.hive.queueSize=1

Common for all hooks
atlas.kafka.zookeeper.connect=
atlas.kafka.bootstrap.servers=
atlas.kafka.zookeeper.session.timeout.ms=400
atlas.kafka.zookeeper.connection.timeout.ms=200
atlas.kafka.zookeeper.sync.time.ms=20
atlas.kafka.hook.group.id=atlas
atlas.notification.create.topics=true
atlas.notification.replicas=1
atlas.notification.topics=ATLAS_HOOK,ATLAS_ENTITIES
atlas.notification.kafka.service.principal=kafka/_h...@example.com
atlas.notification.kafka.keytab.location=/etc/security/keytabs/kafka.service.keytab
atlas.jaas.KafkaClient.loginModuleName = 
com.sun.security.auth.module.Krb5LoginModule
atlas.jaas.KafkaClient.loginModuleControlFlag = required
atlas.jaas.KafkaClient.option.useKeyTab = true
atlas.jaas.KafkaClient.option.storeKey = true
atlas.jaas.KafkaClient.option.serviceName = kafka
atlas.jaas.KafkaClient.option.keyTab = 
/etc/security/keytabs/atlas.service.keytab
atlas.jaas.KafkaClient.option.principal = atlas/_h...@example.com
atlas.cluster.name=
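The subset-and-initialize flow in steps 1-3 above can be sketched as a filter
over the Atlas server configuration: copy only the hook-required keys, leaving
sensitive server-side properties behind. The key set and helper are
illustrative, not Ambari's actual setup_atlas_hook.py:

```python
# Keys shared by all hooks, taken from the "Common for all hooks" list above
# (subset shown for brevity).
SHARED_HOOK_KEYS = {
    "atlas.kafka.zookeeper.connect",
    "atlas.kafka.bootstrap.servers",
    "atlas.notification.topics",
    "atlas.cluster.name",
}

def build_hook_properties(atlas_server_conf, service_defaults):
    """Merge per-service hook defaults with the hook-required subset of the
    Atlas server configuration; anything outside SHARED_HOOK_KEYS (e.g.
    sensitive server-only properties) is never copied."""
    props = dict(service_defaults)          # e.g. atlas.hook.hive.* defaults
    for key in SHARED_HOOK_KEYS:
        if key in atlas_server_conf:        # initialize from server config
            props[key] = atlas_server_conf[key]
    return props

props = build_hook_properties(
    {"atlas.kafka.bootstrap.servers": "kafka1:6667",
     "atlas.graph.storage.password": "secret"},   # sensitive: stays behind
    {"atlas.hook.hive.numRetries": "3"},
)
```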


Diffs
-

  
ambari-common/src/main/python/resource_management/libraries/functions/setup_atlas_hook.py
 PRE-CREATION 
  
ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/configuration/application-properties.xml
 1437251 
  
ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/configuration/falcon-atlas-application.properties.xml
 PRE-CREATION 
  
ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/metainfo.xml 
602144b 
  
ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/package/scripts/falcon.py
 c2f1f53 
  
ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/package/scripts/params_linux.py
 fc9d8b9 
  
ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/package/scripts/setup_atlas_falcon.py
 1dce515 
  
ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-atlas-application.properties.xml
 PRE-CREATION 
  ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/metainfo.xml 
273133a 
  
ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hcat.py
 839ab04 
  
ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py
 ea2af62 
  
ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py
 17f7380 
  
ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/setup_atlas_hive.py
 d1bd8ea 
  
ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/webhcat.py
 816b6af 
  ambari-server/src/main/resources/common-services/SQOOP/1.4.4.2.0/metainfo.xml 
e3aa5ef 
  
ambari-server/src/main/resources/common-services/SQOOP/1.4.4.2.0/package/scripts/params_linux.py
 b2a6802 
  
ambari-server/src/main/resources/common-services/SQOOP/1.4.4.2.0/package/scripts/setup_atlas_sqoop.py
 76c1cda 
  
ambari-server/src/main/resources/common-services/SQOOP/1.4.4.2.0/package/scripts/sqoop.py
 bac836c 
  
ambari-server/src/main/resources/common-services/STORM/0.9.1/package/scripts/params_linux.py
 fac6331 
  

Re: Review Request 49640: Identify config changes added to Ambari-2.4.0 and mark them to not get added during Ambari upgrade

2016-07-06 Thread Sumit Mohanty

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49640/#review141064
---




ambari-server/src/main/resources/common-services/HAWQ/2.0.0/configuration/hawq-site.xml
 (line 377)


Let's skip all HAWQ changes and open a bug to have the HAWQ developers decide 
which properties should not be added on Ambari upgrade.



ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/configuration/hbase-env.xml
 (line 155)


If this does not get added, what will the following line in params_linux.py do?

hbase_regionserver_shutdown_timeout = 
expect('/configurations/hbase-env/hbase_regionserver_shutdown_timeout', int)



ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/configuration/oozie-env.xml
 (line 83)


It looks like this code will break - we need some default behavior:

oozie_tmp_dir = config['configurations']['oozie-env']['oozie_tmp_dir']
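Both this comment and the hbase-env one above are about scripts indexing
configuration properties that may be absent after an Ambari upgrade. A hedged
sketch of the defensive pattern (a standalone illustration, not the
resource_management helpers the scripts actually use):

```python
def config_default(config, path, default=None):
    """Walk a '/'-separated path through nested config dicts, returning
    `default` when any segment is missing, instead of raising KeyError."""
    node = config
    for segment in path.strip("/").split("/"):
        if not isinstance(node, dict) or segment not in node:
            return default
        node = node[segment]
    return node

# Property removed (or never added) during Ambari upgrade:
config = {"configurations": {"oozie-env": {}}}
tmp_dir = config_default(config,
                         "/configurations/oozie-env/oozie_tmp_dir",
                         "/var/tmp/oozie")
```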



ambari-server/src/main/resources/common-services/TEZ/0.4.0.2.1/configuration/tez-site.xml
 (line 246)


Will this code cause an issue if the property is not there - 
ambari-server/src/main/resources/stacks/HDP/2.1/services/stack_advisor.py, 
lines 170-172?



ambari-server/src/main/resources/stacks/HDP/2.0.6/configuration/cluster-env.xml 
(line 27)


We need to make sure that these properties have good defaults if they are 
missing. You can ping Nahappan.


- Sumit Mohanty


On July 5, 2016, 4:29 p.m., Dmitro Lisnichenko wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/49640/
> ---
> 
> (Updated July 5, 2016, 4:29 p.m.)
> 
> 
> Review request for Ambari, Jonathan Hurley, Nate Cole, and Sumit Mohanty.
> 
> 
> Bugs: AMBARI-17564
> https://issues.apache.org/jira/browse/AMBARI-17564
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> Now that we have the mechanism to prevent configs from getting added as part 
> of Ambari upgrade, let's identify the configs that were added in 2.4.0 and 
> mark them as not to be added during Ambari upgrade.
> 
> 
> Diffs
> -
> 
>   
> ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-hbase-env.xml
>  b4eecec 
>   
> ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-site.xml
>  871e571 
>   
> ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/configuration/application-properties.xml
>  3578d43 
>   
> ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/configuration/atlas-env.xml
>  b8f7715 
>   
> ambari-server/src/main/resources/common-services/HAWQ/2.0.0/configuration/hawq-site.xml
>  150b2c6 
>   
> ambari-server/src/main/resources/common-services/HAWQ/2.0.0/configuration/hawq-sysctl-env.xml
>  290239e 
>   
> ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/configuration/hbase-env.xml
>  9811191 
>   
> ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-site.xml
>  61437d5 
>   
> ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/webhcat-site.xml
>  d8012dd 
>   
> ambari-server/src/main/resources/common-services/KERBEROS/1.10.3-10/configuration/kerberos-env.xml
>  29c46e9 
>   
> ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/configuration/oozie-env.xml
>  d0e51eb 
>   
> ambari-server/src/main/resources/common-services/TEZ/0.4.0.2.1/configuration/tez-site.xml
>  e7a851c 
>   
> ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/configuration-mapred/mapred-site.xml
>  6951db0 
>   
> ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/configuration/yarn-env.xml
>  152c463 
>   
> ambari-server/src/main/resources/stacks/HDP/2.0.6/configuration/cluster-env.xml
>  89e05d7 
>   
> ambari-server/src/main/resources/stacks/HDP/2.2/services/YARN/configuration/yarn-site.xml
>  e1aab8a 
>   
> ambari-server/src/main/resources/stacks/HDP/2.3/services/STORM/configuration/storm-site.xml
>  f3bbce8 
>   
> ambari-server/src/main/resources/stacks/HDPWIN/2.3/services/OOZIE/configuration/oozie-site.xml
>  f2c41c3 
> 
> Diff: https://reviews.apache.org/r/49640/diff/
> 
> 
> Testing
> ---
> 
> mvn clean test
> 
> 
> Thanks,
> 
> Dmitro Lisnichenko
> 
>



Re: Review Request 49676: Add atlas-application config sections to all services that run Atlas hook, e.g., Hive, Falcon, Storm, Sqoop

2016-07-06 Thread Robert Levas

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49676/#review141060
---


Ship it!




Ship It!

- Robert Levas


On July 6, 2016, 2:48 p.m., Alejandro Fernandez wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/49676/
> ---
> 
> (Updated July 6, 2016, 2:48 p.m.)
> 
> 
> Review request for Ambari, Madhan Neethiraj, Robert Levas, Sumit Mohanty, 
> Swapan Shridhar, and Suma Shivaprasad.
> 
> 
> Bugs: AMBARI-17573
> https://issues.apache.org/jira/browse/AMBARI-17573
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> Currently, Atlas hooks that run in Hive, Falcon, Storm, and Sqoop processes 
> reference the atlas-application.properties file from the Atlas server config 
> location - /etc/atlas/conf/atlas-application.properties.
> Not all properties in /etc/atlas/conf/atlas-application.properties are 
> required by the hooks, and some of these properties are sensitive enough 
> that they should not be exposed to hooks/clients.
> 
> To address this concern:
> 1. atlas-application.properties should be added as a config section in each 
> of the host components that run an Atlas hook - Hive, Storm, Falcon, Sqoop
> 2. These new config sections will only include properties that are required 
> by the respective hooks
> 3. During initial deployment, Ambari will initialize these properties with 
> values in Atlas server configuration.
> For each one of those services, create a config type called 
> ${service}-atlas-application.properties that will be saved to 
> /etc/${service}/conf/application.properties
> 
> These are the default values,
> 
> Falcon
> atlas.hook.falcon.synchronous=false
> atlas.hook.falcon.numRetries=3
> atlas.hook.falcon.minThreads=5
> atlas.hook.falcon.maxThreads=5
> atlas.hook.falcon.keepAliveTime=10
> atlas.hook.falcon.queueSize
> 
> Storm
> atlas.hook.storm.numRetries=3
> 
> Hive
> atlas.hook.hive.synchronous=false
> atlas.hook.hive.numRetries=3
> atlas.hook.hive.minThreads=5
> atlas.hook.hive.maxThreads=5
> atlas.hook.hive.keepAliveTime=10
> atlas.hook.hive.queueSize=1
> 
> Common for all hooks
> atlas.kafka.zookeeper.connect=
> atlas.kafka.bootstrap.servers=
> atlas.kafka.zookeeper.session.timeout.ms=400
> atlas.kafka.zookeeper.connection.timeout.ms=200
> atlas.kafka.zookeeper.sync.time.ms=20
> atlas.kafka.hook.group.id=atlas
> atlas.notification.create.topics=true
> atlas.notification.replicas=1
> atlas.notification.topics=ATLAS_HOOK,ATLAS_ENTITIES
> atlas.notification.kafka.service.principal=kafka/_h...@example.com
> atlas.notification.kafka.keytab.location=/etc/security/keytabs/kafka.service.keytab
> atlas.jaas.KafkaClient.loginModuleName = 
> com.sun.security.auth.module.Krb5LoginModule
> atlas.jaas.KafkaClient.loginModuleControlFlag = required
> atlas.jaas.KafkaClient.option.useKeyTab = true
> atlas.jaas.KafkaClient.option.storeKey = true
> atlas.jaas.KafkaClient.option.serviceName = kafka
> atlas.jaas.KafkaClient.option.keyTab = 
> /etc/security/keytabs/atlas.service.keytab
> atlas.jaas.KafkaClient.option.principal = atlas/_h...@example.com
> atlas.cluster.name=
> 
> 
> Diffs
> -
> 
>   
> ambari-common/src/main/python/resource_management/libraries/functions/setup_atlas_hook.py
>  PRE-CREATION 
>   
> ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/configuration/application-properties.xml
>  1437251 
>   
> ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/configuration/falcon-atlas-application.properties.xml
>  PRE-CREATION 
>   
> ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/metainfo.xml
>  602144b 
>   
> ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/package/scripts/falcon.py
>  c2f1f53 
>   
> ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/package/scripts/params_linux.py
>  fc9d8b9 
>   
> ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/package/scripts/setup_atlas_falcon.py
>  1dce515 
>   
> ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-atlas-application.properties.xml
>  PRE-CREATION 
>   
> ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/metainfo.xml 
> 273133a 
>   
> ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hcat.py
>  839ab04 
>   
> ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py
>  ea2af62 
>   
> ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py
>  17f7380 
>   
> ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/setup_atlas_hive.py
>  d1bd8ea 
>   
> 

Re: Review Request 48972: AMBARI-17253 Ambari Alert causes too many wanings in ZooKeeper logs.

2016-07-06 Thread Nate Cole


On July 4, 2016, 6:58 p.m., Masahiro Tanaka wrote:
> > What about existing clusters? We probably need to modify any existing ZK 
> > alerts with this using the UpgradeCatalog.
> 
> Masahiro Tanaka wrote:
> Thank you for reviewing. Which one should we change, 
> `UpgradeCatalog230.java`, or `UpgradeCatalog240.java`?
> 
> Jonathan Hurley wrote:
> Always the latest one for the release (or the branch). Since we're 
> readying the 2.4 release, then you can edit UpgadeCatalog240. If you 
> backported this to, say, the 2.2 branch (as an example), you'd edit 
> UpgradeCatalog220.
> 
> Masahiro Tanaka wrote:
> Thanks. I'll update UpgradeCatalog240 and UpgradeCatalog220

I don't think we would accept this into the 2.2 branch unless you were directed 
to do so.  I recommend sticking with trunk and branch-2.4.


- Nate
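The 'ruok' check proposed in the quoted patch description below can be
sketched as a small socket probe (host, port, and timeout are illustrative):

```python
import socket

def zookeeper_is_ok(host, port, timeout=5.0):
    """Send ZooKeeper's 'ruok' four-letter command; a healthy server answers
    'imok'. Unlike a bare connect-and-close probe, this does not leave an
    EndOfStreamException warning in the ZooKeeper log."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(b"ruok")
            sock.shutdown(socket.SHUT_WR)   # signal end of request
            return sock.recv(4) == b"imok"
    except OSError:
        return False
```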


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/48972/#review140707
---


On July 4, 2016, 3:01 p.m., Masahiro Tanaka wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/48972/
> ---
> 
> (Updated July 4, 2016, 3:01 p.m.)
> 
> 
> Review request for Ambari, Florian Barca, Jonathan Hurley, and Nate Cole.
> 
> 
> Bugs: AMBARI-17253
> https://issues.apache.org/jira/browse/AMBARI-17253
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> There are too many WARN messages in the ZooKeeper log.
> ```
> 2016-06-15 21:02:15,405 - WARN  
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of 
> stream exception
> EndOfStreamException: Unable to read additional data from client sessionid 
> 0x0, likely client has closed socket
> at 
> org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
> at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
> at java.lang.Thread.run(Thread.java:745)
> ```
> 
> It may be caused by Ambari Alert, which pings the ZooKeeper port to do 
> monitoring.
> We should use 'ruok' to monitor ZooKeeper instead.
> 
> 
> Diffs
> -
> 
>   ambari-agent/src/main/python/ambari_agent/alerts/port_alert.py 1918327 
>   ambari-agent/src/test/python/ambari_agent/TestPortAlert.py dffa56c 
>   
> ambari-server/src/main/java/org/apache/ambari/server/state/alert/PortSource.java
>  d7279de 
>   
> ambari-server/src/main/resources/common-services/ZOOKEEPER/3.4.5/alerts.json 
> 469036a 
> 
> Diff: https://reviews.apache.org/r/48972/diff/
> 
> 
> Testing
> ---
> 
> mvn clean test
> 
> ```
> +1 overall. Here are the results of testing the latest attachment 
> http://issues.apache.org/jira/secure/attachment/12811835/AMBARI-17253.2.patch
> against trunk revision .
> +1 @author. The patch does not contain any @author tags.
> +1 tests included. The patch appears to include 1 new or modified test files.
> +1 javac. The applied patch does not increase the total number of javac 
> compiler warnings.
> +1 release audit. The applied patch does not increase the total number of 
> release audit warnings.
> +1 core tests. The patch passed unit tests in .
> Test results: 
> https://builds.apache.org/job/Ambari-trunk-test-patch/7427//testReport/
> Console output: 
> https://builds.apache.org/job/Ambari-trunk-test-patch/7427//console
> This message is automatically generated.
> ```
> 
> 
> Thanks,
> 
> Masahiro Tanaka
> 
>
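
To make the proposed check concrete, the 'ruok' four-letter-word probe can be sketched in plain Python. This is a hypothetical illustration of the protocol only; the function name, defaults, and error handling are not taken from port_alert.py:

```python
# Hypothetical sketch of a ZooKeeper health probe using the 'ruok'
# four-letter-word command instead of a bare TCP connect. A healthy
# server answers 'imok'; anything else (or a socket error) is a failure.
import socket

def zk_is_ok(host="localhost", port=2181, timeout=5.0):
    sock = None
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.sendall(b"ruok")
        sock.shutdown(socket.SHUT_WR)  # tell the server the request is complete
        reply = sock.recv(4)
        return reply == b"imok"
    except OSError:
        return False
    finally:
        if sock is not None:
            sock.close()
```

Unlike a bare connect-and-close, the server sees a complete request, so it logs no EndOfStreamException warning.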



Re: Review Request 49676: Add atlas-application config sections to all services that run Atlas hook, e.g., Hive, Falcon, Storm, Sqoop

2016-07-06 Thread Nate Cole

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49676/#review141057
---


Ship it!




Ship It!

- Nate Cole


On July 6, 2016, 2:48 p.m., Alejandro Fernandez wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/49676/
> ---
> 
> (Updated July 6, 2016, 2:48 p.m.)
> 
> 
> Review request for Ambari, Madhan Neethiraj, Robert Levas, Sumit Mohanty, 
> Swapan Shridhar, and Suma Shivaprasad.
> 
> 
> Bugs: AMBARI-17573
> https://issues.apache.org/jira/browse/AMBARI-17573
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> Currently, Atlas hooks that run in Hive, Falcon, Storm, and Sqoop processes 
> reference atlas-application.properties file from Atlas server config location 
> - /etc/atlas/conf/atlas-application.properties.
> Not all properties in /etc/atlas/conf/atlas-application.properties are 
> required in hooks and some of these properties are sensitive enough not to 
> expose them to hooks/clients.
> 
> To address this concern:
> 1. atlas-application.properties should be added as a config section in each 
> of the host component's that run Atlas hook - Hive, Storm, Falcon, Sqoop
> 2. These new config sections will only include properties that are required 
> to the respective hooks
> 3. During initial deployment, Ambari will initialize these properties with 
> values in Atlas server configuration.
> For each one of those services, create a config type called 
> ${service}-atlas-application.properties that will be saved to 
> /etc/${service}/conf/application.properties
> 
> These are the default values,
> 
> Falcon
> atlas.hook.falcon.synchronous=false
> atlas.hook.falcon.numRetries=3
> atlas.hook.falcon.minThreads=5
> atlas.hook.falcon.maxThreads=5
> atlas.hook.falcon.keepAliveTime=10
> atlas.hook.falcon.queueSize
> 
> Storm
> atlas.hook.storm.numRetries=3
> 
> Hive
> atlas.hook.hive.synchronous=false
> atlas.hook.hive.numRetries=3
> atlas.hook.hive.minThreads=5
> atlas.hook.hive.maxThreads=5
> atlas.hook.hive.keepAliveTime=10
> atlas.hook.hive.queueSize=1
> 
> Common for all hooks
> atlas.kafka.zookeeper.connect=
> atlas.kafka.bootstrap.servers=
> atlas.kafka.zookeeper.session.timeout.ms=400
> atlas.kafka.zookeeper.connection.timeout.ms=200
> atlas.kafka.zookeeper.sync.time.ms=20
> atlas.kafka.hook.group.id=atlas
> atlas.notification.create.topics=true
> atlas.notification.replicas=1
> atlas.notification.topics=ATLAS_HOOK,ATLAS_ENTITIES
> atlas.notification.kafka.service.principal=kafka/_HOST@EXAMPLE.COM
> atlas.notification.kafka.keytab.location=/etc/security/keytabs/kafka.service.keytab
> atlas.jaas.KafkaClient.loginModuleName = 
> com.sun.security.auth.module.Krb5LoginModule
> atlas.jaas.KafkaClient.loginModuleControlFlag = required
> atlas.jaas.KafkaClient.option.useKeyTab = true
> atlas.jaas.KafkaClient.option.storeKey = true
> atlas.jaas.KafkaClient.option.serviceName = kafka
> atlas.jaas.KafkaClient.option.keyTab = 
> /etc/security/keytabs/atlas.service.keytab
> atlas.jaas.KafkaClient.option.principal = atlas/_HOST@EXAMPLE.COM
> atlas.cluster.name=
> 
> 
> Diffs
> -
> 
>   
> ambari-common/src/main/python/resource_management/libraries/functions/setup_atlas_hook.py
>  PRE-CREATION 
>   
> ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/configuration/application-properties.xml
>  1437251 
>   
> ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/configuration/falcon-atlas-application.properties.xml
>  PRE-CREATION 
>   
> ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/metainfo.xml
>  602144b 
>   
> ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/package/scripts/falcon.py
>  c2f1f53 
>   
> ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/package/scripts/params_linux.py
>  fc9d8b9 
>   
> ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/package/scripts/setup_atlas_falcon.py
>  1dce515 
>   
> ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-atlas-application.properties.xml
>  PRE-CREATION 
>   
> ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/metainfo.xml 
> 273133a 
>   
> ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hcat.py
>  839ab04 
>   
> ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py
>  ea2af62 
>   
> ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py
>  17f7380 
>   
> ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/setup_atlas_hive.py
>  d1bd8ea 
>   
> ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/webhcat.py
> 

Re: Review Request 49665: authorizer.class.name not being set on secure kafka clusters

2016-07-06 Thread Alejandro Fernandez

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49665/#review141039
---


Ship it!




Ship It!

- Alejandro Fernandez


On July 6, 2016, 12:41 p.m., Robert Levas wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/49665/
> ---
> 
> (Updated July 6, 2016, 12:41 p.m.)
> 
> 
> Review request for Ambari, Alejandro Fernandez, Srimanth Gunturi, Tim Thorpe, 
> and Vitalyi Brodetskyi.
> 
> 
> Bugs: AMBARI-17479
> https://issues.apache.org/jira/browse/AMBARI-17479
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> The `kafka-broker/authorizer.class.name` property is not being set properly 
> when Kerberos is enabled.
> 
> The following logic should be followed:
> ```
> if Kerberos is enabled
>   if ranger-kafka-plugin-properties/ranger-kafka-plugin-enabled == yes
> set authorizer.class.name to 
> "org.apache.ranger.authorization.kafka.authorizer.RangerKafkaAuthorizer"
>   else
> set authorizer.class.name to "kafka.security.auth.SimpleAclAuthorizer"
> else
>   remove authorizer.class.name
> ```
> 
> This should be updated in the stack advisor code. 
> 
> While at it, configurations from Kafka's `kerberos.json` file should be moved 
> to the stack advisor to help ensure properties are set in the same place 
> to help with code maintenance and consistency.
> 
> 
> Diffs
> -
> 
>   ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py 
> 06f7cfe 
>   ambari-server/src/main/resources/stacks/HDP/2.3/services/stack_advisor.py 
> 879008b 
>   ambari-server/src/test/python/stacks/2.3/common/test_stack_advisor.py 
> 2944f6f 
> 
> Diff: https://reviews.apache.org/r/49665/diff/
> 
> 
> Testing
> ---
> 
> Manually tested
> 
> #Jenkins test results: 
> 
> ```
> {color:green}+1 overall{color}.  Here are the results of testing the latest 
> attachment 
>   
> http://issues.apache.org/jira/secure/attachment/12816328/AMBARI-17479_trunk_01.patch
>   against trunk revision .
> 
> {color:green}+1 @author{color}.  The patch does not contain any @author 
> tags.
> 
> {color:green}+1 tests included{color}.  The patch appears to include 1 
> new or modified test files.
> 
> {color:green}+1 javac{color}.  The applied patch does not increase the 
> total number of javac compiler warnings.
> 
> {color:green}+1 release audit{color}.  The applied patch does not 
> increase the total number of release audit warnings.
> 
> {color:green}+1 core tests{color}.  The patch passed unit tests in 
> ambari-server.
> ```
> 
> 
> Thanks,
> 
> Robert Levas
> 
>
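
The decision table quoted in the description can be sketched as a small Python helper. The function name and boolean parameters are illustrative, not Ambari's actual stack_advisor API:

```python
# Sketch of the recommendation logic for kafka-broker/authorizer.class.name;
# names and signature are hypothetical, mirroring the quoted pseudocode.
RANGER_AUTHORIZER = (
    "org.apache.ranger.authorization.kafka.authorizer.RangerKafkaAuthorizer"
)
SIMPLE_AUTHORIZER = "kafka.security.auth.SimpleAclAuthorizer"

def recommend_authorizer(kerberos_enabled, ranger_plugin_enabled):
    """Return the value for authorizer.class.name, or None to remove it."""
    if not kerberos_enabled:
        return None  # no Kerberos: drop authorizer.class.name entirely
    if ranger_plugin_enabled:
        return RANGER_AUTHORIZER
    return SIMPLE_AUTHORIZER
```

Centralizing the branch in one function is the same maintainability argument the review makes for moving kerberos.json settings into the stack advisor.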



Re: Review Request 49590: While changing NN, DN directories from UI, proper warning should be present for invalid values

2016-07-06 Thread Alejandro Fernandez

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49590/#review141037
---


Ship it!




Ship It!

- Alejandro Fernandez


On July 6, 2016, 1:43 p.m., Andrew Onischuk wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/49590/
> ---
> 
> (Updated July 6, 2016, 1:43 p.m.)
> 
> 
> Review request for Ambari, Alejandro Fernandez, Dmytro Sen, and Sid Wagle.
> 
> 
> Bugs: AMBARI-17550
> https://issues.apache.org/jira/browse/AMBARI-17550
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> While changing NN and DN directories from ambari, for example:  
> changing dn directories from **/grid/0/hadoop/hdfs/data** to **/grid/0/hadoop/
> hdfs/data,/grid/0/hadoop/hdfs/data1,/grid/0/hadoop/hdfs/data2**
> 
> The values are changed without being validated  
> This leads to datanodes start failing
> 
> 
> 
> 
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py",
>  line 174, in <module>
> DataNode().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 280, in execute
> method(env)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 709, in restart
> self.start(env, upgrade_type=upgrade_type)
>   File 
> "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py",
>  line 60, in start
> self.configure(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py",
>  line 55, in configure
> datanode(action="configure")
>   File 
> "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, 
> in thunk
> return fn(*args, **kwargs)
>   File 
> "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_datanode.py",
>  line 53, in datanode
> data_dir_to_mount_file_content = handle_mounted_dirs(create_dirs, 
> params.dfs_data_dirs, params.data_dir_mount_file, params)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/mounted_dirs_helper.py",
>  line 158, in handle_mounted_dirs
> raise Fail(message + " . Please turn off 
> cluster-env/one_dir_per_partition or handle the situation manually.")
> resource_management.core.exceptions.Fail: Trying to create another 
> directory on the following mount: /grid/0 . Please turn off 
> cluster-env/one_dir_per_partition or handle the situation manually.
> 
> 
> The test fails because it sets an invalid value, and the directory is not 
> created.  
> A warning message explaining why the new directory name is invalid would be 
> useful.
> 
> 
> Diffs
> -
> 
>   ambari-agent/src/test/python/resource_management/TestDatanodeHelper.py 
> c33a295 
>   ambari-agent/src/test/python/resource_management/TestFileSystem.py 925758c 
>   
> ambari-common/src/main/python/resource_management/libraries/functions/file_system.py
>  2a859ed 
>   
> ambari-common/src/main/python/resource_management/libraries/functions/mounted_dirs_helper.py
>  9574ce5 
>   ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py 
> 06f7cfe 
>   ambari-server/src/test/python/stacks/2.0.6/common/test_stack_advisor.py 
> 7a092fc 
>   ambari-server/src/test/python/stacks/2.2/common/test_stack_advisor.py 
> 08b9554 
>   ambari-server/src/test/python/stacks/2.3/common/test_stack_advisor.py 
> 4dfb8af 
> 
> Diff: https://reviews.apache.org/r/49590/diff/
> 
> 
> Testing
> ---
> 
> mvn clean test
> 
> 
> Thanks,
> 
> Andrew Onischuk
> 
>
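
The failure quoted above comes from a check that refuses to create two directories on one mount when one_dir_per_partition is on. A simplified sketch of that kind of check (hypothetical helper names; Ambari's mounted_dirs_helper is more involved):

```python
# Group each data dir by the mount point it lives on and flag mounts
# that would hold more than one directory. Simplified illustration only.
import os

def mount_point_for(path):
    """Walk up from path until an actual mount point is reached."""
    path = os.path.abspath(path)
    while not os.path.ismount(path):
        path = os.path.dirname(path)
    return path

def collisions(data_dirs):
    """Return {mount: [dirs]} for mounts hosting more than one data dir."""
    grouped = {}
    for d in data_dirs:
        grouped.setdefault(mount_point_for(d), []).append(d)
    return {m: ds for m, ds in grouped.items() if len(ds) > 1}
```

Validating the proposed directory list this way before writing configs would let the UI warn up front instead of failing at DataNode start.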



Review Request 49711: AMBARI-17593: Ambari server backup error - failure if backup size exceeds 4GB

2016-07-06 Thread Nahappan Somasundaram

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49711/
---

Review request for Ambari, Ajit Kumar and Sumit Mohanty.


Bugs: AMBARI-17593
https://issues.apache.org/jira/browse/AMBARI-17593


Repository: ambari


Description
---

AMBARI-17593: Ambari server backup error - failure if backup size exceeds 4GB

** Issue: **
If the size of the archive exceeds 4GB, zipping fails

** Fix: **
Specify allowZip64=True when calling ZipFile
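
A minimal sketch of the fix, assuming a plain-Python backup routine (the helper and paths below are illustrative, not BackupRestore.py's actual code):

```python
# Pass allowZip64=True so archives and entries larger than 4GB use
# ZIP64 extensions instead of raising LargeZipFile.
import zipfile

def zip_files(paths, archive_path):
    zf = zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED,
                         allowZip64=True)
    try:
        for path in paths:
            zf.write(path)  # store each file in the archive
    finally:
        zf.close()
```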


Diffs
-

  ambari-server/src/main/python/ambari_server/BackupRestore.py 
2c00be9915a0b81cbb6be38f7c09fd58c5ca3d12 

Diff: https://reviews.apache.org/r/49711/diff/


Testing
---

**1. mvn clean install **

[INFO] 
[INFO] Reactor Summary:
[INFO]
[INFO] Ambari Main ... SUCCESS [11.210s]
[INFO] Apache Ambari Project POM . SUCCESS [0.039s]
[INFO] Ambari Web  SUCCESS [33.914s]
[INFO] Ambari Views .. SUCCESS [1.327s]
[INFO] Ambari Admin View . SUCCESS [8.895s]
[INFO] ambari-metrics  SUCCESS [0.713s]
[INFO] Ambari Metrics Common . SUCCESS [3.857s]
[INFO] Ambari Metrics Hadoop Sink  SUCCESS [2.111s]
[INFO] Ambari Metrics Flume Sink . SUCCESS [1.232s]
[INFO] Ambari Metrics Kafka Sink . SUCCESS [1.216s]
[INFO] Ambari Metrics Storm Sink . SUCCESS [24.618s]
[INFO] Ambari Metrics Storm Sink (Legacy)  SUCCESS [1.642s]
[INFO] Ambari Metrics Collector .. SUCCESS [11.834s]
[INFO] Ambari Metrics Monitor  SUCCESS [3.307s]
[INFO] Ambari Metrics Grafana  SUCCESS [0.990s]
[INFO] Ambari Metrics Assembly ... SUCCESS [1:30.657s]
[INFO] Ambari Server . SUCCESS [2:58.533s]
[INFO] Ambari Functional Tests ... SUCCESS [2.605s]
[INFO] Ambari Agent .. SUCCESS [27.833s]
[INFO] Ambari Client . SUCCESS [0.058s]
[INFO] Ambari Python Client .. SUCCESS [0.948s]
[INFO] Ambari Groovy Client .. SUCCESS [2.109s]
[INFO] Ambari Shell .. SUCCESS [0.037s]
[INFO] Ambari Python Shell ... SUCCESS [0.667s]
[INFO] Ambari Groovy Shell ... SUCCESS [0.944s]
[INFO] ambari-logsearch .. SUCCESS [0.265s]
[INFO] Ambari Logsearch Appender . SUCCESS [0.427s]
[INFO] Ambari Logsearch Solr Client .. SUCCESS [1.090s]
[INFO] Ambari Logsearch Portal ... SUCCESS [6.379s]
[INFO] Ambari Logsearch Log Feeder ... SUCCESS [14.708s]
[INFO] Ambari Logsearch Assembly . SUCCESS [0.087s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 7:16.379s
[INFO] Finished at: Wed Jul 06 09:42:57 PDT 2016
[INFO] Final Memory: 335M/1134M
[INFO] 

** 2. mvn test -DskipSurefireTests **

--
Ran 261 tests in 6.676s

OK
--
Total run:1014
Total errors:0
Total failures:0
OK
INFO: AMBARI_SERVER_LIB is not set, using default /usr/lib/ambari-server
INFO: Return code from stack upgrade command, retcode = 0
StackAdvisor implementation for stack HDP1, version 2.0.6 was not found
Returning DefaultStackAdvisor implementation
StackAdvisor implementation for stack XYZ, version 1.0.0 was loaded
StackAdvisor implementation for stack XYZ, version 1.0.1 was loaded
Returning XYZ101StackAdvisor implementation
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 1:07.864s
[INFO] Finished at: Wed Jul 06 09:48:34 PDT 2016
[INFO] Final Memory: 55M/918M
[INFO] 

**3. Manual tests **

Added a few SQL dump files with sizes totalling over 4GB to /etc/ folder and 
verified that ambari-server backup succeeded.


Thanks,

Nahappan Somasundaram



Re: Review Request 49385: Hive and Oozie db displayed incorrectly on the installer review page

2016-07-06 Thread Alexandr Antonenko


> On July 6, 2016, 1:52 p.m., Alexandr Antonenko wrote:
> > Ship It!
> 
> Sangeeta Ravindran wrote:
> Thanks Alexandr. Can you help push the fix.

done


- Alexandr


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49385/#review140987
---


On June 30, 2016, 8:57 p.m., Sangeeta Ravindran wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/49385/
> ---
> 
> (Updated June 30, 2016, 8:57 p.m.)
> 
> 
> Review request for Ambari, Alexandr Antonenko and Andrii Tkach.
> 
> 
> Bugs: AMBARI-17469
> https://issues.apache.org/jira/browse/AMBARI-17469
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> During Hive install, in the review page, the default value of 
> hive_admin_database (MySQL) is concatenated to the selected database type, no 
> matter which database is selected.  For example, if Existing PostgreSQL Database 
> is selected as the Hive database, the review page displays the following for 
> Hive database: 
> 
> Database : MySQL (Existing PostgreSQL Database)
>  
> In case of Oozie, because there is no oozie_admin_database property, a blank 
> is displayed for database although an existing database was selected
>  
> Database :  
>  
> This seems to be because of the logic in the method that determines the 
> database value to be displayed.
>  
> var dbFull = serviceConfigProperties.findProperty('name', 
> serviceName.toLowerCase() + '_database'),
>  db = serviceConfigProperties.findProperty('name', 
> serviceName.toLowerCase() + '_ambari_database');
> return db && dbFull ? db.value + ' (' + dbFull.value + ')' : '';
> 
> The value of hive_ambari_database returns MySQL and hence in case of Hive, 
> MySQL always gets appended.
>  
> There is no oozie_ambari_database property defined. Hence db is undefined and 
> an empty string is returned instead of the actual database type selected.
>  
> Fix involves changing the logic to not include the value of 
> serviceName_ambari_database since it will not have the right value unless the 
> default value is selected for Hive/Oozie database.
> 
> 
> Diffs
> -
> 
>   ambari-web/app/controllers/wizard/step8_controller.js 3971cf5 
>   ambari-web/test/controllers/wizard/step8_test.js 74e042b 
> 
> Diff: https://reviews.apache.org/r/49385/diff/
> 
> 
> Testing
> ---
> 
> Manual testing.
> Added a test case to verify the value displayed for database.
> Ran mvn test
> 
> 28979 tests complete (48 seconds)
> 154 tests pending
> 
> 
> File Attachments
> 
> 
> Updated Patch with review comments incorporated
>   
> https://reviews.apache.org/media/uploaded/files/2016/06/30/e5fa58fa-3940-4c82-ad92-c8070c824528__AMBARI-17469.patch
> 
> 
> Thanks,
> 
> Sangeeta Ravindran
> 
>



Re: Review Request 49637: Zeppelin: service install failure on Suse due to bash error

2016-07-06 Thread Rohit Choudhary

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49637/#review141016
---


Ship it!




Ship It!

- Rohit Choudhary


On July 5, 2016, 2:14 p.m., Renjith Kamath wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/49637/
> ---
> 
> (Updated July 5, 2016, 2:14 p.m.)
> 
> 
> Review request for Ambari, Alejandro Fernandez, DIPAYAN BHOWMICK, Gaurav 
> Nagar, Pallav Kulshreshtha, Prabhjyot Singh, Rohit Choudhary, and Sumit 
> Mohanty.
> 
> 
> Bugs: AMBARI-17558
> https://issues.apache.org/jira/browse/AMBARI-17558
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> Fix the script failure on Suse with the following error due to outdated bash
> /setup_snapshot.sh: line 44: ${SETUP_VIEW,,}: bad substitution
> 
> 
> Diffs
> -
> 
>   
> ambari-server/src/main/resources/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/setup_snapshot.sh
>  8612d64 
> 
> Diff: https://reviews.apache.org/r/49637/diff/
> 
> 
> Testing
> ---
> 
> Manually tested
> 
> 
> Thanks,
> 
> Renjith Kamath
> 
>



Re: Review Request 49385: Hive and Oozie db displayed incorrectly on the installer review page

2016-07-06 Thread Sangeeta Ravindran


> On July 5, 2016, 4:11 p.m., Andrii Tkach wrote:
> > Ship It!

Thanks Andrii.


- Sangeeta


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49385/#review140804
---


On June 30, 2016, 8:57 p.m., Sangeeta Ravindran wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/49385/
> ---
> 
> (Updated June 30, 2016, 8:57 p.m.)
> 
> 
> Review request for Ambari, Alexandr Antonenko and Andrii Tkach.
> 
> 
> Bugs: AMBARI-17469
> https://issues.apache.org/jira/browse/AMBARI-17469
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> During Hive install, in the review page, the default value of 
> hive_admin_database (MySQL) is concatenated to the selected database type, no 
> matter which database is selected.  For example, if Existing PostgreSQL Database 
> is selected as the Hive database, the review page displays the following for 
> Hive database: 
> 
> Database : MySQL (Existing PostgreSQL Database)
>  
> In case of Oozie, because there is no oozie_admin_database property, a blank 
> is displayed for database although an existing database was selected
>  
> Database :  
>  
> This seems to be because of the logic in the method that determines the 
> database value to be displayed.
>  
> var dbFull = serviceConfigProperties.findProperty('name', 
> serviceName.toLowerCase() + '_database'),
>  db = serviceConfigProperties.findProperty('name', 
> serviceName.toLowerCase() + '_ambari_database');
> return db && dbFull ? db.value + ' (' + dbFull.value + ')' : '';
> 
> The value of hive_ambari_database returns MySQL and hence in case of Hive, 
> MySQL always gets appended.
>  
> There is no oozie_ambari_database property defined. Hence db is undefined and 
> an empty string is returned instead of the actual database type selected.
>  
> Fix involves changing the logic to not include the value of 
> serviceName_ambari_database since it will not have the right value unless the 
> default value is selected for Hive/Oozie database.
> 
> 
> Diffs
> -
> 
>   ambari-web/app/controllers/wizard/step8_controller.js 3971cf5 
>   ambari-web/test/controllers/wizard/step8_test.js 74e042b 
> 
> Diff: https://reviews.apache.org/r/49385/diff/
> 
> 
> Testing
> ---
> 
> Manual testing.
> Added a test case to verify the value displayed for database.
> Ran mvn test
> 
> 28979 tests complete (48 seconds)
> 154 tests pending
> 
> 
> File Attachments
> 
> 
> Updated Patch with review comments incorporated
>   
> https://reviews.apache.org/media/uploaded/files/2016/06/30/e5fa58fa-3940-4c82-ad92-c8070c824528__AMBARI-17469.patch
> 
> 
> Thanks,
> 
> Sangeeta Ravindran
> 
>



Re: Review Request 49590: While changing NN, DN directories from UI, proper warning should be present for invalid values

2016-07-06 Thread Dmytro Sen

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49590/#review140993
---


Ship it!




Ship It!

- Dmytro Sen


On July 6, 2016, 1:43 p.m., Andrew Onischuk wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/49590/
> ---
> 
> (Updated July 6, 2016, 1:43 p.m.)
> 
> 
> Review request for Ambari, Alejandro Fernandez, Dmytro Sen, and Sid Wagle.
> 
> 
> Bugs: AMBARI-17550
> https://issues.apache.org/jira/browse/AMBARI-17550
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> While changing NN and DN directories from ambari, for example:  
> changing dn directories from **/grid/0/hadoop/hdfs/data** to **/grid/0/hadoop/
> hdfs/data,/grid/0/hadoop/hdfs/data1,/grid/0/hadoop/hdfs/data2**
> 
> The values are changed without being validated  
> This leads to datanodes start failing
> 
> 
> 
> 
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py",
>  line 174, in <module>
> DataNode().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 280, in execute
> method(env)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 709, in restart
> self.start(env, upgrade_type=upgrade_type)
>   File 
> "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py",
>  line 60, in start
> self.configure(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py",
>  line 55, in configure
> datanode(action="configure")
>   File 
> "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, 
> in thunk
> return fn(*args, **kwargs)
>   File 
> "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_datanode.py",
>  line 53, in datanode
> data_dir_to_mount_file_content = handle_mounted_dirs(create_dirs, 
> params.dfs_data_dirs, params.data_dir_mount_file, params)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/mounted_dirs_helper.py",
>  line 158, in handle_mounted_dirs
> raise Fail(message + " . Please turn off 
> cluster-env/one_dir_per_partition or handle the situation manually.")
> resource_management.core.exceptions.Fail: Trying to create another 
> directory on the following mount: /grid/0 . Please turn off 
> cluster-env/one_dir_per_partition or handle the situation manually.
> 
> 
> The test fails because it sets an invalid value, and the directory is not 
> created.  
> A warning message explaining why the new directory name is invalid would be 
> useful.
> 
> 
> Diffs
> -
> 
>   ambari-agent/src/test/python/resource_management/TestDatanodeHelper.py 
> c33a295 
>   ambari-agent/src/test/python/resource_management/TestFileSystem.py 925758c 
>   
> ambari-common/src/main/python/resource_management/libraries/functions/file_system.py
>  2a859ed 
>   
> ambari-common/src/main/python/resource_management/libraries/functions/mounted_dirs_helper.py
>  9574ce5 
>   ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py 
> 06f7cfe 
>   ambari-server/src/test/python/stacks/2.0.6/common/test_stack_advisor.py 
> 7a092fc 
>   ambari-server/src/test/python/stacks/2.2/common/test_stack_advisor.py 
> 08b9554 
>   ambari-server/src/test/python/stacks/2.3/common/test_stack_advisor.py 
> 4dfb8af 
> 
> Diff: https://reviews.apache.org/r/49590/diff/
> 
> 
> Testing
> ---
> 
> mvn clean test
> 
> 
> Thanks,
> 
> Andrew Onischuk
> 
>



Re: Review Request 48309: AMBARI-17047: Firewall check returns WARNING even if iptables and firewalld are stopped on CentOS7

2016-07-06 Thread Andrew Onischuk


> On June 13, 2016, 1:53 p.m., Andrew Onischuk wrote:
> > Ship It!
> 
> Masahiro Tanaka wrote:
> Thank you!
> 
> Masahiro Tanaka wrote:
> Could you commit it?
> 
> Andrew Onischuk wrote:
> Done. Please close the reviewboard now.

reverted the patch, apache jira


- Andrew


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/48309/#review137293
---


On June 7, 2016, 11:11 a.m., Masahiro Tanaka wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/48309/
> ---
> 
> (Updated June 7, 2016, 11:11 a.m.)
> 
> 
> Review request for Ambari, Andrew Onischuk, Dmytro Sen, Florian Barca, and 
> Yusaku Sako.
> 
> 
> Bugs: AMBARI-17047
> https://issues.apache.org/jira/browse/AMBARI-17047
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> In firewall.py, `systemctl is-active iptables || systemctl is-active 
> firewalld` is passed to the `run_in_shell` function, which splits the cmd 
> string using `shlex.split`.
> 
> The `run_in_shell` function finally calls `subprocess.Popen` with 
> `shell=True`, so the cmd string is evaluated like `Popen(['/bin/sh', '-c', 
> 'systemctl', 'is-active', 'iptables', '||', 'systemctl', 'is-active', 
> 'firewalld'])`. This doesn't return values as expected, because everything 
> after the command string `systemctl` is bound as a positional sh argument 
> instead of being parsed by the shell.
> 
> `systemctl is-active` can take multiple arguments, so we can use that 
> instead.
> 
> 
> Diffs
> -
> 
>   ambari-common/src/main/python/ambari_commons/firewall.py 72e6d26 
> 
> Diff: https://reviews.apache.org/r/48309/diff/
> 
> 
> Testing
> ---
> 
> mvn clean test & manual test
> 
> 
> Thanks,
> 
> Masahiro Tanaka
> 
>
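
The quoting pitfall described above is easy to reproduce on a POSIX system; here `echo` stands in for `systemctl`:

```python
# With shell=True and a pre-split list, only the first element is the
# shell command string; the rest become positional parameters of
# /bin/sh, so the '||' branch is silently lost.
import shlex
import subprocess

split_args = shlex.split("echo one || echo two")
p = subprocess.Popen(split_args, shell=True, stdout=subprocess.PIPE)
out_split = p.communicate()[0]   # echo runs with no arguments at all

# Passing the whole string lets the shell parse the compound command:
p = subprocess.Popen("echo one || echo two", shell=True,
                     stdout=subprocess.PIPE)
out_whole = p.communicate()[0]
```

The first call prints only a newline; the second prints "one", matching the behavior the description analyzes.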



Re: Review Request 48309: AMBARI-17047: Firewall check returns WARNING even if iptables and firewalld are stopped on CentOS7

2016-07-06 Thread Andrew Onischuk


> On June 13, 2016, 1:53 p.m., Andrew Onischuk wrote:
> > Ship It!
> 
> Masahiro Tanaka wrote:
> Thank you!
> 
> Masahiro Tanaka wrote:
> Could you commit it?

Done. Please close the reviewboard now.


- Andrew


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/48309/#review137293
---


On June 7, 2016, 11:11 a.m., Masahiro Tanaka wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/48309/
> ---
> 
> (Updated June 7, 2016, 11:11 a.m.)
> 
> 
> Review request for Ambari, Andrew Onischuk, Dmytro Sen, Florian Barca, and 
> Yusaku Sako.
> 
> 
> Bugs: AMBARI-17047
> https://issues.apache.org/jira/browse/AMBARI-17047
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> In firewall.py, `systemctl is-active iptables || systemctl is-active 
> firewalld` is passed to the `run_in_shell` function, which splits the cmd 
> string using `shlex.split`.
> 
> The `run_in_shell` function finally calls `subprocess.Popen` with 
> `shell=True`, so the cmd string is evaluated like `Popen(['/bin/sh', '-c', 
> 'systemctl', 'is-active', 'iptables', '||', 'systemctl', 'is-active', 
> 'firewalld'])`. This doesn't return values as expected, because everything 
> after the command string `systemctl` is bound as a positional sh argument 
> instead of being parsed by the shell.
> 
> `systemctl is-active` can take multiple arguments, so we can use that 
> instead.
> 
> 
> Diffs
> -
> 
>   ambari-common/src/main/python/ambari_commons/firewall.py 72e6d26 
> 
> Diff: https://reviews.apache.org/r/48309/diff/
> 
> 
> Testing
> ---
> 
> mvn clean test & manual test
> 
> 
> Thanks,
> 
> Masahiro Tanaka
> 
>



Re: Review Request 49385: Hive and Oozie db displayed incorrectly on the installer review page

2016-07-06 Thread Alexandr Antonenko

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49385/#review140987
---


Ship it!




Ship It!

- Alexandr Antonenko


On June 30, 2016, 8:57 p.m., Sangeeta Ravindran wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/49385/
> ---
> 
> (Updated June 30, 2016, 8:57 p.m.)
> 
> 
> Review request for Ambari, Alexandr Antonenko and Andrii Tkach.
> 
> 
> Bugs: AMBARI-17469
> https://issues.apache.org/jira/browse/AMBARI-17469
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> During Hive install, on the review page, the default value of 
> hive_ambari_database (MySQL) is concatenated to the selected database type, no 
> matter which database is selected. For example, if Existing PostgreSQL Database 
> is selected as the Hive database, the review page displays the following for 
> the Hive database: 
> 
> Database : MySQL (Existing PostgreSQL Database)
>  
> In case of Oozie, because there is no oozie_ambari_database property, a blank 
> is displayed for the database although an existing database was selected:
>  
> Database :  
>  
> This seems to be because of the logic in the method that determines the 
> database value to be displayed.
>  
> var dbFull = serviceConfigProperties.findProperty('name',
>       serviceName.toLowerCase() + '_database'),
>     db = serviceConfigProperties.findProperty('name',
>       serviceName.toLowerCase() + '_ambari_database');
> return db && dbFull ? db.value + ' (' + dbFull.value + ')' : '';
> 
> The value of hive_ambari_database is always MySQL, and hence in the case of 
> Hive, MySQL is always displayed.
>  
> There is no oozie_ambari_database property defined. Hence db is undefined and 
> an empty string is returned instead of the actual database type selected.
>  
> The fix involves changing the logic to not include the value of 
> serviceName_ambari_database, since it will not have the right value unless the 
> default database is selected for Hive/Oozie.
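The before/after behavior can be sketched in Python (a hypothetical rendering of the JS logic above; the actual fix lives in step8_controller.js):

```python
# Hypothetical Python rendering of the display logic, before and after the fix.
def displayed_db_before(service, props):
    db_full = props.get(service.lower() + "_database")
    db = props.get(service.lower() + "_ambari_database")
    # Old logic: prefixes the default admin database, or yields "" when the
    # *_ambari_database property does not exist (the Oozie case).
    return "%s (%s)" % (db, db_full) if db and db_full else ""

def displayed_db_after(service, props):
    # Fixed logic: show only the database type the user actually selected.
    db_full = props.get(service.lower() + "_database")
    return db_full or ""

props = {"hive_database": "Existing PostgreSQL Database",
         "hive_ambari_database": "MySQL"}
print(displayed_db_before("HIVE", props))   # MySQL (Existing PostgreSQL Database)
print(displayed_db_after("HIVE", props))    # Existing PostgreSQL Database
print(displayed_db_before("OOZIE", props))  # "" (no oozie_ambari_database)
```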
> 
> 
> Diffs
> -
> 
>   ambari-web/app/controllers/wizard/step8_controller.js 3971cf5 
>   ambari-web/test/controllers/wizard/step8_test.js 74e042b 
> 
> Diff: https://reviews.apache.org/r/49385/diff/
> 
> 
> Testing
> ---
> 
> Manual testing.
> Added a test case to verify the value displayed for database.
> Ran mvn test
> 
> 28979 tests complete (48 seconds)
> 154 tests pending
> 
> 
> File Attachments
> 
> 
> Updated Patch with review comments incorporated
>   
> https://reviews.apache.org/media/uploaded/files/2016/06/30/e5fa58fa-3940-4c82-ad92-c8070c824528__AMBARI-17469.patch
> 
> 
> Thanks,
> 
> Sangeeta Ravindran
> 
>



Re: Review Request 49590: While changing NN, DN directories from UI, proper warning should be present for invalid values

2016-07-06 Thread Andrew Onischuk

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49590/
---

(Updated July 6, 2016, 1:43 p.m.)


Review request for Ambari, Alejandro Fernandez, Dmytro Sen, and Sid Wagle.


Bugs: AMBARI-17550
https://issues.apache.org/jira/browse/AMBARI-17550


Repository: ambari


Description
---

While changing NN and DN directories from Ambari, for example:  
changing dn directories from **/grid/0/hadoop/hdfs/data** to **/grid/0/hadoop/
hdfs/data,/grid/0/hadoop/hdfs/data1,/grid/0/hadoop/hdfs/data2**

The values are changed without being validated.  
This leads to DataNodes failing to start:




Traceback (most recent call last):
  File 
"/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py",
 line 174, in <module>
DataNode().execute()
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 280, in execute
method(env)
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 709, in restart
self.start(env, upgrade_type=upgrade_type)
  File 
"/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py",
 line 60, in start
self.configure(env)
  File 
"/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py",
 line 55, in configure
datanode(action="configure")
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", 
line 89, in thunk
return fn(*args, **kwargs)
  File 
"/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_datanode.py",
 line 53, in datanode
data_dir_to_mount_file_content = handle_mounted_dirs(create_dirs, 
params.dfs_data_dirs, params.data_dir_mount_file, params)
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/functions/mounted_dirs_helper.py",
 line 158, in handle_mounted_dirs
raise Fail(message + " . Please turn off 
cluster-env/one_dir_per_partition or handle the situation manually.")
resource_management.core.exceptions.Fail: Trying to create another 
directory on the following mount: /grid/0 . Please turn off 
cluster-env/one_dir_per_partition or handle the situation manually.


The test fails because it set an invalid value and the directory was not created.  
A warning message explaining why the new directory value is invalid would be
useful.
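The kind of check behind cluster-env/one_dir_per_partition can be sketched as follows (illustrative only, not the mounted_dirs_helper code; `mount_of` is a stand-in for the real mount-point lookup):

```python
from collections import defaultdict

# Flag any mount point that would host more than one data directory.
def dirs_per_mount(data_dirs, mount_of):
    by_mount = defaultdict(list)
    for d in data_dirs:
        by_mount[mount_of(d)].append(d)
    return {m: ds for m, ds in by_mount.items() if len(ds) > 1}

# Fake mount-point resolver for the example.
mount_of = lambda d: "/grid/0" if d.startswith("/grid/0") else "/"
conflicts = dirs_per_mount(
    ["/grid/0/hadoop/hdfs/data", "/grid/0/hadoop/hdfs/data1"], mount_of)
print(conflicts)  # both dirs land on the /grid/0 mount
```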


Diffs (updated)
-

  ambari-agent/src/test/python/resource_management/TestDatanodeHelper.py 
c33a295 
  ambari-agent/src/test/python/resource_management/TestFileSystem.py 925758c 
  
ambari-common/src/main/python/resource_management/libraries/functions/file_system.py
 2a859ed 
  
ambari-common/src/main/python/resource_management/libraries/functions/mounted_dirs_helper.py
 9574ce5 
  ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py 
06f7cfe 
  ambari-server/src/test/python/stacks/2.0.6/common/test_stack_advisor.py 
7a092fc 
  ambari-server/src/test/python/stacks/2.2/common/test_stack_advisor.py 08b9554 
  ambari-server/src/test/python/stacks/2.3/common/test_stack_advisor.py 4dfb8af 

Diff: https://reviews.apache.org/r/49590/diff/


Testing
---

mvn clean test


Thanks,

Andrew Onischuk



Re: Review Request 49676: Add atlas-application config sections to all services that run Atlas hook, e.g., Hive, Falcon, Storm, Sqoop

2016-07-06 Thread Robert Levas

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49676/#review140986
---


Ship it!





ambari-common/src/main/python/resource_management/libraries/functions/setup_atlas_hook.py
 (line 34)


Seems like this should be made _public_ so that it can be reused.



ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py
 (lines 511 - 512)


You can reuse `has_atlas_in_cluster` from `setup_atlas_hook.py` (if made 
_public_)



ambari-server/src/main/resources/common-services/STORM/0.9.1/package/scripts/params_linux.py
 (lines 217 - 218)


You can reuse `has_atlas_in_cluster` from `setup_atlas_hook.py` (if made 
_public_)


- Robert Levas


On July 5, 2016, 9:13 p.m., Alejandro Fernandez wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/49676/
> ---
> 
> (Updated July 5, 2016, 9:13 p.m.)
> 
> 
> Review request for Ambari, Madhan Neethiraj, Robert Levas, Sumit Mohanty, 
> Swapan Shridhar, and Suma Shivaprasad.
> 
> 
> Bugs: AMBARI-17573
> https://issues.apache.org/jira/browse/AMBARI-17573
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> Currently, Atlas hooks that run in Hive, Falcon, Storm, and Sqoop processes 
> reference the atlas-application.properties file from the Atlas server config 
> location - /etc/atlas/conf/atlas-application.properties.
> Not all properties in /etc/atlas/conf/atlas-application.properties are 
> required by the hooks, and some of them are sensitive enough that they should 
> not be exposed to hooks/clients.
> 
> To address this concern:
> 1. atlas-application.properties should be added as a config section for each 
> of the host components that run the Atlas hook - Hive, Storm, Falcon, Sqoop
> 2. These new config sections will only include properties that are required 
> by the respective hooks
> 3. During initial deployment, Ambari will initialize these properties with 
> values in Atlas server configuration.
> For each one of those services, create a config type called 
> ${service}-atlas-application.properties that will be saved to 
> /etc/${service}/conf/application.properties
> 
> These are the default values,
> 
> Falcon
> atlas.hook.falcon.synchronous=false
> atlas.hook.falcon.numRetries=3
> atlas.hook.falcon.minThreads=5
> atlas.hook.falcon.maxThreads=5
> atlas.hook.falcon.keepAliveTime=10
> atlas.hook.falcon.queueSize
> 
> Storm
> atlas.hook.storm.numRetries=3
> 
> Hive
> atlas.hook.hive.synchronous=false
> atlas.hook.hive.numRetries=3
> atlas.hook.hive.minThreads=5
> atlas.hook.hive.maxThreads=5
> atlas.hook.hive.keepAliveTime=10
> atlas.hook.hive.queueSize=1
> 
> Common for all hooks
> atlas.kafka.zookeeper.connect=
> atlas.kafka.bootstrap.servers=
> atlas.kafka.zookeeper.session.timeout.ms=400
> atlas.kafka.zookeeper.connection.timeout.ms=200
> atlas.kafka.zookeeper.sync.time.ms=20
> atlas.kafka.hook.group.id=atlas
> atlas.notification.create.topics=true
> atlas.notification.replicas=1
> atlas.notification.topics=ATLAS_HOOK,ATLAS_ENTITIES
> atlas.notification.kafka.service.principal=kafka/_h...@example.com
> atlas.notification.kafka.keytab.location=/etc/security/keytabs/kafka.service.keytab
> atlas.jaas.KafkaClient.loginModuleName = 
> com.sun.security.auth.module.Krb5LoginModule
> atlas.jaas.KafkaClient.loginModuleControlFlag = required
> atlas.jaas.KafkaClient.option.useKeyTab = true
> atlas.jaas.KafkaClient.option.storeKey = true
> atlas.jaas.KafkaClient.option.serviceName = kafka
> atlas.jaas.KafkaClient.option.keyTab = 
> /etc/security/keytabs/atlas.service.keytab
> atlas.jaas.KafkaClient.option.principal = atlas/_h...@example.com
> atlas.cluster.name=
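The merge of hook-specific defaults with the properties common to all hooks can be sketched like this (an illustration only; the names COMMON/FALCON and the flat-dict merge are assumptions, not Ambari's actual config machinery):

```python
# Properties shared by every hook (subset of the "Common for all hooks" list).
COMMON = {"atlas.kafka.hook.group.id": "atlas",
          "atlas.notification.create.topics": "true"}

# Falcon-specific hook defaults (subset of the list above).
FALCON = {"atlas.hook.falcon.synchronous": "false",
          "atlas.hook.falcon.numRetries": "3"}

def hook_properties(service_defaults, common=COMMON):
    # Service-specific values win over the common defaults.
    merged = dict(common)
    merged.update(service_defaults)
    return "\n".join("%s=%s" % kv for kv in sorted(merged.items()))

print(hook_properties(FALCON))
```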
> 
> 
> Diffs
> -
> 
>   
> ambari-common/src/main/python/resource_management/libraries/functions/setup_atlas_hook.py
>  PRE-CREATION 
>   
> ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/configuration/application-properties.xml
>  1437251 
>   
> ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/configuration/falcon-atlas-application.properties.xml
>  PRE-CREATION 
>   
> ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/metainfo.xml
>  602144b 
>   
> ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/package/scripts/falcon.py
>  c2f1f53 
>   
> ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/package/scripts/params_linux.py
>  fc9d8b9 
>   
> ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/package/scripts/setup_atlas_falcon.py
>  1dce515 
>   
> 

Re: Review Request 49701: Log search does not show Livy logs

2016-07-06 Thread Miklos Gergely

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49701/
---

(Updated July 6, 2016, 1:28 p.m.)


Review request for Ambari, Oliver Szabo, Robert Nettleton, and Sumit Mohanty.


Changes
---

fixed bug id


Bugs: AMBARI-17581
https://issues.apache.org/jira/browse/AMBARI-17581


Repository: ambari


Description
---

Also synced the grok-patterns in logsearch with the ones in ambari-server.


Diffs
-

  ambari-logsearch/ambari-logsearch-logfeeder/src/main/resources/grok-patterns 
d25a78b 
  
ambari-server/src/main/resources/common-services/LOGSEARCH/0.5.0/package/templates/input.config-spark.json.j2
 80be6ee 

Diff: https://reviews.apache.org/r/49701/diff/


Testing
---

Tested on local cluster


Thanks,

Miklos Gergely



Re: Review Request 49590: While changing NN, DN directories from UI, proper warning should be present for invalid values

2016-07-06 Thread Andrew Onischuk


> On July 5, 2016, 7:39 p.m., Sid Wagle wrote:
> > ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py,
> >  line 1476
> > 
> >
> > Based on this impl, we would get 1 warning at a time vs getting all dir 
> > with issues. Is this pattern consistent with other validations?

We are validating the property 'dfs.datanode.data.dir'. Usually the 
stack_advisor returns one warning per property, and per-dir warnings can flood 
the UI, so I think it should be one warning with all issues described in its 
message.

Host mount configurations can differ, so basically some dirs can be valid on 
some hosts and invalid on others. The detailed message should contain something 
like:
warnings.append("Host: " + hostName + "; Mount: " + mountPoint + "; Data 
directories: " + ", ".join(dirList))
for each mount of each host. But it could be too long on clusters with ~1000 
nodes.

So currently the message displays only an affected host list.
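The aggregation described above can be sketched as a single-warning builder (hedged: variable names here are illustrative, not the real stack_advisor code):

```python
# Collapse per-host mount issues into one stack advisor warning listing only
# the affected hosts, to avoid flooding the UI on large clusters.
def build_warning(issues):
    """issues: list of (host_name, mount_point, dir_list) tuples."""
    affected_hosts = sorted({host for host, _mount, _dirs in issues})
    return ("Multiple data directories were found on a single mount "
            "on hosts: " + ", ".join(affected_hosts))

issues = [("host1.example.com", "/grid/0", ["/grid/0/data", "/grid/0/data1"]),
          ("host2.example.com", "/grid/0", ["/grid/0/data", "/grid/0/data2"])]
print(build_warning(issues))
```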


- Andrew


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49590/#review140859
---


On July 6, 2016, 12:40 p.m., Andrew Onischuk wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/49590/
> ---
> 
> (Updated July 6, 2016, 12:40 p.m.)
> 
> 
> Review request for Ambari, Alejandro Fernandez, Dmytro Sen, and Sid Wagle.
> 
> 
> Bugs: AMBARI-17550
> https://issues.apache.org/jira/browse/AMBARI-17550
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> While changing NN and DN directories from Ambari, for example:  
> changing dn directories from **/grid/0/hadoop/hdfs/data** to **/grid/0/hadoop/
> hdfs/data,/grid/0/hadoop/hdfs/data1,/grid/0/hadoop/hdfs/data2**
> 
> The values are changed without being validated.  
> This leads to DataNodes failing to start:
> 
> 
> 
> 
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py",
>  line 174, in <module>
> DataNode().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 280, in execute
> method(env)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 709, in restart
> self.start(env, upgrade_type=upgrade_type)
>   File 
> "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py",
>  line 60, in start
> self.configure(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py",
>  line 55, in configure
> datanode(action="configure")
>   File 
> "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, 
> in thunk
> return fn(*args, **kwargs)
>   File 
> "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_datanode.py",
>  line 53, in datanode
> data_dir_to_mount_file_content = handle_mounted_dirs(create_dirs, 
> params.dfs_data_dirs, params.data_dir_mount_file, params)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/mounted_dirs_helper.py",
>  line 158, in handle_mounted_dirs
> raise Fail(message + " . Please turn off 
> cluster-env/one_dir_per_partition or handle the situation manually.")
> resource_management.core.exceptions.Fail: Trying to create another 
> directory on the following mount: /grid/0 . Please turn off 
> cluster-env/one_dir_per_partition or handle the situation manually.
> 
> 
> The test fails because it set an invalid value and the directory was not 
> created.  
> A warning message explaining why the new directory value is invalid would be
> useful.
> 
> 
> Diffs
> -
> 
>   
> ambari-common/src/main/python/resource_management/libraries/functions/file_system.py
>  2a859ed 
>   
> ambari-common/src/main/python/resource_management/libraries/functions/mounted_dirs_helper.py
>  9574ce5 
>   ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py 
> 06f7cfe 
>   ambari-server/src/test/python/stacks/2.0.6/common/test_stack_advisor.py 
> 7a092fc 
>   ambari-server/src/test/python/stacks/2.2/common/test_stack_advisor.py 
> 08b9554 
>   ambari-server/src/test/python/stacks/2.3/common/test_stack_advisor.py 
> 4dfb8af 
> 
> Diff: https://reviews.apache.org/r/49590/diff/
> 
> 
> Testing
> ---
> 
> mvn clean test
> 
> 
> Thanks,
> 
> Andrew Onischuk
> 
>



Re: Review Request 49665: authorizer.class.name not being set on secure kafka clusters

2016-07-06 Thread Robert Levas

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49665/
---

(Updated July 6, 2016, 8:41 a.m.)


Review request for Ambari, Alejandro Fernandez, Srimanth Gunturi, Tim Thorpe, 
and Vitalyi Brodetskyi.


Bugs: AMBARI-17479
https://issues.apache.org/jira/browse/AMBARI-17479


Repository: ambari


Description
---

The `kafka-broker/authorizer.class.name` property is not being set properly 
when Kerberos is enabled.

The following logic should be followed:
```
if Kerberos is enabled
  if ranger-kafka-plugin-properties/ranger-kafka-plugin-enabled == yes
set authorizer.class.name to 
"org.apache.ranger.authorization.kafka.authorizer.RangerKafkaAuthorizer"
  else
set authorizer.class.name to "kafka.security.auth.SimpleAclAuthorizer"
else
  remove authorizer.class.name
```

This should be updated in the stack advisor code. 

While at it, configurations from Kafka's `kerberos.json` file should be moved 
to the stack advisor, to help ensure properties are set in the same place for 
easier code maintenance and consistency.
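The pseudocode above maps to a small stack-advisor-style helper; this is a hedged sketch under the assumption of a flat kafka-broker config dict (the function name is illustrative):

```python
RANGER_AUTHORIZER = ("org.apache.ranger.authorization.kafka.authorizer."
                     "RangerKafkaAuthorizer")
SIMPLE_AUTHORIZER = "kafka.security.auth.SimpleAclAuthorizer"

def recommend_authorizer(kerberos_enabled, ranger_plugin_enabled, kafka_broker):
    if kerberos_enabled:
        # Ranger plugin takes precedence; otherwise fall back to the
        # built-in simple ACL authorizer.
        kafka_broker["authorizer.class.name"] = (
            RANGER_AUTHORIZER if ranger_plugin_enabled else SIMPLE_AUTHORIZER)
    else:
        # Without Kerberos, the property should not be set at all.
        kafka_broker.pop("authorizer.class.name", None)
    return kafka_broker

print(recommend_authorizer(True, False, {}))
# {'authorizer.class.name': 'kafka.security.auth.SimpleAclAuthorizer'}
```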


Diffs
-

  ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py 
06f7cfe 
  ambari-server/src/main/resources/stacks/HDP/2.3/services/stack_advisor.py 
879008b 
  ambari-server/src/test/python/stacks/2.3/common/test_stack_advisor.py 2944f6f 

Diff: https://reviews.apache.org/r/49665/diff/


Testing (updated)
---

Manually tested

#Jenkins test results: 

```
{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12816328/AMBARI-17479_trunk_01.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
ambari-server.
```


Thanks,

Robert Levas



Re: Review Request 49590: While changing NN, DN directories from UI, proper warning should be present for invalid values

2016-07-06 Thread Andrew Onischuk

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49590/
---

(Updated July 6, 2016, 12:40 p.m.)


Review request for Ambari, Alejandro Fernandez, Dmytro Sen, and Sid Wagle.


Bugs: AMBARI-17550
https://issues.apache.org/jira/browse/AMBARI-17550


Repository: ambari


Description
---

> While changing NN and DN directories from Ambari, for example:  
changing dn directories from **/grid/0/hadoop/hdfs/data** to **/grid/0/hadoop/
hdfs/data,/grid/0/hadoop/hdfs/data1,/grid/0/hadoop/hdfs/data2**

> The values are changed without being validated.  
> This leads to DataNodes failing to start:




Traceback (most recent call last):
  File 
"/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py",
 line 174, in <module>
DataNode().execute()
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 280, in execute
method(env)
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 709, in restart
self.start(env, upgrade_type=upgrade_type)
  File 
"/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py",
 line 60, in start
self.configure(env)
  File 
"/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py",
 line 55, in configure
datanode(action="configure")
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", 
line 89, in thunk
return fn(*args, **kwargs)
  File 
"/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_datanode.py",
 line 53, in datanode
data_dir_to_mount_file_content = handle_mounted_dirs(create_dirs, 
params.dfs_data_dirs, params.data_dir_mount_file, params)
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/functions/mounted_dirs_helper.py",
 line 158, in handle_mounted_dirs
raise Fail(message + " . Please turn off 
cluster-env/one_dir_per_partition or handle the situation manually.")
resource_management.core.exceptions.Fail: Trying to create another 
directory on the following mount: /grid/0 . Please turn off 
cluster-env/one_dir_per_partition or handle the situation manually.


The test fails because it set an invalid value and the directory was not created.  
A warning message explaining why the new directory value is invalid would be
useful.


Diffs (updated)
-

  
ambari-common/src/main/python/resource_management/libraries/functions/file_system.py
 2a859ed 
  
ambari-common/src/main/python/resource_management/libraries/functions/mounted_dirs_helper.py
 9574ce5 
  ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py 
06f7cfe 
  ambari-server/src/test/python/stacks/2.0.6/common/test_stack_advisor.py 
7a092fc 
  ambari-server/src/test/python/stacks/2.2/common/test_stack_advisor.py 08b9554 
  ambari-server/src/test/python/stacks/2.3/common/test_stack_advisor.py 4dfb8af 

Diff: https://reviews.apache.org/r/49590/diff/


Testing
---

mvn clean test


Thanks,

Andrew Onischuk



Re: Review Request 49703: Add SmartSense activty logs to Log Search

2016-07-06 Thread Oliver Szabo

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49703/#review140981
---


Ship it!




Ship It!

- Oliver Szabo


On July 6, 2016, 12:28 p.m., Miklos Gergely wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/49703/
> ---
> 
> (Updated July 6, 2016, 12:28 p.m.)
> 
> 
> Review request for Ambari, Oliver Szabo, Robert Nettleton, and Sumit Mohanty.
> 
> 
> Bugs: AMBARI-17583
> https://issues.apache.org/jira/browse/AMBARI-17583
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> activity-explorer.log and activity-analyzer.log added
> 
> 
> Diffs
> -
> 
>   
> ambari-logsearch/ambari-logsearch-portal/src/main/resources/HadoopServiceConfig.json
>  d407d82 
>   
> ambari-server/src/main/resources/common-services/LOGSEARCH/0.5.0/package/scripts/params.py
>  165ac08 
>   
> ambari-server/src/main/resources/common-services/LOGSEARCH/0.5.0/package/templates/HadoopServiceConfig.json.j2
>  64c81b5 
>   
> ambari-server/src/main/resources/common-services/LOGSEARCH/0.5.0/package/templates/input.config-hst.json.j2
>  ee19f14 
> 
> Diff: https://reviews.apache.org/r/49703/diff/
> 
> 
> Testing
> ---
> 
> Tested on local cluster
> 
> 
> Thanks,
> 
> Miklos Gergely
> 
>



Re: Review Request 49702: HIVE_SERVER_INTERACTIVE STOP failed with error "Python script has been killed due to timeout after waiting 900 secs"

2016-07-06 Thread Dmytro Sen

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49702/#review140979
---


Ship it!




Ship It!

- Dmytro Sen


On Июль 6, 2016, 11:37 д.п., Andrew Onischuk wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/49702/
> ---
> 
> (Updated Июль 6, 2016, 11:37 д.п.)
> 
> 
> Review request for Ambari and Vitalyi Brodetskyi.
> 
> 
> Bugs: AMBARI-17582
> https://issues.apache.org/jira/browse/AMBARI-17582
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> HIVE_SERVER_INTERACTIVE STOP failed with error "Python script has been killed
> due to timeout after waiting 900 secs"
> 
> 
> 
> 
> {
>   "href" : 
> "http://172.22.117.57:8080/api/v1/clusters/cl1/requests/8/tasks/198;,
>   "Tasks" : {
> "attempt_cnt" : 1,
> "cluster_name" : "cl1",
> "command" : "STOP",
> "command_detail" : "HIVE_SERVER_INTERACTIVE STOP",
> "end_time" : 1467691652833,
> "error_log" : "/var/lib/ambari-agent/data/errors-198.txt",
> "exit_code" : 999,
> "host_name" : "nat-u14-dvys-ambari-logsearch-1-3.openstacklocal",
> "id" : 198,
> "output_log" : "/var/lib/ambari-agent/data/output-198.txt",
> "request_id" : 8,
> "role" : "HIVE_SERVER_INTERACTIVE",
> "stage_id" : 0,
> "start_time" : 1467690695556,
> "status" : "FAILED",
> "stderr" : "Python script has been killed due to timeout after 
> waiting 900 secs",
> "stdout" : "2016-07-05 03:52:27,679 - The hadoop conf dir 
> /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for 
> version 2.5.0.0-874\n2016-07-05 03:52:27,683 - Checking if need to create 
> versioned conf dir /etc/hadoop/2.5.0.0-874/0\n2016-07-05 03:52:27,686 - 
> call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', 
> '--package', 'hadoop', '--stack-version', '2.5.0.0-874', '--conf-version', 
> '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': 
> -1}\n2016-07-05 03:52:27,726 - call returned (1, '/etc/hadoop/2.5.0.0-874/0 
> exist already', '')\n2016-07-05 03:52:27,727 - 
> checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', 
> '--package', 'hadoop', '--stack-version', '2.5.0.0-874', '--conf-version', 
> '0')] {'logoutput': False, 'sudo': True, 'quiet': False}\n2016-07-05 
> 03:52:27,788 - checked_call returned (0, '')\n2016-07-05 03:52:27,789 - 
> Ensuring that hadoop has the correct symlink structure\n2016-07-05 03:52:27,78
 9 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf\n2016-07-05 
03:52:27,810 - call['ambari-python-wrap /usr/bin/hdp-select status 
hive-server2'] {'timeout': 20}\n2016-07-05 03:52:27,853 - call returned (0, 
'hive-server2 - 2.5.0.0-874')\n2016-07-05 03:52:27,880 - call['ambari-sudo.sh 
su hive -l -s /bin/bash -c 'cat /var/run/hive/hive-interactive.pid 
1>/tmp/tmpnftynk 2>/tmp/tmpRKnLIa''] {'quiet': False}\n2016-07-05 03:52:27,911 
- call returned (0, ' Hortonworks #\\nThis is MOTD message, 
added for testing in qe infra')\n2016-07-05 03:52:27,912 - 
Execute['ambari-sudo.sh kill 21297'] {'not_if': '! (ls 
/var/run/hive/hive-interactive.pid >/dev/null 2>&1 && ps -p 21297 >/dev/null 
2>&1)'}\n2016-07-05 03:52:27,936 - Execute['ambari-sudo.sh kill -9 21297'] 
{'not_if': '! (ls /var/run/hive/hive-interactive.pid >/dev/null 2>&1 && ps -p 
21297 >/dev/null 2>&1) || ( sleep 5 && ! (ls /var/run/hive/hive-interactive.pid 
>/dev/null 2>&1 && ps -p 21297 >/dev/null 2>&1) )'}
 \n2016-07-05 03:52:32,975 - Execute['! (ls /var/run/hive/hive-interactive.pid 
>/dev/null 2>&1 && ps -p 21297 >/dev/null 2>&1)'] {'tries': 20, 'try_sleep': 
3}\n2016-07-05 03:52:33,036 - Retrying after 3 seconds. Reason: Execution of '! 
(ls /var/run/hive/hive-interactive.pid >/dev/null 2>&1 && ps -p 21297 
>/dev/null 2>&1)' returned 1. \n2016-07-05 03:52:36,062 - 
File['/var/run/hive/hive-interactive.pid'] {'action': ['delete']}\n2016-07-05 
03:52:36,063 - Deleting File['/var/run/hive/hive-interactive.pid']\n2016-07-05 
03:52:36,063 - Stopping LLAP\n2016-07-05 03:52:36,063 - Command: ['slider', 
'stop', 'llap0']\n2016-07-05 03:52:36,063 - call[['slider', 'stop', 'llap0']] 
{'logoutput': True, 'user': 'hive', 'stderr': -1}\n Hortonworks 
#\nThis is MOTD message, added for testing in qe infra\n2016-07-05 
03:52:41,508 [main] INFO  impl.TimelineClientImpl - Timeline service address: 
http://nat-u14-dvys-ambari-logsearch-1-4.openstacklocal:8188/ws/v1/timeline/\n2016-07-05
 03:52
 :42,856 [main] WARN  shortcircuit.DomainSocketFactory - The short-circuit 
local reads feature cannot be used because libhadoop cannot be 

Re: Review Request 49701: Log search does not show Livy logs

2016-07-06 Thread Oliver Szabo

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49701/#review140978
---


Ship it!




Ship It!

- Oliver Szabo


On July 6, 2016, 11:39 a.m., Miklos Gergely wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/49701/
> ---
> 
> (Updated July 6, 2016, 11:39 a.m.)
> 
> 
> Review request for Ambari, Oliver Szabo, Robert Nettleton, and Sumit Mohanty.
> 
> 
> Bugs: iAMBARI-17581
> https://issues.apache.org/jira/browse/iAMBARI-17581
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> Also synced grok-patterns in the logsearch with the one in the ambari-server
> 
> 
> Diffs
> -
> 
>   
> ambari-logsearch/ambari-logsearch-logfeeder/src/main/resources/grok-patterns 
> d25a78b 
>   
> ambari-server/src/main/resources/common-services/LOGSEARCH/0.5.0/package/templates/input.config-spark.json.j2
>  80be6ee 
> 
> Diff: https://reviews.apache.org/r/49701/diff/
> 
> 
> Testing
> ---
> 
> Tested on local cluster
> 
> 
> Thanks,
> 
> Miklos Gergely
> 
>



Re: Review Request 49702: HIVE_SERVER_INTERACTIVE STOP failed with error "Python script has been killed due to timeout after waiting 900 secs"

2016-07-06 Thread Andrew Onischuk

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49702/#review140976
---




ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server_interactive.py
 


Remove since all calls are already printed to logs


- Andrew Onischuk


On July 6, 2016, 11:37 a.m., Andrew Onischuk wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/49702/
> ---
> 
> (Updated July 6, 2016, 11:37 a.m.)
> 
> 
> Review request for Ambari and Vitalyi Brodetskyi.
> 
> 
> Bugs: AMBARI-17582
> https://issues.apache.org/jira/browse/AMBARI-17582
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> HIVE_SERVER_INTERACTIVE STOP failed with error "Python script has been killed
> due to timeout after waiting 900 secs"
> 
> 
> 
> 
> {
>   "href" : 
> "http://172.22.117.57:8080/api/v1/clusters/cl1/requests/8/tasks/198;,
>   "Tasks" : {
> "attempt_cnt" : 1,
> "cluster_name" : "cl1",
> "command" : "STOP",
> "command_detail" : "HIVE_SERVER_INTERACTIVE STOP",
> "end_time" : 1467691652833,
> "error_log" : "/var/lib/ambari-agent/data/errors-198.txt",
> "exit_code" : 999,
> "host_name" : "nat-u14-dvys-ambari-logsearch-1-3.openstacklocal",
> "id" : 198,
> "output_log" : "/var/lib/ambari-agent/data/output-198.txt",
> "request_id" : 8,
> "role" : "HIVE_SERVER_INTERACTIVE",
> "stage_id" : 0,
> "start_time" : 1467690695556,
> "status" : "FAILED",
> "stderr" : "Python script has been killed due to timeout after 
> waiting 900 secs",
> "stdout" : "2016-07-05 03:52:27,679 - The hadoop conf dir 
> /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for 
> version 2.5.0.0-874\n2016-07-05 03:52:27,683 - Checking if need to create 
> versioned conf dir /etc/hadoop/2.5.0.0-874/0\n2016-07-05 03:52:27,686 - 
> call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', 
> '--package', 'hadoop', '--stack-version', '2.5.0.0-874', '--conf-version', 
> '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': 
> -1}\n2016-07-05 03:52:27,726 - call returned (1, '/etc/hadoop/2.5.0.0-874/0 
> exist already', '')\n2016-07-05 03:52:27,727 - 
> checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', 
> '--package', 'hadoop', '--stack-version', '2.5.0.0-874', '--conf-version', 
> '0')] {'logoutput': False, 'sudo': True, 'quiet': False}\n2016-07-05 
> 03:52:27,788 - checked_call returned (0, '')\n2016-07-05 03:52:27,789 - 
> Ensuring that hadoop has the correct symlink structure\n2016-07-05 03:52:27,78
 9 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf\n2016-07-05 
03:52:27,810 - call['ambari-python-wrap /usr/bin/hdp-select status 
hive-server2'] {'timeout': 20}\n2016-07-05 03:52:27,853 - call returned (0, 
'hive-server2 - 2.5.0.0-874')\n2016-07-05 03:52:27,880 - call['ambari-sudo.sh 
su hive -l -s /bin/bash -c 'cat /var/run/hive/hive-interactive.pid 
1>/tmp/tmpnftynk 2>/tmp/tmpRKnLIa''] {'quiet': False}\n2016-07-05 03:52:27,911 
- call returned (0, ' Hortonworks #\\nThis is MOTD message, 
added for testing in qe infra')\n2016-07-05 03:52:27,912 - 
Execute['ambari-sudo.sh kill 21297'] {'not_if': '! (ls 
/var/run/hive/hive-interactive.pid >/dev/null 2>&1 && ps -p 21297 >/dev/null 
2>&1)'}\n2016-07-05 03:52:27,936 - Execute['ambari-sudo.sh kill -9 21297'] 
{'not_if': '! (ls /var/run/hive/hive-interactive.pid >/dev/null 2>&1 && ps -p 
21297 >/dev/null 2>&1) || ( sleep 5 && ! (ls /var/run/hive/hive-interactive.pid 
>/dev/null 2>&1 && ps -p 21297 >/dev/null 2>&1) )'}
 \n2016-07-05 03:52:32,975 - Execute['! (ls /var/run/hive/hive-interactive.pid 
>/dev/null 2>&1 && ps -p 21297 >/dev/null 2>&1)'] {'tries': 20, 'try_sleep': 
3}\n2016-07-05 03:52:33,036 - Retrying after 3 seconds. Reason: Execution of '! 
(ls /var/run/hive/hive-interactive.pid >/dev/null 2>&1 && ps -p 21297 
>/dev/null 2>&1)' returned 1. \n2016-07-05 03:52:36,062 - 
File['/var/run/hive/hive-interactive.pid'] {'action': ['delete']}\n2016-07-05 
03:52:36,063 - Deleting File['/var/run/hive/hive-interactive.pid']\n2016-07-05 
03:52:36,063 - Stopping LLAP\n2016-07-05 03:52:36,063 - Command: ['slider', 
'stop', 'llap0']\n2016-07-05 03:52:36,063 - call[['slider', 'stop', 'llap0']] 
{'logoutput': True, 'user': 'hive', 'stderr': -1}\n Hortonworks 
#\nThis is MOTD message, added for testing in qe infra\n2016-07-05 
03:52:41,508 [main] INFO  impl.TimelineClientImpl - Timeline service address: 
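The stop sequence in the log above follows a common graceful-then-forceful kill pattern: send SIGTERM, wait, SIGKILL if the process is still alive, then remove the pid file. A minimal sketch of that pattern (the pid-file path is an assumption taken from the log output, and this is an illustration, not the actual Ambari script logic):

```shell
#!/bin/sh
# Sketch of the stop pattern seen in the log above: graceful SIGTERM,
# short wait, SIGKILL if still alive, then remove the pid file.
stop_by_pidfile() {
  pidfile="$1"
  [ -f "$pidfile" ] || return 0        # nothing to stop
  pid=$(cat "$pidfile")
  kill "$pid" 2>/dev/null              # graceful stop
  sleep 1                              # give the process time to exit
  if ps -p "$pid" >/dev/null 2>&1; then
    kill -9 "$pid" 2>/dev/null         # force kill if still running
  fi
  rm -f "$pidfile"                     # clean up the stale pid file
}
```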

Re: Review Request 49702: HIVE_SERVER_INTERACTIVE STOP failed with error "Python script has been killed due to timeout after waiting 900 secs"

2016-07-06 Thread Andrew Onischuk


> On July 6, 2016, 11:38 a.m., Andrew Onischuk wrote:
> > ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server_interactive.py,
> >  line 234
> > 
> >
> > Remove since all calls are already printed to logs

Removed*


- Andrew


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49702/#review140976
---


On July 6, 2016, 11:37 a.m., Andrew Onischuk wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/49702/
> ---
> 
> (Updated July 6, 2016, 11:37 a.m.)
> 
> 
> Review request for Ambari and Vitalyi Brodetskyi.
> 
> 
> Bugs: AMBARI-17582
> https://issues.apache.org/jira/browse/AMBARI-17582
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> HIVE_SERVER_INTERACTIVE STOP failed with error "Python script has been killed
> due to timeout after waiting 900 secs"
> 
> 
> 
> 
> {
>   "href" : 
> "http://172.22.117.57:8080/api/v1/clusters/cl1/requests/8/tasks/198",
>   "Tasks" : {
> "attempt_cnt" : 1,
> "cluster_name" : "cl1",
> "command" : "STOP",
> "command_detail" : "HIVE_SERVER_INTERACTIVE STOP",
> "end_time" : 1467691652833,
> "error_log" : "/var/lib/ambari-agent/data/errors-198.txt",
> "exit_code" : 999,
> "host_name" : "nat-u14-dvys-ambari-logsearch-1-3.openstacklocal",
> "id" : 198,
> "output_log" : "/var/lib/ambari-agent/data/output-198.txt",
> "request_id" : 8,
> "role" : "HIVE_SERVER_INTERACTIVE",
> "stage_id" : 0,
> "start_time" : 1467690695556,
> "status" : "FAILED",
> "stderr" : "Python script has been killed due to timeout after 
> waiting 900 secs",
> "stdout" : "2016-07-05 03:52:27,679 - The hadoop conf dir 
> /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for 
> version 2.5.0.0-874\n2016-07-05 03:52:27,683 - Checking if need to create 
> versioned conf dir /etc/hadoop/2.5.0.0-874/0\n2016-07-05 03:52:27,686 - 
> call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', 
> '--package', 'hadoop', '--stack-version', '2.5.0.0-874', '--conf-version', 
> '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': 
> -1}\n2016-07-05 03:52:27,726 - call returned (1, '/etc/hadoop/2.5.0.0-874/0 
> exist already', '')\n2016-07-05 03:52:27,727 - 
> checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', 
> '--package', 'hadoop', '--stack-version', '2.5.0.0-874', '--conf-version', 
> '0')] {'logoutput': False, 'sudo': True, 'quiet': False}\n2016-07-05 
> 03:52:27,788 - checked_call returned (0, '')\n2016-07-05 03:52:27,789 - 
> Ensuring that hadoop has the correct symlink structure\n2016-07-05
03:52:27,789 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf\n2016-07-05 
03:52:27,810 - call['ambari-python-wrap /usr/bin/hdp-select status 
hive-server2'] {'timeout': 20}\n2016-07-05 03:52:27,853 - call returned (0, 
'hive-server2 - 2.5.0.0-874')\n2016-07-05 03:52:27,880 - call['ambari-sudo.sh 
su hive -l -s /bin/bash -c 'cat /var/run/hive/hive-interactive.pid 
1>/tmp/tmpnftynk 2>/tmp/tmpRKnLIa''] {'quiet': False}\n2016-07-05 03:52:27,911 
- call returned (0, ' Hortonworks #\\nThis is MOTD message, 
added for testing in qe infra')\n2016-07-05 03:52:27,912 - 
Execute['ambari-sudo.sh kill 21297'] {'not_if': '! (ls 
/var/run/hive/hive-interactive.pid >/dev/null 2>&1 && ps -p 21297 >/dev/null 
2>&1)'}\n2016-07-05 03:52:27,936 - Execute['ambari-sudo.sh kill -9 21297'] 
{'not_if': '! (ls /var/run/hive/hive-interactive.pid >/dev/null 2>&1 && ps -p 
21297 >/dev/null 2>&1) || ( sleep 5 && ! (ls /var/run/hive/hive-interactive.pid 
>/dev/null 2>&1 && ps -p 21297 >/dev/null 2>&1) )'}
 \n2016-07-05 03:52:32,975 - Execute['! (ls /var/run/hive/hive-interactive.pid 
>/dev/null 2>&1 && ps -p 21297 >/dev/null 2>&1)'] {'tries': 20, 'try_sleep': 
3}\n2016-07-05 03:52:33,036 - Retrying after 3 seconds. Reason: Execution of '! 
(ls /var/run/hive/hive-interactive.pid >/dev/null 2>&1 && ps -p 21297 
>/dev/null 2>&1)' returned 1. \n2016-07-05 03:52:36,062 - 
File['/var/run/hive/hive-interactive.pid'] {'action': ['delete']}\n2016-07-05 
03:52:36,063 - Deleting File['/var/run/hive/hive-interactive.pid']\n2016-07-05 
03:52:36,063 - Stopping LLAP\n2016-07-05 03:52:36,063 - Command: ['slider', 
'stop', 'llap0']\n2016-07-05 03:52:36,063 - call[['slider', 'stop', 'llap0']] 
{'logoutput': True, 'user': 'hive', 'stderr': -1}\n Hortonworks 
#\nThis is MOTD message, added for testing in qe infra\n2016-07-05 
03:52:41,508 [main] 

Re: Review Request 49635: Enable simulating logfeeder inputs

2016-07-06 Thread Miklos Gergely

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49635/
---

(Updated July 6, 2016, 10:26 a.m.)


Review request for Ambari, Oliver Szabo, Robert Nettleton, and Sumit Mohanty.


Changes
---

added license to new file and example for the log ids in the description


Bugs: AMBARI-17561
https://issues.apache.org/jira/browse/AMBARI-17561


Repository: ambari


Description (updated)
---

Enable simulating input files in a configurable way. The parameters have to be 
set in a custom logfeeder.properties

Available parameters:
logfeeder.simulate.input_number - number of parallel inputs (threads) loading 
the logs; if it is set and not 0, then all the rest of the configured inputs are 
ignored (running in simulation mode)!

logfeeder.simulate.log_ids - comma-separated list of the log ids to propagate 
at random; if not set, by default all the available logs are propagated at 
random, for example: storm_drpc,storm_logviewer,storm_nimbus

logfeeder.simulate.log_level - the level of the simulated log messages, by 
default WARN

logfeeder.simulate.log_message_size - the length of the simulated log messages; 
can't be less than 50 due to the log message prefix, the rest is filled with 'X' 
characters

logfeeder.simulate.sleep_milliseconds - the time interval at which each 
simulated input writes one log message at random

The text of the log message is like this:
Simulated log message for testing, line 0001 XX
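Putting the parameters above together, a custom logfeeder.properties could look 
like this (all values are illustrative, not defaults):

```properties
# Run 3 simulated inputs instead of the configured ones
logfeeder.simulate.input_number=3
# Only propagate these log ids (one picked at random per message)
logfeeder.simulate.log_ids=storm_drpc,storm_logviewer,storm_nimbus
# Level and size of each simulated message (size must be >= 50)
logfeeder.simulate.log_level=WARN
logfeeder.simulate.log_message_size=100
# Each simulated input writes one message per second
logfeeder.simulate.sleep_milliseconds=1000
```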


Diffs (updated)
-

  
ambari-logsearch/ambari-logsearch-logfeeder/src/main/java/org/apache/ambari/logfeeder/LogFeeder.java
 88a6737 
  
ambari-logsearch/ambari-logsearch-logfeeder/src/main/java/org/apache/ambari/logfeeder/input/InputSimulate.java
 PRE-CREATION 
  
ambari-logsearch/ambari-logsearch-logfeeder/src/main/resources/alias_config.json
 978f581 

Diff: https://reviews.apache.org/r/49635/diff/


Testing
---

Tested on local cluster.


Thanks,

Miklos Gergely