[jira] [Updated] (AMBARI-21123) Part Two: Specify the script directly in alert target for script-based alert dispatchers

2017-06-06 Thread Yao Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yao Lei updated AMBARI-21123:
-
Attachment: (was: AMBARI-21123.patch)

> Part Two: Specify the script directly in alert target for script-based alert 
> dispatchers
> 
>
> Key: AMBARI-21123
> URL: https://issues.apache.org/jira/browse/AMBARI-21123
> Project: Ambari
>  Issue Type: Technical task
>  Components: alerts, ambari-web
>Affects Versions: trunk
>Reporter: Yao Lei
>Assignee: Yao Lei
> Fix For: 3.0.0
>
> Attachments: AMBARI-21123.patch, script_alert_notification_1.png, 
> script_alert_notification_2.png
>
>
> *Web Code Part*
> This patch aims to support creating an alert target that includes the 
> property *ambari.dispatch-property.script.filename* on the web UI.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-21123) Part Two: Specify the script directly in alert target for script-based alert dispatchers

2017-06-06 Thread Yao Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yao Lei updated AMBARI-21123:
-
Attachment: AMBARI-21123.patch

> Part Two: Specify the script directly in alert target for script-based alert 
> dispatchers
> 
>
> Key: AMBARI-21123
> URL: https://issues.apache.org/jira/browse/AMBARI-21123
> Project: Ambari
>  Issue Type: Technical task
>  Components: alerts, ambari-web
>Affects Versions: trunk
>Reporter: Yao Lei
>Assignee: Yao Lei
> Fix For: 3.0.0
>
> Attachments: AMBARI-21123.patch, script_alert_notification_1.png, 
> script_alert_notification_2.png
>
>
> *Web Code Part*
> This patch aims to support creating an alert target that includes the 
> property *ambari.dispatch-property.script.filename* on the web UI.





[jira] [Updated] (AMBARI-21123) Part Two: Specify the script directly in alert target for script-based alert dispatchers

2017-06-06 Thread Yao Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yao Lei updated AMBARI-21123:
-
Status: Patch Available  (was: Open)

> Part Two: Specify the script directly in alert target for script-based alert 
> dispatchers
> 
>
> Key: AMBARI-21123
> URL: https://issues.apache.org/jira/browse/AMBARI-21123
> Project: Ambari
>  Issue Type: Technical task
>  Components: alerts, ambari-web
>Affects Versions: trunk
>Reporter: Yao Lei
>Assignee: Yao Lei
> Fix For: 3.0.0
>
> Attachments: AMBARI-21123.patch, script_alert_notification_1.png, 
> script_alert_notification_2.png
>
>
> *Web Code Part*
> This patch aims to support creating an alert target that includes the 
> property *ambari.dispatch-property.script.filename* on the web UI.





[jira] [Updated] (AMBARI-19962) Clicking on the login button (or hitting page refresh) to see the dashboard takes a while on a 1000-node cluster

2017-06-06 Thread Alejandro Fernandez (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-19962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Fernandez updated AMBARI-19962:
-
Summary: Clicking on the login button (or hitting page refresh) to see the 
dashboard takes a while on a 1000-node cluster  (was: Clicking on the login 
button (or hitting page refresh) to seeing the dashboard takes a while on a 
1000-node cluster)

> Clicking on the login button (or hitting page refresh) to see the dashboard 
> takes a while on a 1000-node cluster
> 
>
> Key: AMBARI-19962
> URL: https://issues.apache.org/jira/browse/AMBARI-19962
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.0
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
>Priority: Critical
> Fix For: 2.5.0
>
> Attachments: AMBARI-19962_branch-2.5.patch, AMBARI-19962.patch
>
>
> Logging in to the Dashboard is really slow (about 15 seconds on a 1000-node 
> cluster). This does not include the time to load the widget graphs (loading 
> the widgets takes an additional ~5 seconds on top).
> Eliminate some of the bottlenecks and make this faster.





[jira] [Updated] (AMBARI-21187) Get rid deprecated jdk install link in the Dockerfile of Log Search

2017-06-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/AMBARI-21187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Olivér Szabó updated AMBARI-21187:
--
Summary: Get rid deprecated jdk install link in the Dockerfile of Log 
Search  (was: Get rid deprecated jdk install in the Dockerfile of Log Search)

> Get rid deprecated jdk install link in the Dockerfile of Log Search
> ---
>
> Key: AMBARI-21187
> URL: https://issues.apache.org/jira/browse/AMBARI-21187
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-logsearch
>Affects Versions: 3.0.0
>Reporter: Olivér Szabó
>Assignee: Olivér Szabó
> Fix For: 3.0.0
>
>






[jira] [Created] (AMBARI-21189) Service Advisor - UI to pass in instance name of Service modified to recommendation/validation APIs

2017-06-06 Thread Alejandro Fernandez (JIRA)
Alejandro Fernandez created AMBARI-21189:


 Summary: Service Advisor - UI to pass in instance name of Service 
modified to recommendation/validation APIs
 Key: AMBARI-21189
 URL: https://issues.apache.org/jira/browse/AMBARI-21189
 Project: Ambari
  Issue Type: Bug
  Components: ambari-web
Affects Versions: 3.0.0
Reporter: Alejandro Fernandez
 Fix For: 3.0.0


In order for AMBARI-20853 to have granularity at the service-level of which 
type of service advisor (Java or Python) to invoke, we need the UI to call the 
recommendation & validation APIs with the instance name of the service being 
modified.

Today, we pass a list of all services in the stack, which is redundant since 
the backend already knows it. Instead, we need a new field called 
"modified_service" with the instance name.

Today, recommendations are calculated for all services, which adds extra 
overhead for services that are unrelated to the configs being modified.
The backend has all of the knowledge about service and config dependencies, so 
if a ZK config is being changed, it should be smart enough to also invoke the 
Service Advisor for, say, Storm, Kafka, and HDFS. Similarly, if an Atlas or 
Ranger config is being changed, it may have to invoke the service advisor for 
all of the services that have hooks/plugins.

Either way, the UI shouldn't be the one to calculate this, since doing so 
places an additional burden on it and opens Ambari to miscalculations, 
especially since the API should require the minimal amount of input.
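As a sketch of the proposal above: the `modified_service` field name comes from this description, but the surrounding payload shape and function name are hypothetical simplifications of the real recommendation API request, not Ambari's actual schema.

```python
# Hypothetical sketch of the proposed recommendation request. The
# "modified_service" field is the new field proposed above; the rest of
# the payload shape is a simplified stand-in, not Ambari's real schema.
import json


def build_recommendation_request(modified_service, hosts, configs):
    """Build a minimal recommendation request carrying only the
    instance name of the service whose configs were modified."""
    return {
        "recommend": "configurations",
        "hosts": hosts,
        # Instead of listing every service in the stack, name only the
        # service the user actually touched; the backend can resolve the
        # dependent services (e.g. ZK -> Storm, Kafka, HDFS) itself.
        "modified_service": modified_service,
        "configurations": configs,
    }


payload = build_recommendation_request(
    "ZOOKEEPER", ["c6401.ambari.apache.org"], {"zoo.cfg": {"tickTime": "2000"}}
)
print(json.dumps(payload, indent=2))
```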





[jira] [Updated] (AMBARI-21188) Configuration Symlink Is Incorrect After Stack Distribution

2017-06-06 Thread Jonathan Hurley (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hurley updated AMBARI-21188:
-
Attachment: AMBARI-21188.patch

> Configuration Symlink Is Incorrect After Stack Distribution
> ---
>
> Key: AMBARI-21188
> URL: https://issues.apache.org/jira/browse/AMBARI-21188
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 3.0.0
>Reporter: Jonathan Hurley
>Assignee: Jonathan Hurley
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: AMBARI-21188.patch
>
>
> The configuration symlinks for a component after installation should reflect 
> something similar to the following:
> /etc/component/conf -> /usr/hdp/current/component/conf
> /usr/hdp/current/component/conf -> /etc/component//conf
> For example:
> {noformat}
> [root@c6403 ~]# ll /etc/zookeeper/
> total 12
> drwxr-xr-x 3 root  root   4096 Jun  5 20:19 2.4.2.0-236
> drwxr-xr-x 3 root  root   4096 Jun  5 20:38 2.6.0.0-334
> lrwxrwxrwx 1 root  root 26 Jun  5 20:38 conf -> 
> /usr/hdp/current/zookeeper-server/conf
> drwxr-xr-x 2 zookeeper hadoop 4096 Jun  5 20:17 conf.backup
> [root@c6403 ~]# ll /usr/hdp/current/zookeeper-server/conf
> lrwxrwxrwx 1 root root 28 Jun  5 20:38 /usr/hdp/current/zookeeper-server/conf 
> -> /etc/zookeeper/2.6.0.0-334/0
> {noformat}
> This is how the structure looks after a normal installation today. 
> However, it seems that distributing a new stack breaks this:
> {code}
> [root@c6403 zookeeper]# ll /etc/zookeeper/
> total 12
> drwxr-xr-x 3 root  root   4096 Jun  5 20:19 2.4.2.0-236
> drwxr-xr-x 3 root  root   4096 Jun  5 20:38 2.6.0.0-334
> lrwxrwxrwx 1 root  root 26 Jun  5 20:38 conf -> 
> /etc/zookeeper/conf.backup
> drwxr-xr-x 2 zookeeper hadoop 4096 Jun  5 20:17 conf.backup
> {code}
> The {{conf}} symlink is now pointing to {{conf.backup}}, which is the interim 
> location.
> {noformat:title=Initial Install}
> 2017-06-05 20:19:17,137 - Backing up /etc/zookeeper/conf to 
> /etc/zookeeper/conf.backup if destination doesn't exist already.
> 2017-06-05 20:19:17,137 - Execute[('cp', '-R', '-p', '/etc/zookeeper/conf', 
> '/etc/zookeeper/conf.backup')] {'not_if': 'test -e 
> /etc/zookeeper/conf.backup', 'sudo': True}
> 2017-06-05 20:19:17,150 - Checking if need to create versioned conf dir 
> /etc/zookeeper/2.4.2.0-236/0
> 2017-06-05 20:19:17,151 - call[('ambari-python-wrap', '/usr/bin/conf-select', 
> 'dry-run-create', '--package', 'zookeeper', '--stack-version', '2.4.2.0-236', 
> '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 
> 'stderr': -1}
> 2017-06-05 20:19:17,173 - call returned (0, '/etc/zookeeper/2.4.2.0-236/0', 
> '')
> 2017-06-05 20:19:17,173 - Package zookeeper will have new conf directories: 
> /etc/zookeeper/2.4.2.0-236/0
> 2017-06-05 20:19:17,173 - Checking if need to create versioned conf dir 
> /etc/zookeeper/2.4.2.0-236/0
> 2017-06-05 20:19:17,174 - call[('ambari-python-wrap', '/usr/bin/conf-select', 
> 'create-conf-dir', '--package', 'zookeeper', '--stack-version', 
> '2.4.2.0-236', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 
> 'quiet': False, 'stderr': -1}
> 2017-06-05 20:19:17,197 - call returned (0, '/etc/zookeeper/2.4.2.0-236/0', 
> '')
> 2017-06-05 20:19:17,197 - Directory['/etc/zookeeper/2.4.2.0-236/0'] 
> {'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
> 2017-06-05 20:19:17,198 - Seeding versioned configuration directories for 
> zookeeper
> 2017-06-05 20:19:17,198 - Execute['ambari-sudo.sh  -H -E cp -R -p -v 
> /usr/hdp/current/zookeeper-client/conf/* /etc/zookeeper/2.4.2.0-236/0'] 
> {'logoutput': True}
> `/usr/hdp/current/zookeeper-client/conf/configuration.xsl' -> 
> `/etc/zookeeper/2.4.2.0-236/0/configuration.xsl'
> `/usr/hdp/current/zookeeper-client/conf/log4j.properties' -> 
> `/etc/zookeeper/2.4.2.0-236/0/log4j.properties'
> `/usr/hdp/current/zookeeper-client/conf/zoo.cfg' -> 
> `/etc/zookeeper/2.4.2.0-236/0/zoo.cfg'
> `/usr/hdp/current/zookeeper-client/conf/zoo_sample.cfg' -> 
> `/etc/zookeeper/2.4.2.0-236/0/zoo_sample.cfg'
> `/usr/hdp/current/zookeeper-client/conf/zookeeper-env.cmd' -> 
> `/etc/zookeeper/2.4.2.0-236/0/zookeeper-env.cmd'
> `/usr/hdp/current/zookeeper-client/conf/zookeeper-env.sh' -> 
> `/etc/zookeeper/2.4.2.0-236/0/zookeeper-env.sh'
> 2017-06-05 20:19:17,204 - Execute['ambari-sudo.sh  -H -E cp -R -p 
> /etc/zookeeper/conf/* /etc/zookeeper/2.4.2.0-236/0'] {'only_if': 'ls -d 
> /etc/zookeeper/conf/*'}
> 2017-06-05 20:19:17,213 - Checking if need to create versioned conf dir 
> /etc/zookeeper/2.4.2.0-236/0
> 2017-06-05 20:19:17,213 - call[('ambari-python-wrap', '/usr/bin/conf-select', 
> 'create-conf-dir', '--package', 'zookeeper', '--stack-version', 
> 

[jira] [Updated] (AMBARI-21188) Configuration Symlink Is Incorrect After Stack Distribution

2017-06-06 Thread Jonathan Hurley (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hurley updated AMBARI-21188:
-
Status: Patch Available  (was: Open)

> Configuration Symlink Is Incorrect After Stack Distribution
> ---
>
> Key: AMBARI-21188
> URL: https://issues.apache.org/jira/browse/AMBARI-21188
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 3.0.0
>Reporter: Jonathan Hurley
>Assignee: Jonathan Hurley
>Priority: Critical
> Fix For: 3.0.0
>
>
> The configuration symlinks for a component after installation should reflect 
> something similar to the following:
> /etc/component/conf -> /usr/hdp/current/component/conf
> /usr/hdp/current/component/conf -> /etc/component//conf
> For example:
> {noformat}
> [root@c6403 ~]# ll /etc/zookeeper/
> total 12
> drwxr-xr-x 3 root  root   4096 Jun  5 20:19 2.4.2.0-236
> drwxr-xr-x 3 root  root   4096 Jun  5 20:38 2.6.0.0-334
> lrwxrwxrwx 1 root  root 26 Jun  5 20:38 conf -> 
> /usr/hdp/current/zookeeper-server/conf
> drwxr-xr-x 2 zookeeper hadoop 4096 Jun  5 20:17 conf.backup
> [root@c6403 ~]# ll /usr/hdp/current/zookeeper-server/conf
> lrwxrwxrwx 1 root root 28 Jun  5 20:38 /usr/hdp/current/zookeeper-server/conf 
> -> /etc/zookeeper/2.6.0.0-334/0
> {noformat}
> This is how the structure looks after a normal installation today. 
> However, it seems that distributing a new stack breaks this:
> {code}
> [root@c6403 zookeeper]# ll /etc/zookeeper/
> total 12
> drwxr-xr-x 3 root  root   4096 Jun  5 20:19 2.4.2.0-236
> drwxr-xr-x 3 root  root   4096 Jun  5 20:38 2.6.0.0-334
> lrwxrwxrwx 1 root  root 26 Jun  5 20:38 conf -> 
> /etc/zookeeper/conf.backup
> drwxr-xr-x 2 zookeeper hadoop 4096 Jun  5 20:17 conf.backup
> {code}
> The {{conf}} symlink is now pointing to {{conf.backup}}, which is the interim 
> location.
> {noformat:title=Initial Install}
> 2017-06-05 20:19:17,137 - Backing up /etc/zookeeper/conf to 
> /etc/zookeeper/conf.backup if destination doesn't exist already.
> 2017-06-05 20:19:17,137 - Execute[('cp', '-R', '-p', '/etc/zookeeper/conf', 
> '/etc/zookeeper/conf.backup')] {'not_if': 'test -e 
> /etc/zookeeper/conf.backup', 'sudo': True}
> 2017-06-05 20:19:17,150 - Checking if need to create versioned conf dir 
> /etc/zookeeper/2.4.2.0-236/0
> 2017-06-05 20:19:17,151 - call[('ambari-python-wrap', '/usr/bin/conf-select', 
> 'dry-run-create', '--package', 'zookeeper', '--stack-version', '2.4.2.0-236', 
> '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 
> 'stderr': -1}
> 2017-06-05 20:19:17,173 - call returned (0, '/etc/zookeeper/2.4.2.0-236/0', 
> '')
> 2017-06-05 20:19:17,173 - Package zookeeper will have new conf directories: 
> /etc/zookeeper/2.4.2.0-236/0
> 2017-06-05 20:19:17,173 - Checking if need to create versioned conf dir 
> /etc/zookeeper/2.4.2.0-236/0
> 2017-06-05 20:19:17,174 - call[('ambari-python-wrap', '/usr/bin/conf-select', 
> 'create-conf-dir', '--package', 'zookeeper', '--stack-version', 
> '2.4.2.0-236', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 
> 'quiet': False, 'stderr': -1}
> 2017-06-05 20:19:17,197 - call returned (0, '/etc/zookeeper/2.4.2.0-236/0', 
> '')
> 2017-06-05 20:19:17,197 - Directory['/etc/zookeeper/2.4.2.0-236/0'] 
> {'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
> 2017-06-05 20:19:17,198 - Seeding versioned configuration directories for 
> zookeeper
> 2017-06-05 20:19:17,198 - Execute['ambari-sudo.sh  -H -E cp -R -p -v 
> /usr/hdp/current/zookeeper-client/conf/* /etc/zookeeper/2.4.2.0-236/0'] 
> {'logoutput': True}
> `/usr/hdp/current/zookeeper-client/conf/configuration.xsl' -> 
> `/etc/zookeeper/2.4.2.0-236/0/configuration.xsl'
> `/usr/hdp/current/zookeeper-client/conf/log4j.properties' -> 
> `/etc/zookeeper/2.4.2.0-236/0/log4j.properties'
> `/usr/hdp/current/zookeeper-client/conf/zoo.cfg' -> 
> `/etc/zookeeper/2.4.2.0-236/0/zoo.cfg'
> `/usr/hdp/current/zookeeper-client/conf/zoo_sample.cfg' -> 
> `/etc/zookeeper/2.4.2.0-236/0/zoo_sample.cfg'
> `/usr/hdp/current/zookeeper-client/conf/zookeeper-env.cmd' -> 
> `/etc/zookeeper/2.4.2.0-236/0/zookeeper-env.cmd'
> `/usr/hdp/current/zookeeper-client/conf/zookeeper-env.sh' -> 
> `/etc/zookeeper/2.4.2.0-236/0/zookeeper-env.sh'
> 2017-06-05 20:19:17,204 - Execute['ambari-sudo.sh  -H -E cp -R -p 
> /etc/zookeeper/conf/* /etc/zookeeper/2.4.2.0-236/0'] {'only_if': 'ls -d 
> /etc/zookeeper/conf/*'}
> 2017-06-05 20:19:17,213 - Checking if need to create versioned conf dir 
> /etc/zookeeper/2.4.2.0-236/0
> 2017-06-05 20:19:17,213 - call[('ambari-python-wrap', '/usr/bin/conf-select', 
> 'create-conf-dir', '--package', 'zookeeper', '--stack-version', 
> '2.4.2.0-236', '--conf-version', '0')] 

[jira] [Created] (AMBARI-21188) Configuration Symlink Is Incorrect After Stack Distribution

2017-06-06 Thread Jonathan Hurley (JIRA)
Jonathan Hurley created AMBARI-21188:


 Summary: Configuration Symlink Is Incorrect After Stack 
Distribution
 Key: AMBARI-21188
 URL: https://issues.apache.org/jira/browse/AMBARI-21188
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Affects Versions: 3.0.0
Reporter: Jonathan Hurley
Assignee: Jonathan Hurley
Priority: Critical
 Fix For: 3.0.0


The configuration symlinks for a component after installation should reflect 
something similar to the following:

/etc/component/conf -> /usr/hdp/current/component/conf
/usr/hdp/current/component/conf -> /etc/component//conf

For example:
{noformat}
[root@c6403 ~]# ll /etc/zookeeper/
total 12
drwxr-xr-x 3 root  root   4096 Jun  5 20:19 2.4.2.0-236
drwxr-xr-x 3 root  root   4096 Jun  5 20:38 2.6.0.0-334
lrwxrwxrwx 1 root  root 26 Jun  5 20:38 conf -> 
/usr/hdp/current/zookeeper-server/conf
drwxr-xr-x 2 zookeeper hadoop 4096 Jun  5 20:17 conf.backup

[root@c6403 ~]# ll /usr/hdp/current/zookeeper-server/conf
lrwxrwxrwx 1 root root 28 Jun  5 20:38 /usr/hdp/current/zookeeper-server/conf 
-> /etc/zookeeper/2.6.0.0-334/0
{noformat}

This is how the structure looks after a normal installation today. 
However, it seems that distributing a new stack breaks this:

{code}
[root@c6403 zookeeper]# ll /etc/zookeeper/
total 12
drwxr-xr-x 3 root  root   4096 Jun  5 20:19 2.4.2.0-236
drwxr-xr-x 3 root  root   4096 Jun  5 20:38 2.6.0.0-334
lrwxrwxrwx 1 root  root 26 Jun  5 20:38 conf -> 
/etc/zookeeper/conf.backup
drwxr-xr-x 2 zookeeper hadoop 4096 Jun  5 20:17 conf.backup
{code}

The {{conf}} symlink is now pointing to {{conf.backup}}, which is the interim 
location.
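The broken state can be detected mechanically. A minimal sketch, assuming only what the listings above show (a `conf` symlink that should not resolve to a `conf.backup` directory); the function name is ours:

```python
# Minimal sketch: report whether a conf symlink points at the interim
# conf.backup directory instead of the /usr/hdp/current/... location.
import os


def conf_link_is_broken(conf_link):
    """Return True if conf_link is a symlink whose target is a
    conf.backup directory, i.e. the interim location described above."""
    if not os.path.islink(conf_link):
        return False
    target = os.readlink(conf_link)
    return os.path.basename(target.rstrip("/")) == "conf.backup"
```

For example, `conf_link_is_broken("/etc/zookeeper/conf")` would return True in the broken listing above and False in the healthy one.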

{noformat:title=Initial Install}
2017-06-05 20:19:17,137 - Backing up /etc/zookeeper/conf to 
/etc/zookeeper/conf.backup if destination doesn't exist already.
2017-06-05 20:19:17,137 - Execute[('cp', '-R', '-p', '/etc/zookeeper/conf', 
'/etc/zookeeper/conf.backup')] {'not_if': 'test -e /etc/zookeeper/conf.backup', 
'sudo': True}
2017-06-05 20:19:17,150 - Checking if need to create versioned conf dir 
/etc/zookeeper/2.4.2.0-236/0
2017-06-05 20:19:17,151 - call[('ambari-python-wrap', '/usr/bin/conf-select', 
'dry-run-create', '--package', 'zookeeper', '--stack-version', '2.4.2.0-236', 
'--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 
'stderr': -1}
2017-06-05 20:19:17,173 - call returned (0, '/etc/zookeeper/2.4.2.0-236/0', '')
2017-06-05 20:19:17,173 - Package zookeeper will have new conf directories: 
/etc/zookeeper/2.4.2.0-236/0
2017-06-05 20:19:17,173 - Checking if need to create versioned conf dir 
/etc/zookeeper/2.4.2.0-236/0
2017-06-05 20:19:17,174 - call[('ambari-python-wrap', '/usr/bin/conf-select', 
'create-conf-dir', '--package', 'zookeeper', '--stack-version', '2.4.2.0-236', 
'--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 
'stderr': -1}
2017-06-05 20:19:17,197 - call returned (0, '/etc/zookeeper/2.4.2.0-236/0', '')
2017-06-05 20:19:17,197 - Directory['/etc/zookeeper/2.4.2.0-236/0'] 
{'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2017-06-05 20:19:17,198 - Seeding versioned configuration directories for 
zookeeper
2017-06-05 20:19:17,198 - Execute['ambari-sudo.sh  -H -E cp -R -p -v 
/usr/hdp/current/zookeeper-client/conf/* /etc/zookeeper/2.4.2.0-236/0'] 
{'logoutput': True}
`/usr/hdp/current/zookeeper-client/conf/configuration.xsl' -> 
`/etc/zookeeper/2.4.2.0-236/0/configuration.xsl'
`/usr/hdp/current/zookeeper-client/conf/log4j.properties' -> 
`/etc/zookeeper/2.4.2.0-236/0/log4j.properties'
`/usr/hdp/current/zookeeper-client/conf/zoo.cfg' -> 
`/etc/zookeeper/2.4.2.0-236/0/zoo.cfg'
`/usr/hdp/current/zookeeper-client/conf/zoo_sample.cfg' -> 
`/etc/zookeeper/2.4.2.0-236/0/zoo_sample.cfg'
`/usr/hdp/current/zookeeper-client/conf/zookeeper-env.cmd' -> 
`/etc/zookeeper/2.4.2.0-236/0/zookeeper-env.cmd'
`/usr/hdp/current/zookeeper-client/conf/zookeeper-env.sh' -> 
`/etc/zookeeper/2.4.2.0-236/0/zookeeper-env.sh'
2017-06-05 20:19:17,204 - Execute['ambari-sudo.sh  -H -E cp -R -p 
/etc/zookeeper/conf/* /etc/zookeeper/2.4.2.0-236/0'] {'only_if': 'ls -d 
/etc/zookeeper/conf/*'}
2017-06-05 20:19:17,213 - Checking if need to create versioned conf dir 
/etc/zookeeper/2.4.2.0-236/0
2017-06-05 20:19:17,213 - call[('ambari-python-wrap', '/usr/bin/conf-select', 
'create-conf-dir', '--package', 'zookeeper', '--stack-version', '2.4.2.0-236', 
'--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 
'stderr': -1}
2017-06-05 20:19:17,231 - call returned (1, '/etc/zookeeper/2.4.2.0-236/0 exist 
already', '')
2017-06-05 20:19:17,231 - checked_call[('ambari-python-wrap', 
'/usr/bin/conf-select', 'set-conf-dir', '--package', 'zookeeper', 
'--stack-version', '2.4.2.0-236', '--conf-version', '0')] {'logoutput': False, 
'sudo': True, 

[jira] [Updated] (AMBARI-21187) Get rid deprecated jdk install in the Dockerfile of Log Search

2017-06-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/AMBARI-21187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Olivér Szabó updated AMBARI-21187:
--
Attachment: (was: AMBARI-21187.patch)

> Get rid deprecated jdk install in the Dockerfile of Log Search
> --
>
> Key: AMBARI-21187
> URL: https://issues.apache.org/jira/browse/AMBARI-21187
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-logsearch
>Affects Versions: 3.0.0
>Reporter: Olivér Szabó
>Assignee: Olivér Szabó
> Fix For: 3.0.0
>
>






[jira] [Updated] (AMBARI-21187) Get rid deprecated jdk install in the Dockerfile of Log Search

2017-06-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/AMBARI-21187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Olivér Szabó updated AMBARI-21187:
--
Attachment: AMBARI-21187.patch

> Get rid deprecated jdk install in the Dockerfile of Log Search
> --
>
> Key: AMBARI-21187
> URL: https://issues.apache.org/jira/browse/AMBARI-21187
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-logsearch
>Affects Versions: 3.0.0
>Reporter: Olivér Szabó
>Assignee: Olivér Szabó
> Fix For: 3.0.0
>
> Attachments: AMBARI-21187.patch
>
>






[jira] [Assigned] (AMBARI-21187) Get rid deprecated jdk install in the Dockerfile of Log Search

2017-06-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/AMBARI-21187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Olivér Szabó reassigned AMBARI-21187:
-

Assignee: Olivér Szabó

> Get rid deprecated jdk install in the Dockerfile of Log Search
> --
>
> Key: AMBARI-21187
> URL: https://issues.apache.org/jira/browse/AMBARI-21187
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-logsearch
>Affects Versions: 3.0.0
>Reporter: Olivér Szabó
>Assignee: Olivér Szabó
> Fix For: 3.0.0
>
>






[jira] [Created] (AMBARI-21187) Get rid deprecated jdk install in the Dockerfile of Log Search

2017-06-06 Thread JIRA
Olivér Szabó created AMBARI-21187:
-

 Summary: Get rid deprecated jdk install in the Dockerfile of Log 
Search
 Key: AMBARI-21187
 URL: https://issues.apache.org/jira/browse/AMBARI-21187
 Project: Ambari
  Issue Type: Bug
  Components: ambari-logsearch
Affects Versions: 3.0.0
Reporter: Olivér Szabó
 Fix For: 3.0.0








[jira] [Updated] (AMBARI-21185) False positive unused import for nested class referenced only in Javadoc

2017-06-06 Thread Doroszlai, Attila (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated AMBARI-21185:
---
Status: Patch Available  (was: Open)

> False positive unused import for nested class referenced only in Javadoc
> 
>
> Key: AMBARI-21185
> URL: https://issues.apache.org/jira/browse/AMBARI-21185
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 3.0.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
> Fix For: 3.0.0
>
> Attachments: AMBARI-21185.patch
>
>
> Checkstyle reports unused import:
> {code}
> [ERROR] 
> ambari-server/src/test/java/org/apache/ambari/server/controller/internal/UpgradeResourceProviderTest.java:99:8:
>  Unused import - org.apache.ambari.server.state.stack.upgrade.StageWrapper. 
> [UnusedImports]
> Audit done.
> {code}
> However, StageWrapper is referenced in the JavaDoc. IDEs like Eclipse don't 
> warn on this import since it's technically used in the JavaDoc generation:
> {code}
>   /**
>* Tests that commands created for {@link StageWrapper.Type#RU_TASKS} set 
> the
>* service and component on the {@link ExecutionCommand}.
> {code}
> This is an upstream bug: https://github.com/checkstyle/checkstyle/issues/3098 
> and https://github.com/checkstyle/checkstyle/issues/3453.
> I think the best thing we can do here is to {{@link}} by full class name in 
> the JavaDoc and avoid the import. This way we avoid both the Checkstyle error 
> when the import is present (due to the "unused" import) and the IDE warning 
> when the import is missing (due to an unresolved class).





[jira] [Updated] (AMBARI-21185) False positive unused import for nested class referenced only in Javadoc

2017-06-06 Thread Doroszlai, Attila (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated AMBARI-21185:
---
Attachment: AMBARI-21185.patch

> False positive unused import for nested class referenced only in Javadoc
> 
>
> Key: AMBARI-21185
> URL: https://issues.apache.org/jira/browse/AMBARI-21185
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 3.0.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
> Fix For: 3.0.0
>
> Attachments: AMBARI-21185.patch
>
>
> Checkstyle reports unused import:
> {code}
> [ERROR] 
> ambari-server/src/test/java/org/apache/ambari/server/controller/internal/UpgradeResourceProviderTest.java:99:8:
>  Unused import - org.apache.ambari.server.state.stack.upgrade.StageWrapper. 
> [UnusedImports]
> Audit done.
> {code}
> However, StageWrapper is referenced in the JavaDoc. IDEs like Eclipse don't 
> warn on this import since it's technically used in the JavaDoc generation:
> {code}
>   /**
>* Tests that commands created for {@link StageWrapper.Type#RU_TASKS} set 
> the
>* service and component on the {@link ExecutionCommand}.
> {code}
> This is an upstream bug: https://github.com/checkstyle/checkstyle/issues/3098 
> and https://github.com/checkstyle/checkstyle/issues/3453.
> I think the best thing we can do here is to {{@link}} by full class name in 
> the JavaDoc and avoid the import. This way we avoid both the Checkstyle error 
> when the import is present (due to the "unused" import) and the IDE warning 
> when the import is missing (due to an unresolved class).





[jira] [Commented] (AMBARI-21122) Part One: Specify the script directly in alert target for script-based alert dispatchers

2017-06-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16039025#comment-16039025
 ] 

Hudson commented on AMBARI-21122:
-

FAILURE: Integrated in Jenkins build Ambari-trunk-Commit #7580 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/7580/])
AMBARI-21122 - Part One:  Specify the script directly in alert target (jhurley: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=4247f6919c329fc3da9e4ea8a0aa62aacd4793e3])
* (edit) 
ambari-server/src/test/java/org/apache/ambari/server/notifications/dispatchers/AlertScriptDispatcherTest.java
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/configuration/Configuration.java
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/notifications/dispatchers/AlertScriptDispatcher.java


> Part One:  Specify the script directly in alert target for script-based alert 
> dispatchers
> -
>
> Key: AMBARI-21122
> URL: https://issues.apache.org/jira/browse/AMBARI-21122
> Project: Ambari
>  Issue Type: Technical task
>Affects Versions: trunk
>Reporter: Yao Lei
>Assignee: Yao Lei
> Fix For: 3.0.0
>
> Attachments: AMBARI-21122.1.patch, AMBARI-21122.2.patch, 
> AMBARI-21122.3.patch
>
>
> *Java Code Part*
> This patch adds support for using the property 
> *ambari.dispatch-property.script.filename* in an alert target to tell 
> AlertScriptDispatcher to look up the script by filename, by default in the 
> /var/lib/ambari-server/resources/scripts directory. This directory can also 
> be changed in ambari.properties via the 
> *notification.dispatch.alert.script.directory* property.
> Execute a command like the following to create an alert target that 
> includes this property:
> {code}
> POST api/v1/alert_targets
> {
>   "AlertTarget": {
> "name": "syslogger",
> "description": "Syslog Target",
> "notification_type": "ALERT_SCRIPT",
> "global": false,
> "groups": [1,3],
> "alert_states":["WARNING","CRITICAL","UNKNOWN"],
> "properties": {
>"ambari.dispatch-property.script.filename": "foo.py"
> }
>   }
> }
> {code}
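For illustration, a script registered this way might look like the sketch below. The exact arguments Ambari's AlertScriptDispatcher passes to the script are not specified in this thread, so the argument handling here is an assumption; the script simply logs whatever it receives.

```python
# Hypothetical foo.py alert-handler sketch. What arguments Ambari's
# AlertScriptDispatcher actually passes is an assumption here; the
# script just appends whatever it receives to a log file.
import sys


def handle_alert(argv, log_path="/tmp/ambari_alerts.log"):
    """Append the raw alert arguments to a log file, one line per
    invocation, and return the formatted line."""
    line = " | ".join(argv)
    with open(log_path, "a") as fh:
        fh.write(line + "\n")
    return line


if __name__ == "__main__":
    handle_alert(sys.argv[1:])
```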





[jira] [Commented] (AMBARI-21186) Install: Selective Client Install/Delete for Hosts Page

2017-06-06 Thread Ishan Bhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16039023#comment-16039023
 ] 

Ishan Bhatt commented on AMBARI-21186:
--

Ember unit tests to be added.

> Install: Selective Client Install/Delete for Hosts Page
> ---
>
> Key: AMBARI-21186
> URL: https://issues.apache.org/jira/browse/AMBARI-21186
> Project: Ambari
>  Issue Type: New Feature
>  Components: ambari-web
>Reporter: Ishan Bhatt
>Assignee: Ishan Bhatt
> Fix For: trunk, 3.0.0
>
> Attachments: AMBARI-21186.patch, manage_clients_popup.png
>
>
> Many customers do not want to install all of the clients on their edge nodes, 
> just a subset. We should give users the ability to pick and choose which 
> clients are installed, instead of making it an all-or-nothing scenario. 
> Likewise, if a client installation fails, they want to be able to re-install 
> that single client, or remove a single client. The issue occurs today because 
> we force all clients to be installed or re-installed. We need to provide 
> finer granularity to allow people to pick single clients, like just HDFS, or 
> just HBase.
> After Installation:
> On a specific host you should be able to add (All Clients, or Specific 
> Clients)
> On a specific host you should be able to remove (All Clients, or Specific 
> Clients)
> If an individual client install fails you should be able to (Retry Client 
> Install)





[jira] [Updated] (AMBARI-21186) Install: Selective Client Install/Delete for Hosts Page

2017-06-06 Thread Ishan Bhatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Bhatt updated AMBARI-21186:
-
Status: Patch Available  (was: Open)

> Install: Selective Client Install/Delete for Hosts Page
> ---
>
> Key: AMBARI-21186
> URL: https://issues.apache.org/jira/browse/AMBARI-21186
> Project: Ambari
>  Issue Type: New Feature
>  Components: ambari-web
>Reporter: Ishan Bhatt
>Assignee: Ishan Bhatt
> Fix For: trunk, 3.0.0
>
> Attachments: AMBARI-21186.patch, manage_clients_popup.png
>
>


[jira] [Updated] (AMBARI-21186) Install: Selective Client Install/Delete for Hosts Page

2017-06-06 Thread Ishan Bhatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Bhatt updated AMBARI-21186:
-
Attachment: manage_clients_popup.png

> Install: Selective Client Install/Delete for Hosts Page
> ---
>
> Key: AMBARI-21186
> URL: https://issues.apache.org/jira/browse/AMBARI-21186
> Project: Ambari
>  Issue Type: New Feature
>  Components: ambari-web
>Reporter: Ishan Bhatt
>Assignee: Ishan Bhatt
> Fix For: trunk, 3.0.0
>
> Attachments: AMBARI-21186.patch, manage_clients_popup.png
>
>


[jira] [Updated] (AMBARI-21186) Install: Selective Client Install/Delete for Hosts Page

2017-06-06 Thread Ishan Bhatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Bhatt updated AMBARI-21186:
-
Attachment: (was: Screen Shot 2017-06-06 at 10.51.11 AM.png)

> Install: Selective Client Install/Delete for Hosts Page
> ---
>
> Key: AMBARI-21186
> URL: https://issues.apache.org/jira/browse/AMBARI-21186
> Project: Ambari
>  Issue Type: New Feature
>  Components: ambari-web
>Reporter: Ishan Bhatt
>Assignee: Ishan Bhatt
> Fix For: trunk, 3.0.0
>
> Attachments: AMBARI-21186.patch, manage_clients_popup.png
>
>


[jira] [Updated] (AMBARI-21186) Install: Selective Client Install/Delete for Hosts Page

2017-06-06 Thread Ishan Bhatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Bhatt updated AMBARI-21186:
-
Attachment: AMBARI-21186.patch

> Install: Selective Client Install/Delete for Hosts Page
> ---
>
> Key: AMBARI-21186
> URL: https://issues.apache.org/jira/browse/AMBARI-21186
> Project: Ambari
>  Issue Type: New Feature
>  Components: ambari-web
>Reporter: Ishan Bhatt
>Assignee: Ishan Bhatt
> Fix For: trunk, 3.0.0
>
> Attachments: AMBARI-21186.patch, Screen Shot 2017-06-06 at 10.51.11 
> AM.png
>
>


[jira] [Updated] (AMBARI-21186) Install: Selective Client Install/Delete for Hosts Page

2017-06-06 Thread Ishan Bhatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Bhatt updated AMBARI-21186:
-
Attachment: Screen Shot 2017-06-06 at 10.51.11 AM.png

> Install: Selective Client Install/Delete for Hosts Page
> ---
>
> Key: AMBARI-21186
> URL: https://issues.apache.org/jira/browse/AMBARI-21186
> Project: Ambari
>  Issue Type: New Feature
>  Components: ambari-web
>Reporter: Ishan Bhatt
>Assignee: Ishan Bhatt
> Fix For: trunk, 3.0.0
>
> Attachments: AMBARI-21186.patch, Screen Shot 2017-06-06 at 10.51.11 
> AM.png
>
>


[jira] [Created] (AMBARI-21186) Install: Selective Client Install/Delete for Hosts Page

2017-06-06 Thread Ishan Bhatt (JIRA)
Ishan Bhatt created AMBARI-21186:


 Summary: Install: Selective Client Install/Delete for Hosts Page
 Key: AMBARI-21186
 URL: https://issues.apache.org/jira/browse/AMBARI-21186
 Project: Ambari
  Issue Type: New Feature
  Components: ambari-web
Reporter: Ishan Bhatt
Assignee: Ishan Bhatt
 Fix For: trunk, 3.0.0




[jira] [Updated] (AMBARI-21180) Component command changes must include version numbers for all services

2017-06-06 Thread Nate Cole (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nate Cole updated AMBARI-21180:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Component command changes must include version numbers for all services
> ---
>
> Key: AMBARI-21180
> URL: https://issues.apache.org/jira/browse/AMBARI-21180
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Reporter: Nate Cole
>Assignee: Nate Cole
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: AMBARI-21180.patch
>
>
> Component scripts need to understand when they have been patched, and supply 
> the correct version number such that the correct scripts are used.
> This will involve sending the component version information in commands.  
> Some components that rely on others will need to construct commands correctly 
> (say, Oozie scripts that require the HDP_VERSION environment variable).
> Remove: availableServices from the command json
> Add: componentVersionMap as a structure of component name to version string





[jira] [Commented] (AMBARI-20875) Removing A Service Causes DB Verification To Produce Warnings

2017-06-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-20875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038975#comment-16038975
 ] 

Hudson commented on AMBARI-20875:
-

FAILURE: Integrated in Jenkins build Ambari-branch-2.5 #1569 (See 
[https://builds.apache.org/job/Ambari-branch-2.5/1569/])
AMBARI-20875. Removing A Service Causes DB Verification To Produce 
(dlysnichenko: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=22ccdbf670f9f0888e975a73a44f264e929d218c])
* (edit) 
ambari-server/src/test/java/org/apache/ambari/server/upgrade/UpgradeCatalog252Test.java


> Removing A Service Causes DB Verification To Produce Warnings
> -
>
> Key: AMBARI-20875
> URL: https://issues.apache.org/jira/browse/AMBARI-20875
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.2
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
> Fix For: 2.5.2
>
> Attachments: AMBARI-20875.patch
>
>
> When removing a service, the configurations for that service are kept for 
> historical purposes, but their various associations in the database are 
> removed (specifically, the {{serviceconfigmapping}} relationships).
> After removing a service, the orphaned configurations now cause a warning to 
> be displayed on Ambari Server startup:
> {noformat}
> 2017-04-06 17:15:24,003  WARN - You have config(s): 
> ranger-storm-policymgr-ssl-version1467149286586,atlas-env-version1471883877194,falcon-env-version1467044148480,storm-site-version1467149286586,storm-site-version1474944944095,ranger-storm-plugin-properties-version1467149286586,hana_hadoop-env-version1476989318735,hana_hadoop-env-version1468951412523,hanaes-site-version1475773173499,hanaes-site-version1477639131416,atlas-env-version1471880496396,falcon-startup.properties-version1474944962583,ranger-storm-security-version1467149286586,falcon-env-version1474944962517,application-properties-version1471883877194,hanaes-site-version1468951412523,application-properties-version1471992143777,application-properties-version1471880496396,hana_hadoop-env-version1475790068354,hana_hadoop-env-version1477639131416,falcon-runtime.properties-version1467044148480,atlas-env-version1471992143777,hana_hadoop-env-version1475773173499,storm-env-version1467149286586,hanaes-site-version1475790068354,hanaes-site-version1476902714170,atlas-env-version1471883827584,hana_hadoop-env-version1477695406433,hanaes-site-version1476989583427,falcon-log4j-version1,falcon-env-version1474944962457,hanaes-site-version1468959251565,falcon-client.properties-version1,atlas-env-version1471993347065,falcon-startup.properties-version1467044148480,storm-cluster-log4j-version1467149286586,hanaes-site-version1472285532383,hana_hadoop-env-version1477695089738,hana_hadoop-env-version1468959251565,hana_hadoop-env-version1476989821279,atlas-log4j-version1,storm-site-version1467612840864,storm-worker-log4j-version1467149286586,ranger-storm-audit-version1467149286586,application-properties-version1471993347065,application-properties-version1471883827584,hana_hadoop-env-version1477695579450
>  that is(are) not mapped (in serviceconfigmapping table) to any service!
> {noformat}
> These orphaned configurations have entries in both {{clusterconfig}} and 
> {{clusterconfigmapping}} but are otherwise not referenced anywhere. They 
> don't hurt anything, but do trigger this warning since we can't determine if 
> they _should_ have mappings in {{serviceconfigmapping}}.
> A few options:
> - When removing a service, remove configurations as well, leaving no orphans. 
> Some would argue that this is a bad move since re-adding the service later 
> would allow you to see the old configurations. I do not believe this is true 
> since the old configurations are never associated with the new service's 
> {{serviceconfig}} or {{serviceconfigmapping}}.
> - Make the warning smarter somehow to ignore these since it's expected they 
> are orphaned.
> -- Somehow determine the service which should own the config and see if it 
> exists in the cluster?
> -- Add a new column to {{clusterconfig}} to mark it as deleted?
> To clean these warnings, we had to:
> {code}
> CREATE TEMPORARY TABLE IF NOT EXISTS orphaned_configs AS
> (SELECT
> cc.config_id,
> cc.type_name,
> cc.version_tag
> FROM clusterconfig cc, clusterconfigmapping ccm
> WHERE cc.config_id NOT IN (SELECT
> scm.config_id
> FROM serviceconfigmapping scm)
> AND cc.type_name != 'cluster-env'
> AND cc.type_name = ccm.type_name
> AND cc.version_tag = ccm.version_tag);
> DELETE FROM clusterconfigmapping
> WHERE EXISTS
> (SELECT 1 FROM orphaned_configs
> WHERE clusterconfigmapping.type_name = orphaned_configs.type_name AND 
> clusterconfigmapping.version_tag = orphaned_configs.version_tag);
> DELETE FROM clusterconfig WHERE clusterconfig.config_id IN (SELECT config_id 

[jira] [Created] (AMBARI-21185) False positive unused import for nested class referenced only in Javadoc

2017-06-06 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created AMBARI-21185:
--

 Summary: False positive unused import for nested class referenced 
only in Javadoc
 Key: AMBARI-21185
 URL: https://issues.apache.org/jira/browse/AMBARI-21185
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Affects Versions: 3.0.0
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila
 Fix For: 3.0.0


Checkstyle reports unused import:

{code}
[ERROR] 
ambari-server/src/test/java/org/apache/ambari/server/controller/internal/UpgradeResourceProviderTest.java:99:8:
 Unused import - org.apache.ambari.server.state.stack.upgrade.StageWrapper. 
[UnusedImports]
Audit done.
{code}

However, StageWrapper is referenced in the JavaDoc. IDEs like Eclipse don't 
warn on this import, since it's technically used in the JavaDoc generation:
{code}
  /**
   * Tests that commands created for {@link StageWrapper.Type#RU_TASKS} set the
   * service and component on the {@link ExecutionCommand}.
{code}

This is an upstream bug: https://github.com/checkstyle/checkstyle/issues/3098 
and https://github.com/checkstyle/checkstyle/issues/3453.

I think the best thing we can do here is to {{@link}} by fully-qualified 
classname in the JavaDoc and avoid the import. This way we avoid both the 
Checkstyle error when the import is present (due to the "unused" import) and 
the IDE warning when the import is missing (due to an unresolved class).
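A minimal illustration of that workaround (the class and method below are made-up placeholders, not the actual test class from the patch): the nested type is referenced by its fully-qualified name in the Javadoc, so no import is needed and Checkstyle has nothing to flag.

```java
// Hypothetical example of the workaround: the nested class is referenced
// in Javadoc by its fully-qualified name, so no import statement is needed
// and Checkstyle reports no "unused" import.
public class JavadocLinkExample {

    /**
     * Tests that commands created for
     * {@link org.apache.ambari.server.state.stack.upgrade.StageWrapper.Type#RU_TASKS}
     * set the service and component on the command.
     */
    public static boolean placeholderTest() {
        // The body is irrelevant; the point is that the Javadoc above
        // compiles and documents correctly without importing StageWrapper.
        return true;
    }

    public static void main(String[] args) {
        System.out.println(placeholderTest());
    }
}
```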





[jira] [Updated] (AMBARI-21168) Deleting host from cluster leaves Ambari in inconsistent state (intermittently)

2017-06-06 Thread Sandor Magyari (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandor Magyari updated AMBARI-21168:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Deleting host from cluster leaves Ambari in inconsistent state 
> (intermittently)
> ---
>
> Key: AMBARI-21168
> URL: https://issues.apache.org/jira/browse/AMBARI-21168
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Sandor Magyari
>Assignee: Sandor Magyari
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21168.patch
>
>
> When deleting several components and hosts from Ambari, under some 
> circumstances a serviceComponentHost can be deleted from the 
> cache while still being present in the DB. Since there are no DB errors in 
> the logs, it probably gets reinserted by a concurrent merge. For example, 
> HeartbeatProcessor may update the state while components are being deleted, 
> which can result in such a situation.
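The check-then-act hazard described above can be sketched outside Ambari (this is not Ambari code; the map and names are illustrative): a heartbeat-style update that blindly merges state can resurrect an entry that a concurrent delete just removed, while a merge restricted to still-present entries cannot.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the delete-vs-heartbeat race, not Ambari code.
public class DeleteRaceSketch {

    // stands in for the server-side cache of live components
    static final Map<String, String> cache = new HashMap<>();

    // hazardous variant: a blind put re-creates a deleted entry
    static void heartbeatUpdate(String component, String state) {
        cache.put(component, state);
    }

    // safer variant: update only components that are still present
    static void heartbeatUpdateIfPresent(String component, String state) {
        cache.computeIfPresent(component, (k, v) -> state);
    }

    public static void main(String[] args) {
        cache.put("DATANODE", "STARTED");
        cache.remove("DATANODE");             // component deleted
        heartbeatUpdate("DATANODE", "LOST");  // interleaved heartbeat resurrects it
        System.out.println(cache.containsKey("DATANODE")); // true: inconsistent

        cache.remove("DATANODE");
        heartbeatUpdateIfPresent("DATANODE", "LOST");      // no resurrection
        System.out.println(cache.containsKey("DATANODE")); // false
    }
}
```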





[jira] [Created] (AMBARI-21184) When trying to Add Hiveserver2 service on a node, we just get a pop-up dialog box, and then a spinning wheel. Unable to click "Confirm Add".

2017-06-06 Thread Antonenko Alexander (JIRA)
Antonenko Alexander created AMBARI-21184:


 Summary: When trying to Add Hiveserver2 service on a node, we just 
get a pop-up dialog box, and then a spinning wheel. Unable to click "Confirm 
Add".
 Key: AMBARI-21184
 URL: https://issues.apache.org/jira/browse/AMBARI-21184
 Project: Ambari
  Issue Type: Bug
  Components: ambari-web
Affects Versions: 2.5.2
Reporter: Antonenko Alexander
Assignee: Antonenko Alexander
 Fix For: 2.5.2


{noformat}
Uncaught TypeError: propertyHosts.filter is not a function 
at Class.updateHostsListValue 
(http://space1.example.com:8080/javascripts/app.js:178736:38) 
at Class.updateSiteObj 
(http://space1.example.com:8080/javascripts/app.js:178933:16) 
at Class. 
(http://space1.example.com:8080/javascripts/app.js:24852:23) 
at Array.forEach (native) 
at Class. 
(http://space1.example.com:8080/javascripts/app.js:24844:36) 
at Array.forEach (native) 
at Class.onLoadHiveConfigs 
(http://space1.example.com:8080/javascripts/app.js:24842:60) 
at Class.opt.success 
(http://space1.example.com:8080/javascripts/app.js:175771:38) 
at o (http://space1.example.com:8080/javascripts/vendor.js:106:14733) 
at Object.fireWith [as resolveWith] 
(http://space1.example.com:8080/javascripts/vendor.js:106:15502)
{noformat}
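The failing code lives in ambari-web and is JavaScript; purely as a language-neutral illustration of the defensive coercion such a TypeError usually calls for (the method and names below are invented, not the actual fix), a value that may arrive as a single comma-separated string or as a list can be normalized before any filter-style operation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative only: "propertyHosts.filter is not a function" suggests a
// value expected to be an array sometimes arrives as a plain string. The
// usual cure is to normalize to a list before list operations.
public class HostListNormalizer {

    @SuppressWarnings("unchecked")
    static List<String> toHostList(Object propertyHosts) {
        if (propertyHosts == null) {
            return new ArrayList<>();                       // nothing configured
        }
        if (propertyHosts instanceof List) {
            return new ArrayList<>((List<String>) propertyHosts);
        }
        // single comma-separated string: split into individual hosts
        return new ArrayList<>(Arrays.asList(propertyHosts.toString().split(",")));
    }

    public static void main(String[] args) {
        System.out.println(toHostList("host1,host2"));          // [host1, host2]
        System.out.println(toHostList(Arrays.asList("host1"))); // [host1]
        System.out.println(toHostList(null));                   // []
    }
}
```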





[jira] [Commented] (AMBARI-20884) Compilation error due to import from relocated package

2017-06-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038943#comment-16038943
 ] 

Hudson commented on AMBARI-20884:
-

FAILURE: Integrated in Jenkins build Ambari-trunk-Commit #7579 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/7579/])
AMBARI-20884. Compilation error due to import from relocated package 
(adoroszlai: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=e61fea51b5ffb9c74f746810713a7d9f1f27184f])
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/orm/entities/UpgradeHistoryEntity.java
* (edit) 
ambari-server/src/test/java/org/apache/ambari/server/serveraction/upgrades/UpgradeActionTest.java
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/orm/entities/UpgradeEntity.java


> Compilation error due to import from relocated package
> --
>
> Key: AMBARI-20884
> URL: https://issues.apache.org/jira/browse/AMBARI-20884
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 3.0.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
> Fix For: 3.0.0
>
> Attachments: AMBARI-20884.patch, AMBARI-20884.patch
>
>
> Hadoop QA fails to compile ambari-server trunk:
> {noformat:title=https://builds.apache.org/job/Ambari-trunk-test-patch/11521/artifact/patch-work/trunkJavacWarnings.txt}
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ClusterStackVersionResourceProvider.java:[90,71]
>  package org.apache.hadoop.metrics2.sink.relocated.google.common.collect does 
> not exist
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/serveraction/upgrades/AbstractUpgradeServerAction.java:[29,71]
>  package org.apache.hadoop.metrics2.sink.relocated.google.common.collect does 
> not exist
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ClusterStackVersionResourceProvider.java:[396,24]
>  cannot find symbol
>   symbol:   variable Lists
>   location: class 
> org.apache.ambari.server.controller.internal.ClusterStackVersionResourceProvider
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/serveraction/upgrades/AbstractUpgradeServerAction.java:[70,14]
>  cannot find symbol
>   symbol:   variable Sets
>   location: class 
> org.apache.ambari.server.serveraction.upgrades.AbstractUpgradeServerAction
> [INFO] 4 errors 
> {noformat}





[jira] [Created] (AMBARI-21183) Removal of INIT Repository State from Web Client

2017-06-06 Thread Antonenko Alexander (JIRA)
Antonenko Alexander created AMBARI-21183:


 Summary: Removal of INIT Repository State from Web Client
 Key: AMBARI-21183
 URL: https://issues.apache.org/jira/browse/AMBARI-21183
 Project: Ambari
  Issue Type: Bug
  Components: ambari-web
Affects Versions: 3.0.0
Reporter: Antonenko Alexander
Assignee: Antonenko Alexander
 Fix For: 3.0.0


AMBARI-21179 removed the unused {{INIT}} state from repository version 
distributions. It seems the web client still uses this field in 
several places. Repositories that were previously in the {{INIT}} state 
now report {{NOT_REQUIRED}}:

{code}
{
  "href": "http://localhost:8080/api/v1/clusters/c1/stack_versions/2",
  "ClusterStackVersions": {
"cluster_name": "c1",
"id": 2,
"repository_version": 2,
"stack": "HDP",
"state": "NOT_REQUIRED",
"version": "2.6",
"host_states": {
  "CURRENT": [],
  "INSTALLED": [],
  "INSTALLING": [],
  "INSTALL_FAILED": [],
  "NOT_REQUIRED": [],
  "OUT_OF_SYNC": []
}
  },
  "repository_versions": [
{
  "href": 
"http://localhost:8080/api/v1/clusters/c1/stack_versions/2/repository_versions/2",
  "RepositoryVersions": {
"id": 2,
"stack_name": "HDP",
"stack_version": "2.6"
  }
}
  ]
}
{code}







[jira] [Comment Edited] (AMBARI-20875) Removing A Service Causes DB Verification To Produce Warnings

2017-06-06 Thread Dmitry Lysnichenko (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-20875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038899#comment-16038899
 ] 

Dmitry Lysnichenko edited comment on AMBARI-20875 at 6/6/17 1:52 PM:
-

Committed to branch-2.5 as well
To https://git-wip-us.apache.org/repos/asf/ambari.git
   9ed6d843e9..22ccdbf670  branch-2.5 -> branch-2.5



was (Author: dmitriusan):
Committed to branch-2.5 as well
To https://git-wip-us.apache.org/repos/asf/ambari.git
   9ed6d843e9..5439392f93  branch-2.5 -> branch-2.5



[jira] [Commented] (AMBARI-20875) Removing A Service Causes DB Verification To Produce Warnings

2017-06-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-20875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038901#comment-16038901
 ] 

Hudson commented on AMBARI-20875:
-

FAILURE: Integrated in Jenkins build Ambari-branch-2.5 #1568 (See 
[https://builds.apache.org/job/Ambari-branch-2.5/1568/])
AMBARI-20875. Removing A Service Causes DB Verification To Produce 
(dlysnichenko: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=5439392f9330ec63e420a985543b167bed8efdef])
* (edit) ambari-server/src/main/resources/Ambari-DDL-Oracle-CREATE.sql
* (edit) ambari-server/src/main/resources/Ambari-DDL-MySQL-CREATE.sql
* (add) 
ambari-server/src/main/java/org/apache/ambari/server/upgrade/UpgradeCatalog252.java
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/upgrade/SchemaUpgradeHelper.java
* (edit) ambari-server/src/main/resources/Ambari-DDL-Derby-CREATE.sql
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/state/ServiceImpl.java
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/orm/entities/ClusterConfigEntity.java
* (edit) ambari-server/src/main/resources/Ambari-DDL-SQLAnywhere-CREATE.sql
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/orm/dao/ClusterDAO.java
* (edit) ambari-server/src/main/resources/Ambari-DDL-Postgres-CREATE.sql
* (edit) ambari-server/src/main/resources/Ambari-DDL-SQLServer-CREATE.sql
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/checks/DatabaseConsistencyCheckHelper.java
* (add) 
ambari-server/src/test/java/org/apache/ambari/server/upgrade/UpgradeCatalog252Test.java


> Removing A Service Causes DB Verification To Produce Warnings
> -
>
> Key: AMBARI-20875
> URL: https://issues.apache.org/jira/browse/AMBARI-20875
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.2
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
> Fix For: 2.5.2
>
> Attachments: AMBARI-20875.patch
>
>
> When removing a service, the configurations for that service are kept for 
> historical purposes, but their various associations in the database are 
> removed (specifically, the {{serviceconfigmapping}} relationships).
> After removing a service, the orphaned configurations now cause a warning to 
> be displayed on Ambari Server startup:
> {noformat}
> 2017-04-06 17:15:24,003  WARN - You have config(s): 
> ranger-storm-policymgr-ssl-version1467149286586,atlas-env-version1471883877194,falcon-env-version1467044148480,storm-site-version1467149286586,storm-site-version1474944944095,ranger-storm-plugin-properties-version1467149286586,hana_hadoop-env-version1476989318735,hana_hadoop-env-version1468951412523,hanaes-site-version1475773173499,hanaes-site-version1477639131416,atlas-env-version1471880496396,falcon-startup.properties-version1474944962583,ranger-storm-security-version1467149286586,falcon-env-version1474944962517,application-properties-version1471883877194,hanaes-site-version1468951412523,application-properties-version1471992143777,application-properties-version1471880496396,hana_hadoop-env-version1475790068354,hana_hadoop-env-version1477639131416,falcon-runtime.properties-version1467044148480,atlas-env-version1471992143777,hana_hadoop-env-version1475773173499,storm-env-version1467149286586,hanaes-site-version1475790068354,hanaes-site-version1476902714170,atlas-env-version1471883827584,hana_hadoop-env-version1477695406433,hanaes-site-version1476989583427,falcon-log4j-version1,falcon-env-version1474944962457,hanaes-site-version1468959251565,falcon-client.properties-version1,atlas-env-version1471993347065,falcon-startup.properties-version1467044148480,storm-cluster-log4j-version1467149286586,hanaes-site-version1472285532383,hana_hadoop-env-version1477695089738,hana_hadoop-env-version1468959251565,hana_hadoop-env-version1476989821279,atlas-log4j-version1,storm-site-version1467612840864,storm-worker-log4j-version1467149286586,ranger-storm-audit-version1467149286586,application-properties-version1471993347065,application-properties-version1471883827584,hana_hadoop-env-version1477695579450
>  that is(are) not mapped (in serviceconfigmapping table) to any service!
> {noformat}
> These orphaned configurations have entries in both {{clusterconfig}} and 
> {{clusterconfigmapping}} but are otherwise not referenced anywhere. They 
> don't hurt anything, but do trigger this warning since we can't determine if 
> they _should_ have mappings in {{serviceconfigmapping}}.
> A few options:
> - When removing a service, remove configurations as well, leaving no orphans. 
> Some would argue that this is a bad move since re-adding the service later 
> would allow you to see the old configurations. I do not believe this is true 
> since the old configurations are never associated with the new 

[jira] [Updated] (AMBARI-20875) Removing A Service Causes DB Verification To Produce Warnings

2017-06-06 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-20875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-20875:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to branch-2.5 as well
To https://git-wip-us.apache.org/repos/asf/ambari.git
   9ed6d843e9..5439392f93  branch-2.5 -> branch-2.5


> Removing A Service Causes DB Verification To Produce Warnings
> -
>
> Key: AMBARI-20875
> URL: https://issues.apache.org/jira/browse/AMBARI-20875
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.2
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
> Fix For: 2.5.2
>
> Attachments: AMBARI-20875.patch
>
>
> When removing a service, the configurations for that service are kept for 
> historical purposes, but their various associations in the database are 
> removed (specifically, the {{serviceconfigmapping}} relationships).
> After removing a service, the orphaned configurations now cause a warning to 
> be displayed on Ambari Server startup:
> {noformat}
> 2017-04-06 17:15:24,003  WARN - You have config(s): 
> ranger-storm-policymgr-ssl-version1467149286586,atlas-env-version1471883877194,falcon-env-version1467044148480,storm-site-version1467149286586,storm-site-version1474944944095,ranger-storm-plugin-properties-version1467149286586,hana_hadoop-env-version1476989318735,hana_hadoop-env-version1468951412523,hanaes-site-version1475773173499,hanaes-site-version1477639131416,atlas-env-version1471880496396,falcon-startup.properties-version1474944962583,ranger-storm-security-version1467149286586,falcon-env-version1474944962517,application-properties-version1471883877194,hanaes-site-version1468951412523,application-properties-version1471992143777,application-properties-version1471880496396,hana_hadoop-env-version1475790068354,hana_hadoop-env-version1477639131416,falcon-runtime.properties-version1467044148480,atlas-env-version1471992143777,hana_hadoop-env-version1475773173499,storm-env-version1467149286586,hanaes-site-version1475790068354,hanaes-site-version1476902714170,atlas-env-version1471883827584,hana_hadoop-env-version1477695406433,hanaes-site-version1476989583427,falcon-log4j-version1,falcon-env-version1474944962457,hanaes-site-version1468959251565,falcon-client.properties-version1,atlas-env-version1471993347065,falcon-startup.properties-version1467044148480,storm-cluster-log4j-version1467149286586,hanaes-site-version1472285532383,hana_hadoop-env-version1477695089738,hana_hadoop-env-version1468959251565,hana_hadoop-env-version1476989821279,atlas-log4j-version1,storm-site-version1467612840864,storm-worker-log4j-version1467149286586,ranger-storm-audit-version1467149286586,application-properties-version1471993347065,application-properties-version1471883827584,hana_hadoop-env-version1477695579450
>  that is(are) not mapped (in serviceconfigmapping table) to any service!
> {noformat}
> These orphaned configurations have entries in both {{clusterconfig}} and 
> {{clusterconfigmapping}} but are otherwise not referenced anywhere. They 
> don't hurt anything, but do trigger this warning since we can't determine if 
> they _should_ have mappings in {{serviceconfigmapping}}.
> A few options:
> - When removing a service, remove configurations as well, leaving no orphans. 
> Some would argue that this is a bad move since re-adding the service later 
> would allow you to see the old configurations. I do not believe this is true 
> since the old configurations are never associated with the new service's 
> {{serviceconfig}} or {{serviceconfigmapping}}.
> - Make the warning smarter somehow to ignore these since it's expected they 
> are orphaned.
> -- Somehow determine the service which should own the config and see if it 
> exists in the cluster?
> -- Add a new column to {{clusterconfig}} to mark it as deleted?
> To clean these warnings, we had to:
> {code}
> CREATE TEMPORARY TABLE IF NOT EXISTS orphaned_configs AS
> (SELECT
> cc.config_id,
> cc.type_name,
> cc.version_tag
> FROM clusterconfig cc, clusterconfigmapping ccm
> WHERE cc.config_id NOT IN (SELECT
> scm.config_id
> FROM serviceconfigmapping scm)
> AND cc.type_name != 'cluster-env'
> AND cc.type_name = ccm.type_name
> AND cc.version_tag = ccm.version_tag);
> DELETE FROM clusterconfigmapping
> WHERE EXISTS
> (SELECT 1 FROM orphaned_configs
> WHERE clusterconfigmapping.type_name = orphaned_configs.type_name AND 
> clusterconfigmapping.version_tag = orphaned_configs.version_tag);
> DELETE FROM clusterconfig WHERE clusterconfig.config_id IN (SELECT config_id 
> FROM orphaned_configs);
> SELECT * FROM orphaned_configs;
> DROP TABLE orphaned_configs;
> {code}
> I've considered advanced heuristics based on service metainfo with config 
> 

[jira] [Commented] (AMBARI-21168) Deleting host from cluster leaves Ambari in inconsistent state (intermittently)

2017-06-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16038894#comment-16038894
 ] 

Hudson commented on AMBARI-21168:
-

FAILURE: Integrated in Jenkins build Ambari-branch-2.5 #1567 (See 
[https://builds.apache.org/job/Ambari-branch-2.5/1567/])
AMBARI-21168. Deleting host from cluster leaves Ambari in inconsistent 
(smagyari: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=9ed6d843e929c37314560200555bbab48cc02e2b])
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/state/svccomphost/ServiceComponentHostImpl.java


> Deleting host from cluster leaves Ambari in inconsistent state 
> (intermittently)
> ---
>
> Key: AMBARI-21168
> URL: https://issues.apache.org/jira/browse/AMBARI-21168
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Sandor Magyari
>Assignee: Sandor Magyari
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21168.patch
>
>
> When deleting several components and hosts from Ambari, under some 
> circumstances it can happen that a serviceComponentHost is deleted from the 
> cache but is still present in the DB. Since there are no DB errors in the 
> logs, it probably gets reinserted by a concurrent merge. For example, the 
> HeartbeatProcessor may update the state while components are being deleted, 
> which could result in such a situation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-20749) Ambari data purging

2017-06-06 Thread Sebastian Toader (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-20749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian Toader updated AMBARI-20749:
--
Fix Version/s: 2.5.2

> Ambari data purging
> ---
>
> Key: AMBARI-20749
> URL: https://issues.apache.org/jira/browse/AMBARI-20749
> Project: Ambari
>  Issue Type: Improvement
>  Components: ambari-server
>Affects Versions: 2.5.0
>Reporter: amarnath reddy pappu
>Assignee: Sebastian Toader
>Priority: Critical
> Fix For: 2.5.2
>
>
> Currently there is one option to purge old data in Ambari.
> 1. db-cleanup: this sounds like database clean-up (or deletion); a better 
> word such as "purge" could be used.
> 2. It appears that it currently purges only alert-related tables; it should 
> also consider other tables like host_role_command and execution_commands, 
> and any other tables as well.
> 3. There is no documentation on this option; some customers are trying to 
> hard-delete entries from DB tables and ending up with other issues, so 
> better documentation would help here.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (AMBARI-20749) Ambari data purging

2017-06-06 Thread Sebastian Toader (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-20749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian Toader reassigned AMBARI-20749:
-

Assignee: Sebastian Toader

> Ambari data purging
> ---
>
> Key: AMBARI-20749
> URL: https://issues.apache.org/jira/browse/AMBARI-20749
> Project: Ambari
>  Issue Type: Improvement
>  Components: ambari-server
>Affects Versions: 2.5.0
>Reporter: amarnath reddy pappu
>Assignee: Sebastian Toader
>Priority: Critical
>
> Currently there is one option to purge old data in Ambari.
> 1. db-cleanup: this sounds like database clean-up (or deletion); a better 
> word such as "purge" could be used.
> 2. It appears that it currently purges only alert-related tables; it should 
> also consider other tables like host_role_command and execution_commands, 
> and any other tables as well.
> 3. There is no documentation on this option; some customers are trying to 
> hard-delete entries from DB tables and ending up with other issues, so 
> better documentation would help here.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-21158) Eliminate Maven warnings

2017-06-06 Thread Doroszlai, Attila (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated AMBARI-21158:
---
Component/s: ambari-server

> Eliminate Maven warnings
> 
>
> Key: AMBARI-21158
> URL: https://issues.apache.org/jira/browse/AMBARI-21158
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 3.0.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
> Fix For: 3.0.0
>
> Attachments: AMBARI-21158.patch
>
>
> Get rid of as many Maven warnings as possible:
> {noformat}
> [WARNING] 
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.ambari:ambari-web:pom:2.0.0.0-SNAPSHOT
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin org.codehaus.mojo:exec-maven-plugin @ 
> line 161, column 15
> [WARNING] 
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.ambari:ambari-admin:jar:2.0.0.0-SNAPSHOT
> [WARNING] 'build.plugins.plugin.version' for 
> org.codehaus.mojo:exec-maven-plugin is missing. @ line 91, column 15
> [WARNING] 
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.ambari:ambari-metrics-common:jar:2.0.0.0-SNAPSHOT
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin 
> org.apache.maven.plugins:maven-surefire-plugin @ 
> org.apache.ambari:ambari-metrics:2.0.0.0-SNAPSHOT, 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-Commit/ambari-metrics/pom.xml,
>  line 169, column 15
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin 
> org.codehaus.mojo:build-helper-maven-plugin @ 
> org.apache.ambari:ambari-metrics:2.0.0.0-SNAPSHOT, 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-Commit/ambari-metrics/pom.xml,
>  line 202, column 15
> [WARNING] 'build.plugins.plugin.version' for org.apache.rat:apache-rat-plugin 
> is missing. @ org.apache.ambari:ambari-metrics:2.0.0.0-SNAPSHOT, 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-Commit/ambari-metrics/pom.xml,
>  line 282, column 15
> [WARNING] 'build.plugins.plugin.version' for 
> org.apache.maven.plugins:maven-clean-plugin is missing. @ 
> org.apache.ambari:ambari-metrics:2.0.0.0-SNAPSHOT, 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-Commit/ambari-metrics/pom.xml,
>  line 187, column 15
> [WARNING] 
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.ambari:ambari-metrics-hadoop-sink:jar:2.0.0.0-SNAPSHOT
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin 
> org.apache.maven.plugins:maven-surefire-plugin @ 
> org.apache.ambari:ambari-metrics:2.0.0.0-SNAPSHOT, 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-Commit/ambari-metrics/pom.xml,
>  line 169, column 15
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin 
> org.codehaus.mojo:build-helper-maven-plugin @ 
> org.apache.ambari:ambari-metrics:2.0.0.0-SNAPSHOT, 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-Commit/ambari-metrics/pom.xml,
>  line 202, column 15
> [WARNING] 'build.plugins.plugin.version' for org.apache.rat:apache-rat-plugin 
> is missing. @ org.apache.ambari:ambari-metrics:2.0.0.0-SNAPSHOT, 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-Commit/ambari-metrics/pom.xml,
>  line 282, column 15
> [WARNING] 'build.plugins.plugin.version' for 
> org.apache.maven.plugins:maven-clean-plugin is missing. @ 
> org.apache.ambari:ambari-metrics:2.0.0.0-SNAPSHOT, 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-Commit/ambari-metrics/pom.xml,
>  line 187, column 15
> [WARNING] 
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.ambari:ambari-metrics-flume-sink:jar:2.0.0.0-SNAPSHOT
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin 
> org.apache.maven.plugins:maven-surefire-plugin @ 
> org.apache.ambari:ambari-metrics:2.0.0.0-SNAPSHOT, 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-Commit/ambari-metrics/pom.xml,
>  line 169, column 15
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin 
> org.codehaus.mojo:build-helper-maven-plugin @ 
> org.apache.ambari:ambari-metrics:2.0.0.0-SNAPSHOT, 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-Commit/ambari-metrics/pom.xml,
>  line 202, column 15
> [WARNING] 'build.plugins.plugin.version' for org.apache.rat:apache-rat-plugin 
> is missing. @ 

[jira] [Updated] (AMBARI-20884) Compilation error due to import from relocated package

2017-06-06 Thread Doroszlai, Attila (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated AMBARI-20884:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to 
[trunk|http://git-wip-us.apache.org/repos/asf/ambari/commit/e61fea51].

> Compilation error due to import from relocated package
> --
>
> Key: AMBARI-20884
> URL: https://issues.apache.org/jira/browse/AMBARI-20884
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 3.0.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
> Fix For: 3.0.0
>
> Attachments: AMBARI-20884.patch, AMBARI-20884.patch
>
>
> Hadoop QA fails to compile ambari-server trunk:
> {noformat:title=https://builds.apache.org/job/Ambari-trunk-test-patch/11521/artifact/patch-work/trunkJavacWarnings.txt}
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ClusterStackVersionResourceProvider.java:[90,71]
>  package org.apache.hadoop.metrics2.sink.relocated.google.common.collect does 
> not exist
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/serveraction/upgrades/AbstractUpgradeServerAction.java:[29,71]
>  package org.apache.hadoop.metrics2.sink.relocated.google.common.collect does 
> not exist
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ClusterStackVersionResourceProvider.java:[396,24]
>  cannot find symbol
>   symbol:   variable Lists
>   location: class 
> org.apache.ambari.server.controller.internal.ClusterStackVersionResourceProvider
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/serveraction/upgrades/AbstractUpgradeServerAction.java:[70,14]
>  cannot find symbol
>   symbol:   variable Sets
>   location: class 
> org.apache.ambari.server.serveraction.upgrades.AbstractUpgradeServerAction
> [INFO] 4 errors 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (AMBARI-21164) Upgrades (RU/EU) : "stack.upgrade.bypass.prechecks" config is not honored while doing upgrades with bad entries in "execution_command" table.

2017-06-06 Thread Doroszlai, Attila (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16038880#comment-16038880
 ] 

Doroszlai, Attila commented on AMBARI-21164:


[~dgrinenko], can you please commit the fix for the unused import?  Thanks.

> Upgrades (RU/EU) : "stack.upgrade.bypass.prechecks" config is not honored 
> while doing upgrades with bad entries in "execution_command" table.
> -
>
> Key: AMBARI-21164
> URL: https://issues.apache.org/jira/browse/AMBARI-21164
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Dmytro Grinenko
>Assignee: Dmytro Grinenko
>Priority: Critical
> Fix For: trunk, 2.5.2
>
> Attachments: AMBARI-21164-branch.25.patch, 
> AMBARI-21164-branch.trunk.patch, AMBARI-21164_unused_import.patch
>
>
> It can happen that the host_role_command table gets a null start_time value 
> (or -1, when the command was aborted). This results in a pre-check failure 
> with no possibility to continue the upgrade or bypass the check.
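A hedged sketch of how the offending rows could be spotted before attempting an upgrade. SQLite stands in for the real database, only the relevant columns are modeled, and the task_id column name and data values are assumptions for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE host_role_command (task_id INTEGER, start_time INTEGER)")
# task 1 is healthy; tasks 2 and 3 carry the bad values described above
c.executemany("INSERT INTO host_role_command VALUES (?, ?)",
              [(1, 1496700000000), (2, None), (3, -1)])

# Flag rows whose start_time is NULL or the -1 sentinel left by aborted commands
bad = c.execute("""SELECT task_id FROM host_role_command
                   WHERE start_time IS NULL OR start_time = -1""").fetchall()
print(bad)  # task ids with bad start_time values
```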



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-21164) Upgrades (RU/EU) : "stack.upgrade.bypass.prechecks" config is not honored while doing upgrades with bad entries in "execution_command" table.

2017-06-06 Thread Doroszlai, Attila (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated AMBARI-21164:
---
Attachment: AMBARI-21164_unused_import.patch

> Upgrades (RU/EU) : "stack.upgrade.bypass.prechecks" config is not honored 
> while doing upgrades with bad entries in "execution_command" table.
> -
>
> Key: AMBARI-21164
> URL: https://issues.apache.org/jira/browse/AMBARI-21164
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Dmytro Grinenko
>Assignee: Dmytro Grinenko
>Priority: Critical
> Fix For: trunk, 2.5.2
>
> Attachments: AMBARI-21164-branch.25.patch, 
> AMBARI-21164-branch.trunk.patch, AMBARI-21164_unused_import.patch
>
>
> It can happen that the host_role_command table gets a null start_time value 
> (or -1, when the command was aborted). This results in a pre-check failure 
> with no possibility to continue the upgrade or bypass the check.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Reopened] (AMBARI-21164) Upgrades (RU/EU) : "stack.upgrade.bypass.prechecks" config is not honored while doing upgrades with bad entries in "execution_command" table.

2017-06-06 Thread Doroszlai, Attila (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila reopened AMBARI-21164:


{noformat}
[INFO] --- maven-checkstyle-plugin:2.17:check (checkstyle) @ ambari-server ---
[INFO] Starting audit...
[ERROR] 
ambari-server/src/test/java/org/apache/ambari/server/checks/ServiceCheckValidityCheckTest.java:27:8:
 Unused import - java.util.Arrays. [UnusedImports]
Audit done.
{noformat}

> Upgrades (RU/EU) : "stack.upgrade.bypass.prechecks" config is not honored 
> while doing upgrades with bad entries in "execution_command" table.
> -
>
> Key: AMBARI-21164
> URL: https://issues.apache.org/jira/browse/AMBARI-21164
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Dmytro Grinenko
>Assignee: Dmytro Grinenko
>Priority: Critical
> Fix For: trunk, 2.5.2
>
> Attachments: AMBARI-21164-branch.25.patch, 
> AMBARI-21164-branch.trunk.patch
>
>
> It can happen that the host_role_command table gets a null start_time value 
> (or -1, when the command was aborted). This results in a pre-check failure 
> with no possibility to continue the upgrade or bypass the check.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (AMBARI-21168) Deleting host from cluster leaves Ambari in inconsistent state (intermittently)

2017-06-06 Thread Sandor Magyari (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16038870#comment-16038870
 ] 

Sandor Magyari commented on AMBARI-21168:
-

Build failure not related to this patch.

> Deleting host from cluster leaves Ambari in inconsistent state 
> (intermittently)
> ---
>
> Key: AMBARI-21168
> URL: https://issues.apache.org/jira/browse/AMBARI-21168
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Sandor Magyari
>Assignee: Sandor Magyari
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21168.patch
>
>
> When deleting several components and hosts from Ambari, under some 
> circumstances it can happen that a serviceComponentHost is deleted from the 
> cache but is still present in the DB. Since there are no DB errors in the 
> logs, it probably gets reinserted by a concurrent merge. For example, the 
> HeartbeatProcessor may update the state while components are being deleted, 
> which could result in such a situation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-21133) Configure Ambari Identity fails with "Cannot run program: ambari-sudo.sh" on Ubuntu

2017-06-06 Thread Doroszlai, Attila (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated AMBARI-21133:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to 
[branch-2.5|http://git-wip-us.apache.org/repos/asf/ambari/commit/54ce6cca].

> Configure Ambari Identity fails with "Cannot run program: ambari-sudo.sh" on 
> Ubuntu
> ---
>
> Key: AMBARI-21133
> URL: https://issues.apache.org/jira/browse/AMBARI-21133
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0, 2.5.0
> Environment: Ubuntu 12, 14
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
> Fix For: 3.0.0, 2.5.2
>
> Attachments: AMBARI-21133.patch
>
>
> STR:
> # Configure Ambari server to be run as non-root user
> # Deploy cluster
> # Enable Kerberos
> Result: _Configure Ambari Identity_ step fails with the following error:
> {noformat}
> Cannot run program "ambari-sudo.sh": error=2, No such file or directory
> {noformat}
> This happens even on Ambari 2.5, although AMBARI-19083 attempted to fix it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (AMBARI-21133) Configure Ambari Identity fails with "Cannot run program: ambari-sudo.sh" on Ubuntu

2017-06-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16038821#comment-16038821
 ] 

Hudson commented on AMBARI-21133:
-

FAILURE: Integrated in Jenkins build Ambari-branch-2.5 #1566 (See 
[https://builds.apache.org/job/Ambari-branch-2.5/1566/])
AMBARI-21133. Configure Ambari Identity fails with "Cannot run program: 
(adoroszlai: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=54ce6cca4ce2f37b07c8574e9fc9d621d62ad704])
* (edit) ambari-server/src/main/python/ambari_server_main.py
* (edit) ambari-server/conf/unix/ambari-env.sh


> Configure Ambari Identity fails with "Cannot run program: ambari-sudo.sh" on 
> Ubuntu
> ---
>
> Key: AMBARI-21133
> URL: https://issues.apache.org/jira/browse/AMBARI-21133
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0, 2.5.0
> Environment: Ubuntu 12, 14
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
> Fix For: 3.0.0, 2.5.2
>
> Attachments: AMBARI-21133.patch
>
>
> STR:
> # Configure Ambari server to be run as non-root user
> # Deploy cluster
> # Enable Kerberos
> Result: _Configure Ambari Identity_ step fails with the following error:
> {noformat}
> Cannot run program "ambari-sudo.sh": error=2, No such file or directory
> {noformat}
> This happens even on Ambari 2.5, although AMBARI-19083 attempted to fix it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (AMBARI-20952) Collection added to itself

2017-06-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16038819#comment-16038819
 ] 

Hudson commented on AMBARI-20952:
-

FAILURE: Integrated in Jenkins build Ambari-branch-2.5 #1566 (See 
[https://builds.apache.org/job/Ambari-branch-2.5/1566/])
AMBARI-20952. Collection added to itself (adoroszlai: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=af0bbce27453a4a72a2f1931fad123195da041f8])
* (edit) 
ambari-server/src/test/java/org/apache/ambari/server/stack/QuickLinksConfigurationModuleTest.java
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/stack/ThemeModule.java
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/stack/QuickLinksConfigurationModule.java
* (edit) 
ambari-server/src/test/java/org/apache/ambari/server/stack/ThemeModuleTest.java


> Collection added to itself
> --
>
> Key: AMBARI-20952
> URL: https://issues.apache.org/jira/browse/AMBARI-20952
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
> Fix For: 3.0.0, 2.5.2
>
> Attachments: AMBARI-20952.patch
>
>
> Collection added to itself due to typo.
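The bug pattern behind the summary can be illustrated with a minimal, hypothetical sketch (not Ambari's actual code): a one-character typo makes a collection absorb itself instead of the intended source collection, silently dropping the data that should have been merged in.

```python
merged = ["a"]
extra = ["b", "c"]

# The typo: the collection is added to itself instead of to 'extra'.
merged.extend(merged)        # duplicates existing items; 'extra' is never merged
buggy = list(merged)

# The intended behavior.
merged = ["a"]
merged.extend(extra)
fixed = list(merged)

print(buggy, fixed)
```

In Java the equivalent is `list.addAll(list)` instead of `list.addAll(other)`; static analyzers flag this as "collection added to itself", which is presumably how it was caught here.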



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (AMBARI-20918) AmbariServer Metrics service cannot be disabled

2017-06-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-20918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16038820#comment-16038820
 ] 

Hudson commented on AMBARI-20918:
-

FAILURE: Integrated in Jenkins build Ambari-branch-2.5 #1566 (See 
[https://builds.apache.org/job/Ambari-branch-2.5/1566/])
AMBARI-20918. AmbariServer Metrics service cannot be disabled (adoroszlai: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=97a51aaf924d4d9e385f7e4751d50f4617c609c5])
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariServer.java
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/configuration/Configuration.java


> AmbariServer Metrics service cannot be disabled
> ---
>
> Key: AMBARI-20918
> URL: https://issues.apache.org/jira/browse/AMBARI-20918
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
> Fix For: 3.0.0, 2.5.2
>
> Attachments: AMBARI-20918.patch
>
>
> {code:title=Steps to reproduce}
> echo 'ambariserver.metrics.disable=true' >> 
> /etc/ambari-server/conf/ambari.properties
> ambari-server restart
> {code}
> {noformat:title=Expected in /var/log/ambari-server/ambari-server.log}
> ... INFO [main] ... AmbariServer Metrics disabled.
> {noformat}
> {noformat:title=Actual in /var/log/ambari-server/ambari-server.log}
> ... INFO [main] ... * Initializing AmbariServer Metrics Service 
> **
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-20952) Collection added to itself

2017-06-06 Thread Doroszlai, Attila (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated AMBARI-20952:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to 
[branch-2.5|http://git-wip-us.apache.org/repos/asf/ambari/commit/af0bbce2].

> Collection added to itself
> --
>
> Key: AMBARI-20952
> URL: https://issues.apache.org/jira/browse/AMBARI-20952
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
> Fix For: 3.0.0, 2.5.2
>
> Attachments: AMBARI-20952.patch
>
>
> Collection added to itself due to typo.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-20918) AmbariServer Metrics service cannot be disabled

2017-06-06 Thread Doroszlai, Attila (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-20918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated AMBARI-20918:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to 
[branch-2.5|http://git-wip-us.apache.org/repos/asf/ambari/commit/97a51aaf].

> AmbariServer Metrics service cannot be disabled
> ---
>
> Key: AMBARI-20918
> URL: https://issues.apache.org/jira/browse/AMBARI-20918
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
> Fix For: 3.0.0, 2.5.2
>
> Attachments: AMBARI-20918.patch
>
>
> {code:title=Steps to reproduce}
> echo 'ambariserver.metrics.disable=true' >> 
> /etc/ambari-server/conf/ambari.properties
> ambari-server restart
> {code}
> {noformat:title=Expected in /var/log/ambari-server/ambari-server.log}
> ... INFO [main] ... AmbariServer Metrics disabled.
> {noformat}
> {noformat:title=Actual in /var/log/ambari-server/ambari-server.log}
> ... INFO [main] ... * Initializing AmbariServer Metrics Service 
> **
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (AMBARI-21168) Deleting host from cluster leaves Ambari in inconsistent state (intermittently)

2017-06-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16038716#comment-16038716
 ] 

Hudson commented on AMBARI-21168:
-

FAILURE: Integrated in Jenkins build Ambari-trunk-Commit #7578 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/7578/])
AMBARI-21168. Deleting host from cluster leaves Ambari in inconsistent 
(smagyari: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=d8d586888d415d1c429ff6514e5b8435f6cb7e47])
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/state/svccomphost/ServiceComponentHostImpl.java


> Deleting host from cluster leaves Ambari in inconsistent state 
> (intermittently)
> ---
>
> Key: AMBARI-21168
> URL: https://issues.apache.org/jira/browse/AMBARI-21168
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Sandor Magyari
>Assignee: Sandor Magyari
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21168.patch
>
>
> When deleting several components and hosts from Ambari, under some 
> circumstances it can happen that a serviceComponentHost is deleted from the 
> cache but is still present in the DB. Since there are no DB errors in the 
> logs, it probably gets reinserted by a concurrent merge. For example, the 
> HeartbeatProcessor may update the state while components are being deleted, 
> which could result in such a situation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (AMBARI-21113) hdfs_user_nofile_limit is not picking as expected for datanode process in a secure cluster

2017-06-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16038712#comment-16038712
 ] 

Hudson commented on AMBARI-21113:
-

FAILURE: Integrated in Jenkins build Ambari-trunk-Commit #7578 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/7578/])
AMBARI-21113. hdfs_user_nofile_limit is not picking as expected for (aonishuk: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=4dba161a6fbeab2ab5507c9ff50f524242b7f450])
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/upgrade/UpgradeCatalog250.java


> hdfs_user_nofile_limit is not picking as expected for datanode process in a 
> secure cluster
> --
>
> Key: AMBARI-21113
> URL: https://issues.apache.org/jira/browse/AMBARI-21113
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Dmytro Grinenko
>Assignee: Dmytro Grinenko
>Priority: Critical
> Fix For: trunk, 2.5.2
>
> Attachments: AMBARI-21113-2.5.patch, AMBARI-21113-trunk.patch
>
>
> The following code snippet was not added to hadoop-env after an Ambari upgrade:
> {code}
> if [ "$command" == "datanode" ] && [ "$EUID" -eq 0 ] && [ -n 
> "$HADOOP_SECURE_DN_USER" ]; then
>   ulimit -n {{hdfs_user_nofile_limit}}
> fi
> {code}
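A minimal sketch, assuming Jinja-style substitution, of how a {{hdfs_user_nofile_limit}} placeholder like the one above gets rendered into hadoop-env; the render helper and template excerpt here are illustrative, not Ambari's actual templating code:

```python
import re

# Excerpt of the hadoop-env guard above, kept as a template with a
# {{...}} placeholder to be filled from cluster configuration.
TEMPLATE = """if [ "$command" == "datanode" ] && [ "$EUID" -eq 0 ] && [ -n "$HADOOP_SECURE_DN_USER" ]; then
  ulimit -n {{hdfs_user_nofile_limit}}
fi"""

def render(template, config):
    # Replace each {{name}} placeholder with its configured value.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(config[m.group(1)]), template)

print(render(TEMPLATE, {"hdfs_user_nofile_limit": 128000}))
```

If the upgrade catalog fails to apply this change, the rendered guard (and its ulimit) never reaches hadoop-env, which matches the reported symptom.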



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (AMBARI-21054) Add ppc as a new OS for User.

2017-06-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16038714#comment-16038714
 ] 

Hudson commented on AMBARI-21054:
-

FAILURE: Integrated in Jenkins build Ambari-trunk-Commit #7578 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/7578/])
AMBARI-21054. Add ppc as a new OS for User. (aonishuk) (aonishuk: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=bc90de2e9843f41229d86f4dad6accbb66163500])
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/state/stack/OsFamily.java
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
* (edit) ambari-common/src/main/python/ambari_commons/resources/os_family.json
* (edit) ambari-server/src/main/resources/stacks/HDP/2.6/repos/repoinfo.xml
* (edit) ambari-common/src/main/python/ambari_commons/os_check.py
* (edit) 
ambari-common/src/main/python/resource_management/libraries/providers/__init__.py
* (edit) 
ambari-common/src/main/python/resource_management/core/providers/__init__.py


> Add ppc as a new OS for User.
> -
>
> Key: AMBARI-21054
> URL: https://issues.apache.org/jira/browse/AMBARI-21054
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.5.2
>
> Attachments: AMBARI-21054.patch, AMBARI-21054-trunk.patch
>
>
> Add ppc as a new OS for User.
> As with centos6, there should be a centos6-ppc variant for ppc users.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (AMBARI-21070) Race condition: webhdfs call mkdir /tmp/druid-indexing before /tmp making tmp not writable.

2017-06-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16038715#comment-16038715
 ] 

Hudson commented on AMBARI-21070:
-

FAILURE: Integrated in Jenkins build Ambari-trunk-Commit #7578 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/7578/])
AMBARI-21070. Race condition: webhdfs call  mkdir /tmp/druid-indexing 
(aonishuk: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=0dd9fbf34764de407d9605f4472da03bf466cad6])
* (edit) ambari-server/src/test/python/stacks/2.6/DRUID/test_druid.py
* (edit) ambari-server/src/test/python/stacks/2.6/configs/default.json
* (edit) 
ambari-server/src/main/resources/common-services/DRUID/0.9.2/package/scripts/params.py
* (edit) 
ambari-server/src/main/resources/common-services/DRUID/0.9.2/package/scripts/druid.py


> Race condition: webhdfs call  mkdir /tmp/druid-indexing before  /tmp  making 
> tmp not writable.
> --
>
> Key: AMBARI-21070
> URL: https://issues.apache.org/jira/browse/AMBARI-21070
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.5.2
>
> Attachments: AMBARI-21070.patch
>
>
> Race condition: webhdfs call mkdir /tmp/druid-indexing before /tmp making tmp
> not writable.
> During an HDP install through Ambari, at the "start components on host" step, 
> several webhdfs operations run in the background to create the HDFS directory 
> structures required by specific components (/tmp, /tmp/hive, /user/druid, 
> /tmp/druid-indexing, ...).
> The expected order is getfileinfo /tmp --> mkdir /tmp --> setPermission of 
> /tmp to 777 (hdfs:hdfs), so that /tmp is accessible to all and the Hive 
> Metastore can create /tmp/hive (the Hive scratch directory).
> In the Druid install case, however, mkdir /tmp/druid-indexing is usually 
> called before /tmp itself is created, so /tmp is left with the default 
> directory permission (755).
> The subsequent getfileinfo /tmp call then reports that the directory already 
> exists, so it is neither re-created nor re-permissioned.
> As a result /tmp is not writable, and the HiveServer process shuts down 
> because it cannot create or access /tmp/hive.
> hdfs-audit log:
> 2017-05-12 06:39:51,067 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.26.3 cmd=getfileinfo src=/tmp/druid-indexing dst=null perm=null proto=webhdfs
> 2017-05-12 06:39:51,120 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.22.81 cmd=contentSummary src=/user/druid dst=null perm=null proto=webhdfs
> 2017-05-12 06:39:51,133 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.37.200 cmd=setPermission src=/ats/active dst=null perm=hdfs:hadoop:rwxr-xr-x proto=webhdfs
> 2017-05-12 06:39:51,155 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.26.3 cmd=mkdirs src=/tmp/druid-indexing dst=null perm=hdfs:hdfs:rwxr-xr-x proto=webhdfs
> 2017-05-12 06:39:51,206 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.22.81 cmd=listStatus src=/user/druid dst=null perm=null proto=webhdfs
> 2017-05-12 06:39:51,235 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.37.200 cmd=setPermission src=/ats/ dst=null perm=yarn:hadoop:rwxr-xr-x proto=webhdfs
> 2017-05-12 06:39:51,249 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.26.3 cmd=setPermission src=/tmp/druid-indexing dst=null perm=hdfs:hdfs:rwxr-xr-x proto=webhdfs
> 2017-05-12 06:39:51,290 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.22.81 cmd=listStatus src=/user/druid/data dst=null perm=null proto=webhdfs
> 2017-05-12 06:39:51,339 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.37.200 cmd=setPermission src=/ats/active/ dst=null perm=hdfs:hadoop:rwxr-xr-x proto=webhdfs
> 2017-05-12 06:39:51,341 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.26.3 cmd=setOwner src=/tmp/druid-indexing dst=null perm=druid:hdfs:rwxr-xr-x proto=webhdfs
> 2017-05-12 06:39:51,380 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.22.81 cmd=setOwner src=/user/druid/data dst=null perm=druid:hdfs:rwxr-xr-x proto=webhdfs
> 2017-05-12 06:39:51,431 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.37.200 cmd=setOwner
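The ordering fix implied by the report above can be sketched as: create parent directories with their intended permissions before any child directory. This is an illustrative helper under that assumption, not Ambari's actual HdfsResource API:

```python
import os

def ensure_dir(root, path, mode):
    # Create the directory (and any missing parents), then force the
    # desired mode regardless of the process umask.
    full = os.path.join(root, path.lstrip("/"))
    os.makedirs(full, exist_ok=True)
    os.chmod(full, mode)

def create_druid_dirs(root):
    # Parent first, with its intended world-writable mode...
    ensure_dir(root, "/tmp", 0o777)
    # ...then the child, so a pre-existing /tmp never keeps the default 755.
    ensure_dir(root, "/tmp/druid-indexing", 0o755)
```

Reversing the two calls reproduces the bug: the child's mkdirs implicitly creates /tmp with default permissions, and the later /tmp step sees it already exists.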

[jira] [Commented] (AMBARI-21182) Agent Host Disk Usage Alert Hardcodes the Stack Directory

2017-06-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16038713#comment-16038713
 ] 

Hudson commented on AMBARI-21182:
-

FAILURE: Integrated in Jenkins build Ambari-trunk-Commit #7578 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/7578/])
AMBARI-21182. Agent Host Disk Usage Alert Hardcodes the Stack Directory 
(aonishuk: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=119d2624f96d66c9a4d5d559ca436de73adae444])
* (edit) ambari-server/src/main/resources/host_scripts/alert_disk_space.py


> Agent Host Disk Usage Alert Hardcodes the Stack Directory
> -
>
> Key: AMBARI-21182
> URL: https://issues.apache.org/jira/browse/AMBARI-21182
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.5.2
>
> Attachments: AMBARI-21182.patch
>
>
> The Host Disk Usage alert currently hard codes the stack location directly
> into the script:
> 
> 
> 
> {code}
> # the location where HDP installs components when using HDP 2.2+
> STACK_HOME_DIR = "/usr/hdp"
> # the location where HDP installs components when using HDP 2.0 to 2.1
> STACK_HOME_LEGACY_DIR = "/usr/lib"
>
> # determine the location of HDP home
> stack_home = None
> if os.path.isdir(STACK_HOME_DIR):
>   stack_home = STACK_HOME_DIR
> elif os.path.isdir(STACK_HOME_LEGACY_DIR):
>   stack_home = STACK_HOME_LEGACY_DIR
> {code}
> 
> On clusters where a different stack is installed (such as `/usr/hdf`), the
> above logic incorrectly checks the `STACK_HOME_LEGACY_DIR`.
>   * The 2.0 and 2.1 code paths should be removed since they are not supported 
> anymore.
>   * We should parameterize STACK_HOME_DIR (or even better, use the stack 
> features JSON structure) to determine the home location to check.
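The parameterization suggested in the last bullet can be sketched as below; the "stack.home" parameter name and the helper are assumptions for illustration, not the alert framework's actual API:

```python
import os

DEFAULT_STACK_HOME = "/usr/hdp"

def get_stack_home(parameters):
    # Prefer an explicitly configured stack home over the hard-coded
    # default, and report None when the directory does not exist.
    stack_home = parameters.get("stack.home", DEFAULT_STACK_HOME)
    return stack_home if os.path.isdir(stack_home) else None
```

With this shape, an HDF cluster would simply pass its own stack home instead of falling through to the legacy /usr/lib check.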



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-20122) Stack advisor needs to recommend dependency for slaves and masters

2017-06-06 Thread Tim Thorpe (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-20122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Thorpe updated AMBARI-20122:

Description: 
After resolution of AMBARI-19685, stack advisor validates if stack defined 
dependency is not satisfied but recommendation API does not account for this.

Stack defined dependencies are service/component based and have a scope of 
CLUSTER|HOST.  

During recommendation the services to install have already been selected.  We 
can't really utilize the cluster scope because either the dependent service was 
selected or it was not.  If it was not selected it will be caught during 
validation.  We can only recommend based on HOST scope.

This JIRA is also limited to only handling those which don't have conditional 
dependencies.


  was:
After resolution of AMBARI-19685, stack advisor validates if stack defined 
dependency is not satisfied but recommendation API does not account for this.

stack defined dependencies are servicecomponent based and has a scope 
CLUSTER|HOST


> Stack advisor needs to recommend dependency for slaves and masters
> --
>
> Key: AMBARI-20122
> URL: https://issues.apache.org/jira/browse/AMBARI-20122
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Affects Versions: 3.0.0
>Reporter: Jaimin Jetly
>Assignee: Tim Thorpe
> Fix For: 3.0.0
>
> Attachments: AMBARI-20122.patch
>
>
> After resolution of AMBARI-19685, stack advisor validates if stack defined 
> dependency is not satisfied but recommendation API does not account for this.
> Stack defined dependencies are service/component based and have a scope of 
> CLUSTER|HOST.  
> During recommendation the services to install have already been selected.  We 
> can't really utilize the cluster scope because either the dependent service 
> was selected or it was not.  If it was not selected it will be caught during 
> validation.  We can only recommend based on HOST scope.
> This JIRA is also limited to only handling those which don't have conditional 
> dependencies.
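The HOST-scope recommendation described above can be sketched as: for every component already placed on a host, also place its HOST-scoped, unconditional dependencies on that host. This is a minimal illustration of the assumed logic, not the stack advisor's actual data model:

```python
def recommend_host_layout(layout, dependencies):
    # layout: host -> set of component names already placed on that host.
    # dependencies: component -> list of (dependent component, scope) pairs.
    for host, components in layout.items():
        for comp in list(components):
            for dep, scope in dependencies.get(comp, []):
                if scope == "HOST":
                    # HOST scope: the dependency must be co-located.
                    components.add(dep)
    return layout
```

CLUSTER-scoped dependencies are deliberately ignored here, matching the reasoning above that they are only caught at validation time.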



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Reopened] (AMBARI-21096) Provide additional logging for config audit log

2017-06-06 Thread Doroszlai, Attila (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila reopened AMBARI-21096:


[~afernandez], this commit introduced consistent unit test failures on 
[trunk|https://builds.apache.org/job/Ambari-trunk-Commit/7571/] and 
[branch-2.5|https://builds.apache.org/job/Ambari-branch-2.5/1555/], too.  Can 
you please check?

> Provide additional logging for config audit log 
> 
>
> Key: AMBARI-21096
> URL: https://issues.apache.org/jira/browse/AMBARI-21096
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
> Fix For: trunk, 2.5.2, 2.4.4
>
> Attachments: AMBARI-21096.branch-2.5.patch, AMBARI-21096.trunk.patch
>
>
> Improve logging of ambari-config-changes.log to include names, timestamps, 
> and versions.
> Current log is of the form,
> {noformat}
> 2017-05-12 22:14:45,541  INFO - Cluster 'c1' changed by: 'admin'; 
> service_name='ZOOKEEPER' config_group='Default' config_group_id='-1' 
> version='1'
> 2017-05-12 22:14:45,562  INFO - cluster 'c1' changed by: 'admin'; 
> type='zookeeper-log4j' tag='version1'
> 2017-05-12 22:14:45,562  INFO - cluster 'c1' changed by: 'admin'; 
> type='zookeeper-logsearch-conf' tag='version1'
> 2017-05-12 22:14:45,562  INFO - cluster 'c1' changed by: 'admin'; 
> type='zookeeper-env' tag='version1'
> 2017-05-12 22:14:45,562  INFO - cluster 'c1' changed by: 'admin'; 
> type='zoo.cfg' tag='version1'
> # Changed default config
> 2017-05-12 22:18:06,277  INFO - Cluster 'c1' changed by: 'admin'; 
> service_name='ZOOKEEPER' config_group='Default' config_group_id='-1' 
> version='2'
> 2017-05-12 22:18:06,278  INFO - cluster 'c1' changed by: 'admin'; 
> type='zoo.cfg' tag='version1494627510038'
> # Changed config in Custom_ZK_05
> 2017-05-12 22:22:48,957  INFO - User admin is creating new configuration 
> group Custom_ZK_05 for tag ZOOKEEPER in cluster c1
> 2017-05-12 22:23:25,050  INFO - Cluster 'c1' changed by: 'admin'; 
> service_name='ZOOKEEPER' config_group='Default' config_group_id='-1' 
> version='3'
> 2017-05-12 22:23:25,050  INFO - cluster 'c1' changed by: 'admin'; 
> type='zoo.cfg' tag='version1494627828482'
> {noformat}
> Will add the Note field, config group name (not just the id), config type, 
> and perhaps number of hosts affected.
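An enriched audit line of the kind proposed could look like the sketch below; the exact field names and their ordering are assumptions, not the committed format:

```python
def audit_line(cluster, user, config_type, tag, group_name, note):
    # Same "cluster changed by" message as today, extended with the
    # config group name (not just the id) and the Note field.
    return (
        f"cluster '{cluster}' changed by: '{user}'; type='{config_type}' "
        f"tag='{tag}' config_group='{group_name}' note='{note}'"
    )

print(audit_line("c1", "admin", "zoo.cfg", "version1494627510038",
                 "Default", "tuned tickTime"))
```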



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (AMBARI-21164) Upgrades (RU/EU) : "stack.upgrade.bypass.prechecks" config is not honored while doing upgrades with bad entries in "execution_command" table.

2017-06-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16038643#comment-16038643
 ] 

Hudson commented on AMBARI-21164:
-

FAILURE: Integrated in Jenkins build Ambari-trunk-Commit #7577 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/7577/])
AMBARI-21164. Upgrades (RU/EU) : "stack.upgrade.bypass.prechecks" config 
(aonishuk: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=b3425c9841b4153b1cf3b15dc6f55e67f1754f3b])
* (edit) 
ambari-server/src/test/java/org/apache/ambari/server/sample/checks/SampleServiceCheck.java
* (edit) 
ambari-server/src/test/java/org/apache/ambari/server/controller/internal/PreUpgradeCheckResourceProviderTest.java
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/checks/ServiceCheckValidityCheck.java
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/checks/AbstractCheckDescriptor.java
* (edit) 
ambari-server/src/test/java/org/apache/ambari/server/checks/ServiceCheckValidityCheckTest.java
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/controller/internal/PreUpgradeCheckResourceProvider.java
* (edit) 
ambari-server/src/test/java/org/apache/ambari/server/state/CheckHelperTest.java
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/state/CheckHelper.java


> Upgrades (RU/EU) : "stack.upgrade.bypass.prechecks" config is not honored 
> while doing upgrades with bad entries in "execution_command" table.
> -
>
> Key: AMBARI-21164
> URL: https://issues.apache.org/jira/browse/AMBARI-21164
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Dmytro Grinenko
>Assignee: Dmytro Grinenko
>Priority: Critical
> Fix For: trunk, 2.5.2
>
> Attachments: AMBARI-21164-branch.25.patch, 
> AMBARI-21164-branch.trunk.patch
>
>
> It can happen that the host_role_command table gets a null start_time value 
> (or -1, when the command was aborted). This makes the pre-checks fail, with no 
> possibility to continue the upgrade or bypass the checks.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-20884) Compilation error due to import from relocated package

2017-06-06 Thread Doroszlai, Attila (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated AMBARI-20884:
---
Status: Patch Available  (was: Reopened)

> Compilation error due to import from relocated package
> --
>
> Key: AMBARI-20884
> URL: https://issues.apache.org/jira/browse/AMBARI-20884
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 3.0.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
> Fix For: 3.0.0
>
> Attachments: AMBARI-20884.patch, AMBARI-20884.patch
>
>
> Hadoop QA fails to compile ambari-server trunk:
> {noformat:title=https://builds.apache.org/job/Ambari-trunk-test-patch/11521/artifact/patch-work/trunkJavacWarnings.txt}
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ClusterStackVersionResourceProvider.java:[90,71]
>  package org.apache.hadoop.metrics2.sink.relocated.google.common.collect does 
> not exist
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/serveraction/upgrades/AbstractUpgradeServerAction.java:[29,71]
>  package org.apache.hadoop.metrics2.sink.relocated.google.common.collect does 
> not exist
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ClusterStackVersionResourceProvider.java:[396,24]
>  cannot find symbol
>   symbol:   variable Lists
>   location: class 
> org.apache.ambari.server.controller.internal.ClusterStackVersionResourceProvider
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/serveraction/upgrades/AbstractUpgradeServerAction.java:[70,14]
>  cannot find symbol
>   symbol:   variable Sets
>   location: class 
> org.apache.ambari.server.serveraction.upgrades.AbstractUpgradeServerAction
> [INFO] 4 errors 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-20884) Compilation error due to import from relocated package

2017-06-06 Thread Doroszlai, Attila (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated AMBARI-20884:
---
Attachment: AMBARI-20884.patch

> Compilation error due to import from relocated package
> --
>
> Key: AMBARI-20884
> URL: https://issues.apache.org/jira/browse/AMBARI-20884
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 3.0.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
> Fix For: 3.0.0
>
> Attachments: AMBARI-20884.patch, AMBARI-20884.patch
>
>
> Hadoop QA fails to compile ambari-server trunk:
> {noformat:title=https://builds.apache.org/job/Ambari-trunk-test-patch/11521/artifact/patch-work/trunkJavacWarnings.txt}
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ClusterStackVersionResourceProvider.java:[90,71]
>  package org.apache.hadoop.metrics2.sink.relocated.google.common.collect does 
> not exist
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/serveraction/upgrades/AbstractUpgradeServerAction.java:[29,71]
>  package org.apache.hadoop.metrics2.sink.relocated.google.common.collect does 
> not exist
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ClusterStackVersionResourceProvider.java:[396,24]
>  cannot find symbol
>   symbol:   variable Lists
>   location: class 
> org.apache.ambari.server.controller.internal.ClusterStackVersionResourceProvider
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/serveraction/upgrades/AbstractUpgradeServerAction.java:[70,14]
>  cannot find symbol
>   symbol:   variable Sets
>   location: class 
> org.apache.ambari.server.serveraction.upgrades.AbstractUpgradeServerAction
> [INFO] 4 errors 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-21070) Race condition: webhdfs call mkdir /tmp/druid-indexing before /tmp making tmp not writable.

2017-06-06 Thread Andrew Onischuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Onischuk updated AMBARI-21070:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.5

> Race condition: webhdfs call  mkdir /tmp/druid-indexing before  /tmp  making 
> tmp not writable.
> --
>
> Key: AMBARI-21070
> URL: https://issues.apache.org/jira/browse/AMBARI-21070
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.5.2
>
> Attachments: AMBARI-21070.patch
>
>
> Race condition: webhdfs call mkdir /tmp/druid-indexing before /tmp making tmp
> not writable.
> During an HDP install through Ambari, at the "start components on host" step, 
> several webhdfs operations run in the background to create the HDFS directory 
> structures required by specific components (/tmp, /tmp/hive, /user/druid, 
> /tmp/druid-indexing, ...).
> The expected order is getfileinfo /tmp --> mkdir /tmp --> setPermission of 
> /tmp to 777 (hdfs:hdfs), so that /tmp is accessible to all and the Hive 
> Metastore can create /tmp/hive (the Hive scratch directory).
> In the Druid install case, however, mkdir /tmp/druid-indexing is usually 
> called before /tmp itself is created, so /tmp is left with the default 
> directory permission (755).
> The subsequent getfileinfo /tmp call then reports that the directory already 
> exists, so it is neither re-created nor re-permissioned.
> As a result /tmp is not writable, and the HiveServer process shuts down 
> because it cannot create or access /tmp/hive.
> hdfs-audit log:
> 2017-05-12 06:39:51,067 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.26.3 cmd=getfileinfo src=/tmp/druid-indexing dst=null perm=null proto=webhdfs
> 2017-05-12 06:39:51,120 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.22.81 cmd=contentSummary src=/user/druid dst=null perm=null proto=webhdfs
> 2017-05-12 06:39:51,133 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.37.200 cmd=setPermission src=/ats/active dst=null perm=hdfs:hadoop:rwxr-xr-x proto=webhdfs
> 2017-05-12 06:39:51,155 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.26.3 cmd=mkdirs src=/tmp/druid-indexing dst=null perm=hdfs:hdfs:rwxr-xr-x proto=webhdfs
> 2017-05-12 06:39:51,206 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.22.81 cmd=listStatus src=/user/druid dst=null perm=null proto=webhdfs
> 2017-05-12 06:39:51,235 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.37.200 cmd=setPermission src=/ats/ dst=null perm=yarn:hadoop:rwxr-xr-x proto=webhdfs
> 2017-05-12 06:39:51,249 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.26.3 cmd=setPermission src=/tmp/druid-indexing dst=null perm=hdfs:hdfs:rwxr-xr-x proto=webhdfs
> 2017-05-12 06:39:51,290 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.22.81 cmd=listStatus src=/user/druid/data dst=null perm=null proto=webhdfs
> 2017-05-12 06:39:51,339 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.37.200 cmd=setPermission src=/ats/active/ dst=null perm=hdfs:hadoop:rwxr-xr-x proto=webhdfs
> 2017-05-12 06:39:51,341 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.26.3 cmd=setOwner src=/tmp/druid-indexing dst=null perm=druid:hdfs:rwxr-xr-x proto=webhdfs
> 2017-05-12 06:39:51,380 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.22.81 cmd=setOwner src=/user/druid/data dst=null perm=druid:hdfs:rwxr-xr-x proto=webhdfs
> 2017-05-12 06:39:51,431 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.37.200 cmd=setOwner src=/ats/active dst=null perm=yarn:hadoop:rwxr-xr-x proto=webhdfs
> 2017-05-12 06:39:51,526 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.37.200 cmd=setOwner src=/ats/ dst=null perm=yarn:hadoop:rwxr-xr-x proto=webhdfs
> 2017-05-12 06:39:51,580 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.32.12 cmd=getfileinfo src=/apps/hbase/staging dst=null perm=null proto=webhdfs
> 2017-05-12 06:39:51,620 INFO FSNamesystem.audit: 

[jira] [Commented] (AMBARI-21070) Race condition: webhdfs call mkdir /tmp/druid-indexing before /tmp making tmp not writable.

2017-06-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16038614#comment-16038614
 ] 

Hudson commented on AMBARI-21070:
-

FAILURE: Integrated in Jenkins build Ambari-branch-2.5 #1565 (See 
[https://builds.apache.org/job/Ambari-branch-2.5/1565/])
AMBARI-21070. Race condition: webhdfs call  mkdir /tmp/druid-indexing 
(aonishuk: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=f585da99ecad8b4d6a15b19688226a18a6fbc79f])
* (edit) 
ambari-server/src/main/resources/common-services/DRUID/0.9.2/package/scripts/druid.py
* (edit) ambari-server/src/test/python/stacks/2.6/DRUID/test_druid.py
* (edit) ambari-server/src/test/python/stacks/2.6/configs/default.json
* (edit) 
ambari-server/src/main/resources/common-services/DRUID/0.9.2/package/scripts/params.py


> Race condition: webhdfs call  mkdir /tmp/druid-indexing before  /tmp  making 
> tmp not writable.
> --
>
> Key: AMBARI-21070
> URL: https://issues.apache.org/jira/browse/AMBARI-21070
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.5.2
>
> Attachments: AMBARI-21070.patch
>
>
> Race condition: webhdfs call mkdir /tmp/druid-indexing before /tmp making tmp
> not writable.
> During an HDP install through Ambari, at the "start components on host" step, 
> several webhdfs operations run in the background to create the HDFS directory 
> structures required by specific components (/tmp, /tmp/hive, /user/druid, 
> /tmp/druid-indexing, ...).
> The expected order is getfileinfo /tmp --> mkdir /tmp --> setPermission of 
> /tmp to 777 (hdfs:hdfs), so that /tmp is accessible to all and the Hive 
> Metastore can create /tmp/hive (the Hive scratch directory).
> In the Druid install case, however, mkdir /tmp/druid-indexing is usually 
> called before /tmp itself is created, so /tmp is left with the default 
> directory permission (755).
> The subsequent getfileinfo /tmp call then reports that the directory already 
> exists, so it is neither re-created nor re-permissioned.
> As a result /tmp is not writable, and the HiveServer process shuts down 
> because it cannot create or access /tmp/hive.
> hdfs-audit log:
> 2017-05-12 06:39:51,067 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.26.3 cmd=getfileinfo src=/tmp/druid-indexing dst=null perm=null proto=webhdfs
> 2017-05-12 06:39:51,120 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.22.81 cmd=contentSummary src=/user/druid dst=null perm=null proto=webhdfs
> 2017-05-12 06:39:51,133 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.37.200 cmd=setPermission src=/ats/active dst=null perm=hdfs:hadoop:rwxr-xr-x proto=webhdfs
> 2017-05-12 06:39:51,155 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.26.3 cmd=mkdirs src=/tmp/druid-indexing dst=null perm=hdfs:hdfs:rwxr-xr-x proto=webhdfs
> 2017-05-12 06:39:51,206 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.22.81 cmd=listStatus src=/user/druid dst=null perm=null proto=webhdfs
> 2017-05-12 06:39:51,235 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.37.200 cmd=setPermission src=/ats/ dst=null perm=yarn:hadoop:rwxr-xr-x proto=webhdfs
> 2017-05-12 06:39:51,249 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.26.3 cmd=setPermission src=/tmp/druid-indexing dst=null perm=hdfs:hdfs:rwxr-xr-x proto=webhdfs
> 2017-05-12 06:39:51,290 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.22.81 cmd=listStatus src=/user/druid/data dst=null perm=null proto=webhdfs
> 2017-05-12 06:39:51,339 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.37.200 cmd=setPermission src=/ats/active/ dst=null perm=hdfs:hadoop:rwxr-xr-x proto=webhdfs
> 2017-05-12 06:39:51,341 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.26.3 cmd=setOwner src=/tmp/druid-indexing dst=null perm=druid:hdfs:rwxr-xr-x proto=webhdfs
> 2017-05-12 06:39:51,380 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.22.81 cmd=setOwner src=/user/druid/data dst=null perm=druid:hdfs:rwxr-xr-x proto=webhdfs
> 2017-05-12 06:39:51,431 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/172.27.37.200 cmd=setOwner

[jira] [Updated] (AMBARI-21054) Add ppc as a new OS for User.

2017-06-06 Thread Andrew Onischuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Onischuk updated AMBARI-21054:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.5

> Add ppc as a new OS for User.
> -
>
> Key: AMBARI-21054
> URL: https://issues.apache.org/jira/browse/AMBARI-21054
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.5.2
>
> Attachments: AMBARI-21054.patch, AMBARI-21054-trunk.patch
>
>
> Add ppc as a new OS for User.
> As with centos6, there should be a centos6-ppc variant for ppc users.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (AMBARI-21054) Add ppc as a new OS for User.

2017-06-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16038611#comment-16038611
 ] 

Hudson commented on AMBARI-21054:
-

FAILURE: Integrated in Jenkins build Ambari-branch-2.5 #1564 (See 
[https://builds.apache.org/job/Ambari-branch-2.5/1564/])
AMBARI-21054. Add ppc as a new OS for User. (aonishuk) (aonishuk: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=be980bb6f2bc878842fbade25538fa203d024ded])
* (edit) ambari-common/src/main/python/ambari_commons/resources/os_family.json
* (edit) ambari-server/src/main/resources/stacks/HDP/2.6/repos/repoinfo.xml
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/state/stack/OsFamily.java
* (edit) 
ambari-common/src/main/python/resource_management/libraries/providers/__init__.py
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
* (edit) 
ambari-common/src/main/python/resource_management/core/providers/__init__.py
* (edit) ambari-common/src/main/python/ambari_commons/os_check.py


> Add ppc as a new OS for User.
> -
>
> Key: AMBARI-21054
> URL: https://issues.apache.org/jira/browse/AMBARI-21054
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.5.2
>
> Attachments: AMBARI-21054.patch, AMBARI-21054-trunk.patch
>
>
> Add ppc as a new OS for User.
> As centos 6 - there should be a centos6-ppc for ppc users.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (AMBARI-21113) hdfs_user_nofile_limit is not picking as expected for datanode process in a secure cluster

2017-06-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16038606#comment-16038606
 ] 

Hudson commented on AMBARI-21113:
-

FAILURE: Integrated in Jenkins build Ambari-branch-2.5 #1562 (See 
[https://builds.apache.org/job/Ambari-branch-2.5/1562/])
AMBARI-21113. hdfs_user_nofile_limit is not picking as expected for (aonishuk: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=9496c0170439633a114332d13e46c7d3a4f4d339])
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/upgrade/UpgradeCatalog250.java


> hdfs_user_nofile_limit is not picking as expected for datanode process in a 
> secure cluster
> --
>
> Key: AMBARI-21113
> URL: https://issues.apache.org/jira/browse/AMBARI-21113
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Dmytro Grinenko
>Assignee: Dmytro Grinenko
>Priority: Critical
> Fix For: trunk, 2.5.2
>
> Attachments: AMBARI-21113-2.5.patch, AMBARI-21113-trunk.patch
>
>
> The following code snippet was not added to hadoop-env after an Ambari upgrade:
> {code}
> if [ "$command" == "datanode" ] && [ "$EUID" -eq 0 ] && [ -n 
> "$HADOOP_SECURE_DN_USER" ]; then
>   ulimit -n {{hdfs_user_nofile_limit}}
> fi
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (AMBARI-21182) Agent Host Disk Usage Alert Hardcodes the Stack Directory

2017-06-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16038609#comment-16038609
 ] 

Hudson commented on AMBARI-21182:
-

FAILURE: Integrated in Jenkins build Ambari-branch-2.5 #1563 (See 
[https://builds.apache.org/job/Ambari-branch-2.5/1563/])
AMBARI-21182. Agent Host Disk Usage Alert Hardcodes the Stack Directory 
(aonishuk: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=533ec274c271626c4d38e3b9e236deaa6cbc17a9])
* (edit) ambari-server/src/main/resources/host_scripts/alert_disk_space.py


> Agent Host Disk Usage Alert Hardcodes the Stack Directory
> -
>
> Key: AMBARI-21182
> URL: https://issues.apache.org/jira/browse/AMBARI-21182
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.5.2
>
> Attachments: AMBARI-21182.patch
>
>
> The Host Disk Usage alert currently hard codes the stack location directly
> into the script:
> 
> 
> 
> # the location where HDP installs components when using HDP 2.2+
> STACK_HOME_DIR = "/usr/hdp"
> # the location where HDP installs components when using HDP 2.0 to 2.1
> STACK_HOME_LEGACY_DIR = "/usr/lib"
> # determine the location of HDP home
> stack_home = None
> if os.path.isdir(STACK_HOME_DIR):
>   stack_home = STACK_HOME_DIR
> elif os.path.isdir(STACK_HOME_LEGACY_DIR):
>   stack_home = STACK_HOME_LEGACY_DIR
> 
> On clusters where a different stack is installed (such as `/usr/hdf`), the
> above logic incorrectly checks the `STACK_HOME_LEGACY_DIR`.
>   * The 2.0 and 2.1 code paths should be removed since they are not supported 
> anymore.
>   * We should parameterize STACK_HOME_DIR (or even better, use the stack 
> features JSON structure) to determine the home location to check.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-21182) Agent Host Disk Usage Alert Hardcodes the Stack Directory

2017-06-06 Thread Andrew Onischuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Onischuk updated AMBARI-21182:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.5

> Agent Host Disk Usage Alert Hardcodes the Stack Directory
> -
>
> Key: AMBARI-21182
> URL: https://issues.apache.org/jira/browse/AMBARI-21182
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.5.2
>
> Attachments: AMBARI-21182.patch
>
>
> The Host Disk Usage alert currently hard codes the stack location directly
> into the script:
> 
> 
> 
> # the location where HDP installs components when using HDP 2.2+
> STACK_HOME_DIR = "/usr/hdp"
> # the location where HDP installs components when using HDP 2.0 to 2.1
> STACK_HOME_LEGACY_DIR = "/usr/lib"
> # determine the location of HDP home
> stack_home = None
> if os.path.isdir(STACK_HOME_DIR):
>   stack_home = STACK_HOME_DIR
> elif os.path.isdir(STACK_HOME_LEGACY_DIR):
>   stack_home = STACK_HOME_LEGACY_DIR
> 
> On clusters where a different stack is installed (such as `/usr/hdf`), the
> above logic incorrectly checks the `STACK_HOME_LEGACY_DIR`.
>   * The 2.0 and 2.1 code paths should be removed since they are not supported 
> anymore.
>   * We should parameterize STACK_HOME_DIR (or even better, use the stack 
> features JSON structure) to determine the home location to check.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-21113) hdfs_user_nofile_limit is not picking as expected for datanode process in a secure cluster

2017-06-06 Thread Andrew Onischuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Onischuk updated AMBARI-21113:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.5

> hdfs_user_nofile_limit is not picking as expected for datanode process in a 
> secure cluster
> --
>
> Key: AMBARI-21113
> URL: https://issues.apache.org/jira/browse/AMBARI-21113
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Dmytro Grinenko
>Assignee: Dmytro Grinenko
>Priority: Critical
> Fix For: trunk, 2.5.2
>
> Attachments: AMBARI-21113-2.5.patch, AMBARI-21113-trunk.patch
>
>
> The following code snippet was not added to hadoop-env after an Ambari upgrade:
> {code}
> if [ "$command" == "datanode" ] && [ "$EUID" -eq 0 ] && [ -n 
> "$HADOOP_SECURE_DN_USER" ]; then
>   ulimit -n {{hdfs_user_nofile_limit}}
> fi
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-21113) hdfs_user_nofile_limit is not picking as expected for datanode process in a secure cluster

2017-06-06 Thread Dmytro Grinenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmytro Grinenko updated AMBARI-21113:
-
Attachment: AMBARI-21113-trunk.patch

> hdfs_user_nofile_limit is not picking as expected for datanode process in a 
> secure cluster
> --
>
> Key: AMBARI-21113
> URL: https://issues.apache.org/jira/browse/AMBARI-21113
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Dmytro Grinenko
>Assignee: Dmytro Grinenko
>Priority: Critical
> Fix For: trunk, 2.5.2
>
> Attachments: AMBARI-21113-2.5.patch, AMBARI-21113-trunk.patch
>
>
> such code snipped were not added to the hadoop-env after Ambari upgrade
> {code}
> if [ "$command" == "datanode" ] && [ "$EUID" -eq 0 ] && [ -n 
> "$HADOOP_SECURE_DN_USER" ]; then
>   ulimit -n {{hdfs_user_nofile_limit}}
> fi
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-21113) hdfs_user_nofile_limit is not picking as expected for datanode process in a secure cluster

2017-06-06 Thread Dmytro Grinenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmytro Grinenko updated AMBARI-21113:
-
Attachment: (was: AMBARI-21113-trunk.patch)

> hdfs_user_nofile_limit is not picking as expected for datanode process in a 
> secure cluster
> --
>
> Key: AMBARI-21113
> URL: https://issues.apache.org/jira/browse/AMBARI-21113
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Dmytro Grinenko
>Assignee: Dmytro Grinenko
>Priority: Critical
> Fix For: trunk, 2.5.2
>
> Attachments: AMBARI-21113-2.5.patch
>
>
> The following code snippet was not added to hadoop-env after an Ambari upgrade:
> {code}
> if [ "$command" == "datanode" ] && [ "$EUID" -eq 0 ] && [ -n 
> "$HADOOP_SECURE_DN_USER" ]; then
>   ulimit -n {{hdfs_user_nofile_limit}}
> fi
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (AMBARI-21182) Agent Host Disk Usage Alert Hardcodes the Stack Directory

2017-06-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16038598#comment-16038598
 ] 

Hadoop QA commented on AMBARI-21182:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12871562/AMBARI-21182.patch
  against trunk revision .

{color:red}-1 patch{color}.  Top-level [trunk 
compilation|https://builds.apache.org/job/Ambari-trunk-test-patch/11631//artifact/patch-work/trunkJavacWarnings.txt]
 may be broken.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/11631//console

This message is automatically generated.

> Agent Host Disk Usage Alert Hardcodes the Stack Directory
> -
>
> Key: AMBARI-21182
> URL: https://issues.apache.org/jira/browse/AMBARI-21182
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.5.2
>
> Attachments: AMBARI-21182.patch
>
>
> The Host Disk Usage alert currently hard codes the stack location directly
> into the script:
> 
> 
> 
> # the location where HDP installs components when using HDP 2.2+
> STACK_HOME_DIR = "/usr/hdp"
> # the location where HDP installs components when using HDP 2.0 to 2.1
> STACK_HOME_LEGACY_DIR = "/usr/lib"
> # determine the location of HDP home
> stack_home = None
> if os.path.isdir(STACK_HOME_DIR):
>   stack_home = STACK_HOME_DIR
> elif os.path.isdir(STACK_HOME_LEGACY_DIR):
>   stack_home = STACK_HOME_LEGACY_DIR
> 
> On clusters where a different stack is installed (such as `/usr/hdf`), the
> above logic incorrectly checks the `STACK_HOME_LEGACY_DIR`.
>   * The 2.0 and 2.1 code paths should be removed since they are not supported 
> anymore.
>   * We should parameterize STACK_HOME_DIR (or even better, use the stack 
> features JSON structure) to determine the home location to check.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-21113) hdfs_user_nofile_limit is not picking as expected for datanode process in a secure cluster

2017-06-06 Thread Dmytro Grinenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmytro Grinenko updated AMBARI-21113:
-
Attachment: AMBARI-21113-trunk.patch

> hdfs_user_nofile_limit is not picking as expected for datanode process in a 
> secure cluster
> --
>
> Key: AMBARI-21113
> URL: https://issues.apache.org/jira/browse/AMBARI-21113
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Dmytro Grinenko
>Assignee: Dmytro Grinenko
>Priority: Critical
> Fix For: trunk, 2.5.2
>
> Attachments: AMBARI-21113-2.5.patch, AMBARI-21113-trunk.patch
>
>
> The following code snippet was not added to hadoop-env after an Ambari upgrade:
> {code}
> if [ "$command" == "datanode" ] && [ "$EUID" -eq 0 ] && [ -n 
> "$HADOOP_SECURE_DN_USER" ]; then
>   ulimit -n {{hdfs_user_nofile_limit}}
> fi
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-21182) Agent Host Disk Usage Alert Hardcodes the Stack Directory

2017-06-06 Thread Andrew Onischuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Onischuk updated AMBARI-21182:
-
Status: Patch Available  (was: Open)

> Agent Host Disk Usage Alert Hardcodes the Stack Directory
> -
>
> Key: AMBARI-21182
> URL: https://issues.apache.org/jira/browse/AMBARI-21182
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.5.2
>
> Attachments: AMBARI-21182.patch
>
>
> The Host Disk Usage alert currently hard codes the stack location directly
> into the script:
> 
> 
> 
> # the location where HDP installs components when using HDP 2.2+
> STACK_HOME_DIR = "/usr/hdp"
> # the location where HDP installs components when using HDP 2.0 to 2.1
> STACK_HOME_LEGACY_DIR = "/usr/lib"
> # determine the location of HDP home
> stack_home = None
> if os.path.isdir(STACK_HOME_DIR):
>   stack_home = STACK_HOME_DIR
> elif os.path.isdir(STACK_HOME_LEGACY_DIR):
>   stack_home = STACK_HOME_LEGACY_DIR
> 
> On clusters where a different stack is installed (such as `/usr/hdf`), the
> above logic incorrectly checks the `STACK_HOME_LEGACY_DIR`.
>   * The 2.0 and 2.1 code paths should be removed since they are not supported 
> anymore.
>   * We should parameterize STACK_HOME_DIR (or even better, use the stack 
> features JSON structure) to determine the home location to check.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-21113) hdfs_user_nofile_limit is not picking as expected for datanode process in a secure cluster

2017-06-06 Thread Dmytro Grinenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmytro Grinenko updated AMBARI-21113:
-
Attachment: (was: AMBARI-21113-trunk.patch)

> hdfs_user_nofile_limit is not picking as expected for datanode process in a 
> secure cluster
> --
>
> Key: AMBARI-21113
> URL: https://issues.apache.org/jira/browse/AMBARI-21113
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Dmytro Grinenko
>Assignee: Dmytro Grinenko
>Priority: Critical
> Fix For: trunk, 2.5.2
>
> Attachments: AMBARI-21113-2.5.patch, AMBARI-21113-trunk.patch
>
>
> The following code snippet was not added to hadoop-env after an Ambari upgrade:
> {code}
> if [ "$command" == "datanode" ] && [ "$EUID" -eq 0 ] && [ -n 
> "$HADOOP_SECURE_DN_USER" ]; then
>   ulimit -n {{hdfs_user_nofile_limit}}
> fi
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-21182) Agent Host Disk Usage Alert Hardcodes the Stack Directory

2017-06-06 Thread Andrew Onischuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Onischuk updated AMBARI-21182:
-
Attachment: AMBARI-21182.patch

> Agent Host Disk Usage Alert Hardcodes the Stack Directory
> -
>
> Key: AMBARI-21182
> URL: https://issues.apache.org/jira/browse/AMBARI-21182
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.5.2
>
> Attachments: AMBARI-21182.patch
>
>
> The Host Disk Usage alert currently hard codes the stack location directly
> into the script:
> 
> 
> 
> # the location where HDP installs components when using HDP 2.2+
> STACK_HOME_DIR = "/usr/hdp"
> # the location where HDP installs components when using HDP 2.0 to 2.1
> STACK_HOME_LEGACY_DIR = "/usr/lib"
> # determine the location of HDP home
> stack_home = None
> if os.path.isdir(STACK_HOME_DIR):
>   stack_home = STACK_HOME_DIR
> elif os.path.isdir(STACK_HOME_LEGACY_DIR):
>   stack_home = STACK_HOME_LEGACY_DIR
> 
> On clusters where a different stack is installed (such as `/usr/hdf`), the
> above logic incorrectly checks the `STACK_HOME_LEGACY_DIR`.
>   * The 2.0 and 2.1 code paths should be removed since they are not supported 
> anymore.
>   * We should parameterize STACK_HOME_DIR (or even better, use the stack 
> features JSON structure) to determine the home location to check.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (AMBARI-21182) Agent Host Disk Usage Alert Hardcodes the Stack Directory

2017-06-06 Thread Andrew Onischuk (JIRA)
Andrew Onischuk created AMBARI-21182:


 Summary: Agent Host Disk Usage Alert Hardcodes the Stack Directory
 Key: AMBARI-21182
 URL: https://issues.apache.org/jira/browse/AMBARI-21182
 Project: Ambari
  Issue Type: Bug
Reporter: Andrew Onischuk
Assignee: Andrew Onischuk
 Fix For: 2.5.2
 Attachments: AMBARI-21182.patch

The Host Disk Usage alert currently hard codes the stack location directly
into the script:




# the location where HDP installs components when using HDP 2.2+
STACK_HOME_DIR = "/usr/hdp"
# the location where HDP installs components when using HDP 2.0 to 2.1
STACK_HOME_LEGACY_DIR = "/usr/lib"
# determine the location of HDP home
stack_home = None
if os.path.isdir(STACK_HOME_DIR):
  stack_home = STACK_HOME_DIR
elif os.path.isdir(STACK_HOME_LEGACY_DIR):
  stack_home = STACK_HOME_LEGACY_DIR


On clusters where a different stack is installed (such as `/usr/hdf`), the
above logic incorrectly checks the `STACK_HOME_LEGACY_DIR`.

  * The 2.0 and 2.1 code paths should be removed since they are not supported 
anymore.
  * We should parameterize STACK_HOME_DIR (or even better, use the stack 
features JSON structure) to determine the home location to check.
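The parameterized check suggested above could look roughly like this (a minimal sketch under stated assumptions: the candidate list and the helper name are illustrative, not the actual Ambari alert parameters):

```python
import os

# Candidate stack roots; in the real alert these would come from a script
# parameter or the stack features JSON rather than being hardcoded here.
DEFAULT_STACK_HOME_CANDIDATES = ["/usr/hdp", "/usr/hdf"]

def find_stack_home(candidates=DEFAULT_STACK_HOME_CANDIDATES):
    """Return the first candidate directory that exists, or None."""
    for path in candidates:
        if os.path.isdir(path):
            return path
    return None
```

With a helper like this, supporting a new stack root becomes a configuration change instead of an edit to the alert script.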





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-21164) Upgrades (RU/EU) : "stack.upgrade.bypass.prechecks" config is not honored while doing upgrades with bad entries in "execution_command" table.

2017-06-06 Thread Andrew Onischuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Onischuk updated AMBARI-21164:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.5

> Upgrades (RU/EU) : "stack.upgrade.bypass.prechecks" config is not honored 
> while doing upgrades with bad entries in "execution_command" table.
> -
>
> Key: AMBARI-21164
> URL: https://issues.apache.org/jira/browse/AMBARI-21164
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Dmytro Grinenko
>Assignee: Dmytro Grinenko
>Priority: Critical
> Fix For: trunk, 2.5.2
>
> Attachments: AMBARI-21164-branch.25.patch, 
> AMBARI-21164-branch.trunk.patch
>
>
> It happens that the host_role_command table contains a null start_time value 
> (or -1 when the command was aborted). This causes the pre-checks to fail with 
> no way to continue or bypass the upgrade.
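The defensive handling implied by the description can be sketched as follows (a hypothetical helper for illustration, not the actual Java fix committed to CheckHelper and ServiceCheckValidityCheck):

```python
def latest_valid_start_time(start_times):
    """Return the newest positive start_time from host_role_command rows,
    ignoring the None and -1 values left behind by aborted commands."""
    valid = [t for t in start_times if t is not None and t > 0]
    return max(valid) if valid else None
```

A pre-check built on such filtering would skip the bad rows instead of failing outright when it encounters them.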



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (AMBARI-21164) Upgrades (RU/EU) : "stack.upgrade.bypass.prechecks" config is not honored while doing upgrades with bad entries in "execution_command" table.

2017-06-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16038589#comment-16038589
 ] 

Hudson commented on AMBARI-21164:
-

FAILURE: Integrated in Jenkins build Ambari-branch-2.5 #1561 (See 
[https://builds.apache.org/job/Ambari-branch-2.5/1561/])
AMBARI-21164. Upgrades (RU/EU) : "stack.upgrade.bypass.prechecks" config 
(aonishuk: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=c51e0b865b54b67b04b848ab371510248c903c1e])
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/checks/AbstractCheckDescriptor.java
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/checks/ServiceCheckValidityCheck.java
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/state/CheckHelper.java
* (edit) 
ambari-server/src/test/java/org/apache/ambari/server/checks/ServiceCheckValidityCheckTest.java
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/controller/internal/PreUpgradeCheckResourceProvider.java
* (edit) 
ambari-server/src/test/java/org/apache/ambari/server/state/CheckHelperTest.java
* (edit) 
ambari-server/src/test/java/org/apache/ambari/server/sample/checks/SampleServiceCheck.java
* (edit) 
ambari-server/src/test/java/org/apache/ambari/server/controller/internal/PreUpgradeCheckResourceProviderTest.java


> Upgrades (RU/EU) : "stack.upgrade.bypass.prechecks" config is not honored 
> while doing upgrades with bad entries in "execution_command" table.
> -
>
> Key: AMBARI-21164
> URL: https://issues.apache.org/jira/browse/AMBARI-21164
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Dmytro Grinenko
>Assignee: Dmytro Grinenko
>Priority: Critical
> Fix For: trunk, 2.5.2
>
> Attachments: AMBARI-21164-branch.25.patch, 
> AMBARI-21164-branch.trunk.patch
>
>
> It happens that the host_role_command table contains a null start_time value 
> (or -1 when the command was aborted). This causes the pre-checks to fail with 
> no way to continue or bypass the upgrade.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (AMBARI-20884) Compilation error due to import from relocated package

2017-06-06 Thread Doroszlai, Attila (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16038468#comment-16038468
 ] 

Doroszlai, Attila commented on AMBARI-20884:


Similar issue appeared again:

{noformat:title=https://builds.apache.org/job/Ambari-trunk-test-patch/11630/artifact/patch-work/trunkJavacWarnings.txt}
[ERROR] COMPILATION ERROR : 
[INFO] -
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/orm/entities/UpgradeEntity.java:[44,68]
 package org.apache.hadoop.metrics2.sink.relocated.google.common.base does not 
exist
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/orm/entities/UpgradeHistoryEntity.java:[34,68]
 package org.apache.hadoop.metrics2.sink.relocated.google.common.base does not 
exist
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/orm/entities/UpgradeEntity.java:[403,12]
 cannot find symbol
  symbol:   variable Objects
  location: class org.apache.ambari.server.orm.entities.UpgradeEntity
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/orm/entities/UpgradeHistoryEntity.java:[216,12]
 cannot find symbol
  symbol:   variable Objects
  location: class org.apache.ambari.server.orm.entities.UpgradeHistoryEntity
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/orm/entities/UpgradeHistoryEntity.java:[224,12]
 cannot find symbol
  symbol:   variable Objects
  location: class org.apache.ambari.server.orm.entities.UpgradeHistoryEntity
[INFO] 5 errors 
{noformat}

> Compilation error due to import from relocated package
> --
>
> Key: AMBARI-20884
> URL: https://issues.apache.org/jira/browse/AMBARI-20884
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 3.0.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
> Fix For: 3.0.0
>
> Attachments: AMBARI-20884.patch
>
>
> Hadoop QA fails to compile ambari-server trunk:
> {noformat:title=https://builds.apache.org/job/Ambari-trunk-test-patch/11521/artifact/patch-work/trunkJavacWarnings.txt}
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ClusterStackVersionResourceProvider.java:[90,71]
>  package org.apache.hadoop.metrics2.sink.relocated.google.common.collect does 
> not exist
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/serveraction/upgrades/AbstractUpgradeServerAction.java:[29,71]
>  package org.apache.hadoop.metrics2.sink.relocated.google.common.collect does 
> not exist
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ClusterStackVersionResourceProvider.java:[396,24]
>  cannot find symbol
>   symbol:   variable Lists
>   location: class 
> org.apache.ambari.server.controller.internal.ClusterStackVersionResourceProvider
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/serveraction/upgrades/AbstractUpgradeServerAction.java:[70,14]
>  cannot find symbol
>   symbol:   variable Sets
>   location: class 
> org.apache.ambari.server.serveraction.upgrades.AbstractUpgradeServerAction
> [INFO] 4 errors 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Reopened] (AMBARI-20884) Compilation error due to import from relocated package

2017-06-06 Thread Doroszlai, Attila (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila reopened AMBARI-20884:


> Compilation error due to import from relocated package
> --
>
> Key: AMBARI-20884
> URL: https://issues.apache.org/jira/browse/AMBARI-20884
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 3.0.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
> Fix For: 3.0.0
>
> Attachments: AMBARI-20884.patch
>
>
> Hadoop QA fails to compile ambari-server trunk:
> {noformat:title=https://builds.apache.org/job/Ambari-trunk-test-patch/11521/artifact/patch-work/trunkJavacWarnings.txt}
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ClusterStackVersionResourceProvider.java:[90,71]
>  package org.apache.hadoop.metrics2.sink.relocated.google.common.collect does 
> not exist
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/serveraction/upgrades/AbstractUpgradeServerAction.java:[29,71]
>  package org.apache.hadoop.metrics2.sink.relocated.google.common.collect does 
> not exist
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ClusterStackVersionResourceProvider.java:[396,24]
>  cannot find symbol
>   symbol:   variable Lists
>   location: class 
> org.apache.ambari.server.controller.internal.ClusterStackVersionResourceProvider
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/main/java/org/apache/ambari/server/serveraction/upgrades/AbstractUpgradeServerAction.java:[70,14]
>  cannot find symbol
>   symbol:   variable Sets
>   location: class 
> org.apache.ambari.server.serveraction.upgrades.AbstractUpgradeServerAction
> [INFO] 4 errors 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-21181) Ability to anonymize data during log processing

2017-06-06 Thread Miklos Gergely (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated AMBARI-21181:

Summary: Ability to anonymize data during log processing  (was:  
BUG-81251-aninymiz)

> Ability to anonymize data during log processing
> ---
>
> Key: AMBARI-21181
> URL: https://issues.apache.org/jira/browse/AMBARI-21181
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-logsearch
>Affects Versions: 3.0.0
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
> Fix For: 3.0.0
>
> Attachments: AMBARI-21181.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-21181) Ability to anonymize data during log processing

2017-06-06 Thread Miklos Gergely (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated AMBARI-21181:

Status: Patch Available  (was: In Progress)

> Ability to anonymize data during log processing
> ---
>
> Key: AMBARI-21181
> URL: https://issues.apache.org/jira/browse/AMBARI-21181
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-logsearch
>Affects Versions: 3.0.0
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
> Fix For: 3.0.0
>
> Attachments: AMBARI-21181.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-21181) BUG-81251-aninymiz

2017-06-06 Thread Miklos Gergely (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated AMBARI-21181:

Attachment: AMBARI-21181.patch

>  BUG-81251-aninymiz
> ---
>
> Key: AMBARI-21181
> URL: https://issues.apache.org/jira/browse/AMBARI-21181
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-logsearch
>Affects Versions: 3.0.0
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
> Fix For: 3.0.0
>
> Attachments: AMBARI-21181.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-21165) Register with server and changes to events format and handle graceful stop or threads

2017-06-06 Thread Andrew Onischuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Onischuk updated AMBARI-21165:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to branch-3.0-perf

> Register with server and changes to events format and handle graceful stop or 
> threads
> -
>
> Key: AMBARI-21165
> URL: https://issues.apache.org/jira/browse/AMBARI-21165
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 3.0.0
>
> Attachments: AMBARI-21165.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (AMBARI-21181) BUG-81251-aninymiz

2017-06-06 Thread Miklos Gergely (JIRA)
Miklos Gergely created AMBARI-21181:
---

 Summary:  BUG-81251-aninymiz
 Key: AMBARI-21181
 URL: https://issues.apache.org/jira/browse/AMBARI-21181
 Project: Ambari
  Issue Type: Bug
  Components: ambari-logsearch
Affects Versions: 3.0.0
Reporter: Miklos Gergely
Assignee: Miklos Gergely
 Fix For: 3.0.0






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)