[jira] [Commented] (AMBARI-21418) Ambari rebuilds custom auth_to_local rules changing their case sensitivity option (/L) depending on the case_insensitive_username_rules.

2017-08-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16123576#comment-16123576
 ] 

Hudson commented on AMBARI-21418:
-

FAILURE: Integrated in Jenkins build Ambari-branch-2.6 #3 (See 
[https://builds.apache.org/job/Ambari-branch-2.6/3/])
AMBARI-21418. Ambari rebuilds custom auth_to_local rules changing its (amagyar: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=cfa299883cdf3f5ada93dfd72138b3b407f9bec5])
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/controller/AuthToLocalBuilder.java
* (edit) 
ambari-server/src/test/java/org/apache/ambari/server/controller/AuthToLocalBuilderTest.java


> Ambari rebuilds custom auth_to_local rules changing their case sensitivity 
> option (/L) depending on the case_insensitive_username_rules.
> 
>
> Key: AMBARI-21418
> URL: https://issues.apache.org/jira/browse/AMBARI-21418
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.1.0
>Reporter: Tomas Sokorai
>Assignee: Attila Magyar
> Fix For: 2.5.3
>
> Attachments: AMBARI-21418.patch
>
>
> Ambari changes the /L state of custom auth_to_local rules on rebuild, 
> depending on case_insensitive_username_rules.
> How to reproduce:
> 1) Kerberize Ambari.
> 2) Make sure these kerberos settings are set as follows:
> case_insensitive_username_rules = false
> manage_auth_to_local = true
> 3) Add a custom auth_to_local rule:
> {code:java}
> RULE:[1:$1@$0](.*@HDP01.LOCAL)s/.*/ambari-qa//L
> {code}
> (NB: HDP01.LOCAL realm was chosen to avoid matching the default kerberos 
> realm, EXAMPLE.COM in my tests)
> 4) Add a new service with Kerberos configuration to the cluster; in my case I 
> tested with adding Spark2.
> 5) After the service is added successfully, check the auth_to_local mappings 
> again; the mapping added in step 3 will now be missing the /L:
> {code:java}
> RULE:[1:$1@$0](.*@HDP01.LOCAL)s/.*/ambari-qa/
> {code}
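
For illustration, a minimal Python sketch of the intended behaviour (this is not the AuthToLocalBuilder code, and the rule grammar it parses is simplified): a rule's own /L suffix is kept on rebuild, and the cluster-wide case_insensitive_username_rules default only applies to rules that did not specify a flag of their own.

{code:python}
import re

# Simplified rule grammar: RULE:[n:string](regexp)s/pattern/replacement/[g][/L]
RULE_RE = re.compile(r'^RULE:\[(?P<components>[^\]]+)\]'
                     r'\((?P<match>[^)]+)\)'
                     r'(?P<substitution>s/[^/]*/[^/]*/g?)'
                     r'(?P<case_flag>/L)?$')

def rebuild_rule(rule, case_insensitive_default):
    m = RULE_RE.match(rule.strip())
    if not m:
        return rule  # leave unparseable rules untouched
    # Keep the flag the user wrote; fall back to the global default only for
    # rules that did not specify one.
    flag = m.group('case_flag') or ('/L' if case_insensitive_default else '')
    return 'RULE:[{0}]({1}){2}{3}'.format(
        m.group('components'), m.group('match'), m.group('substitution'), flag)

# With case_insensitive_username_rules = false, the custom /L must survive:
print(rebuild_rule('RULE:[1:$1@$0](.*@HDP01.LOCAL)s/.*/ambari-qa//L', False))
# RULE:[1:$1@$0](.*@HDP01.LOCAL)s/.*/ambari-qa//L
{code}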



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (AMBARI-21708) History Server cannot be started due to wrong permissions of /mr-history

2017-08-11 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created AMBARI-21708:
--

 Summary: History Server cannot be started due to wrong permissions 
of /mr-history
 Key: AMBARI-21708
 URL: https://issues.apache.org/jira/browse/AMBARI-21708
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Affects Versions: 3.0.0
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila
Priority: Critical
 Fix For: 3.0.0


Steps to reproduce:

# Install Ambari from trunk
# Create cluster with MapReduce2

Result: History Server stops shortly after being started.

During startup the History Server tries to create {{/mr-history/tmp}}, but fails:

{noformat:title=mapred-mapred-historyserver.log}
2017-08-09 11:54:20,957 INFO  service.AbstractService 
(AbstractService.java:noteFailure(272)) - Service 
org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager failed in state INITED; 
cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating 
intermediate done directory
...
Caused by: org.apache.hadoop.security.AccessControlException: Permission 
denied: user=mapred, access=WRITE, inode="/mr-history/tmp":hdfs:hdfs:drwxr-xr-x
...
2017-08-09 11:54:20,971 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - 
Exiting with status -1
{noformat}

Caused by wrong permissions on {{/mr-history}}:

{noformat:title=trunk}
drwxr-xr-x   - hdfs   hdfs0 2017-08-09 11:54 /mr-history
drwxrwxrwx   - mapred hadoop  0 2017-08-09 11:54 /mr-history/done
{noformat}

{noformat:title=branch-2.5}
drwxrwxrwx   - mapred hadoop  0 2017-08-09 12:26 /mr-history
drwxrwxrwx   - mapred hadoop  0 2017-08-09 12:26 /mr-history/done
drwxrwxrwt   - mapred hadoop  0 2017-08-09 12:26 /mr-history/tmp
{noformat}

In AMBARI-21116 recursive permissions were eliminated for the wrong directory 
in {{YARN/2.1.0.2.0}}: {{mapreduce_jobhistory_done_dir}} instead of 
{{node_labels_dir}}.

Compare:

{noformat:title=YARN/2.1.0.2.0/package/scripts/yarn.py}
    params.HdfsResource(params.mapreduce_jobhistory_done_dir,
                        type="directory",
                        action="create_on_execute",
                        owner=params.mapred_user,
                        group=params.user_group,
-                       change_permissions_for_parents=True,
                        mode=0777
    )
{noformat}

with:

{noformat:title=YARN/3.0.0.3.0/package/scripts/yarn.py}
    params.HdfsResource(params.node_labels_dir,
                        type="directory",
                        action="create_on_execute",
-                       change_permissions_for_parents=True,
                        owner=params.yarn_user,
                        group=params.user_group,
                        mode=0700
    )
{noformat}
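
Based on the comparison above, a minimal sketch of the intended end state for {{YARN/2.1.0.2.0}} (the parameter names are the {{params.*}} values from the snippets; this is not the attached patch): keep recursive parent permissions on the job history done dir so {{/mr-history}} stays accessible, and drop the flag only for the node labels dir, which is what AMBARI-21116 set out to do.

{code:python}
# Sketch only, not the attached patch; params is the stack script's params module.
import params

params.HdfsResource(params.mapreduce_jobhistory_done_dir,
                    type="directory",
                    action="create_on_execute",
                    owner=params.mapred_user,
                    group=params.user_group,
                    # recursive parent permissions restored, as on branch-2.5
                    change_permissions_for_parents=True,
                    mode=0777)

params.HdfsResource(params.node_labels_dir,
                    type="directory",
                    action="create_on_execute",
                    owner=params.yarn_user,
                    group=params.user_group,
                    # no change_permissions_for_parents here, so /tmp is left alone
                    mode=0700)
{code}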



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21648) Do not use 'dbo' schema name in idempotent Ambari DDL generator for AzureDB.

2017-08-11 Thread Sebastian Toader (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian Toader updated AMBARI-21648:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Do not use 'dbo' schema name in idempotent Ambari DDL generator for AzureDB.
> 
>
> Key: AMBARI-21648
> URL: https://issues.apache.org/jira/browse/AMBARI-21648
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Sebastian Toader
>Assignee: Sebastian Toader
> Fix For: 2.6.0
>
> Attachments: AMBARI-21648.v1.patch
>
>
> Ambari database tables can be created in any AzureDB schema besides the 
> default 'dbo'. The generated DDL SQL will fail if the Ambari database 
> tables are created in a schema other than 'dbo', because it looks up db 
> objects in the 'dbo' schema only.
> If no schema is specified, then the db object will be looked up in the 
> user's default schema.
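
For illustration only (this is not the DDL generator code): an idempotent table-creation guard for SQL Server/AzureDB that leaves the object name unqualified, so OBJECT_ID resolves it in the connected user's default schema instead of a hardcoded 'dbo'. The helper and the example table definition are hypothetical.

{code:python}
def idempotent_create_table(table_name, column_sql):
    """Return T-SQL that creates the table only if it does not exist yet.

    Leaving the table name unqualified lets OBJECT_ID resolve it in the
    connected user's default schema rather than hardcoding 'dbo.<table>'.
    """
    return (
        "IF OBJECT_ID('{table}', 'U') IS NULL\n"
        "BEGIN\n"
        "  CREATE TABLE {table} ({columns});\n"
        "END"
    ).format(table=table_name, columns=column_sql)

print(idempotent_create_table("metainfo", "metainfo_key VARCHAR(255) NOT NULL"))
{code}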



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (AMBARI-21116) Setting yarn.node-labels.fs-store.root-dir to a "path" changes the permission of the "root path"

2017-08-11 Thread Doroszlai, Attila (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila resolved AMBARI-21116.

Resolution: Fixed

Nevermind, will fix in AMBARI-21708.

> Setting yarn.node-labels.fs-store.root-dir to a "path" changes the permission 
> of the "root path"
> 
>
> Key: AMBARI-21116
> URL: https://issues.apache.org/jira/browse/AMBARI-21116
> Project: Ambari
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Andrew Onischuk
>Priority: Critical
> Fix For: 3.0.0
>
>
> 1. Set the following configs to run nodeLabels test via Ambari Rest Call
> {code}
>  yarnProperties = {'yarn.acl.enable': 'true',
>   'yarn.node-labels.enabled' : "True",
>   'yarn.node-labels.fs-store.root-dir': 
> NODE_LABEL_STORE_DIR,
>   'yarn.admin.acl': yarn_user + ',' + qa_user}
> {code}
> where NODE_LABEL_STORE_DIR = "/tmp/node-labels"
> 2. Restart ResourceManager and NodeManagers via Ambari.
> 3. After running these commands, the /tmp directory's permissions change from 
> [drwxrwxrwx   - hdfs   hadoop] to [drwx------   - yarn   hadoop], causing 
> other tests to fail since they can no longer access /tmp.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21704) Upgrade Wizard Has Incorrect Title

2017-08-11 Thread Andrii Tkach (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16123080#comment-16123080
 ] 

Andrii Tkach commented on AMBARI-21704:
---

committed to branch-feature-AMBARI-21450

> Upgrade Wizard Has Incorrect Title
> --
>
> Key: AMBARI-21704
> URL: https://issues.apache.org/jira/browse/AMBARI-21704
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.3
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
>Priority: Critical
> Fix For: 2.5.3
>
> Attachments: AMBARI-21704.patch, Screen Shot 2017-08-03 at 1.53.31 
> PM.png
>
>
> Perform an Express {{PATCH}} upgrade. The upgrade wizard dialog will have an 
> incorrect title of "Upgrade to Express Upgrade". I would have expected 
> something like:
> - Express Upgrade to HDP-2.5.4.0-1234
> - Express Patch Upgrade to HDP-2.5.4.0-1234



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21704) Upgrade Wizard Has Incorrect Title

2017-08-11 Thread Andrii Tkach (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrii Tkach updated AMBARI-21704:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Upgrade Wizard Has Incorrect Title
> --
>
> Key: AMBARI-21704
> URL: https://issues.apache.org/jira/browse/AMBARI-21704
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.3
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
>Priority: Critical
> Fix For: 2.5.3
>
> Attachments: AMBARI-21704.patch, Screen Shot 2017-08-03 at 1.53.31 
> PM.png
>
>
> Perform an Express {{PATCH}} upgrade. The upgrade wizard dialog will have an 
> incorrect title of "Upgrade to Express Upgrade". I would have expected 
> something like:
> - Express Upgrade to HDP-2.5.4.0-1234
> - Express Patch Upgrade to HDP-2.5.4.0-1234



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21708) History Server cannot be started due to wrong permissions of /mr-history

2017-08-11 Thread Doroszlai, Attila (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated AMBARI-21708:
---
Status: Patch Available  (was: In Progress)

> History Server cannot be started due to wrong permissions of /mr-history
> 
>
> Key: AMBARI-21708
> URL: https://issues.apache.org/jira/browse/AMBARI-21708
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 3.0.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: AMBARI-21708.patch
>
>
> Steps to reproduce:
> # Install Ambari from trunk
> # Create cluster with MapReduce2
> Result: History Server stops shortly after being started.
> During startup the History Server tries to create {{/mr-history/tmp}}, but fails:
> {noformat:title=mapred-mapred-historyserver.log}
> 2017-08-09 11:54:20,957 INFO  service.AbstractService 
> (AbstractService.java:noteFailure(272)) - Service 
> org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager failed in state INITED; 
> cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating 
> intermediate done directory
> ...
> Caused by: org.apache.hadoop.security.AccessControlException: Permission 
> denied: user=mapred, access=WRITE, 
> inode="/mr-history/tmp":hdfs:hdfs:drwxr-xr-x
> ...
> 2017-08-09 11:54:20,971 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - 
> Exiting with status -1
> {noformat}
> Caused by wrong permissions on {{/mr-history}}:
> {noformat:title=trunk}
> drwxr-xr-x   - hdfs   hdfs0 2017-08-09 11:54 /mr-history
> drwxrwxrwx   - mapred hadoop  0 2017-08-09 11:54 /mr-history/done
> {noformat}
> {noformat:title=branch-2.5}
> drwxrwxrwx   - mapred hadoop  0 2017-08-09 12:26 /mr-history
> drwxrwxrwx   - mapred hadoop  0 2017-08-09 12:26 /mr-history/done
> drwxrwxrwt   - mapred hadoop  0 2017-08-09 12:26 /mr-history/tmp
> {noformat}
> In AMBARI-21116 recursive permissions were eliminated for the wrong directory 
> in {{YARN/2.1.0.2.0}}: {{mapreduce_jobhistory_done_dir}} instead of 
> {{node_labels_dir}}.
> Compare:
> {noformat:title=YARN/2.1.0.2.0/package/scripts/yarn.py}
>     params.HdfsResource(params.mapreduce_jobhistory_done_dir,
>                         type="directory",
>                         action="create_on_execute",
>                         owner=params.mapred_user,
>                         group=params.user_group,
> -                       change_permissions_for_parents=True,
>                         mode=0777
>     )
> {noformat}
> with:
> {noformat:title=YARN/3.0.0.3.0/package/scripts/yarn.py}
>     params.HdfsResource(params.node_labels_dir,
>                         type="directory",
>                         action="create_on_execute",
> -                       change_permissions_for_parents=True,
>                         owner=params.yarn_user,
>                         group=params.user_group,
>                         mode=0700
>     )
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21708) History Server cannot be started due to wrong permissions of /mr-history

2017-08-11 Thread Doroszlai, Attila (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated AMBARI-21708:
---
Attachment: AMBARI-21708.patch

> History Server cannot be started due to wrong permissions of /mr-history
> 
>
> Key: AMBARI-21708
> URL: https://issues.apache.org/jira/browse/AMBARI-21708
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 3.0.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: AMBARI-21708.patch
>
>
> Steps to reproduce:
> # Install Ambari from trunk
> # Create cluster with MapReduce2
> Result: History Server stops shortly after being started.
> During startup the History Server tries to create {{/mr-history/tmp}}, but fails:
> {noformat:title=mapred-mapred-historyserver.log}
> 2017-08-09 11:54:20,957 INFO  service.AbstractService 
> (AbstractService.java:noteFailure(272)) - Service 
> org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager failed in state INITED; 
> cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating 
> intermediate done directory
> ...
> Caused by: org.apache.hadoop.security.AccessControlException: Permission 
> denied: user=mapred, access=WRITE, 
> inode="/mr-history/tmp":hdfs:hdfs:drwxr-xr-x
> ...
> 2017-08-09 11:54:20,971 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - 
> Exiting with status -1
> {noformat}
> Caused by wrong permissions on {{/mr-history}}:
> {noformat:title=trunk}
> drwxr-xr-x   - hdfs   hdfs0 2017-08-09 11:54 /mr-history
> drwxrwxrwx   - mapred hadoop  0 2017-08-09 11:54 /mr-history/done
> {noformat}
> {noformat:title=branch-2.5}
> drwxrwxrwx   - mapred hadoop  0 2017-08-09 12:26 /mr-history
> drwxrwxrwx   - mapred hadoop  0 2017-08-09 12:26 /mr-history/done
> drwxrwxrwt   - mapred hadoop  0 2017-08-09 12:26 /mr-history/tmp
> {noformat}
> In AMBARI-21116 recursive permissions were eliminated for the wrong directory 
> in {{YARN/2.1.0.2.0}}: {{mapreduce_jobhistory_done_dir}} instead of 
> {{node_labels_dir}}.
> Compare:
> {noformat:title=YARN/2.1.0.2.0/package/scripts/yarn.py}
>     params.HdfsResource(params.mapreduce_jobhistory_done_dir,
>                         type="directory",
>                         action="create_on_execute",
>                         owner=params.mapred_user,
>                         group=params.user_group,
> -                       change_permissions_for_parents=True,
>                         mode=0777
>     )
> {noformat}
> with:
> {noformat:title=YARN/3.0.0.3.0/package/scripts/yarn.py}
>     params.HdfsResource(params.node_labels_dir,
>                         type="directory",
>                         action="create_on_execute",
> -                       change_permissions_for_parents=True,
>                         owner=params.yarn_user,
>                         group=params.user_group,
>                         mode=0700
>     )
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21709) Finalize Warns that it is Permanent Even For PATCH Upgrades

2017-08-11 Thread Andrii Tkach (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrii Tkach updated AMBARI-21709:
--
Attachment: Screen Shot 2017-08-03 at 2.01.52 PM.png

> Finalize Warns that it is Permanent Even For PATCH Upgrades
> ---
>
> Key: AMBARI-21709
> URL: https://issues.apache.org/jira/browse/AMBARI-21709
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.3
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
>Priority: Critical
> Fix For: 2.5.3
>
> Attachments: Screen Shot 2017-08-03 at 2.01.52 PM.png
>
>
> Perform either a {{PATCH}} or {{MAINT}} upgrade and get to Finalize. The 
> upgrade wizard warns that finalization is permanent.
> {quote}
> Your cluster version has been upgraded. Click on Finalize when you are ready 
> to finalize the upgrade and commit to the new version. You are strongly 
> encouraged to run tests on your cluster to ensure it is fully operational 
> before finalizing. You cannot go back to the original version once the 
> upgrade is finalized.
> {quote}
> This is not true for certain upgrade types. Finalization is a required step, 
> yes, but you can still revert {{PATCH}} and {{MAINT}} upgrades that have 
> finalized. The message for these types should read something like:
> {quote}
> The {{PATCH}} upgrade to HDP-2.5.4.0-1234 is ready to be completed. After 
> finalization, the patch can be reverted from the Stacks and Versions page if 
> it is no longer required.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21619) More ResourceManager HA host group placeholders in blueprints

2017-08-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16123299#comment-16123299
 ] 

Hudson commented on AMBARI-21619:
-

FAILURE: Integrated in Jenkins build Ambari-branch-2.6 #2 (See 
[https://builds.apache.org/job/Ambari-branch-2.6/2/])
AMBARI-21619. More ResourceManager HA host group placeholders in (adoroszlai: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=78684fb7cb5058eb5ada6ab8fc8bcf664c24df9e])
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/controller/internal/BlueprintConfigurationProcessor.java
* (edit) 
ambari-server/src/test/java/org/apache/ambari/server/controller/internal/BlueprintConfigurationProcessorTest.java


> More ResourceManager HA host group placeholders in blueprints
> -
>
> Key: AMBARI-21619
> URL: https://issues.apache.org/jira/browse/AMBARI-21619
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
> Fix For: 2.5.3
>
> Attachments: AMBARI-21619.patch
>
>
> Some ResourceManager HA addresses are not replaced during blueprint 
> processing (cluster creation, export).  This may cause failure of starting 
> ResourceManager, or force user to enter specific host names in the blueprint 
> as a workaround.
> {noformat:title=/usr/hdp/current/hadoop-client/conf/yarn-site.xml}
> <property>
>   <name>yarn.resourcemanager.address.rm1</name>
>   <value>%HOSTGROUP::master0%:8050</value>
> </property>
> <property>
>   <name>yarn.resourcemanager.address.rm2</name>
>   <value>%HOSTGROUP::master1%:8050</value>
> </property>
> <property>
>   <name>yarn.resourcemanager.admin.address.rm1</name>
>   <value>%HOSTGROUP::master0%:8141</value>
> </property>
> <property>
>   <name>yarn.resourcemanager.admin.address.rm2</name>
>   <value>%HOSTGROUP::master1%:8141</value>
> </property>
> <property>
>   <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
>   <value>%HOSTGROUP::master0%:8025</value>
> </property>
> <property>
>   <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
>   <value>%HOSTGROUP::master1%:8025</value>
> </property>
> <property>
>   <name>yarn.resourcemanager.scheduler.address.rm1</name>
>   <value>%HOSTGROUP::master0%:8030</value>
> </property>
> <property>
>   <name>yarn.resourcemanager.scheduler.address.rm2</name>
>   <value>%HOSTGROUP::master1%:8030</value>
> </property>
> {noformat}
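
For context, a minimal sketch of the placeholder substitution involved (not the BlueprintConfigurationProcessor code; the host-group-to-host mapping is a made-up example):

{code:python}
import re

HOSTGROUP_RE = re.compile(r"%HOSTGROUP::(?P<group>[^%]+)%")

def resolve_placeholders(value, hosts_by_group):
    # Replace every %HOSTGROUP::<name>% token with the host mapped to that group.
    return HOSTGROUP_RE.sub(lambda m: hosts_by_group[m.group("group")], value)

hosts = {"master0": "master0.example.com", "master1": "master1.example.com"}
print(resolve_placeholders("%HOSTGROUP::master0%:8050", hosts))
# master0.example.com:8050
{code}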



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (AMBARI-21709) Finalize Warns that it is Permanent Even For PATCH Upgrades

2017-08-11 Thread Andrii Tkach (JIRA)
Andrii Tkach created AMBARI-21709:
-

 Summary: Finalize Warns that it is Permanent Even For PATCH 
Upgrades
 Key: AMBARI-21709
 URL: https://issues.apache.org/jira/browse/AMBARI-21709
 Project: Ambari
  Issue Type: Bug
  Components: ambari-web
Affects Versions: 2.5.3
Reporter: Andrii Tkach
Assignee: Andrii Tkach
Priority: Critical
 Fix For: 2.5.3


Perform either a {{PATCH}} or {{MAINT}} upgrade and get to Finalize. The 
upgrade wizard warns that finalization is permanent.

{quote}
Your cluster version has been upgraded. Click on Finalize when you are ready to 
finalize the upgrade and commit to the new version. You are strongly encouraged 
to run tests on your cluster to ensure it is fully operational before 
finalizing. You cannot go back to the original version once the upgrade is 
finalized.
{quote}

This is not true for certain upgrade types. Finalization is a required step, 
yes, but you can still revert {{PATCH}} and {{MAINT}} upgrades that have 
finalized. The message for these types should read something like:

{quote}
The {{PATCH}} upgrade to HDP-2.5.4.0-1234 is ready to be completed. After 
finalization, the patch can be reverted from the Stacks and Versions page if it 
is no longer required."
{quote}
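
Illustrative pseudologic only, written in Python rather than the ambari-web JavaScript, for how the finalize message could be chosen by upgrade type; the wording is taken from the two messages quoted above.

{code:python}
REVERTIBLE_TYPES = {"PATCH", "MAINT"}

def finalize_message(upgrade_type, target_version):
    if upgrade_type in REVERTIBLE_TYPES:
        return ("The {0} upgrade to {1} is ready to be completed. After "
                "finalization, the patch can be reverted from the Stacks and "
                "Versions page if it is no longer required."
                .format(upgrade_type, target_version))
    return ("Your cluster version has been upgraded. Click on Finalize when you "
            "are ready to finalize the upgrade and commit to the new version. "
            "You cannot go back to the original version once the upgrade is "
            "finalized.")

print(finalize_message("PATCH", "HDP-2.5.4.0-1234"))
{code}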




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21608) Spark shell is not working after upgrade

2017-08-11 Thread Doroszlai, Attila (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16123214#comment-16123214
 ] 

Doroszlai, Attila commented on AMBARI-21608:


Reproduced on BI 4.2 _before_ any upgrade.  User {{spark}} cannot access 
{{/tmp/hive}}.  {{spark-shell}} works with user {{hive}} or {{root}}.
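
A quick diagnostic sketch for the observation above (the suggested mode is an assumption, not the ticket's fix): check whether the Hive scratch directory is writable by other users such as {{spark}}.

{code:python}
import subprocess

# Show the permissions of the scratch dir itself (-d lists the directory entry).
subprocess.check_call(["hdfs", "dfs", "-ls", "-d", "/tmp/hive"])

# If it is not world-writable, other users cannot create their session dirs
# under it; opening it up like /tmp (mode 1777) is one common remedy.
# Run as the HDFS superuser; this is an assumption, not the ticket's resolution.
subprocess.check_call(["hdfs", "dfs", "-chmod", "1777", "/tmp/hive"])
{code}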

> Spark shell is not working after upgrade
> 
>
> Key: AMBARI-21608
> URL: https://issues.apache.org/jira/browse/AMBARI-21608
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.5.2
>Reporter: Eric Yang
> Fix For: 2.5.2
>
>
> Spark shell does not work after IOP to HDP upgrade.  This error message shows 
> up when running spark-shell:
> {code}
> 17/07/28 20:50:46 INFO Datastore: The class 
> "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as 
> "embedded-only" so does not have its own datastore table.
> 17/07/28 20:50:46 INFO SessionState: Created local directory: 
> /tmp/cedf0cab-747a-45e8-8c36-53a304027587_resources
> 17/07/28 20:50:46 INFO SessionState: Created HDFS directory: 
> /tmp/hive/spark/cedf0cab-747a-45e8-8c36-53a304027587
> 17/07/28 20:50:46 INFO SessionState: Created local directory: 
> /tmp/spark/cedf0cab-747a-45e8-8c36-53a304027587
> 17/07/28 20:50:46 INFO SessionState: Created HDFS directory: 
> /tmp/hive/spark/cedf0cab-747a-45e8-8c36-53a304027587/_tmp_space.db
> java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: 
> Permission denied
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
>   at 
> org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:209)
>   at 
> org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:238)
>   at 
> org.apache.spark.sql.hive.HiveContext.executionHive$lzycompute(HiveContext.scala:225)
>   at 
> org.apache.spark.sql.hive.HiveContext.executionHive(HiveContext.scala:215)
>   at 
> org.apache.spark.sql.hive.HiveContext.functionRegistry$lzycompute(HiveContext.scala:480)
>   at 
> org.apache.spark.sql.hive.HiveContext.functionRegistry(HiveContext.scala:479)
>   at org.apache.spark.sql.UDFRegistration.<init>(UDFRegistration.scala:40)
>   at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:330)
>   at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:90)
>   at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:101)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1028)
>   at $iwC$$iwC.<init>(<console>:15)
>   at $iwC.<init>(<console>:24)
>   at <init>(<console>:26)
>   at .<init>(<console>:30)
>   at .<clinit>(<console>)
>   at .<init>(<console>:7)
>   at .<clinit>(<console>)
>   at $print(<console>)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
>   at 
> org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
>   at 
> org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
>   at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
>   at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
>   at 
> org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
>   at 
> org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
>   at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
>   at 
> org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:132)
>   at 
> org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
>   at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
>   at 
> org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
>   at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
>   at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
>   at 
> org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
>   at 

[jira] [Updated] (AMBARI-21619) More ResourceManager HA host group placeholders in blueprints

2017-08-11 Thread Doroszlai, Attila (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated AMBARI-21619:
---
Fix Version/s: (was: trunk)

> More ResourceManager HA host group placeholders in blueprints
> -
>
> Key: AMBARI-21619
> URL: https://issues.apache.org/jira/browse/AMBARI-21619
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
> Fix For: 2.5.3
>
> Attachments: AMBARI-21619.patch
>
>
> Some ResourceManager HA addresses are not replaced during blueprint 
> processing (cluster creation, export).  This may cause failure of starting 
> ResourceManager, or force user to enter specific host names in the blueprint 
> as a workaround.
> {noformat:title=/usr/hdp/current/hadoop-client/conf/yarn-site.xml}
> <property>
>   <name>yarn.resourcemanager.address.rm1</name>
>   <value>%HOSTGROUP::master0%:8050</value>
> </property>
> <property>
>   <name>yarn.resourcemanager.address.rm2</name>
>   <value>%HOSTGROUP::master1%:8050</value>
> </property>
> <property>
>   <name>yarn.resourcemanager.admin.address.rm1</name>
>   <value>%HOSTGROUP::master0%:8141</value>
> </property>
> <property>
>   <name>yarn.resourcemanager.admin.address.rm2</name>
>   <value>%HOSTGROUP::master1%:8141</value>
> </property>
> <property>
>   <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
>   <value>%HOSTGROUP::master0%:8025</value>
> </property>
> <property>
>   <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
>   <value>%HOSTGROUP::master1%:8025</value>
> </property>
> <property>
>   <name>yarn.resourcemanager.scheduler.address.rm1</name>
>   <value>%HOSTGROUP::master0%:8030</value>
> </property>
> <property>
>   <name>yarn.resourcemanager.scheduler.address.rm2</name>
>   <value>%HOSTGROUP::master1%:8030</value>
> </property>
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21619) More ResourceManager HA host group placeholders in blueprints

2017-08-11 Thread Doroszlai, Attila (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16123244#comment-16123244
 ] 

Doroszlai, Attila commented on AMBARI-21619:


Committed to 
[branch-2.6|http://git-wip-us.apache.org/repos/asf/ambari/commit/78684fb7cb].

> More ResourceManager HA host group placeholders in blueprints
> -
>
> Key: AMBARI-21619
> URL: https://issues.apache.org/jira/browse/AMBARI-21619
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
> Fix For: 2.5.3
>
> Attachments: AMBARI-21619.patch
>
>
> Some ResourceManager HA addresses are not replaced during blueprint 
> processing (cluster creation, export).  This may cause failure of starting 
> ResourceManager, or force user to enter specific host names in the blueprint 
> as a workaround.
> {noformat:title=/usr/hdp/current/hadoop-client/conf/yarn-site.xml}
> <property>
>   <name>yarn.resourcemanager.address.rm1</name>
>   <value>%HOSTGROUP::master0%:8050</value>
> </property>
> <property>
>   <name>yarn.resourcemanager.address.rm2</name>
>   <value>%HOSTGROUP::master1%:8050</value>
> </property>
> <property>
>   <name>yarn.resourcemanager.admin.address.rm1</name>
>   <value>%HOSTGROUP::master0%:8141</value>
> </property>
> <property>
>   <name>yarn.resourcemanager.admin.address.rm2</name>
>   <value>%HOSTGROUP::master1%:8141</value>
> </property>
> <property>
>   <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
>   <value>%HOSTGROUP::master0%:8025</value>
> </property>
> <property>
>   <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
>   <value>%HOSTGROUP::master1%:8025</value>
> </property>
> <property>
>   <name>yarn.resourcemanager.scheduler.address.rm1</name>
>   <value>%HOSTGROUP::master0%:8030</value>
> </property>
> <property>
>   <name>yarn.resourcemanager.scheduler.address.rm2</name>
>   <value>%HOSTGROUP::master1%:8030</value>
> </property>
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21709) Finalize Warns that it is Permanent Even For PATCH Upgrades

2017-08-11 Thread Andrii Tkach (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrii Tkach updated AMBARI-21709:
--
Attachment: AMBARI-21709.patch

> Finalize Warns that it is Permanent Even For PATCH Upgrades
> ---
>
> Key: AMBARI-21709
> URL: https://issues.apache.org/jira/browse/AMBARI-21709
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.3
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
>Priority: Critical
> Fix For: 2.5.3
>
> Attachments: AMBARI-21709.patch, Screen Shot 2017-08-03 at 2.01.52 
> PM.png
>
>
> Perform either a {{PATCH}} or {{MAINT}} upgrade and get to Finalize. The 
> upgrade wizard warns that finalization is permanent.
> {quote}
> Your cluster version has been upgraded. Click on Finalize when you are ready 
> to finalize the upgrade and commit to the new version. You are strongly 
> encouraged to run tests on your cluster to ensure it is fully operational 
> before finalizing. You cannot go back to the original version once the 
> upgrade is finalized.
> {quote}
> This is not true for certain upgrade types. Finalization is a required step, 
> yes, but you can still revert {{PATCH}} and {{MAINT}} upgrades that have 
> finalized. The message for these types should read something like:
> {quote}
> The {{PATCH}} upgrade to HDP-2.5.4.0-1234 is ready to be completed. After 
> finalization, the patch can be reverted from the Stacks and Versions page if 
> it is no longer required."
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21709) Finalize Warns that it is Permanent Even For PATCH Upgrades

2017-08-11 Thread Andrii Tkach (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16123362#comment-16123362
 ] 

Andrii Tkach commented on AMBARI-21709:
---

30464 passing (25s)
  157 pending

> Finalize Warns that it is Permanent Even For PATCH Upgrades
> ---
>
> Key: AMBARI-21709
> URL: https://issues.apache.org/jira/browse/AMBARI-21709
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.3
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
>Priority: Critical
> Fix For: 2.5.3
>
> Attachments: AMBARI-21709.patch, Screen Shot 2017-08-03 at 2.01.52 
> PM.png
>
>
> Perform either a {{PATCH}} or {{MAINT}} upgrade and get to Finalize. The 
> upgrade wizard warns that finalization is permanent.
> {quote}
> Your cluster version has been upgraded. Click on Finalize when you are ready 
> to finalize the upgrade and commit to the new version. You are strongly 
> encouraged to run tests on your cluster to ensure it is fully operational 
> before finalizing. You cannot go back to the original version once the 
> upgrade is finalized.
> {quote}
> This is not true for certain upgrade types. Finalization is a required step, 
> yes, but you can still revert {{PATCH}} and {{MAINT}} upgrades that have 
> finalized. The message for these types should read something like:
> {quote}
> The {{PATCH}} upgrade to HDP-2.5.4.0-1234 is ready to be completed. After 
> finalization, the patch can be reverted from the Stacks and Versions page if 
> it is no longer required."
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21709) Finalize Warns that it is Permanent Even For PATCH Upgrades

2017-08-11 Thread Andrii Tkach (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrii Tkach updated AMBARI-21709:
--
Status: Patch Available  (was: Open)

> Finalize Warns that it is Permanent Even For PATCH Upgrades
> ---
>
> Key: AMBARI-21709
> URL: https://issues.apache.org/jira/browse/AMBARI-21709
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.3
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
>Priority: Critical
> Fix For: 2.5.3
>
> Attachments: AMBARI-21709.patch, Screen Shot 2017-08-03 at 2.01.52 
> PM.png
>
>
> Perform either a {{PATCH}} or {{MAINT}} upgrade and get to Finalize. The 
> upgrade wizard warns that finalization is permanent.
> {quote}
> Your cluster version has been upgraded. Click on Finalize when you are ready 
> to finalize the upgrade and commit to the new version. You are strongly 
> encouraged to run tests on your cluster to ensure it is fully operational 
> before finalizing. You cannot go back to the original version once the 
> upgrade is finalized.
> {quote}
> This is not true for certain upgrade types. Finalization is a required step, 
> yes, but you can still revert {{PATCH}} and {{MAINT}} upgrades that have 
> finalized. The message for these types should read something like:
> {quote}
> The {{PATCH}} upgrade to HDP-2.5.4.0-1234 is ready to be completed. After 
> finalization, the patch can be reverted from the Stacks and Versions page if 
> it is no longer required."
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21692) dfs.include file is created on all datanode hosts when Ambari manages include/exclude files

2017-08-11 Thread Dmytro Sen (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmytro Sen updated AMBARI-21692:

Attachment: AMBARI-21692_2.patch

> dfs.include file is created on all datanode hosts when Ambari manages 
> include/exclude files
> ---
>
> Key: AMBARI-21692
> URL: https://issues.apache.org/jira/browse/AMBARI-21692
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-admin
>Affects Versions: 2.5.2
>Reporter: Dhanya Balasundaran
>Assignee: Dmytro Sen
> Fix For: 2.5.2
>
> Attachments: AMBARI-21692_2.patch, AMBARI-21692.patch
>
>
> - deploy cluster with default configs
> - Update manage.include.files to true in hdfs-site.xml
> - Add property dfs.hosts=/etc/hadoop/conf/dfs.include in cstm-hdfs-site.xml
> - Create /etc/hadoop/conf/dfs.include on Namenode host
> - Restart all required services
> - dfs.include gets created on all datanode hosts. 
> Tried the same with YARN: yarn.include is present only on the 
> ResourceManager node and doesn't get created on any other nodes (e.g. one 
> with a NodeManager).
> Ideally dfs.include should not be created on all datanode hosts.
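
A minimal sketch of the guard the fix implies (the {{params.*}} attribute names here are assumptions, not the exact variables used in shared_initialization.py): only materialize dfs.include on hosts where HDFS actually reads it.

{code:python}
from resource_management.core.resources.system import File

def create_dfs_include(params):
    # Create the include file only on NameNode hosts, not on every host that
    # runs the before-START hook.
    if params.manage_include_files and params.hostname in params.namenode_hosts:
        File(params.include_file_path,
             content='\n'.join(params.include_hosts_list),
             owner=params.hdfs_user,
             group=params.user_group)
{code}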



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21692) dfs.include file is created on all datanode hosts when Ambari manages include/exclude files

2017-08-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16123149#comment-16123149
 ] 

Hadoop QA commented on AMBARI-21692:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12881433/AMBARI-21692_2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
ambari-server.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/11982//console

This message is automatically generated.

> dfs.include file is created on all datanode hosts when Ambari manages 
> include/exclude files
> ---
>
> Key: AMBARI-21692
> URL: https://issues.apache.org/jira/browse/AMBARI-21692
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-admin
>Affects Versions: 2.5.2
>Reporter: Dhanya Balasundaran
>Assignee: Dmytro Sen
> Fix For: 2.5.2
>
> Attachments: AMBARI-21692_2.patch, AMBARI-21692.patch
>
>
> - deploy cluster with default configs
> - Update manage.include.files to true in hdfs-site.xml
> - Add property dfs.hosts=/etc/hadoop/conf/dfs.include in cstm-hdfs-site.xml
> - Create /etc/hadoop/conf/dfs.include on Namenode host
> - Restart all required services
> - dfs.include gets created on all datanode hosts. 
> Tried the same with YARN: yarn.include is present only on the 
> ResourceManager node and doesn't get created on any other nodes (e.g. one 
> with a NodeManager).
> Ideally dfs.include should not be created on all datanode hosts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21692) dfs.include file is created on all datanode hosts when Ambari manages include/exclude files

2017-08-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16123201#comment-16123201
 ] 

Hudson commented on AMBARI-21692:
-

FAILURE: Integrated in Jenkins build Ambari-trunk-Commit #7881 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/7881/])
AMBARI-21692 dfs.include file is created on all datanode hosts when (dsen: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=10b1efbc40b5d7dcd3d26e153f8e17125be747e8])
* (edit) 
ambari-server/src/main/resources/stacks/BIGTOP/0.8/hooks/before-START/scripts/shared_initialization.py
* (edit) 
ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-START/scripts/shared_initialization.py
* (edit) 
ambari-server/src/main/resources/common-services/HDFS/3.0.0.3.0/package/scripts/hdfs_snamenode.py
* (edit) 
contrib/management-packs/hdf-ambari-mpack/src/main/resources/stacks/HDF/2.0/hooks/before-START/scripts/shared_initialization.py
* (edit) 
ambari-server/src/main/resources/stacks/BIGTOP/0.8/services/HDFS/package/scripts/hdfs_snamenode.py
* (edit) 
ambari-server/src/main/resources/stacks/HDP/3.0/hooks/before-START/scripts/shared_initialization.py
* (edit) 
ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_snamenode.py
* (edit) 
contrib/management-packs/odpi-ambari-mpack/src/main/resources/stacks/ODPi/2.0/hooks/before-START/scripts/shared_initialization.py


> dfs.include file is created on all datanode hosts when Ambari manages 
> include/exclude files
> ---
>
> Key: AMBARI-21692
> URL: https://issues.apache.org/jira/browse/AMBARI-21692
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-admin
>Affects Versions: 2.5.2
>Reporter: Dhanya Balasundaran
>Assignee: Dmytro Sen
> Fix For: 2.5.2
>
> Attachments: AMBARI-21692_2.patch, AMBARI-21692.patch
>
>
> - deploy cluster with default configs
> - Update manage.include.files to true in hdfs-site.xml
> - Add property dfs.hosts=/etc/hadoop/conf/dfs.include in cstm-hdfs-site.xml
> - Create /etc/hadoop/conf/dfs.include on Namenode host
> - Restart all required services
> - dfs.include gets created on all datanode hosts. 
> Tried the same with YARN: yarn.include is present only on the 
> ResourceManager node and doesn't get created on any other nodes (e.g. one 
> with a NodeManager).
> Ideally dfs.include should not be created on all datanode hosts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21692) dfs.include file is created on all datanode hosts when Ambari manages include/exclude files

2017-08-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16123203#comment-16123203
 ] 

Hudson commented on AMBARI-21692:
-

SUCCESS: Integrated in Jenkins build Ambari-branch-2.5 #1806 (See 
[https://builds.apache.org/job/Ambari-branch-2.5/1806/])
AMBARI-21692 dfs.include file is created on all datanode hosts when (dsen: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=aa9d866e7c4df1bdff665bec3154e3731cd8f5a7])
* (edit) 
ambari-server/src/main/resources/stacks/BIGTOP/0.8/services/HDFS/package/scripts/hdfs_snamenode.py
* (edit) 
contrib/management-packs/odpi-ambari-mpack/src/main/resources/stacks/ODPi/2.0/hooks/before-START/scripts/shared_initialization.py
* (edit) 
ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_snamenode.py
* (edit) 
ambari-server/src/main/resources/stacks/BIGTOP/0.8/hooks/before-START/scripts/shared_initialization.py
* (edit) 
ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-START/scripts/shared_initialization.py


> dfs.include file is created on all datanode hosts when Ambari manages 
> include/exclude files
> ---
>
> Key: AMBARI-21692
> URL: https://issues.apache.org/jira/browse/AMBARI-21692
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-admin
>Affects Versions: 2.5.2
>Reporter: Dhanya Balasundaran
>Assignee: Dmytro Sen
> Fix For: 2.5.2
>
> Attachments: AMBARI-21692_2.patch, AMBARI-21692.patch
>
>
> - deploy cluster with default configs
> - Update manage.include.files to true in hdfs-site.xml
> - Add property dfs.hosts=/etc/hadoop/conf/dfs.include in cstm-hdfs-site.xml
> - Create /etc/hadoop/conf/dfs.include on Namenode host
> - Restart all required services
> - dfs.include gets created on all datanode hosts. 
> Tried the same with YARN: yarn.include is present only on the 
> ResourceManager node and doesn't get created on any other nodes (e.g. one 
> with a NodeManager).
> Ideally dfs.include should not be created on all datanode hosts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21692) dfs.include file is created on all datanode hosts when Ambari manages include/exclude files

2017-08-11 Thread Dmytro Sen (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16123209#comment-16123209
 ] 

Dmytro Sen commented on AMBARI-21692:
-

Test failure is not caused by the commit

> dfs.include file is created on all datanode hosts when Ambari manages 
> include/exclude files
> ---
>
> Key: AMBARI-21692
> URL: https://issues.apache.org/jira/browse/AMBARI-21692
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-admin
>Affects Versions: 2.5.2
>Reporter: Dhanya Balasundaran
>Assignee: Dmytro Sen
> Fix For: 2.5.2
>
> Attachments: AMBARI-21692_2.patch, AMBARI-21692.patch
>
>
> - deploy cluster with default configs
> - Update manage.include.files to true in hdfs-site.xml
> - Add property dfs.hosts=/etc/hadoop/conf/dfs.include in cstm-hdfs-site.xml
> - Create /etc/hadoop/conf/dfs.include on Namenode host
> - Restart all required services
> - dfs.include gets created on all datanode hosts. 
> Tried the same with YARN: yarn.include is present only on the 
> ResourceManager node and doesn't get created on any other nodes (e.g. one 
> with a NodeManager).
> Ideally dfs.include should not be created on all datanode hosts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21703) UI must consume API to show whether a service will be upgraded

2017-08-11 Thread Antonenko Alexander (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antonenko Alexander updated AMBARI-21703:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> UI must consume API to show whether a service will be upgraded
> --
>
> Key: AMBARI-21703
> URL: https://issues.apache.org/jira/browse/AMBARI-21703
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.3
>Reporter: Antonenko Alexander
>Assignee: Antonenko Alexander
> Fix For: 2.5.3
>
> Attachments: AMBARI-21703.patch
>
>
> use the API to indicate if a service is going to be upgraded based on its 
> version.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21682) Patched Service Doesn't Display Correct Hadoop Version on Stacks Page

2017-08-11 Thread Antonenko Alexander (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16123239#comment-16123239
 ] 

Antonenko Alexander commented on AMBARI-21682:
--

committed to branch-feature-AMBARI-21450


> Patched Service Doesn't Display Correct Hadoop Version on Stacks Page
> -
>
> Key: AMBARI-21682
> URL: https://issues.apache.org/jira/browse/AMBARI-21682
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.3
>Reporter: Antonenko Alexander
>Assignee: Antonenko Alexander
> Fix For: 2.5.3
>
> Attachments: AMBARI-21682.patch
>
>
> STR:
> - Install ZK, YARN, HDFS with HDP 2.5.0.0
> - Apply a {{PATCH}} VDF for ZooKeeper for 2.5.4.0
> Notice that on the stacks/versions page, the columns show ZooKeeper as green 
> for both versions.
> Also - notice that when clicking on the hosts number to see where a 
> particular repository is installed, the text that is displayed sounds a 
> little off. It says 
> {quote}
> "HDP-2.5.4.0-121 Version is Current on 3 hosts"
> {quote}
> We can probably just drop the word "version". And maybe change the word 
> "Current" to applied. Something like this:
> {quote}
> "HDP-2.5.4.0-121 is applied to 3 hosts"
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21703) UI must consume API to show whether a service will be upgraded

2017-08-11 Thread Antonenko Alexander (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antonenko Alexander updated AMBARI-21703:
-
Fix Version/s: (was: 2.6.0)
   2.5.3

> UI must consume API to show whether a service will be upgraded
> --
>
> Key: AMBARI-21703
> URL: https://issues.apache.org/jira/browse/AMBARI-21703
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.3
>Reporter: Antonenko Alexander
>Assignee: Antonenko Alexander
> Fix For: 2.5.3
>
> Attachments: AMBARI-21703.patch
>
>
> use the API to indicate if a service is going to be upgraded based on its 
> version.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21703) UI must consume API to show whether a service will be upgraded

2017-08-11 Thread Antonenko Alexander (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antonenko Alexander updated AMBARI-21703:
-
Affects Version/s: (was: 2.6.0)
   2.5.3

> UI must consume API to show whether a service will be upgraded
> --
>
> Key: AMBARI-21703
> URL: https://issues.apache.org/jira/browse/AMBARI-21703
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.3
>Reporter: Antonenko Alexander
>Assignee: Antonenko Alexander
> Fix For: 2.5.3
>
> Attachments: AMBARI-21703.patch
>
>
> use the API to indicate if a service is going to be upgraded based on its 
> version.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21703) UI must consume API to show whether a service will be upgraded

2017-08-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16123190#comment-16123190
 ] 

Hadoop QA commented on AMBARI-21703:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12881216/AMBARI-21703.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/11983//console

This message is automatically generated.

> UI must consume API to show whether a service will be upgraded
> --
>
> Key: AMBARI-21703
> URL: https://issues.apache.org/jira/browse/AMBARI-21703
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.3
>Reporter: Antonenko Alexander
>Assignee: Antonenko Alexander
> Fix For: 2.5.3
>
> Attachments: AMBARI-21703.patch
>
>
> use the API to indicate if a service is going to be upgraded based on its 
> version.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21076) Move superset as a top-level module in HDP

2017-08-11 Thread Nishant Bangarwa (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishant Bangarwa updated AMBARI-21076:
--
Status: Patch Available  (was: Open)

> Move superset as a top-level module in HDP
> --
>
> Key: AMBARI-21076
> URL: https://issues.apache.org/jira/browse/AMBARI-21076
> Project: Ambari
>  Issue Type: Task
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
> Attachments: AMBARI-21076.1.patch, AMBARI-21076.patch
>
>
> Superset is a generic UI that can work with multiple data stores, e.g. Hive, 
> Druid, and any other data store that supports SQLAlchemy dialects. 
> Currently Superset is installed as a master component under Druid. 
> This task is to move Superset out of Druid so that it can be installed and 
> managed independently of Druid. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Issue Comment Deleted] (AMBARI-21682) Patched Service Doesn't Display Correct Hadoop Version on Stacks Page

2017-08-11 Thread Antonenko Alexander (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antonenko Alexander updated AMBARI-21682:
-
Comment: was deleted

(was: committed to branch-feature-AMBARI-21450
)

> Patched Service Doesn't Display Correct Hadoop Version on Stacks Page
> -
>
> Key: AMBARI-21682
> URL: https://issues.apache.org/jira/browse/AMBARI-21682
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.3
>Reporter: Antonenko Alexander
>Assignee: Antonenko Alexander
> Fix For: 2.5.3
>
> Attachments: AMBARI-21682.patch
>
>
> STR:
> - Install ZK, YARN, HDFS with HDP 2.5.0.0
> - Apply a {{PATCH}} VDF for ZooKeeper for 2.5.4.0
> Notice that on the stacks/versions page, the columns show ZooKeeper as green 
> for both versions.
> Also - notice that when clicking on the hosts number to see where a 
> particular repository is installed, the text that is displayed sounds a 
> little off. It says 
> {quote}
> "HDP-2.5.4.0-121 Version is Current on 3 hosts"
> {quote}
> We can probably just drop the word "version". And maybe change the word 
> "Current" to applied. Something like this:
> {quote}
> "HDP-2.5.4.0-121 is applied to 3 hosts"
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21703) UI must consume API to show whether a service will be upgraded

2017-08-11 Thread Antonenko Alexander (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16123241#comment-16123241
 ] 

Antonenko Alexander commented on AMBARI-21703:
--

committed to branch-feature-AMBARI-21450


> UI must consume API to show whether a service will be upgraded
> --
>
> Key: AMBARI-21703
> URL: https://issues.apache.org/jira/browse/AMBARI-21703
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.3
>Reporter: Antonenko Alexander
>Assignee: Antonenko Alexander
> Fix For: 2.5.3
>
> Attachments: AMBARI-21703.patch
>
>
> use the API to indicate if a service is going to be upgraded based on its 
> version.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21692) dfs.include file is created on all datanode hosts when Ambari manages include/exclude files

2017-08-11 Thread Dmytro Sen (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16123163#comment-16123163
 ] 

Dmytro Sen commented on AMBARI-21692:
-

Committed to trunk, branch-2.5, branch-2.6


> dfs.include file is created on all datanode hosts when Ambari manages 
> include/exclude files
> ---
>
> Key: AMBARI-21692
> URL: https://issues.apache.org/jira/browse/AMBARI-21692
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-admin
>Affects Versions: 2.5.2
>Reporter: Dhanya Balasundaran
>Assignee: Dmytro Sen
> Fix For: 2.5.2
>
> Attachments: AMBARI-21692_2.patch, AMBARI-21692.patch
>
>
> - deploy cluster with default configs
> - Update manage.include.files to true in hdfs-site.xml
> - Add property dfs.hosts=/etc/hadoop/conf/dfs.include in cstm-hdfs-site.xml
> - Create /etc/hadoop/conf/dfs.include on Namenode host
> - Restart all required services
> - dfs.include gets created on all datanode hosts. 
> Tried the same with YARN: yarn.include is present only on the 
> ResourceManager node and doesn't get created on any other nodes (e.g. one 
> with a NodeManager).
> Ideally dfs.include should not be created on all datanode hosts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21692) dfs.include file is created on all datanode hosts when Ambari manages include/exclude files

2017-08-11 Thread Dmytro Sen (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmytro Sen updated AMBARI-21692:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> dfs.include file is created on all datanode hosts when Ambari manages 
> include/exclude files
> ---
>
> Key: AMBARI-21692
> URL: https://issues.apache.org/jira/browse/AMBARI-21692
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-admin
>Affects Versions: 2.5.2
>Reporter: Dhanya Balasundaran
>Assignee: Dmytro Sen
> Fix For: 2.5.2
>
> Attachments: AMBARI-21692_2.patch, AMBARI-21692.patch
>
>
> - deploy cluster with default configs
> - Update manage.include.files to true in hdfs-site.xml
> - Add property dfs.hosts=/etc/hadoop/conf/dfs.include in cstm-hdfs-site.xml
> - Create /etc/hadoop/conf/dfs.include on Namenode host
> - Restart all required services
> - dfs.include gets created on all datanode hosts. 
> Tried the same with YARN: yarn.include is present only on the 
> ResourceManager node and doesn't get created on any other nodes (e.g. one 
> with a NodeManager).
> Ideally dfs.include should not be created on all datanode hosts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21076) Move superset as a top-level module in HDP

2017-08-11 Thread Nishant Bangarwa (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishant Bangarwa updated AMBARI-21076:
--
Attachment: AMBARI-21076.1.patch

> Move superset as a top-level module in HDP
> --
>
> Key: AMBARI-21076
> URL: https://issues.apache.org/jira/browse/AMBARI-21076
> Project: Ambari
>  Issue Type: Task
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
> Attachments: AMBARI-21076.1.patch, AMBARI-21076.patch
>
>
> Superset is a generic UI that can work with multiple data stores, e.g. Hive, 
> Druid, and any other data store that supports SQLAlchemy dialects. 
> Currently Superset is installed as a master component under Druid. 
> This task is to move Superset out of Druid so that it can be installed and 
> managed independently of Druid. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21076) Move superset as a top-level module in HDP

2017-08-11 Thread Nishant Bangarwa (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishant Bangarwa updated AMBARI-21076:
--
Status: Open  (was: Patch Available)

> Move superset as a top-level module in HDP
> --
>
> Key: AMBARI-21076
> URL: https://issues.apache.org/jira/browse/AMBARI-21076
> Project: Ambari
>  Issue Type: Task
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
> Attachments: AMBARI-21076.1.patch, AMBARI-21076.patch
>
>
> Superset is a generic UI which can work with multiple data stores e.g HIVE, 
> DRUID and any other dataStore that supports SQLALCHEMY dialects. 
> Currently superset is installed as a master component under Druid. 
> This task is to move superset out of Druid so that it can be installed and 
> managed independent of Druid. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21703) UI must consume API to show whether a service will be upgraded

2017-08-11 Thread Antonenko Alexander (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antonenko Alexander updated AMBARI-21703:
-
Fix Version/s: (was: 2.5.3)
   2.6.0

> UI must consume API to show whether a service will be upgraded
> --
>
> Key: AMBARI-21703
> URL: https://issues.apache.org/jira/browse/AMBARI-21703
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.6.0
>Reporter: Antonenko Alexander
>Assignee: Antonenko Alexander
> Fix For: 2.6.0
>
> Attachments: AMBARI-21703.patch
>
>
> use the API to indicate if a service is going to be upgraded based on its 
> version.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21703) UI must consume API to show whether a service will be upgraded

2017-08-11 Thread Antonenko Alexander (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antonenko Alexander updated AMBARI-21703:
-
Affects Version/s: (was: 2.5.3)
   2.6.0

> UI must consume API to show whether a service will be upgraded
> --
>
> Key: AMBARI-21703
> URL: https://issues.apache.org/jira/browse/AMBARI-21703
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.6.0
>Reporter: Antonenko Alexander
>Assignee: Antonenko Alexander
> Fix For: 2.6.0
>
> Attachments: AMBARI-21703.patch
>
>
> use the API to indicate if a service is going to be upgraded based on its 
> version.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21703) UI must consume API to show whether a service will be upgraded

2017-08-11 Thread Antonenko Alexander (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16123230#comment-16123230
 ] 

Antonenko Alexander commented on AMBARI-21703:
--

PATCH APPLICATION FAILED, as the patch was applied against trunk.
Manually tested.

> UI must consume API to show whether a service will be upgraded
> --
>
> Key: AMBARI-21703
> URL: https://issues.apache.org/jira/browse/AMBARI-21703
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.6.0
>Reporter: Antonenko Alexander
>Assignee: Antonenko Alexander
> Fix For: 2.6.0
>
> Attachments: AMBARI-21703.patch
>
>
> use the API to indicate if a service is going to be upgraded based on its 
> version.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21702) ambari-agent registration fails due to invalid public hostname

2017-08-11 Thread Michael Davie (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Davie updated AMBARI-21702:
---
Description: 
* The script {{hostname.py}} 
(https://github.com/apache/ambari/blob/79cca1c7184f1661236971dac70d85a83fab6c11/ambari-agent/src/main/python/ambari_agent/hostname.py)
 attempts to retrieve a host's public hostname from AWS from the location 
http://169.254.169.254/latest/meta-data/public-hostname.
* In a non-AWS network with a network proxy present, this request can return an 
HTML login or redirect page, rather than the expected hostname value.
* The script does not validate the length or format of the returned value, and 
submits the returned HTML code to ambari-server as the public hostname.
* Registration of the host fails, as the submitted HTML code exceeds the size 
of the hostname field in the server's database (255 characters).

* A partial manual workaround has been published at 
https://community.hortonworks.com/articles/42872/why-ambari-host-might-have-different-public-host-n.html.

  was:
* The script {{hostname.py}} 
(https://github.com/apache/ambari/blob/79cca1c7184f1661236971dac70d85a83fab6c11/ambari-agent/src/main/python/ambari_agent/hostname.py)
 attempts to retrieve a host's public hostname from AWS from the location 
http://169.254.169.254/latest/meta-data/public-hostname.
* In a non-AWS network with a network proxy present, this request can return an 
HTML login or redirect page, rather than the expected hostname value.
* The script does not validate the length or format of the returned value, and 
submits the returned HTML code to ambari-server as the public hostname.
* Registration of the host fails, as the submitted HTML code exceeds the size 
of the hostname field in the server's database (255 characters).

* A functioning manual workaround has been published at 
https://community.hortonworks.com/articles/42872/why-ambari-host-might-have-different-public-host-n.html.
* An alternative workaround is to set the default gateway of the nodes to the 
IP address of the Ambari server.


> ambari-agent registration fails due to invalid public hostname
> --
>
> Key: AMBARI-21702
> URL: https://issues.apache.org/jira/browse/AMBARI-21702
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-agent
>Affects Versions: 2.6.0
> Environment: Networks with an active web proxy
>Reporter: Michael Davie
>Priority: Critical
>
> * The script {{hostname.py}} 
> (https://github.com/apache/ambari/blob/79cca1c7184f1661236971dac70d85a83fab6c11/ambari-agent/src/main/python/ambari_agent/hostname.py)
>  attempts to retrieve a host's public hostname from AWS from the location 
> http://169.254.169.254/latest/meta-data/public-hostname.
> * In a non-AWS network with a network proxy present, this request can return 
> an HTML login or redirect page, rather than the expected hostname value.
> * The script does not validate the length or format of the returned value, 
> and submits the returned HTML code to ambari-server as the public hostname.
> * Registration of the host fails, as the submitted HTML code exceeds the size 
> of the hostname field in the server's database (255 characters).
> * A partial manual workaround has been published at 
> https://community.hortonworks.com/articles/42872/why-ambari-host-might-have-different-public-host-n.html.
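
The missing validation can be illustrated with a short, self-contained sketch: a length and format check applied to the metadata response before it is trusted as a public hostname (the 255-character limit matches the database column size mentioned above). This is illustrative only, not the actual patch for hostname.py.

{code}
# Sketch only: validate a value returned by the EC2 metadata endpoint before
# using it as the public hostname. Illustrative, not the actual Ambari patch.
import re
import socket

# RFC 1123-style hostname: dot-separated labels of letters, digits and hyphens.
HOSTNAME_RE = re.compile(r'^(?=.{1,255}$)'
                         r'[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?'
                         r'(\.[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?)*$')

def sanitize_public_hostname(metadata_response):
    """Return the metadata value if it looks like a hostname, else a fallback."""
    value = (metadata_response or "").strip()
    if value and len(value) <= 255 and HOSTNAME_RE.match(value):
        return value
    # An HTML login/redirect page from a proxy fails the check; fall back to the
    # locally resolved FQDN instead of sending markup to ambari-server.
    return socket.getfqdn()
{code}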



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (AMBARI-21630) Delete datanode operation shows up as Decommission in bgops

2017-08-11 Thread Aravindan Vijayan (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan reassigned AMBARI-21630:
--

Assignee: Dmytro Sen

> Delete datanode operation shows up as Decommission in bgops
> ---
>
> Key: AMBARI-21630
> URL: https://issues.apache.org/jira/browse/AMBARI-21630
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-admin
>Affects Versions: 2.5.2
>Reporter: Dhanya Balasundaran
>Assignee: Dmytro Sen
> Fix For: 2.5.2
>
>
> - Stop and delete a datanode from any cluster
> - Navigate to bgops to check the operation
> - First op shown is "Update Include and Exclude Files for HDFS"
> - If we click further on this parent operation to see the final op, it shows 
> as Decommission



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (AMBARI-21631) Hostname and operation shown in bgops is incorrect when Ambari manages include/exclude files

2017-08-11 Thread Aravindan Vijayan (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan resolved AMBARI-21631.

Resolution: Duplicate
  Assignee: Dmytro Sen

> Hostname and operation shown in bgops is incorrect when Ambari manages 
> include/exclude files
> 
>
> Key: AMBARI-21631
> URL: https://issues.apache.org/jira/browse/AMBARI-21631
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-admin
>Affects Versions: 2.5.2
>Reporter: Dhanya Balasundaran
>Assignee: Dmytro Sen
> Fix For: 2.5.2
>
>
> This is found while recommissioning a datanode and installing a new datanode 
> while Ambari manages include/exclude files. Please find the STR below:
> - Deploy a cluster
> - Set manage.include.files property in Advanced hdfs-site to true
> - Add dfs.hosts file to a valid path
> - Stop and delete a datanode from one of the hosts
> - Install Datanode back on the same node
> - 2 bgops are triggered : one for Installing Datanode and second for "Update 
> Include and Exclude Files for [HDFS]"
> - Click through "Update Include and Exclude Files for [HDFS]" to see the host 
> where operation is triggered. 
> Issue1:
> We can see that even though the node where it was added is 'host1' it shows 
> up as 'host2' in 'Hosts' and 'Tasks' in bgops window
> Issue2:
> Op was to add/install datanode. But it shows up as Recommission



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21471) ATS going down due to missing org.apache.spark.deploy.history.yarn.plugin.SparkATSPlugin

2017-08-11 Thread Aravindan Vijayan (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16124153#comment-16124153
 ] 

Aravindan Vijayan commented on AMBARI-21471:


[~sumitmohanty] Is there any other work work pending in this jira?

> ATS going down due to missing 
> org.apache.spark.deploy.history.yarn.plugin.SparkATSPlugin
> 
>
> Key: AMBARI-21471
> URL: https://issues.apache.org/jira/browse/AMBARI-21471
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.5.2
>Reporter: Sumit Mohanty
>Assignee: Sumit Mohanty
> Fix For: 2.5.2
>
> Attachments: AMBARI-21471.patch
>
>
> ATS is going down with
> {code}
> 2017-07-12 02:48:01,542 FATAL 
> applicationhistoryservice.ApplicationHistoryServer 
> (ApplicationHistoryServer.java:launchAppHistoryServer(177)) - Error starting 
> ApplicationHistoryServer
> java.lang.RuntimeException: No class defined for 
> org.apache.spark.deploy.history.yarn.plugin.SparkATSPlugin
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.loadPlugIns(EntityGroupFSTimelineStore.java:256)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.serviceInit(EntityGroupFSTimelineStore.java:196)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
> at 
> org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.serviceInit(ApplicationHistoryServer.java:111)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.launchAppHistoryServer(ApplicationHistoryServer.java:174)
> at 
> org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.main(ApplicationHistoryServer.java:184)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.spark.deploy.history.yarn.plugin.SparkATSPlugin
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at 
> org.apache.hadoop.util.ApplicationClassLoader.loadClass(ApplicationClassLoader.java:197)
> at 
> org.apache.hadoop.util.ApplicationClassLoader.loadClass(ApplicationClassLoader.java:165)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.loadPlugIns(EntityGroupFSTimelineStore.java:243)
> ... 7 more
> 2017-07-12 02:48:01,544 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - 
> Exiting with status -1
> 2017-07-12 02:48:01,551 INFO  
> applicationhistoryservice.ApplicationHistoryServer (LogAdapter.java:info(45)) 
> - SHUTDOWN_MSG:
> {code}
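
For context, the plugin list that EntityGroupFSTimelineStore loads is configured via yarn.timeline-service.entity-group-fs-store.group-id-plugin-classes. One way to avoid the ClassNotFoundException is to stop advertising the Spark plugin class when Spark is not part of the cluster; the sketch below shows that idea with a hypothetical helper and a placeholder plugin name, and is not the actual AMBARI-21471 patch (which may instead fix the plugin classpath).

{code}
# Sketch only: drop the Spark ATS plugin class from the timeline-store plugin
# list when Spark is not installed. Hypothetical helper, placeholder class name
# "com.example.OtherTimelinePlugin"; not the actual AMBARI-21471 patch.
SPARK_ATS_PLUGIN = "org.apache.spark.deploy.history.yarn.plugin.SparkATSPlugin"

def filter_timeline_plugin_classes(plugin_classes, installed_services):
    classes = [c.strip() for c in (plugin_classes or "").split(",") if c.strip()]
    if not {"SPARK", "SPARK2"} & set(installed_services):
        classes = [c for c in classes if c != SPARK_ATS_PLUGIN]
    return ",".join(classes)

# Example: with no Spark service installed, only the other plugin remains.
print(filter_timeline_plugin_classes(
    "com.example.OtherTimelinePlugin," + SPARK_ATS_PLUGIN,
    ["HDFS", "YARN", "MAPREDUCE2"]))
{code}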



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21706) Fix exception messages whenever empty host list is passed in predicate.

2017-08-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16123956#comment-16123956
 ] 

Hudson commented on AMBARI-21706:
-

FAILURE: Integrated in Jenkins build Ambari-trunk-Commit #7882 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/7882/])
AMBARI-21706 : Fix exception messages whenever empty host list is passed 
(avijayan: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=bc6cbf3596247c79c6a2aad9047ebe6a2d1cf27b])
* (edit) 
ambari-server/src/test/java/org/apache/ambari/server/api/predicate/QueryParserTest.java
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/controller/internal/StackAdvisorResourceProvider.java
* (edit) 
ambari-server/src/test/java/org/apache/ambari/server/controller/internal/StackAdvisorResourceProviderTest.java
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/api/predicate/QueryParser.java
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/api/predicate/operators/InOperator.java


> Fix exception messages whenever empty host list is passed in predicate.
> ---
>
> Key: AMBARI-21706
> URL: https://issues.apache.org/jira/browse/AMBARI-21706
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Critical
> Fix For: 2.6.0
>
> Attachments: AMBARI-21706.patch
>
>
> Cluster installation was stuck on the Customize Services page, which wasn't 
> loaded even after 4000 seconds of waiting.
> ambari-server.log shows:
> {code}
> 10 Aug 2017 09:48:08,356  INFO [pool-18-thread-1] AmbariMetricSinkImpl:95 - 
> No clusters configured.
> 10 Aug 2017 09:49:17,123 ERROR [ambari-client-thread-593] QueryParser:115 - 
> Lowercase host_name value in expression failed with 
> error:java.lang.NullPointerException
> 10 Aug 2017 09:49:17,125 ERROR [ambari-client-thread-593] Request:147 - 
> Unable to compile query predicate: IN operator is missing a required right 
> operand.
> org.apache.ambari.server.api.predicate.InvalidQueryException: IN operator is 
> missing a required right operand.
> at 
> org.apache.ambari.server.api.predicate.operators.InOperator.toPredicate(InOperator.java:50)
> at 
> org.apache.ambari.server.api.predicate.expressions.RelationalExpression.toPredicate(RelationalExpression.java:43)
> at 
> org.apache.ambari.server.api.predicate.QueryParser.parse(QueryParser.java:99)
> at 
> org.apache.ambari.server.api.predicate.PredicateCompiler.compile(PredicateCompiler.java:62)
> at 
> org.apache.ambari.server.api.services.BaseRequest.parseQueryPredicate(BaseRequest.java:344)
> at 
> org.apache.ambari.server.api.services.BaseRequest.process(BaseRequest.java:143)
> {code}
> {code}
> 10 Aug 2017 09:49:17,126  WARN [ambari-client-thread-597] 
> AbstractResourceProvider:134 - Error occurred during preparation of stack 
> advisor request
> java.lang.ClassCastException: java.util.LinkedHashSet cannot be cast to 
> java.util.List
> at 
> org.apache.ambari.server.controller.internal.StackAdvisorResourceProvider.prepareStackAdvisorRequest(StackAdvisorResourceProvider.java:110)
> at 
> org.apache.ambari.server.controller.internal.RecommendationResourceProvider.createResources(RecommendationResourceProvider.java:88)
> at 
> org.apache.ambari.server.controller.internal.ClusterControllerImpl.createResources(ClusterControllerImpl.java:298)
> at 
> org.apache.ambari.server.api.services.persistence.PersistenceManagerImpl.create(PersistenceManagerImpl.java:97)
> at 
> org.apache.ambari.server.api.handlers.CreateHandler.persist(CreateHandler.java:37)
> at 
> org.apache.ambari.server.api.handlers.BaseManagementHandler.handleRequest(BaseManagementHandler.java:73)
> at 
> org.apache.ambari.server.api.services.BaseRequest.process(BaseRequest.java:144)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21076) Move superset as a top-level module in HDP

2017-08-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16123996#comment-16123996
 ] 

Hadoop QA commented on AMBARI-21076:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12881448/AMBARI-21076.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 core tests{color}.  The test build failed in 
[ambari-server|https://builds.apache.org/job/Ambari-trunk-test-patch/11984//artifact/patch-work/testrun_ambari-server.txt]
 

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/11984//console

This message is automatically generated.

> Move superset as a top-level module in HDP
> --
>
> Key: AMBARI-21076
> URL: https://issues.apache.org/jira/browse/AMBARI-21076
> Project: Ambari
>  Issue Type: Task
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
> Attachments: AMBARI-21076.1.patch, AMBARI-21076.patch
>
>
> Superset is a generic UI which can work with multiple data stores e.g HIVE, 
> DRUID and any other dataStore that supports SQLALCHEMY dialects. 
> Currently superset is installed as a master component under Druid. 
> This task is to move superset out of Druid so that it can be installed and 
> managed independent of Druid. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21708) History Server cannot be started due to wrong permissions of /mr-history

2017-08-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16124049#comment-16124049
 ] 

Hadoop QA commented on AMBARI-21708:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12881429/AMBARI-21708.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 core tests{color}.  The test build failed in 
[ambari-server|https://builds.apache.org/job/Ambari-trunk-test-patch/11985//artifact/patch-work/testrun_ambari-server.txt]
 

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/11985//console

This message is automatically generated.

> History Server cannot be started due to wrong permissions of /mr-history
> 
>
> Key: AMBARI-21708
> URL: https://issues.apache.org/jira/browse/AMBARI-21708
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 3.0.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: AMBARI-21708.patch
>
>
> Steps to reproduce:
> # Install Ambari from trunk
> # Create cluster with MapReduce2
> Result: History Server becomes stopped after starting it.
> During startup History Server tries to create {{/mr-history/tmp}}, but fails:
> {noformat:title=mapred-mapred-historyserver.log}
> 2017-08-09 11:54:20,957 INFO  service.AbstractService 
> (AbstractService.java:noteFailure(272)) - Service 
> org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager failed in state INITED; 
> cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating 
> intermediate done directory
> ...
> Caused by: org.apache.hadoop.security.AccessControlException: Permission 
> denied: user=mapred, access=WRITE, 
> inode="/mr-history/tmp":hdfs:hdfs:drwxr-xr-x
> ...
> 2017-08-09 11:54:20,971 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - 
> Exiting with status -1
> {noformat}
> Caused by wrong permissions on {{/mr-history}}:
> {noformat:title=trunk}
> drwxr-xr-x   - hdfs   hdfs0 2017-08-09 11:54 /mr-history
> drwxrwxrwx   - mapred hadoop  0 2017-08-09 11:54 /mr-history/done
> {noformat}
> {noformat:title=branch-2.5}
> drwxrwxrwx   - mapred hadoop  0 2017-08-09 12:26 /mr-history
> drwxrwxrwx   - mapred hadoop  0 2017-08-09 12:26 /mr-history/done
> drwxrwxrwt   - mapred hadoop  0 2017-08-09 12:26 /mr-history/tmp
> {noformat}
> In AMBARI-21116 recursive permissions were eliminated for the wrong directory 
> in {{YARN/2.1.0.2.0}}: {{mapreduce_jobhistory_done_dir}} instead of 
> {{node_labels_dir}}.
> Compare:
> {noformat:title=YARN/2.1.0.2.0/package/scripts/yarn.py}
> params.HdfsResource(params.mapreduce_jobhistory_done_dir,
>     type="directory",
>     action="create_on_execute",
>     owner=params.mapred_user,
>     group=params.user_group,
> -   change_permissions_for_parents=True,
>     mode=0777
> )
> {noformat}
> with:
> {noformat:title=YARN/3.0.0.3.0/package/scripts/yarn.py}
> params.HdfsResource(params.node_labels_dir,
>     type="directory",
>     action="create_on_execute",
> -   change_permissions_for_parents=True,
>     owner=params.yarn_user,
>     group=params.user_group,
>     mode=0700
> )
> {noformat}
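
Based on the branch-2.5 listing above, it is the done-dir resource that needs its parent directories created with permissive mode so that /mr-history/tmp can later be created by the mapred user. The sketch below restores the recursive flag on that resource, mirroring the snippets quoted above; it is not necessarily identical to the committed patch.

{code}
# Sketch only: the done dir with recursive parent permissions restored, which
# matches the branch-2.5 listing above (drwxrwxrwx /mr-history). Not
# necessarily identical to the committed AMBARI-21708 change.
params.HdfsResource(params.mapreduce_jobhistory_done_dir,
                    type="directory",
                    action="create_on_execute",
                    owner=params.mapred_user,
                    group=params.user_group,
                    change_permissions_for_parents=True,
                    mode=0777
)
# The stack scripts typically flush the batched HDFS operations afterwards:
params.HdfsResource(None, action="execute")
{code}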



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21345) Add host doesn't fully add a node when include/exclude files are used

2017-08-11 Thread Aravindan Vijayan (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16124145#comment-16124145
 ] 

Aravindan Vijayan commented on AMBARI-21345:


[~dsen] What is the status of the jira?

> Add host doesn't fully add a node when include/exclude files are used
> -
>
> Key: AMBARI-21345
> URL: https://issues.apache.org/jira/browse/AMBARI-21345
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Reporter: Paul Codding
>Assignee: Dmytro Sen
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21345_5.patch, AMBARI-21345_additional_2.patch
>
>
> When using dfs.include/dfs.exclude files for HDFS and 
> yarn.include/yarn.exclude for YARN, we need to ensure these files are updated 
> whenever a host is added or removed, and we should also make sure su -l hdfs 
> -c "hdfs dfsadmin -refreshNodes" for HDFS and su -l yarn -c "yarn rmadmin 
> -refreshNodes" for YARN is run after the host has been added and the 
> corresponding HDFS/YARN files are updated.
> Options
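
The refresh step described above maps directly onto the resource_management Execute resource. A minimal sketch, using the usual hdfs_user/yarn_user parameter names from the stack scripts and assuming it runs inside an Ambari script Environment (illustrative, not the attached patch):

{code}
# Sketch only: re-read the include/exclude files after they have been rewritten.
# Runs inside an Ambari script Environment; not the actual AMBARI-21345 patch.
from resource_management.core.resources.system import Execute

def refresh_nodes(params):
    # HDFS: make the NameNode re-read dfs.include / dfs.exclude
    Execute("hdfs dfsadmin -refreshNodes", user=params.hdfs_user)
    # YARN: make the ResourceManager re-read yarn.include / yarn.exclude
    Execute("yarn rmadmin -refreshNodes", user=params.yarn_user)
{code}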



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21045) Enable Storm's AutoTGT configs in secure mode

2017-08-11 Thread Aravindan Vijayan (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16124160#comment-16124160
 ] 

Aravindan Vijayan commented on AMBARI-21045:


[~sriharsha] Is there any work pending on this jira?

> Enable Storm's AutoTGT configs in secure mode
> -
>
> Key: AMBARI-21045
> URL: https://issues.apache.org/jira/browse/AMBARI-21045
> Project: Ambari
>  Issue Type: Bug
>Reporter: Sriharsha Chintalapani
>Assignee: Sriharsha Chintalapani
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21045.branch-2.5.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21471) ATS going down due to missing org.apache.spark.deploy.history.yarn.plugin.SparkATSPlugin

2017-08-11 Thread Sumit Mohanty (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sumit Mohanty updated AMBARI-21471:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> ATS going down due to missing 
> org.apache.spark.deploy.history.yarn.plugin.SparkATSPlugin
> 
>
> Key: AMBARI-21471
> URL: https://issues.apache.org/jira/browse/AMBARI-21471
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.5.2
>Reporter: Sumit Mohanty
>Assignee: Sumit Mohanty
> Fix For: 2.5.2
>
> Attachments: AMBARI-21471.patch
>
>
> ATS is going down with
> {code}
> 2017-07-12 02:48:01,542 FATAL 
> applicationhistoryservice.ApplicationHistoryServer 
> (ApplicationHistoryServer.java:launchAppHistoryServer(177)) - Error starting 
> ApplicationHistoryServer
> java.lang.RuntimeException: No class defined for 
> org.apache.spark.deploy.history.yarn.plugin.SparkATSPlugin
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.loadPlugIns(EntityGroupFSTimelineStore.java:256)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.serviceInit(EntityGroupFSTimelineStore.java:196)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
> at 
> org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.serviceInit(ApplicationHistoryServer.java:111)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.launchAppHistoryServer(ApplicationHistoryServer.java:174)
> at 
> org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.main(ApplicationHistoryServer.java:184)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.spark.deploy.history.yarn.plugin.SparkATSPlugin
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at 
> org.apache.hadoop.util.ApplicationClassLoader.loadClass(ApplicationClassLoader.java:197)
> at 
> org.apache.hadoop.util.ApplicationClassLoader.loadClass(ApplicationClassLoader.java:165)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.loadPlugIns(EntityGroupFSTimelineStore.java:243)
> ... 7 more
> 2017-07-12 02:48:01,544 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - 
> Exiting with status -1
> 2017-07-12 02:48:01,551 INFO  
> applicationhistoryservice.ApplicationHistoryServer (LogAdapter.java:info(45)) 
> - SHUTDOWN_MSG:
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21706) Fix exception messages whenever empty host list is passed in predicate.

2017-08-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16123931#comment-16123931
 ] 

Hudson commented on AMBARI-21706:
-

FAILURE: Integrated in Jenkins build Ambari-branch-2.6 #4 (See 
[https://builds.apache.org/job/Ambari-branch-2.6/4/])
AMBARI-21706 : Fix exception messages whenever empty host list is passed 
(avijayan: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=0630899ce80dcc2cbac76090480e33b592658f04])
* (edit) 
ambari-server/src/test/java/org/apache/ambari/server/api/predicate/QueryParserTest.java
* (edit) 
ambari-server/src/test/java/org/apache/ambari/server/controller/internal/StackAdvisorResourceProviderTest.java
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/api/predicate/QueryParser.java
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/controller/internal/StackAdvisorResourceProvider.java
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/api/predicate/operators/InOperator.java


> Fix exception messages whenever empty host list is passed in predicate.
> ---
>
> Key: AMBARI-21706
> URL: https://issues.apache.org/jira/browse/AMBARI-21706
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Critical
> Fix For: 2.6.0
>
> Attachments: AMBARI-21706.patch
>
>
> Cluster installation was stuck on the Customize Services page, which wasn't 
> loaded even after 4000 seconds of waiting.
> ambari-server.log shows:
> {code}
> 10 Aug 2017 09:48:08,356  INFO [pool-18-thread-1] AmbariMetricSinkImpl:95 - 
> No clusters configured.
> 10 Aug 2017 09:49:17,123 ERROR [ambari-client-thread-593] QueryParser:115 - 
> Lowercase host_name value in expression failed with 
> error:java.lang.NullPointerException
> 10 Aug 2017 09:49:17,125 ERROR [ambari-client-thread-593] Request:147 - 
> Unable to compile query predicate: IN operator is missing a required right 
> operand.
> org.apache.ambari.server.api.predicate.InvalidQueryException: IN operator is 
> missing a required right operand.
> at 
> org.apache.ambari.server.api.predicate.operators.InOperator.toPredicate(InOperator.java:50)
> at 
> org.apache.ambari.server.api.predicate.expressions.RelationalExpression.toPredicate(RelationalExpression.java:43)
> at 
> org.apache.ambari.server.api.predicate.QueryParser.parse(QueryParser.java:99)
> at 
> org.apache.ambari.server.api.predicate.PredicateCompiler.compile(PredicateCompiler.java:62)
> at 
> org.apache.ambari.server.api.services.BaseRequest.parseQueryPredicate(BaseRequest.java:344)
> at 
> org.apache.ambari.server.api.services.BaseRequest.process(BaseRequest.java:143)
> {code}
> {code}
> 10 Aug 2017 09:49:17,126  WARN [ambari-client-thread-597] 
> AbstractResourceProvider:134 - Error occurred during preparation of stack 
> advisor request
> java.lang.ClassCastException: java.util.LinkedHashSet cannot be cast to 
> java.util.List
> at 
> org.apache.ambari.server.controller.internal.StackAdvisorResourceProvider.prepareStackAdvisorRequest(StackAdvisorResourceProvider.java:110)
> at 
> org.apache.ambari.server.controller.internal.RecommendationResourceProvider.createResources(RecommendationResourceProvider.java:88)
> at 
> org.apache.ambari.server.controller.internal.ClusterControllerImpl.createResources(ClusterControllerImpl.java:298)
> at 
> org.apache.ambari.server.api.services.persistence.PersistenceManagerImpl.create(PersistenceManagerImpl.java:97)
> at 
> org.apache.ambari.server.api.handlers.CreateHandler.persist(CreateHandler.java:37)
> at 
> org.apache.ambari.server.api.handlers.BaseManagementHandler.handleRequest(BaseManagementHandler.java:73)
> at 
> org.apache.ambari.server.api.services.BaseRequest.process(BaseRequest.java:144)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21512) Stack Advisor reported an error: KeyError: 'stack_name' while Issued INSTALLED as new state for NODEMANAGER

2017-08-11 Thread Aravindan Vijayan (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated AMBARI-21512:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Stack Advisor reported an error: KeyError: 'stack_name' while Issued 
> INSTALLED as new state for NODEMANAGER
> ---
>
> Key: AMBARI-21512
> URL: https://issues.apache.org/jira/browse/AMBARI-21512
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.5.2
>Reporter: Srikanth Janardhan
>Assignee: Sumit Mohanty
> Fix For: 2.5.2
>
> Attachments: AMBARI-21512.patch
>
>
> In build #139, performing an [AUTHORIZED, HOST_ADD_DELETE_COMPONENTS] 
> operation in RBAC for NODEMANAGER failed for the AMBARI_ADMINISTRATOR role.
> Test Logs:
> {code}
> 2017-07-18 03:12:42,062|INFO|MainThread|machine.py:159 - 
> run()||GUID=2d76c48d-01f4-4740-bf4c-a0c9170ca246|INFO: Requesting put on 
> Request path : 
> http://172.27.22.82:8080/api/v1/clusters/cl1/hosts/ctr-e134-1499953498516-15485-01-03.hwx.site/host_components/NODEMANAGER
> 2017-07-18 03:12:43,053|INFO|MainThread|machine.py:159 - 
> run()||GUID=2d76c48d-01f4-4740-bf4c-a0c9170ca246|Jul 18, 2017 3:12:43 AM 
> com.hwx.utils.logging.LogManager log
> 2017-07-18 03:12:43,053|INFO|MainThread|machine.py:159 - 
> run()||GUID=2d76c48d-01f4-4740-bf4c-a0c9170ca246|INFO: Response body : {
> 2017-07-18 03:12:43,053|INFO|MainThread|machine.py:159 - 
> run()||GUID=2d76c48d-01f4-4740-bf4c-a0c9170ca246|"href" : 
> "http://172.27.22.82:8080/api/v1/clusters/cl1/requests/44",
> 2017-07-18 03:12:43,053|INFO|MainThread|machine.py:159 - 
> run()||GUID=2d76c48d-01f4-4740-bf4c-a0c9170ca246|"Requests" : {
> 2017-07-18 03:12:43,053|INFO|MainThread|machine.py:159 - 
> run()||GUID=2d76c48d-01f4-4740-bf4c-a0c9170ca246|"id" : 44,
> 2017-07-18 03:12:43,053|INFO|MainThread|machine.py:159 - 
> run()||GUID=2d76c48d-01f4-4740-bf4c-a0c9170ca246|"status" : "Accepted"
> 2017-07-18 03:12:43,054|INFO|MainThread|machine.py:159 - 
> run()||GUID=2d76c48d-01f4-4740-bf4c-a0c9170ca246|}
> 2017-07-18 03:12:43,054|INFO|MainThread|machine.py:159 - 
> run()||GUID=2d76c48d-01f4-4740-bf4c-a0c9170ca246|}
> 2017-07-18 03:12:43,054|INFO|MainThread|machine.py:159 - 
> run()||GUID=2d76c48d-01f4-4740-bf4c-a0c9170ca246|Jul 18, 2017 3:12:43 AM 
> com.hwx.utils.logging.LogManager log
> 2017-07-18 03:12:43,054|INFO|MainThread|machine.py:159 - 
> run()||GUID=2d76c48d-01f4-4740-bf4c-a0c9170ca246|INFO: Response body : {
> 2017-07-18 03:12:43,054|INFO|MainThread|machine.py:159 - 
> run()||GUID=2d76c48d-01f4-4740-bf4c-a0c9170ca246|"href" : 
> "http://172.27.22.82:8080/api/v1/clusters/cl1/requests/44",
> 2017-07-18 03:12:43,054|INFO|MainThread|machine.py:159 - 
> run()||GUID=2d76c48d-01f4-4740-bf4c-a0c9170ca246|"Requests" : {
> 2017-07-18 03:12:43,054|INFO|MainThread|machine.py:159 - 
> run()||GUID=2d76c48d-01f4-4740-bf4c-a0c9170ca246|"id" : 44,
> 2017-07-18 03:12:43,054|INFO|MainThread|machine.py:159 - 
> run()||GUID=2d76c48d-01f4-4740-bf4c-a0c9170ca246|"status" : "Accepted"
> 2017-07-18 03:12:43,054|INFO|MainThread|machine.py:159 - 
> run()||GUID=2d76c48d-01f4-4740-bf4c-a0c9170ca246|}
> 2017-07-18 03:12:43,054|INFO|MainThread|machine.py:159 - 
> run()||GUID=2d76c48d-01f4-4740-bf4c-a0c9170ca246|}
> 2017-07-18 03:12:43,055|INFO|MainThread|machine.py:159 - 
> run()||GUID=2d76c48d-01f4-4740-bf4c-a0c9170ca246|Jul 18, 2017 3:12:43 AM 
> com.hwx.utils.logging.LogManager log
> 2017-07-18 03:12:43,055|INFO|MainThread|machine.py:159 - 
> run()||GUID=2d76c48d-01f4-4740-bf4c-a0c9170ca246|INFO: Service URL : 
> http://172.27.22.82:8080/api/v1/clusters/cl1/requests/44
> 2017-07-18 03:13:13,099|INFO|MainThread|machine.py:159 - 
> run()||GUID=2d76c48d-01f4-4740-bf4c-a0c9170ca246|Jul 18, 2017 3:13:13 AM 
> com.hwx.utils.logging.LogManager log
> 2017-07-18 03:13:13,099|INFO|MainThread|machine.py:159 - 
> run()||GUID=2d76c48d-01f4-4740-bf4c-a0c9170ca246|INFO: Wait for 30 seconds. 
> Total Wait Time : 30 seconds
> 2017-07-18 03:13:43,390|INFO|MainThread|machine.py:159 - 
> run()||GUID=2d76c48d-01f4-4740-bf4c-a0c9170ca246|Jul 18, 2017 3:13:43 AM 
> com.hwx.utils.logging.LogManager log
> 2017-07-18 03:13:43,390|INFO|MainThread|machine.py:159 - 
> run()||GUID=2d76c48d-01f4-4740-bf4c-a0c9170ca246|INFO: Wait for 30 seconds. 
> Total Wait Time : 60 seconds
> 2017-07-18 03:13:43,521|INFO|MainThread|machine.py:159 - 
> run()||GUID=2d76c48d-01f4-4740-bf4c-a0c9170ca246|Jul 18, 2017 3:13:43 AM 
> com.hwx.utils.logging.LogManager log
> 2017-07-18 03:13:43,521|INFO|MainThread|machine.py:159 - 
> run()||GUID=2d76c48d-01f4-4740-bf4c-a0c9170ca246|SEVERE: Failed task while 
> Issued INSTALLED as new state for NODEMANAGER
> 2017-07-18 

[jira] [Resolved] (AMBARI-21630) Delete datanode operation shows up as Decommission in bgops

2017-08-11 Thread Aravindan Vijayan (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan resolved AMBARI-21630.

Resolution: Duplicate

> Delete datanode operation shows up as Decommission in bgops
> ---
>
> Key: AMBARI-21630
> URL: https://issues.apache.org/jira/browse/AMBARI-21630
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-admin
>Affects Versions: 2.5.2
>Reporter: Dhanya Balasundaran
>Assignee: Dmytro Sen
> Fix For: 2.5.2
>
>
> - Stop and delete a datanode from any cluster
> - Navigate to bgops to check the operation
> - First op shown is "Update Include and Exclude Files for HDFS"
> - If we click further on this parent operation to see the final op, it shows 
> as Decommission



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21706) Fix exception messages whenever empty host list is passed in predicate.

2017-08-11 Thread Aravindan Vijayan (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated AMBARI-21706:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to branch-2.6 and trunk.

> Fix exception messages whenever empty host list is passed in predicate.
> ---
>
> Key: AMBARI-21706
> URL: https://issues.apache.org/jira/browse/AMBARI-21706
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Critical
> Fix For: 2.6.0
>
> Attachments: AMBARI-21706.patch
>
>
> Cluster installation was stuck on the Customize Services page, which wasn't 
> loaded even after 4000 seconds of waiting.
> ambari-server.log shows:
> {code}
> 10 Aug 2017 09:48:08,356  INFO [pool-18-thread-1] AmbariMetricSinkImpl:95 - 
> No clusters configured.
> 10 Aug 2017 09:49:17,123 ERROR [ambari-client-thread-593] QueryParser:115 - 
> Lowercase host_name value in expression failed with 
> error:java.lang.NullPointerException
> 10 Aug 2017 09:49:17,125 ERROR [ambari-client-thread-593] Request:147 - 
> Unable to compile query predicate: IN operator is missing a required right 
> operand.
> org.apache.ambari.server.api.predicate.InvalidQueryException: IN operator is 
> missing a required right operand.
> at 
> org.apache.ambari.server.api.predicate.operators.InOperator.toPredicate(InOperator.java:50)
> at 
> org.apache.ambari.server.api.predicate.expressions.RelationalExpression.toPredicate(RelationalExpression.java:43)
> at 
> org.apache.ambari.server.api.predicate.QueryParser.parse(QueryParser.java:99)
> at 
> org.apache.ambari.server.api.predicate.PredicateCompiler.compile(PredicateCompiler.java:62)
> at 
> org.apache.ambari.server.api.services.BaseRequest.parseQueryPredicate(BaseRequest.java:344)
> at 
> org.apache.ambari.server.api.services.BaseRequest.process(BaseRequest.java:143)
> {code}
> {code}
> 10 Aug 2017 09:49:17,126  WARN [ambari-client-thread-597] 
> AbstractResourceProvider:134 - Error occurred during preparation of stack 
> advisor request
> java.lang.ClassCastException: java.util.LinkedHashSet cannot be cast to 
> java.util.List
> at 
> org.apache.ambari.server.controller.internal.StackAdvisorResourceProvider.prepareStackAdvisorRequest(StackAdvisorResourceProvider.java:110)
> at 
> org.apache.ambari.server.controller.internal.RecommendationResourceProvider.createResources(RecommendationResourceProvider.java:88)
> at 
> org.apache.ambari.server.controller.internal.ClusterControllerImpl.createResources(ClusterControllerImpl.java:298)
> at 
> org.apache.ambari.server.api.services.persistence.PersistenceManagerImpl.create(PersistenceManagerImpl.java:97)
> at 
> org.apache.ambari.server.api.handlers.CreateHandler.persist(CreateHandler.java:37)
> at 
> org.apache.ambari.server.api.handlers.BaseManagementHandler.handleRequest(BaseManagementHandler.java:73)
> at 
> org.apache.ambari.server.api.services.BaseRequest.process(BaseRequest.java:144)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21110) ambari-server setup fails with default postgres

2017-08-11 Thread Aravindan Vijayan (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16124159#comment-16124159
 ] 

Aravindan Vijayan commented on AMBARI-21110:


[~aonishuk] Can you commit this to branch-2.5 and resolve it?

> ambari-server setup fails with default postgres
> ---
>
> Key: AMBARI-21110
> URL: https://issues.apache.org/jira/browse/AMBARI-21110
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.5.2
>
> Attachments: AMBARI-21110.patch
>
>
> This is the first run with Oracle 12. Initially infra sets up Ambari with
> default postgres and later on the Ambari setup happens with Oracle 12. But now the
> first setup with default postgres is failing with the error below:
> 
> 
> 
> root@172.27.24.11 "ambari-server setup 
> --java-home=/base/tools/jdk1.8.0_112 -s"
> 2017-05-22 10:57:54.032 Using python  /usr/bin/python
> 2017-05-22 10:57:54.032 Setup ambari-server
> 2017-05-22 10:58:14.299 Checking SELinux...
> 2017-05-22 10:58:14.299 SELinux status is 'disabled'
> 2017-05-22 10:58:14.299 Customize user account for ambari-server daemon 
> [y/n] (n)? 
> 2017-05-22 10:58:14.299 Adjusting ambari-server permissions and 
> ownership...
> 2017-05-22 10:58:14.299 Checking firewall status...
> 2017-05-22 10:58:14.299 FATAL: Could not load 
> /lib/modules/3.10.0-327.13.1.el7.x86_64/modules.dep: No such file or directory
> 2017-05-22 10:58:14.299 iptables v1.4.7: can't initialize iptables table 
> `nat': Permission denied (you must be root)
> 2017-05-22 10:58:14.299 Perhaps iptables or your kernel needs to be 
> upgraded.
> 2017-05-22 10:58:14.299 FATAL: Could not load 
> /lib/modules/3.10.0-327.13.1.el7.x86_64/modules.dep: No such file or directory
> 2017-05-22 10:58:14.299 iptables v1.4.7: can't initialize iptables table 
> `filter': Permission denied (you must be root)
> 2017-05-22 10:58:14.299 Perhaps iptables or your kernel needs to be 
> upgraded.
> 2017-05-22 10:58:14.299 WARNING: iptables is running. Confirm the 
> necessary Ambari ports are accessible. Refer to the Ambari documentation for 
> more details on ports.
> 2017-05-22 10:58:14.299 OK to continue [y/n] (y)? 
> 2017-05-22 10:58:14.299 Checking JDK...
> 2017-05-22 10:58:14.299 WARNING: JAVA_HOME /base/tools/jdk1.8.0_112 must 
> be valid on ALL hosts
> 2017-05-22 10:58:14.299 WARNING: JCE Policy files are required for 
> configuring Kerberos security. If you plan to use Kerberos,please make sure 
> JCE Unlimited Strength Jurisdiction Policy Files are valid on all hosts.
> 2017-05-22 10:58:14.299 Completing setup...
> 2017-05-22 10:58:14.299 Configuring database...
> 2017-05-22 10:58:14.299 Enter advanced database configuration [y/n] (n)? 
> 2017-05-22 10:58:14.299 Configuring database...
> 2017-05-22 10:58:14.299 Default properties detected. Using built-in 
> database.
> 2017-05-22 10:58:14.299 Configuring ambari database...
> 2017-05-22 10:58:14.299 Checking PostgreSQL...
> 2017-05-22 10:58:14.299 Running initdb: This may take up to a minute.
> 2017-05-22 10:58:14.299 Initializing database: [  OK  ]
> 2017-05-22 10:58:14.299 
> 2017-05-22 10:58:14.299 About to start PostgreSQL
> 2017-05-22 10:58:14.299 Configuring local database...
> 2017-05-22 10:58:14.299 Configuring PostgreSQL...
> 2017-05-22 10:58:14.299 Creating schema and user...
> 2017-05-22 10:58:14.299 ERROR: Failed to execute 
> command:['ambari-sudo.sh', 'su', 'postgres', '-', '--command=psql -f 
> /var/lib/ambari-server/resources/Ambari-DDL-Postgres-EMBEDDED-CREATE.sql -v 
> username=\'"ambari"\' -v password="\'bigdata\'" -v dbname="ambari"']
> 2017-05-22 10:58:14.299 ERROR: stderr:could not change directory to 
> "/root"
> 2017-05-22 10:58:14.299 psql: could not connect to server: No such file 
> or directory
> 2017-05-22 10:58:14.299   Is the server running locally and accepting
> 2017-05-22 10:58:14.299   connections on Unix domain socket 
> "/tmp/.s.PGSQL.5432"?
> 2017-05-22 10:58:14.299 
> 2017-05-22 10:58:14.299 ERROR: stdout:
> 2017-05-22 10:58:14.299 failed to execute queries ...retrying (1)
> 2017-05-22 10:58:14.299 Creating schema and user...
> 2017-05-22 10:58:14.299 ERROR: Failed to execute 
> command:['ambari-sudo.sh', 'su', 'postgres', '-', '--command=psql -f 
> /var/lib/ambari-server/resources/Ambari-DDL-Postgres-EMBEDDED-CREATE.sql -v 
> username=\'"ambari"\' -v password="\'bigdata\'" -v dbname="ambari"']
> 2017-05-22 10:58:14.299 ERROR: stderr:could not change directory to 
> "/root"
> 2017-05-22 10:58:14.299 psql: could not connect to server: No such file 
> or directory
> 2017-05-22 10:58:14.299   Is the 
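
The "failed to execute queries ...retrying" lines suggest the DDL is being run before the embedded PostgreSQL server is accepting connections on /tmp/.s.PGSQL.5432. A hedged sketch of a readiness check that could precede the psql call is shown below; pg_isready is a standard PostgreSQL client utility, and this is not the actual Ambari fix.

{code}
# Sketch only: wait until the local PostgreSQL server accepts connections
# on its Unix socket before running the embedded-schema DDL.
# Illustrative, not the actual AMBARI-21110 fix.
import subprocess
import time

def wait_for_local_postgres(timeout_seconds=60, interval=2):
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        # pg_isready exits 0 once the server accepts connections; -h with a
        # directory points it at the Unix socket dir used in the log above.
        if subprocess.call(["pg_isready", "-h", "/tmp", "-p", "5432"]) == 0:
            return True
        time.sleep(interval)
    return False
{code}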

[jira] [Comment Edited] (AMBARI-21471) ATS going down due to missing org.apache.spark.deploy.history.yarn.plugin.SparkATSPlugin

2017-08-11 Thread Aravindan Vijayan (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16124153#comment-16124153
 ] 

Aravindan Vijayan edited comment on AMBARI-21471 at 8/11/17 9:55 PM:
-

[~sumitmohanty] Is there any other work pending in this jira?


was (Author: avijayan):
[~sumitmohanty] Is there any other work work pending in this jira?

> ATS going down due to missing 
> org.apache.spark.deploy.history.yarn.plugin.SparkATSPlugin
> 
>
> Key: AMBARI-21471
> URL: https://issues.apache.org/jira/browse/AMBARI-21471
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.5.2
>Reporter: Sumit Mohanty
>Assignee: Sumit Mohanty
> Fix For: 2.5.2
>
> Attachments: AMBARI-21471.patch
>
>
> ATS is going down with
> {code}
> 2017-07-12 02:48:01,542 FATAL 
> applicationhistoryservice.ApplicationHistoryServer 
> (ApplicationHistoryServer.java:launchAppHistoryServer(177)) - Error starting 
> ApplicationHistoryServer
> java.lang.RuntimeException: No class defined for 
> org.apache.spark.deploy.history.yarn.plugin.SparkATSPlugin
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.loadPlugIns(EntityGroupFSTimelineStore.java:256)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.serviceInit(EntityGroupFSTimelineStore.java:196)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
> at 
> org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.serviceInit(ApplicationHistoryServer.java:111)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.launchAppHistoryServer(ApplicationHistoryServer.java:174)
> at 
> org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.main(ApplicationHistoryServer.java:184)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.spark.deploy.history.yarn.plugin.SparkATSPlugin
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at 
> org.apache.hadoop.util.ApplicationClassLoader.loadClass(ApplicationClassLoader.java:197)
> at 
> org.apache.hadoop.util.ApplicationClassLoader.loadClass(ApplicationClassLoader.java:165)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.loadPlugIns(EntityGroupFSTimelineStore.java:243)
> ... 7 more
> 2017-07-12 02:48:01,544 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - 
> Exiting with status -1
> 2017-07-12 02:48:01,551 INFO  
> applicationhistoryservice.ApplicationHistoryServer (LogAdapter.java:info(45)) 
> - SHUTDOWN_MSG:
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (AMBARI-21710) Yarn-Queues dashboard not displaying data in Grafana.

2017-08-11 Thread Aravindan Vijayan (JIRA)
Aravindan Vijayan created AMBARI-21710:
--

 Summary: Yarn-Queues dashboard not displaying data in Grafana.
 Key: AMBARI-21710
 URL: https://issues.apache.org/jira/browse/AMBARI-21710
 Project: Ambari
  Issue Type: Bug
Affects Versions: 2.5.1
Reporter: Aravindan Vijayan
Assignee: Aravindan Vijayan
Priority: Critical
 Fix For: 2.6.0






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21710) Yarn-Queues dashboard not displaying data in Grafana.

2017-08-11 Thread Aravindan Vijayan (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated AMBARI-21710:
---
Description: 
There is no data, or an error saying 'timeseries data request error'. The API 
call being made in the background turned out to be this: 
https://<>:3000/api/datasources/proxy/1/ws/v1/timeline/metrics?metricNames=yarn.QueueMetrics.Queue=root.AppsRunning._max=resourcemanager=NaNundefined1502465840=1502487440

Note that 'NaNundefined' is prefixed to the 'startTime' value. 

> Yarn-Queues dashboard not displaying data in Grafana.
> -
>
> Key: AMBARI-21710
> URL: https://issues.apache.org/jira/browse/AMBARI-21710
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.5.1
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Critical
> Fix For: 2.6.0
>
>
> There is no data, or an error saying 'timeseries data request error'. The API 
> call being made in the background turned out to be this: 
> https://<>:3000/api/datasources/proxy/1/ws/v1/timeline/metrics?metricNames=yarn.QueueMetrics.Queue=root.AppsRunning._max=resourcemanager=NaNundefined1502465840=1502487440
> Note that 'NaNundefined' is prefixed to the 'startTime' value. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21711) Seeing SQL errors in ambari server log when installing HDF 3.1

2017-08-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16124326#comment-16124326
 ] 

Hudson commented on AMBARI-21711:
-

SUCCESS: Integrated in Jenkins build Ambari-branch-2.5 #1807 (See 
[https://builds.apache.org/job/Ambari-branch-2.5/1807/])
AMBARI-21711. Seeing SQL errors in ambari server log when installing HDF 
(smohanty: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=5f750176f0c1fb88b7577aa69d3054ded6382f93])
* (edit) ambari-server/src/main/resources/host_scripts/alert_disk_space.py


> Seeing SQL errors in ambari server log when installing HDF 3.1
> --
>
> Key: AMBARI-21711
> URL: https://issues.apache.org/jira/browse/AMBARI-21711
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.5.2
>Reporter: Arpit Gupta
>Assignee: Sumit Mohanty
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21711.patch
>
>
> {code}
> 09 Aug 2017 17:38:17,151 ERROR [alert-event-bus-1] 
> AmbariJpaLocalTxnInterceptor:180 - [DETAILED ERROR] Rollback reason:
> Local Exception Stack:
> Exception [EclipseLink-4002] (Eclipse Persistence Services - 
> 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
> Internal Exception: 
> com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: 
> Column 'alert_state' cannot be null
> Error Code: 1048
> Call: INSERT INTO alert_history (alert_id, alert_instance, alert_label, 
> alert_state, alert_text, alert_timestamp, cluster_id, component_name, 
> host_name, service_name, alert_definition_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, 
> ?, ?, ?)
> bind => [11 parameters bound]
> Query: InsertObjectQuery(AlertCurrentEntity{alertId=1, 
> name=ambari_agent_disk_usage, state=null, latestTimestamp=1502300296216})
> at 
> org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:331)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:902)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeNoSelect(DatabaseAccessor.java:964)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:633)
> at 
> org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatch(ParameterizedSQLBatchWritingMechanism.java:149)
> at 
> org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatchedStatements(ParameterizedSQLBatchWritingMechanism.java:134)
> at 
> org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.appendCall(ParameterizedSQLBatchWritingMechanism.java:82)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:605)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeCall(DatabaseAccessor.java:560)
> at 
> org.eclipse.persistence.internal.sessions.AbstractSession.basicExecuteCall(AbstractSession.java:2055)
> at 
> org.eclipse.persistence.sessions.server.ClientSession.executeCall(ClientSession.java:306)
> at 
> org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:242)
> at 
> org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:228)
> at 
> org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.insertObject(DatasourceCallQueryMechanism.java:377)
> at 
> org.eclipse.persistence.internal.queries.StatementQueryMechanism.insertObject(StatementQueryMechanism.java:165)
> at 
> org.eclipse.persistence.internal.queries.StatementQueryMechanism.insertObject(StatementQueryMechanism.java:180)
> at 
> org.eclipse.persistence.internal.queries.DatabaseQueryMechanism.insertObjectForWrite(DatabaseQueryMechanism.java:489)
> at 
> org.eclipse.persistence.queries.InsertObjectQuery.executeCommit(InsertObjectQuery.java:80)
> at 
> org.eclipse.persistence.queries.InsertObjectQuery.executeCommitWithChangeSet(InsertObjectQuery.java:90)
> at 
> org.eclipse.persistence.internal.queries.DatabaseQueryMechanism.executeWriteWithChangeSet(DatabaseQueryMechanism.java:301)
> at 
> org.eclipse.persistence.queries.WriteObjectQuery.executeDatabaseQuery(WriteObjectQuery.java:58)
> at 
> org.eclipse.persistence.queries.DatabaseQuery.execute(DatabaseQuery.java:904)
> at 
> org.eclipse.persistence.queries.DatabaseQuery.executeInUnitOfWork(DatabaseQuery.java:803)
> at 
> 
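
The commit above touches alert_disk_space.py, which is a script-based alert. Such scripts return a (state, [text]) tuple, and a state that is missing or invalid is what can surface as the NULL alert_state rejected by the INSERT in the stack trace. A self-contained illustration of the defensive pattern follows; the thresholds and details are made up, this is not the committed patch.

{code}
# Sketch only: a script-based alert should always return a valid
# (state, [text]) tuple. Thresholds below are illustrative; this is not the
# actual alert_disk_space.py change from AMBARI-21711.
import os

RESULT_STATE_OK = 'OK'
RESULT_STATE_WARNING = 'WARNING'
RESULT_STATE_CRITICAL = 'CRITICAL'
RESULT_STATE_UNKNOWN = 'UNKNOWN'

def execute(configurations={}, parameters=[], host_name=None):
    try:
        st = os.statvfs("/")
        used_percent = 100 - (st.f_bavail * 100 // st.f_blocks)
    except Exception as e:
        # Never fall through without a state: report UNKNOWN with the reason.
        return (RESULT_STATE_UNKNOWN, ["Unable to determine disk usage: %s" % e])

    message = "Disk usage at %d%%" % used_percent
    if used_percent >= 80:
        return (RESULT_STATE_CRITICAL, [message])
    if used_percent >= 50:
        return (RESULT_STATE_WARNING, [message])
    return (RESULT_STATE_OK, [message])
{code}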

[jira] [Created] (AMBARI-21711) Seeing SQL errors in ambari server log when installing HDF 3.1

2017-08-11 Thread Sumit Mohanty (JIRA)
Sumit Mohanty created AMBARI-21711:
--

 Summary: Seeing SQL errors in ambari server log when installing 
HDF 3.1
 Key: AMBARI-21711
 URL: https://issues.apache.org/jira/browse/AMBARI-21711
 Project: Ambari
  Issue Type: Bug
  Components: stacks
Affects Versions: 2.5.2
Reporter: Arpit Gupta
Assignee: Sumit Mohanty
Priority: Critical
 Fix For: 2.5.2


{code}
09 Aug 2017 17:38:17,151 ERROR [alert-event-bus-1] 
AmbariJpaLocalTxnInterceptor:180 - [DETAILED ERROR] Rollback reason:
Local Exception Stack:
Exception [EclipseLink-4002] (Eclipse Persistence Services - 
2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: 
com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: 
Column 'alert_state' cannot be null
Error Code: 1048
Call: INSERT INTO alert_history (alert_id, alert_instance, alert_label, 
alert_state, alert_text, alert_timestamp, cluster_id, component_name, 
host_name, service_name, alert_definition_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, 
?, ?, ?)
bind => [11 parameters bound]
Query: InsertObjectQuery(AlertCurrentEntity{alertId=1, 
name=ambari_agent_disk_usage, state=null, latestTimestamp=1502300296216})
at 
org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:331)
at 
org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:902)
at 
org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeNoSelect(DatabaseAccessor.java:964)
at 
org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:633)
at 
org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatch(ParameterizedSQLBatchWritingMechanism.java:149)
at 
org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatchedStatements(ParameterizedSQLBatchWritingMechanism.java:134)
at 
org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.appendCall(ParameterizedSQLBatchWritingMechanism.java:82)
at 
org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:605)
at 
org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeCall(DatabaseAccessor.java:560)
at 
org.eclipse.persistence.internal.sessions.AbstractSession.basicExecuteCall(AbstractSession.java:2055)
at 
org.eclipse.persistence.sessions.server.ClientSession.executeCall(ClientSession.java:306)
at 
org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:242)
at 
org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:228)
at 
org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.insertObject(DatasourceCallQueryMechanism.java:377)
at 
org.eclipse.persistence.internal.queries.StatementQueryMechanism.insertObject(StatementQueryMechanism.java:165)
at 
org.eclipse.persistence.internal.queries.StatementQueryMechanism.insertObject(StatementQueryMechanism.java:180)
at 
org.eclipse.persistence.internal.queries.DatabaseQueryMechanism.insertObjectForWrite(DatabaseQueryMechanism.java:489)
at 
org.eclipse.persistence.queries.InsertObjectQuery.executeCommit(InsertObjectQuery.java:80)
at 
org.eclipse.persistence.queries.InsertObjectQuery.executeCommitWithChangeSet(InsertObjectQuery.java:90)
at 
org.eclipse.persistence.internal.queries.DatabaseQueryMechanism.executeWriteWithChangeSet(DatabaseQueryMechanism.java:301)
at 
org.eclipse.persistence.queries.WriteObjectQuery.executeDatabaseQuery(WriteObjectQuery.java:58)
at 
org.eclipse.persistence.queries.DatabaseQuery.execute(DatabaseQuery.java:904)
at 
org.eclipse.persistence.queries.DatabaseQuery.executeInUnitOfWork(DatabaseQuery.java:803)
at 
org.eclipse.persistence.queries.ObjectLevelModifyQuery.executeInUnitOfWorkObjectLevelModifyQuery(ObjectLevelModifyQuery.java:108)
at 
org.eclipse.persistence.queries.ObjectLevelModifyQuery.executeInUnitOfWork(ObjectLevelModifyQuery.java:85)
at 
org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.internalExecuteQuery(UnitOfWorkImpl.java:2896)
at 
org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1857)
at 
org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1839)
at 
org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1790)
at 

[jira] [Updated] (AMBARI-21711) Seeing SQL errors in ambari server log when installing HDF 3.1

2017-08-11 Thread Sumit Mohanty (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sumit Mohanty updated AMBARI-21711:
---
Attachment: AMBARI-21711.patch

> Seeing SQL errors in ambari server log when installing HDF 3.1
> --
>
> Key: AMBARI-21711
> URL: https://issues.apache.org/jira/browse/AMBARI-21711
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.5.2
>Reporter: Arpit Gupta
>Assignee: Sumit Mohanty
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21711.patch
>
>
> {code}
> 09 Aug 2017 17:38:17,151 ERROR [alert-event-bus-1] 
> AmbariJpaLocalTxnInterceptor:180 - [DETAILED ERROR] Rollback reason:
> Local Exception Stack:
> Exception [EclipseLink-4002] (Eclipse Persistence Services - 
> 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
> Internal Exception: 
> com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: 
> Column 'alert_state' cannot be null
> Error Code: 1048
> Call: INSERT INTO alert_history (alert_id, alert_instance, alert_label, 
> alert_state, alert_text, alert_timestamp, cluster_id, component_name, 
> host_name, service_name, alert_definition_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, 
> ?, ?, ?)
> bind => [11 parameters bound]
> Query: InsertObjectQuery(AlertCurrentEntity{alertId=1, 
> name=ambari_agent_disk_usage, state=null, latestTimestamp=1502300296216})
> at 
> org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:331)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:902)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeNoSelect(DatabaseAccessor.java:964)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:633)
> at 
> org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatch(ParameterizedSQLBatchWritingMechanism.java:149)
> at 
> org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatchedStatements(ParameterizedSQLBatchWritingMechanism.java:134)
> at 
> org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.appendCall(ParameterizedSQLBatchWritingMechanism.java:82)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:605)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeCall(DatabaseAccessor.java:560)
> at 
> org.eclipse.persistence.internal.sessions.AbstractSession.basicExecuteCall(AbstractSession.java:2055)
> at 
> org.eclipse.persistence.sessions.server.ClientSession.executeCall(ClientSession.java:306)
> at 
> org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:242)
> at 
> org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:228)
> at 
> org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.insertObject(DatasourceCallQueryMechanism.java:377)
> at 
> org.eclipse.persistence.internal.queries.StatementQueryMechanism.insertObject(StatementQueryMechanism.java:165)
> at 
> org.eclipse.persistence.internal.queries.StatementQueryMechanism.insertObject(StatementQueryMechanism.java:180)
> at 
> org.eclipse.persistence.internal.queries.DatabaseQueryMechanism.insertObjectForWrite(DatabaseQueryMechanism.java:489)
> at 
> org.eclipse.persistence.queries.InsertObjectQuery.executeCommit(InsertObjectQuery.java:80)
> at 
> org.eclipse.persistence.queries.InsertObjectQuery.executeCommitWithChangeSet(InsertObjectQuery.java:90)
> at 
> org.eclipse.persistence.internal.queries.DatabaseQueryMechanism.executeWriteWithChangeSet(DatabaseQueryMechanism.java:301)
> at 
> org.eclipse.persistence.queries.WriteObjectQuery.executeDatabaseQuery(WriteObjectQuery.java:58)
> at 
> org.eclipse.persistence.queries.DatabaseQuery.execute(DatabaseQuery.java:904)
> at 
> org.eclipse.persistence.queries.DatabaseQuery.executeInUnitOfWork(DatabaseQuery.java:803)
> at 
> org.eclipse.persistence.queries.ObjectLevelModifyQuery.executeInUnitOfWorkObjectLevelModifyQuery(ObjectLevelModifyQuery.java:108)
> at 
> org.eclipse.persistence.queries.ObjectLevelModifyQuery.executeInUnitOfWork(ObjectLevelModifyQuery.java:85)
> at 
> org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.internalExecuteQuery(UnitOfWorkImpl.java:2896)
> at 
> 

[jira] [Commented] (AMBARI-21711) Seeing SQL errors in ambari server log when installing HDF 3.1

2017-08-11 Thread Jonathan Hurley (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16124238#comment-16124238
 ] 

Jonathan Hurley commented on AMBARI-21711:
--

+1

> Seeing SQL errors in ambari server log when installing HDF 3.1
> --
>
> Key: AMBARI-21711
> URL: https://issues.apache.org/jira/browse/AMBARI-21711
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.5.2
>Reporter: Arpit Gupta
>Assignee: Sumit Mohanty
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21711.patch
>
>
> {code}
> 09 Aug 2017 17:38:17,151 ERROR [alert-event-bus-1] 
> AmbariJpaLocalTxnInterceptor:180 - [DETAILED ERROR] Rollback reason:
> Local Exception Stack:
> Exception [EclipseLink-4002] (Eclipse Persistence Services - 
> 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
> Internal Exception: 
> com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: 
> Column 'alert_state' cannot be null
> Error Code: 1048
> Call: INSERT INTO alert_history (alert_id, alert_instance, alert_label, 
> alert_state, alert_text, alert_timestamp, cluster_id, component_name, 
> host_name, service_name, alert_definition_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, 
> ?, ?, ?)
> bind => [11 parameters bound]
> Query: InsertObjectQuery(AlertCurrentEntity{alertId=1, 
> name=ambari_agent_disk_usage, state=null, latestTimestamp=1502300296216})
> at 
> org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:331)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:902)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeNoSelect(DatabaseAccessor.java:964)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:633)
> at 
> org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatch(ParameterizedSQLBatchWritingMechanism.java:149)
> at 
> org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatchedStatements(ParameterizedSQLBatchWritingMechanism.java:134)
> at 
> org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.appendCall(ParameterizedSQLBatchWritingMechanism.java:82)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:605)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeCall(DatabaseAccessor.java:560)
> at 
> org.eclipse.persistence.internal.sessions.AbstractSession.basicExecuteCall(AbstractSession.java:2055)
> at 
> org.eclipse.persistence.sessions.server.ClientSession.executeCall(ClientSession.java:306)
> at 
> org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:242)
> at 
> org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:228)
> at 
> org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.insertObject(DatasourceCallQueryMechanism.java:377)
> at 
> org.eclipse.persistence.internal.queries.StatementQueryMechanism.insertObject(StatementQueryMechanism.java:165)
> at 
> org.eclipse.persistence.internal.queries.StatementQueryMechanism.insertObject(StatementQueryMechanism.java:180)
> at 
> org.eclipse.persistence.internal.queries.DatabaseQueryMechanism.insertObjectForWrite(DatabaseQueryMechanism.java:489)
> at 
> org.eclipse.persistence.queries.InsertObjectQuery.executeCommit(InsertObjectQuery.java:80)
> at 
> org.eclipse.persistence.queries.InsertObjectQuery.executeCommitWithChangeSet(InsertObjectQuery.java:90)
> at 
> org.eclipse.persistence.internal.queries.DatabaseQueryMechanism.executeWriteWithChangeSet(DatabaseQueryMechanism.java:301)
> at 
> org.eclipse.persistence.queries.WriteObjectQuery.executeDatabaseQuery(WriteObjectQuery.java:58)
> at 
> org.eclipse.persistence.queries.DatabaseQuery.execute(DatabaseQuery.java:904)
> at 
> org.eclipse.persistence.queries.DatabaseQuery.executeInUnitOfWork(DatabaseQuery.java:803)
> at 
> org.eclipse.persistence.queries.ObjectLevelModifyQuery.executeInUnitOfWorkObjectLevelModifyQuery(ObjectLevelModifyQuery.java:108)
> at 
> org.eclipse.persistence.queries.ObjectLevelModifyQuery.executeInUnitOfWork(ObjectLevelModifyQuery.java:85)
> at 
> org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.internalExecuteQuery(UnitOfWorkImpl.java:2896)
>  

[jira] [Resolved] (AMBARI-21711) Seeing SQL errors in ambari server log when installing HDF 3.1

2017-08-11 Thread Sumit Mohanty (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sumit Mohanty resolved AMBARI-21711.

Resolution: Fixed

Committed to trunk, branch-2.5, branch-2.6

> Seeing SQL errors in ambari server log when installing HDF 3.1
> --
>
> Key: AMBARI-21711
> URL: https://issues.apache.org/jira/browse/AMBARI-21711
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.5.2
>Reporter: Arpit Gupta
>Assignee: Sumit Mohanty
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21711.patch
>
>
> {code}
> 09 Aug 2017 17:38:17,151 ERROR [alert-event-bus-1] 
> AmbariJpaLocalTxnInterceptor:180 - [DETAILED ERROR] Rollback reason:
> Local Exception Stack:
> Exception [EclipseLink-4002] (Eclipse Persistence Services - 
> 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
> Internal Exception: 
> com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: 
> Column 'alert_state' cannot be null
> Error Code: 1048
> Call: INSERT INTO alert_history (alert_id, alert_instance, alert_label, 
> alert_state, alert_text, alert_timestamp, cluster_id, component_name, 
> host_name, service_name, alert_definition_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, 
> ?, ?, ?)
> bind => [11 parameters bound]
> Query: InsertObjectQuery(AlertCurrentEntity{alertId=1, 
> name=ambari_agent_disk_usage, state=null, latestTimestamp=1502300296216})
> at 
> org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:331)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:902)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeNoSelect(DatabaseAccessor.java:964)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:633)
> at 
> org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatch(ParameterizedSQLBatchWritingMechanism.java:149)
> at 
> org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatchedStatements(ParameterizedSQLBatchWritingMechanism.java:134)
> at 
> org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.appendCall(ParameterizedSQLBatchWritingMechanism.java:82)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:605)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeCall(DatabaseAccessor.java:560)
> at 
> org.eclipse.persistence.internal.sessions.AbstractSession.basicExecuteCall(AbstractSession.java:2055)
> at 
> org.eclipse.persistence.sessions.server.ClientSession.executeCall(ClientSession.java:306)
> at 
> org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:242)
> at 
> org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:228)
> at 
> org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.insertObject(DatasourceCallQueryMechanism.java:377)
> at 
> org.eclipse.persistence.internal.queries.StatementQueryMechanism.insertObject(StatementQueryMechanism.java:165)
> at 
> org.eclipse.persistence.internal.queries.StatementQueryMechanism.insertObject(StatementQueryMechanism.java:180)
> at 
> org.eclipse.persistence.internal.queries.DatabaseQueryMechanism.insertObjectForWrite(DatabaseQueryMechanism.java:489)
> at 
> org.eclipse.persistence.queries.InsertObjectQuery.executeCommit(InsertObjectQuery.java:80)
> at 
> org.eclipse.persistence.queries.InsertObjectQuery.executeCommitWithChangeSet(InsertObjectQuery.java:90)
> at 
> org.eclipse.persistence.internal.queries.DatabaseQueryMechanism.executeWriteWithChangeSet(DatabaseQueryMechanism.java:301)
> at 
> org.eclipse.persistence.queries.WriteObjectQuery.executeDatabaseQuery(WriteObjectQuery.java:58)
> at 
> org.eclipse.persistence.queries.DatabaseQuery.execute(DatabaseQuery.java:904)
> at 
> org.eclipse.persistence.queries.DatabaseQuery.executeInUnitOfWork(DatabaseQuery.java:803)
> at 
> org.eclipse.persistence.queries.ObjectLevelModifyQuery.executeInUnitOfWorkObjectLevelModifyQuery(ObjectLevelModifyQuery.java:108)
> at 
> org.eclipse.persistence.queries.ObjectLevelModifyQuery.executeInUnitOfWork(ObjectLevelModifyQuery.java:85)
> at 
> 

[jira] [Updated] (AMBARI-21710) Yarn-Queues dashboard not displaying data in Grafana.

2017-08-11 Thread Aravindan Vijayan (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated AMBARI-21710:
---
Fix Version/s: (was: 2.6.0)
   2.5.2

> Yarn-Queues dashboard not displaying data in Grafana.
> -
>
> Key: AMBARI-21710
> URL: https://issues.apache.org/jira/browse/AMBARI-21710
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.5.1
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Critical
> Fix For: 2.5.2
>
>
>  There is either no data, or an error saying 'timeseries data request error'. 
> The API call being made in the background turned out to be this: 
> https://<>:3000/api/datasources/proxy/1/ws/v1/timeline/metrics?metricNames=yarn.QueueMetrics.Queue=root.AppsRunning._max=resourcemanager=NaNundefined1502465840=1502487440
> Note that 'NaNundefined' is prefixed to the 'startTime' value. 
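
The broken request above shows the client prepending a non-numeric prefix to the epoch value. The fix belongs in the Grafana datasource (JavaScript); purely as an illustration of the symptom, the Python sketch below shows the kind of defensive parsing that would strip a 'NaNundefined' prefix, or reject an unusable startTime, before a metrics query is issued. The function name and the epoch-length heuristic are assumptions, not Ambari Metrics code.

{code:python}
# Hypothetical defensive parsing of the 'startTime' query parameter
# (illustration only; not the actual Ambari Metrics collector code).
import re


def parse_start_time(raw):
    """Return startTime in epoch seconds, or None if the value is unusable."""
    if raw is None:
        return None
    # Strip any non-digit prefix such as 'NaNundefined' left behind by a
    # broken client-side concatenation, then require a plausible epoch value.
    match = re.search(r'(\d{10,13})$', raw)
    if not match:
        return None
    value = int(match.group(1))
    # Millisecond timestamps are normalized to seconds.
    return value // 1000 if value > 10**12 else value


assert parse_start_time('NaNundefined1502465840') == 1502465840
assert parse_start_time('1502487440') == 1502487440
assert parse_start_time('undefined') is None
{code}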



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21710) Yarn-Queues dashboard not displaying data in Grafana.

2017-08-11 Thread Aravindan Vijayan (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated AMBARI-21710:
---
Priority: Blocker  (was: Critical)

> Yarn-Queues dashboard not displaying data in Grafana.
> -
>
> Key: AMBARI-21710
> URL: https://issues.apache.org/jira/browse/AMBARI-21710
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.5.1
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Blocker
> Fix For: 2.5.2
>
>
>  There is either no data, or an error saying 'timeseries data request error'. 
> The API call being made in the background turned out to be this: 
> https://<>:3000/api/datasources/proxy/1/ws/v1/timeline/metrics?metricNames=yarn.QueueMetrics.Queue=root.AppsRunning._max=resourcemanager=NaNundefined1502465840=1502487440
> Note that 'NaNundefined' is prefixed to the 'startTime' value. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21711) Seeing SQL errors in ambari server log when installing HDF 3.1

2017-08-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16124319#comment-16124319
 ] 

Hudson commented on AMBARI-21711:
-

FAILURE: Integrated in Jenkins build Ambari-branch-2.6 #5 (See 
[https://builds.apache.org/job/Ambari-branch-2.6/5/])
AMBARI-21711. Seeing SQL errors in ambari server log when installing HDF 
(smohanty: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=7ab5876124db25cc111d80d82afb94b29e607485])
* (edit) ambari-server/src/main/resources/host_scripts/alert_disk_space.py
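
The rollback in the log above happens because an AlertCurrentEntity is persisted with state=null, which violates the NOT NULL constraint on alert_history.alert_state; the committed change is confined to alert_disk_space.py, as listed here. As a hedged illustration only (not the actual patch), the sketch below shows how a script-based Ambari alert can guarantee one of the recognized result states even when the disk check itself fails. The mount point, thresholds and message text are assumptions.

{code:python}
# A minimal sketch, not the committed AMBARI-21711 patch: a script-based
# alert that always returns a recognized result state, so the server never
# tries to persist a null alert_state.
import os

RESULT_STATE_OK = 'OK'
RESULT_STATE_WARNING = 'WARNING'
RESULT_STATE_CRITICAL = 'CRITICAL'
RESULT_STATE_UNKNOWN = 'UNKNOWN'

# Assumed thresholds and mount point; a real script would read these from
# the alert's configurations/parameters.
PERCENT_USED_WARNING = 50.0
PERCENT_USED_CRITICAL = 80.0
CHECKED_PATH = '/'


def execute(configurations={}, parameters={}, host_name=None):
    """Entry point invoked by the agent; returns (state, [label])."""
    try:
        vfs = os.statvfs(CHECKED_PATH)
        total = vfs.f_blocks * vfs.f_frsize
        free = vfs.f_bavail * vfs.f_frsize
        if total <= 0:
            # Fall back to UNKNOWN instead of returning None or an empty state.
            return (RESULT_STATE_UNKNOWN,
                    ['Unable to determine capacity of {0}'.format(CHECKED_PATH)])
        percent_used = 100.0 * (total - free) / total
        if percent_used >= PERCENT_USED_CRITICAL:
            state = RESULT_STATE_CRITICAL
        elif percent_used >= PERCENT_USED_WARNING:
            state = RESULT_STATE_WARNING
        else:
            state = RESULT_STATE_OK
        return (state, ['Capacity Used: {0:.2f}% of {1} bytes ({2})'.format(
            percent_used, total, CHECKED_PATH)])
    except OSError as e:
        # Any failure still yields a valid state rather than a null one.
        return (RESULT_STATE_UNKNOWN,
                ['Disk check failed for {0}: {1}'.format(CHECKED_PATH, str(e))])
{code}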


> Seeing SQL errors in ambari server log when installing HDF 3.1
> --
>
> Key: AMBARI-21711
> URL: https://issues.apache.org/jira/browse/AMBARI-21711
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.5.2
>Reporter: Arpit Gupta
>Assignee: Sumit Mohanty
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21711.patch
>
>
> {code}
> 09 Aug 2017 17:38:17,151 ERROR [alert-event-bus-1] 
> AmbariJpaLocalTxnInterceptor:180 - [DETAILED ERROR] Rollback reason:
> Local Exception Stack:
> Exception [EclipseLink-4002] (Eclipse Persistence Services - 
> 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
> Internal Exception: 
> com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: 
> Column 'alert_state' cannot be null
> Error Code: 1048
> Call: INSERT INTO alert_history (alert_id, alert_instance, alert_label, 
> alert_state, alert_text, alert_timestamp, cluster_id, component_name, 
> host_name, service_name, alert_definition_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, 
> ?, ?, ?)
> bind => [11 parameters bound]
> Query: InsertObjectQuery(AlertCurrentEntity{alertId=1, 
> name=ambari_agent_disk_usage, state=null, latestTimestamp=1502300296216})
> at 
> org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:331)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:902)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeNoSelect(DatabaseAccessor.java:964)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:633)
> at 
> org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatch(ParameterizedSQLBatchWritingMechanism.java:149)
> at 
> org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatchedStatements(ParameterizedSQLBatchWritingMechanism.java:134)
> at 
> org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.appendCall(ParameterizedSQLBatchWritingMechanism.java:82)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:605)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeCall(DatabaseAccessor.java:560)
> at 
> org.eclipse.persistence.internal.sessions.AbstractSession.basicExecuteCall(AbstractSession.java:2055)
> at 
> org.eclipse.persistence.sessions.server.ClientSession.executeCall(ClientSession.java:306)
> at 
> org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:242)
> at 
> org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:228)
> at 
> org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.insertObject(DatasourceCallQueryMechanism.java:377)
> at 
> org.eclipse.persistence.internal.queries.StatementQueryMechanism.insertObject(StatementQueryMechanism.java:165)
> at 
> org.eclipse.persistence.internal.queries.StatementQueryMechanism.insertObject(StatementQueryMechanism.java:180)
> at 
> org.eclipse.persistence.internal.queries.DatabaseQueryMechanism.insertObjectForWrite(DatabaseQueryMechanism.java:489)
> at 
> org.eclipse.persistence.queries.InsertObjectQuery.executeCommit(InsertObjectQuery.java:80)
> at 
> org.eclipse.persistence.queries.InsertObjectQuery.executeCommitWithChangeSet(InsertObjectQuery.java:90)
> at 
> org.eclipse.persistence.internal.queries.DatabaseQueryMechanism.executeWriteWithChangeSet(DatabaseQueryMechanism.java:301)
> at 
> org.eclipse.persistence.queries.WriteObjectQuery.executeDatabaseQuery(WriteObjectQuery.java:58)
> at 
> org.eclipse.persistence.queries.DatabaseQuery.execute(DatabaseQuery.java:904)
> at 
> org.eclipse.persistence.queries.DatabaseQuery.executeInUnitOfWork(DatabaseQuery.java:803)
> at 
> 

[jira] [Commented] (AMBARI-21711) Seeing SQL errors in ambari server log when installing HDF 3.1

2017-08-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16124331#comment-16124331
 ] 

Hudson commented on AMBARI-21711:
-

SUCCESS: Integrated in Jenkins build Ambari-trunk-Commit #7883 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/7883/])
AMBARI-21711. Seeing SQL errors in ambari server log when installing HDF 
(smohanty: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=d9c271ae61c285e3b4e066616f057d857797fec9])
* (edit) ambari-server/src/main/resources/host_scripts/alert_disk_space.py


> Seeing SQL errors in ambari server log when installing HDF 3.1
> --
>
> Key: AMBARI-21711
> URL: https://issues.apache.org/jira/browse/AMBARI-21711
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.5.2
>Reporter: Arpit Gupta
>Assignee: Sumit Mohanty
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21711.patch
>
>
> {code}
> 09 Aug 2017 17:38:17,151 ERROR [alert-event-bus-1] 
> AmbariJpaLocalTxnInterceptor:180 - [DETAILED ERROR] Rollback reason:
> Local Exception Stack:
> Exception [EclipseLink-4002] (Eclipse Persistence Services - 
> 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
> Internal Exception: 
> com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: 
> Column 'alert_state' cannot be null
> Error Code: 1048
> Call: INSERT INTO alert_history (alert_id, alert_instance, alert_label, 
> alert_state, alert_text, alert_timestamp, cluster_id, component_name, 
> host_name, service_name, alert_definition_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, 
> ?, ?, ?)
> bind => [11 parameters bound]
> Query: InsertObjectQuery(AlertCurrentEntity{alertId=1, 
> name=ambari_agent_disk_usage, state=null, latestTimestamp=1502300296216})
> at 
> org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:331)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:902)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeNoSelect(DatabaseAccessor.java:964)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:633)
> at 
> org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatch(ParameterizedSQLBatchWritingMechanism.java:149)
> at 
> org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatchedStatements(ParameterizedSQLBatchWritingMechanism.java:134)
> at 
> org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.appendCall(ParameterizedSQLBatchWritingMechanism.java:82)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:605)
> at 
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeCall(DatabaseAccessor.java:560)
> at 
> org.eclipse.persistence.internal.sessions.AbstractSession.basicExecuteCall(AbstractSession.java:2055)
> at 
> org.eclipse.persistence.sessions.server.ClientSession.executeCall(ClientSession.java:306)
> at 
> org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:242)
> at 
> org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:228)
> at 
> org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.insertObject(DatasourceCallQueryMechanism.java:377)
> at 
> org.eclipse.persistence.internal.queries.StatementQueryMechanism.insertObject(StatementQueryMechanism.java:165)
> at 
> org.eclipse.persistence.internal.queries.StatementQueryMechanism.insertObject(StatementQueryMechanism.java:180)
> at 
> org.eclipse.persistence.internal.queries.DatabaseQueryMechanism.insertObjectForWrite(DatabaseQueryMechanism.java:489)
> at 
> org.eclipse.persistence.queries.InsertObjectQuery.executeCommit(InsertObjectQuery.java:80)
> at 
> org.eclipse.persistence.queries.InsertObjectQuery.executeCommitWithChangeSet(InsertObjectQuery.java:90)
> at 
> org.eclipse.persistence.internal.queries.DatabaseQueryMechanism.executeWriteWithChangeSet(DatabaseQueryMechanism.java:301)
> at 
> org.eclipse.persistence.queries.WriteObjectQuery.executeDatabaseQuery(WriteObjectQuery.java:58)
> at 
> org.eclipse.persistence.queries.DatabaseQuery.execute(DatabaseQuery.java:904)
> at 
> org.eclipse.persistence.queries.DatabaseQuery.executeInUnitOfWork(DatabaseQuery.java:803)
> at 
> 

[jira] [Updated] (AMBARI-21702) ambari-agent registration fails due to invalid public hostname

2017-08-11 Thread Michael Davie (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Davie updated AMBARI-21702:
---
Description: 
* The script {{hostname.py}} 
(https://github.com/apache/ambari/blob/79cca1c7184f1661236971dac70d85a83fab6c11/ambari-agent/src/main/python/ambari_agent/hostname.py)
 attempts to retrieve a host's public hostname from AWS from the location 
http://169.254.169.254/latest/meta-data/public-hostname.
* In a non-AWS network with a network proxy present, this request can return an 
HTML login or redirect page, rather than the expected hostname value.
* The script does not validate the length or format of the returned value, and 
submits the returned HTML code to ambari-server as the public hostname.
* Registration of the host fails, as the submitted HTML code exceeds the size 
of the hostname field in the server's database (255 characters).

* A functioning manual workaround has been published at 
https://community.hortonworks.com/articles/42872/why-ambari-host-might-have-different-public-host-n.html.
* An alternative workaround is to set the default gateway of the nodes to the 
IP address of the Ambari server.

  was:
* The script {{hostname.py}} 
(https://github.com/apache/ambari/blob/79cca1c7184f1661236971dac70d85a83fab6c11/ambari-agent/src/main/python/ambari_agent/hostname.py)
 attempts to retrieve a host's public hostname from AWS from the location 
http://169.254.169.254/latest/meta-data/public-hostname.
* In a non-AWS network with a network proxy present, this request can return an 
HTML login or redirect page, rather than the expected hostname value.
* The script does not validate the length or format of the returned value, and 
submits the returned HTML code to ambari-server as the public hostname.
* Registration of the host fails, as the submitted HTML code exceeds the size 
of the hostname field in the server's database (255 characters).

A functioning manual workaround has been published at 
https://community.hortonworks.com/articles/42872/why-ambari-host-might-have-different-public-host-n.html.


> ambari-agent registration fails due to invalid public hostname
> --
>
> Key: AMBARI-21702
> URL: https://issues.apache.org/jira/browse/AMBARI-21702
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-agent
>Affects Versions: 2.6.0
> Environment: Networks with an active web proxy
>Reporter: Michael Davie
>
> * The script {{hostname.py}} 
> (https://github.com/apache/ambari/blob/79cca1c7184f1661236971dac70d85a83fab6c11/ambari-agent/src/main/python/ambari_agent/hostname.py)
>  attempts to retrieve a host's public hostname from AWS from the location 
> http://169.254.169.254/latest/meta-data/public-hostname.
> * In a non-AWS network with a network proxy present, this request can return 
> an HTML login or redirect page, rather than the expected hostname value.
> * The script does not validate the length or format of the returned value, 
> and submits the returned HTML code to ambari-server as the public hostname.
> * Registration of the host fails, as the submitted HTML code exceeds the size 
> of the hostname field in the server's database (255 characters).
> * A functioning manual workaround has been published at 
> https://community.hortonworks.com/articles/42872/why-ambari-host-might-have-different-public-host-n.html.
> * An alternative workaround is to set the default gateway of the nodes to the 
> IP address of the Ambari server.
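
A minimal sketch of the validation the report asks for, assuming a Python 2 agent and a simple hostname pattern; this is illustrative only, not a committed patch. Anything that does not look like a hostname (for example an HTML login page injected by a proxy) is discarded in favour of the local FQDN.

{code:python}
# Hypothetical guard around the EC2 metadata lookup (illustration only).
import re
import socket
import urllib2  # the ambari-agent of this era runs on Python 2

METADATA_URL = 'http://169.254.169.254/latest/meta-data/public-hostname'
# Hostnames: letters, digits, dots and dashes, at most 255 characters.
HOSTNAME_RE = re.compile(r'^[A-Za-z0-9][A-Za-z0-9.-]{0,254}$')


def public_hostname(timeout=2):
    try:
        value = urllib2.urlopen(METADATA_URL, timeout=timeout).read().strip()
    except Exception:
        return socket.getfqdn()
    # Reject proxy login/redirect pages and oversized or malformed values.
    if not value or len(value) > 255 or '<' in value or not HOSTNAME_RE.match(value):
        return socket.getfqdn()
    return value
{code}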



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21702) ambari-agent registration fails due to invalid public hostname

2017-08-11 Thread Michael Davie (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Davie updated AMBARI-21702:
---
Priority: Critical  (was: Major)

> ambari-agent registration fails due to invalid public hostname
> --
>
> Key: AMBARI-21702
> URL: https://issues.apache.org/jira/browse/AMBARI-21702
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-agent
>Affects Versions: 2.6.0
> Environment: Networks with an active web proxy
>Reporter: Michael Davie
>Priority: Critical
>
> * The script {{hostname.py}} 
> (https://github.com/apache/ambari/blob/79cca1c7184f1661236971dac70d85a83fab6c11/ambari-agent/src/main/python/ambari_agent/hostname.py)
>  attempts to retrieve a host's public hostname from AWS from the location 
> http://169.254.169.254/latest/meta-data/public-hostname.
> * In a non-AWS network with a network proxy present, this request can return 
> an HTML login or redirect page, rather than the expected hostname value.
> * The script does not validate the length or format of the returned value, 
> and submits the returned HTML code to ambari-server as the public hostname.
> * Registration of the host fails, as the submitted HTML code exceeds the size 
> of the hostname field in the server's database (255 characters).
> * A functioning manual workaround has been published at 
> https://community.hortonworks.com/articles/42872/why-ambari-host-might-have-different-public-host-n.html.
> * An alternative workaround is to set the default gateway of the nodes to the 
> IP address of the Ambari server.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21709) Finalize Warns that it is Permanent Even For PATCH Upgrades

2017-08-11 Thread Andrii Babiichuk (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16123422#comment-16123422
 ] 

Andrii Babiichuk commented on AMBARI-21709:
---

+1 for the patch

> Finalize Warns that it is Permanent Even For PATCH Upgrades
> ---
>
> Key: AMBARI-21709
> URL: https://issues.apache.org/jira/browse/AMBARI-21709
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.3
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
>Priority: Critical
> Fix For: 2.5.3
>
> Attachments: AMBARI-21709.patch, Screen Shot 2017-08-03 at 2.01.52 
> PM.png
>
>
> Perform either a {{PATCH}} or {{MAINT}} upgrade and get to Finalize. The 
> upgrade wizard warns that finalization is permanent.
> {quote}
> Your cluster version has been upgraded. Click on Finalize when you are ready 
> to finalize the upgrade and commit to the new version. You are strongly 
> encouraged to run tests on your cluster to ensure it is fully operational 
> before finalizing. You cannot go back to the original version once the 
> upgrade is finalized.
> {quote}
> This is not true for certain upgrade types. Finalization is a required step, 
> yes, but you can still revert {{PATCH}} and {{MAINT}} upgrades that have 
> finalized. The message for these types should read something like:
> {quote}
> The {{PATCH}} upgrade to HDP-2.5.4.0-1234 is ready to be completed. After 
> finalization, the patch can be reverted from the Stacks and Versions page if 
> it is no longer required.
> {quote}
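
The wording change itself lives in ambari-web (JavaScript); purely to make the requested behaviour concrete, the Python sketch below selects a finalize message per upgrade type, with PATCH and MAINT receiving the revertible wording quoted above. The strings and type names mirror the description; everything else is an assumption.

{code:python}
# Illustrative only: choose the finalize-step text by upgrade type.
REVERTIBLE_TYPES = {'PATCH', 'MAINT'}

PERMANENT_MSG = (
    'Your cluster version has been upgraded. Click on Finalize when you are '
    'ready to finalize the upgrade and commit to the new version. You cannot '
    'go back to the original version once the upgrade is finalized.')

REVERTIBLE_MSG = (
    'The {type} upgrade to {version} is ready to be completed. After '
    'finalization, the {type} can be reverted from the Stacks and Versions '
    'page if it is no longer required.')


def finalize_message(upgrade_type, target_version):
    """Return the finalize-step message appropriate for the upgrade type."""
    if upgrade_type in REVERTIBLE_TYPES:
        return REVERTIBLE_MSG.format(type=upgrade_type, version=target_version)
    return PERMANENT_MSG


print(finalize_message('PATCH', 'HDP-2.5.4.0-1234'))
{code}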



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21709) Finalize Warns that it is Permanent Even For PATCH Upgrades

2017-08-11 Thread Andrii Tkach (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16123472#comment-16123472
 ] 

Andrii Tkach commented on AMBARI-21709:
---

committed to branch-feature-AMBARI-21450

> Finalize Warns that it is Permanent Even For PATCH Upgrades
> ---
>
> Key: AMBARI-21709
> URL: https://issues.apache.org/jira/browse/AMBARI-21709
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.3
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
>Priority: Critical
> Fix For: 2.5.3
>
> Attachments: AMBARI-21709.patch, Screen Shot 2017-08-03 at 2.01.52 
> PM.png
>
>
> Perform either a {{PATCH}} or {{MAINT}} upgrade and get to Finalize. The 
> upgrade wizard warns that finalization is permanent.
> {quote}
> Your cluster version has been upgraded. Click on Finalize when you are ready 
> to finalize the upgrade and commit to the new version. You are strongly 
> encouraged to run tests on your cluster to ensure it is fully operational 
> before finalizing. You cannot go back to the original version once the 
> upgrade is finalized.
> {quote}
> This is not true for certain upgrade types. Finalization is a required step, 
> yes, but you can still revert {{PATCH}} and {{MAINT}} upgrades that have 
> finalized. The message for these types should read something like:
> {quote}
> The {{PATCH}} upgrade to HDP-2.5.4.0-1234 is ready to be completed. After 
> finalization, the patch can be reverted from the Stacks and Versions page if 
> it is no longer required.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21709) Finalize Warns that it is Permanent Even For PATCH Upgrades

2017-08-11 Thread Andrii Tkach (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrii Tkach updated AMBARI-21709:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Finalize Warns that it is Permanent Even For PATCH Upgrades
> ---
>
> Key: AMBARI-21709
> URL: https://issues.apache.org/jira/browse/AMBARI-21709
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.3
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
>Priority: Critical
> Fix For: 2.5.3
>
> Attachments: AMBARI-21709.patch, Screen Shot 2017-08-03 at 2.01.52 
> PM.png
>
>
> Perform either a {{PATCH}} or {{MAINT}} upgrade and get to Finalize. The 
> upgrade wizard warns that finalization is permanent.
> {quote}
> Your cluster version has been upgraded. Click on Finalize when you are ready 
> to finalize the upgrade and commit to the new version. You are strongly 
> encouraged to run tests on your cluster to ensure it is fully operational 
> before finalizing. You cannot go back to the original version once the 
> upgrade is finalized.
> {quote}
> This is not true for certain upgrade types. Finalization is a required step, 
> yes, but you can still revert {{PATCH}} and {{MAINT}} upgrades that have 
> finalized. The message for these types should read something like:
> {quote}
> The {{PATCH}} upgrade to HDP-2.5.4.0-1234 is ready to be completed. After 
> finalization, the patch can be reverted from the Stacks and Versions page if 
> it is no longer required.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)