[jira] [Updated] (AMBARI-22306) Cluster state is out of sync after HDP installation with Superset
[ https://issues.apache.org/jira/browse/AMBARI-22306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mahadev konar updated AMBARI-22306:
-----------------------------------
    Fix Version/s: (was: 2.6.0)
                   2.6.1

> Cluster state is out of sync after HDP installation with Superset
> -----------------------------------------------------------------
>
> Key: AMBARI-22306
> URL: https://issues.apache.org/jira/browse/AMBARI-22306
> Project: Ambari
> Issue Type: Bug
> Components: ambari-server
> Affects Versions: 2.6.0
> Environment: ambari-server --version 2.6.0.0-256
> Reporter: Supreeth Sharma
> Assignee: Nishant Bangarwa
> Priority: Blocker
> Fix For: trunk, 2.6.1
>
> Attachments: AMBARI-22306.patch, out_of_sync.png
>
> Live cluster: http://172.27.26.136:8080/#/main/admin/stack/versions
> The cluster goes out of sync after HDP installation with Superset; see the attached screenshot.
> The issue looks similar to https://hortonworks.jira.com/browse/BUG-90028.
> Checked https://github.com/hortonworks/ambari/blob/AMBARI-2.6.0.0/ambari-server/src/main/resources/common-services/SUPERSET/0.15.0/metainfo.xml and 'versionAdvertised' is true:
> {code}
> <versionAdvertised>true</versionAdvertised>
> {code}

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
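[Editorial sketch, not part of the original thread.] The report above checks whether the Superset service definition advertises its version. A minimal way to verify that flag against a stack's metainfo.xml is sketched below; the file here is a stand-in created in a temp directory, since the real path (`.../common-services/SUPERSET/0.15.0/metainfo.xml`) lives inside an Ambari server installation.

```shell
# Create a stand-in metainfo.xml resembling the one referenced in the report.
tmpdir=$(mktemp -d)
cat > "$tmpdir/metainfo.xml" <<'EOF'
<metainfo>
  <services>
    <service>
      <name>SUPERSET</name>
      <versionAdvertised>true</versionAdvertised>
    </service>
  </services>
</metainfo>
EOF

# Extract the versionAdvertised element, as an operator might when
# diagnosing the "out of sync" state described above.
grep -o '<versionAdvertised>[^<]*</versionAdvertised>' "$tmpdir/metainfo.xml"

rm -rf "$tmpdir"
```

If the flag prints as `true` yet the cluster still reports out-of-sync, the problem lies elsewhere (as this JIRA concluded), not in the service definition.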
[jira] [Updated] (AMBARI-22102) Ranger KMS should add proxy user for Spark2 user
[ https://issues.apache.org/jira/browse/AMBARI-22102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mahadev konar updated AMBARI-22102:
-----------------------------------
    Reporter: Yesha Vora  (was: Mugdha Varadkar)

> Ranger KMS should add proxy user for Spark2 user
> ------------------------------------------------
>
> Key: AMBARI-22102
> URL: https://issues.apache.org/jira/browse/AMBARI-22102
> Project: Ambari
> Issue Type: Bug
> Components: ambari-server
> Affects Versions: 2.6.0
> Reporter: Yesha Vora
> Assignee: Mugdha Varadkar
> Fix For: 2.6.0
>
> Attachments: AMBARI-22102.patch, AMBARI-22102-trunk.patch
>
> The Spark2 user needs to be added to the Ranger KMS proxy users in the cluster.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
[jira] [Updated] (AMBARI-21768) Spark History Server uses wrong log dir
[ https://issues.apache.org/jira/browse/AMBARI-21768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mahadev konar updated AMBARI-21768:
-----------------------------------
    Reporter: Yesha Vora  (was: Doroszlai, Attila)

> Spark History Server uses wrong log dir
> ---------------------------------------
>
> Key: AMBARI-21768
> URL: https://issues.apache.org/jira/browse/AMBARI-21768
> Project: Ambari
> Issue Type: Bug
> Components: ambari-server
> Affects Versions: 2.5.2
> Reporter: Yesha Vora
> Assignee: Doroszlai, Attila
> Priority: Blocker
> Fix For: 2.5.2
>
> Attachments: AMBARI-21768.patch
>
> Steps to reproduce:
> # Install BI 4.2.0
> # Upgrade to Ambari 2.5.2
> # Upgrade to HDP 2.6
> # Run some Spark task (eg. {{spark-shell}})
> # Open Spark History Server UI
> Result: Spark History Server shows no jobs, because it reads logs from the wrong directory.
> Note: Test with both default and customized {{spark.eventLog.dir}} configuration.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
[jira] [Commented] (AMBARI-21135) Kafka service fails to start during EU from HDF 2.0.2 to 3.0 while resolving /etc/kafka/conf
[ https://issues.apache.org/jira/browse/AMBARI-21135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16026626#comment-16026626 ]

Mahadev konar commented on AMBARI-21135:
----------------------------------------

+1 for the patch.

> Kafka service fails to start during EU from HDF 2.0.2 to 3.0 while resolving /etc/kafka/conf
> --------------------------------------------------------------------------------------------
>
> Key: AMBARI-21135
> URL: https://issues.apache.org/jira/browse/AMBARI-21135
> Project: Ambari
> Issue Type: Bug
> Components: ambari-agent, ambari-server, stacks
> Affects Versions: 2.5.1
> Reporter: Jayush Luniya
> Assignee: Jayush Luniya
> Fix For: 2.5.2
>
> Attachments: AMBARI-21135.patch
>
> /usr/bin/conf-select was renamed to /usr/bin/hdfconf-select and hence we need to create a symlink from /usr/bin/conf-select -> /usr/bin/hdfconf-select.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
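[Editorial sketch, not part of the original thread.] The fix the report describes is a compatibility symlink from the old tool name to the renamed one. The sketch below demonstrates it in a scratch directory rather than on the real paths (`/usr/bin/conf-select` -> `/usr/bin/hdfconf-select`), and the fake script body is purely illustrative.

```shell
# Scratch directory standing in for /usr/bin.
bindir=$(mktemp -d)

# Simulate the renamed tool (illustrative stub, not the real hdfconf-select).
printf '#!/bin/sh\necho "hdfconf-select invoked"\n' > "$bindir/hdfconf-select"
chmod +x "$bindir/hdfconf-select"

# Recreate the old name as a symlink so existing callers of conf-select
# keep working after the rename.
ln -s "$bindir/hdfconf-select" "$bindir/conf-select"

# Invoking the old name now resolves through the symlink.
"$bindir/conf-select"

rm -rf "$bindir"
```

On the real system the equivalent one-liner would be `ln -s /usr/bin/hdfconf-select /usr/bin/conf-select`, run with appropriate privileges.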
[jira] [Commented] (AMBARI-21131) Add NIFI JAAS Config StackFeatures to HDP StackFeatures
[ https://issues.apache.org/jira/browse/AMBARI-21131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16025610#comment-16025610 ]

Mahadev konar commented on AMBARI-21131:
----------------------------------------

+1 for the patch.

> Add NIFI JAAS Config StackFeatures to HDP StackFeatures
> -------------------------------------------------------
>
> Key: AMBARI-21131
> URL: https://issues.apache.org/jira/browse/AMBARI-21131
> Project: Ambari
> Issue Type: Bug
> Components: ambari-server
> Affects Versions: 2.5.0
> Reporter: Jayush Luniya
> Assignee: Jayush Luniya
> Fix For: 2.5.2
>
> Attachments: AMBARI-21131.patch

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[jira] [Updated] (AMBARI-21121) Missing storm-site.xml in HDP-2.6 stack
[ https://issues.apache.org/jira/browse/AMBARI-21121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mahadev konar updated AMBARI-21121:
-----------------------------------
    Fix Version/s: (was: 2.5.1)
                   2.5.2

> Missing storm-site.xml in HDP-2.6 stack
> ---------------------------------------
>
> Key: AMBARI-21121
> URL: https://issues.apache.org/jira/browse/AMBARI-21121
> Project: Ambari
> Issue Type: Bug
> Reporter: Sriharsha Chintalapani
> Assignee: Sriharsha Chintalapani
> Priority: Blocker
> Fix For: 2.5.2
>
> Attachments: AMBARI-21121.patch

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[jira] [Commented] (AMBARI-21121) Missing storm-site.xml in HDP-2.6 stack
[ https://issues.apache.org/jira/browse/AMBARI-21121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024046#comment-16024046 ]

Mahadev konar commented on AMBARI-21121:
----------------------------------------

+1 for the patch.

> Missing storm-site.xml in HDP-2.6 stack
> ---------------------------------------
>
> Key: AMBARI-21121
> URL: https://issues.apache.org/jira/browse/AMBARI-21121
> Project: Ambari
> Issue Type: Bug
> Reporter: Sriharsha Chintalapani
> Assignee: Sriharsha Chintalapani
> Priority: Blocker
> Fix For: 2.5.1
>
> Attachments: AMBARI-21121.patch

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[jira] [Commented] (AMBARI-19623) Atlas startup failed with ZkTimeoutException exception, zookeeper timeout values are very low.
[ https://issues.apache.org/jira/browse/AMBARI-19623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15830543#comment-15830543 ]

Mahadev konar commented on AMBARI-19623:
----------------------------------------

+1 for the patch.

> Atlas startup failed with ZkTimeoutException exception, zookeeper timeout values are very low.
> ----------------------------------------------------------------------------------------------
>
> Key: AMBARI-19623
> URL: https://issues.apache.org/jira/browse/AMBARI-19623
> Project: Ambari
> Issue Type: Bug
> Components: ambari-server
> Affects Versions: trunk, 2.5.0
> Reporter: Ayub Khan
> Assignee: Ayub Khan
> Priority: Critical
> Fix For: trunk, 2.5.0
>
> Attachments: AMBARI-19623.patch
>
> Currently the zookeeper connect/session timeout values for Atlas are set to 200/400 ms respectively, which are very low and not practical/recommended values. Due to this, Atlas startup fails frequently.
> This jira sets recommended values for the zookeeper timeout configuration:
> {noformat}
> atlas.kafka.zookeeper.connection.timeout.ms=30000
> atlas.kafka.zookeeper.session.timeout.ms=60000
> atlas.audit.zookeeper.session.timeout.ms=60000
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (AMBARI-18258) Update repo base urls for HDP-2.5 stack
[ https://issues.apache.org/jira/browse/AMBARI-18258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15437628#comment-15437628 ]

Mahadev konar commented on AMBARI-18258:
----------------------------------------

+1 for the change.

> Update repo base urls for HDP-2.5 stack
> ---------------------------------------
>
> Key: AMBARI-18258
> URL: https://issues.apache.org/jira/browse/AMBARI-18258
> Project: Ambari
> Issue Type: Bug
> Components: stacks
> Affects Versions: 2.4.0
> Reporter: Sumit Mohanty
> Assignee: Sumit Mohanty
> Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-18258-trunk.patch
>
> Update repo base urls for HDP-2.5 stack.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (AMBARI-18239) oozie.py is reading invalid 'version' attribute which results in not copying required atlas hook jars
[ https://issues.apache.org/jira/browse/AMBARI-18239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433921#comment-15433921 ]

Mahadev konar commented on AMBARI-18239:
----------------------------------------

+1 for the patch.

> oozie.py is reading invalid 'version' attribute which results in not copying required atlas hook jars
> -----------------------------------------------------------------------------------------------------
>
> Key: AMBARI-18239
> URL: https://issues.apache.org/jira/browse/AMBARI-18239
> Project: Ambari
> Issue Type: Bug
> Components: ambari-server
> Affects Versions: trunk, 2.4.0
> Reporter: Ayub Khan
> Assignee: Jayush Luniya
> Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-18239.patch, AMBARI-18239.trunk.patch
>
> *Oozie server start output by ambari-agent is showing this error: "2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this Oozie server doesn't contain directory /usr/hdp/None/atlas/hook/hive/"*
> {noformat}
> 2016-08-23 07:21:53,147 - call returned (0, '')
> 2016-08-23 07:21:53,148 - Execute['/usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020 -locallib /usr/hdp/current/oozie-server/share'] {'path': [u'/usr/hdp/current/oozie-server/bin:/usr/hdp/current/hadoop-client/bin'], 'user': 'oozie'}
> 2016-08-23 07:23:33,091 - HdfsResource['/user/oozie/share'] {'security_enabled': True, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 'user': 'hdfs', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 'recursive_chmod': True, 'action': ['create_on_execute'], 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', u'/apps/falcon'], 'mode': 0755}
> 2016-08-23 07:23:33,093 - Execute['/usr/bin/kinit -kt /etc/security/keytabs/hdfs.headless.keytab h...@example.com'] {'user': 'hdfs'}
> 2016-08-23 07:23:33,170 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : '"'"'http://nat-r7-pcds-falcon-multi-9.openstacklocal:20070/webhdfs/v1/user/oozie/share?op=GETFILESTATUS=hdfs'"'"' 1>/tmp/tmp2xvm99 2>/tmp/tmpwdKIRi''] {'logoutput': None, 'quiet': False}
> 2016-08-23 07:23:33,259 - call returned (0, '')
> 2016-08-23 07:23:33,261 - HdfsResource[None] {'security_enabled': True, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', u'/apps/falcon']}
> 2016-08-23 07:23:33,261 - Execute['cd /var/tmp/oozie && /usr/hdp/current/oozie-server/bin/oozie-start.sh'] {'environment': {'OOZIE_CONFIG': u'/usr/hdp/current/oozie-server/conf'}, 'not_if': "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 'user': 'oozie'}
> 2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this Oozie server doesn't contain directory /usr/hdp/None/atlas/hook/hive/
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (AMBARI-18219) Ambari should use oozied.sh for stopping oozie so that optional catalina args can be provided
[ https://issues.apache.org/jira/browse/AMBARI-18219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15429490#comment-15429490 ]

Mahadev konar commented on AMBARI-18219:
----------------------------------------

+1 for the patch.

> Ambari should use oozied.sh for stopping oozie so that optional catalina args can be provided
> ---------------------------------------------------------------------------------------------
>
> Key: AMBARI-18219
> URL: https://issues.apache.org/jira/browse/AMBARI-18219
> Project: Ambari
> Issue Type: Bug
> Components: ambari-server
> Affects Versions: 2.4.0
> Reporter: Venkat Ranganathan
> Assignee: Venkat Ranganathan
> Priority: Blocker
> Fix For: 2.4.0
>
> Attachments: AMBARI-18219.patch
>
> In some scenarios the oozie stop can take longer, and if an oozie start is then attempted it can fail with "address already in use".

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (AMBARI-17839) Ambari URLStreamProvider doesn't store the cookies
[ https://issues.apache.org/jira/browse/AMBARI-17839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mahadev konar updated AMBARI-17839:
-----------------------------------
    Assignee: Rohit Choudhary  (was: Mahadev konar)

> Ambari URLStreamProvider doesn't store the cookies
> --------------------------------------------------
>
> Key: AMBARI-17839
> URL: https://issues.apache.org/jira/browse/AMBARI-17839
> Project: Ambari
> Issue Type: Bug
> Reporter: Sriharsha Chintalapani
> Assignee: Rohit Choudhary
> Priority: Blocker
> Fix For: 2.4.0
>
> Views generally make calls via the Ambari proxy server using URLStreamProvider. This is necessary because making direct calls to the service REST APIs is not desirable, especially when the Ambari server runs with HTTPS while the service REST APIs run with HTTP; such calls won't work.
> URLStreamProvider currently doesn't store any cookies from the responses. This is required: in the case of Storm, where we use SPNEGO auth, the service returns a "hadoop-auth" cookie, but it doesn't get stored and we get auth exceptions. This works fine if we open the storm-ui directly in the browser, which drops the hadoop-auth cookie; the ambari-server is then able to read this cookie and send it to the storm-ui server.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (AMBARI-17838) enabling ranger storm plugin through toggle button in ranger admin conf does not update actual conf at storm side
[ https://issues.apache.org/jira/browse/AMBARI-17838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15388130#comment-15388130 ]

Mahadev konar commented on AMBARI-17838:
----------------------------------------

+1

> enabling ranger storm plugin through toggle button in ranger admin conf does not update actual conf at storm side
> -----------------------------------------------------------------------------------------------------------------
>
> Key: AMBARI-17838
> URL: https://issues.apache.org/jira/browse/AMBARI-17838
> Project: Ambari
> Issue Type: Bug
> Components: ambari-server
> Affects Versions: 2.4.0
> Reporter: Jaimin D Jetly
> Assignee: Jaimin D Jetly
> Priority: Critical
> Fix For: 2.4.0
>
> Attachments: AMBARI-17838.patch
>
> Changing the ranger plugin toggle for storm does not change the configuration to enable the storm plugin in the background.
> Scenario:
> 1. Install ranger
> 2. Enable the toggle button for the storm plugin via Ranger Smart Configs
> ER: the corresponding storm configuration to enable the ranger plugin should be set to true
> AR: the corresponding storm configuration to enable the ranger plugin does not update

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (AMBARI-17785) Provide support for S3 as a first class destination for log events
[ https://issues.apache.org/jira/browse/AMBARI-17785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mahadev konar updated AMBARI-17785:
-----------------------------------
    Assignee: Hemanth Yamijala

> Provide support for S3 as a first class destination for log events
> ------------------------------------------------------------------
>
> Key: AMBARI-17785
> URL: https://issues.apache.org/jira/browse/AMBARI-17785
> Project: Ambari
> Issue Type: Improvement
> Components: ambari-logsearch
> Reporter: Hemanth Yamijala
> Assignee: Hemanth Yamijala
>
> AMBARI-17045 added support for uploading Hadoop service logs from machines to S3. The intended usage there was as a one time trigger where, on-demand, the log files matching certain paths can be uploaded to a given S3 bucket and path.
> While useful, there are some use cases where we might need more than this one time activity, particularly when clusters are deployed on ephemeral machines such as cloud instances:
> * The machines running the logfeeder could be irrevocably lost and in that case we would not be able to retrieve any logs.
> * If we are copying logs at one time, that were generated over a long period of time, the time to copy all the logs at the end could extend cluster up-time and cost.
> It would be nice to have an ability to support S3 as another output destination in logsearch just like Kafka, Solr etc. This JIRA is to track work towards this enhancement.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Resolved] (AMBARI-17721) Change text around Hive LLAP Settings.
[ https://issues.apache.org/jira/browse/AMBARI-17721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mahadev konar resolved AMBARI-17721.
------------------------------------
    Resolution: Fixed

> Change text around Hive LLAP Settings.
> --------------------------------------
>
> Key: AMBARI-17721
> URL: https://issues.apache.org/jira/browse/AMBARI-17721
> Project: Ambari
> Issue Type: Bug
> Reporter: Mahadev konar
> Assignee: Mahadev konar
> Priority: Critical
> Fix For: 2.4.0
>
> Attachments: AMBARI-17721.patch
>
> Change text around Hive LLAP Settings.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (AMBARI-17721) Change text around Hive LLAP Settings.
[ https://issues.apache.org/jira/browse/AMBARI-17721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mahadev konar updated AMBARI-17721:
-----------------------------------
    Attachment: AMBARI-17721.patch

Attached patch.

> Change text around Hive LLAP Settings.
> --------------------------------------
>
> Key: AMBARI-17721
> URL: https://issues.apache.org/jira/browse/AMBARI-17721
> Project: Ambari
> Issue Type: Bug
> Reporter: Mahadev konar
> Assignee: Mahadev konar
> Priority: Critical
> Fix For: 2.4.0
>
> Attachments: AMBARI-17721.patch
>
> Change text around Hive LLAP Settings.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Created] (AMBARI-17721) Change text around Hive LLAP Settings.
Mahadev konar created AMBARI-17721:
--------------------------------------

             Summary: Change text around Hive LLAP Settings.
                 Key: AMBARI-17721
                 URL: https://issues.apache.org/jira/browse/AMBARI-17721
             Project: Ambari
          Issue Type: Bug
            Reporter: Mahadev konar
            Assignee: Mahadev konar
            Priority: Critical
             Fix For: 2.4.0

Change text around Hive LLAP Settings.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Resolved] (AMBARI-17715) Not able to login using KnoxSSO if local/ldap Ambari User with same name exists
[ https://issues.apache.org/jira/browse/AMBARI-17715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mahadev konar resolved AMBARI-17715.
------------------------------------
    Resolution: Fixed

> Not able to login using KnoxSSO if local/ldap Ambari User with same name exists
> -------------------------------------------------------------------------------
>
> Key: AMBARI-17715
> URL: https://issues.apache.org/jira/browse/AMBARI-17715
> Project: Ambari
> Issue Type: Bug
> Components: ambari-server
> Affects Versions: 2.4.0
> Reporter: Myroslav Papirkovskyi
> Assignee: Myroslav Papirkovskyi
> Priority: Blocker
> Fix For: 2.4.0
>
> Due to API limitations we cannot log in a JWT user if an LDAP/LOCAL one with the same name already exists.
> We should temporarily treat JWT users as LDAP ones and rely on the ldap-sync process for user creation, as this is the most frequent configuration.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (AMBARI-17635) Disable async logging for HiveServer2 + Hive 2.x
[ https://issues.apache.org/jira/browse/AMBARI-17635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mahadev konar updated AMBARI-17635:
-----------------------------------
    Fix Version/s: 2.4.0

> Disable async logging for HiveServer2 + Hive 2.x
> ------------------------------------------------
>
> Key: AMBARI-17635
> URL: https://issues.apache.org/jira/browse/AMBARI-17635
> Project: Ambari
> Issue Type: Bug
> Components: HiveServer2, Metastore, and Client Heap Sizes to Smart Configs, ambari-server, stacks
> Affects Versions: 2.4.0
> Reporter: Prasanth Jayachandran
> Assignee: Prasanth Jayachandran
> Fix For: 2.4.0
>
> Attachments: AMBARI-17635.1.patch, AMBARI-17635.2.patch, AMBARI-17635.3.patch
>
> Async logging for HS2 is known to have issues (HIVE-14183) because of HS2's use of a custom log divert appender. We should disable async logging for HS2 until we have a proper fix in hive.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (AMBARI-17623) nimbus.monitor.freq.secs should be 10 sec
[ https://issues.apache.org/jira/browse/AMBARI-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15373704#comment-15373704 ]

Mahadev konar commented on AMBARI-17623:
----------------------------------------

[~sumitmohanty]/[~jluniya] can we get this in asap?

> nimbus.monitor.freq.secs should be 10 sec
> -----------------------------------------
>
> Key: AMBARI-17623
> URL: https://issues.apache.org/jira/browse/AMBARI-17623
> Project: Ambari
> Issue Type: Bug
> Reporter: Raghav Kumar Gautam
> Assignee: Satish Duggana
> Fix For: 2.4.0
>
> Attachments: AMBARI-17623.patch
>
> We want the value of nimbus.monitor.freq.secs to be set to 10, but the current value is 120:
> https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/STORM/0.9.1/configuration/storm-site.xml#L210

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (AMBARI-17623) nimbus.monitor.freq.secs should be 10 sec
[ https://issues.apache.org/jira/browse/AMBARI-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mahadev konar updated AMBARI-17623:
-----------------------------------
    Fix Version/s: 2.4.0

> nimbus.monitor.freq.secs should be 10 sec
> -----------------------------------------
>
> Key: AMBARI-17623
> URL: https://issues.apache.org/jira/browse/AMBARI-17623
> Project: Ambari
> Issue Type: Bug
> Reporter: Raghav Kumar Gautam
> Assignee: Satish Duggana
> Fix For: 2.4.0
>
> Attachments: AMBARI-17623.patch
>
> We want the value of nimbus.monitor.freq.secs to be set to 10, but the current value is 120:
> https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/STORM/0.9.1/configuration/storm-site.xml#L210

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Deleted] (AMBARI-10657) Ambari restart/stop operation loses control of Flume agents
[ https://issues.apache.org/jira/browse/AMBARI-10657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mahadev konar deleted AMBARI-10657:
-----------------------------------

> Ambari restart/stop operation loses control of Flume agents
> -----------------------------------------------------------
>
> Key: AMBARI-10657
> URL: https://issues.apache.org/jira/browse/AMBARI-10657
> Project: Ambari
> Issue Type: Bug
> Reporter: Andrew Onischuk
> Assignee: Andrew Onischuk
>
> Ambari seems to lose control of Flume agents - reporting them as stopped even though the processes are still running.
> Trying to start the agents again results in:
>
> Please shutdown the agent or disable this component, or the agent will be in an undefined state.
>
> Failed to bind to: /x.x.x.x:4545 Caused by: java.net.BindException: Address already in use
>
> STEPS TO REPRODUCE:
> 1. Killed all agents using kill -9 (this step was necessary as the agents were still running, but reported as stopped in Ambari)
> 2. Start agents using Ambari
> 3. Check the content of the pid file. In this case was 29873
> 4. Check the pid using "ps -aux | grep flume". The output in this case was:
>
> Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.8/FAQ
> flume 29873 0.0 0.0 106060 1308 ? Ss 13:50 0:00 bash -c export JAVA_HOME=/usr/jdk64/jdk1.7.0_45; /usr/hdp/current/flume-server/bin/flume-ng agent --name a1 --conf /etc/flume/conf/a1 --conf-file /etc/flume/conf/a1/flume.conf -Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts=
> flume 29874 35.7 0.5 17222116 272028 ? Sl 13:50 0:10 /usr/jdk64/jdk1.7.0_45/bin/java -Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts= -Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts=
>
> Everything is running fine at this point.
> 6. Restart agents using flume
> 7. Check the content of the pid file. In this case it was still 29873
> 8. Check the pid using "ps -aux | grep flume". The output in this case was:
>
> Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.8/FAQ
> flume 3097 0.0 0.0 106060 1308 ? Ss 13:54 0:00 bash -c export JAVA_HOME=/usr/jdk64/jdk1.7.0_45; /usr/hdp/current/flume-server/bin/flume-ng agent --name a1 --conf /etc/flume/conf/a1 --conf-file /etc/flume/conf/a1/flume.conf -Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts=
> flume 3098 7.2 0.5 17222116 271076 ? Sl 13:54 0:10 /usr/jdk64/jdk1.7.0_45/bin/java -Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts= -Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts=
>
> As you can see the pid file was not updated, and shortly after the restart Ambari reports the agents as stopped.
> ANALYSIS:
> "cat /var/run/flume/a1.pid" returns 10056, last written 16 March 2015 13:04.
> When I check the running processes using "ps -aux | grep flume" it shows 26288 and 26289:
>
> flume 26288 0.0 0.0 106060 1308 ? Ss 13:04 0:00 bash -c export JAVA_HOME=/usr/jdk64/jdk1.7.0_45; /usr/hdp/current/flume-server/bin/flume-ng agent --name a1 --conf /etc/flume/conf/a1 --conf-file /etc/flume/conf/a1/flume.conf -Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts=
> flume 26289 13.2 0.5 18359888 294220 ? Sl 13:04 1:15 /usr/jdk64/jdk1.7.0_45/bin/java -Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts= -Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts=
>
> The content of "/var/run/flume/ambari-state.txt" is RUNNING.
> When I check the flume log file, nothing out of the ordinary is shown around the time the pid was updated.
> I used "cat /var/log/flume/flume-a1.log | grep "16 Mar 2015 12:04":
>
> 16 Mar 2015 12:04:13,166 INFO [Log-BackgroundWorker-c1] (org.apache.flume.channel.file.EventQueueBackingStoreFile.beginCheckpoint:214) - Start checkpoint for /home/flume/.flume/file-channel/checkpoint/checkpoint_1426501435529, elements to sync = 18272
> 16 Mar 2015 12:04:13,241 INFO [Log-BackgroundWorker-c1] (org.apache.flume.channel.file.EventQueueBackingStoreFile.checkpoint:239) - Updating checkpoint metadata: logWriteOrderID: 1426503859575, queueSize: 576, queueHead: 475305
> 16 Mar 2015 12:04:13,341 INFO [Log-BackgroundWorker-c1] (org.apache.flume.channel.file.Log.writeCheckpoint:1025) - Updated checkpoint for file: /home/flume/.flume/file-channel/data/log-6 position: 9108128 logWriteOrderID: 1426503859575
> 16 Mar 2015 12:04:13,342 INFO [Log-BackgroundWorker-c1] (org.apache.flume.channel.file.LogFile$RandomReader.close:504) - Closing RandomReader /home/flume/.flume/file-channel/data/log-4
> 16 Mar 2015 12:04:43,348 INFO [Log-BackgroundWorker-c1] (org.apache.flume.channel.file.EventQueueBackingStoreFile.beginCheckpoint:214) - Start
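[Editorial sketch, not part of the original thread.] The stale-pidfile symptom in the Flume report above boils down to one check: the pid recorded in the pidfile must refer to a live process, which `kill -0` tests without sending a signal. The sketch below demonstrates that check in a temp directory; the real path in the report is /var/run/flume/a1.pid, and the `sleep` process is a stand-in for a Flume agent.

```shell
piddir=$(mktemp -d)

# Stand-in for a running flume agent; record its pid as Ambari would.
sleep 30 &
echo $! > "$piddir/a1.pid"

# The liveness check: kill -0 succeeds only if the pid exists.
if kill -0 "$(cat "$piddir/a1.pid")" 2>/dev/null; then
  echo "agent reported RUNNING"
else
  echo "agent reported STOPPED (stale pid file)"
fi

# Cleanup.
kill "$(cat "$piddir/a1.pid")" 2>/dev/null
rm -rf "$piddir"
```

In the bug report the pidfile was not rewritten on restart, so this check ran against a pid that no longer matched any agent process, and Ambari reported the still-running agents as stopped.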
[jira] [Reopened] (AMBARI-10657) Ambari restart/stop operation loses control of Flume agents
[ https://issues.apache.org/jira/browse/AMBARI-10657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mahadev konar reopened AMBARI-10657:
------------------------------------

> Ambari restart/stop operation loses control of Flume agents
> -----------------------------------------------------------
>
> Key: AMBARI-10657
> URL: https://issues.apache.org/jira/browse/AMBARI-10657
> Project: Ambari
> Issue Type: Bug
> Reporter: Andrew Onischuk
> Assignee: Andrew Onischuk
> Fix For: 2.1.0
>
> Ambari seems to lose control of Flume agents - reporting them as stopped even though the processes are still running.
> Trying to start the agents again results in:
>
> Please shutdown the agent or disable this component, or the agent will be in an undefined state.
>
> Failed to bind to: /x.x.x.x:4545 Caused by: java.net.BindException: Address already in use
>
> STEPS TO REPRODUCE:
> 1. Killed all agents using kill -9 (this step was necessary as the agents were still running, but reported as stopped in Ambari)
> 2. Start agents using Ambari
> 3. Check the content of the pid file. In this case was 29873
> 4. Check the pid using "ps -aux | grep flume". The output in this case was:
>
> Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.8/FAQ
> flume 29873 0.0 0.0 106060 1308 ? Ss 13:50 0:00 bash -c export JAVA_HOME=/usr/jdk64/jdk1.7.0_45; /usr/hdp/current/flume-server/bin/flume-ng agent --name a1 --conf /etc/flume/conf/a1 --conf-file /etc/flume/conf/a1/flume.conf -Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts=
> flume 29874 35.7 0.5 17222116 272028 ? Sl 13:50 0:10 /usr/jdk64/jdk1.7.0_45/bin/java -Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts= -Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts=
>
> Everything is running fine at this point.
> 6. Restart agents using flume
> 7. Check the content of the pid file. In this case it was still 29873
> 8. Check the pid using "ps -aux | grep flume". The output in this case was:
>
> Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.8/FAQ
> flume 3097 0.0 0.0 106060 1308 ? Ss 13:54 0:00 bash -c export JAVA_HOME=/usr/jdk64/jdk1.7.0_45; /usr/hdp/current/flume-server/bin/flume-ng agent --name a1 --conf /etc/flume/conf/a1 --conf-file /etc/flume/conf/a1/flume.conf -Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts=
> flume 3098 7.2 0.5 17222116 271076 ? Sl 13:54 0:10 /usr/jdk64/jdk1.7.0_45/bin/java -Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts= -Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts=
>
> As you can see the pid file was not updated, and shortly after the restart Ambari reports the agents as stopped.
> ANALYSIS:
> "cat /var/run/flume/a1.pid" returns 10056, last written 16 March 2015 13:04.
> When I check the running processes using "ps -aux | grep flume" it shows 26288 and 26289:
>
> flume 26288 0.0 0.0 106060 1308 ? Ss 13:04 0:00 bash -c export JAVA_HOME=/usr/jdk64/jdk1.7.0_45; /usr/hdp/current/flume-server/bin/flume-ng agent --name a1 --conf /etc/flume/conf/a1 --conf-file /etc/flume/conf/a1/flume.conf -Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts=
> flume 26289 13.2 0.5 18359888 294220 ? Sl 13:04 1:15 /usr/jdk64/jdk1.7.0_45/bin/java -Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts= -Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts=
>
> The content of "/var/run/flume/ambari-state.txt" is RUNNING.
> When I check the flume log file, nothing out of the ordinary is shown around the time the pid was updated.
> I used "cat /var/log/flume/flume-a1.log | grep "16 Mar 2015 12:04":
>
> 16 Mar 2015 12:04:13,166 INFO [Log-BackgroundWorker-c1] (org.apache.flume.channel.file.EventQueueBackingStoreFile.beginCheckpoint:214) - Start checkpoint for /home/flume/.flume/file-channel/checkpoint/checkpoint_1426501435529, elements to sync = 18272
> 16 Mar 2015 12:04:13,241 INFO [Log-BackgroundWorker-c1] (org.apache.flume.channel.file.EventQueueBackingStoreFile.checkpoint:239) - Updating checkpoint metadata: logWriteOrderID: 1426503859575, queueSize: 576, queueHead: 475305
> 16 Mar 2015 12:04:13,341 INFO [Log-BackgroundWorker-c1] (org.apache.flume.channel.file.Log.writeCheckpoint:1025) - Updated checkpoint for file: /home/flume/.flume/file-channel/data/log-6 position: 9108128 logWriteOrderID: 1426503859575
> 16 Mar 2015 12:04:13,342 INFO [Log-BackgroundWorker-c1] (org.apache.flume.channel.file.LogFile$RandomReader.close:504) - Closing RandomReader /home/flume/.flume/file-channel/data/log-4
> 16 Mar 2015 12:04:43,348 INFO [Log-BackgroundWorker-c1] (org.apache.flume.channel.file.EventQueueBackingStoreFile.beginCheckpoint:214)
[jira] [Updated] (AMBARI-17623) nimbus.monitor.freq.secs should be 10 sec
[ https://issues.apache.org/jira/browse/AMBARI-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mahadev konar updated AMBARI-17623:
-----------------------------------
    Assignee: Satish Duggana

> nimbus.monitor.freq.secs should be 10 sec
> -----------------------------------------
>
> Key: AMBARI-17623
> URL: https://issues.apache.org/jira/browse/AMBARI-17623
> Project: Ambari
> Issue Type: Bug
> Reporter: Raghav Kumar Gautam
> Assignee: Satish Duggana
>
> We want the value of nimbus.monitor.freq.secs to be set to 10, but the current value is 120:
> https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/STORM/0.9.1/configuration/storm-site.xml#L210

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (AMBARI-12885) Dynamic stack extensions - install and upgrade support for custom services
[ https://issues.apache.org/jira/browse/AMBARI-12885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mahadev konar updated AMBARI-12885: --- Fix Version/s: 2.4.0 > Dynamic stack extensions - install and upgrade support for custom services > -- > > Key: AMBARI-12885 > URL: https://issues.apache.org/jira/browse/AMBARI-12885 > Project: Ambari > Issue Type: New Feature > Components: ambari-agent, ambari-server, ambari-web >Reporter: Tim Thorpe >Assignee: Tim Thorpe > Fix For: 2.4.0 > > Attachments: AMBARI-12885 Example.pdf, AMBARI-12885.patch, Dynamic > Stack Extensions - High Level Design v5.pdf > > > The purpose of this proposal is to facilitate adding custom services to an > existing stack. Ideally this would support adding and upgrading custom > services separately from the core services defined in the stack. In > particular we are looking at custom services that need to support several > different stacks (different distributions of Ambari). The release cycle of > the custom services may be different from that of the core stack; that is, a > custom service may be upgraded at a different rate than the core distribution > itself and may be upgraded multiple times within the lifespan of a single > release of the core distribution. > One possible approach to handling this would be dynamically extending a stack > (after install time). It would be best to extend the stack in packages where > a stack extension package can have one or more custom services. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17389) Read 'yarn.nodemanager.resource.memory-mb' and 'yarn.scheduler.minimum-allocation-mb' from 'configurations' if 'changed-configurations' is empty and config is there in
[ https://issues.apache.org/jira/browse/AMBARI-17389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mahadev konar updated AMBARI-17389: --- Resolution: Fixed Status: Resolved (was: Patch Available) > Read 'yarn.nodemanager.resource.memory-mb' and > 'yarn.scheduler.minimum-allocation-mb' from 'configurations' if > 'changed-configurations' is empty and config is there in 'configurations', > else from 'services'. > > > Key: AMBARI-17389 > URL: https://issues.apache.org/jira/browse/AMBARI-17389 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.4.0 >Reporter: Swapan Shridhar >Assignee: Swapan Shridhar > Fix For: 2.4.0 > > Attachments: AMBARI-17389.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Reopened] (AMBARI-17074) Expose Spark daemon memory in Spark2
[ https://issues.apache.org/jira/browse/AMBARI-17074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mahadev konar reopened AMBARI-17074: Reverted the fix since this was causing deploys to fail. > Expose Spark daemon memory in Spark2 > > > Key: AMBARI-17074 > URL: https://issues.apache.org/jira/browse/AMBARI-17074 > Project: Ambari > Issue Type: Improvement >Reporter: Weiqing Yang >Assignee: Weiqing Yang > Fix For: trunk, 2.4.0 > > Attachments: AMBARI-17074.v1.patch, AMBARI-17074_v2.patch, > AMBARI-17074_v3.patch > > > Expose Spark daemon memory in Spark2, so that the user can easily modify its > size in the Ambari web UI. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (AMBARI-16171) Changes to Phoenix QueryServer Kerberos configuration
[ https://issues.apache.org/jira/browse/AMBARI-16171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mahadev konar resolved AMBARI-16171. Resolution: Fixed Committed to trunk and branch-2.4. > Changes to Phoenix QueryServer Kerberos configuration > - > > Key: AMBARI-16171 > URL: https://issues.apache.org/jira/browse/AMBARI-16171 > Project: Ambari > Issue Type: Improvement >Reporter: Josh Elser >Assignee: Josh Elser > Fix For: 2.4.0 > > Attachments: AMBARI-16171-stackadvisor-WIP.patch, > AMBARI-16171.001.patch, AMBARI-16171.002.patch, AMBARI-16171.003.patch, > AMBARI-16171.006.patch, AMBARI-16171.007.patch, AMBARI-16171.009.patch, > AMBARI-16171.addendum.patch, AMBARI-16171.addendum2-1.patch, > AMBARI-16171.addendum2.patch > > > The upcoming version of Phoenix will contain some new functionality to > support Kerberos authentication of clients via SPNEGO with the Phoenix Query > Server (PQS). > Presently, Ambari will configure PQS to use the hbase service keytab, which > will result in the SPNEGO authentication failing, as the RFC requires that the > "primary" component of the Kerberos principal for the server is "HTTP". Thus, > we need to ensure that we switch PQS over to use the spnego.service.keytab as > the keytab and "HTTP/_HOST@REALM" as the principal. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16171) Changes to Phoenix QueryServer Kerberos configuration
[ https://issues.apache.org/jira/browse/AMBARI-16171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mahadev konar updated AMBARI-16171: --- Fix Version/s: (was: 2.4.1) 2.4.0 > Changes to Phoenix QueryServer Kerberos configuration > - > > Key: AMBARI-16171 > URL: https://issues.apache.org/jira/browse/AMBARI-16171 > Project: Ambari > Issue Type: Improvement >Reporter: Josh Elser >Assignee: Josh Elser > Fix For: 2.4.0 > > Attachments: AMBARI-16171-stackadvisor-WIP.patch, > AMBARI-16171.001.patch, AMBARI-16171.002.patch, AMBARI-16171.003.patch, > AMBARI-16171.006.patch, AMBARI-16171.007.patch, AMBARI-16171.009.patch, > AMBARI-16171.addendum.patch, AMBARI-16171.addendum2-1.patch, > AMBARI-16171.addendum2.patch > > > The upcoming version of Phoenix will contain some new functionality to > support Kerberos authentication of clients via SPNEGO with the Phoenix Query > Server (PQS). > Presently, Ambari will configure PQS to use the hbase service keytab, which > will result in the SPNEGO authentication failing, as the RFC requires that the > "primary" component of the Kerberos principal for the server is "HTTP". Thus, > we need to ensure that we switch PQS over to use the spnego.service.keytab as > the keytab and "HTTP/_HOST@REALM" as the principal. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
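The `_HOST` placeholder in the `HTTP/_HOST@REALM` principal described above is expanded, Hadoop-style, to the lowercased fully qualified hostname of the machine at startup. A rough sketch of that substitution — illustrative only; Ambari and Hadoop perform it through their own security utilities, and the function name here is invented:

```python
import socket

def expand_principal(principal, hostname=None):
    """Replace the Hadoop-style _HOST placeholder in a Kerberos principal
    with the (lowercased) fully qualified hostname of this machine, or of
    the explicitly supplied hostname."""
    host = (hostname or socket.getfqdn()).lower()
    return principal.replace("_HOST", host)
```

For example, `expand_principal("HTTP/_HOST@EXAMPLE.COM", "pqs1.example.com")` yields `HTTP/pqs1.example.com@EXAMPLE.COM`, which is the shape of principal SPNEGO clients expect for the server.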
[jira] [Updated] (AMBARI-16121) HBase RegionServers go down after Ambari upgrade due to ClassCastException
[ https://issues.apache.org/jira/browse/AMBARI-16121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mahadev konar updated AMBARI-16121: --- Resolution: Fixed Status: Resolved (was: Patch Available) > HBase RegionServers go down after Ambari upgrade due to ClassCastException > -- > > Key: AMBARI-16121 > URL: https://issues.apache.org/jira/browse/AMBARI-16121 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.2.2 >Reporter: Dmitry Lysnichenko >Assignee: Dmitry Lysnichenko > Fix For: 2.2.2 > > Attachments: AMBARI-16121.patch > > > ambari-server --hash > 2b112376b631384852a6c8aaa2e102d8dd39a9f1 > ambari-server-2.2.2.0-456.x86_64 > *Steps* > # Deploy HDP 2.3.4.0 cluster with Ambari 2.2.0.0 (HA, unsecure cluster and > Ranger enabled) > # Upgrade Ambari to 2.2.2.0 > Observed that all HBase RS reported as down > Logs show below error: > {code} > 016-04-26 09:47:23,575 ERROR [regionserver/host1/ip:16020] > coprocessor.CoprocessorHost: The coprocessor > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw > java.lang.ClassCastException: > org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost$RegionServerEnvironment > cannot be cast to > org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment > java.lang.ClassCastException: > org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost$RegionServerEnvironment > cannot be cast to > org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment > at > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(SecureBulkLoadEndpoint.java:131) > at > org.apache.hadoop.hbase.coprocessor.CoprocessorHost$Environment.startup(CoprocessorHost.java:411) > at > org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadInstance(CoprocessorHost.java:253) > at > org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadSystemCoprocessors(CoprocessorHost.java:156) > at > 
org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost.&lt;init&gt;(RegionServerCoprocessorHost.java:69) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:888) > at java.lang.Thread.run(Thread.java:745) > 2016-04-26 09:47:23,584 FATAL [regionserver/host1/ip:16020] > regionserver.HRegionServer: ABORTING region server host1,16020,1461664036377: > The coprocessor > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw > java.lang.ClassCastException: > org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost$RegionServerEnvironment > cannot be cast to > org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment > java.lang.ClassCastException: > org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost$RegionServerEnvironment > cannot be cast to > org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment > at > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(SecureBulkLoadEndpoint.java:131) > at > org.apache.hadoop.hbase.coprocessor.CoprocessorHost$Environment.startup(CoprocessorHost.java:411) > at > org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadInstance(CoprocessorHost.java:253) > at > org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadSystemCoprocessors(CoprocessorHost.java:156) > at > org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost.&lt;init&gt;(RegionServerCoprocessorHost.java:69) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:888) > at java.lang.Thread.run(Thread.java:745) > 2016-04-26 09:47:23,586 FATAL [regionserver/host1/ip:16020] > regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: > [org.apache.hadoop.hbase.security.token.TokenProvider] > 2016-04-26 09:47:27,474 ERROR [main] regionserver.HRegionServerCommandLine: > Region server exiting > java.lang.RuntimeException: HRegionServer Aborted > at > org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:68)
> at > org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at > org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2651) > 2016-04-26 09:47:27,491 INFO [Thread-129] provider.AuditProviderFactory: ==> > JVMShutdownHook.run() > 2016-04-26 09:47:27,492 INFO [Thread-129] queue.AuditAsyncQueue: Stop > called. name=hbaseRegional.async > 2016-04-26 09:47:27,492 INFO [Thread-129] queue.AuditAsyncQueue: > Interrupting consumerThread. name=hbaseRegional.async, > consumer=hbaseRegional.async.summary > 2016-04-26 09:47:27,496 INFO [Thread-129]
[jira] [Updated] (AMBARI-16072) Stack Advisor issue when adding service to Kerberized cluster
[ https://issues.apache.org/jira/browse/AMBARI-16072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mahadev konar updated AMBARI-16072: --- Fix Version/s: (was: 2.2.2) 2.4.0 > Stack Advisor issue when adding service to Kerberized cluster > - > > Key: AMBARI-16072 > URL: https://issues.apache.org/jira/browse/AMBARI-16072 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.2.2 >Reporter: Robert Levas >Assignee: Robert Levas >Priority: Critical > Labels: kerberos > Fix For: 2.4.0 > > Attachments: AMBARI-16072_branch-2.2_01.patch, > AMBARI-16072_branch-2.2_02.patch, AMBARI-16072_trunk_01.patch, > AMBARI-16072_trunk_02.patch > > > When adding a service to a Kerberized cluster and clicking Install, nothing > happens in the UI, and I see the following error in the ambari-server logs > {code} > 20 Apr 2016 16:03:56,818 INFO [qtp-ambari-client-2764] > KerberosHelperImpl:735 - Adding identity for JOURNALNODE to auth to local > mapping > 20 Apr 2016 16:03:56,818 INFO [qtp-ambari-client-2764] > KerberosHelperImpl:735 - Adding identity for METRICS_COLLECTOR to auth to > local mapping > 20 Apr 2016 16:03:56,857 INFO [qtp-ambari-client-2764] StackAdvisorRunner:47 > - Script=/var/lib/ambari-server/resources/scripts/stack_advisor.py, > actionDirectory=/var/run/ambari-server/stack-recommendations/323, > command=recommend-configurations > 20 Apr 2016 16:03:56,860 INFO [qtp-ambari-client-2764] StackAdvisorRunner:61 > - Stack-advisor > output=/var/run/ambari-server/stack-recommendations/323/stackadvisor.out, > error=/var/run/ambari-server/stack-recommendations/323/stackadvisor.err > 20 Apr 2016 16:03:56,917 INFO [qtp-ambari-client-2764] StackAdvisorRunner:69 > - Stack advisor output files > 20 Apr 2016 16:03:56,917 INFO [qtp-ambari-client-2764] StackAdvisorRunner:70 > - advisor script stdout: StackAdvisor implementation for stack HDP, > version 2.0.6 was loaded > StackAdvisor implementation for stack HDP, version 2.1 was loaded > 
StackAdvisor implementation for stack HDP, version 2.2 was loaded > StackAdvisor implementation for stack HDP, version 2.3 was loaded > StackAdvisor implementation for stack HDP, version 2.4 was loaded > Returning HDP24StackAdvisor implementation > Error occured in stack advisor. > Error details: 'NoneType' object is not iterable > 20 Apr 2016 16:03:56,917 INFO [qtp-ambari-client-2764] StackAdvisorRunner:71 > - advisor script stderr: Traceback (most recent call last): > File "/var/lib/ambari-server/resources/scripts/stack_advisor.py", line 158, > in > main(sys.argv) > File "/var/lib/ambari-server/resources/scripts/stack_advisor.py", line 109, > in main > result = stackAdvisor.recommendConfigurations(services, hosts) > File "/var/lib/ambari-server/resources/scripts/../stacks/stack_advisor.py", > line 570, in recommendConfigurations > calculation(configurations, clusterSummary, services, hosts) > File > "/var/lib/ambari-server/resources/scripts/./../stacks/HDP/2.0.6/services/stack_advisor.py", > line 627, in recommendAmsConfigurations > if set(amsCollectorHosts).intersection(dn_hosts): > TypeError: 'NoneType' object is not iterable > 20 Apr 2016 16:03:56,918 INFO [qtp-ambari-client-2764] > AbstractResourceProvider:802 - Caught an exception while updating host > components, retrying : org.apache.ambari.server.AmbariException: Stack > Advisor reported an error: TypeError: 'NoneType' object is not iterable > StdOut file: /var/run/ambari-server/stack-recommendations/323/stackadvisor.out > StdErr file: /var/run/ambari-server/stack-recommendations/323/stackadvisor.err > {code} > *Solution* > Pass to the stack advisor information about all installed services where each > component is installed (component host map) > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
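The failing line in the traceback, `set(amsCollectorHosts).intersection(dn_hosts)`, raises exactly this `TypeError` when `dn_hosts` is `None`, i.e. when no host list is available for DATANODE. A defensive variant of the check is sketched below — this is only an illustration of the guard, not the committed Ambari patch, whose actual solution (per the issue) is to pass the full component host map to the stack advisor:

```python
def overlapping_hosts(collector_hosts, dn_hosts):
    """Intersection of two host lists, treating a missing list (None)
    as empty instead of raising "'NoneType' object is not iterable"."""
    return set(collector_hosts or []) & set(dn_hosts or [])
```

With this guard, `overlapping_hosts(["c6401"], None)` returns an empty set and the AMS recommendation code can fall through cleanly instead of aborting the whole stack-advisor run.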
[jira] [Updated] (AMBARI-16013) Host_status stuck in UNKNOWN status after blueprint deploy with host in heartbeat-lost
[ https://issues.apache.org/jira/browse/AMBARI-16013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mahadev konar updated AMBARI-16013: --- Resolution: Fixed Status: Resolved (was: Patch Available) Committed. > Host_status stuck in UNKNOWN status after blueprint deploy with host in > heartbeat-lost > -- > > Key: AMBARI-16013 > URL: https://issues.apache.org/jira/browse/AMBARI-16013 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.2.2 >Reporter: Sebastian Toader >Assignee: Sebastian Toader >Priority: Blocker > Fix For: 2.2.2 > > Attachments: AMBARI-16013.branch-2.2.v2.patch, > AMBARI-16013.trunk.v2.patch > > > Deploy a cluster using a blueprint when all nodes are in HEARTBEAT_LOST state > (e.g. nodes already registered with the Ambari server once, but then all were > stopped prior to posting the blueprint/cluster creation template to the server). > The blueprint and cluster creation succeeded and the UI looked good, with all the > hosts in heartbeat-lost state. > Then start the agents one by one. Expected behaviour is that, as soon as all > required nodes are up, Ambari starts scheduling tasks on the connected > nodes to install the cluster. > However the hosts were stuck in {{host_status: UNKNOWN}} state and Ambari did > not start scheduling any tasks to the connected hosts. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-15646) Audit Log Code Cleanup & Safety
[ https://issues.apache.org/jira/browse/AMBARI-15646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mahadev konar updated AMBARI-15646: --- Fix Version/s: 2.4.0 > Audit Log Code Cleanup & Safety > --- > > Key: AMBARI-15646 > URL: https://issues.apache.org/jira/browse/AMBARI-15646 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Reporter: Daniel Gergely >Assignee: Daniel Gergely > Fix For: 2.4.0 > > Attachments: AMBARI-15646.patch > > > As a follow-up to AMBARI-15241, there are concerns brought up in review which > should be addressed but didn't need to hold up the feature being committed. > These can be further broken out into separate Jiras if needed: > - When initializing a ThreadLocal, you can specify an initial value. This > code is unnecessary: > {code} > private ThreadLocal<SimpleDateFormat> dateFormatThreadLocal = new ThreadLocal<>(); > if(dateFormatThreadLocal.get() == null) { > //2016-03-11T10:42:36.376Z > dateFormatThreadLocal.set(new > SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSX")); > } > {code} > - There are no tests for a majority of events and event creators. > - Using a multibinder is fine to be able to inject a {{Set<RequestAuditEventCreator>}}, but it's > not clear to developers adding code that this is what must be done. > -- We either need to document the super interface to make it clear how to > have new creators bound > -- Or annotate creators with an annotation which can then be automatically picked > up by the {{AuditLoggerModule}} and bound without the need to explicitly > define each creator. 
> - {code} > // binding for audit event creators > Multibinder<RequestAuditEventCreator> auditLogEventCreatorBinder = > Multibinder.newSetBinder(binder(), RequestAuditEventCreator.class); > auditLogEventCreatorBinder.addBinding().to(DefaultEventCreator.class); > auditLogEventCreatorBinder.addBinding().to(ComponentEventCreator.class); > auditLogEventCreatorBinder.addBinding().to(ServiceEventCreator.class); > > auditLogEventCreatorBinder.addBinding().to(UnauthorizedEventCreator.class); > > auditLogEventCreatorBinder.addBinding().to(ConfigurationChangeEventCreator.class); > auditLogEventCreatorBinder.addBinding().to(UserEventCreator.class); > auditLogEventCreatorBinder.addBinding().to(GroupEventCreator.class); > auditLogEventCreatorBinder.addBinding().to(MemberEventCreator.class); > auditLogEventCreatorBinder.addBinding().to(PrivilegeEventCreator.class); > > auditLogEventCreatorBinder.addBinding().to(BlueprintExportEventCreator.class); > > auditLogEventCreatorBinder.addBinding().to(ServiceConfigDownloadEventCreator.class); > auditLogEventCreatorBinder.addBinding().to(BlueprintEventCreator.class); > > auditLogEventCreatorBinder.addBinding().to(ViewInstanceEventCreator.class); > > auditLogEventCreatorBinder.addBinding().to(ViewPrivilegeEventCreator.class); > auditLogEventCreatorBinder.addBinding().to(RepositoryEventCreator.class); > > auditLogEventCreatorBinder.addBinding().to(RepositoryVersionEventCreator.class); > auditLogEventCreatorBinder.addBinding().to(AlertGroupEventCreator.class); > auditLogEventCreatorBinder.addBinding().to(AlertTargetEventCreator.class); > auditLogEventCreatorBinder.addBinding().to(HostEventCreator.class); > auditLogEventCreatorBinder.addBinding().to(UpgradeEventCreator.class); > auditLogEventCreatorBinder.addBinding().to(UpgradeItemEventCreator.class); > > auditLogEventCreatorBinder.addBinding().to(RecommendationIgnoreEventCreator.class); > > auditLogEventCreatorBinder.addBinding().to(ValidationIgnoreEventCreator.class); > 
auditLogEventCreatorBinder.addBinding().to(CredentialEventCreator.class); > auditLogEventCreatorBinder.addBinding().to(RequestEventCreator.class); > bind(RequestAuditLogger.class).to(RequestAuditLoggerImpl.class); > {code} > - Event creators have nested invocations which is not only hard to read, but > can potentially cause NPE's; it's a dangerous practice. As an example: > {code:title=AlertGroupEventCreator} > String.valueOf(request.getBody().getNamedPropertySets().iterator().next().getProperties().get(PropertyHelper.getPropertyId("AlertGroup", > "name"))); > {code} > -- Additionally, this references properties by building them, instead of by > their registration in the property provider. If the property name ever > changed, this could easily be missed. > - Some of the {{auditLog}} methods check to ensure that the logger is enabled > first. This is very good, as building objects which won't be logged is a > waste and potential performance problem. However, not all of them do. All > {{auditLog}} methods should check this first, and return if not enabled. You > can do this using AOP or just brute-force every method. > {code} > private void
[jira] [Updated] (AMBARI-15638) [AMS] Sum Calculation Incorrect
[ https://issues.apache.org/jira/browse/AMBARI-15638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mahadev konar updated AMBARI-15638: --- Fix Version/s: 2.2.2 > [AMS] Sum Calculation Incorrect > --- > > Key: AMBARI-15638 > URL: https://issues.apache.org/jira/browse/AMBARI-15638 > Project: Ambari > Issue Type: Bug >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Blocker > Fix For: 2.2.2 > > Attachments: AMBARI-15638-2.patch, AMBARI-15638.patch > > > Sum Calculation is incorrect when the time range is more than 2 hrs. > This issue affects all metrics that are queried with the "sum" aggregator > (with or without hostname being specified) for more than a 2hr time-range. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-14556) Role based access control UX fixes
[ https://issues.apache.org/jira/browse/AMBARI-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mahadev konar updated AMBARI-14556: --- Fix Version/s: 2.4.0 > Role based access control UX fixes > -- > > Key: AMBARI-14556 > URL: https://issues.apache.org/jira/browse/AMBARI-14556 > Project: Ambari > Issue Type: Bug > Components: ambari-admin >Reporter: Richard Zang >Assignee: Richard Zang > Fix For: 2.4.0 > > Attachments: AMBARI-14556.patch > > > Order of blocks in manage access page needs to be reversed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-13844) Support WebHDFS over SSL in Ambari Views
[ https://issues.apache.org/jira/browse/AMBARI-13844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mahadev konar updated AMBARI-13844: --- Fix Version/s: 2.4.0 > Support WebHDFS over SSL in Ambari Views > > > Key: AMBARI-13844 > URL: https://issues.apache.org/jira/browse/AMBARI-13844 > Project: Ambari > Issue Type: Bug > Components: ambari-views >Affects Versions: 2.2.0 >Reporter: Henning Kropp >Assignee: Gaurav Nagar > Labels: files, filesystem > Fix For: 2.4.0 > > Attachments: AMBARI-13844.patch, AMBARI-13844_branch-2.2.patch > > > Currently Ambari Views do not support the swebhdfs file system scheme - > WebHDFS over SSL. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
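Whether a view should address the NameNode as webhdfs:// or swebhdfs:// follows from the cluster's `dfs.http.policy` setting. A hedged sketch of that selection — the helper name is invented, the port numbers are the classic Hadoop 2.x defaults, and preferring SSL under HTTP_AND_HTTPS is a design choice of this sketch, not documented Ambari behavior:

```python
def webhdfs_base_url(namenode_host, http_policy="HTTP_ONLY",
                     http_port=50070, https_port=50470):
    """Pick the webhdfs (plain HTTP) or swebhdfs (HTTPS) scheme based on
    the HDFS dfs.http.policy value (HTTP_ONLY / HTTPS_ONLY / HTTP_AND_HTTPS)."""
    if http_policy.upper() in ("HTTPS_ONLY", "HTTP_AND_HTTPS"):
        # Prefer SSL whenever the NameNode serves HTTPS at all.
        return "swebhdfs://{0}:{1}".format(namenode_host, https_port)
    return "webhdfs://{0}:{1}".format(namenode_host, http_port)
```

For an HTTPS_ONLY cluster this yields an swebhdfs:// URL on the HTTPS port, which is precisely the scheme the Files view could not handle before this fix.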