[ https://issues.apache.org/jira/browse/AMBARI-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14255927#comment-14255927 ]

Hudson commented on AMBARI-8858:
--------------------------------

FAILURE: Integrated in Ambari-trunk-Commit-docker #567 (See [https://builds.apache.org/job/Ambari-trunk-Commit-docker/567/])
AMBARI-8858. Recalibrate 'reasonable' timeouts for 2.2 Hadoop package installations (aonishuk) (aonishuk: http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=9715bc9ffaa8f6e2a1669ebe90cc7f2eda529d7c)
* ambari-server/src/main/resources/stacks/HDP/2.2/services/KERBEROS/metainfo.xml
* ambari-server/src/main/resources/common-services/GANGLIA/3.5.0/metainfo.xml
* ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/metainfo.xml
* ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/metainfo.xml
* ambari-server/src/main/resources/stacks/HDP/2.0.6/services/YARN/metainfo.xml
* ambari-server/src/main/resources/common-services/PIG/0.12.0.2.0/metainfo.xml
* ambari-server/src/main/resources/common-services/SLIDER/0.60.0.2.2/metainfo.xml
* ambari-server/src/main/resources/stacks/HDP/2.2/services/HIVE/metainfo.xml
* ambari-server/src/main/resources/common-services/AMS/0.1.0/metainfo.xml
* ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/metainfo.xml
* ambari-server/src/main/resources/common-services/KNOX/0.5.0.2.2/metainfo.xml
* ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/metainfo.xml
* ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/metainfo.xml
* ambari-server/src/main/resources/common-services/STORM/0.9.1.2.1/metainfo.xml
* ambari-server/src/main/resources/common-services/ZOOKEEPER/3.4.5.2.0/metainfo.xml
* ambari-server/src/main/resources/stacks/HDP/2.1/services/YARN/metainfo.xml
* ambari-server/src/main/resources/common-services/FLUME/1.4.0.2.0/metainfo.xml
* ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/metainfo.xml
* ambari-server/src/main/resources/common-services/TEZ/0.4.0.2.1/metainfo.xml
* ambari-server/src/main/resources/common-services/KAFKA/0.8.1.2.2/metainfo.xml
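
These metainfo.xml files are where each service declares the timeout that the
agent enforces on its command scripts, via the <commandScript> element. A
minimal sketch of the kind of entry this patch recalibrates (the script name
and the 1200-second value are illustrative, not taken from the commit):

    <!-- Sketch only: script name and timeout value are illustrative. -->
    <commandScript>
      <script>scripts/service_check.py</script>
      <scriptType>PYTHON</scriptType>
      <!-- Raised so slow yum installs of the large HDP 2.2 packages do
           not trip the agent's script-kill watchdog. -->
      <timeout>1200</timeout>
    </commandScript>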


> Recalibrate 'reasonable' timeouts for 2.2 Hadoop package installations
> ----------------------------------------------------------------------
>
>                 Key: AMBARI-8858
>                 URL: https://issues.apache.org/jira/browse/AMBARI-8858
>             Project: Ambari
>          Issue Type: Bug
>            Reporter: Andrew Onischuk
>            Assignee: Andrew Onischuk
>             Fix For: 2.0.0
>
>
> Reports of issues from an email from ~jeff:
>     
>     
>     ...
>     On 12/4/14 8:55 AM, Pramod Thangali wrote:
>     > + Raja
>     > 
>     > (Also interesting comment from JP about packages including sources. Are 
>     > the sources part of the RPMs we install?)
>     > 
>     > On Thu, Dec 4, 2014 at 7:49 AM, Jeff Sposetti <[email protected] 
>     > <mailto:[email protected]>> wrote:
>     > 
>     >     John, has something changed with RE with the HDP 2.2 bits? I think
>     >     we saw timeouts during dev cycle but thought they were just due to
>     >     use of temporary/dev s3 repos.
>     > 
>     >     But seems to be happening even with public repos.
>     > 
>     > 
>     > 
>     >     ---------- Forwarded message ----------
>     >     From: *Andrew Grande* <[email protected]
>     >     <mailto:[email protected]>>
>     >     Date: Thu, Dec 4, 2014 at 10:25 AM
>     >     Subject: Re: HDP 2.2 from local repo keeps timing out
>     >     To: Andrew Grande <[email protected]
>     >     <mailto:[email protected]>>, "[email protected]
>     >     <mailto:[email protected]>" <[email protected]
>     >     <mailto:[email protected]>>
>     > 
>     > 
>     >     Here’s a typical error. I’ve had timeouts yesterday with our SE
>     >     Cloud environment (which is fine, it’s very slow). But nobody is
>     >     happy to see those on customer’s hardware (96GB VM on IBM Hardware
>     >     with 1Gbps network and Local repo on the master host).
>     > 
>     >     Ideas?
>     > 
>     >     stderr:
>     >
>     >     Python script has been killed due to timeout after waiting 900 secs
>     >
>     >     stdout:
>     >
>     >     2014-12-04 20:10:38,495 - Group['hadoop'] {'ignore_failures': False}
>     >
>     >     2014-12-04 20:10:38,503 - Adding group Group['hadoop']
>     >
>     >     2014-12-04 20:10:38,553 - Group['nobody'] {'ignore_failures': False}
>     >
>     >     2014-12-04 20:10:38,553 - Modifying group nobody
>     >
>     >     2014-12-04 20:10:38,586 - Group['users'] {'ignore_failures': False}
>     >
>     >     2014-12-04 20:10:38,586 - Modifying group users
>     >
>     >     2014-12-04 20:10:38,612 - Group['nagios'] {'ignore_failures': False}
>     >
>     >     2014-12-04 20:10:38,612 - Adding group Group['nagios']
>     >
>     >     2014-12-04 20:10:38,659 - Group['knox'] {'ignore_failures': False}
>     >
>     >     2014-12-04 20:10:38,660 - Adding group Group['knox']
>     >
>     >     2014-12-04 20:10:38,686 - User['nobody'] {'gid': 'hadoop',
>     >     'ignore_failures': False, 'groups': [u'nobody']}
>     >
>     >     2014-12-04 20:10:38,686 - Modifying user nobody
>     >
>     >     2014-12-04 20:10:38,768 - User['hive'] {'gid': 'hadoop',
>     >     'ignore_failures': False, 'groups': [u'hadoop']}
>     >
>     >     2014-12-04 20:10:38,768 - Adding user User['hive']
>     >
>     >     2014-12-04 20:10:38,949 - User['oozie'] {'gid': 'hadoop',
>     >     'ignore_failures': False, 'groups': [u'users']}
>     >
>     >     2014-12-04 20:10:38,949 - Adding user User['oozie']
>     >
>     >     2014-12-04 20:10:39,128 - User['nagios'] {'gid': 'nagios',
>     >     'ignore_failures': False, 'groups': [u'hadoop']}
>     >
>     >     2014-12-04 20:10:39,128 - Adding user User['nagios']
>     >
>     >     2014-12-04 20:10:39,288 - User['ambari-qa'] {'gid': 'hadoop',
>     >     'ignore_failures': False, 'groups': [u'users']}
>     >
>     >     2014-12-04 20:10:39,288 - Adding user User['ambari-qa']
>     >
>     >     2014-12-04 20:10:39,418 - User['flume'] {'gid': 'hadoop',
>     >     'ignore_failures': False, 'groups': [u'hadoop']}
>     >
>     >     2014-12-04 20:10:39,419 - Adding user User['flume']
>     >
>     >     2014-12-04 20:10:39,550 - User['hdfs'] {'gid': 'hadoop',
>     >     'ignore_failures': False, 'groups': [u'hadoop']}
>     >
>     >     2014-12-04 20:10:39,550 - Adding user User['hdfs']
>     >
>     >     2014-12-04 20:10:39,692 - User['knox'] {'gid': 'hadoop',
>     >     'ignore_failures': False, 'groups': [u'hadoop']}
>     >
>     >     2014-12-04 20:10:39,693 - Adding user User['knox']
>     >
>     >     2014-12-04 20:10:39,832 - User['storm'] {'gid': 'hadoop',
>     >     'ignore_failures': False, 'groups': [u'hadoop']}
>     >
>     >     2014-12-04 20:10:39,833 - Adding user User['storm']
>     >
>     >     2014-12-04 20:10:39,966 - User['mapred'] {'gid': 'hadoop',
>     >     'ignore_failures': False, 'groups': [u'hadoop']}
>     >
>     >     2014-12-04 20:10:39,966 - Adding user User['mapred']
>     >
>     >     2014-12-04 20:10:40,108 - User['hbase'] {'gid': 'hadoop',
>     >     'ignore_failures': False, 'groups': [u'hadoop']}
>     >
>     >     2014-12-04 20:10:40,109 - Adding user User['hbase']
>     >
>     >     2014-12-04 20:10:40,304 - User['tez'] {'gid': 'hadoop',
>     >     'ignore_failures': False, 'groups': [u'users']}
>     >
>     >     2014-12-04 20:10:40,304 - Adding user User['tez']
>     >
>     >     2014-12-04 20:10:40,450 - User['zookeeper'] {'gid': 'hadoop',
>     >     'ignore_failures': False, 'groups': [u'hadoop']}
>     >
>     >     2014-12-04 20:10:40,451 - Adding user User['zookeeper']
>     >
>     >     2014-12-04 20:10:40,591 - User['kafka'] {'gid': 'hadoop',
>     >     'ignore_failures': False, 'groups': [u'hadoop']}
>     >
>     >     2014-12-04 20:10:40,591 - Adding user User['kafka']
>     >
>     >     2014-12-04 20:10:40,740 - User['falcon'] {'gid': 'hadoop',
>     >     'ignore_failures': False, 'groups': [u'hadoop']}
>     >
>     >     2014-12-04 20:10:40,740 - Adding user User['falcon']
>     >
>     >     2014-12-04 20:10:40,880 - User['sqoop'] {'gid': 'hadoop',
>     >     'ignore_failures': False, 'groups': [u'hadoop']}
>     >
>     >     2014-12-04 20:10:40,880 - Adding user User['sqoop']
>     >
>     >     2014-12-04 20:10:41,024 - User['yarn'] {'gid': 'hadoop',
>     >     'ignore_failures': False, 'groups': [u'hadoop']}
>     >
>     >     2014-12-04 20:10:41,025 - Adding user User['yarn']
>     >
>     >     2014-12-04 20:10:41,184 - User['hcat'] {'gid': 'hadoop',
>     >     'ignore_failures': False, 'groups': [u'hadoop']}
>     >
>     >     2014-12-04 20:10:41,185 - Adding user User['hcat']
>     >
>     >     2014-12-04 20:10:41,319 -
>     >     File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] {'content':
>     >     StaticFile('changeToSecureUid.sh'), 'mode': 0555}
>     >
>     >     2014-12-04 20:10:41,324 - Writing
>     >     File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] because it
>     >     doesn't exist
>     >
>     >     2014-12-04 20:10:41,324 - Changing permission for
>     >     /var/lib/ambari-agent/data/tmp/changeUid.sh from 644 to 555
>     >
>     >     2014-12-04 20:10:41,325 -
>     >     Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa
>     >     /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa
>     >     2>/dev/null'] {'not_if': 'test $(id -u ambari-qa) -gt 1000'}
>     >
>     >     2014-12-04 20:10:41,412 -
>     >     File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] {'content':
>     >     StaticFile('changeToSecureUid.sh'), 'mode': 0555}
>     >
>     >     2014-12-04 20:10:41,414 -
>     >     Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh hbase
>     >     /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/hadoop/hbase
>     >     2>/dev/null'] {'not_if': 'test $(id -u hbase) -gt 1000'}
>     >
>     >     2014-12-04 20:10:41,474 - Directory['/etc/hadoop/conf.empty']
>     >     {'owner': 'root', 'group': 'root', 'recursive': True}
>     >
>     >     2014-12-04 20:10:41,475 - Creating directory
>     >     Directory['/etc/hadoop/conf.empty']
>     >
>     >     2014-12-04 20:10:41,476 - Link['/etc/hadoop/conf'] {'not_if': 'ls
>     >     /etc/hadoop/conf', 'to': '/etc/hadoop/conf.empty'}
>     >
>     >     2014-12-04 20:10:41,488 - Creating symbolic Link['/etc/hadoop/conf']
>     >
>     >     2014-12-04 20:10:41,501 - File['/etc/hadoop/conf/hadoop-env.sh']
>     >     {'content': InlineTemplate(...), 'owner': 'hdfs'}
>     >
>     >     2014-12-04 20:10:41,501 - Writing
>     >     File['/etc/hadoop/conf/hadoop-env.sh'] because it doesn't exist
>     >
>     >     2014-12-04 20:10:41,501 - Changing owner for
>     >     /etc/hadoop/conf/hadoop-env.sh from 0 to hdfs
>     >
>     >     2014-12-04 20:10:41,513 - Repository['HDP-2.2'] {'base_url':
>     >     'http://btc5x040.code1.emi.philips.com/hdp/HDP/centos6/2.x/GA/2.2.0.0/',
>     >     'action': ['create'], 'components': [u'HDP', 'main'],
>     >     'repo_template': 'repo_suse_rhel.j2', 'repo_file_name': 'HDP',
>     >     'mirror_list': None}
>     >
>     >     2014-12-04 20:10:41,529 - File['/etc/yum.repos.d/HDP.repo']
>     >     {'content': Template('repo_suse_rhel.j2')}
>     >
>     >     2014-12-04 20:10:41,530 - Writing File['/etc/yum.repos.d/HDP.repo']
>     >     because it doesn't exist
>     >
>     >     2014-12-04 20:10:41,530 - Repository['HDP-UTILS-1.1.0.20']
>     >     {'base_url':
>     >     'http://btc5x040.code1.emi.philips.com/hdp/HDP-UTILS-1.1.0.20/repos/centos6/',
>     >     'action': ['create'], 'components': [u'HDP-UTILS', 'main'],
>     >     'repo_template': 'repo_suse_rhel.j2', 'repo_file_name': 'HDP-UTILS',
>     >     'mirror_list': None}
>     >
>     >     2014-12-04 20:10:41,533 - File['/etc/yum.repos.d/HDP-UTILS.repo']
>     >     {'content': Template('repo_suse_rhel.j2')}
>     >
>     >     2014-12-04 20:10:41,534 - Writing
>     >     File['/etc/yum.repos.d/HDP-UTILS.repo'] because it doesn't exist
>     >
>     >     2014-12-04 20:10:41,534 - Package['unzip'] {}
>     >
>     >     2014-12-04 20:10:42,202 - Skipping installing existent package unzip
>     >
>     >     2014-12-04 20:10:42,203 - Package['curl'] {}
>     >
>     >     2014-12-04 20:10:42,833 - Skipping installing existent package curl
>     >
>     >     2014-12-04 20:10:42,834 - Package['hdp-select'] {}
>     >
>     >     2014-12-04 20:10:43,513 - Installing package hdp-select
>     >     ('/usr/bin/yum -d 0 -e 0 -y install hdp-select')
>     >
>     >     2014-12-04 20:10:51,702 - Package['hadoop_2_2_*-yarn'] {}
>     >
>     >     2014-12-04 20:10:52,343 - Installing package hadoop_2_2_*-yarn
>     >     ('/usr/bin/yum -d 0 -e 0 -y install hadoop_2_2_*-yarn')
>     > 
>     > 
>     >     From: Andrew Perepelytsa <[email protected]
>     >     <mailto:[email protected]>>
>     >     Date: Thursday, December 4, 2014 at 10:23 AM
>     >     To: "[email protected] <mailto:[email protected]>"
>     >     <[email protected] <mailto:[email protected]>>
>     >     Subject: HDP 2.2 from local repo keeps timing out
>     > 
>     >         Guys,
>     > 
>     >         Has anything changed with our hdp2.2 install process?
>     > 
>     >         The install times out after 900 seconds. First, ATS (when it
>     >         does hadoop_2_2_*-yarn install), then fails with timeout. Retry
>     >         – moves on, fails on the next step (ganglia)
>     > 
>     >         Andrew
>     > 
>     > 
>     > 
>     > 
>     > 
>     > --
>     > __________________________________________________________________________
>     > Pramod Thangali
>     > 408 621 1525
>     > Engineering at Hortonworks
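
The 900-second limit in the log above is the Ambari agent's per-task script
timeout, which is what this issue recalibrates per package-install command.
As an interim operator-side sketch, assuming the stock agent.task.timeout
property (verify the name against your Ambari version; 1800 is illustrative):

    # /etc/ambari-server/conf/ambari.properties on the Ambari Server host.
    # agent.task.timeout is assumed to govern how long the agent lets a
    # command script run before killing it; restart ambari-server afterwards.
    agent.task.timeout=1800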


