-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29310/
-----------------------------------------------------------
Review request for Ambari and Dmitro Lisnichenko.
Bugs: AMBARI-8858
https://issues.apache.org/jira/browse/AMBARI-8858
Repository: ambari
Description
-------
Reports of issues from an email from ~jeff:
...
On 12/4/14 8:55 AM, Pramod Thangali wrote:
> + Raja
>
> (Also interesting comment from JP about packages including sources. Are
> the sources part of the RPMs we install?)
>
> On Thu, Dec 4, 2014 at 7:49 AM, Jeff Sposetti <[email protected]
> <mailto:[email protected]>> wrote:
>
> John, has something changed with RE with the HDP 2.2 bits? I think
> the use of temporary/dev s3 repos.
>
> But seems to be happening even with public repos.
>
>
>
> ---------- Forwarded message ----------
> From: Andrew Grande <[email protected]>
> Date: Thu, Dec 4, 2014 at 10:25 AM
> Subject: Re: HDP 2.2 from local repo keeps timing out
> To: Andrew Grande <[email protected]>, "[email protected]" <[email protected]>
>
>
> Here’s a typical error. I’ve had timeouts yesterday with our SE
> Cloud environment (which is fine, it’s very slow). But nobody is
> happy to see those on customer’s hardware (96GB VM on IBM Hardware
> with 1Gbps network and Local repo on the master host).
>
> Ideas?
>
> stderr:
>
> Python script has been killed due to timeout after waiting 900 secs
>
> stdout:
>
> 2014-12-04 20:10:38,495 - Group['hadoop'] {'ignore_failures': False}
> 2014-12-04 20:10:38,503 - Adding group Group['hadoop']
> 2014-12-04 20:10:38,553 - Group['nobody'] {'ignore_failures': False}
> 2014-12-04 20:10:38,553 - Modifying group nobody
> 2014-12-04 20:10:38,586 - Group['users'] {'ignore_failures': False}
> 2014-12-04 20:10:38,586 - Modifying group users
> 2014-12-04 20:10:38,612 - Group['nagios'] {'ignore_failures': False}
> 2014-12-04 20:10:38,612 - Adding group Group['nagios']
> 2014-12-04 20:10:38,659 - Group['knox'] {'ignore_failures': False}
> 2014-12-04 20:10:38,660 - Adding group Group['knox']
> 2014-12-04 20:10:38,686 - User['nobody'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'nobody']}
> 2014-12-04 20:10:38,686 - Modifying user nobody
> 2014-12-04 20:10:38,768 - User['hive'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2014-12-04 20:10:38,768 - Adding user User['hive']
> 2014-12-04 20:10:38,949 - User['oozie'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'users']}
> 2014-12-04 20:10:38,949 - Adding user User['oozie']
> 2014-12-04 20:10:39,128 - User['nagios'] {'gid': 'nagios', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2014-12-04 20:10:39,128 - Adding user User['nagios']
> 2014-12-04 20:10:39,288 - User['ambari-qa'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'users']}
> 2014-12-04 20:10:39,288 - Adding user User['ambari-qa']
> 2014-12-04 20:10:39,418 - User['flume'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2014-12-04 20:10:39,419 - Adding user User['flume']
> 2014-12-04 20:10:39,550 - User['hdfs'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2014-12-04 20:10:39,550 - Adding user User['hdfs']
> 2014-12-04 20:10:39,692 - User['knox'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2014-12-04 20:10:39,693 - Adding user User['knox']
> 2014-12-04 20:10:39,832 - User['storm'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2014-12-04 20:10:39,833 - Adding user User['storm']
> 2014-12-04 20:10:39,966 - User['mapred'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2014-12-04 20:10:39,966 - Adding user User['mapred']
> 2014-12-04 20:10:40,108 - User['hbase'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2014-12-04 20:10:40,109 - Adding user User['hbase']
> 2014-12-04 20:10:40,304 - User['tez'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'users']}
> 2014-12-04 20:10:40,304 - Adding user User['tez']
> 2014-12-04 20:10:40,450 - User['zookeeper'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2014-12-04 20:10:40,451 - Adding user User['zookeeper']
> 2014-12-04 20:10:40,591 - User['kafka'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2014-12-04 20:10:40,591 - Adding user User['kafka']
> 2014-12-04 20:10:40,740 - User['falcon'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2014-12-04 20:10:40,740 - Adding user User['falcon']
> 2014-12-04 20:10:40,880 - User['sqoop'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2014-12-04 20:10:40,880 - Adding user User['sqoop']
> 2014-12-04 20:10:41,024 - User['yarn'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2014-12-04 20:10:41,025 - Adding user User['yarn']
> 2014-12-04 20:10:41,184 - User['hcat'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2014-12-04 20:10:41,185 - Adding user User['hcat']
> 2014-12-04 20:10:41,319 - File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
> 2014-12-04 20:10:41,324 - Writing File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] because it doesn't exist
> 2014-12-04 20:10:41,324 - Changing permission for /var/lib/ambari-agent/data/tmp/changeUid.sh from 644 to 555
> 2014-12-04 20:10:41,325 - Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 2>/dev/null'] {'not_if': 'test $(id -u ambari-qa) -gt 1000'}
> 2014-12-04 20:10:41,412 - File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
> 2014-12-04 20:10:41,414 - Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/hadoop/hbase 2>/dev/null'] {'not_if': 'test $(id -u hbase) -gt 1000'}
> 2014-12-04 20:10:41,474 - Directory['/etc/hadoop/conf.empty'] {'owner': 'root', 'group': 'root', 'recursive': True}
> 2014-12-04 20:10:41,475 - Creating directory Directory['/etc/hadoop/conf.empty']
> 2014-12-04 20:10:41,476 - Link['/etc/hadoop/conf'] {'not_if': 'ls /etc/hadoop/conf', 'to': '/etc/hadoop/conf.empty'}
> 2014-12-04 20:10:41,488 - Creating symbolic Link['/etc/hadoop/conf']
> 2014-12-04 20:10:41,501 - File['/etc/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs'}
> 2014-12-04 20:10:41,501 - Writing File['/etc/hadoop/conf/hadoop-env.sh'] because it doesn't exist
> 2014-12-04 20:10:41,501 - Changing owner for /etc/hadoop/conf/hadoop-env.sh from 0 to hdfs
> 2014-12-04 20:10:41,513 - Repository['HDP-2.2'] {'base_url': 'http://btc5x040.code1.emi.philips.com/hdp/HDP/centos6/2.x/GA/2.2.0.0/', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': 'repo_suse_rhel.j2', 'repo_file_name': 'HDP', 'mirror_list': None}
> 2014-12-04 20:10:41,529 - File['/etc/yum.repos.d/HDP.repo'] {'content': Template('repo_suse_rhel.j2')}
> 2014-12-04 20:10:41,530 - Writing File['/etc/yum.repos.d/HDP.repo'] because it doesn't exist
> 2014-12-04 20:10:41,530 - Repository['HDP-UTILS-1.1.0.20'] {'base_url': 'http://btc5x040.code1.emi.philips.com/hdp/HDP-UTILS-1.1.0.20/repos/centos6/', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': 'repo_suse_rhel.j2', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
> 2014-12-04 20:10:41,533 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': Template('repo_suse_rhel.j2')}
> 2014-12-04 20:10:41,534 - Writing File['/etc/yum.repos.d/HDP-UTILS.repo'] because it doesn't exist
> 2014-12-04 20:10:41,534 - Package['unzip'] {}
> 2014-12-04 20:10:42,202 - Skipping installing existent package unzip
> 2014-12-04 20:10:42,203 - Package['curl'] {}
> 2014-12-04 20:10:42,833 - Skipping installing existent package curl
> 2014-12-04 20:10:42,834 - Package['hdp-select'] {}
> 2014-12-04 20:10:43,513 - Installing package hdp-select ('/usr/bin/yum -d 0 -e 0 -y install hdp-select')
> 2014-12-04 20:10:51,702 - Package['hadoop_2_2_*-yarn'] {}
> 2014-12-04 20:10:52,343 - Installing package hadoop_2_2_*-yarn ('/usr/bin/yum -d 0 -e 0 -y install hadoop_2_2_*-yarn')
>
>
> From: Andrew Perepelytsa <[email protected]>
> Date: Thursday, December 4, 2014 at 10:23 AM
> To: "[email protected]" <[email protected]>
> Subject: HDP 2.2 from local repo keeps timing out
>
> Guys,
>
> Has anything changed with our HDP 2.2 install process?
>
> The install times out after 900 seconds. First on ATS (when it
> does the hadoop_2_2*-yarn install), it fails with a timeout. On
> retry it moves on, then fails on the next step (Ganglia).
>
> Andrew
>
>
> --
> __________________________________________________________________________
> Pramod Thangali
> 408 621 1525
> Engineering at Hortonworks
>
>
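The log above shows the Ambari agent killing the install command when it hits the 900-second command timeout. In Ambari stack definitions, the per-command timeout is declared on each command script inside the service's metainfo.xml, which is why this patch touches the metainfo.xml of every affected service. A minimal sketch of the relevant element (the script path and the 1800-second value are illustrative, not the exact values in this patch):

```xml
<!-- Illustrative fragment of a service metainfo.xml commandScript block.
     The script name and timeout value here are examples only; see the
     diff for the actual files and values changed. -->
<commandScript>
  <script>scripts/application_timeline_server.py</script>
  <scriptType>PYTHON</scriptType>
  <!-- Seconds the agent waits before killing the command; the install
       in the log above was killed at a 900-second limit. -->
  <timeout>1800</timeout>
</commandScript>
```

Raising this value gives slow repos (or slow yum mirrors) enough time to finish package installation instead of failing the step and forcing a retry.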
Diffs
-----
ambari-server/src/main/resources/stacks/HDP/2.0.6/services/FLUME/metainfo.xml cb05b02
ambari-server/src/main/resources/stacks/HDP/2.0.6/services/GANGLIA/metainfo.xml 4e96ade
ambari-server/src/main/resources/stacks/HDP/2.0.6/services/HBASE/metainfo.xml fd290df
ambari-server/src/main/resources/stacks/HDP/2.0.6/services/HDFS/metainfo.xml 93504a4
ambari-server/src/main/resources/stacks/HDP/2.0.6/services/HIVE/metainfo.xml 26edb84
ambari-server/src/main/resources/stacks/HDP/2.0.6/services/OOZIE/metainfo.xml ec66213
ambari-server/src/main/resources/stacks/HDP/2.0.6/services/PIG/metainfo.xml 27bf492
ambari-server/src/main/resources/stacks/HDP/2.0.6/services/YARN/metainfo.xml 49e3803
ambari-server/src/main/resources/stacks/HDP/2.0.6/services/ZOOKEEPER/metainfo.xml 0cb65ea
ambari-server/src/main/resources/stacks/HDP/2.1/services/FALCON/metainfo.xml 78336e6
ambari-server/src/main/resources/stacks/HDP/2.1/services/STORM/metainfo.xml f2c391c
ambari-server/src/main/resources/stacks/HDP/2.1/services/TEZ/metainfo.xml 641de86
ambari-server/src/main/resources/stacks/HDP/2.1/services/YARN/metainfo.xml be41833
ambari-server/src/main/resources/stacks/HDP/2.2/services/AMS/metainfo.xml 51d8177
ambari-server/src/main/resources/stacks/HDP/2.2/services/HIVE/metainfo.xml c5dc9b6
ambari-server/src/main/resources/stacks/HDP/2.2/services/KAFKA/metainfo.xml f410d98
ambari-server/src/main/resources/stacks/HDP/2.2/services/KERBEROS/metainfo.xml debde07
ambari-server/src/main/resources/stacks/HDP/2.2/services/KNOX/metainfo.xml 90ab331
ambari-server/src/main/resources/stacks/HDP/2.2/services/SLIDER/metainfo.xml 19e75d1
Diff: https://reviews.apache.org/r/29310/diff/
Testing
-------
mvn clean test
Thanks,
Andrew Onischuk