-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19621/#review38472
-----------------------------------------------------------

Ship it!


Ship It!

- Vitalyi Brodetskyi


On March 25, 2014, 5:38 p.m., Andrew Onischuk wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/19621/
> -----------------------------------------------------------
> 
> (Updated March 25, 2014, 5:38 p.m.)
> 
> 
> Review request for Ambari and Vitalyi Brodetskyi.
> 
> 
> Bugs: AMBARI-5208
>     https://issues.apache.org/jira/browse/AMBARI-5208
> 
> 
> Repository: ambari
> 
> 
> Description
> -------
> 
> *STR:*
> - deploy a cluster on 2 nodes
> - add a host
> - enable HA
> - enable security
> - stop all components
> - start all components
> 
> *Result:* HBase Master fails to start (if you start the HBase service on 
> its own, everything works fine)
> 
> {code}
> stderr:   /var/lib/ambari-agent/data/errors-322.txt
> 
> Python script has been killed due to timeout
> {code}
> {code}
> stdout:   /var/lib/ambari-agent/data/output-322.txt
> 
> 2014-03-23 20:52:43,576 - Execute['mkdir -p /tmp/HDP-artifacts/ ; curl -kf 
> --retry 10 
> http://ambsmoke6-4-1395623610-2.cs1cloud.internal:8080/resources//jdk-7u45-linux-x64.tar.gz
>  -o /tmp/HDP-artifacts//jdk-7u45-linux-x64.tar.gz'] {'not_if': 'test -e 
> /usr/jdk64/jdk1.7.0_45/bin/java', 'path': ['/bin', '/usr/bin/']}
> 2014-03-23 20:52:43,587 - Skipping Execute['mkdir -p /tmp/HDP-artifacts/ ; 
> curl -kf --retry 10 
> http://ambsmoke6-4-1395623610-2.cs1cloud.internal:8080/resources//jdk-7u45-linux-x64.tar.gz
>  -o /tmp/HDP-artifacts//jdk-7u45-linux-x64.tar.gz'] due to not_if
> 2014-03-23 20:52:43,588 - Execute['mkdir -p /usr/jdk64 ; cd /usr/jdk64 ; tar 
> -xf /tmp/HDP-artifacts//jdk-7u45-linux-x64.tar.gz > /dev/null 2>&1'] 
> {'not_if': 'test -e /usr/jdk64/jdk1.7.0_45/bin/java', 'path': ['/bin', 
> '/usr/bin/']}
> 2014-03-23 20:52:43,598 - Skipping Execute['mkdir -p /usr/jdk64 ; cd 
> /usr/jdk64 ; tar -xf /tmp/HDP-artifacts//jdk-7u45-linux-x64.tar.gz > 
> /dev/null 2>&1'] due to not_if
> 2014-03-23 20:52:43,599 - Execute['mkdir -p /tmp/HDP-artifacts/; curl -kf 
> --retry 10 
> http://ambsmoke6-4-1395623610-2.cs1cloud.internal:8080/resources//UnlimitedJCEPolicyJDK7.zip
>  -o /tmp/HDP-artifacts//UnlimitedJCEPolicyJDK7.zip'] {'not_if': 'test -e 
> /tmp/HDP-artifacts//UnlimitedJCEPolicyJDK7.zip', 'ignore_failures': True, 
> 'path': ['/bin', '/usr/bin/']}
> 2014-03-23 20:52:43,609 - Skipping Execute['mkdir -p /tmp/HDP-artifacts/; 
> curl -kf --retry 10 
> http://ambsmoke6-4-1395623610-2.cs1cloud.internal:8080/resources//UnlimitedJCEPolicyJDK7.zip
>  -o /tmp/HDP-artifacts//UnlimitedJCEPolicyJDK7.zip'] due to not_if
> 2014-03-23 20:52:43,610 - Execute['rm -f local_policy.jar; rm -f 
> US_export_policy.jar; unzip -o -j -q 
> /tmp/HDP-artifacts//UnlimitedJCEPolicyJDK7.zip'] {'path': ['/bin/', 
> '/usr/bin'], 'only_if': 'test -e /usr/jdk64/jdk1.7.0_45/jre/lib/security && 
> test -f /tmp/HDP-artifacts//UnlimitedJCEPolicyJDK7.zip', 'cwd': 
> '/usr/jdk64/jdk1.7.0_45/jre/lib/security'}
> 2014-03-23 20:52:43,645 - File['/etc/snmp/snmpd.conf'] {'content': 
> Template('snmpd.conf.j2')}
> 2014-03-23 20:52:43,646 - Service['snmpd'] {'action': ['restart']}
> 2014-03-23 20:52:43,666 - Service['snmpd'] command 'restart'
> 2014-03-23 20:52:43,871 - Execute['/bin/echo 0 > /selinux/enforce'] 
> {'only_if': 'test -f /selinux/enforce'}
> 2014-03-23 20:52:43,884 - Skipping Execute['/bin/echo 0 > /selinux/enforce'] 
> due to only_if
> 2014-03-23 20:52:43,886 - Execute['mkdir -p 
> /usr/lib/hadoop/lib/native/Linux-i386-32; ln -sf /usr/lib/libsnappy.so 
> /usr/lib/hadoop/lib/native/Linux-i386-32/libsnappy.so'] {}
> 2014-03-23 20:52:43,905 - Execute['mkdir -p 
> /usr/lib/hadoop/lib/native/Linux-amd64-64; ln -sf /usr/lib64/libsnappy.so 
> /usr/lib/hadoop/lib/native/Linux-amd64-64/libsnappy.so'] {}
> 2014-03-23 20:52:43,921 - Directory['/etc/hadoop/conf'] {'owner': 'root', 
> 'group': 'root', 'recursive': True}
> 2014-03-23 20:52:43,922 - Directory['/grid/0/log/hadoop'] {'owner': 'root', 
> 'group': 'root', 'recursive': True}
> 2014-03-23 20:52:43,923 - Directory['/var/run/hadoop'] {'owner': 'root', 
> 'group': 'root', 'recursive': True}
> 2014-03-23 20:52:43,923 - Directory['/tmp'] {'owner': 'hdfs', 'recursive': 
> True}
> 2014-03-23 20:52:43,929 - File['/etc/security/limits.d/hdfs.conf'] 
> {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 
> 'mode': 0644}
> 2014-03-23 20:52:43,930 - File['/usr/lib/hadoop/sbin/task-controller'] 
> {'owner': 'root', 'group': 'hadoop', 'mode': 06050}
> 2014-03-23 20:52:43,935 - File['/etc/hadoop/conf/taskcontroller.cfg'] 
> {'content': Template('taskcontroller.cfg.j2'), 'owner': 'root', 'group': 
> 'hadoop', 'mode': 0644}
> 2014-03-23 20:52:43,948 - File['/etc/hadoop/conf/hadoop-env.sh'] {'content': 
> Template('hadoop-env.sh.j2'), 'owner': 'root'}
> 2014-03-23 20:52:43,950 - File['/etc/hadoop/conf/commons-logging.properties'] 
> {'content': Template('commons-logging.properties.j2'), 'owner': 'root'}
> 2014-03-23 20:52:43,953 - File['/etc/hadoop/conf/slaves'] {'content': 
> Template('slaves.j2'), 'owner': 'root'}
> 2014-03-23 20:52:43,956 - File['/etc/hadoop/conf/health_check'] {'content': 
> Template('health_check-v2.j2'), 'owner': 'root'}
> 2014-03-23 20:52:43,956 - File['/etc/hadoop/conf/log4j.properties'] 
> {'content': '...', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
> 2014-03-23 20:52:43,963 - File['/etc/hadoop/conf/hadoop-metrics2.properties'] 
> {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
> 2014-03-23 20:52:43,964 - XmlConfig['core-site.xml'] {'owner': 'hdfs', 
> 'group': 'hadoop', 'conf_dir': '/etc/hadoop/conf', 'configurations': ...}
> 2014-03-23 20:52:43,971 - Generating config: /etc/hadoop/conf/core-site.xml
> 2014-03-23 20:52:43,972 - File['/etc/hadoop/conf/core-site.xml'] {'owner': 
> 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None}
> 2014-03-23 20:52:43,973 - Writing File['/etc/hadoop/conf/core-site.xml'] 
> because contents don't match
> 2014-03-23 20:52:43,974 - XmlConfig['mapred-site.xml'] {'owner': 'mapred', 
> 'group': 'hadoop', 'conf_dir': '/etc/hadoop/conf', 'configurations': ...}
> 2014-03-23 20:52:43,979 - Generating config: /etc/hadoop/conf/mapred-site.xml
> 2014-03-23 20:52:43,979 - File['/etc/hadoop/conf/mapred-site.xml'] {'owner': 
> 'mapred', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None}
> 2014-03-23 20:52:43,981 - Writing File['/etc/hadoop/conf/mapred-site.xml'] 
> because contents don't match
> 2014-03-23 20:52:43,981 - File['/etc/hadoop/conf/task-log4j.properties'] 
> {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
> 2014-03-23 20:52:43,982 - XmlConfig['capacity-scheduler.xml'] {'owner': 
> 'hdfs', 'group': 'hadoop', 'conf_dir': '/etc/hadoop/conf', 'configurations': 
> ...}
> 2014-03-23 20:52:43,988 - Generating config: 
> /etc/hadoop/conf/capacity-scheduler.xml
> 2014-03-23 20:52:43,988 - File['/etc/hadoop/conf/capacity-scheduler.xml'] 
> {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 
> None}
> 2014-03-23 20:52:43,989 - Writing 
> File['/etc/hadoop/conf/capacity-scheduler.xml'] because contents don't match
> 2014-03-23 20:52:43,989 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 
> 'group': 'hadoop', 'conf_dir': '/etc/hadoop/conf', 'configurations': ...}
> 2014-03-23 20:52:43,995 - Generating config: /etc/hadoop/conf/hdfs-site.xml
> 2014-03-23 20:52:43,996 - File['/etc/hadoop/conf/hdfs-site.xml'] {'owner': 
> 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None}
> 2014-03-23 20:52:43,997 - Writing File['/etc/hadoop/conf/hdfs-site.xml'] 
> because contents don't match
> 2014-03-23 20:52:43,998 - File['/etc/hadoop/conf/configuration.xsl'] 
> {'owner': 'hdfs', 'group': 'hadoop'}
> 2014-03-23 20:52:43,998 - File['/etc/hadoop/conf/ssl-client.xml.example'] 
> {'owner': 'mapred', 'group': 'hadoop'}
> 2014-03-23 20:52:43,999 - File['/etc/hadoop/conf/ssl-server.xml.example'] 
> {'owner': 'mapred', 'group': 'hadoop'}
> 2014-03-23 20:52:44,200 - HdfsDirectory['hdfs://nameservice/apps/hbase/data'] 
> {'security_enabled': True, 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'conf_dir': '/etc/hadoop/conf', 
> 'hdfs_user': 'hdfs', 'kinit_path_local': '/usr/bin/kinit', 'owner': 'hbase', 
> 'action': ['create_delayed']}
> 2014-03-23 20:52:44,202 - HdfsDirectory['/apps/hbase/staging'] 
> {'security_enabled': True, 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'conf_dir': '/etc/hadoop/conf', 
> 'hdfs_user': 'hdfs', 'kinit_path_local': '/usr/bin/kinit', 'mode': 0711, 
> 'owner': 'hbase', 'action': ['create_delayed']}
> 2014-03-23 20:52:44,203 - HdfsDirectory['None'] {'security_enabled': True, 
> 'keytab': '/etc/security/keytabs/hdfs.headless.keytab', 'conf_dir': 
> '/etc/hadoop/conf', 'hdfs_user': 'hdfs', 'kinit_path_local': 
> '/usr/bin/kinit', 'action': ['create']}
> 2014-03-23 20:52:44,204 - Execute['/usr/bin/kinit -kt 
> /etc/security/keytabs/hdfs.headless.keytab hdfs'] {'user': 'hdfs'}
> 2014-03-23 20:52:44,259 - Execute['hadoop fs -mkdir `rpm -q hadoop | grep -q 
> "hadoop-1" || echo "-p"` hdfs://nameservice/apps/hbase/data 
> /apps/hbase/staging && hadoop fs -chmod  711 /apps/hbase/staging && hadoop fs 
> -chown  hbase hdfs://nameservice/apps/hbase/data /apps/hbase/staging'] 
> {'not_if': "su - hdfs -c 'hadoop fs -ls hdfs://nameservice/apps/hbase/data 
> /apps/hbase/staging'", 'user': 'hdfs'}
> {code}
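> 
> The fix lands in role_command_order.json. As a rough sketch only (the 
> section name and exact dependency list here are illustrative assumptions, 
> not the actual diff), an HA-aware ordering entry that delays HBase Master 
> start until the NameNode and ZKFC are up could look like:
> 
> {code}
> "namenode_optional_ha": {
>     "HBASE_MASTER-START": ["NAMENODE-START", "ZKFC-START"]
> }
> {code}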
> 
> 
> Diffs
> -----
> 
>   ambari-server/src/main/resources/stacks/HDP/2.0.6/role_command_order.json 
> 550f885 
>   
> ambari-server/src/main/resources/stacks/HDP/2.0.6/services/HDFS/package/scripts/service_check.py
>  d27b13a 
>   ambari-server/src/main/resources/stacks/HDP/2.0/role_command_order.json 
> PRE-CREATION 
>   ambari-server/src/main/resources/stacks/HDP/2.1/role_command_order.json 
> 60c2d20 
> 
> Diff: https://reviews.apache.org/r/19621/diff/
> 
> 
> Testing
> -------
> 
> mvn test
> 
> 
> Thanks,
> 
> Andrew Onischuk
> 
>