-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/26317/#review55343
-----------------------------------------------------------
Ship it!

Ship It!

- Andrew Onischuk


On Oct. 3, 2014, 2:46 p.m., Dmytro Sen wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/26317/
> -----------------------------------------------------------
> 
> (Updated Oct. 3, 2014, 2:46 p.m.)
> 
> 
> Review request for Ambari and Andrew Onischuk.
> 
> 
> Bugs: AMBARI-7631
>     https://issues.apache.org/jira/browse/AMBARI-7631
> 
> 
> Repository: ambari
> 
> 
> Description
> -------
> 
> Hive Metastore didn't start on 2.1 Stack.
> 
> /var/lib/ambari-agent/data/errors-62.txt
> 
> 2014-10-03 11:52:20,935 - Error while executing command 'start':
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 122, in execute
>     method(env)
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive_metastore.py", line 43, in start
>     self.configure(env)  # FOR SECURITY
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive_metastore.py", line 38, in configure
>     hive(name='metastore')
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive.py", line 83, in hive
>     not_if = check_schema_created_cmd
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
>     self.env.run()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 149, in run
>     self.run_action(resource, action)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 115, in run_action
>     provider_action()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 237, in action_run
>     raise ex
> Fail: Execution of 'export HIVE_CONF_DIR=/etc/hive/conf.server ; /usr/lib/hive/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED]' returned 1.
> Metastore connection URL:     jdbc:derby:;databaseName=metastore_db;create=true
> Metastore Connection Driver : org.apache.derby.jdbc.EmbeddedDriver
> Metastore connection User:    hive
> Starting metastore schema initialization to 0.13.0
> Initialization script hive-schema-0.13.0.mysql.sql
> Error: Syntax error: Encountered "<EOF>" at line 1, column 64. (state=42X01,code=30000)
> org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
> *** schemaTool failed ***
> 
> /var/lib/ambari-agent/data/output-62.txt
> 
> 2014-10-03 11:52:09,810 - Execute['mkdir -p /var/lib/ambari-agent/data/tmp/AMBARI-artifacts/; curl -kf -x "" --retry 10 http://c6401.ambari.apache.org:8080/resources//UnlimitedJCEPolicyJDK7.zip -o /var/lib/ambari-agent/data/tmp/AMBARI-artifacts//UnlimitedJCEPolicyJDK7.zip'] {'environment': ..., 'not_if': 'test -e /var/lib/ambari-agent/data/tmp/AMBARI-artifacts//UnlimitedJCEPolicyJDK7.zip', 'ignore_failures': True, 'path': ['/bin', '/usr/bin/']}
> 2014-10-03 11:52:09,833 - Skipping Execute['mkdir -p /var/lib/ambari-agent/data/tmp/AMBARI-artifacts/; curl -kf -x "" --retry 10 http://c6401.ambari.apache.org:8080/resources//UnlimitedJCEPolicyJDK7.zip -o /var/lib/ambari-agent/data/tmp/AMBARI-artifacts//UnlimitedJCEPolicyJDK7.zip'] due to not_if
> 2014-10-03 11:52:09,834 - Group['hadoop'] {'ignore_failures': False}
> 2014-10-03 11:52:09,835 - Modifying group hadoop
> 2014-10-03 11:52:09,862 - Group['users'] {'ignore_failures': False}
> 2014-10-03 11:52:09,863 - Modifying group users
> 2014-10-03 11:52:09,892 - User['hive'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2014-10-03 11:52:09,892 - Modifying user hive
> 2014-10-03 11:52:09,903 - User['mapred'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2014-10-03 11:52:09,904 - Modifying user mapred
> 2014-10-03 11:52:09,916 - User['ambari-qa'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'users']}
> 2014-10-03 11:52:09,916 - Modifying user ambari-qa
> 2014-10-03 11:52:09,930 - User['zookeeper'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2014-10-03 11:52:09,930 - Modifying user zookeeper
> 2014-10-03 11:52:09,943 - User['tez'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'users']}
> 2014-10-03 11:52:09,943 - Modifying user tez
> 2014-10-03 11:52:09,955 - User['hdfs'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2014-10-03 11:52:09,955 - Modifying user hdfs
> 2014-10-03 11:52:09,968 - User['yarn'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2014-10-03 11:52:09,968 - Modifying user yarn
> 2014-10-03 11:52:09,980 - User['hcat'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2014-10-03 11:52:09,980 - Modifying user hcat
> 2014-10-03 11:52:09,994 - File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
> 2014-10-03 11:52:09,995 - Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 2>/dev/null'] {'not_if': 'test $(id -u ambari-qa) -gt 1000'}
> 2014-10-03 11:52:10,007 - Skipping Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 2>/dev/null'] due to not_if
> 2014-10-03 11:52:10,008 - Directory['/etc/hadoop/conf.empty'] {'owner': 'root', 'group': 'root', 'recursive': True}
> 2014-10-03 11:52:10,009 - Link['/etc/hadoop/conf'] {'not_if': 'ls /etc/hadoop/conf', 'to': '/etc/hadoop/conf.empty'}
> 2014-10-03 11:52:10,019 - Skipping Link['/etc/hadoop/conf'] due to not_if
> 2014-10-03 11:52:10,033 - File['/etc/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs'}
> 2014-10-03 11:52:10,044 - Execute['/bin/echo 0 > /selinux/enforce'] {'only_if': 'test -f /selinux/enforce'}
> 2014-10-03 11:52:10,055 - Skipping Execute['/bin/echo 0 > /selinux/enforce'] due to only_if
> 2014-10-03 11:52:10,057 - Execute['mkdir -p /usr/lib/hadoop/lib/native/Linux-i386-32; ln -sf /usr/lib/libsnappy.so /usr/lib/hadoop/lib/native/Linux-i386-32/libsnappy.so'] {}
> 2014-10-03 11:52:10,071 - Execute['mkdir -p /usr/lib/hadoop/lib/native/Linux-amd64-64; ln -sf /usr/lib64/libsnappy.so /usr/lib/hadoop/lib/native/Linux-amd64-64/libsnappy.so'] {}
> 2014-10-03 11:52:10,083 - Directory['/var/log/hadoop'] {'owner': 'root', 'group': 'hadoop', 'mode': 0775, 'recursive': True}
> 2014-10-03 11:52:10,084 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True}
> 2014-10-03 11:52:10,084 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True}
> 2014-10-03 11:52:10,088 - File['/etc/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
> 2014-10-03 11:52:10,090 - File['/etc/hadoop/conf/health_check'] {'content': Template('health_check-v2.j2'), 'owner': 'hdfs'}
> 2014-10-03 11:52:10,091 - File['/etc/hadoop/conf/log4j.properties'] {'content': '...', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
> 2014-10-03 11:52:10,096 - File['/etc/hadoop/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
> 2014-10-03 11:52:10,096 - File['/etc/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
> 2014-10-03 11:52:10,097 - File['/etc/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
> 2014-10-03 11:52:10,206 - Directory['/etc/hive/conf.server'] {'owner': 'hive', 'group': 'hadoop', 'recursive': True}
> 2014-10-03 11:52:10,207 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/etc/hive/conf.server', 'mode': 0644, 'configuration_attributes': ..., 'owner': 'hive', 'configurations': ...}
> 2014-10-03 11:52:10,223 - Generating config: /etc/hive/conf.server/mapred-site.xml
> 2014-10-03 11:52:10,223 - File['/etc/hive/conf.server/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
> 2014-10-03 11:52:10,225 - Writing File['/etc/hive/conf.server/mapred-site.xml'] because contents don't match
> 2014-10-03 11:52:10,225 - File['/etc/hive/conf.server/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop'}
> 2014-10-03 11:52:10,226 - File['/etc/hive/conf.server/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop'}
> 2014-10-03 11:52:10,226 - File['/etc/hive/conf.server/hive-exec-log4j.properties'] {'content': '...', 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
> 2014-10-03 11:52:10,227 - File['/etc/hive/conf.server/hive-log4j.properties'] {'content': '...', 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
> 2014-10-03 11:52:10,227 - Directory['/etc/hive/conf'] {'owner': 'hive', 'group': 'hadoop', 'recursive': True}
> 2014-10-03 11:52:10,228 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/etc/hive/conf', 'mode': 0644, 'configuration_attributes': ..., 'owner': 'hive', 'configurations': ...}
> 2014-10-03 11:52:10,238 - Generating config: /etc/hive/conf/mapred-site.xml
> 2014-10-03 11:52:10,238 - File['/etc/hive/conf/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
> 2014-10-03 11:52:10,239 - Writing File['/etc/hive/conf/mapred-site.xml'] because contents don't match
> 2014-10-03 11:52:10,240 - File['/etc/hive/conf/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop'}
> 2014-10-03 11:52:10,240 - File['/etc/hive/conf/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop'}
> 2014-10-03 11:52:10,241 - File['/etc/hive/conf/hive-exec-log4j.properties'] {'content': '...', 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
> 2014-10-03 11:52:10,241 - File['/etc/hive/conf/hive-log4j.properties'] {'content': '...', 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
> 2014-10-03 11:52:10,242 - Execute['hive mkdir -p /var/lib/ambari-agent/data/tmp/AMBARI-artifacts/ ; cp /usr/share/java/mysql-connector-java.jar /usr/lib/hive/lib//mysql-connector-java.jar'] {'environment': ..., 'path': ['/bin', '/usr/bin/'], 'creates': '/usr/lib/hive/lib//mysql-connector-java.jar', 'not_if': 'test -f /usr/lib/hive/lib//mysql-connector-java.jar'}
> 2014-10-03 11:52:10,252 - Skipping Execute['hive mkdir -p /var/lib/ambari-agent/data/tmp/AMBARI-artifacts/ ; cp /usr/share/java/mysql-connector-java.jar /usr/lib/hive/lib//mysql-connector-java.jar'] due to not_if
> 2014-10-03 11:52:10,253 - Execute['/bin/sh -c 'cd /usr/lib/ambari-agent/ && curl -kf -x "" --retry 5 http://c6401.ambari.apache.org:8080/resources/DBConnectionVerification.jar -o DBConnectionVerification.jar''] {'environment': ..., 'not_if': '[ -f DBConnectionVerification.jar]'}
> 2014-10-03 11:52:10,281 - File['/var/lib/ambari-agent/data/tmp/start_metastore_script'] {'content': StaticFile('startMetastore.sh'), 'mode': 0755}
> 2014-10-03 11:52:10,283 - Execute['export HIVE_CONF_DIR=/etc/hive/conf.server ; /usr/lib/hive/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED]'] {'not_if': 'export HIVE_CONF_DIR=/etc/hive/conf.server ; /usr/lib/hive/bin/schematool -info -dbType mysql -userName hive -passWord [PROTECTED]'}
> 2014-10-03 11:52:20,935 - Error while executing command 'start':
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 122, in execute
>     method(env)
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive_metastore.py", line 43, in start
>     self.configure(env)  # FOR SECURITY
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive_metastore.py", line 38, in configure
>     hive(name='metastore')
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive.py", line 83, in hive
>     not_if = check_schema_created_cmd
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
>     self.env.run()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 149, in run
>     self.run_action(resource, action)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 115, in run_action
>     provider_action()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 237, in action_run
>     raise ex
> Fail: Execution of 'export HIVE_CONF_DIR=/etc/hive/conf.server ; /usr/lib/hive/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED]' returned 1.
> Metastore connection URL:     jdbc:derby:;databaseName=metastore_db;create=true
> Metastore Connection Driver : org.apache.derby.jdbc.EmbeddedDriver
> Metastore connection User:    hive
> Starting metastore schema initialization to 0.13.0
> Initialization script hive-schema-0.13.0.mysql.sql
> Error: Syntax error: Encountered "<EOF>" at line 1, column 64. (state=42X01,code=30000)
> org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
> *** schemaTool failed ***
> 
> 
> Diffs
> -----
> 
>   ambari-server/src/main/resources/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive.py 67720e1 
>   ambari-server/src/test/python/stacks/2.0.6/HIVE/test_hive_client.py 796e820 
>   ambari-server/src/test/python/stacks/2.0.6/HIVE/test_hive_metastore.py 201b3f2 
>   ambari-server/src/test/python/stacks/2.0.6/HIVE/test_hive_server.py 4fe131b 
>   ambari-server/src/test/python/stacks/2.1/HIVE/test_hive_metastore.py 12e7439 
> 
> Diff: https://reviews.apache.org/r/26317/diff/
> 
> 
> Testing
> -------
> 
> manual tests
> 
> +
> 
> Ran 229 tests in 1.595s
> 
> OK
> ----------------------------------------------------------------------
> Total run: 638
> Total errors: 0
> Total failures: 0
> OK
> 
> 
> Thanks,
> 
> Dmytro Sen
> 
>
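
For context on the failure above: schematool is invoked with `-dbType mysql`, yet the resolved connection URL is the embedded Derby default (`jdbc:derby:;databaseName=metastore_db;create=true`), so the MySQL DDL in `hive-schema-0.13.0.mysql.sql` is fed to Derby and fails with a syntax error. The log also shows the guard pattern involved: `-initSchema` is run only when the `not_if` command (`schematool -info`) fails. A minimal, self-contained sketch of that guard semantics, with `execute` and the stand-in commands being illustrative rather than Ambari's actual `resource_management` implementation:

```python
import subprocess

def execute(command, not_if=None):
    """Mimic the Execute resource's guard: skip `command` when the
    `not_if` shell command exits 0 (i.e. the schema already exists)."""
    if not_if is not None and subprocess.call(not_if, shell=True) == 0:
        return "skipped"
    subprocess.check_call(command, shell=True)
    return "ran"

# Hypothetical stand-ins: in the real hive.py the guard is
# `schematool -info ...` and the action is `schematool -initSchema ...`.
check_schema_created_cmd = "true"     # guard exits 0: schema present
init_schema_cmd = "echo initSchema"   # would initialize the schema

result = execute(init_schema_cmd, not_if=check_schema_created_cmd)
# → "skipped": the guard succeeded, so -initSchema is not run
```

Under this pattern, a guard that exits non-zero for the wrong reason (e.g. schematool reading the wrong `HIVE_CONF_DIR`) still triggers `-initSchema`, which is why the misresolved configuration surfaces as a schema-initialization failure rather than being skipped.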
