<cough>JIRAS</cough>


On January 18, 2018 at 12:14:11, Casey Stella (ceste...@gmail.com) wrote:

So, the challenge here is that our install script isn't smart enough right
now to skip creating tables that already exist. One thing you could do is:

1. rename the hbase tables for metron (see
https://stackoverflow.com/questions/27966072/how-do-you-rename-a-table-in-hbase)
2. let the install create them anew
3. stop metron
4. delete the new empty hbase tables
5. swap in the old tables
6. start metron
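
For step 1, note that HBase has no single "rename" command; the route the
StackOverflow link above describes is snapshot + clone + drop. A rough sketch
for the metron_update table, run inside `hbase shell` (the backup table name
is just an example; repeat for the other Metron tables):

```
disable 'metron_update'
snapshot 'metron_update', 'metron_update_snap'
clone_snapshot 'metron_update_snap', 'metron_update_backup'
delete_snapshot 'metron_update_snap'
drop 'metron_update'
```

Swapping the old table back in (step 5) is the same snapshot/clone dance in
the other direction.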

What we probably should do is not barf if the tables exist, but rather
warn.
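
A minimal sketch of that warn-instead-of-fail idea. The table names and the
command format mirror the log below, but the helper itself
(`tables_to_create`) is hypothetical, not Metron's actual code:

```python
def tables_to_create(desired, existing):
    """Return hbase shell create commands only for tables that don't
    exist yet; warn (instead of failing) about the ones that do.

    desired:  list of (table, column_family) pairs
    existing: set of table names already present in HBase
    """
    commands = []
    for table, column_family in desired:
        if table in existing:
            # This is where the current script barfs; warn instead.
            print("WARNING: table already exists, skipping: %s" % table)
            continue
        commands.append(
            "echo \"create '%s','%s'\" | hbase shell -n" % (table, column_family)
        )
    return commands

# Example: metron_update already exists, so only the missing table
# gets a create command.
cmds = tables_to_create(
    desired=[("metron_update", "t"), ("enrichment", "t")],
    existing={"metron_update"},
)
```

The real fix would live in `create_hbase_tables` in indexing_commands.py,
querying HBase for the existing tables rather than taking them as an argument.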

On Thu, Jan 18, 2018 at 12:02 PM, Laurens Vets <laur...@daemon.be> wrote:

> After upgrading from 0.4.1 to 0.4.2, I can't seem to start or restart
> Metron Indexing. I get the following errors:
>
> stderr: /var/lib/ambari-agent/data/errors-2468.txt
>
> Traceback (most recent call last):
>   File "/var/lib/ambari-agent/cache/common-services/METRON/0.4.2/package/scripts/indexing_master.py", line 160, in <module>
>     Indexing().execute()
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
>     method(env)
>   File "/var/lib/ambari-agent/cache/common-services/METRON/0.4.2/package/scripts/indexing_master.py", line 82, in start
>     self.configure(env)
>   File "/var/lib/ambari-agent/cache/common-services/METRON/0.4.2/package/scripts/indexing_master.py", line 72, in configure
>     commands.create_hbase_tables()
>   File "/var/lib/ambari-agent/cache/common-services/METRON/0.4.2/package/scripts/indexing_commands.py", line 126, in create_hbase_tables
>     user=self.__params.hbase_user
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
>     self.env.run()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
>     self.run_action(resource, action)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
>     provider_action()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 273, in action_run
>     tries=self.resource.tries, try_sleep=self.resource.try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
>     result = function(command, **kwargs)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
>     tries=tries, try_sleep=try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
>     result = _call(command, **kwargs_copy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 293, in _call
>     raise ExecutionFailed(err_msg, code, out, err)
> resource_management.core.exceptions.ExecutionFailed: Execution of 'echo
> "create 'metron_update','t'" | hbase shell -n' returned 1. ERROR
> RuntimeError: Table already exists: metron_update!
>
> stdout: /var/lib/ambari-agent/data/output-2468.txt
>
> 2018-01-18 16:54:30,101 - Using hadoop conf dir:
> /usr/hdp/current/hadoop-client/conf
> 2018-01-18 16:54:30,301 - Using hadoop conf dir:
> /usr/hdp/current/hadoop-client/conf
> 2018-01-18 16:54:30,302 - Group['metron'] {}
> 2018-01-18 16:54:30,303 - Group['livy'] {}
> 2018-01-18 16:54:30,303 - Group['elasticsearch'] {}
> 2018-01-18 16:54:30,303 - Group['spark'] {}
> 2018-01-18 16:54:30,303 - Group['zeppelin'] {}
> 2018-01-18 16:54:30,304 - Group['hadoop'] {}
> 2018-01-18 16:54:30,304 - Group['kibana'] {}
> 2018-01-18 16:54:30,304 - Group['users'] {}
> 2018-01-18 16:54:30,304 - User['hive'] {'gid': 'hadoop',
> 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2018-01-18 16:54:30,305 - User['storm'] {'gid': 'hadoop',
> 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2018-01-18 16:54:30,306 - User['zookeeper'] {'gid': 'hadoop',
> 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2018-01-18 16:54:30,306 - User['infra-solr'] {'gid': 'hadoop',
> 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2018-01-18 16:54:30,307 - User['ams'] {'gid': 'hadoop',
> 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2018-01-18 16:54:30,307 - User['tez'] {'gid': 'hadoop',
> 'fetch_nonlocal_groups': True, 'groups': ['users']}
> 2018-01-18 16:54:30,308 - User['zeppelin'] {'gid': 'hadoop',
> 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2018-01-18 16:54:30,309 - User['metron'] {'gid': 'hadoop',
> 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2018-01-18 16:54:30,309 - User['livy'] {'gid': 'hadoop',
> 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2018-01-18 16:54:30,310 - User['elasticsearch'] {'gid': 'hadoop',
> 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2018-01-18 16:54:30,310 - User['spark'] {'gid': 'hadoop',
> 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2018-01-18 16:54:30,311 - User['ambari-qa'] {'gid': 'hadoop',
> 'fetch_nonlocal_groups': True, 'groups': ['users']}
> 2018-01-18 16:54:30,311 - User['flume'] {'gid': 'hadoop',
> 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2018-01-18 16:54:30,312 - User['kafka'] {'gid': 'hadoop',
> 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2018-01-18 16:54:30,312 - User['hdfs'] {'gid': 'hadoop',
> 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2018-01-18 16:54:30,313 - User['yarn'] {'gid': 'hadoop',
> 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2018-01-18 16:54:30,314 - User['kibana'] {'gid': 'hadoop',
> 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2018-01-18 16:54:30,314 - User['mapred'] {'gid': 'hadoop',
> 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2018-01-18 16:54:30,315 - User['hbase'] {'gid': 'hadoop',
> 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2018-01-18 16:54:30,315 - User['hcat'] {'gid': 'hadoop',
> 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2018-01-18 16:54:30,316 - File['/var/lib/ambari-agent/tmp/changeUid.sh']
> {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
> 2018-01-18 16:54:30,317 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh
> ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa']
> {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
> 2018-01-18 16:54:30,323 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh
> ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa']
> due to not_if
> 2018-01-18 16:54:30,324 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase',
> 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
> 2018-01-18 16:54:30,325 - File['/var/lib/ambari-agent/tmp/changeUid.sh']
> {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
> 2018-01-18 16:54:30,326 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh
> hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase']
> {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
> 2018-01-18 16:54:30,331 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh
> hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase']
> due to not_if
> 2018-01-18 16:54:30,332 - Group['hdfs'] {}
> 2018-01-18 16:54:30,332 - User['hdfs'] {'fetch_nonlocal_groups': True,
> 'groups': ['hadoop', 'hdfs']}
> 2018-01-18 16:54:30,333 - FS Type:
> 2018-01-18 16:54:30,333 - Directory['/etc/hadoop'] {'mode': 0755}
> 2018-01-18 16:54:30,346 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh']
> {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
> 2018-01-18 16:54:30,347 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir']
> {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
> 2018-01-18 16:54:30,365 - Execute[('setenforce', '0')] {'not_if': '(!
> which getenforce ) || (which getenforce && getenforce | grep -q Disabled)',
> 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
> 2018-01-18 16:54:30,384 - Directory['/var/log/hadoop'] {'owner': 'root',
> 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
> 2018-01-18 16:54:30,386 - Directory['/var/run/hadoop'] {'owner': 'root',
> 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
> 2018-01-18 16:54:30,386 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs',
> 'create_parents': True, 'cd_access': 'a'}
> 2018-01-18 16:54:30,390 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties']
> {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
> 2018-01-18 16:54:30,392 - File['/usr/hdp/current/hadoop-client/conf/health_check']
> {'content': Template('health_check.j2'), 'owner': 'hdfs'}
> 2018-01-18 16:54:30,393 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties']
> {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
> 2018-01-18 16:54:30,403 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties']
> {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
> 2018-01-18 16:54:30,404 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties']
> {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
> 2018-01-18 16:54:30,405 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl']
> {'owner': 'hdfs', 'group': 'hadoop'}
> 2018-01-18 16:54:30,409 - File['/etc/hadoop/conf/topology_mappings.data']
> {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'),
> 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
> 2018-01-18 16:54:30,414 - File['/etc/hadoop/conf/topology_script.py']
> {'content': StaticFile('topology_script.py'), 'only_if': 'test -d
> /etc/hadoop/conf', 'mode': 0755}
> 2018-01-18 16:54:30,667 - Using hadoop conf dir:
> /usr/hdp/current/hadoop-client/conf
> 2018-01-18 16:54:30,669 - Running indexing configure
> 2018-01-18 16:54:30,676 - File['/usr/metron/0.4.2/config/elasticsearch.properties']
> {'owner': 'metron', 'content': Template('elasticsearch.properties.j2'),
> 'group': 'metron'}
> 2018-01-18 16:54:30,678 - Patch global config in Zookeeper
> 2018-01-18 16:54:30,678 - Setup temporary global config JSON patch
> (formatting per RFC6902): /tmp/metron-global-config-patch.json
> 2018-01-18 16:54:30,681 - File['/tmp/metron-global-config-patch.json']
> {'owner': 'metron', 'content': InlineTemplate(...), 'group': 'metron'}
> 2018-01-18 16:54:30,681 - Patching global config in ZooKeeper
> 2018-01-18 16:54:30,681 - Execute['/usr/metron/0.4.2/bin/zk_load_configs.sh
> --zk_quorum metron1:2181,metron2:2181 --mode PATCH --config_type GLOBAL
> --patch_file /tmp/metron-global-config-patch.json'] {'path':
> ['/usr/jdk64/jdk1.8.0_77/bin']}
> 2018-01-18 16:55:19,874 - Done patching global config
> 2018-01-18 16:55:19,874 - Pull zookeeper config locally
> 2018-01-18 16:55:19,874 - Pulling all Metron configs down from ZooKeeper
> to local file system
> 2018-01-18 16:55:19,874 - NOTE - THIS IS OVERWRITING THE LOCAL METRON
> CONFIG DIR WITH ZOOKEEPER CONTENTS: /usr/metron/0.4.2/config/zookeeper
> 2018-01-18 16:55:19,875 - Execute['/usr/metron/0.4.2/bin/zk_load_configs.sh
> --zk_quorum metron1:2181,metron2:2181 --mode PULL --output_dir
> /usr/metron/0.4.2/config/zookeeper --force'] {'path':
> ['/usr/jdk64/jdk1.8.0_77/bin']}
> 2018-01-18 16:56:08,286 - Creating HBase Tables for indexing
> 2018-01-18 16:56:08,287 - Execute['echo "create 'metron_update','t'" |
> hbase shell -n'] {'logoutput': False, 'path':
> ['/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin'],
> 'tries': 3, 'user': 'hbase', 'try_sleep': 5}
> 2018-01-18 16:56:59,943 - Retrying after 5 seconds. Reason: Execution of
> 'echo "create 'metron_update','t'" | hbase shell -n' returned 1. ERROR
> RuntimeError: Table already exists: metron_update!
> 2018-01-18 16:57:57,884 - Retrying after 5 seconds. Reason: Execution of
> 'echo "create 'metron_update','t'" | hbase shell -n' returned 1. ERROR
> RuntimeError: Table already exists: metron_update!
>
> Command failed after 1 tries
>
