Basapuram Kumar created AMBARI-26305:
----------------------------------------
Summary: Hive Metastore Initialization Fails Due to "Error occurred during initialization of VM"
Key: AMBARI-26305
URL: https://issues.apache.org/jira/browse/AMBARI-26305
Project: Ambari
Issue Type: Improvement
Components: ambari-server
Affects Versions: 2.7.8
Reporter: Basapuram Kumar
Attachments: image-2025-02-04-19-28-39-283.png,
image-2025-02-04-19-29-15-338.png
Hive Metastore initialization fails with the following error:
{noformat}
resource_management.core.exceptions.ExecutionFailed: Execution of 'export
HIVE_CONF_DIR=/usr/odp/current/hive-metastore/conf/ ;
/usr/odp/current/hive-server2/bin/schematool -initSchema -dbType mysql
-userName hive -passWord [PROTECTED] -verbose' returned 1. Error occurred
during initialization of VM Initial heap size set to a larger value than the
maximum heap size stdout:{noformat}
Complete stack trace:
{noformat}
the following exception:Traceback (most recent call last):
File
"/var/lib/ambari-agent/cache/stacks/ODP/3.3/services/HIVE/package/scripts/hive_metastore.py",
line 201, in
HiveMetastore().execute()
File
"/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py",
line 351, in execute
method(env)
File
"/var/lib/ambari-agent/cache/stacks/ODP/3.3/services/HIVE/package/scripts/hive_metastore.py",
line 61, in start
create_metastore_schema() # execute without config lock
^^^^^^^^^^^^^^^^^^^^^^^^^
File
"/var/lib/ambari-agent/cache/stacks/ODP/3.3/services/HIVE/package/scripts/hive.py",
line 448, in create_metastore_schema
Execute(create_schema_cmd,
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 164,
in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py",
line 163, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py",
line 127, in run_action
provider_action()
File
"/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line
253, in action_run
shell.checked_call(self.resource.command, logoutput=self.resource.logoutput,
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 73,
in inner
result = function(command, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 100,
in checked_call
return _call_wrapper(command, logoutput=logoutput, throw_on_failure=True,
stdout=stdout, stderr=stderr,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 151,
in _call_wrapper
result = _call(command, **kwargs_copy)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 315,
in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'export
HIVE_CONF_DIR=/usr/odp/current/hive-metastore/conf/ ;
/usr/odp/current/hive-server2/bin/schematool -initSchema -dbType mysql
-userName hive -passWord [PROTECTED] -verbose' returned 1. Error occurred
during initialization of VM
Initial heap size set to a larger value than the maximum heap size
stdout:{noformat}
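The error itself is the standard JVM message raised when the initial heap (-Xms) ends up larger than the maximum heap (-Xmx). A minimal sketch to reproduce the same message outside of Hive (the 3g/1g values are illustrative, not taken from the cluster):
{noformat}
# Any -Xms larger than -Xmx makes the JVM abort before starting:
java -Xms3g -Xmx1g -version
# Error occurred during initialization of VM
# Initial heap size set to a larger value than the maximum heap size{noformat}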
When checked on the other Hadoop services, they start with both the minimum and
maximum heap sizes set (-Xms and -Xmx); a quick way to verify this is sketched
after the examples below.
For example:
For NameNode
{noformat}
/usr/lib/jvm/java-11-openjdk/bin/java
..
..
-Xms2048m -Xmx2048m
..
..
org.apache.hadoop.hdfs.server.namenode.NameNode{noformat}
For ResourceManager
{noformat}
/usr/lib/jvm/java-11-openjdk/bin/java
..
..
-Xmx1024m -Xms1024m
..
..
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager{noformat}
For NodeManager
{noformat}
/usr/lib/jvm/java-11-openjdk/bin/java -Dproc_nodemanager
..
..
-Xmx1024m -Xms1024m
..
..
org.apache.hadoop.yarn.server.nodemanager.NodeManager{noformat}
For HBase RegionServer
{noformat}
/usr/lib/jvm/java-11-openjdk/bin/java
..
..
-Xms3276m -Xmx3276m
..
..
org.apache.hadoop.hbase.regionserver.HRegionServer start{noformat}
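A quick way to read these flags off a running service (my own sketch, not part of the report; the class name below is the NameNode one from the example above, adjust it per service):
{noformat}
# Print the heap flags of a running service, here the NameNode
ps -ef | grep org.apache.hadoop.hdfs.server.namenode.NameNode \
  | tr ' ' '\n' | grep '^-Xm'{noformat}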
So I suggest setting the heap in the same way for the Hive services as well.
Currently only the Xmx value is set, as seen
[here|https://github.com/apache/ambari/blob/ebf95d0d4b4634f8e2b4592857b291d5dd81c6ce/ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HIVE/configuration/hive-env.xml#L391C1-L392C1]:
{noformat}
export HADOOP_CLIENT_OPTS="$HADOOP_CLIENT_OPTS
-Xmx${HADOOP_HEAPSIZE}m"{noformat}
Altering it to:
{noformat}
export HADOOP_CLIENT_OPTS="$HADOOP_CLIENT_OPTS -Xms${HADOOP_HEAPSIZE}m
-Xmx${HADOOP_HEAPSIZE}m"{noformat}
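The fix can also be exercised by hand before touching the template, by re-running the schematool command from the stack trace with both flags exported; a rough sketch, assuming a 1024 MB heap (adjust to the heap size configured in Ambari) and with the hive password to be filled in:
{noformat}
export HIVE_CONF_DIR=/usr/odp/current/hive-metastore/conf/
# both flags set, mirroring the proposed hive-env change (1024m is an assumption)
export HADOOP_CLIENT_OPTS="$HADOOP_CLIENT_OPTS -Xms1024m -Xmx1024m"
/usr/odp/current/hive-server2/bin/schematool -initSchema -dbType mysql \
  -userName hive -passWord '<hive-password>' -verbose{noformat}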
With the above suggested change, the service starts and runs fine.
!image-2025-02-04-19-28-39-283.png!
The heap values are set from Ambari as shown below.
!image-2025-02-04-19-29-15-338.png!