Check your hive-site.xml.

What is this property set to?

  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://rhes75:9083</value>
    <description>Thrift URI for the remote metastore. Used by metastore
client to connect to remote metastore.</description>
  </property>
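To confirm what the running service actually sees (rather than what is on disk), you can query the effective value from a client session. A sketch, assuming a HiveServer2 reachable at the default port; adjust the JDBC connection string to your environment:

```shell
# Print the effective metastore URI as seen by the running Hive service
beeline -u jdbc:hive2://localhost:10000 -e "SET hive.metastore.uris;"
```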

If you find any S3-related configuration (such as fs.defaultFS pointing to
an S3 bucket) in hive-site.xml, that is a likely cause of the problem.
Remove or correct those settings. S3 configuration should be handled
entirely within your Spark application's configuration, not in the
metastore's hive-site.xml.
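For example, the S3 settings can be supplied on the Spark side at submit time rather than in hive-site.xml. A minimal sketch, assuming the s3a connector and environment-variable credentials (the endpoint, credentials, and application name are placeholders; the metastore URI matches the one shown above):

```shell
# Keep S3 access confined to the Spark application; HMS only serves metadata
spark-submit \
  --conf spark.hadoop.fs.s3a.endpoint=s3.amazonaws.com \
  --conf spark.hadoop.fs.s3a.access.key="$AWS_ACCESS_KEY_ID" \
  --conf spark.hadoop.fs.s3a.secret.key="$AWS_SECRET_ACCESS_KEY" \
  --conf spark.sql.catalogImplementation=hive \
  --conf spark.hadoop.hive.metastore.uris=thrift://rhes75:9083 \
  your_app.py
```

With this layout, the sidecar HMS container needs no S3 credentials or filesystem settings at all.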

HTH


Dr Mich Talebzadeh,
Architect | Data Science | Financial Crime | Forensic Analysis | GDPR

   view my Linkedin profile
<https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>





On Thu, 30 Jan 2025 at 13:15, Márkus Andor Rudolf <markus.an...@gmail.com>
wrote:

> Hi Hive team,
>
> I'm encountering an issue where Hive Metastore (HMS), running as a Spark
> sidecar container, is attempting to read/write to S3 even though it's only
> being used as a metastore. Since HMS is solely functioning as a metadata
> service in this setup, these S3 operations seem unnecessary.
>
> Can someone help explain:
>
>    1. How to completely disable S3 access for HMS?
>    2. Are there specific configuration parameters I should modify?
>    3. Is there any potential impact on functionality if S3 access is
>    disabled?
>
>
> Current setup:
>
>    - HMS running as a Spark sidecar container
>    - Only using HMS for metadata storage
>    - No direct data storage requirements for HMS
>
>
> Thanks in advance for your help.
>
> Best regards,
> Andor Markus
>
