GreenBinary opened a new issue, #2398:
URL: https://github.com/apache/polaris/issues/2398

   ### Describe the bug
   
   I'm developing a data streaming pipeline built on Kafka (v3.9.0), Flink (v1.20.1), and Iceberg (v1.8.1), with MinIO as object storage, all deployed on-prem on a Kubernetes cluster.
   
   For the last several weeks, I've been working to set up an Iceberg REST catalog using Apache Polaris 1.0.1-incubating.
   
   First I deploy all my containers (including Polaris), and they run fine. Then I add a new catalog via the Polaris management REST API using curl. Next, in Flink SQL, I successfully create a catalog (essentially an alias for the one created with curl) and a database. Finally, I try to create an Iceberg table, which fails with an error.
   
   I took care to set the required properties so that **S3 path-style access** is **true** for my on-prem MinIO storage. I've attached the Kubernetes manifests for MinIO and Polaris below (you'll see I've been experimenting with a lot of environment variables!).
   
   The curl command that creates the catalog and the Flink SQL commands are in the "To Reproduce" section, and the error details/logs are in the "Actual Behavior" section.
   
   I have tried many things over the past few weeks with no luck at all. Please provide your feedback and let me know how to get this resolved. Thanks much.
   
   _By the way, before trying Polaris, I was using a simple HadoopCatalog and everything worked just fine._
   
   
[polaris-catalog.yaml](https://github.com/user-attachments/files/21861873/polaris-catalog.yaml)
   
[minio-data-lake.yaml](https://github.com/user-attachments/files/21861872/minio-data-lake.yaml)
   
   
   ### To Reproduce
   
   Add a Polaris catalog via the management REST API with the curl command below:
   
   ```shell
   curl -i -X POST -H "Authorization: Bearer $Access_Token" \
     "http://polaris-service.my-ns.svc.cluster.local:8181/api/management/v1/catalogs" \
     -H "Content-Type: application/json" \
     -d '{
           "catalog": {
             "name": "my_iceberg_polaris_catalog",
             "type": "INTERNAL",
             "properties": {
               "default-base-location": "s3://iceberg-bucket",
               "s3.endpoint": "http://minio-data-lake-service.my-ns.svc.cluster.local:9000",
               "s3.path-style-access": "true",
               "s3.access-key-id": "polaris-service-user",
               "s3.secret-access-key": "Passw0rd1",
               "s3.region": "auto"
             },
             "storageConfigInfo": {
               "roleArn": "arn:aws:iam::000000000000:role/minio-polaris-role",
               "region": "auto",
               "storageType": "S3",
               "endpoint": "http://minio-data-lake-service.my-ns.svc.cluster.local:9000",
               "pathStyleAccess": "true",
               "allowedLocations": [
                 "s3://iceberg-bucket/*"
               ]
             }
           }
         }'
   ```
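   
   As an optional sanity check (not strictly part of the repro), the catalog can be read back afterwards to confirm the storage settings were persisted as submitted:
   
   ```shell
   # Optional: read the catalog back and check that storageConfigInfo
   # (endpoint / pathStyleAccess) matches what was submitted above.
   curl -s -H "Authorization: Bearer $Access_Token" \
     "http://polaris-service.my-ns.svc.cluster.local:8181/api/management/v1/catalogs/my_iceberg_polaris_catalog"
   ```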
   
   And below are the Flink SQL commands. All of them execute successfully except the last one (the CREATE TABLE).
   
   ```sql
   CREATE CATALOG IF NOT EXISTS my_iceberg_polaris_catalog WITH (
     'type'='iceberg',
     'catalog-type'='rest',
     'uri'='http://polaris-service.my-ns.svc.cluster.local:8181/api/catalog',
     'credential'='root:s3cr3t',
     'rest.credential-vending.enabled'='true',
     'warehouse'='my_iceberg_polaris_catalog',
     's3.region'='auto',
     's3.endpoint'='http://minio-data-lake-service.my-ns.svc.cluster.local:9000',
     's3.path-style-access'='true',
     's3.access-key-id'='polaris-service-user',
     's3.secret-access-key'='Passw0rd1',
     'token-refresh-enabled'='true',
     'scope'='PRINCIPAL_ROLE:ALL'
   );
   
   USE CATALOG my_iceberg_polaris_catalog;
   
   CREATE DATABASE IF NOT EXISTS my_iceberg_db;
   
   USE my_iceberg_db;
   
   CREATE TABLE IF NOT EXISTS i_t_obk_cdc_topic_ciprod_CMMSDATA_MM_MAINTENANCE_ACTION_LABOR_tagged
   (
     <my columns>
     ...
   )
   PARTITIONED BY (`col1`)
   WITH
   (
     'format-version'='2',
     'write.format.default'='parquet',
     'write.upsert.enabled'='true',
     'write.merge.mode'='merge-on-read',
     'write.equality-delete-field'='<cols>',
     'write.distribution-mode'='hash',
     'write.iceberg.hash-distribution.column'='<cols>',
     'write.target-file-size-bytes'='1048576'
   );
   ```
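   
   In case it helps narrow things down, the MinIO endpoint, credentials, and bucket can be sanity-checked from any pod inside the cluster with the MinIO client. This is only a side check: `mc` addresses MinIO path-style, so it does not exercise the virtual-hosted-style addressing that fails below.
   
   ```shell
   # Optional in-cluster sanity check with the MinIO client (mc).
   # Validates endpoint, credentials, and bucket existence only.
   mc alias set lake http://minio-data-lake-service.my-ns.svc.cluster.local:9000 \
     polaris-service-user Passw0rd1
   mc ls lake/iceberg-bucket
   ```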
   
   ### Actual Behavior
   
   Error from the Flink SQL terminal:
   
   ```
   org.apache.flink.table.api.TableException: Could not execute CreateTable in 
   path `my_iceberg_polaris_catalog`.`my_iceberg_db`.`iceberg_table_1`
           at 
org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1375) 
~[flink-table-api-java-uber-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.catalog.CatalogManager.createTable(CatalogManager.java:1005)
 ~[flink-table-api-java-uber-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.operations.ddl.CreateTableOperation.execute(CreateTableOperation.java:86)
 ~[flink-table-api-java-uber-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1102)
 ~[flink-table-api-java-uber-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationExecutor.callOperation(OperationExecutor.java:687)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationExecutor.executeOperation(OperationExecutor.java:522)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:243)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:199)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.lambda$executeStatement$1(SqlGatewayServiceImpl.java:214)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationManager.lambda$submitOperation$1(OperationManager.java:119)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:258)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at java.util.concurrent.Executors$RunnableAdapter.call(Unknown 
Source) [?:?]
           at java.util.concurrent.FutureTask.run(Unknown Source) [?:?]
           at java.util.concurrent.Executors$RunnableAdapter.call(Unknown 
Source) [?:?]
           at java.util.concurrent.FutureTask.run(Unknown Source) [?:?]
           at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
[?:?]
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown 
Source) [?:?]
           at java.lang.Thread.run(Unknown Source) [?:?]
   Caused by: org.apache.iceberg.exceptions.ServiceFailureException: Server 
error: SdkClientException: Received an UnknownHostException when attempting to 
interact with a service. See cause for the exact endpoint that is failing to 
resolve. If this is happening on an endpoint that previously worked, there may 
be a network connectivity issue or your DNS cache could be storing endpoints 
for too long.
           at 
org.apache.iceberg.rest.ErrorHandlers$DefaultErrorHandler.accept(ErrorHandlers.java:217)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.rest.ErrorHandlers$TableErrorHandler.accept(ErrorHandlers.java:118)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.rest.ErrorHandlers$TableErrorHandler.accept(ErrorHandlers.java:102)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.rest.HTTPClient.throwFailure(HTTPClient.java:224) 
~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at org.apache.iceberg.rest.HTTPClient.execute(HTTPClient.java:308) 
~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.rest.BaseHTTPClient.post(BaseHTTPClient.java:88) 
~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at org.apache.iceberg.rest.RESTClient.post(RESTClient.java:113) 
~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.rest.RESTSessionCatalog$Builder.create(RESTSessionCatalog.java:848)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.CachingCatalog$CachingTableBuilder.lambda$create$0(CachingCatalog.java:264)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$14(BoundedLocalCache.java:2406)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at java.util.concurrent.ConcurrentHashMap.compute(Unknown Source) 
~[?:?]
           at 
org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:2404)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:2387)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:108)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.LocalManualCache.get(LocalManualCache.java:62)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.CachingCatalog$CachingTableBuilder.create(CachingCatalog.java:260)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at org.apache.iceberg.catalog.Catalog.createTable(Catalog.java:75) 
~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.flink.FlinkCatalog.createIcebergTable(FlinkCatalog.java:415) 
~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.flink.FlinkCatalog.createTable(FlinkCatalog.java:395) 
~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.flink.table.catalog.CatalogManager.lambda$createTable$18(CatalogManager.java:1016)
 ~[flink-table-api-java-uber-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1369) 
~[flink-table-api-java-uber-1.20.1.jar:1.20.1]
           ... 17 more
   17:41:51.335 [flink-rest-server-netty-worker-thread-1] ERROR 
org.apache.flink.table.gateway.service.SqlGatewayServiceImpl - Failed to 
fetchResults.
   org.apache.flink.table.gateway.service.utils.SqlExecutionException: Failed 
to execute the operation d12e4ce3-6997-4e6c-99bb-2467b49802fd.
           at 
org.apache.flink.table.gateway.service.operation.OperationManager$Operation.processThrowable(OperationManager.java:414)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:267)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at java.util.concurrent.Executors$RunnableAdapter.call(Unknown 
Source) ~[?:?]
           at java.util.concurrent.FutureTask.run(Unknown Source) ~[?:?]
           at java.util.concurrent.Executors$RunnableAdapter.call(Unknown 
Source) ~[?:?]
           at java.util.concurrent.FutureTask.run(Unknown Source) ~[?:?]
           at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
~[?:?]
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown 
Source) ~[?:?]
           at java.lang.Thread.run(Unknown Source) [?:?]
   Caused by: org.apache.flink.table.api.TableException: Could not execute 
CreateTable in path 
`my_iceberg_polaris_catalog`.`my_iceberg_db`.`iceberg_table_1`
           at 
org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1375) 
~[flink-table-api-java-uber-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.catalog.CatalogManager.createTable(CatalogManager.java:1005)
 ~[flink-table-api-java-uber-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.operations.ddl.CreateTableOperation.execute(CreateTableOperation.java:86)
 ~[flink-table-api-java-uber-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1102)
 ~[flink-table-api-java-uber-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationExecutor.callOperation(OperationExecutor.java:687)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationExecutor.executeOperation(OperationExecutor.java:522)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:243)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:199)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.lambda$executeStatement$1(SqlGatewayServiceImpl.java:214)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationManager.lambda$submitOperation$1(OperationManager.java:119)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:258)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           ... 7 more
   Caused by: org.apache.iceberg.exceptions.ServiceFailureException: Server 
error: SdkClientException: Received an UnknownHostException when attempting to 
interact with a service. See cause for the exact endpoint that is failing to 
resolve. If this is happening on an endpoint that previously worked, there may 
be a network connectivity issue or your DNS cache could be storing endpoints 
for too long.
           at 
org.apache.iceberg.rest.ErrorHandlers$DefaultErrorHandler.accept(ErrorHandlers.java:217)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.rest.ErrorHandlers$TableErrorHandler.accept(ErrorHandlers.java:118)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.rest.ErrorHandlers$TableErrorHandler.accept(ErrorHandlers.java:102)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.rest.HTTPClient.throwFailure(HTTPClient.java:224) 
~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at org.apache.iceberg.rest.HTTPClient.execute(HTTPClient.java:308) 
~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.rest.BaseHTTPClient.post(BaseHTTPClient.java:88) 
~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at org.apache.iceberg.rest.RESTClient.post(RESTClient.java:113) 
~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.rest.RESTSessionCatalog$Builder.create(RESTSessionCatalog.java:848)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.CachingCatalog$CachingTableBuilder.lambda$create$0(CachingCatalog.java:264)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$14(BoundedLocalCache.java:2406)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at java.util.concurrent.ConcurrentHashMap.compute(Unknown Source) 
~[?:?]
           at 
org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:2404)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:2387)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:108)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.LocalManualCache.get(LocalManualCache.java:62)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.CachingCatalog$CachingTableBuilder.create(CachingCatalog.java:260)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at org.apache.iceberg.catalog.Catalog.createTable(Catalog.java:75) 
~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.flink.FlinkCatalog.createIcebergTable(FlinkCatalog.java:415) 
~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.flink.FlinkCatalog.createTable(FlinkCatalog.java:395) 
~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.flink.table.catalog.CatalogManager.lambda$createTable$18(CatalogManager.java:1016)
 ~[flink-table-api-java-uber-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1369) 
~[flink-table-api-java-uber-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.catalog.CatalogManager.createTable(CatalogManager.java:1005)
 ~[flink-table-api-java-uber-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.operations.ddl.CreateTableOperation.execute(CreateTableOperation.java:86)
 ~[flink-table-api-java-uber-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1102)
 ~[flink-table-api-java-uber-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationExecutor.callOperation(OperationExecutor.java:687)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationExecutor.executeOperation(OperationExecutor.java:522)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:243)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:199)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.lambda$executeStatement$1(SqlGatewayServiceImpl.java:214)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationManager.lambda$submitOperation$1(OperationManager.java:119)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:258)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           ... 7 more
   17:41:51.338 [flink-rest-server-netty-worker-thread-1] ERROR 
org.apache.flink.table.gateway.rest.handler.statement.FetchResultsHandler - 
Unhandled exception.
   org.apache.flink.table.gateway.api.utils.SqlGatewayException: 
org.apache.flink.table.gateway.api.utils.SqlGatewayException: Failed to 
fetchResults.
           at 
org.apache.flink.table.gateway.rest.handler.statement.FetchResultsHandler.handleRequest(FetchResultsHandler.java:91)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.rest.handler.AbstractSqlGatewayRestHandler.respondToRequest(AbstractSqlGatewayRestHandler.java:84)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.rest.handler.AbstractSqlGatewayRestHandler.respondToRequest(AbstractSqlGatewayRestHandler.java:52)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.runtime.rest.handler.AbstractHandler.respondAsLeader(AbstractHandler.java:196)
 ~[flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.lambda$channelRead0$0(LeaderRetrievalHandler.java:88)
 ~[flink-dist-1.20.1.jar:1.20.1]
           at java.util.Optional.ifPresent(Unknown Source) [?:?]
           at 
org.apache.flink.util.OptionalConsumer.ifPresent(OptionalConsumer.java:45) 
[flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.channelRead0(LeaderRetrievalHandler.java:85)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.channelRead0(LeaderRetrievalHandler.java:50)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.runtime.rest.handler.router.RouterHandler.routed(RouterHandler.java:115)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.runtime.rest.handler.router.RouterHandler.channelRead0(RouterHandler.java:94)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.runtime.rest.handler.router.RouterHandler.channelRead0(RouterHandler.java:55)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.runtime.rest.FileUploadHandler.channelRead0(FileUploadHandler.java:233)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.runtime.rest.FileUploadHandler.channelRead0(FileUploadHandler.java:70)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
 [flink-dist-1.20.1.jar:1.20.1]
           at 
org.apache.flink.shaded.netty4.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
 [flink-dist-1.20.1.jar:1.20.1]
           at java.lang.Thread.run(Unknown Source) [?:?]
   Caused by: org.apache.flink.table.gateway.api.utils.SqlGatewayException: 
Failed to fetchResults.
           at 
org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.fetchResults(SqlGatewayServiceImpl.java:231)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.rest.handler.statement.FetchResultsHandler.handleRequest(FetchResultsHandler.java:89)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           ... 48 more
   Caused by: 
org.apache.flink.table.gateway.service.utils.SqlExecutionException: Failed to 
execute the operation d12e4ce3-6997-4e6c-99bb-2467b49802fd.
           at 
org.apache.flink.table.gateway.service.operation.OperationManager$Operation.processThrowable(OperationManager.java:414)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:267)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at java.util.concurrent.Executors$RunnableAdapter.call(Unknown 
Source) ~[?:?]
           at java.util.concurrent.FutureTask.run(Unknown Source) ~[?:?]
           at java.util.concurrent.Executors$RunnableAdapter.call(Unknown 
Source) ~[?:?]
           at java.util.concurrent.FutureTask.run(Unknown Source) ~[?:?]
           at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
~[?:?]
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown 
Source) ~[?:?]
           ... 1 more
   Caused by: org.apache.flink.table.api.TableException: Could not execute 
CreateTable in path 
`my_iceberg_polaris_catalog`.`my_iceberg_db`.`iceberg_table_1`
           at 
org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1375) 
~[flink-table-api-java-uber-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.catalog.CatalogManager.createTable(CatalogManager.java:1005)
 ~[flink-table-api-java-uber-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.operations.ddl.CreateTableOperation.execute(CreateTableOperation.java:86)
 ~[flink-table-api-java-uber-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1102)
 ~[flink-table-api-java-uber-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationExecutor.callOperation(OperationExecutor.java:687)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationExecutor.executeOperation(OperationExecutor.java:522)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:243)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:199)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.lambda$executeStatement$1(SqlGatewayServiceImpl.java:214)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationManager.lambda$submitOperation$1(OperationManager.java:119)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:258)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at java.util.concurrent.Executors$RunnableAdapter.call(Unknown 
Source) ~[?:?]
           at java.util.concurrent.FutureTask.run(Unknown Source) ~[?:?]
           at java.util.concurrent.Executors$RunnableAdapter.call(Unknown 
Source) ~[?:?]
           at java.util.concurrent.FutureTask.run(Unknown Source) ~[?:?]
           at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
~[?:?]
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown 
Source) ~[?:?]
           ... 1 more
   Caused by: org.apache.iceberg.exceptions.ServiceFailureException: Server 
error: SdkClientException: Received an UnknownHostException when attempting to 
interact with a service. See cause for the exact endpoint that is failing to 
resolve. If this is happening on an endpoint that previously worked, there may 
be a network connectivity issue or your DNS cache could be storing endpoints 
for too long.
           at 
org.apache.iceberg.rest.ErrorHandlers$DefaultErrorHandler.accept(ErrorHandlers.java:217)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.rest.ErrorHandlers$TableErrorHandler.accept(ErrorHandlers.java:118)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.rest.ErrorHandlers$TableErrorHandler.accept(ErrorHandlers.java:102)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.rest.HTTPClient.throwFailure(HTTPClient.java:224) 
~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at org.apache.iceberg.rest.HTTPClient.execute(HTTPClient.java:308) 
~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.rest.BaseHTTPClient.post(BaseHTTPClient.java:88) 
~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at org.apache.iceberg.rest.RESTClient.post(RESTClient.java:113) 
~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.rest.RESTSessionCatalog$Builder.create(RESTSessionCatalog.java:848)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.CachingCatalog$CachingTableBuilder.lambda$create$0(CachingCatalog.java:264)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$14(BoundedLocalCache.java:2406)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at java.util.concurrent.ConcurrentHashMap.compute(Unknown Source) 
~[?:?]
           at 
org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:2404)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:2387)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:108)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.LocalManualCache.get(LocalManualCache.java:62)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.CachingCatalog$CachingTableBuilder.create(CachingCatalog.java:260)
 ~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at org.apache.iceberg.catalog.Catalog.createTable(Catalog.java:75) 
~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.flink.FlinkCatalog.createIcebergTable(FlinkCatalog.java:415) 
~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.iceberg.flink.FlinkCatalog.createTable(FlinkCatalog.java:395) 
~[iceberg-flink-runtime-1.20-1.8.1.jar:?]
           at 
org.apache.flink.table.catalog.CatalogManager.lambda$createTable$18(CatalogManager.java:1016)
 ~[flink-table-api-java-uber-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1369) 
~[flink-table-api-java-uber-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.catalog.CatalogManager.createTable(CatalogManager.java:1005)
 ~[flink-table-api-java-uber-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.operations.ddl.CreateTableOperation.execute(CreateTableOperation.java:86)
 ~[flink-table-api-java-uber-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1102)
 ~[flink-table-api-java-uber-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationExecutor.callOperation(OperationExecutor.java:687)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationExecutor.executeOperation(OperationExecutor.java:522)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:243)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:199)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.lambda$executeStatement$1(SqlGatewayServiceImpl.java:214)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationManager.lambda$submitOperation$1(OperationManager.java:119)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at 
org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:258)
 ~[flink-sql-gateway-1.20.1.jar:1.20.1]
           at java.util.concurrent.Executors$RunnableAdapter.call(Unknown 
Source) ~[?:?]
           at java.util.concurrent.FutureTask.run(Unknown Source) ~[?:?]
           at java.util.concurrent.Executors$RunnableAdapter.call(Unknown 
Source) ~[?:?]
           at java.util.concurrent.FutureTask.run(Unknown Source) ~[?:?]
           at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
~[?:?]
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown 
Source) ~[?:?]
           ... 1 more
   
   [ERROR] Could not execute SQL statement. Reason:
   org.apache.iceberg.exceptions.ServiceFailureException: Server error: 
SdkClientException: Received an UnknownHostException when attempting to 
interact with a service. See cause for the exact endpoint that is failing to 
resolve. If this is happening on an endpoint that previously worked, there may 
be a network connectivity issue or your DNS cache could be storing endpoints 
for too long.
   
   Flink SQL>
   ```
   
   
   ===============
   
   And below is the error log from the Polaris pod:
   
   ```
   2025-08-19 17:41:51,220 INFO  [org.apa.pol.ser.exc.IcebergExceptionMapper] 
[,POLARIS] [,,,] (executor-thread-1) Full RuntimeException: 
software.amazon.awssdk.core.exception.SdkClientException: Received an 
UnknownHostException when attempting to interact with a service. See cause for 
the exact endpoint that is failing to resolve. If this is happening on an 
endpoint that previously worked, there may be a network connectivity issue or 
your DNS cache could be storing endpoints for too long.
           at 
software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:130)
           at 
software.amazon.awssdk.awscore.interceptor.HelpfulUnknownHostExceptionInterceptor.modifyException(HelpfulUnknownHostExceptionInterceptor.java:59)
           at 
software.amazon.awssdk.core.interceptor.ExecutionInterceptorChain.modifyException(ExecutionInterceptorChain.java:181)
           at 
software.amazon.awssdk.core.internal.http.pipeline.stages.utils.ExceptionReportingUtils.runModifyException(ExceptionReportingUtils.java:54)
           at 
software.amazon.awssdk.core.internal.http.pipeline.stages.utils.ExceptionReportingUtils.reportFailureToInterceptors(ExceptionReportingUtils.java:38)
           at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:39)
           at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:26)
           at 
software.amazon.awssdk.core.internal.http.AmazonSyncHttpClient$RequestExecutionBuilderImpl.execute(AmazonSyncHttpClient.java:210)
           at 
software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.invoke(BaseSyncClientHandler.java:103)
           at 
software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.doExecute(BaseSyncClientHandler.java:173)
           at 
software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.lambda$execute$1(BaseSyncClientHandler.java:80)
           at 
software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.measureApiCallSuccess(BaseSyncClientHandler.java:182)
           at 
software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.execute(BaseSyncClientHandler.java:74)
           at 
software.amazon.awssdk.core.client.handler.SdkSyncClientHandler.execute(SdkSyncClientHandler.java:45)
           at 
software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.execute(AwsSyncClientHandler.java:53)
           at 
software.amazon.awssdk.services.s3.DefaultS3Client.putObject(DefaultS3Client.java:11227)
           at 
org.apache.iceberg.aws.s3.S3OutputStream.completeUploads(S3OutputStream.java:444)
           at 
org.apache.iceberg.aws.s3.S3OutputStream.close(S3OutputStream.java:270)
           at 
org.apache.iceberg.aws.s3.S3OutputStream.close(S3OutputStream.java:256)
           at 
java.base/sun.nio.cs.StreamEncoder.implClose(StreamEncoder.java:439)
           at 
java.base/sun.nio.cs.StreamEncoder.lockedClose(StreamEncoder.java:237)
           at java.base/sun.nio.cs.StreamEncoder.close(StreamEncoder.java:222)
           at 
java.base/java.io.OutputStreamWriter.close(OutputStreamWriter.java:266)
           at 
org.apache.iceberg.TableMetadataParser.internalWrite(TableMetadataParser.java:133)
           at 
org.apache.iceberg.TableMetadataParser.overwrite(TableMetadataParser.java:117)
           at 
org.apache.polaris.service.catalog.iceberg.IcebergCatalog$BasePolarisTableOperations.writeNewMetadata(IcebergCatalog.java:1571)
           at 
org.apache.polaris.service.catalog.iceberg.IcebergCatalog$BasePolarisTableOperations.writeNewMetadataIfRequired(IcebergCatalog.java:1560)
           at 
org.apache.polaris.service.catalog.iceberg.IcebergCatalog$BasePolarisTableOperations.doCommit(IcebergCatalog.java:1430)
           at 
org.apache.polaris.service.catalog.iceberg.IcebergCatalog$BasePolarisTableOperations.commit(IcebergCatalog.java:1270)
           at 
org.apache.iceberg.BaseMetastoreCatalog$BaseMetastoreCatalogTableBuilder.create(BaseMetastoreCatalog.java:201)
           at 
org.apache.polaris.service.catalog.iceberg.CatalogHandlerUtils.createTable(CatalogHandlerUtils.java:339)
           at 
org.apache.polaris.service.catalog.iceberg.CatalogHandlerUtils_ClientProxy.createTable(Unknown
 Source)
           at 
org.apache.polaris.service.catalog.iceberg.IcebergCatalogHandler.createTableDirect(IcebergCatalogHandler.java:394)
           at 
org.apache.polaris.service.catalog.iceberg.IcebergCatalogAdapter.lambda$createTable$6(IcebergCatalogAdapter.java:360)
           at 
org.apache.polaris.service.catalog.iceberg.IcebergCatalogAdapter.withCatalog(IcebergCatalogAdapter.java:183)
           at 
org.apache.polaris.service.catalog.iceberg.IcebergCatalogAdapter.createTable(IcebergCatalogAdapter.java:347)
           at 
org.apache.polaris.service.catalog.iceberg.IcebergCatalogAdapter_ClientProxy.createTable(Unknown
 Source)
           at 
org.apache.polaris.service.catalog.api.IcebergRestCatalogApi.createTable(IcebergRestCatalogApi.java:193)
           at 
org.apache.polaris.service.catalog.api.IcebergRestCatalogApi_Subclass.createTable$$superforward(Unknown
 Source)
           at 
org.apache.polaris.service.catalog.api.IcebergRestCatalogApi_Subclass$$function$$3.apply(Unknown
 Source)
           at 
io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:73)
           at 
io.quarkus.arc.impl.AroundInvokeInvocationContext$NextAroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:97)
           at 
io.smallrye.faulttolerance.FaultToleranceInterceptor.lambda$syncFlow$8(FaultToleranceInterceptor.java:364)
           at io.smallrye.faulttolerance.core.Future.from(Future.java:85)
           at 
io.smallrye.faulttolerance.FaultToleranceInterceptor.lambda$syncFlow$9(FaultToleranceInterceptor.java:364)
           at 
io.smallrye.faulttolerance.core.FaultToleranceContext.call(FaultToleranceContext.java:20)
           at 
io.smallrye.faulttolerance.core.Invocation.apply(Invocation.java:29)
           at 
io.smallrye.faulttolerance.core.metrics.MetricsCollector.apply(MetricsCollector.java:98)
           at 
io.smallrye.faulttolerance.FaultToleranceInterceptor.syncFlow(FaultToleranceInterceptor.java:367)
           at 
io.smallrye.faulttolerance.FaultToleranceInterceptor.intercept(FaultToleranceInterceptor.java:205)
           at 
io.smallrye.faulttolerance.FaultToleranceInterceptor_Bean.intercept(Unknown 
Source)
           at 
io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:42)
           at 
io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:70)
           at 
io.quarkus.arc.impl.AroundInvokeInvocationContext$NextAroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:97)
           at 
io.quarkus.micrometer.runtime.MicrometerTimedInterceptor.timedMethod(MicrometerTimedInterceptor.java:79)
           at 
io.quarkus.micrometer.runtime.MicrometerTimedInterceptor_Bean.intercept(Unknown 
Source)
           at 
io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:42)
           at 
io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:70)
           at 
io.quarkus.arc.impl.AroundInvokeInvocationContext$NextAroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:97)
           at 
io.quarkus.security.runtime.interceptor.SecurityHandler.handle(SecurityHandler.java:27)
           at 
io.quarkus.security.runtime.interceptor.RolesAllowedInterceptor.intercept(RolesAllowedInterceptor.java:29)
           at 
io.quarkus.security.runtime.interceptor.RolesAllowedInterceptor_Bean.intercept(Unknown
 Source)
           at 
io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:42)
           at 
io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:70)
           at 
io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:62)
           at 
io.quarkus.resteasy.reactive.server.runtime.StandardSecurityCheckInterceptor.intercept(StandardSecurityCheckInterceptor.java:44)
           at 
io.quarkus.resteasy.reactive.server.runtime.StandardSecurityCheckInterceptor_RolesAllowedInterceptor_Bean.intercept(Unknown
 Source)
           at 
io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:42)
           at 
io.quarkus.arc.impl.AroundInvokeInvocationContext.perform(AroundInvokeInvocationContext.java:30)
           at 
io.quarkus.arc.impl.InvocationContexts.performAroundInvoke(InvocationContexts.java:27)
           at 
org.apache.polaris.service.catalog.api.IcebergRestCatalogApi_Subclass.createTable(Unknown
 Source)
           at 
org.apache.polaris.service.catalog.api.IcebergRestCatalogApi$quarkusrestinvoker$createTable_01f5a1bd6d7815fd3314a553161c943c8cd03101.invoke(Unknown
 Source)
           at 
org.jboss.resteasy.reactive.server.handlers.InvocationHandler.handle(InvocationHandler.java:29)
           at 
io.quarkus.resteasy.reactive.server.runtime.QuarkusResteasyReactiveRequestContext.invokeHandler(QuarkusResteasyReactiveRequestContext.java:141)
           at 
org.jboss.resteasy.reactive.common.core.AbstractResteasyReactiveContext.run(AbstractResteasyReactiveContext.java:147)
           at 
io.quarkus.vertx.core.runtime.VertxCoreRecorder$15.runWith(VertxCoreRecorder.java:638)
           at 
org.jboss.threads.EnhancedQueueExecutor$Task.doRunWith(EnhancedQueueExecutor.java:2675)
           at 
org.jboss.threads.EnhancedQueueExecutor$Task.run(EnhancedQueueExecutor.java:2654)
           at 
org.jboss.threads.EnhancedQueueExecutor.runThreadBody(EnhancedQueueExecutor.java:1627)
           at 
org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1594)
           at 
org.jboss.threads.DelegatingRunnable.run(DelegatingRunnable.java:11)
           at 
org.jboss.threads.ThreadLocalResettingRunnable.run(ThreadLocalResettingRunnable.java:11)
           at 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
           at java.base/java.lang.Thread.run(Thread.java:1583)
   Caused by: software.amazon.awssdk.core.exception.SdkClientException: Unable 
to execute HTTP request: 
iceberg-bucket.minio-data-lake-service.my-ns.svc.cluster.local: Name or service 
not known (SDK Attempt Count: 6)
           at 
software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:130)
           at 
software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:95)
           at 
software.amazon.awssdk.core.internal.http.pipeline.stages.utils.RetryableStageHelper.retryPolicyDisallowedRetryException(RetryableStageHelper.java:168)
           at 
software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:73)
           at 
software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:36)
           at 
software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
           at 
software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:53)
           at 
software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:35)
           at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.executeWithTimer(ApiCallTimeoutTrackingStage.java:82)
           at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:62)
           at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:43)
           at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:50)
           at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:32)
           at 
software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
           at 
software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
           at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:37)
           ... 78 more
           Suppressed: 
software.amazon.awssdk.core.exception.SdkClientException: Request attempt 1 
failure: Unable to execute HTTP request: 
iceberg-bucket.minio-data-lake-service.my-ns.svc.cluster.local: Name or service 
not known
                ...
   ```
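   
   Note the failing hostname in the root cause above: `iceberg-bucket.minio-data-lake-service.my-ns.svc.cluster.local`. The bucket name has been prepended to the service host, which is virtual-hosted-style S3 addressing; path-style would instead hit `http://minio-data-lake-service.my-ns.svc.cluster.local:9000/iceberg-bucket/...`. So it looks as if the S3 client Polaris uses to write table metadata is not honoring the path-style setting. A DNS check from inside the Polaris pod (sketch below; the `deploy/polaris` workload name is illustrative, and the image needs `getent`) should show the same asymmetry:
   
   ```shell
   # Illustrative check; adjust namespace/workload name to your deployment.
   # Plain service name -- expected to resolve:
   kubectl -n my-ns exec deploy/polaris -- \
     getent hosts minio-data-lake-service.my-ns.svc.cluster.local
   # Bucket-prefixed (virtual-hosted-style) name -- expected to fail:
   kubectl -n my-ns exec deploy/polaris -- \
     getent hosts iceberg-bucket.minio-data-lake-service.my-ns.svc.cluster.local
   ```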
   
   ### Expected Behavior
   
   Polaris works correctly and the Iceberg table is created successfully.
   
   ### Additional context
   
   _No response_
   
   ### System information
   
   OS: RHEL 8

