krutoileshii opened a new issue, #3462: URL: https://github.com/apache/polaris/issues/3462
### Describe the bug

I have a very large table that was created through CDC. After executing `DROP TABLE` from Trino, the whole Polaris instance becomes unresponsive, with logs like the ones below. Polaris is running as a standalone process (not in Docker).

```
2026-01-16 16:42:59,043 WARN [org.apa.pol.ser.tas.FileCleanupTaskHandler] [,] [,,,] () file=s3://testdec16/allo_custom/dbo__cdc_test/data/00001-1768518467576-0d5953a4-ceed-49ed-a763-6ebb7b66a939-00001.parquet tableIdentifier=allo_custom.dbo__cdc_test baseFile=s3://testdec16/allo_custom/dbo__cdc_test/metadata/58be18c0-aa21-4b89-84dd-e1eb953f0444-m2.avro Exception caught deleting data file
2026-01-16 16:42:59,043 WARN [org.apa.pol.ser.tas.FileCleanupTaskHandler] [,] [,,,] () file=s3://testdec16/allo_custom/dbo__cdc_test/data/00001-1768518467576-0d5953a4-ceed-49ed-a763-6ebb7b66a939-00001.parquet attempt=2 error=java.lang.RuntimeException: software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: Timeout waiting for connection from pool (SDK Attempt Count: 1) Error encountered attempting to delete file
2026-01-16 16:42:59,043 WARN [org.apa.pol.ser.tas.FileCleanupTaskHandler] [,] [,,,] () file=s3://testdec16/allo_custom/dbo__cdc_test/data/00001-1768518467292-00defaac-2afc-4248-959c-f08f9294f161-00001.parquet tableIdentifier=allo_custom.dbo__cdc_test baseFile=s3://testdec16/allo_custom/dbo__cdc_test/metadata/58be18c0-aa21-4b89-84dd-e1eb953f0444-m2.avro Exception caught deleting data file
2026-01-16 16:42:59,043 WARN [org.apa.pol.ser.tas.FileCleanupTaskHandler] [,] [,,,] () file=s3://testdec16/allo_custom/dbo__cdc_test/data/00001-1768518467292-00defaac-2afc-4248-959c-f08f9294f161-00001.parquet attempt=2 error=java.lang.RuntimeException: software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: Timeout waiting for connection from pool (SDK Attempt Count: 1) Error encountered attempting to delete file
```

The files do eventually get deleted after a while, but while the cleanup is running you cannot access the catalog, list tables, create tables, or query data. PostgreSQL is used as the JDBC backend.
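For context, the `Timeout waiting for connection from pool` error is what the AWS SDK v2 raises when all of its pooled HTTP connections are in use and none frees up before the acquisition timeout. Below is a minimal, generic sketch of where those limits are configured in the SDK. This is not Polaris code, and I don't know which SDK HTTP client Polaris actually wires up for the cleanup tasks; the default values shown (50 connections, 10 s acquisition timeout) are what I recall for the Apache-based client and may differ from what Polaris uses.

```
// Sketch only, not Polaris source: generic AWS SDK v2 configuration showing
// the connection-pool limits behind "Timeout waiting for connection from pool".
import java.time.Duration;

import software.amazon.awssdk.http.apache.ApacheHttpClient;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

public class S3PoolSketch {
    public static void main(String[] args) {
        // The Apache-based HTTP client keeps a fixed-size connection pool.
        // When many delete tasks run concurrently, callers that cannot acquire
        // a connection before the acquisition timeout fail with the
        // SdkClientException seen in the logs above.
        S3Client s3 = S3Client.builder()
                .region(Region.US_EAST_1)
                .httpClientBuilder(ApacheHttpClient.builder()
                        .maxConnections(50)                                   // assumed default
                        .connectionAcquisitionTimeout(Duration.ofSeconds(10))) // assumed default
                .build();

        // ... DeleteObject requests for each data/metadata file would go through this client ...

        s3.close();
    }
}
```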
Here is my application config for using on-prem S3:

```
polaris.realm-context.realms=polaris
quarkus.log.console.level=DEBUG
quarkus.log.category."software.amazon.awssdk.request".level=DEBUG
quarkus.log.category."software.amazon.awssdk.auth.signer".level=DEBUG
quarkus.log.category."software.amazon.awssdk.http".level=DEBUG
quarkus.log.category."io.smallrye.config".level=DEBUG
polaris.realm-context.header-name=Polaris-Realm
#polaris.realm-context.require-header=true
polaris.persistence.type=relational-jdbc
quarkus.datasource.db-kind=postgresql
quarkus.datasource.username=user_name
quarkus.datasource.password=password
quarkus.datasource.jdbc.url=jdbc:postgresql://postgresql:5432/polaris_test
polaris.authentication.token-broker.type=rsa-key-pair
polaris.authentication.token-broker.rsa-key-pair.public-key-file=/opt/polaris/public.key
polaris.authentication.token-broker.rsa-key-pair.private-key-file=/opt/polaris/private.key
polaris.authentication.token-broker.max-token-generation=PT1H
polaris.bootstrap.credentials=polaris,admin,creds
polaris.storage.aws.access-key=access_key
polaris.storage.aws.secret-key=secret_key
polaris.catalog.s3.no-sts=true
#polaris.catalog.s3.region=us-east-1
#polaris.features."SKIP_CREDENTIAL_SUBSCOPING_INDIRECTION": true
polaris.features."SUPPORTED_CATALOG_STORAGE_TYPES": ["S3"]
polaris.features."ALLOW_SETTING_S3_ENDPOINTS"=true
polaris.features."DROP_WITH_PURGE_ENABLED"=true
polaris.features."CLEANUP_ON_CATALOG_DROP"=true
```

### To Reproduce

_No response_

### Actual Behavior

_No response_

### Expected Behavior

_No response_

### Additional context

_No response_

### System information

_No response_
