xiedeyantu commented on PR #4596: URL: https://github.com/apache/calcite/pull/4596#issuecomment-3434816950
I think I may have identified the issue. The image used in `docker-compose.yml` is `postgres:latest`, and the `latest` tag of the PostgreSQL image was updated 6 hours ago (to [version 18](https://hub.docker.com/layers/library/postgres/18/images/sha256-a6561a48c57f841ff807e429c41f72aefd5766156d9dec66fd7317f4dc4d3b3f)), which ultimately caused the error. Since the test code repository is at [Link1](https://github.com/zabetak/calcite-druid-dataset), @zabetak, please take a look when you have time. Thank you! The Druid test error report is at [Link2](https://github.com/apache/calcite/actions/runs/18734848974/job/53439388816?pr=4596). cc @mihaibudiu

```
=== Checking Failed Containers ===
❌ Failed containers detected:

--- postgres (Exit Code: 1) ---
Error: in 18+, these Docker images are configured to store database data in a
format which is compatible with "pg_ctlcluster" (specifically, using
major-version-specific directory names). This better reflects how PostgreSQL
itself works, and how upgrades are to be performed.

See also https://github.com/docker-library/postgres/pull/1259

Counter to that, there appears to be PostgreSQL data in:
  /var/lib/postgresql/data (unused mount/volume)

This is usually the result of upgrading the Docker image without upgrading the
underlying database using "pg_upgrade" (which requires both versions).

The suggested container configuration for 18+ is to place a single mount at
/var/lib/postgresql which will then place PostgreSQL data in a subdirectory,
allowing usage of "pg_upgrade --link" without mount point boundary issues.

See https://github.com/docker-library/postgres/issues/37 for a (long)
discussion around this process, and suggestions for how to do so.
```
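A straightforward mitigation would be to pin the image to a fixed major version so that future moves of the `latest` tag cannot break CI. The sketch below is an assumption, not the actual file: the real `docker-compose.yml` lives in the calcite-druid-dataset repository, and the service and volume names here are placeholders.

```yaml
services:
  postgres:
    # Pin to a specific major version instead of "latest" so that a new
    # PostgreSQL release (e.g. 18) cannot silently change the on-disk layout.
    image: postgres:17
    volumes:
      # Pre-18 images expect the data directory at exactly this path.
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```

Alternatively, per the error message above, an intentional upgrade to 18+ would mean mounting the volume at `/var/lib/postgresql` (one level up) and migrating the existing data with `pg_upgrade`.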
