[jira] [Created] (FLINK-33866) KafkaSinkBuilder in flink-connector-kafka references DeliveryGuarantee in flink-connector-base
Kurt Ostfeld created FLINK-33866:

Summary: KafkaSinkBuilder in flink-connector-kafka references DeliveryGuarantee in flink-connector-base
Key: FLINK-33866
URL: https://issues.apache.org/jira/browse/FLINK-33866
Project: Flink
Issue Type: Bug
Components: Connectors / Kafka
Affects Versions: kafka-3.0.2
Reporter: Kurt Ostfeld

I have a Flink project with code like:

```
KafkaSink.builder().setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
```

This compiled with flink-connector-kafka 3.0.1 as well as past versions of Flink. It fails to compile with flink-connector-kafka 3.0.2 because that release changed flink-connector-base to a provided dependency, so the reference to the DeliveryGuarantee class becomes a compiler error.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
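One plausible workaround for affected applications (an assumption on my part, not a resolution from this issue) is to declare flink-connector-base as an explicit compile-scope dependency so the DeliveryGuarantee class is back on the compile classpath, for example in Maven:

```xml
<!-- Hypothetical workaround sketch: declare the formerly transitive dependency directly.
     ${flink.version} assumes a version property already defined in your pom. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-base</artifactId>
    <version>${flink.version}</version>
</dependency>
```

Whether this is the intended packaging model for connector users, or whether the class reference should move out of the builder's public API, is exactly what the issue raises.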
[jira] [Comment Edited] (FLINK-3154) Update Kryo version from 2.24.0 to latest Kryo LTS version
[ https://issues.apache.org/jira/browse/FLINK-3154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17726320#comment-17726320 ]

Kurt Ostfeld edited comment on FLINK-3154 at 5/26/23 5:24 PM:
--
[~martijnvisser] this PR upgrades Flink from Kryo v2.x to Kryo v5.x and preserves backward compatibility with existing savepoints and checkpoints: [https://github.com/apache/flink/pull/22660] This keeps the Kryo v2 project dependency for backwards compatibility only and otherwise uses Kryo v5.x.

EDIT: All CI tests are passing.

was (Author: JIRAUSER38):
[~martijnvisser] this PR upgrades Flink from Kryo v2.x to Kryo v5.x and preserves backward compatibility with existing savepoints and checkpoints: [https://github.com/apache/flink/pull/22660] This keeps the Kryo v2 project dependency for backwards compatibility only and otherwise uses Kryo v5.x.

EDIT: I will fix the CI errors.

> Update Kryo version from 2.24.0 to latest Kryo LTS version
> --
>
> Key: FLINK-3154
> URL: https://issues.apache.org/jira/browse/FLINK-3154
> Project: Flink
> Issue Type: Improvement
> Components: API / Type Serialization System
> Affects Versions: 1.0.0
> Reporter: Maximilian Michels
> Priority: Not a Priority
> Labels: pull-request-available
>
> Flink's Kryo version is outdated and could be updated to a newer version, e.g. kryo-3.0.3.
> From ML: we cannot bump the Kryo version easily - the serialization format changed (that's why they have a new major version), which would render all Flink savepoints and checkpoints incompatible.
[jira] [Comment Edited] (FLINK-3154) Update Kryo version from 2.24.0 to latest Kryo LTS version
[ https://issues.apache.org/jira/browse/FLINK-3154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17726320#comment-17726320 ]

Kurt Ostfeld edited comment on FLINK-3154 at 5/26/23 3:18 AM:
--
[~martijnvisser] this PR upgrades Flink from Kryo v2.x to Kryo v5.x and preserves backward compatibility with existing savepoints and checkpoints: [https://github.com/apache/flink/pull/22660] This keeps the Kryo v2 project dependency for backwards compatibility only and otherwise uses Kryo v5.x.

EDIT: I will fix the CI errors.

was (Author: JIRAUSER38):
[~martijnvisser] this PR upgrades Flink from Kryo v2.x to Kryo v5.x and preserves backward compatibility with existing savepoints and checkpoints: [https://github.com/apache/flink/pull/22660] This keeps the Kryo v2 project dependency for backwards compatibility only and otherwise uses Kryo v5.x.
[jira] [Comment Edited] (FLINK-3154) Update Kryo version from 2.24.0 to latest Kryo LTS version
[ https://issues.apache.org/jira/browse/FLINK-3154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17726320#comment-17726320 ]

Kurt Ostfeld edited comment on FLINK-3154 at 5/25/23 5:09 PM:
--
[~martijnvisser] this PR upgrades Flink from Kryo v2.x to Kryo v5.x and preserves backward compatibility with existing savepoints and checkpoints: [https://github.com/apache/flink/pull/22660] This keeps the Kryo v2 project dependency for backwards compatibility only and otherwise uses Kryo v5.x.

was (Author: JIRAUSER38):
[~martijnvisser] this PR upgrades Flink from v2 to v5 and preserves backward compatibility with existing savepoints and checkpoints: [https://github.com/apache/flink/pull/22660] This keeps the Kryo v2 project dependency for backwards compatibility only and otherwise uses Kryo v5.x.
[jira] [Commented] (FLINK-3154) Update Kryo version from 2.24.0 to latest Kryo LTS version
[ https://issues.apache.org/jira/browse/FLINK-3154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17726320#comment-17726320 ]

Kurt Ostfeld commented on FLINK-3154:
-
[~martijnvisser] this PR upgrades Flink from v2 to v5 and preserves backward compatibility with existing savepoints and checkpoints: [https://github.com/apache/flink/pull/22660] This keeps the Kryo v2 project dependency for backwards compatibility only and otherwise uses Kryo v5.x.
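The migration strategy described in these comments (keep the old Kryo v2 dependency only to read existing savepoints, write everything with v5) can be sketched in miniature. This is a hypothetical illustration, not Flink's or the PR's actual serializer code: the format markers and the UTF-16BE/UTF-8 encodings are invented stand-ins for the two Kryo wire formats.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Hypothetical sketch of version-tagged state: a one-byte format marker lets
// restore-time code dispatch to a legacy decoder, while all new writes use
// only the new format.
public class VersionedStateDemo {
    static final byte V2 = 2; // legacy format marker (invented for illustration)
    static final byte V5 = 5; // new format marker (invented for illustration)

    private static byte[] frame(byte version, byte[] payload) {
        byte[] out = new byte[payload.length + 1];
        out[0] = version;
        System.arraycopy(payload, 0, out, 1, payload.length);
        return out;
    }

    // Simulates state written by the old serializer; exists only so that
    // "restoring an old savepoint" can be exercised.
    public static byte[] writeLegacyV2(String value) {
        return frame(V2, value.getBytes(StandardCharsets.UTF_16BE));
    }

    // All new state is written in the new format only.
    public static byte[] writeV5(String value) {
        return frame(V5, value.getBytes(StandardCharsets.UTF_8));
    }

    // Restore dispatches on the marker, so old savepoints stay readable even
    // though nothing writes the old format anymore.
    public static String read(byte[] bytes) {
        byte[] payload = Arrays.copyOfRange(bytes, 1, bytes.length);
        if (bytes[0] == V2) {
            return new String(payload, StandardCharsets.UTF_16BE);
        }
        if (bytes[0] == V5) {
            return new String(payload, StandardCharsets.UTF_8);
        }
        throw new IllegalStateException("unknown state format version: " + bytes[0]);
    }
}
```

The point of the shape, if the PR works this way, is that the v2 dependency becomes read-only legacy code that can eventually be dropped once old savepoints age out.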
[jira] [Resolved] (FLINK-32104) stop-with-savepoint fails and times out with simple reproducible example
[ https://issues.apache.org/jira/browse/FLINK-32104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Ostfeld resolved FLINK-32104.
--
Resolution: Invalid

[~Weijie Guo], thank you so much for your help. If I move the sleep before the keyBy, I can suspend the job. Thanks!

> stop-with-savepoint fails and times out with simple reproducible example
>
> Key: FLINK-32104
> URL: https://issues.apache.org/jira/browse/FLINK-32104
> Project: Flink
> Issue Type: Bug
> Components: API / DataStream
> Affects Versions: 1.17.0
> Reporter: Kurt Ostfeld
> Priority: Major
>
> I've put together a simple demo app that reproduces the issue with instructions on how to reproduce:
> [https://github.com/kurtostfeld/flink-stop-issue]
>
> The issue is that with a very simple Flink DataStream API application, the `stop-with-savepoint` command fails and times out like this:
>
> {code:java}
> ./bin/flink stop --type native --savepointPath ../savepoints d69a952625497cca0665dfdcdb9f4718
> Suspending job "d69a952625497cca0665dfdcdb9f4718" with a NATIVE savepoint.
>
> The program finished with the following exception:
> org.apache.flink.util.FlinkException: Could not stop with a savepoint job "d69a952625497cca0665dfdcdb9f4718".
>     at org.apache.flink.client.cli.CliFrontend.lambda$stop$4(CliFrontend.java:595)
>     at org.apache.flink.client.cli.CliFrontend.runClusterAction(CliFrontend.java:1041)
>     at org.apache.flink.client.cli.CliFrontend.stop(CliFrontend.java:578)
>     at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1110)
>     at org.apache.flink.client.cli.CliFrontend.lambda$mainInternal$9(CliFrontend.java:1189)
>     at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
>     at org.apache.flink.client.cli.CliFrontend.mainInternal(CliFrontend.java:1189)
>     at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1157)
> Caused by: java.util.concurrent.TimeoutException
>     at java.base/java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1886)
>     at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2021)
>     at org.apache.flink.client.cli.CliFrontend.lambda$stop$4(CliFrontend.java:591)
>     ... 7 more
> {code}
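The resolution above (moving the sleep so it runs before the keyBy) is consistent with one plausible mechanism, though this is my reading rather than a confirmed analysis: a task processes records and savepoint barriers from a single queue, so a barrier queued behind a record that blocks in user code cannot be acknowledged until that record finishes, and the client-side future times out. A minimal stdlib sketch of that mechanism, not Flink's actual task code:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustration only (invented names, not Flink internals): records and the
// savepoint "barrier" share one mailbox, so per-record blocking in user code
// directly delays how soon the barrier can be acknowledged.
public class BarrierBlockingDemo {
    public static long millisUntilBarrierAck(long perRecordBlockMillis, int recordsAhead)
            throws InterruptedException {
        Queue<String> mailbox = new ArrayDeque<>();
        for (int i = 0; i < recordsAhead; i++) {
            mailbox.add("record");
        }
        mailbox.add("barrier"); // the savepoint barrier arrives after the records

        long start = System.nanoTime();
        long ackAtMillis = -1;
        while (!mailbox.isEmpty()) {
            String msg = mailbox.poll();
            if (msg.equals("record")) {
                Thread.sleep(perRecordBlockMillis); // stands in for user code that sleeps
            } else {
                // The barrier is only processed once every record ahead of it finished.
                ackAtMillis = (System.nanoTime() - start) / 1_000_000;
            }
        }
        return ackAtMillis;
    }
}
```

With four records each blocking 50 ms ahead of the barrier, acknowledgment cannot happen before roughly 200 ms; scale the per-record block past the client timeout and stop-with-savepoint appears to hang exactly as in the report.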
[jira] [Updated] (FLINK-32104) stop-with-savepoint fails and times out with simple reproducible example
[ https://issues.apache.org/jira/browse/FLINK-32104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Ostfeld updated FLINK-32104:
-
Description:
I've put together a simple demo app that reproduces the issue with instructions on how to reproduce: [https://github.com/kurtostfeld/flink-stop-issue]

The issue is that with a very simple Flink DataStream API application, the `stop-with-savepoint` command fails and times out like this:

{code:java}
./bin/flink stop --type native --savepointPath ../savepoints d69a952625497cca0665dfdcdb9f4718
Suspending job "d69a952625497cca0665dfdcdb9f4718" with a NATIVE savepoint.

The program finished with the following exception:
org.apache.flink.util.FlinkException: Could not stop with a savepoint job "d69a952625497cca0665dfdcdb9f4718".
    at org.apache.flink.client.cli.CliFrontend.lambda$stop$4(CliFrontend.java:595)
    at org.apache.flink.client.cli.CliFrontend.runClusterAction(CliFrontend.java:1041)
    at org.apache.flink.client.cli.CliFrontend.stop(CliFrontend.java:578)
    at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1110)
    at org.apache.flink.client.cli.CliFrontend.lambda$mainInternal$9(CliFrontend.java:1189)
    at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
    at org.apache.flink.client.cli.CliFrontend.mainInternal(CliFrontend.java:1189)
    at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1157)
Caused by: java.util.concurrent.TimeoutException
    at java.base/java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1886)
    at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2021)
    at org.apache.flink.client.cli.CliFrontend.lambda$stop$4(CliFrontend.java:591)
    ... 7 more
{code}

was:
I've put together a simple demo app that reproduces the issue with instructions on how to reproduce: [https://github.com/kurtostfeld/flink-stop-issue]

The issue is with a very simple application written with the Flink DataStream API, `stop-with-savepoint` fails and times out like this:

{code:java}
./bin/flink stop --type native --savepointPath ../savepoints d69a952625497cca0665dfdcdb9f4718
Suspending job "d69a952625497cca0665dfdcdb9f4718" with a NATIVE savepoint.

The program finished with the following exception:
org.apache.flink.util.FlinkException: Could not stop with a savepoint job "d69a952625497cca0665dfdcdb9f4718".
    at org.apache.flink.client.cli.CliFrontend.lambda$stop$4(CliFrontend.java:595)
    at org.apache.flink.client.cli.CliFrontend.runClusterAction(CliFrontend.java:1041)
    at org.apache.flink.client.cli.CliFrontend.stop(CliFrontend.java:578)
    at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1110)
    at org.apache.flink.client.cli.CliFrontend.lambda$mainInternal$9(CliFrontend.java:1189)
    at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
    at org.apache.flink.client.cli.CliFrontend.mainInternal(CliFrontend.java:1189)
    at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1157)
Caused by: java.util.concurrent.TimeoutException
    at java.base/java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1886)
    at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2021)
    at org.apache.flink.client.cli.CliFrontend.lambda$stop$4(CliFrontend.java:591)
    ... 7 more
{code}

> stop-with-savepoint fails and times out with simple reproducible example
>
> Key: FLINK-32104
> URL: https://issues.apache.org/jira/browse/FLINK-32104
> Project: Flink
> Issue Type: Bug
> Components: API / DataStream
> Affects Versions: 1.17.0
> Reporter: Kurt Ostfeld
> Priority: Major
>
> I've put together a simple demo app that reproduces the issue with instructions on how to reproduce:
> [https://github.com/kurtostfeld/flink-stop-issue]
>
> The issue is that with a very simple Flink DataStream API application, the `stop-with-savepoint` command fails and times out like this:
>
> {code:java}
> ./bin/flink stop --type native --savepointPath ../savepoints d69a952625497cca0665dfdcdb9f4718
> Suspending job "d69a952625497cca0665dfdcdb9f4718" with a NATIVE savepoint.
>
> The program finished with the following exception:
> org.apache.flink.util.FlinkException: Could not stop with a savepoint job "d69a952625497cca0665dfdcdb9f4718".
> at org.apache.flink.client.cli.CliFrontend.lambda$stop$4(CliFrontend.java:595)
> at
[jira] [Created] (FLINK-32104) stop-with-savepoint fails and times out with simple reproducible example
Kurt Ostfeld created FLINK-32104:

Summary: stop-with-savepoint fails and times out with simple reproducible example
Key: FLINK-32104
URL: https://issues.apache.org/jira/browse/FLINK-32104
Project: Flink
Issue Type: Bug
Components: API / DataStream
Affects Versions: 1.17.0
Reporter: Kurt Ostfeld

I've put together a simple demo app that reproduces the issue with instructions on how to reproduce: [https://github.com/kurtostfeld/flink-stop-issue]

The issue: with a very simple application written with the Flink DataStream API, `stop-with-savepoint` fails and times out like this:

{code:java}
./bin/flink stop --type native --savepointPath ../savepoints d69a952625497cca0665dfdcdb9f4718
Suspending job "d69a952625497cca0665dfdcdb9f4718" with a NATIVE savepoint.

The program finished with the following exception:
org.apache.flink.util.FlinkException: Could not stop with a savepoint job "d69a952625497cca0665dfdcdb9f4718".
    at org.apache.flink.client.cli.CliFrontend.lambda$stop$4(CliFrontend.java:595)
    at org.apache.flink.client.cli.CliFrontend.runClusterAction(CliFrontend.java:1041)
    at org.apache.flink.client.cli.CliFrontend.stop(CliFrontend.java:578)
    at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1110)
    at org.apache.flink.client.cli.CliFrontend.lambda$mainInternal$9(CliFrontend.java:1189)
    at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
    at org.apache.flink.client.cli.CliFrontend.mainInternal(CliFrontend.java:1189)
    at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1157)
Caused by: java.util.concurrent.TimeoutException
    at java.base/java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1886)
    at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2021)
    at org.apache.flink.client.cli.CliFrontend.lambda$stop$4(CliFrontend.java:591)
    ... 7 more
{code}
[jira] [Commented] (FLINK-31880) Bad Test in OrcColumnarRowSplitReaderTest
[ https://issues.apache.org/jira/browse/FLINK-31880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17722973#comment-17722973 ]

Kurt Ostfeld commented on FLINK-31880:
--
Updated PR: https://github.com/apache/flink/pull/22586

> Bad Test in OrcColumnarRowSplitReaderTest
> -
>
> Key: FLINK-31880
> URL: https://issues.apache.org/jira/browse/FLINK-31880
> Project: Flink
> Issue Type: Bug
> Components: Connectors / ORC, Formats (JSON, Avro, Parquet, ORC, SequenceFile)
> Reporter: Kurt Ostfeld
> Priority: Minor
> Labels: pull-request-available
>
> This is a development issue with what looks like a buggy unit test.
>
> I tried to build Flink with a clean copy of the repository and I get:
>
> ```
> [INFO] Results:
> [INFO]
> [ERROR] Failures:
> [ERROR] OrcColumnarRowSplitReaderTest.testReadFileWithTypes:365
> expected: "1969-12-31"
>  but was: "1970-01-01"
> [INFO]
> [ERROR] Tests run: 26, Failures: 1, Errors: 0, Skipped: 0
> ```
>
> The test exercises the Date data type with `new Date(562423)`, which is 9 minutes and 22 seconds after the epoch: 1970-01-01 in UTC, but when I run that on my laptop in the CST timezone I get `Wed Dec 31 18:09:22 CST 1969`.
>
> I have a simple pull request ready that fixes this issue by using the Java 8 LocalDate API instead, which avoids time zones entirely.
[jira] [Resolved] (FLINK-31937) Failing Unit Test: ClientTest.testClientServerIntegration "Connection leak"
[ https://issues.apache.org/jira/browse/FLINK-31937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Ostfeld resolved FLINK-31937.
--
Resolution: Won't Fix

[~martijnvisser] ah, ok, unit tests don't need to run outside of the CI environment. Thank you.

> Failing Unit Test: ClientTest.testClientServerIntegration "Connection leak"
> ---
>
> Key: FLINK-31937
> URL: https://issues.apache.org/jira/browse/FLINK-31937
> Project: Flink
> Issue Type: Bug
> Components: Runtime / Queryable State
> Reporter: Kurt Ostfeld
> Priority: Minor
>
> {code:java}
> [ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 34.68 s <<< FAILURE! - in org.apache.flink.queryablestate.network.ClientTest
> [ERROR] org.apache.flink.queryablestate.network.ClientTest.testClientServerIntegration Time elapsed: 3.801 s <<< FAILURE!
> java.lang.AssertionError: Connection leak (server)
>     at org.apache.flink.queryablestate.network.ClientTest.testClientServerIntegration(ClientTest.java:719)
> {code}
[jira] [Created] (FLINK-31938) Failing Unit Test: FlinkConnectionTest.testCatalogSchema "Failed to get response for the operation"
Kurt Ostfeld created FLINK-31938:

Summary: Failing Unit Test: FlinkConnectionTest.testCatalogSchema "Failed to get response for the operation"
Key: FLINK-31938
URL: https://issues.apache.org/jira/browse/FLINK-31938
Project: Flink
Issue Type: Bug
Components: Table SQL / JDBC
Reporter: Kurt Ostfeld

{noformat}
[ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.885 s <<< FAILURE! - in org.apache.flink.table.jdbc.FlinkConnectionTest
[ERROR] org.apache.flink.table.jdbc.FlinkConnectionTest.testCatalogSchema Time elapsed: 1.513 s <<< ERROR!
org.apache.flink.table.client.gateway.SqlExecutionException: Failed to get response for the operation 733f0d91-e9e8-4487-949f-f3abb13384e8.
    at org.apache.flink.table.client.gateway.ExecutorImpl.getFetchResultResponse(ExecutorImpl.java:416)
    at org.apache.flink.table.client.gateway.ExecutorImpl.fetchUtilResultsReady(ExecutorImpl.java:376)
    at org.apache.flink.table.client.gateway.ExecutorImpl.executeStatement(ExecutorImpl.java:242)
    at org.apache.flink.table.jdbc.FlinkConnectionTest.testCatalogSchema(FlinkConnectionTest.java:95)
{noformat}
[jira] [Created] (FLINK-31937) Failing Unit Test: ClientTest.testClientServerIntegration "Connection leak"
Kurt Ostfeld created FLINK-31937:

Summary: Failing Unit Test: ClientTest.testClientServerIntegration "Connection leak"
Key: FLINK-31937
URL: https://issues.apache.org/jira/browse/FLINK-31937
Project: Flink
Issue Type: Bug
Components: Runtime / Queryable State
Reporter: Kurt Ostfeld

{code:java}
[ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 34.68 s <<< FAILURE! - in org.apache.flink.queryablestate.network.ClientTest
[ERROR] org.apache.flink.queryablestate.network.ClientTest.testClientServerIntegration Time elapsed: 3.801 s <<< FAILURE!
java.lang.AssertionError: Connection leak (server)
    at org.apache.flink.queryablestate.network.ClientTest.testClientServerIntegration(ClientTest.java:719)
{code}
[jira] [Created] (FLINK-31897) Failing Unit Test: org.apache.flink.queryablestate.network.ClientTest.testRequestUnavailableHost
Kurt Ostfeld created FLINK-31897:

Summary: Failing Unit Test: org.apache.flink.queryablestate.network.ClientTest.testRequestUnavailableHost
Key: FLINK-31897
URL: https://issues.apache.org/jira/browse/FLINK-31897
Project: Flink
Issue Type: Bug
Components: API / State Processor
Reporter: Kurt Ostfeld

{code:java}
[ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.612 s <<< FAILURE! - in org.apache.flink.queryablestate.network.ClientTest
[ERROR] org.apache.flink.queryablestate.network.ClientTest.testRequestUnavailableHost Time elapsed: 0.006 s <<< FAILURE!
java.lang.AssertionError:
Expected: A CompletableFuture that will have failed within 360 milliseconds with: java.net.ConnectException
     but: Future completed with different exception: org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AnnotatedSocketException: Can't assign requested address: /:0
Caused by: java.net.BindException: Can't assign requested address
{code}
[jira] [Created] (FLINK-31880) Bad Test in OrcColumnarRowSplitReaderTest
Kurt Ostfeld created FLINK-31880:

Summary: Bad Test in OrcColumnarRowSplitReaderTest
Key: FLINK-31880
URL: https://issues.apache.org/jira/browse/FLINK-31880
Project: Flink
Issue Type: Bug
Components: Connectors / ORC, Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Reporter: Kurt Ostfeld

This is a development issue with what looks like a buggy unit test.

I tried to build Flink with a clean copy of the repository and I get:

```
[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR] OrcColumnarRowSplitReaderTest.testReadFileWithTypes:365
expected: "1969-12-31"
 but was: "1970-01-01"
[INFO]
[ERROR] Tests run: 26, Failures: 1, Errors: 0, Skipped: 0
```

The test exercises the Date data type with `new Date(562423)`, which is 9 minutes and 22 seconds after the epoch: 1970-01-01 in UTC. When I run that on my laptop in the CST timezone, I get `Wed Dec 31 18:09:22 CST 1969`.

I have a simple pull request ready that fixes this issue by using the Java 8 LocalDate API instead, which avoids time zones entirely.
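The time-zone dependence described above is easy to demonstrate with the stdlib alone. The sketch below (my illustration, not the test's actual code) computes the calendar date that an epoch-millis value renders as under an explicit zone, which is exactly the quantity that varies between a UTC CI machine and a CST laptop; `LocalDate` values built from epoch days carry no zone at all, which is why the pull request's switch to the Java 8 date API removes the flakiness.

```java
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;

// Illustration of the bug: the same epoch-millis instant maps to different
// calendar dates depending on the zone used to interpret it.
public class EpochDateDemo {
    // The local calendar date that new java.util.Date(epochMillis) would
    // render as, computed for an explicit zone instead of the JVM default.
    public static LocalDate dateInZone(long epochMillis, ZoneId zone) {
        return Instant.ofEpochMilli(epochMillis).atZone(zone).toLocalDate();
    }
}
```

For the test's value, 562423 ms is 1970-01-01T00:09:22.423Z: a UTC machine sees 1970-01-01, while a machine in America/Chicago (CST, UTC-6 at that date) sees 1969-12-31, matching the failure in the report.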