[jira] [Created] (FLINK-22385) Type mismatch in NetworkBufferPool
sharkd tu created FLINK-22385: Summary: Type mismatch in NetworkBufferPool Key: FLINK-22385 URL: https://issues.apache.org/jira/browse/FLINK-22385 Project: Flink Issue Type: Bug Reporter: sharkd tu Attachments: flink-ui-bug.png The 'Network Metrics' section in the Flink UI displays incorrect values when network memory is large. !flink-ui-bug.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-22384) Chinese typos
simenliuxing created FLINK-22384: Summary: Chinese typos Key: FLINK-22384 URL: https://issues.apache.org/jira/browse/FLINK-22384 Project: Flink Issue Type: Improvement Components: Documentation Reporter: simenliuxing Fix For: 1.14.0 I found a Chinese typo at https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/dev/datastream/fault-tolerance/state/ : the text "和 keyed state 类系" should be replaced with "和 keyed state 类似". -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-22383) HiveParser may fail to call Flink functions
Rui Li created FLINK-22383: -- Summary: HiveParser may fail to call Flink functions Key: FLINK-22383 URL: https://issues.apache.org/jira/browse/FLINK-22383 Project: Flink Issue Type: Sub-task Components: Connectors / Hive Reporter: Rui Li -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-22382) ProcessFailureCancelingITCase.testCancelingOnProcessFailure
Dawid Wysakowicz created FLINK-22382: Summary: ProcessFailureCancelingITCase.testCancelingOnProcessFailure Key: FLINK-22382 URL: https://issues.apache.org/jira/browse/FLINK-22382 Project: Flink Issue Type: Task Components: Runtime / Coordination, Runtime / Task Affects Versions: 1.13.0 Reporter: Dawid Wysakowicz Fix For: 1.13.0 https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16896=logs=a57e0635-3fad-5b08-57c7-a4142d7d6fa9=5360d54c-8d94-5d85-304e-a89267eb785a=9756 {code} Apr 20 18:05:14 Suppressed: java.util.concurrent.TimeoutException Apr 20 18:05:14 at org.apache.flink.core.testutils.CommonTestUtils.waitUtil(CommonTestUtils.java:210) Apr 20 18:05:14 at org.apache.flink.test.recovery.ProcessFailureCancelingITCase.waitUntilAtLeastOneTaskHasBeenDeployed(ProcessFailureCancelingITCase.java:236) Apr 20 18:05:14 at org.apache.flink.test.recovery.ProcessFailureCancelingITCase.testCancelingOnProcessFailure(ProcessFailureCancelingITCase.java:193) Apr 20 18:05:14 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) Apr 20 18:05:14 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) Apr 20 18:05:14 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) Apr 20 18:05:14 at java.lang.reflect.Method.invoke(Method.java:498) Apr 20 18:05:14 at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) Apr 20 18:05:14 at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) Apr 20 18:05:14 at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) Apr 20 18:05:14 at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) Apr 20 18:05:14 at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) Apr 20 18:05:14 at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) Apr 20 18:05:14 at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) Apr 20 18:05:14 at org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) Apr 20 18:05:14 at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) Apr 20 18:05:14 at org.junit.rules.RunRules.evaluate(RunRules.java:20) Apr 20 18:05:14 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) Apr 20 18:05:14 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) Apr 20 18:05:14 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) Apr 20 18:05:14 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) Apr 20 18:05:14 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) Apr 20 18:05:14 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) Apr 20 18:05:14 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) Apr 20 18:05:14 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) Apr 20 18:05:14 at org.junit.runners.ParentRunner.run(ParentRunner.java:363) Apr 20 18:05:14 at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) Apr 20 18:05:14 at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) Apr 20 18:05:14 at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) Apr 20 18:05:14 at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) Apr 20 18:05:14 at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) Apr 20 18:05:14 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) Apr 20 18:05:14 at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) Apr 20 18:05:14 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-22381) RocksDBStateBackendTests::test_get_set_number_of_transfering_threads fails on azure
Roman Khachatryan created FLINK-22381: - Summary: RocksDBStateBackendTests::test_get_set_number_of_transfering_threads fails on azure Key: FLINK-22381 URL: https://issues.apache.org/jira/browse/FLINK-22381 Project: Flink Issue Type: Bug Components: Runtime / State Backends Affects Versions: 1.13.0 Reporter: Roman Khachatryan Fix For: 1.13.0 https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16900=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=8d78fe4f-d658-5c70-12f8-4921589024c3=21206 {code} === FAILURES === _ RocksDBStateBackendTests.test_get_set_number_of_transfering_threads __ self = def test_get_set_number_of_transfering_threads(self): state_backend = RocksDBStateBackend("file://var/checkpoints/") > self.assertEqual(state_backend.get_number_of_transfering_threads(), 1) E AssertionError: 4 != 1 pyflink/datastream/tests/test_state_backend.py:185: AssertionError {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-22380) Pub/Sub Lite Connector for Flink
Evan Palmer created FLINK-22380: --- Summary: Pub/Sub Lite Connector for Flink Key: FLINK-22380 URL: https://issues.apache.org/jira/browse/FLINK-22380 Project: Flink Issue Type: New Feature Components: Connectors / Google Cloud PubSub Reporter: Evan Palmer Hello, I'm an engineer at Google working on [Pub/Sub Lite|https://cloud.google.com/pubsub/lite/docs]. Pub/Sub Lite is a zonal, partition-based messaging service meant to be a cheaper alternative to Cloud Pub/Sub. We're interested in writing a Flink connector so that users can read from and write to Pub/Sub Lite in Flink pipelines. I'm wondering if we can contribute this connector to the Flink repo, perhaps somewhere near https://github.com/apache/flink/tree/master/flink-connectors/flink-connector-gcp-pubsub. Thanks! Evan -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-22379) Introduce a new JobStatus to avoid premature checkpoint triggering
Anton Kalashnikov created FLINK-22379: - Summary: Introduce a new JobStatus to avoid premature checkpoint triggering Key: FLINK-22379 URL: https://issues.apache.org/jira/browse/FLINK-22379 Project: Flink Issue Type: Improvement Components: Runtime / Checkpointing Reporter: Anton Kalashnikov Right now, when JobStatus switches to RUNNING it allows CheckpointCoordinator to trigger checkpoint which is ok. But unfortunately, JobStatus switches to RUNNING before TaskState(ExecutionState) switches even to SCHEDULED. And this leads to several problems, one of them you can see in the log: {noformat} WARN org.apache.flink.runtime.checkpoint.CheckpointCoordinator[] - Failed to trigger checkpoint for job bc943302f92d979824fbc8f4cabc5db3.)org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint triggering task Source: EventSource -> Timestamps/Watermarks (1/7) of job bc943302f92d979824fbc8f4cabc5db3 has not being executed at the moment. Aborting checkpoint. Failure reason: Not all required tasks are currently running.at org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.checkTasksStarted(DefaultCheckpointPlanCalculator.java:152) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]at org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.lambda$calculateCheckpointPlan$1(DefaultCheckpointPlanCalculator.java:114) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604) ~[?:1.8.0_272]at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:440) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:208) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]at akka.actor.Actor$class.aroundReceive(Actor.scala:517) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]at akka.actor.ActorCell.invoke(ActorCell.scala:561) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]at akka.dispatch.Mailbox.run(Mailbox.scala:225) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]at akka.dispatch.Mailbox.exec(Mailbox.scala:235) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]at 
akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]{noformat} To avoid this problem, it is a good idea to introduce new JobStatus between CREATED and RUNNING(RESTORING?). And then: * JobStatus CREATED switches to RESTORING at the same time when right now CREATED switches to RUNNING * JobStatus RESTORING switches to RUNNING when all tasks switched their states from INITIALIZING to RUNNING It also makes sense to rename ExecutionState.INITIALIZING to RESTORING in order to have the same name for job and task. -- This message was sent by Atlassian Jira (v8.3.4#803005)
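The proposed lifecycle above can be sketched roughly as follows. This is an illustrative enum only (the name SketchJobStatus and the helper method are invented for this example, not Flink's actual JobStatus): the point is that a RESTORING state between CREATED and RUNNING lets the CheckpointCoordinator defer triggering until every task has actually reached RUNNING.

```java
// Illustrative sketch only -- not Flink's actual JobStatus enum.
enum SketchJobStatus {
    CREATED,   // job graph created, tasks not yet scheduled
    RESTORING, // tasks deploying / restoring state (today this is already "RUNNING")
    RUNNING,   // all tasks have switched from INITIALIZING to RUNNING
    FINISHED;

    // Only RUNNING permits checkpoint triggering; RESTORING does not,
    // which avoids the premature-trigger warning shown in the log above.
    boolean allowsCheckpointTriggering() {
        return this == RUNNING;
    }
}
```

Under this sketch, the warning in the log above cannot occur: by the time the status allows triggering, checkTasksStarted would find every task running.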
[jira] [Created] (FLINK-22378) Type mismatch when declaring SOURCE_WATERMARK on TIMESTAMP_LTZ column
Timo Walther created FLINK-22378: Summary: Type mismatch when declaring SOURCE_WATERMARK on TIMESTAMP_LTZ column Key: FLINK-22378 URL: https://issues.apache.org/jira/browse/FLINK-22378 Project: Flink Issue Type: Sub-task Components: Table SQL / API Reporter: Timo Walther The following schema cannot be resolved currently: {code} Schema.newBuilder() .columnByMetadata("rowtime", DataTypes.TIMESTAMP_LTZ(3)) .watermark("rowtime", "SOURCE_WATERMARK()") .build() {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-22377) Ignore if writer is stopped when aborting channel state writing
Roman Khachatryan created FLINK-22377: - Summary: Ignore if writer is stopped when aborting channel state writing Key: FLINK-22377 URL: https://issues.apache.org/jira/browse/FLINK-22377 Project: Flink Issue Type: Improvement Components: Runtime / Network, Runtime / Task Affects Versions: 1.13.0 Reporter: Roman Khachatryan When channel state write is being aborted it can happen that writer is already stopped. In that case, an error will be thrown: {code} 2021-04-19 09:05:52,716 DEBUG org.apache.flink.streaming.runtime.tasks.StreamTask Could not perform checkpoint 401 for operator Source: Custom Source (5/6)#0 while the invokable was not in state running. java.lang.RuntimeException: unable to send request to worker at org.apache.flink.runtime.checkpoint.channel.ChannelStateWriterImpl.enqueue(ChannelStateWriterImpl.java:229) ~[flink-runtime_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT] at org.apache.flink.runtime.checkpoint.channel.ChannelStateWriterImpl.abort(ChannelStateWriterImpl.java:190) ~[flink-runtime_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT] at org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.cleanup(SubtaskCheckpointCoordinatorImpl.java:472) ~[flink-streaming-java_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT] at org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.checkpointState(SubtaskCheckpointCoordinatorImpl.java:315) ~[flink-streaming-java_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT] at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$performCheckpoint$8(StreamTask.java:1062) ~[flink-streaming-java_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT] at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:93) ~[flink-streaming-java_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT] at org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:1046) ~[flink-streaming-java_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT] at 
org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:964) ~[flink-streaming-java_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT] at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$triggerCheckpointAsync$7(StreamTask.java:936) ~[flink-streaming-java_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT] at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:93) [flink-streaming-java_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT] at org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:90) [flink-streaming-java_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT] at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMailsWhenDefaultActionUnavailable(MailboxProcessor.java:344) [flink-streaming-java_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT] at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMail(MailboxProcessor.java:330) [flink-streaming-java_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT] at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:202) [flink-streaming-java_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT] at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:661) [flink-streaming-java_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT] at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:623) [flink-streaming-java_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT] at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:776) [flink-runtime_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT] at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563) [flink-runtime_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT] at java.lang.Thread.run(Thread.java:834) [?:?] 
Caused by: java.lang.IllegalStateException: not running at org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequestExecutorImpl.ensureRunning(ChannelStateWriteRequestExecutorImpl.java:152) ~[flink-runtime_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT] at org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequestExecutorImpl.submitInternal(ChannelStateWriteRequestExecutorImpl.java:144) ~[flink-runtime_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT] at org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequestExecutorImpl.submitPriority(ChannelStateWriteRequestExecutorImpl.java:133) ~[flink-runtime_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT] at org.apache.flink.runtime.checkpoint.channel.ChannelStateWriterImpl.enqueue(ChannelStateWriterImpl.java:224) ~[flink-runtime_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT] ... 18 more {code} Need to -
[jira] [Created] (FLINK-22376) SequentialChannelStateReaderImpl may recycle buffer twice
Roman Khachatryan created FLINK-22376: - Summary: SequentialChannelStateReaderImpl may recycle buffer twice Key: FLINK-22376 URL: https://issues.apache.org/jira/browse/FLINK-22376 Project: Flink Issue Type: Bug Components: Runtime / Network, Runtime / Task Affects Versions: 1.13.0 Reporter: Roman Khachatryan Fix For: 1.13.0 In ChannelStateChunkReader.readChunk, the buffer is recycled in the catch block when an error occurs. However, it might already have been recycled in stateHandler.recover(). Using minor priority, as this only affects an already-failing path. -- This message was sent by Atlassian Jira (v8.3.4#803005)
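A common fix for this class of bug is to make recycling idempotent, so the error-handling path can release unconditionally without knowing whether the recovery path already did. A minimal sketch (GuardedBuffer is a hypothetical class, not the actual Flink network buffer):

```java
// Hypothetical sketch: an idempotent recycle() guards against double release.
// Not the actual Flink Buffer implementation.
final class GuardedBuffer {
    private boolean recycled = false;

    void recycle() {
        if (recycled) {
            return; // second call is a harmless no-op instead of a double free
        }
        recycled = true;
    }

    boolean isRecycled() {
        return recycled;
    }
}
```

The real fix may instead track ownership so recycle() is called exactly once; the latch above just illustrates why the failing path stops corrupting the pool.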
[jira] [Created] (FLINK-22375) Disable JDBC XA connection pooling if unnecessary
Roman Khachatryan created FLINK-22375: - Summary: Disable JDBC XA connection pooling if unnecessary Key: FLINK-22375 URL: https://issues.apache.org/jira/browse/FLINK-22375 Project: Flink Issue Type: Improvement Components: Connectors / JDBC Affects Versions: 1.13.0 Reporter: Roman Khachatryan In FLINK-22239, pooling was added so that each active XA transaction uses a separate connection. Some databases (like Oracle; possibly others, to be checked) don't need it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-22374) ConfigOptionsDocsCompletenessITCase fails on Travis
Leonard Xu created FLINK-22374: -- Summary: ConfigOptionsDocsCompletenessITCase fails on traivs Key: FLINK-22374 URL: https://issues.apache.org/jira/browse/FLINK-22374 Project: Flink Issue Type: Bug Components: Tests Affects Versions: 1.13.0 Reporter: Leonard Xu [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.128 s <<< FAILURE! - in org.apache.flink.docs.configuration.ConfigOptionsDocsCompletenessITCase Apr 20 12:27:32 [ERROR] testCompleteness(org.apache.flink.docs.configuration.ConfigOptionsDocsCompletenessITCase) Time elapsed: 1.043 s <<< FAILURE! Apr 20 12:27:32 java.lang.AssertionError: Apr 20 12:27:32 Documentation is outdated, please regenerate it according to the instructions in flink-docs/README.md. Apr 20 12:27:32 Problems: Apr 20 12:27:32 Documentation of table.local-time-zone in TableConfigOptions is outdated. Expected: default=("default") description=(The local time zone defines current session time zone id. It is used when converting to/from codeTIMESTAMP WITH LOCAL TIME ZONE/code. Internally, timestamps with local time zone are always represented in the UTC time zone. However, when converting to data types that don't include a time zone (e.g. TIMESTAMP, TIME, or simply STRING), the session time zone is used during conversion. The input of option is either a full name such as "America/Los_Angeles", or a custom timezone id such as "GMT-8:00".). Apr 20 12:27:32 Documented option table.local-time-zone does not exist. 
Apr 20 12:27:32 at org.junit.Assert.fail(Assert.java:88) Apr 20 12:27:32 at org.apache.flink.docs.configuration.ConfigOptionsDocsCompletenessITCase.compareDocumentedAndExistingOptions(ConfigOptionsDocsCompletenessITCase.java:220) Apr 20 12:27:32 at org.apache.flink.docs.configuration.ConfigOptionsDocsCompletenessITCase.testCompleteness(ConfigOptionsDocsCompletenessITCase.java:76) Apr 20 12:27:32 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) Apr 20 12:27:32 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) Apr 20 12:27:32 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) Apr 20 12:27:32 at java.lang.reflect.Method.invoke(Method.java:498) Apr 20 12:27:32 at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) Apr 20 12:27:32 at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) Apr 20 12:27:32 at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16870=logs=fc5181b0-e452-5c8f-68de-1097947f6483=62110053-334f-5295-a0ab-80dd7e2babbf=34561 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-22373) Create release notes for 1.13
Dawid Wysakowicz created FLINK-22373: Summary: Create release notes for 1.13 Key: FLINK-22373 URL: https://issues.apache.org/jira/browse/FLINK-22373 Project: Flink Issue Type: Task Components: Documentation Reporter: Dawid Wysakowicz Assignee: Dawid Wysakowicz Fix For: 1.13.0 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-22372) Rename LogicalTypeCasts class variables in the castTo method.
Wong Mulan created FLINK-22372: -- Summary: Rename LogicalTypeCasts class variables in the castTo method. Key: FLINK-22372 URL: https://issues.apache.org/jira/browse/FLINK-22372 Project: Flink Issue Type: Improvement Components: Table SQL / Client Affects Versions: 1.12.3, 1.13.1 Reporter: Wong Mulan {code:java} // Old private static CastingRuleBuilder castTo(LogicalTypeRoot sourceType) { return new CastingRuleBuilder(sourceType); } // New private static CastingRuleBuilder castTo(LogicalTypeRoot targetType) { return new CastingRuleBuilder(targetType); } {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-22371) Chinese grammatical error
simenliuxing created FLINK-22371: Summary: Chinese grammatical error Key: FLINK-22371 URL: https://issues.apache.org/jira/browse/FLINK-22371 Project: Flink Issue Type: Improvement Components: Documentation Affects Versions: 1.12.2 Reporter: simenliuxing Fix For: 1.13.0 There is a Chinese grammatical error at https://ci.apache.org/projects/flink/flink-docs-release-1.12/zh/dev/table/common.html : the text "下文讨论的 {{DataSet}} API 只与旧计划起有关。" should be replaced with "下文讨论的 {{DataSet}} API 只与旧计划有关。" -- This message was sent by Atlassian Jira (v8.3.4#803005)
Re: [DISCUSS] Simplify SQL lookup join (temporal join with latest) syntax
Thanks for the pointer, Jingsong. I don't see how proctime() is ambiguous though, as it always refers to the current wall clock time. I think that's much better than adding a magic pseudocolumn. Cheers, Gyula On Tue, Apr 20, 2021 at 11:06 AM Jingsong Li wrote: > +1 for simplifying. > > We already have a thread of this topic: > > http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Make-Temporal-Join-syntax-easier-to-use-td47296.html > FYI. > > Best, > Jingsong > > On Tue, Apr 20, 2021 at 4:55 PM Gyula Fóra wrote: > > > Hi All! > > > > Playing around with the SQL syntax for temporal join with latest table I > > feel there is some room for optimizing the current syntax to provide a > > better user experience. > > > > The current system for specifying the lookup side is: > > > > lookuptable FOR SYSTEM_TIME AS OF probe.proctime_column > > > > It feels a bit clumsy to have to define a proctime() column in the probe > > table as I think it brings no real syntactic value and just introduces an > > overhead. > > > > I think we should allow the following syntax instead: > > > > lookuptable FOR SYSTEM_TIME AS OF proctime() > > > > To me this means the same thing and Flink can easily map it to the same > > lookup join operator. Playing around with the planner logic, this is > > surprisingly simple to implement (basicly just a 2 line change). > > > > It would be good to hear some SQL expert opinions of any potential > downside > > to this. If this makes sense I am happy to contribute this change. > > > > Cheers, > > Gyula > > > > > -- > Best, Jingsong Lee >
Re: [DISCUSS] Simplify SQL lookup join (temporal join with latest) syntax
+1 for simplifying. We already have a thread of this topic: http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Make-Temporal-Join-syntax-easier-to-use-td47296.html FYI. Best, Jingsong On Tue, Apr 20, 2021 at 4:55 PM Gyula Fóra wrote: > Hi All! > > Playing around with the SQL syntax for temporal join with latest table I > feel there is some room for optimizing the current syntax to provide a > better user experience. > > The current system for specifying the lookup side is: > > lookuptable FOR SYSTEM_TIME AS OF probe.proctime_column > > It feels a bit clumsy to have to define a proctime() column in the probe > table as I think it brings no real syntactic value and just introduces an > overhead. > > I think we should allow the following syntax instead: > > lookuptable FOR SYSTEM_TIME AS OF proctime() > > To me this means the same thing and Flink can easily map it to the same > lookup join operator. Playing around with the planner logic, this is > surprisingly simple to implement (basicly just a 2 line change). > > It would be good to hear some SQL expert opinions of any potential downside > to this. If this makes sense I am happy to contribute this change. > > Cheers, > Gyula > -- Best, Jingsong Lee
[DISCUSS] Simplify SQL lookup join (temporal join with latest) syntax
Hi All! Playing around with the SQL syntax for temporal join with the latest table, I feel there is some room for optimizing the current syntax to provide a better user experience. The current system for specifying the lookup side is: lookuptable FOR SYSTEM_TIME AS OF probe.proctime_column It feels a bit clumsy to have to define a proctime() column in the probe table, as I think it brings no real syntactic value and just introduces an overhead. I think we should allow the following syntax instead: lookuptable FOR SYSTEM_TIME AS OF proctime() To me this means the same thing, and Flink can easily map it to the same lookup join operator. Playing around with the planner logic, this is surprisingly simple to implement (basically just a two-line change). It would be good to hear some SQL expert opinions on any potential downside to this. If this makes sense, I am happy to contribute this change. Cheers, Gyula
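For comparison, the two variants side by side in a full query. The table and column names (Orders, Rates, currency, proc_time) are made up for illustration; only the FOR SYSTEM_TIME AS OF clause is from the proposal above:

```sql
-- Current syntax: requires a processing-time column declared on the probe table
SELECT *
FROM Orders AS o
JOIN Rates FOR SYSTEM_TIME AS OF o.proc_time AS r
  ON o.currency = r.currency;

-- Proposed syntax: no proctime column needed on the probe side
SELECT *
FROM Orders AS o
JOIN Rates FOR SYSTEM_TIME AS OF proctime() AS r
  ON o.currency = r.currency;
```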
Re: [jira] [Created] (FLINK-21986) taskmanager native memory not release timely after restart
Thanks for raising this issue Feifan. I think it is very important to fix it. I will take a look at your PR. Cheers, Till On Tue, Apr 20, 2021 at 5:35 AM zoltar9264 wrote: > > > Hi community, > I raised this issue about three weeks ago. After several weeks of > investigation, I found the root cause of this issue and explained it in the > issue comments. > And I raised a PR to fix this problem ( I'm sorry that I didn't > know before that I should raise the PR after the issue was assigned to me. > I will pay attention next time. ). > Now I request the committer of the relevant module to check this > issue, assign this issue to me, and review this PR. > > > Issue URL: https://issues.apache.org/jira/browse/FLINK-21986 > PR URL: https://github.com/apache/flink/pull/15619 > > > Best wishes, > Feifan Wang > > > —— > Name: Feifan Wang > Email: zoltar9...@163.com > > > On 03/26/2021 12:00,Feifan Wang (Jira) wrote: > Feifan Wang created FLINK-21986: > --- > > Summary: taskmanager native memory not release timely after restart > Key: FLINK-21986 > URL: https://issues.apache.org/jira/browse/FLINK-21986 > Project: Flink > Issue Type: Bug > Components: Runtime / State Backends > Affects Versions: 1.12.1 > Environment: flink version:1.12.1 > run :yarn session > job type:mock source -> regular join > > checkpoint interval: 3m > Taskmanager memory : 16G > > Reporter: Feifan Wang > Attachments: image-2021-03-25-15-53-44-214.png, > image-2021-03-25-16-07-29-083.png, image-2021-03-26-11-46-06-828.png, > image-2021-03-26-11-47-21-388.png > > I run a regular join job with flink_1.12.1 , and find taskmanager native > memory not release timely after restart cause by exceeded checkpoint > tolerable failure threshold. > > *problem job information:* > # job first restart cause by exceeded checkpoint tolerable failure > threshold. 
> # then the taskmanager was killed by yarn many times > # in this case, tm heap is set to 7.68G, but all tm heap usage is under 4.2G > !image-2021-03-25-15-53-44-214.png|width=496,height=103! > # nonheap size increases after restart, but stays under 160M. > !https://km.sankuai.com/api/file/cdn/706284607/716474606?contentType=1=false=false|width=493,height=102! > # taskmanager process memory increases by 3-4G after restart (this figure shows one of the taskmanagers) > !image-2021-03-25-16-07-29-083.png|width=493,height=107! > > *my guess:* > > The [RocksDB wiki|https://github.com/facebook/rocksdb/wiki/RocksJava-Basics#memory-management] mentions: Many of the Java Objects used in the RocksJava API will be backed by C++ objects for which the Java Objects have ownership. As C++ has no notion of automatic garbage collection for its heap in the way that Java does, we must explicitly free the memory used by the C++ objects when we are finished with them. > > So, is it possible that RocksDBStateBackend does not call AbstractNativeReference#close() to release the memory used by RocksDB C++ objects? > > *I made a change:* > > Actively call System.gc() and System.runFinalization() every minute. > > *And ran this test again:* > # no obvious increase in taskmanager process memory > !image-2021-03-26-11-46-06-828.png|width=495,height=93! > # the job ran for several days and restarted many times, but no taskmanager was killed by yarn as before > > *Summary:* > # first, some native memory is not released in a timely manner after restart in this situation > # I guess it may be RocksDB C++ objects, but I have not verified this in the source code of RocksDBStateBackend > > -- > This message was sent by Atlassian Jira > (v8.3.4#803005) >
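The explicit-release pattern the RocksJava wiki describes can be sketched as follows. NativeHandle is a stand-in class invented for this example, not an actual RocksDB type; real RocksJava classes (Options, WriteBatch, RocksIterator, ...) extend AbstractNativeReference, which implements AutoCloseable:

```java
// Stand-in for a Java object that owns C++-side memory. The native
// allocation is freed by close(), not by the garbage collector, so it
// must be released explicitly (ideally via try-with-resources).
final class NativeHandle implements AutoCloseable {
    static int liveNativeAllocations = 0; // stands in for C++ heap usage

    NativeHandle() {
        liveNativeAllocations++; // "malloc" on construction
    }

    @Override
    public void close() {
        liveNativeAllocations--; // "free" -- must be called explicitly
    }
}
```

Wrapping usage in try-with-resources releases the native side deterministically, instead of waiting for GC/finalization as the System.gc() workaround above does.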
[jira] [Created] (FLINK-22370) ParquetColumnarRowSplitReader#reachedEnd() returns false after it returns true
Danny Chen created FLINK-22370: -- Summary: ParquetColumnarRowSplitReader#reachedEnd() returns false after it returns true Key: FLINK-22370 URL: https://issues.apache.org/jira/browse/FLINK-22370 Project: Flink Issue Type: Bug Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile) Affects Versions: 1.12.2, 1.13.1 Reporter: Danny Chen Fix For: 1.13.1 {{ParquetColumnarRowSplitReader#reachedEnd()}} should keep returning true once it has returned true for the first time. -- This message was sent by Atlassian Jira (v8.3.4#803005)
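The contract can be enforced with a simple latch. A minimal sketch (LatchedSplitReader is hypothetical, not the actual ParquetColumnarRowSplitReader):

```java
// Hypothetical sketch: once reachedEnd() observes the end of the split,
// it latches and keeps returning true on every subsequent call, even if
// internal state changes afterwards.
final class LatchedSplitReader {
    private final int totalRecords;
    private int position = 0;
    private boolean endReached = false;

    LatchedSplitReader(int totalRecords) {
        this.totalRecords = totalRecords;
    }

    boolean reachedEnd() {
        if (position >= totalRecords) {
            endReached = true; // latch: never flips back to false
        }
        return endReached;
    }

    void readRecord() {
        position++;
    }
}
```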
[jira] [Created] (FLINK-22369) RocksDB state backend might throw ClassNotFoundException when deserialized on the TM side
Yun Tang created FLINK-22369: Summary: RocksDB state backend might throw ClassNotFoundException when deserialized on the TM side Key: FLINK-22369 URL: https://issues.apache.org/jira/browse/FLINK-22369 Project: Flink Issue Type: Bug Affects Versions: 1.13.0 Reporter: Yun Tang Fix For: 1.13.0 Attachments: image-2021-04-20-15-18-49-706.png FLINK-19467 introduced the new {{EmbeddedRocksDBStateBackend}} and added the new interface {{[EmbeddedRocksDBStateBackend#setLogger|https://github.com/apache/flink/blob/24031e55e4cf35a5818db2e927e65b290a9b2aed/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/EmbeddedRocksDBStateBackend.java#L287]}} to ensure users of the legacy {{RocksDBStateBackend}} see consistent logging. However, this change introduces another non-transient {{[logger|https://github.com/apache/flink/blob/24031e55e4cf35a5818db2e927e65b290a9b2aed/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/EmbeddedRocksDBStateBackend.java#L115]}} field, which is deserialized on the TM side first. If the client has a different log4j implementation, we might hit a ClassNotFoundException: !image-2021-04-20-15-18-49-706.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
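The usual remedy for this failure mode is to keep the logger out of the serialized form, e.g. by declaring the field transient (or static final). A minimal sketch with invented class names (BackendSketch is not the actual EmbeddedRocksDBStateBackend):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Invented example class. Because the logger field is transient, the
// client-side logger object (and hence its logging-implementation class)
// is never written into the serialized backend, so deserialization on the
// TM cannot fail with ClassNotFoundException for that class.
class BackendSketch implements Serializable {
    private static final long serialVersionUID = 1L;

    private transient Object logger = new Object(); // stands in for a Logger

    // Serializes and deserializes the backend, returning whether it survived.
    static boolean roundTrips(BackendSketch backend) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            new ObjectOutputStream(bos).writeObject(backend);
            new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))
                    .readObject();
            return true;
        } catch (Exception e) {
            return false;
        }
    }
}
```

Note that without the transient modifier this very sketch would fail to serialize at all (java.lang.Object is not Serializable), mirroring how a non-serializable or unloadable logger breaks the real backend.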
[jira] [Created] (FLINK-22368) UnalignedCheckpointITCase hangs on azure
Dawid Wysakowicz created FLINK-22368:
Summary: UnalignedCheckpointITCase hangs on azure
Key: FLINK-22368
URL: https://issues.apache.org/jira/browse/FLINK-22368
Project: Flink
Issue Type: Bug
Affects Versions: 1.13.0
Reporter: Dawid Wysakowicz
Fix For: 1.13.0

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16818=logs=b0a398c0-685b-599c-eb57-c8c2a771138e=d13f554f-d4b9-50f8-30ee-d49c6fb0b3cc=10144
[jira] [Created] (FLINK-22367) JobMasterStopWithSavepointITCase.terminateWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished times out
Dawid Wysakowicz created FLINK-22367:
Summary: JobMasterStopWithSavepointITCase.terminateWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished times out
Key: FLINK-22367
URL: https://issues.apache.org/jira/browse/FLINK-22367
Project: Flink
Issue Type: Bug
Components: Runtime / Checkpointing
Affects Versions: 1.13.0
Reporter: Dawid Wysakowicz
Fix For: 1.13.0

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16818=logs=2c3cbe13-dee0-5837-cf47-3053da9a8a78=2c7d57b9-7341-5a87-c9af-2cf7cc1a37dc=3844

{code}
[ERROR] Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 13.135 s <<< FAILURE! - in org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase
Apr 19 22:28:44 [ERROR] terminateWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished(org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase) Time elapsed: 10.237 s <<< ERROR!
Apr 19 22:28:44 java.util.concurrent.ExecutionException: java.util.concurrent.TimeoutException: Invocation of public default java.util.concurrent.CompletableFuture org.apache.flink.runtime.webmonitor.RestfulGateway.stopWithSavepoint(org.apache.flink.api.common.JobID,java.lang.String,boolean,org.apache.flink.api.common.time.Time) timed out.
Apr 19 22:28:44 	at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
Apr 19 22:28:44 	at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
Apr 19 22:28:44 	at org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase.stopWithSavepointNormalExecutionHelper(JobMasterStopWithSavepointITCase.java:123)
Apr 19 22:28:44 	at org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase.terminateWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished(JobMasterStopWithSavepointITCase.java:111)
Apr 19 22:28:44 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Apr 19 22:28:44 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Apr 19 22:28:44 	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Apr 19 22:28:44 	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
Apr 19 22:28:44 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
Apr 19 22:28:44 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
Apr 19 22:28:44 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
Apr 19 22:28:44 	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
Apr 19 22:28:44 	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
Apr 19 22:28:44 	at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
Apr 19 22:28:44 	at org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
Apr 19 22:28:44 	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
Apr 19 22:28:44 	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
Apr 19 22:28:44 	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
Apr 19 22:28:44 	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
Apr 19 22:28:44 	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
Apr 19 22:28:44 	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
Apr 19 22:28:44 	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
Apr 19 22:28:44 	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
Apr 19 22:28:44 	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
Apr 19 22:28:44 	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
Apr 19 22:28:44 	at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
Apr 19 22:28:44 	at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
Apr 19 22:28:44 	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
Apr 19 22:28:44 	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
Apr 19 22:28:44 	at org.junit.runners.Suite.runChild(Suite.java:128)
Apr 19 22:28:44 	at org.junit.runners.Suite.runChild(Suite.java:27)
Apr 19 22:28:44 	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
Apr 19 22:28:44 	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
Apr 19 22:28:44 	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
Apr 19 22:28:44 	at
[jira] [Created] (FLINK-22366) HiveSinkCompactionITCase fails on azure
Dawid Wysakowicz created FLINK-22366:
Summary: HiveSinkCompactionITCase fails on azure
Key: FLINK-22366
URL: https://issues.apache.org/jira/browse/FLINK-22366
Project: Flink
Issue Type: Bug
Components: Table SQL / Ecosystem
Affects Versions: 1.13.0
Reporter: Dawid Wysakowicz
Fix For: 1.13.0

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16818=logs=245e1f2e-ba5b-5570-d689-25ae21e5302f=e7f339b2-a7c3-57d9-00af-3712d4b15354=23420

{code}
[ERROR] testNonPartition[format = sequencefile](org.apache.flink.connectors.hive.HiveSinkCompactionITCase) Time elapsed: 4.999 s <<< FAILURE!
Apr 19 22:25:10 java.lang.AssertionError: expected:<[+I[0, 0, 0], +I[0, 0, 0], +I[1, 1, 1], +I[1, 1, 1], +I[2, 2, 2], +I[2, 2, 2], +I[3, 3, 3], +I[3, 3, 3], +I[4, 4, 4], +I[4, 4, 4], +I[5, 5, 5], +I[5, 5, 5], +I[6, 6, 6], +I[6, 6, 6], +I[7, 7, 7], +I[7, 7, 7], +I[8, 8, 8], +I[8, 8, 8], +I[9, 9, 9], +I[9, 9, 9], +I[10, 0, 0], +I[10, 0, 0], +I[11, 1, 1], +I[11, 1, 1], +I[12, 2, 2], +I[12, 2, 2], +I[13, 3, 3], +I[13, 3, 3], +I[14, 4, 4], +I[14, 4, 4], +I[15, 5, 5], +I[15, 5, 5], +I[16, 6, 6], +I[16, 6, 6], +I[17, 7, 7], +I[17, 7, 7], +I[18, 8, 8], +I[18, 8, 8], +I[19, 9, 9], +I[19, 9, 9], +I[20, 0, 0], +I[20, 0, 0], +I[21, 1, 1], +I[21, 1, 1], +I[22, 2, 2], +I[22, 2, 2], +I[23, 3, 3], +I[23, 3, 3], +I[24, 4, 4], +I[24, 4, 4], +I[25, 5, 5], +I[25, 5, 5], +I[26, 6, 6], +I[26, 6, 6], +I[27, 7, 7], +I[27, 7, 7], +I[28, 8, 8], +I[28, 8, 8], +I[29, 9, 9], +I[29, 9, 9], +I[30, 0, 0], +I[30, 0, 0], +I[31, 1, 1], +I[31, 1, 1], +I[32, 2, 2], +I[32, 2, 2], +I[33, 3, 3], +I[33, 3, 3], +I[34, 4, 4], +I[34, 4, 4], +I[35, 5, 5], +I[35, 5, 5], +I[36, 6, 6], +I[36, 6, 6], +I[37, 7, 7], +I[37, 7, 7], +I[38, 8, 8], +I[38, 8, 8], +I[39, 9, 9], +I[39, 9, 9], +I[40, 0, 0], +I[40, 0, 0], +I[41, 1, 1], +I[41, 1, 1], +I[42, 2, 2], +I[42, 2, 2], +I[43, 3, 3], +I[43, 3, 3], +I[44, 4, 4], +I[44, 4, 4], +I[45, 5, 5], +I[45, 5, 5], +I[46, 6, 6], +I[46, 6, 6], +I[47, 7, 7], +I[47, 7, 7], +I[48, 8, 8],
+I[48, 8, 8], +I[49, 9, 9], +I[49, 9, 9], +I[50, 0, 0], +I[50, 0, 0], +I[51, 1, 1], +I[51, 1, 1], +I[52, 2, 2], +I[52, 2, 2], +I[53, 3, 3], +I[53, 3, 3], +I[54, 4, 4], +I[54, 4, 4], +I[55, 5, 5], +I[55, 5, 5], +I[56, 6, 6], +I[56, 6, 6], +I[57, 7, 7], +I[57, 7, 7], +I[58, 8, 8], +I[58, 8, 8], +I[59, 9, 9], +I[59, 9, 9], +I[60, 0, 0], +I[60, 0, 0], +I[61, 1, 1], +I[61, 1, 1], +I[62, 2, 2], +I[62, 2, 2], +I[63, 3, 3], +I[63, 3, 3], +I[64, 4, 4], +I[64, 4, 4], +I[65, 5, 5], +I[65, 5, 5], +I[66, 6, 6], +I[66, 6, 6], +I[67, 7, 7], +I[67, 7, 7], +I[68, 8, 8], +I[68, 8, 8], +I[69, 9, 9], +I[69, 9, 9], +I[70, 0, 0], +I[70, 0, 0], +I[71, 1, 1], +I[71, 1, 1], +I[72, 2, 2], +I[72, 2, 2], +I[73, 3, 3], +I[73, 3, 3], +I[74, 4, 4], +I[74, 4, 4], +I[75, 5, 5], +I[75, 5, 5], +I[76, 6, 6], +I[76, 6, 6], +I[77, 7, 7], +I[77, 7, 7], +I[78, 8, 8], +I[78, 8, 8], +I[79, 9, 9], +I[79, 9, 9], +I[80, 0, 0], +I[80, 0, 0], +I[81, 1, 1], +I[81, 1, 1], +I[82, 2, 2], +I[82, 2, 2], +I[83, 3, 3], +I[83, 3, 3], +I[84, 4, 4], +I[84, 4, 4], +I[85, 5, 5], +I[85, 5, 5], +I[86, 6, 6], +I[86, 6, 6], +I[87, 7, 7], +I[87, 7, 7], +I[88, 8, 8], +I[88, 8, 8], +I[89, 9, 9], +I[89, 9, 9], +I[90, 0, 0], +I[90, 0, 0], +I[91, 1, 1], +I[91, 1, 1], +I[92, 2, 2], +I[92, 2, 2], +I[93, 3, 3], +I[93, 3, 3], +I[94, 4, 4], +I[94, 4, 4], +I[95, 5, 5], +I[95, 5, 5], +I[96, 6, 6], +I[96, 6, 6], +I[97, 7, 7], +I[97, 7, 7], +I[98, 8, 8], +I[98, 8, 8], +I[99, 9, 9], +I[99, 9, 9]]> but was:<[+I[0, 0, 0], +I[1, 1, 1], +I[2, 2, 2], +I[3, 3, 3], +I[4, 4, 4], +I[5, 5, 5], +I[6, 6, 6], +I[7, 7, 7], +I[8, 8, 8], +I[9, 9, 9], +I[10, 0, 0], +I[11, 1, 1], +I[12, 2, 2], +I[13, 3, 3], +I[14, 4, 4], +I[15, 5, 5], +I[16, 6, 6], +I[17, 7, 7], +I[18, 8, 8], +I[19, 9, 9], +I[20, 0, 0], +I[21, 1, 1], +I[22, 2, 2], +I[23, 3, 3], +I[24, 4, 4], +I[25, 5, 5], +I[26, 6, 6], +I[27, 7, 7], +I[28, 8, 8], +I[29, 9, 9], +I[30, 0, 0], +I[31, 1, 1], +I[32, 2, 2], +I[33, 3, 3], +I[34, 4, 4], +I[35, 5, 5], +I[36, 6, 6], +I[37, 7, 7], +I[38, 8, 8], +I[39, 9, 
9], +I[40, 0, 0], +I[41, 1, 1], +I[42, 2, 2], +I[43, 3, 3], +I[44, 4, 4], +I[45, 5, 5], +I[46, 6, 6], +I[47, 7, 7], +I[48, 8, 8], +I[49, 9, 9], +I[50, 0, 0], +I[51, 1, 1], +I[52, 2, 2], +I[53, 3, 3], +I[54, 4, 4], +I[55, 5, 5], +I[56, 6, 6], +I[57, 7, 7], +I[58, 8, 8], +I[59, 9, 9], +I[60, 0, 0], +I[61, 1, 1], +I[62, 2, 2], +I[63, 3, 3], +I[64, 4, 4], +I[65, 5, 5], +I[66, 6, 6], +I[67, 7, 7], +I[68, 8, 8], +I[69, 9, 9], +I[70, 0, 0], +I[71, 1, 1], +I[72, 2, 2], +I[73, 3, 3], +I[74, 4, 4], +I[75, 5, 5], +I[76, 6, 6], +I[77, 7, 7], +I[78, 8, 8], +I[79, 9, 9], +I[80, 0, 0], +I[81, 1, 1], +I[82, 2, 2], +I[83, 3, 3], +I[84, 4, 4], +I[85, 5, 5], +I[86, 6, 6], +I[87, 7, 7], +I[88, 8, 8], +I[89, 9, 9], +I[90, 0, 0], +I[91, 1, 1], +I[92, 2, 2], +I[93, 3, 3], +I[94, 4, 4], +I[95, 5, 5],