[jira] [Updated] (FLINK-13567) Avro Confluent Schema Registry nightly end-to-end test failed on Travis
[ https://issues.apache.org/jira/browse/FLINK-13567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Till Rohrmann updated FLINK-13567:
----------------------------------
    Fix Version/s: 1.9.0

> Avro Confluent Schema Registry nightly end-to-end test failed on Travis
> -----------------------------------------------------------------------
>
>                 Key: FLINK-13567
>                 URL: https://issues.apache.org/jira/browse/FLINK-13567
>             Project: Flink
>          Issue Type: Bug
>          Components: Tests
>    Affects Versions: 1.9.0, 1.10.0
>            Reporter: Till Rohrmann
>            Priority: Blocker
>              Labels: test-stability
>             Fix For: 1.9.0
>
>         Attachments: patch.diff
>
> The {{Avro Confluent Schema Registry nightly end-to-end test}} failed on Travis with
> {code}
> [FAIL] 'Avro Confluent Schema Registry nightly end-to-end test' failed after 2 minutes and 11 seconds! Test exited with exit code 1
> No taskexecutor daemon (pid: 29044) is running anymore on travis-job-b0823aec-c4ec-4d4b-8b59-e9f968de9501.
> No standalonesession daemon to stop on host travis-job-b0823aec-c4ec-4d4b-8b59-e9f968de9501.
> rm: cannot remove '/home/travis/build/apache/flink/flink-dist/target/flink-1.10-SNAPSHOT-bin/flink-1.10-SNAPSHOT/plugins': No such file or directory
> {code}
> https://api.travis-ci.org/v3/job/567273939/log.txt

--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
[GitHub] [flink] dubin555 opened a new pull request #9356: [FLINK-13340][kafka][table] Add 'topics' and 'subscription' option for Flink Kafka connector
dubin555 opened a new pull request #9356: [FLINK-13340][kafka][table] Add 'topics' and 'subscription' option for Flink Kafka connector
URL: https://github.com/apache/flink/pull/9356

## What is the purpose of the change

This pull request enables the Flink Kafka table descriptor to accept more topic options:

```
new Kafka()
    .version("0.11")
    .topic("test-flink-1")
    // .subscriptionPattern("test-flink-.*")
    // .topics("test-flink-1", "test-flink-2")
    .startFromEarliest()
    .property("zookeeper.connect", "sap-zookeeper1:2181")
    .property("bootstrap.servers", "sap-kafka1:9092"))
```

Any one of 'topic', 'topics', and 'subscriptionPattern' is functional.

## Brief change log

- *Implement a new class 'KafkaTopicDescriptor' to describe the Kafka topic options*
- *Implement 'KafkaConsumerValidator' and 'KafkaProducerValidator' instead of the single 'KafkaValidator' to validate the consumer topic settings and the producer settings*
- *Implement the consumer function with the help of the 'KafkaConsumer' of each Kafka version*

## Verifying this change

This change is already covered by existing tests, such as *KafkaTest*.

## Does this pull request potentially affect one of the following parts:

- Dependencies (does it add or upgrade a dependency): yes
- The public API, i.e., is any changed class annotated with `@Public(Evolving)`: no
- The serializers: don't know
- The runtime per-record code paths (performance sensitive): no
- Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: no
- The S3 file system connector: no

## Documentation

- Does this pull request introduce a new feature? no
- If yes, how is the feature documented? JavaDocs

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

With regards,
Apache Git Services
[jira] [Updated] (FLINK-13340) Add more Kafka topic option of flink-connector-kafka
[ https://issues.apache.org/jira/browse/FLINK-13340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-13340:
-----------------------------------
    Labels: features pull-request-available  (was: features)

> Add more Kafka topic option of flink-connector-kafka
> ----------------------------------------------------
>
>                 Key: FLINK-13340
>                 URL: https://issues.apache.org/jira/browse/FLINK-13340
>             Project: Flink
>          Issue Type: Improvement
>          Components: Connectors / Kafka, Table SQL / API
>    Affects Versions: 1.8.1
>            Reporter: DuBin
>            Assignee: DuBin
>            Priority: Major
>              Labels: features, pull-request-available
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Currently, only the 'topic' option is implemented in the Kafka connector descriptor, so we can only use it like:
> {code:java}
> val env = StreamExecutionEnvironment.getExecutionEnvironment
> val tableEnv = StreamTableEnvironment.create(env)
> tableEnv
>   .connect(
>     new Kafka()
>       .version("0.11")
>       .topic("test-flink-1")
>       .startFromEarliest()
>       .property("zookeeper.connect", "localhost:2181")
>       .property("bootstrap.servers", "localhost:9092"))
>   .withFormat(
>     new Json()
>       .deriveSchema()
>   )
>   .withSchema(
>     new Schema()
>       .field("name", Types.STRING)
>       .field("age", Types.STRING)
>   ){code}
> but we cannot consume multiple topics or a topic regex pattern.
> Here are my thoughts:
> {code:java}
> .topic("test-flink-1")
> //.topics("test-flink-1,test-flink-2") or topics(List topics)
> //.subscriptionPattern("test-flink-.*") or subscriptionPattern(Pattern pattern)
> {code}
> I have already implemented the code in my local environment with the help of the FlinkKafkaConsumer, and it works.
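For readers skimming the archive: the pattern-based subscription proposed above comes down to regex matching over the cluster's topic names at subscription/discovery time (the underlying FlinkKafkaConsumer already offers constructors taking a `List<String>` of topics or a `java.util.regex.Pattern`). A minimal plain-Java sketch of just that matching step, with a hypothetical `matchTopics` helper and no Kafka dependency:

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class TopicSubscription {

    // Hypothetical helper: select the topics a subscription pattern would match,
    // mirroring what a pattern-based Kafka subscription does during topic discovery.
    static List<String> matchTopics(List<String> allTopics, String regex) {
        Pattern pattern = Pattern.compile(regex);
        return allTopics.stream()
                // matches() requires the full topic name to match, as Kafka's
                // pattern subscription does (not a substring find()).
                .filter(t -> pattern.matcher(t).matches())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> cluster = Arrays.asList("test-flink-1", "test-flink-2", "other-topic");
        // The "test-flink-.*" pattern from the ticket would subscribe to the two test topics.
        System.out.println(matchTopics(cluster, "test-flink-.*"));  // [test-flink-1, test-flink-2]
    }
}
```

The `topics(...)` variant is the degenerate case of this: an explicit list instead of a computed one.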
[GitHub] [flink] flinkbot commented on issue #9356: [FLINK-13340][kafka][table] Add 'topics' and 'subscription' option for Flink Kafka connector
flinkbot commented on issue #9356: [FLINK-13340][kafka][table] Add 'topics' and 'subscription' option for Flink Kafka connector
URL: https://github.com/apache/flink/pull/9356#issuecomment-518115797

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review.

## Review Progress

* ❓ 1. The [description] looks good.
* ❓ 2. There is [consensus] that the contribution should go into Flink.
* ❓ 3. Needs [attention] from.
* ❓ 4. The change fits into the overall [architecture].
* ❓ 5. Overall code [quality] is good.

Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process.

The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands

The @flinkbot bot supports the following commands:

- `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until `architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink] zjuwangg commented on issue #9337: [FLINK-13475][hive]Reduce dependency on third-party maven repositories
zjuwangg commented on issue #9337: [FLINK-13475][hive]Reduce dependency on third-party maven repositories
URL: https://github.com/apache/flink/pull/9337#issuecomment-518115991

@bowenli86 @wuchong @xuefuz Could you take a look when you have time?
[GitHub] [flink] flinkbot commented on issue #9356: [FLINK-13340][kafka][table] Add 'topics' and 'subscription' option for Flink Kafka connector
flinkbot commented on issue #9356: [FLINK-13340][kafka][table] Add 'topics' and 'subscription' option for Flink Kafka connector
URL: https://github.com/apache/flink/pull/9356#issuecomment-518117192

## CI report:

* 994378a936cb3d0e91dd78607e81229c4680e7d6 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/121912507)
[GitHub] [flink] zjuwangg commented on issue #9342: [FLINK-13438][hive] Fix DataTypes.DATE/TIME/TIMESTAMP support for hive connectors
zjuwangg commented on issue #9342: [FLINK-13438][hive] Fix DataTypes.DATE/TIME/TIMESTAMP support for hive connectors
URL: https://github.com/apache/flink/pull/9342#issuecomment-518117631

cc @xuefuz @lirui-apache
[jira] [Assigned] (FLINK-13532) Broken links in documentation
[ https://issues.apache.org/jira/browse/FLINK-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jark Wu reassigned FLINK-13532:
-------------------------------
    Assignee: Biao Liu

> Broken links in documentation
> -----------------------------
>
>                 Key: FLINK-13532
>                 URL: https://issues.apache.org/jira/browse/FLINK-13532
>             Project: Flink
>          Issue Type: Bug
>          Components: Documentation
>    Affects Versions: 1.9.0, 1.10.0
>            Reporter: Chesnay Schepler
>            Assignee: Biao Liu
>            Priority: Blocker
>              Labels: pull-request-available
>             Fix For: 1.9.0
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code:java}
> [2019-07-31 15:58:08] ERROR `/zh/dev/table/hive_integration_example.html' not found.
> [2019-07-31 15:58:10] ERROR `/zh/dev/table/types.html' not found.
> [2019-07-31 15:58:10] ERROR `/zh/dev/table/hive_integration.html' not found.
> [2019-07-31 15:58:14] ERROR `/zh/dev/restart_strategies.html' not found.
> http://localhost:4000/zh/dev/table/hive_integration_example.html:
>  Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/zh/dev/table/types.html:
>  Remote file does not exist -- broken link!!!
> http://localhost:4000/zh/dev/table/hive_integration.html:
>  Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/zh/dev/restart_strategies.html:
>  Remote file does not exist -- broken link!!!{code}
[jira] [Commented] (FLINK-13199) ARM support for Flink
[ https://issues.apache.org/jira/browse/FLINK-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899835#comment-16899835 ]

wangxiyuan commented on FLINK-13199:
------------------------------------

Yeah, this ticket can be marked as a duplicate of https://issues.apache.org/jira/browse/FLINK-13448

> ARM support for Flink
> ---------------------
>
>                 Key: FLINK-13199
>                 URL: https://issues.apache.org/jira/browse/FLINK-13199
>             Project: Flink
>          Issue Type: New Feature
>          Components: Build System
>            Reporter: wangxiyuan
>            Priority: Critical
>
> There is no official ARM release for Flink. But based on my local tests, Flink, which is written in Java and Scala, builds and passes its tests well on ARM. So is it possible to support an ARM release officially? I think it may not be a huge amount of work.
>
> AFAIK, Flink now uses travis-ci, which supports only x86, as its CI gate. Is it possible to add an ARM one? I'm from the OpenLab community[1]. Similar to travis-ci, it is an open-source and free community which provides CI resources and systems for open-source projects, and it contains both ARM and x86 machines. It already helps some communities build their CI, such as OpenStack and CNCF.
>
> If the Flink community agrees to support ARM, I can spend my full time helping, e.g. with job definitions, CI maintenance, test fixes and so on. If Flink doesn't want to rely on OpenLab, we can donate ARM resources directly as well.
>
> I have already sent out a discussion on the mailing list[2]. Feel free to reply there or here.
>
> Thanks.
>
> [1]: https://openlabtesting.org/
> [2]: http://mail-archives.apache.org/mod_mbox/flink-dev/201907.mbox/browser
[GitHub] [flink] KurtYoung commented on issue #9264: [FLINK-13192][hive] Add tests for different Hive table formats
KurtYoung commented on issue #9264: [FLINK-13192][hive] Add tests for different Hive table formats
URL: https://github.com/apache/flink/pull/9264#issuecomment-518119852

Sure, I will take a look soon.
[jira] [Commented] (FLINK-13558) Include table examples in flink-dist
[ https://issues.apache.org/jira/browse/FLINK-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899836#comment-16899836 ]

Dawid Wysakowicz commented on FLINK-13558:
------------------------------------------

[~sjwiesman] My intention for this ticket was just to include the code examples in {{flink-examples}} in the dist.

> Include table examples in flink-dist
> ------------------------------------
>
>                 Key: FLINK-13558
>                 URL: https://issues.apache.org/jira/browse/FLINK-13558
>             Project: Flink
>          Issue Type: Improvement
>          Components: Examples, Table SQL / API
>    Affects Versions: 1.9.0
>            Reporter: Dawid Wysakowicz
>            Assignee: Dawid Wysakowicz
>            Priority: Critical
>             Fix For: 1.9.0
>
> We want to treat the Table API as a first-class API. We have already included it in the lib directory of Flink.
> We should also include some examples of the Table API in the distribution. Before that, we should strip all the dependencies and just include the classes from the example module.
[GitHub] [flink] pnowojski commented on a change in pull request #9340: [FLINK-13384][runtime] Fix back pressure sampling for SourceStreamTask
pnowojski commented on a change in pull request #9340: [FLINK-13384][runtime] Fix back pressure sampling for SourceStreamTask
URL: https://github.com/apache/flink/pull/9340#discussion_r310471128

## File path: flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/SourceStreamTask.java

@@ -58,6 +61,7 @@
 	public SourceStreamTask(Environment env) {
 		super(env);
+		this.sourceThread = new LegacySourceFunctionThread(getName());

Review comment:
Not the best solution, but let's not overthink this. +1 for setting the name in `performDefaultAction`
[GitHub] [flink] flinkbot edited a comment on issue #8559: [FLINK-12576][Network, Metrics]Take localInputChannel into account when compute inputQueueLength
flinkbot edited a comment on issue #8559: [FLINK-12576][Network, Metrics]Take localInputChannel into account when compute inputQueueLength
URL: https://github.com/apache/flink/pull/8559#issuecomment-511466573

## CI report:

* b2e38d5e9dabd95409899c56a3064e75378fdba3 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/119181531)
* e547ec5bd814c1b6e3c94a2b2ebc64f86f3ca66e : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121350230)
* 97e79678d8e7b9cea59da4ffe2d999a5f0f970b7 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121907735)
[jira] [Created] (FLINK-13578) Blink throws exception when using Types.INTERVAL_MILLIS in TableSource
Wei Zhong created FLINK-13578:
---------------------------------

             Summary: Blink throws exception when using Types.INTERVAL_MILLIS in TableSource
                 Key: FLINK-13578
                 URL: https://issues.apache.org/jira/browse/FLINK-13578
             Project: Flink
          Issue Type: Bug
          Components: Table SQL / Planner
    Affects Versions: 1.9.0
            Reporter: Wei Zhong

Running this program will throw a TableException:

{code:java}
object Tests {

  class MyTableSource extends InputFormatTableSource[java.lang.Long] {
    val data = new java.util.ArrayList[java.lang.Long]()
    data.add(1L)
    data.add(2L)
    data.add(3L)

    val dataType = Types.INTERVAL_MILLIS()
    val inputFormat = new CollectionInputFormat[java.lang.Long](
      data, dataType.createSerializer(new ExecutionConfig))

    override def getInputFormat: InputFormat[java.lang.Long, _ <: InputSplit] = inputFormat

    override def getTableSchema: TableSchema = TableSchema.fromTypeInfo(dataType)

    override def getReturnType: TypeInformation[java.lang.Long] = dataType
  }

  def main(args: Array[String]): Unit = {
    val tenv = TableEnvironmentImpl.create(
      EnvironmentSettings.newInstance().inBatchMode().useBlinkPlanner().build())
    val table = tenv.fromTableSource(new MyTableSource)
    tenv.registerTableSink("sink",
      Array("f0"),
      Array(Types.INTERVAL_MILLIS()),
      new CsvTableSink("/tmp/results"))
    table.select("f0").insertInto("sink")
    tenv.execute("test")
  }
}
{code}

The TableException detail:

{code:java}
Exception in thread "main" org.apache.flink.table.api.TableException: Unsupported conversion from data type 'INTERVAL SECOND(3)' (conversion class: java.time.Duration) to type information. Only data types that originated from type information fully support a reverse conversion.
	at org.apache.flink.table.types.utils.LegacyTypeInfoDataTypeConverter.toLegacyTypeInfo(LegacyTypeInfoDataTypeConverter.java:242)
	at org.apache.flink.table.types.utils.TypeConversions.fromDataTypeToLegacyInfo(TypeConversions.java:49)
	at org.apache.flink.table.runtime.types.TypeInfoDataTypeConverter.fromDataTypeToTypeInfo(TypeInfoDataTypeConverter.java:145)
	at org.apache.flink.table.runtime.types.TypeInfoLogicalTypeConverter.fromLogicalTypeToTypeInfo(TypeInfoLogicalTypeConverter.java:58)
	at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
	at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:545)
	at java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260)
	at java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:438)
	at org.apache.flink.table.runtime.typeutils.BaseRowTypeInfo.<init>(BaseRowTypeInfo.java:64)
	at org.apache.flink.table.runtime.typeutils.BaseRowTypeInfo.of(BaseRowTypeInfo.java:210)
	at org.apache.flink.table.planner.plan.utils.ScanUtil$.convertToInternalRow(ScanUtil.scala:126)
	at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlanInternal(BatchExecTableSourceScan.scala:112)
	at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlanInternal(BatchExecTableSourceScan.scala:48)
	at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:54)
	at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlan(BatchExecTableSourceScan.scala:48)
	at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecSink.translateToTransformation(BatchExecSink.scala:127)
	at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecSink.translateToPlanInternal(BatchExecSink.scala:92)
	at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecSink.translateToPlanInternal(BatchExecSink.scala:50)
	at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:54)
	at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecSink.translateToPlan(BatchExecSink.scala:50)
	at org.apache.flink.table.planner.delegation.BatchPlanner$$anonfun$translateToPlan$1.apply(BatchPlanner.scala:70)
	at org.apache.flink.table.planner.delegation.BatchPlanner$$anonfun$translateToPlan$1.apply(BatchPlanner.scala:69)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.Iterator$class.foreach(Iterator.scala:891)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
	at scala.collection.TraversableLike$class.map(
[GitHub] [flink] YngwieWang commented on issue #9350: [FLINK-13485] [chinese-translation] Translate "Table API Example Walkthrough" page into Chinese
YngwieWang commented on issue #9350: [FLINK-13485] [chinese-translation] Translate "Table API Example Walkthrough" page into Chinese
URL: https://github.com/apache/flink/pull/9350#issuecomment-518125127

> > Hi, @AT-Fieldless, Thanks for your contribution. I left some suggestions. I hope it will help. :)
> > BTW, you should build a new branch instead of using the master branch.
>
> Thanks for the review. Should I build a new branch and push it again?

I'm not sure about that. @klion26 @wuchong PTAL
[jira] [Commented] (FLINK-13527) Instable KafkaProducerExactlyOnceITCase due to CheckpointFailureManager
[ https://issues.apache.org/jira/browse/FLINK-13527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899850#comment-16899850 ]

Yu Li commented on FLINK-13527:
-------------------------------

[~yanghua] Mind clarifying whether this JIRA is blocked by FLINK-13497? Since this is a blocker for the 1.9.0 release, we will need to also mark FLINK-13497 as a blocker if it's a dependency. Thanks.

> Instable KafkaProducerExactlyOnceITCase due to CheckpointFailureManager
> -----------------------------------------------------------------------
>
>                 Key: FLINK-13527
>                 URL: https://issues.apache.org/jira/browse/FLINK-13527
>             Project: Flink
>          Issue Type: Bug
>          Components: Tests
>    Affects Versions: 1.9.0
>            Reporter: Yun Tang
>            Assignee: vinoyang
>            Priority: Blocker
>             Fix For: 1.9.0
>
> [~banmoy] and I met this unstable test below:
> https://api.travis-ci.org/v3/job/565270958/log.txt
> https://api.travis-ci.com/v3/job/221237628/log.txt
> The root cause is that the task {{Source: Custom Source -> Map -> Sink: Unnamed (1/1)}} failed due to an expected artificial test failure and then freed its task resources, including closing the registry. However, the async checkpoint thread in {{SourceStreamTask}} would then fail and send a decline-checkpoint message to the JM.
> The key logs are like:
> {code:java}
> 03:36:46,639 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source -> Map -> Sink: Unnamed (1/1) (f45ff068d2c80da22c2a958739ec0c87) switched from RUNNING to FAILED.
> java.lang.Exception: Artificial Test Failure
> 	at org.apache.flink.streaming.connectors.kafka.testutils.FailingIdentityMapper.map(FailingIdentityMapper.java:79)
> 	at org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:41)
> 	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:637)
> 	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:612)
> 	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:592)
> 	at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:727)
> 	at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:705)
> 	at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
> 	at org.apache.flink.streaming.connectors.kafka.testutils.IntegerSource.run(IntegerSource.java:75)
> 	at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
> 	at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
> 	at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:172)
> 03:36:46,637 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Decline checkpoint 12 by task f45ff068d2c80da22c2a958739ec0c87 of job d5b629623731c66f1bac89dec3e87b89 at 03cbfd77-0727-4366-83c4-9aa4923fc817 @ localhost (dataPort=-1).
> 03:36:46,640 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Discarding checkpoint 12 of job d5b629623731c66f1bac89dec3e87b89.
> org.apache.flink.runtime.checkpoint.CheckpointException: Could not complete snapshot 12 for operator Source: Custom Source -> Map -> Sink: Unnamed (1/1). Failure reason: Checkpoint was declined.
> 	at org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:431)
> 	at org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.checkpointStreamOperator(StreamTask.java:1248)
> 	at org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.executeCheckpointing(StreamTask.java:1182)
> 	at org.apache.flink.streaming.runtime.tasks.StreamTask.checkpointState(StreamTask.java:853)
> 	at org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:758)
> 	at org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:667)
> 	at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.triggerCheckpoint(SourceStreamTask.java:147)
> 	at org.apache.flink.runtime.taskmanager.Task$1.run(Task.java:1138)
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.jav
[GitHub] [flink] zjuwangg commented on a change in pull request #9342: [FLINK-13438][hive] Fix DataTypes.DATE/TIME/TIMESTAMP support for hive connectors
zjuwangg commented on a change in pull request #9342: [FLINK-13438][hive] Fix DataTypes.DATE/TIME/TIMESTAMP support for hive connectors
URL: https://github.com/apache/flink/pull/9342#discussion_r310472230

## File path: flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveTableSource.java

@@ -70,7 +72,11 @@
 	public HiveTableSource(JobConf jobConf, ObjectPath tablePath, CatalogTable catalogTable) {
 		this.jobConf = Preconditions.checkNotNull(jobConf);
 		this.tablePath = Preconditions.checkNotNull(tablePath);
-		this.catalogTable = Preconditions.checkNotNull(catalogTable);
+
+		Preconditions.checkArgument(catalogTable instanceof CatalogTableImpl);
+		Preconditions.checkNotNull(catalogTable);

Review comment:
`checkNotNull` should precede `checkArgument`.
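The ordering the reviewer asks for matters beyond style: `instanceof` evaluates to false for `null`, so with `checkArgument` first a null `catalogTable` surfaces as a misleading `IllegalArgumentException` rather than a `NullPointerException`. A small plain-Java sketch of the corrected ordering (Guava-style check helpers inlined; `CharSequence` stands in for `CatalogTableImpl` so the snippet carries no Flink dependency):

```java
public class PreconditionOrder {

    // Minimal stand-ins for Guava/Flink Preconditions.
    static void checkArgument(boolean cond, String msg) {
        if (!cond) throw new IllegalArgumentException(msg);
    }

    static <T> T checkNotNull(T ref) {
        if (ref == null) throw new NullPointerException();
        return ref;
    }

    // Returns which exception (if any) the validation raises.
    static String classify(Object catalogTable) {
        try {
            // Null check first: a null input fails fast with the accurate NPE...
            checkNotNull(catalogTable);
            // ...then the type check, which now can no longer mask a null.
            checkArgument(catalogTable instanceof CharSequence, "expected a CharSequence");
            return "ok";
        } catch (NullPointerException e) {
            return "NPE";
        } catch (IllegalArgumentException e) {
            return "IAE";
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(null));    // NPE -- not a misleading IAE
        System.out.println(classify("table")); // ok
        System.out.println(classify(42));      // IAE -- genuine type violation
    }
}
```

With the two calls swapped, `classify(null)` would report "IAE", blaming the argument's type when the real defect is a missing value.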
[GitHub] [flink] zjuwangg commented on a change in pull request #9342: [FLINK-13438][hive] Fix DataTypes.DATE/TIME/TIMESTAMP support for hive connectors
zjuwangg commented on a change in pull request #9342: [FLINK-13438][hive] Fix DataTypes.DATE/TIME/TIMESTAMP support for hive connectors
URL: https://github.com/apache/flink/pull/9342#discussion_r310473582

## File path: flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/util/HiveTableUtil.java

@@ -133,4 +140,34 @@
 		return partition;
 	}
+
+	public static CatalogTable toHiveCatalogTable(CatalogTable oldTable) {

Review comment:
We may need to add comments to explain the method.
[GitHub] [flink] zjuwangg commented on a change in pull request #9342: [FLINK-13438][hive] Fix DataTypes.DATE/TIME/TIMESTAMP support for hive connectors
zjuwangg commented on a change in pull request #9342: [FLINK-13438][hive] Fix DataTypes.DATE/TIME/TIMESTAMP support for hive connectors
URL: https://github.com/apache/flink/pull/9342#discussion_r310471870

## File path: flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveTableSource.java

@@ -70,7 +72,11 @@
 	public HiveTableSource(JobConf jobConf, ObjectPath tablePath, CatalogTable catalogTable) {
 		this.jobConf = Preconditions.checkNotNull(jobConf);
 		this.tablePath = Preconditions.checkNotNull(tablePath);
-		this.catalogTable = Preconditions.checkNotNull(catalogTable);
+
+		Preconditions.checkArgument(catalogTable instanceof CatalogTableImpl);

Review comment:
Why must `catalogTable` be of `CatalogTableImpl` type? If it is a must, we may change the constructor.
[GitHub] [flink] zjuwangg commented on a change in pull request #9342: [FLINK-13438][hive] Fix DataTypes.DATE/TIME/TIMESTAMP support for hive connectors
zjuwangg commented on a change in pull request #9342: [FLINK-13438][hive] Fix DataTypes.DATE/TIME/TIMESTAMP support for hive connectors
URL: https://github.com/apache/flink/pull/9342#discussion_r310475347

## File path: flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/connectors/hive/HiveFullTest.java

@@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connectors.hive;
+
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.api.TableEnvironment;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.table.catalog.CatalogTable;
+import org.apache.flink.table.catalog.CatalogTableImpl;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.hive.HiveCatalog;
+import org.apache.flink.table.catalog.hive.HiveTestUtils;
+
+import com.klarna.hiverunner.HiveShell;
+import com.klarna.hiverunner.annotations.HiveSQL;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.mapred.JobConf;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+
+import java.sql.Date;
+import java.sql.Timestamp;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+
+import static org.junit.Assert.assertEquals;
+
+/**
+ * Tests using both {@link HiveTableSource} and {@link HiveTableSink}.
+ */
+@RunWith(FlinkStandaloneHiveRunner.class)
+public class HiveFullTest {

Review comment:
Rename HiveFullTest to HiveTimeRelatedTypeTest or sth else?
[GitHub] [flink] dawidwys commented on a change in pull request #9276: [FLINK-13439] Run Streaming SQL e2e test with blink planner
dawidwys commented on a change in pull request #9276: [FLINK-13439] Run Streaming SQL e2e test with blink planner URL: https://github.com/apache/flink/pull/9276#discussion_r310475913 ## File path: flink-end-to-end-tests/run-pre-commit-tests.sh ## @@ -60,6 +60,7 @@ run_test "Modern Kafka end-to-end test" "$END_TO_END_DIR/test-scripts/test_strea run_test "Kinesis end-to-end test" "$END_TO_END_DIR/test-scripts/test_streaming_kinesis.sh" run_test "class loading end-to-end test" "$END_TO_END_DIR/test-scripts/test_streaming_classloader.sh" run_test "Distributed cache end-to-end test" "$END_TO_END_DIR/test-scripts/test_streaming_distributed_cache_via_blob.sh" - +run_test "Streaming SQL end-to-end test (Old planner)" "$END_TO_END_DIR/test-scripts/test_streaming_sql.sh old" "skip_check_exceptions" Review comment: I think it's enough to have those tests as nightly tests.
[jira] [Commented] (FLINK-13489) Heavy deployment end-to-end test fails on Travis with TM heartbeat timeout
[ https://issues.apache.org/jira/browse/FLINK-13489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899854#comment-16899854 ] Yu Li commented on FLINK-13489: --- Please mark the status as "In-Progress" since you've already started debugging here [~kevin.cyj], thanks. > Heavy deployment end-to-end test fails on Travis with TM heartbeat timeout > -- > > Key: FLINK-13489 > URL: https://issues.apache.org/jira/browse/FLINK-13489 > Project: Flink > Issue Type: Bug > Components: Test Infrastructure >Reporter: Tzu-Li (Gordon) Tai >Assignee: Yingjie Cao >Priority: Blocker > Labels: pull-request-available > Fix For: 1.9.0 > > > https://api.travis-ci.org/v3/job/564925128/log.txt > {code} > > The program finished with the following exception: > org.apache.flink.client.program.ProgramInvocationException: Job failed. > (JobID: 1b4f1807cc749628cfc1bdf04647527a) > at > org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:250) > at > org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:338) > at > org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:60) > at > org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1507) > at > org.apache.flink.deployment.HeavyDeploymentStressTestProgram.main(HeavyDeploymentStressTestProgram.java:70) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576) > at > org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438) > at > 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274) > at > org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746) > at > org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273) > at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205) > at > org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010) > at > org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836) > at > org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41) > at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083) > Caused by: org.apache.flink.runtime.client.JobExecutionException: Job > execution failed. > at > org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146) > at > org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:247) > ... 21 more > Caused by: java.util.concurrent.TimeoutException: Heartbeat of TaskManager > with id ea456d6a590eca7598c19c4d35e56db9 timed out. 
> at > org.apache.flink.runtime.jobmaster.JobMaster$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(JobMaster.java:1149) > at > org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:397) > at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:190) > at > org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74) > at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152) > at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) > at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) > at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) > at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) > at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) > at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
[GitHub] [flink] dawidwys commented on issue #9276: [FLINK-13439] Run Streaming SQL e2e test with blink planner
dawidwys commented on issue #9276: [FLINK-13439] Run Streaming SQL e2e test with blink planner URL: https://github.com/apache/flink/pull/9276#issuecomment-518126428 Hi @docete Would you like to update this PR, or should I do it, while merging?
[jira] [Created] (FLINK-13579) Failed launching standalone cluster due to improperly configured irrelevant config options for active mode.
Xintong Song created FLINK-13579: Summary: Failed launching standalone cluster due to improperly configured irrelevant config options for active mode. Key: FLINK-13579 URL: https://issues.apache.org/jira/browse/FLINK-13579 Project: Flink Issue Type: Bug Components: Runtime / Coordination Affects Versions: 1.9.0 Reporter: Xintong Song Fix For: 1.9.0
[GitHub] [flink] flinkbot edited a comment on issue #9285: [FLINK-13433][table-planner-blink] Do not fetch data from LookupableTableSource if the JoinKey in left side of LookupJoin contains null value
flinkbot edited a comment on issue #9285: [FLINK-13433][table-planner-blink] Do not fetch data from LookupableTableSource if the JoinKey in left side of LookupJoin contains null value. URL: https://github.com/apache/flink/pull/9285#issuecomment-516712727 ## CI report: * bb70e45a98e76de7f95ac31e893999683cb5bde8 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121359827) * 4f96a184d471836053a7e2b09cbd1583ebced727 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121512745) * a915ad9e9323b5c0f799beae32eba104b76b583f : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121587513) * 3e6a30848c721001c6bf0a514fb00b00c6f6e0ce : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121696226) * 5958000c4e08d3b4a5842467a9c56bdfeb468efa : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121720893) * 5d22079940ff5ccab81cd7090e57af8652b98ab0 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/121916070)
[GitHub] [flink] asfgit closed pull request #9344: [FLINK-13532][docs] Fix broken links of zh docs
asfgit closed pull request #9344: [FLINK-13532][docs] Fix broken links of zh docs URL: https://github.com/apache/flink/pull/9344
[jira] [Resolved] (FLINK-13532) Broken links in documentation
[ https://issues.apache.org/jira/browse/FLINK-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu resolved FLINK-13532. - Resolution: Fixed Fix Version/s: 1.10.0 Fixed in 1.10.0: da8a91c2d3769d84a13d7556e601d8258e5128e1 Fixed in 1.9.0: 8186f18f9e149bfca87b8f42397745f0a6bf7767 > Broken links in documentation > - > > Key: FLINK-13532 > URL: https://issues.apache.org/jira/browse/FLINK-13532 > Project: Flink > Issue Type: Bug > Components: Documentation >Affects Versions: 1.9.0, 1.10.0 >Reporter: Chesnay Schepler >Assignee: Biao Liu >Priority: Blocker > Labels: pull-request-available > Fix For: 1.9.0, 1.10.0 > > Time Spent: 10m > Remaining Estimate: 0h > > {code:java} > [2019-07-31 15:58:08] ERROR `/zh/dev/table/hive_integration_example.html' not > found. > [2019-07-31 15:58:10] ERROR `/zh/dev/table/types.html' not found. > [2019-07-31 15:58:10] ERROR `/zh/dev/table/hive_integration.html' not found. > [2019-07-31 15:58:14] ERROR `/zh/dev/restart_strategies.html' not found. > http://localhost:4000/zh/dev/table/hive_integration_example.html: > Remote file does not exist -- broken link!!! > -- > http://localhost:4000/zh/dev/table/types.html: > Remote file does not exist -- broken link!!! > http://localhost:4000/zh/dev/table/hive_integration.html: > Remote file does not exist -- broken link!!! > -- > http://localhost:4000/zh/dev/restart_strategies.html: > Remote file does not exist -- broken link!!!{code}
[jira] [Commented] (FLINK-13489) Heavy deployment end-to-end test fails on Travis with TM heartbeat timeout
[ https://issues.apache.org/jira/browse/FLINK-13489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899860#comment-16899860 ] Yingjie Cao commented on FLINK-13489: - [~carp84] Thanks for reminding, I have marked the status as "In-Progress". > Heavy deployment end-to-end test fails on Travis with TM heartbeat timeout > -- > > Key: FLINK-13489 > URL: https://issues.apache.org/jira/browse/FLINK-13489 > Project: Flink > Issue Type: Bug > Components: Test Infrastructure >Reporter: Tzu-Li (Gordon) Tai >Assignee: Yingjie Cao >Priority: Blocker > Labels: pull-request-available > Fix For: 1.9.0 > > > https://api.travis-ci.org/v3/job/564925128/log.txt > {code} > > The program finished with the following exception: > org.apache.flink.client.program.ProgramInvocationException: Job failed. > (JobID: 1b4f1807cc749628cfc1bdf04647527a) > at > org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:250) > at > org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:338) > at > org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:60) > at > org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1507) > at > org.apache.flink.deployment.HeavyDeploymentStressTestProgram.main(HeavyDeploymentStressTestProgram.java:70) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576) > at > org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438) > at > org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274) > 
at > org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746) > at > org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273) > at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205) > at > org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010) > at > org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836) > at > org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41) > at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083) > Caused by: org.apache.flink.runtime.client.JobExecutionException: Job > execution failed. > at > org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146) > at > org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:247) > ... 21 more > Caused by: java.util.concurrent.TimeoutException: Heartbeat of TaskManager > with id ea456d6a590eca7598c19c4d35e56db9 timed out. 
> at > org.apache.flink.runtime.jobmaster.JobMaster$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(JobMaster.java:1149) > at > org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:397) > at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:190) > at > org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74) > at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152) > at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) > at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) > at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) > at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) > at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) > at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) > at scala.Par
[jira] [Commented] (FLINK-13578) Blink throws exception when using Types.INTERVAL_MILLIS in TableSource
[ https://issues.apache.org/jira/browse/FLINK-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899861#comment-16899861 ] Jark Wu commented on FLINK-13578: - cc [~lzljs3620320] > Blink throws exception when using Types.INTERVAL_MILLIS in TableSource > -- > > Key: FLINK-13578 > URL: https://issues.apache.org/jira/browse/FLINK-13578 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.9.0 >Reporter: Wei Zhong >Priority: Critical > > Running this program will throw a TableException: > {code:java} > object Tests { > class MyTableSource extends InputFormatTableSource[java.lang.Long] { > val data = new java.util.ArrayList[java.lang.Long]() > data.add(1L) > data.add(2L) > data.add(3L) > val dataType = Types.INTERVAL_MILLIS() > val inputFormat = new CollectionInputFormat[java.lang.Long]( > data, dataType.createSerializer(new ExecutionConfig)) > override def getInputFormat: InputFormat[java.lang.Long, _ <: InputSplit] > = inputFormat > override def getTableSchema: TableSchema = > TableSchema.fromTypeInfo(dataType) > override def getReturnType: TypeInformation[java.lang.Long] = dataType > } > def main(args: Array[String]): Unit = { > val tenv = TableEnvironmentImpl.create( > > EnvironmentSettings.newInstance().inBatchMode().useBlinkPlanner().build()) > val table = tenv.fromTableSource(new MyTableSource) > tenv.registerTableSink("sink", Array("f0"), > Array(Types.INTERVAL_MILLIS()), new CsvTableSink("/tmp/results")) > table.select("f0").insertInto("sink") > tenv.execute("test") > } > } > {code} > The TableException detail: > {code:java} > Exception in thread "main" org.apache.flink.table.api.TableException: > Unsupported conversion from data type 'INTERVAL SECOND(3)' (conversion class: > java.time.Duration) to type information. Only data types that originated from > type information fully support a reverse conversion. 
> at > org.apache.flink.table.types.utils.LegacyTypeInfoDataTypeConverter.toLegacyTypeInfo(LegacyTypeInfoDataTypeConverter.java:242) > at > org.apache.flink.table.types.utils.TypeConversions.fromDataTypeToLegacyInfo(TypeConversions.java:49) > at > org.apache.flink.table.runtime.types.TypeInfoDataTypeConverter.fromDataTypeToTypeInfo(TypeInfoDataTypeConverter.java:145) > at > org.apache.flink.table.runtime.types.TypeInfoLogicalTypeConverter.fromLogicalTypeToTypeInfo(TypeInfoLogicalTypeConverter.java:58) > at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) > at > java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) > at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) > at > java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) > at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:545) > at > java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260) > at java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:438) > at > org.apache.flink.table.runtime.typeutils.BaseRowTypeInfo.(BaseRowTypeInfo.java:64) > at > org.apache.flink.table.runtime.typeutils.BaseRowTypeInfo.of(BaseRowTypeInfo.java:210) > at > org.apache.flink.table.planner.plan.utils.ScanUtil$.convertToInternalRow(ScanUtil.scala:126) > at > org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlanInternal(BatchExecTableSourceScan.scala:112) > at > org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlanInternal(BatchExecTableSourceScan.scala:48) > at > org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:54) > at > org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlan(BatchExecTableSourceScan.scala:48) > at > 
org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecSink.translateToTransformation(BatchExecSink.scala:127) > at > org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecSink.translateToPlanInternal(BatchExecSink.scala:92) > at > org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecSink.translateToPlanInternal(BatchExecSink.scala:50) > at > org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:54) > at > org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecSink.translateToPlan(BatchExecSink.scala:50) > at > org.apache.flink.table.planner.delegation.BatchPlanner$$anonfun$translateToPlan$1.apply(BatchPlanner.scala:70) > at > org.apache.flink.table.planner.delegation.BatchPlanner$$anonfun$translateToPlan$1.apply(BatchPlanner.scala:69) > at > scala.collection.Traversa
[jira] [Commented] (FLINK-13450) Adjust tests to tolerate arithmetic differences between x86 and ARM
[ https://issues.apache.org/jira/browse/FLINK-13450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899865#comment-16899865 ] wangxiyuan commented on FLINK-13450: @Robert, yeah, there is a ticket and PR for Spark already: https://issues.apache.org/jira/browse/SPARK-28519 [https://github.com/apache/spark/pull/25279] But IIRC, the performance of StrictMath is not as good as Math's. So maybe it would be good to add a switch between StrictMath and Math, which Flink users could enable or disable. > Adjust tests to tolerate arithmetic differences between x86 and ARM > --- > > Key: FLINK-13450 > URL: https://issues.apache.org/jira/browse/FLINK-13450 > Project: Flink > Issue Type: Sub-task > Components: Tests >Affects Versions: 1.9.0 >Reporter: Stephan Ewen >Priority: Major > > Certain arithmetic operations have different precision/rounding on ARM versus > x86. > Tests using floating point numbers should be changed to tolerate a certain > minimal deviation.
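The approach the issue description asks for — tolerating a small deviation instead of requiring exact floating-point equality — can be sketched in plain Java. This is a minimal illustration, not actual Flink test code, and the `1e-12` delta is an assumed tolerance. `StrictMath` guarantees bit-identical results across platforms, while `Math` may use faster platform-specific intrinsics that can differ in the last ulp between x86 and ARM:

```java
public class ToleranceExample {

    /** Compares two doubles up to an absolute tolerance instead of exact equality. */
    static void assertClose(double expected, double actual, double delta) {
        if (Math.abs(expected - actual) > delta) {
            throw new AssertionError(
                "expected " + expected + " but got " + actual + " (delta " + delta + ")");
        }
    }

    public static void main(String[] args) {
        // StrictMath is reproducible on every platform; Math may deviate slightly.
        double reference = StrictMath.sin(1.0);
        double platform = Math.sin(1.0);

        // An exact check (reference == platform) could fail on ARM;
        // a tolerance-based check passes on both architectures.
        assertClose(reference, platform, 1e-12); // 1e-12 is a hypothetical tolerance
        System.out.println("within tolerance");
    }
}
```

A switch between `StrictMath` and `Math`, as suggested in the comment, would let users trade this cross-platform reproducibility for the typically faster platform intrinsics.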
[GitHub] [flink] beyond1920 commented on issue #9285: [FLINK-13433][table-planner-blink] Do not fetch data from LookupableTableSource if the JoinKey in left side of LookupJoin contains null value.
beyond1920 commented on issue #9285: [FLINK-13433][table-planner-blink] Do not fetch data from LookupableTableSource if the JoinKey in left side of LookupJoin contains null value. URL: https://github.com/apache/flink/pull/9285#issuecomment-518130431 @wuchong , Ok
[jira] [Commented] (FLINK-13527) Instable KafkaProducerExactlyOnceITCase due to CheckpointFailureManager
[ https://issues.apache.org/jira/browse/FLINK-13527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899866#comment-16899866 ] vinoyang commented on FLINK-13527: -- [~carp84] After discussing with [~yunta], we reached some conclusions. Actually, the root cause is that there exists an unexpected checkpoint failure in this test case. Before FLINK-11662, the root cause was hidden. We both agree that FLINK-12364 and FLINK-11662 broke the default behavior of some test cases. And the solution of FLINK-13497 may fix this problem (recover the default behavior and let the unexpected checkpoint failure be ignored). It's better to listen to [~till.rohrmann]'s opinion about the solution of FLINK-13497. > Instable KafkaProducerExactlyOnceITCase due to CheckpointFailureManager > --- > > Key: FLINK-13527 > URL: https://issues.apache.org/jira/browse/FLINK-13527 > Project: Flink > Issue Type: Bug > Components: Tests >Affects Versions: 1.9.0 >Reporter: Yun Tang >Assignee: vinoyang >Priority: Blocker > Fix For: 1.9.0 > > > [~banmoy] and I met this instable test below: > [https://api.travis-ci.org/v3/job/565270958/log.txt] > [https://api.travis-ci.com/v3/job/221237628/log.txt] > The root cause is task {{Source: Custom Source -> Map -> Sink: Unnamed > (1/1)}} failed due to expected artificial test failure and then free task > resource including closing the registry. However, the async checkpoint thread > in {{SourceStreamTask}} would then failed and send decline checkpoint message > to JM. > The key logs is like: > {code:java} > 03:36:46,639 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph >- Source: Custom Source -> Map -> Sink: Unnamed (1/1) > (f45ff068d2c80da22c2a958739ec0c87) switched from RUNNING to FAILED.
> java.lang.Exception: Artificial Test Failure > at > org.apache.flink.streaming.connectors.kafka.testutils.FailingIdentityMapper.map(FailingIdentityMapper.java:79) > at > org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:41) > at > org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:637) > at > org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:612) > at > org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:592) > at > org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:727) > at > org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:705) > at > org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104) > at > org.apache.flink.streaming.connectors.kafka.testutils.IntegerSource.run(IntegerSource.java:75) > at > org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100) > at > org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63) > at > org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:172) > 03:36:46,637 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator >- Decline checkpoint 12 by task f45ff068d2c80da22c2a958739ec0c87 of job > d5b629623731c66f1bac89dec3e87b89 at 03cbfd77-0727-4366-83c4-9aa4923fc817 @ > localhost (dataPort=-1). > 03:36:46,640 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator >- Discarding checkpoint 12 of job d5b629623731c66f1bac89dec3e87b89. > org.apache.flink.runtime.checkpoint.CheckpointException: Could not complete > snapshot 12 for operator Source: Custom Source -> Map -> Sink: Unnamed (1/1). > Failure reason: Checkpoint was declined. 
> at > org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:431) > at > org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.checkpointStreamOperator(StreamTask.java:1248) > at > org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.executeCheckpointing(StreamTask.java:1182) > at > org.apache.flink.streaming.runtime.tasks.StreamTask.checkpointState(StreamTask.java:853) > at > org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:758) > at > org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:667) > at > org.apache.flink.streaming.runtime.tasks.SourceStreamTask.triggerCheckpoint(SourceStreamTask.java:147) > at org.apache.flink.runtime.taskmanager.Task$1
[GitHub] [flink] docete commented on a change in pull request #9276: [FLINK-13439] Run Streaming SQL e2e test with blink planner
docete commented on a change in pull request #9276: [FLINK-13439] Run Streaming SQL e2e test with blink planner URL: https://github.com/apache/flink/pull/9276#discussion_r310482247 ## File path: flink-end-to-end-tests/run-pre-commit-tests.sh ## @@ -60,6 +60,7 @@ run_test "Modern Kafka end-to-end test" "$END_TO_END_DIR/test-scripts/test_strea run_test "Kinesis end-to-end test" "$END_TO_END_DIR/test-scripts/test_streaming_kinesis.sh" run_test "class loading end-to-end test" "$END_TO_END_DIR/test-scripts/test_streaming_classloader.sh" run_test "Distributed cache end-to-end test" "$END_TO_END_DIR/test-scripts/test_streaming_distributed_cache_via_blob.sh" - +run_test "Streaming SQL end-to-end test (Old planner)" "$END_TO_END_DIR/test-scripts/test_streaming_sql.sh old" "skip_check_exceptions" Review comment: OK, will remove it from the pre-commit tests.
[jira] [Commented] (FLINK-13569) DDL table property key is defined as identifier but should be string literal instead
[ https://issues.apache.org/jira/browse/FLINK-13569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899873#comment-16899873 ] Timo Walther commented on FLINK-13569: -- I would also vote for string instead of identifier because it translates into a string-to-string map. Or an entirely new grammar that doesn't require single quotes {{' '}}. Could you briefly explain the current grammar? What is the kind of grammar on both sides? > DDL table property key is defined as identifier but should be string literal > instead > - > > Key: FLINK-13569 > URL: https://issues.apache.org/jira/browse/FLINK-13569 > Project: Flink > Issue Type: Bug >Reporter: Xuefu Zhang >Priority: Major > > The key name should be any free text, and should not be constrained by the > identifier grammar.
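The motivation behind FLINK-13569 can be illustrated with a small sketch (the DDL text and property names below are hypothetical examples, not the actual Flink parser): a key such as `connector.type` contains a dot and therefore cannot be parsed as a SQL identifier, whereas as a string literal any free text is allowed, and the WITH clause then maps naturally onto a string-to-string map.

```java
import java.util.HashMap;
import java.util.Map;

public class DdlPropertyKeyExample {
    public static void main(String[] args) {
        // With string-literal keys (the grammar the issue asks for),
        // dotted keys such as 'connector.type' are unproblematic:
        String ddl =
            "CREATE TABLE t (f0 INT) WITH ("
                + " 'connector.type' = 'kafka',"
                + " 'format.type' = 'json'"
                + ")";

        // The WITH clause translates directly into a string-to-string map:
        Map<String, String> properties = new HashMap<>();
        properties.put("connector.type", "kafka"); // valid string literal,
        properties.put("format.type", "json");     // but not a valid identifier

        System.out.println(properties.get("connector.type")); // prints "kafka"
    }
}
```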
[GitHub] [flink] beyond1920 commented on a change in pull request #9316: [FLINK-13529][table-planner-blink] Verify and correct agg function's semantic for Blink planner
beyond1920 commented on a change in pull request #9316: [FLINK-13529][table-planner-blink] Verify and correct agg function's semantic for Blink planner URL: https://github.com/apache/flink/pull/9316#discussion_r310482801 ## File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/functions/sql/FlinkSqlOperatorTable.java ## @@ -946,14 +946,9 @@ public SqlMonotonicity getMonotonicity(SqlOperatorBinding call) { public static final SqlFirstLastValueAggFunction LAST_VALUE = new SqlFirstLastValueAggFunction(SqlKind.LAST_VALUE); /** -* CONCAT_AGG aggregate function. +* LISTAGG aggregate function. */ - public static final SqlConcatAggFunction CONCAT_AGG = new SqlConcatAggFunction(); - - /** -* INCR_SUM aggregate function. -*/ - public static final SqlIncrSumAggFunction INCR_SUM = new SqlIncrSumAggFunction(); + public static final SqlListAggFunction LISTAGG = new SqlListAggFunction(); Review comment: SqlStdOperatorTable.LISTAGG does not limit the second parameter to a constant character. Wouldn't it be better to use our own `SqlListAggFunction`?
[GitHub] [flink] flinkbot edited a comment on issue #9285: [FLINK-13433][table-planner-blink] Do not fetch data from LookupableTableSource if the JoinKey in left side of LookupJoin contains null value
flinkbot edited a comment on issue #9285: [FLINK-13433][table-planner-blink] Do not fetch data from LookupableTableSource if the JoinKey in left side of LookupJoin contains null value. URL: https://github.com/apache/flink/pull/9285#issuecomment-516712727 ## CI report: * bb70e45a98e76de7f95ac31e893999683cb5bde8 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121359827) * 4f96a184d471836053a7e2b09cbd1583ebced727 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121512745) * a915ad9e9323b5c0f799beae32eba104b76b583f : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121587513) * 3e6a30848c721001c6bf0a514fb00b00c6f6e0ce : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121696226) * 5958000c4e08d3b4a5842467a9c56bdfeb468efa : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121720893) * 5d22079940ff5ccab81cd7090e57af8652b98ab0 : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/121916070) * a10620b3f6814599bcb14ac7ff8a30bed87de5c9 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/121918212)
[jira] [Updated] (FLINK-13569) DDL table property key is defined as identifier but should be string literal instead
[ https://issues.apache.org/jira/browse/FLINK-13569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu updated FLINK-13569: Component/s: Table SQL / API > DDL table property key is defined as identifier but should be string literal > instead > - > > Key: FLINK-13569 > URL: https://issues.apache.org/jira/browse/FLINK-13569 > Project: Flink > Issue Type: Bug > Components: Table SQL / API >Reporter: Xuefu Zhang >Priority: Major > Fix For: 1.9.0, 1.10.0 > > > The key name should be any free text, and should not be constrained by the > identifier grammar.
[jira] [Updated] (FLINK-13569) DDL table property key is defined as an identifier but should be a string literal instead
[ https://issues.apache.org/jira/browse/FLINK-13569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu updated FLINK-13569: Fix Version/s: 1.10.0 1.9.0 > DDL table property key is defined as an identifier but should be a string literal > instead > - > > Key: FLINK-13569 > URL: https://issues.apache.org/jira/browse/FLINK-13569 > Project: Flink > Issue Type: Bug >Reporter: Xuefu Zhang >Priority: Major > Fix For: 1.9.0, 1.10.0 > > > The key name should be any free text, and should not be constrained by the > identifier grammar. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (FLINK-10407) Reactive container mode
[ https://issues.apache.org/jira/browse/FLINK-10407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899879#comment-16899879 ] Antonio Verardi commented on FLINK-10407: - This feature/re-architecture looks really cool! The document says that the target is 1.9.0, but the ticket says 1.7.0. However the 1.9.0 rc1 should be out already, so... Which version are you aiming for at the end? > Reactive container mode > --- > > Key: FLINK-10407 > URL: https://issues.apache.org/jira/browse/FLINK-10407 > Project: Flink > Issue Type: New Feature > Components: Runtime / Coordination >Affects Versions: 1.7.0 >Reporter: Till Rohrmann >Assignee: Till Rohrmann >Priority: Major > > The reactive container mode is a new operation mode where a Flink cluster > will react to newly available resources (e.g. started by an external service) > and make use of them by rescaling the existing job. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[GitHub] [flink] ex00 commented on a change in pull request #8631: [FLINK-12745][ml] add sparse and dense vector class, and dense matrix class with basic operations.
ex00 commented on a change in pull request #8631: [FLINK-12745][ml] add sparse and dense vector class, and dense matrix class with basic operations. URL: https://github.com/apache/flink/pull/8631#discussion_r310463189 ## File path: flink-ml-parent/flink-ml-lib/src/main/java/org/apache/flink/ml/common/matrix/BLAS.java ## @@ -0,0 +1,140 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.flink.ml.common.matrix; Review comment: I'm not sure that `matrix` package is the right place for this class This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] ex00 commented on a change in pull request #8631: [FLINK-12745][ml] add sparse and dense vector class, and dense matrix class with basic operations.
ex00 commented on a change in pull request #8631: [FLINK-12745][ml] add sparse and dense vector class, and dense matrix class with basic operations. URL: https://github.com/apache/flink/pull/8631#discussion_r310476137 ## File path: flink-ml-parent/flink-ml-lib/src/main/java/org/apache/flink/ml/common/matrix/Vector.java ## @@ -0,0 +1,340 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.flink.ml.common.matrix; + +import org.apache.commons.lang3.StringUtils; + +import java.io.Serializable; + +/** + * The Vector class defines some common methods for both DenseVector and + * SparseVector. + */ +public abstract class Vector implements Serializable { + + /** +* Parse a DenseVector from a formatted string. +*/ + public static DenseVector dense(String str) { + return DenseVector.deserialize(str); + } + + /** +* Parse a SparseVector from a formatted string. +*/ + public static SparseVector sparse(String str) { + return SparseVector.deserialize(str); + } + + /** +* To check whether the formatted string represents a SparseVector. 
+*/ + public static boolean isSparse(String str) { + if (org.apache.flink.util.StringUtils.isNullOrWhitespaceOnly(str)) { + return true; + } + return StringUtils.indexOf(str, ':') != -1 || StringUtils.indexOf(str, "$") != -1; + } + + /** +* Parse the tensor from a formatted string. +*/ + public static Vector deserialize(String str) { Review comment: This class is not a good place for this method, because in parent class we use child classes. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
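For context, the sparse-format heuristic quoted above (blank input, or the presence of ':' or '$') can be restated with plain JDK string calls. This is an illustrative sketch, with `looksSparse` as a hypothetical name, not the PR's exact implementation:

```java
public class SparseFormatDemo {
    // A string is treated as sparse if it is blank, or if it contains ':'
    // (index:value pairs) or '$' (a size marker) -- the same rule as the
    // quoted isSparse method, without the commons-lang dependency.
    public static boolean looksSparse(String s) {
        if (s == null || s.trim().isEmpty()) {
            return true;
        }
        return s.indexOf(':') >= 0 || s.indexOf('$') >= 0;
    }

    public static void main(String[] args) {
        System.out.println(looksSparse("$5$0:1.0 3:2.0")); // true
        System.out.println(looksSparse("1.0 2.0 3.0"));    // false
    }
}
```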
[GitHub] [flink] ex00 commented on a change in pull request #8631: [FLINK-12745][ml] add sparse and dense vector class, and dense matrix class with basic operations.
ex00 commented on a change in pull request #8631: [FLINK-12745][ml] add sparse and dense vector class, and dense matrix class with basic operations. URL: https://github.com/apache/flink/pull/8631#discussion_r310461362 ## File path: flink-ml-parent/flink-ml-lib/src/main/java/org/apache/flink/ml/common/matrix/BLAS.java ## @@ -0,0 +1,140 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.flink.ml.common.matrix; + +/** + * A utility class that wraps netlib BLAS and provides some operations on dense matrix + * and dense vector. + */ +public class BLAS { + private static final com.github.fommil.netlib.BLAS BLAS = com.github.fommil.netlib.BLAS.getInstance(); + + /** +* y += a * x . +*/ + public static void axpy(double a, double[] x, double[] y) { + BLAS.daxpy(x.length, a, x, 1, y, 1); + } + + /** +* y += a * x . +*/ + public static void axpy(double a, DenseVector x, DenseVector y) { + axpy(a, x.getData(), y.getData()); + } + + /** +* x \cdot y . +*/ + public static double dot(double[] x, double[] y) { + return BLAS.ddot(x.length, x, 1, y, 1); + } + + /** +* x \cdot y . 
+*/ + public static double dot(DenseVector x, DenseVector y) { + return dot(x.getData(), y.getData()); + } + + /** +* x = x * a . +*/ + public static void scal(double a, double[] x) { + BLAS.dscal(x.length, a, x, 1); + } + + /** +* x = x * a . +*/ + public static void scal(double a, DenseVector x) { + scal(a, x.getData()); + } + + /** +* || x - y ||^2 . +*/ + public static double dsquared(double[] x, double[] y) { + double s = 0.; + for (int i = 0; i < x.length; i++) { + double d = x[i] - y[i]; + s += d * d; + } + return s; + } + + /** +* | x - y | . +*/ + public static double dabs(double[] x, double[] y) { + double s = 0.; + for (int i = 0; i < x.length; i++) { + double d = x[i] - y[i]; + s += Math.abs(d); + } + return s; + } + + /** +* C := alpha * A * B + beta * C . +*/ + public static void gemm(double alpha, DenseMatrix matA, boolean transA, DenseMatrix matB, boolean transB, + double beta, DenseMatrix matC) { + if (transA) { + assert matA.numCols() == matC.numRows(); Review comment: It would probably be better to say why it is a problem, i.e. add a message to the assertion, e.g. ``` assert matA.numCols() == matC.numRows() : "Matrices should be compatible by dimensions"; ``` This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
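A related point on the assertion message suggested above: plain `assert` statements are stripped at runtime unless the JVM is started with `-ea`, so a dimension check that must always fire is usually an explicit throw. A minimal sketch of the gemm compatibility check with illustrative names (this is not the PR's API):

```java
public class GemmCheckDemo {
    // Dimension compatibility for C := alpha * op(A) * B + beta * C,
    // where op(A) is A or A-transpose depending on transA:
    // op(A) must have as many rows as C.
    public static boolean compatible(int aRows, int aCols, boolean transA, int cRows) {
        int opARows = transA ? aCols : aRows;
        return opARows == cRows;
    }

    // An always-on alternative to `assert ... : "message"`:
    public static void checkGemmDims(int aRows, int aCols, boolean transA, int cRows) {
        if (!compatible(aRows, aCols, transA, cRows)) {
            throw new IllegalArgumentException(
                "Matrices should be compatible by dimensions: op(A) has "
                    + (transA ? aCols : aRows) + " rows but C has " + cRows);
        }
    }

    public static void main(String[] args) {
        // A is 4x3, so A-transpose contributes 3 rows, matching a 3-row C.
        checkGemmDims(4, 3, true, 3);
        System.out.println("ok");
    }
}
```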
[GitHub] [flink] ex00 commented on a change in pull request #8631: [FLINK-12745][ml] add sparse and dense vector class, and dense matrix class with basic operations.
ex00 commented on a change in pull request #8631: [FLINK-12745][ml] add sparse and dense vector class, and dense matrix class with basic operations. URL: https://github.com/apache/flink/pull/8631#discussion_r310478222 ## File path: flink-ml-parent/flink-ml-lib/src/test/java/org/apache/flink/ml/common/matrix/DenseMatrixTest.java ## @@ -0,0 +1,174 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.flink.ml.common.matrix; + +import org.junit.Assert; +import org.junit.Test; + +/** + * Test cases for DenseMatrix. 
+ */ +public class DenseMatrixTest { + + private static final double TOL = 1.0e-6; + + private static void assertEqual2D(double[][] matA, double[][] matB) { + assert (matA.length == matB.length); + assert (matA[0].length == matB[0].length); + int m = matA.length; + int n = matA[0].length; + for (int i = 0; i < m; i++) { + for (int j = 0; j < n; j++) { + Assert.assertEquals(matA[i][j], matB[i][j], TOL); + } + } + } + + private static double[][] simpleMM(double[][] matA, double[][] matB) { + int m = matA.length; + int n = matB[0].length; + int k = matA[0].length; + double[][] matC = new double[m][n]; + for (int i = 0; i < m; i++) { + for (int j = 0; j < n; j++) { + matC[i][j] = 0.; + for (int l = 0; l < k; l++) { + matC[i][j] += matA[i][l] * matB[l][j]; + } + } + } + return matC; + } + + private static double[] simpleMV(double[][] matA, double[] x) { + int m = matA.length; + int n = matA[0].length; + assert (n == x.length); + double[] y = new double[m]; + for (int i = 0; i < m; i++) { + y[i] = 0.; + for (int j = 0; j < n; j++) { + y[i] += matA[i][j] * x[j]; + } + } + return y; + } + + @Test + public void testPlus() throws Exception { + DenseMatrix matA = DenseMatrix.rand(4, 3); + DenseMatrix matB = DenseMatrix.ones(4, 3); + matA.plusEquals(matB); + matA.plusEquals(3.0); Review comment: this test does not check anything This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
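The reviewer's objection is that `testPlus` exercises `plusEquals` but never asserts on the result. A self-contained sketch of what an assertive version could look like, using plain arrays so it compiles on its own (the `plusEquals` helpers here stand in for the DenseMatrix methods and are assumptions, not the PR's code):

```java
public class PlusTestSketch {
    // Element-wise in-place add, standing in for DenseMatrix.plusEquals(matrix).
    public static double[] plusEquals(double[] a, double[] b) {
        for (int i = 0; i < a.length; i++) {
            a[i] += b[i];
        }
        return a;
    }

    // Scalar in-place add, standing in for DenseMatrix.plusEquals(scalar).
    public static double[] plusEquals(double[] a, double s) {
        for (int i = 0; i < a.length; i++) {
            a[i] += s;
        }
        return a;
    }

    public static void main(String[] args) {
        double[] matA = {1.0, 2.0, 3.0};
        plusEquals(matA, new double[]{1.0, 1.0, 1.0});
        plusEquals(matA, 3.0);
        // The part the reviewer asks for: compare every entry against an
        // expected value instead of merely calling the methods.
        double[] expected = {5.0, 6.0, 7.0};
        for (int i = 0; i < matA.length; i++) {
            if (Math.abs(matA[i] - expected[i]) > 1.0e-6) {
                throw new AssertionError("mismatch at index " + i);
            }
        }
        System.out.println("ok");
    }
}
```

Using a fixed input instead of `DenseMatrix.rand` also makes the expected values computable, which is what enables the assertions in the first place.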
[GitHub] [flink] ex00 commented on a change in pull request #8631: [FLINK-12745][ml] add sparse and dense vector class, and dense matrix class with basic operations.
ex00 commented on a change in pull request #8631: [FLINK-12745][ml] add sparse and dense vector class, and dense matrix class with basic operations. URL: https://github.com/apache/flink/pull/8631#discussion_r310464259 ## File path: flink-ml-parent/flink-ml-lib/src/main/java/org/apache/flink/ml/common/matrix/BinaryOp.java ## @@ -0,0 +1,27 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.flink.ml.common.matrix; + +/** + * Defines the matrix or vector element-wise binary operation. + */ +public interface BinaryOp { Review comment: This interface looks like a Java functional interface. How would you feel about using the `java.util.function.BiFunction` interface instead of this? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
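The suggestion above can be sketched with a JDK functional interface. For primitive doubles, `java.util.function.DoubleBinaryOperator` is the specialization that avoids the boxing a `BiFunction<Double, Double, Double>` would incur; the `zip` helper below is an illustrative name, not part of the PR:

```java
import java.util.Arrays;
import java.util.function.DoubleBinaryOperator;

public class ElementWiseDemo {
    // Element-wise combine of two equal-length arrays using the JDK
    // interface instead of a project-specific BinaryOp.
    public static double[] zip(double[] x, double[] y, DoubleBinaryOperator op) {
        double[] out = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            out[i] = op.applyAsDouble(x[i], y[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        double[] sum = zip(new double[]{1, 2}, new double[]{3, 4}, (a, b) -> a + b);
        System.out.println(Arrays.toString(sum)); // [4.0, 6.0]
    }
}
```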
[GitHub] [flink] ex00 commented on a change in pull request #8631: [FLINK-12745][ml] add sparse and dense vector class, and dense matrix class with basic operations.
ex00 commented on a change in pull request #8631: [FLINK-12745][ml] add sparse and dense vector class, and dense matrix class with basic operations. URL: https://github.com/apache/flink/pull/8631#discussion_r310469247 ## File path: flink-ml-parent/flink-ml-lib/src/main/java/org/apache/flink/ml/common/matrix/DenseMatrix.java ## @@ -0,0 +1,763 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.flink.ml.common.matrix; + +import java.io.Serializable; +import java.util.Arrays; + +/** + * DenseMatrix stores dense matrix data and provides some methods to operate on + * the matrix it represents. + */ +public class DenseMatrix implements Serializable { + + /** +* Row dimension. +*/ + private int m; + + /** +* Column dimension. +*/ + private int n; + + /** +* Array for internal storage of elements. +* +* The matrix data is stored in column major format internally. +*/ + private double[] data; + + /** +* Construct an m-by-n matrix of zeros. +* +* @param m Number of rows. +* @param n Number of colums. +*/ + public DenseMatrix(int m, int n) { + this(m, n, new double[m * n], false); + } + + /** +* Construct a matrix from a 1-D array. The data in the array should organize +* in column major. 
+* +* @param mNumber of rows. +* @param nNumber of cols. +* @param data One-dimensional array of doubles. +*/ + public DenseMatrix(int m, int n, double[] data) { + this(m, n, data, false); + } + + /** +* Construct a matrix from a 1-D array. The data in the array is organized +* in column major or in row major, which is specified by parameter 'inRowMajor' +* +* @param m Number of rows. +* @param n Number of cols. +* @param data One-dimensional array of doubles. +* @param inRowMajor Whether the matrix in 'data' is in row major format. +*/ + public DenseMatrix(int m, int n, double[] data, boolean inRowMajor) { + assert (data.length == m * n); + this.m = m; + this.n = n; + if (inRowMajor) { + toColumnMajor(m, n, data); + } + this.data = data; + } + + /** +* Construct a matrix from a 2-D array. +* +* @param data Two-dimensional array of doubles. +* @throws IllegalArgumentException All rows must have the same size +*/ + public DenseMatrix(double[][] data) { + this.m = data.length; + if (this.m == 0) { + this.n = 0; + this.data = new double[0]; + return; + } + this.n = data[0].length; + for (int i = 0; i < m; i++) { + if (data[i].length != n) { + throw new IllegalArgumentException("All rows must have the same size."); + } + } + this.data = new double[m * n]; + for (int i = 0; i < m; i++) { + for (int j = 0; j < n; j++) { + this.set(i, j, data[i][j]); + } + } + } + + /** +* Create an identity matrix. +* +* @param n +* @return +*/ + public static DenseMatrix eye(int n) { + return eye(n, n); + } + + /** +* Create a identity matrix. +* +* @param m +* @param n +* @return +*/ + public static DenseMatrix eye(int m, int n) { + DenseMatrix mat = new DenseMatrix(m, n); + int k = Math.min(m, n); + for (int i = 0; i < k; i++) { + mat.data[i * m + i] = 1.0; + } + return mat; + } + + /** +* Create a zero matrix. +* +* @par
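The column-major layout described in the DenseMatrix javadoc above means entry (i, j) of an m-by-n matrix lives at `data[j * m + i]`. A minimal sketch of that indexing rule (names are illustrative, not the PR's API):

```java
public class ColumnMajorDemo {
    // Column-major access: columns are stored contiguously, so the j-th
    // column starts at offset j * m and row i sits i slots into it.
    public static double get(double[] data, int m, int i, int j) {
        return data[j * m + i];
    }

    public static void main(String[] args) {
        // The 2-by-3 matrix [[1, 2, 3], [4, 5, 6]] stored column by column:
        double[] data = {1, 4, 2, 5, 3, 6};
        System.out.println(get(data, 2, 0, 2)); // row 0, col 2 -> 3.0
        System.out.println(get(data, 2, 1, 1)); // row 1, col 1 -> 5.0
    }
}
```

This layout is what lets the class hand `data` straight to netlib BLAS routines, which expect Fortran-style column-major storage.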
[GitHub] [flink] ex00 commented on a change in pull request #8631: [FLINK-12745][ml] add sparse and dense vector class, and dense matrix class with basic operations.
ex00 commented on a change in pull request #8631: [FLINK-12745][ml] add sparse and dense vector class, and dense matrix class with basic operations. URL: https://github.com/apache/flink/pull/8631#discussion_r310477310 ## File path: flink-ml-parent/flink-ml-lib/src/main/java/org/apache/flink/ml/common/matrix/Vector.java ## @@ -0,0 +1,340 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.flink.ml.common.matrix; + +import org.apache.commons.lang3.StringUtils; + +import java.io.Serializable; + +/** + * The Vector class defines some common methods for both DenseVector and + * SparseVector. + */ +public abstract class Vector implements Serializable { + + /** +* Parse a DenseVector from a formatted string. +*/ + public static DenseVector dense(String str) { + return DenseVector.deserialize(str); + } + + /** +* Parse a SparseVector from a formatted string. +*/ + public static SparseVector sparse(String str) { + return SparseVector.deserialize(str); + } + + /** +* To check whether the formatted string represents a SparseVector. 
+*/ + public static boolean isSparse(String str) { + if (org.apache.flink.util.StringUtils.isNullOrWhitespaceOnly(str)) { + return true; + } + return StringUtils.indexOf(str, ':') != -1 || StringUtils.indexOf(str, "$") != -1; + } + + /** +* Parse the tensor from a formatted string. +*/ + public static Vector deserialize(String str) { + Vector vec; + if (isSparse(str)) { + vec = Vector.sparse(str); + } else { + vec = Vector.dense(str); + } + return vec; + } + + /** +* Plus two vectors and create a new vector to store the result. +*/ + public static Vector plus(Vector vec1, Vector vec2) { + return vec1.plus(vec2); + } + + /** +* Minus two vectors and create a new vector to store the result. +*/ + public static Vector minus(Vector vec1, Vector vec2) { + return vec1.minus(vec2); + } + + /** +* Compute the dot product of two vectors. +*/ + public static double dot(Vector vec1, Vector vec2) { + return vec1.dot(vec2); + } + + /** +* Compute || vec1 - vec2 ||_1. +*/ + public static double sumAbsDiff(Vector vec1, Vector vec2) { + if (vec1 instanceof DenseVector) { + if (vec2 instanceof DenseVector) { + return applySum((DenseVector) vec1, (DenseVector) vec2, (a, b) -> Math.abs(a - b)); + } else { + return applySum((DenseVector) vec1, (SparseVector) vec2, (a, b) -> Math.abs(a - b)); + } + } else { + if (vec2 instanceof DenseVector) { + return applySum((SparseVector) vec1, (DenseVector) vec2, (a, b) -> Math.abs(a - b)); + } else { + return applySum((SparseVector) vec1, (SparseVector) vec2, (a, b) -> Math.abs(a - b)); + } + } + } + + /** +* Compute || vec1 - vec2 ||_2^2 . 
+*/ + public static double sumSquaredDiff(Vector vec1, Vector vec2) { + if (vec1 instanceof DenseVector) { + if (vec2 instanceof DenseVector) { + return applySum((DenseVector) vec1, (DenseVector) vec2, (a, b) -> (a - b) * (a - b)); + } else { + return applySum((DenseVector) vec1, (SparseVector) vec2, (a, b) -> (a - b) * (a - b)); + } + } else { + if (vec2 instanceof DenseVector) { + return applySum((SparseVector) vec1, (DenseVector) vec2, (a, b) -> (a - b) * (a - b)); +
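For reference, the two quantities computed above are || vec1 - vec2 ||_1 (`sumAbsDiff`) and || vec1 - vec2 ||_2^2 (`sumSquaredDiff`). Written directly for dense arrays, without the dense/sparse double dispatch, they reduce to a sketch like this (illustrative code, not the PR's implementation):

```java
public class VectorDiffDemo {
    // L1 distance: sum of absolute element-wise differences.
    public static double sumAbsDiff(double[] x, double[] y) {
        double s = 0.0;
        for (int i = 0; i < x.length; i++) {
            s += Math.abs(x[i] - y[i]);
        }
        return s;
    }

    // Squared L2 distance: sum of squared element-wise differences.
    public static double sumSquaredDiff(double[] x, double[] y) {
        double s = 0.0;
        for (int i = 0; i < x.length; i++) {
            double d = x[i] - y[i];
            s += d * d;
        }
        return s;
    }

    public static void main(String[] args) {
        double[] x = {1, 2, 3};
        double[] y = {2, 4, 6};
        System.out.println(sumAbsDiff(x, y));     // 1 + 2 + 3 = 6.0
        System.out.println(sumSquaredDiff(x, y)); // 1 + 4 + 9 = 14.0
    }
}
```

The four-way `instanceof` dispatch in the PR exists only to exploit sparsity; each branch applies the same `(a, b) -> ...` lambda shown here.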
[GitHub] [flink] ex00 commented on a change in pull request #8631: [FLINK-12745][ml] add sparse and dense vector class, and dense matrix class with basic operations.
ex00 commented on a change in pull request #8631: [FLINK-12745][ml] add sparse and dense vector class, and dense matrix class with basic operations. URL: https://github.com/apache/flink/pull/8631#discussion_r310465666 ## File path: flink-ml-parent/flink-ml-lib/src/main/java/org/apache/flink/ml/common/matrix/DenseMatrix.java ## @@ -0,0 +1,763 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.flink.ml.common.matrix; + +import java.io.Serializable; +import java.util.Arrays; + +/** + * DenseMatrix stores dense matrix data and provides some methods to operate on + * the matrix it represents. + */ +public class DenseMatrix implements Serializable { + + /** +* Row dimension. +*/ + private int m; + + /** +* Column dimension. +*/ + private int n; Review comment: Same comment as for the row dimension field above. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] ex00 commented on a change in pull request #8631: [FLINK-12745][ml] add sparse and dense vector class, and dense matrix class with basic operations.
ex00 commented on a change in pull request #8631: [FLINK-12745][ml] add sparse and dense vector class, and dense matrix class with basic operations. URL: https://github.com/apache/flink/pull/8631#discussion_r310473823 ## File path: flink-ml-parent/flink-ml-lib/src/main/java/org/apache/flink/ml/common/matrix/Vector.java ## @@ -0,0 +1,340 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.flink.ml.common.matrix; + +import org.apache.commons.lang3.StringUtils; + +import java.io.Serializable; + +/** + * The Vector class defines some common methods for both DenseVector and + * SparseVector. + */ +public abstract class Vector implements Serializable { + + /** +* Parse a DenseVector from a formatted string. +*/ + public static DenseVector dense(String str) { Review comment: I suppose this method and `sparse` should be united into one abstract method like `toVector`, implemented in `SparseVector` and `DenseVector`. It is not a good design decision to use child classes in the parent class. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
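The design point raised above — an abstract parent should not reference its own subclasses — is commonly resolved by moving the format dispatch into a separate factory. A minimal sketch under stated assumptions: all names (`Vec`, `parse`, the whitespace-separated dense format and `index:value` sparse format) are illustrative, not the PR's API or grammar:

```java
public class VectorFactoryDemo {
    abstract static class Vec { }

    static class DenseVec extends Vec {
        final double[] values;
        DenseVec(double[] v) { values = v; }
    }

    static class SparseVec extends Vec {
        final int[] indices;
        final double[] values;
        SparseVec(int[] i, double[] v) { indices = i; values = v; }
    }

    // Dispatch lives in the factory, so the abstract parent above never
    // has to mention DenseVec or SparseVec.
    public static Vec parse(String s) {
        return s.indexOf(':') >= 0 ? parseSparse(s) : parseDense(s);
    }

    public static DenseVec parseDense(String s) {
        String[] parts = s.trim().split("\\s+");
        double[] v = new double[parts.length];
        for (int i = 0; i < parts.length; i++) {
            v[i] = Double.parseDouble(parts[i]);
        }
        return new DenseVec(v);
    }

    public static SparseVec parseSparse(String s) {
        String[] parts = s.trim().split("\\s+");
        int[] idx = new int[parts.length];
        double[] v = new double[parts.length];
        for (int i = 0; i < parts.length; i++) {
            String[] kv = parts[i].split(":");
            idx[i] = Integer.parseInt(kv[0]);
            v[i] = Double.parseDouble(kv[1]);
        }
        return new SparseVec(idx, v);
    }

    public static void main(String[] args) {
        System.out.println(parse("1.0 2.0 3.0") instanceof DenseVec);  // true
        System.out.println(parse("0:1.0 3:2.0") instanceof SparseVec); // true
    }
}
```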
[GitHub] [flink] ex00 commented on a change in pull request #8631: [FLINK-12745][ml] add sparse and dense vector class, and dense matrix class with basic operations.
ex00 commented on a change in pull request #8631: [FLINK-12745][ml] add sparse and dense vector class, and dense matrix class with basic operations. URL: https://github.com/apache/flink/pull/8631#discussion_r310465537 ## File path: flink-ml-parent/flink-ml-lib/src/main/java/org/apache/flink/ml/common/matrix/DenseMatrix.java ## @@ -0,0 +1,763 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.flink.ml.common.matrix; + +import java.io.Serializable; +import java.util.Arrays; + +/** + * DenseMatrix stores dense matrix data and provides some methods to operate on + * the matrix it represents. + */ +public class DenseMatrix implements Serializable { + + /** +* Row dimension. +*/ + private int m; Review comment: Why not call this `rows` or `rowCount`? A name associated with `row` would sound more familiar. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] ex00 commented on a change in pull request #8631: [FLINK-12745][ml] add sparse and dense vector class, and dense matrix class with basic operations.
ex00 commented on a change in pull request #8631: [FLINK-12745][ml] add sparse and dense vector class, and dense matrix class with basic operations. URL: https://github.com/apache/flink/pull/8631#discussion_r310478255 ## File path: flink-ml-parent/flink-ml-lib/src/test/java/org/apache/flink/ml/common/matrix/DenseMatrixTest.java ## @@ -0,0 +1,174 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.flink.ml.common.matrix; + +import org.junit.Assert; +import org.junit.Test; + +/** + * Test cases for DenseMatrix. 
+ */ +public class DenseMatrixTest { + + private static final double TOL = 1.0e-6; + + private static void assertEqual2D(double[][] matA, double[][] matB) { + assert (matA.length == matB.length); + assert (matA[0].length == matB[0].length); + int m = matA.length; + int n = matA[0].length; + for (int i = 0; i < m; i++) { + for (int j = 0; j < n; j++) { + Assert.assertEquals(matA[i][j], matB[i][j], TOL); + } + } + } + + private static double[][] simpleMM(double[][] matA, double[][] matB) { + int m = matA.length; + int n = matB[0].length; + int k = matA[0].length; + double[][] matC = new double[m][n]; + for (int i = 0; i < m; i++) { + for (int j = 0; j < n; j++) { + matC[i][j] = 0.; + for (int l = 0; l < k; l++) { + matC[i][j] += matA[i][l] * matB[l][j]; + } + } + } + return matC; + } + + private static double[] simpleMV(double[][] matA, double[] x) { + int m = matA.length; + int n = matA[0].length; + assert (n == x.length); + double[] y = new double[m]; + for (int i = 0; i < m; i++) { + y[i] = 0.; + for (int j = 0; j < n; j++) { + y[i] += matA[i][j] * x[j]; + } + } + return y; + } + + @Test + public void testPlus() throws Exception { + DenseMatrix matA = DenseMatrix.rand(4, 3); + DenseMatrix matB = DenseMatrix.ones(4, 3); + matA.plusEquals(matB); + matA.plusEquals(3.0); + } + + @Test + public void testMinus() throws Exception { + DenseMatrix matA = DenseMatrix.rand(4, 3); + DenseMatrix matB = DenseMatrix.ones(4, 3); + matA.minusEquals(matB); Review comment: this test does not check anything This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (FLINK-13579) Failed launching standalone cluster due to improperly configured irrelevant config options for active mode.
[ https://issues.apache.org/jira/browse/FLINK-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899881#comment-16899881 ] TisonKun commented on FLINK-13579: -- [~xintongsong] are there any more details? From the title it's hard to see what happened exactly. > Failed launching standalone cluster due to improper configured irrelevant > config options for active mode. > - > > Key: FLINK-13579 > URL: https://issues.apache.org/jira/browse/FLINK-13579 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.9.0 >Reporter: Xintong Song >Priority: Blocker > Fix For: 1.9.0 > > -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (FLINK-13578) Blink throws exception when using Types.INTERVAL_MILLIS in TableSource
[ https://issues.apache.org/jira/browse/FLINK-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899882#comment-16899882 ] sunjincheng commented on FLINK-13578: - This is better to fix in 1.9, so that user ban be using python in blink planner more easy, What do you think? [~jark] [~zhongwei] [~lzljs3620320] > Blink throws exception when using Types.INTERVAL_MILLIS in TableSource > -- > > Key: FLINK-13578 > URL: https://issues.apache.org/jira/browse/FLINK-13578 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.9.0 >Reporter: Wei Zhong >Priority: Critical > > Running this program will throw a TableException: > {code:java} > object Tests { > class MyTableSource extends InputFormatTableSource[java.lang.Long] { > val data = new java.util.ArrayList[java.lang.Long]() > data.add(1L) > data.add(2L) > data.add(3L) > val dataType = Types.INTERVAL_MILLIS() > val inputFormat = new CollectionInputFormat[java.lang.Long]( > data, dataType.createSerializer(new ExecutionConfig)) > override def getInputFormat: InputFormat[java.lang.Long, _ <: InputSplit] > = inputFormat > override def getTableSchema: TableSchema = > TableSchema.fromTypeInfo(dataType) > override def getReturnType: TypeInformation[java.lang.Long] = dataType > } > def main(args: Array[String]): Unit = { > val tenv = TableEnvironmentImpl.create( > > EnvironmentSettings.newInstance().inBatchMode().useBlinkPlanner().build()) > val table = tenv.fromTableSource(new MyTableSource) > tenv.registerTableSink("sink", Array("f0"), > Array(Types.INTERVAL_MILLIS()), new CsvTableSink("/tmp/results")) > table.select("f0").insertInto("sink") > tenv.execute("test") > } > } > {code} > The TableException detail: > {code:java} > Exception in thread "main" org.apache.flink.table.api.TableException: > Unsupported conversion from data type 'INTERVAL SECOND(3)' (conversion class: > java.time.Duration) to type information. 
Only data types that originated from > type information fully support a reverse conversion. > at > org.apache.flink.table.types.utils.LegacyTypeInfoDataTypeConverter.toLegacyTypeInfo(LegacyTypeInfoDataTypeConverter.java:242) > at > org.apache.flink.table.types.utils.TypeConversions.fromDataTypeToLegacyInfo(TypeConversions.java:49) > at > org.apache.flink.table.runtime.types.TypeInfoDataTypeConverter.fromDataTypeToTypeInfo(TypeInfoDataTypeConverter.java:145) > at > org.apache.flink.table.runtime.types.TypeInfoLogicalTypeConverter.fromLogicalTypeToTypeInfo(TypeInfoLogicalTypeConverter.java:58) > at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) > at > java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) > at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) > at > java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) > at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:545) > at > java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260) > at java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:438) > at > org.apache.flink.table.runtime.typeutils.BaseRowTypeInfo.(BaseRowTypeInfo.java:64) > at > org.apache.flink.table.runtime.typeutils.BaseRowTypeInfo.of(BaseRowTypeInfo.java:210) > at > org.apache.flink.table.planner.plan.utils.ScanUtil$.convertToInternalRow(ScanUtil.scala:126) > at > org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlanInternal(BatchExecTableSourceScan.scala:112) > at > org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlanInternal(BatchExecTableSourceScan.scala:48) > at > org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:54) > at > org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlan(BatchExecTableSourceScan.scala:48) > at > 
org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecSink.translateToTransformation(BatchExecSink.scala:127) > at > org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecSink.translateToPlanInternal(BatchExecSink.scala:92) > at > org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecSink.translateToPlanInternal(BatchExecSink.scala:50) > at > org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:54) > at > org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecSink.translateToPlan(BatchExecSink.scala:50) > at > org.apache.flink.table.planner.delegation.BatchPlanner$$anonfun$translateToPlan$1.apply(BatchPlanner.scala:70) > at > org.apa
[jira] [Commented] (FLINK-13569) DDL table property key is defined as an identifier but should be a string literal instead
[ https://issues.apache.org/jira/browse/FLINK-13569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899883#comment-16899883 ] Jark Wu commented on FLINK-13569: - Hive and Spark also use string literals for keys. But an identifier is more concise, so I would like to go with identifiers if the parser can support them. Currently, we are using CompoundIdentifier for the grammar of the property key, which doesn't support {{-}}. > DDL table property key is defined as an identifier but should be a string literal > instead > - > > Key: FLINK-13569 > URL: https://issues.apache.org/jira/browse/FLINK-13569 > Project: Flink > Issue Type: Bug > Components: Table SQL / API >Reporter: Xuefu Zhang >Priority: Major > Fix For: 1.9.0, 1.10.0 > > > The key name should be any free text, and should not be constrained by the > identifier grammar. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (FLINK-13579) Failed launching standalone cluster due to improperly configured irrelevant config options for active mode.
[ https://issues.apache.org/jira/browse/FLINK-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xintong Song updated FLINK-13579: - Labels: test-stability (was: ) > Failed launching standalone cluster due to improper configured irrelevant > config options for active mode. > - > > Key: FLINK-13579 > URL: https://issues.apache.org/jira/browse/FLINK-13579 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.9.0 >Reporter: Xintong Song >Priority: Blocker > Labels: test-stability > Fix For: 1.9.0 > > -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (FLINK-13567) Avro Confluent Schema Registry nightly end-to-end test failed on Travis
[ https://issues.apache.org/jira/browse/FLINK-13567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Till Rohrmann updated FLINK-13567: -- Priority: Major (was: Blocker) > Avro Confluent Schema Registry nightly end-to-end test failed on Travis > --- > > Key: FLINK-13567 > URL: https://issues.apache.org/jira/browse/FLINK-13567 > Project: Flink > Issue Type: Bug > Components: Tests >Affects Versions: 1.9.0, 1.10.0 >Reporter: Till Rohrmann >Priority: Major > Labels: test-stability > Fix For: 1.9.0 > > Attachments: patch.diff > > > The {{Avro Confluent Schema Registry nightly end-to-end test}} failed on > Travis with > {code} > [FAIL] 'Avro Confluent Schema Registry nightly end-to-end test' failed after > 2 minutes and 11 seconds! Test exited with exit code 1 > No taskexecutor daemon (pid: 29044) is running anymore on > travis-job-b0823aec-c4ec-4d4b-8b59-e9f968de9501. > No standalonesession daemon to stop on host > travis-job-b0823aec-c4ec-4d4b-8b59-e9f968de9501. > rm: cannot remove > '/home/travis/build/apache/flink/flink-dist/target/flink-1.10-SNAPSHOT-bin/flink-1.10-SNAPSHOT/plugins': > No such file or directory > {code} > https://api.travis-ci.org/v3/job/567273939/log.txt -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Comment Edited] (FLINK-13578) Blink throws exception when using Types.INTERVAL_MILLIS in TableSource
[ https://issues.apache.org/jira/browse/FLINK-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899882#comment-16899882 ] sunjincheng edited comment on FLINK-13578 at 8/5/19 8:22 AM: - This is better to fix in 1.9, so that user can using python in blink planner more easy, What do you think? [~jark] [~zhongwei] [~lzljs3620320] was (Author: sunjincheng121): This is better to fix in 1.9, so that user ban be using python in blink planner more easy, What do you think? [~jark] [~zhongwei] [~lzljs3620320] > Blink throws exception when using Types.INTERVAL_MILLIS in TableSource > -- > > Key: FLINK-13578 > URL: https://issues.apache.org/jira/browse/FLINK-13578 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.9.0 >Reporter: Wei Zhong >Priority: Critical > > Running this program will throw a TableException: > {code:java} > object Tests { > class MyTableSource extends InputFormatTableSource[java.lang.Long] { > val data = new java.util.ArrayList[java.lang.Long]() > data.add(1L) > data.add(2L) > data.add(3L) > val dataType = Types.INTERVAL_MILLIS() > val inputFormat = new CollectionInputFormat[java.lang.Long]( > data, dataType.createSerializer(new ExecutionConfig)) > override def getInputFormat: InputFormat[java.lang.Long, _ <: InputSplit] > = inputFormat > override def getTableSchema: TableSchema = > TableSchema.fromTypeInfo(dataType) > override def getReturnType: TypeInformation[java.lang.Long] = dataType > } > def main(args: Array[String]): Unit = { > val tenv = TableEnvironmentImpl.create( > > EnvironmentSettings.newInstance().inBatchMode().useBlinkPlanner().build()) > val table = tenv.fromTableSource(new MyTableSource) > tenv.registerTableSink("sink", Array("f0"), > Array(Types.INTERVAL_MILLIS()), new CsvTableSink("/tmp/results")) > table.select("f0").insertInto("sink") > tenv.execute("test") > } > } > {code} > The TableException detail: > {code:java} > Exception in thread "main" 
org.apache.flink.table.api.TableException: > Unsupported conversion from data type 'INTERVAL SECOND(3)' (conversion class: > java.time.Duration) to type information. Only data types that originated from > type information fully support a reverse conversion. > at > org.apache.flink.table.types.utils.LegacyTypeInfoDataTypeConverter.toLegacyTypeInfo(LegacyTypeInfoDataTypeConverter.java:242) > at > org.apache.flink.table.types.utils.TypeConversions.fromDataTypeToLegacyInfo(TypeConversions.java:49) > at > org.apache.flink.table.runtime.types.TypeInfoDataTypeConverter.fromDataTypeToTypeInfo(TypeInfoDataTypeConverter.java:145) > at > org.apache.flink.table.runtime.types.TypeInfoLogicalTypeConverter.fromLogicalTypeToTypeInfo(TypeInfoLogicalTypeConverter.java:58) > at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) > at > java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) > at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) > at > java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) > at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:545) > at > java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260) > at java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:438) > at > org.apache.flink.table.runtime.typeutils.BaseRowTypeInfo.(BaseRowTypeInfo.java:64) > at > org.apache.flink.table.runtime.typeutils.BaseRowTypeInfo.of(BaseRowTypeInfo.java:210) > at > org.apache.flink.table.planner.plan.utils.ScanUtil$.convertToInternalRow(ScanUtil.scala:126) > at > org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlanInternal(BatchExecTableSourceScan.scala:112) > at > org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlanInternal(BatchExecTableSourceScan.scala:48) > at > 
org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:54) > at > org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlan(BatchExecTableSourceScan.scala:48) > at > org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecSink.translateToTransformation(BatchExecSink.scala:127) > at > org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecSink.translateToPlanInternal(BatchExecSink.scala:92) > at > org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecSink.translateToPlanInternal(BatchExecSink.scala:50) > at > org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:54) > at > org.apache.flink.tabl
[jira] [Assigned] (FLINK-13579) Failed launching standalone cluster due to improperly configured irrelevant config options for active mode.
[ https://issues.apache.org/jira/browse/FLINK-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Till Rohrmann reassigned FLINK-13579: - Assignee: Xintong Song > Failed launching standalone cluster due to improper configured irrelevant > config options for active mode. > - > > Key: FLINK-13579 > URL: https://issues.apache.org/jira/browse/FLINK-13579 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.9.0 >Reporter: Xintong Song >Assignee: Xintong Song >Priority: Blocker > Labels: test-stability > Fix For: 1.9.0 > > -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (FLINK-13450) Adjust tests to tolerate arithmetic differences between x86 and ARM
[ https://issues.apache.org/jira/browse/FLINK-13450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899885#comment-16899885 ] Stephan Ewen commented on FLINK-13450: -- How big is the performance difference in practice? Unless it is major, could we just switch to {{StrictMath}}, rather than having yet another switch? > Adjust tests to tolerate arithmetic differences between x86 and ARM > --- > > Key: FLINK-13450 > URL: https://issues.apache.org/jira/browse/FLINK-13450 > Project: Flink > Issue Type: Sub-task > Components: Tests >Affects Versions: 1.9.0 >Reporter: Stephan Ewen >Priority: Major > > Certain arithmetic operations have different precision/rounding on ARM versus > x86. > Tests using floating point numbers should be changed to tolerate a certain > minimal deviation. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
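A side note on the trade-off Stephan raises (illustrative plain Java, not Flink code): {{StrictMath}} is pinned to the fdlibm algorithms and therefore produces bit-identical results on every platform, while {{Math}} is only required to be accurate to within 1 ulp and may use faster platform intrinsics whose last-bit rounding can differ, e.g. between x86 and ARM.

```java
// Illustration of Math vs. StrictMath: both are within ~1 ulp of the exact
// result, so they can differ by at most a few ulps, but only StrictMath is
// guaranteed to return identical bits on every architecture.
public class StrictMathDemo {
    public static void main(String[] args) {
        double x = 0.123456789;
        double viaMath = Math.exp(x);        // may be a platform intrinsic
        double viaStrict = StrictMath.exp(x); // fdlibm: reproducible everywhere
        // The two results agree to well within a few ulps of ~1.131.
        System.out.println(Math.abs(viaMath - viaStrict) < 1e-15);
    }
}
```

Which is why switching tests (or the code under test) to `StrictMath` removes the x86/ARM deviation entirely, at some throughput cost on operations where the JIT would otherwise use an intrinsic.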
[jira] [Commented] (FLINK-13567) Avro Confluent Schema Registry nightly end-to-end test failed on Travis
[ https://issues.apache.org/jira/browse/FLINK-13567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899887#comment-16899887 ] Dawid Wysakowicz commented on FLINK-13567: -- I just checked that this is a transient problem with downloading Kafka 0.10 from the Apache archives. I think we can unblock the release from this issue. We can hopefully get rid of this kind of issue when we migrate to the new e2e framework with a proper caching mechanism. > Avro Confluent Schema Registry nightly end-to-end test failed on Travis > --- > > Key: FLINK-13567 > URL: https://issues.apache.org/jira/browse/FLINK-13567 > Project: Flink > Issue Type: Bug > Components: Tests >Affects Versions: 1.9.0, 1.10.0 >Reporter: Till Rohrmann >Priority: Major > Labels: test-stability > Fix For: 1.9.0 > > Attachments: patch.diff > > > The {{Avro Confluent Schema Registry nightly end-to-end test}} failed on > Travis with > {code} > [FAIL] 'Avro Confluent Schema Registry nightly end-to-end test' failed after > 2 minutes and 11 seconds! Test exited with exit code 1 > No taskexecutor daemon (pid: 29044) is running anymore on > travis-job-b0823aec-c4ec-4d4b-8b59-e9f968de9501. > No standalonesession daemon to stop on host > travis-job-b0823aec-c4ec-4d4b-8b59-e9f968de9501. > rm: cannot remove > '/home/travis/build/apache/flink/flink-dist/target/flink-1.10-SNAPSHOT-bin/flink-1.10-SNAPSHOT/plugins': > No such file or directory > {code} > https://api.travis-ci.org/v3/job/567273939/log.txt -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (FLINK-13569) DDL table property key is defined as an identifier but should be a string literal instead
[ https://issues.apache.org/jira/browse/FLINK-13569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899889#comment-16899889 ] Timo Walther commented on FLINK-13569: -- But what are the implications of using identifiers? Can users use reserved keywords in those properties, or do they need to escape them with {{` `}}? > DDL table property key is defined as an identifier but should be a string literal > instead > - > > Key: FLINK-13569 > URL: https://issues.apache.org/jira/browse/FLINK-13569 > Project: Flink > Issue Type: Bug > Components: Table SQL / API >Reporter: Xuefu Zhang >Priority: Major > Fix For: 1.9.0, 1.10.0 > > > The key name should be any free text, and should not be constrained by the > identifier grammar. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[GitHub] [flink] wuchong commented on a change in pull request #9316: [FLINK-13529][table-planner-blink] Verify and correct agg function's semantic for Blink planner
wuchong commented on a change in pull request #9316: [FLINK-13529][table-planner-blink] Verify and correct agg function's semantic for Blink planner URL: https://github.com/apache/flink/pull/9316#discussion_r310489669 ## File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/functions/sql/FlinkSqlOperatorTable.java ## @@ -946,14 +946,9 @@ public SqlMonotonicity getMonotonicity(SqlOperatorBinding call) { public static final SqlFirstLastValueAggFunction LAST_VALUE = new SqlFirstLastValueAggFunction(SqlKind.LAST_VALUE); /** -* CONCAT_AGG aggregate function. +* LISTAGG aggregate function. */ - public static final SqlConcatAggFunction CONCAT_AGG = new SqlConcatAggFunction(); - - /** -* INCR_SUM aggregate function. -*/ - public static final SqlIncrSumAggFunction INCR_SUM = new SqlIncrSumAggFunction(); + public static final SqlListAggFunction LISTAGG = new SqlListAggFunction(); Review comment: Yes, I agree. Please add a comment on `SqlListAggFunction` explaining why we do not use SqlStdOperatorTable.LISTAGG. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (FLINK-13449) Add ARM architecture to MemoryArchitecture
[ https://issues.apache.org/jira/browse/FLINK-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899892#comment-16899892 ] Stephan Ewen commented on FLINK-13449: -- Can you elaborate on why the test fails when using file shuffles? > Add ARM architecture to MemoryArchitecture > -- > > Key: FLINK-13449 > URL: https://issues.apache.org/jira/browse/FLINK-13449 > Project: Flink > Issue Type: Sub-task >Reporter: Stephan Ewen >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Currently, {{MemoryArchitecture}} recognizes only various versions of x86 and > amd64 / ia64. > We should add aarch64 for ARM to the known architectures. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
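For illustration, recognizing aarch64 typically comes down to matching the `os.arch` system property. A minimal sketch (hypothetical names; not Flink's actual {{MemoryArchitecture}} code), grounded in the ticket's description of the architectures currently recognized plus the proposed addition:

```java
// Sketch of architecture detection via the os.arch system property.
// The enum and class names here are illustrative, not Flink's.
public class ArchProbe {
    enum Arch { X86_32, X86_64, ARM_64, UNKNOWN }

    static Arch classify(String osArch) {
        switch (osArch.toLowerCase()) {
            case "x86": case "i386": case "i486": case "i586": case "i686":
                return Arch.X86_32;
            case "amd64": case "x86_64": case "ia64":
                return Arch.X86_64;
            case "aarch64": case "arm64": // the names the ticket proposes adding
                return Arch.ARM_64;
            default:
                return Arch.UNKNOWN;
        }
    }

    public static void main(String[] args) {
        System.out.println(classify("aarch64")); // ARM_64
        System.out.println(classify(System.getProperty("os.arch")));
    }
}
```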
[GitHub] [flink] zentol edited a comment on issue #9318: [FLINK-13044][s3][fs] Fix handling of relocated amazon classes
zentol edited a comment on issue #9318: [FLINK-13044][s3][fs] Fix handling of relocated amazon classes URL: https://github.com/apache/flink/pull/9318#issuecomment-517982150 According to the CI results there's a shading issue in the Presto filesystem: when I set up the E2E test to use a credentials provider, I run into a NoSuchMethodException: ``` Caused by: java.lang.RuntimeException: Error creating an instance of org.apache.flink.fs.s3base.shaded.com.amazonaws.auth.EnvironmentVariableCredentialsProvider for URI s3://[secure]/static/words at org.apache.flink.fs.s3presto.shaded.com.facebook.presto.hive.s3.PrestoS3FileSystem.getCustomAWSCredentialsProvider(PrestoS3FileSystem.java:724) at org.apache.flink.fs.s3presto.shaded.com.facebook.presto.hive.s3.PrestoS3FileSystem.getAwsCredentialsProvider(PrestoS3FileSystem.java:708) at org.apache.flink.fs.s3presto.shaded.com.facebook.presto.hive.s3.PrestoS3FileSystem.createAmazonS3Client(PrestoS3FileSystem.java:632) at org.apache.flink.fs.s3presto.shaded.com.facebook.presto.hive.s3.PrestoS3FileSystem.initialize(PrestoS3FileSystem.java:216) at org.apache.flink.fs.s3.common.AbstractS3FileSystemFactory.create(AbstractS3FileSystemFactory.java:126) ... 28 more Caused by: java.lang.NoSuchMethodException: org.apache.flink.fs.s3base.shaded.com.amazonaws.auth.EnvironmentVariableCredentialsProvider.(java.net.URI, org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.conf.Configuration) at java.lang.Class.getConstructor0(Class.java:3082) at java.lang.Class.getConstructor(Class.java:1825) at org.apache.flink.fs.s3presto.shaded.com.facebook.presto.hive.s3.PrestoS3FileSystem.getCustomAWSCredentialsProvider(PrestoS3FileSystem.java:720) ... 32 more ``` Branch: https://github.com/apache/flink/tree/s3_test2 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (FLINK-9845) Make InternalTimerService's timer processing interruptible/abortable
[ https://issues.apache.org/jira/browse/FLINK-9845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899895#comment-16899895 ] Biao Liu commented on FLINK-9845: - I think this issue should be reconsidered through mailbox thread model. > Make InternalTimerService's timer processing interruptible/abortable > > > Key: FLINK-9845 > URL: https://issues.apache.org/jira/browse/FLINK-9845 > Project: Flink > Issue Type: Improvement > Components: Runtime / State Backends >Affects Versions: 1.5.1, 1.6.0 >Reporter: Till Rohrmann >Assignee: Biao Liu >Priority: Major > > When cancelling a {{Task}}, the task thread might currently process the > timers registered at the {{InternalTimerService}}. Depending on the timer > action, this might take a while and, thus, blocks the cancellation of the > {{Task}}. In the most extreme case, the {{TaskCancelerWatchDog}} kicks in and > kills the whole {{TaskManager}} process. > In order to alleviate the problem (speed up the cancellation reaction), we > should make the processing of the timers interruptible/abortable. This means > that instead of processing all timers we should check in between timers > whether the {{Task}} is currently being cancelled or not. If this is the > case, then we should directly stop processing the remaining timers and return. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
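The improvement described in the ticket — checking for cancellation between timers instead of draining the whole queue unconditionally — can be sketched as follows (hypothetical names; not the actual {{InternalTimerService}} API):

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of interruptible timer processing: the loop re-checks a cancel flag
// between timer firings, so a cancelling Task returns promptly instead of
// running every remaining timer (and possibly tripping the TaskCancelerWatchDog).
public class InterruptibleTimerLoop {
    public static void main(String[] args) {
        Queue<Runnable> timers = new ArrayDeque<>();     // stand-in for the timer queue
        AtomicBoolean cancelRequested = new AtomicBoolean(false);
        final int[] fired = {0};
        for (int i = 0; i < 5; i++) {
            final int id = i;
            timers.add(() -> {
                fired[0]++;
                if (id == 1) {
                    cancelRequested.set(true); // simulate cancellation arriving mid-processing
                }
            });
        }
        // Check the cancel flag between timers rather than draining the queue.
        while (!timers.isEmpty() && !cancelRequested.get()) {
            timers.poll().run();
        }
        System.out.println(fired[0]); // 2 — the remaining three timers were skipped
    }
}
```

As Biao notes, under the mailbox thread model this falls out naturally: each timer firing becomes a mailbox letter, and a cancellation letter can be interleaved between them.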
[jira] [Commented] (FLINK-13497) Checkpoints can complete after CheckpointFailureManager fails job
[ https://issues.apache.org/jira/browse/FLINK-13497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899896#comment-16899896 ] Till Rohrmann commented on FLINK-13497: --- [~pnowojski] I think this issue would benefit from your attention. > Checkpoints can complete after CheckpointFailureManager fails job > - > > Key: FLINK-13497 > URL: https://issues.apache.org/jira/browse/FLINK-13497 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.9.0, 1.10.0 >Reporter: Till Rohrmann >Priority: Critical > Fix For: 1.9.0 > > > I think that we introduced with FLINK-12364 an inconsistency wrt to job > termination a checkpointing. In FLINK-9900 it was discovered that checkpoints > can complete even after the {{CheckpointFailureManager}} decided to fail a > job. I think the expected behaviour should be that we fail all pending > checkpoints once the {{CheckpointFailureManager}} decides to fail the job. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (FLINK-13527) Instable KafkaProducerExactlyOnceITCase due to CheckpointFailureManager
[ https://issues.apache.org/jira/browse/FLINK-13527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899898#comment-16899898 ] Till Rohrmann commented on FLINK-13527: --- [~pnowojski] I think this issue would benefit from your attention because it is related to FLINK-13497. > Instable KafkaProducerExactlyOnceITCase due to CheckpointFailureManager > --- > > Key: FLINK-13527 > URL: https://issues.apache.org/jira/browse/FLINK-13527 > Project: Flink > Issue Type: Bug > Components: Tests >Affects Versions: 1.9.0 >Reporter: Yun Tang >Assignee: vinoyang >Priority: Blocker > Fix For: 1.9.0 > > > [~banmoy] and I met this unstable test below: > [https://api.travis-ci.org/v3/job/565270958/log.txt] > [https://api.travis-ci.com/v3/job/221237628/log.txt] > The root cause is that the task {{Source: Custom Source -> Map -> Sink: Unnamed > (1/1)}} failed due to the expected artificial test failure and then freed its task > resources, including closing the registry. However, the async checkpoint thread > in {{SourceStreamTask}} would then fail and send a decline-checkpoint message > to the JM. > The key logs are: > {code:java} > 03:36:46,639 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph >- Source: Custom Source -> Map -> Sink: Unnamed (1/1) > (f45ff068d2c80da22c2a958739ec0c87) switched from RUNNING to FAILED. 
> java.lang.Exception: Artificial Test Failure > at > org.apache.flink.streaming.connectors.kafka.testutils.FailingIdentityMapper.map(FailingIdentityMapper.java:79) > at > org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:41) > at > org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:637) > at > org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:612) > at > org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:592) > at > org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:727) > at > org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:705) > at > org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104) > at > org.apache.flink.streaming.connectors.kafka.testutils.IntegerSource.run(IntegerSource.java:75) > at > org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100) > at > org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63) > at > org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:172) > 03:36:46,637 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator >- Decline checkpoint 12 by task f45ff068d2c80da22c2a958739ec0c87 of job > d5b629623731c66f1bac89dec3e87b89 at 03cbfd77-0727-4366-83c4-9aa4923fc817 @ > localhost (dataPort=-1). > 03:36:46,640 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator >- Discarding checkpoint 12 of job d5b629623731c66f1bac89dec3e87b89. > org.apache.flink.runtime.checkpoint.CheckpointException: Could not complete > snapshot 12 for operator Source: Custom Source -> Map -> Sink: Unnamed (1/1). > Failure reason: Checkpoint was declined. 
> at > org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:431) > at > org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.checkpointStreamOperator(StreamTask.java:1248) > at > org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.executeCheckpointing(StreamTask.java:1182) > at > org.apache.flink.streaming.runtime.tasks.StreamTask.checkpointState(StreamTask.java:853) > at > org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:758) > at > org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:667) > at > org.apache.flink.streaming.runtime.tasks.SourceStreamTask.triggerCheckpoint(SourceStreamTask.java:147) > at org.apache.flink.runtime.taskmanager.Task$1.run(Task.java:1138) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: jav
[jira] [Commented] (FLINK-12481) Make processing time timer trigger run via the mailbox
[ https://issues.apache.org/jira/browse/FLINK-12481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899901#comment-16899901 ] Biao Liu commented on FLINK-12481: -- Hi [~srichter], I'm just wondering what the latest state of this issue. The relevant PR seems to be abandoned. > Make processing time timer trigger run via the mailbox > -- > > Key: FLINK-12481 > URL: https://issues.apache.org/jira/browse/FLINK-12481 > Project: Flink > Issue Type: Sub-task > Components: Runtime / Task >Reporter: Stefan Richter >Assignee: Alex >Priority: Major > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > This sub-task integrates the mailbox with processing time timer triggering. > Those triggers should now be enqueued as mailbox events and picked up by the > stream task's main thread for processing. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Comment Edited] (FLINK-12481) Make processing time timer trigger run via the mailbox
[ https://issues.apache.org/jira/browse/FLINK-12481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899901#comment-16899901 ] Biao Liu edited comment on FLINK-12481 at 8/5/19 8:34 AM: -- Hi [~srichter], I'm just wondering what the latest state of this issue is. The relevant PR seems to be abandoned. was (Author: sleepy): Hi [~srichter], I'm just wondering what the latest state of this issue. The relevant PR seems to be abandoned. > Make processing time timer trigger run via the mailbox > -- > > Key: FLINK-12481 > URL: https://issues.apache.org/jira/browse/FLINK-12481 > Project: Flink > Issue Type: Sub-task > Components: Runtime / Task >Reporter: Stefan Richter >Assignee: Alex >Priority: Major > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > This sub-task integrates the mailbox with processing time timer triggering. > Those triggers should now be enqueued as mailbox events and picked up by the > stream task's main thread for processing. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[GitHub] [flink] flinkbot edited a comment on issue #9356: [FLINK-13340][kafka][table] Add 'topics' and 'subscriptionPattern' option for Flink Kafka connector
flinkbot edited a comment on issue #9356: [FLINK-13340][kafka][table] Add 'topics' and 'subscriptionPattern' option for Flink Kafka connector URL: https://github.com/apache/flink/pull/9356#issuecomment-518117192 ## CI report: * 994378a936cb3d0e91dd78607e81229c4680e7d6 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121912507) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] twalthr commented on issue #9334: [FLINK-10257][FLINK-13463][table] Support VARCHAR type generalization consistently
twalthr commented on issue #9334: [FLINK-10257][FLINK-13463][table] Support VARCHAR type generalization consistently URL: https://github.com/apache/flink/pull/9334#issuecomment-518143534 Thanks @dawidwys. Merging... This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] twalthr commented on a change in pull request #9334: [FLINK-10257][FLINK-13463][table] Support VARCHAR type generalization consistently
twalthr commented on a change in pull request #9334: [FLINK-10257][FLINK-13463][table] Support VARCHAR type generalization consistently URL: https://github.com/apache/flink/pull/9334#discussion_r310496320 ## File path: flink-table/flink-table-common/src/test/java/org/apache/flink/table/types/LogicalTypeDuplicatorTest.java ## @@ -0,0 +1,161 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.flink.table.types; + +import org.apache.flink.table.catalog.ObjectIdentifier; +import org.apache.flink.table.types.logical.ArrayType; +import org.apache.flink.table.types.logical.BigIntType; +import org.apache.flink.table.types.logical.BooleanType; +import org.apache.flink.table.types.logical.CharType; +import org.apache.flink.table.types.logical.DistinctType; +import org.apache.flink.table.types.logical.IntType; +import org.apache.flink.table.types.logical.LogicalType; +import org.apache.flink.table.types.logical.MapType; +import org.apache.flink.table.types.logical.MultisetType; +import org.apache.flink.table.types.logical.RowType; +import org.apache.flink.table.types.logical.SmallIntType; +import org.apache.flink.table.types.logical.StructuredType; +import org.apache.flink.table.types.logical.VarCharType; +import org.apache.flink.table.types.logical.utils.LogicalTypeDuplicator; + +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.Parameterized; +import org.junit.runners.Parameterized.Parameter; +import org.junit.runners.Parameterized.Parameters; + +import java.util.Arrays; +import java.util.Collections; +import java.util.List; + +import static org.hamcrest.CoreMatchers.equalTo; +import static org.junit.Assert.assertThat; + +/** + * Tests for {@link LogicalTypeDuplicator}. + */ +@RunWith(Parameterized.class) +public class LogicalTypeDuplicatorTest { + + private static final LogicalTypeDuplicator DUPLICATOR = new LogicalTypeDuplicator(); Review comment: That's on purpose. The `IntReplacer` tests this behavior. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (FLINK-13450) Adjust tests to tolerate arithmetic differences between x86 and ARM
[ https://issues.apache.org/jira/browse/FLINK-13450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899908#comment-16899908 ] wangxiyuan commented on FLINK-13450: There is an OpenJDK bug for StrictMath performance: [https://bugs.openjdk.java.net/browse/JDK-8210416] According to its test, even after this bug fix, some calculations using StrictMath are still slower than Math. For example:
Function | java.lang.Math | java.lang.StrictMath
sin | 1.713649346570452 0m5.800s | 1.7136493465700542 0m18.731s
cos | 0.17098435541810225 0m5.765s | 0.1709843554185943 0m18.796s
tan | -5.5500322522995315E7 0m6.031s | -5.5500322522995315E7 0m21.093s
log | 1.7420680845245087E9 0m2.321s | 1.7420680845245087E9 0m4.439s
log10 | 7.565705562087342E8 0m2.263s | 7.565705562087342E8 0m5.543s
> Adjust tests to tolerate arithmetic differences between x86 and ARM > --- > > Key: FLINK-13450 > URL: https://issues.apache.org/jira/browse/FLINK-13450 > Project: Flink > Issue Type: Sub-task > Components: Tests >Affects Versions: 1.9.0 >Reporter: Stephan Ewen >Priority: Major > > Certain arithmetic operations have different precision/rounding on ARM versus > x86. > Tests using floating point numbers should be changed to tolerate a certain > minimal deviation. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
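A comparison like the one in the table above can be reproduced with a small standalone program. The sketch below is not the benchmark behind the reported numbers: loop size and timings are machine-dependent and purely illustrative, and only the near-agreement of the two classes' results is portable.

```java
// Minimal sketch (not the benchmark used for the numbers above) comparing
// java.lang.Math with java.lang.StrictMath. Timings vary per machine;
// what is portable is that results agree to within a couple of ulps.
public class MathVsStrictMath {

    // Absolute difference between the two sine implementations at x.
    static double diffSin(double x) {
        return Math.abs(Math.sin(x) - StrictMath.sin(x));
    }

    public static void main(String[] args) {
        System.out.println("difference at x=0.5: " + diffSin(0.5));

        final int n = 1_000_000;
        long t0 = System.nanoTime();
        double acc = 0.0;
        for (int i = 0; i < n; i++) {
            acc += Math.sin(i * 1e-6);
        }
        long mathNanos = System.nanoTime() - t0;

        t0 = System.nanoTime();
        double strictAcc = 0.0;
        for (int i = 0; i < n; i++) {
            strictAcc += StrictMath.sin(i * 1e-6);
        }
        long strictNanos = System.nanoTime() - t0;

        System.out.println("Math loop:       " + mathNanos + " ns (acc=" + acc + ")");
        System.out.println("StrictMath loop: " + strictNanos + " ns (acc=" + strictAcc + ")");
    }
}
```

This illustrates why ARM-vs-x86 test tolerances matter: Math is allowed to use faster, platform-specific intrinsics (within 1 ulp and semi-monotonic), while StrictMath pins the exact fdlibm result on every platform.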
[GitHub] [flink] flinkbot edited a comment on issue #9024: [FLINK-13119] add blink table config to documentation
flinkbot edited a comment on issue #9024: [FLINK-13119] add blink table config to documentation URL: https://github.com/apache/flink/pull/9024#issuecomment-512084139 ## CI report: * 1e4a2e9a584232fd8f5b441567190e1149b6f72f : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119409238) * 0e651b1490efc20b8f974c651dbee6c061548a9e : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119638477) * 88dffe037b8d71af38c5cb0c1d2d8452263e5bc0 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119766372) * 090e45f4b1515e2e808577355e71981fda2be4fa : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/119969718) * e0ff1434b2a0fae8a300bcf473f6365a5663d867 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121734928) * 4e12f62b74500cbe33671f3d59f768252273629d : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/121923285) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] twalthr commented on issue #9024: [FLINK-13119] add blink table config to documentation
twalthr commented on issue #9024: [FLINK-13119] add blink table config to documentation URL: https://github.com/apache/flink/pull/9024#issuecomment-518147572 @wuchong if this PR is in good shape from your side, I would volunteer to merge it, add some more introductory words, and fix a couple of typos, if that is ok? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (FLINK-13489) Heavy deployment end-to-end test fails on Travis with TM heartbeat timeout
[ https://issues.apache.org/jira/browse/FLINK-13489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899914#comment-16899914 ] Till Rohrmann commented on FLINK-13489: --- Yes [~gaoyunhaii], the cut-off configuration problem has been introduced with FLINK-13241. [~xintongsong] opened FLINK-13579 to fix this issue. That leaves now the problem of the Akka timeout issues to be figured out. Maybe it's solely because of Travis but we should make sure that this is the case. > Heavy deployment end-to-end test fails on Travis with TM heartbeat timeout > -- > > Key: FLINK-13489 > URL: https://issues.apache.org/jira/browse/FLINK-13489 > Project: Flink > Issue Type: Bug > Components: Test Infrastructure >Reporter: Tzu-Li (Gordon) Tai >Assignee: Yingjie Cao >Priority: Blocker > Labels: pull-request-available > Fix For: 1.9.0 > > > https://api.travis-ci.org/v3/job/564925128/log.txt > {code} > > The program finished with the following exception: > org.apache.flink.client.program.ProgramInvocationException: Job failed. 
> (JobID: 1b4f1807cc749628cfc1bdf04647527a) > at > org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:250) > at > org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:338) > at > org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:60) > at > org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1507) > at > org.apache.flink.deployment.HeavyDeploymentStressTestProgram.main(HeavyDeploymentStressTestProgram.java:70) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576) > at > org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438) > at > org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274) > at > org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746) > at > org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273) > at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205) > at > org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010) > at > org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836) > at > org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41) > at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083) > Caused by: 
org.apache.flink.runtime.client.JobExecutionException: Job > execution failed. > at > org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146) > at > org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:247) > ... 21 more > Caused by: java.util.concurrent.TimeoutException: Heartbeat of TaskManager > with id ea456d6a590eca7598c19c4d35e56db9 timed out. > at > org.apache.flink.runtime.jobmaster.JobMaster$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(JobMaster.java:1149) > at > org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:397) > at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:190) > at > org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74) > at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152) > at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) > at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) > at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) > at akka.japi.pf.
[jira] [Commented] (FLINK-13489) Heavy deployment end-to-end test fails on Travis with TM heartbeat timeout
[ https://issues.apache.org/jira/browse/FLINK-13489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899915#comment-16899915 ] Till Rohrmann commented on FLINK-13489: --- This issue is partially caused by FLINK-13579. The Akka timeouts might not be explained by this issue, though. > Heavy deployment end-to-end test fails on Travis with TM heartbeat timeout > -- > > Key: FLINK-13489 > URL: https://issues.apache.org/jira/browse/FLINK-13489 > Project: Flink > Issue Type: Bug > Components: Test Infrastructure >Reporter: Tzu-Li (Gordon) Tai >Assignee: Yingjie Cao >Priority: Blocker > Labels: pull-request-available > Fix For: 1.9.0 > > > https://api.travis-ci.org/v3/job/564925128/log.txt > {code} > > The program finished with the following exception: > org.apache.flink.client.program.ProgramInvocationException: Job failed. > (JobID: 1b4f1807cc749628cfc1bdf04647527a) > at > org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:250) > at > org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:338) > at > org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:60) > at > org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1507) > at > org.apache.flink.deployment.HeavyDeploymentStressTestProgram.main(HeavyDeploymentStressTestProgram.java:70) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576) > at > org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438) > at > 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274) > at > org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746) > at > org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273) > at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205) > at > org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010) > at > org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836) > at > org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41) > at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083) > Caused by: org.apache.flink.runtime.client.JobExecutionException: Job > execution failed. > at > org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146) > at > org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:247) > ... 21 more > Caused by: java.util.concurrent.TimeoutException: Heartbeat of TaskManager > with id ea456d6a590eca7598c19c4d35e56db9 timed out. 
> at > org.apache.flink.runtime.jobmaster.JobMaster$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(JobMaster.java:1149) > at > org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:397) > at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:190) > at > org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74) > at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152) > at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) > at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) > at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) > at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) > at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) > at scala.PartialFunction$OrElse.applyOrElse(Par
[GitHub] [flink] asfgit closed pull request #9334: [FLINK-10257][FLINK-13463][table] Support VARCHAR type generalization consistently
asfgit closed pull request #9334: [FLINK-10257][FLINK-13463][table] Support VARCHAR type generalization consistently URL: https://github.com/apache/flink/pull/9334 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] dawidwys commented on a change in pull request #8852: [FLINK-12798][table-api][table-planner] Add a proper discover mechanism that will enable switching between Flink & Blink Planner/
dawidwys commented on a change in pull request #8852: [FLINK-12798][table-api][table-planner] Add a proper discover mechanism that will enable switching between Flink & Blink Planner/Executor URL: https://github.com/apache/flink/pull/8852#discussion_r310503698 ## File path: flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/TableFactoryService.java ## @@ -103,9 +106,52 @@ * @param <T> factory class type * @return the matching factory */ - public static <T extends TableFactory> T find(Class<T> factoryClass, Map<String, String> propertyMap, ClassLoader classLoader) { + public static <T extends TableFactory> T find( + Class<T> factoryClass, + Map<String, String> propertyMap, + ClassLoader classLoader) { Preconditions.checkNotNull(classLoader); - return findInternal(factoryClass, propertyMap, Optional.of(classLoader)); + return findSingleInternal(factoryClass, propertyMap, Optional.of(classLoader)); + } + + /** +* Finds all table factories of the given class and property map. +* +* @param factoryClass desired factory class +* @param propertyMap properties that describe the factory configuration +* @param <T> factory class type +* @return all the matching factories +*/ + public static <T extends TableFactory> List<T> findAll(Class<T> factoryClass, Map<String, String> propertyMap) { + return findAllInternal(factoryClass, propertyMap, Optional.empty()); Review comment: Hi @ssquan What is the exception you are getting? The `defaultLoader` actually already uses the user classloader. Could you make sure you have proper entries in `META-INF/services/org.apache.flink.table.factories.TableFactory`? You might need a `https://maven.apache.org/plugins/maven-shade-plugin/examples/resource-transformers.html#ServicesResourceTransformer` if you have services coming from multiple dependencies. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
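The `META-INF/services` discovery discussed above is the standard JDK `ServiceLoader` mechanism. The sketch below shows that mechanism in isolation; `MyFactory` is a hypothetical stand-in for `org.apache.flink.table.factories.TableFactory`, not Flink code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

// Sketch of the plain JDK mechanism behind META-INF/services discovery.
// MyFactory is a hypothetical stand-in for Flink's TableFactory interface.
interface MyFactory {
    String identifier();
}

public class FactoryDiscovery {

    // Collects every implementation listed in a META-INF/services file named
    // after MyFactory's fully-qualified name and visible to the classloader.
    // ServiceLoader is lazy; iterating instantiates the providers one by one.
    static List<MyFactory> discover(ClassLoader classLoader) {
        List<MyFactory> factories = new ArrayList<>();
        for (MyFactory factory : ServiceLoader.load(MyFactory.class, classLoader)) {
            factories.add(factory);
        }
        return factories;
    }

    public static void main(String[] args) {
        // Without any provider entries on the classpath the loader silently
        // finds nothing -- the symptom when shading drops the services files
        // and no ServicesResourceTransformer is configured.
        List<MyFactory> found = discover(FactoryDiscovery.class.getClassLoader());
        System.out.println("found " + found.size() + " factories");
    }
}
```

This is why the `ServicesResourceTransformer` matters when building fat jars: it concatenates the services files from all dependencies instead of letting one jar's file overwrite another's.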
[jira] [Commented] (FLINK-13579) Failed launching standalone cluster due to improperly configured irrelevant config options for active mode.
[ https://issues.apache.org/jira/browse/FLINK-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899918#comment-16899918 ] Till Rohrmann commented on FLINK-13579: --- The problem is that we always try to update the {{ResourceManager}} configuration independently of whether we are trying to start the active or the standalone/reactive mode. The update method fails if some configuration parameters are not set, which is the case in standalone mode: https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/entrypoint/component/AbstractDispatcherResourceManagerComponentFactory.java#L171 > Failed launching standalone cluster due to improperly configured irrelevant > config options for active mode. > - > > Key: FLINK-13579 > URL: https://issues.apache.org/jira/browse/FLINK-13579 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.9.0 >Reporter: Xintong Song >Assignee: Xintong Song >Priority: Blocker > Labels: test-stability > Fix For: 1.9.0 > > -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (FLINK-13579) Failed launching standalone cluster due to improperly configured irrelevant config options for active mode.
[ https://issues.apache.org/jira/browse/FLINK-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899919#comment-16899919 ] Till Rohrmann commented on FLINK-13579: --- I think we should fix the problem by moving the {{ResourceManager}} update procedure into an {{ActiveResourceManagerFactory}}, which is extended by the {{YarnResourceManagerFactory}} and {{MesosResourceManagerFactory}}. > Failed launching standalone cluster due to improperly configured irrelevant > config options for active mode. > - > > Key: FLINK-13579 > URL: https://issues.apache.org/jira/browse/FLINK-13579 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.9.0 >Reporter: Xintong Song >Assignee: Xintong Song >Priority: Blocker > Labels: test-stability > Fix For: 1.9.0 > > -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[GitHub] [flink] GJL commented on issue #9291: [FLINK-13508][tests] CommonTestUtils#waitUntilCondition() may attempt to sleep with negative time
GJL commented on issue #9291: [FLINK-13508][tests] CommonTestUtils#waitUntilCondition() may attempt to sleep with negative time URL: https://github.com/apache/flink/pull/9291#issuecomment-518150215 Merged. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] GJL closed pull request #9291: [FLINK-13508][tests] CommonTestUtils#waitUntilCondition() may attempt to sleep with negative time
GJL closed pull request #9291: [FLINK-13508][tests] CommonTestUtils#waitUntilCondition() may attempt to sleep with negative time URL: https://github.com/apache/flink/pull/9291 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] wuchong commented on issue #9024: [FLINK-13119] add blink table config to documentation
wuchong commented on issue #9024: [FLINK-13119] add blink table config to documentation URL: https://github.com/apache/flink/pull/9024#issuecomment-518150476 Yes. It looks good from my side. Thanks for the help @twalthr . This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] asfgit merged pull request #9343: [FLINK-13508][1.8][tests] CommonTestUtils#waitUntilCondition() may attempt to sleep with negative time
asfgit merged pull request #9343: [FLINK-13508][1.8][tests] CommonTestUtils#waitUntilCondition() may attempt to sleep with negative time URL: https://github.com/apache/flink/pull/9343 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Created] (FLINK-13580) Add overload support for user defined function to blink-planner
Jingsong Lee created FLINK-13580: Summary: Add overload support for user defined function to blink-planner Key: FLINK-13580 URL: https://issues.apache.org/jira/browse/FLINK-13580 Project: Flink Issue Type: New Feature Components: Table SQL / Planner Reporter: Jingsong Lee Fix For: 1.10.0 Currently, overloading is not supported in user-defined functions. Given the following UDF {code:java} class Func21 extends ScalarFunction { def eval(p: People): String = { p.name } def eval(p: Student): String = { "student#" + p.name } } class People(val name: String) class Student(name: String) extends People(name) class GraduatedStudent(name: String) extends Student(name) {code} Queries such as the following will fail to compile with the error message "Found multiple 'eval' methods which match the signature." {code:java} val udf = new Func21 val table = ... table.select(udf(new GraduatedStudent("test"))) {code} That's because overloading is not supported in user-defined functions currently. I think it will make sense to support overloading following the Java language specification in section [15.12|https://docs.oracle.com/javase/specs/jls/se7/html/jls-15.html#jls-15.12]. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
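For reference, the "most specific method" rule of JLS §15.12 that the ticket proposes to follow can be shown in plain Java. The class names mirror the Scala example above; this is an illustration of the desired resolution behavior, not Flink code.

```java
// Plain-Java illustration of JLS 15.12 overload resolution, mirroring the
// People/Student/GraduatedStudent example above. Not Flink code.
class People {
    final String name;
    People(String name) { this.name = name; }
}

class Student extends People {
    Student(String name) { super(name); }
}

class GraduatedStudent extends Student {
    GraduatedStudent(String name) { super(name); }
}

public class OverloadResolution {

    static String eval(People p) { return p.name; }

    static String eval(Student s) { return "student#" + s.name; }

    public static void main(String[] args) {
        // Both overloads are applicable to a GraduatedStudent argument;
        // javac picks the most specific one: eval(Student).
        System.out.println(eval(new GraduatedStudent("test"))); // student#test

        // Resolution happens at compile time on the static type: with a
        // People-typed variable, eval(People) is the only applicable method.
        People p = new GraduatedStudent("test");
        System.out.println(eval(p)); // test
    }
}
```

So under JLS semantics the query in the ticket is unambiguous: `GraduatedStudent` should bind to the `Student` overload rather than raising "Found multiple 'eval' methods".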
[jira] [Commented] (FLINK-13449) Add ARM architecture to MemoryArchitecture
[ https://issues.apache.org/jira/browse/FLINK-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899923#comment-16899923 ] wangxiyuan commented on FLINK-13449: Take [testBlockingPartitionIsConsumableMultipleTimesIfNotReleasedOnConsumption|https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/io/network/partition/ResultPartitionTest.java#L122] as an example: Since aarch64 is not supported, the [BoundedBlockingType|https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/ResultPartitionFactory.java#L197] will be *FILE* instead of *FILE_MMAP*. Then when reading the [Buffer|https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/BoundedBlockingSubpartitionReader.java#L71], it tries to [get 4 bit|https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/BufferReaderWriterUtil.java#L159], but the segment is only 1 bit, so an *IndexOutOfBoundsException* is raised. This is just an explanation at the code level; since I'm still a newbie to Flink, sorry that I don't know the deeper Flink concepts at this moment. > Add ARM architecture to MemoryArchitecture > -- > > Key: FLINK-13449 > URL: https://issues.apache.org/jira/browse/FLINK-13449 > Project: Flink > Issue Type: Sub-task >Reporter: Stephan Ewen >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Currently, {{MemoryArchitecture}} recognizes only various versions of x86 and > amd64 / ia64. > We should add aarch64 for ARM to the known architectures. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
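The failure mode described above (reading a multi-byte value from a segment that holds fewer bytes) can be demonstrated with plain NIO. This is an illustration, not the Flink code path: `java.nio.ByteBuffer` signals the problem with `BufferUnderflowException`, where Flink's `MemorySegment` raises `IndexOutOfBoundsException` in the analogous situation.

```java
import java.nio.BufferUnderflowException;
import java.nio.ByteBuffer;

// Plain-NIO illustration (not Flink code) of a short read: asking a buffer
// for a 4-byte int when it holds fewer than 4 bytes fails.
public class ShortReadDemo {

    // Returns true if reading a 4-byte int from a buffer of the given
    // capacity throws.
    static boolean intReadFails(int capacityInBytes) {
        ByteBuffer buffer = ByteBuffer.allocate(capacityInBytes);
        try {
            buffer.getInt(); // needs 4 bytes
            return false;
        } catch (BufferUnderflowException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("1-byte buffer fails: " + intReadFails(1)); // true
        System.out.println("4-byte buffer fails: " + intReadFails(4)); // false
    }
}
```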
[GitHub] [flink] JingsongLi commented on issue #9099: [FLINK-13237][table-planner-blink] Add expression table api test to blink
JingsongLi commented on issue #9099: [FLINK-13237][table-planner-blink] Add expression table api test to blink URL: https://github.com/apache/flink/pull/9099#issuecomment-518151605 @wuchong I ignored this cases and created JIRA: https://issues.apache.org/jira/browse/FLINK-13580 I think it can be supported in 1.10. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Created] (FLINK-13581) BatchFineGrainedRecoveryITCase failed on Travis
Andrey Zagrebin created FLINK-13581: --- Summary: BatchFineGrainedRecoveryITCase failed on Travis Key: FLINK-13581 URL: https://issues.apache.org/jira/browse/FLINK-13581 Project: Flink Issue Type: Bug Components: Runtime / Coordination Affects Versions: 1.9.0 Reporter: Andrey Zagrebin Fix For: 1.9.0 [https://travis-ci.com/flink-ci/flink/jobs/221567908] -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Closed] (FLINK-13508) CommonTestUtils#waitUntilCondition() may attempt to sleep with negative time
[ https://issues.apache.org/jira/browse/FLINK-13508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary Yao closed FLINK-13508. > CommonTestUtils#waitUntilCondition() may attempt to sleep with negative time > > > Key: FLINK-13508 > URL: https://issues.apache.org/jira/browse/FLINK-13508 > Project: Flink > Issue Type: Bug > Components: Tests >Reporter: Gary Yao >Assignee: Gary Yao >Priority: Critical > Labels: pull-request-available > Fix For: 1.8.2, 1.9.0, 1.10.0 > > Time Spent: 1h > Remaining Estimate: 0h > > The test utility > {{CommonTestUtils#waitUntilCondition(SupplierWithException<Boolean, Exception>, Deadline, long)}} may attempt to call {{Thread.sleep(long)}} with > a negative argument. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Resolved] (FLINK-13508) CommonTestUtils#waitUntilCondition() may attempt to sleep with negative time
[ https://issues.apache.org/jira/browse/FLINK-13508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary Yao resolved FLINK-13508. -- Resolution: Fixed 1.8: a0d236fba7c6abdabb461aa504b1e088a3982c31 1.9: d609917d706e6928d6eee1535c9d12b90b6ae6f8 1.10: 1ad16bc252f1d3502a29ddb2081fdfdf3436cc55 > CommonTestUtils#waitUntilCondition() may attempt to sleep with negative time > > > Key: FLINK-13508 > URL: https://issues.apache.org/jira/browse/FLINK-13508 > Project: Flink > Issue Type: Bug > Components: Tests >Reporter: Gary Yao >Assignee: Gary Yao >Priority: Critical > Labels: pull-request-available > Fix For: 1.8.2, 1.9.0, 1.10.0 > > Time Spent: 1h > Remaining Estimate: 0h > > The test utility > {{CommonTestUtils#waitUntilCondition(SupplierWithException<Boolean, Exception>, Deadline, long)}} may attempt to call {{Thread.sleep(long)}} with > a negative argument. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Assigned] (FLINK-13581) BatchFineGrainedRecoveryITCase failed on Travis
[ https://issues.apache.org/jira/browse/FLINK-13581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Till Rohrmann reassigned FLINK-13581: - Assignee: Andrey Zagrebin > BatchFineGrainedRecoveryITCase failed on Travis > --- > > Key: FLINK-13581 > URL: https://issues.apache.org/jira/browse/FLINK-13581 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.9.0 >Reporter: Andrey Zagrebin >Assignee: Andrey Zagrebin >Priority: Blocker > Labels: test-stability > Fix For: 1.9.0 > > > [https://travis-ci.com/flink-ci/flink/jobs/221567908] -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (FLINK-13580) Add overload support for user defined function to blink-planner
[ https://issues.apache.org/jira/browse/FLINK-13580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899928#comment-16899928 ] Timo Walther commented on FLINK-13580: -- This issue should be solved after FLIP-37 part 2 has been implemented. > Add overload support for user defined function to blink-planner > --- > > Key: FLINK-13580 > URL: https://issues.apache.org/jira/browse/FLINK-13580 > Project: Flink > Issue Type: New Feature > Components: Table SQL / Planner >Reporter: Jingsong Lee >Priority: Major > Fix For: 1.10.0 > > > Currently, overloading is not supported in user-defined functions. Given the > following UDF > {code:java} > class Func21 extends ScalarFunction { > def eval(p: People): String = { > p.name > } > def eval(p: Student): String = { > "student#" + p.name > } > } > class People(val name: String) > class Student(name: String) extends People(name) > class GraduatedStudent(name: String) extends Student(name) > {code} > Queries such as the following will fail to compile with the error message "Found > multiple 'eval' methods which match the signature." > > {code:java} > val udf = new Func21 > val table = ... > table.select(udf(new GraduatedStudent("test"))) {code} > That's because overloading is not supported in user-defined functions currently. > I think it will make sense to support overloading following the Java language > specification in section > [15.12|https://docs.oracle.com/javase/specs/jls/se7/html/jls-15.html#jls-15.12]. > -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[GitHub] [flink] wuchong commented on issue #9099: [FLINK-13237][table-planner-blink] Add expression table api test to blink
wuchong commented on issue #9099: [FLINK-13237][table-planner-blink] Add expression table api test to blink URL: https://github.com/apache/flink/pull/9099#issuecomment-518152949 Thanks @JingsongLi . This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9099: [FLINK-13237][table-planner-blink] Add expression table api test to blink
flinkbot edited a comment on issue #9099: [FLINK-13237][table-planner-blink] Add expression table api test to blink URL: https://github.com/apache/flink/pull/9099#issuecomment-510762700 ## CI report: * fb347fe30a5e894e388837ed2de4f9b60513d7b1 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/118885023) * 48382540ba07e7096f2b1f1548c0703fdd5ec8a1 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/120638945) * e08e1be6e80933dbf7526088691b0dced7673025 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121761606) * 0ecc8a3e8ec55fb25bbd4464942158976b086a8c : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/121925576)
[jira] [Closed] (FLINK-13463) SQL VALUES might fail for Blink planner
[ https://issues.apache.org/jira/browse/FLINK-13463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Timo Walther closed FLINK-13463. Resolution: Fixed [FLINK-13463][table-planner-blink] Add test case for VALUES with char literal Fixed in 1.10.0: 80e81c7e34b48e1e0d8cf4f2e282307744c0dc2f Fixed in 1.9.0: ae0dba54746e7bedf72ed71c36c8a9f48b593744 [FLINK-13463][table-common] Relax legacy type info conversion for VARCHAR literals Fixed in 1.10.0: 63e9a167fba59be2addaedc74e5d235ec6739832 Fixed in 1.9.0: 2f4e5eab3ee983f76952a4867403eced7bdd32de > SQL VALUES might fail for Blink planner > --- > > Key: FLINK-13463 > URL: https://issues.apache.org/jira/browse/FLINK-13463 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Reporter: Timo Walther >Assignee: Timo Walther >Priority: Critical > Fix For: 1.9.0 > > > Executing the following statement in SQL Client of FLINK-13458: > {code} > SELECT name, COUNT(*) AS cnt FROM (VALUES ('Bob'), ('Alice'), ('Greg'), > ('Bob')) AS NameTable(name) GROUP BY name; > {code} > Leads to: > {code} > Exception in thread "main" org.apache.flink.table.client.SqlClientException: > Unexpected exception. This is a bug. Please consider filing an issue. > at org.apache.flink.table.client.SqlClient.main(SqlClient.java:206) > Caused by: org.apache.flink.table.api.TableException: Unsupported conversion > from data type 'VARCHAR(5) NOT NULL' (conversion class: java.lang.String) to > type information. Only data types that originated from type information fully > support a reverse conversion. 
> at > org.apache.flink.table.types.utils.LegacyTypeInfoDataTypeConverter.toLegacyTypeInfo(LegacyTypeInfoDataTypeConverter.java:242) > at > org.apache.flink.table.types.utils.TypeConversions.fromDataTypeToLegacyInfo(TypeConversions.java:49) > at > java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) > at > java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) > at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) > at > java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) > at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:545) > at > java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260) > at > java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:438) > at > org.apache.flink.table.types.utils.TypeConversions.fromDataTypeToLegacyInfo(TypeConversions.java:55) > at > org.apache.flink.table.api.TableSchema.getFieldTypes(TableSchema.java:129) > at > org.apache.flink.table.client.gateway.local.LocalExecutor.removeTimeAttributes(LocalExecutor.java:609) > at > org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryInternal(LocalExecutor.java:465) > at > org.apache.flink.table.client.gateway.local.LocalExecutor.executeQuery(LocalExecutor.java:316) > at > org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:469) > at > org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:291) > at java.util.Optional.ifPresent(Optional.java:159) > at org.apache.flink.table.client.cli.CliClient.open(CliClient.java:200) > at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:123) > at org.apache.flink.table.client.SqlClient.start(SqlClient.java:105) > at org.apache.flink.table.client.SqlClient.main(SqlClient.java:194) > {code} > A solution needs some investigation. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (FLINK-10392) Remove legacy mode
[ https://issues.apache.org/jira/browse/FLINK-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899933#comment-16899933 ] TisonKun commented on FLINK-10392: -- ping [~till.rohrmann] as a reminder. > Remove legacy mode > -- > > Key: FLINK-10392 > URL: https://issues.apache.org/jira/browse/FLINK-10392 > Project: Flink > Issue Type: Improvement > Components: Runtime / Coordination >Reporter: Till Rohrmann >Assignee: Till Rohrmann >Priority: Major > > This issue is the umbrella issue to remove the legacy mode code from Flink.
[GitHub] [flink] flinkbot edited a comment on issue #9285: [FLINK-13433][table-planner-blink] Do not fetch data from LookupableTableSource if the JoinKey in left side of LookupJoin contains null value
flinkbot edited a comment on issue #9285: [FLINK-13433][table-planner-blink] Do not fetch data from LookupableTableSource if the JoinKey in left side of LookupJoin contains null value. URL: https://github.com/apache/flink/pull/9285#issuecomment-516712727 ## CI report: * bb70e45a98e76de7f95ac31e893999683cb5bde8 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121359827) * 4f96a184d471836053a7e2b09cbd1583ebced727 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121512745) * a915ad9e9323b5c0f799beae32eba104b76b583f : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121587513) * 3e6a30848c721001c6bf0a514fb00b00c6f6e0ce : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121696226) * 5958000c4e08d3b4a5842467a9c56bdfeb468efa : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121720893) * 5d22079940ff5ccab81cd7090e57af8652b98ab0 : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/121916070) * a10620b3f6814599bcb14ac7ff8a30bed87de5c9 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121918212)
[jira] [Created] (FLINK-13582) Improve the implementation of LISTAGG in Blink planner to remove delimiter from state
Jing Zhang created FLINK-13582: -- Summary: Improve the implementation of LISTAGG in Blink planner to remove delimiter from state Key: FLINK-13582 URL: https://issues.apache.org/jira/browse/FLINK-13582 Project: Flink Issue Type: Bug Components: Table SQL / Planner Affects Versions: 1.10.0 Reporter: Jing Zhang The implementation of LISTAGG saves the delimiter as part of its state, which is unnecessary because the delimiter is a constant character.
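The improvement described in FLINK-13582 can be sketched outside of Flink: since the LISTAGG delimiter is a query-time constant, the accumulator only needs to hold the aggregated values, and the delimiter can be applied when the result is emitted instead of being serialized into state for every group. A minimal, hypothetical Java sketch of that idea (class and method names are assumptions, not Flink's actual AggregateFunction API):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a LISTAGG-style accumulator that keeps only the values in state.
// The constant delimiter is passed at emission time rather than stored,
// which shrinks the serialized state by one string per group.
public class ListAggSketch {
    private final List<String> values = new ArrayList<>();

    // Called once per input row for the group.
    public void accumulate(String value) {
        values.add(value);
    }

    // The delimiter is a constant of the query, so it can be supplied here
    // instead of being read back from state.
    public String getValue(String delimiter) {
        return String.join(delimiter, values);
    }

    public static void main(String[] args) {
        ListAggSketch agg = new ListAggSketch();
        agg.accumulate("a");
        agg.accumulate("b");
        agg.accumulate("c");
        System.out.println(agg.getValue(",")); // prints "a,b,c"
    }
}
```

The design point is simply that state should contain only what varies per group; anything constant across the query can live in the (generated) operator code instead.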
[jira] [Updated] (FLINK-13159) java.lang.ClassNotFoundException when restore job
[ https://issues.apache.org/jira/browse/FLINK-13159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] kring updated FLINK-13159: -- Attachment: image-2019-08-05-17-32-44-988.png > java.lang.ClassNotFoundException when restore job > - > > Key: FLINK-13159 > URL: https://issues.apache.org/jira/browse/FLINK-13159 > Project: Flink > Issue Type: Bug > Components: API / Type Serialization System >Reporter: kring >Priority: Critical > Attachments: image-2019-08-05-17-32-44-988.png > > > {code:java} > java.lang.Exception: Exception while creating StreamOperatorStateContext. > at > org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:195) > at > org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:250) > at > org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:738) > at > org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:289) > at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711) > at java.lang.Thread.run(Thread.java:748) > Caused by: org.apache.flink.util.FlinkException: Could not restore keyed > state backend for WindowOperator_b398b3dd4c544ddf2d47a0cc47d332f4_(1/6) from > any of the 1 prov > ided restore options. > at > org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:135) > at > org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.keyedStatedBackend(StreamTaskStateInitializerImpl.java:307) > at > org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:135) > ... 
5 common frames omitted > Caused by: org.apache.flink.runtime.state.BackendBuildingException: Failed > when trying to restore heap backend > at > org.apache.flink.runtime.state.heap.HeapKeyedStateBackendBuilder.build(HeapKeyedStateBackendBuilder.java:130) > at > org.apache.flink.runtime.state.filesystem.FsStateBackend.createKeyedStateBackend(FsStateBackend.java:489) > at > org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.lambda$keyedStatedBackend$1(StreamTaskStateInitializerImpl.java:291) > at > org.apache.flink.streaming.api.operators.BackendRestorerProcedure.attemptCreateAndRestore(BackendRestorerProcedure.java:142) > at > org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:121) > ... 7 common frames omitted > Caused by: java.lang.RuntimeException: Cannot instantiate class. > at > org.apache.flink.api.java.typeutils.runtime.PojoSerializer.deserialize(PojoSerializer.java:384) > at > org.apache.flink.runtime.state.heap.StateTableByKeyGroupReaders.lambda$createV2PlusReader$0(StateTableByKeyGroupReaders.java:74) > at > org.apache.flink.runtime.state.KeyGroupPartitioner$PartitioningResultKeyGroupReader.readMappingsInKeyGroup(KeyGroupPartitioner.java:297) > at > org.apache.flink.runtime.state.heap.HeapRestoreOperation.readKeyGroupStateData(HeapRestoreOperation.java:290) > at > org.apache.flink.runtime.state.heap.HeapRestoreOperation.readStateHandleStateData(HeapRestoreOperation.java:251) > at > org.apache.flink.runtime.state.heap.HeapRestoreOperation.restore(HeapRestoreOperation.java:153) > at > org.apache.flink.runtime.state.heap.HeapKeyedStateBackendBuilder.build(HeapKeyedStateBackendBuilder.java:127) > ... 11 common frames omitted > Caused by: java.lang.ClassNotFoundException: xxx > at java.lang.Class.forName0(Native Method) > at java.lang.Class.forName(Class.java:348) > at > org.apache.flink.api.java.typeutils.runtime.PojoSerializer.deserialize(PojoSerializer.java:382) > ... 
17 common frames omitted > {code} > A strange problem with Flink: after a task has been running properly for a > period of time, if any exception (such as an ask timeout or an ES request > timeout) is thrown, the restarted task reports the above error (xxx is a > business model class), and ten subsequent retries do not succeed; only after > the task is resubmitted does it run normally again. In addition, there are > three other tasks running at the same time, none of which has this problem. > My Flink version is 1.8.0.
[jira] [Commented] (FLINK-13578) Blink throws exception when using Types.INTERVAL_MILLIS in TableSource
[ https://issues.apache.org/jira/browse/FLINK-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899944#comment-16899944 ] Jingsong Lee commented on FLINK-13578: -- This should be fixed in [https://github.com/apache/flink/pull/9099/commits/e74d49cb1fd3a23fe6025db40c4a85fb13d8ce6f] > Blink throws exception when using Types.INTERVAL_MILLIS in TableSource > -- > > Key: FLINK-13578 > URL: https://issues.apache.org/jira/browse/FLINK-13578 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.9.0 >Reporter: Wei Zhong >Priority: Critical > > Running this program will throw a TableException: > {code:java} > object Tests { > class MyTableSource extends InputFormatTableSource[java.lang.Long] { > val data = new java.util.ArrayList[java.lang.Long]() > data.add(1L) > data.add(2L) > data.add(3L) > val dataType = Types.INTERVAL_MILLIS() > val inputFormat = new CollectionInputFormat[java.lang.Long]( > data, dataType.createSerializer(new ExecutionConfig)) > override def getInputFormat: InputFormat[java.lang.Long, _ <: InputSplit] > = inputFormat > override def getTableSchema: TableSchema = > TableSchema.fromTypeInfo(dataType) > override def getReturnType: TypeInformation[java.lang.Long] = dataType > } > def main(args: Array[String]): Unit = { > val tenv = TableEnvironmentImpl.create( > > EnvironmentSettings.newInstance().inBatchMode().useBlinkPlanner().build()) > val table = tenv.fromTableSource(new MyTableSource) > tenv.registerTableSink("sink", Array("f0"), > Array(Types.INTERVAL_MILLIS()), new CsvTableSink("/tmp/results")) > table.select("f0").insertInto("sink") > tenv.execute("test") > } > } > {code} > The TableException detail: > {code:java} > Exception in thread "main" org.apache.flink.table.api.TableException: > Unsupported conversion from data type 'INTERVAL SECOND(3)' (conversion class: > java.time.Duration) to type information. 
Only data types that originated from > type information fully support a reverse conversion. > at > org.apache.flink.table.types.utils.LegacyTypeInfoDataTypeConverter.toLegacyTypeInfo(LegacyTypeInfoDataTypeConverter.java:242) > at > org.apache.flink.table.types.utils.TypeConversions.fromDataTypeToLegacyInfo(TypeConversions.java:49) > at > org.apache.flink.table.runtime.types.TypeInfoDataTypeConverter.fromDataTypeToTypeInfo(TypeInfoDataTypeConverter.java:145) > at > org.apache.flink.table.runtime.types.TypeInfoLogicalTypeConverter.fromLogicalTypeToTypeInfo(TypeInfoLogicalTypeConverter.java:58) > at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) > at > java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) > at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) > at > java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) > at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:545) > at > java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260) > at java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:438) > at > org.apache.flink.table.runtime.typeutils.BaseRowTypeInfo.(BaseRowTypeInfo.java:64) > at > org.apache.flink.table.runtime.typeutils.BaseRowTypeInfo.of(BaseRowTypeInfo.java:210) > at > org.apache.flink.table.planner.plan.utils.ScanUtil$.convertToInternalRow(ScanUtil.scala:126) > at > org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlanInternal(BatchExecTableSourceScan.scala:112) > at > org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlanInternal(BatchExecTableSourceScan.scala:48) > at > org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:54) > at > org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlan(BatchExecTableSourceScan.scala:48) > at > 
org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecSink.translateToTransformation(BatchExecSink.scala:127) > at > org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecSink.translateToPlanInternal(BatchExecSink.scala:92) > at > org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecSink.translateToPlanInternal(BatchExecSink.scala:50) > at > org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:54) > at > org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecSink.translateToPlan(BatchExecSink.scala:50) > at > org.apache.flink.table.planner.delegation.BatchPlanner$$anonfun$translateToPlan$1.apply(BatchPlanner.scala:70) > at > org.apache.flink.table.planner.dele