[jira] [Commented] (FLINK-16976) Update chinese documentation for ListCheckpointed deprecation

2020-04-17 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17085592#comment-17085592
 ] 

Aljoscha Krettek commented on FLINK-16976:
--

also cc [~jark]

> Update chinese documentation for ListCheckpointed deprecation
> -
>
> Key: FLINK-16976
> URL: https://issues.apache.org/jira/browse/FLINK-16976
> Project: Flink
>  Issue Type: Bug
>  Components: chinese-translation, Documentation
>Reporter: Aljoscha Krettek
>Priority: Major
> Fix For: 1.11.0
>
>
> The change for the english documentation is in 
> https://github.com/apache/flink/commit/10aadfc6906a1629f7e60eacf087e351ba40d517
> The original Jira issue is FLINK-6258.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17110) Make StreamExecutionEnvironment#configure also affects StreamExecutionEnvironment#configuration

2020-04-16 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17084697#comment-17084697
 ] 

Aljoscha Krettek commented on FLINK-17110:
--

I think it should be OK to apply the Configuration. What do you think, [~kkl0u]?

> Make StreamExecutionEnvironment#configure also affects 
> StreamExecutionEnvironment#configuration
> ---
>
> Key: FLINK-17110
> URL: https://issues.apache.org/jira/browse/FLINK-17110
> Project: Flink
>  Issue Type: Improvement
>Reporter: Wenlong Lyu
>Priority: Major
>
> If StreamExecutionEnvironment#configure could also affect the configuration in 
> StreamExecutionEnvironment, we could not only add library jars or classpaths 
> dynamically according to the job we want to run (which is quite important for 
> a platform product), but also tune runtime configuration programmatically.
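
For context, a hedged sketch of the call in question and of what the proposal 
would change (the option value is illustrative):

{code:java}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ConfigureExample {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        Configuration conf = new Configuration();
        // Illustrative platform setting: attach an extra user-code jar.
        conf.setString("pipeline.classpaths", "file:///opt/platform/udfs.jar");

        // Today configure() only applies recognized options to the environment's
        // own fields (e.g. checkpointing settings); the proposal is to also merge
        // `conf` into the environment's internal configuration.
        env.configure(conf, Thread.currentThread().getContextClassLoader());
    }
}
{code}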



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15557) Cannot connect to Azure Event Hub/Kafka since Jan 5th 2020. Kafka version issue

2020-04-16 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17084695#comment-17084695
 ] 

Aljoscha Krettek commented on FLINK-15557:
--

[~cmichels] do you know if updating to Kafka 2.2.2 would also work for your 
case? I doubt it, but it might be worth a try, since it's very hard (impossible) 
to update past 2.2.x with the current design of our exactly-once sink.

> Cannot connect to Azure Event Hub/Kafka since Jan 5th 2020. Kafka version 
> issue
> ---
>
> Key: FLINK-15557
> URL: https://issues.apache.org/jira/browse/FLINK-15557
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
> Environment: Java 8, Flink 1.9.1, Azure Event Hub
>Reporter: Chris
>Priority: Blocker
>
> As of Jan 5th we can no longer consume messages from Azure Event Hub using 
> Flink 1.9.1. I was able to fix the issue on several of our Spring Boot 
> projects by upgrading to Spring Boot 2.2.2, Kafka 2.3.1, and kafka-clients 
> 2.3.1.
>  
> 2020-01-10 19:36:30,364 WARN org.apache.kafka.clients.NetworkClient - 
> [Consumer clientId=consumer-1, groupId=] Bootstrap broker 
> *.servicebus.windows.net:9093 (id: -1 rack: null) disconnected



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16057) Performance regression in ContinuousFileReaderOperator

2020-04-16 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17084694#comment-17084694
 ] 

Aljoscha Krettek commented on FLINK-16057:
--

[~pnowojski] [~roman_khachatryan] Is there any more work happening on this one?

> Performance regression in ContinuousFileReaderOperator
> --
>
> Key: FLINK-16057
> URL: https://issues.apache.org/jira/browse/FLINK-16057
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.11.0
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> After switching CFRO to a single-threaded execution model, the performance 
> regression was expected to be about 15-20% (benchmarked in November).
> But after merging to master it turned out to be about 50%.
>  
> One reason is that the chaining strategy isn't set by default in the CFRO factory.
> Without that, even reading and outputting all records of a split in a single 
> mail action doesn't reverse the regression (only about half of it).
> However, setting the strategy AND enabling batching fixes the regression 
> (starting from batch size 6).
> Though batching can't be used in practice because it can significantly delay 
> checkpointing.
>  
> Another approach would be to process one record and then repeat until 
> defaultMailboxActionAvailable OR haveNewMail.
> This reverses the regression and even improves performance by about 50% 
> compared to the old version.
>  
> The final solution could also be FLIP-27.
>  
> Other things tried (didn't help):
>  * CFRO rework without subsequent commits (removing checkpoint lock)
>  * different batch sizes, including the whole split, without chaining 
> strategy fixed - partial improvement only
>  * disabling close
>  * disabling checkpointing
>  * disabling output (serialization)
>  * using LinkedList instead of PriorityQueue
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15362) Bump Kafka client version to 2.5.0 for universal Kafka connector

2020-04-16 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17084693#comment-17084693
 ] 

Aljoscha Krettek commented on FLINK-15362:
--

By the way, I think we still can't upgrade if the changes that prevent us from 
upgrading past 2.2.x persist.

> Bump Kafka client version to 2.5.0 for universal Kafka connector
> 
>
> Key: FLINK-15362
> URL: https://issues.apache.org/jira/browse/FLINK-15362
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: vinoyang
>Assignee: Aljoscha Krettek
>Priority: Major
>
> Kafka 2.4 has been released recently. There are many features involved in 
> this version. More details: 
> https://blogs.apache.org/kafka/entry/what-s-new-in-apache1
> IMO, it would be better to bump the Kafka client version to 2.4.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15362) Bump Kafka client version to 2.5.0 for universal Kafka connector

2020-04-16 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-15362:
-
Summary: Bump Kafka client version to 2.5.0 for universal Kafka connector  
(was: Bump Kafka client version to 2.4.0 for universal Kafka connector)

> Bump Kafka client version to 2.5.0 for universal Kafka connector
> 
>
> Key: FLINK-15362
> URL: https://issues.apache.org/jira/browse/FLINK-15362
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: vinoyang
>Assignee: Aljoscha Krettek
>Priority: Major
>
> Kafka 2.4 has been released recently. There are many features involved in 
> this version. More details: 
> https://blogs.apache.org/kafka/entry/what-s-new-in-apache1
> IMO, it would be better to bump the Kafka client version to 2.4.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15362) Bump Kafka client version to 2.4.0 for universal Kafka connector

2020-04-16 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17084692#comment-17084692
 ] 

Aljoscha Krettek commented on FLINK-15362:
--

Thanks for the heads up!

> Bump Kafka client version to 2.4.0 for universal Kafka connector
> 
>
> Key: FLINK-15362
> URL: https://issues.apache.org/jira/browse/FLINK-15362
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: vinoyang
>Assignee: Aljoscha Krettek
>Priority: Major
>
> Kafka 2.4 has been released recently. There are many features involved in 
> this version. More details: 
> https://blogs.apache.org/kafka/entry/what-s-new-in-apache1
> IMO, it would be better to bump the Kafka client version to 2.4.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16662) Blink Planner failed to generate JobGraph for POJO DataStream converting to Table (Cannot determine simple type name)

2020-04-16 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17084690#comment-17084690
 ] 

Aljoscha Krettek commented on FLINK-16662:
--

What's the proposed fix?

> Blink Planner failed to generate JobGraph for POJO DataStream converting to 
> Table (Cannot determine simple type name)
> -
>
> Key: FLINK-16662
> URL: https://issues.apache.org/jira/browse/FLINK-16662
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission, Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: chenxyz
>Assignee: LionelZ
>Priority: Blocker
> Fix For: 1.10.1, 1.11.0
>
>
> When using the Blink Planner to convert a POJO DataStream to a Table, Blink 
> will generate and compile the SourceConversion$1 code. If the job is 
> submitted to Flink as a Jar, the UserCodeClassLoader is not used when 
> generating the JobGraph, so the ClassLoader (AppClassLoader) of the compiled 
> code cannot load the POJO class in the Jar package, and the following error 
> will be reported:
>  
> {code:java}
> Caused by: org.codehaus.commons.compiler.CompileException: Line 27, Column 
> 174: Cannot determine simple type name "net"Caused by: 
> org.codehaus.commons.compiler.CompileException: Line 27, Column 174: Cannot 
> determine simple type name "net" at 
> org.codehaus.janino.UnitCompiler.compileError(UnitCompiler.java:12124) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6746) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6507) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getType2(UnitCompiler.java:6486) at 
> org.codehaus.janino.UnitCompiler.access$13800(UnitCompiler.java:215) at 
> org.codehaus.janino.UnitCompiler$21$1.visitReferenceType(UnitCompiler.java:6394)
>  at 
> org.codehaus.janino.UnitCompiler$21$1.visitReferenceType(UnitCompiler.java:6389)
>  at org.codehaus.janino.Java$ReferenceType.accept(Java.java:3917) at 
> org.codehaus.janino.UnitCompiler$21.visitType(UnitCompiler.java:6389) at 
> org.codehaus.janino.UnitCompiler$21.visitType(UnitCompiler.java:6382) at 
> org.codehaus.janino.Java$ReferenceType.accept(Java.java:3916) at 
> org.codehaus.janino.UnitCompiler.getType(UnitCompiler.java:6382) at 
> org.codehaus.janino.UnitCompiler.getType2(UnitCompiler.java:7009) at 
> org.codehaus.janino.UnitCompiler.access$15200(UnitCompiler.java:215) at 
> org.codehaus.janino.UnitCompiler$21$2.visitCast(UnitCompiler.java:6425) at 
> org.codehaus.janino.UnitCompiler$21$2.visitCast(UnitCompiler.java:6403) at 
> org.codehaus.janino.Java$Cast.accept(Java.java:4887) at 
> org.codehaus.janino.UnitCompiler$21.visitRvalue(UnitCompiler.java:6403) at 
> org.codehaus.janino.UnitCompiler$21.visitRvalue(UnitCompiler.java:6382) at 
> org.codehaus.janino.Java$Rvalue.accept(Java.java:4105) at 
> org.codehaus.janino.UnitCompiler.getType(UnitCompiler.java:6382) at 
> org.codehaus.janino.UnitCompiler.findMostSpecificIInvocable(UnitCompiler.java:9150)
>  at org.codehaus.janino.UnitCompiler.findIMethod(UnitCompiler.java:9036) at 
> org.codehaus.janino.UnitCompiler.findIMethod(UnitCompiler.java:8938) at 
> org.codehaus.janino.UnitCompiler.compileGet2(UnitCompiler.java:5060) at 
> org.codehaus.janino.UnitCompiler.access$9100(UnitCompiler.java:215) at 
> org.codehaus.janino.UnitCompiler$16.visitMethodInvocation(UnitCompiler.java:4421)
>  at 
> org.codehaus.janino.UnitCompiler$16.visitMethodInvocation(UnitCompiler.java:4394)
>  at org.codehaus.janino.Java$MethodInvocation.accept(Java.java:5062) at 
> org.codehaus.janino.UnitCompiler.compileGet(UnitCompiler.java:4394) at 
> org.codehaus.janino.UnitCompiler.compileGetValue(UnitCompiler.java:5575) at 
> org.codehaus.janino.UnitCompiler.compileGet2(UnitCompiler.java:5019) at 
> org.codehaus.janino.UnitCompiler.access$8600(UnitCompiler.java:215) at 
> org.codehaus.janino.UnitCompiler$16.visitCast(UnitCompiler.java:4416) at 
> org.codehaus.janino.UnitCompiler$16.visitCast(UnitCompiler.java:4394) at 
> org.codehaus.janino.Java$Cast.accept(Java.java:4887) at 
> 
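
Regarding the "proposed fix" question above: one plausible direction, sketched 
here with Janino (the variable and class names are hypothetical, not the actual 
fix), is to make sure generated code is cooked with the user-code classloader 
as parent so POJO classes from the user jar can be resolved:

{code:java}
import org.codehaus.janino.SimpleCompiler;

public class UserCodeCompile {
    // Hedged sketch: Janino resolves classes referenced by generated code
    // through the compiler's parent classloader. Compiling with the
    // AppClassLoader cannot see POJO classes that only exist in the user jar,
    // hence the "Cannot determine simple type name" error above.
    public static Class<?> compileWithUserClassLoader(
            String generatedSourceCode,      // e.g. the SourceConversion$1 source
            String generatedClassName,       // e.g. "SourceConversion$1"
            ClassLoader userCodeClassLoader) throws Exception {
        SimpleCompiler compiler = new SimpleCompiler();
        compiler.setParentClassLoader(userCodeClassLoader); // not the AppClassLoader
        compiler.cook(generatedSourceCode);
        return compiler.getClassLoader().loadClass(generatedClassName);
    }
}
{code}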

[jira] [Commented] (FLINK-17058) Adding TimeoutTrigger support nested triggers

2020-04-16 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17084674#comment-17084674
 ] 

Aljoscha Krettek commented on FLINK-17058:
--

The new version you posted doesn't fire itself on {{onProcessingTime()}} but 
only defers to the nested trigger, right?

> Adding TimeoutTrigger support nested triggers
> -
>
> Key: FLINK-17058
> URL: https://issues.apache.org/jira/browse/FLINK-17058
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Roey Shem Tov
>Priority: Minor
> Attachments: ProcessingTimeoutTrigger.java, 
> ProcessingTimeoutTrigger.java
>
>
> Hello,
> first Jira ticket that I'm opening here, so if I made any mistakes in how to 
> do this, please correct me.
> My suggestion is to add a TimeoutTrigger that wraps a nested trigger (similar 
> to how the PurgingTrigger does).
> The TimeoutTrigger would support processing time/event time and have two 
> timeout modes:
>  # Const timeout - when the first element of the window arrives, it opens a 
> timeout of X millis - after that the window will be evaluated.
>  # Continual timeout - each arriving record extends the timeout for the 
> evaluation of the window.
>  
> I found this very useful in our use of Flink, and I would like to work on it 
> (if possible).
> What do you think?
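
For illustration, a minimal sketch of what such a wrapping trigger could look 
like. This is not the attached ProcessingTimeoutTrigger.java; the class and 
field names are hypothetical, it handles the processing-time timeout only, and 
the state-namespace corner cases raised later in this thread are not addressed:

{code:java}
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.Window;

// Hypothetical sketch: fire the window after a processing-time timeout while
// delegating normal decisions to a nested trigger (like PurgingTrigger wraps one).
public class TimeoutTrigger<T, W extends Window> extends Trigger<T, W> {

    private final Trigger<T, W> nestedTrigger;
    private final long timeoutMs;
    private final boolean resetTimerOnNewRecord; // the "continual" mode

    private final ValueStateDescriptor<Long> timerStateDesc =
            new ValueStateDescriptor<>("timeout-timer", Long.class);

    public TimeoutTrigger(Trigger<T, W> nestedTrigger, long timeoutMs,
                          boolean resetTimerOnNewRecord) {
        this.nestedTrigger = nestedTrigger;
        this.timeoutMs = timeoutMs;
        this.resetTimerOnNewRecord = resetTimerOnNewRecord;
    }

    @Override
    public TriggerResult onElement(T element, long timestamp, W window,
                                   TriggerContext ctx) throws Exception {
        ValueState<Long> timerState = ctx.getPartitionedState(timerStateDesc);
        Long registeredTimer = timerState.value();
        if (registeredTimer == null || resetTimerOnNewRecord) {
            if (registeredTimer != null) {
                ctx.deleteProcessingTimeTimer(registeredTimer);
            }
            long nextTimer = ctx.getCurrentProcessingTime() + timeoutMs;
            ctx.registerProcessingTimeTimer(nextTimer);
            timerState.update(nextTimer);
        }
        return nestedTrigger.onElement(element, timestamp, window, ctx);
    }

    @Override
    public TriggerResult onProcessingTime(long time, W window, TriggerContext ctx)
            throws Exception {
        // The timeout fired: evaluate the window. Note the caveat raised in the
        // discussion: the nested trigger is not consulted here.
        return TriggerResult.FIRE;
    }

    @Override
    public TriggerResult onEventTime(long time, W window, TriggerContext ctx)
            throws Exception {
        return nestedTrigger.onEventTime(time, window, ctx);
    }

    @Override
    public void clear(W window, TriggerContext ctx) throws Exception {
        ValueState<Long> timerState = ctx.getPartitionedState(timerStateDesc);
        Long registeredTimer = timerState.value();
        if (registeredTimer != null) {
            ctx.deleteProcessingTimeTimer(registeredTimer);
            timerState.clear();
        }
        nestedTrigger.clear(window, ctx);
    }
}
{code}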



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17074) Deprecate DataStream.keyBy() that use tuple/expression keys

2020-04-16 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17084672#comment-17084672
 ] 

Aljoscha Krettek commented on FLINK-17074:
--

I'd say this is only about the DataStream API; we don't plan to put more work 
into the DataSet API, so we can leave that as is.

We might consider opening another issue for deprecating similar methods on 
{{DataStream}}/{{KeyedStream}}: the aggregation methods such as {{min()}}, 
{{max()}}, {{sum()}}, and so on.

> Deprecate DataStream.keyBy() that use tuple/expression keys
> ---
>
> Key: FLINK-17074
> URL: https://issues.apache.org/jira/browse/FLINK-17074
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Aljoscha Krettek
>Assignee: Etienne Chauchot
>Priority: Major
>  Labels: starter
> Fix For: 1.11.0
>
>
> Currently you can specify either a {{KeySelector}}, tuple positions, or 
> expression keys. I think {{KeySelectors}} are strictly superior and, with 
> lambdas (or function references), quite easy to use. Tuple/expression keys 
> use reflection underneath to do the field accesses, so performance is 
> strictly worse. Also, when using a {{KeySelector}} you will have a meaningful 
> key type {{KEY}} in your operations, while for tuple/expression keys the key 
> type is simply {{Tuple}}.
> Tuple/expression keys were introduced before Java got support for lambdas in 
> Java 8 and before we added the Table API. Nowadays, using a lambda is little 
> more typing than using an expression key but is (possibly) faster and more 
> type safe. The Table API should be used for these more 
> expression-based/relational use cases.
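
To illustrate the difference, a small sketch ({{Event}} is a hypothetical POJO, 
not a class from the issue):

{code:java}
import org.apache.flink.api.java.tuple.Tuple;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.KeyedStream;

public class KeyByStyles {
    // Hypothetical POJO with a userId field.
    public static class Event {
        public String userId;
        public String getUserId() { return userId; }
    }

    public static void compare(DataStream<Event> events) {
        // Expression key: the field is resolved via reflection, and the key
        // type is just Tuple.
        KeyedStream<Event, Tuple> byExpression = events.keyBy("userId");

        // KeySelector via method reference: no reflective field access, and
        // the key type is a meaningful String.
        KeyedStream<Event, String> bySelector = events.keyBy(Event::getUserId);
    }
}
{code}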



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-17136) Rename toplevel DataSet/DataStream section titles

2020-04-15 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-17136.

Fix Version/s: 1.11.0
   Resolution: Done

master: ffb9878ca457c94af6781ba88b2e5a8c3d597f3a

> Rename toplevel DataSet/DataStream section titles
> -
>
> Key: FLINK-17136
> URL: https://issues.apache.org/jira/browse/FLINK-17136
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> According to FLIP-42:
>  - Streaming (DataStream API) -> DataStream API
>  - Batch (DataSet API) -> DataSet API



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-3344) EventTimeWindowCheckpointingITCase.testPreAggregatedTumblingTimeWindow

2020-04-14 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-3344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-3344.
---
Resolution: Cannot Reproduce

Closing for now since it's likely caused by FLINK-16770

> EventTimeWindowCheckpointingITCase.testPreAggregatedTumblingTimeWindow
> --
>
> Key: FLINK-3344
> URL: https://issues.apache.org/jira/browse/FLINK-3344
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Tests
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: test-stability
>
> https://travis-ci.org/apache/flink/jobs/107198388
> https://travis-ci.org/mjsax/flink/jobs/107198383
> {noformat}
> Maven produced no output for 300 seconds.
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17135) PythonCalcSplitRuleTest.testPandasFunctionMixedWithGeneralPythonFunction failed

2020-04-14 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083201#comment-17083201
 ] 

Aljoscha Krettek commented on FLINK-17135:
--

Another instance: 
https://dev.azure.com/aljoschakrettek/Flink/_build/results?buildId=93&view=logs&j=a1590513-d0ea-59c3-3c7b-aad756c48f25&t=912be268-3995-5931-14ba-f3d52302cb3c&l=4792

> PythonCalcSplitRuleTest.testPandasFunctionMixedWithGeneralPythonFunction 
> failed
> ---
>
> Key: FLINK-17135
> URL: https://issues.apache.org/jira/browse/FLINK-17135
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Tests
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Assignee: Dian Fu
>Priority: Major
>  Labels: test-stability
> Fix For: 1.11.0
>
>
> Shared by Chesnay on 
> https://issues.apache.org/jira/browse/FLINK-17093?focusedCommentId=17083055&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17083055:
> PythonCalcSplitRuleTest.testPandasFunctionMixedWithGeneralPythonFunction 
> failed on master:
> {code:java}
>  [INFO] Running 
> org.apache.flink.table.planner.plan.rules.logical.PythonCalcSplitRuleTest
>  [ERROR] Tests run: 19, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 0.441 s <<< FAILURE! - in 
> org.apache.flink.table.planner.plan.rules.logical.PythonCalcSplitRuleTest
>  [ERROR] 
> testPandasFunctionMixedWithGeneralPythonFunction(org.apache.flink.table.planner.plan.rules.logical.PythonCalcSplitRuleTest)
>  Time elapsed: 0.032 s <<< FAILURE!
>  java.lang.AssertionError: 
>  type mismatch:
>  type1:
>  INTEGER NOT NULL
>  type2:
>  INTEGER NOT NULL
>  at org.apache.calcite.util.Litmus$1.fail(Litmus.java:31)
>  at org.apache.calcite.plan.RelOptUtil.eq(RelOptUtil.java:2188)
>  at 
> org.apache.calcite.rex.RexProgramBuilder$RegisterInputShuttle.visitInputRef(RexProgramBuilder.java:948)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17136) Rename toplevel DataSet/DataStream section titles

2020-04-14 Thread Aljoscha Krettek (Jira)
Aljoscha Krettek created FLINK-17136:


 Summary: Rename toplevel DataSet/DataStream section titles
 Key: FLINK-17136
 URL: https://issues.apache.org/jira/browse/FLINK-17136
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation
Reporter: Aljoscha Krettek


According to FLIP-42:
 - Streaming (DataStream API) -> DataStream API
 - Batch (DataSet API) -> DataSet API



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-17136) Rename toplevel DataSet/DataStream section titles

2020-04-14 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek reassigned FLINK-17136:


Assignee: Aljoscha Krettek

> Rename toplevel DataSet/DataStream section titles
> -
>
> Key: FLINK-17136
> URL: https://issues.apache.org/jira/browse/FLINK-17136
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>
> According to FLIP-42:
>  - Streaming (DataStream API) -> DataStream API
>  - Batch (DataSet API) -> DataSet API



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-17134) Wrong logging information in Kafka010PartitionDiscoverer#L80

2020-04-14 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-17134.

Fix Version/s: 1.11.0
   Resolution: Fixed

master: 485ac221a3c7f5a7746323219d9eaafa27c3391d

> Wrong logging information in Kafka010PartitionDiscoverer#L80
> 
>
> Key: FLINK-17134
> URL: https://issues.apache.org/jira/browse/FLINK-17134
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0
>Reporter: Jiayi Liao
>Assignee: Jiayi Liao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kafka-0.10/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka010PartitionDiscoverer.java#L80
> {code:java}
> throw new RuntimeException("Could not fetch partitions for %s. Make sure that 
> the topic exists.".format(topic));
> {code} 
> is equivalent to {{String.format(topic)}} (the message string is ignored and 
> {{topic}} becomes the format string), so it should be replaced with 
> {code:java}
> throw new RuntimeException(String.format("Could not fetch partitions for %s. 
> Make sure that the topic exists.", topic));
> {code}
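
To see why the original line misbehaves, a small self-contained demo 
({{format}} is a static method on {{String}}, so calling it on the message 
string silently discards that string; the topic value is illustrative):

{code:java}
public class FormatBugDemo {
    public static void main(String[] args) {
        String topic = "my-topic"; // illustrative value

        // The instance call resolves to the static String.format(format, args...):
        // the receiver string is ignored and `topic` becomes the format string.
        String broken = "Could not fetch partitions for %s.".format(topic);
        System.out.println(broken); // prints: my-topic

        String fixed = String.format("Could not fetch partitions for %s.", topic);
        System.out.println(fixed);  // prints: Could not fetch partitions for my-topic.
    }
}
{code}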



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-17134) Wrong logging information in Kafka010PartitionDiscoverer#L80

2020-04-14 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek reassigned FLINK-17134:


Assignee: Jiayi Liao

> Wrong logging information in Kafka010PartitionDiscoverer#L80
> 
>
> Key: FLINK-17134
> URL: https://issues.apache.org/jira/browse/FLINK-17134
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0
>Reporter: Jiayi Liao
>Assignee: Jiayi Liao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kafka-0.10/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka010PartitionDiscoverer.java#L80
> {code:java}
> throw new RuntimeException("Could not fetch partitions for %s. Make sure that 
> the topic exists.".format(topic));
> {code} 
> is equivalent to {{String.format(topic)}} (the message string is ignored and 
> {{topic}} becomes the format string), so it should be replaced with 
> {code:java}
> throw new RuntimeException(String.format("Could not fetch partitions for %s. 
> Make sure that the topic exists.", topic));
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17058) Adding TimeoutTrigger support nested triggers

2020-04-14 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083045#comment-17083045
 ] 

Aljoscha Krettek commented on FLINK-17058:
--

Sorry, we had public holidays on Friday and Monday in Germany... 

Thanks for the code! I think there are some very tricky corner cases, or things 
that we learned over time, that could be problematic:
 - the namespace of state is not changed when calling the nested trigger, i.e. 
if the nested trigger sets state or timers, this could interfere with the 
timeout trigger, because we pass the context straight through
 - {{onProcessingTime()}} doesn't invoke the nested trigger, which might want 
to do a purge or other things
 - same for {{onEventTime()}}, where this wrapping trigger would not 
react/forward at all

I don't know if we can find a good solution that would be useful for many Flink 
users with the current restrictions of the APIs. By the way, there was also 
FLINK-4407 which we eventually dropped because it was considered too much 
effort for the potential benefit.

> Adding TimeoutTrigger support nested triggers
> -
>
> Key: FLINK-17058
> URL: https://issues.apache.org/jira/browse/FLINK-17058
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Roey Shem Tov
>Priority: Minor
> Attachments: ProcessingTimeoutTrigger.java
>
>
> Hello,
> first Jira ticket that I'm opening here, so if I made any mistakes in how to 
> do this, please correct me.
> My suggestion is to add a TimeoutTrigger that wraps a nested trigger (similar 
> to how the PurgingTrigger does).
> The TimeoutTrigger would support processing time/event time and have two 
> timeout modes:
>  # Const timeout - when the first element of the window arrives, it opens a 
> timeout of X millis - after that the window will be evaluated.
>  # Continual timeout - each arriving record extends the timeout for the 
> evaluation of the window.
>  
> I found this very useful in our use of Flink, and I would like to work on it 
> (if possible).
> What do you think?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17083) Restrict users of accumulators not to return null in Accumulator#getLocalValue

2020-04-14 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083040#comment-17083040
 ] 

Aljoscha Krettek commented on FLINK-17083:
--

Yes, I think that restriction makes sense.

> Restrict users of accumulators not to return null in Accumulator#getLocalValue
> --
>
> Key: FLINK-17083
> URL: https://issues.apache.org/jira/browse/FLINK-17083
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Core
>Affects Versions: 1.11.0
>Reporter: Caizhi Weng
>Priority: Major
>
> [This discussion|https://issues.apache.org/jira/browse/FLINK-13880] raises the 
> problem that, if a user returns a null value from 
> {{Accumulator#getLocalValue}}, they will be notified that the {{failureCause}} 
> should not be null when fetching the accumulator result, which seems really 
> weird.
> The problem is that we're not explicitly restricting users from returning 
> null values in {{Accumulator#getLocalValue}}.
> [~aljoscha] is it legal to return null values? If not, we should explicitly 
> throw a related exception to the users instead of giving them a somewhat 
> unrelated one.
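
A minimal sketch of such an explicit restriction, assuming it would live 
wherever accumulator results are collected (the helper class and its location 
are hypothetical):

{code:java}
import java.io.Serializable;

import org.apache.flink.api.common.accumulators.Accumulator;

final class AccumulatorChecks {
    // Hypothetical helper: fail fast with a clear message instead of the
    // unrelated failureCause error described above.
    static <V, R extends Serializable> R checkedLocalValue(
            String name, Accumulator<V, R> accumulator) {
        R localValue = accumulator.getLocalValue();
        if (localValue == null) {
            throw new IllegalStateException(
                    "Accumulator '" + name + "' returned null from getLocalValue(); "
                            + "accumulators must not return null results.");
        }
        return localValue;
    }

    private AccumulatorChecks() {}
}
{code}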



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-17123) flink-sql-connector-elasticsearch version does not match the es version

2020-04-14 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-17123.

Resolution: Invalid

> flink-sql-connector-elasticsearch version does not match the es version
> -
>
> Key: FLINK-17123
> URL: https://issues.apache.org/jira/browse/FLINK-17123
> Project: Flink
>  Issue Type: Bug
>  Components: Command Line Client
>Affects Versions: 1.10.0
>Reporter: longxibendi
>Priority: Minor
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> 1. The flink-sql-connector-es version does not match the es version.
>  
> 2. The es version is 7.6.0, but with the flink-sql connector only the es6 jar 
> works; using the es7 jar throws an error.
>  
> For example:
> flink-sql-connector-elasticsearch6_2.11-1.10.0.jar, with es7.6.0 and flink1.10.0
> Start sql-client.sh embedded
>  
> create tb xxx
> insert xxx select xxx;
>  
> Then the job is submitted, but it fails with:
>  
> ...skipping...
> java.lang.NoClassDefFoundError: 
> org/apache/flink/elasticsearch7/shaded/org/elasticsearch/script/mustache/SearchTemplateRequest
>  at 
> org.apache.flink.streaming.connectors.elasticsearch7.Elasticsearch7ApiCallBridge.createClient(Elasticsearch7ApiCallBridge.java:76)
>  at 
> org.apache.flink.streaming.connectors.elasticsearch7.Elasticsearch7ApiCallBridge.createClient(Elasticsearch7ApiCallBridge.java:48)
>  at 
> org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase.open(ElasticsearchSinkBase.java:299)
>  at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
>  at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
>  at 
> org.apache.flink.streaming.api.operators.StreamSink.open(StreamSink.java:48)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.initializeStateAndOpen(StreamTask.java:1007)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:454)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:94)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:449)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:461)
>  at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707)
>  at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532)
>  at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.script.mustache.SearchTemplateRequest
>  at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>  ... 14 more
>  
>  
> 3. Using flink-sql-connector-elasticsearch7_2.11-1.10.0.jar with es7.6.0 and 
> flink1.10.0 produces the error above. Switching to 
> flink-sql-connector-elasticsearch6_2.11-1.10.0.jar with es7.6.0 and flink1.10.0 works fine.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17110) Make StreamExecutionEnvironment#configure also affects configuration

2020-04-14 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083032#comment-17083032
 ] 

Aljoscha Krettek commented on FLINK-17110:
--

What do you mean by "also affects configuration"?

> Make StreamExecutionEnvironment#configure also affects configuration
> 
>
> Key: FLINK-17110
> URL: https://issues.apache.org/jira/browse/FLINK-17110
> Project: Flink
>  Issue Type: Improvement
>Reporter: Wenlong Lyu
>Priority: Major
>
> If StreamExecutionEnvironment#configure could also affect the configuration in 
> StreamExecutionEnvironment, we could not only add library jars or classpaths 
> dynamically according to the job we want to run (which is quite important for 
> a platform product), but also tune runtime configuration programmatically.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (FLINK-17123) flink-sql-connector-elasticsearch version does not match the es version

2020-04-14 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek reopened FLINK-17123:
--

> flink-sql-connector-elasticsearch version does not match the es version
> -
>
> Key: FLINK-17123
> URL: https://issues.apache.org/jira/browse/FLINK-17123
> Project: Flink
>  Issue Type: Bug
>  Components: Command Line Client
>Affects Versions: 1.10.0
>Reporter: longxibendi
>Priority: Minor
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> 1. The flink-sql-connector-es version does not match the es version.
>  
> 2. The es version is 7.6.0, but with the flink-sql connector only the es6 jar 
> works; using the es7 jar throws an error.
>  
> For example:
> flink-sql-connector-elasticsearch6_2.11-1.10.0.jar, with es7.6.0 and flink1.10.0
> Start sql-client.sh embedded
>  
> create tb xxx
> insert xxx select xxx;
>  
> Then the job is submitted, but it fails with:
>  
> ...skipping...
> java.lang.NoClassDefFoundError: 
> org/apache/flink/elasticsearch7/shaded/org/elasticsearch/script/mustache/SearchTemplateRequest
>  at 
> org.apache.flink.streaming.connectors.elasticsearch7.Elasticsearch7ApiCallBridge.createClient(Elasticsearch7ApiCallBridge.java:76)
>  at 
> org.apache.flink.streaming.connectors.elasticsearch7.Elasticsearch7ApiCallBridge.createClient(Elasticsearch7ApiCallBridge.java:48)
>  at 
> org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase.open(ElasticsearchSinkBase.java:299)
>  at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
>  at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
>  at 
> org.apache.flink.streaming.api.operators.StreamSink.open(StreamSink.java:48)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.initializeStateAndOpen(StreamTask.java:1007)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:454)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:94)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:449)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:461)
>  at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707)
>  at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532)
>  at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.script.mustache.SearchTemplateRequest
>  at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>  ... 14 more
>  
>  
> 3. Using flink-sql-connector-elasticsearch7_2.11-1.10.0.jar with es7.6.0 and 
> flink1.10.0 produces the error above. Switching to 
> flink-sql-connector-elasticsearch6_2.11-1.10.0.jar with es7.6.0 and flink1.10.0 works fine.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17076) Revamp Kafka Connector Documentation

2020-04-14 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083029#comment-17083029
 ] 

Aljoscha Krettek commented on FLINK-17076:
--

Oh yes +1

> Revamp Kafka Connector Documentation
> 
>
> Key: FLINK-17076
> URL: https://issues.apache.org/jira/browse/FLINK-17076
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Documentation
>Reporter: Seth Wiesman
>Priority: Major
>
> The Kafka connector is one of the most popular connectors in Flink. The 
> documentation has grown organically over the years as new versions and 
> features have become supported. The page is still based on the Kafka 0.8 
> connector with asides for newer versions. For instance, the first paragraph 
> says "For most users, the FlinkKafkaConsumer08 (part of 
> flink-connector-kafka) is appropriate."
> Since Kafka 1.0 was released in 2017, I believe the page should be 
> restructured around the universal connector and 
> Kafka(De)SerializationSchemas with clear examples.  
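
For instance, a hedged sketch of the kind of universal-connector example the 
restructured page could lead with (broker address, group id, and topic are 
illustrative):

{code:java}
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class UniversalKafkaExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker:9092");
        props.setProperty("group.id", "example-group");

        // The universal connector (flink-connector-kafka) instead of the
        // version-specific FlinkKafkaConsumer08/09/010/011 variants.
        DataStream<String> stream = env.addSource(
                new FlinkKafkaConsumer<>("example-topic", new SimpleStringSchema(), props));

        stream.print();
        env.execute("universal-kafka-example");
    }
}
{code}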



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16770) Resuming Externalized Checkpoint (rocks, incremental, scale up) end-to-end test fails with no such file

2020-04-14 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17082921#comment-17082921
 ] 

Aljoscha Krettek commented on FLINK-16770:
--

Another occurrence of the general problem: 
https://dev.azure.com/rmetzger/Flink/_build/results?buildId=7418&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=643a4312-2b8f-5e76-5975-7bbc0942470d&l=3814

> Resuming Externalized Checkpoint (rocks, incremental, scale up) end-to-end 
> test fails with no such file
> ---
>
> Key: FLINK-16770
> URL: https://issues.apache.org/jira/browse/FLINK-16770
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Tests
>Affects Versions: 1.11.0
>Reporter: Zhijiang
>Assignee: Yun Tang
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
> Attachments: e2e-output.log, 
> flink-vsts-standalonesession-0-fv-az53.log
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The log: 
> [https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6603&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5]
>  
> There was also a similar problem in 
> https://issues.apache.org/jira/browse/FLINK-16561, but for the case of no 
> parallelism change, whereas this case is for scaling up. Not quite sure 
> whether the root cause is the same.
> {code:java}
> 2020-03-25T06:50:31.3894841Z Running 'Resuming Externalized Checkpoint 
> (rocks, incremental, scale up) end-to-end test'
> 2020-03-25T06:50:31.3895308Z 
> ==
> 2020-03-25T06:50:31.3907274Z TEST_DATA_DIR: 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-31390197304
> 2020-03-25T06:50:31.5500274Z Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT
> 2020-03-25T06:50:31.6354639Z Starting cluster.
> 2020-03-25T06:50:31.8871932Z Starting standalonesession daemon on host 
> fv-az655.
> 2020-03-25T06:50:33.5021784Z Starting taskexecutor daemon on host fv-az655.
> 2020-03-25T06:50:33.5152274Z Waiting for Dispatcher REST endpoint to come 
> up...
> 2020-03-25T06:50:34.5498116Z Waiting for Dispatcher REST endpoint to come 
> up...
> 2020-03-25T06:50:35.6031346Z Waiting for Dispatcher REST endpoint to come 
> up...
> 2020-03-25T06:50:36.9848425Z Waiting for Dispatcher REST endpoint to come 
> up...
> 2020-03-25T06:50:38.0283377Z Dispatcher REST endpoint is up.
> 2020-03-25T06:50:38.0285490Z Running externalized checkpoints test, with 
> ORIGINAL_DOP=2 NEW_DOP=4 and STATE_BACKEND_TYPE=rocks 
> STATE_BACKEND_FILE_ASYNC=true STATE_BACKEND_ROCKSDB_INCREMENTAL=true 
> SIMULATE_FAILURE=false ...
> 2020-03-25T06:50:46.1754645Z Job (b8cb04e4b1e730585bc616aa352866d0) is 
> running.
> 2020-03-25T06:50:46.1758132Z Waiting for job 
> (b8cb04e4b1e730585bc616aa352866d0) to have at least 1 completed checkpoints 
> ...
> 2020-03-25T06:50:46.3478276Z Waiting for job to process up to 200 records, 
> current progress: 173 records ...
> 2020-03-25T06:50:49.6332988Z Cancelling job b8cb04e4b1e730585bc616aa352866d0.
> 2020-03-25T06:50:50.4875673Z Cancelled job b8cb04e4b1e730585bc616aa352866d0.
> 2020-03-25T06:50:50.5468230Z ls: cannot access 
> '/home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-31390197304/externalized-chckpt-e2e-backend-dir/b8cb04e4b1e730585bc616aa352866d0/chk-[1-9]*/_metadata':
>  No such file or directory
> 2020-03-25T06:50:50.5606260Z Restoring job with externalized checkpoint at . 
> ...
> 2020-03-25T06:50:58.4728245Z 
> 2020-03-25T06:50:58.4732663Z 
> 
> 2020-03-25T06:50:58.4735785Z  The program finished with the following 
> exception:
> 2020-03-25T06:50:58.4737759Z 
> 2020-03-25T06:50:58.4742666Z 
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobSubmissionException: Failed to submit 
> JobGraph.
> 2020-03-25T06:50:58.4746274Z  at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
> 2020-03-25T06:50:58.4749954Z  at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
> 2020-03-25T06:50:58.4752753Z  at 
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:142)
> 2020-03-25T06:50:58.4755400Z  at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:659)
> 2020-03-25T06:50:58.4757862Z  at 
> org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:210)
> 2020-03-25T06:50:58.4760282Z  at 
> 

[jira] [Updated] (FLINK-17074) Deprecate DataStream.keyBy() that use tuple/expression keys

2020-04-09 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-17074:
-
Description: 
Currently you can specify either a {{KeySelector}}, tuple positions, or 
expression keys. I think {{KeySelectors}} are strictly superior and, with 
lambdas (or function references), quite easy to use. Tuple/expression keys use 
reflection underneath to do the field accesses, so performance is strictly 
worse. Also, when using a {{KeySelector}} you will have a meaningful key type 
{{KEY}} in your operations, while for tuple/expression keys the key type is 
simply {{Tuple}}.

Tuple/expression keys were introduced before Java got support for lambdas in 
Java 8 and before we added the Table API. Nowadays, using a lambda is little 
more typing than using an expression key but is (possibly) faster and more type 
safe. The Table API should be used for these more expression-based/relational 
use cases.

> Deprecate DataStream.keyBy() that use tuple/expression keys
> ---
>
> Key: FLINK-17074
> URL: https://issues.apache.org/jira/browse/FLINK-17074
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
> Environment: Currently you can specify either a {{KeySelector}}, 
> tuple positions, or expression keys. I think {{KeySelectors}} are strictly 
> superior and, with lambdas (or function references), quite easy to use. 
> Tuple/expression keys use reflection underneath to do the field accesses, so 
> performance is strictly worse. Also, when using a {{KeySelector}} you will 
> have a meaningful key type {{KEY}} in your operations, while for 
> tuple/expression keys the key type is simply {{Tuple}}.
> Tuple/expression keys were introduced before Java got support for lambdas in 
> Java 8 and before we added the Table API. Nowadays, using a lambda is little 
> more typing than using an expression key but is (possibly) faster and more 
> type safe. The Table API should be used for these more 
> expression-based/relational use cases.
>Reporter: Aljoscha Krettek
>Priority: Major
>  Labels: starter
> Fix For: 1.11.0
>
>
> Currently you can specify either a {{KeySelector}}, tuple positions, or 
> expression keys. I think {{KeySelectors}} are strictly superior and, with 
> lambdas (or function references), quite easy to use. Tuple/expression keys 
> use reflection underneath to do the field accesses, so performance is 
> strictly worse. Also, when using a {{KeySelector}} you will have a meaningful 
> key type {{KEY}} in your operations, while for tuple/expression keys the key 
> type is simply {{Tuple}}.
> Tuple/expression keys were introduced before Java got support for lambdas in 
> Java 8 and before we added the Table API. Nowadays, using a lambda is little 
> more typing than using an expression key but is (possibly) faster and more 
> type safe. The Table API should be used for these more 
> expression-based/relational use cases.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17074) Deprecate DataStream.keyBy() that use tuple/expression keys

2020-04-09 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-17074:
-
Environment: (was: Currently you can specify either a {{KeySelector}}, 
tuple positions, or expression keys. I think {{KeySelectors}} are strictly 
superior and, with lambdas (or function references), quite easy to use. 
Tuple/expression keys use reflection underneath to do the field accesses, so 
performance is strictly worse. Also, when using a {{KeySelector}} you will have 
a meaningful key type {{KEY}} in your operations, while for tuple/expression 
keys the key type is simply {{Tuple}}.

Tuple/expression keys were introduced before Java got support for lambdas in 
Java 8 and before we added the Table API. Nowadays, using a lambda is little 
more typing than using an expression key but is (possibly) faster and more type 
safe. The Table API should be used for these more expression-based/relational 
use cases.)

> Deprecate DataStream.keyBy() that use tuple/expression keys
> ---
>
> Key: FLINK-17074
> URL: https://issues.apache.org/jira/browse/FLINK-17074
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Aljoscha Krettek
>Priority: Major
>  Labels: starter
> Fix For: 1.11.0
>
>
> Currently you can specify either a {{KeySelector}}, tuple positions, or 
> expression keys. I think {{KeySelectors}} are strictly superior and, with 
> lambdas (or function references), quite easy to use. Tuple/expression keys 
> use reflection underneath to do the field accesses, so performance is 
> strictly worse. Also, when using a {{KeySelector}} you will have a meaningful 
> key type {{KEY}} in your operations, while for tuple/expression keys the key 
> type is simply {{Tuple}}.
> Tuple/expression keys were introduced before Java got support for lambdas in 
> Java 8 and before we added the Table API. Nowadays, using a lambda is little 
> more typing than using an expression key but is (possibly) faster and more 
> type safe. The Table API should be used for these more 
> expression-based/relational use cases.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17074) Deprecate DataStream.keyBy() that use tuple/expression keys

2020-04-09 Thread Aljoscha Krettek (Jira)
Aljoscha Krettek created FLINK-17074:


 Summary: Deprecate DataStream.keyBy() that use tuple/expression 
keys
 Key: FLINK-17074
 URL: https://issues.apache.org/jira/browse/FLINK-17074
 Project: Flink
  Issue Type: Improvement
  Components: API / DataStream
 Environment: Currently you can specify either a {{KeySelector}}, tuple 
positions, or expression keys. I think {{KeySelectors}} are strictly superior 
and, with lambdas (or function references), quite easy to use. Tuple/expression 
keys use reflection underneath to do the field accesses, so performance is 
strictly worse. Also, when using a {{KeySelector}} you will have a meaningful 
key type {{KEY}} in your operations, while for tuple/expression keys the key 
type is simply {{Tuple}}.

Tuple/expression keys were introduced before Java got support for lambdas in 
Java 8 and before we added the Table API. Nowadays, using a lambda is little 
more typing than using an expression key but is (possibly) faster and more type 
safe. The Table API should be used for these more expression-based/relational 
use cases.
Reporter: Aljoscha Krettek






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17074) Deprecate DataStream.keyBy() that use tuple/expression keys

2020-04-09 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-17074:
-
Fix Version/s: 1.11.0

> Deprecate DataStream.keyBy() that use tuple/expression keys
> ---
>
> Key: FLINK-17074
> URL: https://issues.apache.org/jira/browse/FLINK-17074
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
> Environment: Currently you can specify either a {{KeySelector}}, 
> tuple positions, or expression keys. I think {{KeySelectors}} are strictly 
> superior and, with lambdas (or function references), quite easy to use. 
> Tuple/expression keys use reflection underneath to do the field accesses, so 
> performance is strictly worse. Also, when using a {{KeySelector}} you will 
> have a meaningful key type {{KEY}} in your operations, while for 
> tuple/expression keys the key type is simply {{Tuple}}.
> Tuple/expression keys were introduced before Java got support for lambdas in 
> Java 8 and before we added the Table API. Nowadays, using a lambda is little 
> more typing than using an expression key but is (possibly) faster and more 
> type safe. The Table API should be used for these more 
> expression-based/relational use cases.
>Reporter: Aljoscha Krettek
>Priority: Major
>  Labels: starter
> Fix For: 1.11.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17058) Adding TimeoutTrigger support nested triggers

2020-04-09 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17079201#comment-17079201
 ] 

Aljoscha Krettek commented on FLINK-17058:
--

Hi, could you maybe post example code? It seems you already have a working 
version of this, right?

Also, welcome to the community! 

> Adding TimeoutTrigger support nested triggers
> -
>
> Key: FLINK-17058
> URL: https://issues.apache.org/jira/browse/FLINK-17058
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Roey Shem Tov
>Priority: Minor
>
> Hello,
> first Jira ticket that I'm opening here, so if I made any mistakes in how to 
> do this, please correct me.
> My suggestion is to add a TimeoutTrigger that wraps a nested trigger (similar 
> to how the PurgingTrigger does).
> The TimeoutTrigger would support processing time/event time and have two 
> timeout modes:
>  # Const timeout - when the first element of the window arrives, it opens a 
> timeout of X millis - after that the window will be evaluated.
>  # Continual timeout - each arriving record extends the timeout for the 
> evaluation of the window.
>  
> I found this very useful in our use of Flink, and I would like to work on it 
> (if possible).
> What do you think?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17058) Adding TimeoutTrigger support nested triggers

2020-04-09 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-17058:
-
Component/s: (was: API / DataSet)
 API / DataStream

> Adding TimeoutTrigger support nested triggers
> -
>
> Key: FLINK-17058
> URL: https://issues.apache.org/jira/browse/FLINK-17058
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Roey Shem Tov
>Priority: Minor
>
> Hello,
> first Jira ticket that I'm opening here, so if I made any mistakes in how to 
> do this, please correct me.
> My suggestion is to add a TimeoutTrigger that wraps a nested trigger (similar 
> to how the PurgingTrigger does).
> The TimeoutTrigger would support processing time/event time and have two 
> timeout modes:
>  # Const timeout - when the first element of the window arrives, it opens a 
> timeout of X millis - after that the window will be evaluated.
>  # Continual timeout - each arriving record extends the timeout for the 
> evaluation of the window.
>  
> I found this very useful in our use of Flink, and I would like to work on it 
> (if possible).
> What do you think?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16533) ExecutionEnvironment supports execution of existing plan

2020-04-09 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-16533:
-
Fix Version/s: (was: 1.11.0)

> ExecutionEnvironment supports execution of existing plan
> 
>
> Key: FLINK-16533
> URL: https://issues.apache.org/jira/browse/FLINK-16533
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client / Job Submission
>Reporter: godfrey he
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, {{ExecutionEnvironment}} only supports executing the plan 
> generated by itself.
> FLIP-84 proposes that {{TableEnvironment}} only triggers table programs and 
> that {{StreamExecutionEnvironment}}/{{ExecutionEnvironment}} only triggers 
> {{DataStream}}/{{DataSet}} programs. This requires that 
> {{ExecutionEnvironment}} can execute the plan generated by 
> {{TableEnvironment}}. We propose to add two methods to 
> {{ExecutionEnvironment}} (which are similar to 
> {{StreamExecutionEnvironment}}#execute(StreamGraph) and 
> {{StreamExecutionEnvironment}}#executeAsync(StreamGraph)):
> {code:java}
> public class ExecutionEnvironment {
> @Internal
> public JobExecutionResult execute(Plan plan) throws Exception {
> .
> }
> @Internal
> public JobClient executeAsync(Plan plan) throws Exception {
> .
> }
> }
> {code}
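
A hedged usage sketch of the proposed methods (how the {{Plan}} is obtained 
from the Table API translation is elided; the names are illustrative):

{code:java}
// Illustrative only: `plan` would be produced by TableEnvironment's translation
// of a table program into a DataSet-style Plan.
Plan plan = ...;

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
JobExecutionResult result = env.execute(plan);   // blocking submission
JobClient jobClient = env.executeAsync(plan);    // detached, returns a job handle
{code}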



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16521) Remove unused FileUtils#isClassFile

2020-04-09 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17079112#comment-17079112
 ] 

Aljoscha Krettek commented on FLINK-16521:
--

Any news on this?

> Remove unused FileUtils#isClassFile
> ---
>
> Key: FLINK-16521
> URL: https://issues.apache.org/jira/browse/FLINK-16521
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Core
>Reporter: Canbin Zheng
>Priority: Trivial
>  Labels: starter
> Fix For: 1.10.1, 1.11.0
>
>
> Remove the unused public method {{FileUtils#isClassFile}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-992) Create CollectionDataSets by reading (client) local files.

2020-04-09 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-992.
--
Resolution: Won't Do

We're not putting effort into the DataSet API anymore.

> Create CollectionDataSets by reading (client) local files.
> --
>
> Key: FLINK-992
> URL: https://issues.apache.org/jira/browse/FLINK-992
> Project: Flink
>  Issue Type: New Feature
>  Components: API / DataSet, API / Python
>Reporter: Fabian Hueske
>Priority: Minor
>  Labels: starter
>
> {{CollectionDataSets}} are a nice way to feed data into programs.
> We could add support to read a client-local file at program construction time 
> using a FileInputFormat, put its data into a CollectionDataSet, and ship that 
> data together with the program.
> This would remove the need to upload small files to DFS that are used 
> together with some large input (stored in DFS).
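
A hedged sketch of the idea using today's API, simplified to plain file reading 
rather than a {{FileInputFormat}} (the file path is illustrative):

{code:java}
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class ClientLocalInput {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Read the small file on the client at program construction time...
        List<String> lines = Files.readAllLines(Paths.get("/tmp/small-input.txt"));

        // ...and ship its contents together with the program, instead of
        // uploading the file to DFS first.
        DataSet<String> smallInput = env.fromCollection(lines);
        smallInput.print();
    }
}
{code}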



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15557) Cannot connect to Azure Event Hub/Kafka since Jan 5th 2020. Kafka version issue

2020-04-09 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17079022#comment-17079022
 ] 

Aljoscha Krettek commented on FLINK-15557:
--

There is a problem with bumping the version because our Producer was using some 
internals that have changed in Kafka 2.3.x. (see 
https://issues.apache.org/jira/browse/FLINK-15362?focusedCommentId=17078507=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17078507).

> Cannot connect to Azure Event Hub/Kafka since Jan 5th 2020. Kafka version 
> issue
> ---
>
> Key: FLINK-15557
> URL: https://issues.apache.org/jira/browse/FLINK-15557
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
> Environment: Java 8, Flink 1.9.1, Azure Event Hub
>Reporter: Chris
>Priority: Blocker
>
> As of Jan 5th we can no longer consume messages from azure event hub using 
> Flink 1.9.1.  I was able to fix the issue on several of our spring boot 
> projects by upgrading to spring boot 2.2.2, kafka 2.3.1, and kafka-clients 
> 2.3.1
>  
> 2020-01-10 19:36:30,364 WARN org.apache.kafka.clients.NetworkClient - 
> [Consumer clientId=consumer-1, groupId=] Bootstrap broker 
> *.servicebus.windows.net:9093 (id: -1 rack: null) disconnected



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15362) Bump Kafka client version to 2.4.0 for universal Kafka connector

2020-04-08 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17078507#comment-17078507
 ] 

Aljoscha Krettek commented on FLINK-15362:
--

I did the easy fixing of the reflective code but I'm afraid this doesn't work 
anymore with Kafka versions > 2.3.x. I spent a day debugging this and what I 
found in the end is this change: 
https://github.com/apache/kafka/pull/5971/files.

The {{Sender}} thread of the {{Producer}} will now actively abort incomplete 
transactions on shutdown, see {{"Aborting incomplete transaction due to 
shutdown"}}. Our Kafka Producer will get an {{InvalidTxnStateException}}, that 
it swallows, but in  this case it's a valid exception because upon recovery we 
try to finalize a transaction that was already aborted in the 
{{TransactionCoordinator}}. Aborting the transaction can be seen in the DEBUG 
logs of the {{TransactionCoordinator}}.

[~pnowojski] Could you confirm this?
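
For reference, a minimal sketch of the client-side behaviour (broker address, topic, and transactional id are made-up values):

{code:java}
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TxnShutdownSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // made-up broker
        props.put("transactional.id", "demo-txn");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.initTransactions();
        producer.beginTransaction();
        producer.send(new ProducerRecord<>("demo-topic", "key", "value"));

        // With the Kafka change linked above, the Sender thread now actively
        // aborts this still-open transaction during close() ("Aborting
        // incomplete transaction due to shutdown"). A recovering producer that
        // later tries to commit the same transaction then gets an
        // InvalidTxnStateException from the TransactionCoordinator.
        producer.close();
    }
}
{code}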

> Bump Kafka client version to 2.4.0 for universal Kafka connector
> 
>
> Key: FLINK-15362
> URL: https://issues.apache.org/jira/browse/FLINK-15362
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: vinoyang
>Assignee: Aljoscha Krettek
>Priority: Major
>
> Kafka 2.4 has been released recently. Many new features are involved in 
> this version. More details: 
> https://blogs.apache.org/kafka/entry/what-s-new-in-apache1
> IMO, it would be better to bump the Kafka client version to 2.4.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-15362) Bump Kafka client version to 2.4.0 for universal Kafka connector

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek reassigned FLINK-15362:


Assignee: Aljoscha Krettek  (was: vinoyang)

> Bump Kafka client version to 2.4.0 for universal Kafka connector
> 
>
> Key: FLINK-15362
> URL: https://issues.apache.org/jira/browse/FLINK-15362
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: vinoyang
>Assignee: Aljoscha Krettek
>Priority: Major
>
> Kafka 2.4 has been released recently. Many new features are involved in 
> this version. More details: 
> https://blogs.apache.org/kafka/entry/what-s-new-in-apache1
> IMO, it would be better to bump the Kafka client version to 2.4.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17053) RecordWriterTest.testClearBuffersAfterInterruptDuringBlockingBufferRequest fails

2020-04-08 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17078315#comment-17078315
 ] 

Aljoscha Krettek commented on FLINK-17053:
--

Another occurrence: 
https://dev.azure.com/aljoschakrettek/Flink/_build/results?buildId=85=logs=6e58d712-c5cc-52fb-0895-6ff7bd56c46b=6545c5fa-bc4d-5447-1a0b-b3260f46cc6b=6734

> RecordWriterTest.testClearBuffersAfterInterruptDuringBlockingBufferRequest 
> fails
> 
>
> Key: FLINK-17053
> URL: https://issues.apache.org/jira/browse/FLINK-17053
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
>
> CI run: https://travis-ci.org/github/apache/flink/jobs/672458481
> {code}
> [ERROR] Tests run: 12, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 2.378 s <<< FAILURE! - in 
> org.apache.flink.runtime.io.network.api.writer.BroadcastRecordWriterTest
> [ERROR] 
> testClearBuffersAfterInterruptDuringBlockingBufferRequest(org.apache.flink.runtime.io.network.api.writer.BroadcastRecordWriterTest)
>   Time elapsed: 1.197 s  <<< FAILURE!
> Wanted but not invoked:
> bufferProvider.requestBufferBuilderBlocking();
> -> at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriterTest.testClearBuffersAfterInterruptDuringBlockingBufferRequest(RecordWriterTest.java:211)
> However, there were exactly 2 interactions with this mock:
> bufferProvider.requestBufferBuilder();
> -> at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriterTest$RecyclingPartitionWriter.tryGetBufferBuilder(RecordWriterTest.java:726)
> bufferProvider.requestBufferBuilder();
> -> at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriterTest$RecyclingPartitionWriter.tryGetBufferBuilder(RecordWriterTest.java:726)
>   at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriterTest.testClearBuffersAfterInterruptDuringBlockingBufferRequest(RecordWriterTest.java:211)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-10598) Maintain modern Kafka connector

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-10598.

Resolution: Fixed

All sub-issues are closed; for future version updates or such we should create 
new regular Jira issues. I don't think it's useful to keep this issue open 
forever.

> Maintain modern Kafka connector
> ---
>
> Key: FLINK-10598
> URL: https://issues.apache.org/jira/browse/FLINK-10598
> Project: Flink
>  Issue Type: Task
>  Components: Connectors / Kafka
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>
> This is an umbrella issue for archiving and maintaining the issues generated 
> by the newly merged modern Kafka connector (FLINK-9697).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-12976) Bump Kafka client version to 2.3.0 for universal Kafka connector

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-12976.

Resolution: Won't Fix

Closing in favour of FLINK-15362, which updates to 2.4.x

> Bump Kafka client version to 2.3.0 for universal Kafka connector
> 
>
> Key: FLINK-12976
> URL: https://issues.apache.org/jira/browse/FLINK-12976
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Recently, Kafka 2.3.0 has been released, see here: 
> [https://www.apache.org/dist/kafka/2.3.0/RELEASE_NOTES.html]
> We'd better bump the dependency version of Kafka client to 2.3.0 to track the 
> newest version.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15362) Bump Kafka client version to 2.4.0 for universal Kafka connector

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-15362:
-
Parent: (was: FLINK-10598)
Issue Type: Improvement  (was: Sub-task)

> Bump Kafka client version to 2.4.0 for universal Kafka connector
> 
>
> Key: FLINK-15362
> URL: https://issues.apache.org/jira/browse/FLINK-15362
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>
> Kafka 2.4 has been released recently. Many new features are involved in 
> this version. More details: 
> https://blogs.apache.org/kafka/entry/what-s-new-in-apache1
> IMO, it would be better to bump the Kafka client version to 2.4.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-12976) Bump Kafka client version to 2.3.0 for universal Kafka connector

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-12976:
-
Parent: (was: FLINK-10598)
Issue Type: Improvement  (was: Sub-task)

> Bump Kafka client version to 2.3.0 for universal Kafka connector
> 
>
> Key: FLINK-12976
> URL: https://issues.apache.org/jira/browse/FLINK-12976
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Recently, Kafka 2.3.0 has been released, see here: 
> [https://www.apache.org/dist/kafka/2.3.0/RELEASE_NOTES.html]
> We'd better bump the dependency version of Kafka client to 2.3.0 to track the 
> newest version.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-11257) FlinkKafkaConsumer should support assigning partitions

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-11257.

Resolution: Not A Problem

I'm closing this for now because I don't understand what the issue is. Our 
consumer should correctly use the "assign partition" API to only read from the 
necessary partitions.

Please re-open if there is any new concrete information or a code example that 
shows what the change should be.
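
For reference, this is roughly what the underlying Kafka {{assign()}} API looks like (broker, topic, and partition are made-up values):

{code:java}
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AssignSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // made-up broker
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // assign() reads exactly the given partitions instead of the whole topic
            consumer.assign(Collections.singletonList(new TopicPartition("demo-topic", 0)));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            records.forEach(r -> System.out.println(r.value()));
        }
    }
}
{code}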

> FlinkKafkaConsumer should support assigning partitions
> ---
>
> Key: FLINK-11257
> URL: https://issues.apache.org/jira/browse/FLINK-11257
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.7.1
>Reporter: shengjk1
>Assignee: vinoyang
>Priority: Major
>
> I find that Flink 1.7 also has a universal Kafka connector; it would be even 
> better if the Kafka connector supported assigning partitions. For example, if 
> a Kafka topic has 3 partitions and I only use 1 partition, I currently have 
> to read all partitions and then filter. This not only wastes resources but is 
> also relatively inefficient, so I suggest that FlinkKafkaConsumer should 
> support assigning partitions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-11257) FlinkKafkaConsumer should support assigning partitions

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-11257:
-
Parent: (was: FLINK-10598)
Issue Type: Improvement  (was: Sub-task)

> FlinkKafkaConsumer should support assigning partitions
> ---
>
> Key: FLINK-11257
> URL: https://issues.apache.org/jira/browse/FLINK-11257
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.7.1
>Reporter: shengjk1
>Assignee: vinoyang
>Priority: Major
>
> I find that Flink 1.7 also has a universal Kafka connector; it would be even 
> better if the Kafka connector supported assigning partitions. For example, if 
> a Kafka topic has 3 partitions and I only use 1 partition, I currently have 
> to read all partitions and then filter. This not only wastes resources but is 
> also relatively inefficient, so I suggest that FlinkKafkaConsumer should 
> support assigning partitions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10603) Reduce kafka test duration

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-10603:
-
Affects Version/s: 1.11.0
   1.8.0
   1.9.0
   1.10.0

> Reduce kafka test duration
> --
>
> Key: FLINK-10603
> URL: https://issues.apache.org/jira/browse/FLINK-10603
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.11.0
>Reporter: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
>
> The tests for the modern kafka connector take more than 10 minutes which is 
> simply unacceptable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10603) Reduce kafka test duration

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-10603:
-
Parent: (was: FLINK-10598)
Issue Type: Improvement  (was: Sub-task)

> Reduce kafka test duration
> --
>
> Key: FLINK-10603
> URL: https://issues.apache.org/jira/browse/FLINK-10603
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.7.0
>Reporter: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
>
> The tests for the modern kafka connector take more than 10 minutes which is 
> simply unacceptable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-10701) Move modern kafka connector module into connector profile

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-10701.

Resolution: Abandoned

Closing this for now. With the move to Azure there are also other things going on.

> Move modern kafka connector module into connector profile 
> --
>
> Key: FLINK-10701
> URL: https://issues.apache.org/jira/browse/FLINK-10701
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Kafka, Tests
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>
> The modern connector is run in the {{misc}} profile since it wasn't properly 
> added to the {{connector}} profile in stage.sh; click 
> [here|https://github.com/apache/flink/pull/6890#issuecomment-431917344] to 
> see more details.
> *This issue is blocked by FLINK-10603.*



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-5015) Add Tests/ITCase for Kafka Per-Partition Watermarks

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-5015:

Priority: Major  (was: Critical)

> Add Tests/ITCase for Kafka Per-Partition Watermarks
> ---
>
> Key: FLINK-5015
> URL: https://issues.apache.org/jira/browse/FLINK-5015
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Common, Connectors / Kafka
>Reporter: Aljoscha Krettek
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-4859) Clearly Separate Responsibilities of StreamOperator and StreamTask

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-4859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-4859:

Priority: Major  (was: Critical)

> Clearly Separate Responsibilities of StreamOperator and StreamTask
> --
>
> Key: FLINK-4859
> URL: https://issues.apache.org/jira/browse/FLINK-4859
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Aljoscha Krettek
>Priority: Major
>
> Currently, {{StreamTask}} and {{StreamOperator}} perform a complicated dance 
> for certain operations. For example, when restoring from a snapshot, the 
> state backends are restored in the {{StreamTask}}, but only after 
> {{StreamTask}} calls restore on the operator and the operator asks for the 
> backends. There is no clear separation/modularisation with properly defined 
> interfaces right now.
> The code works right now but maintainability/understandability could be 
> greatly improved.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-4859) Clearly Separate Responsibilities of StreamOperator and StreamTask

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-4859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-4859:

Component/s: Runtime / Task

> Clearly Separate Responsibilities of StreamOperator and StreamTask
> --
>
> Key: FLINK-4859
> URL: https://issues.apache.org/jira/browse/FLINK-4859
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream, Runtime / Task
>Reporter: Aljoscha Krettek
>Priority: Major
>
> Currently, {{StreamTask}} and {{StreamOperator}} perform a complicated dance 
> for certain operations. For example, when restoring from a snapshot, the 
> state backends are restored in the {{StreamTask}}, but only after 
> {{StreamTask}} calls restore on the operator and the operator asks for the 
> backends. There is no clear separation/modularisation with properly defined 
> interfaces right now.
> The code works right now but maintainability/understandability could be 
> greatly improved.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-3431) Add retrying logic for RocksDB snapshots

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-3431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-3431:

Component/s: (was: API / DataStream)

> Add retrying logic for RocksDB snapshots
> 
>
> Key: FLINK-3431
> URL: https://issues.apache.org/jira/browse/FLINK-3431
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Gyula Fora
>Assignee: Shuyi Chen
>Priority: Critical
>
> Currently the RocksDB snapshots rely on hdfs copy not failing while taking 
> the snapshots.
> In some cases when the state size is big enough the HDFS nodes might get so 
> overloaded that the copy operation fails on errors like this:
> AsynchronousException{java.io.IOException: All datanodes 172.26.86.90:50010 
> are bad. Aborting...}
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$1.run(StreamTask.java:545)
> Caused by: java.io.IOException: All datanodes 172.26.86.90:50010 are bad. 
> Aborting...
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1023)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:838)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:483)
> I think it would be important that we don't immediately fail the job in these 
> cases but retry the copy operation after some random sleep time. It might 
> also be good to do a random sleep before the copy, depending on the state size, to 
> smoothen out IO a little bit.
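> 
> A minimal sketch of such a retry loop (plain Java, names are illustrative):
> {code:java}
> import java.io.IOException;
> import java.util.concurrent.Callable;
> import java.util.concurrent.ThreadLocalRandom;
> 
> public final class CopyRetry {
>     // Retry the copy a few times with a random sleep instead of failing
>     // the whole job on the first IOException.
>     static <T> T withRetries(Callable<T> copyAction, int maxAttempts) throws Exception {
>         IOException last = null;
>         for (int attempt = 1; attempt <= maxAttempts; attempt++) {
>             try {
>                 return copyAction.call();
>             } catch (IOException e) {
>                 last = e;
>                 // random backoff to smoothen out the load on HDFS
>                 Thread.sleep(ThreadLocalRandom.current().nextLong(500, 5_000));
>             }
>         }
>         throw last;
>     }
> }
> {code}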



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-6993) Not reading recursive files in batch using readTextFile when the file name starts with _.

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-6993.
---
Resolution: Abandoned

Closing for now since this is expected Hadoop behaviour.

> Not reading recursive files in batch using readTextFile when the file name 
> starts with _.
> -
>
> Key: FLINK-6993
> URL: https://issues.apache.org/jira/browse/FLINK-6993
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataSet
>Affects Versions: 1.3.0
>Reporter: Shashank Agarwal
>Assignee: Andrey Zagrebin
>Priority: Critical
>
> When I try to read files from a folder using readTextFile in batch with 
> recursive.file.enumeration enabled, it does not read files whose names start 
> with _. When I remove the leading _ it works fine. 
> It also works fine when given the direct path of a single file, but not with 
> a directory path. To replicate the issue:
> {code}
> import org.apache.flink.api.scala.{DataSet, ExecutionEnvironment}
> import org.apache.flink.configuration.Configuration
> object CSVMerge {
>   def main(args: Array[String]): Unit = {
> val env = ExecutionEnvironment.getExecutionEnvironment
> // create a configuration object
> val parameters = new Configuration
> // set the recursive enumeration parameter
> parameters.setBoolean("recursive.file.enumeration", true)
> val stream = env.readTextFile("file:///Users/data")
>   .withParameters(parameters)
> stream.print()
>   }
> }
> {code}
> When you put 2-3 text files with names like 1.txt, 2.txt etc. in the data 
> folder it works fine, but with files named _1.txt, _2.txt it does not.
> Flink's BucketingSink by default puts _ at the start of file names, so files 
> sinked from a DataStream cannot be read back.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-7151) Support function DDL

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-7151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-7151:

Priority: Major  (was: Critical)

> Support function DDL
> 
>
> Key: FLINK-7151
> URL: https://issues.apache.org/jira/browse/FLINK-7151
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: yuemeng
>Assignee: Zhenqiu Huang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Based on create function and table.we can register a udf,udaf,udtf use sql:
> {code}
> CREATE FUNCTION [IF NOT EXISTS] [catalog_name.db_name.]function_name AS 
> class_name;
> DROP FUNCTION [IF EXISTS] [catalog_name.db_name.]function_name;
> ALTER FUNCTION [IF EXISTS] [catalog_name.db_name.]function_name RENAME TO 
> new_name;
> {code}
> {code}
> CREATE function 'TOPK' AS 
> 'com..aggregate.udaf.distinctUdaf.topk.ITopKUDAF';
> INSERT INTO db_sink SELECT id, TOPK(price, 5, 'DESC') FROM kafka_source GROUP 
> BY id;
> {code}
> This ticket can assume that the function class is already loaded in classpath 
> by users. Advanced syntax like to how to dynamically load udf libraries from 
> external locations can be on a separate ticket.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-9189) Add a SBT Quickstart

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-9189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-9189:

Priority: Major  (was: Critical)

> Add a SBT Quickstart
> 
>
> Key: FLINK-9189
> URL: https://issues.apache.org/jira/browse/FLINK-9189
> Project: Flink
>  Issue Type: Improvement
>  Components: Quickstarts
>Reporter: Stephan Ewen
>Priority: Major
>
> Having a proper project template helps a lot in getting dependencies right. 
> For example, setting the core dependencies to "provided", the connector / 
> library dependencies to "compile", etc.
> The Maven quickstarts are in good shape by now, but I observed SBT users to 
> get this wrong quite often.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-9307) Running flink program doesn't print infos in console any more

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-9307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-9307.
---
Resolution: Won't Fix

This is a side effect of switching to the REST API quite a while ago.

> Running flink program doesn't print infos in console any more
> -
>
> Key: FLINK-9307
> URL: https://issues.apache.org/jira/browse/FLINK-9307
> Project: Flink
>  Issue Type: Improvement
>  Components: Command Line Client
>Affects Versions: 1.6.0
> Environment: Mac OS X 12.10.6
> Flink 1.6-SNAPSHOT commit c8fa8d025684c222582 .
>Reporter: Zili Chen
>Priority: Critical
>
> As shown in [this 
> page|https://ci.apache.org/projects/flink/flink-docs-master/quickstart/setup_quickstart.html#run-the-example],
>  when flink run program, it should have printed infos like
> {code:java}
> Cluster configuration: Standalone cluster with JobManager at /127.0.0.1:6123
>   Using address 127.0.0.1:6123 to connect to JobManager.
>   JobManager web interface address http://127.0.0.1:8081
>   Starting execution of program
>   Submitting job with JobID: 574a10c8debda3dccd0c78a3bde55e1b. Waiting for 
> job completion.
>   Connected to JobManager at 
> Actor[akka.tcp://flink@127.0.0.1:6123/user/jobmanager#297388688]
>   11/04/2016 14:04:50 Job execution switched to status RUNNING.
>   11/04/2016 14:04:50 Source: Socket Stream -> Flat Map(1/1) switched to 
> SCHEDULED
>   11/04/2016 14:04:50 Source: Socket Stream -> Flat Map(1/1) switched to 
> DEPLOYING
>   11/04/2016 14:04:50 Fast TumblingProcessingTimeWindows(5000) of 
> WindowedStream.main(SocketWindowWordCount.java:79) -> Sink: Unnamed(1/1) 
> switched to SCHEDULED
>   11/04/2016 14:04:51 Fast TumblingProcessingTimeWindows(5000) of 
> WindowedStream.main(SocketWindowWordCount.java:79) -> Sink: Unnamed(1/1) 
> switched to DEPLOYING
>   11/04/2016 14:04:51 Fast TumblingProcessingTimeWindows(5000) of 
> WindowedStream.main(SocketWindowWordCount.java:79) -> Sink: Unnamed(1/1) 
> switched to RUNNING
>   11/04/2016 14:04:51 Source: Socket Stream -> Flat Map(1/1) switched to 
> RUNNING
> {code}
> but when I follow the same command, it just printed
> {code:java}
> Starting execution of program
> {code}
> Those messages are printed properly if I use Flink 1.4, and also logged at 
> `log/flink--taskexecutor-0-localhost.log` when running on 
> 1.6-SNAPSHOT.
> Now it breaks [this pull request|https://github.com/apache/flink/pull/5957] 
> and [~Zentol] suspects it's a regression.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-9433) SystemProcessingTimeService does not work properly

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-9433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-9433.
---
Resolution: Abandoned

I'm closing this as "Abandoned", since there is no more activity and the code 
base has moved on quite a bit. Please re-open this if you feel otherwise and 
work should continue.

The code base has moved to a new mailbox model for the Task, so things changed 
quite a bit.

> SystemProcessingTimeService does not work properly
> --
>
> Key: FLINK-9433
> URL: https://issues.apache.org/jira/browse/FLINK-9433
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Reporter: Ruidong Li
>Assignee: Ruidong Li
>Priority: Critical
> Attachments: log.txt
>
>
> If a WindowOperator and an AsyncWaitOperator are chained together and the 
> queue of the AsyncWaitOperator is full when the time trigger of the 
> WindowOperator fires and calls collect(), the call blocks until the queue of 
> the AsyncWaitOperator is no longer full. Meanwhile, the time trigger of the 
> AsyncWaitOperator cannot fire because the SystemProcessingTimeService has a 
> capacity of only one.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-9528) Incorrect results: Filter does not treat Upsert messages correctly.

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-9528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-9528.
---
Resolution: Abandoned

I'm closing this as "Abandoned", since there is no more activity and the code 
base has moved on quite a bit. Please re-open this if you feel otherwise and 
work should continue.

This is for the old Table Planner, I think.

> Incorrect results: Filter does not treat Upsert messages correctly.
> ---
>
> Key: FLINK-9528
> URL: https://issues.apache.org/jira/browse/FLINK-9528
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.3.3, 1.4.2, 1.5.0
>Reporter: Fabian Hueske
>Assignee: Hequn Cheng
>Priority: Critical
>
> Currently, Filters (i.e., Calcs with predicates) do not distinguish between 
> retraction and upsert mode. A Calc looks at a record (regardless of its 
> update semantics) and either discards it (predicate evaluates to false) or 
> passes it on (predicate evaluates to true).
> This works fine for messages with retraction semantics but is not correct for 
> upsert messages.
> The following test case (can be pasted into {{TableSinkITCase}}) shows the 
> problem:
> {code:java}
>   @Test
>   def testUpsertsWithFilter(): Unit = {
> val env = StreamExecutionEnvironment.getExecutionEnvironment
> env.getConfig.enableObjectReuse()
> env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
> val tEnv = TableEnvironment.getTableEnvironment(env)
> val t = StreamTestData.get3TupleDataStream(env)
>   .assignAscendingTimestamps(_._1.toLong)
>   .toTable(tEnv, 'id, 'num, 'text)
> t.select('text.charLength() as 'len)
>   .groupBy('len)
>   .select('len, 'len.count as 'cnt)
>   // .where('cnt < 7)
>   .writeToSink(new TestUpsertSink(Array("len"), false))
> env.execute()
> val results = RowCollector.getAndClearValues
> val retracted = RowCollector.upsertResults(results, Array(0)).sorted
> val expectedWithoutFilter = List(
>   "2,1", "5,1", "9,9", "10,7", "11,1", "14,1", "25,1").sorted
> val expectedWithFilter = List(
> "2,1", "5,1", "11,1", "14,1", "25,1").sorted
> assertEquals(expectedWithoutFilter, retracted)
> // assertEquals(expectedWithFilter, retracted)
>   }
> {code}
> When we add a filter on the aggregation result, we would expect that all rows 
> that do not fulfill the condition are removed from the result. However, the 
> filter only removes the upsert message such that the previous version remains 
> in the result.
> One solution could be to make a filter aware of the update semantics (retract 
> or upsert) and convert the upsert message into a delete message if the 
> predicate evaluates to false.
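> 
> An illustrative model of that last idea in plain Java (the types are made up 
> for the sketch, not the planner's actual classes):
> {code:java}
> import java.util.Optional;
> import java.util.function.Predicate;
> 
> enum MsgKind { UPSERT, DELETE }
> 
> final class UpsertMsg<T> {
>     final MsgKind kind;
>     final T value;
>     UpsertMsg(MsgKind kind, T value) { this.kind = kind; this.value = value; }
> }
> 
> final class UpsertAwareFilter {
>     // Deletes and passing upserts flow through unchanged; a failing upsert is
>     // converted into a delete so the previous version is removed downstream
>     // instead of lingering in the result.
>     static <T> Optional<UpsertMsg<T>> apply(UpsertMsg<T> msg, Predicate<T> predicate) {
>         if (msg.kind == MsgKind.DELETE || predicate.test(msg.value)) {
>             return Optional.of(msg);
>         }
>         return Optional.of(new UpsertMsg<>(MsgKind.DELETE, msg.value));
>     }
> }
> {code}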



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-9544) Downgrade kinesis protocol from CBOR to JSON not possible as required by kinesalite

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-9544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-9544.
---
Resolution: Abandoned

I'm closing this as "Abandoned", since there is no more activity and the code 
base has moved on quite a bit. Please re-open this if you feel otherwise and 
work should continue.

> Downgrade kinesis protocol from CBOR to JSON not possible as required by 
> kinesalite
> ---
>
> Key: FLINK-9544
> URL: https://issues.apache.org/jira/browse/FLINK-9544
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.4.0, 1.4.1, 1.4.2, 1.5.0
>Reporter: Ph.Duveau
>Priority: Critical
>
> The Amazon client does not downgrade from CBOR to JSON when setting the env 
> variable AWS_CBOR_DISABLE to true (or 1) and/or defining 
> com.amazonaws.sdk.disableCbor=true via JVM options. This bug is due to the 
> Maven Shade relocation of com.amazon.* classes. As soon as you cancel this 
> relocation (by removing the relocation in the kinesis connector or by 
> re-relocating in the final jar), it works again.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-9812) SpanningRecordSerializationTest fails on travis

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-9812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-9812.
---
Resolution: Cannot Reproduce

This has not occurred for a long time now; please re-open if it re-occurs.

> SpanningRecordSerializationTest fails on travis
> ---
>
> Key: FLINK-9812
> URL: https://issues.apache.org/jira/browse/FLINK-9812
> Project: Flink
>  Issue Type: Bug
>  Components: API / Type Serialization System, Runtime / Network, Tests
>Affects Versions: 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Nico Kruber
>Priority: Critical
> Fix For: 1.7.3
>
>
> https://travis-ci.org/zentol/flink/jobs/402744191
> {code}
> testHandleMixedLargeRecords(org.apache.flink.runtime.io.network.api.serialization.SpanningRecordSerializationTest)
>   Time elapsed: 6.113 sec  <<< ERROR!
> java.nio.channels.ClosedChannelException: null
>   at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:110)
>   at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:199)
>   at 
> org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer$SpanningWrapper.addNextChunkFromMemorySegment(SpillingAdaptiveSpanningRecordDeserializer.java:529)
>   at 
> org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer$SpanningWrapper.access$200(SpillingAdaptiveSpanningRecordDeserializer.java:431)
>   at 
> org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.setNextBuffer(SpillingAdaptiveSpanningRecordDeserializer.java:76)
>   at 
> org.apache.flink.runtime.io.network.api.serialization.SpanningRecordSerializationTest.testSerializationRoundTrip(SpanningRecordSerializationTest.java:149)
>   at 
> org.apache.flink.runtime.io.network.api.serialization.SpanningRecordSerializationTest.testSerializationRoundTrip(SpanningRecordSerializationTest.java:115)
>   at 
> org.apache.flink.runtime.io.network.api.serialization.SpanningRecordSerializationTest.testHandleMixedLargeRecords(SpanningRecordSerializationTest.java:104)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-9992) FsStorageLocationReferenceTest#testEncodeAndDecode failed in Travis CI

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-9992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-9992.
---
Resolution: Cannot Reproduce

This has not occurred for a long time now; please re-open if it re-occurs.

> FsStorageLocationReferenceTest#testEncodeAndDecode failed in Travis CI
> --
>
> Key: FLINK-9992
> URL: https://issues.apache.org/jira/browse/FLINK-9992
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems
>Reporter: vinoyang
>Priority: Critical
>
> {code:java}
> testEncodeAndDecode(org.apache.flink.runtime.state.filesystem.FsStorageLocationReferenceTest)
>   Time elapsed: 0.027 sec  <<< ERROR!
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Illegal 
> character in hostname at index 5: 
> gl://碪⯶㪴]ឪ嵿⎐䪀筪ᆶ歑ᆂ玚୓䇷ノⳡ೯43575/䡷ᦼ☶⨩䚩筶ࢊණ⣁᳊尯/彡䫼畒伈森削㔞/缳漸⩧勎㓘癐⍖ᾐ䘽㼺䨶/粉掩㤡⪌⎏㆐罠Ꮨㆆ䤱ൎ堉儾
>   at java.net.URI$Parser.fail(URI.java:2848)
>   at java.net.URI$Parser.parseHostname(URI.java:3387)
>   at java.net.URI$Parser.parseServer(URI.java:3236)
>   at java.net.URI$Parser.parseAuthority(URI.java:3155)
>   at java.net.URI$Parser.parseHierarchical(URI.java:3097)
>   at java.net.URI$Parser.parse(URI.java:3053)
>   at java.net.URI.<init>(URI.java:746)
>   at org.apache.flink.core.fs.Path.initialize(Path.java:247)
>   at org.apache.flink.core.fs.Path.(Path.java:217)
>   at 
> org.apache.flink.runtime.state.filesystem.FsStorageLocationReferenceTest.randomPath(FsStorageLocationReferenceTest.java:88)
>   at 
> org.apache.flink.runtime.state.filesystem.FsStorageLocationReferenceTest.testEncodeAndDecode(FsStorageLocationReferenceTest.java:41)
> {code}
> log is here : https://travis-ci.org/apache/flink/jobs/409430886
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-10100) Optimizer pushes partitioning past Null-Filter

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-10100.

Resolution: Won't Fix

We're not actively developing the DataSet API anymore.

> Optimizer pushes partitioning past Null-Filter
> --
>
> Key: FLINK-10100
> URL: https://issues.apache.org/jira/browse/FLINK-10100
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataSet
>Affects Versions: 1.3.3, 1.4.2, 1.5.2, 1.6.0, 1.7.0
>Reporter: Fabian Hueske
>Priority: Critical
>
> The DataSet optimizer pushes certain operations like partitioning or sorting 
> past Filter operators.
> It does that because it knows that a {{FilterFunction}} cannot modify the 
> records but only indicate whether a record should be forwarded or not. 
> However, this causes problems if the filter should remove records with null 
> keys. In that case, the partitioning can be pushed past the filter such that 
> the partitioner has to deal with null keys. This can fail with a 
> {{NullPointerException}}.
> The following code produces an affected plan.
> {code}
> List<Row> rowList = new ArrayList<>();
> rowList.add(Row.of(null, 1L));
> rowList.add(Row.of(2L, 2L));
> rowList.add(Row.of(2L, 2L));
> rowList.add(Row.of(3L, 3L));
> rowList.add(Row.of(null, 3L));
> DataSet<Row> rows = env.fromCollection(rowList, Types.ROW(Types.LONG, 
> Types.LONG));
> DataSet<Long> result = rows
>   .filter(r -> r.getField(0) != null)
>   .setParallelism(4)
>   .groupBy(0)
>   .reduceGroup((Iterable<Row> vals, Collector<Long> out) -> {
>     long cnt = 0L;
>     for (Row v : vals) { cnt++; }
>     out.collect(cnt);
>   }).returns(Types.LONG)
>   .setParallelism(4);
> result.output(new DiscardingOutputFormat<Long>());
> System.out.println(env.getExecutionPlan());
> {code}
> To resolve the problem, we could remove the field-forward property of 
> {{FilterFunction}}. In general, it is typically more efficient to filter 
> before shipping or sorting data. So this might also improve the performance 
> of certain plans.
> As a *workaround* until this bug is fixed, users can implement the filter with 
> a {{FlatMapFunction}}. {{FlatMapFunction}} is a more generic interface and 
> the optimizer cannot automatically infer how the function behaves and won't 
> push partitionings or sorts past the function.
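> 
> A minimal sketch of that workaround for the example above:
> {code:java}
> import org.apache.flink.api.common.functions.FlatMapFunction;
> import org.apache.flink.types.Row;
> import org.apache.flink.util.Collector;
> 
> // Drop-in replacement for the filter() call above. The optimizer cannot
> // infer forwarded fields for a generic FlatMapFunction, so it will not push
> // the partitioning past it and null keys never reach the partitioner.
> public class NullKeyFilter implements FlatMapFunction<Row, Row> {
>     @Override
>     public void flatMap(Row r, Collector<Row> out) {
>         if (r.getField(0) != null) {
>             out.collect(r);
>         }
>     }
> }
> // usage: rows.flatMap(new NullKeyFilter()).groupBy(0)...
> {code}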



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-1915) Faulty plan selection by optimizer

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-1915.
---
Resolution: Won't Fix

We're not actively developing the DataSet API anymore.

> Faulty plan selection by optimizer
> --
>
> Key: FLINK-1915
> URL: https://issues.apache.org/jira/browse/FLINK-1915
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataSet
>Reporter: Till Rohrmann
>Priority: Minor
>
> The optimizer selects for certain jobs a sub-optimal execution plan. 
> For example, the {{WebLogAnalysis}} example job contains a coGroup input 
> which consists of a {{Filter}} and a subsequent {{Projection}}. The optimizer 
> inserts a hash partitioning between the filter and the mapper (projection) 
> and a sorting after the projection. It would be more efficient if the hash 
> partitioning would have been done after the projection, because the data is 
> smaller at this stage.
> I could observe a similar behaviour for a larger job, where the hash 
> partitioning was executed before a filter operation which was then used as 
> input for a join operator. I suspect that the optimizer considers the two 
> plans (hash partitioning before the filter and after the filter) as 
> equivalent in the absence of proper size estimates. However, executing the 
> hash partitioning after the filter should always be more efficient.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10134) UTF-16 support for TextInputFormat

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-10134:
-
Priority: Major  (was: Critical)

> UTF-16 support for TextInputFormat
> --
>
> Key: FLINK-10134
> URL: https://issues.apache.org/jira/browse/FLINK-10134
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataSet
>Affects Versions: 1.4.2
>Reporter: David Dreyfus
>Assignee: Forward Xu
>Priority: Major
>  Labels: pull-request-available
>
> It does not appear that Flink supports a charset encoding of "UTF-16". In 
> particular, it doesn't appear that Flink consumes the Byte Order Mark (BOM) 
> to establish whether a UTF-16 file is UTF-16LE or UTF-16BE.
>  
> TextInputFormat.setCharset("UTF-16") calls DelimitedInputFormat.setCharset(), 
> which sets TextInputFormat.charsetName and then modifies the previously set 
> delimiterString to construct the proper byte string encoding of the 
> delimiter. This same charsetName is also used in TextInputFormat.readRecord() 
> to interpret the bytes read from the file.
>  
> There are two problems that this implementation would seem to have when using 
> UTF-16.
>  # delimiterString.getBytes(getCharset()) in DelimitedInputFormat.java will 
> return a Big Endian byte sequence including the Byte Order Mark (BOM). The 
> actual text file will not contain a BOM at each line ending, so the delimiter 
> will never be read. Moreover, if the actual byte encoding of the file is 
> Little Endian, the bytes will be interpreted incorrectly.
>  # TextInputFormat.readRecord() will not see a BOM each time it decodes a 
> byte sequence with the String(bytes, offset, numBytes, charset) call. 
> Therefore, it will assume Big Endian, which may not always be correct. [1] 
> [https://github.com/apache/flink/blob/master/flink-java/src/main/java/org/apache/flink/api/java/io/TextInputFormat.java#L95]
>  
> While there are likely many solutions, I would think that all of them would 
> have to start by reading the BOM from the file when a Split is opened and 
> then using that BOM to modify the specified encoding to a BOM specific one 
> when the caller doesn't specify one, and to overwrite the caller's 
> specification if the BOM is in conflict with the caller's specification. That 
> is, if the BOM indicates Little Endian and the caller indicates UTF-16BE, 
> Flink should rewrite the charsetName as UTF-16LE.
>  I hope this makes sense and that I haven't been testing incorrectly or 
> misreading the code.
>  
> I've verified the problem on version 1.4.2. I believe the problem exists on 
> all versions. 
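> 
> A minimal sketch of the BOM detection step (plain Java I/O; a complete fix 
> would also have to skip the BOM and adjust the delimiter bytes):
> {code:java}
> import java.io.BufferedInputStream;
> import java.io.FileInputStream;
> import java.io.IOException;
> import java.io.InputStream;
> import java.nio.charset.Charset;
> import java.nio.charset.StandardCharsets;
> 
> public class BomSniffer {
>     // Peek at the first two bytes and derive the concrete UTF-16 variant;
>     // the BOM overrides a plain "UTF-16" hint from the caller.
>     static Charset resolveUtf16(InputStream in) throws IOException {
>         in.mark(2);
>         int b0 = in.read();
>         int b1 = in.read();
>         in.reset();
>         if (b0 == 0xFE && b1 == 0xFF) {
>             return StandardCharsets.UTF_16BE;
>         }
>         if (b0 == 0xFF && b1 == 0xFE) {
>             return StandardCharsets.UTF_16LE;
>         }
>         return StandardCharsets.UTF_16BE; // no BOM: big-endian is the default
>     }
> 
>     public static void main(String[] args) throws IOException {
>         try (InputStream in = new BufferedInputStream(new FileInputStream(args[0]))) {
>             System.out.println(BomSniffer.resolveUtf16(in));
>         }
>     }
> }
> {code}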



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10694) ZooKeeperHaServices Cleanup

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-10694:
-
Component/s: (was: API / DataStream)
 Runtime / Coordination

> ZooKeeperHaServices Cleanup
> ---
>
> Key: FLINK-10694
> URL: https://issues.apache.org/jira/browse/FLINK-10694
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.6.1, 1.7.0
>Reporter: Mikhail Pryakhin
>Assignee: vinoyang
>Priority: Critical
> Fix For: 1.7.3
>
>
> When a streaming job with ZooKeeper HA enabled gets cancelled, the 
> job-related ZooKeeper nodes are not removed. Is there a reason behind that? 
>  I noticed that ZooKeeper paths are created with type "Container Node" (an 
> ephemeral node that can have nested nodes) and fall back to the persistent 
> node type in case ZooKeeper doesn't support this sort of node. 
>  But anyway, it is worth removing the job Zookeeper node when a job is 
> cancelled, isn't it?
> zookeeper version 3.4.10
>  flink version 1.6.1
>  # The job is deployed as a YARN cluster with the following properties set
> {noformat}
>  high-availability: zookeeper
>  high-availability.zookeeper.quorum: 
>  high-availability.zookeeper.storageDir: hdfs:///
>  high-availability.zookeeper.path.root: 
>  high-availability.zookeeper.path.namespace: 
> {noformat}
>  # The job is cancelled via flink cancel  command.
> What I've noticed:
>  when the job is running the following directory structure is created in 
> zookeeper
> {noformat}
> ///leader/resource_manager_lock
> ///leader/rest_server_lock
> ///leader/dispatcher_lock
> ///leader/5c21f00b9162becf5ce25a1cf0e67cde/job_manager_lock
> ///leaderlatch/resource_manager_lock
> ///leaderlatch/rest_server_lock
> ///leaderlatch/dispatcher_lock
> ///leaderlatch/5c21f00b9162becf5ce25a1cf0e67cde/job_manager_lock
> ///checkpoints/5c21f00b9162becf5ce25a1cf0e67cde/041
> ///checkpoint-counter/5c21f00b9162becf5ce25a1cf0e67cde
> ///running_job_registry/5c21f00b9162becf5ce25a1cf0e67cde
> {noformat}
> when the job is cancelled some ephemeral nodes disappear, but most of them 
> are still there:
> {noformat}
> ///leader/5c21f00b9162becf5ce25a1cf0e67cde
> ///leaderlatch/resource_manager_lock
> ///leaderlatch/rest_server_lock
> ///leaderlatch/dispatcher_lock
> ///leaderlatch/5c21f00b9162becf5ce25a1cf0e67cde/job_manager_lock
> ///checkpoints/
> ///checkpoint-counter/
> ///running_job_registry/
> {noformat}
> Here is the method [1] responsible for cleaning zookeeper folders up [1] 
> which is called when a job manager has stopped [2]. 
>  And it seems it only cleans up the *running_job_registry* folder, other 
> folders stay untouched. I suppose that everything under the 
> *///* folder should be cleaned up when the 
> job is cancelled.
> [1] 
> [https://github.com/apache/flink/blob/8674b69964eae50cad024f2c5caf92a71bf21a09/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/zookeeper/ZooKeeperRunningJobsRegistry.java#L107]
>  [2] 
> [https://github.com/apache/flink/blob/f087f57749004790b6f5b823d66822c36ae09927/flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/Dispatcher.java#L332]
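> 
> For illustration, a sketch of the kind of cleanup this asks for, using the 
> Curator API directly (quorum address and base paths are made-up values):
> {code:java}
> import org.apache.curator.framework.CuratorFramework;
> import org.apache.curator.framework.CuratorFrameworkFactory;
> import org.apache.curator.retry.ExponentialBackoffRetry;
> 
> public class ZkJobCleanupSketch {
>     public static void main(String[] args) throws Exception {
>         CuratorFramework client = CuratorFrameworkFactory.newClient(
>                 "zk-host:2181", new ExponentialBackoffRetry(1000, 3));
>         client.start();
>         String jobId = "5c21f00b9162becf5ce25a1cf0e67cde";
>         // remove the whole job subtree, not only the running_job_registry entry
>         for (String base : new String[] {
>                 "/flink/leader/", "/flink/leaderlatch/",
>                 "/flink/checkpoints/", "/flink/checkpoint-counter/"}) {
>             String path = base + jobId;
>             if (client.checkExists().forPath(path) != null) {
>                 client.delete().deletingChildrenIfNeeded().forPath(path);
>             }
>         }
>         client.close();
>     }
> }
> {code}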



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10231) Add a view SQL DDL

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-10231:
-
Priority: Major  (was: Critical)

> Add a view SQL DDL
> --
>
> Key: FLINK-10231
> URL: https://issues.apache.org/jira/browse/FLINK-10231
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Timo Walther
>Assignee: Danny Chen
>Priority: Major
>
> FLINK-10163 added initial view support for the SQL Client. However, for 
> supporting the [full definition of 
> views|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-Create/Drop/AlterView]
>  (with schema, comments, etc.) we need to support native support for views in 
> the Table API.
> {code}
> CREATE VIEW [IF NOT EXISTS] [catalog_name.db_name.]view_name [COMMENT 
> comment] AS SELECT ;
> DROP VIEW[IF EXISTS] [catalog_name.db_name.]view_name;
> ALTER VIEW [IF EXISTS] [catalog_name.db_name.]view_name AS SELECT ;
> ALTER VIEW [IF EXISTS] [catalog_name.db_name.]view_name RENAME TO ;
> ALTER VIEW [IF EXISTS] [catalog_name.db_name.]view_name SET COMMENT =  ;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-13052) Supporting multi-topic when using KafkaTableSourceSinkFactoryBase.createStreamTableSource

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-13052:
-
Priority: Major  (was: Critical)

> Supporting multi-topic when using 
> KafkaTableSourceSinkFactoryBase.createStreamTableSource
> -
>
> Key: FLINK-13052
> URL: https://issues.apache.org/jira/browse/FLINK-13052
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.8.0
>Reporter: chaiyongqiang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-13459) Violation of the order queue messages from RabbitMQ.

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-13459.

Resolution: Abandoned

There was no more activity from the reporter.

> Violation of the order queue messages from RabbitMQ.
> 
>
> Key: FLINK-13459
> URL: https://issues.apache.org/jira/browse/FLINK-13459
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Connectors/ RabbitMQ
>Affects Versions: 1.8.1
>Reporter: Dmitry Kharlashko
>Priority: Critical
>
> When consuming an accumulated message queue from RabbitMQ, the message order 
> is disturbed. Messages come from RabbitMQ in the correct order, but in the 
> stream they are mixed. The stream is created as described in the documentation:
> DataStream<String> stream = env.addSource(new 
> RMQSource<String>(connectionConfig, queueName, true, new 
> SimpleStringSchema()))
>  .setParallelism(1);
> Example:
> The RabbitMQ message queue contains: {message1, message2, message3, message4, ...}.
> The Flink stream receives the messages as: 
> {message1, message3, message4, message2, ...}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-12089) Broadcasting Dynamic Configuration

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-12089:
-
Priority: Major  (was: Critical)

> Broadcasting Dynamic Configuration
> --
>
> Key: FLINK-12089
> URL: https://issues.apache.org/jira/browse/FLINK-12089
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Scala, Runtime / Network
>Affects Versions: 1.7.2
>Reporter: langjingxiang
>Priority: Major
>
> The current ways to broadcast dynamic configuration in Flink are:
> DataStream: broadcasting a stream and joining it 
> DataSet: using broadcast sets 
> But the intrusion into user business logic is relatively large.
> I think we could design an API:
> env.broadcast(confName, confFunction, schedulingTime)
> dataStream.map(new RichMapFunction<IN, OUT>() {
> public void open(Configuration parameters) throws Exception {
> Object broadcastSet = getRuntimeContext().getBroadcastVariable(confName);
> }})
>  
> The JobManager would schedule confFunction and broadcast the result to the task context.
>  
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15674) Let Java and Scala Type Extraction go through the same stack

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-15674:
-
Priority: Major  (was: Critical)

> Let Java and Scala Type Extraction go through the same stack
> 
>
> Key: FLINK-15674
> URL: https://issues.apache.org/jira/browse/FLINK-15674
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Stephan Ewen
>Assignee: Guowei Ma
>Priority: Major
>  Labels: usability
>
> Currently, the Java and Scala Type Extraction stacks are completely different.
> * Java uses the {{TypeExtractor}}
> * Scala uses the type extraction macros.
> As a result, the same class can be extracted as different types in the 
> different stacks, which can lead to very confusing results. In particular, 
> when you use the TypeExtractor on Scala Classes, you always get a 
> {{GenericType}}.
> *Suggestion for New Design*
> There should be one type extraction stack, based on the TypeExtractor.
> * The TypeExtractor should be extensible and load additions through service 
> loaders, similar as it currently loads Avro as an extension.
> * The Scala Type Extraction logic should be such an extension.
> * The Scala macros would only capture the {{Type}} (as in Java type), meaning 
> {{Class}}, or {{ParameterizedType}}, or {{Array}} (etc.) and delegate this to 
> the TypeExtractor.
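> 
> For illustration, a small sketch of the current mismatch (assumes 
> scala-library on the classpath; {{scala.Tuple2}} stands in for any Scala class):
> {code:java}
> import org.apache.flink.api.common.typeinfo.TypeHint;
> import org.apache.flink.api.common.typeinfo.TypeInformation;
> 
> public class ExtractionMismatch {
>     public static void main(String[] args) {
>         TypeInformation<scala.Tuple2<Integer, String>> javaView =
>                 TypeInformation.of(new TypeHint<scala.Tuple2<Integer, String>>() {});
>         // The Java TypeExtractor cannot see inside the Scala class and falls
>         // back to a Kryo-serialized GenericType<scala.Tuple2>, while the
>         // Scala API's createTypeInformation macro would produce a
>         // CaseClassTypeInfo with full field information for the same type.
>         System.out.println(javaView);
>     }
> }
> {code}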



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14644) Improve the usability of connector properties

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-14644:
-
Priority: Major  (was: Critical)

> Improve the usability of connector properties
> -
>
> Key: FLINK-14644
> URL: https://issues.apache.org/jira/browse/FLINK-14644
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Jark Wu
>Assignee: Jark Wu
>Priority: Major
>
> This is an umbrella issue to collect problems of current connector 
> properties, especially when using CREATE TABLE DDL WITH properties. 
> Design documentation: 
> https://docs.google.com/document/d/14MlXFB45NUmUtesFMFZFjFhd4pqmYamxbfNwUPrfdL4/edit#



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15877) Stop using deprecated methods from TableSource interface

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-15877:
-
Priority: Major  (was: Critical)

> Stop using deprecated methods from TableSource interface
> 
>
> Key: FLINK-15877
> URL: https://issues.apache.org/jira/browse/FLINK-15877
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Kurt Young
>Priority: Major
> Fix For: 1.11.0
>
>
> This is an *umbrella* issue to track the cleanup work on the current 
> TableSource interface. 
> Currently, methods like `getReturnType` and `getTableSchema` are already 
> deprecated, but still used by lots of code in various connectors and tests. 
> We should make sure no connector or test code uses these deprecated methods 
> anymore, except for backward-compatibility calls. 
> This is to prepare for further interface improvements.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15849) Update SQL-CLIENT document from type to data-type

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-15849:
-
Priority: Major  (was: Critical)

> Update SQL-CLIENT document from type to data-type
> -
>
> Key: FLINK-15849
> URL: https://issues.apache.org/jira/browse/FLINK-15849
> Project: Flink
>  Issue Type: Task
>  Components: Documentation, Table SQL / API
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
> Fix For: 1.10.1, 1.11.0
>
>
> The sql-client documentation uses {{type}} instead of {{data-type}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15758) Investigate potential out-of-memory problems due to managed unsafe memory allocation

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-15758:
-
Component/s: (was: API / DataSet)

> Investigate potential out-of-memory problems due to managed unsafe memory 
> allocation
> 
>
> Key: FLINK-15758
> URL: https://issues.apache.org/jira/browse/FLINK-15758
> Project: Flink
>  Issue Type: Task
>  Components: Runtime / Task
>Reporter: Andrey Zagrebin
>Assignee: Andrey Zagrebin
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.1, 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> After FLINK-13985, managed memory is allocated via UNSAFE, not as direct nio 
> buffers as before 1.10.
> In FLINK-14894, there was an attempt to release this memory only when all 
> Java handles to the unsafe memory are about to be GC'ed. This is similar to 
> how it worked with direct nio buffers before 1.10, except that the unsafe 
> memory is not tracked by the direct memory limit (-XX:MaxDirectMemorySize). 
> The problem is that over-allocating unsafe memory will not hit the direct 
> limit and therefore will not trigger a GC immediately, even though GC is the 
> only way to release it. This causes out-of-memory failures without triggering 
> the GC that could release a lot of potentially already unused memory.
> We have to investigate further optimisations, such as:
>  * directly monitoring the phantom reference queue of the cleaner (so that, if 
> the JVM quickly detects that there are no more references to the memory, we 
> explicitly release memory that is ready for GC asap, e.g. after Task exit)
>  * monitoring the allocated memory amount and blocking allocation until the GC 
> releases occupied memory, instead of failing with out-of-memory immediately
> cc [~sewen] [~trohrmann]
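> For background, a minimal sketch of the allocate-and-free-via-cleaner pattern 
> this issue is about (plain {{java.lang.ref.Cleaner}} and {{sun.misc.Unsafe}}; 
> Flink's actual memory-management code differs):
> {code:java}
> import java.lang.ref.Cleaner;
> import java.lang.reflect.Field;
> import sun.misc.Unsafe;
> 
> public class UnsafeAllocationSketch {
>     private static final Cleaner CLEANER = Cleaner.create();
>     private static final Unsafe UNSAFE = loadUnsafe();
> 
>     private static Unsafe loadUnsafe() {
>         try {
>             Field f = Unsafe.class.getDeclaredField("theUnsafe");
>             f.setAccessible(true);
>             return (Unsafe) f.get(null);
>         } catch (ReflectiveOperationException e) {
>             throw new Error(e);
>         }
>     }
> 
>     static final class OffHeapSegment {
>         final long address;
> 
>         OffHeapSegment(long size) {
>             this.address = UNSAFE.allocateMemory(size);
>             final long addr = this.address;
>             // The memory is freed only after the segment becomes unreachable
>             // AND a GC cycle actually runs -- which is why over-allocating
>             // can fail with out-of-memory before anything is released.
>             CLEANER.register(this, () -> UNSAFE.freeMemory(addr));
>         }
>     }
> }
> {code}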



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15986) support setting or changing session properties in Flink SQL

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-15986:
-
Priority: Major  (was: Critical)

> support setting or changing session properties in Flink SQL
> ---
>
> Key: FLINK-15986
> URL: https://issues.apache.org/jira/browse/FLINK-15986
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API, Table SQL / Ecosystem
>Reporter: Bowen Li
>Assignee: Kurt Young
>Priority: Major
> Fix For: 1.11.0
>
>
> As Flink SQL becomes more and more critical for users running batch jobs, 
> experiments, and OLAP exploration, it's more important than ever to support 
> setting and changing session properties in Flink SQL.
>  
> Use cases include switching SQL dialects at runtime, switching the job mode 
> between "streaming" and "batch", and changing other params defined in 
> flink-conf.yaml and default-sql-client.yaml.
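> A hedged sketch of what such statements could look like in the SQL CLI (the 
> {{SET key=value}} command already exists; {{execution.type}} is a real 1.10 SQL 
> Client property, while {{table.sql-dialect}} is illustrative only):
> {code:sql}
> -- switch the job mode for subsequent queries
> SET execution.type=batch;
> -- switch the SQL dialect at runtime (hypothetical property name)
> SET table.sql-dialect=hive;
> {code}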



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-16555) Preflight check for known unstable hashCodes.

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-16555.

Resolution: Fixed

master: 7cf0f6880896bb30bec2f8fc0c6193bfa35c4f04

This adds rejection of enums. Arrays and objects that didn't define 
{{hashCode()}} were already rejected before.
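
For context, a small example of why enums are rejected: {{Enum}} does not 
override {{Object.hashCode()}}, so the hash is an identity hash that differs 
between JVMs (a minimal sketch, not Flink code):
{code:java}
enum Color { RED, GREEN, BLUE }

public class UnstableKeyExample {
    public static void main(String[] args) {
        // Prints a different value in (almost) every JVM run: Enum.hashCode()
        // is final and falls back to Object's identity hash. A keyBy() on such
        // a key would route the same constant to different subtasks on
        // different TaskManagers, hence the eager rejection.
        System.out.println(Color.RED.hashCode());
    }
}
{code}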

> Preflight check for known unstable hashCodes.
> -
>
> Key: FLINK-16555
> URL: https://issues.apache.org/jira/browse/FLINK-16555
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream, API / Type Serialization System
>Reporter: Stephan Ewen
>Assignee: Lu Niu
>Priority: Critical
>  Labels: pull-request-available, usability
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Data types can only be used as keys if they have a stable hash code 
> implementation that is deterministic across JVMs. Otherwise, keyBy() 
> operations will result in incorrect data routing.
> We should eagerly check the key type information for known cases of types with 
> unstable hash codes, such as:
> * arrays
> * enums
> * anything that does not override Object.hashCode



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-15674) Let Java and Scala Type Extraction go through the same stack

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek reassigned FLINK-15674:


Assignee: Guowei Ma

> Let Java and Scala Type Extraction go through the same stack
> 
>
> Key: FLINK-15674
> URL: https://issues.apache.org/jira/browse/FLINK-15674
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Stephan Ewen
>Assignee: Guowei Ma
>Priority: Critical
>  Labels: usability
>
> Currently, the Java and Scala Type Extraction stacks are completely different.
> * Java uses the {{TypeExtractor}}
> * Scala uses the type extraction macros.
> As a result, the same class can be extracted as different types in the 
> different stacks, which can lead to very confusing results. In particular, 
> when you use the TypeExtractor on Scala classes, you always get a 
> {{GenericType}}.
> *Suggestion for New Design*
> There should be one type extraction stack, based on the TypeExtractor.
> * The TypeExtractor should be extensible and load additions through service 
> loaders, similar to how it currently loads Avro as an extension.
> * The Scala Type Extraction logic should be such an extension.
> * The Scala Macros would only capture the {{Type}} (as in Java type), meaning 
> {{Class}}, or {{ParameterizedType}}, or {{Array}} (etc.), and delegate this to 
> the TypeExtractor.
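> A hedged sketch of what this extension point could look like (all names here 
> are hypothetical; see FLINK-17039 for the actual framework work):
> {code:java}
> import java.lang.reflect.Type;
> import java.util.Optional;
> import java.util.ServiceLoader;
> import org.apache.flink.api.common.typeinfo.TypeInformation;
> 
> // Hypothetical SPI that, e.g., a Scala extraction module would implement.
> public interface TypeInformationExtractor {
>     Optional<TypeInformation<?>> extract(Type type);
> }
> 
> // Discovery inside the TypeExtractor, analogous to the existing Avro hook:
> // ServiceLoader.load(TypeInformationExtractor.class).forEach(...);
> {code}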



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-17041) Migrate current TypeInformation creation to the TypeInformationExtractor framework

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek reassigned FLINK-17041:


Assignee: Guowei Ma

> Migrate current TypeInformation creation to the TypeInformationExtractor 
> framework
> --
>
> Key: FLINK-17041
> URL: https://issues.apache.org/jira/browse/FLINK-17041
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Reporter: Guowei Ma
>Assignee: Guowei Ma
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-17039) Introduce TypeInformationExtractor framework

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek reassigned FLINK-17039:


Assignee: Guowei Ma

> Introduce TypeInformationExtractor framework
> ---
>
> Key: FLINK-17039
> URL: https://issues.apache.org/jira/browse/FLINK-17039
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Reporter: Guowei Ma
>Assignee: Guowei Ma
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-17038) Decouple resolving Type from creating TypeInformation process

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek reassigned FLINK-17038:


Assignee: Guowei Ma

> Decouple resolving Type from creating TypeInformation process
> -
>
> Key: FLINK-17038
> URL: https://issues.apache.org/jira/browse/FLINK-17038
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Affects Versions: 1.10.0
>Reporter: Guowei Ma
>Assignee: Guowei Ma
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17038) Decouple resolving Type from creating TypeInformation process

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-17038:
-
Priority: Major  (was: Critical)

> Decouple resolving Type from creating TypeInformation process
> -
>
> Key: FLINK-17038
> URL: https://issues.apache.org/jira/browse/FLINK-17038
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Affects Versions: 1.10.0
>Reporter: Guowei Ma
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17039) Introduce TypeInformationExtractor framework

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-17039:
-
Priority: Major  (was: Critical)

> Introduce TypeInformationExtractor framework
> ---
>
> Key: FLINK-17039
> URL: https://issues.apache.org/jira/browse/FLINK-17039
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Reporter: Guowei Ma
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-9429) Quickstart E2E not working locally

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-9429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-9429.
---
Fix Version/s: 1.11.0
   Resolution: Fixed

Closing this for now since cancelling seems to work OK, or the test passes so 
quickly that it doesn't matter.

The issue with the test not working locally was resolved as well. 
 

> Quickstart E2E not working locally
> --
>
> Key: FLINK-9429
> URL: https://issues.apache.org/jira/browse/FLINK-9429
> Project: Flink
>  Issue Type: Bug
>  Components: Quickstarts, Tests
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Till Rohrmann
>Assignee: Aljoscha Krettek
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The quickstart e2e test is not working locally. It seems as if the job does 
> not produce anything into Elasticsearch. Furthermore, the test does not 
> terminate with control-C.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15772) Shaded Hadoop S3A with credentials provider end-to-end test fails on travis

2020-04-08 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17078068#comment-17078068
 ] 

Aljoscha Krettek commented on FLINK-15772:
--

Fix on release-1.10: 0ad2f0634e81921472ebb506f9d826c7ad6f69a3

> Shaded Hadoop S3A with credentials provider end-to-end test fails on travis
> ---
>
> Key: FLINK-15772
> URL: https://issues.apache.org/jira/browse/FLINK-15772
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Tests
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Yu Li
>Assignee: Aljoscha Krettek
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.11.0
>
>
> Shaded Hadoop S3A with credentials provider end-to-end test fails on travis 
> with below error:
> {code}
> Job with JobID 048b4651c0ba858b926aeb36f5315058 has finished.
> Job Runtime: 6016 ms
> sort: cannot read: 
> '/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-17605057822/temp/test_batch_wordcount-2abf3dbf-b4ba-4d3a-a43b-c43e710eb439*':
>  No such file or directory
> FAIL WordCount (hadoop_with_provider): Output hash mismatch.  Got 
> d41d8cd98f00b204e9800998ecf8427e, expected 72a690412be8928ba239c2da967328a5.
> head hexdump of actual:
> head: cannot open 
> '/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-17605057822/temp/test_batch_wordcount-2abf3dbf-b4ba-4d3a-a43b-c43e710eb439*'
>  for reading: No such file or directory
> ed2bf7571ec8ab184b7316809da0b2facb9b367a7c7f0f1bdaac6dd5e6f107ae
> ed2bf7571ec8ab184b7316809da0b2facb9b367a7c7f0f1bdaac6dd5e6f107ae
> [FAIL] Test script contains errors.
> Checking for errors...
> No errors in log files.
> Checking for exceptions...
> No exceptions in log files.
> Checking for non-empty .out files...
> No non-empty .out files.
> [FAIL] 'Shaded Hadoop S3A with credentials provider end-to-end test' failed 
> after 0 minutes and 20 seconds! Test exited with exit code 1
> {code}
> https://api.travis-ci.org/v3/job/641444512/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-9429) Quickstart E2E not working locally

2020-04-08 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-9429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17078021#comment-17078021
 ] 

Aljoscha Krettek edited comment on FLINK-9429 at 4/8/20, 10:58 AM:
---

The reason the test doesn't work locally is that {{run-single-test.sh}} has the 
order of "imports" wrong, i.e. we have:
{code:java}
source "${END_TO_END_DIR}/test-scripts/test-runner-common.sh"
source "${END_TO_END_DIR}/../tools/ci/maven-utils.sh"
{code}
while {{run-nightly-tests.sh}} has:
{code:java}
source "${END_TO_END_DIR}/../tools/ci/maven-utils.sh"
source "${END_TO_END_DIR}/test-scripts/test-runner-common.sh"
{code}
This leads to this line failing in {{test-runner-common.sh}} because 
{{run_mvn}} is not available:
{code:java}
export FLINK_VERSION=$(MVN_RUN_VERBOSE=false run_mvn --file 
${END_TO_END_DIR}/pom.xml 
org.apache.maven.plugins:maven-help-plugin:3.1.0:evaluate 
-Dexpression=project.version -q -DforceStdout)
{code}

(edited to fix "import" ordering)


was (Author: aljoscha):
The reason the test doesn't work locally is that {{run-single-test.sh}} has the 
order of "imports" wrong. i.e. we have:
{code:java}
source "${END_TO_END_DIR}/test-scripts/test-runner-common.sh"
source "${END_TO_END_DIR}/../tools/ci/maven-utils.sh" {code}
while {{run-nightly-tests.sh}} has:
{code:java}
source "${END_TO_END_DIR}/test-scripts/test-runner-common.sh"
source "${END_TO_END_DIR}/../tools/ci/maven-utils.sh" {code}
This leads to this line failing in {{test-runner-common.sh because 
}}{{mvn_run}} is not available:
{code:java}
export FLINK_VERSION=$(MVN_RUN_VERBOSE=false run_mvn --file 
${END_TO_END_DIR}/pom.xml 
org.apache.maven.plugins:maven-help-plugin:3.1.0:evaluate 
-Dexpression=project.version -q -DforceStdout) {code}

> Quickstart E2E not working locally
> --
>
> Key: FLINK-9429
> URL: https://issues.apache.org/jira/browse/FLINK-9429
> Project: Flink
>  Issue Type: Bug
>  Components: Quickstarts, Tests
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Till Rohrmann
>Assignee: Aljoscha Krettek
>Priority: Critical
>  Labels: pull-request-available, test-stability
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The quickstart e2e test is not working locally. It seems as if the job does 
> not produce anything into Elasticsearch. Furthermore, the test does not 
> terminate with control-C.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-9429) Quickstart E2E not working locally

2020-04-08 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-9429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17078021#comment-17078021
 ] 

Aljoscha Krettek edited comment on FLINK-9429 at 4/8/20, 10:58 AM:
---

The reason the test doesn't work locally is that {{run-single-test.sh}} has the 
order of "imports" wrong. i.e. we have:
{code:java}
source "${END_TO_END_DIR}/test-scripts/test-runner-common.sh"
source "${END_TO_END_DIR}/../tools/ci/maven-utils.sh"
{code}
while {{run-nightly-tests.sh}} has:
{code:java}
source "${END_TO_END_DIR}/../tools/ci/maven-utils.sh"
source "${END_TO_END_DIR}/test-scripts/test-runner-common.sh"
{code}
This leads to this line failing in {{test-runner-common.sh}} because 
{{run_mvn}} is not available:
{code:java}
export FLINK_VERSION=$(MVN_RUN_VERBOSE=false run_mvn --file 
${END_TO_END_DIR}/pom.xml 
org.apache.maven.plugins:maven-help-plugin:3.1.0:evaluate 
-Dexpression=project.version -q -DforceStdout) {code}

(edited to fix "import" ordering)


was (Author: aljoscha):
The reason the test doesn't work locally is that {{run-single-test.sh}} has the 
order of "imports" wrong. i.e. we have:
{code:java}
source "${END_TO_END_DIR}/test-scripts/test-runner-common.sh"
source "${END_TO_END_DIR}/../tools/ci/maven-utils.sh" {code}
while {{run-nightly-tests.sh}} has:
{code:java}
source "${END_TO_END_DIR}/../tools/ci/maven-utils.sh" {code}
source "${END_TO_END_DIR}/test-scripts/test-runner-common.sh"
This leads to this line failing in {{test-runner-common.sh because 
}}{{mvn_run}} is not available:
{code:java}
export FLINK_VERSION=$(MVN_RUN_VERBOSE=false run_mvn --file 
${END_TO_END_DIR}/pom.xml 
org.apache.maven.plugins:maven-help-plugin:3.1.0:evaluate 
-Dexpression=project.version -q -DforceStdout) {code}

(edited to fix "import" ordering)

> Quickstart E2E not working locally
> --
>
> Key: FLINK-9429
> URL: https://issues.apache.org/jira/browse/FLINK-9429
> Project: Flink
>  Issue Type: Bug
>  Components: Quickstarts, Tests
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Till Rohrmann
>Assignee: Aljoscha Krettek
>Priority: Critical
>  Labels: pull-request-available, test-stability
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The quickstart e2e test is not working locally. It seems as if the job does 
> not produce anything into Elasticsearch. Furthermore, the test does not 
> terminate with control-C.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-9429) Quickstart E2E not working locally

2020-04-08 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-9429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17078062#comment-17078062
 ] 

Aljoscha Krettek commented on FLINK-9429:
-

Fixed test.
master: 5e5fddb81d212bb5b9f6f48868e6dc21715d8ada

There's still the issue that the test can't be interrupted; I'll also fix that 
in a sec.

> Quickstart E2E not working locally
> --
>
> Key: FLINK-9429
> URL: https://issues.apache.org/jira/browse/FLINK-9429
> Project: Flink
>  Issue Type: Bug
>  Components: Quickstarts, Tests
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Till Rohrmann
>Assignee: Aljoscha Krettek
>Priority: Critical
>  Labels: pull-request-available, test-stability
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The quickstart e2e test is not working locally. It seems as if the job does 
> not produce anything into Elasticsearch. Furthermore, the test does not 
> terminate with control-C.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-9429) Quickstart E2E not working locally

2020-04-08 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-9429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17078059#comment-17078059
 ] 

Aljoscha Krettek commented on FLINK-9429:
-

I think this is a different bug from the one originally reported but it works 
locally for me now.

> Quickstart E2E not working locally
> --
>
> Key: FLINK-9429
> URL: https://issues.apache.org/jira/browse/FLINK-9429
> Project: Flink
>  Issue Type: Bug
>  Components: Quickstarts, Tests
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Till Rohrmann
>Assignee: Aljoscha Krettek
>Priority: Critical
>  Labels: pull-request-available, test-stability
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The quickstart e2e test is not working locally. It seems as if the job does 
> not produce anything into Elasticsearch. Furthermore, the test does not 
> terminate with control-C.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-9429) Quickstart E2E not working locally

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-9429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek reassigned FLINK-9429:
---

Assignee: Aljoscha Krettek  (was: zhangminglei)

> Quickstart E2E not working locally
> --
>
> Key: FLINK-9429
> URL: https://issues.apache.org/jira/browse/FLINK-9429
> Project: Flink
>  Issue Type: Bug
>  Components: Quickstarts, Tests
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Till Rohrmann
>Assignee: Aljoscha Krettek
>Priority: Critical
>  Labels: test-stability
>
> The quickstart e2e test is not working locally. It seems as if the job does 
> not produce anything into Elasticsearch. Furthermore, the test does not 
> terminate with control-C.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-9429) Quickstart E2E not working locally

2020-04-08 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-9429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17078021#comment-17078021
 ] 

Aljoscha Krettek commented on FLINK-9429:
-

The reason the test doesn't work locally is that {{run-single-test.sh}} has the 
order of "imports" wrong. i.e. we have:
{code:java}
source "${END_TO_END_DIR}/test-scripts/test-runner-common.sh"
source "${END_TO_END_DIR}/../tools/ci/maven-utils.sh" {code}
while {{run-nightly-tests.sh}} has:
{code:java}
source "${END_TO_END_DIR}/test-scripts/test-runner-common.sh"
source "${END_TO_END_DIR}/../tools/ci/maven-utils.sh" {code}
This leads to this line failing in {{test-runner-common.sh}} because 
{{run_mvn}} is not available:
{code:java}
export FLINK_VERSION=$(MVN_RUN_VERBOSE=false run_mvn --file 
${END_TO_END_DIR}/pom.xml 
org.apache.maven.plugins:maven-help-plugin:3.1.0:evaluate 
-Dexpression=project.version -q -DforceStdout) {code}

> Quickstart E2E not working locally
> --
>
> Key: FLINK-9429
> URL: https://issues.apache.org/jira/browse/FLINK-9429
> Project: Flink
>  Issue Type: Bug
>  Components: Quickstarts, Tests
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Till Rohrmann
>Assignee: zhangminglei
>Priority: Critical
>  Labels: test-stability
>
> The quickstart e2e test is not working locally. It seems as if the job does 
> not produce anything into Elasticsearch. Furthermore, the test does not 
> terminate with control-C.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15772) Shaded Hadoop S3A with credentials provider end-to-end test fails on travis

2020-04-08 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17077993#comment-17077993
 ] 

Aljoscha Krettek commented on FLINK-15772:
--

{{release-1.10}} doesn't have the fix. I will also push it there now.

 

> Shaded Hadoop S3A with credentials provider end-to-end test fails on travis
> ---
>
> Key: FLINK-15772
> URL: https://issues.apache.org/jira/browse/FLINK-15772
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Tests
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Yu Li
>Assignee: Aljoscha Krettek
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.11.0
>
>
> Shaded Hadoop S3A with credentials provider end-to-end test fails on travis 
> with below error:
> {code}
> Job with JobID 048b4651c0ba858b926aeb36f5315058 has finished.
> Job Runtime: 6016 ms
> sort: cannot read: 
> '/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-17605057822/temp/test_batch_wordcount-2abf3dbf-b4ba-4d3a-a43b-c43e710eb439*':
>  No such file or directory
> FAIL WordCount (hadoop_with_provider): Output hash mismatch.  Got 
> d41d8cd98f00b204e9800998ecf8427e, expected 72a690412be8928ba239c2da967328a5.
> head hexdump of actual:
> head: cannot open 
> '/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-17605057822/temp/test_batch_wordcount-2abf3dbf-b4ba-4d3a-a43b-c43e710eb439*'
>  for reading: No such file or directory
> ed2bf7571ec8ab184b7316809da0b2facb9b367a7c7f0f1bdaac6dd5e6f107ae
> ed2bf7571ec8ab184b7316809da0b2facb9b367a7c7f0f1bdaac6dd5e6f107ae
> [FAIL] Test script contains errors.
> Checking for errors...
> No errors in log files.
> Checking for exceptions...
> No exceptions in log files.
> Checking for non-empty .out files...
> No non-empty .out files.
> [FAIL] 'Shaded Hadoop S3A with credentials provider end-to-end test' failed 
> after 0 minutes and 20 seconds! Test exited with exit code 1
> {code}
> https://api.travis-ci.org/v3/job/641444512/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-9017) FlinkKafkaProducer011ITCase.testFlinkKafkaProducer011FailBeforeNotify times out on Travis

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-9017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-9017.
---
Resolution: Cannot Reproduce

I'm closing this for now since there have been no new reported cases in 
several months. Please re-open if it occurs again.

> FlinkKafkaProducer011ITCase.testFlinkKafkaProducer011FailBeforeNotify times 
> out on Travis
> -
>
> Key: FLINK-9017
> URL: https://issues.apache.org/jira/browse/FLINK-9017
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Jiangjie Qin
>Priority: Critical
>  Labels: test-stability
>
> {{FlinkKafkaProducer011ITCase.testFlinkKafkaProducer011FailBeforeNotify}} 
> times out on Travis.
> https://api.travis-ci.org/v3/job/355025366/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-12540) Kafka011ProducerExactlyOnceITCase.testExactlyOnceCustomOperator

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-12540.

Resolution: Cannot Reproduce

I'm closing this for now since there have been no new reported cases in 
several months. Please re-open if it occurs again.

> Kafka011ProducerExactlyOnceITCase.testExactlyOnceCustomOperator
> ---
>
> Key: FLINK-12540
> URL: https://issues.apache.org/jira/browse/FLINK-12540
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Reporter: vinoyang
>Priority: Critical
>  Labels: test-stability
>
> The test always prints this message until the timeout is exceeded:
> {code:java}
> 17:56:34,950 INFO  
> org.apache.flink.streaming.connectors.kafka.testutils.FailingIdentityMapper  
> - > Failing mapper  0: count=690, totalCount=1000
> {code}
> log details : [https://api.travis-ci.org/v3/job/533358203/log.txt]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-12032) KafkaProducerAtLeastOnceITCase failed to bind to address

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-12032.

Resolution: Cannot Reproduce

I'm closing this for now since there have been no new reported cases in 
several months. Please re-open if it occurs again.

> KafkaProducerAtLeastOnceITCase failed to bind to address
> 
>
> Key: FLINK-12032
> URL: https://issues.apache.org/jira/browse/FLINK-12032
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.8.0, 1.9.0
>Reporter: Till Rohrmann
>Assignee: Jiangjie Qin
>Priority: Critical
>  Labels: test-stability
>
> The {{KafkaProducerAtLeastOnceITCase}} failed on Travis because it could not 
> bind to the specified address.
> https://api.travis-ci.org/v3/job/511419185/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-11995) Kafka09SecuredRunITCase fails due to port conflict

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-11995.

Resolution: Cannot Reproduce

I'm closing this for now since there have been no new reported cases in 
several months. Please re-open if it occurs again.

> Kafka09SecuredRunITCase fails due to port conflict
> --
>
> Key: FLINK-11995
> URL: https://issues.apache.org/jira/browse/FLINK-11995
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.8.0
>Reporter: Chesnay Schepler
>Assignee: Jiangjie Qin
>Priority: Critical
>  Labels: test-stability
>
> [https://travis-ci.org/apache/flink/jobs/509650718]
>  
> {code:java}
> 22:40:05,370 INFO  
> org.apache.flink.streaming.connectors.kafka.Kafka09SecuredRunITCase  - 
> Starting Kafka09SecuredRunITCase 
> 22:40:05,373 INFO  
> org.apache.flink.streaming.connectors.kafka.Kafka09SecuredRunITCase  - 
> -
> 22:40:10,553 INFO  org.apache.flink.streaming.connectors.kafka.KafkaTestBase  
>- Starting KafkaTestBase.prepare() for Kafka 0.9
> 22:40:10,554 INFO  
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl  - 
> Starting Zookeeper
> java.net.BindException: Address already in use
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
>   at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:90)
>   at 
> org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:117)
>   at 
> org.apache.curator.test.TestingZooKeeperMain.runFromConfig(TestingZooKeeperMain.java:93)
>   at 
> org.apache.curator.test.TestingZooKeeperServer$1.run(TestingZooKeeperServer.java:148)
>   at java.lang.Thread.run(Thread.java:748)
> 22:40:13.083 [ERROR] Tests run: 2, Failures: 1, Errors: 1, Skipped: 0, Time 
> elapsed: 10.529 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.Kafka09SecuredRunITCase{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-10737) FlinkKafkaProducerITCase.testScaleDownBeforeFirstCheckpoint failed on Travis

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-10737.

Resolution: Cannot Reproduce

I'm closing this for now since there have been no new reported cases in 
several months. Please re-open if it occurs again.

> FlinkKafkaProducerITCase.testScaleDownBeforeFirstCheckpoint failed on Travis
> 
>
> Key: FLINK-10737
> URL: https://issues.apache.org/jira/browse/FLINK-10737
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.7.0, 1.8.0
>Reporter: Till Rohrmann
>Assignee: Jiangjie Qin
>Priority: Critical
>  Labels: test-stability
>
> The {{FlinkKafkaProducerITCase.testScaleDownBeforeFirstCheckpoint}} failed on 
> Travis:
> https://api.travis-ci.org/v3/job/448781612/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-12033) KafkaITCase.testCancelingFullTopic failed on Travis

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-12033.

Resolution: Cannot Reproduce

I'm closing this for now since there have been no new reported cases in 
several months. Please re-open if it occurs again.

> KafkaITCase.testCancelingFullTopic failed on Travis
> ---
>
> Key: FLINK-12033
> URL: https://issues.apache.org/jira/browse/FLINK-12033
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Test Infrastructure
>Affects Versions: 1.8.0
>Reporter: Till Rohrmann
>Assignee: Jiangjie Qin
>Priority: Critical
>  Labels: test-stability
>
> The {{KafkaITCase.testCancelingFullTopic}} failed on Travis because it timed 
> out.
> https://api.travis-ci.org/v3/job/511419185/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-11426) Kafka09ProducerITCase test case failed while starting KafkaCluster

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-11426.

Resolution: Cannot Reproduce

I'm closing this for now since there have been no new reported cases in 
several months. Please re-open if it occurs again.

> Kafka09ProducerITCase test case failed while starting KafkaCluster
> --
>
> Key: FLINK-11426
> URL: https://issues.apache.org/jira/browse/FLINK-11426
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Reporter: vinoyang
>Assignee: Jiangjie Qin
>Priority: Critical
>  Labels: test-stability
>
> error info:
> {code:java}
> 10:15:02,459 ERROR 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase  - Error 
> while sending record to Kafka: Test error
> java.lang.Exception: Test error
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaProducerTest$1.answer(KafkaProducerTest.java:78)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaProducerTest$1.answer(KafkaProducerTest.java:74)
>   at 
> org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:39)
>   at 
> org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:96)
>   at 
> org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29)
>   at 
> org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:35)
>   at 
> org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:63)
>   at 
> org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:49)
>   at 
> org.mockito.internal.creation.bytebuddy.MockMethodInterceptor$DispatcherDefaultingToRealMethod.interceptSuperCallable(MockMethodInterceptor.java:110)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer$MockitoMock$1191687640.send(Unknown
>  Source)
>   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase.invoke(FlinkKafkaProducerBase.java:313)
>   at 
> org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
>   at 
> org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness.processElement(OneInputStreamOperatorTestHarness.java:111)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaProducerTest.testPropagateExceptions(KafkaProducerTest.java:116)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:326)
>   at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:89)
>   at 
> org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:97)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:310)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:131)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.access$100(PowerMockJUnit47RunnerDelegateImpl.java:59)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner$TestExecutorStatement.evaluate(PowerMockJUnit47RunnerDelegateImpl.java:147)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.evaluateStatement(PowerMockJUnit47RunnerDelegateImpl.java:107)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTest(PowerMockJUnit47RunnerDelegateImpl.java:82)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runBeforesThenTestThenAfters(PowerMockJUnit44RunnerDelegateImpl.java:298)
>   at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:87)
>   at org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:50)
>   at 
> 

[jira] [Closed] (FLINK-10837) Kafka 2.0 test KafkaITCase.testOneToOneSources deadlocked on deleteTestTopic?

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-10837.

Resolution: Cannot Reproduce

I'm closing this for now since there have been no new reported cases in 
several months. Please re-open if it occurs again.

> Kafka 2.0 test KafkaITCase.testOneToOneSources deadlocked on deleteTestTopic?
> --
>
> Key: FLINK-10837
> URL: https://issues.apache.org/jira/browse/FLINK-10837
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.7.0, 1.8.0, 1.9.0
>Reporter: Piotr Nowojski
>Priority: Critical
>  Labels: test-stability
>
> https://api.travis-ci.org/v3/job/452439034/log.txt
> {noformat}
> Tests in error: 
>   
> KafkaITCase.testOneToOneSources:97->KafkaConsumerTestBase.runOneToOneExactlyOnceTest:875->KafkaTestBase.deleteTestTopic:206->Object.wait:502->Object.wait:-2
>  » TestTimedOut
> {noformat}
> {noformat}
> java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Object.wait(Object.java:502)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:92)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:262)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.deleteTestTopic(KafkaTestEnvironmentImpl.java:158)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.deleteTestTopic(KafkaTestBase.java:206)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runOneToOneExactlyOnceTest(KafkaConsumerTestBase.java:875)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase.testOneToOneSources(KafkaITCase.java:97)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-13709) Kafka09ITCase hangs when starting the KafkaServer on Travis

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-13709.

Resolution: Cannot Reproduce

I'm closing this for now since there have been no new reported cases in 
several months. Please re-open if it occurs again.

> Kafka09ITCase hangs when starting the KafkaServer on Travis
> ---
>
> Key: FLINK-13709
> URL: https://issues.apache.org/jira/browse/FLINK-13709
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> The {{Kafka09ITCase}} hangs when starting the {{KafkaServer}} on Travis.
> https://api.travis-ci.org/v3/job/571295948/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-12034) KafkaITCase.testAllDeletes failed on Travis

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-12034.

Resolution: Cannot Reproduce

I'm closing this for now since there have been no new reported cases in 
several months. Please re-open if it occurs again.

> KafkaITCase.testAllDeletes failed on Travis
> ---
>
> Key: FLINK-12034
> URL: https://issues.apache.org/jira/browse/FLINK-12034
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.8.0
>Reporter: Till Rohrmann
>Assignee: Jiangjie Qin
>Priority: Critical
>  Labels: test-stability
>
> The {{KafkaITCase.testAllDeletes}} failed on Travis because it timed out:
> {code}
> 12:36:24.782 [ERROR] Errors: 
> 12:36:24.783 [ERROR]   
> KafkaITCase.testAllDeletes:131->KafkaConsumerTestBase.runAllDeletesTest:1526->KafkaTestBase.deleteTestTopic:192->Object.wait:-2
>  » TestTimedOut
> 12:36:24.783 [INFO] 
> 12:36:24.783 [ERROR] Tests run: 46, Failures: 0, Errors: 1, Skipped: 0
> {code}
> https://api.travis-ci.org/v3/job/511419474/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-8521) FlinkKafkaProducer011ITCase.testRunOutOfProducersInThePool timed out on Travis

2020-04-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-8521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-8521.
---
Resolution: Cannot Reproduce

I'm closing this for now since there have been no new reported cases in 
several months. Please re-open if it occurs again.

> FlinkKafkaProducer011ITCase.testRunOutOfProducersInThePool timed out on Travis
> --
>
> Key: FLINK-8521
> URL: https://issues.apache.org/jira/browse/FLINK-8521
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> The {{FlinkKafkaProducer011ITCase.testRunOutOfProducersInThePool}} timed out 
> on Travis with producing no output for longer than 300s.
>  
> https://travis-ci.org/tillrohrmann/flink/jobs/334642014



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

