[jira] [Assigned] (FLINK-21336) Activate bloom filter in RocksDB State Backend via Flink configuration

2021-06-11 Thread Jun Qin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Qin reassigned FLINK-21336:
---

Assignee: Jun Qin

> Activate bloom filter in RocksDB State Backend via Flink configuration
> --
>
> Key: FLINK-21336
> URL: https://issues.apache.org/jira/browse/FLINK-21336
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Jun Qin
>Assignee: Jun Qin
>Priority: Major
>  Labels: auto-unassigned
>
> Activating the bloom filter in the RocksDB state backend improves read 
> performance. Currently, activating the bloom filter can only be done by 
> implementing a custom ConfigurableRocksDBOptionsFactory. I think we should 
> provide an option to activate the bloom filter via the Flink configuration.
> See also the discussion on the mailing list:
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Activate-bloom-filter-in-RocksDB-State-Backend-via-Flink-configuration-td48636.html
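For context, enabling the bloom filter today requires shipping a custom options factory class; the proposal is to replace that with plain configuration entries. A hypothetical flink-conf.yaml sketch of what such options could look like (the option names here are an assumption, not a committed Flink API):

```yaml
# Hypothetical flink-conf.yaml entries sketching the proposal; exact option
# names are an assumption, not a committed Flink API.
state.backend.rocksdb.use-bloom-filter: true
# Typical RocksDB bloom filter tuning knobs such options might expose:
state.backend.rocksdb.bloom-filter.bits-per-key: 10.0
state.backend.rocksdb.bloom-filter.block-based-mode: false
```

This would let users enable the filter without writing or deploying any Java code.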



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10121) Introduce methods to remove registered operator states

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10121:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but it is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is still Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be deprioritized.


> Introduce methods to remove registered operator states
> --
>
> Key: FLINK-10121
> URL: https://issues.apache.org/jira/browse/FLINK-10121
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Stefan Richter
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> Users can register new operator states but can never remove a registered state. 
> This is particularly problematic with expensive states or states that we 
> register only to provide backwards compatibility. We could also consider the 
> same for keyed state.





[jira] [Updated] (FLINK-10168) Add FileFilter interface and FileModTimeFilter which sets a read start position for files by modification time

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10168:
---
Labels: auto-unassigned pull-request-available stale-major  (was: 
auto-unassigned pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but it is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is still Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be deprioritized.


> Add FileFilter interface and FileModTimeFilter which sets a read start 
> position for files by modification time
> --
>
> Key: FLINK-10168
> URL: https://issues.apache.org/jira/browse/FLINK-10168
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.6.0
>Reporter: Bowen Li
>Priority: Major
>  Labels: auto-unassigned, pull-request-available, stale-major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Update: The motivation is 1) enabling users to set a read start position for 
> files, so they can process files that are modified after a given timestamp, and 
> 2) exposing more file information to users and providing them with a more 
> flexible file filter interface to define their own filtering rules.
> ---
> Support filtering files by modified/created time in 
> {{StreamExecutionEnvironment.readFile()}}.
> For example, in a source dir with lots of files, we may only want to read files 
> that were created or modified after a specific time.
> This API can expose a generic filter function over files and let users define 
> filtering rules. Currently Flink only supports filtering files by path: the API 
> is {{FileInputFormat.setFilesFilters(PathFilter)}}, which takes only one file 
> path filter. A more generic API that can take more filters could look like 
> this: 1) {{FileInputFormat.setFilesFilters(List(PathFilter, ModifiedTimeFilter, 
> ...))}}
> 2) or {{FileInputFormat.setFilesFilters(FileFilter)}}, where {{FileFilter}} 
> exposes all file attributes that Flink's file system can provide, like path 
> and modified time.
> I lean towards the 2nd option, because it gives users more flexibility to 
> define complex filtering rules based on combinations of file attributes.
> 
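The 2nd option could be sketched roughly as follows, using plain JDK stand-ins rather than Flink's actual types (the FileInfo record and the method names here are hypothetical illustrations, not Flink API):

```java
/** Sketch of option 2: one generic filter that sees all file attributes. */
public class FileFilterSketch {

    /** Hypothetical stand-in for the attributes Flink's file system could expose. */
    record FileInfo(String path, long modificationTime, long size) {}

    /** The proposed generic filter interface: one method, full attribute access. */
    interface FileFilter {
        boolean accept(FileInfo file);
    }

    /** A modification-time filter, roughly what a FileModTimeFilter might look like. */
    static FileFilter modifiedAfter(long timestampMillis) {
        return file -> file.modificationTime() > timestampMillis;
    }

    public static void main(String[] args) {
        FileFilter filter = modifiedAfter(1_000L);
        System.out.println(filter.accept(new FileInfo("/data/a.log", 2_000L, 10))); // true
        System.out.println(filter.accept(new FileInfo("/data/b.log", 500L, 10)));   // false
    }
}
```

Combining attribute checks (a path pattern plus a modification-time bound) then amounts to composing predicates, which is the flexibility argument for option 2.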





[jira] [Updated] (FLINK-10134) UTF-16 support for TextInputFormat

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10134:
---
Labels: auto-unassigned pull-request-available stale-major  (was: 
auto-unassigned pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but it is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is still Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be deprioritized.


> UTF-16 support for TextInputFormat
> --
>
> Key: FLINK-10134
> URL: https://issues.apache.org/jira/browse/FLINK-10134
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataSet
>Affects Versions: 1.4.2
>Reporter: David Dreyfus
>Priority: Major
>  Labels: auto-unassigned, pull-request-available, stale-major
> Fix For: 1.14.0
>
>
> It does not appear that Flink supports a charset encoding of "UTF-16". In 
> particular, it doesn't appear that Flink consumes the Byte Order Mark (BOM) 
> to establish whether a UTF-16 file is UTF-16LE or UTF-16BE.
>  
> TextInputFormat.setCharset("UTF-16") calls DelimitedInputFormat.setCharset(), 
> which sets TextInputFormat.charsetName and then modifies the previously set 
> delimiterString to construct the proper byte-string encoding of the 
> delimiter. This same charsetName is also used in TextInputFormat.readRecord() 
> to interpret the bytes read from the file.
>  
> There are two problems that this implementation would seem to have when using 
> UTF-16.
>  # delimiterString.getBytes(getCharset()) in DelimitedInputFormat.java will 
> return a Big Endian byte sequence including the Byte Order Mark (BOM). The 
> actual text file will not contain a BOM at each line ending, so the delimiter 
> will never be read. Moreover, if the actual byte encoding of the file is 
> Little Endian, the bytes will be interpreted incorrectly.
>  # TextInputFormat.readRecord() will not see a BOM each time it decodes a 
> byte sequence with the String(bytes, offset, numBytes, charset) call. 
> Therefore, it will assume Big Endian, which may not always be correct. [1] 
> [https://github.com/apache/flink/blob/master/flink-java/src/main/java/org/apache/flink/api/java/io/TextInputFormat.java#L95]
>  
> While there are likely many solutions, I would think that all of them would 
> have to start by reading the BOM from the file when a Split is opened, then 
> use that BOM to pick a BOM-specific encoding when the caller doesn't specify 
> one, and overwrite the caller's specification if the BOM conflicts with it. 
> That is, if the BOM indicates Little Endian and the caller indicates UTF-16BE, 
> Flink should rewrite the charsetName as UTF-16LE.
> I hope this makes sense and that I haven't been testing incorrectly or 
> misreading the code.
>  
> I've verified the problem on version 1.4.2. I believe the problem exists on 
> all versions. 





[jira] [Updated] (FLINK-22367) JobMasterStopWithSavepointITCase.terminateWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished times out

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22367:
---
  Labels: auto-deprioritized-critical auto-unassigned test-stability  (was: 
auto-unassigned stale-critical test-stability)
Priority: Major  (was: Critical)

This issue was labeled "stale-critical" 7 days ago and has not received any 
updates, so it is being deprioritized. If this ticket is actually Critical, 
please raise the priority and ask a committer to assign you the issue or revive 
the public discussion.


> JobMasterStopWithSavepointITCase.terminateWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished
>  times out
> --
>
> Key: FLINK-22367
> URL: https://issues.apache.org/jira/browse/FLINK-22367
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.13.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: auto-deprioritized-critical, auto-unassigned, 
> test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16818=logs=2c3cbe13-dee0-5837-cf47-3053da9a8a78=2c7d57b9-7341-5a87-c9af-2cf7cc1a37dc=3844
> {code}
> [ERROR] Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 13.135 s <<< FAILURE! - in 
> org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase
> Apr 19 22:28:44 [ERROR] 
> terminateWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished(org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase)
>   Time elapsed: 10.237 s  <<< ERROR!
> Apr 19 22:28:44 java.util.concurrent.ExecutionException: 
> java.util.concurrent.TimeoutException: Invocation of public default 
> java.util.concurrent.CompletableFuture 
> org.apache.flink.runtime.webmonitor.RestfulGateway.stopWithSavepoint(org.apache.flink.api.common.JobID,java.lang.String,boolean,org.apache.flink.api.common.time.Time)
>  timed out.
> Apr 19 22:28:44   at 
> java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
> Apr 19 22:28:44   at 
> java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
> Apr 19 22:28:44   at 
> org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase.stopWithSavepointNormalExecutionHelper(JobMasterStopWithSavepointITCase.java:123)
> Apr 19 22:28:44   at 
> org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase.terminateWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished(JobMasterStopWithSavepointITCase.java:111)
> Apr 19 22:28:44   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> Apr 19 22:28:44   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Apr 19 22:28:44   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Apr 19 22:28:44   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> Apr 19 22:28:44   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Apr 19 22:28:44   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Apr 19 22:28:44   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Apr 19 22:28:44   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Apr 19 22:28:44   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Apr 19 22:28:44   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> Apr 19 22:28:44   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Apr 19 22:28:44   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> Apr 19 22:28:44   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Apr 19 22:28:44   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> Apr 19 22:28:44   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Apr 19 22:28:44   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Apr 19 22:28:44   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Apr 19 22:28:44   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Apr 19 22:28:44   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Apr 19 22:28:44   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Apr 19 22:28:44   at 
> 

[jira] [Updated] (FLINK-10200) Consolidate FileBasedStateOutputStream and FsCheckpointStateOutputStream

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10200:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issues has been marked as 
Major but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" to the issue". If this 
ticket is a Major, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Consolidate FileBasedStateOutputStream and FsCheckpointStateOutputStream
> 
>
> Key: FLINK-10200
> URL: https://issues.apache.org/jira/browse/FLINK-10200
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Reporter: Stefan Richter
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> We should be able to find a common denominator that avoids code duplication.





[jira] [Updated] (FLINK-22949) java.io.InvalidClassException With Flink Kafka Beam

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22949:
---
Labels: stale-blocker  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as a 
Blocker but it is unassigned, and neither it nor its Sub-Tasks have been 
updated for 1 day. I have gone ahead and marked it "stale-blocker". If this 
ticket is a Blocker, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> java.io.InvalidClassException With Flink Kafka Beam
> ---
>
> Key: FLINK-22949
> URL: https://issues.apache.org/jira/browse/FLINK-22949
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.12.0
>Reporter: Ravikiran Borse
>Priority: Blocker
>  Labels: stale-blocker
>
> Beam: 2.30.0
> Flink: 1.12.0
> Kafka: 2.6.0
> ERROR:root:java.io.InvalidClassException: 
> org.apache.flink.streaming.api.graph.StreamConfig$NetworkInputConfig; local 
> class incompatible: stream classdesc serialVersionUID = 3698633776553163849, 
> local class serialVersionUID = -3137689219135046939
>  
> In Flink Logs
> KafkaIO.Read.ReadFromKafkaViaSDF/{ParDo(GenerateKafkaSourceDescriptor), 
> KafkaIO.ReadSourceDescriptors} (1/1)#0 (b0c31371874208adb0ccaff85b971883) 
> switched from RUNNING to FAILED.
> org.apache.flink.streaming.runtime.tasks.StreamTaskException: Could not 
> deserialize inputs
>         at 
> org.apache.flink.streaming.api.graph.StreamConfig.getInputs(StreamConfig.java:265)
>  ~[flink-dist_2.12-1.12.0.jar:1.12.0]
>         at 
> org.apache.flink.streaming.api.graph.StreamConfig.getTypeSerializerIn(StreamConfig.java:280)
>  ~[flink-dist_2.12-1.12.0.jar:1.12.0]
>         at 
> org.apache.flink.streaming.api.graph.StreamConfig.getTypeSerializerIn1(StreamConfig.java:271)
>  ~[flink-dist_2.12-1.12.0.jar:1.12.0]
>         at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.wrapOperatorIntoOutput(OperatorChain.java:639)
>  ~[flink-dist_2.12-1.12.0.jar:1.12.0]
>         at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.createOperatorChain(OperatorChain.java:591)
>  ~[flink-dist_2.12-1.12.0.jar:1.12.0]
>         at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:526)
>  ~[flink-dist_2.12-1.12.0.jar:1.12.0]
>         at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:164)
>  ~[flink-dist_2.12-1.12.0.jar:1.12.0]
>         at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:485)
>  ~[flink-dist_2.12-1.12.0.jar:1.12.0]
>         at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:531)
>  ~[flink-dist_2.12-1.12.0.jar:1.12.0]
>         at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:722) 
> [flink-dist_2.12-1.12.0.jar:1.12.0]
>         at org.apache.flink.runtime.taskmanager.Task.run(Task.java:547) 
> [flink-dist_2.12-1.12.0.jar:1.12.0]
>         at java.lang.Thread.run(Thread.java:748) [?:1.8.0_282]
> Caused by: java.io.InvalidClassException: 
> org.apache.flink.streaming.api.graph.StreamConfig$NetworkInputConfig; local 
> class incompatible: stream classdesc serialVersionUID = 3698633776553163849, 
> local class serialVersionUID = -3137689219135046939





[jira] [Updated] (FLINK-10122) KafkaConsumer should use partitionable state over union state if partition discovery is not active

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10122:
---
Labels: auto-unassigned pull-request-available stale-major  (was: 
auto-unassigned pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but it is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is still Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be deprioritized.


> KafkaConsumer should use partitionable state over union state if partition 
> discovery is not active
> --
>
> Key: FLINK-10122
> URL: https://issues.apache.org/jira/browse/FLINK-10122
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: Stefan Richter
>Priority: Major
>  Labels: auto-unassigned, pull-request-available, stale-major
>
> The KafkaConsumer always stores its offset state as union state. I think this 
> is only required in the case that partition discovery is active. For jobs with 
> a very high parallelism, the union state can lead to prohibitively expensive 
> deployments. For example, a job with 2000 sources and a total of 10MB of 
> checkpointed offset state as union state would have to ship ~ 2000 x 10MB = 
> 20GB of state. With partitionable state, it would only have to ship ~10MB.
> For now, I would suggest going back to partitionable state in case 
> partition discovery is not active. In the long run, I have some ideas for 
> more efficient partitioning schemes that would also work for active discovery.





[jira] [Updated] (FLINK-22483) Recover checkpoints when JobMaster gains leadership

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22483:
---
Labels: stale-critical  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Critical but it is unassigned, and neither it nor its Sub-Tasks have been 
updated for 7 days. I have gone ahead and marked it "stale-critical". If this 
ticket is still Critical, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Recover checkpoints when JobMaster gains leadership
> ---
>
> Key: FLINK-22483
> URL: https://issues.apache.org/jira/browse/FLINK-22483
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Robert Metzger
>Priority: Critical
>  Labels: stale-critical
> Fix For: 1.14.0
>
>
> Recovering checkpoints (from the CompletedCheckpointStore) is a potentially 
> long-lasting/blocking operation, for example if the file system 
> implementation keeps retrying to connect to an unavailable storage backend.
> Currently, we are calling the CompletedCheckpointStore.recover() method from 
> the main thread of the JobManager, making it unresponsive to any RPC call 
> while the recover method is blocked:
> {code}
> 2021-04-02 20:33:31,384 INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph   [] - Job XXX 
> switched from state RUNNING to RESTARTING.
> com.amazonaws.SdkClientException: Unable to execute HTTP request: Connect to 
> minio.minio.svc:9000 [minio.minio.svc/] failed: Connection refused 
> (Connection refused)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1207)
>  ~[?:?]
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1153)
>  ~[?:?]
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:802)
>  ~[?:?]
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770)
>  ~[?:?]
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744)
>  ~[?:?]
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704)
>  ~[?:?]
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686)
>  ~[?:?]
>   at 
> com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:550) ~[?:?]
>   at 
> com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:530) ~[?:?]
>   at 
> com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5062) 
> ~[?:?]
>   at 
> com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5008) 
> ~[?:?]
>   at 
> com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1490) 
> ~[?:?]
>   at 
> com.facebook.presto.hive.s3.PrestoS3FileSystem$PrestoS3InputStream.lambda$openStream$1(PrestoS3FileSystem.java:905)
>  ~[?:?]
>   at com.facebook.presto.hive.RetryDriver.run(RetryDriver.java:138) ~[?:?]
>   at 
> com.facebook.presto.hive.s3.PrestoS3FileSystem$PrestoS3InputStream.openStream(PrestoS3FileSystem.java:902)
>  ~[?:?]
>   at 
> com.facebook.presto.hive.s3.PrestoS3FileSystem$PrestoS3InputStream.openStream(PrestoS3FileSystem.java:887)
>  ~[?:?]
>   at 
> com.facebook.presto.hive.s3.PrestoS3FileSystem$PrestoS3InputStream.seekStream(PrestoS3FileSystem.java:880)
>  ~[?:?]
>   at 
> com.facebook.presto.hive.s3.PrestoS3FileSystem$PrestoS3InputStream.lambda$read$0(PrestoS3FileSystem.java:819)
>  ~[?:?]
>   at com.facebook.presto.hive.RetryDriver.run(RetryDriver.java:138) ~[?:?]
>   at 
> com.facebook.presto.hive.s3.PrestoS3FileSystem$PrestoS3InputStream.read(PrestoS3FileSystem.java:818)
>  ~[?:?]
>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:284) 
> ~[?:1.8.0_282]
>   at XXX.recover(KubernetesHaCheckpointStore.java:69) 
> ~[vvp-flink-ha-kubernetes-flink112-1.1.0.jar:?]
>   at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreLatestCheckpointedStateInternal(CheckpointCoordinator.java:1511)
>  ~[flink-dist_2.12-1.12.2-stream1.jar:1.12.2-stream1]
>   at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreLatestCheckpointedStateToAll(CheckpointCoordinator.java:1451)
>  ~[flink-dist_2.12-1.12.2-stream1.jar:1.12.2-stream1]
>   at 
> org.apache.flink.runtime.scheduler.SchedulerBase.restoreState(SchedulerBase.java:421)
>  ~[flink-dist_2.12-1.12.2-stream1.jar:1.12.2-stream1]
>   at 
> 
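One direction (an assumption on my part, not the committed fix) is to move the blocking recover() call off the main thread onto an I/O executor and handle completion asynchronously; a minimal sketch with a stand-in for the store:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Sketch: keep the JobManager main thread responsive while recovery blocks. */
public class AsyncRecoverSketch {

    /** Stand-in for CompletedCheckpointStore.recover(), which may block for a
     *  long time on a slow or unavailable storage backend. */
    static String blockingRecover() {
        return "latest-checkpoint";
    }

    public static void main(String[] args) {
        ExecutorService ioExecutor = Executors.newSingleThreadExecutor();
        CompletableFuture<String> recovered =
                CompletableFuture.supplyAsync(AsyncRecoverSketch::blockingRecover, ioExecutor);
        // The main thread stays free to answer RPCs here; we only join for the demo.
        System.out.println(recovered.join());
        ioExecutor.shutdown();
    }
}
```

In the real JobMaster the continuation would presumably have to hop back onto the main-thread executor before touching scheduler state, given Flink's single-threaded RPC model.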

[jira] [Updated] (FLINK-10203) Support truncate method for old Hadoop versions in HadoopRecoverableFsDataOutputStream

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10203:
---
Labels: auto-unassigned pull-request-available stale-major  (was: 
auto-unassigned pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but it is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is still Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be deprioritized.


> Support truncate method for old Hadoop versions in 
> HadoopRecoverableFsDataOutputStream
> --
>
> Key: FLINK-10203
> URL: https://issues.apache.org/jira/browse/FLINK-10203
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Connectors / FileSystem
>Affects Versions: 1.6.0, 1.6.1, 1.7.0
>Reporter: Artsem Semianenka
>Priority: Major
>  Labels: auto-unassigned, pull-request-available, stale-major
> Attachments: legacy truncate logic.pdf
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The new StreamingFileSink (introduced in Flink 1.6) uses the 
> HadoopRecoverableFsDataOutputStream wrapper to write data to HDFS.
> HadoopRecoverableFsDataOutputStream is a wrapper for FSDataOutputStream that 
> provides the ability to restore from a certain point of the file after a 
> failure and continue to write data. To achieve this recovery functionality, 
> HadoopRecoverableFsDataOutputStream uses the "truncate" method, which was 
> introduced only in Hadoop 2.7. 
> FLINK-14170 has enabled the usage of StreamingFileSink for 
> OnCheckpointRollingPolicy, but it is still not possible to use 
> StreamingFileSink with DefaultRollingPolicy, which makes writing data 
> to HDFS impractical at scale for HDFS < 2.7.
> Unfortunately, there are a few official Hadoop distributions whose latest 
> versions still use Hadoop 2.6 (for example Cloudera and Pivotal HD). As a 
> result, Flink's Hadoop connector can't work with these distributions.
> Flink declares that it supports Hadoop from version 2.4.0 upwards 
> ([https://ci.apache.org/projects/flink/flink-docs-release-1.6/start/building.html#hadoop-versions])
> I suggest we emulate the functionality of the "truncate" method for older 
> Hadoop versions.
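One way to emulate truncate on file systems that lack it is to rewrite the valid prefix and rename; a sketch (shown with local files rather than HDFS, and under the assumption that copy-and-rename cost is acceptable):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Arrays;

/** Sketch: emulate truncate(file, length) by rewriting the valid prefix. */
public class TruncateEmulation {

    static void truncate(Path file, long length) throws IOException {
        Path tmp = file.resolveSibling(file.getFileName() + ".truncating");
        // Fine for a sketch; a real implementation would stream instead of
        // loading the whole file, and handle crash-recovery of the tmp file.
        byte[] all = Files.readAllBytes(file);
        Files.write(tmp, Arrays.copyOf(all, (int) length));
        Files.move(tmp, file, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("part", ".dat");
        Files.write(f, "hello world".getBytes());
        truncate(f, 5);
        System.out.println(Files.readString(f)); // hello
        Files.delete(f);
    }
}
```

The rename step is atomic on HDFS, which is what makes copy-and-rename a plausible stand-in for a missing native truncate; the extra I/O is the obvious cost.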





[jira] [Updated] (FLINK-10146) numBuffersOut not updated properly

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10146:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but it is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is still Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be deprioritized.


> numBuffersOut not updated properly
> --
>
> Key: FLINK-10146
> URL: https://issues.apache.org/jira/browse/FLINK-10146
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Metrics, Runtime / Network
>Affects Versions: 1.5.3, 1.6.1, 1.7.0
>Reporter: Nico Kruber
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> We only enqueue a {{BufferConsumer}} once into the result (sub)partition but 
> Netty may draw and send parts of it independently. {{numBuffersOut}} is, 
> however, only increased when a new buffer is retrieved from the buffer pool, 
> not when an actual buffer is sent via Netty. This metric is thus wrong.





[jira] [Updated] (FLINK-10171) Allow for assemblePartPath override in the BucketingSink

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10171:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but it is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is still Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be deprioritized.


> Allow for assemblePartPath override in the BucketingSink
> 
>
> Key: FLINK-10171
> URL: https://issues.apache.org/jira/browse/FLINK-10171
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Affects Versions: 1.4.2, 1.5.0
>Reporter: Lakshmi Rao
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> Make the 
> [assemblePartPath|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-filesystem/src/main/java/org/apache/flink/streaming/connectors/fs/bucketing/BucketingSink.java#L663]
>  method in the BucketingSink protected. This will enable creating dynamic 
> part prefixes in use cases where they are required (e.g. adding a 
> datetime prefix to the filename).
> 
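With stand-in classes (not the actual BucketingSink), the override the ticket asks for would look roughly like this; the method signature here is an illustrative assumption:

```java
import java.time.LocalDate;

/** Sketch with stand-in classes: what a protected assemblePartPath enables. */
public class PartPathSketch {

    static class BaseSink {
        // In the real BucketingSink this is the method the ticket wants protected.
        protected String assemblePartPath(String bucket, int subtask, int part) {
            return bucket + "/part-" + subtask + "-" + part;
        }
    }

    static class DatePrefixSink extends BaseSink {
        @Override
        protected String assemblePartPath(String bucket, int subtask, int part) {
            // A dynamic prefix, e.g. a datetime, as the ticket suggests.
            return bucket + "/" + LocalDate.of(2018, 8, 16) + "-part-" + subtask + "-" + part;
        }
    }

    public static void main(String[] args) {
        System.out.println(new DatePrefixSink().assemblePartPath("/out", 0, 1));
        // /out/2018-08-16-part-0-1
    }
}
```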





[jira] [Updated] (FLINK-15507) Activate local recovery for RocksDB backends by default

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-15507:
---
Labels: auto-unassigned pull-request-available stale-critical  (was: 
auto-unassigned pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Critical but it is unassigned, and neither it nor its Sub-Tasks have been 
updated for 7 days. I have gone ahead and marked it "stale-critical". If this 
ticket is still Critical, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Activate local recovery for RocksDB backends by default
> ---
>
> Key: FLINK-15507
> URL: https://issues.apache.org/jira/browse/FLINK-15507
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Stephan Ewen
>Priority: Critical
>  Labels: auto-unassigned, pull-request-available, stale-critical
> Fix For: 1.14.0
>
>
> For the RocksDB state backend, local recovery has no overhead when 
> incremental checkpoints are used. 
> It should be activated by default, because it greatly helps with recovery.
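For reference, local recovery can already be switched on explicitly; the ticket only proposes flipping the default. A flink-conf.yaml fragment (key name taken from the Flink configuration docs; verify it against your version):

```yaml
# Enable task-local state recovery (proposed to become the default
# for RocksDB with incremental checkpoints).
state.backend.local-recovery: true
```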



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10162) Add host in checkpoint detail

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10162:
---
Labels: auto-unassigned pull-request-available stale-major  (was: 
auto-unassigned pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or the issue will be deprioritized in 7 days.


> Add host in checkpoint detail
> -
>
> Key: FLINK-10162
> URL: https://issues.apache.org/jira/browse/FLINK-10162
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Reporter: xymaqingxiang
>Priority: Major
>  Labels: auto-unassigned, pull-request-available, stale-major
> Attachments: image-2018-08-16-21-14-57-221.png, 
> image-2018-08-16-21-15-42-167.png, image-2018-08-16-21-15-46-585.png, 
> image-2018-08-16-21-36-04-977.png
>
>
> When a streaming job runs for a long time, there may be failures at 
> checkpointing, and we then need to look at the log for the 
> corresponding task and the performance of the machine on which the task is 
> located. 
> Therefore, I suggest adding the host information to the checkpoint so that, 
> if a checkpoint fails, you can log in directly to the appropriate 
> machine to see the cause of the failure.
> The failures are as follows:
> !image-2018-08-16-21-14-57-221.png!
> !image-2018-08-16-21-15-46-585.png!
> The revised representation is as follows:
>   !image-2018-08-16-21-36-04-977.png!
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10183) Processing-Time based windows aren't emitted when a finite stream ends

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10183:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or the issue will be deprioritized in 7 days.


> Processing-Time based windows aren't emitted when a finite stream ends
> --
>
> Key: FLINK-10183
> URL: https://issues.apache.org/jira/browse/FLINK-10183
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.6.0
>Reporter: Andrew Roberts
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> It looks like Event-time based windows leverage a final watermark added in 
> FLINK-3554, but in-progress processing-time based windows are dropped on the 
> floor once a finite stream ends.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10198) Set Env object in DBOptions for RocksDB

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10198:
---
Labels: auto-unassigned pull-request-available stale-major  (was: 
auto-unassigned pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or the issue will be deprioritized in 7 days.


> Set Env object in DBOptions for RocksDB
> ---
>
> Key: FLINK-10198
> URL: https://issues.apache.org/jira/browse/FLINK-10198
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.7.0
>Reporter: Stefan Richter
>Priority: Major
>  Labels: auto-unassigned, pull-request-available, stale-major
>
> I think we should consider always setting a default environment when we create 
> the DBOptions.
> See https://github.com/facebook/rocksdb/wiki/rocksdb-basics:
> *Support for Multiple Embedded Databases in the same process*
> A common use-case for RocksDB is that applications inherently partition their 
> data set into logical partitions or shards. This technique benefits 
> application load balancing and fast recovery from faults. This means that a 
> single server process should be able to operate multiple RocksDB databases 
> simultaneously. This is done via an environment object named Env. Among other 
> things, a thread pool is associated with an Env. If applications want to 
> share a common thread pool (for background compactions) among multiple 
> database instances, then it should use the same Env object for opening those 
> databases.
> Similarly, multiple database instances may share the same block cache.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10177) Use network transport type AUTO by default

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10177:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or the issue will be deprioritized in 7 days.


> Use network transport type AUTO by default
> --
>
> Key: FLINK-10177
> URL: https://issues.apache.org/jira/browse/FLINK-10177
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration, Runtime / Network
>Affects Versions: 1.6.0, 1.7.0
>Reporter: Nico Kruber
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> Now that the shading issue with the native library is fixed (FLINK-9463), 
> EPOLL should be available on (all?) Linux distributions and provide some 
> efficiency gain (if enabled). Therefore, 
> {{taskmanager.network.netty.transport}} should be set to {{auto}} by default. 
> If EPOLL is not available, it will automatically fall back to NIO which 
> currently is the default.
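Until the default changes, the proposed behavior can be opted into explicitly. A flink-conf.yaml sketch using the option named in the ticket:

```yaml
# Let Netty pick epoll on Linux and fall back to NIO elsewhere.
taskmanager.network.netty.transport: auto
```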



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10230) Support 'SHOW CREATE VIEW' syntax to print the query of a view

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10230:
---
Labels: auto-unassigned pull-request-available stale-major  (was: 
auto-unassigned pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or the issue will be deprioritized in 7 days.


> Support 'SHOW CREATE VIEW' syntax to print the query of a view
> --
>
> Key: FLINK-10230
> URL: https://issues.apache.org/jira/browse/FLINK-10230
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API, Table SQL / Client
>Reporter: Timo Walther
>Priority: Major
>  Labels: auto-unassigned, pull-request-available, stale-major
> Fix For: 1.14.0
>
>
> FLINK-10163 added initial support for views in SQL Client. We should add a 
> command that allows for printing the query of a view for debugging. MySQL 
> offers {{SHOW CREATE VIEW}} for this. Hive generalizes this to {{SHOW CREATE 
> TABLE}}. The latter one could be extended to also show information about the 
> used table factories and properties.
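A sketch of the proposed syntax, modeled on MySQL. The statement is not yet supported by Flink, and `my_view` is a placeholder name:

```sql
-- Print the CREATE VIEW statement / underlying query of a view:
SHOW CREATE VIEW my_view;
```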



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10210) Time indicators are not always materialised for LogicalCorrelate

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10210:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or the issue will be deprioritized in 7 days.


> Time indicators are not always materialised for LogicalCorrelate
> 
>
> Key: FLINK-10210
> URL: https://issues.apache.org/jira/browse/FLINK-10210
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.6.0
>Reporter: Piotr Nowojski
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> Currently, 
> {{org.apache.flink.table.calcite.RelTimeIndicatorConverter#visit(LogicalCorrelate)}}
>  supports only the case where the right side is {{LogicalTableFunctionScan}}. That 
> is not always the case. For example, in 
> {{org.apache.flink.table.api.stream.table.CorrelateTest#testFilter}} the 
> LogicalFilter node is pushed down to the right side of 
> {{LogicalCorrelate}}.
>  
> CC [~twalthr]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22255) AdaptiveScheduler improvements/bugs

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22255:
---
Labels: stale-major  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or the issue will be deprioritized in 7 days.


> AdaptiveScheduler improvements/bugs
> ---
>
> Key: FLINK-22255
> URL: https://issues.apache.org/jira/browse/FLINK-22255
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Till Rohrmann
>Priority: Major
>  Labels: stale-major
>
> This ticket collects the improvements/bugs for the {{AdaptiveScheduler}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10232) Add a SQL DDL

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10232:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or the issue will be deprioritized in 7 days.


> Add a SQL DDL
> -
>
> Key: FLINK-10232
> URL: https://issues.apache.org/jira/browse/FLINK-10232
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Timo Walther
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> This is an umbrella issue for all efforts related to supporting a SQL Data 
> Definition Language (DDL) in Flink's Table & SQL API.
> Such a DDL includes creating, deleting, replacing:
> - tables
> - views
> - functions
> - types
> - libraries
> - catalogs
> If possible, the parsing/validating/logical part should be done using 
> Calcite. Related issues are CALCITE-707, CALCITE-2045, CALCITE-2046, 
> CALCITE-2214, and others.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15845) Add java-based StreamingFileSink test using s3

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-15845:
---
  Labels: auto-deprioritized-major auto-unassigned pull-request-available  
(was: auto-unassigned pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> Add java-based StreamingFileSink test using s3
> --
>
> Key: FLINK-15845
> URL: https://issues.apache.org/jira/browse/FLINK-15845
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem, Tests
>Affects Versions: 1.11.0
>Reporter: Arvid Heise
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
> Fix For: 1.14.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> StreamingFileSink is covered by a few shell-based e2e tests but no Java-based 
> tests. To reach our long-term goal of replacing shell-based tests, this issue 
> will provide a first version that runs on minio/aws and aims to 
> extend/replace `test_streaming_file_sink.sh`.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15747) Enable setting RocksDB log level from configuration

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-15747:
---
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> Enable setting RocksDB log level from configuration
> ---
>
> Key: FLINK-15747
> URL: https://issues.apache.org/jira/browse/FLINK-15747
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Yu Li
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
> Fix For: 1.14.0
>
>
> Currently, to enable the RocksDB local log, one has to create a customized 
> {{OptionsFactory}}, which is not very convenient. This JIRA proposes to 
> make it settable from the configuration in flink-conf.yaml.
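A sketch of what such a configuration entry could look like. The key name below is hypothetical (the ticket only proposes the capability); the value would map to RocksDB's InfoLogLevel:

```yaml
# Hypothetical option for FLINK-15747; check your Flink version's docs
# for the actual key once the feature lands.
state.backend.rocksdb.log.level: INFO_LEVEL
```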



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15745) KafkaITCase.testKeyValueSupport failure due to assertion error.

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-15745:
---
  Labels: auto-deprioritized-major auto-unassigned test-stability  (was: 
auto-unassigned stale-major test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> KafkaITCase.testKeyValueSupport failure due to assertion error.
> ---
>
> Key: FLINK-15745
> URL: https://issues.apache.org/jira/browse/FLINK-15745
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.10.0
>Reporter: Jiangjie Qin
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, test-stability
> Fix For: 1.14.0
>
>
> The failure cause was:
> {code:java}
> Caused by: java.lang.AssertionError: Wrong value 50
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase$15.flatMap(KafkaConsumerTestBase.java:1411)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase$15.flatMap(KafkaConsumerTestBase.java:1406)
>   at 
> org.apache.flink.streaming.api.operators.StreamFlatMap.processElement(StreamFlatMap.java:50)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:641)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:616)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:596)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:730)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:708)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collectWithTimestamp(StreamSourceContexts.java:111)
>   at 
> org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordWithTimestamp(AbstractFetcher.java:398)
>   at 
> org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.emitRecord(KafkaFetcher.java:185)
>   at 
> org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.runFetchLoop(KafkaFetcher.java:150)
>   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:715)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:196)
>  {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21108) Flink runtime rest server and history server webmonitor do not require authentication.

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21108:
---
  Labels: auto-deprioritized-major auto-unassigned pull-request-available  
(was: auto-unassigned pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> Flink runtime rest server and history server webmonitor do not require 
> authentication.
> --
>
> Key: FLINK-21108
> URL: https://issues.apache.org/jira/browse/FLINK-21108
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / REST, Runtime / Web Frontend
>Reporter: Xiaoguang Sun
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
>
> Flink runtime rest server and history server webmonitor do not require 
> authentication. In certain scenarios, prohibiting unauthorized access is 
> desired. HTTP basic authentication can be used here.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15787) Upgrade REST API response to remove '-' from key names.

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-15787:
---
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> Upgrade REST API response to remove '-' from key names.
> ---
>
> Key: FLINK-15787
> URL: https://issues.apache.org/jira/browse/FLINK-15787
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Affects Versions: 1.10.0
>Reporter: Daryl Roberts
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
>
> There are some REST API responses that include keys with hyphens in them.  
> This results in the frontend having to use string-lookups to access those 
> values and we lose the type information when doing that.
> Example from {{/jobs//vertices//backpressure}}
>  {{export interface JobBackpressureInterface {}}
>  {{  status: string;}}
>  {{  'backpressure-level': string;}}
>  {{  'end-timestamp': number;}}
>  {{  subtasks: JobBackpressureSubtaskInterface[];}}
>  {{}}}
>  I would like to update all of these to use {{_}} instead so we can maintain 
> the type information we have in the web-runtime.
> My suggestion to do this without a version bump to the API is to just make 
> an addition to all the endpoints that includes the _ versions as well. Then, 
> after the web-runtime has completely switched over to using the new keys, you 
> can deprecate and remove the old hyphenated keys at your own pace.
> [~chesnay] [~trohrmann]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15973) Optimize the execution plan where it refers the Python UDF result field in the where clause

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-15973:
---
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> Optimize the execution plan where it refers the Python UDF result field in 
> the where clause
> ---
>
> Key: FLINK-15973
> URL: https://issues.apache.org/jira/browse/FLINK-15973
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Dian Fu
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
> Fix For: 1.14.0
>
>
> For the following job:
> {code}
> t_env.register_function("inc", inc)
> table.select("inc(id) as inc_id") \
>  .where("inc_id > 0") \
>  .insert_into("sink")
> {code}
> The execution plan is as follows:
> {code}
> StreamExecPythonCalc(select=inc(f0) AS inc_id)
> +- StreamExecCalc(select=id AS f0, where=>(f0, 0))
>    +- StreamExecPythonCalc(select=id, inc(f0) AS f0)
>       +- StreamExecCalc(select=id, id AS f0)
>          +- StreamExecTableSourceScan(fields=id)
> {code}
> This plan is not optimal. It could be optimized as follows:
> {code}
> StreamExecCalc(select=f0, where=>(f0, 0))
> +- StreamExecPythonCalc(select=inc(f0) AS f0)
>    +- StreamExecCalc(select=id, id AS f0)
>       +- StreamExecTableSourceScan(fields=id)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15873) Matched result may not be output if existing earlier partial matches

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-15873:
---
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> Matched result may not be output if existing earlier partial matches
> 
>
> Key: FLINK-15873
> URL: https://issues.apache.org/jira/browse/FLINK-15873
> Project: Flink
>  Issue Type: Bug
>  Components: Library / CEP
>Affects Versions: 1.9.0
>Reporter: shuai.xu
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
>
> When running some CEP jobs with a skip strategy, I found that when we get a 
> matched result but there is an earlier partial match, the result will 
> not be returned.
> I think this is due to a bug in processMatchesAccordingToSkipStrategy() in 
> the NFA class. It should return the matched result without checking whether 
> there are partial matches.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15964) Getting previous stage in notFollowedBy may throw exception

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-15964:
---
  Labels: auto-deprioritized-major auto-unassigned pull-request-available  
(was: auto-unassigned pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> Getting previous stage in notFollowedBy may throw exception
> ---
>
> Key: FLINK-15964
> URL: https://issues.apache.org/jira/browse/FLINK-15964
> Project: Flink
>  Issue Type: Bug
>  Components: Library / CEP
>Affects Versions: 1.9.0
>Reporter: shuai.xu
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In a notFollowedBy() condition, an exception may be thrown when trying to get a 
> value from the previous stage for comparison.
> For example:
> Pattern<Event, ?> pattern = Pattern.<Event>begin("start", 
> AfterMatchSkipStrategy.skipPastLastEvent())
>     .notFollowedBy("not").where(new IterativeCondition<Event>() {
>         private static final long serialVersionUID = -4702359359303151881L;
>         @Override
>         public boolean filter(Event value, Context<Event> ctx) throws Exception {
>             return value.getName().equals(
>                 ctx.getEventsForPattern("start").iterator().next().getName());
>         }
>     })
>     .followedBy("middle").where(new IterativeCondition<Event>() {
>         @Override
>         public boolean filter(Event value, Context<Event> ctx) throws Exception {
>             return value.getName().equals("b");
>         }
>     });
> with inputs:
> Event a = new Event(40, "a", 1.0);
> Event b1 = new Event(41, "a", 2.0);
> Event b2 = new Event(43, "b", 3.0);
> It will throw 
> org.apache.flink.util.FlinkRuntimeException: Failure happened in filter 
> function.org.apache.flink.util.FlinkRuntimeException: Failure happened in 
> filter function.
>  at org.apache.flink.cep.nfa.NFA.findFinalStateAfterProceed(NFA.java:698) at 
> org.apache.flink.cep.nfa.NFA.computeNextStates(NFA.java:628) at 
> org.apache.flink.cep.nfa.NFA.doProcess(NFA.java:292) at 
> org.apache.flink.cep.nfa.NFA.process(NFA.java:228) at 
> org.apache.flink.cep.utils.NFATestHarness.consumeRecord(NFATestHarness.java:107)
>  at 
> org.apache.flink.cep.utils.NFATestHarness.feedRecord(NFATestHarness.java:84) 
> at 
> org.apache.flink.cep.utils.NFATestHarness.feedRecords(NFATestHarness.java:77) 
> at 
> org.apache.flink.cep.nfa.NFAITCase.testEndWithNotFollow(NFAITCase.java:2914) 
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363) at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:137) at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>  at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
>  at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
>  at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)Caused 
> by: java.util.NoSuchElementException at 
> 

[jira] [Updated] (FLINK-15980) The notFollowedBy in the end of GroupPattern may be ignored

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-15980:
---
  Labels: auto-deprioritized-major auto-unassigned pull-request-available  
(was: auto-unassigned pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> The notFollowedBy in the end of GroupPattern may be ignored
> ---
>
> Key: FLINK-15980
> URL: https://issues.apache.org/jira/browse/FLINK-15980
> Project: Flink
>  Issue Type: Bug
>  Components: Library / CEP
>Affects Versions: 1.9.0
>Reporter: shuai.xu
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If we write a Pattern like this:
> Pattern group = Pattern.begin("A").notFollowedBy("B");
> Pattern pattern = Pattern.begin(group).followedBy("C");
> i.e., notFollowedBy is the last part of a GroupPattern.
> This pattern compiles normally, but the notFollowedBy("B") does not actually 
> take effect.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15992) Incorrect classloader when finding TableFactory

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-15992:
---
  Labels: auto-deprioritized-major auto-unassigned pull-request-available  
(was: auto-unassigned pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> Incorrect classloader when finding TableFactory
> ---
>
> Key: FLINK-15992
> URL: https://issues.apache.org/jira/browse/FLINK-15992
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / API, Tests
>Reporter: Victor Wong
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
> Fix For: 1.14.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> *Background*
> As a streaming service maintainer in our company, to ensure our users depend 
> on the correct version of Kafka and flink-kafka, we add 
> "flink-connector-kafka" into the "flink-dist/lib" directory.
> *Problem*
> When submitting flink-sql jobs, we encountered below exceptions:
> {code:java}
> Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could 
> not find a suitable table factory for 
> 'org.apache.flink.table.factories.DeserializationSchemaFactory' in
> the classpath.
> {code}
> But we have added "org.apache.flink.formats.json.JsonRowFormatFactory" in 
> "META-INF/services/org.apache.flink.table.factories.TableFactory", which 
> implements DeserializationSchemaFactory.
> *Debug*
> We find that it was caused by this:
> {code:java}
> // 
> org.apache.flink.streaming.connectors.kafka.KafkaTableSourceSinkFactoryBase#getSerializationSchema
>   final SerializationSchemaFactory formatFactory = 
> TableFactoryService.find(
>   SerializationSchemaFactory.class,
>   properties,
>   this.getClass().getClassLoader());
> {code}
> It uses `this.getClass().getClassLoader()`, which will be the 
> bootstrap classloader of Flink.
> I think we could replace it with 
> `Thread.currentThread().getContextClassLoader()` to solve this.
> There is a related issue: https://issues.apache.org/jira/browse/FLINK-15552
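To illustrate why the classloader argument matters here (a minimal sketch with hypothetical names, not Flink code): `ServiceLoader` only discovers providers whose `META-INF/services` entries are visible to the classloader it is handed, so passing the defining loader of a class living in flink-dist/lib can miss factories shipped in a user jar.

```java
import java.util.ServiceLoader;

// Hypothetical demo class; the names are illustrative, not Flink's API.
public class ClassLoaderLookup {

    // Analogous to this.getClass().getClassLoader() in the snippet above:
    // the loader that defined this class. For a class in flink-dist/lib this
    // is the parent loader, which cannot see service entries in the user jar.
    public static ClassLoader definingLoader() {
        return ClassLoaderLookup.class.getClassLoader();
    }

    // Analogous to Thread.currentThread().getContextClassLoader(): the
    // framework can point this at the user-code classloader before invoking
    // library code, making user-jar factories discoverable.
    public static ClassLoader contextLoader() {
        return Thread.currentThread().getContextClassLoader();
    }

    // ServiceLoader only sees META-INF/services entries visible to the
    // classloader it is given, so the choice of loader decides which
    // factories are found.
    public static long countProviders(Class<?> service, ClassLoader cl) {
        long n = 0;
        for (Object provider : ServiceLoader.load(service, cl)) {
            n++;
        }
        return n;
    }
}
```

Assuming the usual deployment setup where Flink sets the context classloader to the user-code classloader, passing the context loader as proposed above would make the user jar's factory entries visible to the lookup.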



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22333) Elasticsearch7DynamicSinkITCase.testWritingDocuments failed due to deploy task timeout.

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22333:
---
  Labels: auto-deprioritized-major test-stability  (was: stale-major 
test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> Elasticsearch7DynamicSinkITCase.testWritingDocuments failed due to deploy 
> task timeout.
> ---
>
> Key: FLINK-22333
> URL: https://issues.apache.org/jira/browse/FLINK-22333
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16694=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20=12329
> {code:java}
> 2021-04-16T23:37:23.5719280Z Apr 16 23:37:23 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2021-04-16T23:37:23.5739250Z Apr 16 23:37:23  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
> 2021-04-16T23:37:23.5759329Z Apr 16 23:37:23  at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:137)
> 2021-04-16T23:37:23.5779145Z Apr 16 23:37:23  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
> 2021-04-16T23:37:23.5799204Z Apr 16 23:37:23  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> 2021-04-16T23:37:23.5819302Z Apr 16 23:37:23  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2021-04-16T23:37:23.5839106Z Apr 16 23:37:23  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2021-04-16T23:37:23.5859276Z Apr 16 23:37:23  at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:237)
> 2021-04-16T23:37:23.5868964Z Apr 16 23:37:23  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2021-04-16T23:37:23.5869925Z Apr 16 23:37:23  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2021-04-16T23:37:23.5919839Z Apr 16 23:37:23  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2021-04-16T23:37:23.5959562Z Apr 16 23:37:23  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2021-04-16T23:37:23.5989732Z Apr 16 23:37:23  at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:1081)
> 2021-04-16T23:37:23.6019422Z Apr 16 23:37:23  at 
> akka.dispatch.OnComplete.internal(Future.scala:264)
> 2021-04-16T23:37:23.6039067Z Apr 16 23:37:23  at 
> akka.dispatch.OnComplete.internal(Future.scala:261)
> 2021-04-16T23:37:23.6060126Z Apr 16 23:37:23  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
> 2021-04-16T23:37:23.6089258Z Apr 16 23:37:23  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
> 2021-04-16T23:37:23.6119150Z Apr 16 23:37:23  at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
> 2021-04-16T23:37:23.6139149Z Apr 16 23:37:23  at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:73)
> 2021-04-16T23:37:23.6159077Z Apr 16 23:37:23  at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
> 2021-04-16T23:37:23.6189432Z Apr 16 23:37:23  at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
> 2021-04-16T23:37:23.6215243Z Apr 16 23:37:23  at 
> akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
> 2021-04-16T23:37:23.6219148Z Apr 16 23:37:23  at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
> 2021-04-16T23:37:23.6220221Z Apr 16 23:37:23  at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
> 2021-04-16T23:37:23.6249411Z Apr 16 23:37:23  at 
> scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
> 2021-04-16T23:37:23.6259145Z Apr 16 23:37:23  at 
> scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
> 2021-04-16T23:37:23.6289272Z Apr 16 23:37:23  at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
> 2021-04-16T23:37:23.6309243Z Apr 16 23:37:23  at 
> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
> 

[jira] [Updated] (FLINK-15867) LAST_VALUE and FIRST_VALUE aggregate function does not support time-related types

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-15867:
---
  Labels: auto-deprioritized-major auto-unassigned pull-request-available  
(was: auto-unassigned pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> LAST_VALUE and FIRST_VALUE aggregate function does not support time-related 
> types
> -
>
> Key: FLINK-15867
> URL: https://issues.apache.org/jira/browse/FLINK-15867
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.2, 1.10.0
>Reporter: Benoît Paris
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
> Attachments: flink-test-lastvalue-timestamp.zip
>
>
> The following fails:
> {code:java}
> LAST_VALUE(TIMESTAMP '2020-02-03 16:17:20')
> LAST_VALUE(DATE '2020-02-03')
> LAST_VALUE(TIME '16:17:20')
> LAST_VALUE(NOW()){code}
> But this works:
>  
> {code:java}
> LAST_VALUE(UNIX_TIMESTAMP()) 
> {code}
> Leading me to say it might be more a type/format issue, rather than an actual 
> time processing issue.
> Attached is java + pom + full stacktrace, for reproduction. Stacktrace part 
> is below.
>  
> The ByteLastValueAggFunction etc. types seem trivial to implement, but in 
> createLastValueAggFunction only basic types seem to be dealt with. Is 
> there a reason more complicated LogicalTypeRoots are not implemented? 
> (old vs new types?)
>  
>  
> Caused by: org.apache.flink.table.api.TableException: LAST_VALUE aggregate 
> function does not support type: ''TIMESTAMP_WITHOUT_TIME_ZONE''. Please 
> re-check the data type. at 
> org.apache.flink.table.planner.plan.utils.AggFunctionFactory.createLastValueAggFunction(AggFunctionFactory.scala:617)
>  at 
> org.apache.flink.table.planner.plan.utils.AggFunctionFactory.createAggFunction(AggFunctionFactory.scala:113)
>  at 
> org.apache.flink.table.planner.plan.utils.AggregateUtil$$anonfun$9.apply(AggregateUtil.scala:285)
>  at 
> org.apache.flink.table.planner.plan.utils.AggregateUtil$$anonfun$9.apply(AggregateUtil.scala:279)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> org.apache.flink.table.planner.plan.utils.AggregateUtil$.transformToAggregateInfoList(AggregateUtil.scala:279)
>  at 
> org.apache.flink.table.planner.plan.utils.AggregateUtil$.transformToStreamAggregateInfoList(AggregateUtil.scala:228)
>  at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecGroupAggregate.(StreamExecGroupAggregate.scala:72)
>  at 
> org.apache.flink.table.planner.plan.rules.physical.stream.StreamExecGroupAggregateRule.convert(StreamExecGroupAggregateRule.scala:68)
>  at 
> org.apache.calcite.rel.convert.ConverterRule.onMatch(ConverterRule.java:139) 
> at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:208)
>  at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:631)
>  at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:328) 
> at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:64)
> 
>  
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15832) Test jars are installed twice

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-15832:
---
  Labels: auto-deprioritized-major auto-unassigned build  (was: 
auto-unassigned build stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> Test jars are installed twice
> -
>
> Key: FLINK-15832
> URL: https://issues.apache.org/jira/browse/FLINK-15832
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.9.1
>Reporter: static-max
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, build
>
> I built Flink 1.9.1 myself and merged the changes from 
> [https://github.com/apache/flink/pull/10936].
> When I uploaded the artifacts to our repository (using {{mvn deploy 
> -DaltDeploymentRepository}}) the build fails, as 
> {{flink-metrics-core-tests}} will be uploaded twice and we have redeployments 
> disabled.
>  
> I'm not sure if other artifacts are affected as well, as I enabled 
> redeployment as a quick workaround.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22139) Flink Jobmanager & Task Manger logs are not writing to the logs files

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22139:
---
  Labels: auto-deprioritized-major  (was: stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> Flink Jobmanager & Task Manger logs are not writing to the logs files
> -
>
> Key: FLINK-22139
> URL: https://issues.apache.org/jira/browse/FLINK-22139
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.12.2
> Environment: on Kubernetes Flink standalone deployment with 
> JobManager HA enabled.
>Reporter: Bhagi
>Priority: Minor
>  Labels: auto-deprioritized-major
>
> Hi Team,
> I am submitting jobs and restarting the job manager and task manager 
> pods. Log files are created with the task manager and job manager names, 
> but the job manager & task manager log file size is '0'. I am not sure what 
> configuration is missing or why logs are not written to the log files.
> # Task Manager pod###
> flink@flink-taskmanager-85b6585b7-hhgl7:~$ ls -lart log/
> total 0
> -rw-r--r-- 1 flink flink  0 Apr  7 09:35 
> flink--taskexecutor-0-flink-taskmanager-85b6585b7-hhgl7.log
> flink@flink-taskmanager-85b6585b7-hhgl7:~$
> ### Jobmanager pod Logs #
> flink@flink-jobmanager-f6db89b7f-lq4ps:~$
> -rw-r--r-- 1 7148739 flink 0 Apr  7 06:36 
> flink--standalonesession-0-flink-jobmanager-f6db89b7f-gtkx5.log
> -rw-r--r-- 1 7148739 flink 0 Apr  7 06:36 
> flink--standalonesession-0-flink-jobmanager-f6db89b7f-wnrfm.log
> -rw-r--r-- 1 7148739 flink 0 Apr  7 06:37 
> flink--standalonesession-0-flink-jobmanager-f6db89b7f-2b2fs.log
> -rw-r--r-- 1 7148739 flink 0 Apr  7 06:37 
> flink--standalonesession-0-flink-jobmanager-f6db89b7f-7kdhh.log
> -rw-r--r-- 1 7148739 flink 0 Apr  7 09:35 
> flink--standalonesession-0-flink-jobmanager-f6db89b7f-twhkt.log
> drwxrwxrwx 2 7148739 flink35 Apr  7 09:35 .
> -rw-r--r-- 1 7148739 flink 0 Apr  7 09:35 
> flink--standalonesession-0-flink-jobmanager-f6db89b7f-lq4ps.log
> flink@flink-jobmanager-f6db89b7f-lq4ps:~$
> I configured log4j.properties for flink
> log4j.properties: |+
> monitorInterval=30
> rootLogger.level = INFO
> rootLogger.appenderRef.file.ref = MainAppender
> logger.flink.name = org.apache.flink
> logger.flink.level = INFO
> logger.akka.name = akka
> logger.akka.level = INFO
> appender.main.name = MainAppender
> appender.main.type = RollingFile
> appender.main.append = true
> appender.main.fileName = ${sys:log.file}
> appender.main.filePattern = ${sys:log.file}.%i
> appender.main.layout.type = PatternLayout
> appender.main.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x 
> - %m%n
> appender.main.policies.type = Policies
> appender.main.policies.size.type = SizeBasedTriggeringPolicy
> appender.main.policies.size.size = 100MB
> appender.main.policies.startup.type = OnStartupTriggeringPolicy
> appender.main.strategy.type = DefaultRolloverStrategy
> appender.main.strategy.max = ${env:MAX_LOG_FILE_NUMBER:-10}
> logger.netty.name = 
> org.apache.flink.shaded.akka.org.jboss.netty.channel.DefaultChannelPipeline
> logger.netty.level = OFF
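As a hedged debugging aid (an assumption on my part, not a confirmed fix), a console appender could be added next to MainAppender so log output still shows up via `kubectl logs` even when `${sys:log.file}` is unset or the file appender cannot write in the container; the keys follow the log4j2 properties format used above:

```properties
# Console appender sketch for debugging; remove once file logging works.
appender.console.name = ConsoleAppender
appender.console.type = CONSOLE
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
rootLogger.appenderRef.console.ref = ConsoleAppender
```

If the console shows log lines while the files stay empty, the problem is likely the `${sys:log.file}` system property or file permissions rather than the logger configuration itself.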



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22558) testcontainers fail to start ResourceReaper

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22558:
---
  Labels: auto-deprioritized-major test-stability  (was: stale-major 
test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> testcontainers fail to start ResourceReaper
> ---
>
> Key: FLINK-22558
> URL: https://issues.apache.org/jira/browse/FLINK-22558
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Affects Versions: 1.14.0
>Reporter: Dawid Wysakowicz
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17527=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20=13314
> {code}
> May 03 23:15:17 "main" #1 prio=5 os_prio=0 tid=0x7f82c400b800 nid=0x43a8 
> runnable [0x7f82cb6ad000]
> May 03 23:15:17java.lang.Thread.State: RUNNABLE
> May 03 23:15:17   at 
> org.testcontainers.shaded.okio.AsyncTimeout.scheduleTimeout(AsyncTimeout.java:106)
> May 03 23:15:17   - locked <0xb31f9ca8> (a java.lang.Class for 
> org.testcontainers.shaded.okio.AsyncTimeout)
> May 03 23:15:17   at 
> org.testcontainers.shaded.okio.AsyncTimeout.enter(AsyncTimeout.java:80)
> May 03 23:15:17   at 
> org.testcontainers.shaded.okio.AsyncTimeout$2.read(AsyncTimeout.java:235)
> May 03 23:15:17   at 
> org.testcontainers.shaded.okio.RealBufferedSource.request(RealBufferedSource.java:72)
> May 03 23:15:17   at 
> org.testcontainers.shaded.okio.RealBufferedSource.require(RealBufferedSource.java:65)
> May 03 23:15:17   at 
> org.testcontainers.shaded.okio.RealBufferedSource.readHexadecimalUnsignedLong(RealBufferedSource.java:307)
> May 03 23:15:17   at 
> org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.readChunkSize(Http1ExchangeCodec.java:492)
> May 03 23:15:17   at 
> org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.read(Http1ExchangeCodec.java:471)
> May 03 23:15:17   at 
> org.testcontainers.shaded.okhttp3.internal.Util.skipAll(Util.java:204)
> May 03 23:15:17   at 
> org.testcontainers.shaded.okhttp3.internal.Util.discard(Util.java:186)
> May 03 23:15:17   at 
> org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.close(Http1ExchangeCodec.java:511)
> May 03 23:15:17   at 
> org.testcontainers.shaded.okio.ForwardingSource.close(ForwardingSource.java:43)
> May 03 23:15:17   at 
> org.testcontainers.shaded.okhttp3.internal.connection.Exchange$ResponseBodySource.close(Exchange.java:313)
> May 03 23:15:17   at 
> org.testcontainers.shaded.okio.RealBufferedSource.close(RealBufferedSource.java:476)
> May 03 23:15:17   at 
> org.testcontainers.shaded.okhttp3.internal.Util.closeQuietly(Util.java:139)
> May 03 23:15:17   at 
> org.testcontainers.shaded.okhttp3.ResponseBody.close(ResponseBody.java:192)
> May 03 23:15:17   at 
> org.testcontainers.shaded.okhttp3.Response.close(Response.java:290)
> May 03 23:15:17   at 
> org.testcontainers.shaded.com.github.dockerjava.okhttp.OkDockerHttpClient$OkResponse.close(OkDockerHttpClient.java:285)
> May 03 23:15:17   at 
> org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.lambda$null$0(DefaultInvocationBuilder.java:272)
> May 03 23:15:17   at 
> org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder$$Lambda$83/1670241829.close(Unknown
>  Source)
> May 03 23:15:17   at 
> com.github.dockerjava.api.async.ResultCallbackTemplate.close(ResultCallbackTemplate.java:77)
> May 03 23:15:17   at 
> org.testcontainers.utility.ResourceReaper.start(ResourceReaper.java:202)
> May 03 23:15:17   at 
> org.testcontainers.DockerClientFactory.client(DockerClientFactory.java:205)
> May 03 23:15:17   - locked <0x987a9470> (a [Ljava.lang.Object;)
> May 03 23:15:17   at 
> org.testcontainers.LazyDockerClient.getDockerClient(LazyDockerClient.java:14)
> May 03 23:15:17   at 
> org.testcontainers.LazyDockerClient.authConfig(LazyDockerClient.java:12)
> May 03 23:15:17   at 
> org.testcontainers.containers.GenericContainer.start(GenericContainer.java:310)
> May 03 23:15:17   at 
> org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1029)
> May 03 23:15:17   at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:29)
> May 03 23:15:17   

[jira] [Updated] (FLINK-16137) Translate all DataStream API related pages into Chinese

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16137:
---
Labels: pull-request-available stale-assigned  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 14 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it. If the "stale-assigned" label is not removed in 7 days, the 
issue will be automatically unassigned.


> Translate all DataStream API related pages into Chinese 
> 
>
> Key: FLINK-16137
> URL: https://issues.apache.org/jira/browse/FLINK-16137
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Reporter: Yun Gao
>Assignee: Yun Gao
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.14.0
>
>
> Translate the DataStream-related pages into Chinese, including the pages 
> under the section "Application Development/Streaming (DataStream API)" on the 
> documentation site. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-11937) Resolve small file problem in RocksDB incremental checkpoint

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-11937:
---
Labels: pull-request-available stale-assigned  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 14 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it. If the "stale-assigned" label is not removed in 7 days, the 
issue will be automatically unassigned.


> Resolve small file problem in RocksDB incremental checkpoint
> 
>
> Key: FLINK-11937
> URL: https://issues.apache.org/jira/browse/FLINK-11937
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Checkpointing
>Reporter: Congxian Qiu
>Assignee: Congxian Qiu
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.14.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently when incremental checkpoint is enabled in RocksDBStateBackend a 
> separate file will be generated on DFS for each sst file. This may cause 
> “file flood” when running intensive workload (many jobs with high 
> parallelism) in big cluster. According to our observation in Alibaba 
> production, such file flood introduces at least two drawbacks when using HDFS 
> as the checkpoint storage FileSystem: 1) huge number of RPC request issued to 
> NN which may burst its response queue; 2) huge number of files causes big 
> pressure on NN’s on-heap memory.
> In Flink we previously noticed a similar small-file flood problem and tried 
> to resolve it by introducing ByteStreamStateHandle (FLINK-2808), but this 
> solution has a limitation: if we configure the threshold too low there 
> will still be too many small files, while if too high the JM will eventually 
> OOM. Thus it can hardly resolve the issue when using RocksDBStateBackend 
> with the incremental snapshot strategy.
> We propose a new OutputStream called 
> FileSegmentCheckpointStateOutputStream(FSCSOS) to fix the problem. FSCSOS 
> will reuse the same underlying distributed file until its size exceeds a 
> preset threshold. We
> plan to complete the work in 3 steps: firstly introduce FSCSOS, secondly 
> resolve the specific storage amplification issue on FSCSOS, and lastly add an 
> option to reuse FSCSOS across multiple checkpoints to further reduce the DFS 
> file number.
> More details please refer to the attached design doc.
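The segment idea above can be sketched in a few lines (a toy illustration under assumed names, not Flink's actual FSCSOS): many logical "state files" share one underlying stream, and each logical file is addressed by an (offset, length) pair instead of occupying a separate DFS file.

```java
import java.io.ByteArrayOutputStream;

// Toy sketch of the file-segment idea behind FSCSOS; class and method names
// are illustrative, not Flink's API. A real implementation would write to a
// DFS stream and roll over to a new file once a size threshold is exceeded.
public class SegmentingStream {
    private final ByteArrayOutputStream shared = new ByteArrayOutputStream();
    private int nextOffset = 0;

    // Write one logical "state file" and return its (offset, length) handle
    // into the single shared underlying file.
    public int[] writeSegment(byte[] data) {
        shared.write(data, 0, data.length);
        int[] handle = {nextOffset, data.length};
        nextOffset += data.length;
        return handle;
    }

    // Size of the one shared file that backs all segments; with N segments
    // the DFS still sees a single file instead of N small ones.
    public int sharedSize() {
        return shared.size();
    }
}
```

The design trade-off is that a shared file can only be deleted once no checkpoint references any segment inside it, which is the storage-amplification issue the second step in the plan addresses.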



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21952) Make all the "Connection reset by peer" exception wrapped as RemoteTransportException

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21952:
---
Labels: auto-deprioritized-major stale-assigned  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 14 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it. If the "stale-assigned" label is not removed in 7 days, the 
issue will be automatically unassigned.


> Make all the "Connection reset by peer" exception wrapped as 
> RemoteTransportException
> -
>
> Key: FLINK-21952
> URL: https://issues.apache.org/jira/browse/FLINK-21952
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Reporter: Yun Gao
>Assignee: Yun Gao
>Priority: Major
>  Labels: auto-deprioritized-major, stale-assigned
>
> In CreditBasedPartitionRequestClientHandler#exceptionCaught, an IOException 
> or an exception with the exact message "Connection reset by peer" is marked 
> as a RemoteTransportException. 
> However, with the current Netty implementation, sometimes it might throw 
> {code:java}
> org.apache.flink.shaded.netty4.io.netty.channel.unix.Errors$NativeIoException:
>  readAddress(..) failed: Connection reset by peer
> {code}
> in some cases. It would also be wrapped as a LocalTransportException, which 
> might cause some confusion. 
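The classification gap described above can be shown with a small sketch (hypothetical helper names, not Flink's handler code): an exact-equality check on the message catches the plain variant but misses Netty's native readAddress variant, while a substring check catches both.

```java
// Hypothetical classifier; illustrates only the message-matching gap.
public class ResetClassifier {

    // Exact match, like testing for the literal message
    // "Connection reset by peer".
    public static boolean exactMatch(Throwable t) {
        return "Connection reset by peer".equals(t.getMessage());
    }

    // Substring match also catches Netty's native variant
    // "readAddress(..) failed: Connection reset by peer".
    public static boolean containsMatch(Throwable t) {
        String m = t.getMessage();
        return m != null && m.contains("Connection reset by peer");
    }
}
```

Matching on substring (or on the NativeIoException type itself) is one way the native variant could be routed to RemoteTransportException as the title proposes.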



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20280) Support batch mode for Python DataStream API

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-20280:
---
Labels: pull-request-available stale-assigned  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 14 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it. If the "stale-assigned" label is not removed in 7 days, the 
issue will be automatically unassigned.


> Support batch mode for Python DataStream API
> 
>
> Key: FLINK-20280
> URL: https://issues.apache.org/jira/browse/FLINK-20280
> Project: Flink
>  Issue Type: New Feature
>  Components: API / Python
>Reporter: Dian Fu
>Assignee: Nicholas Jiang
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.14.0
>
>
> Currently, batch mode is still not supported for the Python DataStream API.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21634) ALTER TABLE statement enhancement

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21634:
---
Labels: auto-unassigned stale-assigned  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 14 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it. If the "stale-assigned" label is not removed in 7 days, the 
issue will be automatically unassigned.


> ALTER TABLE statement enhancement
> -
>
> Key: FLINK-21634
> URL: https://issues.apache.org/jira/browse/FLINK-21634
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API, Table SQL / Client
>Reporter: Jark Wu
>Assignee: Jane Chan
>Priority: Major
>  Labels: auto-unassigned, stale-assigned
> Fix For: 1.14.0
>
>
> We already introduced the ALTER TABLE statement in FLIP-69 [1], but it only 
> supports renaming a table and changing table options. One useful feature of 
> the ALTER TABLE statement is modifying the schema. This is also heavily 
> required for integration with data lakes (e.g. Iceberg). 
> Therefore, I propose to support the following ALTER TABLE statements (except 
> for {{SET}} and {{RENAME TO}}, all of them are newly introduced syntax):
> {code:sql}
> ALTER TABLE table_name {
> ADD { <schema_component> | (<schema_component> [, ...]) }
>   | MODIFY { <schema_component> | (<schema_component> [, ...]) }
>   | DROP {column_name | (column_name, column_name, ...) | PRIMARY KEY | 
> CONSTRAINT constraint_name | WATERMARK}
>   | RENAME old_column_name TO new_column_name
>   | RENAME TO new_table_name
>   | SET (key1=val1, ...)
>   | RESET (key1, ...)
> }
> <schema_component>::
>   { <column_component> | <constraint_component> | <watermark_component> }
> <column_component>::
>   column_name <column_definition> [FIRST | AFTER column_name]
> <constraint_component>::
>   [CONSTRAINT constraint_name] PRIMARY KEY (column_name, ...) NOT ENFORCED
> <watermark_component>::
>   WATERMARK FOR rowtime_column_name AS watermark_strategy_expression
> <column_definition>::
>   { <physical_column_definition> | <metadata_column_definition> | 
> <computed_column_definition> } [COMMENT column_comment]
> <physical_column_definition>::
>   column_type
> <metadata_column_definition>::
>   column_type METADATA [ FROM metadata_key ] [ VIRTUAL ]
> <computed_column_definition>::
>   AS computed_column_expression
> {code}
> And some examples:
> {code:sql}
> -- add a new column 
> ALTER TABLE mytable ADD new_column STRING COMMENT 'new_column docs';
> -- add columns, constraint, and watermark
> ALTER TABLE mytable ADD (
> log_ts STRING COMMENT 'log timestamp string' FIRST,
> ts AS TO_TIMESTAMP(log_ts) AFTER log_ts,
> PRIMARY KEY (id) NOT ENFORCED,
> WATERMARK FOR ts AS ts - INTERVAL '3' SECOND
> );
> -- modify a column type
> ALTER TABLE prod.db.sample MODIFY measurement double COMMENT 'unit is bytes 
> per second' AFTER `id`;
> -- modify the definition of columns log_ts and ts, the primary key, and the 
> watermark; they must already exist in the table schema
> ALTER TABLE mytable MODIFY (
> log_ts STRING COMMENT 'log timestamp string' AFTER `id`,  -- reorder 
> columns
> ts AS TO_TIMESTAMP(log_ts) AFTER log_ts,
> PRIMARY KEY (id) NOT ENFORCED,
> WATERMARK FOR ts AS ts - INTERVAL '3' SECOND
> );
> -- drop an old column
> ALTER TABLE prod.db.sample DROP measurement;
> -- drop columns
> ALTER TABLE prod.db.sample DROP (col1, col2, col3);
> -- drop a watermark
> ALTER TABLE prod.db.sample DROP WATERMARK;
> -- rename column name
> ALTER TABLE prod.db.sample RENAME `data` TO payload;
> -- rename table name
> ALTER TABLE mytable RENAME TO mytable2;
> -- set options
> ALTER TABLE kafka_table SET (
> 'scan.startup.mode' = 'specific-offsets', 
> 'scan.startup.specific-offsets' = 'partition:0,offset:42'
> );
> -- reset options
> ALTER TABLE kafka_table RESET ('scan.startup.mode', 
> 'scan.startup.specific-offsets');
> {code}
> Note: we don't need to introduce new interfaces, because all the ALTER TABLE 
> operations will be forwarded to the catalog through {{Catalog#alterTable(tablePath, 
> newTable, ignoreIfNotExists)}}.
> [1]: 
> https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/table/sql/alter/#alter-table
> [2]: http://iceberg.apache.org/spark-ddl/#alter-table-alter-column
> [3]: https://trino.io/docs/current/sql/alter-table.html
> [4]: https://dev.mysql.com/doc/refman/8.0/en/alter-table.html
> [5]: https://www.postgresql.org/docs/9.1/sql-altertable.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10705) Rework Flink Web Dashboard

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10705:
---
Labels: pull-request-available stale-assigned  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 14 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it. If the "stale-assigned" label is not removed in 7 days, the 
issue will be automatically unassigned.


> Rework Flink Web Dashboard
> --
>
> Key: FLINK-10705
> URL: https://issues.apache.org/jira/browse/FLINK-10705
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.6.2, 1.8.0
>Reporter: Fabian Wollert
>Assignee: Yadong Xie
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Attachments: 3rdpartylicenses.txt, image-2018-10-29-09-17-24-115.png, 
> snapshot.jpeg
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The Flink Dashboard is currently very simple and should be updated. This is 
> the umbrella ticket for the related work; please check the 
> sub-tickets for details.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21431) UpsertKafkaTableITCase.testTemporalJoin hang

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21431:
---
Labels: pull-request-available stale-assigned test-stability  (was: 
pull-request-available test-stability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 14 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it. If the "stale-assigned" label is not removed in 7 days, the 
issue will be automatically unassigned.


> UpsertKafkaTableITCase.testTemporalJoin hang
> 
>
> Key: FLINK-21431
> URL: https://issues.apache.org/jira/browse/FLINK-21431
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.12.2, 1.13.0
>Reporter: Guowei Ma
>Assignee: Leonard Xu
>Priority: Critical
>  Labels: pull-request-available, stale-assigned, test-stability
> Fix For: 1.13.0, 1.12.3
>
>
> This case hangs almost 3 hours:
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13543=logs=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f=f266c805-9429-58ed-2f9e-482e7b82f58b
> {code:java}
> Test testTemporalJoin[format = 
> csv](org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaTableITCase)
>  is running. 
> 
>  23:08:43,259 [ main] INFO 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl [] - 
> Creating topic users_csv 23:08:45,303 [ main] WARN 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer [] - Property 
> [transaction.timeout.ms] not specified. Setting it to 360 ms 23:08:45,430 
> [ChangelogNormalize(key=[user_id]) -> Calc(select=[user_id, user_name, 
> region, CAST(modification_time) AS timestamp]) -> Sink: 
> Sink(table=[default_catalog.default_database.users_csv], fields=[user_id, 
> user_name, region, timestamp]) (1/1)#0] WARN 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer [] - Using 
> AT_LEAST_ONCE semantic, but checkpointing is not enabled. Switching to NONE 
> semantic. 23:08:45,438 [ChangelogNormalize(key=[user_id]) -> 
> Calc(select=[user_id, user_name, region, CAST(modification_time) AS 
> timestamp]) -> Sink: Sink(table=[default_catalog.default_database.users_csv], 
> fields=[user_id, user_name, region, timestamp]) (1/1)#0] INFO 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer [] - Starting 
> FlinkKafkaInternalProducer (1/1) to produce into default topic users_csv 
> 23:08:45,791 [Source: TableSourceScan(table=[[default_catalog, 
> default_database, users_csv, watermark=[CAST($3):TIMESTAMP(3)]]], 
> fields=[user_id, user_name, region, timestamp]) (1/1)#0] INFO 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase [] - 
> Consumer subtask 0 has no restore state. 23:08:45,810 [Source: 
> TableSourceScan(table=[[default_catalog, default_database, users_csv, 
> watermark=[CAST($3):TIMESTAMP(3)]]], fields=[user_id, user_name, region, 
> timestamp]) (1/1)#0] INFO 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase [] - 
> Consumer subtask 0 will start reading the following 2 partitions from the 
> earliest offsets: [KafkaTopicPartition{topic='users_csv', partition=1}, 
> KafkaTopicPartition{topic='users_csv', partition=0}] 23:08:45,825 [Legacy 
> Source Thread - Source: TableSourceScan(table=[[default_catalog, 
> default_database, users_csv, watermark=[CAST($3):TIMESTAMP(3)]]], 
> fields=[user_id, user_name, region, timestamp]) (1/1)#0] INFO 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase [] - 
> Consumer subtask 0 creating fetcher with offsets 
> {KafkaTopicPartition{topic='users_csv', partition=1}=-915623761775, 
> KafkaTopicPartition{topic='users_csv', partition=0}=-915623761775}. 
> ##[error]The operation was canceled.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16073) Translate "State & Fault Tolerance" pages into Chinese

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16073:
---
Labels: stale-assigned  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 14 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it. If the "stale-assigned" label is not removed in 7 days, the 
issue will be automatically unassigned.


> Translate "State & Fault Tolerance" pages into Chinese
> --
>
> Key: FLINK-16073
> URL: https://issues.apache.org/jira/browse/FLINK-16073
> Project: Flink
>  Issue Type: New Feature
>  Components: chinese-translation, Documentation
>Affects Versions: 1.11.0
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Major
>  Labels: stale-assigned
> Fix For: 1.14.0
>
>
> Translate all "State & Fault Tolerance" related pages into Chinese, including 
> pages under `docs/dev/stream/state/` and `docs/ops/state`



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21420) Add path templating to the DataStream API

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21420:
---
Labels: auto-deprioritized-major stale-assigned  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 14 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it. If the "stale-assigned" label is not removed in 7 days, the 
issue will be automatically unassigned.


> Add path templating to the DataStream API
> -
>
> Key: FLINK-21420
> URL: https://issues.apache.org/jira/browse/FLINK-21420
> Project: Flink
>  Issue Type: Improvement
>  Components: Stateful Functions
>Reporter: Miguel Araujo
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Major
>  Labels: auto-deprioritized-major, stale-assigned
> Fix For: statefun-3.1.0
>
>
> Path Template was introduced in FLINK-20264 with a new module YAML 
> specification being added in FLINK-20334.
> However, that possibility was not added to the DataStream API.
> The main problem is that RequestReplyFunctionBuilder can only receive 
> FunctionTypes, which it then turns into function-type Targets for the 
> HttpFunctionEndpointSpec builder:
> {code:java}
> private RequestReplyFunctionBuilder(FunctionType functionType, URI endpoint) {
>   this.builder =
>   HttpFunctionEndpointSpec.builder(
>   Target.functionType(functionType), new 
> UrlPathTemplate(endpoint.toASCIIString()));
> }
> {code}
> It should also be possible for the RequestReplyFunctionBuilder to receive a 
> namespace instead of a function type and to use `Target.namespace(namespace)` 
> to initialize the HttpFunctionEndpointSpec Builder instead.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16012) Reduce the default number of exclusive buffers from 2 to 1 on receiver side

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16012:
---
Labels: pull-request-available stale-assigned  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 14 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it. If the "stale-assigned" label is not removed in 7 days, the 
issue will be automatically unassigned.


> Reduce the default number of exclusive buffers from 2 to 1 on receiver side
> ---
>
> Key: FLINK-16012
> URL: https://issues.apache.org/jira/browse/FLINK-16012
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Network
>Reporter: Zhijiang
>Assignee: Yingjie Cao
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.14.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In order to reduce the number of in-flight buffers for checkpoints under back 
> pressure, we can reduce the number of exclusive buffers per remote input 
> channel from the default of 2 to 1 as a first step. As a result, the total 
> number of required buffers is also reduced. We can further verify the 
> performance effect via various benchmarks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22771) Add TestContext to Java SDK

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22771:
---
Labels: pull-request-available stale-assigned  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 14 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it. If the "stale-assigned" label is not removed in 7 days, the 
issue will be automatically unassigned.


> Add TestContext to Java SDK
> ---
>
> Key: FLINK-22771
> URL: https://issues.apache.org/jira/browse/FLINK-22771
> Project: Flink
>  Issue Type: Sub-task
>  Components: Stateful Functions
>Reporter: Konstantin Knauf
>Assignee: Konstantin Knauf
>Priority: Major
>  Labels: pull-request-available, stale-assigned
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18438) TaskManager start failed

2021-06-11 Thread Derek Moore (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361927#comment-17361927
 ] 

Derek Moore commented on FLINK-18438:
-

[~JohnSiro], [~cr0566], [~xintongsong]:

This issue resolves itself when you use the `set -o igncr` shell command. The 
need is described here, but the visible error seems to have changed:

https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/resource-providers/standalone/overview/#windows-cygwin-users

> TaskManager start failed
> 
>
> Key: FLINK-18438
> URL: https://issues.apache.org/jira/browse/FLINK-18438
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission
>Affects Versions: 1.10.1
> Environment: Java:   java version "1.8.0_101"
>           Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
>           Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode) 
> Flink: 1.10.1 (flink-1.10.1-bin-scala_2.12.tgz)
> OS: Windows 10 (1903) / 64-bits
>Reporter: JohnSiro
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.10.4, 1.11.4, 1.14.0
>
>
>  
> Error:  in file  xxx-taskexecutor-0-xxx.out is:
> Error: Could not create the Java Virtual Machine.
> Error: A fatal exception has occurred. Program will exit.
> Improperly specified VM option 'MaxMetaspaceSize=268435456 '



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-18182) Upgrade AWS SDK in flink-connector-kinesis to include new region af-south-1

2021-06-11 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer reassigned FLINK-18182:
-

Assignee: Danny Cranmer

> Upgrade AWS SDK in flink-connector-kinesis to include new region af-south-1
> ---
>
> Key: FLINK-18182
> URL: https://issues.apache.org/jira/browse/FLINK-18182
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kinesis
>Affects Versions: 1.11.1
>Reporter: Tom Wells
>Assignee: Danny Cranmer
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
>
> Current (1.11.1) version of flink-connector-kinesis is compiled against 
> version 1.11.754 of the AWS SDK, which does not include the new af-south-1 
> (Cape Town) AWS region.
> Looking at the release notes for AWS SDK - this region is included from 
> version 1.11.768 onwards.
> I'd be happy to try and create a PR for this.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22891) FineGrainedSlotManagerDefaultResourceAllocationStrategyITCase fails on azure

2021-06-11 Thread Anton Kalashnikov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361863#comment-17361863
 ] 

Anton Kalashnikov commented on FLINK-22891:
---

[~karmagyz], I still observe this problem on the fresh changes - 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18912=logs=a57e0635-3fad-5b08-57c7-a4142d7d6fa9=5360d54c-8d94-5d85-304e-a89267eb785a=6980]

 

I believe we should keep at least one such ticket open to collect the problems. 
The guess about overloaded machines is plausible; maybe it makes sense to 
increase the corresponding timeout specifically for this test, so that at least 
we can be sure it is not a deadlock or a missing completion.

> FineGrainedSlotManagerDefaultResourceAllocationStrategyITCase fails on azure
> 
>
> Key: FLINK-22891
> URL: https://issues.apache.org/jira/browse/FLINK-22891
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18700=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=05b74a19-4ee4-5036-c46f-ada307df6cf0=8660
> {code}
> Jun 05 21:16:00 [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 6.24 s <<< FAILURE! - in 
> org.apache.flink.runtime.resourcemanager.slotmanager.FineGrainedSlotManagerDefaultResourceAllocationStrategyITCase
> Jun 05 21:16:00 [ERROR] 
> testResourceCanBeAllocatedForDifferentJobWithDeclarationBeforeSlotFree(org.apache.flink.runtime.resourcemanager.slotmanager.FineGrainedSlotManagerDefaultResourceAllocationStrategyITCase)
>   Time elapsed: 5.015 s  <<< ERROR!
> Jun 05 21:16:00 java.util.concurrent.TimeoutException
> Jun 05 21:16:00   at 
> java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1784)
> Jun 05 21:16:00   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
> Jun 05 21:16:00   at 
> org.apache.flink.runtime.resourcemanager.slotmanager.FineGrainedSlotManagerTestBase.assertFutureCompleteAndReturn(FineGrainedSlotManagerTestBase.java:121)
> Jun 05 21:16:00   at 
> org.apache.flink.runtime.resourcemanager.slotmanager.AbstractFineGrainedSlotManagerITCase$4.lambda$new$4(AbstractFineGrainedSlotManagerITCase.java:374)
> Jun 05 21:16:00   at 
> org.apache.flink.runtime.resourcemanager.slotmanager.FineGrainedSlotManagerTestBase$Context.runTest(FineGrainedSlotManagerTestBase.java:212)
> Jun 05 21:16:00   at 
> org.apache.flink.runtime.resourcemanager.slotmanager.AbstractFineGrainedSlotManagerITCase$4.(AbstractFineGrainedSlotManagerITCase.java:310)
> Jun 05 21:16:00   at 
> org.apache.flink.runtime.resourcemanager.slotmanager.AbstractFineGrainedSlotManagerITCase.testResourceCanBeAllocatedForDifferentJobAfterFree(AbstractFineGrainedSlotManagerITCase.java:308)
> Jun 05 21:16:00   at 
> org.apache.flink.runtime.resourcemanager.slotmanager.AbstractFineGrainedSlotManagerITCase.testResourceCanBeAllocatedForDifferentJobWithDeclarationBeforeSlotFree(AbstractFineGrainedSlotManagerITCase.java:262)
> Jun 05 21:16:00   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jun 05 21:16:00   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jun 05 21:16:00   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jun 05 21:16:00   at java.lang.reflect.Method.invoke(Method.java:498)
> Jun 05 21:16:00   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Jun 05 21:16:00   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jun 05 21:16:00   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Jun 05 21:16:00   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jun 05 21:16:00   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Jun 05 21:16:00   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> Jun 05 21:16:00   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Jun 05 21:16:00   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Jun 05 21:16:00   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Jun 05 21:16:00   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Jun 05 21:16:00   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Jun 05 21:16:00   at 
> 

[jira] [Updated] (FLINK-22970) The documentation for `TO_TIMESTAMP` UDF has an incorrect description

2021-06-11 Thread Timo Walther (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-22970:
-
Component/s: Table SQL / API

> The documentation for `TO_TIMESTAMP` UDF has an incorrect description
> -
>
> Key: FLINK-22970
> URL: https://issues.apache.org/jira/browse/FLINK-22970
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table SQL / API
>Reporter: Wei-Che Wei
>Priority: Minor
>
> According to this ML discussion 
> [http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/confused-about-TO-TIMESTAMP-document-description-td44352.html]
> The description for `TO_TIMESTAMP` udf is not correct. It will use UTC+0 
> timezone instead of session timezone. We should fix this documentation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22968) Improve exception message when using toAppendStream[String]

2021-06-11 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361827#comment-17361827
 ] 

Jark Wu commented on FLINK-22968:
-

[~twalthr], do you think we still need to improve exception messages for 
{{toAppendStream}} or not? 

> Improve exception message when using toAppendStream[String]
> ---
>
> Key: FLINK-22968
> URL: https://issues.apache.org/jira/browse/FLINK-22968
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.3, 1.13.1
> Environment: {color:#FF}*Flink-1.13.1 and Flink-1.12.1*{color}
>Reporter: DaChun
>Assignee: Nicholas Jiang
>Priority: Major
> Attachments: test.scala, this_error.txt
>
>
> {code:scala}
> package com.bytedance.one
> import org.apache.flink.streaming.api.scala.{DataStream, 
> StreamExecutionEnvironment}
> import org.apache.flink.table.api.Table
> import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment
> import org.apache.flink.api.scala._
> object test {
>   def main(args: Array[String]): Unit = {
> val env: StreamExecutionEnvironment = StreamExecutionEnvironment
>   .createLocalEnvironmentWithWebUI()
> val stream: DataStream[String] = env.readTextFile("data/wc.txt")
> val tableEnvironment: StreamTableEnvironment = 
> StreamTableEnvironment.create(env)
> val table: Table = tableEnvironment.fromDataStream(stream)
> tableEnvironment.createTemporaryView("wc", table)
> val res: Table = tableEnvironment.sqlQuery("select * from wc")
> tableEnvironment.toAppendStream[String](res).print()
> env.execute("test")
>   }
> }
> {code}
> When I run the program, it fails; the full error is in the attached 
> this_error.txt. The error message prompts me to file an issue:
> Caused by: org.apache.flink.api.common.InvalidProgramException: Table program 
> cannot be compiled. This is a bug. Please file an issue.
> The code simply reads a stream and converts it into a table, with String as 
> the generic type, and then the error is reported. The code is in test.scala 
> and the error is in this_error.txt.
> If I create an entity class myself and use it as the generic type, it works, 
> as does the Row type. Do you have to create your own entity class instead of 
> using the String type?
> h3. Summary
> We should improve the exception message a bit, we can throw the given type 
> (String) is not allowed in {{toAppendStream}}.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18438) TaskManager start failed

2021-06-11 Thread Derek Moore (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361826#comment-17361826
 ] 

Derek Moore commented on FLINK-18438:
-

[~JohnSiro], [~cr0566], [~xintongsong], et al.,

The issue is a ^M (carriage return) character at the end of the argument. In 
the original report above it shows as a space. In FLINK-18792, it is shown as a 
newline (notice the wrapped quote character).

A number followed by ^M is an improperly specified option. You can use `less` 
to see the control character in the logs.

Haven't figured out how to get rid of the ^M character yet!

I recently pulled down Flink 1.13.1 & latest Cygwin, and I get this error. 
Things work even less under Git Bash.
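The stray carriage return can be demonstrated and stripped in a POSIX shell. A 
minimal sketch (`tr` and `cat -v` are standard utilities; the option value is 
taken from the error message above):

```shell
# The VM option from the error message, with a trailing carriage return (\r)
# appended the way a CRLF-edited config file would leave it.
opt_with_cr=$(printf 'MaxMetaspaceSize=268435456\r')

# cat -v renders the carriage return visibly as ^M
printf '%s\n' "$opt_with_cr" | cat -v

# Strip the carriage return (this is essentially what dos2unix does per line)
opt_clean=$(printf '%s' "$opt_with_cr" | tr -d '\r')
echo "$opt_clean"   # prints: MaxMetaspaceSize=268435456
```

Running {{dos2unix}} over the Flink config and the scripts in {{bin/}}, or 
enabling Cygwin's `set -o igncr`, removes the character at the source.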

> TaskManager start failed
> 
>
> Key: FLINK-18438
> URL: https://issues.apache.org/jira/browse/FLINK-18438
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission
>Affects Versions: 1.10.1
> Environment: Java:   java version "1.8.0_101"
>           Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
>           Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode) 
> Flink: 1.10.1 (flink-1.10.1-bin-scala_2.12.tgz)
> OS: Windows 10 (1903) / 64-bits
>Reporter: JohnSiro
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.10.4, 1.11.4, 1.14.0
>
>
>  
> Error:  in file  xxx-taskexecutor-0-xxx.out is:
> Error: Could not create the Java Virtual Machine.
> Error: A fatal exception has occurred. Program will exit.
> Improperly specified VM option 'MaxMetaspaceSize=268435456 '



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22882) Tasks are blocked while emitting records

2021-06-11 Thread Piotr Nowojski (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361733#comment-17361733
 ] 

Piotr Nowojski commented on FLINK-22882:


This turned out to be another symptom of FLINK-22881. After broadcasting the 
stream status, but before emitting the actual record, the task could end up 
with zero available buffers, which caused exactly the observed behaviour.

> Tasks are blocked while emitting records
> 
>
> Key: FLINK-22882
> URL: https://issues.apache.org/jira/browse/FLINK-22882
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network, Runtime / Task
>Affects Versions: 1.14.0
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>Priority: Critical
>
> On a cluster I observed symptoms of tasks being blocked for long time, 
> causing long delays with unaligned checkpointing. 1% of those cases were 
> caused by emitting records:
> {noformat}
> 2021-06-04 14:40:24,115 ERROR 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool   [] - Blocking 
> wait [16497 ms] for an available buffer.
> java.lang.Exception: Stracktracegenerator
> at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestMemorySegmentBlocking(LocalBufferPool.java:323)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBufferBuilderBlocking(LocalBufferPool.java:290)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.BufferWritingResultPartition.requestNewBufferBuilderFromPool(BufferWritingResultPartition.java:338)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.BufferWritingResultPartition.requestNewUnicastBufferBuilder(BufferWritingResultPartition.java:314)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.BufferWritingResultPartition.appendUnicastDataForRecordContinuation(BufferWritingResultPartition.java:258)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.BufferWritingResultPartition.emitRecord(BufferWritingResultPartition.java:147)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:104)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.api.writer.ChannelSelectorRecordWriter.emit(ChannelSelectorRecordWriter.java:54)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWriter(RecordWriterOutput.java:105)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:90)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:44)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:38)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:101)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:82)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:39)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.runtime.tasks.SourceOperatorStreamTask$AsyncDataOutputToOutput.emitRecord(SourceOperatorStreamTask.java:182)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.api.operators.source.SourceOutputWithWatermarks.collect(SourceOutputWithWatermarks.java:110)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.api.operators.source.SourceOutputWithWatermarks.collect(SourceOutputWithWatermarks.java:101)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> 

[jira] [Closed] (FLINK-22934) Add instructions for using the " ' " escape syntax of SQL client

2021-06-11 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-22934.
---
Fix Version/s: 1.14.0
   Resolution: Fixed

Fixed in master: ddfd5cac045c7aa0ad3561b08ed0215db397

> Add instructions for using the " ' " escape syntax of SQL client
> 
>
> Key: FLINK-22934
> URL: https://issues.apache.org/jira/browse/FLINK-22934
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.13.0, 1.13.1
>Reporter: Roc Marshal
>Assignee: Roc Marshal
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> # FLINK-22921



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-22882) Tasks are blocked while emitting records

2021-06-11 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski closed FLINK-22882.
--
Resolution: Duplicate

> Tasks are blocked while emitting records
> 
>
> Key: FLINK-22882
> URL: https://issues.apache.org/jira/browse/FLINK-22882
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network, Runtime / Task
>Affects Versions: 1.14.0
>Reporter: Piotr Nowojski
>Priority: Critical
>
> On a cluster I observed symptoms of tasks being blocked for a long time, 
> causing long delays with unaligned checkpointing. 1% of those cases were 
> caused by emitting records:
> {noformat}
> 2021-06-04 14:40:24,115 ERROR 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool   [] - Blocking 
> wait [16497 ms] for an available buffer.
> java.lang.Exception: Stracktracegenerator
> at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestMemorySegmentBlocking(LocalBufferPool.java:323)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBufferBuilderBlocking(LocalBufferPool.java:290)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.BufferWritingResultPartition.requestNewBufferBuilderFromPool(BufferWritingResultPartition.java:338)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.BufferWritingResultPartition.requestNewUnicastBufferBuilder(BufferWritingResultPartition.java:314)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.BufferWritingResultPartition.appendUnicastDataForRecordContinuation(BufferWritingResultPartition.java:258)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.BufferWritingResultPartition.emitRecord(BufferWritingResultPartition.java:147)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:104)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.api.writer.ChannelSelectorRecordWriter.emit(ChannelSelectorRecordWriter.java:54)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWriter(RecordWriterOutput.java:105)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:90)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:44)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:38)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:101)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:82)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:39)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.runtime.tasks.SourceOperatorStreamTask$AsyncDataOutputToOutput.emitRecord(SourceOperatorStreamTask.java:182)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.api.operators.source.SourceOutputWithWatermarks.collect(SourceOutputWithWatermarks.java:110)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.api.operators.source.SourceOutputWithWatermarks.collect(SourceOutputWithWatermarks.java:101)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.api.connector.source.lib.util.IteratorSourceReader.pollNext(IteratorSourceReader.java:98)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:294)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> 

[jira] [Assigned] (FLINK-22882) Tasks are blocked while emitting records

2021-06-11 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski reassigned FLINK-22882:
--

Assignee: Piotr Nowojski

> Tasks are blocked while emitting records
> 
>
> Key: FLINK-22882
> URL: https://issues.apache.org/jira/browse/FLINK-22882
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network, Runtime / Task
>Affects Versions: 1.14.0
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>Priority: Critical
>
> On a cluster I observed symptoms of tasks being blocked for a long time, 
> causing long delays with unaligned checkpointing. 1% of those cases were 
> caused by emitting records:
> {noformat}
> 2021-06-04 14:40:24,115 ERROR 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool   [] - Blocking 
> wait [16497 ms] for an available buffer.
> java.lang.Exception: Stracktracegenerator
> at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestMemorySegmentBlocking(LocalBufferPool.java:323)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBufferBuilderBlocking(LocalBufferPool.java:290)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.BufferWritingResultPartition.requestNewBufferBuilderFromPool(BufferWritingResultPartition.java:338)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.BufferWritingResultPartition.requestNewUnicastBufferBuilder(BufferWritingResultPartition.java:314)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.BufferWritingResultPartition.appendUnicastDataForRecordContinuation(BufferWritingResultPartition.java:258)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.BufferWritingResultPartition.emitRecord(BufferWritingResultPartition.java:147)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:104)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.api.writer.ChannelSelectorRecordWriter.emit(ChannelSelectorRecordWriter.java:54)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWriter(RecordWriterOutput.java:105)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:90)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:44)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:38)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:101)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:82)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:39)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.runtime.tasks.SourceOperatorStreamTask$AsyncDataOutputToOutput.emitRecord(SourceOperatorStreamTask.java:182)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.api.operators.source.SourceOutputWithWatermarks.collect(SourceOutputWithWatermarks.java:110)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.api.operators.source.SourceOutputWithWatermarks.collect(SourceOutputWithWatermarks.java:101)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.api.connector.source.lib.util.IteratorSourceReader.pollNext(IteratorSourceReader.java:98)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:294)
>  

[jira] [Updated] (FLINK-22973) Provide benchmark for unaligned checkpoint with timeouts

2021-06-11 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski updated FLINK-22973:
---
Description: Provide some simple benchmark to test speed of the unaligned 
checkpoints with timeouts on our continuous benchmarking infrastructure.  (was: 
Provide some simple benchmark to test speed of the unaligned checkpoints on our 
continuous benchmarking infrastructure.)

> Provide benchmark for unaligned checkpoint with timeouts
> 
>
> Key: FLINK-22973
> URL: https://issues.apache.org/jira/browse/FLINK-22973
> Project: Flink
>  Issue Type: New Feature
>  Components: Benchmarks, Runtime / Network
>Affects Versions: 1.14.0
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>Priority: Major
> Fix For: 1.14.0
>
>
> Provide some simple benchmark to test speed of the unaligned checkpoints with 
> timeouts on our continuous benchmarking infrastructure.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22973) Provide benchmark for unaligned checkpoint with timeouts

2021-06-11 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski updated FLINK-22973:
---
Summary: Provide benchmark for unaligned checkpoint with timeouts  (was: 
Provide benchmark for unaligned checkpoints performance)

> Provide benchmark for unaligned checkpoint with timeouts
> 
>
> Key: FLINK-22973
> URL: https://issues.apache.org/jira/browse/FLINK-22973
> Project: Flink
>  Issue Type: New Feature
>  Components: Benchmarks, Runtime / Network
>Affects Versions: 1.14.0
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>Priority: Major
> Fix For: 1.14.0
>
>
> Provide some simple benchmark to test speed of the unaligned checkpoints on 
> our continuous benchmarking infrastructure.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-22593) SavepointITCase.testShouldAddEntropyToSavepointPath unstable

2021-06-11 Thread Anton Kalashnikov (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Kalashnikov reassigned FLINK-22593:
-

Assignee: Anton Kalashnikov

> SavepointITCase.testShouldAddEntropyToSavepointPath unstable
> 
>
> Key: FLINK-22593
> URL: https://issues.apache.org/jira/browse/FLINK-22593
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.14.0
>Reporter: Robert Metzger
>Assignee: Anton Kalashnikov
>Priority: Blocker
>  Labels: stale-blocker, stale-critical, test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=9072=logs=cc649950-03e9-5fae-8326-2f1ad744b536=51cab6ca-669f-5dc0-221d-1e4f7dc4fc85
> {code}
> 2021-05-07T10:56:20.9429367Z May 07 10:56:20 [ERROR] Tests run: 13, Failures: 
> 0, Errors: 1, Skipped: 0, Time elapsed: 33.441 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.SavepointITCase
> 2021-05-07T10:56:20.9445862Z May 07 10:56:20 [ERROR] 
> testShouldAddEntropyToSavepointPath(org.apache.flink.test.checkpointing.SavepointITCase)
>   Time elapsed: 2.083 s  <<< ERROR!
> 2021-05-07T10:56:20.9447106Z May 07 10:56:20 
> java.util.concurrent.ExecutionException: 
> java.util.concurrent.CompletionException: 
> org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint 
> triggering task Sink: Unnamed (3/4) of job 4e155a20f0a7895043661a6446caf1cb 
> has not being executed at the moment. Aborting checkpoint. Failure reason: 
> Not all required tasks are currently running.
> 2021-05-07T10:56:20.9448194Z May 07 10:56:20  at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> 2021-05-07T10:56:20.9448797Z May 07 10:56:20  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2021-05-07T10:56:20.9449428Z May 07 10:56:20  at 
> org.apache.flink.test.checkpointing.SavepointITCase.submitJobAndTakeSavepoint(SavepointITCase.java:305)
> 2021-05-07T10:56:20.9450160Z May 07 10:56:20  at 
> org.apache.flink.test.checkpointing.SavepointITCase.testShouldAddEntropyToSavepointPath(SavepointITCase.java:273)
> 2021-05-07T10:56:20.9450785Z May 07 10:56:20  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-05-07T10:56:20.9451331Z May 07 10:56:20  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-05-07T10:56:20.9451940Z May 07 10:56:20  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-05-07T10:56:20.9452498Z May 07 10:56:20  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2021-05-07T10:56:20.9453247Z May 07 10:56:20  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2021-05-07T10:56:20.9454007Z May 07 10:56:20  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2021-05-07T10:56:20.9454687Z May 07 10:56:20  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2021-05-07T10:56:20.9455302Z May 07 10:56:20  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2021-05-07T10:56:20.9455909Z May 07 10:56:20  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2021-05-07T10:56:20.9456493Z May 07 10:56:20  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2021-05-07T10:56:20.9457074Z May 07 10:56:20  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2021-05-07T10:56:20.9457636Z May 07 10:56:20  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2021-05-07T10:56:20.9458157Z May 07 10:56:20  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2021-05-07T10:56:20.9458678Z May 07 10:56:20  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2021-05-07T10:56:20.9459252Z May 07 10:56:20  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2021-05-07T10:56:20.9459865Z May 07 10:56:20  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2021-05-07T10:56:20.9460433Z May 07 10:56:20  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2021-05-07T10:56:20.9461058Z May 07 10:56:20  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2021-05-07T10:56:20.9461607Z May 07 10:56:20  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2021-05-07T10:56:20.9462159Z May 07 10:56:20  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2021-05-07T10:56:20.9462705Z May 07 10:56:20  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 

[jira] [Created] (FLINK-22976) Whether to consider adding config-option to control whether to exclude record.key value from record.value value

2021-06-11 Thread hehuiyuan (Jira)
hehuiyuan created FLINK-22976:
-

 Summary: Whether to consider adding config-option to control  
whether to exclude record.key value from  record.value value 
 Key: FLINK-22976
 URL: https://issues.apache.org/jira/browse/FLINK-22976
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Kafka
Reporter: hehuiyuan


upsert-kafka:

key :

{"name":"hehui111","sname":"wman"}

 

value :

{"name":"hehui111","sname":"wman","sno":"wman","sclass":"wman","address":"wman"}

 

The ProducerRecord's value contains the fields of the ProducerRecord's key.

Consider adding a config option to control whether the record.key fields are 
excluded from the record.value.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-22975) Specify port or range for k8s service

2021-06-11 Thread Jun Zhang (Jira)
Jun Zhang created FLINK-22975:
-

 Summary: Specify port or range for k8s service
 Key: FLINK-22975
 URL: https://issues.apache.org/jira/browse/FLINK-22975
 Project: Flink
  Issue Type: Improvement
  Components: Deployment / Kubernetes
Affects Versions: 1.13.1
Reporter: Jun Zhang
 Fix For: 1.14.0


When we deploy a Flink program on Kubernetes, the service port is randomly 
generated. This random port may not be accessible due to the company's network 
policy, so I think we should let users specify the port or port range that is 
exposed externally, similar to the '--service-node-port-range' parameter.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10294) Split MiniClusterResource to be usable in runtime/streaming-java/clients

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10294:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is a Major, please either assign yourself or give an update. 
Afterwards, please remove the label or in 7 days the issue will be deprioritized.


> Split MiniClusterResource to be usable in runtime/streaming-java/clients
> 
>
> Key: FLINK-10294
> URL: https://issues.apache.org/jira/browse/FLINK-10294
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / DataStream, Tests
>Affects Versions: 1.7.0
>Reporter: Chesnay Schepler
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> h5. Problem
> The {{MiniClusterResource}} is a utility class to create and manage flink 
> clusters for testing purposes. It is incredibly convenient, but unfortunately 
> resides in {{flink-test-utils}} which depends on flink-runtime, 
> flink-streaming-java and flink-clients, making the class not usable in these 
> modules.
> The current version does require these dependencies, but only for specific, 
> *optional*, parts. {{streaming-java}} is only required for accessing 
> {{TestStreamEnvironment}} and {{flink-clients}} only for tests that want to 
> work against a {{ClusterClient}}.
> h5. Proposal
> Split the {{MiniClusterResource}} as follows:
> h5. 1)
> Remove client/streaming-java dependent parts and move the class to 
> flink-runtime.
> h5. 2)
> Add a new class {{StreamingMiniClusterResourceExtension}} that accepts a  
> {{MiniClusterResource}} as an argument and contains the streaming parts.
> Usage would look like this:
> {code}
> private final MiniClusterResource cluster = ...
> private final StreamingMiniClusterResourceExtension ext = new 
> StreamingMiniClusterResourceExtension(cluster);
> @Rule
> public RuleChain chain = RuleChain
>   .outerRule(cluster)
>   .around(ext);
> {code}
> h5. 3)
> Add a new class {{ClientMiniClusterResourceExtension}} that accepts a  
> {{MiniClusterResource}} as an argument and contains the client parts.
> Usage would look like this:
> {code}
> private final MiniClusterResource cluster = ...
> private final ClientMiniClusterResourceExtension ext = new 
> ClientMiniClusterResourceExtension(cluster);
> @Rule
> public RuleChain chain = RuleChain
>   .outerRule(cluster)
>   .around(ext);
> {code}
> [~till.rohrmann] WDYT?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10273) Access composite type fields after a function

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10273:
---
Labels: auto-unassigned pull-request-available stale-major  (was: 
auto-unassigned pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is a Major, please either assign yourself or give an update. 
Afterwards, please remove the label or in 7 days the issue will be deprioritized.


> Access composite type fields after a function
> -
>
> Key: FLINK-10273
> URL: https://issues.apache.org/jira/browse/FLINK-10273
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.7.0
>Reporter: Timo Walther
>Priority: Major
>  Labels: auto-unassigned, pull-request-available, stale-major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> If a function returns a composite type, for example, {{Row(lon: Float, lat: 
> Float)}}. There is currently no way of accessing fields.
> Both queries fail with exceptions:
> {code}
> select t.c.lat, t.c.lon FROM (select toCoords(12) as c) AS t
> {code}
> {code}
> select toCoords(12).lat
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10297) PostVersionedIOReadableWritable ignores result of InputStream.read(...)

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10297:
---
Labels: auto-deprioritized-critical stale-major  (was: 
auto-deprioritized-critical)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is a Major, please either assign yourself or give an update. 
Afterwards, please remove the label or in 7 days the issue will be deprioritized.


> PostVersionedIOReadableWritable ignores result of InputStream.read(...)
> ---
>
> Key: FLINK-10297
> URL: https://issues.apache.org/jira/browse/FLINK-10297
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.4.2, 1.5.3, 1.6.0
>Reporter: Stefan Richter
>Priority: Major
>  Labels: auto-deprioritized-critical, stale-major
>
> PostVersionedIOReadableWritable ignores result of {{InputStream.read(...)}}. 
> Probably the intention was to invoke {{readFully}}. As it is now, this can 
> lead to a corrupted deserialization.
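The failure mode can be reproduced outside Flink with a short self-contained sketch (class and method names here are illustrative, not Flink's actual code): `InputStream.read(byte[])` may legally return fewer bytes than requested, so ignoring its result silently leaves the tail of the buffer zeroed, while `DataInputStream.readFully` loops until the buffer is filled.

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.util.Arrays;

public class ReadFullyDemo {

    /** Delivers at most one byte per read(...) call -- legal per the
     *  InputStream contract, and exactly the case where ignoring the
     *  return value of read(byte[]) corrupts data. */
    static class TricklingStream extends InputStream {
        private final byte[] data;
        private int pos;

        TricklingStream(byte[] data) { this.data = data; }

        @Override public int read() {
            return pos < data.length ? (data[pos++] & 0xFF) : -1;
        }

        @Override public int read(byte[] b, int off, int len) {
            if (len == 0) return 0;
            if (pos >= data.length) return -1;
            b[off] = data[pos++];
            return 1; // fewer bytes than requested
        }
    }

    /** Buggy pattern: the return value of read(...) is ignored. */
    static byte[] buggyRead(byte[] payload) {
        byte[] buf = new byte[payload.length];
        try {
            new TricklingStream(payload).read(buf); // result ignored!
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return buf; // only the first byte arrived; the rest is zeroed
    }

    /** Correct pattern: readFully fills the buffer or throws EOFException. */
    static byte[] fullRead(byte[] payload) {
        byte[] buf = new byte[payload.length];
        try {
            new DataInputStream(new TricklingStream(payload)).readFully(buf);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return buf;
    }

    public static void main(String[] args) {
        byte[] payload = {10, 20, 30, 40};
        System.out.println(Arrays.toString(buggyRead(payload))); // [10, 0, 0, 0]
        System.out.println(Arrays.toString(fullRead(payload)));  // [10, 20, 30, 40]
    }
}
```

A buffer read this way then gets handed to the deserializer with its tail zeroed, which matches the "corrupted deserialization" symptom described above.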



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10489) Inconsistent window information for streams with EventTime characteristic

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10489:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is a Major, please either assign yourself or give an update. 
Afterwards, please remove the label or in 7 days the issue will be deprioritized.


> Inconsistent window information for streams with EventTime characteristic
> -
>
> Key: FLINK-10489
> URL: https://issues.apache.org/jira/browse/FLINK-10489
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
> Environment: Flink v1.5.3
>Reporter: Alexis Sarda-Espinosa
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> Consider the following program.
> {code:java}
> void temp() throws Exception {
> StreamExecutionEnvironment execEnv = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> StreamTableEnvironment tableEnv = 
> TableEnvironment.getTableEnvironment(execEnv);
> execEnv.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
> DataStream<Tuple3<Timestamp, String, Double>> source = execEnv
> .fromElements(
> new Tuple3<>(Timestamp.valueOf("2018-09-20 22:00:00"), 
> "a", 1.3),
> new Tuple3<>(Timestamp.valueOf("2018-09-20 22:01:00"), 
> "a", 2.1),
> new Tuple3<>(Timestamp.valueOf("2018-09-20 22:02:00"), 
> "a", 3.0),
> new Tuple3<>(Timestamp.valueOf("2018-09-20 22:00:00"), 
> "b", 2.2),
> new Tuple3<>(Timestamp.valueOf("2018-09-20 22:01:00"), 
> "b", 1.8)
> )
> .assignTimestampsAndWatermarks(new WaterMarker());
> Table table = tableEnv.fromDataStream(source, "f0, f1, f2, 
> rowtime.rowtime")
> 
> .window(Slide.over("2.minutes").every("1.minute").on("rowtime").as("w"))
> .groupBy("f1, w")
> .select("f1, w.end - 1.minute, f2.sum");
> tableEnv.toAppendStream(table, Row.class).print();
> execEnv.execute();
> }
> public static class WaterMarker implements 
> AssignerWithPunctuatedWatermarks<Tuple3<Timestamp, String, Double>> {
> @Nullable
> @Override
> public Watermark checkAndGetNextWatermark(Tuple3<Timestamp, String, Double> lastElement, long extractedTimestamp) {
> return new Watermark(extractedTimestamp);
> }
> @Override
> public long extractTimestamp(Tuple3<Timestamp, String, Double> element, 
> long previousElementTimestamp) {
> return element.f0.getTime();
> }
> }
> {code}
> I am seeing an output like the following, where the hour is offset by 2 hours:
> {code:java}
> a,2018-09-20 20:00:00.0,1.3
> a,2018-09-20 20:01:00.0,3.4000000000000004
> a,2018-09-20 20:02:00.0,5.1
> a,2018-09-20 20:03:00.0,3.0
> {code}
> My local time zone is GMT+01:00, and indeed the input timestamps are parsed 
> with this time zone, but even if the window-end is printed as GMT, it should 
> show 21:0x:00. In my tests, where I actually collect the output and can see 
> the actual {{java.sql.Timestamp}} instances, I can always see this extra 1 
> hour offset.
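The zone-dependence the reporter describes can be shown in isolation: {{java.sql.Timestamp.valueOf}} interprets its string argument in the JVM default time zone, so the same wall-clock string maps to different epoch instants under different zones. This is a minimal sketch of that behaviour only (class and method names are illustrative), not a reconstruction of Flink's internal conversion:

```java
import java.sql.Timestamp;
import java.util.TimeZone;

public class TimestampZoneDemo {

    /** Epoch millis of the given local-datetime string under the given zone.
     *  Timestamp.valueOf parses in the JVM default zone, so we set it first. */
    static long epochMillis(String localDateTime, String zoneId) {
        TimeZone.setDefault(TimeZone.getTimeZone(zoneId));
        return Timestamp.valueOf(localDateTime).getTime();
    }

    public static void main(String[] args) {
        long utc = epochMillis("2018-09-20 22:00:00", "GMT");
        long plusOne = epochMillis("2018-09-20 22:00:00", "GMT+01:00");
        // Same wall-clock string, one hour apart on the epoch timeline:
        // 22:00 in GMT+01:00 is 21:00 GMT, i.e. one hour earlier in millis.
        System.out.println((utc - plusOne) / 3_600_000); // 1
    }
}
```

When such zone-relative instants are later rendered through a second zone-dependent conversion, offsets can stack (e.g. one hour of zone plus one hour of DST), which would be consistent with the two-hour shift observed above.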



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10448) VALUES clause is translated into a separate operator per value

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10448:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is a Major, please either assign yourself or give an update. 
Afterwards, please remove the label or in 7 days the issue will be deprioritized.


> VALUES clause is translated into a separate operator per value
> --
>
> Key: FLINK-10448
> URL: https://issues.apache.org/jira/browse/FLINK-10448
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.6.1
>Reporter: Timo Walther
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> It seems that a SQL VALUES clause uses one operator per value under certain 
> conditions which leads to a complicated job graph. Given that we need to 
> compile code for every operator in the open method and have other overhead as 
> well, this looks inefficient to me.
> For example, the following query creates and unions 6 operators together:
> {code}
> SELECT *
>   FROM (
> VALUES
>   (1, 'Bob', CAST(0 AS BIGINT)),
>   (22, 'Alice', CAST(0 AS BIGINT)),
>   (42, 'Greg', CAST(0 AS BIGINT)),
>   (42, 'Greg', CAST(0 AS BIGINT)),
>   (42, 'Greg', CAST(0 AS BIGINT)),
>   (1, 'Bob', CAST(0 AS BIGINT)))
> AS UserCountTable(user_id, user_name, user_count)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10306) Support the display of log file from any position on webUI

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10306:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. 
If this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Support the display of log file from any position on webUI
> --
>
> Key: FLINK-10306
> URL: https://issues.apache.org/jira/browse/FLINK-10306
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.6.0
>Reporter: Jiayi Liao
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> Although we copy the whole log file from the taskmanager to the blob service 
> host, sometimes we may not be able to read the whole file's content on the 
> WebUI because of the load this puts on the browser.
> We already use RandomAccessFile to read files, so I think we should support 
> reading the log file from any row by adding a parameter at the end of the 
> URL, like http://xxx/#/taskmanager/container_xx/log/100.
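A minimal sketch of the proposed offset-based read, using only the JDK (class and method names here are hypothetical, not Flink's; the URL parameter could map to a line index or, more cheaply, a byte offset — the sketch seeks by byte offset):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

public class LogRangeReader {
    /** Reads up to maxBytes of the file, starting at the given byte offset. */
    public static String readFrom(String path, long offset, int maxBytes) {
        try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
            long start = Math.min(offset, raf.length());
            byte[] buf = new byte[(int) Math.min(maxBytes, raf.length() - start)];
            raf.seek(start); // jump directly to the requested position
            raf.readFully(buf);
            return new String(buf, StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

The key point is that {{seek}} makes the cost proportional to the requested range, not to the size of the whole log file.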



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10274) The stop-cluster.sh cannot stop cluster properly when there are multiple clusters running

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10274:
---
Labels: auto-unassigned pull-request-available stale-major  (was: 
auto-unassigned pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. 
If this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> The stop-cluster.sh cannot stop cluster properly when there are multiple 
> clusters running
> -
>
> Key: FLINK-10274
> URL: https://issues.apache.org/jira/browse/FLINK-10274
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Configuration
>Affects Versions: 1.5.1, 1.5.2, 1.5.3, 1.6.0
>Reporter: YuFeng Shen
>Priority: Major
>  Labels: auto-unassigned, pull-request-available, stale-major
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> When you are preparing to upgrade the Flink framework version using the 
> [shadow 
> copy|https://ci.apache.org/projects/flink/flink-docs-release-1.6/ops/upgrading.html#upgrading-the-flink-framework-version]
>  strategy, you have to run multiple clusters concurrently. However, when you 
> are ready to stop the old-version cluster after upgrading, you will find that 
> stop-cluster.sh does not work as expected. The following are the steps to 
> reproduce the issue:
>  # There is already a running Flink 1.5.x cluster instance;
>  # Install another Flink 1.6.x cluster instance on the same cluster 
> machines;
>  # Migrate the jobs from Flink 1.5.x to Flink 1.6.x;
>  # Go to the bin dir of the Flink 1.5.x cluster instance and run 
> stop-cluster.sh.
> You would expect the old Flink 1.5.x cluster instance to be stopped, right? 
> Unfortunately, the stopped cluster is the newly installed Flink 1.6.x cluster 
> instance instead!
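The behaviour follows from both installations sharing the same default PID directory (stock distributions default FLINK_PID_DIR to /tmp): starting a daemon appends its PID to a shared file, and stopping kills the PID recorded last, regardless of which bin directory the script is run from. A simplified stand-alone model of that bookkeeping (file name and PIDs are illustrative, not the real script):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;

public class PidFileModel {
    /** start-cluster: append the new daemon's PID to the shared file. */
    static void start(Path pidFile, String pid) throws IOException {
        Files.write(pidFile, List.of(pid),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    /** stop-cluster: pop and "kill" the most recently recorded PID. */
    static String stop(Path pidFile) throws IOException {
        List<String> pids = new ArrayList<>(Files.readAllLines(pidFile));
        String victim = pids.remove(pids.size() - 1);
        Files.write(pidFile, pids);
        return victim;
    }

    public static void main(String[] args) throws IOException {
        Path shared = Files.createTempFile("flink-demo-taskexecutor", ".pid");
        start(shared, "11111"); // old 1.5.x cluster records its PID
        start(shared, "22222"); // new 1.6.x cluster appends to the same file
        // Running stop from the 1.5.x bin dir stops the 1.6.x daemon:
        System.out.println(stop(shared)); // 22222
        Files.delete(shared);
    }
}
```

Giving each installation its own PID directory avoids the collision in this model.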



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10266) Web UI has outdated entry about externalized checkpoints

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10266:
---
Labels: auto-deprioritized-critical stale-major  (was: 
auto-deprioritized-critical)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. 
If this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Web UI has outdated entry about externalized checkpoints
> 
>
> Key: FLINK-10266
> URL: https://issues.apache.org/jira/browse/FLINK-10266
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.5.3, 1.6.0
>Reporter: Stephan Ewen
>Priority: Major
>  Labels: auto-deprioritized-critical, stale-major
>
> The web UI says "Persist Checkpoints Externally   Disabled" even though 
> starting from 1.5.0 all checkpoints are always externally addressable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10476) Change related ProcessFunction to KeyedProcessFunction in Sql/Table-api

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10476:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. 
If this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Change related ProcessFunction to KeyedProcessFunction in Sql/Table-api
> ---
>
> Key: FLINK-10476
> URL: https://issues.apache.org/jira/browse/FLINK-10476
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Hequn Cheng
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> Since KeyedStream.process(ProcessFunction func) is deprecated, we should 
> change the related ProcessFunction usages to KeyedProcessFunction in the 
> SQL/Table API; for example, {{ProcessFunctionWithCleanupState}} should extend 
> {{KeyedProcessFunction}} instead of {{ProcessFunction}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10688) Remove flink-table dependencies from flink-connectors

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10688:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. 
If this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Remove flink-table dependencies from flink-connectors 
> --
>
> Key: FLINK-10688
> URL: https://issues.apache.org/jira/browse/FLINK-10688
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Timo Walther
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> This issue covers the fifth step of the migration plan mentioned in 
> [FLIP-28|https://cwiki.apache.org/confluence/display/FLINK/FLIP-28%3A+Long-term+goal+of+making+flink-table+Scala-free].
> Replace {{flink-table}} dependencies in {{flink-connectors}} with more 
> lightweight and Scala-free {{flink-table-spi}}.
> Once we have implemented improvements to the unified connector interface, we 
> can also migrate the classes. Among others, this requires a refactoring of 
> the timestamp extractors, which are the biggest blockers because they 
> transitively depend on expressions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16143) Turn on more date time functions of blink planner

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16143:
---
  Labels: auto-deprioritized-major  (was: stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates, so it is being deprioritized. If this ticket is actually Major, 
please raise the priority and ask a committer to assign you the issue or 
revive the public discussion.


> Turn on more date time functions of blink planner
> -
>
> Key: FLINK-16143
> URL: https://issues.apache.org/jira/browse/FLINK-16143
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Zili Chen
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.14.0
>
>
> Currently blink planner has a series of built-in functions such as
> DATEDIFF
>  DATE_ADD
>  DATE_SUB
> which haven't been put into use so far. We could add the necessary 
> registration, generation, and conversion code to make them available in 
> production scope.
>  
> what do you think [~jark]?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10515) Improve tokenisation of program args passed as string

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10515:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. 
If this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Improve tokenisation of program args passed as string
> -
>
> Key: FLINK-10515
> URL: https://issues.apache.org/jira/browse/FLINK-10515
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Affects Versions: 1.7.0
>Reporter: Andrey Zagrebin
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> At the moment, tokenisation of program args does not respect escape 
> characters. It can be improved to support at least program args separated by 
> spaces with unix-style escaping.
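One possible shape of such a tokenizer (a sketch only, not the actual REST handler code): split on unescaped whitespace while honouring backslash escapes and single/double quotes, roughly following unix shell conventions.

```java
import java.util.ArrayList;
import java.util.List;

public class ArgsTokenizer {
    /** Splits on whitespace, honouring backslash escapes and quotes. */
    static List<String> tokenize(String s) {
        List<String> out = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        char quote = 0;               // current quote char, or 0 if none
        boolean escaped = false;      // previous char was an unquoted backslash
        boolean inToken = false;      // we are inside a (possibly empty) token
        for (char c : s.toCharArray()) {
            if (escaped) { cur.append(c); escaped = false; inToken = true; continue; }
            if (c == '\\' && quote != '\'') { escaped = true; inToken = true; continue; }
            if (quote != 0) {                     // inside quotes
                if (c == quote) quote = 0; else cur.append(c);
                continue;
            }
            if (c == '\'' || c == '"') { quote = c; inToken = true; continue; }
            if (Character.isWhitespace(c)) {      // unquoted space ends the token
                if (inToken) { out.add(cur.toString()); cur.setLength(0); inToken = false; }
                continue;
            }
            cur.append(c); inToken = true;
        }
        if (inToken) out.add(cur.toString());
        return out;
    }
}
```

With this, `--input file\ 1.txt --name "John Doe"` yields four tokens instead of five, keeping the escaped space and the quoted value intact.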



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16050) Add Attempt Information

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16050:
---
  Labels: auto-deprioritized-major  (was: stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates, so it is being deprioritized. If this ticket is actually Major, 
please raise the priority and ask a committer to assign you the issue or 
revive the public discussion.


> Add Attempt Information
> ---
>
> Key: FLINK-16050
> URL: https://issues.apache.org/jira/browse/FLINK-16050
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST, Runtime / Web Frontend
>Reporter: lining
>Priority: Minor
>  Labels: auto-deprioritized-major
>
> According to the 
> [docs|https://ci.apache.org/projects/flink/flink-docs-stable/monitoring/rest_api.html#jobs-jobid-vertices-vertexid-subtasks-subtaskindex],
>  there may exist more than one attempt in a subtask, but there is no way to 
> get the attempt history list in the REST API, so users have no way to know if 
> the subtask has failed before. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10576) Introduce Machine/Node/TM health management

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10576:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. 
If this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Introduce Machine/Node/TM health management
> ---
>
> Key: FLINK-10576
> URL: https://issues.apache.org/jira/browse/FLINK-10576
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Coordination
>Reporter: JIN SUN
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> When a task fails, we can identify whether it was due to environment issues. 
> In particular, when multiple tasks report environment errors from the same 
> TM/machine/node, there is a high possibility that this TM has an issue, and 
> if we find that multiple tasks have become slow on a certain node, we should 
> put the machine on probation: 
>  * we should avoid scheduling new tasks to it
>  * release the task manager when all tasks are drained, and allocate a new 
> one if needed



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16175) Add config option to switch case sensitive for column names in SQL

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16175:
---
  Labels: auto-deprioritized-major auto-unassigned pull-request-available 
usability  (was: auto-unassigned pull-request-available stale-major usability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates, so it is being deprioritized. If this ticket is actually Major, 
please raise the priority and ask a committer to assign you the issue or 
revive the public discussion.


> Add config option to switch case sensitive for column names in SQL
> --
>
> Key: FLINK-16175
> URL: https://issues.apache.org/jira/browse/FLINK-16175
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Leonard Xu
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available, usability
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Flink SQL is case-sensitive by default and there is no option to configure 
> this. This issue aims to add a config option so that users can set case 
> sensitivity for their SQL.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10684) Improve the CSV reading process

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10684:
---
Labels: CSV stale-major  (was: CSV)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. 
If this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Improve the CSV reading process
> ---
>
> Key: FLINK-10684
> URL: https://issues.apache.org/jira/browse/FLINK-10684
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataSet
>Reporter: Xingcan Cui
>Priority: Major
>  Labels: CSV, stale-major
>
> CSV is one of the most commonly used file formats in data wrangling. To load 
> records from CSV files, Flink has provided the basic {{CsvInputFormat}}, as 
> well as some variants (e.g., {{RowCsvInputFormat}} and 
> {{PojoCsvInputFormat}}). However, it seems that the reading process can be 
> improved. For example, we could add a built-in util to automatically infer 
> schemas from CSV headers and samples of data. Also, the current bad record 
> handling method can be improved by somehow keeping the invalid lines (and 
> even the reasons for failed parsing), instead of logging the total number 
> only.
> This is an umbrella issue for all the improvements and bug fixes for the CSV 
> reading process.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16043) Support non-BMP Unicode for JsonRowSerializationSchema

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16043:
---
  Labels: auto-deprioritized-major auto-unassigned pull-request-available  
(was: auto-unassigned pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates, so it is being deprioritized. If this ticket is actually Major, 
please raise the priority and ask a committer to assign you the issue or 
revive the public discussion.


> Support non-BMP Unicode for JsonRowSerializationSchema
> --
>
> Key: FLINK-16043
> URL: https://issues.apache.org/jira/browse/FLINK-16043
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.10.0
>Reporter: Benchao Li
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This is a known issue for jackson: 
> [https://github.com/FasterXML/jackson-core/issues/223]
> You can see more details: 
> [https://github.com/FasterXML/jackson-core/blob/master/src/main/java/com/fasterxml/jackson/core/json/UTF8JsonGenerator.java#L2105]
>  
> And I also encountered this issue in my production environment. I've figured 
> out a solution. Java's String.getBytes() can deal with 
> UTF-8 encoding well, so we can do it like this:
> {{mapper.writeValueAsString(node).getBytes()}} instead of 
> {{mapper.writeValueAsBytes(node)}}
> cc [~jark] [~twalthr]
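The suggested workaround relies on String.getBytes(UTF_8) encoding surrogate pairs correctly. A JDK-only illustration of that property (no Jackson dependency, so the mapper call itself is not reproduced here):

```java
import java.nio.charset.StandardCharsets;

public class NonBmpDemo {
    public static void main(String[] args) {
        String s = "\uD83D\uDE00"; // U+1F600, a code point outside the BMP
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
        // String.getBytes emits the correct 4-byte UTF-8 sequence: F0 9F 98 80
        StringBuilder hex = new StringBuilder();
        for (byte b : utf8) hex.append(String.format("%02X ", b));
        System.out.println(hex.toString().trim()); // F0 9F 98 80
    }
}
```

This is why routing the serialized JSON through a Java String before taking bytes sidesteps the jackson-core surrogate issue linked above.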



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10667) Clean up Scala 2.10 remnants

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10667:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. 
If this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Clean up Scala 2.10 remnants 
> -
>
> Key: FLINK-10667
> URL: https://issues.apache.org/jira/browse/FLINK-10667
> Project: Flink
>  Issue Type: Bug
>  Components: API / Scala
>Affects Versions: 1.6.3, 1.7.0
>Reporter: Piotr Nowojski
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> Currently there are couple of places which reference Scala 2.10 in our code 
> base:
> {noformat}
> git grep "2\.10" | grep -i scala
> docs/dev/connectors/kafka.md:Since 0.11.x Kafka does not support 
> scala 2.10. This connector supports <a 
> href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-98+-+Exactly+Once+Delivery+and+Transactional+Messaging">Kafka
>  transactional messaging</a> to provide exactly once semantic for the 
> producer.
> docs/dev/projectsetup/scala_api_quickstart.md:  * [Scala IDE for Scala 
> 2.11](http://download.scala-ide.org/sdk/helium/e38/scala211/stable/site) or 
> [Scala IDE for Scala 
> 2.10](http://download.scala-ide.org/sdk/helium/e38/scala210/stable/site)
> flink-contrib/docker-flink/README.md:docker build --build-arg 
> FLINK_VERSION=1.0.3 --build-arg HADOOP_VERSION=26 --build-arg 
> SCALA_VERSION=2.10 -t "flink:1.0.3-hadoop2.6-scala_2.10" flink
> flink-libraries/flink-table/src/main/scala/org/apache/flink/table/codegen/package.scala:
>   // Used in ExpressionCodeGenerator because Scala 2.10 reflection is not 
> thread safe. We might
> flink-scala-shell/start-script/start-scala-shell.sh:# from scala-lang 2.10.4
> flink-scala-shell/start-script/start-scala-shell.sh:# in scala shell since 
> 2.10, has to be done at startup
> flink-scala/src/main/scala/org/apache/flink/api/scala/codegen/TypeAnalyzer.scala:
> // convention, compatible with Scala 2.10
> flink-scala/src/main/scala/org/apache/flink/api/scala/codegen/TypeAnalyzer.scala:
> // TODO: use this once 2.10 is no longer supported
> {noformat}
> Those should be looked into.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16294) Support to create non-existed table in database automatically when writing data to JDBC connector

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16294:
---
  Labels: auto-deprioritized-major auto-unassigned pull-request-available 
usability  (was: auto-unassigned pull-request-available stale-major usability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates, so it is being deprioritized. If this ticket is actually Major, 
please raise the priority and ask a committer to assign you the issue or 
revive the public discussion.


> Support to create non-existed table in database automatically when writing 
> data to JDBC connector
> -
>
> Key: FLINK-16294
> URL: https://issues.apache.org/jira/browse/FLINK-16294
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.11.0
>Reporter: Leonard Xu
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available, usability
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The Kafka and Elasticsearch connectors already support creating the 
> topic/index automatically when it does not exist in Kafka/Elasticsearch.
> This issue aims to support the JDBC connector creating the database table 
> automatically, which will be more friendly to users.
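A sketch of the DDL such a connector could emit before the first write (the method name and the column-to-SQL-type mapping are hypothetical; real dialects differ, e.g. MySQL vs PostgreSQL types):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

public class AutoCreateTableSketch {
    /** Builds an idempotent CREATE TABLE statement from a column -> SQL type map. */
    static String createTableDdl(String table, Map<String, String> columns) {
        StringJoiner cols = new StringJoiner(", ",
                "CREATE TABLE IF NOT EXISTS " + table + " (", ")");
        columns.forEach((name, type) -> cols.add(name + " " + type));
        return cols.toString();
    }

    public static void main(String[] args) {
        Map<String, String> cols = new LinkedHashMap<>(); // preserves column order
        cols.put("id", "BIGINT");
        cols.put("name", "VARCHAR(255)");
        System.out.println(createTableDdl("users", cols));
        // CREATE TABLE IF NOT EXISTS users (id BIGINT, name VARCHAR(255))
    }
}
```

Using IF NOT EXISTS keeps the statement safe to run on every job start, matching the "create only when missing" behaviour of the Kafka/Elasticsearch connectors.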



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10568) Show correct time attributes when calling Table.getSchema()

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10568:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. 
If this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Show correct time attributes when calling Table.getSchema()
> ---
>
> Key: FLINK-10568
> URL: https://issues.apache.org/jira/browse/FLINK-10568
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Timo Walther
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> A call to `Table.getSchema` can result in false positives with respect to 
> time attributes. Time attributes are converted during optimization phase. It 
> would be helpful if we could execute parts of the optimization phase when 
> calling this method in order to get useful/correct information.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16032) Depends on core classifier hive-exec in hive connector

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16032:
---
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates, so it is being deprioritized. If this ticket is actually Major, 
please raise the priority and ask a committer to assign you the issue or 
revive the public discussion.


> Depends on core classifier hive-exec in hive connector
> --
>
> Key: FLINK-16032
> URL: https://issues.apache.org/jira/browse/FLINK-16032
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Jingsong Lee
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
>
> We currently depend on the non-core classifier of hive-exec; it is an uber 
> jar and contains a lot of classes.
> This makes parquet vectorization support very hard, because we use many deep 
> APIs of parquet, and it is hard to stay compatible with multiple parquet 
> versions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10518) Inefficient design in ContinuousFileMonitoringFunction

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10518:
---
Labels: Source:FileSystem auto-unassigned stale-major  (was: 
Source:FileSystem auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. 
If this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Inefficient design in ContinuousFileMonitoringFunction
> --
>
> Key: FLINK-10518
> URL: https://issues.apache.org/jira/browse/FLINK-10518
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Affects Versions: 1.5.2
>Reporter: Huyen Levan
>Priority: Major
>  Labels: Source:FileSystem, auto-unassigned, stale-major
>
> The ContinuousFileMonitoringFunction class keeps track of the latest file 
> modification time to rule out all files it has processed in the previous 
> cycles. For a long-running job, the list of eligible files will be much 
> smaller than the list of all files in the folder being monitored.
> In the current implementation of the getInputSplitsSortedByModTime method, a 
> (big) list of all available splits is created first, and then every single 
> split is checked against the list of eligible files.
> {quote}for (FileInputSplit split: 
> format.createInputSplits(readerParallelism)) {
>  FileStatus fileStatus = eligibleFiles.get(split.getPath());
>  if (fileStatus != null) {
> {quote}
> The improvement can be done as:
>  * Listing of all files should be done once in 
> _ContinuousFileMonitoringFunction.listEligibleFiles()_ (as of now it is done 
> the 2nd time in _FileInputFormat.createInputSplits()_ )
>  * The list of file-splits should then be created from the list of paths in 
> eligibleFiles.
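The proposal amounts to listing the directory once and building splits directly from the eligible entries, instead of enumerating all files a second time. A simplified stand-alone sketch of that single pass (types reduced to path strings and mod-times; names hypothetical, not Flink's):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class EligibleSplitsSketch {
    /**
     * Single pass over one directory listing: keep only files modified after
     * the last-seen watermark and build one "split" per eligible file,
     * avoiding a second full enumeration of the directory.
     */
    static List<String> splitsForEligibleFiles(Map<String, Long> listing,
                                               long globalModificationTime) {
        List<String> splits = new ArrayList<>();
        for (Map.Entry<String, Long> file : listing.entrySet()) {
            if (file.getValue() > globalModificationTime) {
                splits.add("split:" + file.getKey()); // placeholder for FileInputSplit
            }
        }
        return splits;
    }
}
```

For a long-running job the eligible set is typically far smaller than the full listing, so the work per cycle shrinks accordingly.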



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16193) Improve error messaging when a key is assigned to the wrong key group range

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16193:
---
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates, so it is being deprioritized. If this ticket is actually Major, 
please raise the priority and ask a committer to assign you the issue or 
revive the public discussion.


> Improve error messaging when a key is assigned to the wrong key group range
> ---
>
> Key: FLINK-16193
> URL: https://issues.apache.org/jira/browse/FLINK-16193
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.11.0
>Reporter: Seth Wiesman
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
>
> Occasionally, users may run into an exception that reads something like:
>  
> java.lang.IllegalArgumentException: Key group 45 is not in 
> KeyGroupRange{startKeyGroup=0, endKeyGroup=42}
>  
> This may be caused by a number of issues including:
> 1) Unstable hash and equals methods on their key objects
> 2) Improper use of DataStreamUtils#reinterpretAsKeyedStream
>  
> Key group ranges are a fairly low level detail that most users will be 
> unfamiliar with when working with Flink. We should offer more comprehensive 
> error messaging that outlines likely causes and solutions.
>  
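The first cause can be made concrete with a JDK-only sketch: if a key's hashCode() depends on mutable state, the key-group assignment changes between records for the "same" key. The modulo below is a simplified stand-in for Flink's assignment (the real code additionally murmur-hashes the value in KeyGroupRangeAssignment), but it is enough to show the failure mode.

```java
import java.util.Objects;

public class UnstableKeyDemo {
    /** Hypothetical key whose hashCode wrongly includes mutable state. */
    static class EventKey {
        final int userId;
        long lastSeen; // should NOT participate in hashCode/equals
        EventKey(int userId, long lastSeen) { this.userId = userId; this.lastSeen = lastSeen; }
        @Override public int hashCode() { return Objects.hash(userId, lastSeen); } // unstable!
    }

    /** Simplified stand-in for key-group assignment. */
    static int keyGroup(Object key, int maxParallelism) {
        return Math.floorMod(key.hashCode(), maxParallelism);
    }

    public static void main(String[] args) {
        EventKey key = new EventKey(7, 1_000L);
        int before = keyGroup(key, 128);
        key.lastSeen = 2_000L; // state mutates between two records with the "same" key
        int after = keyGroup(key, 128);
        System.out.println(before == after
                ? "same key group (got lucky)"
                : "key group changed: " + before + " -> " + after);
    }
}
```

When the recomputed key group falls outside the operator instance's assigned range, the IllegalArgumentException quoted above is exactly what surfaces.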



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16305) FlinkYarnSessionClI ignores target executor and uses yarn-session if YARN properties file is present

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16305:
---
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates, so it is being deprioritized. If this ticket is actually Major, 
please raise the priority and ask a committer to assign you the issue or 
revive the public discussion.


> FlinkYarnSessionClI ignores target executor and uses yarn-session if YARN 
> properties file is present
> 
>
> Key: FLINK-16305
> URL: https://issues.apache.org/jira/browse/FLINK-16305
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission, Command Line Client
>Affects Versions: 1.10.0
>Reporter: Daniel Laszlo Magyar
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
>
> The presence of the hidden YARN property file (which contains the last 
> started YARN session’s application id), causes the cli to ignore the 
> {{execution.target}} property set in the {{conf/flink-conf.yaml}} 
> configuration file, which leads to unexpected behaviour at the time of job 
> submission via cli, e.g. when using {{flink run}} or SQL client.
>  The code that ignores the execution target if the YARN application id is set 
> in the hidden property file is at 
> [https://github.com/apache/flink/blob/release-1.10/flink-yarn/src/main/java/org/apache/flink/yarn/cli/FlinkYarnSessionCli.java#L337-L351].
>  
> Reproduction steps:
>  • start flink yarn session via {{./bin/yarn-session.sh -d}}, this writes the 
> application id to {{/tmp/.yarn-properties-root}}
>  • set {{execution.target: yarn-per-job}} in 
> {{/etc/flink/conf/flink-conf.yaml}}
>  • enable debug logging
>  • run a flink job e.g. {{flink run -d -p 2 examples/streaming/WordCount.jar 
> --input README.txt}}
>  • the logs below show that even though the {{execution.target}} property is 
> read properly, {{FlinkYarnSessionCli}} is chosen and the execution.target is 
> reset to yarn-session
> {code:java}
> 20/02/26 12:14:24 INFO configuration.GlobalConfiguration: Loading 
> configuration property: execution.target, yarn-per-job
> ...
> 20/02/26 12:14:24 INFO cli.FlinkYarnSessionCli: Found Yarn properties file 
> under /tmp/.yarn-properties-root.
> 20/02/26 12:14:24 DEBUG fs.FileSystem: Loading extension file systems via 
> services
> 20/02/26 12:14:24 DEBUG cli.CliFrontend: Custom commandlines: 
> [org.apache.flink.yarn.cli.FlinkYarnSessionCli@43df23d3, 
> org.apache.flink.client.cli.ExecutorCLI@6d60fe40, 
> org.apache.flink.client.cli.DefaultCLI@792b749c]
> 20/02/26 12:14:24 DEBUG cli.CliFrontend: Checking custom commandline 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli@43df23d3, isActive: true
> ...
> 20/02/26 12:14:25 DEBUG cli.CliFrontend: Effective executor configuration: 
> {...execution.target=yarn-session, }
> 20/02/26 12:14:25 INFO client.ClientUtils: Starting program (detached: true)
> {code}
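The reported precedence problem can be reduced to a few lines. This is a simplified, hypothetical sketch of the activation logic, not the real FlinkYarnSessionCli:

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Simplified sketch of the behaviour described above: the mere presence of
// the hidden YARN properties file makes the session CLI active, which then
// overrides the configured execution.target with "yarn-session".
// (Hypothetical method name; the real logic lives in FlinkYarnSessionCli.)
public class CliSelectionSketch {

    static String effectiveTarget(String configuredTarget, Path yarnPropertiesFile) {
        boolean sessionCliActive = Files.exists(yarnPropertiesFile);
        // The reported bug: configuredTarget (e.g. "yarn-per-job") is ignored
        // whenever the properties file exists.
        return sessionCliActive ? "yarn-session" : configuredTarget;
    }

    public static void main(String[] args) {
        System.out.println(
                effectiveTarget("yarn-per-job", Path.of("/tmp/.yarn-properties-root")));
    }
}
```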





[jira] [Updated] (FLINK-16310) Javadoc of AppendingState.get() contradicts with behavior of UserFacingListState

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16310:
---
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> Javadoc of AppendingState.get() contradicts with behavior of 
> UserFacingListState
> 
>
> Key: FLINK-16310
> URL: https://issues.apache.org/jira/browse/FLINK-16310
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Runtime / State Backends
>Affects Versions: 1.6.4, 1.7.2, 1.8.3, 1.9.2, 1.10.0
>Reporter: Tzu-Li (Gordon) Tai
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
>
> The {{get()}} method on the user-facing state handle interface 
> {{AppendingState}} states that:
> {code}
> @return The operator state value corresponding to the current input or NULL 
> if the state is empty.
> {code}
> This behavior is not true for the user-facing list state handles, whose 
> behavior has always been that if the list state has no elements, an 
> empty list is returned (see FLINK-4307 / {{UserFacingListState}}).
> The fix would be to only mention the null-returning behavior on the internal 
> list state interface, i.e. {{InternalListState}}, and fix the Javadoc for 
> {{AppendingState}}.
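The behaviour difference can be sketched in a few lines. This is a minimal stand-in, not Flink's actual classes:

```java
import java.util.Collections;

// Minimal sketch of the wrapping behaviour described above: the internal
// list state may return null when empty, while the user-facing wrapper
// always surfaces an empty iterable instead (the FLINK-4307 behaviour).
interface ListStateSketch<T> {
    Iterable<T> get();  // internal contract: null when the state is empty
}

public class UserFacingListStateSketch<T> implements ListStateSketch<T> {
    private final ListStateSketch<T> internal;

    public UserFacingListStateSketch(ListStateSketch<T> internal) {
        this.internal = internal;
    }

    @Override
    public Iterable<T> get() {
        Iterable<T> original = internal.get();
        // Never hand null to user code.
        return original != null ? original : Collections.emptyList();
    }
}
```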





[jira] [Updated] (FLINK-16528) Support Limit push down for Kafka streaming sources

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16528:
---
  Labels: auto-deprioritized-major usability  (was: stale-major usability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> Support Limit push down for Kafka streaming sources
> ---
>
> Key: FLINK-16528
> URL: https://issues.apache.org/jira/browse/FLINK-16528
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Ecosystem
>Reporter: Jark Wu
>Priority: Minor
>  Labels: auto-deprioritized-major, usability
>
> Currently, a limit query {{SELECT * FROM kafka LIMIT 10}} is translated 
> into a TopN operator and scans the full data in the source. However, 
> {{LIMIT}} is a very useful feature in the SQL CLI for exploring data in a 
> source, and it does not make sense for the query to never stop. 
> We can support such case in streaming mode (ignore the text format):
> {code}
> flink > SELECT * FROM kafka LIMIT 10;
>  kafka_key  |user_name| lang |   created_at
> +-+--+-
>  494227746231685121 | burncaniff  | en   | 2014-07-29 14:07:31.000
>  494227746214535169 | gu8tn   | ja   | 2014-07-29 14:07:31.000
>  494227746219126785 | pequitamedicen  | es   | 2014-07-29 14:07:31.000
>  494227746201931777 | josnyS  | ht   | 2014-07-29 14:07:31.000
>  494227746219110401 | Cafe510 | en   | 2014-07-29 14:07:31.000
>  494227746210332673 | Da_JuanAnd_Only | en   | 2014-07-29 14:07:31.000
>  494227746193956865 | Smile_Kidrauhl6 | pt   | 2014-07-29 14:07:31.000
>  494227750426017793 | CashforeverCD   | en   | 2014-07-29 14:07:32.000
>  494227750396653569 | FilmArsivimiz   | tr   | 2014-07-29 14:07:32.000
>  494227750388256769 | jmolas  | es   | 2014-07-29 14:07:32.000
> (10 rows)
> {code}





[jira] [Updated] (FLINK-16582) NettyBufferPoolTest may have warns on NettyBuffer leak

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16582:
---
  Labels: auto-deprioritized-major auto-unassigned pull-request-available  
(was: auto-unassigned pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> NettyBufferPoolTest may have warns on NettyBuffer leak 
> ---
>
> Key: FLINK-16582
> URL: https://issues.apache.org/jira/browse/FLINK-16582
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network, Tests
>Reporter: Yun Gao
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
> Fix For: 1.14.0
>
>
> {code:java}
> 4749 [Flink Netty Client (50072) Thread 0] ERROR
> org.apache.flink.shaded.netty4.io.netty.util.ResourceLeakDetector [] - LEAK:
> ByteBuf.release() was not called before it's garbage-collected. See
> https://netty.io/wiki/reference-counted-objects.html for more information.
> Recent access records: 
> Created at:
>   
> org.apache.flink.shaded.netty4.io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:349)
>   
> org.apache.flink.shaded.netty4.io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187)
>   
> org.apache.flink.shaded.netty4.io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:178)
>   
> org.apache.flink.shaded.netty4.io.netty.buffer.AbstractByteBufAllocator.buffer(AbstractByteBufAllocator.java:115)
>   
> org.apache.flink.runtime.io.network.netty.NettyBufferPoolTest.testNoHeapAllocations(NettyBufferPoolTest.java:38)
>   sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   java.lang.reflect.Method.invoke(Method.java:498)
>   
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   org.junit.runners.Suite.runChild(Suite.java:128)
>   org.junit.runners.Suite.runChild(Suite.java:27)
>   org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>   
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>   
> com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
>   
> com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:230)
>   com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:58)
> Test ignored.
> Process finished with exit code 0
> {code}
> We should release the allocated buffers in the tests.
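The fix pattern is the usual try/finally release. It is sketched here with a tiny stand-in class rather than Netty's ByteBuf; the real fix would call ByteBuf.release() on the buffer allocated in testNoHeapAllocations:

```java
// Tiny stand-in for a reference-counted buffer; the real test would call
// release() on the Netty ByteBuf it allocates.
public class RefCountedBuffer {
    private int refCnt = 1;

    public int refCnt() {
        return refCnt;
    }

    public boolean release() {
        if (refCnt <= 0) {
            throw new IllegalStateException("already released");
        }
        return --refCnt == 0;  // true when the buffer is actually freed
    }

    public static void main(String[] args) {
        RefCountedBuffer buf = new RefCountedBuffer();
        try {
            // ... exercise the buffer in the test body ...
        } finally {
            buf.release();  // prevents the leak-detector warning above
        }
        System.out.println("refCnt=" + buf.refCnt());
    }
}
```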





[jira] [Updated] (FLINK-16568) Cannot call Hive UDAF that requires constant arguments

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16568:
---
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> Cannot call Hive UDAF that requires constant arguments
> --
>
> Key: FLINK-16568
> URL: https://issues.apache.org/jira/browse/FLINK-16568
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Documentation
>Reporter: Rui Li
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
>
> Calling Hive's {{percentile_approx}} would fail:
> {code}
>   hiveShell.execute("create table src (key double,val string)");
>   tableEnv.loadModule("hive", new HiveModule());
>   tableEnv.sqlQuery("select percentile_approx(key,0.5) from src");
>   TableUtils.collectToList(tableEnv.sqlQuery("select 
> percentile_approx(key,0.5) from src"));
> {code}
> With error:
> {noformat}
> Caused by: org.apache.hadoop.hive.ql.exec.UDFArgumentTypeException: The 
> second argument must be a constant, but decimal(2,1) was passed instead.
>   at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDAFPercentileApprox.getEvaluator(GenericUDAFPercentileApprox.java:145)
>   at 
> org.apache.flink.table.functions.hive.HiveGenericUDAF.createEvaluator(HiveGenericUDAF.java:120)
>   at 
> org.apache.flink.table.functions.hive.HiveGenericUDAF.init(HiveGenericUDAF.java:92)
>   at 
> org.apache.flink.table.functions.hive.HiveGenericUDAF.getHiveResultType(HiveGenericUDAF.java:196)
>   ... 58 more
> {noformat}





[jira] [Updated] (FLINK-16333) After exiting from the command line client, the process still exists

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16333:
---
  Labels: auto-deprioritized-critical auto-deprioritized-major  (was: 
auto-deprioritized-critical stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> After exiting from the command line client, the process still exists
> 
>
> Key: FLINK-16333
> URL: https://issues.apache.org/jira/browse/FLINK-16333
> Project: Flink
>  Issue Type: Bug
>  Components: Command Line Client
>Affects Versions: 1.10.0
> Environment: JDK 8 
> Flink 1.10 
>Reporter: jinxin
>Priority: Minor
>  Labels: auto-deprioritized-critical, auto-deprioritized-major
> Attachments: 微信图片_20200228190743.png, 微信图片_20200228190935.png
>
>
> After exiting from the Flink Scala shell, the process still exists. I tried 
> to kill it with kill -9 PID, but that did not work either; the process remains.
> A similar problem also appears in the SQL client; fortunately, kill -9 takes 
> effect there.





[jira] [Updated] (FLINK-16332) Support un-ordered mode for async lookup join

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16332:
---
  Labels: auto-deprioritized-major  (was: stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> Support un-ordered mode for async lookup join
> -
>
> Key: FLINK-16332
> URL: https://issues.apache.org/jira/browse/FLINK-16332
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Runtime
>Reporter: Jark Wu
>Priority: Minor
>  Labels: auto-deprioritized-major
>
> Currently, we only support "ordered" mode for async lookup join, because 
> ordering in streaming SQL is very important: the accumulate and retract 
> messages should stay in order. If messages are out of order, the result will 
> be wrong. 
> The "un-ordered" mode can be enabled by a job configuration, but only as a 
> preferred option: it will be enabled only if it does not affect the order of 
> acc/retract messages (e.g. the stream is append-only). 





[jira] [Updated] (FLINK-16356) Some dependencies contain CVEs

2021-06-11 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16356:
---
  Labels: auto-deprioritized-major  (was: stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> Some dependencies contain CVEs
> --
>
> Key: FLINK-16356
> URL: https://issues.apache.org/jira/browse/FLINK-16356
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: XuCongying
>Priority: Minor
>  Labels: auto-deprioritized-major
> Attachments: apache-flink_CVE-report.md
>
>
> I found that your project uses some dependencies that contain CVEs. To prevent 
> the potential risk they may cause, I suggest a library update. The details 
> follow.
>  
> Vulnerable Library Version: com.squareup.okhttp3 : okhttp : 3.7.0
>   CVE ID: 
> [CVE-2018-20200](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20200)
>   Import Path: flink-metrics/flink-metrics-datadog/pom.xml, 
> flink-end-to-end-tests/flink-end-to-end-tests-common/pom.xml, 
> flink-end-to-end-tests/flink-metrics-reporter-prometheus-test/pom.xml, 
> flink-runtime/pom.xml
>   Suggested Safe Versions: 3.12.1, 3.12.2, 3.12.3, 3.12.4, 3.12.5, 3.12.6, 
> 3.12.7, 3.12.8, 3.13.0, 3.13.1, 3.14.0, 3.14.1, 3.14.2, 3.14.3, 3.14.4, 
> 3.14.5, 3.14.6, 4.0.0, 4.0.0-RC1, 4.0.0-RC2, 4.0.0-RC3, 4.0.0-alpha01, 
> 4.0.0-alpha02, 4.0.1, 4.1.0, 4.1.1, 4.2.0, 4.2.1, 4.2.2, 4.3.0, 4.3.1, 4.4.0
>  Vulnerable Library Version: com.google.guava : guava : 18.0
>   CVE ID: 
> [CVE-2018-10237](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-10237)
>   Import Path: flink-connectors/flink-connector-kinesis/pom.xml, 
> flink-connectors/flink-connector-cassandra/pom.xml
>   Suggested Safe Versions: 24.1.1-android, 24.1.1-jre, 25.0-android, 
> 25.0-jre, 25.1-android, 25.1-jre, 26.0-android, 26.0-jre, 27.0-android, 
> 27.0-jre, 27.0.1-android, 27.0.1-jre, 27.1-android, 27.1-jre, 28.0-android, 
> 28.0-jre, 28.1-android, 28.1-jre, 28.2-android, 28.2-jre
>  
> Vulnerable Library Version: org.apache.hive : hive-exec : 1.2.1
>   CVE ID: 
> [CVE-2018-11777](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11777),
>  
> [CVE-2015-7521](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-7521),
>  [CVE-2018-1314](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1314)
>   Import Path: flink-connectors/flink-connector-hive/pom.xml
>   Suggested Safe Versions: 2.3.4, 2.3.5, 2.3.6, 3.1.1, 3.1.2
>  
> Vulnerable Library Version: org.apache.hive : hive-exec : 2.0.0
>   CVE ID: 
> [CVE-2018-11777](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11777),
>  [CVE-2018-1314](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1314)
>   Import Path: flink-connectors/flink-connector-hive/pom.xml
>   Suggested Safe Versions: 2.3.4, 2.3.5, 2.3.6, 3.1.1, 3.1.2
>  
> Vulnerable Library Version: org.apache.hive : hive-exec : 1.1.0
>   CVE ID: 
> [CVE-2018-11777](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11777),
>  
> [CVE-2015-7521](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-7521),
>  [CVE-2018-1314](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1314)
>   Import Path: flink-connectors/flink-connector-hive/pom.xml
>   Suggested Safe Versions: 2.3.4, 2.3.5, 2.3.6, 3.1.1, 3.1.2
>  
> Vulnerable Library Version: org.apache.hive : hive-exec : 2.1.1
>   CVE ID: 
> [CVE-2017-12625](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-12625),
>  
> [CVE-2018-11777](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11777),
>  [CVE-2018-1314](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1314)
>   Import Path: flink-connectors/flink-connector-hive/pom.xml
>   Suggested Safe Versions: 2.3.4, 2.3.5, 2.3.6, 3.1.1, 3.1.2
>  
> Vulnerable Library Version: org.apache.hive : hive-exec : 1.0.1
>   CVE ID: 
> [CVE-2018-11777](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11777),
>  
> [CVE-2015-7521](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-7521),
>  [CVE-2018-1314](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1314)
>   Import Path: flink-connectors/flink-connector-hive/pom.xml
>   Suggested Safe Versions: 2.3.4, 2.3.5, 2.3.6, 3.1.1, 3.1.2
>  Vulnerable Library Version: org.apache.hive : hive-exec : 2.2.0
>   CVE ID: 
> [CVE-2017-12625](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-12625),
>  
> [CVE-2018-11777](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11777),
>  [CVE-2018-1314](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1314)
>   Import Path: 

[jira] [Updated] (FLINK-22015) SQL filter containing OR and IS NULL will produce an incorrect result.

2021-06-11 Thread Kurt Young (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young updated FLINK-22015:
---
Fix Version/s: 1.12.5

> SQL filter containing OR and IS NULL will produce an incorrect result.
> --
>
> Key: FLINK-22015
> URL: https://issues.apache.org/jira/browse/FLINK-22015
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.13.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.13.0, 1.12.5
>
>
> Add the following test case to {{CalcITCase}} to reproduce this bug.
> {code:scala}
> @Test
> def myTest(): Unit = {
>   checkResult(
> """
>   |WITH myView AS (SELECT a, CASE
>   |  WHEN a = 1 THEN '1'
>   |  ELSE CAST(NULL AS STRING)
>   |  END AS s
>   |FROM SmallTable3)
>   |SELECT a FROM myView WHERE s = '2' OR s IS NULL
>   |""".stripMargin,
> Seq(row(2), row(3)))
> }
> {code}
> However if we remove the {{s = '2'}} the result will be correct.
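For reference, a plain-Java rendering of the CASE expression and the filter from the test case above shows the expected (correct) result, rows 2 and 3:

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java rendering of the predicate from the test case above:
//   s = CASE WHEN a = 1 THEN '1' ELSE CAST(NULL AS STRING) END
//   WHERE s = '2' OR s IS NULL
public class FilterSemanticsSketch {

    static List<Integer> filter(int[] as) {
        List<Integer> out = new ArrayList<>();
        for (int a : as) {
            String s = (a == 1) ? "1" : null;   // the CASE expression
            if ("2".equals(s) || s == null) {   // s = '2' OR s IS NULL
                out.add(a);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(filter(new int[] {1, 2, 3}));  // prints [2, 3]
    }
}
```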





[jira] [Commented] (FLINK-22015) SQL filter containing OR and IS NULL will produce an incorrect result.

2021-06-11 Thread Kurt Young (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361592#comment-17361592
 ] 

Kurt Young commented on FLINK-22015:


also fixed in 1.12.5: faf7cc43beebce3fee528ec5637e9387b95bec99

> SQL filter containing OR and IS NULL will produce an incorrect result.
> --
>
> Key: FLINK-22015
> URL: https://issues.apache.org/jira/browse/FLINK-22015
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.13.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.13.0, 1.12.5
>
>
> Add the following test case to {{CalcITCase}} to reproduce this bug.
> {code:scala}
> @Test
> def myTest(): Unit = {
>   checkResult(
> """
>   |WITH myView AS (SELECT a, CASE
>   |  WHEN a = 1 THEN '1'
>   |  ELSE CAST(NULL AS STRING)
>   |  END AS s
>   |FROM SmallTable3)
>   |SELECT a FROM myView WHERE s = '2' OR s IS NULL
>   |""".stripMargin,
> Seq(row(2), row(3)))
> }
> {code}
> However if we remove the {{s = '2'}} the result will be correct.





[jira] [Commented] (FLINK-22885) Support 'SHOW COLUMNS'.

2021-06-11 Thread Roc Marshal (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361588#comment-17361588
 ] 

Roc Marshal commented on FLINK-22885:
-

For the time being, it would be feasible to support this DDL, because we have 
already designed some DDL statements based on the catalog.

If we need to consider supporting a simplified INFORMATION_SCHEMA 
paradigm, maybe we can start by transforming the existing catalog into a 
simple INFORMATION_SCHEMA. Of course, that requires separate discussion and 
research.

> Support 'SHOW COLUMNS'.
> ---
>
> Key: FLINK-22885
> URL: https://issues.apache.org/jira/browse/FLINK-22885
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Roc Marshal
>Priority: Major
>
> h1. Support 'SHOW COLUMNS'.
> SHOW COLUMNS ( FROM | IN )  [LIKE ]





[jira] [Comment Edited] (FLINK-22968) Improve exception message when using toAppendStream[String]

2021-06-11 Thread Timo Walther (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361586#comment-17361586
 ] 

Timo Walther edited comment on FLINK-22968 at 6/11/21, 10:12 AM:
-

[~DaChun777] {{toAppendStream}} is soft-deprecated in Flink 1.13. I would 
recommend using {{toDataStream}}, which should provide a better user experience. 
We wanted to give {{toDataStream}} some exposure before deprecating 
{{toAppendStream}}, but I think this will happen in 1.14. {{toDataStream}} 
supports {{toDataStream(result, String.class)}}.


was (Author: twalthr):
[~DaChun777] {{toAppendStream}} is soft-deprecated in Flink 1.13. I would 
recommend to use `toDataStream` which should provide a better user experience. 
We wanted to give {{toDataStream}} some exposure before deprecating 
{{toAppendStream}}. But I think this will happen in 1.14.

> Improve exception message when using toAppendStream[String]
> ---
>
> Key: FLINK-22968
> URL: https://issues.apache.org/jira/browse/FLINK-22968
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.3, 1.13.1
> Environment: {color:#FF}*Flink-1.13.1 and Flink-1.12.1*{color}
>Reporter: DaChun
>Assignee: Nicholas Jiang
>Priority: Major
> Attachments: test.scala, this_error.txt
>
>
> {code:scala}
> package com.bytedance.one
> import org.apache.flink.streaming.api.scala.{DataStream, 
> StreamExecutionEnvironment}
> import org.apache.flink.table.api.Table
> import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment
> import org.apache.flink.api.scala._
> object test {
>   def main(args: Array[String]): Unit = {
> val env: StreamExecutionEnvironment = StreamExecutionEnvironment
>   .createLocalEnvironmentWithWebUI()
> val stream: DataStream[String] = env.readTextFile("data/wc.txt")
> val tableEnvironment: StreamTableEnvironment = 
> StreamTableEnvironment.create(env)
> val table: Table = tableEnvironment.fromDataStream(stream)
> tableEnvironment.createTemporaryView("wc", table)
> val res: Table = tableEnvironment.sqlQuery("select * from wc")
> tableEnvironment.toAppendStream[String](res).print()
> env.execute("test")
>   }
> }
> {code}
> When I run the program, the error is as follows (the full stack trace is in 
> this_error.txt), and the error message asks me to file an issue:
> Caused by: org.apache.flink.api.common.InvalidProgramException: Table program 
> cannot be compiled. This is a bug. Please file an issue.
> The program simply reads a stream and converts it into a table, with String 
> as the generic type, and then the error is reported. The code is in 
> test.scala and the error is in this_error.txt.
> However, if I create an entity class myself and use that class as the generic 
> type, or use the Row type, it works. Do you have to create your own entity 
> class instead of using the String type?
> h3. Summary
> We should improve the exception message a bit: we can state that the given 
> type (String) is not allowed in {{toAppendStream}}.
>  
>  
>  





[jira] [Comment Edited] (FLINK-22908) FileExecutionGraphInfoStoreTest.testPutSuspendedJobOnClusterShutdown should wait until job is running

2021-06-11 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361467#comment-17361467
 ] 

Chesnay Schepler edited comment on FLINK-22908 at 6/11/21, 10:07 AM:
-

master: f52b2d4d78a3b6facc2af5cc589b3fcbeaa12826
1.13: e107e29fa930d4e08e25882f1b52b0d987741694
1.12: 11c20305b58484285cef0c4e98b30e27c70d29a1


was (Author: zentol):
master: f52b2d4d78a3b6facc2af5cc589b3fcbeaa12826
1.13: e107e29fa930d4e08e25882f1b52b0d987741694

> FileExecutionGraphInfoStoreTest.testPutSuspendedJobOnClusterShutdown should 
> wait until job is running
> -
>
> Key: FLINK-22908
> URL: https://issues.apache.org/jira/browse/FLINK-22908
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0, 1.13.1, 1.12.4
>Reporter: Xintong Song
>Assignee: Fabian Paul
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.14.0, 1.12.5, 1.13.2
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18754=logs=77a9d8e1-d610-59b3-fc2a-4766541e0e33=7c61167f-30b3-5893-cc38-a9e3d057e392=7744
> {code}
> Jun 08 00:03:01 [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 4.21 s <<< FAILURE! - in 
> org.apache.flink.runtime.dispatcher.FileExecutionGraphInfoStoreTest
> Jun 08 00:03:01 [ERROR] 
> testPutSuspendedJobOnClusterShutdown(org.apache.flink.runtime.dispatcher.FileExecutionGraphInfoStoreTest)
>   Time elapsed: 2.763 s  <<< ERROR!
> Jun 08 00:03:01 org.apache.flink.util.FlinkException: Could not close 
> resource.
> Jun 08 00:03:01   at 
> org.apache.flink.util.AutoCloseableAsync.close(AutoCloseableAsync.java:39)
> Jun 08 00:03:01   at 
> org.apache.flink.runtime.dispatcher.FileExecutionGraphInfoStoreTest.testPutSuspendedJobOnClusterShutdown(FileExecutionGraphInfoStoreTest.java:349)
> Jun 08 00:03:01   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jun 08 00:03:01   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jun 08 00:03:01   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jun 08 00:03:01   at java.lang.reflect.Method.invoke(Method.java:498)
> Jun 08 00:03:01   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Jun 08 00:03:01   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jun 08 00:03:01   at 
[jira] [Closed] (FLINK-22908) FileExecutionGraphInfoStoreTest.testPutSuspendedJobOnClusterShutdown should wait until job is running

2021-06-11 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-22908.

Resolution: Fixed

> FileExecutionGraphInfoStoreTest.testPutSuspendedJobOnClusterShutdown should 
> wait until job is running
> -
>
> Key: FLINK-22908
> URL: https://issues.apache.org/jira/browse/FLINK-22908
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0, 1.13.1, 1.12.4
>Reporter: Xintong Song
>Assignee: Fabian Paul
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.14.0, 1.12.5, 1.13.2
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18754=logs=77a9d8e1-d610-59b3-fc2a-4766541e0e33=7c61167f-30b3-5893-cc38-a9e3d057e392=7744
> {code}
> Jun 08 00:03:01 [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 4.21 s <<< FAILURE! - in 
> org.apache.flink.runtime.dispatcher.FileExecutionGraphInfoStoreTest
> Jun 08 00:03:01 [ERROR] 
> testPutSuspendedJobOnClusterShutdown(org.apache.flink.runtime.dispatcher.FileExecutionGraphInfoStoreTest)
>   Time elapsed: 2.763 s  <<< ERROR!
> Jun 08 00:03:01 org.apache.flink.util.FlinkException: Could not close 
> resource.
> Jun 08 00:03:01   at 
> org.apache.flink.util.AutoCloseableAsync.close(AutoCloseableAsync.java:39)
> Jun 08 00:03:01   at 
> org.apache.flink.runtime.dispatcher.FileExecutionGraphInfoStoreTest.testPutSuspendedJobOnClusterShutdown(FileExecutionGraphInfoStoreTest.java:349)
> Jun 08 00:03:01   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jun 08 00:03:01   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jun 08 00:03:01   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jun 08 00:03:01   at java.lang.reflect.Method.invoke(Method.java:498)
> Jun 08 00:03:01   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Jun 08 00:03:01   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jun 08 00:03:01   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Jun 08 00:03:01   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jun 08 00:03:01   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Jun 08 00:03:01   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> Jun 08 00:03:01   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Jun 08 00:03:01   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Jun 08 00:03:01   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Jun 08 00:03:01   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Jun 08 00:03:01   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Jun 08 00:03:01   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Jun 08 00:03:01   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Jun 08 00:03:01   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Jun 08 00:03:01   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Jun 08 00:03:01   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> Jun 08 00:03:01   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Jun 08 00:03:01   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Jun 08 00:03:01   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Jun 08 00:03:01   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Jun 08 00:03:01   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> Jun 08 00:03:01   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> Jun 08 00:03:01   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> Jun 08 00:03:01   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> Jun 08 00:03:01   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> Jun 08 00:03:01   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> Jun 08 00:03:01   at 
> 

[jira] [Commented] (FLINK-22968) Improve exception message when using toAppendStream[String]

2021-06-11 Thread Timo Walther (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361586#comment-17361586
 ] 

Timo Walther commented on FLINK-22968:
--

[~DaChun777] {{toAppendStream}} is soft-deprecated in Flink 1.13. I would 
recommend using {{toDataStream}} instead, which should provide a better user 
experience. We wanted to give {{toDataStream}} some exposure before deprecating 
{{toAppendStream}}, but I think the deprecation will happen in 1.14.
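
For reference, the migration suggested above is a small change. A minimal sketch, assuming the same word-count pipeline as in the report below (the file path, view name, and object name are illustrative, taken from or modeled on the reporter's example):

```scala
import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}
import org.apache.flink.table.api.Table
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment
import org.apache.flink.types.Row
import org.apache.flink.api.scala._

object WordCountToDataStream {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.createLocalEnvironmentWithWebUI()
    val stream: DataStream[String] = env.readTextFile("data/wc.txt")
    val tableEnv = StreamTableEnvironment.create(env)
    tableEnv.createTemporaryView("wc", tableEnv.fromDataStream(stream))
    val res: Table = tableEnv.sqlQuery("select * from wc")

    // Available since Flink 1.13: toDataStream produces a DataStream[Row]
    // and handles the type mapping itself, so no custom entity class is
    // needed (unlike toAppendStream[String], which fails for String).
    val rows: DataStream[Row] = tableEnv.toDataStream(res)
    rows.map(_.getField(0).toString).print()

    env.execute("toDataStream example")
  }
}
```

This compiles against the Flink 1.13 Scala bridge; it is a sketch of the recommended direction, not the exact fix for this ticket.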

> Improve exception message when using toAppendStream[String]
> ---
>
> Key: FLINK-22968
> URL: https://issues.apache.org/jira/browse/FLINK-22968
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.3, 1.13.1
> Environment: Flink-1.13.1 and Flink-1.12.1
>Reporter: DaChun
>Assignee: Nicholas Jiang
>Priority: Major
> Attachments: test.scala, this_error.txt
>
>
> {code:scala}
> package com.bytedance.one
>
> import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}
> import org.apache.flink.table.api.Table
> import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment
> import org.apache.flink.api.scala._
>
> object test {
>   def main(args: Array[String]): Unit = {
>     val env: StreamExecutionEnvironment = StreamExecutionEnvironment
>       .createLocalEnvironmentWithWebUI()
>     val stream: DataStream[String] = env.readTextFile("data/wc.txt")
>     val tableEnvironment: StreamTableEnvironment = StreamTableEnvironment.create(env)
>     val table: Table = tableEnvironment.fromDataStream(stream)
>     tableEnvironment.createTemporaryView("wc", table)
>     val res: Table = tableEnvironment.sqlQuery("select * from wc")
>     tableEnvironment.toAppendStream[String](res).print()
>     env.execute("test")
>   }
> }
> {code}
> When I run the program, it fails. The full stack trace is in the attached 
> this_error.txt, but the error prompts me to file an issue:
> Caused by: org.apache.flink.api.common.InvalidProgramException: Table program 
> cannot be compiled. This is a bug. Please file an issue.
> It should be very simple to read a stream and convert it into a table; the 
> generic type is String. Yet an error is reported. The code is in the attached 
> test.scala and the error is in this_error.txt.
> When I create an entity class myself and use that entity class as the generic 
> type, or when I use the Row type, it works normally. Do you have to create 
> your own entity class instead of using the String type?
> h3. Summary
> We should improve the exception message a bit: we could state explicitly that 
> the given type (String) is not allowed in {{toAppendStream}}.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

