[jira] [Created] (FLINK-22128) Window aggregation should have unique keys

2021-04-06 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-22128:


 Summary: Window aggregation should have unique keys
 Key: FLINK-22128
 URL: https://issues.apache.org/jira/browse/FLINK-22128
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Reporter: Jingsong Lee
 Fix For: 1.13.0


We should add a match method for window aggregation in {{FlinkRelMdUniqueKeys}}.
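
A conceptual sketch of what the metadata handler should derive (the real change is a match method in {{FlinkRelMdUniqueKeys}} on the planner side; the helper below only illustrates the expected semantics): a window aggregation emits one row per group and window, so the grouping keys plus the window start/end columns form a unique key of its output.

{code:python}
# Illustration only: the unique key a window aggregate's output should expose.
def window_aggregate_unique_keys(group_keys, window_cols):
    """One output row exists per (group keys, window), so together they
    uniquely identify a row."""
    return {tuple(group_keys) + tuple(window_cols)}


assert window_aggregate_unique_keys(
    ["user_id"], ["window_start", "window_end"]
) == {("user_id", "window_start", "window_end")}
{code}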



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-22127) Enrich error message of read buffer request timeout exception

2021-04-06 Thread Yingjie Cao (Jira)
Yingjie Cao created FLINK-22127:
---

 Summary: Enrich error message of read buffer request timeout 
exception
 Key: FLINK-22127
 URL: https://issues.apache.org/jira/browse/FLINK-22127
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Network
Affects Versions: 1.13.0
Reporter: Yingjie Cao
 Fix For: 1.13.0


Enrich the error message of the read buffer request timeout exception to tell
the user how to resolve the timeout.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-22126) When I set SSL, the JobManager got a certificate_unknown exception

2021-04-06 Thread tonychan (Jira)
tonychan created FLINK-22126:


 Summary: When I set SSL, the JobManager got a certificate_unknown 
exception
 Key: FLINK-22126
 URL: https://issues.apache.org/jira/browse/FLINK-22126
 Project: Flink
  Issue Type: Bug
Reporter: tonychan
 Attachments: image-2021-04-07-09-26-16-490.png, 
image-2021-04-07-09-26-21-958.png

!image-2021-04-07-09-26-21-958.png!

My setup is as follows:

 

keytool -genkeypair -alias ca -keystore ca.keystore -dname "CN=ART002" 
-storepass ca_keystore_password -keyalg RSA -keysize 4096 -ext "bc=ca:true" 
-storetype PKCS12
keytool -exportcert -keystore ca.keystore -alias ca -storepass 
ca_keystore_password -file ca.cer
keytool -importcert -keystore ca.truststore -alias ca -storepass 
ca_truststore_password -file ca.cer -noprompt

 

keytool -genkeypair -alias flink.rest -keystore rest.signed.keystore -dname 
"CN=ART002" -ext "SAN=dns:ART002" -storepass rest_keystore_password -keyalg RSA 
-keysize 4096 -storetype PKCS12
keytool -certreq -alias flink.rest -keystore rest.signed.keystore -storepass 
rest_keystore_password -file rest.csr
keytool -gencert -alias ca -keystore ca.keystore -storepass 
ca_keystore_password -ext "SAN=dns:ART002,ip:*.*0.145.92" -infile rest.csr 
-outfile rest.cer
keytool -importcert -keystore rest.signed.keystore -storepass 
rest_keystore_password -file ca.cer -alias ca -noprompt
keytool -importcert -keystore rest.signed.keystore -storepass 
rest_keystore_password -file rest.cer -alias flink.rest -noprompt

 

 

security.ssl.rest.enabled: true
security.ssl.rest.keystore: /data/flink/flink-1.11.2/ssl/rest.signed.keystore
security.ssl.rest.truststore: /data/flink/flink-1.11.2/ssl/ca.truststore
security.ssl.rest.keystore-password: rest_keystore_password
security.ssl.rest.key-password: rest_keystore_password
security.ssl.rest.truststore-password: ca_truststore_password
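
A minimal, hypothetical handshake check outside Flink (assumes the REST endpoint listens on ART002:8081 and uses the ca.cer exported above; host, port, and file paths are assumptions): if this fails, the certificate chain itself is the problem rather than the Flink configuration.

{code:python}
# Hedged debugging sketch: verify that the REST endpoint presents a
# certificate signed by the CA exported to ca.cer.
import socket
import ssl

HOST, PORT = "ART002", 8081  # hypothetical REST host/port for this setup

ctx = ssl.create_default_context(cafile="ca.cer")  # trust only our CA
with socket.create_connection((HOST, PORT)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        # If the server cert was not signed by this CA (certificate_unknown),
        # wrap_socket raises ssl.SSLCertVerificationError here.
        print(tls.getpeercert()["subject"])
{code}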


--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Automatic backpressure detection

2021-04-06 Thread Lu Niu
Hi, Piotr

Thanks for replying!

We don't have a plan to upgrade to 1.13 in the short term. We are using Flink
1.11 and I noticed there is a metric called isBackpressured. Is that enough
to solve (1)? If not, would backporting the patches for
backPressuredTimeMsPerSecond, busyTimeMsPerSecond and idleTimeMsPerSecond
work? And do you have an estimate of how difficult that would be?


Best
Lu



On Tue, Apr 6, 2021 at 12:18 AM Piotr Nowojski  wrote:

> Hi,
>
> Lately we overhauled the backpressure detection [1] and a screenshot
> preview of those efforts is attached here [2]. I encourage you to check the
> 1.13 RC0 build and how the current mechanism works for you [3]. To support
> those WebUI changes we have added a couple of new metrics:
> backPressuredTimeMsPerSecond, busyTimeMsPerSecond and idleTimeMsPerSecond.
>
> 1. I believe that solves 1.
> 2. This still requires a bit of manual investigation. Once you locate the
> backpressuring task, you can check the detailed subtask stats to see if all
> parallel instances are uniformly backpressured/busy or not. If you would
> like to add a hint "it looks like you have a data skew in Task XYZ", that,
> I believe, could be added to the WebUI.
> 3. The tricky part is how to display this kind of information. Currently I
> would recommend just exporting/reporting the
> backPressuredTimeMsPerSecond, busyTimeMsPerSecond and idleTimeMsPerSecond
> metrics for every task to an external system and displaying them, for
> example, in Grafana.
>
> The blog post you are referencing is quite outdated, especially with those
> new changes from 1.13. I'm hoping to write a new one pretty soon.
>
> Piotrek
>
> [1] https://issues.apache.org/jira/browse/FLINK-14712
> [2]
>
> https://issues.apache.org/jira/browse/FLINK-14814?focusedCommentId=17256926=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17256926
> [3]
>
> http://mail-archives.apache.org/mod_mbox/flink-user/202104.mbox/%3c1d2412ce-d4d0-ed50-6181-1b610e16d...@apache.org%3E
>
> Mon, Apr 5, 2021 at 11:20 PM Lu Niu  wrote:
>
> > Hi, Flink dev
> >
> > Lately, we want to develop some tools to:
> > 1. show backpressure operator without manual operation
> > 2. Provide suggestions to mitigate back pressure after checking data
> skew,
> > external service RPC etc.
> > 3. Show back pressure history
> >
> > Could anyone share their experience with such tooling?
> > Also, I notice backpressure monitoring and detection is mentioned across
> > multiple places. Could someone help to explain how these connect to each
> > other? Maybe some of them are outdated? Thanks!
> >
> > 1. The official doc introduces monitoring back pressure through web UI.
> >
> >
> https://ci.apache.org/projects/flink/flink-docs-release-1.12/ops/monitoring/back_pressure.html
> > 2. In https://flink.apache.org/2019/07/23/flink-network-stack-2.html, it
> > says outPoolUsage, inPoolUsage metrics can be used to determine back
> > pressure.
> > 3. The latest Flink version introduces a metric called "isBackPressured",
> > but I didn't find related documentation on its usage.
> >
> > Best
> > Lu
> >
>


Re: Flink job cannot find recover path after using entropy injection for s3 file systems

2021-04-06 Thread chenqin
Friendly ping, the fix for entropy marker is ready.



--
Sent from: http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/


[jira] [Created] (FLINK-22125) Revisit the return value of MapState.get when the key doesn't exist

2021-04-06 Thread Dian Fu (Jira)
Dian Fu created FLINK-22125:
---

 Summary: Revisit the return value of MapState.get when the key 
doesn't exist
 Key: FLINK-22125
 URL: https://issues.apache.org/jira/browse/FLINK-22125
 Project: Flink
  Issue Type: Sub-task
  Components: API / Python
Affects Versions: 1.13.0
Reporter: Dian Fu
 Fix For: 1.13.0


Currently, MapState in the Python DataStream API throws a KeyError if the key 
doesn't exist. However, the Java DataStream API returns null in that case. 
Maybe we should keep the behavior consistent across the Python and Java 
DataStream APIs.
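
A minimal sketch of the Java-parity behavior under discussion (the helper name is hypothetical; only the KeyError-vs-null contrast comes from this ticket):

{code:python}
# Mimic Java's MapState#get, which returns null for a missing key, on top of
# the current Python behavior of raising KeyError.
def get_or_none(map_state, key):
    try:
        return map_state.get(key)
    except KeyError:
        return None
{code}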



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-22124) The job finished without any exception if an error was thrown during state access

2021-04-06 Thread Dian Fu (Jira)
Dian Fu created FLINK-22124:
---

 Summary: The job finished without any exception if an error was 
thrown during state access
 Key: FLINK-22124
 URL: https://issues.apache.org/jira/browse/FLINK-22124
 Project: Flink
  Issue Type: Sub-task
  Components: API / Python
Affects Versions: 1.13.0
Reporter: Dian Fu
 Fix For: 1.13.0


For the following job:

{code}
import logging

from pyflink.common import WatermarkStrategy, Row
from pyflink.common.serialization import Encoder
from pyflink.common.typeinfo import Types
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.connectors import FileSink, OutputFileConfig, \
    NumberSequenceSource
from pyflink.datastream.execution_mode import RuntimeExecutionMode
from pyflink.datastream.functions import KeyedProcessFunction, RuntimeContext
from pyflink.datastream.state import MapStateDescriptor


env = StreamExecutionEnvironment.get_execution_environment()
env.set_parallelism(2)
env.set_runtime_mode(RuntimeExecutionMode.BATCH)

seq_num_source = NumberSequenceSource(1, 1000)

file_sink = FileSink \
    .for_row_format(
        '/Users/dianfu/code/src/apache/playgrounds/examples/output/data_stream_batch_state',
        Encoder.simple_string_encoder()) \
    .with_output_file_config(
        OutputFileConfig.builder()
            .with_part_prefix('pre')
            .with_part_suffix('suf')
            .build()) \
    .build()

ds = env.from_source(
    source=seq_num_source,
    watermark_strategy=WatermarkStrategy.for_monotonous_timestamps(),
    source_name='file_source',
    type_info=Types.LONG())


class MyKeyedProcessFunction(KeyedProcessFunction):

    def __init__(self):
        self.state = None

    def open(self, runtime_context: RuntimeContext):
        logging.info("open")
        state_desc = MapStateDescriptor('map', Types.LONG(), Types.LONG())
        self.state = runtime_context.get_map_state(state_desc)

    def process_element(self, value, ctx: 'KeyedProcessFunction.Context'):
        existing = self.state.get(value[0])
        if existing is None:
            result = value[1]
            self.state.put(value[0], result)
        elif existing <= 10:
            result = value[1] + existing
            self.state.put(value[0], result)
        else:
            result = existing
        yield result


ds.map(lambda a: Row(a % 4, 1),
       output_type=Types.ROW([Types.LONG(), Types.LONG()])) \
    .key_by(lambda a: a[0]) \
    .process(MyKeyedProcessFunction(), Types.LONG()) \
    .sink_to(file_sink)

env.execute('data_stream_batch_state')
{code}

Since `self.state.get(value[0])` raises a KeyError, the job finishes without 
any error message. This issue should be addressed: we should make sure the 
error message appears in the log file to help users figure out what happened.
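
Until the framework surfaces these errors itself, a hedged user-side workaround (the decorator name and wording are illustrative, not from this ticket) is to wrap the generator method so the exception is logged before it propagates:

{code:python}
import functools
import logging


def log_errors(fn):
    """Wrap a generator method such as process_element so that any exception
    is written to the log file before being re-raised."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        try:
            yield from fn(*args, **kwargs)
        except Exception:
            logging.exception("Error in %s", fn.__name__)
            raise
    return wrapper
{code}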



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-22123) Test state access in Python DataStream API

2021-04-06 Thread Dian Fu (Jira)
Dian Fu created FLINK-22123:
---

 Summary: Test state access in Python DataStream API
 Key: FLINK-22123
 URL: https://issues.apache.org/jira/browse/FLINK-22123
 Project: Flink
  Issue Type: Test
  Components: API / Python
Affects Versions: 1.13.0
Reporter: Dian Fu
Assignee: Dian Fu
 Fix For: 1.13.0


It includes, but is not limited to, the following test items (a minimal ValueState sketch follows the list):
- The three kinds of state work well: ValueState/ListState/MapState
- State works well with checkpointing
- State works well in all kinds of operators, e.g. process, map, etc.
- State works well in both streaming mode and batch mode
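
A minimal sketch of the first item (assumes the 1.13 PyFlink state API used elsewhere in this digest; names such as CountFunction are illustrative):

{code:python}
from pyflink.common.typeinfo import Types
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.functions import KeyedProcessFunction, RuntimeContext
from pyflink.datastream.state import ValueStateDescriptor


class CountFunction(KeyedProcessFunction):
    """Count elements per key with ValueState."""

    def open(self, runtime_context: RuntimeContext):
        self.count = runtime_context.get_state(
            ValueStateDescriptor('count', Types.LONG()))

    def process_element(self, value, ctx: 'KeyedProcessFunction.Context'):
        current = (self.count.value() or 0) + 1
        self.count.update(current)
        yield value[0], current


env = StreamExecutionEnvironment.get_execution_environment()
env.from_collection([('a', 1), ('a', 2), ('b', 1)]) \
    .key_by(lambda v: v[0]) \
    .process(CountFunction(),
             Types.TUPLE([Types.STRING(), Types.LONG()])) \
    .print()
env.execute('value_state_smoke_test')
{code}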



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-22122) StreamingKafkaITCase fails due to TestTimedOutException

2021-04-06 Thread Guowei Ma (Jira)
Guowei Ma created FLINK-22122:
-

 Summary: StreamingKafkaITCase fails due to TestTimedOutException
 Key: FLINK-22122
 URL: https://issues.apache.org/jira/browse/FLINK-22122
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.13.0
Reporter: Guowei Ma


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16059=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529=27391
{code:java}
Apr 06 08:34:01 org.junit.runners.model.TestTimedOutException: test timed out 
after 3 minutes
Apr 06 08:34:01 at java.lang.Object.wait(Native Method)
Apr 06 08:34:01 at java.lang.Object.wait(Object.java:502)
Apr 06 08:34:01 at java.lang.UNIXProcess.waitFor(UNIXProcess.java:395)
Apr 06 08:34:01 at 
org.apache.flink.tests.util.flink.FlinkDistribution.submitJob(FlinkDistribution.java:194)
Apr 06 08:34:01 at 
org.apache.flink.tests.util.flink.LocalStandaloneFlinkResource$StandaloneClusterController.submitJob(LocalStandaloneFlinkResource.java:200)
Apr 06 08:34:01 at 
org.apache.flink.tests.util.kafka.StreamingKafkaITCase.testKafka(StreamingKafkaITCase.java:109)
Apr 06 08:34:01 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
Apr 06 08:34:01 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Apr 06 08:34:01 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Apr 06 08:34:01 at java.lang.reflect.Method.invoke(Method.java:498)
Apr 06 08:34:01 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
Apr 06 08:34:01 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
Apr 06 08:34:01 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
Apr 06 08:34:01 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
Apr 06 08:34:01 at 
org.apache.flink.util.ExternalResource$1.evaluate(ExternalResource.java:48)
Apr 06 08:34:01 at 
org.apache.flink.util.ExternalResource$1.evaluate(ExternalResource.java:48)
Apr 06 08:34:01 at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
Apr 06 08:34:01 at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
Apr 06 08:34:01 at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)
Apr 06 08:34:01 at java.lang.Thread.run(Thread.java:748)

{code}




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-22121) FlinkLogicalRankRuleBase should check if name of rankNumberType already exists in the input

2021-04-06 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-22121:
---

 Summary: FlinkLogicalRankRuleBase should check if name of 
rankNumberType already exists in the input
 Key: FLINK-22121
 URL: https://issues.apache.org/jira/browse/FLINK-22121
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.13.0
Reporter: Caizhi Weng
 Fix For: 1.13.0


Add the following test case to 
{{org.apache.flink.table.planner.plan.stream.sql.RankTest}} to reproduce this 
issue.

{code:scala}
@Test
def myTest(): Unit = {
  val sql =
    """
      |SELECT CAST(rna AS INT) AS rn1, CAST(rnb AS INT) AS rn2 FROM (
      |  SELECT *, row_number() over (partition by a order by b desc) AS rnb
      |  FROM (
      |    SELECT *, row_number() over (partition by a, c order by b desc) AS rna
      |    FROM MyTable
      |  )
      |  WHERE rna <= 100
      |)
      |WHERE rnb <= 100
      |""".stripMargin
  util.verifyExecPlan(sql)
}
{code}

The exception stack is
{code}
org.apache.flink.table.api.ValidationException: Field names must be unique. 
Found duplicates: [w0$o0]

at 
org.apache.flink.table.types.logical.RowType.validateFields(RowType.java:272)
at org.apache.flink.table.types.logical.RowType.<init>(RowType.java:157)
at org.apache.flink.table.types.logical.RowType.of(RowType.java:297)
at org.apache.flink.table.types.logical.RowType.of(RowType.java:289)
at 
org.apache.flink.table.planner.calcite.FlinkTypeFactory$.toLogicalRowType(FlinkTypeFactory.scala:632)
at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamPhysicalRank.translateToExecNode(StreamPhysicalRank.scala:117)
at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNodeGraphGenerator.generate(ExecNodeGraphGenerator.java:74)
at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNodeGraphGenerator.generate(ExecNodeGraphGenerator.java:71)
at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNodeGraphGenerator.generate(ExecNodeGraphGenerator.java:54)
at 
org.apache.flink.table.planner.delegation.PlannerBase.translateToExecNodeGraph(PlannerBase.scala:314)
at 
org.apache.flink.table.planner.utils.TableTestUtilBase.assertPlanEquals(TableTestBase.scala:895)
at 
org.apache.flink.table.planner.utils.TableTestUtilBase.doVerifyPlan(TableTestBase.scala:780)
at 
org.apache.flink.table.planner.utils.TableTestUtilBase.verifyExecPlan(TableTestBase.scala:583)
{code}

This is because the rank field is currently always named {{w0$o0}}, so if the 
input of a Rank is another Rank the exception occurs. To solve this, 
{{FlinkLogicalRankRuleBase}} should check whether the name of the rank field 
already exists in the input.
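
A language-agnostic sketch of the proposed check (the actual fix belongs in {{FlinkLogicalRankRuleBase}}; the helper below is only an illustration):

{code:python}
# Derive a rank-number field name that does not collide with the input's
# field names, instead of always using "w0$o0".
def unique_rank_field_name(input_field_names, base="w0$o0"):
    name, i = base, 0
    while name in input_field_names:
        i += 1
        name = f"{base}_{i}"
    return name


assert unique_rank_field_name(["a", "b", "c"]) == "w0$o0"
assert unique_rank_field_name(["a", "w0$o0"]) == "w0$o0_1"
{code}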



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-22119) Update document for hive dialect

2021-04-06 Thread Rui Li (Jira)
Rui Li created FLINK-22119:
--

 Summary: Update document for hive dialect
 Key: FLINK-22119
 URL: https://issues.apache.org/jira/browse/FLINK-22119
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Hive, Documentation
Reporter: Rui Li
 Fix For: 1.13.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-22118) Always apply projection push down in blink planner

2021-04-06 Thread Shengkai Fang (Jira)
Shengkai Fang created FLINK-22118:
-

 Summary: Always apply projection push down in blink planner
 Key: FLINK-22118
 URL: https://issues.apache.org/jira/browse/FLINK-22118
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.13.0
Reporter: Shengkai Fang


Please add the following case to `TableSourceTest`.
{code:java}
  s"""
 |CREATE TABLE NestedItemTable (
 |  `id` INT,
 |  `result` ROW<
 | `data_arr` ROW<`value` BIGINT> ARRAY,
 | `data_map` MAP>>,
 |  ) WITH (
 |'connector' = 'values',
 |'nested-projection-supported' = 'true',
 |'bounded' = 'true'
 |  )
 |""".stripMargin
util.tableEnv.executeSql(ddl4)
util.verifyExecPlan(
  s"""
 |SELECT
 |  `result`.`data_arr`[`id`].`value`,
 |  `result`.`data_map`['item'].`value`
 |FROM NestedItemTable
 |""".stripMargin
)
{code}
Currently, we get the optimized plan
{code:java}
Calc(select=[ITEM(result.data_arr, id).value AS EXPR$0, ITEM(result.data_map, 
_UTF-16LE'item').value AS EXPR$1])
+- TableSourceScan(table=[[default_catalog, default_database, 
NestedItemTable]], fields=[id, result])
{code}
but the expected plan is
{code:java}
Calc(select=[ITEM(result_data_arr, id).value AS EXPR$0, ITEM(result_data_map, 
_UTF-16LE'item').value AS EXPR$1])
+- TableSourceScan(table=[[default_catalog, default_database, NestedItemTable, 
project=[result_data_arr, result_data_map, id]]], fields=[result_data_arr, 
result_data_map, id])
{code}
It seems the planner doesn't apply the rule to push the projection into the 
scan; as a result, the optimized plan reads more fields than it should.
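
A toy illustration of the expected rewrite (planner-internal in reality; the helper only mirrors the flattened field names in the expected plan above):

{code:python}
# Nested projection push down should narrow the scan to the accessed nested
# paths, surfacing them as flattened fields such as "result_data_arr".
def pushed_projection(accessed_paths):
    return [p.replace(".", "_") for p in accessed_paths]


assert pushed_projection(["result.data_arr", "result.data_map"]) == [
    "result_data_arr", "result_data_map"]
{code}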



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-22117) Do not print stack trace for checkpoint trigger failure if not all tasks are started

2021-04-06 Thread Yun Gao (Jira)
Yun Gao created FLINK-22117:
---

 Summary: Do not print stack trace for checkpoint trigger failure if 
not all tasks are started.
 Key: FLINK-22117
 URL: https://issues.apache.org/jira/browse/FLINK-22117
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.13.0
Reporter: Yun Gao


Unlike in previous versions, the stack trace is currently printed, but it 
might obscure the actual exception that users want to locate.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-22116) Setup .asf.yaml in flink-web

2021-04-06 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-22116:


 Summary: Setup .asf.yaml in flink-web
 Key: FLINK-22116
 URL: https://issues.apache.org/jira/browse/FLINK-22116
 Project: Flink
  Issue Type: Task
  Components: Project Website
Reporter: Chesnay Schepler


Infra is making some changes to the hosting of websites from git in 2 months, 
and flink-web appears to require a small change.

We need to add a .asf.yaml file with these contents:
{code}
publish:
 whoami: asf-site
{code}

https://lists.apache.org/thread.html/r8d023c0f5afefca7f6ce4e26d02404762bd6234fbe328011e1564249%40%3Cusers.infra.apache.org%3E



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[RESULT] [VOTE] Release flink-shaded 13.0, release candidate #1

2021-04-06 Thread Chesnay Schepler

The vote has passed with 3 binding votes.

Votes:
Dawid (+1)
Robert (+1)
Gordon (+1)

On 3/31/2021 2:07 PM, Chesnay Schepler wrote:

Hi everyone,
Please review and vote on the release candidate #1 for the version 
13.0, as follows:

[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)


The complete staging area is available for your review, which includes:
* JIRA release notes [1],
* the official Apache source release to be deployed to dist.apache.org 
[2], which are signed with the key with fingerprint C2EED7B111D464BA [3],

* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag "release-13.0-rc1" [5],
* website pull request listing the new release [6].

The vote will be open for at least 72 hours. It is adopted by majority 
approval, with at least 3 PMC affirmative votes.


Thanks,
Release Manager

[1] 
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12349618

[2] https://dist.apache.org/repos/dist/dev/flink/flink-shaded-13.0-rc1/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4] 
https://repository.apache.org/content/repositories/orgapacheflink-1416/

[5] https://github.com/apache/flink-shaded/tree/release-13.0-rc1
[6] https://github.com/apache/flink-web/pull/428





[jira] [Created] (FLINK-22115) JobManager dies with IllegalStateException SharedSlot (physical request SlotRequestId{%}) has been released

2021-04-06 Thread wym_maozi (Jira)
wym_maozi created FLINK-22115:
-

 Summary: JobManager dies with IllegalStateException SharedSlot 
(physical request SlotRequestId{%}) has been released
 Key: FLINK-22115
 URL: https://issues.apache.org/jira/browse/FLINK-22115
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.12.0
Reporter: wym_maozi
 Attachments: flink-root-standalonesession-0-banyue01.zip

After the TaskManager hung and the JobManager failed to restart the task many 
times, the JobManager crashed fatally with the following log:
{code:java}
2021-04-06 14:13:10,388 INFO  
org.apache.flink.runtime.jobmaster.slotpool.SlotPoolImpl [] - Requesting 
nesourceProfile{UNKNOWN} with allocation id 6c99582801115ac080d407123cf92e80 
from resource manager.
2021-04-06 14:13:10,389 INFO  
org.apache.flink.runtime.resourcemanager.StandaloneResourceManager [] - 
Request5123cc0fd9eb3 with allocation id 6c99582801115ac080d407123cf92e80.
2021-04-06 14:13:10,389 INFO  
org.apache.flink.runtime.jobmaster.slotpool.SlotPoolImpl [] - Requesting 
nesourceProfile{UNKNOWN} with allocation id 56b82914f123bdf3aec2cad538dc8e75 
from resource manager.
2021-04-06 14:13:10,389 INFO  
org.apache.flink.runtime.resourcemanager.StandaloneResourceManager [] - 
Request5123cc0fd9eb3 with allocation id 56b82914f123bdf3aec2cad538dc8e75.
2021-04-06 14:13:10,389 ERROR 
org.apache.flink.runtime.util.FatalExitExceptionHandler  [] - FATAL: 
Thread. Stopping the process...
java.util.concurrent.CompletionException: java.lang.IllegalStateException: 
SharedSlot (physical request SlotR
at 
java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
 ~[?:1.8.0_181]
at 
java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
 ~[?:1.8.0_181
at 
java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:708) 
~[?:1.8.0_181]
at 
java.util.concurrent.CompletableFuture$UniRun.tryFire(CompletableFuture.java:687)
 ~[?:1.8.0_181]
at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
 ~[?:1.8.0_181]
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:404)
 ~[flink-dist_
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:197)
 ~[flink-dis
at 
org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:154)
 ~[flink-dist_2
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) 
[flink-dist_2.11-1.12.0.jar:1.12.0]
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) 
[flink-dist_2.11-1.12.0.jar:1.12.0]
at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) 
[flink-dist_2.11-1.12.0.jar:1.1
at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) 
[flink-dist_2.11-1.12.0.jar:1.
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) 
[flink-dist_2.11-1.12.0.jar:1.
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) 
[flink-dist_2.11-1.12.0.jar:1.
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) 
[flink-dist_2.11-1.12.0.jar:1.
at akka.actor.Actor$class.aroundReceive(Actor.scala:517) 
[flink-dist_2.11-1.12.0.jar:1.12.0]
at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) 
[flink-dist_2.11-1.12.0.jar:1.12.0
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) 
[flink-dist_2.11-1.12.0.jar:1.12.0]
at akka.actor.ActorCell.invoke(ActorCell.scala:561) 
[flink-dist_2.11-1.12.0.jar:1.12.0]
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) 
[flink-dist_2.11-1.12.0.jar:1.12.0]
at akka.dispatch.Mailbox.run(Mailbox.scala:225) 
[flink-dist_2.11-1.12.0.jar:1.12.0]
at akka.dispatch.Mailbox.exec(Mailbox.scala:235) 
[flink-dist_2.11-1.12.0.jar:1.12.0]
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) 
[flink-dist_2.11-1.12.0.jar:1.12
at 
akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) 
[flink-dist_2.11-1.1
at 
akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) 
[flink-dist_2.11-1.12.0.jar:
at 
akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) 
[flink-dist_2.11-1.
Caused by: java.lang.IllegalStateException: SharedSlot (physical request 
SlotRequestId{0db9fa269ccc01f3eb2c15
at 
org.apache.flink.util.Preconditions.checkState(Preconditions.java:220) 
~[flink-dist_2.11-1.12.0.ja
at 
org.apache.flink.runtime.scheduler.SharedSlot.allocateLogicalSlot(SharedSlot.java:126)
 ~[flink-dis
at 

[jira] [Created] (FLINK-22114) Streaming File Sink s3 end-to-end test fails because the test did not finish after 900 seconds

2021-04-06 Thread Guowei Ma (Jira)
Guowei Ma created FLINK-22114:
-

 Summary: Streaming File Sink s3 end-to-end test fails because the 
test did not finish after 900 seconds.
 Key: FLINK-22114
 URL: https://issues.apache.org/jira/browse/FLINK-22114
 Project: Flink
  Issue Type: Bug
  Components: FileSystems
Affects Versions: 1.12.2
Reporter: Guowei Ma


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16038=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529=13756


{code:java}
Image docker.io/stedolan/jq:latest uses outdated schema1 manifest format. 
Please upgrade to a schema2 image for better future compatibility. More 
information at https://docs.docker.com/registry/spec/deprecated-schema-v1/
237d5fcd25cf: Pulling fs layer
a3ed95caeb02: Pulling fs layer
1169f6d603e5: Pulling fs layer
4dae4fd48813: Pulling fs layer
4dae4fd48813: Waiting
1169f6d603e5: Verifying Checksum
1169f6d603e5: Download complete
a3ed95caeb02: Verifying Checksum
a3ed95caeb02: Download complete
237d5fcd25cf: Verifying Checksum
237d5fcd25cf: Download complete
4dae4fd48813: Verifying Checksum
4dae4fd48813: Download complete
237d5fcd25cf: Pull complete
a3ed95caeb02: Pull complete
1169f6d603e5: Pull complete
4dae4fd48813: Pull complete
Digest: sha256:a61ed0bca213081b64be94c5e1b402ea58bc549f457c2682a86704dd55231e09
Status: Downloaded newer image for stedolan/jq:latest
parse error: Invalid numeric literal at line 1, column 6
Apr 03 22:37:09 Number of produced values 0/6
Error: No such container: 
parse error: Invalid numeric literal at line 1, column 6
Error: No such container: 
parse error: Invalid numeric literal at line 1, column 6
Error: No such container: 
parse error: Invalid numeric literal at line 1, column 6
Error: No such container: 
parse error: Invalid numeric literal at line 1, column 6
Error: No such container: 
parse error: Invalid numeric literal at line 1, column 6

{code}





--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-22113) UniqueKey constraint is lost with multiple sources join in SQL

2021-04-06 Thread Fu Kai (Jira)
Fu Kai created FLINK-22113:
--

 Summary: UniqueKey constraint is lost with multiple sources join 
in SQL
 Key: FLINK-22113
 URL: https://issues.apache.org/jira/browse/FLINK-22113
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.13.0
Reporter: Fu Kai


Hi team,
 
We have a use case of joining multiple data sources to generate a continuously 
updated view. We defined primary key constraints on all the input sources, and 
all the keys are subsets of the join condition. All joins are left joins.
 
In our case, the first two inputs produce a *JoinKeyContainsUniqueKey* input 
spec, which is good and performant. When it comes to the third input source, it 
is joined with the intermediate output of the first two input tables, and the 
intermediate table does not carry key constraint information (although the 
third source input table does), so it results in a *NoUniqueKey* input spec. 
Given that NoUniqueKey inputs have dramatic performance implications per the 
[Force Join Unique Key|http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Force-Join-Unique-Key-td39521.html#a39651] 
email thread, we want to know if there is any mitigation for this.
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-22112) EmulatedPubSubSinkTest fails due to docker image pull failure

2021-04-06 Thread Guowei Ma (Jira)
Guowei Ma created FLINK-22112:
-

 Summary: EmulatedPubSubSinkTest fails due to docker image pull 
failure
 Key: FLINK-22112
 URL: https://issues.apache.org/jira/browse/FLINK-22112
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Google Cloud PubSub
Affects Versions: 1.12.2
Reporter: Guowei Ma


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16038=logs=08866332-78f7-59e4-4f7e-49a56faa3179=7f606211-1454-543c-70ab-c7a028a1ce8c=8586


{code:java}
Apr 03 22:30:22 [ERROR] 
org.apache.flink.streaming.connectors.gcp.pubsub.EmulatedPubSubSinkTest  Time 
elapsed: 17.561 s  <<< ERROR!
Apr 03 22:30:22 com.spotify.docker.client.exceptions.DockerRequestException: 
Apr 03 22:30:22 Request error: POST 
unix://localhost:80/images/create?fromImage=google%2Fcloud-sdk=313.0.0: 
500, body: {"message":"Head 
https://registry-1.docker.io/v2/google/cloud-sdk/manifests/313.0.0: Get 
https://auth.docker.io/token?account=githubactions=repository%3Agoogle%2Fcloud-sdk%3Apull=registry.docker.io:
 net/http: request canceled (Client.Timeout exceeded while awaiting headers)"}
Apr 03 22:30:22 
Apr 03 22:30:22 at 
com.spotify.docker.client.DefaultDockerClient.requestAndTail(DefaultDockerClient.java:2800)
Apr 03 22:30:22 at 
com.spotify.docker.client.DefaultDockerClient.pull(DefaultDockerClient.java:1346)
Apr 03 22:30:22 at 
com.spotify.docker.client.DefaultDockerClient.pull(DefaultDockerClient.java:1323)
Apr 03 22:30:22 at 
org.apache.flink.streaming.connectors.gcp.pubsub.emulator.GCloudEmulatorManager.launchDocker(GCloudEmulatorManager.java:103)
Apr 03 22:30:22 at 
org.apache.flink.streaming.connectors.gcp.pubsub.emulator.GCloudUnitTestBase.launchGCloudEmulator(GCloudUnitTestBase.java:46)
Apr 03 22:30:22 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
Apr 03 22:30:22 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Apr 03 22:30:22 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Apr 03 22:30:22 at java.lang.reflect.Method.invoke(Method.java:498)
Apr 03 22:30:22 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
Apr 03 22:30:22 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
Apr 03 22:30:22 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
Apr 03 22:30:22 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
Apr 03 22:30:22 at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
Apr 03 22:30:22 at 
org.junit.runners.ParentRunner.run(ParentRunner.java:363)
Apr 03 22:30:22 at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
Apr 03 22:30:22 at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
Apr 03 22:30:22 at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
Apr 03 22:30:22 at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
Apr 03 22:30:22 at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
Apr 03 22:30:22 at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
Apr 03 22:30:22 at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
Apr 03 22:30:22 at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)

{code}




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Zigzag shape in TM JVM used memory

2021-04-06 Thread Piotr Nowojski
Hi,

this should be posted on the user mailing list, not the dev list.

Apart from that, this looks like normal/standard JVM behaviour and has
very little to do with Flink. The Garbage Collector (GC) kicks in when
memory usage approaches some threshold:
https://www.google.com/search?q=jvm+heap+memory+usage=isch

Piotrek


Mon, Apr 5, 2021 at 10:54 PM Lu Niu  wrote:

> Hi,
>
> we need to update our email system then :) . Here are the links:
>
>
> https://drive.google.com/file/d/1lZ5_P8_NqsN1JeLzutGj4DxkyWJN75mR/view?usp=sharing
>
>
> https://drive.google.com/file/d/1J6c6rQJwtDp1moAGlvQyLQXTqcuG4HjL/view?usp=sharing
>
>
> https://drive.google.com/file/d/1-R2KzsABC471AEjkF5qTm5O3V47cpbBV/view?usp=sharing
>
> All are DataStream job.
>
> Best
> Lu
>
> On Sun, Apr 4, 2021 at 9:17 PM Yun Gao  wrote:
>
> >
> > Hi Lu,
> >
> > The images seem not to be shown due to the mail server limitation;
> > could you upload them somewhere and paste the links here?
> >
> > Logically, I think a zigzag is usually due to small objects being
> > created and collected soon after in the heap. Are you running a SQL job
> > or a DataStream job?
> >
> > Best,
> > Yun
> >
> > --
> > Sender:Lu Niu
> > Date:2021/04/05 12:06:24
> > Recipient:dev@flink.apache.org
> > Theme:Zigzag shape in TM JVM used memory
> >
> > Hi, Flink dev
> >
> > We observed that the TM JVM used memory metric shows a zigzag shape across
> > lots of our applications, although these applications are quite different
> > in business logic. The upper bound is close to the max heap size. Is this
> > expected in a Flink application? Or does Flink internally
> > aggressively pre-allocate memory?
> >
> > app1
> > [image: Screen Shot 2021-04-04 at 8.46.45 PM.png]
> > app2
> > [image: Screen Shot 2021-04-04 at 8.45.35 PM.png]
> > app3
> > [image: Screen Shot 2021-04-04 at 8.43.53 PM.png]
> >
> > Best
> > Lu
> >
> >
>


Re: Automatic backpressure detection

2021-04-06 Thread Piotr Nowojski
Hi,

Lately we overhauled the backpressure detection [1] and a screenshot
preview of those efforts is attached here [2]. I encourage you to check the
1.13 RC0 build and how the current mechanism works for you [3]. To support
those WebUI changes we have added a couple of new metrics:
backPressuredTimeMsPerSecond, busyTimeMsPerSecond and idleTimeMsPerSecond.

1. I believe that solves 1.
2. This still requires a bit of manual investigation. Once you locate the
backpressuring task, you can check the detailed subtask stats to see if all
parallel instances are uniformly backpressured/busy or not. If you would
like to add a hint "it looks like you have a data skew in Task XYZ", that,
I believe, could be added to the WebUI.
3. The tricky part is how to display this kind of information. Currently I
would recommend just exporting/reporting the
backPressuredTimeMsPerSecond, busyTimeMsPerSecond and idleTimeMsPerSecond
metrics for every task to an external system and displaying them, for
example, in Grafana.
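
For point 3, a minimal polling sketch (the job and vertex IDs are
placeholders you need to fill in; the endpoint is Flink's standard REST
metrics API, so double-check it against the docs of your version):

import json
import urllib.request

BASE = "http://localhost:8081"  # assumed JobManager REST address
JOB_ID = "..."     # placeholder: your job id
VERTEX_ID = "..."  # placeholder: the vertex id of the task to watch
METRICS = ("backPressuredTimeMsPerSecond,"
           "busyTimeMsPerSecond,idleTimeMsPerSecond")

url = f"{BASE}/jobs/{JOB_ID}/vertices/{VERTEX_ID}/metrics?get={METRICS}"
with urllib.request.urlopen(url) as resp:
    for metric in json.load(resp):
        # Forward these to your external reporter, e.g. Grafana's backend.
        print(metric["id"], metric["value"])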

The blog post you are referencing is quite outdated, especially with those
new changes from 1.13. I'm hoping to write a new one pretty soon.

Piotrek

[1] https://issues.apache.org/jira/browse/FLINK-14712
[2]
https://issues.apache.org/jira/browse/FLINK-14814?focusedCommentId=17256926=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17256926
[3]
http://mail-archives.apache.org/mod_mbox/flink-user/202104.mbox/%3c1d2412ce-d4d0-ed50-6181-1b610e16d...@apache.org%3E

Mon, Apr 5, 2021 at 11:20 PM Lu Niu  wrote:

> Hi, Flink dev
>
> Lately, we want to develop some tools to:
> 1. show backpressure operator without manual operation
> 2. Provide suggestions to mitigate back pressure after checking data skew,
> external service RPC etc.
> 3. Show back pressure history
>
> Could anyone share their experience with such tooling?
> Also, I notice backpressure monitoring and detection is mentioned across
> multiple places. Could someone help to explain how these connect to each
> other? Maybe some of them are outdated? Thanks!
>
> 1. The official doc introduces monitoring back pressure through web UI.
>
> https://ci.apache.org/projects/flink/flink-docs-release-1.12/ops/monitoring/back_pressure.html
> 2. In https://flink.apache.org/2019/07/23/flink-network-stack-2.html, it
> says outPoolUsage, inPoolUsage metrics can be used to determine back
> pressure.
> 3. The latest Flink version introduces a metric called "isBackPressured",
> but I didn't find related documentation on its usage.
>
> Best
> Lu
>


[jira] [Created] (FLINK-22111) ClientTest.testSimpleRequests fails

2021-04-06 Thread Guowei Ma (Jira)
Guowei Ma created FLINK-22111:
-

 Summary: ClientTest.testSimpleRequests fails
 Key: FLINK-22111
 URL: https://issues.apache.org/jira/browse/FLINK-22111
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Queryable State
Affects Versions: 1.13.0
Reporter: Guowei Ma


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16056=logs=3d12d40f-c62d-5ec4-6acc-0efe94cc3e89=5d6e4255-0ea8-5e2a-f52c-c881b7872361=15421
{code:java}
21:47:16,289 [nioEventLoopGroup-4-3] WARN  
org.apache.flink.shaded.netty4.io.netty.channel.ChannelInitializer [] - Failed 
to initialize a channel. Closing: [id: 0x40eab0f6, L:/172.29.0.2:43846 - 
R:/172.29.0.2:42436]
org.apache.flink.shaded.netty4.io.netty.channel.ChannelPipelineException: 
org.apache.flink.queryablestate.network.ClientTest$1 is not a @Sharable 
handler, so can't be added or removed multiple times.
at 
org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.checkMultiplicity(DefaultChannelPipeline.java:600)
 ~[flink-shaded-netty-4.1.49.Final-12.0.jar:?]
at 
org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:202)
 ~[flink-shaded-netty-4.1.49.Final-12.0.jar:?]
at 
org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:381)
 ~[flink-shaded-netty-4.1.49.Final-12.0.jar:?]
at 
org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:370)
 ~[flink-shaded-netty-4.1.49.Final-12.0.jar:?]
at 
org.apache.flink.queryablestate.network.ClientTest$5.initChannel(ClientTest.java:897)
 ~[test-classes/:?]
at 
org.apache.flink.queryablestate.network.ClientTest$5.initChannel(ClientTest.java:890)
 ~[test-classes/:?]
at 
org.apache.flink.shaded.netty4.io.netty.channel.ChannelInitializer.initChannel(ChannelInitializer.java:129)
 [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
at 
org.apache.flink.shaded.netty4.io.netty.channel.ChannelInitializer.handlerAdded(ChannelInitializer.java:112)
 [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.callHandlerAdded(AbstractChannelHandlerContext.java:938)
 [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
at 
org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:609)
 [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
at 
org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.access$100(DefaultChannelPipeline.java:46)
 [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
at 
org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline$PendingHandlerAddedTask.execute(DefaultChannelPipeline.java:1463)
 [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
at 
org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.callHandlerAddedForAllHandlers(DefaultChannelPipeline.java:1115)
 [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
at 
org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.invokeHandlerAddedIfNeeded(DefaultChannelPipeline.java:650)
 [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:502)
 [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:417)
 [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:474)
 [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
 [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
 [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
 [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
 [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
at 
org.apache.flink.shaded.netty4.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
 [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
 [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_282]

[jira] [Created] (FLINK-22110) KinesisTableApiITCase.testTableApiSourceAndSink failed because downloading the docker image threw an IOException

2021-04-06 Thread Guowei Ma (Jira)
Guowei Ma created FLINK-22110:
-

 Summary: KinesisTableApiITCase.testTableApiSourceAndSink failed 
because downloading the docker image threw an IOException
 Key: FLINK-22110
 URL: https://issues.apache.org/jira/browse/FLINK-22110
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kinesis
Affects Versions: 1.13.0
Reporter: Guowei Ma


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16056=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529=27232


{code:java}
Apr 06 00:08:58 [ERROR] 
testTableApiSourceAndSink(org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase)
  Time elapsed: 0.007 s  <<< ERROR!
Apr 06 00:08:58 java.lang.RuntimeException: Could not build the flink-dist image
Apr 06 00:08:58 at 
org.apache.flink.tests.util.flink.FlinkContainer$FlinkContainerBuilder.build(FlinkContainer.java:280)
Apr 06 00:08:58 at 
org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase.(KinesisTableApiITCase.java:77)
Apr 06 00:08:58 at 
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
Apr 06 00:08:58 at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
Apr 06 00:08:58 at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
Apr 06 00:08:58 at 
java.lang.reflect.Constructor.newInstance(Constructor.java:423)
Apr 06 00:08:58 at 
org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:217)
Apr 06 00:08:58 at 
org.junit.runners.BlockJUnit4ClassRunner$1.runReflectiveCall(BlockJUnit4ClassRunner.java:266)
Apr 06 00:08:58 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
Apr 06 00:08:58 at 
org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:263)
Apr 06 00:08:58 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
Apr 06 00:08:58 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
Apr 06 00:08:58 at 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
Apr 06 00:08:58 at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
Apr 06 00:08:58 at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
Apr 06 00:08:58 at 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
Apr 06 00:08:58 at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
Apr 06 00:08:58 at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
Apr 06 00:08:58 at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)

{code}




--
This message was sent by Atlassian Jira
(v8.3.4#803005)