Flink DDL SQL behaves differently when run from a Test than from main

2021-01-20 Posted by HunterXHunter
The same code runs normally in main, but in a Test it exits immediately.

StreamExecutionEnvironment bsEnv = StreamExecutionEnvironment.getExecutionEnvironment();
EnvironmentSettings bsSettings = EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
StreamTableEnvironment bsTableEnv = StreamTableEnvironment.create(bsEnv, bsSettings);

bsTableEnv.executeSql(DDLSourceSQLManager.createStreamFromKafka("localhost:9092",
    "test", "test", "test", "json"));

bsTableEnv.executeSql(com.ddlsql.DDLSourceSQLManager.createDynamicPrintlnRetractSinkTbl("printlnRetractSink"));
bsTableEnv.executeSql("insert into printlnRetractSink select msg, count(*) as cnt from test group by msg");
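A likely cause (an assumption, not confirmed in this thread): `executeSql` on an INSERT submits the job asynchronously and returns at once. When a JUnit test method returns, the JVM shuts down and takes the local cluster with it, so the streaming job dies before producing output, whereas a `main` run may stay alive on non-daemon threads. A common workaround (Flink 1.12+ Table API) is to block on the returned TableResult:

```java
// Sketch: keep the test JVM alive until the job finishes.
TableResult result = bsTableEnv.executeSql(
    "insert into printlnRetractSink select msg, count(*) as cnt from test group by msg");
// For an unbounded Kafka source this waits indefinitely, which is what keeps
// the job running inside the test; bound it with a timeout if preferred.
result.await();
```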



--
Sent from: http://apache-flink.147419.n8.nabble.com/


test

2021-01-01 Posted by mq sun



Re: test

2020-12-27 Posted by liang zhao
Please stop sending these meaningless emails.

> On 2020-12-27 at 23:19, 蒋德祥 wrote:
> 
> 



test

2020-12-27 Posted by 蒋德祥



Re: How to test flink job recover from checkpoint

2020-03-05 Posted by Bajaj, Abhinav
I implemented a custom function that throws a runtime exception.

You can extend the simpler MapFunction or the more involved
RichParallelSourceFunction, depending on your use case.
You can add logic to throw a runtime exception on a certain condition in the
map or run method.
You can use a count or a timer to trigger the exception.

Sharing a quick handwritten example.

DataStream<String> stream = ...;
DataStream<String> mappedStream = stream.map(new MapFunction<String, String>() {
  @Override
  public String map(String value) throws Exception {
    if (SOME_CONDITION) {
      throw new RuntimeException("Lets test checkpointing");
    }
    return value;
  }
});
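The count-based trigger mentioned above can be sketched without any Flink dependency; `PeriodicFailer` and its method names are illustrative, not from this thread:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical helper: call maybeFail() from inside map() or run(). Every nth
// call throws, which fails the task and lets Flink restart the job from the
// last successful checkpoint.
class PeriodicFailer {
    private final long n;
    private final AtomicLong count = new AtomicLong();

    PeriodicFailer(long n) { this.n = n; }

    void maybeFail() {
        if (count.incrementAndGet() % n == 0) {
            throw new RuntimeException("Lets test checkpointing");
        }
    }
}
```

Because the counter state is lost on restart, the job fails again after roughly every n records, giving repeated recovery cycles to observe.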

~ Abhinav Bajaj


From: Eleanore Jin 
Date: Wednesday, March 4, 2020 at 4:40 PM
To: user , user-zh 
Subject: How to test flink job recover from checkpoint

Hi,

I have a flink application and checkpoint is enabled, I am running locally 
using miniCluster.

I just wonder if there is a way to simulate the failure, and verify that flink 
job restarts from checkpoint?

Thanks a lot!
Eleanore


Re: How to test flink job recover from checkpoint

2020-03-04 Posted by Zhu Zhu
Hi Eleanore,

You can change your application tasks to throw exceptions at a certain
frequency.
Alternatively, if the application has external dependencies (e.g. source),
you can trigger failures manually by manipulating the status of the
external service (e.g. shutdown the source service, or break the network
connection between the Flink app and the source service).

Thanks,
Zhu Zhu

Eleanore Jin wrote on Thursday, March 5, 2020 at 8:40 AM:

> Hi,
>
> I have a flink application and checkpoint is enabled, I am running locally
> using miniCluster.
>
> I just wonder if there is a way to simulate the failure, and verify that
> flink job restarts from checkpoint?
>
> Thanks a lot!
> Eleanore
>


How to test flink job recover from checkpoint

2020-03-04 Posted by Eleanore Jin
Hi,

I have a flink application and checkpoint is enabled, I am running locally
using miniCluster.

I just wonder if there is a way to simulate the failure, and verify that
flink job restarts from checkpoint?

Thanks a lot!
Eleanore


integration-test fails

2019-09-30 Posted by gaofeilong198...@163.com
On Linux I am testing Flink's flink-connector-kafka-0.10 module. Running mvn test passes without problems, but mvn integration-test fails, as follows:
Commands executed:
cd flink-connectors/flink-connector-kafka-0.10
mvn integration-test -Dtest=Kafka010ProducerITCase
Log:
[INFO] Error stacktraces are turned on.
[INFO] Scanning for projects...
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for 
org.apache.flink:flink-connector-kafka-0.10_2.11:jar:1.10-SNAPSHOT
[WARNING] 'artifactId' contains an expression but should be a constant. @ 
org.apache.flink:flink-connector-kafka-0.10_${scala.binary.version}:[unknown-version],
 
/home/j-gaofeilong-jk/software/flink/flink-master/flink-connectors/flink-connector-kafka-0.10/pom.xml,
 line 33, column 14
[WARNING] 
[WARNING] It is highly recommended to fix these problems because they threaten 
the stability of your build.
[WARNING] 
[WARNING] For this reason, future Maven versions might no longer support 
building such malformed projects.
[WARNING] 
[INFO] 
[INFO] --< org.apache.flink:flink-connector-kafka-0.10_2.11 >--
[INFO] Building flink-connector-kafka-0.10 1.10-SNAPSHOT
[INFO] [ jar ]-
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.17:check (validate) @ 
flink-connector-kafka-0.10_2.11 ---
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (enforce-maven-version) @ 
flink-connector-kafka-0.10_2.11 ---
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (enforce-maven) @ 
flink-connector-kafka-0.10_2.11 ---
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (enforce-versions) @ 
flink-connector-kafka-0.10_2.11 ---
[INFO] 
[INFO] --- directory-maven-plugin:0.1:highest-basedir (directories) @ 
flink-connector-kafka-0.10_2.11 ---
[INFO] Highest basedir set to: /home/j-gaofeilong-jk/software/flink/flink-master
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (process-resource-bundles) 
@ flink-connector-kafka-0.10_2.11 ---
[INFO] 
[INFO] --- maven-resources-plugin:3.1.0:resources (default-resources) @ 
flink-connector-kafka-0.10_2.11 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 2 resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.8.0:compile (default-compile) @ 
flink-connector-kafka-0.10_2.11 ---
[INFO] Nothing to compile - all classes are up to date
[INFO] 
[INFO] --- maven-resources-plugin:3.1.0:testResources (default-testResources) @ 
flink-connector-kafka-0.10_2.11 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.8.0:testCompile (default-testCompile) @ 
flink-connector-kafka-0.10_2.11 ---
[INFO] Nothing to compile - all classes are up to date
[INFO] 
[INFO] --- maven-surefire-plugin:2.22.1:test (default-test) @ 
flink-connector-kafka-0.10_2.11 ---
[INFO] Surefire report directory: 
/home/j-gaofeilong-jk/software/flink/flink-master/flink-connectors/flink-connector-kafka-0.10/target/surefire-reports
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running 
org.apache.flink.streaming.connectors.kafka.Kafka010ProducerITCase
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.26 s 
<<< FAILURE! - in 
org.apache.flink.streaming.connectors.kafka.Kafka010ProducerITCase
[ERROR] org.apache.flink.streaming.connectors.kafka.Kafka010ProducerITCase  
Time elapsed: 0.259 s  <<< ERROR!
java.io.IOException: No such file or directory

[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Errors: 
[ERROR]   
Kafka010ProducerITCase.org.apache.flink.streaming.connectors.kafka.Kafka010ProducerITCase
 » IO
[INFO] 
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0
[INFO] 
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time:  6.063 s
[INFO] Finished at: 2019-09-30T16:31:39+08:00
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.22.1:test (default-test) on 
project flink-connector-kafka-0.10_2.11: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/j-gaofeilong-jk/software/flink/flink-master/flink-connectors/flink-connector-kafka-0.10/target/surefire-reports
 for the individual test results.
[ERROR] Please refer to dump files (if any exist) [date].dump, 
[date]-jvmRun[N].dump and [date].dumpstream.
[ERROR] -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.22.1:test (default-test) on 
project flink-connector-kafka-0.10_2.11: There are test failures.

Please refer to 
/home/j-gaofeilong-jk/software/flink/flink-master/flink-conne
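The summary above only shows `java.io.IOException: No such file or directory`; the per-test report files the log itself points at usually contain the full stack trace. A sketch of how one might dig further (paths taken from the log above, flags are standard Maven options):

```shell
# Inspect the detailed surefire output for the failing ITCase
cat flink-connectors/flink-connector-kafka-0.10/target/surefire-reports/*.txt

# Re-run the single test with full Maven debug output
mvn integration-test -Dtest=Kafka010ProducerITCase -X
```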

test

2019-08-23 Posted by 张金
test


mail list test

2019-03-25 Posted by 邓成刚【qq】
mail list test