[jira] [Created] (FLINK-13577) Add a util class to build result row and generate result schema.

2019-08-04 Thread Xu Yang (JIRA)
Xu Yang created FLINK-13577:
---

 Summary: Add a util class to build result row and generate result schema.
 Key: FLINK-13577
 URL: https://issues.apache.org/jira/browse/FLINK-13577
 Project: Flink
  Issue Type: Sub-task
  Components: Library / Machine Learning
Reporter: Xu Yang


Add a util class to build the result row and generate the result schema, which will be used in later algorithm implementations.
* Add class OutputColsHelper.
* Add unit tests.
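A minimal sketch of what such a helper could look like, assuming a simplified API (plain arrays instead of Flink's Row and TableSchema types); the class and method names here are illustrative, not the actual Flink ML implementation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Simplified stand-in for an OutputColsHelper: it appends algorithm output
// columns to the reserved input columns and derives the combined schema.
class OutputColsHelperSketch {
    private final String[] reservedCols;
    private final String[] outputCols;

    OutputColsHelperSketch(String[] reservedCols, String[] outputCols) {
        this.reservedCols = reservedCols;
        this.outputCols = outputCols;
    }

    // Result schema = reserved input columns followed by output columns.
    String[] getResultSchema() {
        List<String> schema = new ArrayList<>(Arrays.asList(reservedCols));
        schema.addAll(Arrays.asList(outputCols));
        return schema.toArray(new String[0]);
    }

    // Result row = reserved input values followed by computed output values.
    Object[] buildResultRow(Object[] inputRow, Object[] outputValues) {
        Object[] result = new Object[inputRow.length + outputValues.length];
        System.arraycopy(inputRow, 0, result, 0, inputRow.length);
        System.arraycopy(outputValues, 0, result, inputRow.length, outputValues.length);
        return result;
    }
}
```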



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (FLINK-13576) Enable configuring IntermediateResultStorage with YAML

2019-08-04 Thread Xuannan Su (JIRA)
Xuannan Su created FLINK-13576:
--

 Summary: Enable configuring IntermediateResultStorage with YAML
 Key: FLINK-13576
 URL: https://issues.apache.org/jira/browse/FLINK-13576
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Xuannan Su


The CacheManager would load the YAML file from the classpath and parse it into key-value properties that can be used by the Table API.
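A hedged sketch of the flattening step, assuming the YAML file has already been parsed into a nested map (e.g. by a YAML library such as SnakeYAML); the dotted-key property format and the method names are assumptions, not the actual implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Flattens a nested map (as produced by a YAML parser) into dotted
// key-value properties, e.g. {a: {b: 1}} -> {"a.b": "1"}.
class YamlProperties {
    static Map<String, String> flatten(Map<String, Object> yaml) {
        Map<String, String> props = new LinkedHashMap<>();
        flattenInto("", yaml, props);
        return props;
    }

    @SuppressWarnings("unchecked")
    private static void flattenInto(String prefix, Map<String, Object> node, Map<String, String> props) {
        for (Map.Entry<String, Object> e : node.entrySet()) {
            String key = prefix.isEmpty() ? e.getKey() : prefix + "." + e.getKey();
            if (e.getValue() instanceof Map) {
                // Recurse into nested sections, extending the dotted prefix.
                flattenInto(key, (Map<String, Object>) e.getValue(), props);
            } else {
                props.put(key, String.valueOf(e.getValue()));
            }
        }
    }
}
```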



--


[jira] [Created] (FLINK-13575) Introduce IntermediateResultStorageDescriptor

2019-08-04 Thread Xuannan Su (JIRA)
Xuannan Su created FLINK-13575:
--

 Summary: Introduce IntermediateResultStorageDescriptor
 Key: FLINK-13575
 URL: https://issues.apache.org/jira/browse/FLINK-13575
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Xuannan Su


The IntermediateResultStorageDescriptor can be used to configure the 
intermediate result storage used by the CacheManager.



--


[jira] [Created] (FLINK-13573) Merge SubmittedJobGraph into JobGraph

2019-08-04 Thread TisonKun (JIRA)
TisonKun created FLINK-13573:


 Summary: Merge SubmittedJobGraph into JobGraph
 Key: FLINK-13573
 URL: https://issues.apache.org/jira/browse/FLINK-13573
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Coordination
Affects Versions: 1.10.0
Reporter: TisonKun
 Fix For: 1.10.0


As time goes on, {{SubmittedJobGraph}} becomes a thin wrapper of {{JobGraph}} 
without any additional information. It is reasonable that we merge 
{{SubmittedJobGraph}} into {{JobGraph}} and use only {{JobGraph}}. 

WDYT? cc [~till.rohrmann] [~GJL]



--


[jira] [Created] (FLINK-13574) Introduce IntermediateResultStorage Interface

2019-08-04 Thread Xuannan Su (JIRA)
Xuannan Su created FLINK-13574:
--

 Summary: Introduce IntermediateResultStorage Interface
 Key: FLINK-13574
 URL: https://issues.apache.org/jira/browse/FLINK-13574
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Xuannan Su


The IntermediateResultStorage represents the backend storage where the 
intermediate result is stored upon caching. It provides the necessary methods 
for the CacheManager to substitute a cached table with a TableSink and TableSource.
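A hypothetical shape of such an interface, with the types simplified to plain strings; the actual API would use Flink's TableSink/TableSource abstractions, so all names and signatures below are assumptions for illustration only:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the storage interface described above.
interface IntermediateResultStorageSketch {
    // Called when a table is cached: returns a sink location for writing
    // the table's content to the backend storage.
    String createTableSink(String tableName);

    // Called when a cached table is read again: returns the source location
    // for scanning the previously written content.
    String createTableSource(String tableName);
}

// Trivial in-memory stand-in, used only to show how a CacheManager might
// substitute a cached table with a sink on write and a source on read.
class InMemoryStorage implements IntermediateResultStorageSketch {
    private final Map<String, String> locations = new HashMap<>();

    @Override
    public String createTableSink(String tableName) {
        String location = "mem://" + tableName;
        locations.put(tableName, location);
        return location;
    }

    @Override
    public String createTableSource(String tableName) {
        return locations.get(tableName);
    }
}
```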



--


[jira] [Created] (FLINK-13572) Introduce Configurable, TableCreationHook and TableCleanupHook Interfaces

2019-08-04 Thread Xuannan Su (JIRA)
Xuannan Su created FLINK-13572:
--

 Summary: Introduce Configurable, TableCreationHook and 
TableCleanupHook Interfaces
 Key: FLINK-13572
 URL: https://issues.apache.org/jira/browse/FLINK-13572
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Xuannan Su


* The Configurable interface indicates that the class can be instantiated by 
reflection and takes configuration parameters to configure itself.
* The TableCreationHook is responsible for preparing the location to store the 
content of the cached table and mapping the table name to fields in the 
configuration that the TableSinkFactory/TableSourceFactory can understand.
* The TableCleanupHook is responsible for deleting the content of the cached 
tables and reclaiming the space.
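A hedged sketch of the three interfaces as described above; all names and signatures are assumptions for illustration, not the actual Flink API:

```java
import java.util.HashMap;
import java.util.Map;

// Implementations are instantiated by reflection, then configured from
// key-value parameters.
interface Configurable {
    void configure(Map<String, String> properties);
}

// Prepares the storage location for a cached table and returns the
// properties a TableSinkFactory/TableSourceFactory can understand.
interface TableCreationHook extends Configurable {
    Map<String, String> createTable(String tableName);
}

// Deletes the content of the cached tables and reclaims the space.
interface TableCleanupHook extends Configurable {
    void deleteTables(Iterable<String> tableNames);
}

// Minimal illustrative implementation mapping table names to paths.
class FileSystemCreationHook implements TableCreationHook {
    private String baseDir = "/tmp";

    @Override
    public void configure(Map<String, String> properties) {
        baseDir = properties.getOrDefault("base-dir", baseDir);
    }

    @Override
    public Map<String, String> createTable(String tableName) {
        Map<String, String> props = new HashMap<>();
        props.put("connector.path", baseDir + "/" + tableName);
        return props;
    }
}
```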



--


[jira] [Created] (FLINK-13571) Support full data type in sql DML

2019-08-04 Thread Danny Chan (JIRA)
Danny Chan created FLINK-13571:
--

 Summary: Support full data type in sql DML
 Key: FLINK-13571
 URL: https://issues.apache.org/jira/browse/FLINK-13571
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Affects Versions: 1.9.0
Reporter: Danny Chan
 Fix For: 1.10.0


In FLINK-13335, we aligned the SQL CREATE TABLE DDL with FLIP-37, mainly 
for the data types, but for DML and SQL queries we still only support the 
old data types (cast expressions and literals). We should sync the data types 
for DML as well.

For implementation reasons, we decided to postpone this to 1.10.



--


[jira] [Created] (FLINK-13570) Pluggable Intermediate Result Storage

2019-08-04 Thread Xuannan Su (JIRA)
Xuannan Su created FLINK-13570:
--

 Summary: Pluggable Intermediate Result Storage
 Key: FLINK-13570
 URL: https://issues.apache.org/jira/browse/FLINK-13570
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / API
Reporter: Xuannan Su


As discussed in 
[FLIP-48|https://cwiki.apache.org/confluence/display/FLINK/FLIP-48%3A+Pluggable+Intermediate+Result+Storage],
 the intermediate result storage is the backend storage where the content of 
the cached table is stored.

[FLIP-36|https://cwiki.apache.org/confluence/display/FLINK/FLIP-36%3A+Support+Interactive+Programming+in+Flink]
 provides a default implementation of the intermediate result storage so that 
users can use the cache out of the box.

For more advanced usage of the cache, users may want to plug in external 
storage to hold the intermediate result. This task is to provide the 
necessary interfaces for pluggable intermediate result storage, along with an 
implementation that uses the filesystem as the intermediate result storage.



--


[jira] [Created] (FLINK-13569) DDL table property key is defined as identifier but should be string literal instead

2019-08-04 Thread Xuefu Zhang (JIRA)
Xuefu Zhang created FLINK-13569:
---

 Summary: DDL table property key is defined as identifier but should be string literal instead
 Key: FLINK-13569
 URL: https://issues.apache.org/jira/browse/FLINK-13569
 Project: Flink
  Issue Type: Bug
Reporter: Xuefu Zhang


The key name should be any free text, and should not be constrained by the 
identifier grammar.



--


[jira] [Created] (FLINK-13568) DDL create table doesn't allow STRING data type

2019-08-04 Thread Xuefu Zhang (JIRA)
Xuefu Zhang created FLINK-13568:
---

 Summary: DDL create table doesn't allow STRING data type
 Key: FLINK-13568
 URL: https://issues.apache.org/jira/browse/FLINK-13568
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.9.0
Reporter: Xuefu Zhang


Creating a table with "string" data type fails with tableEnv.sqlUpdate().



--


[jira] [Created] (FLINK-13567) Avro Confluent Schema Registry nightly end-to-end test failed on Travis

2019-08-04 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-13567:
-

 Summary: Avro Confluent Schema Registry nightly end-to-end test 
failed on Travis
 Key: FLINK-13567
 URL: https://issues.apache.org/jira/browse/FLINK-13567
 Project: Flink
  Issue Type: Bug
  Components: Tests
Affects Versions: 1.9.0, 1.10.0
Reporter: Till Rohrmann


The {{Avro Confluent Schema Registry nightly end-to-end test}} failed on Travis 
with

{code}
[FAIL] 'Avro Confluent Schema Registry nightly end-to-end test' failed after 2 minutes and 11 seconds! Test exited with exit code 1

No taskexecutor daemon (pid: 29044) is running anymore on travis-job-b0823aec-c4ec-4d4b-8b59-e9f968de9501.
No standalonesession daemon to stop on host travis-job-b0823aec-c4ec-4d4b-8b59-e9f968de9501.
rm: cannot remove '/home/travis/build/apache/flink/flink-dist/target/flink-1.10-SNAPSHOT-bin/flink-1.10-SNAPSHOT/plugins': No such file or directory
{code}

https://api.travis-ci.org/v3/job/567273939/log.txt



--


[jira] [Created] (FLINK-13566) Support checkpoint configuration through flink-conf.yaml

2019-08-04 Thread Gyula Fora (JIRA)
Gyula Fora created FLINK-13566:
--

 Summary: Support checkpoint configuration through flink-conf.yaml
 Key: FLINK-13566
 URL: https://issues.apache.org/jira/browse/FLINK-13566
 Project: Flink
  Issue Type: New Feature
  Components: Runtime / Checkpointing, Runtime / Configuration
Reporter: Gyula Fora


Currently basic checkpointing configuration happens through the 
StreamExecutionEnvironment and the CheckpointConfig class.

There is no way to configure checkpointing behaviour purely from the 
flink-conf.yaml file (or provide a default checkpointing behaviour) as it 
always needs to happen programmatically through the environment.

The checkpoint config settings are then translated down to the 
CheckpointCoordinatorConfiguration which will control the runtime behaviour.

As checkpointing-related settings are operational features that should not 
affect the application logic, I think we need to support configuring these 
params through flink-conf.yaml.

In order to do this we probably need to rework the CheckpointConfig class so 
that it distinguishes parameters that the user actually set from the defaults 
(to support overriding what was set in the conf).
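A hypothetical flink-conf.yaml fragment illustrating what such defaults could look like; the key names below are assumptions for illustration, not existing Flink options:

```yaml
# Hypothetical keys for default checkpointing behaviour (the actual option
# names would be decided during implementation).
execution.checkpointing.interval: 10s
execution.checkpointing.mode: EXACTLY_ONCE
execution.checkpointing.timeout: 10min
```

Values set programmatically through CheckpointConfig would then override these file-based defaults, which is why the class needs to distinguish user-set parameters from defaults.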



--


[ANNOUNCE] Weekly Community Update 2019/31

2019-08-04 Thread Konstantin Knauf
Dear community,

happy to share this week's community update with news about a Flink on
PyPi, code style proposals, Flink 1.9.0 RC1, Flame Graphs in the WebUI and
a bit more.

As always, please feel free to add additional updates and news to this
thread!

Flink Development
===

* [development process] Andrey has opened three threads following up on the
recently published "code style and quality guide". They deal with the usage
of Java Optional [1] (tl;dr: only as return type in public APIs),
Collections with initial capacity [2] (tl;dr: only if trivial), and how to
wrap long argument lists and chained method calls [3] (tl;dr: yes, but
details still open).

* [python] Jingcheng has started a voting thread on publishing PyFlink to
PyPI. The name will be "apache-flink" and the account will be managed by the
Apache Flink PMC. The vote has passed unanimously. [4]

* [metrics] David proposed to add a CPU flame graph [5] for each Task to
the WebUI (similar to the backpressure monitor). A CPU flame graph is a
visualisation method for stack trace samples, which makes it easier to
determine hot code paths. This has been well received and David is looking
for a committer to shepherd the effort. [6]

* [releases] Kurt announced a second preview RC (RC1) for Flink 1.9.0 to
facilitate ongoing release testing. There will be no vote on this RC. [7]

* [filesystems] Aljoscha proposed to remove the flink-mapr-fs module from
Apache Flink due to recent problems pulling its dependencies. If removed,
the MapR filesystem could still be used via Flink's HadoopFilesystem. [8]

* [client] Tison has published a first design document on the recently
discussed improvements to Flink's client API. [9]

[1]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-CODE-STYLE-Usage-of-Java-Optional-tp31240.html
[2]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-CODE-STYLE-Create-collections-always-with-initial-capacity-tp31229.html
[3]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-CODE-STYLE-Breaking-long-function-argument-lists-and-chained-method-calls-tp31242.html
[4]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/VOTE-Publish-the-PyFlink-into-PyPI-tp31201.html
[5] http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html
[6]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-CPU-flame-graph-for-a-job-vertex-in-web-UI-tp31188.html
[7]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/PREVIEW-Apache-Flink-1-9-0-release-candidate-1-tp31233.html
[8]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Removing-the-flink-mapr-fs-module-tp31080.html
[9]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Flink-client-api-enhancement-for-downstream-project-tp25844.html

Notable Bugs
===

Nothing for 1.6/1.7/1.8 that came to my attention.

Events, Blog Posts, Misc


* *Nico Kruber* published the second part of his series on Flink's network
stack, this time about metrics, monitoring and backpressure. (This slipped
through last week.) [10]

[10] https://flink.apache.org/2019/07/23/flink-network-stack-2.html

Cheers,

Konstantin (@snntrable)

-- 

Konstantin Knauf | Solutions Architect

+49 160 91394525


--

Ververica GmbH | Invalidenstrasse 115, 10115 Berlin, Germany

--

Ververica GmbH
Registered at Amtsgericht Charlottenburg: HRB 158244 B
Managing Directors: Dr. Kostas Tzoumas, Dr. Stephan Ewen


[jira] [Created] (FLINK-13565) Integrating Spring Boot produces an error

2019-08-04 Thread feixue (JIRA)
feixue created FLINK-13565:
--

 Summary: Integrating Spring Boot produces an error
 Key: FLINK-13565
 URL: https://issues.apache.org/jira/browse/FLINK-13565
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Client
Affects Versions: 1.8.1, 1.9.0
Reporter: feixue


When integrating Spring Boot 2.1.*, we initialize the Flink streaming job from 
Spring. In our local environment it works fine, but when we submit it through 
the dashboard ("Submit New Job") and then click "Show Plan", it shows an 
internal error like the following:
{code:java}
2019-08-04 18:05:28 [ERROR] [ispatcherRestEndpoint-thread-3] [o.a.f.r.w.h.JarPlanHandler][196] Unhandled exception.
org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: null
	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:546)
	at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
	at org.apache.flink.client.program.OptimizerPlanEnvironment.getOptimizedPlan(OptimizerPlanEnvironment.java:83)
	at org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:80)
	at org.apache.flink.runtime.webmonitor.handlers.utils.JarHandlerUtils$JarHandlerContext.toJobGraph(JarHandlerUtils.java:126)
	at org.apache.flink.runtime.webmonitor.handlers.JarPlanHandler.lambda$handleRequest$1(JarPlanHandler.java:100)
	at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.reflect.InvocationTargetException: null
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:47)
	at org.springframework.boot.loader.Launcher.launch(Launcher.java:86)
	at org.springframework.boot.loader.Launcher.launch(Launcher.java:50)
	at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
	... 9 common frames omitted
Caused by: org.apache.flink.client.program.OptimizerPlanEnvironment$ProgramAbortException: null
	at org.apache.flink.streaming.api.environment.StreamPlanEnvironment.execute(StreamPlanEnvironment.java:70)
	at org.apache.flink.streaming.api.environment.StreamPlanEnvironment.execute(StreamPlanEnvironment.java:53)
	at com.ggj.center.boot.demo.DemoApplication.run(DemoApplication.java:73)
	at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:779)
	at org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:763)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:318)
	at com.ggj.boot.demo.DemoApplication.main(DemoApplication.java:29)
	... 22 common frames omitted
{code}
Reading the source code, we found that in PackagedProgram.java the 
callMainMethod() method invokes the main method and catches 
InvocationTargetException. When ProgramAbortException is thrown, it surfaces as 
an InvocationTargetException whose target is the ProgramAbortException, so 
Flink catches it, checks the target exception for Error, and rethrows it.

 
{code:java}
try {
    mainMethod.invoke(null, (Object) args);
}
catch (IllegalArgumentException e) {
    throw new ProgramInvocationException("Could not invoke the main method, arguments are not matching.", e);
}
catch (IllegalAccessException e) {
    throw new ProgramInvocationException("Access to the main method was denied: " + e.getMessage(), e);
}
catch (InvocationTargetException e) {
    Throwable exceptionInMethod = e.getTargetException();
    if (exceptionInMethod instanceof Error) {
        throw (Error) exceptionInMethod;
    } else if (exceptionInMethod instanceof ProgramParametrizationException) {
        throw (ProgramParametrizationException) exceptionInMethod;
    } else if (exceptionInMethod instanceof ProgramInvocationException) {
        throw