[jira] [Created] (FLINK-18039) Change SourceCoordinator to handle resetToCheckpoint() call after started.

2020-05-29 Thread Jiangjie Qin (Jira)
Jiangjie Qin created FLINK-18039:


 Summary: Change SourceCoordinator to handle resetToCheckpoint() 
call after started.
 Key: FLINK-18039
 URL: https://issues.apache.org/jira/browse/FLINK-18039
 Project: Flink
  Issue Type: Sub-task
Affects Versions: 1.11.0
Reporter: Jiangjie Qin


Right now the SourceCoordinator assumes that {{resetToCheckpoint()}} is only 
called before {{start()}} is called. We need to change the SourceCoordinator to 
handle the case when {{resetToCheckpoint()}} is invoked after the 
SourceCoordinator has started.
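A minimal sketch of the desired behavior, using hypothetical names (this is not the actual Flink implementation): if the coordinator is already started, a reset should first tear down the running state and then restore from the checkpoint.

```java
// Hypothetical, simplified stand-in for the SourceCoordinator, showing
// only the reset-after-start handling described above.
public class Main {
    private boolean started = false;
    private String state = "uninitialized";

    void start() {
        started = true;
        state = "running";
    }

    void resetToCheckpoint(String checkpointData) {
        if (started) {
            // Previously this case was assumed not to happen; now we
            // tear down the running state before restoring.
            state = "closed";
        }
        // Restore from the checkpoint (and resume if already started).
        state = started ? checkpointData + " (restored, running)" : checkpointData;
    }

    public static void main(String[] args) {
        Main c = new Main();
        c.resetToCheckpoint("cp-1");  // reset before start: plain restore
        System.out.println(c.state);
        c.start();
        c.resetToCheckpoint("cp-2");  // reset after start: restore and resume
        System.out.println(c.state);
    }
}
```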



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18038) StateBackendLoader logs application-defined state before it is fully configured

2020-05-29 Thread Steve Bairos (Jira)
Steve Bairos created FLINK-18038:


 Summary: StateBackendLoader logs application-defined state before 
it is fully configured
 Key: FLINK-18038
 URL: https://issues.apache.org/jira/browse/FLINK-18038
 Project: Flink
  Issue Type: Bug
  Components: Runtime / State Backends
Affects Versions: 1.9.1
Reporter: Steve Bairos


In the 
[StateBackendLoader|https://github.com/apache/flink/blob/bb46756b84940a6134910e74406bfaff4f2f37e9/flink-runtime/src/main/java/org/apache/flink/runtime/state/StateBackendLoader.java#L201], 
there's this log line:
{code:java}
logger.info("Using application-defined state backend: {}", fromApplication); 
{code}
This seems inaccurate, though: immediately after logging this, if 
fromApplication is a ConfigurableStateBackend, we call its .configure() 
method and it is replaced by a newly configured StateBackend. 

To me, it seems like it would be better if we logged the state backend after it 
was fully configured. In the current setup, we get confusing logs like this: 
{code:java}
2020-05-29 21:39:44,387 INFO  org.apache.flink.streaming.runtime.tasks.StreamTask   - Using application-defined state backend: RocksDBStateBackend{checkpointStreamBackend=File State Backend (checkpoints: 's3://pinterest-montreal/checkpoints/xenon-dev-001-20191210/Xenon/BasicJavaStream', savepoints: 'null', asynchronous: UNDEFINED, fileStateThreshold: -1), localRocksDbDirectories=null, enableIncrementalCheckpointing=UNDEFINED, numberOfTransferingThreads=-1}
2020-05-29 21:39:44,387 INFO  org.apache.flink.streaming.runtime.tasks.StreamTask   - Configuring application-defined state backend with job/cluster config{code}
This makes it ambiguous whether settings in our flink-conf.yaml, such as 
"state.backend.incremental: true", are being applied properly. 
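A sketch of the suggested reordering, with simplified stand-in types (the real StateBackendLoader works with Flink's StateBackend and ConfigurableStateBackend interfaces): configure first, then log, so the message describes the backend that is actually used.

```java
// Simplified stand-ins for Flink's interfaces; for illustration only.
interface StateBackend {}

interface ConfigurableStateBackend extends StateBackend {
    StateBackend configure(String config);
}

public class Main {
    // Sketch of the suggested order: configure first, then log.
    static StateBackend load(StateBackend fromApplication, String config) {
        StateBackend backend = fromApplication;
        if (backend instanceof ConfigurableStateBackend) {
            backend = ((ConfigurableStateBackend) backend).configure(config);
        }
        // The log line now reflects the fully configured backend.
        System.out.println("Using application-defined state backend: " + backend);
        return backend;
    }

    public static void main(String[] args) {
        ConfigurableStateBackend app = config -> new StateBackend() {
            @Override
            public String toString() {
                return "ConfiguredBackend{" + config + "}";
            }
        };
        load(app, "state.backend.incremental: true");
    }
}
```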

 

I can make a diff for the change if there aren't any objections.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [NOTICE] Release guide updated for updating japicmp configuration

2020-05-29 Thread Yu Li
Thanks Chesnay for the efforts!

Best Regards,
Yu


On Fri, 29 May 2020 at 18:03, Piotr Nowojski  wrote:

> Thanks Chesnay for adding those scripts and configuring checks!
>
> Piotrek
>
> > On 29 May 2020, at 10:04, Chesnay Schepler  wrote:
> >
> > Hello everyone,
> >
> > We recently decided to enforce compatibility for @PublicEvolving APIs
> for minor releases.
> >
> > This requires modifications to the japicmp-maven-plugin execution on the
> corresponding release-X.Y branch after X.Y.Z was released.
> >
> > In FLINK-17844 new tooling was added to take care of this
> (tools/releasing/update_japicmp_configuration.sh), but it must be run
> manually by the release manager, after the release has concluded.
> >
> > Note that this is also run automatically when an RC is created, as a
> final safeguard in case the manual step is missed.
> >
> > I have amended the release guide accordingly:
> >
> > Update japicmp configuration
> >
> > Update the japicmp reference version and enable API compatibility checks
> for @PublicEvolving  APIs on the corresponding SNAPSHOT branch.
> >
> > For a new major release (x.y.0), run the same command also on the master
> branch for updating the japicmp reference version.
> >
> > tools $ NEW_VERSION=$RELEASE_VERSION
> releasing/update_japicmp_configuration.sh
> > tools $ cd ..
> > $ git add *
> > $ git commit -m "Update japicmp configuration for $RELEASE_VERSION"
>
>


[jira] [Created] (FLINK-18037) The doc of StreamTaskNetworkInput.java may have a redundant 'status'

2020-05-29 Thread ZhuShang (Jira)
ZhuShang created FLINK-18037:


 Summary: The doc of StreamTaskNetworkInput.java may have a 
redundant 'status'
 Key: FLINK-18037
 URL: https://issues.apache.org/jira/browse/FLINK-18037
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.10.1, 1.10.0, 1.11.0
Reporter: ZhuShang


The doc of class StreamTaskNetworkInput reads as follows: 
{code:java}
* Forwarding elements, watermarks, or status status elements must be 
protected by synchronizing
* on the given lock object. This ensures that we don't call methods on a
* {@link StreamInputProcessor} concurrently with the timer callback or other 
things.{code}
 
 Is one of the 'status' redundant?
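If so, the fix would simply drop the duplicated word, e.g.:

```java
/**
 * Forwarding elements, watermarks, or status elements must be protected by synchronizing
 * on the given lock object. This ensures that we don't call methods on a
 * {@link StreamInputProcessor} concurrently with the timer callback or other things.
 */
```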



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [NOTICE] Release guide updated for updating japicmp configuration

2020-05-29 Thread Zhijiang
Thanks for the updates, Chesnay! 
Really helpful!

Best,
Zhijiang


--
From: Piotr Nowojski 
Send Time: Friday, May 29, 2020 18:03
To: Chesnay Schepler 
Cc: dev@flink.apache.org ; zhijiang 

Subject:Re: [NOTICE] Release guide updated for updating japicmp configuration

Thanks Chesnay for adding those scripts and configuring checks!

Piotrek


On 29 May 2020, at 10:04, Chesnay Schepler  wrote:
Hello everyone,
We recently decided to enforce compatibility for @PublicEvolving APIs for minor 
releases.
This requires modifications to the japicmp-maven-plugin execution on the 
corresponding release-X.Y branch after X.Y.Z was released.
In FLINK-17844 new tooling was added to take care of this 
(tools/releasing/update_japicmp_configuration.sh), but it must be run manually 
by the release manager, after the release has concluded.
Note that this is also run automatically when an RC is created, as a final 
safeguard in case the manual step is missed.
I have amended the release guide accordingly: Update japicmp configuration
Update the japicmp reference version and enable API compatibility checks for 
@PublicEvolving  APIs on the corresponding SNAPSHOT branch.
For a new major release (x.y.0), run the same command also on the master branch 
for updating the japicmp reference version.  
tools $ NEW_VERSION=$RELEASE_VERSION releasing/update_japicmp_configuration.sh
tools $ cd ..
$ git add *
$ git commit -m "Update japicmp configuration for $RELEASE_VERSION"  



[jira] [Created] (FLINK-18036) Chinese documentation build is broken

2020-05-29 Thread Aljoscha Krettek (Jira)
Aljoscha Krettek created FLINK-18036:


 Summary: Chinese documentation build is broken
 Key: FLINK-18036
 URL: https://issues.apache.org/jira/browse/FLINK-18036
 Project: Flink
  Issue Type: Task
  Components: chinese-translation, Documentation
Affects Versions: 1.11.0
Reporter: Aljoscha Krettek
 Fix For: 1.11.0


Log from one of the builders: 
https://ci.apache.org/builders/flink-docs-master/builds/1848/steps/Build%20docs/logs/stdio

The problem is that the Chinese doc uses {{{% link %}}} tags that refer to 
documents from the English documentation. It should be as easy as adding 
{{.zh}} in these links.
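For illustration (the file path here is hypothetical), the fix in a Chinese page would look like:

```markdown
{% comment %} Before: resolves the English document and breaks the Chinese build {% endcomment %}
[State Backends]({% link ops/state/state_backends.md %})

{% comment %} After: references the Chinese variant of the document {% endcomment %}
[State Backends]({% link ops/state/state_backends.zh.md %})
```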

It seems this change introduced the problem: 
https://github.com/apache/flink/commit/d40abbf0309f414a6acf8a090c448ba397a08d9c



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Introduce a new module 'flink-hadoop-utils'

2020-05-29 Thread Robert Metzger
Thanks a lot! Let's continue the discussion in the ticket! (I might not be
able to respond before Monday there)

On Thu, May 28, 2020 at 5:08 PM Sivaprasanna 
wrote:

> FYI.
>
> I created a Jira to track this improvement.
> https://issues.apache.org/jira/browse/FLINK-18013
>
> -
> Sivaprasanna
>
> On Thu, May 28, 2020 at 12:22 PM Sivaprasanna 
> wrote:
>
> > Awesome. : )
> > Thanks, Robert for signing up to be the reviewer. I will create Jira and
> > share the link here.
> >
> > Stay safe.
> >
> > -
> > Sivaprasanna
> >
> > On Thu, May 28, 2020 at 12:13 PM Robert Metzger 
> > wrote:
> >
> >> Hi Sivaprasanna,
> >>
> >> thanks a lot for your proposal. Now that I ran into a
> HadoopUtils-related
> >> issue myself [1] I see the benefit in this proposal.
> >>
> >> I'm happy to be the Flink committer that mentors this change. If we do
> >> this, I would like to have a small scope for the initial change:
> >> - create a "flink-hadoop-utils" module
> >> - move generic, common utils into that module (for example
> >> SerializableHadoopConfiguration)
> >>
> >> I agree with Till that we should initially leave out the Hadoop
> >> compatibility modules.
> >>
> >> You can go ahead with filing a JIRA ticket! Let's discuss the exact
> scope
> >> there.
> >>
> >>
> >> [1] https://github.com/apache/flink/pull/12146
> >>
> >>
> >> On Thu, Apr 30, 2020 at 6:54 PM Sivaprasanna  >
> >> wrote:
> >>
> >> > Bump.
> >> >
> >> > Please let me know, if someone is interested in reviewing this one. I
> am
> >> > willing to start working on this. BTW, a small and new addition to the
> >> > list: With FLINK-10114 merged, OrcBulkWriterFactory can also reuse
> >> > `SerializableHadoopConfiguration` along with SequenceFileWriterFactory
> >> and
> >> > CompressWriterFactory.
> >> >
> >> > CC - Kostas Kloudas since he has a better understanding on the
> >> > `SerializableHadoopConfiguration.`
> >> >
> >> > Cheers,
> >> > Sivaprasanna
> >> >
> >> > On Mon, Mar 30, 2020 at 3:17 PM Chesnay Schepler 
> >> > wrote:
> >> >
> >> > > I would recommend to wait until a committer has signed up for
> >> reviewing
> >> > > your changes before preparing any PR.
> >> > > Otherwise the chances are high that you invest a lot of time but the
> >> > > changes never get in.
> >> > >
> >> > > On 30/03/2020 11:42, Sivaprasanna wrote:
> >> > > > Hello Till,
> >> > > >
> >> > > > I agree with having the scope limited and more concentrated. I can
> >> > file a
> >> > > > Jira and get started with the code changes, as and when someone
> has
> >> > some
> >> > > > bandwidth, the review can also be done. What do you think?
> >> > > >
> >> > > > Cheers,
> >> > > > Sivaprasanna
> >> > > >
> >> > > > On Mon, Mar 30, 2020 at 3:00 PM Till Rohrmann <
> trohrm...@apache.org
> >> >
> >> > > wrote:
> >> > > >
> >> > > >> Hi Sivaprasanna,
> >> > > >>
> >> > > >> thanks for starting this discussion. In general I like the idea
> to
> >> > > remove
> >> > > >> duplications and move common code to a shared module. As a
> >> > > recommendation,
> >> > > >> I would exclude the whole part about Flink's Hadoop compatibility
> >> > > modules
> >> > > >> because they are legacy code and hardly used anymore. This would
> >> also
> >> > > have
> >> > > >> the benefit of making the scope of the proposal a bit smaller.
> >> > > >>
> >> > > >> What we now need is a committer who wants to help with this
> >> effort. It
> >> > > >> might be that this takes a bit of time as many of the committers
> >> are
> >> > > quite
> >> > > >> busy.
> >> > > >>
> >> > > >> Cheers,
> >> > > >> Till
> >> > > >>
> >> > > >> On Thu, Mar 19, 2020 at 2:15 PM Sivaprasanna <
> >> > sivaprasanna...@gmail.com
> >> > > >
> >> > > >> wrote:
> >> > > >>
> >> > > >>> Hi,
> >> > > >>>
> >> > > >>> Continuing on an earlier discussion[1] regarding having a
> separate
> >> > > module
> >> > > >>> for Hadoop related utility components, I have gone through our
> >> > project
> >> > > >>> briefly and found the following components which I feel could be
> >> > moved
> >> > > >> to a
> >> > > >>> separate module for reusability, and better module structure.
> >> > > >>>
> >> > > >>> Module Name Class Name Used at / Remarks
> >> > > >>>
> >> > > >>> flink-hadoop-fs
> >> > > >>> flink.runtime.util.HadoopUtils
> >> > > >>> flink-runtime => HadoopModule & HadoopModuleFactory
> >> > > >>> flink-swift-fs-hadoop => SwiftFileSystemFactory
> >> > > >>> flink-yarn => Utils, YarnClusterDescriptor
> >> > > >>>
> >> > > >>> flink-hadoop-compatibility
> >> > > >>> api.java.hadoop.mapred.utils.HadoopUtils
> >> > > >>> Both belong to the same module but with different packages
> >> > > >>> (api.java.hadoop.mapred and api.java.hadoop.mapreduce)
> >> > > >>> api.java.hadoop.mapreduce.utils.HadoopUtils
> >> > > >>> flink-sequence-file
> >> > > >>> formats.sequencefile.SerializableHadoopConfiguration Currently,
> >> > > >>> it is used at formats.sequencefile.SequenceFileWriterFactory but
> >> can
> >> > > also
> >> > > >>> be used at 

[jira] [Created] (FLINK-18035) Executors#newCachedThreadPool could not work as expected

2020-05-29 Thread Yang Wang (Jira)
Yang Wang created FLINK-18035:

 Summary: Executors#newCachedThreadPool could not work as expected
 Key: FLINK-18035
 URL: https://issues.apache.org/jira/browse/FLINK-18035
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.11.0
Reporter: Yang Wang
 Fix For: 1.11.0


In FLINK-17558, we introduced {{Executors#newCachedThreadPool}} to create a 
dedicated thread pool for TaskManager IO. However, it does not work as 
expected.

The root cause is the following behavior of the {{ThreadPoolExecutor}} 
constructor: a new thread is started only when the workQueue is full and 
rejects an offer. So if we use a {{LinkedBlockingQueue}} with 
{{Integer.MAX_VALUE}} capacity, only one thread will ever be started.
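The behavior can be demonstrated with plain JDK classes (a standalone illustration, not the Flink code): with an unbounded {{LinkedBlockingQueue}}, offer() always succeeds, so the pool never grows beyond its core size; a {{SynchronousQueue}} rejects offers while all threads are busy, forcing growth up to the maximum pool size.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class Main {
    public static void main(String[] args) throws Exception {
        CountDownLatch block = new CountDownLatch(1);
        Runnable task = () -> {
            try { block.await(); } catch (InterruptedException ignored) {}
        };

        // Unbounded queue: offer() always succeeds, so the executor never
        // starts more than corePoolSize (here 1) threads.
        ThreadPoolExecutor unbounded = new ThreadPoolExecutor(
                1, 4, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        for (int i = 0; i < 4; i++) unbounded.execute(task);
        Thread.sleep(200);
        System.out.println("unbounded queue pool size: " + unbounded.getPoolSize());

        // SynchronousQueue: offer() fails while all threads are busy, which
        // forces the executor to grow up to maximumPoolSize (here 4).
        ThreadPoolExecutor direct = new ThreadPoolExecutor(
                1, 4, 60L, TimeUnit.SECONDS, new SynchronousQueue<>());
        for (int i = 0; i < 4; i++) direct.execute(task);
        Thread.sleep(200);
        System.out.println("synchronous queue pool size: " + direct.getPoolSize());

        block.countDown();
        unbounded.shutdown();
        direct.shutdown();
    }
}
```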



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [ANNOUNCE] Stateful Functions 2.1.0 feature freeze

2020-05-29 Thread Tzu-Li (Gordon) Tai
Absolutely, thanks Seth.
I won't necessarily block the RCs on this, but will make sure that these
are in before announcing the release.

On Fri, May 29, 2020 at 4:32 AM Seth Wiesman  wrote:

> Thank you for driving this release Gordon!
>
> We are still missing documentation for state ttl and unix sockets. This
> doesn't need to block release testing, but it would be nice to get this
> done before 2.1 is released.
>
> Seth
>
> https://issues.apache.org/jira/browse/FLINK-18015
> https://issues.apache.org/jira/browse/FLINK-18016
>
> On Thu, May 28, 2020 at 7:03 AM Tzu-Li (Gordon) Tai 
> wrote:
>
> > Hi all,
> >
> > Following the consensus to do the next feature release for Flink Stateful
> > Functions,
> > we've finished all the planned features and have cut the feature branch:
> > https://github.com/apache/flink-statefun/tree/release-2.1
> >
> > This time, we've added quite a bit more coverage: the typical
> > functional tests that we performed in the last release are now in the
> > form of end-to-end tests which we currently run for every build,
> > so we should be confident to already start with a voting RC soon
> > that requires minimal manual functional testing.
> >
> > Therefore, I'll aim to create the first RC to vote on early next week.
> >
> > Cheers,
> > Gordon
> >
>


Re: [NOTICE] Release guide updated for updating japicmp configuration

2020-05-29 Thread Piotr Nowojski
Thanks Chesnay for adding those scripts and configuring checks!

Piotrek

> On 29 May 2020, at 10:04, Chesnay Schepler  wrote:
> 
> Hello everyone,
> 
> We recently decided to enforce compatibility for @PublicEvolving APIs for 
> minor releases.
> 
> This requires modifications to the japicmp-maven-plugin execution on the 
> corresponding release-X.Y branch after X.Y.Z was released.
> 
> In FLINK-17844 new tooling was added to take care of this 
> (tools/releasing/update_japicmp_configuration.sh), but it must be run 
> manually by the release manager, after the release has concluded.
> 
> Note that this is also run automatically when an RC is created, as a final 
> safeguard in case the manual step is missed.
> 
> I have amended the release guide accordingly:
> 
> Update japicmp configuration
> 
> Update the japicmp reference version and enable API compatibility checks for 
> @PublicEvolving  APIs on the corresponding SNAPSHOT branch.
> 
> For a new major release (x.y.0), run the same command also on the master 
> branch for updating the japicmp reference version.
> 
> tools $ NEW_VERSION=$RELEASE_VERSION releasing/update_japicmp_configuration.sh
> tools $ cd ..
> $ git add *
> $ git commit -m "Update japicmp configuration for $RELEASE_VERSION"



[jira] [Created] (FLINK-18034) Introduce PreferredLocationsRetriever

2020-05-29 Thread Zhu Zhu (Jira)
Zhu Zhu created FLINK-18034:

 Summary: Introduce PreferredLocationsRetriever
 Key: FLINK-18034
 URL: https://issues.apache.org/jira/browse/FLINK-18034
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Reporter: Zhu Zhu
Assignee: Zhu Zhu
 Fix For: 1.12.0


Preferred locations based on state and inputs are scattered across multiple 
components, which makes it harder to reason about the calculation and 
complicates the hosting components.

We can introduce a {{PreferredLocationsRetriever}} to be used by 
{{ExecutionSlotAllocator}} which returns preferred locations of an execution 
vertex and hides the details of the calculation from other components.
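A rough sketch of what such an interface could look like (names and signatures are guesses based on the ticket description, not the actual Flink API):

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.concurrent.CompletableFuture;

// Hypothetical shape of the retriever described above.
interface PreferredLocationsRetriever {
    // Returns the preferred locations of the given execution vertex,
    // combining state-based and input-based preferences internally.
    CompletableFuture<Collection<String>> getPreferredLocations(String executionVertexId);
}

public class Main {
    public static void main(String[] args) {
        // The ExecutionSlotAllocator would depend only on this interface;
        // how state and input preferences are combined stays hidden.
        PreferredLocationsRetriever retriever = vertexId ->
                CompletableFuture.completedFuture(Arrays.asList("tm-1", "tm-2"));

        retriever.getPreferredLocations("vertex-0")
                .thenAccept(locations -> System.out.println("preferred: " + locations))
                .join();
    }
}
```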



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18033) Improve e2e test execution time

2020-05-29 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-18033:


 Summary: Improve e2e test execution time
 Key: FLINK-18033
 URL: https://issues.apache.org/jira/browse/FLINK-18033
 Project: Flink
  Issue Type: Improvement
  Components: Build System / Azure Pipelines, Test Infrastructure, Tests
Reporter: Chesnay Schepler


Running all e2e tests currently requires ~3.5h, and this time is growing.

We should look into ways to bring this time down to improve feedback times.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Fwd: pyflink Table API: connecting to external systems

2020-05-29 Thread Xingbo Huang
-- Forwarded message -
From: Xingbo Huang 
Date: Friday, May 29, 2020, 4:30 PM
Subject: Re: pyflink Table API: connecting to external systems
To: 刘亚坤 


Hi,
I think what you are asking is how to read an entire JSON record from Kafka
as a single string, without applying any format parsing. If that is the case,
my understanding is that you cannot use the json format; you need the csv
format instead, and you have to specify a field_delimiter. The default is a
comma, so change it to something like \n, otherwise your JSON string will get
split apart. I just tried this with the descriptor API and it works; you can
give it a try.

Below is my complete PyFlink job that reads the JSON string and then parses the time field out of the data:
def str_func(str_param):
    import json
    return json.loads(str_param)['time']

s_env = StreamExecutionEnvironment.get_execution_environment()
s_env.set_parallelism(1)
s_env.set_stream_time_characteristic(TimeCharacteristic.ProcessingTime)
st_env = StreamTableEnvironment.create(s_env)
result_file = "/tmp/slide_row_window_streaming.csv"
if os.path.exists(result_file):
    os.remove(result_file)
st_env \
    .connect(  # declare the external system to connect to
        Kafka()
        .version("0.11")
        .topic("user")
        .start_from_earliest()
        .property("zookeeper.connect", "localhost:2181")
        .property("bootstrap.servers", "localhost:9092")
    ) \
    .with_format(  # declare a format for this system
        Csv()
        .schema(DataTypes.ROW(
            [DataTypes.FIELD("a", DataTypes.STRING())]))
        .field_delimiter('\n')
    ) \
    .with_schema(  # declare the schema of the table
        Schema()
        .field("a", DataTypes.STRING())
    ) \
    .in_append_mode() \
    .register_table_source("source")

st_env.register_function(
    "str_func", udf(str_func, [DataTypes.STRING()], DataTypes.STRING()))
st_env.register_table_sink("sink",
                           CsvTableSink(["a"],
                                        [DataTypes.STRING()],
                                        result_file))

st_env.scan("source").select("str_func(a)").insert_into("sink")

The data in Kafka:
{"a": "a", "b": 1, "c": 1, "time": "2013-01-01T00:14:13Z"}
{"a": "b", "b": 2, "c": 2, "time": "2013-01-01T00:24:13Z"}
{"a": "a", "b": 3, "c": 3, "time": "2013-01-01T00:34:13Z"}
{"a": "a", "b": 4, "c": 4, "time": "2013-01-01T01:14:13Z"}
{"a": "b", "b": 4, "c": 5, "time": "2013-01-01T01:24:13Z"}
{"a": "a", "b": 5, "c": 2, "time": "2013-01-01T01:34:13Z"}

The resulting data in the CSV file:
2013-01-01T00:14:13Z
2013-01-01T00:24:13Z
2013-01-01T00:34:13Z
2013-01-01T01:14:13Z
2013-01-01T01:24:13Z
2013-01-01T01:34:13Z

Best,
Xingbo

刘亚坤 wrote on Friday, May 29, 2020 at 2:17 PM:

> I am currently learning the pyflink Table API and have a question:
> 1. When the Table API connects to a Kafka system, can an entire Kafka
> message be treated as a single table field? For example, if the messages in
> the Kafka topic are JSON strings, treating the whole string as one field
> would make it convenient to use pyflink UDF functions to process and
> transform the messages.
> 2. If that is feasible, how should the data format of the Kafka connection
> be set, i.e. how should with_format be configured? The official website
> currently has little material on this.
>
> I am new to this; any guidance is appreciated, thanks.
>


[jira] [Created] (FLINK-18032) Remove outdated sections in migration guide

2020-05-29 Thread Aljoscha Krettek (Jira)
Aljoscha Krettek created FLINK-18032:


 Summary: Remove outdated sections in migration guide
 Key: FLINK-18032
 URL: https://issues.apache.org/jira/browse/FLINK-18032
 Project: Flink
  Issue Type: Task
  Components: Documentation
Reporter: Aljoscha Krettek


There is still a section in {{docs/dev/migration.md}} about migrating from 1.2 
to 1.3, and we talk about the serializable list interfaces that will be removed 
as part of FLINK-17376.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] Release flink-shaded 11.0, release candidate #1

2020-05-29 Thread Hequn Cheng
+1 (binding)

1. Went through all the commits for 11.0.
2. website PR looks good
3. Built from source archive successfully.
4. Signatures and hash are correct.

Best,
Hequn

On Fri, May 29, 2020 at 4:08 PM Chesnay Schepler  wrote:

> +1 (binding)
>
> On 28/05/2020 18:49, Yu Li wrote:
> > +1 (non-binding)
> >
> > Checked issues listed in release notes: ok (*)
> > - Add shaded-11.0 as fixed version for FLINK-17513
> > - Minor: FLINK-16454 is listed in the release note but found no changes
> in
> > commit history
> > Checked sums and signatures: ok
> > Checked the maven central artifacts: ok
> > Built from source: ok (8u101)
> > Built from source (with -Dshade-sources): ok (8u101)
> > Checked website pull request listing the new release: ok
> >
> > Best Regards,
> > Yu
> >
> >
> > On Fri, 29 May 2020 at 00:32, Till Rohrmann 
> wrote:
> >
> >> +1 (binding)
> >>
> >> - verified checksums and signature
> >> - mvn clean verify passes on source release
> >> - verified licenses
> >> - checked pom.xml changes
> >>
> >> Cheers,
> >> Till
> >>
> >> On Thu, May 28, 2020 at 1:05 PM Congxian Qiu 
> >> wrote:
> >>
> >>> +1 (non-binding)
> >>>
> >>> checked
> >>> - mvn clean verify, ok
> >>> - gpg & sha512, ok
> >>> - all pom files point to the same version, ok
> >>> - checked license, ok
> >>>
> >>> Best,
> >>> Congxian
> >>>
> >>>
> >>> Robert Metzger wrote on Wednesday, May 27, 2020 at 6:05 PM:
> >>>
>  +1 (binding)
> 
>  Checks:
>  - diff to flink-shaded 1.10:
> 
> 
> >>
> https://github.com/apache/flink-shaded/compare/release-10.0...release-11.0-rc1
>  - mvn clean install passes on the source archive
>  - sha of source archive is correct
>  - source archive is signed by Chesnay
>  - mvn staging repo looks reasonable
>  - flink-shaded-zookeeper 3 jar license documentation seems correct
> 
> 
> 
>  On Mon, May 25, 2020 at 7:14 PM Chesnay Schepler 
>  wrote:
> 
> > Hi everyone,
> > Please review and vote on the release candidate #1 for the version
> >>> 11.0,
> > as follows:
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> >
> > The complete staging area is available for your review, which
> >> includes:
> > * JIRA release notes [1],
> > * the official Apache source release to be deployed to
> >> dist.apache.org
> > [2], which are signed with the key with fingerprint 11D464BA [3],
> > * all artifacts to be deployed to the Maven Central Repository [4],
> > * source code tag "release-11.0-rc1" [5],
> > * website pull request listing the new release [6].
> >
> > The vote will be open for at least 72 hours. It is adopted by
> >> majority
> > approval, with at least 3 PMC affirmative votes.
> >
> > Thanks,
> > Chesnay
> >
> > [1]
> >
> >
> >>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12347784
> > [2]
> >>> https://dist.apache.org/repos/dist/dev/flink/flink-shaded-11.0-rc1/
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [4]
> >
> >>>
> https://repository.apache.org/content/repositories/orgapacheflink-1372/
> > [5] https://github.com/apache/flink-shaded/tree/release-11.0-rc1
> > [6] https://github.com/apache/flink-web/pull/340
> >
> >
>
>


[RESULT][VOTE] Release flink-shaded 11.0, release candidate #1

2020-05-29 Thread Chesnay Schepler

I'm happy to announce that we have unanimously approved this release.

There are 5 approving votes, 3 of which are binding:
* Robert (binding)
* Congxian (non-binding)
* Till (binding)
* Yu (non-binding)
* Chesnay (binding)

There are no disapproving votes.

Thanks everyone

On 25/05/2020 19:14, Chesnay Schepler wrote:

Hi everyone,
Please review and vote on the release candidate #1 for the version 
11.0, as follows:

[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)


The complete staging area is available for your review, which includes:
* JIRA release notes [1],
* the official Apache source release to be deployed to dist.apache.org 
[2], which are signed with the key with fingerprint 11D464BA [3],

* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag "release-11.0-rc1" [5],
* website pull request listing the new release [6].

The vote will be open for at least 72 hours. It is adopted by majority 
approval, with at least 3 PMC affirmative votes.


Thanks,
Chesnay

[1] 
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12347784

[2] https://dist.apache.org/repos/dist/dev/flink/flink-shaded-11.0-rc1/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4] 
https://repository.apache.org/content/repositories/orgapacheflink-1372/

[5] https://github.com/apache/flink-shaded/tree/release-11.0-rc1
[6] https://github.com/apache/flink-web/pull/340






Re: [VOTE] Release flink-shaded 11.0, release candidate #1

2020-05-29 Thread Chesnay Schepler

+1 (binding)

On 28/05/2020 18:49, Yu Li wrote:

+1 (non-binding)

Checked issues listed in release notes: ok (*)
- Add shaded-11.0 as fixed version for FLINK-17513
- Minor: FLINK-16454 is listed in the release note but found no changes in
commit history
Checked sums and signatures: ok
Checked the maven central artifacts: ok
Built from source: ok (8u101)
Built from source (with -Dshade-sources): ok (8u101)
Checked website pull request listing the new release: ok

Best Regards,
Yu


On Fri, 29 May 2020 at 00:32, Till Rohrmann  wrote:


+1 (binding)

- verified checksums and signature
- mvn clean verify passes on source release
- verified licenses
- checked pom.xml changes

Cheers,
Till

On Thu, May 28, 2020 at 1:05 PM Congxian Qiu 
wrote:


+1 (non-binding)

checked
- mvn clean verify, ok
- gpg & sha512, ok
- all pom files point to the same version, ok
- checked license, ok

Best,
Congxian


Robert Metzger wrote on Wednesday, May 27, 2020 at 6:05 PM:


+1 (binding)

Checks:
- diff to flink-shaded 1.10:



https://github.com/apache/flink-shaded/compare/release-10.0...release-11.0-rc1

- mvn clean install passes on the source archive
- sha of source archive is correct
- source archive is signed by Chesnay
- mvn staging repo looks reasonable
- flink-shaded-zookeeper 3 jar license documentation seems correct



On Mon, May 25, 2020 at 7:14 PM Chesnay Schepler 
wrote:


Hi everyone,
Please review and vote on the release candidate #1 for the version

11.0,

as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)


The complete staging area is available for your review, which

includes:

* JIRA release notes [1],
* the official Apache source release to be deployed to

dist.apache.org

[2], which are signed with the key with fingerprint 11D464BA [3],
* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag "release-11.0-rc1" [5],
* website pull request listing the new release [6].

The vote will be open for at least 72 hours. It is adopted by

majority

approval, with at least 3 PMC affirmative votes.

Thanks,
Chesnay

[1]



https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12347784

[2]

https://dist.apache.org/repos/dist/dev/flink/flink-shaded-11.0-rc1/

[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4]


https://repository.apache.org/content/repositories/orgapacheflink-1372/

[5] https://github.com/apache/flink-shaded/tree/release-11.0-rc1
[6] https://github.com/apache/flink-web/pull/340






Re: Subscription mail

2020-05-29 Thread Marta Paes Moreira
You're right, Tison. I forgot to include his e-mail address, so thanks for
doing that (seems like we had a case of concurrency here, too!).

Thanks!

Marta

On Fri, May 29, 2020 at 9:44 AM tison  wrote:

> Hi,
>
> I'm not very sure, but IIRC if someone hasn't subscribed to this mailing list
> yet, they cannot receive mail from the list.
>
> Best,
> tison.
>
>
> 夏帅  于2020年5月29日周五 下午2:44写道:
>
> > send email to dev-subscr...@flink.apache.org
> >
> >
> > --
> > From: kartheek annamaneni 
> > Sent: Friday, May 29, 2020 14:38
> > To: dev 
> > Subject: Subscription mail
> >
> > Hi,
> >   How can I subscribe to this?
> >
> >
>


[NOTICE] Release guide updated for updating japicmp configuration

2020-05-29 Thread Chesnay Schepler

Hello everyone,

We recently decided to enforce compatibility for @PublicEvolving APIs 
for minor releases.


This requires modifications to the japicmp-maven-plugin execution on the 
corresponding release-X.Y branch after X.Y.Z was released.


In FLINK-17844 new tooling was added to take care of this 
(tools/releasing/update_japicmp_configuration.sh), but it must be run 
manually by the release manager, after the release has concluded.


Note that this is also run automatically when an RC is created, as a 
final safeguard in case the manual step is missed.


I have amended the release guide accordingly:


 Update japicmp configuration

Update the japicmp reference version and enable API compatibility 
checks for @PublicEvolving APIs on the corresponding SNAPSHOT branch.

For a new major release (x.y.0), run the same command also on the 
master branch for updating the japicmp reference version.

tools $ NEW_VERSION=$RELEASE_VERSION releasing/update_japicmp_configuration.sh
tools $ cd ..
$ git add *
$ git commit -m "Update japicmp configuration for $RELEASE_VERSION"


Re: Subscription mail

2020-05-29 Thread tison
Hi,

I'm not very sure, but IIRC if someone hasn't subscribed to this mailing list
yet, they cannot receive mail from the list.

Best,
tison.


夏帅 wrote on Friday, May 29, 2020 at 2:44 PM:

> send email to dev-subscr...@flink.apache.org
>
>
> --
> From: kartheek annamaneni 
> Sent: Friday, May 29, 2020 14:38
> To: dev 
> Subject: Subscription mail
>
> Hi,
>   How can I subscribe to this?
>
>


Re: [VOTE] Release flink-shaded 11.0, release candidate #1

2020-05-29 Thread Hequn Cheng
@Chesnay Schepler   Thanks a lot for your confirmation.
I think it is good that it won't block the release. I have lowered
the priority to Critical :)

On Fri, May 29, 2020 at 3:20 PM Chesnay Schepler  wrote:

>  From my understanding the current year in the copyright header is not
> such an important thing, so I wouldn't block the release on it.
>
> On 29/05/2020 09:11, Hequn Cheng wrote:
> > Sorry about my late check.
> > I found one blocker just now: the year in the root NOTICE file should
> > be updated. I have created a Jira (FLINK-18031) for it.
> >
> > Best,
> > Hequn
> >
> >
> > On Fri, May 29, 2020 at 12:47 AM Yu Li  wrote:
> >
> >> +1 (non-binding)
> >>
> >> Checked issues listed in release notes: ok (*)
> >> - Add shaded-11.0 as fixed version for FLINK-17513
> >> - Minor: FLINK-16454 is listed in the release note but found no changes
> in
> >> commit history
> >> Checked sums and signatures: ok
> >> Checked the maven central artifacts: ok
> >> Built from source: ok (8u101)
> >> Built from source (with -Dshade-sources): ok (8u101)
> >> Checked website pull request listing the new release: ok
> >>
> >> Best Regards,
> >> Yu
> >>
> >>
> >> On Fri, 29 May 2020 at 00:32, Till Rohrmann 
> wrote:
> >>
> >>> +1 (binding)
> >>>
> >>> - verified checksums and signature
> >>> - mvn clean verify passes on source release
> >>> - verified licenses
> >>> - checked pom.xml changes
> >>>
> >>> Cheers,
> >>> Till
> >>>
> >>> On Thu, May 28, 2020 at 1:05 PM Congxian Qiu 
> >>> wrote:
> >>>
>  +1 (non-binding)
> 
>  checked
>  - mvn clean verify, ok
>  - gpg & sha512, ok
>  - all pom files point to the same version, ok
>  - checked license, ok
> 
>  Best,
>  Congxian
> 
> 
>  Robert Metzger wrote on Wed, May 27, 2020 at 6:05 PM:
> 
> > +1 (binding)
> >
> > Checks:
> > - diff to flink-shaded 1.10:
> >
> >
> >>
> https://github.com/apache/flink-shaded/compare/release-10.0...release-11.0-rc1
> > - mvn clean install passes on the source archive
> > - sha of source archive is correct
> > - source archive is signed by Chesnay
> > - mvn staging repo looks reasonable
> > - flink-shaded-zookeeper 3 jar license documentation seems correct
> >
> >
> >
> > On Mon, May 25, 2020 at 7:14 PM Chesnay Schepler wrote:
> >
> >> Hi everyone,
> >> Please review and vote on the release candidate #1 for the version
>  11.0,
> >> as follows:
> >> [ ] +1, Approve the release
> >> [ ] -1, Do not approve the release (please provide specific
> >> comments)
> >>
> >> The complete staging area is available for your review, which
> >>> includes:
> >> * JIRA release notes [1],
> >> * the official Apache source release to be deployed to
> >>> dist.apache.org
> >> [2], which are signed with the key with fingerprint 11D464BA [3],
> >> * all artifacts to be deployed to the Maven Central Repository [4],
> >> * source code tag "release-11.0-rc1" [5],
> >> * website pull request listing the new release [6].
> >>
> >> The vote will be open for at least 72 hours. It is adopted by
> >>> majority
> >> approval, with at least 3 PMC affirmative votes.
> >>
> >> Thanks,
> >> Chesnay
> >>
> >> [1]
> >>
> >>
> >>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12347784
> >> [2]
>  https://dist.apache.org/repos/dist/dev/flink/flink-shaded-11.0-rc1/
> >> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> >> [4]
> >>
> >> https://repository.apache.org/content/repositories/orgapacheflink-1372/
> >> [5] https://github.com/apache/flink-shaded/tree/release-11.0-rc1
> >> [6] https://github.com/apache/flink-web/pull/340
> >>
> >>
>
>


Re: [VOTE] Release flink-shaded 11.0, release candidate #1

2020-05-29 Thread Chesnay Schepler
From my understanding the current year in the copyright header is not 
such an important thing, so I wouldn't block the release on it.


On 29/05/2020 09:11, Hequn Cheng wrote:

Sorry about my late check.
I found one blocker just now that the year in the root NOTICE file should
be updated. I have created a jira(FLINK-18031) for it.

Best,
Hequn


On Fri, May 29, 2020 at 12:47 AM Yu Li  wrote:


+1 (non-binding)

Checked issues listed in release notes: ok (*)
- Add shaded-11.0 as fixed version for FLINK-17513
- Minor: FLINK-16454 is listed in the release note but found no changes in
commit history
Checked sums and signatures: ok
Checked the maven central artifacts: ok
Built from source: ok (8u101)
Built from source (with -Dshade-sources): ok (8u101)
Checked website pull request listing the new release: ok

Best Regards,
Yu


On Fri, 29 May 2020 at 00:32, Till Rohrmann  wrote:


+1 (binding)

- verified checksums and signature
- mvn clean verify passes on source release
- verified licenses
- checked pom.xml changes

Cheers,
Till

On Thu, May 28, 2020 at 1:05 PM Congxian Qiu 
wrote:


+1 (non-binding)

checked
- mvn clean verify, ok
- gpg & sha512, ok
- all pom files point to the same version, ok
- checked license, ok

Best,
Congxian


Robert Metzger wrote on Wed, May 27, 2020 at 6:05 PM:


+1 (binding)

Checks:
- diff to flink-shaded 1.10:



https://github.com/apache/flink-shaded/compare/release-10.0...release-11.0-rc1

- mvn clean install passes on the source archive
- sha of source archive is correct
- source archive is signed by Chesnay
- mvn staging repo looks reasonable
- flink-shaded-zookeeper 3 jar license documentation seems correct



On Mon, May 25, 2020 at 7:14 PM Chesnay Schepler wrote:

Hi everyone,
Please review and vote on the release candidate #1 for the version

11.0,

as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific

comments)


The complete staging area is available for your review, which

includes:

* JIRA release notes [1],
* the official Apache source release to be deployed to

dist.apache.org

[2], which are signed with the key with fingerprint 11D464BA [3],
* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag "release-11.0-rc1" [5],
* website pull request listing the new release [6].

The vote will be open for at least 72 hours. It is adopted by

majority

approval, with at least 3 PMC affirmative votes.

Thanks,
Chesnay

[1]



https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12347784

[2]

https://dist.apache.org/repos/dist/dev/flink/flink-shaded-11.0-rc1/

[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4]


https://repository.apache.org/content/repositories/orgapacheflink-1372/

[5] https://github.com/apache/flink-shaded/tree/release-11.0-rc1
[6] https://github.com/apache/flink-web/pull/340






Re: [DISCUSS] Remove dependency shipping through nested jars during job submission.

2020-05-29 Thread Robert Metzger
Hi,
afaik, this feature was added because Hadoop MapReduce has it as well (
https://blog.cloudera.com/how-to-include-third-party-libraries-in-your-map-reduce-job/,
point 2.).

I don't remember having seen this anywhere in the wild. I believe it is a
good idea to simplify our codebase here.
If there are concerns, then we could at least add a big WARN log message in
Flink 1.11+ that this feature will be deprecated in the future.


On Wed, May 20, 2020 at 10:39 AM Kostas Kloudas  wrote:

> Hi all,
>
> I would like to bring the discussion in
> https://issues.apache.org/jira/browse/FLINK-17745 to the dev mailing
> list, just to hear the opinions of the community.
>
> In a nutshell, in the early days of Flink, users could submit their
> jobs as fat-jars that had a specific structure. More concretely, the
> user could put the dependencies of the submitted job in a lib/ folder
> within his/her jar and Flink would search within the user's jar for
> such a folder, and if this existed, it would extract the nested jars,
> ship them independently and add them to the classpath. Finally, it
> would also ship the fat-jar itself so that the user-code is available
> at the cluster (for details see [1]).
>
> This way of submission was NOT documented anywhere and it has the
> obvious shortcoming that the "nested" jars will be shipped twice. In
> addition, it makes the codebase a bit more difficult to maintain, as
> this constitutes another way of submitting stuff.
>
> Given the above, I would like to propose to remove this codepath. But
> given that there are users using the hidden feature, I would like to
> discuss 1) how many such users exist, 2) how difficult it is for them
> to "migrate" to a different way of submitting jobs, and 3) if the rest
> of the community agrees on removing it.
>
> I post this on both dev and user ML so that we have better coverage.
>
> Looking forward to a fruitful discussion,
> Kostas
>
> [1]
> https://github.com/apache/flink/blob/master/flink-clients/src/main/java/org/apache/flink/client/program/PackagedProgram.java#L222
>
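
The lib/-folder lookup described in the message above can be sketched in isolation. This is a minimal sketch, not Flink's actual implementation (see PackagedProgram [1]); the class and method names here are hypothetical, and the sample fat-jar is built in a temp file just to make the example self-contained:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;
import java.util.zip.ZipEntry;

public class NestedJarScan {

    /** Collects the names of all jars found under a top-level lib/ folder. */
    static List<String> findNestedJars(Path fatJar) throws IOException {
        List<String> nested = new ArrayList<>();
        try (JarFile jar = new JarFile(fatJar.toFile())) {
            jar.stream()
               .map(JarEntry::getName)
               .filter(name -> name.startsWith("lib/") && name.endsWith(".jar"))
               .forEach(nested::add);
        }
        return nested;
    }

    /** Builds a tiny fat-jar containing one (empty) nested dependency jar. */
    static Path buildSampleFatJar() throws IOException {
        Path fatJar = Files.createTempFile("fat", ".jar");
        try (JarOutputStream out = new JarOutputStream(Files.newOutputStream(fatJar))) {
            out.putNextEntry(new ZipEntry("lib/dep.jar"));
            out.closeEntry();
        }
        return fatJar;
    }

    public static void main(String[] args) throws IOException {
        // A submission path like the one under discussion would extract and
        // ship each entry found here, in addition to the fat-jar itself.
        System.out.println(findNestedJars(buildSampleFatJar()));
    }
}
```

Running main prints [lib/dep.jar], illustrating why the nested jars end up shipped twice: the entry is both inside the fat-jar and shipped as a separate artifact.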


Re: [VOTE] Release flink-shaded 11.0, release candidate #1

2020-05-29 Thread Chesnay Schepler
Nice catch, it appears FLINK-16454 was eagerly closed and never fixed 
for flink-shaded; FLINK-18031 has been filed to fix this.


On 28/05/2020 18:49, Yu Li wrote:

+1 (non-binding)

Checked issues listed in release notes: ok (*)
- Add shaded-11.0 as fixed version for FLINK-17513
- Minor: FLINK-16454 is listed in the release note but found no changes in
commit history
Checked sums and signatures: ok
Checked the maven central artifacts: ok
Built from source: ok (8u101)
Built from source (with -Dshade-sources): ok (8u101)
Checked website pull request listing the new release: ok

Best Regards,
Yu


On Fri, 29 May 2020 at 00:32, Till Rohrmann  wrote:


+1 (binding)

- verified checksums and signature
- mvn clean verify passes on source release
- verified licenses
- checked pom.xml changes

Cheers,
Till

On Thu, May 28, 2020 at 1:05 PM Congxian Qiu 
wrote:


+1 (non-binding)

checked
- mvn clean verify, ok
- gpg & sha512, ok
- all pom files point to the same version, ok
- checked license, ok

Best,
Congxian


Robert Metzger wrote on Wed, May 27, 2020 at 6:05 PM:


+1 (binding)

Checks:
- diff to flink-shaded 1.10:



https://github.com/apache/flink-shaded/compare/release-10.0...release-11.0-rc1

- mvn clean install passes on the source archive
- sha of source archive is correct
- source archive is signed by Chesnay
- mvn staging repo looks reasonable
- flink-shaded-zookeeper 3 jar license documentation seems correct



On Mon, May 25, 2020 at 7:14 PM Chesnay Schepler 
wrote:


Hi everyone,
Please review and vote on the release candidate #1 for the version

11.0,

as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)


The complete staging area is available for your review, which

includes:

* JIRA release notes [1],
* the official Apache source release to be deployed to

dist.apache.org

[2], which are signed with the key with fingerprint 11D464BA [3],
* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag "release-11.0-rc1" [5],
* website pull request listing the new release [6].

The vote will be open for at least 72 hours. It is adopted by

majority

approval, with at least 3 PMC affirmative votes.

Thanks,
Chesnay

[1]



https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12347784

[2]

https://dist.apache.org/repos/dist/dev/flink/flink-shaded-11.0-rc1/

[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4]


https://repository.apache.org/content/repositories/orgapacheflink-1372/

[5] https://github.com/apache/flink-shaded/tree/release-11.0-rc1
[6] https://github.com/apache/flink-web/pull/340






Re: [VOTE] Release flink-shaded 11.0, release candidate #1

2020-05-29 Thread Hequn Cheng
Sorry about my late check.
I found one blocker just now that the year in the root NOTICE file should
be updated. I have created a jira(FLINK-18031) for it.

Best,
Hequn


On Fri, May 29, 2020 at 12:47 AM Yu Li  wrote:

> +1 (non-binding)
>
> Checked issues listed in release notes: ok (*)
> - Add shaded-11.0 as fixed version for FLINK-17513
> - Minor: FLINK-16454 is listed in the release note but found no changes in
> commit history
> Checked sums and signatures: ok
> Checked the maven central artifacts: ok
> Built from source: ok (8u101)
> Built from source (with -Dshade-sources): ok (8u101)
> Checked website pull request listing the new release: ok
>
> Best Regards,
> Yu
>
>
> On Fri, 29 May 2020 at 00:32, Till Rohrmann  wrote:
>
> > +1 (binding)
> >
> > - verified checksums and signature
> > - mvn clean verify passes on source release
> > - verified licenses
> > - checked pom.xml changes
> >
> > Cheers,
> > Till
> >
> > On Thu, May 28, 2020 at 1:05 PM Congxian Qiu 
> > wrote:
> >
> > > +1 (non-binding)
> > >
> > > checked
> > > - mvn clean verify, ok
> > > - gpg & sha512, ok
> > > - all pom files point to the same version, ok
> > > - checked license, ok
> > >
> > > Best,
> > > Congxian
> > >
> > >
> > > Robert Metzger wrote on Wed, May 27, 2020 at 6:05 PM:
> > >
> > > > +1 (binding)
> > > >
> > > > Checks:
> > > > - diff to flink-shaded 1.10:
> > > >
> > > >
> > >
> >
> https://github.com/apache/flink-shaded/compare/release-10.0...release-11.0-rc1
> > > >
> > > > - mvn clean install passes on the source archive
> > > > - sha of source archive is correct
> > > > - source archive is signed by Chesnay
> > > > - mvn staging repo looks reasonable
> > > > - flink-shaded-zookeeper 3 jar license documentation seems correct
> > > >
> > > >
> > > >
> > > > On Mon, May 25, 2020 at 7:14 PM Chesnay Schepler wrote:
> > > >
> > > > > Hi everyone,
> > > > > Please review and vote on the release candidate #1 for the version
> > > 11.0,
> > > > > as follows:
> > > > > [ ] +1, Approve the release
> > > > > [ ] -1, Do not approve the release (please provide specific
> comments)
> > > > >
> > > > >
> > > > > The complete staging area is available for your review, which
> > includes:
> > > > > * JIRA release notes [1],
> > > > > * the official Apache source release to be deployed to
> > dist.apache.org
> > > > > [2], which are signed with the key with fingerprint 11D464BA [3],
> > > > > * all artifacts to be deployed to the Maven Central Repository [4],
> > > > > * source code tag "release-11.0-rc1" [5],
> > > > > * website pull request listing the new release [6].
> > > > >
> > > > > The vote will be open for at least 72 hours. It is adopted by
> > majority
> > > > > approval, with at least 3 PMC affirmative votes.
> > > > >
> > > > > Thanks,
> > > > > Chesnay
> > > > >
> > > > > [1]
> > > > >
> > > > >
> > > >
> > >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12347784
> > > > > [2]
> > > https://dist.apache.org/repos/dist/dev/flink/flink-shaded-11.0-rc1/
> > > > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > > > [4]
> > > > >
> > >
> https://repository.apache.org/content/repositories/orgapacheflink-1372/
> > > > > [5] https://github.com/apache/flink-shaded/tree/release-11.0-rc1
> > > > > [6] https://github.com/apache/flink-web/pull/340
> > > > >
> > > > >
> > > >
> > >
> >
>


[jira] [Created] (FLINK-18031) Update the copyright year in the NOTICE file in flink-shaded repo

2020-05-29 Thread Hequn Cheng (Jira)
Hequn Cheng created FLINK-18031:
---

 Summary: Update the copyright year in the NOTICE file in 
flink-shaded repo
 Key: FLINK-18031
 URL: https://issues.apache.org/jira/browse/FLINK-18031
 Project: Flink
  Issue Type: Bug
Affects Versions: shaded-10.0
Reporter: Hequn Cheng
 Fix For: shaded-11.0


The year in the root NOTICE file should be updated from `2014-2017` to 
`2014-2020`.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18030) Hive UDF doesn't accept empty string literal parameter

2020-05-29 Thread Rui Li (Jira)
Rui Li created FLINK-18030:
--

 Summary: Hive UDF doesn't accept empty string literal parameter
 Key: FLINK-18030
 URL: https://issues.apache.org/jira/browse/FLINK-18030
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive
Reporter: Rui Li
 Fix For: 1.12.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18029) Add more ITCases for Kafka with new formats

2020-05-29 Thread Danny Chen (Jira)
Danny Chen created FLINK-18029:
--

 Summary: Add more ITCases for Kafka with new formats
 Key: FLINK-18029
 URL: https://issues.apache.org/jira/browse/FLINK-18029
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Kafka
Affects Versions: 1.11.0
Reporter: Danny Chen
 Fix For: 1.11.0


- Add ITCase for Kafka read/write CSV
- Add ITCase for Kafka read/write Avro
- Add ITCase for Kafka read canal-json



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18028) E2E tests manually for Kafka 2 all kinds of other connectors

2020-05-29 Thread Danny Chen (Jira)
Danny Chen created FLINK-18028:
--

 Summary: E2E tests manually for Kafka 2 all kinds of other 
connectors
 Key: FLINK-18028
 URL: https://issues.apache.org/jira/browse/FLINK-18028
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Kafka
Affects Versions: 1.11.0
Reporter: Danny Chen
 Fix For: 1.11.0


- test Kafka 2 MySQL
- test Kafka 2 ES
- test Kafka ES temporal join
- test Kafka MySQL temporal join
- test Kafka Hbase temporal join



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18027) ROW value constructor cannot deal with complex expressions

2020-05-29 Thread Benchao Li (Jira)
Benchao Li created FLINK-18027:
--

 Summary: ROW value constructor cannot deal with complex expressions
 Key: FLINK-18027
 URL: https://issues.apache.org/jira/browse/FLINK-18027
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Reporter: Benchao Li


{code:java}
create table my_source (
my_row row
) with (...);

create table my_sink (
my_row row
) with (...);

insert into my_sink
select ROW(my_row.a, my_row.b) 
from my_source;{code}
will throw excepions:
{code:java}
Exception in thread "main" org.apache.flink.table.api.SqlParserException: SQL 
parse failed. Encountered "." at line 1, column 18.Exception in thread "main" 
org.apache.flink.table.api.SqlParserException: SQL parse failed. Encountered 
"." at line 1, column 18.Was expecting one of:    ")" ...    "," ...     at 
org.apache.flink.table.planner.calcite.CalciteParser.parse(CalciteParser.java:56)
 at 
org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:64) 
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlQuery(TableEnvironmentImpl.java:627)
 at com.bytedance.demo.KafkaTableSource.main(KafkaTableSource.java:76)Caused 
by: org.apache.calcite.sql.parser.SqlParseException: Encountered "." at line 1, 
column 18.Was expecting one of:    ")" ...    "," ...     at 
org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.convertException(FlinkSqlParserImpl.java:416)
 at 
org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.normalizeException(FlinkSqlParserImpl.java:201)
 at org.apache.calcite.sql.parser.SqlParser.handleException(SqlParser.java:148) 
at org.apache.calcite.sql.parser.SqlParser.parseQuery(SqlParser.java:163) at 
org.apache.calcite.sql.parser.SqlParser.parseStmt(SqlParser.java:188) at 
org.apache.flink.table.planner.calcite.CalciteParser.parse(CalciteParser.java:54)
 ... 3 moreCaused by: org.apache.flink.sql.parser.impl.ParseException: 
Encountered "." at line 1, column 18.Was expecting one of:    ")" ...    "," 
...     at 
org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.generateParseException(FlinkSqlParserImpl.java:36161)
 at 
org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.jj_consume_token(FlinkSqlParserImpl.java:35975)
 at 
org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.ParenthesizedSimpleIdentifierList(FlinkSqlParserImpl.java:21432)
 at 
org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.Expression3(FlinkSqlParserImpl.java:17164)
 at 
org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.Expression2b(FlinkSqlParserImpl.java:16820)
 at 
org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.Expression2(FlinkSqlParserImpl.java:16861)
 at 
org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.Expression(FlinkSqlParserImpl.java:16792)
 at 
org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.SelectExpression(FlinkSqlParserImpl.java:11091)
 at 
org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.SelectItem(FlinkSqlParserImpl.java:10293)
 at 
org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.SelectList(FlinkSqlParserImpl.java:10267)
 at 
org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.SqlSelect(FlinkSqlParserImpl.java:6943)
 at 
org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.LeafQuery(FlinkSqlParserImpl.java:658)
 at 
org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.LeafQueryOrExpr(FlinkSqlParserImpl.java:16775)
 at 
org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.QueryOrExpr(FlinkSqlParserImpl.java:16238)
 at 
org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.OrderedQueryOrExpr(FlinkSqlParserImpl.java:532)
 at 
org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.SqlStmt(FlinkSqlParserImpl.java:3761)
 at 
org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.SqlStmtEof(FlinkSqlParserImpl.java:3800)
 at 
org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.parseSqlStmtEof(FlinkSqlParserImpl.java:248)
 at org.apache.calcite.sql.parser.SqlParser.parseQuery(SqlParser.java:161) ... 
5 more
{code}
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18026) E2E tests manually for new SQL connectors and formats

2020-05-29 Thread Danny Chen (Jira)
Danny Chen created FLINK-18026:
--

 Summary: E2E tests manually for new SQL connectors and formats
 Key: FLINK-18026
 URL: https://issues.apache.org/jira/browse/FLINK-18026
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Kafka
Affects Versions: 1.11.0
Reporter: Danny Chen
 Fix For: 1.11.0


Use the SQL-CLI to test all kinds of new formats with the new Kafka source.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18025) E2E tests manually for Hive integrate streaming

2020-05-29 Thread Danny Chen (Jira)
Danny Chen created FLINK-18025:
--

 Summary: E2E tests manually for Hive integrate streaming
 Key: FLINK-18025
 URL: https://issues.apache.org/jira/browse/FLINK-18025
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Hive
Affects Versions: 1.11.0
Reporter: Danny Chen
 Fix For: 1.11.0


- Hive streaming source
- Hive streaming sink
- Hive dim join



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18024) E2E tests manually for new Hive dependency jars

2020-05-29 Thread Danny Chen (Jira)
Danny Chen created FLINK-18024:
--

 Summary: E2E tests manually for new Hive dependency jars
 Key: FLINK-18024
 URL: https://issues.apache.org/jira/browse/FLINK-18024
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Hive
Affects Versions: 1.11.0
Reporter: Danny Chen
 Fix For: 1.11.0


Test the 4 version jars.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Subscription mail

2020-05-29 Thread Marta Paes Moreira
Hi, Kartheek.

To subscribe, please send an e-mail to dev-subscr...@flink.apache.org. If
you have any questions, you can find more detailed instructions in [1].

Marta

[1]
https://flink.apache.org/community.html#how-to-subscribe-to-a-mailing-list

On Fri, 29 May 2020 at 08:38, kartheek annamaneni <
kartheekannamanen...@gmail.com> wrote:

> Hi,
>   How can I subscribe to this?
>


[jira] [Created] (FLINK-18023) E2E tests manually for new filesystem connector

2020-05-29 Thread Danny Chen (Jira)
Danny Chen created FLINK-18023:
--

 Summary: E2E tests manually for new filesystem connector
 Key: FLINK-18023
 URL: https://issues.apache.org/jira/browse/FLINK-18023
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / FileSystem
Affects Versions: 1.11.0
Reporter: Danny Chen
 Fix For: 1.11.0


- test all supported formats
- test compatibility with Hive
- test streaming sink



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Subscription mail

2020-05-29 Thread tison
Hi Kartheek,

Please send an email to dev-subscr...@flink.apache.org with any content. The robot
will guide you through subscribing to this mailing list.

See also this page
https://flink.apache.org/community.html#how-to-subscribe-to-a-mailing-list

Best,
tison.


kartheek annamaneni wrote on Fri, May 29, 2020 at 2:38 PM:

> Hi,
>   How can I subscribe to this?
>


回复:Subscription mail

2020-05-29 Thread 夏帅
send email to dev-subscr...@flink.apache.org


--
From: kartheek annamaneni
Sent: Friday, May 29, 2020 14:38
To: dev
Subject: Subscription mail

Hi,
  How can I subscribe to this?



[jira] [Created] (FLINK-18022) Add e2e test for new streaming file sink

2020-05-29 Thread Danny Chen (Jira)
Danny Chen created FLINK-18022:
--

 Summary: Add e2e test for new streaming file sink
 Key: FLINK-18022
 URL: https://issues.apache.org/jira/browse/FLINK-18022
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / FileSystem
Affects Versions: 1.11.0
Reporter: Danny Chen
 Fix For: 1.11.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Subscription mail

2020-05-29 Thread kartheek annamaneni
Hi,
  How can I subscribe to this?


[jira] [Created] (FLINK-18021) Complement tests for 1.11 SQL

2020-05-29 Thread Danny Chen (Jira)
Danny Chen created FLINK-18021:
--

 Summary: Complement tests for 1.11 SQL
 Key: FLINK-18021
 URL: https://issues.apache.org/jira/browse/FLINK-18021
 Project: Flink
  Issue Type: Task
  Components: Table SQL / API
Affects Versions: 1.11.0
Reporter: Danny Chen
 Fix For: 1.11.0


This is an umbrella issue to collect all kinds of tests (e2e and ITCases) that 
need to cover for 1.11 release.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18020) SQLClientKafkaITCase.testKafka failed on Travis

2020-05-29 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-18020:


 Summary: SQLClientKafkaITCase.testKafka failed on Travis
 Key: FLINK-18020
 URL: https://issues.apache.org/jira/browse/FLINK-18020
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka, Table SQL / Client, Tests
Affects Versions: 1.11.0, 1.12.0
Reporter: Dawid Wysakowicz


{code}
[ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 151.386 
s <<< FAILURE! - in org.apache.flink.tests.util.kafka.SQLClientKafkaITCase
[ERROR] testKafka[1: kafka-version:0.11 
kafka-sql-version:.*kafka-0.11.jar](org.apache.flink.tests.util.kafka.SQLClientKafkaITCase)
  Time elapsed: 49.811 s  <<< FAILURE!
java.lang.AssertionError: 

Expected: ["2018-03-12 08:00:00.000,Alice,This was a warning.,2,Success 
constant folding.", "2018-03-12 09:00:00.000,Bob,This was another 
warning.,1,Success constant folding.", "2018-03-12 09:00:00.000,Steve,This was 
another info.,2,Success constant folding.", "2018-03-12 09:00:00.000,Alice,This 
was a info.,1,Success constant folding."] in any order
 but: Not matched: "2018-03-12 08:00:00.000,Alice,This was a 
warning.,6,Success constant folding."
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.junit.Assert.assertThat(Assert.java:956)
at org.junit.Assert.assertThat(Assert.java:923)
at 
org.apache.flink.tests.util.kafka.SQLClientKafkaITCase.checkCsvResultFile(SQLClientKafkaITCase.java:241)
at 
org.apache.flink.tests.util.kafka.SQLClientKafkaITCase.testKafka(SQLClientKafkaITCase.java:172)
{code}

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2323&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5



--
This message was sent by Atlassian Jira
(v8.3.4#803005)