Congrats, Zhu Zhu!
Paul Lam wrote on Sat, Dec 14, 2019 at 10:29 AM:
Congrats Zhu Zhu!
Best,
Paul Lam
Kurt Young wrote on Sat, Dec 14, 2019 at 10:22 AM:
Congratulations Zhu Zhu!
Best,
Kurt
On Sat, Dec 14, 2019 at 10:04 AM jincheng sun
wrote:
Congrats ZhuZhu and welcome on board!
Best,
Jincheng
Jark Wu wrote on Sat, Dec 14, 2019 at 9:55 AM:
Congratulations, Zhu Zhu!
Best,
Jark
On Sat, 14 Dec 2019 at 08:20, Yangze Guo wrote:
Congrats, ZhuZhu!
Bowen Li wrote on Sat, Dec 14, 2019 at 5:37 AM:
> Congrats!
Bowen Li created FLINK-15259:
Summary: HiveInspector.toInspectors() should convert Flink
constant to Hive constant
Key: FLINK-15259
URL: https://issues.apache.org/jira/browse/FLINK-15259
Project: Flink
Bowen Li created FLINK-15258:
Summary: HiveModuleFactory doesn't take hive-version
Key: FLINK-15258
URL: https://issues.apache.org/jira/browse/FLINK-15258
Project: Flink
Issue Type: Bug
Congrats!
On Fri, Dec 13, 2019 at 10:42 AM Xuefu Z wrote:
> Congratulations, Zhu Zhu!
Bowen Li created FLINK-15257:
Summary: convert HiveCatalogITCase.testCsvTableViaAPI() to use
blink planner
Key: FLINK-15257
URL: https://issues.apache.org/jira/browse/FLINK-15257
Project: Flink
Bowen Li created FLINK-15256:
Summary: unable to drop table in HiveCatalogITCase
Key: FLINK-15256
URL: https://issues.apache.org/jira/browse/FLINK-15256
Project: Flink
Issue Type: Bug
Bowen Li created FLINK-15255:
Summary: document how to create Hive table from java API and DDL
Key: FLINK-15255
URL: https://issues.apache.org/jira/browse/FLINK-15255
Project: Flink
Issue Type:
Bowen Li created FLINK-15254:
Summary: hive module cannot be named "hive"
Key: FLINK-15254
URL: https://issues.apache.org/jira/browse/FLINK-15254
Project: Flink
Issue Type: Test
Hi there,
We have seen growing interest in using large windows and interval join operations.
What is the recommended way of handling these use cases (e.g., DeltaLake in Spark)?
After some benchmarking, we found that performance still seems to be a bottleneck
in supporting those use cases.
How is performance
I was going to suggest the same thing as Seth. So yes, I’m against having
Flink distributions that contain Hive, but in favor of convenience downloads
as we have for Hadoop.
Best,
Aljoscha
> On 13. Dec 2019, at 18:04, Seth Wiesman wrote:
>
> I'm also -1 on separate builds.
Congratulations, Zhu Zhu!
On Fri, Dec 13, 2019 at 10:37 AM Peter Huang
wrote:
> Congratulations!:)
Thanks all for the healthy discussion. I'd just like to point out a slight
difference between the standard and standard compatibility. Most DB vendors
mean the latter when they claim to follow the SQL standard. However, that
doesn't mean they don't have any syntax beyond the standard grammar.
Maximilian Michels created FLINK-15253:
--
Summary: Accumulators are not checkpointed
Key: FLINK-15253
URL: https://issues.apache.org/jira/browse/FLINK-15253
Project: Flink
Issue Type:
Maximilian Michels created FLINK-15252:
--
Summary: Heartbeat with large accumulator payload may cause
unstable clusters
Key: FLINK-15252
URL: https://issues.apache.org/jira/browse/FLINK-15252
Congratulations!:)
On Fri, Dec 13, 2019 at 9:45 AM Piotr Nowojski wrote:
Congratulations! :)
> On 13 Dec 2019, at 18:05, Fabian Hueske wrote:
Congrats Zhu Zhu and welcome on board!
Best, Fabian
On Fri, Dec 13, 2019 at 5:51 PM, Till Rohrmann <trohrm...@apache.org> wrote:
> Hi everyone,
>
> I'm very happy to announce that Zhu Zhu accepted the offer of the Flink PMC
> to become a committer of the Flink project.
>
> Zhu Zhu has been
I'm also -1 on separate builds.
What about publishing convenience jars that contain the dependencies for
each version? For example, there could be a flink-hive-1.2.1-uber.jar that
users could just add to their lib folder that contains all the necessary
dependencies to connect to that hive
Hi everyone,
I'm very happy to announce that Zhu Zhu accepted the offer of the Flink PMC
to become a committer of the Flink project.
Zhu Zhu has been an active community member for more than a year now. Zhu
Zhu played an essential role in the scheduler refactoring, helped
implementing fine
Aljoscha Krettek created FLINK-15251:
Summary: Fabric8FlinkKubeClient doesn't work if ingress has
hostname but no IP
Key: FLINK-15251
URL: https://issues.apache.org/jira/browse/FLINK-15251
Thanks for your feedback.
I will then go for option B.
On Fri, Dec 13, 2019 at 2:51 PM Till Rohrmann wrote:
> Thanks for starting this discussion Robert.
>
> I can see benefits for both options as already mentioned in this thread.
> However, given that we already have the profile splits and
I'm generally not opposed to convenience binaries, if a huge number of
people would benefit from them, and the overhead for the Flink project is
low. I did not see a huge demand for such binaries yet (neither for the
Flink + Hive integration). Looking at Apache Spark, they are also only
offering
This discussion has resulted in the following PR:
https://github.com/apache/flink/pull/10559
On Tue, Dec 10, 2019 at 10:14 PM Bowen Li wrote:
> +1 to drop vendor related docs. Links to vendors’ webpages should be enough
>
> On Tue, Dec 10, 2019 at 08:15 Seth Wiesman wrote:
>
> > @uce Agreed.
-1
We shouldn't need to deploy additional binaries to have a feature be
remotely usable.
This usually points to something else being done incorrectly.
If it is indeed such a hassle to set up Hive on Flink, then my conclusion
would be that either
a) the documentation needs to be improved
b)
Thanks for starting this discussion Robert.
I can see benefits for both options as already mentioned in this thread.
However, given that we already have the profile splits and that it would
considerably decrease feedback for developers on their personal Azure
accounts, I'd be in favour of option
It’s a tough question. On the one hand I like less complexity in the build
system. But one of the most important things for developers is fast iteration
cycles.
So I would prefer the solution that keeps the iteration time low.
Best,
Aljoscha
> On 13. Dec 2019, at 14:41, Chesnay Schepler
It depends on how you define "split"; if you split by module (as we do
currently) you have the same complexity as we have right now:
caching of artifacts and brittle definitions of splits.
But there are other ways to split builds, for example into unit and
integration tests; could also add
Thanks a lot Hequn for being our release manager and to the community for
making this release happen :-)
Cheers,
Till
On Thu, Dec 12, 2019 at 9:02 AM Zhu Zhu wrote:
> Thanks Hequn for driving the release and everyone who makes this release
> possible!
>
> Thanks,
> Zhu Zhu
>
> Wei Zhong
Leonard Xu created FLINK-15250:
--
Summary: Docs of Table Connect to External Systems are outdated and
need fixing
Key: FLINK-15250
URL: https://issues.apache.org/jira/browse/FLINK-15250
Project: Flink
Chongchen Chen created FLINK-15249:
--
Summary: Improve PipelinedRegions calculation with Union Set
Key: FLINK-15249
URL: https://issues.apache.org/jira/browse/FLINK-15249
Project: Flink
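FLINK-15249 proposes computing pipelined regions with a union set (disjoint-set / union-find) structure. As a rough illustration of the data structure the ticket names, not code from the Flink scheduler, a minimal union-find with path compression might look like this:

```java
/** Minimal disjoint-set (union-find) sketch; vertex IDs are dense ints.
 *  Illustrative only; not the Flink scheduler's implementation. */
public class UnionFind {
    private final int[] parent;

    public UnionFind(int n) {
        parent = new int[n];
        for (int i = 0; i < n; i++) {
            parent[i] = i; // every vertex starts in its own region
        }
    }

    /** Returns the representative of x's set, halving the path as it goes. */
    public int find(int x) {
        while (parent[x] != x) {
            parent[x] = parent[parent[x]]; // path halving
            x = parent[x];
        }
        return x;
    }

    /** Merges the sets of a and b, e.g. for a pipelined edge between them. */
    public void union(int a, int b) {
        parent[find(a)] = find(b);
    }
}
```

Merging along every pipelined edge and then grouping vertices by `find(...)` yields the regions in near-linear time, which is presumably the improvement the ticket is after.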
Another proposal that was brought up was to provide a script for
generating an SSL certificate with the distribution.
On 12/12/2019 17:45, Robert Metzger wrote:
Hi all,
There was recently a private report to the Flink PMC, as well as a public
one [1], about Flink's ability to execute arbitrary
Wei Zhong created FLINK-15248:
-
Summary: FileUtils#compressDirectory behaves incorrectly when processing
relative directory paths
Key: FLINK-15248
URL: https://issues.apache.org/jira/browse/FLINK-15248
Project:
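The ticket summary doesn't include details, but a common cause of this kind of bug is computing archive entry names by stripping a raw path prefix, which breaks for relative inputs. A hedged sketch of the usual fix (normalize to an absolute path first, then `relativize`), using a hypothetical helper class rather than Flink's actual `FileUtils` API:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

/** Illustrative directory-zipping helper (not Flink's FileUtils). */
public class CompressDir {
    /** Normalizing to an absolute path first keeps entry names correct
     *  even when 'dir' is passed as a relative path like "logs/../data". */
    public static void zipDirectory(Path dir, Path zipFile) throws IOException {
        Path root = dir.toAbsolutePath().normalize();
        List<Path> files;
        try (Stream<Path> walk = Files.walk(root)) {
            files = walk.filter(Files::isRegularFile).collect(Collectors.toList());
        }
        try (ZipOutputStream out = new ZipOutputStream(Files.newOutputStream(zipFile))) {
            for (Path p : files) {
                // relativize against the normalized root, not a raw string prefix
                out.putNextEntry(new ZipEntry(root.relativize(p).toString()));
                Files.copy(p, out);
                out.closeEntry();
            }
        }
    }
}
```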
Gary Yao created FLINK-15247:
Summary: Closing (Testing)MiniCluster may cause
ConcurrentModificationException
Key: FLINK-15247
URL: https://issues.apache.org/jira/browse/FLINK-15247
Project: Flink
xiaojin.wy created FLINK-15246:
--
Summary: Query result schema: [EXPR$0: TIMESTAMP(6) NOT NULL]
not equal to TableSink schema:[EXPR$0: TIMESTAMP(3)]
Key: FLINK-15246
URL:
I can confirm that the Docker images are available [1]. Thanks, Patrick!
Looking forward to your ideas to integrate the Docker builds into the
release process. I'm happy to support you on this effort.
– Ufuk
[1] $ docker pull flink:1.8.3
1.8.3: Pulling from library/flink
844c33c7e6ea: Pull
Rui Li created FLINK-15245:
--
Summary: Flink running in one cluster cannot write data to Hive
tables in another cluster
Key: FLINK-15245
URL: https://issues.apache.org/jira/browse/FLINK-15245
Project: Flink
Hi Bowen,
Thanks for driving this.
+1 for this proposal.
Due to our multi-version support, users are required to rely on
different dependencies, which does break the "out of the box" experience.
Now that the client has switched to child-first classloader resolution
by default, it puts forward higher
Hi Timo,
Thanks for your feedback.
The reason for `The DDL can like this (With hive dialect)` is:
The syntax for creating partition tables is controversial, so we think we
should put it aside for the time being to make it invisible to users. Since
we implemented this syntax in 1.9, we decided to
Hi Bowen~
Thanks for driving this. I tried using the SQL client with the Hive connector
about two weeks ago; from my experience, it's painful to set up the environment.
+1 for this proposal.
Best,
Terry Wang
> On Dec 13, 2019, at 16:44, Bowen Li wrote:
>
> Hi all,
>
> I want to propose to have a
+1, this is definitely necessary for a better user experience. Setting up the
environment is always painful for many big data tools.
Bowen Li wrote on Fri, Dec 13, 2019 at 5:02 PM:
cc user ML in case anyone wants to chime in
On Fri, Dec 13, 2019 at 00:44 Bowen Li wrote:
> Hi all,
>
> I want to propose to have a couple separate Flink distributions with Hive
Wei Zhong created FLINK-15244:
-
Summary: FileUtils#deleteDirectoryQuietly will delete files in the
symbolic link which point to a directory
Key: FLINK-15244
URL: https://issues.apache.org/jira/browse/FLINK-15244
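The usual fix pattern for this class of bug is to delete without following links: a symlink to a directory should be removed as a link, leaving its target intact. A minimal sketch using `java.nio.file` (a hypothetical helper, not Flink's actual `FileUtils` code); note that `Files.walk` does not follow symbolic links unless `FOLLOW_LINKS` is passed, which is exactly the behavior wanted here:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.LinkOption;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

/** Illustrative symlink-safe recursive delete (not Flink's FileUtils). */
public class SafeDelete {
    public static void deleteDirectoryQuietly(Path root) {
        if (!Files.exists(root, LinkOption.NOFOLLOW_LINKS)) {
            return;
        }
        // Files.walk does NOT follow symlinks by default, so a link to a
        // directory is visited as a single entry and only the link is deleted.
        try (Stream<Path> walk = Files.walk(root)) {
            walk.sorted(Comparator.reverseOrder()) // children before parents
                .forEach(p -> {
                    try {
                        Files.deleteIfExists(p);
                    } catch (IOException ignored) {
                        // "quietly": swallow per-file failures
                    }
                });
        } catch (IOException ignored) {
            // "quietly": swallow traversal failures
        }
    }
}
```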
Jark Wu created FLINK-15243:
---
Summary: Add documentation about how to set line feed as delimiter
for csv format
Key: FLINK-15243
URL: https://issues.apache.org/jira/browse/FLINK-15243
Project: Flink
Hi everyone,
sorry, I was not aware that FLIP-63 already lists a lot of additional
SQL grammar. It was accepted through an official voting process so I
guess we can adopt the listed grammar for Flink SQL.
The only thing that confuses me is the mentioning of `The DDL can like
this (With hive
Hi all,
I want to propose to have a couple separate Flink distributions with Hive
dependencies on specific Hive versions (2.3.4 and 1.2.1). The distributions
will be provided to users on Flink download page [1].
A few reasons to do this:
1) Flink-Hive integration is important to many many Flink
Terry Wang created FLINK-15242:
--
Summary: Add doc to introduce ddls or dmls supported by sql cli
Key: FLINK-15242
URL: https://issues.apache.org/jira/browse/FLINK-15242
Project: Flink
Issue
Yangze Guo created FLINK-15241:
--
Summary: Revert the unexpected change for the configuration of
Mesos CPU cores
Key: FLINK-15241
URL: https://issues.apache.org/jira/browse/FLINK-15241
Project: Flink