Hi everyone,
as you all know, currently the Table & SQL API is implemented in Scala.
This decision was made a long time ago when the initial code base was
created as part of a master's thesis. The community kept Scala because
of the nice language features that enable a fluent Table API like
Hi Piotr,
thanks for bringing up this discussion. I was not involved in the design
discussions at that time but I also find the logic about upserts and
retractions in multiple stages quite confusing. So in general +1 for
simplification, however, by using a RelShuttle instead of rules we might
Mon, Jun 4, 2018 at 12:57 AM, Timo Walther <twal...@apache.org> wrote:
Hi,
as you can see in the code [1], Kafka09JsonTableSource takes a
TableSchema. You can create a table schema from type information,
see [2].
Regards,
Timo
[1]
https://github.com
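The mapping Timo describes could look roughly like this (a hedged sketch against the Flink 1.4-era APIs; the topic name, field names, and the three-argument constructor shape are assumptions for illustration):

```java
// Derive a TableSchema from TypeInformation and hand it to the table source.
TypeInformation<Row> typeInfo = Types.ROW_NAMED(
        new String[] {"id", "name"},
        Types.LONG, Types.STRING);
TableSchema schema = TableSchema.fromTypeInfo(typeInfo);

Properties kafkaProps = new Properties();
kafkaProps.setProperty("bootstrap.servers", "localhost:9092");

Kafka09JsonTableSource source =
        new Kafka09JsonTableSource("my-topic", kafkaProps, schema);
```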
+1
- Checked the commits that went into this release
- Build from source
I found one minor thing. There are two "create_release_branch.sh"
scripts in the tools directory.
Regards,
Timo
Am 01.06.18 um 08:32 schrieb Tzu-Li (Gordon) Tai:
+1
- Checked signatures and hashes
- Source builds
.map(t -> t.f1).returns(GenericRecord.class);
On Wed, Jan 3, 2018 at 4:18 PM, Timo Walther <twal...@apache.org> wrote:
Hi Amit,
are you using lambdas as parameters of a Flink Function or in a member
variable? If yes, can you share a lambda example that fails?
Regar
ypes "CREATE MATERIALIZED
VIEW" and "SELECT" is not clear to me. Add a subsection and explain what is
described there?
- Implementation plan: Add which result retrieval modes will be supported
in the initial version? Which configuration will be available?
Best, Fabian
2017-12-1
@Jincheng: Yes, I think we should include the two Table API PRs.
Am 1/16/18 um 5:28 PM schrieb jincheng sun:
Thanks for starting the discussion Gordon.
I think it is better to add the commits of
`https://issues.apache.org/jira/browse/FLINK-8355`
and
Hi Aljoscha,
it would be great if we can include the first version of the SQL client
(see FLIP-24, Implementation Plan 1). I will open a PR this week. I
think we can merge this with explicit "experimental/alpha" status. It is
far away from feature completeness but will be a great tool for
5, 2018 at 2:47 PM, Timo Walther <twal...@apache.org> wrote:
Hi Aljoscha,
it would be great if we can include the first version of the SQL client
(see FLIP-24, Implementation Plan 1). I will open a PR this week. I think
we can merge this with explicit "experimental/alpha" status. I
t
and there are some pending issues related to scala api and documentation.
Thanks,
Kostas
On Feb 5, 2018, at 5:37 PM, Timo Walther <twal...@apache.org> wrote:
Hi Shuyi,
I will take a look at it again this week. I'm pretty sure it will be
part of 1.5.0.
Regards,
Timo
Am 2/5/18 um 5:25 PM
Hi Amit,
how is the memory consumption when the jobs get stuck? Is the Java GC
active? Are you using off-heap memory?
Regards,
Timo
Am 2/12/18 um 10:10 AM schrieb Amit Jain:
Hi,
We have created a batch job where we are trying to merge a set of S3
directories in TextFormat with the old snapshot
Hi Chesnay,
thanks for working on a new flink-shaded version. I just had an offline
discussion with Stephan Ewen about shading Jackson also in
flink-sql-client. The problem is that we use jackson-dataformat-yaml
there which is incompatible with our shaded version. Would it be
possible to
I also almost have a fix ready for FLINK-8451. I think it should also go
into 1.4.2.
Regards,
Timo
Am 2/22/18 um 11:29 AM schrieb Aljoscha Krettek:
The reason they didn't catch this is that the bug only occurs if users use a
custom timestamp/watermark assigner. But yes, we should be able
Hi Amit,
are you using lambdas as parameters of a Flink Function or in a member
variable? If yes, can you share a lambda example that fails?
Regards,
Timo
Am 1/3/18 um 11:41 AM schrieb Amit Jain:
Hi,
I'm writing a job to merge old data with changelogs using DataSet API where
I'm reading
Hi everyone,
as you may know a first minimum version of FLIP-24 [1] for the upcoming
Flink SQL Client has been merged to the master. We also merged
possibilities to discover and configure table sources without a single
line of code using string-based properties [2] and Java service provider
Hi Sampath,
I added you as a contributor. You can now assign issues to yourself.
Regards,
Timo
Am 03.08.18 um 11:42 schrieb Sampath Bhat:
Hello Till
Could you please add sampathBhat to the list of contributors in JIRA.
Thank you
Sampath
On Fri, Aug 3, 2018 at 1:18 PM, Till Rohrmann wrote:
Hi Amol,
the dot operation is reserved for calling functions on fields. If you
want to get a nested field in the Table API, use the
`.get("applicationId")` operation. See also [1] under "Value access
functions".
Regards,
Timo
[1]
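A minimal sketch of the difference (string-expression Table API of that era; the table and field names are taken from the user's example, the `appId` alias is illustrative):

```java
// Wrong: the dot is parsed as a function call on the field.
// Table bad = table1.select("o.applicationId");

// Right: use the value access function .get() to read a nested field.
Table result = table1.select("o.get('applicationId') as appId");
```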
On Fri, Jul 27, 2018 at 1:08 PM, Timo Walther wrote:
Hi Amol,
the dot operation is reserved for calling functions on fields. If you want
to get a nested field in the Table API
I tried to reproduce your error but everything worked fine. Which Flink
version are you using?
Inner joins are a Flink 1.5 feature.
Am 27.07.18 um 13:28 schrieb Amol S - iProgrammer:
Table master = table1.filter("ns === 'Master'").select("o as master,
'accessBasicDBObject(applicationId,o)'
Hi,
the Maven enforcer plugin produced invalid errors under certain
conditions. See also the discussion here [1]. If you start the build
again, it should succeed. This issue is fixed in 1.5.2.
Regards,
Timo
[1] https://issues.apache.org/jira/browse/FLINK-9091
Am 01.08.18 um 11:26 schrieb
Thanks for managing the release process Chesnay!
Timo
Am 31.07.18 um 10:05 schrieb Chesnay Schepler:
I'm happy to announce that we have approved this release.
There are 5 approving votes, 3 of which are binding:
* Yaz (non-binding)
* Thomas (non-binding)
* Till (binding)
* Timo (binding)
*
I agree with Chesnay. Only regressions in the release candidate should
cancel minor releases.
Timo
Am 26.07.18 um 15:02 schrieb Chesnay Schepler:
Since the regression already existed in 1.5.0 I will not cancel the vote,
as there's no benefit to canceling the bugfix release.
If a fix is
+1
- run a couple of programs from the Web UI
- run SQL Client table programs (failing and non-failing)
- run a couple end-to-end tests on my local machine
Caveat: The test_streaming_classloader.sh does not work on releases. But
this is a bug in the test, not in the release (see FLINK-9987).
Thank you for starting this discussion.
+1 for this
Regards,
Timo
Am 16.08.18 um 09:27 schrieb vino yang:
Agree! This sounds very good.
Till Rohrmann wrote on Thu, Aug 16, 2018 at 3:14 PM:
+1 for starting the release process 1.5.3 immediately. We can always
create another bug fix release afterwards. I
Hi,
I gave you contributor permissions.
Regards,
Timo
Am 10.08.18 um 13:05 schrieb 邓林:
Hi Flink community,
I want to contribute code to Flink, can anyone give me
contribution permission? My JIRA account name is lyndldeng.
Thanks.
+1
- successfully run `mvn clean verify` locally
- successfully run end-to-end tests locally (except for SQL Client
end-to-end test)
Found a bug in the class loading of SQL JAR files. This is not a blocker
but a bug that we should fix soon. As an easy workaround, users should not
use
Hi Eron,
yes, FLINK-9172 covers how the SQL Client will discover ExternalCatalog
similar to how it discovers connectors and formats today. The exact
design has to be fleshed out but the SQL Client's environment file will
declare catalogs and their properties. The SQL Client's gateway will
a problem.
Best,
wangsan
On Aug 21, 2018, at 10:16 PM, Timo Walther wrote:
Hi,
this sounds like a bug to me. Maybe the explain() method is not implemented
correctly. Can you open an issue for it in Jira?
Thanks,
Timo
Am 21.08.18 um 15:04 schrieb wangsan:
Hi all,
I noticed
Hi,
this sounds like a bug to me. Maybe the explain() method is not
implemented correctly. Can you open an issue for it in Jira?
Thanks,
Timo
Am 21.08.18 um 15:04 schrieb wangsan:
Hi all,
I noticed that the DataStreamRel#translateToPlan is non-idempotent, and that
may cause the execution
Hi Titus,
have you looked into ProcessFunction? ProcessFunction[1] gives you
access to the two important streaming primitives "time" and "state".
So in your case you can decide flexibly what you want to put into state
and when you want to set and fire a timer (for clean-up) per key.
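The pattern Timo sketches could look like this (a hedged sketch; the class, state, and event-type names are illustrative, and the one-hour clean-up interval is an assumption):

```java
// ProcessFunction applied on a KeyedStream: per-key state plus a timer.
public class LastValueWithCleanup extends ProcessFunction<Event, Event> {

    private transient ValueState<Event> lastEvent;

    @Override
    public void open(Configuration parameters) {
        lastEvent = getRuntimeContext().getState(
                new ValueStateDescriptor<>("lastEvent", Event.class));
    }

    @Override
    public void processElement(Event value, Context ctx, Collector<Event> out)
            throws Exception {
        // decide flexibly what goes into state ...
        lastEvent.update(value);
        // ... and when to set a timer (here: clean up one hour later)
        ctx.timerService().registerEventTimeTimer(ctx.timestamp() + 60 * 60 * 1000);
        out.collect(value);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<Event> out)
            throws Exception {
        lastEvent.clear(); // per-key state clean-up when the timer fires
    }
}
```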
Congratulations, Gary!
Timo
Am 07.09.18 um 16:46 schrieb Ufuk Celebi:
Great addition to the committers. Congrats, Gary!
– Ufuk
On Fri, Sep 7, 2018 at 4:45 PM, Kostas Kloudas
wrote:
Congratulations Gary! Well deserved!
Cheers,
Kostas
On Sep 7, 2018, at 4:43 PM, Fabian Hueske wrote:
+1
- built and run tests locally
- run some training exercises locally
- looked over the commits
I found one minor thing that should be mentioned. It seems that there is
one unintended commit in the release:
2455df962c76d20f1c07a57d6ed0118d1d1a067c [FLINK-7283][python] fix
+1
I also went through the list of changes and checked for suspicious code. Looks
good from my side.
Found one minor thing: We should also update our sbt/gitter templates for the
improved quickstart.
> Am 07.03.2018 um 16:24 schrieb Aljoscha Krettek :
>
> +1
>
> -
Hi everyone,
after reviewing a bunch of end-to-end tests, I'm wondering if we should
really continue implementing everything in bash scripts. Wouldn't it be
nicer to implement them in Java code that just calls the interfaces of
Flink (e.g. "./bin/flink run" or REST API)?
Here are some
eamSQL queriable state
Great, thank you.
I'll start by writing a design doc.
On Fri, Mar 2, 2018 at 6:40 PM Timo Walther <twal...@apache.org> wrote:
I gave you contributor permissions in Jira. You should be able to
assign it to yourself now.
Am 3/2/18 um 11:33 AM schrieb Renjie Liu:
Hi, Timo:
wondering if there is any better way to provide backward-compatible
support though. I played around with it, and it seems like every "protected"
field will create a private Java member and a public getter; should we add
them all and annotate them with "@Deprecated"?
--
Rong
On Thu, M
Hi everyone,
I'm currently thinking about how to implement FLINK-8606. The reason
behind it is that Java users are able to see all variables and methods
that are declared 'private[flink]' or even 'protected' in Scala. Classes
such as TableEnvironment look very messy from the outside in Java.
is add a profile or build target to each connector to also create
the fat jar.
- Storage space is no longer really a problem. Worst case we host the fat
jars in an S3 bucket.
On Mon, Feb 26, 2018 at 7:33 PM, Timo Walther <twal...@apache.org> wrote:
Hi everyone,
as you may know a first min
+1
I verified the included commits. The changes look good to me.
Am 2/20/18 um 4:32 PM schrieb Aljoscha Krettek:
+1
- verified hashes and signatures
- verified that we didn't add dependencies with incompatible licenses
On 19. Feb 2018, at 14:40, Chesnay Schepler
- I think it may actually be even simpler to maintain for us, because all
it does is add a profile or build target to each connector to also create
the fat jar.
- Storage space is no longer really a problem. Worst case we host the fat
jars in an S3 bucket.
On Mon, Feb 26, 2018 at 7:33 PM,
to support.
Regards,
Timo
Am 3/2/18 um 11:10 AM schrieb Renjie Liu:
Hi, Timo, I've been planning on the same thing and would like to contribute
that.
On Fri, Mar 2, 2018 at 6:05 PM Timo Walther <twal...@apache.org> wrote:
Hi Stefano,
yes there are plan in this direction. Actually, I a
Hi Stefano,
yes, there are plans in this direction. Actually, I already worked on such
a QueryableStateTableSink [1] in the past but never finished it because
of priority shifts. Would be great if somebody wants to contribute this
functionality :)
Regards,
Timo
[1]
...@gmail.com
On Fri, Mar 2, 2018 at 6:24 PM Timo Walther <twal...@apache.org> wrote:
Hi Renjie,
that would be great. There is already a Jira issue for it:
https://issues.apache.org/jira/browse/FLINK-6968
Feel free to assign it to yourself. You can reuse parts of my code if
you want. But
Hi Jincheng,
I was also thinking about introducing a process function for the Table
API several times. This would allow defining more complex logic (custom
windows, timers, etc.) embedded into a relational API with schema
awareness and optimization around the black box. Of course this would
Welcome to the Flink community!
I gave you contributor permissions.
Regards,
Timo
Am 13.11.18 um 14:09 schrieb Shuiqiang Chen:
Hi guys:
Could somebody give me contributor permissions? my jira username is :
csq.
Thanks.
Hi everyone,
as some of you might have noticed, in the last two releases we aimed to
unify SQL connectors and make them more modular. The first connectors
and formats have been implemented and are usable via the SQL Client and
Java/Scala/SQL APIs.
However, after writing more
properties is still useful as it can restrict a certain connector to be
only a source or a sink. For example, we usually want a Kafka topic to be
either read-only or write-only, but not both.
Shuyi
On Mon, Oct 1, 2018 at 1:53 AM Timo Walther wrote:
Hi everyone,
as some of you might have noticed, in the last
Hi Xuefu,
thanks for your proposal, it is a nice summary. Here are my thoughts to
your list:
1. I think this is also on our current mid-term roadmap. Flink lacks a
proper catalog support for a very long time. Before we can connect
catalogs we need to define how to map all the information
+1
- I built locally and checked the JAR files for suspicious things.
- I went through the change diff between 4 and 5 as well.
Could not find anything blocking this release.
Thanks,
Timo
Am 10.10.18 um 17:22 schrieb Aljoscha Krettek:
+1
I did
- verify all changes between 4.0 and 5.0
age of mapping format fields to fields with a
different name in the schema.
Besides that, all existing functionality is preserved although the syntax
changes a bit.
Best,
Fabian
Am Mo., 1. Okt. 2018 um 10:53 Uhr schrieb Timo Walther :
Hi everyone,
as some of you might have noticed, in the last two relea
+1 (binding)
- Checked all issues that went into the release (I found one JIRA issue
that has been incorrectly marked)
- Built from source (On my machine SelfJoinDeadlockITCase is failing due
to a timeout, in the IDE it works correctly. I guess my machine was just
too busy.)
- Run some
+1 (binding)
- Checked all issues that went into the release (I found two JIRA issues
that have been incorrectly marked)
- Built the source and verified it locally successfully
- Run a couple of end-to-end tests successfully
Regards,
Timo
Am 20.09.18 um 11:13 schrieb Tzu-Li (Gordon) Tai:
+1
I totally agree with Chesnay here. A bot just treats the symptoms but
not the cause.
Maybe this needs no immediate action, but we as committers should aim for
more honest communication. A lot of PRs have a reason for being stale
but instead of communicating this reason we just don't touch
Thanks for driving these efforts, Stephan! Great news that the Blink
code base will be available for everyone soon. I already got access to
it, and the added functionality and improved architecture are impressive.
There will be nice additions to Flink.
I guess the Blink code base will be
Hi Kurt,
I would not make Blink's documentation visible to users or search
engines via a website. Otherwise this would communicate that Blink is an
official release. I would suggest to put the Blink docs into `/docs` and
people can build it with `./docs/build.sh -pi` if there are
+1 for Stephan's suggestion. For example, SQL connectors have never been
part of the main distribution and nobody complained about this so far. I
think what is more important than a big dist bundle is a helpful
"Downloads" page where users can easily find available filesystems,
connectors,
I suggest we first agree on the MVP feature list and the MVP grammar. And
then we can either continue the discussion of the future improvements
here,
or create separate JIRAs for each item and discuss further in the JIRA.
What do you guys think?
Shuyi
On Fri, Dec 7, 2018 at 7:54 AM Timo Wal
Hi Dian,
I proposed a solution that should be backwards compatible and solves our
Maven dependency problems in the corresponding issue.
I'm happy about feedback.
Regards,
Timo
Am 11.12.18 um 11:23 schrieb fudian.fd:
Hi Timo,
Thanks a lot for your reply. I think the cause to this problem
+1
- manually checked the commit diff and could not spot any issues
- run mvn clean verify locally with success
- run a couple of e2e tests locally with success
Thanks,
Timo
Am 18.12.18 um 11:28 schrieb Chesnay Schepler:
FLINK-10874 and FLINK-10987 were fixed for 1.7.0 .
I will remove
Hi Jincheng,
thanks for the proposal. I totally agree with the problem of having 3
StreamTableEnvironments and 3 BatchTableEnvironments. We also identified
this problem when doing Flink trainings and introductions to the Table &
SQL API.
Actually, @Dawid and I were already discussing to
also have to achieve this for the streaming API.
Best,
Aljoscha
On 29. Nov 2018, at 16:58, Timo Walther wrote:
Thanks for the feedback, everyone!
I created a FLIP for these efforts:
https://cwiki.apache.org/confluence/display/FLINK/FLIP-28%3A+Long-term+goal+of+making+flink-table+Scala-free
I
Hi Shengyang,
I gave you contributor permissions. Please also have a look at our
contributor guidelines:
https://flink.apache.org/contribute-code.html
Regards,
Timo
Am 14.12.18 um 11:47 schrieb Shengyang Sha:
Hi,
I've been interested in flink for a long time and have worked on a few
hen we can either continue the discussion of the future improvements here,
or create separate JIRAs for each item and discuss further in the JIRA.
What do you guys think?
Shuyi
On Fri, Dec 7, 2018 at 7:54 AM Timo Walther
wrote:
Hi all,
I think we are making good progress. Thanks for a
Hi Tony,
I gave you contributor permissions. Please also have a look at our
contributor guidelines before you start working on issues:
https://flink.apache.org/contribute-code.html
Regards,
Timo
Am 14.12.18 um 03:01 schrieb 宋辛童(五藏):
Hi there,
Could anyone kindly give me the contributor
Hi everyone,
I just noticed that FLINK-9555 [1] has been accidentally merged to the
release-1.7 branch. How do we want to deal with that?
The Scala Shell is not a super crucial Flink feature. But this commit
does not only introduce a new feature and add new dependencies but also
introduces a
+1
- manually checked the commit diff and could not spot any issues
- run mvn clean verify locally with success
- run a couple of e2e tests locally with success
Thanks,
Timo
Am 19.12.18 um 18:28 schrieb Aljoscha Krettek:
+1
- signatures/hashes are ok
- verified that the log contains no
+1
- manually checked the commit diff and could not spot any issues
- run mvn clean verify locally with success
- run a couple of e2e tests locally with success
Thanks,
Timo
Am 19.12.18 um 18:36 schrieb Aljoscha Krettek:
+1
- signatures/hashes are ok
- manually checked the logs after
n one file each time,
let other people have more time to resolve the conflicts.
Best,
Kurt
On Tue, Nov 27, 2018 at 8:37 PM Timo Walther wrote:
Hi Kurt,
I understand your concerns. However, there is no concrete roadmap for
Flink 2.0 and (as Vino said) the flink-table module is being developed very actively
Hi everyone,
thanks for starting the discussion. In general, I like the idea of
making Flink SQL queries more concise.
However, I don't like to diverge from standard SQL. So far, we managed
to add a lot of operators and functionality while being standard
compliant. Personally, I don't see a
n DDL
The main differences from the two DDL docs (something maybe missed, welcome
to point out):
*(1.3) watermark*: this is the main and the most important difference; it
would be great if @Timo Walther and @Fabian Hueske could give some feedback.
(1.1) Type definition:
(a) Should VARCHAR carry a len
ot;) "
From my point of view, this DDL is invalid because the primary key
constraint already references two columns whose types are unseen.
And Xuefu pointed out an important matching problem, so let's put schema
derivation as a follow-up extension?
Timo Walther wrote on Thu, Dec 6, 2018 at 6:05 PM:
I like your `contract name` proposal,
e.g., `WITH (format.type = avro)`, the framework can recognize some
`contract name` like `format.type`, `connector.type`, etc.
Deriving the table schema from an existing schema file can also be handy,
especially for one with many table columns.
Regards
Hi,
welcome to the Flink community. If you give me your JIRA username, I can
give you contributor permissions.
Thanks,
Timo
Am 06.12.18 um 12:12 schrieb shen lei:
Hi All,
Could you give me permission to work on Flink's JIRA issues? I
am interested in Flink, and I want to find
Thanks for being the release manager Till and thanks for the great work
Flink community!
Regards,
Timo
Am 30.11.18 um 10:39 schrieb Till Rohrmann:
The Apache Flink community is very happy to announce the release of Apache
Flink 1.7.0, which is the next major release.
Apache Flink® is an
e) independent of the API and
planner modules, we could start porting these classes once the code is
split into the new module structure.
The benefits of a Scala-free flink-table-runtime would be a Scala-free
execution Jar.
Best, Fabian
Am Do., 22. Nov. 2018 um 10:54 Uhr schrieb Timo Walther <
twal...
res written in Java and so that they can coexist with old
Scala code until we gradually switch from Scala to Java.
Piotrek
On 13 Jun 2018, at 11:32, Timo Walther wrote:
Hi everyone,
as you all know, currently the Table & SQL API is implemented in
Scala.
This decision was made a long-t
Thanks for offering your help here, Xuefu. It would be great to move
these efforts forward. I agree that the DDL is somewhat related to the
unified connector API design but we can also start with the basic
functionality now and evolve the DDL during this release and next releases.
For
://issues.apache.org/jira/browse/FLINK-11001
On Fri, Nov 23, 2018 at 5:36 PM Timo Walther wrote:
Hi everyone,
thanks for the great feedback so far. I updated the document with the
input I got so far
@Fabian: I moved the porting of flink-table-runtime classes up in the
list.
@Xiaowei: Could you
nt to change program language.
Best,
Kurt
On Tue, Nov 27, 2018 at 5:57 PM Timo Walther wrote:
Hi Hequn,
thanks for your feedback. Yes, migrating the test cases is another issue
that is not represented in the document but should naturally go along
with the migration.
I agree that we should
document/d/1Y9it78yaUvbv4g572ZK_lZnZaAGjqwM_EhjdOv4yJtw/edit#
Am 07.01.19 um 13:51 schrieb Timo Walther:
Hi Eron,
thank you very much for the contributions. I merged the first little
bug fixes. For the remaining PRs I think we can review and merge them
soon. As you said, the code is agnostic to the details of the
Exter
Hi Eron,
thank you very much for the contributions. I merged the first little bug
fixes. For the remaining PRs I think we can review and merge them soon.
As you said, the code is agnostic to the details of the ExternalCatalog
interface and I don't expect bigger merge conflicts in the near
and Stephan both mentioned that `common` would fit better in
our current naming scheme.
I will open a PR for FLIP-28 step 1 shortly and am looking forward to feedback.
Thanks,
Timo
Am 11.12.18 um 09:10 schrieb Timo Walther:
Hi Aljoscha,
thanks for your feedback. I also don't like the fact that an API
Thanks for bringing up this discussion again. +1 for a bot solution.
However, we should discuss a good process for closing PRs.
In many cases, PRs are closed not because the contributor did not
respond but because no committer prioritizes the PR high enough. Or the
PR has issues that might
Hi,
this is a known problem that occurs if you have big expressions, for
example, a big CASE WHEN clause. Currently, we only split by field, not
within expressions. But this might be fixed soon as there is a PR
available [1].
As a workaround, use a UDF instead.
Regards,
Timo
[1]
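The suggested workaround could look like this (a hedged sketch; the function and column names are made up, the point is only to move the branching logic out of the generated code and into a scalar UDF):

```java
// A scalar UDF that replaces a large CASE WHEN chain in the query.
public class Classify extends ScalarFunction {
    public String eval(Integer code) {
        if (code == null) return "unknown";
        switch (code) {          // stands in for the long CASE WHEN
            case 1:  return "low";
            case 2:  return "medium";
            case 3:  return "high";
            default: return "other";
        }
    }
}

// Registration and use with the 1.x-era API:
tableEnv.registerFunction("classify", new Classify());
Table result = tableEnv.sqlQuery("SELECT classify(code) AS level FROM events");
```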
ive it now
before we convert it into a FLIP.
Thanks,
Timo
[1]
https://docs.google.com/document/d/1Y9it78yaUvbv4g572ZK_lZnZaAGjqwM_EhjdOv4yJtw/edit#
Am 07.01.19 um 13:51 schrieb Timo Walther:
Hi Eron,
thank you very much for the contributions. I merged the first little
bug fixes. For the remai
the string-based API in 1.9 or make the decision in 1.10 after some
feedback?
On Thu, 21 Mar 2019 at 21:32, Timo Walther wrote:
Thanks for your feedback Rong and Jark.
@Jark: Yes, you are right that the string-based API is used quite a
lot.
On the other side, the potential user base in
Hi everyone,
some of you might have already read FLIP-32 [1] where we've described an
approximate roadmap of how to handle the big Blink SQL contribution and
how we can make the Table & SQL API equally important to the existing
DataStream API.
As mentioned there (Advance the API and Unblock
physical representation, I think we
should aim to introduce that and keep it separated.
Best,
Dawid
On 28/03/2019 08:51, Kurt Young wrote:
Big +1 to this! I left some comments in google doc.
Best,
Kurt
On Wed, Mar 27, 2019 at 11:32 PM Timo Walther wrote:
Hi everyone,
some of you might
Hi,
welcome to the Flink community! I gave you contributor permissions.
Please also have a look at the contribution guidelines.
https://flink.apache.org/how-to-contribute.html
Thanks,
Timo
Am 28.02.19 um 08:27 schrieb hdxg1101300123:
Hi Guys,
I want to contribute to Apache Flink.
Would
Hi,
yes, I also fully agree that it is time to write down all these implicit
conventions that we've learned throughout the last years. The Flink
community is growing quite rapidly right now and we must ensure that the
same mistakes are not repeated.
Keeping the number of dependencies low is
Hi everyone,
as some of you might have noticed during the last weeks, the Flink
community grew quite a bit. A lot of people have applied for contributor
permissions and started working on issues, which is great for the growth
of Flink!
However, we've also observed that managing JIRA and
es more effort/participation from the committers' side. From my own side,
it's exciting to see our committers become more active :-)
Best,
tison.
Chesnay Schepler wrote on Wed, Feb 27, 2019 at 5:06 PM:
We currently cannot change the JIRA permissions. Have you asked
INFRA
whether it is possible to setup a Flin
I just found https://issues.apache.org/jira/browse/FLINK-11901
According to Chesnay, this is a release blocker.
Regards,
Timo
Am 13.03.19 um 09:48 schrieb Driesprong, Fokko:
-1 (non-binding)
I'd like to see both get into Flink 1.8:
https://github.com/apache/flink/pull/7547
Hi,
welcome to the Flink community! I gave you contributor permissions.
Please also have a look at the contribution guidelines.
https://flink.apache.org/how-to-contribute.html
Thanks,
Timo
Am 11.03.19 um 07:25 schrieb chenkaibit:
Hi:
I want to contribute to Apache Flink.
Would you please
Hi,
welcome to the Flink community! You should already have contributor
permissions. Please also have a look at the contribution guidelines.
https://flink.apache.org/how-to-contribute.html
Thanks,
Timo
Am 09.03.19 um 10:05 schrieb hdxg1101300123:
Hi Guys,
I want to contribute to Apache
Hi everyone,
some of you might have already noticed the JIRA issue that I opened
recently [1] about introducing a proper Java expression DSL for the
Table API. Instead of using string-based expressions, we should aim for
a unified, maintainable, programmatic Java DSL.
Some background: The
doc. and also some features that I think will be beneficial to the final
outcome. Please kindly take a look @Timo.
Many thanks,
Rong
On Mon, Mar 18, 2019 at 7:15 AM Timo Walther <twal...@apache.org> wrote:
> Hi everyone,
>
> some of you might h
Hi everyone,
I also tried to summarize the previous discussion and would add an
additional `Ecosystem` component. I would suggest:
Table SQL / API
Table SQL / Client
Table SQL / Legacy Planner
Table SQL / Planner
Table SQL / Runtime
Table SQL / Ecosystem (such as table connectors, formats,
Hi Robert,
thanks for starting this discussion. I was also about to suggest
splitting the `Table API & SQL` component because it contains already
more than 1000 issues.
My comments:
- Rename "SQL/Shell" to "SQL/Client" because the long-term goal might
not only be a CLI interface. I would