sunjincheng created FLINK-15937:
---
Summary: Correct the Development Status for PyFlink
Key: FLINK-15937
URL: https://issues.apache.org/jira/browse/FLINK-15937
Project: Flink
Issue Type: Bug
Gary Yao created FLINK-15936:
Summary: TaskExecutorTest#testSlotAcceptance deadlocks
Key: FLINK-15936
URL: https://issues.apache.org/jira/browse/FLINK-15936
Project: Flink
Issue Type: Bug
-1, I just found one critical issue
https://issues.apache.org/jira/browse/FLINK-15935
This ticket means a user is unable to use watermarks in SQL if they specify
both the flink planner and the blink planner in pom.xml:
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-table-planner_${scala.binary.version}</artifactId>
  <version>${project.version}</version>
</dependency>
Jeff Zhang created FLINK-15935:
--
Summary: Unable to use watermark when depends both on flink
planner and blink planner
Key: FLINK-15935
URL: https://issues.apache.org/jira/browse/FLINK-15935
Project:
forideal created FLINK-15934:
Summary: RocksDB rocksdb_delete_helper returns false
Key: FLINK-15934
URL: https://issues.apache.org/jira/browse/FLINK-15934
Project: Flink
Issue Type: Bug
Bowen Li created FLINK-15933:
Summary: update content of how generic table schema is stored in
hive via HiveCatalog
Key: FLINK-15933
URL: https://issues.apache.org/jira/browse/FLINK-15933
Project: Flink
+1, LGTM
On Tue, Feb 4, 2020 at 11:28 PM Jark Wu wrote:
> +1 from my side.
> Thanks for driving this.
>
> Btw, could you also attach a JIRA issue with the changes described in it,
> so that users can find the issue through the mailing list in the future?
>
> Best,
> Jark
>
> On Wed, 5 Feb 2020
Jingsong Lee created FLINK-15932:
Summary: Add download url to hive dependencies
Key: FLINK-15932
URL: https://issues.apache.org/jira/browse/FLINK-15932
Project: Flink
Issue Type:
Tzu-Li (Gordon) Tai created FLINK-15931:
---
Summary: Add utility scripts / tooling for releasing Stateful
Functions
Key: FLINK-15931
URL: https://issues.apache.org/jira/browse/FLINK-15931
Tzu-Li (Gordon) Tai created FLINK-15930:
---
Summary: Setup Stateful Function's Spotless plugin to check
Javadoc violations to comply with Maven Javadoc plugin
Key: FLINK-15930
URL:
Dian Fu created FLINK-15929:
---
Summary: test_dependency failed on travis
Key: FLINK-15929
URL: https://issues.apache.org/jira/browse/FLINK-15929
Project: Flink
Issue Type: Test
Fanbin Bu created FLINK-15928:
-
Summary: Batch mode in blink planner caused
IndexOutOfBoundsException error
Key: FLINK-15928
URL: https://issues.apache.org/jira/browse/FLINK-15928
Project: Flink
I deployed commit 81cf2f9e59259389a6549b07dcf822ec63c899a4 and can confirm
that the dataformat-cbor and checkpoint alignment metric issues are
resolved.
On Wed, Feb 5, 2020 at 11:26 AM Gary Yao wrote:
> Note that there is currently an ongoing discussion about whether
> FLINK-15917
> and
I think that's a good idea.
(Opt-in for existing users, until the backward compatibility issues are
resolved.)
On Wed, Feb 5, 2020 at 11:57 AM Arvid Heise wrote:
> Couldn't we treat a missing option as legacy, but set the new scheduler as
> the default value in all newly shipped
Couldn't we treat a missing option as legacy, but set the new scheduler as
the default value in all newly shipped flink-conf.yaml?
In this way, old users get the old behavior (either implicitly or
explicitly) unless they explicitly upgrade.
New users benefit from the new scheduler.
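The fallback idea above can be sketched as follows (a minimal sketch; the key
"jobmanager.scheduler" and the values "legacy"/"ng" are assumptions about the
config naming, not a confirmed API):

```python
# Minimal sketch of the proposed fallback. The config key and values below
# are illustrative assumptions.
def resolve_scheduler(user_conf):
    # A missing option is treated as the legacy scheduler, so existing users
    # who carry over their old flink-conf.yaml keep the old behavior.
    return user_conf.get("jobmanager.scheduler", "legacy")

# Existing config carried over from an older release (option absent):
assert resolve_scheduler({}) == "legacy"
# Newly shipped flink-conf.yaml would set the new scheduler explicitly:
assert resolve_scheduler({"jobmanager.scheduler": "ng"}) == "ng"
```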
On Wed, Feb
Stephan Ewen created FLINK-15927:
Summary: TaskExecutor should treat it as a fatal error if Task
cannot be failed
Key: FLINK-15927
URL: https://issues.apache.org/jira/browse/FLINK-15927
Project:
Note that there is currently an ongoing discussion about whether FLINK-15917
and FLINK-15918 should be fixed in 1.10.0 [1].
[1]
http://mail-archives.apache.org/mod_mbox/flink-dev/202002.mbox/%3CCA%2B5xAo3D21-T5QysQg3XOdm%3DL9ipz3rMkA%3DqMzxraJRgfuyg2A%40mail.gmail.com%3E
On Wed, Feb 5, 2020 at
It is indeed unfortunate that these issues are discovered only now. I think
Thomas has a valid point, and we might be risking the trust of our users
here.
What are our options?
1. Document this behavior and how to work around it copiously in the
release notes [1]
2. Try to restore the
Hi everyone,
Please review and vote on release candidate #2 for version 1.10.0,
as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)
The complete staging area is available for your review, which includes:
* JIRA release notes [1],
@Till Rohrmann
You are completely right that the Atlas hook itself should not live inside
Flink. All other hooks for the other projects are implemented as part of
Atlas,
and the Atlas community is ready to maintain it once we have a working
version. The discussion is more about changes that we
Should we make these a blocker? I am not sure - we could also clearly state
in the release notes how to restore the old behavior, if your setup assumes
that behavior.
Release candidates for this release have been out since mid-December; it is
a bit unfortunate that these things have been raised
Hi Gary,
Thanks for the clarification!
When we upgrade to a new Flink release, we don't start with a default
flink-conf.yaml but upgrade our existing tooling and configuration.
Therefore we notice this issue as part of the upgrade to 1.10, and not when
we upgraded to 1.9.
I would expect many
As far as I know, Atlas entries can be created with a rest call. Can we not
create an abstracted Flink operator that makes the rest call on job
execution/submission?
Regards,
Taher Koitawala
On Wed, Feb 5, 2020, 10:16 PM Flavio Pompermaier
wrote:
> Hi Gyula,
> thanks for taking care of
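Taher's REST-call idea above could look roughly like this (a hypothetical
sketch: Atlas does expose a v2 REST endpoint for creating entities, but the
"flink_application" type name and its attributes are illustrative assumptions,
not an agreed model):

```python
import json

# Hypothetical sketch: register a Flink job in Atlas via its REST API.
# Atlas v2 accepts entity JSON via POST /api/atlas/v2/entity; the typeName
# and attribute names below are assumptions for illustration only.
def build_flink_job_entity(job_name, job_id):
    return {
        "entity": {
            "typeName": "flink_application",  # assumed custom Atlas type
            "attributes": {
                "qualifiedName": f"{job_name}@{job_id}",
                "name": job_name,
            },
        }
    }

payload = build_flink_job_entity("wordcount", "abc123")
body = json.dumps(payload)
# A submission hook could then POST `body` (with auth) to
# http://<atlas-host>:21000/api/atlas/v2/entity
```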
Hi Gyula,
thanks for taking care of integrating Flink with Atlas (and, in the end,
the Egeria initiative), which is IMHO the most important part of the whole
Hadoop ecosystem and which, unfortunately, was quite overlooked. I can
confirm that the integration with Atlas/Egeria is absolutely of big
interest.
Thanks for starting this discussion Chesnay. +1 for starting a new
flink-shaded release.
Cheers,
Till
On Wed, Feb 5, 2020 at 2:10 PM Chesnay Schepler wrote:
> Hello,
>
> I would like to kick off the next release of flink-shaded. The main
> features are new modules that bundle zookeeper, that
Hi Gyula,
thanks for starting this discussion. Before diving in the details of how to
implement this feature, I wanted to ask whether it is strictly required
that the Atlas integration lives within Flink or not? Could it also work if
you have tool which receives job submissions, extracts the
Kostas Kloudas created FLINK-15926:
--
Summary: Add DataStream.broadcast(StateDescriptor) to available
transformations docs
Key: FLINK-15926
URL: https://issues.apache.org/jira/browse/FLINK-15926
Chesnay Schepler created FLINK-15925:
Summary: TaskExecutors don't work out-of-the-box on Windows
Key: FLINK-15925
URL: https://issues.apache.org/jira/browse/FLINK-15925
Project: Flink
Till Rohrmann created FLINK-15924:
-
Summary: Detect and log blocking main thread operations
Key: FLINK-15924
URL: https://issues.apache.org/jira/browse/FLINK-15924
Project: Flink
Issue Type:
-1
- this is not a source release by definition, since a source release
must not contain binaries. This is a convenience binary, or possibly
even a distribution-channel-appropriate version of our existing
convenience binary. A user downloading this package should know what
they are
Some thoughts about other options we have:
- Put fat/shaded jars for the common versions into "flink-shaded" and
offer them for download on the website, similar to pre-bundled Hadoop
versions.
- Look at the Presto code (Metastore protocol) and see if we can reuse
that
- Have a setup
Jiayi Liao created FLINK-15923:
--
Summary: Remove DISCARDED in TaskAcknowledgeResult
Key: FLINK-15923
URL: https://issues.apache.org/jira/browse/FLINK-15923
Project: Flink
Issue Type:
Hello,
I would like to kick off the next release of flink-shaded. The main
features are new modules that bundle zookeeper, which will allow
us to support zk 3.4 and 3.5.
Additionally we fixed an issue where slightly older dependencies than
intended were bundled in the
> also notice that the exception causing a restart is no longer displayed
> in the UI, which is probably related?
Yes, this is also related to the new scheduler. I created FLINK-15917 [1] to
track this. Moreover, I created a ticket about the uptime metric not
resetting
[2]. Both issues already
Stephan Ewen created FLINK-15922:
Summary: Show "Warn - received late message for checkpoint" only
when checkpoint actually expired
Key: FLINK-15922
URL: https://issues.apache.org/jira/browse/FLINK-15922
sunjincheng created FLINK-15921:
---
Summary: PYTHON exited with EXIT CODE: 143 in travis-ci
Key: FLINK-15921
URL: https://issues.apache.org/jira/browse/FLINK-15921
Project: Flink
Issue Type:
Hi Zhenghua,
After removing TableSource::getTableSchema, during optimization, I could
imagine
the schema information might come from relational nodes such as TableScan.
Best,
Kurt
On Wed, Feb 5, 2020 at 8:24 PM Kurt Young wrote:
> Hi Jingsong,
>
> Yes current TableFactory is not ideal for
Hi Jingsong,
Yes, the current TableFactory is not ideal for users either. I think we
should also spend some time in 1.11 to improve the usability of
TableEnvironment when users try to read or write something. Automatic
schema inference would be one of them. Other than this, we also support
Stephan Ewen created FLINK-15920:
Summary: Show thread name in logs on CI
Key: FLINK-15920
URL: https://issues.apache.org/jira/browse/FLINK-15920
Project: Flink
Issue Type: Improvement
Yu Li created FLINK-15919:
-
Summary: MemoryManager shouldn't allow releasing more memory than
reserved
Key: FLINK-15919
URL: https://issues.apache.org/jira/browse/FLINK-15919
Project: Flink
Issue
Hi all!
We have started some preliminary work on the Flink - Atlas integration at
Cloudera. It seems that the integration will require some new hook
interfaces at the jobgraph generation and submission phases, so I figured I
will open a discussion thread with my initial ideas to get some early
+1 to remove these methods.
One concern about invocations of TableSource::getTableSchema:
By removing such methods, we can stop calling TableSource::getTableSchema
in some places (such as BatchTableEnvImpl/TableEnvironmentImpl#validateTableSource,
ConnectorCatalogTable, TableSourceQueryOperation).
Gary Yao created FLINK-15918:
Summary: Uptime Metric not reset on Job Restart
Key: FLINK-15918
URL: https://issues.apache.org/jira/browse/FLINK-15918
Project: Flink
Issue Type: Bug
Gary Yao created FLINK-15917:
Summary: Root Exception not shown in Web UI
Key: FLINK-15917
URL: https://issues.apache.org/jira/browse/FLINK-15917
Project: Flink
Issue Type: Bug
Stephan Ewen created FLINK-15916:
Summary: Remove outdated sections for Network Buffers and Async
Checkpoints from the Large State Tuning Guide
Key: FLINK-15916
URL:
Hi Jingsong,
Thanks a lot for the valuable feedback.
1. The configurations "python.fn-execution.bundle.size" and
"python.fn-execution.arrow.batch.size" are used for separate purposes and I
think they are both needed. If they are unified, the Python operator has to
wait for the execution results
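For illustration, the relationship between the two options might be sketched
like this (a toy model, not PyFlink internals: it only assumes a bundle is
flushed once it reaches bundle.size elements and is then chunked into Arrow
batches of at most arrow.batch.size rows):

```python
# Toy model of the two options: bundle_size caps how many elements are
# buffered before being sent for execution, and within a bundle the rows
# are chunked into Arrow batches of at most arrow_batch_size rows.
def chunk_bundle(rows, bundle_size, arrow_batch_size):
    bundle = rows[:bundle_size]  # flush once the bundle is full
    return [bundle[i:i + arrow_batch_size]
            for i in range(0, len(bundle), arrow_batch_size)]

batches = chunk_bundle(list(range(10)), bundle_size=8, arrow_batch_size=3)
# 8 buffered rows split into Arrow batches of at most 3 rows each
```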
Hi all,
I also agree with Stephan and Timo that the SQL Client should be a simple
"shell around the table environment". About "making this a standalone
project", I agree with Timo, and I think keeping SQL client in Flink
codebase can ensure SQL client integrity (has both embedded mode and
gateway
Hi everyone,
FLIP-39[1] rebuilds the Flink ML pipeline on top of TableAPI and introduces
a new set of Java APIs. As Python is widely used in ML areas, providing
Python ML Pipeline APIs for Flink can not only make it easier to write ML
jobs for Python users but also broaden the adoption of Flink
Timo Walther created FLINK-15915:
Summary: Bump Jackson to 2.10.1 in flink-table-planner
Key: FLINK-15915
URL: https://issues.apache.org/jira/browse/FLINK-15915
Project: Flink
Issue Type:
Hi Dian,
+1 for this, thanks driving.
Documentation looks very good. I can imagine a huge performance improvement
and better integration to other Python libraries.
A few thoughts:
- About data split: "python.fn-execution.arrow.batch.size", can we unify it
with "python.fn-execution.bundle.size"?
Hi Thomas,
The reason for the missing barrier alignment metric has been found and I
created the ticket [1] to track the progress. I guess it will be fixed
soon. Thanks for reporting this.
[1] https://issues.apache.org/jira/browse/FLINK-15914
Best,
Zhijiang
zhijiang created FLINK-15914:
Summary: Miss the barrier alignment metric for the case of two
inputs
Key: FLINK-15914
URL: https://issues.apache.org/jira/browse/FLINK-15914
Project: Flink
Issue
Hi Kurt,
+1 to remove these methods.
But one concern is that some of the current TableSource/TableSink may not
be ready, such as the JDBCUpsertTableSink, which accepts a JDBCDialect, but
through the TableFactory, there is no way to pass in the JDBCDialect at
present. But I also believe we have
Thanks for driving this Kurt.
Because we already have DDL and Descriptor as alternatives to these
deprecated methods, removing them would reduce ambiguity and make the
near-future work easier.
As we discussed offline, although some of the connectors may still have
attributes that
+1, thanks for the efforts.
On Wed, Feb 5, 2020 at 4:00 PM Jingsong Li wrote:
> Hi all,
>
> As Jark suggested in VOTE thread.
> JIRA created: https://issues.apache.org/jira/browse/FLINK-15912
>
> Best,
> Jingsong Lee
>
> On Wed, Feb 5, 2020 at 10:57 AM Jingsong Li
> wrote:
>
> > Hi Timo,
> >
>
Hi Wei,
Thanks for your vote and I appreciate that you kindly help to take the
ticket.
I've assigned the JIRAs to you!
Best,
Jincheng
Wei Zhong wrote on Wed, Feb 5, 2020 at 3:55 PM:
> Hi,
>
> Thanks for driving this, Jincheng.
>
> +1 (non-binding)
>
> - Verified signatures and checksums.
> - `pip install
Huang Xingbo created FLINK-15913:
Summary: Add Python Table Function Runner And Operator
Key: FLINK-15913
URL: https://issues.apache.org/jira/browse/FLINK-15913
Project: Flink
Issue Type:
Hi all,
As Jark suggested in VOTE thread.
JIRA created: https://issues.apache.org/jira/browse/FLINK-15912
Best,
Jingsong Lee
On Wed, Feb 5, 2020 at 10:57 AM Jingsong Li wrote:
> Hi Timo,
>
> Good catch!
>
> I really love the idea 2, a full Flink config looks very good to me.
>
> Try to