Hi Dawid,
It is difficult to give specific examples.
Sometimes users will generate Java converters through some
Java code, or generate Java classes through third-party
libraries. Of course, this is best done through properties,
but that requires additional work from users. My
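The converter-via-properties idea above could be sketched like this (a minimal, self-contained illustration; the `Converter` interface and the `format.converter.class` property name are assumptions, not anything from the thread):

```java
import java.util.Properties;

// Sketch (assumption, not from the thread): a user-supplied converter class
// is named in a property and instantiated reflectively, instead of being
// wired up with hand-written Java code.
public class ConverterFromProperties {
    // Hypothetical converter contract.
    public interface Converter {
        String convert(String input);
    }

    // Example converter a user might ship in a third-party jar.
    public static class UpperCaseConverter implements Converter {
        public String convert(String input) { return input.toUpperCase(); }
    }

    // Load the converter class named in the properties and instantiate it.
    public static Converter load(Properties props) throws Exception {
        String className = props.getProperty("format.converter.class");
        return (Converter) Class.forName(className)
                .getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("format.converter.class",
                "ConverterFromProperties$UpperCaseConverter");
        Converter c = load(props);
        System.out.println(c.convert("hello")); // prints HELLO
    }
}
```

This is the "additional work" trade-off in miniature: the property keeps user code out of the pipeline definition, but the user still has to write and package the converter class.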
Hi guys,
There is an issue about supporting Elasticsearch 7.x.[1]
Based on our validation and discussion, we found that Elasticsearch 7.x
does not guarantee API compatibility. Therefore, it is not possible to
provide a universal connector like Kafka's. It seems that we have
to provide a
Hi Tang,
Actually, in my case we implement a totally different KeyedStateBackend and
its factory, based on a data store other than Heap or RocksDB.
Also, regarding the state factories of Heap and RocksDB, you've made a quite
good point and I agree with your opinion.
Best,
Shimin
shimin yang, Mon, Sep 9, 2019
Till Rohrmann created FLINK-14008:
-
Summary: Flink Scala 2.12 binary distribution contains incorrect
NOTICE-binary file
Key: FLINK-14008
URL: https://issues.apache.org/jira/browse/FLINK-14008
Alex created FLINK-14011:
Summary: Make some fields final and initialize them during
construction in AsyncWaitOperator
Key: FLINK-14011
URL: https://issues.apache.org/jira/browse/FLINK-14011
Project: Flink
Congrats, Kostas! Well deserved.
Best,
Jingsong Lee
--
From:Kostas Kloudas
Send Time: Mon, Sep 9, 2019, 15:50
To:dev ; Yun Gao
Subject:Re: [ANNOUNCE] Kostas Kloudas joins the Flink PMC
Thanks a lot everyone for the warm welcome!
+1 (binding)
- checked signatures [SUCCESS]
- built from source without tests [SUCCESS]
- ran some tests in IDE [SUCCESS]
- started local cluster and submitted word count example [SUCCESS]
- announcement PR for website looks good! (I have left a few comments)
Best,
Jincheng
Jark Wu, Fri, Sep 6, 2019
Thanks a lot everyone for the warm welcome!
Cheers,
Kostas
On Mon, Sep 9, 2019 at 4:54 AM Yun Gao wrote:
>
> Congratulations, Kostas!
>
> Best,
> Yun
>
>
>
>
> --
> From:Becket Qin
> Send Time:2019 Sep. 9 (Mon.)
Yes, you are correct, Dominik. The committed Kafka offsets tell you what the
program has read as input from the Kafka topic. But depending on the actual
program logic this does not mean that you have output the results of
processing these input events up to this point. As you have said, there are
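The decoupling described above can be shown with a minimal, self-contained simulation (purely illustrative, not Flink or Kafka code): the committed read position runs ahead of the actual output whenever records are buffered, e.g. for a window.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Illustrative only: a reader that advances its committed input offset as
// soon as a record is consumed, while results are emitted later (buffered
// as in a window). The committed offset says nothing about produced output.
public class OffsetVsOutput {
    long committedOffset = 0;                        // read position in the "topic"
    final Queue<String> window = new ArrayDeque<>(); // buffered, not yet output
    final List<String> emitted = new ArrayList<>();  // actual output

    void consume(String record) {
        committedOffset++;   // offset committed on read ...
        window.add(record);  // ... but the result is only buffered
    }

    void fireWindow() {      // output happens later, at window firing time
        while (!window.isEmpty()) emitted.add(window.poll().toUpperCase());
    }

    public static void main(String[] args) {
        OffsetVsOutput p = new OffsetVsOutput();
        p.consume("a"); p.consume("b"); p.consume("c");
        // Three records read and committed, nothing emitted yet.
        System.out.println(p.committedOffset + " read, " + p.emitted.size() + " emitted");
        p.fireWindow();
        System.out.println(p.emitted); // [A, B, C]
    }
}
```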
TisonKun created FLINK-14010:
Summary: Dispatcher & JobManagers don't give up leadership when AM
is shut down
Key: FLINK-14010
URL: https://issues.apache.org/jira/browse/FLINK-14010
Project: Flink
Hi Tison,
thanks for starting this discussion. I think your mail includes multiple
points which are worth being treated separately (it might even make sense to
have separate discussion threads). Please correct me if I have understood
things wrongly:
1. Adding new non-ha HAServices:
Based on your
Hi devs,
I'd like to start a discussion thread on how we provide
retrieval services in non-high-availability scenarios. To clarify
terminology, the non-high-availability scenario refers to
StandaloneHaServices and EmbeddedHaServices.
***The problem***
We notice that retrieval services of
Hi Yu,
For the first question, I would say yes. I was talking about managed
state, or to be more specific, managed keyed state. And the reason why
we need the framework to manage the life cycle is that we need checkpoints
to guarantee exactly-once semantics in our customized keyed state backend.
For
Hi Jingsong,
Although it would be nice if the accumulators in the GlobalAggregateManager
were fault-tolerant, we could still take advantage of managed state to
guarantee the semantics and use the accumulators to implement a distributed
barrier or lock to solve the distributed access problem.
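The barrier idea above could be sketched as follows (a local, single-process simulation under the assumption that a shared counter plays the role of the global aggregate; this is not the actual GlobalAggregateManager API):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch (assumption, not the GlobalAggregateManager API): a shared
// accumulator used as a barrier. Each parallel task adds 1 when it reaches
// the barrier; the task whose arrival makes the aggregate equal the
// parallelism knows that everyone has arrived.
public class AccumulatorBarrier {
    final AtomicInteger arrived = new AtomicInteger(); // the "global aggregate"
    final int parallelism;

    AccumulatorBarrier(int parallelism) { this.parallelism = parallelism; }

    // Returns true only for the task whose arrival completes the barrier.
    boolean arrive() {
        return arrived.incrementAndGet() == parallelism;
    }

    public static void main(String[] args) {
        AccumulatorBarrier b = new AccumulatorBarrier(3);
        System.out.println(b.arrive()); // false: 1 of 3 arrived
        System.out.println(b.arrive()); // false: 2 of 3 arrived
        System.out.println(b.arrive()); // true: last arrival releases barrier
    }
}
```

In a real job the counter would live in the JobMaster-side aggregate rather than in task-local memory, which is exactly why its fault tolerance matters for the semantics.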
Best,
Shimin
Thank you Bowen and Becket.
What's the take from the Flink community? Shall we wait for FLIP-27 or shall
we proceed to the next steps? And what are the next steps? :-)
Thanks,
Sijie
On Thu, Sep 5, 2019 at 2:43 PM Bowen Li wrote:
> Hi,
>
> I think having a Pulsar connector in Flink can be a good
Daebeom Lee created FLINK-14012:
---
Summary: Failed to start job for consuming Secure Kafka after the
job cancel
Key: FLINK-14012
URL: https://issues.apache.org/jira/browse/FLINK-14012
Project: Flink
Hi all,
Thanks everyone for all the feedback received in the doc so far.
I saw a general agreement on using computed column to support proctime
attribute and extract timestamps.
So we will prepare a computed column FLIP and share in the dev ML soon.
Feel free to leave more comments!
Best,
Jark
Dian Fu created FLINK-14013:
---
Summary: Support Flink Python User-Defined Stateless Function for
Table
Key: FLINK-14013
URL: https://issues.apache.org/jira/browse/FLINK-14013
Project: Flink
Issue
Dian Fu created FLINK-14015:
---
Summary: Introduce PythonScalarFunctionOperator as a standalone
StreamOperator for Python ScalarFunction execution
Key: FLINK-14015
URL: https://issues.apache.org/jira/browse/FLINK-14015
Thanks for confirming.
We have a
```
public class ParquetSinkWriter implements Writer
```
that handles the serialization of the data.
We implemented it starting from:
https://medium.com/hadoop-noob/flink-parquet-writer-d127f745b519
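For context, the shape of such a writer could look like the sketch below (illustrative only; the `Writer` interface and method names are assumptions, not the exact legacy interface from BucketingSink):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Illustrative skeleton only: a sink writer that owns the serialization of
// each element. Interface and method names are assumptions, not the exact
// legacy Writer interface used by BucketingSink.
public class SketchSinkWriter {
    interface Writer<T> {
        void write(T element) throws IOException;
        byte[] flush() throws IOException;
    }

    // A toy writer that serializes records into an in-memory buffer,
    // standing in for the Parquet serialization logic.
    static class LineWriter implements Writer<String> {
        private final ByteArrayOutputStream out = new ByteArrayOutputStream();
        public void write(String element) throws IOException {
            out.write((element + "\n").getBytes(StandardCharsets.UTF_8));
        }
        public byte[] flush() { return out.toByteArray(); }
    }

    public static void main(String[] args) throws IOException {
        Writer<String> w = new LineWriter();
        w.write("row1");
        w.write("row2");
        System.out.println(new String(w.flush(), StandardCharsets.UTF_8).trim());
    }
}
```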
+1 (non-binding)
- built from source successfully (mvn clean install -DskipTests)
- checked gpg signature and hashes of the source release and binary release
packages
- All artifacts have been deployed to the maven central repository
- no new dependencies were added since 1.8.1
- run a couple of
Thanks a lot, Jincheng, for the reminder, and thanks everyone for voting.
I'm closing the vote now.
So far, the vote has received:
- 5 binding +1 votes (Jincheng, Hequn, Jark, Shaoxuan, Becket)
- 5 non-binding +1 votes (Wei, Xingbo, Terry, Yu, Jeff)
- No 0/-1 votes
There are more than 3 binding
Dian Fu created FLINK-14014:
---
Summary: Introduce PythonScalarFunctionRunner to handle the
communication with Python worker for Python ScalarFunction execution
Key: FLINK-14014
URL:
Dian Fu created FLINK-14016:
---
Summary: Introduce RelNodes FlinkLogicalPythonScalarFunctionExec
and DataStreamPythonScalarFunctionExec which are containers for Python
PythonScalarFunctions
Key: FLINK-14016
URL:
Hi Jingsong,
Good point!
1. If it doesn't matter which task performs the finalize work, then I think
the task-0 approach suggested by Jark is a very good solution.
2. If it requires the last finished task to perform the finalize work, then we
have to consider other solutions.
WRT the fault tolerance of
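Option 1 above could be sketched like this (illustrative only, not Flink API: every parallel task runs the same code, but only the task with subtask index 0 performs the finalize work, so which task finishes last is irrelevant):

```java
// Sketch of option 1 (an assumption of how it could look, not Flink code):
// all subtasks execute the same logic, and the subtask with index 0 is the
// designated one that performs the finalize step, e.g. committing or
// renaming output files.
public class TaskZeroFinalize {
    static String runTask(int subtaskIndex) {
        // ... normal per-task work would happen here ...
        if (subtaskIndex == 0) {
            return "finalized by task-0";
        }
        return "no finalize in task-" + subtaskIndex;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            System.out.println(runTask(i));
        }
    }
}
```

The catch, as noted above, is that this only works when the finalize step does not need to run strictly after every other task has finished.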
Till Rohrmann created FLINK-14009:
-
Summary: Cron jobs broken due to verifying incorrect NOTICE-binary
file
Key: FLINK-14009
URL: https://issues.apache.org/jira/browse/FLINK-14009
Project: Flink
Hi Enrico,
Sorry for the late reply. I think your understanding is correct.
The best way to do it is to write your own ParquetBulkWriter and the
corresponding factory.
Out of curiosity, I guess that in the BucketingSink you were using the
AvroKeyValueSinkWriter, right?
Cheers,
Kostas
On Fri,
Yun Tang created FLINK-14032:
Summary: Make the cache size of RocksDBPriorityQueueSetFactory
configurable
Key: FLINK-14032
URL: https://issues.apache.org/jira/browse/FLINK-14032
Project: Flink
Zhenqiu Huang created FLINK-14033:
-
Summary: Distributed caches are not registered in Yarn Per Job
Cluster Mode
Key: FLINK-14033
URL: https://issues.apache.org/jira/browse/FLINK-14033
Project: Flink
Dian Fu created FLINK-14027:
---
Summary: Add documentation for Python user-defined functions
Key: FLINK-14027
URL: https://issues.apache.org/jira/browse/FLINK-14027
Project: Flink
Issue Type:
Piyush Narang created FLINK-14029:
-
Summary: Update Flink's Mesos scheduling behavior to reject all
expired offers
Key: FLINK-14029
URL: https://issues.apache.org/jira/browse/FLINK-14029
Project:
Congrats, Kostas!
Thanks,
Biao /'bɪ.aʊ/
On Mon, 9 Sep 2019 at 16:07, JingsongLee
wrote:
> Congrats, Kostas! Well deserved.
>
> Best,
> Jingsong Lee
>
>
> --
> From:Kostas Kloudas
> Send Time: Mon, Sep 9, 2019, 15:50
> To:dev ; Yun
Dian Fu created FLINK-14023:
---
Summary: Support accessing job parameters in Python user-defined
functions
Key: FLINK-14023
URL: https://issues.apache.org/jira/browse/FLINK-14023
Project: Flink
Dian Fu created FLINK-14022:
---
Summary: Add validation check for places where Python
ScalarFunction cannot be used
Key: FLINK-14022
URL: https://issues.apache.org/jira/browse/FLINK-14022
Project: Flink
Jimmy Wong created FLINK-14031:
--
Summary: flink-examples should add blink dependency on
flink-examples-table
Key: FLINK-14031
URL: https://issues.apache.org/jira/browse/FLINK-14031
Project: Flink
Dian Fu created FLINK-14018:
---
Summary: Add Python building blocks to make sure the basic
functionality of Python ScalarFunction could work
Key: FLINK-14018
URL: https://issues.apache.org/jira/browse/FLINK-14018
Dian Fu created FLINK-14024:
---
Summary: Support user-defined metrics in Python user-defined
functions
Key: FLINK-14024
URL: https://issues.apache.org/jira/browse/FLINK-14024
Project: Flink
Issue
Dian Fu created FLINK-14026:
---
Summary: Manage the resource of Python worker properly
Key: FLINK-14026
URL: https://issues.apache.org/jira/browse/FLINK-14026
Project: Flink
Issue Type: Sub-task
One thing that I just came across: Some of these options should also have a
corresponding value for the JobManager, like JVM overhead, metaspace,
direct memory.
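JobManager counterparts to those options could look roughly like the fragment below (an assumption modeled on keys that were only added to Flink later, around 1.11; not options that existed at the time of this discussion):

```yaml
# Hypothetical JobManager-side counterparts (an assumption modeled on later
# Flink options, not part of the FLIP under vote here):
jobmanager.memory.jvm-overhead.min: 192m
jobmanager.memory.jvm-overhead.max: 1g
jobmanager.memory.jvm-metaspace.size: 256m
jobmanager.memory.off-heap.size: 128m
```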
On Fri, Sep 6, 2019 at 4:34 AM Xintong Song wrote:
> Thanks all for the votes.
> So far, we have
>
>- 4 binding +1 votes (Stephan,
Dian Fu created FLINK-14025:
---
Summary: Support to run the Python worker in docker mode
Key: FLINK-14025
URL: https://issues.apache.org/jira/browse/FLINK-14025
Project: Flink
Issue Type: Sub-task
Dian Fu created FLINK-14019:
---
Summary: Python environment and dependency management
Key: FLINK-14019
URL: https://issues.apache.org/jira/browse/FLINK-14019
Project: Flink
Issue Type: Sub-task
Leonard Xu created FLINK-14030:
--
Summary: Nonequivalent conversion happens in Table planner
Key: FLINK-14030
URL: https://issues.apache.org/jira/browse/FLINK-14030
Project: Flink
Issue Type:
Dian Fu created FLINK-14017:
---
Summary: Support to start up Python worker in process mode
Key: FLINK-14017
URL: https://issues.apache.org/jira/browse/FLINK-14017
Project: Flink
Issue Type: Sub-task
Dian Fu created FLINK-14020:
---
Summary: Use Apache Arrow as the serializer for data transmission
between Java operator and Python harness
Key: FLINK-14020
URL: https://issues.apache.org/jira/browse/FLINK-14020
Dian Fu created FLINK-14021:
---
Summary: Add rules to push down the Python ScalarFunctions
contained in the join condition of Correlate node
Key: FLINK-14021
URL: https://issues.apache.org/jira/browse/FLINK-14021
Dian Fu created FLINK-14028:
---
Summary: Support logging aggregation in Python user-defined
functions
Key: FLINK-14028
URL: https://issues.apache.org/jira/browse/FLINK-14028
Project: Flink
Issue
Niels van Kaam created FLINK-14034:
--
Summary: In FlinkKafkaProducer, KafkaTransactionState should be
made public or invoke should be made final
Key: FLINK-14034
URL:
Leonard Xu created FLINK-14036:
--
Summary: function log(f0,f1) in Table API does not support decimal
type
Key: FLINK-14036
URL: https://issues.apache.org/jira/browse/FLINK-14036
Project: Flink
Hi Sijie,
If we agree that the goal is to have Pulsar connector in 1.10, how about we
do the following:
0. Start a FLIP to add Pulsar connector to Flink main repo as it is a new
public interface to Flink main repo.
1. Start to review the Pulsar sink right away as there is no change to the
sink
liupengcheng created FLINK-14037:
Summary: Deserializing the input/output formats failed: unread
block data
Key: FLINK-14037
URL: https://issues.apache.org/jira/browse/FLINK-14037
Project: Flink
liupengcheng created FLINK-14038:
Summary: ExecutionGraph deploy failed due to akka timeout
Key: FLINK-14038
URL: https://issues.apache.org/jira/browse/FLINK-14038
Project: Flink
Issue Type:
Hi all,
During the discussion of how to support Hive built-in functions in Flink in
FLIP-57 [1], an idea of "modular built-in functions" was brought up with
examples of "Extension" in Postgres [2] and "Plugin" in Presto [3]. Thus
I'd like to kick off a discussion to see if we should adopt such an
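The "modular built-in functions" idea could be sketched as follows (names and shapes are illustrative assumptions, not a Flink API: each module contributes named functions, and resolution walks the configured module order, similar in spirit to Presto plugins):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

// Sketch of modular built-in functions (illustrative, not a Flink API):
// each module contributes a set of named functions, and lookup resolves a
// name against the modules in their configured order.
public class FunctionModules {
    interface Module {
        Map<String, Function<Integer, Integer>> functions();
    }

    // A hypothetical "core" module of built-ins.
    static class CoreModule implements Module {
        public Map<String, Function<Integer, Integer>> functions() {
            Map<String, Function<Integer, Integer>> m = new LinkedHashMap<>();
            m.put("negate", x -> -x);
            return m;
        }
    }

    // A hypothetical module contributing extra (e.g. Hive-style) functions.
    static class HiveModule implements Module {
        public Map<String, Function<Integer, Integer>> functions() {
            Map<String, Function<Integer, Integer>> m = new LinkedHashMap<>();
            m.put("double", x -> 2 * x);
            return m;
        }
    }

    // The first module in the list that provides the name wins.
    static Optional<Function<Integer, Integer>> resolve(
            List<Module> modules, String name) {
        for (Module mod : modules) {
            Function<Integer, Integer> f = mod.functions().get(name);
            if (f != null) return Optional.of(f);
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        List<Module> mods = List.of(new CoreModule(), new HiveModule());
        System.out.println(resolve(mods, "double").get().apply(21)); // 42
    }
}
```

The interesting design question this surfaces is exactly the one in the thread: whether module order alone should decide shadowing between modules that define the same function name.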
Congxian Qiu(klion26) created FLINK-14035:
-
Summary: Introduce/Change some log for snapshot to better analysis
checkpoint problem
Key: FLINK-14035
URL: https://issues.apache.org/jira/browse/FLINK-14035