Jark Wu created FLINK-14610:
---
Summary: Add documentation for how to use watermark syntax in DDL
Key: FLINK-14610
URL: https://issues.apache.org/jira/browse/FLINK-14610
Project: Flink
Issue Type:
Danny Chen created FLINK-14609:
--
Summary: Add doc for Flink SQL computed columns
Key: FLINK-14609
URL: https://issues.apache.org/jira/browse/FLINK-14609
Project: Flink
Issue Type: Sub-task
Kurt Young created FLINK-14608:
--
Summary: avoid using Java Streams in JsonRowDeserializationSchema
Key: FLINK-14608
URL: https://issues.apache.org/jira/browse/FLINK-14608
Project: Flink
Issue
Hello,
I am currently learning Flink. I recently ran into a problem with Flink during
disaster recovery testing. I tried to find an answer on the official website and
blog but failed, so I am turning to the community for help.
The current situation is: I have two servers, each with one slot. My application
has
cc @Fabian here, thought you might be interested in reviewing this.
Best,
Kurt
On Thu, Oct 31, 2019 at 1:39 PM Kurt Young wrote:
> Thanks Terry for bringing this up. TableEnv's interface is really critical
> not only
> to users, but also for components built upon it like SQL CLI. Your
>
Hi Dominik,
just to add to Dawid's explanation: to have proper schema evolution on
Avro data, Flink needs to know the schema with which the data was written. For state
that means that we store the schema used once for all state records
in the state file, since they all belong to the same schema
Zhu Zhu created FLINK-14607:
---
Summary: SharedSlot cannot fulfill pending slot requests before
it's totally released
Key: FLINK-14607
URL: https://issues.apache.org/jira/browse/FLINK-14607
Project: Flink
Zhu Zhu created FLINK-14606:
---
Summary: Simplify params of Execution#processFail
Key: FLINK-14606
URL: https://issues.apache.org/jira/browse/FLINK-14606
Project: Flink
Issue Type: Sub-task
Rui Li created FLINK-14605:
--
Summary: Use Hive-1.1.0 as the profile to test against 1.1.x
Key: FLINK-14605
URL: https://issues.apache.org/jira/browse/FLINK-14605
Project: Flink
Issue Type: Test
Wei Zhong created FLINK-14604:
-
Summary: Bump commons-cli to 1.4
Key: FLINK-14604
URL: https://issues.apache.org/jira/browse/FLINK-14604
Project: Flink
Issue Type: Improvement
Hi yanjun,
I have just checked the CheckStyle-IDEA plugin (latest version 5.33.1) [1].
8.14 is available in my environment (macOS).
I have no idea why it's unavailable in your IDE. Here are just some guesses.
1. There are several similar style-checking plugins. Have you installed the
correct one?
Hi everyone,
FLIP-58 [1] adds support for Python user-defined stateless functions
in the Python Table API. It will launch a separate Python process for Python
user-defined function execution. The resources used by the Python process
should be managed properly by Flink’s resource
Kurt Young created FLINK-14603:
--
Summary:
NetworkBufferPoolTest.testBlockingRequestFromMultiLocalBufferPool timeout in
travis
Key: FLINK-14603
URL: https://issues.apache.org/jira/browse/FLINK-14603
vinoyang created FLINK-14602:
Summary: Degenerate the current ConcurrentHashMap type of tasks to
a normal HashMap type.
Key: FLINK-14602
URL: https://issues.apache.org/jira/browse/FLINK-14602
Project:
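For context on FLINK-14602, the general pattern behind such a change (all names below are illustrative, not Flink's actual fields) is that a ConcurrentHashMap only pays off when the map is touched by multiple threads; if all access is confined to a single thread, a plain HashMap gives the same behavior without per-operation synchronization overhead. A minimal sketch:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: if "tasks" is only ever read and mutated from a
// single thread (e.g. a component's main-thread executor), a plain
// HashMap behaves the same as a ConcurrentHashMap but avoids the
// synchronization cost of each put/get.
class TaskRegistry {
    // Was: private final Map<Integer, String> tasks = new ConcurrentHashMap<>();
    private final Map<Integer, String> tasks = new HashMap<>();

    void add(int id, String name) {
        tasks.put(id, name);
    }

    String get(int id) {
        return tasks.get(id);
    }

    int size() {
        return tasks.size();
    }
}
```

The trade-off only holds while the single-threaded access assumption does; if a second thread is ever introduced, the plain HashMap becomes unsafe.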
https://ci.apache.org/projects/flink/flink-docs-release-1.9/ops/state/checkpoints.html
At 2019-11-05 09:37:47, "Simon Su" wrote:
>Hi All
>
>
>Does current Flink support setting checkpoint properties while using Flink SQL?
>For example, statebackend choices, checkpoint interval and so on ...
>
Hi Simon,
Absolutely, yes. Before using Flink SQL, you need to initialize a
StreamExecutionEnvironment instance [1], then call
StreamExecutionEnvironment#setStateBackend
or StreamExecutionEnvironment#enableCheckpointing to specify the
settings you want.
[1]:
Hi Simon
If you are using the Table API, you can set the state backend via the environment,
e.g. `env.setStateBackend()`.
If you just launch a cluster with the SQL client, you can configure the state backend
and checkpoint options [1] within `flink-conf.yaml` before launching the
cluster.
[1]
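For the `flink-conf.yaml` route, the relevant entries look roughly like the following (a sketch only; the directories are placeholders and the set of supported keys depends on the Flink version in use):

```yaml
# Sketch of checkpoint-related settings in flink-conf.yaml.
# Keys and supported values vary by Flink version.
state.backend: rocksdb
state.checkpoints.dir: hdfs:///flink/checkpoints   # placeholder path
state.savepoints.dir: hdfs:///flink/savepoints     # placeholder path
```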
Hi All
Does current Flink support setting checkpoint properties while using Flink SQL?
For example, statebackend choices, checkpoint interval and so on ...
Thanks,
Simon
Thanks Yu for bringing up this topic.
+1 for this proposal. Glad to have e2e performance testing.
It seems this proposal is split into several stages. Is there a more
detailed plan?
Thanks,
Biao /'bɪ.aʊ/
On Mon, 4 Nov 2019 at 19:54, Congxian Qiu wrote:
> +1 for this idea.
>
>
Hi Thomas,
Event time alignment is absolutely one of the important considerations in
FLIP-27. That said, we are not implementing it in FLIP-27, but just making
sure such a feature can be easily added in the future. The design was to make
the communication between SplitEnumerator and SourceReader
Hunter Jackson created FLINK-14601:
--
Summary: CLI documentation for list is missing '-a'
Key: FLINK-14601
URL: https://issues.apache.org/jira/browse/FLINK-14601
Project: Flink
Issue Type:
Hey Dawid,
Thanks a lot. I had indeed missed that this is actually about
State, not the Deserialization itself.
This seems clear and consistent now.
Thanks again,
Best Regards,
Dom.
On Mon, Nov 4, 2019 at 13:18 Dawid Wysakowicz
wrote:
> Hi Dominik,
>
> I am not sure which
vinoyang created FLINK-14600:
Summary: Degenerate the current AtomicInteger type of
verticesFinished to a normal int type.
Key: FLINK-14600
URL: https://issues.apache.org/jira/browse/FLINK-14600
Project:
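The idea behind FLINK-14600 follows the same reasoning as the HashMap change in FLINK-14602: an AtomicInteger is only needed when the counter is updated from multiple threads; with single-threaded access, a plain int field suffices. A minimal sketch with illustrative names (not Flink's actual code):

```java
// Hypothetical sketch: if "verticesFinished" is only ever incremented from
// a single thread, a plain int field behaves the same as an AtomicInteger
// without the cost of atomic compare-and-swap operations.
class FinishTracker {
    // Was: private final AtomicInteger verticesFinished = new AtomicInteger();
    private int verticesFinished;
    private final int totalVertices;

    FinishTracker(int totalVertices) {
        this.totalVertices = totalVertices;
    }

    // Records one finished vertex; returns true once all have finished.
    boolean vertexFinished() {
        verticesFinished++;
        return verticesFinished == totalVertices;
    }
}
```

As with the map case, the simplification is only valid while the single-threaded invariant is enforced, e.g. by running all updates on one executor.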
Hi Becket,
Thanks for the reply, it is good to know that there is activity on FLIP-27.
A while ago I was wondering if event time alignment is on the radar [1];
could you please clarify that?
There is a parallel discussion of adding it to the existing Kafka consumer
[2], could you please take a
Hi Steven and Thomas,
Sorry about missing the update of FLIP-27.
I am working on the implementation of FLIP-27 at this point. It is about
70% done. Right now I am integrating the source coordinator to the job
master. Hopefully I can get the basics of Kafka connector work from end to
end by this
Hi Xiao Dao,
Is it possible to ingest into Druid with an exactly-once guarantee?
Thanks,
Qi
On Mon, Nov 4, 2019 at 4:18 PM xiao dao wrote:
> Dear Flink Community!
> I would like to open the discussion of contributing the incubator-druid Flink
> connector to Flink.
>
> ## A brief introduction to
Hi Dominik,
I am not sure which documentation you refer to when saying: "According to
the docs the schema resolution is compatible with the Avro docs", but I
assume it is this one [1]. If that is the case, then the
AvroDeserializationSchema plays no role in this process. That page
describes evolution
Hi Diogo,
There is also a Scala version about AllWindowTranslationTest[1] which
contains the example code snippet.
[1]:
https://github.com/apache/flink/blob/master/flink-streaming-scala/src/test/scala/org/apache/flink/streaming/api/scala/AllWindowTranslationTest.scala#L1644
Best,
Vino
Diogo
+1 for this idea.
Currently, we have the micro benchmarks for Flink, which can help us find
regressions. I think the e2e job performance testing can also help
us cover more scenarios.
Best,
Congxian
Jingsong Li wrote on Mon, Nov 4, 2019 at 5:37 PM:
> +1 for the idea. Thanks Yu for driving this.
Even though your answer was helpful, I am testing my Flink pipeline in Scala,
not in Java.
Is there a Scala example you can provide me?
Thanks in advance
Diogo Araújo | Rockstar Developer
diogo.ara...@criticaltechworks.com
+351 912882824
Rua do Campo Alegre, nº 17, piso 0 | 4150-177 Porto
Zhenghua Gao created FLINK-14599:
Summary: Support precision of TimestampType
Key: FLINK-14599
URL: https://issues.apache.org/jira/browse/FLINK-14599
Project: Flink
Issue Type: Sub-task
Hi
I met the same problem before. After some digging, I found that IDEA
detects the JDK version
and chooses whether to use the JDK 11 options to run the Flink Maven build.
If you are in a JDK 11 environment, then
it will add the --add-exports options when building with Maven in IDEA.
For my case, I was
Hey,
I have a question regarding Avro types and schema evolution. According to
the docs, the schema resolution is compatible with the Avro docs [1].
But I have done some testing. For example, I created a record, wrote
it to Kafka, and then changed the order of the fields in the schema and tried to
Try to re-import that Maven project. This should resolve the issue.
Cheers,
Till
On Mon, Nov 4, 2019 at 10:34 AM 刘建刚 wrote:
> Hi, I am using Flink 1.9 in IDEA. But when I run a unit test in IDEA,
> it reports the following error: "Error:java: invalid flag:
>
+1 for the idea. Thanks Yu for driving this.
Just curious: can we collect metrics about job scheduling and
task launch? The speed of this part is also important.
We could add tests to watch it too.
Looking forward to more batch test support.
Best,
Jingsong Lee
On Mon, Nov 4, 2019 at
luojiangyu created FLINK-14598:
--
Summary: The use of FlinkUserCodeClassLoaders is unreasonable.
Key: FLINK-14598
URL: https://issues.apache.org/jira/browse/FLINK-14598
Project: Flink
Issue
Hi Peter,
I checked out your proposal FLIP-85 and think that we are heading in a very
similar direction.
In your proposal we can create the PackagedProgram on the server side, and
thus,
if we can configure the environment properly, we can directly invoke the main method.
In addition to your design
Dear Flink Community!
I would like to open the discussion of contributing the incubator-druid Flink
connector to Flink.
## A brief introduction to Apache/incubator-druid
Apache Druid[1] (incubating) is a real-time analytics database designed for
fast slice-and-dice analytics ("OLAP" queries) on