Hi Thomas,
IIUC this "launcher" should run on the client endpoint instead
of the dispatcher endpoint. "jar run" will extract the job graph
and submit it to the dispatcher, which does not match the
semantics you intend.
Could you run it with CliFrontend? Or propose that "jar run"
supports running
godfrey he created FLINK-13502:
Summary: CatalogTableStatisticsConverter should be in
planner.utils package
Key: FLINK-13502
URL: https://issues.apache.org/jira/browse/FLINK-13502
Project: Flink
Xuefu Zhang created FLINK-13501:
Summary: Fixes a few issues in documentation for Hive integration
Key: FLINK-13501
URL: https://issues.apache.org/jira/browse/FLINK-13501
Project: Flink
Issue
Stephan Ewen created FLINK-13499:
Summary: Remove dependency on MapR artifact repository
Key: FLINK-13499
URL: https://issues.apache.org/jira/browse/FLINK-13499
Project: Flink
Issue Type:
Nico Kruber created FLINK-13498:
Summary: Reduce Kafka producer startup time by aborting
transactions in parallel
Key: FLINK-13498
URL: https://issues.apache.org/jira/browse/FLINK-13498
Project: Flink
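The idea behind FLINK-13498, aborting transactions concurrently instead of one at a time, can be illustrated independently of Kafka. This is only a sketch of the technique; the names and timings below are hypothetical, not Flink's actual implementation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelAbort {
    // Stand-in for a producer's blocking abort call (hypothetical name).
    static void abortTransaction(int txnId) throws InterruptedException {
        Thread.sleep(10); // simulate one slow abort round-trip
    }

    // Abort all transactions concurrently; wall time is roughly one
    // round-trip per batch of `threads`, not one per transaction.
    static int abortAll(List<Integer> txnIds, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<?>> pending = new ArrayList<>();
        for (int id : txnIds) {
            pending.add(pool.submit(() -> { abortTransaction(id); return null; }));
        }
        for (Future<?> f : pending) f.get(); // wait for every abort to finish
        pool.shutdown();
        return txnIds.size();
    }

    public static void main(String[] args) throws Exception {
        int n = abortAll(List.of(1, 2, 3, 4, 5, 6, 7, 8), 4);
        System.out.println("aborted " + n + " transactions");
    }
}
```

With a pool of 4 threads and 8 transactions, the aborts overlap, so startup cost grows with the number of batches rather than the number of transactions.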
Till Rohrmann created FLINK-13497:
Summary: Checkpoints can complete after CheckpointFailureManager
fails job
Key: FLINK-13497
URL: https://issues.apache.org/jira/browse/FLINK-13497
Project: Flink
Hi!
Are you looking for online access or offline access?
For online access, you can do key lookups via queryable state.
For offline access, you can read and write RocksDB state using the new
State Processor API in Flink 1.9
I will open a PR later today, changing the module to use reflection rather
than a hard MapR dependency.
On Tue, Jul 30, 2019 at 6:40 AM Rong Rong wrote:
> We've also experienced some issues with our internal JFrog artifactory. I
> am suspecting some sort of mirroring problem but somehow it only
Hi Shilpa,
The easiest way to do this is to make the RocksDB state queryable.
Then use the Flink queryable state client to access the state you have
created.
Regards
Taher Koitawala
On Tue, Jul 30, 2019, 4:58 PM Shilpa Deshpande wrote:
> Hello All,
>
> I am new to Apache Flink. In my
Hi,
With a one-week survey in the user list[1], nobody except Flavio and Jeff
participated in the thread.
Flavio shared his experience with a revised Program-like interface.
This could be regarded as downstream integration, and in the client API
enhancements document we propose a rich interface for this
Hello All,
I am new to Apache Flink. In my company we are thinking of using Flink to
perform transformation of the data. The source of the data is Apache Kafka
topics. Each message that we receive on Kafka topic, we want to transform
it and store it on RocksDB. The messages can come out of order.
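In Flink itself, event-time processing with watermarks is the usual answer to out-of-order input. Purely as an illustration of the underlying idea, independent of Flink and with all names hypothetical, out-of-order handling amounts to buffering events by timestamp and releasing them in order once a watermark passes:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;

public class OutOfOrderBuffer {
    // Events buffered by event timestamp (one event per timestamp in this sketch).
    private final TreeMap<Long, String> buffer = new TreeMap<>();

    public void add(long timestamp, String event) {
        buffer.put(timestamp, event);
    }

    // Emit, in ascending timestamp order, every buffered event whose
    // timestamp is at or below the watermark, removing it from the buffer.
    public List<String> emitUpTo(long watermark) {
        NavigableMap<Long, String> ready = buffer.headMap(watermark, true);
        List<String> out = new ArrayList<>(ready.values());
        ready.clear(); // clearing the view also removes the entries from the buffer
        return out;
    }

    public static void main(String[] args) {
        OutOfOrderBuffer b = new OutOfOrderBuffer();
        b.add(5L, "e5");
        b.add(1L, "e1");
        b.add(3L, "e3");
        System.out.println(b.emitUpTo(3L)); // prints [e1, e3]
    }
}
```

In a real Flink job this buffering is handled for you by keyed state plus event-time timers or windows, with the watermark driven by the Kafka source.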
Hi Lakeshen,
Thanks for trying out blink planner.
First question: are you using blink-1.5.1 or flink-1.9-table-planner-blink?
We suggest using the latter, because we don't maintain blink-1.5.1; you
can try Flink 1.9 instead.
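For reference, a sketch of the Flink 1.9 dependency being suggested here (the Scala suffix and version are illustrative; check the 1.9 release notes for the exact artifact):

```xml
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-table-planner-blink_2.11</artifactId>
  <version>1.9.0</version>
</dependency>
```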
Best,
Jark
On Tue, 30 Jul 2019 at 17:02, LakeShen wrote:
> Hi
Hi all,
Progress updates:
1. The bui...@flink.apache.org list can be subscribed to now (thanks @Robert); you
can send an email to builds-subscr...@flink.apache.org to subscribe.
2. We have a pull request [1] to send only apache/flink builds
notifications and it works well.
3. However, all the
Yun Tang created FLINK-13496:
Summary: Correct the documentation of Gauge metric initialization
Key: FLINK-13496
URL: https://issues.apache.org/jira/browse/FLINK-13496
Project: Flink
Issue Type:
Jingsong Lee created FLINK-13495:
Summary: blink-planner should support decimal precision to table
source
Key: FLINK-13495
URL: https://issues.apache.org/jira/browse/FLINK-13495
Project: Flink
Zhenghua Gao created FLINK-13494:
Summary: Blink planner changes source parallelism which causes
stream SQL e2e test fails
Key: FLINK-13494
URL: https://issues.apache.org/jira/browse/FLINK-13494
There is nothing to report; we already know what the problem is but it
cannot be fixed.
On 30/07/2019 08:46, Yun Tang wrote:
I met this problem again at https://api.travis-ci.com/v3/job/220732163/log.txt
. Is there any place we could ask for help to contact Travis, or any clues we
could use
Hi all, when I use the blink flink-sql-parser module, the Maven dependency
looks like this:
    <dependency>
      <groupId>com.alibaba.blink</groupId>
      <artifactId>flink-sql-parser</artifactId>
      <version>1.5.1</version>
    </dependency>
I also import the flink 1.9 blink-table-planner module. I
use FlinkPlannerImpl to parse the sql to get the List. But
when I run the program, it throws the exception like
zhijiang created FLINK-13493:
Summary: BoundedBlockingSubpartition only notifies
onConsumedSubpartition when all the readers are empty
Key: FLINK-13493
URL: https://issues.apache.org/jira/browse/FLINK-13493
Simon Su created FLINK-13492:
Summary: BoundedOutOfOrderTimestamps cause Watermark's timestamp
leak
Key: FLINK-13492
URL: https://issues.apache.org/jira/browse/FLINK-13492
Project: Flink
Issue
Piotr Nowojski created FLINK-13491:
Summary: AsyncWaitOperator doesn't handle endOfInput call properly
Key: FLINK-13491
URL: https://issues.apache.org/jira/browse/FLINK-13491
Project: Flink
I met this problem again at https://api.travis-ci.com/v3/job/220732163/log.txt
. Is there any place we could ask for help to contact Travis, or any clues we
could use to figure this out?
Best
Yun Tang
From: Yun Tang
Sent: Monday, June 24, 2019 14:22
To:
Hi Biao,
Thanks for working on FLINK-9900. The ticket is already assigned to you now.
Cheers,
Gordon
On Tue, Jul 30, 2019 at 2:31 PM Biao Liu wrote:
> Hi Gordon,
>
> Thanks for updating progress.
>
> Currently I'm working on FLINK-9900. I need a committer to assign the
> ticket to me.
>
>
Hi Gordon,
Thanks for updating progress.
Currently I'm working on FLINK-9900. I need a committer to assign the
ticket to me.
Tzu-Li (Gordon) Tai wrote on Tue, Jul 30, 2019 at 13:01:
> Hi all,
>
> There are quite a few instabilities in our builds right now (master +
> release-1.9), some of which are directed