Jeroen Steggink created FLINK-10901:
---
Summary: Jobmanager REST ip binds to ip address instead of hostname
Key: FLINK-10901
URL: https://issues.apache.org/jira/browse/FLINK-10901
Project: Flink
Jeroen Steggink created FLINK-10902:
---
Summary: Jobmanager in HA setup communicates the ip address
instead of hostnames
Key: FLINK-10902
URL: https://issues.apache.org/jira/browse/FLINK-10902
Hi,
One more thing. I think the Kafka client would be a good example of a connector
that could make use of this `isBlocked()`/callbacks single-threaded API from
“Pattern 2”.
If we have N threads per N splits, there would be no need for the (N+1)th
thread. It could be implemented as a non
> And each split has its own (internal) thread for reading from Kafka and
putting messages in an internal queue to pull from. This is similar to how
the current Kafka source is implemented, which has a separate fetcher
thread.
Aljoscha, in the Kafka case, one split may contain multiple Kafka
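The per-split fetcher pattern quoted above (an internal thread reading from Kafka and filling a queue the reader pulls from) can be illustrated with a plain-JDK sketch. The class and method names below are hypothetical, not actual Flink interfaces, and the sketch ignores some races a production reader would have to handle:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;

// Sketch of the pattern discussed above: each split owns an internal fetcher
// thread that fills a bounded queue; the caller polls non-blockingly and can
// wait on isBlocked() when no record is available.
class SplitFetcher {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(100);
    private volatile CompletableFuture<Void> available = new CompletableFuture<>();

    SplitFetcher(Iterable<String> externalRecords) {
        Thread fetcher = new Thread(() -> {
            try {
                for (String record : externalRecords) {
                    queue.put(record);        // back-pressure via the bounded queue
                    available.complete(null); // wake a waiting reader
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        fetcher.setDaemon(true);
        fetcher.start();
    }

    /** Non-blocking poll; null means "no record available right now". */
    String pollNext() {
        String record = queue.poll();
        if (record == null) {
            available = new CompletableFuture<>(); // re-arm for the next wait
        }
        return record;
    }

    /** Completes once at least one record is available to poll. */
    CompletableFuture<Void> isBlocked() {
        return queue.isEmpty() ? available : CompletableFuture.completedFuture(null);
    }
}
```

With N splits this needs N fetcher threads but no extra coordination thread, matching the point above that the (N+1)th thread becomes unnecessary.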
Hi
Re: Becket
> WRT the confusion between advance() / getCurrent(), do you think it would
> help if we combine them and have something like:
>
> CompletableFuture getNext();
> long getWatermark();
> long getCurrentTimestamp();
I think that technically this would work the same as
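To make the quoted proposal concrete, here is a toy in-memory reader combining advance()/getCurrent() into a single `getNext()` that returns a future, with watermark/timestamp kept as plain getters as proposed. This is a sketch of the proposed shape only, not Flink code; the watermark logic is a deliberately simplistic assumption:

```java
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Toy reader illustrating the combined call from the proposal above:
// getNext() returns a future that completes with the next record, replacing
// the separate advance()/getCurrent() pair.
class CombinedReader<T> {
    private final Iterator<T> source;
    private long currentTimestamp = Long.MIN_VALUE;

    CombinedReader(List<T> records) {
        this.source = records.iterator();
    }

    CompletableFuture<T> getNext() {
        // A real connector would complete this asynchronously when data
        // arrives; the in-memory source completes immediately.
        if (!source.hasNext()) {
            return new CompletableFuture<>(); // never completes: no more data
        }
        currentTimestamp = System.currentTimeMillis();
        return CompletableFuture.completedFuture(source.next());
    }

    long getCurrentTimestamp() {
        return currentTimestamp;
    }

    long getWatermark() {
        // Simplistic assumption: watermark trails the last emitted timestamp.
        return currentTimestamp - 1;
    }
}
```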
Hi,
Isn't the problem of multiple expressions limited only to `flat***` functions,
and to be more specific, only to having two (or more) different table functions
passed as expressions? `.flatAgg(TableAggA('a), scalarFunction1('b),
scalarFunction2('c))` seems to be well defined (duplicate
Mark Cho created FLINK-10907:
Summary: Job recovery on the same JobManager causes JobManager
metrics to report stale values
Key: FLINK-10907
URL: https://issues.apache.org/jira/browse/FLINK-10907
Luka Jurukovski created FLINK-10904:
---
Summary: Expose Classloader before Pipeline execution
Key: FLINK-10904
URL: https://issues.apache.org/jira/browse/FLINK-10904
Project: Flink
Issue
Konstantin Knauf created FLINK-10906:
Summary: docker-entrypoint.sh logs credentials during startup
Key: FLINK-10906
URL: https://issues.apache.org/jira/browse/FLINK-10906
Project: Flink
One quick follow up on this. The PR has been open for about a month. I
don't want to bug the folks to review it, but I'm not sure what the
etiquette is for getting PRs reviewed.
Any help would be appreciated!
Thanks!
-Joey
On Wed, Nov 7, 2018 at 6:44 AM Joey Echeverria wrote:
> Thanks Till!
Thanks for keeping on improving the overall design, Xuefu! It looks quite
good to me now.
It would be nice if the cc-ed Flink committers could help review and confirm!
One minor suggestion: Since the last section of design doc already touches
some new sql statements, shall we add another section
Luka Jurukovski created FLINK-10903:
---
Summary: Shade Internal Akka Dependencies
Key: FLINK-10903
URL: https://issues.apache.org/jira/browse/FLINK-10903
Project: Flink
Issue Type: Wish
Konstantin Knauf created FLINK-10905:
Summary: HadoopConfigLoader logs Credentials on DEBUG level
Key: FLINK-10905
URL: https://issues.apache.org/jira/browse/FLINK-10905
Project: Flink
Thanks Aljoscha for getting this effort going!
There's been plenty of discussion here already and I'll add my big +1 to
making this interface very simple to implement for a new
Source/SplitReader. Writing a new production-quality connector for Flink
is very difficult today and requires a lot of
Dawid Wysakowicz created FLINK-10893:
Summary: Streaming File Sink s3 end-to-end test failed on travis
Key: FLINK-10893
URL: https://issues.apache.org/jira/browse/FLINK-10893
Project: Flink
Jeff Zhang created FLINK-10892:
--
Summary: Flink Yarn app name is hard coded
Key: FLINK-10892
URL: https://issues.apache.org/jira/browse/FLINK-10892
Project: Flink
Issue Type: Improvement
I think it sounds like a good idea to be able to specify an options factory
in the flink-conf.yaml. Please go ahead with creating the respective JIRA
issues.
Cheers,
Till
On Wed, Nov 14, 2018 at 7:33 AM Yun Tang wrote:
> Hi all
>
> We already found the programmatic way to configure RocksDB was
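A declarative counterpart to the programmatic approach mentioned above could look like the fragment below. The exact key name would be settled in the JIRA issue, so treat it as an assumption rather than an existing option:

```yaml
# Hypothetical flink-conf.yaml entry: point the RocksDB state backend at a
# user-provided options factory class (key name is an assumption, pending
# the JIRA issues discussed above).
state.backend: rocksdb
state.backend.rocksdb.options-factory: com.example.MyOptionsFactory
```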
Dawid Wysakowicz created FLINK-10895:
Summary: TypeSerializerSnapshotMigrationITCase.testSavepoint test
failed on travis
Key: FLINK-10895
URL: https://issues.apache.org/jira/browse/FLINK-10895
Dawid Wysakowicz created FLINK-10894:
Summary: Resuming Externalized Checkpoint (file, async, scale
down) end-to-end test failed on travis
Key: FLINK-10894
URL:
Hi Addison,
I think it is a good idea to add some more details to the documentation.
Thus, it would be great if you could contribute documentation on how to enable compression.
Concerning the RollingPolicy, I've pulled in Klou who might give you more
details about the design decisions.
Cheers,
Till
On Wed, Nov
Tzu-Li (Gordon) Tai created FLINK-10897:
---
Summary: Support POJO state schema evolution
Key: FLINK-10897
URL: https://issues.apache.org/jira/browse/FLINK-10897
Project: Flink
Issue
Tzu-Li (Gordon) Tai created FLINK-10896:
---
Summary: Extend state schema evolution support for more composite
types
Key: FLINK-10896
URL: https://issues.apache.org/jira/browse/FLINK-10896
Hi,
I thought I had sent this mail a while ago but I must have forgotten to send it.
There is another thing we should consider for splits: the range of timestamps
that it can contain. For example, the splits of a file source would know what
the minimum and maximum timestamp in the splits is,
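The idea above, that a split could advertise the range of timestamps it may contain, could be modeled roughly as below. All names are hypothetical plain-Java illustrations, not Flink API:

```java
// Sketch of a split that advertises the [min, max] event-time range it can
// contain, as suggested above. A source could use this, for example, to read
// splits roughly in timestamp order or to hold back watermarks.
class TimestampedSplit {
    final String path;
    final long minTimestamp;
    final long maxTimestamp;

    TimestampedSplit(String path, long minTimestamp, long maxTimestamp) {
        this.path = path;
        this.minTimestamp = minTimestamp;
        this.maxTimestamp = maxTimestamp;
    }

    /** True if the split may still contain records at or after the given watermark. */
    boolean mayContainAfter(long watermark) {
        return maxTimestamp >= watermark;
    }
}
```

A scheduler could then sort pending splits by `minTimestamp` to keep event time roughly monotonic across splits.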
xymaqingxiang created FLINK-10898:
-
Summary: Add yarn.application-attempts-failures-validity-interval
in YarnConfigOptions
Key: FLINK-10898
URL: https://issues.apache.org/jira/browse/FLINK-10898
Till Rohrmann created FLINK-10899:
-
Summary: Don't explicitly set version in flink-end-to-end-tests
sub modules
Key: FLINK-10899
URL: https://issues.apache.org/jira/browse/FLINK-10899
Project: Flink
Thanks Jincheng,
That makes sense to me.
Another differentiation of Table API and DataStream API would be the access
to the timer service.
The DataStream API can register and act on timers while the Table API would
not have this feature.
Best, Fabian
Am Mi., 14. Nov. 2018 um 02:02 Uhr schrieb
Till Rohrmann created FLINK-10900:
-
Summary: Mark Kafka 2.0 connector as beta feature
Key: FLINK-10900
URL: https://issues.apache.org/jira/browse/FLINK-10900
Project: Flink
Issue Type: Task
Hi Jincheng,
I said before that I think the append() method is better than
implicitly forwarding keys, but still, I believe it adds unnecessary
boilerplate code.
Moreover, I haven't seen a convincing argument why map(Expression*) is
worse than map(Expression). In either case we need to do