Hi Bowen, thanks for your reply.
> will there be a base module like "flink-connector-hive-base" which holds
all the common logic of these proposed modules
Maybe we don't need one; their implementations are only a "pom.xml" each, and
different versions have different dependencies.
> it's more common to set the
Hi Dawid,
> INHERITS creates a new table with a "link" to the original table.
Yes, INHERITS is a "link" to the original table in PostgreSQL.
But INHERITS is not in the SQL standard, so I think it's fine for vendors to
define their own semantics.
> Standard also allows declaring the clause after the schema
+1 to start voting.
Best,
tison.
On Thu, Mar 5, 2020 at 2:29 PM, Yang Wang wrote:
> Hi Peter,
> Thanks a lot for your response.
>
> Hi all @Kostas Kloudas @Zili Chen
> @Peter Huang @Rong
> Rong
> It seems that we have reached an agreement. The “application mode”
> is regarded as the enhanced “per-job”.
Thanks Jingsong for your explanation! I'm +1 for this initiative.
According to your description, I think it makes sense to fold support for
Hive 2.2 into that for 2.0/2.1, reducing the number of ranges to four.
A couple of minor follow-up questions:
1) will there be a base module like
Hi Peter,
Thanks a lot for your response.
Hi all @Kostas Kloudas @Zili Chen
@Peter Huang @Rong Rong
It seems that we have reached an agreement. The “application mode”
is regarded as the enhanced “per-job”. It is
orthogonal to “cluster deploy”. Currently, we bind the “per-job” to
Thanks Bowen for getting involved.
> Why did you propose segregating Hive versions into the 5 ranges above? &
> What different Hive features are supported in the 5 ranges?
Because only higher client dependency versions support lower Hive metastore
versions:
- Hive 1.0.0 - 1.2.2, thrift change is OK, only hive
I'm glad to announce that the vote on FLIP-93 has passed, with 7 +1 votes (3
binding: Jingsong, Kurt, Jark; 4 non-binding: Benchao, zoudan, Terry,
Leonard) and no -1.
Thanks everyone for participating!
Cheers,
Bowen
On Mon, Mar 2, 2020 at 7:33 AM Leonard Xu wrote:
> +1 (non-binding).
>
> Very
Thanks, Jingsong, for bringing this up. We've received lots of feedback in
the past few months that the complexity involved in different Hive versions
has been quite painful for users to start with. So it's great to step
forward and deal with such issues.
Before settling on a decision, can you
Hi Yang and Kostas,
Thanks for the clarification. It makes more sense to me if the long-term
goal is to replace per-job mode with application mode
in the future (once multiple execute() calls can be supported). Before
that, it will be better to keep the concept of
application mode internal.
Thanks Dawid for starting this discussion.
I like the "LIKE".
1. For "INHERITS", I think this is a good feature too. Yes, ALTER TABLE will
propagate any changes in column data definitions and check constraints down
the inheritance hierarchy. If A inherits B, A and B share everything; they
have the
Thanks for this proposal.
As a new contributor to Flink, it would be very helpful to have such blogs
for us to understand the future of Flink and get involved.
BTW, I have a question: does the dev blog need a template like a FLIP?
Of course, there is no doubt that dev blogs do not need to be
Hi devs,
Here is a proposal to reverse the dependency from flink-streaming-java to
flink-clients, for a proper
module dependency graph. Since it changes the current structure, it should be
discussed publicly.
The original idea comes from that flink-streaming-java acts as an API only
module just as
Zhu Zhu created FLINK-16430:
---
Summary: Pipelined region scheduling
Key: FLINK-16430
URL: https://issues.apache.org/jira/browse/FLINK-16430
Project: Flink
Issue Type: Task
Components:
Yu Yang created FLINK-16429:
---
Summary: failed to restore flink job from checkpoints due to
unhandled exceptions
Key: FLINK-16429
URL: https://issues.apache.org/jira/browse/FLINK-16429
Project: Flink
I see that the majority would like an uncomplicated process to publish an
article first to gather feedback, and then to have polished versions on the
blog with an official review process.
Then the obvious solution is a two-fold process:
* First a draft is published and
you would need to reference the table with a fully qualified name, including
catalog and database
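As a sketch in Flink SQL (the catalog, database, and table names here are hypothetical):

```sql
-- Without a fully qualified name, the table is resolved against the
-- current catalog and database, which may not be the Hive catalog.
SELECT * FROM hive_catalog.my_db.my_table;

-- Alternatively, switch the defaults first:
USE CATALOG hive_catalog;
USE my_db;
SELECT * FROM my_table;
```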
On Wed, Mar 4, 2020 at 02:17 Gyula Fóra wrote:
> I guess it will only work now if you specify the catalog name too when
> referencing the table.
>
>
> On Wed, Mar 4, 2020 at 11:15 AM Gyula Fóra wrote:
>
>
Have you tried to just export Hadoop 3's classpath to `HADOOP_CLASSPATH`
and see if that works out of the box?
If the main use case is HDFS access, then there is a fair chance it might
just work, because Flink uses only a small subset of the Hadoop FS API
which is stable between 2.x and 3.x, as
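That approach can be sketched as follows (assuming a Hadoop 3 installation whose `hadoop` command is on the PATH; the job jar path is just an example):

```shell
# Put Hadoop 3's jars on Flink's classpath; Flink picks up
# HADOOP_CLASSPATH when assembling its own classpath at startup.
export HADOOP_CLASSPATH=$(hadoop classpath)

# Then submit a job as usual, e.g.:
./bin/flink run examples/streaming/WordCount.jar
```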
Big +1 for this proposal, and I second Ufuk's feeling!
I guess an "Engine room" section in the Wiki would attract lots of technical fans. :)
Best,
Zhijiang
--
From: Yu Li
Send Time: 2020 Mar. 4 (Wed.) 14:42
To: dev
Cc: vthinkxie
Subject: Re:
Zhijiang created FLINK-16428:
Summary: Fine-grained network buffer management for backpressure
Key: FLINK-16428
URL: https://issues.apache.org/jira/browse/FLINK-16428
Project: Flink
Issue Type:
Zili Chen created FLINK-16427:
-
Summary: Remove directly throw ProgramInvocationExceptions in
RemoteStreamEnvironment
Key: FLINK-16427
URL: https://issues.apache.org/jira/browse/FLINK-16427
Project:
Zou created FLINK-16426:
---
Summary: Add rate limiting feature for kafka table source
Key: FLINK-16426
URL: https://issues.apache.org/jira/browse/FLINK-16426
Project: Flink
Issue Type: Improvement
Zou created FLINK-16425:
---
Summary: Add rate limiting feature for kafka table source
Key: FLINK-16425
URL: https://issues.apache.org/jira/browse/FLINK-16425
Project: Flink
Issue Type: Improvement
Bob created FLINK-16424:
---
Summary: Can't verify PGP signatures of Flink 1.9.2 and 1.10.0
Key: FLINK-16424
URL: https://issues.apache.org/jira/browse/FLINK-16424
Project: Flink
Issue Type: Improvement
Robert Metzger created FLINK-16423:
--
Summary: test_ha_per_job_cluster_datastream.sh gets stuck
Key: FLINK-16423
URL: https://issues.apache.org/jira/browse/FLINK-16423
Project: Flink
Issue
+1
On Wed, 4 Mar 2020 at 20:29, Kurt Young wrote:
> +1
>
> Best,
> Kurt
>
>
> On Wed, Mar 4, 2020 at 8:19 PM Gary Yao wrote:
>
> > +1 (binding)
> >
> > Best,
> > Gary
> >
> > On Wed, Mar 4, 2020 at 1:18 PM Yadong Xie wrote:
> >
> > > Hi all
> > >
> > > I want to start the vote for FLIP-100,
+1
Best,
Kurt
On Wed, Mar 4, 2020 at 8:19 PM Gary Yao wrote:
> +1 (binding)
>
> Best,
> Gary
>
> On Wed, Mar 4, 2020 at 1:18 PM Yadong Xie wrote:
>
> > Hi all
> >
> > I want to start the vote for FLIP-100, which proposes to add attempt
> > information inside subtask and timeline in web UI.
>
+1 (binding)
Best,
Gary
On Wed, Mar 4, 2020 at 1:18 PM Yadong Xie wrote:
> Hi all
>
> I want to start the vote for FLIP-100, which proposes to add attempt
> information inside subtask and timeline in web UI.
>
> To help everyone better understand the proposal, we spent some effort on
> making
Hi all
I want to start the vote for FLIP-100, which proposes to add attempt
information inside subtask and timeline in web UI.
To help everyone better understand the proposal, we spent some effort on
making an online POC
Timeline Attempt (click the vertex timeline to see the differences):
Hi Gary, Kurt, and Jark,
I am canceling the vote and restarting it, since the POC has some changes from
the initial one.
All the changes follow the proposal in this mail thread.
Please vote again in the new thread. Thanks!
On Wed, Mar 4, 2020 at 12:13 PM, Jark Wu wrote:
> +1 from my side.
>
> Best,
> Jark
>
Gyula Fora created FLINK-16422:
--
Summary: Cannot use [catalog].[db].table with Hive catalog
Key: FLINK-16422
URL: https://issues.apache.org/jira/browse/FLINK-16422
Project: Flink
Issue Type:
Gyula Fora created FLINK-16421:
--
Summary: Changing default catalog to hive without changing default
database fails
Key: FLINK-16421
URL: https://issues.apache.org/jira/browse/FLINK-16421
Project: Flink
Jark Wu created FLINK-16420:
---
Summary: Support CREATE TABLE with schema inference
Key: FLINK-16420
URL: https://issues.apache.org/jira/browse/FLINK-16420
Project: Flink
Issue Type: New Feature
I guess it will only work now if you specify the catalog name too when
referencing the table.
On Wed, Mar 4, 2020 at 11:15 AM Gyula Fóra wrote:
> You are right, but still, if the default catalog is something else and
> that's the one containing the table, then it still won't work currently.
>
>
You are right, but still, if the default catalog is something else and that's
the one containing the table, then it still won't work currently.
Gyula
On Wed, Mar 4, 2020 at 5:08 AM Bowen Li wrote:
> Hi Gyula,
>
> What line 622 (the link you shared) does is not registering catalogs, but
> setting
It's not an actual problem for us since we make sure that only leaf
modules have the suffix.
It can result in problems if parent modules have the suffix: children _may_
no longer be able to resolve the path to the parent, because the property
value is usually defined in the parent.
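For illustration, a hedged sketch of the pattern being discussed (these are pom fragments, not complete files, and the module names are examples): the suffix comes from a property defined in the parent pom, so a module whose own artifactId uses the property cannot be resolved before the parent is read.

```xml
<!-- Parent pom: defines the property. Its own artifactId should be a
     plain literal, not an expression. -->
<groupId>org.apache.flink</groupId>
<artifactId>flink-parent</artifactId>
<properties>
  <scala.binary.version>2.11</scala.binary.version>
</properties>

<!-- Leaf module: using the property here works, because the parent
     (and thus the property) is resolved first. Maven still emits the
     "'artifactId' contains an expression" warning for this pattern. -->
<artifactId>flink-runtime_${scala.binary.version}</artifactId>
```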
On a
Hi,
When building Flink I see a LOT of messages from maven that are similar to
this:
[WARNING]
[WARNING] Some problems were encountered while building the effective model
for org.apache.flink:flink-runtime_2.11:jar:1.11-SNAPSHOT
[WARNING] 'artifactId' contains an expression but should be a
Hi All,
If you have ever touched the Docker topic in Flink, you
probably noticed that we have multiple places in the docs and repos which
address its various concerns.
We have prepared a FLIP [1] to simplify users' perception of the Docker topic
in Flink. It mostly advocates for an approach of
Jun Qin created FLINK-16419:
---
Summary: Avoid to recommit transactions which are known committed
successfully to Kafka upon recovery
Key: FLINK-16419
URL: https://issues.apache.org/jira/browse/FLINK-16419
Jingsong Lee created FLINK-16418:
Summary: Hide hive version to avoid user confuse
Key: FLINK-16418
URL: https://issues.apache.org/jira/browse/FLINK-16418
Project: Flink
Issue Type: Bug
Sorry, I forgot the [DISCUSS] tag in the title.
Closing this thread.
Best,
Jingsong Lee
On Wed, Mar 4, 2020 at 4:57 PM Jingsong Lee wrote:
> Hi all,
>
> I'd like to propose introducing flink-connector-hive-xx modules.
>
> We have documented the detailed dependency information [2]. But it still has
> some
Hi all,
I'd like to propose introducing flink-connector-hive-xx modules.
We have documented the detailed dependency information [2], but it is still
inconvenient:
- Too many versions: users need to pick one version from 8 versions.
- Too many versions: it's not friendly to our developers
Hi all,
I'd like to propose introducing flink-connector-hive-xx modules.
We have documented the detailed dependency information [2], but it is still
inconvenient:
- Too many versions: users need to pick one version from 8 versions.
- Too many versions: it's not friendly to our developers
Sorry, forgot one question.
4. Can we make value.fields-include more orthogonal? E.g., one could
specify it as "EXCEPT_KEY, EXCEPT_TIMESTAMP".
With the current EXCEPT_KEY and EXCEPT_KEY_TIMESTAMP, users cannot configure
it to ignore only the timestamp while keeping the key.
Best,
Kurt
On Wed, Mar 4, 2020 at 4:42
Hi Dawid,
I have a couple of questions around key fields; actually, I also have some
other questions, but I want to focus on key fields first.
1. I don't fully understand the usage of "key.fields". Is this option only
valid for write operations? Because for
reading, I can't imagine how such
Robert Metzger created FLINK-16417:
--
Summary: ConnectedComponents iterations with high parallelism
end-to-end test fails with OutOfMemoryError: Direct buffer memory
Key: FLINK-16417
URL:
Hi all,
I had a really quick look, and from my perspective the proposal looks fine.
I share Jark's opinion that the instantiation could be done at a later
stage. I agree with Wei that it requires some changes in the internal
implementation of the FunctionCatalog, to store temporary functions as
catalog
Hi Sivaprasanna,
we don't upload the source jars for the flink-shaded modules. However, you
can build and install them yourself by cloning the flink-shaded repository
[1] and then calling `mvn package -Dshade-sources`.
[1] https://github.com/apache/flink-shaded
Cheers,
Till
On Tue, Mar 3, 2020 at