Hi Gyula,
Sorry for the late reply. I think it is definitely a challenge in terms of
log visibility.
However, for your requirement I think you can customize your Flink job's
logging by using a custom log formatter/encoder (configured via
log4j.properties or logback.xml) and a suitable logger implementation.
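As a sketch, a log4j.properties fragment along these lines controls the line format (the appender name and conversion pattern here are illustrative, not a prescribed configuration):

```properties
# Illustrative log4j.properties snippet: route job logs through a file
# appender with a custom pattern so each line carries timestamp, level,
# logger name, and NDC context.
log4j.rootLogger=INFO, file
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.file=${log.file}
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
```

The same idea applies to logback.xml with an `encoder`/`pattern` element instead of a `PatternLayout`.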
Thanks for the information.
I am able to see all the files using the HDFS shell commands.
I am even able to pull the data into Flink with
environment.readTextFile("hdfs://host:port/qlake/logs/sa_structured_events")
The issue is only with the OrcTableSource implementation.
Here are my configuration files.
Maybe you can paste your Flink configuration and hdfs-site.xml and check if
there are problems in the HDFS file-system-related configuration. You should
also check whether this path really exists on HDFS with an HDFS shell command
(e.g. hdfs dfs -ls /xxx, see
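For example, using the host, port, and path mentioned earlier in this thread as placeholders:

```shell
# Check that the path exists and is readable through the HDFS client:
hdfs dfs -ls hdfs://host:port/qlake/logs/sa_structured_events

# Confirm which default filesystem the client resolves bare paths against:
hdfs getconf -confKey fs.defaultFS
```

If `fs.defaultFS` points at a different namenode than the job expects, a path that works in one context can fail with file-not-found in another.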
Yadong Xie created FLINK-14393:
Summary: add an option to enable/disable cancel job in web ui
Key: FLINK-14393
URL: https://issues.apache.org/jira/browse/FLINK-14393
Project: Flink
Issue Type:
Zili Chen created FLINK-14392:
Summary: Introduce JobClient API(FLIP-74)
Key: FLINK-14392
URL: https://issues.apache.org/jira/browse/FLINK-14392
Project: Flink
Issue Type: New Feature
Hi all,
+1 from my side.
Given the current state of this voting thread, FLIP-74 is accepted
with 3 binding votes and 2 non-binding votes. Thanks for your
participation!
I will update the wiki to reflect the result of the vote.
Best,
tison.
Zili Chen wrote on Fri, Oct 11, 2019, at 8:48 PM:
> Well. Then
Sorry for joining the discussion late.
The Beam environment already supports artifact staging; it works out of the
box with the Docker environment. I think it would be helpful to explain in
the FLIP how this proposal relates to what Beam offers / how it would be
integrated.
Thanks,
Thomas
On
I'm obviously in favor of promoting the usage of this amazing library but,
at this early stage, I'd try to keep it as a separate project.
However, this really depends on how frequently the code is going to
change; the Flink main repo is becoming more and more complex to handle due
to the
Hi,
I am trying to use the OrcTableSource to fetch data stored in Hive tables on
HDFS.
I am able to use the OrcTableSource to fetch and deserialize the data on a
local cluster.
But when I try to use the HDFS path, it throws a file-not-found
error.
Any help will be appreciated on the
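For reference, reading ORC data from HDFS with the OrcTableSource builder can be sketched as below. The namenode address reuses the `host:port` placeholder from this thread, and the ORC schema string is purely illustrative:

```java
import org.apache.flink.orc.OrcTableSource;
import org.apache.hadoop.conf.Configuration;

public class OrcFromHdfs {
    public static OrcTableSource build() {
        Configuration hadoopConf = new Configuration();
        // Ensure core-site.xml/hdfs-site.xml are on the classpath, or set
        // fs.defaultFS explicitly so paths resolve against HDFS rather than
        // the local filesystem.
        hadoopConf.set("fs.defaultFS", "hdfs://host:port");
        return OrcTableSource.builder()
                // A fully qualified URI avoids silently falling back to file://
                .path("hdfs://host:port/qlake/logs/sa_structured_events")
                .forOrcSchema("struct<event_id:string,ts:bigint>") // illustrative
                .withConfiguration(hadoopConf)
                .build();
    }
}
```

A file-not-found on an HDFS path that works with `readTextFile` often means the ORC source is resolving the path against a different filesystem configuration, which the explicit `withConfiguration` call rules out.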
+1
Hequn Cheng wrote on Mon, Oct 14, 2019, at 10:55 PM:
> +1
>
> Good job, Wei!
>
> Best, Hequn
>
> On Mon, Oct 14, 2019 at 2:54 PM Dian Fu wrote:
>
> > Hi Wei,
> >
> > +1 (non-binding). Thanks for driving this.
> >
> > Thanks,
> > Dian
> >
> > > On Oct 14, 2019, at 1:40 PM, jincheng sun wrote:
> > >
> > > +1
> > >
>
+1
Good job, Wei!
Best, Hequn
On Mon, Oct 14, 2019 at 2:54 PM Dian Fu wrote:
> Hi Wei,
>
> +1 (non-binding). Thanks for driving this.
>
> Thanks,
> Dian
>
> > On Oct 14, 2019, at 1:40 PM, jincheng sun wrote:
> >
> > +1
> >
> > Wei Zhong wrote on Sat, Oct 12, 2019, at 8:41 PM:
> >
> >> Hi all,
> >>
> >> I would like
Hi Dom,
If you want to tolerate checkpoint failures, you can use this API:
setTolerableCheckpointFailureNumber [1].
But for jobs with checkpointing enabled whose failed operators contain
state, Flink can't ignore these failures without restarting the job.
Subsequent regional recovery may be
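A minimal sketch of wiring this up on the checkpoint config (the interval and threshold values are illustrative):

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointTolerance {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        // Checkpoint every 60 seconds.
        env.enableCheckpointing(60_000);
        // Tolerate up to 3 checkpoint failures before the job is failed;
        // the default behavior is to fail the job on the first failure.
        env.getCheckpointConfig().setTolerableCheckpointFailureNumber(3);
    }
}
```

Note this only relaxes how checkpoint *failures* are handled; it does not change how task failures and restarts interact with operator state.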
+1
- Verified that the source archives do not contain any binaries
- Started the cluster locally and ran some examples successfully
Best,
Kurt
On Mon, Oct 14, 2019 at 4:32 AM Jark Wu wrote:
> Thanks @Hequn and @Yun Tang, I set the fixVersion of FLINK-14385 to 1.8.3
> and 1.9.2.
>
> Btw, I would
+1
Best,
Kurt
On Fri, Oct 11, 2019 at 1:39 PM Dawid Wysakowicz wrote:
> Hi everyone,
> I would like to start a vote on FLIP-64. The discussion seems to have
> reached an agreement.
>
> Please vote for the following design document:
>
>
>
+1 to adding Stateful Functions to Flink core so that it stays in the Flink
repository.
Best,
Vino
Stephan Ewen wrote on Mon, Oct 14, 2019, at 7:29 PM:
> Thank you all for the encouraging feedback! So far the reaction to add this
> to Flink was exclusively positive, which is really great to see!
>
> To make this
Aljoscha Krettek created FLINK-14391:
Summary: Remove FlinkPlan Interface
Key: FLINK-14391
URL: https://issues.apache.org/jira/browse/FLINK-14391
Project: Flink
Issue Type: Sub-task
Hey,
I have a question that I have not been able to find an answer for in the
docs or in any other source. Suppose we have a business system and we are
using Elasticsearch sink, but not for the purpose of business case, but
rather for keeping info on the data that is flowing through the system.
Xu Yang created FLINK-14390:
Summary: Add class for SqlOperators, and add sql operations to
AlgoOperator, BatchOperator and StreamOperator.
Key: FLINK-14390
URL: https://issues.apache.org/jira/browse/FLINK-14390
Thank you all for the encouraging feedback! So far the reaction to add this
to Flink was exclusively positive, which is really great to see!
To make this happen, here would be the next steps:
(1) As per the bylaws, a contribution like that would need a PMC vote,
because it is a commitment to
+1 to adding Stateful Functions to the Flink core repository.
Best,
tison.
Becket Qin wrote on Mon, Oct 14, 2019, at 4:16 PM:
> +1 to adding Stateful Function to Flink. It is a very useful addition to
> the Flink ecosystem.
>
> Given this is essentially a new top-level / first-citizen API of Flink, it
> seems
+1 to adding Stateful Function to Flink. It is a very useful addition to
the Flink ecosystem.
Given this is essentially a new top-level / first-citizen API of Flink, it
seems better to have it in the Flink core repo. This will also avoid letting
this important new API be blocked on potential
Hi all,
Big +1 for contributing Stateful Functions to Flink, and as for the
main question at hand, I would vote for putting it in the main
repository.
I understand that this can couple the release cadence of Flink and
Stateful Functions, although I think the pros of having a "you break
it, you fix
Gary Yao created FLINK-14389:
Summary: Restore task state in new DefaultScheduler
Key: FLINK-14389
URL: https://issues.apache.org/jira/browse/FLINK-14389
Project: Flink
Issue Type: Sub-task
Hi Stephan,
Big +1 for adding Stateful Functions to Apache Flink! The use cases unlocked
by this feature are very interesting and promising.
Regarding whether to place it in the Flink core repository, I personally
prefer to put it in the main repository. This feature introduces a new set of
Dian Fu created FLINK-14388:
Summary: Support all the data types in Python user-defined
functions
Key: FLINK-14388
URL: https://issues.apache.org/jira/browse/FLINK-14388
Project: Flink
Issue
Hi Wei,
+1 (non-binding). Thanks for driving this.
Thanks,
Dian
> On Oct 14, 2019, at 1:40 PM, jincheng sun wrote:
>
> +1
>
> Wei Zhong wrote on Sat, Oct 12, 2019, at 8:41 PM:
>
>> Hi all,
>>
>> I would like to start the vote for FLIP-78[1] which is discussed and
>> reached consensus in the discussion thread[2].
Jingsong Lee created FLINK-14387:
Summary: Insert into catalog table failed when catalog table not
implement table source factory
Key: FLINK-14387
URL: https://issues.apache.org/jira/browse/FLINK-14387
+1
Wei Zhong wrote on Sat, Oct 12, 2019, at 8:41 PM:
> Hi all,
>
> I would like to start the vote for FLIP-78[1] which is discussed and
> reached consensus in the discussion thread[2].
>
> The vote will be open for at least 72 hours. I'll try to close it by
> 2019-10-16 18:00 UTC, unless there is an objection