POJO. Are all fields in SourceA
> public? There are some requirements for POJO classes [1].
> >
> > [1]:
> https://ci.apache.org/projects/flink/flink-docs-release-0.10/apis/programming_guide.html#pojos
> >
> > Regards,
> > Chiwan Park
> >
> >> On
as indicated by Chiwan.
Cheers,
Till
On Tue, Feb 9, 2016 at 11:53 AM, Dominique Rondé
<dominique.ro...@codecentric.de> wrote:
The fields in SourceA and SourceB are private but have public
getters and setters. The classes pro
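For reference, a class satisfying the POJO requirements from [1] (public class, a public no-argument constructor, and fields that are either public or reachable through public getters and setters) could look like this minimal sketch; the field names are my own illustration, not taken from the thread:

```java
// Illustrative sketch of a Flink-compatible POJO; field names are assumptions.
public class SourceA {
    private int id;          // private is fine as long as public getter/setter exist
    private String payload;

    public SourceA() {}      // public no-argument constructor is required

    public int getId() { return id; }
    public void setId(int id) { this.id = id; }

    public String getPayload() { return payload; }
    public void setPayload(String payload) { this.payload = payload; }
}
```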
Could you post the complete example code (the Flink job including the type
definitions)? For example, if the data sets are of type
DataSet, then they will be treated as a GenericType. Judging
from your pseudo code, it looks fine at first glance.
Cheers,
Till
On Tue, Feb 9, 2016
n DataSet API.
I guess that's not the case since all code snippets were Java so far.
Can you post the full stacktrace of the exception?
2016-02-09 20:13 GMT+01:00 Dominique Rondé
<dominique.ro...@codecentric.de>:
Hi all,
Recently, we added and removed some strategies, so it might be that the
strategy enum of client and JobManager got out of sync.
Cheers, Fabian
2016-02-10 7:33 GMT+01:00 Dominique Rondé
<dominique.ro...@codecentric.de>:
Hi,
your guess is
Hi folks,
I have a Kafka topic with messages from the last 7 days. Now I have a
new Flink streaming process and would like to consume the messages from
the beginning. If I just bring up the topology, the consumer starts from
the current moment and not from the beginning.
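One way to get this behavior (a sketch, assuming the Kafka 0.8-era Flink consumer of that time): use a consumer group that has no committed offsets yet and set auto.offset.reset accordingly. Broker, ZooKeeper address, group id, and topic name below are placeholders:

```java
import java.util.Properties;

public class ReadFromBeginning {
    // Consumer properties for starting at the earliest retained message,
    // provided the group id has no committed offsets yet.
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker:9092"); // placeholder broker
        props.setProperty("zookeeper.connect", "zk:2181");     // needed by the 0.8 consumer
        props.setProperty("group.id", "fresh-group");          // new group => no stored offsets
        props.setProperty("auto.offset.reset", "smallest");    // Kafka 0.8 value; 0.9+ uses "earliest"
        return props;
    }

    public static void main(String[] args) {
        // In the Flink job (requires the flink-connector-kafka dependency):
        // env.addSource(new FlinkKafkaConsumer082<>("topic", new SimpleStringSchema(), consumerProps()));
        System.out.println(consumerProps().getProperty("auto.offset.reset"));
    }
}
```

Note that auto.offset.reset only applies when no committed offsets exist for the group; reusing an old group id makes the consumer resume from the committed position instead.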
THX
Dominique
Hi @all,
I have a YARN cluster with 5 nodes and a running Flink (0.10.2)
instance. Today we shut down one of the YARN hosts for maintenance
reasons. After the restart, some of our Flink streaming routes are in a
restarting status (see stacktrace below). Now I want to restart these
routes to
Hi all,
once again I need a "kick" in the right direction. I have a datastream
with requests and responses identified by a ReqResp-ID. I would like to
calculate the (avg, 95%, 99%) time between request and response and
also count them. I thought of
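In a Flink job this would typically sit behind keyBy(reqRespId) and a window; the per-window aggregation itself can be sketched in plain Java (nearest-rank percentiles; the class and method names are my own, not an existing Flink API):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of the aggregation one could run inside a window function:
// collect request->response latencies, then report count, average,
// and nearest-rank percentiles. Pure Java, no Flink dependency needed.
public class LatencyStats {
    final List<Long> latencies = new ArrayList<>();

    void add(long requestTs, long responseTs) {
        latencies.add(responseTs - requestTs);
    }

    long count() { return latencies.size(); }

    double avg() {
        long sum = 0;
        for (long l : latencies) sum += l;
        return (double) sum / latencies.size();
    }

    // Nearest-rank percentile; p is given in percent, e.g. 95.0.
    long percentile(double p) {
        List<Long> sorted = new ArrayList<>(latencies);
        Collections.sort(sorted);
        int rank = (int) Math.ceil(p * sorted.size() / 100.0);
        return sorted.get(Math.max(rank - 1, 0));
    }
}
```

For exact percentiles the window state has to keep all latencies of the window; for large windows an approximate sketch would be the usual trade-off.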
Hi folks,
at first glance I have a very simple problem. I would like to read
datasets from some text files in HDFS and send them to a Kafka topic. I
use the following code to do that:
DataStream hdfsDatasource = env.readTextFile("hdfs://" +
parameterTool.getRequired("hdfs_env") + "/user/flink/"
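A minimal end-to-end sketch along these lines (the Flink calls are shown as comments since they need the connector dependencies on the classpath; the file name appended to the path and the broker/topic are my assumptions, not from the original snippet):

```java
public class HdfsToKafkaSketch {
    // Hypothetical helper mirroring the path construction in the snippet above.
    static String hdfsPath(String hdfsEnv) {
        return "hdfs://" + hdfsEnv + "/user/flink/input"; // "input" is an assumed file name
    }

    public static void main(String[] args) {
        // Sketch of the job (needs flink-streaming-java and flink-connector-kafka):
        // StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // DataStream<String> lines = env.readTextFile(hdfsPath(parameterTool.getRequired("hdfs_env")));
        // lines.addSink(new FlinkKafkaProducer<>("broker:9092", "topic", new SimpleStringSchema()));
        // env.execute("hdfs-to-kafka");
        System.out.println(hdfsPath("namenode:8020"));
    }
}
```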
Hi Curtis,
we implemented this today, but without a REST interface. We transfer our
artifacts and a script with an scp call from our Bamboo server and execute the
script. The script kills the YARN application, starts a new Flink application
in YARN, and submits all routes to the cluster.
Hi @ll,
we just changed the backend from filesystem to RocksDB. Since then (3 days),
we have accumulated 451 files with 1.8 GB stored in the tmp directory. All
files are named librocksdb*.so.
Did we do something wrong, or is it a bug?
Greets,
Dominique
Sent from my Samsung device.
Thanks,
Kostas
> On Nov 15, 2016, at 2:57 PM, Dominique Rondé <dominique.ro...@allsecur.de>
> wrote:
>
> Hi @all!
>
> I figured out a strange behavior with the rolling HDFS sink. We consume
> events from a Kafka topic and write them into an HDFS filesystem. We use
> Cheers,
> Aljoscha
>
> On Mon, 7 Nov 2016 at 09:21 Dominique Rondé
> <dominique.ro...@allsecur.de> wrote:
>
> First of all, thanks for the explanation. That sounds reasonable.
>
> But I started the flink routes 3 days ago a
Hi @all!
I figured out a strange behavior with the rolling HDFS sink. We consume
events from a Kafka topic and write them into an HDFS filesystem. We use
the RollingSink implementation in this way:
RollingSink sink = new
RollingSink("/some/hdfs/directory") //
.setBucketer(new
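For context, a complete variant of that setup and the bucketing it produces might look like the sketch below; the bucketer pattern and batch size are illustrative values, not taken from the original mail:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class RollingSinkSketch {
    // DateTimeBucketer derives the bucket directory from the wall-clock time
    // and a date pattern; this helper previews that path logic in plain Java.
    static String bucketPath(String base, long timestampMillis) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd--HH");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        return base + "/" + fmt.format(new Date(timestampMillis));
    }

    public static void main(String[] args) {
        // A full RollingSink setup (Flink filesystem connector) might be:
        // RollingSink<String> sink = new RollingSink<String>("/some/hdfs/directory")
        //     .setBucketer(new DateTimeBucketer("yyyy-MM-dd--HH")) // one bucket per hour
        //     .setBatchSize(1024L * 1024L * 128L);                 // roll part files at 128 MB
        // stream.addSink(sink);
        System.out.println(bucketPath("/some/hdfs/directory", 0L));
    }
}
```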
Dear all,
I ran into some trouble during the start of Flink in a YARN container based
on Cloudera. I have a start script like this:
sla:/applvg/home/flink/mvp $ cat run.sh
export FLINK_HOME_DIR=/applvg/home/flink/mvp/flink-1.2.0/
export FLINK_JAR_DIR=/applvg/home/flink/mvp/cache
export
Hi folks,
I recently found the need to bring Flink onto a YARN system that is
configured with Kerberos. According to the documentation, I changed
flink-conf.yaml like this:
security.kerberos.login.use-ticket-cache: true
security.kerberos.login.contexts: Client
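For comparison, a keytab-based variant of that configuration (the keytab path, principal, and contexts below are placeholders; the keys are Flink's documented security.kerberos.login.* options):

```yaml
security.kerberos.login.use-ticket-cache: false
security.kerberos.login.keytab: /path/to/flink.keytab      # placeholder path
security.kerberos.login.principal: flink/host@EXAMPLE.COM  # placeholder principal
security.kerberos.login.contexts: Client,KafkaClient       # Client = ZooKeeper/Hadoop
```

A keytab survives ticket expiry, which matters for long-running streaming jobs; the ticket-cache approach stops working once the ticket obtained via kinit expires.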
I know that providing a
Gordon
>
>
> On 30 May 2017 at 4:29:08 PM, Dominique Rondé
> (dominique.ro...@allsecur.de) wrote:
>
>> Hi folks,
>>
>> I recently found the need to bring Flink onto a YARN system that
>> is configured with Kerberos. A