Gordon
>
>
> On 30 May 2017 at 4:29:08 PM, Dominique Rondé
> (dominique.ro...@allsecur.de <mailto:dominique.ro...@allsecur.de>) wrote:
>
Hi folks,
I just ran into the need to bring Flink into a YARN system that is
configured with Kerberos. According to the documentation, I changed
flink-conf.yaml like this:
security.kerberos.login.use-ticket-cache: true
security.kerberos.login.contexts: Client
I know that providing a keyt
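For reference, the two settings above live in flink-conf.yaml; a minimal sketch, including the keytab-based alternative that the truncated sentence presumably goes on to mention (the keytab path and principal are placeholders):

```yaml
# Authenticate from the user's Kerberos ticket cache (requires a prior kinit):
security.kerberos.login.use-ticket-cache: true
security.kerberos.login.contexts: Client

# Alternative: authenticate with a keytab instead of the ticket cache
# (placeholder path and principal):
# security.kerberos.login.keytab: /path/to/flink.keytab
# security.kerberos.login.principal: flink-user@EXAMPLE.COM
```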
Dear all,
I ran into some trouble starting Flink in a YARN container on a Cloudera
cluster. My start script looks like this:
sla:/applvg/home/flink/mvp $ cat run.sh
export FLINK_HOME_DIR=/applvg/home/flink/mvp/flink-1.2.0/
export FLINK_JAR_DIR=/applvg/home/flink/mvp/cache
export YARN_CONF_D
016, at 2:57 PM, Dominique Rondé
> wrote:
Hi @all!
I noticed some strange behavior with the rolling HDFS sink. We consume
events from a Kafka topic and write them into HDFS. We use
the RollingSink implementation in this way:
RollingSink sink = new RollingSink("/some/hdfs/directory")
    .setBucketer(new DateT
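The truncated call above is presumably installing a date-time bucketer. Independent of the Flink API, the path derivation such a bucketer performs can be sketched in plain Java (the date pattern, base directory, and fixed timestamp are illustrative assumptions):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

// Sketch of what a date-time bucketer conceptually does: map an event's
// processing time to a bucket directory under the sink's base path.
public class BucketPathSketch {
    public static void main(String[] args) {
        String baseDir = "/some/hdfs/directory";
        // One bucket per hour, named by date and hour (assumed pattern)
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd--HH");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        // Fixed instant so the example is deterministic
        Date eventTime = new Date(0L);
        System.out.println(baseDir + "/" + fmt.format(eventTime));
    }
}
```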
Cheers,
> Aljoscha
>
> On Mon, 7 Nov 2016 at 09:21 Dominique Rondé
> mailto:dominique.ro...@allsecur.de>> wrote:
>
> First of all, thanks for the explanation. That sounds reasonable.
>
> But I started the flink routes 3 days ago and went out for the
> weekend
are a big problem, because they are located in
the tmp directory and can be automatically cleaned up by the OS.
On 7 Nov 2016, at 8:24 AM, Dominique Rondé wrote:
Hi @ll,
we just changed the backend from filesystem to RocksDB. Since then (3 days),
we have gotten 451 files with 1.8 GB stored in the tmp directory. All files
are named librocksdb*.so.
Did we do something wrong, or is it a bug?
Greets, Dominique
Sent from my Samsung device.
Hi Curtis,
we implemented this today, but without a REST interface. We transfer our
artifacts and a script via an scp call from our Bamboo server and execute the
script. The script kills the YARN application, starts a new Flink application
in YARN, and submits all routes to the cluster.
Work
Hi all,
once again I need a "kick" in the right direction. I have a datastream
with requests and responses identified by a ReqResp-ID. I would like to
calculate the (avg, 95%, 99%) time between the request and response and
also count them. I thought of
".keyBy("ReqRespID").timeWindowAll(Tim
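Setting the Flink windowing API aside, the per-window statistics the question asks for (count, average, 95th and 99th percentile latency) can be sketched in plain Java. This is only an illustration of the aggregation step; the 1..100 ms latencies are made-up sample data and the nearest-rank percentile method is an assumption:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch: aggregate the matched request/response latencies of one window
// into count, average, and 95th/99th percentiles (nearest-rank method).
public class LatencyStats {
    static double percentile(List<Long> sortedLatencies, double p) {
        // Nearest-rank percentile on an already sorted list
        int idx = (int) Math.ceil(p / 100.0 * sortedLatencies.size()) - 1;
        return sortedLatencies.get(Math.max(idx, 0));
    }

    public static void main(String[] args) {
        List<Long> latencies = new ArrayList<>();
        for (long i = 1; i <= 100; i++) latencies.add(i); // sample data: 1..100 ms
        Collections.sort(latencies);
        long count = latencies.size();
        double avg = latencies.stream().mapToLong(Long::longValue).average().orElse(0);
        System.out.println(count + " " + avg + " "
            + percentile(latencies, 95) + " " + percentile(latencies, 99));
    }
}
```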
Hi folks,
at first view I have a very simple problem: I would like to get datasets
out of some text files in HDFS and send them to a Kafka topic. I use the
following code to do that:
DataStream hdfsDatasource = env.readTextFile("hdfs://" +
parameterTool.getRequired("hdfs_env") + "/user/flink/"
Hi @all,
I have a YARN cluster with 5 nodes running a Flink (0.10.2)
instance. Today we shut down one of the YARN hosts for maintenance
reasons. After the restart, some Flink streaming routes are in a
restarting status (see the stack trace below). Now I want to restart these
routes to con
tting the Kafka property props.put("auto.offset.reset", "smallest");?
Cheers,
Till
On Thu, Mar 17, 2016 at 1:39 PM, Dominique Rondé
mailto:dominique.ro...@codecentric.de>> wrote:
Hi folks,
I have a Kafka topic with messages from the last 7 days. Now I have a
new Flink streaming process and would like to consume the messages from the
beginning. If I just bring up the topology, the consumer starts from
the current moment and not from the beginning.
THX
Dominique
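Till's suggestion above amounts to setting the consumer property before the Kafka source is constructed. A minimal sketch in plain Java (the surrounding Flink consumer setup is omitted; the broker address and group id are placeholders, and "smallest" is the 0.8-era spelling of what later Kafka clients call "earliest"):

```java
import java.util.Properties;

// Sketch: consumer properties that make an old (0.8-era) Kafka consumer
// start from the earliest retained offset instead of the latest one.
public class OffsetResetSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092");  // placeholder address
        props.put("group.id", "my-new-consumer-group"); // placeholder group id
        props.put("auto.offset.reset", "smallest");     // consume from the beginning
        System.out.println(props.getProperty("auto.offset.reset"));
    }
}
```

Note that auto.offset.reset only applies when the consumer group has no committed offsets yet, which matches the situation described: a brand-new streaming process.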
ntly, we
added and removed some strategies. So it might be that the strategy
enum of client and jobmanager got out of sync.
Cheers, Fabian
2016-02-10 7:33 GMT+01:00 Dominique Rondé
mailto:dominique.ro...@codecentric.de>>:
Hi,
your guess is correct. I use Java all the time... H
ainst the Python DataSet API.
I guess that's not the case since all code snippets were Java so far.
Can you post the full stacktrace of the exception?
2016-02-09 20:13 GMT+01:00 Dominique Rondé
mailto:dominique.ro...@codecentric.de>>:
Hi all,
I finally figured out tha
, Dominique Rondé
<mailto:dominique.ro...@codecentric.de>> wrote:
Here we go!
ExecutionEnvironment env =
    ExecutionEnvironment.createRemoteEnvironment("xxx.xxx.xxx.xxx",
        53408, "flink-job.jar");
DataSource datasourceA =
    env.read
you post the complete example code (Flink job including the type
definitions)? For example, if the data sets are of type
DataSet, then they will be treated as a GenericType. Judging
from your pseudo code, it looks fine at first glance.
Cheers,
Till
On Tue, Feb 9, 2016 at 2:25 PM, Dom
as indicated by Chiwan.
Cheers,
Till
On Tue, Feb 9, 2016 at 11:53 AM, Dominique Rondé
<mailto:dominique.ro...@codecentric.de>> wrote:
The fields in SourceA and SourceB are private but have public
getters and setters. The classes provide an empty and public
constructor.
some requirements for POJO classes [1].
> >
> > [1]:
> https://ci.apache.org/projects/flink/flink-docs-release-0.10/apis/programming_guide.html#pojos
> >
> > Regards,
> > Chiwan Park
> >
> >> On Feb 9, 2016, at 7:42 PM, Dominique Rondé <
> dominique
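The POJO requirements referenced above ([1]) boil down to: a public class, a public no-argument constructor, and fields that are either public or reachable through public getters and setters. A minimal conforming class, sketched here with an illustrative sessionId field (the class name and main method are only for demonstration):

```java
// Minimal class satisfying the POJO rules discussed above: public class,
// public no-arg constructor, private field with a public getter and setter.
public class SessionEvent {
    private String sessionId;

    public SessionEvent() {}

    public String getSessionId() { return sessionId; }
    public void setSessionId(String sessionId) { this.sessionId = sessionId; }

    public static void main(String[] args) {
        SessionEvent event = new SessionEvent();
        event.setSessionId("abc");
        System.out.println(event.getSessionId());
    }
}
```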
Hi folks,
I try to join two datasets containing some POJOs. Each POJO inherits a
field "sessionId" from the parent class. The field is private but has a
public getter.
The join is like this:
DataSet joinedDataSet =
    sourceA.join(SourceB).where("sessionId").equalTo("sessionId");
But the res