Re: Checkpointing SIGSEGV

2017-05-26 Thread Jason Brelloch
; " + bytes[bytes.length - 1]); > } >} >final long endGet = System.nanoTime(); > >System.out.println("end get - duration: " + ((endGet - beginGet) / > 1_000_000) + " ms"); > } > > > Depending on how smooth the 1.3 release

Re: Checkpointing SIGSEGV

2017-05-26 Thread Jason Brelloch
a reason for your problem? On 26.05.2017 at 15:50, Robert Metzger <rmetz...@apache.org> wrote: Hi Jason, This error is unexpected. I don't think it's caused by insufficient memory. I'm including Stefan in the conversation, he's the RocksDB expert :) On

Re: Flink parallel tasks, slots and vcores

2017-05-26 Thread Jason Brelloch

Re: ConnectedStream keyby issues

2017-05-04 Thread Jason Brelloch
blic void flatMap2(RulesEvent rulesEvent, Collector<Tuple2<TrackEvent, RulesEvent>> collector) throws Exception {
    t2.f1 = rulesEvent;
    //collector.collect(t2);
}
});
ds.printToErr();

Best,
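
The snippet above is the flatMap2 side of a CoFlatMapFunction on a connected, keyed stream. As a point of reference, a self-contained sketch of that pattern; the TrackEvent/RulesEvent POJOs and the "key" field are stand-ins for the thread's actual types, and the plain currentRule field is a simplification (real code would keep it in Flink keyed state so the rule is tracked per key):

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction;
    import org.apache.flink.util.Collector;

    public class ConnectedKeyBySketch {

        // Stand-in POJOs mirroring the type names quoted in the thread.
        public static class TrackEvent { public String key = "k"; }
        public static class RulesEvent { public String key = "k"; }

        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            DataStream<TrackEvent> tracks = env.fromElements(new TrackEvent());
            DataStream<RulesEvent> rules = env.fromElements(new RulesEvent());

            // Key both inputs by the same logical key so flatMap1/flatMap2 for one
            // key run in the same parallel instance.
            tracks.connect(rules)
                    .keyBy("key", "key")
                    .flatMap(new CoFlatMapFunction<TrackEvent, RulesEvent, Tuple2<TrackEvent, RulesEvent>>() {
                        // Plain field for brevity; keyed state would make this per-key.
                        private RulesEvent currentRule;

                        @Override
                        public void flatMap1(TrackEvent track, Collector<Tuple2<TrackEvent, RulesEvent>> out) {
                            // Pair each track event with the latest rule seen so far.
                            out.collect(Tuple2.of(track, currentRule));
                        }

                        @Override
                        public void flatMap2(RulesEvent rule, Collector<Tuple2<TrackEvent, RulesEvent>> out) {
                            currentRule = rule;
                        }
                    })
                    .printToErr();

            env.execute("connected keyBy sketch");
        }
    }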

Re: Tuning RocksDB

2017-05-03 Thread Jason Brelloch
ps://github.com/facebook/rocksdb/issues/1988 We provide a custom version of RocksDB with Flink 1.2.1 (where we fixed the slow merge operations) until we can upgrade to a newer version of RocksDB. So updating to 1.2.1 should fix the slowdown you observe. On 03.05.201

Re: Tuning RocksDB

2017-05-03 Thread Jason Brelloch
actions w.r.t. how serializers are used are kind of inverted between operation and checkpointing. For Flink 1.3 we will also introduce incremental checkpoints on RocksDB that piggyback on the SST files. Flink 1.2 writes checkpoints and savepoints fully and in a custom format.
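
To make the feature mentioned above concrete: in Flink 1.3, incremental checkpoints are switched on via a constructor flag on the RocksDB backend. A minimal sketch, assuming Flink 1.3 and a made-up HDFS checkpoint path:

    import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class IncrementalCheckpointSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Checkpoint every 60 seconds.
            env.enableCheckpointing(60_000);

            // 'true' enables incremental checkpoints, which upload only new SST
            // files instead of the full state. The path is a placeholder.
            env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints", true));

            // ... build and execute the actual job here ...
        }
    }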

Tuning RocksDB

2017-05-03 Thread Jason Brelloch
anything I can do to increase the speed of the checkpoints, or anywhere I can look to debug the issue? (Nothing seems out of the ordinary in the flink logs or rocksDB logs) Thanks!

Fetching metrics failed.

2017-04-20 Thread Jason Brelloch
these metrics requests, or does anyone know what is causing them? Thanks,

Re: Flink Cluster is savepointing jobmanager instead of external filesystem

2016-10-16 Thread Jason Brelloch
pache.org/projects/flink/flink-docs-release-1.2/setup/savepoints.html#configuration Cheers, Aljoscha. On Fri, 14 Oct 2016 at 19:03 Jason Brelloch <jb.bc@gmail.com> wrote: It is a standalone cluster. On Fri, Oct 14, 2016
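
The configuration section linked above covers the default savepoint target directory, which is what points savepoints at an external filesystem rather than the job manager (the issue in this thread). A minimal sketch of what that could look like in flink-conf.yaml for Flink 1.2; the HDFS path is a made-up example:

    # placeholder path; adjust to your filesystem
    state.savepoints.dir: hdfs:///flink/savepoints

A savepoint is then triggered with bin/flink savepoint <jobID>, optionally followed by an explicit target directory that overrides the default.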

Re: Generate timestamps in front of event for event time windows

2016-08-04 Thread Jason Brelloch
works and I only get the two events I am supposed to:

    val stream = env.fromCollection(inputEvents)
      .assignAscendingTimestamps((e: QualifiedEvent) => { e.event.created.toEpochMilli })
      .keyBy((e: QualifiedEvent) => {

Re: Generate timestamps in front of event for event time windows

2016-08-03 Thread Jason Brelloch
rtConfiguration.alertId.toString })
      .timeWindow(Time.minutes(5))
      .apply(new GrouperFunction).name("Grouper Function")

On Wed, Aug 3, 2016 at 2:29 PM, Jason Brelloch <jb.bc@gmail.com> wrote: Hey guys, I am trying to use event time along with a custom

Generate timestamps in front of event for event time windows

2016-08-03 Thread Jason Brelloch
minute window. Is there some way to force the timestamp to arrive in the window before the event that generated it? Thanks!
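
For readers of this thread: the usual way to attach event timestamps and watermarks so that events land in the correct event-time window is a timestamp/watermark assigner. A minimal sketch using Flink's bounded-out-of-orderness extractor; MyEvent and its fields are hypothetical stand-ins for the thread's QualifiedEvent:

    import org.apache.flink.streaming.api.TimeCharacteristic;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
    import org.apache.flink.streaming.api.windowing.time.Time;

    public class EventTimeSketch {

        // Hypothetical event type; the thread's QualifiedEvent would play this role.
        public static class MyEvent {
            public String key = "k";
            public long createdMillis = 0L;
        }

        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

            env.fromElements(new MyEvent(), new MyEvent())
                    // The event's own creation time becomes its timestamp; watermarks
                    // trail the largest seen timestamp by 10 seconds, so events that
                    // arrive a little late still fall into the right event-time window.
                    .assignTimestampsAndWatermarks(
                            new BoundedOutOfOrdernessTimestampExtractor<MyEvent>(Time.seconds(10)) {
                                @Override
                                public long extractTimestamp(MyEvent e) {
                                    return e.createdMillis;
                                }
                            })
                    .keyBy("key")
                    // .timeWindow(Time.minutes(5)).apply(...) would follow here,
                    // mirroring the pipeline discussed in the thread.
                    .print();

            env.execute("event time sketch");
        }
    }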

Re: Reprocessing data in Flink / rebuilding Flink state

2016-07-29 Thread Jason Brelloch
from the Kafka log). Since I would have everything needed to rebuild the state persisted in a Kafka topic, I don't think I would need a second Flink job for this? Thanks, Josh. On Thu, Jul 28, 2016 at 6:57 PM, Jason Brelloch <jb.bc@gmail.com>

Re: Reprocessing data in Flink / rebuilding Flink state

2016-07-28 Thread Jason Brelloch
hich I believe is the Samza solution): http://oi67.tinypic.com/219ri95.jpg Has anyone done something like this already with Flink? If so, are there any examples of how to do this replay & switchover (rebuild state by consuming from a historical log, then switch o
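
For the replay half of this pattern, the usual Flink building block is a Kafka source that starts from the beginning of the retained log. A minimal sketch, assuming the Kafka 0.9 connector of that era; the topic name, broker address, and group id are placeholders:

    import java.util.Properties;

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;
    import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

    public class ReplaySketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // "localhost:9092", "state-rebuild", and "events" are placeholder names.
            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "localhost:9092");
            props.setProperty("group.id", "state-rebuild");
            // Start from the earliest retained offset when the group has no
            // committed offsets, so the whole historical log is replayed.
            props.setProperty("auto.offset.reset", "earliest");

            env.addSource(new FlinkKafkaConsumer09<>("events", new SimpleStringSchema(), props))
                    // ... a keyed, stateful operator that rebuilds state would go here ...
                    .print();

            env.execute("replay sketch");
        }
    }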

Re: Flink HDFS State Backend

2016-04-18 Thread Jason Brelloch
d storing a handle to that in the JobManager would be more expensive. Cheers, Aljoscha. On Mon, 18 Apr 2016 at 17:20 Jason Brelloch <jb.bc@gmail.com> wrote: Hi everyone, I am trying to set up Flink with an HDFS state backend. I co

Flink HDFS State Backend

2016-04-18 Thread Jason Brelloch
Hi everyone, I am trying to set up Flink with an HDFS state backend. I configured the state.backend and state.backend.fs.checkpointdir parameters in flink-conf.yaml. I run the Flink task and the checkpoint directories are created in HDFS, so it appears it can connect and talk to HDFS just fine.
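
For reference, a minimal sketch of those two keys in flink-conf.yaml; "filesystem" selects the filesystem state backend, and the namenode host, port, and path are placeholders:

    # example values; the HDFS host, port, and path are placeholders
    state.backend: filesystem
    state.backend.fs.checkpointdir: hdfs://namenode:9000/flink/checkpoints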