; " + bytes[bytes.length - 1]);
> }
>}
>final long endGet = System.nanoTime();
>
>System.out.println("end get - duration: " + ((endGet - beginGet) /
> 1_000_000) + " ms");
> }
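For reference, a self-contained sketch of this kind of timing loop, with an in-memory map standing in for the RocksDB-backed state being measured (the map, the key count, and names like numKeys are assumptions, not code from the original mail):

    import java.util.HashMap;
    import java.util.Map;

    public class GetBenchmark {
        public static void main(String[] args) {
            // Assumption: a plain HashMap stands in for the keyed state under test.
            Map<String, byte[]> state = new HashMap<>();
            int numKeys = 100_000; // hypothetical size
            for (int i = 0; i < numKeys; i++) {
                state.put("key-" + i, new byte[] { (byte) i });
            }

            final long beginGet = System.nanoTime();
            for (int i = 0; i < numKeys; i++) {
                byte[] bytes = state.get("key-" + i);
                // Print the last byte so the JIT cannot drop the reads entirely.
                if (i == numKeys - 1) {
                    System.out.println("last byte: " + bytes[bytes.length - 1]);
                }
            }
            final long endGet = System.nanoTime();

            System.out.println("end get - duration: "
                    + ((endGet - beginGet) / 1_000_000) + " ms");
        }
    }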
>
>
> Depending on how smooth the 1.3 release
a reason for your problem?
>
> On 26.05.2017 at 15:50, Robert Metzger <rmetz...@apache.org> wrote:
>
> Hi Jason,
>
> This error is unexpected. I don't think it's caused by insufficient memory.
> I'm including Stefan into the conversation, he's the RocksDB expert :)
>
> On
> public void flatMap2(RulesEvent rulesEvent,
>     Collector<Tuple2<TrackEvent, RulesEvent>> collector) throws Exception {
>   t2.f1 = rulesEvent;
>   // collector.collect(t2);
> }
> });
> ds.printToErr();
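For context, a hedged, self-contained sketch of the connected-streams pattern the fragment above appears to use; the String element types, the stream contents, and the job name are assumptions made to keep it runnable:

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction;
    import org.apache.flink.util.Collector;

    public class RulesJoinSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

            DataStream<String> trackEvents = env.fromElements("event-a", "event-b");
            DataStream<String> rules = env.fromElements("rule-1");

            DataStream<Tuple2<String, String>> ds = trackEvents
                .connect(rules)
                .flatMap(new CoFlatMapFunction<String, String, Tuple2<String, String>>() {
                    // Latest rule seen so far; a real job would hold this in managed state.
                    private String currentRule;

                    @Override
                    public void flatMap1(String event, Collector<Tuple2<String, String>> out) {
                        if (currentRule != null) {
                            out.collect(Tuple2.of(event, currentRule));
                        }
                    }

                    @Override
                    public void flatMap2(String rule, Collector<Tuple2<String, String>> out) {
                        currentRule = rule; // update the rule, emit nothing
                    }
                });

            ds.printToErr();
            env.execute("rules-join-sketch");
        }
    }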
>
> Best,
>
>
>
> https://github.com/facebook/rocksdb/issues/1988
>
> We provide a custom version of RocksDB with Flink 1.2.1 (where we fixed
> the slow merge operations) until we can upgrade to a newer version of
> RocksDB. So updating to 1.2.1 should fix the slowdown you observe.
>
> On 03.05.201
> actions w.r.t. how serializers are used are kind of inverted between
> operation and checkpointing. For Flink 1.3 we will also introduce
> incremental checkpoints on RocksDB that piggyback on the SST files. Flink
> 1.2 writes checkpoints and savepoints fully, in a custom format.
>
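A minimal sketch of what enabling those incremental RocksDB checkpoints looks like in Flink 1.3+; the checkpoint URI and interval are placeholder assumptions:

    import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class IncrementalCheckpointSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

            // Second constructor argument enables incremental checkpoints (Flink 1.3+).
            // The checkpoint URI is a hypothetical example.
            env.setStateBackend(
                new RocksDBStateBackend("hdfs:///flink/checkpoints", true));

            // Checkpoint every 60 seconds (illustrative value).
            env.enableCheckpointing(60_000);
        }
    }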
anything I can do to increase the speed of the checkpoints, or
anywhere I can look to debug the issue? (Nothing seems out of the ordinary
in the Flink logs or RocksDB logs.)
Thanks!
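Not from the original thread, but a sketch of the checkpoint-tuning knobs usually worth checking in this situation; the interval and timeout values are illustrative assumptions, not recommendations:

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class CheckpointTuningSketch {
        public static void main(String[] args) {
            StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

            env.enableCheckpointing(60_000); // checkpoint every 60 s (illustrative)
            // Give the job breathing room between consecutive checkpoints.
            env.getCheckpointConfig().setMinPauseBetweenCheckpoints(30_000);
            // Abort a checkpoint that takes longer than 10 minutes.
            env.getCheckpointConfig().setCheckpointTimeout(600_000);
        }
    }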
these metrics requests, or does anyone
know what is causing them?
Thanks,
> https://ci.apache.org/projects/flink/flink-docs-release-1.2/setup/savepoints.html#configuration
>
> Cheers,
> Aljoscha
>
> On Fri, 14 Oct 2016 at 19:03 Jason Brelloch <jb.bc@gmail.com> wrote:
>
>> It is a standalone cluster.
>>
>> On Fri, Oct 14, 2016
>> works and I only get the two events I am supposed to:
>>
>> val stream = env.fromCollection(inputEvents)
>>   .assignAscendingTimestamps((e: QualifiedEvent) => e.event.created.toEpochMilli)
>>   .keyBy((e: QualifiedEvent) => e.alertConfiguration.alertId.toString)
>>   .timeWindow(Time.minutes(5))
>>   .apply(new GrouperFunction).name("Grouper Function")
On Wed, Aug 3, 2016 at 2:29 PM, Jason Brelloch <jb.bc@gmail.com> wrote:
> Hey guys,
>
> I am trying to use event time along with a custom
minute window.
Is there some way to force the timestamp to arrive in the window before the
event that generated it?
Thanks!
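A sketch of the usual event-time answer, under the assumption of a hypothetical MyEvent type with an epoch-millis accessor: with a bounded-out-of-orderness watermark, the window does not fire until the watermark passes its end, so elements arriving within the bound still land in the right window regardless of arrival order.

    import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
    import org.apache.flink.streaming.api.windowing.time.Time;

    public class MyEventTimestampExtractor
            extends BoundedOutOfOrdernessTimestampExtractor<MyEvent> {

        public MyEventTimestampExtractor() {
            super(Time.seconds(10)); // tolerate 10 s of out-of-orderness (assumption)
        }

        @Override
        public long extractTimestamp(MyEvent e) {
            return e.getCreated(); // hypothetical epoch-millis accessor
        }
    }

    // Hypothetical event type used only for this sketch.
    class MyEvent {
        private final long created;
        MyEvent(long created) { this.created = created; }
        long getCreated() { return created; }
    }

It would be attached with stream.assignTimestampsAndWatermarks(new MyEventTimestampExtractor()) before the keyBy/window.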
from the Kafka log). Since I would
> have everything needed to rebuild the state persisted in a Kafka topic, I
> don't think I would need a second Flink job for this?
>
> Thanks,
> Josh
>
>
>
>
> On Thu, Jul 28, 2016 at 6:57 PM, Jason Brelloch <jb.bc@gmail.com>
which I believe is the Samza
> solution):
>
> http://oi67.tinypic.com/219ri95.jpg
>
> Has anyone done something like this already with Flink? If so, are there
> any examples of how to do this replay & switchover (rebuild state by
> consuming from a historical log, then switch over
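For what it's worth, a hedged sketch of the replay half with the Flink Kafka consumer; the topic name, properties, and String schema are assumptions, and setStartFromEarliest() requires Flink 1.3+:

    import java.util.Properties;

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;
    import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

    public class ReplaySketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "localhost:9092"); // assumption
            props.setProperty("group.id", "replay-sketch");           // assumption

            FlinkKafkaConsumer010<String> consumer =
                new FlinkKafkaConsumer010<>("events", new SimpleStringSchema(), props);
            // Replay the full topic from the beginning to rebuild state (Flink 1.3+).
            consumer.setStartFromEarliest();

            DataStream<String> events = env.addSource(consumer);
            events.printToErr();
            env.execute("replay-sketch");
        }
    }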
and storing a handle to that in the JobManager would be more expensive.
>
> Cheers,
> Aljoscha
>
> On Mon, 18 Apr 2016 at 17:20 Jason Brelloch <jb.bc@gmail.com> wrote:
>
Hi everyone,
I am trying to set up Flink with an HDFS state backend. I configured the
state.backend and state.backend.fs.checkpointdir parameters in the
flink-conf.yaml. I run the Flink job and the checkpoint directories are
created in HDFS, so it appears it can connect and talk to HDFS just fine.
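For anyone finding this later, a sketch of the corresponding flink-conf.yaml entries; the namenode host, port, and path are placeholders:

    state.backend: filesystem
    state.backend.fs.checkpointdir: hdfs://namenode:9000/flink/checkpoints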