Hi,
I think if the Table API/SQL API evolves enough, it should be possible to supply
a Flink program as just a SQL query with source/sink definitions.
Hopefully, in the future. :-)
Cheers,
Aljoscha
On Fri, 22 Apr 2016 at 23:10 Fabian Hueske wrote:
> Hi Alex,
>
> welcome to the
Hi Prez,
Thanks for sharing, the community is always glad to welcome new Flink users.
Best,
Marton
On Sat, Apr 23, 2016 at 6:01 AM, Prez Cannady
wrote:
> We’ve completed our first full sweep on a five node Flink cluster and it
> went beautifully. On behalf of my
We’ve completed our first full sweep on a five node Flink cluster and it went
beautifully. On behalf of my team, thought I’d say thanks for all the support.
Lots more learning and work to do, so we look forward to working with you all.
Prez Cannady
p: 617 500 3378
e:
I was trying to implement this (forcing Flink to handle all values from the
input) but had no success...
Probably I am misunderstanding something about Flink's windowing mechanism.
I've created my 'finishing' trigger, which is basically a copy of the purging
trigger, but I was not able to make it work:
Thanks for taking the time. That seems like it would be complicated without
good knowledge of the overall architecture. I might give it a shot anyway.
On Fri, Apr 22, 2016 at 4:22 PM, Fabian Hueske wrote:
> Hi Jonathan,
>
> I thought about your use case again. I'm afraid, the
Hi Konstantin,
this exception is thrown if you do not set the time characteristic to event
time and assign timestamps.
Please try to add
> env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
after you obtained the StreamExecutionEnvironment.
Best, Fabian
2016-04-22 15:47 GMT+02:00
Hi
I am sending data using the KafkaProducer API:
imageRecord = new ProducerRecord<>(topic, messageKey, imageData);
producer.send(imageRecord);
And in the Flink program I try to fetch the data using FlinkKafkaConsumer08.
Below is the sample code
Hello,
The next issue in the string of things I'm solving is that my application fails
with the message 'Connection unexpectedly closed by remote task manager'.
The YARN log shows the following:
Container [pid=4102,containerID=container_1461341357870_0004_01_15] is
running beyond physical memory
Actually, a follow-up question: is the map function single-threaded (within one
task manager, that is)? If it's not, then lazy initialization won't work,
right?
On Fri, Apr 22, 2016 at 11:50 AM, Stephan Ewen wrote:
> You may also be able to initialize the client only in the
Hi guys!
I’m new to Flink, and actually to this mailing list as well :) This is my first
message.
I’m still reading the documentation, and I would say Flink is an amazing
system!! Thanks to everybody who participated in the development!
The information I didn’t find in the documentation - if it is
Outstanding! Thanks, Aljoscha.
On Fri, Apr 22, 2016 at 2:06 AM, Aljoscha Krettek
wrote:
> Hi,
> you could use a RichMapFunction that has an open method:
>
> data.map(new RichMapFunction[...]() {
> def open(): Unit = {
> // initialize client
> }
>
> def map(input:
On 21/04/2016 16:46, Aljoscha Krettek wrote:
Hi,
I would be very happy about improvements to our RocksDB performance.
What are the RocksDB Java benchmarks that you are running? In Flink,
we also have to serialize/deserialize every time that we access
RocksDB using our TypeSerializer. Maybe
No problem at all; there are not many Flink people and a lot of people asking
questions - it must be hard to keep track of each person's issues :)
Yes, it is not as simple as a 'contains' operator: I need to collect some
amount of tuples in order to create an in-memory Lucene index. After that I
will filter
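The batch-then-filter idea described above can be sketched in plain Java. This is illustrative only: a HashSet-free linear scan stands in for the Lucene index, and the class and method names are invented for the example:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchIndexFilter {
    private final int batchSize;
    private final List<String> buffer = new ArrayList<>();

    public BatchIndexFilter(int batchSize) { this.batchSize = batchSize; }

    // Collect values until the batch is full, then "index" the batch and
    // return the buffered values matching the query term. Returns an empty
    // list while the batch is still filling up.
    public List<String> addAndMaybeFlush(String value, String queryTerm) {
        buffer.add(value);
        if (buffer.size() < batchSize) {
            return List.of();                          // batch not complete yet
        }
        List<String> matches = new ArrayList<>();
        for (String v : buffer) {
            if (v.contains(queryTerm)) {               // stand-in for a Lucene query
                matches.add(v);
            }
        }
        buffer.clear();                                // start the next batch
        return matches;
    }
}
```

In a real Flink job this buffering would live in operator state so that the buffered tuples survive failures; a plain field like the one above would be lost on restart.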
Hi guys,
trying to run this example:
StreamExecutionEnvironment env =
StreamExecutionEnvironment.getExecutionEnvironment();
DataStreamSource<...> source = env.addSource(new
SourceFunction<...>() {
@Override
public void
Hi Jonathan,
I thought about your use case again. I'm afraid the approach I proposed is
not working due to limitations of the Evictor interface.
The only way that I see to implement your use case is to implement a custom
stream operator by extending AbstractStreamOperator and implementing the
If you've serialized your data with a custom format, you can also implement
a custom deserializer using the KeyedDeserializationSchema.
On Fri, Apr 22, 2016 at 2:35 PM, Till Rohrmann wrote:
> Depending on how the key value pair is encoded, you could use the
>
Depending on how the key value pair is encoded, you could use the
TypeInformationKeyValueSerializationSchema where you provide the
BasicTypeInfo.STRING_TYPE_INFO and
PrimitiveArrayTypeInfo.BYTE_PRIMITIVE_ARRAY_TYPE_INFO as the key and value
type information. But this only works if your data was
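To make the encoding constraint concrete, here is a plain-Java sketch of one way a (String key, byte[] value) pair might be laid out as bytes, using a simple length-prefixed layout. This layout is an assumption for illustration only; Flink's TypeInformationKeyValueSerializationSchema uses its own TypeSerializers internally, not necessarily these exact bytes:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class KeyValueEncoding {

    // Encode the pair as: [keyLen][keyBytes][valueLen][valueBytes].
    public static byte[] encode(String key, byte[] value) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
        out.writeInt(keyBytes.length);
        out.write(keyBytes);
        out.writeInt(value.length);
        out.write(value);
        return bos.toByteArray();
    }

    // Read back only the key, skipping nothing: the key comes first.
    public static String decodeKey(byte[] record) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(record));
        byte[] keyBytes = new byte[in.readInt()];
        in.readFully(keyBytes);
        return new String(keyBytes, StandardCharsets.UTF_8);
    }
}
```

The point of the thread above is that producer and consumer must agree on this layout exactly; if the producer wrote raw bytes with no length prefixes, a schema expecting prefixed fields cannot decode them.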
But be aware that this method only returns a non-null string if the
binaries have been built with Maven. Otherwise it will return null.
Cheers,
Till
On Fri, Apr 22, 2016 at 12:12 AM, Trevor Grant
wrote:
> dug through the codebase, in case any others want to know:
>
>
Hi Shannon,
if you need this feature (assigning a range of web server ports) for your use
case, then we would have to add it. If you want to do it, then it would
help us a lot.
I think the documentation is a bit outdated here. The port is either chosen
from the range of ports or an ephemeral port
Hi,
I'm afraid you found a bug. I opened a Jira issue for it:
https://issues.apache.org/jira/browse/FLINK-3803
Cheers,
Aljoscha
On Fri, 22 Apr 2016 at 13:20 Aljoscha Krettek wrote:
> Hi,
> I'm investigating.
>
> Cheers,
> Aljoscha
>
> On Tue, 19 Apr 2016 at 13:08
Hi,
I'm investigating.
Cheers,
Aljoscha
On Tue, 19 Apr 2016 at 13:08 Konstantin Knauf
wrote:
> Hi everyone,
>
> we are using a long running yarn session and changed
> jobmanager.web.checkpoints.history to 20. On the dashboard's job manager
> panel I can see the
Hi Ron,
I see that this leads to a bit of a hassle for you. I'm very reluctant to
allow the general RichFunction interface in functions that are used inside
state because this has quite some implications. Maybe we can add a
simplified interface just for functions that are used inside state to
Hi,
I think the "auto.offset.reset" parameter is only used if your consumer has
never read from a topic. To simulate being a new consumer you can set the
"group.id" property to a new random value.
Cheers,
Aljoscha
On Fri, 22 Apr 2016 at 03:10 Jack Huang wrote:
> Hi all,
>
Hi,
you could use a RichMapFunction that has an open method:
data.map(new RichMapFunction[IN, OUT]() {
  def open(): Unit = {
    // initialize client
  }

  def map(input: IN): OUT = {
    // use client
  }
})
the open() method is called before any elements are passed to the function.
The counterpart
Hi Sowmya,
I'm afraid at the moment it is not possible to store custom state in the
filter or select function. If you need access to the whole sequence of
matching events then you have to put this code in the select clause of your
pattern stream.
Cheers,
Till
On Fri, Apr 22, 2016 at 7:55 AM,