Hi Enrico,
the docs of the 1.3-SNAPSHOT are a bit out of sync right now, but they
will be updated within the next one to two weeks.
We recently introduced so-called "time indicators". These are attributes
that correspond to Flink's time and watermarks. You declare a logical
field that represents the event or processing time of a record.
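For example (a sketch against the 1.3 Table API; the field names are made
up, and a real job would assign timestamps and watermarks first):

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);

DataStream<Tuple2<String, Long>> stream = env.fromElements(
    Tuple2.of("a", 1L), Tuple2.of("b", 2L));

// "f0, f1" are the physical tuple fields; the trailing "t.rowtime" declares
// a logical field "t" backed by the records' event-time timestamps.
Table table = tableEnv.fromDataStream(stream, "f0, f1, t.rowtime");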
Hey Mauro,
I'm not aware of any reason for that. I'll loop in Chesnay; maybe he
knows why. @Chesnay, wouldn't it be helpful to also archive the jars
using the HistoryServer?
Timo
On 17.05.17 at 12:31, Mauro Cortellazzi wrote:
Hi Flink community,
is there a particular reason to delete the
Hi,
in general, a class-level variable is not managed by Flink unless it is
declared as state or the function implements the ListCheckpointed
interface. Allowing infinite lateness also means that your window
contents have to be stored indefinitely. I'm not sure if I understand your
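As a minimal sketch of the first point (Flink 1.2+; class and field names
are made up), a class-level counter that Flink does manage could look like
this:

import java.util.Collections;
import java.util.List;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.checkpoint.ListCheckpointed;

public class CountingMapper implements MapFunction<String, String>, ListCheckpointed<Long> {

    // Without ListCheckpointed (or keyed/operator state), this field would
    // simply be lost on failure and restart.
    private long count = 0L;

    @Override
    public String map(String value) {
        count++;
        return count + ": " + value;
    }

    @Override
    public List<Long> snapshotState(long checkpointId, long timestamp) {
        return Collections.singletonList(count);
    }

    @Override
    public void restoreState(List<Long> state) {
        for (Long c : state) {
            count += c;
        }
    }
}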
This is called "stop" in Flink. You can find a short description here:
https://ci.apache.org/projects/flink/flink-docs-release-1.2/setup/cli.html
" The difference between cancelling and stopping a (streaming) job is
the following:
On a cancel call, the operators in a job immediately receive
Hi Gwenhael,
I'm not a Kafka expert, but if something is hardcoded that should not be,
it might be worth opening an issue for it. I'll loop in somebody who
might know more about your problem.
Timo
On 26/04/17 at 14:47, Gwenhael Pasquiers wrote:
Hello,
Up to now we’ve been using Kafka with JAAS
Hi,
you are right, there are some limitations regarding RichReduceFunctions
on windows. Maybe the new AggregateFunction (`window.aggregate()`) could
solve your problem: you can provide an accumulator, which is your custom
state that you update for each record. I couldn't find any
documentation
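As a rough sketch (using the Flink 1.3 signature, where add() mutates the
accumulator; later versions changed add() to return it), an average over a
window could look like this:

import org.apache.flink.api.common.functions.AggregateFunction;

public class AverageAggregate
        implements AggregateFunction<Long, AverageAggregate.Acc, Double> {

    // The accumulator is the custom per-window state.
    public static class Acc {
        public long sum;
        public long count;
    }

    @Override
    public Acc createAccumulator() {
        return new Acc();
    }

    @Override
    public void add(Long value, Acc acc) {
        acc.sum += value;
        acc.count++;
    }

    @Override
    public Double getResult(Acc acc) {
        return acc.count == 0 ? 0.0 : (double) acc.sum / acc.count;
    }

    @Override
    public Acc merge(Acc a, Acc b) {
        a.sum += b.sum;
        a.count += b.count;
        return a;
    }
}

// Usage: stream.keyBy(...).timeWindow(Time.minutes(1)).aggregate(new AverageAggregate());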
Hi,
the Flink community decided to have the most important connectors (e.g.
Kafka) in the core repository. All other connectors are in Apache Bahir
(http://bahir.apache.org/). You can find the flink-connector-redis there.
Timo
On 26/04/17 at 12:54, yunfan123 wrote:
It exists in 1.1.5.
Hi Marc,
maybe Greg (in CC) can help answer your question?
Regards,
Timo
On 29/03/17 at 11:50, Kaepke, Marc wrote:
Hi guys,
I can't find on the web which graph partitionings are supported by Gelly.
During my search I found this link, but the ticket is still open.
Hi Kamil,
the performance implications might result from which state type the
underlying functions use internally: WindowFunctions use ListState or
ReducingState, fold() uses FoldingState. It also depends on the size of
your state and the state backend you are using. I recommend the
I think it depends very much on your use case; maybe you can use a
combiner first to reduce the number of records per key. Maybe you can
explain your application a bit more (which window, type of aggregations).
It often helps, e.g., to introduce an artificial key and merge the results
of multiple
Hi Gwenhael,
I will loop in Gordon because he is more familiar with the Kafka
connectors. Do you have experience with two versions in the same project?
On 20/03/17 at 15:57, Gwenhael Pasquiers wrote:
Hi,
Before doing it myself I thought it would be better to ask.
We need to consume from
Hi,
using keyBy, Flink ensures that every set of records with the same key is
sent to the same operator instance; otherwise it would not be possible to
process them as a whole. It depends on your use case whether it is also
OK that another operator processes parts of this set of records. You can
implement you
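A minimal sketch of the routing guarantee (made-up data; keyBy(0)
partitions on the first tuple field):

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<Tuple2<String, Integer>> records = env.fromElements(
    Tuple2.of("a", 1), Tuple2.of("b", 2), Tuple2.of("a", 3));

// Both "a" records are processed by the same parallel instance of the
// downstream sum operator, so they can be aggregated as a whole.
records.keyBy(0).sum(1).print();

env.execute();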
Hi Abhinav,
can you check if you have configured your AWS setup correctly? The S3
configuration might be missing.
https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/aws.html#missing-s3-filesystem-configuration
Regards,
Timo
On 18/03/17 at 01:33, Bajaj, Abhinav wrote:
Hi,
Hi Justin,
thank you for reporting your issues. I have never tried the Table API
with SBT, but `flink-table` should not declare dependencies on core
modules; this is only done in `test` scope. Maybe you have to specify the
right scope manually? You are right, the mentioned Jira should be fixed
Hi Sam,
could you explain the behavior a bit more? How does the window function
behave? Is it not triggered, or what is its content? What is the result
if you don't use a window function?
Timo
On 08/03/17 at 02:59, Sam Huang wrote:
btw, the reduce function works well, I've printed out the
Hi Dominik,
did you take a look at the logs? Maybe the exception is not shown in the
CLI but only in the logs.
Timo
On 07/03/17 at 23:58, Dominik Safaric wrote:
Hi all,
I would appreciate any help or advice regarding default Java runtime
shutdown hooks and canceling Flink jobs.
Hi Vadim,
this of course depends on your use case. The question is: how large is
your state per pane, and how much memory is available to Flink?
Are you using incremental aggregates so that only the aggregated value
per pane has to be kept in memory?
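A minimal sketch of such an incremental aggregate (made-up tuple data;
reduce() keeps only one running value per key and pane instead of
buffering all elements):

import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<Tuple2<String, Long>> stream = env.fromElements(
    Tuple2.of("a", 1L), Tuple2.of("a", 2L));

stream.keyBy(0)
      .timeWindow(Time.hours(1))
      // Incrementally folds each pane into a single (key, sum) value.
      .reduce(new ReduceFunction<Tuple2<String, Long>>() {
          @Override
          public Tuple2<String, Long> reduce(Tuple2<String, Long> a,
                                             Tuple2<String, Long> b) {
              return Tuple2.of(a.f0, a.f1 + b.f1);
          }
      })
      .print();

env.execute();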
Regards,
Timo
On 20/02/17 at 16:34
Forget what I said about omitting `var`; this would remove the field
from the POJO. I opened a PR to fix the issue:
https://github.com/apache/flink/pull/3318
As a workaround: if you just want to have a POJO for the Cassandra sink,
you don't need to add the `@BeanProperty` annotation. Flink
Hi Adarsh,
I looked into your issue. The problem is that `var` generates
Scala-style getters/setters while the annotation generates Java-style
getters/setters. Right now Flink only supports one style per POJO; I
don't know why we have this restriction. I will work on a fix for that.
Is it
Hi,
java.sql.Timestamps have to have a format like "yyyy-mm-dd
hh:mm:ss[.fff...]". In your case you need to parse the value as a String
and write your own scalar function for the parsing.
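A minimal sketch of such a function (assuming the flink-table
ScalarFunction base class; the pattern matches the quoted data below and
the registration name is made up):

import java.sql.Timestamp;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import org.apache.flink.table.functions.ScalarFunction;

public class ParseDateTime extends ScalarFunction {
    public Timestamp eval(String s) {
        try {
            // Parses values like "4/1/2014 0:11:00" into java.sql.Timestamp.
            SimpleDateFormat format = new SimpleDateFormat("M/d/yyyy H:mm:ss");
            return new Timestamp(format.parse(s).getTime());
        } catch (ParseException e) {
            throw new RuntimeException("Could not parse: " + s, e);
        }
    }
}

// Registration: tableEnv.registerFunction("parseTs", new ParseDateTime());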
Regards,
Timo
On 04/02/17 at 17:46, nsengupta wrote:
"4/1/2014 0:11:00",40.769,-73.9549,"B02512"
I created an issue to make this a bit more user-friendly in the future.
https://issues.apache.org/jira/browse/FLINK-5714
Timo
On 05/02/17 at 06:08, nsengupta wrote:
Thanks, Till, for taking time to share your understanding.
-- N
On Sun, Feb 5, 2017 at 12:49 AM, Till Rohrmann [via Apache
Hi Nico,
writeAsCsv has limited functionality in this case. I recommend using the
Bucketing File Sink [1], where you can specify an interval and a batch
size that control when to flush.
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/connectors/filesystem_sink.html#bucketing-file-sink
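A minimal sketch (method and class names per the linked 1.2 connector
docs; the path and sizes are placeholders):

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.fs.StringWriter;
import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;
import org.apache.flink.streaming.connectors.fs.bucketing.DateTimeBucketer;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<String> lines = env.fromElements("a", "b", "c");

BucketingSink<String> sink = new BucketingSink<>("/tmp/flink-out");
sink.setBucketer(new DateTimeBucketer<String>("yyyy-MM-dd--HHmm"));
sink.setWriter(new StringWriter<String>());
sink.setBatchSize(1024 * 1024 * 64);         // roll the part file after 64 MB
sink.setInactiveBucketCheckInterval(60000);  // check for idle buckets every minute...
sink.setInactiveBucketThreshold(60000);      // ...and flush/close buckets idle that long

lines.addSink(sink);
env.execute();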
Hi Matt,
the keyBy() on ConnectedStreams takes two parameters to specify the key
of the left and of the right stream. Same keys end up in the same
CoMapFunction/CoFlatMapFunction instance. If you want to group both
streams on a common key, then you can use .union() instead of .connect().
I hope that
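A minimal sketch of the two-key variant (made-up tuple streams; field 0 is
the shared key on both sides):

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction;
import org.apache.flink.util.Collector;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<Tuple2<String, Integer>> left = env.fromElements(Tuple2.of("a", 1));
DataStream<Tuple2<String, Long>> right = env.fromElements(Tuple2.of("a", 2L));

left.connect(right)
    // First argument: key of the left stream; second: key of the right
    // stream. Equal keys meet in the same CoFlatMapFunction instance.
    .keyBy(0, 0)
    .flatMap(new CoFlatMapFunction<Tuple2<String, Integer>, Tuple2<String, Long>, String>() {
        @Override
        public void flatMap1(Tuple2<String, Integer> value, Collector<String> out) {
            out.collect("left: " + value);
        }

        @Override
        public void flatMap2(Tuple2<String, Long> value, Collector<String> out) {
            out.collect("right: " + value);
        }
    })
    .print();

env.execute();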
You are using an old version of Flink (0.10.2). A FlinkKafkaConsumer010
was not present at that time. You need to upgrade to Flink 1.2.
Timo
On 17/01/17 at 15:58, Neil Derraugh wrote:
This is really a Zeppelin question, and I’ve already posted to the user list
there. I’m just trying to
Hi Sunil,
what is the content of args[0] when you execute public static void
main(String[] args) { System.out.println(args[0]); }
On 17/01/17 at 14:55, raikarsunil wrote:
Hi, I am not able to replace a value into a Spring placeholder. Below is
the XML code snippet.
This sounds like a RocksDB issue. Maybe Stefan (in CC) has an idea?
Timo
On 17/01/17 at 14:52, Avihai Berkovitz wrote:
Hello,
I am running a streaming job on a small cluster, and after a few hours
I noticed that my TaskManager processes are being killed by the OOM
killer. The processes
Hi Denis,
the first 1.2 release candidate (RC0) has already been created, and RC1
is on the way (maybe already this week). I think we can expect a 1.2
release in 3-4 weeks.
Regards,
Timo
On 17/01/17 at 10:04, denis.doll...@thomsonreuters.com wrote:
Hi all,
Do you have some ballpark estimate for a
Hi Dmitry,
the runtime supports an arbitrary number of inputs; however, the API
currently does not provide a convenient way to define them. You could use
the "union" operator to reduce the number of inputs. Otherwise, I think
you have to implement your own operator. It depends on your use case,
though.
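A minimal sketch of the union workaround (made-up streams; union requires
all streams to have the same type):

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<String> first = env.fromElements("a");
DataStream<String> second = env.fromElements("b");
DataStream<String> third = env.fromElements("c");

// One downstream operator now consumes three logical inputs through a
// single merged input.
first.union(second, third).print();

env.execute();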
Hi Yuhong,
as a solution you can specify the order of your POJO fields when
converting a DataSet (or DataStream) to a Table:

Table table = tableEnv
    .fromDataSet(env.fromCollection(data),
        "department AS a, " +
        "age AS b, " +
        "salary AS c, " +
        "name AS d")
    .select("a, b, c, d");
I'm not a Kafka expert but maybe Gordon (in CC) knows more.
Timo
On 09/01/17 at 11:51, Renjie Liu wrote:
Hi, all:
I'm using Flink 1.1.3 and the Kafka 0.9 consumer. I read its code, and it
says that the Kafka consumer will turn on auto offset commit if
checkpointing is not enabled. I've turned off
Hi Tao,
no, streaming jobs do not use managed memory yet. Managed memory is
useful for sorting, joining, and grouping bounded data; unbounded streams
do not need that.
It could be used in the future, e.g. to store state or for new operators,
but this is not on the roadmap so far.
Regards,
Hi David,
thanks for looking into this. Since you have already analyzed the issue
and solved/tested a fix, it would be great if you could open a pull
request for it. Every contribution is very welcome.
Regards,
Timo
On 04/12/16 at 23:35, Torok, David wrote:
I spent close to two days
]:
https://github.com/apache/flink/blob/master/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/Graph.java#L506
On 18 November 2016 at 10:25, Timo Walther
I think I identified the problem: input type inference cannot be used
because of missing information.
@Vasia: Why is the TypeExtractor in Graph.mapVertices(mapper) called
without information about the input type? Isn't the input of the
MapFunction known at this point (vertices.getType())?
--
Freundliche Grüße / Kind Regards
Timo Walther
Follow me: @twalthr
https://www.linkedin.com/in/twalthr
Hi,
Flink uses the signature of the class to determine the return types of a
function ("class MyClass implements MapFunction<String, Integer>"). You
always have to keep in mind that every generic in a "new
MapFunction<String, K>() { ... }" gets erased by Java to "new
MapFunction() { ... }". So variable K has to be
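To illustrate (a made-up sketch of the erasure problem, not code from the
original thread):

import org.apache.flink.api.common.functions.MapFunction;

// Problematic: K is only a type variable, so after erasure the anonymous
// class's signature carries no concrete output type Flink could extract.
public static <K> MapFunction<String, K> createsProblem() {
    return new MapFunction<String, K>() {
        @Override
        public K map(String value) {
            return null;
        }
    };
}

// Works: the concrete types survive in the anonymous class's signature.
MapFunction<String, Integer> works = new MapFunction<String, Integer>() {
    @Override
    public Integer map(String value) {
        return value.length();
    }
};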
I have opened a PR (https://github.com/apache/flink/pull/2619). It would
be great if you could try it and comment whether it solves your problem.
Timo
On 10/10/16 at 17:48, Timo Walther wrote:
I could reproduce the error locally. I will prepare a fix for it.
Timo
On 10/10/16 at 11:54,
.flink.api.java.io
import org.apache.flink.api.java.io.jdbc.JDBCInputFormat
Then, I can't use:
I tried to download code from git and recompile, also
--
Freundliche Grüße / Kind Regards
Timo Walther
Follow me: @twalthr
https://www.linkedin.com/in/twalthr
--
Freundliche Grüße / Kind Regards
Timo Walther
Follow me: @twalthr
https://www.linkedin.com/in/twalthr
e it. Can you give me an example?
----- Original Message -----
From: Timo Walther <twal...@apache.org>
To: user@flink.apache.org
Subject: Re: modify coGroup GlobalWindows GlobalWindow
Date: 2016-09-06 17:52
Hi,
will words2 always remain constant? If yes, you don't have to create a
stream out of it and c
--
Freundliche Grüße / Kind Regards
Timo Walther
Follow me: @twalthr
https://www.linkedin.com/in/twalthr
[(a,1)]; then after input (a,2), T1 is [(a,2)], not
T1=[(a,1), (a,2)], but T2 will not change.
I rewrote "GlobalWindows", but it does not work. Reading the code, I
found I must rewrite "GlobalWindow" and must modify "the class Serializer
extends TypeSerializer", but when I run it, execution never gets
there. Why? Can someone tell me?
--
Freundliche Grüße / Kind Regards
Timo Walther
Follow me: @twalthr
https://www.linkedin.com/in/twalthr
<rimin...@sina.cn> wrote:
> Hi,
> val env = StreamExecutionEnvironment.getExecutionEnvironment
> val tr = env.fromParallelCollection(data)
>
> I do not know how to initialize the data; can someone tell me..
> --------
>
>
>
--
Freundliche Grüße / Kind Regards
Timo Walther
Follow me: @twalthr
https://www.linkedin.com/in/twalthr
org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:137)
... 35 more
Am I missing something?
Thank you,
Davran.
--
Freundliche Grüße / Kind Regards
Timo Walther
Follow me: @twalthr
https://www.linkedin.com/in/twalthr
this, but it did not work.
dataSet.as('id as 'id, 'amount as 'amount)
dataSet.as('id, 'amount)
dataSet.as("id, amount")
thanks.
On Aug 1, 2016, at 6:03 PM, Timo Walther <twal...@apache.org> wrote:
I think you need to use ".as()" instead of "toTable()" to supply the field
or
.select('id, 'amount.sum as 'amount)
.where('amount > 0)
.toDataSet[TestPojo]
.print()
Thanks.
On Aug 1, 2016, at 5:50 PM, Timo Walther <twal...@apache.org> wrote:
Hi Kim,
as the exception says: POJOs have no deterministic
n exception like this.
org.apache.flink.api.table.ExpressionException: You cannot rename fields upon Table
creation: Field order of input type PojoType<….> is not deterministic.
The error occurs not in the Java program but in the Scala program.
How can I use a Java POJO with the Scala Table API?
--
Freundliche Grüße / Kind Regards
Timo Walther
Fol
= TableEnvironment.getTableEnvironment(env);
.
tableEnv.registerTable( "table1", table );
Table result = tableEnv.sql( "select * from table1" );
.
.
Is it possible to "unregister" a table or replace it with another one?
Thank you.
--
Freundliche Grüße / Kind
with
parallelism between 5-30. Would it help if we turn that down?
We do set taskmanager.heap.mb explicitly, but not the advanced config options.
Thanks,
Istvan
--
Freundliche Grüße / Kind Regards
Timo Walther
Follow me: @twalthr
https://www.linkedin.com/in/twalthr
undliche Grüße / Kind Regards
Timo Walther
Follow me: @twalthr
https://www.linkedin.com/in/twalthr
e all variables in the return type
can be deduced from the input type(s).
This puzzles me, as Flink should be able to infer the type from the
arguments. I know returns(...) and other workarounds to give type hints,
but they are kind of verbose. Any suggestions?
--
Freundliche Grüße / Kind Regards
Timo Walther
Follow me: @twalthr
https://www.linkedin.com/in/twalthr
I will assign this issue to myself and fix it soon, if that's OK?
Regards,
Timo
On 30.03.2016 11:30, Stephan Ewen wrote:
Looks like something we should fix though. Probably just needs a case
distinction in the TypeExtractor.
@Andrew, can you post the stack trace to the issue I linked?
We'll
I think your problem is that you declared "TupleEvent2" as a type
variable in your code, but I think you want to use a class that you
defined, right?
If so, this is the correct declaration:
MySourceFunction implements SourceFunction<TupleEvent2>
On 09.03.2016 09:28, Wang Yangjun wrote:
Hello,
I think in
Hi Radu,
the exception can have multiple causes. It would be great if you could
share some example code. In most cases the problem is the following:
public class MyMapper<WhatEverType> implements MapFunction<WhatEverType, WhatEverType> { }
new MyMapper<WhatEverType>();
The type WhatEverType is type-erased by Java. The type
+1 for adding it to the website instead of the wiki.
"Who is using Flink?" is always a difficult question to answer for
interested users.
On 19.10.2015 15:08, Suneel Marthi wrote:
+1 to this.
On Mon, Oct 19, 2015 at 3:00 PM, Fabian Hueske > wrote:
, Timo opened the thread.
On Tue, Aug 18, 2015 at 2:04 PM, Kristoffer Sjögren
sto...@gmail.com
wrote:
Yeah, I think I found the thread already... by Timo Walther?
On Tue, Aug 18, 2015 at 2:01 PM, Stephan Ewen
se...@apache.org
Hello Michael,
every time you write a Java program you should avoid unnecessary object
creation if you want an efficient program, because every created object
needs to be garbage collected later (which slows down your program's
performance). You can have small POJOs, just try to avoid calls to new in
your
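A minimal sketch of the reuse pattern (made-up types; note that reusing
output objects is only safe if downstream operators respect Flink's
object-reuse rules):

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.tuple.Tuple2;

public class ReusingMapper implements MapFunction<String, Tuple2<String, Integer>> {

    // Created once per parallel instance instead of once per record.
    private final Tuple2<String, Integer> reuse = new Tuple2<>();

    @Override
    public Tuple2<String, Integer> map(String value) {
        reuse.f0 = value;
        reuse.f1 = value.length();
        return reuse;
    }
}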