Hi Emmanuel,
In Java, the garbage collector runs automatically whenever memory runs
low, so triggering it remotely won't make any difference.
If you want to reuse the existing Java process without restarting it, you
have to stop the program code that is causing the OutOfMemoryError from
executing. Usually,
Hi Giacomo,
If I understand you correctly, you want your Flink job to execute with a
parallelism of 5. Just call setDegreeOfParallelism(5) on your
ExecutionEnvironment. That way, all operations, when possible, will be
performed using 5 parallel instances. This is also true for the DataSink
which
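For example, a minimal sketch (the paths and the trivial pipeline are
placeholders, not from this thread):

    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;

    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
    env.setDegreeOfParallelism(5); // all operators, including the sink, run with 5 parallel instances where possible
    DataSet<String> data = env.readTextFile("/path/to/input");
    data.writeAsText("/path/to/output"); // written as 5 files, one per parallel sink instance
    env.execute("parallelism example");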
ordered, I mean, it contains in order output1,
output2 ... output5? Or are they mixed?
Thanks a lot,
Giacomo
On Tue, Apr 14, 2015 at 11:58 AM, Maximilian Michels m...@apache.org
wrote:
Hi Giacomo,
If I understand you correctly, you want your Flink job to execute with a
parallelism of 5. Just
Hi Giacomo,
If you have your data stored in a Tuple inside a DataSet, then a call to
dataSet.sum(int field) should do it.
See Aggregation under
http://ci.apache.org/projects/flink/flink-docs-master/apis/programming_guide.html#transformations
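For example, a minimal sketch with made-up data:

    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.tuple.Tuple2;

    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
    DataSet<Tuple2<String, Integer>> data = env.fromElements(
        new Tuple2<String, Integer>("a", 1),
        new Tuple2<String, Integer>("b", 2),
        new Tuple2<String, Integer>("c", 3));
    data.sum(1).print(); // sums field 1 over all tuples; the non-aggregated field is arbitrary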
Best,
Max
On Tue, Apr 28, 2015 at 2:52 PM, Giacomo
This is because of recent changes to the documentation layout and
structure. The programming guide is now located at
http://ci.apache.org/projects/flink/flink-docs-master/apis/programming_guide.html
Best,
Max
On Fri, May 1, 2015 at 8:42 AM, sirinath sirinath19...@gmail.com wrote:
I have come
The exception indicates that you're still using the old version. It takes
some time for the new Maven artifact to get deployed to the snapshot
repository. Apparently, an artifact has already been deployed this morning.
Did you delete the jar files in your .m2 folder?
On Wed, Apr 15, 2015 at 1:38
-04-15 5:59 GMT-05:00 Maximilian Michels m...@apache.org:
Hi Flavio,
Here's a simple example of a Left Outer Join:
https://gist.github.com/mxm/c2e9c459a9d82c18d789
As Stephan pointed out, this can be very easily modified to construct a
Right Outer Join (just exchange leftElements
Hi Flavio,
Do you have an example? The DistinctOperator should return a typed output
just like all the other operators do.
Best,
Max
On Fri, Apr 17, 2015 at 10:07 AM, Flavio Pompermaier pomperma...@okkam.it
wrote:
Hi guys,
I'm trying to make (in Java) a project().distinct() but then I
Hi Flavio,
Here's a simple example of a Left Outer Join:
https://gist.github.com/mxm/c2e9c459a9d82c18d789
As Stephan pointed out, this can be very easily modified to construct a
Right Outer Join (just exchange leftElements and rightElements in the two
loops).
Here's an excerpt with the most
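For reference, a minimal sketch of such a coGroup-based Left Outer Join,
assuming left and right are DataSet<Tuple2<Long, String>> (the types and
fields are illustrative, not taken from the gist):

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.flink.api.common.functions.CoGroupFunction;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.util.Collector;

    left.coGroup(right).where(0).equalTo(0)
        .with(new CoGroupFunction<Tuple2<Long, String>, Tuple2<Long, String>, Tuple2<String, String>>() {
            @Override
            public void coGroup(Iterable<Tuple2<Long, String>> leftElements,
                                Iterable<Tuple2<Long, String>> rightElements,
                                Collector<Tuple2<String, String>> out) {
                // materialize the right side so it can be traversed once per left element
                List<Tuple2<Long, String>> rights = new ArrayList<>();
                for (Tuple2<Long, String> r : rightElements) {
                    rights.add(r);
                }
                for (Tuple2<Long, String> l : leftElements) {
                    if (rights.isEmpty()) {
                        // no match: emit a placeholder (Flink tuples do not support null fields)
                        out.collect(new Tuple2<>(l.f1, ""));
                    } else {
                        for (Tuple2<Long, String> r : rights) {
                            out.collect(new Tuple2<>(l.f1, r.f1));
                        }
                    }
                }
            }
        });

Exchanging leftElements and rightElements in the two loops turns this into
a Right Outer Join.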
Saved the date! This sounds very exciting. Looking forward to hearing a lot
of nice talks and meeting a lot of great people!
On Tue, Apr 7, 2015 at 2:24 PM, Kostas Tzoumas ktzou...@apache.org wrote:
Hi everyone,
The folks at data Artisans and the Berlin Big Data Center are organizing
the
Love the purple. Have fun! :)
On Wed, Apr 8, 2015 at 5:05 PM, Henry Saputra henry.sapu...@gmail.com
wrote:
Nice, congrats!
On Wed, Apr 8, 2015 at 7:39 AM, Gyula Fóra gyf...@apache.org wrote:
Hey Everyone!
We are proud to announce the first Apache Flink meetup group in
Stockholm.
Thank you for your kind wishes :) Good luck from me as well!
I was just wondering, is it possible to stream the talks or watch them
later on?
On Mon, Jun 8, 2015 at 2:54 AM, Hawin Jiang hawin.ji...@gmail.com wrote:
Hi All
As you know, Kostas Tzoumas and Robert Metzger will give us two
Hi Jean,
I think it would be a nice-to-have feature to display some metrics on the
command line after a job has completed. We already have the run time and
the accumulator results available at the CLI and printing those would be
easy. What metrics in particular are you looking for?
Best,
Max
On
Hi Max!
Nowadays, the default target when building from source is Hadoop 2. So a
simple mvn clean package -DskipTests should do it. You only need the flag
when you build for Hadoop 1: -Dhadoop.profile=1.
Cheers,
The other Max
On Tue, Jun 23, 2015 at 2:03 PM, Maximilian Alber
Hi Tamara!
Yes, there is. Since count/collect/print trigger an execution of the
ExecutionEnvironment, you can get the result afterwards using
env.getLastExecutionResult().
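For example, a minimal sketch using that call:

    import org.apache.flink.api.common.JobExecutionResult;
    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;

    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
    DataSet<String> data = env.fromElements("a", "b", "c");
    data.print(); // triggers execution
    JobExecutionResult result = env.getLastExecutionResult();
    System.out.println("net runtime: " + result.getNetRuntime() + " ms");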
Best,
Max
On Thu, Jun 18, 2015 at 3:57 PM, Tamara Mendt tammyme...@gmail.com wrote:
Hey!
I am currently running a job
on the web
site and the documentation.
I think this duplication is not needed. We should merge the duplicated content.
Regards,
Chiwan Park
On Jun 25, 2015, at 9:01 PM, Maximilian Michels m...@apache.org wrote:
Thanks. Fixed. Actually, that one is not linked anywhere, right? Just
realized
...@apache.org
wrote:
The "How to contribute" and coding guidelines pages are also duplicated on
the web site and in the documentation.
I think this duplication is not needed. We should merge the duplicated content.
Regards,
Chiwan Park
On Jun 25, 2015, at 9:01 PM, Maximilian Michels m...@apache.org wrote
Hi Max,
Thanks for noticing! Fixed on the master and for the 0.9.1 release.
Cheers,
Max
On Tue, Jun 23, 2015 at 5:09 PM, Maximilian Alber
alber.maximil...@gmail.com wrote:
Hi Flinksters,
just a minor thing:
http://ci.apache.org/projects/flink/flink-docs-master/setup/yarn_setup.html
in the
The Apache Flink community is pleased to announce the availability of the
0.9.0 release.
Apache Flink is an open source platform for scalable batch and stream data
processing. Flink’s core consists of a streaming dataflow engine that
provides data distribution, communication, and fault tolerance
Hi Bill,
You're right. Simply increasing the task manager slots doesn't do anything.
It is correct to set the parallelism to taskManagers*slots. Simply increase
the number of network buffers in the flink-conf.yaml, e.g. to 4096. In the
future, we will configure this setting dynamically.
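For example, a flink-conf.yaml sketch (the values are illustrative, not
recommendations):

    taskmanager.numberOfTaskSlots: 6
    taskmanager.network.numberOfBuffers: 4096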
Let us
I can confirm that mvn clean package fails. However, Travis builds fine
after Till's fix: https://travis-ci.org/apache/flink/builds/64537794
On Fri, May 29, 2015 at 11:51 AM, Till Rohrmann trohrm...@apache.org
wrote:
Yes, this is another error. It seems to be related to the new Scala shell.
On
Fixed it on the master.
The problem was that some classes belonging to the package org.apache.flink.api.java
were in the folder src/main/java/*org.apache.flink*/api/java/ instead of
src/main/java/org/apache/flink/api/java/.
On Fri, May 29, 2015 at 2:04 PM, Maximilian Michels m...@apache.org wrote:
Yes
.
On Tue, Jun 30, 2015 at 5:09 PM, Robert Metzger rmetz...@apache.org
wrote:
+1
lets remove the FAQ from the source repo and put it on the website only.
On Thu, Jun 25, 2015 at 3:14 PM, Ufuk Celebi u...@apache.org wrote:
On 25 Jun 2015, at 14:31, Maximilian Michels m...@apache.org wrote:
Thanks
Hi Michele,
If you don't set the parallelism, the default parallelism is used. For the
visualization in the web client, a parallelism of one is used. When you run
your example from your IDE, the default parallelism is set to the number of
(virtual) cores of your CPU.
Moreover, Flink will
.: I'm using the latest 0.10-SNAPSHOT and HDFS 1.2.1.
On 30.06.2015 16:51, Maximilian Michels wrote:
Hi Mihail,
Thank you for your question. Do you have a short example that reproduces
the problem? It is hard to find the cause without an error message or some
example code.
I wonder
the latest output, you can use two files and alternate them between
input and output.
Let me know if you have any further questions.
Kind regards,
Max
On Thu, Jul 2, 2015 at 10:20 AM, Maximilian Michels m...@apache.org wrote:
Hi Mihail,
Thanks for the code. I'm trying to reproduce the problem
Yes, there is a loop to recursively search for files in a directory but that
should be ok. The code fails when adding a new InputSplit to an ArrayList.
This is a standard operation.
Oh, I think I found a bug in `addNestedFiles`. It does not pick up the
length of the recursively found files in line
:20 PM, Maximilian Michels m...@apache.org
wrote:
Yes, there is a loop to recursively search for files in a directory but
that should be ok. The code fails when adding a new InputSplit to an
ArrayList. This is a standard operation.
Oh, I think I found a bug in `addNestedFiles`. It does not pick
Hi Tamara,
Quoted strings should not contain the quoting character. The way to work
around this is to escape the quote characters. However, currently there is
no option to escape quotes which pretty much forbids any use of quote
characters within quoted fields. This should be fixed. I opened a
, the
reducer and sink will be executed with the default parallelism.
Best, Fabian
2015-06-30 10:25 GMT+02:00 Maximilian Michels m...@apache.org:
Hi Michele,
If you don't set the parallelism, the default parallelism is used. For
the visualization in the web client, a parallelism of one is used
Hi Stefan,
The problem is that the CsvParser does not know how to parse types other
than the built-in ones. It would be nice if it supported a custom
parser which is either manually specified or included in the POJO class
itself.
You can either change your POJO fields to be of a
Hi Chenliang,
I've posted a comment in the associated JIRA issue:
https://issues.apache.org/jira/browse/FLINK-2367
Thanks,
Max
On Fri, Jul 17, 2015 at 8:27 AM, Chenliang (Liang, DataSight)
chenliang...@huawei.com wrote:
*One improvement suggestion, please check whether it is valid.*
For
Hi Lydia,
Here are some examples of how to read/write data from/to HBase:
https://github.com/apache/flink/tree/master/flink-staging/flink-hbase/src/test/java/org/apache/flink/addons/hbase/example
Hope that helps you to develop your Flink job. If not feel free to ask!
Best,
Max
On Sat, Jul 11,
Hi Michele,
Sorry to hear you are experiencing problems with the web client. Which
version of Flink are you using?
Could you paste the whole error message you see? Thank you.
Best regards,
Max
On Sun, Jul 12, 2015 at 11:21 AM, Michele Bertoni
michele1.bert...@mail.polimi.it wrote:
I think
Hi Vinh,
If you run your program locally, then Flink uses the local execution mode
which allocates only a small amount of managed memory. Managed memory is used by Flink
to perform operations on serialized data. These operations can get slow if
too little memory gets allocated because data needs to be
Hi Michele,
Thanks for reporting the problem. It seems like we changed the way we
compare generic types like your GValue type. I'm debugging that now. We can
get a fix in for the 0.9.1 release.
Cheers,
Max
On Tue, Jul 14, 2015 at 5:35 PM, Michele Bertoni
michele1.bert...@mail.polimi.it wrote:
? :-)
Cheers,
Max
On Mon, Jul 20, 2015 at 10:50 AM, Maximilian Michels m...@apache.org
wrote:
Hi Max,
You are right, there is no support for nested iterations yet. As far as I
know, there are no concrete plans to add support for it. So it is up
for debate how the support for resuming from
Hi Shivani,
Flink doesn't have enough memory to perform a hash join. You need to
provide Flink with more memory. You can either increase the
taskmanager.heap.mb config variable or set taskmanager.memory.fraction
to some value greater than 0.7 and smaller than 1.0. The first config
variable
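For example, a flink-conf.yaml sketch (the values are illustrative only):

    taskmanager.heap.mb: 2048
    # fraction of the heap that Flink manages for sorting, hashing, ...
    taskmanager.memory.fraction: 0.8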
fine in Collection execution mode.
Thanks and Regards,
Shivani
On Mon, Jul 20, 2015 at 2:14 PM, Maximilian Michels m...@apache.org
wrote:
Hi Shivani,
Flink doesn't have enough memory to perform a hash join. You need to
provide Flink with more memory. You can either increase
On Mon, Jul 13, 2015 at 9:07 AM, Maximilian Michels m...@apache.org
wrote:
Hi Jay,
Great to hear there is effort to integrate Flink with BigTop. Please let
us know if any questions come up in the course of the integration!
Best,
Max
On Sun, Jul 12, 2015 at 3:57 PM, jay vyas jayunit100
current branch?
Cheers,
Max
On Mon, Jul 20, 2015 at 4:02 PM, Maximilian Michels m...@apache.org
wrote:
Now that makes more sense :) I thought by nested iterations you meant
iterations in Flink that can be nested, i.e. starting an iteration inside
an iteration.
The caching/pinning
Hi,
Are you running this locally or in a cluster environment? Did you put the
zkClient-0.5.jar in the /lib directory of every node (also task managers)?
It seems like sbt should include the zkClient dependency in the fat jar. So
there might be something wrong with your build process.
Best
, Maximilian Michels m...@apache.org wrote:
Hi Michele,
Thanks for reporting the problem. It seems like we changed the way we
compare generic types like your GValue type. I'm debugging that now. We can
get a fix in for the 0.9.1 release.
Cheers,
Max
On Tue, Jul 14, 2015 at 5:35 PM, Michele
Hi Guido,
This depends on your use case but you may read those values as type String
and treat them accordingly.
Cheers,
Max
On Fri, Oct 23, 2015 at 1:59 PM, Guido wrote:
> Hello,
> I would like to ask if there were any particular ways to read or treat
> null (e.g. Name,
Hi Philip,
How about making the empty field of type String? Then you can read the CSV
into a DataSet and treat the empty string as a null value. Not very nice
but a workaround. As of now, Flink deliberately doesn't support null values.
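For example, a minimal sketch (the path and the "n/a" placeholder are made up):

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.tuple.Tuple2;

    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
    DataSet<Tuple2<String, String>> csv = env.readCsvFile("/path/to/data.csv")
        .types(String.class, String.class);
    DataSet<String> second = csv.map(new MapFunction<Tuple2<String, String>, String>() {
        @Override
        public String map(Tuple2<String, String> t) {
            return t.f1.isEmpty() ? "n/a" : t.f1; // treat the empty string as the null marker
        }
    });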
Regards,
Max
On Thu, Oct 22, 2015 at 4:30 PM, Philip Lee
Hi Flavio,
Which version of Flink are you using?
Cheers,
Max
On Fri, Oct 23, 2015 at 2:45 PM, Flavio Pompermaier
wrote:
> Hi to all,
> I'm trying to run a job from the web interface but I get this error:
>
> java.lang.RuntimeException: java.io.FileNotFoundException: JAR
Hi Niels,
Thank you for your question. Flink relies entirely on the Kerberos
support of Hadoop. So your question could also be rephrased to "Does
Hadoop support long-term authentication using Kerberos?". And the
answer is: Yes!
While Hadoop uses Kerberos tickets to authenticate users with
s null) can then be used on the resulting
> DataSet[Row].
>
>
> On Fri, Oct 23, 2015 at 7:38 PM, Maximilian Michels <m...@apache.org>
> wrote:
>
>> Hi Philip,
>>
>> How about making the empty field of type String? Then you can read the
>> CSV int
Hi Liang,
We greatly appreciate that you introduced Flink to the Chinese users at CNCC! We
would love to hear how people like Flink.
Please keep us up to date and point the users to the mailing list or
Stackoverflow if they have any difficulties.
Best regards,
Max
On Sat, Oct 24, 2015 at 5:48 PM,
s null) can then be used on the resulting
> DataSet[Row].
>
> On Fri, Oct 23, 2015 at 7:40 PM, Maximilian Michels <m...@apache.org>
> wrote:
>
>> Hi Guido,
>>
>> This depends on your use case but you may read those values as type
>> String and treat them according
s Basjes
>
> On Fri, Oct 23, 2015 at 11:45 AM, Maximilian Michels <m...@apache.org>
> wrote:
>
>> Hi Niels,
>>
>> Thank you for your question. Flink relies entirely on the Kerberos
>> support of Hadoop. So your question could also be rephras
maier <pomperma...@okkam.it>
> wrote:
>
>> Yes, the job manager starts as a root process, while taskmanagers with my
>> user..is that normal?
>> I was convinced that start-cluster.sh was starting all processes with the
>> same user :O
>>
>> On Mon,
Hi Flavio,
Are you running your Flink cluster with root permissions? The directory to
hold the output splits is created by the JobManager. So if you run the
JobManager with root permissions, it will create a folder owned by root. If
the task managers are not run with root permissions, this could
That's odd. Does it also execute with parallelism 36 then?
On Mon, Oct 26, 2015 at 3:06 PM, Flavio Pompermaier <pomperma...@okkam.it>
wrote:
> No, I just use the default parallelism
>
> On Mon, Oct 26, 2015 at 3:05 PM, Maximilian Michels <m...@apache.org>
> wrote:
>
Hi Camelia,
Flink 0.9.X supports Java 6. So this can't be the issue.
Out of curiosity, I gave it a spin on a Linux machine with OpenJDK 6. I was
able to start the command-line interface, job manager and task managers.
java version "1.6.0_36"
OpenJDK Runtime Environment (IcedTea6 1.13.8)
Hi Brian,
We are currently in the process of releasing 0.10.0. Thus, the master
version has already been updated to 1.0 which is the next scheduled
release.
If you want to use the latest SNAPSHOT version, you may build it from
source or use the SNAPSHOT Maven artifacts. For more information,
Hi Welly,
There is a protocol for communicating with other processes. This is
reflected in the flink-language-binding-generic module. I'm not aware of
how Spark's or Storm's communication protocols work, but this protocol is
rather low-level.
Cheers,
Max
On Fri, Nov 13, 2015 at 9:49 AM, Welly Tambunan
heers
>
> On Fri, Nov 13, 2015 at 5:07 PM, Maximilian Michels <m...@apache.org> wrote:
>>
>> Hi Welly,
>>
>> There is a protocol for communicating with other processes. This is
>> reflected in flink-language-binding-generic module. I'm not aware how
>&
Hi Thomas,
It appears Flink couldn't pick up the Hadoop configuration. Did you
set the environment variables HADOOP_CONF_DIR or HADOOP_HOME?
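For example (the paths are illustrative; use the locations of your Hadoop
installation):

    export HADOOP_HOME=/usr/lib/hadoop
    export HADOOP_CONF_DIR=/etc/hadoop/conf   # directory containing core-site.xml, hdfs-site.xml, ...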
Best,
Max
On Sun, Nov 8, 2015 at 7:52 PM, Thomas Götzinger wrote:
> Sorry for Confusing,
>
> the flink cluster throws following
nism level: Failed to
>>> find any Kerberos tgt)]
>>> at
>>> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
>>> at
>>> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.j
Hi Philip,
Thank you for your questions. I think you have mapped the Hive
functions to the Flink ones correctly. Just a remark on the ORDER BY.
You wrote that it produces a total order of all the records. In this
case, you'd have to do a SortPartition operation with parallelism set to
1. This is
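For example, a minimal sketch of such a total order (the type and sort
field are illustrative):

    import org.apache.flink.api.common.operators.Order;

    data.sortPartition(0, Order.ASCENDING)
        .setParallelism(1) // a single partition yields a globally sorted result
        .writeAsCsv("/path/to/output");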
-list people about it.
>> I am pretty sure these two mails will really help you.
>>
>> I will take a note by following this contents on our google docs.
>> This note will also help big-benchmark people later.
>>
>> Regards,
>> Philip
>>
>>
>>
>&g
Hi Jakob,
Thank you for reporting the bug. Could you please post your
configuration here? In particular, could you please tell us the value
of the following configuration variables:
taskmanager.heap.mb
taskmanager.network.numberOfBuffers
taskmanager.memory.off-heap
Are you running the Flink
+1 Let's collect in the Wiki for now. At some point in time, we might
want to have a dedicated page on the Flink homepage.
On Mon, Oct 19, 2015 at 3:31 PM, Timo Walther wrote:
> Ah ok, sorry. I think linking to the wiki is also ok.
>
>
> On 19.10.2015 15:18, Fabian Hueske
I forgot to ask you: Which version of Flink are you using? 0.9.1 or
0.10-SNAPSHOT?
On Mon, Oct 19, 2015 at 5:05 PM, Maximilian Michels <m...@apache.org> wrote:
> Hi Jakob,
>
> Thanks. Flink allocates its network memory as direct memory outside
> the normal Java heap. By def
On Mon, Oct 19, 2015 at 4:32 PM, Jakob Ericsson
<jakob.erics...@gmail.com> wrote:
> Hi,
>
> See answers below.
>
> /Jakob
>
> On Mon, Oct 19, 2015 at 4:03 PM, Maximilian Michels <m...@apache.org> wrote:
>>
>> Hi Jakob,
>>
>> Th
at 5:26 PM, Gyula Fóra <gyula.f...@gmail.com> wrote:
> It's 0.10-SNAPSHOT
>
> Gyula
>
> Maximilian Michels <m...@apache.org> ezt írta (időpont: 2015. okt. 19., H,
> 17:13):
>>
>> I forgot to ask you: Which version of Flink are you using? 0.9.1 or
>> 0
the process with G1 again and 20GB as taskmanager.heap.mb.
> Lets see if it will be stable during the night.
>
>
> On Mon, Oct 19, 2015 at 6:31 PM, Maximilian Michels <m...@apache.org> wrote:
>>
>> You can see the revision number and the build date in the JobManager
>&g
rk
> now, I do have a SUCCEEDED too.
>
> Best regards,
> Arnaud
>
> -----Original Message-----
> From: Maximilian Michels [mailto:m...@apache.org]
> Sent: Thursday, October 8, 2015, 14:34
> To: user@flink.apache.org; LINZ, Arnaud <al...@bouyguestelecom.fr>
> Subject
You can see the revision number and the build date in the JobManager
log file, e.g. "Starting JobManager (Version: 0.10-SNAPSHOT,
Rev:1b79bc1, Date:18.10.2015 @ 20:15:08 CEST)"
On Mon, Oct 19, 2015 at 5:53 PM, Maximilian Michels <m...@apache.org> wrote:
> When was the last ti
Hi Ronkainen,
Sorry for the late reply. Unfortunately, this is a bug in the Python
API. I've reproduced the issue and fixed it for the upcoming releases.
The fix will be included in the 0.9.2 and the 0.10 release. If you
don't mind, you could already use the 0.10-SNAPSHOT version (0.10
release
I think Travis will fix this hiccup soon. Maybe you could provide them
with the stuck builds in a mail to supp...@travis-ci.com.
On Fri, Oct 9, 2015 at 3:39 PM, Sachin Goel wrote:
> Found another one: https://travis-ci.org/apache/flink/jobs/84473635
>
> -- Sachin Goel
>
ns that the buffering is done
>> (almost) solely in the sink and not in the outputformat any more.
>>
>> On Mon, Oct 26, 2015 at 10:11 AM, Maximilian Michels <m...@apache.org>
>> wrote:
>>>
>>> Not sure whether we really want to flush at every invoke call. If you
&
Hi Welly,
Thanks for sharing! The videos are coming. They will all be available soon.
Cheers,
Max
On Fri, Nov 13, 2015 at 11:08 AM, Welly Tambunan wrote:
> Hi All,
>
> I've just notice that the video has already available for this one.
>
>
Nice. More configuration options :)
On Wed, Aug 26, 2015 at 5:58 PM, Robert Metzger rmetz...@apache.org wrote:
Therefore, my change will include a configuration option to set a custom
location for the file.
On Wed, Aug 26, 2015 at 5:55 PM, Maximilian Michels m...@apache.org wrote:
The only
Can't we write the file to the system's temp directory or the user
home? IMHO this is more standard practice for this type of session
information.
On Wed, Aug 26, 2015 at 3:25 PM, Robert Metzger rmetz...@apache.org wrote:
Great ;)
Not yet, but you are the second user to request this.
I think
Hi Jerry,
If you don't want to use Hadoop, simply pick _any_ Flink version. We
recommend the Hadoop 1 version because it contains the fewest dependencies,
i.e. you need to download less and the installation occupies less space.
Other than that, it doesn't really matter if you don't use the HDFS
+1 for dropping Hadoop 2.2.0 binary and source-compatibility. The
release is hardly used and complicates the important high-availability
changes in Flink.
On Fri, Sep 4, 2015 at 9:33 AM, Stephan Ewen wrote:
> I am good with that as well. Mind that we are not only dropping a
serialization.
>
> On Wed, Sep 2, 2015 at 4:16 PM, Maximilian Michels <m...@apache.org> wrote:
>>
>> Ok but that would not prevent the above error, right? Serializing is
>> not the issue here.
>>
>> Nevertheless, it would catch all errors during initial s
Hi Andreas,
Thank you for reporting the problem and including the code to reproduce the
problem. I think there is a problem with the class serialization or
deserialization. Arrays.asList uses a private ArrayList class
(java.util.Arrays$ArrayList) which is not the one you would normally use
rogram is constructed,
> rather than later, when it is shipped.
>
> On Wed, Sep 2, 2015 at 12:56 PM, Maximilian Michels <m...@apache.org>
> wrote:
>
>> Here's the JIRA issue: https://issues.apache.org/jira/browse/FLINK-2608
>>
>> On Wed, Sep 2, 2015 at 12:49
Hi Michele,
Please supply a log4j.properties file path as a Java VM property like
so: -Dlog4j.configuration=/path/to/log4j.properties
Your IDE should have an option to adjust VM arguments.
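For example, a minimal log4j.properties sketch that logs to the console:

    log4j.rootLogger=INFO, console
    log4j.appender.console=org.apache.log4j.ConsoleAppender
    log4j.appender.console.layout=org.apache.log4j.PatternLayout
    log4j.appender.console.layout.ConversionPattern=%d{HH:mm:ss,SSS} %-5p %c - %m%n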
Cheers,
Max
On Wed, Sep 2, 2015 at 9:10 AM, Michele Bertoni
wrote:
> Hi
Here's the JIRA issue: https://issues.apache.org/jira/browse/FLINK-2608
On Wed, Sep 2, 2015 at 12:49 PM, Maximilian Michels <m...@apache.org> wrote:
> Hi Andreas,
>
> Thank you for reporting the problem and including the code to reproduce
> the problem. I think there is a prob
Hi Lydia,
Till already pointed you to the documentation. If you want to run the
WordCount example, you can do so by executing the following command:
./bin/flink run -c com.dataartisans.flink.dataflow.examples.WordCount
/path/to/dataflow.jar --input /path/to/input --output /path/to/output
If you
ommented it out
> for testing, seems to be working. Happily waiting for the fix. Thanks again.
>
> Robert
>
> On Wed, Sep 30, 2015 at 1:42 PM, Maximilian Michels <m...@apache.org> wrote:
>>
>> Hi Robert,
>>
>> This is a regression on the current master due t
, Oct 1, 2015 at 10:18 AM, Maximilian Michels <m...@apache.org> wrote:
>>
>> Hi Robert,
>>
>> Just a quick update: The issue has been resolved in the latest Maven
>> 0.10-SNAPSHOT dependency.
>>
>> Cheers,
>> Max
>>
>> On Wed, Sep 30, 2015 at
Great to hear :)
On Thu, Oct 1, 2015 at 11:21 AM, Robert Schmidtke
<ro.schmid...@gmail.com> wrote:
> I pulled the current master branch and rebuilt Flink completely anyway.
> Works like a charm.
>
> On Thu, Oct 1, 2015 at 11:11 AM, Maximilian Michels <m...@apache.org> wrot
Hi Hanen,
It appears that the environment variables are not set. Thus, Flink cannot
pick up the Hadoop configuration. Could you please paste the output of
"echo $HADOOP_HOME" and "echo $HADOOP_CONF_DIR" here?
In any case, your problem looks similar to the one discussed here:
ear ...
> Our shop is doing about 10 orders per second these days ...
>
> So they won't upgrade until next January/February
>
> Niels
>
> On Wed, Dec 2, 2015 at 3:59 PM, Maximilian Michels <m...@apache.org> wrote:
>>
>> Hi Niels,
>>
>> You mentioned you hav
?).
> When the new ticket has been obtained it retries the call that previously
> failed.
> To me it seemed that this call can fail over the invalid Token yet it cannot
> be retried.
>
> At this moment I'm thinking a bug in Hadoop.
>
> Niels
>
> On Wed, Dec 2, 20
to build and I'll give it a try.
>
> Niels
>
> On Wed, Dec 2, 2015 at 4:28 PM, Maximilian Michels <m...@apache.org> wrote:
>>
>> I mentioned that the exception gets thrown when requesting container
>> status information. We need this to send a heartbeat to
TCH_HEAD
https://github.com/mxm/flink/archive/a41f3866f4097586a7b2262093088861b62930cd.zip
Thanks,
Max
On Wed, Dec 2, 2015 at 6:39 PM, Maximilian Michels <m...@apache.org> wrote:
> Great. Here is the commit to try out:
> https://github.com/mxm/flink/commit/f49b9635bec703541f19cb8c615f302
Hi Welly,
We still have to decide on the next release date but I would expect
Flink 0.10.2 within the next weeks. If you can't work around the union
limitation, you may build your own Flink either from the master or the
release-0.10 branch which will eventually be Flink 0.10.2.
Cheers,
Max
On
Hi Brian,
I don't recall that Docker requires commands to run in the foreground. Still, if
that is your requirement, simply remove the "&" at the end of this line in
flink-daemon.sh:
$JAVA_RUN $JVM_ARGS ${FLINK_ENV_JAVA_OPTS} "${log_setting[@]}" -classpath
"`manglePathList
hat I see there. One
> thing that's not clear to me in that example is why supervisor is used to
> keep the container alive, rather than using some simpler means. It doesn't
> look like it's been configured to supervise anything.
>
> On Wed, Dec 2, 2015 at 11:44 AM, Maximilian Michels &
/tree/kerberos-yarn-heartbeat-fail-0.10.1
git fetch https://github.com/mxm/flink/ \
kerberos-yarn-heartbeat-fail-0.10.1 && git checkout FETCH_HEAD
https://github.com/mxm/flink/archive/kerberos-yarn-heartbeat-fail-0.10.1.zip
Thanks,
Max
On Wed, Dec 2, 2015 at 6:52 PM, Maximilian Mi
Hi Naveen,
Were you using Maven before? The syncing of changes in the master
always takes a while for Maven. The documentation happened to be
updated before Maven synchronized. Building and installing manually
(what you did) solves the problem.
Strangely, when I run your code on my machine with
Thanks Welly!
We have already corrected that in the snapshot documentation at
https://ci.apache.org/projects/flink/flink-docs-release-0.10/apis/streaming_guide.html#transformations
I fixed it also for the 0.10 documentation.
Best,
Max
On Fri, Dec 4, 2015 at 6:24 AM, Welly Tambunan
Hi Madhu,
Not yet. The API has changed slightly. We'll add one very soon. In the
meantime I've created an issue to keep track of the status:
https://issues.apache.org/jira/browse/FLINK-3115
Thanks,
Max
On Thu, Dec 3, 2015 at 10:50 PM, Madhukar Thota
wrote:
> is
r",
> "localhost:9300"));
>
>
> DataStreamSink elastic = messageStream.rebalance().addSink(new
> ElasticsearchSink<>(config, (IndexRequestBuilder) (element,
> runtimeContext) -> {
> String[] line = element.toLowerCase().split("
> +(?=(?:([^\"]*\")