Didn't we have a similar issue before the 0.7.0-incubating release as well?
I thought I had tested submitting a streaming program with the web frontend
for the 0.8 release, and it worked.
On Thu, Jan 22, 2015 at 2:31 PM, Gyula Fóra gyf...@apache.org wrote:
Hey,
While trying to add support for
Hi guys,
I would like to bundle a minor bugfix release for Flink soon.
Some users were complaining about incomplete Kryo support, in particular
for Avro.
Also, we fixed some other issues which are easy to port to 0.8.1 (some
of them are already in the branch).
I would like to start the vote
Hi,
looking at your code, it seems that you are creating a DataSet for each
file in the directory. Flink can also read entire directories.
Now regarding the actual problem:
How are you starting the Flink job?
Out of your IDE, or using the ./bin/flink run tool?
Best,
Robert
On Fri, Feb 6,
Sorry, I didn't see the pathID which is added in the map() method.
Then your approach looks good for using Flink locally.
On Fri, Feb 6, 2015 at 3:03 PM, Nam-Luc Tran namluc.t...@euranova.eu
wrote:
Thank you for your replies.
@Stephen
Updating to 0.9-SNAPSHOT and using the return statement
, 2015 at 2:27 PM, Robert Metzger rmetz...@apache.org
wrote:
Hi guys,
I would like to bundle a minor bugfix release for Flink soon.
Some users were complaining about incomplete Kryo support, in
particular
for Avro.
Also, we fixed some other issues which are easy to port
Yes, there are two NOTICE files.
They differ because bin and src releases require different licensing
notices. (The bin NOTICE is bigger)
On Wed, Jan 14, 2015 at 8:13 PM, Henry Saputra henry.sapu...@gmail.com
wrote:
Ah, we have 2 copies of NOTICE file?
Can binary dist just use the same one
Is the git hook something we can control for everybody? I thought it's more
like a personal thing that everybody can set up if they want.
I'm against enforcing something like this for every committer. I don't want
to wait for 15 minutes for pushing a typo fix to the documentation.
On Wed, Jan 21, 2015
:-)
On Mon, Feb 16, 2015 at 4:38 PM, Fabian Hueske fhue...@gmail.com
wrote:
- checked all checksums and signatures
- checked running examples with built-in data on local setup on Windows
8.1
(hadoop1.tgz, hadoop2.tgz)
2015-02-16 15:54 GMT+01:00 Robert Metzger rmetz...@apache.org
Please vote on releasing the following candidate as Apache Flink version
0.8.1
Please check the release carefully. There were quite a few last-minute
quickfixes
This is a bugfix release for 0.8.0.
-
The commit to be voted on is in the
I agree with Stephan that we should remove the scalastyle rule enforcing
lines of 100 characters length.
On Mon, Jan 5, 2015 at 10:21 AM, Henry Saputra henry.sapu...@gmail.com
wrote:
@Stephan - sure I could work on it. Been wanting to do it for a while.
No, it is not the checkstyle issue.
I'm also in favor of shading commonly used libraries to resolve this issue
for our upstream users.
I recently wrote this distributed TPC-H datagenerator, which had a hard
dependency on a newer guava version. So I needed to shade guava in my
project to make it work.
Another candidate to shade is
That's indeed a great idea, and we had support for that.
We also already have the infrastructure to do it in place; I just haven't had
enough time to figure out how the ASF's buildbot works. (I know how it should
work in theory, but I haven't tried it.)
This is the respective JIRA:
Hi,
thank you for trying out Flink.
I'm sorry that you ran into this issue. Flink does not have support for
Hadoop YARN before 2.2.0.
The reason is that Hadoop changed the YARN APIs with the 2.2.0
release (the pre-2.2.0 APIs are marked as alpha).
If you take a closer look into the
.
2015-01-27 15:46 GMT+01:00 Robert Metzger rmetz...@apache.org:
Hi,
Hadoop has annotations for tagging the stability and audience of classes
and methods.
Through that, you can have @InterfaceAudience.Public, Private,
LimitedPrivate
and also @InterfaceStability.Evolving, Unstable
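These tags are plain Java annotations that tooling can inspect reflectively. A minimal self-contained sketch of the same pattern (the annotation names mirror Hadoop's, but this is illustrative code, not Hadoop's actual classes):

```java
import java.lang.annotation.Documented;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Illustrative stand-ins for Hadoop's @InterfaceAudience / @InterfaceStability.
public class AnnotationSketch {

    @Documented
    @Retention(RetentionPolicy.RUNTIME)
    @interface Public {}

    @Documented
    @Retention(RetentionPolicy.RUNTIME)
    @interface Evolving {}

    // A class tagged as public-facing but still evolving.
    @Public
    @Evolving
    static class ExampleApi {}

    public static void main(String[] args) {
        // Tools (or doc generators) can read the tags reflectively.
        boolean isPublic = ExampleApi.class.isAnnotationPresent(Public.class);
        boolean isEvolving = ExampleApi.class.isAnnotationPresent(Evolving.class);
        System.out.println("public=" + isPublic + " evolving=" + isEvolving);
    }
}
```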
Hi,
it seems that you are not subscribed to our mailing list, so I had to
manually accept your mail. It would be good if you could subscribe.
Can you also send us the log output of the JobManager?
If your YARN cluster has log aggregation activated, you can retrieve the
logs of a stopped YARN
in the long run.
On Mon, Jan 26, 2015 at 12:00 PM, Robert Metzger rmetz...@apache.org
wrote:
I've added a JIRA issue to create the module:
https://issues.apache.org/jira/browse/FLINK-1452
On Mon, Jan 26, 2015 at 11:39 AM, Till Rohrmann trohrm...@apache.org
wrote:
+1 for Robert's
Hi,
Am I right that you basically fear that, by allowing users to manually
modify DataSetRows, you're losing control of the types
etc.?
I think that integrating the expression API into the existing API is nicer,
because it gives users more flexibility. It should also lead to a lower
:00 Robert Metzger rmetz...@apache.org:
Okay, the tests have finished on my local machine, and they
passed.
So
it
looks like an environment specific issue.
Maybe the log helps me already to figure out what the issue is.
We should make sure that our tests are passing
I'm also in favor of option 1) with a flink-contrib maven module.
I agree with Ted that we should certainly think about establishing a highly
visible, easy to contribute and easy to use infrastructure for all kinds of
contributions around the project.
But I suspect that we need some time to come
Hey,
is it an input format for reading JSON data or an IF for reading tweets in
some format into a POJO?
I think a JSON Input Format would be something very useful for our users.
Maybe you can add that and use the Tweet IF as a concrete example for that?
Do you have a preview of the code
for the idea.
We need to make sure PMC of Flink maintains knowledge of standard
Flink distribution, hence the flink-contrib should not be part of
the release.
- Henry
On Sun, Jan 25, 2015 at 10:33 AM, Robert Metzger rmetz...@apache.org
wrote:
I'm also in favor of option 1) with a flink
Okay, the tests have finished on my local machine, and they passed. So it
looks like an environment specific issue.
Maybe the log helps me already to figure out what the issue is.
We should make sure that our tests are passing on all platforms ;)
On Sat, Jan 24, 2015 at 11:06 AM, Robert Metzger
?
On Sat, Jan 10, 2015 at 7:28 PM, Robert Metzger rmetz...@apache.org
wrote:
I would really like to include this commit into the 0.8 release as well:
https://github.com/apache/flink/commit/ec2bb573d185429f8b3efe111850b8f0e67f2704
A user is affected by this issue.
If you agree, I can merge
Hey Andreas,
thanks for sharing. Due to the press announcement by the ASF today, there
is quite some attention in the news.
For all non-Germans: Heise is one of the biggest IT news sites here.
On Mon, Jan 12, 2015 at 3:10 PM, Andreas Kunft andreas.ku...@tu-berlin.de
wrote:
FYI
+1
Checked:
- hadoop1 and hadoop2 profiles
- files in the lib/ directory
- deployed the flink-yarn version on google compute cloud (hadoop 2.4.1)
- set flink 0.8.0 as a repository in a flink project and built it from
these dependencies
- I ran some jobs on google compute cloud, with YARN.
On
Hi,
Right now, you have to parse the logs (of the TaskManager running the Sync
Task).
We've created the linked images using this (very hacky) tool:
https://github.com/project-flink/flink-perf/blob/master/flink-jobs/src/main/java/com/github/projectflink/utils/IterationParser.java
.
I'm currently
:00 Ufuk Celebi u...@apache.org:
Nice, this is a great tool. :)
On 09 Feb 2015, at 17:05, Robert Metzger rmetz...@apache.org wrote:
However, the website is not really helpful:
http://www.tpc.org/trademarks/
As one data point, the Apache Calcite (incubating) project also depends
. Pushing it as soon as travis passes.
On Fri, Feb 6, 2015 at 2:26 PM, Robert Metzger rmetz...@apache.org
wrote:
It seems that quite a few important fixes still need some work until they
are ready.
I'll extend the deadline to Monday morning (CET), since we can not vote
during the weekends
Hi,
we recently added the flink-contrib module for user contributed tools etc.
On one of the last weekends, I've created a distributed tpch generator,
based on this library: https://github.com/airlift/tpch (which is from a
PrestoDB developer and available on Maven central).
You can find my code
I would appreciate it if everyone who is merging pull requests properly
sets the fix version in JIRA.
So in most cases, the fix version is the next major release, currently
0.9.
If we're not setting this, the issue will not appear in the changelog of
the release. Also, I think that users may
I'm against changing the indentation, for the same reasons as Stephan
listed.
In my opinion, the codebase has grown too large to just switch the
indentation or the entire code style (to the google style or whatever).
We have 235870 LOC of Java and 24173 LOC of Scala.
Therefore, I'm proposing to:
Hey Andra,
I've checked out your repository and made some changes.
It seems to compile, and the Files thing seems to work (at least that's
what IntelliJ is telling me).
https://github.com/rmetzger/scratch/commit/203d647086d089575fb27223462d79c87771f1d1
Let me know if this is sufficient or if you
Did you send an empty email to user-subscr...@flink.apache.org ? That
should subscribe you.
On Thu, Mar 19, 2015 at 9:25 AM, Andra Lungu lungu.an...@gmail.com wrote:
Hello,
I've used delta iterations several times up until now, but I just realized
that I never fully understood what happens
request.
Otherwise, let me know if you have troubles rebasing your changes.
On Mon, Mar 2, 2015 at 9:13 PM, Chiwan Park chiwanp...@icloud.com wrote:
+1 for Scala 2.11
Regards.
Chiwan Park (Sent with iPhone)
On Mar 3, 2015, at 2:43 AM, Robert Metzger rmetz...@apache.org wrote:
I'm +1
alexander.s.alexand...@gmail.com:
Yes, will do.
2015-03-10 16:39 GMT+01:00 Robert Metzger rmetz...@apache.org:
Very nice work.
The changes are probably somewhat easy to merge. Except for the version
properties in the parent pom, there should not be any bigger issues.
Can you also
alexander.s.alexand...@gmail.com wrote:
We have is almost ready here:
https://github.com/stratosphere/flink/commits/scala_2.11_rebased
I wanted to open a PR today
2015-03-10 11:28 GMT+01:00 Robert Metzger rmetz...@apache.org:
Hey Alex,
I don't know the exact status of the Scala 2.11 integration. But I
I've reopened https://issues.apache.org/jira/browse/FLINK-1650 because the
issue is still occurring.
On Thu, Mar 12, 2015 at 7:05 PM, Ufuk Celebi u...@apache.org wrote:
On Thursday, March 12, 2015, Till Rohrmann till.rohrm...@gmail.com
wrote:
Have you run the 20 builds with the new shading
Hi guys,
the build queue on travis is getting very very long. It seems that it takes
4 days now until commits to master are built. The nightly builds from the
website and the maven snapshots are also delayed by that.
Right now, there are 33 pull request builds scheduled (
I haven't actually understood what the issue is.
I thought that maven is using the properties from flink-parent, which is
setting the scala version to 2.10 by default anyway.
(By the way: mailing list replies to JIRA mails are not automatically added
as comments to the JIRA)
On Wed, Mar 25,
I suspect this error only happens once in a while. We didn't change
anything on these tests recently.
Your PR for fixing this issue looks good; maybe it fixes it.
On Thu, Mar 26, 2015 at 3:15 AM, Henry Saputra henry.sapu...@gmail.com
wrote:
Hi All,
I just pulled from master and seemed like
for such a release would be mainly about
the legal aspects of the release rather than the stability. So I suspect
that the vote will go through much quicker.
On Fri, Mar 13, 2015 at 12:01 PM, Robert Metzger rmetz...@apache.org
wrote:
I've reopened https://issues.apache.org/jira/browse/FLINK-1650
I created a starter task JIRA for this.
https://issues.apache.org/jira/browse/FLINK-1787
On Sun, Mar 8, 2015 at 3:23 PM, Aljoscha Krettek aljos...@apache.org
wrote:
+1 I also tend to use guava.
On Sun, Mar 8, 2015 at 3:21 PM, Ufuk Celebi u...@apache.org wrote:
On 08 Mar 2015, at 15:05,
I didn't know that there was already an issue for this. I closed FLINK-1787.
The correct issue is this one:
https://issues.apache.org/jira/browse/FLINK-1711
+Table
On Thu, Mar 26, 2015 at 10:13 AM, Aljoscha Krettek aljos...@apache.org
wrote:
Thanks Henry. :D
+Relation
On Thu, Mar 26, 2015 at 9:36 AM, Till Rohrmann trohrm...@apache.org
wrote:
+Table
On Thu, Mar 26, 2015 at 9:32 AM, Márton Balassi
balassi.mar...@gmail.com
wrote:
Hi,
In an offline discussion with other Flink committers, we came up with the
idea to mark new components from the flink-staging module with a Beta
badge in the documentation.
This way, we make it very clear that the component is still under heavy
development.
If we agree on this, I'll file a
-compatibility.
On Sun, Mar 29, 2015 at 8:20 PM, Henry Saputra henry.sapu...@gmail.com
wrote:
+1 to this.
Was thinking about the same thing.
- Henry
On Sun, Mar 29, 2015 at 7:38 AM, Robert Metzger rmetz...@apache.org
wrote:
Hi,
In an offline discussion with other Flink
Okay, I think we have reached consensus on this.
I'll create an RC0 non-voting preview release candidate for
0.9.0-milestone-1 on Thursday (April 2) this week so that we have a version
to test against.
Once all issues of RC0 have been resolved, we'll start voting in the week
of April 6. (The
...@gmail.com
wrote:
Awesome news!
On Thursday, March 26, 2015, Robert Metzger rmetz...@apache.org wrote:
Travis replied to me with very good news: Somebody from INFRA was asking the
same question around the same time as I did and Travis is working on
adding
more build capacity for the apache
Cool. I would like to have the ability to search the docs, so +1 for this
idea!
On Wed, Apr 1, 2015 at 12:10 PM, Ufuk Celebi u...@apache.org wrote:
Hey all,
I think our documentation has grown to a point where we need to think about
how to make it more accessible.
I would like to add a
...@icloud.com
wrote:
Is taskmanager.numberOfTaskSlots: -1 normal?
On Feb 24, 2015, at 9:44 PM, Robert Metzger rmetz...@apache.org
wrote:
Hi,
I could not find the logfiles attached to your mails. I think the
mailinglists are not accepting attachments.
Can you put the logs
:55 AM, Robert Metzger rmetz...@apache.org
wrote:
I'm glad you've found the how to contribute guide.
I cannot describe the process of opening a pull request better than it is
already written in the guide.
Maybe this link is also helpful for you:
https://help.github.com/articles/creating
I'm +1 if this doesn't affect existing Scala 2.10 users.
I would also suggest adding a Scala 2.11 build to Travis to ensure that
everything is working with the different Hadoop/JVM versions.
It shouldn't be a big deal to offer scala_version x hadoop_version builds
for newer releases.
You only
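Such a scala_version × hadoop_version matrix could be sketched in .travis.yml roughly as below. This is a hypothetical sketch; the -D property names and JDK labels are assumptions, not the project's actual build configuration:

```yaml
# Hypothetical sketch: one Travis job per scala/hadoop combination.
# The -D property names are illustrative, not Flink's actual profiles.
language: java
matrix:
  include:
    - jdk: openjdk7
      env: PROFILE="-Dscala.version=2.10 -Dhadoop.version=2.2.0"
    - jdk: openjdk7
      env: PROFILE="-Dscala.version=2.11 -Dhadoop.version=2.2.0"
    - jdk: oraclejdk8
      env: PROFILE="-Dscala.version=2.11 -Dhadoop.version=2.4.1"
script: mvn clean verify $PROFILE
```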
, Robert Metzger rmetz...@apache.org
wrote:
Hi,
I'm currently working on
https://issues.apache.org/jira/browse/FLINK-1605
and it's a hell of a mess.
I got almost everything working, except for the hadoop 2.0.0-alpha
profile.
The profile exists because google protobuf has a different
into POJOs. I use an event-driven
parser and retrieve most of the tweet into Java POJOs; it was tested on a 1TB
dataset for a Flink ETL job, and the performance was pretty good.
On Sun, Jan 25, 2015 at 7:38 PM, Robert Metzger rmetz...@apache.org
wrote:
Hey,
is it an input format for reading JSON
Hi Niraj,
Welcome to the Flink community ;)
I'm really excited that you want to contribute to our project, and since
you've asked for something in the security area, I actually have something
very concrete in mind.
We recently added some support for accessing (Kerberos) secured HDFS
clusters in
:-
Java POJOs for the tweet object and the nested objects, a parser class
using an event-driven approach, and the SimpleTweetInputFormat itself.
Would you guide me on how to push the code, just to save some time? :)
On Fri, Feb 27, 2015 at 10:42 AM, Robert Metzger rmetz...@apache.org
wrote:
Hi
+1 for Marton as a release manager. Thank you!
On Tue, Mar 3, 2015 at 7:56 PM, Henry Saputra henry.sapu...@gmail.com
wrote:
Ah, thanks Márton.
So we are charting toward a concept similar to Spark RDD staged execution
=P
I suppose there will be a runtime configuration or hint to tell the
Hi Johannes,
This change will allow users to pass a custom configuration to the
LocalExecutor: https://github.com/apache/flink/pull/427.
Is that what you're looking for?
On Wed, Mar 4, 2015 at 11:46 AM, Kirschnick, Johannes
johannes.kirschn...@tu-berlin.de wrote:
Hi Stephan,
I just came
this. It bypasses the singleton
GlobalConfiguration that I personally hope to get rid of.
On Wed, Mar 4, 2015 at 11:49 AM, Robert Metzger rmetz...@apache.org
wrote:
Hi Johannes,
This change will allow users to pass a custom configuration to the
LocalExecutor: https://github.com/apache/flink/pull/427
programs. They replied they were just
revising their license to allow that.
Should be possible now. Good idea to ping them again to make sure that it
is approved now and that it holds for code as well...
On Wed, Feb 11, 2015 at 2:22 PM, Robert Metzger rmetz...@apache.org
wrote:
Okay, thank you
Hi Santosh,
I'm not aware of any existing tools in Flink to process RDFs. However,
Flink should be useful for processing such data.
You can probably use an existing RDF parser for Java to get the data into
the system.
Best,
Robert
On Fri, Feb 27, 2015 at 4:48 PM, santosh_rajaguru
Hi,
thank you for your interest in contributing to Flink.
As you've seen from the discussion on the #421 pull request, there are some
feature requests by users regarding this feature.
If you are interested, feel free to pick up the work from there and extend
it.
Let me know if you have
Hi Karim,
also have a look at this old discussion from the user@ list:
http://apache-flink-incubator-user-mailing-list-archive.2336050.n4.nabble.com/read-gz-files-td760.html
On Sun, Feb 22, 2015 at 10:33 AM, Felix Neutatz neut...@googlemail.com
wrote:
Hi Karim,
you can use a Hadoop Input
, 2015, at 6:37 PM, Robert Metzger rmetz...@apache.org wrote:
Can you run mvn -version to verify that?
Maybe maven is using a different java version?
On Sun, Feb 22, 2015 at 2:05 PM, Dulaj Viduranga vidura...@icloud.com
wrote:
Hi,
But I’m using Oracle java 8 (javac 1.8.0_05
Hi Dulaj,
you are using an unsupported compiler to compile Flink. You can compile
Flink only with OpenJDK 6 or any JDK above version 6; the Oracle JDK 6
compiler contains a bug.
You can run Flink with all JREs 6+ (including the Oracle JDK 6).
I would recommend upgrading your Java version to 7
Hi,
you said in the other email thread that the error only occurs for
Wordcount, not for Kmeans.
Can you send me the commands for both examples?
I cannot really believe that there is a difference between the two jobs.
Can you also send us the contents of the jobmanager log file?
Best,
Robert
right now. I would
recommend
to use the 0.8.1 release for a stable experience.
Greetings,
Stephan
On Mon, Feb 23, 2015 at 7:39 PM, Robert Metzger rmetz...@apache.org
wrote:
Thank you for the quick reply.
The log you've sent is from the webclient. Can you also send the log
Hi,
There is a guide for new contributors here:
http://flink.apache.org/how-to-contribute.html
I would recommend running some examples to get familiar with Flink.
Regards,
Robert
On Tue, Feb 24, 2015 at 3:58 PM, Kanwarpal Singh kanwarpal...@gmail.com
wrote:
Hi,
I am working with Apache
To update the local repository, you have to execute the install goal.
I can recommend always doing a mvn clean install.
On Thu, Feb 26, 2015 at 10:11 AM, Matthias J. Sax
mj...@informatik.hu-berlin.de wrote:
Thanks for clarifying Marton!
I was on the latest build already. However, my local
Hey,
since you've already read the documentation, I can recommend checking out
some slides about Flink on Slideshare as well.
Here is our How to Contribute guide:
http://flink.apache.org/how-to-contribute.html
Best,
Robert
On Wed, Feb 25, 2015 at 11:09 AM, amit pal amit5...@gmail.com wrote:
Hey,
This little screencast shows how to run WordCount in IntelliJ.
Note that it will take a bit more time the first time because IntelliJ will
compile all required classes:
https://www.youtube.com/watch?v=JIV_rX-OIQMfeature=youtu.be
Let us know if you need more help.
Robert
On Fri, Mar 6, 2015
...@gmail.com
wrote:
I also like the Travis infrastructure. Thanks for bringing this up and
reaching out to the travis guys.
On Tue, Mar 24, 2015 at 3:38 PM, Robert Metzger rmetz...@apache.org
wrote:
Hi guys,
the build queue on travis is getting very very long. It seems
Hi,
you have to add guava as a dependency to your project.
With the shading, users won't see Flink depending on Guava anymore. This
allows them to use any Guava version they want.
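Relocating guava with the maven-shade-plugin looks roughly like this. This is a sketch; the shaded package name is an assumption, not necessarily the one Flink uses:

```xml
<!-- Sketch of a maven-shade-plugin relocation; the target package is illustrative. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <!-- Rewrites com.google.common.* references in the shaded jar. -->
            <pattern>com.google.common</pattern>
            <shadedPattern>org.apache.flink.shaded.com.google.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```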
On Mon, Mar 23, 2015 at 5:11 PM, Vasiliki Kalavri vasilikikala...@gmail.com
wrote:
Hi squirrels,
I
about fair
usage of example data?
- Henry
On Sat, Feb 28, 2015 at 12:07 PM, Robert Metzger rmetz...@apache.org
wrote:
I tried writing them twice but didn't receive an answer.
But given that Apache Calcite is also using airlift/tpch in its
dependencies, I would like to add
from TPC-H.
I think giving a trademark nudge to TPC-H in our NOTICE file should be good.
- Henry
On Mon, Mar 23, 2015 at 11:23 AM, Robert Metzger rmetz...@apache.org
wrote:
I've sent a message to admin-i...@tpc.org and never got an answer. (on
http://www.tpc.org/trademarks/ they list ad
Hi Matthias,
I think there is no utility method right now to cancel a running job. I'll
file a JIRA for this.
Have a look at how the CLI frontend is cancelling a job:
Hi All,
As discussed on this list, we've decided to create a release outside the
regular 3-month release schedule for the ApacheCon announcement and for
giving our users a convenient way of trying out the great new features.
This thread is not an official release vote. It is meant for testing
, Robert Metzger rmetz...@apache.org
javascript:; wrote:
We actually have 7 fixes for 0.8.2:
https://issues.apache.org/jira/issues/?jql=project%20%3D%20FLINK%20AND%20fixVersion%20%3D%200.8.2%20ORDER%20BY%20updated%20DESC%2C%20priority%20DESC%2C%20created%20ASC
So if you
Can you push the fix to the release-0.8 branch?
On Mon, Apr 13, 2015 at 12:38 AM, Fabian Hueske fhue...@gmail.com wrote:
We should also get the HadoopOF fix in.
On Apr 12, 2015 10:14 AM, Robert Metzger rmetz...@apache.org wrote:
Hi,
in this thread [1] we started a discussion whether we
Hi Stefan,
you can use Flink to load data into HDFS.
The CSV reader is suited for reading delimiter-separated text files into
the system. But you can also read data from a lot of other sources (avro,
jdbc, mongodb, hcatalog).
We don't have any utilities to make writing to HCatalog very easy, but
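Conceptually, a delimiter-separated reader splits each record on the field delimiter and converts the fields to the requested types. A tiny self-contained sketch of that idea (plain Java; this is an illustration, not Flink's actual CsvReader):

```java
import java.util.ArrayList;
import java.util.List;

public class CsvSketch {
    public static void main(String[] args) {
        // Two example records, delimiter-separated as in a CSV file.
        String[] lines = {"alice,30", "bob,25"};

        List<String> names = new ArrayList<>();
        int ageSum = 0;
        for (String line : lines) {
            // Split each record on the field delimiter and convert fields
            // to the requested types (here: String and int).
            String[] fields = line.split(",");
            names.add(fields[0]);
            ageSum += Integer.parseInt(fields[1]);
        }
        System.out.println(names + " ageSum=" + ageSum);
    }
}
```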
I think the mailing lists are indexed by Google as well, through various
web-based mirrors.
Stack Overflow is good for questions with clear answers, but if a user wants
more interactive feedback (with questions back and forth), the user@ list is
probably better.
Also, people who are experienced with Apache
Hi,
The Python API pull request [1] has been open for quite some time now.
I was wondering whether we are planning to merge it or not.
I took a closer look at the Python API a few weeks ago and I think we
should merge it to expose it to our users to collect feedback.
I hope by merging it, we'll
Hi,
looking at the last builds on Travis, you'll notice that our builds are in
a pretty bad state: https://travis-ci.org/apache/flink/builds.
It seems that the last 15 builds on master all failed.
These are the errors I saw + their status:
- Deadlock during cache up/download: I asked travis and
decided to keep Java 6, then
I guess we have to install a custom Maven version on Travis.
Best,
Max
On Tue, Apr 28, 2015 at 1:34 PM, Robert Metzger rmetz...@apache.org
wrote:
Hi,
looking at the last builds on Travis, you'll notice that our builds are
in
a pretty bad state: https
Looks good, thank you!
On Tue, Apr 28, 2015 at 1:18 PM, Ufuk Celebi u...@apache.org wrote:
Stephan and I came up with the following document about how to handle
failures of tasks and how to make sure we properly attribute the failure to
the correct root cause and suppress follow-up failures.
(Sent with iPhone)
On Apr 30, 2015, at 6:52 PM, Fabian Hueske fhue...@gmail.com wrote:
excellent! :-)
2015-04-30 11:47 GMT+02:00 Stephan Ewen se...@apache.org:
git for the win!
On Thu, Apr 30, 2015 at 11:39 AM, Robert Metzger rmetz...@apache.org
wrote:
Great
There is already support for inflate-compressed files and I introduced logic to
handle unsplittable formats.
Sent from my iPhone
On 30.04.2015, at 19:39, Stephan Ewen se...@apache.org wrote:
I think that would be very worthwhile :-) Happy to hear that you want to
contribute that!
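Decompressing a gzip stream itself is straightforward with the JDK's java.util.zip; the interesting part is that such streams are unsplittable, so a format must read each file from byte 0 as a whole. A minimal self-contained round-trip illustrating the mechanics (not Flink's input-format code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipRoundTrip {
    public static void main(String[] args) throws Exception {
        byte[] original = "hello flink".getBytes(StandardCharsets.UTF_8);

        // Compress into an in-memory buffer.
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(compressed)) {
            gz.write(original);
        }

        // Decompress: a gzip stream has no sync markers, so it cannot be
        // split across parallel readers; decoding must start at byte 0.
        ByteArrayOutputStream restored = new ByteArrayOutputStream();
        try (GZIPInputStream in = new GZIPInputStream(
                new ByteArrayInputStream(compressed.toByteArray()))) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                restored.write(buf, 0, n);
            }
        }
        System.out.println(new String(restored.toByteArray(), StandardCharsets.UTF_8));
    }
}
```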
Hi,
I don't know any Flink committer who is using Eclipse to develop Flink.
Setting up Eclipse for Flink is quite hard because we are a mixed
Scala/Java project.
Check out this page in the documentation:
http://ci.apache.org/projects/flink/flink-docs-master/internals/ide_setup.html
On Mon, May
: Robert Metzger [metrob...@gmail.com]
Sent: Thursday, April 30, 2015 21:01
To: dev@flink.apache.org
Subject: Re: Gzip support
There is already support for inflate-compressed files and I introduced
logic to handle unsplittable formats.
Sent from my iPhone
On 30.04.2015, at 19:39, Stephan
Hey Andra,
if you want, you can also fix the broken links yourself and open a pull
request with the fixes.
The documentation is located in the docs/ folder. There is also a bash
script which allows you to preview the changes locally (on localhost:4000)
... once you have installed jekyll.
On
I think it's fine if the front page looks a little bit more like
documentation.
Flink is no fancy hipster app; our users are either sysops running it on a
cluster or developers programming against APIs.
I think the new website will convince this target audience.
I agree with you that aligning
Hi Tobias,
sorry for the late reply.
Did you check out the code for starting Flink on Docker already?
https://github.com/apache/flink/tree/master/flink-contrib/docker-flink
Maybe that will save you some time ;)
Benchmarks using TPC-* data are quite popular.
Maybe this is also helpful for you:
Hi,
Thank you for starting the discussion Marton!
I would really like to merge the storm compat to our source repo. I think
that code which is not merged there will not get enough attention.
I'm against splitting flink-contrib into small maven modules. I totally
understand your reasoning (mixed
+1 for cutting a release soon.
The planning document looks reasonable ..
On Wed, May 13, 2015 at 1:37 PM, Stephan Ewen se...@apache.org wrote:
Hi Squirrels!
I think it is time we started finalizing the 0.9 release. The latest
milestone is a few weeks old and given the sheer amount of
, don't know why
it's only failing in Travis build. Not sure if I am missing something in
my
local environment.
Thanks,
Lokesh
On Thu, May 14, 2015 at 1:39 AM, Robert Metzger rmetz...@apache.org
wrote:
I think flink-spargel is missing the guava dependency.
On Thu, May 14, 2015
and trigger the build again. Is that right?
Thanks Robert, Aljoscha for super fast reply/help.
Thanks,
Lokesh
On Thu, May 14, 2015 at 8:39 AM, Robert Metzger rmetz...@apache.org
wrote:
However, you can only restart runs in your travis account, not on the
apache account (also used for validating
+1 ship it
On Fri, May 15, 2015 at 1:56 PM, Kostas Tzoumas ktzou...@apache.org wrote:
+1
On Fri, May 15, 2015 at 11:49 AM, Vasiliki Kalavri
vasilikikala...@gmail.com wrote:
+1 :))
On 15 May 2015 at 12:42, Ufuk Celebi u...@apache.org wrote:
On 14 May 2015, at 12:39, Vasiliki
I think flink-spargel is missing the guava dependency.
On Thu, May 14, 2015 at 8:18 AM, Aljoscha Krettek aljos...@apache.org
wrote:
@Robert, this seems like a problem with the Shading?
On Thu, May 14, 2015 at 5:41 AM, Lokesh Rajaram
rajaram.lok...@gmail.com wrote:
Thanks Aljioscha. I was
.
-Matthias
On 05/12/2015 10:46 AM, Robert Metzger wrote:
Hi,
Thank you for starting the discussion Marton!
I would really like to merge the storm compat to our source repo. I
think
that code which is not merged there will not get enough attention.
I'm against splitting
, 2015 at 11:49 PM, Aljoscha Krettek
aljos...@apache.org
wrote:
I will look into it once I have some time (end of this week, or next
week probably)
On Tue, Apr 14, 2015 at 8:51 PM, Robert Metzger rmetz...@apache.org
wrote:
Hey Nikolaas,
Thank you