Hi,
we would need a bit more background on the job you're running and
the cluster setup to help you. Could you please post this information on
the u...@flink.apache.org ML, where it belongs?
Cheers,
Till
On Tue, Apr 28, 2015 at 8:45 AM, 东方不败 wrote:
> I am trying out Flink. I thin
Hi Folks,
right now .print() on DataSet creates a DataSink that prints to the
local stdout of a TaskManager. This is not very helpful when running
in a distributed environment, especially when using something like an
interactive Scala Shell in a cluster.
I propose to change print() to use collect() and print the result on the client.
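A minimal sketch of the proposed behavior, assuming collect() ships the
result back to the client (simplified names and error handling, not the
actual Flink implementation):

    // Sketch of a method inside DataSet<T> (requires java.util.List):
    // print() eagerly executes the program via collect() and writes the
    // elements to the client's stdout instead of creating a DataSink on
    // the TaskManagers.
    public void print() throws Exception {
        List<T> elements = this.collect(); // triggers job execution
        for (T element : elements) {
            System.out.println(element);
        }
    }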
I think this is the 3rd discussion about this ;-)
AFAIK, the consensus in previous discussions was to do it exactly like
collect() and print to the client.
The only open question was how to deal with the break in the API. Right
now, programs contain an "execute()" call after the print(), which would
become superfluous once print() triggers execution itself.
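For illustration, a hypothetical before/after of a user program under this
change (based on the behavior discussed in this thread):

    // Before: print() only declares a sink; execute() runs the job.
    DataSet<String> data = env.fromElements("a", "b", "c");
    data.print();
    env.execute();

    // After: print() executes eagerly and prints on the client, so the
    // trailing execute() is unnecessary and would report that no new
    // sinks are defined.
    DataSet<String> data = env.fromElements("a", "b", "c");
    data.print(); // runs the program, prints to the client's stdout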
I think we should break the API and remove the unnecessary execute() calls.
On Tue, Apr 28, 2015 at 10:59 AM, Stephan Ewen wrote:
> I think this is the 3rd discussion about this ;-)
>
> AFAIK, the consensus in previous discussions was to do it exactly like
> collect() and print to the client.
>
>
+1 for the breaking change
Let's not do this any more than necessary, but this is a good case...
On Tue, Apr 28, 2015 at 11:23 AM, Aljoscha Krettek
wrote:
> I think we should break the API and remove the unnecessary execute() calls.
>
> On Tue, Apr 28, 2015 at 10:59 AM, Stephan Ewen wrote:
> >
Robert Metzger created FLINK-1952:
----------------------------------
Summary: Cannot run ConnectedComponents example: Could not
allocate a slot on instance
Key: FLINK-1952
URL: https://issues.apache.org/jira/browse/FLINK-1952
Project: Flink
On 28 Apr 2015, at 12:31, Stephan Ewen wrote:
> +1 for the breaking change
>
> Let's not do this any more than necessary, but this is a good case...
+1
Stephan and I came up with the following document about how to handle failures
of tasks and how to make sure we properly attribute the failure to the correct
root cause and suppress follow-up failures. The document defines the behaviour
that should be followed for different kinds of task failures.
+1 for the breaking change
2015-04-28 13:18 GMT+02:00 Ufuk Celebi :
>
> On 28 Apr 2015, at 12:31, Stephan Ewen wrote:
>
> > +1 for the breaking change
> >
> > Let's not do this any more than necessary, but this is a good case...
>
> +1
>
Hi,
looking at the last builds on Travis, you'll notice that our builds are in
a pretty bad state: https://travis-ci.org/apache/flink/builds.
It seems that the last 15 builds on master all failed.
These are the errors I saw + their status:
- Deadlock during cache up/download: I asked Travis and t
Hi Robert,
Thanks for investigating the Travis build issues. I'm very much in favor
of dropping Java 6. It's deprecated. All major Linux distributions are
shipping at least Java 7. It's a rare use case that requires a lot of
effort for us to maintain backwards compatibility.
I don't recall the d
I looked a bit closer into the Maven issue, maybe Travis is going to
provide a compatible Maven version for the Java6 build environment:
https://github.com/travis-ci/travis-ci/issues/3778
On Tue, Apr 28, 2015 at 1:49 PM, Maximilian Michels wrote:
> Hi Robert,
>
> Thanks for investigating the Tra
I agree, print should print on the client. However, let's add a big hint
to the error message for the case of a second execute(), pointing out that
the error may arise from a previous execution.
Instead of "No sinks defined", let's print "The Flink job didn't contain
any sinks. This may be because the sinks have already been executed by an
earlier print() or collect() call."
Hi all!
I have collected a few next steps to push the streaming API and runtime a
step further. Below is the list of issues, together with some comments
gathered from offline discussions.
Feel free to complement and extend.
Greetings,
Stephan
Sounds good, Max, let's do this in one fix.
We can maintain a counter in the ExecutionEnvironment that tracks how many
executions have happened.
In case of no prior execution, simply warn that no sinks are defined.
In case a prior execution happened, point out that nothing new is pending
execution
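A rough sketch of that mechanism, with hypothetical names (the real
ExecutionEnvironment internals may differ):

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch, not the real ExecutionEnvironment: a counter
    // distinguishes "no sinks were ever defined" from "the sinks were
    // already consumed by an eager print()/collect() execution".
    class EnvironmentSketch {
        private final List<Object> pendingSinks = new ArrayList<>();
        private int executionCount = 0;

        // called by eager operations such as print()/collect()
        void eagerExecute() {
            pendingSinks.clear();
            executionCount++;
        }

        void execute() {
            if (pendingSinks.isEmpty()) {
                if (executionCount == 0) {
                    throw new IllegalStateException(
                        "No data sinks have been defined in the program.");
                }
                throw new IllegalStateException(
                    "No new data sinks have been defined since the last "
                    + "execution. A previous print()/collect() call may "
                    + "already have executed the program.");
            }
            executionCount++;
            pendingSinks.clear();
            // ... submit the job ...
        }
    }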
Stephan Ewen created FLINK-1953:
----------------------------------
Summary: Rework Checkpoint Coordinator
Key: FLINK-1953
URL: https://issues.apache.org/jira/browse/FLINK-1953
Project: Flink
Issue Type: Bug
Component
On 28 Apr 2015, at 13:49, Maximilian Michels wrote:
> Hi Robert,
>
> Thanks for investigating the Travis build issues. I'm very much in favor
> of dropping Java 6. It's deprecated. All major Linux distributions are
> shipping at least Java 7. It's a rare use case that requires a lot of
> effor
+1 Very nice addition.
On Tue, Apr 28, 2015 at 2:12 PM, Stephan Ewen wrote:
> Sounds good, Max, let's do this in one fix.
>
> We can maintain a counter in the ExecutionEnvironment that tracks how many
> executions have happened.
> In case of no prior execution, simply warn that no sinks are defi
+1
On Tue, Apr 28, 2015 at 3:19 PM, Maximilian Michels wrote:
> +1 Very nice addition.
>
> On Tue, Apr 28, 2015 at 2:12 PM, Stephan Ewen wrote:
>
> > Sounds good, Max, let's do this in one fix.
> >
> > We can maintain a counter in the ExecutionEnvironment that tracks how
> many
> > executions h
Why aren't we adding a new method "printLocal()" that does the local
printing?
That would not break any existing code.
How much work would it be to implement this "properly", without
piggybacking on the accumulators?
I assume we would need to write the data to one or more partitions and
request t
>
> Why aren't we adding a new method "printLocal()" that does the local
> printing?
Simply because it is counter-intuitive that a regular print does not print
on the Client. Introducing a new method would not solve this confusion. I
think it's fine to break the API as long as we provide a m
Concerning the failed builds in the hadoop2.0.0-alpha profile I see a lot
of
07:47:57,927 ERROR akka.actor.ActorSystemImpl - Uncaught fatal error
from thread [flink-akka.remote.default-remote-dispatcher-7] shutting down
ActorSystem [flink]
java.lang.VerifyError: (class:
org/jboss/netty/channe
Implementing this "properly" is the same thing as supporting larger
collect() payloads and larger accumulators.
For that, data needs to go through the BLOB manager, rather than be part of
the actor messages.
Stephan
On Tue, Apr 28, 2015 at 5:27 PM, Maximilian Michels wrote:
> >
> > Why aren't
For now, the right way is to use an older Maven version, BUT I would
recommend, similar to Spargel, making 0.9 the last release for Java 6.
It is already end of life, and more and more developers are flocking
to Java 7 and Java 8.
- Henry
On Tue, Apr 28, 2015 at 4:34 AM, Robert Metzger wrote:
> Hi,
>
> lo
Agree, I would also like to drop Java 6 after 0.9.
On Tue, Apr 28, 2015 at 7:19 PM, Henry Saputra
wrote:
> For now, the right way is to use an older Maven version, BUT I would
> recommend, similar to Spargel, making 0.9 the last release for Java 6.
>
> It is already end of life and more and more developers
Following the lively exchange on Twitter (sic!), I would like to bring together
the Ignite and Flink communities to discuss the benefits of the integration and
see where we can start it.
We have this recently opened ticket
https://issues.apache.org/jira/browse/IGNITE-813
and Fabian has listed the f
Thanks Cos.
Hello Flink Community.
From the Ignite standpoint, we would definitely be interested in providing
the Flink processing API on top of Ignite Data Grid or IGFS. It would be
interesting to hear what steps would be required for such an integration,
or if there are other integration points.
D.
On Tu
Thanks Cos for starting this discussion, hi to the Ignite community!
Probably the easiest and most straightforward integration of Flink and
Ignite would be to go through Ignite's IGFS. Flink can easily be extended
to support additional filesystems.
However, the Flink community is currently also l
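Since IGFS also exposes a Hadoop-compatible FileSystem, one low-effort
path is to read IGFS paths through Flink's existing support for Hadoop
filesystems. A usage sketch, assuming Ignite's Hadoop adapter is on the
classpath and the igfs:// scheme is configured (the URI below is made up):

    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;

    public class IgfsReadSketch {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env =
                ExecutionEnvironment.getExecutionEnvironment();
            // Hypothetical IGFS URI; depends on the Ignite setup.
            DataSet<String> lines =
                env.readTextFile("igfs://igfs@localhost:10500/data/input.txt");
            lines.print();
        }
    }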
On Tue, Apr 28, 2015 at 5:55 PM, Fabian Hueske wrote:
> Thanks Cos for starting this discussion, hi to the Ignite community!
>
> Probably the easiest and most straightforward integration of Flink and
> Ignite would be to go through Ignite's IGFS. Flink can easily be extended
> to support addition