[jira] [Created] (FLINK-5303) Add CUBE/ROLLUP/GROUPING SETS operator in SQL

2016-12-08 Thread Alexander Chermenin (JIRA)
Alexander Chermenin created FLINK-5303:
--

 Summary: Add CUBE/ROLLUP/GROUPING SETS operator in SQL
 Key: FLINK-5303
 URL: https://issues.apache.org/jira/browse/FLINK-5303
 Project: Flink
  Issue Type: New Feature
  Components: Documentation, Table API & SQL
Reporter: Alexander Chermenin
Assignee: Alexander Chermenin


Add support for such operators as CUBE, ROLLUP and GROUPING SETS in SQL.
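For context, {{CUBE(c1..cn)}} expands to all 2^n grouping sets (so {{GROUP BY CUBE(a, b)}} aggregates over (a, b), (a), (b), and the empty set), while {{ROLLUP(c1..cn)}} expands to the n+1 prefixes. A small standalone sketch of that expansion (illustration only, not Flink or Calcite code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustration only (not Flink/Calcite code): enumerate the grouping sets
// that CUBE and ROLLUP expand to.
public class GroupingSetsExpansion {

    // CUBE(c1..cn) -> all 2^n subsets of the grouping columns.
    public static List<List<String>> cube(List<String> columns) {
        List<List<String>> sets = new ArrayList<>();
        int n = columns.size();
        for (int mask = 0; mask < (1 << n); mask++) {
            List<String> set = new ArrayList<>();
            for (int i = 0; i < n; i++) {
                if ((mask & (1 << i)) != 0) {
                    set.add(columns.get(i));
                }
            }
            sets.add(set);
        }
        return sets;
    }

    // ROLLUP(c1..cn) -> the n+1 prefixes (c1..cn), (c1..cn-1), ..., ().
    public static List<List<String>> rollup(List<String> columns) {
        List<List<String>> sets = new ArrayList<>();
        for (int len = columns.size(); len >= 0; len--) {
            sets.add(new ArrayList<>(columns.subList(0, len)));
        }
        return sets;
    }

    public static void main(String[] args) {
        System.out.println(cube(Arrays.asList("a", "b")));   // 4 grouping sets
        System.out.println(rollup(Arrays.asList("a", "b"))); // 3 grouping sets
    }
}
```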



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] (FLINK-2980) Add support for CUBE / ROLLUP / GROUPING SETS in Table API

2016-12-08 Thread Alexander Chermenin
Hi Fabian,

thanks for your comments.
I think that's a good approach.
I will create a new issue today and change my PR after that.

Regards,
Alexander

On Thu, Dec 8, 2016 at 11:40 PM, Fabian Hueske  wrote:

> Hi Alexander,
>
> thanks for the PR!
> It might take some time until it is reviewed. We have received quite a few
> PRs for the Table API lately and there is kind of a review queue at the
> moment.
>
> The scope of the current issue (FLINK-2980) is to support grouping sets,
> cube, and rollup in the Table API while your PR adds support for SQL.
> I think it would be good if you could open a JIRA issue to support grouping
> sets, cube, and rollup in SQL and "retarget" your PR to the new issue
> (basically change the title + commit messages)
>
> For now, support in SQL is good enough, IMO. Adding the feature to the Table
> API would blow up the API quite a bit. I think we should first see if there
> is interest in this feature.
> The good news is that once we support it in SQL, we "only" need to add
> the API and validation part for the Table API.
>
> What do you think?
> Best, Fabian
>
>
> 2016-12-08 13:11 GMT+01:00 Alexander Chermenin :
>
> > Hi folks!
> >
> > I would like to discuss the following question:
> > In PR #2965 (https://github.com/apache/flink/pull/2965) I have added
> > support for such operators as CUBE / ROLLUP / GROUPING SETS in SQL
> queries.
> > Should we also support the same functionality in the Table API? And if so,
> > how would it look?
> > Something like this `table.groupingBy("cube(...)").select("...")`, or
> > `table.cube(...).select(...)` or something else?
> >
> > Regards,
> > Alexander
> >
>


[jira] [Created] (FLINK-5302) Log failure cause at Execution

2016-12-08 Thread Ufuk Celebi (JIRA)
Ufuk Celebi created FLINK-5302:
--

 Summary: Log failure cause at Execution 
 Key: FLINK-5302
 URL: https://issues.apache.org/jira/browse/FLINK-5302
 Project: Flink
  Issue Type: Improvement
Reporter: Ufuk Celebi
 Fix For: 1.2.0, 1.1.4


It can be helpful to log the failure cause that made an {{Execution}} switch to 
state {{FAILED}}. We currently only see a "root cause" logged on the 
JobManager, which happens to be the first failure cause that makes it to 
{{ExecutionGraph#fail()}}. This depends on the relative timing of messages. For 
debugging it can be helpful to have all causes available.





Re: [DISCUSS] (FLINK-2980) Add support for CUBE / ROLLUP / GROUPING SETS in Table API

2016-12-08 Thread Fabian Hueske
Hi Alexander,

thanks for the PR!
It might take some time until it is reviewed. We have received quite a few
PRs for the Table API lately and there is kind of a review queue at the
moment.

The scope of the current issue (FLINK-2980) is to support grouping sets,
cube, and rollup in the Table API while your PR adds support for SQL.
I think it would be good if you could open a JIRA issue to support grouping
sets, cube, and rollup in SQL and "retarget" your PR to the new issue
(basically change the title + commit messages)

For now, support in SQL is good enough, IMO. Adding the feature to the Table
API would blow up the API quite a bit. I think we should first see if there
is interest in this feature.
The good news is that once we support it in SQL, we "only" need to add
the API and validation part for the Table API.

What do you think?
Best, Fabian


2016-12-08 13:11 GMT+01:00 Alexander Chermenin :

> Hi folks!
>
> I would like to discuss the following question:
> In PR #2965 (https://github.com/apache/flink/pull/2965) I have added
> support for such operators as CUBE / ROLLUP / GROUPING SETS in SQL queries.
> Should we also support the same functionality in the Table API? And if so,
> how would it look?
> Something like this `table.groupingBy("cube(...)").select("...")`, or
> `table.cube(...).select(...)` or something else?
>
> Regards,
> Alexander
>


[jira] [Created] (FLINK-5301) Can't upload job via Web UI when using a proxy

2016-12-08 Thread Mischa Krüger (JIRA)
Mischa Krüger created FLINK-5301:


 Summary: Can't upload job via Web UI when using a proxy
 Key: FLINK-5301
 URL: https://issues.apache.org/jira/browse/FLINK-5301
 Project: Flink
  Issue Type: Bug
  Components: Webfrontend
Reporter: Mischa Krüger


Using DC/OS with the Flink service currently in development 
(https://github.com/mesosphere/dcos-flink-service). For reproduction:
1. Install a DC/OS cluster
2. Follow the instructions in the mentioned repo for setting up a universe server 
with the Flink app.
3. Install the Flink app via the universe
4. Access the Web UI
5. Upload a job

Experience:
The upload reaches 100%, and then says "Saving..." forever.

Upload works when using ssh forwarding to access the node directly serving the 
Flink Web UI.

DC/OS uses a proxy to access the Web UI. The webpage is delivered by a 
component called the "Admin Router".

Side note:
Interestingly, the new favicon also does not appear.





[jira] [Created] (FLINK-5300) FileStateHandle#discard & FsCheckpointStateOutputStream#close try to delete non-empty directory

2016-12-08 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-5300:


 Summary: FileStateHandle#discard & 
FsCheckpointStateOutputStream#close try to delete non-empty directory
 Key: FLINK-5300
 URL: https://issues.apache.org/jira/browse/FLINK-5300
 Project: Flink
  Issue Type: Improvement
  Components: State Backends, Checkpointing
Affects Versions: 1.1.3, 1.2.0
Reporter: Till Rohrmann
Priority: Minor


Flink's behaviour when deleting {{FileStateHandles}} and closing 
{{FsCheckpointStateOutputStream}} always triggers a delete operation on the 
parent directory. This call will often fail because the directory still 
contains other files.

A user reported that the SRE of their Hadoop cluster noticed this behaviour in 
the logs. It might be friendlier to the system if we first checked whether the 
directory is empty. This would prevent many error messages from appearing in 
the Hadoop logs.
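The check described above could look roughly like the following standalone sketch (an assumption about the fix, using plain {{java.nio.file}} rather than Flink's {{FileSystem}} abstraction):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the proposed behaviour: delete the parent directory only if it
// is already empty, instead of issuing a delete that is expected to fail.
public class ParentDirectoryCleanup {

    /** Returns true if the directory was empty and has been deleted. */
    public static boolean deleteIfEmpty(Path directory) throws IOException {
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(directory)) {
            if (entries.iterator().hasNext()) {
                // Other checkpoint files still live here; skip the delete so
                // no error shows up in the (Hadoop) logs.
                return false;
            }
        }
        Files.delete(directory);
        return true;
    }
}
```

Note that such a check is inherently racy if several writers share the directory, so a concurrently failing delete would still need to be tolerated.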





[jira] [Created] (FLINK-5299) DataStream support for arrays as keys

2016-12-08 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-5299:
---

 Summary: DataStream support for arrays as keys
 Key: FLINK-5299
 URL: https://issues.apache.org/jira/browse/FLINK-5299
 Project: Flink
  Issue Type: Improvement
  Components: DataStream API
Affects Versions: 1.2.0
Reporter: Chesnay Schepler


It is currently not possible to use an array as a key in the DataStream API, as 
it relies on hash codes, which aren't stable for arrays.

One way to implement this would be to check for the key type and inject a 
KeySelector that calls "Arrays.hashCode(values)".
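To illustrate the instability mentioned above (a standalone sketch; {{selectKey}} is a hypothetical stand-in for the injected KeySelector, not a Flink API):

```java
import java.util.Arrays;

// Arrays inherit identity-based Object#hashCode, so two arrays with equal
// contents normally report different hash codes; Arrays.hashCode is computed
// from the contents and is therefore stable across equal arrays.
public class ArrayKeyHashing {

    // Hypothetical stand-in for the KeySelector the issue proposes to inject.
    public static int selectKey(int[] values) {
        return Arrays.hashCode(values);
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3};
        int[] b = {1, 2, 3};
        // Identity hash codes: almost certainly different for a and b.
        System.out.println(a.hashCode() == b.hashCode());
        // Content-based hash codes: always equal for equal contents.
        System.out.println(selectKey(a) == selectKey(b)); // prints true
    }
}
```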





[jira] [Created] (FLINK-5298) Web UI crashes when TM log not existent

2016-12-08 Thread Mischa Krüger (JIRA)
Mischa Krüger created FLINK-5298:


 Summary: Web UI crashes when TM log not existent
 Key: FLINK-5298
 URL: https://issues.apache.org/jira/browse/FLINK-5298
 Project: Flink
  Issue Type: Bug
Reporter: Mischa Krüger


{code}
java.io.FileNotFoundException: flink-taskmanager.out (No such file or directory)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at org.apache.flink.runtime.taskmanager.TaskManager.org$apache$flink$runtime$taskmanager$TaskManager$$handleRequestTaskManagerLog(TaskManager.scala:833)
at org.apache.flink.runtime.taskmanager.TaskManager$$anonfun$handleMessage$1.applyOrElse(TaskManager.scala:340)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
at org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:44)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
at org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
at org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
at org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
at org.apache.flink.runtime.taskmanager.TaskManager.aroundReceive(TaskManager.scala:122)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
at akka.actor.ActorCell.invoke(ActorCell.scala:487)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:254)
at akka.dispatch.Mailbox.run(Mailbox.scala:221)
at akka.dispatch.Mailbox.exec(Mailbox.scala:231)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
2016-12-08 16:45:14,995 INFO  org.apache.flink.mesos.runtime.clusterframework.MesosTaskManager  - Stopping TaskManager akka://flink/user/taskmanager#1361882659.
2016-12-08 16:45:14,995 INFO  org.apache.flink.mesos.runtime.clusterframework.MesosTaskManager  - Disassociating from JobManager
2016-12-08 16:45:14,997 INFO  org.apache.flink.runtime.blob.BlobCache  - Shutting down BlobCache
2016-12-08 16:45:15,006 INFO  org.apache.flink.runtime.io.disk.iomanager.IOManager  - I/O manager removed spill file directory /tmp/flink-io-e61f717b-630c-4a2a-b3e3-62ccb40aa2f9
2016-12-08 16:45:15,006 INFO  org.apache.flink.runtime.io.network.NetworkEnvironment  - Shutting down the network environment and its components.
2016-12-08 16:45:15,008 INFO  org.apache.flink.runtime.io.network.netty.NettyClient  - Successful shutdown (took 1 ms).
2016-12-08 16:45:15,009 INFO  org.apache.flink.runtime.io.network.netty.NettyServer  - Successful shutdown (took 0 ms).
2016-12-08 16:45:15,020 INFO  org.apache.flink.mesos.runtime.clusterframework.MesosTaskManager  - Task manager akka://flink/user/taskmanager is completely shut down.
2016-12-08 16:45:15,023 ERROR org.apache.flink.runtime.taskmanager.TaskManager  - Actor akka://flink/user/taskmanager#1361882659 terminated, stopping process...
{code}





Re: [DISCUSS] Schedule and Scope for Flink 1.2

2016-12-08 Thread Till Rohrmann
We might also think about addressing:

Relocate Flink's Hadoop dependency and its transitive dependencies
(FLINK-5297),

because a user reported that they cannot use the system due to a dependency
issue.

Cheers,
Till

On Thu, Dec 8, 2016 at 10:17 AM, Robert Metzger  wrote:

> Thank you for your responses Max and Vijay.
> So I understand that Mesos is basically ready for the 1.2 release.
>
> Regarding the security changes: Having Hadoop, Kafka and Zookeeper
> integration is a big improvement and a much requested feature. I'm super
> excited to have that in :)
> Are all the other security changes useless without authorization, or could
> we consider releasing 1.2 without it? (Another way to think about it: How
> close is the PR to being merged? If it's just a final review and we are done,
> I would actually try to get it in. But if there's a lot of uncertainty, I
> would prefer to move it to the next release.)
>
> I agree regarding FLINK-2821, that's important for many deployments.
>
>
> The updated list:
> - RESOLVED dynamic Scaling / Key Groups (FLINK-3755)
> - RESOLVED Add Rescalable Non-Partitioned State (FLINK-4379)
> - RESOLVED [Split for 1.3] Add Flink 1.1 savepoint backwards compatibility
> (FLINK-4797)
> - RESOLVED [Split for 1.3] Integrate Flink with Apache Mesos (FLINK-1984)
> - UNRESOLVED Secure Data Access (FLINK-3930)
> - RESOLVED Queryable State (FLINK-3779)
> - RESOLVED Metrics in Webinterface (FLINK-4389)
> - RESOLVED Kafka 0.10 support (FLINK-4035)
> - RESOLVED Table API: Group Window Aggregates (FLINK-4691, FLIP-11)
> - RESOLVED Table API: Scalar Functions (FLINK-3097)
> Added by Stephan:
> - NON-BLOCKING [Pending PR] Provide support for asynchronous operations
> over streams (FLINK-4391)
> - ONGOING [beginning of next week] Unify Savepoints and Checkpoints
> (FLINK-4484)
> Added by Fabian:
> - ONGOING [Pending PR] Clean up the packages of the Table API (FLINK-4704)
> - UNRESOLVED Move Row to flink-core (FLINK-5186)
> Added by Max:
> - ONGOING [Pending PR] Change Akka configuration to allow accessing actors
> from different URLs (FLINK-2821)
>
>
> On Wed, Dec 7, 2016 at 12:40 PM, Maximilian Michels 
> wrote:
>
> > > - UNRESOLVED Integrate Flink with Apache Mesos (FLINK-1984)
> >
> > The initial integration is already completed with the last issues
> > being resolved in the Mesos component:
> > https://issues.apache.org/jira/browse/FLINK/component/12331068/ The
> > implementation will be further refined after the next release and with
> > the merge of FLIP-6. We're missing documentation on how to deploy a
> > Flink Mesos cluster.
> >
> > > - UNRESOLVED Secure Data Access (FLINK-3930)
> >
> > We have support for Kerberos authentication with Hadoop, Kafka,
> > ZooKeeper, and all services supporting JAAS. Additionally, we
> > implemented SSL encryption for all communication paths, i.e. the web
> > interface, Akka, Netty, and the BlobServer. We still lack support for
> > authorization: Vijay's PR is blocked because we haven't found time to
> > properly review the sensitive network changes.
> >
> > I'd like to add the Akka changes for containered environments which
> > should be ready by the end of the week:
> > https://issues.apache.org/jira/browse/FLINK-2821
> >
> > -Max
> >
> > On Tue, Dec 6, 2016 at 8:57 PM, Vijay 
> > wrote:
> > >>>Secure Data Access (FLINK-3930)
> > >
> > > The PR for the work is still under review and I hope this could be
> > included in the release.
> > >
> > > Regards,
> > > Vijay
> > >
> > > Sent from my iPhone
> > >
> > >> On Dec 6, 2016, at 11:51 AM, Robert Metzger 
> > wrote:
> > >>
> > >> UNRESOLVED Secure Data Access (FLINK-3930)
> > >
> >
>


[jira] [Created] (FLINK-5297) Relocate Flink's Hadoop dependency and its transitive dependencies

2016-12-08 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-5297:


 Summary: Relocate Flink's Hadoop dependency and its transitive 
dependencies
 Key: FLINK-5297
 URL: https://issues.apache.org/jira/browse/FLINK-5297
 Project: Flink
  Issue Type: Improvement
  Components: Build System
Affects Versions: 1.2.0
Reporter: Till Rohrmann
 Fix For: 1.2.0


A user reported that they have a dependency conflict with one of Hadoop's 
dependencies. More concretely, it is the {{aws-java-sdk-*}} dependency, which is 
not backward compatible. The user depends on a newer {{aws-java-sdk}} 
version, which cannot be used with Hadoop 2.7.

A solution for future dependency conflicts could be to relocate Hadoop's 
dependencies, or even the entire Hadoop dependency.
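One common way to do such a relocation (a sketch only; the module layout and the shaded package prefix below are assumptions, not a decided layout) is the maven-shade-plugin's relocation feature, which rewrites the conflicting packages into a Flink-private namespace:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <!-- Hypothetical example: hide Hadoop's aws-java-sdk classes so
               they cannot clash with a user's own aws-java-sdk version. -->
          <relocation>
            <pattern>com.amazonaws</pattern>
            <shadedPattern>org.apache.flink.hadoop.shaded.com.amazonaws</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```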





[jira] [Created] (FLINK-5296) Expose the old AlignedWindowOperators to the user through explicit commands.

2016-12-08 Thread Kostas Kloudas (JIRA)
Kostas Kloudas created FLINK-5296:
-

 Summary: Expose the old AlignedWindowOperators to the user through 
explicit commands.
 Key: FLINK-5296
 URL: https://issues.apache.org/jira/browse/FLINK-5296
 Project: Flink
  Issue Type: Bug
  Components: Windowing Operators
Affects Versions: 1.2.0
Reporter: Kostas Kloudas
Assignee: Kostas Kloudas
 Fix For: 1.2.0








[jira] [Created] (FLINK-5295) Migrate the AlignedWindowOperators to the WindowOperator and make it backwards compatible.

2016-12-08 Thread Kostas Kloudas (JIRA)
Kostas Kloudas created FLINK-5295:
-

 Summary: Migrate the AlignedWindowOperators to the WindowOperator 
and make it backwards compatible.
 Key: FLINK-5295
 URL: https://issues.apache.org/jira/browse/FLINK-5295
 Project: Flink
  Issue Type: Bug
  Components: Windowing Operators
Affects Versions: 1.2.0
Reporter: Kostas Kloudas
Assignee: Kostas Kloudas
 Fix For: 1.2.0








[jira] [Created] (FLINK-5294) Make the WindowOperator backwards compatible.

2016-12-08 Thread Kostas Kloudas (JIRA)
Kostas Kloudas created FLINK-5294:
-

 Summary: Make the WindowOperator backwards compatible.
 Key: FLINK-5294
 URL: https://issues.apache.org/jira/browse/FLINK-5294
 Project: Flink
  Issue Type: Bug
  Components: Windowing Operators
Affects Versions: 1.2.0
Reporter: Kostas Kloudas
 Fix For: 1.2.0








[jira] [Created] (FLINK-5293) Make the Kafka consumer backwards compatible.

2016-12-08 Thread Kostas Kloudas (JIRA)
Kostas Kloudas created FLINK-5293:
-

 Summary: Make the Kafka consumer backwards compatible.
 Key: FLINK-5293
 URL: https://issues.apache.org/jira/browse/FLINK-5293
 Project: Flink
  Issue Type: Bug
  Components: Kafka Connector
Affects Versions: 1.2.0
Reporter: Kostas Kloudas
Assignee: Kostas Kloudas
 Fix For: 1.2.0








[jira] [Created] (FLINK-5292) Make the remaining operators backwards compatible.

2016-12-08 Thread Kostas Kloudas (JIRA)
Kostas Kloudas created FLINK-5292:
-

 Summary: Make the remaining operators backwards compatible.
 Key: FLINK-5292
 URL: https://issues.apache.org/jira/browse/FLINK-5292
 Project: Flink
  Issue Type: Bug
  Components: DataStream API
Affects Versions: 1.2.0
Reporter: Kostas Kloudas
Assignee: Kostas Kloudas
 Fix For: 1.2.0








[jira] [Created] (FLINK-5291) Ensure backwards compatibility of the hashes used to generate JobVertexIds

2016-12-08 Thread Stefan Richter (JIRA)
Stefan Richter created FLINK-5291:
-

 Summary: Ensure backwards compatibility of the hashes used to 
generate JobVertexIds
 Key: FLINK-5291
 URL: https://issues.apache.org/jira/browse/FLINK-5291
 Project: Flink
  Issue Type: Sub-task
Reporter: Stefan Richter
Assignee: Stefan Richter


The way in which hashes for JobVertexIds are generated changed between Flink 
1.1 and 1.2 (parallelism was considered for the hash in 1.1). We need to be 
backwards compatible with the old JobVertexId generation so that we can still 
assign state from old savepoints.
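As a toy illustration of why the generated ids differ (this is not Flink's actual hashing, which is based on the job graph structure; the helper below only shows the effect of mixing parallelism into the hash input):

```java
import java.util.Objects;

// Toy model: if parallelism is part of the hash input (as in Flink 1.1), the
// resulting id changes whenever the parallelism changes, and it will not, in
// general, match a hash computed without parallelism (as in Flink 1.2). To
// assign state from a 1.1 savepoint, 1.2 must also be able to reproduce the
// legacy hash.
public class JobVertexIdHashing {

    public static int legacyHash(String operatorDescription, int parallelism) {
        return Objects.hash(operatorDescription, parallelism);
    }

    public static int currentHash(String operatorDescription) {
        return Objects.hash(operatorDescription);
    }
}
```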





[jira] [Created] (FLINK-5290) Ensure backwards compatibility of the hashes used to generate JobVertexIds

2016-12-08 Thread Stefan Richter (JIRA)
Stefan Richter created FLINK-5290:
-

 Summary: Ensure backwards compatibility of the hashes used to 
generate JobVertexIds
 Key: FLINK-5290
 URL: https://issues.apache.org/jira/browse/FLINK-5290
 Project: Flink
  Issue Type: Sub-task
Reporter: Stefan Richter
Assignee: Stefan Richter


The way in which hashes for JobVertexIds are generated changed between Flink 
1.1 and 1.2 (parallelism was considered for the hash in 1.1). We need to be 
backwards compatible with the old JobVertexId generation so that we can still 
assign state from old savepoints.





[jira] [Created] (FLINK-5289) NPE when using value state on non-keyed stream

2016-12-08 Thread Timo Walther (JIRA)
Timo Walther created FLINK-5289:
---

 Summary: NPE when using value state on non-keyed stream
 Key: FLINK-5289
 URL: https://issues.apache.org/jira/browse/FLINK-5289
 Project: Flink
  Issue Type: Bug
  Components: Streaming
Reporter: Timo Walther


Using a {{ValueStateDescriptor}} and 
{{getRuntimeContext().getState(descriptor)}} on a non-keyed stream leads to a 
{{NullPointerException}}, which is not very helpful for users:

{code}
java.lang.NullPointerException
at org.apache.flink.streaming.api.operators.StreamingRuntimeContext.getState(StreamingRuntimeContext.java:109)
{code}
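A standalone sketch of the kind of improvement the issue suggests (a hypothetical helper, not Flink's actual StreamingRuntimeContext code): fail fast with a descriptive exception instead of an NPE:

```java
// Hypothetical sketch (not Flink's actual fix): instead of dereferencing a
// null keyed-state backend and surfacing a bare NullPointerException, check
// first and raise an error message that tells the user what to change.
public class KeyedStateCheck {

    public static <B> B checkKeyedStateAvailable(B keyedStateBackend, String stateName) {
        if (keyedStateBackend == null) {
            throw new IllegalStateException(
                    "Keyed state '" + stateName + "' can only be used on a keyed stream. "
                            + "Call keyBy(...) before accessing partitioned state.");
        }
        return keyedStateBackend;
    }
}
```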





[DISCUSS] (FLINK-2980) Add support for CUBE / ROLLUP / GROUPING SETS in Table API

2016-12-08 Thread Alexander Chermenin
Hi folks!

I would like to discuss the following question:
In PR #2965 (https://github.com/apache/flink/pull/2965) I have added
support for such operators as CUBE / ROLLUP / GROUPING SETS in SQL queries.
Should we also support the same functionality in the Table API? And if so,
how would it look?
Something like this `table.groupingBy("cube(...)").select("...")`, or
`table.cube(...).select(...)` or something else?

Regards,
Alexander


[jira] [Created] (FLINK-5287) Test randomly fails with wrong result: testWithAtomic2[Execution mode = CLUSTER](org.apache.flink.api.scala.operators.JoinITCase)

2016-12-08 Thread Robert Metzger (JIRA)
Robert Metzger created FLINK-5287:
-

 Summary: Test randomly fails with wrong result: 
testWithAtomic2[Execution mode = 
CLUSTER](org.apache.flink.api.scala.operators.JoinITCase)
 Key: FLINK-5287
 URL: https://issues.apache.org/jira/browse/FLINK-5287
 Project: Flink
  Issue Type: Bug
  Components: Scala API
Reporter: Robert Metzger


I encountered this issue here: 
https://api.travis-ci.org/jobs/182009802/log.txt?deansi=true

{code}
testWithAtomic2[Execution mode = CLUSTER](org.apache.flink.api.scala.operators.JoinITCase)  Time elapsed: 0.237 sec  <<< FAILURE!
java.lang.AssertionError: Different number of lines in expected and obtained result. expected:<2> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.apache.flink.test.util.TestBaseUtils.compareResultsByLinesInMemory(TestBaseUtils.java:316)
at org.apache.flink.test.util.TestBaseUtils.compareResultsByLinesInMemory(TestBaseUtils.java:302)
at org.apache.flink.api.scala.operators.JoinITCase.after(JoinITCase.scala:51)
{code}





[jira] [Created] (FLINK-5286) Build instability: WordCountSubclassPOJOITCase fails with IOException: Stream Closed

2016-12-08 Thread Robert Metzger (JIRA)
Robert Metzger created FLINK-5286:
-

 Summary: Build instability: WordCountSubclassPOJOITCase fails with 
IOException: Stream Closed
 Key: FLINK-5286
 URL: https://issues.apache.org/jira/browse/FLINK-5286
 Project: Flink
  Issue Type: Bug
  Components: Local Runtime
Affects Versions: 1.2.0
Reporter: Robert Metzger


I saw this failure on our recent master:

{code}
Running org.apache.flink.test.exampleJavaPrograms.WordCountSubclassPOJOITCase
Running org.apache.flink.test.exampleJavaPrograms.PageRankITCase
Job execution failed.
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply$mcV$sp(JobManager.scala:905)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply(JobManager.scala:848)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply(JobManager.scala:848)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.io.IOException: Stream Closed
at java.io.FileInputStream.readBytes(Native Method)
at java.io.FileInputStream.read(FileInputStream.java:272)
at org.apache.flink.core.fs.local.LocalDataInputStream.read(LocalDataInputStream.java:72)
at org.apache.flink.core.fs.FSDataInputStreamWrapper.read(FSDataInputStreamWrapper.java:59)
at org.apache.flink.api.common.io.DelimitedInputFormat.fillBuffer(DelimitedInputFormat.java:619)
at org.apache.flink.api.common.io.DelimitedInputFormat.readLine(DelimitedInputFormat.java:513)
at org.apache.flink.api.common.io.DelimitedInputFormat.nextRecord(DelimitedInputFormat.java:479)
at org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:158)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:653)
at java.lang.Thread.run(Thread.java:745)
Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 3.231 sec <<< FAILURE! - in org.apache.flink.test.exampleJavaPrograms.WordCountSubclassPOJOITCase
testJobWithObjectReuse(org.apache.flink.test.exampleJavaPrograms.WordCountSubclassPOJOITCase)  Time elapsed: 0.453 sec  <<< FAILURE!
java.lang.AssertionError: Error while calling the test program: Job execution failed.
at org.junit.Assert.fail(Assert.java:88)
at org.apache.flink.test.util.JavaProgramTestBase.testJobWithObjectReuse(JavaProgramTestBase.java:128)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
at