[GitHub] flink pull request: [FLINK-3192] Add explain support to print ast ...

2016-01-06 Thread fhueske
Github user fhueske commented on the pull request:

https://github.com/apache/flink/pull/1477#issuecomment-169270202
  
Thanks @gallenvara. 
Flink uses Jackson to handle JSON data. You can use it to parse the JSON 
String.

Looking forward to your update :-)
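
For reference, a minimal, self-contained sketch of parsing a JSON string with 
Jackson's `ObjectMapper`; the `nodes`/`id`/`type` field names below are made up 
for illustration and are not the actual plan format:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class PlanJsonSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical JSON standing in for the plan string to be parsed.
        String json = "{\"nodes\":[{\"id\":1,\"type\":\"source\"},{\"id\":2,\"type\":\"sink\"}]}";

        ObjectMapper mapper = new ObjectMapper();
        JsonNode root = mapper.readTree(json);

        // Walk the (assumed) "nodes" array and print each node.
        for (JsonNode node : root.get("nodes")) {
            System.out.println(node.get("id").asInt() + " -> " + node.get("type").asText());
        }
    }
}
```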


---


[GitHub] flink pull request: [FLINK-2445] Add tests for HadoopOutputFormats

2016-01-06 Thread ajaybhat
GitHub user ajaybhat opened a pull request:

https://github.com/apache/flink/pull/1486

[FLINK-2445] Add tests for HadoopOutputFormats



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ajaybhat/flink test-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1486.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1486


commit be59077d5f5df099c1792329631c08441cc25345
Author: Ajay Bhat 
Date:   2016-01-06T08:56:46Z

[FLINK-2445] Add tests for HadoopOutputFormats




---


[jira] [Commented] (FLINK-2445) Add tests for HadoopOutputFormats

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085296#comment-15085296
 ] 

ASF GitHub Bot commented on FLINK-2445:
---

GitHub user ajaybhat opened a pull request:

https://github.com/apache/flink/pull/1486

[FLINK-2445] Add tests for HadoopOutputFormats



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ajaybhat/flink test-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1486.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1486


commit be59077d5f5df099c1792329631c08441cc25345
Author: Ajay Bhat 
Date:   2016-01-06T08:56:46Z

[FLINK-2445] Add tests for HadoopOutputFormats




> Add tests for HadoopOutputFormats
> -
>
> Key: FLINK-2445
> URL: https://issues.apache.org/jira/browse/FLINK-2445
> Project: Flink
>  Issue Type: Test
>  Components: Hadoop Compatibility, Tests
>Affects Versions: 0.9.1, 0.10.0
>Reporter: Fabian Hueske
>  Labels: starter
>
> The HadoopOutputFormats and HadoopOutputFormatBase classes are not 
> sufficiently covered by unit tests.
> We need tests that ensure that the methods of the wrapped Hadoop 
> OutputFormats are correctly called. 
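
A minimal sketch of the kind of delegation test meant here, using a mock (the 
{{Wrapper}} class below is a simplified stand-in, not Flink's actual 
{{HadoopOutputFormat}}):

{code}
import static org.mockito.Mockito.*;

import org.apache.hadoop.mapred.RecordWriter;
import org.junit.Test;

public class DelegationTestSketch {

    /** Minimal stand-in for a wrapper that forwards records to a Hadoop RecordWriter. */
    static class Wrapper<K, V> {
        private final RecordWriter<K, V> writer;

        Wrapper(RecordWriter<K, V> writer) {
            this.writer = writer;
        }

        void writeRecord(K key, V value) throws Exception {
            writer.write(key, value);
        }

        void close() throws Exception {
            writer.close(null);
        }
    }

    @Test
    @SuppressWarnings("unchecked")
    public void testRecordWriterIsCalled() throws Exception {
        RecordWriter<String, Integer> recordWriter = mock(RecordWriter.class);

        Wrapper<String, Integer> wrapper = new Wrapper<>(recordWriter);
        wrapper.writeRecord("key", 1);
        wrapper.close();

        // The wrapped Hadoop RecordWriter must have been called exactly once per call.
        verify(recordWriter, times(1)).write("key", 1);
        verify(recordWriter, times(1)).close(null);
    }
}
{code}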



--


[jira] [Commented] (FLINK-2445) Add tests for HadoopOutputFormats

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085382#comment-15085382
 ] 

ASF GitHub Bot commented on FLINK-2445:
---

Github user ajaybhat commented on the pull request:

https://github.com/apache/flink/pull/1486#issuecomment-169292981
  
Hi @chiwanpark,
> Could you re-implement `DummyRecordWriter` to make the tests faster?

It's because I saw many tests creating temporary files that I used the same 
approach.
> By the way, the author email of your commit (be59077) seems to differ from 
the email address of your GitHub account. Is this intentional?

It's a mistake, I'll fix it.


> Add tests for HadoopOutputFormats
> -
>
> Key: FLINK-2445
> URL: https://issues.apache.org/jira/browse/FLINK-2445
> Project: Flink
>  Issue Type: Test
>  Components: Hadoop Compatibility, Tests
>Affects Versions: 0.9.1, 0.10.0
>Reporter: Fabian Hueske
>  Labels: starter
>
> The HadoopOutputFormats and HadoopOutputFormatBase classes are not 
> sufficiently covered by unit tests.
> We need tests that ensure that the methods of the wrapped Hadoop 
> OutputFormats are correctly called. 



--


[jira] [Updated] (FLINK-3203) DistributedCache crashing when run in OGS

2016-01-06 Thread Omar Alvarez (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omar Alvarez updated FLINK-3203:

Description: 
When trying to execute the Python example without HDFS, the FlatMap fails with 
the following error:

{code:title=PyExample|borderStyle=solid}
01/05/2016 13:09:38 Job execution switched to status RUNNING.
01/05/2016 13:09:38 DataSource (ValueSource)(1/1) switched to SCHEDULED
01/05/2016 13:09:38 DataSource (ValueSource)(1/1) switched to DEPLOYING
01/05/2016 13:09:38 DataSource (ValueSource)(1/1) switched to RUNNING
01/05/2016 13:09:38 MapPartition (PythonFlatMap -> PythonCombine)(1/1) 
switched to SCHEDULED
01/05/2016 13:09:38 MapPartition (PythonFlatMap -> PythonCombine)(1/1) 
switched to DEPLOYING
01/05/2016 13:09:38 DataSource (ValueSource)(1/1) switched to FINISHED
01/05/2016 13:09:38 MapPartition (PythonFlatMap -> PythonCombine)(1/1) 
switched to RUNNING
01/05/2016 13:09:38 MapPartition (PythonFlatMap -> PythonCombine)(1/1) 
switched to FAILED
java.lang.Exception: The user defined 'open()' method caused an exception: An 
error occurred while copying the file.
at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:484)
at 
org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:354)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:584)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: An error occurred while copying the file.
at 
org.apache.flink.api.common.cache.DistributedCache.getFile(DistributedCache.java:78)
at 
org.apache.flink.languagebinding.api.java.python.streaming.PythonStreamer.startPython(PythonStreamer.java:68)
at 
org.apache.flink.languagebinding.api.java.python.streaming.PythonStreamer.setupProcess(PythonStreamer.java:58)
at 
org.apache.flink.languagebinding.api.java.common.streaming.Streamer.open(Streamer.java:67)
at 
org.apache.flink.languagebinding.api.java.python.functions.PythonMapPartition.open(PythonMapPartition.java:47)
at 
org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:480)
... 3 more
Caused by: java.io.FileNotFoundException: File file:/tmp/flink does not exist 
or the user running Flink ('omar.alvarez') has insufficient permissions to 
access it.
at 
org.apache.flink.core.fs.local.LocalFileSystem.getFileStatus(LocalFileSystem.java:107)
at org.apache.flink.runtime.filecache.FileCache.copy(FileCache.java:242)
at 
org.apache.flink.runtime.filecache.FileCache$CopyProcess.call(FileCache.java:322)
at 
org.apache.flink.runtime.filecache.FileCache$CopyProcess.call(FileCache.java:306)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
{code}

It is important to mention that I am using modified Flink cluster launch 
scripts to use the OGS engine. The modified scripts and use case can be found 
at https://github.com/omaralvarez/flink-OGS-GE.

The same example in the Java API works correctly when not using the 
DistributedCache, and the user has sufficient permissions to write the file. If 
I use interactive nodes instead of the qsub command to run the example, it does 
not fail.
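
For context, a minimal sketch of the usual DistributedCache pattern in the 
DataSet API (path and name below are illustrative); the copy performed by 
{{FileCache}} is what fails above when the registered path is not readable from 
the node that runs the task:

{code}
import java.io.File;

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.Configuration;

public class DistributedCacheSketch {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // The registered path must be readable from every node that runs a task;
        // a node-local file:///tmp path only works on a single machine.
        env.registerCachedFile("file:///tmp/flink/some-file", "cachedFile");

        env.fromElements("a", "b")
           .map(new RichMapFunction<String, String>() {
               @Override
               public void open(Configuration parameters) throws Exception {
                   // The runtime copies the registered file to the worker before open() runs.
                   File file = getRuntimeContext().getDistributedCache().getFile("cachedFile");
                   System.out.println("cached file at " + file.getAbsolutePath());
               }

               @Override
               public String map(String value) {
                   return value;
               }
           })
           .print();
    }
}
{code}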



[GitHub] flink pull request: [FLINK-3140] NULL value data layout in Row Ser...

2016-01-06 Thread twalthr
Github user twalthr commented on the pull request:

https://github.com/apache/flink/pull/1465#issuecomment-169275924
  
Thanks for your feedback @fhueske. I will rework this PR during this week.


---


[jira] [Commented] (FLINK-3140) NULL value data layout in Row Serializer/Comparator

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085292#comment-15085292
 ] 

ASF GitHub Bot commented on FLINK-3140:
---

Github user twalthr commented on the pull request:

https://github.com/apache/flink/pull/1465#issuecomment-169275924
  
Thanks for your feedback @fhueske. I will rework this PR during this week.


> NULL value data layout in Row Serializer/Comparator
> ---
>
> Key: FLINK-3140
> URL: https://issues.apache.org/jira/browse/FLINK-3140
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API
>Reporter: Chengxiang Li
>Assignee: Timo Walther
>
> To store/materialize NULL values in Row objects, we need a new Row 
> Serializer/Comparator that is aware of NULL value fields.
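
One possible layout, as a minimal sketch (simplified to string fields, not 
Flink's actual implementation): write a null mask before the field values so 
that the deserializer knows which fields to skip.

{code}
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class NullAwareRowSketch {

    public static void serialize(Object[] row, DataOutputStream out) throws IOException {
        // 1) null mask: one flag per field
        for (Object field : row) {
            out.writeBoolean(field == null);
        }
        // 2) only the non-null fields (simplified to strings here)
        for (Object field : row) {
            if (field != null) {
                out.writeUTF(field.toString());
            }
        }
    }

    public static Object[] deserialize(int arity, DataInputStream in) throws IOException {
        boolean[] isNull = new boolean[arity];
        for (int i = 0; i < arity; i++) {
            isNull[i] = in.readBoolean();
        }
        Object[] row = new Object[arity];
        for (int i = 0; i < arity; i++) {
            row[i] = isNull[i] ? null : in.readUTF();
        }
        return row;
    }
}
{code}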



--


[jira] [Created] (FLINK-3204) TaskManagers are not shutting down properly on YARN

2016-01-06 Thread Robert Metzger (JIRA)
Robert Metzger created FLINK-3204:
-

 Summary: TaskManagers are not shutting down properly on YARN
 Key: FLINK-3204
 URL: https://issues.apache.org/jira/browse/FLINK-3204
 Project: Flink
  Issue Type: Bug
  Components: YARN Client
Affects Versions: 1.0.0
Reporter: Robert Metzger


While running some experiments on a YARN cluster, I saw the following error

{code}
10:15:24,741 INFO  org.apache.flink.yarn.YarnJobManager 
 - Stopping YARN JobManager with status SUCCEEDED and diagnostic Flink YARN 
Client requested shutdown.
10:15:24,748 INFO  org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl
 - Waiting for application to be successfully unregistered.
10:15:24,852 INFO  
org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl  - Interrupted 
while waiting for queue
java.lang.InterruptedException
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2052)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl$CallbackHandlerThread.run(AMRMClientAsyncImpl.java:275)
10:15:24,875 ERROR org.apache.hadoop.yarn.client.api.impl.NMClientImpl  
 - Failed to stop Container container_1452019681933_0002_01_10when stopping 
NMClientImpl
10:15:24,899 ERROR org.apache.hadoop.yarn.client.api.impl.NMClientImpl  
 - Failed to stop Container container_1452019681933_0002_01_07when stopping 
NMClientImpl
10:15:24,954 ERROR org.apache.hadoop.yarn.client.api.impl.NMClientImpl  
 - Failed to stop Container container_1452019681933_0002_01_06when stopping 
NMClientImpl
10:15:24,982 ERROR org.apache.hadoop.yarn.client.api.impl.NMClientImpl  
 - Failed to stop Container container_1452019681933_0002_01_09when stopping 
NMClientImpl
10:15:25,013 ERROR org.apache.hadoop.yarn.client.api.impl.NMClientImpl  
 - Failed to stop Container container_1452019681933_0002_01_11when stopping 
NMClientImpl
10:15:25,037 ERROR org.apache.hadoop.yarn.client.api.impl.NMClientImpl  
 - Failed to stop Container container_1452019681933_0002_01_08when stopping 
NMClientImpl
10:15:25,041 ERROR org.apache.hadoop.yarn.client.api.impl.NMClientImpl  
 - Failed to stop Container container_1452019681933_0002_01_12when stopping 
NMClientImpl
10:15:25,072 ERROR org.apache.hadoop.yarn.client.api.impl.NMClientImpl  
 - Failed to stop Container container_1452019681933_0002_01_05when stopping 
NMClientImpl
10:15:25,075 ERROR org.apache.hadoop.yarn.client.api.impl.NMClientImpl  
 - Failed to stop Container container_1452019681933_0002_01_03when stopping 
NMClientImpl
10:15:25,077 ERROR org.apache.hadoop.yarn.client.api.impl.NMClientImpl  
 - Failed to stop Container container_1452019681933_0002_01_04when stopping 
NMClientImpl
10:15:25,079 ERROR org.apache.hadoop.yarn.client.api.impl.NMClientImpl  
 - Failed to stop Container container_1452019681933_0002_01_02when stopping 
NMClientImpl
10:15:25,080 INFO  
org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy  - 
Closing proxy : cdh544-worker-0.c.astral-sorter-757.internal:8041
10:15:25,080 INFO  
org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy  - 
Closing proxy : cdh544-worker-1.c.astral-sorter-757.internal:8041
10:15:25,080 INFO  
org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy  - 
Closing proxy : cdh544-master.c.astral-sorter-757.internal:8041
10:15:25,080 INFO  
org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy  - 
Closing proxy : cdh544-worker-4.c.astral-sorter-757.internal:8041
10:15:25,081 INFO  
org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy  - 
Closing proxy : cdh544-worker-2.c.astral-sorter-757.internal:8041
10:15:25,081 INFO  
org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy  - 
Closing proxy : cdh544-worker-3.c.astral-sorter-757.internal:8041
10:15:25,081 INFO  
org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy  - 
Closing proxy : cdh544-worker-5.c.astral-sorter-757.internal:8041
10:15:25,085 INFO  org.apache.flink.yarn.YarnJobManager 
 - Stopping JobManager akka.tcp://flink@10.240.221.7:46845/user/jobmanager.
10:15:25,092 INFO  org.apache.flink.runtime.blob.BlobServer 
 - Stopped BLOB server at 0.0.0.0:35997
{code}



--


[jira] [Updated] (FLINK-3203) DistributedCache crashing when run in OGS

2016-01-06 Thread Omar Alvarez (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omar Alvarez updated FLINK-3203:

 Labels: DistributedCache OGS  (was: )
Component/s: Java API
Summary: DistributedCache crashing when run in OGS  (was: Python API 
crashing when run in OGS)

> DistributedCache crashing when run in OGS
> -
>
> Key: FLINK-3203
> URL: https://issues.apache.org/jira/browse/FLINK-3203
> Project: Flink
>  Issue Type: Bug
>  Components: Java API, Python API
>Affects Versions: 0.10.0
> Environment: Rocks 6.1 SP1, CentOS release 6.7 
> (2.6.32-573.7.1.el6.x86_64), java/oraclejdk/1.8.0_45, Python 2.6.6
>Reporter: Omar Alvarez
>  Labels: DistributedCache, OGS
>
> When trying to execute the Python example without HDFS, the FlatMap fails 
> with the following error:
> {code:title=PyExample|borderStyle=solid}
> 01/05/2016 13:09:38 Job execution switched to status RUNNING.
> 01/05/2016 13:09:38 DataSource (ValueSource)(1/1) switched to SCHEDULED
> 01/05/2016 13:09:38 DataSource (ValueSource)(1/1) switched to DEPLOYING
> 01/05/2016 13:09:38 DataSource (ValueSource)(1/1) switched to RUNNING
> 01/05/2016 13:09:38 MapPartition (PythonFlatMap -> PythonCombine)(1/1) 
> switched to SCHEDULED
> 01/05/2016 13:09:38 MapPartition (PythonFlatMap -> PythonCombine)(1/1) 
> switched to DEPLOYING
> 01/05/2016 13:09:38 DataSource (ValueSource)(1/1) switched to FINISHED
> 01/05/2016 13:09:38 MapPartition (PythonFlatMap -> PythonCombine)(1/1) 
> switched to RUNNING
> 01/05/2016 13:09:38 MapPartition (PythonFlatMap -> PythonCombine)(1/1) 
> switched to FAILED
> java.lang.Exception: The user defined 'open()' method caused an exception: An 
> error occurred while copying the file.
> at 
> org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:484)
> at 
> org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:354)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:584)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: An error occurred while copying the 
> file.
> at 
> org.apache.flink.api.common.cache.DistributedCache.getFile(DistributedCache.java:78)
>   at 
> org.apache.flink.languagebinding.api.java.python.streaming.PythonStreamer.startPython(PythonStreamer.java:68)
>   at 
> org.apache.flink.languagebinding.api.java.python.streaming.PythonStreamer.setupProcess(PythonStreamer.java:58)
>   at 
> org.apache.flink.languagebinding.api.java.common.streaming.Streamer.open(Streamer.java:67)
>   at 
> org.apache.flink.languagebinding.api.java.python.functions.PythonMapPartition.open(PythonMapPartition.java:47)
>   at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
>   at 
> org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:480)
>   ... 3 more
> Caused by: java.io.FileNotFoundException: File file:/tmp/flink does not exist 
> or the user running Flink ('omar.alvarez') has insufficient permissions to 
> access it.
>   at 
> org.apache.flink.core.fs.local.LocalFileSystem.getFileStatus(LocalFileSystem.java:107)
>   at 
> org.apache.flink.runtime.filecache.FileCache.copy(FileCache.java:242)
>   at 
> org.apache.flink.runtime.filecache.FileCache$CopyProcess.call(FileCache.java:322)
>   at 
> org.apache.flink.runtime.filecache.FileCache$CopyProcess.call(FileCache.java:306)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   ... 1 more
> {code}
> It is important to mention that I am using modified Flink cluster launch 
> scripts to use the OGS engine. The modified scripts and use case can be 
> found at https://github.com/omaralvarez/flink-OGS-GE.
> The same example in the Java API works correctly, and the user has sufficient 
> permissions to write the file. If I use interactive nodes instead of the qsub 
> command to run the example, it does not fail.



--

[GitHub] flink pull request: [FLINK-2445] Add tests for HadoopOutputFormats

2016-01-06 Thread ajaybhat
Github user ajaybhat commented on the pull request:

https://github.com/apache/flink/pull/1486#issuecomment-169292981
  
Hi @chiwanpark,
> Could you re-implement `DummyRecordWriter` to make the tests faster?

It's because I saw many tests creating temporary files that I used the same 
approach.
> By the way, the author email of your commit (be59077) seems to differ from 
the email address of your GitHub account. Is this intentional?

It's a mistake, I'll fix it.


---


[jira] [Commented] (FLINK-2111) Add "stop" signal to cleanly shutdown streaming jobs

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085210#comment-15085210
 ] 

ASF GitHub Bot commented on FLINK-2111:
---

Github user mjsax commented on the pull request:

https://github.com/apache/flink/pull/750#issuecomment-169267260
  
Just updated this. I did not add an additional YARN test, because I test 
`GET` and `DELETE` in the REST test. About your last comment regarding 
`Execution`: behavior of STOP is the same as for CANCEL. I think we should keep 
it this way for consistency.

Btw: I never tested in a YARN setup...


> Add "stop" signal to cleanly shutdown streaming jobs
> 
>
> Key: FLINK-2111
> URL: https://issues.apache.org/jira/browse/FLINK-2111
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Runtime, JobManager, Local Runtime, 
> Streaming, TaskManager, Webfrontend
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Minor
>
> Currently, streaming jobs can only be stopped using the "cancel" command, which 
> is a "hard" stop with no clean shutdown.
> The newly introduced "stop" signal will only affect streaming source tasks, so 
> that the sources can stop emitting data and shut down cleanly, resulting 
> in a clean shutdown of the whole streaming job.
> This feature is a prerequisite for 
> https://issues.apache.org/jira/browse/FLINK-1929
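
A minimal sketch of the difference from a source's point of view (hypothetical 
names, not the actual Flink interfaces): on "stop" the source simply leaves its 
emit loop, so downstream operators can drain and finish instead of being torn 
down.

{code}
public class StoppableSourceSketch {

    private volatile boolean running = true;

    public void run() throws InterruptedException {
        while (running) {
            // emit the next record ...
            Thread.sleep(10);
        }
        // falling out of the loop lets the job shut down cleanly
    }

    /** Called when the "stop" signal arrives. */
    public void stop() {
        running = false;
    }
}
{code}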



--


[GitHub] flink pull request: [FLINK-2445] Add tests for HadoopOutputFormats

2016-01-06 Thread chiwanpark
Github user chiwanpark commented on the pull request:

https://github.com/apache/flink/pull/1486#issuecomment-169283768
  
Hi @ajaybhat, thanks for opening this pull request.

I read your pull request quickly and have one comment.

Currently, Flink has a lot of tests that create a temporary file and write 
temporary results to it. This makes the tests slow, so the Flink community 
decided to change these tests to avoid temporary files.

Could you re-implement `DummyRecordWriter` to make the tests faster?

By the way, the author email of your commit (be59077) seems to differ from the 
email address of your GitHub account. Is this intentional?
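
For illustration, an in-memory variant could look roughly like this (a sketch, 
not the existing `DummyRecordWriter`): records are collected in a list instead 
of being written to a temporary file, which keeps the test fast and leaves 
nothing to clean up.

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.mapred.RecordWriter;
import org.apache.hadoop.mapred.Reporter;

public class InMemoryRecordWriter<K, V> implements RecordWriter<K, V> {

    public final List<String> records = new ArrayList<>();

    @Override
    public void write(K key, V value) {
        records.add(key + "=" + value);
    }

    @Override
    public void close(Reporter reporter) {
        // nothing to flush or delete
    }
}
```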


---


[jira] [Commented] (FLINK-2445) Add tests for HadoopOutputFormats

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085363#comment-15085363
 ] 

ASF GitHub Bot commented on FLINK-2445:
---

Github user chiwanpark commented on the pull request:

https://github.com/apache/flink/pull/1486#issuecomment-169283768
  
Hi @ajaybhat, thanks for opening this pull request.

I read your pull request quickly and have one comment.

Currently, Flink has a lot of tests that create a temporary file and write 
temporary results to it. This makes the tests slow, so the Flink community 
decided to change these tests to avoid temporary files.

Could you re-implement `DummyRecordWriter` to make the tests faster?

By the way, the author email of your commit (be59077) seems to differ from the 
email address of your GitHub account. Is this intentional?


> Add tests for HadoopOutputFormats
> -
>
> Key: FLINK-2445
> URL: https://issues.apache.org/jira/browse/FLINK-2445
> Project: Flink
>  Issue Type: Test
>  Components: Hadoop Compatibility, Tests
>Affects Versions: 0.9.1, 0.10.0
>Reporter: Fabian Hueske
>  Labels: starter
>
> The HadoopOutputFormats and HadoopOutputFormatBase classes are not 
> sufficiently covered by unit tests.
> We need tests that ensure that the methods of the wrapped Hadoop 
> OutputFormats are correctly called. 



--


[GitHub] flink pull request: [FLINK-2111] Add "stop" signal to cleanly shut...

2016-01-06 Thread mjsax
Github user mjsax commented on the pull request:

https://github.com/apache/flink/pull/750#issuecomment-169267260
  
Just updated this. I did not add an additional YARN test, because I test 
`GET` and `DELETE` in the REST test. About your last comment regarding 
`Execution`: behavior of STOP is the same as for CANCEL. I think we should keep 
it this way for consistency.

Btw: I never tested in a YARN setup...


---


[jira] [Commented] (FLINK-3192) Add explain support to print ast and sql physical execution plan.

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085231#comment-15085231
 ] 

ASF GitHub Bot commented on FLINK-3192:
---

Github user fhueske commented on the pull request:

https://github.com/apache/flink/pull/1477#issuecomment-169270202
  
Thanks @gallenvara. 
Flink uses Jackson to handle JSON data. You can use it to parse the JSON 
String.

Looking forward to your update :-)


> Add explain support to print ast and sql physical execution plan. 
> --
>
> Key: FLINK-3192
> URL: https://issues.apache.org/jira/browse/FLINK-3192
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API
>Reporter: GaoLun
>Assignee: GaoLun
>Priority: Minor
>  Labels: features
>
> The Table API doesn't support SQL explanation yet. Add explain support to 
> print the AST (abstract syntax tree) and the physical execution plan of the SQL query.



--


[GitHub] flink pull request: [FLINK-3057][py] Bidirectional Connection

2016-01-06 Thread rmetzger
Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/1432#issuecomment-169329884
  
I've quickly scrolled through the change. I'm wondering whether we should 
change the development process for the Python API a bit.
It seems that you are currently the only committer working on the Python 
API. Maybe it makes sense that you push changes directly to `master` if they 
don't touch the rest of the system. Do you think that would improve the ease of 
development of the python API?


---


[jira] [Commented] (FLINK-3057) [py] Provide a way to pass information back to the plan process

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085559#comment-15085559
 ] 

ASF GitHub Bot commented on FLINK-3057:
---

Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/1432#issuecomment-169329884
  
I've quickly scrolled through the change. I'm wondering whether we should 
change the development process for the Python API a bit.
It seems that you are currently the only committer working on the Python 
API. Maybe it makes sense that you push changes directly to `master` if they 
don't touch the rest of the system. Do you think that would improve the ease of 
development of the python API?


> [py] Provide a way to pass information back to the plan process
> ---
>
> Key: FLINK-3057
> URL: https://issues.apache.org/jira/browse/FLINK-3057
> Project: Flink
>  Issue Type: Sub-task
>  Components: Python API
>Affects Versions: 0.10.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Minor
> Fix For: 1.0.0
>
>




--


[GitHub] flink pull request: Added, to the DataStream API guide, an e...

2016-01-06 Thread filipefigcorreia
GitHub user filipefigcorreia opened a pull request:

https://github.com/apache/flink/pull/1487

Added, to the DataStream API guide, an example of how to collect 
results.

This is similar to what already exists for the DataSet API guide: 

https://ci.apache.org/projects/flink/flink-docs-release-0.10/apis/programming_guide.html#collection-data-sources-and-sinks
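
For illustration, one common way to collect streaming results in examples and 
tests is a sink that appends to a shared list (a sketch that assumes local, 
single-JVM execution; not necessarily the approach taken in this PR):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;

public class CollectResultsSketch {

    // shared list; only meaningful when all sink instances run in the same JVM
    public static final List<Long> collected = Collections.synchronizedList(new ArrayList<Long>());

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.generateSequence(1, 100)
           .addSink(new SinkFunction<Long>() {
               @Override
               public void invoke(Long value) {
                   collected.add(value);
               }
           });

        env.execute("collect results example");
        System.out.println("collected " + collected.size() + " elements");
    }
}
```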

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/filipefigcorreia/flink master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1487.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1487


commit 2c121e27846594ad7b3a29b0799af0c5dd6267d3
Author: Filipe Correia 
Date:   2016-01-06T12:21:28Z

Added, to the streaming guide, an example of how to collect results.




---


[jira] [Commented] (FLINK-3093) Introduce annotations for interface stability

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085576#comment-15085576
 ] 

ASF GitHub Bot commented on FLINK-3093:
---

Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/1427#issuecomment-169335005
  
Thanks a lot for looking into this!

In the `ExecutionConfig` I've made the following methods experimental:
- setAutoWatermarkInterval
- enableTimestamps
- disableTimestamps
- areTimestampsEnabled
- getAutoWatermarkInterval
- setCodeAnalysisMode
- getCodeAnalysisMode

Regarding the `ExecutionMode`, I'll bring the issue to the mailing list.

I marked `JobExecutionResult.getIntCounterResult()` as Experimental and 
Deprecated.

I also remember that there was an issue like this with the `Partitioner`. 
I'll also bring this to the mailing list.

I removed the `@Public` annotation from `DefaultInputSplitAssigner`.


> Introduce annotations for interface stability
> -
>
> Key: FLINK-3093
> URL: https://issues.apache.org/jira/browse/FLINK-3093
> Project: Flink
>  Issue Type: New Feature
>  Components: Build System
>Affects Versions: 1.0.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Blocker
>
> For the upcoming 1.0 release, we want to mark interfaces as public/stable so 
> that we can automatically ensure that newer Flink releases (1.1, 1.2, ..) are 
> not breaking them.
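
For illustration, such a stability marker can be a simple annotation like the 
sketch below (hypothetical name; the actual annotations live in Flink's 
codebase and may differ). A build-time check can then compare annotated classes 
and methods across releases and fail on breaking changes.

{code}
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/** Marks a class or method as part of the stable public API. */
@Documented
@Target({ElementType.TYPE, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
public @interface StableApi {
}
{code}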



--


[GitHub] flink pull request: [FLINK-3093] Introduce annotations for interfa...

2016-01-06 Thread rmetzger
Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/1427#issuecomment-169335005
  
Thanks a lot for looking into this!

In the `ExecutionConfig` I've made the following methods experimental:
- setAutoWatermarkInterval
- enableTimestamps
- disableTimestamps
- areTimestampsEnabled
- getAutoWatermarkInterval
- setCodeAnalysisMode
- getCodeAnalysisMode

Regarding the `ExecutionMode`, I'll bring the issue to the mailing list.

I marked `JobExecutionResult.getIntCounterResult()` as Experimental and 
Deprecated.

I also remember that there was an issue like this with the `Partitioner`. 
I'll also bring this to the mailing list.

I removed the `@Public` annotation from `DefaultInputSplitAssigner`.


---


[jira] [Commented] (FLINK-3167) Stopping start-local.bat doesn't shutdown JM on W10

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085539#comment-15085539
 ] 

ASF GitHub Bot commented on FLINK-3167:
---

GitHub user rtudoran opened a pull request:

https://github.com/apache/flink/pull/1488

[FLINK-3167] Improve documentation for window apply

Small correction and extension for the window apply example
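
For readers of this thread, a window apply roughly looks like the sketch below 
(based on the 1.0-era API from memory; signatures may differ slightly, see the 
documentation this PR touches):

{code}
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.windowing.WindowFunction;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

public class WindowApplySketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements(Tuple2.of("a", 1), Tuple2.of("a", 2), Tuple2.of("b", 3))
           .keyBy(new KeySelector<Tuple2<String, Integer>, String>() {
               @Override
               public String getKey(Tuple2<String, Integer> value) {
                   return value.f0;
               }
           })
           .timeWindow(Time.seconds(5))
           .apply(new WindowFunction<Tuple2<String, Integer>, String, String, TimeWindow>() {
               @Override
               public void apply(String key, TimeWindow window,
                                 Iterable<Tuple2<String, Integer>> input,
                                 Collector<String> out) {
                   // sum the values of one key within the window
                   int sum = 0;
                   for (Tuple2<String, Integer> t : input) {
                       sum += t.f1;
                   }
                   out.collect(key + ": " + sum);
               }
           })
           .print();

        env.execute("window apply example");
    }
}
{code}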

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rtudoran/flink master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1488.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1488


commit 0d986dda1cec697cf501839f5b0140c526741fb8
Author: rtudoran 
Date:   2016-01-06T13:02:41Z

[FLINK-3167] Improve documentation for window apply




> Stopping start-local.bat doesn't shutdown JM on W10
> ---
>
> Key: FLINK-3167
> URL: https://issues.apache.org/jira/browse/FLINK-3167
> Project: Flink
>  Issue Type: Improvement
>  Components: Start-Stop Scripts
>Affects Versions: 0.10.1
> Environment: Windows 10
>Reporter: Chesnay Schepler
>
> When using start-local.bat on Windows 10, it says the JobManager can be 
> stopped using Ctrl+C. This doesn't work on my W10 system; the JM stays alive, 
> and I can still submit jobs and view the web dashboard.
> Since there is no stop-local.bat script, I now have to always kill the process 
> using the TaskManager :(



--


[GitHub] flink pull request: [FLINK-3167] Improve documentation for window ...

2016-01-06 Thread rtudoran
GitHub user rtudoran opened a pull request:

https://github.com/apache/flink/pull/1488

[FLINK-3167] Improve documentation for window apply

Small correction and extension for the window apply example

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rtudoran/flink master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1488.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1488


commit 0d986dda1cec697cf501839f5b0140c526741fb8
Author: rtudoran 
Date:   2016-01-06T13:02:41Z

[FLINK-3167] Improve documentation for window apply




---


[GitHub] flink pull request: [FLINK-3167] Improve documentation for window ...

2016-01-06 Thread rtudoran
Github user rtudoran closed the pull request at:

https://github.com/apache/flink/pull/1488


---


[GitHub] flink pull request: [FLINK-3167] Improve documentation for window ...

2016-01-06 Thread rtudoran
GitHub user rtudoran reopened a pull request:

https://github.com/apache/flink/pull/1488

[FLINK-3167] Improve documentation for window apply

Small correction and extension for the window apply example

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rtudoran/flink master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1488.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1488


commit 0d986dda1cec697cf501839f5b0140c526741fb8
Author: rtudoran 
Date:   2016-01-06T13:02:41Z

[FLINK-3167] Improve documentation for window apply




---


[jira] [Commented] (FLINK-3167) Stopping start-local.bat doesn't shutdown JM on W10

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085542#comment-15085542
 ] 

ASF GitHub Bot commented on FLINK-3167:
---

Github user rtudoran closed the pull request at:

https://github.com/apache/flink/pull/1488


> Stopping start-local.bat doesn't shutdown JM on W10
> ---
>
> Key: FLINK-3167
> URL: https://issues.apache.org/jira/browse/FLINK-3167
> Project: Flink
>  Issue Type: Improvement
>  Components: Start-Stop Scripts
>Affects Versions: 0.10.1
> Environment: Windows 10
>Reporter: Chesnay Schepler
>
> When using start-local.bat on Windows 10, it says the JobManager can be 
> stopped using Ctrl+C. This doesn't work on my W10 system; the JM stays alive, 
> and I can still submit jobs and view the web dashboard.
> Since there is no stop-local.bat script, I now have to always kill the process 
> using the TaskManager :(



--


[jira] [Commented] (FLINK-3167) Stopping start-local.bat doesn't shutdown JM on W10

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085543#comment-15085543
 ] 

ASF GitHub Bot commented on FLINK-3167:
---

GitHub user rtudoran reopened a pull request:

https://github.com/apache/flink/pull/1488

[FLINK-3167] Improve documentation for window apply

Small correction and extension for the window apply example

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rtudoran/flink master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1488.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1488


commit 0d986dda1cec697cf501839f5b0140c526741fb8
Author: rtudoran 
Date:   2016-01-06T13:02:41Z

[FLINK-3167] Improve documentation for window apply




> Stopping start-local.bat doesn't shutdown JM on W10
> ---
>
> Key: FLINK-3167
> URL: https://issues.apache.org/jira/browse/FLINK-3167
> Project: Flink
>  Issue Type: Improvement
>  Components: Start-Stop Scripts
>Affects Versions: 0.10.1
> Environment: Windows 10
>Reporter: Chesnay Schepler
>
> When using start-local.bat on Windows 10, it says the JobManager can be 
> stopped using Ctrl+C. This doesn't work on my W10 system; the JM stays alive, 
> and I can still submit jobs and view the web dashboard.
> Since there is no stop-local.bat script, I now have to always kill the process 
> using the TaskManager :(



--


[GitHub] flink pull request: [FLINK-3058] Add support for Kafka 0.9.0.0

2016-01-06 Thread rmetzger
GitHub user rmetzger opened a pull request:

https://github.com/apache/flink/pull/1489

[FLINK-3058] Add support for Kafka 0.9.0.0

For adding Kafka 0.9.0.0 support, this commit changes the following:
- Split up of the kafka connector into a 
flink-connector-kafka-(base|0.9|0.8) with different dependencies
- The base package contains common test cases and implementations (for 
example the producer for 0.9 and 0.8 relies on exactly the same code)
- the 0.8 package contains a kafka connector implementation against the 
SimpleConsumer (low level) API of Kafka 0.8. There are some additional tests 
for the ZK offset committing
- The 0.9 package relies on the new Consumer API of Kafka 0.9.0.0
- Support for metrics for all producers and the 0.9 consumer through 
Flink's accumulators.
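
For illustration, wiring up the new 0.9 consumer would look roughly like this 
(class, package, and property names are assumptions based on the split 
described above, not taken verbatim from this PR):

```java
import java.util.Properties;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class Kafka09Sketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Kafka 0.9's new consumer only needs the broker list, no ZooKeeper address.
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "flink-example");

        DataStream<String> stream = env.addSource(
                new FlinkKafkaConsumer09<>("my-topic", new SimpleStringSchema(), props));

        stream.print();
        env.execute("Kafka 0.9 consumer example");
    }
}
```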

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rmetzger/flink 
flink3058-second-rebased-rebased

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1489.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1489


commit d1a1659029b246cc164fe3cb274b02d87696e679
Author: Robert Metzger 
Date:   2015-12-16T16:29:42Z

[FLINK-3058] Add support for Kafka 0.9.0.0

For adding Kafka 0.9.0.0 support, this commit changes the following:
- Split up of the kafka connector into a 
flink-connector-kafka-(base|0.9|0.8) with different dependencies
- The base package contains common test cases, classes and implementations 
(the producer for 0.9 and 0.8 relies on exactly the same code)
- the 0.8 package contains a kafka connector implementation against the 
SimpleConsumer (low level) API of Kafka 0.8. There are some additional tests 
for the ZK offset committing
- The 0.9 package relies on the new Consumer API of Kafka 0.9.0.0
- Support for metrics for all producers and the 0.9 consumer through 
Flink's accumulators.




---


[jira] [Closed] (FLINK-3205) Remove the flink-staging module

2016-01-06 Thread Robert Metzger (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-3205.
-
   Resolution: Duplicate
Fix Version/s: (was: 1.0.0)

Duplicate of FLINK-1712

> Remove the flink-staging module
> ---
>
> Key: FLINK-3205
> URL: https://issues.apache.org/jira/browse/FLINK-3205
> Project: Flink
>  Issue Type: Task
>  Components: Build System
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Blocker
>




--


[GitHub] flink pull request: [FLINK-2111] Add "stop" signal to cleanly shut...

2016-01-06 Thread mjsax
Github user mjsax commented on the pull request:

https://github.com/apache/flink/pull/750#issuecomment-169344534
  
Travis failed with a weird compilation error. I guess this is a caching 
issue. It builds locally and on my personal Travis: 
https://travis-ci.org/mjsax/flink/builds/100576933


---


[jira] [Commented] (FLINK-2111) Add "stop" signal to cleanly shutdown streaming jobs

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085608#comment-15085608
 ] 

ASF GitHub Bot commented on FLINK-2111:
---

Github user mjsax commented on the pull request:

https://github.com/apache/flink/pull/750#issuecomment-169344534
  
Travis failed with a weird compilation error. I guess this is a caching 
issue. It builds locally and on my personal Travis: 
https://travis-ci.org/mjsax/flink/builds/100576933


> Add "stop" signal to cleanly shutdown streaming jobs
> 
>
> Key: FLINK-2111
> URL: https://issues.apache.org/jira/browse/FLINK-2111
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Runtime, JobManager, Local Runtime, 
> Streaming, TaskManager, Webfrontend
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Minor
>
> Currently, streaming jobs can only be stopped using the "cancel" command, which 
> is a "hard" stop with no clean shutdown.
> The newly introduced "stop" signal will only affect streaming source tasks, so 
> that the sources can stop emitting data and shut down cleanly, resulting 
> in a clean shutdown of the whole streaming job.
> This feature is a prerequisite for 
> https://issues.apache.org/jira/browse/FLINK-1929



--


[jira] [Commented] (FLINK-3195) Restructure examples projects and package streaming examples

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085695#comment-15085695
 ] 

ASF GitHub Bot commented on FLINK-3195:
---

Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/1482#issuecomment-169362242
  
I'd like to finish the project restructure and I'll use this PR as a base. 
I'll also address Fabian's comments.


> Restructure examples projects and package streaming examples
> 
>
> Key: FLINK-3195
> URL: https://issues.apache.org/jira/browse/FLINK-3195
> Project: Flink
>  Issue Type: Sub-task
>  Components: Examples
>Affects Versions: 0.10.0
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
> Fix For: 1.0.0
>
>
> We should have the Java / Scala examples in one Maven project and have a 
> common parent project for the streaming and batch examples.



--


[GitHub] flink pull request: [FLINK-3195] [examples] Consolidate Java/Scala...

2016-01-06 Thread rmetzger
Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/1482#issuecomment-169362242
  
I'd like to finish the project restructure and I'll use this PR as a base. 
I'll also address Fabian's comments.


---


[jira] [Commented] (FLINK-3167) Stopping start-local.bat doesn't shutdown JM on W10

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085622#comment-15085622
 ] 

ASF GitHub Bot commented on FLINK-3167:
---

Github user zentol commented on the pull request:

https://github.com/apache/flink/pull/1488#issuecomment-169346238
  
it appears that you used the wrong issue ID; it should be 31**76** but is 
31**67**.


> Stopping start-local.bat doesn't shutdown JM on W10
> ---
>
> Key: FLINK-3167
> URL: https://issues.apache.org/jira/browse/FLINK-3167
> Project: Flink
>  Issue Type: Improvement
>  Components: Start-Stop Scripts
>Affects Versions: 0.10.1
> Environment: Windows 10
>Reporter: Chesnay Schepler
>
> When using start-local.bat on Windows 10, it says the JobManager can be 
> stopped using Ctrl+C. This doesn't work on my W10 system; the JM stays alive, 
> and I can still submit jobs and view the web dashboard.
> Since there is no stop-local.bat script, I now have to always kill the process 
> using the TaskManager :(



--


[GitHub] flink pull request: [FLINK-3167] Improve documentation for window ...

2016-01-06 Thread zentol
Github user zentol commented on the pull request:

https://github.com/apache/flink/pull/1488#issuecomment-169346238
  
it appears that you used the wrong issue ID; it should be 31**76** but is 
31**67**.


---


[jira] [Updated] (FLINK-2833) Unstage Gelly and Module refactoring

2016-01-06 Thread Robert Metzger (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-2833:
--
Issue Type: Sub-task  (was: Task)
Parent: FLINK-3205

> Unstage Gelly and Module refactoring
> 
>
> Key: FLINK-2833
> URL: https://issues.apache.org/jira/browse/FLINK-2833
> Project: Flink
>  Issue Type: Sub-task
>  Components: Gelly
>Affects Versions: 0.10.0
>Reporter: Vasia Kalavri
>Assignee: Vasia Kalavri
> Fix For: 0.10.0
>
>
> This is for moving Gelly out of 
> {{flink-staging}} and adding it into a new module, {{flink-libraries}}.



--


[jira] [Updated] (FLINK-2877) Move Streaming API out of Staging

2016-01-06 Thread Robert Metzger (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-2877:
--
Issue Type: Sub-task  (was: Improvement)
Parent: FLINK-3205

> Move Streaming API out of Staging
> -
>
> Key: FLINK-2877
> URL: https://issues.apache.org/jira/browse/FLINK-2877
> Project: Flink
>  Issue Type: Sub-task
>  Components: Streaming
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
> Fix For: 0.10.0
>
>
> As discussed on the mailing list we want to move the Streaming API out of the 
> staging package structure.



--


[jira] [Created] (FLINK-3205) Remove the flink-staging module

2016-01-06 Thread Robert Metzger (JIRA)
Robert Metzger created FLINK-3205:
-

 Summary: Remove the flink-staging module
 Key: FLINK-3205
 URL: https://issues.apache.org/jira/browse/FLINK-3205
 Project: Flink
  Issue Type: Task
  Components: Build System
Reporter: Robert Metzger
Assignee: Robert Metzger
Priority: Blocker
 Fix For: 1.0.0






--


[jira] [Updated] (FLINK-2877) Move Streaming API out of Staging

2016-01-06 Thread Robert Metzger (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-2877:
--
Parent Issue: FLINK-1712  (was: FLINK-3205)

> Move Streaming API out of Staging
> -
>
> Key: FLINK-2877
> URL: https://issues.apache.org/jira/browse/FLINK-2877
> Project: Flink
>  Issue Type: Sub-task
>  Components: Streaming
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
> Fix For: 0.10.0
>
>
> As discussed on the mailing list we want to move the Streaming API out of the 
> staging package structure.



--


[jira] [Updated] (FLINK-2833) Unstage Gelly and Module refactoring

2016-01-06 Thread Robert Metzger (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-2833:
--
Parent Issue: FLINK-1712  (was: FLINK-3205)

> Unstage Gelly and Module refactoring
> 
>
> Key: FLINK-2833
> URL: https://issues.apache.org/jira/browse/FLINK-2833
> Project: Flink
>  Issue Type: Sub-task
>  Components: Gelly
>Affects Versions: 0.10.0
>Reporter: Vasia Kalavri
>Assignee: Vasia Kalavri
> Fix For: 0.10.0
>
>
> This is for moving Gelly out of 
> {{flink-staging}} and adding it into a new module, {{flink-libraries}}.



--


[GitHub] flink pull request: corrections in the javadoc examples of tumblin...

2016-01-06 Thread vasia
GitHub user vasia opened a pull request:

https://github.com/apache/flink/pull/1490

corrections in the javadoc examples of tumbling and sliding windows



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vasia/flink window-javadoc-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1490.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1490


commit 371e3c9cf354a623594979c2c4f67677f2a1323a
Author: vasia 
Date:   2016-01-06T18:38:34Z

fix javadoc example in tumbling and sliding windows




---


[jira] [Assigned] (FLINK-3207) Add a Pregel iteration abstraction to Gelly

2016-01-06 Thread Vasia Kalavri (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasia Kalavri reassigned FLINK-3207:


Assignee: Vasia Kalavri

> Add a Pregel iteration abstraction to Gelly
> ---
>
> Key: FLINK-3207
> URL: https://issues.apache.org/jira/browse/FLINK-3207
> Project: Flink
>  Issue Type: New Feature
>  Components: Gelly
>Reporter: Vasia Kalavri
>Assignee: Vasia Kalavri
>
> This issue proposes to add a Pregel/Giraph-like iteration abstraction to 
> Gelly that will only expose one UDF to the user, {{compute()}}. {{compute()}} 
> will have access to both the vertex state and the incoming messages, and will 
> be able to produce messages and update the vertex value.



--


[jira] [Created] (FLINK-3207) Add a Pregel iteration abstraction to Gelly

2016-01-06 Thread Vasia Kalavri (JIRA)
Vasia Kalavri created FLINK-3207:


 Summary: Add a Pregel iteration abstraction to Gelly
 Key: FLINK-3207
 URL: https://issues.apache.org/jira/browse/FLINK-3207
 Project: Flink
  Issue Type: New Feature
  Components: Gelly
Reporter: Vasia Kalavri


This issue proposes to add a Pregel/Giraph-like iteration abstraction to Gelly 
that will only expose one UDF to the user, {{compute()}}. {{compute()}} will 
have access to both the vertex state and the incoming messages, and will be 
able to produce messages and update the vertex value.
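
A minimal sketch of what such a single-UDF model could look like (hypothetical 
class and method names, not an actual Gelly API):

{code}
import java.util.Iterator;

public abstract class ComputeFunctionSketch<K, VV, Message> {

    /** Sees the vertex state and the incoming messages of one superstep. */
    public abstract void compute(K vertexId, VV vertexValue, Iterator<Message> messages);

    /** Helper the framework would provide: update the vertex value. */
    public void setNewVertexValue(VV newValue) {
        // framework call
    }

    /** Helper the framework would provide: send a message to another vertex. */
    public void sendMessageTo(K target, Message m) {
        // framework call
    }
}
{code}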



--


[jira] [Assigned] (FLINK-3208) Rename Gelly vertex-centric model to scatter-gather

2016-01-06 Thread Vasia Kalavri (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasia Kalavri reassigned FLINK-3208:


Assignee: Vasia Kalavri

> Rename Gelly vertex-centric model to scatter-gather
> ---
>
> Key: FLINK-3208
> URL: https://issues.apache.org/jira/browse/FLINK-3208
> Project: Flink
>  Issue Type: Sub-task
>  Components: Gelly
>Reporter: Vasia Kalavri
>Assignee: Vasia Kalavri
>
> The idea is to have the following naming:
> - Pregel model: vertex-centric iteration
> - Spargel model: scatter-gather iteration
> - GSA model: as is
> Open to suggestions!



--


[jira] [Created] (FLINK-3208) Rename Gelly vertex-centric model to scatter-gather

2016-01-06 Thread Vasia Kalavri (JIRA)
Vasia Kalavri created FLINK-3208:


 Summary: Rename Gelly vertex-centric model to scatter-gather
 Key: FLINK-3208
 URL: https://issues.apache.org/jira/browse/FLINK-3208
 Project: Flink
  Issue Type: Sub-task
  Components: Gelly
Reporter: Vasia Kalavri


The idea is to have the following naming:
- Pregel model: vertex-centric iteration
- Spargel model: scatter-gather iteration
- GSA model: as is

Open to suggestions!





[GitHub] flink pull request: [FLINK-3118] [Gelly] Consider ResultTypeQuerya...

2016-01-06 Thread s1ck
Github user s1ck commented on the pull request:

https://github.com/apache/flink/pull/1471#issuecomment-169421134
  
Hi, sorry for the delay. I'll add the test asap.




[jira] [Commented] (FLINK-3118) Check if MessageFunction implements ResultTypeQueryable

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086070#comment-15086070
 ] 

ASF GitHub Bot commented on FLINK-3118:
---

Github user s1ck commented on the pull request:

https://github.com/apache/flink/pull/1471#issuecomment-169421134
  
Hi, sorry for the delay. I'll add the test asap.


> Check if MessageFunction implements ResultTypeQueryable
> ---
>
> Key: FLINK-3118
> URL: https://issues.apache.org/jira/browse/FLINK-3118
> Project: Flink
>  Issue Type: Bug
>  Components: Gelly
>Reporter: Martin Junghanns
>Assignee: Martin Junghanns
>
> To generalize message values in vertex centric computations, it is necessary 
> to let the user define the {{TypeInformation}} via {{ResultTypeQueryable}}. 
> This needs to be checked in {{VertexCentricIteration}}.
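
A minimal sketch of the kind of check this could be; the {{resolveMessageType}} helper and its placement are illustrative assumptions, not the actual {{VertexCentricIteration}} code.

{code}
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.typeutils.ResultTypeQueryable;

// Hypothetical helper: prefer the TypeInformation supplied by the user via
// ResultTypeQueryable, otherwise keep the type that was extracted before.
public final class MessageTypeResolver {

	@SuppressWarnings("unchecked")
	public static <M> TypeInformation<M> resolveMessageType(
			Object messagingFunction, TypeInformation<M> extractedType) {
		if (messagingFunction instanceof ResultTypeQueryable) {
			return ((ResultTypeQueryable<M>) messagingFunction).getProducedType();
		}
		return extractedType;
	}
}
{code}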





[jira] [Created] (FLINK-3206) Heap size for non-pre-allocated off-heap memory

2016-01-06 Thread Greg Hogan (JIRA)
Greg Hogan created FLINK-3206:
-

 Summary: Heap size for non-pre-allocated off-heap memory
 Key: FLINK-3206
 URL: https://issues.apache.org/jira/browse/FLINK-3206
 Project: Flink
  Issue Type: Bug
  Components: Start-Stop Scripts
Affects Versions: 1.0.0
Reporter: Greg Hogan


In {{taskmanager.sh}} the heap size is adjusted for off-heap memory only when 
pre-allocation is set to true. In {{TaskManager.scala}} the computation is 
reversed to compute the {{directMemorySize}}. The effect is that the JVM heap 
settings are too high and the assumed size of direct memory is also too high.

{noformat}
taskmanager.memory.fraction: 0.9
taskmanager.memory.off-heap: true
taskmanager.heap.mb: 18000
{noformat}

With {{taskmanager.memory.preallocate: false}}
{noformat}
13:44:29,015 INFO  org.apache.flink.runtime.taskmanager.TaskManager 
 - -Xms18000M
13:44:29,015 INFO  org.apache.flink.runtime.taskmanager.TaskManager 
 - -Xmx18000M
13:44:30,591 INFO  org.apache.flink.runtime.taskmanager.TaskManager 
 - Limiting managed memory to 0.9 of the maximum memory size (161999 MB), 
memory will be allocated lazily.
{noformat}

With {{taskmanager.memory.preallocate: true}}
{noformat}
13:53:44,127 INFO  org.apache.flink.runtime.taskmanager.TaskManager 
 - -Xms1800M
13:53:44,127 INFO  org.apache.flink.runtime.taskmanager.TaskManager 
 - -Xmx1800M
13:53:45,743 INFO  org.apache.flink.runtime.taskmanager.TaskManager 
 - Using 0.9 of the maximum memory size for managed off-heap memory (15524 MB).
{noformat}
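
For reference, a small illustrative computation of the split this configuration would be expected to produce when managed memory is kept off-heap; this is not the actual {{taskmanager.sh}} / {{TaskManager.scala}} logic, just the arithmetic.

{code}
// Illustration only: expected split of taskmanager.heap.mb when managed
// memory lives off-heap, independent of pre-allocation.
public class OffHeapSplitExample {
	public static void main(String[] args) {
		int configuredTotalMb = 18000;   // taskmanager.heap.mb
		double managedFraction = 0.9;    // taskmanager.memory.fraction

		int managedOffHeapMb = (int) (configuredTotalMb * managedFraction); // 16200
		int jvmHeapMb = configuredTotalMb - managedOffHeapMb;               // 1800

		System.out.println("-Xmx" + jvmHeapMb + "M, managed off-heap: " + managedOffHeapMb + " MB");
	}
}
{code}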





[jira] [Commented] (FLINK-3206) Heap size for non-pre-allocated off-heap memory

2016-01-06 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085791#comment-15085791
 ] 

Stephan Ewen commented on FLINK-3206:
-

Are you running this on YARN?

I recently discovered this as a bug in the YARN code. Fixing this as part of my 
YARN refactoring...

> Heap size for non-pre-allocated off-heap memory
> ---
>
> Key: FLINK-3206
> URL: https://issues.apache.org/jira/browse/FLINK-3206
> Project: Flink
>  Issue Type: Bug
>  Components: Start-Stop Scripts
>Affects Versions: 1.0.0
>Reporter: Greg Hogan
>
> In {{taskmanager.sh}} the heap size is adjusted for off-heap memory only with 
> pre-allocation set to true. In {{TaskManager.scala}} the computation is 
> reversed to compute the {{directMemorySize}}. The effect is the JVM heap 
> settings are too high and the assumed size of direct memory is also too high.
> {noformat}
> taskmanager.memory.fraction: 0.9
> taskmanager.memory.off-heap: true
> taskmanager.heap.mb: 18000
> {noformat}
> With {{taskmanager.memory.preallocate: false}}
> {noformat}
> 13:44:29,015 INFO  org.apache.flink.runtime.taskmanager.TaskManager   
>- -Xms18000M
> 13:44:29,015 INFO  org.apache.flink.runtime.taskmanager.TaskManager   
>- -Xmx18000M
> 13:44:30,591 INFO  org.apache.flink.runtime.taskmanager.TaskManager   
>- Limiting managed memory to 0.9 of the maximum memory size (161999 MB), 
> memory will be allocated lazily.
> {noformat}
> With {{taskmanager.memory.preallocate: true}}
> {noformat}
> 13:53:44,127 INFO  org.apache.flink.runtime.taskmanager.TaskManager   
>- -Xms1800M
> 13:53:44,127 INFO  org.apache.flink.runtime.taskmanager.TaskManager   
>- -Xmx1800M
> 13:53:45,743 INFO  org.apache.flink.runtime.taskmanager.TaskManager   
>- Using 0.9 of the maximum memory size for managed off-heap memory (15524 
> MB).
> {noformat}





[jira] [Commented] (FLINK-3206) Heap size for non-pre-allocated off-heap memory

2016-01-06 Thread Greg Hogan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085839#comment-15085839
 ] 

Greg Hogan commented on FLINK-3206:
---

No, running standalone.

> Heap size for non-pre-allocated off-heap memory
> ---
>
> Key: FLINK-3206
> URL: https://issues.apache.org/jira/browse/FLINK-3206
> Project: Flink
>  Issue Type: Bug
>  Components: Start-Stop Scripts
>Affects Versions: 1.0.0
>Reporter: Greg Hogan
>
> In {{taskmanager.sh}} the heap size is adjusted for off-heap memory only with 
> pre-allocation set to true. In {{TaskManager.scala}} the computation is 
> reversed to compute the {{directMemorySize}}. The effect is the JVM heap 
> settings are too high and the assumed size of direct memory is also too high.
> {noformat}
> taskmanager.memory.fraction: 0.9
> taskmanager.memory.off-heap: true
> taskmanager.heap.mb: 18000
> {noformat}
> With {{taskmanager.memory.preallocate: false}}
> {noformat}
> 13:44:29,015 INFO  org.apache.flink.runtime.taskmanager.TaskManager   
>- -Xms18000M
> 13:44:29,015 INFO  org.apache.flink.runtime.taskmanager.TaskManager   
>- -Xmx18000M
> 13:44:30,591 INFO  org.apache.flink.runtime.taskmanager.TaskManager   
>- Limiting managed memory to 0.9 of the maximum memory size (161999 MB), 
> memory will be allocated lazily.
> {noformat}
> With {{taskmanager.memory.preallocate: true}}
> {noformat}
> 13:53:44,127 INFO  org.apache.flink.runtime.taskmanager.TaskManager   
>- -Xms1800M
> 13:53:44,127 INFO  org.apache.flink.runtime.taskmanager.TaskManager   
>- -Xmx1800M
> 13:53:45,743 INFO  org.apache.flink.runtime.taskmanager.TaskManager   
>- Using 0.9 of the maximum memory size for managed off-heap memory (15524 
> MB).
> {noformat}





[jira] [Updated] (FLINK-3197) InputStream not closed in BinaryInputFormat#createStatistics

2016-01-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-3197:
--
Description: 
Here is related code:
{code}
  FSDataInputStream fdis = 
file.getPath().getFileSystem().open(file.getPath(), blockInfo.getInfoSize());
  fdis.seek(file.getLen() - blockInfo.getInfoSize());

  blockInfo.read(new DataInputViewStreamWrapper(fdis));
  totalCount += blockInfo.getAccumulatedRecordCount();
{code}

fdis / wrapper should be closed upon leaving the method

  was:
Here is related code:
{code}
  FSDataInputStream fdis = 
file.getPath().getFileSystem().open(file.getPath(), blockInfo.getInfoSize());
  fdis.seek(file.getLen() - blockInfo.getInfoSize());

  blockInfo.read(new DataInputViewStreamWrapper(fdis));
  totalCount += blockInfo.getAccumulatedRecordCount();
{code}
fdis / wrapper should be closed upon leaving the method


> InputStream not closed in BinaryInputFormat#createStatistics
> 
>
> Key: FLINK-3197
> URL: https://issues.apache.org/jira/browse/FLINK-3197
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> Here is related code:
> {code}
>   FSDataInputStream fdis = 
> file.getPath().getFileSystem().open(file.getPath(), blockInfo.getInfoSize());
>   fdis.seek(file.getLen() - blockInfo.getInfoSize());
>   blockInfo.read(new DataInputViewStreamWrapper(fdis));
>   totalCount += blockInfo.getAccumulatedRecordCount();
> {code}
> fdis / wrapper should be closed upon leaving the method
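
One possible shape of the fix, sketched with try-with-resources and reusing the variable names from the snippet above; the rest of {{createStatistics}} is omitted.

{code}
// Sketch only: the stream is closed when leaving the block, even on exceptions.
try (FSDataInputStream fdis =
		file.getPath().getFileSystem().open(file.getPath(), blockInfo.getInfoSize())) {
	fdis.seek(file.getLen() - blockInfo.getInfoSize());
	blockInfo.read(new DataInputViewStreamWrapper(fdis));
	totalCount += blockInfo.getAccumulatedRecordCount();
}
{code}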





[jira] [Updated] (FLINK-3196) InputStream should be closed in EnvironmentInformation#getRevisionInformation()

2016-01-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-3196:
--
Description: 
Here is related code:
{code}
  InputStream propFile = 
EnvironmentInformation.class.getClassLoader().getResourceAsStream(".version.properties");
  if (propFile != null) {
Properties properties = new Properties();
properties.load(propFile);
{code}

propFile should be closed upon leaving the method.

  was:
Here is related code:
{code}
  InputStream propFile = 
EnvironmentInformation.class.getClassLoader().getResourceAsStream(".version.properties");
  if (propFile != null) {
Properties properties = new Properties();
properties.load(propFile);
{code}
propFile should be closed upon leaving the method.


> InputStream should be closed in 
> EnvironmentInformation#getRevisionInformation()
> ---
>
> Key: FLINK-3196
> URL: https://issues.apache.org/jira/browse/FLINK-3196
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> Here is related code:
> {code}
>   InputStream propFile = 
> EnvironmentInformation.class.getClassLoader().getResourceAsStream(".version.properties");
>   if (propFile != null) {
> Properties properties = new Properties();
> properties.load(propFile);
> {code}
> propFile should be closed upon leaving the method.
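
One possible shape of the fix, again sketched with try-with-resources and reusing the names from the snippet above; the rest of {{getRevisionInformation()}} (and its imports) is omitted.

{code}
// Sketch only: the stream is closed automatically; try-with-resources skips
// the close call when the resource is null.
try (InputStream propFile = EnvironmentInformation.class.getClassLoader()
		.getResourceAsStream(".version.properties")) {
	if (propFile != null) {
		Properties properties = new Properties();
		properties.load(propFile);
		// ... read the revision and commit date from 'properties'
	}
}
{code}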





[jira] [Commented] (FLINK-2445) Add tests for HadoopOutputFormats

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086780#comment-15086780
 ] 

ASF GitHub Bot commented on FLINK-2445:
---

Github user ajaybhat commented on the pull request:

https://github.com/apache/flink/pull/1486#issuecomment-169543630
  
The build has failed, but the failure seems unrelated to this PR.


> Add tests for HadoopOutputFormats
> -
>
> Key: FLINK-2445
> URL: https://issues.apache.org/jira/browse/FLINK-2445
> Project: Flink
>  Issue Type: Test
>  Components: Hadoop Compatibility, Tests
>Affects Versions: 0.9.1, 0.10.0
>Reporter: Fabian Hueske
>  Labels: starter
>
> The HadoopOutputFormats and HadoopOutputFormatBase classes are not 
> sufficiently covered by unit tests.
> We need tests that ensure that the methods of the wrapped Hadoop 
> OutputFormats are correctly called. 
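
A sketch of the general Mockito pattern such tests could follow, assuming Mockito and JUnit are available in the test scope; the {{writeThrough}} helper is a stand-in for the wrapper's write path, not Flink's actual {{HadoopOutputFormat}} code.

{code}
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.apache.hadoop.mapred.RecordWriter;
import org.junit.Test;

public class WrappedRecordWriterCallTest {

	// Stand-in for the wrapper's writeRecord(): forwards to the wrapped writer.
	private static void writeThrough(RecordWriter<String, Long> writer,
			String key, Long value) throws Exception {
		writer.write(key, value);
	}

	@Test
	public void writeShouldBeForwardedToTheWrappedWriter() throws Exception {
		@SuppressWarnings("unchecked")
		RecordWriter<String, Long> writer = mock(RecordWriter.class);

		writeThrough(writer, "key", 42L);

		// The essential assertion: the wrapped Hadoop writer was called.
		verify(writer).write("key", 42L);
	}
}
{code}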





[GitHub] flink pull request: [FLINK-2445] Add tests for HadoopOutputFormats

2016-01-06 Thread ajaybhat
Github user ajaybhat commented on the pull request:

https://github.com/apache/flink/pull/1486#issuecomment-169543630
  
The build has failed, but the failure seems unrelated to this PR.




[GitHub] flink pull request: [FLINK-3192] Add explain support to print ast ...

2016-01-06 Thread gallenvara
Github user gallenvara commented on the pull request:

https://github.com/apache/flink/pull/1477#issuecomment-169532718
  
@fhueske, the code is finished. I have dropped the previous plan-generator 
approach and written a new parser named `PlanJsonParser` to parse the 
existing JSON plan. Could you help with the review? :)
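
For context, a minimal sketch of walking a JSON execution plan with Jackson; the `nodes`/`id`/`contents` field names are assumptions about the plan layout, and this is not the `PlanJsonParser` from the PR.

{code}
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Sketch only: print one line per plan node; field names are assumptions.
public class PlanJsonSketch {

	public static String describe(String jsonPlan) throws Exception {
		ObjectMapper mapper = new ObjectMapper();
		JsonNode root = mapper.readTree(jsonPlan);
		StringBuilder sb = new StringBuilder();
		for (JsonNode node : root.path("nodes")) {
			sb.append("Stage ").append(node.path("id").asText())
			  .append(": ").append(node.path("contents").asText()).append('\n');
		}
		return sb.toString();
	}
}
{code}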




[jira] [Commented] (FLINK-3192) Add explain support to print ast and sql physical execution plan.

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086705#comment-15086705
 ] 

ASF GitHub Bot commented on FLINK-3192:
---

Github user gallenvara commented on the pull request:

https://github.com/apache/flink/pull/1477#issuecomment-169532718
  
@fhueske, the code is finished. I have dropped the previous plan-generator 
approach and written a new parser named `PlanJsonParser` to parse the 
existing JSON plan. Could you help with the review? :)


> Add explain support to print ast and sql physical execution plan. 
> --
>
> Key: FLINK-3192
> URL: https://issues.apache.org/jira/browse/FLINK-3192
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API
>Reporter: GaoLun
>Assignee: GaoLun
>Priority: Minor
>  Labels: features
>
> Table API does not support SQL explanation yet. Add explain support to 
> print the AST (abstract syntax tree) and the physical execution plan of a SQL query.





[jira] [Commented] (FLINK-3115) Update Elasticsearch connector to 2.X

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086217#comment-15086217
 ] 

ASF GitHub Bot commented on FLINK-3115:
---

Github user smarthi closed the pull request at:

https://github.com/apache/flink/pull/1479


> Update Elasticsearch connector to 2.X
> -
>
> Key: FLINK-3115
> URL: https://issues.apache.org/jira/browse/FLINK-3115
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming Connectors
>Affects Versions: 0.10.0, 1.0.0, 0.10.1
>Reporter: Maximilian Michels
>Assignee: Suneel Marthi
> Fix For: 1.0.0
>
>
> The Elasticsearch connector is not up to date anymore. In version 2.X the API 
> changed. The code needs to be adapted. Probably it makes sense to have a new 
> class {{ElasticsearchSink2}}.





[GitHub] flink pull request: FLINK-3115:[WIP] Update ElasticSearch connecto...

2016-01-06 Thread smarthi
Github user smarthi closed the pull request at:

https://github.com/apache/flink/pull/1479




[GitHub] flink pull request: FLINK-3115:[WIP] Update ElasticSearch connecto...

2016-01-06 Thread smarthi
Github user smarthi commented on the pull request:

https://github.com/apache/flink/pull/1479#issuecomment-169450577
  
Closing this PR, will replace with a newer PR.




[jira] [Commented] (FLINK-3115) Update Elasticsearch connector to 2.X

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086216#comment-15086216
 ] 

ASF GitHub Bot commented on FLINK-3115:
---

Github user smarthi commented on the pull request:

https://github.com/apache/flink/pull/1479#issuecomment-169450577
  
Closing this PR, will replace with a newer PR.


> Update Elasticsearch connector to 2.X
> -
>
> Key: FLINK-3115
> URL: https://issues.apache.org/jira/browse/FLINK-3115
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming Connectors
>Affects Versions: 0.10.0, 1.0.0, 0.10.1
>Reporter: Maximilian Michels
>Assignee: Suneel Marthi
> Fix For: 1.0.0
>
>
> The Elasticsearch connector is not up to date anymore. In version 2.X the API 
> changed. The code needs to be adapted. Probably it makes sense to have a new 
> class {{ElasticsearchSink2}}.


