Jenkins build is back to normal : beam_PostCommit_Python_Verify #2059

2017-04-30 Thread Apache Jenkins Server
See 




Jenkins build is back to stable : beam_PostCommit_Java_ValidatesRunner_Spark #1858

2017-04-30 Thread Apache Jenkins Server
See 




Jenkins build is unstable: beam_PostCommit_Java_ValidatesRunner_Spark #1857

2017-04-30 Thread Apache Jenkins Server
See 




[GitHub] beam pull request #2791: [BEAM-59] DataflowRunner: Sink is always a FileBase...

2017-04-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/beam/pull/2791


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[1/2] beam git commit: [BEAM-59] DataflowRunner: Sink is always a FileBasedSink now

2017-04-30 Thread dhalperi
Repository: beam
Updated Branches:
  refs/heads/master 9f2733ac4 -> 1197bef19


[BEAM-59] DataflowRunner: Sink is always a FileBasedSink now


Project: http://git-wip-us.apache.org/repos/asf/beam/repo
Commit: http://git-wip-us.apache.org/repos/asf/beam/commit/b51e860c
Tree: http://git-wip-us.apache.org/repos/asf/beam/tree/b51e860c
Diff: http://git-wip-us.apache.org/repos/asf/beam/diff/b51e860c

Branch: refs/heads/master
Commit: b51e860c7830202d0f332a52adae43634e6733fb
Parents: 9f2733a
Author: Dan Halperin 
Authored: Sun Apr 30 10:47:37 2017 -0700
Committer: Dan Halperin 
Committed: Sun Apr 30 22:13:45 2017 -0700

--
 .../apache/beam/runners/dataflow/DataflowRunner.java| 12 +---
 1 file changed, 5 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/beam/blob/b51e860c/runners/google-cloud-dataflow-java/src/main/java/org/apache/beam/runners/dataflow/DataflowRunner.java
--
diff --git a/runners/google-cloud-dataflow-java/src/main/java/org/apache/beam/runners/dataflow/DataflowRunner.java b/runners/google-cloud-dataflow-java/src/main/java/org/apache/beam/runners/dataflow/DataflowRunner.java
index a61fe49..97a4ded 100644
--- a/runners/google-cloud-dataflow-java/src/main/java/org/apache/beam/runners/dataflow/DataflowRunner.java
+++ b/runners/google-cloud-dataflow-java/src/main/java/org/apache/beam/runners/dataflow/DataflowRunner.java
@@ -837,13 +837,11 @@ public class DataflowRunner extends PipelineRunner<DataflowPipelineJob> {
 
     @Override
     public PDone expand(PCollection<T> input) {
-      if (transform.getSink() instanceof FileBasedSink) {
-        FileBasedSink<T> sink = (FileBasedSink<T>) transform.getSink();
-        if (sink.getBaseOutputFilenameProvider().isAccessible()) {
-          PathValidator validator = runner.options.getPathValidator();
-          validator.validateOutputFilePrefixSupported(
-              sink.getBaseOutputFilenameProvider().get());
-        }
+      FileBasedSink<T> sink = transform.getSink();
+      if (sink.getBaseOutputFilenameProvider().isAccessible()) {
+        PathValidator validator = runner.options.getPathValidator();
+        validator.validateOutputFilePrefixSupported(
+            sink.getBaseOutputFilenameProvider().get());
       }
       return transform.expand(input);
     }

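The guarded validation in the diff above hinges on value accessibility: an output prefix that is known at pipeline-construction time can be validated immediately, while a runtime-provided one must be skipped until later. Below is a minimal, self-contained sketch of that pattern; the `ValueProvider` interface and the `gs://` check are simplified stand-ins for Beam's `org.apache.beam.sdk.options.ValueProvider` and Dataflow's `PathValidator`, not the real classes.

```java
/**
 * Sketch: validate an output prefix only when its value is already
 * accessible at construction time; otherwise defer to run time.
 */
public class OutputPrefixCheck {

  /** A value that may only become available at pipeline run time. */
  public interface ValueProvider<T> {
    boolean isAccessible();
    T get();
  }

  /** A value fixed at construction time. */
  public static ValueProvider<String> staticValue(final String v) {
    return new ValueProvider<String>() {
      @Override public boolean isAccessible() { return true; }
      @Override public String get() { return v; }
    };
  }

  /** A value supplied only at run time (e.g. a template parameter). */
  public static ValueProvider<String> runtimeValue() {
    return new ValueProvider<String>() {
      @Override public boolean isAccessible() { return false; }
      @Override public String get() {
        throw new IllegalStateException("not available yet");
      }
    };
  }

  /** Returns true if validation ran now; false if it must be deferred. */
  public static boolean validateIfAccessible(ValueProvider<String> outputPrefix) {
    if (!outputPrefix.isAccessible()) {
      // Runtime-provided value: skip construction-time validation,
      // the same shape as the guarded block in the diff above.
      return false;
    }
    // Made-up stand-in for PathValidator.validateOutputFilePrefixSupported.
    if (!outputPrefix.get().startsWith("gs://")) {
      throw new IllegalArgumentException(
          "unsupported output prefix: " + outputPrefix.get());
    }
    return true;
  }

  public static void main(String[] args) {
    System.out.println(validateIfAccessible(staticValue("gs://bucket/output"))); // true
    System.out.println(validateIfAccessible(runtimeValue()));                    // false
  }
}
```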


[jira] [Commented] (BEAM-2129) KafkaIOTest.testUnboundedSourceMetrics is flaky

2017-04-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990583#comment-15990583
 ] 

ASF GitHub Bot commented on BEAM-2129:
--

GitHub user aviemzur opened a pull request:

https://github.com/apache/beam/pull/2797

[BEAM-2129] Fix flaky KafkaIOTest#testUnboundedSourceMetrics

Gauge results are flaky on Jenkins, so instead of asserting on the gauge's
value, assert on the gauge's existence.
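The stabilization idea, in isolation: assert that the gauge was reported at all rather than asserting its timing-dependent value. A hedged sketch with hypothetical names ("backlogBytes" is made up, and plain Java maps stand in for Beam's metrics API; this is not the actual KafkaIOTest code):

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch: assert on gauge existence, not on its flaky value. */
public class GaugeExistence {

  /** Simulates the metrics of one run; the gauge value is timing-dependent. */
  public static Map<String, Long> metricsFromRun(long flakyValue) {
    Map<String, Long> gauges = new HashMap<>();
    gauges.put("backlogBytes", flakyValue); // hypothetical gauge name
    return gauges;
  }

  /** Stable assertion target: the gauge was reported, whatever its value. */
  public static boolean gaugeReported(Map<String, Long> gauges, String name) {
    return gauges.containsKey(name);
  }

  public static void main(String[] args) {
    // Two runs with very different values both satisfy the existence check.
    System.out.println(gaugeReported(metricsFromRun(0L), "backlogBytes"));    // true
    System.out.println(gaugeReported(metricsFromRun(9999L), "backlogBytes")); // true
  }
}
```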

Be sure to do all of the following to help us incorporate your contribution
quickly and easily:

 - [ ] Make sure the PR title is formatted like:
   `[BEAM-<Jira issue #>] Description of pull request`
 - [ ] Make sure tests pass via `mvn clean verify`. (Even better, enable
   Travis-CI on your fork and ensure the whole test matrix passes).
 - [ ] Replace `<Jira issue #>` in the title with the actual Jira issue
   number, if there is one.
 - [ ] If this contribution is large, please file an Apache
   [Individual Contributor License 
Agreement](https://www.apache.org/licenses/icla.pdf).

---


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/aviemzur/beam fix-flaky-kafkaio-test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/beam/pull/2797.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2797


commit f922bd46ef6899c775e36eacfdb11068cc011209
Author: Aviem Zur 
Date:   2017-05-01T04:53:39Z

[BEAM-2129] Fix flaky KafkaIOTest#testUnboundedSourceMetrics

Gauge results are flaky on Jenkins, so instead of asserting on the gauge's
value, assert on the gauge's existence.




> KafkaIOTest.testUnboundedSourceMetrics is flaky
> ---
>
> Key: BEAM-2129
> URL: https://issues.apache.org/jira/browse/BEAM-2129
> Project: Beam
>  Issue Type: Bug
>  Components: sdk-java-core, testing
>Reporter: Daniel Halperin
>Assignee: Aviem Zur
>
> This has been pretty flaky in precommit and postcommit. Here's a recent 
> postcommit failure.
> https://builds.apache.org/view/Beam/job/beam_PostCommit_Java_MavenInstall/3544/org.apache.beam$beam-sdks-java-io-kafka/testReport/org.apache.beam.sdk.io.kafka/KafkaIOTest/testUnboundedSourceMetrics/



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] beam pull request #2797: [BEAM-2129] Fix flaky KafkaIOTest#testUnboundedSour...

2017-04-30 Thread aviemzur
GitHub user aviemzur opened a pull request:

https://github.com/apache/beam/pull/2797

[BEAM-2129] Fix flaky KafkaIOTest#testUnboundedSourceMetrics

Gauge results are flaky on Jenkins, so instead of asserting on the gauge's
value, assert on the gauge's existence.

Be sure to do all of the following to help us incorporate your contribution
quickly and easily:

 - [ ] Make sure the PR title is formatted like:
   `[BEAM-<Jira issue #>] Description of pull request`
 - [ ] Make sure tests pass via `mvn clean verify`. (Even better, enable
   Travis-CI on your fork and ensure the whole test matrix passes).
 - [ ] Replace `<Jira issue #>` in the title with the actual Jira issue
   number, if there is one.
 - [ ] If this contribution is large, please file an Apache
   [Individual Contributor License 
Agreement](https://www.apache.org/licenses/icla.pdf).

---


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/aviemzur/beam fix-flaky-kafkaio-test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/beam/pull/2797.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2797


commit f922bd46ef6899c775e36eacfdb11068cc011209
Author: Aviem Zur 
Date:   2017-05-01T04:53:39Z

[BEAM-2129] Fix flaky KafkaIOTest#testUnboundedSourceMetrics

Gauge results are flaky on Jenkins, so instead of asserting on the gauge's
value, assert on the gauge's existence.






[jira] [Updated] (BEAM-2120) DataflowPipelineJob processes all log messages with each waitUntilFinish

2017-04-30 Thread Kenneth Knowles (JIRA)

 [ 
https://issues.apache.org/jira/browse/BEAM-2120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenneth Knowles updated BEAM-2120:
--
Summary: DataflowPipelineJob processes all log messages with each 
waitUntilFinish  (was: TestDataflowRunner prints all messages from the job, 
repeatedly)

> DataflowPipelineJob processes all log messages with each waitUntilFinish
> 
>
> Key: BEAM-2120
> URL: https://issues.apache.org/jira/browse/BEAM-2120
> Project: Beam
>  Issue Type: Bug
>  Components: runner-dataflow
>Reporter: Eugene Kirpichov
>Assignee: Kenneth Knowles
>
> This is due to 
> https://github.com/apache/beam/commit/ed8bd62652126d8d0cf054cee5cc79dda88e3415
> Every call to job.waitUntilFinish() prints all messages because the 
> pagination by lastTimestamp happens inside job.waitUntilFinish() rather than 
> outside.
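The fix direction implied above is to keep the lastTimestamp pagination cursor alive across calls rather than resetting it inside each waitUntilFinish. A simplified stand-alone sketch of that cursor pattern (MessagePoller and its fields are hypothetical, not the actual DataflowPipelineJob API):

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch: a polling cursor that survives across calls, so repeated
 *  polls do not re-deliver messages already seen. */
public class MessagePoller {

  private static final class Msg {
    final long timestamp;
    final String text;
    Msg(long timestamp, String text) {
      this.timestamp = timestamp;
      this.text = text;
    }
  }

  // Full job log, appended in timestamp order (an assumption of this sketch).
  private final List<Msg> jobLog = new ArrayList<>();

  // The cursor lives outside the polling method; keeping it here is what
  // prevents a second poll from re-printing the whole log.
  private long lastTimestamp = Long.MIN_VALUE;

  public void append(long timestamp, String text) {
    jobLog.add(new Msg(timestamp, text));
  }

  /** Returns only messages newer than the cursor, then advances the cursor. */
  public List<String> pollNew() {
    List<String> fresh = new ArrayList<>();
    for (Msg m : jobLog) {
      if (m.timestamp > lastTimestamp) {
        fresh.add(m.text);
      }
    }
    if (!jobLog.isEmpty()) {
      lastTimestamp = jobLog.get(jobLog.size() - 1).timestamp;
    }
    return fresh;
  }
}
```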





[GitHub] beam pull request #2796: [BEAM-2120] Do not repeat log messages in DataflowP...

2017-04-30 Thread kennknowles
GitHub user kennknowles opened a pull request:

https://github.com/apache/beam/pull/2796

[BEAM-2120] Do not repeat log messages in DataflowPipelineJob

Be sure to do all of the following to help us incorporate your contribution
quickly and easily:

 - [ ] Make sure the PR title is formatted like:
   `[BEAM-<Jira issue #>] Description of pull request`
 - [ ] Make sure tests pass via `mvn clean verify`. (Even better, enable
   Travis-CI on your fork and ensure the whole test matrix passes).
 - [ ] Replace `<Jira issue #>` in the title with the actual Jira issue
   number, if there is one.
 - [ ] If this contribution is large, please file an Apache
   [Individual Contributor License 
Agreement](https://www.apache.org/licenses/icla.pdf).

---


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kennknowles/beam DataflowPipelineJob

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/beam/pull/2796.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2796


commit 401eef09a0dc94dc2ed1e4afcaa30259d922ba3f
Author: Kenneth Knowles 
Date:   2017-05-01T04:24:23Z

Do not repeat log messages in DataflowPipelineJob






[jira] [Commented] (BEAM-1676) SdkCoreApiSurfaceTest Failed When Directory Contains Space

2017-04-30 Thread Davor Bonaci (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-1676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990572#comment-15990572
 ] 

Davor Bonaci commented on BEAM-1676:


This is the best path forward, almost certainly.

A fixed Guava requires (or will require) Java 8, which we may not be ready to 
upgrade to. So waiting for a Guava fix is not the right strategy, and we 
should move away from the Guava implementation to something else.

If there's no other good, easy-to-use, lean library that provides this, we 
should just re-implement/copy-paste Guava's code. No worries from the licensing 
perspective; it's Apache 2-licensed code.
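The underlying failure mode is a classic one: classpath URLs percent-encode spaces, so treating `URL.getPath()` as a filesystem path yields a nonexistent `%20` path, while going through a `URI` decodes it back. A small stdlib-only demonstration (the `/tmp/dir with space/beam` path is illustrative; this is not the ApiSurface code itself):

```java
import java.io.File;
import java.net.URL;

/** Sketch: why classpath scanning breaks on directories with spaces. */
public class SpacePathDemo {

  /** Broken variant: URL.getPath() keeps the %20 percent-encoding. */
  public static String rawPath(URL url) {
    return url.getPath();
  }

  /** Working variant: converting through URI decodes %20 back into a space. */
  public static String decodedPath(URL url) {
    try {
      return new File(url.toURI()).getPath();
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

  /** Returns {rawPath, decodedPath} for an illustrative directory with a space. */
  public static String[] demo() {
    try {
      URL url = new File("/tmp/dir with space/beam").toURI().toURL();
      return new String[] {rawPath(url), decodedPath(url)};
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) {
    String[] r = demo();
    System.out.println(r[0]); // on Unix: /tmp/dir%20with%20space/beam
    System.out.println(r[1]); // on Unix: /tmp/dir with space/beam
  }
}
```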

> SdkCoreApiSurfaceTest Failed When Directory Contains Space
> --
>
> Key: BEAM-1676
> URL: https://issues.apache.org/jira/browse/BEAM-1676
> Project: Beam
>  Issue Type: Bug
>  Components: sdk-java-core
>Reporter: Mark Liu
>Assignee: Stas Levin
>
> Test failed if the build directory contains a space. For example: "~/dir with 
> space/beam/..."
> The failure happened on Jenkins and can be reproduced locally.
> GcpApiSurfaceTest may have the same problem.
> The error is:
> {code}
> Failed tests: 
>   SdkCoreApiSurfaceTest.testSdkApiSurface:59 
> Expected: API surface to include only:
>   Classes in package "org.apache.beam"
>   Classes in package "com.google.api.client"
>   Classes in package "com.google.api.services.bigquery"
>   Classes in package "com.google.api.services.cloudresourcemanager"
>   Classes in package "com.google.api.services.pubsub"
>   Classes in package "com.google.api.services.storage"
>   Classes in package "com.google.auth"
>   Classes in package "com.google.protobuf"
>   Classes in package "com.fasterxml.jackson.annotation"
>   Classes in package "com.fasterxml.jackson.core"
>   Classes in package "com.fasterxml.jackson.databind"
>   Classes in package "org.apache.avro"
>   Classes in package "org.hamcrest"
>   Classes in package "org.codehaus.jackson"
>   Classes in package "org.joda.time"
>   Classes in package "org.junit"
>   
>  but: The following white-listed scopes did not have matching classes on 
> the API surface:
>   No Classes in package "com.fasterxml.jackson.annotation"
>   No Classes in package "com.fasterxml.jackson.core"
>   No Classes in package "com.fasterxml.jackson.databind"
>   No Classes in package "com.google.api.client"
>   No Classes in package "com.google.api.services.bigquery"
>   No Classes in package "com.google.api.services.cloudresourcemanager"
>   No Classes in package "com.google.api.services.pubsub"
>   No Classes in package "com.google.api.services.storage"
>   No Classes in package "com.google.auth"
>   No Classes in package "com.google.protobuf"
>   No Classes in package "org.apache.avro"
>   No Classes in package "org.apache.beam"
>   No Classes in package "org.codehaus.jackson"
>   No Classes in package "org.hamcrest"
>   No Classes in package "org.joda.time"
>   No Classes in package "org.junit"
> {code}
> Job link from Jenkins:
> https://builds.apache.org/job/beam_PostCommit_Java_Version_Test/14/
> One of the Jenkins jobs uses "JDK 1.8 (latest)", which also appears in the 
> project directory path.





Jenkins build is still unstable: beam_PostCommit_Java_MavenInstall #3549

2017-04-30 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PostCommit_Python_Verify #2058

2017-04-30 Thread Apache Jenkins Server
See 


--
[...truncated 596.10 KB...]
}, 
{
  "kind": "ParallelDo", 
  "name": "s13", 
  "properties": {
"display_data": [
  {
"key": "fn", 
"label": "Transform Function", 
"namespace": "apache_beam.transforms.core.CallableWrapperDoFn", 
"type": "STRING", 
"value": "_merge_tagged_vals_under_key"
  }, 
  {
"key": "fn", 
"label": "Transform Function", 
"namespace": "apache_beam.transforms.core.ParDo", 
"shortValue": "CallableWrapperDoFn", 
"type": "STRING", 
"value": "apache_beam.transforms.core.CallableWrapperDoFn"
  }
], 
"non_parallel_inputs": {}, 
"output_info": [
  {
"encoding": {
  "@type": "kind:windowed_value", 
  "component_encodings": [
{
  "@type": 
"FastPrimitivesCoder$eNprYEpOLEhMzkiNT0pNzNVLzk9JLSqGUlxuicUlAUWZuZklmWWpxc4gQa5CBs3GQsbaQqZQ/vi0xJycpMTk7Hiw+kJmPEYFZCZn56RCjWABGsFaW8iWVJykBwDlGS3/",
 
  "component_encodings": [
{
  "@type": 
"FastPrimitivesCoder$eNprYEpOLEhMzkiNT0pNzNVLzk9JLSqGUlxuicUlAUWZuZklmWWpxc4gQa5CBs3GQsbaQqZQ/vi0xJycpMTk7Hiw+kJmPEYFZCZn56RCjWABGsFaW8iWVJykBwDlGS3/",
 
  "component_encodings": []
}, 
{
  "@type": 
"FastPrimitivesCoder$eNprYEpOLEhMzkiNT0pNzNVLzk9JLSqGUlxuicUlAUWZuZklmWWpxc4gQa5CBs3GQsbaQqZQ/vi0xJycpMTk7Hiw+kJmPEYFZCZn56RCjWABGsFaW8iWVJykBwDlGS3/",
 
  "component_encodings": []
}
  ], 
  "is_pair_like": true
}, 
{
  "@type": "kind:global_window"
}
  ], 
  "is_wrapper": true
}, 
"output_name": "out", 
"user_name": 
"assert_that/Group/Map(_merge_tagged_vals_under_key).out"
  }
], 
"parallel_input": {
  "@type": "OutputReference", 
  "output_name": "out", 
  "step_name": "s12"
}, 
"serialized_fn": "", 
"user_name": "assert_that/Group/Map(_merge_tagged_vals_under_key)"
  }
}, 
{
  "kind": "ParallelDo", 
  "name": "s14", 
  "properties": {
"display_data": [
  {
"key": "fn", 
"label": "Transform Function", 
"namespace": "apache_beam.transforms.core.CallableWrapperDoFn", 
"type": "STRING", 
"value": ""
  }, 
  {
"key": "fn", 
"label": "Transform Function", 
"namespace": "apache_beam.transforms.core.ParDo", 
"shortValue": "CallableWrapperDoFn", 
"type": "STRING", 
"value": "apache_beam.transforms.core.CallableWrapperDoFn"
  }
], 
"non_parallel_inputs": {}, 
"output_info": [
  {
"encoding": {
  "@type": "kind:windowed_value", 
  "component_encodings": [
{
  "@type": 
"FastPrimitivesCoder$eNprYEpOLEhMzkiNT0pNzNVLzk9JLSqGUlxuicUlAUWZuZklmWWpxc4gQa5CBs3GQsbaQqZQ/vi0xJycpMTk7Hiw+kJmPEYFZCZn56RCjWABGsFaW8iWVJykBwDlGS3/",
 
  "component_encodings": [
{
  "@type": 
"FastPrimitivesCoder$eNprYEpOLEhMzkiNT0pNzNVLzk9JLSqGUlxuicUlAUWZuZklmWWpxc4gQa5CBs3GQsbaQqZQ/vi0xJycpMTk7Hiw+kJmPEYFZCZn56RCjWABGsFaW8iWVJykBwDlGS3/",
 
  "component_encodings": []
}, 
{
  "@type": 
"FastPrimitivesCoder$eNprYEpOLEhMzkiNT0pNzNVLzk9JLSqGUlxuicUlAUWZuZklmWWpxc4gQa5CBs3GQsbaQqZQ/vi0xJycpMTk7Hiw+kJmPEYFZCZn56RCjWABGsFaW8iWVJykBwDlGS3/",
 
  "component_encodings": []
}
  ], 
  "is_pair_like": true
}, 
{
  "@type": "kind:global_window"
}
  ], 
  "is_wrapper": true
}, 
"output_name": "out", 
"user_name": "assert_that/Unkey.out"
  }
], 
"parallel_input": {
  "@type": "OutputReference", 
  "output_name": "out", 
  "step_name": "s13"
}, 
"serialized_fn": "", 
"user_name": "assert_that/Unkey"
  }
}, 
{
  "kind": "ParallelDo", 
  "name": "s15", 
  "properties": {
"display_data": [
  {
"key": "fn", 
"label": "Transform Function", 
"namespace": "apache_beam.transforms.core.CallableWrapperDoFn", 
"type": "STRING", 
"value": 

[jira] [Resolved] (BEAM-2049) Remove KeyedCombineFn

2017-04-30 Thread Kenneth Knowles (JIRA)

 [ 
https://issues.apache.org/jira/browse/BEAM-2049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenneth Knowles resolved BEAM-2049.
---
Resolution: Fixed

> Remove KeyedCombineFn
> -
>
> Key: BEAM-2049
> URL: https://issues.apache.org/jira/browse/BEAM-2049
> Project: Beam
>  Issue Type: Improvement
>  Components: sdk-java-core
>Reporter: Kenneth Knowles
>Assignee: Kenneth Knowles
> Fix For: First stable release
>
>






[jira] [Commented] (BEAM-2049) Remove KeyedCombineFn

2017-04-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-2049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990544#comment-15990544
 ] 

ASF GitHub Bot commented on BEAM-2049:
--

Github user asfgit closed the pull request at:

https://github.com/apache/beam/pull/2636


> Remove KeyedCombineFn
> -
>
> Key: BEAM-2049
> URL: https://issues.apache.org/jira/browse/BEAM-2049
> Project: Beam
>  Issue Type: Improvement
>  Components: sdk-java-core
>Reporter: Kenneth Knowles
>Assignee: Kenneth Knowles
> Fix For: First stable release
>
>






[4/6] beam git commit: Remove KeyedCombineFn

2017-04-30 Thread kenn
http://git-wip-us.apache.org/repos/asf/beam/blob/7e04924e/runners/flink/src/main/java/org/apache/beam/runners/flink/translation/wrappers/streaming/state/FlinkBroadcastStateInternals.java
--
diff --git a/runners/flink/src/main/java/org/apache/beam/runners/flink/translation/wrappers/streaming/state/FlinkBroadcastStateInternals.java b/runners/flink/src/main/java/org/apache/beam/runners/flink/translation/wrappers/streaming/state/FlinkBroadcastStateInternals.java
index d015c38..31e931c 100644
--- a/runners/flink/src/main/java/org/apache/beam/runners/flink/translation/wrappers/streaming/state/FlinkBroadcastStateInternals.java
+++ b/runners/flink/src/main/java/org/apache/beam/runners/flink/translation/wrappers/streaming/state/FlinkBroadcastStateInternals.java
@@ -97,92 +97,74 @@ public class FlinkBroadcastStateInternals<K> implements StateInternals<K> {
   StateTag address,
   final StateContext context) {
 
-return address.bind(new StateTag.StateBinder() {
+return address.bind(
+new StateTag.StateBinder() {
 
-  @Override
-  public  ValueState bindValue(
-  StateTag> address,
-  Coder coder) {
+  @Override
+  public  ValueState bindValue(
+  StateTag> address, Coder coder) {
 
-return new FlinkBroadcastValueState<>(stateBackend, address, 
namespace, coder);
-  }
+return new FlinkBroadcastValueState<>(stateBackend, address, 
namespace, coder);
+  }
 
-  @Override
-  public  BagState bindBag(
-  StateTag> address,
-  Coder elemCoder) {
+  @Override
+  public  BagState bindBag(
+  StateTag> address, Coder elemCoder) {
 
-return new FlinkBroadcastBagState<>(stateBackend, address, namespace, 
elemCoder);
-  }
-
-  @Override
-  public  SetState bindSet(
-  StateTag> address,
-  Coder elemCoder) {
-throw new UnsupportedOperationException(
-String.format("%s is not supported", 
SetState.class.getSimpleName()));
-  }
+return new FlinkBroadcastBagState<>(stateBackend, address, 
namespace, elemCoder);
+  }
 
-  @Override
-  public  MapState bindMap(
-  StateTag> spec,
-  Coder mapKeyCoder, Coder mapValueCoder) {
-throw new UnsupportedOperationException(
-String.format("%s is not supported", 
MapState.class.getSimpleName()));
-  }
+  @Override
+  public  SetState bindSet(
+  StateTag> address, Coder elemCoder) {
+throw new UnsupportedOperationException(
+String.format("%s is not supported", 
SetState.class.getSimpleName()));
+  }
 
-  @Override
-  public 
-  CombiningState
-  bindCombiningValue(
-  StateTag> address,
-  Coder accumCoder,
-  Combine.CombineFn combineFn) {
+  @Override
+  public  MapState bindMap(
+  StateTag> spec,
+  Coder mapKeyCoder,
+  Coder mapValueCoder) {
+throw new UnsupportedOperationException(
+String.format("%s is not supported", 
MapState.class.getSimpleName()));
+  }
 
-return new FlinkCombiningState<>(
-stateBackend, address, combineFn, namespace, accumCoder);
-  }
+  @Override
+  public 
+  CombiningState bindCombiningValue(
+  StateTag> 
address,
+  Coder accumCoder,
+  Combine.CombineFn combineFn) {
 
-  @Override
-  public 
-  CombiningState bindKeyedCombiningValue(
-  StateTag> address,
-  Coder accumCoder,
-  final Combine.KeyedCombineFn 
combineFn) {
-return new FlinkKeyedCombiningState<>(
-stateBackend,
-address,
-combineFn,
-namespace,
-accumCoder,
-FlinkBroadcastStateInternals.this);
-  }
+return new FlinkCombiningState<>(
+stateBackend, address, combineFn, namespace, accumCoder);
+  }
 
-  @Override
-  public 
-  CombiningState 
bindKeyedCombiningValueWithContext(
-  StateTag> address,
-  Coder accumCoder,
-  CombineWithContext.KeyedCombineFnWithContext<
-  ? super K, InputT, AccumT, OutputT> combineFn) {
-return new FlinkCombiningStateWithContext<>(
-stateBackend,
-address,
-combineFn,
-namespace,
-accumCoder,
-FlinkBroadcastStateInternals.this,
-

[3/6] beam git commit: Remove KeyedCombineFn

2017-04-30 Thread kenn
http://git-wip-us.apache.org/repos/asf/beam/blob/7e04924e/sdks/java/core/src/main/java/org/apache/beam/sdk/transforms/Combine.java
--
diff --git a/sdks/java/core/src/main/java/org/apache/beam/sdk/transforms/Combine.java b/sdks/java/core/src/main/java/org/apache/beam/sdk/transforms/Combine.java
index 5ffaef8..0be8517 100644
--- a/sdks/java/core/src/main/java/org/apache/beam/sdk/transforms/Combine.java
+++ b/sdks/java/core/src/main/java/org/apache/beam/sdk/transforms/Combine.java
@@ -43,12 +43,9 @@ import org.apache.beam.sdk.coders.VarIntCoder;
 import org.apache.beam.sdk.coders.VoidCoder;
 import org.apache.beam.sdk.options.PipelineOptions;
 import org.apache.beam.sdk.transforms.CombineFnBase.AbstractGlobalCombineFn;
-import org.apache.beam.sdk.transforms.CombineFnBase.AbstractPerKeyCombineFn;
 import org.apache.beam.sdk.transforms.CombineFnBase.GlobalCombineFn;
-import org.apache.beam.sdk.transforms.CombineFnBase.PerKeyCombineFn;
 import org.apache.beam.sdk.transforms.CombineWithContext.CombineFnWithContext;
 import org.apache.beam.sdk.transforms.CombineWithContext.Context;
-import org.apache.beam.sdk.transforms.CombineWithContext.KeyedCombineFnWithContext;
 import org.apache.beam.sdk.transforms.CombineWithContext.RequiresContextInternal;
 import org.apache.beam.sdk.transforms.View.CreatePCollectionView;
 import org.apache.beam.sdk.transforms.display.DisplayData;
@@ -155,7 +152,7 @@ public class Combine {
    */
   public static <K, V> PerKey<K, V, V> perKey(
       SerializableFunction<Iterable<V>, V> fn) {
-    return perKey(IterableCombineFn.of(fn).asKeyedFn(), displayDataForFn(fn));
+    return perKey(IterableCombineFn.of(fn), displayDataForFn(fn));
   }
 
   /**
@@ -176,32 +173,11 @@ public class Combine {
    */
   public static <K, InputT, OutputT> PerKey<K, InputT, OutputT> perKey(
       GlobalCombineFn<? super InputT, ?, OutputT> fn) {
-    return perKey(fn.asKeyedFn(), displayDataForFn(fn));
-  }
-
-  /**
-   * Returns a {@link PerKey Combine.PerKey} {@code PTransform} that
-   * first groups its input {@code PCollection} of {@code KV}s by keys and
-   * windows, then invokes the given function on each of the key/values-lists
-   * pairs to produce a combined value, and then returns a
-   * {@code PCollection} of {@code KV}s mapping each distinct key to
-   * its combined value for each window.
-   *
-   * <p>Each output element is in the window by which its corresponding input
-   * was grouped, and has the timestamp of the end of that window.  The output
-   * {@code PCollection} has the same
-   * {@link org.apache.beam.sdk.transforms.windowing.WindowFn}
-   * as the input.
-   *
-   * <p>See {@link PerKey Combine.PerKey} for more information.
-   */
-  public static <K, InputT, OutputT> PerKey<K, InputT, OutputT> perKey(
-      PerKeyCombineFn<K, InputT, ?, OutputT> fn) {
     return perKey(fn, displayDataForFn(fn));
   }
 
   private static <K, InputT, OutputT> PerKey<K, InputT, OutputT> perKey(
-      PerKeyCombineFn<K, InputT, ?, OutputT> fn,
+      GlobalCombineFn<? super InputT, ?, OutputT> fn,
       DisplayData.ItemSpec<? extends HasDisplayData> fnDisplayData) {
     return new PerKey<>(fn, fnDisplayData, false /*fewKeys*/);
   }
@@ -211,7 +187,7 @@ public class Combine {
    * in {@link GroupByKey}.
    */
   private static <K, InputT, OutputT> PerKey<K, InputT, OutputT> fewKeys(
-      PerKeyCombineFn<K, InputT, ?, OutputT> fn,
+      GlobalCombineFn<? super InputT, ?, OutputT> fn,
       DisplayData.ItemSpec<? extends HasDisplayData> fnDisplayData) {
     return new PerKey<>(fn, fnDisplayData, true /*fewKeys*/);
   }
@@ -239,7 +215,7 @@ public class Combine {
    */
   public static <K, V> GroupedValues<K, V, V> groupedValues(
       SerializableFunction<Iterable<V>, V> fn) {
-    return groupedValues(IterableCombineFn.of(fn).asKeyedFn(), displayDataForFn(fn));
+    return groupedValues(IterableCombineFn.of(fn), displayDataForFn(fn));
   }
 
   /**
@@ -265,37 +241,11 @@ public class Combine {
    */
   public static <K, InputT, OutputT> GroupedValues<K, InputT, OutputT>
       groupedValues(GlobalCombineFn<? super InputT, ?, OutputT> fn) {
-    return groupedValues(fn.asKeyedFn(), displayDataForFn(fn));
-  }
-
-  /**
-   * Returns a {@link GroupedValues Combine.GroupedValues}
-   * {@code PTransform} that takes a {@code PCollection} of
-   * {@code KV}s where a key maps to an {@code Iterable} of values, e.g.,
-   * the result of a {@code GroupByKey}, then uses the given
-   * {@code KeyedCombineFn} to combine all the values associated with
-   * each key.  The combining function is provided the key.  The types
-   * of the input and output values can differ.
-   *
-   * <p>Each output element has the same timestamp and is in the same window
-   * as its corresponding input element, and the output
-   * {@code PCollection} has the same
-   * {@link org.apache.beam.sdk.transforms.windowing.WindowFn}
-   * associated with it as the input.
-   *
-   * <p>See {@link GroupedValues Combine.GroupedValues} for more information.
-   *
-   * <p>Note that {@link #perKey(CombineFnBase.PerKeyCombineFn)} is typically
-   * more convenient to use than {@link 
[6/6] beam git commit: This closes #2636: Remove KeyedCombineFn

2017-04-30 Thread kenn
This closes #2636: Remove KeyedCombineFn

  Update Dataflow worker version to beam-master-20170430
  Remove KeyedCombineFn


Project: http://git-wip-us.apache.org/repos/asf/beam/repo
Commit: http://git-wip-us.apache.org/repos/asf/beam/commit/9f2733ac
Tree: http://git-wip-us.apache.org/repos/asf/beam/tree/9f2733ac
Diff: http://git-wip-us.apache.org/repos/asf/beam/diff/9f2733ac

Branch: refs/heads/master
Commit: 9f2733ac460ce42d6b3bd49f3db1bacb771ef85c
Parents: a198f8d 07ca542
Author: Kenneth Knowles <k...@google.com>
Authored: Sun Apr 30 18:38:56 2017 -0700
Committer: Kenneth Knowles <k...@google.com>
Committed: Sun Apr 30 18:38:56 2017 -0700

--
 .../translation/utils/ApexStateInternals.java   |  38 +-
 .../runners/core/GlobalCombineFnRunner.java |  78 +++
 .../runners/core/GlobalCombineFnRunners.java| 193 ++
 .../runners/core/InMemoryStateInternals.java|  50 +-
 .../runners/core/PerKeyCombineFnRunner.java |  79 ---
 .../runners/core/PerKeyCombineFnRunners.java| 161 -
 .../org/apache/beam/runners/core/StateTag.java  |  18 +-
 .../org/apache/beam/runners/core/StateTags.java |  43 +-
 .../beam/runners/core/SystemReduceFn.java   |  15 +-
 .../beam/runners/core/ReduceFnRunnerTest.java   |  36 +-
 .../beam/runners/core/ReduceFnTester.java   |  15 +-
 .../apache/beam/runners/core/StateTagTest.java  |  22 +-
 .../CopyOnAccessInMemoryStateInternals.java |  66 +-
 .../CopyOnAccessInMemoryStateInternalsTest.java |  34 -
 .../flink/FlinkBatchTransformTranslators.java   |   9 +-
 .../functions/AbstractFlinkCombineRunner.java   |  44 +-
 .../FlinkMergingNonShuffleReduceFunction.java   |  10 +-
 .../functions/FlinkPartialReduceFunction.java   |   6 +-
 .../functions/FlinkReduceFunction.java  |  10 +-
 .../functions/SortingFlinkCombineRunner.java|   1 -
 .../state/FlinkBroadcastStateInternals.java | 173 ++---
 .../state/FlinkKeyGroupStateInternals.java  | 119 ++--
 .../state/FlinkSplitStateInternals.java | 119 ++--
 .../streaming/state/FlinkStateInternals.java| 173 ++---
 runners/google-cloud-dataflow-java/pom.xml  |   2 +-
 .../spark/stateful/SparkStateInternals.java |  40 +-
 .../spark/translation/SparkKeyedCombineFn.java  |  26 +-
 .../spark/translation/TransformTranslator.java  |  44 +-
 .../streaming/StreamingTransformTranslator.java |   4 +-
 .../runners/spark/SparkRunnerDebuggerTest.java  |   7 +-
 .../src/main/resources/beam/findbugs-filter.xml |   2 +-
 .../sdk/transforms/ApproximateQuantiles.java|   8 +-
 .../beam/sdk/transforms/ApproximateUnique.java  |   3 +-
 .../org/apache/beam/sdk/transforms/Combine.java | 672 +--
 .../beam/sdk/transforms/CombineFnBase.java  | 136 
 .../apache/beam/sdk/transforms/CombineFns.java  | 448 +
 .../beam/sdk/transforms/CombineWithContext.java | 174 +
 .../org/apache/beam/sdk/transforms/Top.java |   6 +-
 .../org/apache/beam/sdk/transforms/View.java|   2 +-
 .../apache/beam/sdk/util/AppliedCombineFn.java  |  35 +-
 .../org/apache/beam/sdk/util/CombineFnUtil.java | 123 ++--
 .../apache/beam/sdk/util/state/StateBinder.java |  19 +-
 .../apache/beam/sdk/util/state/StateSpecs.java  | 177 ++---
 .../beam/sdk/transforms/CombineFnsTest.java | 114 ++--
 .../apache/beam/sdk/transforms/CombineTest.java | 213 +++---
 .../apache/beam/sdk/transforms/ParDoTest.java   |   2 +-
 .../apache/beam/sdk/transforms/ViewTest.java|   2 +-
 .../apache/beam/sdk/util/CombineFnUtilTest.java |  18 +-
 48 files changed, 1179 insertions(+), 2610 deletions(-)
--




[5/6] beam git commit: Remove KeyedCombineFn

2017-04-30 Thread kenn
Remove KeyedCombineFn


Project: http://git-wip-us.apache.org/repos/asf/beam/repo
Commit: http://git-wip-us.apache.org/repos/asf/beam/commit/7e04924e
Tree: http://git-wip-us.apache.org/repos/asf/beam/tree/7e04924e
Diff: http://git-wip-us.apache.org/repos/asf/beam/diff/7e04924e

Branch: refs/heads/master
Commit: 7e04924ee7b31e28326f761618173749a55789d0
Parents: a198f8d
Author: Kenneth Knowles 
Authored: Fri Apr 21 14:04:02 2017 -0700
Committer: Kenneth Knowles 
Committed: Sun Apr 30 18:17:42 2017 -0700

--
 .../translation/utils/ApexStateInternals.java   |  38 +-
 .../runners/core/GlobalCombineFnRunner.java |  78 +++
 .../runners/core/GlobalCombineFnRunners.java| 193 ++
 .../runners/core/InMemoryStateInternals.java|  50 +-
 .../runners/core/PerKeyCombineFnRunner.java |  79 ---
 .../runners/core/PerKeyCombineFnRunners.java| 161 -
 .../org/apache/beam/runners/core/StateTag.java  |  18 +-
 .../org/apache/beam/runners/core/StateTags.java |  43 +-
 .../beam/runners/core/SystemReduceFn.java   |  15 +-
 .../beam/runners/core/ReduceFnRunnerTest.java   |  36 +-
 .../beam/runners/core/ReduceFnTester.java   |  15 +-
 .../apache/beam/runners/core/StateTagTest.java  |  22 +-
 .../CopyOnAccessInMemoryStateInternals.java |  66 +-
 .../CopyOnAccessInMemoryStateInternalsTest.java |  34 -
 .../flink/FlinkBatchTransformTranslators.java   |   9 +-
 .../functions/AbstractFlinkCombineRunner.java   |  44 +-
 .../FlinkMergingNonShuffleReduceFunction.java   |  10 +-
 .../functions/FlinkPartialReduceFunction.java   |   6 +-
 .../functions/FlinkReduceFunction.java  |  10 +-
 .../functions/SortingFlinkCombineRunner.java|   1 -
 .../state/FlinkBroadcastStateInternals.java | 173 ++---
 .../state/FlinkKeyGroupStateInternals.java  | 119 ++--
 .../state/FlinkSplitStateInternals.java | 119 ++--
 .../streaming/state/FlinkStateInternals.java| 173 ++---
 .../spark/stateful/SparkStateInternals.java |  40 +-
 .../spark/translation/SparkKeyedCombineFn.java  |  26 +-
 .../spark/translation/TransformTranslator.java  |  44 +-
 .../streaming/StreamingTransformTranslator.java |   4 +-
 .../runners/spark/SparkRunnerDebuggerTest.java  |   7 +-
 .../src/main/resources/beam/findbugs-filter.xml |   2 +-
 .../sdk/transforms/ApproximateQuantiles.java|   8 +-
 .../beam/sdk/transforms/ApproximateUnique.java  |   3 +-
 .../org/apache/beam/sdk/transforms/Combine.java | 672 +--
 .../beam/sdk/transforms/CombineFnBase.java  | 136 
 .../apache/beam/sdk/transforms/CombineFns.java  | 448 +
 .../beam/sdk/transforms/CombineWithContext.java | 174 +
 .../org/apache/beam/sdk/transforms/Top.java |   6 +-
 .../org/apache/beam/sdk/transforms/View.java|   2 +-
 .../apache/beam/sdk/util/AppliedCombineFn.java  |  35 +-
 .../org/apache/beam/sdk/util/CombineFnUtil.java | 123 ++--
 .../apache/beam/sdk/util/state/StateBinder.java |  19 +-
 .../apache/beam/sdk/util/state/StateSpecs.java  | 177 ++---
 .../beam/sdk/transforms/CombineFnsTest.java | 114 ++--
 .../apache/beam/sdk/transforms/CombineTest.java | 213 +++---
 .../apache/beam/sdk/transforms/ParDoTest.java   |   2 +-
 .../apache/beam/sdk/transforms/ViewTest.java|   2 +-
 .../apache/beam/sdk/util/CombineFnUtilTest.java |  18 +-
 47 files changed, 1178 insertions(+), 2609 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/beam/blob/7e04924e/runners/apex/src/main/java/org/apache/beam/runners/apex/translation/utils/ApexStateInternals.java
--
diff --git a/runners/apex/src/main/java/org/apache/beam/runners/apex/translation/utils/ApexStateInternals.java b/runners/apex/src/main/java/org/apache/beam/runners/apex/translation/utils/ApexStateInternals.java
index ec8f666..e682894 100644
--- a/runners/apex/src/main/java/org/apache/beam/runners/apex/translation/utils/ApexStateInternals.java
+++ b/runners/apex/src/main/java/org/apache/beam/runners/apex/translation/utils/ApexStateInternals.java
@@ -42,8 +42,7 @@ import org.apache.beam.sdk.coders.CoderException;
 import org.apache.beam.sdk.coders.InstantCoder;
 import org.apache.beam.sdk.coders.ListCoder;
 import org.apache.beam.sdk.transforms.Combine.CombineFn;
-import org.apache.beam.sdk.transforms.Combine.KeyedCombineFn;
-import org.apache.beam.sdk.transforms.CombineWithContext.KeyedCombineFnWithContext;
+import org.apache.beam.sdk.transforms.CombineWithContext.CombineFnWithContext;
 import org.apache.beam.sdk.transforms.windowing.BoundedWindow;
 import org.apache.beam.sdk.transforms.windowing.TimestampCombiner;
 import org.apache.beam.sdk.util.CoderUtils;
@@ -145,7 +144,7 @@ public class ApexStateInternals implements StateInternals {
   address,
   accumCoder,
   key,
-  

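The diff above swaps the KeyedCombineFn imports for plain CombineFn ones; the contract that remains after this removal can be sketched roughly as follows (a self-contained stand-in for illustration, not Beam's actual Combine.CombineFn class):

```java
import java.util.List;

// Stand-in for the four-method contract of Beam's Combine.CombineFn,
// which the commit above keeps while deleting KeyedCombineFn.
abstract class CombineFnSketch<InputT, AccumT, OutputT> {
    // Start a fresh accumulator for a new bundle/key.
    abstract AccumT createAccumulator();

    // Fold one input element into the accumulator.
    abstract AccumT addInput(AccumT accumulator, InputT input);

    // Combine accumulators produced on different workers.
    abstract AccumT mergeAccumulators(List<AccumT> accumulators);

    // Turn the final accumulator into the output value.
    abstract OutputT extractOutput(AccumT accumulator);
}

// Example implementation: summing integers under this contract.
class SumInts extends CombineFnSketch<Integer, Integer, Integer> {
    @Override Integer createAccumulator() { return 0; }
    @Override Integer addInput(Integer acc, Integer input) { return acc + input; }
    @Override Integer mergeAccumulators(List<Integer> accs) {
        int total = 0;
        for (Integer a : accs) { total += a; }
        return total;
    }
    @Override Integer extractOutput(Integer acc) { return acc; }
}
```

With KeyedCombineFn gone, per-key combining uses this same key-agnostic contract; the runner supplies the key separately.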
[1/6] beam git commit: Update Dataflow worker version to beam-master-20170430

2017-04-30 Thread kenn
Repository: beam
Updated Branches:
  refs/heads/master a198f8d23 -> 9f2733ac4


Update Dataflow worker version to beam-master-20170430


Project: http://git-wip-us.apache.org/repos/asf/beam/repo
Commit: http://git-wip-us.apache.org/repos/asf/beam/commit/07ca542f
Tree: http://git-wip-us.apache.org/repos/asf/beam/tree/07ca542f
Diff: http://git-wip-us.apache.org/repos/asf/beam/diff/07ca542f

Branch: refs/heads/master
Commit: 07ca542f282c2ce3aff9f3f2966fa80422a59242
Parents: 7e04924
Author: Kenneth Knowles <k...@google.com>
Authored: Sun Apr 30 15:08:44 2017 -0700
Committer: Kenneth Knowles <k...@google.com>
Committed: Sun Apr 30 18:17:42 2017 -0700

--
 runners/google-cloud-dataflow-java/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/beam/blob/07ca542f/runners/google-cloud-dataflow-java/pom.xml
--
diff --git a/runners/google-cloud-dataflow-java/pom.xml b/runners/google-cloud-dataflow-java/pom.xml
index bfcb189..cb0fa7f 100644
--- a/runners/google-cloud-dataflow-java/pom.xml
+++ b/runners/google-cloud-dataflow-java/pom.xml
@@ -33,7 +33,7 @@
   jar
 
   
-    beam-master-20170428-2
+    beam-master-20170430
 
1
 
6
   



Jenkins build is back to stable : beam_PostCommit_Java_ValidatesRunner_Spark #1855

2017-04-30 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: beam_PostCommit_Java_MavenInstall #3548

2017-04-30 Thread Apache Jenkins Server
See 




Jenkins build became unstable: beam_PostCommit_Java_ValidatesRunner_Spark #1854

2017-04-30 Thread Apache Jenkins Server
See 




[jira] [Resolved] (BEAM-2123) Passing potential null pointer to encode() in StructuredCoder#structuralValue

2017-04-30 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/BEAM-2123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved BEAM-2123.
--
   Resolution: Not A Problem
Fix Version/s: Not applicable

> Passing potential null pointer to encode() in StructuredCoder#structuralValue
> -
>
> Key: BEAM-2123
> URL: https://issues.apache.org/jira/browse/BEAM-2123
> Project: Beam
>  Issue Type: Bug
>  Components: sdk-java-core
>Reporter: Ted Yu
>Assignee: Thomas Groh
>Priority: Minor
> Fix For: Not applicable
>
>
> {code}
>   public Object structuralValue(T value) {
> if (value != null && consistentWithEquals()) {
>   return value;
> } else {
>   try {
> ByteArrayOutputStream os = new ByteArrayOutputStream();
> encode(value, os, Context.OUTER);
> {code}
> If value is null, encode() would throw CoderException (I checked 
> ByteArrayCoder and KvCoder), which would be caught and converted to 
> IllegalArgumentException.
> Looks like structuralValue() can check for a null value directly and throw 
> CoderException. This would result in a clearer exception.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
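The change proposed in the issue above can be sketched as follows (a hypothetical stand-in using a simple String coder that rejects null, not Beam's actual StructuredCoder):

```java
import java.io.ByteArrayOutputStream;

class StructuralValueSketch {
    // Stand-in for Beam's CoderException (unchecked here for brevity;
    // the real one is a checked IOException subclass).
    static class CoderException extends RuntimeException {
        CoderException(String message) { super(message); }
    }

    // Simplified encode(): like ByteArrayCoder and KvCoder, it rejects null.
    static void encode(String value, ByteArrayOutputStream os) {
        if (value == null) {
            throw new CoderException("cannot encode a null String");
        }
        byte[] bytes = value.getBytes();
        os.write(bytes, 0, bytes.length);
    }

    // The suggested structuralValue(): check for null up front and throw a
    // CoderException with a clear message, instead of letting encode() fail
    // inside a try/catch that rewraps the error as IllegalArgumentException.
    static Object structuralValue(String value) {
        if (value == null) {
            throw new CoderException("null value has no structural encoding");
        }
        ByteArrayOutputStream os = new ByteArrayOutputStream();
        encode(value, os);
        return os.toByteArray();
    }
}
```

The up-front check fails with the coder-specific exception type rather than a generic wrapper, which is the clarity improvement discussed in the thread.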


[jira] [Commented] (BEAM-2123) Passing potential null pointer to encode() in StructuredCoder#structuralValue

2017-04-30 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-2123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990529#comment-15990529
 ] 

Ted Yu commented on BEAM-2123:
--

Thanks for the confirmation (I didn't see structuralValue() either)



[jira] [Commented] (BEAM-2123) Passing potential null pointer to encode() in StructuredCoder#structuralValue

2017-04-30 Thread Daniel Halperin (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-2123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990527#comment-15990527
 ] 

Daniel Halperin commented on BEAM-2123:
---

https://github.com/apache/beam/blob/a198f8d232bda1ae68467b17027565d9fcef63a8/sdks/java/core/src/test/java/org/apache/beam/sdk/coders/StructuredCoderTest.java#L45

I typoed -- NullBooleanCoder not Nullable, but it should have been obvious what 
I meant.



[jira] [Commented] (BEAM-2123) Passing potential null pointer to encode() in StructuredCoder#structuralValue

2017-04-30 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-2123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990526#comment-15990526
 ] 

Ted Yu commented on BEAM-2123:
--

I looked at 
./sdks/java/core/src/test/java/org/apache/beam/sdk/coders/StructuredCoderTest.java
 but didn't see NullableBooleanCoder



[jira] [Commented] (BEAM-2128) PostCommit Java_MavenInstall broken

2017-04-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-2128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990525#comment-15990525
 ] 

ASF GitHub Bot commented on BEAM-2128:
--

GitHub user kennknowles opened a pull request:

https://github.com/apache/beam/pull/2795

[BEAM-2128] Use DataflowRunner and set tempLocation in Jenkins IT pipeline 
options

Be sure to do all of the following to help us incorporate your contribution
quickly and easily:

 - [x] Make sure the PR title is formatted like:
   `[BEAM-] Description of pull request`
 - [x] Make sure tests pass via `mvn clean verify`. (Even better, enable
   Travis-CI on your fork and ensure the whole test matrix passes).
 - [x] Replace `` in the title with the actual Jira issue
   number, if there is one.
 - [x] If this contribution is large, please file an Apache
   [Individual Contributor License 
Agreement](https://www.apache.org/licenses/icla.pdf).

---

This should fix the MavenInstall postcommit breakage, whose root cause is a 
coupled failure between a design issue in `TestDataflowRunner` and the 
particular pipeline options that Jenkins provides.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kennknowles/beam BigQueryIO

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/beam/pull/2795.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2795


commit 2be9f58b6b2babcaad2fd8be8a34c4e32e98c968
Author: Kenneth Knowles 
Date:   2017-05-01T00:10:17Z

Use DataflowRunner and set tempLocation in Jenkins IT pipeline options




> PostCommit Java_MavenInstall broken
> ---
>
> Key: BEAM-2128
> URL: https://issues.apache.org/jira/browse/BEAM-2128
> Project: Beam
>  Issue Type: Bug
>  Components: sdk-java-core, testing
>Reporter: Daniel Halperin
>Assignee: Kenneth Knowles
>
> The test 
> {{org.apache.beam.examples.cookbook.BigQueryTornadoesIT.testE2EBigQueryTornadoes}}
> is broken since PR #2666 was merged to master.
> https://builds.apache.org/view/Beam/job/beam_PostCommit_Java_MavenInstall/3533/





[GitHub] beam pull request #2795: [BEAM-2128] Use DataflowRunner and set tempLocation...

2017-04-30 Thread kennknowles
GitHub user kennknowles opened a pull request:

https://github.com/apache/beam/pull/2795

[BEAM-2128] Use DataflowRunner and set tempLocation in Jenkins IT pipeline 
options

Be sure to do all of the following to help us incorporate your contribution
quickly and easily:

 - [x] Make sure the PR title is formatted like:
   `[BEAM-] Description of pull request`
 - [x] Make sure tests pass via `mvn clean verify`. (Even better, enable
   Travis-CI on your fork and ensure the whole test matrix passes).
 - [x] Replace `` in the title with the actual Jira issue
   number, if there is one.
 - [x] If this contribution is large, please file an Apache
   [Individual Contributor License 
Agreement](https://www.apache.org/licenses/icla.pdf).

---

This should fix the MavenInstall postcommit breakage, whose root cause is a 
coupled failure between a design issue in `TestDataflowRunner` and the 
particular pipeline options that Jenkins provides.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kennknowles/beam BigQueryIO

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/beam/pull/2795.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2795


commit 2be9f58b6b2babcaad2fd8be8a34c4e32e98c968
Author: Kenneth Knowles 
Date:   2017-05-01T00:10:17Z

Use DataflowRunner and set tempLocation in Jenkins IT pipeline options




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (BEAM-59) Switch from IOChannelFactory to FileSystems

2017-04-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-59?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990523#comment-15990523
 ] 

ASF GitHub Bot commented on BEAM-59:


Github user asfgit closed the pull request at:

https://github.com/apache/beam/pull/2792


> Switch from IOChannelFactory to FileSystems
> ---
>
> Key: BEAM-59
> URL: https://issues.apache.org/jira/browse/BEAM-59
> Project: Beam
>  Issue Type: New Feature
>  Components: sdk-java-core, sdk-java-gcp
>Reporter: Daniel Halperin
>Assignee: Daniel Halperin
> Fix For: First stable release
>
>
> Right now, FileBasedSource and FileBasedSink communication is mediated by 
> IOChannelFactory. There are a number of issues:
> * Global configuration -- e.g., all 'gs://' URIs use the same credentials. 
> This should be per-source/per-sink/etc.
> * Supported APIs -- currently IOChannelFactory is in the "non-public API" 
> util package and subject to change. We need users to be able to add new 
> backends ('s3://', 'hdfs://', etc.) directly, without fear that they will be 
> broken.
> * Per-backend features: e.g., creating buckets in GCS/s3, setting expiration 
> time, etc.
> Updates:
> Design docs posted on dev@ list:
> Part 1: IOChannelFactory Redesign: 
> https://docs.google.com/document/d/11TdPyZ9_zmjokhNWM3Id-XJsVG3qel2lhdKTknmZ_7M/edit#
> Part 2: Configurable BeamFileSystem:
> https://docs.google.com/document/d/1-7vo9nLRsEEzDGnb562PuL4q9mUiq_ZVpCAiyyJw8p8/edit#heading=h.p3gc3colc2cs





[GitHub] beam pull request #2792: [BEAM-59] Minor style cleanups to WriteOneWindowPer...

2017-04-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/beam/pull/2792




[1/2] beam git commit: [BEAM-59] Minor style cleanups to WriteOneWindowPerFile

2017-04-30 Thread dhalperi
Repository: beam
Updated Branches:
  refs/heads/master 46ca02aba -> a198f8d23


[BEAM-59] Minor style cleanups to WriteOneWindowPerFile

Makes the changes in #2779 more standalone


Project: http://git-wip-us.apache.org/repos/asf/beam/repo
Commit: http://git-wip-us.apache.org/repos/asf/beam/commit/1ecc6eba
Tree: http://git-wip-us.apache.org/repos/asf/beam/tree/1ecc6eba
Diff: http://git-wip-us.apache.org/repos/asf/beam/diff/1ecc6eba

Branch: refs/heads/master
Commit: 1ecc6ebad39f11df8fc3dc7d34cb69599c284230
Parents: 46ca02a
Author: Dan Halperin 
Authored: Sun Apr 30 10:57:56 2017 -0700
Committer: Dan Halperin 
Committed: Sun Apr 30 17:12:08 2017 -0700

--
 .../beam/examples/common/WriteOneFilePerWindow.java  | 11 +++
 1 file changed, 7 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/beam/blob/1ecc6eba/examples/java/src/main/java/org/apache/beam/examples/common/WriteOneFilePerWindow.java
--
diff --git a/examples/java/src/main/java/org/apache/beam/examples/common/WriteOneFilePerWindow.java b/examples/java/src/main/java/org/apache/beam/examples/common/WriteOneFilePerWindow.java
index 2ed8a74..6609828 100644
--- a/examples/java/src/main/java/org/apache/beam/examples/common/WriteOneFilePerWindow.java
+++ b/examples/java/src/main/java/org/apache/beam/examples/common/WriteOneFilePerWindow.java
@@ -38,8 +38,8 @@ import org.joda.time.format.ISODateTimeFormat;
  */
 public class WriteOneFilePerWindow extends PTransform<PCollection<String>, PDone> {
 
-  private static DateTimeFormatter formatter = ISODateTimeFormat.hourMinute();
-  private String filenamePrefix;
+  private static final DateTimeFormatter FORMATTER = ISODateTimeFormat.hourMinute();
+  private final String filenamePrefix;
 
   public WriteOneFilePerWindow(String filenamePrefix) {
 this.filenamePrefix = filenamePrefix;
@@ -48,7 +48,10 @@ public class WriteOneFilePerWindow extends PTransform

[2/2] beam git commit: This closes #2792

2017-04-30 Thread dhalperi
This closes #2792


Project: http://git-wip-us.apache.org/repos/asf/beam/repo
Commit: http://git-wip-us.apache.org/repos/asf/beam/commit/a198f8d2
Tree: http://git-wip-us.apache.org/repos/asf/beam/tree/a198f8d2
Diff: http://git-wip-us.apache.org/repos/asf/beam/diff/a198f8d2

Branch: refs/heads/master
Commit: a198f8d232bda1ae68467b17027565d9fcef63a8
Parents: 46ca02a 1ecc6eb
Author: Dan Halperin 
Authored: Sun Apr 30 17:12:13 2017 -0700
Committer: Dan Halperin 
Committed: Sun Apr 30 17:12:13 2017 -0700

--
 .../beam/examples/common/WriteOneFilePerWindow.java  | 11 +++
 1 file changed, 7 insertions(+), 4 deletions(-)
--




Build failed in Jenkins: beam_PerformanceTests_JDBC #163

2017-04-30 Thread Apache Jenkins Server
See 


Changes:

[thw] BEAM-2022 fix triggering for processing time timers

[kirpichov] [BEAM-2114] Fixed display data for Kafka read/write with coders

[kirpichov] [BEAM-2114] Throw instead of warning when KafkaIO cannot infer coder

[kirpichov] [BEAM-2114] Tests for KafkaIO: use ExpectedException rule

[iemejia] Fix hamcrest-core version in parent pom

[dhalperi] Do not prune branches in Jenkins

--
[...truncated 393.11 KB...]
at 
com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory.create(MapTaskExecutorFactory.java:152)
at 
com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.doWork(DataflowWorker.java:272)
at 
com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:244)
at 
com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:127)
at 
com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
at 
com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:94)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
(7404a711069ded05): java.lang.RuntimeException: 
org.apache.beam.sdk.util.UserCodeException: org.postgresql.util.PSQLException: 
The connection attempt failed.
at 
com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory$3.typedApply(MapTaskExecutorFactory.java:289)
at 
com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory$3.typedApply(MapTaskExecutorFactory.java:261)
at 
com.google.cloud.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:55)
at 
com.google.cloud.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:43)
at 
com.google.cloud.dataflow.worker.graph.Networks.replaceDirectedNetworkNodes(Networks.java:78)
at 
com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory.create(MapTaskExecutorFactory.java:152)
at 
com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.doWork(DataflowWorker.java:272)
at 
com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:244)
at 
com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:127)
at 
com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
at 
com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:94)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.beam.sdk.util.UserCodeException: 
org.postgresql.util.PSQLException: The connection attempt failed.
at 
org.apache.beam.sdk.util.UserCodeException.wrap(UserCodeException.java:36)
at 
org.apache.beam.sdk.io.jdbc.JdbcIO$Read$ReadFn$auxiliary$eNHsayNm.invokeSetup(Unknown
 Source)
at 
com.google.cloud.dataflow.worker.runners.worker.DoFnInstanceManagers$ConcurrentQueueInstanceManager.deserializeCopy(DoFnInstanceManagers.java:66)
at 
com.google.cloud.dataflow.worker.runners.worker.DoFnInstanceManagers$ConcurrentQueueInstanceManager.peek(DoFnInstanceManagers.java:48)
at 
com.google.cloud.dataflow.worker.runners.worker.UserParDoFnFactory.create(UserParDoFnFactory.java:99)
at 
com.google.cloud.dataflow.worker.runners.worker.DefaultParDoFnFactory.create(DefaultParDoFnFactory.java:70)
at 
com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory.createParDoOperation(MapTaskExecutorFactory.java:363)
at 
com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory$3.typedApply(MapTaskExecutorFactory.java:278)
... 14 more
Caused by: org.postgresql.util.PSQLException: The connection attempt failed.
at 
org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:272)
at 
org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:51)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:215)
at 

[jira] [Commented] (BEAM-2128) PostCommit Java_MavenInstall broken

2017-04-30 Thread Kenneth Knowles (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-2128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990522#comment-15990522
 ] 

Kenneth Knowles commented on BEAM-2128:
---

Noting a few things while isolating the issue:

 - all commits succeed on direct runner
 - the Jenkins configuration only runs against Dataflow
 - the Jenkins configuration only works with {{TestDataflowRunner}}, which is 
probably not appropriate for end-to-end tests
 - in particular, {{TestDataflowRunner.fromOptions}} copies the 
{{\-\-tempRoot}} pipeline option to {{\-\-tempLocation}}
 - BigQueryIO needs {{tempLocation}} during construction (a stable release 
blocker) but there is no correct runner-independent place for it to get 
populated. Best for Jenkins to set {{tempLocation}} directly and not use a test 
runner

So the superficial cause is 19aa8ba576c2d43166dc6d67a4c5c103b3522870, but there 
are multiple more serious issues at the root, and the fix is a trivial roll-forward.

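Per the analysis above, the fix is for Jenkins to set tempLocation directly in the IT pipeline options rather than relying on TestDataflowRunner to copy it from tempRoot. A hypothetical invocation might look like the following (the project and bucket names are made-up placeholders; `beamTestPipelineOptions` is the system property Beam's test pipelines read):

```shell
# Hypothetical sketch only: pass an explicit --tempLocation in the IT
# pipeline options instead of deriving it from --tempRoot via a test runner.
# Project and bucket names below are placeholders.
mvn clean verify \
  -DbeamTestPipelineOptions='[
    "--runner=DataflowRunner",
    "--project=example-project",
    "--tempLocation=gs://example-bucket/tmp"
  ]'
```

Setting the option explicitly also avoids depending on TestDataflowRunner behavior that is not appropriate for end-to-end tests, as noted in the comment.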


Build failed in Jenkins: beam_PerformanceTests_Dataflow #357

2017-04-30 Thread Apache Jenkins Server
See 


Changes:

[dhalperi] Do not prune branches in Jenkins

--
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on beam5 (beam) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/beam.git # timeout=10
Fetching upstream changes from https://github.com/apache/beam.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/beam.git 
 > +refs/heads/*:refs/remotes/origin/* +refs/pull/*:refs/remotes/origin/pr/*
 > git rev-parse origin/master^{commit} # timeout=10
Checking out Revision 46ca02aba5f4453b636e1c3eb799fca66cef9cfc (origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 46ca02aba5f4453b636e1c3eb799fca66cef9cfc
 > git rev-list a19ceaf47cbf6a755ee523141c88bbc53f05 # timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
[EnvInject] - Executing scripts and injecting environment variables after the 
SCM step.
[EnvInject] - Injecting as environment variables the properties content 
SPARK_LOCAL_IP=127.0.0.1

[EnvInject] - Variables injected successfully.
[beam_PerformanceTests_Dataflow] $ /bin/bash -xe 
/tmp/hudson3701123732946005273.sh
+ rm -rf PerfKitBenchmarker
[beam_PerformanceTests_Dataflow] $ /bin/bash -xe 
/tmp/hudson8305364138761762006.sh
+ git clone https://github.com/GoogleCloudPlatform/PerfKitBenchmarker.git
Cloning into 'PerfKitBenchmarker'...
[beam_PerformanceTests_Dataflow] $ /bin/bash -xe 
/tmp/hudson4622022496640876743.sh
+ pip install --user -r PerfKitBenchmarker/requirements.txt
Requirement already satisfied (use --upgrade to upgrade): python-gflags==3.1.1 
in /home/jenkins/.local/lib/python2.7/site-packages (from -r 
PerfKitBenchmarker/requirements.txt (line 14))
Requirement already satisfied (use --upgrade to upgrade): jinja2>=2.7 in 
/usr/local/lib/python2.7/dist-packages (from -r 
PerfKitBenchmarker/requirements.txt (line 15))
Requirement already satisfied (use --upgrade to upgrade): setuptools in 
/usr/lib/python2.7/dist-packages (from -r PerfKitBenchmarker/requirements.txt 
(line 16))
Requirement already satisfied (use --upgrade to upgrade): 
colorlog[windows]==2.6.0 in /home/jenkins/.local/lib/python2.7/site-packages 
(from -r PerfKitBenchmarker/requirements.txt (line 17))
  Installing extra requirements: 'windows'
Requirement already satisfied (use --upgrade to upgrade): blinker>=1.3 in 
/home/jenkins/.local/lib/python2.7/site-packages (from -r 
PerfKitBenchmarker/requirements.txt (line 18))
Requirement already satisfied (use --upgrade to upgrade): futures>=3.0.3 in 
/home/jenkins/.local/lib/python2.7/site-packages (from -r 
PerfKitBenchmarker/requirements.txt (line 19))
Requirement already satisfied (use --upgrade to upgrade): PyYAML==3.11 in 
/home/jenkins/.local/lib/python2.7/site-packages (from -r 
PerfKitBenchmarker/requirements.txt (line 20))
Requirement already satisfied (use --upgrade to upgrade): pint>=0.7 in 
/home/jenkins/.local/lib/python2.7/site-packages (from -r 
PerfKitBenchmarker/requirements.txt (line 21))
Requirement already satisfied (use --upgrade to upgrade): numpy in 
/home/jenkins/.local/lib/python2.7/site-packages (from -r 
PerfKitBenchmarker/requirements.txt (line 22))
Requirement already satisfied (use --upgrade to upgrade): functools32 in 
/home/jenkins/.local/lib/python2.7/site-packages (from -r 
PerfKitBenchmarker/requirements.txt (line 23))
Requirement already satisfied (use --upgrade to upgrade): contextlib2>=0.5.1 in 
/home/jenkins/.local/lib/python2.7/site-packages (from -r 
PerfKitBenchmarker/requirements.txt (line 24))
Cleaning up...
[beam_PerformanceTests_Dataflow] $ /bin/bash -xe 
/tmp/hudson4157783115163909445.sh
+ python PerfKitBenchmarker/pkb.py --project=apache-beam-testing 
--dpb_log_level=INFO --maven_binary=/home/jenkins/tools/maven/latest/bin/mvn 
--bigquery_table=beam_performance.pkb_results --official=true 
--benchmarks=dpb_wordcount_benchmark 
--dpb_dataflow_staging_location=gs://temp-storage-for-perf-tests/staging 
--dpb_wordcount_input=dataflow-samples/shakespeare/kinglear.txt 
--config_override=dpb_wordcount_benchmark.dpb_service.service_type=dataflow
WARNING:root:File resource loader root perfkitbenchmarker/data/ycsb is not a 
directory.
2017-05-01 00:00:09,649 dd94e0d2 MainThread INFO Verbose logging to: 
/tmp/perfkitbenchmarker/runs/dd94e0d2/pkb.log
2017-05-01 00:00:09,649 dd94e0d2 MainThread INFO PerfKitBenchmarker 
version: v1.11.0-45-g6f31bb3
2017-05-01 00:00:09,650 dd94e0d2 MainThread INFO Flag values:
--maven_binary=/home/jenkins/tools/maven/latest/bin/mvn
--project=apache-beam-testing

[jira] [Commented] (BEAM-2130) Ensure options id is never null

2017-04-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-2130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990511#comment-15990511
 ] 

ASF GitHub Bot commented on BEAM-2130:
--

GitHub user lukecwik opened a pull request:

https://github.com/apache/beam/pull/2794

[BEAM-2130] Ensure the options id is never null.

Be sure to do all of the following to help us incorporate your contribution
quickly and easily:

 - [ ] Make sure the PR title is formatted like:
   `[BEAM-] Description of pull request`
 - [ ] Make sure tests pass via `mvn clean verify`. (Even better, enable
   Travis-CI on your fork and ensure the whole test matrix passes).
 - [ ] Replace `` in the title with the actual Jira issue
   number, if there is one.
 - [ ] If this contribution is large, please file an Apache
   [Individual Contributor License 
Agreement](https://www.apache.org/licenses/icla.pdf).

---


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lukecwik/incubator-beam options_id

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/beam/pull/2794.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2794


commit 43b8dabcd249a83d69a072898ee5c812ceeb03c7
Author: Lukasz Cwik 
Date:   2017-04-30T23:43:33Z

[BEAM-2130] Ensure the options id is never null.




> Ensure options id is never null
> ---
>
> Key: BEAM-2130
> URL: https://issues.apache.org/jira/browse/BEAM-2130
> Project: Beam
>  Issue Type: Bug
>  Components: sdk-java-core
>Reporter: Luke Cwik
>Assignee: Luke Cwik
>Priority: Minor
>
> Options id is a pipeline level unique token which can be used across the 
> entire pipeline for idempotency at the pipeline level.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] beam pull request #2794: [BEAM-2130] Ensure the options id is never null.

2017-04-30 Thread lukecwik
GitHub user lukecwik opened a pull request:

https://github.com/apache/beam/pull/2794

[BEAM-2130] Ensure the options id is never null.

Be sure to do all of the following to help us incorporate your contribution
quickly and easily:

 - [ ] Make sure the PR title is formatted like:
   `[BEAM-] Description of pull request`
 - [ ] Make sure tests pass via `mvn clean verify`. (Even better, enable
   Travis-CI on your fork and ensure the whole test matrix passes).
 - [ ] Replace `` in the title with the actual Jira issue
   number, if there is one.
 - [ ] If this contribution is large, please file an Apache
   [Individual Contributor License 
Agreement](https://www.apache.org/licenses/icla.pdf).

---


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lukecwik/incubator-beam options_id

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/beam/pull/2794.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2794


commit 43b8dabcd249a83d69a072898ee5c812ceeb03c7
Author: Lukasz Cwik 
Date:   2017-04-30T23:43:33Z

[BEAM-2130] Ensure the options id is never null.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (BEAM-2130) Ensure options id is never null

2017-04-30 Thread Luke Cwik (JIRA)
Luke Cwik created BEAM-2130:
---

 Summary: Ensure options id is never null
 Key: BEAM-2130
 URL: https://issues.apache.org/jira/browse/BEAM-2130
 Project: Beam
  Issue Type: Bug
  Components: sdk-java-core
Reporter: Luke Cwik
Assignee: Luke Cwik
Priority: Minor


Options id is a pipeline level unique token which can be used across the entire 
pipeline for idempotency at the pipeline level.



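
The issue above asks that the options id default to a unique, pipeline-level token rather than null. A minimal sketch of one way to guarantee that invariant — lazily installing a generated id on first access. The class and method names here are illustrative, not Beam's actual `PipelineOptions` internals:

```java
import java.util.UUID;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical holder: callers can never observe a null id because a fresh
// one is installed atomically the first time the id is requested.
class OptionsIdHolder {
    private final AtomicReference<String> id = new AtomicReference<>();

    // Returns the existing id, or atomically creates one if none is set yet.
    String getOrCreateId() {
        id.compareAndSet(null, UUID.randomUUID().toString());
        return id.get();
    }
}
```

Because the id is stable once created, repeated reads within the same pipeline return the same token, which is what makes it usable for pipeline-level idempotency.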


[jira] [Created] (BEAM-2129) KafkaIOTest.testUnboundedSourceMetrics is flaky

2017-04-30 Thread Daniel Halperin (JIRA)
Daniel Halperin created BEAM-2129:
-

 Summary: KafkaIOTest.testUnboundedSourceMetrics is flaky
 Key: BEAM-2129
 URL: https://issues.apache.org/jira/browse/BEAM-2129
 Project: Beam
  Issue Type: Bug
  Components: sdk-java-core, testing
Reporter: Daniel Halperin
Assignee: Aviem Zur


This has been pretty flaky in precommit and postcommit. Here's a recent 
postcommit failure.

https://builds.apache.org/view/Beam/job/beam_PostCommit_Java_MavenInstall/3544/org.apache.beam$beam-sdks-java-io-kafka/testReport/org.apache.beam.sdk.io.kafka/KafkaIOTest/testUnboundedSourceMetrics/





[jira] [Created] (BEAM-2128) PostCommit Java_MavenInstall broken

2017-04-30 Thread Daniel Halperin (JIRA)
Daniel Halperin created BEAM-2128:
-

 Summary: PostCommit Java_MavenInstall broken
 Key: BEAM-2128
 URL: https://issues.apache.org/jira/browse/BEAM-2128
 Project: Beam
  Issue Type: Bug
  Components: sdk-java-core, testing
Reporter: Daniel Halperin
Assignee: Kenneth Knowles


The test 
{{org.apache.beam.examples.cookbook.BigQueryTornadoesIT.testE2EBigQueryTornadoes}}

has been broken since PR #2666 was merged to master.

https://builds.apache.org/view/Beam/job/beam_PostCommit_Java_MavenInstall/3533/






[jira] [Commented] (BEAM-2123) Passing potential null pointer to encode() in StructuredCoder#structuralValue

2017-04-30 Thread Daniel Halperin (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-2123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990486#comment-15990486
 ] 

Daniel Halperin commented on BEAM-2123:
---

There is one in {{StructuredCoderTest}}: {{NullableBooleanCoder}}. I suspect 
{{NullableCoder}} could be turned into one of these, too.

> Passing potential null pointer to encode() in StructuredCoder#structuralValue
> -
>
> Key: BEAM-2123
> URL: https://issues.apache.org/jira/browse/BEAM-2123
> Project: Beam
>  Issue Type: Bug
>  Components: sdk-java-core
>Reporter: Ted Yu
>Assignee: Thomas Groh
>Priority: Minor
>
> {code}
>   public Object structuralValue(T value) {
> if (value != null && consistentWithEquals()) {
>   return value;
> } else {
>   try {
> ByteArrayOutputStream os = new ByteArrayOutputStream();
> encode(value, os, Context.OUTER);
> {code}
> If value is null, encode() would throw CoderException (I checked 
> ByteArrayCoder and KvCoder) which would be caught and converted to 
> IllegalArgumentException.
> Looks like structuralValue() can check for a null value directly and throw 
> CoderException. This would result in a clearer exception.



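
The comment above points at {{NullableBooleanCoder}} as an existing nullable coder. A minimal, dependency-free sketch of the marker-byte pattern such a coder typically uses — write one presence byte so that null has a well-defined encoding. This is illustrative only, not Beam's actual {{NullableCoder}}:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Marker-byte sketch: 0 means null; 1 means a UTF-8 payload follows.
class NullableStringCoder {
    static byte[] encode(String value) throws IOException {
        ByteArrayOutputStream os = new ByteArrayOutputStream();
        if (value == null) {
            os.write(0);
        } else {
            os.write(1);
            os.write(value.getBytes(StandardCharsets.UTF_8));
        }
        return os.toByteArray();
    }

    static String decode(byte[] bytes) throws IOException {
        ByteArrayInputStream is = new ByteArrayInputStream(bytes);
        if (is.read() == 0) {
            return null;
        }
        byte[] rest = new byte[bytes.length - 1];
        is.read(rest);
        return new String(rest, StandardCharsets.UTF_8);
    }
}
```

Note this simple form only works when the value is the outermost encoding; a nested coder would additionally need length-prefixing, which the sketch omits.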


[jira] [Commented] (BEAM-2123) Passing potential null pointer to encode() in StructuredCoder#structuralValue

2017-04-30 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-2123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990387#comment-15990387
 ] 

Ted Yu commented on BEAM-2123:
--

I am interested in seeing a counterexample where a null value produces 
meaningful output.

> Passing potential null pointer to encode() in StructuredCoder#structuralValue
> -
>
> Key: BEAM-2123
> URL: https://issues.apache.org/jira/browse/BEAM-2123
> Project: Beam
>  Issue Type: Bug
>  Components: sdk-java-core
>Reporter: Ted Yu
>Assignee: Thomas Groh
>Priority: Minor
>
> {code}
>   public Object structuralValue(T value) {
> if (value != null && consistentWithEquals()) {
>   return value;
> } else {
>   try {
> ByteArrayOutputStream os = new ByteArrayOutputStream();
> encode(value, os, Context.OUTER);
> {code}
> If value is null, encode() would throw CoderException (I checked 
> ByteArrayCoder and KvCoder) which would be caught and converted to 
> IllegalArgumentException.
> Looks like structuralValue() can check for a null value directly and throw 
> CoderException. This would result in a clearer exception.



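
A sketch of the change the issue proposes: reject null up front in {{structuralValue}} with a coder-level exception, instead of letting {{encode()}} fail and be wrapped in an {{IllegalArgumentException}}. The classes below are simplified stand-ins, not Beam's real {{StructuredCoder}} (which, for one thing, does not declare a checked exception here):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

// Stand-in for Beam's CoderException.
class SketchCoderException extends IOException {
    SketchCoderException(String msg) { super(msg); }
}

abstract class SketchCoder<T> {
    abstract void encode(T value, ByteArrayOutputStream os) throws SketchCoderException;

    boolean consistentWithEquals() { return false; }

    // Proposed shape: fail fast on null with a clear coder-level message,
    // then fall back to the encoded bytes as the structural value.
    Object structuralValue(T value) throws SketchCoderException {
        if (value == null) {
            throw new SketchCoderException("cannot compute structural value of null");
        }
        if (consistentWithEquals()) {
            return value;
        }
        ByteArrayOutputStream os = new ByteArrayOutputStream();
        encode(value, os);
        return os.toByteArray();
    }
}
```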


Jenkins build is back to normal : beam_PostCommit_Java_ValidatesRunner_Dataflow #2986

2017-04-30 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PostCommit_Java_ValidatesRunner_Dataflow #2985

2017-04-30 Thread Apache Jenkins Server
See 


Changes:

[iemejia] Fix hamcrest-core version in parent pom

--
[...truncated 5.48 MB...]
[INFO] 2017-04-30T20:36:12.430Z: (82b31b2de7b1e8db): Fusing unzipped copy of 
PAssert$290/GroupGlobally/GroupDummyAndContents/Write, through flatten , into 
producer PAssert$290/GroupGlobally/GroupDummyAndContents/Reify
[INFO] 2017-04-30T20:36:12.432Z: (82b31b2de7b1e889): Fusing consumer 
PAssert$290/GroupGlobally/GroupDummyAndContents/Write into 
PAssert$290/GroupGlobally/GroupDummyAndContents/Reify
[INFO] 2017-04-30T20:36:12.434Z: (82b31b2de7b1e837): Fusing consumer 
PAssert$290/RunChecks into PAssert$290/GetPane/Map
[INFO] 2017-04-30T20:36:12.437Z: (82b31b2de7b1e7e5): Fusing consumer 
PAssert$290/GroupGlobally/GroupDummyAndContents/GroupByWindow into 
PAssert$290/GroupGlobally/GroupDummyAndContents/Read
[INFO] 2017-04-30T20:36:12.439Z: (82b31b2de7b1e793): Fusing consumer 
PAssert$290/GroupGlobally/Values/Values/Map into 
PAssert$290/GroupGlobally/GroupDummyAndContents/GroupByWindow
[INFO] 2017-04-30T20:36:12.442Z: (82b31b2de7b1e741): Fusing consumer 
PAssert$290/GroupGlobally/ParDo(Concat) into 
PAssert$290/GroupGlobally/Values/Values/Map
[INFO] 2017-04-30T20:36:12.444Z: (82b31b2de7b1e6ef): Fusing consumer TestDoFn 
into Create.Values/Read(CreateSource)
[INFO] 2017-04-30T20:36:12.446Z: (82b31b2de7b1e69d): Fusing consumer 
PAssert$290/GroupGlobally/GatherAllOutputs/GroupByKey/Write into 
PAssert$290/GroupGlobally/GatherAllOutputs/GroupByKey/Reify
[INFO] 2017-04-30T20:36:12.448Z: (82b31b2de7b1e64b): Fusing consumer 
PAssert$290/GroupGlobally/KeyForDummy/AddKeys/Map into 
PAssert$290/GroupGlobally/RewindowActuals/Window.Assign
[INFO] 2017-04-30T20:36:12.450Z: (82b31b2de7b1e5f9): Fusing consumer 
PAssert$290/GroupGlobally/GatherAllOutputs/ParDo(ReifyTimestampsAndWindows) 
into PAssert$290/GroupGlobally/Window.Into()/Window.Assign
[INFO] 2017-04-30T20:36:12.453Z: (82b31b2de7b1e5a7): Fusing consumer 
PAssert$290/GroupGlobally/GatherAllOutputs/GroupByKey/GroupByWindow into 
PAssert$290/GroupGlobally/GatherAllOutputs/GroupByKey/Read
[INFO] 2017-04-30T20:36:12.455Z: (82b31b2de7b1e555): Fusing consumer 
PAssert$290/GroupGlobally/RewindowActuals/Window.Assign into 
PAssert$290/GroupGlobally/GatherAllOutputs/Values/Values/Map
[INFO] 2017-04-30T20:36:12.457Z: (82b31b2de7b1e503): Fusing consumer 
PAssert$290/GroupGlobally/GatherAllOutputs/WithKeys/AddKeys/Map into 
PAssert$290/GroupGlobally/GatherAllOutputs/ParDo(ReifyTimestampsAndWindows)
[INFO] 2017-04-30T20:36:12.459Z: (82b31b2de7b1e4b1): Fusing consumer 
PAssert$290/GroupGlobally/GatherAllOutputs/GroupByKey/Reify into 
PAssert$290/GroupGlobally/GatherAllOutputs/Window.Into()/Window.Assign
[INFO] 2017-04-30T20:36:12.462Z: (82b31b2de7b1e45f): Fusing consumer 
PAssert$290/GroupGlobally/Window.Into()/Window.Assign into TestDoFn
[INFO] 2017-04-30T20:36:12.464Z: (82b31b2de7b1e40d): Fusing consumer 
PAssert$290/GroupGlobally/GatherAllOutputs/Window.Into()/Window.Assign into 
PAssert$290/GroupGlobally/GatherAllOutputs/WithKeys/AddKeys/Map
[INFO] 2017-04-30T20:36:12.467Z: (82b31b2de7b1e3bb): Fusing consumer 
PAssert$290/GroupGlobally/GatherAllOutputs/Values/Values/Map into 
PAssert$290/GroupGlobally/GatherAllOutputs/GroupByKey/GroupByWindow
[INFO] 2017-04-30T20:36:12.540Z: (82b31b2de7b1e8d7): Adding StepResource setup 
and teardown to workflow graph.
[INFO] 2017-04-30T20:36:12.584Z: (6c75e13846c4b70f): Executing operation 
PAssert$290/GroupGlobally/GatherAllOutputs/GroupByKey/Create
[INFO] 2017-04-30T20:36:12.789Z: (50d3589702707755): Starting 1 workers...
[INFO] 2017-04-30T20:36:12.812Z: (6c75e13846c4bea8): Executing operation 
Create.Values/Read(CreateSource)+TestDoFn+PAssert$290/GroupGlobally/Window.Into()/Window.Assign+PAssert$290/GroupGlobally/GatherAllOutputs/ParDo(ReifyTimestampsAndWindows)+PAssert$290/GroupGlobally/GatherAllOutputs/WithKeys/AddKeys/Map+PAssert$290/GroupGlobally/GatherAllOutputs/Window.Into()/Window.Assign+PAssert$290/GroupGlobally/GatherAllOutputs/GroupByKey/Reify+PAssert$290/GroupGlobally/GatherAllOutputs/GroupByKey/Write
[INFO] 2017-04-30T20:36:13.985Z: (8c059abdbba9130): Expanding GroupByKey 
operations into optimizable parts.
[INFO] 2017-04-30T20:36:13.989Z: (8c059abdbba9132): Lifting 
ValueCombiningMappingFns into MergeBucketsMappingFns
[INFO] 2017-04-30T20:36:14.009Z: (8c059abdbba913c): Fusing adjacent ParDo, 
Read, Write, and Flatten operations
[INFO] 2017-04-30T20:36:14.012Z: (8c059abdbba913e): Elided trivial flatten 
[INFO] 2017-04-30T20:36:14.014Z: (8c059abdbba9140): Elided trivial flatten 
[INFO] 2017-04-30T20:36:14.017Z: (8c059abdbba9142): Elided trivial flatten 
[INFO] 2017-04-30T20:36:14.027Z: (8c059abdbba9148): Unzipping flatten s30 for 
input s25.25
[INFO] 2017-04-30T20:36:14.030Z: (8c059abdbba914a): Fusing unzipped copy of 
PAssert$291/GroupGlobally/GroupDummyAndContents/Reify, through 

[jira] [Commented] (BEAM-1324) Able to define the thread count via PipelineOptions

2017-04-30 Thread Borisa Zivkovic (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-1324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990366#comment-15990366
 ] 

Borisa Zivkovic commented on BEAM-1324:
---

Is this different than targetParallelism in DirectOptions?

> Able to define the thread count via PipelineOptions
> ---
>
> Key: BEAM-1324
> URL: https://issues.apache.org/jira/browse/BEAM-1324
> Project: Beam
>  Issue Type: Improvement
>  Components: runner-direct
>Reporter: Jean-Baptiste Onofré
>  Labels: newbie, starter
>
> Today, the direct runner can create number of threads (to simulate unbounded 
> and parallel) as he wants. It would be great to be able to configure a max 
> number of threads via the {{PipelineOptions}}.



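
The request above is for a user-configurable cap on the direct runner's worker threads. A dependency-free sketch of the runner-side half of that change — reading an optional cap and falling back to the host's core count. The class and option names are illustrative, not the DirectRunner's real code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative: a runner consuming a hypothetical maxThreads pipeline option.
class DirectRunnerThreads {
    // A null or non-positive option means "unset"; default to available cores.
    static ExecutorService newWorkerPool(Integer maxThreadsOption) {
        int threads = (maxThreadsOption != null && maxThreadsOption > 0)
            ? maxThreadsOption
            : Runtime.getRuntime().availableProcessors();
        return Executors.newFixedThreadPool(threads);
    }
}
```

In Beam itself the option would live on a {{PipelineOptions}} subinterface with a default value, which is also where it would be compared against the existing {{targetParallelism}} raised in the comment above.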


[jira] [Commented] (BEAM-1980) Seeming deadlock using Apex with relatively small data

2017-04-30 Thread Kenneth Knowles (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-1980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990363#comment-15990363
 ] 

Kenneth Knowles commented on BEAM-1980:
---

Let's try to repro and close this out, now, then.

> Seeming deadlock using Apex with relatively small data
> --
>
> Key: BEAM-1980
> URL: https://issues.apache.org/jira/browse/BEAM-1980
> Project: Beam
>  Issue Type: Bug
>  Components: runner-apex
>Reporter: Daniel Halperin
>Assignee: Thomas Weise
> Fix For: First stable release
>
>
> I'm running the "beam portability demo" at 
> https://github.com/dhalperi/beam-portability-demo/tree/apex
> Made a very small input file:
> {code}
> gsutil cat gs://apache-beam-demo/data2/small-game.csv | head -n 10 > 
> tiny.csv
> {code}
> Ran the job in embedded mode using an Apex fat-jar from the pom in that 
> branch (and adding in {{slf4j-jdk14.jar}} for debugging info):
> {code}
> java -cp 
> ~/.m2/repository/org/slf4j/slf4j-jdk14/1.7.14/slf4j-jdk14-1.7.14.jar:target/portability-demo-bundled-apex.jar
>  demo.HourlyTeamScore --runner=ApexRunner 
> --outputPrefix=gs://clouddfe-dhalperi/output/apex --input=tiny.csv
> {code}
> A good run takes O(25 seconds):
> {code}
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/dhalperi/.m2/repository/org/slf4j/slf4j-jdk14/1.7.14/slf4j-jdk14-1.7.14.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/dhalperi/beam-portability-demo/target/portability-demo-bundled-apex.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]
> log4j:WARN No appenders could be found for logger 
> (org.apache.commons.beanutils.converters.BooleanConverter).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> Apr 14, 2017 1:20:55 PM com.datatorrent.common.util.AsyncFSStorageAgent save
> INFO: using 
> /var/folders/7r/s0gg2qb11jz4g4pcb8n15gkc009j1y/T/chkp8074838277485202831 as 
> the basepath for checkpointing.
> Apr 14, 2017 1:20:56 PM com.datatorrent.bufferserver.storage.DiskStorage 
> 
> INFO: using /var/folders/7r/s0gg2qb11jz4g4pcb8n15gkc009j1y/T as the basepath 
> for spooling.
> Apr 14, 2017 1:20:56 PM com.datatorrent.bufferserver.server.Server registered
> INFO: Server started listening at /0:0:0:0:0:0:0:0:61087
> Apr 14, 2017 1:20:56 PM com.datatorrent.stram.StramLocalCluster 
> INFO: Buffer server started: localhost:61087
> Apr 14, 2017 1:20:56 PM com.datatorrent.stram.Journal write
> WARNING: Journal output stream is null. Skipping write to the WAL.
> Apr 14, 2017 1:20:56 PM 
> com.datatorrent.stram.StramLocalCluster$LocalStreamingContainerLauncher run
> INFO: Started container container-0
> Apr 14, 2017 1:20:56 PM com.datatorrent.stram.Journal write
> WARNING: Journal output stream is null. Skipping write to the WAL.
> Apr 14, 2017 1:20:56 PM 
> com.datatorrent.stram.StramLocalCluster$LocalStreamingContainerLauncher run
> INFO: Started container container-1
> Apr 14, 2017 1:20:56 PM com.datatorrent.stram.Journal write
> WARNING: Journal output stream is null. Skipping write to the WAL.
> Apr 14, 2017 1:20:56 PM 
> com.datatorrent.stram.StramLocalCluster$LocalStreamingContainerLauncher run
> INFO: Started container container-2
> Apr 14, 2017 1:20:56 PM com.datatorrent.stram.Journal write
> WARNING: Journal output stream is null. Skipping write to the WAL.
> Apr 14, 2017 1:20:56 PM 
> com.datatorrent.stram.StramLocalCluster$LocalStreamingContainerLauncher run
> INFO: Started container container-3
> Apr 14, 2017 1:20:56 PM com.datatorrent.stram.Journal write
> WARNING: Journal output stream is null. Skipping write to the WAL.
> Apr 14, 2017 1:20:56 PM 
> com.datatorrent.stram.StramLocalCluster$LocalStreamingContainerLauncher run
> INFO: Started container container-4
> Apr 14, 2017 1:20:56 PM com.datatorrent.stram.Journal write
> WARNING: Journal output stream is null. Skipping write to the WAL.
> Apr 14, 2017 1:20:56 PM 
> com.datatorrent.stram.StramLocalCluster$LocalStreamingContainerLauncher run
> INFO: Started container container-5
> Apr 14, 2017 1:20:56 PM com.datatorrent.stram.Journal write
> WARNING: Journal output stream is null. Skipping write to the WAL.
> Apr 14, 2017 1:20:56 PM 
> com.datatorrent.stram.StramLocalCluster$UmbilicalProtocolLocalImpl log
> INFO: container-2 msg: [container-2] Entering heartbeat loop..
> Apr 14, 2017 1:20:56 PM 
> com.datatorrent.stram.StramLocalCluster$UmbilicalProtocolLocalImpl log
> INFO: container-1 msg: [container-1] Entering heartbeat loop..
> Apr 14, 2017 1:20:56 PM 
> 

Jenkins build is back to normal : beam_PostCommit_Java_ValidatesRunner_Apex #1285

2017-04-30 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: beam_PostCommit_Java_MavenInstall #3546

2017-04-30 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PostCommit_Java_ValidatesRunner_Apex #1284

2017-04-30 Thread Apache Jenkins Server
See 


--
[...truncated 829.08 KB...]
2017-04-30T18:52:42.856 [INFO] Excluding com.sun.jersey:jersey-core:jar:1.9 
from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding 
org.apache.apex:malhar-library:jar:3.4.0 from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding 
org.eclipse.jetty:jetty-servlet:jar:8.1.10.v20130312 from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding 
org.eclipse.jetty:jetty-security:jar:8.1.10.v20130312 from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding 
org.eclipse.jetty:jetty-server:jar:8.1.10.v20130312 from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding 
org.eclipse.jetty:jetty-continuation:jar:8.1.10.v20130312 from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding com.sun.mail:javax.mail:jar:1.5.0 from 
the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding javax.activation:activation:jar:1.1 
from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding com.sun.jersey:jersey-client:jar:1.9 
from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding javax.jms:jms-api:jar:1.1-rev-1 from 
the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding 
org.apache.activemq:activemq-client:jar:5.8.0 from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding 
org.apache.geronimo.specs:geronimo-jms_1.1_spec:jar:1.1.1 from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding org.fusesource.hawtbuf:hawtbuf:jar:1.9 
from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding 
org.apache.geronimo.specs:geronimo-j2ee-management_1.1_spec:jar:1.0.1 from the 
shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding org.slf4j:slf4j-api:jar:1.7.14 from 
the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding 
com.github.tony19:named-regexp:jar:0.2.3 from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding 
org.codehaus.janino:commons-compiler:jar:2.7.8 from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding 
org.eclipse.jetty:jetty-websocket:jar:8.1.10.v20130312 from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding 
org.eclipse.jetty:jetty-util:jar:8.1.10.v20130312 from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding 
org.eclipse.jetty:jetty-io:jar:8.1.10.v20130312 from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding 
org.eclipse.jetty:jetty-http:jar:8.1.10.v20130312 from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding 
commons-beanutils:commons-beanutils:jar:1.8.3 from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding 
commons-logging:commons-logging:jar:1.1.1 from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding joda-time:joda-time:jar:2.4 from the 
shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding it.unimi.dsi:fastutil:jar:7.0.6 from 
the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding 
org.apache.apex:apex-shaded-ning19:jar:1.0.0 from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding 
com.fasterxml.jackson.core:jackson-databind:jar:2.8.8 from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding 
com.fasterxml.jackson.core:jackson-annotations:jar:2.8.8 from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding 
com.fasterxml.jackson.core:jackson-core:jar:2.8.8 from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding org.apache.apex:apex-engine:jar:3.5.0 
from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding org.apache.bval:bval-jsr303:jar:0.5 
from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding org.apache.bval:bval-core:jar:0.5 from 
the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding 
org.apache.apex:apex-bufferserver:jar:3.5.0 from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding 
org.apache.httpcomponents:httpclient:jar:4.3.5 from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding 
org.apache.httpcomponents:httpcore:jar:4.3.2 from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding 
com.sun.jersey.contribs:jersey-apache-client4:jar:1.9 from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding 
org.apache.hadoop:hadoop-yarn-client:jar:2.6.0 from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding commons-lang:commons-lang:jar:2.6 from 
the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding commons-cli:commons-cli:jar:1.2 from 
the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding log4j:log4j:jar:1.2.17 from the shaded 
jar.
2017-04-30T18:52:42.856 [INFO] Excluding 
org.apache.hadoop:hadoop-annotations:jar:2.6.0 from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding 
org.apache.hadoop:hadoop-yarn-api:jar:2.6.0 from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding 
org.apache.hadoop:hadoop-yarn-common:jar:2.6.0 from the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding javax.xml.bind:jaxb-api:jar:2.2.2 from 
the shaded jar.
2017-04-30T18:52:42.856 [INFO] Excluding javax.xml.stream:stax-api:jar:1.0-2 
from the shaded jar.
2017-04-30T18:52:42.856 

[GitHub] beam pull request #2785: Do not prune branches in Jenkins

2017-04-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/beam/pull/2785




[1/2] beam git commit: Do not prune branches in Jenkins

2017-04-30 Thread dhalperi
Repository: beam
Updated Branches:
  refs/heads/master a19ceaf47 -> 46ca02aba


Do not prune branches in Jenkins

It seems that pruning in Jenkins has bugs that result in
branches being pruned when they should not be, causing
log spam and build delays.


Project: http://git-wip-us.apache.org/repos/asf/beam/repo
Commit: http://git-wip-us.apache.org/repos/asf/beam/commit/4fd44500
Tree: http://git-wip-us.apache.org/repos/asf/beam/tree/4fd44500
Diff: http://git-wip-us.apache.org/repos/asf/beam/diff/4fd44500

Branch: refs/heads/master
Commit: 4fd4450042282a9c51d528d1f870ecdf392bc13a
Parents: a19ceaf
Author: Kenneth Knowles 
Authored: Sat Apr 29 21:25:03 2017 -0700
Committer: Dan Halperin 
Committed: Sun Apr 30 12:11:47 2017 -0700

--
 .test-infra/jenkins/common_job_properties.groovy | 1 -
 1 file changed, 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/beam/blob/4fd44500/.test-infra/jenkins/common_job_properties.groovy
--
diff --git a/.test-infra/jenkins/common_job_properties.groovy 
b/.test-infra/jenkins/common_job_properties.groovy
index ee10281..56eb0de 100644
--- a/.test-infra/jenkins/common_job_properties.groovy
+++ b/.test-infra/jenkins/common_job_properties.groovy
@@ -69,7 +69,6 @@ class common_job_properties {
 branch('${sha1}')
 extensions {
   cleanAfterCheckout()
-  pruneBranches()
 }
   }
 }



[2/2] beam git commit: This closes #2785

2017-04-30 Thread dhalperi
This closes #2785


Project: http://git-wip-us.apache.org/repos/asf/beam/repo
Commit: http://git-wip-us.apache.org/repos/asf/beam/commit/46ca02ab
Tree: http://git-wip-us.apache.org/repos/asf/beam/tree/46ca02ab
Diff: http://git-wip-us.apache.org/repos/asf/beam/diff/46ca02ab

Branch: refs/heads/master
Commit: 46ca02aba5f4453b636e1c3eb799fca66cef9cfc
Parents: a19ceaf 4fd4450
Author: Dan Halperin 
Authored: Sun Apr 30 12:11:51 2017 -0700
Committer: Dan Halperin 
Committed: Sun Apr 30 12:11:51 2017 -0700

--
 .test-infra/jenkins/common_job_properties.groovy | 1 -
 1 file changed, 1 deletion(-)
--




Jenkins build is still unstable: beam_PostCommit_Java_MavenInstall #3543

2017-04-30 Thread Apache Jenkins Server
See 




[jira] [Commented] (BEAM-59) Switch from IOChannelFactory to FileSystems

2017-04-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-59?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990338#comment-15990338
 ] 

ASF GitHub Bot commented on BEAM-59:


GitHub user dhalperi opened a pull request:

https://github.com/apache/beam/pull/2793

[BEAM-59] Switch mimeType from mutable protected field to constructor

Protected mutable fields are a terrible design pattern, and this
information is known at construction time.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dhalperi/beam mimetype

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/beam/pull/2793.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2793


commit 7d8585a1a730443a4f787b6ec14f29749d8d69c8
Author: Dan Halperin 
Date:   2017-04-30T18:48:12Z

[BEAM-59] Switch mimeType from mutable protected field to constructor

Protected mutable fields are a terrible design pattern




> Switch from IOChannelFactory to FileSystems
> ---
>
> Key: BEAM-59
> URL: https://issues.apache.org/jira/browse/BEAM-59
> Project: Beam
>  Issue Type: New Feature
>  Components: sdk-java-core, sdk-java-gcp
>Reporter: Daniel Halperin
>Assignee: Daniel Halperin
> Fix For: First stable release
>
>
> Right now, FileBasedSource and FileBasedSink communication is mediated by 
> IOChannelFactory. There are a number of issues:
> * Global configuration -- e.g., all 'gs://' URIs use the same credentials. 
> This should be per-source/per-sink/etc.
> * Supported APIs -- currently IOChannelFactory is in the "non-public API" 
> util package and subject to change. We need users to be able to add new 
> backends ('s3://', 'hdfs://', etc.) directly, without fear that they will be 
> broken.
> * Per-backend features: e.g., creating buckets in GCS/s3, setting expiration 
> time, etc.
> Updates:
> Design docs posted on dev@ list:
> Part 1: IOChannelFactory Redesign: 
> https://docs.google.com/document/d/11TdPyZ9_zmjokhNWM3Id-XJsVG3qel2lhdKTknmZ_7M/edit#
> Part 2: Configurable BeamFileSystem:
> https://docs.google.com/document/d/1-7vo9nLRsEEzDGnb562PuL4q9mUiq_ZVpCAiyyJw8p8/edit#heading=h.p3gc3colc2cs



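
The design the issue describes replaces one global IOChannelFactory configuration with per-backend filesystems registered by URI scheme. A minimal sketch of that registry idea — names are illustrative, not Beam's actual {{FileSystems}} API:

```java
import java.net.URI;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Each backend ("gs", "s3", "hdfs", ...) registers itself and can carry its
// own configuration, instead of all schemes sharing global state.
interface SketchFileSystem {
    String scheme();
}

class SketchFileSystems {
    private static final Map<String, SketchFileSystem> REGISTRY = new ConcurrentHashMap<>();

    static void register(SketchFileSystem fs) {
        REGISTRY.put(fs.scheme(), fs);
    }

    // Dispatch on the path's scheme; unqualified paths fall back to "file".
    static SketchFileSystem forPath(String path) {
        String scheme = URI.create(path).getScheme();
        SketchFileSystem fs = REGISTRY.get(scheme == null ? "file" : scheme);
        if (fs == null) {
            throw new IllegalArgumentException("No filesystem registered for scheme: " + scheme);
        }
        return fs;
    }
}
```

This also addresses the extensibility point in the issue: users can add new backends by registering a scheme, without touching a non-public util package.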


[GitHub] beam pull request #2793: [BEAM-59] Switch mimeType from mutable protected fi...

2017-04-30 Thread dhalperi
GitHub user dhalperi opened a pull request:

https://github.com/apache/beam/pull/2793

[BEAM-59] Switch mimeType from mutable protected field to constructor

Protected mutable fields are a terrible design pattern, and this
information is known at construction time.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dhalperi/beam mimetype

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/beam/pull/2793.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2793


commit 7d8585a1a730443a4f787b6ec14f29749d8d69c8
Author: Dan Halperin 
Date:   2017-04-30T18:48:12Z

[BEAM-59] Switch mimeType from mutable protected field to constructor

Protected mutable fields are a terrible design pattern






Jenkins build is unstable: beam_PostCommit_Java_MavenInstall #3544

2017-04-30 Thread Apache Jenkins Server
See 




[jira] [Closed] (BEAM-2114) KafkaIO broken with CoderException

2017-04-30 Thread Eugene Kirpichov (JIRA)

 [ 
https://issues.apache.org/jira/browse/BEAM-2114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Kirpichov closed BEAM-2114.
--
Resolution: Fixed

> KafkaIO broken with CoderException
> --
>
> Key: BEAM-2114
> URL: https://issues.apache.org/jira/browse/BEAM-2114
> Project: Beam
>  Issue Type: Bug
>  Components: sdk-java-extensions
>Reporter: Devon Meunier
>Assignee: Devon Meunier
> Fix For: First stable release
>
>
> For a KafkaIO.Read I simply replaced {{withKeyCoder}} and 
> {{withValueCoder}} with {{withKeyDeserializer}} and {{withValueDeserializer}} 
> using {{StringDeserializer.class}} on Dataflow, and I receive the following 
> stack trace:
> {code}
> java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:293)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: 
> org.apache.beam.sdk.coders.CoderException: cannot encode a null String
>   at 
> org.apache.beam.runners.direct.ImmutabilityCheckingBundleFactory$ImmutabilityEnforcingBundle.add(ImmutabilityCheckingBundleFactory.java:115)
>   at 
> org.apache.beam.runners.direct.UnboundedReadEvaluatorFactory$UnboundedReadEvaluator.processElement(UnboundedReadEvaluatorFactory.java:136)
>   at 
> org.apache.beam.runners.direct.TransformExecutor.processElements(TransformExecutor.java:139)
>   at 
> org.apache.beam.runners.direct.TransformExecutor.run(TransformExecutor.java:107)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   ... 1 more
> Caused by: org.apache.beam.sdk.coders.CoderException: cannot encode a null 
> String
>   at 
> org.apache.beam.sdk.coders.StringUtf8Coder.encode(StringUtf8Coder.java:75)
>   at 
> org.apache.beam.sdk.coders.StringUtf8Coder.encode(StringUtf8Coder.java:41)
>   at org.apache.beam.sdk.coders.KvCoder.encode(KvCoder.java:88)
>   at 
> org.apache.beam.sdk.io.kafka.KafkaRecordCoder.encode(KafkaRecordCoder.java:60)
>   at 
> org.apache.beam.sdk.io.kafka.KafkaRecordCoder.encode(KafkaRecordCoder.java:36)
>   at 
> org.apache.beam.sdk.util.CoderUtils.encodeToSafeStream(CoderUtils.java:122)
>   at 
> org.apache.beam.sdk.util.CoderUtils.encodeToByteArray(CoderUtils.java:106)
>   at 
> org.apache.beam.sdk.util.CoderUtils.encodeToByteArray(CoderUtils.java:91)
>   at 
> org.apache.beam.sdk.util.MutationDetectors$CodedValueMutationDetector.(MutationDetectors.java:106)
>   at 
> org.apache.beam.sdk.util.MutationDetectors.forValueWithCoder(MutationDetectors.java:44)
>   at 
> org.apache.beam.runners.direct.ImmutabilityCheckingBundleFactory$ImmutabilityEnforcingBundle.add(ImmutabilityCheckingBundleFactory.java:113)
>   ... 8 more
> {code}
> attempting to use {{readWithCoders(StringUtf8Coder.of(), 
> StringUtf8Coder.of())}} instead yields:
> {code}
> java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:293)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: 
> org.apache.beam.sdk.transforms.display.DisplayData$InternalBuilder$PopulateDisplayDataException:
>  Error while populating display data for component: 
> org.apache.beam.sdk.io.kafka.KafkaIO$TypedWithoutMetadata
>   at 
> org.apache.beam.sdk.transforms.display.DisplayData$InternalBuilder.include(DisplayData.java:801)
>   at 
> org.apache.beam.sdk.transforms.display.DisplayData$InternalBuilder.access$100(DisplayData.java:733)
>   at 
> org.apache.beam.sdk.transforms.display.DisplayData.from(DisplayData.java:81)
>   at 
> org.apache.beam.runners.direct.DisplayDataValidator.evaluateDisplayData(DisplayDataValidator.java:47)
>   at 
> org.apache.beam.runners.direct.DisplayDataValidator.access$100(DisplayDataValidator.java:29)
>   at 
> 

Build failed in Jenkins: beam_PerformanceTests_JDBC #162

2017-04-30 Thread Apache Jenkins Server
See 


--
[...truncated 241.57 KB...]
 x [deleted] (none) -> origin/pr/942/head
 x [deleted] (none) -> origin/pr/942/merge
 x [deleted] (none) -> origin/pr/943/head
 x [deleted] (none) -> origin/pr/943/merge
 x [deleted] (none) -> origin/pr/944/head
 x [deleted] (none) -> origin/pr/945/head
 x [deleted] (none) -> origin/pr/945/merge
 x [deleted] (none) -> origin/pr/946/head
 x [deleted] (none) -> origin/pr/946/merge
 x [deleted] (none) -> origin/pr/947/head
 x [deleted] (none) -> origin/pr/947/merge
 x [deleted] (none) -> origin/pr/948/head
 x [deleted] (none) -> origin/pr/948/merge
 x [deleted] (none) -> origin/pr/949/head
 x [deleted] (none) -> origin/pr/949/merge
 x [deleted] (none) -> origin/pr/95/head
 x [deleted] (none) -> origin/pr/95/merge
 x [deleted] (none) -> origin/pr/950/head
 x [deleted] (none) -> origin/pr/951/head
 x [deleted] (none) -> origin/pr/951/merge
 x [deleted] (none) -> origin/pr/952/head
 x [deleted] (none) -> origin/pr/952/merge
 x [deleted] (none) -> origin/pr/953/head
 x [deleted] (none) -> origin/pr/954/head
 x [deleted] (none) -> origin/pr/954/merge
 x [deleted] (none) -> origin/pr/955/head
 x [deleted] (none) -> origin/pr/955/merge
 x [deleted] (none) -> origin/pr/956/head
 x [deleted] (none) -> origin/pr/957/head
 x [deleted] (none) -> origin/pr/958/head
 x [deleted] (none) -> origin/pr/959/head
 x [deleted] (none) -> origin/pr/959/merge
 x [deleted] (none) -> origin/pr/96/head
 x [deleted] (none) -> origin/pr/96/merge
 x [deleted] (none) -> origin/pr/960/head
 x [deleted] (none) -> origin/pr/960/merge
 x [deleted] (none) -> origin/pr/961/head
 x [deleted] (none) -> origin/pr/962/head
 x [deleted] (none) -> origin/pr/962/merge
 x [deleted] (none) -> origin/pr/963/head
 x [deleted] (none) -> origin/pr/963/merge
 x [deleted] (none) -> origin/pr/964/head
 x [deleted] (none) -> origin/pr/965/head
 x [deleted] (none) -> origin/pr/965/merge
 x [deleted] (none) -> origin/pr/966/head
 x [deleted] (none) -> origin/pr/967/head
 x [deleted] (none) -> origin/pr/967/merge
 x [deleted] (none) -> origin/pr/968/head
 x [deleted] (none) -> origin/pr/968/merge
 x [deleted] (none) -> origin/pr/969/head
 x [deleted] (none) -> origin/pr/969/merge
 x [deleted] (none) -> origin/pr/97/head
 x [deleted] (none) -> origin/pr/97/merge
 x [deleted] (none) -> origin/pr/970/head
 x [deleted] (none) -> origin/pr/970/merge
 x [deleted] (none) -> origin/pr/971/head
 x [deleted] (none) -> origin/pr/971/merge
 x [deleted] (none) -> origin/pr/972/head
 x [deleted] (none) -> origin/pr/973/head
 x [deleted] (none) -> origin/pr/974/head
 x [deleted] (none) -> origin/pr/974/merge
 x [deleted] (none) -> origin/pr/975/head
 x [deleted] (none) -> origin/pr/975/merge
 x [deleted] (none) -> origin/pr/976/head
 x [deleted] (none) -> origin/pr/976/merge
 x [deleted] (none) -> origin/pr/977/head
 x [deleted] (none) -> origin/pr/977/merge
 x [deleted] (none) -> origin/pr/978/head
 x [deleted] (none) -> origin/pr/978/merge
 x [deleted] (none) -> origin/pr/979/head
 x [deleted] (none) -> origin/pr/979/merge
 x [deleted] (none) -> origin/pr/98/head
 x [deleted] (none) -> origin/pr/980/head
 x [deleted] (none) -> origin/pr/980/merge
 x [deleted] (none) -> origin/pr/981/head
 x [deleted] (none) -> origin/pr/982/head
 x [deleted] (none) -> origin/pr/982/merge
 x [deleted] (none) -> origin/pr/983/head
 x [deleted] (none) -> origin/pr/983/merge
 x [deleted] (none) -> origin/pr/984/head
 x [deleted] (none) -> origin/pr/984/merge
 x [deleted] (none) -> origin/pr/985/head
 x [deleted] (none) -> origin/pr/985/merge
 x [deleted] (none) -> origin/pr/986/head
 x [deleted] (none) -> origin/pr/986/merge
 x [deleted] (none) -> origin/pr/987/head
 x [deleted] (none) -> origin/pr/988/head
 x [deleted] (none) -> origin/pr/988/merge
 x [deleted] (none) -> origin/pr/989/head
 x 

Build failed in Jenkins: beam_PerformanceTests_Dataflow #356

2017-04-30 Thread Apache Jenkins Server
See 


Changes:

[thw] BEAM-2022 fix triggering for processing time timers

[kirpichov] [BEAM-2114] Fixed display data for Kafka read/write with coders

[kirpichov] [BEAM-2114] Throw instead of warning when KafkaIO cannot infer coder

[kirpichov] [BEAM-2114] Tests for KafkaIO: use ExpectedException rule

[iemejia] Fix hamcrest-core version in parent pom

--
[...truncated 244.20 KB...]
 x [deleted] (none) -> origin/pr/93/head
 x [deleted] (none) -> origin/pr/930/head
 x [deleted] (none) -> origin/pr/930/merge
 x [deleted] (none) -> origin/pr/931/head
 x [deleted] (none) -> origin/pr/931/merge
 x [deleted] (none) -> origin/pr/932/head
 x [deleted] (none) -> origin/pr/932/merge
 x [deleted] (none) -> origin/pr/933/head
 x [deleted] (none) -> origin/pr/933/merge
 x [deleted] (none) -> origin/pr/934/head
 x [deleted] (none) -> origin/pr/934/merge
 x [deleted] (none) -> origin/pr/935/head
 x [deleted] (none) -> origin/pr/936/head
 x [deleted] (none) -> origin/pr/936/merge
 x [deleted] (none) -> origin/pr/937/head
 x [deleted] (none) -> origin/pr/937/merge
 x [deleted] (none) -> origin/pr/938/head
 x [deleted] (none) -> origin/pr/939/head
 x [deleted] (none) -> origin/pr/94/head
 x [deleted] (none) -> origin/pr/940/head
 x [deleted] (none) -> origin/pr/940/merge
 x [deleted] (none) -> origin/pr/941/head
 x [deleted] (none) -> origin/pr/941/merge

[GitHub] beam pull request #2790: Fix hamcrest-core version in parent pom

2017-04-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/beam/pull/2790


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and you wish it to be, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[1/2] beam git commit: Fix hamcrest-core version in parent pom

2017-04-30 Thread iemejia
Repository: beam
Updated Branches:
  refs/heads/master 3d47b335c -> a19ceaf47


Fix hamcrest-core version in parent pom


Project: http://git-wip-us.apache.org/repos/asf/beam/repo
Commit: http://git-wip-us.apache.org/repos/asf/beam/commit/fab43fbe
Tree: http://git-wip-us.apache.org/repos/asf/beam/tree/fab43fbe
Diff: http://git-wip-us.apache.org/repos/asf/beam/diff/fab43fbe

Branch: refs/heads/master
Commit: fab43fbe2b2de3860808216bf954820c43ef6f5c
Parents: 3d47b33
Author: Ismaël Mejía 
Authored: Sun Apr 30 15:06:07 2017 +0200
Committer: Ismaël Mejía 
Committed: Sun Apr 30 19:58:37 2017 +0200

--
 pom.xml| 7 +++
 sdks/java/io/elasticsearch/pom.xml | 1 -
 sdks/java/io/xml/pom.xml   | 2 +-
 3 files changed, 8 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/beam/blob/fab43fbe/pom.xml
--
diff --git a/pom.xml b/pom.xml
index dae4e85..9d92d54 100644
--- a/pom.xml
+++ b/pom.xml
@@ -980,6 +980,13 @@
   
 
   
+org.hamcrest
+hamcrest-core
+${hamcrest.version}
+test
+  
+
+  
 junit
 junit
 ${junit.version}

http://git-wip-us.apache.org/repos/asf/beam/blob/fab43fbe/sdks/java/io/elasticsearch/pom.xml
--
diff --git a/sdks/java/io/elasticsearch/pom.xml 
b/sdks/java/io/elasticsearch/pom.xml
index bca8b22..1cf3c33 100644
--- a/sdks/java/io/elasticsearch/pom.xml
+++ b/sdks/java/io/elasticsearch/pom.xml
@@ -109,7 +109,6 @@
 
   org.hamcrest
   hamcrest-core
-  1.3
   test
 
 

http://git-wip-us.apache.org/repos/asf/beam/blob/fab43fbe/sdks/java/io/xml/pom.xml
--
diff --git a/sdks/java/io/xml/pom.xml b/sdks/java/io/xml/pom.xml
index 51f1c6c..25d5d0f 100644
--- a/sdks/java/io/xml/pom.xml
+++ b/sdks/java/io/xml/pom.xml
@@ -101,7 +101,7 @@
 
   org.hamcrest
   hamcrest-core
-  1.3
+  test
 
 
 



[2/2] beam git commit: This closes #2790

2017-04-30 Thread iemejia
This closes #2790


Project: http://git-wip-us.apache.org/repos/asf/beam/repo
Commit: http://git-wip-us.apache.org/repos/asf/beam/commit/a19ceaf4
Tree: http://git-wip-us.apache.org/repos/asf/beam/tree/a19ceaf4
Diff: http://git-wip-us.apache.org/repos/asf/beam/diff/a19ceaf4

Branch: refs/heads/master
Commit: a19ceaf47cbf6a755ee523141c88bbc53f05
Parents: 3d47b33 fab43fb
Author: Ismaël Mejía 
Authored: Sun Apr 30 19:58:49 2017 +0200
Committer: Ismaël Mejía 
Committed: Sun Apr 30 19:58:49 2017 +0200

--
 pom.xml| 7 +++
 sdks/java/io/elasticsearch/pom.xml | 1 -
 sdks/java/io/xml/pom.xml   | 2 +-
 3 files changed, 8 insertions(+), 2 deletions(-)
--




[GitHub] beam pull request #2792: [BEAM-59] Minor style cleanups to WriteOneWindowPer...

2017-04-30 Thread dhalperi
GitHub user dhalperi opened a pull request:

https://github.com/apache/beam/pull/2792

[BEAM-59] Minor style cleanups to WriteOneWindowPerFile

Makes the changes in #2779 more standalone

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dhalperi/beam writeonefileperwindow-minor

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/beam/pull/2792.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2792


commit 4cd313aa54278b15c2bf2f4d478a028d6b1f6c65
Author: Dan Halperin 
Date:   2017-04-30T17:57:56Z

[BEAM-59] Minor style cleanups to WriteOneWindowPerFile

Makes the changes in #2779 more standalone






[jira] [Commented] (BEAM-59) Switch from IOChannelFactory to FileSystems

2017-04-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-59?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990321#comment-15990321
 ] 

ASF GitHub Bot commented on BEAM-59:


GitHub user dhalperi opened a pull request:

https://github.com/apache/beam/pull/2792

[BEAM-59] Minor style cleanups to WriteOneWindowPerFile

Makes the changes in #2779 more standalone

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dhalperi/beam writeonefileperwindow-minor

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/beam/pull/2792.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2792


commit 4cd313aa54278b15c2bf2f4d478a028d6b1f6c65
Author: Dan Halperin 
Date:   2017-04-30T17:57:56Z

[BEAM-59] Minor style cleanups to WriteOneWindowPerFile

Makes the changes in #2779 more standalone




> Switch from IOChannelFactory to FileSystems
> ---
>
> Key: BEAM-59
> URL: https://issues.apache.org/jira/browse/BEAM-59
> Project: Beam
>  Issue Type: New Feature
>  Components: sdk-java-core, sdk-java-gcp
>Reporter: Daniel Halperin
>Assignee: Daniel Halperin
> Fix For: First stable release
>
>
> Right now, FileBasedSource and FileBasedSink communication is mediated by 
> IOChannelFactory. There are a number of issues:
> * Global configuration -- e.g., all 'gs://' URIs use the same credentials. 
> This should be per-source/per-sink/etc.
> * Supported APIs -- currently IOChannelFactory is in the "non-public API" 
> util package and subject to change. We need users to be able to add new 
> backends ('s3://', 'hdfs://', etc.) directly, without fear that they will be 
> broken.
> * Per-backend features: e.g., creating buckets in GCS/s3, setting expiration 
> time, etc.
> Updates:
> Design docs posted on dev@ list:
> Part 1: IOChannelFactory Redesign: 
> https://docs.google.com/document/d/11TdPyZ9_zmjokhNWM3Id-XJsVG3qel2lhdKTknmZ_7M/edit#
> Part 2: Configurable BeamFileSystem:
> https://docs.google.com/document/d/1-7vo9nLRsEEzDGnb562PuL4q9mUiq_ZVpCAiyyJw8p8/edit#heading=h.p3gc3colc2cs



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Jenkins build is still unstable: beam_PostCommit_Java_MavenInstall #3542

2017-04-30 Thread Apache Jenkins Server
See 




[jira] [Commented] (BEAM-59) Switch from IOChannelFactory to FileSystems

2017-04-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-59?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990317#comment-15990317
 ] 

ASF GitHub Bot commented on BEAM-59:


GitHub user dhalperi opened a pull request:

https://github.com/apache/beam/pull/2791

[BEAM-59] DataflowRunner: Sink is always a FileBasedSink now



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dhalperi/beam dataflow-runner-minor

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/beam/pull/2791.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2791


commit 84713fd62ef9935fa4cf882da0f0302f387b39b2
Author: Dan Halperin 
Date:   2017-04-30T17:47:37Z

[BEAM-59] DataflowRunner: Sink is always a FileBasedSink now









[GitHub] beam pull request #2791: [BEAM-59] DataflowRunner: Sink is always a FileBase...

2017-04-30 Thread dhalperi
GitHub user dhalperi opened a pull request:

https://github.com/apache/beam/pull/2791

[BEAM-59] DataflowRunner: Sink is always a FileBasedSink now



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dhalperi/beam dataflow-runner-minor

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/beam/pull/2791.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2791


commit 84713fd62ef9935fa4cf882da0f0302f387b39b2
Author: Dan Halperin 
Date:   2017-04-30T17:47:37Z

[BEAM-59] DataflowRunner: Sink is always a FileBasedSink now






Jenkins build is back to normal : beam_PostCommit_Python_Verify #2052

2017-04-30 Thread Apache Jenkins Server
See 




[jira] [Assigned] (BEAM-2123) Passing potential null pointer to encode() in StructuredCoder#structuralValue

2017-04-30 Thread Daniel Halperin (JIRA)

 [ 
https://issues.apache.org/jira/browse/BEAM-2123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Halperin reassigned BEAM-2123:
-

Assignee: Thomas Groh  (was: Davor Bonaci)

> Passing potential null pointer to encode() in StructuredCoder#structuralValue
> -
>
> Key: BEAM-2123
> URL: https://issues.apache.org/jira/browse/BEAM-2123
> Project: Beam
>  Issue Type: Bug
>  Components: sdk-java-core
>Reporter: Ted Yu
>Assignee: Thomas Groh
>Priority: Minor
>
> {code}
>   public Object structuralValue(T value) {
> if (value != null && consistentWithEquals()) {
>   return value;
> } else {
>   try {
> ByteArrayOutputStream os = new ByteArrayOutputStream();
> encode(value, os, Context.OUTER);
> {code}
> If value is null, encode() would throw CoderException (I checked 
> ByteArrayCoder and KvCoder) which would be caught and converted to 
> IllegalArgumentException.
> Looks like structuralValue() can check null value directly and throw 
> CoderException. This would result in clearer exception.





[jira] [Commented] (BEAM-2123) Passing potential null pointer to encode() in StructuredCoder#structuralValue

2017-04-30 Thread Daniel Halperin (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-2123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990311#comment-15990311
 ] 

Daniel Halperin commented on BEAM-2123:
---

It looks like you believe `null` is not a valid value for any structured coder. 
Is that actually true? I don't believe so.

It's up to the extending coder to throw on `null` values.
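
The contract being described — the base class encodes whatever it is given, and the
concrete coder decides whether `null` is legal — can be sketched in a few lines of
plain Java. This is a simplified, hypothetical model (the names `CoderSketch` and
`Utf8Coder` are illustrative stand-ins, not Beam's real classes):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class CoderSketch {
    abstract static class StructuredCoder<T> {
        abstract void encode(T value, ByteArrayOutputStream os) throws IOException;

        boolean consistentWithEquals() { return false; }

        // Mirrors the quoted structuralValue(): null is passed straight to
        // encode(); any failure is wrapped in IllegalArgumentException.
        Object structuralValue(T value) {
            if (value != null && consistentWithEquals()) {
                return value;
            }
            try {
                ByteArrayOutputStream os = new ByteArrayOutputStream();
                encode(value, os);
                return os.toByteArray();
            } catch (Exception e) {
                throw new IllegalArgumentException("Unable to encode element " + value, e);
            }
        }
    }

    // The extending coder, not the base class, decides that null is invalid.
    static class Utf8Coder extends StructuredCoder<String> {
        @Override
        void encode(String value, ByteArrayOutputStream os) throws IOException {
            if (value == null) {
                throw new IOException("cannot encode a null String");
            }
            os.write(value.getBytes(StandardCharsets.UTF_8));
        }
    }

    public static void main(String[] args) {
        Utf8Coder coder = new Utf8Coder();
        try {
            coder.structuralValue(null);
        } catch (IllegalArgumentException expected) {
            // prints: null rejected: cannot encode a null String
            System.out.println("null rejected: " + expected.getCause().getMessage());
        }
    }
}
```

With this split, a coder that *does* accept null can simply encode it, while
`Utf8Coder` surfaces its own rejection through the wrapping exception — the
behavior the report describes.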

> Passing potential null pointer to encode() in StructuredCoder#structuralValue
> -
>
> Key: BEAM-2123
> URL: https://issues.apache.org/jira/browse/BEAM-2123
> Project: Beam
>  Issue Type: Bug
>  Components: sdk-java-core
>Reporter: Ted Yu
>Assignee: Thomas Groh
>Priority: Minor
>
> {code}
>   public Object structuralValue(T value) {
> if (value != null && consistentWithEquals()) {
>   return value;
> } else {
>   try {
> ByteArrayOutputStream os = new ByteArrayOutputStream();
> encode(value, os, Context.OUTER);
> {code}
> If value is null, encode() would throw CoderException (I checked 
> ByteArrayCoder and KvCoder) which would be caught and converted to 
> IllegalArgumentException.
> Looks like structuralValue() can check null value directly and throw 
> CoderException. This would result in clearer exception.





[jira] [Commented] (BEAM-2114) KafkaIO broken with CoderException

2017-04-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-2114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990289#comment-15990289
 ] 

ASF GitHub Bot commented on BEAM-2114:
--

Github user asfgit closed the pull request at:

https://github.com/apache/beam/pull/2780


> KafkaIO broken with CoderException
> --
>
> Key: BEAM-2114
> URL: https://issues.apache.org/jira/browse/BEAM-2114
> Project: Beam
>  Issue Type: Bug
>  Components: sdk-java-extensions
>Reporter: Devon Meunier
>Assignee: Devon Meunier
> Fix For: First stable release
>
>

[GitHub] beam pull request #2780: [BEAM-2114] KafkaIO: fix display data for read/writ...

2017-04-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/beam/pull/2780




[2/4] beam git commit: [BEAM-2114] Throw instead of warning when KafkaIO cannot infer coder

2017-04-30 Thread jkff
[BEAM-2114] Throw instead of warning when KafkaIO cannot infer coder


Project: http://git-wip-us.apache.org/repos/asf/beam/repo
Commit: http://git-wip-us.apache.org/repos/asf/beam/commit/10fc5f86
Tree: http://git-wip-us.apache.org/repos/asf/beam/tree/10fc5f86
Diff: http://git-wip-us.apache.org/repos/asf/beam/diff/10fc5f86

Branch: refs/heads/master
Commit: 10fc5f86fac066423c77d9b6d9e7ed87ab32ef01
Parents: 10b3e3e
Author: peay 
Authored: Sat Apr 29 11:31:15 2017 +0200
Committer: Eugene Kirpichov 
Committed: Sun Apr 30 09:42:52 2017 -0700

--
 .../org/apache/beam/sdk/io/kafka/KafkaIO.java   | 18 +-
 .../apache/beam/sdk/io/kafka/KafkaIOTest.java   | 26 +---
 2 files changed, 35 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/beam/blob/10fc5f86/sdks/java/io/kafka/src/main/java/org/apache/beam/sdk/io/kafka/KafkaIO.java
--
diff --git 
a/sdks/java/io/kafka/src/main/java/org/apache/beam/sdk/io/kafka/KafkaIO.java 
b/sdks/java/io/kafka/src/main/java/org/apache/beam/sdk/io/kafka/KafkaIO.java
index b3591ce..8f94b8a 100644
--- a/sdks/java/io/kafka/src/main/java/org/apache/beam/sdk/io/kafka/KafkaIO.java
+++ b/sdks/java/io/kafka/src/main/java/org/apache/beam/sdk/io/kafka/KafkaIO.java
@@ -291,17 +291,23 @@ public class KafkaIO {
   if (parameterizedType.getRawType() == Deserializer.class) {
 Type parameter = parameterizedType.getActualTypeArguments()[0];
 
+@SuppressWarnings("unchecked")
+Class clazz = (Class) parameter;
+
 try {
-  @SuppressWarnings("unchecked")
-  Class clazz = (Class) parameter;
   return NullableCoder.of(coderRegistry.getDefaultCoder(clazz));
 } catch (CannotProvideCoderException e) {
-  LOG.warn("Could not infer coder from deserializer type", e);
+  throw new RuntimeException(
+  String.format("Unable to automatically infer a Coder for "
++ "the Kafka Deserializer %s: no coder 
registered for type %s",
+deserializer, clazz));
 }
   }
 }
 
-throw new RuntimeException("Could not extract deserializer type from " + 
deserializer);
+throw new RuntimeException(
+String.format("Could not extract the Kafka Deserializer type from 
%s",
+  deserializer));
   }
 
   /**
@@ -634,14 +640,14 @@ public class KafkaIO {
   Coder keyCoder =
   checkNotNull(
   getKeyCoder() != null ? getKeyCoder() : inferCoder(registry, 
getKeyDeserializer()),
-  "Key coder must be set");
+  "Key coder must be inferable from input or set using 
readWithCoders");
 
   Coder valueCoder =
   checkNotNull(
   getValueCoder() != null
   ? getValueCoder()
   : inferCoder(registry, getValueDeserializer()),
-  "Value coder must be set");
+  "Value coder must be inferable from input or set using 
readWithCoders");
 
   // Handles unbounded source to bounded conversion if maxNumRecords or 
maxReadTime is set.
   Unbounded> unbounded =

http://git-wip-us.apache.org/repos/asf/beam/blob/10fc5f86/sdks/java/io/kafka/src/test/java/org/apache/beam/sdk/io/kafka/KafkaIOTest.java
--
diff --git 
a/sdks/java/io/kafka/src/test/java/org/apache/beam/sdk/io/kafka/KafkaIOTest.java
 
b/sdks/java/io/kafka/src/test/java/org/apache/beam/sdk/io/kafka/KafkaIOTest.java
index a9c318d..2f895fe 100644
--- 
a/sdks/java/io/kafka/src/test/java/org/apache/beam/sdk/io/kafka/KafkaIOTest.java
+++ 
b/sdks/java/io/kafka/src/test/java/org/apache/beam/sdk/io/kafka/KafkaIOTest.java
@@ -63,7 +63,6 @@ import org.apache.beam.sdk.metrics.MetricsFilter;
 import org.apache.beam.sdk.metrics.SinkMetrics;
 import org.apache.beam.sdk.metrics.SourceMetrics;
 import org.apache.beam.sdk.options.PipelineOptionsFactory;
-import org.apache.beam.sdk.testing.NeedsRunner;
 import org.apache.beam.sdk.testing.PAssert;
 import org.apache.beam.sdk.testing.TestPipeline;
 import org.apache.beam.sdk.transforms.Count;
@@ -605,7 +604,6 @@ public class KafkaIOTest {
   }
 
   @Test
-  @Category(NeedsRunner.class)
   public void testUnboundedSourceMetrics() {
 int numElements = 1000;
 
@@ -917,7 +915,24 @@ public class KafkaIOTest {
 
 @Override
 public void close() {
+}
+  }
+
+  // class for testing coder inference
+  private static class ObjectDeserializer
+  implements Deserializer {
+
+@Override
+public void configure(Map configs, boolean isKey) {
+}
 
+@Override
+public Object deserialize(String topic, byte[] bytes) 

[3/4] beam git commit: [BEAM-2114] Tests for KafkaIO: use ExpectedException rule

2017-04-30 Thread jkff
[BEAM-2114] Tests for KafkaIO: use ExpectedException rule


Project: http://git-wip-us.apache.org/repos/asf/beam/repo
Commit: http://git-wip-us.apache.org/repos/asf/beam/commit/34e00465
Tree: http://git-wip-us.apache.org/repos/asf/beam/tree/34e00465
Diff: http://git-wip-us.apache.org/repos/asf/beam/diff/34e00465

Branch: refs/heads/master
Commit: 34e0046512282872f205166162b16b616f834e93
Parents: 10fc5f8
Author: peay 
Authored: Sun Apr 30 18:22:38 2017 +0200
Committer: Eugene Kirpichov 
Committed: Sun Apr 30 09:45:30 2017 -0700

--
 .../apache/beam/sdk/io/kafka/KafkaIOTest.java   | 33 +---
 1 file changed, 21 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/beam/blob/34e00465/sdks/java/io/kafka/src/test/java/org/apache/beam/sdk/io/kafka/KafkaIOTest.java
--
diff --git 
a/sdks/java/io/kafka/src/test/java/org/apache/beam/sdk/io/kafka/KafkaIOTest.java
 
b/sdks/java/io/kafka/src/test/java/org/apache/beam/sdk/io/kafka/KafkaIOTest.java
index 2f895fe..591c099 100644
--- 
a/sdks/java/io/kafka/src/test/java/org/apache/beam/sdk/io/kafka/KafkaIOTest.java
+++ 
b/sdks/java/io/kafka/src/test/java/org/apache/beam/sdk/io/kafka/KafkaIOTest.java
@@ -102,7 +102,6 @@ import 
org.hamcrest.collection.IsIterableContainingInAnyOrder;
 import org.joda.time.Instant;
 import org.junit.Rule;
 import org.junit.Test;
-import org.junit.experimental.categories.Category;
 import org.junit.rules.ExpectedException;
 import org.junit.runner.RunWith;
 import org.junit.runners.JUnit4;
@@ -240,8 +239,8 @@ public class KafkaIOTest {
   }
 
   /**
-   * Creates a consumer with two topics, with 5 partitions each.
-   * numElements are (round-robin) assigned all the 10 partitions.
+   * Creates a consumer with two topics, with 10 partitions each.
+   * numElements are (round-robin) assigned to all the 20 partitions.
*/
   private static KafkaIO.Read mkKafkaReadTransform(
   int numElements,
@@ -266,8 +265,9 @@ public class KafkaIOTest {
   }
 
   /**
-   * Creates a consumer with two topics, with 5 partitions each.
-   * numElements are (round-robin) assigned all the 10 partitions.
+   * Creates a consumer with two topics, with 10 partitions each.
+   * numElements are (round-robin) assigned to all the 20 partitions.
+   * Coders are specified explicitly.
*/
   private static KafkaIO.Read mkKafkaReadTransformWithCoders(
   int numElements,
@@ -918,17 +918,22 @@ public class KafkaIOTest {
 }
   }
 
+  // class for which a coder cannot be inferred
+  private static class NonInferableObject {
+
+  }
+
   // class for testing coder inference
-  private static class ObjectDeserializer
-  implements Deserializer {
+  private static class NonInferableObjectDeserializer
+  implements Deserializer {
 
 @Override
 public void configure(Map configs, boolean isKey) {
 }
 
 @Override
-public Object deserialize(String topic, byte[] bytes) {
-  return new Object();
+public NonInferableObject deserialize(String topic, byte[] bytes) {
+  return new NonInferableObject();
 }
 
 @Override
@@ -953,10 +958,14 @@ public class KafkaIOTest {
 instanceof VarLongCoder);
   }
 
-  @Test(expected = RuntimeException.class)
-  public void testInferKeyCoderFailure() {
+  @Rule public ExpectedException cannotInferException = 
ExpectedException.none();
+
+  @Test
+  public void testInferKeyCoderFailure() throws Exception {
+cannotInferException.expect(RuntimeException.class);
+
 CoderRegistry registry = CoderRegistry.createDefault();
-KafkaIO.inferCoder(registry, ObjectDeserializer.class);
+KafkaIO.inferCoder(registry, NonInferableObjectDeserializer.class);
   }
 
   @Test

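The switch above from `@Test(expected = RuntimeException.class)` to the `ExpectedException` rule narrows the assertion: `cannotInferException.expect(...)` marks the point after which a throw is expected, so a failure in earlier setup can no longer make the test pass by accident. That discipline can be sketched without JUnit (hypothetical helper, not Beam code):

```java
public class ExpectAfter {
  /** Returns true only if the given action itself throws a RuntimeException. */
  public static boolean throwsRuntimeException(Runnable action) {
    try {
      action.run();
      return false;                 // action completed normally
    } catch (RuntimeException expected) {
      return true;                  // the anticipated failure
    }
  }

  public static void main(String[] args) {
    // Setup runs outside the guarded region, so its failures are not masked.
    String registry = "default-registry";  // hypothetical setup value
    boolean threw = throwsRuntimeException(() -> {
      throw new RuntimeException("cannot infer a coder from " + registry);
    });
    System.out.println(threw);  // prints "true"
  }
}
```

With `@Test(expected = ...)`, by contrast, a `RuntimeException` thrown anywhere in the test body, including setup, satisfies the annotation.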


[1/4] beam git commit: [BEAM-2114] Fixed display data for Kafka read/write with coders

2017-04-30 Thread jkff
Repository: beam
Updated Branches:
  refs/heads/master 202aae9d3 -> 3d47b335c


[BEAM-2114] Fixed display data for Kafka read/write with coders


Project: http://git-wip-us.apache.org/repos/asf/beam/repo
Commit: http://git-wip-us.apache.org/repos/asf/beam/commit/10b3e3e7
Tree: http://git-wip-us.apache.org/repos/asf/beam/tree/10b3e3e7
Diff: http://git-wip-us.apache.org/repos/asf/beam/diff/10b3e3e7

Branch: refs/heads/master
Commit: 10b3e3e7391603e00a64933fe74b7748b58bc590
Parents: 202aae9
Author: peay 
Authored: Sat Apr 29 11:08:21 2017 +0200
Committer: Eugene Kirpichov 
Committed: Sun Apr 30 09:39:45 2017 -0700

--
 .../org/apache/beam/sdk/io/kafka/KafkaIO.java   |   9 +-
 .../apache/beam/sdk/io/kafka/KafkaIOTest.java   | 102 +++
 2 files changed, 109 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/beam/blob/10b3e3e7/sdks/java/io/kafka/src/main/java/org/apache/beam/sdk/io/kafka/KafkaIO.java
--
diff --git 
a/sdks/java/io/kafka/src/main/java/org/apache/beam/sdk/io/kafka/KafkaIO.java 
b/sdks/java/io/kafka/src/main/java/org/apache/beam/sdk/io/kafka/KafkaIO.java
index 000df70..b3591ce 100644
--- a/sdks/java/io/kafka/src/main/java/org/apache/beam/sdk/io/kafka/KafkaIO.java
+++ b/sdks/java/io/kafka/src/main/java/org/apache/beam/sdk/io/kafka/KafkaIO.java
@@ -688,9 +688,11 @@ public class KafkaIO {
  */
 private static final Map IGNORED_CONSUMER_PROPERTIES = 
ImmutableMap.of(
 ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "Set keyDeserializer 
instead",
-ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "Set valueDeserializer 
instead"
+ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "Set valueDeserializer 
instead",
 // "group.id", "enable.auto.commit", "auto.commit.interval.ms" :
 // lets allow these, applications can have better resume point for 
restarts.
+CoderBasedKafkaDeserializer.configForKeyDeserializer(), "Use 
readWithCoders instead",
+CoderBasedKafkaDeserializer.configForValueDeserializer(), "Use 
readWithCoders instead"
 );
 
 // set config defaults
@@ -1526,7 +1528,10 @@ public class KafkaIO {
  */
 private static final Map IGNORED_PRODUCER_PROPERTIES = 
ImmutableMap.of(
 ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "Use withKeySerializer 
instead",
-ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "Use withValueSerializer 
instead"
+ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "Use withValueSerializer 
instead",
+
+CoderBasedKafkaSerializer.configForKeySerializer(), "Use 
writeWithCoders instead",
+CoderBasedKafkaSerializer.configForValueSerializer(), "Use 
writeWithCoders instead"
  );
 
 @Override

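The `IGNORED_CONSUMER_PROPERTIES` / `IGNORED_PRODUCER_PROPERTIES` maps above pair each reserved config key with advice on which builder method to use instead. A hedged sketch of how such a guard can be enforced when users supply config updates (names are illustrative, not KafkaIO's internals):

```java
import java.util.HashMap;
import java.util.Map;

public class ConfigGuard {
  // Reserved key -> message telling the user which builder method to use instead.
  private static final Map<String, String> IGNORED = new HashMap<>();
  static {
    IGNORED.put("key.deserializer", "Set keyDeserializer instead");
    IGNORED.put("value.deserializer", "Set valueDeserializer instead");
  }

  /** Merges updates into the current config, rejecting reserved keys. */
  public static Map<String, String> updateConfig(
      Map<String, String> current, Map<String, String> updates) {
    for (Map.Entry<String, String> e : updates.entrySet()) {
      if (IGNORED.containsKey(e.getKey())) {
        throw new IllegalArgumentException(
            e.getKey() + " is not settable here. " + IGNORED.get(e.getKey()));
      }
    }
    Map<String, String> merged = new HashMap<>(current);
    merged.putAll(updates);
    return merged;
  }
}
```

The commit above extends the reserved set so that the internal coder-based serializer/deserializer config keys also get a "Use readWithCoders/writeWithCoders instead" message rather than silently conflicting.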
http://git-wip-us.apache.org/repos/asf/beam/blob/10b3e3e7/sdks/java/io/kafka/src/test/java/org/apache/beam/sdk/io/kafka/KafkaIOTest.java
--
diff --git 
a/sdks/java/io/kafka/src/test/java/org/apache/beam/sdk/io/kafka/KafkaIOTest.java
 
b/sdks/java/io/kafka/src/test/java/org/apache/beam/sdk/io/kafka/KafkaIOTest.java
index feb65da..a9c318d 100644
--- 
a/sdks/java/io/kafka/src/test/java/org/apache/beam/sdk/io/kafka/KafkaIOTest.java
+++ 
b/sdks/java/io/kafka/src/test/java/org/apache/beam/sdk/io/kafka/KafkaIOTest.java
@@ -266,6 +266,34 @@ public class KafkaIOTest {
 }
   }
 
+  /**
+   * Creates a consumer with two topics, with 5 partitions each.
+   * numElements are (round-robin) assigned to all the 10 partitions.
+   */
+  private static KafkaIO.Read mkKafkaReadTransformWithCoders(
+  int numElements,
+  @Nullable SerializableFunction, Instant> 
timestampFn) {
+
+List topics = ImmutableList.of("topic_a", "topic_b");
+
+KafkaIO.Read reader = KafkaIO
+.readWithCoders(VarIntCoder.of(), VarLongCoder.of())
+.withBootstrapServers("myServer1:9092,myServer2:9092")
+.withTopics(topics)
+.withConsumerFactoryFn(new ConsumerFactoryFn(
+topics, 10, numElements, OffsetResetStrategy.EARLIEST)) // 
20 partitions
+.withKeyDeserializer(IntegerDeserializer.class)
+.withValueDeserializer(LongDeserializer.class)
+.withMaxNumRecords(numElements);
+
+if (timestampFn != null) {
+  return reader.withTimestampFn(timestampFn);
+} else {
+  return reader;
+}
+  }
+
+
   private static class AssertMultipleOf implements 
SerializableFunction {
 private final int num;
 
@@ -316,6 +344,19 @@ public class KafkaIOTest {
   }
 
   @Test
+  public void testUnboundedSourceWithCoders() {
+int 

[4/4] beam git commit: This closes #2780

2017-04-30 Thread jkff
This closes #2780


Project: http://git-wip-us.apache.org/repos/asf/beam/repo
Commit: http://git-wip-us.apache.org/repos/asf/beam/commit/3d47b335
Tree: http://git-wip-us.apache.org/repos/asf/beam/tree/3d47b335
Diff: http://git-wip-us.apache.org/repos/asf/beam/diff/3d47b335

Branch: refs/heads/master
Commit: 3d47b335cbe22f92fc83e2ba2a7c35847bcadca3
Parents: 202aae9 34e0046
Author: Eugene Kirpichov 
Authored: Sun Apr 30 09:59:10 2017 -0700
Committer: Eugene Kirpichov 
Committed: Sun Apr 30 09:59:10 2017 -0700

--
 .../org/apache/beam/sdk/io/kafka/KafkaIO.java   |  27 ++--
 .../apache/beam/sdk/io/kafka/KafkaIOTest.java   | 143 ++-
 2 files changed, 156 insertions(+), 14 deletions(-)
--




Jenkins build is still unstable: beam_PostCommit_Java_MavenInstall #3541

2017-04-30 Thread Apache Jenkins Server
See 




[jira] [Assigned] (BEAM-2127) Test Spark streaming pipelines with maxRecordsPerBatch

2017-04-30 Thread Aviem Zur (JIRA)

 [ 
https://issues.apache.org/jira/browse/BEAM-2127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aviem Zur reassigned BEAM-2127:
---

Assignee: (was: Amit Sela)

> Test Spark streaming pipelines with maxRecordsPerBatch
> --
>
> Key: BEAM-2127
> URL: https://issues.apache.org/jira/browse/BEAM-2127
> Project: Beam
>  Issue Type: Test
>  Components: runner-spark
>Reporter: Aviem Zur
>Priority: Minor
>  Labels: newbie, starter
>
> Add tests for Spark streaming pipelines with maxRecordsPerBatch configured in 
> {{SparkPipelineOptions}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (BEAM-2127) Test Spark streaming pipelines with maxRecordsPerBatch

2017-04-30 Thread Aviem Zur (JIRA)

 [ 
https://issues.apache.org/jira/browse/BEAM-2127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aviem Zur updated BEAM-2127:

Priority: Minor  (was: Major)

> Test Spark streaming pipelines with maxRecordsPerBatch
> --
>
> Key: BEAM-2127
> URL: https://issues.apache.org/jira/browse/BEAM-2127
> Project: Beam
>  Issue Type: Test
>  Components: runner-spark
>Reporter: Aviem Zur
>Assignee: Amit Sela
>Priority: Minor
>  Labels: newbie, starter
>
> Add tests for Spark streaming pipelines with maxRecordsPerBatch configured in 
> {{SparkPipelineOptions}}.





[jira] [Created] (BEAM-2127) Test Spark streaming pipelines with maxRecordsPerBatch

2017-04-30 Thread Aviem Zur (JIRA)
Aviem Zur created BEAM-2127:
---

 Summary: Test Spark streaming pipelines with maxRecordsPerBatch
 Key: BEAM-2127
 URL: https://issues.apache.org/jira/browse/BEAM-2127
 Project: Beam
  Issue Type: Test
  Components: runner-spark
Reporter: Aviem Zur
Assignee: Amit Sela


Add tests for Spark streaming pipelines with maxRecordsPerBatch configured in 
{{SparkPipelineOptions}}.





[jira] [Updated] (BEAM-827) Remove PipelineOptions from construction time in WriteFiles

2017-04-30 Thread Kenneth Knowles (JIRA)

 [ 
https://issues.apache.org/jira/browse/BEAM-827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenneth Knowles updated BEAM-827:
-
Summary: Remove PipelineOptions from construction time in WriteFiles  (was: 
Remove PipelineOptions from WriteFiles)

> Remove PipelineOptions from construction time in WriteFiles
> ---
>
> Key: BEAM-827
> URL: https://issues.apache.org/jira/browse/BEAM-827
> Project: Beam
>  Issue Type: Sub-task
>  Components: sdk-java-core
>Reporter: Thomas Groh
>Assignee: Kenneth Knowles
>  Labels: backward-incompatible
> Fix For: First stable release
>
>
> This may require validating the sink is properly configured at Runtime rather 
> than at apply-time.
> Requires making the writer-result coder part of the sink interface, rather 
> than the WriteOperation interface, as otherwise the coder is not available at 
> runtime.





[jira] [Resolved] (BEAM-827) Remove PipelineOptions from construction time in WriteFiles

2017-04-30 Thread Kenneth Knowles (JIRA)

 [ 
https://issues.apache.org/jira/browse/BEAM-827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenneth Knowles resolved BEAM-827.
--
Resolution: Fixed

> Remove PipelineOptions from construction time in WriteFiles
> ---
>
> Key: BEAM-827
> URL: https://issues.apache.org/jira/browse/BEAM-827
> Project: Beam
>  Issue Type: Sub-task
>  Components: sdk-java-core
>Reporter: Thomas Groh
>Assignee: Kenneth Knowles
>  Labels: backward-incompatible
> Fix For: First stable release
>
>
> This may require validating the sink is properly configured at Runtime rather 
> than at apply-time.
> Requires making the writer-result coder part of the sink interface, rather 
> than the WriteOperation interface, as otherwise the coder is not available at 
> runtime.





[jira] [Updated] (BEAM-828) Remove PipelineOptions from construction time in BigQueryIO

2017-04-30 Thread Kenneth Knowles (JIRA)

 [ 
https://issues.apache.org/jira/browse/BEAM-828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenneth Knowles updated BEAM-828:
-
Summary: Remove PipelineOptions from construction time in BigQueryIO  (was: 
Remove PipelineOptions from BigQueryIO)

> Remove PipelineOptions from construction time in BigQueryIO
> ---
>
> Key: BEAM-828
> URL: https://issues.apache.org/jira/browse/BEAM-828
> Project: Beam
>  Issue Type: Sub-task
>  Components: sdk-java-gcp
>Reporter: Thomas Groh
>Assignee: Kenneth Knowles
>  Labels: backward-incompatible
> Fix For: First stable release
>
>
> BigQueryIO uses PipelineOptions to configure itself at construction time.





[jira] [Resolved] (BEAM-2022) ApexTimerInternals seems to treat processing time timers as event time timers

2017-04-30 Thread Kenneth Knowles (JIRA)

 [ 
https://issues.apache.org/jira/browse/BEAM-2022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenneth Knowles resolved BEAM-2022.
---
Resolution: Fixed

> ApexTimerInternals seems to treat processing time timers as event time timers
> -
>
> Key: BEAM-2022
> URL: https://issues.apache.org/jira/browse/BEAM-2022
> Project: Beam
>  Issue Type: Bug
>  Components: runner-apex
>Reporter: Kenneth Knowles
>Assignee: Thomas Weise
> Fix For: First stable release
>
>
> I first noticed that {{currentProcessingTime()}} was using {{Instant.now()}}, 
> which has some bad issues in a distributed setting. But it seemed on 
> inspection that processing time timers are simply treated as event time. 
> Perhaps I am reading the code wrong?





[jira] [Commented] (BEAM-2022) ApexTimerInternals seems to treat processing time timers as event time timers

2017-04-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-2022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990269#comment-15990269
 ] 

ASF GitHub Bot commented on BEAM-2022:
--

Github user asfgit closed the pull request at:

https://github.com/apache/beam/pull/2782


> ApexTimerInternals seems to treat processing time timers as event time timers
> -
>
> Key: BEAM-2022
> URL: https://issues.apache.org/jira/browse/BEAM-2022
> Project: Beam
>  Issue Type: Bug
>  Components: runner-apex
>Reporter: Kenneth Knowles
>Assignee: Thomas Weise
> Fix For: First stable release
>
>
> I first noticed that {{currentProcessingTime()}} was using {{Instant.now()}}, 
> which has some bad issues in a distributed setting. But it seemed on 
> inspection that processing time timers are simply treated as event time. 
> Perhaps I am reading the code wrong?





[GitHub] beam pull request #2782: [BEAM-2022] fix triggering for processing time time...

2017-04-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/beam/pull/2782


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[1/2] beam git commit: BEAM-2022 fix triggering for processing time timers

2017-04-30 Thread kenn
Repository: beam
Updated Branches:
  refs/heads/master fc55d2f81 -> 202aae9d3


BEAM-2022 fix triggering for processing time timers


Project: http://git-wip-us.apache.org/repos/asf/beam/repo
Commit: http://git-wip-us.apache.org/repos/asf/beam/commit/eb860388
Tree: http://git-wip-us.apache.org/repos/asf/beam/tree/eb860388
Diff: http://git-wip-us.apache.org/repos/asf/beam/diff/eb860388

Branch: refs/heads/master
Commit: eb860388a8626837655e82171c8480421384e419
Parents: 2b6cb8c
Author: Thomas Weise 
Authored: Sat Apr 29 01:17:22 2017 -0700
Committer: Thomas Weise 
Committed: Sat Apr 29 01:17:22 2017 -0700

--
 .../operators/ApexGroupByKeyOperator.java   |  41 ++-
 .../operators/ApexTimerInternals.java   | 155 +---
 .../translation/ApexStateInternalsTest.java | 368 ---
 .../operators/ApexTimerInternalsTest.java   |  78 +++-
 .../utils/ApexStateInternalsTest.java   | 367 ++
 5 files changed, 567 insertions(+), 442 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/beam/blob/eb860388/runners/apex/src/main/java/org/apache/beam/runners/apex/translation/operators/ApexGroupByKeyOperator.java
--
diff --git 
a/runners/apex/src/main/java/org/apache/beam/runners/apex/translation/operators/ApexGroupByKeyOperator.java
 
b/runners/apex/src/main/java/org/apache/beam/runners/apex/translation/operators/ApexGroupByKeyOperator.java
index f8b6653..3c9f5ab 100644
--- 
a/runners/apex/src/main/java/org/apache/beam/runners/apex/translation/operators/ApexGroupByKeyOperator.java
+++ 
b/runners/apex/src/main/java/org/apache/beam/runners/apex/translation/operators/ApexGroupByKeyOperator.java
@@ -25,11 +25,9 @@ import com.datatorrent.api.DefaultOutputPort;
 import com.datatorrent.api.Operator;
 import com.datatorrent.api.StreamCodec;
 import com.datatorrent.api.annotation.OutputPortFieldAnnotation;
-import com.datatorrent.netlet.util.Slice;
 import com.esotericsoftware.kryo.serializers.FieldSerializer.Bind;
 import com.esotericsoftware.kryo.serializers.JavaSerializer;
 import com.google.common.base.Throwables;
-import com.google.common.collect.Multimap;
 import java.util.Collection;
 import java.util.Collections;
 import org.apache.beam.runners.apex.ApexPipelineOptions;
@@ -41,6 +39,7 @@ import org.apache.beam.runners.core.ReduceFnRunner;
 import org.apache.beam.runners.core.StateInternalsFactory;
 import org.apache.beam.runners.core.SystemReduceFn;
 import org.apache.beam.runners.core.TimerInternals;
+import org.apache.beam.runners.core.TimerInternals.TimerData;
 import org.apache.beam.runners.core.construction.Triggers;
 import org.apache.beam.runners.core.triggers.ExecutableTriggerStateMachine;
 import org.apache.beam.runners.core.triggers.TriggerStateMachines;
@@ -49,8 +48,8 @@ import org.apache.beam.sdk.coders.KvCoder;
 import org.apache.beam.sdk.transforms.GroupByKey;
 import org.apache.beam.sdk.transforms.windowing.BoundedWindow;
 import org.apache.beam.sdk.transforms.windowing.PaneInfo;
-import org.apache.beam.sdk.util.CoderUtils;
 import org.apache.beam.sdk.util.NullSideInputReader;
+import org.apache.beam.sdk.util.TimeDomain;
 import org.apache.beam.sdk.util.WindowedValue;
 import org.apache.beam.sdk.util.WindowingStrategy;
 import org.apache.beam.sdk.values.KV;
@@ -68,7 +67,8 @@ import org.slf4j.LoggerFactory;
  * @param  key type
  * @param  value type
  */
-public class ApexGroupByKeyOperator implements Operator {
+public class ApexGroupByKeyOperator implements Operator,
+ApexTimerInternals.TimerProcessor {
   private static final Logger LOG = 
LoggerFactory.getLogger(ApexGroupByKeyOperator.class);
   private boolean traceTuples = true;
 
@@ -106,7 +106,7 @@ public class ApexGroupByKeyOperator implements 
Operator {
 }
 processElement(t.getValue());
   } catch (Exception e) {
-Throwables.propagateIfPossible(e);
+Throwables.throwIfUnchecked(e);
 throw new RuntimeException(e);
   }
 }
@@ -143,6 +143,8 @@ public class ApexGroupByKeyOperator implements 
Operator {
 
   @Override
   public void endWindow() {
+
timerInternals.fireReadyTimers(timerInternals.currentProcessingTime().getMillis(),
+this, TimeDomain.PROCESSING_TIME);
   }
 
   @Override
@@ -195,7 +197,6 @@ public class ApexGroupByKeyOperator implements 
Operator {
 serializedOptions.get());
   }
 
-
   private void processElement(WindowedValue> windowedValue) throws 
Exception {
 final KV kv = windowedValue.getValue();
 final WindowedValue updatedWindowedValue = 
WindowedValue.of(kv.getValue(),
@@ -209,19 +210,23 @@ public class ApexGroupByKeyOperator implements 
Operator {
 reduceFnRunner.persist();
   }
 
-  private void 

[2/2] beam git commit: This closes #2782: fix triggering for processing time timers

2017-04-30 Thread kenn
This closes #2782: fix triggering for processing time timers


Project: http://git-wip-us.apache.org/repos/asf/beam/repo
Commit: http://git-wip-us.apache.org/repos/asf/beam/commit/202aae9d
Tree: http://git-wip-us.apache.org/repos/asf/beam/tree/202aae9d
Diff: http://git-wip-us.apache.org/repos/asf/beam/diff/202aae9d

Branch: refs/heads/master
Commit: 202aae9d3a85a4fff41943e4c9a13618bcf8acc4
Parents: fc55d2f eb86038
Author: Kenneth Knowles 
Authored: Sun Apr 30 08:43:35 2017 -0700
Committer: Kenneth Knowles 
Committed: Sun Apr 30 08:43:35 2017 -0700

--
 .../operators/ApexGroupByKeyOperator.java   |  41 ++-
 .../operators/ApexTimerInternals.java   | 155 +---
 .../translation/ApexStateInternalsTest.java | 368 ---
 .../operators/ApexTimerInternalsTest.java   |  78 +++-
 .../utils/ApexStateInternalsTest.java   | 367 ++
 5 files changed, 567 insertions(+), 442 deletions(-)
--




[jira] [Commented] (BEAM-1970) Cannot run UserScore on Flink runner due to AvroCoder classload issues

2017-04-30 Thread Kenneth Knowles (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-1970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990266#comment-15990266
 ] 

Kenneth Knowles commented on BEAM-1970:
---

I wrote a very small unit test for {{AvroCoder}} that should definitely have 
passed but did not, so it isn't a Flink issue, but Avro. Flink just happens to 
exercise the broken code path. It could bite other non-default class loading 
setups, too. The workaround is pretty easy and surgical in {{AvroCoder}}.

> Cannot run UserScore on Flink runner due to AvroCoder classload issues
> --
>
> Key: BEAM-1970
> URL: https://issues.apache.org/jira/browse/BEAM-1970
> Project: Beam
>  Issue Type: Bug
>  Components: runner-flink
>Reporter: Ahmet Altay
>Assignee: Kenneth Knowles
> Fix For: First stable release
>
>
> Fails with error:
> ClassCastException: 
> org.apache.beam.examples.complete.game.UserScore$GameActionInfo cannot be 
> cast to org.apache.beam.examples.complete.game.UserScore$GameActionInfo
> full stack:
> 
>  The program finished with the following exception:
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error.
> at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:545)
> at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:419)
> at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:339)
> at 
> org.apache.flink.client.CliFrontend.executeProgram(CliFrontend.java:831)
> at org.apache.flink.client.CliFrontend.run(CliFrontend.java:256)
> at 
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1073)
> at org.apache.flink.client.CliFrontend$2.call(CliFrontend.java:1120)
> at org.apache.flink.client.CliFrontend$2.call(CliFrontend.java:1117)
> at 
> org.apache.flink.runtime.security.HadoopSecurityContext$1.run(HadoopSecurityContext.java:43)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:40)
> at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1116)
> Caused by: java.lang.RuntimeException: Pipeline execution failed
> at org.apache.beam.runners.flink.FlinkRunner.run(FlinkRunner.java:119)
> at org.apache.beam.sdk.Pipeline.run(Pipeline.java:265)
> at 
> org.apache.beam.examples.complete.game.UserScore.main(UserScore.java:238)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:528)
> ... 13 more
> Caused by: org.apache.flink.client.program.ProgramInvocationException: The 
> program execution failed: Job execution failed.
> at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:427)
> at 
> org.apache.flink.yarn.YarnClusterClient.submitJob(YarnClusterClient.java:210)
> at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:400)
> at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:387)
> at 
> org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:62)
> at 
> org.apache.beam.runners.flink.FlinkPipelineExecutionEnvironment.executePipeline(FlinkPipelineExecutionEnvironment.java:111)
> at org.apache.beam.runners.flink.FlinkRunner.run(FlinkRunner.java:116)
> ... 20 more
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply$mcV$sp(JobManager.scala:900)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply(JobManager.scala:843)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply(JobManager.scala:843)
> at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
> at 
> 
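The comment on BEAM-1970 calls the `AvroCoder` workaround "easy and surgical." One common pattern for this class of classloading bug (an assumption here, not necessarily the exact fix Beam shipped) is to resolve class names against the thread context class loader, which non-default setups such as Flink's user-code loader install, before falling back to the defining class's loader:

```java
public class ClassResolution {
  /**
   * Loads a class preferring the thread context class loader; falls back to
   * this class's own loader when no context loader is set.
   */
  public static Class<?> resolve(String className) throws ClassNotFoundException {
    ClassLoader loader = Thread.currentThread().getContextClassLoader();
    if (loader == null) {
      loader = ClassResolution.class.getClassLoader();  // fallback
    }
    return Class.forName(className, /* initialize= */ true, loader);
  }
}
```

When the same class name is instead loaded through two different loaders, the two resulting `Class` objects are unequal, producing exactly the `GameActionInfo cannot be cast to GameActionInfo` error in the stack trace above.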

[jira] [Assigned] (BEAM-2095) SourceRDD hasNext not idempotent

2017-04-30 Thread Stas Levin (JIRA)

 [ 
https://issues.apache.org/jira/browse/BEAM-2095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stas Levin reassigned BEAM-2095:


Assignee: Stas Levin  (was: Amit Sela)

> SourceRDD hasNext not idempotent
> 
>
> Key: BEAM-2095
> URL: https://issues.apache.org/jira/browse/BEAM-2095
> Project: Beam
>  Issue Type: Bug
>  Components: runner-spark
>Affects Versions: 0.6.0
>Reporter: Arvid Heise
>Assignee: Stas Levin
>
> When reading an Avro from HDFS with the new HDFSFileSource, we experience the 
> following exceptions:
> {code}
> 17/04/27 11:48:38 ERROR executor.Executor: Exception in task 2.0 in stage 1.0 
> (TID 32)
> java.util.NoSuchElementException
>   at 
> com.gfk.hyperlane.engine.target_group_evaluation.dataset.HDFSFileSource$HDFSFileReader.getCurrent(HDFSFileSource.java:498)
>   at 
> org.apache.beam.runners.spark.io.SourceRDD$Bounded$1.next(SourceRDD.java:142)
>   at 
> org.apache.beam.runners.spark.io.SourceRDD$Bounded$1.next(SourceRDD.java:111)
>   at 
> scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:42)
>   at 
> org.apache.spark.InterruptibleIterator.next(InterruptibleIterator.scala:43)
>   at scala.collection.Iterator$$anon$12.next(Iterator.scala:357)
>   at 
> org.apache.spark.InterruptibleIterator.next(InterruptibleIterator.scala:43)
>   at 
> scala.collection.convert.Wrappers$IteratorWrapper.next(Wrappers.scala:30)
>   at 
> org.apache.beam.runners.spark.translation.SparkProcessContext$ProcCtxtIterator.computeNext(SparkProcessContext.java:165)
>   at 
> org.apache.beam.spark.repackaged.com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:145)
>   at 
> org.apache.beam.spark.repackaged.com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:140)
>   at 
> org.apache.beam.runners.spark.translation.SparkProcessContext$ProcCtxtIterator.computeNext(SparkProcessContext.java:162)
>   at 
> org.apache.beam.spark.repackaged.com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:145)
>   at 
> org.apache.beam.spark.repackaged.com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:140)
>   at 
> org.apache.beam.runners.spark.translation.SparkProcessContext.processPartition(SparkProcessContext.java:64)
>   at 
> org.apache.beam.runners.spark.translation.DoFnFunction.call(DoFnFunction.java:105)
>   at 
> org.apache.beam.runners.spark.translation.DoFnFunction.call(DoFnFunction.java:48)
>   at 
> org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:159)
>   at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:159)
>   at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
>   at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
>   at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>   at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>   at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>   at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>   at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>   at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>   at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>   at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>   at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>   at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
>   at 

[GitHub] beam pull request #2790: Fix hamcrest-core version in parent pom

2017-04-30 Thread iemejia
GitHub user iemejia opened a pull request:

https://github.com/apache/beam/pull/2790

Fix hamcrest-core version in parent pom

Be sure to do all of the following to help us incorporate your contribution
quickly and easily:

 - [ ] Make sure the PR title is formatted like:
   `[BEAM-] Description of pull request`
 - [x] Make sure tests pass via `mvn clean verify`. (Even better, enable
   Travis-CI on your fork and ensure the whole test matrix passes).
 - [ ] Replace `` in the title with the actual Jira issue
   number, if there is one.
 - [ ] If this contribution is large, please file an Apache
   [Individual Contributor License 
Agreement](https://www.apache.org/licenses/icla.pdf).

---


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/iemejia/beam fix-hamcrest-core-dep

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/beam/pull/2790.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2790


commit 73a12f638a87ac996c5ad48bb26b616f680e5a48
Author: Ismaël Mejía 
Date:   2017-04-30T13:06:07Z

Fix hamcrest-core version in parent pom




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Jenkins build is still unstable: beam_PostCommit_Java_MavenInstall #3540

2017-04-30 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PerformanceTests_JDBC #161

2017-04-30 Thread Apache Jenkins Server
See 


--
[...truncated 848.65 KB...]
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory.create(MapTaskExecutorFactory.java:152)
at com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.doWork(DataflowWorker.java:272)
at com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:244)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:127)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:94)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
(790b8609f940638d): java.lang.RuntimeException: org.apache.beam.sdk.util.UserCodeException: org.postgresql.util.PSQLException: The connection attempt failed.
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory$3.typedApply(MapTaskExecutorFactory.java:289)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory$3.typedApply(MapTaskExecutorFactory.java:261)
at com.google.cloud.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:55)
at com.google.cloud.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:43)
at com.google.cloud.dataflow.worker.graph.Networks.replaceDirectedNetworkNodes(Networks.java:78)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory.create(MapTaskExecutorFactory.java:152)
at com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.doWork(DataflowWorker.java:272)
at com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:244)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:127)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:94)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.beam.sdk.util.UserCodeException: org.postgresql.util.PSQLException: The connection attempt failed.
at org.apache.beam.sdk.util.UserCodeException.wrap(UserCodeException.java:36)
at org.apache.beam.sdk.io.jdbc.JdbcIO$Read$ReadFn$auxiliary$cSlrWKiP.invokeSetup(Unknown Source)
at com.google.cloud.dataflow.worker.runners.worker.DoFnInstanceManagers$ConcurrentQueueInstanceManager.deserializeCopy(DoFnInstanceManagers.java:66)
at com.google.cloud.dataflow.worker.runners.worker.DoFnInstanceManagers$ConcurrentQueueInstanceManager.peek(DoFnInstanceManagers.java:48)
at com.google.cloud.dataflow.worker.runners.worker.UserParDoFnFactory.create(UserParDoFnFactory.java:99)
at com.google.cloud.dataflow.worker.runners.worker.DefaultParDoFnFactory.create(DefaultParDoFnFactory.java:70)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory.createParDoOperation(MapTaskExecutorFactory.java:363)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory$3.typedApply(MapTaskExecutorFactory.java:278)
... 14 more
Caused by: org.postgresql.util.PSQLException: The connection attempt failed.
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:272)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:51)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:215)
at org.postgresql.Driver.makeConnection(Driver.java:404)
at org.postgresql.Driver.connect(Driver.java:272)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at org.postgresql.ds.common.BaseDataSource.getConnection(BaseDataSource.java:86)
at org.postgresql.ds.common.BaseDataSource.getConnection(BaseDataSource.java:71)
at 

Build failed in Jenkins: beam_PerformanceTests_Dataflow #355

2017-04-30 Thread Apache Jenkins Server
See 


--
[...truncated 275.61 KB...]
 * [new ref] refs/pull/2716/head -> origin/pr/2716/head
 * [new ref] refs/pull/2716/merge -> origin/pr/2716/merge
 * [new ref] refs/pull/2717/head -> origin/pr/2717/head
 * [new ref] refs/pull/2717/merge -> origin/pr/2717/merge
 * [new ref] refs/pull/2718/head -> origin/pr/2718/head
 * [new ref] refs/pull/2718/merge -> origin/pr/2718/merge
 * [new ref] refs/pull/2719/head -> origin/pr/2719/head
 * [new ref] refs/pull/2719/merge -> origin/pr/2719/merge
 * [new ref] refs/pull/2720/head -> origin/pr/2720/head
 * [new ref] refs/pull/2721/head -> origin/pr/2721/head
 * [new ref] refs/pull/2721/merge -> origin/pr/2721/merge
 * [new ref] refs/pull/2722/head -> origin/pr/2722/head
 * [new ref] refs/pull/2722/merge -> origin/pr/2722/merge
 * [new ref] refs/pull/2723/head -> origin/pr/2723/head
 * [new ref] refs/pull/2723/merge -> origin/pr/2723/merge
 * [new ref] refs/pull/2724/head -> origin/pr/2724/head
 * [new ref] refs/pull/2724/merge -> origin/pr/2724/merge
 * [new ref] refs/pull/2725/head -> origin/pr/2725/head
 * [new ref] refs/pull/2726/head -> origin/pr/2726/head
 * [new ref] refs/pull/2727/head -> origin/pr/2727/head
 * [new ref] refs/pull/2727/merge -> origin/pr/2727/merge
 * [new ref] refs/pull/2728/head -> origin/pr/2728/head
 * [new ref] refs/pull/2729/head -> origin/pr/2729/head
 * [new ref] refs/pull/2729/merge -> origin/pr/2729/merge
 * [new ref] refs/pull/2730/head -> origin/pr/2730/head
 * [new ref] refs/pull/2730/merge -> origin/pr/2730/merge
 * [new ref] refs/pull/2731/head -> origin/pr/2731/head
 * [new ref] refs/pull/2732/head -> origin/pr/2732/head
 * [new ref] refs/pull/2732/merge -> origin/pr/2732/merge
 * [new ref] refs/pull/2733/head -> origin/pr/2733/head
 * [new ref] refs/pull/2733/merge -> origin/pr/2733/merge
 * [new ref] refs/pull/2734/head -> origin/pr/2734/head
 * [new ref] refs/pull/2735/head -> origin/pr/2735/head
 * [new ref] refs/pull/2735/merge -> origin/pr/2735/merge
 * [new ref] refs/pull/2736/head -> origin/pr/2736/head
 * [new ref] refs/pull/2736/merge -> origin/pr/2736/merge
 * [new ref] refs/pull/2737/head -> origin/pr/2737/head
 * [new ref] refs/pull/2737/merge -> origin/pr/2737/merge
 * [new ref] refs/pull/2738/head -> origin/pr/2738/head
 * [new ref] refs/pull/2738/merge -> origin/pr/2738/merge
 * [new ref] refs/pull/2739/head -> origin/pr/2739/head
 * [new ref] refs/pull/2739/merge -> origin/pr/2739/merge
 * [new ref] refs/pull/2740/head -> origin/pr/2740/head
 * [new ref] refs/pull/2740/merge -> origin/pr/2740/merge
 * [new ref] refs/pull/2741/head -> origin/pr/2741/head
 * [new ref] refs/pull/2742/head -> origin/pr/2742/head
 * [new ref] refs/pull/2743/head -> origin/pr/2743/head
 * [new ref] refs/pull/2743/merge -> origin/pr/2743/merge
 * [new ref] refs/pull/2744/head -> origin/pr/2744/head
 * [new ref] refs/pull/2744/merge -> origin/pr/2744/merge
 * [new ref] refs/pull/2745/head -> origin/pr/2745/head
 * [new ref] refs/pull/2745/merge -> origin/pr/2745/merge
 * [new ref] refs/pull/2746/head -> origin/pr/2746/head
 * [new ref] refs/pull/2746/merge -> origin/pr/2746/merge
 * [new ref] refs/pull/2747/head -> origin/pr/2747/head
 * [new ref] refs/pull/2747/merge -> origin/pr/2747/merge
 * [new ref] refs/pull/2748/head -> origin/pr/2748/head
 * [new ref] refs/pull/2748/merge -> origin/pr/2748/merge
 * [new ref] refs/pull/2749/head -> origin/pr/2749/head
 * [new ref] refs/pull/2749/merge -> origin/pr/2749/merge
 * [new ref] refs/pull/2750/head -> origin/pr/2750/head
 * [new ref] refs/pull/2751/head -> origin/pr/2751/head
 * [new ref] refs/pull/2751/merge -> origin/pr/2751/merge
 * [new ref] refs/pull/2752/head -> origin/pr/2752/head
 * [new ref] refs/pull/2752/merge -> origin/pr/2752/merge
 * [new ref] refs/pull/2753/head -> origin/pr/2753/head
 * [new ref] refs/pull/2753/merge -> origin/pr/2753/merge
 * [new ref] refs/pull/2754/head -> origin/pr/2754/head
 * [new ref] refs/pull/2754/merge -> origin/pr/2754/merge
 * [new ref] refs/pull/2755/head -> origin/pr/2755/head
 * [new ref] refs/pull/2756/head -> origin/pr/2756/head
 * [new ref] refs/pull/2756/merge -> origin/pr/2756/merge
 * [new ref] refs/pull/2757/head -> origin/pr/2757/head
 * [new ref] refs/pull/2757/merge -> origin/pr/2757/merge
 * [new ref] 

[jira] [Comment Edited] (BEAM-1676) SdkCoreApiSurfaceTest Failed When Directory Contains Space

2017-04-30 Thread Stas Levin (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-1676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990193#comment-15990193
 ] 

Stas Levin edited comment on BEAM-1676 at 4/30/17 11:32 AM:


We can internalize (euphemism for copy-paste) the fixed {{ClassPath}} class 
implementation from the {{Guava}} master, and use it within {{ApiSurface}} (by 
keeping it private to {{ApiSurface}} it should prevent any clashes with the 
other {{ClassPath}} class residing in the current {{Guava}} version).

I have played with it briefly and it looks ok.

What do you think?




was (Author: staslev):
We can internalize (euphemism for copy-paste) the fixed {{ClassPath}} class 
implementation from the {{Guava}} master, and use it within {{ApiSurface}} (by 
keeping it private to {{ApiSurface}} it should prevent any clashes with the 
other {{ClassPath}} class residing in the current {{Guava}} version).

I have played with it briefly and it looks ok.

What do you think?

> SdkCoreApiSurfaceTest Failed When Directory Contains Space
> --
>
> Key: BEAM-1676
> URL: https://issues.apache.org/jira/browse/BEAM-1676
> Project: Beam
>  Issue Type: Bug
>  Components: sdk-java-core
>Reporter: Mark Liu
>Assignee: Stas Levin
>
> The test fails if the build directory contains a space, for example: "~/dir with space/beam/...".
> The failure happened on Jenkins and can be reproduced locally.
> GcpApiSurfaceTest may have the same problem.
> The error is:
> {code}
> Failed tests: 
>   SdkCoreApiSurfaceTest.testSdkApiSurface:59 
> Expected: API surface to include only:
>   Classes in package "org.apache.beam"
>   Classes in package "com.google.api.client"
>   Classes in package "com.google.api.services.bigquery"
>   Classes in package "com.google.api.services.cloudresourcemanager"
>   Classes in package "com.google.api.services.pubsub"
>   Classes in package "com.google.api.services.storage"
>   Classes in package "com.google.auth"
>   Classes in package "com.google.protobuf"
>   Classes in package "com.fasterxml.jackson.annotation"
>   Classes in package "com.fasterxml.jackson.core"
>   Classes in package "com.fasterxml.jackson.databind"
>   Classes in package "org.apache.avro"
>   Classes in package "org.hamcrest"
>   Classes in package "org.codehaus.jackson"
>   Classes in package "org.joda.time"
>   Classes in package "org.junit"
>   
>  but: The following white-listed scopes did not have matching classes on 
> the API surface:
>   No Classes in package "com.fasterxml.jackson.annotation"
>   No Classes in package "com.fasterxml.jackson.core"
>   No Classes in package "com.fasterxml.jackson.databind"
>   No Classes in package "com.google.api.client"
>   No Classes in package "com.google.api.services.bigquery"
>   No Classes in package "com.google.api.services.cloudresourcemanager"
>   No Classes in package "com.google.api.services.pubsub"
>   No Classes in package "com.google.api.services.storage"
>   No Classes in package "com.google.auth"
>   No Classes in package "com.google.protobuf"
>   No Classes in package "org.apache.avro"
>   No Classes in package "org.apache.beam"
>   No Classes in package "org.codehaus.jackson"
>   No Classes in package "org.hamcrest"
>   No Classes in package "org.joda.time"
>   No Classes in package "org.junit"
> {code}
> Job link from Jenkins:
> https://builds.apache.org/job/beam_PostCommit_Java_Version_Test/14/
> One of the Jenkins jobs uses "JDK 1.8 (latest)", which is also part of the 
> project directory.
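
The failure mode described above usually comes down to treating a URL-escaped classpath entry as a literal file path: a directory named "dir with space" appears in a file: URL as "dir%20with%20space", and scanning the undecoded path finds no classes. A minimal sketch of the distinction, assuming a Unix-style file URL; the class and method names here are illustrative, not Beam's or Guava's actual code:

```java
import java.io.File;
import java.net.URL;

public class SpacePathDemo {
    // Naive conversion: URL.getFile() keeps percent-escapes, so a directory
    // containing a space is looked up on disk as "dir%20with%20space" and
    // classpath scanning finds nothing under it.
    static String naivePath(String url) {
        try {
            return new File(new URL(url).getFile()).getPath();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Going through URL.toURI() decodes the escapes and yields the real
    // on-disk path.
    static String decodedPath(String url) {
        try {
            return new File(new URL(url).toURI()).getPath();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String url = "file:/dir%20with%20space/beam/";
        System.out.println(naivePath(url));   // /dir%20with%20space/beam
        System.out.println(decodedPath(url)); // /dir with space/beam
    }
}
```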



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (BEAM-1676) SdkCoreApiSurfaceTest Failed When Directory Contains Space

2017-04-30 Thread Stas Levin (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-1676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990193#comment-15990193
 ] 

Stas Levin edited comment on BEAM-1676 at 4/30/17 11:31 AM:


We can internalize (euphemism for copy-paste) the fixed {{ClassPath}} class 
implementation from the {{Guava}} master, and use it within {{ApiSurface}} (by 
keeping it private to {{ApiSurface}} it should prevent any clashes with the 
other {{ClassPath}} class residing in the current {{Guava}} version).

I have played with it briefly and it looks ok.

What do you think?


was (Author: staslev):
We can internalise (euphemism for copy-paste) the fixed {{ClassPath}} class 
implementation from the {{Guava}} master, and use it within {{ApiSurface}} (by 
keeping it private to {{ApiSurface}} it should prevent any clashes with the 
other {{ClassPath}} class residing in the current {{Guava}} version).

I have played with it briefly and it looks ok.

What do you think?

> SdkCoreApiSurfaceTest Failed When Directory Contains Space
> --
>
> Key: BEAM-1676
> URL: https://issues.apache.org/jira/browse/BEAM-1676
> Project: Beam
>  Issue Type: Bug
>  Components: sdk-java-core
>Reporter: Mark Liu
>Assignee: Stas Levin
>
> The test fails if the build directory contains a space, for example: "~/dir with space/beam/...".
> The failure happened on Jenkins and can be reproduced locally.
> GcpApiSurfaceTest may have the same problem.
> The error is:
> {code}
> Failed tests: 
>   SdkCoreApiSurfaceTest.testSdkApiSurface:59 
> Expected: API surface to include only:
>   Classes in package "org.apache.beam"
>   Classes in package "com.google.api.client"
>   Classes in package "com.google.api.services.bigquery"
>   Classes in package "com.google.api.services.cloudresourcemanager"
>   Classes in package "com.google.api.services.pubsub"
>   Classes in package "com.google.api.services.storage"
>   Classes in package "com.google.auth"
>   Classes in package "com.google.protobuf"
>   Classes in package "com.fasterxml.jackson.annotation"
>   Classes in package "com.fasterxml.jackson.core"
>   Classes in package "com.fasterxml.jackson.databind"
>   Classes in package "org.apache.avro"
>   Classes in package "org.hamcrest"
>   Classes in package "org.codehaus.jackson"
>   Classes in package "org.joda.time"
>   Classes in package "org.junit"
>   
>  but: The following white-listed scopes did not have matching classes on 
> the API surface:
>   No Classes in package "com.fasterxml.jackson.annotation"
>   No Classes in package "com.fasterxml.jackson.core"
>   No Classes in package "com.fasterxml.jackson.databind"
>   No Classes in package "com.google.api.client"
>   No Classes in package "com.google.api.services.bigquery"
>   No Classes in package "com.google.api.services.cloudresourcemanager"
>   No Classes in package "com.google.api.services.pubsub"
>   No Classes in package "com.google.api.services.storage"
>   No Classes in package "com.google.auth"
>   No Classes in package "com.google.protobuf"
>   No Classes in package "org.apache.avro"
>   No Classes in package "org.apache.beam"
>   No Classes in package "org.codehaus.jackson"
>   No Classes in package "org.hamcrest"
>   No Classes in package "org.joda.time"
>   No Classes in package "org.junit"
> {code}
> Job link from Jenkins:
> https://builds.apache.org/job/beam_PostCommit_Java_Version_Test/14/
> One of the Jenkins jobs uses "JDK 1.8 (latest)", which is also part of the 
> project directory.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (BEAM-1676) SdkCoreApiSurfaceTest Failed When Directory Contains Space

2017-04-30 Thread Stas Levin (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-1676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990193#comment-15990193
 ] 

Stas Levin commented on BEAM-1676:
--

We can internalise (euphemism for copy-paste) the fixed {{ClassPath}} class 
implementation from the {{Guava}} master, and use it within {{ApiSurface}} (by 
keeping it private to {{ApiSurface}} it should prevent any clashes with the 
other {{ClassPath}} class residing in the current {{Guava}} version).

I have played with it briefly and it looks ok.

What do you think?
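
A minimal sketch of that idea, with hypothetical names and nothing assumed about Beam's actual ApiSurface internals: the copied ("internalized") class is a private nested class, so it never shows up on the public API surface and cannot collide with the same-named ClassPath class from the Guava version on the runtime classpath.

```java
import java.util.Collections;
import java.util.List;

public final class ApiSurface {
    // Hypothetical stand-in for a ClassPath implementation copied from a
    // newer Guava. Private nesting keeps it invisible outside ApiSurface,
    // avoiding any clash with com.google.common.reflect.ClassPath.
    private static final class ClassPath {
        static List<String> topLevelClassNames(ClassLoader loader) {
            // The fixed scanning logic (with proper URL decoding) would
            // live here; stubbed out for the sketch.
            return Collections.emptyList();
        }
    }

    // Only this public entry point is visible to callers of ApiSurface.
    public static List<String> exposedClassNames() {
        return ClassPath.topLevelClassNames(ApiSurface.class.getClassLoader());
    }
}
```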

> SdkCoreApiSurfaceTest Failed When Directory Contains Space
> --
>
> Key: BEAM-1676
> URL: https://issues.apache.org/jira/browse/BEAM-1676
> Project: Beam
>  Issue Type: Bug
>  Components: sdk-java-core
>Reporter: Mark Liu
>Assignee: Stas Levin
>
> The test fails if the build directory contains a space, for example: "~/dir with space/beam/...".
> The failure happened on Jenkins and can be reproduced locally.
> GcpApiSurfaceTest may have the same problem.
> The error is:
> {code}
> Failed tests: 
>   SdkCoreApiSurfaceTest.testSdkApiSurface:59 
> Expected: API surface to include only:
>   Classes in package "org.apache.beam"
>   Classes in package "com.google.api.client"
>   Classes in package "com.google.api.services.bigquery"
>   Classes in package "com.google.api.services.cloudresourcemanager"
>   Classes in package "com.google.api.services.pubsub"
>   Classes in package "com.google.api.services.storage"
>   Classes in package "com.google.auth"
>   Classes in package "com.google.protobuf"
>   Classes in package "com.fasterxml.jackson.annotation"
>   Classes in package "com.fasterxml.jackson.core"
>   Classes in package "com.fasterxml.jackson.databind"
>   Classes in package "org.apache.avro"
>   Classes in package "org.hamcrest"
>   Classes in package "org.codehaus.jackson"
>   Classes in package "org.joda.time"
>   Classes in package "org.junit"
>   
>  but: The following white-listed scopes did not have matching classes on 
> the API surface:
>   No Classes in package "com.fasterxml.jackson.annotation"
>   No Classes in package "com.fasterxml.jackson.core"
>   No Classes in package "com.fasterxml.jackson.databind"
>   No Classes in package "com.google.api.client"
>   No Classes in package "com.google.api.services.bigquery"
>   No Classes in package "com.google.api.services.cloudresourcemanager"
>   No Classes in package "com.google.api.services.pubsub"
>   No Classes in package "com.google.api.services.storage"
>   No Classes in package "com.google.auth"
>   No Classes in package "com.google.protobuf"
>   No Classes in package "org.apache.avro"
>   No Classes in package "org.apache.beam"
>   No Classes in package "org.codehaus.jackson"
>   No Classes in package "org.hamcrest"
>   No Classes in package "org.joda.time"
>   No Classes in package "org.junit"
> {code}
> Job link from Jenkins:
> https://builds.apache.org/job/beam_PostCommit_Java_Version_Test/14/
> One of the Jenkins jobs uses "JDK 1.8 (latest)", which is also part of the 
> project directory.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Jenkins build is unstable: beam_PostCommit_Java_MavenInstall #3539

2017-04-30 Thread Apache Jenkins Server
See 




Jenkins build is back to stable : beam_PostCommit_Java_ValidatesRunner_Dataflow #2980

2017-04-30 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PerformanceTests_JDBC #160

2017-04-30 Thread Apache Jenkins Server
See 


Changes:

[lcwik] Remove useless return statement

[lcwik] Remove redundant private on enum constructors

[lcwik] Remove useless continue statements

--
[...truncated 959.06 KB...]
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory.create(MapTaskExecutorFactory.java:152)
at com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.doWork(DataflowWorker.java:272)
at com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:244)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:127)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:94)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
(d48dc35e56c74ee3): java.lang.RuntimeException: org.apache.beam.sdk.util.UserCodeException: org.postgresql.util.PSQLException: The connection attempt failed.
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory$3.typedApply(MapTaskExecutorFactory.java:289)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory$3.typedApply(MapTaskExecutorFactory.java:261)
at com.google.cloud.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:55)
at com.google.cloud.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:43)
at com.google.cloud.dataflow.worker.graph.Networks.replaceDirectedNetworkNodes(Networks.java:78)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory.create(MapTaskExecutorFactory.java:152)
at com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.doWork(DataflowWorker.java:272)
at com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:244)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:127)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:94)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.beam.sdk.util.UserCodeException: org.postgresql.util.PSQLException: The connection attempt failed.
at org.apache.beam.sdk.util.UserCodeException.wrap(UserCodeException.java:36)
at org.apache.beam.sdk.io.jdbc.JdbcIO$Read$ReadFn$auxiliary$eJfuLyKH.invokeSetup(Unknown Source)
at com.google.cloud.dataflow.worker.runners.worker.DoFnInstanceManagers$ConcurrentQueueInstanceManager.deserializeCopy(DoFnInstanceManagers.java:66)
at com.google.cloud.dataflow.worker.runners.worker.DoFnInstanceManagers$ConcurrentQueueInstanceManager.peek(DoFnInstanceManagers.java:48)
at com.google.cloud.dataflow.worker.runners.worker.UserParDoFnFactory.create(UserParDoFnFactory.java:99)
at com.google.cloud.dataflow.worker.runners.worker.DefaultParDoFnFactory.create(DefaultParDoFnFactory.java:70)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory.createParDoOperation(MapTaskExecutorFactory.java:363)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory$3.typedApply(MapTaskExecutorFactory.java:278)
... 14 more
Caused by: org.postgresql.util.PSQLException: The connection attempt failed.
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:272)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:51)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:215)
at org.postgresql.Driver.makeConnection(Driver.java:404)
at org.postgresql.Driver.connect(Driver.java:272)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at 
