[jira] [Created] (BEAM-8344) Add infer schema support in ParquetIO and refactor ParquetTableProvider

2019-10-02 Thread Vishwas (Jira)
Vishwas created BEAM-8344:
-

 Summary: Add infer schema support in ParquetIO and refactor 
ParquetTableProvider
 Key: BEAM-8344
 URL: https://issues.apache.org/jira/browse/BEAM-8344
 Project: Beam
  Issue Type: Improvement
  Components: dsl-sql, io-java-parquet
Reporter: Vishwas
Assignee: Vishwas


Add support for inferring Beam Schema in ParquetIO.
Refactor ParquetTable code to use Convert.rows().
Remove unnecessary java class GenericRecordReadConverter.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (BEAM-8331) Vendored calcite breaks if another calcite is on the class path

2019-10-02 Thread Kai Jiang (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Jiang reassigned BEAM-8331:
---

Assignee: Kai Jiang

> Vendored calcite breaks if another calcite is on the class path
> ---
>
> Key: BEAM-8331
> URL: https://issues.apache.org/jira/browse/BEAM-8331
> Project: Beam
>  Issue Type: Bug
>  Components: dsl-sql
>Affects Versions: 2.15.0, 2.16.0
>Reporter: Andrew Pilloud
>Assignee: Kai Jiang
>Priority: Major
>
> If the Beam vendored calcite and a non-vendored calcite are both on the 
> classpath, neither version works. This is because the non-JDBC calcite path 
> uses JDBC as an easy way to perform reflection. (This affects the non-JDBC 
> version of calcite.) We need to rewrite the calcite JDBC URLs as part of our 
> vendoring (for example, 'jdbc:calcite:' to 'jdbc:beam-vendor-calcite:'). 
> Example of where this happens: 
> [https://github.com/apache/calcite/blob/0cce229903a845a7b8ed36cf86d6078fd82d73d3/core/src/main/java/org/apache/calcite/tools/Frameworks.java#L175]
>  
> {code:java}
> java.lang.RuntimeException: java.lang.RuntimeException: Property 
> 'org.apache.beam.sdk.extensions.sql.impl.planner.BeamRelDataTypeSystem' not 
> valid for plugin type org.apache.calcite.rel.type.RelDataTypeSystem
>   at 
> org.apache.beam.vendor.calcite.v1_20_0.org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:160)
>   at 
> org.apache.beam.vendor.calcite.v1_20_0.org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:115)
>   at 
> org.apache.beam.sdk.extensions.sql.zetasql.ZetaSQLPlannerImpl.(ZetaSQLPlannerImpl.java:86)
>   at 
> org.apache.beam.sdk.extensions.sql.zetasql.ZetaSQLQueryPlanner.(ZetaSQLQueryPlanner.java:55){code}
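The proposed fix (giving the vendored calcite its own JDBC scheme) can be sketched in isolation. The following is a hypothetical, standalone Python illustration of the string rewrite only; Beam's actual vendoring operates on relocated Java packages, and the function name here is invented for the example. Only the two URL prefixes come from the issue text.

```python
# Sketch of the JDBC URL rewrite proposed in the issue: the vendored
# calcite should register its own JDBC scheme so it cannot collide with
# a non-vendored calcite present on the same classpath.
UPSTREAM_PREFIX = "jdbc:calcite:"
VENDORED_PREFIX = "jdbc:beam-vendor-calcite:"

def rewrite_jdbc_url(url: str) -> str:
    """Rewrite an upstream calcite JDBC URL to the vendored scheme."""
    if url.startswith(UPSTREAM_PREFIX):
        return VENDORED_PREFIX + url[len(UPSTREAM_PREFIX):]
    return url  # leave non-calcite URLs untouched

print(rewrite_jdbc_url("jdbc:calcite:fun=standard"))
# jdbc:beam-vendor-calcite:fun=standard
```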





[jira] [Commented] (BEAM-8331) Vendored calcite breaks if another calcite is on the class path

2019-10-02 Thread Kai Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/BEAM-8331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943340#comment-16943340
 ] 

Kai Jiang commented on BEAM-8331:
-

Sure. I assigned it to myself.

> Vendored calcite breaks if another calcite is on the class path
> ---
>
> Key: BEAM-8331
> URL: https://issues.apache.org/jira/browse/BEAM-8331
> Project: Beam
>  Issue Type: Bug
>  Components: dsl-sql
>Affects Versions: 2.15.0, 2.16.0
>Reporter: Andrew Pilloud
>Priority: Major
>





[jira] [Work logged] (BEAM-7389) Colab examples for element-wise transforms (Python)

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-7389?focusedWorklogId=322389&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322389
 ]

ASF GitHub Bot logged work on BEAM-7389:


Author: ASF GitHub Bot
Created on: 03/Oct/19 01:04
Start Date: 03/Oct/19 01:04
Worklog Time Spent: 10m 
  Work Description: aaltay commented on issue #9664: [BEAM-7389] Created 
code files to match doc filenames
URL: https://github.com/apache/beam/pull/9664#issuecomment-537742898
 
 
   Is this ready to review?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322389)
Time Spent: 62h 20m  (was: 62h 10m)

> Colab examples for element-wise transforms (Python)
> ---
>
> Key: BEAM-7389
> URL: https://issues.apache.org/jira/browse/BEAM-7389
> Project: Beam
>  Issue Type: Improvement
>  Components: website
>Reporter: Rose Nguyen
>Assignee: David Cavazos
>Priority: Minor
>  Time Spent: 62h 20m
>  Remaining Estimate: 0h
>






[jira] [Updated] (BEAM-8286) Python precommit (:sdks:python:test-suites:tox:py2:docs) failing

2019-10-02 Thread Ahmet Altay (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmet Altay updated BEAM-8286:
--
Status: Open  (was: Triage Needed)

> Python precommit (:sdks:python:test-suites:tox:py2:docs) failing
> 
>
> Key: BEAM-8286
> URL: https://issues.apache.org/jira/browse/BEAM-8286
> Project: Beam
>  Issue Type: Bug
>  Components: test-failures
>Reporter: Kyle Weaver
>Assignee: Kyle Weaver
>Priority: Major
> Fix For: Not applicable
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Example failure: 
> [https://builds.apache.org/job/beam_PreCommit_Python_Commit/8638/console]
> 17:29:13 * What went wrong:
> 17:29:13 Execution failed for task ':sdks:python:test-suites:tox:py2:docs'.
> 17:29:13 > Process 'command 'sh'' finished with non-zero exit value 1
> Fails on my local machine (at HEAD) as well; I can't determine the exact cause.
> ERROR: InvocationError for command /usr/bin/time scripts/generate_pydoc.sh 
> (exited with code 1) 
>  





[jira] [Commented] (BEAM-8286) Python precommit (:sdks:python:test-suites:tox:py2:docs) failing

2019-10-02 Thread Ahmet Altay (Jira)


[ 
https://issues.apache.org/jira/browse/BEAM-8286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943272#comment-16943272
 ] 

Ahmet Altay commented on BEAM-8286:
---

Can we close this? The last PR seems to have resolved the issue.

By the way, for completeness, here is the list of new sub-documentation links: 
https://github.com/googleapis/google-cloud-python/issues/9386#issuecomment-537649044

> Python precommit (:sdks:python:test-suites:tox:py2:docs) failing
> 
>
> Key: BEAM-8286
> URL: https://issues.apache.org/jira/browse/BEAM-8286
> Project: Beam
>  Issue Type: Bug
>  Components: test-failures
>Reporter: Kyle Weaver
>Assignee: Kyle Weaver
>Priority: Major
> Fix For: Not applicable
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>





[jira] [Comment Edited] (BEAM-7049) Merge multiple input to one BeamUnionRel

2019-10-02 Thread sridhar Reddy (Jira)


[ 
https://issues.apache.org/jira/browse/BEAM-7049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943253#comment-16943253
 ] 

sridhar Reddy edited comment on BEAM-7049 at 10/3/19 12:07 AM:
---

Sounds good. Take your time. I appreciate you helping people on the Outreachy 
program besides me.  You are a great apache beam community member!


was (Author: sridharg):
Sounds good. Take your time. I appreciate you helping people on the Outreachy 
program besides me.  

> Merge multiple input to one BeamUnionRel
> 
>
> Key: BEAM-7049
> URL: https://issues.apache.org/jira/browse/BEAM-7049
> Project: Beam
>  Issue Type: Improvement
>  Components: dsl-sql
>Reporter: Rui Wang
>Assignee: sridhar Reddy
>Priority: Major
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> BeamUnionRel assumes exactly two inputs and rejects more, so `a UNION b UNION c` 
> has to be constructed as UNION(a, UNION(b, c)), which incurs two shuffles. If 
> BeamUnionRel could handle multiple inputs, we would need only one shuffle.
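The shuffle argument can be illustrated with a small standalone sketch (hypothetical types, not Beam's actual relational-node classes): flattening the nested binary tree UNION(a, UNION(b, c)) into a single n-ary UNION(a, b, c) leaves one union node, and hence one shuffle, instead of two.

```python
from dataclasses import dataclass

@dataclass
class Union:
    inputs: list  # each element is another Union or a leaf (a table name)

def flatten(node):
    """Collapse nested unions: UNION(a, UNION(b, c)) -> UNION(a, b, c)."""
    if not isinstance(node, Union):
        return node
    flat = []
    for child in node.inputs:
        child = flatten(child)
        if isinstance(child, Union):
            # Absorb the child union's inputs directly into the parent.
            flat.extend(child.inputs)
        else:
            flat.append(child)
    return Union(flat)

nested = Union(["a", Union(["b", "c"])])          # two union nodes -> two shuffles
assert flatten(nested) == Union(["a", "b", "c"])  # one union node -> one shuffle
```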





[jira] [Commented] (BEAM-7049) Merge multiple input to one BeamUnionRel

2019-10-02 Thread sridhar Reddy (Jira)


[ 
https://issues.apache.org/jira/browse/BEAM-7049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943253#comment-16943253
 ] 

sridhar Reddy commented on BEAM-7049:
-

Sounds good. Take your time. I appreciate you helping people on the Outreachy 
program besides me.  

> Merge multiple input to one BeamUnionRel
> 
>
> Key: BEAM-7049
> URL: https://issues.apache.org/jira/browse/BEAM-7049
> Project: Beam
>  Issue Type: Improvement
>  Components: dsl-sql
>Reporter: Rui Wang
>Assignee: sridhar Reddy
>Priority: Major
>  Time Spent: 1h
>  Remaining Estimate: 0h
>





[jira] [Closed] (BEAM-8321) beam_PostCommit_PortableJar_Flink postcommit failing

2019-10-02 Thread Kyle Weaver (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kyle Weaver closed BEAM-8321.
-
Fix Version/s: 2.17.0
   Resolution: Fixed

> beam_PostCommit_PortableJar_Flink postcommit failing
> 
>
> Key: BEAM-8321
> URL: https://issues.apache.org/jira/browse/BEAM-8321
> Project: Beam
>  Issue Type: Bug
>  Components: test-failures
>Reporter: Kyle Weaver
>Assignee: Kyle Weaver
>Priority: Major
> Fix For: 2.17.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Python docker container image names/tags need to be updated.





[jira] [Work logged] (BEAM-7738) Support PubSubIO to be configured externally for use with other SDKs

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-7738?focusedWorklogId=322355&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322355
 ]

ASF GitHub Bot logged work on BEAM-7738:


Author: ASF GitHub Bot
Created on: 02/Oct/19 23:20
Start Date: 02/Oct/19 23:20
Worklog Time Spent: 10m 
  Work Description: chadrik commented on issue #9268: [BEAM-7738] Add 
external transform support to PubsubIO
URL: https://github.com/apache/beam/pull/9268#issuecomment-537721270
 
 
   This is now waiting on the BooleanCoder in python
 



Issue Time Tracking
---

Worklog Id: (was: 322355)
Time Spent: 4h 50m  (was: 4h 40m)

> Support PubSubIO to be configured externally for use with other SDKs
> 
>
> Key: BEAM-7738
> URL: https://issues.apache.org/jira/browse/BEAM-7738
> Project: Beam
>  Issue Type: New Feature
>  Components: io-java-gcp, runner-flink, sdk-py-core
>Reporter: Chad Dombrova
>Assignee: Chad Dombrova
>Priority: Major
>  Labels: portability
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Now that KafkaIO is supported via the external transform API (BEAM-7029) we 
> should add support for PubSub.





[jira] [Work logged] (BEAM-8335) Add streaming support to Interactive Beam

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8335?focusedWorklogId=322338&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322338
 ]

ASF GitHub Bot logged work on BEAM-8335:


Author: ASF GitHub Bot
Created on: 02/Oct/19 22:50
Start Date: 02/Oct/19 22:50
Worklog Time Spent: 10m 
  Work Description: rohdesamuel commented on issue #9720: [BEAM-8335] Add 
initial modules for interactive streaming support
URL: https://github.com/apache/beam/pull/9720#issuecomment-537713992
 
 
   R: @robertwb can you please review?
 



Issue Time Tracking
---

Worklog Id: (was: 322338)
Time Spent: 40m  (was: 0.5h)

> Add streaming support to Interactive Beam
> -
>
> Key: BEAM-8335
> URL: https://issues.apache.org/jira/browse/BEAM-8335
> Project: Beam
>  Issue Type: Improvement
>  Components: runner-py-interactive
>Reporter: Sam Rohde
>Assignee: Sam Rohde
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This issue tracks the work items to introduce streaming support to the 
> Interactive Beam experience. This will allow users to:
>  * Write and run a streaming job in IPython
>  * Automatically cache records from unbounded sources
>  * Add a replay experience that replays all cached records to simulate the 
> original pipeline execution
>  * Add controls to play/pause/stop/step individual elements from the cached 
> records
>  * Add ability to inspect/visualize unbounded PCollections





[jira] [Work logged] (BEAM-8213) Run and report python tox tasks separately within Jenkins

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8213?focusedWorklogId=322337&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322337
 ]

ASF GitHub Bot logged work on BEAM-8213:


Author: ASF GitHub Bot
Created on: 02/Oct/19 22:45
Start Date: 02/Oct/19 22:45
Worklog Time Spent: 10m 
  Work Description: tvalentyn commented on pull request #9706: [BEAM-8213] 
Split out lint job from monolithic python preCommit tests on jenkins
URL: https://github.com/apache/beam/pull/9706
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 322337)
Time Spent: 13h 10m  (was: 13h)

> Run and report python tox tasks separately within Jenkins
> -
>
> Key: BEAM-8213
> URL: https://issues.apache.org/jira/browse/BEAM-8213
> Project: Beam
>  Issue Type: Improvement
>  Components: build-system
>Reporter: Chad Dombrova
>Assignee: Chad Dombrova
>Priority: Major
>  Time Spent: 13h 10m
>  Remaining Estimate: 0h
>
> As a python developer, the speed and comprehensibility of the jenkins 
> PreCommit job could be greatly improved.
> Here are some of the problems:
> - when a lint job fails, it's not reported in the test results summary, so 
> even though the job is marked as failed, I see "Test Result (no failures)" 
> which is quite confusing
> - I have to wait for over an hour to discover the lint failed, which takes 
> about a minute to run on its own
> - The logs are a jumbled mess of all the different tasks running on top of 
> each other
> - The test results give no indication of which version of python they use.  I 
> click on Test results, then the test module, then the test class, then I see 
> 4 tests named the same thing.  I assume that the first is python 2.7, the 
> second is 3.5 and so on.   It takes 5 clicks and then reading the log output 
> to know which version of python a single error pertains to, then I need to 
> repeat for each failure.  This makes it very difficult to discover problems, 
> and deduce that they may have something to do with python version mismatches.
> I believe the solution to this is to split up the single monolithic python 
> PreCommit job into sub-jobs (possibly using a pipeline with steps).  This 
> would give us the following benefits:
> - sub job results should become available as they finish, so for example, 
> lint results should be available very early on
> - sub job results will be reported separately, and there will be a job for 
> each py2, py35, py36 and so on, so it will be clear when an error is related 
> to a particular python version
> - sub jobs without reports, like docs and lint, will have their own failure 
> status and logs, so when they fail it will be more obvious what went wrong.
> I'm happy to help out once I get some feedback on the desired way forward.
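The proposed split can be sketched abstractly. The environment list and job-naming pattern below are illustrative assumptions, not Beam's actual Jenkins job definitions; the point is only that each tox environment reports under its own name.

```python
# Hypothetical sketch of the split: instead of one monolithic Python
# PreCommit job, derive one sub-job per tox environment so lint/docs
# failures and per-interpreter results surface separately.
TOX_ENVS = ["py27", "py35", "py36", "py37", "lint", "docs"]

def sub_job_name(env: str) -> str:
    # Naming pattern is an assumption for illustration only.
    return f"beam_PreCommit_Python_{env}"

for env in TOX_ENVS:
    print(sub_job_name(env))
```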





[jira] [Work logged] (BEAM-8213) Run and report python tox tasks separately within Jenkins

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8213?focusedWorklogId=322332&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322332
 ]

ASF GitHub Bot logged work on BEAM-8213:


Author: ASF GitHub Bot
Created on: 02/Oct/19 22:44
Start Date: 02/Oct/19 22:44
Worklog Time Spent: 10m 
  Work Description: tvalentyn commented on issue #9706: [BEAM-8213] Split 
out lint job from monolithic python preCommit tests on jenkins
URL: https://github.com/apache/beam/pull/9706#issuecomment-537712360
 
 
We can see that the precommit was triggered successfully; I think this is ready 
to merge.
 



Issue Time Tracking
---

Worklog Id: (was: 322332)
Time Spent: 13h  (was: 12h 50m)

> Run and report python tox tasks separately within Jenkins
> -
>
> Key: BEAM-8213
> URL: https://issues.apache.org/jira/browse/BEAM-8213
> Project: Beam
>  Issue Type: Improvement
>  Components: build-system
>Reporter: Chad Dombrova
>Assignee: Chad Dombrova
>Priority: Major
>  Time Spent: 13h
>  Remaining Estimate: 0h
>





[jira] [Commented] (BEAM-8183) Optionally bundle multiple pipelines into a single Flink jar

2019-10-02 Thread Kyle Weaver (Jira)


[ 
https://issues.apache.org/jira/browse/BEAM-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943213#comment-16943213
 ] 

Kyle Weaver commented on BEAM-8183:
---

For pipeline options, I filed https://issues.apache.org/jira/browse/BEAM-8115, 
which should be mostly orthogonal to the issue of multiple pipelines.

Also, if we do decide to support multi-pipeline jars, we might just do it in 
Java, because it will require some Java changes anyway, and Java's jar utility 
libraries are a lot less clunky than shell.

> Optionally bundle multiple pipelines into a single Flink jar
> 
>
> Key: BEAM-8183
> URL: https://issues.apache.org/jira/browse/BEAM-8183
> Project: Beam
>  Issue Type: New Feature
>  Components: runner-flink
>Reporter: Kyle Weaver
>Assignee: Kyle Weaver
>Priority: Major
>  Labels: portability-flink
>
> [https://github.com/apache/beam/pull/9331#issuecomment-526734851]
> "With Flink you can bundle multiple entry points into the same jar file and 
> specify which one to use with optional flags. It may be desirable to allow 
> inclusion of multiple pipelines for this tool also, although that would 
> require a different workflow. Absent this option, it becomes quite convoluted 
> for users that need the flexibility to choose which pipeline to launch at 
> submission time."





[jira] [Work logged] (BEAM-8213) Run and report python tox tasks separately within Jenkins

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8213?focusedWorklogId=322329&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322329
 ]

ASF GitHub Bot logged work on BEAM-8213:


Author: ASF GitHub Bot
Created on: 02/Oct/19 22:41
Start Date: 02/Oct/19 22:41
Worklog Time Spent: 10m 
  Work Description: tvalentyn commented on issue #9706: [BEAM-8213] Split 
out lint job from monolithic python preCommit tests on jenkins
URL: https://github.com/apache/beam/pull/9706#issuecomment-537711508
 
 
   Running the python precommit to make sure it still works now that the seed 
job succeeded.
 



Issue Time Tracking
---

Worklog Id: (was: 322329)
Time Spent: 12h 50m  (was: 12h 40m)

> Run and report python tox tasks separately within Jenkins
> -
>
> Key: BEAM-8213
> URL: https://issues.apache.org/jira/browse/BEAM-8213
> Project: Beam
>  Issue Type: Improvement
>  Components: build-system
>Reporter: Chad Dombrova
>Assignee: Chad Dombrova
>Priority: Major
>  Time Spent: 12h 50m
>  Remaining Estimate: 0h
>





[jira] [Work logged] (BEAM-8213) Run and report python tox tasks separately within Jenkins

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8213?focusedWorklogId=322326&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322326
 ]

ASF GitHub Bot logged work on BEAM-8213:


Author: ASF GitHub Bot
Created on: 02/Oct/19 22:40
Start Date: 02/Oct/19 22:40
Worklog Time Spent: 10m 
  Work Description: tvalentyn commented on issue #9706: [BEAM-8213] Split 
out lint job from monolithic python preCommit tests on jenkins
URL: https://github.com/apache/beam/pull/9706#issuecomment-537711292
 
 
   Python_PVR_Flink failures are not related. Build scan is broken for a 
different reason (there is a thread on dev@ about that). 
 



Issue Time Tracking
---

Worklog Id: (was: 322326)
Time Spent: 12.5h  (was: 12h 20m)

> Run and report python tox tasks separately within Jenkins
> -
>
> Key: BEAM-8213
> URL: https://issues.apache.org/jira/browse/BEAM-8213
> Project: Beam
>  Issue Type: Improvement
>  Components: build-system
>Reporter: Chad Dombrova
>Assignee: Chad Dombrova
>Priority: Major
>  Time Spent: 12.5h
>  Remaining Estimate: 0h
>





[jira] [Work logged] (BEAM-8213) Run and report python tox tasks separately within Jenkins

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8213?focusedWorklogId=322328&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322328
 ]

ASF GitHub Bot logged work on BEAM-8213:


Author: ASF GitHub Bot
Created on: 02/Oct/19 22:40
Start Date: 02/Oct/19 22:40
Worklog Time Spent: 10m 
  Work Description: tvalentyn commented on issue #9706: [BEAM-8213] Split 
out lint job from monolithic python preCommit tests on jenkins
URL: https://github.com/apache/beam/pull/9706#issuecomment-537711375
 
 
   Run Python PreCommit
 



Issue Time Tracking
---

Worklog Id: (was: 322328)
Time Spent: 12h 40m  (was: 12.5h)

> Run and report python tox tasks separately within Jenkins
> -
>
> Key: BEAM-8213
> URL: https://issues.apache.org/jira/browse/BEAM-8213
> Project: Beam
>  Issue Type: Improvement
>  Components: build-system
>Reporter: Chad Dombrova
>Assignee: Chad Dombrova
>Priority: Major
>  Time Spent: 12h 40m
>  Remaining Estimate: 0h
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (BEAM-8213) Run and report python tox tasks separately within Jenkins

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8213?focusedWorklogId=322323=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322323
 ]

ASF GitHub Bot logged work on BEAM-8213:


Author: ASF GitHub Bot
Created on: 02/Oct/19 22:37
Start Date: 02/Oct/19 22:37
Worklog Time Spent: 10m 
  Work Description: chadrik commented on issue #9706: [BEAM-8213] Split out 
lint job from monolithic python preCommit tests on jenkins
URL: https://github.com/apache/beam/pull/9706#issuecomment-537710321
 
 
   I triple checked that I haven't changed anything about Python_PVR_Flink.
 



Issue Time Tracking
---

Worklog Id: (was: 322323)
Time Spent: 12h 20m  (was: 12h 10m)

> Run and report python tox tasks separately within Jenkins
> -
>
> Key: BEAM-8213
> URL: https://issues.apache.org/jira/browse/BEAM-8213
> Project: Beam
>  Issue Type: Improvement
>  Components: build-system
>Reporter: Chad Dombrova
>Assignee: Chad Dombrova
>Priority: Major
>  Time Spent: 12h 20m
>  Remaining Estimate: 0h
>





[jira] [Work logged] (BEAM-8335) Add streaming support to Interactive Beam

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8335?focusedWorklogId=322320&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322320
 ]

ASF GitHub Bot logged work on BEAM-8335:


Author: ASF GitHub Bot
Created on: 02/Oct/19 22:27
Start Date: 02/Oct/19 22:27
Worklog Time Spent: 10m 
  Work Description: rohdesamuel commented on issue #9720: [BEAM-8335] Add 
initial modules for interactive streaming support
URL: https://github.com/apache/beam/pull/9720#issuecomment-537707933
 
 
   Run PythonLint PreCommit
 



Issue Time Tracking
---

Worklog Id: (was: 322320)
Time Spent: 0.5h  (was: 20m)

> Add streaming support to Interactive Beam
> -
>
> Key: BEAM-8335
> URL: https://issues.apache.org/jira/browse/BEAM-8335
> Project: Beam
>  Issue Type: Improvement
>  Components: runner-py-interactive
>Reporter: Sam Rohde
>Assignee: Sam Rohde
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> This issue tracks the work items to introduce streaming support to the 
> Interactive Beam experience. This will allow users to:
>  * Write and run a streaming job in IPython
>  * Automatically cache records from unbounded sources
>  * Add a replay experience that replays all cached records to simulate the 
> original pipeline execution
>  * Add controls to play/pause/stop/step individual elements from the cached 
> records
>  * Add ability to inspect/visualize unbounded PCollections
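
The replay experience described above can be modeled minimally: cache timestamped records as they arrive from an unbounded source, then re-emit them in timestamp order to simulate the original stream. A toy, self-contained sketch of that idea (purely illustrative — not the actual StreamingCache/TestStream API from the PR; all names here are assumptions):

```python
import bisect

class ReplayCache:
    """Caches (timestamp, element) pairs and replays them in event order.

    Illustrative stand-in for a streaming record cache; not Beam code.
    """
    def __init__(self):
        self._records = []

    def write(self, timestamp, element):
        # Keep records sorted by timestamp as they arrive out of order.
        bisect.insort(self._records, (timestamp, element))

    def replay(self):
        # Yield cached elements in timestamp order, the way a
        # TestStream-style source would re-deliver them.
        for ts, element in self._records:
            yield ts, element

cache = ReplayCache()
cache.write(2, "b")
cache.write(1, "a")
assert list(cache.replay()) == [(1, "a"), (2, "b")]
```

A real implementation would also have to preserve watermark advances and processing-time gaps, which this sketch ignores.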





[jira] [Work logged] (BEAM-8335) Add streaming support to Interactive Beam

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8335?focusedWorklogId=322315&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322315
 ]

ASF GitHub Bot logged work on BEAM-8335:


Author: ASF GitHub Bot
Created on: 02/Oct/19 22:19
Start Date: 02/Oct/19 22:19
Worklog Time Spent: 10m 
  Work Description: rohdesamuel commented on pull request #9720: 
[BEAM-8335] Add initial modules for interactive streaming support
URL: https://github.com/apache/beam/pull/9720
 
 
   - Adds the InteractiveService proto and impl for sending remote scripts to a 
TestStream
   - Adds the StreamingCache that knows how to read from an underlying cache 
and correctly emit elements to the TestStream
   
   
   
   Thank you for your contribution! Follow this checklist to help us 
incorporate your contribution quickly and easily:
   
- [ ] [**Choose 
reviewer(s)**](https://beam.apache.org/contribute/#make-your-change) and 
mention them in a comment (`R: @username`).
- [ ] Format the pull request title like `[BEAM-XXX] Fixes bug in 
ApproximateQuantiles`, where you replace `BEAM-XXX` with the appropriate JIRA 
issue, if applicable. This will automatically link the pull request to the 
issue.
- [ ] If this contribution is large, please file an Apache [Individual 
Contributor License Agreement](https://www.apache.org/licenses/icla.pdf).
   
   Post-Commit Tests Status (on master branch)
   

   
   Lang | SDK | Apex | Dataflow | Flink | Gearpump | Samza | Spark
   --- | --- | --- | --- | --- | --- | --- | ---
   Go | [![Build 
Status](https://builds.apache.org/job/beam_PostCommit_Go/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Go/lastCompletedBuild/)
 | --- | --- | [![Build 
Status](https://builds.apache.org/job/beam_PostCommit_Go_VR_Flink/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Go_VR_Flink/lastCompletedBuild/)
 | --- | --- | [![Build 
Status](https://builds.apache.org/job/beam_PostCommit_Go_VR_Spark/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Go_VR_Spark/lastCompletedBuild/)
   Java | [![Build 
Status](https://builds.apache.org/job/beam_PostCommit_Java/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Java/lastCompletedBuild/)
 | [![Build 
Status](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Apex/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Apex/lastCompletedBuild/)
 | [![Build 
Status](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Dataflow/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Dataflow/lastCompletedBuild/)
 | [![Build 
Status](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Flink/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Flink/lastCompletedBuild/)[![Build
 
Status](https://builds.apache.org/job/beam_PostCommit_Java_PVR_Flink_Batch/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Java_PVR_Flink_Batch/lastCompletedBuild/)[![Build
 
Status](https://builds.apache.org/job/beam_PostCommit_Java_PVR_Flink_Streaming/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Java_PVR_Flink_Streaming/lastCompletedBuild/)
 | [![Build 
Status](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Gearpump/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Gearpump/lastCompletedBuild/)
 | [![Build 
Status](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Samza/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Samza/lastCompletedBuild/)
 | [![Build 
Status](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Spark/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Spark/lastCompletedBuild/)[![Build
 
Status](https://builds.apache.org/job/beam_PostCommit_Java_PVR_Spark_Batch/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Java_PVR_Spark_Batch/lastCompletedBuild/)
   Python | [![Build 
Status](https://builds.apache.org/job/beam_PostCommit_Python2/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Python2/lastCompletedBuild/)[![Build
 
Status](https://builds.apache.org/job/beam_PostCommit_Python35/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Python35/lastCompletedBuild/)[![Build
 

[jira] [Commented] (BEAM-8183) Optionally bundle multiple pipelines into a single Flink jar

2019-10-02 Thread Ankur Goenka (Jira)


[ 
https://issues.apache.org/jira/browse/BEAM-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943201#comment-16943201
 ] 

Ankur Goenka commented on BEAM-8183:


I agree, changing a bit of configuration in the proto will serve a lot of use 
cases. A few examples are the input/output data files, etc.
{quote} 
 You are correct that the Python entry point / driver program would need to be 
(re)executed for a fully generic solution. But that's not necessary for the 
majority of use cases. Those are artifact + configuration. If there is a way to 
parameterize configuration values in the proto, we can address that majority of 
use cases with a single job jar artifact.
{quote}
Will 
[value_provider|https://github.com/apache/beam/blob/master/sdks/python/apache_beam/options/value_provider.py]
 help in this case? Dataflow templates use this.

Also, we can enhance the driver class to swap the actual option values in the 
options proto for parameters provided at submission time.
{quote} 
 But beyond that we also have (in our infrastructure) the use case of multiple 
entry points that the user can pick at submit time.
  
{quote}
 
 That's a valid use case. I can't imagine a good way to model it in Beam, as 
all the Beam notions are built around a single pipeline at a time. Would a 
shell script capable of merging the jars for the different pipelines work?

I think a pipeline Docker image could resolve a lot of these issues, as it 
would be capable of running the submission code in a consistent manner based on 
the arguments provided.

> Optionally bundle multiple pipelines into a single Flink jar
> 
>
> Key: BEAM-8183
> URL: https://issues.apache.org/jira/browse/BEAM-8183
> Project: Beam
>  Issue Type: New Feature
>  Components: runner-flink
>Reporter: Kyle Weaver
>Assignee: Kyle Weaver
>Priority: Major
>  Labels: portability-flink
>
> [https://github.com/apache/beam/pull/9331#issuecomment-526734851]
> "With Flink you can bundle multiple entry points into the same jar file and 
> specify which one to use with optional flags. It may be desirable to allow 
> inclusion of multiple pipelines for this tool also, although that would 
> require a different workflow. Absent this option, it becomes quite convoluted 
> for users that need the flexibility to choose which pipeline to launch at 
> submission time."





[jira] [Created] (BEAM-8342) upgrade samza runner to use samza 1.2

2019-10-02 Thread Hai Lu (Jira)
Hai Lu created BEAM-8342:


 Summary: upgrade samza runner to use samza 1.2
 Key: BEAM-8342
 URL: https://issues.apache.org/jira/browse/BEAM-8342
 Project: Beam
  Issue Type: Task
  Components: runner-samza
Reporter: Hai Lu
Assignee: Hai Lu


Some changes are needed to support v1.2 of Samza.





[jira] [Created] (BEAM-8341) basic bundling support for samza portable runner

2019-10-02 Thread Hai Lu (Jira)
Hai Lu created BEAM-8341:


 Summary: basic bundling support for samza portable runner
 Key: BEAM-8341
 URL: https://issues.apache.org/jira/browse/BEAM-8341
 Project: Beam
  Issue Type: Task
  Components: runner-samza
Reporter: Hai Lu
Assignee: Hai Lu


bundling support for samza portable runner





[jira] [Updated] (BEAM-8340) Support stable id in stateful transforms

2019-10-02 Thread Hai Lu (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Lu updated BEAM-8340:
-
Issue Type: Task  (was: Bug)

> Support stable id in stateful transforms
> 
>
> Key: BEAM-8340
> URL: https://issues.apache.org/jira/browse/BEAM-8340
> Project: Beam
>  Issue Type: Task
>  Components: runner-samza
>Reporter: Hai Lu
>Assignee: Hai Lu
>Priority: Major
>
> stable id work for samza runner





[jira] [Updated] (BEAM-8340) Support stable id in stateful transforms

2019-10-02 Thread Hai Lu (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Lu updated BEAM-8340:
-
Description: stable id work for samza runner

> Support stable id in stateful transforms
> 
>
> Key: BEAM-8340
> URL: https://issues.apache.org/jira/browse/BEAM-8340
> Project: Beam
>  Issue Type: Bug
>  Components: runner-samza
>Reporter: Hai Lu
>Assignee: Hai Lu
>Priority: Major
>
> stable id work for samza runner





[jira] [Created] (BEAM-8340) Support stable id in stateful transforms

2019-10-02 Thread Hai Lu (Jira)
Hai Lu created BEAM-8340:


 Summary: Support stable id in stateful transforms
 Key: BEAM-8340
 URL: https://issues.apache.org/jira/browse/BEAM-8340
 Project: Beam
  Issue Type: Bug
  Components: runner-samza
Reporter: Hai Lu
Assignee: Hai Lu








[jira] [Work logged] (BEAM-8334) Expose Language Options for testing

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8334?focusedWorklogId=322303&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322303
 ]

ASF GitHub Bot logged work on BEAM-8334:


Author: ASF GitHub Bot
Created on: 02/Oct/19 22:06
Start Date: 02/Oct/19 22:06
Worklog Time Spent: 10m 
  Work Description: apilloud commented on pull request #9704: [BEAM-8334] 
Expose Language Options for testing
URL: https://github.com/apache/beam/pull/9704#discussion_r330792712
 
 

 ##
 File path: 
sdks/java/extensions/sql/src/main/java/org/apache/beam/sdk/extensions/sql/zetasql/SqlAnalyzer.java
 ##
 @@ -84,15 +84,19 @@ static Builder withQueryParams(Map params) {
* resolution strategy set in the context.
*/
   ResolvedStatement analyze(String sql) {
-AnalyzerOptions options = initAnalyzerOptions(builder.queryParams);
+AnalyzerOptions options = initAnalyzerOptions();
+for (Map.Entry entry : builder.queryParams.entrySet()) {
 
 Review comment:
   Done. Please take another look.
 



Issue Time Tracking
---

Worklog Id: (was: 322303)
Time Spent: 1h  (was: 50m)

> Expose Language Options for testing
> ---
>
> Key: BEAM-8334
> URL: https://issues.apache.org/jira/browse/BEAM-8334
> Project: Beam
>  Issue Type: New Feature
>  Components: dsl-sql-zetasql
>Reporter: Andrew Pilloud
>Assignee: Andrew Pilloud
>Priority: Trivial
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Google has a set of compliance tests for ZetaSQL. The test framework needs 
> access to LanguageOptions to determine what tests are supported.





[jira] [Created] (BEAM-8339) Fix bugs in automation scripts

2019-10-02 Thread Mark Liu (Jira)
Mark Liu created BEAM-8339:
--

 Summary: Fix bugs in automation scripts
 Key: BEAM-8339
 URL: https://issues.apache.org/jira/browse/BEAM-8339
 Project: Beam
  Issue Type: Sub-task
  Components: testing
Reporter: Mark Liu








[jira] [Commented] (BEAM-8115) Overwrite portable Flink application jar pipeline options at runtime

2019-10-02 Thread Kyle Weaver (Jira)


[ 
https://issues.apache.org/jira/browse/BEAM-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943183#comment-16943183
 ] 

Kyle Weaver commented on BEAM-8115:
---

> user pipeline options that are processed within the main entry point, i.e. 
> that become properties of transforms

It would be much more complicated to rewrite the pipeline itself, especially if 
these properties are set according to some arbitrary logic at pipeline 
construction time. If this class of options-turned-transforms is not too 
common, it would be better to work around them, rather than hacking on the jar 
code.

> Overwrite portable Flink application jar pipeline options at runtime
> 
>
> Key: BEAM-8115
> URL: https://issues.apache.org/jira/browse/BEAM-8115
> Project: Beam
>  Issue Type: New Feature
>  Components: runner-flink
>Reporter: Kyle Weaver
>Assignee: Kyle Weaver
>Priority: Major
>
> In the first iteration of portable Flink application jars, all pipeline 
> options are set at job creation time and cannot be later modified at runtime. 
> There should be a way to pass arguments to the jar to write/overwrite 
> pipeline options.





[jira] [Work logged] (BEAM-8312) Flink portable pipeline jars do not need to stage artifacts remotely

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8312?focusedWorklogId=322278&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322278
 ]

ASF GitHub Bot logged work on BEAM-8312:


Author: ASF GitHub Bot
Created on: 02/Oct/19 21:30
Start Date: 02/Oct/19 21:30
Worklog Time Spent: 10m 
  Work Description: ibzib commented on pull request #9717: [BEAM-8312] 
Flink jars retrieve artifacts from classloader
URL: https://github.com/apache/beam/pull/9717
 
 
   R: @robertwb 
   
   Post-Commit Tests Status (on master branch)
   

   
   Lang | SDK | Apex | Dataflow | Flink | Gearpump | Samza | Spark
   --- | --- | --- | --- | --- | --- | --- | ---
   Go | [![Build 
Status](https://builds.apache.org/job/beam_PostCommit_Go/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Go/lastCompletedBuild/)
 | --- | --- | [![Build 
Status](https://builds.apache.org/job/beam_PostCommit_Go_VR_Flink/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Go_VR_Flink/lastCompletedBuild/)
 | --- | --- | [![Build 
Status](https://builds.apache.org/job/beam_PostCommit_Go_VR_Spark/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Go_VR_Spark/lastCompletedBuild/)
   Java | [![Build 
Status](https://builds.apache.org/job/beam_PostCommit_Java/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Java/lastCompletedBuild/)
 | [![Build 
Status](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Apex/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Apex/lastCompletedBuild/)
 | [![Build 
Status](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Dataflow/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Dataflow/lastCompletedBuild/)
 | [![Build 
Status](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Flink/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Flink/lastCompletedBuild/)[![Build
 
Status](https://builds.apache.org/job/beam_PostCommit_Java_PVR_Flink_Batch/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Java_PVR_Flink_Batch/lastCompletedBuild/)[![Build
 
Status](https://builds.apache.org/job/beam_PostCommit_Java_PVR_Flink_Streaming/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Java_PVR_Flink_Streaming/lastCompletedBuild/)
 | [![Build 
Status](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Gearpump/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Gearpump/lastCompletedBuild/)
 | [![Build 
Status](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Samza/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Samza/lastCompletedBuild/)
 | [![Build 
Status](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Spark/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Spark/lastCompletedBuild/)[![Build
 
Status](https://builds.apache.org/job/beam_PostCommit_Java_PVR_Spark_Batch/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Java_PVR_Spark_Batch/lastCompletedBuild/)
   Python | [![Build 
Status](https://builds.apache.org/job/beam_PostCommit_Python2/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Python2/lastCompletedBuild/)[![Build
 
Status](https://builds.apache.org/job/beam_PostCommit_Python35/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Python35/lastCompletedBuild/)[![Build
 
Status](https://builds.apache.org/job/beam_PostCommit_Python36/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Python36/lastCompletedBuild/)[![Build
 
Status](https://builds.apache.org/job/beam_PostCommit_Python37/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Python37/lastCompletedBuild/)
 | --- | [![Build 
Status](https://builds.apache.org/job/beam_PostCommit_Py_VR_Dataflow/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Py_VR_Dataflow/lastCompletedBuild/)[![Build
 
Status](https://builds.apache.org/job/beam_PostCommit_Py_ValCont/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PostCommit_Py_ValCont/lastCompletedBuild/)
 | [![Build 
Status](https://builds.apache.org/job/beam_PreCommit_Python2_PVR_Flink_Cron/lastCompletedBuild/badge/icon)](https://builds.apache.org/job/beam_PreCommit_Python2_PVR_Flink_Cron/lastCompletedBuild/)[![Build
 

[jira] [Commented] (BEAM-8183) Optionally bundle multiple pipelines into a single Flink jar

2019-10-02 Thread Thomas Weise (Jira)


[ 
https://issues.apache.org/jira/browse/BEAM-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943170#comment-16943170
 ] 

Thomas Weise commented on BEAM-8183:


{quote}The major issue to me seems to be that we need to execute pipeline 
construction code which is environment dependent. To generate new pipelines for 
an environment, we need to execute the pipeline submission code in that 
environment. And this is where I see a problem. Python pipelines have to 
execute user code in python using python sdk to construct the pipeline.
{quote}
You are correct that the Python entry point / driver program would need to be 
(re)executed for a fully generic solution. But that's not necessary for the 
majority of use cases. Those are artifact + configuration. If there is a way to 
parameterize configuration values in the proto, we can address that majority of 
use cases with a single job jar artifact.

My fallback for the exception path would be to generate multiple protos into a 
single jar, which is why I'm interested in this capability. So that jar would 
contain "mypipeline_staging" and "mypipeline_production" and the deployment 
would select the pipeline via its configuration (parameter to the Flink entry 
point). Similar would work for Spark.

But beyond that we also have (in our infrastructure) the use case of multiple 
entry points that the user can pick at submit time.
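
The "mypipeline_staging" / "mypipeline_production" approach above — several pipeline protos bundled into one jar, with deployment picking one by name — could be sketched as a lookup keyed by a submit-time flag (purely illustrative; the resource names, flag, and placeholder bytes are assumptions, not Beam code):

```python
import argparse

# The jar (or archive) would carry one serialized pipeline proto per entry;
# placeholder bytes stand in for real serialized protos here.
BUNDLED_PIPELINES = {
    "mypipeline_staging": b"<staging pipeline proto bytes>",
    "mypipeline_production": b"<production pipeline proto bytes>",
}

def select_pipeline(argv):
    """Pick a bundled pipeline proto from a submit-time argument."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--pipeline", choices=sorted(BUNDLED_PIPELINES),
                        required=True)
    args = parser.parse_args(argv)
    return BUNDLED_PIPELINES[args.pipeline]

proto = select_pipeline(["--pipeline", "mypipeline_staging"])
assert proto == BUNDLED_PIPELINES["mypipeline_staging"]
```

The same selection could be exposed as a parameter to the Flink (or Spark) entry point, as the comment suggests.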

> Optionally bundle multiple pipelines into a single Flink jar
> 
>
> Key: BEAM-8183
> URL: https://issues.apache.org/jira/browse/BEAM-8183
> Project: Beam
>  Issue Type: New Feature
>  Components: runner-flink
>Reporter: Kyle Weaver
>Assignee: Kyle Weaver
>Priority: Major
>  Labels: portability-flink
>
> [https://github.com/apache/beam/pull/9331#issuecomment-526734851]
> "With Flink you can bundle multiple entry points into the same jar file and 
> specify which one to use with optional flags. It may be desirable to allow 
> inclusion of multiple pipelines for this tool also, although that would 
> require a different workflow. Absent this option, it becomes quite convoluted 
> for users that need the flexibility to choose which pipeline to launch at 
> submission time."





[jira] [Work logged] (BEAM-8213) Run and report python tox tasks separately within Jenkins

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8213?focusedWorklogId=322274&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322274
 ]

ASF GitHub Bot logged work on BEAM-8213:


Author: ASF GitHub Bot
Created on: 02/Oct/19 21:19
Start Date: 02/Oct/19 21:19
Worklog Time Spent: 10m 
  Work Description: chadrik commented on issue #9706: [BEAM-8213] Split out 
lint job from monolithic python preCommit tests on jenkins
URL: https://github.com/apache/beam/pull/9706#issuecomment-537685875
 
 
   I'm not sure why these keep failing
   
   The build scan link doesn't work for me "Your build scan could not be 
displayed"
 



Issue Time Tracking
---

Worklog Id: (was: 322274)
Time Spent: 12h 10m  (was: 12h)

> Run and report python tox tasks separately within Jenkins
> -
>
> Key: BEAM-8213
> URL: https://issues.apache.org/jira/browse/BEAM-8213
> Project: Beam
>  Issue Type: Improvement
>  Components: build-system
>Reporter: Chad Dombrova
>Assignee: Chad Dombrova
>Priority: Major
>  Time Spent: 12h 10m
>  Remaining Estimate: 0h
>





[jira] [Work logged] (BEAM-8213) Run and report python tox tasks separately within Jenkins

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8213?focusedWorklogId=322273&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322273
 ]

ASF GitHub Bot logged work on BEAM-8213:


Author: ASF GitHub Bot
Created on: 02/Oct/19 21:18
Start Date: 02/Oct/19 21:18
Worklog Time Spent: 10m 
  Work Description: tvalentyn commented on issue #9706: [BEAM-8213] Split 
out lint job from monolithic python preCommit tests on jenkins
URL: https://github.com/apache/beam/pull/9706#issuecomment-537685455
 
 
   Run Python_PVR_Flink PreCommit
 



Issue Time Tracking
---

Worklog Id: (was: 322273)
Time Spent: 12h  (was: 11h 50m)

> Run and report python tox tasks separately within Jenkins
> -
>
> Key: BEAM-8213
> URL: https://issues.apache.org/jira/browse/BEAM-8213
> Project: Beam
>  Issue Type: Improvement
>  Components: build-system
>Reporter: Chad Dombrova
>Assignee: Chad Dombrova
>Priority: Major
>  Time Spent: 12h
>  Remaining Estimate: 0h
>
> As a python developer, the speed and comprehensibility of the Jenkins 
> PreCommit job could be greatly improved.
> Here are some of the problems:
> - when a lint job fails, it's not reported in the test results summary, so 
> even though the job is marked as failed, I see "Test Result (no failures)", 
> which is quite confusing
> - I have to wait for over an hour to discover that lint failed, even though it 
> takes about a minute to run on its own
> - the logs are a jumbled mess of all the different tasks running on top of 
> each other
> - the test results give no indication of which version of python they use. I 
> click on Test results, then the test module, then the test class, then I see 
> 4 tests named the same thing. I assume that the first is python 2.7, the 
> second is 3.5, and so on. It takes 5 clicks plus reading the log output 
> to know which version of python a single error pertains to, and then I need to 
> repeat that for each failure. This makes it very difficult to discover problems 
> and to deduce that they may have something to do with python version mismatches.
> I believe the solution is to split the single monolithic python 
> PreCommit job into sub-jobs (possibly using a pipeline with steps). This 
> would give us the following benefits:
> - sub-job results become available as they finish, so, for example, 
> lint results are available very early on
> - sub-job results are reported separately, and there is a job for each of 
> py2, py35, py36, and so on, so it is clear when an error is related 
> to a particular python version
> - sub-jobs without reports, like docs and lint, have their own failure 
> status and logs, so when they fail it is more obvious what went wrong.
> I'm happy to help out once I get some feedback on the desired way forward.
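The proposed split can be sketched as a small driver that runs each tox task as its own sub-job and reports each status independently as soon as it finishes. The job names and the echoed command below are illustrative placeholders, not Beam's actual Jenkins configuration:

```python
# Sketch: run each tox environment as an independent sub-job so its
# log and exit status are isolated from the others. Job names are
# illustrative only.
import concurrent.futures
import subprocess

SUB_JOBS = ["lint", "py27", "py35", "py36", "docs"]  # hypothetical names

def run_sub_job(name):
    # Each sub-job runs in its own process; here we only echo the
    # command that a real driver would execute.
    result = subprocess.run(
        ["echo", f"tox -e {name}"], capture_output=True, text=True
    )
    return name, result.returncode

def run_all(jobs):
    # Results become available as each sub-job finishes, so a fast job
    # like lint reports early instead of waiting on the slow ones.
    statuses = {}
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {pool.submit(run_sub_job, j): j for j in jobs}
        for fut in concurrent.futures.as_completed(futures):
            name, code = fut.result()
            statuses[name] = "PASSED" if code == 0 else "FAILED"
    return statuses

if __name__ == "__main__":
    for job, status in sorted(run_all(SUB_JOBS).items()):
        print(f"{job}: {status}")
```

Each sub-job's status stands on its own, which is exactly what separate Jenkins jobs would surface in the results summary.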



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (BEAM-8213) Run and report python tox tasks separately within Jenkins

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8213?focusedWorklogId=322268&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322268
 ]

ASF GitHub Bot logged work on BEAM-8213:


Author: ASF GitHub Bot
Created on: 02/Oct/19 21:16
Start Date: 02/Oct/19 21:16
Worklog Time Spent: 10m 
  Work Description: chadrik commented on issue #9706: [BEAM-8213] Split out 
lint job from monolithic python preCommit tests on jenkins
URL: https://github.com/apache/beam/pull/9706#issuecomment-537684713
 
 
   Run CommunityMetrics PreCommit
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322268)
Time Spent: 11h 20m  (was: 11h 10m)

> Run and report python tox tasks separately within Jenkins
> -
>
> Key: BEAM-8213
> URL: https://issues.apache.org/jira/browse/BEAM-8213
> Project: Beam
>  Issue Type: Improvement
>  Components: build-system
>Reporter: Chad Dombrova
>Assignee: Chad Dombrova
>Priority: Major
>  Time Spent: 11h 20m
>  Remaining Estimate: 0h
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (BEAM-8213) Run and report python tox tasks separately within Jenkins

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8213?focusedWorklogId=322270&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322270
 ]

ASF GitHub Bot logged work on BEAM-8213:


Author: ASF GitHub Bot
Created on: 02/Oct/19 21:16
Start Date: 02/Oct/19 21:16
Worklog Time Spent: 10m 
  Work Description: chadrik commented on issue #9706: [BEAM-8213] Split out 
lint job from monolithic python preCommit tests on jenkins
URL: https://github.com/apache/beam/pull/9706#issuecomment-537684849
 
 
   Run Python_PVR_Flink PreCommit
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322270)
Time Spent: 11h 40m  (was: 11.5h)

> Run and report python tox tasks separately within Jenkins
> -
>
> Key: BEAM-8213
> URL: https://issues.apache.org/jira/browse/BEAM-8213
> Project: Beam
>  Issue Type: Improvement
>  Components: build-system
>Reporter: Chad Dombrova
>Assignee: Chad Dombrova
>Priority: Major
>  Time Spent: 11h 40m
>  Remaining Estimate: 0h
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (BEAM-8213) Run and report python tox tasks separately within Jenkins

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8213?focusedWorklogId=322269&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322269
 ]

ASF GitHub Bot logged work on BEAM-8213:


Author: ASF GitHub Bot
Created on: 02/Oct/19 21:16
Start Date: 02/Oct/19 21:16
Worklog Time Spent: 10m 
  Work Description: chadrik commented on issue #9706: [BEAM-8213] Split out 
lint job from monolithic python preCommit tests on jenkins
URL: https://github.com/apache/beam/pull/9706#issuecomment-537684794
 
 
   Run Java PreCommit
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322269)
Time Spent: 11.5h  (was: 11h 20m)

> Run and report python tox tasks separately within Jenkins
> -
>
> Key: BEAM-8213
> URL: https://issues.apache.org/jira/browse/BEAM-8213
> Project: Beam
>  Issue Type: Improvement
>  Components: build-system
>Reporter: Chad Dombrova
>Assignee: Chad Dombrova
>Priority: Major
>  Time Spent: 11.5h
>  Remaining Estimate: 0h
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (BEAM-8213) Run and report python tox tasks separately within Jenkins

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8213?focusedWorklogId=322272&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322272
 ]

ASF GitHub Bot logged work on BEAM-8213:


Author: ASF GitHub Bot
Created on: 02/Oct/19 21:16
Start Date: 02/Oct/19 21:16
Worklog Time Spent: 10m 
  Work Description: chadrik commented on issue #9706: [BEAM-8213] Split out 
lint job from monolithic python preCommit tests on jenkins
URL: https://github.com/apache/beam/pull/9706#issuecomment-537684929
 
 
   Run Python_PVR_Flink PreCommit
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322272)
Time Spent: 11h 50m  (was: 11h 40m)

> Run and report python tox tasks separately within Jenkins
> -
>
> Key: BEAM-8213
> URL: https://issues.apache.org/jira/browse/BEAM-8213
> Project: Beam
>  Issue Type: Improvement
>  Components: build-system
>Reporter: Chad Dombrova
>Assignee: Chad Dombrova
>Priority: Major
>  Time Spent: 11h 50m
>  Remaining Estimate: 0h
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (BEAM-8213) Run and report python tox tasks separately within Jenkins

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8213?focusedWorklogId=322267&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322267
 ]

ASF GitHub Bot logged work on BEAM-8213:


Author: ASF GitHub Bot
Created on: 02/Oct/19 21:15
Start Date: 02/Oct/19 21:15
Worklog Time Spent: 10m 
  Work Description: chadrik commented on issue #9706: [BEAM-8213] Split out 
lint job from monolithic python preCommit tests on jenkins
URL: https://github.com/apache/beam/pull/9706#issuecomment-537684429
 
 
   
   
   Run CommunityMetrics PreCommit
   Run Java PreCommit
   Run Python_PVR_Flink PreCommit
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322267)
Time Spent: 11h 10m  (was: 11h)

> Run and report python tox tasks separately within Jenkins
> -
>
> Key: BEAM-8213
> URL: https://issues.apache.org/jira/browse/BEAM-8213
> Project: Beam
>  Issue Type: Improvement
>  Components: build-system
>Reporter: Chad Dombrova
>Assignee: Chad Dombrova
>Priority: Major
>  Time Spent: 11h 10m
>  Remaining Estimate: 0h
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (BEAM-8183) Optionally bundle multiple pipelines into a single Flink jar

2019-10-02 Thread Ankur Goenka (Jira)


[ 
https://issues.apache.org/jira/browse/BEAM-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943161#comment-16943161
 ] 

Ankur Goenka commented on BEAM-8183:


I see. Thanks for explaining the use case.

I think hardcoded pipeline options are definitely a limitation as of now. We 
can look at using Beam's ValueProvider to supply dynamic arguments. We could 
also consider overriding the pipeline options when submitting the jar to Flink.
{quote}Running the same pipeline in different environments with different 
parameters is a common need. Virtually everyone has dev/staging/prod or 
whatever their environments are and they want to use the same build artifact. 
That normally requires some amount of parameterization.
{quote}
I don't really have a good solution for the dev/staging/prod use case. It is not 
going to be solved by a jar with multiple pipelines (as each pipeline will have a 
static set of pipeline options) but by a jar creating pipelines dynamically (as the 
pipeline changes based on the pipeline options and environment).

The major issue to me seems to be that we need to execute pipeline construction 
code, which is environment dependent. To generate new pipelines for an 
environment, we need to execute the pipeline submission code in that 
environment, and this is where I see a problem: Python pipelines have to 
execute user code in Python, using the Python SDK, to construct the pipeline.

Considering this jar as the artifact would not be ideal across different 
environments, as the actual SDK, libraries, etc. can differ between environments. From 
an environment point of view, a Docker container capable of submitting the 
pipeline should be the artifact, as it has all the dependencies bundled in it and 
is capable of executing code with consistent dependencies. And if we don't want 
consistent dependencies across environments, then the pipeline code should be 
considered the artifact, as it can work with different dependencies.

 

For context, in Dataflow we pack multiple pipelines into a single jar for Java, and 
for Python we generate a separate par for each pipeline (we do publish them as a 
single mpm). Further, this does not materialize the pipeline but creates an 
executable which is later used in an environment having the right SDK 
installed. The submission process just runs "python test_pipeline.par 
--runner=DataflowRunner --apiary=testapiary", which goes through the Dataflow job 
submission API and is submitted as a regular Dataflow job.

This is similar to the Docker model, except that instead of Docker we use a par 
file and execute it using Python/Java.
{quote}The other use case is bundling multiple pipelines into the same 
container and select which to run at launch time.
{quote}
This will save some space at deployment time, specifically the job server 
jar and the pipeline's staged artifacts, if they are shared. We don't really 
introspect the staged artifacts, so we don't know what can and cannot be shared 
across pipelines. I think a better approach would be to write a 
separate script that merges multiple pipeline jars (each with a single pipeline) and 
replaces the main class with one that considers the name of the pipeline to pick the 
right proto. The script can be infrastructure aware and can make the 
appropriate library changes. Beam does not have a notion of multiple pipelines in 
any sense, so it will be interesting to see how we model this if we decide to 
introduce it in Beam.

Note: As the pipelines are materialized, they will still not work across 
environments.

 

Please let me know if you have any ideas for solving this.
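The "one artifact, many pipelines, choose by name at launch" idea discussed here can be sketched as a dispatcher keyed by pipeline name. Everything below is illustrative (the pipeline names, the string stand-ins for materialized protos, and the flag name), not Beam's or Dataflow's actual mechanism:

```python
# Sketch: a single artifact bundling several materialized pipelines,
# with the one to run chosen by name at submission time.
import argparse

def build_wordcount():
    return "wordcount-pipeline-proto"  # stand-in for a materialized proto

def build_sessions():
    return "sessions-pipeline-proto"

PIPELINES = {  # registry baked into the artifact at build time
    "wordcount": build_wordcount,
    "sessions": build_sessions,
}

def select_pipeline(name):
    # Fail fast, listing the available names, if the choice is unknown.
    try:
        return PIPELINES[name]()
    except KeyError:
        raise SystemExit(
            f"unknown pipeline {name!r}; choose from {sorted(PIPELINES)}"
        )

def main(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--pipeline", required=True)
    args = parser.parse_args(argv)
    proto = select_pipeline(args.pipeline)
    print(f"submitting {proto}")

if __name__ == "__main__":
    main(["--pipeline", "wordcount"])  # demo invocation
```

Note that, as the comment points out, each entry here is still a statically materialized pipeline; the dispatcher only selects among them, it does not re-parameterize them per environment.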

 

> Optionally bundle multiple pipelines into a single Flink jar
> 
>
> Key: BEAM-8183
> URL: https://issues.apache.org/jira/browse/BEAM-8183
> Project: Beam
>  Issue Type: New Feature
>  Components: runner-flink
>Reporter: Kyle Weaver
>Assignee: Kyle Weaver
>Priority: Major
>  Labels: portability-flink
>
> [https://github.com/apache/beam/pull/9331#issuecomment-526734851]
> "With Flink you can bundle multiple entry points into the same jar file and 
> specify which one to use with optional flags. It may be desirable to allow 
> inclusion of multiple pipelines for this tool also, although that would 
> require a different workflow. Absent this option, it becomes quite convoluted 
> for users that need the flexibility to choose which pipeline to launch at 
> submission time."



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (BEAM-6995) SQL aggregation with where clause fails to plan

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-6995?focusedWorklogId=32&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-32
 ]

ASF GitHub Bot logged work on BEAM-6995:


Author: ASF GitHub Bot
Created on: 02/Oct/19 20:29
Start Date: 02/Oct/19 20:29
Worklog Time Spent: 10m 
  Work Description: 11moon11 commented on issue #9703: [BEAM-6995] Beam 
basic aggregation rule only when not windowed
URL: https://github.com/apache/beam/pull/9703#issuecomment-537666830
 
 
   Run Direct Runner Nexmark Tests
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 32)
Time Spent: 1h 40m  (was: 1.5h)

> SQL aggregation with where clause fails to plan
> ---
>
> Key: BEAM-6995
> URL: https://issues.apache.org/jira/browse/BEAM-6995
> Project: Beam
>  Issue Type: Bug
>  Components: dsl-sql
>Affects Versions: 2.11.0
>Reporter: David McIntosh
>Assignee: Kirill Kozlov
>Priority: Minor
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> I'm finding that this code fails with a CannotPlanException listed below.
> {code:java}
> Schema schema = Schema.builder()
>     .addInt32Field("id")
>     .addInt32Field("val")
>     .build();
> Row row = Row.withSchema(schema).addValues(1, 2).build();
> PCollection<Row> inputData = p.apply("row input",
>     Create.of(row).withRowSchema(schema));
> inputData.apply("sql",
>     SqlTransform.query(
>         "SELECT id, SUM(val) "
>         + "FROM PCOLLECTION "
>         + "WHERE val > 0 "
>         + "GROUP BY id"));{code}
> If the WHERE clause is removed the code runs successfully.
> This may be similar to BEAM-5384 since I was able to work around this by 
> adding an extra column to the input that isn't reference in the sql.
> {code:java}
> Schema schema = Schema.builder()
>     .addInt32Field("id")
>     .addInt32Field("val")
>     .addInt32Field("extra")
>     .build();{code}
>  
> {code:java}
> org.apache.beam.repackaged.beam_sdks_java_extensions_sql.org.apache.calcite.plan.RelOptPlanner$CannotPlanException:
>  Node [rel#100:Subset#2.BEAM_LOGICAL] could not be implemented; planner state:
> Root: rel#100:Subset#2.BEAM_LOGICAL
> Original rel:
> LogicalAggregate(subset=[rel#100:Subset#2.BEAM_LOGICAL], group=[{0}], 
> EXPR$1=[SUM($1)]): rowcount = 5.0, cumulative cost = {5.687500238418579 rows, 
> 0.0 cpu, 0.0 io}, id = 98
>   LogicalFilter(subset=[rel#97:Subset#1.NONE], condition=[>($1, 0)]): 
> rowcount = 50.0, cumulative cost = {50.0 rows, 100.0 cpu, 0.0 io}, id = 96
> BeamIOSourceRel(subset=[rel#95:Subset#0.BEAM_LOGICAL], table=[[beam, 
> PCOLLECTION]]): rowcount = 100.0, cumulative cost = {100.0 rows, 101.0 cpu, 
> 0.0 io}, id = 92
> Sets:
> Set#0, type: RecordType(INTEGER id, INTEGER val)
> rel#95:Subset#0.BEAM_LOGICAL, best=rel#92, 
> importance=0.7291
> rel#92:BeamIOSourceRel.BEAM_LOGICAL(table=[beam, 
> PCOLLECTION]), rowcount=100.0, cumulative cost={100.0 rows, 101.0 cpu, 0.0 io}
> rel#110:Subset#0.ENUMERABLE, best=rel#109, 
> importance=0.36455
> 
> rel#109:BeamEnumerableConverter.ENUMERABLE(input=rel#95:Subset#0.BEAM_LOGICAL),
>  rowcount=100.0, cumulative cost={1.7976931348623157E308 rows, 
> 1.7976931348623157E308 cpu, 1.7976931348623157E308 io}
> Set#1, type: RecordType(INTEGER id, INTEGER val)
> rel#97:Subset#1.NONE, best=null, importance=0.81
> 
> rel#96:LogicalFilter.NONE(input=rel#95:Subset#0.BEAM_LOGICAL,condition=>($1, 
> 0)), rowcount=50.0, cumulative cost={inf}
> 
> rel#102:LogicalCalc.NONE(input=rel#95:Subset#0.BEAM_LOGICAL,expr#0..1={inputs},expr#2=0,expr#3=>($t1,
>  $t2),id=$t0,val=$t1,$condition=$t3), rowcount=50.0, cumulative cost={inf}
> rel#104:Subset#1.BEAM_LOGICAL, best=rel#103, importance=0.405
> 
> rel#103:BeamCalcRel.BEAM_LOGICAL(input=rel#95:Subset#0.BEAM_LOGICAL,expr#0..1={inputs},expr#2=0,expr#3=>($t1,
>  $t2),id=$t0,val=$t1,$condition=$t3), rowcount=50.0, cumulative cost={150.0 
> rows, 801.0 cpu, 0.0 io}
> rel#106:Subset#1.ENUMERABLE, best=rel#105, importance=0.405
> 
> rel#105:BeamEnumerableConverter.ENUMERABLE(input=rel#104:Subset#1.BEAM_LOGICAL),
>  rowcount=50.0, cumulative cost={1.7976931348623157E308 rows, 
> 1.7976931348623157E308 cpu, 1.7976931348623157E308 io}
> Set#2, type: RecordType(INTEGER id, INTEGER EXPR$1)
> rel#99:Subset#2.NONE, best=null, importance=0.9
> 

[jira] [Work logged] (BEAM-8213) Run and report python tox tasks separately within Jenkins

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8213?focusedWorklogId=322209&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322209
 ]

ASF GitHub Bot logged work on BEAM-8213:


Author: ASF GitHub Bot
Created on: 02/Oct/19 20:21
Start Date: 02/Oct/19 20:21
Worklog Time Spent: 10m 
  Work Description: tvalentyn commented on issue #9706: [BEAM-8213] Split 
out lint job from monolithic python preCommit tests on jenkins
URL: https://github.com/apache/beam/pull/9706#issuecomment-537663603
 
 
   Thanks, @chadrik !
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322209)
Time Spent: 11h  (was: 10h 50m)

> Run and report python tox tasks separately within Jenkins
> -
>
> Key: BEAM-8213
> URL: https://issues.apache.org/jira/browse/BEAM-8213
> Project: Beam
>  Issue Type: Improvement
>  Components: build-system
>Reporter: Chad Dombrova
>Assignee: Chad Dombrova
>Priority: Major
>  Time Spent: 11h
>  Remaining Estimate: 0h
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (BEAM-8213) Run and report python tox tasks separately within Jenkins

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8213?focusedWorklogId=322206&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322206
 ]

ASF GitHub Bot logged work on BEAM-8213:


Author: ASF GitHub Bot
Created on: 02/Oct/19 20:19
Start Date: 02/Oct/19 20:19
Worklog Time Spent: 10m 
  Work Description: tvalentyn commented on issue #9706: [BEAM-8213] Split 
out lint job from monolithic python preCommit tests on jenkins
URL: https://github.com/apache/beam/pull/9706#issuecomment-537662858
 
 
   Run Seed Job
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322206)
Time Spent: 10h 40m  (was: 10.5h)

> Run and report python tox tasks separately within Jenkins
> -
>
> Key: BEAM-8213
> URL: https://issues.apache.org/jira/browse/BEAM-8213
> Project: Beam
>  Issue Type: Improvement
>  Components: build-system
>Reporter: Chad Dombrova
>Assignee: Chad Dombrova
>Priority: Major
>  Time Spent: 10h 40m
>  Remaining Estimate: 0h
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (BEAM-6829) Duplicate metric warnings clutter log

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-6829?focusedWorklogId=322204&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322204
 ]

ASF GitHub Bot logged work on BEAM-6829:


Author: ASF GitHub Bot
Created on: 02/Oct/19 20:13
Start Date: 02/Oct/19 20:13
Worklog Time Spent: 10m 
  Work Description: tweise commented on issue #8585: [BEAM-6829] Use 
transform/pcollection name for metric namespace if none provided
URL: https://github.com/apache/beam/pull/8585#issuecomment-537660439
 
 
   Sounds good, I will try it again with the latest changes.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322204)
Time Spent: 2.5h  (was: 2h 20m)

> Duplicate metric warnings clutter log
> -
>
> Key: BEAM-6829
> URL: https://issues.apache.org/jira/browse/BEAM-6829
> Project: Beam
>  Issue Type: Bug
>  Components: runner-flink
>Affects Versions: 2.11.0
>Reporter: Thomas Weise
>Assignee: Maximilian Michels
>Priority: Major
>  Labels: portability
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Logs fill up quickly with these warnings: 
> {code:java}
> WARN org.apache.flink.metrics.MetricGroup - Name collision: Group already 
> contains a Metric with the name ...{code}
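The fix discussed in PR #8585 above is to fall back to the transform/pcollection name for the metric namespace when none is provided. A minimal Python sketch of that idea (function and metric names hypothetical, not the actual Beam or Flink API):

```python
def qualified_metric_name(namespace, transform_name, metric_name):
    # Fall back to the enclosing transform's (or pcollection's) name when
    # the user gave no explicit namespace, so identically named metrics
    # from different transforms no longer collide in Flink's MetricGroup.
    ns = namespace or transform_name
    return f'{ns}.{metric_name}'

a = qualified_metric_name(None, 'ParseEvents', 'elements')
b = qualified_metric_name(None, 'FilterEvents', 'elements')
print(a, b)
```

With distinct namespaces the two counters register under different names, which avoids the repeated "Name collision" warning.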



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (BEAM-7933) Adding timeout to JobServer grpc calls

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-7933?focusedWorklogId=322201&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322201
 ]

ASF GitHub Bot logged work on BEAM-7933:


Author: ASF GitHub Bot
Created on: 02/Oct/19 20:04
Start Date: 02/Oct/19 20:04
Worklog Time Spent: 10m 
  Work Description: ibzib commented on pull request #9673: [BEAM-7933] Add 
job server request timeout (default to 60 seconds)
URL: https://github.com/apache/beam/pull/9673#discussion_r330744414
 
 

 ##
 File path: sdks/python/apache_beam/options/pipeline_options.py
 ##
 @@ -818,11 +818,15 @@ class PortableOptions(PipelineOptions):
   """
   @classmethod
   def _add_argparse_args(cls, parser):
-    parser.add_argument('--job_endpoint',
-                        default=None,
-                        help=
-                        ('Job service endpoint to use. Should be in the form '
-                         'of address and port, e.g. localhost:3000'))
+    parser.add_argument(
+        '--job_endpoint', default=None,
+        help=('Job service endpoint to use. Should be in the form of address '
+              'and port, e.g. localhost:3000'))
+    parser.add_argument(
+        '--job-server-timeout', default=60, type=int,
+        help=('Job service request timeout in seconds. The timeout '
+              'determines the max time the driver program will wait to '
+              'get a response from the job server.'))
 
 Review comment:
   Might want to make a note here that timeouts do not apply to the actual 
pipeline itself.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322201)
Time Spent: 3h 20m  (was: 3h 10m)

> Adding timeout to JobServer grpc calls
> --
>
> Key: BEAM-7933
> URL: https://issues.apache.org/jira/browse/BEAM-7933
> Project: Beam
>  Issue Type: Improvement
>  Components: sdk-py-core
>Affects Versions: 2.14.0
>Reporter: Enrico Canzonieri
>Assignee: Enrico Canzonieri
>Priority: Minor
>  Labels: portability
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> grpc calls to the JobServer from the Python SDK do not have timeouts. That 
> means that the call to pipeline.run() could hang forever if the JobServer is 
> not running (or failing to start).
> E.g. 
> [https://github.com/apache/beam/blob/master/sdks/python/apache_beam/runners/portability/portable_runner.py#L307]
>  the call to Prepare() doesn't provide any timeout value, and the same applies 
> to other JobServer requests.
> As part of this ticket we could use 60 seconds as the default timeout for the 
> client.
> Additionally, we could consider adding a --job-server-request-timeout to the 
> [PortableOptions|https://github.com/apache/beam/blob/master/sdks/python/apache_beam/options/pipeline_options.py#L805]
>  class to be used in the JobServer interactions inside portable_runner.py.
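As a rough, stdlib-only sketch of the proposal (the real option belongs in PortableOptions and the real calls are gRPC stub methods; `prepare` and the call-site names here are hypothetical), the timeout could be parsed once and forwarded to every job-server RPC:

```python
import argparse

def make_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--job_endpoint', default=None,
        help='Job service endpoint, e.g. localhost:3000')
    parser.add_argument(
        '--job-server-timeout', default=60, type=int,
        help='Job service request timeout in seconds; bounds job-server '
             'RPCs only, not the pipeline itself.')
    return parser

def prepare(stub, request, options):
    # Hypothetical call site: forwarding the timeout means a dead job
    # server fails fast instead of hanging pipeline.run() forever.
    return stub.Prepare(request, timeout=options.job_server_timeout)

opts = make_parser().parse_args(['--job-server-timeout', '30'])
print(opts.job_server_timeout)
```

Note argparse normalizes `--job-server-timeout` to the attribute `job_server_timeout` automatically, so no explicit `dest` is needed.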



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (BEAM-7623) Support select MAP with Row as values

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-7623?focusedWorklogId=322165&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322165
 ]

ASF GitHub Bot logged work on BEAM-7623:


Author: ASF GitHub Bot
Created on: 02/Oct/19 19:13
Start Date: 02/Oct/19 19:13
Worklog Time Spent: 10m 
  Work Description: amaliujia commented on issue #9701: [BEAM-7623] UT for 
BeamSql DDL with map field having row as value
URL: https://github.com/apache/beam/pull/9701#issuecomment-537638288
 
 
   LGTM
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322165)
Time Spent: 1h  (was: 50m)

> Support select MAP with Row as values
> -
>
> Key: BEAM-7623
> URL: https://issues.apache.org/jira/browse/BEAM-7623
> Project: Beam
>  Issue Type: Improvement
>  Components: dsl-sql
>Reporter: Rui Wang
>Priority: Major
> Fix For: Not applicable
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code}
>  Schema primitiveFieldsScema = 
> Schema.builder().addStringField("color").build();
> Schema inputSchema = 
> Schema.builder().addMapField("mapWithValueAsRow", FieldType.STRING, 
> FieldType.row(primitiveFieldsScema)).build();  
> Map mapWithValueAsRow = new HashMap<>();
> Row row = 
> Row.withSchema(primitiveFieldsScema).addValue("RED").build();
> mapWithValueAsRow.put("key", row);
> 
> Row rowOfMap = 
> Row.withSchema(inputSchema).addValue(mapWithValueAsRow).build();
> 
> Query used:
>select  PCOLLECTION.mapWithValueAsRow['key'].color as color  
> from PCOLLECTION
> {code}
> With exception:
> {code}
> java.lang.RuntimeException: CalcFn failed to evaluate: {
>   final java.util.Map inp0_ = ((org.apache.beam.sdk.values.Row) 
> c.element()).getMap(0);
>   
> c.output(org.apache.beam.sdk.values.Row.withSchema(outputSchema).addValue(org.apache.beam.repackaged.sql.org.apache.calcite.runtime.SqlFunctions.mapItemOptional(inp0_,
>  "key") == null ? (String) null : (String) 
> org.apache.beam.repackaged.sql.org.apache.calcite.runtime.SqlFunctions.structAccess(org.apache.beam.repackaged.sql.org.apache.calcite.runtime.SqlFunctions.mapItemOptional(inp0_,
>  "key"), 0, "color")).build());
> }
> org.apache.beam.sdk.Pipeline$PipelineExecutionException: 
> java.lang.RuntimeException: CalcFn failed to evaluate: {
>   final java.util.Map inp0_ = ((org.apache.beam.sdk.values.Row) 
> c.element()).getMap(0);
>   
> c.output(org.apache.beam.sdk.values.Row.withSchema(outputSchema).addValue(org.apache.beam.repackaged.sql.org.apache.calcite.runtime.SqlFunctions.mapItemOptional(inp0_,
>  "key") == null ? (String) null : (String) 
> org.apache.beam.repackaged.sql.org.apache.calcite.runtime.SqlFunctions.structAccess(org.apache.beam.repackaged.sql.org.apache.calcite.runtime.SqlFunctions.mapItemOptional(inp0_,
>  "key"), 0, "color")).build());
> }
>   at 
> org.apache.beam.runners.direct.DirectRunner$DirectPipelineResult.waitUntilFinish(DirectRunner.java:348)
>   at 
> org.apache.beam.runners.direct.DirectRunner$DirectPipelineResult.waitUntilFinish(DirectRunner.java:318)
>   at 
> org.apache.beam.runners.direct.DirectRunner.run(DirectRunner.java:213)
>   at org.apache.beam.runners.direct.DirectRunner.run(DirectRunner.java:67)
>   at org.apache.beam.sdk.Pipeline.run(Pipeline.java:313)
>   at org.apache.beam.sdk.testing.TestPipeline.run(TestPipeline.java:350)
>   at org.apache.beam.sdk.testing.TestPipeline.run(TestPipeline.java:331)
>   at 
> org.apache.beam.sdk.extensions.sql.BeamSqlMapTest.testAccessMapElementWithRowValue(BeamSqlMapTest.java:155)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:265)
>   at 
> 

[jira] [Work logged] (BEAM-7623) Support select MAP with Row as values

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-7623?focusedWorklogId=322157&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322157
 ]

ASF GitHub Bot logged work on BEAM-7623:


Author: ASF GitHub Bot
Created on: 02/Oct/19 18:59
Start Date: 02/Oct/19 18:59
Worklog Time Spent: 10m 
  Work Description: apilloud commented on pull request #9701: [BEAM-7623] 
UT for BeamSql DDL with map field having row as value
URL: https://github.com/apache/beam/pull/9701
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322157)
Time Spent: 50m  (was: 40m)

> Support select MAP with Row as values
> -
>
> Key: BEAM-7623
> URL: https://issues.apache.org/jira/browse/BEAM-7623
> Project: Beam
>  Issue Type: Improvement
>  Components: dsl-sql
>Reporter: Rui Wang
>Priority: Major
> Fix For: Not applicable
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code}
>  Schema primitiveFieldsScema = 
> Schema.builder().addStringField("color").build();
> Schema inputSchema = 
> Schema.builder().addMapField("mapWithValueAsRow", FieldType.STRING, 
> FieldType.row(primitiveFieldsScema)).build();  
> Map mapWithValueAsRow = new HashMap<>();
> Row row = 
> Row.withSchema(primitiveFieldsScema).addValue("RED").build();
> mapWithValueAsRow.put("key", row);
> 
> Row rowOfMap = 
> Row.withSchema(inputSchema).addValue(mapWithValueAsRow).build();
> 
> Query used:
>select  PCOLLECTION.mapWithValueAsRow['key'].color as color  
> from PCOLLECTION
> {code}
> With exception:
> {code}
> java.lang.RuntimeException: CalcFn failed to evaluate: {
>   final java.util.Map inp0_ = ((org.apache.beam.sdk.values.Row) 
> c.element()).getMap(0);
>   
> c.output(org.apache.beam.sdk.values.Row.withSchema(outputSchema).addValue(org.apache.beam.repackaged.sql.org.apache.calcite.runtime.SqlFunctions.mapItemOptional(inp0_,
>  "key") == null ? (String) null : (String) 
> org.apache.beam.repackaged.sql.org.apache.calcite.runtime.SqlFunctions.structAccess(org.apache.beam.repackaged.sql.org.apache.calcite.runtime.SqlFunctions.mapItemOptional(inp0_,
>  "key"), 0, "color")).build());
> }
> org.apache.beam.sdk.Pipeline$PipelineExecutionException: 
> java.lang.RuntimeException: CalcFn failed to evaluate: {
>   final java.util.Map inp0_ = ((org.apache.beam.sdk.values.Row) 
> c.element()).getMap(0);
>   
> c.output(org.apache.beam.sdk.values.Row.withSchema(outputSchema).addValue(org.apache.beam.repackaged.sql.org.apache.calcite.runtime.SqlFunctions.mapItemOptional(inp0_,
>  "key") == null ? (String) null : (String) 
> org.apache.beam.repackaged.sql.org.apache.calcite.runtime.SqlFunctions.structAccess(org.apache.beam.repackaged.sql.org.apache.calcite.runtime.SqlFunctions.mapItemOptional(inp0_,
>  "key"), 0, "color")).build());
> }
>   at 
> org.apache.beam.runners.direct.DirectRunner$DirectPipelineResult.waitUntilFinish(DirectRunner.java:348)
>   at 
> org.apache.beam.runners.direct.DirectRunner$DirectPipelineResult.waitUntilFinish(DirectRunner.java:318)
>   at 
> org.apache.beam.runners.direct.DirectRunner.run(DirectRunner.java:213)
>   at org.apache.beam.runners.direct.DirectRunner.run(DirectRunner.java:67)
>   at org.apache.beam.sdk.Pipeline.run(Pipeline.java:313)
>   at org.apache.beam.sdk.testing.TestPipeline.run(TestPipeline.java:350)
>   at org.apache.beam.sdk.testing.TestPipeline.run(TestPipeline.java:331)
>   at 
> org.apache.beam.sdk.extensions.sql.BeamSqlMapTest.testAccessMapElementWithRowValue(BeamSqlMapTest.java:155)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:265)
>   at 
> 

[jira] [Work logged] (BEAM-6829) Duplicate metric warnings clutter log

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-6829?focusedWorklogId=322156&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322156
 ]

ASF GitHub Bot logged work on BEAM-6829:


Author: ASF GitHub Bot
Created on: 02/Oct/19 18:59
Start Date: 02/Oct/19 18:59
Worklog Time Spent: 10m 
  Work Description: mxm commented on issue #8585: [BEAM-6829] Use 
transform/pcollection name for metric namespace if none provided
URL: https://github.com/apache/beam/pull/8585#issuecomment-537633043
 
 
   Test failures are unrelated: 
https://builds.apache.org/job/beam_PreCommit_Java_Commit/7949/
   ```
   Test Result (2 failures / +2)
   
   org.apache.beam.sdk.io.TFRecordIOTest.testReadInvalidRecord
   
org.apache.beam.sdk.transforms.ParDoLifecycleTest.testTeardownCalledAfterExceptionInStartBundleStateful
   ```
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322156)
Time Spent: 2h 20m  (was: 2h 10m)

> Duplicate metric warnings clutter log
> -
>
> Key: BEAM-6829
> URL: https://issues.apache.org/jira/browse/BEAM-6829
> Project: Beam
>  Issue Type: Bug
>  Components: runner-flink
>Affects Versions: 2.11.0
>Reporter: Thomas Weise
>Assignee: Maximilian Michels
>Priority: Major
>  Labels: portability
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Logs fill up quickly with these warnings: 
> {code:java}
> WARN org.apache.flink.metrics.MetricGroup - Name collision: Group already 
> contains a Metric with the name ...{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (BEAM-8319) Errorprone 0.0.13 fails during JDK11 build

2019-10-02 Thread Kenneth Knowles (Jira)


[ 
https://issues.apache.org/jira/browse/BEAM-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943077#comment-16943077
 ] 

Kenneth Knowles commented on BEAM-8319:
---

See what the actual version of Guava used is when you comment it out. And I 
guess I don't even know what level of testing is required to upgrade. But 
generally it looks like 26.0-android is the best version for most of these 
libraries. I'm not sure if the libraries could use upgrading, too, or what.

> Errorprone 0.0.13 fails during JDK11 build
> --
>
> Key: BEAM-8319
> URL: https://issues.apache.org/jira/browse/BEAM-8319
> Project: Beam
>  Issue Type: New Feature
>  Components: sdk-java-core
>Reporter: Lukasz Gajowy
>Assignee: Lukasz Gajowy
>Priority: Major
>
> I'm using openjdk 1.11.02. After switching the version to:
> {code:java}
> javaVersion = 11 {code}
> in BeamModule Plugin and running
> {code:java}
> ./gradlew clean build -p sdks/java/code -xtest {code}
> building fails. I was able to run errorprone after upgrading it but had 
> problems with conflicting guava version. See more here: 
> https://issues.apache.org/jira/browse/BEAM-5085
>  
> Stacktrace:
> {code:java}
> org.gradle.api.tasks.TaskExecutionException: Execution failed for task 
> ':model:pipeline:compileJava'.
> at 
> org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$2.accept(ExecuteActionsTaskExecuter.java:121)
> at 
> org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$2.accept(ExecuteActionsTaskExecuter.java:117)
> at org.gradle.internal.Try$Failure.ifSuccessfulOrElse(Try.java:184)
> at 
> org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:110)
> at 
> org.gradle.api.internal.tasks.execution.ResolveIncrementalChangesTaskExecuter.execute(ResolveIncrementalChangesTaskExecuter.java:84)
> at 
> org.gradle.api.internal.tasks.execution.ResolveTaskOutputCachingStateExecuter.execute(ResolveTaskOutputCachingStateExecuter.java:91)
> at 
> org.gradle.api.internal.tasks.execution.FinishSnapshotTaskInputsBuildOperationTaskExecuter.execute(FinishSnapshotTaskInputsBuildOperationTaskExecuter.java:51)
> at 
> org.gradle.api.internal.tasks.execution.ResolveBuildCacheKeyExecuter.execute(ResolveBuildCacheKeyExecuter.java:102)
> at 
> org.gradle.api.internal.tasks.execution.ResolveBeforeExecutionStateTaskExecuter.execute(ResolveBeforeExecutionStateTaskExecuter.java:74)
> at 
> org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:58)
> at 
> org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:109)
> at 
> org.gradle.api.internal.tasks.execution.ResolveBeforeExecutionOutputsTaskExecuter.execute(ResolveBeforeExecutionOutputsTaskExecuter.java:67)
> at 
> org.gradle.api.internal.tasks.execution.StartSnapshotTaskInputsBuildOperationTaskExecuter.execute(StartSnapshotTaskInputsBuildOperationTaskExecuter.java:52)
> at 
> org.gradle.api.internal.tasks.execution.ResolveAfterPreviousExecutionStateTaskExecuter.execute(ResolveAfterPreviousExecutionStateTaskExecuter.java:46)
> at 
> org.gradle.api.internal.tasks.execution.CleanupStaleOutputsExecuter.execute(CleanupStaleOutputsExecuter.java:93)
> at 
> org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:45)
> at 
> org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:94)
> at 
> org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57)
> at 
> org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:56)
> at 
> org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36)
> at 
> org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:63)
> at 
> org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:49)
> at 
> org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:46)
> at 
> org.gradle.internal.operations.DefaultBuildOperationExecutor$CallableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:416)
> at 
> org.gradle.internal.operations.DefaultBuildOperationExecutor$CallableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:406)
> at 
> 

[jira] [Commented] (BEAM-8319) Errorprone 0.0.13 fails during JDK11 build

2019-10-02 Thread Kenneth Knowles (Jira)


[ 
https://issues.apache.org/jira/browse/BEAM-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943076#comment-16943076
 ] 

Kenneth Knowles commented on BEAM-8319:
---

{code:java}
$ ./gradlew :sdks:java:io:google-cloud-platform:dependencyReport
$ cat sdks/java/io/google-cloud-platform/build/reports/project/dependencies.txt

{code}
Trimmed to just the guava deps:
{code:java}
compile - Dependencies for source set 'main' (deprecated, use 'implementation' 
instead).
+--- project :sdks:java:extensions:google-cloud-platform-core
|+--- com.google.auth:google-auth-library-oauth2-http:0.12.0
||\--- com.google.guava:guava:20.0
|+--- com.google.api-client:google-api-client:1.27.0
||+--- com.google.oauth-client:google-oauth-client:1.27.0
|||\--- com.google.guava:guava:20.0
||\--- com.google.guava:guava:20.0
|+--- com.google.cloud.bigdataoss:gcsio:1.9.16
||+--- com.google.guava:guava:27.0.1-jre -> 20.0
||\--- com.google.cloud.bigdataoss:util:1.9.16
|| +--- com.google.guava:guava:27.0.1-jre -> 20.0
+--- com.google.api:gax-grpc:1.38.0
|+--- com.google.api:gax:1.38.0
||+--- com.google.guava:guava:26.0-android -> 20.0
||+--- com.google.api:api-common:1.7.0
|||\--- com.google.guava:guava:19.0 -> 20.0
|+--- io.grpc:grpc-stub:1.17.1
||\--- io.grpc:grpc-core:1.17.1
|| +--- com.google.guava:guava:26.0-android -> 20.0
|+--- io.grpc:grpc-protobuf:1.17.1
||+--- com.google.guava:guava:26.0-android -> 20.0
||\--- io.grpc:grpc-protobuf-lite:1.17.1
|| \--- com.google.guava:guava:26.0-android -> 20.0
|+--- com.google.guava:guava:26.0-android -> 20.0
|\--- io.grpc:grpc-alts:1.17.1
| \--- io.grpc:grpc-grpclb:1.17.1
|  \--- com.google.protobuf:protobuf-java-util:3.5.1 -> 3.6.0
|   +--- com.google.guava:guava:19.0 -> 20.0
+--- com.google.cloud:google-cloud-bigquerystorage:0.79.0-alpha
|+--- com.google.cloud:google-cloud-core:1.61.0
||+--- com.google.guava:guava:26.0-android -> 20.0
|+--- com.google.cloud:google-cloud-core-grpc:1.61.0
||+--- com.google.guava:guava:26.0-android -> 20.0
+--- com.google.cloud.bigtable:bigtable-client-core:1.8.0
|+--- com.google.guava:guava:26.0-android -> 20.0
|+--- com.google.cloud:google-cloud-core-http:1.55.0
||+--- com.google.guava:guava:26.0-android -> 20.0
||+--- com.google.api:gax-httpjson:0.52.0
|||+--- com.google.guava:guava:26.0-android -> 20.0
||+--- io.opencensus:opencensus-contrib-http-util:0.15.0
|||\--- com.google.guava:guava:20.0
+--- com.google.cloud.datastore:datastore-v1-proto-client:1.6.0
|\--- com.google.guava:guava:18.0 -> 20.0
+--- io.grpc:grpc-all:1.17.1
|+--- io.grpc:grpc-protobuf-nano:1.17.1
||\--- com.google.guava:guava:26.0-android -> 20.0

+--- com.google.guava:guava:20.0
 {code}
There are lots of 26.0-android, and some 19.0 and 27.0.1-jre. There are a couple of 
key items that are actually transitive 20.0 deps.
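One quick, stdlib-only way to audit such a report for requested vs. resolved Guava versions (a sketch, not a Beam tool; the `report` string below stands in for the full dependencies.txt):

```python
import re
from collections import Counter

# Excerpt standing in for build/reports/project/dependencies.txt.
report = r"""
|+--- com.google.guava:guava:20.0
||+--- com.google.guava:guava:26.0-android -> 20.0
|||\--- com.google.guava:guava:19.0 -> 20.0
"""

# Count requested vs. resolved Guava versions; "X -> Y" means Gradle
# requested X but conflict resolution picked Y.
requested = Counter()
resolved = Counter()
for m in re.finditer(r'com\.google\.guava:guava:(\S+)(?: -> (\S+))?', report):
    requested[m.group(1)] += 1
    resolved[m.group(2) or m.group(1)] += 1

print(sorted(resolved.items()))
```

Running this over the real report makes it easy to see which versions are merely requested and which one Gradle actually resolves everywhere.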

> Errorprone 0.0.13 fails during JDK11 build
> --
>
> Key: BEAM-8319
> URL: https://issues.apache.org/jira/browse/BEAM-8319
> Project: Beam
>  Issue Type: New Feature
>  Components: sdk-java-core
>Reporter: Lukasz Gajowy
>Assignee: Lukasz Gajowy
>Priority: Major
>
> I'm using openjdk 1.11.02. After switching the version to:
> {code:java}
> javaVersion = 11 {code}
> in BeamModule Plugin and running
> {code:java}
> ./gradlew clean build -p sdks/java/code -xtest {code}
> building fails. I was able to run errorprone after upgrading it but had 
> problems with conflicting guava version. See more here: 
> https://issues.apache.org/jira/browse/BEAM-5085
>  
> Stacktrace:
> {code:java}
> org.gradle.api.tasks.TaskExecutionException: Execution failed for task 
> ':model:pipeline:compileJava'.
> at 
> org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$2.accept(ExecuteActionsTaskExecuter.java:121)
> at 
> org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$2.accept(ExecuteActionsTaskExecuter.java:117)
> at org.gradle.internal.Try$Failure.ifSuccessfulOrElse(Try.java:184)
> at 
> org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:110)
> at 
> org.gradle.api.internal.tasks.execution.ResolveIncrementalChangesTaskExecuter.execute(ResolveIncrementalChangesTaskExecuter.java:84)
> at 
> org.gradle.api.internal.tasks.execution.ResolveTaskOutputCachingStateExecuter.execute(ResolveTaskOutputCachingStateExecuter.java:91)
> at 
> 

[jira] [Work logged] (BEAM-8334) Expose Language Options for testing

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8334?focusedWorklogId=322149&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322149
 ]

ASF GitHub Bot logged work on BEAM-8334:


Author: ASF GitHub Bot
Created on: 02/Oct/19 18:53
Start Date: 02/Oct/19 18:53
Worklog Time Spent: 10m 
  Work Description: amaliujia commented on pull request #9704: [BEAM-8334] 
Expose Language Options for testing
URL: https://github.com/apache/beam/pull/9704#discussion_r330715056
 
 

 ##
 File path: 
sdks/java/extensions/sql/src/main/java/org/apache/beam/sdk/extensions/sql/zetasql/SqlAnalyzer.java
 ##
 @@ -84,15 +84,19 @@ static Builder withQueryParams(Map params) {
* resolution strategy set in the context.
*/
   ResolvedStatement analyze(String sql) {
-    AnalyzerOptions options = initAnalyzerOptions(builder.queryParams);
+    AnalyzerOptions options = initAnalyzerOptions();
+    for (Map.Entry entry : builder.queryParams.entrySet()) {
 
 Review comment:
OK, maybe just a function, and we can leave more work for the future. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322149)
Time Spent: 50m  (was: 40m)

> Expose Language Options for testing
> ---
>
> Key: BEAM-8334
> URL: https://issues.apache.org/jira/browse/BEAM-8334
> Project: Beam
>  Issue Type: New Feature
>  Components: dsl-sql-zetasql
>Reporter: Andrew Pilloud
>Assignee: Andrew Pilloud
>Priority: Trivial
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Google has a set of compliance tests for ZetaSQL. The test framework needs 
> access to LanguageOptions to determine what tests are supported.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (BEAM-8287) Update documentation for Python 3 support after Beam 2.16.0.

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8287?focusedWorklogId=322139&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322139
 ]

ASF GitHub Bot logged work on BEAM-8287:


Author: ASF GitHub Bot
Created on: 02/Oct/19 18:43
Start Date: 02/Oct/19 18:43
Worklog Time Spent: 10m 
  Work Description: tvalentyn commented on pull request #9700: [BEAM-8287] 
Python3 GA docs updates
URL: https://github.com/apache/beam/pull/9700#discussion_r330710466
 
 

 ##
 File path: website/src/documentation/programming-guide.md
 ##
 @@ -39,6 +39,9 @@ how to implement Beam concepts in your pipelines.
   
 
 
+{:.language-py}
+New versions of the Python SDK will only support Python 3.5 or higher. 
Currently, the Python SDK still supports Python 2.7.x. We recommend using the 
latest Python 3 version.
 
 Review comment:
I think we can't say anything more specific than "soon" at this point.
   
   Sure, we can drop "new" and recommend the switch to all users.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322139)
Time Spent: 2.5h  (was: 2h 20m)

> Update documentation for Python 3 support after Beam 2.16.0.
> 
>
> Key: BEAM-8287
> URL: https://issues.apache.org/jira/browse/BEAM-8287
> Project: Beam
>  Issue Type: Sub-task
>  Components: website
>Reporter: Valentyn Tymofieiev
>Assignee: Cyrus Maden
>Priority: Major
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (BEAM-7049) Merge multiple input to one BeamUnionRel

2019-10-02 Thread Rui Wang (Jira)


[ 
https://issues.apache.org/jira/browse/BEAM-7049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943072#comment-16943072
 ] 

Rui Wang commented on BEAM-7049:


Sorry [~sridharG], I probably missed your message and indeed we were waiting on 
each other. You actually need some help on how to split UNION ALL and UNION, 
right? Give me some time to investigate and see what I can find. I will very 
likely check how FlinkSQL does this optimization, as they also use Calcite but 
are much more mature than us.

> Merge multiple input to one BeamUnionRel
> 
>
> Key: BEAM-7049
> URL: https://issues.apache.org/jira/browse/BEAM-7049
> Project: Beam
>  Issue Type: Improvement
>  Components: dsl-sql
>Reporter: Rui Wang
>Assignee: sridhar Reddy
>Priority: Major
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> BeamUnionRel assumes inputs are two and rejects more. So `a UNION b UNION c` 
> will have to be created as UNION(a, UNION(b, c)) and have two shuffles. If 
> BeamUnionRel can handle multiple inputs, we will have only one shuffle.
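A minimal sketch of the proposed rewrite (plain Python, not Calcite/Beam code): recursively flatten nested binary UNIONs of the same kind into a single n-ary union. As noted in the comment above, UNION ALL and UNION (distinct) must not be merged across each other, so only same-kind children are collapsed:

```python
def flatten_union(node):
    """node is a leaf (str) or a tuple (kind, left, right) with kind in
    {'UNION', 'UNION_ALL'}; returns (kind, inputs) with merged inputs."""
    if isinstance(node, tuple):
        kind = node[0]
        inputs = []
        for child in node[1:]:
            # Absorb a child union only when it has the parent's kind;
            # a mixed-kind child stays as its own (flattened) subtree.
            if isinstance(child, tuple) and child[0] == kind:
                inputs.extend(flatten_union(child)[1])
            else:
                inputs.append(flatten_union(child))
        return (kind, inputs)
    return node

# UNION(a, UNION(b, c)) becomes one 3-input UNION, i.e. a single shuffle.
print(flatten_union(('UNION', 'a', ('UNION', 'b', 'c'))))
```

In BeamUnionRel terms, the n-ary node can then be translated to one Flatten plus one grouping step instead of one per binary union.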



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (BEAM-8287) Update documentation for Python 3 support after Beam 2.16.0.

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8287?focusedWorklogId=322132&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322132
 ]

ASF GitHub Bot logged work on BEAM-8287:


Author: ASF GitHub Bot
Created on: 02/Oct/19 18:39
Start Date: 02/Oct/19 18:39
Worklog Time Spent: 10m 
  Work Description: tvalentyn commented on issue #9700: [BEAM-8287] Python3 
GA docs updates
URL: https://github.com/apache/beam/pull/9700#issuecomment-537622584
 
 
   We don't have a sunsetting doc yet, but the Beam SDK itself comes with a 
warning that Python 2 will be sunset in a future release. We can link   
https://lists.apache.org/thread.html/eba6caa58ea79a7ecbc8560d1c680a366b44c531d96ce5c699d41535@%3Cdev.beam.apache.org%3E
 which has a discussion.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322132)
Time Spent: 2h 20m  (was: 2h 10m)

> Update documentation for Python 3 support after Beam 2.16.0.
> 
>
> Key: BEAM-8287
> URL: https://issues.apache.org/jira/browse/BEAM-8287
> Project: Beam
>  Issue Type: Sub-task
>  Components: website
>Reporter: Valentyn Tymofieiev
>Assignee: Cyrus Maden
>Priority: Major
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (BEAM-8334) Expose Language Options for testing

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8334?focusedWorklogId=322133=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322133
 ]

ASF GitHub Bot logged work on BEAM-8334:


Author: ASF GitHub Bot
Created on: 02/Oct/19 18:39
Start Date: 02/Oct/19 18:39
Worklog Time Spent: 10m 
  Work Description: apilloud commented on pull request #9704: [BEAM-8334] 
Expose Language Options for testing
URL: https://github.com/apache/beam/pull/9704#discussion_r330708770
 
 

 ##
 File path: 
sdks/java/extensions/sql/src/main/java/org/apache/beam/sdk/extensions/sql/zetasql/SqlAnalyzer.java
 ##
 @@ -84,15 +84,19 @@ static Builder withQueryParams(Map params) {
* resolution strategy set in the context.
*/
   ResolvedStatement analyze(String sql) {
-AnalyzerOptions options = initAnalyzerOptions(builder.queryParams);
+AnalyzerOptions options = initAnalyzerOptions();
+for (Map.Entry entry : builder.queryParams.entrySet()) {
 
 Review comment:
   I agree that we should have a single place that performs common init of 
AnalyzerOptions, and I think this change still achieves that goal. The query 
parameters are only set in the analyze path, so I think it actually creates 
more confusion to move it into a separate function. I have no strong opinions 
and can add a function if you'd like one.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322133)
Time Spent: 40m  (was: 0.5h)

> Expose Language Options for testing
> ---
>
> Key: BEAM-8334
> URL: https://issues.apache.org/jira/browse/BEAM-8334
> Project: Beam
>  Issue Type: New Feature
>  Components: dsl-sql-zetasql
>Reporter: Andrew Pilloud
>Assignee: Andrew Pilloud
>Priority: Trivial
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Google has a set of compliance tests for ZetaSQL. The test framework needs 
> access to LanguageOptions to determine what tests are supported.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (BEAM-8287) Update documentation for Python 3 support after Beam 2.16.0.

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8287?focusedWorklogId=322129=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322129
 ]

ASF GitHub Bot logged work on BEAM-8287:


Author: ASF GitHub Bot
Created on: 02/Oct/19 18:31
Start Date: 02/Oct/19 18:31
Worklog Time Spent: 10m 
  Work Description: tvalentyn commented on issue #9700: [BEAM-8287] Python3 
GA docs updates
URL: https://github.com/apache/beam/pull/9700#issuecomment-537622584
 
 
   We don't have a sunsetting doc yet.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322129)
Time Spent: 2h 10m  (was: 2h)

> Update documentation for Python 3 support after Beam 2.16.0.
> 
>
> Key: BEAM-8287
> URL: https://issues.apache.org/jira/browse/BEAM-8287
> Project: Beam
>  Issue Type: Sub-task
>  Components: website
>Reporter: Valentyn Tymofieiev
>Assignee: Cyrus Maden
>Priority: Major
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (BEAM-8287) Update documentation for Python 3 support after Beam 2.16.0.

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8287?focusedWorklogId=322128=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322128
 ]

ASF GitHub Bot logged work on BEAM-8287:


Author: ASF GitHub Bot
Created on: 02/Oct/19 18:31
Start Date: 02/Oct/19 18:31
Worklog Time Spent: 10m 
  Work Description: tvalentyn commented on pull request #9700: [BEAM-8287] 
Python3 GA docs updates
URL: https://github.com/apache/beam/pull/9700#discussion_r330705073
 
 

 ##
 File path: website/src/roadmap/python-sdk.md
 ##
 @@ -22,7 +22,7 @@ limitations under the License.
 
 ## Python 3 Support
 
 Review comment:
   For example, we won't have Dataflow containers for 3.8 released with 2.16.0, 
we won't have tests until we add them, and some dependencies may have issues on 
3.8 (e.g. dill). Beam itself on the direct runner may just work, but it would 
be safer to say 3.5, 3.6, 3.7, as those are the classifiers we add on PyPI.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322128)
Time Spent: 2h  (was: 1h 50m)

> Update documentation for Python 3 support after Beam 2.16.0.
> 
>
> Key: BEAM-8287
> URL: https://issues.apache.org/jira/browse/BEAM-8287
> Project: Beam
>  Issue Type: Sub-task
>  Components: website
>Reporter: Valentyn Tymofieiev
>Assignee: Cyrus Maden
>Priority: Major
>  Time Spent: 2h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (BEAM-8287) Update documentation for Python 3 support after Beam 2.16.0.

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8287?focusedWorklogId=322120=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322120
 ]

ASF GitHub Bot logged work on BEAM-8287:


Author: ASF GitHub Bot
Created on: 02/Oct/19 18:25
Start Date: 02/Oct/19 18:25
Worklog Time Spent: 10m 
  Work Description: aaltay commented on pull request #9700: [BEAM-8287] 
Python3 GA docs updates
URL: https://github.com/apache/beam/pull/9700#discussion_r330336476
 
 

 ##
 File path: website/src/roadmap/python-sdk.md
 ##
 @@ -22,7 +22,7 @@ limitations under the License.
 
 ## Python 3 Support
 
 Review comment:
   @tvalentyn Should we say Python 3.5 or higher, or spell out 3.5, 3.6, 3.7? 
Would we support 3.8 automatically?
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322120)
Time Spent: 1h 50m  (was: 1h 40m)

> Update documentation for Python 3 support after Beam 2.16.0.
> 
>
> Key: BEAM-8287
> URL: https://issues.apache.org/jira/browse/BEAM-8287
> Project: Beam
>  Issue Type: Sub-task
>  Components: website
>Reporter: Valentyn Tymofieiev
>Assignee: Cyrus Maden
>Priority: Major
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (BEAM-8287) Update documentation for Python 3 support after Beam 2.16.0.

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8287?focusedWorklogId=322118=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322118
 ]

ASF GitHub Bot logged work on BEAM-8287:


Author: ASF GitHub Bot
Created on: 02/Oct/19 18:25
Start Date: 02/Oct/19 18:25
Worklog Time Spent: 10m 
  Work Description: aaltay commented on pull request #9700: [BEAM-8287] 
Python3 GA docs updates
URL: https://github.com/apache/beam/pull/9700#discussion_r330336312
 
 

 ##
 File path: website/src/roadmap/python-sdk.md
 ##
 @@ -22,7 +22,7 @@ limitations under the License.
 
 ## Python 3 Support
 
-Apache Beam first offered Python 3.5 support with the 2.11.0 SDK release and 
added Python 3.6, Python 3.7 support with the 2.14.0 version. However, we 
continue to polish some [rough 
edges](https://issues.apache.org/jira/browse/BEAM-1251?focusedCommentId=16890504=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-1689050)
 and strengthen Beam's Python 3 offering:
+Apache Beam supports Python 3.5 or higher with the 2.16.0 SDK release. Python 
3.5 beta support was first offered with the 2.11.0 SDK release; beta support 
for Python 3.6, Python 3.7 was added with the 2.14.0 version. We continue to 
polish some [rough 
edges](https://issues.apache.org/jira/browse/BEAM-1251?focusedCommentId=16890504=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-1689050)
 and strengthen Beam's Python 3 offering:
 
 Review comment:
   I think as of this release we are confident enough to say:
   
   ```
   As of Apache Beam 2.16.0, Beam supports Python 3.5 and higher.
   
   
   ```
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322118)
Time Spent: 1h 40m  (was: 1.5h)

> Update documentation for Python 3 support after Beam 2.16.0.
> 
>
> Key: BEAM-8287
> URL: https://issues.apache.org/jira/browse/BEAM-8287
> Project: Beam
>  Issue Type: Sub-task
>  Components: website
>Reporter: Valentyn Tymofieiev
>Assignee: Cyrus Maden
>Priority: Major
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (BEAM-8287) Update documentation for Python 3 support after Beam 2.16.0.

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8287?focusedWorklogId=322119=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322119
 ]

ASF GitHub Bot logged work on BEAM-8287:


Author: ASF GitHub Bot
Created on: 02/Oct/19 18:25
Start Date: 02/Oct/19 18:25
Worklog Time Spent: 10m 
  Work Description: aaltay commented on pull request #9700: [BEAM-8287] 
Python3 GA docs updates
URL: https://github.com/apache/beam/pull/9700#discussion_r330335888
 
 

 ##
 File path: website/src/get-started/quickstart-py.md
 ##
 @@ -27,6 +27,8 @@ If you're interested in contributing to the Apache Beam 
Python codebase, see the
 * TOC
 {:toc}
 
+New versions of the Python SDK will only support Python 3.5 or higher. 
Currently, the Python SDK still supports Python 2.7.x. We recommend using the 
latest Python 3 version.
 
 Review comment:
   Should we add a deprecation warning here? And probably to the download page 
and to the 2.16 blog post.
   
   /cc @markflyhigh 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322119)
Time Spent: 1h 40m  (was: 1.5h)

> Update documentation for Python 3 support after Beam 2.16.0.
> 
>
> Key: BEAM-8287
> URL: https://issues.apache.org/jira/browse/BEAM-8287
> Project: Beam
>  Issue Type: Sub-task
>  Components: website
>Reporter: Valentyn Tymofieiev
>Assignee: Cyrus Maden
>Priority: Major
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (BEAM-8287) Update documentation for Python 3 support after Beam 2.16.0.

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8287?focusedWorklogId=322117=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322117
 ]

ASF GitHub Bot logged work on BEAM-8287:


Author: ASF GitHub Bot
Created on: 02/Oct/19 18:25
Start Date: 02/Oct/19 18:25
Worklog Time Spent: 10m 
  Work Description: aaltay commented on pull request #9700: [BEAM-8287] 
Python3 GA docs updates
URL: https://github.com/apache/beam/pull/9700#discussion_r330335767
 
 

 ##
 File path: website/src/documentation/programming-guide.md
 ##
 @@ -39,6 +39,9 @@ how to implement Beam concepts in your pipelines.
   
 
 
+{:.language-py}
+New versions of the Python SDK will only support Python 3.5 or higher. 
Currently, the Python SDK still supports Python 2.7.x. We recommend using the 
latest Python 3 version.
 
 Review comment:
   Is there  a link to a site/document/mailing list discussion we can link to 
related to sunsetting plan?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322117)
Time Spent: 1h 40m  (was: 1.5h)

> Update documentation for Python 3 support after Beam 2.16.0.
> 
>
> Key: BEAM-8287
> URL: https://issues.apache.org/jira/browse/BEAM-8287
> Project: Beam
>  Issue Type: Sub-task
>  Components: website
>Reporter: Valentyn Tymofieiev
>Assignee: Cyrus Maden
>Priority: Major
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (BEAM-8286) Python precommit (:sdks:python:test-suites:tox:py2:docs) failing

2019-10-02 Thread Ahmet Altay (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmet Altay reassigned BEAM-8286:
-

Assignee: Kyle Weaver

> Python precommit (:sdks:python:test-suites:tox:py2:docs) failing
> 
>
> Key: BEAM-8286
> URL: https://issues.apache.org/jira/browse/BEAM-8286
> Project: Beam
>  Issue Type: Bug
>  Components: test-failures
>Reporter: Kyle Weaver
>Assignee: Kyle Weaver
>Priority: Major
> Fix For: Not applicable
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Example failure: 
> [https://builds.apache.org/job/beam_PreCommit_Python_Commit/8638/console]
> 17:29:13 * What went wrong:
> 17:29:13 Execution failed for task ':sdks:python:test-suites:tox:py2:docs'.
> 17:29:13 > Process 'command 'sh'' finished with non-zero exit value 1
> Fails on my local machine (on head) as well. Can't determine exact cause.
> ERROR: InvocationError for command /usr/bin/time scripts/generate_pydoc.sh 
> (exited with code 1) 
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (BEAM-8286) Python precommit (:sdks:python:test-suites:tox:py2:docs) failing

2019-10-02 Thread Ahmet Altay (Jira)


[ 
https://issues.apache.org/jira/browse/BEAM-8286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943060#comment-16943060
 ] 

Ahmet Altay commented on BEAM-8286:
---

Filed a new issue: https://github.com/googleapis/google-cloud-python/issues/9386

I agree with [~ibzib] that it might be better to remove the intersphinx 
references. We cannot guarantee that they will not break in the future.
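For context, intersphinx cross-references come from a mapping in Sphinx's `conf.py`, roughly like the fragment below (illustrative values only; Beam's actual mapping may differ). Each entry points at a remote inventory, so a moved or broken remote can fail the docs build, and removing the entry drops the external links:

```python
# Sphinx conf.py fragment (illustrative; not Beam's exact configuration).
# Each entry maps a name to a base URL plus an objects.inv inventory
# location (None means "<base URL>/objects.inv").
intersphinx_mapping = {
    'python': ('https://docs.python.org/3', None),
}
```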

> Python precommit (:sdks:python:test-suites:tox:py2:docs) failing
> 
>
> Key: BEAM-8286
> URL: https://issues.apache.org/jira/browse/BEAM-8286
> Project: Beam
>  Issue Type: Bug
>  Components: test-failures
>Reporter: Kyle Weaver
>Priority: Major
> Fix For: Not applicable
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Example failure: 
> [https://builds.apache.org/job/beam_PreCommit_Python_Commit/8638/console]
> 17:29:13 * What went wrong:
> 17:29:13 Execution failed for task ':sdks:python:test-suites:tox:py2:docs'.
> 17:29:13 > Process 'command 'sh'' finished with non-zero exit value 1
> Fails on my local machine (on head) as well. Can't determine exact cause.
> ERROR: InvocationError for command /usr/bin/time scripts/generate_pydoc.sh 
> (exited with code 1) 
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (BEAM-8334) Expose Language Options for testing

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8334?focusedWorklogId=322111=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322111
 ]

ASF GitHub Bot logged work on BEAM-8334:


Author: ASF GitHub Bot
Created on: 02/Oct/19 18:17
Start Date: 02/Oct/19 18:17
Worklog Time Spent: 10m 
  Work Description: amaliujia commented on pull request #9704: [BEAM-8334] 
Expose Language Options for testing
URL: https://github.com/apache/beam/pull/9704#discussion_r330698802
 
 

 ##
 File path: 
sdks/java/extensions/sql/src/main/java/org/apache/beam/sdk/extensions/sql/zetasql/SqlAnalyzer.java
 ##
 @@ -84,15 +84,19 @@ static Builder withQueryParams(Map params) {
* resolution strategy set in the context.
*/
   ResolvedStatement analyze(String sql) {
-AnalyzerOptions options = initAnalyzerOptions(builder.queryParams);
+AnalyzerOptions options = initAnalyzerOptions();
+for (Map.Entry entry : builder.queryParams.entrySet()) {
 
 Review comment:
   I am slightly against this move, because the original intention was to 
initialize `AnalyzerOptions` in a single place to avoid many scattered pieces 
of initialization (which would be hard to maintain). I say "slightly" because 
I can see why you are moving it: it will require static parameters as input.
   
   
   What do you think about the idea of having an AnalyzerOptions builder with 
default settings on its fields?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322111)
Time Spent: 0.5h  (was: 20m)

> Expose Language Options for testing
> ---
>
> Key: BEAM-8334
> URL: https://issues.apache.org/jira/browse/BEAM-8334
> Project: Beam
>  Issue Type: New Feature
>  Components: dsl-sql-zetasql
>Reporter: Andrew Pilloud
>Assignee: Andrew Pilloud
>Priority: Trivial
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Google has a set of compliance tests for ZetaSQL. The test framework needs 
> access to LanguageOptions to determine what tests are supported.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (BEAM-8213) Run and report python tox tasks separately within Jenkins

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8213?focusedWorklogId=322106=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322106
 ]

ASF GitHub Bot logged work on BEAM-8213:


Author: ASF GitHub Bot
Created on: 02/Oct/19 18:07
Start Date: 02/Oct/19 18:07
Worklog Time Spent: 10m 
  Work Description: chadrik commented on issue #9706: [BEAM-8213] Split out 
lint job from monolithic python preCommit tests on jenkins
URL: https://github.com/apache/beam/pull/9706#issuecomment-537613153
 
 
   Run Seed Job
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322106)
Time Spent: 10.5h  (was: 10h 20m)

> Run and report python tox tasks separately within Jenkins
> -
>
> Key: BEAM-8213
> URL: https://issues.apache.org/jira/browse/BEAM-8213
> Project: Beam
>  Issue Type: Improvement
>  Components: build-system
>Reporter: Chad Dombrova
>Assignee: Chad Dombrova
>Priority: Major
>  Time Spent: 10.5h
>  Remaining Estimate: 0h
>
> As a python developer, the speed and comprehensibility of the jenkins 
> PreCommit job could be greatly improved.
> Here are some of the problems
> - when a lint job fails, it's not reported in the test results summary, so 
> even though the job is marked as failed, I see "Test Result (no failures)" 
> which is quite confusing
> - I have to wait for over an hour to discover the lint failed, which takes 
> about a minute to run on its own
> - The logs are a jumbled mess of all the different tasks running on top of 
> each other
> - The test results give no indication of which version of python they use.  I 
> click on Test results, then the test module, then the test class, then I see 
> 4 tests named the same thing.  I assume that the first is python 2.7, the 
> second is 3.5 and so on.   It takes 5 clicks and then reading the log output 
> to know which version of python a single error pertains to, then I need to 
> repeat for each failure.  This makes it very difficult to discover problems, 
> and deduce that they may have something to do with python version mismatches.
> I believe the solution to this is to split up the single monolithic python 
> PreCommit job into sub-jobs (possibly using a pipeline with steps).  This 
> would give us the following benefits:
> - sub job results should become available as they finish, so for example, 
> lint results should be available very early on
> - sub job results will be reported separately, and there will be a job for 
> each py2, py35, py36 and so on, so it will be clear when an error is related 
> to a particular python version
> - sub jobs without reports, like docs and lint, will have their own failure 
> status and logs, so when they fail it will be more obvious what went wrong.
> I'm happy to help out once I get some feedback on the desired way forward.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (BEAM-7933) Adding timeout to JobServer grpc calls

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-7933?focusedWorklogId=322105=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322105
 ]

ASF GitHub Bot logged work on BEAM-7933:


Author: ASF GitHub Bot
Created on: 02/Oct/19 18:04
Start Date: 02/Oct/19 18:04
Worklog Time Spent: 10m 
  Work Description: ibzib commented on issue #9673: [BEAM-7933] Add job 
server request timeout (default to 60 seconds)
URL: https://github.com/apache/beam/pull/9673#issuecomment-537612066
 
 
   Run Python PreCommit
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322105)
Time Spent: 3h 10m  (was: 3h)

> Adding timeout to JobServer grpc calls
> --
>
> Key: BEAM-7933
> URL: https://issues.apache.org/jira/browse/BEAM-7933
> Project: Beam
>  Issue Type: Improvement
>  Components: sdk-py-core
>Affects Versions: 2.14.0
>Reporter: Enrico Canzonieri
>Assignee: Enrico Canzonieri
>Priority: Minor
>  Labels: portability
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> grpc calls to the JobServer from the Python SDK do not have timeouts. That 
> means that the call to pipeline.run() could hang forever if the JobServer is 
> not running (or failing to start).
> E.g. 
> [https://github.com/apache/beam/blob/master/sdks/python/apache_beam/runners/portability/portable_runner.py#L307]
>  the call to Prepare() doesn't provide any timeout value, and the same applies 
> to other JobServer requests.
> As part of this ticket we could add a default timeout of 60 seconds, similar 
> to the default timeout for the HTTP client.
> Additionally, we could consider adding a --job-server-request-timeout to the 
> [PortableOptions|https://github.com/apache/beam/blob/master/sdks/python/apache_beam/options/pipeline_options.py#L805]
>  class to be used in the JobServer interactions inside portable_runner.py.
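A minimal sketch of the pattern, in plain Python with a stand-in stub. The real change would wrap the generated gRPC JobService stub, whose method calls accept a `timeout` keyword argument; `TimeoutStub` and `FakeStub` here are hypothetical names for illustration:

```python
import functools

DEFAULT_TIMEOUT_SECS = 60  # proposed default from this ticket

class TimeoutStub:
    """Wrap a stub so every call gets a default timeout unless overridden.

    Hypothetical helper; it relies only on the fact that gRPC method
    stubs accept a `timeout` keyword argument per call.
    """
    def __init__(self, stub, timeout=DEFAULT_TIMEOUT_SECS):
        self._stub = stub
        self._timeout = timeout

    def __getattr__(self, name):
        # Delegate attribute lookup to the wrapped stub, pre-binding
        # the default timeout onto whatever method is fetched.
        method = getattr(self._stub, name)
        return functools.partial(method, timeout=self._timeout)

# Stand-in for a generated JobService stub, for demonstration only.
class FakeStub:
    def Prepare(self, request, timeout=None):
        return ("prepared", request, timeout)

stub = TimeoutStub(FakeStub())
print(stub.Prepare("req"))  # ('prepared', 'req', 60)
```

A pipeline-option-driven timeout (the proposed --job-server-request-timeout) would simply feed the wrapper's `timeout` argument instead of the constant.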



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (BEAM-7981) ParDo function wrapper doesn't support Iterable output types

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-7981?focusedWorklogId=322103=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322103
 ]

ASF GitHub Bot logged work on BEAM-7981:


Author: ASF GitHub Bot
Created on: 02/Oct/19 18:01
Start Date: 02/Oct/19 18:01
Worklog Time Spent: 10m 
  Work Description: udim commented on issue #9708: [BEAM-7981] Fix double 
iterable stripping
URL: https://github.com/apache/beam/pull/9708#issuecomment-537611027
 
 
   python :docs failure should be fixed in 
https://github.com/apache/beam/pull/9714
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322103)
Time Spent: 20m  (was: 10m)

> ParDo function wrapper doesn't support Iterable output types
> 
>
> Key: BEAM-7981
> URL: https://issues.apache.org/jira/browse/BEAM-7981
> Project: Beam
>  Issue Type: Bug
>  Components: sdk-py-core
>Reporter: Udi Meiri
>Assignee: Udi Meiri
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> I believe the bug is in CallableWrapperDoFn.default_type_hints, which 
> converts Iterable[str] to str.
> This test will be included (commented out) in 
> https://github.com/apache/beam/pull/9283
> {code}
>   def test_typed_callable_iterable_output(self):
> @typehints.with_input_types(int)
> @typehints.with_output_types(typehints.Iterable[str])
> def do_fn(element):
>   return [[str(element)] * 2]
> result = [1, 2] | beam.ParDo(do_fn)
> self.assertEqual([['1', '1'], ['2', '2']], sorted(result))
> {code}
> Result:
> {code}
> ==
> ERROR: test_typed_callable_iterable_output 
> (apache_beam.typehints.typed_pipeline_test.MainInputTest)
> --
> Traceback (most recent call last):
>   File 
> "/usr/local/google/home/ehudm/src/beam/sdks/python/apache_beam/typehints/typed_pipeline_test.py",
>  line 104, in test_typed_callable_iterable_output
> result = [1, 2] | beam.ParDo(do_fn)
>   File 
> "/usr/local/google/home/ehudm/src/beam/sdks/python/apache_beam/transforms/ptransform.py",
>  line 519, in __ror__
> p.run().wait_until_finish()
>   File 
> "/usr/local/google/home/ehudm/src/beam/sdks/python/apache_beam/pipeline.py", 
> line 406, in run
> self._options).run(False)
>   File 
> "/usr/local/google/home/ehudm/src/beam/sdks/python/apache_beam/pipeline.py", 
> line 419, in run
> return self.runner.run_pipeline(self, self._options)
>   File 
> "/usr/local/google/home/ehudm/src/beam/sdks/python/apache_beam/runners/direct/direct_runner.py",
>  line 129, in run_pipeline
> return runner.run_pipeline(pipeline, options)
>   File 
> "/usr/local/google/home/ehudm/src/beam/sdks/python/apache_beam/runners/portability/fn_api_runner.py",
>  line 366, in run_pipeline
> default_environment=self._default_environment))
>   File 
> "/usr/local/google/home/ehudm/src/beam/sdks/python/apache_beam/runners/portability/fn_api_runner.py",
>  line 373, in run_via_runner_api
> return self.run_stages(stage_context, stages)
>   File 
> "/usr/local/google/home/ehudm/src/beam/sdks/python/apache_beam/runners/portability/fn_api_runner.py",
>  line 455, in run_stages
> stage_context.safe_coders)
>   File 
> "/usr/local/google/home/ehudm/src/beam/sdks/python/apache_beam/runners/portability/fn_api_runner.py",
>  line 733, in _run_stage
> result, splits = bundle_manager.process_bundle(data_input, data_output)
>   File 
> "/usr/local/google/home/ehudm/src/beam/sdks/python/apache_beam/runners/portability/fn_api_runner.py",
>  line 1663, in process_bundle
> part, expected_outputs), part_inputs):
>   File "/usr/lib/python3.7/concurrent/futures/_base.py", line 586, in 
> result_iterator
> yield fs.pop().result()
>   File "/usr/lib/python3.7/concurrent/futures/_base.py", line 432, in result
> return self.__get_result()
>   File "/usr/lib/python3.7/concurrent/futures/_base.py", line 384, in 
> __get_result
> raise self._exception
>   File "/usr/lib/python3.7/concurrent/futures/thread.py", line 57, in run
> result = self.fn(*self.args, **self.kwargs)
>   File 
> "/usr/local/google/home/ehudm/src/beam/sdks/python/apache_beam/runners/portability/fn_api_runner.py",
>  line 1663, in 
> part, expected_outputs), part_inputs):
>   File 
> 

[jira] [Work logged] (BEAM-8286) Python precommit (:sdks:python:test-suites:tox:py2:docs) failing

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8286?focusedWorklogId=322102=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322102
 ]

ASF GitHub Bot logged work on BEAM-8286:


Author: ASF GitHub Bot
Created on: 02/Oct/19 18:01
Start Date: 02/Oct/19 18:01
Worklog Time Spent: 10m 
  Work Description: udim commented on pull request #9714: [BEAM-8286] move 
datastore intersphinx link again
URL: https://github.com/apache/beam/pull/9714
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322102)
Time Spent: 1.5h  (was: 1h 20m)

> Python precommit (:sdks:python:test-suites:tox:py2:docs) failing
> 
>
> Key: BEAM-8286
> URL: https://issues.apache.org/jira/browse/BEAM-8286
> Project: Beam
>  Issue Type: Bug
>  Components: test-failures
>Reporter: Kyle Weaver
>Priority: Major
> Fix For: Not applicable
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Example failure: 
> [https://builds.apache.org/job/beam_PreCommit_Python_Commit/8638/console]
> 17:29:13 * What went wrong:
> 17:29:13 Execution failed for task ':sdks:python:test-suites:tox:py2:docs'.
> 17:29:13 > Process 'command 'sh'' finished with non-zero exit value 1
> Fails on my local machine (on head) as well. Can't determine exact cause.
> ERROR: InvocationError for command /usr/bin/time scripts/generate_pydoc.sh 
> (exited with code 1) 
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (BEAM-8303) Filesystems not properly registered using FileIO.write()

2019-10-02 Thread Maximilian Michels (Jira)


[ 
https://issues.apache.org/jira/browse/BEAM-8303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943039#comment-16943039
 ] 

Maximilian Michels commented on BEAM-8303:
--

Backported to {{release-2.16.0}}. I will close this once it is clear whether it 
can be included in the next RC.

> Filesystems not properly registered using FileIO.write()
> 
>
> Key: BEAM-8303
> URL: https://issues.apache.org/jira/browse/BEAM-8303
> Project: Beam
>  Issue Type: Bug
>  Components: sdk-java-core
>Affects Versions: 2.15.0
>Reporter: Preston Koprivica
>Assignee: Maximilian Michels
>Priority: Critical
> Fix For: 2.17.0
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> I’m getting the following error when attempting to use the FileIO apis 
> (beam-2.15.0) and integrating with AWS S3.  I have setup the PipelineOptions 
> with all the relevant AWS options, so the filesystem registry **should** be 
> properly seeded by the time the graph is compiled and executed:
> {code:java}
>  java.lang.IllegalArgumentException: No filesystem found for scheme s3
>     at 
> org.apache.beam.sdk.io.FileSystems.getFileSystemInternal(FileSystems.java:456)
>     at 
> org.apache.beam.sdk.io.FileSystems.matchNewResource(FileSystems.java:526)
>     at 
> org.apache.beam.sdk.io.FileBasedSink$FileResultCoder.decode(FileBasedSink.java:1149)
>     at 
> org.apache.beam.sdk.io.FileBasedSink$FileResultCoder.decode(FileBasedSink.java:1105)
>     at org.apache.beam.sdk.coders.Coder.decode(Coder.java:159)
>     at 
> org.apache.beam.sdk.transforms.join.UnionCoder.decode(UnionCoder.java:83)
>     at 
> org.apache.beam.sdk.transforms.join.UnionCoder.decode(UnionCoder.java:32)
>     at 
> org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.decode(WindowedValue.java:543)
>     at 
> org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.decode(WindowedValue.java:534)
>     at 
> org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.decode(WindowedValue.java:480)
>     at 
> org.apache.beam.runners.flink.translation.types.CoderTypeSerializer.deserialize(CoderTypeSerializer.java:93)
>     at 
> org.apache.flink.runtime.plugable.NonReusingDeserializationDelegate.read(NonReusingDeserializationDelegate.java:55)
>     at 
> org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.getNextRecord(SpillingAdaptiveSpanningRecordDeserializer.java:106)
>     at 
> org.apache.flink.runtime.io.network.api.reader.AbstractRecordReader.getNextRecord(AbstractRecordReader.java:72)
>     at 
> org.apache.flink.runtime.io.network.api.reader.MutableRecordReader.next(MutableRecordReader.java:47)
>     at 
> org.apache.flink.runtime.operators.util.ReaderIterator.next(ReaderIterator.java:73)
>     at 
> org.apache.flink.runtime.operators.FlatMapDriver.run(FlatMapDriver.java:107)
>     at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:503)
>     at org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:368)
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
>     at java.lang.Thread.run(Thread.java:748)
>  {code}
> For reference, the write code resembles this:
> {code:java}
>  FileIO.Write write = FileIO.write()
>     .via(ParquetIO.sink(schema))
>     .to(options.getOutputDir()) // will be something like: 
> s3:///
>     .withSuffix(".parquet");
> records.apply(String.format("Write(%s)", options.getOutputDir()), 
> write);{code}
> The issue does not appear to be related to ParquetIO.sink().  I am able to 
> reliably reproduce the issue using JSON formatted records and TextIO.sink(), 
> as well.  Moreover, AvroIO is affected if withWindowedWrites() option is 
> added.
> Just trying some different knobs, I went ahead and set the following option:
> {code:java}
> write = write.withNoSpilling();{code}
> This actually seemed to fix the issue, only to have it reemerge as I scaled 
> up the data set size.  The stack trace, while very similar, reads:
> {code:java}
>  java.lang.IllegalArgumentException: No filesystem found for scheme s3
>     at 
> org.apache.beam.sdk.io.FileSystems.getFileSystemInternal(FileSystems.java:456)
>     at 
> org.apache.beam.sdk.io.FileSystems.matchNewResource(FileSystems.java:526)
>     at 
> org.apache.beam.sdk.io.FileBasedSink$FileResultCoder.decode(FileBasedSink.java:1149)
>     at 
> org.apache.beam.sdk.io.FileBasedSink$FileResultCoder.decode(FileBasedSink.java:1105)
>     at org.apache.beam.sdk.coders.Coder.decode(Coder.java:159)
>     at org.apache.beam.sdk.coders.KvCoder.decode(KvCoder.java:82)
>     at org.apache.beam.sdk.coders.KvCoder.decode(KvCoder.java:36)
>     at 
> org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.decode(WindowedValue.java:543)
>   

[jira] [Updated] (BEAM-8288) Cleanup Interactive Beam Python 2 support

2019-10-02 Thread Ning Kang (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ning Kang updated BEAM-8288:

Component/s: (was: examples-python)
 runner-py-interactive

> Cleanup Interactive Beam Python 2 support
> -
>
> Key: BEAM-8288
> URL: https://issues.apache.org/jira/browse/BEAM-8288
> Project: Beam
>  Issue Type: Improvement
>  Components: runner-py-interactive
>Reporter: Ning Kang
>Assignee: Ning Kang
>Priority: Minor
>
> As Beam is retiring Python 2, some special handle in Interactive Beam code 
> and tests will need to be cleaned up.
> This Jira ticket tracks those changes to be cleaned up.





[jira] [Commented] (BEAM-8337) Publish portable job server container images

2019-10-02 Thread Kyle Weaver (Jira)


[ 
https://issues.apache.org/jira/browse/BEAM-8337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943038#comment-16943038
 ] 

Kyle Weaver commented on BEAM-8337:
---

I imagine this would be a process.

However, we should first consider whether it's even worth doing – that is, 
whether job server container images would be useful to users. If their use is 
mostly confined to testing, it's probably not worth the effort to publish them.

> Publish portable job server container images
> 
>
> Key: BEAM-8337
> URL: https://issues.apache.org/jira/browse/BEAM-8337
> Project: Beam
>  Issue Type: Improvement
>  Components: runner-flink, runner-spark
>Reporter: Kyle Weaver
>Priority: Major
>
> Could be added to the release process similar to how we now publish SDK 
> worker images.





[jira] [Updated] (BEAM-8016) Render Beam Pipeline as DOT with Interactive Beam

2019-10-02 Thread Ning Kang (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ning Kang updated BEAM-8016:

Component/s: (was: examples-python)
 runner-py-interactive

> Render Beam Pipeline as DOT with Interactive Beam  
> ---
>
> Key: BEAM-8016
> URL: https://issues.apache.org/jira/browse/BEAM-8016
> Project: Beam
>  Issue Type: Improvement
>  Components: runner-py-interactive
>Reporter: Ning Kang
>Assignee: Ning Kang
>Priority: Major
>
> With the work in https://issues.apache.org/jira/browse/BEAM-7760, a Beam 
> pipeline converted to DOT and then rendered should mark user-defined variables 
> on edges.
> With the work in https://issues.apache.org/jira/browse/BEAM-7926, it might be 
> redundant or confusing to render arbitrary random-sample PCollection data on 
> edges.
> We'll also make sure edges in the graph correspond to the output -> input 
> relationships in the user-defined pipeline. Each edge represents one output. If 
> multiple downstream inputs take the same output, it should be rendered as one 
> edge diverging into two rather than as two separate edges.
> We'll also provide beta support for advanced interactivity highlighting, where 
> each execution highlights the part of the original pipeline that was actually 
> executed.
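As a rough illustration of the one-edge-per-output idea in the description, a sketch that emits DOT with the producing variable as the edge label (illustrative only; `to_dot` and the tuple format are invented here, not Interactive Beam's actual renderer):

```python
# Sketch: render a tiny pipeline graph as DOT, labeling each edge with the
# user-defined variable bound to the producer's output. Illustrative only;
# to_dot and the (producer, consumer, var) tuples are invented here.

def to_dot(edges):
    """edges: iterable of (producer, consumer, var_name) tuples."""
    lines = ["digraph pipeline {"]
    for src, dst, var in edges:
        lines.append(f'  "{src}" -> "{dst}" [label="{var}"];')
    lines.append("}")
    return "\n".join(lines)

# "words" is consumed by two downstream transforms. A renderer following the
# one-edge-per-output rule would draw a single labeled edge that diverges;
# this simplified version emits both consumers with the same label.
dot = to_dot([
    ("Read", "CountWords", "words"),
    ("Read", "WriteRaw", "words"),
])
print(dot)
```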





[jira] [Updated] (BEAM-7926) Visualize PCollection with Interactive Beam

2019-10-02 Thread Ning Kang (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-7926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ning Kang updated BEAM-7926:

Component/s: (was: examples-python)
 runner-py-interactive

> Visualize PCollection with Interactive Beam
> ---
>
> Key: BEAM-7926
> URL: https://issues.apache.org/jira/browse/BEAM-7926
> Project: Beam
>  Issue Type: New Feature
>  Components: runner-py-interactive
>Reporter: Ning Kang
>Assignee: Ning Kang
>Priority: Major
>
> Support auto plotting / charting of materialized data of a given PCollection 
> with Interactive Beam.
> Say an Interactive Beam pipeline is defined as
> p = create_pipeline()
> pcoll = p | 'Transform' >> transform()
> The user can call a single function and get auto-magical charting of the data 
> materialized for pcoll,
> e.g., visualize(pcoll)





[jira] [Updated] (BEAM-7923) Interactive Beam

2019-10-02 Thread Ning Kang (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-7923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ning Kang updated BEAM-7923:

Component/s: (was: examples-python)
 runner-py-interactive

> Interactive Beam
> 
>
> Key: BEAM-7923
> URL: https://issues.apache.org/jira/browse/BEAM-7923
> Project: Beam
>  Issue Type: New Feature
>  Components: runner-py-interactive
>Reporter: Ning Kang
>Assignee: Ning Kang
>Priority: Major
>
> This is the top level ticket for all efforts leveraging [interactive 
> Beam|https://github.com/apache/beam/tree/master/sdks/python/apache_beam/runners/interactive]
> As the development goes, blocking tickets will be added to this one.
>  
>  





[jira] [Updated] (BEAM-7760) Interactive Beam Caching PCollections bound to user defined vars in notebook

2019-10-02 Thread Ning Kang (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-7760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ning Kang updated BEAM-7760:

Component/s: (was: examples-python)
 runner-py-interactive

> Interactive Beam Caching PCollections bound to user defined vars in notebook
> 
>
> Key: BEAM-7760
> URL: https://issues.apache.org/jira/browse/BEAM-7760
> Project: Beam
>  Issue Type: New Feature
>  Components: runner-py-interactive
>Reporter: Ning Kang
>Assignee: Ning Kang
>Priority: Major
>  Time Spent: 17h 10m
>  Remaining Estimate: 0h
>
> Cache only PCollections bound to user defined variables in a pipeline when 
> running pipeline with interactive runner in jupyter notebooks.
> [Interactive 
> Beam|https://github.com/apache/beam/tree/master/sdks/python/apache_beam/runners/interactive]
>  has been caching and using caches of "leaf" PCollections for interactive 
> execution in jupyter notebooks.
> The interactive execution is currently supported so that when appending new 
> transforms to existing pipeline for a new run, executed part of the pipeline 
> doesn't need to be re-executed. 
> A PCollection is "leaf" when it is never used as input in any PTransform in 
> the pipeline.
> The problem with building caches and pipeline to execute around "leaf" is 
> that when a PCollection is consumed by a sink with no output, the pipeline to 
> execute built will miss the subgraph generating and consuming that 
> PCollection.
> For example, "ReadFromPubSub --> WriteToPubSub" will result in an empty 
> pipeline.
> Caching PCollections bound to user-defined variables and replacing transforms 
> with cache sources and sinks would let the pipeline to execute be built 
> properly under the interactive execution scenario. A cached PCollection can 
> also be traced back to user code and used for data visualization if the user 
> wants it.
> E.g.,
> {code:java}
> // ...
> p = beam.Pipeline(interactive_runner.InteractiveRunner(),
>   options=pipeline_options)
> messages = p | "Read" >> beam.io.ReadFromPubSub(subscription='...')
> messages | "Write" >> beam.io.WriteToPubSub(topic_path)
> result = p.run()
> // ...
> visualize(messages){code}
>  The interactive runner automatically figures out that PCollection
> {code:java}
> messages{code}
> created by
> {code:java}
> p | "Read" >> beam.io.ReadFromPubSub(subscription='...'){code}
> should be cached and reused if the notebook user appends more transforms.
>  And once the pipeline gets executed, the user could use any 
> visualize(PCollection) module to visualize the data statically (batch) or 
> dynamically (stream)
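The "bound to user defined variables" idea above can be approximated by scanning a namespace for names whose values are PCollection instances. A hedged sketch (the `PCollection` stand-in and the `watched_pcollections` helper are invented for illustration, not the Interactive Beam implementation):

```python
# Sketch: find which names in a namespace are bound to PCollection-like
# objects -- the set a cache-aware interactive runner would key caches on.
# PCollection here is a stand-in class, not apache_beam's.

class PCollection:
    pass

def watched_pcollections(namespace):
    """Return {name: value} for names directly bound to a PCollection."""
    return {name: value for name, value in namespace.items()
            if isinstance(value, PCollection)}

messages = PCollection()          # bound to a user variable -> cacheable
_intermediate = [PCollection()]   # inside a list, not directly bound

found = watched_pcollections({"messages": messages,
                              "_intermediate": _intermediate})
assert set(found) == {"messages"}
```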





[jira] [Work logged] (BEAM-8303) Filesystems not properly registered using FileIO.write()

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8303?focusedWorklogId=322092=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322092
 ]

ASF GitHub Bot logged work on BEAM-8303:


Author: ASF GitHub Bot
Created on: 02/Oct/19 17:47
Start Date: 02/Oct/19 17:47
Worklog Time Spent: 10m 
  Work Description: mxm commented on pull request #9711: 
[release-2.16.0][BEAM-8303] Ensure FileSystems registration code runs in non 
UDF Flink operators
URL: https://github.com/apache/beam/pull/9711
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322092)
Time Spent: 2h 50m  (was: 2h 40m)

> Filesystems not properly registered using FileIO.write()
> 
>
> Key: BEAM-8303
> URL: https://issues.apache.org/jira/browse/BEAM-8303
> Project: Beam
>  Issue Type: Bug
>  Components: sdk-java-core
>Affects Versions: 2.15.0
>Reporter: Preston Koprivica
>Assignee: Maximilian Michels
>Priority: Critical
> Fix For: 2.17.0
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> I’m getting the following error when attempting to use the FileIO apis 
> (beam-2.15.0) and integrating with AWS S3.  I have setup the PipelineOptions 
> with all the relevant AWS options, so the filesystem registry **should** be 
> properly seeded by the time the graph is compiled and executed:
> {code:java}
>  java.lang.IllegalArgumentException: No filesystem found for scheme s3
>     at 
> org.apache.beam.sdk.io.FileSystems.getFileSystemInternal(FileSystems.java:456)
>     at 
> org.apache.beam.sdk.io.FileSystems.matchNewResource(FileSystems.java:526)
>     at 
> org.apache.beam.sdk.io.FileBasedSink$FileResultCoder.decode(FileBasedSink.java:1149)
>     at 
> org.apache.beam.sdk.io.FileBasedSink$FileResultCoder.decode(FileBasedSink.java:1105)
>     at org.apache.beam.sdk.coders.Coder.decode(Coder.java:159)
>     at 
> org.apache.beam.sdk.transforms.join.UnionCoder.decode(UnionCoder.java:83)
>     at 
> org.apache.beam.sdk.transforms.join.UnionCoder.decode(UnionCoder.java:32)
>     at 
> org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.decode(WindowedValue.java:543)
>     at 
> org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.decode(WindowedValue.java:534)
>     at 
> org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.decode(WindowedValue.java:480)
>     at 
> org.apache.beam.runners.flink.translation.types.CoderTypeSerializer.deserialize(CoderTypeSerializer.java:93)
>     at 
> org.apache.flink.runtime.plugable.NonReusingDeserializationDelegate.read(NonReusingDeserializationDelegate.java:55)
>     at 
> org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.getNextRecord(SpillingAdaptiveSpanningRecordDeserializer.java:106)
>     at 
> org.apache.flink.runtime.io.network.api.reader.AbstractRecordReader.getNextRecord(AbstractRecordReader.java:72)
>     at 
> org.apache.flink.runtime.io.network.api.reader.MutableRecordReader.next(MutableRecordReader.java:47)
>     at 
> org.apache.flink.runtime.operators.util.ReaderIterator.next(ReaderIterator.java:73)
>     at 
> org.apache.flink.runtime.operators.FlatMapDriver.run(FlatMapDriver.java:107)
>     at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:503)
>     at org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:368)
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
>     at java.lang.Thread.run(Thread.java:748)
>  {code}
> For reference, the write code resembles this:
> {code:java}
>  FileIO.Write write = FileIO.write()
>     .via(ParquetIO.sink(schema))
>     .to(options.getOutputDir()). // will be something like: 
> s3:///
>     .withSuffix(".parquet");
> records.apply(String.format("Write(%s)", options.getOutputDir()), 
> write);{code}
> The issue does not appear to be related to ParquetIO.sink().  I am able to 
> reliably reproduce the issue using JSON formatted records and TextIO.sink(), 
> as well.  Moreover, AvroIO is affected if withWindowedWrites() option is 
> added.
> Just trying some different knobs, I went ahead and set the following option:
> {code:java}
> write = write.withNoSpilling();{code}
> This actually seemed to fix the issue, only to have it reemerge as I scaled 
> up the data set size.  The stack trace, while very similar, reads:
> {code:java}
>  java.lang.IllegalArgumentException: No filesystem found 

[jira] [Work logged] (BEAM-8303) Filesystems not properly registered using FileIO.write()

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8303?focusedWorklogId=322091=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322091
 ]

ASF GitHub Bot logged work on BEAM-8303:


Author: ASF GitHub Bot
Created on: 02/Oct/19 17:46
Start Date: 02/Oct/19 17:46
Worklog Time Spent: 10m 
  Work Description: mxm commented on issue #9711: 
[release-2.16.0][BEAM-8303] Ensure FileSystems registration code runs in non 
UDF Flink operators
URL: https://github.com/apache/beam/pull/9711#issuecomment-537604995
 
 
   CC @tweise 
 



Issue Time Tracking
---

Worklog Id: (was: 322091)
Time Spent: 2h 40m  (was: 2.5h)

> Filesystems not properly registered using FileIO.write()
> 
>
> Key: BEAM-8303
> URL: https://issues.apache.org/jira/browse/BEAM-8303
> Project: Beam
>  Issue Type: Bug
>  Components: sdk-java-core
>Affects Versions: 2.15.0
>Reporter: Preston Koprivica
>Assignee: Maximilian Michels
>Priority: Critical
> Fix For: 2.17.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> I’m getting the following error when attempting to use the FileIO apis 
> (beam-2.15.0) and integrating with AWS S3.  I have setup the PipelineOptions 
> with all the relevant AWS options, so the filesystem registry **should** be 
> properly seeded by the time the graph is compiled and executed:
> {code:java}
>  java.lang.IllegalArgumentException: No filesystem found for scheme s3
>     at 
> org.apache.beam.sdk.io.FileSystems.getFileSystemInternal(FileSystems.java:456)
>     at 
> org.apache.beam.sdk.io.FileSystems.matchNewResource(FileSystems.java:526)
>     at 
> org.apache.beam.sdk.io.FileBasedSink$FileResultCoder.decode(FileBasedSink.java:1149)
>     at 
> org.apache.beam.sdk.io.FileBasedSink$FileResultCoder.decode(FileBasedSink.java:1105)
>     at org.apache.beam.sdk.coders.Coder.decode(Coder.java:159)
>     at 
> org.apache.beam.sdk.transforms.join.UnionCoder.decode(UnionCoder.java:83)
>     at 
> org.apache.beam.sdk.transforms.join.UnionCoder.decode(UnionCoder.java:32)
>     at 
> org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.decode(WindowedValue.java:543)
>     at 
> org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.decode(WindowedValue.java:534)
>     at 
> org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.decode(WindowedValue.java:480)
>     at 
> org.apache.beam.runners.flink.translation.types.CoderTypeSerializer.deserialize(CoderTypeSerializer.java:93)
>     at 
> org.apache.flink.runtime.plugable.NonReusingDeserializationDelegate.read(NonReusingDeserializationDelegate.java:55)
>     at 
> org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.getNextRecord(SpillingAdaptiveSpanningRecordDeserializer.java:106)
>     at 
> org.apache.flink.runtime.io.network.api.reader.AbstractRecordReader.getNextRecord(AbstractRecordReader.java:72)
>     at 
> org.apache.flink.runtime.io.network.api.reader.MutableRecordReader.next(MutableRecordReader.java:47)
>     at 
> org.apache.flink.runtime.operators.util.ReaderIterator.next(ReaderIterator.java:73)
>     at 
> org.apache.flink.runtime.operators.FlatMapDriver.run(FlatMapDriver.java:107)
>     at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:503)
>     at org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:368)
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
>     at java.lang.Thread.run(Thread.java:748)
>  {code}
> For reference, the write code resembles this:
> {code:java}
>  FileIO.Write write = FileIO.write()
>     .via(ParquetIO.sink(schema))
>     .to(options.getOutputDir()). // will be something like: 
> s3:///
>     .withSuffix(".parquet");
> records.apply(String.format("Write(%s)", options.getOutputDir()), 
> write);{code}
> The issue does not appear to be related to ParquetIO.sink().  I am able to 
> reliably reproduce the issue using JSON formatted records and TextIO.sink(), 
> as well.  Moreover, AvroIO is affected if withWindowedWrites() option is 
> added.
> Just trying some different knobs, I went ahead and set the following option:
> {code:java}
> write = write.withNoSpilling();{code}
> This actually seemed to fix the issue, only to have it reemerge as I scaled 
> up the data set size.  The stack trace, while very similar, reads:
> {code:java}
>  

[jira] [Work logged] (BEAM-8303) Filesystems not properly registered using FileIO.write()

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8303?focusedWorklogId=322090=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322090
 ]

ASF GitHub Bot logged work on BEAM-8303:


Author: ASF GitHub Bot
Created on: 02/Oct/19 17:45
Start Date: 02/Oct/19 17:45
Worklog Time Spent: 10m 
  Work Description: mxm commented on issue #9711: 
[release-2.16.0][BEAM-8303] Ensure FileSystems registration code runs in non 
UDF Flink operators
URL: https://github.com/apache/beam/pull/9711#issuecomment-537604587
 
 
   Unrelated test failures: 
https://builds.apache.org/job/beam_PreCommit_Java_Commit/7944/testReport/
   ```
   
org.apache.beam.sdk.transforms.ParDoLifecycleTest.testTeardownCalledAfterExceptionInStartBundleStateful
 
   org.apache.beam.sdk.io.TextIOWriteTest.testWriteViaSink
   ```
 



Issue Time Tracking
---

Worklog Id: (was: 322090)
Time Spent: 2.5h  (was: 2h 20m)

> Filesystems not properly registered using FileIO.write()
> 
>
> Key: BEAM-8303
> URL: https://issues.apache.org/jira/browse/BEAM-8303
> Project: Beam
>  Issue Type: Bug
>  Components: sdk-java-core
>Affects Versions: 2.15.0
>Reporter: Preston Koprivica
>Assignee: Maximilian Michels
>Priority: Critical
> Fix For: 2.17.0
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> I’m getting the following error when attempting to use the FileIO apis 
> (beam-2.15.0) and integrating with AWS S3.  I have setup the PipelineOptions 
> with all the relevant AWS options, so the filesystem registry **should** be 
> properly seeded by the time the graph is compiled and executed:
> {code:java}
>  java.lang.IllegalArgumentException: No filesystem found for scheme s3
>     at 
> org.apache.beam.sdk.io.FileSystems.getFileSystemInternal(FileSystems.java:456)
>     at 
> org.apache.beam.sdk.io.FileSystems.matchNewResource(FileSystems.java:526)
>     at 
> org.apache.beam.sdk.io.FileBasedSink$FileResultCoder.decode(FileBasedSink.java:1149)
>     at 
> org.apache.beam.sdk.io.FileBasedSink$FileResultCoder.decode(FileBasedSink.java:1105)
>     at org.apache.beam.sdk.coders.Coder.decode(Coder.java:159)
>     at 
> org.apache.beam.sdk.transforms.join.UnionCoder.decode(UnionCoder.java:83)
>     at 
> org.apache.beam.sdk.transforms.join.UnionCoder.decode(UnionCoder.java:32)
>     at 
> org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.decode(WindowedValue.java:543)
>     at 
> org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.decode(WindowedValue.java:534)
>     at 
> org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.decode(WindowedValue.java:480)
>     at 
> org.apache.beam.runners.flink.translation.types.CoderTypeSerializer.deserialize(CoderTypeSerializer.java:93)
>     at 
> org.apache.flink.runtime.plugable.NonReusingDeserializationDelegate.read(NonReusingDeserializationDelegate.java:55)
>     at 
> org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.getNextRecord(SpillingAdaptiveSpanningRecordDeserializer.java:106)
>     at 
> org.apache.flink.runtime.io.network.api.reader.AbstractRecordReader.getNextRecord(AbstractRecordReader.java:72)
>     at 
> org.apache.flink.runtime.io.network.api.reader.MutableRecordReader.next(MutableRecordReader.java:47)
>     at 
> org.apache.flink.runtime.operators.util.ReaderIterator.next(ReaderIterator.java:73)
>     at 
> org.apache.flink.runtime.operators.FlatMapDriver.run(FlatMapDriver.java:107)
>     at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:503)
>     at org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:368)
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
>     at java.lang.Thread.run(Thread.java:748)
>  {code}
> For reference, the write code resembles this:
> {code:java}
>  FileIO.Write write = FileIO.write()
>     .via(ParquetIO.sink(schema))
>     .to(options.getOutputDir()). // will be something like: 
> s3:///
>     .withSuffix(".parquet");
> records.apply(String.format("Write(%s)", options.getOutputDir()), 
> write);{code}
> The issue does not appear to be related to ParquetIO.sink().  I am able to 
> reliably reproduce the issue using JSON formatted records and TextIO.sink(), 
> as well.  Moreover, AvroIO is affected if withWindowedWrites() option is 
> added.
> Just trying some different knobs, I went ahead and set the 

[jira] [Work logged] (BEAM-8286) Python precommit (:sdks:python:test-suites:tox:py2:docs) failing

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8286?focusedWorklogId=322083=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322083
 ]

ASF GitHub Bot logged work on BEAM-8286:


Author: ASF GitHub Bot
Created on: 02/Oct/19 17:39
Start Date: 02/Oct/19 17:39
Worklog Time Spent: 10m 
  Work Description: ibzib commented on pull request #9714: [BEAM-8286] move 
datastore intersphinx link again
URL: https://github.com/apache/beam/pull/9714
 
 
   I'm also fine with just removing the intersphinx references if we don't want 
to take our chances with how long this new link will stay around.
   
   R: @udim 
   

[jira] [Commented] (BEAM-8183) Optionally bundle multiple pipelines into a single Flink jar

2019-10-02 Thread Thomas Weise (Jira)


[ 
https://issues.apache.org/jira/browse/BEAM-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943015#comment-16943015
 ] 

Thomas Weise commented on BEAM-8183:


For context: 
[https://lists.apache.org/thread.html/2122928a0a5f678d475ec15af538eb7303f73557870af174b1fdef7e@%3Cdev.beam.apache.org%3E]

Running the same pipeline in different environments with different parameters 
is a common need. Virtually everyone has dev/staging/prod or whatever their 
environments are and they want to use the same build artifact. That normally 
requires some amount of parameterization.

The other use case is bundling multiple pipelines into the same container and 
selecting which one to run at launch time.

I was surprised by the question given the prior discussion, even more so 
considering that Beam already has the concept of user options. The current 
approach of generating the jar file is equivalent to hard-coding all pipeline 
options and asking the user to recompile.

Yes, we could generate a new jar file for every option or environment, but 
please note that it bloats the container images (the job server is > 100MB). We 
could also create separate Docker images, but then we are in the GB range.

 

> Optionally bundle multiple pipelines into a single Flink jar
> 
>
> Key: BEAM-8183
> URL: https://issues.apache.org/jira/browse/BEAM-8183
> Project: Beam
>  Issue Type: New Feature
>  Components: runner-flink
>Reporter: Kyle Weaver
>Assignee: Kyle Weaver
>Priority: Major
>  Labels: portability-flink
>
> [https://github.com/apache/beam/pull/9331#issuecomment-526734851]
> "With Flink you can bundle multiple entry points into the same jar file and 
> specify which one to use with optional flags. It may be desirable to allow 
> inclusion of multiple pipelines for this tool also, although that would 
> require a different workflow. Absent this option, it becomes quite convoluted 
> for users that need the flexibility to choose which pipeline to launch at 
> submission time."



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (BEAM-7933) Adding timeout to JobServer grpc calls

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-7933?focusedWorklogId=322078=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322078
 ]

ASF GitHub Bot logged work on BEAM-7933:


Author: ASF GitHub Bot
Created on: 02/Oct/19 17:29
Start Date: 02/Oct/19 17:29
Worklog Time Spent: 10m 
  Work Description: ibzib commented on issue #9673: [BEAM-7933] Add job 
server request timeout (default to 60 seconds)
URL: https://github.com/apache/beam/pull/9673#issuecomment-537597980
 
 
   Docs failure is probably https://issues.apache.org/jira/browse/BEAM-8286
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322078)
Time Spent: 3h  (was: 2h 50m)

> Adding timeout to JobServer grpc calls
> --
>
> Key: BEAM-7933
> URL: https://issues.apache.org/jira/browse/BEAM-7933
> Project: Beam
>  Issue Type: Improvement
>  Components: sdk-py-core
>Affects Versions: 2.14.0
>Reporter: Enrico Canzonieri
>Assignee: Enrico Canzonieri
>Priority: Minor
>  Labels: portability
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> grpc calls to the JobServer from the Python SDK do not have timeouts. That 
> means that the call to pipeline.run()could hang forever if the JobServer is 
> not running (or failing to start).
> E.g. 
> [https://github.com/apache/beam/blob/master/sdks/python/apache_beam/runners/portability/portable_runner.py#L307]
>  the call to Prepare() doesn't provide any timeout value and the same applies 
> to other JobServer requests.
> As part of this ticket we could add a default timeout of 60 seconds as the 
> default timeout for http client.
> Additionally, we could consider adding a --job-server-request-timeout to the 
> [PortableOptions|https://github.com/apache/beam/blob/master/sdks/python/apache_beam/options/pipeline_options.py#L805]
>  class to be used in the JobServer interactions inside portable_runner.py.
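The bounded wait this ticket asks for can be sketched without a running job server. The following is an illustrative stdlib-only model (the real fix would pass `timeout=` to the generated grpc stub calls such as Prepare(); `hanging_prepare` is a hypothetical stand-in for an RPC against a server that never answers):

```python
import concurrent.futures
import time

def call_with_timeout(fn, timeout_s):
    # Run fn in a worker thread; raise TimeoutError instead of blocking forever.
    ex = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = ex.submit(fn)
    try:
        return future.result(timeout=timeout_s)
    finally:
        ex.shutdown(wait=False)

def hanging_prepare():
    # Hypothetical stand-in for a Prepare() call that never gets a response.
    time.sleep(0.5)
    return "prepared"

try:
    call_with_timeout(hanging_prepare, 0.1)
    timed_out = False
except concurrent.futures.TimeoutError:
    timed_out = True
```

With a timeout in place the caller sees a prompt error rather than an indefinite hang, which is the behavior a --job-server-request-timeout option would surface.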



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (BEAM-8286) Python precommit (:sdks:python:test-suites:tox:py2:docs) failing

2019-10-02 Thread Kyle Weaver (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kyle Weaver reopened BEAM-8286:
---

Looks like the replacement site we used was also taken down. Maybe we'd be 
better off removing the intersphinx references to avoid this dependency 
altogether.
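As a sketch of what dropping that dependency could look like in the docs' Sphinx conf.py (illustrative only; the extension list shown is not Beam's actual configuration):

```python
# Illustrative conf.py fragment: removing intersphinx drops the build-time
# dependency on remote objects.inv inventories, which can be taken down.
extensions = [
    "sphinx.ext.autodoc",
    # "sphinx.ext.intersphinx",  # removed: no network fetch of third-party
    #                              inventories during docs builds
]
intersphinx_mapping = {}  # nothing left to resolve remotely
```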

> Python precommit (:sdks:python:test-suites:tox:py2:docs) failing
> 
>
> Key: BEAM-8286
> URL: https://issues.apache.org/jira/browse/BEAM-8286
> Project: Beam
>  Issue Type: Bug
>  Components: test-failures
>Reporter: Kyle Weaver
>Priority: Major
> Fix For: Not applicable
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Example failure: 
> [https://builds.apache.org/job/beam_PreCommit_Python_Commit/8638/console]
> 17:29:13 * What went wrong:
> 17:29:13 Execution failed for task ':sdks:python:test-suites:tox:py2:docs'.
> 17:29:13 > Process 'command 'sh'' finished with non-zero exit value 1
> Fails on my local machine (on head) as well. Can't determine exact cause.
> ERROR: InvocationError for command /usr/bin/time scripts/generate_pydoc.sh 
> (exited with code 1) 
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (BEAM-8318) Add a num_threads_per_worker pipeline option to Python SDK.

2019-10-02 Thread Kenneth Knowles (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenneth Knowles updated BEAM-8318:
--
Status: Open  (was: Triage Needed)

> Add a num_threads_per_worker pipeline option to Python SDK.
> ---
>
> Key: BEAM-8318
> URL: https://issues.apache.org/jira/browse/BEAM-8318
> Project: Beam
>  Issue Type: Improvement
>  Components: sdk-py-core
>Reporter: Chamikara Madhusanka Jayalath
>Assignee: Chamikara Madhusanka Jayalath
>Priority: Major
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Similar to what we have here for Java: 
> [https://github.com/apache/beam/blob/master/runners/google-cloud-dataflow-java/src/main/java/org/apache/beam/runners/dataflow/options/DataflowPipelineDebugOptions.java#L178]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (BEAM-8318) Add a num_threads_per_worker pipeline option to Python SDK.

2019-10-02 Thread Kenneth Knowles (Jira)


[ 
https://issues.apache.org/jira/browse/BEAM-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943002#comment-16943002
 ] 

Kenneth Knowles commented on BEAM-8318:
---

Is this done? PR is merged.

> Add a num_threads_per_worker pipeline option to Python SDK.
> ---
>
> Key: BEAM-8318
> URL: https://issues.apache.org/jira/browse/BEAM-8318
> Project: Beam
>  Issue Type: Improvement
>  Components: sdk-py-core
>Reporter: Chamikara Madhusanka Jayalath
>Assignee: Chamikara Madhusanka Jayalath
>Priority: Major
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Similar to what we have here for Java: 
> [https://github.com/apache/beam/blob/master/runners/google-cloud-dataflow-java/src/main/java/org/apache/beam/runners/dataflow/options/DataflowPipelineDebugOptions.java#L178]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (BEAM-8322) Log when artifacts can't be fetched on startup

2019-10-02 Thread Kenneth Knowles (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenneth Knowles updated BEAM-8322:
--
Status: Open  (was: Triage Needed)

> Log when artifacts can't be fetched on startup
> --
>
> Key: BEAM-8322
> URL: https://issues.apache.org/jira/browse/BEAM-8322
> Project: Beam
>  Issue Type: Improvement
>  Components: sdk-py-harness
>Reporter: Kyle Weaver
>Assignee: Kyle Weaver
>Priority: Minor
>
> I noticed when I was testing my new artifact retrieval service:
> While I'm sure artifact retrieval is the cause of the error, there is nothing in 
> the logs at all to indicate that.
>  
> EDIT: I realized the root cause might be because I wasn't returning an 
> artifact response, making the client wait forever. There should probably be 
> timeouts for that.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (BEAM-8162) NPE error when add flink 1.9 runner

2019-10-02 Thread Kenneth Knowles (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenneth Knowles updated BEAM-8162:
--
Fix Version/s: (was: 2.17.0)

> NPE error when add flink 1.9 runner
> ---
>
> Key: BEAM-8162
> URL: https://issues.apache.org/jira/browse/BEAM-8162
> Project: Beam
>  Issue Type: Bug
>  Components: runner-flink
>Reporter: sunjincheng
>Assignee: sunjincheng
>Priority: Major
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> When add flink 1.9 runner in https://github.com/apache/beam/pull/9296, we 
> find an NPE error when run the `PortableTimersExecutionTest`. 
> the detail can be found here: 
> https://github.com/apache/beam/pull/9296#issuecomment-525262607



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (BEAM-8323) Testing Guideline using deprecated DoFnTester

2019-10-02 Thread Kenneth Knowles (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenneth Knowles updated BEAM-8323:
--
Status: Open  (was: Triage Needed)

> Testing Guideline using deprecated DoFnTester
> -
>
> Key: BEAM-8323
> URL: https://issues.apache.org/jira/browse/BEAM-8323
> Project: Beam
>  Issue Type: Bug
>  Components: website
>Reporter: Reza ardeshir rokni
>Priority: Major
>
> [https://beam.apache.org/documentation/pipelines/test-your-pipeline/]
> Uses deprecated DoFnTester 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (BEAM-8319) Errorprone 0.0.13 fails during JDK11 build

2019-10-02 Thread Lukasz Gajowy (Jira)


[ 
https://issues.apache.org/jira/browse/BEAM-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942998#comment-16942998
 ] 

Lukasz Gajowy edited comment on BEAM-8319 at 10/2/19 5:18 PM:
--

To provide more information, since this might not be perfectly clear: if I comment 
out the "guava" entry from the library.java map, I can use errorprone 0.8.1 in 
sdks/java/core successfully (version 20.0 is not forced); otherwise I cannot. This 
is due to the forcing of versions described in the issue mentioned above.


was (Author: łukaszg):
To bring up more information - this might be not so clear: if I comment out 
"guava" entry from the library.java map, I can use errorprone 0.8.1 in 
sdks/java/core successfully (version 20.0 is not forced). This is due to the 
forcing of versions described in the issue mentioned above. Otherwise not.

> Errorprone 0.0.13 fails during JDK11 build
> --
>
> Key: BEAM-8319
> URL: https://issues.apache.org/jira/browse/BEAM-8319
> Project: Beam
>  Issue Type: New Feature
>  Components: sdk-java-core
>Reporter: Lukasz Gajowy
>Assignee: Lukasz Gajowy
>Priority: Major
>
> I'm using openjdk 1.11.02. After switching the version to:
> {code:java}
> javaVersion = 11 {code}
> in BeamModulePlugin and running
> {code:java}
> ./gradlew clean build -p sdks/java/code -xtest {code}
> building fails. I was able to run errorprone after upgrading it but had 
> problems with conflicting guava version. See more here: 
> https://issues.apache.org/jira/browse/BEAM-5085
>  
> Stacktrace:
> {code:java}
> org.gradle.api.tasks.TaskExecutionException: Execution failed for task 
> ':model:pipeline:compileJava'.
> at 
> org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$2.accept(ExecuteActionsTaskExecuter.java:121)
> at 
> org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$2.accept(ExecuteActionsTaskExecuter.java:117)
> at org.gradle.internal.Try$Failure.ifSuccessfulOrElse(Try.java:184)
> at 
> org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:110)
> at 
> org.gradle.api.internal.tasks.execution.ResolveIncrementalChangesTaskExecuter.execute(ResolveIncrementalChangesTaskExecuter.java:84)
> at 
> org.gradle.api.internal.tasks.execution.ResolveTaskOutputCachingStateExecuter.execute(ResolveTaskOutputCachingStateExecuter.java:91)
> at 
> org.gradle.api.internal.tasks.execution.FinishSnapshotTaskInputsBuildOperationTaskExecuter.execute(FinishSnapshotTaskInputsBuildOperationTaskExecuter.java:51)
> at 
> org.gradle.api.internal.tasks.execution.ResolveBuildCacheKeyExecuter.execute(ResolveBuildCacheKeyExecuter.java:102)
> at 
> org.gradle.api.internal.tasks.execution.ResolveBeforeExecutionStateTaskExecuter.execute(ResolveBeforeExecutionStateTaskExecuter.java:74)
> at 
> org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:58)
> at 
> org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:109)
> at 
> org.gradle.api.internal.tasks.execution.ResolveBeforeExecutionOutputsTaskExecuter.execute(ResolveBeforeExecutionOutputsTaskExecuter.java:67)
> at 
> org.gradle.api.internal.tasks.execution.StartSnapshotTaskInputsBuildOperationTaskExecuter.execute(StartSnapshotTaskInputsBuildOperationTaskExecuter.java:52)
> at 
> org.gradle.api.internal.tasks.execution.ResolveAfterPreviousExecutionStateTaskExecuter.execute(ResolveAfterPreviousExecutionStateTaskExecuter.java:46)
> at 
> org.gradle.api.internal.tasks.execution.CleanupStaleOutputsExecuter.execute(CleanupStaleOutputsExecuter.java:93)
> at 
> org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:45)
> at 
> org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:94)
> at 
> org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57)
> at 
> org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:56)
> at 
> org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36)
> at 
> org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:63)
> at 
> org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:49)
> at 
> org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:46)
> at 
> 

[jira] [Commented] (BEAM-8319) Errorprone 0.0.13 fails during JDK11 build

2019-10-02 Thread Lukasz Gajowy (Jira)


[ 
https://issues.apache.org/jira/browse/BEAM-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942998#comment-16942998
 ] 

Lukasz Gajowy commented on BEAM-8319:
-

To bring up more information, since this might not be so clear: if I comment out 
the "guava" entry from the library.java map, I can use errorprone 0.8.1 in 
sdks/java/core successfully (version 20.0 is not forced); otherwise I cannot. This 
is due to the forcing of versions described in the issue mentioned above.

> Errorprone 0.0.13 fails during JDK11 build
> --
>
> Key: BEAM-8319
> URL: https://issues.apache.org/jira/browse/BEAM-8319
> Project: Beam
>  Issue Type: New Feature
>  Components: sdk-java-core
>Reporter: Lukasz Gajowy
>Assignee: Lukasz Gajowy
>Priority: Major
>
> I'm using openjdk 1.11.02. After switching the version to:
> {code:java}
> javaVersion = 11 {code}
> in BeamModulePlugin and running
> {code:java}
> ./gradlew clean build -p sdks/java/code -xtest {code}
> building fails. I was able to run errorprone after upgrading it but had 
> problems with conflicting guava version. See more here: 
> https://issues.apache.org/jira/browse/BEAM-5085
>  
> Stacktrace:
> {code:java}
> org.gradle.api.tasks.TaskExecutionException: Execution failed for task 
> ':model:pipeline:compileJava'.
> at 
> org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$2.accept(ExecuteActionsTaskExecuter.java:121)
> at 
> org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$2.accept(ExecuteActionsTaskExecuter.java:117)
> at org.gradle.internal.Try$Failure.ifSuccessfulOrElse(Try.java:184)
> at 
> org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:110)
> at 
> org.gradle.api.internal.tasks.execution.ResolveIncrementalChangesTaskExecuter.execute(ResolveIncrementalChangesTaskExecuter.java:84)
> at 
> org.gradle.api.internal.tasks.execution.ResolveTaskOutputCachingStateExecuter.execute(ResolveTaskOutputCachingStateExecuter.java:91)
> at 
> org.gradle.api.internal.tasks.execution.FinishSnapshotTaskInputsBuildOperationTaskExecuter.execute(FinishSnapshotTaskInputsBuildOperationTaskExecuter.java:51)
> at 
> org.gradle.api.internal.tasks.execution.ResolveBuildCacheKeyExecuter.execute(ResolveBuildCacheKeyExecuter.java:102)
> at 
> org.gradle.api.internal.tasks.execution.ResolveBeforeExecutionStateTaskExecuter.execute(ResolveBeforeExecutionStateTaskExecuter.java:74)
> at 
> org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:58)
> at 
> org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:109)
> at 
> org.gradle.api.internal.tasks.execution.ResolveBeforeExecutionOutputsTaskExecuter.execute(ResolveBeforeExecutionOutputsTaskExecuter.java:67)
> at 
> org.gradle.api.internal.tasks.execution.StartSnapshotTaskInputsBuildOperationTaskExecuter.execute(StartSnapshotTaskInputsBuildOperationTaskExecuter.java:52)
> at 
> org.gradle.api.internal.tasks.execution.ResolveAfterPreviousExecutionStateTaskExecuter.execute(ResolveAfterPreviousExecutionStateTaskExecuter.java:46)
> at 
> org.gradle.api.internal.tasks.execution.CleanupStaleOutputsExecuter.execute(CleanupStaleOutputsExecuter.java:93)
> at 
> org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:45)
> at 
> org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:94)
> at 
> org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57)
> at 
> org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:56)
> at 
> org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36)
> at 
> org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:63)
> at 
> org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:49)
> at 
> org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:46)
> at 
> org.gradle.internal.operations.DefaultBuildOperationExecutor$CallableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:416)
> at 
> org.gradle.internal.operations.DefaultBuildOperationExecutor$CallableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:406)
> at 
> 

[jira] [Work logged] (BEAM-8213) Run and report python tox tasks separately within Jenkins

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8213?focusedWorklogId=322064=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322064
 ]

ASF GitHub Bot logged work on BEAM-8213:


Author: ASF GitHub Bot
Created on: 02/Oct/19 17:15
Start Date: 02/Oct/19 17:15
Worklog Time Spent: 10m 
  Work Description: chadrik commented on issue #9706: [BEAM-8213] Split out 
lint job from monolithic python preCommit tests on jenkins
URL: https://github.com/apache/beam/pull/9706#issuecomment-537592524
 
 
   Run CommunityMetrics PreCommit
   Run Java PreCommit
   Run Python_PVR_Flink PreCommit
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322064)
Time Spent: 10h 20m  (was: 10h 10m)

> Run and report python tox tasks separately within Jenkins
> -
>
> Key: BEAM-8213
> URL: https://issues.apache.org/jira/browse/BEAM-8213
> Project: Beam
>  Issue Type: Improvement
>  Components: build-system
>Reporter: Chad Dombrova
>Assignee: Chad Dombrova
>Priority: Major
>  Time Spent: 10h 20m
>  Remaining Estimate: 0h
>
> As a python developer, the speed and comprehensibility of the jenkins 
> PreCommit job could be greatly improved.
> Here are some of the problems
> - when a lint job fails, it's not reported in the test results summary, so 
> even though the job is marked as failed, I see "Test Result (no failures)" 
> which is quite confusing
> - I have to wait for over an hour to discover the lint failed, which takes 
> about a minute to run on its own
> - The logs are a jumbled mess of all the different tasks running on top of 
> each other
> - The test results give no indication of which version of python they use.  I 
> click on Test results, then the test module, then the test class, then I see 
> 4 tests named the same thing.  I assume that the first is python 2.7, the 
> second is 3.5 and so on.   It takes 5 clicks and then reading the log output 
> to know which version of python a single error pertains to, then I need to 
> repeat for each failure.  This makes it very difficult to discover problems, 
> and deduce that they may have something to do with python version mismatches.
> I believe the solution to this is to split up the single monolithic python 
> PreCommit job into sub-jobs (possibly using a pipeline with steps).  This 
> would give us the following benefits:
> - sub job results should become available as they finish, so for example, 
> lint results should be available very early on
> - sub job results will be reported separately, and there will be a job for 
> each py2, py35, py36 and so on, so it will be clear when an error is related 
> to a particular python version
> - sub jobs without reports, like docs and lint, will have their own failure 
> status and logs, so when they fail it will be more obvious what went wrong.
> I'm happy to help out once I get some feedback on the desired way forward.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (BEAM-8162) NPE error when add flink 1.9 runner

2019-10-02 Thread Kenneth Knowles (Jira)


[ 
https://issues.apache.org/jira/browse/BEAM-8162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942986#comment-16942986
 ] 

Kenneth Knowles commented on BEAM-8162:
---

Also, was it fixed in 2.16.0 or 2.15.0 and is now released? I did not 
investigate the branch; I just set it to the next unreleased version.

> NPE error when add flink 1.9 runner
> ---
>
> Key: BEAM-8162
> URL: https://issues.apache.org/jira/browse/BEAM-8162
> Project: Beam
>  Issue Type: Bug
>  Components: runner-flink
>Reporter: sunjincheng
>Assignee: sunjincheng
>Priority: Major
> Fix For: 2.17.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> When add flink 1.9 runner in https://github.com/apache/beam/pull/9296, we 
> find an NPE error when run the `PortableTimersExecutionTest`. 
> the detail can be found here: 
> https://github.com/apache/beam/pull/9296#issuecomment-525262607



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (BEAM-8327) beam_Prober_CommunityMetrics hits cache giving wrong results

2019-10-02 Thread Kenneth Knowles (Jira)


[ 
https://issues.apache.org/jira/browse/BEAM-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942983#comment-16942983
 ] 

Kenneth Knowles commented on BEAM-8327:
---

I think "community metrics" maybe should be its own component.

> beam_Prober_CommunityMetrics hits cache giving wrong results
> 
>
> Key: BEAM-8327
> URL: https://issues.apache.org/jira/browse/BEAM-8327
> Project: Beam
>  Issue Type: Bug
>  Components: project-management
>Reporter: Mikhail Gryzykhin
>Priority: Major
>
> We need to fix beam_Prober_CommunityMetrics target to not hit cache. It 
> always fetches fresh data from website even though binaries are the same.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (BEAM-8162) NPE error when add flink 1.9 runner

2019-10-02 Thread Kenneth Knowles (Jira)


[ 
https://issues.apache.org/jira/browse/BEAM-8162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942984#comment-16942984
 ] 

Kenneth Knowles commented on BEAM-8162:
---

Is this fixed? I see that the PR is merged.

> NPE error when add flink 1.9 runner
> ---
>
> Key: BEAM-8162
> URL: https://issues.apache.org/jira/browse/BEAM-8162
> Project: Beam
>  Issue Type: Bug
>  Components: runner-flink
>Reporter: sunjincheng
>Assignee: sunjincheng
>Priority: Major
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> When add flink 1.9 runner in https://github.com/apache/beam/pull/9296, we 
> find an NPE error when run the `PortableTimersExecutionTest`. 
> the detail can be found here: 
> https://github.com/apache/beam/pull/9296#issuecomment-525262607



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (BEAM-8162) NPE error when add flink 1.9 runner

2019-10-02 Thread Kenneth Knowles (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenneth Knowles updated BEAM-8162:
--
Status: Open  (was: Triage Needed)

> NPE error when add flink 1.9 runner
> ---
>
> Key: BEAM-8162
> URL: https://issues.apache.org/jira/browse/BEAM-8162
> Project: Beam
>  Issue Type: Bug
>  Components: runner-flink
>Reporter: sunjincheng
>Assignee: sunjincheng
>Priority: Major
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> When add flink 1.9 runner in https://github.com/apache/beam/pull/9296, we 
> find an NPE error when run the `PortableTimersExecutionTest`. 
> the detail can be found here: 
> https://github.com/apache/beam/pull/9296#issuecomment-525262607



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (BEAM-8162) NPE error when add flink 1.9 runner

2019-10-02 Thread Kenneth Knowles (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenneth Knowles updated BEAM-8162:
--
Fix Version/s: 2.17.0

> NPE error when add flink 1.9 runner
> ---
>
> Key: BEAM-8162
> URL: https://issues.apache.org/jira/browse/BEAM-8162
> Project: Beam
>  Issue Type: Bug
>  Components: runner-flink
>Reporter: sunjincheng
>Assignee: sunjincheng
>Priority: Major
> Fix For: 2.17.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> When add flink 1.9 runner in https://github.com/apache/beam/pull/9296, we 
> find an NPE error when run the `PortableTimersExecutionTest`. 
> the detail can be found here: 
> https://github.com/apache/beam/pull/9296#issuecomment-525262607



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (BEAM-8327) beam_Prober_CommunityMetrics hits cache giving wrong results

2019-10-02 Thread Kenneth Knowles (Jira)


[ 
https://issues.apache.org/jira/browse/BEAM-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942981#comment-16942981
 ] 

Kenneth Knowles commented on BEAM-8327:
---

Can this be marked triaged?

> beam_Prober_CommunityMetrics hits cache giving wrong results
> 
>
> Key: BEAM-8327
> URL: https://issues.apache.org/jira/browse/BEAM-8327
> Project: Beam
>  Issue Type: Bug
>  Components: project-management
>Reporter: Mikhail Gryzykhin
>Priority: Major
>
> We need to fix beam_Prober_CommunityMetrics target to not hit cache. It 
> always fetches fresh data from website even though binaries are the same.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (BEAM-8319) Errorprone 0.0.13 fails during JDK11 build

2019-10-02 Thread Lukasz Gajowy (Jira)


[ 
https://issues.apache.org/jira/browse/BEAM-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942980#comment-16942980
 ] 

Lukasz Gajowy commented on BEAM-8319:
-

From what I see, io/google-cloud-dataflow and extensions/sql/jdbc require guava 
in version 20. Other than that the version is forced for all modules, and I 
just described it in the guava issue: 
https://issues.apache.org/jira/browse/BEAM-5559 (sorry if this should be in 
one thread). 

> Errorprone 0.0.13 fails during JDK11 build
> --
>
> Key: BEAM-8319
> URL: https://issues.apache.org/jira/browse/BEAM-8319
> Project: Beam
>  Issue Type: New Feature
>  Components: sdk-java-core
>Reporter: Lukasz Gajowy
>Assignee: Lukasz Gajowy
>Priority: Major
>
> I'm using openjdk 1.11.02. After switching the version to:
> {code:java}
> javaVersion = 11 {code}
> in BeamModulePlugin and running
> {code:java}
> ./gradlew clean build -p sdks/java/code -xtest {code}
> building fails. I was able to run errorprone after upgrading it but had 
> problems with conflicting guava version. See more here: 
> https://issues.apache.org/jira/browse/BEAM-5085
>  
> Stacktrace:
> {code:java}
> org.gradle.api.tasks.TaskExecutionException: Execution failed for task 
> ':model:pipeline:compileJava'.
> at 
> org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$2.accept(ExecuteActionsTaskExecuter.java:121)
> at 
> org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$2.accept(ExecuteActionsTaskExecuter.java:117)
> at org.gradle.internal.Try$Failure.ifSuccessfulOrElse(Try.java:184)
> at 
> org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:110)
> at 
> org.gradle.api.internal.tasks.execution.ResolveIncrementalChangesTaskExecuter.execute(ResolveIncrementalChangesTaskExecuter.java:84)
> at 
> org.gradle.api.internal.tasks.execution.ResolveTaskOutputCachingStateExecuter.execute(ResolveTaskOutputCachingStateExecuter.java:91)
> at 
> org.gradle.api.internal.tasks.execution.FinishSnapshotTaskInputsBuildOperationTaskExecuter.execute(FinishSnapshotTaskInputsBuildOperationTaskExecuter.java:51)
> at 
> org.gradle.api.internal.tasks.execution.ResolveBuildCacheKeyExecuter.execute(ResolveBuildCacheKeyExecuter.java:102)
> at 
> org.gradle.api.internal.tasks.execution.ResolveBeforeExecutionStateTaskExecuter.execute(ResolveBeforeExecutionStateTaskExecuter.java:74)
> at 
> org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:58)
> at 
> org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:109)
> at 
> org.gradle.api.internal.tasks.execution.ResolveBeforeExecutionOutputsTaskExecuter.execute(ResolveBeforeExecutionOutputsTaskExecuter.java:67)
> at 
> org.gradle.api.internal.tasks.execution.StartSnapshotTaskInputsBuildOperationTaskExecuter.execute(StartSnapshotTaskInputsBuildOperationTaskExecuter.java:52)
> at 
> org.gradle.api.internal.tasks.execution.ResolveAfterPreviousExecutionStateTaskExecuter.execute(ResolveAfterPreviousExecutionStateTaskExecuter.java:46)
> at 
> org.gradle.api.internal.tasks.execution.CleanupStaleOutputsExecuter.execute(CleanupStaleOutputsExecuter.java:93)
> at 
> org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:45)
> at 
> org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:94)
> at 
> org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57)
> at 
> org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:56)
> at 
> org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36)
> at 
> org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:63)
> at 
> org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:49)
> at 
> org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:46)
> at 
> org.gradle.internal.operations.DefaultBuildOperationExecutor$CallableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:416)
> at 
> org.gradle.internal.operations.DefaultBuildOperationExecutor$CallableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:406)
> at 
> 

[jira] [Commented] (BEAM-8321) beam_PostCommit_PortableJar_Flink postcommit failing

2019-10-02 Thread Kenneth Knowles (Jira)


[ 
https://issues.apache.org/jira/browse/BEAM-8321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942977#comment-16942977
 ] 

Kenneth Knowles commented on BEAM-8321:
---

Is the issue now fixed? I see the PR is merged. (I'm just going through 
untriaged bugs)

> beam_PostCommit_PortableJar_Flink postcommit failing
> 
>
> Key: BEAM-8321
> URL: https://issues.apache.org/jira/browse/BEAM-8321
> Project: Beam
>  Issue Type: Bug
>  Components: test-failures
>Reporter: Kyle Weaver
>Assignee: Kyle Weaver
>Priority: Major
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Python docker container image names/tags need updated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (BEAM-8328) remove :beam-test-infra-metrics:test from build target.

2019-10-02 Thread Kenneth Knowles (Jira)


[ 
https://issues.apache.org/jira/browse/BEAM-8328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942978#comment-16942978
 ] 

Kenneth Knowles commented on BEAM-8328:
---

Which build target? I'm afraid I don't understand this one.

> remove :beam-test-infra-metrics:test from build target.
> ---
>
> Key: BEAM-8328
> URL: https://issues.apache.org/jira/browse/BEAM-8328
> Project: Beam
>  Issue Type: Bug
>  Components: project-management
>Reporter: Mikhail Gryzykhin
>Priority: Major
>






[jira] [Commented] (BEAM-8183) Optionally bundle multiple pipelines into a single Flink jar

2019-10-02 Thread Kenneth Knowles (Jira)


[ 
https://issues.apache.org/jira/browse/BEAM-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942974#comment-16942974
 ] 

Kenneth Knowles commented on BEAM-8183:
---

I may not understand the issue here, but I can share that for internal Dataflow 
testing we definitely make a huge jar with all the test pipelines in it.

> Optionally bundle multiple pipelines into a single Flink jar
> 
>
> Key: BEAM-8183
> URL: https://issues.apache.org/jira/browse/BEAM-8183
> Project: Beam
>  Issue Type: New Feature
>  Components: runner-flink
>Reporter: Kyle Weaver
>Assignee: Kyle Weaver
>Priority: Major
>  Labels: portability-flink
>
> [https://github.com/apache/beam/pull/9331#issuecomment-526734851]
> "With Flink you can bundle multiple entry points into the same jar file and 
> specify which one to use with optional flags. It may be desirable to allow 
> inclusion of multiple pipelines for this tool also, although that would 
> require a different workflow. Absent this option, it becomes quite convoluted 
> for users that need the flexibility to choose which pipeline to launch at 
> submission time."
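
The multiple-entry-point workflow quoted above can be sketched without any Beam or Flink dependency: one main class registers several pipeline entry points and dispatches to one of them based on a flag supplied at submission time. This is a minimal illustration only; the class name, pipeline names, and pipeline bodies are hypothetical, not part of Beam or of the proposed tool.

```java
import java.util.Map;

// Hypothetical dispatcher: several pipeline entry points bundled in one jar,
// with the pipeline to launch chosen by the first program argument.
class PipelineDispatcher {

    // Illustrative registry; in a real jar each Runnable would build and
    // run an actual pipeline instead of printing a message.
    static final Map<String, Runnable> PIPELINES = Map.of(
        "wordcount", () -> System.out.println("running wordcount pipeline"),
        "streaming-etl", () -> System.out.println("running streaming-etl pipeline"));

    // Look up an entry point by name; null means no such pipeline is bundled.
    static Runnable resolve(String name) {
        return PIPELINES.get(name);
    }

    public static void main(String[] args) {
        if (args.length == 0 || resolve(args[0]) == null) {
            System.err.println("Usage: java -jar pipelines.jar <"
                + String.join("|", PIPELINES.keySet()) + ">");
            System.exit(1);
        }
        // Launch only the selected pipeline.
        resolve(args[0]).run();
    }
}
```

With this layout, `java -jar pipelines.jar wordcount` launches one pipeline and `java -jar pipelines.jar streaming-etl` another, which is the flexibility at submission time that the comment asks for.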





[jira] [Updated] (BEAM-8319) Errorprone 0.0.13 fails during JDK11 build

2019-10-02 Thread Kenneth Knowles (Jira)


 [ 
https://issues.apache.org/jira/browse/BEAM-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenneth Knowles updated BEAM-8319:
--
Status: Open  (was: Triage Needed)

> Errorprone 0.0.13 fails during JDK11 build
> --
>
> Key: BEAM-8319
> URL: https://issues.apache.org/jira/browse/BEAM-8319
> Project: Beam
>  Issue Type: New Feature
>  Components: sdk-java-core
>Reporter: Lukasz Gajowy
>Assignee: Lukasz Gajowy
>Priority: Major
>
> I'm using openjdk 1.11.02. After switching the version to:
> {code:java}
> javaVersion = 11 {code}
> in BeamModule Plugin and running
> {code:java}
> ./gradlew clean build -p sdks/java/core -x test {code}
> building fails. I was able to run errorprone after upgrading it but had 
> problems with conflicting guava version. See more here: 
> https://issues.apache.org/jira/browse/BEAM-5085
>  
> Stacktrace:
> {code:java}
> org.gradle.api.tasks.TaskExecutionException: Execution failed for task 
> ':model:pipeline:compileJava'.
> at 
> org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$2.accept(ExecuteActionsTaskExecuter.java:121)
> at 
> org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$2.accept(ExecuteActionsTaskExecuter.java:117)
> at org.gradle.internal.Try$Failure.ifSuccessfulOrElse(Try.java:184)
> at 
> org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:110)
> at 
> org.gradle.api.internal.tasks.execution.ResolveIncrementalChangesTaskExecuter.execute(ResolveIncrementalChangesTaskExecuter.java:84)
> at 
> org.gradle.api.internal.tasks.execution.ResolveTaskOutputCachingStateExecuter.execute(ResolveTaskOutputCachingStateExecuter.java:91)
> at 
> org.gradle.api.internal.tasks.execution.FinishSnapshotTaskInputsBuildOperationTaskExecuter.execute(FinishSnapshotTaskInputsBuildOperationTaskExecuter.java:51)
> at 
> org.gradle.api.internal.tasks.execution.ResolveBuildCacheKeyExecuter.execute(ResolveBuildCacheKeyExecuter.java:102)
> at 
> org.gradle.api.internal.tasks.execution.ResolveBeforeExecutionStateTaskExecuter.execute(ResolveBeforeExecutionStateTaskExecuter.java:74)
> at 
> org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:58)
> at 
> org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:109)
> at 
> org.gradle.api.internal.tasks.execution.ResolveBeforeExecutionOutputsTaskExecuter.execute(ResolveBeforeExecutionOutputsTaskExecuter.java:67)
> at 
> org.gradle.api.internal.tasks.execution.StartSnapshotTaskInputsBuildOperationTaskExecuter.execute(StartSnapshotTaskInputsBuildOperationTaskExecuter.java:52)
> at 
> org.gradle.api.internal.tasks.execution.ResolveAfterPreviousExecutionStateTaskExecuter.execute(ResolveAfterPreviousExecutionStateTaskExecuter.java:46)
> at 
> org.gradle.api.internal.tasks.execution.CleanupStaleOutputsExecuter.execute(CleanupStaleOutputsExecuter.java:93)
> at 
> org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:45)
> at 
> org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:94)
> at 
> org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57)
> at 
> org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:56)
> at 
> org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36)
> at 
> org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:63)
> at 
> org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:49)
> at 
> org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:46)
> at 
> org.gradle.internal.operations.DefaultBuildOperationExecutor$CallableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:416)
> at 
> org.gradle.internal.operations.DefaultBuildOperationExecutor$CallableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:406)
> at 
> org.gradle.internal.operations.DefaultBuildOperationExecutor$1.execute(DefaultBuildOperationExecutor.java:165)
> at 
> org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:250)
> at 
> org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:158)
