[jira] [Updated] (BEAM-6903) Go IT fails on quota issues frequently
[ https://issues.apache.org/jira/browse/BEAM-6903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kenneth Knowles updated BEAM-6903:
----------------------------------
    Priority: P1  (was: P2)

> Go IT fails on quota issues frequently
> --------------------------------------
>
>                 Key: BEAM-6903
>                 URL: https://issues.apache.org/jira/browse/BEAM-6903
>             Project: Beam
>          Issue Type: Bug
>          Components: test-failures
>            Reporter: Boyuan Zhang
>            Assignee: Jason Kuster
>            Priority: P1
>              Labels: flake, stale-assigned
>
> https://builds.apache.org/job/beam_PostCommit_Go/3002/
> https://builds.apache.org/job/beam_PostCommit_Go/3000/
> https://builds.apache.org/job/beam_PostCommit_Go/2997/
> https://builds.apache.org/job/beam_PostCommit_Go/2993/


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Updated] (BEAM-6903) Go IT fails on quota issues frequently
[ https://issues.apache.org/jira/browse/BEAM-6903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kenneth Knowles updated BEAM-6903:
----------------------------------
      Labels: beam-fixit flake stale-assigned  (was: flake)

> Go IT fails on quota issues frequently
> --------------------------------------
>
>                 Key: BEAM-6903
>                 URL: https://issues.apache.org/jira/browse/BEAM-6903
>             Project: Beam
>          Issue Type: Bug
>          Components: test-failures
>            Reporter: Boyuan Zhang
>            Priority: P1
>              Labels: beam-fixit, flake, stale-assigned
[jira] [Updated] (BEAM-5070) nexmark.sources.UnboundedEventSourceTest.resumeFromCheckpoint is flaky
[ https://issues.apache.org/jira/browse/BEAM-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kenneth Knowles updated BEAM-5070:
----------------------------------
      Labels: beam-fixit flake stale-assigned  (was: flake stale-assigned)

> nexmark.sources.UnboundedEventSourceTest.resumeFromCheckpoint is flaky
> ----------------------------------------------------------------------
>
>                 Key: BEAM-5070
>                 URL: https://issues.apache.org/jira/browse/BEAM-5070
>             Project: Beam
>          Issue Type: Bug
>          Components: testing
>    Affects Versions: 2.5.0
>            Reporter: Reuven Lax
>            Assignee: Anton Kedin
>            Priority: P1
>              Labels: beam-fixit, flake, stale-assigned
>          Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> This test fails fairly frequently.
> History:
> [https://builds.apache.org/view/A-D/view/Beam/job/beam_PostCommit_Java_GradleBuild/1219/testReport/junit/org.apache.beam.sdk.nexmark.sources/UnboundedEventSourceTest/resumeFromCheckpoint/history/]
> Sample job:
> https://builds.apache.org/view/A-D/view/Beam/job/beam_PostCommit_Java_GradleBuild/1219/testReport/org.apache.beam.sdk.nexmark.sources/UnboundedEventSourceTest/resumeFromCheckpoint/
> Failure log:
> org.junit.ComparisonFailure:
> expected:<...":"UTC"},"afterNow":[true,"beforeNow":fals]e,"equalNow":false},...>
> but was:<...":"UTC"},"afterNow":[false,"beforeNow":tru]e,"equalNow":false},...>
>     at org.junit.Assert.assertEquals(Assert.java:115)
>     at org.junit.Assert.assertEquals(Assert.java:144)
>     at org.apache.beam.sdk.nexmark.sources.UnboundedEventSourceTest$EventIdChecker.add(UnboundedEventSourceTest.java:71)
>     at org.apache.beam.sdk.nexmark.sources.UnboundedEventSourceTest.resumeFromCheckpoint(UnboundedEventSourceTest.java:96)
[jira] [Updated] (BEAM-5070) nexmark.sources.UnboundedEventSourceTest.resumeFromCheckpoint is flaky
[ https://issues.apache.org/jira/browse/BEAM-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kenneth Knowles updated BEAM-5070:
----------------------------------
      Labels: flake stale-assigned  (was: stale-assigned)

> nexmark.sources.UnboundedEventSourceTest.resumeFromCheckpoint is flaky
> ----------------------------------------------------------------------
>
>                 Key: BEAM-5070
>                 URL: https://issues.apache.org/jira/browse/BEAM-5070
>             Project: Beam
>          Issue Type: Bug
>          Components: testing
>    Affects Versions: 2.5.0
>            Reporter: Reuven Lax
>            Assignee: Anton Kedin
>            Priority: P1
>              Labels: flake, stale-assigned
>          Time Spent: 2h 40m
>  Remaining Estimate: 0h
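The flipped `afterNow`/`beforeNow` fields in the ComparisonFailure above point at a time-dependent comparison: the events are serialized with clock-relative properties (Joda-Time's `DateTime` JSON includes them), so two serializations of the same event can disagree depending on when each one happens. A minimal Python sketch of the failure mode and a flake-resistant alternative; the serializer names here are hypothetical, not Beam's code:

```python
import json
import time

def serialize_event(ts, now):
    # Toy stand-in for a serializer that embeds clock-relative
    # properties (like afterNow/beforeNow in the failure above).
    return json.dumps({"ts": ts,
                       "afterNow": ts > now,
                       "beforeNow": ts < now,
                       "equalNow": ts == now})

def serialize_event_stable(ts):
    # Flake-resistant variant: only fields independent of when the
    # serialization happens take part in the comparison.
    return json.dumps({"ts": ts})

now = time.time()
ts = now + 0.5  # an event stamped just after "now"
a = serialize_event(ts, now)        # afterNow is True here
b = serialize_event(ts, now + 1.0)  # ...and False a moment later
assert a != b                       # same event, unequal serializations
assert serialize_event_stable(ts) == serialize_event_stable(ts)
```

Asserting equality on the stable form (or comparing parsed fields rather than whole serialized strings) removes the wall-clock dependency from the test.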
[jira] [Assigned] (BEAM-6903) Go IT fails on quota issues frequently
[ https://issues.apache.org/jira/browse/BEAM-6903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kenneth Knowles reassigned BEAM-6903:
-------------------------------------
    Assignee: Robert Burke  (was: Jason Kuster)

> Go IT fails on quota issues frequently
> --------------------------------------
>
>                 Key: BEAM-6903
>                 URL: https://issues.apache.org/jira/browse/BEAM-6903
>             Project: Beam
>          Issue Type: Bug
>          Components: test-failures
>            Reporter: Boyuan Zhang
>            Assignee: Robert Burke
>            Priority: P1
>              Labels: flake, stale-assigned
[jira] [Updated] (BEAM-6903) Go IT fails on quota issues frequently
[ https://issues.apache.org/jira/browse/BEAM-6903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kenneth Knowles updated BEAM-6903:
----------------------------------
      Labels: flake stale-assigned  (was: stale-assigned)

> Go IT fails on quota issues frequently
> --------------------------------------
>
>                 Key: BEAM-6903
>                 URL: https://issues.apache.org/jira/browse/BEAM-6903
>             Project: Beam
>          Issue Type: Bug
>          Components: test-failures
>            Reporter: Boyuan Zhang
>            Assignee: Jason Kuster
>            Priority: P2
>              Labels: flake, stale-assigned
[jira] [Updated] (BEAM-7014) Flake in gcsio.py / filesystemio.py - NotImplementedError: offset: 0, whence: 0
[ https://issues.apache.org/jira/browse/BEAM-7014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kenneth Knowles updated BEAM-7014:
----------------------------------
    Priority: P1  (was: P2)

> Flake in gcsio.py / filesystemio.py - NotImplementedError: offset: 0, whence: 0
> -------------------------------------------------------------------------------
>
>                 Key: BEAM-7014
>                 URL: https://issues.apache.org/jira/browse/BEAM-7014
>             Project: Beam
>          Issue Type: Bug
>          Components: sdk-py-core
>            Reporter: Valentyn Tymofieiev
>            Assignee: Chamikara Madhusanka Jayalath
>            Priority: P1
>              Labels: flake, stale-assigned
>
> The flake was observed in Precommit Direct Runner IT (wordcount).
> Full log output: https://pastebin.com/raw/DP5J7Uch.
> {noformat}
> Traceback (most recent call last):
> 08:42:57   File "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python_Verify_PR/src/sdks/python/apache_beam/io/gcp/gcsio.py", line 583, in _start_upload
> 08:42:57     self._client.objects.Insert(self._insert_request, upload=self._upload)
> 08:42:57   File "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python_Verify_PR/src/sdks/python/apache_beam/io/gcp/internal/clients/storage/storage_v1_client.py", line 1154, in Insert
> 08:42:57     upload=upload, upload_config=upload_config)
> 08:42:57   File "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python_Verify_PR/src/build/gradleenv/1327086738/local/lib/python2.7/site-packages/apitools/base/py/base_api.py", line 715, in _RunMethod
> 08:42:57     http_request, client=self.client)
> 08:42:57   File "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python_Verify_PR/src/build/gradleenv/1327086738/local/lib/python2.7/site-packages/apitools/base/py/transfer.py", line 885, in InitializeUpload
> 08:42:57     return self.StreamInChunks()
> 08:42:57   File "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python_Verify_PR/src/build/gradleenv/1327086738/local/lib/python2.7/site-packages/apitools/base/py/transfer.py", line 997, in StreamInChunks
> 08:42:57     additional_headers=additional_headers)
> 08:42:57   File "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python_Verify_PR/src/build/gradleenv/1327086738/local/lib/python2.7/site-packages/apitools/base/py/transfer.py", line 948, in __StreamMedia
> 08:42:57     self.RefreshResumableUploadState()
> 08:42:57   File "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python_Verify_PR/src/build/gradleenv/1327086738/local/lib/python2.7/site-packages/apitools/base/py/transfer.py", line 850, in RefreshResumableUploadState
> 08:42:57     self.stream.seek(self.progress)
> 08:42:57   File "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python_Verify_PR/src/sdks/python/apache_beam/io/filesystemio.py", line 269, in seek
> 08:42:57     offset, whence, self.position, self.last_position))
> 08:42:57 NotImplementedError: offset: 0, whence: 0, position: 48944, last: 0
> {noformat}
> [~chamikara] Might have context to triage this.
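The traceback shows the resumable-upload retry path (`RefreshResumableUploadState`) asking the upload stream to seek back to offset 0 after 48944 bytes have already been consumed. Beam's upload stream is pipe-backed: bytes handed to the uploader are gone, so it cannot honor a rewind. A toy model (not Beam's actual `PipeStream` implementation) reproducing the error shape:

```python
class ForwardOnlyStream(object):
    """Toy model of a pipe-backed upload stream: bytes already handed
    downstream cannot be re-read, so rewinding seek() is unsupported."""

    def __init__(self):
        self.position = 0
        self.last_position = 0

    def read(self, size):
        data = b'x' * size          # pretend bytes came off the pipe
        self.position += size
        return data

    def seek(self, offset, whence=0):
        # Only a no-op seek to the current position is supported here.
        if whence != 0 or offset != self.position:
            raise NotImplementedError(
                'offset: %d, whence: %d, position: %d, last: %d'
                % (offset, whence, self.position, self.last_position))

stream = ForwardOnlyStream()
stream.read(48944)                  # the uploader consumes 48944 bytes
try:
    stream.seek(0)                  # retry logic asks to restart at 0
    error = None
except NotImplementedError as e:
    error = str(e)
assert error == 'offset: 0, whence: 0, position: 48944, last: 0'
```

This is why the flake surfaces only when the HTTP layer decides to refresh/retry the resumable upload: the happy path never seeks backwards.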
[jira] [Updated] (BEAM-7014) Flake in gcsio.py / filesystemio.py - NotImplementedError: offset: 0, whence: 0
[ https://issues.apache.org/jira/browse/BEAM-7014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kenneth Knowles updated BEAM-7014:
----------------------------------
      Labels: flake stale-assigned  (was: stale-assigned)

> Flake in gcsio.py / filesystemio.py - NotImplementedError: offset: 0, whence: 0
> -------------------------------------------------------------------------------
>
>                 Key: BEAM-7014
>                 URL: https://issues.apache.org/jira/browse/BEAM-7014
>             Project: Beam
>          Issue Type: Bug
>          Components: sdk-py-core
>            Reporter: Valentyn Tymofieiev
>            Assignee: Chamikara Madhusanka Jayalath
>            Priority: P2
>              Labels: flake, stale-assigned
[jira] [Updated] (BEAM-7014) Flake in gcsio.py / filesystemio.py - NotImplementedError: offset: 0, whence: 0
[ https://issues.apache.org/jira/browse/BEAM-7014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kenneth Knowles updated BEAM-7014:
----------------------------------
      Labels: beam-fixit flake stale-assigned  (was: flake stale-assigned)

> Flake in gcsio.py / filesystemio.py - NotImplementedError: offset: 0, whence: 0
> -------------------------------------------------------------------------------
>
>                 Key: BEAM-7014
>                 URL: https://issues.apache.org/jira/browse/BEAM-7014
>             Project: Beam
>          Issue Type: Bug
>          Components: sdk-py-core
>            Reporter: Valentyn Tymofieiev
>            Assignee: Chamikara Madhusanka Jayalath
>            Priority: P1
>              Labels: beam-fixit, flake, stale-assigned
[jira] [Updated] (BEAM-7014) Flake in gcsio.py / filesystemio.py - NotImplementedError: offset: 0, whence: 0
[ https://issues.apache.org/jira/browse/BEAM-7014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kenneth Knowles updated BEAM-7014:
----------------------------------
      Status: Triage Needed  (was: Open)

> Flake in gcsio.py / filesystemio.py - NotImplementedError: offset: 0, whence: 0
> -------------------------------------------------------------------------------
>
>                 Key: BEAM-7014
>                 URL: https://issues.apache.org/jira/browse/BEAM-7014
>             Project: Beam
>          Issue Type: Bug
>          Components: sdk-py-core
>            Reporter: Valentyn Tymofieiev
>            Assignee: Chamikara Madhusanka Jayalath
>            Priority: P1
>              Labels: beam-fixit, flake, stale-assigned
[jira] [Updated] (BEAM-9318) Py 2 Precommit Flake: PortableRunnerTestWithLocalDocker test flaky
[ https://issues.apache.org/jira/browse/BEAM-9318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kenneth Knowles updated BEAM-9318:
----------------------------------
    Priority: P1  (was: P2)

> Py 2 Precommit Flake: PortableRunnerTestWithLocalDocker test flaky
> ------------------------------------------------------------------
>
>                 Key: BEAM-9318
>                 URL: https://issues.apache.org/jira/browse/BEAM-9318
>             Project: Beam
>          Issue Type: Bug
>          Components: sdk-py-core, testing
>            Reporter: Ahmet Altay
>            Assignee: Robert Bradshaw
>            Priority: P1
>              Labels: beam-fixit, flake, stale-assigned
>
> Log: [https://builds.apache.org/job/beam_PreCommit_Python_Commit/11178/]
> Precommit PR (this looks like an unrelated change): https://github.com/apache/beam/pull/10856
> Maybe result object is not available yet?
> Error:
> 08:29:55 self = testMethod=test_metrics>
> 08:29:55 check_gauge = True
> 08:29:55
> 08:29:55     def test_metrics(self, check_gauge=True):
> 08:29:55       p = self.create_pipeline()
> 08:29:55
> 08:29:55       counter = beam.metrics.Metrics.counter('ns', 'counter')
> 08:29:55       distribution = beam.metrics.Metrics.distribution('ns', 'distribution')
> 08:29:55       gauge = beam.metrics.Metrics.gauge('ns', 'gauge')
> 08:29:55
> 08:29:55       pcoll = p | beam.Create(['a', 'zzz'])
> 08:29:55       # pylint: disable=expression-not-assigned
> 08:29:55       pcoll | 'count1' >> beam.FlatMap(lambda x: counter.inc())
> 08:29:55       pcoll | 'count2' >> beam.FlatMap(lambda x: counter.inc(len(x)))
> 08:29:55       pcoll | 'dist' >> beam.FlatMap(lambda x: distribution.update(len(x)))
> 08:29:55       pcoll | 'gauge' >> beam.FlatMap(lambda x: gauge.set(3))
> 08:29:55
> 08:29:55       res = p.run()
> 08:29:55       res.wait_until_finish()
> 08:29:55 >     c1, = res.metrics().query(beam.metrics.MetricsFilter().with_step('count1'))[
> 08:29:55           'counters']
> 08:29:55
> 08:29:55 apache_beam/runners/portability/fn_api_runner_test.py:699:
> 08:29:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> 08:29:55 apache_beam/runners/portability/portable_runner.py:415: in metrics
> 08:29:55     beam_job_api_pb2.GetJobMetricsRequest(job_id=self._job_id))
> 08:29:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> 08:29:55
> 08:29:55 self = 0x7f5ada471590>
> 08:29:55 request = job_id: "job-be9e2a01-154f-4707-b04a-3d7ffbc39afb"
> 08:29:55 , context = None
> 08:29:55
> 08:29:55     def GetJobMetrics(self, request, context=None):
> 08:29:55       if request.job_id not in self._jobs:
> 08:29:55         raise LookupError("Job {} does not exist".format(request.job_id))
> 08:29:55
> 08:29:55       result = self._jobs[request.job_id].result
> 08:29:55       monitoring_info_list = []
> 08:29:55 >     for mi in result._monitoring_infos_by_stage.values():
> 08:29:55 E     AttributeError: 'NoneType' object has no attribute '_monitoring_infos_by_stage'
> 08:29:55
> 08:29:55 apache_beam/runners/portability/local_job_service.py:157: AttributeError
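The AttributeError supports the "result object is not available yet" guess above: `GetJobMetrics` dereferences the job's `result` while it is still `None`, i.e. there is a window between the job finishing from the client's point of view and the service recording the result. A small sketch of the race and one hypothetical guard (returning "no metrics yet" instead of crashing); this is illustrative, not Beam's actual fix:

```python
class Job(object):
    def __init__(self):
        self.result = None  # only set once the pipeline run completes

    def finish(self):
        class Result(object):
            # Stand-in for the runner's per-stage monitoring infos.
            _monitoring_infos_by_stage = {'stage-1': ['counter: 4']}
        self.result = Result()

def get_job_metrics(jobs, job_id):
    if job_id not in jobs:
        raise LookupError('Job {} does not exist'.format(job_id))
    result = jobs[job_id].result
    if result is None:
        # Hypothetical guard: report no metrics yet rather than
        # dereference None (the AttributeError in the log above).
        return []
    infos = []
    for mi in result._monitoring_infos_by_stage.values():
        infos.extend(mi)
    return infos

jobs = {'job-1': Job()}
assert get_job_metrics(jobs, 'job-1') == []   # race window: no result yet
jobs['job-1'].finish()
assert get_job_metrics(jobs, 'job-1') == ['counter: 4']
```

Alternatively the service could set `result` before signaling terminal state, closing the window instead of papering over it.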
[jira] [Updated] (BEAM-9318) Py 2 Precommit Flake: PortableRunnerTestWithLocalDocker test flaky
[ https://issues.apache.org/jira/browse/BEAM-9318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kenneth Knowles updated BEAM-9318:
----------------------------------
      Labels: beam-fixit flake stale-assigned  (was: stale-assigned)

> Py 2 Precommit Flake: PortableRunnerTestWithLocalDocker test flaky
> ------------------------------------------------------------------
>
>                 Key: BEAM-9318
>                 URL: https://issues.apache.org/jira/browse/BEAM-9318
>             Project: Beam
>          Issue Type: Bug
>          Components: sdk-py-core, testing
>            Reporter: Ahmet Altay
>            Assignee: Robert Bradshaw
>            Priority: P2
>              Labels: beam-fixit, flake, stale-assigned
[jira] [Updated] (BEAM-8879) IOError flake in PortableRunnerTest
[ https://issues.apache.org/jira/browse/BEAM-8879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kenneth Knowles updated BEAM-8879:
----------------------------------
    Priority: P1  (was: P2)

> IOError flake in PortableRunnerTest
> -----------------------------------
>
>                 Key: BEAM-8879
>                 URL: https://issues.apache.org/jira/browse/BEAM-8879
>             Project: Beam
>          Issue Type: Bug
>          Components: runner-core, sdk-py-core
>            Reporter: Udi Meiri
>            Assignee: Kyle Weaver
>            Priority: P1
>              Labels: flake, stale-assigned
>
> Running in Py2.7 virtualenv, using this command:
> {code}
> for i in `seq 100`; do pytest apache_beam/runners/portability/portable_runner_test.py::PortableRunnerTest || break; done
> {code}
> {code}
> self = , type = None, value = None, traceback = None
>     def __exit__(self, type, value, traceback):
>         if type is None:
>             try:
> >               self.gen.next()
> E               IOError: seek() called during concurrent operation on the same file object
> /usr/lib/python2.7/contextlib.py:24: IOError
> {code}
> So far I've tried this twice and gotten the above failure in 2 different tests:
> PortableRunnerTest.test_error_traceback_includes_user_code
> and
> PortableRunnerTest.test_assert_that
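CPython 2 raises this IOError when two threads call `seek()`/`read()` on the same file object at the same time. The general mitigation is to serialize all access to the shared object behind a lock so no two operations ever overlap. A minimal sketch of that pattern (the names here are illustrative, not from Beam's code):

```python
import io
import threading

# One shared file-like object; the lock makes each seek()+read() pair
# atomic, so no two threads ever operate on the object concurrently -
# the condition the IOError above complains about.
shared = io.BytesIO(b'0123456789' * 100)
shared_lock = threading.Lock()

def read_at(offset, size):
    with shared_lock:
        shared.seek(offset)
        return shared.read(size)

results = {}

def worker(i):
    results[i] = read_at(i * 10, 10)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert all(results[i] == b'0123456789' for i in range(8))
```

Without the lock, interleaved `seek`/`read` pairs would also silently return bytes from the wrong offset even on interpreters that do not raise.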
[jira] [Updated] (BEAM-9757) Flake in JavaPortabilityApi precommit
[ https://issues.apache.org/jira/browse/BEAM-9757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kenneth Knowles updated BEAM-9757:
----------------------------------
      Labels: flake stale-assigned  (was: stale-assigned)

> Flake in JavaPortabilityApi precommit
> -------------------------------------
>
>                 Key: BEAM-9757
>                 URL: https://issues.apache.org/jira/browse/BEAM-9757
>             Project: Beam
>          Issue Type: Bug
>          Components: test-failures
>            Reporter: Kyle Weaver
>            Assignee: Boyuan Zhang
>            Priority: P1
>              Labels: flake, stale-assigned
>
> org.apache.beam.runners.dataflow.worker.util.MemoryMonitorTest.detectGCThrashing
> Error Message
> java.lang.AssertionError
> Stacktrace
> java.lang.AssertionError
>     at org.junit.Assert.fail(Assert.java:87)
>     at org.junit.Assert.assertTrue(Assert.java:42)
>     at org.junit.Assert.assertTrue(Assert.java:53)
>     at org.apache.beam.runners.dataflow.worker.util.MemoryMonitorTest.detectGCThrashing(MemoryMonitorTest.java:93)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:288)
>     at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:282)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at java.lang.Thread.run(Thread.java:748)
> Standard Error
> Apr 13, 2020 11:38:54 PM org.apache.beam.runners.dataflow.worker.util.MemoryMonitor run
> INFO: Memory is used/total/max = 189/1511/1820 MB, GC last/max = 0.00/0.00 %, #pushbacks=0, gc thrashing=false
> Apr 13, 2020 11:38:56 PM org.apache.beam.runners.dataflow.worker.util.MemoryMonitor dumpHeap
> WARNING: Heap dumped to /tmp/junit4868537985750235077/junit1782833439507703448/heap_dump.hprof
> Apr 13, 2020 11:38:58 PM org.apache.beam.runners.dataflow.worker.util.MemoryMonitor run
> INFO: Memory is used/total/max = 210/1511/1820 MB, GC last/max = 0.00/0.00 %, #pushbacks=0, gc thrashing=false
> Apr 13, 2020 11:38:58 PM org.apache.beam.runners.dataflow.worker.util.MemoryMonitor dumpHeap
> WARNING: Heap dumped to /tmp/junit5110608932509920991/junit3762538188841792209/heap_dump.hprof
> Apr 13, 2020 11:38:59 PM org.apache.beam.runners.dataflow.worker.util.MemoryMonitor run
> INFO: Memory is used/total/max = 231/1511/1820 MB, GC last/max = 0.00/0.00 %, #pushbacks=0, gc thrashing=false
> Apr 13, 2020 11:38:59 PM org.apache.beam.runners.dataflow.worker.util.MemoryMonitor dumpHeap
> WARNING: Heap dumped to /tmp/junit419632844612311154/junit923609148290982010/heap_dump.hprof
> Apr 13, 2020 11:38:59 PM org.apache.beam.runners.dataflow.worker.util.MemoryMonitor tryUploadHeapDumpIfItExists
> INFO: Looking for heap dump at /tmp/junit419632844612311154/junit923609148290982010/heap_dump.hprof
> Apr 13, 2020 11:38:59 PM org.apache.beam.runners.dataflow.worker.util.MemoryMonitor tryUploadHeapDumpIfItExists
> WARNING: Heap dump /tmp/junit419632844612311154/junit923609148290982010/heap_dump.hprof detected, attempting to upload to GCS
> Apr 13, 2020 11:39:00 PM org.apache.beam.runners.dataflow.worker.util.MemoryMonitor tryUploadHeapDumpIfItExists
> WARNING: Heap dump /tmp/junit419632844612311154/junit923609148290982010/heap_dump.hprof uploaded to /tmp/junit419632844612311154/junit2761624807817345489/heap_dumpa8934b66-834c-4c26-8b05-47c995150ef8.hprof
> Apr 13, 2020 11:39:00 PM org.apache.beam.runners.dataflow.worker.util.MemoryMonitor tryUploadHeapDumpIfItExists
> INFO: Deleted local heap dump /tmp/junit419632844612311154/junit923609148290982010/heap_dump.hprof
> Apr 13, 2020 11:39:00 PM org.apache.beam.runners.dataflow.worker.util.MemoryMonitor run
> INFO: Memory is used/total/max = 247/1511/1820 MB, GC last/max = 120.00/120.00 %, #pushbacks=0, gc thrashing=false
> Apr 13, 2020 11:39:00 PM org.apache.beam.runners.dataflow.worker.util.MemoryMonitor waitForResources
> INFO: Waiting for resources for Test2. Memory is used/total/max = 247/1511/1820 MB, GC last/max = 100.00/120.00 %, #pushbacks=1, gc thrashing=true
> Apr 13, 2020 11:39:00 PM
[jira] [Updated] (BEAM-9757) Flake in JavaPortabilityApi precommit
[ https://issues.apache.org/jira/browse/BEAM-9757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-9757: -- Priority: P1 (was: P2) > Flake in JavaPortabilityApi precommit > - > > Key: BEAM-9757 > URL: https://issues.apache.org/jira/browse/BEAM-9757 > Project: Beam > Issue Type: Bug > Components: test-failures >Reporter: Kyle Weaver >Assignee: Boyuan Zhang >Priority: P1 > Labels: stale-assigned > > org.apache.beam.runners.dataflow.worker.util.MemoryMonitorTest.detectGCThrashing > Error Message > java.lang.AssertionError > Stacktrace > java.lang.AssertionError > at org.junit.Assert.fail(Assert.java:87) > at org.junit.Assert.assertTrue(Assert.java:42) > at org.junit.Assert.assertTrue(Assert.java:53) > at > org.apache.beam.runners.dataflow.worker.util.MemoryMonitorTest.detectGCThrashing(MemoryMonitorTest.java:93) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:288) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:282) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at java.lang.Thread.run(Thread.java:748) > Standard Error > Apr 13, 2020 11:38:54 PM > org.apache.beam.runners.dataflow.worker.util.MemoryMonitor run > INFO: Memory is used/total/max = 
189/1511/1820 MB, GC last/max = 0.00/0.00 %, > #pushbacks=0, gc thrashing=false > Apr 13, 2020 11:38:56 PM > org.apache.beam.runners.dataflow.worker.util.MemoryMonitor dumpHeap > WARNING: Heap dumped to > /tmp/junit4868537985750235077/junit1782833439507703448/heap_dump.hprof > Apr 13, 2020 11:38:58 PM > org.apache.beam.runners.dataflow.worker.util.MemoryMonitor run > INFO: Memory is used/total/max = 210/1511/1820 MB, GC last/max = 0.00/0.00 %, > #pushbacks=0, gc thrashing=false > Apr 13, 2020 11:38:58 PM > org.apache.beam.runners.dataflow.worker.util.MemoryMonitor dumpHeap > WARNING: Heap dumped to > /tmp/junit5110608932509920991/junit3762538188841792209/heap_dump.hprof > Apr 13, 2020 11:38:59 PM > org.apache.beam.runners.dataflow.worker.util.MemoryMonitor run > INFO: Memory is used/total/max = 231/1511/1820 MB, GC last/max = 0.00/0.00 %, > #pushbacks=0, gc thrashing=false > Apr 13, 2020 11:38:59 PM > org.apache.beam.runners.dataflow.worker.util.MemoryMonitor dumpHeap > WARNING: Heap dumped to > /tmp/junit419632844612311154/junit923609148290982010/heap_dump.hprof > Apr 13, 2020 11:38:59 PM > org.apache.beam.runners.dataflow.worker.util.MemoryMonitor > tryUploadHeapDumpIfItExists > INFO: Looking for heap dump at > /tmp/junit419632844612311154/junit923609148290982010/heap_dump.hprof > Apr 13, 2020 11:38:59 PM > org.apache.beam.runners.dataflow.worker.util.MemoryMonitor > tryUploadHeapDumpIfItExists > WARNING: Heap dump > /tmp/junit419632844612311154/junit923609148290982010/heap_dump.hprof > detected, attempting to upload to GCS > Apr 13, 2020 11:39:00 PM > org.apache.beam.runners.dataflow.worker.util.MemoryMonitor > tryUploadHeapDumpIfItExists > WARNING: Heap dump > /tmp/junit419632844612311154/junit923609148290982010/heap_dump.hprof uploaded > to > /tmp/junit419632844612311154/junit2761624807817345489/heap_dumpa8934b66-834c-4c26-8b05-47c995150ef8.hprof > Apr 13, 2020 11:39:00 PM > org.apache.beam.runners.dataflow.worker.util.MemoryMonitor > 
tryUploadHeapDumpIfItExists > INFO: Deleted local heap dump > /tmp/junit419632844612311154/junit923609148290982010/heap_dump.hprof > Apr 13, 2020 11:39:00 PM > org.apache.beam.runners.dataflow.worker.util.MemoryMonitor run > INFO: Memory is used/total/max = 247/1511/1820 MB, GC last/max = > 120.00/120.00 %, #pushbacks=0, gc thrashing=false > Apr 13, 2020 11:39:00 PM > org.apache.beam.runners.dataflow.worker.util.MemoryMonitor waitForResources > INFO: Waiting for resources for Test2. Memory is used/total/max = > 247/1511/1820 MB, GC last/max = 100.00/120.00 %, #pushbacks=1, gc > thrashing=true > Apr 13, 2020 11:39:00 PM > org.apache.beam.runners.dataflow.worker.util.MemoryMonitor
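The "gc thrashing" flag in the MemoryMonitor log lines above can be sketched as a windowed check on the fraction of time spent in GC. A minimal Python sketch follows; the class name, threshold, and windowing are illustrative assumptions, not Beam's actual MemoryMonitor logic:

```python
import collections

class GcThrashingDetector:
    """Hedged sketch of a GC-thrashing check in the spirit of the
    MemoryMonitor log lines above; names and thresholds are assumptions."""

    def __init__(self, threshold_pct=50.0, window=4):
        self.threshold_pct = threshold_pct
        # Keep only the most recent `window` samples.
        self.samples = collections.deque(maxlen=window)

    def observe(self, gc_time_pct):
        # Record one periodic sample of the % of wall-clock time spent in GC.
        self.samples.append(gc_time_pct)

    def is_thrashing(self):
        # Thrashing: the window is full and every recent sample exceeds the
        # threshold, i.e. the JVM spends most of its time collecting.
        return (len(self.samples) == self.samples.maxlen
                and all(s > self.threshold_pct for s in self.samples))
```

A test like `detectGCThrashing` would then feed synthetic GC-time samples into `observe` and assert on `is_thrashing()`.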
[jira] [Updated] (BEAM-8879) IOError flake in PortableRunnerTest
[ https://issues.apache.org/jira/browse/BEAM-8879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-8879: -- Labels: beam-fixit flake stale-assigned (was: flake stale-assigned) > IOError flake in PortableRunnerTest > --- > > Key: BEAM-8879 > URL: https://issues.apache.org/jira/browse/BEAM-8879 > Project: Beam > Issue Type: Bug > Components: runner-core, sdk-py-core >Reporter: Udi Meiri >Assignee: Kyle Weaver >Priority: P1 > Labels: beam-fixit, flake, stale-assigned > > Running in Py2.7 virtualenv, using this command: > {code} > for i in `seq 100`; do pytest > apache_beam/runners/portability/portable_runner_test.py::PortableRunnerTest > || break; done > {code} > {code} > self = , type = > None, value = None, traceback = None > def __exit__(self, type, value, traceback): > if type is None: > try: > > self.gen.next() > E IOError: seek() called during concurrent operation on the > same file object > /usr/lib/python2.7/contextlib.py:24: IOError > {code} > So far I've tried this twice and gotten the above failure in 2 different > tests: > PortableRunnerTest.test_error_traceback_includes_user_code > and > PortableRunnerTest.test_assert_that -- This message was sent by Atlassian Jira (v8.3.4#803005)
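The IOError above ("seek() called during concurrent operation on the same file object") arises when two threads operate on one shared file object at once. A common mitigation is to serialize access with a lock; the sketch below is illustrative only and is not Beam's actual fix for BEAM-8879:

```python
import io
import threading

class LockedFile:
    """Serialize access to a shared file-like object so concurrent
    seek()/read() calls cannot interleave. Illustrative sketch only."""

    def __init__(self, f):
        self._f = f
        self._lock = threading.Lock()

    def seek(self, pos, whence=0):
        # Only one thread may reposition the file at a time.
        with self._lock:
            return self._f.seek(pos, whence)

    def read(self, size=-1):
        # Reads take the same lock, so a seek cannot race with a read.
        with self._lock:
            return self._f.read(size)

shared = LockedFile(io.BytesIO(b"abcdef"))
shared.seek(2)
print(shared.read(2))  # b'cd'
```

Wrapping the shared object this way keeps each seek/read pair atomic, at the cost of serializing all file access through one lock.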
[jira] [Updated] (BEAM-5122) [beam_PostCommit_Java_GradleBuild][org.apache.beam.sdk.extensions.sql.meta.provider.pubsub.PubsubJsonIT.testUsesDlq][Flake] Suspect on pubsub initialization timeout.
[ https://issues.apache.org/jira/browse/BEAM-5122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-5122: -- Priority: P1 (was: P2) > [beam_PostCommit_Java_GradleBuild][org.apache.beam.sdk.extensions.sql.meta.provider.pubsub.PubsubJsonIT.testUsesDlq][Flake] > Suspect on pubsub initialization timeout. > - > > Key: BEAM-5122 > URL: https://issues.apache.org/jira/browse/BEAM-5122 > Project: Beam > Issue Type: Bug > Components: test-failures >Reporter: Mikhail Gryzykhin >Assignee: Anton Kedin >Priority: P1 > Labels: flake, stale-assigned > Fix For: Not applicable > > Time Spent: 7h 10m > Remaining Estimate: 0h > > [https://builds.apache.org/job/beam_PostCommit_Java_GradleBuild/1216/testReport/junit/org.apache.beam.sdk.extensions.sql.meta.provider.pubsub/PubsubJsonIT/testUsesDlq/history/] > Test flakes with timeout of getting update on pubsub: > java.lang.AssertionError: Did not receive signal on > projects/apache-beam-testing/subscriptions/result-subscription--6677803195159868432 > in 60s at > org.apache.beam.sdk.io.gcp.pubsub.TestPubsubSignal.pollForResultForDuration(TestPubsubSignal.java:269) > at > org.apache.beam.sdk.io.gcp.pubsub.TestPubsubSignal.waitForSuccess(TestPubsubSignal.java:237) > at > org.apache.beam.sdk.extensions.sql.meta.provider.pubsub.PubsubJsonIT.testUsesDlq(PubsubJsonIT.java:206) > [https://builds.apache.org/job/beam_PostCommit_Java_GradleBuild/1216/testReport/org.apache.beam.sdk.extensions.sql.meta.provider.pubsub/PubsubJsonIT/testUsesDlq/] > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-9757) Flake in JavaPortabilityApi precommit
[ https://issues.apache.org/jira/browse/BEAM-9757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-9757: -- Labels: beam-fixit flake stale-assigned (was: flake stale-assigned) > Flake in JavaPortabilityApi precommit > - > > Key: BEAM-9757 > URL: https://issues.apache.org/jira/browse/BEAM-9757 > Project: Beam > Issue Type: Bug > Components: test-failures >Reporter: Kyle Weaver >Assignee: Boyuan Zhang >Priority: P1 > Labels: beam-fixit, flake, stale-assigned > > org.apache.beam.runners.dataflow.worker.util.MemoryMonitorTest.detectGCThrashing > Error Message > java.lang.AssertionError > Stacktrace > java.lang.AssertionError > at org.junit.Assert.fail(Assert.java:87) > at org.junit.Assert.assertTrue(Assert.java:42) > at org.junit.Assert.assertTrue(Assert.java:53) > at > org.apache.beam.runners.dataflow.worker.util.MemoryMonitorTest.detectGCThrashing(MemoryMonitorTest.java:93) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:288) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:282) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at java.lang.Thread.run(Thread.java:748) > Standard Error > Apr 13, 2020 11:38:54 PM > 
org.apache.beam.runners.dataflow.worker.util.MemoryMonitor run > INFO: Memory is used/total/max = 189/1511/1820 MB, GC last/max = 0.00/0.00 %, > #pushbacks=0, gc thrashing=false > Apr 13, 2020 11:38:56 PM > org.apache.beam.runners.dataflow.worker.util.MemoryMonitor dumpHeap > WARNING: Heap dumped to > /tmp/junit4868537985750235077/junit1782833439507703448/heap_dump.hprof > Apr 13, 2020 11:38:58 PM > org.apache.beam.runners.dataflow.worker.util.MemoryMonitor run > INFO: Memory is used/total/max = 210/1511/1820 MB, GC last/max = 0.00/0.00 %, > #pushbacks=0, gc thrashing=false > Apr 13, 2020 11:38:58 PM > org.apache.beam.runners.dataflow.worker.util.MemoryMonitor dumpHeap > WARNING: Heap dumped to > /tmp/junit5110608932509920991/junit3762538188841792209/heap_dump.hprof > Apr 13, 2020 11:38:59 PM > org.apache.beam.runners.dataflow.worker.util.MemoryMonitor run > INFO: Memory is used/total/max = 231/1511/1820 MB, GC last/max = 0.00/0.00 %, > #pushbacks=0, gc thrashing=false > Apr 13, 2020 11:38:59 PM > org.apache.beam.runners.dataflow.worker.util.MemoryMonitor dumpHeap > WARNING: Heap dumped to > /tmp/junit419632844612311154/junit923609148290982010/heap_dump.hprof > Apr 13, 2020 11:38:59 PM > org.apache.beam.runners.dataflow.worker.util.MemoryMonitor > tryUploadHeapDumpIfItExists > INFO: Looking for heap dump at > /tmp/junit419632844612311154/junit923609148290982010/heap_dump.hprof > Apr 13, 2020 11:38:59 PM > org.apache.beam.runners.dataflow.worker.util.MemoryMonitor > tryUploadHeapDumpIfItExists > WARNING: Heap dump > /tmp/junit419632844612311154/junit923609148290982010/heap_dump.hprof > detected, attempting to upload to GCS > Apr 13, 2020 11:39:00 PM > org.apache.beam.runners.dataflow.worker.util.MemoryMonitor > tryUploadHeapDumpIfItExists > WARNING: Heap dump > /tmp/junit419632844612311154/junit923609148290982010/heap_dump.hprof uploaded > to > /tmp/junit419632844612311154/junit2761624807817345489/heap_dumpa8934b66-834c-4c26-8b05-47c995150ef8.hprof > Apr 13, 2020 
11:39:00 PM > org.apache.beam.runners.dataflow.worker.util.MemoryMonitor > tryUploadHeapDumpIfItExists > INFO: Deleted local heap dump > /tmp/junit419632844612311154/junit923609148290982010/heap_dump.hprof > Apr 13, 2020 11:39:00 PM > org.apache.beam.runners.dataflow.worker.util.MemoryMonitor run > INFO: Memory is used/total/max = 247/1511/1820 MB, GC last/max = > 120.00/120.00 %, #pushbacks=0, gc thrashing=false > Apr 13, 2020 11:39:00 PM > org.apache.beam.runners.dataflow.worker.util.MemoryMonitor waitForResources > INFO: Waiting for resources for Test2. Memory is used/total/max = > 247/1511/1820 MB, GC last/max = 100.00/120.00 %, #pushbacks=1, gc > thrashing=true > Apr 13, 2020 11:39:00 PM >
[jira] [Closed] (BEAM-5122) [beam_PostCommit_Java_GradleBuild][org.apache.beam.sdk.extensions.sql.meta.provider.pubsub.PubsubJsonIT.testUsesDlq][Flake] Suspect on pubsub initialization timeout.
[ https://issues.apache.org/jira/browse/BEAM-5122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles closed BEAM-5122. - Fix Version/s: Not applicable Resolution: Fixed > [beam_PostCommit_Java_GradleBuild][org.apache.beam.sdk.extensions.sql.meta.provider.pubsub.PubsubJsonIT.testUsesDlq][Flake] > Suspect on pubsub initialization timeout. > - > > Key: BEAM-5122 > URL: https://issues.apache.org/jira/browse/BEAM-5122 > Project: Beam > Issue Type: Bug > Components: test-failures >Reporter: Mikhail Gryzykhin >Assignee: Anton Kedin >Priority: P2 > Labels: flake, stale-assigned > Fix For: Not applicable > > Time Spent: 7h 10m > Remaining Estimate: 0h > > [https://builds.apache.org/job/beam_PostCommit_Java_GradleBuild/1216/testReport/junit/org.apache.beam.sdk.extensions.sql.meta.provider.pubsub/PubsubJsonIT/testUsesDlq/history/] > Test flakes with timeout of getting update on pubsub: > java.lang.AssertionError: Did not receive signal on > projects/apache-beam-testing/subscriptions/result-subscription--6677803195159868432 > in 60s at > org.apache.beam.sdk.io.gcp.pubsub.TestPubsubSignal.pollForResultForDuration(TestPubsubSignal.java:269) > at > org.apache.beam.sdk.io.gcp.pubsub.TestPubsubSignal.waitForSuccess(TestPubsubSignal.java:237) > at > org.apache.beam.sdk.extensions.sql.meta.provider.pubsub.PubsubJsonIT.testUsesDlq(PubsubJsonIT.java:206) > [https://builds.apache.org/job/beam_PostCommit_Java_GradleBuild/1216/testReport/org.apache.beam.sdk.extensions.sql.meta.provider.pubsub/PubsubJsonIT/testUsesDlq/] > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-5122) [beam_PostCommit_Java_GradleBuild][org.apache.beam.sdk.extensions.sql.meta.provider.pubsub.PubsubJsonIT.testUsesDlq][Flake] Suspect on pubsub initialization timeout.
[ https://issues.apache.org/jira/browse/BEAM-5122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-5122: -- Labels: flake stale-assigned (was: stale-assigned) > [beam_PostCommit_Java_GradleBuild][org.apache.beam.sdk.extensions.sql.meta.provider.pubsub.PubsubJsonIT.testUsesDlq][Flake] > Suspect on pubsub initialization timeout. > - > > Key: BEAM-5122 > URL: https://issues.apache.org/jira/browse/BEAM-5122 > Project: Beam > Issue Type: Bug > Components: test-failures >Reporter: Mikhail Gryzykhin >Assignee: Anton Kedin >Priority: P2 > Labels: flake, stale-assigned > Time Spent: 7h 10m > Remaining Estimate: 0h > > [https://builds.apache.org/job/beam_PostCommit_Java_GradleBuild/1216/testReport/junit/org.apache.beam.sdk.extensions.sql.meta.provider.pubsub/PubsubJsonIT/testUsesDlq/history/] > Test flakes with timeout of getting update on pubsub: > java.lang.AssertionError: Did not receive signal on > projects/apache-beam-testing/subscriptions/result-subscription--6677803195159868432 > in 60s at > org.apache.beam.sdk.io.gcp.pubsub.TestPubsubSignal.pollForResultForDuration(TestPubsubSignal.java:269) > at > org.apache.beam.sdk.io.gcp.pubsub.TestPubsubSignal.waitForSuccess(TestPubsubSignal.java:237) > at > org.apache.beam.sdk.extensions.sql.meta.provider.pubsub.PubsubJsonIT.testUsesDlq(PubsubJsonIT.java:206) > [https://builds.apache.org/job/beam_PostCommit_Java_GradleBuild/1216/testReport/org.apache.beam.sdk.extensions.sql.meta.provider.pubsub/PubsubJsonIT/testUsesDlq/] > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-8879) IOError flake in PortableRunnerTest
[ https://issues.apache.org/jira/browse/BEAM-8879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-8879: -- Labels: flake stale-assigned (was: stale-assigned) > IOError flake in PortableRunnerTest > --- > > Key: BEAM-8879 > URL: https://issues.apache.org/jira/browse/BEAM-8879 > Project: Beam > Issue Type: Bug > Components: runner-core, sdk-py-core >Reporter: Udi Meiri >Assignee: Kyle Weaver >Priority: P2 > Labels: flake, stale-assigned > > Running in Py2.7 virtualenv, using this command: > {code} > for i in `seq 100`; do pytest > apache_beam/runners/portability/portable_runner_test.py::PortableRunnerTest > || break; done > {code} > {code} > self = , type = > None, value = None, traceback = None > def __exit__(self, type, value, traceback): > if type is None: > try: > > self.gen.next() > E IOError: seek() called during concurrent operation on the > same file object > /usr/lib/python2.7/contextlib.py:24: IOError > {code} > So far I've tried this twice and gotten the above failure in 2 different > tests: > PortableRunnerTest.test_error_traceback_includes_user_code > and > PortableRunnerTest.test_assert_that -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (BEAM-5973) [Flake] Various ValidatesRunner Post-commits flaking due to quota issues.
[ https://issues.apache.org/jira/browse/BEAM-5973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles closed BEAM-5973. - Fix Version/s: Not applicable Resolution: Fixed > [Flake] Various ValidatesRunner Post-commits flaking due to quota issues. > - > > Key: BEAM-5973 > URL: https://issues.apache.org/jira/browse/BEAM-5973 > Project: Beam > Issue Type: Bug > Components: test-failures >Reporter: Daniel Oliveira >Assignee: Boyuan Zhang >Priority: P3 > Labels: flake, stale-assigned > Fix For: Not applicable > > > Multiple post-commits all seem to have failed at the same time due to > extremely similar GCP errors: > beam_PostCommit_Java_GradleBuild: > [https://builds.apache.org/job/beam_PostCommit_Java_GradleBuild/1822/] > Several tests fail with one of the two following errors: > {noformat} > Nov 04, 2018 6:40:14 PM > org.apache.beam.runners.dataflow.TestDataflowRunner$ErrorMonitorMessagesHandler > process > INFO: Dataflow job 2018-11-04_10_37_12-7420261977214120411 threw exception. > Failure message was: Startup of the worker pool in zone us-central1-b failed > to bring up any of the desired 1 workers. QUOTA_EXCEEDED: Quota > 'DISKS_TOTAL_GB' exceeded. Limit: 20.0 in region us-central1.{noformat} > {noformat} > Nov 04, 2018 6:39:14 PM > org.apache.beam.runners.dataflow.TestDataflowRunner$ErrorMonitorMessagesHandler > process INFO: Dataflow job 2018-11-04_10_37_11-14433481609734431843 threw > exception. Failure message was: Startup of the worker pool in zone > us-central1-b failed to bring up any of the desired 1 workers. > QUOTA_EXCEEDED: Quota 'CPUS' exceeded. Limit: 750.0 in region us-central1. 
> {noformat} > beam_PostCommit_Java_ValidatesRunner_PortabilityApi_Dataflow_Gradle: > [https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_PortabilityApi_Dataflow_Gradle/31/] > Test failures include the errors pasted above, plus one new one: > > {noformat} > Nov 04, 2018 6:38:13 PM > org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process > SEVERE: 2018-11-04T18:38:04.612Z: Workflow failed. Causes: Project > apache-beam-testing has insufficient quota(s) to execute this workflow with 1 > instances in region us-central1. Quota summary (required/available): 1/7192 > instances, 1/202 CPUs, 250/121 disk GB, 0/4046 SSD disk GB, 1/267 instance > groups, 1/267 managed instance groups, 1/242 instance templates, 1/446 in-use > IP addresses.{noformat} > > beam_PostCommit_Java_PVR_Flink: > [https://builds.apache.org/job/beam_PostCommit_Java_PVR_Flink/214/] > The error appears differently but is caused by a lack of memory, so it seems > related to the quota issues above. > > {noformat} > Java HotSpot(TM) 64-Bit Server VM warning: > INFO: os::commit_memory(0x0003acd8, 6654787584, 0) failed; > error='Cannot allocate memory' (errno=12) > # > # There is insufficient memory for the Java Runtime Environment to continue. > # Native memory allocation > (mmap) failed to map > 6654787584 > bytes > for > committing reserved memory.{noformat} > Project > beam_PostCommit_Java_ValidatesRunner_Flink_Gradle:[https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Flink_Gradle/2101/] > I couldn't find a visible error with the failure in this job, but I'm > grouping it together with the other failures due to it flaking at the same > time as the other Flink VR Post-commit. > > > I may be grouping these failures a bit too aggressively. If anyone believes > that the failures are caused by different reasons please split this into > multiple bugs. 
> > A possibility is that these errors are caused by us running all our > post-commits at the same time, causing resources to be used up in bursts. > Maybe if we stagger our post-commits some of these quota issues could be > avoided. -- This message was sent by Atlassian Jira (v8.3.4#803005)
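The staggering idea in the closing paragraph above can be sketched by deriving a stable per-job start offset from the job name, similar in spirit to Jenkins' `H` cron token. The function and modulus below are illustrative assumptions, not Beam's actual Jenkins configuration:

```python
import hashlib

def staggered_minute(job_name, period_minutes=60):
    # Hash the job name to a deterministic minute offset so post-commit
    # jobs spread across the hour instead of all starting at once and
    # exhausting quota in a burst.
    digest = hashlib.sha256(job_name.encode("utf-8")).hexdigest()
    return int(digest, 16) % period_minutes

offset = staggered_minute("beam_PostCommit_Java_GradleBuild")
print(0 <= offset < 60)  # True
```

Because the offset is derived from the name rather than chosen randomly, each job keeps the same schedule across config regenerations.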
[jira] [Updated] (BEAM-5973) [Flake] Various ValidatesRunner Post-commits flaking due to quota issues.
[ https://issues.apache.org/jira/browse/BEAM-5973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-5973: -- Labels: flake stale-assigned (was: stale-assigned) > [Flake] Various ValidatesRunner Post-commits flaking due to quota issues. > - > > Key: BEAM-5973 > URL: https://issues.apache.org/jira/browse/BEAM-5973 > Project: Beam > Issue Type: Bug > Components: test-failures >Reporter: Daniel Oliveira >Assignee: Boyuan Zhang >Priority: P3 > Labels: flake, stale-assigned > > Multiple post-commits all seem to have failed at the same time due to > extremely similar GCP errors: > beam_PostCommit_Java_GradleBuild: > [https://builds.apache.org/job/beam_PostCommit_Java_GradleBuild/1822/] > Several tests fail with one of the two following errors: > {noformat} > Nov 04, 2018 6:40:14 PM > org.apache.beam.runners.dataflow.TestDataflowRunner$ErrorMonitorMessagesHandler > process > INFO: Dataflow job 2018-11-04_10_37_12-7420261977214120411 threw exception. > Failure message was: Startup of the worker pool in zone us-central1-b failed > to bring up any of the desired 1 workers. QUOTA_EXCEEDED: Quota > 'DISKS_TOTAL_GB' exceeded. Limit: 20.0 in region us-central1.{noformat} > {noformat} > Nov 04, 2018 6:39:14 PM > org.apache.beam.runners.dataflow.TestDataflowRunner$ErrorMonitorMessagesHandler > process INFO: Dataflow job 2018-11-04_10_37_11-14433481609734431843 threw > exception. Failure message was: Startup of the worker pool in zone > us-central1-b failed to bring up any of the desired 1 workers. > QUOTA_EXCEEDED: Quota 'CPUS' exceeded. Limit: 750.0 in region us-central1. 
> {noformat} > beam_PostCommit_Java_ValidatesRunner_PortabilityApi_Dataflow_Gradle: > [https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_PortabilityApi_Dataflow_Gradle/31/] > Test failures include the errors pasted above, plus one new one: > > {noformat} > Nov 04, 2018 6:38:13 PM > org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process > SEVERE: 2018-11-04T18:38:04.612Z: Workflow failed. Causes: Project > apache-beam-testing has insufficient quota(s) to execute this workflow with 1 > instances in region us-central1. Quota summary (required/available): 1/7192 > instances, 1/202 CPUs, 250/121 disk GB, 0/4046 SSD disk GB, 1/267 instance > groups, 1/267 managed instance groups, 1/242 instance templates, 1/446 in-use > IP addresses.{noformat} > > beam_PostCommit_Java_PVR_Flink: > [https://builds.apache.org/job/beam_PostCommit_Java_PVR_Flink/214/] > The error appears differently but is caused by a lack of memory, so it seems > related to the quota issues above. > > {noformat} > Java HotSpot(TM) 64-Bit Server VM warning: > INFO: os::commit_memory(0x0003acd8, 6654787584, 0) failed; > error='Cannot allocate memory' (errno=12) > # > # There is insufficient memory for the Java Runtime Environment to continue. > # Native memory allocation > (mmap) failed to map > 6654787584 > bytes > for > committing reserved memory.{noformat} > Project > beam_PostCommit_Java_ValidatesRunner_Flink_Gradle:[https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Flink_Gradle/2101/] > I couldn't find a visible error with the failure in this job, but I'm > grouping it together with the other failures due to it flaking at the same > time as the other Flink VR Post-commit. > > > I may be grouping these failures a bit too aggressively. If anyone believes > that the failures are caused by different reasons please split this into > multiple bugs. 
> > A possibility is that these errors are caused by us running all our > post-commits at the same time, causing resources to be used up in bursts. > Maybe if we stagger our post-commits some of these quota issues could be > avoided. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-10227) python_version qualifiers are ignored for typing dependency.
[ https://issues.apache.org/jira/browse/BEAM-10227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-10227: --- Status: Open (was: Triage Needed) > python_version qualifiers are ignored for typing dependency. > > > Key: BEAM-10227 > URL: https://issues.apache.org/jira/browse/BEAM-10227 > Project: Beam > Issue Type: Bug > Components: sdk-py-core >Reporter: Valentyn Tymofieiev >Assignee: Valentyn Tymofieiev >Priority: P2 > > {noformat} > :~$ docker run -it --entrypoint=/bin/bash > gcr.io/cloud-dataflow/v1beta3/python3:2.22.0 > root@bcd3693fbfa1:/# python --version > Python 3.5.9 > root@bcd3693fbfa1:/# pip install 'typing; python_version < "3.5"' > Ignoring typing: markers 'python_version < "3.5"' don't match your environment > root@bcd3693fbfa1:/# pip install 'typing; python_version < "3.5.3"' > Collecting typing > Downloading typing-3.7.4.1-py3-none-any.whl (25 kB) > Installing collected packages: typing > Successfully installed typing-3.7.4.1 > {noformat} > The second download should not be happening according to the expressed > intent, but it does. Using python_full_version fixes that. See also: > https://www.python.org/dev/peps/pep-0508/. -- This message was sent by Atlassian Jira (v8.3.4#803005)
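The marker semantics above can be illustrated without pip: under PEP 508, `python_version` on Python 3.5.9 is just "3.5", so a `< "3.5.3"` comparison matches even though the full version does not. The sketch below approximates PEP 440 ordering with numeric tuples, which suffices for these release-only versions:

```python
def version_tuple(v):
    """Parse a release-only version string like '3.5.9' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

# On Python 3.5.9, PEP 508 defines python_version as '3.5' and
# python_full_version as '3.5.9'.
python_version = "3.5"
python_full_version = "3.5.9"

# 'python_version < "3.5.3"' matches, so typing is (wrongly) installed:
print(version_tuple(python_version) < version_tuple("3.5.3"))       # True
# 'python_full_version < "3.5.3"' does not match, which is the intent:
print(version_tuple(python_full_version) < version_tuple("3.5.3"))  # False
```

This is why switching the requirement marker from `python_version` to `python_full_version` avoids the spurious download.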
[jira] [Updated] (BEAM-10225) Add message when starting job server
[ https://issues.apache.org/jira/browse/BEAM-10225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-10225: --- Status: Open (was: Triage Needed) > Add message when starting job server > - > > Key: BEAM-10225 > URL: https://issues.apache.org/jira/browse/BEAM-10225 > Project: Beam > Issue Type: Improvement > Components: jobserver >Reporter: Anna Qin >Assignee: Anna Qin >Priority: P4 > > Currently, the job server blocks while waiting for jobs, but the terminal > outputs a misleading percentage indicator that stops at 98%. Add a message to > clarify when jobs are ready to be submitted and that the build only > terminates upon error or ctrl+c -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-10203) Replace fastjson with jackson
[ https://issues.apache.org/jira/browse/BEAM-10203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-10203: --- Labels: beam-fixit (was: ) > Replace fastjson with jackson > - > > Key: BEAM-10203 > URL: https://issues.apache.org/jira/browse/BEAM-10203 > Project: Beam > Issue Type: Bug > Components: dsl-sql >Reporter: Andrew Pilloud >Assignee: Andrew Pilloud >Priority: P2 > Labels: beam-fixit > > fastjson is only used by Beam SQL, we should switch to jackson to match the > rest of Beam and reduce our dependency update responsibilities. > This is an actual issue, at least once we've hit a security vulnerability we > weren't tracking: https://github.com/apache/beam/pull/11758 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-10218) Add FnApiRunner to cross-language validate runner test suite
[ https://issues.apache.org/jira/browse/BEAM-10218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-10218: --- Status: Open (was: Triage Needed) > Add FnApiRunner to cross-language validate runner test suite > > > Key: BEAM-10218 > URL: https://issues.apache.org/jira/browse/BEAM-10218 > Project: Beam > Issue Type: Improvement > Components: cross-language, runner-direct >Reporter: Heejong Lee >Assignee: Heejong Lee >Priority: P2 > > Add FnApiRunner to cross-language validate runner test suite -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-1890) No INFO/DEBUG log if Python ValidatesRunner test timeout
[ https://issues.apache.org/jira/browse/BEAM-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-1890: -- Labels: beam-fixit triaged (was: triaged) > No INFO/DEBUG log if Python ValidatesRunner test timeout > > > Key: BEAM-1890 > URL: https://issues.apache.org/jira/browse/BEAM-1890 > Project: Beam > Issue Type: Bug > Components: sdk-py-core, testing >Reporter: Mark Liu >Assignee: Mark Liu >Priority: P3 > Labels: beam-fixit, triaged > > Python service tests (ValidatesRunner tests and integration tests with a service > runner) enabled multiprocess execution in Postcommit through Nose. When a > TimedOutException happens, only the stack trace is printed out, which contains > very little information about the job. Printing whatever logs accumulated before the > timeout, which are captured by Nose, will make debugging on the service runner > easier. > This is an issue on the Nose framework side and there is a link to track it: > https://github.com/nose-devs/nose/issues/1044 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-5758) Load tests for SyntheticSources in Python
[ https://issues.apache.org/jira/browse/BEAM-5758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-5758: -- Labels: beam-fixit stale-P2 (was: stale-P2) > Load tests for SyntheticSources in Python > - > > Key: BEAM-5758 > URL: https://issues.apache.org/jira/browse/BEAM-5758 > Project: Beam > Issue Type: Test > Components: testing >Reporter: Kasia Kucharczyk >Priority: P2 > Labels: beam-fixit, stale-P2 > Time Spent: 3h > Remaining Estimate: 0h > > For the purpose of load testing SyntheticSources, tests should be created > whose metrics are sent to BigQuery. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-7789) :beam-test-tools project fails to build locally
[ https://issues.apache.org/jira/browse/BEAM-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-7789: -- Labels: beam-fixit stale-P2 (was: stale-P2) > :beam-test-tools project fails to build locally > --- > > Key: BEAM-7789 > URL: https://issues.apache.org/jira/browse/BEAM-7789 > Project: Beam > Issue Type: Bug > Components: testing >Reporter: Anton Kedin >Priority: P2 > Labels: beam-fixit, stale-P2 > > Running the release-verification build (global build of everything) in turn > triggers the build of the `beam-test-tools` project, which has some test > infrastructure scripts that we run on Jenkins. It seems to work fine on > Jenkins. However, running the build of the project locally fails: > https://scans.gradle.com/s/kqhkzyozbpiua/console-log#L6 > What seems to happen is that the Gradle vendoring plugin caches the dependencies > locally, but fails to cache simplelru. > One workaround (based on ./gradlew :beam-test-tools:showGopathGoroot): > {code} > export GOPATH=$PWD/.test-infra/tools/.gogradle/project_gopath > go get github.com/hashicorp/golang-lru/simplelru > ./gradlew :beam-test-tools:build > {code} > It is able to find the `lrumap` and `simplelru` during the dependency > resolution step, and I can see it mentioned in a couple of artifacts produced > by the `gogradle` plugin. But when it does `:installDepedencies` to actually > copy them to the `vendor` directory, this specific package is missing. This > reproduces for me on a couple of different machines I tried, on both the release > and master branches. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-6335) GroupByKey uses data insertion pipeline in streaming tests
[ https://issues.apache.org/jira/browse/BEAM-6335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-6335: -- Labels: beam-fixit stale-assigned (was: stale-assigned) > GroupByKey uses data insertion pipeline in streaming tests > -- > > Key: BEAM-6335 > URL: https://issues.apache.org/jira/browse/BEAM-6335 > Project: Beam > Issue Type: Sub-task > Components: testing >Reporter: Kasia Kucharczyk >Assignee: Kasia Kucharczyk >Priority: P2 > Labels: beam-fixit, stale-assigned > Time Spent: 50m > Remaining Estimate: 0h > > Use the prepared Java data insertion pipeline to update the Python GroupByKey > load test for streaming. > This task contains the following steps: > # Create a GroupByKey streaming test that accepts bytes. > # To stop the test once the messages have arrived, a matcher needs to be added. The > matcher should work on the number of messages, because in load testing it > would be difficult to compare a large volume of bytes (and casting bytes to strings > to compare the load would be equally difficult). > # All data is generated by SyntheticDataPublisher.java, which sends > bytes produced by a synthetic source to PubSub. PubSub is used as the streaming > source for the Python test. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-9559) Remove smoke load test for Java and Python SDK
[ https://issues.apache.org/jira/browse/BEAM-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-9559: -- Summary: Remove smoke load test for Java and Python SDK (was: Remove smoke load test for Java and Python dsk) > Remove smoke load test for Java and Python SDK > -- > > Key: BEAM-9559 > URL: https://issues.apache.org/jira/browse/BEAM-9559 > Project: Beam > Issue Type: Wish > Components: testing >Reporter: Lukasz Gajowy >Assignee: Michał Walenia >Priority: P3 > Labels: beam-fixit, stale-assigned > > As discussed in the PR: > [https://github.com/apache/beam/pull/11135#discussion_r392852028] > No one seems to use the smoke tests, and the regular load tests will also fail > whenever something is wrong. There are plenty of load tests now, and they are > smaller than they were at the time the smoke tests were created. If something > is wrong, they will give quite quick feedback. All of that makes the smoke > tests redundant, in my opinion. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (BEAM-8130) Storing, displaying and detecting anomalies in test results
[ https://issues.apache.org/jira/browse/BEAM-8130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17128660#comment-17128660 ] Kenneth Knowles commented on BEAM-8130: --- [~tysonjh] pinging because we discussed the issue of regression detection > Storing, displaying and detecting anomalies in test results > --- > > Key: BEAM-8130 > URL: https://issues.apache.org/jira/browse/BEAM-8130 > Project: Beam > Issue Type: Improvement > Components: testing >Reporter: Kamil Wasilewski >Assignee: Kamil Wasilewski >Priority: P3 > > An implementation of the following proposal: > https://s.apache.org/test-metrics-storage-corrected -- This message was sent by Atlassian Jira (v8.3.4#803005)
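As a rough illustration of the regression-detection part of the BEAM-8130 proposal, a detector could flag a run whose metric lies far from the mean of recent runs. This is a hedged sketch under assumed inputs: the `is_anomaly` helper, the sample runtimes, and the three-sigma threshold are illustrative and are not what the linked proposal specifies.

```python
import statistics


def is_anomaly(history, latest, threshold=3.0):
    """Flag `latest` as anomalous if it lies more than `threshold`
    standard deviations from the mean of `history`.

    Deliberately simple sketch; a production detector would handle
    trends, seasonality, and noisy metrics more carefully.
    """
    if len(history) < 2:
        return False  # not enough data to estimate spread
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > threshold


if __name__ == "__main__":
    runtimes = [102.0, 98.5, 101.2, 99.8, 100.4]  # seconds per test run
    print(is_anomaly(runtimes, 100.9))  # False: within the normal range
    print(is_anomaly(runtimes, 160.0))  # True: likely a regression
```

The same check works for any scalar test metric (runtime, throughput, memory) stored per run, which is the kind of data the proposal describes collecting.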
[jira] [Updated] (BEAM-2637) Post commit test for mobile gaming examples
[ https://issues.apache.org/jira/browse/BEAM-2637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-2637: -- Labels: beam-fixit (was: ) > Post commit test for mobile gaming examples > --- > > Key: BEAM-2637 > URL: https://issues.apache.org/jira/browse/BEAM-2637 > Project: Beam > Issue Type: Bug > Components: sdk-py-core >Reporter: Ahmet Altay >Priority: P2 > Labels: beam-fixit > > We need a post-commit test for the mobile gaming examples to catch failures > beyond DirectRunner. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-9559) Remove smoke load test for Java and Python dsk
[ https://issues.apache.org/jira/browse/BEAM-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-9559: -- Labels: beam-fixit stale-assigned (was: stale-assigned) > Remove smoke load test for Java and Python dsk > -- > > Key: BEAM-9559 > URL: https://issues.apache.org/jira/browse/BEAM-9559 > Project: Beam > Issue Type: Wish > Components: testing >Reporter: Lukasz Gajowy >Assignee: Michał Walenia >Priority: P3 > Labels: beam-fixit, stale-assigned > > As discussed in the PR: > [https://github.com/apache/beam/pull/11135#discussion_r392852028] > No one seems to use the smoke tests, and the regular load tests will also fail > whenever something is wrong. There are plenty of load tests now, and they are > smaller than they were at the time the smoke tests were created. If something > is wrong, they will give quite quick feedback. All of that makes the smoke > tests redundant, in my opinion. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-7316) FileIOTest.testFileIoDynamicNaming breaks on Spark runner
[ https://issues.apache.org/jira/browse/BEAM-7316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-7316: -- Labels: beam-fixit (was: ) > FileIOTest.testFileIoDynamicNaming breaks on Spark runner > - > > Key: BEAM-7316 > URL: https://issues.apache.org/jira/browse/BEAM-7316 > Project: Beam > Issue Type: Sub-task > Components: runner-spark >Reporter: Ismaël Mejía >Priority: P3 > Labels: beam-fixit > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-7914) Add python 3 test in crossLanguageValidateRunner task
[ https://issues.apache.org/jira/browse/BEAM-7914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-7914: -- Labels: beam-fixit portability stale-assigned (was: portability stale-assigned) > Add python 3 test in crossLanguageValidateRunner task > - > > Key: BEAM-7914 > URL: https://issues.apache.org/jira/browse/BEAM-7914 > Project: Beam > Issue Type: Improvement > Components: testing >Reporter: Heejong Lee >Assignee: Chamikara Madhusanka Jayalath >Priority: P2 > Labels: beam-fixit, portability, stale-assigned > > add python 3 test in crossLanguageValidateRunner task -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-7314) ViewTest.testEmptySingletonSideInput breaks on Spark runner
[ https://issues.apache.org/jira/browse/BEAM-7314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-7314: -- Labels: beam-fixit (was: ) > ViewTest.testEmptySingletonSideInput breaks on Spark runner > --- > > Key: BEAM-7314 > URL: https://issues.apache.org/jira/browse/BEAM-7314 > Project: Beam > Issue Type: Sub-task > Components: runner-spark >Reporter: Ismaël Mejía >Priority: P3 > Labels: beam-fixit > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-7315) GatherAllPanesTest.multiplePanesMultipleReifiedPane breaks on Spark runner
[ https://issues.apache.org/jira/browse/BEAM-7315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-7315: -- Labels: beam-fixit (was: ) > GatherAllPanesTest.multiplePanesMultipleReifiedPane breaks on Spark runner > -- > > Key: BEAM-7315 > URL: https://issues.apache.org/jira/browse/BEAM-7315 > Project: Beam > Issue Type: Sub-task > Components: runner-spark >Reporter: Ismaël Mejía >Priority: P3 > Labels: beam-fixit > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-7237) Run NeedsRunner test category with Spark Runner
[ https://issues.apache.org/jira/browse/BEAM-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-7237: -- Labels: beam-fixit stale-P2 (was: stale-P2) > Run NeedsRunner test category with Spark Runner > --- > > Key: BEAM-7237 > URL: https://issues.apache.org/jira/browse/BEAM-7237 > Project: Beam > Issue Type: Test > Components: runner-spark >Reporter: Ismaël Mejía >Priority: P2 > Labels: beam-fixit, stale-P2 > > The {{:validatesRunner}} task uses the {{ValidatesRunner}} test category, > which is a subtype of {{NeedsRunner}}. It would be good to expand the scope > of the tests to {{NeedsRunner}}, because there are many additional tests which > are currently excluded. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-7313) PipelineRunnerTest testRunPTransform breaks on Spark runner
[ https://issues.apache.org/jira/browse/BEAM-7313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-7313: -- Labels: beam-fixit (was: ) > PipelineRunnerTest testRunPTransform breaks on Spark runner > --- > > Key: BEAM-7313 > URL: https://issues.apache.org/jira/browse/BEAM-7313 > Project: Beam > Issue Type: Sub-task > Components: runner-spark >Reporter: Ismaël Mejía >Priority: P3 > Labels: beam-fixit > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-2339) Jenkins cross JDK version test on Windows
[ https://issues.apache.org/jira/browse/BEAM-2339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-2339: -- Labels: beam-fixit (was: ) > Jenkins cross JDK version test on Windows > - > > Key: BEAM-2339 > URL: https://issues.apache.org/jira/browse/BEAM-2339 > Project: Beam > Issue Type: Task > Components: build-system, testing >Reporter: Mark Liu >Priority: P2 > Labels: beam-fixit > > We can set the OS variant to choose Windows for the Jenkins tests, and combine > this with the JDK version tests, so that we have cross-OS / cross-JDK-version > testing. > This discussion came from > https://github.com/apache/beam/pull/3184#pullrequestreview-39303400 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-7317) TFRecordIOTest breaks on Spark runner
[ https://issues.apache.org/jira/browse/BEAM-7317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-7317: -- Labels: beam-fixit (was: ) > TFRecordIOTest breaks on Spark runner > - > > Key: BEAM-7317 > URL: https://issues.apache.org/jira/browse/BEAM-7317 > Project: Beam > Issue Type: Sub-task > Components: runner-spark >Reporter: Ismaël Mejía >Priority: P3 > Labels: beam-fixit > > org.apache.beam.sdk.io.TFRecordIOTest > testReadInvalidRecord FAILED > java.lang.AssertionError > org.apache.beam.sdk.io.TFRecordIOTest > testReadInvalidDataMask FAILED > java.lang.AssertionError > org.apache.beam.sdk.io.TFRecordIOTest > testReadInvalidLengthMask FAILED > java.lang.AssertionError -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-8665) Add infrastructure + test suites to run Beam tests on Windows/Mac platforms.
[ https://issues.apache.org/jira/browse/BEAM-8665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-8665: -- Labels: beam-fixit (was: ) > Add infrastructure + test suites to run Beam tests on Windows/Mac platforms. > - > > Key: BEAM-8665 > URL: https://issues.apache.org/jira/browse/BEAM-8665 > Project: Beam > Issue Type: Test > Components: testing >Reporter: Valentyn Tymofieiev >Priority: P2 > Labels: beam-fixit > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-5302) Improve performance test documentation
[ https://issues.apache.org/jira/browse/BEAM-5302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-5302: -- Labels: beam-fixit stale-P2 (was: stale-P2) > Improve performance test documentation > -- > > Key: BEAM-5302 > URL: https://issues.apache.org/jira/browse/BEAM-5302 > Project: Beam > Issue Type: Improvement > Components: testing >Reporter: Mark Liu >Priority: P2 > Labels: beam-fixit, stale-P2 > > Current documentation for performance testing and benchmarks is missing the > following areas: > How to write / use a benchmark on Perfkit > How to run a benchmark locally or on Jenkins > Benchmark summary / definition > How to use performance metrics data and where to find it > These documents can help new contributors get started, and help people > interested in performance results understand the framework and look up > results. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-8578) PubSubBigQueryIT.test_file_loads fails on Dataflow Runner
[ https://issues.apache.org/jira/browse/BEAM-8578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-8578: -- Labels: beam-fixit beginner test (was: beginner test) > PubSubBigQueryIT.test_file_loads fails on Dataflow Runner > - > > Key: BEAM-8578 > URL: https://issues.apache.org/jira/browse/BEAM-8578 > Project: Beam > Issue Type: Test > Components: io-py-gcp >Reporter: Tanay Tummalapalli >Priority: P3 > Labels: beam-fixit, beginner, test > > The IT test - PubSubBigQueryIT fails on Dataflow Runner. > https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/gcp/bigquery_test.py#L833-L838 > More context: https://github.com/apache/beam/pull/9427 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-8210) Python Integration tests: log test name
[ https://issues.apache.org/jira/browse/BEAM-8210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-8210: -- Labels: beam-fixit stale-P2 (was: stale-P2) > Python Integration tests: log test name > --- > > Key: BEAM-8210 > URL: https://issues.apache.org/jira/browse/BEAM-8210 > Project: Beam > Issue Type: Improvement > Components: runner-dataflow, sdk-py-core, testing >Reporter: Udi Meiri >Priority: P2 > Labels: beam-fixit, stale-P2 > > When creating a job (on any runner), log the originating test so it's easier > to debug. > Postcommits frequently run tens of pipelines at a time, and it's getting > harder to tell them apart. > By logging I mean putting the test name somewhere in the job proto (such as in > a parameter, the job name, etc.). Using the worker logger on startup won't work > if the worker fails to start. > Ideally you should be able to see the name in the runner UI (such as the > Dataflow cloud console). -- This message was sent by Atlassian Jira (v8.3.4#803005)
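One way the BEAM-8210 idea could be implemented is to derive the `--job_name` pipeline option from the originating test's id. This is a hedged sketch: the `job_name_from_test` helper and the `it-` prefix are hypothetical, and it assumes Dataflow-style job-name restrictions (lowercase letters, digits, and hyphens).

```python
import re


def job_name_from_test(test_id, prefix="it"):
    """Build a runner-safe job name that embeds the originating test name.

    Hypothetical helper: lowercases the test id and collapses any
    characters outside [a-z0-9-] to hyphens, so the result survives
    typical job-name restrictions.
    """
    sanitized = re.sub(r"[^a-z0-9-]+", "-", test_id.lower()).strip("-")
    return "{}-{}".format(prefix, sanitized)


if __name__ == "__main__":
    # A test could pass the result via its pipeline options,
    # e.g. --job_name=<derived name>, making it visible in the runner UI.
    print(job_name_from_test("bigquery_test.TestFileLoads"))
    # prints: it-bigquery-test-testfileloads
```

Because the name ends up in the job proto rather than in worker logs, it remains visible even when the worker fails to start, which is the failure mode the description calls out.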
[jira] [Updated] (BEAM-9118) apache_beam.runners.portability.portable_runner_test.PortableRunnerTestWithSubprocesses is flaky
[ https://issues.apache.org/jira/browse/BEAM-9118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-9118: -- Priority: P1 (was: P2) > apache_beam.runners.portability.portable_runner_test.PortableRunnerTestWithSubprocesses > is flaky > > > Key: BEAM-9118 > URL: https://issues.apache.org/jira/browse/BEAM-9118 > Project: Beam > Issue Type: Improvement > Components: sdk-py-core >Reporter: Valentyn Tymofieiev >Assignee: Robert Bradshaw >Priority: P1 > Labels: beam-fixit, flake, stale-assigned > > Sample errors: > https://builds.apache.org/job/beam_PreCommit_Python_Phrase/1373 > {noformat} > 4:30:12 self = > testMethod=test_pardo_unfusable_side_inputs> > 14:30:12 > 14:30:12 def test_pardo_unfusable_side_inputs(self): > 14:30:12def cross_product(elem, sides): > 14:30:12 for side in sides: > 14:30:12yield elem, side > 14:30:12with self.create_pipeline() as p: > 14:30:12 pcoll = p | beam.Create(['a', 'b']) > 14:30:12 assert_that( > 14:30:12 pcoll | beam.FlatMap(cross_product, > beam.pvalue.AsList(pcoll)), > 14:30:12 equal_to([('a', 'a'), ('a', 'b'), ('b', 'a'), ('b', > 'b')])) > 14:30:12 > 14:30:12with self.create_pipeline() as p: > 14:30:12 pcoll = p | beam.Create(['a', 'b']) > 14:30:12 derived = ((pcoll,) | beam.Flatten() > 14:30:12 | beam.Map(lambda x: (x, x)) > 14:30:12 | beam.GroupByKey() > 14:30:12 | 'Unkey' >> beam.Map(lambda kv: kv[0])) > 14:30:12 assert_that( > 14:30:12 pcoll | beam.FlatMap(cross_product, > beam.pvalue.AsList(derived)), > 14:30:12 > equal_to([('a', 'a'), ('a', 'b'), ('b', 'a'), ('b', > 'b')])) > 14:30:12 > 14:30:12 apache_beam/runners/portability/fn_api_runner_test.py:258: > 14:30:12 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ _ _ _ _ _ > 14:30:12 apache_beam/pipeline.py:481: in __exit__ > 14:30:12 self.run().wait_until_finish() > 14:30:12 apache_beam/runners/portability/portable_runner.py:445: in > wait_until_finish > 14:30:12 for state_response in self._state_stream: > 
14:30:12 > target/.tox-py36-gcp-pytest/py36-gcp-pytest/lib/python3.6/site-packages/grpc/_channel.py:416: > in __next__ > 14:30:12 return self._next() > 14:30:12 > target/.tox-py36-gcp-pytest/py36-gcp-pytest/lib/python3.6/site-packages/grpc/_channel.py:694: > in _next > 14:30:12 _common.wait(self._state.condition.wait, _response_ready) > 14:30:12 > target/.tox-py36-gcp-pytest/py36-gcp-pytest/lib/python3.6/site-packages/grpc/_common.py:140: > in wait > 14:30:12 _wait_once(wait_fn, MAXIMUM_WAIT_TIMEOUT, spin_cb) > 14:30:12 > target/.tox-py36-gcp-pytest/py36-gcp-pytest/lib/python3.6/site-packages/grpc/_common.py:105: > in _wait_once > 14:30:12 wait_fn(timeout=timeout) > 14:30:12 /usr/lib/python3.6/threading.py:299: in wait > 14:30:12 gotit = waiter.acquire(True, timeout) > 14:30:12 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ _ _ _ _ _ > 14:30:12 > 14:30:12 signum = 14, frame = > 14:30:12 > 14:30:12 def handler(signum, frame): > 14:30:12msg = 'Timed out after %s seconds.' % self.TIMEOUT_SECS > 14:30:12print('=' * 20, msg, '=' * 20) > 14:30:12traceback.print_stack(frame) > 14:30:12threads_by_id = {th.ident: th for th in threading.enumerate()} > 14:30:12for thread_id, stack in sys._current_frames().items(): > 14:30:12 th = threads_by_id.get(thread_id) > 14:30:12 print() > 14:30:12 print('# Thread:', th or thread_id) > 14:30:12 traceback.print_stack(stack) > 14:30:12 > raise BaseException(msg) > 14:30:12 E BaseException: Timed out after 60 seconds. > 14:30:12 > 14:30:12 apache_beam/runners/portability/portable_runner_test.py:77: > BaseException > {noformat} > https://builds.apache.org/job/beam_PreCommit_Python_Phrase/1366/ > {noformat} > 09:06:01 self = > testMethod=test_assert_that> > 09:06:01 > 09:06:01 def test_assert_that(self): > 09:06:01# TODO: figure out a way for fn_api_runner to parse and raise > the > 09:06:01# underlying exception. 
> 09:06:01with self.assertRaisesRegex(Exception, 'Failed assert'): > 09:06:01 with self.create_pipeline() as p: > 09:06:01 > assert_that(p | beam.Create(['a', 'b']), equal_to(['a'])) > 09:06:01 E AssertionError: "Failed assert" does not match "Pipeline > timed out waiting for job service subprocess." > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-5907) Dataflow worker unit test suite has thread-unsafe use of mockito
[ https://issues.apache.org/jira/browse/BEAM-5907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-5907: -- Labels: beam-fixit (was: ) > Dataflow worker unit test suite has thread-unsafe use of mockito > > > Key: BEAM-5907 > URL: https://issues.apache.org/jira/browse/BEAM-5907 > Project: Beam > Issue Type: Bug > Components: runner-dataflow >Reporter: Kenneth Knowles >Priority: P3 > Labels: beam-fixit > Time Spent: 1h 10m > Remaining Estimate: 0h > > Some tests of portability bits failed in a test suite for the legacy worker. > Could be a naming problem or a configuration problem. Notably, they failed > due to changes in unshaded test jars, which no one should be using. > https://builds.apache.org/job/beam_PostCommit_Java_GradleBuild/1778/#showFailuresLink -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-9118) apache_beam.runners.portability.portable_runner_test.PortableRunnerTestWithSubprocesses is flaky
[ https://issues.apache.org/jira/browse/BEAM-9118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-9118: -- Labels: beam-fixit flake stale-assigned (was: stale-assigned) > apache_beam.runners.portability.portable_runner_test.PortableRunnerTestWithSubprocesses > is flaky > > > Key: BEAM-9118 > URL: https://issues.apache.org/jira/browse/BEAM-9118 > Project: Beam > Issue Type: Improvement > Components: sdk-py-core >Reporter: Valentyn Tymofieiev >Assignee: Robert Bradshaw >Priority: P2 > Labels: beam-fixit, flake, stale-assigned > > Sample errors: > https://builds.apache.org/job/beam_PreCommit_Python_Phrase/1373 > {noformat} > 4:30:12 self = > testMethod=test_pardo_unfusable_side_inputs> > 14:30:12 > 14:30:12 def test_pardo_unfusable_side_inputs(self): > 14:30:12def cross_product(elem, sides): > 14:30:12 for side in sides: > 14:30:12yield elem, side > 14:30:12with self.create_pipeline() as p: > 14:30:12 pcoll = p | beam.Create(['a', 'b']) > 14:30:12 assert_that( > 14:30:12 pcoll | beam.FlatMap(cross_product, > beam.pvalue.AsList(pcoll)), > 14:30:12 equal_to([('a', 'a'), ('a', 'b'), ('b', 'a'), ('b', > 'b')])) > 14:30:12 > 14:30:12with self.create_pipeline() as p: > 14:30:12 pcoll = p | beam.Create(['a', 'b']) > 14:30:12 derived = ((pcoll,) | beam.Flatten() > 14:30:12 | beam.Map(lambda x: (x, x)) > 14:30:12 | beam.GroupByKey() > 14:30:12 | 'Unkey' >> beam.Map(lambda kv: kv[0])) > 14:30:12 assert_that( > 14:30:12 pcoll | beam.FlatMap(cross_product, > beam.pvalue.AsList(derived)), > 14:30:12 > equal_to([('a', 'a'), ('a', 'b'), ('b', 'a'), ('b', > 'b')])) > 14:30:12 > 14:30:12 apache_beam/runners/portability/fn_api_runner_test.py:258: > 14:30:12 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ _ _ _ _ _ > 14:30:12 apache_beam/pipeline.py:481: in __exit__ > 14:30:12 self.run().wait_until_finish() > 14:30:12 apache_beam/runners/portability/portable_runner.py:445: in > wait_until_finish > 14:30:12 for 
state_response in self._state_stream: > 14:30:12 > target/.tox-py36-gcp-pytest/py36-gcp-pytest/lib/python3.6/site-packages/grpc/_channel.py:416: > in __next__ > 14:30:12 return self._next() > 14:30:12 > target/.tox-py36-gcp-pytest/py36-gcp-pytest/lib/python3.6/site-packages/grpc/_channel.py:694: > in _next > 14:30:12 _common.wait(self._state.condition.wait, _response_ready) > 14:30:12 > target/.tox-py36-gcp-pytest/py36-gcp-pytest/lib/python3.6/site-packages/grpc/_common.py:140: > in wait > 14:30:12 _wait_once(wait_fn, MAXIMUM_WAIT_TIMEOUT, spin_cb) > 14:30:12 > target/.tox-py36-gcp-pytest/py36-gcp-pytest/lib/python3.6/site-packages/grpc/_common.py:105: > in _wait_once > 14:30:12 wait_fn(timeout=timeout) > 14:30:12 /usr/lib/python3.6/threading.py:299: in wait > 14:30:12 gotit = waiter.acquire(True, timeout) > 14:30:12 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ _ _ _ _ _ > 14:30:12 > 14:30:12 signum = 14, frame = > 14:30:12 > 14:30:12 def handler(signum, frame): > 14:30:12msg = 'Timed out after %s seconds.' % self.TIMEOUT_SECS > 14:30:12print('=' * 20, msg, '=' * 20) > 14:30:12traceback.print_stack(frame) > 14:30:12threads_by_id = {th.ident: th for th in threading.enumerate()} > 14:30:12for thread_id, stack in sys._current_frames().items(): > 14:30:12 th = threads_by_id.get(thread_id) > 14:30:12 print() > 14:30:12 print('# Thread:', th or thread_id) > 14:30:12 traceback.print_stack(stack) > 14:30:12 > raise BaseException(msg) > 14:30:12 E BaseException: Timed out after 60 seconds. > 14:30:12 > 14:30:12 apache_beam/runners/portability/portable_runner_test.py:77: > BaseException > {noformat} > https://builds.apache.org/job/beam_PreCommit_Python_Phrase/1366/ > {noformat} > 09:06:01 self = > testMethod=test_assert_that> > 09:06:01 > 09:06:01 def test_assert_that(self): > 09:06:01# TODO: figure out a way for fn_api_runner to parse and raise > the > 09:06:01# underlying exception. 
> 09:06:01with self.assertRaisesRegex(Exception, 'Failed assert'): > 09:06:01 with self.create_pipeline() as p: > 09:06:01 > assert_that(p | beam.Create(['a', 'b']), equal_to(['a'])) > 09:06:01 E AssertionError: "Failed assert" does not match "Pipeline > timed out waiting for job service subprocess." > {noformat} -- This message was sent by Atlassian Jira
[jira] [Updated] (BEAM-9118) apache_beam.runners.portability.portable_runner_test.PortableRunnerTestWithSubprocesses is flaky
[ https://issues.apache.org/jira/browse/BEAM-9118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-9118: -- Issue Type: Bug (was: Improvement) > apache_beam.runners.portability.portable_runner_test.PortableRunnerTestWithSubprocesses > is flaky > > > Key: BEAM-9118 > URL: https://issues.apache.org/jira/browse/BEAM-9118 > Project: Beam > Issue Type: Bug > Components: sdk-py-core >Reporter: Valentyn Tymofieiev >Assignee: Robert Bradshaw >Priority: P1 > Labels: beam-fixit, flake, stale-assigned > > Sample errors: > https://builds.apache.org/job/beam_PreCommit_Python_Phrase/1373 > {noformat} > 4:30:12 self = > testMethod=test_pardo_unfusable_side_inputs> > 14:30:12 > 14:30:12 def test_pardo_unfusable_side_inputs(self): > 14:30:12def cross_product(elem, sides): > 14:30:12 for side in sides: > 14:30:12yield elem, side > 14:30:12with self.create_pipeline() as p: > 14:30:12 pcoll = p | beam.Create(['a', 'b']) > 14:30:12 assert_that( > 14:30:12 pcoll | beam.FlatMap(cross_product, > beam.pvalue.AsList(pcoll)), > 14:30:12 equal_to([('a', 'a'), ('a', 'b'), ('b', 'a'), ('b', > 'b')])) > 14:30:12 > 14:30:12with self.create_pipeline() as p: > 14:30:12 pcoll = p | beam.Create(['a', 'b']) > 14:30:12 derived = ((pcoll,) | beam.Flatten() > 14:30:12 | beam.Map(lambda x: (x, x)) > 14:30:12 | beam.GroupByKey() > 14:30:12 | 'Unkey' >> beam.Map(lambda kv: kv[0])) > 14:30:12 assert_that( > 14:30:12 pcoll | beam.FlatMap(cross_product, > beam.pvalue.AsList(derived)), > 14:30:12 > equal_to([('a', 'a'), ('a', 'b'), ('b', 'a'), ('b', > 'b')])) > 14:30:12 > 14:30:12 apache_beam/runners/portability/fn_api_runner_test.py:258: > 14:30:12 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ _ _ _ _ _ > 14:30:12 apache_beam/pipeline.py:481: in __exit__ > 14:30:12 self.run().wait_until_finish() > 14:30:12 apache_beam/runners/portability/portable_runner.py:445: in > wait_until_finish > 14:30:12 for state_response in self._state_stream: > 
14:30:12 > target/.tox-py36-gcp-pytest/py36-gcp-pytest/lib/python3.6/site-packages/grpc/_channel.py:416: > in __next__ > 14:30:12 return self._next() > 14:30:12 > target/.tox-py36-gcp-pytest/py36-gcp-pytest/lib/python3.6/site-packages/grpc/_channel.py:694: > in _next > 14:30:12 _common.wait(self._state.condition.wait, _response_ready) > 14:30:12 > target/.tox-py36-gcp-pytest/py36-gcp-pytest/lib/python3.6/site-packages/grpc/_common.py:140: > in wait > 14:30:12 _wait_once(wait_fn, MAXIMUM_WAIT_TIMEOUT, spin_cb) > 14:30:12 > target/.tox-py36-gcp-pytest/py36-gcp-pytest/lib/python3.6/site-packages/grpc/_common.py:105: > in _wait_once > 14:30:12 wait_fn(timeout=timeout) > 14:30:12 /usr/lib/python3.6/threading.py:299: in wait > 14:30:12 gotit = waiter.acquire(True, timeout) > 14:30:12 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ _ _ _ _ _ > 14:30:12 > 14:30:12 signum = 14, frame = > 14:30:12 > 14:30:12 def handler(signum, frame): > 14:30:12msg = 'Timed out after %s seconds.' % self.TIMEOUT_SECS > 14:30:12print('=' * 20, msg, '=' * 20) > 14:30:12traceback.print_stack(frame) > 14:30:12threads_by_id = {th.ident: th for th in threading.enumerate()} > 14:30:12for thread_id, stack in sys._current_frames().items(): > 14:30:12 th = threads_by_id.get(thread_id) > 14:30:12 print() > 14:30:12 print('# Thread:', th or thread_id) > 14:30:12 traceback.print_stack(stack) > 14:30:12 > raise BaseException(msg) > 14:30:12 E BaseException: Timed out after 60 seconds. > 14:30:12 > 14:30:12 apache_beam/runners/portability/portable_runner_test.py:77: > BaseException > {noformat} > https://builds.apache.org/job/beam_PreCommit_Python_Phrase/1366/ > {noformat} > 09:06:01 self = > testMethod=test_assert_that> > 09:06:01 > 09:06:01 def test_assert_that(self): > 09:06:01# TODO: figure out a way for fn_api_runner to parse and raise > the > 09:06:01# underlying exception. 
> 09:06:01with self.assertRaisesRegex(Exception, 'Failed assert'): > 09:06:01 with self.create_pipeline() as p: > 09:06:01 > assert_that(p | beam.Create(['a', 'b']), equal_to(['a'])) > 09:06:01 E AssertionError: "Failed assert" does not match "Pipeline > timed out waiting for job service subprocess." > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-9527) apache_beam.runners.portability.fn_api_runner_test.FnApiRunnerSplitTest.test_split_crazy_sdf is flaky
[ https://issues.apache.org/jira/browse/BEAM-9527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-9527: -- Priority: P1 (was: P2) > apache_beam.runners.portability.fn_api_runner_test.FnApiRunnerSplitTest.test_split_crazy_sdf > is flaky > - > > Key: BEAM-9527 > URL: https://issues.apache.org/jira/browse/BEAM-9527 > Project: Beam > Issue Type: Bug > Components: test-failures >Reporter: Valentyn Tymofieiev >Assignee: Boyuan Zhang >Priority: P1 > Labels: beam-fixit, flake, stale-assigned > > {noformat} > self = 0x7fe494edb450> > split_manager = > inputs = {'ref_PCollection_PCollection_3_split/Read': > ['\x7f\xdf;dZ\x1c\xac\t\x00\x00\x00\x01\x0f\x08V\xff\x80\x02capache_beam\nOffsetRange\nq\x01)\x81q\x02}q\x03(U\x04stopq\x04K\x05U\x05startq\x05K\x00ub.\x01\x00@\x14\x00\x00\x00\x00\x00\x00']} > process_bundle_id = 'bundle_2575' > def _generate_splits_for_testing(self, > split_manager, > inputs, # type: Mapping[str, > PartitionableBuffer] > process_bundle_id): > # type: (...) -> List[beam_fn_api_pb2.ProcessBundleSplitResponse] > split_results = [] # type: > List[beam_fn_api_pb2.ProcessBundleSplitResponse] > read_transform_id, buffer_data = only_element(inputs.items()) > byte_stream = b''.join(buffer_data) > num_elements = len( > list( > self._get_input_coder_impl(read_transform_id).decode_all( > byte_stream))) > > # Start the split manager in case it wants to set any breakpoints. > split_manager_generator = split_manager(num_elements) > try: > split_fraction = next(split_manager_generator) > done = False > except StopIteration: > done = True > > # Send all the data. > self._send_input_to_worker( > process_bundle_id, read_transform_id, [byte_stream]) > > assert self._worker_handler is not None > > # Execute the requested splits. 
> while not done: > if split_fraction is None: > split_result = None > else: > split_request = beam_fn_api_pb2.InstructionRequest( > process_bundle_split=beam_fn_api_pb2.ProcessBundleSplitRequest( > instruction_id=process_bundle_id, > desired_splits={ > read_transform_id: beam_fn_api_pb2. > ProcessBundleSplitRequest.DesiredSplit( > fraction_of_remainder=split_fraction, > estimated_input_elements=num_elements) > })) > split_response = self._worker_handler.control_conn.push( > split_request).get() # type: > beam_fn_api_pb2.InstructionResponse > for t in (0.05, 0.1, 0.2): > waiting = ('Instruction not running', 'not yet scheduled') > if any(msg in split_response.error for msg in waiting): > time.sleep(t) > split_response = self._worker_handler.control_conn.push( > split_request).get() > if 'Unknown process bundle' in split_response.error: > # It may have finished too fast. > split_result = None > elif split_response.error: > > raise RuntimeError(split_response.error) > E RuntimeError: Traceback (most recent call last): > E File > "/home/jenkins/jenkins-slave/workspace/beam_PreCommit_Python_Phrase/src/sdks/python/test-suites/tox/py2/build/srcs/sdks/python/apache_beam/runners/worker/sdk_worker.py", > line 190, in _execute > E response = task() > E File > "/home/jenkins/jenkins-slave/workspace/beam_PreCommit_Python_Phrase/src/sdks/python/test-suites/tox/py2/build/srcs/sdks/python/apache_beam/runners/worker/sdk_worker.py", > line 229, in > E lambda: self.create_worker().do_instruction(request), request) > E File > "/home/jenkins/jenkins-slave/workspace/beam_PreCommit_Python_Phrase/src/sdks/python/test-suites/tox/py2/build/srcs/sdks/python/apache_beam/runners/worker/sdk_worker.py", > line 416, in do_instruction > E getattr(request, request_type), request.instruction_id) > E File > "/home/jenkins/jenkins-slave/workspace/beam_PreCommit_Python_Phrase/src/sdks/python/test-suites/tox/py2/build/srcs/sdks/python/apache_beam/runners/worker/sdk_worker.py", > line 479, in 
process_bundle_split > E process_bundle_split=processor.try_split(request)) > E File >
[jira] [Updated] (BEAM-9527) apache_beam.runners.portability.fn_api_runner_test.FnApiRunnerSplitTest.test_split_crazy_sdf is flaky
[ https://issues.apache.org/jira/browse/BEAM-9527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-9527: -- Labels: beam-fixit flake stale-assigned (was: stale-assigned) > apache_beam.runners.portability.fn_api_runner_test.FnApiRunnerSplitTest.test_split_crazy_sdf > is flaky > - > > Key: BEAM-9527 > URL: https://issues.apache.org/jira/browse/BEAM-9527 > Project: Beam > Issue Type: Bug > Components: test-failures >Reporter: Valentyn Tymofieiev >Assignee: Boyuan Zhang >Priority: P2 > Labels: beam-fixit, flake, stale-assigned > > {noformat} > self = 0x7fe494edb450> > split_manager = > inputs = {'ref_PCollection_PCollection_3_split/Read': > ['\x7f\xdf;dZ\x1c\xac\t\x00\x00\x00\x01\x0f\x08V\xff\x80\x02capache_beam\nOffsetRange\nq\x01)\x81q\x02}q\x03(U\x04stopq\x04K\x05U\x05startq\x05K\x00ub.\x01\x00@\x14\x00\x00\x00\x00\x00\x00']} > process_bundle_id = 'bundle_2575' > def _generate_splits_for_testing(self, > split_manager, > inputs, # type: Mapping[str, > PartitionableBuffer] > process_bundle_id): > # type: (...) -> List[beam_fn_api_pb2.ProcessBundleSplitResponse] > split_results = [] # type: > List[beam_fn_api_pb2.ProcessBundleSplitResponse] > read_transform_id, buffer_data = only_element(inputs.items()) > byte_stream = b''.join(buffer_data) > num_elements = len( > list( > self._get_input_coder_impl(read_transform_id).decode_all( > byte_stream))) > > # Start the split manager in case it wants to set any breakpoints. > split_manager_generator = split_manager(num_elements) > try: > split_fraction = next(split_manager_generator) > done = False > except StopIteration: > done = True > > # Send all the data. > self._send_input_to_worker( > process_bundle_id, read_transform_id, [byte_stream]) > > assert self._worker_handler is not None > > # Execute the requested splits. 
> while not done: > if split_fraction is None: > split_result = None > else: > split_request = beam_fn_api_pb2.InstructionRequest( > process_bundle_split=beam_fn_api_pb2.ProcessBundleSplitRequest( > instruction_id=process_bundle_id, > desired_splits={ > read_transform_id: beam_fn_api_pb2. > ProcessBundleSplitRequest.DesiredSplit( > fraction_of_remainder=split_fraction, > estimated_input_elements=num_elements) > })) > split_response = self._worker_handler.control_conn.push( > split_request).get() # type: > beam_fn_api_pb2.InstructionResponse > for t in (0.05, 0.1, 0.2): > waiting = ('Instruction not running', 'not yet scheduled') > if any(msg in split_response.error for msg in waiting): > time.sleep(t) > split_response = self._worker_handler.control_conn.push( > split_request).get() > if 'Unknown process bundle' in split_response.error: > # It may have finished too fast. > split_result = None > elif split_response.error: > > raise RuntimeError(split_response.error) > E RuntimeError: Traceback (most recent call last): > E File > "/home/jenkins/jenkins-slave/workspace/beam_PreCommit_Python_Phrase/src/sdks/python/test-suites/tox/py2/build/srcs/sdks/python/apache_beam/runners/worker/sdk_worker.py", > line 190, in _execute > E response = task() > E File > "/home/jenkins/jenkins-slave/workspace/beam_PreCommit_Python_Phrase/src/sdks/python/test-suites/tox/py2/build/srcs/sdks/python/apache_beam/runners/worker/sdk_worker.py", > line 229, in > E lambda: self.create_worker().do_instruction(request), request) > E File > "/home/jenkins/jenkins-slave/workspace/beam_PreCommit_Python_Phrase/src/sdks/python/test-suites/tox/py2/build/srcs/sdks/python/apache_beam/runners/worker/sdk_worker.py", > line 416, in do_instruction > E getattr(request, request_type), request.instruction_id) > E File > "/home/jenkins/jenkins-slave/workspace/beam_PreCommit_Python_Phrase/src/sdks/python/test-suites/tox/py2/build/srcs/sdks/python/apache_beam/runners/worker/sdk_worker.py", > line 479, in 
process_bundle_split > E process_bundle_split=processor.try_split(request)) > E File >
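The traceback above comes from the runner test harness's split logic, which retries a split request a few times (sleeping 0.05s, 0.1s, 0.2s) when the bundle is not yet running, and treats "Unknown process bundle" as a bundle that finished before the split landed. A minimal stand-alone sketch of that retry pattern follows; the names (`request_split_with_retry`, `send_split`, `WAITING_MESSAGES`) are illustrative, not Beam APIs:

```python
import time

# Messages the harness treats as "bundle not running yet; retry".
WAITING_MESSAGES = ('Instruction not running', 'not yet scheduled')


def request_split_with_retry(send_split, delays=(0.05, 0.1, 0.2)):
    """Push a split request, retrying briefly while the bundle is not yet running.

    send_split() must return an object with an .error string (empty on success).
    Returns the successful response, or None if the bundle already finished.
    """
    response = send_split()
    for delay in delays:
        if any(msg in response.error for msg in WAITING_MESSAGES):
            time.sleep(delay)
            response = send_split()
        else:
            break
    if 'Unknown process bundle' in response.error:
        # The bundle may have finished before the split request arrived.
        return None
    if response.error:
        raise RuntimeError(response.error)
    return response
```

The flakiness in BEAM-9527 is inherent to this race: with only three short retries, a slow-to-schedule bundle can still exhaust the loop and surface an error.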
[jira] [Updated] (BEAM-6847) Add Streaming wordcount test to Dataflow ValidatesContainer test suite
[ https://issues.apache.org/jira/browse/BEAM-6847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-6847: -- Labels: beam-fixit (was: ) > Add Streaming wordcount test to Dataflow ValidatesContainer test suite > -- > > Key: BEAM-6847 > URL: https://issues.apache.org/jira/browse/BEAM-6847 > Project: Beam > Issue Type: Sub-task > Components: testing >Reporter: Valentyn Tymofieiev >Priority: P3 > Labels: beam-fixit > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-10215) @Ignore: Concat now works with varargs
[ https://issues.apache.org/jira/browse/BEAM-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-10215: --- Status: Open (was: Triage Needed) > @Ignore: Concat now works with varargs > -- > > Key: BEAM-10215 > URL: https://issues.apache.org/jira/browse/BEAM-10215 > Project: Beam > Issue Type: Bug > Components: dsl-sql-zetasql >Reporter: Rui Wang >Assignee: Rui Wang >Priority: P2 > Labels: beam-fixit > > Will fix this ignored test: > testConcatWithSixParameters() -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-10213) @Ignore: fix the test for testCastToDateWithCase
[ https://issues.apache.org/jira/browse/BEAM-10213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-10213: --- Status: Open (was: Triage Needed) > @Ignore: fix the test for testCastToDateWithCase > > > Key: BEAM-10213 > URL: https://issues.apache.org/jira/browse/BEAM-10213 > Project: Beam > Issue Type: Bug > Components: dsl-sql-zetasql >Reporter: Rui Wang >Assignee: Rui Wang >Priority: P2 > Labels: beam-fixit > > Fix this ignored test: > testCastToDateWithCase() -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-4037) Add Python streaming wordcount snippets and test
[ https://issues.apache.org/jira/browse/BEAM-4037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-4037: -- Status: Triage Needed (was: Open) > Add Python streaming wordcount snippets and test > > > Key: BEAM-4037 > URL: https://issues.apache.org/jira/browse/BEAM-4037 > Project: Beam > Issue Type: Improvement > Components: sdk-py-core >Reporter: Charles Chen >Priority: P2 > Labels: beam-fixit > Time Spent: 1.5h > Remaining Estimate: 0h > > We should add Python streaming wordcount snippets and tests. The > documentation will refer to these snippets. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-1910) test_using_slow_impl very flaky locally
[ https://issues.apache.org/jira/browse/BEAM-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-1910: -- Priority: P1 (was: P2) > test_using_slow_impl very flaky locally > --- > > Key: BEAM-1910 > URL: https://issues.apache.org/jira/browse/BEAM-1910 > Project: Beam > Issue Type: Bug > Components: sdk-py-core >Reporter: Eugene Kirpichov >Priority: P1 > Labels: beam-fixit, flake > > Most times this test fails on my machine when running: > mvn verify -am -T 1C > test_using_slow_impl (apache_beam.coders.slow_coders_test.SlowCoders) ... FAIL > ... > ___ summary > > ERROR: docs: commands failed > lint: commands succeeded > ERROR: py27: commands failed > py27cython: commands succeeded > py27gcp: commands succeeded > [ERROR] Command execution failed. > org.apache.commons.exec.ExecuteException: Process exited with an error: 1 > (Exit value: 1) > at > org.apache.commons.exec.DefaultExecutor.executeInternal(DefaultExecutor.java:404) > at > org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:166) > at org.codehaus.mojo.exec.ExecMojo.executeCommandLine(ExecMojo.java:764) > at org.codehaus.mojo.exec.ExecMojo.executeCommandLine(ExecMojo.java:711) > at org.codehaus.mojo.exec.ExecMojo.execute(ExecMojo.java:289) > at > org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134) > at > org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:207) > at > org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153) > at > org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145) > at > org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116) > at > org.apache.maven.lifecycle.internal.builder.multithreaded.MultiThreadedBuilder$1.call(MultiThreadedBuilder.java:185) > at > 
org.apache.maven.lifecycle.internal.builder.multithreaded.MultiThreadedBuilder$1.call(MultiThreadedBuilder.java:181) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > Unfortunately the test doesn't print anything to maven output, so I don't > know what went wrong. I also don't know how to rerun the individual test > myself. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-5642) test_pardo_state_only flaky (times out)
[ https://issues.apache.org/jira/browse/BEAM-5642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-5642: -- Labels: beam-fixit flake (was: flake) > test_pardo_state_only flaky (times out) > --- > > Key: BEAM-5642 > URL: https://issues.apache.org/jira/browse/BEAM-5642 > Project: Beam > Issue Type: Bug > Components: sdk-py-core >Reporter: Ahmet Altay >Priority: P1 > Labels: beam-fixit, flake > > [https://builds.apache.org/job/beam_PreCommit_Python_Commit/1577/consoleFull] > > *16:43:20* > ==*16:43:20* > ERROR: test_pardo_state_only > (apache_beam.runners.portability.portable_runner_test.PortableRunnerTest)*16:43:20* > > --*16:43:20* > Traceback (most recent call last):*16:43:20* File > "/home/jenkins/jenkins-slave/workspace/beam_PreCommit_Python_Commit/src/sdks/python/apache_beam/runners/portability/fn_api_runner_test.py", > line 265, in test_pardo_state_only*16:43:20* > equal_to(expected))*16:43:20* File > "/home/jenkins/jenkins-slave/workspace/beam_PreCommit_Python_Commit/src/sdks/python/apache_beam/pipeline.py", > line 423, in __exit__*16:43:20* self.run().wait_until_finish()*16:43:20* > File > "/home/jenkins/jenkins-slave/workspace/beam_PreCommit_Python_Commit/src/sdks/python/apache_beam/runners/portability/portable_runner.py", > line 242, in wait_until_finish*16:43:20* > beam_job_api_pb2.GetJobStateRequest(job_id=self._job_id)):*16:43:20* File > "/home/jenkins/jenkins-slave/workspace/beam_PreCommit_Python_Commit/src/sdks/python/target/.tox/py3/lib/python3.5/site-packages/grpc/_channel.py", > line 363, in __next__*16:43:20* return self._next()*16:43:20* File > "/home/jenkins/jenkins-slave/workspace/beam_PreCommit_Python_Commit/src/sdks/python/target/.tox/py3/lib/python3.5/site-packages/grpc/_channel.py", > line 348, in _next*16:43:20* self._state.condition.wait()*16:43:20* > File "/usr/lib/python3.5/threading.py", line 293, in wait*16:43:20* > waiter.acquire()*16:43:20* File > 
"/home/jenkins/jenkins-slave/workspace/beam_PreCommit_Python_Commit/src/sdks/python/apache_beam/runners/portability/portable_runner_test.py", > line 68, in handler*16:43:20* raise BaseException(msg)*16:43:20* > BaseException: Timed out after 30 seconds.*16:43:20* >> > begin captured stdout << - -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-4037) Add Python streaming wordcount snippets and test
[ https://issues.apache.org/jira/browse/BEAM-4037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-4037: -- Labels: beam-fixit (was: ) > Add Python streaming wordcount snippets and test > > > Key: BEAM-4037 > URL: https://issues.apache.org/jira/browse/BEAM-4037 > Project: Beam > Issue Type: Improvement > Components: sdk-py-core >Reporter: Charles Chen >Priority: P2 > Labels: beam-fixit > Time Spent: 1.5h > Remaining Estimate: 0h > > We should add Python streaming wordcount snippets and tests. The > documentation will refer to these snippets. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-1910) test_using_slow_impl very flaky locally
[ https://issues.apache.org/jira/browse/BEAM-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-1910: -- Labels: beam-fixit flake (was: ) > test_using_slow_impl very flaky locally > --- > > Key: BEAM-1910 > URL: https://issues.apache.org/jira/browse/BEAM-1910 > Project: Beam > Issue Type: Bug > Components: sdk-py-core >Reporter: Eugene Kirpichov >Priority: P2 > Labels: beam-fixit, flake > > Most times this test fails on my machine when running: > mvn verify -am -T 1C > test_using_slow_impl (apache_beam.coders.slow_coders_test.SlowCoders) ... FAIL > ... > ___ summary > > ERROR: docs: commands failed > lint: commands succeeded > ERROR: py27: commands failed > py27cython: commands succeeded > py27gcp: commands succeeded > [ERROR] Command execution failed. > org.apache.commons.exec.ExecuteException: Process exited with an error: 1 > (Exit value: 1) > at > org.apache.commons.exec.DefaultExecutor.executeInternal(DefaultExecutor.java:404) > at > org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:166) > at org.codehaus.mojo.exec.ExecMojo.executeCommandLine(ExecMojo.java:764) > at org.codehaus.mojo.exec.ExecMojo.executeCommandLine(ExecMojo.java:711) > at org.codehaus.mojo.exec.ExecMojo.execute(ExecMojo.java:289) > at > org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134) > at > org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:207) > at > org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153) > at > org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145) > at > org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116) > at > org.apache.maven.lifecycle.internal.builder.multithreaded.MultiThreadedBuilder$1.call(MultiThreadedBuilder.java:185) > at > 
org.apache.maven.lifecycle.internal.builder.multithreaded.MultiThreadedBuilder$1.call(MultiThreadedBuilder.java:181) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > Unfortunately the test doesn't print anything to maven output, so I don't > know what went wrong. I also don't know how to rerun the individual test > myself. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (BEAM-3861) Build test infra for end-to-end streaming test in Python SDK
[ https://issues.apache.org/jira/browse/BEAM-3861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17128524#comment-17128524 ] Kenneth Knowles commented on BEAM-3861: --- This is done? > Build test infra for end-to-end streaming test in Python SDK > > > Key: BEAM-3861 > URL: https://issues.apache.org/jira/browse/BEAM-3861 > Project: Beam > Issue Type: Task > Components: testing >Reporter: Mark Liu >Assignee: Mark Liu >Priority: P2 > Labels: beam-fixit > Time Spent: 9h > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-3861) Build test infra for end-to-end streaming test in Python SDK
[ https://issues.apache.org/jira/browse/BEAM-3861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-3861: -- Labels: beam-fixit (was: ) > Build test infra for end-to-end streaming test in Python SDK > > > Key: BEAM-3861 > URL: https://issues.apache.org/jira/browse/BEAM-3861 > Project: Beam > Issue Type: Task > Components: testing >Reporter: Mark Liu >Assignee: Mark Liu >Priority: P2 > Labels: beam-fixit > Time Spent: 9h > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-2818) Test for circular dependencies
[ https://issues.apache.org/jira/browse/BEAM-2818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-2818: -- Labels: beam (was: ) > Test for circular dependencies > -- > > Key: BEAM-2818 > URL: https://issues.apache.org/jira/browse/BEAM-2818 > Project: Beam > Issue Type: Improvement > Components: sdk-py-core >Reporter: Ahmet Altay >Priority: P3 > Labels: beam > > Add a test for checking circular dependencies. > Circular dependencies fail at run time, depending on import order. It is > easy to introduce one because they do not always cause a test failure, but they > may still fail for a user. > We can try to find a way to generally figure out if there are any circular > dependencies in the code. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-2818) Test for circular dependencies
[ https://issues.apache.org/jira/browse/BEAM-2818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-2818: -- Labels: beam-fixit (was: beam) > Test for circular dependencies > -- > > Key: BEAM-2818 > URL: https://issues.apache.org/jira/browse/BEAM-2818 > Project: Beam > Issue Type: Improvement > Components: sdk-py-core >Reporter: Ahmet Altay >Priority: P3 > Labels: beam-fixit > > Add a test for checking circular dependencies. > Circular dependencies fail at run time, depending on import order. It is > easy to introduce one because they do not always cause a test failure, but they > may still fail for a user. > We can try to find a way to generally figure out if there are any circular > dependencies in the code. -- This message was sent by Atlassian Jira (v8.3.4#803005)
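BEAM-2818 asks for a general way to detect import cycles before they bite at run time. One stdlib-only approach (a sketch, not what Beam adopted): parse each module's imports with `ast`, build a module-to-module graph, and run a depth-first search for a back edge. Both function names here are illustrative:

```python
import ast


def imported_modules(source):
    """Top-level module names imported by a piece of Python source."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module)
    return names


def find_cycle(graph):
    """Return one import cycle as a list of module names, or None.

    graph maps a module name to an iterable of the modules it imports;
    dependencies absent from the graph (external packages) are ignored.
    """
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {m: WHITE for m in graph}
    stack = []

    def dfs(module):
        color[module] = GRAY
        stack.append(module)
        for dep in graph.get(module, ()):
            if dep not in color:
                continue  # external dependency; not part of the package
            if color[dep] == GRAY:
                # Back edge: the cycle is the stack slice from dep onward.
                return stack[stack.index(dep):] + [dep]
            if color[dep] == WHITE:
                cycle = dfs(dep)
                if cycle:
                    return cycle
        stack.pop()
        color[module] = BLACK
        return None

    for module in graph:
        if color[module] == WHITE:
            cycle = dfs(module)
            if cycle:
                return cycle
    return None
```

Wiring this into CI as a unit test (walk the package's .py files, feed each through `imported_modules`, then assert `find_cycle` returns None) would catch cycles regardless of import order.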
[jira] [Updated] (BEAM-6904) Test all Coder structuralValue implementations
[ https://issues.apache.org/jira/browse/BEAM-6904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-6904: -- Labels: beam-fixit (was: ) > Test all Coder structuralValue implementations > -- > > Key: BEAM-6904 > URL: https://issues.apache.org/jira/browse/BEAM-6904 > Project: Beam > Issue Type: Test > Components: sdk-java-core >Reporter: Kenneth Knowles >Priority: P2 > Labels: beam-fixit > Time Spent: 5h 10m > Remaining Estimate: 0h > > Here is a test helper that checks that structuralValue is consistent with > equals: > https://github.com/apache/beam/blob/master/sdks/java/core/src/main/java/org/apache/beam/sdk/testing/CoderProperties.java#L200 > And here is one that tests it another way: > https://github.com/apache/beam/blob/master/sdks/java/core/src/main/java/org/apache/beam/sdk/testing/CoderProperties.java#L226 > With consistentWithEquals deprecated and all the > structuralValue methods implemented, we should add these tests to every coder. -- This message was sent by Atlassian Jira (v8.3.4#803005)
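The CoderProperties helpers referenced in BEAM-6904 check, among other things, that a coder's structural value is consistent with equality: equal values must yield equal structural values. A minimal pure-Python restatement of that direction of the property, with an illustrative pickle-backed coder (this is my reading of the Jira helpers, not Beam's actual API):

```python
import pickle


class PickleCoder:
    """Illustrative coder: encodes via pickle; its structural value is the
    encoded bytes, so structural equality tracks encoded-form equality."""

    def encode(self, value):
        return pickle.dumps(value)

    def decode(self, data):
        return pickle.loads(data)

    def structural_value(self, value):
        return self.encode(value)


def assert_structural_value_consistent_with_equals(coder, a, b):
    """One direction of the property: if two values compare equal, their
    structural values must also compare equal."""
    if a == b:
        assert coder.structural_value(a) == coder.structural_value(b), (
            'equal values produced unequal structural values')
```

Running such a check over a handful of equal and unequal value pairs per coder is the kind of blanket coverage the ticket asks for.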
[jira] [Commented] (BEAM-4358) Create test artifacts
[ https://issues.apache.org/jira/browse/BEAM-4358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17128523#comment-17128523 ] Kenneth Knowles commented on BEAM-4358: --- Specifically, this refers to artifacts that do _not_ have the "tests" classifier, but are _main_ artifacts that ship test-related utilities. The libraries are super lightweight deps, but could be a vector for diamond deps. Not super high priority because there's not much negative impact in this case. > Create test artifacts > - > > Key: BEAM-4358 > URL: https://issues.apache.org/jira/browse/BEAM-4358 > Project: Beam > Issue Type: Improvement > Components: build-system, testing >Reporter: Anton Kedin >Priority: P3 > > Currently things like TestPipeline and TestPubsub implement TestRule and thus > require the project to depend on Junit. We need to create separate artifacts > for these test utilities and depend on Junit only in test scope. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-4358) Create test artifacts
[ https://issues.apache.org/jira/browse/BEAM-4358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-4358: -- Priority: P3 (was: P2) > Create test artifacts > - > > Key: BEAM-4358 > URL: https://issues.apache.org/jira/browse/BEAM-4358 > Project: Beam > Issue Type: Improvement > Components: build-system, testing >Reporter: Anton Kedin >Priority: P3 > > Currently things like TestPipeline and TestPubsub implement TestRule and thus > require the project to depend on Junit. We need to create separate artifacts > for these test utilities and depend on Junit only in test scope. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-3573) Test jars should export only tests, and only be exported for select modules
[ https://issues.apache.org/jira/browse/BEAM-3573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-3573: -- Labels: beam-fixit (was: ) > Test jars should export only tests, and only be exported for select modules > --- > > Key: BEAM-3573 > URL: https://issues.apache.org/jira/browse/BEAM-3573 > Project: Beam > Issue Type: Bug > Components: sdk-java-core >Reporter: Kenneth Knowles >Priority: P2 > Labels: beam-fixit > Time Spent: 1h 20m > Remaining Estimate: 0h > > Today, we have test-jars that are used as libraries for testing. That is not > what "test jar" means, and dependency management actually does not work > correctly for this. It is OK to depend on a test jar in order to run the > tests therein, and not really OK to depend on one for another reason. > This ticket is a bucket ticket for fixes to this situation. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (BEAM-3138) Stop depending on Test JARs
[ https://issues.apache.org/jira/browse/BEAM-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles reassigned BEAM-3138: - Assignee: (was: Kenneth Knowles) > Stop depending on Test JARs > --- > > Key: BEAM-3138 > URL: https://issues.apache.org/jira/browse/BEAM-3138 > Project: Beam > Issue Type: Bug > Components: io-java-gcp, runner-core, sdk-java-core, sdk-java-harness >Reporter: Thomas Groh >Priority: P2 > Labels: beam-fixit > Time Spent: 50m > Remaining Estimate: 0h > > Testing components can be in a testing or otherwise signaled package, but > shouldn't really be depended on by depending on a test jar in the test scope. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-5811) Timeout in Python datastore_write_it_test.DatastoreWriteIT
[ https://issues.apache.org/jira/browse/BEAM-5811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-5811: -- Labels: beam-fixit flake (was: flake) > Timeout in Python datastore_write_it_test.DatastoreWriteIT > -- > > Key: BEAM-5811 > URL: https://issues.apache.org/jira/browse/BEAM-5811 > Project: Beam > Issue Type: Bug > Components: sdk-py-core >Reporter: Kenneth Knowles >Priority: P1 > Labels: beam-fixit, flake > > [https://builds.apache.org/job/beam_PostCommit_Python_Verify/6340/] > [https://scans.gradle.com/s/74drwrmqtaory/console-log?task=:beam-sdks-python:postCommitITTests] > {code:java} > TimedOutException: 'test_datastore_write_limit > (apache_beam.io.gcp.datastore_write_it_test.DatastoreWriteIT)'{code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-3138) Stop depending on Test JARs
[ https://issues.apache.org/jira/browse/BEAM-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-3138: -- Labels: beam-fixit (was: triaged) > Stop depending on Test JARs > --- > > Key: BEAM-3138 > URL: https://issues.apache.org/jira/browse/BEAM-3138 > Project: Beam > Issue Type: Bug > Components: io-java-gcp, runner-core, sdk-java-core, sdk-java-harness >Reporter: Thomas Groh >Assignee: Kenneth Knowles >Priority: P2 > Labels: beam-fixit > Time Spent: 50m > Remaining Estimate: 0h > > Testing components can be in a testing or otherwise signaled package, but > shouldn't really be depended on by depending on a test jar in the test scope. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-7798) After changes in type inference, apache_beam.io.gcp.bigquery_io_read_it_test.BigqueryIOReadIT.test_bigquery_read_1M_python is failing in Python 3.5 postcommits
[ https://issues.apache.org/jira/browse/BEAM-7798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-7798: -- Priority: P1 (was: P2) > After changes in type inference, > apache_beam.io.gcp.bigquery_io_read_it_test.BigqueryIOReadIT.test_bigquery_read_1M_python > is failing in Python 3.5 postcommits > --- > > Key: BEAM-7798 > URL: https://issues.apache.org/jira/browse/BEAM-7798 > Project: Beam > Issue Type: Improvement > Components: sdk-py-core >Reporter: Valentyn Tymofieiev >Assignee: Robert Bradshaw >Priority: P1 > Labels: beam-fixit, stale-assigned > Time Spent: 4h 10m > Remaining Estimate: 0h > > {noformat} > Error Message > Tuple[t0, t1, ...]: each t must be a type. Got Any. > Stacktrace > Traceback (most recent call last): > File "/usr/lib/python3.5/unittest/case.py", line 58, in testPartExecutor > yield > File "/usr/lib/python3.5/unittest/case.py", line 600, in run > testMethod() > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/io/gcp/bigquery_io_read_it_test.py", > line 58, in test_bigquery_read_1M_python > self.run_bigquery_io_read_pipeline('1M') > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/io/gcp/bigquery_io_read_it_test.py", > line 54, in run_bigquery_io_read_pipeline > **extra_opts)) > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/io/gcp/bigquery_io_read_pipeline.py", > line 74, in run > p.run() > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/testing/test_pipeline.py", > line 107, in run > else test_runner_api)) > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/pipeline.py", > line 406, in run > self._options).run(False) > File > 
"/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/pipeline.py", > line 419, in run > return self.runner.run_pipeline(self, self._options) > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/runners/direct/test_direct_runner.py", > line 43, in run_pipeline > self.result = super(TestDirectRunner, self).run_pipeline(pipeline, > options) > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/runners/direct/direct_runner.py", > line 129, in run_pipeline > return runner.run_pipeline(pipeline, options) > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/runners/direct/direct_runner.py", > line 355, in run_pipeline > pipeline.replace_all(_get_transform_overrides(options)) > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/pipeline.py", > line 389, in replace_all > self._replace(override) > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/pipeline.py", > line 300, in _replace > self.visit(TransformUpdater(self)) > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/pipeline.py", > line 447, in visit > self._root_transform().visit(visitor, self, visited) > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/pipeline.py", > line 824, in visit > part.visit(visitor, pipeline, visited) > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/pipeline.py", > line 824, in visit > part.visit(visitor, pipeline, visited) > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/pipeline.py", > line 824, in visit > part.visit(visitor, pipeline, visited) > File > 
"/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/pipeline.py", > line 822, in visit > visitor.enter_composite_transform(self) > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/pipeline.py", > line 295, in enter_composite_transform > self._replace_if_needed(transform_node) > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/pipeline.py", > line 264, in _replace_if_needed > new_output = replacement_transform.expand(input_node) > File >
[jira] [Updated] (BEAM-7798) After changes in type inference, apache_beam.io.gcp.bigquery_io_read_it_test.BigqueryIOReadIT.test_bigquery_read_1M_python is failing in Python 3.5 postcommits
[ https://issues.apache.org/jira/browse/BEAM-7798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-7798: -- Labels: beam-fixit stale-assigned (was: stale-assigned) > After changes in type inference, > apache_beam.io.gcp.bigquery_io_read_it_test.BigqueryIOReadIT.test_bigquery_read_1M_python > is failing in Python 3.5 postcommits > --- > > Key: BEAM-7798 > URL: https://issues.apache.org/jira/browse/BEAM-7798 > Project: Beam > Issue Type: Improvement > Components: sdk-py-core >Reporter: Valentyn Tymofieiev >Assignee: Robert Bradshaw >Priority: P2 > Labels: beam-fixit, stale-assigned > Time Spent: 4h 10m > Remaining Estimate: 0h > > {noformat} > Error Message > Tuple[t0, t1, ...]: each t must be a type. Got Any. > Stacktrace > Traceback (most recent call last): > File "/usr/lib/python3.5/unittest/case.py", line 58, in testPartExecutor > yield > File "/usr/lib/python3.5/unittest/case.py", line 600, in run > testMethod() > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/io/gcp/bigquery_io_read_it_test.py", > line 58, in test_bigquery_read_1M_python > self.run_bigquery_io_read_pipeline('1M') > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/io/gcp/bigquery_io_read_it_test.py", > line 54, in run_bigquery_io_read_pipeline > **extra_opts)) > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/io/gcp/bigquery_io_read_pipeline.py", > line 74, in run > p.run() > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/testing/test_pipeline.py", > line 107, in run > else test_runner_api)) > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/pipeline.py", > line 406, in run > self._options).run(False) > File > 
"/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/pipeline.py", > line 419, in run > return self.runner.run_pipeline(self, self._options) > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/runners/direct/test_direct_runner.py", > line 43, in run_pipeline > self.result = super(TestDirectRunner, self).run_pipeline(pipeline, > options) > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/runners/direct/direct_runner.py", > line 129, in run_pipeline > return runner.run_pipeline(pipeline, options) > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/runners/direct/direct_runner.py", > line 355, in run_pipeline > pipeline.replace_all(_get_transform_overrides(options)) > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/pipeline.py", > line 389, in replace_all > self._replace(override) > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/pipeline.py", > line 300, in _replace > self.visit(TransformUpdater(self)) > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/pipeline.py", > line 447, in visit > self._root_transform().visit(visitor, self, visited) > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/pipeline.py", > line 824, in visit > part.visit(visitor, pipeline, visited) > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/pipeline.py", > line 824, in visit > part.visit(visitor, pipeline, visited) > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/pipeline.py", > line 824, in visit > part.visit(visitor, pipeline, visited) > File > 
"/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/pipeline.py", > line 822, in visit > visitor.enter_composite_transform(self) > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/pipeline.py", > line 295, in enter_composite_transform > self._replace_if_needed(transform_node) > File > "/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Python35_PR/src/sdks/python/apache_beam/pipeline.py", > line 264, in _replace_if_needed > new_output = replacement_transform.expand(input_node) > File >
[jira] [Updated] (BEAM-3138) Stop depending on Test JARs
[ https://issues.apache.org/jira/browse/BEAM-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-3138: -- Priority: P2 (was: P3) > Stop depending on Test JARs > --- > > Key: BEAM-3138 > URL: https://issues.apache.org/jira/browse/BEAM-3138 > Project: Beam > Issue Type: Bug > Components: io-java-gcp, runner-core, sdk-java-core, sdk-java-harness >Reporter: Thomas Groh >Assignee: Kenneth Knowles >Priority: P2 > Labels: triaged > Time Spent: 50m > Remaining Estimate: 0h > > Testing components can live in a testing (or otherwise clearly signaled) package, but > other modules shouldn't consume them by depending on a test jar in the test scope. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (BEAM-2786) Update jenkins test scripts to test with Py2 & Py3
[ https://issues.apache.org/jira/browse/BEAM-2786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles resolved BEAM-2786. --- Fix Version/s: Not applicable Resolution: Fixed > Update jenkins test scripts to test with Py2 & Py3 > -- > > Key: BEAM-2786 > URL: https://issues.apache.org/jira/browse/BEAM-2786 > Project: Beam > Issue Type: Improvement > Components: sdk-py-core, testing >Reporter: Holden Karau >Priority: P2 > Fix For: Not applicable > > > After BEAM-1373 and as part of BEAM-1251 we should make sure the automated > tests also run against Py3. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-5171) org.apache.beam.sdk.io.CountingSourceTest.test[Un]boundedSourceSplits tests are flaky in Spark runner
[ https://issues.apache.org/jira/browse/BEAM-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-5171: -- Labels: beam-fixit flake stale-P2 (was: stale-P2) > org.apache.beam.sdk.io.CountingSourceTest.test[Un]boundedSourceSplits tests > are flaky in Spark runner > - > > Key: BEAM-5171 > URL: https://issues.apache.org/jira/browse/BEAM-5171 > Project: Beam > Issue Type: Bug > Components: runner-spark >Reporter: Valentyn Tymofieiev >Priority: P2 > Labels: beam-fixit, flake, stale-P2 > > Two tests: > org.apache.beam.sdk.io.CountingSourceTest.testUnboundedSourceSplits > org.apache.beam.sdk.io.CountingSourceTest.testBoundedSourceSplits > failed in a PostCommit [Spark Validates Runner test > suite|https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Spark_Gradle/1277/testReport/] > with an error that seems to be common for Spark. Could this be due to > misconfiguration of Spark cluster? > Task serialization failed: java.io.IOException: Failed to create local dir in > /tmp/blockmgr-de91f449-e5d1-4be4-acaa-3ee06fdfa95b/1d. > java.io.IOException: Failed to create local dir in > /tmp/blockmgr-de91f449-e5d1-4be4-acaa-3ee06fdfa95b/1d. 
> at org.apache.spark.storage.DiskBlockManager.getFile(DiskBlockManager.scala:70)
> at org.apache.spark.storage.DiskStore.remove(DiskStore.scala:116)
> at org.apache.spark.storage.BlockManager.removeBlockInternal(BlockManager.scala:1511)
> at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1045)
> at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1083)
> at org.apache.spark.storage.BlockManager.putIterator(BlockManager.scala:841)
> at org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:1404)
> at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:123)
> at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:88)
> at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
> at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
> at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1482)
> at org.apache.spark.scheduler.DAGScheduler.submitMissingTasks(DAGScheduler.scala:1039)
> at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:947)
> at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:891)
> at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1780)
> at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1772)
> at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1761)
> at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-3215) Add a performance test for HBaseIO
[ https://issues.apache.org/jira/browse/BEAM-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-3215: -- Labels: beam-fixit (was: ) > Add a performance test for HBaseIO > -- > > Key: BEAM-3215 > URL: https://issues.apache.org/jira/browse/BEAM-3215 > Project: Beam > Issue Type: Test > Components: io-java-hbase >Reporter: Chamikara Madhusanka Jayalath >Priority: P2 > Labels: beam-fixit > > We should add a large-scale performance test for HBaseIO. We could use > the PerfKitBenchmarker-based performance testing framework [1] to manage a > Kubernetes-based multi-node HBase cluster and to publish benchmark results. > Example docker image to use: https://hub.docker.com/r/dajobe/hbase/ > [1] https://beam.apache.org/documentation/io/testing/ -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-9002) test_flatten_same_pcollections (apache_beam.transforms.ptransform_test.PTransformTest) does not work in Streaming VR suite on Dataflow
[ https://issues.apache.org/jira/browse/BEAM-9002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-9002: -- Labels: beam-fixit stale-assigned (was: stale-assigned) > test_flatten_same_pcollections > (apache_beam.transforms.ptransform_test.PTransformTest) does not work in > Streaming VR suite on Dataflow > -- > > Key: BEAM-9002 > URL: https://issues.apache.org/jira/browse/BEAM-9002 > Project: Beam > Issue Type: Bug > Components: runner-dataflow >Reporter: Valentyn Tymofieiev >Assignee: Ankur Goenka >Priority: P2 > Labels: beam-fixit, stale-assigned > Time Spent: 50m > Remaining Estimate: 0h > > Per investigation in https://issues.apache.org/jira/browse/BEAM-8877, the > test times out and was recently added to VR test suite. > [~liumomo315], I will sickbay this test for streaming, could you please help > triage the failure? > Thank you! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-5171) org.apache.beam.sdk.io.CountingSourceTest.test[Un]boundedSourceSplits tests are flaky in Spark runner
[ https://issues.apache.org/jira/browse/BEAM-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-5171: -- Priority: P1 (was: P2) > org.apache.beam.sdk.io.CountingSourceTest.test[Un]boundedSourceSplits tests > are flaky in Spark runner > - > > Key: BEAM-5171 > URL: https://issues.apache.org/jira/browse/BEAM-5171 > Project: Beam > Issue Type: Bug > Components: runner-spark >Reporter: Valentyn Tymofieiev >Priority: P1 > Labels: beam-fixit, flake, stale-P2 > > Two tests: > org.apache.beam.sdk.io.CountingSourceTest.testUnboundedSourceSplits > org.apache.beam.sdk.io.CountingSourceTest.testBoundedSourceSplits > failed in a PostCommit [Spark Validates Runner test > suite|https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Spark_Gradle/1277/testReport/] > with an error that seems to be common for Spark. Could this be due to > misconfiguration of Spark cluster? > Task serialization failed: java.io.IOException: Failed to create local dir in > /tmp/blockmgr-de91f449-e5d1-4be4-acaa-3ee06fdfa95b/1d. > java.io.IOException: Failed to create local dir in > /tmp/blockmgr-de91f449-e5d1-4be4-acaa-3ee06fdfa95b/1d. 
> at org.apache.spark.storage.DiskBlockManager.getFile(DiskBlockManager.scala:70)
> at org.apache.spark.storage.DiskStore.remove(DiskStore.scala:116)
> at org.apache.spark.storage.BlockManager.removeBlockInternal(BlockManager.scala:1511)
> at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1045)
> at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1083)
> at org.apache.spark.storage.BlockManager.putIterator(BlockManager.scala:841)
> at org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:1404)
> at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:123)
> at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:88)
> at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
> at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
> at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1482)
> at org.apache.spark.scheduler.DAGScheduler.submitMissingTasks(DAGScheduler.scala:1039)
> at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:947)
> at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:891)
> at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1780)
> at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1772)
> at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1761)
> at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-9119) apache_beam.runners.portability.fn_api_runner_test.FnApiRunnerTest[...].test_large_elements is flaky
[ https://issues.apache.org/jira/browse/BEAM-9119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-9119: -- Priority: P1 (was: P2) > apache_beam.runners.portability.fn_api_runner_test.FnApiRunnerTest[...].test_large_elements > is flaky > > > Key: BEAM-9119 > URL: https://issues.apache.org/jira/browse/BEAM-9119 > Project: Beam > Issue Type: Improvement > Components: sdk-py-core >Reporter: Valentyn Tymofieiev >Assignee: Robert Bradshaw >Priority: P1 > Labels: beam-fixit, flake, stale-assigned > Time Spent: 1h > Remaining Estimate: 0h > > Saw 3 errors today, all manifest with: > IndexError: index out of range in apache_beam/coders/slow_stream.py", line > 169, in read_byte_py3. > https://builds.apache.org/job/beam_PreCommit_Python_Phrase/1369 > https://builds.apache.org/job/beam_PreCommit_Python_Phrase/1365 > https://builds.apache.org/job/beam_PreCommit_Python_Phrase/1370 > Sample logs: > {noformat} > 12:10:27 === FAILURES > === > 12:10:27 FnApiRunnerTestWithDisabledCaching.test_large_elements > > 12:10:27 [gw0] linux -- Python 3.6.8 > /home/jenkins/jenkins-slave/workspace/beam_PreCommit_Python_Phrase/src/sdks/python/test-suites/tox/py36/build/srcs/sdks/python/target/.tox-py36-gcp-pytest/py36-gcp-pytest/bin/python > 12:10:27 > 12:10:27 self = > testMethod=test_large_elements> > 12:10:27 > 12:10:27 def test_large_elements(self): > 12:10:27with self.create_pipeline() as p: > 12:10:27 big = (p > 12:10:27 | beam.Create(['a', 'a', 'b']) > 12:10:27 | beam.Map(lambda x: ( > 12:10:27 x, x * > data_plane._DEFAULT_SIZE_FLUSH_THRESHOLD))) > 12:10:27 > 12:10:27 side_input_res = ( > 12:10:27 big > 12:10:27 | beam.Map(lambda x, side: (x[0], side.count(x[0])), > 12:10:27 beam.pvalue.AsList(big | beam.Map(lambda x: > x[0] > 12:10:27 assert_that(side_input_res, > 12:10:27 equal_to([('a', 2), ('a', 2), ('b', 1)]), > label='side') > 12:10:27 > 12:10:27 gbk_res = ( > 12:10:27 big > 12:10:27 | beam.GroupByKey() > 12:10:27 | beam.Map(lambda x: 
x[0])) > 12:10:27 > assert_that(gbk_res, equal_to(['a', 'b']), label='gbk') > 12:10:27 > 12:10:27 apache_beam/runners/portability/fn_api_runner_test.py:617: > 12:10:27 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ _ _ _ _ _ > 12:10:27 apache_beam/pipeline.py:479: in __exit__ > 12:10:27 self.run().wait_until_finish() > 12:10:27 apache_beam/pipeline.py:459: in run > 12:10:27 self._options).run(False) > 12:10:27 apache_beam/pipeline.py:472: in run > 12:10:27 return self.runner.run_pipeline(self, self._options) > 12:10:27 apache_beam/runners/portability/fn_api_runner.py:472: in > run_pipeline > 12:10:27 default_environment=self._default_environment)) > 12:10:27 apache_beam/runners/portability/fn_api_runner.py:480: in > run_via_runner_api > 12:10:27 return self.run_stages(stage_context, stages) > 12:10:27 apache_beam/runners/portability/fn_api_runner.py:569: in run_stages > 12:10:27 stage_context.safe_coders) > 12:10:27 apache_beam/runners/portability/fn_api_runner.py:889: in _run_stage > 12:10:27 result, splits = bundle_manager.process_bundle(data_input, > data_output) > 12:10:27 apache_beam/runners/portability/fn_api_runner.py:2076: in > process_bundle > 12:10:27 part, expected_outputs), part_inputs): > 12:10:27 /usr/lib/python3.6/concurrent/futures/_base.py:586: in > result_iterator > 12:10:27 yield fs.pop().result() > 12:10:27 /usr/lib/python3.6/concurrent/futures/_base.py:432: in result > 12:10:27 return self.__get_result() > 12:10:27 /usr/lib/python3.6/concurrent/futures/_base.py:384: in __get_result > 12:10:27 raise self._exception > 12:10:27 apache_beam/utils/thread_pool_executor.py:44: in run > 12:10:27 self._future.set_result(self._fn(*self._fn_args, > **self._fn_kwargs)) > 12:10:27 apache_beam/runners/portability/fn_api_runner.py:2076: in > 12:10:27 part, expected_outputs), part_inputs): > 12:10:27 apache_beam/runners/portability/fn_api_runner.py:2020: in > process_bundle > 12:10:27 
expected_outputs[output.transform_id]).append(output.data) > 12:10:27 apache_beam/runners/portability/fn_api_runner.py:285: in append > 12:10:27 windowed_key_value = > coder_impl.decode_from_stream(input_stream, True) > 12:10:27 apache_beam/coders/coder_impl.py:1153: in decode_from_stream > 12:10:27 value =
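The symptom named above, {{IndexError: index out of range}} raised from read_byte in apache_beam/coders/slow_stream.py, is what a byte-oriented coder stream raises when asked to decode past the end of its buffer. A minimal sketch of that failure mode, using a hypothetical ByteInputStream rather than Beam's actual slow_stream implementation:

```python
class ByteInputStream:
    """Hypothetical stand-in for the stream in apache_beam/coders/slow_stream.py."""

    def __init__(self, data):
        self.data = data  # a bytes object
        self.pos = 0

    def read_byte(self):
        # Indexing bytes past the end raises "IndexError: index out of range",
        # matching the flake's symptom when the data channel delivers fewer
        # bytes than the coder expects to decode.
        b = self.data[self.pos]
        self.pos += 1
        return b

stream = ByteInputStream(b"\x01\x02")
print(stream.read_byte(), stream.read_byte())  # 1 2
# A third read_byte() call would raise IndexError: index out of range.
```

Under that reading, the flake points at a truncated or mis-framed data-channel payload rather than at the coder itself.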
[jira] [Updated] (BEAM-4029) Test ValidatesRunner tests for BundleBasedDirectRunner
[ https://issues.apache.org/jira/browse/BEAM-4029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-4029: -- Labels: beam-fixit (was: ) > Test ValidatesRunner tests for BundleBasedDirectRunner > -- > > Key: BEAM-4029 > URL: https://issues.apache.org/jira/browse/BEAM-4029 > Project: Beam > Issue Type: Improvement > Components: sdk-py-core >Reporter: Charles Chen >Priority: P2 > Labels: beam-fixit > > We currently only run tests for the BundleBasedDirectRunner for streaming > tests. We should also run them for ValidatesRunner tests. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-9119) apache_beam.runners.portability.fn_api_runner_test.FnApiRunnerTest[...].test_large_elements is flaky
[ https://issues.apache.org/jira/browse/BEAM-9119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-9119: -- Labels: beam-fixit flake stale-assigned (was: stale-assigned) > apache_beam.runners.portability.fn_api_runner_test.FnApiRunnerTest[...].test_large_elements > is flaky > > > Key: BEAM-9119 > URL: https://issues.apache.org/jira/browse/BEAM-9119 > Project: Beam > Issue Type: Improvement > Components: sdk-py-core >Reporter: Valentyn Tymofieiev >Assignee: Robert Bradshaw >Priority: P2 > Labels: beam-fixit, flake, stale-assigned > Time Spent: 1h > Remaining Estimate: 0h > > Saw 3 errors today, all manifest with: > IndexError: index out of range in apache_beam/coders/slow_stream.py", line > 169, in read_byte_py3. > https://builds.apache.org/job/beam_PreCommit_Python_Phrase/1369 > https://builds.apache.org/job/beam_PreCommit_Python_Phrase/1365 > https://builds.apache.org/job/beam_PreCommit_Python_Phrase/1370 > Sample logs: > {noformat} > 12:10:27 === FAILURES > === > 12:10:27 FnApiRunnerTestWithDisabledCaching.test_large_elements > > 12:10:27 [gw0] linux -- Python 3.6.8 > /home/jenkins/jenkins-slave/workspace/beam_PreCommit_Python_Phrase/src/sdks/python/test-suites/tox/py36/build/srcs/sdks/python/target/.tox-py36-gcp-pytest/py36-gcp-pytest/bin/python > 12:10:27 > 12:10:27 self = > testMethod=test_large_elements> > 12:10:27 > 12:10:27 def test_large_elements(self): > 12:10:27with self.create_pipeline() as p: > 12:10:27 big = (p > 12:10:27 | beam.Create(['a', 'a', 'b']) > 12:10:27 | beam.Map(lambda x: ( > 12:10:27 x, x * > data_plane._DEFAULT_SIZE_FLUSH_THRESHOLD))) > 12:10:27 > 12:10:27 side_input_res = ( > 12:10:27 big > 12:10:27 | beam.Map(lambda x, side: (x[0], side.count(x[0])), > 12:10:27 beam.pvalue.AsList(big | beam.Map(lambda x: > x[0] > 12:10:27 assert_that(side_input_res, > 12:10:27 equal_to([('a', 2), ('a', 2), ('b', 1)]), > label='side') > 12:10:27 > 12:10:27 gbk_res = ( > 12:10:27 big > 12:10:27 | 
beam.GroupByKey() > 12:10:27 | beam.Map(lambda x: x[0])) > 12:10:27 > assert_that(gbk_res, equal_to(['a', 'b']), label='gbk') > 12:10:27 > 12:10:27 apache_beam/runners/portability/fn_api_runner_test.py:617: > 12:10:27 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ _ _ _ _ _ > 12:10:27 apache_beam/pipeline.py:479: in __exit__ > 12:10:27 self.run().wait_until_finish() > 12:10:27 apache_beam/pipeline.py:459: in run > 12:10:27 self._options).run(False) > 12:10:27 apache_beam/pipeline.py:472: in run > 12:10:27 return self.runner.run_pipeline(self, self._options) > 12:10:27 apache_beam/runners/portability/fn_api_runner.py:472: in > run_pipeline > 12:10:27 default_environment=self._default_environment)) > 12:10:27 apache_beam/runners/portability/fn_api_runner.py:480: in > run_via_runner_api > 12:10:27 return self.run_stages(stage_context, stages) > 12:10:27 apache_beam/runners/portability/fn_api_runner.py:569: in run_stages > 12:10:27 stage_context.safe_coders) > 12:10:27 apache_beam/runners/portability/fn_api_runner.py:889: in _run_stage > 12:10:27 result, splits = bundle_manager.process_bundle(data_input, > data_output) > 12:10:27 apache_beam/runners/portability/fn_api_runner.py:2076: in > process_bundle > 12:10:27 part, expected_outputs), part_inputs): > 12:10:27 /usr/lib/python3.6/concurrent/futures/_base.py:586: in > result_iterator > 12:10:27 yield fs.pop().result() > 12:10:27 /usr/lib/python3.6/concurrent/futures/_base.py:432: in result > 12:10:27 return self.__get_result() > 12:10:27 /usr/lib/python3.6/concurrent/futures/_base.py:384: in __get_result > 12:10:27 raise self._exception > 12:10:27 apache_beam/utils/thread_pool_executor.py:44: in run > 12:10:27 self._future.set_result(self._fn(*self._fn_args, > **self._fn_kwargs)) > 12:10:27 apache_beam/runners/portability/fn_api_runner.py:2076: in > 12:10:27 part, expected_outputs), part_inputs): > 12:10:27 apache_beam/runners/portability/fn_api_runner.py:2020: in > process_bundle > 12:10:27 
expected_outputs[output.transform_id]).append(output.data) > 12:10:27 apache_beam/runners/portability/fn_api_runner.py:285: in append > 12:10:27 windowed_key_value = > coder_impl.decode_from_stream(input_stream, True) > 12:10:27 apache_beam/coders/coder_impl.py:1153: in decode_from_stream >
[jira] [Updated] (BEAM-5627) Redesign test_split_at_fraction_exhaustive tests for Python 3
[ https://issues.apache.org/jira/browse/BEAM-5627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-5627: -- Labels: beam-fixit (was: ) > Redesign test_split_at_fraction_exhaustive tests for Python 3 > -- > > Key: BEAM-5627 > URL: https://issues.apache.org/jira/browse/BEAM-5627 > Project: Beam > Issue Type: Sub-task > Components: sdk-py-core >Reporter: Valentyn Tymofieiev >Priority: P3 > Labels: beam-fixit > Fix For: Not applicable > > Time Spent: 4.5h > Remaining Estimate: 0h > > ERROR: test_split_at_fraction_exhaustive > (apache_beam.io.source_test_utils_test.SourceTestUtilsTest) > -- > Traceback (most recent call last): >File > "/usr/local/google/home/valentyn/projects/beam/clean_head/beam/sdks/python/apache_beam/io/source_test_utils_test.py", > line 120, in test_split_at_fraction_exhaustive > source = self._create_source(data) >File > "/usr/local/google/home/valentyn/projects/beam/clean_head/beam/sdks/python/apache_beam/io/source_test_utils_test.py", > line 43, in _create_source > source = LineSource(self._create_file_with_data(data)) >File > "/usr/local/google/home/valentyn/projects/beam/clean_head/beam/sdks/python/apache_beam/io/source_test_utils_test.py", > line 35, in _create_file_with_data > f.write(line + '\n') >File > "/usr/local/google/home/valentyn/projects/beam/clean_head/beam/sdks/python/target/.tox/py3/lib/python3.5/tempfile.py", > line 622, in func_wrapper > return func(*args, **kwargs) > TypeError: a bytes-like object is required, not 'str' > Also similar: > == > ERROR: test_file_sink_writing > (apache_beam.io.filebasedsink_test.TestFileBasedSink) > -- > Traceback (most recent call last): >File > "/usr/local/google/home/valentyn/projects/beam/clean_head/beam/sdks/python/ >apache_beam/io/filebasedsink_test.py", line 121, in > test_file_sink_writing > init_token, writer_results = self._common_init(sink) > File > "/usr/local/google/home/valentyn/projects/beam/clean_head/beam/sdks/python/ 
>apache_beam/io/filebasedsink_test.py", line 103, in _common_init > writer1 = sink.open_writer(init_token, '1') > File > "/usr/local/google/home/valentyn/projects/beam/clean_head/beam/sdks/python/ >apache_beam/options/value_provider.py", line 133, in _f > return fnc(self, *args, **kwargs) > File > "/usr/local/google/home/valentyn/projects/beam/clean_head/beam/sdks/python/ >apache_beam/io/filebasedsink.py", line 185, in open_writer > return FileBasedSinkWriter(self, os.path.join(init_result, uid) + suffix) > File > "/usr/local/google/home/valentyn/projects/beam/clean_head/beam/sdks/python/ >apache_beam/io/filebasedsink.py", line 385, in __init__ > self.temp_handle = self.sink.open(temp_shard_path) > File > "/usr/local/google/home/valentyn/projects/beam/clean_head/beam/sdks/python/ >apache_beam/io/filebasedsink_test.py", line 82, in open > file_handle.write('[start]') > TypeError: a bytes-like object is required, not 'str' -- This message was sent by Atlassian Jira (v8.3.4#803005)
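Both failures above share one root cause: under Python 3, tempfile.NamedTemporaryFile and similar handles default to binary mode, so writing a str raises {{TypeError: a bytes-like object is required, not 'str'}}. A sketch of the usual fix, encoding before writing (the helper name is illustrative, not the actual Beam test utility):

```python
import tempfile

def create_file_with_data(lines):
    # NamedTemporaryFile opens in binary mode by default, so str payloads
    # must be encoded to bytes before writing under Python 3.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        for line in lines:
            f.write(line.encode("utf-8") + b"\n")
        return f.name

path = create_file_with_data(["line a", "line b"])
with open(path, "rb") as f:
    print(f.read())  # b'line a\nline b\n'
```

Alternatively, the test helpers could open the file in text mode (mode='w') if str I/O is genuinely what they want.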
[jira] [Updated] (BEAM-4029) Test ValidatesRunner tests for BundleBasedDirectRunner
[ https://issues.apache.org/jira/browse/BEAM-4029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-4029: -- Status: Triage Needed (was: Open) > Test ValidatesRunner tests for BundleBasedDirectRunner > -- > > Key: BEAM-4029 > URL: https://issues.apache.org/jira/browse/BEAM-4029 > Project: Beam > Issue Type: Improvement > Components: sdk-py-core >Reporter: Charles Chen >Priority: P2 > Labels: beam-fixit > > We currently only run tests for the BundleBasedDirectRunner for streaming > tests. We should also run them for ValidatesRunner tests. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-2814) test_as_singleton_with_different_defaults test is flaky
[ https://issues.apache.org/jira/browse/BEAM-2814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-2814: -- Labels: beam-fixit flake (was: ) > test_as_singleton_with_different_defaults test is flaky > --- > > Key: BEAM-2814 > URL: https://issues.apache.org/jira/browse/BEAM-2814 > Project: Beam > Issue Type: Bug > Components: sdk-py-core >Reporter: Ahmet Altay >Priority: P1 > Labels: beam-fixit, flake > > {{test_as_singleton_with_different_defaults}} is flaky and failed in the post > commit test 3013, but there is no related change to trigger this. > https://builds.apache.org/view/A-D/view/Beam/job/beam_PostCommit_Python_Verify/3013/consoleFull > (https://console.cloud.google.com/dataflow/jobsDetail/locations/us-central1/jobs/2017-08-28_11_08_56-17324181904913254210?project=apache-beam-testing) > Dataflow error form the console: > (b4d390f9f9e033b4): Traceback (most recent call last): > File > "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line > 582, in do_work > work_executor.execute() > File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/executor.py", > line 166, in execute > op.start() > File "apache_beam/runners/worker/operations.py", line 294, in > apache_beam.runners.worker.operations.DoOperation.start > (apache_beam/runners/worker/operations.c:10607) > def start(self): > File "apache_beam/runners/worker/operations.py", line 295, in > apache_beam.runners.worker.operations.DoOperation.start > (apache_beam/runners/worker/operations.c:10501) > with self.scoped_start_state: > File "apache_beam/runners/worker/operations.py", line 323, in > apache_beam.runners.worker.operations.DoOperation.start > (apache_beam/runners/worker/operations.c:10322) > self.dofn_runner = common.DoFnRunner( > File "apache_beam/runners/common.py", line 378, in > apache_beam.runners.common.DoFnRunner.__init__ > (apache_beam/runners/common.c:10018) > self.do_fn_invoker = DoFnInvoker.create_invoker( > File 
"apache_beam/runners/common.py", line 154, in > apache_beam.runners.common.DoFnInvoker.create_invoker > (apache_beam/runners/common.c:5212) > return PerWindowInvoker( > File "apache_beam/runners/common.py", line 219, in > apache_beam.runners.common.PerWindowInvoker.__init__ > (apache_beam/runners/common.c:7109) > input_args, input_kwargs, [si[global_window] for si in side_inputs]) > File > "/usr/local/lib/python2.7/dist-packages/apache_beam/transforms/sideinputs.py", > line 63, in __getitem__ > _FilteringIterable(self._iterable, target_window), self._view_options) > File "/usr/local/lib/python2.7/dist-packages/apache_beam/pvalue.py", line > 332, in _from_runtime_iterable > 'PCollection with more than one element accessed as ' > ValueError: PCollection with more than one element accessed as a singleton > view. -- This message was sent by Atlassian Jira (v8.3.4#803005)
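The ValueError at the bottom of the trace is the singleton-view contract being enforced: a side input accessed as a singleton must materialize exactly one element, or fall back to a declared default. A rough re-creation of that check (a sketch of the AsSingleton contract, not Beam's actual _from_runtime_iterable):

```python
def as_singleton(values, default=None, has_default=False):
    # Singleton-view contract: exactly one element, or a declared default
    # for the empty case; anything else is an error.
    values = list(values)
    if len(values) == 1:
        return values[0]
    if not values:
        if has_default:
            return default
        raise ValueError("Empty PCollection accessed as a singleton view.")
    raise ValueError("PCollection with more than one element accessed as "
                     "a singleton view.")

print(as_singleton([42]))                             # 42
print(as_singleton([], default=7, has_default=True))  # 7
# as_singleton([1, 2]) raises the ValueError seen in the flaky run.
```

The flake therefore suggests the side input intermittently materialized more than one element; a test that legitimately needs several values should access the side input as a list or iterable instead.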
[jira] [Updated] (BEAM-1884) Add DataflowRunner unit tests
[ https://issues.apache.org/jira/browse/BEAM-1884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-1884: -- Labels: beam-fixit (was: ) > Add DataflowRunner unit tests > - > > Key: BEAM-1884 > URL: https://issues.apache.org/jira/browse/BEAM-1884 > Project: Beam > Issue Type: Bug > Components: sdk-py-core >Reporter: Ahmet Altay >Priority: P3 > Labels: beam-fixit > > DataflowRunner does not have enough unit test coverage. This resulted in a > silent failure where the Dataflow job graph was malformed so the UI could not display it, > but the job still completed > (https://pantheon.corp.google.com/dataflow/job/2017-03-31_21_56_13-6233023862008864856?project=apache-beam-testing=433637338589) > It was a simple fix (https://github.com/apache/beam/pull/2429) that could have been > caught by a unit test checking that step names are not empty. > Also note that it is possible to use the `dataflow_job_file` flag to create job > files without actually running jobs, for the purpose of unit testing. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-1139) Separate Apex and Spark runner integration tests; they depend on incompatible version of Kryo
[ https://issues.apache.org/jira/browse/BEAM-1139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-1139: -- Labels: beam-fixit (was: ) > Separate Apex and Spark runner integration tests; they depend on incompatible > version of Kryo > - > > Key: BEAM-1139 > URL: https://issues.apache.org/jira/browse/BEAM-1139 > Project: Beam > Issue Type: Improvement > Components: runner-apex >Reporter: Kenneth Knowles >Priority: P3 > Labels: beam-fixit > > https://builds.apache.org/view/Beam/job/beam_PreCommit_Java_MavenInstall/org.apache.beam$beam-examples-java/5775/testReport/junit/org.apache.beam.examples/WordCountIT/testE2EWordCount/ > This is not necessarily a bug in the Apex runner, but it looks like this > class cannot be serialized via Kryo while the Apex runner needs it to be. > Probably the fix is to roll forward a simple change to make it Kryo > serializable. > It is not clear to me what differs between this test run and others. > Clearly there is a coverage gap. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-2131) Need Jenkins tests run outside of Google environments
[ https://issues.apache.org/jira/browse/BEAM-2131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-2131: -- Labels: beam-fixit (was: ) > Need Jenkins tests run outside of Google environments > - > > Key: BEAM-2131 > URL: https://issues.apache.org/jira/browse/BEAM-2131 > Project: Beam > Issue Type: Improvement > Components: io-java-gcp >Reporter: Luke Cwik >Priority: P3 > Labels: beam-fixit > > Now that TravisCI no longer runs, we no longer have coverage for running > tests which execute outside of a Google environment. This means that > application default credentials and a Google project will always be found, and > we will never test code paths for developers who have never set up any kind of > Google Cloud integration on their development machine. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-4717) Test JSON to Beam types conversion
[ https://issues.apache.org/jira/browse/BEAM-4717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-4717: -- Labels: beam-fixit stale-P2 (was: stale-P2) > Test JSON to Beam types conversion > -- > > Key: BEAM-4717 > URL: https://issues.apache.org/jira/browse/BEAM-4717 > Project: Beam > Issue Type: Improvement > Components: dsl-sql >Reporter: Rui Wang >Priority: P2 > Labels: beam-fixit, stale-P2 > > Should improve PubSub test coverage by testing more data types. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-7794) DynamoDBIOTest is blocking forever
[ https://issues.apache.org/jira/browse/BEAM-7794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-7794: -- Labels: beam-fixit stale-assigned (was: stale-assigned) > DynamoDBIOTest is blocking forever > -- > > Key: BEAM-7794 > URL: https://issues.apache.org/jira/browse/BEAM-7794 > Project: Beam > Issue Type: Bug > Components: io-java-aws >Affects Versions: 2.15.0 >Reporter: Ismaël Mejía >Assignee: Cam Mach >Priority: P2 > Labels: beam-fixit, stale-assigned > Time Spent: 3.5h > Remaining Estimate: 0h > > It was reported on the mailing list that there is a problem with the test > container in some environments. Until we can reproduce it, it may be a good > idea to add the @Ignore annotation to the test class (probably in the aws2 > module too). -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-6419) [SQL] Jacoco error: Classes in bundle 'beam-sdks-java-extensions-sql' do no match with execution data.
[ https://issues.apache.org/jira/browse/BEAM-6419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-6419: -- Labels: beam-fixit stale-assigned (was: stale-assigned) > [SQL] Jacoco error: Classes in bundle 'beam-sdks-java-extensions-sql' do no > match with execution data. > -- > > Key: BEAM-6419 > URL: https://issues.apache.org/jira/browse/BEAM-6419 > Project: Beam > Issue Type: Bug > Components: dsl-sql >Reporter: Kenneth Knowles >Assignee: Kenneth Knowles >Priority: P2 > Labels: beam-fixit, stale-assigned > > {code} > [ant:jacocoReport] Classes in bundle 'beam-sdks-java-extensions-sql' do no > match with execution data. For report generation the same class files must be > used as at runtime. > [ant:jacocoReport] Execution data for class > org/apache/beam/sdk/extensions/sql/meta/provider/text/TextTable does not > match. > [ant:jacocoReport] Execution data for class > org/apache/beam/sdk/extensions/sql/impl/udf/IsNan does not match. > [ant:jacocoReport] Execution data for class > org/apache/beam/sdk/extensions/sql/impl/rel/BeamUnnestRel$Transform does not > match. > [ant:jacocoReport] Execution data for class > org/apache/beam/sdk/extensions/sql/impl/rel/BeamSetOperatorRelBase$OpType > does not match. > [ant:jacocoReport] Execution data for class > org/apache/beam/sdk/extensions/sql/meta/provider/text/TextTableProvider$RowToCsv > does not match. > [ant:jacocoReport] Execution data for class > org/apache/beam/sdk/extensions/sql/impl/rel/BeamSortRel does not match. > [ant:jacocoReport] Execution data for class > org/apache/beam/sdk/extensions/sql/impl/transform/agg/CovarianceFn does not > match. > [ant:jacocoReport] Execution data for class > org/apache/beam/sdk/extensions/sql/impl/parser/impl/ParseException does not > match. > [ant:jacocoReport] Execution data for class > org/apache/beam/sdk/extensions/sql/impl/rule/BeamMinusRule does not match. 
> [ant:jacocoReport] Execution data for class > org/apache/beam/sdk/extensions/sql/impl/transform/BeamBuiltinAggregations$IntegerAvg > does not match. > [ant:jacocoReport] Execution data for class > org/apache/beam/sdk/extensions/sql/meta/provider/kafka/KafkaTableProvider > does not match. > [ant:jacocoReport] Execution data for class > org/apache/beam/sdk/extensions/sql/impl/rule/BeamUncollectRule does not match. > [ant:jacocoReport] Execution data for class > org/apache/beam/sdk/extensions/sql/impl/rel/BeamJoinRel$1 does not match. > [ant:jacocoReport] Execution data for class > org/apache/beam/sdk/extensions/sql/impl/rel/BeamCalcRel$CalcFn does not match. > [ant:jacocoReport] Execution data for class > org/apache/beam/sdk/extensions/sql/impl/ParseException does not match. > [ant:jacocoReport] Execution data for class > org/apache/beam/sdk/extensions/sql/meta/provider/test/TestTable does not > match. > [ant:jacocoReport] Execution data for class > org/apache/beam/sdk/extensions/sql/meta/provider/UdfUdafProvider does not > match. > [ant:jacocoReport] Execution data for class > org/apache/beam/sdk/extensions/sql/meta/provider/kafka/BeamKafkaCSVTable$CsvRecorderDecoder > does not match. > {code} > ... and so on. > There's some discussion of similar-sounding issues at > https://stackoverflow.com/questions/31720139/jacoco-code-coverage-report-generator-showing-error-classes-in-bundle-code-c > If JaCoCo is looking at the class files, but tests run against the shaded > jar, this would be expected because only byte-for-byte identical class files > will match. -- This message was sent by Atlassian Jira (v8.3.4#803005)
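If the mismatch really is caused by tests exercising the shaded jar while JaCoCo analyzes the original class files, one plausible fix is to point the report task at the same (unshaded) class output that was loaded at runtime. The following is a sketch under that assumption; it uses the stock Gradle `jacocoTestReport` task and `sourceSets` names, not anything taken from Beam's actual build scripts:

```groovy
// build.gradle -- sketch only, assuming tests run against unshaded classes.
// JaCoCo matches execution data to class files by checksum, so the report
// must analyze the exact class files that were loaded at runtime.
jacocoTestReport {
    classDirectories.setFrom(files(sourceSets.main.output.classesDirs))
    sourceDirectories.setFrom(files(sourceSets.main.allSource.srcDirs))
}
```

Conversely, if the tests must run against the shaded jar, the report would need execution data recorded against those relocated classes; mixing the two is exactly the byte-for-byte mismatch described above.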
[jira] [Assigned] (BEAM-1620) Add streaming Dataflow ValidatesRunner coverage
[ https://issues.apache.org/jira/browse/BEAM-1620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles reassigned BEAM-1620: - Assignee: Kenneth Knowles > Add streaming Dataflow ValidatesRunner coverage > --- > > Key: BEAM-1620 > URL: https://issues.apache.org/jira/browse/BEAM-1620 > Project: Beam > Issue Type: Test > Components: runner-dataflow, testing >Reporter: Kenneth Knowles >Assignee: Kenneth Knowles >Priority: P2 > Labels: beam-fixit > > Currently, the runner validation test suite is not run on Dataflow in > streaming mode. In fact, it should be able to run - all the functionality is > in place. I think this is just a matter of maven + Jenkins + making sure not > to leak a bunch of streaming jobs. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-5926) Expand BigTableReadIT coverage.
[ https://issues.apache.org/jira/browse/BEAM-5926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-5926: -- Labels: beam-fixit stale-P2 (was: stale-P2) > Expand BigTableReadIT coverage. > --- > > Key: BEAM-5926 > URL: https://issues.apache.org/jira/browse/BEAM-5926 > Project: Beam > Issue Type: Bug > Components: testing >Reporter: Jason Kuster >Priority: P2 > Labels: beam-fixit, stale-P2 > > We should add to > [https://github.com/apache/beam/blob/master/sdks/java/io/google-cloud-platform/src/test/java/org/apache/beam/sdk/io/gcp/bigtable/BigtableReadIT.java] > a long-values variant and a 100M read variant to ensure we have appropriate coverage > of this IO. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-2313) Continuously execute CassandraIOIT
[ https://issues.apache.org/jira/browse/BEAM-2313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-2313: -- Labels: beam-fixit (was: ) > Continuously execute CassandraIOIT > -- > > Key: BEAM-2313 > URL: https://issues.apache.org/jira/browse/BEAM-2313 > Project: Beam > Issue Type: Task > Components: io-java-cassandra >Reporter: Jean-Baptiste Onofré >Priority: P2 > Labels: beam-fixit > > It would be great to establish continuously running IT coverage for > CassandraIO. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-1620) Add streaming Dataflow ValidatesRunner coverage
[ https://issues.apache.org/jira/browse/BEAM-1620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-1620: -- Labels: beam-fixit (was: ) > Add streaming Dataflow ValidatesRunner coverage > --- > > Key: BEAM-1620 > URL: https://issues.apache.org/jira/browse/BEAM-1620 > Project: Beam > Issue Type: Test > Components: runner-dataflow, testing >Reporter: Kenneth Knowles >Priority: P2 > Labels: beam-fixit > > Currently, the runner validation test suite is not run on Dataflow in > streaming mode. In fact, it should be able to run - all the functionality is > in place. I think this is just a matter of maven + Jenkins + making sure not > to leak a bunch of streaming jobs. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-1681) Add Unit Tests for fixes in BEAM-1649
[ https://issues.apache.org/jira/browse/BEAM-1681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-1681: -- Labels: beam-fixit (was: ) > Add Unit Tests for fixes in BEAM-1649 > - > > Key: BEAM-1681 > URL: https://issues.apache.org/jira/browse/BEAM-1681 > Project: Beam > Issue Type: Sub-task > Components: sdk-py-core >Reporter: Tibor Kiss >Assignee: Tibor Kiss >Priority: P3 > Labels: beam-fixit > > BEAM-1649 was delivered without UTs included. > This is a follow-up to add UT coverage for the following functions: > {noformat} > OrderedPositionRangeTracker.stop_position() > ValueStateTag.__repr__() > typehints.decorators._unpack_positional_arg_hints() > WindowedTypeConstraint.type_check() > PipelineOptions.__getattr__() > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
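The functions listed in BEAM-1681 live inside apache_beam, but the shape of the missing tests can be sketched with a self-contained stand-in. `OptionsStandIn` below is a hypothetical class (not Beam's real `PipelineOptions`) that only mirrors the `__getattr__`-based flag lookup the ticket says is untested:

```python
import unittest


class OptionsStandIn(object):
    """Hypothetical stand-in for a PipelineOptions-style class whose
    unknown attributes are resolved from a backing dict via __getattr__.
    Not Beam's real API; it only mirrors the untested behaviour."""

    def __init__(self, **flags):
        self._flags = flags

    def __getattr__(self, name):
        # Invoked only when normal attribute lookup fails.
        try:
            return self._flags[name]
        except KeyError:
            raise AttributeError(name)


class OptionsStandInTest(unittest.TestCase):
    """The kind of direct unit test the ticket asks for."""

    def test_known_flag_is_resolved(self):
        opts = OptionsStandIn(runner='DirectRunner')
        self.assertEqual(opts.runner, 'DirectRunner')

    def test_unknown_flag_raises(self):
        with self.assertRaises(AttributeError):
            OptionsStandIn().no_such_flag
```

Run with `python -m unittest <module>`; tests against the real Beam classes would follow the same pattern.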
[jira] [Updated] (BEAM-1683) Add unit tests for counters.py
[ https://issues.apache.org/jira/browse/BEAM-1683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-1683: -- Labels: beam-fixit (was: ) > Add unit tests for counters.py > -- > > Key: BEAM-1683 > URL: https://issues.apache.org/jira/browse/BEAM-1683 > Project: Beam > Issue Type: Sub-task > Components: sdk-py-core >Reporter: Tibor Kiss >Assignee: Rahul Sabbineni >Priority: P3 > Labels: beam-fixit > > Python-SDK's {{apache_beam/utils/counters.py}} has no associated unit > tests and only low (indirect) test coverage. > Create the respective tests to ensure code quality and increase test coverage. -- This message was sent by Atlassian Jira (v8.3.4#803005)
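A direct test for a counter is small and cheap to write. `SumCounter` below is a hypothetical stand-in for the kind of accumulator found in `apache_beam/utils/counters.py` (the names are illustrative, not Beam's API), shown with the style of unit test BEAM-1683 requests:

```python
import unittest


class SumCounter(object):
    """Hypothetical stand-in for an accumulator like those in
    apache_beam/utils/counters.py; names are illustrative only."""

    def __init__(self, name):
        self.name = name
        self.value = 0

    def update(self, delta):
        self.value += delta


class SumCounterTest(unittest.TestCase):
    """Direct (rather than indirect) coverage of the counter itself."""

    def test_updates_accumulate(self):
        counter = SumCounter('bytes_read')
        counter.update(10)
        counter.update(32)
        self.assertEqual(counter.value, 42)
```

Tests like this give counters.py its own coverage instead of relying on whatever the pipeline-level tests happen to touch.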
[jira] [Updated] (BEAM-1685) Measure and report code coverage in Python-SDK's unit tests
[ https://issues.apache.org/jira/browse/BEAM-1685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-1685: -- Labels: beam-fixit (was: ) > Measure and report code coverage in Python-SDK's unit tests > --- > > Key: BEAM-1685 > URL: https://issues.apache.org/jira/browse/BEAM-1685 > Project: Beam > Issue Type: Sub-task > Components: sdk-py-core >Reporter: Tibor Kiss >Assignee: Tibor Kiss >Priority: P3 > Labels: beam-fixit > > During the execution of the Python UTs, test coverage should be measured. > The results should be shown on screen and posted to the coveralls.io page. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-177) Integrate code coverage to build and review process
[ https://issues.apache.org/jira/browse/BEAM-177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kenneth Knowles updated BEAM-177: - Labels: beam-fixit (was: ) > Integrate code coverage to build and review process > --- > > Key: BEAM-177 > URL: https://issues.apache.org/jira/browse/BEAM-177 > Project: Beam > Issue Type: Improvement > Components: sdk-java-core >Reporter: Kenneth Knowles >Priority: P2 > Labels: beam-fixit > > We cannot use codecov, but we can use coveralls. We have the maven plugin > included in the pom and need to invoke it appropriately in our various > builds, and disseminate knowledge about browser extensions to get it into the > pull request UI. -- This message was sent by Atlassian Jira (v8.3.4#803005)