Process JobBundleFactory for portable runner

2018-08-06 Thread Thomas Weise
Hi,

Currently the portable Flink runner only works with SDK Docker containers
for execution (DockerJobBundleFactory, besides an in-process (embedded)
factory option for testing [1]). I'm considering adding another
out-of-process JobBundleFactory implementation that directly forks the
processes on the task manager host, eliminating the need for Docker. This
would work reasonably well in environments where the dependencies (in this
case Python) can easily be tied into the host deployment (including within
an application-specific Kubernetes pod).

There has already been some discussion about alternative JobBundleFactory
implementations in [2]. There is also a JIRA to make the bundle factory
pluggable [3], pending the availability of runner-level options.

For a "ProcessBundleFactory", in addition to the Python dependencies, the
environment would also need the Go boot executable [4] (or a substitute)
to perform the harness initialization.

Is anyone else interested in this SDK execution option, or has anyone
already investigated an alternative implementation?

Thanks,
Thomas

[1]
https://github.com/apache/beam/blob/7958a379b0a37a89edc3a6ae4b5bc82fda41fcd6/runners/flink/src/test/java/org/apache/beam/runners/flink/PortableExecutionTest.java#L83

[2]
https://lists.apache.org/thread.html/d6b6fde764796de31996db9bb5f9de3e7aaf0ab29b99d0adb52ac508@%3Cdev.beam.apache.org%3E

[3] https://issues.apache.org/jira/browse/BEAM-4819

[4] https://github.com/apache/beam/blob/master/sdks/python/container/boot.go


Re: [VOTE] Apache Beam, version 2.6.0, release candidate #2

2018-08-06 Thread Boyuan Zhang
+1 non-binding
Verified dataflow items listed in
https://s.apache.org/beam-release-validation

On Sun, Aug 5, 2018 at 6:37 AM Suneel Marthi  wrote:

> +1 non-binding
>
> 1. verified Sigs and Hashes of artifacts
> 2. tested with my sample applications with local Runner
>
> On Sun, Aug 5, 2018 at 12:47 AM, Jean-Baptiste Onofré 
> wrote:
>
>> +1 (binding)
>>
>> Tested with beam-samples, checksum and sig verified.
>>
>> Regards
>> JB
>>
>> On 04/08/2018 01:27, Pablo Estrada wrote:
>> > Hello everyone!
>> >
>> > Extra, extra! The Apache Beam 2.6.0 release candidate #2 is out.
>> >
>> > Please review and vote on the release candidate #2 for the version
>> > 2.6.0, as follows:
>> >
>> > [ ] +1, Approve the release
>> > [ ] -1, Do not approve the release (please provide specific comments)
>> >
>> > The complete staged set of artifacts is available for your review, which
>> > includes:
>> > * JIRA release notes [1],
>> > * the official Apache source release to be deployed to dist.apache.org
>> >  [2], which is signed with the key with
>> > fingerprint 2F1FEDCDF6DD7990422F482F65224E0292DD8A51 [3],
>> > * all artifacts to be deployed to the Maven Central Repository [4],
>> > * source code tag "v2.6.0-RC2" [5],
>> > * website pull request listing the release and publishing the API
>> > reference manual [6]. This did not change from the previous RC.
>> > * Python artifacts are deployed along with the source release to
>> > the dist.apache.org  [2].
>> >
>> > The vote will be open for at least 72 hours. It is adopted by majority
>> > approval, with at least 3 PMC affirmative votes.
>> >
>> > Regards
>> > -Pablo.
>> >
>> > [1]
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12319527=12343392
>> > [2] https://dist.apache.org/repos/dist/dev/beam/2.6.0/
>> > [3] https://dist.apache.org/repos/dist/dev/beam/KEYS
>> > [4]
>> https://repository.apache.org/content/repositories/orgapachebeam-1045/
>> > [5] https://github.com/apache/beam/tree/v2.6.0-RC2
>> > [6] https://github.com/apache/beam-site/pull/518
>> > --
>> > Got feedback? go/pabloem-feedback
>> 
>>
>> --
>> Jean-Baptiste Onofré
>> jbono...@apache.org
>> http://blog.nanthrax.net
>> Talend - http://www.talend.com
>>
>
>


[DISCUSSION] Tracking & Visualizing various metrics of the Beam community

2018-08-06 Thread Huygaa Batsaikhan
Continuing the discussion about improving Beam code review, I am looking
into visualizing various helpful Beam community metrics such as code
velocity, reviewer load, and new contributors' engagement.

So far, I have found DevStats, an open source (on GitHub) dashboarding
tool used by Kubernetes, which seems to provide almost everything we need.
For example, they have dashboards for metrics such as:

   - Time to approve or merge
   - PR time to engagement
   - New and episodic PR contributors
   - PR reviews by contributor
   - Company statistics
It would be really cool if we could try it out for Beam. I don't have much
experience operating open source projects of this kind. From what I
understand, DevStats is developed by the CNCF, and they manage their
incubator projects' dashboards. Since Beam is not part of the CNCF, in
order to use DevStats we would have to fork the project and maintain it
ourselves.

1. What do you think about using DevStats for Beam? Do you know how this
is usually done?
2. If you are not sure about DevStats, do you know any other tool that
could help us track and visualize Beam metrics?

Thanks, Huygaa


Re: Runner agnostic Metrics

2018-08-06 Thread Lukasz Cwik
The JIRA is https://issues.apache.org/jira/browse/BEAM-3310
Here is the link to the discussion thread on the dev ML:
https://lists.apache.org/thread.html/01a80d62f2df6b84bfa41f05e15fda900178f882877c294fed8be91e@%3Cdev.beam.apache.org%3E
And the design doc:
https://s.apache.org/runner_independent_metrics_extraction

Feel free to reach back out with any questions based on what you read above.
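For readers new to the design doc, the MetricsPusher idea is essentially a periodic poll-and-push loop decoupled from the runner; a minimal sketch, where `poll_metrics` and `sink` are placeholders rather than Beam's actual classes:

```python
import threading
import time

def start_metrics_pusher(poll_metrics, sink, period_sec, stop_event):
    """Periodically poll aggregated metrics and push them to a sink.

    Minimal sketch of the poll-and-push loop described in the design doc
    linked above; the function and parameter names are assumptions.
    """
    def loop():
        while not stop_event.is_set():
            sink(poll_metrics())         # push a metrics snapshot to a backend
            stop_event.wait(period_sec)  # sleep, but wake promptly on stop
    thread = threading.Thread(target=loop, daemon=True)
    thread.start()
    return thread

# Demo with an in-memory "sink".
snapshots = []
stop = threading.Event()
t = start_metrics_pusher(lambda: {"elements_read": 42}, snapshots.append, 0.05, stop)
time.sleep(0.2)
stop.set()
t.join()
print("pushed", len(snapshots), "snapshots")
```

Running the pusher on its own thread is what makes the approach independent of attached versus detached execution, which is exactly the gap the FlinkRunner comment points at.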

On Wed, Aug 1, 2018 at 7:45 AM Jozef Vilcek  wrote:

> Hello,
>
> I would like to ask about Beam's runner agnostic metrics I found in 2.5.0
> release notes.
>
> I am considering abandoning the Flink-specific reporter in favour of this
> one, but when looking at the code, it seems the feature is not fully
> integrated. Here:
>
>
> https://github.com/apache/beam/blob/279a05604b83a54e8e5a79e13d8761f94841f326/runners/flink/src/main/java/org/apache/beam/runners/flink/FlinkRunner.java#L127
>
> I noticed comment:
>// no metricsPusher because metrics are not supported in detached mode
>
> Question: is there a source where I can read and learn about the current
> limitations and plans to solve them? Requiring attached mode for metrics
> to work seems a bit fragile to me.
>
> Thanks,
> Jozef
>
>


Re: Parallelizing test runs

2018-08-06 Thread Mikhail Gryzykhin
 I don't see a difference at first glance, and no difference is expected.

We never utilized concurrent jobs originally, because the job took ~1 hour
and was triggered once every 6 hours. At some point I added triggering the
job when a new commit is available, and this started triggering jobs in
parallel for each commit. That is unnecessary overhead for post-commits.
Removing concurrent job runs for post-commits means a single job is
triggered for the multiple commits that accumulated during execution of
the previous job.

I believe you are talking about triggering test cases concurrently within
a single Jenkins job. That was not changed.

--Mikhail

Have feedback ?


On Mon, Aug 6, 2018 at 2:44 PM Lukasz Cwik  wrote:

> How much slower did the post commits become after removing concurrency?
>
> On Thu, Aug 2, 2018 at 2:32 PM Mikhail Gryzykhin 
> wrote:
>
>> I've disabled concurrency for the auto-triggered post-commit job. That
>> should reduce job scheduling considerably.
>>
>> I believe that this change should resolve the quota issue we have seen
>> this time. I'll monitor whether the problem reappears.
>>
>> --Mikhail
>>
>> Have feedback ?
>>
>>
>> On Wed, Aug 1, 2018 at 9:40 AM Pablo Estrada  wrote:
>>
>>> It feels to me like a peak of 60 jobs per minute is pretty high. If I
>>> understand correctly, we run up to 20 dataflow jobs in parallel per test
>>> suite? Or what's the number here?
>>>
>>> It is also true that most of our tests are simple NeedsRunner tests that
>>> test a couple of elements, so the whole pipeline overhead is on startup. This
>>> may be improved by lumping tests together (though might we lose
>>> debuggability?).  Our average number of jobs is, I hope, muuuch smaller
>>> than 60 per minute...
>>>
>>> With all these considerations, I would lean more towards having a retry
>>> policy as the immediate solution.
>>> -P.
>>>
>>> On Wed, Aug 1, 2018 at 9:07 AM Andrew Pilloud 
>>> wrote:
>>>
 I like 1 and 2. How do credentials get into Jenkins? Could we create a
 user per Jenkins host?

 On Tue, Jul 31, 2018 at 4:33 PM Reuven Lax  wrote:

> There was also a proposal to lump multiple tests into a single
> Dataflow job instead of spinning up a separate Dataflow job for each test.
>
> On Tue, Jul 31, 2018 at 4:26 PM Mikhail Gryzykhin 
> wrote:
>
>> I synced with Rafael. Below is summary of discussion.
>>
>> This quota is CreateRequestsPerMinutePerUser and it has 60 requests
>> per user by default.
>>
>> I've created Jira [BEAM-5053](
>> https://issues.apache.org/jira/browse/BEAM-5053) for this.
>>
>> I see the following options we can utilize:
>> 1. Add retry logic. However, this limits us to 1 Dataflow job start
>> per second for the whole of Jenkins. In the long run this can also block
>> one test job if other jobs take all the slots.
>> 2. Utilize different users to spin up Dataflow jobs.
>> 3. Find a way to raise the quota limit on Dataflow. By default the field
>> limits the value to 60 requests per minute.
>> 4. Long-run generic suggestion: limit the number of Dataflow jobs we spin
>> up and move tests to the form of unit or component tests.
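Option 1 (retry logic) could be sketched with exponential backoff plus jitter; in this illustrative Python sketch, `create_job` and the quota-error detection are placeholders, not the real Dataflow client API:

```python
import random
import time

def create_job_with_retry(create_job, max_attempts=5, base_delay_sec=1.0):
    """Retry a Dataflow job-creation call when the per-user quota
    (CreateRequestsPerMinutePerUser) is exhausted.

    Sketch only: create_job and the quota-error check are stand-ins for
    the real client call and its error type.
    """
    for attempt in range(max_attempts):
        try:
            return create_job()
        except RuntimeError as e:  # stand-in for the client's quota error type
            if "RESOURCE_EXHAUSTED" not in str(e) or attempt == max_attempts - 1:
                raise
            # Exponential backoff plus jitter spreads retries across Jenkins jobs.
            time.sleep(base_delay_sec * (2 ** attempt)
                       + random.uniform(0, base_delay_sec))

# Demo: a call that succeeds on the third attempt.
attempts = []
def flaky_create():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("RESOURCE_EXHAUSTED: CreateRequestsPerMinutePerUser")
    return "job-1"

print(create_job_with_retry(flaky_create, base_delay_sec=0.0))
```

The jitter matters here: without it, parallel Jenkins jobs that hit the quota together would retry together and collide again.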
>>
>> Please, fill in any insights or ideas you have on this.
>>
>> Regards,
>> --Mikhail
>>
>> Have feedback ?
>>
>>
>> On Tue, Jul 31, 2018 at 3:55 PM Mikhail Gryzykhin 
>> wrote:
>>
>>> Hi Everyone,
>>>
>>> Seems that we hit quota issue again:
>>> https://builds.apache.org/job/beam_PostCommit_Go_GradleBuild/553/consoleFull
>>>
>>> Can someone share information on how was this triaged last time or
>>> guide me on possible follow-up actions?
>>>
>>> Regards,
>>> --Mikhail
>>>
>>> Have feedback ?
>>>
>>>
>>> On Tue, Jul 3, 2018 at 9:12 PM Rafael Fernandez 
>>> wrote:
>>>
 Summary for all folks following this story -- and many thanks for
 explaining configs to me and pointing me to files and such.

 - Scott made changes to the config and we can now run 3
 ValidatesRunner.Dataflow in parallel (each run is about 2 hours)
 - With the latest quota changes, we peaked at ~70% capacity in
 concurrent Dataflow jobs when running those
 - I've been keeping an eye on quota peaks for all resources today
 and have not seen any worrisome limits overall.
 - Also note there are improvements planned to the
 ValidatesRunner.Dataflow test so various items get batched and the test
 itself runs faster -- I believe it's on Alan's radar

 Cheers,
 r

 On Mon, Jul 2, 2018 at 4:23 PM Rafael Fernandez <
 rfern...@google.com> wrote:

> Done!
>
> On Mon, Jul 2, 2018 at 4:10 PM Scott Wegner 
> wrote:
>
>> Hey Rafael, looks like we 

Re: Parallelizing test runs

2018-08-06 Thread Lukasz Cwik
How much slower did the post commits become after removing concurrency?

On Thu, Aug 2, 2018 at 2:32 PM Mikhail Gryzykhin  wrote:

> I've disabled concurrency for the auto-triggered post-commit job. That
> should reduce job scheduling considerably.
>
> I believe that this change should resolve the quota issue we have seen
> this time. I'll monitor whether the problem reappears.
>
> --Mikhail
>
> Have feedback ?
>
>
> On Wed, Aug 1, 2018 at 9:40 AM Pablo Estrada  wrote:
>
>> It feels to me like a peak of 60 jobs per minute is pretty high. If I
>> understand correctly, we run up to 20 dataflow jobs in parallel per test
>> suite? Or what's the number here?
>>
>> It is also true that most of our tests are simple NeedsRunner tests that
>> test a couple of elements, so the whole pipeline overhead is on startup. This
>> may be improved by lumping tests together (though might we lose
>> debuggability?).  Our average number of jobs is, I hope, muuuch smaller
>> than 60 per minute...
>>
>> With all these considerations, I would lean more towards having a retry
>> policy as the immediate solution.
>> -P.
>>
>> On Wed, Aug 1, 2018 at 9:07 AM Andrew Pilloud 
>> wrote:
>>
>>> I like 1 and 2. How do credentials get into Jenkins? Could we create a
>>> user per Jenkins host?
>>>
>>> On Tue, Jul 31, 2018 at 4:33 PM Reuven Lax  wrote:
>>>
 There was also a proposal to lump multiple tests into a single Dataflow
 job instead of spinning up a separate Dataflow job for each test.

 On Tue, Jul 31, 2018 at 4:26 PM Mikhail Gryzykhin 
 wrote:

> I synced with Rafael. Below is summary of discussion.
>
> This quota is CreateRequestsPerMinutePerUser and it has 60 requests
> per user by default.
>
> I've created Jira [BEAM-5053](
> https://issues.apache.org/jira/browse/BEAM-5053) for this.
>
> I see the following options we can utilize:
> 1. Add retry logic. However, this limits us to 1 Dataflow job start
> per second for the whole of Jenkins. In the long run this can also block
> one test job if other jobs take all the slots.
> 2. Utilize different users to spin up Dataflow jobs.
> 3. Find a way to raise the quota limit on Dataflow. By default the field
> limits the value to 60 requests per minute.
> 4. Long-run generic suggestion: limit the number of Dataflow jobs we spin
> up and move tests to the form of unit or component tests.
>
> Please, fill in any insights or ideas you have on this.
>
> Regards,
> --Mikhail
>
> Have feedback ?
>
>
> On Tue, Jul 31, 2018 at 3:55 PM Mikhail Gryzykhin 
> wrote:
>
>> Hi Everyone,
>>
>> Seems that we hit quota issue again:
>> https://builds.apache.org/job/beam_PostCommit_Go_GradleBuild/553/consoleFull
>>
>> Can someone share information on how was this triaged last time or
>> guide me on possible follow-up actions?
>>
>> Regards,
>> --Mikhail
>>
>> Have feedback ?
>>
>>
>> On Tue, Jul 3, 2018 at 9:12 PM Rafael Fernandez 
>> wrote:
>>
>>> Summary for all folks following this story -- and many thanks for
>>> explaining configs to me and pointing me to files and such.
>>>
>>> - Scott made changes to the config and we can now run 3
>>> ValidatesRunner.Dataflow in parallel (each run is about 2 hours)
>>> - With the latest quota changes, we peaked at ~70% capacity in
>>> concurrent Dataflow jobs when running those
>>> - I've been keeping an eye on quota peaks for all resources today
>>> and have not seen any worrisome limits overall.
>>> - Also note there are improvements planned to the
>>> ValidatesRunner.Dataflow test so various items get batched and the test
>>> itself runs faster -- I believe it's on Alan's radar
>>>
>>> Cheers,
>>> r
>>>
>>> On Mon, Jul 2, 2018 at 4:23 PM Rafael Fernandez 
>>> wrote:
>>>
 Done!

 On Mon, Jul 2, 2018 at 4:10 PM Scott Wegner 
 wrote:

> Hey Rafael, looks like we need more 'INSTANCE_TEMPLATES' quota
> [1]. Can you take a look? I've filed [BEAM-4722]:
> https://issues.apache.org/jira/browse/BEAM-4722
>
> [1]
> https://github.com/apache/beam/pull/5861#issuecomment-401963630
>
> On Mon, Jul 2, 2018 at 11:33 AM Rafael Fernandez <
> rfern...@google.com> wrote:
>
>> OK, Scott just sent https://github.com/apache/beam/pull/5860 .
>> Quotas should not be a problem, if they are, please file a JIRA under
>> gcp-quota.
>>
>> Cheers,
>> r
>>
>> On Mon, Jul 2, 2018 at 10:06 AM Kenneth Knowles 
>> wrote:
>>
>>> One thing that is nice when you do this is to be able to share
>>> your results. Though if all you are sharing is "they passed" then I 
>>> guess

Maintaining Beam Dependencies

2018-08-06 Thread Yifan Zou
Hi all,

As of today, we have started filing JIRA issues automatically to track
Beam dependency updates, based on the weekly report.
Issues will be created by the Beam JIRA Bot for each dependency listed in
the report and assigned to the owners. Java dependency issues are grouped
by their groupId, which lets contributors easily see the upgrade progress
of related packages. A sample issue is here: BEAM-4963

Keeping dependencies healthy is very important for a good user experience
and a smooth development process. Maintaining our dependencies will be a
continuous, incremental effort that requires the whole community. So,
please:

1. Take care of the JIRA dependency upgrade requests assigned to you. If
the version is not a good upgrade for some reason, you can close the issue
as 'won't fix'; this will prevent the JIRA bot from creating duplicate
issues.

2. If you are familiar with some packages that Beam depends on, you are
encouraged to take ownership and help the community look after them. You
can add your JIRA username to the dependency ownership files:
https://github.com/apache/beam/tree/master/ownership

3. Please also update the ownership files when introducing new dependencies
into Beam.

Please refer to the Dependencies Guide for more details.

Thank you very much.

Regards.
Yifan Zou


Re: Community Examples Repository

2018-08-06 Thread David Cavazos
The way I see it, the examples repo need not be versioned. Each example
should specify its dependencies the same way any external application
would. In Python this would be a *requirements.txt* where we can specify
*apache-beam>=2.5.0*, for example. For Java, it can be specified in the
*pom.xml* file. This would show users how to use Beam for their own
applications, as Jesse mentioned.

A high-friction point I've experienced in Java is creating a new pom file,
so having a repository of examples that can be copy-pasted with a minimal,
working pom file would be great.

In terms of packaging, I think we can get away with it just being a
collection of *independent* examples with testing infrastructure. Testing
should be triggerable at the root, but each sample should be tested in its
own isolated environment. Since there are no dependencies between
examples, all examples can be tested in parallel. For every example,
create its virtual environment, run all the tests, and destroy it.
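The per-example isolation described above could be sketched like this (assuming each example is a standalone script; the helper name and layout here are hypothetical):

```python
import pathlib
import subprocess
import sys
import tempfile
import venv

def run_example_isolated(script_path):
    """Run one example script inside a throwaway virtual environment.

    Minimal sketch of the create/run/destroy cycle described above; a
    real harness would also pip install the example's requirements.txt
    into the environment (pass with_pip=True) before running its tests.
    """
    with tempfile.TemporaryDirectory() as tmp:
        env_dir = pathlib.Path(tmp) / "env"
        venv.create(env_dir, with_pip=False)  # fresh, isolated interpreter
        bin_dir = "Scripts" if sys.platform == "win32" else "bin"
        py = env_dir / bin_dir / "python"
        # The TemporaryDirectory context destroys the environment afterwards.
        return subprocess.run([str(py), str(script_path)]).returncode

# Demo: run a trivial "example" in isolation.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("print('example ran')")
print("exit code:", run_example_isolated(f.name))
```

Because each environment is private to one example, the per-example runs can be launched in parallel without dependency conflicts.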

If we want the examples to also live on PyPI, then we would need
versioning (*major.minor.patch*). Since it doesn't really matter, we can
just have the *major.minor* part mirror the current *apache-beam* version
for consistency, and as new examples get added or modified we can bump the
*patch* version indefinitely.

On Fri, Aug 3, 2018 at 3:33 PM Charles Chen  wrote:

> We should separate out the decision for (1) whether examples should be
> packaged separately upon release and (2) where the example will live
> code-wise, i.e. whether we want another repo.  With respect to the first
> item, I think the proposal needs more detail before we can decide here--for
> example, if we separate out the packaging for the examples, we need to
> change our build process and potentially release additional PyPI packages
> and this should be thought about before we can make a decision.
>
> On Fri, Aug 3, 2018 at 3:23 PM Pablo Estrada  wrote:
>
>> Hello all,
>> I see a number of mixed responses. I think it would be helpful to push
>> for a decision by calling for a vote.
>>
>> Also, the proposal has a number of parts, so perhaps we could ask David
>> and other contributors of the proposal to outline a couple alternatives the
>> we can all vote on. (e.g. #1 no examples repo, #2 all examples to new repo,
>> #3 examples repo, but some examples remain in main repo).
>>
>> The outcome may be no change at all, or some change, but at least we'll
>> have a definite decision from the community.
>>
>> Does that sound reasonable?
>> -P.
>>
>> On Thu, Aug 2, 2018 at 11:09 AM Ankur Goenka  wrote:
>>
>>> I like the initiative, but I feel that fragmenting the codebase will make
>>> it harder to discover examples. Having examples in a separate repo makes it
>>> easier to forget that examples should get the same love as the rest of the
>>> codebase.
>>> The other challenge is the tooling and integration, which is harder with
>>> multiple repos.
>>> It makes sense to isolate the examples and make them more obvious.
>>> A sub project of examples as mentioned in the discussion might be
>>> sufficient without having much overhead.
>>>
>>> Thanks,
>>> Ankur
>>>
>>>
>>> On Thu, Aug 2, 2018 at 10:52 AM Kai Jiang  wrote:
>>>
 Agreed with Rui. We could also add more SQL examples (like, different
 IOs ) for everyone to get started with.

 Best,
 Kai

 On 2018/08/02 17:40:32, Rui Wang  wrote:
 > I might miss it: are examples to be moved including those which are
 not
 > under example/? For example there are some BeamSQL examples in
 > org/apache/beam/sdk/extensions/sql/example
 > <
 https://github.com/apache/beam/tree/master/sdks/java/extensions/sql/src/main/java/org/apache/beam/sdk/extensions/sql/example
 >
 > .
 >
 >
 > It's better to keep the BeamSQL examples where they are because the related
 > API might still change.
 >
 > -Rui
 >
 > On Thu, Aug 2, 2018 at 8:58 AM Ahmet Altay  wrote:
 >
 > > Robert, I agree with you in general. However there is also a second
 > > motivation. There is an increase in new PRs that are coming to add
 new
 > > examples. This is great however the core code (including
 distributions) is
 > > not a great place to host such examples. An examples repo would
 help in
 > > this case. It could also serve as an entry point for new
 contributors.
 > >
 > >
 > >
 > > On Thu, Aug 2, 2018 at 12:40 AM, Robert Bradshaw <
 rober...@google.com>
 > > wrote:
 > >
 > >> I have to admit I'm generally -1 on moving examples to a separate
 > >> repository. In particular, I think it would actually inhibit the
 > >> stated goals of increasing visibility and better keeping them up to
 > >> date, and for all the reasons we just migrated the beam-site
 directory
 > >> in. It seems the primary motivation is that it's difficult in Java
 to
 > >> have a portion of the repo that depends on another 

Beam Dependency Check Report (2018-08-06)

2018-08-06 Thread Apache Jenkins Server

High Priority Dependency Updates Of Beam Python SDK:
(dependency: current version -> latest version; current / latest release dates)

  google-cloud-bigquery  0.25.0 -> 1.5.0   (2017-06-26 / 2018-08-06)
  google-cloud-core      0.25.0 -> 0.28.1  (2018-06-07 / 2018-06-07)
  google-cloud-pubsub    0.26.0 -> 0.35.4  (2017-06-26 / 2018-06-08)
  ply                    3.8    -> 3.11    (2018-06-07 / 2018-06-07)

High Priority Dependency Updates Of Beam Java SDK:
(dependency: current version -> latest version; current / latest release dates)

  org.assertj:assertj-core: 2.5.0 -> 3.10.0 (2016-07-03 / 2018-05-11)
  com.google.auto.service:auto-service: 1.0-rc2 -> 1.0-rc4 (2018-06-25 / 2017-12-11)
  biz.aQute:bndlib: 1.43.0 -> 2.0.0.20130123-133441 (2018-06-25 / 2018-06-25)
  org.apache.cassandra:cassandra-all: 3.9 -> 3.11.3 (2016-09-26 / 2018-08-06)
  org.apache.commons:commons-dbcp2: 2.1.1 -> 2.5.0 (2015-08-02 / 2018-07-16)
  de.flapdoodle.embed:de.flapdoodle.embed.mongo: 1.50.1 -> 2.1.1 (2015-12-11 / 2018-06-25)
  de.flapdoodle.embed:de.flapdoodle.embed.process: 1.50.1 -> 2.0.5 (2015-12-11 / 2018-06-25)
  org.elasticsearch:elasticsearch: 5.6.3 -> 6.3.2 (2017-10-06 / 2018-07-30)
  org.elasticsearch:elasticsearch-hadoop: 5.0.0 -> 6.3.2 (2016-10-26 / 2018-07-30)
  org.elasticsearch.client:elasticsearch-rest-client: 5.6.3 -> 6.3.2 (2017-10-06 / 2018-07-30)
  com.alibaba:fastjson: 1.2.12 -> 1.2.49 (2016-05-21 / 2018-08-06)
  org.elasticsearch.test:framework: 5.6.3 -> 6.3.2 (2017-10-06 / 2018-07-30)
  org.freemarker:freemarker: 2.3.25-incubating -> 2.3.28 (2016-06-14 / 2018-03-30)
  net.ltgt.gradle:gradle-apt-plugin: 0.13 -> 0.18 (2017-11-01 / 2018-07-23)
  com.commercehub.gradle.plugin:gradle-avro-plugin: 0.11.0 -> 0.14.2 (2018-01-30 / 2018-06-06)
  gradle.plugin.com.palantir.gradle.docker:gradle-docker: 0.13.0 -> 0.20.1 (2017-04-05 / 2018-07-09)
  com.github.ben-manes:gradle-versions-plugin: 0.17.0 -> 0.20.0 (2018-06-06 / 2018-06-25)
  org.codehaus.groovy:groovy-all: 2.4.13 -> 3.0.0-alpha-3 (2017-11-22 / 2018-06-26)
  com.google.guava:guava: 20.0 -> 26.0-jre (2018-07-16 / 2018-08-06)
  org.apache.hbase:hbase-common: 1.2.6 -> 2.1.0 (2017-05-29 / 2018-07-23)
  org.apache.hbase:hbase-hadoop-compat: 1.2.6 -> 2.1.0 (2017-05-29 / 2018-07-23)
  org.apache.hbase:hbase-hadoop2-compat: 1.2.6 -> 2.1.0 (2017-05-29 / 2018-07-23)
  org.apache.hbase:hbase-server: 1.2.6 -> 2.1.0 (2017-05-29 / 2018-07-23)
  org.apache.hbase:hbase-shaded-client: 1.2.6 -> 2.1.0 (2017-05-29 / 2018-07-23)
  org.apache.hbase:hbase-shaded-server: 1.2.6 -> 2.0.0-alpha2 (2017-05-29 / 2018-05-31)
  org.apache.hive:hive-cli: 2.1.0 -> 3.1.0.3.0.0.0-1634 (2016-06-16 / 2018-07-16)
  org.apache.hive:hive-common: 2.1.0 -> 3.1.0.3.0.0.0-1634 (2016-06-16 / 2018-07-16)
  org.apache.hive:hive-exec: 2.1.0 -> 3.1.0.3.0.0.0-1634 (2016-06-16 / 2018-07-16)
  org.apache.hive.hcatalog:hive-hcatalog-core: 2.1.0 -> 3.1.0.3.0.0.0-1634 (2016-06-16 / 2018-07-16)
  net.java.dev.javacc:javacc: 4.0 -> 7.0.3 (2018-06-08 / 2017-11-06)
  jline:jline: 2.14.6 -> 3.0.0.M1 (2018-03-26 / 2018-06-08)
  net.java.dev.jna:jna: 4.1.0 -> 4.5.2 (2018-06-25 / 2018-07-16)
  com.esotericsoftware.kryo:kryo: 2.21 -> 2.24.0 (2018-06-25 / 2018-06-25)
  org.apache.kudu:kudu-client: 1.4.0 -> 1.7.1 (2018-07-31 / 2018-07-31)
  io.dropwizard.metrics:metrics-core: 3.1.2

Build failed in Jenkins: beam_Release_Gradle_NightlySnapshot #133

2018-08-06 Thread Apache Jenkins Server
See 


--
[...truncated 18.72 MB...]
Task ':beam-sdks-python-container:prepare' is not up-to-date because:
  Task has not declared any outputs despite executing actions.
Use project GOPATH:
:beam-sdks-python-container:prepare (Thread[Task worker for ':' Thread 3,5,main]) completed. Took 0.001 secs.
:beam-sdks-python-container:resolveBuildDependencies (Thread[Task worker for ':' Thread 3,5,main]) started.

> Task :beam-sdks-python-container:resolveBuildDependencies
Build cache key for task ':beam-sdks-python-container:resolveBuildDependencies' is 8b9b155163bbd49020994445c942671f
Caching disabled for task ':beam-sdks-python-container:resolveBuildDependencies': Caching has not been enabled for the task
Task ':beam-sdks-python-container:resolveBuildDependencies' is not up-to-date because:
  No history is available.
Cache not found, skip.
Cache not found, skip.
Resolving ./github.com/apache/beam/sdks/go@
:beam-sdks-python-container:resolveBuildDependencies (Thread[Task worker for ':' Thread 3,5,main]) completed. Took 1.485 secs.
:beam-sdks-python-container:installDependencies (Thread[Task worker for ':' Thread 3,5,main]) started.

> Task :beam-sdks-python-container:installDependencies
Caching disabled for task ':beam-sdks-python-container:installDependencies': Caching has not been enabled for the task
Task ':beam-sdks-python-container:installDependencies' is not up-to-date because:
  Task has not declared any outputs despite executing actions.
Cache not found, skip.
:beam-sdks-python-container:installDependencies (Thread[Task worker for ':' Thread 3,5,main]) completed. Took 0.614 secs.
:beam-sdks-python-container:buildLinuxAmd64 (Thread[Task worker for ':' Thread 3,5,main]) started.

> Task :beam-sdks-python-container:buildLinuxAmd64
Build cache key for task ':beam-sdks-python-container:buildLinuxAmd64' is 96a676bb2799e7ccf5f31a42c61a6b8a
Caching disabled for task ':beam-sdks-python-container:buildLinuxAmd64': Caching has not been enabled for the task
Task ':beam-sdks-python-container:buildLinuxAmd64' is not up-to-date because:
  No history is available.
:beam-sdks-python-container:buildLinuxAmd64 (Thread[Task worker for ':' Thread 3,5,main]) completed. Took 3.035 secs.
:beam-sdks-python-container:build (Thread[Task worker for ':' Thread 3,5,main]) started.

> Task :beam-sdks-python-container:build
Caching disabled for task ':beam-sdks-python-container:build': Caching has not been enabled for the task
Task ':beam-sdks-python-container:build' is not up-to-date because:
  Task has not declared any outputs despite executing actions.
:beam-sdks-python-container:build (Thread[Task worker for ':' Thread 3,5,main]) completed. Took 0.002 secs.
:beam-vendor-sdks-java-extensions-protobuf:jar (Thread[Task worker for ':' Thread 3,5,main]) started.

> Task :beam-vendor-sdks-java-extensions-protobuf:jar
Build cache key for task ':beam-vendor-sdks-java-extensions-protobuf:jar' is 318184f3acfe150a3a38d605826c992a
Caching disabled for task ':beam-vendor-sdks-java-extensions-protobuf:jar': Caching has not been enabled for the task
Task ':beam-vendor-sdks-java-extensions-protobuf:jar' is not up-to-date because:
  No history is available.
:beam-vendor-sdks-java-extensions-protobuf:jar (Thread[Task worker for ':' Thread 3,5,main]) completed. Took 0.007 secs.
:beam-vendor-sdks-java-extensions-protobuf:compileTestJava (Thread[Task worker for ':' Thread 3,5,main]) started.

> Task :beam-vendor-sdks-java-extensions-protobuf:compileTestJava NO-SOURCE
file or directory not found
Skipping task ':beam-vendor-sdks-java-extensions-protobuf:compileTestJava' as it has no source files and no previous output files.
:beam-vendor-sdks-java-extensions-protobuf:compileTestJava (Thread[Task worker for ':' Thread 3,5,main]) completed. Took 0.001 secs.
:beam-vendor-sdks-java-extensions-protobuf:processTestResources (Thread[Task worker for ':' Thread 3,5,main]) started.

> Task :beam-vendor-sdks-java-extensions-protobuf:processTestResources NO-SOURCE
file or directory