All seems to have settled now. Hadoopqa is running 'normally' again with
yetus 0.7.0 and some new configs (Thanks to Allen Wittenauer for the
help/input...). That said, we need to work on curbing the resources used during
test runs
St.Ack
On Wed, Jan 31, 2018 at 9:01 AM, Stack
I just set hadoopqa to be 0.7.0 again with an upped proclimit to see if
this fixes our OOME failures. HadoopQA builds numbered 11295 and later
will have this change.
Thanks
S
On Wed, Jan 31, 2018 at 6:46 AM, Stack wrote:
> Note that I reverted our yetus version last night. It
Note that I reverted our yetus version last night. It discombobulated our
builds (OOMEs). Meantime, you'll have to do the patch naming trick for
another day or so. Our test runs seem to use an ungodly number of file
descriptors. Stay tuned.
S
On Mon, Jan 29, 2018 at 10:56 PM, Stack
Our brothers and sisters over in yetus-land made a release that deals w/
the changed JIRA behavior regarding the ordering of attached patches. No need
to delete all but the intended patch going forward, nor gymnastics with
prefixes when naming. It seems to be working properly. The one-liner
change that
Thanks Andrew. I disabled the job. Use the nightly going forward. The jdk7
builds seem to run fine. The jdk8 build has some timeouts going on. Need to dig
in. You can see here:
https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-1.4/
Thanks,
M
On Fri, Dec 8, 2017 at 11:29 AM,
Ok with me, Stack. Thanks for asking.
On Thu, Nov 30, 2017 at 5:33 PM, Stack wrote:
> On the move over to nightly test runs:
>
> 1.2 nightly had a successful build last night after the branch-1
> stabilization effort (HBASE-19204) and fixing a few unit test failures. See
>
On the move over to nightly test runs:
1.2 nightly had a successful build last night after the branch-1
stabilization effort (HBASE-19204) and fixing a few unit test failures. See
build 150
https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-1.2/
It then failed, 151,
On Tue, Nov 7, 2017 at 6:10 AM, Sean Busbey wrote:
> > Should I be able to see the machine dir when I look at nightlies output?
> > (Was trying to see what else is running).
>
> Ah. we don't have the same machine sampling on nightly as we do in
> precommit. I am 80% on a patch
okay, what gets saved from test runs is controlled by a parameter to
the jenkins job called "ARCHIVE_PATTERN_LIST".
That gets used by Apache Yetus' archival feature[1], which is
essentially a comma-separated set of file name regexes to use with the
find command.
The default in the job is
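For illustration only (the pattern list and file names below are made up, not the job's actual default), splitting such a comma-separated list and matching it against a tree of test output works roughly like one `find -name <pattern>` per entry:

```python
import fnmatch
import os

def archive_matches(root, pattern_list):
    """Collect files under `root` whose basenames match any pattern in a
    comma-separated list, roughly one `find -name <pattern>` per entry."""
    patterns = [p.strip() for p in pattern_list.split(",") if p.strip()]
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if any(fnmatch.fnmatch(name, p) for p in patterns):
                hits.append(os.path.join(dirpath, name))
    return sorted(hits)
```

So a list like "*.dump, *.dumpstream" would sweep up the surefire dump files mentioned elsewhere in this thread.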
> Should I be able to see the machine dir when I look at nightlies output?
> (Was trying to see what else is running).
Ah, we don't have the same machine sampling on nightly as we do in
precommit. I am 80% on a patch for HBASE-19189 (run test ad-hoc
repeatedly) that includes pulling that
I see this in the 1.2 nightly just when it gives up the ghost
[WARNING] Corrupted STDOUT by directly writing to native stream in
forked JVM 2. See FAQ web page and the dump file
/testptch/hbase/hbase-server/target/surefire-reports/2017-11-06T20-11-30_219-jvmRun2.dumpstream
.. but the pointed
On Mon, Nov 6, 2017 at 8:35 AM, Sean Busbey wrote:
> Given that all of the old post-commit tests have been posting that
> they're failing to JIRAs for what looks like a month, is there any
> reason not to switch to the new tests that also say they're failing?
>
>
No
Given that all of the old post-commit tests have been posting that
they're failing to JIRAs for what looks like a month, is there any
reason not to switch to the new tests that also say they're failing?
The reason HBASE-18467 has been sitting on hold this whole time has
been because the new
It looks like old tests branch-1.2 and branch-1.3 are failing with
some maven enforcer problem that we thought we had fixed a few times
before. It's probably fixable by changing the version of maven they
use, but I'd much rather any test effort go into the last mile of
getting our new nightly
Our builds seem pretty sick up on builds.apache.org even after the miracle
work by Allen W containing errant hadoop processes. Looking at 1.2 and 1.3,
we don't even get off the ground. Anyone been taking a look?
When I try to run the branch-1.2 and branch-1.3 unit tests locally, about
ten tests
Loads of tests timing out in test runs -- then they all pass. Anyone have
any input? I'm trying to take a look as a background task...
S
On Tue, Jul 11, 2017 at 7:05 PM, Stack wrote:
> Thanks Appy.
>
> Any one looking at the 'ERROR ExecutionException Java heap space...'
> errors
Thanks Appy.
Any one looking at the 'ERROR ExecutionException Java heap space...' errors
on patch builds or failed forking? Seems common enough. Here are complaints
that remote JVM went away:
Fixed 'trends' in the flaky dashboard. Since I changed the test names in the
last fix, the dots in the names were messing up the CSS selectors. :)
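For context, a minimal sketch (not the dashboard's actual code) of why dots are a problem: in a CSS selector, `#a.b` means the element with id `a` and class `b`, so an id built from a dotted test name has to have its dots escaped:

```python
def css_id_selector(test_name):
    """Build a CSS id selector for an element whose id is a dotted test
    name; unescaped, '#org.apache.Foo' would parse as id 'org' plus
    classes 'apache' and 'Foo'."""
    return "#" + test_name.replace(".", "\\.")
```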
On Mon, Jul 10, 2017 at 11:34 AM, Apekshit Sharma wrote:
> Quick update on flaky dashboard:
> Flaky dashboard wasn't working earlier
Quick update on flaky dashboard:
Flaky dashboard wasn't working earlier because our trunk build was broken.
After trunk was fixed, the format of log lines in consoleText was not the
same, so findHangingTests.py was not able to parse it correctly for
broken/hanging/timeout tests. That's been fixed
On Thu, Jul 6, 2017 at 3:45 PM, Sean Busbey wrote:
> that sounds like our project structure is broken. Please make sure there's
> a jira that tracks it and I'll take a look later.
>
>
Filed HBASE-18331 for now.
I can take a look too later.
St.Ack
> On Thu, Jul 6, 2017 at
that sounds like our project structure is broken. Please make sure there's
a jira that tracks it and I'll take a look later.
On Thu, Jul 6, 2017 at 6:15 PM, Stack wrote:
> I tried publishing hbase-3.0.0-SNAPSHOT... so hbase-checkstyle was up in
> repo (presuming it relied on
I tried publishing hbase-3.0.0-SNAPSHOT... so hbase-checkstyle was up in
repo (presuming it relied on an aged-out snapshot). Seems to have 'fixed'
it for now
St.Ack
On Thu, Jul 6, 2017 at 12:50 PM, Stack wrote:
> The 3.0.0-SNAPSHOT looks suspicious ... the hbase
On Thu, Jul 6, 2017 at 12:48 PM, Stack wrote:
> Checkstyle is currently broke on our builds... looking.
> St.Ack
>
>
Works if I run it locally (of course)
St.Ack
>
>
> [ERROR] Failed to execute goal
> org.apache.maven.plugins:maven-checkstyle-plugin:2.17:checkstyle
>
Checkstyle is currently broke on our builds... looking.
St.Ack
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-checkstyle-plugin:2.17:checkstyle
(default-cli) on project hbase: Execution default-cli of goal
org.apache.maven.plugins:maven-checkstyle-plugin:2.17:checkstyle
failed:
jacoco was added ages ago. I'd guess that something changed on the machines
we use to cause it to stop working.
On Thu, Jun 29, 2017 at 12:02 PM, Stack wrote:
> On Wed, Jun 28, 2017 at 8:43 AM, Josh Elser wrote:
>
> >
> >
> > On 6/27/17 7:20 PM, Stack
On Wed, Jun 28, 2017 at 8:43 AM, Josh Elser wrote:
>
>
> On 6/27/17 7:20 PM, Stack wrote:
>
>> * test-patch's whitespace plugin can be configured to ignore some files (but
>>> I
>>> can't think of any we'd care to so whitelist)
>>>
>>> Generated files.
>>
>
> Oh my goodness, yes,
On 6/27/17 7:20 PM, Stack wrote:
* test-patch's whitespace plugin can be configured to ignore some files (but I
can't think of any we'd care to so whitelist)
Generated files.
Oh my goodness, yes, please. This has been such a pain in the rear for
me as I've been rebasing space quota patches.
On Tue, Jun 27, 2017 at 10:28 AM, Sean Busbey wrote:
> On Tue, Jun 27, 2017 at 11:38 AM, Stack wrote:
>
> > On Tue, Jun 27, 2017 at 9:24 AM, Sean Busbey wrote:
> >
> > > FYI, I've updated the precommit build to use Yetus 0.4.0 (which is
On Tue, Jun 27, 2017 at 11:38 AM, Stack wrote:
> On Tue, Jun 27, 2017 at 9:24 AM, Sean Busbey wrote:
>
> > FYI, I've updated the precommit build to use Yetus 0.4.0 (which is the
> > current release).
> >
> > Shouldn't impact much. if things look off ping me.
On Tue, Jun 27, 2017 at 9:24 AM, Sean Busbey wrote:
> FYI, I've updated the precommit build to use Yetus 0.4.0 (which is the
> current release).
>
> Shouldn't impact much. if things look off ping me.
>
>
Thanks Sean.
What's new in 0.4.0?
S
> On Wed, Mar 1, 2017 at 2:23 PM,
FYI, I've updated the precommit build to use Yetus 0.4.0 (which is the
current release).
Shouldn't impact much. If things look off, ping me.
On Wed, Mar 1, 2017 at 2:23 PM, Mikhail Antonov
wrote:
> Ouch. Thanks Sean!
>
> I'm pretty sure at some point I was debugging 1.3-IT
Ouch. Thanks Sean!
I'm pretty sure at some point I was debugging 1.3-IT job and saw branch-1.3
getting checked out in the logs. Not sure how/when it went sideways though.
Yeah, let's see how it goes.
-Mikhail
On Wed, Mar 1, 2017 at 5:50 AM, Sean Busbey wrote:
> Fun times.
Fun times.
1) Turns out our 1.3-IT jobs have been running against branch-1.2.
Don't know how long, but as long as we have history.
2) I deleted the failing-since-august 1.2-IT job.
3) I renamed the passing 1.3-IT job that runs against branch-1.2 to be
the 1.2-IT job
4) I copied the now renamed
FYI, I updated the 1.2-IT and 1.3-IT jobs today to use Appy's
suggested "custom child workspace" of "${SHORT_COMBINATION}", since
spaces in paths had caused them to fail for a very long time.
On Fri, Oct 14, 2016 at 4:44 PM, Andrew Purtell wrote:
> Thanks Ted, that would be a
Thanks Ted, that would be a nice contribution, thank you.
On Fri, Oct 14, 2016 at 12:07 PM, Apekshit Sharma wrote:
> @Ted, here's the old jira, HBASE-14167. Use that.
>
> On Fri, Oct 14, 2016 at 12:02 PM, Ted Yu wrote:
>
> > I just ran the tests in
The hbase-spark integration tests run (and fail) for me locally whenever I
build master with 'mvn clean install -DskipITs'.
HBaseConnectionCacheSuite:
- all test cases *** FAILED ***
2 did not equal 1 (HBaseConnectionCacheSuite.scala:92)
Saw it but had to ignore/triage to get something else
Do the HBase Spark tests only run during the maven verify command?
We'll need to update our personality to say that that command should
be used for unit tests when in the hbase-spark module. Ugh.
On Thu, Oct 13, 2016 at 7:42 PM, Apekshit Sharma wrote:
> Our patch process isn't
Our patch process isn't running hbase-spark tests. See this for example:
https://builds.apache.org/job/PreCommit-HBASE-Build/3842/
https://builds.apache.org/job/PreCommit-HBASE-Build/3842/artifact/patchprocess/patch-unit-hbase-spark.txt/*view*/
Found it when trying to debug cause of trunk
childCustomWorkspace seems to be just the ticket. Nice find Appy.
St.Ack
On Mon, Sep 19, 2016 at 10:03 AM, Sean Busbey wrote:
> Option 2c looks to be working really well. Thanks for tackling this Appy!
>
> We still have some failures on the master build, but it looks like
>
Option 2c looks to be working really well. Thanks for tackling this Appy!
We still have some failures on the master build, but it looks like
actual problems (or perhaps a flakey). There are several passing
builds.
This should be pretty easy to replicate on the other jobs. I don't see
a downside.
So this all started with the spaces-in-path issue, right? I think it has
gobbled up a lot of time for a lot of people.
Let's discuss our options and try to fix it for good. Here are the options I
can think of, and my opinion about them.
1. Not use matrix build
Temporary fix. Not preferred since
The profile (or define) skipSparkTests looks like it will skip the spark tests.
Setting skipIntegrationTests to true will skip them too.
S
On Fri, Sep 16, 2016 at 1:40 PM, Dima Spivak wrote:
> Doesn't seem we need a matrix project for master anymore since we're just
> doing JDK 8
Doesn't seem we need a matrix project for master anymore since we're just
doing JDK 8 now, right? Also, it looks like the hbase-spark
integration-test phase is what's tripping up the build. Why not just
temporarily disable that to unblock testing?
On Friday, September 16, 2016, Apekshit Sharma
So the issue is, we set JAVA_HOME to jdk8 based on the matrix param and using
the tool environment. Since mvn uses the env variable, it compiles with jdk 8.
But I suspect that scalatest isn't using the env variable; instead it might
be directly using the 'java' cmd, which can be jdk 7 or 8, and can vary by
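A hedged sketch of one way to check for that kind of mismatch (the helper and the paths are hypothetical, not anything the build actually runs): compare the `java` that PATH resolves to against the one under JAVA_HOME:

```python
import os

def java_mismatch(java_home, path_java):
    """True if the `java` binary found on PATH is not the one that
    JAVA_HOME points at (after resolving symlinks)."""
    expected = os.path.join(java_home, "bin", "java")
    return os.path.realpath(path_java) != os.path.realpath(expected)
```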
I am not sure if this will help, but it looks like it is because of a version
mismatch; that is, it is expecting JDK 1.7 and we are compiling with JDK 1.8.
That means there is some library which has to be compiled with jdk8 or
needs to be updated to a jdk8-compatible version.
--
With Regards,
Emm, can it be because scalatest tries to use a different java than the one
specified by JAVA_HOME (which is used to compile)?
On Thu, Sep 15, 2016 at 2:10 PM, Apekshit Sharma wrote:
> And everything is back to red.
> Because something is plaguing our builds again. :(
>
> If
52.0 is Java 8. Sounds like the code was compiled to target a later version
than is being used at runtime. Are we accidentally using JDK 7 to run
dependencies built and deployed with JDK 8?
-Dima
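Indeed: class-file major version 52 is Java 8 and 51 is Java 7, so an "unsupported class file version 52.0" style error means Java 8 bytecode hit a Java 7 runtime. As a quick hypothetical check (not part of the build), the major version sits right in the .class file header:

```python
import struct

def class_file_major(data):
    """Return the class-file major version (51 = Java 7, 52 = Java 8)
    from the first 8 bytes of a .class file: magic, minor, major."""
    magic, _minor, major = struct.unpack(">IHH", data[:8])
    if magic != 0xCAFEBABE:
        raise ValueError("not a class file")
    return major
```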
On Thu, Sep 15, 2016 at 2:10 PM, Apekshit Sharma wrote:
> And everything is
And everything is back to red.
Because something is plaguing our builds again. :(
If anyone knows what the problem is in this case, please reply on this thread;
otherwise I'll try to fix it later sometime today.
[INFO] --- scalatest-maven-plugin:1.0:test (integration-test) @ hbase-spark ---
Great work indeed!
Agreed, occasional failed runs may not be that bad, but fairly regular
failed runs ruin the idea of CI. Especially for released or otherwise
supposedly stable branches.
-Mikhail
On Mon, Sep 12, 2016 at 4:53 PM, Sean Busbey wrote:
> awesome work Appy!
>
awesome work Appy!
That's certainly good news to hear.
On Mon, Sep 12, 2016 at 2:14 PM, Apekshit Sharma wrote:
> On a separate note:
> Trunk had 8 green runs in last 3 days! (
> https://builds.apache.org/job/HBase-Trunk_matrix/)
> This was due to fixing just the mass failures
On Mon, Sep 12, 2016 at 2:14 PM, Apekshit Sharma wrote:
> On a separate note:
> Trunk had 8 green runs in last 3 days! (
> https://builds.apache.org/job/HBase-Trunk_matrix/)
>
Woah!
> This was due to fixing just the mass failures on trunk and no change in
> flaky infra.
On a separate note:
Trunk had 8 green runs in last 3 days! (
https://builds.apache.org/job/HBase-Trunk_matrix/)
This was due to fixing just the mass failures on trunk and no change in
flaky infra. Which made me conclude two things:
1. Flaky infra works.
2. It relies heavily on the post-commit
@Sean, Mikhail: I found the alternate solution. Using user defined axis,
tool environment and env variable injection.
See latest diff to https://builds.apache.org/job/HBase-Trunk_matrix/ job
for reference.
On Tue, Aug 30, 2016 at 7:39 PM, Mikhail Antonov
wrote:
> FYI, I
FYI, I did the same for branch-1.3 builds. I've disabled hbase-1.3 and
hbase-1.3-IT jobs and instead created
https://builds.apache.org/job/HBase-1.3-JDK8 and
https://builds.apache.org/job/HBase-1.3-JDK7
This should work for now until we figure out how to move forward.
Thanks,
Mikhail
On Wed,
/me smacks forehead
these replacement jobs, of course, also have special characters in
their names which then show up in the working path.
renaming them to skip spaces and parens.
On Wed, Aug 17, 2016 at 1:34 PM, Sean Busbey wrote:
> FYI, it looks like essentially our
FYI,
I also disabled the following jobs that are failing:
* HBase 1.2 IT
* HBase-0.94
* HBase-0.94-JDK7
* HBase-0.94-on-Hadoop-2
* HBase-0.94-security
* HBase-0.94.28
The first one, Stack has graciously volunteered to run locally for the RC.
The rest are slated for removal in HBASE-16380
On
FYI, it looks like essentially our entire CI suite is red, probably due to
parts of our codebase not tolerating spaces or other special characters in
the working directory.
I've made a stop-gap non-multi-configuration set of jobs for running unit
tests for the 1.2 branch against JDK 7 and JDK 8:
Ugh.
I sent a reply to Gav on builds@ about maybe getting names that don't
have spaces in them:
https://lists.apache.org/thread.html/8ac03dc62f9d6862d4f3d5eb37119c9c73b4059aaa3ebba52fc63bb6@%3Cbuilds.apache.org%3E
In the meantime, is this an issue we need to file with Hadoop or
something we need
There are a bunch of builds that have most of the tests failing.
Example:
https://builds.apache.org/job/HBase-Trunk_matrix/1392/jdk=JDK%201.7%20(latest),label=yahoo-not-h2/testReport/junit/org.apache.hadoop.hbase/TestLocalHBaseCluster/testLocalHBaseCluster/
from the stack trace it looks like the
Good on you Sean.
S
On Mon, Aug 8, 2016 at 9:43 PM, Sean Busbey wrote:
> I updated all of our jobs to use the updated JDK versions from infra.
> These have spaces in the names, and those names end up in our
> workspace path, so try to keep an eye out.
>
>
>
> On Mon, Aug 8,
I updated all of our jobs to use the updated JDK versions from infra.
These have spaces in the names, and those names end up in our
workspace path, so try to keep an eye out.
On Mon, Aug 8, 2016 at 10:42 AM, Sean Busbey wrote:
> running in docker is the default now.
running in docker is the default now. Relying on the default docker
image that comes with Yetus means that our protoc checks are
failing[1].
[1]: https://issues.apache.org/jira/browse/HBASE-16373
On Sat, Aug 6, 2016 at 5:03 PM, Sean Busbey wrote:
> Hi folks!
>
> this morning
Hi folks!
this morning I merged the patch that updates us to Yetus 0.3.0[1] and updated
the precommit job appropriately. I also changed it to use one of the Java
versions post the puppet changes to asf build.
The last three builds look normal (#2975 - #2977). I'm gonna try running things
in
FYI, today our precommit jobs started failing because our chosen jdk
(1.7.0.79) disappeared (mentioned on HBASE-16032).
Initially we were doing something wrong, namely directly referencing
the jenkins build tools area without telling jenkins to give us an env
variable that stated where the jdk is
Thanks Sean.
St.Ack
On Wed, Mar 16, 2016 at 12:04 PM, Sean Busbey wrote:
> FYI, I updated the precommit job today to specify that only compile time
> checks should be done against jdks other than the primary jdk7 instance.
>
> On Mon, Mar 7, 2016 at 8:43 PM, Sean Busbey
https://issues.apache.org/jira/browse/YETUS-334
2016-03-15 21:48 GMT+08:00 Sean Busbey :
> No, that definitely looks like a bug. Could you please open an issue on the
> YETUS jira with a link to the relevant builds and HBASE jiras?
>
> On Tue, Mar 15, 2016 at 5:44 AM, Phil
No, that definitely looks like a bug. Could you please open an issue on the
YETUS jira with a link to the relevant builds and HBASE jiras?
On Tue, Mar 15, 2016 at 5:44 AM, Phil Yang wrote:
> Hi all,
>
> Recently pre-commit builds seem to run some commands twice. For example,
Hi all,
Recently pre-commit builds seem to run some commands twice. For example, in
console of https://builds.apache.org/job/PreCommit-HBASE-Build/975/console
or https://builds.apache.org/job/PreCommit-HBASE-Build/978/console , we run
"Patch findbugs detection", "Patch javadoc verification",
https://issues.apache.org/jira/browse/HBASE-15462
Thanks Sean.
Looks like a version parse error?
St.Ack
On Mon, Mar 14, 2016 at 12:55 PM, Sean Busbey wrote:
> HBASE please, I'll refile to INFRA or wherever if I can figure out the
> source.
>
> On Mon, Mar 14, 2016 at
HBASE please, I'll refile to INFRA or wherever if I can figure out the
source.
On Mon, Mar 14, 2016 at 12:44 PM, Stack wrote:
> On Mon, Mar 14, 2016 at 12:23 PM, Sean Busbey wrote:
>
> > is there a jira I can track for the docker failures?
> >
> >
> No.
On Mon, Mar 14, 2016 at 12:23 PM, Sean Busbey wrote:
> is there a jira I can track for the docker failures?
>
>
No. All recent hadoopqas fail. Want an INFRA or HBASE issue?
Thanks,
St.Ack
> On Mon, Mar 14, 2016 at 11:08 AM, Stack wrote:
>
> > Thanks for
is there a jira I can track for the docker failures?
On Mon, Mar 14, 2016 at 11:08 AM, Stack wrote:
> Thanks for making the job configuration all nice and tidy BTW Sean.
>
> I unchecked RUN_IN_DOCKER just now to try and get us over current bout of
> docker build failures.
>
>
Thanks for making the job configuration all nice and tidy BTW Sean.
I unchecked RUN_IN_DOCKER just now to try and get us over current bout of
docker build failures.
St.Ack
On Mon, Mar 7, 2016 at 10:27 AM, Sean Busbey wrote:
> FYI, I've just updated our precommit jobs to
On Mon, Mar 7, 2016 at 7:42 PM, Mikhail Antonov
wrote:
> Cutting 1.5 hours off pre-commit build's time would be great. Would
> post-commit builds also only run on jdk7 or both?
>
>
The post-commit builds are matrix builds that do the different JDKs in
parallel. The JDKs
Cutting 1.5 hours off pre-commit build's time would be great. Would
post-commit builds also only run on jdk7 or both?
Mikhail
On Mon, Mar 7, 2016 at 7:37 PM, Ted Yu wrote:
> Running against jdk 7 only is fine by me.
>
> > On Mar 7, 2016, at 6:43 PM, Sean Busbey
Running against jdk 7 only is fine by me.
> On Mar 7, 2016, at 6:43 PM, Sean Busbey wrote:
>
> I tested things out, and while YETUS-297[1] is present the default runs all
> plugins that can do multiple jdks against those available (jdk7 and jdk8 in
> our case).
>
> We can
I tested things out, and while YETUS-297[1] is present the default runs all
plugins that can do multiple jdks against those available (jdk7 and jdk8 in
our case).
We can configure things to only do a single run of unit tests. They'll be
against jdk7, since that is our default jdk. That fine by
Hurray!
It looks like YETUS-96 is in there and we are only running one jdk build
now, the default (but testing compile against both). Will keep an eye.
St.Ack
On Mon, Mar 7, 2016 at 10:27 AM, Sean Busbey wrote:
> FYI, I've just updated our precommit jobs to use the
FYI, I've just updated our precommit jobs to use the 0.2.0 release of Yetus
that came out today.
After keeping an eye out for strangeness today I'll turn docker mode back
on by default tonight.
On Wed, Jan 13, 2016 at 10:14 AM, Sean Busbey wrote:
> FYI, I added a new
we should probably ensure the earlier branch builds also exclude H2.
I'll leave myself a note to look at it this evening. If anyone gets to it
before then, please update here.
On Fri, Jan 22, 2016 at 1:33 PM, Stack wrote:
> Related to the below, I just changed the trunk
Related to the below, I just changed the trunk matrix build job to exclude
H2 from the build roster (with Sean's help); it seems to be responsible for
this failure type -- *Caused by: java.lang.IndexOutOfBoundsException:
Index: 0, Size: 0* -- in particular. Here is recent example:
Thank you Sean (and Andrew)
St.Ack
On Jan 22, 2016 10:12 PM, "Sean Busbey" wrote:
> Andrew in infra made us a label that covers all the hosts save H2.
> I've updated all the nightly builds to use it.
>
> (specifying a label expression as we do in precommit doesn't work on
>
Andrew in infra made us a label that covers all the hosts save H2.
I've updated all the nightly builds to use it.
(specifying a label expression as we do in precommit doesn't work on
matrix builds because the & and | from the expression end up in the
filesystem path)
On Fri, Jan 22, 2016 at 3:08
On Tue, Jan 19, 2016 at 5:46 AM, Sean Busbey wrote:
> We could start forcing a clean repository on every build (though this
> seems heavy handed).
>
> IIRC, we ran into this ages ago and it was one particular dependency.
> Presuming we can track down what that was, we could
On Tue, Jan 19, 2016 at 11:48 AM, Stack wrote:
> On Tue, Jan 19, 2016 at 5:46 AM, Sean Busbey wrote:
>
> > We could start forcing a clean repository on every build (though this
> > seems heavy handed).
> >
> > IIRC, we ran into this ages ago and it was one
We could start forcing a clean repository on every build (though this
seems heavy handed).
IIRC, we ran into this ages ago and it was one particular dependency.
Presuming we can track down what that was, we could add some pre-build
work that verifies a known good copy of that dependency is
Anyone know what the refresh timeout is for the below? We seem to be in a
phase where we have a bad pom and the hadoop test builds are failing. Can
we force refresh of the local repository by doing something like a custom
build run?
Thanks,
St.Ack
P.S. Here is what I am talking about:
We've had a few precommit jobs fail because the cache for our yetus
install was present but not executable.
I've turned on debugging so we can try to figure out what's going on
the next time one happens.
On Fri, Jan 8, 2016 at 7:58 AM, Sean Busbey wrote:
> FYI, I just
Found the problem (not setting the path to commands in the case where
there is a cached install :/ ); have now turned off debug by default.
On Mon, Jan 11, 2016 at 9:18 AM, Sean Busbey wrote:
> We've had a few precommit jobs fail because the cache for our yetus
> install was
FYI, I just pushed HBASE-13525 (switch to Apache Yetus for precommit tests)
and updated our jenkins precommit build to use it.
Jenkins job has some explanation:
https://builds.apache.org/view/PreCommit%20Builds/job/PreCommit-HBASE-Build/
Release note from HBASE-13525 does as well.
The old job
Notice: I'm done messing with test-patch.sh. There is a little zombies line
at the end of the report now that should do a better job of clean reporting
whenever there are sightings.
Also note that all builds last night failed with OOME. Seems to be
infrastructure that is OOMEing, not our tests.
Notice: I'm messing with test-patch.sh reporting trying to improve the
zombie section. I'll likely break things for a while (I already have -- the
hadoopqa report section is curtailed at the moment). Will flag when done.
St.Ack
On Wed, Dec 2, 2015 at 1:22 PM, Stack wrote:
> As part of
As part of my continuing advocacy of builds.apache.org, and the claim that its
results are now worthy of our trust and nurture, here are some highlights
from the last few days of builds:
+ hadoopqa is now finding zombies before the patch is committed.
HBASE-14888 showed "-1 core tests. The patch failed
Did more changes on the zombie detector script and pushed them. Have now moved
the master build over to use this zombie detector script only, since it seems
to basically work (Translation: post-build, we no longer have a script in a
text window up in jenkins for the master build; instead we call out to the
Is this an example Sean,
https://builds.apache.org/job/HBase-Trunk_matrix/442/jdk=latest1.7,label=Hadoop/consoleText
?
Thanks,
St.Ack
On Thu, Nov 5, 2015 at 8:42 AM, Sean Busbey wrote:
> If Maven has trouble grabbing a pom but not an artifact, it'll
> substitute in a
Thanks Sean. That helps.
FYI @here, am messing on trunk builds w/ the post-build script section. I'll
probably mess it up a few times. Be warned.
St.Ack
On Thu, Nov 5, 2015 at 8:42 AM, Sean Busbey wrote:
> If Maven has trouble grabbing a pom but not an artifact, it'll
>
If Maven has trouble grabbing a pom but not an artifact, it'll
substitute in a placeholder pom that doesn't have e.g. license
information. That can result in a local repo that fails this way until
the refresh timeout hits for grabbing a pom again.
On Wed, Nov 4, 2015 at 5:33 PM, Stack
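A rough sketch of how one might hunt for those placeholder poms in a local repository; the missing-`<licenses>` heuristic follows from the description above but is an assumption, not an official Maven marker:

```python
import os

def find_suspect_poms(repo_root):
    """Walk a local Maven repo and flag .pom files with no <licenses>
    section -- a rough heuristic for substituted placeholder poms."""
    suspects = []
    for dirpath, _dirs, files in os.walk(repo_root):
        for name in files:
            if not name.endswith(".pom"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="replace") as f:
                if "<licenses>" not in f.read():
                    suspects.append(path)
    return suspects
```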
Thanks Andrew. What's weird is that it is sporadic. Will keep an eye on it.
St.Ack
On Wed, Nov 4, 2015 at 8:14 AM, Andrew Purtell
wrote:
> > [ERROR] Error invoking method 'get(java.lang.Integer)' in
> java.util.ArrayList at META-INF/LICENSE.vm[line 1627, column 22]
>
> This
Clues on how to figure out the root cause of:
[ERROR] Error invoking method 'get(java.lang.Integer)' in
java.util.ArrayList at META-INF/LICENSE.vm[line 1627, column 22]
comes of the initial findbugs run when it goes into assembly. See here
in Trunk matrix:
> [ERROR] Error invoking method 'get(java.lang.Integer)' in java.util.ArrayList
> at META-INF/LICENSE.vm[line 1627, column 22]
This means a Velocity macro for building LICENSE info about a component has
failed because necessary information is missing in the Maven model. When I have
seen this