hi folks!

FYI, the Hadoop folks have identified that some recent changes to HDFS
have been eating all the memory on hosts where their tests run. That
might be the culprit behind our recent spate of odd-looking surefire
failures.

Yetus is working on adding some guardrails to keep Hadoop from doing
this in the future:

https://issues.apache.org/jira/browse/YETUS-561

Relatedly, I have a temporary measure in place on our precommit runs
that grabs a bunch of machine information (cpus, memory, disks, etc.).
If you look at the build artifacts, you'll find it in a directory named
'machine' under 'patchprocess'.
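
For anyone curious, here's a rough Python sketch of the kind of dump I
mean. To be clear, this is just an illustration of the idea, not the
actual script on our builds; the filenames and output layout here are
assumptions on my part.

    import os
    import shutil

    # Illustration only: dump basic machine info into the
    # patchprocess/machine directory mentioned above.
    out_dir = os.path.join("patchprocess", "machine")
    os.makedirs(out_dir, exist_ok=True)

    # Logical CPU count visible to this process.
    with open(os.path.join(out_dir, "cpus.txt"), "w") as f:
        f.write("logical cpus: %s\n" % os.cpu_count())

    # Memory details, straight from /proc/meminfo (Linux-specific).
    with open("/proc/meminfo") as src:
        meminfo = src.read()
    with open(os.path.join(out_dir, "memory.txt"), "w") as f:
        f.write(meminfo)

    # Disk usage for the filesystem holding the working directory.
    total, used, free = shutil.disk_usage(".")
    with open(os.path.join(out_dir, "disks.txt"), "w") as f:
        f.write("total=%d used=%d free=%d\n" % (total, used, free))

In practice you'd more likely shell out to tools like lscpu, free, and
df, but the stdlib version keeps the example self-contained.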
