On Oct 23, 2015, at 7:17 AM, Allen Wittenauer <a...@altiscale.com> wrote:

> 
> On Oct 22, 2015, at 11:37 PM, Mingliang Liu <m...@hortonworks.com> wrote:
> 
>> Thanks for your reply and investigation, Allen.
>> 
>> Yes, HDFS-Build/13119 did fail because the git plugin was not able to 
>> clean the workspace.
>> 
>> The earlier build HDFS-Build/13114 and the latest builds 
>> (HDFS-Build/13134 <https://builds.apache.org/job/PreCommit-HDFS-Build/13134/> 
>> and 
>> HDFS-Build/13145 <https://builds.apache.org/job/PreCommit-HDFS-Build/13145/>) 
>> are not failing because of the “git clean” command. The “class not found” 
>> exception cannot be reproduced locally (Mac and Linux). The patch basically 
>> moves the HdfsConfiguration class from the hadoop-hdfs module to hadoop-hdfs-client.


        Actually, these are probably (again) the shared Maven cache problem. 
Since this patch (also) makes an already backwards-incompatible change (one 
that isn’t marked as backwards incompatible) even worse, of course the class 
disappears when the DAILY BUILDS, NIGHTLY BUILDS, or ANY OTHER JENKINS JOB 
blows away the cached jar in the middle of the test run.  Plus, since HDFS 
takes FOREVER to run, the chances of this happening are EXTREMELY HIGH.

        Once again, with feeling:

        HADOOP -> Yetus’ test-patch, which supports per-instance maven repos
        HDFS -> trunk’s test-patch, which does not
        MAPREDUCE -> trunk’s test-patch, which does not
        YARN -> trunk’s test-patch, which does not

        So yes, HDFS is still having the exact same problems it has had for the 
past few months. It’s the same code that has been running since July.
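        For the record, the idea behind a per-instance Maven repo is just that 
each build points Maven at its own local repository inside the job workspace, 
so a concurrent job expiring or deleting jars from the shared ~/.m2 cache 
can’t break a test run in flight. A minimal sketch of the concept (this is an 
illustration of the technique, not Yetus’ actual implementation; WORKSPACE and 
EXECUTOR_NUMBER are the standard Jenkins-provided environment variables):

```shell
#!/bin/sh
# Sketch: give this build its own Maven local repository so no other
# Jenkins job can delete cached jars out from under a running test.
WORKSPACE="${WORKSPACE:-/tmp/precommit-demo}"          # set by Jenkins
REPO="${WORKSPACE}/maven-repo-${EXECUTOR_NUMBER:-0}"   # per-executor repo

mkdir -p "${REPO}"

# The build would then invoke Maven with -Dmaven.repo.local, e.g.:
echo "mvn -Dmaven.repo.local=${REPO} clean test"
```

The trade-off is disk space and a cold cache on first run, which is why the 
shared repo existed in the first place; isolation just makes mid-run jar 
deletion impossible.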

        I’m tempted to switch the rest of the builds over to Yetus since some 
teams are having trouble grasping this concept.  If you want to test your patch 
on Yetus, open a jira under HADOOP and run it there.
