Just an FYI: the split of hadoop-hdfs into client and server artifacts is going to break things.
I know that, as my code is broken: DFSConfigKeys is off the path, and HdfsConfiguration, the class I've been loading to force pickup of hdfs-site.xml, is missing too. This is because the hadoop-client POM now depends on hadoop-hdfs-client, not hadoop-hdfs, so the things I'm referencing are gone. I'm particularly sad about DFSConfigKeys, as everybody uses it as the one hard-coded source of HDFS constants; HDFS-6566, covering the issue of making it public, has been sitting around for a year. I'm fixing my build by explicitly adding a hadoop-hdfs dependency.

Any application which used classes that have now been declared server-side isn't going to compile any more, which does appear to break the compatibility guidelines we've adopted, specifically "The hadoop-client artifact (maven groupId:artifactId) stays compatible within a major release":

http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Compatibility.html#Build_artifacts

We need to do one of:

1. Agree that this change is considered acceptable according to policy, and mark it as incompatible in hdfs/CHANGES.TXT.
2. Change the POMs so that hadoop-client pulls in both hadoop-hdfs-client and the hadoop-hdfs server artifact, with downstream users free to exclude the server code.

We unintentionally caused similar grief with the move of the s3n client to hadoop-aws, HADOOP-11074, something we should have picked up and -1'd. This time we know the problem's going to arise, so let's explicitly make a decision, and share it with our users.

-steve
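PS: for anyone else hit by this, here's roughly the dependency I added to get my build compiling again. A sketch only: it assumes your project defines a `hadoop.version` property matching the rest of your Hadoop dependencies; adjust to taste.

```xml
<!-- Pull the server-side HDFS classes (DFSConfigKeys, HdfsConfiguration)
     back onto the classpath, alongside the slimmed-down hadoop-client. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <version>${hadoop.version}</version>
</dependency>
```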